1 Introduction

The failure probability is a critical index for assessing the safety of a complex engineering structure. In the past decades, static reliability analysis models (also known as time-independent reliability analysis models) and the corresponding efficient algorithms have been well researched. The reliability analysis models mainly include the probabilistic model, the non-probabilistic model (Wang and Matthies 2019) and the hybrid model (Xiao et al. 2019; Wang et al. 2017a; Wang and Matthies 2020). This paper concerns the probabilistic model. Mature algorithms for estimating the failure probability have been developed, including analytical methods (Keshtegar and Chakraborty 2018; Huang et al. 2018), sampling-based methods (Yun et al. 2018; Geyer et al. 2019; Grooteman 2008), moment-based methods (Zhao and Ono 2001; Liu et al. 2020; Zhang and Pandey 2013), information criterion-based methods (Lim et al. 2016; Zhong and You 2015; Amalnerkar et al. 2020), and surrogate model-based methods (Zhang et al. 2019; Echard et al. 2011; Hong et al. 2021; Xiao et al. 2020; Yun et al. 2019). The classical analytical methods include the first order reliability method (FORM) (Keshtegar and Chakraborty 2018) and the second order reliability method (Huang et al. 2018). The sampling-based methods include the Monte Carlo simulation (MCS) method, the importance sampling method (Yun et al. 2018; Geyer et al. 2019), the adaptive radial-based importance sampling (ARBIS) method (Grooteman 2008), etc. The moment-based methods are mainly divided into two categories, i.e., the integral moment-based methods (Zhao and Ono 2001; Liu et al. 2020) and the fractional moment-based methods (Zhang and Pandey 2013). The information criterion-based methods include the Akaike information criterion-based method (Lim et al. 2016), the Bayesian information criterion-based method (Zhong and You 2015) and the Bootstrap information criterion-based method (Amalnerkar et al. 2020).
The research orientations of the surrogate model-based methods include efficient learning functions (Zhang et al. 2019), the types of surrogate models (Echard et al. 2011), sampling schemes (Hong et al. 2021) and applications to multiple failure modes (Xiao et al. 2020; Yun et al. 2019). The traditional static reliability analysis methods do not consider time-dependent uncertainties such as stochastic process loads and material deterioration. Therefore, the actual results over the full life cycle may differ from those predicted by static reliability analysis. To overcome this shortcoming of conventional static reliability analysis, time-dependent reliability analysis models have been widely researched in recent years. Time-dependent reliability measures the ability of a structure to fulfill its function over a period of time (Wang et al. 2017b; Li et al. 2020). The mathematical models for analyzing the time-dependent reliability and the time-dependent failure probability are shown in Eqs. (1) and (2).

$$ P_{{\text{r}}} (t_{0} ,t_{e} ) = \Pr \left\{ {g({\varvec{X}},{\varvec{Y}}(t),t) > 0,\forall t \in [t_{0} ,t_{e} ]} \right\} $$
(1)
$$ P_{{\text{f}}} (t_{0} ,t_{e} ) = \Pr \left\{ {g({\varvec{X}},{\varvec{Y}}(t),t) \le 0,\exists t \in [t_{0} ,t_{e} ]} \right\} $$
(2)

where \(\Pr \{ \cdot \}\) denotes the probability operator, \({\varvec{X}} = [X_{1} ,X_{2} ,\ldots,X_{n} ]\) denotes the \(n\)-dimensional vector of random input variables, \({\varvec{Y}}(t) = [Y_{1} (t),Y_{2} (t),\ldots,Y_{m} (t)]\) denotes the \(m\)-dimensional vector of input stochastic processes, \(t\) denotes the time parameter, \(G(t) = g({\varvec{X}},{\varvec{Y}}(t),t)\) denotes the time-dependent limit state function, \([t_{0} ,t_{e} ]\) is the predefined time interval of interest, “\(\forall\)” means “for all” and “\(\exists\)” means “there exists”.
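By Eq. (2), a brute-force estimate of the time-dependent failure probability discretizes the time interval and checks whether the limit state function drops to or below zero at any time instant. A minimal sketch is given below; the limit state function, distribution parameters and sample sizes are illustrative assumptions, not values from this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy limit state for illustration only: g(X, t) = 5 - X1*t - X2,
# with X1, X2 assumed normal random inputs.
def g(x, t):
    return 5.0 - x[:, 0:1] * t - x[:, 1:2]

N, Nt = 100_000, 50
X = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(N, 2))
t_grid = np.linspace(0.0, 2.0, Nt)       # discretized time interval [t0, te]

G = g(X, t_grid[None, :])                # limit state values, shape (N, Nt)
failed = (G.min(axis=1) <= 0)            # Eq. (2): there exists t with g <= 0
Pf_hat = failed.mean()                   # crude MCS estimate of Pf(t0, te)
```

Running the full MCS this way requires `N * Nt` limit state evaluations, which is the cost the surrogate methods discussed below try to avoid.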

The uncertain inputs of structures may implicitly or explicitly involve the time parameter. As a result, the output of the structure becomes a more complicated stochastic process through the propagation of uncertainties. How to efficiently estimate the time-dependent failure probability is a pivotal problem in engineering applications. To handle this problem, researchers have studied two kinds of methods, i.e., the first-passage-based methods (Andrieu-Renaud et al. 2004; Sudret 2008; Singh et al. 2010; Hu and Du 2012, 2013a; Jiang et al. 2019; Li et al. 2007) and the extreme value-based methods (Zhou et al. 2017; Hu and Du 2013b, 2015; Du 2014; Zhang et al. 2014; Li et al. 2019; Wang and Wang 2015; Lu et al. 2020; Hu and Mahadevan 2016; Wang and Chen 2016; Feng et al. 2019). The first-passage-based methods regard the probability that the out-crossing event occurs for the first time within the period of interest as the corresponding time-dependent failure probability. To estimate the out-crossing rate, Andrieu-Renaud et al. (2004) proposed the PHI2 method by combining the FORM with a parallel static reliability model. Based on the classical PHI2 method, Sudret (2008) developed a more stable enhanced method. Singh et al. (2010) integrated importance sampling into the first-passage-based method. Due to the assumptions of independence and Poisson-distributed out-crossings, the first-passage-based methods may yield low-fidelity results. Joint out-crossing rate-based methods (Hu and Du 2012, 2013a) were then developed to address the strong dependence of the out-crossing events. However, the first-passage-based methods may still produce inaccurate results for problems with nonlinear responses and multimodal properties (Jiang et al. 2019).

The extreme value-based methods avoid the assumptions of the first-passage-based methods. They equivalently define the time-dependent failure probability as the probability that the minimum value of the concerned model output falls below its predefined threshold within the time interval of interest. The extreme value-based methods thus build a bridge between time-dependent and time-independent reliability analysis, so that methods developed for time-independent reliability analysis can be readily transferred to the time-dependent setting. Li et al. (2007) developed the probability density evolution method to approximate the extreme value distribution. Zhou et al. (2017) used the probability density evolution method to assess the time-dependent system reliability. Hu and Du (2013b) proposed a sampling approach to approximate the extreme value distribution. Du (2014) proposed the envelope function-based method. Zhang et al. (2014) introduced the maximum entropy approach to approximate the distribution of the extreme value. Li et al. (2019) extended subset simulation to the estimation of high-dimensional time-dependent failure probabilities. Besides, surrogate-based methods have gained much attention since the response function is approximated by a surrogate model with a small number of calls to the real limit state function. Wang and Wang (2015) proposed the double-loop nested surrogate method. Subsequently, Lu et al. (2020) proposed a moving extremum surrogate method. The outer loop constructs a Kriging model of the extreme value function over the predefined time interval with respect to the stochastic inputs, while the inner loop builds a series of one-dimensional Kriging models with respect to the time parameter to identify the extreme time for each outer training sample of stochastic inputs.
Hu and Du (2015) developed mixed efficient global optimization and adaptive sampling strategies to improve the efficiency of identifying the extreme values and to reduce the number of training samples in the outer Kriging model. As Hu and Mahadevan (2016) remarked, the double-loop nested surrogate method has two main drawbacks. On the one hand, the accuracy of finding the extreme time influences the accuracy of the outer surrogate model of the extreme value function. On the other hand, finding the extreme time in the inner loop requires a large number of calls to the real limit state function, especially for problems with stochastic processes over a long time period. Hu and Mahadevan (2016) therefore proposed a single-loop Kriging (SILK) surrogate method for analyzing the time-dependent failure probability, in which the global optimization used to find the extreme value is avoided. Based on the idea of the SILK surrogate method, Wang and Chen (2016) combined the equivalent stochastic process transformation with the Kriging model to efficiently analyze the time-dependent failure probability with stochastic process variables. Besides, Feng et al. (2019) used the extended support vector regression to estimate the time-dependent failure probability. The SILK surrogate method constructs a single surrogate model with respect to the random inputs and the time parameter. Its candidate sampling pool (CSP) consists of the MCS samples; each random sample of the input variables must be combined with all discrete time instants, which leads to a tremendously large CSP, especially for small time-dependent failure probabilities (generally smaller than \(10^{ - 3}\)). In each adaptive iteration of updating the Kriging model, all MCS samples of the inputs combined with all discrete time instants in the CSP must be evaluated by the current Kriging model to find the next best training sample to be added to the current training sample set.
It therefore takes much time and memory to find each next best training sample and to judge whether the Kriging model satisfies the convergence condition. The aim of this paper is to reduce the training burden of the original SILK surrogate method from the viewpoint of reduction and stratification of the MCS-CSP. To achieve this aim, the ARBIS method (Grooteman 2008; Yun et al. 2020) is employed, in which the optimal hypersphere is searched step by step. Samples inside the optimal hypersphere are directly recognized as safe and removed from the CSP. Besides, the MCS-CSP constructed from the samples outside the optimal hypersphere is divided into several sub-CSPs by the in-process hyperspheres. Embedding ARBIS into the SILK surrogate method thus offers two advantages: first, the overall size of the CSP is reduced; second, the SILK surrogate model is constructed sequentially over each small sub-CSP.

Thus, the main contributions of this paper are summarized as follows: (1) an enhanced SILK surrogate method is proposed, which can greatly reduce the learning time of the Kriging model for assessing the time-dependent failure probability; (2) a modified strategy for determining the \(U\) learning function value of each candidate sample is proposed, which exploits the most easily identifiable failure instant within the predefined time period to accelerate the convergence of updating the Kriging model.

The rest of this paper is organized as follows. Section 2 briefly reviews the original SILK surrogate method for analyzing the time-dependent failure probability. Section 3 elaborately introduces the proposed ARBIS enhanced SILK surrogate method. Section 4 analyzes a mathematical problem, a hydrokinetic turbine blade structure, and a turbine blade structure to demonstrate the efficiency and accuracy of the enhanced SILK surrogate method. Section 5 summarizes the conclusions of this paper.

2 The original SILK surrogate method for analyzing the time-dependent failure probability

2.1 Karhunen–Loeve expansion of stochastic processes

By using the Karhunen–Loeve (K–L) expansion (Huang et al. 2007), a stochastic process can be approximately expressed as a combination of independent random variables \({\varvec{\varepsilon}}\) and the time parameter \(t\). For a stochastic process \(Y_{j} (t)\), the K–L expansion is expressed as follows,

$$ Y_{j} (t) = \mu_{{Y_{j} }} (t) + \sigma_{{Y_{j} }} (t)\sum\limits_{i = 1}^{{n_{ej} }} {\sqrt {\lambda_{i} } \varepsilon_{i} } f_{i} (t) $$
(3)

where \(\mu_{{Y_{j} }} (t)\) and \(\sigma_{{Y_{j} }} (t)\) are the mean and standard deviation of the stochastic process, \(\varepsilon_{i} (i = 1,2,\ldots,n_{ej} )\) are mutually independent standard normal variables, \(\lambda_{i} (i = 1,2,\ldots,n_{ej} )\) and \(f_{i} (t)(i = 1,2,\ldots,n_{ej} )\) are the eigenvalues and eigenfunctions of the covariance function of the stochastic process \(Y_{j} (t)\), and \(n_{ej}\) is the number of expansion terms utilized to represent the stochastic process.

After the K–L expansion, the time-dependent limit state function \(g({\varvec{X}},{\varvec{Y}}(t),t)\) is approximated by \(g({\varvec{X}},{\varvec{\varepsilon}},t)\), where both \({\varvec{X}}\) and \({\varvec{\varepsilon}}\) are random variables. In the SILK surrogate method, the Kriging model is built directly for \(g({\varvec{X}},{\varvec{Y}}(t),t)\) by K–L expansion-based stochastic process sampling instead of for \(g({\varvec{X}},{\varvec{\varepsilon}},t)\), because \(g({\varvec{X}},{\varvec{\varepsilon}},t)\) involves high-dimensional random inputs.
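Eq. (3) can be sketched numerically by eigendecomposing the discretized covariance matrix of the process. In the sketch below, the exponential autocovariance kernel, correlation length, mean and standard deviation are all assumed for illustration and do not come from this paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed example: stationary process with exponential autocorrelation
# rho(t1, t2) = exp(-|t1 - t2| / l); mean 10, std 2 (illustrative values).
t = np.linspace(0.0, 5.0, 200)
l = 1.0
C = np.exp(-np.abs(t[:, None] - t[None, :]) / l)

# Discrete K-L: eigendecomposition of the correlation matrix,
# sorted by decreasing eigenvalue.
lam, phi = np.linalg.eigh(C)
idx = np.argsort(lam)[::-1]
lam, phi = lam[idx], phi[:, idx]

n_e = 10                                  # number of retained K-L terms n_ej
eps = rng.standard_normal(n_e)            # independent standard normal eps_i
mu_Y, sigma_Y = 10.0, 2.0
# One sample path of Y_j(t) per Eq. (3):
Y = mu_Y + sigma_Y * (phi[:, :n_e] * np.sqrt(lam[:n_e])) @ eps
```

Sampling new realizations only requires drawing a fresh `eps`, which is how the CSP of stochastic process samples is generated in Step 2.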

2.2 The single-loop Kriging surrogate method

The basic principle of the SILK surrogate method (Hu and Mahadevan 2016) is to establish a Kriging model \(g_{K} ({\varvec{X}},{\varvec{Y}}(t),t)\) and to carry out the time-dependent failure probability analysis with \(g_{K} ({\varvec{X}},{\varvec{Y}}(t),t)\). The concrete steps are summarized as follows.

Step 1: Generate the initial training sample set of \({\varvec{X}}\), \({\varvec{Y}}(t)\) and \(t\), i.e.,

$$ {\varvec{S}}_{0} = \left[ {\begin{array}{*{20}c} {{\varvec{x}}^{(1)} } & {{\varvec{\varepsilon}}^{(1)} } & {t^{(1)} } \\ \vdots & \vdots & \vdots \\ {{\varvec{x}}^{{(N_{0} )}} } & {{\varvec{\varepsilon}}^{{(N_{0} )}} } & {t^{{(N_{0} )}} } \\ \end{array} } \right] \to \text{Eq}.\,(3) \to \left[ {\begin{array}{*{20}c} {{\varvec{x}}^{(1)} } & {{\varvec{y}}^{(1)} } & {t^{(1)} } \\ \vdots & \vdots & \vdots \\ {{\varvec{x}}^{{(N_{0} )}} } & {{\varvec{y}}^{{(N_{0} )}} } & {t^{{(N_{0} )}} } \\ \end{array} } \right] $$
(4)

where \(N_{0}\) is the number of initial training samples, \({\varvec{x}}^{(i)} = \left\{ {x_{1}^{(i)} ,x_{2}^{(i)} ,\ldots,x_{n}^{(i)} } \right\}\), \({\varvec{\varepsilon}}^{(i)} = \left\{ {{\varvec{\varepsilon}}_{1}^{(i)} ,{\varvec{\varepsilon}}_{2}^{(i)} ,\ldots,{\varvec{\varepsilon}}_{m}^{(i)} } \right\}\), \({\varvec{\varepsilon}}_{i}^{(j)} = \left[ {\varepsilon_{i,1}^{(j)} ,\varepsilon_{i,2}^{(j)} ,\ldots,\varepsilon_{{i,n_{ei} }}^{(j)} } \right]\) represents the jth sample of the standard normal variable vector in the ith stochastic process variable, \(\varepsilon_{i,k}^{(j)} (i = 1,2,\ldots,m;k = 1,2,\ldots,n_{ei} ;j = 1,\ldots,N_{0} )\) is a standard normal variable, \(n_{ei}\) is the number of expansion terms employed to represent the ith stochastic process variable, and \({\varvec{y}}^{(i)} = \left\{ {y_{1}^{(i)} ,y_{2}^{(i)} ,\ldots,y_{m}^{(i)} } \right\}\) is obtained by substituting \({\varvec{\varepsilon}}^{(i)}\) and \(t^{(i)}\) into Eq. (3), i.e., \({\varvec{y}}^{(i)} = {\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(i)} )\).

The limit state function values of all samples in matrix \({\varvec{S}}_{0}\) are evaluated by the time-dependent limit state function \(g({\varvec{X}},{\varvec{Y}}(t),t)\). Then, the initial training sample set \({\varvec{T}}\) is constructed as \({\varvec{T}} = \left\{ {[({\varvec{x}}^{(1)} ,{\varvec{y}}^{(1)} ,t^{(1)} ),g({\varvec{x}}^{(1)} ,{\varvec{y}}^{(1)} ,t^{(1)} )],\ldots,[({\varvec{x}}^{{(N_{0} )}} ,{\varvec{y}}^{{(N_{0} )}} ,t^{{(N_{0} )}} ),g({\varvec{x}}^{{(N_{0} )}} ,{\varvec{y}}^{{(N_{0} )}} ,t^{{(N_{0} )}} )]} \right\}\).
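As a sketch, constructing \({\varvec{T}}\) amounts to pairing each initial sample tuple with its limit state function value. The toy limit state function, dimensions and distributions below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical limit state for illustration (not from the paper):
# scalar y stands for the single stochastic process value y(eps, t).
def g(x, y, t):
    return 4.0 - x[0] - y * np.sin(t)

N0 = 12                                     # number of initial training samples
X0 = rng.normal(size=(N0, 2))               # samples x^(i)
Y0 = rng.normal(loc=1.0, size=N0)           # y^(i) = y(eps^(i), t^(i)) via Eq. (3)
T0 = rng.uniform(0.0, 5.0, size=N0)         # samples t^(i)

# T pairs each input tuple with its (expensive) limit state evaluation.
T = [((X0[i], Y0[i], T0[i]), g(X0[i], Y0[i], T0[i])) for i in range(N0)]
```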

Step 2: Generate the MCS-CSP of random inputs and time parameter, i.e.,

$$ {\varvec{S}}_{{\varvec{X\varepsilon }}} = \left[ {\begin{array}{*{20}c} {{\varvec{x}}^{(1)} } & {{\varvec{\varepsilon}}^{(1)} } \\ \vdots & \vdots \\ {{\varvec{x}}^{(N)} } & {{\varvec{\varepsilon}}^{(N)} } \\ \end{array} } \right] $$
(5)
$$ {\varvec{S}}_{t} = \left[ {t^{(1)} ,t^{(2)} ,\ldots,t^{{(N_{t} )}} } \right]^{\text{T}} $$
(6)

where \({\varvec{S}}_{{\varvec{X\varepsilon }}}\) is the sample matrix of random variables \({\varvec{X}}\) and \({\varvec{\varepsilon}}\), \(N\) is the number of MCS samples and \({\varvec{S}}_{t}\) is the sample set of time parameter by discretizing the predefined time interval \([t_{0} ,t_{e} ]\) into \(N_{t}\) time instants.

Step 3: Construct the Kriging model \(g_{K} ({\varvec{X}},{\varvec{Y}}(t),t)\) by feeding the training sample set \({\varvec{T}}\) into the DACE toolbox (Nielsen and DACE 2007). Then, the Kriging prediction model is obtained by Eq. (7). For the theory of the Kriging model, refer to Refs. (Nielsen and DACE 2007; Kersaudy et al. 2015).

$$ g_{K} ({\varvec{X}},{\varvec{Y}}(t),t)\sim N\left( {\mu_{{g_{K} }} ({\varvec{X}},{\varvec{Y}}(t),t),\sigma_{{g_{K} }}^{2} ({\varvec{X}},{\varvec{Y}}(t),t)} \right) $$
(7)

where \(N( \cdot , \cdot )\) represents the normal distribution with mean \(\mu_{{g_{K} }} ({\varvec{X}},{\varvec{Y}}(t),t)\) and standard deviation \(\sigma_{{g_{K} }}^{{}} ({\varvec{X}},{\varvec{Y}}(t),t)\).

Step 4: Find the next best training sample. According to the properties of the Kriging model, the probability of accurately judging the sign of \(g({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} )\) by the current Kriging model is reflected by the following U learning function (Echard et al. 2011),

$$ U({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} {) = }\frac{{{|}\mu_{{g_{K} }} ({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} ){|}}}{{\sigma_{{g_{K} }}^{{}} ({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} )}} $$
(8)

where \(\Phi (U({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} {))}\) is the probability of correct sign prediction of the limit state function at sample \(({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} {)}\).

Echard et al. (2011) suggest that if \(U({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} ) \ge 2\), the sign of the limit state function at the sample \(({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} )\) can be regarded as accurately identified. Then, the indicator function of the failure domain is determined by the following equation, i.e.,

$$I_{FK} ({\varvec{x}}^{(i)} ,{\varvec{\varepsilon}}^{(i)} ) = \left\{ \begin{gathered} 1, \quad \text{if }\mu_{{g_{K} }} ({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} ) < 0 \, \text{and} \, U({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} {)} \ge 2, \exists j = 1,2,\ldots,N_{t} \hfill \\ 0,\quad \text{if }\mu_{{g_{K} }} ({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} ) > 0 \, \text{and} \, U({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} {)} \ge 2, \forall j = 1,2,\ldots,N_{t} \hfill \\ \end{gathered} \right.$$
(9)

Hu and Mahadevan (2016) defined the following \(U\) learning function of random inputs, i.e.,

$$ U_{{\varvec{X\varepsilon }}} ({\varvec{x}}^{(i)} ,{\varvec{\varepsilon}}^{(i)} ) = \left\{ {\begin{array}{*{20}l} {u_{e} ,} \hfill & {\text{if}\;\mu_{{g_{K} }} ({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} ) < 0\;\text{and}\;U({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} {)} \ge 2,\;\exists j = 1,2,\ldots,N_{t} } \hfill \\ {\mathop {\min }\limits_{{j = 1,2,\ldots,N_{t} }} \left\{ {U({\varvec{x}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} {)}} \right\},} \hfill & {{\text{otherwise}}} \hfill \\ \end{array} } \right. $$
(10)

where \(u_{e}\) is any number such that \(u_{e} > 2\).

If \(U_{{\varvec{X\varepsilon }}} ({\varvec{x}}^{(i)} ,{\varvec{\varepsilon}}^{(i)} ) \ge 2\), it can be assumed that the state (failure or safety) of the sample \(({\varvec{x}}^{(i)} ,{\varvec{\varepsilon}}^{(i)} )\) is correctly identified by the Kriging model \(g_{K} ({\varvec{X}},{\varvec{Y}}(t),t)\). Therefore, if the minimum of \(U_{{\varvec{X\varepsilon }}} ({\varvec{x}},{\varvec{\varepsilon}})\) over the \(N\) samples of the MCS-CSP is less than 2, the states of the samples with \(U_{{\varvec{X\varepsilon }}} < 2\) cannot be identified by the current Kriging model. Then, a new training sample point should be added to the training sample set to update the current Kriging model and make it more accurate. The new training sample point is identified by Eqs. (11) to (13),

$$ ({\varvec{x}}^{(new)} ,{\varvec{\varepsilon}}^{(new)} ) = \arg \mathop {\min }\limits_{i = 1,2,\ldots,N} \left\{ {U_{{\varvec{X\varepsilon }}} ({\varvec{x}}^{(i)} ,{\varvec{\varepsilon}}^{(i)} )} \right\} $$
(11)
$$ t^{(new)} = \arg \mathop {\min }\limits_{{t \in {\varvec{S}}_{t} }} \left\{ {U({\varvec{x}}^{(new)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(new)} ,t),t)} \right\} $$
(12)
$$ {\varvec{y}}^{(new)} = {\varvec{y}}({\varvec{\varepsilon}}^{(new)} ,t^{(new)} ) $$
(13)

Then, the limit state function value at \(({\varvec{x}}^{(new)} ,{\varvec{y}}^{(new)} ,t^{(new)} )\) is evaluated by \(g({\varvec{x}}^{(new)} ,{\varvec{y}}^{(new)} ,t^{(new)} )\) and the training sample set is updated by the following formula, i.e., \({\varvec{T}} = {\varvec{T}} \cup \left\{ {[({\varvec{x}}^{(new)} ,{\varvec{y}}^{(new)} ,t^{(new)} ),g({\varvec{x}}^{(new)} ,{\varvec{y}}^{(new)} ,t^{(new)} )]} \right\}\). The traditional and classical stopping criterion is \(\mathop {\min }\limits_{i = 1,2,\ldots,N} U_{{\varvec{X\varepsilon }}} ({\varvec{x}}^{(i)} ,{\varvec{\varepsilon}}^{(i)} ) \ge 2\) (Echard et al. 2011), while the maximum relative error-based stopping criterion (Hu and Mahadevan 2016) can also be employed. If the stopping criterion is satisfied, go to Step 5. Otherwise, return to Step 3.
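The learning functions of Eqs. (8) and (10) and the selection of the new candidate in Eq. (11) reduce to simple array operations. In the sketch below, the Kriging means and standard deviations are randomly generated stand-ins (a real run would obtain them from the Kriging predictor), and `u_e = 10.0` is an arbitrary value greater than 2:

```python
import numpy as np

def U(mu, sigma):
    # Eq. (8): U learning function of a Kriging prediction.
    return np.abs(mu) / sigma

def U_X_eps(mu, sigma, u_e=10.0):
    # Eq. (10): learning value of each candidate (x, eps) over all Nt
    # time instants; mu and sigma have shape (N, Nt).
    u_vals = U(mu, sigma)
    # Failure confidently identified at some time instant ("exists j"):
    confident_fail = ((mu < 0) & (u_vals >= 2.0)).any(axis=1)
    out = u_vals.min(axis=1)                # otherwise: min over time instants
    out[confident_fail] = u_e               # u_e > 2 marks identified failures
    return out

# Stand-in Kriging predictions for 1000 candidates at 20 time instants:
rng = np.random.default_rng(2)
mu = rng.normal(size=(1000, 20))
sigma = np.abs(rng.normal(size=(1000, 20))) + 0.1
u = U_X_eps(mu, sigma)
i_new = int(np.argmin(u))                   # Eq. (11): next best candidate index
```

The stopping criterion `u.min() >= 2` then corresponds to the classical condition cited above.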

Step 5: Estimate the time-dependent failure probability and its coefficient of variation (COV) using the current Kriging model \(g_{K} ({\varvec{X}},{\varvec{Y}}(t),t)\), i.e.,

$$ \hat{P}_{f} (t_{0} ,t_{e} ) = \frac{{\sum\nolimits_{i = 1}^{N} {I_{FK} ({\varvec{x}}^{(i)} ,{\varvec{\varepsilon}}^{(i)} )} }}{N} $$
(14)
$$ COV_{{\hat{P}_{f} (t_{0} ,t_{e} )}} = \sqrt {\frac{{1 - \hat{P}_{f} (t_{0} ,t_{e} )}}{{(N - 1)\hat{P}_{f} (t_{0} ,t_{e} )}}} $$
(15)

If the condition \(COV_{{\hat{P}_{f} (t_{0} ,t_{e} )}} \le 5\%\) is satisfied, output the time-dependent failure probability \(\hat{P}_{f} (t_{0} ,t_{e} )\) and its COV. Otherwise, enlarge the sample matrix \({\varvec{S}}_{{\varvec{X\varepsilon }}}\) and return to Step 4.
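Eqs. (14) and (15) are straightforward to compute from the indicator values on the CSP; the sketch below uses a synthetic indicator vector purely for illustration:

```python
import numpy as np

def failure_probability_and_cov(I_FK):
    # Eqs. (14)-(15): MCS estimate of the time-dependent failure
    # probability and its coefficient of variation, from the indicator
    # values I_FK(x^(i), eps^(i)) on the N candidate samples.
    I_FK = np.asarray(I_FK, dtype=float)
    N = I_FK.size
    Pf = I_FK.sum() / N
    cov = np.sqrt((1.0 - Pf) / ((N - 1) * Pf))
    return Pf, cov

# Synthetic example: 100 failures among 10,000 candidate samples.
I_demo = np.zeros(10_000)
I_demo[:100] = 1.0
Pf_demo, cov_demo = failure_probability_and_cov(I_demo)
```

Note that for `Pf` around \(10^{-3}\) or smaller, keeping the COV below 5% forces \(N\) to be very large, which is exactly why the CSP becomes so expensive to sweep at every update.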

It can be seen that, to find the next best training sample point in the original SILK surrogate method, the limit state function values of the \(N \times N_{t}\) samples must be predicted by the Kriging model at each updating step, which consumes much time and memory, especially when assessing small time-dependent failure probabilities. In this regard, Sect. 3 elaborately introduces the proposed ARBIS enhanced SILK surrogate method.

3 The ARBIS enhanced SILK surrogate method for estimating the time-dependent failure probability

The main idea of ARBIS is to search for the optimal hypersphere sequentially (Grooteman 2008). All samples inside the optimal hypersphere fall into the safe domain, and hence the limit state function values of these samples do not need to be evaluated once the optimal hypersphere is found. Therefore, by embedding ARBIS into the SILK surrogate method, the whole MCS-CSP can be reduced and partitioned simultaneously, and the Kriging model is sequentially updated from one sub-CSP to the next. On the one hand, the total size of the CSP involved in each updating process of the Kriging model is reduced. On the other hand, the participating CSP shrinks because samples inside the optimal hypersphere do not need to participate in the learning process of the Kriging model. The ARBIS method was originally proposed for estimating the time-independent failure probability; this paper explores how to extend the ARBIS strategy to the existing SILK surrogate method. The basic steps of ARBIS-based time-independent failure probability analysis are briefly summarized in the appendix.

3.1 The basic theory of the ARBIS enhanced SILK surrogate method

According to Eq. (A7), the ARBIS-based computational formula for estimating the time-dependent failure probability is expressed by Eq. (16).

$$ \begin{gathered} \hat{P}_{f} (t_{0} ,t_{e} ) = \frac{{\sum\nolimits_{j = 1}^{m} {N_{F}^{(j)} } }}{N} \hfill \\ = \sum\nolimits_{j = 1}^{m} {\frac{{N^{(j)} }}{N} \cdot \frac{{N_{F}^{(j)} }}{{N^{(j)} }}} \hfill \\ \end{gathered} $$
(16)

where all MCS samples of \({\varvec{X}}\) are transformed into the standard normal space and denoted as \({\varvec{u}}\), \(N\) denotes the number of MCS samples, \(N^{(j)}\) denotes the number of samples in the jth subdomain, \(m\) denotes the number of subdomains, and \(N_{F}^{(j)}\) denotes the number of failure samples in the jth subdomain \(D_{{\beta_{j} }}\), where \(D_{{\beta_{j} }} = \left\{ {{\varvec{u}}|\;||{\varvec{u}}|| \ge \beta_{j} } \right\}\) if \(j = 1\) and \(D_{{\beta_{j} }} = \left\{ {{\varvec{u}}|\;\beta_{j - 1} > ||{\varvec{u}}|| \ge \beta_{j} } \right\}\) otherwise.

Let \(P_{{D_{{\beta_{j} }} }}\) denote the probability of \({\varvec{u}}\) falling in \(D_{{\beta_{j} }}\), i.e., \(P_{{D_{{\beta_{j} }} }} = N^{(j)} /N\), and \(P_{{f|D_{{\beta_{j} }} }}\) denote the conditional time-dependent failure probability given that \({\varvec{u}}\) belongs to the subdomain \(D_{{\beta_{j} }}\), i.e., \(P_{{f|D_{{\beta_{j} }} }} = N_{F}^{(j)} /N^{(j)}\). Then, Eq. (16) can be equivalently expressed as follows,

$$ \hat{P}_{f} (t_{0} ,t_{e} ) = \sum\nolimits_{j = 1}^{m} {P_{{D_{{\beta_{j} }} }} } \cdot P_{{f|D_{{\beta_{j} }} }} $$
(17)

From Eq. (17), it can be seen that the SILK surrogate method is carried out \(m\) times to estimate the time-dependent failure probability. In each subdomain \(D_{{\beta_{j} }}\), the number of candidate samples used to find the sequentially contributive training samples and to carry out the failure probability analysis is \(N^{(j)}\), which is smaller than \(N\). The smaller number of candidate samples in each iteration correspondingly saves substantial training time when updating the Kriging model. Furthermore, the samples inside the hypersphere of radius \(\beta_{m}\) (the radius of the optimal hypersphere) are safe, and their states (failure or safety) do not need to be identified by the Kriging model. Therefore, \(\sum\nolimits_{j = 1}^{m} {N^{(j)} } \le N\), and the equality \(\sum\nolimits_{j = 1}^{m} {N^{(j)} } = N\) almost never holds because the radius of the optimal hypersphere is almost never zero for engineering applications with small failure probabilities. That is to say, the proposed enhanced SILK surrogate method reduces not only the size of the CSP in each updating process of the Kriging model but also the size of the whole MCS-CSP used to analyze the time-dependent failure probability. The smaller CSP in each iteration saves substantial learning time when updating the Kriging model, especially for estimating small time-dependent failure probabilities. The subdomains divided by the hyperspheres of the ARBIS method are shown in Fig. 1 for the sake of intuitive illustration.
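Eq. (17) simply recombines per-subdomain counts, and it reproduces the plain MCS ratio \(\sum_j N_F^{(j)} / N\). A minimal sketch with made-up counts:

```python
import numpy as np

def stratified_pf(N_sub, NF_sub, N):
    # Eq. (17): Pf = sum_j P_D(beta_j) * P_{f|D(beta_j)}, with
    # P_D = N^(j)/N and P_{f|D} = N_F^(j)/N^(j).
    N_sub = np.asarray(N_sub, dtype=float)
    NF_sub = np.asarray(NF_sub, dtype=float)
    return float(np.sum((N_sub / N) * (NF_sub / N_sub)))

# Hypothetical counts: two subdomains outside the optimal hypersphere,
# out of N = 100,000 total MCS samples (samples inside are all safe).
Pf_strat = stratified_pf([5000, 2000], [40, 3], N=100_000)
```

Because the samples inside the optimal hypersphere never appear in any subdomain, the Kriging model only ever sweeps the \(\sum_j N^{(j)}\) retained candidates rather than all \(N\).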

Fig. 1
figure 1

The stratified domains of candidate samples in the enhanced SILK surrogate method

3.2 The implementation of ARBIS enhanced SILK surrogate method for estimating the time-dependent failure probability

The concrete steps of estimating the time-dependent failure probability by the proposed enhanced SILK surrogate method are summarized as follows. The corresponding flowchart is shown in Fig. 2.

Fig. 2
figure 2

Flowchart of the proposed enhanced SILK surrogate method for time-dependent reliability analysis

Step 1: Generate MCS samples of input variables and time parameter. First, use the equivalent probability transformation to convert the random samples of input variables into the standard normal space, i.e.,

$$ {\varvec{S}}_{{\varvec{x}}} = \left[ {\begin{array}{*{20}c} {x_{1}^{(1)} } & {x_{2}^{(1)} } & \cdots & {x_{n}^{(1)} } \\ {x_{1}^{(2)} } & {x_{2}^{{({2})}} } & \cdots & {x_{n}^{{({2})}} } \\ \vdots & \vdots & \ddots & \vdots \\ {x_{1}^{(N)} } & {x_{2}^{(N)} } & \cdots & {x_{n}^{(N)} } \\ \end{array} } \right] {\mathop \rightarrow \limits ^{{u_{i} = \Phi^{ - 1} (F_{{X_{i} }} (x_{i} ))}}} {\varvec{S}}_{u} = \left[ {\begin{array}{*{20}c} {u_{1}^{(1)} } & {u_{2}^{(1)} } & \cdots & {u_{n}^{(1)} } \\ {u_{1}^{(2)} } & {u_{2}^{{({2})}} } & \cdots & {u_{n}^{{({2})}} } \\ \vdots & \vdots & \ddots & \vdots \\ {u_{1}^{(N)} } & {u_{2}^{(N)} } & \cdots & {u_{n}^{(N)} } \\ \end{array} } \right] $$
(18)

where \({\varvec{S}}_{{\varvec{x}}}\) is the sample matrix of model inputs \({\varvec{X}}\), \({\varvec{S}}_{u}\) is the corresponding sample matrix of the standard normal variables, \(F_{{X_{i} }} ( \cdot )\) is the cumulative distribution function (CDF) of \(X_{i}\), and \(\Phi^{ - 1} ( \cdot )\) is the inverse CDF of the standard normal variable.
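The marginal transformation \(u_{i} = \Phi^{-1} (F_{X_{i}} (x_{i} ))\) in Eq. (18) can be sketched with SciPy's distribution routines; the lognormal marginal and its parameters below are assumed for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Assumed example marginal: X_i lognormal with underlying mu=0.5, sigma=0.3.
x = rng.lognormal(mean=0.5, sigma=0.3, size=10_000)

# Eq. (18): u_i = Phi^{-1}(F_Xi(x_i)) maps the samples to standard normal space.
F_x = stats.lognorm.cdf(x, s=0.3, scale=np.exp(0.5))   # CDF values of X_i
u = stats.norm.ppf(F_x)                                 # standard normal samples
```

After the transformation, distances \(||{\varvec{u}}||\) are meaningful radii in standard normal space, which is what the ARBIS hyperspheres are defined on.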

Second, generate MCS samples of the stochastic process variables \({\varvec{Y}}(t)\) if the problem involves stochastic process variables, i.e.,

$$ {\varvec{S}}_{{\varvec{\varepsilon}}} = \left[ {\begin{array}{*{20}c} {{\varvec{\varepsilon}}_{1}^{(1)} } & {{\varvec{\varepsilon}}_{2}^{(1)} } & \cdots & {{\varvec{\varepsilon}}_{m}^{(1)} } \\ {{\varvec{\varepsilon}}_{1}^{(2)} } & {{\varvec{\varepsilon}}_{2}^{(2)} } & \cdots & {{\varvec{\varepsilon}}_{m}^{(2)} } \\ \vdots & \vdots & \ddots & \vdots \\ {{\varvec{\varepsilon}}_{1}^{(N)} } & {{\varvec{\varepsilon}}_{2}^{(N)} } & \cdots & {{\varvec{\varepsilon}}_{m}^{(N)} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {{\varvec{\varepsilon}}^{(1)} } \\ {{\varvec{\varepsilon}}^{(2)} } \\ \vdots \\ {{\varvec{\varepsilon}}^{(N)} } \\ \end{array} } \right] $$
(19)

Third, discretize the concerned time interval into \(N_{t}\) time instants, i.e.,

$$ {\varvec{S}}_{t} = \left[ {t^{(1)} ,t^{(2)} ,\ldots,t^{{(N_{t} )}} } \right]^{\text{T}} $$
(20)

Fourth, combine \({\varvec{S}}_{{\varvec{u}}}\) and \({\varvec{S}}_{{\varvec{\varepsilon}}}\) to obtain the matrix \({\varvec{S}}_{{\varvec{u\varepsilon }}}\), i.e.,

$$ {\varvec{S}}_{{\varvec{u\varepsilon }}} = \left[ {\begin{array}{*{20}c} {u_{1}^{(1)} } & {u_{2}^{(1)} } & \cdots & {u_{n}^{(1)} } \\ {u_{1}^{(2)} } & {u_{2}^{{({2})}} } & \cdots & {u_{n}^{{({2})}} } \\ \vdots & \vdots & \ddots & \vdots \\ {u_{1}^{(N)} } & {u_{2}^{(N)} } & \cdots & {u_{n}^{(N)} } \\ \end{array} \begin{array}{*{20}c} {{\varvec{\varepsilon}}_{1}^{(1)} } & {{\varvec{\varepsilon}}_{2}^{(1)} } & \cdots & {{\varvec{\varepsilon}}_{m}^{(1)} } \\ {{\varvec{\varepsilon}}_{1}^{(2)} } & {{\varvec{\varepsilon}}_{2}^{(2)} } & \cdots & {{\varvec{\varepsilon}}_{m}^{(2)} } \\ \vdots & \vdots & \ddots & \vdots \\ {{\varvec{\varepsilon}}_{1}^{(N)} } & {{\varvec{\varepsilon}}_{2}^{(N)} } & \cdots & {{\varvec{\varepsilon}}_{m}^{(N)} } \\ \end{array} } \right] $$
(21)

Step 2: Construct the initial training samples set. Randomly select \(N_{0} \ll N\) samples from \({\varvec{S}}_{{\varvec{u\varepsilon }}}\) and \({\varvec{S}}_{t}\) respectively to construct the initial training sample set \({\varvec{T}}\), i.e.,

$$ \left[ {\begin{array}{*{20}c} {{\varvec{u}}^{(1)} } & {{\varvec{\varepsilon}}^{(1)} } & {t^{(1)} } \\ \vdots & \vdots & \vdots \\ {{\varvec{u}}^{{(N_{0} )}} } & {{\varvec{\varepsilon}}^{{(N_{0} )}} } & {t^{{(N_{0} )}} } \\ \end{array} } \right] \to \text{Eq}.\,(3) \to \left[ {\begin{array}{*{20}c} {{\varvec{u}}^{(1)} } & {{\varvec{y}}({\varvec{\varepsilon}}^{(1)} ,t^{(1)} )} & {t^{(1)} } \\ \vdots & \vdots & \vdots \\ {{\varvec{u}}^{{(N_{0} )}} } & {{\varvec{y}}({\varvec{\varepsilon}}^{{(N_{0} )}} ,t^{{(N_{0} )}} )} & {t^{{(N_{0} )}} } \\ \end{array} } \right] $$
(22)

where \({\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(i)} )\) is determined by substituting \(({\varvec{\varepsilon}}^{(i)} ,t^{(i)} )\) into Eq. (3).

The initial training sample set \({\varvec{T}}\) is constructed as follows,

$$ {\varvec{T}} = \bigcup\nolimits_{i = 1}^{{N_{0} }} {\left\{ {[({\varvec{u}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(i)} ),t^{(i)} ),g({\varvec{u}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(i)} ),t^{(i)} )]} \right\}} $$
(23)
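The construction of the initial training set in Eqs. (22)–(23) can be sketched as follows. The functions `y` and `g` below are hypothetical stand-ins for the stochastic-process reconstruction of Eq. (3) and for the real limit state function, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def y(eps, t):
    # Hypothetical stand-in for Eq. (3): reconstruct the stochastic-process
    # realization from its expansion variables eps at time t.
    return float(np.sum(eps)) * np.cos(t)

def g(u, y_val, t):
    # Hypothetical stand-in for the time-dependent limit state function.
    return 3.0 + u[0] - y_val + 0.1 * t

N, N0, n, m = 1000, 12, 2, 3
S_ue = rng.standard_normal((N, n + m))
S_t = np.linspace(0.0, 5.0, 50)

# Randomly pick N0 << N joint samples and pair each with a random time instant.
idx = rng.choice(N, size=N0, replace=False)
T = []
for i in idx:
    u, eps = S_ue[i, :n], S_ue[i, n:]
    t = rng.choice(S_t)
    yv = y(eps, t)
    # Eq. (23): store the inputs together with the exact response.
    T.append(((u, yv, t), g(u, yv, t)))
```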

Step 3: Initialize the parameters of ARBIS. Set \(k = 1\), \({\varvec{S}}_{{\varvec{A}}}^{(k)} = {\varvec{S}}_{{\varvec{u\varepsilon }}}\) and \(\beta = \beta_{k}\), where \(\beta\) is the radius of the current hypersphere. \(\beta_{1}\) can be determined by Eq. (A3) and can also be adjusted to guarantee that there are samples outside the \(\beta_{{1}}\)-hypersphere.

Step 4: Determine the kth sub-CSP \({\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(k)}\). Select the samples outside the \(\beta\)-hypersphere from matrix \({\varvec{S}}_{{\varvec{A}}}^{(k)}\) and put these samples into matrix \({\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(k)}\), i.e.,

$$ {\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(k)} = \mathop {\arg }\limits_{{{\varvec{u}} \in {\varvec{S}}_{{\varvec{A}}}^{(k)} }} (||{\varvec{u}}|| > \beta_{k} ) $$
(24)

If \({\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(k)}\) is empty, turn to Step 10. Otherwise, proceed to the next step.
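Eq. (24) is simply a norm filter over the current candidate pool. A sketch, with an illustrative pool and radius:

```python
import numpy as np

rng = np.random.default_rng(2)
S_A = rng.standard_normal((1000, 5))   # current candidate pool in standard normal space
beta = 2.0                             # radius of the current hypersphere

# Eq. (24): keep only the samples whose Euclidean norm exceeds beta.
norms = np.linalg.norm(S_A, axis=1)
S_A_outer = S_A[norms > beta]

empty = S_A_outer.shape[0] == 0        # if True, the algorithm jumps to Step 10
```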

Step 5: Construct the Kriging model of \(g({\varvec{u}},{\varvec{Y}}(t),t)\). The Kriging model \(g_{K} ({\varvec{u}},{\varvec{Y}}(t),t)\sim N(\mu_{{g_{K} }} ({\varvec{u}},{\varvec{Y}}(t),t),\sigma_{{g_{K} }}^{2} ({\varvec{u}},{\varvec{Y}}(t),t))\) is obtained by feeding the current training sample set \({\varvec{T}}\) to the DACE toolbox (Nielsen and DACE 2007).

Step 6: Update the training sample set \({\varvec{T}}\).

Step 6.1: Calculate the modified learning function values of the candidate samples by the proposed learning function. For a time-dependent structure, if the limit state function value is less than zero at any time instant within the time interval of interest, the structure is regarded as failed; if the limit state function value remains larger than zero throughout the time interval, the structure is regarded as safe. Therefore, for a safe structure, the safe states at all time instants need to be accurately identified, whereas for a failed structure, only one failed time instant needs to be accurately identified. Accordingly, the modified learning function determining the \(U\) value of sample \(({\varvec{u}}^{(i)} ,{\varvec{\varepsilon}}^{(i)} )\) is given as follows,

$$ U_{{\varvec{u\varepsilon }}}^{R} ({\varvec{u}}^{(i)} ,{\varvec{\varepsilon}}^{(i)} ) = \left\{ \begin{aligned} &\mathop {\max }\limits_{{j = \widetilde{1},\widetilde{2},\ldots,\widetilde{p}}} \left\{ {U({\varvec{u}}^{(i)} ,{\varvec{\varepsilon}}^{(i)} ,t^{(j)} )} \right\}, \quad \text{if }\mu_{{g_{K} }} ({\varvec{u}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} ) < 0\;\exists j = 1,2,\ldots,N_{t} \\ &\mathop {\min }\limits_{{j = 1,2,\ldots,N_{t} }} \left\{ {U({\varvec{u}}^{(i)} ,{\varvec{\varepsilon}}^{(i)} ,t^{(j)} )} \right\}, \quad \text{otherwise} \end{aligned} \right. $$
(25)

where \((\widetilde{1},\widetilde{2},\ldots,\widetilde{p})\) denotes the index set of the time instants with \(\mu_{{g_{K} }} ({\varvec{u}}^{(i)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} ) < 0\), and \(U({\varvec{u}}^{(i)} ,{\varvec{\varepsilon}}^{(i)} ,t^{(j)} )\) is calculated by Eq. (26),

$$ U({\varvec{u}}^{(i)} ,{\varvec{\varepsilon}}^{(i)} ,t^{(j)} {)} = \frac{{|\mu_{{g_{K} }} ({\varvec{u}}^{(i)} ,{\varvec{y}}\text{(}{\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} )|}}{{\sigma_{{g_{K} }} ({\varvec{u}}^{(i)} ,{\varvec{y}}\text{(}{\varvec{\varepsilon}}^{(i)} ,t^{(j)} ),t^{(j)} )}} $$
(26)
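The modified learning function of Eqs. (25)–(26) can be sketched for a single candidate sample, given the Kriging mean and standard deviation predicted at the discretized time instants (the numeric predictions below are illustrative):

```python
import numpy as np

def U_values(mu, sigma):
    # Eq. (26): U = |mu| / sigma at every discretized time instant.
    return np.abs(mu) / sigma

def U_modified(mu, sigma):
    # Eq. (25): if the Kriging mean predicts failure (mu < 0) at any time
    # instant, only one failed instant needs to be identified reliably, so
    # take the max of U over the predicted-failure instants; otherwise every
    # instant must be classified as safe, so take the min over all instants.
    U = U_values(mu, sigma)
    fail = mu < 0
    if np.any(fail):
        return float(np.max(U[fail]))
    return float(np.min(U))

# Example: Kriging predictions over Nt = 5 time instants for one sample.
mu = np.array([1.2, 0.4, -0.3, -1.1, 0.8])
sigma = np.array([0.5, 0.5, 0.5, 0.5, 0.5])
u_r = U_modified(mu, sigma)
```

Taking the maximum over the predicted-failure instants picks the most confidently failed instant, since one reliably identified failure suffices to classify the whole trajectory.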

Step 6.2: Identify a new training sample. First, find the sample point \(({\varvec{u}},{\varvec{\varepsilon}})\) with the minimum value of \(U_{{\varvec{u\varepsilon }}}^{R}\), i.e.,

$$ ({\varvec{u}}^{(I)} ,{\varvec{\varepsilon}}^{(I)} ) = \arg \mathop {\min }\limits_{{({\varvec{u}},{\varvec{\varepsilon}}) \in {\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(k)} }} U_{{\varvec{u\varepsilon }}}^{R} ({\varvec{u}},{\varvec{\varepsilon}}) $$
(27)

where the corresponding time instant is determined by

$$ t^{(I)} = \left\{ \begin{aligned} & \arg \mathop {\max }\limits_{{j = \widetilde{1},\widetilde{2},\ldots,\widetilde{p}}} \left\{ {U({\varvec{u}}^{(I)} ,{\varvec{\varepsilon}}^{(I)} ,t^{(j)} )} \right\}, \quad \text{if }\,\mu_{{g_{K} }} ({\varvec{u}}^{(I)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(I)} ,t^{(j)} ),t^{(j)} ) < 0\;\exists j = 1,2,\ldots,N_{t} \\ &\arg \mathop {\min }\limits_{{j = 1,2,\ldots,N_{t} }} \left\{ {U({\varvec{u}}^{(I)} ,{\varvec{\varepsilon}}^{(I)} ,t^{(j)} )} \right\}, \quad \text{otherwise} \end{aligned} \right. $$
(28)

Then, the new training point is determined as \(({\varvec{u}}^{(I)} ,{\varvec{y}}({\varvec{\varepsilon}}^{(I)} ,t^{(I)} ),t^{(I)} )\).

Step 6.3: Judge whether the training sample set \({\varvec{T}}\) needs to be updated. If \(U_{{\varvec{u\varepsilon }}}^{R} ({\varvec{u}}^{(I)} ,{\varvec{\varepsilon}}^{(I)} ) \ge 2\), proceed to the next step. Otherwise, update the training sample set \({\varvec{T}}\) by Eq. (29),

$$ {\varvec{T}} = {\varvec{T}} \cup \left\{ {[({\varvec{u}}^{(I)} ,{\varvec{y}}^{(I)} ,t^{(I)} ),g({\varvec{u}}^{(I)} ,{\varvec{y}}^{(I)} ,t^{(I)} )]} \right\} $$
(29)

where \({\varvec{y}}^{(I)} = {\varvec{y}}({\varvec{\varepsilon}}^{(I)} ,t^{(I)} )\). Then, turn to Step 5.

Step 7: Predict the states (failure or safety) of all samples in matrix \({\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(k)}\). By using the current Kriging model \(g_{K} ({\varvec{u}},{\varvec{Y}}(t),t)\), the states of all samples in matrix \({\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(k)}\) are predicted by Eq. (30),

$$I_{FK} ({\varvec{u}}^{(w)} ,{\varvec{\varepsilon}}^{(w)} ) = \left\{ \begin{aligned}& 1, \quad \text{ if }\,\mu_{{g_{K} }} ({\varvec{u}}^{(w)} ,{\varvec{y}}\text{(}{\varvec{\varepsilon}}^{(w)} ,t^{(j)} ),t^{(j)} ) < 0 \exists j = 1,2,\ldots,N_{t} \hfill \\ &0,\quad \text{otherwise} \hfill \\ \end{aligned} \right. (w = 1,2,\ldots,N_{{{\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(k)} }} )$$
(30)

where \(N_{{{\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(k)} }}\) denotes the number of samples in \({\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(k)}\).

Count the samples in matrix \({\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(k)}\) satisfying \(I_{FK} ({\varvec{u}}^{(w)} ,{\varvec{\varepsilon}}^{(w)} ) = 1\) and put these samples into \({\varvec{S}}_{F}^{(k)}\). Let \(N_{F}^{(k)}\) denote the number of failure samples in matrix \({\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(k)}\). If \(N_{F}^{(k)}\) equals zero, turn to Step 10. Otherwise, proceed to the next step.
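The classification and counting of Eq. (30) can be sketched with a hypothetical stand-in for the Kriging mean predictor \(\mu_{g_K}\):

```python
import numpy as np

rng = np.random.default_rng(3)

def kriging_mean(u_eps, t):
    # Hypothetical stand-in for the Kriging mean mu_gK in Eq. (30).
    return 2.5 - float(np.linalg.norm(u_eps)) + 0.2 * np.sin(t)

S_A_outer = rng.standard_normal((200, 5))
S_t = np.linspace(0.0, 5.0, 20)

# Eq. (30): a sample is predicted as failed if the Kriging mean is negative
# at any of the discretized time instants.
I_FK = np.array([any(kriging_mean(s, t) < 0 for t in S_t) for s in S_A_outer])
S_F = S_A_outer[I_FK]
N_F = int(I_FK.sum())                  # if zero, the algorithm jumps to Step 10
```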

Step 8: Determine the radius \(\beta_{k + 1}\) of the next hypersphere.

Step 8.1: First, the failure sample \(({\varvec{u}}^{(F)} ,{\varvec{\varepsilon}}^{(F)} )\) with the maximum value of the joint PDF in matrix \({\varvec{S}}_{F}^{(k)}\) is selected by Eq. (31),

$$ ({\varvec{u}}^{(F)} ,{\varvec{\varepsilon}}^{(F)} ) = \arg \mathop {\max }\limits_{{({\varvec{u}},{\varvec{\varepsilon}}) \in {\varvec{S}}_{F}^{(k)} }} \varvec{\varphi }({\varvec{u}},{\varvec{\varepsilon}}) $$
(31)

where \(\varvec{\varphi }({\varvec{u}},{\varvec{\varepsilon}})\) is the joint PDF of \({\varvec{u}}\) and \({\varvec{\varepsilon}}\).

The radius of the next hypersphere is determined by solving Eq. (32), i.e.,

$$ \mathop {\min }\limits_{{t \in [t_{0} ,t_{e} ]}} g(\beta_{k + 1} \frac{{{\varvec{u}}^{(F)} }}{{||({\varvec{u}}^{(F)} ,{\varvec{\varepsilon}}^{(F)} )||}},{\varvec{y}}(\beta_{k + 1} \frac{{{\varvec{\varepsilon}}^{(F)} }}{{||({\varvec{u}}^{(F)} ,{\varvec{\varepsilon}}^{(F)} )||}},t),t) = 0 $$
(32)

Step 8.2: Solve Eq. (32) by the dichotomy and the Kriging model. The solution to Eq. (32) is the boundary between \(\mathop {\min }\limits_{{t \in [t_{0} ,t_{e} ]}} g({\varvec{u}},{\varvec{y}}(t),t) > 0\) and \(\mathop {\min }\limits_{{t \in [t_{0} ,t_{e} ]}} g({\varvec{u}},{\varvec{y}}(t),t) < 0\) along the direction of the vector \(({\varvec{u}}^{(F)} ,{\varvec{\varepsilon}}^{(F)} )\). Therefore, the dichotomy combined with the adaptive Kriging model can be used to efficiently find the \(\beta_{k + 1}\)-hypersphere. The detailed steps are summarized as follows.

Step 8.2.1: Initialize the parameters of the dichotomy. Set \(a = 0\), \(b = ||({\varvec{u}}^{(F)} ,{\varvec{\varepsilon}}^{(F)} )||\) and \(l = 0\). For a prescribed dichotomy accuracy \(E_{rr}\), the minimum number \(l^{\prime}\) of bisections is determined by

$$ l^{^{\prime}} \ge \frac{{\lg (b - a) - \lg E_{rr} }}{\lg 2} $$
(33)
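Eq. (33) fixes the number of bisections from the required accuracy; a direct transcription:

```python
import math

def min_bisections(a, b, err):
    # Eq. (33): minimum number of bisections so that the bracketing
    # interval [a, b] shrinks below the target accuracy err.
    return math.ceil((math.log10(b - a) - math.log10(err)) / math.log10(2))

l_min = min_bisections(0.0, 8.0, 0.01)
```

For example, shrinking an interval of length 8 below \(E_{rr} = 0.01\) requires 10 bisections, since \(8/2^{10} \approx 0.0078 < 0.01\).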

Step 8.2.2: Estimate the sign of \(\mathop {\min }\limits_{{t \in [t_{0} ,t_{e} ]}} g\left\{ {\left( {\frac{a + b}{2}} \right)\frac{{{\varvec{u}}^{(F)} }}{{||({\varvec{u}}^{(F)} ,{\varvec{\varepsilon}}^{(F)} )||}},{\varvec{y}}(\left( {\frac{a + b}{2}} \right)\frac{{{\varvec{\varepsilon}}^{(F)} }}{{||({\varvec{u}}^{(F)} ,{\varvec{\varepsilon}}^{(F)} )||}},t),t} \right\}\). Use the current Kriging model \(g_{K} ({\varvec{u}},{\varvec{Y}}(t),t)\sim N(\mu_{{g_{K} }} ({\varvec{u}},{\varvec{Y}}(t),t),\sigma_{{g_{K} }}^{2} ({\varvec{u}},{\varvec{Y}}(t),t))\) to estimate the value of \(U_{{\varvec{u\varepsilon }}}^{R} (\overline{{\varvec{u}}} ,\overline{{\varvec{\varepsilon}}} )\) where \(\overline{{\varvec{u}}} = \left( {\frac{a + b}{2}} \right)\frac{{{\varvec{u}}^{(F)} }}{{||({\varvec{u}}^{(F)} ,{\varvec{\varepsilon}}^{(F)} )||}}\) and \(\overline{{\varvec{\varepsilon}}} = \left( {\frac{a + b}{2}} \right)\frac{{{\varvec{\varepsilon}}^{(F)} }}{{||({\varvec{u}}^{(F)} ,{\varvec{\varepsilon}}^{(F)} )||}}\), i.e.,

$$ U_{{\varvec{u\varepsilon }}}^{R} (\overline{{\varvec{u}}} ,\overline{{\varvec{\varepsilon}}} ) = \left\{ \begin{aligned} &\mathop {\max }\limits_{{j = \widetilde{1},\widetilde{2},\ldots,\widetilde{p}}} \left\{ {U(\overline{{\varvec{u}}} ,\overline{{\varvec{\varepsilon}}} ,t^{(j)} )} \right\}, \quad \text{if }\mu_{{g_{K} }} (\overline{{\varvec{u}}} ,{\varvec{y}}(\overline{{\varvec{\varepsilon}}} ,t^{(j)} ),t^{(j)} ) < 0\;\exists j = 1,2,\ldots,N_{t} \\ &\mathop {\min }\limits_{{j = 1,2,\ldots,N_{t} }} \left\{ {U(\overline{{\varvec{u}}} ,\overline{{\varvec{\varepsilon}}} ,t^{(j)} )} \right\}, \quad \text{otherwise} \end{aligned} \right. $$
(34)

If \(U_{{\varvec{u\varepsilon }}}^{R} (\overline{{\varvec{u}}} ,\overline{{\varvec{\varepsilon}}} ) < 2\), find the time instant \(t^{(I)}\) by Eq. (35),

$$ t^{(I)} = \left\{ \begin{aligned} &\arg \mathop {\max }\limits_{{j = \widetilde{1},\widetilde{2},\ldots,\widetilde{p}}} \left\{ {U(\overline{{\varvec{u}}} ,\overline{{\varvec{\varepsilon}}} ,t^{(j)} {)}} \right\},\quad \text{if }\mu_{{g_{K} }} (\overline{{\varvec{u}}} ,{\varvec{y}}\text{(}\overline{{\varvec{\varepsilon}}} ,t^{(j)} ),t^{(j)} ) < 0 \exists j = 1,2,\ldots,N_{t} \hfill \\ &\arg \mathop {\min }\limits_{{j = 1,2,\ldots,N_{t} }} \left\{ {U(\overline{{\varvec{u}}} ,\overline{{\varvec{\varepsilon}}} ,t^{(j)} )} \right\}, \quad \text{otherwise} \hfill \\ \end{aligned} \right. $$
(35)

and then the training sample set \({\varvec{T}}\) is updated, i.e.,

$$ {\varvec{T}} = {\varvec{T}} \cup \left\{ {[(\overline{{\varvec{u}}} ,{\varvec{y}}(\overline{{\varvec{\varepsilon}}} ,t^{(I)} ),t^{(I)} ),g(\overline{{\varvec{u}}} ,{\varvec{y}}(\overline{{\varvec{\varepsilon}}} ,t^{(I)} ),t^{(I)} )]} \right\} $$
(36)

Reconstruct the Kriging model \(g_{K} ({\varvec{u}},{\varvec{Y}}(t),t)\) using the current training sample set \({\varvec{T}}\) and turn to the beginning of Step 8.2.2.

If \(U_{{\varvec{u\varepsilon }}}^{R} (\overline{{\varvec{u}}} ,\overline{{\varvec{\varepsilon}}} ) \ge 2\), the sign of \(\mathop {\min }\limits_{{t \in [t_{0} ,t_{e} ]}} g(\overline{{\varvec{u}}} ,{\varvec{y}}(\overline{{\varvec{\varepsilon}}} ,t),t)\) is estimated by Eq. (37), i.e.,

$$ I_{FK} (\overline{{\varvec{u}}} ,\overline{{\varvec{\varepsilon}}} ) = \left\{ \begin{aligned} &1, \quad \text{ if }\mu_{{g_{K} }} (\overline{{\varvec{u}}} ,{\varvec{y}}\text{(}\overline{{\varvec{\varepsilon}}} ,t^{(j)} ),t^{(j)} ) < 0 \exists j = 1,2,\ldots,N_{t} \hfill \\ &0, \quad \text{otherwise} \hfill \\ \end{aligned} \right. $$
(37)

and then proceed to the next step.

Step 8.2.3: Update the parameters of the dichotomy. If \(I_{FK} (\overline{{\varvec{u}}} ,\overline{{\varvec{\varepsilon}}} ) = 1\), set \(b = (a + b)/2\); otherwise, set \(a = (a + b)/2\). If \(l \ge \text{ceil}\left[ {\frac{{\lg (||({\varvec{u}}^{(F)} ,{\varvec{\varepsilon}}^{(F)} )||) - \lg E_{rr} }}{\lg 2}} \right]\), where \(\text{ceil}(X)\) rounds \(X\) up to the smallest integer greater than or equal to \(X\), proceed to the next step. Otherwise, set \(l = l + 1\) and turn to Step 8.2.2.

Step 8.2.4: Obtain the radius of the next hypersphere. The radius of the next hypersphere is determined by \(\beta_{k + 1} = (a + b)/2\).
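Steps 8.2.1–8.2.4 form a standard bisection along the ray through the selected failure sample. The sketch below abstracts away the Kriging-assisted sign estimate of Eqs. (34)–(37): `min_g_over_time` is a hypothetical closed-form stand-in whose sign change lies at radius 3, so the recovered \(\beta_{k+1}\) should be close to 3.

```python
import math
import numpy as np

def min_g_over_time(x):
    # Hypothetical stand-in for min_{t in [t0,te]} g(u, y(eps,t), t): the
    # worst-case response is negative (failure) exactly when ||x|| > 3.
    return 3.0 - float(np.linalg.norm(x))

def next_radius(x_F, err=1e-3):
    # Steps 8.2.1-8.2.4: bisect along the ray through the failure sample
    # x_F = (u^(F), eps^(F)); the sign change of min_t g on this ray gives
    # the next hypersphere radius beta_{k+1}.
    a, b = 0.0, float(np.linalg.norm(x_F))
    direction = x_F / b
    for _ in range(math.ceil(math.log2((b - a) / err))):  # Eq. (33)
        mid = 0.5 * (a + b)
        if min_g_over_time(mid * direction) < 0:  # predicted failure: shrink b
            b = mid
        else:                                     # predicted safe: raise a
            a = mid
    return 0.5 * (a + b)

beta_next = next_radius(np.array([4.0, 3.0]))     # failure sample with norm 5
```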

Step 9: Update the parameters of ARBIS. Set \(\beta = \beta_{k + 1}\), \({\varvec{S}}_{{\varvec{A}}}^{(k + 1)} = {\varvec{S}}_{{\varvec{A}}}^{(k)} - {\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(k)}\) and \(k = k + 1\). Then, turn to Step 4.

Step 10: Estimate the time-dependent failure probability. The time-dependent failure probability and its COV are estimated by Eqs. (38) and (39) respectively, i.e.,

$$ \hat{P}_{f} (t_{0} ,t_{e} ) = \frac{{\sum\nolimits_{i = 1}^{k - 1} {N_{F}^{(i)} } }}{N} = \sum\nolimits_{i = 1}^{k - 1} {\left( {\frac{{N_{F}^{(i)} }}{{N_{{{\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(i)} }} }} \cdot \frac{{N_{{{\varvec{S}}_{{{\varvec{A}}_{outer} }}^{(i)} }} }}{N}} \right)} $$
(38)
$$ COV_{{\hat{P}_{f} (t_{0} ,t_{e} )}} = \sqrt {\frac{{1 - \hat{P}_{f} (t_{0} ,t_{e} )}}{{N\hat{P}_{f} (t_{0} ,t_{e} )}}} $$
(39)

If \(COV_{{\hat{P}_{f} (t_{0} ,t_{e} )}} \le 5\%\), output \(\hat{P}_{f} (t_{0} ,t_{e} )\) and \(COV_{{\hat{P}_{f} (t_{0} ,t_{e} )}}\). Otherwise, increase \(N\), enlarge the corresponding sample matrix \({\varvec{S}}_{{\varvec{u\varepsilon }}}\), and turn to Step 3.
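Eqs. (38)–(39) and the convergence check can be sketched as follows; the per-subdomain failure counts and \(N\) below are illustrative:

```python
import math

def pf_and_cov(failure_counts, N):
    # Eq. (38): sum the failure counts over the processed sub-CSPs and
    # divide by the total MCS sample size N.
    pf = sum(failure_counts) / N
    # Eq. (39): the usual Monte Carlo coefficient of variation of pf.
    cov = math.sqrt((1.0 - pf) / (N * pf)) if pf > 0 else float("inf")
    return pf, cov

pf, cov = pf_and_cov([120, 45, 12], 100_000)
converged = cov <= 0.05   # otherwise enlarge N and restart from Step 3
```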

From the above procedure, it can be seen that the main contribution of the proposed method is that the MCS-CSP is divided into several sub-CSPs by the hyperspheres involved in the ARBIS method. The MCS samples inside the optimal hypersphere are removed from the participating CSP, and the Kriging model is updated sequentially in each sub-CSP, which saves much training time in finding each next best training sample and thus enhances the efficiency of time-dependent reliability analysis. Besides, the proposed enhanced SILK surrogate method unifies the computation of the time-dependent failure probability and the radii of the hyperspheres. Thus, the proposed enhanced SILK surrogate method can use the adaptive SILK model to find the optimal and intermediate hyperspheres as byproducts.

4 Case studies

In this section, the efficiency and accuracy of the proposed enhanced SILK surrogate method for analyzing the time-dependent failure probability are demonstrated by three case studies. Sobol’s sequence (Sobol 1976, 1998) is chosen to generate the MCS samples of the random inputs because of its high convergence rate. Sobol’s sequence performs best when the sample size \(N\) equals a power of 2, i.e., \(N = 2^{h}\) where \(h\) is a non-negative integer.

Besides the number of calls to the real limit state function, the size of the participating candidate sample pool and the consumed CPU time also reflect the efficiency of the proposed method. We define the ratio between the samples inside the optimal hypersphere and the MCS samples (named the candidate sample reduction ratio) and the CPU time reduction ratio in Eqs. (40) and (41), respectively.

$$ \text{Candidate sample reduction ratio} = \frac{{|N_{csp} (\text{SILK}) - N_{csp} (\text{proposed})|}}{{N_{csp} (\text{SILK})}} $$
(40)
$$ \text{CPU time reduction ratio} = \frac{{|\text{Time}(\text{SILK}) - \text{Time}(\text{proposed})|}}{{\text{Time}(\text{proposed})}} $$
(41)

where \(N_{csp} (\text{SILK})\) represents the size of the MCS sample pool, \(N_{csp} (\text{proposed})\) represents the size of the sample set outside the optimal hypersphere, \(\text{Time}(\text{SILK})\) represents the CPU time for estimating the time-dependent failure probability by the original SILK surrogate method, and \(\text{Time}(\text{proposed})\) represents the CPU time for estimating it by the proposed enhanced SILK surrogate method.
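Eqs. (40)–(41) transcribe directly; as a check, the case-study-I figures of Sect. 4.1 (8192 MCS samples, of which 3022 fall inside the optimal hypersphere) reproduce the reported 36.89% candidate sample reduction ratio:

```python
def candidate_sample_reduction_ratio(n_silk, n_proposed):
    # Eq. (40): fraction of the MCS candidate samples removed from the CSP.
    return abs(n_silk - n_proposed) / n_silk

def cpu_time_reduction_ratio(t_silk, t_proposed):
    # Eq. (41): note the denominator is the proposed method's time, so
    # this ratio can exceed 1 when SILK is more than twice as slow.
    return abs(t_silk - t_proposed) / t_proposed

r_csp = candidate_sample_reduction_ratio(8192, 8192 - 3022)
```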

The candidate sample reduction ratio defined in this paper only reflects the proportion of the samples inside the optimal hypersphere among the total MCS samples, and thus only demonstrates the superiority of the proposed method from the perspective of preventing the samples inside the optimal hypersphere from participating in updating the Kriging model. The CPU time reduction ratio reflects the superiority of the proposed method from three aspects. The first is avoiding a large number of samples inside the optimal hypersphere participating in the updating process of the Kriging model. The second is further reducing the size of the candidate sampling pool in each learning step of the Kriging model by dividing the samples outside the optimal hypersphere into several subdomains and updating the Kriging model sequentially in each subdomain. The third is the reduced number of calls to the real limit state function. Therefore, the CPU time reduction ratio is a comprehensive index, of which the candidate sample reduction ratio reflects only one component.

4.1 Case study I: a mathematical problem

A numerical time-dependent limit state function \(g({\varvec{X}},t)\) is used to test the efficiency of the proposed method, and the expression of \(g({\varvec{X}},t)\) is described as follows (Wang and Wang 2015),

$$ g({\varvec{X}},t) = X_{1}^{2} X_{2} - 5X_{1} t + (X_{2} + 1)t^{2} - C $$
(42)

where \(X_{1}\) and \(X_{2}\) are two normally distributed random variables with mean 3 and standard deviation 0.3, \(t\) is the time variable within \([0,5]\), and \(C\) is a constant. Then, the time-dependent failure probability is defined as

$$ P_{f} (t_{0} ,t_{e} ) = \Pr \left\{ {X_{1}^{2} X_{2} - 5X_{1} t + (X_{2} + 1)t^{2} - C \le 0, \exists t \in [0,5]} \right\} $$
(43)

In this example, two cases are considered: the first sets \(C = 20\) and the second sets \(C = 10\), which lead to time-dependent failure probabilities of different magnitudes. The first case is also analyzed in Wang and Wang (2015) and the corresponding results are shown in Table 1.
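A crude time-discretized MCS sketch of Eq. (43), using the distribution parameters stated above, illustrates that the two thresholds yield failure probabilities of different magnitude. The sample size, seed and time grid below are arbitrary choices, and this brute-force estimate is exactly the cost that SILK-type surrogates are designed to avoid:

```python
import numpy as np

rng = np.random.default_rng(4)

def g(x1, x2, t, C):
    # Eq. (42): the time-dependent limit state function of case study I.
    return x1**2 * x2 - 5.0 * x1 * t + (x2 + 1.0) * t**2 - C

def pf_mcs(C, N=20_000, Nt=101):
    # Time-discretized MCS estimate of Eq. (43): a sample is counted as
    # failed if g <= 0 at any of the Nt instants on [0, 5].
    x1 = rng.normal(3.0, 0.3, N)
    x2 = rng.normal(3.0, 0.3, N)
    t = np.linspace(0.0, 5.0, Nt)
    g_all = g(x1[:, None], x2[:, None], t[None, :], C)
    return float(np.mean(np.any(g_all <= 0.0, axis=1)))

pf_20 = pf_mcs(20.0)   # smaller threshold margin: larger failure probability
pf_10 = pf_mcs(10.0)
```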

Table 1 Results of some compared methods for case study I with C = 20

Table 2 shows the results estimated by the original SILK surrogate method and the proposed enhanced SILK surrogate method with \(C = 20\). The stratified boundaries and the size of each sub-CSP of the input variables in the proposed enhanced SILK surrogate method are shown in Table 3, where 8192 samples of the random inputs are generated. Table 3 also gives the radius of the optimal hypersphere. The size of the MCS sample pool of input variables used in the original SILK surrogate method is 8192, of which 3022 samples fall inside the optimal hypersphere. Therefore, in the proposed enhanced SILK surrogate method, these 3022 samples are removed from the learning process of the Kriging model. Besides, the samples outside the optimal hypersphere are divided into five sub-CSPs. Thus, the size of the CSP in each iteration of updating the Kriging model in the proposed method is much smaller than that of the whole MCS candidate sample pool used in the original SILK surrogate method. The candidate sample reduction ratio and the CPU time reduction ratio are estimated by Eqs. (40) and (41). The results show that, compared with the original SILK surrogate method, the proposed enhanced SILK surrogate method excludes 36.89% of the candidate samples from the Kriging learning process and reduces the computational time by 71.23%.

Table 2 Results of case study I with C = 20
Table 3 The details of the proposed enhanced SILK surrogate method in case study I with parameter C = 20

In the second case, 262,144 samples of the random inputs are generated to estimate the time-dependent failure probability, and the MCS solution is 0.0022. Based on the same 262,144 input samples, the original SILK surrogate method and the proposed enhanced SILK surrogate method are carried out. The results in Table 4 not only confirm the accuracy of the proposed enhanced SILK surrogate method but also show that it saves 99.55% of the computational time compared with the original SILK surrogate method. Table 5 shows the details of the proposed method with \(C = 10\). From Table 5, it can be concluded that 258,441 MCS samples fall inside the optimal hypersphere. Therefore, compared with the original SILK surrogate method, 98.59% of the MCS candidate samples of the random inputs are removed from the participating CSP in the enhanced SILK surrogate method.

Table 4 Results of case study I with C = 10
Table 5 The details of the proposed enhanced SILK surrogate method in case study I with parameter C = 10

By analyzing the two cases, the efficiency and accuracy of the proposed enhanced SILK surrogate method are verified. In addition, the results indicate that, for the same response function with different failure thresholds, the smaller the time-dependent failure probability is, the higher the candidate sample reduction ratio and the CPU time reduction ratio achieved by the proposed method are.

4.2 Case study II: a hydrokinetic turbine blade

As a renewable energy device, a hydrokinetic turbine converts the kinetic energy of flowing water into electrical energy (Hu et al. 2020). The river flow load is a time-dependent stochastic process variable. In this case study, the proposed enhanced SILK surrogate method is utilized to assess the time-dependent failure probability with a stochastic process variable.

Figure 3 shows the simplified hydrokinetic turbine blade and its environmental loads. The river velocity \(V(t)\) is considered as a stochastic process variable. The mean function \(\mu_{v} (t)\), the standard deviation function \(\sigma_{v} (t)\) and the auto-correlation coefficient function \(\rho_{v}\) are given as follows,

$$ \mu_{v} (t) = \sum\limits_{i = 1}^{4} {a_{i}^{m} \sin (b_{i}^{m} t + c_{i}^{m} )} $$
(44)
$$ \sigma_{v} (t) = \sum\limits_{i = 1}^{4} {a_{i}^{s} \exp \left\{ { - \left( {\frac{{t - b_{i}^{s} }}{{c_{i}^{s} }}} \right)^{2} } \right\}} $$
(45)
$$ \rho_{v} = \cos \left[ {2\pi (t_{2} - t_{1} )} \right] $$
(46)

where the constants \(a\), \(b\) and \(c\) are

$$ \begin{gathered} a_{1}^{m} = 3.8150, a_{2}^{m} = 2.5280, a_{3}^{m} = 1.1760, a_{4}^{m} = - 0.0786 \hfill \\ b_{1}^{m} = 0.2895, b_{2}^{m} = 0.5887, b_{3}^{m} = 0.7619, b_{4}^{m} = 2.1830 \hfill \\ c_{1}^{m} = - 0.2668, c_{2}^{m} = 0.9651, c_{3}^{m} = 3.1160, c_{4}^{m} = - 3.1610 \hfill \\ \end{gathered} $$
(47)
$$ \begin{gathered} a_{1}^{s} = 0.7382, a_{2}^{s} = 1.0130, a_{3}^{s} = 1.8750, a_{4}^{s} = 1.2830 \hfill \\ b_{1}^{s} = 6.4560, b_{2}^{s} = 4.0750, b_{3}^{s} = 0.7619, b_{4}^{s} = 1.0350 \hfill \\ c_{1}^{s} = 0.9193, c_{2}^{s} = 1.5610, c_{3}^{s} = 6.9590, c_{4}^{s} = 2.2370 \hfill \\ \end{gathered} $$
(48)
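Eqs. (44)–(46) with the constants of Eqs. (47)–(48) can be evaluated directly; in Eq. (45) the width \(c\) is read as \(c_{i}^{s}\). A sketch:

```python
import math

# Constants of Eqs. (47)-(48).
A_M = [3.8150, 2.5280, 1.1760, -0.0786]
B_M = [0.2895, 0.5887, 0.7619, 2.1830]
C_M = [-0.2668, 0.9651, 3.1160, -3.1610]
A_S = [0.7382, 1.0130, 1.8750, 1.2830]
B_S = [6.4560, 4.0750, 0.7619, 1.0350]
C_S = [0.9193, 1.5610, 6.9590, 2.2370]

def mu_v(t):
    # Eq. (44): sum-of-sines model for the mean river velocity.
    return sum(a * math.sin(b * t + c) for a, b, c in zip(A_M, B_M, C_M))

def sigma_v(t):
    # Eq. (45): sum-of-Gaussians model for the standard deviation; each
    # bump is centered at b_i^s with width c_i^s.
    return sum(a * math.exp(-((t - b) / c) ** 2) for a, b, c in zip(A_S, B_S, C_S))

def rho_v(t1, t2):
    # Eq. (46): auto-correlation coefficient between two time instants.
    return math.cos(2.0 * math.pi * (t2 - t1))
```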
Fig. 3
figure 3

Hydrokinetic turbine blade

The flap wise bending moment created at the blade root is estimated by Eq. (49), i.e.,

$$ M_{flap} = \frac{1}{2}\rho C_{m} v(t)^{2} $$
(49)

where \(\rho = 1000\;\text{kg}{/}\text{m}^{3}\) is the river water density and \(C_{m} = 0.3422\) is the coefficient of moment obtained from the blade element momentum theory.

Thus, the time-dependent limit state function and the corresponding time-dependent failure probability of this hydrokinetic turbine blade are defined by Eqs. (50) and (51) respectively.

$$ g({\varvec{X}},{\varvec{Y}}(t),t) = M_{resist} - M_{flap} = \frac{{\varepsilon_{a} EI}}{{h_{1} }} - \frac{1}{2}\rho C_{m} v(t)^{2} $$
(50)
$$ P_{f} (t_{0} ,t_{e} ) = \Pr \left\{ {\frac{{\varepsilon_{a} EI}}{{h_{1} }} - \frac{1}{2}\rho C_{m} v(t)^{2} \le 0, \exists t \in [0,10] \text{yr}} \right\} $$
(51)

where the Young’s modulus \(E\) is \(14\,\text{GPa}\), the moment of inertia \(I\) at the root of the blade is \((2/3)l_{1} (h_{1}^{3} - h_{2}^{3} )\), and the allowable strain is denoted by \(\varepsilon_{a}\). \(l_{1}\), \(h_{1}\), \(h_{2}\) and \(\varepsilon_{a}\) are mutually independent random variables and their distribution parameters are shown in Table 6.

Table 6 The detailed distribution information of input variables in case study II

The MCS solution of the time-dependent failure probability of this hydrokinetic turbine blade is \(7.4387 \times 10^{ - 4}\) using 524,288 samples of the random inputs. Based on the same 524,288 samples, the original SILK surrogate method needs 69 real limit state function evaluations, whereas the proposed enhanced SILK surrogate method needs only 65. The results are shown in Table 7. Table 8 shows the details of the proposed enhanced SILK surrogate method, which reflects that 364,031 samples of the random inputs are located inside the optimal hypersphere and that the size of each sub-CSP is much smaller than that of the whole MCS-CSP. Therefore, from the perspective of computational time, the proposed enhanced SILK surrogate method uses less time than the original SILK surrogate method. The results in Table 9 show that 71.62% of the samples are located inside the optimal hypersphere and the corresponding CPU time reduction ratio is 96.25%, which demonstrates the efficiency of the proposed method for this hydrokinetic turbine blade with a stochastic process input and non-normal random input variables.

Table 7 Results of case study II
Table 8 The details of the proposed enhanced SILK surrogate method in case study II
Table 9 The candidate sample reduction ratio and CPU time reduction ratio of the proposed enhanced SILK method in case study II

4.3 Case study III: a turbine blade structure

The turbine blade of the aero-engine shown in Fig. 4 bears alternating loads during the working time, and the material performance degrades over time. The angular velocity in the cruise-maximum-cruise state is \(\omega (t) = \omega_{0} + 104 \times \left| {\sin \left( {\pi t/2} \right)} \right|\), where \(\omega_{0}\) is a stochastic variable and \(t\) is the time parameter. The material is the DD6 single-crystal superalloy, whose properties are temperature dependent. The distribution types and distribution parameters of the material properties, including the Young’s modulus, Poisson’s ratio, shear modulus and linear expansion coefficient, are shown in Tables 10, 11, 12 and 13, respectively. The limit state function of the turbine blade structure is defined such that the maximum stress of the turbine blade body does not exceed the threshold value \(S^{thr}\), i.e.,

$$ g({\varvec{X}},t) = S^{thr} - S_{\max } (\rho ,E(T),\nu (T),G(T),\alpha (T),\omega (t)) $$
(52)

where \(S^{thr} = 900\text{e}^{ - 0.015t}\), \(T\) represents the temperature parameter, and the maximum stress \(S_{\max }\) is computed by the finite element model (FEM) in the ABAQUS software. The FEM of the turbine blade structure is shown in Fig. 5. The input variables are listed in Table 14. The relationships of \(E(T)\), \(\nu (T)\), \(G(T)\), \(\alpha (T)\) with \(X_{1}\), \(X_{2}\), \(X_{3}\), \(X_{4}\) are respectively given as follows:

$$ \begin{gathered} E(T) = X_{1} \sigma_{E} (T) + \mu_{E} (T) \hfill \\ \nu (T) = X_{2} \sigma_{\nu } (T) + \mu_{\nu } (T) \hfill \\ G(T) = X_{3} \sigma_{G} (T) + \mu_{G} (T) \hfill \\ \alpha (T) = X_{4} \sigma_{\alpha } (T) + \mu_{\alpha } (T) \hfill \\ \end{gathered} $$
(53)

where \(\mu_{E} (T)\), \(\mu_{\nu } (T)\), \(\mu_{G} (T)\) and \(\mu_{\alpha } (T)\) respectively represent the mean values of the Young’s modulus, Poisson’s ratio, shear modulus and linear expansion coefficient at temperature \(T\), and \(\sigma_{E} (T)\), \(\sigma_{\nu } (T)\), \(\sigma_{G} (T)\) and \(\sigma_{\alpha } (T)\) respectively represent their standard deviations at temperature \(T\).

Fig. 4
figure 4

The geometry of the turbine blade

Table 10 The distribution of Young’s modulus of the DD6 single-crystal superalloy with crystallographic orientation [001] at different temperatures
Table 11 The Poisson’s ratio of DD6 single-crystal superalloy with crystallographic orientation [001] at different temperatures
Table 12 The shear modulus of DD6 single-crystal superalloy with crystallographic orientation [001] at different temperatures
Table 13 The linear expansion coefficient of DD6 single-crystal superalloy with crystallographic orientation [001] at different temperatures
Fig. 5
figure 5

(a) The temperature of the turbine blade, (b) The stress of the turbine blade

Table 14 The distribution information of model input variables

The definition of time-dependent failure probability for this turbine blade is shown as follows,

$$ P_{f} (t_{0} ,t_{e} ) = \Pr \left\{ {900\text{e}^{ - 0.015t} - S_{\max } (\rho ,E(T),\nu (T),G(T),\alpha (T),\omega (t)) \le 0, \exists t \in [0,2\text{h}]} \right\} $$
(54)

To estimate Eq. (54) by the original SILK surrogate method and the proposed enhanced SILK surrogate method, 1,048,576 MCS samples of the model inputs \({\varvec{X}}\) are generated. Table 15 shows the radius of the optimal hypersphere, the size of each sub-CSP, the time-dependent failure probability in each subdomain and the probability of each subdomain involved in the proposed enhanced SILK surrogate method. From Table 15, it can be seen that the number of input samples in each sub-CSP is much smaller than the size of the whole MCS sample pool (1,048,576 input samples). In this regard, much computational time can be saved by the proposed enhanced SILK surrogate method. In addition, the samples inside the optimal hypersphere can be directly regarded as safe samples, and thus can be removed from the adaptive process of updating the Kriging model. Removing a large number of samples from the MCS-CSP not only saves a great deal of learning time but also reduces the number of iterations used to update the Kriging model, because the states (failure or safety) of these samples do not need to be identified by the Kriging model. Table 16 shows the results obtained by the original SILK surrogate method and the proposed enhanced SILK surrogate method. For analyzing this small time-dependent failure probability, the original SILK surrogate method needs 14,218 min, which consists of two parts: the computational time of the FEM analyses (3120 min) and the computational time of finding all sequentially added training samples to adaptively update the Kriging model (11,098 min). The time spent finding all training samples in the original SILK method is thus about 3.6 times that spent on the FEM analyses.
This shows the importance of reducing the number of candidate samples in each iteration for improving the computational efficiency. Under the condition that the computational accuracy of the proposed enhanced SILK surrogate method is consistent with that of the original SILK surrogate method, the proposed method needs 279 FEM analyses, fewer than those used in the original SILK surrogate method. The computational time of the proposed enhanced SILK surrogate method is 2901 min, of which 2790 min are used for the FEM analyses and 111 min for finding all sequentially added training samples. The computational time of finding all sequentially added training samples in the original SILK surrogate method is thus about 100 times that in the proposed enhanced SILK surrogate method, which illustrates the high efficiency of the proposed method. Figure 6 visually shows the time used to find each training sample along the adaptive learning processes of the original SILK surrogate method and the proposed enhanced SILK surrogate method, respectively. Table 17 summarizes the candidate sample reduction ratio and the CPU time reduction ratio of the proposed enhanced SILK surrogate method over the original SILK method, which shows the high efficiency of the proposed method for analyzing the time-dependent failure probability of this FEM-based structural model.

Table 15 The details of the proposed enhanced SILK surrogate method in case study IV
Table 16 Results of case study IV
Fig. 6 The learning time of each iteration

Table 17 The candidate sample reduction ratio and CPU time reduction ratio of the proposed method for case study IV

5 Conclusions

The single-loop Kriging (SILK) surrogate method, which directly constructs the time-dependent limit state function, is more efficient than the nested double-loop surrogate method. However, for a small time-dependent failure probability, many more candidate samples are involved in the current SILK surrogate method because of the large number of combinations of stochastic samples and time samples, which increases the learning time of adaptively updating the Kriging model. In this regard, this paper presents an adaptive radial-based importance sampling (ARBIS) scheme enhanced SILK surrogate method. By finding the optimal hypersphere adaptively, the MCS samples of the stochastic inputs are partitioned into several subsets, and the time-dependent failure probability is estimated by combining the time-dependent failure probabilities of the subdomains. The size of the candidate sampling pool (CSP) for analyzing each subdomain failure probability is reduced compared with the CSP used in the original SILK surrogate method. Because samples inside the optimal hypersphere can be directly regarded as safe without any limit state function evaluations, they can be removed from the learning process of the Kriging model. For a small time-dependent failure probability, the radius of the optimal hypersphere is generally large, and thus many samples can be removed from the CSP. Therefore, embedding ARBIS into the SILK surrogate model reduces the total size of the CSP and stratifies the samples outside the optimal hypersphere into several sub-CSPs. This substantial reduction of candidate samples greatly reduces the learning time of the Kriging model and enhances the efficiency of the SILK surrogate method, especially for estimating a small time-dependent failure probability.
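The partition and stratification step described above can be sketched as follows, assuming a hypothetical two-dimensional problem with inputs transformed to standard-normal space; the sample size and hypersphere radii are illustrative, not those of the case studies:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dim = 200_000, 2                     # hypothetical MCS size and input dimension
u = rng.standard_normal((n, dim))       # MCS samples in standard-normal space
r = np.linalg.norm(u, axis=1)           # radial distance of each sample

# Hypothetical hypersphere radii as the ARBIS search would deliver them;
# radii[0] is the optimal hypersphere inside which every sample is safe.
radii = [2.5, 3.5, 4.5]

inside = r <= radii[0]                  # safe samples, removed from the CSP
edges = radii + [np.inf]
# Stratify the remaining samples into sub-CSPs (annuli between the radii).
sub_csps = [u[(r > edges[i]) & (r <= edges[i + 1])] for i in range(len(radii))]

print(f"removed from CSP: {inside.sum()} / {n}")
for i, s in enumerate(sub_csps):
    print(f"sub-CSP {i}: {len(s)} samples")
```

Because a standard-normal sample rarely falls far from the origin, the innermost hypersphere captures the overwhelming majority of the pool, so only a small fraction of samples ever reaches the adaptive learning stage.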
In addition, solving for the radius of each hypersphere is transformed into the classification of the model output (failure or safety) by a dichotomy (bisection) scheme, which is unified with the time-dependent reliability analysis. Thus, the Kriging model constructed for analyzing the time-dependent failure probability can also be used adaptively to determine the radii of all hyperspheres. To accelerate the convergence of updating the Kriging model, a modified learning function is constructed by selecting the most easily identifiable failure time within the predefined time period. Results of the case studies demonstrate the merits of the proposed enhanced SILK surrogate method.
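The dichotomy on the radius only requires a safe/failure classification at each trial point, which is exactly what the adaptively trained Kriging model supplies. A minimal sketch of the idea, with a hypothetical linear limit state standing in for the Kriging classifier:

```python
import numpy as np

def is_safe(u):
    # Hypothetical limit state standing in for the Kriging classification
    # (time-independent, for illustration): safe while g(u) > 0.
    return 4.0 - u.sum() > 0.0

def radius_by_bisection(direction, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection (dichotomy) on the radius along a search direction.

    Only the sign of the classification is used, so any surrogate that
    labels samples safe/failure can drive the search.
    """
    direction = direction / np.linalg.norm(direction)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_safe(mid * direction):
            lo = mid                    # still safe: push the radius outwards
        else:
            hi = mid                    # failed: pull the radius back
    return 0.5 * (lo + hi)

d = np.array([1.0, 1.0])
print(f"radius along d: {radius_by_bisection(d):.4f}")
```

For the assumed linear limit state, the boundary along the direction (1, 1) lies at a radius of \(4/\sqrt{2}\), which the bisection recovers to the requested tolerance.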

The aim of this paper is to embed ARBIS into the SILK surrogate method and sequentially establish the Kriging model in each subdomain. The boundaries of each subdomain are determined by a line-search scheme (Grooteman 2008). For problems with disconnected and asymmetric failure domains, a global optimization algorithm can be used to search for the hyperspheres. It should be emphasized that the proposed method is not limited to the Kriging model; other mainstream surrogate models for sample classification can also be introduced into the proposed enhanced SILK surrogate method.