
1 Introduction

Analysis of multi-stage degradation and shock processes is a key step in the development and implementation of modern highly reliable technologies. Unsurprisingly, the emergence of new models forces the modernization of research methods for shock models with a changing degradation rate and with soft and hard failures, where the threshold level is natural or predetermined and is generally a random variable [9, 14]. In particular, effective methods for calculating realistic model parameters and system reliability are in demand.

Nevertheless, the naive Monte Carlo method remains popular in a large number of modern works despite its well-known inefficiency; for example, see [6, 7, 11, 17]. For instance, as proposed in [15], a standard solution is based on the stationary distribution of the embedded Markov chain in the simplest cases. Then, for the generalized model, Monte Carlo simulation is used to approximate the cost of maintenance.

At the same time, the models become more complex, and analytical methods are less available. In particular, the Wiener process with independent and normally distributed increments is widely used for non-monotonic degradation. The Gamma process is useful in the stochastic modeling of monotonic and gradual degradation characterized by a sequence of tiny increments, such as wear, fatigue, corrosion, crack growth, erosion, consumption, or a degrading health index.

In addition, more complex models of two-stage or multi-stage degradation, including Gamma-Gamma, Wiener-Wiener, and Gamma-Wiener degradation models, are also considered (for example, see [10, 16]). In this regard, we suggest looking for more effective alternatives to the naive simulation method that can be used to analyze the reliability of modern systems.

For the homogeneous case of exponential degradation stages of a system with gradual and instantaneous failures, analytical formulas were obtained and an advanced simulation technique was proposed in [3].

In [2] a heterogeneous degradation process with exponential stages was investigated analytically. Numerical experiments confirmed that using the standard Monte Carlo method impairs the accuracy of the estimates of the failure probability and other characteristics of the degradation process.

In [1] the variance reduction technique was extended to estimate the probability that a random sum exceeds a random variable V. In [4] a variance reduction technique using a special variant of the conditional Monte Carlo approach was proposed for a heterogeneous degradation process. It was shown by numerical examples that the relative error is bounded and even vanishes when the degradation stage has a heavy-tailed distribution. On the other hand, our experiments show that this method has no advantage for light-tailed stages.

A variance reduction technique based on importance sampling with an exponential change of measure for light-tailed degradation stages was introduced in [5].

All these techniques were tested on a model of the degradation process that describes the thickness of an anti-corrosion coating, described in [3]. Since analytical results can be obtained in the simplest cases, this model is convenient for analyzing the effectiveness of accelerated simulation and variance reduction methods. In addition, the process has a regenerative structure, which is typical for degradation models. The proposed accelerated algorithm can also be extended to more complex models by replacing the procedure for simulating the time spent at the degradation stages.

2 Crude Simulation of the Degradation Process

Following [3], consider the degradation process \(X:=\{X(t),\,t\ge 0\}\) with a finite state space \(E=\{0, 1, \dots , L, \dots , M, \dots , K; F\}\) describing the degradation stages of the system. A two-threshold policy (K, L) is considered, which means that the system is restored at stage K and then proceeds to stage L (see Fig. 1a).

Let \(T_i\) be the transition time from stage i to stage \(i+1\). The random variables (r.v.) \(T_i\) are independent but not necessarily identically distributed. Note that

$$ S_{i,j}:=\sum _{k=i}^{j-1}T_k,\, \,\,\;\;0 \le i \le K-1,\, j>i, $$

is the transition time from state i to state j. We define the following distribution functions (d.f.):

$$\begin{aligned}&F_i(t)=\mathbb {P}(T_i \le t) ;\;\; F_V(t)=\mathbb {P}(V \le t); \;\;\end{aligned}$$
(1)
$$\begin{aligned}&F_{U_F}(t)=\mathbb {P}(U_F \le t); \;\; F_{U_{K,L}}(t)=\mathbb {P}(U_{K,L} \le t);\end{aligned}$$
(2)
$$\begin{aligned}&F_{i,j}(t)=\mathbb {P}(S_{i,j}\le t) = F_{i,\,j-1}*F_{j-1}(t) = \int _{0^-}^{t}F_{i,\,j-1}(t-v)dF_{j-1}(v); \end{aligned}$$
(3)
$$\begin{aligned}&F_{i,i+1}(t) = F_i(t),\;\;\; F_{i,i}(t)\equiv 0, \end{aligned}$$
(4)

where \(*\) denotes convolution. Starting in state \(X(0)=0\), the process successively passes the \(M-1\) intermediate degradation stages and reaches state M. We denote by V a random time after which a failure can occur. Thus, after stage M, either the event \(\{S_{M,K} \ge V\}\) (instantaneous failure) happens during the random period V, or the event \(\{S_{M,K} < V\}\) (the start of the preventive repair stage) occurs after the time

$$ S_{M,K}= \sum _{i=M}^{K-1}T_i. $$

The process X is strongly regenerative with moments

$$ \tau _{n+1} =\inf \{Z_i > \tau _n:\, X(Z_i^+)=M\},\,\,n\ge 0,\,\,\tau _0:=0, $$

where \(Z_k\) is the hitting time of stage \(k \ge 1\), and the cycle lengths \(Y_k= \tau _{k+1} - \tau _k,\,k\ge 1,\) are i.i.d.

There are two types of regeneration cycles for the degradation process: with and without failure (see the illustration in Fig. 1a):

$$\begin{aligned}&Y={\left\{ \begin{array}{ll} Y_F=V+U_F+S_{0,M}, &{} \text { if } S_{M,K} \ge V \\ Y_{NF}=S_{M,K}+U_{K,L}+S_{L,M}, &{}\text { if } S_{M,K} < V, \end{array}\right. } \end{aligned}$$
(5)

where the r.v. V, \(U_{F}\), \(S_{0, M}=\sum _{i=0}^{M-1}T_i\) with known distributions are independent, as are the r.v. \(S_{M,K}\), \(U_{K,L}\), \(S_{L,M}=\sum _{i=L}^{M-1}T_i\). After failure and repair, the process returns to the initial state 0. Thus, the (unconditional) regeneration cycle length Y can be written as

$$\begin{aligned}&Y= Y_F\cdot I_{\{V \le S_{M,K}\}} + Y_{NF}\cdot I_{\{S_{M,K} < V\}}, \end{aligned}$$
(6)

where \(I_A\) denotes the indicator function of the event A. The variable Y plays an important role in the analysis of the degradation process [3].

Fig. 1.

Two splitting schemes: a) by stages b) by the value of \(S_{M,K}\)

The main target is to find the probability of instantaneous failure within the regeneration cycle, that is

$$\begin{aligned} p_F=\mathbb {P}(S_{M,K} \ge V) = \mathsf E[F_V(S_{M,K})], \end{aligned}$$
(7)

where \(F_V\) is the distribution function of the random variable V. But simulation is also necessary for other characteristics, such as the mean lifetime T:

$$ \mathbb {E}[T]=\mathbb {E}[Y_{NF}](\mathbb {E}[N]-1)+\mathbb {E}[V | V \le S_{M,K}], $$

where \(\mathbb {E}[N]=1/p_F\) is the mean number of cycles until complete failure; the mean cycle length \(\mathbb {E}[Y_F]\) with failure or \(\mathbb {E}[Y_{NF}]\) without failure; and the reliability function

$$ R(t)=\mathbb {P}[T>t|X(0)=0],\,\,t\ge 0,\; $$

where T stands for the lifetime of the system.
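Representation (7) suggests a conditional Monte Carlo estimator: simulate only \(S_{M,K}\) and average \(F_V(S_{M,K})\). Below is a minimal Python sketch, written under illustrative assumptions that are ours (exponential stage times and \(V \sim Exp(\nu )\); the function name is hypothetical):

```python
import math
import random

def cond_mc_pF(lams, nu, n=100_000, seed=1):
    """Conditional Monte Carlo estimate of p_F = E[F_V(S_{M,K})], cf. (7).

    Illustrative assumptions: stage times T_M, ..., T_{K-1} ~ Exp(lams[i])
    and V ~ Exp(nu), so that F_V(t) = 1 - exp(-nu * t)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        s_mk = sum(rng.expovariate(lam) for lam in lams)  # one draw of S_{M,K}
        acc += 1.0 - math.exp(-nu * s_mk)                 # F_V(S_{M,K})
    return acc / n
```

Averaging the smooth function \(F_V(S_{M,K})\) instead of the failure indicator removes the variance contributed by V, which is the idea behind the conditional approach of [4].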

Note that, for more complex shock models, simulation allows us to estimate parameters of the model such as the damage threshold, the intensity of random shocks, the critical shock inter-arrival time, the scale and shape parameters in Gamma models, etc.

If the failure is not a rare event, if it is easy to simulate the r.v. V, \(U_F\), \(S_{0,M}\), \(S_{M,K}\), \(U_{K,L}\), \(S_{L,M}\) on a computer, and if constructing the regeneration cycles is computationally inexpensive, then \(p_F\) and other characteristics of the degradation process can be approximated by crude Monte Carlo unbiased estimators. However, for highly reliable systems, such an assumption seems naive.
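In code, the crude estimator amounts to simulating cycles according to (5)-(6) and averaging the failure indicator. A sketch under the illustrative assumption (ours) that all stage and repair times are exponential:

```python
import random

def crude_cycle(rng, lam, nu, mu_F, mu, M, K, L):
    """One regeneration cycle, cf. (5)-(6); returns (length, failed).

    Illustrative assumptions: all stage times T_i ~ Exp(lam) and the repair
    times U_F ~ Exp(mu_F), U_{K,L} ~ Exp(mu); failure threshold V ~ Exp(nu)."""
    V = rng.expovariate(nu)
    s_mk = sum(rng.expovariate(lam) for _ in range(M, K))      # S_{M,K}
    if s_mk >= V:                                              # instantaneous failure
        s_0m = sum(rng.expovariate(lam) for _ in range(0, M))  # S_{0,M}
        return V + rng.expovariate(mu_F) + s_0m, True          # cycle Y_F
    s_lm = sum(rng.expovariate(lam) for _ in range(L, M))      # S_{L,M}
    return s_mk + rng.expovariate(mu) + s_lm, False            # cycle Y_NF

def crude_pF(n, lam=2.0, nu=0.5, mu_F=1.5, mu=2.0, M=5, K=17, L=1, seed=1):
    """Crude Monte Carlo estimator of p_F over n independent cycles."""
    rng = random.Random(seed)
    return sum(crude_cycle(rng, lam, nu, mu_F, mu, M, K, L)[1] for _ in range(n)) / n
```

For rare failures this loop wastes almost all of its effort on cycles without failure, which is exactly the inefficiency that the splitting schemes of the next section address.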

3 Splitting Scenarios for the Degradation Process

The splitting procedure for a homogeneous degradation process was first proposed in [3], where the method showed the effectiveness of estimation for exponential degradation stages. Denote by \(\{l_i\}, \{R_i\}, i \ge 0,\) the sequence of levels and the corresponding splitting factors. After crossing the level \(l_i\), the trajectory is split into \(R_i\) independent copies, which further develop independently. With a view to generalizing the method to other models, let us now compare two possible splitting scenarios:

  • a) the process splits at each degradation stage \(i \in [M, K-1]\) (Fig. 1a);

  • b) the splitting levels \(l_i\) depend on the value of the accumulated amount of time \(S_{M, K}\) (Fig. 1b); their total number is determined by a pilot run.

In the first case a), it is impossible to pass through several splitting levels in one step, while in the second case b), the trajectory of the process can cross several levels at once during the simulation of a single event.

In both cases, the splitting of trajectories occurs only in the region after stage M, where instantaneous failure becomes possible.

Unlike processes with negative drift (typical for rare-event estimation problems), optimal placement of the levels is impossible here due to the randomness of the threshold time to failure V of the degradation process, whereas all standard methods for rare-event probability simulation are designed for problems with a constant failure threshold [12, 13].

For the standard splitting procedure, the optimal distances between thresholds \(\{l_i\}\) and the splitting factors \(\{R_i\}\) at each threshold are defined by a pilot run [12]. The pilot run defines the threshold partition in accordance with the requirement that the conditional probabilities \(p_i\) of transition between thresholds are not too small, but it yields a biased estimator itself.

From the point of view of regeneration theory, the problem of estimating a rare event probability (and other characteristics of the process) can be reduced to the problem of constructing regeneration cycles using an accelerated simulation technique. In comparison with the naive Monte Carlo method, the splitting method allows building regeneration cycles much faster. Yet, it should be noted that the cycles constructed in this way cannot be considered independent.

Recall that, as proposed in [3], to speed up the simulation procedure of waiting for a rare system failure, we introduce an embedded splitting procedure. The splitting starts strictly at the regeneration moments \(\tau _i\), after which an instantaneous failure becomes possible; see Fig. 1. Further, we will use splitting only in the region between stages M and K and construct a mathematical model of the cycles based on a branching process.

3.1 Fully Branching Regeneration Splitting - Scenario a)

The first scheme a) employs fully branching regeneration, where the splitting levels \(l_i\) are fixed and strictly correspond to the degradation stages. For this reason, we cannot initially change the partition into levels. Thus, it is impossible to influence the estimation variance by optimizing the levels, unlike in the standard splitting method. However, this algorithm is the easiest to implement regardless of the choice of programming language.

Besides, given the artificial branching of the process, it is necessary to take into account the dependency between cycles. At each level, we generate \(R_i\) copies of the r.v. \(T_i\), \(M \le i \le K-1\). So, each original path generates

$$D=R_M\cdots R_{K-1}$$

subpaths, called a group of cycles. Each process trajectory started from the initial threshold \(l_M\) gives a group of D dependent regeneration cycles. The cycles from different groups are independent by construction. The total number of groups is \(R_{M-1}\). The dependence is generated by the shared pre-history of the realizations of the random sum \(S_{M,K}\) before the splitting point at each stage.

For convenience, we will assume that after the moment of splitting at the level \(l_i, i \in [M+1, K-1]\), all \(R_i\) trajectories of the branching degradation process \(X^*\) develop simultaneously. (This can be realistic if the algorithms are implemented using parallel programming tools.)

Let us consider in detail the construction of traditional regeneration cycles from the first group \(X^{*}_1\) generated by the first trajectory starting from stage \(M-1\). For the remaining groups \(X^*_i, i \in [2, R_{M-1}]\), the reasoning is similar. Let us introduce the following notation:

  • \(g_i\) - generation of \(l_i, i \in [M, K-1]\) threshold;

  • \(n_i\) - the number of trajectories starting from the level \(l_{i-1}\) and reaching the level \(l_i\), \(i \in [M+1, K-1]\), \(n_M=R_{M-1}\);

  • \(N^*\) - the maximum stage number reached by the trajectories before the failure (if it occurs) \(N^*=\max \{i: n_i>0, i \in [M, K-1]\}\);

  • \(t^{(i)}_{k_i}\) - the splitting moment of the bunch number \(k_i \in [1, n_i]\) at the level \(l_i, i \in [M+1, K-1]\);

  • \(H^{(i)}_{k_i,j}(t)\) - accumulated time spent in degradation stages at time t by jth process copy that started from \(l_i\) threshold at the moment \(t^{(i)}_{k_i}\);

  • \(\eta ^{(i)}_{k_i,j}\) - the instantaneous failure moment for jth copy of the process after the hitting \(l_i\) threshold;

  • \(Z^{(i+1)}_{k_i,j}\) - the moment of the next event, namely, either a transition to the next stage or a failure occurring at the level \(l_i\).

We can conclude that

$$\begin{aligned}&H^{(i)}_{k_i,j}(t^{(i)}_{k_i})= \sum _{p=M}^{i-1}T_p, \;\; i \ge M+1; \end{aligned}$$
(8)
$$\begin{aligned}&\eta ^{(i)}_{k_i,j} = \min \{t>t^{(i)}_{k_i}: H^{(i)}_{k_i,j}(t) \ge V\};\end{aligned}$$
(9)
$$\begin{aligned}&t^{(i+1)}_{k_i, j} = t^{(i)}_{k_i} + T_{i,j}, \;\; T_{i,j}\,=\,_{st} T_i \sim F_i;\end{aligned}$$
(10)
$$\begin{aligned}&Z^{(i+1)}_{k_i,j} = \min \{\eta ^{(i)}_{k_i,j}, t^{(i+1)}_{k_i, j}\}. \end{aligned}$$
(11)

We say that a trajectory belongs to the generation \(g_i\) if it started from level \(l_i\) and either reached level \(l_{i+1}\) or failure occurred; thus we denote:

$$\begin{aligned}&g_i = \{H^{(i)}_{k_i,j}, \; j \ge M\}, \end{aligned}$$
(12)
$$\begin{aligned}&H^{(i)}_{k_i,j} = \{H^{(i)}_{k_i,j}(t), \; t^{(i)}_{k_i} \le t \le Z^{(i+1)}_{k_i,j}\}. \end{aligned}$$
(13)

Recall that the D cycles in each group \(X^*_i, i \in [1, R_{M-1}],\) are formed from trajectories that either met failure at one of the levels (cycles with failure \(Y_F\)) or reached stage K (cycles without failure \(Y_{NF}\)).

Below we give an algorithm for sequentially constructing the cycles in the group \(X^{*}_1\), assuming that \(n_{M+1}>0\). The output of Algorithm 1 is the set of cycles in the first group

$$X^{*}_1 = \{G^{(M)}, \dots , G^{(N^*)}\}.$$
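In its indicator-only form (cf. Remark 2 below), the fully branching scheme admits a compact recursive sketch. The following Python code is an illustration under simplifying assumptions that are ours, not the paper's: exponential stage times, a single threshold V drawn per group, and a failed branch counted with the product of the remaining factors, so that each group still represents \(D=R_M\cdots R_{K-1}\) cycles.

```python
import random

def group_failures(rng, lams, Rs, nu):
    """Weighted number A of failed cycles in one group of the fully
    branching scheme (scenario a), reduced to indicators (cf. Remark 2).

    Illustrative assumptions (ours): stage times T_M..T_{K-1} ~ Exp(lams[i]),
    splitting factors Rs = (R_M, ..., R_{K-1}), and V ~ Exp(nu) drawn once
    per group."""
    V = rng.expovariate(nu)
    stages = list(zip(lams, Rs))

    def branch(i, s):                         # s = accumulated time S_{M,M+i}
        if i == len(stages):
            return 0                          # stage K reached: no failure
        lam, R = stages[i]
        total = 0
        for _ in range(R):                    # split into R_i copies
            t = s + rng.expovariate(lam)
            if t >= V:                        # instantaneous failure
                w = 1
                for _, r in stages[i + 1:]:
                    w *= r                    # a failed branch stands in for
                total += w                    # all of its would-be subpaths
            else:
                total += branch(i + 1, t)
        return total

    return branch(0, 0.0)

def splitting_pF(n_groups, lams, Rs, nu, seed=1):
    """Estimator (15) with R_{M-1} = n_groups independent groups."""
    rng = random.Random(seed)
    D = 1
    for r in Rs:
        D *= r
    A = sum(group_failures(rng, lams, Rs, nu) for _ in range(n_groups))
    return A / (n_groups * D)
```

Conditioning on V shows that the weighted count has mean \(D\cdot p_F\) per group, so the ratio above is unbiased; the correlation within a group only affects the variance, as discussed next.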

Remark 1

As a rule, for degradation models with rare failures, it most often holds that \(N^{*}=K-1\), and thus splitting occurs at each stage of degradation; therefore, the algorithm becomes fully branching.

Further, each group must be completed to full regeneration cycles by simulation, according to the cycle type (6). Thus, each set of cycles \(G^{(i)}\) has its own tail of the regeneration cycle. The proposed Algorithm 1 is started \(R_{M-1}\) times at the regeneration moments \(\tau _i\). The cycles from different groups \(X^{*}_i\) are independent, but the trajectories within the cycles of one group are dependent by construction. Then, the total number of failures in the ith group is

$$\begin{aligned} A_i=\sum _{j=(i-1)\cdot D+1}^{i\cdot D}I^{(j)}, i=1, \dots , R_{M-1}, \end{aligned}$$

where \(I^{(j)}=1\) for a cycle with failure (\(I^{(j)}=0\) otherwise). The sequence \(\{I^{(j)}, j \ge 1\}\) is a discrete D-dependent regenerative sequence with constant cycle length D and regeneration instants \(\{i\cdot D\}\), \(i \in [1, R_{M-1}]\).

Remark 2

Note that constructing the trajectories according to the algorithm is necessary for estimating performance measures such as the mean lifetime \(\mathsf ET\), the average cycle lengths \(\mathsf EY\), \(\mathsf EY_F\), the reliability function R(t), etc. If only the failure probability is required, then the regeneration cycle can be represented by an indicator, and Algorithm 1 can be simplified.

Moreover, the regenerative interpretation is convenient for interval estimation. The regenerative structure gives the following unbiased and strongly consistent estimator \(\widehat{p}_F\):

$$\begin{aligned} \widehat{p}_F = \frac{\sum _{j=1}^{R_{M-1}}A_j}{R_{M-1}\cdot D} \rightarrow \frac{\mathbb {E}\sum _{j=1}^{D}I^{(j)}}{D} = p_F \end{aligned}$$
(15)

as \(R_{M-1} \rightarrow \infty \) w.p. 1. The following \(100(1-\delta )\%\) confidence interval for \(p_F\) is based on the regenerative variant of the Central Limit Theorem, well known from [8]:

$$\begin{aligned} \Big [\widehat{p}_F \pm \frac{z(\delta )\sqrt{v_n}}{\sqrt{n}} \Big ] \end{aligned}$$
(16)

where the quantile \(z(\delta )\) satisfies \(\mathbb {P}[N(0,1) \le z(\delta )] = 1 - \delta /2\), and

$$\begin{aligned} v_n = \frac{n^{-1}\sum _{i=1}^{n}[A_i - \widehat{p_F}D]^2}{D^2} \end{aligned}$$
(17)

is a weakly consistent estimator of \(\sigma ^2 = \mathbb {E}[A_1 - p_FD]^2/D^2\) if \(\mathbb {E} (A_1-p_F D)^2<\infty \). Under the moment assumption \(\mathsf EA_1^2 < \infty ,\) the estimate (17) is strongly consistent.
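Estimators (15)-(17) translate directly into code. A small sketch (the function name is ours) that takes the per-group failure counts \(A_1, \dots , A_n\):

```python
import math

def regen_ci(A, D, z=1.96):
    """Point estimate (15) and confidence interval (16) with the variance
    estimator (17), from per-group failure counts A_1, ..., A_n of groups
    of D cycles each; z is the quantile z(delta), e.g. 1.96 for 95%."""
    n = len(A)
    p_hat = sum(A) / (n * D)                                   # estimator (15)
    v_n = sum((a - p_hat * D) ** 2 for a in A) / (n * D * D)   # estimator (17)
    half = z * math.sqrt(v_n / n)                              # half-width in (16)
    return p_hat, (p_hat - half, p_hat + half)
```

Note that the interval is built from the n independent group totals \(A_i\), not from the \(n\cdot D\) correlated cycle indicators, which is exactly why the regenerative interpretation is convenient here.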

3.2 Standard Splitting Procedure - Scenario b)

Another splitting scenario can be proposed based on the standard splitting method [12, 13]. Recall that in the standard splitting procedure the state space E of the process is divided into \(M+1\) nested subsets \(C_i\), which are defined by an importance function f:

$$\begin{aligned} C_i = \{ x \in E \mid f(x) \ge l_i\},\; i\in [1, M+1]. \end{aligned}$$

For the degradation process, the splitting levels \(l_i\) are selected according to the value of \(S_{M, K}\) (see the illustration in Fig. 1b), so the cumulative residence time in the degradation stages is chosen as the importance function f. Since the standard algorithm arranges the levels \(\{l_i\}\) relative to a fixed value (the level \(l_{M+1}=l\)), it is proposed to use the expectation \(\mathsf EV\) as the threshold value instead of the r.v. V. Introduce the following notation for the conditional probabilities of reaching the next level \(i+1\) from the current level i:

$$\begin{aligned} p_1=\mathsf P(C_1),\dots , p_{i+1}=\mathsf P(C_{i+1}|C_{i}), i\in [1, M]. \end{aligned}$$

Thus, the standard point estimator \(\widehat{p}_F\) given by splitting scenario b) is

$$\begin{aligned} \hat{p}_F = \prod _{i=1}^{M+1}\hat{p_i} = \frac{n_{M+1}}{R_0}\prod _{i=1}^{M}\frac{1}{R_i} = \displaystyle \frac{n_{M+1}}{R_0 D}, \end{aligned}$$
(18)

where \(\widehat{p}_i\) is the estimate of the transition probability \(p_i\), \(R_i\) is the splitting factor, \(n_{M+1}\) is the number of failures, and \(R_0\) is the number of simulation starts.

It is known that the splitting simulation effort is defined as follows:

$$\begin{aligned} \sum _{i=1}^{M+1}R_i\mathsf E[n_{i-1}] = R_0\sum _{i=1}^{M+1}\frac{1}{p_i}\prod _{j=1}^{i}p_jR_j, \end{aligned}$$
(19)

where \(n_i\) is the number of trajectories reaching the level \(l_i\). The procedure is optimal if, on the one hand, the number of branching trajectories does not grow exponentially and, on the other hand, the process does not die out. Obviously, the case \(p_jR_j>1\) implies an explosion of the algorithm's effort, the case \(p_jR_j<1\) gives \(n_{M+1}= 0\) with high probability, and the case \(R_j =1/p_j\) is optimal. Thus, the optimal values for simulation are related by the ratio \( R_i = 1/\widehat{p}_i \), but for the degradation process it should be expected that \(\widehat{p}_i \approx 1\). Thus, the optimal option is not suitable for accumulation processes and gives no effect.

Generally, the optimal levels \(\{l_i\}\) and optimal splitting factors \(\{R_i\}\) at each threshold are determined from the so-called adaptive pilot run of the splitting procedure (for instance, see [12]). The pilot run gives a biased estimator but defines the threshold partition according to the requirement that the \(p_i\) are not rare-event probabilities. Thus, the input parameters of the adaptive algorithm include the sample size N, \(p_i = p\in (0,1)\), the fixed level \(l_{M+1}=l\), and the importance function f(x). The key points are:

  1. to get the optimal levels \(l_i=\min (l_{M+1},\hat{l_i})\), where

    $$\begin{aligned} \hat{l_i} = \mathop {arg\,min}\limits _{l \in \{f(X_1), \dots , f(X_N)\}}{\Big \{\frac{1}{N}\sum _{i=1}^{N}I[f(X_i) \ge l] \le p\Big \}}; \end{aligned}$$

  2. to get the splitting factors

    $$\begin{aligned} R_i=\mathsf {Ber}(p) + \Big \lfloor \frac{N}{n_i}\Big \rfloor . \end{aligned}$$

Due to the randomness of the parameter V, the adaptive pilot run was applied with the average-level setting \(l = \mathsf E[V]\) to calculate the thresholds \(l_i\) and the factors \(R_i\).
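The two pilot-run steps above can be sketched as follows (the tie handling in the pilot sample and the function names are our simplifications):

```python
import random

def pilot_level(fvals, p, l_final):
    """Step 1: the smallest pilot value l with (1/N) * #{f(X_i) >= l} <= p,
    capped by the fixed final level l_final (here l = E[V])."""
    xs = sorted(fvals)
    N = len(xs)
    for idx, l in enumerate(xs):
        if (N - idx) / N <= p:          # fraction of pilot values >= l
            return min(l_final, l)      # (assumes distinct pilot values)
    return l_final

def pilot_factor(n_i, N, p, rng):
    """Step 2: splitting factor R_i = Ber(p) + floor(N / n_i)."""
    return (1 if rng.random() < p else 0) + N // n_i
```

The level rule is essentially the empirical \((1-p)\)-quantile of the pilot sample, and the Bernoulli term randomizes the rounding of the factor \(N/n_i\).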

Remark 3

In addition, since the degradation process has arbitrary increments and can cross several levels at once, the following splitting condition was introduced: if a trajectory starting from level \(l_i\) crosses some level \(l_{i+k}, i+k \le M+1\), then it splits into \(\prod _{j=1}^{k}R_{i+j}\) copies.

4 Simulation Results

An experimental comparison of the two splitting scenarios was made with respect to the following quality criteria: the relative error RE and the relative experimental error RER (when \(p_F\) is available analytically):

$$\begin{aligned} RE[\widehat{p}_{F}]=\frac{\sqrt{Var[\widehat{p}_F]}}{\mathbb {E}[\hat{p}_F]}, \; RER[\widehat{p_F}]=|\hat{p_F} - p_F|\cdot 100/p_F. \end{aligned}$$
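For reference, both criteria can be computed from a sample of independent replications of the estimator; in this sketch (ours) the sample mean plays the role of \(\mathbb {E}[\hat{p}_F]\) in RE and of \(\hat{p}_F\) in RER:

```python
import math

def re_rer(samples, p_exact):
    """Relative error RE and relative experimental error RER (percent)
    from a sample of independent replications of the estimator."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    return math.sqrt(var) / mean, abs(mean - p_exact) * 100.0 / p_exact
```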

To estimate the variance, samples of 100 values were constructed. The tables show the average computation time for one sample element. In some cases, Monte Carlo estimation for probabilities less than \(10^{-7}\) took too long to complete. An invariant for comparing the three algorithms MC, \(RS_{a)}\) and \(RS_{b)}\) is the construction of the same number n of cycles.

All numerical tests were executed on an HP ENVY ultrabook with an Intel(R) Core(TM) i3 7100U 2.4 GHz processor and 4 GB of RAM, running Windows 10. Tables 1, 2, and 3 show the results of running the programs in Python3 and C++ for the Monte Carlo method (MC) and the regenerative splitting method (RS) for scenarios a) and b).

Table 1. Time (s) estimator: MC vs. \(RS_{a)}\) and \(RS_{b)}\): \(T_j \sim Exp(\lambda _j)\), \(V \sim Exp(\nu )\)
Table 2. RER estimator: MC vs. \(RS_{a)}\) and \(RS_{b)}\): \(T_j \sim Exp(\lambda _j)\), \(V \sim Exp(\nu )\)
Table 3. RE estimator: MC vs. \(RS_{a)}\) and \(RS_{b)}\): \(T_j \sim Exp(\lambda _j)\), \(V \sim Exp(\nu )\)

Let us fix the model parameters \(\nu =0.5\), \(\mu _F=1.5\), \(\mu =2\), \(L=1\), \(M=5\), \(K=17\). To observe both RE and RER values, we give an example of exponential degradation periods \(T_j \sim Exp(\lambda _j)\) for the heterogeneous case, where analytical formulas are known from [2]. So, we will vary the number of regeneration cycles n and the sequence of values

$$\begin{aligned} \lambda _j = \lambda _{K-1} - (K-j-1) s, \; j \in [0, K-2], \end{aligned}$$

where \(\lambda _{K-1} \) is initialized before starting the splitting procedure and the other values are shifted by the step s, so the condition

$$\lambda _0< \dots< \lambda _{K-1}, \;\; \nu < \lambda _j, \; j \in [0,K-1],$$

for an increasing degradation rate is guaranteed.
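The rate sequence and the monotonicity condition can be generated with a small helper (the function name is ours):

```python
def stage_rates(lam_last, s, K, nu):
    """Rates lambda_j = lambda_{K-1} - (K - j - 1) * s for j = 0..K-2,
    together with lambda_{K-1} = lam_last; raises if the condition
    lambda_0 < ... < lambda_{K-1}, nu < lambda_j is violated."""
    lams = [lam_last - (K - 1 - j) * s for j in range(K - 1)] + [lam_last]
    if not (all(a < b for a, b in zip(lams, lams[1:])) and all(l > nu for l in lams)):
        raise ValueError("rates must be increasing and exceed nu")
    return lams
```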

We compare all methods with the same number n of regeneration cycles and track the corresponding time \(t_{MC}\), \(t_{RS_{a)}}\), \(t_{RS_{b)}}\) of each method in seconds. The number of cycles n is chosen for practical reasons so that the RER does not exceed \(5\%\) for all methods.

Table 1 shows that the \(RS_{a)}\) algorithm is significantly faster than MC and \(RS_{b)}\). Despite the higher RE values, the \(RS_{a)}\) splitting scenario provides a lower RER and is, therefore, closer to the analytical value than \(RS_{b)}\).

It should be noted that for scenario \(RS_{b)}\), cases with a fixed optimal number of splits \(R_i\) and with random factors, exponential \(R_i \sim Exp(0.33)\) (\(RS_{exp}\)) and uniform \(R_i \sim Uni[1,5]\) (\(RS_{uni}\)), were also studied.

Table 4. RER and time (s) results for various \(R_i\) types: \(T_j \sim Exp(\lambda _j)\), \(V \sim Exp(\nu )\)

Table 4 compares the RER and time of the \(RS_{b)}\) algorithm in these cases with the MC method. Nevertheless, no significant differences between the types of splitting and the MC estimates are observed. This fact shows again that the standard optimality conditions have no effect in the case of the degradation process. Empty cells in the table mean that the computation time for one sample element was too long to build a full sample of size 100.

Table 5. RE and time (s) estimator in C++: MC vs. \(RS_{a)}\): \(T_i\sim exp(\lambda )\), \(V \sim Exp(\nu )\)

Table 5 shows the results of the algorithm in the C++ language. Expectedly, the running time for the same number of cycles is much shorter than in Python, so we can compute the MC estimate for a lower probability. Note that the MC method gives a smaller RE in some cases. This fact can be explained by the direct dependence of the variance of \(\widehat{p}_F\) on \(\nu \), which is rather large here.

Table 6. Var and time (s) estimator in C++: MC vs. \(RS_{a)}\) and \(RS_{b)}\): \(T_i \sim \) Weibull

In the next example, we try to avoid the influence of the variance of the r.v. V and assume that it is fixed, \(V=1/\nu \). We use \(\nu \) as a rarity parameter to change the probability value in the experiment. Consider i.i.d. \(T_i=T\) with the light-tailed Weibull d.f. \(F_{T}(x) = 1-e^{-3x^4}, x \ge 0\). It is difficult to obtain analytical results for Weibull degradation stages, so we compare the simulation methods with each other. It is clear from Table 6 that splitting at the degradation stages is again more beneficial than scenario b).
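Since \(F_T\) is given in closed form, the Weibull stages can be sampled by the inverse transform. A sketch (ours) that assumes the parameters \(M=5\), \(K=17\) from the previous example, so \(K-M=12\) stages per cycle:

```python
import math
import random

def weibull_stage(rng):
    """Inverse-transform sample from F_T(x) = 1 - exp(-3 x^4), x >= 0:
    solving F_T(T) = U gives T = (-ln(1 - U) / 3) ** (1/4)."""
    return (-math.log(1.0 - rng.random()) / 3.0) ** 0.25

def mc_pF_fixed_V(n, v, stages=12, seed=1):
    """Crude MC for p_F = P(S_{M,K} >= V) with fixed V = 1/nu and
    stages = K - M i.i.d. Weibull periods (illustrative sketch)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        s = 0.0
        for _ in range(stages):
            s += weibull_stage(rng)
            if s >= v:       # failure: the accumulated time crossed V
                hits += 1
                break
    return hits / n
```

Replacing the exponential sampler with `weibull_stage` is the only change needed in the splitting scheme of scenario a), which illustrates the remark in Sect. 1 that the method extends by swapping the stage-time simulation procedure.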

5 Conclusion

We performed a series of experiments that demonstrated the effectiveness of the two splitting schemes for the accelerated simulation of regeneration cycles and the estimation of the characteristics of the degradation process. Both variants of the splitting procedure were compared with cycle construction via the naive Monte Carlo method in terms of simulation time, relative error, and relative experimental error. Based on the numerical experiments, it can be concluded that the first scenario a) is the most efficient for simulation of a degradation process with rare failures. At the same time, organizing the branching at the moments of transition between degradation stages is simpler and more intuitive. In addition, it turns out that the choice of optimal splitting parameters is difficult because the time to failure is a random variable. In particular, the use of optimal splitting rates gives no effect for processes with independent increments, even compared to the Monte Carlo method.