1 Introduction

Modern systems consist of numerous parts working together, which makes maintenance more difficult. In general, systems can be classified as repairable or non-repairable according to the feasibility of maintenance. A repairable system is one that can be restored to an operating condition, without replacement of the entire system, after some repair activity is executed. For a repairable system, the patterns of failures collected after successive repairs are very important for establishing an effective maintenance policy. For example, increasing time intervals between failures suggest reliability improvement, while decreasing time intervals imply reliability deterioration. Repair processes of this type can be described by a minimal repair model, in which the repair or substitution of a failed part has a negligible effect on overall system reliability, restoring the system to the same condition it was in just before the failure. Because the system is restored to the state immediately preceding the most recent failure, the minimal repair assumption yields a failure pattern governed by a nonhomogeneous Poisson process (NHPP). The NHPP has garnered significant attention in the reliability literature [1, 2].

2 Nonhomogeneous Poisson Process Model

The NHPP is defined by its nonnegative intensity function \(\lambda(t)\). The expected number of failures in the time interval \((0, t]\) is \(\Lambda(t) = \int_0^t \lambda(u)\,\mathrm{d}u\). The intensity function \(\lambda(t)\) is equal to the rate of occurrence of failures (ROCOF) associated with the repairable system [2]. When the intensity function is constant, i.e., \(\lambda(t) \equiv \lambda\), the process reduces to a homogeneous Poisson process (HPP). The NHPP has been widely used in modeling failure frequency for repairable systems because of its flexibility and mathematical tractability via its intensity function \(\lambda(t)\) [3].

2.1 Monotonic Failure Intensity Model

The most commonly applied form of NHPP is the power law process (PLP). Crow [4] suggested a PLP model under “find it and fix it” conditions with the intensity function

$$ \begin{array}{*{20}c} {\lambda \left( t \right) = \frac{\beta }{\alpha }\left( {\frac{t}{\alpha }} \right)^{\beta - 1} ,\quad t > 0} \\ \end{array} $$
(1)

where \(\beta\) (> 0) and \(\alpha\) (> 0) are the shape and scale parameters, respectively. The corresponding mean cumulative number of failures over \(\left( {0,{ }t} \right]\) is \(\Lambda \left( t \right){ } = { }\left( {\frac{t}{\alpha }} \right)^{\beta }\). As another functional form of NHPP, a log linear process (LLP) has intensity function

$$ \begin{array}{*{20}c} {\lambda \left( t \right) = \gamma e^{\kappa t} ,\quad t > 0} \\ \end{array} $$
(2)

and the corresponding mean cumulative number of failures over \((0, t]\) is \(\Lambda(t) = \gamma \kappa^{-1}\left(e^{\kappa t} - 1\right)\), for the parameters \(\gamma\,(>0)\) and \(\kappa\). The LLP model was first proposed by Cox and Lewis [5] to model air conditioner failures. The PLP and the LLP models have been employed to model failure patterns of a repairable system having monotonic intensity, i.e., decreasing failure patterns (reliability improvement) with \(\beta < 1\) \((\kappa < 0)\) or increasing failure patterns (reliability deterioration) with \(\beta > 1\) \((\kappa > 0)\). When \(\beta = 1\) \((\kappa = 0)\), the PLP (LLP) reduces to the HPP.
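
As a concrete illustration, the minimal sketch below (in Python; the parameter values are ours, chosen only for illustration and not taken from any data set in this chapter) evaluates the PLP intensity (1) and the LLP intensity (2) on a time grid.

```python
import numpy as np

def plp_intensity(t, alpha, beta):
    """Power law process intensity (1): (beta/alpha) * (t/alpha)**(beta - 1)."""
    return (beta / alpha) * (t / alpha) ** (beta - 1)

def llp_intensity(t, gamma, kappa):
    """Log-linear process intensity (2): gamma * exp(kappa * t)."""
    return gamma * np.exp(kappa * t)

t = np.linspace(0.1, 1500.0, 500)   # avoid t = 0 for the PLP when beta < 1
improving = plp_intensity(t, alpha=200.0, beta=0.7)        # decreasing intensity
deteriorating = llp_intensity(t, gamma=0.01, kappa=0.002)  # increasing intensity
```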

The intensity function of the PLP model tends to infinity as the system age increases, whereas the observed failure process may have a finitely bounded intensity function. Considering NHPPs with a finite and bounded intensity function, Pulcini [6] proposed a bounded intensity process (BIP) with intensity function

$$ \begin{array}{*{20}c} {\lambda \left( t \right) = a\left[ {1 - e^{{ - \frac{t}{b}}} } \right],\quad a,b > 0;\; t > 0} \\ \end{array} $$
(3)

The intensity function is increasing and bounded, approaching the horizontal asymptote \(a\) as \(t\) tends to infinity.

2.2 Non-monotonic Failure Intensity Model

In some cases, a repairable system is subject to early (or infant mortality) failures due to the presence of assembly defects that are not screened out completely through the burn-in process, as well as wear-out failures caused by deteriorating phenomena. This causes a so-called bathtub-shaped failure intensity, which is typical for large and complex systems characterized by a number of different failure modes [7]. The PLP and the LLP models are too simplistic to accommodate this bathtub characteristic of the failure process. As an alternative, unions of several independent NHPPs, called superposed Poisson processes (SPPs), have been developed to model this kind of non-monotonic failure intensity. When any subsystem failure can independently cause the system to break down, the superposed model is a natural model for the failure of the system. For an SPP based on \(J\) independent processes, let \(N_j(t)\) be the number of failures in \((0, t]\) for the \(j\)th subsystem \((j = 1, 2, \ldots, J)\) with intensity function \(\lambda_j(t) = dE[N_j(t)]/dt\). The number of failures in \((0, t]\) for the system in the SPP is \(N(t) = \sum_{j=1}^{J} N_j(t)\). If \(N_j(t)\), \(j = 1, 2, \ldots, J\), are independent nonhomogeneous Poisson processes, then \(N(t)\) is also an NHPP with intensity function \(\lambda(t) = \sum_{j=1}^{J} \lambda_j(t)\).
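
To make the superposition property concrete, the following sketch simulates one failure history from an NHPP whose intensity is the sum of two hypothetical components, using Lewis–Shedler thinning. For a decreasing-plus-increasing (bathtub) intensity, the maximum over \((0, T]\) is attained at an endpoint, so \(\lambda(0) + \lambda(T)\) is a safe, if loose, upper bound.

```python
import numpy as np

def simulate_nhpp(intensity, T, lam_max, rng):
    """Simulate one NHPP path on (0, T] by Lewis-Shedler thinning;
    lam_max must bound the intensity from above on (0, T]."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)          # candidate event from HPP(lam_max)
        if t > T:
            return np.array(events)
        if rng.uniform() < intensity(t) / lam_max:   # keep with prob. lambda(t)/lam_max
            events.append(t)

# superposed intensity lambda(t) = lambda_1(t) + lambda_2(t) (illustrative components)
lam1 = lambda t: 0.05 * np.exp(-0.01 * t)            # early failures, decreasing
lam2 = lambda t: 0.001 * np.exp(0.004 * t)           # wear-out, increasing
lam = lambda t: lam1(t) + lam2(t)

rng = np.random.default_rng(1)
T = 1500.0
failures = simulate_nhpp(lam, T, lam_max=lam(0.0) + lam(T), rng=rng)
```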

SPPs have found successful application in modeling software reliability, where early detection and removal of coding errors can sometimes lead to reliability growth, e.g., the Musa-Okumoto process [8] for modeling recurrent errors in software. Pulcini [9] proposed the superposition of two independent power law processes (called the “superposed power law process” (S-PLP)) to model the bathtub-shaped failure pattern of a repairable system with intensity function

$$ \begin{array}{*{20}c} {\lambda \left( t \right) = \frac{{\beta_{1} }}{{\alpha_{1} }}\left( {\frac{t}{{\alpha_{1} }}} \right)^{{\beta_{1} - 1}} + \frac{{\beta_{2} }}{{\alpha_{2} }}\left( {\frac{t}{{\alpha_{2} }}} \right)^{{\beta_{2} - 1}} ,\quad \alpha_{j} ,\beta_{j} > 0, j = 1, 2} \\ \end{array} $$
(4)

In Pulcini’s model, the parameters \(\beta_{1}\) and \({ }\beta_{2}\) determine the failure patterns of a repairable system. For example, \(\beta_{1} < 1\) models the failure pattern of a system improving over time, while \(\beta_{2} > 1\) models that of a system deteriorating over time. As a result, the S-PLP with \(\beta_{1} < 1\) and \(\beta_{2} > 1\) is able to model a repairable system with the bathtub-shaped failure intensity. Yang and Kuo [10] proposed the superposition of the Musa-Okumoto process and the power law process as

$$ \begin{array}{*{20}c} {\lambda \left( t \right) = \frac{{\beta_{1} }}{{t + \alpha_{1} }} + \alpha_{2} \beta_{2} t^{{\beta_{2} - 1}} ,\quad \alpha_{j} ,\beta_{j} > 0,\; j = 1, 2} \\ \end{array} $$
(5)

with corresponding mean cumulative number of failures over \((0, t]\), \(\Lambda(t) = \beta_1 \ln\left(1 + \frac{t}{\alpha_1}\right) + \alpha_2 t^{\beta_2}\). As Hjorth [11] pointed out, this intensity function can take increasing, decreasing, and bathtub shapes. Later, Guida and Pulcini [12] proposed the bathtub bounded intensity process (BBIP) represented by the following superposed intensity function

$$ \begin{array}{*{20}c} {\lambda \left( t \right) = ae^{ - t/b} + \alpha \left( {1 - e^{{ - \frac{t}{\beta }}} } \right), a,b,\alpha ,\beta > 0,} \\ \end{array} $$
(6)

where the first component represents a log-linear process with decreasing intensity function and the latter component is a bounded intensity process with increasing bounded intensity function [6]. Guida and Pulcini [12] showed that the BBIP is able to model the failure pattern of a repairable system subject to both early failures and deterioration phenomena, featuring a finite asymptote as the system age increases.

This work is mainly motivated by unscheduled maintenance data of artillery systems collected by the Republic of Korea (ROK) Army during field exercises over a fixed period of time. Some of the artillery systems are subject to early failures due to the presence of defective parts or assembly defects, as well as wear-out failures caused by deteriorating phenomena. This causes a non-monotonic trend in the failure data in which the intensity function initially decreases, followed by a long period of constant or nearly constant intensity until wear-out finally occurs, at which time the intensity function begins to increase. We found that existing models, including the S-PLP and the BBIP, did not adequately capture the non-monotonic trend in the failure process for these field artillery systems. Because of this, we propose a superposed log-linear process (S-LLP) to model ROK Army artillery system failures, and we derive the maximum likelihood estimators (MLEs) for the model parameters, along with their confidence intervals. Based on the NHPP models for a repairable system, we then review the application of mixed-effects models to recurrent failure data from multiple repairable systems for the purpose of reliability analysis.

3 Superposed Log-Linear Process for Bathtub-Shaped Intensity

Consider a repairable system with failures observed over the time interval \((0, T]\). Suppose that the failures arise from two different failure modes, and that each mode is modeled by an LLP with parameters \(\alpha_j\) and \(\beta_j\) for \(j = 1, 2\). We propose a superposed log-linear process (S-LLP) with intensity function

$$ \begin{array}{*{20}c} {\lambda \left( t \right) = \alpha_{1} e^{{ - \beta_{1} t}} + \alpha_{2} e^{{\beta_{2} t}} ,\quad \alpha_{1} ,\alpha_{2} ,\beta_{1} ,\beta_{2} > 0,} \\ \end{array} $$
(7)

for \(t \ge 0\). A key difference between the S-LLP and the previously mentioned SPPs lies in the parameters \(\beta_1\) and \(\beta_2\). By restricting them to be strictly positive, the superposed process is the sum of a decreasing and an increasing intensity function. Note that in the limiting case \(\beta_1 = \beta_2 = 0\), the S-LLP reduces to the homogeneous Poisson process (HPP) with constant intensity \(\lambda \equiv \alpha_1 + \alpha_2\). Unlike the S-PLP intensity function with \(\beta_1 < 1\) or \(\beta_2 < 1\), the S-LLP intensity function (7) is finite at \(t = 0\). The first derivative of the intensity function \(\lambda(t)\) with respect to \(t\),

$$ \begin{array}{*{20}c} {\lambda^{\prime} \left( t \right) = - \alpha_{1} \beta_{1} e^{{ - \beta_{1} t}} + \alpha_{2} \beta_{2} e^{{\beta_{2} t}} ,} \\ \end{array} $$

is equal to \(\alpha_2 \beta_2 - \alpha_1 \beta_1\) at \(t = 0\); hence \(\lambda(t)\) is initially decreasing if and only if \(\alpha_1 \beta_1 > \alpha_2 \beta_2\), and \(\lambda^{\prime}(t)\) is equal to 0 at \(t = \tau\), where \(\tau\) is given by

$$ \begin{array}{*{20}c} {\tau = \frac{1}{{\beta_{1} + \beta_{2} }}\ln \left( {\frac{{\alpha_{1} \beta_{1} }}{{\alpha_{2} \beta_{2} }}} \right)} \\ \end{array} $$
(8)

The point with minimum intensity (\(\tau\)) lies between \(0\) and \(T\) if \(0 \le {\text{ln}}\left( {\alpha_{1} \beta_{1} /\alpha_{2} \beta_{2} } \right) \le \left( {\beta_{1} + \beta_{2} } \right)T\). The second derivative of the intensity function is

$$ \begin{array}{*{20}c} {\lambda^{\prime\prime}\left( \tau \right) = \alpha_{1} \beta_{1}^{2} e^{{ - \beta_{1} \tau }} + \alpha_{2} \beta_{2}^{2} e^{{\beta_{2} \tau }} > 0,} \\ \end{array} $$
(9)

so \(\tau\) is the unique time point with minimum intensity value

$$ \begin{array}{*{20}c} {\lambda \left( \tau \right) = \alpha_{1} e^{{ - \beta_{1} \tau }} \left( {\frac{{\beta_{1} + \beta_{2} }}{{\beta_{2} }}} \right).} \\ \end{array} $$
(10)

That is, the intensity decreases until \(t = \tau\), after which it increases from \(t = \tau\) to \(t = T\). Thus, the intensity function (7) reflects a bathtub behavior of sequential failures in a repairable system when the system is subject both to early failures and to wear-out failures. The expected number of failures up to \(t\) is given by

$$ \begin{array}{*{20}c} {\Lambda \left( t \right) = \mathop \smallint \limits_{0}^{t} \lambda \left( u \right){\text{d}}u = \frac{{\alpha_{1} }}{{\beta_{1} }}\left( {1 - e^{{ - \beta_{1} t}} } \right) + \frac{{\alpha_{2} }}{{\beta_{2} }}\left( {e^{{\beta_{2} t}} - 1} \right), \quad t \ge 0.} \\ \end{array} $$
(11)

Similar to the S-PLP, it is the sum of the expected numbers of failures caused by each failure mode, and it has an inflection point.
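
A minimal numerical check of Eqs. (7), (8), and (10) follows; the parameter values are illustrative, not estimates from the artillery data.

```python
import numpy as np

def sllp_intensity(t, a1, b1, a2, b2):
    """S-LLP intensity (7): a1 * exp(-b1 * t) + a2 * exp(b2 * t)."""
    return a1 * np.exp(-b1 * t) + a2 * np.exp(b2 * t)

def sllp_change_point(a1, b1, a2, b2):
    """Time of minimum intensity, Eq. (8); requires a1 * b1 > a2 * b2."""
    return np.log((a1 * b1) / (a2 * b2)) / (b1 + b2)

# illustrative parameter values only
a1, b1, a2, b2 = 0.08, 0.002, 5e-5, 0.006
tau = sllp_change_point(a1, b1, a2, b2)
lam_min = sllp_intensity(tau, a1, b1, a2, b2)
# the direct evaluation agrees with the closed form in Eq. (10)
assert np.isclose(lam_min, a1 * np.exp(-b1 * tau) * (b1 + b2) / b2)
```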

3.1 Maximum Likelihood Estimation

We consider the likelihood function for an NHPP with the first \(n\) failure-times, \({\varvec{t}} \equiv (t_1 < t_2 < \cdots < t_n)\), which are observed until \(T\). Under failure-truncated sampling, the log-likelihood function of the S-LLP is

$$ \begin{array}{*{20}c} \begin{aligned} \ell \left( {\alpha_{1} ,\alpha_{2} ,\beta_{1} ,\beta_{2} ;{\varvec{t}}} \right) & = \mathop \sum \limits_{i = 1}^{n} \ln \left[ {\alpha_{1} e^{{ - \beta_{1} t_{i} }} + \alpha_{2} e^{{\beta_{2} t_{i} }} } \right] \\ & \quad - \left[ {\frac{{\alpha_{1} }}{{\beta_{1} }}\left( {1 - e^{{ - \beta_{1} t_{n} }} } \right) + \frac{{\alpha_{2} }}{{\beta_{2} }}\left( {e^{{\beta_{2} t_{n} }} - 1} \right)} \right], \\ \end{aligned} \\ \end{array} $$
(12)

and \(t_{n}\) is replaced by \(T\) under a time-truncated sampling. The maximum likelihood estimators (MLEs) of the parameters \({\varvec{\theta}} \equiv \left( {\alpha_{1} ,{ }\alpha_{2} ,{ }\beta_{1} ,{ }\beta_{2} } \right)^{T}\) can be found by solving the following likelihood equations:

$$ \begin{aligned} \frac{\partial \ell }{{\partial \alpha_{1} }} & = \mathop \sum \limits_{i = 1}^{n} \frac{{e^{{ - \beta_{1} t_{i} }} }}{{\alpha_{1} e^{{ - \beta_{1} t_{i} }} + \alpha_{2} e^{{\beta_{2} t_{i} }} }} - \frac{1}{{\beta_{1} }}\left( {1 - e^{{ - \beta_{1} t_{n} }} } \right) = 0, \\ \frac{\partial \ell }{{\partial \beta_{1} }} & = \mathop \sum \limits_{i = 1}^{n} \frac{{ - \alpha_{1} t_{i} e^{{ - \beta_{1} t_{i} }} }}{{\alpha_{1} e^{{ - \beta_{1} t_{i} }} + \alpha_{2} e^{{\beta_{2} t_{i} }} }} + \left[ {\frac{{\alpha_{1} }}{{\beta_{1}^{2} }}\left( {1 - e^{{ - \beta_{1} t_{n} }} } \right) - \frac{{\alpha_{1} t_{n} }}{{\beta_{1} }}e^{{ - \beta_{1} t_{n} }} } \right] = 0, \\ \frac{\partial \ell }{{\partial \alpha_{2} }} & = \mathop \sum \limits_{i = 1}^{n} \frac{{e^{{\beta_{2} t_{i} }} }}{{\alpha_{1} e^{{ - \beta_{1} t_{i} }} + \alpha_{2} e^{{\beta_{2} t_{i} }} }} - \frac{1}{{\beta_{2} }}\left( {e^{{\beta_{2} t_{n} }} - 1} \right) = 0, \\ \frac{\partial \ell }{{\partial \beta_{2} }} & = \mathop \sum \limits_{i = 1}^{n} \frac{{\alpha_{2} t_{i} e^{{\beta_{2} t_{i} }} }}{{\alpha_{1} e^{{ - \beta_{1} t_{i} }} + \alpha_{2} e^{{\beta_{2} t_{i} }} }} + \left[ {\frac{{\alpha_{2} }}{{\beta_{2}^{2} }}\left( {e^{{\beta_{2} t_{n} }} - 1} \right) - \frac{{\alpha_{2} t_{n} }}{{\beta_{2} }}e^{{\beta_{2} t_{n} }} } \right] = 0, \\ \end{aligned} $$
(13)

Obviously, there is no closed-form solution to the MLEs in (13), and these equations must be solved numerically. Even though \(\ell(\alpha_1, \alpha_2, \beta_1, \beta_2; {\varvec{t}})\) is an amalgamation of relatively well-behaved (generally concave) functions, a general search method such as Newton–Raphson is slow to work across four dimensions. In this work, we introduce a slightly more efficient numerical method based on a conditional likelihood approach used by Cox and Lewis [5].
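
For readers who prefer a generic alternative to the conditional-likelihood scheme, the sketch below maximizes (12) directly with a numerical optimizer; the data file name and the starting values are hypothetical, and the log-scale reparameterization simply enforces positivity of the parameters.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(log_theta, t, T):
    """Negative S-LLP log-likelihood (12); log_theta = log(a1, b1, a2, b2)."""
    a1, b1, a2, b2 = np.exp(log_theta)
    ll = np.sum(np.log(a1 * np.exp(-b1 * t) + a2 * np.exp(b2 * t)))
    ll -= a1 / b1 * (1.0 - np.exp(-b1 * T)) + a2 / b2 * (np.exp(b2 * T) - 1.0)
    return -ll

t = np.sort(np.loadtxt("artillery_id1.txt"))  # hypothetical file of failure times
T = t[-1]                 # failure truncation; use the study end time if time-truncated
start = np.log([0.1, 1e-3, 1e-4, 5e-3])       # illustrative starting values
fit = minimize(neg_loglik, start, args=(t, T), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-10, "fatol": 1e-10})
a1_hat, b1_hat, a2_hat, b2_hat = np.exp(fit.x)
```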

Once the MLEs of the model parameters have been obtained, the MLEs of other quantities of interest, such as the expected number of failures up to a given time, \(\Lambda \left( t \right)\), as well as the probability distribution of the number of failures occurring in a future time interval \({\text{Pr}}\left\{ {N{ }\left( {T,{ }T{ } + \Delta } \right){ } = { }k} \right\}\), can be given as

$$ \begin{array}{*{20}c} {{\hat{\Lambda }}\left( t \right) = \frac{{\hat{\alpha }_{1} }}{{\hat{\beta }_{1} }}\left( {1 - e^{{ - \hat{\beta }_{1} t}} } \right) + \frac{{\hat{\alpha }_{2} }}{{\hat{\beta }_{2} }}\left( {e^{{\hat{\beta }_{2} t}} - 1} \right),} \\ \end{array} $$
(14)

and

$$ \begin{array}{*{20}c} {\widehat{{{\text{Pr}}}}\left\{ {N\left( {T,T + {\Delta }} \right) = k} \right\} = \frac{{\left[ {{\hat{\Lambda }}\left( {T + {\Delta }} \right) - {\hat{\Lambda }}\left( T \right)} \right]^{k} }}{k!} \cdot e^{{ - \left[ {{\hat{\Lambda }}\left( {T + {\Delta }} \right) - {\hat{\Lambda }}\left( T \right)} \right]}} ,} \\ \end{array} $$

for \(k = 0,1,2, \ldots\) and \({\hat{\Lambda }}\left( T \right) = n\).
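
A short sketch of how (14) and the predictive probability above follow from the fitted parameters; here `theta_hat` stands for the tuple of MLEs \((\hat{\alpha}_1, \hat{\beta}_1, \hat{\alpha}_2, \hat{\beta}_2)\) obtained earlier.

```python
import numpy as np
from scipy.stats import poisson

def sllp_mean(t, a1, b1, a2, b2):
    """Expected number of failures Lambda(t), Eq. (14)."""
    return a1 / b1 * (1.0 - np.exp(-b1 * t)) + a2 / b2 * (np.exp(b2 * t) - 1.0)

def predicted_pmf(k, T, delta, theta_hat):
    """Estimated Pr{N(T, T + delta) = k}: a Poisson pmf with mean
    Lambda_hat(T + delta) - Lambda_hat(T)."""
    mu = sllp_mean(T + delta, *theta_hat) - sllp_mean(T, *theta_hat)
    return poisson.pmf(k, mu)
```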

We can construct confidence intervals for these and other functions based on standard errors derived from the (observed) Fisher information matrix. A large-sample approximation of estimated standard errors of the ML estimators is given by the estimated variance–covariance matrix \({\hat{\Sigma }}_{{\hat{\user2{\theta }}}}\) for \(\hat{\user2{\theta }} \equiv \left( {\hat{\alpha }_{1} ,\hat{\beta }_{1} ,\hat{\alpha }_{2} ,\hat{\beta }_{2} } \right)^{{\text{T}}} ,\) where \({\hat{\Sigma }}_{{\hat{\user2{\theta }}}}\) is computed as the inverse of the estimated Fisher information matrix.

We are primarily interested in constructing confidence intervals for \({\Lambda }\left( t \right)\) instead of the basic parameter set \({\varvec{\theta}}\). We approximate the standard error for \({\Lambda }\left( t \right)\) using properties of \({\hat{\mathbf{\Sigma }}}_{{\hat{\user2{\theta }}}}\) and by using the delta method on (14). In general, for a differentiable real-valued function \(g\left( {\varvec{\theta}} \right)\), the approximate standard error of \(\hat{g} \equiv g\left( {\hat{\user2{\theta }}} \right)\) can be obtained by using the delta method as

$$ \begin{array}{*{20}c} {\widehat{s.e.}\left( {\hat{g}} \right) = \sqrt {\mathop \sum \limits_{i = 1}^{4} \left( {\frac{\partial g}{{\partial \theta_{i} }}|_{{\hat{\user2{\theta }}}} } \right)^{2} \widehat{{{\text{Var}}}}\left( {\hat{\theta }_{i} } \right) + \mathop \sum \limits_{i = 1}^{4} \mathop \sum \limits_{j \ne i}^{4} \left( {\frac{\partial g}{{\partial \theta_{i} }}|_{{\hat{\user2{\theta }}}} } \right)\left( {\frac{\partial g}{{\partial \theta_{j} }}|_{{\hat{\user2{\theta }}}} } \right)\widehat{{{\text{Cov}}}}\left( {\hat{\theta }_{i} ,\hat{\theta }_{j} } \right)} ,} \\ \end{array} $$
(15)

where \(\left( {\theta_{1} ,{ }\theta_{2} ,{ }\theta_{3} ,{ }\theta_{4} } \right){ } \equiv { }\left( {\alpha_{1} ,{ }\beta_{1} ,{ }\alpha_{2} ,{ }\beta_{2} } \right)\).

When the function \(g({\varvec{\theta}})\) is invertible, the approximate standard error (15) is exactly the same as that given by the estimated variance–covariance matrix relative to the log-likelihood function re-parameterized in terms of \(g\). On the other hand, when \(g({\varvec{\theta}})\) is not invertible (as in the case of \(\Lambda(t)\)), the log-likelihood function cannot be re-parameterized directly, and the delta method seems to be the only available approach that does not require resampling methods [12]. The approximate \(100(1 - \gamma)\%\) confidence interval for the function \(g\) is either

$$ \hat{g}{ } \pm {\text{z}}_{\gamma /2} \cdot \widehat{s.e.}\left( {\hat{g}} \right)\quad {\text{or}}\quad \hat{g}\exp \left\{ { \pm {\text{z}}_{\gamma /2} \cdot \frac{{\widehat{s.e.}\left( {\hat{g}} \right)}}{{\hat{g}}}} \right\} $$

using the normal approximation or the lognormal approximation, respectively. Although the normal assumptions (based on asymptotic properties of the MLE) are not perfectly realized for \({\hat{\Lambda }}\left( t \right)\) at small values of \(t\), we do not consider transformations in this case because confidence intervals for \({\Lambda }\left( t \right)\) are of more interest at values of \(t\) not close to zero.
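
Computationally, (15) is the quadratic form \(\sqrt{\nabla g^{T}\, \hat{\Sigma}\, \nabla g}\). The sketch below assumes the estimated covariance matrix `cov` of the MLEs is available and uses the analytic gradient of (14).

```python
import numpy as np

def sllp_mean_grad(t, a1, b1, a2, b2):
    """Gradient of Lambda(t) in (14) w.r.t. (alpha1, beta1, alpha2, beta2)."""
    e1, e2 = np.exp(-b1 * t), np.exp(b2 * t)
    return np.array([
        (1.0 - e1) / b1,                           # d Lambda / d alpha1
        a1 * (t * e1 / b1 - (1.0 - e1) / b1**2),   # d Lambda / d beta1
        (e2 - 1.0) / b2,                           # d Lambda / d alpha2
        a2 * (t * e2 / b2 - (e2 - 1.0) / b2**2),   # d Lambda / d beta2
    ])

def delta_method_se(grad, cov):
    """Eq. (15) written as sqrt(grad' Sigma grad)."""
    return float(np.sqrt(grad @ cov @ grad))

def lognormal_ci(g_hat, se, z=1.645):
    """Approximate two-sided CI for a positive g (z = 1.645 gives 90% coverage)."""
    f = np.exp(z * se / g_hat)
    return g_hat / f, g_hat * f
```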

3.2 Analysis of Artillery Repair Data

The proposed model was applied to field repair data of eight sets of artillery systems. Each artillery system was subject to minimal repair at the time of failure, and all failure data for the eight artillery systems were treated as failure-truncated samples. As shown in Fig. 1, given the number of failures observed during the early and final periods of data collection, a bathtub-shaped failure intensity seems appropriate to describe the failure pattern of the artillery systems.

Fig. 1
Event plot showing failure times for eight sets of ROK artillery repair data across a 1500 h period of observation

In practice, decisions concerning failure patterns have been made using graphical techniques or statistical trend tests [2]. The total time on test (TTT) plot [13] helps reveal failure patterns through its curvature. A bathtub-shaped failure process appears in a TTT plot as an S-shaped function. For example, the first artillery data set (ID-1) consists of 62 failure-times observed until \(t_{62} = 1{,}452\) hours, and its TTT plot in Fig. 2 shows a clear indication of a bathtub-shaped intensity function for the failure data of this system. The TTT plots of the other artillery systems also showed dominant bathtub-shaped failure patterns.

Fig. 2
Total Time on Test (TTT) plot for failure data of ROK artillery ID-1

As a test for non-monotonic trends in recurrent failures, a large positive value of Vaurio’s statistic [14]

$$ V = \frac{{\mathop \sum \nolimits_{i = 1}^{n} \left| {t_{i} - t_{n} /2} \right| - nt_{n} /4}}{{t_{n} \sqrt {n/48} }} $$

indicates the presence of a bathtub behavior, while a large negative value indicates the presence of an inverse bathtub behavior. Applying Vaurio's trend test to the eight sets of artillery repair data, we summarize the test results in Table 1, along with their \(p\)-values. At significance level \(\alpha = 0.05\), the test results provide statistical evidence of a bathtub-shaped failure intensity for the failure data of all eight artillery systems.
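
Vaurio's statistic is direct to compute from the ordered failure times; the p-value below assumes the usual standard normal reference distribution for \(V\) under the null hypothesis of no trend.

```python
import numpy as np
from scipy.stats import norm

def vaurio_V(times):
    """Vaurio's trend statistic: large positive V suggests a bathtub intensity,
    large negative V an inverse-bathtub intensity."""
    t = np.sort(np.asarray(times, dtype=float))
    n, tn = len(t), t[-1]
    return (np.sum(np.abs(t - tn / 2.0)) - n * tn / 4.0) / (tn * np.sqrt(n / 48.0))

def vaurio_pvalue(times):
    return 2.0 * norm.sf(abs(vaurio_V(times)))   # two-sided, normal reference
```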

Table 1 Statistical trend tests for eight sets of ROK artillery repair data

Based on the log-likelihood in (12), MLEs for the S-LLP model parameters were computed using the artillery data; details of the algorithm are described in Appendix A of Mun et al. [15]. Estimates of the S-LLP model, along with their standard errors, are given in Table 2. To obtain the standard errors of \(\hat{\user2{\theta }} \equiv \left( {\hat{\alpha }_{1} ,\hat{\beta }_{1} ,\hat{\alpha }_{2} ,\hat{\beta }_{2} } \right)^{T} ,\) the estimated variance–covariance matrix was computed as the inverse of the estimated Fisher information matrix. For artillery ID-1, for instance, the estimated variance–covariance matrix is

$$ {\hat{\mathbf{\Sigma }}}_{{\hat{\user2{\theta }}}} = \left[ {\begin{array}{*{20}l} {276.0} \hfill & { - 12.40} \hfill & {11.80} \hfill & {26.90} \hfill \\ {} \hfill & {0.330} \hfill & { - 0.170} \hfill & { - 0.331} \hfill \\ {} \hfill & {} \hfill & {0.010} \hfill & {0.015} \hfill \\ {} \hfill & {} \hfill & {} \hfill & {0.023} \hfill \\ \end{array} } \right] \times 10^{ - 6} $$
and the standard errors of \(\hat{\user2{\theta }}\) are the square roots of the diagonal elements of \({\hat{\mathbf{\Sigma }}}_{{\hat{\user2{\theta }}}}\) (only the upper triangle of the symmetric matrix is shown). Approximate 95% confidence intervals for \({\varvec{\theta}}\) can be constructed using the lognormal approximation.

Table 2 ML estimates of the S-LLP parameters and their standard errors for eight sets of artillery repair data (corresponding approximate 95% confidence intervals under the lognormal approximation in parentheses)

Using the MLEs for the S-LLP model parameters, we can obtain the MLE for the expected number of failures, \({\hat{\Lambda }}\left( t \right)\), from (14). Figure 3 depicts \({\hat{\Lambda }}\left( t \right)\) under the S-PLP and the BBIP assumptions, as well as under the S-LLP assumption. The figure shows that the S-LLP provides the best representation for the whole data set of eight artillery systems. Admittedly, the S-LLP model, like the S-PLP and the BBIP models, fails to handle the early failure data. All of the artillery repair data contain a time-lag to first failure (see Fig. 1), and it is not easy for the superposed models to represent a bathtub-shaped failure intensity that explicitly fits this time-lag. More complex and highly parameterized models, for instance ones that add a constant to the intensity functions of the S-PLP, BBIP, and S-LLP, may be an alternative to capture the time-lag, but at the cost of greatly increased model complexity. Under the S-LLP model, 90% (pointwise) confidence intervals for Λ(t) are plotted for the eight individual sets of artillery repair data in Fig. 4.

Fig. 3
Observed cumulative number of failures along with the expected number of failures Λ(t) under the S-PLP, the BBIP, and the S-LLP assumption for eight sets of ROK artillery repair data

Fig. 4
90% pointwise confidence intervals for Λ(t) under the S-LLP model for eight sets of ROK artillery repair data (the vertical axis is log-scaled for better representation of the confidence intervals)

4 Mixed-Effects NHPP Model

Occasionally, multiple repairable systems may present system-to-system variability due to changes in operating environments and working intensities of individual systems. In this case, it may be more reasonable to assume heterogeneity among the systems. Lawless [16] refers to such effects as “unobserved heterogeneity”. To take the heterogeneity among systems into account, Bayesian methods (both empirical and hierarchical) have been applied to multiple repairable systems because of their flexibility in accounting for parameter uncertainty and in allowing the incorporation of prior knowledge into the process under study (see, e.g., Hamada et al. [17], Reese et al. [18], Arab et al. [19]). System heterogeneity may be described via the prior distributions of the model parameters; however, there may also be homogeneity between individual systems. This homogeneity can be explicitly modeled by assuming common parameters in the Bayesian model. If prior distributions are unnecessarily assigned to the common parameters, the prior information can make the parameter estimation procedure more complicated. The computational complexity and the difficulty of choosing proper prior distributions have been obstacles for reliability engineers who wish to apply Bayesian methods to such practical reliability problems.

As another approach, the unobserved heterogeneity can be explicitly incorporated into the model under study through the formulation of a mixed-effects model. Mixed-effects models, also called random-effects models, are widely used in medical studies [20, 21] because they can model both between-individual and within-individual variation found in the data. For analyzing the reliability of multiple repairable systems, the underlying model for each individual system may reasonably be assumed to be an NHPP. Based on NHPPs with non-monotonic failure intensities, we illustrate the inference procedure for the parameters of the mixed-effects NHPP model. The mixed-effects NHPP model allows explicit modeling and analysis of between-individual and within-individual variation of recurrent failures, along with a common baseline for all individuals. When the data are non-normal and involve both fixed and random effects, a generalized mixed-effects model is a useful tool for such purposes. (Generalized) mixed-effects models are easily implemented in commercial software such as the S-PLUS® NLME library and the SAS® NLMIXED procedure.

4.1 Mixed-Effects NHPP Model Without Covariates

Suppose that there are \(m\) independent systems; system \(i\) is observed over the time interval \((0, T_i]\), during which \(n_i\) failures occur at times \(t_{i1} < \cdots < t_{in_i}\). For the parameters \({\varvec{\theta}}\) of the NHPP, the likelihood function is

$$ \begin{array}{*{20}c} {{\mathcal{L}}\left( {\varvec{\theta}} \right) = \mathop \prod \limits_{i = 1}^{m} \left\{ {\mathop \prod \limits_{j = 1}^{{n_{i} }} \lambda \left( {t_{ij} ;{\varvec{\theta}}} \right)} \right\}{\text{exp}}\left\{ { - {\Lambda }\left( {T_{i} ;{\varvec{\theta}}} \right)} \right\}} \\ \end{array} $$
(16)

with failure intensity \(\lambda(\cdot)\) and its cumulative mean function \(\Lambda(\cdot)\). By incorporating the inter-individual variation into the random effects \({\varvec{b}}_i\), along with fixed effects \({\varvec{\zeta}}\) (common to all systems), the conditional mean for the failure process of the \(i\)th system, \({\varvec{t}}_i = (t_{i1}, \ldots, t_{in_i})^T\), is \(E[{\varvec{t}}_i \,|\, {\varvec{b}}_i] \equiv {\varvec{\mu}}_i = \Lambda({\varvec{t}}_i \,|\, {\varvec{b}}_i)\). The contribution to the likelihood function (16) of individual system \(i\), having observed \(n_i\) failures at times \(t_{ij}\), is

$$ {\mathcal{L}}_{i} \left( {\varvec{\zeta}} \right) = \mathop \smallint \limits_{{{\varvec{b}}_{i} }}^{{}} \left\{ {\mathop \prod \limits_{j = 1}^{{n_{i} }} \lambda (t_{ij} |{\varvec{b}}_{i} )} \right\}\exp \left\{ { - {\Lambda }\left( {T_{i} |{\varvec{b}}_{i} } \right)} \right\}p\left( {{\varvec{b}}_{i} } \right)d{\varvec{b}}_{i} $$

The likelihood function with parameters \({\varvec{\zeta}}\) and \({\varvec{b}}_{i}\) from the sample of m systems has the form

$$ \begin{array}{*{20}c} {{\mathcal{L}}\left( {\varvec{\zeta}} \right) = \mathop \prod \limits_{i = 1}^{m} \mathop \smallint \limits_{{{\varvec{b}}_{i} }}^{{}} \left\{ {\mathop \prod \limits_{j = 1}^{{n_{i} }} \lambda (t_{ij} |{\varvec{b}}_{i} )} \right\}\exp \left\{ { - {\Lambda }\left( {T_{i} |{\varvec{b}}_{i} } \right)} \right\}p\left( {{\varvec{b}}_{i} } \right)d{\varvec{b}}_{i} ,} \\ \end{array} $$
(17)

and maximizing the likelihood function (17) yields the maximum likelihood estimate (MLE) of \({\varvec{\zeta}}\), denoted by \(\hat{\user2{\zeta }}\).
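
To indicate how the integral in (17) is typically handled, here is a minimal sketch for the special case of a single scalar random effect \(b_i \sim N(0, \sigma_b^2)\), approximating one system's marginal log-likelihood contribution by Gauss–Hermite quadrature; the conditional intensity `lam(t, b)` and cumulative mean `Lam(T, b)` are placeholders for whatever parametric NHPP is assumed.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # nodes/weights for weight exp(-x**2/2)

def marginal_loglik_i(t_i, T_i, lam, Lam, sigma_b, n_nodes=20):
    """Gauss-Hermite approximation to the log of system i's contribution to (17)
    for a scalar random effect b ~ N(0, sigma_b**2)."""
    nodes, weights = hermegauss(n_nodes)
    # conditional log-likelihood at each transformed node b = sigma_b * x
    ll = np.array([np.sum(np.log(lam(t_i, sigma_b * x))) - Lam(T_i, sigma_b * x)
                   for x in nodes])
    c = ll.max()                                   # log-sum-exp shift for stability
    return c + np.log(np.sum(weights * np.exp(ll - c)) / np.sqrt(2.0 * np.pi))
```

The total log-likelihood is the sum of these contributions over the \(m\) systems, which can then be passed to a numerical optimizer.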

4.2 Mixed-Effects NHPP Model with Covariates

Suppose that individual \(i\) has a covariate vector \({\varvec{x}}_{{\text{i}}}\) and a failure intensity \(\lambda_{{{\varvec{x}}_{i} }} \left( {t_{ij} ;{ }{\varvec{\theta}},{ }{\varvec{\xi}}_{i} } \right)\), then the contribution to the likelihood function for individual \(i\) with fixed effects \({\varvec{\beta}}_{{\varvec{x}}}\) and random-effects \({\varvec{b}}_{i}\) for \({\varvec{\xi}}_{i} \equiv \left( {{\varvec{\beta}}_{x} ,{ }{\varvec{b}}_{i} } \right)^{T}\) is given by

$$ \begin{array}{*{20}c} {{\mathcal{L}}_{i} \left( {{\varvec{\theta}},{\varvec{\beta}}_{x} , {\varvec{b}}_{i} } \right) = \mathop \smallint \limits_{{{\varvec{b}}_{i} }}^{{}} \left\{ {\mathop \prod \limits_{j = 1}^{{n_{i} }} \lambda_{{{\varvec{x}}_{i} }} \left( {t_{ij} ;{\varvec{\theta}},\left( {{\varvec{\beta}}_{x} , {\varvec{b}}_{i} } \right)} \right)} \right\}\exp \left\{ { - {\Lambda }_{{{\varvec{x}}_{i} }} \left( {T_{i} ;{\varvec{\theta}},\left( {{\varvec{\beta}}_{x} , {\varvec{b}}_{i} } \right)} \right)} \right\}p\left( {{\varvec{b}}_{i} } \right)d{\varvec{b}}_{i} .} \\ \end{array} $$
(18)

The NHPP is flexible in that the covariate information, if available, can be explicitly modeled via the failure intensity

$$ \begin{array}{*{20}c} {\lambda_{{{\varvec{x}}_{i} }} \left( {t_{ij} ;{\varvec{\theta}},{\varvec{\xi}}_{i} } \right) = \lambda_{0} \left( {t_{ij} ;{\varvec{\theta}}} \right)h\left( {{\varvec{x}}_{{\varvec{i}}} ;{\varvec{\xi}}_{i} } \right),} \\ \end{array} $$
(19)

where \({\varvec{\xi}}_{i}\) is the coefficient vector for covariate \({\varvec{x}}_{i}\), and \(h\left( \cdot \right)\) is a positive-valued monotonic differentiable function, e.g., exp(·) or log(·). The NHPP model with the failure intensity (19) is called a “proportional intensity Poisson process model” and \(\lambda_{0} \left( {t_{ij} ;{ }{\varvec{\theta}}} \right)\) serves as the baseline intensity function. The baseline intensity function is assumed to be constant across individuals; that is, \({\varvec{\theta}}\) has fixed effects. Inter-individual variability is instead incorporated in the function \(h\left( {{\varvec{x}}_{i} ;{ }{\varvec{\xi}}_{i} } \right)\). The model with \(h\left( {{\varvec{x}}_{i} ;{ }{\varvec{\xi}}_{i} } \right) \equiv \exp \left( {{\varvec{x}}_{i}^{T} {\varvec{\xi}}_{i} } \right)\) has been commonly employed because it is convenient and flexible (e.g., Andersen and Gill [22]). The mean intensity function corresponding to the failure intensity (19) is \({\Lambda }_{{{\varvec{x}}_{{\text{i}}} }} \left( {t;{\varvec{\theta}},{ }{\varvec{\xi}}_{i} } \right) = {\Lambda }_{0} \left( {t;{\varvec{\theta}}} \right){ }h\left( {{\varvec{x}}_{i} ;{ }{\varvec{\xi}}_{i} } \right)\), where \({\Lambda }_{0} \left( {t;{\varvec{\theta}}} \right) = \mathop \smallint \limits_{0}^{t} \lambda_{0} \left( {u;{ }{\varvec{\theta}}} \right){\text{ d}}u\). The likelihood function (18) can be rewritten by the factorization as (Cox and Lewis [5], Sect. 5.3)

$$ \begin{array}{*{20}c} \begin{aligned} & {\mathcal{L}}_{i} \left( {{\varvec{\theta}},{\varvec{\beta}}_{{\varvec{x}}} , {\varvec{b}}_{i} } \right) = \mathop \prod \limits_{j = 1}^{{n_{i} }} \left\{ {\frac{{\lambda_{0} \left( {t_{ij} ;{ }{\varvec{\theta}}} \right){ }}}{{{\Lambda }_{0} \left( {t_{ij} ;{\varvec{\theta}}} \right)}}} \right\} \times \int\limits_{{{\varvec{b}}_{i} }}^{{}} {\left\{ {{\Lambda }_{0} \left( {T_{i} ;{\varvec{\theta}}} \right){ }h\left( {{\varvec{x}}_{i}^{{\text{T}}} \left( {{\varvec{\beta}}_{{\varvec{x}}} + {\varvec{b}}_{{\text{i}}} } \right)} \right)} \right\}^{{n_{i} }} } \\ & \quad \exp \left\{ { - {\Lambda }_{0} \left( {T_{i} ;{\varvec{\theta}}} \right)h\left( {{\varvec{x}}_{i}^{{\text{T}}} \left( {{\varvec{\beta}}_{{\varvec{x}}} + {\varvec{b}}_{{\text{i}}} } \right)} \right)} \right\}p\left( {{\varvec{b}}_{i} } \right)d{\varvec{b}}_{i} . \\ \end{aligned} \\ \end{array} $$
(20)

The likelihood function for a sample of m independent individuals is the product of terms \({\mathcal{L}}_{1} , \ldots ,{\mathcal{L}}_{m}\) giving

$$ \begin{array}{*{20}c} {{\mathcal{L}}\left( {{\varvec{\theta}},{\varvec{\beta}}_{{\varvec{x}}} } \right) = {\mathcal{L}}_{1} \left( {\varvec{\theta}} \right){\mathcal{L}}_{2} \left( {{\varvec{\theta}},{\varvec{\beta}}_{{\varvec{x}}} } \right),} \\ \end{array} $$
(21)

where \({\mathcal{L}}_{1} \left( {\varvec{\theta}} \right)\) is the product of the first terms and \({\mathcal{L}}_{2} \left( {{\varvec{\theta}},{\varvec{\beta}}_{{\varvec{x}}} } \right)\) is the product of the second terms on the right-hand side of (20). Lawless [16] considered the following intensity function

$$ \begin{array}{*{20}c} {\lambda_{{\varvec{x}}} \left( {t_{ij} ;\theta ,{\varvec{\beta}}_{{\varvec{x}}} ,{\varvec{b}}_{i} } \right) = \lambda_{0} \left( {t_{ij} ;{\varvec{\theta}}} \right)b_{i} exp\left( {{\varvec{x}}_{i}^{T} {\varvec{\beta}}_{{\varvec{x}}} } \right),} \\ \end{array} $$
(22)

where the \(b_i\) are assumed to be iid gamma-distributed random variables with mean 1 and variance \(\phi\). In this case the second term in (20) becomes

$$ \frac{{{\Gamma }\left( {n_{i} + \phi^{ - 1} } \right)}}{{{\Gamma }\left( {\phi^{ - 1} } \right)}} \cdot \frac{{\left[ {\phi {\Lambda }_{0} \left( {T_{i} ;{\varvec{\theta}}} \right)\exp \left( {{\varvec{x}}_{i}^{T} {\varvec{\beta}}_{{\varvec{x}}} } \right)} \right]^{{n_{i} }} }}{{\left[ {1 + \phi {\Lambda }_{0} \left( {T_{i} ;{\varvec{\theta}}} \right)\exp \left( {{\varvec{x}}_{i}^{T} {\varvec{\beta}}_{{\varvec{x}}} } \right)} \right]^{{n_{i} + \phi^{ - 1} }} }}, $$

which is a negative binomial regression model. The negative binomial model is a reasonable model to accommodate extra-Poisson variability. For instance, if the baseline intensity follows a power law process with \({\varvec{\theta}} \equiv (\alpha, \beta)\), then

$$ \begin{array}{*{20}c} {{\mathcal{L}}_{1} \left( {\varvec{\theta}} \right) = \mathop \prod \limits_{i = 1}^{m} \mathop \prod \limits_{j = 1}^{{n_{i} }} \left( {\frac{\beta }{{t_{ij} }}} \right)\quad {\text{and}}\quad {\mathcal{L}}_{2} \left( {{\varvec{\theta}},{\varvec{\beta}}_{{\varvec{x}}} } \right) = \mathop \prod \limits_{i = 1}^{m} \frac{{\Gamma \left( {n_{i} + \phi^{ - 1} } \right)}}{{\Gamma \left( {\phi^{ - 1} } \right)}} \cdot \frac{{\left[ {\phi \left( {T_{i} /\alpha } \right)^{\beta } \exp \left( {{\varvec{x}}_{i}^{T} {\varvec{\beta}}_{{\varvec{x}}} } \right)} \right]^{{n_{i} }} }}{{\left[ {1 + \phi \left( {T_{i} /\alpha } \right)^{\beta } \exp \left( {{\varvec{x}}_{i}^{T} {\varvec{\beta}}_{{\varvec{x}}} } \right)} \right]^{{n_{i} + \phi^{ - 1} }} }}.} \\ \end{array} $$
(23)

4.3 Estimation of Parameters in Mixed-Effects NHPP Model

In general, the integrals in the likelihood functions (17) and (21) involve high-dimensional integration and do not produce closed-form expressions, so numerical integration techniques are required to evaluate the likelihood. Bae and Kvam [23] introduced various approximation methods to numerically optimize the likelihood function from repeated-measures degradation data of vacuum fluorescent displays when the distribution of \({\varvec{b}}_i\) is multivariate normal. The SAS® NLMIXED procedure provides several approximation methods for the mixed-effects model, including adaptive Gaussian quadrature [24] and the first-order method [25].

In the NHPP model without covariates, ML estimates of \({\varvec{\zeta}}\) are obtained by maximizing the likelihood function (17) numerically or, if necessary, using approximation methods. A simple approach to estimation in the NHPP model with covariates is to estimate \({\varvec{\theta}}\) by maximizing \({\mathcal{L}}_{1}({\varvec{\theta}})\), and then to maximize (21) with respect to \({\varvec{\beta}}_{{\varvec{x}}}\) with \({\varvec{\theta}}\) fixed at its estimate [16]. With the PLP baseline intensity, for example, we can first estimate \(\beta\) in the likelihood function (23) by maximizing \({\mathcal{L}}_1\) with respect to \(\beta\), then plug in \(\hat{\beta }\) and maximize \({\mathcal{L}}_2\) with respect to \(\phi ,{ }\alpha ,\) and \({\varvec{\beta}}_{{\varvec{x}}}\). Maximization of \({\mathcal{L}}_2\) for fixed \(\beta\) is easy using Newton's method or the scoring algorithm [26].

The random effects in the mixed-effects NHPP model are assumed to have normal distributions with zero means. Their specific values for a given individual are realizations from these normal distributions. The random effects can be efficiently estimated using empirical Bayes methods [27]. For the failure process \({\varvec{t}}_i\) of the \(i\)th system, the empirical Bayes estimate of \({\varvec{b}}_i\) (denoted by \(\hat{\user2{b}}_i\)) is given by the posterior mean of \({\varvec{b}}_i\) as

$$ \hat{\user2{b}}_{i} = E\left( {{\varvec{b}}_{i} {|}{\varvec{t}}_{i} } \right) = \frac{{\mathop \smallint \nolimits_{{{\varvec{b}}_{i} }}^{{}} {\varvec{b}}_{i} p\left( {{\varvec{t}}_{i} {|}{\varvec{b}}_{i} } \right)p\left( {{\varvec{b}}_{i} } \right)d{\varvec{b}}_{i} }}{{\mathop \smallint \nolimits_{{{\varvec{b}}_{i} }}^{{}} p\left( {{\varvec{t}}_{i} {|}{\varvec{b}}_{i} } \right)p\left( {{\varvec{b}}_{i} } \right)d{\varvec{b}}_{i} }} $$

for the conditional probability function of \({\varvec{t}}_{i}\) given \({\varvec{b}}_{i}\), \(p\left( {{\varvec{t}}_{i} {|}{\varvec{b}}_{i} } \right)\). If parametric assumptions on the distribution of random-effects are made, e.g., normal, then empirical Bayes methods are equivalent to best linear unbiased prediction (BLUP) methods [28].
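
For a scalar normal random effect, the posterior mean above reduces to a ratio of two one-dimensional integrals, which the same quadrature rule handles; `cond_loglik(b)` is a placeholder for \(\log p({\varvec{t}}_i \,|\, b)\).

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def empirical_bayes_mean(cond_loglik, sigma_b, n_nodes=40):
    """E(b | t_i) for scalar b ~ N(0, sigma_b**2): the ratio of Gauss-Hermite
    approximations to the numerator and denominator integrals."""
    nodes, weights = hermegauss(n_nodes)
    b = sigma_b * nodes
    ll = np.array([cond_loglik(bi) for bi in b])
    w = weights * np.exp(ll - ll.max())   # common factor cancels in the ratio
    return float(np.sum(b * w) / np.sum(w))
```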

Confidence intervals can be constructed for the parameters of the mixed-effects model, or functions of them, based on standard errors derived from the (observed) Fisher information matrix. In the generalized mixed-effects NHPP model without covariates, a large-sample approximation of the standard errors of the ML estimators is given through the estimated variance–covariance matrix \({\hat{\Xi }}_{{{\hat{\mathbf{\zeta }}}}}\), computed as the inverse of the observed Fisher information matrix. That is, \({\hat{\Xi }}_{{{\hat{\mathbf{\zeta }}}}} \equiv {\mathcal{I}}({\hat{{\varvec{\upzeta}}}})^{-1}\) for \({\mathcal{I}}({\hat{{\varvec{\upzeta}}}}) = -\partial^2 l/\partial {{\varvec{\upzeta}}}^2\) evaluated at \({{\varvec{\upzeta}}} = {\hat{{\varvec{\upzeta}}}}\), where \(l = \log {\mathcal{L}}({\varvec{\zeta}})\). In the NHPP model with covariates, for example, the asymptotic variance–covariance matrix of \(({\varvec{\theta}}, {\varvec{\beta}}_{{\varvec{x}}})\) is obtained as \({\mathcal{I}}(\hat{\user2{\theta }}, \hat{\user2{\beta }}_{{\varvec{x}}})^{-1}\), where

$$ {\mathcal{I}}\left( {\hat{\user2{\theta }},\hat{\user2{\beta }}_{{\varvec{x}}} } \right)^{ - 1} \equiv \left[ {\begin{array}{*{20}c} { - \frac{{\partial^{2} l}}{{\partial {\varvec{\theta}}^{2} }}} & { - \frac{{\partial^{2} l}}{{\partial {\varvec{\theta}}\partial {\varvec{\beta}}_{{\varvec{x}}} }}} \\ {} & { - \frac{{\partial^{2} l}}{{\partial {\varvec{\beta}}_{{\varvec{x}}}^{2} }}} \\ \end{array} } \right]^{ - 1} = \left[ {\begin{array}{*{20}c} { - \left( {\frac{{\partial^{2} l_{1} }}{{\partial {\varvec{\theta}}^{2} }} + \frac{{\partial^{2} l_{2} }}{{\partial {\varvec{\theta}}^{2} }}} \right)} & { - \frac{{\partial^{2} l_{2} }}{{\partial {\varvec{\theta}}\partial {\varvec{\beta}}_{{\varvec{x}}} }}} \\ {} & { - \frac{{\partial^{2} l_{2} }}{{\partial {\varvec{\beta}}_{{\varvec{x}}}^{2} }}} \\ \end{array} } \right]^{ - 1} $$

evaluated at \({\varvec{\theta}} = \hat{\user2{\theta }}\) and \({\varvec{\beta}}_{{\varvec{x}}} = \hat{\user2{\beta }}_{{\varvec{x}}}\). Here, \(l_{1} = {\text{log }}{\mathcal{L}}_{1} \left( {\varvec{\theta}} \right)\) and \(l_{2} = {\text{log}}{\mathcal{L}}_{2} \left( {{\varvec{\theta}},{ }{\varvec{\beta}}_{{\varvec{x}}} } \right).\) Then, similar to the case of the NHPP model without covariates, approximate standard errors for (or functions of) \(\hat{\user2{\theta }}\) and \(\hat{\user2{\beta }}_{{\varvec{x}}}\) are computed using the delta method, and their Wald-type confidence intervals are also computed. For the random-effects, the standard errors of \(\hat{\user2{b}}_{i}\) are computed using the delta method and confidence intervals of the random-effects may be constructed using the Wald-type statistics.

After fitting a mixed-effects NHPP model to failure-time data from multiple repairable systems, we need to assess the significance of the terms in the model. The significance test can be done through a likelihood ratio statistic. Denote by \({\mathcal{L}}_F\) the likelihood for the full model and by \({\mathcal{L}}_R\) the likelihood for the reduced model. Then, under the null hypothesis that the reduced model is adequate, the likelihood ratio test (LRT) statistic

$$ 2\log \left( {{\mathcal{L}}_{F} /{\mathcal{L}}_{R} } \right) = 2\left( {\log {\mathcal{L}}_{F} - \log {\mathcal{L}}_{R} } \right) $$

will approximately follow a \(\chi^{2}\) distribution with \(\left( {\psi_{F} - \psi_{R} } \right)\) degrees of freedom, where \(\psi_{F}\) and \(\psi_{R}\) are the number of parameters to be estimated in the full and reduced model, respectively.

Even though the LRT can assess the significance of particular terms, model selection via such pairwise comparisons has been criticized owing to the overuse of hypothesis testing. By contrast, an information-based model selection procedure allows comparison of multiple candidate models. Two widely used information criteria for assessing model fit are Akaike's information criterion (AIC) [29] and the Bayesian information criterion (BIC) [30]. For the log-likelihood \(l\) of a model, the AIC and BIC are, respectively,

$$ {\text{AIC}} = - 2l + 2p^{*} ,\quad {\text{and}}\quad {\text{BIC}} = - 2l + p^{*} {\text{log}}N $$

where \(p^{*}\) denotes the total number of parameters in the model, and \(N\) denotes the total number of observations in the data set; that is, \(N = \sum_{i=1}^{m} n_i\) for the mixed-effects NHPP model. If we use the AIC to compare several models for the same data, we prefer the model with the lowest AIC value; similarly, when using the BIC, we prefer the model with the lowest BIC value.
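
The LRT, AIC, and BIC computations are mechanical once the maximized log-likelihoods are in hand; a small sketch:

```python
import numpy as np
from scipy.stats import chi2

def lrt_pvalue(ll_full, ll_reduced, df):
    """Likelihood ratio test of a reduced (nested) model against the full model;
    df = difference in the number of estimated parameters."""
    return chi2.sf(2.0 * (ll_full - ll_reduced), df)

def aic(ll, p):
    return -2.0 * ll + 2.0 * p          # p: number of model parameters

def bic(ll, p, n):
    return -2.0 * ll + p * np.log(n)    # n: total observations, the sum of the n_i
```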

Residuals can be used to check the assumed model. Under the NHPP model, the quantities \(\Lambda(t_{ij}) - \Lambda(t_{i,j-1})\) are independent standard exponential random variables for \(j = 1, \ldots, n_i\). Therefore, the residuals \(e_{ij} = \hat{\Lambda}(t_{ij}) - \hat{\Lambda}(t_{i,j-1})\) should look like standard exponential random variables if the assumed NHPP model is correct. Deviation from the model assumptions can be checked by plotting \((e_{ij}, e_{i,j-1})\) to detect serial correlation with respect to \(j\) in the \(e_{ij}\)'s. See Lawless [16] for more details on the properties of residuals and formal model assessment using the residuals.
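
A sketch of this residual check, assuming the fitted cumulative mean function is available as a callable `Lam_hat`:

```python
import numpy as np

def nhpp_residuals(t_i, Lam_hat):
    """e_ij = Lam_hat(t_ij) - Lam_hat(t_{i,j-1}); approximately iid standard
    exponential when the assumed NHPP model is correct."""
    L = Lam_hat(np.sort(np.asarray(t_i, dtype=float)))
    return np.diff(np.concatenate(([0.0], L)))

def lag1_correlation(e):
    """Serial correlation of successive residuals; values near 0 support the model."""
    return float(np.corrcoef(e[1:], e[:-1])[0, 1])
```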

4.4 Application of Mixed-Effects NHPP Model to Artillery Repair Data

Mun et al. [15] analyzed field-repair data of eight sets of artillery systems whose failure intensities appear bathtub-shaped. Failure frequency also tended to vary greatly across the systems. Mun et al. [15] proposed the S-LLP model (7), instead of the S-PLP model proposed by Pulcini [9], to describe the artillery repair data with bathtub-shaped failure intensity. To incorporate individual variability into the superposed NHPP models, we considered both the mixed-effects S-PLP model and the mixed-effects S-LLP model. For the mixed-effects S-PLP model, the general model for comparison is the mean cumulative failure function of the S-PLP with four random effects

$$ {\Lambda }_{ij} \left( t \right) = \left( {\frac{{t_{ij} }}{{\zeta_{1} + b_{i1} }}} \right)^{{\vartheta_{1} + b_{i2} }} + \left( {\frac{{t_{ij} }}{{\zeta_{2} + b_{i3} }}} \right)^{{\vartheta_{2} + b_{i4} }} , $$

and similarly, the general model for the mixed-effects S-LLP is

$$ {\Lambda }_{ij} \left( t \right) = \left( {\frac{{\gamma_{1} + b_{i1} }}{{\kappa_{1} + b_{i2} }}} \right)\left( {1 - e^{{ - \left( {\kappa_{1} + b_{i2} } \right)t_{ij} }} } \right) + \left( {\frac{{\gamma_{2} + b_{i3} }}{{\kappa_{2} + b_{i4} }}} \right)\left( {e^{{\left( {\kappa_{2} + b_{i4} } \right)t_{ij} }} - 1} \right), $$

where the random effects \((b_{i1}, b_{i2}, b_{i3}, b_{i4})\) of the two models have general covariance structures, for \(i = 1, \ldots, m\), \(j = 1, \ldots, n_i\). After executing the LRT procedure and computing the AIC and BIC, the final parameter estimates of the mixed-effects S-PLP model are \(\hat{\zeta}_1 = 4.1720\), \(\hat{\vartheta}_1 = 0.6544\), \(\hat{\zeta}_2 = 1095.51\), \(\hat{\vartheta}_2 = 12.5636\), and \((b_{i1}, b_{i2}, b_{i3}, b_{i4})^T \sim {\mathcal{N}}\left((0, 0, 0, 0)^T,\ {\text{diag}}(4.8261,\ 0.0028,\ 1.6880 \times 10^{4},\ 14.0611)\right)\), where \({\text{diag}}(\cdot)\) denotes a diagonal matrix. The final parameter estimates of the mixed-effects S-LLP model are \(\hat{\gamma}_1 = 0.0843\), \(\hat{\kappa}_1 = 0.0020\), \(\hat{\gamma}_2 = 5.4120 \times 10^{-5}\), \(\hat{\kappa}_2 = 0.0057\), with \((b_{i1}, b_{i2})^T \sim {\mathcal{N}}\left((0, 0)^T,\ {\text{diag}}(4.8018 \times 10^{-4},\ 2.0789 \times 10^{-7})\right)\) and \(b_{i4} \sim {\mathcal{N}}(0,\ 2.0737 \times 10^{-7})\).

We compared their modeling performance with the individually fitted S-PLP and S-LLP models, respectively, in terms of mean squared error (MSE). Before the comparison, we performed diagnostics for the fitted models based on the residuals derived from each of the superposed NHPP models. The histograms of the residuals from the fitted models (Fig. 5) justify the assumptions for the four superposed NHPP models. Each superposed NHPP model incorporating both fixed effects and random effects has a smaller MSE than the corresponding individually fitted superposed NHPP model (see Table 3). We chose the mixed-effects S-LLP model, which has the smallest average MSE for the artillery systems data, for further analysis. Table 4 compares the parameter estimates of the individually fitted S-LLP model with those of the mixed-effects S-LLP model, along with their 95% pointwise confidence intervals using the lognormal approximation, for the eight artillery systems. The parameter estimates of the mixed-effects S-LLP model are consistently smaller than those of the individually fitted S-LLP model, and their confidence intervals are consistently shorter. We also observed that the mixed-effects S-PLP model has consistently shorter confidence intervals than the individually fitted S-PLP model. The estimated cumulative number of failures and its 95% (pointwise) confidence intervals are plotted for the eight individual sets of artillery systems data in Fig. 6.

Fig. 5
Histograms of the residuals from each of the superposed NHPP models for eight artillery systems

Table 3 Mean squared errors between observed and estimated number of failures from each of superposed NHPP models for eight artillery systems
Table 4 Parameter estimates of both individually fitted S-LLP model and mixed-effects S-LLP model, along with their approximate 95% confidence intervals under the lognormal approximation in parentheses
Fig. 6
\({\hat{\Lambda }}\left( t \right)\) and 95% pointwise confidence intervals for \({\hat{\Lambda }}\left( t \right)\) under the mixed-effects S-LLP model for eight sets of artillery systems (the vertical axis is log-scaled for better representation of the confidence intervals)

5 Conclusions

Some complex systems, such as the repairable artillery systems considered here, show a bathtub-shaped failure intensity characterized by a number of different failure modes. Monotonic failure intensity models such as the PLP and the LLP are not appropriate for modeling a bathtub-shaped failure pattern. As an alternative, a superposed log-linear process (S-LLP), a superposition of nonhomogeneous Poisson processes, was developed to model this kind of non-monotonic failure intensity. The derived S-LLP model is shown to fit the repair data much better than previous models derived for bathtub-shaped failure intensities. Although the estimation problem is computationally cumbersome, the MLEs are straightforward and can be used to construct approximate confidence bounds for the cumulative failure intensity.

For multiple repairable systems presenting system-to-system variability owing to the operating environments or working intensities of individual systems, we reviewed the application of mixed-effects models to recurrent failure data from multiple repairable systems, based on the superposed Poisson process, to model bathtub-shaped failure intensities. The mixed-effects models explicitly capture between-system variation through random effects, along with a common baseline for all the systems through fixed effects, for both normal and non-normal data. Details on the estimation of the parameters of the mixed-effects superposed Poisson process models and the construction of their confidence intervals were examined. An application example provides strong evidence for the value of the mixed-effects superposed Poisson process models for reliability analysis.