
In this chapter, we extend and generalize the approaches and results of the previous chapter to various reliability-related settings of a more complex nature. We relax some assumptions of the traditional models, except the one that defines the underlying shock process as the nonhomogeneous Poisson process (NHPP). Only in the last section do we suggest an alternative to the Poisson process, to be called the geometric point process. It is remarkable that, although the members of the class of geometric processes do not possess the property of independent increments, some shock models can be effectively described without specifying the corresponding dependence structure. Most of the contents of this chapter are based on our recent work [5–11] and cover various settings that, we believe, are meaningful both from the theoretical and the practical points of view. The chapter is rather technical in nature; however, the general descriptions of the results are reasonably simple and are illustrated by meaningful examples. As the assumption of the NHPP of shocks is adopted, many of the proofs follow the same pattern, using the time transformation of the NHPP to the HPP (see the derivation of Eq. (2.31)). This technique will be used often in this chapter. Sometimes the corresponding derivations will be reasonably abridged, whereas other proofs will be presented at full length.

Recall that in extreme shock models, only the impact of the current, possibly fatal shock is usually taken into account, whereas in cumulative shock models, the impacts of the preceding shocks are accumulated as well. In this chapter, we combine extreme shock models with specific cumulative shock models and derive the probabilities of interest, e.g., the probability that the process will not be terminated during a ‘mission time’. We also consider some meaningful interpretations and examples. We abandon the assumption that the probability of termination does not depend on the history of the process, which makes the modeling more complex on the one hand, but more adequate on the other.

4.1 The Terminating Shock Process with Independent Wear Increments

4.1.1 General Setting

Consider a system subject to a NHPP of shocks with rate \( \nu (t) \). Let it be ‘absolutely reliable’ in the absence of shocks. As in Chap. 3, assume that each shock (regardless of its number) results in the system’s failure (and, therefore, in the termination of the corresponding Poisson shock process) with probability \( p(t) \) and is harmless to the system with probability \( q(t) = 1 - p(t) \). Denote the corresponding time to failure of a system by \( T_{S} \). Then Eq. (3.18) can be written now as

$$ P(T_{S} > t) \equiv \bar{F}_{S} (t) = \exp \left( { - \int\limits_{0}^{t} {p(u)\nu (u)\,{\text{d}}u} } \right), $$
(4.1)

whereas the corresponding failure rate is

$$ \lambda_{S} (t) = p(t)\nu (t). $$

The formal proof of (4.1) can be found in Beichelt and Fisher [3] and Block et al. [4]. A ‘non-technical proof’, based on the notion of the conditional intensity function (CIF) (see [15]) is given e.g., in Nachlas [25] and Finkelstein [17]. Thus, (4.1) describes an extreme shock model, as only the impact of the current, possibly fatal shock is taken into account. For convenience, we shall often call the described model the \( p(t) \Leftrightarrow q(t) \) model.
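Eq. (4.1) is easy to check numerically. The following sketch (with illustrative choices \( \nu (t) = 2t \) and \( p(t) = 0.3 \), not taken from the text) simulates the NHPP of shocks by thinning and compares the estimated survival probability with the closed form:

```python
import math
import random

random.seed(42)

def extreme_shock_survival(t_miss, nu, nu_max, p, n_runs=200_000):
    """Estimate P(T_S > t_miss) for the extreme shock model: shocks follow
    an NHPP with rate nu(t), generated by thinning a rate-nu_max HPP;
    a shock at time u is fatal with probability p(u)."""
    survived = 0
    for _ in range(n_runs):
        t, alive = 0.0, True
        while alive:
            t += random.expovariate(nu_max)       # candidate arrival
            if t > t_miss:
                break
            if random.random() < nu(t) / nu_max:  # accepted: a real shock
                if random.random() < p(t):        # fatal with prob p(t)
                    alive = False
        survived += alive
    return survived / n_runs

# illustrative choices: nu(t) = 2t on [0, 1] (so nu_max = 2), p(t) = 0.3
est = extreme_shock_survival(1.0, lambda u: 2.0 * u, 2.0, lambda u: 0.3)
exact = math.exp(-0.3)  # exp(-int_0^1 0.3 * 2u du) = exp(-0.3)
print(round(est, 3), round(exact, 3))
```

With 200,000 runs, the two printed values agree to about two decimal places.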

It is clear that the extreme shock model can be easily modified to the case when a system can also fail from causes other than shocks. Denote the corresponding Cdf in the absence of shocks by \( F(t) \) and assume that the process of failure from other causes and the shock process are independent. It follows from the competing risks considerations that

$$ P(T_{S} > t) = \bar{F}(t)\exp \left( { - \int\limits_{0}^{t} {p(u)\nu (u)\,{\text{d}}u} } \right). $$
(4.2)

A crucial assumption for obtaining Eqs. (4.1) and (4.2) is the assumption that with probability \( q(t) = 1 - p(t) \), a shock does not result in any changes in a system. However, in practice, shocks can also increase deterioration, wear, etc. The effect of different shocks is also usually accumulated in some way. Therefore, we start with the following setting [5]:

Let the lifetime of a system in a baseline environment (without shocks) be denoted by \( R \). Thus, \( P(R\,\le\,t) = F(t) \). We interpret here \( R \) as some initial, random resource, which is ‘consumed’ by a system (with rate 1) in the process of its operation. Therefore, the age of our system in this case is equal to a calendar time \( t \), and a failure occurs when this age reaches \( R \). It is clear that when the remaining resource decreases with time, our system can be considered as aging (deteriorating).

Let \( \{ N(t),\,t\; \ge \;0\} \) denote an orderly point process of shocks with arrival times \( T_{i} ,\,i = 1,\,2, \ldots \) Denote also by \( F_{S} (t) \) the Cdf that describes the lifetime of our system, \( T_{S} \), in the presence of shocks. Assume that the \( i \)th shock causes the immediate failure of the system with probability \( p(t) \) but, in contrast to the extreme shock model, with probability \( q(t) \) it now increases the age of the system by a random increment \( W_{i}\,\ge\,0 \). In terms of repair actions, such a shock acts as a repair that is ‘worse than minimal’. In accordance with this setting, the random age of the system at time \( t \) (which is similar to the ‘virtual age’ of Finkelstein [16, 17]) is

$$ T_{v} = t + \sum\limits_{i\, = \,0}^{N(t)} {W_{i} } , $$

where, formally, \( W_{0} = 0 \) corresponds to the case \( N(t) = 0 \) when there are no shocks in \( [0,\,t] \). Failure occurs when this random variable reaches the boundary \( R \). Therefore,

$$ \begin{aligned} &P(T_{S}> t|N(s),\;0\,\le\,s \le t;\;W_{1} ,\,W_{2} , \ldots ,W_{N(t)} ;\,R) \\ & = \prod\limits_{i\, = \,0}^{N(t)} q (T_{i} )\,I(T_{v}\,\le\,R) \\ & = \prod\limits_{i\, = \,0}^{N(t)} q (T_{i} )\,I\left( {\sum\limits_{i\, = \,0}^{N(t)} {W_{i} }\,\le\,R - t} \right), \\ \end{aligned} $$
(4.3)

where \( q(T_{0} ) = 1 \) describes the case when \( N(t) = 0 \) and \( I(x) \) is the corresponding indicator. This probability should be understood conditionally on realizations of \( N(t),\;W_{i} ,i = 1,\,2, \ldots ,\,N(t) \) and \( R \).

Relationship (4.3) is very general and it is impossible to ‘integrate out’ explicitly \( N(t),\,W_{i} ,\,i = 1,\,2, \ldots ,\,N(t) \) and \( R \) without substantial simplifying assumptions. Therefore, after the forthcoming comment we will consider two important specific cases [5].

The described model can be equivalently formulated in the following way. Let \( F(t) \) be the distribution of a lifetime of the wearing item in a baseline environment. Failure occurs when this wear, which in the standardized form is equal to \( t \), reaches the resource (boundary) \( R \). Denote the random wear in a more severe environment by \( W_{t} ,\,t\,\ge\,0. \) Specifically, for our shock model, \( W_{t} = t + \sum\nolimits_{i\, = \,0}^{N\left( t \right)} {W_{i} } \), where \( W_{i} ,\,i = 1,\,2, \ldots ,\,N(t), \) are the random increments of wear due to shocks and \( W_{0} \equiv 0 \) [18]. For convenience, in what follows we will use this wear-based interpretation.

4.1.2 Exponentially Distributed Boundary

In addition to the previous assumptions, we need the following:

Assumption 1. \( N(t),\,t\,\ge\,0 \), is the NHPP with rate \( \nu (t) \).

Assumption 2. \( W_{i} ,\,i = 1,\,2, \ldots \), are i.i.d. random variables characterized by the moment generating function \( M_{W} (t) \) and the Cdf \( G(t) \).

Assumption 3. \( N(t),\,t\,\ge\,0 \); \( W_{i} ,\,i = 1,\,2, \ldots \) and \( R \) are independent of each other.

Assumption 4. \( R \) is exponentially distributed with the failure rate \( \lambda \), i.e., \( \overline{F} (t) = \exp \{ - \lambda t\} \).

The following result gives the survival function and the failure rate function for \( T_{S} \) [5].

Theorem 4.1

Let \( m(t) \equiv E(N(t)) = \int_{0}^{t} {\nu (x)} \,{\text{d}}x \). Suppose that Assumptions 1–4 hold and that the inverse function \( m^{ - 1} (t) \) exists for t > 0. Then the survival function for \( T_{S} \) and the corresponding failure rate \( \lambda_{S} (t) \) are given by

$$ P(T_{S} > t) = \exp \left\{ { - \lambda t - \int\limits_{0}^{t} {\nu (x)\,} {\text{d}}x + M_{W} \left( { - \lambda } \right) \cdot \int\limits_{0}^{t} {q(x)\,\nu (x)\,{\text{d}}x} } \right\},\,t\,\ge\,0, $$

and

$$ \lambda_{S} (t) = \lambda + (1 - M_{W} ( - \lambda ) \cdot q(t))\,\nu (t), $$
(4.4)

respectively.

Proof

Given the assumptions, we can directly ‘integrate out’ the variable \( R \) and define the corresponding probability as

$$ \begin{aligned} & P(T_{S} > t\,|\,N\left( s \right), \, 0 \le s \le t, \, W_{1} ,\,W_{2} , \cdots ,\,W_{N(t)} ) \\ &= \left( {\prod\limits_{i\, = \,0}^{N\left( t \right)} {q\left( {T_{i} } \right)} } \right) \cdot \exp \left\{ { - \int\limits_{0}^{{t \,+\, \sum\nolimits_{i\, = \,0}^{N(t)} {W_{i} } }} {\lambda \,{\text{d}}u} } \right\} \\ &= \exp \left\{ { - \lambda t - \lambda \sum\limits_{i\, = \,1}^{N(t)} {W_{i} } + \sum\limits_{i\, = \,1}^{N(t)} {\ln q\left( {T_{i} } \right)} } \right\}. \\ \end{aligned} $$

Thus

$$\begin{aligned} & P(T_{S} > t\,|\,N\left( s \right), \, 0 \le s \le t) \\ &= \exp \left\{ { - \lambda t} \right\} \cdot \exp \left\{ {\sum\limits_{i\, = \,1}^{N(t)} {\ln q\left( {T_{i} } \right)} } \right\} \cdot E\left[ {\exp \left\{ { - \sum\limits_{i\, = \,1}^{N(t)} {\lambda W_{i} } } \right\}} \right] \\ \end{aligned} $$
$$ = \exp \left\{ { - \lambda t} \right\} \cdot \exp \left\{ {\sum\limits_{i\, = \,1}^{N(t)} {\left[ {\ln q\left( {T_{i} } \right) + \ln\, \left( {M_{W} \left( { - \lambda } \right)} \right)} \right]} } \right\}. $$
(4.5)

We use now the same reasoning as when deriving Eq. (2.31). Therefore, some evident intermediate transformations are omitted. More details can be found in the original publication [5]. A similar approach is applied to our presentation in the rest of this chapter.

Define \( N^{ * } \left( t \right) \equiv N\left( {m^{ - 1} \left( t \right)} \right),\,t\,\ge\,0 \), and \( T_{j}^{ * } \equiv m\left( {T_{j} } \right),\,j\,\ge\,1 \). It is well known that \( \{ N^{ * } \left( t \right),\,t\,\ge\,0\} \) is a stationary Poisson process with intensity one (see, e.g., [14]) and \( T_{j}^{ * } ,\,j \ge 1 \), are the times of occurrence of shocks in the new time scale. Let \( s = m\left( t \right) \). Then
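The time transformation itself is straightforward to implement. A minimal sketch (with the illustrative mean function \( m(t) = t^{2} \), i.e., \( \nu (t) = 2t \), not from the text) generates the NHPP by mapping a unit-rate HPP back through \( m^{ - 1} \):

```python
import math
import random

random.seed(1)

def nhpp_arrivals(t_max, m, m_inv):
    """Arrival times of an NHPP on [0, t_max] via the time transformation:
    generate a unit-rate HPP on [0, m(t_max)] and map each arrival back
    through the inverse mean function m^{-1}."""
    arrivals, s = [], 0.0
    while True:
        s += random.expovariate(1.0)  # unit-rate HPP in transformed time
        if s > m(t_max):
            return arrivals
        arrivals.append(m_inv(s))

# illustrative mean function: nu(t) = 2t, hence m(t) = t^2, m^{-1}(s) = sqrt(s)
mean_count = sum(len(nhpp_arrivals(1.5, lambda t: t * t, math.sqrt))
                 for _ in range(100_000)) / 100_000
print(round(mean_count, 2))  # should be close to m(1.5) = 2.25
```

The empirical mean number of arrivals on \( [0,\,1.5] \) matches \( m(1.5) = 2.25 \), as it must for an NHPP with this mean function.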

$$ \begin{gathered} E\left[ {\exp \left\{ {\sum\limits_{i\, = \,1}^{N(t)} {\left[ {\ln q\left( {T_{i} } \right) + \ln \left( {M_{W} \left( { - \lambda } \right)} \right)} \right]} } \right\}} \right] \hfill \\ = E\left[ {E\left[ {\exp \left\{ {\sum\limits_{i\, = \,1}^{{N^{ * } (s)}} {\left[ {\ln q\left( {m^{ - 1} \left( {T_{i}^{ * } } \right)} \right) + \ln \left( {M_{W} \left( { - \lambda } \right)} \right)} \right]} } \right\}\,|\,N^{ * } \left( s \right)} \right]} \right]. \hfill \\ \end{gathered} $$
(4.6)

The joint distribution of \( \left( {T_{1}^{ * } ,\,T_{2}^{ * } ,\, \ldots ,\,T_{n}^{ * } } \right) \) given \( N^{ * } \left( s \right) = n \) is the same as the joint distribution of \( \left( {V_{\left( 1 \right)} ,\,V_{\left( 2 \right)} , \ldots ,\,V_{\left( n \right)} } \right) \), where \( V_{\left( 1 \right)} \; \le \;V_{\left( 2 \right)} \le \ldots \le V_{\left( n \right)} \) are the order statistics of i.i.d. random variables \( V_{1} ,\,V_{2} , \ldots ,V_{n} \) which are uniformly distributed in the interval \( \left[ {0,\,s} \right]\; = \;\left[ {0,\,m\left( t \right)} \right] \). Then

$$ \begin{aligned} & E\left[ {\exp \left\{ {\sum\limits_{i\, = \,1}^{{N^{ * } (s)}} {\left( {\ln q\left( {m^{ - 1} \left( {T_{i}^{ * } } \right)} \right) + \ln \left( {M_{W} \left( { - \lambda } \right)} \right)} \right)} } \right\}\,|\,N^{ * } \left( s \right) = n} \right] \\ &= E\left[ {\exp \left\{ {\sum\limits_{i\, = \,1}^{n} {\left( {\ln q\left( {m^{ - 1} \left( {V_{\left( i \right)} } \right)} \right) + \ln \left( {M_{W} \left( { - \lambda } \right)} \right)} \right)} } \right\}} \right] \\ &= E\left[ {\exp \left\{ {\sum\limits_{i\, = \,1}^{n} {\left( {\ln q\left( {m^{ - 1} \left( {V_{i} } \right)} \right) + \ln \left( {M_{W} \left( { - \lambda } \right)} \right)} \right)} } \right\}} \right] \\ &= \left( {E\left[ {\exp \left\{ {\ln q\left( {m^{ - 1} \left( {sU} \right)} \right) + \ln \left( {M_{W} \left( { - \lambda } \right)} \right)} \right\} \, } \right] \, } \right)^{n} , \\ \end{aligned} $$
(4.7)

where \( U \equiv V_{1} /s = V_{1} /m\left( t \right) \) is a random variable uniformly distributed in the unit interval [0,1]. Therefore,

$$ \begin{aligned} & E\left[ {\exp \left\{ {\ln q\left( {m^{ - 1} \left( {sU} \right)} \right) + \ln\,\left( {M_{W} \left( { - \lambda } \right)} \right)} \right\}} \right] \\ &= \int\limits_{0}^{1} {\exp \left\{ { \, \ln q\left( {m^{ - 1} \left( {m\left( t \right)u} \right)} \right) + \ln\,\left( {M_{W} \left( { - \lambda } \right)} \right)} \right\}} \,{\text{d}}u \\ &= \frac{{M_{W} \left( { - \lambda } \right)}}{m\left( t \right)}\int\limits_{0}^{t} {q\left( x \right)\nu \left( x \right)\,{\text{d}}x} . \\ \end{aligned} $$
(4.8)

From Eqs. (4.5)–(4.8),

$$ \begin{aligned} P(T_{S} > t) &= \exp \left\{ { - \lambda t} \right\} \cdot \sum\limits_{n\, = \,0}^{\infty } {\left( {\frac{{M_{W} \left( { - \lambda } \right)}}{m\left( t \right)}\int\limits_{0}^{t} {q\left( x \right)\nu \left( x \right){\text{d}}x} } \right)^{n} } \frac{{s^{n} }}{n \, !}e^{ - s} \\ &= \exp \left\{ { - \lambda t} \right\} \cdot e^{ - s} \cdot \exp \left\{ {M_{W} \left( { - \lambda } \right) \cdot \frac{s}{m\left( t \right)}\int\limits_{0}^{t} {q\left( x \right)\nu \left( x \right){\text{d}}x} } \right\} \\ &= \exp \left\{ { - \lambda t - \int\limits_{0}^{t} {\nu \left( x \right)} \,{\text{d}}x + M_{W} \left( { - \lambda } \right) \cdot \int\limits_{0}^{t} {q\left( x \right)\nu \left( x \right)\,{\text{d}}x} } \right\}. \\ \end{aligned} $$

Therefore, the failure rate of the system, \( \lambda_{S} (t) \), is given by

$$ \lambda_{S} (t) = \lambda + \left( {1 - M_{W} ( - \lambda ) \cdot q(t)} \right)\nu (t). $$

\( \square\)
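Theorem 4.1 can also be verified by simulation. A minimal sketch, assuming a constant shock rate, a constant survival probability and exponential increments (all parameter values are illustrative, not from the text):

```python
import math
import random

random.seed(7)

# Illustrative parameters: HPP shocks with constant rate nu, constant
# survival probability q per shock, increments W_i ~ Exp(mean mu),
# exponentially distributed boundary R with failure rate lam.
lam, mu, nu, q, t_miss = 0.5, 1.0, 1.0, 0.8, 1.0

def survives_one_run():
    R = random.expovariate(lam)               # random boundary
    t, wear = 0.0, 0.0
    while True:
        t += random.expovariate(nu)           # next shock arrival
        if t > t_miss:
            # survival: no fatal shock and age t_miss + sum(W_i) <= R
            return t_miss + wear <= R
        if random.random() > q:               # 'killing' shock
            return False
        wear += random.expovariate(1.0 / mu)  # age increment W_i

n = 200_000
est = sum(survives_one_run() for _ in range(n)) / n
M_W = 1.0 / (1.0 + lam * mu)                  # E[exp(-lam*W)] for Exp(mean mu)
exact = math.exp(-lam * t_miss - nu * t_miss + M_W * q * nu * t_miss)
print(round(est, 3), round(exact, 3))
```

Because the random age is nondecreasing, survival to the mission time is equivalent to the final check \( t + \sum W_{i} \le R \), which is exactly the indicator in (4.3).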

The following corollary specializes this result to the case when the \( W_{i} \)’s are distributed exponentially with mean \( \mu \); for this distribution, \( M_{W} ( - \lambda ) = (1 + \lambda \mu )^{ - 1} \).

Corollary 4.1

If the \( W_{i} \) ’s are distributed exponentially with mean \( \mu \) then the failure rate \( \lambda_{S} (t) \) is given by

$$ \lambda_{S} (t) = \lambda + \left( {1 - \frac{q(t)}{\lambda \mu \; + \;1}} \right)\nu (t). $$
(4.9)

We now present a qualitative analysis of the obtained result. Eq. (4.4) suggests that the failure rate \( \lambda_{S} (t) \) can be interpreted as the failure rate of a series system with components that are dependent (via \( R \)). When \( \mu \to \infty \), from Eq. (4.9) we obtain \( \lambda_{S} (t) \to \lambda + \nu (t) \), which means that a failure occurs either in accordance with the baseline \( F(t) \) or as a result of the first shock (competing risks). Note that, in accordance with the properties of Poisson processes, the rate \( \nu (t) \) is equal to the failure rate that corresponds to the time to the first shock. Therefore, the two ‘components’ of the described series system are asymptotically independent as \( \mu \to \infty \).

When \( \mu = 0 \), which means that \( W_{i} = 0,\;i \ge 1 \), Eq. (4.9) becomes \( \lambda_{S} (t) = \lambda + p(t)\nu (t) \). Therefore, this specific case describes the series system with two independent components. The first component has the failure rate \( \lambda \) and the second component has the failure rate \( p(t)\,\nu (t) \).

Let \( q(t) = 1 \) (there are no ‘killing’ shocks) and let \( W_{i} \) be deterministic and equal to \( \mu \). Then \( M_{W} ( - \lambda ) = \exp \{ - \mu \lambda \} \) and Eq. (4.4) becomes

$$ \lambda_{S} (t) = \lambda + (1 - \exp \{ - \mu \lambda \} )\nu (t). $$

Assume for simplicity of notation that there is no baseline wear and all wear increments come from shocks. Then from Theorem 4.1

$$ P(T_{S} > t) = \exp \left\{ { - \int\limits_{0}^{t} {\nu \left( x \right)} \,{\text{d}}x + M_{W} \left( { - \lambda } \right) \cdot \int\limits_{0}^{t} {q\left( x \right)\nu \left( x \right){\text{d}}x} } \right\}. $$

The form of this equation suggests the following probabilistic interpretation [6]. A system can fail from (i) the critical shock or (ii) the accumulated wear caused by the shocks. Suppose that the system has survived until time \( t \). Then, as the distribution of the random boundary \( R \) is exponential, the wear accumulated until time \( t \), \( \sum\nolimits_{i\, = \,0}^{N\left( t \right)} {W_{i} } \), does not affect the failure process of the system after time \( t \). That is, on the next shock, the probability of the system’s failure due to the accumulated wear, given that a critical shock has not occurred, is just \( P(R \le W_{N(t)\; + \;1} ) \). This probability does not depend on the wear accumulation history, that is,

$$ \begin{aligned} & P(R > W_{1} + W_{2} + \ldots + W_{n}\,|\,R > W_{1} + W_{2} + \ldots + W_{n - 1} ) \\ &\quad = P(R > W_{n} ),\quad \forall n = 1,\,2, \ldots ,\,W_{1} ,\,W_{2} , \ldots , \\ \end{aligned} $$

where \( W_{1} + W_{2} + \ldots + W_{n\; - \;1} \equiv 0 \) when \( n = 1 \). Finally, each shock results in the immediate failure with probability \( p(t) + q(t)\,P(R\; \le \;W_{1} ) \); otherwise, the system survives with probability \( q(t)\,P(R\; > \;W_{1} ) \). Although we have two (independent) causes of failure in this case, the second cause also does not depend on the history of the process and, therefore, our initial \( p(t) \Leftrightarrow q(t) \) model can be applied after an obvious modification. In accordance with (4.1), the corresponding failure rate can then be immediately obtained as

$$ \begin{aligned} \lambda_{S} (t) &= \left( {p(t) + q(t)\,P(R\,\le\,W_{1} )} \right)\,\nu (t) \\ &= \left( {1 - q(t)\,P(R > W_{1} )} \right)\,\nu (t) \\ &= \left( {1 - q(t)\,M_{W} \left( { - \lambda } \right)} \right)\nu (t). \\ \end{aligned} $$

The validity of the above reasoning and interpretation can be verified by comparing this failure rate function with that directly derived in (4.4) (with the additive term \( \lambda \) omitted, as there is no baseline wear).
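The memoryless identity used above can also be checked numerically. A sketch with illustrative parameters (exponential \( R \) and i.i.d. exponential increments \( W_{i} \), neither value from the text):

```python
import random

random.seed(3)

# Check of the memoryless property: for R ~ Exp(lam) and i.i.d. W_i,
# P(R > W_1 + ... + W_n | R > W_1 + ... + W_{n-1}) equals P(R > W_n).
lam, mu, n_shocks, trials = 0.5, 1.0, 3, 400_000

cond_num = cond_den = marg = 0
for _ in range(trials):
    R = random.expovariate(lam)
    W = [random.expovariate(1.0 / mu) for _ in range(n_shocks)]
    if R > sum(W[:-1]):          # survived the first n-1 increments
        cond_den += 1
        if R > sum(W):           # ... and the n-th increment as well
            cond_num += 1
    if R > W[-1]:                # unconditional comparison event
        marg += 1

# both ratios should be close to M_W(-lam) = 1/(1 + lam*mu) = 2/3
print(round(cond_num / cond_den, 3), round(marg / trials, 3))
```

Both estimates agree with \( M_{W} ( - \lambda ) = 1/(1 + \lambda \mu ) \), confirming that the wear accumulation history is indeed irrelevant for an exponential boundary.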

It is clear that this reasoning can be applied due to the specific, exponential distribution of the boundary \( R \), which implies the Markov property for the wear ‘accumulation’. In the next section, the case of a deterministic boundary will be considered and, obviously, the foregoing interpretation ‘does not work’ for this case.

4.1.3 Deterministic Boundary

Let \( R = b \) be the deterministic boundary and let the other assumptions of Sect. 4.1.2 hold. We consider the case when \( t\,<\,b \), which means that a failure cannot occur without shocks. The following result gives the survival function for \( T_{S} \).

Theorem 4.2

Suppose that Assumptions 1–3 of Sect. 4.1.2 hold and that the inverse function \( m^{ - 1} \left( t \right) \) exists for t > 0. Furthermore, let the \( W_{i} \)’s be i.i.d. exponential with mean \( 1/\eta \). Then the survival function for \( T_{S} \) is given by

$$ \begin{aligned} P(T_{S} > t) &= \sum\limits_{n\, = \,0}^{\infty } {\left( {\sum\limits_{j\, = \,n}^{\infty } {\frac{{\left( {\eta \left( {b - t} \right)} \right)^{j} }}{j \, !}\exp \left\{ { - \eta \left( {b - t} \right)} \right\}} } \right)} \\ & \times \left( {\frac{1}{m\left( t \right)}\int\limits_{0}^{t} {q\left( x \right)\nu \left( x \right)} \,{\text{d}}x} \right)^{n} \cdot \frac{{m\left( t \right)^{n} }}{n \, !}\exp \left\{ { - m\left( t \right)} \right\},\;0\,\le\,t\,<\,b. \\ \end{aligned} $$
(4.10)

Proof

Similar to the previous subsection,

$$ \begin{aligned} & P(T_{S} > t|N\left( s \right),\;0\,\le\,s\,\le\,t,\;W_{1} ,\,W_{2} , \ldots ,W_{N(t)} ) \\ &= \left( {\prod\limits_{i\, = \,1}^{N(t)} {q\left( {T_{i} } \right)} } \right) \cdot I\left( {t + \sum\limits_{i\, = \,1}^{N(t)} {W_{i} \le b} } \right). \\ \end{aligned} $$

Thus, we have

$$ \begin{aligned} & P(T_{S} > t|\,N\left( s \right),\quad 0\,\le\,s\,\le\,t) \\ &= \left( {\prod\limits_{i\, = \,1}^{N(t)} {q\left( {T_{i} } \right)} } \right)\;P\left( {\sum\limits_{i\, = \,1}^{N(t)} {W_{i}\,\le\,b - t} } \right) \\ &= \left( {\prod\limits_{i\, = \,1}^{N(t)} {q\left( {T_{i} } \right)} } \right)\;G^{{\left( {N(t)} \right)}} \left( {b - t} \right), \\ \end{aligned} $$

where \( G^{\left( n \right)} \left( t \right) \) is the n-fold convolution of \( G\left( t \right) \) with itself.

As a special case, when the \( W_{i} \)’s are i.i.d. exponential with mean \( 1/\eta \),

$$ P(T_{S} > t|\,N\left( s \right),\,0\,\le\,s\,\le\,t) = \left( {\prod\limits_{i\, = \,1}^{N\left( t \right)} {q\left( {T_{i} } \right)} } \right) \cdot \Uppsi \left( {N\left( t \right)} \right), $$

where

$$ \Uppsi \left( {N\left( t \right)} \right) \equiv \sum\limits_{j\, = \,N\left( t \right)}^{\infty } {\frac{{\left( {\eta \left( {b\, - \,t} \right)} \right)^{j} }}{j!}} \exp \left\{ { - \eta \left( {b - t} \right)} \right\}, $$

and

$$ \begin{aligned} P(T_{S} > t) &= E\left[ {\left( {\prod\limits_{i\, = \,1}^{N\left( t \right)} {q\left( {T_{i} } \right)} } \right) \cdot \Uppsi \left( {N\left( t \right)} \right)} \right] \\ &= E\left[ {E\left[ {\left( {\prod\limits_{i\, = \,1}^{N\left( t \right)} {q\left( {T_{i} } \right)} } \right) \cdot \Uppsi \left( {N\left( t \right)} \right)|N\left( t \right)} \right]} \right], \\ \end{aligned} $$

where

$$ \begin{aligned} & E\left[ {\left( {\prod\limits_{i\, = \,1}^{N\left( t \right)} {q\left( {T_{i} } \right)} } \right) \cdot \Uppsi \left( {N\left( t \right)} \right)|N\left( t \right) = n} \right] \\ &= \Uppsi \left( n \right) \cdot E \, \left[ {\left( {\prod\limits_{i\, = \,1}^{N\left( t \right)} {q\left( {T_{i} } \right)} } \right)|N\left( t \right) = n} \right]. \\ \end{aligned} $$

Using the same notation and properties as those of the previous subsection, we have

$$ E\left[ {\left( {\prod\limits_{i\, = \,1}^{N\left( t \right)} {q\left( {T_{i} } \right)} } \right)|N\left( t \right) = n} \right]\; = \;\left[ {E\left( {q\left( {m^{ - 1} \left( {sU} \right)} \right)} \right)} \right]^{ \, n} $$

and

$$ E\left( {q\left( {m^{ - 1} \left( {sU} \right)} \right)} \right) = \frac{1}{m\left( t \right)}\int\limits_{0}^{t} {q\left( x \right)\nu \left( x \right)\,} {\text{d}}x. $$

Therefore,

$$ \begin{aligned} & E\left[ {\left( {\prod\limits_{i = 1}^{N\left( t \right)} {q\left( {T_{i} } \right)} } \right) \cdot \Uppsi \left( {N\left( t \right)} \right)|N\left( t \right) = n} \right] \\ & = \Uppsi \left( n \right) \cdot \left( {\frac{1}{m\left( t \right)}\int\limits_{0}^{t} {q\left( x \right)\nu \left( x \right)\,} {\text{d}}x} \right)^{n} . \\ \end{aligned} $$

Finally, we obtain a rather cumbersome Eq. (4.10).\( \square\)

It can be easily shown that the survival function in (4.10) can be written in the following compact form [6]:

$$ P(T_{S} > t) = \exp \left\{ { \, - \int\limits_{0}^{t} {p\left( x \right)\,\nu \left( x \right)} \,{\text{d}}x} \right\} \cdot \sum\limits_{n = 0}^{\infty } {P\left( {Z_{1}\,\ge\,n} \right)\, \cdot \,P\left( {Z_{2} = n} \right)} , $$
(4.11)

where \( Z_{1} \) and \( Z_{2} \) are two Poisson random variables with parameters \( \eta (b - t) \) and \( \int_{0}^{t} {q(x)\,\nu (x)\,{\text{d}}x} \), respectively. The following presents a qualitative analysis for two marginal cases of Eq. (4.11) for each fixed \( t\,<\,b \).

When \( \eta = 1/\mu \to \infty \), which means that the mean of increments \( W_{i} \) tends to 0, Eq. (4.11) ‘reduces’ to (4.1). Indeed, as \( \eta \to \infty \),

$$ \sum\limits_{n\, = \,0}^{\infty } {P(Z_{1}\,\ge\,n)\,P(Z_{2} = n)} \to \sum\limits_{n\, = \,0}^{\infty } {P(Z_{2} = n) = 1} , $$

because \( P(Z_{1}\,\ge\,n) \to 1 \) for \( \forall n\,\ge\,1 \) and \( P(Z_{1}\,\ge\,0) = 1 \). From ‘physical considerations’, it is also clear that as increments vanish, their impact on the model also vanishes.

When \( \eta \to 0 \), the mean of the increments tends to infinity and, therefore, the first shock will kill the system with probability tending to one as \( \eta \to 0 \). The infinite sum on the right-hand side of the following equation vanishes in this case:

$$ \begin{aligned} & \sum\limits_{n\, = \,0}^{\infty } {P(Z_{1}\,\ge\,n)\,P(Z_{2} = n)} = P(Z_{1}\,\ge\,0)\,P(Z_{2} = 0) + \sum\limits_{n\, = \,1}^{\infty } {P(Z_{1}\,\ge\,n)\,P(Z_{2} = n)} \\ & \to P(Z_{2} = 0), \\ \end{aligned} $$

as \( P(Z_{1}\,\ge\,0) = 1 \) and \( P(Z_{1}\,\ge\,n) \to 0 \) for \( \forall n\,\ge\,1 \) when \( \eta \to 0 \). Therefore, finally

$$ \begin{aligned} P(T_{S}\,>\,t) & \to \exp \left\{ { - \int\limits_{0}^{t} {p(x)\nu (x)\,{\text{d}}x} } \right\}\exp \left\{ { - \int\limits_{0}^{t} {q(x)\nu (x)\,{\text{d}}x} } \right\} \\ &= \exp \left\{ { - \int\limits_{0}^{t} {\nu (x)\,{\text{d}}x} } \right\}, \\ \end{aligned} $$

which is the probability that no shocks have occurred in \( [0,\,t] \). This is what we also expect from general considerations for \( \eta \to 0 \), as the system can survive for \( t\,<\,b \) only without shocks.
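The compact form (4.11) lends itself to a direct numerical check. A sketch with illustrative parameters (constant shock rate and termination probability, exponential increments, deterministic boundary; none of the values are from the text):

```python
import math
import random

random.seed(11)

# Monte Carlo check of the compact form (4.11): HPP shocks with rate nu,
# fatal with constant probability p, otherwise an Exp(mean 1/eta) wear
# increment; deterministic boundary b, mission time t_miss < b.
nu, p, eta, b, t_miss = 1.0, 0.2, 2.0, 2.0, 1.0
q = 1.0 - p

def survives_one_run():
    t, wear = 0.0, 0.0
    while True:
        t += random.expovariate(nu)
        if t > t_miss:
            return t_miss + wear <= b    # wear stayed below the boundary
        if random.random() < p:
            return False                 # fatal shock
        wear += random.expovariate(eta)  # non-fatal: add wear increment

runs = 200_000
est = sum(survives_one_run() for _ in range(runs)) / runs

def pois_pmf(k, lmb):
    return math.exp(-lmb) * lmb ** k / math.factorial(k)

lam1, lam2 = eta * (b - t_miss), q * nu * t_miss  # parameters of Z1, Z2
series = sum(sum(pois_pmf(j, lam1) for j in range(n, 60)) * pois_pmf(n, lam2)
             for n in range(30))
exact = math.exp(-p * nu * t_miss) * series
print(round(est, 3), round(exact, 3))
```

The truncated series over \( n \) converges very quickly here, and the simulated survival probability matches the closed form to within Monte Carlo error.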

4.2 History-Dependent Termination Probability

Consider first the orderly point process with the conditional (complete) intensity function (CIF) \( \nu (t|H(t)) \) [2, 15], where \( H(t) \) is the history of the process up to \( t \). This notion is similar to the intensity process defined in (2.12). Whereas the intensity process is considered as a stochastic process defined by the filtration \( {\rm H}_{t - } \), the CIF is usually a realization of this process defined by the realization \( H(t) \) of this filtration. We will use these terms in our book interchangeably. Accordingly, let the probability of termination under a single shock be adjusted in a similar way and, therefore, also depend on this history, i.e., \( p(t|H(t)) \). Denote, as previously, by \( T_{S} \) the corresponding lifetime. It is clear that, in accordance with our assumptions, the conditional probability of termination in an infinitesimal interval of time can be written in the following simplified form [17]:

$$ P[T_{S} \in [t,\,t + {\text{d}}t)|T_{S}\,\ge\,t,\,H(t)]\; = \;p\left( {t|H(t)} \right)\,\nu \left( {t|{\rm H}(t)} \right)\,{\text{d}}t. $$

The only way for \( p\left( {t|H(t)} \right)\,\nu (t|{\rm H}(t)) \) to become a ‘full-fledged’ failure rate that corresponds to the lifetime \( T_{S} \) is when neither of the two factors on the right-hand side depends on \( H(t) \). It is obvious that elimination of this dependence in the second factor uniquely leads to the NHPP. In what follows, we will consider this case. However, specific types of dependence on history in the first factor will be retained, and this will give rise to new classes of extreme shock models.

Model A. We will consider the NHPP of shocks with rate \( \nu (t) \) and with the history-dependent termination probability

$$ p\left( {t|H(t)} \right)\; = \;p(t|N(s),\,0\,\le\,s\,<\,t). $$

Let the history be of the simplest form, i.e., the number of shocks \( N(t) \) that our system has experienced in \( [0,\;t) \). This seems to be a reasonable assumption, as each shock can contribute to the ‘weakening’ of the system by increasing the probability \( p\left( {t|H(t)} \right) \equiv p\left( {t,\,N(t)} \right) \); therefore, the function \( p\left( {t,\,N(t)} \right) \) is usually increasing in \( n(t) \) (for each realization \( N(t) = n(t) \)). To obtain the following result, we must assume a specific form of this function. It is more convenient to consider the corresponding probability of survival. Let

$$ q\left( {t,\,n(t)} \right) \equiv 1 - p\left( {t,\,n(t)} \right) = q(t)\,\rho \left( {n(t)} \right), $$
(4.12)

where \( \rho \left( {n(t)} \right) \) is a decreasing function of its argument (for each fixed \( t \)). Thus the survival probability at each shock decreases as the number of survived shocks in \( [0,\;t) \) increases. The multiplicative form of (4.12) will be important for us as it will be ‘responsible’ for the vital independence to be discussed later.

The survival function of the system’s lifetime \( T_{S} \) is given by the following theorem.

Theorem 4.3

Let \( m(t) \equiv E\left( {N(t)} \right) = \int_{0}^{t} {\nu (x)} \,{\text{d}}x \) and \( \Uppsi (n) \equiv \prod\nolimits_{i\, = \,0}^{n} {\rho (i)} \) (\( \rho (0) \equiv 1 \)). Suppose that the inverse function \( m^{ - 1} (t) \) exists for t > 0. Then

$$ P(T_{S}\,\ge\,t) = E\left[ {\Uppsi \left( {N_{q\nu } (t)} \right)} \right] \cdot \exp \left\{ { - \int\limits_{0}^{t} {p\left( x \right)} \,\nu (x)\,{\text{d}}x} \right\}, $$
(4.13)

where \( \{ N_{q\nu } (t),\,t\,\ge\,0\} \) follows the NHPP with rate \( q(t)\nu (t) \).

Proof

Obviously, conditioning on the process (in each realization) gives

$$ P(T_{S} \ge t|N(s),\,0\,\le\,s\,<\,t) = \prod\limits_{i\, = \,0}^{N(t)} q (T_{i} )\rho (i), $$

where formally \( q(T_{0} ) \equiv 1 \) and \( \rho (0) \equiv 1 \) corresponds to the case when \( N(t) = 0 \). Also, by convention, \( \prod\nolimits_{i\, = \,1}^{n} {( \cdot )_{i} \equiv 1} \) for \( n = 0 \). Then the corresponding expectation is

$$ P(T_{S}\,\ge\,t) = E\left[ {\prod\limits_{i\, = \,1}^{N(t)} q (T_{i} )\,\rho (i)} \right]. $$

As previously, define the stationary Poisson process with rate 1: \( N^{ * } (t) \equiv N\left( {m^{ - 1} (t)} \right),\,t\,\ge\,0 \), and \( T_{j}^{ * } \equiv m(T_{j} ),\,j\,\ge\,1 \) are the times of occurrence of shocks in the new time scale. Let \( s = m(t) \). Then

$$ E\left[ {\prod\limits_{i\, = \,1}^{N(t)} q (T_{i} )\,\rho (i)} \right] = E\left[ {E\left[ {\prod\limits_{i\, = \,1}^{{N^{*} (s)}} q \left( {m^{ - 1} (T_{i}^{*} )} \right)\,\rho (i)|N^{*} (s)} \right]} \right]. $$

The joint distribution of \( \left( {T_{1}^{ * } ,\,T_{2}^{ * } , \ldots ,\,T_{n}^{ * } } \right) \) given \( N^{ * } (s) = n \) is the same as the joint distribution of \( \left( {V_{\left( 1 \right)} ,\,V_{\left( 2 \right)} , \ldots ,\,V_{\left( n \right)} } \right) \), where \( V_{\left( 1 \right)} \; \le \;V_{\left( 2 \right)} \; \le \; \cdots \; \le \;V_{\left( n \right)} \) are the order statistics of i.i.d. random variables \( V_{1} ,\,V_{2} , \ldots ,\,V_{n} \) which are uniformly distributed in the interval \( \left[ {0,\,s} \right] = \left[ {0,\,m(t)} \right] \). Thus, omitting derivations similar to those in the proofs of Theorems 4.1 and 4.2 (see [6] for more details):

$$ E\left[ {\prod\limits_{i\, = \,1}^{{N^{*} (s)}} q \left( {m^{ - 1} (T_{i}^{*} )} \right)\,\rho (i)|N^{*} (s) = n} \right] = \prod\limits_{i\, = \,1}^{n} {\rho (i)} \,\left( {E\left[ {q\left( {m^{ - 1} (sU)} \right)} \right]} \right)^{n} , $$

where \( U \equiv V_{1} /s = V_{1} /m(t) \) is a random variable uniformly distributed in the unit interval [0,1]. Therefore,

$$ E\left[ {q(m^{ - 1} (sU))} \right] = \int\limits_{0}^{1} {q\left( {m^{ - 1} \left( {su} \right)} \right)} \,{\text{d}}u = \int\limits_{0}^{1} {q\left( {m^{ - 1} \left( {m(t)u} \right)} \right)} \,{\text{d}}u\; = \;\frac{1}{m(t)}\int\limits_{0}^{t} {q(x)} \nu (x)\,{\text{d}}x. $$

Hence,

$$ E\left[ {\prod\limits_{i\, = \,1}^{{N^{*} (s)}} q \left( {m^{ - 1} (T_{i}^{*} )} \right)\,\rho (i)|N^{*} (s) = n} \right]\; = \;\prod\limits_{i\, = \,1}^{n} {\rho (i)} \cdot \left( {\frac{1}{m(t)}\int\limits_{0}^{t} {q(x)} \nu (x)\,{\text{d}}x} \right)^{n} . $$

Using \( \Uppsi (n) \equiv \prod\nolimits_{i\, = \,0}^{n} {\rho (i)} \) with \( \rho (0) \equiv 1 \),

$$ \begin{aligned} P(T_{S} \ge t) &= E\left[ {\prod\limits_{i\, = \,1}^{N(t)} q (T_{i} )\rho (i)} \right] \\ &= \sum\limits_{n\, = \,0}^{\infty } {\Uppsi (n)\left( {\frac{1}{m(t)}\int\limits_{0}^{t} {q(x)} \nu (x)\,{\text{d}}x} \right)^{n} } \cdot \frac{{(m(t))^{n} }}{n \, !}e^{ - m(t)} \\ &= \exp \left\{ { - \int\limits_{0}^{t} {p(x)} \nu (x)\,{\text{d}}x} \right\} \cdot \sum\limits_{n = 0}^{\infty } {\Uppsi (n)} \cdot \frac{{\left( {\int_{0}^{t} {q(x)} \nu (x)\,{\text{d}}x} \right)^{n} }}{n \, !} \cdot \exp \left\{ { - \int\limits_{0}^{t} {q(x)} \nu (x)\,{\text{d}}x} \right\} \\ &= E\left[ {\Uppsi (N_{q\nu } (t))} \right] \cdot \exp \left\{ { - \int\limits_{0}^{t} {p(x)} \nu (x)\,{\text{d}}x} \right\}, \\ \end{aligned} $$

where \( \{ N_{q\nu } (t),\,t \ge 0\} \) follows the NHPP with rate \( q(t)\nu (t) \).\( \square \)

Example 4.1

Let \( \rho (i) = \rho^{i\; - \;1} ,\,i = 1,\,2, \ldots \). Then \( \Uppsi (n) \equiv \rho^{n(n\; - \;1)/2} \) and

$$ \begin{aligned} P(T_{S} \ge t) &= \sum\limits_{n\, = \,0}^{\infty } {\rho^{n(n\; - \;1)/2} } \cdot \frac{{\left( {\int_{0}^{t} {q(x)} \nu (x)\,{\text{d}}x} \right)^{n} }}{n \, !} \cdot \exp \left\{ { - \int\limits_{0}^{t} {q(x)} \nu (x)\,{\text{d}}x} \right\} \cdot \exp \left\{ { - \int\limits_{0}^{t} {p(x)} \nu (x)\,{\text{d}}x} \right\} \\ &= \sum\limits_{n\, = \,0}^{\infty } {\rho^{n(n\, - \,1)/2} } \cdot \frac{{\left( {\int_{0}^{t} {q(x)} \nu (x)\,{\text{d}}x} \right)^{n} }}{n \, !} \cdot \exp \left\{ { - \int\limits_{0}^{t} {\nu (x)} \,{\text{d}}x} \right\}. \\ \end{aligned} $$
(4.14)

The following discussion will help us in the further presentation of our time-dependent results. Let \( \{ N(t),\,t \ge 0\} \) be the NHPP with rate \( \nu (t) \). If an event occurs at time \( t \), it is classified as a Type I event with probability \( p(t) \) and as a Type II event with the complementary probability \( 1 - p(t) \), as in our initial \( p(t) \Leftrightarrow q(t) \) model. Then \( \{ N_{1} (t),\,t \ge 0\} \) and \( \{ N_{2} (t),\,t \ge 0\} \) are independent NHPPs with rates \( p(t)\,\nu (t) \) and \( q(t)\,\nu (t) \), respectively, and \( N(t) = N_{1} (t) + N_{2} (t) \). Accordingly, e.g., given that there have been no Type I events in \( [0,\;t) \), the process \( \{ N(t),\,t \ge 0\} \) reduces to \( \{ N_{2} (t),\,t \ge 0\} \), as in our specific case when a Type I event (fatal shock) leads to the termination of the process (failure). Therefore, in order to describe the lifetime to termination, it is obviously sufficient to consider \( \{ N_{2} (t),\,t \ge 0\} \), and not the original \( \{ N(t),\,t \ge 0\} \).
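The thinning property described above can be illustrated by a small Monte Carlo sketch (Python; the linear classification probability \( p(x) = x/t \) and all numerical values are hypothetical choices of ours). It checks that the mean counts of Type I and Type II events match \( \int_0^t p(x)\nu \,{\text{d}}x \) and \( \int_0^t q(x)\nu \,{\text{d}}x \):

```python
import math, random

def simulate_thinned_counts(t, nu, p, n_rep=20000, seed=1):
    """Thin a homogeneous Poisson process of rate nu on [0, t]:
    an event at time x is Type I w.p. p(x), Type II otherwise.
    Returns the empirical means of N1(t) and N2(t)."""
    rng = random.Random(seed)
    s1 = s2 = 0
    for _ in range(n_rep):
        x = rng.expovariate(nu)          # first arrival time
        while x < t:
            if rng.random() < p(x):
                s1 += 1                  # Type I event
            else:
                s2 += 1                  # Type II event
            x += rng.expovariate(nu)     # next interarrival time
    return s1 / n_rep, s2 / n_rep
```

With \( \nu = 1.5 \), \( t = 2 \) and \( p(x) = x/2 \), both means should be close to \( \int_0^2 (x/2)\, 1.5\,{\text{d}}x = 1.5 \).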

We will use a similar reasoning for a more general \( p(t|H(t)) \Leftrightarrow q(t|H(t)) \) model considered above, although interpretation of the types of events will be slightly different in this case. In the following, in accordance with our previous notation, \( N_{2} (t) = N_{q\nu } (t) \) and the arrival times of this process are denoted by \( T_{(q\nu )1} ,\,T_{(q\nu )2} , \ldots \).

The multiplicative form of the specific result in (4.13) indicates that it might also be obtained and interpreted via the following general reasoning, which can be useful for probabilistic analysis of various extensions of standard extreme shock models. Considering the classical \( p(t) \Leftrightarrow q(t) \) extreme shock model, assume that there can be other, additional causes of termination that depend either directly on the history of the point process (as in Model A) or on some other variables, as for the marked point process, when each event is ‘characterized’ by some variable (e.g., damage or wear). Just for the sake of definiteness of presentation, let us call this ‘initial’ cause of failure, which corresponds to the \( p(t) \Leftrightarrow q(t) \) model, the main or the critical cause of failure (termination), and let us call the shock that leads to this event the critical shock (Type I event). However, distinct from the \( p(t) \Leftrightarrow q(t) \) model, the Type II events, which follow the Poisson process with rate \( q(t)\,\nu (t) \), can now also result in failure.

Let \( E_{C} (t) \) denote the event that there were no critical shocks until time \( t \) in the absence of other causes of failures. Then, obviously,

$$ P(T_{S} \ge t|E_{C} (t)) = \frac{{P(T_{S} \; \ge \;t,\;E_{C} (t))}}{{P(E_{C} (t))}} = \frac{{P(T_{S} \; \ge \;t)}}{{P(E_{C} (t))}}, $$

and, thus,

$$ P(T_{S} \ge t) = P(T_{S} \ge t|E_{C} (t))\,P(E_{C} (t)), $$

where

$$ P(E_{C} (t)) = P(N_{1} (t) = 0) = \exp \left\{ { - \int\limits_{0}^{t} {p\left( x \right)} \nu (x)\,{\text{d}}x} \right\}. $$
(4.15)

Therefore, in accordance with our previous reasoning and notation, we can describe \( P(T_{S} \ge t|E_{C} (t)) \) in terms of the process \( \{ N_{q\nu } (t),\,t \ge 0\} \) (and not in terms of the original process \( \{ N(t),\,t \ge 0\} \)) in the following general form to be specified for the forthcoming model:

$$ P(T_{S} \ge t|E_{C} (t)) = E(I(\Uppsi (N_{q\nu } (t),\,\Uptheta ) \in S)|E_{C} (t)), $$

where \( I( \cdot ) \) is the corresponding indicator, \( \Uptheta \) is a set of random variables that are ‘responsible’ for other causes of failure (see later), \( \Uppsi (N_{q\nu } (t),\,\Uptheta ) \) is a real-valued function of \( (N_{q\nu } (t),\,\Uptheta ) \) which represents the state of the system at time \( t \) (given \( E_{C} (t) \), i.e., no critical shock has occurred), and \( S \) is a set of real values which defines the survival of the system in terms of \( \Uppsi (N_{q\nu } (t),\,\Uptheta ) \). That is, if the critical shock has not occurred, the system survives when \( \Uppsi (N_{q\nu } (t),\,\Uptheta ) \in S \).

In order to apply Model A effectively, we have to reinterpret it as follows. Suppose first that the system is composed of two parts in series and that each shock affects only one component. If it hits the first component (with probability \( p(t) \)), it directly causes its (and the system's) failure (the critical shock). On the other hand, if it hits the second component (with probability \( q(t) \)), then this component fails with probability \( 1 - \rho (n(t)) \) and survives with probability \( \rho (n(t)) \). This interpretation nicely conforms with the two independent causes of failure model in (4.12). Note that, in fact, we are speaking about the conditional independence of causes of failure (on condition that a shock from the Poisson process with rate \( \nu (t) \) has occurred).

Another (and probably more practical) interpretation is as follows. Assume that there are some parts of a system (component 1) that are critical only to, e.g., the shock’s level of severity, which is assumed to be random. This results in failure with probability \( p(t) \). On the other hand, the other parts (component 2) are critical only to accumulation of damage (failure with probability \( 1 - \rho (n(t)) \)). Assuming the series structure and the corresponding independence, we arrive at the survival (on shock) probability (4.12).

We can now define the function \( \Uppsi (N_{q\nu } (t),\,\Uptheta ) \) for Model A. Suppose that there have been no critical shocks in \( [0,\;t) \) and let \( \varphi_{i} = 1 \) if the second component survives the \( i \)th shock, and \( \varphi_{i} = 0 \) otherwise, \( i = 1,\,2, \ldots ,\,N_{q\nu } (t) \). Then

$$ \Uppsi (N_{q\nu } (t),\,\Uptheta ) = \prod\limits_{i\; = \;1}^{{N_{q\nu } (t)}} {\varphi_{i} } , $$

and \( S = \{ 1\} \). Therefore, as events \( E_{C} (t) \) and \( \Uppsi (N_{q\nu } (t),\,\Uptheta ) \in S \) are ‘related’ only to the first and the second causes of failure, respectively, and these causes of failure are independent, we have:

$$ \begin{aligned} & P(T_{S} \ge t|E_{C} (t)) \\ &= E(I(\Uppsi (N_{q\nu } (t),\,\Uptheta ) \in S)|E_{C} (t)) \\ &= E(I(\Uppsi (N_{q\nu } (t),\,\Uptheta ) \in S)) \\ &= E\left( {I\left( {\prod\limits_{i\, = \,1}^{{N_{q\nu } (t)}} {\varphi_{i} } = 1} \right)} \right) = E\left[ {P\left( {\prod\limits_{i\, = \,1}^{{N_{q\nu } (t)}} {\varphi_{i} } = 1|N_{q\nu } (t)} \right)} \right] \\ &= E\left( {\prod\limits_{i\, = \,1}^{{N_{q\nu } (t)}} {\rho (i)} } \right). \\ \end{aligned} $$

Combining this equation with (4.15), we arrive at the original result in (4.13).

Model B. Consider now another type of extreme shock model, which is, in fact, a generalization of Model A. In Model A, the second cause of failure (termination) was due to the number of noncritical shocks, no matter what the severity of these shocks was. Now, we will count only those shocks (to be called ‘dangerous’) whose severity is larger than some level \( \kappa \). Assume that the second cause of failure ‘materializes’ only when the number of dangerous shocks exceeds some random level \( M \). That is, given \( M = m \), in the absence of critical shocks, the system fails as soon as it experiences the \( (m + 1) \)th dangerous shock.

Assume that the shock's severity is a random variable with the Cdf \( G(t) \), and the survival function for \( M \), \( P(M > l),\,l = 0,\,1,\,2, \ldots \), is also given. Suppose that there have been no critical shocks until time \( t \) and let \( \varphi_{i} \) be the indicator random variable (\( \varphi_{i} = 1 \) if the \( i \)th shock is dangerous and \( \varphi_{i} = 0 \) otherwise). Then, as previously,

$$ \Uppsi (N_{q\nu } (t),\,\Uptheta ) = I\left( {M\; \ge \;\sum\limits_{i\, = \,1}^{{N_{q\nu } (t)}} {\varphi_{i} } } \right), $$

and \( S = \{ 1\} \). Thus

$$ \begin{aligned} & P(T_{S} \ge t|E_{C} (t)) \\ & = E(I(\Uppsi (N_{q\nu } (t),\,\Uptheta ) \in S)) \\ & = E\left( {I\left( {M \ge \sum\limits_{i\, = \,1}^{{N_{q\nu } (t)}} {\varphi_{i} } } \right)} \right) \\ & = P\left( {M \ge \sum\limits_{i\, = \,1}^{{N_{q\nu } (t)}} {\varphi_{i} } } \right) \\ & = E\left[ {P\left( {M \ge \sum\limits_{i\, = \,1}^{{N_{q\nu } (t)}} {\varphi_{i} |} N_{q\nu } (t)} \right)} \right], \\ \end{aligned} $$

where

$$ \begin{aligned} & P\left( {M \ge \sum\limits_{i\, = \,1}^{{N_{q\nu } (t)}} {\varphi_{i} } |N_{q\nu } (t) = n} \right) \\ &= P(M > n|N_{q\nu } (t) = n)\; + \;\sum\limits_{m\, = \,0}^{n} {P\left( {M \ge \sum\limits_{i\, = \,1}^{n} {\varphi_{i} } |N_{q\nu } (t) = n,\,M = m} \right)\, \cdot \,} P(M = m|N_{q\nu } (t) = n) \\ &= P(M > n)\; + \;\sum\limits_{m\, = \,0}^{n} {\sum\limits_{l\, = \,0}^{m} {\left( {\begin{array}{*{20}c} n \\ l \\ \end{array} } \right)\,\bar{G}(\kappa )^{l} \,G(\kappa )^{n\; - \;l} \, \cdot \,} } P(M = m) \\ &= P(M > n)\; + \;\sum\limits_{l\, = \,0}^{n} {\sum\limits_{m\, = \,l}^{n} {\left( {\begin{array}{*{20}c} n \\ l \\ \end{array} } \right)\bar{G}(\kappa )^{l} \,G(\kappa )^{n\; - \;l} \, \cdot \,} } P(M = m) \\ &= P(M > n)\; + \;\sum\limits_{l\, = \,0}^{n} {\left( {\begin{array}{*{20}c} n \\ l \\ \end{array} } \right)\bar{G}(\kappa )^{l} \,G(\kappa )^{n\; - \;l} \, \cdot \,\left( {P(M \ge l)\; - \;P(M \ge n + 1)} \right)} \\ &= \sum\limits_{l\, = \,0}^{n} {\left( {\begin{array}{*{20}c} n \\ l \\ \end{array} } \right)\bar{G}(\kappa )^{l} \,G(\kappa )^{n\; - \;l} \, \cdot \,P(M \ge l)} . \\ \end{aligned} $$

Thus, similarly to the derivations of the previous section,

$$ P(T_{S} \ge t|E_{C} (t)) = \sum\limits_{n\, = \,0}^{\infty } {\left[ {\sum\limits_{l\, = \,0}^{n} {P(M \ge l)\, \cdot \,\left( {\begin{array}{*{20}c} n \\ l \\ \end{array} } \right)\bar{G}(\kappa )^{l} \,G(\kappa )^{n\; - \;l} } } \right]} \, \cdot \,m_{q} (t)^{n} \frac{{\exp \left\{ { - m_{q} (t)} \right\}}}{n!}, $$

where \( m_{q} (t) \equiv \int_{ \, 0}^{ \, t} {q(x)\nu (x)} \,{\text{d}}x\), and finally, we have

$$ P(T_{S} \ge t) = \exp \left\{ { - \int\limits_{0}^{t} {p(x)} \nu (x)\,{\text{d}}x} \right\}\, \cdot \,\sum\limits_{n\, = \,0}^{\infty } {\left[ {\sum\limits_{l\, = \,0}^{n} {P(M \ge l) \cdot \left( {\begin{array}{*{20}c} n \\ l \\ \end{array} } \right)\overline{G} (\kappa )^{l} \,G(\kappa )^{n\; - \;l} } } \right]} \cdot \,m_{q} (t)^{n} \frac{{\exp \left\{ { - m_{q} (t)} \right\}}}{n!} $$
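As a numerical illustration of the last formula, the following sketch (Python) truncates the outer series; the constant rate \( \nu \), constant \( p \), exponential severity distribution (mean mu) and geometric survival function \( P(M \ge l) = \theta^{l} \) are assumptions of our own, made only for definiteness:

```python
import math

def model_b_survival(t, nu, p, kappa, mu, theta, n_max=120):
    """P(T_S >= t) for Model B with constant shock rate nu, constant
    critical-shock probability p, exponential severity with mean mu
    (so G_bar(kappa) = exp(-kappa/mu)) and P(M >= l) = theta**l."""
    q = 1.0 - p
    m_q = q * nu * t                    # integral of q nu over [0, t]
    G_bar = math.exp(-kappa / mu)       # P(severity > kappa)
    G = 1.0 - G_bar
    total = 0.0
    for n in range(n_max):
        inner = sum(theta ** l * math.comb(n, l) * G_bar ** l * G ** (n - l)
                    for l in range(n + 1))
        total += inner * m_q ** n * math.exp(-m_q) / math.factorial(n)
    return math.exp(-p * nu * t) * total
```

For this particular choice, the binomial theorem collapses the inner sum to \( (G(\kappa )\; + \;\theta \bar{G}(\kappa ))^{n} \), so that \( P(T_{S} \ge t) = \exp \{ - p\nu t\} \exp \{ - m_{q} (t)\bar{G}(\kappa )(1 - \theta )\} \); this closed form is a convenient check of the truncated series.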

Note that, when the expression for \( P(T_{S} \ge t|E_{C} (t)) \) involves not only the number of shocks \( N_{q\nu } (t) \) but also the filtration generated by \( (N_{q\nu } (s),\,0 \le s \le t) \), the computation becomes intensive and the results might not be useful in practice. The corresponding example with numerical results can be found in [6].

4.3 Shot Noise Process for the Failure Rate

4.3.1 Shot Noise Process Without Critical Shocks

Assume that a system is subject to the NHPP of shocks \( \{ N(t),\,t \ge 0\} \) with rate \( \nu (t) \), which is the only possible cause of its failure. The consequences of shocks are accumulated in accordance with the ‘standard’ shot noise process \( X(t) \), \( X(0) = 0 \) (see e.g., [26], [27] and the previous chapter). Similar to (3.8), but in a slightly different notation that is more convenient here, define the level of the cumulative stress (wear) at time \( t \) as the following stochastic process:

$$ X(t) = \sum\limits_{j\, = \,1}^{N(t)} {D_{j} h(t - T_{j} )} , $$
(4.16)

where \( T_{j} \) is the \( j \)th arrival time in the shock process, \( D_{j} ,\,j = 1,\,2,\, \ldots \) are the i.i.d. magnitudes of shocks and \( h(t) \) is a non-negative, nonincreasing for \( t \ge 0 \), deterministic function (\( h(t) = 0 \) for \( t < 0 \)). The usual assumption for considering asymptotic properties of \( X(t) \) is that \( h(t) \) vanishes as \( t \to \infty \) and its integral over \( [0,\;\infty ) \) is finite; however, we do not formally need this rather restrictive assumption here. The shock process \( \{ N(t),\,t \ge 0\} \) and the sequence \( \{ D_{1} ,\,D_{2} ,\, \ldots \} \) are assumed to be independent.

The cumulative stress eventually results in failures, which can be probabilistically described in different ways. Denote by \( T_{S} \), as previously, the failure time of our system. Lemoine and Wenocur [23, 24], for example, modeled the distribution of \( T_{S} \) by assuming that the corresponding intensity process is proportional to \( X(t) \) (see (2.12) for a general definition). As we are dealing with the intensity process, we will rather use the term “stress” instead of “wear”. Proportionality is a reasonable assumption, as it describes the proportional dependence of the probability of failure in an infinitesimal interval of time on the current level of stress:

$$ \lambda_{t} \equiv k\,X(t)\; = \;k\sum\limits_{j\; = \;1}^{N(t)} {D_{j} \,h(t\; - \;T_{j} )} , $$
(4.17)

where \( k > 0 \) is the constant of proportionality. Then

$$ \begin{aligned} & P(T_{S} > t|N(s),\,0 \le s \le t,\,D_{1} ,\,D_{2} , \ldots ,\,D_{N(t)} ) \\ &= \exp \left\{ { - k\int\limits_{0}^{t} {\sum\limits_{j\; = \;1}^{N(x)} {D_{j} \,h(x\; - \;T_{j} )} \,{\text{d}}x} } \right\}. \\ \end{aligned} $$
(4.18)

This means that the intensity process (4.17) can also be considered as the failure rate process [22]. Probability (4.18) should be understood conditionally on the corresponding realizations of \( \{ N(s),\,0 \le s \le t\} \) and \( D_{1} ,\,D_{2} , \ldots ,\,D_{N(t)} \). Therefore, ‘integrating them out’,

$$ P(T_{S} > t) = E\left[ {\exp \left\{ { - k\int\limits_{0}^{t} {X(u)\,{\text{d}}u} } \right\}} \right]. $$

Lemoine and Wenocur [24] finally derived the following relationship for the survival probability \( P(T_{S} > t) \):

$$ P(T_{S} > t) = \exp \{ - m(t)\} \,\exp \left\{ {\int\limits_{0}^{t} {L(kH(u))\,\nu (t\; - \;u)\,{\text{d}}u} } \right\}, $$
(4.19)

where \( m(t) = \int_{0}^{t} {\nu (u)\,{\text{d}}u} ,\quad H(t) = \int_{0}^{t} {h(u)\,{\text{d}}u} \) and \( L( \cdot ) \) is the operator of the Laplace transform with respect to the distribution of the shock's magnitude. In what follows, we generalize the approach of these authors to the case when a system can also fail due to a fatal shock with a magnitude exceeding a time-dependent bound, which is more realistic in practice.
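Formula (4.19) is easy to evaluate once \( L( \cdot ) \) is known. A minimal sketch (Python), assuming a constant rate \( \nu \), \( h(u) = e^{ - u} \) (so that \( H(u) = 1 - e^{ - u} \)) and exponentially distributed magnitudes with rate lam_d, for which \( L(s) = \lambda_{D} /(\lambda_{D} + s) \); these choices are ours, made only for illustration:

```python
import math

def shot_noise_survival(t, nu, k, lam_d, n_grid=2000):
    """P(T_S > t) from Eq. (4.19) for constant rate nu, h(u) = exp(-u)
    (hence H(u) = 1 - exp(-u)) and exponential magnitudes with rate
    lam_d, whose Laplace transform is L(s) = lam_d / (lam_d + s)."""
    def L(s):
        return lam_d / (lam_d + s)
    # trapezoidal rule for the integral of L(k H(u)) * nu over [0, t]
    du = t / n_grid
    us = [i * du for i in range(n_grid + 1)]
    vals = [L(k * (1.0 - math.exp(-u))) * nu for u in us]
    integral = du * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return math.exp(-nu * t) * math.exp(integral)
```

With \( k = 0 \) the stress has no effect and the formula returns 1, a useful sanity check; in general the value lies between \( e^{ - m(t)} \) and 1.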

4.3.2 Shot Noise Process with Critical Shocks and Deterioration

Model 1. In addition to the general assumptions of Lemoine and Wenocur [24] stated in the previous subsection, let, on each shock, depending on its magnitude \( D_{j} ,\,j = 1,\,2, \ldots \), one of the following mutually exclusive events occur [11]:

  1. (i)

    If \( D_{j} > g_{U} (T_{j} ) \), then the shock results in an immediate failure of the system;

  2. (ii)

    If \( D_{j} \le g_{L} (T_{j} ) \), then the shock does not cause any change in the system (it is harmless);

  3. (iii)

    If \( g_{L} (T_{j} ) < D_{j} \le g_{U} (T_{j} ) \), then the shock increases the stress by \( D_{j} \,h(0) \),

where \( g_{U} (t),\;g_{L} (t) \) are decreasing, deterministic functions.

The functions of operating time \( g_{U} (t),\;g_{L} (t) \) define the corresponding upper and lower bounds. Because they are decreasing, the probability that a shock arriving at time \( t \) results in the system's failure increases in time, whereas the probability that the shock is harmless decreases with time. A deterioration of our system is therefore described in this way. The function \( g_{U} (t) \) can be interpreted as the strength of our system with respect to shocks, whereas the function \( g_{L} (t) \) can be interpreted as its ‘sensitivity’ to shocks. In many instances, they can be defined from general ‘physical considerations’ based on the criterion of failure of a system. For instance, the minimum peak voltage that can ruin a new electronic item is usually given in its specifications.

Define the following ‘membership function’:

$$ \xi (T_{j} ,\,D_{j} )\; = \;\left\{ \begin{gathered} 1,\quad g_{L} (T_{j} ) < D_{j} \le g_{U} (T_{j} ) \hfill \\ 0,\quad D_{j} \le g_{L} (T_{j} ) \hfill \\ \end{gathered} \right.. $$
(4.20)

Using this notation, the cumulative stress, similar to (4.16), can be written as

$$ X(t) \equiv \sum\limits_{j = 1}^{N(t)} {\xi (T_{j} ,\,D_{j} )\,D_{j} \,h(t\; - \;T_{j} )} , $$
(4.21)

provided that the system is operating at time \( t \) [i.e., the event \( D_{j} > g_{U} (T_{j} ),\,j = 1,\,2, \ldots \) did not happen in \( [0,\,t) \)].

Generalizing (4.17), assume that the conditional failure rate process \( \hat{\lambda }_{t} \) (on condition that the event \( D_{j} > g_{U} (T_{j} ),\,j = 1,\,2, \ldots \) did not happen in \( [0,\,t) \) and \( \{ N(t),\,T_{1} ,\,T_{2} , \ldots ,\,T_{N(t)} \} \) and \( \{ D_{1} ,\,D_{2} , \ldots ,\,D_{N(t)} \} \) are given) is proportional to \( X(t) \)

$$ \hat{\lambda }_{t} \equiv k\,X(t) = k\sum\limits_{j\, = \,1}^{N(t)} {\xi (T_{j} ,\;D_{j} )\,D_{j} h(t - T_{j} )} ,\;k > 0. $$
(4.22)

It is clear that conditionally on the corresponding history

  1. (i)

    If \( D_{j} > g_{U} (T_{j} ) \), for at least one \( j \), then

$$ P(T_{S} > t\,|\,N(s),\;0 \le s \le t,\;D_{1} ,\,D_{2} , \ldots ,\,D_{N(t)} )\; = \;0; $$
  2. (ii)

    If \( D_{j} \le g_{U} (T_{j} ) \), for all \( j \), then

$$ P(T_{S} > t\,|\,N(s),\;0 \le s \le t,\;D_{1} ,\,D_{2} , \ldots ,\,D_{N(t)} ) = \exp \left\{ { - k\int\limits_{0}^{t} {\sum\limits_{j\, = \,1}^{N(x)} {\xi (T_{j} ,\,D_{j} )\,D_{j} \,h(x\; - \;T_{j} )} \,{\text{d}}x} } \right\}. $$

Therefore,

$$ \begin{aligned} & P(T_{S} > t\;|\;N(s),\;0 \le s \le t,\;D_{1} ,\,D_{2} , \ldots ,\,D_{N(t)} ) \\ &= \prod\limits_{j\, = \,1}^{N(t)} \gamma (T_{j} ,\,D_{j} )\, \cdot \,\exp \left\{ { - k\int\limits_{0}^{t} {\sum\limits_{j\, = \,1}^{N(x)} {\xi (T_{j} ,\,D_{j} )\,D_{j} h(x\; - \;T_{j} )} \,{\text{d}}x} } \right\}, \\ \end{aligned} $$
(4.23)

where

$$ \gamma (T_{j} ,\,D_{j} )\; = \;\left\{ {\begin{array}{*{20}c} {0,\quad D_{j} > g_{U} (T_{j} )} \\ {1,\quad D_{j} \le g_{U} (T_{j} )} \\ \end{array} } \right.. $$
(4.24)

Thus, we have described a rather general model that extends (4.18) to the defined deterioration pattern. Indeed, if \( g_{U} (t) = \infty ;\,g_{L} (t) = 0 \), then \( \xi (T_{j} ,D_{j} ) \equiv 1 \) and (4.23) reduces to (4.18) with the corresponding survival probability (4.19). On the other hand, let \( g_{U} (t) = g_{L} (t) = g(t) \). Then, defining \( p(t) = P(D_{j} > g(t)) \) as the probability of failure under a shock at time \( t \) (and \( q(t) = P(D_{j} \le g(t)) \)), we obviously arrive at the \( p(t) \Leftrightarrow q(t) \) model described by Eq. (4.1).

On the basis of the model described above, we will now derive the (unconditional) survival function and the corresponding failure rate function. First, we need the following general lemma (see [13] for the proof):

Lemma 4.1

Let \( X_{1} ,\,X_{2} , \ldots ,\,X_{n} \) be i.i.d. random variables and \( Z_{1} ,\,Z_{2} , \ldots ,\,Z_{n} \) be i.i.d. continuous random variables with the corresponding common pdf. Furthermore, let \( {\rm X} = (X_{1} ,\,X_{2} , \ldots ,\,X_{n} ) \) and \( {\rm Z} = (Z_{1} ,\,Z_{2} , \ldots ,\,Z_{n} ) \) be independent. Suppose that the function \( \varphi (x,\,z)\;:\;R^{n} \; \times \;R^{n} \; \to \;R \) satisfies \( \varphi ({\rm X},\,t)\; =^{d} \varphi ({\rm X},\,\pi (t)) \), for any vector \( t \in R^{n} \) and for any n-dimensional permutation function \( \pi ( \cdot ) \). Then

$$ \varphi ({\rm X},\,{\rm Z}) =^{d} \varphi ({\rm X},\,{\rm Z}^{*} ), $$

where \( {\rm Z}^{*} = (Z_{(1)} ,\,Z_{(2)} , \ldots ,\,Z_{(n)} ) \) is the vector of the order statistics of \( {\rm Z} \).

We are now ready to prove the following theorem [11].

Theorem 4.4

Let \( H(t) = \int_{0}^{t} {h(v)\,{\text{d}}v} ,\;m\left( t \right) \equiv E(N(t)) = \int_{ \, 0}^{ \, t} {\nu \left( x \right)} \,{\text{d}}x \) and \( f_{D} (u),\,F_{D} (u) \) be the pdf and the Cdf of \( D =^{d} D_{j} ,\;j = 1,\,2,\, \ldots \). Assume that the inverse function \( m^{ - 1} \left( t \right) \) exists for t > 0. Then the survival function that corresponds to the lifetime \( T_{S} \) is

$$ \begin{aligned} & P(T_{S} > t) \\ & = \exp \left\{ { - \int\limits_{0}^{t} {\bar{F}_{D} (g_{L} (u))\,\nu (u)\,{\text{d}}u} } \right\}\;\exp \left\{ {\int\limits_{0}^{t} {\int\limits_{{g_{L} (s)}}^{{g_{U} (s)}} {\exp \{ - kuH(t\; - \;s)\} \,f_{D} (u)\,{\text{d}}u} \,\nu (s)\,{\text{d}}s} } \right\}, \\ \end{aligned} $$
(4.25)

and the corresponding failure rate is

$$ \lambda_{S} (t) = P(D > g_{U} (t))\,\nu (t)\; + \;\int\limits_{0}^{t} {\int\limits_{{g_{L} (s)}}^{{g_{U} (s)}} {kuh(t\; - \;s)\;\exp \{ - kuH(t\; - \;s)\} \,f_{D} (u)\,{\text{d}}u} \,\nu (s)\,{\text{d}}s} . $$
(4.26)

Proof

Observe that

$$ \begin{aligned} & P(T_{S} > t\;|\;N(s),\;0 \le s \le t,\;D_{1} ,\,D_{2} , \ldots ,\,D_{N(t)} ) \\ &= \prod\limits_{j\, = \,1}^{N(t)} \gamma (T_{j} ,\,D_{j} )\;\exp \left\{ { - k\sum\limits_{j\, = \,1}^{N(t)} {\xi (T_{j} ,\,D_{j} )\,D_{j} \,H(t\; - \;T_{j} )} } \right\} \\ &= \exp \left\{ {\sum\limits_{j\, = \,1}^{N(t)} {(\ln \gamma (T_{j} ,\,D_{j} )\; - \;k\xi (T_{j} ,\,D_{j} )\,D_{j} \,H(t\; - \;T_{j} ))} } \right\}. \\ \end{aligned} $$

Therefore,

$$ \begin{aligned} P(T_{S}>t) &= E\left[{\exp \left\{ {\sum\limits_{j\, = \,1}^{N(t)} {(\ln \gamma (T_{j} ,\,D_{j} )\; - \;k\xi (T_{j} ,\,D_{j} )\,D_{j} H(t\; - \;T_{j} ))} } \right\}} \right] \\ &= E\left[ {E\left( {\exp \left\{ {\sum\limits_{j\, = \,1}^{N(t)} {(\ln \gamma (T_{j} ,\,D_{j} )\; - \;k\xi (T_{j} ,\,D_{j} )\,D_{j} \,H(t\; - \;T_{j} ))} } \right\}\left| {N(t)} \right.} \right)} \right]. \\ \end{aligned}$$

As previously, if \( m^{ - 1} \left( t \right) \) exists, then the joint distribution of \( T_{1} ,\,T_{2} , \ldots ,\,T_{n} \), given \( N(t) = n \), is the same as the joint distribution of the order statistics \( T_{(1)} ' \le T_{(2)} ' \le \ldots \le T_{(n)} ' \) of i.i.d. random variables \( T'_{1} ,T_{2} ', \ldots ,T_{n} ' \), where the pdf of the common distribution of \( T_{j} ' \)’s is given by \( \nu (x)/m(t) \). Thus,

$$ \begin{aligned} & E\left( {\exp \left\{ {\sum\limits_{j\, = \,1}^{N(t)} {(\ln \gamma (T_{j} ,D_{j} )\; - \;k\xi (T_{j} ,D_{j} )\,D_{j} \,H(t\; - \;T_{j} ))} } \right\}\left| {N(t) = n} \right.} \right) \\ &= E\left( {\exp \left\{ {\sum\limits_{j\, = \,1}^{n} {(\ln \gamma (T_{(j)} ',D_{j} )\; - \;k\xi (T_{(j)} ',D_{j} )\,D_{j} H(t\; - \;T_{(j)} '))} } \right\}} \right). \\ \end{aligned} $$

Let \( {\rm X} = (D_{1} ,D_{2} , \ldots ,D_{n} ) \), \( {\rm Z} = (T_{1} ',T_{2} ', \ldots ,T_{n} ') \) and

$$ \varphi ({\rm X},\,{\rm Z}) \equiv \sum\limits_{j = 1}^{n} {(\ln \gamma (T_{j} ',D_{j} )\; - \;k\xi (T_{j} ',D_{j} )\,D_{j} H(t\; - \;T_{j} '))} . $$
(4.27)

Note that, as was mentioned, if \( g_{U} (t) = \infty ;\;g_{L} (t) = 0 \), then \( \xi (T_{j} ,D_{j} ) \equiv 1 \) and our model reduces to the original model of Lemoine and Wenocur [24], where each term in \( \varphi ({\rm X},{\rm Z}) \) is just a simple product of \( D_{j} \) and \( H(t\; - \;T_{j} ') \). Due to this simplicity, the rest was straightforward. Now we have a much more complex form of \( \varphi ({\rm X},{\rm Z}) \), as given in (4.27), where the terms in the sum cannot be factorized.

Observe that the function \( \varphi (x,z) \) satisfies

$$ \varphi ({\rm X},t) =^{d} \varphi ({\rm X},\;\pi (t)) $$

for any vector \( t \in R^{n} \) and for any n-dimensional permutation function \( \pi ( \cdot ) \). Thus, applying Lemma 4.1,

$$ \begin{aligned} & \sum\limits_{j\, = \,1}^{n} {(\ln \gamma (T_{j} ',D_{j} )\; - \;k\xi (T_{j} ',D_{j} )\,D_{j} \,H(t\; - \;T_{j} '))} \\ &=^{d} \sum\limits_{j\, = \,1}^{n} {(\ln \gamma (T_{(j)} ',D_{j} )\; - \;k\xi (T_{(j)} ',D_{j} )\,D_{j} \,H(t\; - \;T_{(j)} '))} \\ \end{aligned} $$

and, therefore,

$$ \begin{aligned} & E\left( {\exp \left\{ {\sum\limits_{j\, = \,1}^{n} {(\ln \gamma (T_{(j)} ',D_{j} )\; - \;k\xi (T_{(j)} ',\,D_{j} )\,D_{j} \,H(t\; - \;T_{(j)} '))} } \right\}} \right) \\ &= E\left( {\exp \left\{ {\sum\limits_{j\, = \,1}^{n} {(\ln \gamma (T_{j} ',D_{j} )\; - \;k\xi (T_{j} ',D_{j} )\,D_{j} \,H(t\; - \;T_{j} '))} } \right\}} \right) \\ &= \left( {E\left( {\exp \{ \ln \gamma (T_{1} ',D_{1} )\; - \;k\xi (T_{1} ',D_{1} )\,D_{1} \,H(t\; - \;T_{1} ')\} } \right)} \right)^{n} . \\ \end{aligned} $$

As

$$ \begin{aligned} & E\left[ {\exp \{ \ln \gamma (T_{1} ',D_{1} )\; - \;k\xi (T_{1} ',D_{1} )\,D_{1} \,H(t\; - \;T_{1} ')\} \;|\;T_{1} '\; = \;s} \right] \\ &= E\left[ {\exp \{ \ln \gamma (s,D_{1} )\; - \;k\xi (s,D_{1} )\,D_{1} \,H(t\; - \;s)\} } \right] \\ &= \int\limits_{{g_{L} (s)}}^{{g_{U} (s)}} {\exp \{ - kuH(t\; - \;s)\} } \,f_{D} (u)\,{\text{d}}u\; + \;P(D_{1} \le g_{L} (s)), \\ \end{aligned} $$
(4.28)

where we have used that, for \( D_{1} > g_{U} (s) \), \( \exp \{ \ln \gamma (s,\,D_{1} )\; - \;k\xi (s,D_{1} )\,D_{1} \,H(t\; - \;s)\} = 0 \) for all \( s > 0 \), the unconditional expectation is

$$ \begin{aligned} & E\left[ {\exp \{ \ln \gamma (T_{1} ',D_{1} )\; - \;k\xi (T_{1} ',D_{1} )\,D_{1} \,H(t\; - \;T_{1} ')\} } \right] \\ &= \int\limits_{0}^{t} {\int\limits_{{g_{L} (s)}}^{{g_{U} (s)}} {\exp \{ - kuH(t\; - \;s)\} } \,f_{D} (u)\,{\text{d}}u} \,\frac{\nu (s)}{m(t)}{\text{d}}s\; + \;\int\limits_{0}^{t} {P(D_{1} \le g_{L} (s))} \,\frac{\nu (s)}{m(t)}{\text{d}}s. \\ \end{aligned} $$

Let

$$ \alpha (t) \equiv \int\limits_{0}^{t} {\int\limits_{{g_{L} (s)}}^{{g_{U} (s)}} {\exp \{ - kuH(t\; - \;s)\} } \,f_{D} (u)\,{\text{d}}u\,} \nu (s)\,{\text{d}}s\; + \;\int\limits_{0}^{t} {P(D_{1} \le g_{L} (s))} \,\nu (s)\,{\text{d}}s, $$

and we finally arrive at

$$ \begin{aligned} & P(T_{S} > t) \\ &= \sum\limits_{n\, = \,0}^{\infty } {\left( {\frac{\alpha (t)}{m(t)}} \right)^{n} \,} \cdot \,\frac{{m(t)^{n} }}{n!}\,\exp \left\{ { - \int\limits_{0}^{t} {\nu (u)\,{\text{d}}u} } \right\} \\ &= \exp \left\{ { - \int\limits_{0}^{t} {\nu (u)\,{\text{d}}u} \; + \;\int\limits_{0}^{t} {\int\limits_{{g_{L} (s)}}^{{g_{U} (s)}} {\exp \{ - kuH(t\; - \;s)\} \,f_{D} (u)\,{\text{d}}u} \,\nu (s)\,{\text{d}}s} + \int\limits_{0}^{t} {P(D_{1} \; \le \;g_{L} (u))\nu (u)\,{\text{d}}u} } \right\} ,\\ \end{aligned} $$

which is obviously equal to (4.25).

The corresponding failure rate can be obtained as

$$ \begin{aligned} \lambda_{S} (t) &= - \frac{\text{d}}{{\text{d}}t}\ln P(T_{S} > t) \\ &= \nu (t)\; - \;P(g_{L} (t) \le D_{1} \le g_{U} (t))\,\nu (t) \\ & + \int\limits_{0}^{t} {\int\limits_{{g_{L} (s)}}^{{g_{U} (s)}} {kuh(t\; - \;s)\,\exp \{ - kuH(t\; - \;s)\} \,f_{D} (u)\,{\text{d}}u} \,\nu (s)\,{\text{d}}s\;} - \;P(D_{1} \le g_{L} (t))\,\nu (t) \\ &= P(D_{1} > g_{U} (t))\nu (t)\; + \;\int\limits_{0}^{t} {\int\limits_{{g_{L} (s)}}^{{g_{U} (s)}} {kuh(t\; - \;s)\,\exp \{ - kuH(t\; - \;s)\} \,f_{D} (u)\,{\text{d}}u\,} \nu (s)\,{\text{d}}s} , \\ \end{aligned} $$

where the Leibniz rule was used for differentiation of the double integral.

$$ \square $$

Relationship (4.26) suggests that (4.25) can be equivalently written as

$$ P(T_{S} > t) = \exp \left\{ { - \int\limits_{0}^{t} {\bar{F}_{D} (g_{U} (u))\,\nu (u)\,{\text{d}}u} } \right\}\,\exp \left\{ { - \int\limits_{0}^{t} {\int\limits_{0}^{x} {\int\limits_{{g_{L} (s)}}^{{g_{U} (s)}} {kuh(x\; - \;s)\,\exp \{ - kuH(x\; - \;s)\} \,f_{D} (u)\,{\text{d}}u} \,\nu (s)\,{\text{d}}s} \,{\text{d}}x} } \right\}. $$

Therefore, we can again interpret our system as a series one with two independent components: one that fails only because of fatal (critical) shocks and the other that fails because of nonfatal shocks.
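The survival function (4.25) can also be evaluated by straightforward quadrature. The sketch below (Python) is written for exponentially distributed magnitudes, \( h(u) = e^{ - u} \) and user-supplied bounds \( g_{U} ( \cdot ),\;g_{L} ( \cdot ) \); all of these, as well as the grid sizes, are illustrative assumptions of ours:

```python
import math

def theorem44_survival(t, nu, k, lam_d, g_U, g_L, n_grid=400):
    """P(T_S > t) from Eq. (4.25): exponential magnitudes with rate
    lam_d, h(u) = exp(-u) so H(u) = 1 - exp(-u); g_U, g_L are the
    decreasing upper/lower bounds, passed as functions of time."""
    H = lambda u: 1.0 - math.exp(-u)
    f_D = lambda u: lam_d * math.exp(-lam_d * u)
    F_bar = lambda x: math.exp(-lam_d * x)          # P(D > x)
    ds = t / n_grid
    first = 0.0      # integral of F_bar(g_L(u)) nu du (first exponent)
    second = 0.0     # double integral in the second exponent
    for i in range(n_grid):
        s = (i + 0.5) * ds                          # midpoint rule in s
        first += F_bar(g_L(s)) * nu * ds
        lo, hi = g_L(s), g_U(s)
        du = (hi - lo) / n_grid                     # midpoint rule in u
        second += sum(math.exp(-k * (lo + (j + 0.5) * du) * H(t - s))
                      * f_D(lo + (j + 0.5) * du) * du
                      for j in range(n_grid)) * nu * ds
    return math.exp(-first) * math.exp(second)
```

Setting \( g_{L} \equiv 0 \) and taking \( g_{U} \) large enough reproduces, up to discretization error, the Lemoine and Wenocur result (4.19), since for exponential magnitudes \( L(s) = \lambda_{D} /(\lambda_{D} + s) \).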

Example 4.2

Consider the special case when \( g_{U} (t) = \infty \) and \( g_{L} (t) = 0 \). Then the survival function in (4.25) is

$$ \begin{aligned} P(T_{S} > t) &= \exp \left\{ { - \int\limits_{0}^{t} {\bar{F}_{D} (g_{L} (u))\,\nu (u)\,{\text{d}}u} } \right\}\,\exp \left\{ {\int\limits_{0}^{t} {\int\limits_{{g_{L} (s)}}^{{g_{U} (s)}} {\exp \{ - kuH(t\; - \;s)\} \,f_{D} (u)\,{\text{d}}u\,\nu (s)\,{\text{d}}s} } } \right\} \\ &= \exp \{ - m(t)\} \,\exp \left\{ {\int\limits_{0}^{t} {L(kH(t\; - \;s))\,\nu (s)\,{\text{d}}s} } \right\} \\ &= \exp \{ - m(t)\} \,\exp \left\{ {\int\limits_{0}^{t} {L(kH(u))\,\nu (t\; - \;u)\,{\text{d}}u} } \right\}, \\ \end{aligned} $$

where \( L( \cdot ) \) is the operator of the Laplace transform with respect to \( f_{D} (u) \). Therefore, we arrive at Eq. (4.19) obtained in [24].

Example 4.3

Suppose that \( \nu (t) = \nu \), \( t \ge 0 \), \( D_{j} \equiv d \), \( j = 1,2, \ldots \), and there exist \( t_{2} > t_{1} > 0 \) such that

  • \( g_{U} (t) > g_{L} (t) > d \), for \( 0 \le t < t_{1} \) (shocks are harmless);

  • \( d > g_{U} (t) > g_{L} (t) \), for \( t_{2} < t \) (shocks are fatal), and

  • \( g_{U} (t) > d > g_{L} (t) \), for \( t_{1} < t < t_{2} \); \( g_{L} (t_{1} ) = g_{U} (t_{2} ) = d \).

For the sake of simple explicit integration, let \( h(t) = 1/(1\; + \;t) \), \( t \ge 0 \), and \( k = 1/d \) (for simplicity of notation). From Eq. (4.28),

$$\begin{aligned}& E[\exp \{ \ln \gamma (T^{\prime}_{1} ,D_{1} )\; - \;k\xi (T^{\prime}_{1} ,D_{1} )\,D_{1} H(t\; - \;T^{\prime}_{1} )\} \;|\;T^{\prime}_{1} = s]\\ &= \exp \{ \ln \gamma (s,d)\; - \;k\xi (s,d)\,dH(t\; - \;s)\} \\ &= \left\{ {\begin{array}{lll} {0,} & {\text{if}} & {d > g_{U} (s)\;(s > t_{2} )} \\ {\exp \{ - H(t\; - \;s)\} ,} & {\text{if}} & {g_{L} (s) < d \le g_{U} (s)\;(t_{1} < s \le t_{2} )} \\ {1,} & {\text{if}} & {d \le g_{L} (s)\;(s \le t_{1} )} \\ \end{array} } \right. \\ &= \exp \{ - H(t\; - \;s)\} \,I(t_{1} < s \le t_{2} )\; + \;I(s \le t_{1} ). \end{aligned}$$

Thus, ‘integrating \( T_{1} ' = s \) out’:

$$ \begin{aligned} & E\left[ {\exp \{ \ln \gamma (T_{1} ',D_{1} ) - k\xi (T_{1} ',D_{1} )\,D_{1} \,H(t\; - \;T_{1} ')\} } \right] \\ & = \frac{1}{m(t)}\left[ {\int\limits_{0}^{t} {\exp \{ - H(t - s)\} \,I(t_{1} < s \le t_{2} )} \,\nu (s)\,{\text{d}}s\; + \;\int\limits_{0}^{t} {I(s \le t_{1} )} \,\nu (s)\,{\text{d}}s} \right]. \\ \end{aligned} $$

Then,

$$ \begin{aligned} P(T_{S} > t) &= \exp \left\{ { - \int\limits_{0}^{t} {\nu (u)\,{\text{d}}u} \; + \;\int\limits_{0}^{t} {\exp \{ - H(t\; - \;s)\} \,I(t_{1} < s \le t_{2} )} \,\nu (s)\,{\text{d}}s\; + \;\int\limits_{0}^{t} {I(s \le t_{1} )} \,\nu (s)\,{\text{d}}s} \right\} \\ &= \exp \left\{ { - \int\limits_{0}^{t} {I(s > t_{1} )\,\nu (s)\,{\text{d}}s} \; + \;\int\limits_{0}^{t} {\exp \{ - H(t\; - \;s)\} \,I(t_{1} < s \le t_{2} )} \,\nu (s)\,{\text{d}}s} \right\}. \\ \end{aligned} $$

Thus [11],

  1. (i)

    For \( 0 \le t \le t_{1} \), \( P(T_{S} > t) = 1 \);

  2. (ii)

    For \( t_{1} \le t \le t_{2} \),

$$ \begin{aligned} P(T_{S} > t) &= \exp \left\{ { - \int\limits_{{t_{1} }}^{t} {\nu \,{\text{d}}u} } \right\}\,\exp \left\{ {\nu \int\limits_{{t_{1} }}^{t} {\exp \{ - H(t\; - \;s)\} } \,{\text{d}}s} \right\} \\ &= \exp \left\{ { - \nu (t\; - \;t_{1} )} \right\}\exp \left\{ {\nu \,\ln (1\; + \;t\; - \;t_{1} )} \right\} \\ &= \exp \left\{ { - \nu (t\; - \;t_{1} )} \right\}(1\; + \;t\; - \;t_{1} )^{\nu } ; \\ \end{aligned} $$
  1. (iii)

    For \( t_{2} \le t \),

$$ \begin{aligned} P(T_{S} > t) &= \exp \left\{ { - \int\limits_{{t_{1} }}^{t} {\nu \,{\text{d}}u} } \right\}\exp \left\{ {\nu \int\limits_{{t_{1} }}^{{t_{2} }} {\exp \{ - H(t\; - \;s)\} } \,{\text{d}}s} \right\} \\ &= \exp \{ - \nu (t\; - \;t_{1} )\} \left( {\frac{{1\; + \;t\; - \;t_{1} }}{{1\; + \;t\; - \;t_{2} }}} \right)^{\nu } , \\ \end{aligned} $$

which shows (compared with case (ii)) that if the system has survived in \( [0,\;t_{2} ] \), then the next shock will ‘kill’ it with probability 1.
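The three cases above define a continuous, decreasing survival function. A minimal numerical sketch (illustrative parameter values only) evaluates the piecewise closed form and checks continuity at the break points \( t_{1} \) and \( t_{2} \):

```python
import math

def survival_model1(t, nu, t1, t2):
    """Survival function P(T_S > t) for the Model 1 example above:
    shocks are harmless before t1, partially damaging on (t1, t2],
    and fatal after t2 (cases (i)-(iii))."""
    if t <= t1:                        # case (i): all shocks harmless
        return 1.0
    if t <= t2:                        # case (ii)
        return math.exp(-nu * (t - t1)) * (1.0 + t - t1) ** nu
    # case (iii): the damage factor is frozen at t2
    return math.exp(-nu * (t - t1)) * (1.0 + t2 - t1) ** nu

# The branches match at t1 and t2, and the function decreases in t.
nu, t1, t2 = 2.0, 1.0, 3.0
eps = 1e-9
assert abs(survival_model1(t1, nu, t1, t2) - 1.0) < 1e-12
assert abs(survival_model1(t2 - eps, nu, t1, t2)
           - survival_model1(t2 + eps, nu, t1, t2)) < 1e-6
vals = [survival_model1(0.1 * k, nu, t1, t2) for k in range(100)]
assert all(a >= b - 1e-12 for a, b in zip(vals, vals[1:]))
```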

Model 2. We consider now the following useful modification of Model 1:

Let, on each shock, depending on its magnitude \( D_{j} ,\,j = 1,2, \ldots \), one of the following mutually exclusive events occur:

  1. (i)

    If \( D_{j} > g_{U} (T_{j} ) \), the shock results in an immediate system failure (as in Model 1)

  2. (ii)

    If \( D_{j} \le g_{L} (T_{j} ) \), the shock is harmless (as in Model 1)

  3. (iii)

    If \( g_{L} (T_{j} ) < D_{j} \le g_{U} (T_{j} ) \), then the shock imposes a (constant) effect on the system lasting for a random time, which depends on its arrival time and magnitude.

In the latter case, assume that the larger the shock’s arrival time and magnitude, the longer this effect lasts. Formally, let the shock increase the system failure rate by \( \eta \) units (a constant) for the random time \( w(T_{j} ,\,D_{j} ) \), where \( w(t,d) \) is strictly increasing in each argument. Thus, along with the decreasing functions \( g_{U} (t),\,g_{L} (t) \), the increasing function \( w(t,d) \) models the deterioration of the system.

Similar to (4.22) (where for simplicity of notation, we set \( k \equiv 1 \)), the conditional failure rate process (on condition that the event \( D_{j} > g_{U} (T_{j} ),\,j = 1,\,2, \ldots \) did not happen in \( [0,t) \) and \( \{ N(t),\,T_{1} ,\,T_{2} , \ldots ,\,T_{N(t)} \} \) and \( \{ D_{1} ,D_{2} , \ldots ,D_{N(t)} \} \) are given) is

$$ \hat{\lambda }_{t} \equiv X(t) = \sum\limits_{j = 1}^{N(t)} {\xi (T_{j} ,D_{j} )\,\eta I(T_{j} \le t < T_{j} \; + \;w(T_{j} ,D_{j} ))} . $$

Then, similar to (4.23),

$$ \begin{aligned} & P(T_{S} > t\;|\;N(s),\;0 \le s \le t,\;D_{1} ,D_{2} , \ldots ,D_{N(t)} ) \\ & = \prod\limits_{j = 1}^{N(t)} \gamma (T_{j} ,D_{j} )\, \cdot \,\exp \left\{ { - \int\limits_{0}^{t} {\sum\limits_{j = 1}^{N(x)} {\xi (T_{j} ,D_{j} )\,\eta I(T_{j} \le x < T_{j} \; + \;w(T_{j} ,D_{j} ))} \,{\text{d}}x} } \right\}. \\ \end{aligned} $$
(4.29)

where the functions \( \xi (T_{j} ,D_{j} ) \) and \( \gamma (T_{j} ,D_{j} ) \) are defined in (4.20) and (4.24), respectively.
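The failure rate process \( X(t) \) above is a step function: each ‘intermediate’ shock contributes \( \eta \) while its effect window is active. A small simulation sketch (the boundary functions, the duration \( w \), and all parameter values below are illustrative assumptions, with \( \xi \) taken as the indicator of case (iii)):

```python
import math, random

random.seed(1)

# Assumed model ingredients for illustration: exponentially decaying
# boundaries g_L, g_U, duration w(t, d) = t + d (increasing in both
# arguments), and a constant increment eta.
g_L = lambda t: 2.0 * math.exp(-0.3 * t)
g_U = lambda t: 5.0 * math.exp(-0.3 * t)
w   = lambda t, d: t + d
eta, nu, horizon = 0.5, 1.0, 10.0

# One realization of the shock process (HPP with rate nu) with
# i.i.d. exponential magnitudes.
arrivals, s = [], 0.0
while True:
    s += random.expovariate(nu)
    if s > horizon:
        break
    arrivals.append((s, random.expovariate(1.0)))   # (T_j, D_j)

def X(t):
    """Conditional failure rate at time t: each shock with
    g_L(T_j) < D_j <= g_U(T_j) adds eta while t < T_j + w(T_j, D_j)."""
    total = 0.0
    for Tj, Dj in arrivals:
        if Tj <= t and g_L(Tj) < Dj <= g_U(Tj) and t < Tj + w(Tj, Dj):
            total += eta
    return total

# X(t) is everywhere a nonnegative integer multiple of eta.
for k in range(101):
    v = X(0.1 * k)
    assert v >= 0.0 and abs(v / eta - round(v / eta)) < 1e-9
```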

Similar to Theorem 4.4, the following result holds.

Theorem 4.5

Let \( \eta \) be the increment in the system’s failure rate due to a single shock that lasts for the random time \( w(T_{j} ,D_{j} ) \). Under the assumptions of Theorem 4.4, the survival function \( P(T_{S} > t) \) is given by

$$ \begin{aligned} P(T_{S} > t) &= \exp \left\{ { - \int\limits_{0}^{t} {\bar{F}_{D} (g_{L} (u))\,\nu (u)\,{\text{d}}u} } \right\} \\ & \quad \times \exp \left\{ {\int\limits_{0}^{t} {\int\limits_{{g_{L} (s)}}^{{g_{U} (s)}} {\exp \{ - \eta \cdot \hbox{min} \{ w(s,u),\,(t\; - \;s)\} \} \,f_{D} (u)\,{\text{d}}u} \,\nu (s)\,{\text{d}}s} } \right\} .\\ \end{aligned} $$
(4.30)

Proof

Observe that from (4.29),

$$ \begin{aligned} & P(T_{S} > t|N(s),\;0 \le s \le t,\;D_{1} ,\,D_{2} , \ldots ,\,D_{N(t)} ) \\ &= \exp \left\{ {\sum\limits_{j = 1}^{N(t)} {(\ln \gamma (T_{j} ,D_{j} )\; - \;\eta \xi (T_{j} ,D_{j} )\,\hbox{min} \{ w(T_{j} ,D_{j} ),\,(t\; - \;T_{j} )\} )} } \right\}. \\ \end{aligned} $$

Therefore,

$$ \begin{aligned} &P(T_{S} > t) = E\left[ {\exp \left\{ {\sum\limits_{j = 1}^{N(t)} {(\ln \gamma (T_{j} ,D_{j} )\; - \;\eta \xi (T_{j} ,D_{j} )\;\hbox{min} \{ w(T_{j} ,D_{j} ),\;(t\; - \;T_{j} )\} )} } \right\}} \right] \\ &= \,E\left[ {E\left( {\exp \left\{ {\sum\limits_{j = 1}^{N(t)} {(\ln \gamma (T_{j} ,D_{j} )\; - \;\eta \xi (T_{j} ,D_{j} )\,\hbox{min} \{ w(T_{j} ,D_{j} ),\,(t\; - \;T_{j} )\} )} } \right\}\left| {N(t)} \right.} \right)} \right]. \\ \end{aligned} $$

Following straightforwardly the procedure described in the proof of Theorem 4.4, we eventually arrive at (4.30).\( \square\)

In contrast to Theorem 4.4, and owing to the dependence of (4.30) on the minimum function, the corresponding failure rate can only be obtained when specific forms of \( g_{U} (t) \), \( g_{L} (t) \), and \( w(t,\,d) \) are given. As in the case of Model 1, when \( g_{U} (t) = g_{L} (t) = g(t) \), this model also obviously reduces to the \( p(t) \Leftrightarrow q(t) \) model (4.1).

Example 4.4

Let \( g_{L} (t) = 0 \), \( g_{U} (t) = \infty \) for all \( t \ge 0 \), and \( w(t,\,d) = d \) (no deterioration in time). This means that the shocks are not fatal with probability 1 and that the durations of the shock’s effect do not depend on the arrival times but are just given by the i.i.d. random variables \( D_{j} \). In this case, from (4.30),

$$ \begin{aligned} P(T_{S} > t) &= \exp \left\{ { - \int\limits_{0}^{t} {\nu (u)\,{\text{d}}u} } \right\}\\ & \quad \times\;\exp \left\{ {\int\limits_{0}^{t} {\int\limits_{0}^{\infty } {\exp \{ - \eta \cdot \hbox{min} \{ w(s,u),\,(t\; - \;s)\} \} \,f_{D} (u)\,{\text{d}}u} \,\nu (s)\,{\text{d}}s} } \right\}, \\ \end{aligned} $$

where

$$ \begin{aligned} & \int\limits_{0}^{t} {\int\limits_{0}^{\infty } {\exp \{ - \eta \cdot \hbox{min} \{ w(s,u),\;(t\; - \;s)\} \} \,f_{D} (u){\text{d}}u} \,\nu (s)\,{\text{d}}s} \\ &= \int\limits_{0}^{t} {\int\limits_{0}^{t\; - \;s} {\exp \{ - \eta u\} \,f_{D} (u)\,{\text{d}}u} \,\nu (s)\,{\text{d}}s} + \int\limits_{0}^{t} {\int\limits_{t\; - \;s}^{\infty } {\exp \{ - \eta (t\; - \;s)\} \,f_{D} (u)\,{\text{d}}u} \,\nu (s)\,{\text{d}}s} \\ &= \int\limits_{0}^{t} {\int\limits_{0}^{t\; - \;u} {\nu (s)\,{\text{d}}s\,\exp \{ - \eta u\} f_{D} (u)\,{\text{d}}u} } \; + \;\int\limits_{0}^{t} {\exp \{ - \eta (t\; - \;s)\} \,\overline{F}_{D} (t\; - \;s)} \,\nu (s)\,{\text{d}}s \\ &= \int\limits_{0}^{t} {m(t\; - \;u)\,\exp \{ - \eta u\} \,f_{D} (u)\,{\text{d}}u} \; + \;\int\limits_{0}^{t} {\exp \{ - \eta u\} \,\overline{F}_{D} (u)} \,\nu (t\; - \;u)\,{\text{d}}u \\ &= [ - \overline{F}_{D} (u)\,\exp \{ - \eta u\} \,m(t\; - \;u)]_{0}^{t} \; - \;\int\limits_{0}^{t} {\overline{F}_{D} (u)\exp \{ - \eta u\} \,\nu (t - u)} \,{\text{d}}u \\ & \quad - \eta \int\limits_{0}^{t} {\overline{F}_{D} (u)} \,\exp \{ - \eta u\} \,m(t\; - \;u)\,{\text{d}}u\; + \;\int\limits_{0}^{t} {\exp \{ - \eta u\} \,\overline{F}_{D} (u)} \,\nu (t\; - \;u)\,{\text{d}}u \\ &= m(t)\; - \;\eta \int\limits_{0}^{t} {\overline{F}_{D} (u)} \,\exp \{ - \eta u\} \,m(t\; - \;u)\,{\text{d}}u. \\ \end{aligned} $$

Therefore,

$$ P(T_{S} > t) = \exp \left\{ { - \eta \int\limits_{0}^{t} {\exp \{ - \eta u\} } \cdot \overline{F}_{D} (u)\, \cdot \,m(t\; - \;u)\,{\text{d}}u} \right\}, $$

and thus

$$ \lambda_{S} (t) = \eta \int\limits_{0}^{t} {\exp \{ - \eta u\} \cdot \overline{F}_{D} (u)} \cdot \nu (t\; - \;u)\,{\text{d}}u. $$
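The closed-form survival function of this example can be checked by simulation. In the sketch below (illustrative parameter values; assumed exponential effect durations \( \bar{F}_{D} (u) = e^{ - \mu u} \) and a constant shock rate, so \( m(s) = \nu s \)), the exponent integrates in closed form, while the Monte Carlo estimate averages the conditional survival probability \( \exp \{ - \eta \sum\nolimits_{j} {\hbox{min} (D_{j} ,t - T_{j} )} \} \):

```python
import math, random

random.seed(7)

nu, eta, mu, t = 1.0, 0.8, 1.5, 4.0   # assumed illustrative parameters

# Closed form: P(T_S > t) = exp{-eta Int_0^t e^{-eta u} Fbar_D(u) m(t-u) du};
# with Fbar_D(u) = e^{-mu u} and m(s) = nu s the integral is analytic
# (c = eta + mu).
c = eta + mu
exponent = eta * nu * (t / c - (1.0 - math.exp(-c * t)) / c**2)
p_closed = math.exp(-exponent)

# Monte Carlo: average exp{-eta * sum_j min(D_j, t - T_j)} over
# realizations of the HPP(nu) shock process with Exp(mu) durations.
n_rep, acc = 20000, 0.0
for _ in range(n_rep):
    s, load = 0.0, 0.0
    while True:
        s += random.expovariate(nu)
        if s > t:
            break
        load += min(random.expovariate(mu), t - s)
    acc += math.exp(-eta * load)
p_mc = acc / n_rep

assert abs(p_mc - p_closed) < 0.02
```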

4.4 Extreme Shock Model with Delayed Termination

Consider an orderly point process (without multiple occurrences) \( \{ N(t),\,t \ge 0\} \) of some ‘initiating’ events (IEs) with arrival times \( T_{1} < T_{2} < T_{3} < \ldots \). Let each event from this process trigger an ‘effective event’ (EE), which occurs a random time (delay) \( D_{i} ,\;i = 1,2, \ldots \), after the occurrence of the corresponding IE at \( T_{i} \). Obviously, in contrast to the initial ordered sequence \( T_{1} < T_{2} < T_{3} < \ldots \), the EEs \( \{ T_{i} + D_{i} \} ,i = 1,2, \ldots \) are not necessarily ordered. This setting is encountered in many practical situations when, e.g., initiating events start the development of non-fatal faults in a system and we are interested in the number of these faults in \( [0,t) \). Alternatively, effective events can result in fatal, terminating faults (failures), and then we are interested in the survival probability of our system. The latter setting means that the first EE ruins our system. When there are no delays, each shock (with the specified probability) results in the failure of the survived system, and the described model obviously reduces to the classical extreme shock model ([17]; [19]) considered in the previous section of this chapter and in Chap. 3.

The IEs can often be interpreted as external shocks affecting a system, and for convenience and in the spirit of the current chapter, we will often use this term (interchangeably with “IE”). We will consider the case of the NHPP of IEs. The approach can, in principle, be applied to renewal processes, but the corresponding formulas are too cumbersome. In contrast, the results obtained for the NHPP case are in simple, closed forms that allow intuitive interpretations and proper analysis. Our presentation in this and the subsequent section mostly follows Cha and Finkelstein [7].

Thus, a system is subject to the NHPP of IEs, \( \{ N(t),\,t \ge 0\} \), to be called shocks. Let the rate of this process be \( \nu (t) \) and the corresponding arrival times be denoted by \( T_{1} < T_{2} < T_{3} \ldots \). Assume that the \( i \)th shock is ‘harmless’ to the system with probability \( q(T_{i} ) \), and with probability \( p(T_{i} ) \) it triggers the failure process of the system, which results in its failure after a random time \( D(T_{i} ) \), \( i = 1,2, \ldots \), where \( D(t) \) is a non-negative, semicontinuous random variable with a point mass at “0” (at each fixed \( t \)). Note that this point mass at 0 opens the possibility of an ‘immediate failure’ of the system on a shock’s occurrence, which is important in practice. Furthermore, the case of the ‘full point mass’ of \( D(t) \) at 0 reduces to the ordinary ‘extreme shock model’. Obviously, without the point mass at 0, we arrive at an absolutely continuous random variable. Distributions of \( D(t) \) having point masses at other time instants could be considered similarly.

Let \( G(t,x)\; \equiv \;P(D(t) \le x) \), \( \bar{G}(t,x) \equiv 1\; - \;G(t,x) \), and \( g(t,x) \) be the Cdf, the survival function and the pdf for the ‘continuous part’ of \( D(t) \), respectively. Then, in accordance with our terminology, the failure in this case is the EE.

First of all, we are interested in describing the lifetime of our system \( T_{S} \). The corresponding conditional survival function is given by

$$ \begin{aligned} & P(T_{S} > t\;|\;N(s),\;0 \le s \le t;\;D(T_{1} ),\;D(T_{2} ), \ldots ,\,D(T_{N(t)} );\;J_{1} ,\,J_{2} , \ldots ,J_{N(t)} ) \\ &= \prod\limits_{i\, = \,1}^{N(t)} {\left( {J_{i} + (1 - J_{i} )I(D(T_{i} ) > t - T_{i} )} \right)} , \\ \end{aligned} $$
(4.31)

where the indicators are defined as

$$\begin{aligned}I(D(T_{i} ) > t\; - \;T_{i} ) & = \left\{ {\begin{array}{ll}{1,} & {{\text{if}}\,D(T_{i} ) > t - T_{i} } \\ {0,} &{\text{otherwise}} \\ \end{array} } \right., \hfill \\ J_{i} & =\left\{ \begin{array}{ll}1,& {\text{if}}\,{\text{the}}\,{\text{ith}}\,{\text{shock}}\,{\text{does}}\,{\text{not}}\,{\text{trigger}}\,{\text{the}}\,{\text{subsequent}}\,{\text{failure}}\,{\text{process,}}\hfill \\ 0,& {\text{otherwise}} .\end{array} \right.\hfill\end{aligned}$$

Assume the following conditions regarding ‘conditional independence’:

  1. (i)

    Given the shock process, \( D(T_{i} ),\,i = 1,2, \ldots \), are mutually independent.

  2. (ii)

    Given the shock process, \( J_{i} \), \( i = 1,2, \ldots \), are mutually independent. (It means that whether each shock triggers the failure process of the system or not is ‘independently determined’).

  3. (iii)

    Given the shock process, \( \{ D(T_{i} ),\;i = 1,\,2, \ldots \} \) and \( \{ J_{i} ,\;i = 1,2, \ldots \} \) are mutually independent.

As in the previous sections, integrating out all conditional random quantities in (4.31) under the basic assumptions described above results in the following theorem.

Theorem 4.6

Let \( m^{ - 1} (t),\;t > 0 \) exist (\( m(t) \equiv E(N(t)) \)). Then

$$ P(T_{S} \ge t) = \exp \left\{ { - \int\limits_{0}^{t} {G(x,\,t\; - \;x)\,p(x)\,\nu (x)\,{\text{d}}x} } \right\},\;t \ge 0, $$

and the failure rate function of the system is

$$ \lambda {}_{S}(t) = \int\limits_{0}^{t} {g(x,t\; - \;x)\,p(x)\,\nu (x)\,{\text{d}}x\; + \;G(t,0)\,p(t)\,} \nu (t),\;t \ge 0. $$

Proof

Given the assumptions, we can directly ‘integrate out’ \( J_{i} \)’s and \( D_{i} \)’s and define the corresponding probability in the following way:

$$ P(T_{S} > t\;|\;N(s),\,0 \le s \le t) = \prod\limits_{i\, = \,1}^{N(t)} {\left( {q(T_{i} ) + p(T_{i} )\overline{G} (T_{i} ,\;t - T_{i} )} \right)} . $$

Therefore,

$$ \begin{aligned} P(T_{S} > t) &= E\left[ {\prod\limits_{i\, = \,1}^{N(t)} {\left( {q(T_{i} ) + p(T_{i} )\overline{G} (T_{i} ,\;t - T_{i} )} \right)} } \right] \\ &= E\left[ {E\left[ {\prod\limits_{i\, = \,1}^{N(t)} {\left( {q(T_{i} ) + p(T_{i} )\,\overline{G} (T_{i} ,\;t - T_{i} )} \right)} \;|\;N(t)} \right]\,} \right]. \\ \end{aligned} $$
(4.32)

As the joint distribution of \( T_{1} ,T_{2} , \ldots ,T_{n} \) given \( N(t) = n \) is the same as the joint distribution of order statistics \( T_{(1)} ' \le T_{(2)} ' \le \ldots \le T_{(n)} ' \) of i.i.d. random variables \( T_{1} ',T_{2} ', \ldots ,T_{n} ' \), where the pdf of the common distribution of \( T_{j} ' \)’s is given by \( \nu (x)/m(t),\;0 \le x \le t \), we have

$$ \begin{aligned} & E\left[ {\prod\limits_{i\, = \,1}^{N(t)} {\left( {q(T_{i} ) + p(T_{i} )\,\overline{G} (T_{i} ,\;t - T_{i} )} \right)\;|\;N(t) = n} } \right] \\ &= E\left[ {\prod\limits_{i\, = \,1}^{n} {\left( {q(T_{(i)} ') + p(T_{(i)} ')\,\overline{G} (T_{(i)} ',\;t - T_{(i)} ')} \right)} } \right] \\ &= E\left[ {\prod\limits_{i\, = \,1}^{n} {\left( {q(T_{i} ') + p(T_{i} ')\,\overline{G} (T_{i} ',\;t - T_{i} ')} \right)} } \right] \\ &= \left( {E\left[ {q(T_{i} ') + p(T_{i} ')\,\overline{G} (T_{i} ',\;t - T_{i} ')} \right]} \right)^{n} \\ &= \left( {\frac{1}{m(t)}\int\limits_{0}^{t} {\left( {q(x) + p(x)\,\overline{G} (x,\;t - x)} \right)} \,\nu (x)\,{\text{d}}x} \right)^{n} . \\ \end{aligned} $$
(4.33)

From Eqs. (4.32) and (4.33),

$$ \begin{aligned} P(T_{S} > t) &= \sum\limits_{n\, = \,0}^{\infty } {\left( {\frac{1}{m(t)}\int\limits_{0}^{t} {\left( {q(x) + p(x)\,\overline{G} (x,\;t - x)} \right)} \,\nu (x)\,{\text{d}}x} \right)^{n} } \cdot \frac{{m(t)^{n} }}{n \, !}e^{ - m(t)} \\ &= e^{ - m(t)} \cdot \exp \left\{ {\int\limits_{0}^{t} {\left( {q(x) + p(x)\,\overline{G} (x,\;t - x)} \right)} \,\nu (x)\,{\text{d}}x} \right\} \\ &= \exp \left\{ {\int\limits_{0}^{t} {q(x)\nu (x)\,{\text{d}}x} + \int\limits_{0}^{t} {\overline{G} (x,\;t - x)p(x)\,\nu (x)\,{\text{d}}x} - \int\limits_{0}^{t} {\nu (x)\,{\text{d}}x} } \right\} \\ &= \exp \left\{ { - \int\limits_{0}^{t} {G(x,\;t - x)\,p(x)\,\nu (x)\,{\text{d}}x} } \right\}. \\ \end{aligned} $$

Therefore, by the Leibniz rule, the failure rate function of the system, \( \lambda_{S} (t) \), is given in the following meaningful and rather simple form:

$$ \lambda {}_{S}(t) = \int\limits_{0}^{t} {g(x,\;t - x)p(x)\,\nu (x)\,{\text{d}}x + G(t,\;0)p(t)} \,\nu (t),\;t \ge 0. $$
(4.34)
$$ \square $$
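A quick Monte Carlo check of Theorem 4.6 is possible for time-independent parameters. The sketch below (all values illustrative assumptions) uses a constant shock rate, constant \( p \), and a delay Cdf with a point mass \( \pi_{0} \) at 0 and an exponential continuous part:

```python
import math, random

random.seed(11)

nu, p, pi0, mu, t = 1.2, 0.5, 0.3, 2.0, 3.0   # assumed parameters

# Delay Cdf with a point mass pi0 at 0: G(y) = pi0 + (1-pi0)(1 - e^{-mu y})
G = lambda y: pi0 + (1.0 - pi0) * (1.0 - math.exp(-mu * y))

# Theorem 4.6 with time-independent p and G:
# P(T_S >= t) = exp{-p nu Int_0^t G(t-x) dx} = exp{-p nu Int_0^t G(y) dy},
# and the integral evaluates analytically here.
integral = pi0 * t + (1.0 - pi0) * (t - (1.0 - math.exp(-mu * t)) / mu)
p_closed = math.exp(-p * nu * integral)

# Monte Carlo: shocks are HPP(nu); each triggers (with prob p) a delayed
# failure at T_i + D_i, where D_i = 0 with prob pi0, else Exp(mu).
n_rep, survived = 30000, 0
for _ in range(n_rep):
    s, alive = 0.0, True
    while alive:
        s += random.expovariate(nu)
        if s > t:
            break
        if random.random() < p:
            d = 0.0 if random.random() < pi0 else random.expovariate(mu)
            if s + d <= t:
                alive = False
    survived += alive

assert abs(survived / n_rep - p_closed) < 0.02
```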

Formally, the split of shocks into those with effective and ineffective consequences does not add any mathematical complexity because of the NHPP nature of the arrival process. This means that the result would be the same if we had only one type of effect and the NHPP with the rate function \( p(t)\,\nu (t) \). However, from the practical point of view, and keeping in mind that we are generalizing here the classical extreme shock model with two types of effects, this splitting seems reasonable. Furthermore, we can consider the case of multitype delayed consequences of shocks (\( n > 1 \)), where the shock that occurs at time \( t \) causes the delayed (with distribution \( G_{i} (t,x) \)) effect of type \( i \) with probability \( p_{i} (t) \), whereas the probability of ‘no effect’ is \( 1 - \sum\nolimits_{i\, = \,1}^{n} {p_{i} (t)} \). Obviously, this model is the same as the single-type model with \( G(t,x) = \sum\nolimits_{i\, = \,1}^{n} {p_{i}^{*} (t)\,G_{i} (t,x)} \) and \( p(t) = \sum\nolimits_{i\, = \,1}^{n} {p_{i} (t)} \), where \( p_{i}^{*} (t) = {{p_{i} (t)} \mathord{\left/ {\vphantom {{p_{i} (t)} {\sum\nolimits_{i\, = \,1}^{n} {p_{i} (t)} }}} \right. \kern-0pt} {\sum\nolimits_{i\, = \,1}^{n} {p_{i} (t)} }} \). Therefore, similar to Theorem 4.6,

$$ P(T_{S} \ge t) = \exp \left\{ { - \int\limits_{0}^{t} {\left( {\sum\limits_{i\, = \,1}^{n} {p_{i} (x)\,G_{i} (x,\;t - x)} } \right)\,\nu (x)\,{\text{d}}x} } \right\},\;t \ge 0 $$

and

$$ \lambda_{S} (t) = \int\limits_{0}^{t} {\left( {\sum\limits_{i\, = \,1}^{n} {p_{i} (x)\,g_{i} (x,\;t - x)} } \right)\,\nu (x)\,{\text{d}}x} \; + \;\left( {\sum\limits_{i\, = \,1}^{n} {p_{i} (t)G_{i} (t,\;0)} } \right)\,\nu (t). $$
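The equivalence of the multitype model and its single-type reduction can be verified numerically. In the sketch below the two delay laws and the probabilities are illustrative assumptions, and a plain midpoint rule evaluates both exponents:

```python
import math

# Two assumed effect types: constant probabilities and exponential
# delay laws (illustration only).
p1, p2, mu1, mu2, nu, t = 0.3, 0.2, 1.0, 2.5, 1.0, 2.0
G1 = lambda y: 1.0 - math.exp(-mu1 * y)
G2 = lambda y: 1.0 - math.exp(-mu2 * y)

def quad(f, a, b, n=4000):
    """Plain midpoint rule; accurate enough for these smooth integrands."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# Multitype form: exponent Int_0^t (p1 G1 + p2 G2)(t-x) nu dx.
multi = math.exp(-quad(lambda x: (p1 * G1(t - x) + p2 * G2(t - x)) * nu,
                       0.0, t))

# Single-type reduction: p = p1 + p2 and the mixture G with weights p_i/p.
p = p1 + p2
G = lambda y: (p1 / p) * G1(y) + (p2 / p) * G2(y)
single = math.exp(-quad(lambda x: p * G(t - x) * nu, 0.0, t))

assert abs(multi - single) < 1e-9
```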

4.5 Cumulative Shock Model with Initiated Wear Processes

Consider now a cumulative model for the IEs, where the accumulated wear can result in a system’s failure when it reaches the given boundary. Our setting that follows is different from the conventional one. In the conventional setting, the wear caused by a shock is incurred at the moment of the corresponding shock (see Sect. 4.1). In our model, however, the wear process, triggered by a shock, is activated at the moment of a shock’s occurrence and continuously increases with time.

Denote by \( W(t,u) \) the random wear incurred in \( u \) units of time after a single shock (IE) that has occurred at time \( t \). Let \( W(t,\;0) \equiv 0 \), for all \( t \ge 0 \). Assume that \( W(t,u) \) is stochastically increasing (see Sect. 2.8) in \( t \) and \( u \), that is,

$$ W(t_{1} ,\;u) \le_{st} W(t_{2} ,\;u)\;{\text{for all}}\;t_{2} > t_{1} > 0\;{\text{and for all}}\;u > 0; $$

and

$$ W(t,\;u_{1} ) \le_{st} W(t,\;u_{2} )\;{\text{for all}}\;u_{2} > u_{1} > 0\;{\text{and for all}}\;t > 0. $$

An example of this type of \( W(t,\;u) \) is the gamma process, with the pdf of \( W(t,\;u) \) given by

$$ f(w,\;t,\;u) = \frac{{\beta^{\alpha (t,\,u)} \cdot w^{\alpha (t,\,u)\; - \;1} \exp \{ - \beta w\} }}{\Upgamma (\alpha (t,\,u))},\;w \ge 0, $$

where \( \alpha (t,\;0) = 0 \), for all \( t \ge 0 \), and \( \alpha (t,\;u) \) is strictly increasing in both \( t \) and \( u \).

If all shocks from the initial process trigger wear, then the accumulated wear from all shocks in \( [0,\;t) \) is

$$ W(t) = \sum\limits_{i\, = \,1}^{N(t)} {W(T_{i} ,\,t\; - \;T_{i} )} , $$

which can be considered as a general form of a shot noise process (see Sect. 4.3). Assume that each shock with probability \( p(t) \) results in an immediate failure (termination); otherwise, with probability \( q(t) \), it triggers the wear process in the way described above. The failure also occurs when the accumulated wear reaches the random boundary \( R \), and we are interested in obtaining the distribution of the time to failure, \( T_{S} \).

The corresponding conditional survival probability for this model can be written as [7]

$$ \begin{aligned} & P(T_{S} > t|N(s),0 \le s \le t;\;W(T_{i} ,\;t - T_{i} ),\;i = 1,2, \ldots ,N(t);\;R) \\ &= \prod\limits_{i\, = \,1}^{N(t)} q (T_{i} ) \cdot I\left( {\sum\limits_{i\, = \,1}^{N(t)} {W(T_{i} ,\;t - T_{i} )} \le R} \right). \\ \end{aligned} $$

To obtain an explicit expression for the unconditional survival probability in this case, assume additionally that \( R \) is an exponentially distributed (with parameter \( \lambda \)) random variable.

Theorem 4.7

Let the shock process be the NHPP with rate \( \nu (t) \) and suppose that \( m^{ - 1} (t) \) exists for \( t > 0 \). Then

$$ P(T_{S} \ge t) = \exp \left\{ { - \int\limits_{0}^{t} {\nu (x)\,{\text{d}}x} + \int\limits_{0}^{t} {M_{W(x,\;t\, - \,x)} ( - \lambda ) \cdot q(x)\nu (x)} \,{\text{d}}x} \right\},\;t \ge 0, $$

and the corresponding failure rate function is

$$ \lambda {}_{S}(t) = p(t)\,\nu (t)\; - \;\int\limits_{0}^{t} {\frac{\text{d}}{{\text{d}}t}\left( {M_{W(x,\;t\, - \,x)} ( - \lambda )} \right) \cdot q(x)\,\nu (x)\,{\text{d}}x} ,\;t \ge 0, $$

where\( M_{W(t,\;u)} ( \cdot ) \)is the mgf of\( W(t,\;u) \)(for fixed\( t \)and\( u \)).

Proof

Given the assumptions, we can directly ‘integrate out’ the variable \( R \) and define the corresponding probability in the following way:

$$\begin{aligned} & P(T_{S} > t|N(s),\;0 \le s \le t;\;W(T_{i} ,t\; - \;T_{i}),\;i = 1,2, \ldots ,N(t)) \\ &= \left( {\prod\limits_{i\, =\,0}^{N\left( t \right)} {q\left( {T_{i} } \right)} } \right) \cdot\exp \left \{- \int\limits_{0}^{{\sum\limits_{i=0}^{N(t)}W(T_i,\,t-T_i)}} \lambda {\text{d}}u \right \}\\&= \exp \left\{ { - \lambda \sum\limits_{i\, = \,1}^{N(t)}{W(T_{i} ,\;t - T_{i} )} + \sum\limits_{i\, = \,1}^{N\left( t\right)} {\ln q\left( {T_{i} } \right)} } \right\}. \end{aligned} $$

Thus, the survival function can be obtained as

$$ P(T_{S} > t) = E\left[ {E\left[ {\exp \left\{ { - \lambda \sum\limits_{i = 1}^{N(t)} {W(T_{i} ,t\; - \;T_{i} )} \; + \;\sum\limits_{i = 1}^{N\left( t \right)} {\ln q\left( {T_{i} } \right)} } \right\}|N(t)} \right]} \right]. $$

Following the same procedure described in the Proof of Theorem 4.6,

$$ \begin{aligned} & E\left[ {\exp \left\{ { - \lambda \sum\limits_{i\, = \,1}^{N(t)} {W(T_{i} ,\;t - T_{i} )} + \sum\limits_{i\, = \,1}^{N\left( t \right)} {\ln q\left( {T_{i} } \right)} } \right\}|N(t) = n} \right] \\ &= \left( {E\left[ {\exp \left\{ { - \lambda \,W(T_{1} ',\;t - T_{1} ') + \ln q\left( {T_{1} '} \right)} \right\}} \right]} \right)^{n} . \\ \end{aligned} $$

Observe that,

$$ E\left[ {\exp \left\{ { - \lambda \,W(T_{1} ',\;t - T_{1} ') + \ln q\left( {T_{1} '} \right)} \right\}} \right]\; = \;\frac{1}{m(t)}\int\limits_{0}^{t} {\left( {q(x)\,M_{W(x,\;t\, - x)} ( - \lambda )} \right)} \,\nu (x)\,{\text{d}}x. $$

Hence,

$$ \begin{aligned} & E\left[ {\exp \left\{ { - \lambda \sum\limits_{i\, = \,1}^{N(t)} {W(T_{i} ,\;t - T_{i} )} + \sum\limits_{i\, = \,1}^{N\left( t \right)} {\ln q\left( {T_{i} } \right)} } \right\}|N(t) = n} \right] \\ &= \left( {\frac{1}{m(t)}\int\limits_{0}^{t} {\left( {q(x)\,M_{W(x,\;t\, - \,x)} ( - \lambda )} \right)} \,\nu (x)\,{\text{d}}x} \right)^{n} . \\ \end{aligned} $$

Finally,

$$ P(T_{S} > t) = \exp \left\{ { - \int\limits_{0}^{t} {\nu (x)\,{\text{d}}x} \; + \;\int\limits_{0}^{t} {M_{W(x,\;t\, - \,x)} ( - \lambda ) \cdot q(x)\,\nu (x)} \,{\text{d}}x} \right\}. $$

Therefore, by the Leibniz rule, the failure rate function of the system, \( \lambda_{S} (t) \), is

$$ \begin{aligned} \lambda {}_{S}(t) &= (1\; - \;M_{W(t,\;0)} ( - \lambda ) \cdot q(t))\,\nu (t) - \int\limits_{0}^{t} {\frac{\text{d}}{{\text{d}}t}\left( {M_{W(x,\;t\, - \,x)} ( - \lambda )} \right) \cdot q(x)\,\nu (x)\,{\text{d}}x} \\ &= p(t)\nu (t) - \int\limits_{0}^{t} {\frac{\text{d}}{{\text{d}}t}\left( {M_{W(x,\;t\; - \;x)} ( - \lambda )} \right) \cdot q(x)\,\nu (x)\,{\text{d}}x} . \\ \end{aligned} $$
$$ \square $$

Let, for simplicity, \( \lim_{t\, \to \,\infty } \nu (t) \equiv \nu (\infty ) \equiv \nu_{0} < \infty ,\;\nu_{0} > 0;\;p(t) \equiv p,\;q(t) \equiv q \). It is clear from general considerations that \( \lim_{t\, \to \,\infty } \lambda {}_{S}(t) = \lim_{t\, \to \,\infty } \nu (t) = \nu_{0} \), with \( \lambda_{S} (t) \) monotonically approaching the limit from below. Indeed, consider a system that has survived in \( [0,t) \), which means that the next interval \( [t,\;t + {\text{d}}t) \) starts with the same ‘resource’ \( R \), as the boundary is exponentially distributed. Because all previous nonfatal shocks accumulate wear and all triggered wear processes are increasing (\( W(t) \to \infty \) as \( t \to \infty \)), the resource \( R \) is ‘consumed more intensively’ with time. This means that the probability of failure in \( [t,\;t + {\text{d}}t) \) is increasing in \( t \) and, therefore, \( \lambda {}_{S}(t) \) is increasing. Eventually, as \( t \to \infty \), each triggering shock becomes fatal in the limit, which means that

$$ \lim_{t\, \to \,\infty } \lambda {}_{S}(t) = \lim_{t\, \to \,\infty } \nu (t) = \nu_{0} . $$

The following example illustrates these considerations.

Example 4.5

Suppose that \( W(t,\;u) \) follows the gamma process, that is, the pdf of \( W(t,\;u) \) is

$$ f(w;\;t,\;u) = \frac{{\beta^{\alpha (t,\;u)} \cdot w^{\alpha (t,\;u)\, - \,1} \exp \{ - \beta w\} }}{\Upgamma (\alpha (t,\;u))},\;w \ge 0, $$

where \( \alpha (t,\;0) = 0 \) for all \( t \ge 0 \), and \( \alpha (t,\;u) \) is strictly increasing in both \( t \) and \( u \). Then

$$ M_{W(x,\;t\, - \,x)} ( - \lambda ) = \left( {\frac{\beta }{\beta + \lambda }} \right)^{\alpha (x,\;t\, - \,x)} , $$

and

$$ \frac{{\text{d}}}{{\text{d}}t}\left( {M_{W(x,\,t\; - \;x)} ( - \lambda )} \right) = \frac{{\text{d}}}{{\text{d}}t}\left( {\alpha (x,t\; - \;x)} \right)\,\ln \left( {\frac{\beta }{\beta + \lambda }} \right) \cdot \left( {\frac{\beta }{\beta + \lambda }} \right)^{\alpha (x,\,t\; - \;x)} . $$

Let \( \nu (t) = \nu ,\;q(t) = q,\;t \ge 0,\;\alpha (t,\;u) = \alpha u,\;t,\,u \ge 0 \). Then

$$ \begin{aligned} \int\limits_{0}^{t} {\frac{\text{d}}{{\text{d}}t}\left( {M_{W(x,\;t\, - \,x)} ( - \lambda )} \right) \cdot q(x)\,\nu (x)\,{\text{d}}x} \; &= \int\limits_{0}^{t} {\alpha \cdot \ln \left( {\frac{\beta }{\beta + \lambda }} \right) \cdot \left( {\frac{\beta }{\beta + \lambda }} \right)^{\alpha (t\; - \;x)} \cdot q\,\nu {\text{d}}x} \\ &= \int\limits_{0}^{\alpha t} {\ln \left( {\frac{\beta }{\beta + \lambda }} \right) \cdot \left( {\frac{\beta }{\beta + \lambda }} \right)^{x} \cdot q\,\nu {\text{d}}x} \\ &= q\nu \left( {\left( {\frac{\beta }{\beta + \lambda }} \right)^{\alpha t} - 1} \right). \\ \end{aligned} $$

Therefore, we have

$$ \lambda {}_{S}(t) = p\nu \; + \;q\nu \left( {1\; - \;\left( {\frac{\beta }{\beta + \lambda }} \right)^{\alpha t} } \right),\;t \ge 0 $$

and

$$ \lim_{t\; \to \;\infty } \lambda_{S} (t) = \nu , $$

which illustrates the fact that every triggering shock in the limit becomes fatal.
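Both ingredients of this example can be verified numerically: the gamma mgf identity \( M_{W} ( - \lambda ) = (\beta /(\beta + \lambda ))^{\alpha } \) used above, and the monotone increase of \( \lambda_{S} (t) \) from \( p\nu \) to \( \nu \). All parameter values below are illustrative assumptions:

```python
import math

alpha_t, beta, lam = 1.7, 2.0, 0.8   # shape alpha(x, t-x), rate beta (assumed)

# Numeric check of the gamma mgf identity:
# E[e^{-lam W}] = (beta/(beta+lam))^shape for W ~ Gamma(shape, rate=beta).
def gamma_pdf(w, shape, rate):
    return rate**shape * w**(shape - 1) * math.exp(-rate * w) / math.gamma(shape)

h, n, acc = 0.002, 10000, 0.0
for k in range(n):                    # midpoint rule on [0, 20]
    w = (k + 0.5) * h
    acc += math.exp(-lam * w) * gamma_pdf(w, alpha_t, beta) * h
assert abs(acc - (beta / (beta + lam)) ** alpha_t) < 1e-4

# Failure rate of Example 4.5: increases from p*nu at t = 0 to nu as
# t -> infinity (every triggering shock becomes fatal in the limit).
p, q, nu, a = 0.4, 0.6, 1.5, 0.9
lam_S = lambda t: p * nu + q * nu * (1.0 - (beta / (beta + lam)) ** (a * t))
assert abs(lam_S(0.0) - p * nu) < 1e-12
assert lam_S(5.0) < lam_S(50.0) < nu
assert abs(lam_S(200.0) - nu) < 1e-6
```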

4.6 ‘Curable’ Shock Processes

In this section, we generalize the setting of Sect. 4.4 to the case when each failure that was initiated (and delayed) has a chance to be repaired or cured as well. Therefore, as previously, consider a system subject to the NHPP of IEs \( \{ N(t),\;t \ge 0\} \) to be called shocks. Let the rate of this process be \( \nu (t) \) and the corresponding arrival times be denoted by \( T_{1} < T_{2} < T_{3} \ldots \). Assume that the \( i \)th shock triggers the failure process of the system, which can result in its failure after a random time \( D(T_{i} ) \), \( i = 1,2, \ldots \), where for each fixed \( t \ge 0 \), the delay \( D(t) \) is a non-negative, continuous random variable. Let \( G(t,\;x) \equiv P(D(t) \le x) \), \( \bar{G}(t,\;x) \equiv 1 - G(t,\;x) \), and \( g(t,\;x) \) be the Cdf, the survival function, and the pdf of \( D(t) \), respectively. Assume now that with probability \( q(t,\;x) = 1 - p(t,\;x) \), where \( t \) is the time of a shock’s occurrence and \( x \) is the corresponding delay, each failure can be instantaneously cured (repaired), as if this shock did not trigger the failure process at all. For instance, it can be an instantaneous overhaul of an operating system by a new one that was not exposed to shocks before. It should be noted that this operation is executed at time \( t\; + \;x \) and not at time \( t \), as in the classical extreme shock model without delay. Different cure models have been considered mostly in the biostatistical literature (see Aalen et al. [1] and references therein). Usually, these models deal with a population that contains a subpopulation that is not susceptible to, e.g., a disease (i.e., ‘cured’) after some treatment. This setting is often described by the multiplicative frailty model with the frailty parameter having a mass at 0, which means that there exists a nonsusceptible (cured) subpopulation with the hazard rate equal to 0. In our case, however, the interpretation is different, but the mathematical description is also based on considering the corresponding improper distributions [9].

For simplicity of notation, consider the \( t \)-independent case, when \( D(t) \equiv D \), \( G(t,\;x) \equiv G(x) \), \( g(t,\;x) \equiv g(x) \) and \( p(t,\;x) \equiv p(x) \). The results can easily be modified to the \( t \)-dependent setting. Bearing in mind that \( D \) denotes the delay time, let \( D_{C} \) be the time from the occurrence of an IE to the system failure caused by this IE. Note that \( D_{C} \) is an improper random variable, as \( D_{C} \equiv \infty \) (with a non-zero probability) when the corresponding IE does not result in an ultimate system failure due to cure. Then the improper survival function that describes \( D_{C} \) is:

$$ \bar{G}_{C} (x) \equiv 1\; - \;\int\limits_{0}^{x} {p(u)g(u)\,{\text{d}}u} $$
(4.35)

with the corresponding density:

$$ g_{C} (x) = p(x)\,g(x). $$
(4.36)

Thus, the EE that has occurred in \( [x,\;x + {\text{d}}x) \) is fatal with probability \( p(x) \) and is cured with probability \( q(x) \). For the specific case \( p(x) \equiv p \), we can say that the proportion \( p \) of events of interest results in failure, whereas ‘the proportion \( 1\; - \;p \) is cured’.

Another setting, which yields a similar description, is as follows: let each IE, along with the failure development mechanism, ignite a repair mechanism described by the repair time \( R \) with the Cdf \( K(t) \). If \( R > D \), then the EE is fatal; otherwise, it will be repaired before the failure (\( R \le D \)) and, therefore, can formally be considered as cured. Thus, the probability \( p(x) \) in (4.36) has a specific, meaningful form in this case:

$$ p(x) = 1\; - \;K(x). $$
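The repair-race interpretation is easy to check by simulation. Assuming (for illustration only) exponential laws for both \( D \) and \( R \), \( p(u) = P(R > u) = e^{ - \rho u} \), the improper Cdf integrates in closed form and its total mass is \( P(R > D) = \mu /(\mu + \rho ) \):

```python
import math, random

random.seed(3)
mu, rho = 1.0, 0.5      # D ~ Exp(mu) delay, R ~ Exp(rho) repair (assumed)

# Improper Cdf of the time to a fatal EE after an IE:
# G_C(x) = Int_0^x p(u) g(u) du with p(u) = e^{-rho u},
# which integrates to (mu/(mu+rho)) (1 - e^{-(mu+rho) x}).
G_C = lambda x: mu / (mu + rho) * (1.0 - math.exp(-(mu + rho) * x))

# Monte Carlo: the EE is fatal only if the repair loses the race (R > D).
n, x0, fatal_by_x0, fatal_total = 40000, 1.0, 0, 0
for _ in range(n):
    d, r = random.expovariate(mu), random.expovariate(rho)
    if r > d:
        fatal_total += 1
        if d <= x0:
            fatal_by_x0 += 1

assert abs(fatal_total / n - mu / (mu + rho)) < 0.01   # total mass of G_C
assert abs(fatal_by_x0 / n - G_C(x0)) < 0.01
```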

After describing the setting, we are now ready to derive the formal result. The proof is relatively straightforward and similar to the proofs in the previous sections of this chapter; however, the explicit result to be obtained is quite meaningful. We are interested in describing the lifetime of our system \( T_{S} \) (the time to the first fatal EE). The corresponding conditional survival function is given by

$$ \begin{aligned} & P(T_{S} > t|N(s),\;0 \le s \le t;\;D_{C1} ,D_{C2} , \ldots ,D_{CN(t)} ) \\ &= \prod\limits_{i\; = \;1}^{N(t)} {\left( {I(D_{Ci} > t - T_{i} )} \right)} , \\ \end{aligned} $$
(4.37)

where the indicators are defined as

$$ I(D_{Ci} > t - T_{i} ) = \left\{ \begin{gathered} 1,\;{\text{if}}\;D_{Ci} > t\; - \;T_{i} \hfill \\ 0,\;{\text{otherwise}} \hfill \\ \end{gathered} \right.. $$

Let

$$ J_{i} = \left\{ \begin{gathered} 1,\;{\text{if}}\,{\text{the}}\,{\text{ith}}\,{\text{cure}}\,{\text{process}}\,{\text{is}}\,{\text{successful}}, \hfill \\ 0,\;{\text{otherwise}}. \hfill \\ \end{gathered} \right. $$

We assume that given the shock process, (i) \( J_{i} \), \( i = 1,2, \ldots \), are mutually independent; (ii) \( D_{i} \), \( i = 1,2, \ldots \), are mutually independent; (iii) \( \{ J_{i} ,\;i = 1,2, \ldots \} \), \( \{ D_{i} ,\;i = 1,2, \ldots \} \) are mutually independent. Therefore, \( D_{Ci} \) \( i = 1,2, \ldots \), are also mutually independent.

Integrating out all conditional random quantities in (4.37) under the basic assumptions described above, we arrive at the following theorem, which modifies Theorem 4.6 [11]:

Theorem 4.8

Let \( m^{ - 1} (t) \) exist for \( t > 0 \). Then

$$ P(T_{S} \ge t) = \exp \left\{ { - \int\limits_{0}^{t} {G_{C} (t - u)\nu (u)\,{\text{d}}u} } \right\},\;t \ge 0, $$
(4.38)

and the failure rate function of the system is

$$ \lambda {}_{S}(t) = \int\limits_{0}^{t} {p(t - u)\,g(t - u)\,\nu (u)\,{\text{d}}u} ,\;t \ge 0. $$
(4.39)

Proof

From (4.37),

$$ \begin{aligned} & P(T_{S} > t|N(t),\;T_{1} ,\;T_{2} , \ldots ,T_{N(t)} ;\;D_{C1} ,\;D_{C2} , \ldots ,\;D_{CN(t)} ) \\ &= \prod\limits_{i\; = \;1}^{N(t)} {\left( {I(D_{Ci} > t\; - \;T_{i} )} \right)} . \\ \end{aligned} $$

Due to the conditional independence assumption described above, we can ‘integrate out’ \( D_{Ci} \)’s separately and define the corresponding probability in the following way:

$$ P(T_{S} > t|N(t),\;T_{1} ,\;T_{2} , \ldots ,T_{N(t)} ) = \prod\limits_{i\; = \;1}^{N(t)} {\left( {\bar{G}_{C} (t\; - \;T_{i} )} \right)} . $$

Therefore,

$$ \begin{aligned} P(T_{S} > t) &= E\left[ {\prod\limits_{i\; = \;1}^{N(t)} {\left( {\overline{G}_{C} (t\; - \;T_{i} )} \right)} } \right] \\ &= E\left[ {E\left[ {\prod\limits_{i\; = \;1}^{N(t)} {\left( {\overline{G}_{C} (t\; - \;T_{i} )} \right)} \;|\;N(t)} \right]} \right]. \\ \end{aligned} $$
(4.40)

The joint distribution of \( T_{1} ,T_{2} , \ldots ,T_{n} \) given \( N(t) = n \) is the same as the joint distribution of order statistics \( T_{(1)} ' \le T_{(2)} ' \le \ldots \le T_{(n)} ' \) of i.i.d. random variables \( T_{1} ',T_{2} ', \ldots ,T_{n} ' \), where the p.d.f. of the common distribution of \( T_{j} ' \)’s is given by \( \nu (x)/m(t),\;0 \le x \le t \):

$$ (T_{1} ,\;T_{2} , \ldots ,\;T_{n} |N(t) = n) =^{d} (T^{\prime}_{(1)} ,\;T^{\prime}_{(2)} , \ldots ,\;T^{\prime}_{(n)} ). $$

Then

$$ \begin{aligned} & E\left[ {\prod\limits_{i\; = \;1}^{N(t)} {\left( {\overline{G}_{C} (t\; - \;T_{i} )} \right)|\,N(t) = n} } \right] \\ &= E\left[ {\prod\limits_{i\; = \;1}^{n} {\left( {\overline{G}_{C} (t\; - \;T_{(i)} ')} \right)} } \right] \\ &= E\left[ {\prod\limits_{i\; = \;1}^{n} {\left( {\overline{G}_{C} (t\; - \;T_{i} ')} \right)} } \right] \\ &= \left( {E\left[ {\overline{G}_{C} (t\; - \;T_{i} ')} \right]} \right)^{n} \\ &= \left( {\frac{1}{m(t)}\int\limits_{0}^{t} {\left( {\overline{G}_{C} (t\; - \;u)} \right)} \,\nu (u)\,{\text{d}}u} \right)^{n} . \\ \end{aligned} $$
(4.41)

From Eqs. (4.40) and (4.41),

$$ \begin{aligned} P(T_{S} > t) &= \sum\limits_{n\; = \;0}^{\infty } {\left( {\frac{1}{m(t)}\int\limits_{0}^{t} {\left( {\overline{{G_{C} }} (t\; - \;u)} \right)} \,\nu (u)\,{\text{d}}u} \right)^{n} } \cdot \frac{{m(t)^{n} }}{n \, !}e^{ - m(t)} \\ &= e^{ - m(t)} \cdot \exp \left\{ {\int\limits_{0}^{t} {\left( {\overline{G}_{C} (t\; - \;u)} \right)} \,\nu (u)\,{\text{d}}u} \right\} \\ &= \exp \left\{ {\int\limits_{0}^{t} {\overline{G}_{C} (t\; - \;u)\,\nu (u)\,{\text{d}}u} - \int\limits_{0}^{t} {\nu (u)\,{\text{d}}u} } \right\} \\ &= \exp \left\{ { - \int\limits_{0}^{t} {G_{C} (t\; - \;u)\,\nu (u)\,{\text{d}}u} } \right\}, \\ \end{aligned} $$

where \( G_{C} (t - u) \) is defined by (4.35). Therefore, using the Leibniz rule and Eq. (4.36), \( \lambda_{S} (t) \) can be obtained in the following meaningful and rather simple form:

$$ \lambda {}_{S}(t) = \int\limits_{0}^{t} {g_{C} (t\; - \;u)\,\nu (u)\;{\text{d}}u} \; = \;\int\limits_{0}^{t} {p(t\; - \;u)\,g(t\; - \;u)\,\nu (u)\,{\text{d}}u} . $$
(4.42)

\( \square\)

We will now show that, under certain assumptions, the \( p(t) \Leftrightarrow q(t) \) model (4.1) and the current one are asymptotically equivalent. Indeed, assume that \( \lim_{t\; \to \;\infty } \nu (t) \equiv \nu < \infty \). Without loss of generality, let \( p(t) \) and \( \nu (t) \) be continuous functions with \( p(t) > 0 \) for all \( t \ge 0 \). Then the failure rate (4.42) tends to a constant as \( t \to \infty \), i.e.,

$$ \begin{aligned} \lim_{t\; \to \;\infty } \lambda {}_{S}(t) &= \lim_{t\; \to \;\infty } \int\limits_{0}^{t} {p(t - u)\,g(t - u)\,\nu (u)\,{\text{d}}u} \\ &= \nu \int\limits_{0}^{\infty } {p(u)g(u)} \,{\text{d}}u. \\ \end{aligned} $$

The latter integral is obviously finite, as \( g(t) \) is a pdf and \( p(t) \le 1 \) for all \( t \ge 0 \). Specifically, when \( \lim_{t\; \to \;\infty } p(t) = p, \)

$$ \lim_{t\, \to \,\infty } \lambda {}_{S}(t) = \nu p. $$

Thus, under the given assumptions, the failure rate (4.42) 'asymptotically converges' (as \( t \to \infty \)) to that of the classical extreme shock model (4.1).
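
The survival probability (4.38) is easy to probe numerically. The following Monte Carlo sketch (an illustration, not part of the original derivation) assumes a constant shock rate \( \nu \) and exponentially distributed \( D_{Ci} \), so that \( G_{C} \) and the integral in (4.38) have closed forms; all parameter values are chosen purely for illustration.

```python
import math
import random

random.seed(1)

NU, MU, T = 1.0, 1.0, 2.0   # assumed: shock rate, cure-time rate, mission time

def survives_once():
    # HPP shocks on [0, T]; survival requires D_Ci > T - T_i for every shock
    t = random.expovariate(NU)
    while t < T:
        d = random.expovariate(MU)          # D_Ci ~ G_C (exponential here)
        if d <= T - t:
            return False
        t += random.expovariate(NU)
    return True

n = 200_000
mc = sum(survives_once() for _ in range(n)) / n
# Eq. (4.38) with nu(u) = NU and G_C(s) = 1 - exp(-MU*s):
exact = math.exp(-NU * (T - (1 - math.exp(-MU * T)) / MU))
print(mc, exact)
```

Under these assumptions, \( \int_{0}^{t} G_{C} (t - u)\nu \,{\text{d}}u = \nu [t - (1 - e^{ - \mu t} )/\mu ] \), and the simulated proportion of surviving histories should agree with (4.38) up to Monte Carlo error.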

4.7 Stress–Strength Model with Delay and Cure

Consider now a more specific and practical model with delay and possible cure that can be applied, e.g., in reliability modeling of materials and mechanical structures. Let, as previously, \( \nu (t) \) be the rate of the NHPP of shocks (IEs) affecting our system and let \( S_{i} \) denote the magnitude of the \( i \)th shock (stress). Assume that \( S_{i} ,i = 1,2, \ldots \) are i.i.d. random variables with the common Cdf \( F_{S} (s) \) (\( \overline{{F_{S} }} (s) \equiv 1\; - \;F_{S} (s) \)) and the corresponding pdf \( f_{S} (s) \). The system is characterized by its strength to resist stresses. Let first the strength of the system \( Y \) be a constant, i.e., \( Y = y \). Assume that for each \( i = 1,2, \ldots \), the operable system immediately fails if \( S_{i} > y \) (fatal immediate failure), whereas the EE is triggered with a delay time and possible cure (as in the previous section) if \( S_{i} \le y \). It is clear that, due to the described operation of thinning, the initial NHPP splits into two NHPP processes with rates \( \bar{F}_{S} (y)\,\nu (t) \) and \( F_{S} (y)\,\nu (t) \). Therefore, combining the results of the previous section with the classical extreme shock model (4.1), Eqs. (4.38) and (4.39) can be generalized to

$$ P(T_{S} > t|Y = y) = \exp \left\{ { - \bar{F}_{S} (y)\int\limits_{0}^{t} {\nu (u)\,{\text{d}}u} } \right\}\exp \left\{ { - F_{S} (y)\int\limits_{0}^{t} {G_{C} (t\; - \;u)\,\nu (u)\,{\text{d}}u} } \right\},\;t \ge 0, $$
(4.43)
$$ \lambda {}_{S}(t|Y = y) = \bar{F}_{S} (y)\,\nu (t)\; + \;F_{S} (y)\int\limits_{0}^{t} {p(t\; - \;u)\,g(t\; - \;u)\,\nu (u)\,{\text{d}}u} ,\;t \ge 0, $$
(4.44)

accordingly.
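
For a fixed strength \( y \), Eq. (4.44) admits a closed form when, say, \( \nu (t) = \nu \), \( p(t) = p \) and \( g \) is an exponential density. The following small sketch (with arbitrarily assumed values for \( \nu \), \( p \), the delay rate and \( F_{S} (y) \)) evaluates it and illustrates the convergence of the failure rate to the constant \( \bar{F}_{S} (y)\nu + F_{S} (y)\nu p \):

```python
import math

NU, MU, P, FS = 1.5, 2.0, 0.3, 0.6   # assumed: shock rate, delay rate for g~Exp(MU), p(t)=P, F_S(y)=FS

def lam_S(t):
    # Eq. (4.44) with nu(u)=NU, p(t)=P and g(s)=MU*exp(-MU*s):
    # the integral int_0^t p*g(t-u)*NU du evaluates to P*NU*(1-exp(-MU*t))
    return (1 - FS) * NU + FS * P * NU * (1 - math.exp(-MU * t))

print(lam_S(0.1), lam_S(10.0))   # increases toward (1-FS)*NU + FS*P*NU
```

The first term of (4.44) acts from the start, whereas the contribution of the delayed (and possibly cured) failures builds up on the time scale \( 1/\mu \) of the delay distribution.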

In practice, due to various reasons, the strength of a system \( Y \) can be considered as a random variable. Let its support be, e.g., \( [0,\infty ) \). Denote by \( H_{Y} (y) \) (\( \overline{H}_{Y} (y) \equiv 1\; - \;H_{Y} (y) \)) and \( h_{Y} (y) \) the corresponding Cdf and pdf, respectively. The first guess in generalizing (4.43) and (4.44) to the case of a random \( Y \) would be just to replace \( F_{S} (y) \) and \( \overline{{F_{S} }} (y) \) in these equations by the expectations

$$ \int\limits_{0}^{\infty } {F_{S} (y)\,h_{Y} (y)\,{\text{d}}y} \;{\text{and}}\;\int\limits_{0}^{\infty } {\bar{F}_{S} (y)\,h_{Y} (y)\,{\text{d}}y} , $$
(4.45)

accordingly. However, this is not true, as the proper conditioning should be imposed (on the condition that the previous shocks have been survived). This operation is similar to the Bayesian update of information. It can easily be seen from (4.43) and (4.44) that the model can now be considered as a mixture or, equivalently, as a frailty model with the frailty parameter \( Y \) (see the next chapter). Therefore, the mixture (observed) survival function for the lifetime \( T_{S} \) is obtained directly from (4.43) as the corresponding expectation:

$$ \begin{aligned} P(T_{S} > t) &= \int\limits_{0}^{\infty } {P(T_{S} > t|Y = y)\,h_{Y} (y)\,{\text{d}}y} \\ &= \int\limits_{0}^{\infty } {\exp \left\{ { - \int\limits_{0}^{t} {(\bar{F}_{S} (y)\,\nu (u)\; + \;F_{S} (y)\,G_{C} (t\; - \;u)\,\nu (u))\,{\text{d}}u} } \right\}h_{Y} (y)\,{\text{d}}y} , \\ \end{aligned} $$
(4.46)

whereas the failure rate is the following conditional expectation:

$$ \lambda_{S} (t) = \int\limits_{0}^{\infty } {\lambda {}_{S}(t|Y = y)} \,h_{Y} (y|T_{S} > t)\,{\text{d}}y, $$
(4.47)

where \( h_{Y} (y|T_{S} > t) \) is the pdf of the random variable \( Y|T_{S} > t \), or equivalently, \( \lambda_{S} (t) \), in accordance with the definition, is

$$ \lambda {}_{S}(t)\; = \; - \frac{{P^{\prime}(T_{S} > t)}}{{P(T_{S} > t)}}. $$

From (4.43), \( h_{Y} (y|T_{S} > t) \) can be obtained as

$$ \begin{aligned} h_{Y} (y|T_{S} > t)\; &= \exp \left\{ { - \bar{F}_{S} (y)\int\limits_{0}^{t} {\nu (u)\,{\text{d}}u} } \right\}\exp \left\{ { - F_{S} (y)\int\limits_{0}^{t} {G_{C} (t\; - \;u)\,\nu (u)\,{\text{d}}u} } \right\}h_{Y} (y) \\ & \times \left( {\int\limits_{0}^{\infty } {\exp \left\{ { - \int\limits_{0}^{t} {(\bar{F}_{S} (x)\,\nu (u)\; + \;F_{S} (x)\,G_{C} (t\; - \;u)\,\nu (u))\,{\text{d}}u} } \right\}h_{Y} (x)\,{\text{d}}x} } \right)^{ - 1} . \\ \end{aligned} $$
(4.48)

Equations (4.44), (4.47) and (4.48) show that the explicit form of \( \lambda_{S} (t) \) is rather cumbersome and numerical methods should be used for calculating it in practice. However, our goal here is to emphasize the relevant methodological issues.

Specifically, when there is only a fatal immediate failure (i.e., without delays), Eq. (4.46) simplifies to

$$ P(T_{S} > t) = \int\limits_{0}^{\infty } {\exp \left\{ { - \overline{F}_{S} (y)\int\limits_{0}^{t} {\nu (u)\,{\text{d}}u} } \right\}h_{Y} (y)\,{\text{d}}y} $$
(4.49)

and after the change in the order of integration, the corresponding failure rate becomes

$$ \lambda_{S} (t) = \frac{{\int_{0}^{\infty } {\int_{0}^{s} {\exp \left\{ { - \overline{F}_{S} (y)\int\limits_{0}^{t} {\nu (u)\,{\text{d}}u} } \right\}\,} } h_{Y} (y)\,{\text{d}}y\,f_{S} (s)\,{\text{d}}s}}{{\int\limits_{0}^{\infty } {\exp \left\{ { - \overline{F}_{S} (y)\int\limits_{0}^{t} {\nu (u)\,{\text{d}}u} } \right\}h_{Y} (y)\,{\text{d}}y} }}\nu (t). $$
(4.50)

The right-hand side of Eq. (4.50) is still much more complex than the corresponding failure rate for the fixed-strength model, which is the simple product \( \bar{F}_{S} (y)\,\nu (t) \). The price for this simplicity is neglecting the random nature of the strength of a system.
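
The frailty effect behind (4.49) and (4.50) can be made visible numerically: conditioning on survival shifts the distribution of \( Y \) toward larger strengths, so \( \lambda_{S} (t) \) decreases in \( t \). A minimal sketch, assuming \( \nu (u) \equiv \nu \), both \( S \) and \( Y \) standard exponential (so \( \bar{F}_{S} (y) = e^{ - y} \)), and plain trapezoidal quadrature for (4.50); all values are illustrative:

```python
import math

NU, THETA = 1.0, 1.0   # assumed: constant shock rate nu, strength Y ~ Exp(THETA)

def lam_S(t, n=4000, ymax=40.0):
    # Eq. (4.50) with nu(u)=NU and F_S exponential(1), so Fbar_S(y)=exp(-y);
    # plain trapezoidal quadrature over the strength variable y
    h = ymax / n
    num = den = 0.0
    for i in range(n + 1):
        y = i * h
        w = 0.5 if i in (0, n) else 1.0
        g = math.exp(-math.exp(-y) * NU * t) * THETA * math.exp(-THETA * y)
        num += w * math.exp(-y) * g
        den += w * g
    return NU * num / den

print(lam_S(1.0), lam_S(5.0))   # decreases in t: the frailty effect
```

For these particular choices, the substitution \( z = e^{ - y} \) gives \( \lambda_{S} (1) = (1 - 2e^{ - 1} )/(1 - e^{ - 1} ) \approx 0.418 \), which the quadrature reproduces.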

4.8 Survival of Systems with Protection Subject to Two Types of External Attacks

Consider a large system (LS) that, because of its importance and (or) large economic value, should be protected from possible harmful attacks or intrusions. In many instances, this protective function is performed by a specially designed defence system (DS). Therefore, the attacker wants to destroy the DS partially or completely and then to attack the LS [12].

Let the maximum level of performance of the DS be described by the value of the initial defence capacity, \( D_{M} \)—to be interpreted as, e.g., the total number of defence units, service points, firewalls, etc. For instance, we may imagine a system that defends against aircraft or missile strikes on some important object (e.g., a power station or a marine port during combat). Another, more 'peaceful' example is a computer network that should be protected from hacker attacks aimed at disabling firewalls.

The attacker executes two types of attacks—those that target the DS and those that target the LS itself. We will model these actions by two different stochastic point processes to be called, for convenience, the A1 and the A2 shock processes, respectively. The shocks from the A1 process damage, i.e., destroy, certain parts of the DS. We assume that the DS is repairable and, therefore, this effect is temporary. Given the stochastic nature of the setting, the actual defence capacity at time \( t \) can be modeled by a stochastic process \( \{ D(t),\,t \ge 0\} \). For example, it may be maximal for long periods of time, i.e., \( D(t) = D_{M} \), or severely hampered when \( D(t) \ll D_{M} \). Thus, distinct from the conventional shock models with accumulated damage, our model describes a nonmonotonic damage process, which accounts for, e.g., the corresponding repair actions.

The DS defends the nonrepairable LS from the A2 process of shocks that are aimed at destroying the LS or, in other words, at completely terminating its operation. In accordance with reliability terminology, we will call this event a failure. Assume that, similar to the classical extreme shock models, each shock from the A2 process results in the LS failure with probability \( p(t) \) or is 'perfectly' survived with the complementary probability \( q(t) = 1\; - \;p(t) \). The latter means in our case that the DS has neutralized the attack. It is natural to assume that these probabilities are functions of the defence capacity in the following sense: for each realization of \( D(t) = d(t) \), the failure probability \( p(t) \) is a decreasing function of the actual defence capacity, i.e., \( p(t)\; = \;p^{*} (d(t)) \), where \( p^{*} ( \cdot ) \) is strictly decreasing in its argument. As the simplest meaningful scenario, one may define a proportion-type function:

$$ p^{*} (d(t)) = (D_{M} \; - \;d(t))/D_{M} . $$

The failure of the LS occurs when the attack on it is not neutralized by the DS. We are interested in the survival probability of the LS in \( [0,t) \). An obvious specific case is when, instead of the A2 shock process, only one attack at a time instant \( t^{\prime} \in [0,t) \) is executed, with the corresponding failure probability \( p(t^{\prime}) = p^{*} (d(t^{\prime})) \). The foregoing setting indicates that the description of the stochastic process \( \{ D(t),\;\,t \ge 0\} \) is the crucial part of our approach. In order to obtain a mathematically tractable solution, relatively simple stochastic point processes need to be adopted as the corresponding models for the A1 and the A2 shock processes.

For a formal description, denote

  1. (i)

\( \left\{ {N(t),\;t \ge 0} \right\} \)—the NHPP process of the A1 shocks with rate \( v(t) \) and (ordered) arrival times \( R_{i} ,\;i = 0,\;1,\;2, \ldots \), \( R_{1} < R_{2} < R_{3} < \ldots \), where \( i = 0 \) formally means that there were no events in \( [0,\;t) \).

  2. (ii)

\( \left\{ {Q(t),\;t \ge 0} \right\} \)—the NHPP process of the A2 shocks with rate \( w(t) \) and (ordered) arrival times \( B_{i} ,\;i = 0,\;1,\;2, \ldots \), \( B_{1} < B_{2} < B_{3} < \ldots \), where \( i = 0 \) formally means that there were no events in \( [0,\;t) \). The specific case of only one A2 event in \( [0,\;t) \) will also be considered.

Assume that, when \( D(t) = D \), the A2 shock at time \( t \) directly destroys the operating LS with probability

$$ p(t\;|\;D(t) = D) = 1\; - \;\alpha \frac{D}{{D_{M} }} $$

and is survived with the complementary probability

$$ q(t\;|\;D(t) = D) \equiv 1\; - \;p(t\;|\;D(t)\; = \;D) = \alpha \frac{D}{{D_{M} }}, $$
(4.51)

where \( \{ D(t),\;t \ge 0\} \) is a stochastic process that models the defence capacity of the DS, \( D_{M} = D(0) \) is its fixed initial maximal value and \( \alpha \;(0 < \alpha \le 1) \) is a constant. The coefficient \( \alpha \) shows the protection coverage of the LS by the DS. Specifically, when \( \alpha = 1 \) and \( D(t) = D_{M} \), the DS provides 100 % protection of the LS from the A2 shock at time \( t \). In what follows, for simplicity of notation, we will assume that \( \alpha = 1 \), whereas the general case is obtained by a trivial modification. It should be noted that Eq. (4.51) means that the survival probability for the A2 shock is proportional to the normalized defence capacity \( D(t)/D_{M} \).

We must now set the model for the process \( \{ D(t),\;t \ge 0\} \), which is the major challenge in this setting. Let the \( i \)th A1 shock cause the damage \( W_{i} ,\;i = 1,\;2, \ldots \) to the DS. We assume that this effect 'expires' in a random time \( \tau_{i} \) (e.g., the repair facility is restoring the DS from the consequences of this shock). As the damages are accumulated,

$$ D(t) = D_{M} \; - \;\sum\limits_{i\; = \;1}^{N(t)} {W_{i} } 1(t\; - \;R_{i} < \tau_{i} ), $$
(4.52)

where \( 1( \cdot ) \) is the corresponding indicator. Obviously, the stochastic process \( \{ D(t),t \ge 0\} \) should not be negative; we will discuss this for the specific models to follow.

The number of A1 shocks that contribute toward the total damage at time \( t \) can obviously be defined as the following stochastic process:

$$ X(t) = \sum\limits_{i\; = \;1}^{N(t)} {1(t\; - \;R_{i} \le \tau_{i} )} . $$
(4.53)

In other words, \( X(t) \) counts the number of A1 shocks with ‘active’ damage (not eliminated or vanished) at time \( t \). Assume further that

  1. (iii)

    \( \tau_{i} ,\;i = 1,\;2,\;3, \ldots \) are i.i.d. random variables with the Cdf \( G(t) \) and mean \( \bar{\tau }_{G} \).

  2. (iv)

    \( W_{i} ,\;i = 1,\;2,\;3, \ldots \) are i.i.d. random variables with finite expectation \( E[W_{i} ] = d_{w} \) (for Model 1 to follow).

  3. (v)

    \( \{ N(t),\;t \ge 0\} ,\;\{ Q(t),\;t \ge 0\} ,\;W_{i} ,\;i = 1,\;2, \ldots \) and \( \tau_{i} ,\;i = 1,\;2, \ldots \) are independent of each other.

We will consider two models for damage accumulation and the resulting probabilities of interest.

Model 1. In accordance with (4.51) (\( \alpha = 1 \)),

$$ q_{1} (t|W_{i} = w_{i} ,\;i = 1,\;2, \ldots ,\;X(t) = r) = \frac{{D_{M} - \sum\limits_{i\; = \;1}^{r} {w_{{j_{i} }} } }}{{D_{M} }}, $$
(4.54)

where \( j_{1} < j_{2} < \ldots < j_{r} \) are the subscripts of the \( W_{i} \) for which \( \{ t\; - \;R_{i} < \tau_{i} \} \) is satisfied and the subscript "1" in \( q_{1} \) stands for the first model. Assume initially that there is only one A2 shock, whereas the case of a process of A2 shocks will be considered later. The unconditional probability of survival under a single A2 shock at time \( t \) is the corresponding expectation that, in accordance with Wald's equality, can be written as

$$ \begin{aligned} q_{1} (t) &= E[q_{1} (t|W_{i} ,\;i = 1,\;2, \ldots ,\;X(t))] \\ &= \frac{{D_{M} - E\left[ {\sum\limits_{i\, = \,1}^{X(t)} {W_{{j_{i} }} } } \right]}}{{D_{M} }} = 1 - \frac{{E[X(t)]\,d_{w} }}{{D_{M} }}. \\ \end{aligned} $$
(4.55)

In this model, we implicitly assume that the damages are relatively small compared with the full size \( D_{M} \), i.e., \( d_{w} \ll D_{M} \), and that the rate of the A1 process is not too large, in order for (4.52) to be positive (i.e., the probability that it is formally negative is negligible). These assumptions will be discussed later in a broader context.

Model 2. Model 1 traditionally describes the accumulation of damage via i.i.d. increments. However, in view of our two-shock-processes setting, it can be interesting and appealing to consider a different, new scenario, when each shock decreases the defence capacity proportionally [12]. The damage in this case depends on the value of the defence capacity: a larger \( D(t) \) corresponds to a larger damage from a shock. This assumption often seems to be more realistic than the i.i.d. one, as in many instances the size of the damage depends on the size of the attacked system. Suppose that a single A1 shock has occurred at time \( t \). Then our assumption can be formalized as

$$ D(t) = kD(t - ), $$
(4.56)

where the proportionality factor \( k \) \( (0 < k < 1) \) describes the efficiency of attacks for each shock from the A1 process and "\( t - \)" denotes the time instant just prior to \( t \).

As the defence system starts at \( t = 0 \) at ‘full size’, its capacity at time \( t \) is given by the following random variable (for each fixed \( t \)), or equivalently, by the stochastic process \( \{ D(t),\;t \ge 0\} \):

$$ D(t) = D_{M} k^{X(t)} , $$
(4.57)

as the effect of all other damages caused by the process \( \{ N(t),\;t \ge 0\} \) (not counted by (4.53)) was eliminated (repaired). In contrast to Model 1, \( D(t) \) is always positive and no additional assumption for that is needed. In accordance with (4.51):

$$ q_{2} (t|X(t) = r) = k^{r} . $$
(4.58)

The unconditional probability of survival under a shock at time \( t \) is the corresponding expectation with respect to \( X(t) \):

$$ q_{2} (t) = E[q_{2} (t|X(t))] = E[k^{X(t)} ]. $$
(4.59)

In practice, \( k \) is usually close to 1, meaning that only a small portion of the defence capability is lost at each A1 shock.

Denote, as previously, by \( T_{S} \) the time to failure of the LS. We are now ready to obtain the survival probability \( \Pr (T_{S} > t) \). As follows from (4.55) and (4.59), in order to describe the process \( \{ D(t),\;t \ge 0\} \) and to derive \( \Pr (T_{S} > t) \) for both models, we need to obtain the discrete distribution of \( X(t) \) given by Eq. (4.53). The proof of the following theorem is rather straightforward and similar to the proofs of the previous sections and, therefore, it is omitted. However, this result will be basic for our further derivations in this section.

Theorem 4.9

Let \( m_{v} (t) \equiv E[N(t)] = \int\limits_{0}^{t} {v(x)\;{\text{d}}x} \) denote the cumulative rate of the A1 process of shocks and suppose that \( m_{v}^{ - 1} (t),\;t > 0 \) exists. Then the distribution of \( X(t) \) for each fixed \( t \) is given by the following formula:

$$ \Pr (X(t) = r) = \frac{{\left( {\int_{0}^{t} {v(x)\,\bar{G}(t\, - \,x)\;{\text{d}}x} } \right)^{r} \exp \left\{ { - \int_{0}^{t} {v(x)\bar{G}(t\, - \,x){\text{d}}x} } \right\}}}{r!}, $$
(4.60)

where \( \bar{G}(t) \equiv 1\, - \,G(t) \) is the survival probability for \( \tau_{i} ,\;i = 1,\;2,\;3, \ldots \)
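
Theorem 4.9 states that \( X(t) \) is Poisson with parameter \( \int_{0}^{t} v(x)\bar{G}(t - x)\,{\text{d}}x \), and this is straightforward to confirm by simulation. The sketch below (assumed constant A1 rate and exponential repair times; the values are purely illustrative) compares the empirical mean and the empirical probability of no active damage with Eq. (4.60):

```python
import math
import random

random.seed(2)

V, MU, T = 2.0, 1.0, 3.0   # assumed: A1 rate, repair rate (tau ~ Exp(MU)), time point

def x_t():
    # number of A1 shocks whose damage is still 'active' at time T
    count, s = 0, random.expovariate(V)
    while s < T:
        if random.expovariate(MU) > T - s:   # tau_i > T - R_i
            count += 1
        s += random.expovariate(V)
    return count

n = 100_000
samples = [x_t() for _ in range(n)]
m = (V / MU) * (1 - math.exp(-MU * T))       # int_0^T v*Gbar(T-x) dx in closed form
mc_mean = sum(samples) / n
mc_p0 = samples.count(0) / n
print(mc_mean, m, mc_p0, math.exp(-m))      # Eq. (4.60) gives P(X(T)=0) = exp(-m)
```
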

Consider first the probability of survival under a single A2 shock at time \( t \), which can already be of practical interest in applications. In fact, this is our \( q(t) \) defined for both models by expectations (4.55) and (4.59), respectively. The following theorem gives the corresponding expressions.

Theorem 4.10

The probability of survival of the operating LS under a single A2 shock at time \( t \) is

$$ q_{1} (t) = 1 - \frac{{\left[ {\int_{0}^{t} {v(x)\,\bar{G}(t\, - \,x)\;{\text{d}}x} } \right]d_{w} }}{{D_{M} }} $$
(4.61)

for Model 1 and

$$ q_{2} (t) = \exp \left\{ { - (1 - k)\int\limits_{0}^{t} {v(x)\,\bar{G}(t - x)\,{\text{d}}x} } \right\} $$
(4.62)

for Model 2.

Proof

It immediately follows from Eq. (4.60) that

$$ E[X(t)] = \int\limits_{0}^{t} {v(x)\,\bar{G}(t\, - \,x)\,{\text{d}}x} $$

and, therefore, (4.61) holds.

Similarly, for Model 2,

$$ \begin{aligned} q_{2} (t) &= E[k^{X(t)} ] \\ &= \sum\limits_{r\, = \,0}^{\infty } {k^{r} } \frac{{\left( {\int_{0}^{t} {v(x)\,\bar{G}(t\, - \,x)\,{\text{d}}x} } \right)^{r} \exp \left\{ { - \int_{0}^{t} {v(x)\,\bar{G}(t\, - \,x)\,{\text{d}}x} } \right\}}}{r!} \\ &= \exp \left\{ { - (1\, - \,k)\int\limits_{0}^{t} {v(x)\,\bar{G}(t\, - \,x)\,{\text{d}}x} } \right\}. \\ \end{aligned} $$

\( \square \)
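
Both expressions of Theorem 4.10 can be checked against a direct simulation of the damage process. In the sketch below (illustrative assumptions: constant A1 rate, exponential repair times and, for Model 1, exponential damages \( W_{i} \)), \( q_{1} (t) \) and \( q_{2} (t) \) are estimated as the empirical expectations (4.55) and (4.59) and compared with (4.61) and (4.62):

```python
import math
import random

random.seed(5)

V, MU, T, DW, DM, K = 2.0, 1.0, 3.0, 1.0, 20.0, 0.8   # assumed parameter values

def active_damages():
    # damages W_i ~ Exp(1/DW) of A1 shocks whose repair is still pending at T
    out, s = [], random.expovariate(V)
    while s < T:
        if random.expovariate(MU) > T - s:
            out.append(random.expovariate(1.0 / DW))
        s += random.expovariate(V)
    return out

n = 100_000
q1_mc = q2_mc = 0.0
for _ in range(n):
    w = active_damages()
    q1_mc += (DM - sum(w)) / DM        # Model 1, Eq. (4.54)
    q2_mc += K ** len(w)               # Model 2, Eq. (4.58)
q1_mc, q2_mc = q1_mc / n, q2_mc / n

m = (V / MU) * (1 - math.exp(-MU * T))           # E[X(T)]
q1 = 1 - m * DW / DM                             # Eq. (4.61)
q2 = math.exp(-(1 - K) * m)                      # Eq. (4.62)
print(q1_mc, q1, q2_mc, q2)
```

Note that \( D_{M} \) is taken much larger than \( E[X(t)]d_{w} \), in line with the positivity assumption made for Model 1.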

Theorem 4.11

Let \( v(t) = v,\;t \in [0,\;\infty ) \) or \( \lim_{t\, \to \,\infty } v(t) = v \). Then the stationary values of \( q_{i} (t) \), i.e., \( \lim_{t\, \to \,\infty } q_{i} (t) = q_{i} ,\;i = 1,\;2 \), are given by

$$ q_{1} = 1\, - \,\frac{{\bar{\tau }_{G} d_{w} }}{{\bar{\tau }_{N} D_{M} }}, $$
(4.63)
$$ q_{2} = \exp \left\{ { - (1\, - \,k)\frac{{\bar{\tau }_{G} }}{{\bar{\tau }_{N} }}} \right\}, $$
(4.64)

where \( \bar{\tau }_{G} = \int_{0}^{\infty } {\bar{G}(x)} \,{\text{d}}x \) is the mean time that corresponds to the random variables \( \tau_{i} ,\;i = 1,\;2, \ldots \) and \( \bar{\tau }_{N} = 1/v \) is the mean time (exactly or asymptotically as \( t \to \infty \)) between successive A1 shocks.

Theorem 4.11 is intuitively obvious and can be proved in a straightforward way by using the variable substitution \( y = t\; - \;x \) for the integrals in (4.61) and (4.62) and by applying Lebesgue's Dominated Convergence Theorem afterward. When \( \bar{\tau }_{G} /\bar{\tau }_{N} \ll 1 \), which means a very quick repair of damage with respect to the time between successive A1 shocks, Model 2 reduces to a very simple (and usually not practically justified) setting in which the repair periods after different A1 shocks do not overlap. In this case, the probability of failure that corresponds to (4.64) is just \( p_{2} = 1\; - \;q_{2} \; \approx \;(1\; - \;k)\,\bar{\tau }_{G} /\bar{\tau }_{N} . \)
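
For exponential repair times, the convergence of \( q_{2} (t) \) to the stationary value (4.64) can be observed directly, since the integral in (4.62) is then available in closed form. A small illustration with assumed values of \( v \), the repair rate and \( k \):

```python
import math

V, MU, K = 2.0, 1.0, 0.9        # assumed: A1 rate, repair rate, efficiency factor
ETA = (1.0 / MU) / (1.0 / V)    # eta = tau_G / tau_N

def q2(t):
    # Eq. (4.62) with v(x)=V and exponential repair: Gbar(s)=exp(-MU*s)
    return math.exp(-(1 - K) * (V / MU) * (1 - math.exp(-MU * t)))

q2_limit = math.exp(-(1 - K) * ETA)   # Eq. (4.64)
print(q2(0.5), q2(20.0), q2_limit)
```

The survival probability under a single A2 shock decreases monotonically from \( q_{2} (0) = 1 \) to the stationary value on the time scale \( \bar{\tau }_{G} \) of the repair distribution.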

It follows from the above reasoning that the stationary variant of (4.60) (i.e., for \( t \) sufficiently large and \( v(t) = v,\;t \in [0,\;\infty ) \) or \( \lim_{t\; \to \;\infty } v(t) = v \)) can be of interest. Denote \( \bar{\tau }_{G} /\bar{\tau }_{N} \equiv \eta \). Then the stationary distribution corresponding to (4.60) is that of a Poisson random variable with this parameter:

$$ \Pr (X_{S} = r) = \frac{{\eta^{r} \exp \{ - \eta \} }}{r!} .$$
(4.65)

Theorem 4.10 provides a simple way of obtaining the probability of failure of the LS under a single attack at time \( t \).

We are now ready to consider the A2 process of shocks and to derive the corresponding probability of the system's survival, \( P(T_{S} > t) \), under the attacks of two types. However, it turns out that this problem is much more complex than it looks at first sight and, therefore, additional assumptions should be imposed in order to simplify it and to obtain results that potentially can have practical value. First of all, we must answer the question: are the probabilities \( q_{i} (t)\;(p_{i} (t)) \) obtained in Theorem 4.10 suitable for use in the classical \( p(t) \Leftrightarrow q(t) \) model? Recall that in this extreme shock model, each event from the Poisson process of shocks with rate \( w(t) \) is survived with probability \( q(t) \) and 'kills' a system with the complementary probability \( p(t) = 1 - q(t) \), independently of all previous history. In this case, the system's survival probability in \( [0,\;t) \) is given by the following exponential representation (see also Eq. (4.1)):

$$ P(T_{S} > t) \equiv \bar{F}_{S} (t) = \exp \left( { - \int\limits_{0}^{t} {p(u)\,w(u)\,{\text{d}}u} } \right), $$
(4.66)

and, therefore, the corresponding failure rate function \( \lambda_{S} (t) \) is

$$ \lambda_{S} (t) = p(t)w(t),\;t \ge 0. $$
(4.67)

At first glance, it looks as if we already have everything in place for using (4.61) and (4.62) in Eq. (4.66). However, it can be shown that a certain dependence on history prevents this, and the only way to deal with this complexity and obtain some practically meaningful results is to consider additional assumptions that allow for further simplification of the model.

Let both A1 and A2 be homogeneous Poisson shock processes with rates \( v \) and \( w \), respectively. Let the A2 shocks be sufficiently rare when compared with the dynamics of the \( X(t) \) process:

$$ \bar{\tau}_{Q} \equiv \frac{1}{w} \gg \frac{1}{v} \equiv \bar{\tau }_{N} ;\;\bar{\tau }_{G} \ll \bar{\tau }_{Q} , $$
(4.68)

which makes sense in practice, as the intensity of attacks on the LS can be considered much smaller than that on the DS. The second inequality in (4.68) implies that the mean time of repair of the DS is much smaller than the mean inter-arrival time of the potentially terminal A2 shocks, which is also a reasonable assumption in practice. Inequalities (4.68) can be considered as an analogue of the fast repair conditions (see, e.g., Ushakov and Harrison [28]). Finkelstein and Zarudnij [20] have used similar assumptions for approximating the multiple availability on stochastic demand (i.e., the repairable system should be available at all demands that occur in accordance with the homogeneous Poisson process in \( [0,\;t) \)). Assumptions (4.68) 'can help to forget the history' of the process \( X(t) \) and, therefore, a simple \( p(t) \Leftrightarrow q(t) \) model (4.66)–(4.67) holds. Indeed, under these assumptions, the correlation between the values of the process \( X(t) \) at the instants of occurrence of the A2 shocks is negligible, as the time between successive A2 shocks is sufficiently large. Therefore, the probabilities of survival under each A2 shock for both models are given approximately by Eqs. (4.61) and (4.62), whereas the following result holds asymptotically:

Theorem 4.12

Let \( v(t) = v \), \( w(t) = w \); \( w/v \to 0 \), \( \bar{\tau }_{G} /\bar{\tau }_{Q} \to 0 \) and let \( t \) be sufficiently large: \( t \gg \bar{\tau }_{Q} \). Then the probabilities of survival for the two models, in accordance with Theorem 4.11, are

$$ P_{1} (T_{S} > t) = \exp \left\{ { - w\left[ {\eta \frac{{d_{w} }}{{D_{M} }}} \right]t} \right\}(1\; + \;o(1)), $$
(4.69)
$$ P_{2} (T_{S} > t) = \exp \left\{ { - w\left[ {1\; - \;\exp \left\{ { - (1\; - \;k)\eta } \right\}} \right]t} \right\}(1\; + \;o(1)), $$
(4.70)

where \( \eta \equiv \bar{\tau }_{G} /\bar{\tau }_{N} \).
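
Approximation (4.70) can be probed by simulating the full two-process dynamics for Model 2. The sketch below (all parameter values assumed, chosen so that the fast repair conditions (4.68) hold, i.e., \( w \ll v \) and \( \bar{\tau }_{G} \ll \bar{\tau }_{Q} \)) draws the A1 damage windows, applies the survival probability \( k^{X(b)} \) at each A2 arrival \( b \), and compares the empirical survival frequency with (4.70):

```python
import math
import random

random.seed(3)

V, MU, W, K, T = 5.0, 5.0, 0.2, 0.8, 10.0   # assumed values satisfying (4.68)

def run():
    # A1 process: record the 'active-damage' window of each shock
    windows, s = [], random.expovariate(V)
    while s < T:
        windows.append((s, s + random.expovariate(MU)))
        s += random.expovariate(V)
    # A2 process: each attack at time b is survived with probability k^X(b)
    b = random.expovariate(W)
    while b < T:
        x = sum(1 for (a, e) in windows if a <= b < e)
        if random.random() > K ** x:
            return False
        b += random.expovariate(W)
    return True

n = 20_000
mc = sum(run() for _ in range(n)) / n
eta = (1.0 / MU) / (1.0 / V)                                 # tau_G / tau_N
approx = math.exp(-W * (1 - math.exp(-(1 - K) * eta)) * T)   # Eq. (4.70)
print(mc, approx)
```

The agreement is close precisely because, under (4.68), the values of \( X(t) \) at successive A2 arrivals are nearly uncorrelated.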

It should be noted that for sufficiently small \( t \), when \( t \ll \bar{\tau }_{Q} \), we can approximately consider the case of only one A2 shock that arrives in accordance with the distribution \( F(t) = 1\; - \;\exp \left\{ { - \int_{0}^{t} {w(u)\,{\text{d}}u} } \right\} \). Then

$$ P_{i} (T_{S} > t) = \int\limits_{0}^{t} {q_{i} (u)\,f(u)} \,{\text{d}}u\; + \;\exp \left\{ { - \int\limits_{0}^{t} {w(u)\,{\text{d}}u} } \right\}, $$

where \( q_{i} (u),\;i = 1,\;2 \) are given by Eqs. (4.61) and (4.62) and \( f(u) = F^{\prime}(u) \). Obviously, as in this case the A2 process can approximately be regarded as its first event only, we do not need any other assumptions on the A1 process. Dealing with the A2 process of shocks, however, creates more mathematical difficulties and, therefore, a number of assumptions and simplifications have been made to arrive at approximations (4.69) and (4.70).
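
When only a single A2 attack can occur, the last display is exact and can be verified directly: simulate the attack epoch, evaluate \( X \) at that epoch, and compare with the quadrature of \( \int_{0}^{t} q_{2} (u)f(u)\,{\text{d}}u + e^{ - wt} \). A sketch for Model 2 with assumed constant rates and exponential repair times:

```python
import math
import random

random.seed(4)

V, MU, K, W, T = 3.0, 2.0, 0.7, 0.5, 4.0   # assumed parameter values

def q2(u):
    # Eq. (4.62) with exponential repair times: Gbar(s)=exp(-MU*s)
    return math.exp(-(1 - K) * (V / MU) * (1 - math.exp(-MU * u)))

# trapezoidal quadrature of int_0^T q2(u)*W*exp(-W*u) du + exp(-W*T)
n = 20_000
h = T / n
quad = sum((0.5 if i in (0, n) else 1.0) * q2(i * h) * W * math.exp(-W * i * h)
           for i in range(n + 1)) * h + math.exp(-W * T)

def run():
    b = random.expovariate(W)          # the single A2 attack epoch
    if b >= T:
        return True                    # no attack in [0, T)
    x, s = 0, random.expovariate(V)    # X(b): active A1 damages at time b
    while s < b:
        if random.expovariate(MU) > b - s:
            x += 1
        s += random.expovariate(V)
    return random.random() < K ** x    # survived with probability k^X(b)

mc = sum(run() for _ in range(50_000)) / 50_000
print(quad, mc)
```
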

4.9 Geometric Process of Shocks

The nonhomogeneous Poisson process (NHPP), due to its relative probabilistic simplicity, is definitely the most popular counting (point) process in applications and, specifically, in shock modeling. It often allows for rather simple and compact expressions for the probabilities of interest in the basic and generalized settings, as was shown in Sect. 4.8. However, in practice, point events do not necessarily possess the property of independent increments, and the number of events in a fixed interval of time does not necessarily follow the Poisson distribution. Therefore, other distribution-based counting processes should also be considered and, in this section, we will suggest another distribution-based class of counting processes (with dependent increments) that still allows for compact, explicit relationships in some applications [10].

The counting (point) processes that describe ‘events’ in the real world should share certain natural properties that can be formulated in the following way:

  1. (i)

    two or more events cannot occur ‘at the same time’ (i.e., the process is orderly),

  2. (ii)

    the mean number of occurrences in \( (0,\;t] \) as a function of \( t \), i.e., \( \Uplambda (t) \equiv E[N(t)] \), is sufficiently 'smooth', so that its derivative, which is called the rate or intensity, exists at every \( t \), i.e., \( \Uplambda '(t) = \lambda (t), \) \( t \ge 0 \), or \( \Uplambda (t) = \int_{0}^{t} {\lambda (u)\,{\text{d}}u} \).

It is well known that these statements (for sufficiently small \( \Updelta t \)) can be formalized as

  1. (a)

    \( N(0) = 0. \)

  2. (b)

    \( P(N(t\; + \;\Updelta t)\; - \;N(t) = 1) = \lambda (t)\,\Updelta t\; + \;o(\Updelta t). \)

  3. (c)

    \( P(N(t\; + \;\Updelta t)\; - \;N(t) \ge 2) = o(\Updelta t). \)

For the sake of notation, let us denote by G the general class of point processes that satisfy (a), (b), and (c). Clearly, if we additionally adopt

  1. (d)

    \( \{ N(t),\;t \ge 0\} \) has independent increments,

then we arrive at the NHPP. It is also well known that assumptions (a)–(d) result in the Poisson distribution of the number of events in \( (t_{1} ,\;t_{2} ] \). Thus, in what follows, in accordance with our intention stated above, we will 'depart' from the governing Poisson distribution.

Definition 4.1

The counting process \( \{ N(t),\;t \ge 0\} \) belongs to the Class of Geometric Counting Processes (CGCP), i.e., \( \{ N(t),\;t \ge 0\} \in \Upgamma \), if

  1. (a)

    \( N(0) = 0 \).

  2. (b)
    $$ P(N(t_{2} )\; - \;N(t_{1} ) = k) = \left( {\frac{1}{{1 + \Uplambda (t_{2} )\; - \;\Uplambda (t_{1} )}}} \right)\left( {\frac{{\Uplambda (t_{2} )\; - \;\Uplambda (t_{1} )}}{{1 + \Uplambda (t_{2} ) - \Uplambda (t_{1} )}}} \right)^{k} ,\;k = 0,\;1,\;2, \ldots $$
    (4.71)

It is easy to see that properties (b) and (c) of the general class G can be derived from (4.71):

  1. (b)

    \( \begin{aligned} & P(N(t + \Updelta t) - N(t) = 1) \\ &= \lambda (t)\,\Updelta t + \left\{ { - \lambda (t)\,\Updelta t + \left( {\frac{1}{1 + \Uplambda (t + \Updelta t) - \Uplambda (t)}} \right)\left( {\frac{\Uplambda (t + \Updelta t) - \Uplambda (t)}{1 + \Uplambda (t + \Updelta t) - \Uplambda (t)}} \right)} \right\}, \\ \end{aligned} \)

where the second term in the right-hand side is clearly \( o(\Updelta t) \);

  1. (c)

    \( P(N(t + \Updelta t) - N(t) \ge 2) = \left( {\frac{\Uplambda (t + \Updelta t) - \Uplambda (t)}{1 + \Uplambda (t + \Updelta t) - \Uplambda (t)}} \right)^{2} \), which is obviously \( o(\Updelta t) \).

Therefore, the CGCP is a subclass of G.

Observe that the counting distribution in (4.71) is obtained from the time-dependent reparametrization of the geometric distribution:

$$ P(N = k) = d(1 - d)^{k} ,\;k = 0,\;1,\;2, \ldots , $$

where \( 0 < d < 1 \); in (4.71), \( d = 1/(1 + \Uplambda (t_{2} ) - \Uplambda (t_{1} )) \).

In accordance with (4.71), the mean number of events in \( (t_{1} ,t_{2} ] \) is

$$ E[N(t_{2} ) - N(t_{1} )] = \Uplambda (t_{2} ) - \Uplambda (t_{1} ) = \int\limits_{{t_{1} }}^{{t_{2} }} {\lambda (u)\,{\text{d}}u} . $$

Specifically,

$$ P(N(t) = k) = \left( {\frac{1}{1 + \Uplambda (t)}} \right)\left( {\frac{\Uplambda (t)}{1 + \Uplambda (t)}} \right)^{k} ,\;k = 0,\;1,\;2, \ldots , $$
(4.72)

where \( E[N(t)] = \Uplambda (t) = \int_{0}^{t} {\lambda (u){\text{d}}u} . \)

Thus, an NHPP and a process \( \{ N(t),\;t \ge 0\} \in \Upgamma \) can have the same rate, but the crucial difference is that the members of the latter class, as intended, do not possess the property of independent increments, which can be easily seen from the following considerations.
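The counting distribution (4.72) is easy to work with numerically. The following minimal sketch (the value \( \Uplambda (t) = 2.5 \) is an arbitrary illustrative assumption) checks that the pmf sums to one and that its mean equals \( \Uplambda (t) \):

```python
# Numerical sanity check of the CGCP counting distribution (4.72);
# Lam = 2.5 is an arbitrary illustrative value of Λ(t).
def cgcp_pmf(k, Lam):
    """P(N(t) = k) for a CGCP with cumulative rate Lam = Λ(t)."""
    return (1.0 / (1.0 + Lam)) * (Lam / (1.0 + Lam)) ** k

Lam = 2.5
total = sum(cgcp_pmf(k, Lam) for k in range(2000))
mean = sum(k * cgcp_pmf(k, Lam) for k in range(2000))
assert abs(total - 1.0) < 1e-9   # pmf sums to one
assert abs(mean - Lam) < 1e-9    # E[N(t)] = Λ(t)
```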

Definition 4.2

The orderly counting process \( \{ N(t),\;t \ge 0\} \) with \( N(0) = 0 \) possesses the weak positive (negative) dependence property if

$$ Cov\left( {I(\{ N(s + t) - N(s) = 0\} ),\;I(\{ N(s) = 0\} )} \right) > 0\;( < 0), $$
(4.73)

where \( I( \cdot ) \) is the indicator function for the corresponding event.

The intuitive meaning of (4.73) for the positive (negative) dependence case is that the two events \( \{ N(s) = 0\} \) and \( \{ N(s + t) - N(s) = 0\} \) have the ‘tendency’ to occur simultaneously (not to occur simultaneously). We will also interpret this definition in the other equivalent form after the following simple theorem.

Theorem 4.13

The counting process \( \{ N(t),\;t \ge 0\} \in \Upgamma \) possesses the weak positive dependence property.

Proof

Observe that, from (4.71),

$$ \begin{aligned} & Cov\left( {I(\{ N(s + t) - N(s) = 0\} ),\;I(\{ N(s) = 0\} )} \right) \\ &= E[I(\{ N(s + t) - N(s) = 0\} ,\;\{ N(s) = 0\} )] - E[I(\{ N(s + t) - N(s) = 0\} )]\,E[I(\{ N(s) = 0\} )] \\ &= P(N(s + t) - N(s) = 0,\;N(s) = 0) - P(N(s + t) - N(s) = 0)\,P(N(s) = 0) \\ &= P(N(s + t) = 0) - P(N(s + t) - N(s) = 0)\,P(N(s) = 0) \\ &= \frac{[1 + \Uplambda (s)][1 + \Uplambda (s + t) - \Uplambda (s)] - [1 + \Uplambda (s + t)]}{[1 + \Uplambda (s + t)][1 + \Uplambda (s + t) - \Uplambda (s)][1 + \Uplambda (s)]} > 0. \\ \end{aligned} $$

\(\square\)

It follows from the proof that, as \( P(N(s) = 0) > 0 \), inequality (4.73) (for positive dependence) is equivalent to

$$ P(N(s + t) - N(s) = 0|N(s) = 0) > P(N(s + t) - N(s) = 0) $$
(4.74)

or to

$$ P(N(s + t) - N(s) \ge 1|N(s) = 0) < P(N(s + t) - N(s) \ge 1). $$

The latter means that the absence of events in \( (0,\;s] \) decreases the probability of events in \( (s,\;s + t] \). This seems to be a more natural interpretation of a (weak) positive dependence.
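Inequality (4.74) can be checked directly from the increment distribution (4.71). A minimal sketch, assuming a linear cumulative rate \( \Uplambda (t) = \lambda t \) with illustrative values of \( \lambda \), \( s \), and \( t \):

```python
# Exact check of the weak positive dependence inequality (4.74), using only
# the increment distribution (4.71); Λ(t) = 1.3 t and the values of s, t
# below are illustrative assumptions.
lam_rate = 1.3
Lam = lambda t: lam_rate * t

def p_no_events(a, b):
    """P(N(b) - N(a) = 0) from Eq. (4.71)."""
    return 1.0 / (1.0 + Lam(b) - Lam(a))

s, t = 2.0, 1.5
# P(N(s+t) - N(s) = 0 | N(s) = 0) = P(N(s+t) = 0) / P(N(s) = 0)
conditional = p_no_events(0.0, s + t) / p_no_events(0.0, s)
unconditional = p_no_events(s, s + t)
assert conditional > unconditional   # Eq. (4.74)
```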

In order to consider the rate and the corresponding conditional characteristic, we replace \( t \) in (4.74) by the infinitesimal \( {\text{d}}t \). Then

$$ \begin{aligned} & P(N(s + {\text{d}}t) - N(s) = 0|N(s) = 0) - P(N(s + {\text{d}}t) - N(s) = 0) \\ &= \frac{{\int_{0}^{s} {\lambda (u)\,{\text{d}}u\int_{s}^{s\, + \,{\text{d}}t} {\lambda (u)\,{\text{d}}u} } }}{{\left( {1 + \int_{0}^{s\, + \,{\text{d}}t} {\lambda (u)\,{\text{d}}u} } \right)\left( {1 + \int_{s}^{s\, + {\text{d}}t} {\lambda (u)\,{\text{d}}u} } \right)}} = \frac{{\lambda (s)\int_{0}^{s} {\lambda (u)\,{\text{d}}u} }}{{\left( {1 + \int_{0}^{s} {\lambda (u)\,{\text{d}}u + \lambda (s)\,{\text{d}}t} } \right)\left( {1 + \lambda (s)\,{\text{d}}t} \right)}}(1 + o(1))\,{\text{d}}t \\ &= \frac{{\lambda (s)\int_{0}^{s} {\lambda (u)\,{\text{d}}u} }}{{\left( {1 + \int_{0}^{s} {\lambda (u)\,{\text{d}}u} } \right)}}\left( {1 + o(1)} \right)\,{\text{d}}t = \frac{\lambda (s)\Uplambda (s)}{(1 + \Uplambda (s))}(1 + o(1))\,{\text{d}}t, \\ \end{aligned} $$

which is obviously positive. However, we can now say more about the corresponding dependence properties. As \( o(1) \) can be made as small as we wish, it is sufficient to consider \( \lambda (s)\Uplambda (s)/(1 + \Uplambda (s)) \). This expression (for \( \lambda^{\prime}(s) < \infty \)) is increasing in \( s \) when

$$ (\lambda^{\prime}(s)\Uplambda (s) + \lambda^{2} (s))\,(1 + \Uplambda (s)) - \lambda (s)^{2} \Uplambda (s) = \lambda^{\prime}(s)\Uplambda (s)\,(1 + \Uplambda (s)) + \lambda^{2} (s) > 0, $$
(4.75)

which holds, for instance, for increasing \( \lambda (s) \). Specifically, when \( \lambda (s) \equiv \lambda \), the left-hand side of (4.75) is equal to \( \lambda^{2} \). Thus, the dependence of the defined type is ‘getting stronger’ with \( s \) increasing.
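The monotonicity implied by (4.75) is easy to verify numerically. The sketch below assumes the increasing rate \( \lambda (s) = s \) (so \( \Uplambda (s) = s^{2} /2 \)), chosen purely for illustration:

```python
# Numerical check of the monotonicity implied by (4.75): for the increasing
# rate λ(s) = s (so Λ(s) = s²/2, an illustrative choice), the quantity
# λ(s)Λ(s)/(1 + Λ(s)) increases in s.
lam = lambda s: s
Lam = lambda s: s * s / 2.0

def dependence_coeff(s):
    return lam(s) * Lam(s) / (1.0 + Lam(s))

values = [dependence_coeff(s) for s in (0.5, 1.0, 2.0, 4.0)]
assert all(a < b for a, b in zip(values, values[1:]))  # increasing in s
```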

Taking into account that \( \{ N(t),\;t \ge 0\} \in \Upgamma \) is orderly, i.e.,

$$ \begin{aligned} & P(N(s + {\text{d}}t) - N(s) = 0|N(s) = 0) - P(N(s + {\text{d}}t) - N(s) = 0) \\ & \quad = - (P(N(s + {\text{d}}t) - N(s) = 1|N(s) = 0) - P(N(s + {\text{d}}t) - N(s) = 1)) + o({\text{d}}t) ,\\ \end{aligned} $$

the difference between the unconditional rate of \( \{ N(t),\;t \ge 0\} \in \Upgamma \) and its conditional rate (the intensity function) on condition that there were no events in \( (0,\;s] \) is obviously also increasing in \( s \) when (4.75) holds.

As previously, we will consider shocks as events of point processes. The described weak dependence now means that the absence of shocks in \( (0,\;s] \) decreases the probability of a shock in \( (s,\;s + {\text{d}}t] \), which can be natural for certain types of shock processes. For instance, the probability of an earthquake is usually larger when the previous earthquake occurred recently than when it occurred long ago. A similar argument can apply to heart attacks. For another example, suppose that each ‘realization’ of a shock process is a homogeneous Poisson process (HPP) with a constant rate, but the rate is determined randomly at \( t = 0 \) (i.e., the conditional Poisson process). It is well-known [27] that the conditional Poisson process has dependent increments. It can be easily shown that it possesses our weak positive dependence property, i.e., the absence of a shock in \( (0,\;s] \) decreases the probability of a shock in \( (s,\;s + {\text{d}}t] \).

The NHPP has another important limitation: the mean and the variance of the counting random variable coincide, \( {\text{Var}}[N(t)] = E[N(t)] \), for all \( t \ge 0 \). However, for \( \{ N(t),\;t \ge 0\} \in \Upgamma \),

$$ {\text{Var}}[N(t)] = \Uplambda (t)\,(1 + \Uplambda (t)) > E[N(t)] , $$
(4.76)

which can describe many other cases that are not covered by the NHPP.
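The overdispersion relation (4.76) can be confirmed by a truncated-sum computation of the mean and variance of the counting distribution (4.72) (the value \( \Uplambda (t) = 1.8 \) is an illustrative assumption):

```python
# Truncated-sum check of the overdispersion relation (4.76);
# Lam = 1.8 is an illustrative value of Λ(t).
def cgcp_pmf(k, Lam):
    return (1.0 / (1.0 + Lam)) * (Lam / (1.0 + Lam)) ** k

Lam = 1.8
ks = range(2000)
mean = sum(k * cgcp_pmf(k, Lam) for k in ks)
var = sum(k * k * cgcp_pmf(k, Lam) for k in ks) - mean ** 2
assert abs(mean - Lam) < 1e-9
assert abs(var - Lam * (1.0 + Lam)) < 1e-6
assert var > mean   # Var[N(t)] > E[N(t)], in contrast to the NHPP
```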

Thus, in our formulation, the rates of the NHPP and the members of the CGCP, \( \{ N(t),\;t \ge 0\} \in \Upgamma \) can be the same, but because of the dependence of increments, the corresponding probabilistic properties are different. Different members of this class can possess different dependence structures sharing some common features (e.g., the positive dependence of the described type).

Usually, for the corresponding stochastic modeling, we need a sufficiently complete description of a relevant stochastic process. However, there are settings where probabilistic reasoning and explicit results do not depend on certain properties of the processes. The shock models considered in the following examples illustrate this perfectly. It turns out that the results to be derived are valid for any member \( \{ N(t),\;t \ge 0\} \in \Upgamma \) and, therefore, do not depend on the specific dependence structure of this process [10]. Thus, in practice, in order to apply the proposed CGCP, it is sufficient to check the validity of (4.71).

Example 4.6

Extreme Shock Model. Consider an extreme shock model (see Sect. 4.1) for the specific case \( p(t) = p \) (with \( q \equiv 1 - p \)) and let the shock process be from the CGCP, i.e., \( \{ N(t),\;t \ge 0\} \in \Upgamma \), with rate \( \lambda (t) \) and arrival times \( T_{i} ,\;i = 1,\;2, \ldots \). Then, due to the assumption of independence,

$$ P(T_{S} > t|N(t) = n) = q^{n} , $$

and

$$ \begin{aligned} P(T_{S} > t) &= E[P(T_{S} > t|N(t))] = E[q^{N(t)} ] \\ &= \sum\limits_{n\, = \,0}^{\infty } {q^{n} } \left( {\frac{1}{1 + \Uplambda (t)}} \right)\left( {\frac{\Uplambda (t)}{1 + \Uplambda (t)}} \right)^{n} = \frac{1}{1 + \Uplambda (t)p}. \\ \end{aligned} $$

The corresponding failure rate function is

$$ \lambda_{S} (t) = - \frac{d\ln P(T_{S} > t)}{{\text{d}}t} = \frac{\lambda (t)p}{1 + \Uplambda (t)p}. $$

Thus, the survival probability and the failure rate are obtained without specifying the dependence structure of the shock process. It should be noted that when the process of shocks is NHPP,

$$ \lambda_{S} (t) = p\lambda (t),\;\forall t \ge 0 $$

and the shape of \( \lambda_{S} (t) \) coincides with that of \( \lambda (t) \). However, in the considered case, the result can be dramatically different. Assume that \( \lambda (t) \) is differentiable; then

$$ \lambda^{\prime}_{S} (t) = \frac{{\lambda '(t)\,p\,(1 + \Uplambda (t)p) - (\lambda (t)p)^{2} }}{{(1 + \Uplambda (t)p)^{2} }}, $$

and thus, \( \lambda_{S} (t) \) is increasing (decreasing) in \( (t_{1} ,t_{2} ) \) iff

$$ \lambda '(t)\,(1 + \Uplambda (t)p) \ge p(\lambda (t))^{2} \;\left( {\lambda '(t)\,(1 + \Uplambda (t)p) \le p(\lambda (t))^{2} } \right) $$

in \( (t_{1} ,\;t_{2} ) \).

Let, specifically, \( \lambda (t) = \lambda \), \( \forall t \ge 0 \); then the failure rate \( \lambda_{S} (t) \) is constant when shocks follow the HPP pattern. However, if the shock process is \( \{ N(t),\;t \ge 0\} \in \Upgamma \) with the same rate \( \lambda \), then the system failure rate \( \lambda_{S} (t) = p\lambda /(1 + p\lambda t) \) is strictly decreasing in time. This can be loosely interpreted in the following way. The equation \( P(T_{S} > t) = E[q^{N(t)} ] \), which defines the survival probability for the extreme shock model with an arbitrary point process \( \{ N(t),\;t \ge 0\} \), implies that survival up to a larger \( t \) corresponds to ‘sparser’ shocks in time. For the Poisson process, due to the independent increments property, this does not change the probability of a system’s failure in the infinitesimal interval of time \( [t,\;t + {\text{d}}t) \). However, for \( \{ N(t),\;t \ge 0\} \in \Upgamma \), as prompted by (4.74), it decreases the chance of shocks in the next interval, which eventually results in the decreasing failure rate.
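Both conclusions of this example can be verified numerically: the closed form \( 1/(1 + \Uplambda (t)p) \) against the direct geometric sum \( E[q^{N(t)} ] \), and the strict decrease of \( p\lambda /(1 + p\lambda t) \). All parameter values below are illustrative assumptions:

```python
# Extreme shock model under the CGCP: closed form vs. direct geometric sum;
# Lam, p, and the constant rate lam are illustrative assumptions.
def survival_closed(Lam, p):
    return 1.0 / (1.0 + Lam * p)

def survival_sum(Lam, p, kmax=1000):
    q, d = 1.0 - p, 1.0 / (1.0 + Lam)
    return sum((q ** k) * d * ((Lam / (1.0 + Lam)) ** k) for k in range(kmax))

Lam, p = 3.0, 0.3
assert abs(survival_closed(Lam, p) - survival_sum(Lam, p)) < 1e-9

# with λ(t) ≡ λ, the failure rate pλ/(1 + pλt) is strictly decreasing in t
lam = 2.0
failure_rate = lambda t: p * lam / (1.0 + p * lam * t)
assert failure_rate(1.0) > failure_rate(2.0) > failure_rate(5.0)
```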

Example 4.7

Cumulative Shock Model. Let, as previously, a system be subject to the process \( \{ N(t),\;t \ge 0\} \in \Upgamma \) of shocks with arrival times \( T_{i} ,\;i = 1,\;2, \ldots \). Assume that the \( i \)th shock increases the wear of a system by a random increment \( W_{i} \ge 0 \). In accordance with this setting, a random accumulated wear of a system at time \( t \) is

$$ W(t) = \sum\limits_{i\, = \,0}^{N(t)} {W_{i} } ,\quad W_{0} \equiv 0. $$

As previously, assume that the system fails when the accumulated wear exceeds a random boundary \( R \), i.e., \( W(t) > R \). The corresponding survival function in this case is given by

$$ P(T_{S} > t) = P(W(t) \le R). $$
(4.77)

Explicit derivations in (4.77) can be performed in specific, mathematically tractable cases.

Case 1.

Suppose that \( W_{i} ,\;i = 1,\;2, \ldots \) are i.i.d. and exponential with mean \( \theta \). For brevity, denote by \( W \) a generic random variable with this distribution. Let \( f_{R} (r) \) be the pdf of the random boundary \( R \). First of all, the mgf of \( W(t) \), \( M_{W(t)} (z) \), can be expressed as

$$ \begin{aligned} M_{W(t)} (z) &= E[\exp \{ zW(t)\} ] = \sum\limits_{n = 0}^{\infty } {E[\exp \{ zW\} ]^{n} \left( {\frac{1}{1 + \Uplambda (t)}} \right)\left( {\frac{\Uplambda (t)}{1 + \Uplambda (t)}} \right)^{n} } \\ &= \frac{1}{{1 + \Uplambda (t)[1 - (1 - \theta z)^{ - 1} ]}} = \frac{1}{1 + \Uplambda (t)} \cdot M_{0} (z) + \frac{\Uplambda (t)}{1 + \Uplambda (t)} \cdot M_{{{\text{exp}}[\theta (1 + \Uplambda (t))]}} (z), \\ \end{aligned} $$
(4.78)

where \( M_{0} (z) \equiv 1 \) corresponds to the mgf of the degenerate distribution with probability \( 1 \) at 0 and

$$ M_{{{\text{exp}}[\theta (1\, + \,\Uplambda (t))]}} (z) \equiv \left( {\frac{1}{1\, - \,\theta (1 + \Uplambda (t))z}} \right) $$

corresponds to the mgf of an exponential distribution with mean \( \theta (1 + \Uplambda (t)) \). It follows from (4.78) that the mgf of \( W(t) \) is given by the weighted average of the mgf’s of two random variables, which implies that the distribution of \( W(t) \) is the mixture of the corresponding distributions. Therefore, \( W(t) \) has the point mass at \( 0 \) (corresponding to no shocks in \( [0,\,t] \)),

$$ P(W(t) = 0) = \frac{1}{1 + \Uplambda (t)}, $$

and, for \( x > 0,\;W(t) \) has the pdf

$$ f_{W(t)} (x) = \frac{\Uplambda (t)}{{\theta (1 + \Uplambda (t))^{2} }}\exp \left\{ { - \frac{x}{\theta (1 + \Uplambda (t))}} \right\},\;x > 0. $$

Then the Cdf of \( W(t) \) is given by

$$ F_{W(t)} (x) = 1 - \frac{\Uplambda (t)}{1 + \Uplambda (t)}\exp \left\{ { - \frac{x}{\theta (1 + \Uplambda (t))}} \right\},\;x \ge 0. $$

Finally, the survival function of a system can now be defined as

$$ \begin{aligned} P(T_{S} > t) &= \int\limits_{0}^{\infty } {F_{W(t)} (r)} \,f_{R} (r)\,{\text{d}}r,\;t \ge 0 \\ &= 1 - \frac{\Uplambda (t)}{1 + \Uplambda (t)}\int\limits_{0}^{\infty } {\exp \left\{ { - \frac{r}{\theta (1 + \Uplambda (t))}} \right\}\,f_{R} (r)\,{\text{d}}r} ,\;t \ge 0. \\ \end{aligned} $$
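Both steps of Case 1 can be checked numerically: the mixture decomposition (4.78), verified as an identity in \( z \), and the final survival integral for an assumed exponential boundary \( R \) with mean \( \mu \), for which the integral has the closed form \( \theta (1 + \Uplambda )/(\theta (1 + \Uplambda ) + \mu ) \). All parameter values are illustrative assumptions:

```python
import math

# theta, Lam (= Λ(t)), and the boundary mean mu are illustrative assumptions.
theta, Lam, mu = 0.8, 2.0, 1.5

def mgf_direct(z):
    # left-hand side of (4.78)
    return 1.0 / (1.0 + Lam * (1.0 - 1.0 / (1.0 - theta * z)))

def mgf_mixture(z):
    # right-hand side of (4.78): point mass at 0 plus exponential component
    return 1.0 / (1.0 + Lam) + (Lam / (1.0 + Lam)) / (1.0 - theta * (1.0 + Lam) * z)

for z in (-0.5, -0.1, 0.05, 0.2):   # z below 1/(θ(1+Λ)), so both sides exist
    assert abs(mgf_direct(z) - mgf_mixture(z)) < 1e-12

# ∫ exp{-r/(θ(1+Λ))} f_R(r) dr for R ~ Exp(mean mu), by a Riemann sum
a, dr = 1.0 / (theta * (1.0 + Lam)), 1e-3
integral = sum(math.exp(-a * i * dr) * (1.0 / mu) * math.exp(-i * dr / mu) * dr
               for i in range(1, 40_000))
closed = theta * (1.0 + Lam) / (theta * (1.0 + Lam) + mu)
assert abs(integral - closed) < 2e-3
survival = 1.0 - (Lam / (1.0 + Lam)) * closed   # P(T_S > t) at this Λ(t)
assert 0.0 < survival < 1.0
```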

Case 2.

Suppose that the distribution of the random boundary \( R \) is now exponential with rate \( \theta \), i.e., \( P(R > r) = \exp \{ - \theta r\} \). Let \( M_{W} (z) \) be the mgf of an arbitrarily distributed random variable \( W \) (the \( W_{i} \) are i.i.d.).

Observe that, as the distribution of the random boundary \( R \) is exponential, the accumulated wear until time \( t \), \( W(t) = \sum\nolimits_{i\, = 0}^{N\left( t \right)} {W_{i} } \) does not affect the failure process of the system after time \( t \). That is, on the next shock, the probability of a system’s failure due to the accumulated wear is just \( P(R \le W_{N(t)\, + \,1} ) \), and does not depend on the wear accumulation history, i.e.,

$$ \begin{aligned} & P(R \ge W_{1} + W_{2} + \ldots + W_{n} |R \ge W_{1} + W_{2} + \ldots + W_{n\, - \,1} ) \\ &= P(R \ge W_{n} ),\;{\text{for}}\;{\text{all}}\;n = 1,\,2, \ldots \;{\text{and}}\;{\text{all}}\;W_{1} ,\,W_{2} , \ldots , \\ \end{aligned} $$

where \( W_{1} + W_{2} + \ldots + W_{n\, - \,1} \equiv 0 \) when \( n = 1 \). Then, finally, each shock results in the immediate failure of a system with probability \( P(R < W) \) and it does not cause any change in the system with probability \( P(R \ge W) \). This interpretation of the model implies that the cumulative shock model in this setting corresponds to the extreme shock model considered previously and

$$ p = P(R < W) = 1 - P(R \ge W) = 1 - M_{W} ( - \theta ). $$

Therefore,

$$ P(T_{S} > t) = \frac{1}{{1 + \Uplambda (t)\,(1 - M_{W} ( - \theta ))}},\;t \ge 0, $$

and the corresponding failure rate is

$$ \lambda {}_{S}(t) = \frac{{\lambda (t)(1 - M_{W} ( - \theta ))}}{{1 + \Uplambda (t)(1 - M_{W} ( - \theta ))}},\;t \ge 0. $$
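The reduction of Case 2 to the extreme shock model is easy to cross-check numerically. The sketch below takes \( R \) exponential with rate \( \theta \) and \( W \) exponential with mean \( m \) (illustrative choices), so that \( M_{W} ( - \theta ) = 1/(1 + \theta m) \), and compares \( p = 1 - M_{W} ( - \theta ) \) with the direct computation \( P(R < W) = \theta m/(1 + \theta m) \) for independent exponentials:

```python
# Case 2 sketch: R ~ Exp(rate θ), W ~ Exp(mean m); all values illustrative.
theta, m = 0.5, 2.0
mgf_at_minus_theta = 1.0 / (1.0 + theta * m)   # M_W(-θ) for W with mean m
p_from_mgf = 1.0 - mgf_at_minus_theta
p_direct = theta * m / (1.0 + theta * m)       # P(R < W) for independent exponentials
assert abs(p_from_mgf - p_direct) < 1e-12

Lam = 3.0                                      # illustrative Λ(t)
survival = 1.0 / (1.0 + Lam * p_from_mgf)      # extreme-shock reduction
assert abs(survival - 1.0 / 2.5) < 1e-12
```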

Finally, the combined shock model (see also Sect. 4.1 for a more general setting) can also be considered. Assume that the \( i \)th shock, as in the extreme shock model, causes an immediate failure of the system with probability \( p \), but, in contrast to that model, with probability \( q = 1 - p \) it increases the wear of the system by a random increment \( W_{i} \ge 0 \). The failure occurs when a critical shock (that destroys the system with probability \( p \)) occurs or when the random accumulated wear \( W(t) \) reaches the random boundary \( R \). Therefore,

$$ P(T_{S} > t|N(s),\;0 \le s \le t;\;W_{1} ,W_{2} , \ldots ,W_{N(t)} ;\;R) = q^{N(t)} I\left( {\sum\limits_{i\, = \,0}^{N(t)} {W_{i} } \le R} \right) $$

and the survival function of a system is

$$ P(T_{S} > t) = E[q^{N(t)} I(W(t) \le R)]. $$

As previously, for simplicity, let the distribution of the random boundary \( R \) be exponential with rate \( \theta \). In a similar way, it can be shown that

$$ P(T_{S} > t|N(t) = n) = E\left[ {\prod\limits_{i\, = \,1}^{n} {q\exp \{ - \theta W_{i} \} } } \right] = \left( {qM_{W} ( - \theta )} \right)^{n} . $$

Finally,

$$ P(T_{S} > t) = \frac{1}{{1 + \Uplambda (t)(1 - qM_{W} ( - \theta ))}}. $$

and the corresponding failure rate function is

$$ \lambda_{S} (t) = - \frac{{d\ln P(T_{S} > t)}}{{\text{d}}t} = \frac{{\lambda (t)(1 - qM_{W} ( - \theta ))}}{{1 + \Uplambda (t)(1 - qM_{W} ( - \theta ))}}. $$
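The combined model contains the previous models as special cases, which provides a simple consistency check (all numbers below are illustrative): with \( q = 1 \) (no immediately fatal shocks) the survival probability reduces to that of Case 2, and with \( M_{W} ( - \theta ) = 1 \) (degenerate wear \( W \equiv 0 \)) to the pure extreme shock model:

```python
# Consistency checks for the combined model; Lam, mgf_w, p are illustrative.
def survival_combined(Lam, q, mgf_w):
    # mgf_w stands for M_W(-θ)
    return 1.0 / (1.0 + Lam * (1.0 - q * mgf_w))

Lam, mgf_w = 2.0, 0.7
case2 = 1.0 / (1.0 + Lam * (1.0 - mgf_w))            # Case 2 survival
assert abs(survival_combined(Lam, 1.0, mgf_w) - case2) < 1e-12

p = 0.25
extreme = 1.0 / (1.0 + Lam * p)                      # extreme shock survival
assert abs(survival_combined(Lam, 1.0 - p, 1.0) - extreme) < 1e-12
```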

Thus, we have shown that survival probabilities for some shock models can be effectively obtained for any process that belongs to the CGCP without specifying its dependence structure [10].

4.10 Information-Based Thinning of Shock Processes

4.10.1 General Setting

In this section, we consider some of the settings of the previous sections from a more general viewpoint that employs the operation of thinning of point processes [15]. Thinning of point processes is often applied in stochastic modeling when different types of point events (in terms of their impact, e.g., on a system) occur. In the previous sections, we were mostly interested in the corresponding survival probabilities and, therefore, there was a sequence of ‘survival events’ and one final event of failure. Now we will be interested in two sequences of events and will use this characterization for further discussion of the strength–stress model of Sect. 4.7.

When the initial point process is an NHPP, the thinned processes are also NHPPs, independent of each other [15]. The crucial assumption in obtaining this well-known result is that the classification of occurring point events is independent of all other events, including the history of the process. However, in practice, this classification often depends on the history. In this section, we define and describe the thinned processes for the history-dependent case using different levels of available information and apply our general results to the strength–stress type shock model, which is meaningful in reliability applications. For each considered level of information, we construct the corresponding conditional intensity function and interpret the obtained results.

Let us define the setting in formal terms. Suppose that each event from the NHPP, \( \{ N(t),\;t \ge 0\} \) with rate (intensity function) \( \nu (t) \) is classified as the Type I event with probability \( p(t) \) or as the Type II event with the complementary probability \( 1 - p(t) \). It is well-known (see, e.g., [4], [5]) that the corresponding stochastic processes \( \{ N_{1} (t),\;t \ge 0\} \) and \( \{ N_{2} (t),\;t \ge 0\} \) are NHPPs with rates \( p(t)\nu (t) \) and \( (1 - p(t))\nu (t) \), respectively, and they are stochastically independent. This operation for \( p(t) \equiv p \) is usually called in the literature ‘the thinning of the point process’ [15]. As stated above, in reality, classification of events is often history-dependent and the point process is not necessarily Poisson. Therefore, considering history-dependent thinning appears to be an interesting and important problem both from theoretical and practical points of view. The following setting considered in Sect. 4.7 can be helpful as a relevant example.
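The classical, history-independent thinning result can be illustrated by a small Monte Carlo sketch: each event of an HPP on an interval \( [0,\;T] \) is classified as Type I with probability \( p \), and the thinned counts should have means \( p\nu T \) and \( (1 - p)\nu T \). The rate, horizon, and classification probability below are illustrative assumptions:

```python
import math, random

# Monte Carlo sketch of classical (history-independent) thinning of an HPP;
# all numerical values are illustrative.
random.seed(1)
nu, T, p, runs = 1.5, 2.0, 0.4, 100_000
n1_total = n2_total = 0
for _ in range(runs):
    # sample N ~ Poisson(νT) by Knuth's multiplication method
    L, n, prod = math.exp(-nu * T), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= L:
            break
        n += 1
    k1 = sum(1 for _ in range(n) if random.random() < p)  # Type I count
    n1_total += k1
    n2_total += n - k1

assert abs(n1_total / runs - p * nu * T) < 0.03        # mean ≈ 1.2
assert abs(n2_total / runs - (1 - p) * nu * T) < 0.03  # mean ≈ 1.8
```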

Suppose that an object (e.g., a system or an organism) is characterized by an unobserved random quantity \( U \) (e.g., strength or vitality). The object is ‘exposed’ to a marked NHPP with rate \( \nu (t) \), arrival times \( T_{1} < T_{2} < T_{3} \ldots \) and random marks \( S_{i} ,\;i = 1,\;2, \ldots \), that can be interpreted as some stresses or demands. If \( S_{i} > U \), then the Type I event occurs; if \( S_{i} \le U \), then the Type II event occurs. We are interested in a probabilistic description of the processes of Type I and Type II events. It should be noted that the probabilities \( P(S_{i} > U),\;i = 2,\,3, \ldots \) already depend on the history, as the distribution of \( U \) is updated by the previous information, as was mentioned in Sect. 4.7 [8].

First, we will characterize the ‘conditional properties’ of \( \{ N_{1} (t),\;t \ge 0\} \) and \( \{ N_{2} (t),\;t \ge 0\} \), where \( N(t) = N_{1} (t) + N_{2} (t) \). In various practical problems, we are often interested in the conditional intensity of one of the processes, as only this process ‘impacts’ our system. The conditional intensity (the intensity process, see Eq. (2.12)), e.g., for the thinned process \( \{ N_{1} (t),\;t \ge 0\} \), is defined as

$$ \begin{aligned} \lambda_{1} (t|H_{1t - } ) &= \lim_{\Updelta t\, \to \,0} \frac{{E[N_{1} ((t + \Updelta t) - ) - N_{1} (t - )|H_{1t - } ]}}{\Updelta t} \\ &= \lim_{\Updelta t\, \to \,0} \frac{{P[N_{1} ((t + \Updelta t) - ) - N_{1} (t - ) = 1|H_{1t - } ]}}{\Updelta t}, \\ \end{aligned} $$
(4.79)

where \( H_{1t - } = \{ N_{1} (t - ),T_{11} ,T_{12} , \ldots ,T_{{1N_{1} (t - )}} \} \) is the history of the Type I process before time \( t \) and \( T_{1i} \), \( i = 1,\,2, \ldots \) are the corresponding sequential arrival times. In practice, we often observe the process \( \{ N_{1} (t),\;t \ge 0\} \), e.g., as the process of some ‘effective events’ that can cause certain ‘detectable changes’ (or consequences) in the system. On the other hand, \( \{ N_{2} (t),\;t \ge 0\} \) can be the process of ‘ineffective events’ that have no impact on the system at all. Therefore, the ‘observed history’ \( H_{1t - } \) is our ‘available information’ that is used for describing \( \{ N_{1} (t),\;t \ge 0\} \) via the corresponding conditional intensity, whereas the ineffective events are often (but not necessarily) not observed and thus information on \( \{ N_{2} (t),\;t \ge 0\} \) is not available.

As the conditional intensity fully describes the underlying point process, it can obviously be used for defining the corresponding conditional failure rates, which describe the times to events of interest. For example, assume that our system fails at the \( k \)th Type I event (e.g., due to accumulation of some damage), whereas Type II events, as previously, are ineffective. Then, given \( N_{1} (t - ) = k - 1 \), the conditional intensity \( \lambda_{1} (t|H_{1t - } ) \) in (4.79) can be viewed as the conditional failure rate (given the history). Specifically, when our system fails at the first Type I event, the history of our interest becomes \( H_{1t - } = \{ N_{1} (t - ) = 0\} \). Alternatively, let the system fail on the \( k \)th Type I event with probability \( p(k) \) and survive with probability \( 1 - p(k) \), independently of all other events. Then, given \( N_{1} (t - ) = k - 1 \), the conditional failure rate (on condition that the history \( H_{1t - } \) is given) at time \( t \) is \( \lambda_{1} (t|H_{1t - } )\,p(k) \). Thus, a Type I event could terminate the process, which is important for different reliability settings.

As illustrated in the above examples, different conditions can be defined that characterize ‘fatal events’. However, we are primarily interested in a general description of the process \( \{ N_{1} (t),\;t \ge 0\} \) via its conditional intensity \( \lambda_{1} (t|H_{1t - } ) \) (without termination). Thus, we will focus first on the conditional intensity (4.79) for a general history \( H_{1t - } = \{ N_{1} (t - ),\;T_{11} ,\;T_{12} , \ldots ,\;T_{{1N_{1} (t - )}} \} \). For convenience, in some instances, the notation \( H_{1t - } \) will also be used to denote the corresponding realization \( \{ N_{1} (t - ) = n_{1} ,\;T_{11} = t_{11} ,\;T_{12} = t_{12} , \ldots ,T_{{1N_{1} (t - )}} = t_{{1n_{1} }} \} \). Furthermore, the case when the given history is partial, i.e., \( \lambda_{1} (t|H_{1t - }^{P} ) \), where \( H_{1t - }^{P} \) is the partial history of \( H_{1t - } \), will also be investigated. For example, there can be situations when the arrival times are not observed/recorded but only the number of Type I events is. In this case, the ‘available information’ at hand is only \( N_{1} (t - ) \).

Coming back to the specific stress–strength example, note that, when \( \{ N(t),\;t \ge 0\} \) is the NHPP, \( U \) is deterministic, \( U = u \), and \( S_{i} ,i = 1,2, \ldots \) are i.i.d. with the common Cdf \( F_{S} (s) \), the processes \( \{ N_{1} (t),\;t \ge 0\} \) and \( \{ N_{2} (t),\;t \ge 0\} \) are NHPPs. Moreover, they are stochastically independent with rates \( p(t)\nu (t) \) and \( (1 - p(t))\,\nu (t) \), respectively, where \( p(t) = P(S_{i} > u) \). Thus, obviously,

$$ \lambda_{1} (t|H_{1t - } ) = \lim_{\Updelta t\, \to \,0} \frac{{E[N_{1} ((t + \Updelta t) - ) - N_{1} (t - )|H_{1t - } ]}}{\Updelta t} = P(S_{i} > u)\,\nu (t), $$

as the process \( \{ N_{1} (t),\;t \ge 0\} \) possesses the property of independent increments.

We will come back to discussing the case when \( U \) is random after a general formulation of the operation of thinning [8].

4.10.2 Formal Description of the Information-Dependent Thinning

Let \( \{ N(t),\;t \ge 0\} \) denote an orderly point process of events with arrival times \( T_{i} ,\;i = 1,\,2, \ldots \). We assume that this process is external for the system in the sense that it may influence its performance but is not influenced by it [21]. Each event from \( \{ N(t),\;t \ge 0\} \) is classified as belonging to either the Type I or the Type II category, depending on the histories of the processes \( \{ N(t),\;t \ge 0\} \) and \( \{ N_{1} (t),\;t \ge 0\} \) (note that \( N(t) = N_{1} (t) + N_{2} (t) \), and see the corresponding description in the previous subsection) and also on some other random history process up to \( t \), \( \Upphi_{t - } \). Specifically, \( \Upphi_{t - } \equiv \Upphi \) can be just a random variable, e.g., the random quantity \( U \) in the previous example. The conditional probability of the Type I event in the infinitesimal interval of time can be formally written as

$$ \begin{aligned} P[N_{1} ((t + {\text{d}}t) - ) - N_{1} (t - ) &= 1|H_{1t - } ,H_{t - } ,\Upphi_{(t\, + \,{\text{d}}t) - } ] \\ &=\,P[N_{1} ((t + {\text{d}}t) - ) - N_{1} (t - ) = 1|H_{1t - } ,H_{t - } ,\Upphi_{(t\, + \,{\text{d}}t) - } ,N((t + {\text{d}}t) - ) - N(t - ) = 1] \\ & \times P[N((t + {\text{d}}t) - ) - N(t - ) = 1|H_{1t - } ,H_{t - } ,\Upphi_{(t\, + \,{\text{d}}t) - } ] \\ & + P[N_{1} ((t + {\text{d}}t) - ) - N_{1} (t - ) = 1|H_{1t - } ,H_{t - } ,\Upphi_{(t\, + \,{\text{d}}t) - } ,N((t + {\text{d}}t) - ) - N(t - ) = 0] \\ & \times P[N((t + {\text{d}}t) - ) - N(t - ) = 0|H_{1t - } ,H_{t - } ,\Upphi_{(t\, + \,{\text{d}}t) - } ] \\ &=\,P[N_{1} ((t + {\text{d}}t) - ) - N_{1} (t - ) = 1|H_{1t - } ,H_{t - } ,\Upphi_{(t\, + \,{\text{d}}t) - } ,N((t + {\text{d}}t) - ) - N(t - ) = 1] \\ & \times P[N((t + {\text{d}}t) - ) - N(t - ) = 1|H_{t - } ], \\ \end{aligned} $$
(4.80)

where

$$ P[N((t + {\text{d}}t) - ) - N(t - ) = 1|H_{1t - } ,\,H_{t - } ,\,\Upphi_{(t\, + \,{\text{d}}t) - } ] $$

reduces to

$$ P[N((t + {\text{d}}t) - ) - N(t - ) = 1|H_{t - } ], $$

as the initial point process is defined as external. It should be noted that \( H_{t - } \) is the history of the initial process \( \{ N(t),\;t \ge 0\} \) and it does not contain the information on the type of events and on the corresponding arrival times of events. In other words, mathematically, \( H_{t - } \) ‘does not define’ \( H_{1t - } \) and we need both of them for conditioning. Accordingly, from (4.80),

$$ \begin{gathered} P[N_{1} ((t + {\text{d}}t) - ) - N_{1} (t - ) = 1|H_{1t - } ,H_{t - } ,\Upphi_{(t + {\text{d}}t) - } ] \hfill \\ = P[N_{1} ((t + {\text{d}}t) - ) - N_{1} (t - ) = 1|H_{1t - } ,H_{t - } ,\Upphi_{(t + {\text{d}}t) - } ,N((t + {\text{d}}t) - ) - N(t - ) = 1] \cdot \nu (t|H_{t - } )\,{\text{d}}t, \hfill \\ \end{gathered} $$

where \( \nu (t|H_{t - } ) \) is the conditional intensity of \( \{ N(t),\;t \ge 0\} \):

$$ \nu (t|H_{t - } ) \equiv \lim_{\Updelta t\, \to \,0} \frac{{P[N((t + \Updelta t) - ) - N(t - ) = 1|H_{t - } ]}}{\Updelta t}. $$

Therefore, we arrive at the following result (see [8]) for the conditional intensity of a general history-dependent thinned process:

Theorem 4.14

Under the given assumptions, the conditional intensity\( \lambda_{1} (t|H_{1t - } ) \)is defined by the following expression:

$$ \lambda_{1} (t|H_{1t - } ) = E[P[N_{1} ((t + {\text{d}}t) - ) - N_{1} (t - ) = 1|H_{1t - } ,H_{t - } ,\Upphi_{(t\, + \,{\text{d}}t) - } ,N((t + {\text{d}}t) - ) - N(t - ) = 1] \cdot \nu (t|H_{t - } )], $$
(4.81)

where the expectation is with respect to the joint conditional distribution\( (H_{t - } ,\;\Upphi_{(t\, + \,{\text{d}}t) - } |H_{1t - } ) \).

Theorem 4.14 holds for general orderly point processes. Furthermore, when we observe only the partial history \( H_{1t - }^{P} \), the conditional intensity \( \lambda_{1} (t|H_{1t - }^{P} ) \) can be obtained from (4.81) by replacing \( H_{1t - } \) by \( H_{1t - }^{P} \) and by applying an appropriately modified conditional distribution \( (H_{t - } ,\;\Upphi_{(t\, + \,{\text{d}}t) - } |H_{1t - }^{P} ). \)

In what follows, we will simplify the setting and consider the case when the dependence on the history in the second multiplier in (4.81) is eliminated, whereas it is preserved for the first multiplier. Therefore, \( \nu (t|H_{t - } ) \) is substituted by the rate of the corresponding NHPP, \( \nu (t) \). This assumption enables us to derive the closed-form results of the following subsection.

4.10.3 Stress–Strength Type Classification Model

Consider first the case when only the partial information \( H_{1t - }^{P} = \{ N_{1} (t - )\} \) is observed, which means that the corresponding arrival times are not observed. Thus, only the number of Type I events is available. Then, formally,

$$ \begin{aligned} &\lambda_{1} (t|H_{1t - }^{P} ) \hfill \\& = E[P[N_{1} ((t + {\text{d}}t) - ) - N_{1} (t - ) = 1|H_{1t - }^{P} ,\;H_{t - } ,\;\Upphi_{(t\, + \,{\text{d}}t) - } ,\;N((t + {\text{d}}t) - )\\ & \quad - N(t - ) = 1]] \cdot \nu (t), \hfill \\ \end{aligned} $$
(4.82)

where the expectation is with respect to the joint conditional distribution \( (H_{t - } ,\;\Upphi_{(t\, + \,{\text{d}}t) - } |H_{1t - }^{P} ) \). Denote the pdf and the Cdf of a random quantity (strength) \( U \) by \( g_{U} (u) \) and \( G_{U} (u) \), respectively. In this case, \( \Upphi_{(t\, + \,{\text{d}}t) - } = \{ S_{1} ,\,S_{2} , \ldots ,\,S_{N((t\, + \,{\text{d}}t) - )} ;U\} \) and

$$ P[N_{1} ((t + {\text{d}}t) - ) - N_{1} (t - ) = 1|H_{1t - }^{P} ,\,H_{t - } ,\,\Upphi_{(t\, + \,{\text{d}}t) - } ,\,N((t + {\text{d}}t) - ) - N(t - ) = 1] = I(S_{N(t - )\, + \,1} > U), $$

where the conditional distribution of \( U|H_{1t - }^{P} \) does depend on the history \( H_{1t - }^{P} \) and, as previously, \( S_{i} \) denotes the value of stress on the \( i \)th event. Therefore, in accordance with Theorem 4.14, \( \lambda_{1} (t|H_{1t - }^{P} ) \) can be obtained as

$$ \lambda_{1} (t|H_{1t - }^{P} ) = P(S_{N(t - )\, + \,1} > U|H_{1t - }^{P} ) \cdot \nu (t). $$

As the distribution of \( S_{N(t - )\, + \,1} \) does not depend on the history \( H_{1t - }^{P} = \{ N_{1} (t - )\} \), it is sufficient to derive the distribution for \( U|H_{1t - }^{P} \). Given \( U = u \), the process \( \{ N_{1} (t),\;t \ge 0\} \) is the NHPP with intensity \( \overline{{F_{S} }} (u)\,\nu (t) \) and thus the conditional distribution of \( N_{1} (t - )|U \) is

$$ P(N_{1} (t - ) = n_{1} |U = u) = \frac{{\left( {\overline{{F_{S} }} (u)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right)^{{n_{1} }} }}{{n_{1} !}}\exp \left\{ { - \overline{{F_{S} }} (u)\int\limits_{0}^{t} {\nu (x)\,{\text{d}}x} } \right\}. $$

Therefore, the conditional distribution of \( U|N_{1} (t - ) \) is

$$ \frac{{\frac{{\left( {\overline{{F_{S} }} (u)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right)^{{n_{1} }} }}{{n_{1} !}}\exp \left\{ { - \overline{{F_{S} }} (u)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right\} \cdot g_{U} (u)}}{{\int\limits_{0}^{\infty } {\frac{{\left( {\overline{{F_{S} }} (w)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right)^{{n_{1} }} }}{{n_{1} !}}\exp \left\{ { - \overline{{F_{S} }} (w)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right\} \cdot g_{U} (w)\,dw} }}. $$

Finally, from (4.82),

$$ \lambda_{1} (t|H_{1t - }^{P} ) = \frac{{\int\limits_{0}^{\infty } {\overline{{F_{S} }} (u) \cdot \frac{{\left( {\overline{{F_{S} }} (u)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right)^{{n_{1} }} }}{{n_{1} !}}\exp \left\{ { - \overline{{F_{S} }} (u)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right\} \cdot g_{U} (u)\,{\text{d}}u} }}{{\int\limits_{0}^{\infty } {\frac{{\left( {\overline{{F_{S} }} (w)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right)^{{n_{1} }} }}{{n_{1} !}}\exp \left\{ { - \overline{{F_{S} }} (w)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right\} \cdot g_{U} (w)\,dw} }} \cdot \nu (t). $$
(4.83)
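To make Eq. (4.83) concrete, the conditional intensity can be evaluated by numerical quadrature. The sketch below is ours, not from the text: it assumes, purely for illustration, that the stress \( S \) and the threshold \( U \) are both exponential with unit mean (so \( \overline{F}_{S} (u) = e^{ - u} \), \( g_{U} (u) = e^{ - u} \)) and that the rate is constant, \( \nu (t) = \nu_{0} \), so \( \int_{0}^{t} {\nu (x)\,{\text{d}}x} = \nu_{0} t \). The helper name `lam1_conditional` is hypothetical.

```python
import math

def lam1_conditional(t, n1, nu0=1.0, upper=50.0, steps=20000):
    # Evaluate Eq. (4.83) by trapezoidal quadrature.
    # Illustrative assumptions (not prescribed by the text):
    #   S ~ Exp(1), U ~ Exp(1), nu(t) = nu0, hence int_0^t nu = nu0*t.
    Lam = nu0 * t
    fact = math.factorial(n1)
    h = upper / steps
    num = den = 0.0
    for i in range(steps + 1):
        u = i * h
        w = (0.5 if i in (0, steps) else 1.0) * h   # trapezoid weight
        Fb = math.exp(-u)                           # Fbar_S(u)
        # Poisson-mixture kernel shared by numerator and denominator:
        # (Fbar(u)*Lam)^n1 / n1! * exp{-Fbar(u)*Lam} * g_U(u)
        kern = (Fb * Lam) ** n1 / fact * math.exp(-Fb * Lam) * math.exp(-u)
        den += w * kern
        num += w * Fb * kern                        # extra Fbar_S(u) on top
    return num / den * nu0
```

Under these assumptions the behavior matches intuition: observing no exceedances over a longer interval shifts the posterior of \( U \) upward, so the intensity decreases in \( t \) for \( n_{1} = 0 \), whereas a larger observed \( n_{1} \) increases it.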

For the specific case when \( H_{1t - }^{P} = \{ N_{1} (t - ) = 0\} \), i.e., \( n_{1} = 0, \) the conditional intensity \( \lambda_{1} (t|H_{1t - }^{P} ) \) in (4.83) reduces to

$$ \lambda_{S} (t) = \frac{{\int_{0}^{\infty } {\int_{0}^{s} {\exp \left\{ { - \overline{F}_{S} (r)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right\} \cdot g_{U} (r)\,{\text{d}}r\,f_{S} (s)\,{\text{d}}s} } }}{{\int_{0}^{\infty } {\exp \left\{ { - \overline{F}_{S} (r)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right\}g_{U} (r)\,{\text{d}}r} }}\nu (t), $$

which is, obviously, the same as Eq. (4.50).
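Indeed, applying Fubini's theorem to the double integral in the numerator of \( \lambda_{S} (t) \) (integrating \( f_{S} (s) \) over \( s > r \)) turns it into \( \int_{0}^{\infty } {\overline{F}_{S} (r)\exp \{ - \overline{F}_{S} (r)\int_{0}^{t} {\nu (x)\,{\text{d}}x} \} g_{U} (r)\,{\text{d}}r} \), which is exactly the \( n_{1} = 0 \) numerator of Eq. (4.83). The following sketch (again under the illustrative assumptions \( S,U\sim {\text{Exp}}(1) \) and constant \( \nu \); the name `check_fubini` is ours) confirms the two forms agree numerically:

```python
import math

def check_fubini(t, nu0=1.0, upper=40.0, n=8000):
    # Compare the single integral  int Fbar_S(r) exp{-Fbar_S(r)*Lam} g_U(r) dr
    # with the double integral     int (int_0^s k(r) dr) f_S(s) ds
    # where k(r) = exp{-Fbar_S(r)*Lam} g_U(r).
    # Illustrative assumptions: S ~ Exp(1), U ~ Exp(1), nu(t) = nu0.
    Lam = nu0 * t
    h = upper / n
    k = [math.exp(-math.exp(-i * h) * Lam) * math.exp(-i * h)
         for i in range(n + 1)]
    w = lambda i: 0.5 if i in (0, n) else 1.0       # trapezoid weights
    single = h * sum(w(i) * math.exp(-i * h) * k[i] for i in range(n + 1))
    # cumulative trapezoid gives the inner integral int_0^s k(r) dr
    cum = [0.0] * (n + 1)
    for i in range(1, n + 1):
        cum[i] = cum[i - 1] + 0.5 * (k[i - 1] + k[i]) * h
    double = h * sum(w(i) * cum[i] * math.exp(-i * h) for i in range(n + 1))
    return single, double
```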

Consider now the case when the full history

$$ H_{1t - } = \{ N_{1} (t - ) = n_{1} ,\;T_{11} = t_{11} ,\;T_{12} = t_{12} , \ldots ,\,T_{{1N_{1} (t - )}} = t_{{1n_{1} }} \} $$

is observed and, therefore, is available. The crucial step in deriving the conditional intensity in the previous case was to obtain the conditional distribution of \( U|H_{1t - }^{P} \). Intuitively, as the distribution of \( U \) depends only on ‘the number of successes’ up to \( t \), but not on the arrival times of the events, it seems that the full history \( H_{1t - } \) can be reduced to the partial history \( H_{1t - }^{P} \) ‘without loss of relevant information’ (i.e., the full history \( H_{1t - } \) is redundant). Thus it is meaningful to verify whether this statement is indeed true. To show this, consider, as before,

$$ P[N_{1} ((t + {\text{d}}t) - ) - N_{1} (t - ) = 1|H_{1t - } ,\,H_{t - } ,\;\Upphi_{(t\, + \,{\text{d}}t) - } ,\;N((t + {\text{d}}t) - ) - N(t - ) = 1] = I(S_{N(t - )\, + \,1} > U). $$

In accordance with Theorem 4.14, \( \lambda_{1} (t|H_{1t - } ) \) can be obtained as

$$ \lambda_{1} (t|H_{1t - } ) = P(S_{N(t - )\, + \,1} > U|H_{1t - } ) \cdot \nu (t). $$

It is sufficient to derive the distribution for \( U|H_{1t - } \). Note that the joint conditional distribution of \( (N_{1} (t - ),T_{11} ,T_{12} , \ldots ,T_{{1N_{1} (t - )}} |U) \) is given by

$$ \begin{aligned} & \exp \left\{ { - \int\limits_{0}^{{t_{11} }} {\overline{{F_{S} }} (u)\,\nu (x)\,{\text{d}}x} } \right\}\overline{{F_{S} }} (u)\,\nu (t_{11} )\,\exp \left\{ { - \int\limits_{{t_{11} }}^{{t_{12} }} {\overline{{F_{S} }} (u)\,\nu (x)\,{\text{d}}x} } \right\}\,\overline{{F_{S} }} (u)\,\nu (t_{12} ) \ldots \\ & \times \exp \left\{ { - \int\limits_{{t_{{1(n_{1} \, - \,1)}} }}^{{t_{{1n_{1} }} }} {\overline{{F_{S} }} (u)\,\nu (x)\,{\text{d}}x} } \right\}\overline{{F_{S} }} (u)\,\nu (t_{{1n_{1} }} )\,\exp \left\{ { - \int\limits_{{t_{{1n_{1} }} }}^{t} {\overline{{F_{S} }} (u)\,\nu (x)\,{\text{d}}x} } \right\} \\ &= \left( {\overline{{F_{S} }} (u)} \right)^{{n_{1} }} \nu (t_{11} )\nu (t_{12} ) \ldots \nu (t_{{1n_{1} }} )\exp \left\{ { - \overline{{F_{S} }} (u)\int\limits_{0}^{t} {\nu (x)\,{\text{d}}x} } \right\}. \\ \end{aligned} $$

Therefore, the conditional distribution of \( (U|N_{1} (t - ),T_{11} ,T_{12} , \ldots ,T_{{1N_{1} (t - )}} ) \) is

$$ \frac{{\left( {\overline{{F_{S} }} (u)} \right)^{{n_{1} }} \nu (t_{11} )\nu (t_{12} ) \ldots \nu (t_{{1n_{1} }} )\exp \left\{ { - \overline{{F_{S} }} (u)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right\} \cdot g_{U} (u)}}{{\int_{0}^{\infty } {\left( {\overline{{F_{S} }} (w)} \right)^{{n_{1} }} } \nu (t_{11} )\nu (t_{12} ) \ldots \nu (t_{{1n_{1} }} )\exp \left\{ { - \overline{{F_{S} }} (w)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right\} \cdot g_{U} (w)\,{\text{d}}w}}. $$

Finally, from (4.81),

$$ \lambda_{1} (t|H_{1t - } ) = \frac{{\int_{0}^{\infty } {\left( {\overline{{F_{S} }} (u)} \right)^{{n_{1} \, + \,1}} \nu (t_{11} )\nu (t_{12} ) \ldots \nu (t_{{1n_{1} }} )} \exp \left\{ { - \overline{{F_{S} }} (u)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right\} \cdot g_{U} (u)\,{\text{d}}u}}{{\int_{0}^{\infty } {\left( {\overline{{F_{S} }} (w)} \right)^{{n_{1} }} \nu (t_{11} )\nu (t_{12} ) \ldots \nu (t_{{1n_{1} }} )} \exp \left\{ { - \overline{{F_{S} }} (w)\int_{0}^{t} {\nu (x)\,{\text{d}}x} } \right\} \cdot g_{U} (w)\,{\text{d}}w}} \cdot \nu (t). $$
(4.84)

It can be seen that \( \lambda_{1} (t|H_{1t - } ) \) in Eq. (4.84) and \( \lambda_{1} (t|H_{1t - }^{P} ) \) in Eq. (4.83) are identical: the common factors \( \nu (t_{11} )\nu (t_{12} ) \ldots \nu (t_{{1n_{1} }} ) \) cancel in the ratio (as does \( (\int_{0}^{t} {\nu (x)\,{\text{d}}x} )^{{n_{1} }} /n_{1} ! \) in (4.83)). Therefore, \( H_{1t - } \) can indeed be reduced to the partial history \( H_{1t - }^{P} \) “without loss of relevant information”, as our initial intuition suggested.
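Numerically, the agreement of Eqs. (4.84) and (4.83) can be checked as follows. The sketch assumes, for illustration only, \( S,U\sim {\text{Exp}}(1) \) and a non-constant rate \( \nu (t) = \nu_{0} + at \); the arrival times passed to the hypothetical helper `lam1_full_vs_partial` are arbitrary, since they enter (4.84) only through factors common to numerator and denominator.

```python
import math

def lam1_full_vs_partial(t, times, nu0=1.0, a=0.5, upper=50.0, steps=8000):
    # Compare Eq. (4.84) (full history: arrival times `times`) with
    # Eq. (4.83) (partial history: only n1 = len(times)).
    # Illustrative assumptions: S ~ Exp(1), U ~ Exp(1), nu(t) = nu0 + a*t.
    n1 = len(times)
    Lam = nu0 * t + 0.5 * a * t * t                  # int_0^t nu(x) dx
    prod_nu = math.prod(nu0 + a * s for s in times)  # common factor in (4.84)
    fact = math.factorial(n1)
    h = upper / steps
    num_f = den_f = num_p = den_p = 0.0
    for i in range(steps + 1):
        u = i * h
        w = (0.5 if i in (0, steps) else 1.0) * h
        Fb = math.exp(-u)                            # Fbar_S(u)
        base = math.exp(-Fb * Lam) * math.exp(-u)    # exp{-Fbar*Lam} g_U(u)
        den_f += w * Fb ** n1 * prod_nu * base       # Eq. (4.84) kernels
        num_f += w * Fb ** (n1 + 1) * prod_nu * base
        kern = (Fb * Lam) ** n1 / fact * base        # Eq. (4.83) kernel
        den_p += w * kern
        num_p += w * Fb * kern
    nu_t = nu0 + a * t
    return num_f / den_f * nu_t, num_p / den_p * nu_t
```

The two returned values coincide up to floating-point rounding, whatever the particular arrival times, since the extra factors cancel exactly in the ratio.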

Note that, as the external point process is the NHPP, \( \lambda (t|H_{t - } ) = \nu (t) \). Then, using \( \lambda (t|H_{1t - } ) = \lambda_{1} (t|H_{1t - } ) + \lambda_{2} (t|H_{1t - } ) \), the following relationship holds:

$$ \lambda_{2} (t|H_{1t - } ) \equiv \lim_{\Updelta t\, \to \,0} \frac{{P[N_{2} ((t + \Updelta t) - ) - N_{2} (t - ) = 1|H_{1t - } ]}}{\Updelta t} = \nu (t) - \lambda_{1} (t|H_{1t - } ). $$

It is clear that the conditional probability that the event that happened at time \( t \) belongs to \( \{ N_{1} (t),\;t \ge 0\} \) is

$$ \frac{{\lambda_{1} (t|H_{1t - } )}}{{\lambda_{1} (t|H_{1t - } ) + \lambda_{2} (t|H_{1t - } )}}. $$

Obviously, neither of the processes \( \{ N_{1} (t),\;t \ge 0\} \) and \( \{ N_{2} (t),\;t \ge 0\} \) is an NHPP now.
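This attribution probability, which here equals \( \lambda_{1} (t|H_{1t - } )/\nu (t) \), can also be estimated by straightforward simulation. In the sketch below (illustrative assumptions again: \( S,U\sim {\text{Exp}}(1) \), constant rate \( \nu_{0} \); `mc_attribution` and `poisson_sample` are hypothetical helper names), we simulate many realizations, condition on \( N_{1} (t - ) = n_{1} \), and record how often the next shock exceeds \( U \):

```python
import math
import random

def poisson_sample(rng, lam):
    # Knuth's multiplicative method -- adequate for small lam.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mc_attribution(t=1.0, nu0=1.0, n1=0, trials=200_000, seed=1):
    # Estimate P(event at time t belongs to {N_1(t)} | N_1(t-) = n1),
    # i.e. lambda_1(t|H_{1t-}) / nu(t), under the illustrative assumptions
    # S ~ Exp(1), U ~ Exp(1), nu(t) = nu0.
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(trials):
        u = rng.expovariate(1.0)          # random threshold U
        p = math.exp(-u)                  # P(S > U | U = u)
        n = poisson_sample(rng, nu0 * t)  # shocks in (0, t]
        exceed = sum(1 for _ in range(n) if rng.random() < p)
        if exceed == n1:                  # condition on N_1(t-) = n1
            total += 1
            hits += rng.random() < p      # next shock exceeds U?
    return hits / total
```

For \( t = 1 \), \( \nu_{0} = 1 \), and \( n_{1} = 0 \), the estimate agrees with the closed-form ratio obtained from Eq. (4.83), which under these assumptions evaluates to \( (1 - 2e^{ - 1} )/(1 - e^{ - 1} ) \).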

The case when we observe the full histories of both \( \{ N_{1} (t),\;t \ge 0\} \) and \( \{ N_{2} (t),\;t \ge 0\} \) can be considered in a similar way [8].