1 Introduction

In reliability engineering, a problem of interest is the study of the lifetimes of coherent systems. According to Barlow and Proschan (1981), a coherent system is a technical structure containing no irrelevant components (a component is said to be irrelevant if its performance does not affect the performance of the system) and having a structure function that is monotone in each argument. In recent years, several authors have investigated the lifetime of coherent systems under different scenarios. Interesting problems associated with coherent systems concern the aging and stochastic properties of the inactivity times of a coherent system or its components. These kinds of problems have been studied under different conditions by various authors. We refer, among others, to Asadi (2006), Navarro et al. (2005, 2010), Asadi and Berred (2012), Zhang (2010), Goliforushani et al. (2012), Goliforushani and Asadi (2011), Li and Zhang (2008), Li and Zhao (2006), Gertsbakh et al. (2011) and Tavangar and Asadi (2010).

Consider a coherent system consisting of n components with i.i.d. lifetimes \(X_{1},X_{2},...,X_{n}\) distributed according to a common continuous distribution F. Suppose that \(T=T(X_{1},X_{2},...,X_{n})\) denotes the system lifetime. The concept of the signature is a useful tool in the study of the reliability of coherent systems. The signature associated with a system, introduced by Samaniego (1985), is in fact a probability vector \({\mathbf {s}}=(s_{1},s_{2},...,s_{n})\) such that

$$\begin{aligned} s_{i}=P(T=X_{i:n}), \ \ \ \ i=1,2,...,n, \end{aligned}$$

where \(X_{i:n}\) denotes the ith ordered lifetime among the n component lifetimes \(X_{1},X_{2},...,X_{n}\). Thus, the reliability function of the coherent system can be expressed as a mixture of reliability functions of order statistics with weights \(s_{1},s_{2},...,s_{n}\). In other words,

$$\begin{aligned} {\bar{F}}_{T}(t)=\sum _{i=1}^{n}s_{i}{\overline{F}}_{i:n}(t), \end{aligned}$$

where \({\bar{F}}_{i:n}(t)\) denotes the reliability function of \(X_{i:n}\). Several authors have studied various reliability properties of coherent systems based on the properties of signatures. We refer the reader to Kochar et al. (1999), Navarro et al. (2005, 2007, 2008), Khaledi and Shaked (2007), Samaniego et al. (2009), Goliforushani and Asadi (2011) and Goliforushani et al. (2012) for some recent developments on this subject.
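For example, for the coherent system of order \(n=3\) with lifetime \(T=\min (X_{1},\max (X_{2},X_{3}))\), a direct calculation gives \({\mathbf {s}}=(1/3,2/3,0)\), so that

$$\begin{aligned} {\bar{F}}_{T}(t)=\frac{1}{3}{\overline{F}}_{1:3}(t)+\frac{2}{3}{\overline{F}}_{2:3}(t). \end{aligned}$$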

In this paper, we consider a coherent system in which the signature vector is of the following form:

$$\begin{aligned} {\mathbf {s}}=(s_{1},...,s_{i},0,...,0), \end{aligned}$$
(1)

where \(s_{k}>0\) for \(k=1, 2,..., i\), for some \(i\in \{1, 2, ..., n-1\}\). A coherent system with a signature of the form (1) has the property that, upon the failure of the system at time t, the components of the system with lifetimes \(X_{k:n}\), \(k=i+1, i+2, ..., n\), remain unfailed. The study of the reliability properties of such a system may be of interest to engineers and system designers because, after the failure of the system, the unfailed components can be removed and used for other testing purposes. The reliability properties of the unfailed components of a system have recently been considered by different authors under different conditions. See, for example, Kelkinnama and Asadi (2013), Kelkinnama et al. (2015) and Parvardeh and Balakrishnan (2013).
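For instance, the system \(T=\min (X_{1},\max (X_{2},X_{3}))\) considered above has a signature of the form (1) with \(i=2\): \(s_{1},s_{2}>0\) and \(s_{3}=0\), and upon the failure of the system the component with lifetime \(X_{3:3}\) remains unfailed and can be put to further use.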

This paper is an investigation of the inactivity time of a coherent system under some conditions. The paper is organized as follows. In Sect. 2, we review some basic definitions and useful lemmas which will be used in proving our main results throughout the paper. In Sect. 3, we introduce two conditional inactivity times associated with the system lifetime and derive the corresponding mixture representations in terms of conditional inactivity times of order statistics. Several aging and stochastic ordering properties of the proposed conditional inactivity times are investigated in that section.

2 Preliminaries

In this section, we briefly give some basic definitions and lemmas which are useful in our derivations. Consider two nonnegative continuous random variables X and Y with distribution functions F and G, density functions f and g, and reliability functions \({\bar{F}}\) and \({\bar{G}}\), respectively.

Definition 1

The random variable X is said to be less than the random variable Y in the

  1. (i)

    stochastic order, denoted by \(X\le _{st}Y\), if \({\overline{F}}(x)\le {\overline{G}}(x)\) for all \(x>0\);

  2. (ii)

    reversed hazard order, denoted by \(X\le _{rh}Y\), if \(\frac{F(x)}{G(x)}\) is a decreasing function of \(x\);

  3. (iii)

    likelihood ratio order, denoted by \(X\le _{lr}Y\), if \(\frac{f(x)}{g(x)}\) is a decreasing function of \(x\).
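For instance, if X and Y are exponentially distributed with rates \(\lambda _{1}\ge \lambda _{2}\), then \(\frac{f(x)}{g(x)}=\frac{\lambda _{1}}{\lambda _{2}}e^{-(\lambda _{1}-\lambda _{2})x}\) is decreasing in x, so that \(X\le _{lr}Y\); since the likelihood ratio order implies both the reversed hazard and the stochastic orders, \(X\le _{rh}Y\) and \(X\le _{st}Y\) hold as well.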

Lemma 1

(Misra and van der Meulen 2003) Assume that \(\Theta \) is a subset of the real line \({\mathbb {R}}\) and that U is a nonnegative random variable whose distribution belongs to the family \(H=\{H(.|\theta ):\theta \in \Theta \},\) which satisfies, for \(\theta _{1},\theta _{2}\in \Theta ,\)

$$\begin{aligned} H(.|\theta _{1})\le _{st}(\ge _{st})H(.|\theta _{2}) \ \ \ \mathrm {whenever}\ \ \theta _{1}<\theta _{2}. \end{aligned}$$

Let \(\psi (u,\theta )\) be a real-valued function defined on \({\mathbb {R}}\times \Theta ,\) which is measurable in u for each \(\theta \) such that \( E_{\theta }[\psi (U,\theta )]\) exists. Then, \(E_{\theta }[\psi (U,\theta )]\) is

  1. (i)

    increasing in \(\theta \) if \(\psi (u,\theta )\) is increasing in \(\theta \) and increasing (decreasing) in u;

  2. (ii)

    decreasing in \(\theta \) if \(\psi (u,\theta )\) is decreasing in \(\theta \) and decreasing (increasing) in u.
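For a quick illustration of part (i), let \(H(.|\theta )\) be the exponential distribution with mean \(\theta \in \Theta =(0,\infty )\), which is stochastically increasing in \(\theta \), and let \(\psi (u,\theta )=u\theta \). Then \(\psi \) is increasing in both arguments and \(E_{\theta }[\psi (U,\theta )]=\theta ^{2}\), which is indeed increasing in \(\theta \).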

Definition 2

A bivariate function h(x, y) is said to be

  1. (i)

    sign-regular of order 2 \((SR_{2})\) if

    $$\begin{aligned} \varepsilon _{1}h(x,y)\ge 0\ \ \ \mathrm{and} \ \ \ \varepsilon _{2}[h(x_{1},y_{1})h(x_{2},y_{2})-h(x_{1},y_{2})h(x_{2},y_{1})]\ge 0\ \end{aligned}$$
    (2)

    whenever \(x_{1}<x_{2}\), \(y_{1}<y_{2},\) for \(\varepsilon _{1}\) and \( \varepsilon _{2}\) equal to \(+1\) or \(-1\);

  2. (ii)

    totally positive of order 2 \((TP_{2})\) if (2) holds for \(\varepsilon _{1}=\varepsilon _{2}=+1\);

  3. (iii)

    reverse regular of order 2 \((RR_{2})\) if (2) holds for \(\varepsilon _{1}=+1\) and \(\varepsilon _{2}=-1\). For more details on \(SR_{2}\) functions, see Karlin (1968) and Khaledi and Kochar (2001).
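A classical example is \(h(x,y)=e^{xy}\), which is \(TP_{2}\): for \(x_{1}<x_{2}\) and \(y_{1}<y_{2}\),

$$\begin{aligned} h(x_{1},y_{1})h(x_{2},y_{2})-h(x_{1},y_{2})h(x_{2},y_{1})=e^{x_{1}y_{1}+x_{2}y_{2}}\left( 1-e^{-(x_{2}-x_{1})(y_{2}-y_{1})}\right) \ge 0. \end{aligned}$$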

Lemma 2

(Karlin 1968) Let A, B and C be subsets of the real line. Let L(x, z) be \(SR_{2}\) for \(x\in A\) and \(z\in B\), and let M(z, y) be \(SR_{2}\) for \(z\in B\) and \(y\in C\). Then, for any \(\sigma \)-finite measure \(\mu (z),\)

$$\begin{aligned} K(x,y)=\int _{B}L(x,z)M(z,y)d\mu (z) \end{aligned}$$

is also \(SR_{2}\) for \(x\in A\) and \(y\in C\), with \(\varepsilon _{i}(K)=\varepsilon _{i}(L)\varepsilon _{i}(M)\) for \(i=1,2,\) where \(\varepsilon _{i}(K)=\varepsilon _{i}\) denotes the constant sign of the \(i\)th-order determinant.

Lemma 3

Let \(\phi _{1}(t)=\frac{F(t)}{{\overline{F}}(t)}\) and \(\phi _{2}(t)=\frac{G(t)}{{\overline{G}}(t)}\) . If \(X\le _{st}Y,\) then

$$\begin{aligned} \lambda _{t}(u)=\frac{\sum _{l=k}^{j-1}\genfrac(){0.0pt}1{n}{l} \genfrac(){0.0pt}1{l}{k}\phi _{2}^{l}(t)(1-u)^{l-k}}{\sum _{l=k}^{j-1}\genfrac(){0.0pt}1{n}{l} \genfrac(){0.0pt}1{l}{k}\phi _{1}^{l}(t)(1-u)^{l-k}} \end{aligned}$$

is increasing in \(u\in {\mathbb {R}}_{+}\) for each \(t>0\) and any integers j and k such that \(1\le k<j\).

Proof

Let us define

$$\begin{aligned} \Phi _{i}(t,u)=\sum _{l=k}^{j-1}\genfrac(){0.0pt}1{n}{l}\genfrac(){0.0pt}1{l}{k}\phi _{i}^{l}(t)(1-u)^{l-k}, \ \ \ \ \ \ i=1,2, \end{aligned}$$

for \(u\in {\mathbb {R}}_{+}\) and \(t>0\). Then, \(\ \lambda _{t}(u)\) can be rewritten as

$$\begin{aligned} \lambda _{t}(u)=\frac{\Phi _{2}(t,u)}{\Phi _{1}(t,u)}, \ \ \ \ \ \ u\in {\mathbb {R}}_{+}, \ t>0. \end{aligned}$$

Since \(X\le _{st}Y\), we have \(\phi _{2}(t)\le \phi _{1}(t)\) for all \(t>0,\) and so \(\phi _{i}^{l}(t)\) is \(RR_{2}\) in \((i,l)\in \{1,2\}\times {\mathbb {N}} \) for each fixed \(t>0\). Moreover, it is easy to see that \((1-u)^{l-k}\) is \( RR_{2}\) in \((l,u)\in {\mathbb {N}} \times {\mathbb {R}} _{+}\) for each fixed \(k\in {\mathbb {N}} \). Therefore, by Lemma 2, \(\Phi _{i}(t,u)\) is \(TP_{2}\) in \((i,u)\in \{1,2\}\times {\mathbb {R}} _{+}\) for each fixed \(t>0,\) i.e., \(\lambda _{t}(u)\) is increasing in \( u\in {\mathbb {R}}_{+}\) for fixed \(t>0\).
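As a quick check, take \(j=k+2\), so that only the terms \(l=k\) and \(l=k+1\) appear. Writing \(A_{i}=\genfrac(){0.0pt}1{n}{k}\phi _{i}^{k}(t)\) and \(B_{i}=(k+1)\genfrac(){0.0pt}1{n}{k+1}\phi _{i}^{k+1}(t)\) for \(i=1,2\), we have \(\lambda _{t}(u)=\frac{A_{2}+B_{2}(1-u)}{A_{1}+B_{1}(1-u)}\) and

$$\begin{aligned} \frac{d}{du}\lambda _{t}(u)=\frac{B_{1}A_{2}-B_{2}A_{1}}{\left[ A_{1}+B_{1}(1-u)\right] ^{2}}\propto \phi _{1}^{k}(t)\phi _{2}^{k}(t)\left( \phi _{1}(t)-\phi _{2}(t)\right) \ge 0. \end{aligned}$$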

3 Mixture representation of inactivity times of coherent systems

In this section, we first consider a coherent system with signature vector

$$\begin{aligned} {\mathbf {s}}=(s_{1},...,s_{i},0,...,0), \end{aligned}$$
(3)

where \(s_{k}>0\) for \(k=1, 2,..., i\), for some \(i\in \{1, 2, ..., n-1\}\). We are interested in studying the conditional random variable

$$\begin{aligned} ( t-T\ |\ T<t<X_{j:n}), \quad j=i+1, i+2,... ,n. \end{aligned}$$
(4)

This conditional random variable represents the inactivity time of the system when the system has failed before time t but the components with lifetimes \(X_{j:n}\), \(j=i+1, i+2, ..., n\), are still unfailed at time t. This kind of conditional random variable has potential applications in reliability engineering, because the status of an operating system is usually not monitored continuously. As an example, assume that the system has a series structure. For this structure the lifetime is \(T=X_{1:n}\) and the signature of the system is \(\mathbf{s}=(1,0,...,0)\). Suppose that, at time t, the system is inspected by an operator who finds that the system has already failed but that the other components are still operating. In this case, the conditional random variable \((t-T|T<t<X_{2:n})\) represents the inactivity time of the system at the time of inspection under the mentioned assumptions.

In the following theorem we obtain the reliability function of the conditional random variable (4).

Theorem 3

Suppose that a coherent system has lifetime T and signature \({\mathbf {s}}\) given in (1). Then, for \(j>i\), all \(x<t\) and \(t>0,\) we have

$$\begin{aligned} P (t-T>x\ |\ T<t<X_{j:n})=\sum _{k=1}^{i}p_{k}(t)\nu _{j,k,n}(x,t), \end{aligned}$$
(5)

where

$$\begin{aligned} \nu _{j,k,n}(x,t)= P(t-X_{k:n}>x|X_{k:n}<t<X_{j:n}) \end{aligned}$$
(6)

and

$$\begin{aligned} p_{k}(t) = s_{k}\frac{P(X_{k:n}<t<X_{j:n})}{P(T<t<X_{j:n})} =P( T=X_{k:n} | T<t<X_{j:n}). \end{aligned}$$
(7)

Proof

We have

$$\begin{aligned}&P( t-T>x | T<t<X_{j:n})\\&\quad =\frac{P(T<t-x,X_{j:n}>t)}{P(T<t<X_{j:n})} \\&\quad =\sum _{k=1}^{i}s_{k}\frac{P(X_{k:n}<t-x,X_{j:n}>t)}{P(T<t<X_{j:n})} \\&\quad =\sum _{k=1}^{i}s_{k}\frac{P(X_{k:n}<t,X_{j:n}>t)}{P(T<t<X_{j:n})}\frac{ P(X_{k:n}<t-x,X_{j:n}>t)}{P(X_{k:n}<t,X_{j:n}>t)} \\&\quad =\sum _{k=1}^{i}p_{k}(t)P(t-X_{k:n}>x|X_{k:n}<t<X_{j:n}). \end{aligned}$$

The vector \({\mathbf {p}}(t)=(p_{1}(t), p_{2}(t), ... , p_{i}(t), 0 , ... , 0)\) can be considered as the conditional signature of the system, in which the element \(p_{k}(t)\) is the probability that the component with lifetime \(X_{k:n}\) causes the failure of the system given that the system has failed by time t but the components with lifetimes \(X_{j:n}\), \(j=i+1, i+2, ..., n\), are still alive at time t. Goliforushani et al. (2012) showed that, for \(k=1,...,i\) and \(i<j\),

$$\begin{aligned} p_{k}(t)=\frac{s_{k}W_{j,k}(t)}{\sum \nolimits _{m=1}^{i}s_{m}W_{j,m}(t)},\ \ \ \ \ \end{aligned}$$

where \(\phi (t)=\frac{F(t)}{{\bar{F}}(t)}\) and \(W_{j,m}(t)=\sum \nolimits _{l=m}^{j-1}\left( {\begin{array}{c}n\\ l\end{array}}\right) (\phi (t))^{l} \). They also showed that \({\mathbf {p}}(t_{1})\le _{st}{\mathbf {p}}(t_{2})\) for all \(0\le t_{1}\le t_{2}\). Moreover, from the above expression, \(\lim _{t\rightarrow 0}{\mathbf {p}}(t)=(1,0,...,0)\) and \(\lim _{t\rightarrow \infty }{\mathbf {p}}(t)={\mathbf {s}}\) (note that \(s_{1}>0\) and \(\sum _{m=1}^{i}s_{m}=1\)), so that \({\mathbf {p}}(t)\le _{st}{\mathbf {s}}\) for all \(t\ge 0\).
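For the system \(T=\min (X_{1},\max (X_{2},X_{3}))\) with signature \({\mathbf {s}}=(1/3,2/3,0)\) and \(j=3\), we have \(W_{3,1}(t)=3\phi (t)+3\phi ^{2}(t)\) and \(W_{3,2}(t)=3\phi ^{2}(t)\), so that

$$\begin{aligned} p_{1}(t)=\frac{1+\phi (t)}{1+3\phi (t)}, \ \ \ \ \ p_{2}(t)=\frac{2\phi (t)}{1+3\phi (t)}. \end{aligned}$$

As t increases, \(\phi (t)\) increases from 0 to \(\infty \), so \(p_{2}(t)\) increases from 0 to \(2/3\); this confirms the limits and the stochastic monotonicity of \({\mathbf {p}}(t)\) stated above.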

It should be noted that \(\nu _{j,k,n}(x,t)\), \(x,t>0\) and \(1\le k<j\le n\), in (6) represents the inactivity time of an \((n-k+1)\)-out-of-n system given that the system has failed by time t but at least \(n-j+1\) components of the system are still alive.

The following theorem gives a mixture representation for \(\nu _{j,k,n}(x,t)\).

Theorem 4

The conditional probability \(\nu _{j,k,n}(x,t)\) in (6) can be represented as

$$\begin{aligned} \nu _{j,k,n}(x,t)=\sum _{l=k}^{j-1}C_{k,l,n}(t,x)K_{l,j,k}^{n}(t), \end{aligned}$$
(8)

where

$$\begin{aligned} C_{k,l,n}(t,x)=P(t-X_{k:n}>x\mid X_{l:n}<t<X_{l+1:n}) \end{aligned}$$
(9)

and

$$\begin{aligned} K_{l,j,k}^{n}(t)=\frac{\genfrac(){0.0pt}1{n}{l}\Phi ^{l}(t)}{\sum _{m=k}^{j-1}\genfrac(){0.0pt}1{n}{m}\Phi ^{m}(t)},\ \ \ \ \ \ \Phi (t)=\frac{F(t)}{1-F(t)}, \ \ \ 1\le k \le l<j \le n. \end{aligned}$$
(10)

Proof

We have

$$\begin{aligned} \nu _{j,k,n}(x,t)= & {} P(t-X_{k:n}>x\mid X_{k:n}<t<X_{j:n})\\= & {} \sum _{l=k}^{j-1}\frac{P(t-X_{k:n}>x,X_{l:n}<t<X_{l+1:n})}{P(X_{k:n}<t<X_{j:n})}\\= & {} \sum _{l=k}^{j-1}C_{k,l,n}(t,x)K_{l,j,k}^{n}(t), \end{aligned}$$

where \(C_{k,l,n}(t,x)=P(t-X_{k:n}>x\mid X_{l:n}<t<X_{l+1:n})\) and

$$\begin{aligned} K_{l,j,k}^{n}(t)= & {} \frac{P(X_{l:n}<t<X_{l+1:n})}{P(X_{k:n}<t<X_{j:n})}\\= & {} \frac{ \genfrac(){0.0pt}1{n}{l}(F(t))^{l}(1-F(t))^{n-l}}{\sum _{m=k}^{j-1} \genfrac(){0.0pt}1{n}{m}(F(t))^{m}(1-F(t))^{n-m}}\\= & {} \frac{\genfrac(){0.0pt}1{n}{l}\Phi ^{l}(t)}{\sum _{m=k}^{j-1}\genfrac(){0.0pt}1{n}{m}\Phi ^{m}(t)}, \ \ \ \ \ 1\le k\le l<j\le n. \end{aligned}$$

See also Goliforushani et al. (2012).

Using elementary calculations based on the distribution of order statistics, one can easily verify that \(C_{k,l,n}(t,x)\) in (9) can be written as

$$\begin{aligned} C_{k,l,n}(t,x)= & {} P(t-X_{k:n}>x\mid X_{l:n}<t<X_{l+1:n})\nonumber \\= & {} \sum \limits _{s=k}^{l}\left( {\begin{array}{c}l\\ s\end{array}}\right) (F_{t}(x))^{s}(1-F_{t}(x))^{l-s}\nonumber \\= & {} \int _{0}^{\frac{F(t-x)}{F(t)}}k{\left( {\begin{array}{c}l\\ k\end{array}}\right) }u^{k-1}(1-u)^{l-k}du, \end{aligned}$$
(11)

where \(F_{t}(x)=\frac{F(t-x)}{F(t)}\), \(0<x<t\). This, in turn, implies that

$$\begin{aligned} (t-X_{k:n}|X_{l:n}<t<X_{l+1:n})\overset{d}{=}X_{l-k+1:l}^{t} \end{aligned}$$

where \(X_{l-k+1:l}^{t}\) denotes the \((l-k+1)\)th order statistic among l i.i.d. random variables distributed as \((t-X|X<t)\), with reliability function \(F_{t}(x)=\frac{F(t-x)}{F(t)}\). Let \(r(t)=\frac{f(t)}{F(t)}\) be the reversed hazard rate of the components of the system. Then, it is easy to see that r(t) is decreasing if and only if \(\frac{F(t-x)}{F(t)}\) is an increasing function of t, \(t>0\). Hence, from (11), we get that r(t) is decreasing in t if and only if \( C_{k,l,n}(t,x)\) is an increasing function of t for all \(x\ge 0\).
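In the simplest case \(l=k\), (11) reduces to \(C_{k,k,n}(t,x)=(F_{t}(x))^{k}\), the reliability function of the minimum of k i.i.d. random variables distributed as \((t-X|X<t)\); this agrees with the above identity, since \(l-k+1=1\) gives \(X_{1:k}^{t}\).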

Remark 5

It is well known [see Shaked and Shanthikumar (2007)] that

$$\begin{aligned} X_{j:m}\le _{lr}&\ X_{i:n},\text { }j\le i\text {, }m-j\ge n-i, \\ X_{k-1:m-1}\le _{lr}&\ X_{k:m},\text { }k=2,...,m, \\ X_{k:m-1}\ge _{lr}&\ X_{k:m},\text { }k=1,...,m-1. \end{aligned}$$

Hence, we have

$$\begin{aligned} X_{l-k+1:l}^{t}\le _{lr}&\ X_{l+1-k+1:l+1}^{t}, \\ X_{l-k+1:l}^{t}\le _{lr}&\ X_{l-k+1:l-1}^{t}=X_{l-1-(k-1)+1:l-1}^{t}. \end{aligned}$$

This, in turn, implies that

$$\begin{aligned} (t-X_{k:n}|X_{l:n}<t<X_{l+1:n}) \le _{lr}&\ (t-X_{k:n}|X_{l+1:n}<t<X_{l+2:n}), \\ (t-X_{k:n}|X_{l:n}<t<X_{l+1:n})\le _{lr}&\ (t-X_{k:n}|X_{m:n}<t<X_{m+1:n}), \text { }l\le m, \\ (t-X_{k:n}|X_{l:n}<t<X_{l+1:n}) \le _{lr}&\ (t-X_{k-1:n}|X_{l-1:n}<t<X_{l:n}). \end{aligned}$$

Asadi (2006) has shown that

$$\begin{aligned} P(t-X_{j:l}>x|X_{l:l}<t)=\sum \limits _{m=j}^{l}\left( {\begin{array}{c}l\\ m\end{array}}\right) (F_{t}(x))^{m}(1-F_{t}(x))^{l-m}. \end{aligned}$$

Hence, from (11), we obtain

$$\begin{aligned} (t-X_{m:n}|X_{l:n}<t<X_{l+1:n})\overset{d}{=}X_{l-m+1:l}^{t}\overset{d}{=} (t-X_{m:l}|X_{l:l}<t), \end{aligned}$$

and hence

$$\begin{aligned} (t-X_{j:l}|X_{l:l}<t)&\le _{lr}&(t-X_{j:l+1}|X_{l+1:l+1}<t), \\ (t-X_{j:l}|X_{l:l}<t)&\le _{lr}&(t-X_{j:m}|X_{m:m}<t),\text { }l\le m. \end{aligned}$$

Now, we are ready to prove the following theorem.

Theorem 6

Let r(t) be the common reversed hazard rate of the components of the system, and assume that r(t) is decreasing in t, \(t>0\). Then \(\nu _{j,k,n}(x,t)\) in (6) is an increasing function of t for all \(x\ge 0\).

Proof

Note that

$$\begin{aligned} \frac{d}{dt}\nu _{j,k,n}(x,t)= & {} \sum _{l=k}^{j-1}\left[ \frac{d}{dt} C_{k,l,n}(t,x)\right] K_{l,j,k}^{n}(t)\nonumber \\&+\sum _{l=k}^{j-1}C_{k,l,n}(t,x)\left[ \frac{d}{dt}K_{l,j,k}^{n}(t)\right] . \end{aligned}$$
(12)

Goliforushani et al. (2012) have shown that when r(t) is decreasing in \(t, t>0,\) then \( C_{k,l,n}(t,x)\) is an increasing function of t for all \(x\ge 0\). Hence, the first term on the right-hand side of (12) is nonnegative. To complete the proof, we just need to show that the second term is also nonnegative. Write \(U_{m}(t)= \genfrac(){0.0pt}1{n}{m}t^{m}\), so that \(K_{l,j,k}^{n}(t)=U_{l}(\Phi (t))/\sum _{m=k}^{j-1}U_{m}(\Phi (t))\); since \(\Phi (t)\) is increasing in t, it suffices to determine the sign of the derivative with respect to \(\Phi \), and, with a slight abuse of notation, we write t for \(\Phi (t)\) in what follows. We have

$$\begin{aligned}&\sum _{l=k}^{j-1}C_{k,l,n}(t,x)\left[ \frac{d}{dt}K_{l,j,k}^{n}(t)\right] \\&\quad = \frac{\sum _{l=k}^{j-1}C_{k,l,n}(t,x)\left[ U_{l}^{{\prime }}(t)\sum _{m=k}^{j-1}U_{m}(t)-U_{l}(t)\sum _{m=k}^{j-1}U_{m}^{{\prime }}(t) \right] }{\left[ \sum _{m=k}^{j-1}U_{m}(t)\right] ^{2}}. \end{aligned}$$

After some algebraic manipulations, it can be shown that the numerator of the above expression can be written as

$$\begin{aligned}&\sum _{l=k}^{j-1}\sum _{m=k}^{j-1}U_{l}^{^{\prime }}(t)U_{m}(t)\left[ C_{k,l,n}(t,x)-C_{k,m,n}(t,x)\right] \nonumber \\&\quad =\sum _{l=k}^{j-1}\sum _{m=k}^{l}U_{l}^{^{\prime }}(t)U_{m}(t)\left[ C_{k,l,n}(t,x)-C_{k,m,n}(t,x)\right] \nonumber \\&\qquad +\sum _{m=k}^{j-1}\sum _{l=k}^{m}U_{l}^{^{\prime }}(t)U_{m}(t)\left[ C_{k,l,n}(t,x)-C_{k,m,n}(t,x)\right] \nonumber \\&\quad =\sum _{l=k}^{j-1}\sum _{m=k}^{l}\left[ U_{l}^{^{\prime }}(t)U_{m}(t)-U_{m}^{^{\prime }}(t)U_{l}(t)\right] \left[ C_{k,l,n}(t,x)-C_{k,m,n}(t,x)\right] \nonumber \\&\quad =\sum _{l=k}^{j-1}\sum _{m=k}^{l}\left( l-m\right) \left[ \genfrac(){0.0pt}1{n}{l} \genfrac(){0.0pt}1{n}{m}t^{l+m-1}\right] \left[ C_{k,l,n}(t,x)-C_{k,m,n}(t,x)\right] \nonumber \\&\quad \ge {0}, \end{aligned}$$
(13)

where the last inequality follows from the fact that for \(m\le l,\) we have

$$\begin{aligned} (t-X_{k:n}|X_{m:n}<t<X_{m+1:n})\le _{lr}(t-X_{k:n}|X_{l:n}<t<X_{l+1:n}), \end{aligned}$$

so that

$$\begin{aligned} C_{k,l,n}(t,x)\ge C_{k,m,n}(t,x). \end{aligned}$$

This completes the proof of the theorem.
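For instance, if the components are exponentially distributed with rate \(\lambda \), then

$$\begin{aligned} r(t)=\frac{\lambda e^{-\lambda t}}{1-e^{-\lambda t}}=\frac{\lambda }{e^{\lambda t}-1}, \end{aligned}$$

which is decreasing in t, so Theorem 6 applies to coherent systems with exponential components.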

Theorem 7

Assume that \(X_1,...,X_n\) and \(Y_1,...,Y_n\) are two sets of i.i.d. random variables with continuous distribution functions F and G, respectively. We also denote the corresponding kth order statistics by \(X_{k:n}\) and \(Y_{k:n}\), respectively. If \(X_1\le _{rh}Y_1\), then, for all \(1\le k<j\le n\),

$$\begin{aligned} (t-X_{k:n}|X_{k:n}<t<X_{j:n})\le _{rh}(t-Y_{k:n}|Y_{k:n}<t<Y_{j:n}). \end{aligned}$$

Proof

Note that from (11),

$$\begin{aligned} P(t-X_{k:n}>x|X_{l:n}<t<X_{l+1:n})=\int _{0}^{F_{t}(x)}k\genfrac(){0.0pt}1{l}{k}u^{k-1}(1-u)^{l-k}du, \end{aligned}$$

where \(F_{t}(x)=\frac{F(t-x)}{F(t)}\) as before. Defining \(\phi _{1}(t)=\frac{F(t)}{{\bar{F}}(t)}\), \(\phi _{2}(t)=\frac{G(t)}{{\bar{G}}(t)}\) and \(G_{t}(x)=\frac{G(t-x)}{G(t)},\) we have

$$\begin{aligned}&P(t-X_{k:n}>x|X_{k:n}<t<X_{j:n})\\&\quad =\frac{ \sum _{l=k}^{j-1}P(X_{k:n}<t-x|X_{l:n}<t<X_{l+1:n})P(X_{l:n}<t<X_{l+1:n})}{ P(X_{k:n}<t<X_{j:n})}\\&\quad =\frac{\sum _{l=k}^{j-1}\int _{0}^{F_{t}(x)}k\genfrac(){0.0pt}1{l}{k} \genfrac(){0.0pt}1{n}{l}u^{k-1}(1-u)^{l-k}\phi _{1}^{l}(t)du}{\sum _{m=k}^{j-1}\genfrac(){0.0pt}1{n }{m}\phi _{1}^{m}(t)}\\&\quad =\frac{\int _{0}^{1}I(0<u<F_{t}(x))\sum _{l=k}^{j-1}k \genfrac(){0.0pt}1{l}{k}\genfrac(){0.0pt}1{n}{l}u^{k-1}(1-u)^{l-k}\phi _{1}^{l}(t)du}{ \sum _{m=k}^{j-1}\genfrac(){0.0pt}1{n}{m}\phi _{1}^{m}(t)}. \end{aligned}$$

Similarly, we have

$$\begin{aligned}&P(t-Y_{k:n}>x|Y_{k:n}<t<Y_{j:n})\\&\quad =\frac{\int _{0}^{1}I(0<u<G_{t}(x))\sum _{l=k}^{j-1}k\genfrac(){0.0pt}1{l}{k}\genfrac(){0.0pt}1{n}{l}u^{k-1}(1-u)^{l-k}\phi _{2}^{l}(t)du}{\sum _{m=k}^{j-1}\genfrac(){0.0pt}1{n}{m}\phi _{2}^{m}(t)}. \end{aligned}$$

Note that

$$\begin{aligned}&\frac{P(t-Y_{k:n}>x|Y_{k:n}<t<Y_{j:n})}{ P(t-X_{k:n}>x|X_{k:n}<t<X_{j:n})}\\&\quad \propto \frac{\int _{0}^{1}I(0<u<G_{t}(x))\sum _{l=k}^{j-1}k\genfrac(){0.0pt}1{l}{k}\genfrac(){0.0pt}1{n}{l}u^{k-1}(1-u)^{l-k}\phi _{2}^{l}(t)du}{\int _{0}^{1}I(0<u<F_{t}(x))\sum _{l=k}^{j-1}k \genfrac(){0.0pt}1{l}{k}\genfrac(){0.0pt}1{n}{l}u^{k-1}(1-u)^{l-k}\phi _{1}^{l}(t)du}\\&\quad \propto E_{x}\left[ \psi (U,x)\right] , \end{aligned}$$

where, for \(0<u<F_{t}(x),\)

$$\begin{aligned} \psi (u,x)=\frac{I(0<u<G_{t}(x))\sum _{l=k}^{j-1}k\genfrac(){0.0pt}1{l}{k} \genfrac(){0.0pt}1{n}{l}u^{k-1}(1-u)^{l-k}\phi _{2}^{l}(t)}{I(0<u<F_{t}(x))\sum _{l=k}^{j-1}k\genfrac(){0.0pt}1{l}{k}\genfrac(){0.0pt}1{n}{l}u^{k-1}(1-u)^{l-k}\phi _{1}^{l}(t)} \end{aligned}$$

is decreasing in x by the assumption \(X\le _{rh}Y\) (which implies \(G_{t}(x)\le F_{t}(x)\)) and is increasing in u by Lemma 3. The nonnegative random variable U belongs to the family of distributions \(H=\{H(.|x):x\in {\mathbb {R}}_{+}\}\) with densities

$$\begin{aligned} h(u|x)=c(x)I(0<u<F_{t}(x)) \sum _{l=k}^{j-1}k\genfrac(){0.0pt}1{l}{k}\genfrac(){0.0pt}1{n}{l}u^{k-1}(1-u)^{l-k}\phi _{1}^{l}(t), \end{aligned}$$

where c(x) is a normalizing constant. Since h(u|x) is reverse regular of order 2 \((RR_{2})\) in \((u,x)\in {\mathbb {R}}_{+}^{2},\) we have \( H(.|x_{2})\le _{lr}H(.|x_{1})\) and hence, for \(0\le x_{1}\le x_{2},\) \( H(.|x_{2})\le _{st}H(.|x_{1})\). From Lemma 1, we have, for \(0\le x_{1}\le x_{2}\), \(E_{x_{2}}\left[ \psi (U,x_{2})\right] \le E_{x_{1}}\left[ \psi (U,x_{1})\right] \). Thus,

$$\begin{aligned} \frac{P(t-Y_{k:n}>x|Y_{k:n}<t<Y_{j:n})}{P(t-X_{k:n}>x|X_{k:n}<t<X_{j:n})} \end{aligned}$$

is decreasing in x for any \(t\ge 0\).

Theorem 8

If \(k\le m<j\), then \((t-X_{k:n}|X_{k:n}<t<X_{j:n})\ge _{lr}(t-X_{m:n}|X_{m:n}<t<X_{j:n})\).

Proof

Let \(k\le m<j\) and let us denote

$$\begin{aligned} U= & {} (t-X_{k:n}|X_{k:n}<t<X_{j:n}), \\ V= & {} (t-X_{m:n}|X_{m:n}<t<X_{j:n}), \\ h_{j,k}(t)= & {} \frac{1}{P(X_{k:n}<t<X_{j:n})}. \end{aligned}$$

Then, we have

$$\begin{aligned}&P(t-X_{k:n} >x|X_{k:n}<t<X_{j:n})\\&\quad =h_{j,k}(t)P(X_{k:n}<t-x,X_{k:n}<t<X_{j:n}) \\&\quad =h_{j,k}(t)\sum _{l=k}^{j-1}P(X_{k:n}<t-x,X_{l:n}<t<X_{l+1:n}) \\&\quad =h_{j,k}(t)\sum _{l=k}^{j-1}\sum _{s=k}^{l}\frac{n!}{s!(l-s)!(n-l)!} F(t-x)^{s}[F(t)-F(t-x)]^{l-s}(1-F(t))^{n-l} \\&\quad =h_{j,k}(t)\sum _{l=k}^{j-1}\left( {\begin{array}{c}n\\ l\end{array}}\right) (1-F(t))^{n-l}\sum _{s=k}^{l}\left( {\begin{array}{c}l \\ s\end{array}}\right) F(t-x)^{s}[F(t)-F(t-x)]^{l-s} \\&\quad =h_{j,k}(t)\sum _{l=k}^{j-1}\left( {\begin{array}{c}n\\ l\end{array}}\right) F(t)^{l}(1-F(t))^{n-l}\sum _{s=k}^{l} \left( {\begin{array}{c}l\\ s\end{array}}\right) \left[ \frac{F(t-x)}{F(t)}\right] ^{s}\left[ \frac{F(t)-F(t-x)}{F(t)}\right] ^{l-s}\\&\quad =h_{j,k}(t)\sum _{l=k}^{j-1}\left( {\begin{array}{c}n\\ l\end{array}}\right) F(t)^{l}(1-F(t))^{n-l}\int _{0}^{ \frac{F(t-x)}{F(t)}}\frac{l!}{(k-1)!(l-k)!}u^{k-1}(1-u)^{l-k}du \end{aligned}$$

and, after some manipulations, we get

$$\begin{aligned} f_{U}(x) =&\, h_{j,k}(t)\sum _{l=k}^{j-1}\left( {\begin{array}{c}n\\ l\end{array}}\right) F(t)^{l}(1-F(t))^{n-l}\\&\quad \times \frac{f(t-x)}{F(t)}\frac{l!}{(k-1)!(l-k)!}\left[ \frac{F(t-x)}{F(t)}\right] ^{k-1}\left[ 1-\frac{F(t-x)}{F(t)}\right] ^{l-k} \\ =&\,k\left( {\begin{array}{c}n\\ k\end{array}}\right) h_{j,k}(t)f(t-x)\left[ F(t-x)\right] ^{k-1}\left( 1-F(t-x)\right) ^{n-k}\\&\quad \times \sum _{u=n-j+1}^{n-k}\left( {\begin{array}{c}n-k\\ u\end{array}}\right) \left( \frac{1-F(t)}{ 1-F(t-x)}\right) ^{u}\left[ 1-\frac{1-F(t)}{1-F(t-x)}\right] ^{n-k-u} \\ =&\,C_{n,k}h_{j,k}(t)f(t-x)\left[ F(t-x)\right] ^{k-1}\left( 1-F(t-x)\right) ^{n-k}\\&\quad \times \int _{0}^{\frac{1-F(t)}{1-F(t-x)}}\frac{(n-k)!}{ (n-j)!(j-k-1)!}u^{n-j}(1-u)^{j-k-1}du, \end{aligned}$$

where \(C_{n,k}=k\left( {\begin{array}{c}n\\ k\end{array}}\right) \). Similarly,

$$\begin{aligned} f_{V}(x)= & {} C_{n,m}h_{j,m}(t)f(t-x)\left[ F(t-x)\right] ^{m-1}\left( 1-F(t-x)\right) ^{n-m}\\&\times \int _{0}^{\frac{1-F(t)}{1-F(t-x)}}\frac{(n-m)!}{ (n-j)!(j-m-1)!}u^{n-j}(1-u)^{j-m-1}du. \end{aligned}$$

Therefore, we have

$$\begin{aligned} H(x)=\frac{f_{U}(x)}{f_{V}(x)}&=C_{j,k,m}(t)\left( \frac{1-F(t)}{ A(x,t)+F(t)-1}\right) ^{m-k}\\&\quad \times \frac{\int _{0}^{A(x,t)}u^{n-j}(1-u)^{j-k-1}du}{ \int _{0}^{A(x,t)}u^{n-j}(1-u)^{j-m-1}du}, \end{aligned}$$

where

$$\begin{aligned} A(x,t)=\frac{1-F(t)}{1-F(t-x)} \end{aligned}$$

which is a decreasing function of x, and \(C_{j,k,m}(t)\) collects all the factors that do not depend on x. Now, let us define

$$\begin{aligned} B(x,t)= & {} \left( \frac{1-F(t)}{A(x,t)+F(t)-1}\right) ^{m-k},\\ C(x,t)= & {} \frac{\int _{0}^{A(x,t)}u^{n-j}(1-u)^{j-k-1}du}{ \int _{0}^{A(x,t)}u^{n-j}(1-u)^{j-m-1}du}. \end{aligned}$$

Then, clearly, B(x, t) is increasing in x. On the other hand, we have

$$\begin{aligned} \frac{\partial }{\partial x}C(x,t)= & {} \frac{\frac{\partial }{\partial x}A(x,t)\left( A^{n-j}(x,t)(1-A(x,t))^{j-k-1} \int _{0}^{A(x,t)}u^{n-j}(1-u)^{j-m-1}du\right) }{\left( \int _{0}^{A(x,t)}u^{n-j}(1-u)^{j-m-1}du\right) ^{2}}\\&-\frac{\frac{\partial }{\partial x}A(x,t)\left( A^{n-j}(x,t)(1-A(x,t))^{j-m-1} \int _{0}^{A(x,t)}u^{n-j}(1-u)^{j-k-1}du\right) }{\left( \int _{0}^{A(x,t)}u^{n-j}(1-u)^{j-m-1}du\right) ^{2}}. \end{aligned}$$

The numerator of the above expression is equal to \(\eta _1\times \eta _2\), where

$$\begin{aligned} \eta _1=\frac{\partial }{\partial x}A(x,t)\,A^{n-j}(x,t)(1-A(x,t))^{j-m-1}, \end{aligned}$$

and

$$\begin{aligned} \eta _2=\left( (1-A(x,t))^{m-k}\int _{0}^{A(x,t)}u^{n-j}(1-u)^{j-m-1}du- \int _{0}^{A(x,t)}u^{n-j}(1-u)^{j-k-1}du\right) . \end{aligned}$$

Note that, for \(0\le u\le A(x,t)\le 1\), we have \(1-A(x,t)\le 1-u\). This implies that

$$\begin{aligned}&\int _{0}^{A(x,t)}(1-A(x,t))^{m-k}u^{n-j}(1-u)^{j-m-1}du \\&\quad \le \int _{0}^{A(x,t)}(1-u)^{m-k}u^{n-j}(1-u)^{j-m-1}du \\&\quad =\int _{0}^{A(x,t)}u^{n-j}(1-u)^{j-k-1}du. \end{aligned}$$

Therefore, \(\eta _2\le 0\). From this and the fact that \(\frac{\partial }{\partial x}A(x,t)\le 0,\) we get \(\frac{\partial }{\partial x}C(x,t)\ge 0,\) i.e., C(x, t) is increasing in x. Consequently, \(H(x)=C_{j,k,m}(t)B(x,t)C(x,t)\) is also increasing in x, completing the proof of the theorem.

The following theorem compares two coherent systems with different signature vectors.

Theorem 9

For a fixed \(t\ge 0,\) let \({\mathbf {p}}_{1}(t)\) and \({\mathbf {p}}_{2}(t)\) be the vectors of conditional signatures in representation (5) of two coherent systems of order n, both based on components having i.i.d. lifetimes with common continuous distribution function F. Let \(T_{1}\) and \(T_{2}\) denote the corresponding lifetimes of the two systems.

  1. (i)

    If \({\mathbf {p}}_{1}(t)\le _{st} {\mathbf {p}}_{2}(t),\) then \((t-T_{1}|T_{1}<t<X_{j:n}) \ge _{st}(t-T_{2}|T_{2}<t<X_{j:n});\)

  2. (ii)

    If \({\mathbf {p}}_{1}(t)\le _{rh}{\mathbf {p}}_{2}(t),\) then \((t-T_{1}|T_{1}<t<X_{j:n})\ge _{rh}(t-T_{2}|T_{2}<t<X_{j:n});\)

  3. (iii)

    If \({\mathbf {p}}_{1}(t)\le _{lr} {\mathbf {p}}_{2}(t),\) then \((t-T_{1}|T_{1}<t<X_{j:n})\ge _{lr}(t-T_{2}|T_{2}<t<X_{j:n})\).

Proof

The proof follows from the mixture representation in (5) and Theorems 1.A.6, 1.B.50 and 1.C.17 of Shaked and Shanthikumar (2007), respectively.

In the sequel, we investigate the inactivity time of a coherent system under the assumption that, at the time of inspection, the operator finds that the system has already failed and that the number of failed components in the system is exactly l. In other words, we study the conditional random variables:

$$\begin{aligned} ( t-T | T<X_{l:n}<t<X_{l+1:n}), \ \ \ \ \ l=i+1, i+2, ... , n-1. \end{aligned}$$
(14)

The reliability function of this conditional random variable is given by

$$\begin{aligned}&P(t-T>x|T<X_{l:n}<t<X_{l+1:n})\\&\quad =\sum _{m=1}^{l-1}P(T=X_{m:n},t-T>x|T<X_{l:n}<t<X_{l+1:n})\\&\quad =\sum _{m=1}^{l-1}\frac{P(T=X_{m:n}, t-X_{m:n}>x,X_{m:n}<X_{l:n}<t<X_{l+1:n})}{P(T<X_{l:n}<t<X_{l+1:n})} \\&\quad =\sum _{m=1}^{l-1}\frac{s_{m}P(X_{l:n}<t<X_{l+1:n})}{ P(T<X_{l:n}<t<X_{l+1:n})}P(t-X_{m:n}>x|X_{l:n}<t<X_{l+1:n}) \\&\quad =\sum _{m=1}^{l-1}p_{l,m}(t)P(t-X_{m:n}>x|X_{l:n}<t<X_{l+1:n})\\&\quad =\sum _{m=1}^{l-1}p_{l,m}(t)C_{m,l,n}^{X}(t,x), \end{aligned}$$

where \(C_{m,l,n}^{X}(t,x)\) is defined in (9), with the superscript X indicating that the underlying component distribution is F, and

$$\begin{aligned} p_{l,m}(t)= & {} \frac{s_{m}P(X_{l:n}<t<X_{l+1:n})}{P(T<X_{l:n}<t<X_{l+1:n})} \\= & {} \frac{s_{m}P(X_{l:n}<t<X_{l+1:n})}{ \sum _{u=1}^{l-1}s_{u}P(X_{l:n}<t<X_{l+1:n})} \\= & {} \frac{s_{m}}{\sum _{u=1}^{l-1}s_{u}},\text { }m=1,...,l-1 \\= & {} p_{m},\text { }m=1,...,l-1. \end{aligned}$$

This shows that \(p_{l,m}(t)\) does not depend on t. Moreover, since \(l\ge i+1\), we have \(\sum \nolimits _{u=1}^{l-1}s_{u}=1\), so that \(p_{m}=s_{m}\) and the coefficients do not depend on l either.
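For illustration, consider the coherent system of order \(n=4\) with lifetime \(T=\min (X_{1},X_{2},\max (X_{3},X_{4}))\), for which a direct calculation gives \({\mathbf {s}}=(1/2,1/2,0,0)\), so that \(i=2\) and \(l=3\) is the only admissible value in (14). The above representation then reads

$$\begin{aligned} P(t-T>x|T<X_{3:4}<t<X_{4:4})=\frac{1}{2}C_{1,3,4}^{X}(t,x)+\frac{1}{2}C_{2,3,4}^{X}(t,x). \end{aligned}$$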

Now, we can prove the following theorem.

Theorem 10

Assume that the conditions of Theorem 7 are met, and let \(T_{1}\) and \(T_{2}\) denote the lifetimes of two systems with a common signature vector of the form (1), based on components with lifetimes \(X_{1},...,X_{n}\) and \(Y_{1},...,Y_{n}\), respectively. Then

$$\begin{aligned} (t-T_{1}|T_{1}<X_{l:n}<t<X_{l+1:n})\ge _{st}(t-T_{2}|T_{2}<Y_{l:n}<t<Y_{l+1:n}). \end{aligned}$$

Proof

Note that

$$\begin{aligned} P(t-T_{1}>&x|T_{1}<X_{l:n}<t<X_{l+1:n}) -P(t-T_{2}>x|T_{2}<Y_{l:n}<t<Y_{l+1:n})\nonumber \\= & {} \sum _{m=1}^{l-1} p_{m}C_{m,l,n}^{X}(t,x)-\sum _{m=1}^{l-1}p_{m}C_{m,l,n}^{Y}(t,x)\nonumber \\= & {} \sum _{m=1}^{l-1}p_{m}(C_{m,l,n}^{X}(t,x)-C_{m,l,n}^{Y}(t,x)). \end{aligned}$$
(15)

From (11) and the assumption that \(X\le _{rh}Y,\) we easily get \(C_{m,l,n}^{X}(t,x)\ge C_{m,l,n}^{Y}(t,x)\). Hence, the right-hand side of (15) is nonnegative, completing the proof of the theorem.

The results of the following theorem can be easily proved using Theorems 1.A.6, 1.B.52 and 1.C.17 of Shaked and Shanthikumar (2007), respectively.

Theorem 11

Let \({\mathbf {p}}_{1}\) and \({\mathbf {p}}_{2}\) be the vectors of coefficients in the representation preceding Theorem 10 for two coherent systems of order n with signatures of the form (1), both based on components with i.i.d. lifetimes distributed according to a common continuous distribution function F. Let \(T_{1}\) and \(T_{2}\) be the corresponding lifetimes of the systems.

  1. (i)

    If \({\mathbf {p}}_{1}\le _{st}{\mathbf {p}}_{2},\) then \((t-T_{1}|T_{1}<X_{l:n}<t<X_{l+1:n})\ge _{st}(t-T_{2}|T_{2}<X_{l:n}<t<X_{l+1:n});\)

  2. (ii)

    If \({\mathbf {p}}_{1}\le _{rh}{\mathbf {p}}_{2},\) then \((t-T_{1}|T_{1}<X_{l:n}<t<X_{l+1:n})\ge _{rh}(t-T_{2}|T_{2}<X_{l:n}<t<X_{l+1:n});\)

  3. (iii)

    If \({\mathbf {p}}_{1}\le _{lr}{\mathbf {p}}_{2},\) then \((t-T_{1}|T_{1}<X_{l:n}<t<X_{l+1:n})\ge _{lr}(t-T_{2}|T_{2}<X_{l:n}<t<X_{l+1:n})\).