2.1 Coherent System Lifetimes

Let us assume from now on that \(X_1,\ldots ,X_n\) are non-negative random variables on a given probability space \((\Omega ,\mathcal {S},\Pr )\) that represent the lifetimes of the components in a system. Hence the system lifetime T can be obtained from the component lifetimes as follows.

Proposition 2.1

If \(\psi \) is a semi-coherent system of order n with minimal path and minimal cut sets \(P_1,\ldots ,P_r\) and \(C_1,\ldots ,C_s\), then the system lifetime T can be written as

$$\begin{aligned} T=\max _{1\le j\le r}\min _{i\in P_j} X_i \end{aligned}$$
(2.1)

and

$$\begin{aligned} T=\min _{1\le j\le s}\max _{i\in C_j} X_i. \end{aligned}$$
(2.2)

The proof is immediate. Let us assume from now on that the above expressions (2.1) and (2.2) (or the expressions (1.4) and (1.5)) are used to extend the Boolean structure function \(\psi \) to a real-valued function \(\psi :\mathbb {R}^n\rightarrow \mathbb {R}\) (we use the same notation). Thus, we can write the system lifetime simply as \(T=\psi (X_1,\ldots ,X_n)\). For example, the lifetime of a series system with n components is \(T=\min (X_1,\ldots , X_n)\). Note, however, that \(T\ne X_1 \cdots X_n\), i.e., we cannot use the product-coproduct representation of the Boolean structure function to obtain the system lifetime.

As a consequence, T is also a non-negative random variable (on the same probability space). Another consequence is that \(T=X_I\) for some \(I\in [n]\) (but not always the same I; that is, I is also a random variable taking values in \(\{1,\ldots ,n\}\)).

Note that the lifetime of the k-out-of-n system coincides with the order statistic \(X_{n-k+1:n}\) from \(X_1,\ldots ,X_n\). Therefore, the coherent systems contain the order statistics (ordered component lifetimes) as particular cases. Also, as a consequence of the preceding proposition, we have \(T=X_{J:n}\) for some \(J\in [n]\); that is, the system fails at one of the ordered failure times \(X_{1:n}\le \dots \le X_{n:n}\). In fact, we will show that, under some assumptions, T can be written as a mixture of the k-out-of-n systems.

We conclude this subsection by noting that the systems can also be studied by using stochastic processes. Thus, for a fixed time \(t\ge 0\), we can define the Boolean (or Bernoulli) random variables

$$B_i(t):=1_{\{X_i>t\}}$$

for \(i=1,\ldots ,n\), where \(1_A=1\) (resp. 0) if A is true (false) and \(B_i(t)=1\) (resp. 0) means that the ith component is working (has failed) at time t. Hence the system state at time t is

$$B(t)=\psi (B_1(t),\ldots ,B_n(t))=1_{\{T>t\}}.$$

Conversely, note that \(X_i=\sup \{t:B_i(t)=1\}\) and \(T=\sup \{t:B(t)=1\}\). Here the system performance is represented by the stochastic process \(\{B(t)\}_{t\ge 0}\), where we usually assume \(B(0)=1\) and \(B(\infty )=0\). As mentioned in the preface, we will not use this approach in the present book; the interested reader may consult the references cited there.

2.2 Reliability and Aging Functions

As the system and component lifetimes T and \(X_1,\ldots ,X_n\) are non-negative random variables, we can use all the functions employed to describe the aging process, as well as the standard functions of probability theory. The main one is the system reliability (or survival) function \(\bar{F}_T\) defined as

$$\bar{F}_T(t):=\Pr (T>t)$$

for all t. We usually assume \(\bar{F}_T(0)=1\) (the system is working at time \(t=0\)). \(\bar{F}_T\) is always a decreasing function and satisfies \(\lim _{t\rightarrow \infty } \bar{F}_T(t)=0\). The same properties are satisfied by the components’ reliability functions defined as

$$\bar{F}_i(t):=\Pr (X_i>t)$$

for all t and \(i=1,\ldots ,n\). The respective distribution (or unreliability) functions are defined as

$$F_T(t):=\Pr (T\le t)=1-\bar{F}_T(t)$$

and

$$F_i(t):=\Pr (X_i\le t)=1-\bar{F}_i(t)$$

for all t and \(i=1,\ldots ,n\). Clearly, \(\bar{F}_T(t)\) and \(F_T(t)\) represent the probabilities of a working and a broken system, respectively, at time t, which is why \(\bar{F}_T(t)\) is usually preferred to \(F_T(t)\). Moreover, it is easy to see that, for non-negative random variables, the mean or expected value (lifetime) can be computed as

$$\begin{aligned} E(T)=\int _0^\infty \bar{F}_T(x)dx. \end{aligned}$$
(2.3)

A similar expression holds for the components. In Reliability Theory, this value is also called the Mean Time To Failure (MTTF). 
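As a quick numerical illustration of (2.3) (a sketch added here, not taken from the original listings), the MTTF of an exponential lifetime can be recovered in R by integrating its reliability function:

```r
# Numerical check of E(T) = int_0^Inf \bar F_T(x) dx, formula (2.3),
# for an exponential lifetime with mean mu (illustrative values).
mu <- 2
relExp <- function(x) exp(-x / mu)        # reliability function
mttf <- integrate(relExp, 0, Inf)$value   # area below the reliability curve
mttf                                      # close to mu = 2
```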

The components’ reliability functions will be modelled with the most usual models (distributions) for non-negative random variables. Then, as we will see in the following sections, the system reliability will be a function of the components’ reliability functions.

The most important model in this field is the exponential distribution with reliability function

$$\bar{F}_T(t)=\exp (-t/\mu ) \text { for } t\ge 0,$$

where \(\mu >0\) is the expected value (or MTTF). This model is the unique continuous model which satisfies the following property

$$\Pr (T>x)=\Pr (T-t>x|T>t) \text { for all } t,x\ge 0.$$

This property is called the lack of memory property and means that the reliability in this model is the same for new and used units. So this model plays a central role in reliability theory, representing units that do not age. Such units are considered good ones, since the reliability of used units is usually lower than that of new units (natural or positive aging). In the opposite case, the used units (or systems) have greater reliability functions than the new ones (unnatural or negative aging). Note that here “positive” does not mean “good”.

A good alternative (more flexible) model is the Weibull distribution with reliability function

$$\bar{F}(t)=\exp (-(t/\beta )^\alpha ) \text { for } t\ge 0,$$

where \(\alpha ,\beta >0\). The parameter \(\alpha \) is called the shape parameter and \(\beta \) is the scale parameter. Note that the Weibull model contains the exponential model (obtained when \(\alpha =1\)). It also contains models with natural (\(\alpha >1\)) and unnatural (\(0<\alpha <1\)) aging (see below). So it is more flexible than the exponential model. Its distribution function can be computed in R with pweibull(x,\(\mathtt {\alpha }\),\(\mathtt {\beta }\)).
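For instance (an illustrative check, not part of the original listings), one can verify that pweibull agrees with the formula above:

```r
# The Weibull distribution function F(t) = 1 - exp(-(t/beta)^alpha)
# computed directly and with pweibull (shape = alpha, scale = beta).
alpha <- 2; beta <- 1.5
t <- c(0.5, 1, 2)
F1 <- pweibull(t, shape = alpha, scale = beta)
F2 <- 1 - exp(-(t / beta)^alpha)
all.equal(F1, F2)   # TRUE
```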

The random variable \(T_t=(T-t|T>t)\) is called the residual lifetime (RL) of the system. It represents the performance of a used system that is still working at time t. Its reliability function is

$$\begin{aligned} \bar{F}_T(x|t):=\Pr (T-t>x|T>t)=\frac{\Pr (T>x+t)}{\Pr (T>t)}=\frac{\bar{F}_T(x+t)}{\bar{F}_T(t)} \end{aligned}$$
(2.4)

for all \(x\ge 0\). It is defined for all \(t\ge 0\) such that \(\bar{F}_T(t)>0\). If \(\bar{F}_T(t)=0\) for some t, then this random variable does not exist (since the system has already failed for sure by time t). The lack of memory property can also be written as

$$\bar{F}_T(x|t)=\bar{F}_T(x) \text { for all } t,x\ge 0.$$

The residual lifetime of the system will be studied in Sect. 4.4.

Analogously, the residual lifetimes of the components are \(X_{i,t}=(X_i-t|X_i>t)\) and their reliability functions are

$$\bar{F}_i(x|t):=\Pr (X_i-t>x|X_i>t)=\frac{\bar{F}_i(x+t)}{\bar{F}_i(t)}$$

for \(i=1,\ldots ,n\) and \(t\ge 0\) such that \(\bar{F}_i(t)>0\). They are plotted in Fig. 2.1 for \(t=0,1,2,3,4,5\) when the components have a common Weibull reliability with \(\beta =1\) and \(\alpha =2\) (left) and \(\alpha =1/2\) (right). In the left plot they are decreasing in t (positive or natural aging) while in the right plot they are increasing in t (negative aging).

Fig. 2.1

Reliability functions of the residual lifetimes of a Weibull model with \(\beta =1\) and \(\alpha =2\) (left) and \(\alpha =1/2\) (right) for \(t=0,1,2,3,4,5\) (from the top on the left and from the bottom on the right)

The code for the left plot is the following:

figure a
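That listing is not reproduced above; a minimal R sketch that generates a comparable plot (an illustration for the stated Weibull parameters, not necessarily the author's code) is:

```r
# Residual reliability functions (2.4) of a Weibull model with beta = 1 and
# alpha = 2, for t = 0, 1, ..., 5 (sketch of the left plot in Fig. 2.1).
alpha <- 2; beta <- 1
rel <- function(x) exp(-(x / beta)^alpha)        # Weibull reliability
x <- seq(0, 3, length.out = 201)
plot(x, rel(x), type = "l", ylim = c(0, 1),
     xlab = "x", ylab = "Residual reliability")  # t = 0
for (t in 1:5) lines(x, rel(x + t) / rel(t), lty = 2)
```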

There are several functions that can be used to describe the aging process. The first one is the mean residual life (MRL)   defined as

$$m_T(t)=E(T_t)=E(T-t|T>t)$$

for all \(t\ge 0\) such that \(\bar{F}_T(t)>0\) and the expectation exists. The MRL functions of the components are defined analogously. From (2.3) and (2.4) it can be computed as

$$m_T(t)=\int _0^\infty \bar{F}_T(x|t)dx=\int _0^\infty \frac{\bar{F}_T(x+t)}{\bar{F}_T(t)}dx=\frac{1}{\bar{F}_T(t)}\int _t^\infty \bar{F}_T(x)dx.$$

Note that it is the area below the residual reliability function \(\bar{F}_T(x|t)\) for fixed \(t\ge 0\). This function is used to define the increasing mean residual life (IMRL) and decreasing mean residual life (DMRL) aging classes (according to the monotonicity of \(m_T\)). Natural aging is represented by the DMRL class. The exponential model belongs to both classes since its MRL satisfies \(m(t)=\mu \) for all \(t\ge 0\).
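The last expression lends itself to a direct numerical check (an illustration added here): for an exponential model the computed MRL is constant and equal to \(\mu \).

```r
# Mean residual life m(t) = (1/\bar F(t)) * int_t^Inf \bar F(x) dx,
# checked for an exponential model, where m(t) = mu for all t (sketch).
mu <- 1.5
relE <- function(x) exp(-x / mu)
mrl <- function(t) integrate(relE, t, Inf)$value / relE(t)
c(mrl(0), mrl(2), mrl(5))   # all close to mu = 1.5
```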

The second one is called the hazard (or failure) rate  (HR or FR) function and it is defined as

$$h_T(t)=\frac{f_T(t)}{\bar{F}_T(t)}$$

for all t such that \(\bar{F}_T(t)>0\), where \(f_T(t)=F_T'(t)\) is a probability density function (PDF) of T (so, note that \(h_T\) is not unique). To explain its meaning, we can write it as

$$h_T(t)=\lim _{\epsilon \rightarrow 0^+} \frac{\Pr (t<T<t+\epsilon |T>t)}{\epsilon }.$$

Hence, it represents the conditional probability of failure per unit time in the interval \([t,t+\epsilon ]\), as \(\epsilon \rightarrow 0^+\), for a unit that is working at time t.

It is used to define the increasing failure rate (IFR) and the decreasing failure rate (DFR) aging classes. The exponential model belongs to both classes since its hazard satisfies \(h(t)=1/\mu \) for all \(t\ge 0\). In the Weibull model we have \(h(t)=(\alpha /\beta ) (t/\beta )^{\alpha -1}\) for all \(t> 0\). Therefore, it is IFR for \(\alpha \ge 1\) and DFR for \(0<\alpha \le 1\).

These aging functions are related (when they exist) by the following expression

$$h(t)=\frac{1+m'(t)}{m(t)}.$$

So each of these functions determines the reliability function; in particular, the hazard rate does so through the following inversion formula

$$\bar{F}(t)=\exp \left( -\int _{0}^t h(x)dx\right) $$

for all \(t\ge 0\).
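As an illustrative check of the inversion formula (not part of the original listings), we can reconstruct the Weibull reliability from its hazard rate:

```r
# Inversion formula: \bar F(t) = exp(-int_0^t h(x) dx), checked for the
# Weibull hazard h(t) = (alpha/beta) * (t/beta)^(alpha - 1) (sketch).
alpha <- 2; beta <- 1
h <- function(x) (alpha / beta) * (x / beta)^(alpha - 1)
t <- 1.3
relFromH <- exp(-integrate(h, 0, t)$value)
relDirect <- exp(-(t / beta)^alpha)
all.equal(relFromH, relDirect)   # TRUE
```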

Similar properties can be obtained for the reversed hazard rate (RHR) and the mean inactivity time (MIT) functions. The first one is defined by

$$\bar{h}_T(t)=\frac{f_T(t)}{F_T(t)}$$

for t such that \(F_T(t)>0\) and the second by

$$\bar{m}_T(t)=E(t-T|T\le t)$$

for all \(t\ge 0\) such that these expectations exist. The meaning of \(\bar{m}_T\) is clear: it is the expected inactivity time for a system (or unit) that has failed before t. Analogously, \(\bar{h}_T(t)\) represents the instantaneous probability of failure at t for a unit that has failed in the interval [0, t]. Note that the greater \(\bar{h}_T(t)\), the better, since it means that the inactivity time \((t-T|T\le t)\) is close to zero. The monotonicity properties of \(\bar{h}_T\) and \(\bar{m}_T\) are used to define the aging classes IRHR/DRHR and IMIT/DMIT. All these aging classes will be studied in Chap. 4.

2.3 Signature Representations

The first signature representation was obtained by Samaniego (1985) (see also Samaniego 2007). It is based on the fact that the system fails at the failure time of one of its components. However, we need some assumptions. The first one is that the component lifetimes are independent and identically distributed (IID). In this case, the common distribution (reliability) function of the component lifetimes is denoted simply by F (\(\bar{F}\)). The second one is that F is continuous (to avoid ties). Then the representation can be stated as follows.

Theorem 2.1

(Samaniego, 1985) If T is the lifetime of a coherent system with IID component lifetimes \(X_1,\ldots ,X_n\) having a common continuous distribution function F, then

$$\begin{aligned} \bar{F}_T(t)=\sum _{i=1}^n s_i \bar{F}_{i:n}(t) \end{aligned}$$
(2.5)

for all t, where \(s_1,\ldots ,s_n\) are nonnegative coefficients such that \(\sum _{i=1}^n s_i=1\) and that do not depend on F and where \(\bar{F}_{i:n}\) is the reliability function of \(X_{i:n}\) for \(i=1,\ldots ,n\). Moreover, these coefficients satisfy \(s_i=\Pr (T=X_{i:n})\) for \(i=1,\ldots ,n\).

Proof

First note that the events \(\{T=X_{i:n}\}\), for \(i=1,2,\ldots ,n\), are a partition of the probability space \(\Omega \) since, as F is continuous, then \(\Pr (X_i=X_j)=0\) for all \(i\ne j\). Hence, from the law of total probability, we have

$$\begin{aligned} \bar{F}_T(t)&=\sum _{i=1}^n \Pr (\{T>t\}\cap \{T=X_{i:n}\})\\&=\sum _{i=1}^n \Pr (T=X_{i:n}) \Pr (T>t|T=X_{i:n})\\&=\sum _{i=1}^n \Pr (T=X_{i:n}) \Pr (X_{i:n}>t|T=X_{i:n})\\&=\sum _{i=1}^n \Pr (T=X_{i:n}) \Pr (X_{i:n}>t), \end{aligned}$$

where in the sum we only consider the terms with \(\Pr (T=X_{i:n})>0\) (the others are zero) and where the last equality is obtained from the independence of the events \(\{X_{i:n}>t\}\) and \(\{T=X_{i:n}\}\) (under the stated assumptions). Thus we obtain (2.5) with \(s_i=\Pr (T=X_{i:n})\), for \(i=1,\ldots ,n\) and

$$\sum _{i=1}^n s_i=\sum _{i=1}^n \Pr (T=X_{i:n})=\Pr (\Omega )=1$$

which concludes the proof.    \(\square \)

The vector \(\mathbf {s}=(s_1,\ldots ,s_n)\) with the coefficients in (2.5) is called the signature of the system in Samaniego (1985) (see also Samaniego 2007). It is also called the destruction spectrum (or simply D-spectrum) when we use networks instead of systems (see, e.g., Gertsbakh and Shpungin 2010, p. 85). 

Moreover, these coefficients only depend on the structure of the system (under these assumptions). Actually, they can be computed from \(\psi \) as

$$\begin{aligned} s_i=\frac{|A_i|}{n!} \text { for }i=1,\ldots ,n, \end{aligned}$$
(2.6)

where \(|A_i|\) is the cardinality of the set \(A_i\) of all the permutations \(\sigma \) of the set \([n]=\{1,\ldots ,n\}\) which satisfy that \(\psi (x_1,\ldots ,x_n)=x_{i:n}\) whenever \(x_{\sigma (1)}<\ldots <x_{\sigma (n)}\) (see Samaniego 2007, Chap. 3).

The signature coefficients are also determined by the Boolean structure function \(\psi \) as follows

$$\begin{aligned} s_i=\frac{1}{\left( {\begin{array}{c}n\\ i-1\end{array}}\right) }\sum \limits _{ \sum _{j=1}^nx_j=n-i+1 }\psi (x_1, \ldots ,x_n)\ - \frac{1}{\left( {\begin{array}{c}n\\ i\end{array}}\right) } \sum \limits _{ \sum _{j=1}^nx_j=n-i} \psi (x_1, \ldots ,x_n) \end{aligned}$$
(2.7)

for \( i=1,\ldots ,n\) (see Boland 2001).

Example 2.1

For the coherent system with structure function \(\psi (x_1,x_2,x_3)=\min (x_1,\max (x_2,x_3))\) (see Fig. 2.2), we have the following options (permutations):

$$\begin{array} [c]{|c|c|c|c|}\hline \sigma &{} x_{\sigma (1)}<x_{\sigma (2)}<x_{\sigma (3)} &{}\psi &{}J\\ \hline (1,2,3)&{}x_1<x_2<x_3 &{} x_1=x_{1:3} &{} 1\\ (1,3,2)&{}x_1<x_3<x_2 &{} x_1=x_{1:3} &{} 1\\ (2,1,3)&{}x_2<x_1<x_3 &{} x_1=x_{2:3} &{}2\\ (2,3,1)&{}x_2<x_3<x_1 &{} x_3=x_{2:3}&{}2 \\ (3,1,2)&{}x_3<x_1<x_2 &{} x_1=x_{2:3} &{}2\\ (3,2,1)&{}x_3<x_2<x_1 &{} x_2=x_{2:3}&{}2\\ \hline \end{array}$$

and hence its signature is \(\mathbf {s}=(1/3,2/3,0)\). Therefore, from (2.5), its reliability function can be written as

$$\bar{F}_T(t)=\frac{1}{3}\bar{F}_{1:3}(t)+\frac{2}{3}\bar{F}_{2:3}(t)$$

for all t. The signature can also be computed from (2.7) as

$$s_1=\frac{1}{\left( {\begin{array}{c}3\\ 0\end{array}}\right) }\sum _{x_1+x_2+x_3=3}\psi (x_1,x_2,x_3)-\frac{1}{\left( {\begin{array}{c}3\\ 1\end{array}}\right) }\sum _{x_1+x_2+x_3=2}\psi (x_1,x_2,x_3)=1-\frac{2}{3}=\frac{1}{3},$$
$$s_2=\frac{1}{\left( {\begin{array}{c}3\\ 1\end{array}}\right) }\sum _{x_1+x_2+x_3=2}\psi (x_1,x_2,x_3)-\frac{1}{\left( {\begin{array}{c}3\\ 2\end{array}}\right) }\sum _{x_1+x_2+x_3=1}\psi (x_1,x_2,x_3)=\frac{2}{3}$$

and \(s_3=1-s_1-s_2=0\).\(\blacktriangleleft \)
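The permutation count in (2.6) can also be carried out by brute force in R (an illustrative sketch for this system, not code from the original listings):

```r
# Structural signature of psi(x) = min(x1, max(x2, x3)) computed by
# enumerating the 3! permutations, as in (2.6) (illustrative sketch).
psi <- function(x) min(x[1], max(x[2], x[3]))
perms <- expand.grid(1:3, 1:3, 1:3)
perms <- perms[apply(perms, 1, function(p) length(unique(p)) == 3), ]
s <- numeric(3)
for (k in seq_len(nrow(perms))) {
  x <- numeric(3)
  x[unlist(perms[k, ])] <- 1:3   # so that x_{sigma(1)} < x_{sigma(2)} < x_{sigma(3)}
  s[psi(x)] <- s[psi(x)] + 1     # psi(x) equals the rank i with psi = x_{i:3}
}
s / nrow(perms)                  # (1/3, 2/3, 0)
```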

Fig. 2.2

Example of a coherent system

Note that the signature contains the probabilities of the discrete random variable J such that \(T=X_{J:n}\). So we can say that T is a mixture of \(X_{1:n},\ldots , X_{n:n}\) with weights \(s_1,\ldots ,s_n\).

If \(X_1,\ldots ,X_n\) are IID, the ordered variables \(X_{1:n},\ldots , X_{n:n}\) are known as the order statistics. In Reliability Theory, they represent the lifetimes of k-out-of-n systems. Their basic properties can be seen in Arnold et al. (2008) and David and Nagaraja (2003). In particular, the expression for their reliability functions are the following (see, e.g., David and Nagaraja 2003, p. 46).

Proposition 2.2

If \(X_1,\ldots ,X_n\) are IID\(\sim \) \(F\), then the reliability function of \(X_{i:n}\) is

$$\begin{aligned} \bar{F}_{i:n}(t)=\sum _{j=0}^{i-1} \left( {\begin{array}{c}n\\ j\end{array}}\right) F^j(t) \bar{F}^{n-j}(t). \end{aligned}$$
(2.8)

Proof

Let us consider the Bernoulli random variables defined by \(B_i(t)=1\) iff \(X_i>t\). Then \(N(t):=\sum _{i=1}^n B_i(t)\) gives the number of components alive at time t. From the IID assumption, N(t) has a Binomial distribution \(\mathcal {B}(n,p_t)\) with probability \(p_t=\bar{F}(t)\). Therefore

$$\bar{F}_{i:n}(t)=\Pr (X_{i:n}>t)=\Pr ( N(t)>n-i)=\sum _{k=n-i+1}^{n} \left( {\begin{array}{c}n\\ k\end{array}}\right) F^{n-k}(t) \bar{F}^{k}(t)$$

and by making the change of variable \(j=n-k\) we obtain (2.8).    \(\square \)
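In R, (2.8) can be checked against the binomial tail probability used in the proof (an illustrative sketch; the standard exponential component distribution is an assumption):

```r
# \bar F_{i:n}(t) from formula (2.8) versus Pr(N(t) > n - i) with
# N(t) ~ Binomial(n, \bar F(t)), for IID standard exponential components.
n <- 5; i <- 3; t <- 0.7
relbar <- exp(-t)                                          # \bar F(t)
j <- 0:(i - 1)
v1 <- sum(choose(n, j) * (1 - relbar)^j * relbar^(n - j))  # formula (2.8)
v2 <- pbinom(n - i, n, relbar, lower.tail = FALSE)         # Pr(N(t) > n - i)
all.equal(v1, v2)   # TRUE
```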

Note that we can use the expression (2.8) in (2.5) to obtain \(\bar{F}_{T}\) as

$$\begin{aligned} \bar{F}_T(t)=\sum _{i=1}^{n}s_i \sum _{j=0}^{i-1} \left( {\begin{array}{c}n\\ j\end{array}}\right) F^j(t) \bar{F}^{n-j}(t). \end{aligned}$$
(2.9)

By interchanging the order of summations in (2.9) we obtain

$$\begin{aligned} \bar{F}_T(t)=\sum _{k=1}^{n}\left( \sum _{i=n-k+1}^n s_i\right) \left( {\begin{array}{c}n\\ k\end{array}}\right) F^{n-k}(t)\bar{F}^{k}(t), \end{aligned}$$
(2.10)

where \(S_{n-k+1}=\sum _{i=n-k+1}^n s_i\) is the probability that the system works when exactly k components work, \(F^{n-k}(t)\bar{F}^{k}(t)\) is the probability of having k specific components working at time t, and \(\left( {\begin{array}{c}n\\ k\end{array}}\right) \) is the number of ways of choosing those k components. An extension of formula (2.10) to the case of non-ID components was obtained in Coolen and Coolen-Maturi (2012) (see also Samaniego and Navarro 2016).

Example 2.2

For the coherent system considered in Example 2.1 with signature (1/3, 2/3, 0), we need

$$\bar{F}_{1:3}(t)=\Pr (X_{1:3}>t)=\bar{F}^3(t)$$

and

$$\bar{F}_{2:3}(t)=\Pr (X_{2:3}>t)=\bar{F}^3(t)+3\,F(t)\bar{F}^2(t).$$

By replacing F with \(1-\bar{F}\) we get

$$\bar{F}_{2:3}(t)=3 \bar{F}^2(t)-2\bar{F}^3(t).$$

Note that we do not need

$$\bar{F}_{3:3}(t)=3\bar{F}(t)-3 \bar{F}^2(t)+\bar{F}^3(t)$$

(since \(s_3=0\)). Hence

$$\bar{F}_T(t)=\frac{1}{3} \bar{F}^3(t) +\frac{2}{3}(3 \bar{F}^2(t)-2\bar{F}^3(t))=2\bar{F}^2(t)-\bar{F}^3(t).$$

These reliability functions are plotted in Fig. 2.3, left, when the components are IID with a standard (\(\mu =1\)) exponential distribution. The dashed line is the common reliability function of the components. The code in R to get these plots is the following:

figure b
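That listing is not reproduced above; an R sketch producing a comparable plot (illustrative, not necessarily the author's code) is:

```r
# Sketch of the left plot in Fig. 2.3: reliability of the system (red) and
# of the k-out-of-3 systems (black), IID standard exponential components.
t <- seq(0, 4, length.out = 201)
R <- exp(-t)                            # common component reliability (dashed)
plot(t, R, type = "l", lty = 2, ylim = c(0, 1), ylab = "Reliability")
lines(t, R^3)                           # series system X_{1:3}
lines(t, 3 * R^2 - 2 * R^3)             # 2-out-of-3 system X_{2:3}
lines(t, 3 * R - 3 * R^2 + R^3)         # parallel system X_{3:3}
lines(t, 2 * R^2 - R^3, col = "red")    # system T
```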

Note that

$$\bar{F}_{1:3}\le \bar{F}_T \le \bar{F}_{2:3}\le \bar{F}_{3:3}.$$

This is a general property of this system for all F (due to \(s_3=0\)). Also note that \(\bar{F}_T \le \bar{F}\) but that \(\bar{F}\) (dashed line) and \(\bar{F}_{2:3}\) (black line in the middle) are not ordered. By changing the signature we can plot other systems. \(\blacktriangleleft \)

Fig. 2.3

Reliability (left) and hazard rate functions (right) of the system in Example 2.2 (red) and the associated k-out-of-3 systems (black) when the components are IID with a standard exponential distribution. The dashed lines are the common functions for the components

Table 2.1 Signatures of all the coherent systems with 1-4 IID components

Proceeding as in the preceding example, we can compute the signature vectors of all the coherent systems with 1-4 components. They were first computed by Shaked and Suárez-Llorens (2003) and are given in Table 2.1. Of course, systems that are equivalent under permutations have the same signature (so we include just one of them in the table). However, there are also systems that are not equivalent under permutations and yet have the same signature (see, e.g., the systems numbered 20 and 21). As a consequence of (2.5), they then also have the same reliability (distribution) function when the component lifetimes are IID with a common continuous distribution F. Also note that if T is a system with signature \(\mathbf {s}=(s_1,\ldots ,s_n)\), then the signature of its dual system \(T^D\) is \(\mathbf {s}^D=(s_n,\ldots ,s_1)\) (see, e.g., the systems numbered 10 and 27). This is a general property, so we do not need to compute the signatures of the dual systems 20-28. The signatures of all the coherent systems with \(n=5\) and \(n=6\) components were obtained in Navarro and Rubio (2010).

As mentioned above, expression (2.5) is a mixture representation for T. So we can use here all the properties for mixtures. For example, the expected lifetime for the system (MTTF) is 

$$\begin{aligned} E(T)=\sum _{i=1}^n s_iE(X_{i:n}). \end{aligned}$$

A similar property holds for the respective distribution functions and, in the absolutely continuous case, the respective probability density functions (PDF) satisfy

$$\begin{aligned} f_T(t)=\sum _{i=1}^n s_i f_{i:n}(t). \end{aligned}$$
(2.11)

The PDF of the order statistics can be obtained as follows.

Proposition 2.3

If \(X_1,\ldots ,X_n\) are IID\(\sim \) \(F\) and F is absolutely continuous with PDF f, then the PDF of \(X_{i:n}\) is

$$\begin{aligned} f_{i:n}(t)= i \left( {\begin{array}{c}n\\ i\end{array}}\right) f(t) F^{i-1}(t) \bar{F}^{n-i}(t). \end{aligned}$$
(2.12)

Proof

From (2.8) we have

$$\bar{F}_{i:n}(t)=\sum _{j=0}^{i-1} \left( {\begin{array}{c}n\\ j\end{array}}\right) F^j(t) \bar{F}^{n-j}(t). $$

Differentiating this expression we obtain

$$\begin{aligned} \bar{F}^\prime _{i:n}(t)&=f(t) \sum _{j=1}^{i-1} \left( {\begin{array}{c}n\\ j\end{array}}\right) j F^{j-1}(t) \bar{F}^{n-j}(t)-f(t) \sum _{j=0}^{i-1} \left( {\begin{array}{c}n\\ j\end{array}}\right) (n-j) F^{j}(t) \bar{F}^{n-j-1}(t)\\&=nf(t) \sum _{j=1}^{i-1} \left( {\begin{array}{c}n-1\\ j-1\end{array}}\right) F^{j-1}(t) \bar{F}^{n-j}(t)-nf(t) \sum _{j=0}^{i-1} \left( {\begin{array}{c}n-1\\ j\end{array}}\right) F^{j}(t) \bar{F}^{n-j-1}(t)\\&=nf(t) \sum _{k=0}^{i-2} \left( {\begin{array}{c}n-1\\ k\end{array}}\right) F^{k}(t) \bar{F}^{n-k-1}(t)-nf(t) \sum _{j=0}^{i-1} \left( {\begin{array}{c}n-1\\ j\end{array}}\right) F^{j}(t) \bar{F}^{n-j-1}(t)\\&=-nf(t) \left( {\begin{array}{c}n-1\\ i-1\end{array}}\right) F^{i-1}(t) \bar{F}^{n-i}(t)\\&=-i \left( {\begin{array}{c}n\\ i\end{array}}\right) f(t) F^{i-1}(t) \bar{F}^{n-i}(t) \end{aligned}$$

and so (2.12) holds.    \(\square \)

Remark 2.1

The PDF in (2.12) can be rewritten as

$$\begin{aligned} f_{i:n}(t)= \frac{\Gamma (n+1)}{\Gamma (i)\Gamma (n-i+1)} f(t) F^{i-1}(t) \bar{F}^{n-i}(t) \end{aligned}$$
(2.13)

where \(\Gamma (p)=\int _0^\infty x^{p-1}e^{-x}dx\) is the gamma function. It can be proved that the function defined in (2.13) for \(i,n\in \mathbb {R}\) satisfying \(1\le i\le n\) is a proper PDF. Then we can consider the random variable \(X_{i:n}\) having this PDF for \(i,n\in \mathbb {R}\) satisfying \(1\le i\le n\) as an extension of the order statistics (k-out-of-n systems) that are obtained when i and n are integers.
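Note that for integer i and n the constant in (2.13) reduces to the one in (2.12), since \(\Gamma (n+1)/(\Gamma (i)\Gamma (n-i+1))=i \left( {\begin{array}{c}n\\ i\end{array}}\right) \). A quick numerical check (illustrative values):

```r
# Gamma(n + 1) / (Gamma(i) * Gamma(n - i + 1)) equals i * choose(n, i)
# for integer i and n (constant in (2.13) versus the one in (2.12)).
n <- 6; i <- 2
c1 <- gamma(n + 1) / (gamma(i) * gamma(n - i + 1))
c2 <- i * choose(n, i)
all.equal(c1, c2)   # TRUE: both equal 30
```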

As an immediate consequence of (2.12), the PDF of the system can be obtained as follows.

Corollary 2.1

If T is the lifetime of a coherent system with IID component lifetimes \(X_1,\ldots ,X_n\) having a common absolutely continuous distribution function F with PDF f, then the PDF \(f_T\) of T can be written as

$$\begin{aligned} f_T(t)=\sum _{i=1}^n s_i f_{i:n}(t)=f(t) \sum _{i=1}^n i s_i \left( {\begin{array}{c}n\\ i\end{array}}\right) F^{i-1}(t) \bar{F}^{n-i}(t) \end{aligned}$$

for all t.

All these expressions are convex combinations, and so the values for the system lie between the minimum and maximum values for the k-out-of-n systems. In particular, as \(X_{1:n}\le \dots \le X_{n:n}\), their respective reliability functions are ordered, that is,

$$\bar{F}_{1:n}(t)\le \dots \le \bar{F}_{n:n}(t)$$

for all t. Even more, as \(X_{1:n}\le T\le X_{n:n}\), then

$$\bar{F}_{1:n}(t)\le \bar{F}_T(t) \le \bar{F}_{n:n}(t)$$

for all t. In the IID\(\sim \) \(F\) case we can be more precise and write

$$\bar{F}_{i:n}(t)\le \bar{F}_T(t) \le \bar{F}_{j:n}(t),$$

where i is the smallest index with \(s_i>0\) and j is the greatest index with \(s_j>0\). In the preceding example the signature is (1/3, 2/3, 0). Hence \(i=1\) and \(j=2\), and so the ordering of the reliability functions in Fig. 2.3, left, is a general property valid for any continuous distribution F. In the next chapter we will use (2.5) and the ordering properties for mixtures given in Shaked and Shanthikumar (2007) to compare two systems by comparing their signatures.

However, the expressions for the mean residual life and the hazard rate functions are different. So it is difficult to determine the behavior of these functions in mixtures. For example, the hazard rate of the system can be written from (2.5) and (2.11) as

$$\begin{aligned} h_T(t)=\frac{f_T(t)}{\bar{F}_T(t)}=\frac{\sum _{i=1}^n s_i f_{i:n}(t)}{\sum _{i=1}^n s_i \bar{F}_{i:n}(t)}=\sum _{i=1}^n w_i(t) h_{i:n}(t) \end{aligned}$$
(2.14)

where \(h_{i:n}=f_{i:n}/\bar{F}_{i:n}\) is the hazard rate of \(X_{i:n}\) and

$$w_i(t)=\frac{s_i\bar{F}_{i:n}(t)}{\sum _{j=1}^n s_j \bar{F}_{j:n}(t)}.$$

Note that \(0\le w_i(t)\le 1\) and \(\sum _{i=1}^n w_i(t)=1\) for all t. Hence (2.14) is also a convex combination but, in this case, the coefficients \(w_1(t),\ldots ,w_n(t)\) depend on t.

In the IID case, the hazard rate functions of the k-out-of-n systems are ordered, that is, \(h_{1:n}\ge \dots \ge h_{n:n}\). Hence, in this case, we also have

$$h_{i:n}(t)\ge h_T(t) \ge h_{j:n}(t)$$

for all F and the indices defined above. For example, the hazard rate functions for the system in Example 2.2 when the component lifetimes are IID with a standard exponential are plotted in Fig. 2.3, right. The code in R to get this plot (by using also the code written above) is:

figure c
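That listing is not reproduced above; an R sketch of the hazard rate plot (illustrative; the PDFs are obtained by differentiating the corresponding reliability functions) is:

```r
# Sketch of the right plot in Fig. 2.3: hazard rates h = f / \bar F for the
# system (red) and the k-out-of-3 systems, IID standard exponential components.
t <- seq(0.01, 4, length.out = 200)
R <- exp(-t); f <- exp(-t)                        # component reliability and PDF
hsys <- (4 * R - 3 * R^2) * f / (2 * R^2 - R^3)   # f_T = (4R - 3R^2) f
h23  <- (6 * R - 6 * R^2) * f / (3 * R^2 - 2 * R^3)
plot(t, rep(3, length(t)), type = "l", ylim = c(0, 3.2),
     ylab = "Hazard rate")                        # series system: constant 3
lines(t, h23)
lines(t, hsys, col = "red")
```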

The following example shows that the continuity assumption in Samaniego’s representation cannot be dropped.

Example 2.3

Let us consider the series system with lifetime \(T=X_{1:2}=\) \(\min (X_1,X_2)\), where \(X_1,X_2\) are IID with a common Bernoulli distribution of parameter 1/2, that is, \(\Pr (X_i=1)=\Pr (X_i=0)=1/2\) for \(i=1,2\). Then

$$\Pr (T=X_{1:2})=1$$

and

$$\Pr (T=X_{2:2})=\Pr (X_1=X_2)=\frac{1}{2}.$$

So (2.5) does not hold with these coefficients. Also note that \(1+1/2>1\). However, the signature computed from the structure by using (2.6) (or (2.7)) is \(\mathbf {s}=(1,0)\). Note that (2.5) holds with these coefficients. \(\blacktriangleleft \)

Therefore, in the general case, that is, when \((X_1,\ldots ,X_n)\) is an arbitrary random vector with joint distribution function

$$\mathbf {F}(x_1,\ldots ,x_n)=\Pr (X_1\le x_1,\ldots ,X_n\le x_n)$$

and joint reliability function

$${\bar{\mathbf {F}}}(x_1,\ldots ,x_n)=\Pr (X_1>x_1,\ldots ,X_n>x_n),$$

we can define two signatures as follows.

Definition 2.1

The structural signature of a coherent system \(\psi \) is \(\mathbf {s}=(s_1,\ldots ,s_n)\) where \(s_i\) is given by (2.6) (or by (2.7)).

Definition 2.2

The probabilistic signature of a coherent system with lifetime T and component lifetimes \((X_1,\ldots ,X_n)\) is \(\mathbf {p}=(p_1,\ldots ,p_n)\) where \(p_i=\Pr (T=X_{i:n})\).

Clearly, \(\mathbf {s}\) only depends on the structure \(\psi \) of the system, while \(\mathbf {p}\) may also depend on the joint distribution function \(\mathbf {F}\) of the component lifetimes. In the preceding example \(\mathbf {p}=(1,1/2)\) and \(\mathbf {s}=(1,0)\), and Samaniego’s representation (2.5) holds for \(\mathbf {s}\). We will see that this is true in the general IID case, but that such representations do not hold in general for non-ID components.

As Samaniego’s representation does not necessarily hold in the general case we need another way to compute the system reliability. It is provided in the following theorem and it is called the minimal path set representation.

Theorem 2.2

(Minimal path set representation)  If T is the lifetime of a coherent (or semi-coherent) system with minimal path sets \(P_1,\ldots ,P_r\) and component lifetimes \((X_1,\ldots ,X_n)\), then

$$\begin{aligned} \bar{F}_T(t)=\sum _{i=1}^r \bar{F}_{P_i}(t)-\sum _{i=1}^{r-1}\sum _{j=i+1}^r \bar{F}_{P_i\cup P_j}(t)+\cdots +(-1)^{r+1} \bar{F}_{P_1\cup \ldots \cup P_r}(t) \end{aligned}$$
(2.15)

for all t, where \(\bar{F}_P(t)=\Pr (X_P>t)\) and \(X_P=\min _{j\in P}X_j\) for \(P \subseteq [n]\).

Proof

First note that from (2.1), the system lifetime can be written as \(T=\max _{1\le i\le r} X_{P_i}\). Then

$$\begin{aligned} \bar{F}_T(t)=\Pr (T>t)=\Pr \left( \max _{1\le i\le r} X_{P_i}>t\right) =\Pr \left( \cup _{i=1}^r \{X_{P_i}>t\}\right) . \end{aligned}$$

Hence, by using the inclusion-exclusion formula for the union of events, we obtain (2.15) taking into account that

$$\Pr \left( \{X_{P_i}>t\} \cap \{X_{P_j}>t\}\right) =\Pr \left( X_{P_i\cup P_j}>t\right) .$$

\(\square \)

Note that (2.15) proves that the reliability function of the system is a linear combination of the reliability functions of the series systems obtained from unions of its minimal path sets. However, some coefficients can be negative and so it is not a mixture representation (as (2.5) was). Note that these coefficients sum up to one (take \(t\rightarrow -\infty \)). These representations are called generalized mixtures and they contain the usual mixtures (all the coefficients are non-negative) and the negative mixtures (some coefficients are negative). They have some common properties with the usual mixtures. For example, similar expressions hold for the respective distribution and probability density functions.

For the system with lifetime \(T=\min (X_1,\max (X_2,X_3))\) and minimal path sets \(P_1=\{1,2\}\) and \(P_2=\{1,3\}\), we have

$$\begin{aligned} \bar{F}_T(t)&=\Pr \left( \min (X_1,\max (X_2,X_3))>t \right) \nonumber \\&=\Pr \left( \{\min (X_1,X_2)>t\}\cup \{\min (X_1,X_3)>t\} \right) \nonumber \\&=\Pr \left( \min (X_1,X_2)>t\right) +\Pr \left( \min (X_1,X_3)>t\right) -\Pr \left( \min (X_1,X_2,X_3)>t\right) \nonumber \\&=\bar{F}_{\{1,2\}}(t)+\bar{F}_{\{1,3\}}(t)-\bar{F}_{\{1,2,3\}}(t). \end{aligned}$$
(2.16)

The reliability functions of series systems can be computed from the joint reliability function \({\bar{\mathbf {F}}}\) of the components. For example, in this system, we have

$$\bar{F}_{\{1,2\}}(t)=\Pr (\min (X_1,X_2)>t)=\Pr (X_1>t,X_2>t)={\bar{\mathbf {F}}}(t,t,-\infty ).$$

Analogously, \(\bar{F}_{\{1,3\}}(t)={\bar{\mathbf {F}}}(t,-\infty ,t)\) and \(\bar{F}_{\{1,2,3\}}(t)={\bar{\mathbf {F}}}(t,t,t)\). Therefore,

$$\bar{F}_T(t)={\bar{\mathbf {F}}}(t,t,-\infty )+{\bar{\mathbf {F}}}(t,-\infty ,t)- {\bar{\mathbf {F}}}(t,t,t).$$
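A small simulation (an illustrative check added here, assuming IID standard exponential components) confirms this representation:

```r
# Monte Carlo check of \bar F_T = \bar F_{12} + \bar F_{13} - \bar F_{123}
# for T = min(X1, max(X2, X3)) with IID standard exponential components.
set.seed(1)
N <- 2e5
X <- matrix(rexp(3 * N), ncol = 3)
Tsim <- pmin(X[, 1], pmax(X[, 2], X[, 3]))
t <- 0.5
empirical <- mean(Tsim > t)
exact <- exp(-2 * t) + exp(-2 * t) - exp(-3 * t)  # the three series terms
c(empirical, exact)                               # close to each other
```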

In the general case, for the series system \(X_{\{1,\ldots ,k\}}\) we have

$$\bar{F}_{\{1,\ldots ,k\}}(t) =\Pr (X_1>t,\ldots , X_k>t)={\bar{\mathbf {F}}}(t,\ldots ,t,-\infty ,\ldots ,-\infty )$$

where t is repeated k times, for \(k=1,\ldots ,n\). Similarly, for an arbitrary series system \(X_P\) with \(P\subseteq [n]\) we have

$$\bar{F}_{P}(t)=\Pr \left( \min _{j\in P}X_j>t\right) ={\bar{\mathbf {F}}}(t_1^P,\ldots ,t_n^P),$$

where \(t_i^P:=t\) if \(i\in P\) and \(t_i^P:=-\infty \) if \(i\notin P\).

If the component lifetimes are stochastically independent (IND), that is,

$${\bar{\mathbf {F}}}(x_1,\ldots ,x_n)=\Pr (X_1>x_1)\dots \Pr (X_n>x_n),$$

then these expressions can be reduced to

$$\bar{F}_{P}(t)=\prod _{j\in P}\Pr (X_j>t)=\prod _{j\in P}\bar{F}_j(t)$$

and if they are IID then \(\bar{F}_{P}(t)=\bar{F}^{|P|}(t)\) where |P| is the cardinality of P.

For the above system, we get

$$\bar{F}_T(t)=\bar{F}_1(t)\bar{F}_2(t)+\bar{F}_1(t)\bar{F}_3(t)- \bar{F}_1(t)\bar{F}_2(t)\bar{F}_3(t)$$

in the IND case and

$$\bar{F}_T(t)=2\bar{F}^2(t)-\bar{F}^3(t)$$

in the IID case. Of course, this last expression coincides with the one obtained from Samaniego’s representation. It does not coincide when the components are independent but not identically distributed (INID). For example, in Fig. 2.4 we plot the system reliability and hazard rate functions when the components are independent and have exponential distributions with means 1, 1/2, 1/3 (black) and a common mean 1/2 (red). In this example, the system with heterogeneous components is more reliable than the one with homogeneous components.
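A minimal sketch of this computation (in Python rather than the original R; assigning the means 1, 1/2, 1/3 to components 1, 2, 3 in that order is an assumption):

```python
import math

# Reliability of T = min(X1, max(X2, X3)) with independent exponential
# components: Fbar_T = Fbar1*Fbar2 + Fbar1*Fbar3 - Fbar1*Fbar2*Fbar3.
def rel_T(t, rates):
    F1, F2, F3 = (math.exp(-r * t) for r in rates)
    return F1 * F2 + F1 * F3 - F1 * F2 * F3

grid = [t / 10 for t in range(31)]
het = [rel_T(t, (1.0, 2.0, 3.0)) for t in grid]   # means 1, 1/2, 1/3
hom = [rel_T(t, (2.0, 2.0, 2.0)) for t in grid]   # common mean 1/2

# the heterogeneous system dominates on the whole grid
assert all(a >= b for a, b in zip(het, hom))
```

Indeed, the difference between the two reliability functions is \(e^{-3t}-e^{-4t}\ge 0\) for all \(t\ge 0\), which explains the ordering seen in the plot.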

Fig. 2.4

Reliability (left) and hazard rate functions (right) of the system in Example 2.2 when the components are independent and have exponential distributions of means 1, 1/2, 1/3 (black) and a common mean 1/2 (red). The dashed lines are the functions for the components

As a consequence we obtain the following representation for the IND case.

Corollary 2.2

(Minimal path set representation, IND case)  If T is the lifetime of a coherent (or semi-coherent) system with minimal path sets \(P_1,\ldots ,P_r\) and independent component lifetimes \(X_1,\ldots ,X_n\), then

$$\begin{aligned} \bar{F}_T(t)=\sum _{i=1}^r \prod _{k\in P_i}\bar{F}_k(t)-\sum _{i=1}^{r-1}\sum _{j=i+1}^r \prod _{k\in P_i\cup P_j}\bar{F}_k(t)+\cdots +(-1)^{r+1} \prod _{k\in P_1\cup \dots \cup P_r}\bar{F}_k(t) \end{aligned}$$
(2.17)

for all t, where \(\bar{F}_k(t)=\Pr (X_k>t)\) for \(k=1,\ldots ,n\).
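Representation (2.17) can be evaluated directly by inclusion-exclusion over the minimal path sets. A sketch (the component reliabilities are illustrative numbers):

```python
from itertools import combinations
from math import prod

def system_rel(path_sets, comp_rel):
    # inclusion-exclusion over minimal path sets, as in (2.17);
    # comp_rel[i] is the reliability Fbar_i(t) of component i at a fixed t
    total = 0.0
    for k in range(1, len(path_sets) + 1):
        for group in combinations(path_sets, k):
            union = set().union(*group)
            total += (-1) ** (k + 1) * prod(comp_rel[i] for i in union)
    return total

paths = [{1, 2}, {1, 3}]        # minimal path sets of min(X1, max(X2, X3))
p = {1: 0.9, 2: 0.8, 3: 0.7}    # illustrative component reliabilities
print(system_rel(paths, p))     # 0.72 + 0.63 - 0.504 = 0.846 (up to rounding)
```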

A similar expression can be obtained from the minimal cut sets. It can be stated as follows; its proof is analogous.

Theorem 2.3

(Minimal cut set representation)  If T is the lifetime of a coherent (or semi-coherent) system with minimal cut sets \(C_1,\ldots ,C_s\) and component lifetimes \((X_1,\ldots ,X_n)\), then

$$\begin{aligned} {F}_T(t)=\sum _{i=1}^s F^{C_i}(t)-\sum _{i=1}^{s-1}\sum _{j=i+1}^s {F}^{C_i\cup C_j}(t)+\cdots +(-1)^{s+1} {F}^{C_1\cup \ldots \cup C_s}(t) \end{aligned}$$
(2.18)

for all t, where \({F}^P(t)=\Pr (X^P\le t)\) and \(X^P=\max _{j\in P}X_j\) for \(P\subseteq [n]\).

Note that we again obtain generalized mixtures. Hence the same expression also holds for the respective reliability functions. We use the distribution functions because, in the IND case, the expression reduces to the following corollary.

Corollary 2.3

(Minimal cut set representation, IND case) If T is the lifetime of a coherent (or semi-coherent) system with minimal cut sets \(C_1,\ldots ,C_s\) and independent component lifetimes \(X_1,\ldots ,X_n\), then

$$\begin{aligned} {F}_T(t)=\sum _{i=1}^s \prod _{k\in C_i} F_k(t)-\sum _{i=1}^{s-1}\sum _{j=i+1}^s \prod _{k\in C_i\cup C_j} F_k(t)+\cdots +(-1)^{s+1} \prod _{k\in C_1\cup \dots \cup C_s} F_k(t) \end{aligned}$$
(2.19)

for all t, where \(F_k(t)=\Pr (X_k\le t)\) for \(k=1,\ldots ,n\).

The general expressions obtained from the minimal path or cut sets can be reduced to simpler expressions when the component lifetimes are exchangeable. The formal definition is the following.

Definition 2.3

The random vector \((X_1,\ldots ,X_n)\) is exchangeable (EXC) if

$$(X_1,\ldots ,X_n)=_{ST}(X_{\sigma (1)},\ldots ,X_{\sigma (n)})$$

for all the permutations \(\sigma :[n]\rightarrow [n]\), where \(=_{ST}\) denotes equality in distribution (law).

Clearly, \((X_1,\ldots ,X_n)\) is EXC iff its joint distribution (or reliability) function \(\mathbf {F}\) is permutation symmetric, that is,

$$\mathbf {F}(x_1,\ldots ,x_n)=\mathbf {F}(x_{\sigma (1)},\ldots ,x_{\sigma (n)})$$

for all the permutations \(\sigma :[n]\rightarrow [n]\) and all \(x_1,\ldots ,x_n\in \mathbb {R}\).

If \((X_1,\ldots ,X_n)\) is EXC, then all the marginal distributions of dimension k are equal and, in particular, the variables are ID. Hence, the distributions of all the series (or parallel) systems with k components are equal and we have

$$\bar{F}_P(t)=\bar{F}_{\{1,\ldots ,k\}}(t)={\bar{\mathbf {F}}}(t,\ldots ,t,-\infty ,\ldots ,-\infty ),$$

where t is repeated \(k=|P|\) times. As a consequence, we obtain the following representation given in Navarro et al. (2007).

Theorem 2.4

(Minimal signature representation) If T is the lifetime of a coherent (or semi-coherent) system with EXC component lifetimes \((X_1,\ldots ,X_n)\), then

$$\begin{aligned} \bar{F}_T(t)=\sum _{i=1}^n a_i \bar{F}_{1:i}(t) \end{aligned}$$
(2.20)

for all t, where \(a_1,\ldots ,a_n\) are some integer coefficients such that \(a_1+\cdots +a_n=1\) and \(\bar{F}_{1:i}(t)=\Pr (\min (X_1,\ldots ,X_i)>t)\) for \(i=1,\ldots ,n\).

Proof

From (2.15), we have that \(\bar{F}_T\) is a linear combination of reliability functions \(\bar{F}_P\) of series systems. But, if \((X_1,\ldots ,X_n)\) is EXC, then \(\bar{F}_P\) can be replaced by \(\bar{F}_{1:i}\) with \(i=|P|\). Hence (2.20) holds for some coefficients \(a_1,\ldots ,a_n\in \mathbb {Z}\). Moreover these coefficients sum up to one (take \(t\rightarrow -\infty \)).    \(\square \)

The vector \(\mathbf {a}=(a_1,\ldots ,a_n)\) with these coefficients was called the minimal signature  of the system in Navarro et al. (2007). A similar representation can be obtained by using the parallel systems as follows.

Theorem 2.5

(Maximal signature representation) If T is the lifetime of a coherent (or semi-coherent) system with EXC component lifetimes \((X_1,\ldots ,X_n)\), then

$$\begin{aligned} F_T(t)=\sum _{i=1}^n b_i F_{i:i}(t) \end{aligned}$$
(2.21)

for all t, where \(b_1,\ldots ,b_n\) are some integer coefficients such that \(b_1+\cdots +b_n=1\) and \(F_{i:i}(t)=\Pr (\max (X_1,\ldots ,X_i)\le t)\) for \(i=1,\ldots ,n\).

The vector \(\mathbf {b}=(b_1,\ldots ,b_n)\) with these coefficients was called the maximal signature of the system in Navarro et al. (2007). Note that both representations (2.20) and (2.21) are generalized mixtures and that they hold for the general EXC case (including discrete or singular distributions). In both we can use distribution or reliability functions. However, it is better to use reliability functions with series systems and distribution functions with parallel systems. In the absolutely continuous case, we can also use probability density functions. We will see later that they do not hold without the EXC assumption.

Let us see an example. For the coherent system with lifetime \(T=\min (X_1,\max (X_2,X_3))\), we get

$$\bar{F}_T(t)=\bar{F}_{\{1,2\}}(t)+\bar{F}_{\{1,3\}}(t)-\bar{F}_{\{1,2,3\}}(t)$$

that, in the EXC case, can be reduced to

$$\bar{F}_T(t)=2\bar{F}_{1:2}(t)-\bar{F}_{1:3}(t).$$

Hence its minimal signature is \(\mathbf {a}=(0,2,-1)\). To compute its maximal signature we first write its distribution function as

$$F_T(t)=F^{\{1\}}(t)+ F^{\{2,3\}}(t)-F^{\{1,2,3\}}(t)$$

which, in the EXC case, gives

$$F_T(t)=F_{1:1}(t)+ F_{2:2}(t)-F_{3:3}(t)$$

for all t. So its maximal signature is \(\mathbf {b}=(1,1,-1)\).

The minimal and maximal signatures of all the coherent systems with 1-4 components are given in Table 2.2. It can be proved that the minimal (maximal) signature of a system coincides with the maximal (minimal) signature of its dual system (see, e.g., the systems in rows 10 and 27). This property is due to the fact that the minimal cut (path) sets of a system are the minimal path (cut) sets of its dual system.

Table 2.2 Minimal \(\mathbf {a}\) and maximal \(\mathbf {b}\) signatures of all the coherent systems with 1-4 exchangeable components

The EXC case includes the general IID case and so we can obtain the following representations.

Theorem 2.6

(Minimal and maximal signature representations, IID case) If T is the lifetime of a coherent (or semi-coherent) system with minimal and maximal signatures \((a_1,\ldots ,a_n)\) and \((b_1,\ldots ,b_n)\) and with component lifetimes that are IID with common distribution function \(F\), then

$$\begin{aligned} \bar{F}_T(t)=\sum _{i=1}^n a_i \bar{F}^{i}(t) \end{aligned}$$
(2.22)

and

$$\begin{aligned} {F}_T(t)=\sum _{i=1}^n b_i {F}^{i}(t) \end{aligned}$$
(2.23)

for all t.

The proof is immediate from (2.20) and (2.21) since, in the IID case, we have \(\bar{F}_{1:i}(t)=\bar{F}^{i}(t)\) and \(F_{i:i}(t)={F}^{i}(t)\) for all t. So it is better to use reliability functions with the minimal signature and distribution functions with the maximal signature. Note that we do not need the assumption “F is continuous”. In the absolutely continuous IID case, the system PDF is

$$\begin{aligned} f_T(t)=f(t)\sum _{i=1}^n i a_i \bar{F}^{i-1}(t)=f(t)\sum _{i=1}^n i b_i F^{i-1}(t) \end{aligned}$$

where \(f=F'=-\bar{F}'\) is the common PDF of the components.
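A quick numerical check of this density formula, assuming IID standard exponential components and using the minimal signature \((0,2,-1)\) of \(T=\min (X_1,\max (X_2,X_3))\):

```python
import math

a = (0, 2, -1)   # minimal signature of T = min(X1, max(X2, X3))

def rel_T(t):    # Fbar_T(t) = sum_i a_i Fbar^i(t), IID standard exponential
    Fbar = math.exp(-t)
    return sum(ai * Fbar ** i for i, ai in enumerate(a, start=1))

def pdf_T(t):    # f_T(t) = f(t) * sum_i i a_i Fbar^{i-1}(t)
    Fbar = math.exp(-t)
    return Fbar * sum(i * ai * Fbar ** (i - 1) for i, ai in enumerate(a, start=1))

# pdf_T matches the numerical derivative of -rel_T
t, h = 0.7, 1e-6
assert abs(pdf_T(t) - (rel_T(t - h) - rel_T(t + h)) / (2 * h)) < 1e-8
```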

The minimal (or maximal) signature representation can be used to extend Samaniego’s representation to the general EXC case (which includes the IID case with a general distribution F). It is stated in the following theorem. This result was obtained in Navarro et al. (2008) by using a different proof. A similar result was obtained previously in Navarro and Rychlik (2007) for absolutely continuous EXC distributions.

Theorem 2.7

(Signature representation, EXC case) If T is the lifetime of a coherent system with structural signature \((s_1,\ldots ,s_n)\) and with EXC component lifetimes, then

$$\begin{aligned} \bar{F}_T(t)=\sum _{i=1}^n s_i \bar{F}_{i:n}(t) \end{aligned}$$
(2.24)

for all t.

Proof

From (2.20) we have

$$\bar{F}_T(t)=\sum _{i=1}^n a_i \bar{F}_{1:i}(t)$$

where \(\mathbf {a}=(a_1,\ldots ,a_n)\) is the minimal signature of T. This representation can be written as

$$\bar{F}_T(t)=\mathbf {a}(\bar{F}_{1:1}(t),\ldots ,\bar{F}_{1:n}(t))',$$

where \(\mathbf {v}'\) represents the transpose of \(\mathbf {v}\).

We can apply this representation to the k-out-of-n systems as well. Thus, for \(X_{1:n}\), which has only one minimal path set \(P_1=\{1,\ldots ,n\}\) (the unique set with cardinality n), we obtain the trivial representation

$$\bar{F}_{1:n}(t)= 0\bar{F}_{1:1}(t)+\cdots +0\bar{F}_{1:n-1}(t)+1\bar{F}_{1:n}(t),$$

that is, its minimal signature is \((0,\ldots ,0,1)\).

Analogously, for \(X_{2:n}\), its minimal path sets are all the sets with cardinality \(n-1\) (it has \(\left( {\begin{array}{c}n\\ n-1\end{array}}\right) =n\) minimal path sets). Then its minimal path set representation is

$$\bar{F}_{2:n}(t)= 0\bar{F}_{1:1}(t)+\cdots +0\bar{F}_{1:n-2}(t)+n\bar{F}_{1:n-1}(t)-(n-1)\bar{F}_{1:n}(t),$$

that is, its minimal signature is \((0,\ldots ,0,n,-n+1)\) (the last coefficient is \(-n+1\) because their sum is one).

In general, for \(X_{i:n}\), we obtain

$$\bar{F}_{i:n}(t)= 0\bar{F}_{1:1}(t)+\cdots +0\bar{F}_{1:n-i}(t)+\left( {\begin{array}{c}n\\ i\end{array}}\right) \bar{F}_{1:n-i+1}(t)+\cdots +a_{i,n} \bar{F}_{1:n}(t)$$

for \(i=1,\ldots ,n\). The coefficients in that representation are well known in the order statistics literature (see David and Nagaraja 2003, p. 46 or (2.25) below). However, we do not need them. We just need the fact that

$$(\bar{F}_{1:n}(t),\ldots ,\bar{F}_{n:n}(t))' =A_n(\bar{F}_{1:1}(t),\ldots ,\bar{F}_{1:n}(t))'$$

for all t, where \(A_n=(a_{i,j})\) is a triangular non-singular matrix of real (integer) numbers. Hence

$$(\bar{F}_{1:1}(t),\ldots ,\bar{F}_{1:n}(t))' =A_n^{-1}(\bar{F}_{1:n}(t),\ldots ,\bar{F}_{n:n}(t))'$$

for all t, where \(A_n^{-1}\) is the inverse matrix of \(A_n\).

Therefore, by using the minimal signature representation obtained in (2.20), we get

$$\begin{aligned} \bar{F}_T(t)&=\mathbf {a}(\bar{F}_{1:1}(t),\ldots ,\bar{F}_{1:n}(t))'\\&=\mathbf {a}A_n^{-1}(\bar{F}_{1:n}(t),\ldots ,\bar{F}_{n:n}(t))'\\&=\mathbf {c}(\bar{F}_{1:n}(t),\ldots ,\bar{F}_{n:n}(t))'\\&=\sum _{i=1}^n c_i \bar{F}_{i:n}(t) \end{aligned}$$

for all t, where \(\mathbf {c}=(c_1,\ldots ,c_n):=\mathbf {a}A_n^{-1}\) are some coefficients that do not depend on the joint distribution of the component lifetimes (they only depend on \(\mathbf {a}\) and \(A_n\)).

In the IID continuous case these coefficients coincide with the structural signature coefficients (take e.g. \(F(t)=t\) for \(0\le t\le 1\)), that is, \(\mathbf {c}=\mathbf {s}\) and so (2.24) holds.    \(\square \)

Remark 2.2

The preceding theorem proves that \(\bar{F}_T\) belongs to the vector space generated by the reliability functions of the k-out-of-n systems, which coincides with the one generated by the series system reliability functions (in the EXC case). Actually, in many cases, these reliability functions are bases of this space and so the signatures can be seen as the coordinates of \(\bar{F}_T\) in these bases. Thus the structural signature can be obtained from the minimal signature as

$$\mathbf {s}=\mathbf {a}A_n^{-1}$$

and vice versa

$$\mathbf {a}=\mathbf {s}A_n.$$

Moreover, it can be proved that, in the absolutely continuous EXC case, \(s_i=\Pr (T=X_{i:n})\) (i.e. the structural and probability signatures coincide), see Navarro and Rychlik (2007).

Note that the rows of \(A_n\) are the minimal signatures of the k-out-of-n systems. As mentioned in the proof, the coefficients in \(A_n\) are well known in the order statistics literature. Actually, from David and Nagaraja (2003), p. 46, we have

$$\begin{aligned} \bar{F}_{i:n}(t)=\sum _{j=n-i+1}^n (-1)^{j-n+i-1} \left( {\begin{array}{c}n\\ j\end{array}}\right) \left( {\begin{array}{c}j-1\\ n-i\end{array}}\right) \bar{F}_{1:j}(t). \end{aligned}$$
(2.25)
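Formula (2.25) makes it easy to generate \(A_n\) and to convert between signatures. A sketch in exact rational arithmetic (the test values are those of the system \(T=\min (X_1,\max (X_2,X_3))\)):

```python
from fractions import Fraction
from math import comb

def A_matrix(n):
    # rows are the minimal signatures of X_{1:n}, ..., X_{n:n}, built from (2.25)
    A = [[Fraction(0)] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(n - i + 1, n + 1):
            A[i - 1][j - 1] = Fraction((-1) ** (j - n + i - 1)
                                       * comb(n, j) * comb(j - 1, n - i))
    return A

def row_times(v, A):
    # the product v A for a row vector v
    return [sum(vi * A[i][j] for i, vi in enumerate(v)) for j in range(len(A))]

# a = s A_3 recovers the minimal signature (0, 2, -1) from s = (1/3, 2/3, 0)
s = [Fraction(1, 3), Fraction(2, 3), Fraction(0)]
print(row_times(s, A_matrix(3)))   # [Fraction(0, 1), Fraction(2, 1), Fraction(-1, 1)]
```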

Of course, the preceding theorem can also be applied to the case of IID component lifetimes with common distribution function \(F\). Here we do not need the continuity assumption on F, but we have to use the structural signature (not the probabilistic signature).

Remark 2.3

A similar proof can be obtained by using the maximal signature representations of the k-out-of-n systems. So we can also write

$$\mathbf {s}=\mathbf {b}B_n^{-1}$$

and

$$\mathbf {b}=\mathbf {s}B_n$$

for a triangular non-singular matrix \(B_n=(b_{i,j})\). The coefficients in \(B_n\) can also be obtained from David and Nagaraja (2003), p. 46, as

$$\begin{aligned} \bar{F}_{i:n}(t)=\sum _{j=i}^n b_{i,j} \bar{F}_{j:j}(t)=\sum _{j=i}^n (-1)^{j-i} \left( {\begin{array}{c}n\\ j\end{array}}\right) \left( {\begin{array}{c}j-1\\ i-1\end{array}}\right) \bar{F}_{j:j}(t). \end{aligned}$$
(2.26)

The rows of \(B_n\) are the maximal signatures of the order statistics. Note that, as a consequence, \(\mathbf {a}\) can be computed from \(\mathbf {b}\) and vice versa through

$$\mathbf {b}=\mathbf {s}B_n=\mathbf {a}A_n^{-1}B_n=\mathbf {a}C_n $$

and

$$\mathbf {a}=\mathbf {s}A_n=\mathbf {b}B_n^{-1}A_n=\mathbf {b}C_n^{-1},$$

where \(C_n=A_n^{-1}B_n\). If one prefers to use column vectors, just take the transposed matrices.

Example 2.4

Let us obtain \(A_3\) without using (2.25). As mentioned in the proof, the (trivial) minimal signature representation for \(X_{1:3}\) is

$$\bar{F}_{1:3}(t)= 0\bar{F}_{1:1}(t)+0\bar{F}_{1:2}(t)+1\bar{F}_{1:3}(t).$$

Analogously, for \(X_{2:3}\), we have

$$\bar{F}_{2:3}(t)= 0\bar{F}_{1:1}(t)+3\bar{F}_{1:2}(t)-2\bar{F}_{1:3}(t).$$

Finally, the minimal path set representation for \(X_{3:3}\) in the EXC case is

$$\bar{F}_{3:3}(t)= 3\bar{F}_{1:1}(t)-3\bar{F}_{1:2}(t)+1\bar{F}_{1:3}(t).$$

Hence

$$\left( \begin{array} [c]{c}\bar{F}_{1:3}(t)\\ \bar{F}_{2:3}(t)\\ \bar{F}_{3:3}(t) \end{array}\right) = \left( \begin{array} [c]{rrr}0&{}0&{}1\\ 0&{}3&{}-2\\ 3&{}-3&{}1 \end{array}\right) \left( \begin{array} [c]{c}\bar{F}_{1:1}(t)\\ \bar{F}_{1:2}(t)\\ \bar{F}_{1:3}(t) \end{array}\right) ,$$

that is,

$$A_3=\left( \begin{array} [c]{rrr}0&{}0&{}1\\ 0&{}3&{}-2\\ 3&{}-3&{}1 \end{array}\right) .$$

Its inverse matrix is

$$A_3^{-1}=\left( \begin{array} [c]{rrr}1/3&{}1/3&{}1/3\\ 2/3&{}1/3&{}0\\ 1&{}0&{}0 \end{array}\right) .$$

These matrices can be used to obtain one signature from the other. For example, for the system with lifetime \(T=\min (X_1,\max (X_2,X_3))\), \(\mathbf {s}\) can be computed from \(\mathbf {a}\) as

$$(s_1,s_2,s_3)=(a_1,a_2,a_3)A_3^{-1}=(0,2,-1)\left( \begin{array} [c]{ccc}1/3&{}1/3&{}1/3\\ 2/3&{}1/3&{}0\\ 1&{}0&{}0 \end{array}\right) =(1/3,2/3,0).$$

Conversely, \(\mathbf {a}\) can be computed from \(\mathbf {s}\) through

$$(a_1,a_2,a_3)=(s_1,s_2,s_3)A_3=(1/3,2/3,0)\left( \begin{array} [c]{rrr}0&{}0&{}1\\ 0&{}3&{}-2\\ 3&{}-3&{}1 \end{array}\right) =(0,2,-1).$$

Analogously, we obtain

$$B_3= \left( \begin{array} [c]{rrr}3&{}-3&{}1\\ 0&{}3&{}-2\\ 0&{}0&{}1 \end{array}\right) . $$

Note that the rows of \(B_n\) are the rows of \(A_n\) in the reverse order (since the dual system of \(X_{i:n}\) is \(X_{n-i+1:n}\)). Hence the maximal signature of the system can be obtained as

$$(b_1,b_2,b_3)=(s_1,s_2,s_3)B_3= (1/3, 2/3, 0)\left( \begin{array} [c]{rrr}3&{}-3&{}1\\ 0&{}3&{}-2\\ 0&{}0&{}1 \end{array}\right) =(1,1,-1). $$

It can be obtained directly from the minimal signature as

$$(b_1,b_2,b_3)=(a_1,a_2,a_3)A_3^{-1}B_3= (0,2,-1) \left( \begin{array} [c]{rrr}1&{}0&{}0\\ 2&{}-1&{}0\\ 3&{}-3&{}1 \end{array}\right) =(1,1,-1) .$$

Note that the rows of \(C_3=A_3^{-1}B_3\) are the maximal signatures of the series systems \(X_{1:1},X_{1:2},X_{1:3}\). Analogously,

$$(a_1,a_2,a_3)=(b_1,b_2,b_3)B_3^{-1}A_3= (1,1,-1) \left( \begin{array} [c]{rrr}1&{}0&{}0\\ 2&{}-1&{}0\\ 3&{}-3&{}1 \end{array}\right) = (0,2,-1)$$

that is, \(C_3^{-1}=C_3\). This is a general property, that is, \(C_n^{-1}=C_n\) for all n. It follows from the fact that \(B_n=J_nA_n\), where \(J_n\) is the permutation matrix that reverses the order of the rows: then \(C_n=A_n^{-1}J_nA_n\) and so \(C_n^2=A_n^{-1}J_n^2A_n=I_n\). \(\blacktriangleleft \)
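The property \(C_n^{-1}=C_n\) can also be checked for small n with exact rational arithmetic; the sketch below builds \(A_n\) from (2.25) and \(B_n\) by reversing its rows:

```python
from fractions import Fraction
from math import comb

def A_matrix(n):
    # rows: minimal signatures of X_{1:n}, ..., X_{n:n}, from (2.25)
    return [[Fraction((-1) ** (j - n + i - 1) * comb(n, j) * comb(j - 1, n - i))
             if j >= n - i + 1 else Fraction(0)
             for j in range(1, n + 1)] for i in range(1, n + 1)]

def inverse(M):
    # Gauss-Jordan elimination over the rationals
    n = len(M)
    aug = [row[:] + [Fraction(int(i == k)) for k in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [x / aug[col][col] for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                aug[r] = [a - aug[r][col] * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

for n in range(1, 6):
    A = A_matrix(n)
    C = matmul(inverse(A), A[::-1])   # C_n = A_n^{-1} B_n, with B_n = rows of A_n reversed
    identity = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    assert matmul(C, C) == identity   # C_n is its own inverse
```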

Samaniego’s representation can also be extended to semi-coherent systems as follows. This result was obtained in Navarro et al. (2008) (by using a different proof).

Theorem 2.8

If T is the lifetime of a coherent system with component lifetimes \(X_1,\ldots ,X_k\) contained in an EXC random vector \((X_1,\ldots ,X_n)\) (\(k<n\)), then

$$\begin{aligned} \bar{F}_T(t)=\sum _{i=1}^n s^{(n)}_i \bar{F}_{i:n}(t) \end{aligned}$$
(2.27)

for all t, where \(s^{(n)}_1,\ldots ,s^{(n)}_n\) are some coefficients that only depend on the structure of the system and that satisfy \(s_1^{(n)}+\cdots +s^{(n)}_n=1\).

Proof

As \((X_1,\ldots ,X_n)\) is EXC, so is \((X_1,\ldots ,X_k)\). Hence, from (2.20), we have

$$\bar{F}_T(t)=\sum _{i=1}^k a_i \bar{F}_{1:i}(t),$$

where \(\mathbf {a}=(a_1,\ldots ,a_k)\) is the minimal signature of T. This representation can be written as

$$\bar{F}_T(t)=\mathbf {a}^{(n)}(\bar{F}_{1:1}(t),\ldots ,\bar{F}_{1:n}(t))'$$

where \(\mathbf {a}^{(n)}:=(a_1,\ldots ,a_k,0,\ldots ,0)\in \mathbb {Z}^n\).

We can also apply here the representation for the k-out-of-n systems obtained in the preceding theorem. Thus,

$$(\bar{F}_{1:n}(t),\ldots ,\bar{F}_{n:n}(t))' =A_n(\bar{F}_{1:1}(t),\ldots ,\bar{F}_{1:n}(t))'$$

for all t, where \(A_n\) is a triangular non-singular matrix of real (integer) numbers. Hence

$$(\bar{F}_{1:1}(t),\ldots ,\bar{F}_{1:n}(t))' =A_n^{-1}(\bar{F}_{1:n}(t),\ldots ,\bar{F}_{n:n}(t))'$$

for all t, where \(A_n^{-1}\) is the inverse matrix of \(A_n\).

Therefore, by using the representation obtained above, we get

$$\begin{aligned} \bar{F}_T(t)&=\mathbf {a}^{(n)}(\bar{F}_{1:1}(t),\ldots ,\bar{F}_{1:n}(t))'\\&=\mathbf {a}^{(n)}A_n^{-1}(\bar{F}_{1:n}(t),\ldots ,\bar{F}_{n:n}(t))'\\&=\mathbf {s}^{(n)}(\bar{F}_{1:n}(t),\ldots ,\bar{F}_{n:n}(t))'\\&=\sum _{i=1}^n s_i^{(n)} \bar{F}_{i:n}(t) \end{aligned}$$

for all t, where \(\mathbf {s}^{(n)}=(s_1^{(n)},\ldots ,s^{(n)}_n):=\mathbf {a}^{(n)}A_n^{-1}\) are some coefficients that do not depend on the joint distribution of the component lifetimes (they only depend on \(\mathbf {a}\) and \(A_n\)) and that satisfy \(s_1^{(n)}+\cdots +s^{(n)}_n=1\).    \(\square \)

The vector \(\mathbf {s}^{(n)}=(s_1^{(n)},\ldots ,s^{(n)}_n)\) is called the structural signature of order \(\mathbf {n}\) of the system. It can be proved that if \((X_1,\ldots ,X_n)\) has an absolutely continuous EXC distribution, then \(s^{(n)}_i=\Pr (T=X_{i:n})\). Hence \(s^{(n)}_i\ge 0\) and so (2.27) is a mixture representation. The structural signature of order n of a semi-coherent system \(\psi \) can also be computed from

$$\begin{aligned} s^{(n)}_i=\frac{1}{\left( {\begin{array}{c}n\\ i-1\end{array}}\right) }\sum _{x_1+\cdots +x_n=n-i+1}\psi (x_1,\ldots ,x_n)-\frac{1}{\left( {\begin{array}{c}n\\ i\end{array}}\right) }\sum _{x_1+\cdots +x_n=n-i}\psi (x_1,\ldots ,x_n) \end{aligned}$$

for \( i=1,\ldots ,n\). Of course, if \(\psi \) is a coherent system of order n, then we obtain the expression of the structural signature given in (2.7).

Analogously, \(\mathbf {a}^{(n)}:=(a_1,\ldots ,a_k,0,\ldots ,0)\) can be called the minimal signature of order \(\mathbf {n}\). Note that \(\mathbf {s}^{(n)}=\mathbf {a}^{(n)}A_n^{-1}\) and \(\mathbf {a}^{(n)}=\mathbf {s}^{(n)}A_n\). The maximal signature of order \(\mathbf {n}\) can be defined in a similar way as \(\mathbf {b}^{(n)}=(b_1,\ldots ,b_k,0,\ldots ,0)\). It can be used to obtain an alternative proof of the preceding theorem with \(\mathbf {s}^{(n)}=\mathbf {b}^{(n)}B_n^{-1}\) and \(\mathbf {b}^{(n)}=\mathbf {s}^{(n)}B_n\).

Remark 2.4

The preceding theorem can also be obtained by using the “Triangle Rule” of the order statistics.  Thus, if \((X_1,\ldots , X_{n+1})\) are EXC without ties, then

$$\Pr (X_{i:n}<X_{n+1}<X_{i+1:n})=\Pr (X_{n+1}=X_{i+1:n+1})=\frac{1}{n+1}$$

for \(i=0,\ldots ,n\) where, by convention \(X_{0:n}=-\infty \) and \(X_{n+1:n}=\infty \). Hence

$$\Pr (X_{i:n}=X_{i+1:n+1})=\Pr (X_{n+1}<X_{i:n})=\frac{i}{n+1}$$

and so

$$\Pr (X_{i:n}=X_{i:n+1})=1-\frac{i}{n+1}=\frac{n+1-i}{n+1}.$$

Consequently the order statistics from an EXC random vector without ties satisfy the following triangle rule

$$\begin{aligned} \bar{F}_{i:n}(t)=\frac{n+1-i}{n+1}\bar{F}_{i:n+1}(t)+\frac{i}{n+1}\bar{F}_{i+1:n+1}(t) \end{aligned}$$
(2.28)

for all t. Note that we can use this expression in (2.24) to write the reliability function \(\bar{F}_T\) of a coherent system with n components as a linear combination of \(\bar{F}_{1:n+1},\ldots ,\bar{F}_{n+1:n+1}\), that is, to compute its signature of order \(n+1\). Thus, if T has the signature \((s^{(n)}_1,\ldots ,s^{(n)}_n)\) of order n, then

$$\begin{aligned} \bar{F}_T&=\sum _{i=1}^n s^{(n)}_i \bar{F}_{i:n}\\&=\sum _{i=1}^n s^{(n)}_i \frac{n+1-i}{n+1}\bar{F}_{i:n+1}+\sum _{i=1}^n s^{(n)}_i \frac{i}{n+1}\bar{F}_{i+1:n+1}\\&=\sum _{i=1}^n s^{(n)}_i \frac{n+1-i}{n+1}\bar{F}_{i:n+1}+\sum _{i=2}^{n+1} s_{i-1}^{(n)} \frac{i-1}{n+1}\bar{F}_{i:n+1}\\&= \frac{ns^{(n)}_1}{n+1}\bar{F}_{1:n+1} +\sum _{i=2}^n \left( \frac{i-1}{n+1}s^{(n)}_{i-1}+\frac{n+1-i}{n+1}s^{(n)}_i \right) \bar{F}_{i:n+1}+\frac{ns^{(n)}_n}{n+1}\bar{F}_{n+1:n+1}. \end{aligned}$$

Hence, the signature of order \(n+1\) can be obtained as

$$\begin{aligned} \mathbf {s}^{(n+1)}=\left( \frac{n}{n+1} s^{(n)}_1, \frac{1}{n+1} s^{(n)}_1 +\frac{n-1}{n+1} s^{(n)}_2, \frac{2}{n+1} s^{(n)}_2 +\frac{n-2}{n+1} s^{(n)}_3,\ldots , \frac{n}{n+1} s^{(n)}_{n} \right) , \end{aligned}$$
(2.29)

that is,

$$s_i^{(n+1)}=\frac{i-1}{n+1}s^{(n)}_{i-1}+\frac{n+1-i}{n+1}s^{(n)}_{i}$$

for \(i=1,\ldots ,n+1\) where, by convention, \(s^{(n)}_0=s^{(n)}_{n+1}=0\). This gives us an alternative proof of Theorem 2.8 based on the Triangle Rule.  Actually, this was the proof used in Navarro et al. (2008). We can go further and compute the signature of order n from the signature of order \(k<n\). The explicit expressions can be seen in Navarro et al. (2008). Alternatively, we can use (2.29) \(n-k\) times.
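Recursion (2.29) is straightforward to iterate. A sketch in exact arithmetic (the example lifts the signature \((1/3,2/3,0)\) used throughout this chapter to order 4):

```python
from fractions import Fraction

def lift(s):
    # one application of (2.29): signature of order n -> signature of order n+1
    n = len(s)
    padded = [Fraction(0)] + list(s) + [Fraction(0)]    # s_0 = s_{n+1} = 0
    return [Fraction(i - 1, n + 1) * padded[i - 1]
            + Fraction(n + 1 - i, n + 1) * padded[i] for i in range(1, n + 2)]

s3 = [Fraction(1, 3), Fraction(2, 3), Fraction(0)]
print(lift(s3))   # [Fraction(1, 4), Fraction(5, 12), Fraction(1, 3), Fraction(0, 1)]
```

Applying `lift` repeatedly computes the signature of order n from the signature of order \(k<n\), as mentioned above.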

The signatures of order 4 for all the coherent systems with 1-3 EXC components are given in Table 2.3. Let us see in some examples how to compute them.

Table 2.3 Signatures of order 4 of all the coherent systems with 1-3 EXC components

Example 2.5

We have seen that if \((X_1,X_2)\) are EXC (or just ID), then

$$\bar{F}_{2:2}= 2\bar{F}_{1:1}-\bar{F}_{1:2}.$$

Hence

$$\bar{F}_{1:1}= \frac{1}{2} \bar{F}_{1:2}+\frac{1}{2} \bar{F}_{2:2},$$

that is, the signature of order 2 of \(X_1\) is (1/2, 1/2). It can also be obtained from the Triangle Rule as follows. Obviously, the signature (of order 1) of \(X_1\) is \(\mathbf {s}=(1)\). Hence, from (2.29), we have

$$ \mathbf {s}^{(2)}=\left( \frac{1}{2} 1,\frac{1}{2} 1 \right) =\left( \frac{1}{2} ,\frac{1}{2} \right) .$$

By applying (2.29) again, we get

$$ \mathbf {s}^{(3)}=\left( \frac{2}{3} \frac{1}{2},\frac{1}{3} \frac{1}{2}+\frac{1}{3} \frac{1}{2},\frac{2}{3} \frac{1}{2} \right) =\left( \frac{1}{3} ,\frac{1}{3},\frac{1}{3} \right) .$$

Analogously, if \((X_1,\ldots ,X_n)\) are EXC without ties, then \(\Pr (X_1=X_{i:n})=1/n\) for \(i=1,\ldots ,n\). Hence, the signature of order n of \(X_1\) (or \(X_i\)) is \(\mathbf {s}^{(n)}=(1/n,\ldots , 1/n)\). \(\blacktriangleleft \)

Example 2.6

Let us consider again the coherent system \(T=\min (X_1,\max (X_2,X_3))\) with three EXC components. Recall that from Tables 2.1 and 2.2, the signature and the minimal signature of this system are (1/3, 2/3, 0) and \((0,2,-1)\), respectively. Therefore, the signature of order 4 can be obtained as

$$ \mathbf {s}^{(4)}= \mathbf {a}^{(4)}A_4^{-1}=(0,2,-1,0)\left( \begin{array} [c]{cccc}\frac{1}{4} &{} \frac{1}{4}&{}\frac{1}{4}&{}\frac{1}{4}\\ \frac{1}{2}&{}\frac{1}{3}&{}\frac{1}{6}&{}0 \\ \frac{3}{4}&{}\frac{1}{4}&{}0&{}0\\ 1&{}0&{}0&{}0 \end{array} \right) =\left( \frac{1}{4},\frac{5}{12},\frac{1}{3},0\right) ,$$

where \((0,2,-1,0)\) is the minimal signature of order 4, that is, the coefficients needed to write \(\bar{F}_T\) in terms of \(\bar{F}_{1:i}\), \(i=1,2,3,4\), and the matrix is the inverse matrix of

$$A_4=\left( \begin{array} [c]{rrrr}0&{}0&{}0&{}1\\ 0&{}0&{}4&{}-3\\ 0&{}6&{}-8&{}3\\ 4&{}-6&{}4&{}-1 \end{array} \right) $$

obtained by placing in the rows the minimal signatures of the order statistics \(X_{1:4},X_{2:4},X_{3:4},X_{4:4}\). Note that the rows of \(A_4^{-1}\) contain the signatures of order 4 of the series systems \(X_{1:1},X_{1:2},X_{1:3},X_{1:4}\).

Another option is to use the following representation based on the signature (1/3, 2/3, 0),

$$\begin{aligned} \bar{F}_T(t)=\frac{1}{3} \bar{F}_{1:3}(t)+\frac{2}{3} \bar{F}_{2:3}(t) \end{aligned}$$
(2.30)

and the relations of the distributions of order statistics based on the Triangle Rule  given in (2.28). Using this rule we have

$$\begin{aligned}&\bar{F}_{1:3}(t)=\frac{3}{4} \bar{F}_{1:4}(t)+ \frac{1}{4} \bar{F}_{2:4}(t)\\&\bar{F}_{2:3}(t)=\frac{1}{2} \bar{F}_{2:4}(t)+\frac{1}{2} \bar{F}_{3:4}(t) \end{aligned}$$

and replacing these expressions in (2.30), we obtain the signature of order 4 as follows

$$\begin{aligned} \bar{F}_T(t)&=\frac{1}{3} \bar{F}_{1:3}(t)+\frac{2}{3} \bar{F}_{2:3}(t)\\&=\frac{1}{3} \left( \frac{3}{4} \bar{F}_{1:4}(t)+ \frac{1}{4} \bar{F}_{2:4}(t)\right) +\frac{2}{3}\left( \frac{1}{2} \bar{F}_{2:4}(t)+\frac{1}{2} \bar{F}_{3:4}(t) \right) \\&=\frac{1}{4} \bar{F}_{1:4}(t)+ \frac{5}{12} \bar{F}_{2:4}(t)+\frac{1}{3}\bar{F}_{3:4}(t). \end{aligned}$$

Another option is to apply formula (2.29) to (1/3, 2/3, 0) to get

$$\left( \frac{3}{4}\ \frac{1}{3},\frac{1}{4}\ \frac{1}{3}+\frac{3}{4}\ \frac{2}{3}, \frac{2}{4}\ \frac{2}{3}+ \frac{2}{4}\ 0,\frac{3}{4}\ 0 \right) =\left( \frac{1}{4}, \frac{5}{12}, \frac{1}{3}, 0\right) .$$

\(\blacktriangleleft \)

We conclude this section with an example extracted from Example 5.1 in Navarro et al. (2008) which proves that representation (2.24) does not necessarily hold without the EXC (ID) assumption. Actually, it proves that the distribution of a system is not necessarily a mixture of the distributions of the order statistics associated to its component lifetimes.

Example 2.7

Let us consider the coherent system with three IND components and with lifetime \(T=\min (X_1,\max (X_2,X_3))\). Recall that the minimal path sets of T are \(\{1,2\}\) and \(\{1,3\}\), and so the reliability function of this system can be written as

$$\bar{F}_T(t)=\bar{F}_{\{1,2\}}(t) +\bar{F}_{\{1,3\}}(t)-\bar{F}_{1:3}(t).$$

If we assume that the component lifetimes are IND then

$$\bar{F}_T(t)=\bar{F}_{1}(t)\bar{F}_{2}(t) +\bar{F}_{1}(t)\bar{F}_{3}(t)-\bar{F}_{1}(t)\bar{F}_{2}(t)\bar{F}_{3}(t).$$

However, in general, \(\bar{F}_T\) need not be a mixture of \(\bar{F}_{1:3}\), \(\bar{F}_{2:3}\) and \(\bar{F}_{3:3}\). For example, if the components have exponential distributions with means 1/2, 1 and 1, respectively, then

$$\begin{aligned}&\bar{F}_{1}(t)=e^{-2t}\\&\bar{F}_{2}(t)=\bar{F}_{3}(t)=e^{-t}\\&\bar{F}_{\{1,2\}}(t)=\bar{F}_{\{1,3\}}(t)=e^{-3t},\\&\bar{F}_{1:3}(t)=e^{-4t},\\&\bar{F}_{2:3}(t)=e^{-2t}+2e^{-3t}-2e^{-4t}, \\&\bar{F}_{3:3}(t)=2e^{-t}-2e^{-3t}+e^{-4t},\\&\bar{F}_T(t)=2e^{-3t}-e^{-4t}, \end{aligned}$$

for all \(t\ge 0\). If we assume that \(\bar{F}_T\) can be written as a mixture of the functions \(\bar{F}_{1:3}\), \(\bar{F}_{2:3}\) and \(\bar{F}_{3:3}\) with some coefficients \(c_1,c_2\) and \(c_3\), we have

$$\begin{aligned} 2e^{-3t}-e^{-4t}=c_1 e^{-4t}+c_2\left( e^{-2t}+2e^{-3t}-2e^{-4t}\right) + c_3\left( 2e^{-t}-2e^{-3t}+e^{-4t}\right) \end{aligned}$$

for all \(t\ge 0\). The functions \(e^{-\lambda t}\) and \(e^{-\mu t}\) are linearly independent for \(\lambda \ne \mu \). Comparing the coefficients of \(e^{-t}\) and \(e^{-2t}\) gives \(c_3=c_2=0\), but then the coefficient 2 of \(e^{-3t}\) on the left-hand side cannot be matched. We conclude that \(\bar{F}_T\) cannot be written as a mixture of \(\bar{F}_{1:3}\), \(\bar{F}_{2:3}\) and \(\bar{F}_{3:3}\). In particular, \(\bar{F}_T\) coincides neither with the mixture obtained with the structural signature \(\mathbf {s}=(1/3,2/3,0)\), given by

$$\bar{F}_a:=\frac{1}{3} \bar{F}_{1:3}+\frac{2}{3} \bar{F}_{2:3}$$

nor with that obtained with the probabilistic signature

$$\bar{F}_p:=p_1\bar{F}_{1:3}+p_2 \bar{F}_{2:3},$$

where \(p_i=\Pr (T=X_{i:3})\) for \(i=1,2\). In this example

$$p_1=\Pr (X_1<\min (X_2,X_3)),$$

where \(X_1\) and \(Y=\min (X_2,X_3)\) are IID. Therefore, \(p_1=p_2=1/2\). The plots of \(\bar{F}_T\) (black), \(\bar{F}_a\) (blue) and \(\bar{F}_p\) (red) and the corresponding hazard rate functions can be seen in Fig. 2.5. Note that the reliability functions are different but similar.
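A sketch of the comparison behind Fig. 2.5 (in Python rather than the original R, and printing values instead of plotting):

```python
import math

# reliability functions of Example 2.7 (exponential rates 2, 1, 1)
def relT(t):  return 2 * math.exp(-3 * t) - math.exp(-4 * t)
def rel13(t): return math.exp(-4 * t)
def rel23(t): return math.exp(-2 * t) + 2 * math.exp(-3 * t) - 2 * math.exp(-4 * t)

def Fa(t):    return rel13(t) / 3 + 2 * rel23(t) / 3   # structural signature (1/3, 2/3, 0)
def Fp(t):    return (rel13(t) + rel23(t)) / 2         # probabilistic signature (1/2, 1/2, 0)

for t in (0.2, 0.5, 1.0, 2.0):
    print(f"t={t}: FT={relT(t):.4f}  Fa={Fa(t):.4f}  Fp={Fp(t):.4f}")

# the projected system is the closer approximation here
assert all(abs(Fp(t) - relT(t)) <= abs(Fa(t) - relT(t)) for t in (0.2, 0.5, 1.0, 2.0))
```

In fact, with \(u=e^{-t}\) one gets \(\bar{F}_a-\bar{F}_T=\frac{2}{3}u^2(1-u)\) and \(\bar{F}_p-\bar{F}_T=\frac{1}{2}u^2(1-u)^2\), so the projected system is always the closer approximation in this example.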


Fig. 2.5

Reliability functions (left) \(\bar{F}_T\) (black), \(\bar{F}_a\) (blue) and \(\bar{F}_p\) (red) and the corresponding hazard rate functions (right) of the system in Example 2.7 when the components are independent with exponential distributions of means 1/2, 1, 1. The dashed lines represent the functions for the k-out-of-3 systems for \(k=1,2,3\)

Note that in the general case, we can define two mixed systems associated to T, the average system

$$\bar{F}_a=s_1 \bar{F}_{1:n}+\cdots +s_n \bar{F}_{n:n}$$

obtained with the structural signature and the projected system

$$\bar{F}_p=p_1 \bar{F}_{1:n}+\cdots +p_n \bar{F}_{n:n}$$

obtained with the probabilistic signature. Both can be considered as good approximations of \(\bar{F}_T\), see Navarro et al. (2010) (the second one is usually better than the first one, as happens in Fig. 2.5). Note that \(\bar{F}_a\) is always the reliability function of a mixed system and that so is \(\bar{F}_p\) when \(p_1+\cdots +p_n=1\). Both \(\bar{F}_a\) and \(\bar{F}_p\) belong to the vector space generated by \(\bar{F}_{1:n},\ldots , \bar{F}_{n:n}\). However, this is not always the case for \(\bar{F}_{T}\), as we have seen in Example 2.7.

2.4 Distortion Representations

The distorted distributions were introduced by Wang (1996) and Yaari (1987) in the context of the theory of choice under risk. The purpose was to allow a “distortion” (a change) of the initial (or past) risk distribution function. The formal definition is the following.

Definition 2.4

The distorted distribution (DD)  associated to a distribution function (DF) F and to an increasing and continuous distortion function \(q:[0,1]\rightarrow [0,1]\) such that \(q(0)=0\) and \(q(1)=1\), is given by

$$\begin{aligned} F_{q}(t)=q(F(t)) \end{aligned}$$
(2.31)

for all t.
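For instance, the choice \(q(u)=u^2\) (an illustrative example: it is the distortion that turns F into the distribution of the maximum of two IID lifetimes) can be coded as:

```python
import math

def q(u):            # distortion function q(u) = u^2 (maximum of two IID lifetimes)
    return u * u

def F(t):            # baseline distribution: standard exponential
    return 1.0 - math.exp(-t) if t >= 0 else 0.0

def F_q(t):          # distorted distribution (2.31)
    return q(F(t))

def qbar(u):         # dual distortion: qbar(u) = 1 - q(1 - u) = 2u - u^2
    return 1.0 - q(1.0 - u)

# (2.31) and (2.32) describe the same model: F_q = 1 - qbar(Fbar)
for t in (0.0, 0.5, 1.3, 4.0):
    assert abs(F_q(t) - (1.0 - qbar(math.exp(-t)))) < 1e-12
```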

Note that the conditions on q assure that \(F_q\) is a proper distribution function for any distribution function F (actually, for this property, we just need a right-continuous distortion function). Moreover, if q is strictly increasing in [0, 1], then F and \(F_{q}\) have the same support. Also note that q is a distribution function with support included in [0, 1].

From (2.31), we have a similar expression for the respective reliability functions \(\bar{F}=1-F\) and \(\bar{F}_{q}=1-F_q\) that satisfy

$$\begin{aligned} \bar{F}_{q}(t)=\bar{q}(\bar{F}(t)), \end{aligned}$$
(2.32)

where \(\bar{q}(u):=1-q(1-u)\) is called the dual distortion function in Hürlimann (2004).

Note that \(\bar{q}\) is also a “distortion function”, that is, it is continuous, increasing and satisfies \(\bar{q}(0)=0\) and \(\bar{q}(1)=1\). Actually, expressions (2.31) and (2.32) are equivalent. However, sometimes it is better to use (2.32) instead of (2.31) (or vice versa).
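As a quick numerical illustration of Definition 2.4 and of the equivalence of (2.31) and (2.32), the following Python sketch uses an exponential baseline and the distortion \(q(u)=u^2\); both choices are assumptions made for illustration, not taken from the text.

```python
import math

# Baseline lifetime: exponential with mean 1 (an assumed illustrative choice)
F = lambda t: 1 - math.exp(-t)
Fbar = lambda t: math.exp(-t)

# Distortion q(u) = u**2 and its dual, as in (2.31)-(2.32)
q = lambda u: u**2
qbar = lambda u: 1 - q(1 - u)

# Boundary conditions of a distortion function
assert q(0) == 0 and q(1) == 1

# The two routes to the distorted model agree: 1 - q(F(t)) = qbar(Fbar(t))
for t in (0.1, 0.7, 2.5):
    assert abs((1 - q(F(t))) - qbar(Fbar(t))) < 1e-12
```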

If F is absolutely continuous with PDF \(f=F'\) and q is differentiable, then the PDF of \(F_q\) is

$$\begin{aligned} f_{q}(t)=f(t) q'(F(t))=f(t)\bar{q}^\prime (\bar{F}(t)) \end{aligned}$$
(2.33)

for all t.

From (2.32) and (2.33), the hazard rate function of \(F_q\) is

$$\begin{aligned} h_{q}(t)=\frac{f_q(t)}{\bar{F}_q(t)}=\frac{ \bar{q}'(\bar{F}(t))}{\bar{q}(\bar{F}(t))}f(t)=\alpha (\bar{F}(t)) h(t) \end{aligned}$$
(2.34)

for all t such that \(\bar{q}(\bar{F}(t))>0\), where \(\alpha (u):=u\bar{q}'(u)/\bar{q}(u)\) for \(u\in [0,1]\) and \(h(t)=f(t)/\bar{F}(t)\) is the hazard rate function of F.

Analogously, the reversed hazard rate function of \(F_q\) is

$$\begin{aligned} \bar{h}_{q}(t)=\frac{f_q(t)}{F_q(t)}=\frac{ q'(F(t))}{q(F(t))}f(t)=\bar{\alpha }(F(t)) \bar{h}(t) \end{aligned}$$
(2.35)

for all t such that \(q(F(t))>0\), where \(\bar{\alpha }(u):=u q'(u)/ q(u)\) for \(u\in [0,1]\) and \(\bar{h}(t)=f(t)/F(t)\) is the reversed hazard rate function of F.

However, the expression connecting the MRL functions is not so simple. Thus, if we assume that the support of F is contained in \([0,\infty )\), then

$$m_q(t)= \frac{\int _t^\infty \bar{q}(\bar{F}(x))dx}{\bar{q}(\bar{F}(t))}= \frac{\bar{F}(t)}{\bar{q}(\bar{F}(t))}\frac{\int _t^\infty \bar{q}(\bar{F}(x))dx}{\int _t^\infty \bar{F}(x)dx}m(t) $$

for all t such that \(\bar{q}(\bar{F}(t))>0\).
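The MRL expression can be checked numerically. The sketch below (Python, with an assumed exponential baseline of mean 1 and the dual distortion \(\bar{q}(u)=u^\theta\) of the PHR model discussed below) truncates the integral at a large upper limit; in this case the distorted lifetime is again exponential, now with rate \(\theta\), so \(m_q\) should be constant and equal to \(1/\theta\).

```python
import math

theta = 2.0
Fbar = lambda t: math.exp(-t)   # baseline exponential with mean 1 (assumed)
qbar = lambda u: u**theta       # PHR dual distortion

def mrl(t, upper=60.0, n=100_000):
    # m_q(t) = ∫_t^∞ qbar(Fbar(x)) dx / qbar(Fbar(t)), midpoint rule,
    # truncated at `upper` (the tail beyond it is negligible here)
    h = (upper - t) / n
    s = sum(qbar(Fbar(t + (k + 0.5) * h)) for k in range(n)) * h
    return s / qbar(Fbar(t))

print(mrl(0.5))   # ≈ 0.5 = 1/theta
```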

Several relevant models are contained in the distorted models. Let us see some of them:

  1. 1.

    Lehmann’s alternatives. They were introduced in hypothesis testing as alternatives to the distribution function proposed in the null hypothesis. They are defined by

    $$F_\theta (t)= F^\theta (t)$$

    for all t, where \(\theta \) is a positive parameter. Clearly, this is a distorted distribution with distortion function \(q(u)=u^\theta \) and dual distortion function \(\bar{q}(u)=1-(1-u)^\theta \) for \(u\in [0,1]\). The original distribution is obtained with \(\theta =1\).

  2. 2.

    Proportional hazard rate (PHR) Cox model. This model was introduced in survival analysis to model the different risks of patients. It is defined by the reliability (survival) function

    $$\bar{F}_\theta (t)= \bar{F}^\theta (t)$$

    for all t, where \(\theta \) is a positive parameter. Clearly, this is a distorted distribution with dual distortion function \(\bar{q}(u)=u^\theta \) and distortion function \(q(u)=1-(1-u)^\theta \). Again, the original distribution is obtained with \(\theta =1\). In the absolutely continuous case, its hazard rate is

    $$h_\theta (t)=\theta h(t),$$

    that is, the hazard rate function \(h_\theta \) is proportional to the baseline hazard rate function h. Moreover, the \(\alpha \) function in (2.34) is constant and equal to \(\theta \). In practice, the \(\theta \) parameter is obtained (estimated) as \(\theta =c_1x_1+\cdots +c_kx_k\), where \(c_1,\ldots ,c_k\) are some positive parameters and \(x_1,\ldots ,x_k\) represent the characteristics of the patient.

  3. 3.

    Proportional reversed hazard rate (PRHR) model. This model is similar to the PHR model and it is defined by the distribution function 

    $$ F_\theta (t)= F^\theta (t)$$

    for all t, where \(\theta \) is a positive parameter. Clearly, this model is equivalent to the Lehmann’s alternative model given in item 1 above. In the absolutely continuous case, its reversed hazard rate is

    $$\bar{h}_\theta (t)=\theta \bar{h}(t),$$

    that is, the reversed hazard rate function is proportional to the baseline reversed hazard rate function (i.e. the function \(\bar{\alpha }\) in (2.35) is constant).

  4. 4.

    Order statistics. As we have seen in (2.8), the reliability function of the ith order statistic from a sample of n IID random variables with common distribution function F can be written as

    $$\bar{F}_{i:n}(t)=\sum _{j=0}^{i-1} \left( {\begin{array}{c}n\\ j\end{array}}\right) F^j(t) \bar{F}^{n-j}(t).$$

    Hence it is a distorted distribution with dual distortion function

    $$ \bar{q}_{i:n}(u)=\sum _{j=0}^{i-1} \left( {\begin{array}{c}n\\ j\end{array}}\right) (1-u)^j u^{n-j}.$$

    The analogous expression for the distortion function is

    $$ q_{i:n}(u)=\sum _{j=i}^{n} \left( {\begin{array}{c}n\\ j\end{array}}\right) u^j (1-u)^{n-j}.$$

    Note that both are polynomials (based on Bernstein polynomials \(B^n_j(u)=\left( {\begin{array}{c}n\\ j\end{array}}\right) u^j (1-u)^{n-j}\)). Actually, these distortion functions can also be obtained from (2.26) and (2.25) as

    $$q_{i:n}(u)=\sum _{j=i}^n (-1)^{j-i} \left( {\begin{array}{c}n\\ j\end{array}}\right) \left( {\begin{array}{c}j-1\\ i-1\end{array}}\right) u^j $$

    and

    $$\bar{q}_{i:n}(u)=\sum _{j=n-i+1}^n (-1)^{j-n+i-1} \left( {\begin{array}{c}n\\ j\end{array}}\right) \left( {\begin{array}{c}j-1\\ n-i\end{array}}\right) u^j.$$

    Alternatively, we can also use their maximal and minimal signatures, respectively. In particular, for the minimum (series system) and maximum (parallel system) values we have \(\bar{F}_{1:n}=\bar{F}^n\) and \(F_{n:n}=F^n\). So they are included in the PHR and PRHR models, respectively.
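The order-statistic distortions in item 4 can be verified numerically. The following Python sketch (with an assumed sample size \(n=4\)) checks that the Bernstein form of \(q_{i:n}\) is the dual of \(\bar{q}_{i:n}\) and that the extreme cases reduce to the PHR and PRHR models.

```python
from math import comb

n = 4  # illustrative sample size (an assumption)

def qbar_os(i, u):
    # Dual distortion of the i-th order statistic, as in item 4 above
    return sum(comb(n, j) * (1 - u)**j * u**(n - j) for j in range(i))

def q_os(i, u):
    # Distortion function written with Bernstein polynomials
    return sum(comb(n, j) * u**j * (1 - u)**(n - j) for j in range(i, n + 1))

# The two expressions are duals of each other: q(u) = 1 - qbar(1 - u)
for i in range(1, n + 1):
    for u in (0.0, 0.25, 0.5, 0.9, 1.0):
        assert abs(q_os(i, u) - (1 - qbar_os(i, 1 - u))) < 1e-12

# Extreme cases: qbar_{1:n}(u) = u**n (PHR) and q_{n:n}(u) = u**n (PRHR)
assert abs(qbar_os(1, 0.3) - 0.3**n) < 1e-12
assert abs(q_os(n, 0.3) - 0.3**n) < 1e-12
```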

The distorted distributions were generalized in Navarro et al. (2016) as follows.

Definition 2.5

The distorted distribution (DD)  associated to n distribution functions \(F_1,\ldots ,F_n\) and to an increasing and continuous distortion function \(Q:[0,1]^n\rightarrow [0,1]\) such that \(Q(0,\ldots ,0)=0\) and \(Q(1,\ldots ,1)=1\), is given by

$$\begin{aligned} F_{Q}(t)=Q(F_1(t),\ldots ,F_n(t)) \end{aligned}$$
(2.36)

for all t.

As above, the conditions on Q assure that \(F_Q\) is a proper distribution function for any distribution functions \(F_1,\ldots ,F_n\) (actually, for this property, we just need a right-continuous distortion function). Moreover, from (2.36), we have a similar expression for the respective reliability functions

$$\begin{aligned} \bar{F}_{Q}(t)=\bar{Q}(\bar{F}_1(t),\ldots ,\bar{F}_n(t)), \end{aligned}$$
(2.37)

where \(\bar{Q}(u_1,\ldots ,u_n):=1-Q(1-u_1,\ldots ,1-u_n)\) is called the dual distortion function. Note that \(\bar{Q}\) is also a “distortion function”, that is, it is continuous, increasing and satisfies \(\bar{Q}(0,\ldots ,0)=0\) and \(\bar{Q}(1,\ldots ,1)=1\). Actually, expressions (2.36) and (2.37) are equivalent. However, sometimes it could be better to use (2.37) instead of (2.36) (or vice versa). Note that these expressions are similar to copula representations but that here we obtain a univariate distribution (or reliability) function. The distortion functions are continuous aggregation functions (see Grabisch et al. 2009).

If \(F_1,\ldots ,F_n\) are absolutely continuous with probability density functions \(f_1,\ldots ,\) \(f_n\) and Q is differentiable, then the PDF of \(F_Q\) is

$$\begin{aligned} f_{Q}(t)=\sum _{i=1}^n f_i(t)\ \partial _i Q(F_1(t),\ldots ,F_n(t))=\sum _{i=1}^n f_i(t)\ \partial _i \bar{Q}(\bar{F}_1(t),\ldots ,\bar{F}_n(t)), \end{aligned}$$
(2.38)

for all t, where \(\partial _i G\) represents the partial derivative of G with respect to its ith variable.

From (2.37) and (2.38), the hazard rate function of \(F_Q\) is

$$\begin{aligned} h_{Q}(t)=\frac{ \sum _{i=1}^n f_i(t) \partial _i \bar{Q}(\bar{F}_1(t),\ldots ,\bar{F}_n(t))}{\bar{Q}(\bar{F}_1(t),\ldots ,\bar{F}_n(t))}=\sum _{i=1}^n \alpha _i(\bar{F}_1(t),\ldots ,\bar{F}_n(t)) h_i(t) \end{aligned}$$
(2.39)

for all t such that \(\bar{F}_Q(t)>0\), where

$$\alpha _i(u_1,\ldots ,u_n):=\frac{u_i \partial _i \bar{Q}(u_1,\ldots ,u_n)}{\bar{Q}(u_1,\ldots ,u_n)}\ge 0$$

for \(u_1,\ldots ,u_n\in [0,1]\) such that \(\bar{Q}(u_1,\ldots ,u_n)>0\) and \(h_i(t)=f_i(t)/\bar{F}_i(t)\) for \(i=1,\ldots ,n\). A similar expression can be obtained for the reversed hazard rate function.

Let us see some examples.

  1. 1.

    Finite mixtures. As we have mentioned above, the distribution function of a finite mixture can be written as

    $$F(t)=p_1F_1(t)+\cdots +p_nF_n(t)$$

    for all t, where \(p_i\ge 0\) and \(p_1+\cdots +p_n=1\). Therefore it is a distorted distribution with distortion functions

    $$Q(u_1,\ldots ,u_n)=\bar{Q}(u_1,\ldots ,u_n)=p_1u_1+\cdots +p_nu_n.$$

    However, note that the negative mixtures cannot be represented as distorted distributions.

  2. 2.

    Generalized proportional hazard rate (GPHR) model. The PHR model defined above can be extended by

    $$\bar{F}(t)=\bar{F}^{\theta _1}_1(t)\dots \bar{F}^{\theta _n}_n(t),$$

    where \(\theta _1,\ldots ,\theta _n>0\). Clearly, this is a distorted distribution with dual distortion function

    $$\bar{Q}(u_1,\ldots ,u_n)=u^{\theta _1}_1\dots u^{\theta _n}_n.$$

    When \(\theta _1=\cdots =\theta _n=1\), we obtain the reliability of the series system with n independent components.

  3. 3.

    Generalized proportional reversed hazard rate (GPRHR) model. Analogously, the PRHR model defined above can be extended by

    $$ F(t)=F^{\theta _1}_1(t)\dots F^{\theta _n}_n(t),$$

    where \(\theta _1,\ldots ,\theta _n>0\). Clearly, this is a distorted distribution with distortion function

    $$ Q(u_1,\ldots ,u_n)=u^{\theta _1}_1\dots u^{\theta _n}_n.$$

    When \(\theta _1=\cdots =\theta _n=1\), we obtain the distribution of the parallel system with n independent components.

  4. 4.

    Aggregation functions. The continuous aggregation functions are equivalent to distorted distributions. So we can use them to get new (distorted) distributions. For example, we can use the arithmetic mean

    $$\bar{u}=A_1(u_1,\ldots ,u_n):=\frac{u_1+\cdots +u_n}{n}.$$

    This is also a mixture model (with a uniform mixing distribution). Another example is the geometric mean

    $$u_g=A_2(u_1,\ldots ,u_n):=\root n \of {u_1\dots u_n}.$$

    If it is applied to the reliability functions, then it is included in the GPHR model and if it is applied to the distribution functions, then it is included in the GPRHR model.
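A small numerical check of items 1 and 2 above, with two assumed exponential components:

```python
import math

# Two illustrative exponential components (rates 1 and 2, assumed)
F1 = lambda t: 1 - math.exp(-t)
F2 = lambda t: 1 - math.exp(-2 * t)

# Finite mixture: Q(u1, u2) = p1*u1 + p2*u2 applied to the DFs
p1, p2 = 0.4, 0.6
Q_mix = lambda u1, u2: p1 * u1 + p2 * u2

t = 1.3
F_mix = Q_mix(F1(t), F2(t))
assert abs(F_mix - (p1 * F1(t) + p2 * F2(t))) < 1e-12

# GPHR: dual distortion Qbar(u1, u2) = u1**t1 * u2**t2 on the reliabilities
t1, t2 = 1.5, 0.7
Qbar_gphr = lambda u1, u2: u1**t1 * u2**t2
Fbar_gphr = Qbar_gphr(math.exp(-t), math.exp(-2 * t))
# With exponential marginals the GPHR lifetime is again exponential,
# with rate t1*1 + t2*2
assert abs(Fbar_gphr - math.exp(-(t1 * 1 + t2 * 2) * t)) < 1e-12
```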

The goal of this section is to prove that the distribution function of a system can be written as a distortion of the distribution functions of its components. To this end we will use copula theory. The main properties of copulas can be seen in Nelsen (2006) and Durante and Sempi (2016). Thus, if the random vector \(\mathbf {X}=(X_1,\ldots ,X_n)\) contains the lifetimes of the components in a system then, from Sklar’s theorem, we know that the joint distribution function \(\mathbf {F}\) of \(\mathbf {X}\) can be written as

$$\begin{aligned} \mathbf {F}(x_1,\ldots ,x_n)=C(F_1(x_1),\ldots ,F_n(x_n)) \end{aligned}$$
(2.40)

for all \(x_1,\ldots ,x_n\), where \(F_1,\ldots ,F_n\) are the marginal (component) distribution functions and C is a copula function, that is, it is a distribution function with uniform marginals over the interval [0, 1]. Many authors prefer to restrict copula functions to \(C:[0,1]^n\rightarrow [0,1]\). In this case, they can always be extended to determine an n-dimensional distribution function with uniform marginals. Moreover, if all the marginal distribution functions \(F_1,\ldots ,F_n\) are continuous, then C is unique. We also know that if C is a copula, then the right hand side of (2.40) determines a proper joint distribution function for all univariate distribution functions \(F_1,\ldots ,F_n\) (i.e., from a copula C, we can construct multivariate models with a fixed dependence structure and arbitrary univariate marginals).

A similar representation holds for the reliability functions, that is, the joint reliability function satisfies

$$\begin{aligned} {\bar{\mathbf {F}}}(x_1,\ldots ,x_n)=\widehat{C}(\bar{F}_1(x_1),\ldots ,\bar{F}_n(x_n)) \end{aligned}$$
(2.41)

for all \(x_1,\ldots ,x_n\), where \(\bar{F}_1,\ldots ,\bar{F}_n\) are the marginal (component) reliability functions and \(\widehat{C}\) is a copula function, called survival copula. It is easy to see that C determines \(\widehat{C}\) and vice versa. For example, if \(n=2\), then

$$\widehat{C}(u_1,u_2)=u_1+u_2-1+C(1-u_1,1-u_2)$$

for all \(u_1,u_2\in [0,1]\).
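The relation between C and \(\widehat{C}\) for \(n=2\) is easy to verify numerically; the sketch below uses the Clayton–Oakes survival copula that appears later in this section.

```python
# Clayton-Oakes survival copula (used later, in Fig. 2.6)
Chat = lambda u, v: u * v / (u + v - u * v)

# Distributional copula recovered from the n = 2 relation
C = lambda u, v: u + v - 1 + Chat(1 - u, 1 - v)

# Applying the relation again must give back Chat (the map is an involution)
for (u, v) in [(0.2, 0.9), (0.5, 0.5), (0.8, 0.3)]:
    back = u + v - 1 + C(1 - u, 1 - v)
    assert abs(back - Chat(u, v)) < 1e-12

# Copula boundary condition: C(u, 1) = u
assert abs(C(0.4, 1.0) - 0.4) < 1e-12
```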

We can use (2.41) to prove that the series systems have distorted distributions. For example, if we consider \(X_{1:n}=\min (X_1,\ldots ,X_n)\), then its reliability function is

$$\bar{F}_{1:n}(t)=\Pr (X_{1:n}>t)=\Pr (X_1>t,\ldots ,X_n>t)=\widehat{C}(\bar{F}_1(t),\ldots ,\bar{F}_n(t)),$$

that is, it is a distorted distribution with dual distortion \(\bar{Q}=\widehat{C}\). Note that copula functions satisfy the properties of distortion functions, but the reverse is not true (we will see an example later).

If we consider the series system with just the first k components for \(k=1,\ldots ,n\), then its lifetime is \(X_{1:k}=\min (X_1,\ldots ,X_k)\) and its reliability function is

$$\bar{F}_{1:k}(t)=\Pr (X_{1:k}>t){=}\Pr (X_1>t,\ldots ,X_k>t){=}\widehat{C}(\bar{F}_1(t),\ldots ,\bar{F}_k(t),1,\ldots ,1)$$

that is, it is a distorted distribution with dual distortion function

$$\bar{Q}(u_1,\ldots ,u_n)=\widehat{C}(u_1,\ldots ,u_k,1,\ldots ,1)$$

for \(u_1,\ldots ,u_n\in [0,1]\).

In the general case, if we consider the series system formed with the components in the set \(P\subseteq [n]\), then its lifetime is \(X_{P}=\min _{j\in P}X_j\) and its reliability function is

$$\begin{aligned} \bar{F}_{P}(t)=\Pr (X_{P}>t)=\Pr (\cap _{j\in P}\{X_j>t\})=\widehat{C}_P(\bar{F}_1(t),\ldots ,\bar{F}_n(t)), \end{aligned}$$
(2.42)

where

$$\begin{aligned} \widehat{C}_P(u_1,\ldots ,u_n):=\widehat{C}(u^P_1,\ldots ,u^P_n), \end{aligned}$$
(2.43)

\(u_i^P=u_i\) if \(i\in P\) and \(u_i^P=1\) if \(i\notin P\) for \(u_1,\ldots ,u_n\in [0,1]\). Hence all the series systems have distorted distributions. Similar representations can be proved for the parallel systems by using (2.40).

Now we are in a position to prove the main result of this section which says that the same property holds for any semi-coherent system.

Theorem 2.9

(Distortion representation, general case) If T is the lifetime of a semi-coherent system and the component lifetimes \((X_1,\ldots ,X_n)\) have the survival copula \(\widehat{C}\), then the reliability function of T can be written as

$$\begin{aligned} \bar{F}_T(t)=\bar{Q}(\bar{F}_1(t),\ldots ,\bar{F}_n(t)) \end{aligned}$$
(2.44)

for all t, where \(\bar{Q}\) is a distortion function which depends on \(\psi \) and \(\widehat{C}\).

Proof

From the minimal path set representation (2.15), we have

$$ \bar{F}_T(t)=\sum _{i=1}^r \bar{F}_{P_i}(t)-\sum _{i=1}^{r-1}\sum _{j=i+1}^r \bar{F}_{P_i\cup P_j}(t)+\cdots +(-1)^{r+1} \bar{F}_{P_1\cup \ldots \cup P_r}(t). $$

Hence, from (2.42) and (2.43), we obtain (2.44) with

$$\begin{aligned} \bar{Q}(\mathbf {u})=\sum _{i=1}^r \widehat{C}_{P_i}(\mathbf {u})-\sum _{i=1}^{r-1}\sum _{j=i+1}^r \widehat{C}_{P_i\cup P_j}(\mathbf {u})+\cdots +(-1)^{r+1} \widehat{C}_{P_1\cup \dots \cup P_r}(\mathbf {u}) \end{aligned}$$
(2.45)

for \(\mathbf {u}=(u_1,\ldots ,u_n)\in [0,1]^n\), where \(\widehat{C}_P\) is defined by (2.43). The function \(\bar{Q}\) is always a distortion function since \(\bar{F}_T\) is a proper reliability function for all \(\bar{F}_1,\ldots ,\bar{F}_n\).    \(\square \)

A similar proof can be obtained by using parallel systems and the minimal cut set representation. The function \(\bar{Q}\) can be called the distortion function (or domination) of the system. Note that it depends both on the structure (the minimal path sets) of the system and on the dependence structure between the component lifetimes (the survival copula). However, it does not depend on the component (marginal) reliability functions. So (2.44) is a very convenient representation of the system reliability, since all the system characteristics (dependence and structure) are included in \(\bar{Q}\) and the different units are represented by their marginal reliability functions. In many situations in practice, we can choose different units (reliabilities) for a fixed system structure \(\bar{Q}\), or study different system characteristics (different \(\bar{Q}\) functions) for arbitrary or fixed components.

Next we analyse some particular cases of interest.

Theorem 2.10

(Distortion representation, IND case) If T is the lifetime of a semi-coherent system with independent component lifetimes \(X_1,\ldots ,X_n\), then the reliability function of T can be written as

$$\begin{aligned} \bar{F}_T(t)=\bar{Q}(\bar{F}_1(t),\ldots ,\bar{F}_n(t)) \end{aligned}$$

for all t, where \(\bar{Q}\) is a multinomial which only depends on \(\psi \).

The proof is immediate from (2.17) or (2.45). The multinomial \(\bar{Q}\) was called the reliability function of the structure \(\psi \) in Barlow and Proschan (1975), p. 21. However, note that \(\bar{Q}\) is not a joint reliability function (it is a distortion function). Also note that this multinomial can be obtained by using the product-coproduct representations of the structure function given in (1.6) and (1.7). It can also be obtained from the pivotal decomposition (1.3) or from the Möbius representation (1.10).

In the general case, the distortion function \(\bar{Q}\) in (2.45) can also be obtained from the Möbius transform \(\widehat{\varphi }\) and \(\widehat{C}\) as

$$\bar{Q}(\mathbf {u})=\sum _{I\subseteq [n]}\widehat{\varphi }(I) \widehat{C}(\mathbf {u}_I),$$

where \(\mathbf {u}=(u_1,\ldots ,u_n)\) and \(\mathbf {u}_I=(u^I_1,\ldots ,u^I_n)\) with \(u_i^I=u_i\) if \(i\in I\) and \(u_i^I=1\) if \(i\notin I\), see (3.6) in Navarro and Spizzichino (2020).

Theorem 2.11

(Distortion representation, ID case) If T is the lifetime of a semi-coherent system and the component lifetimes \((X_1,\ldots ,X_n)\) have the survival copula \(\widehat{C}\) and a common reliability \(\bar{F}\), then the reliability function of T can be written as

$$\begin{aligned} \bar{F}_T(t)=\bar{q}(\bar{F}(t)) \end{aligned}$$

for all t, where \(\bar{q}\) is a distortion function which only depends on \(\psi \) and on \(\widehat{C}\).

The proof is immediate from (2.44) with

$$\bar{q}(u)=\bar{Q}(u,\ldots ,u)$$

for \(u\in [0,1]\). In particular, in the EXC case, \(\bar{q}\) can be written as

$$\bar{q}(u)=\sum _{i=1}^n a_i\widehat{C}(\underbrace{u,\ldots ,u}_{i\ times},\underbrace{1,\ldots ,1}_{n-i\ times}),$$

where \((a_1,\ldots ,a_n)\) is the minimal signature of order n.

Theorem 2.12

(Distortion representation, IID case) If T is the lifetime of a semi-coherent system with IID component lifetimes \(X_1,\ldots ,X_n\) having a common reliability \(\bar{F}\), then the reliability function of T can be written as

$$\begin{aligned} \bar{F}_T(t)=\bar{q}(\bar{F}(t)) \end{aligned}$$

for all t, where \(\bar{q}(u)=\sum _{i=1}^n a_i u^i\) is a distortion function and \((a_1,\ldots ,a_n)\) is the minimal signature of order n.

The proof is immediate from the two preceding theorems or from the minimal signature representation given in (2.22). Note that, in this case, \(\bar{q}\) is the polynomial obtained with the minimal signature coefficients.

Let us see some examples. The simplest one is the representation of the components. Thus, the reliability function of \(X_i\) can be written as

$$\bar{F}_i(t)=\bar{Q}_i(\bar{F}_1(t),\ldots ,\bar{F}_n(t))$$

for \(\bar{Q}_i(u_1,\ldots ,u_n)=u_i\) and \(i=1,\ldots ,n\).

As mentioned above, the representation for the series systems is also immediate. In particular, the reliability function of \(X_{1:k}\) is

$$\bar{F}_{1:k}(t)=\bar{Q}_{1:k}(\bar{F}_1(t),\ldots ,\bar{F}_n(t))$$

for

$$\bar{Q}_{1:k}(u_1,\ldots ,u_n)=\widehat{C}(u_1,\ldots ,u_k,1,\ldots ,1)$$

for \(k=1,\ldots ,n\). If the components are IND, then

$$\bar{Q}_{1:k}(u_1,\ldots ,u_n)=u_1 \dots u_k.$$

If the components are ID with a common reliability \(\bar{F}\), then \(\bar{F}_{1:k}(t)=\bar{q}_{1:k}(\bar{F}(t))\) with

$$\bar{q}_{1:k}(u)=\bar{Q}_{1:k}(u,\ldots ,u)=\widehat{C}(\underbrace{u,\ldots ,u}_{k\ times},\underbrace{1,\ldots ,1}_{n-k\ times})$$

and, in particular, if they are IID, then \(\bar{q}_{1:k}(u)=u^k\) for \(k=1,\ldots ,n\).

For the parallel systems, it is better to use the distributional copula C. Thus the distribution function of \(X_{k:k}\) can be written as

$$F_{k:k}(t)= Q_{k:k}(F_1(t),\ldots ,F_n(t))$$

for

$$Q_{k:k}(u_1,\ldots ,u_n)=C(u_1,\ldots ,u_k,1,\ldots ,1)$$

for \(k=1,\ldots ,n\). Hence, its reliability function is

$$\bar{F}_{k:k}(t)=\bar{Q}_{k:k}(\bar{F}_1(t),\ldots ,\bar{F}_n(t))$$

for

$$\bar{Q}_{k:k}(u_1,\ldots ,u_n)=1-C(1-u_1,\ldots ,1-u_k,1,\ldots ,1)$$

for \(k=1,\ldots ,n\).

We can also obtain representations based on \(\widehat{C}\) from the minimal path set representation. For example, for \(X_{2:2}\) we get

$$\bar{F}_{2:2}(t)=\bar{F}_{1}(t)+\bar{F}_{2}(t)-\bar{F}_{1:2}(t)=\bar{Q}_{2:2}(\bar{F}_1(t),\ldots ,\bar{F}_n(t))$$

with

$$\bar{Q}_{2:2}(u_1,\ldots ,u_n)=u_1+u_2-\widehat{C}(u_1,u_2).$$

A similar expression can be obtained for \(X_{k:k}\). If the components are IND, then

$$\bar{Q}_{k:k}(u_1,\ldots ,u_n)=1-(1-u_1) \dots (1-u_k) =\coprod _{i=1}^k u_i.$$

If they are ID, then \(\bar{F}_{k:k}(t)=\bar{q}_{k:k}(\bar{F}(t))\) for

$$\bar{q}_{k:k}(u)=1- C(\underbrace{1-u,\ldots ,1-u}_{k\ times},\underbrace{1,\ldots ,1}_{n-k\ times})$$

and, if they are IID, then \(\bar{q}_{k:k}(u)=1-(1-u)^k\).

We can also consider a general coherent system. For example, for our favourite system \(T= \min (X_1,\max (X_2,X_3))\), we have

$$\bar{F}_T(t)=\bar{F}_{\{1,2\}}(t)+\bar{F}_{\{1,3\}}(t)-\bar{F}_{1:3}(t)=\bar{Q}_{T}(\bar{F}_1(t),\bar{F}_2(t),\bar{F}_3(t))$$

with

$$\bar{Q}_{T}(u_1,u_2,u_3)=\widehat{C}(u_1,u_2,1)+\widehat{C}(u_1,1,u_3)-\widehat{C}(u_1,u_2,u_3).$$

If the components are IND, then

$$\bar{Q}_{T}(u_1,u_2,u_3)=u_1u_2+u_1u_3-u_1u_2u_3=u_1(u_2 \amalg u_3).$$

If they are ID, then \(\bar{F}_T(t)=\bar{q}_{T}(\bar{F}(t))\) with

$$\bar{q}_{T}(u)=\widehat{C}(u,u,1)+\widehat{C}(u,1,u)-\widehat{C}(u,u,u)$$

and, if they are IID, then \(\bar{q}_{T}(u)=2u^2-u^3\) for \(u\in [0,1]\). Recall that its minimal signature is \((0,2,-1)\).
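A sketch in Python checking this representation for the favourite system: in the IID case the dual distortion built from the product copula reduces to \(2u^2-u^3\), and a Monte Carlo simulation with standard exponential components (an assumed marginal) agrees with \(\bar{F}_T(t)=2\bar{F}^2(t)-\bar{F}^3(t)\).

```python
import math, random

# Dual distortion of T = min(X1, max(X2, X3)) built from a survival copula C
Qbar = lambda C, u1, u2, u3: C(u1, u2, 1) + C(u1, 1, u3) - C(u1, u2, u3)

# IID case: product copula, so Qbar(u, u, u) must reduce to 2u^2 - u^3
prod = lambda u1, u2, u3: u1 * u2 * u3
for u in (0.0, 0.3, 0.7, 1.0):
    assert abs(Qbar(prod, u, u, u) - (2 * u**2 - u**3)) < 1e-12

# Monte Carlo check with IID standard exponential components (assumed)
random.seed(1)
t, N = 0.8, 200_000
hits = sum(
    1 for _ in range(N)
    if min(random.expovariate(1.0),
           max(random.expovariate(1.0), random.expovariate(1.0))) > t
)
Fb = math.exp(-t)
exact = 2 * Fb**2 - Fb**3
assert abs(hits / N - exact) < 0.01
```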

Proceeding in a similar way we can obtain the dual distortion functions given in Tables 2.4 and 2.5 for all the systems with 1-3 IND and IID components, respectively. In the second case all the systems equivalent under permutations have the same distortions (so they are not repeated in the table).

Table 2.4 Dual distortion functions for all the systems with 1-3 IND components
Table 2.5 Dual distortion functions for all the systems with 1-3 IID components

The preceding representations can be used jointly with the representation based on distortions to compute the reliability and hazard rate functions of a system. For example, in Fig. 2.6, we plot the reliability functions for series and parallel systems of order 2 when the component lifetimes have exponential distributions of means 1 and 1/2 and when they are independent (left) or they have the following Clayton–Oakes survival copula (right)

$$ \widehat{C}(u,v)=\frac{uv}{u+v-uv}$$

for \(u,v\in [0,1]\). This copula induces a positive dependence between the component lifetimes. Note that, in both cases, \(\bar{F}_{1:2}\le \bar{F}_i\le \bar{F}_{2:2}\) holds (this property is always true) and that the positive dependence induced by this copula improves the series system but worsens the parallel system (as expected). The code in R to get the right plot is the following. By changing C we can obtain other plots (including the left plot).

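A sketch of the same computation in Python (rather than R), with the Clayton–Oakes copula and exponential marginals of means 1 and 1/2; only the curves are computed here, plotting is left out.

```python
import math

# Component reliabilities: exponential distributions with means 1 and 1/2
Fb1 = lambda t: math.exp(-t)
Fb2 = lambda t: math.exp(-2 * t)

# Clayton-Oakes survival copula from the text
Chat = lambda u, v: u * v / (u + v - u * v)

# Series: F_{1:2}(t) = Chat(Fb1(t), Fb2(t));
# parallel via F_{2:2} = Fb1 + Fb2 - F_{1:2}
series = lambda t: Chat(Fb1(t), Fb2(t))
parallel = lambda t: Fb1(t) + Fb2(t) - series(t)

for t in [k / 10 for k in range(1, 31)]:
    # the ordering F_{1:2} <= F_i <= F_{2:2} always holds
    assert series(t) <= min(Fb1(t), Fb2(t)) + 1e-12
    assert parallel(t) >= max(Fb1(t), Fb2(t)) - 1e-12
```

Evaluating the four functions on a grid of t values and plotting them (e.g. with matplotlib) reproduces the right panel of Fig. 2.6; replacing `Chat` by the product copula `u * v` gives the left panel.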
Fig. 2.6

Reliability functions for the parallel system \(X_{2:2}\) (black) and the series system \(X_{1:2}\) (green) when the components have exponential distributions of means 1 (red) and 1/2 (blue) and they are independent (left) or dependent with a Clayton–Oakes copula (right)

We conclude this section by extending the signature representations. We have seen in the preceding section that they hold when the component lifetimes have an exchangeable (EXC) joint distribution function \(\mathbf {F}\). This condition is equivalent to having ID components and an EXC survival copula \(\widehat{C}\). We have also proved above that the ID condition cannot be dropped. However, let us see that the condition “\(\widehat{C}\) is EXC” can be relaxed. To this end we need the following concept extracted from Okolewski (2017). Recall that we use the following notation. For any set \(I\subseteq \{1,\ldots ,n\}\), \(\mathbf {u}_I:=(u_1,\ldots ,u_n)\) denotes the vector with \(u_i=u\) for \(i\in I\) and \(u_i=1\) if \(i\notin I\). The cardinality of the set I is denoted by |I|.

Definition 2.6

An n-dimensional copula C is said to be diagonal-dependent (shortly denoted by DD) if

$$\begin{aligned} C(\mathbf {u}_P)=C(\mathbf {u}_Q)\text { for all } P,Q\subseteq \{1,\ldots ,n\} \text { with } |P|=|Q|. \end{aligned}$$
(2.46)

The function \(\delta (u)=C(u,\ldots ,u)\) is called the diagonal section of the copula C. Hence note that C is DD if and only if

$$\begin{aligned} C(\mathbf {u}_P)=\delta _m(u)\text { for all } P\subseteq \{1,\ldots ,n\} \text { with } |P|=m \end{aligned}$$
(2.47)

for \(m=1,\ldots ,n\), where

$$\delta _m(u):=C(\ \underbrace{u,\ldots ,u}_{m-times},\underbrace{1,\ldots ,1}_{(n-m)-times})$$

is the diagonal section of the copula of the marginal distribution of the first m variables. Clearly, \(\delta _n(u)=C(u,\ldots ,u)=\delta (u)\) and \(\delta _1(u)=u\) for all \(u\in [0,1]\) (since all the univariate marginals have a uniform distribution over the interval (0, 1)). So we just need to check (2.47) for \(m=2,\ldots ,n-1\).

In particular, a copula C is DD when all the marginals of dimension m have the same copula for all \(1<m<n\). Of course, all the EXC copulas are, in particular, DD. The reverse is not true, see the counterexample given in Navarro and Fernández-Sánchez (2020).

Now we are ready to state the following result extracted from Navarro and Fernández-Sánchez (2020).

Theorem 2.13

(Distortion representation, DD-ID case) If T is the lifetime of a semi-coherent system and the component lifetimes \((X_1,\ldots ,X_n)\) are ID and have a DD survival copula, then (2.5) holds for the structural signature of dimension n.

Proof

From (2.15) we know that the system reliability function \(\bar{F}_T\) can be written as a linear combination of the reliability functions of the series systems. If the component lifetimes are ID with a common reliability function \(\bar{F}\) and a DD survival copula \(\widehat{C}\), then

$$\begin{aligned} \bar{F}_P(t)=\mathbb {P}\left( \min _{j\in P}X_j>t\right) =\widehat{C}_P (\bar{F}(t),\ldots ,\bar{F}(t))=\widehat{\delta }_{m}(\bar{F}(t)) \end{aligned}$$
(2.48)

holds for all t and all \(P\subseteq \{1,\ldots ,n\}\) with \(|P|=m\), where

$$\widehat{\delta }_m(u):=\widehat{C}(\ \underbrace{u,\ldots ,u}_{m-times},\underbrace{1,\ldots ,1}_{(n-m)-times})$$

for all \(u\in [0,1]\) and \(m=1,\ldots ,n\). Hence, all the series systems with the same number of components m do have the same reliability function given by (2.48). Therefore, the general representation (2.15) can be reduced to

$$\begin{aligned} \bar{F}_T(t)=a_1 \widehat{\delta }_1(\bar{F}(t))+\cdots +a_n \widehat{\delta }_n(\bar{F}(t)), \end{aligned}$$
(2.49)

where \(\mathbf {a}=(a_1,\ldots ,a_n)\) is the minimal signature of order n of the system.

The preceding representation (2.49) holds for any system structure (with the appropriate coefficients \(a_1,\ldots ,a_n\)). For example, the series system with n components has just one minimal path set \(P_1=\{1,\ldots ,n\}\) and lifetime \(X_{1:n}=\min (X_1,\ldots ,X_n)\). Hence

$$\begin{aligned} \bar{F}_{1:n}(t)=\Pr (X_1>t,\ldots ,X_n>t)=\widehat{C}(\bar{F}(t),\ldots ,\bar{F}(t))= \widehat{\delta }_n(\bar{F}(t)) \end{aligned}$$

for all t.

Analogously, the minimal path sets of \(X_{2:n}\) are all the subsets with \(n-1\) elements. So it has \(n=\left( {\begin{array}{c}n\\ n-1\end{array}}\right) \) minimal path sets and, from (2.15),

$$\begin{aligned} \bar{F}_{2:n}(t)= n \widehat{\delta }_{n-1}(\bar{F}(t))-(n-1) \widehat{\delta }_{n}(\bar{F}(t)) \end{aligned}$$

holds for all t. The last coefficient in the preceding expression is \(n-1\) because the coefficients in (2.49) sum up to 1 (take \(t\rightarrow -\infty \)).

In general, \(X_{i:n}\) has \(\left( {\begin{array}{c}n\\ n-i+1\end{array}}\right) \) minimal path sets and, from (2.15), its reliability function can be written as

$$\begin{aligned} \bar{F}_{i:n}(t)= a_{i, n-i+1} \widehat{\delta }_{n-i+1}(\bar{F}(t))+\cdots +a_{i,n} \widehat{\delta }_{n}(\bar{F}(t)) \end{aligned}$$
(2.50)

for some coefficients \(a_{i, n-i+1},\ldots ,a_{i, n}\) such that \(a_{i,n-i+1}+\cdots +a_{i,n}=1\) and \(a_{i, n-i+1}=\left( {\begin{array}{c}n\\ n-i+1\end{array}}\right) \) for \(i=1,\ldots ,n\).

Thus, if we define the column vectors \(\mathbf{r}(t)=(\bar{F}_{1:n}(t),\ldots ,\bar{F}_{n:n}(t))'\) and \(\mathbf{d}(t)=(\widehat{\delta }_{1}(\bar{F}(t)),\ldots ,\widehat{\delta }_{n}(\bar{F}(t)))'\), (2.50) proves that \(\mathbf{r}(t)=A_n \mathbf{d}(t)\) for a triangular real-valued matrix \(A_n=(a_{i,j})\) such that \(a_{i, n-i+1}=\left( {\begin{array}{c}n\\ n-i+1\end{array}}\right) \) and \(a_{i, j}=0\) for \(i=1,\ldots ,n\) and \(j=1,\ldots , n-i\). Hence \(A_n\) is not singular and so we can write \(\mathbf{d}(t)=A_n^{-1} \mathbf{r}(t)\), where \(A_n^{-1}\) is the inverse matrix of \(A_n\). Moreover, note that (2.49) can be rewritten as \(\bar{F}_T(t)=\mathbf {a}\ \mathbf{d}(t).\) Then

$$\bar{F}_T(t)=\mathbf {a}A_n^{-1} \mathbf{r}(t)=(c_1,\ldots ,c_n)\mathbf{r}(t)=c_1\bar{F}_{1:n}(t)+\cdots +c_n\bar{F}_{n:n}(t)$$

for all t, where \((c_1,\ldots ,c_n):=\mathbf {a}A_n^{-1}\) are some coefficients which only depend on the structure of the system. Therefore, these coefficients must be the same as those obtained in the IID continuous case, that is, \(c_i=s_i^{(n)}\) for \(i=1,\ldots ,n\). So (2.5) holds with the same coefficients for systems with ID component lifetimes and DD survival copulas.    \(\square \)
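The matrix argument in the proof can be checked directly for \(n=3\). The sketch below builds \(A_3\) from the minimal path set representations of \(X_{1:3}\), \(X_{2:3}\), \(X_{3:3}\), takes the minimal signature \(\mathbf {a}=(0,2,-1)\) of the favourite system \(T=\min (X_1,\max (X_2,X_3))\), and solves \(\mathbf {c}A_3=\mathbf {a}\), recovering the structural signature \((1/3,2/3,0)\).

```python
from fractions import Fraction as Fr

# A_3: rows give F_{i:3} as combinations of delta_1, delta_2, delta_3
A = [[Fr(0), Fr(0), Fr(1)],    # F_{1:3} = delta_3
     [Fr(0), Fr(3), Fr(-2)],   # F_{2:3} = 3 delta_2 - 2 delta_3
     [Fr(3), Fr(-3), Fr(1)]]   # F_{3:3} = 3 delta_1 - 3 delta_2 + delta_3

a = [Fr(0), Fr(2), Fr(-1)]     # minimal signature of T = min(X1, max(X2, X3))

# Solve c A = a, i.e. A^T c^T = a^T, by Gaussian elimination
M = [[A[r][col] for r in range(3)] + [a[col]] for col in range(3)]
for col in range(3):
    piv = next(r for r in range(col, 3) if M[r][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    M[col] = [x / M[col][col] for x in M[col]]
    for r in range(3):
        if r != col and M[r][col] != 0:
            M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
c = [M[r][3] for r in range(3)]
print(c)   # structural signature (1/3, 2/3, 0)
```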

In Navarro and Fernández-Sánchez (2020) it is proved that the set \(\mathcal {S}_{DD}\) of all the DD copulas is much bigger than the set \(\mathcal {S}_{EXC}\) of EXC copulas. Actually, \(\mathcal {S}_{DD}\) is dense in the set of all the copulas while \(\mathcal {S}_{EXC}\) is not. Therefore, for any copula C we can find a “close” DD copula. The following example illustrates these representations.

Example 2.8

Let us consider again \(T=\min (X_1,\max (X_2,X_3))\) with

$$\bar{F}_T(t)=\Pr (X_{1}>t, X_2>t)+\Pr (X_{1}>t,X_3>t)-\Pr (X_1>t,X_2>t,X_3>t).$$

Let us assume

$$\Pr (X_1>x_1, X_2>x_2, X_3>x_3)=\widehat{C}(\bar{F}_1(x_1),\bar{F}_2(x_2), \bar{F}_3(x_3)),$$

where \(\widehat{C}\) is the survival copula. If we assume \(\bar{F}_1=\bar{F}_2=\bar{F}_3=\bar{F}\) (ID), then

$$\begin{aligned} \Pr (X_{1}>t, X_2>t)&=\widehat{C}(\bar{F}(t),\bar{F}(t),1)\\ \Pr (X_{1}>t, X_3>t)&=\widehat{C}(\bar{F}(t),1,\bar{F}(t))\\ \Pr (X_{1}>t, X_2>t,X_3>t)&=\widehat{C}(\bar{F}(t),\bar{F}(t),\bar{F}(t)). \end{aligned}$$

Therefore, \(\bar{F}_T(t)=\bar{q}(\bar{F}(t))\) with

$$\bar{q}(u)=\widehat{C}(u,u,1)+\widehat{C}(u,1,u)-\widehat{C}(u,u,u).$$

Analogously, it can be proved that \(\bar{F}_{i:3}(t)=\bar{q}_{i:3}(\bar{F}(t))\) for \(i=1,2\) with

$$\begin{aligned} \bar{q}_{1:3}(u)&=\widehat{C}(u,u,u)\\ \bar{q}_{2:3}(u)&=\widehat{C}(u,u,1)+\widehat{C}(u,1,u)+\widehat{C}(1,u,u)-2\widehat{C}(u,u,u). \end{aligned}$$

As the structural signature is \(s=(1/3,2/3,0)\), we do not need \(\bar{F}_{3:3}\).

If the components are IID, that is, \(\widehat{C}(u_1,u_2,u_3)=u_1u_2u_3\), then

$$\begin{aligned} \bar{q}(u)&=2u^2-u^3\\ \bar{q}_{1:3}(u)&=u^3\\ \bar{q}_{2:3}(u)&=3u^2-2u^3. \end{aligned}$$

Therefore

$$\bar{q}(u)= \frac{1}{3} \bar{q}_{1:3}(u)+\frac{2}{3} \bar{q}_{2:3}(u)$$

holds since

$$2u^2-u^3=\frac{1}{3} (u^3)+\frac{2}{3} (3u^2-2u^3).$$
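This identity in the IID case is easy to verify numerically. The following Python snippet (an illustrative sketch, not part of the text) evaluates both sides of the mixture representation on a grid of values of u:

```python
# Distortion functions for T = min(X1, max(X2, X3)) with IID components.
def q_bar(u):
    return 2 * u**2 - u**3           # system distortion

def q_13(u):
    return u**3                       # series system X_{1:3}

def q_23(u):
    return 3 * u**2 - 2 * u**3        # 2-out-of-3 system X_{2:3}

# Check q_bar(u) = (1/3) q_13(u) + (2/3) q_23(u) on a grid.
for k in range(101):
    u = k / 100
    mix = q_13(u) / 3 + 2 * q_23(u) / 3
    assert abs(q_bar(u) - mix) < 1e-12
```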

If \(\widehat{C}\) is DD, then

$$\widehat{C}(u,u,1)=\widehat{C}(u,1,u)=\widehat{C}(1,u,u)$$

and so

$$\begin{aligned} \bar{q}(u)&=2\widehat{C}(u,u,1)-\widehat{C}(u,u,u)\\ \bar{q}_{1:3}(u)&=\widehat{C}(u,u,u)\\ \bar{q}_{2:3}(u)&=3\widehat{C}(u,u,1)-2\widehat{C}(u,u,u) \end{aligned}$$

for all \(u\in [0,1]\). Therefore

$$\bar{q}(u)= \frac{1}{3} \bar{q}_{1:3}(u)+\frac{2}{3} \bar{q}_{2:3}(u)$$

holds since

$$2\widehat{C}(u,u,1)-\widehat{C}(u,u,u)=\frac{1}{3} \widehat{C}(u,u,u)+\frac{2}{3} (3\widehat{C}(u,u,1)-2\widehat{C}(u,u,u)).$$
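The same representation can be checked numerically for a concrete non-IID DD copula. The snippet below (an illustrative sketch; the trivariate Clayton copula and the parameter value are my choice, not from the text) uses a Clayton survival copula, which is exchangeable and hence DD:

```python
# Trivariate Clayton copula (exchangeable, hence DD), theta > 0.
def clayton(u1, u2, u3, theta=2.0):
    return (u1**(-theta) + u2**(-theta) + u3**(-theta) - 2.0)**(-1.0 / theta)

# Check q_bar(u) = (1/3) q_13(u) + (2/3) q_23(u) for u in (0, 1].
for k in range(1, 101):
    u = k / 100
    q = 2 * clayton(u, u, 1) - clayton(u, u, u)          # system distortion
    q13 = clayton(u, u, u)                                # X_{1:3}
    q23 = 3 * clayton(u, u, 1) - 2 * clayton(u, u, u)     # X_{2:3}
    assert abs(q - (q13 / 3 + 2 * q23 / 3)) < 1e-12
```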

However, if \(\widehat{C}\) is the following Farlie-Gumbel-Morgenstern (FGM) copula:

$$\widehat{C}(u_1,u_2,u_3)=u_1u_2u_3(1+\theta (1-u_2)(1-u_3))$$

for \(-1\le \theta \le 1\), then

$$\begin{aligned} \bar{q}(u)&=2u^2-\widehat{C}(u,u,u)\\ \bar{q}_{1:3}(u)&=\widehat{C}(u,u,u)\\ \bar{q}_{2:3}(u)&=3u^2+\theta u^2 (1-u)^2 -2\widehat{C}(u,u,u). \end{aligned}$$

Therefore

$$\bar{q}(u)= \frac{1}{3} \bar{q}_{1:3}(u)+\frac{2}{3} \bar{q}_{2:3}(u)$$

does not hold for \(\theta \ne 0\) since

$$2u^2-\widehat{C}(u,u,u)\ne \frac{1}{3} \widehat{C}(u,u,u)+\frac{2}{3} (3u^2+\theta u^2 (1-u)^2 -2\widehat{C}(u,u,u))$$

for \(0<u<1\). \(\blacktriangleleft \)
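The failure of the signature representation for the FGM copula can also be quantified numerically: the two sides of the would-be identity differ by exactly \((2/3)\theta u^2(1-u)^2\). The following Python snippet (an illustrative sketch, not part of the text) verifies this gap for a sample parameter value:

```python
# Trivariate FGM copula, -1 <= theta <= 1.
def C(u1, u2, u3, theta):
    return u1 * u2 * u3 * (1 + theta * (1 - u2) * (1 - u3))

theta = 0.5
for k in range(1, 100):
    u = k / 100
    # System distortion and k-out-of-3 distortions.
    q = C(u, u, 1, theta) + C(u, 1, u, theta) - C(u, u, u, theta)
    q13 = C(u, u, u, theta)
    q23 = (C(u, u, 1, theta) + C(u, 1, u, theta) + C(1, u, u, theta)
           - 2 * C(u, u, u, theta))
    mix = q13 / 3 + 2 * q23 / 3
    gap = mix - q
    # The mixture misses the true distortion by (2/3)*theta*u^2*(1-u)^2 > 0.
    assert abs(gap - (2 / 3) * theta * u**2 * (1 - u)**2) < 1e-12
    assert gap > 0
```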

Problems

  1. Prove that if X is a non-negative random variable, then

     $$E(X)=\int _0^\infty \bar{F}_X(x)dx.$$

  2. Compute the MTTF in the exponential model.

  3. Prove that the exponential model satisfies the lack of memory property.

  4. Prove that the exponential model is the unique continuous model that satisfies the lack of memory property.

  5. Prove that the MRL of the exponential model satisfies \(m(t)=\mu \) for all \(t\ge 0\).

  6. Prove that the hazard rate of the exponential model satisfies \(h(t)=1/\mu \) for all \(t\ge 0\).

  7. Obtain the hazard rate of the Weibull model.

  8. Obtain the reliability function of a model with hazard rate \(h(t)=a+bt\) for \(t\ge 0\) and \(a,b\ge 0\).

  9. Obtain the reliability function of a model with hazard rate \(h(t)=1/(a+bt)\) for \(t\ge 0\) and \(a,b\ge 0\).

  10. Obtain the relationship between the reversed hazard rate and mean inactivity time functions.

  11. Obtain a representation similar to (2.14) for the MRL of the system in the IID continuous case.

  12. Obtain the minimal path set representation of a system of order 4.

  13. Obtain the minimal cut set representation of a system of order 4.

  14. Obtain the minimal signature representation of a system of order 4.

  15. Obtain the maximal signature representation of a system of order 4.

  16. Compute the matrices \(A_4\) and \(A_4^{-1}\).

  17. Compute the matrices \(B_4\) and \(B_4^{-1}\).

  18. Compute the matrix \(C_4\).

  19. Obtain the signature of order 4 of a coherent system of order 3.

  20. Obtain the signature of order 5 of a coherent system of order 4.

  21. Prove with an example that Samaniego’s representation does not hold without the EXC assumption.

  22. Prove that the function in (2.12) is a proper PDF for \(i,n\in \mathbb {R}\) satisfying \(1\le i\le n\).

  23. Prove that \(\widehat{C}(u_1,u_2)=u_1+u_2-1+C(1-u_1,1-u_2)\).

  24. Compute the distortion functions of a system of order 4.

  25. Use the distortion function of a system to plot its reliability and hazard rate functions.

  26. Compare the reliability functions of two systems by using distortions.

  27. Compare the hazard rate functions of two systems by using distortions.

  28. Obtain the signature representation for a DD (non-EXC) copula.