
5.1 Introduction

The concept of the total time on test transform (TTT) was studied in the early 1970s; see, e.g., Barlow and Doksum [67] and Barlow et al. [65]. When several units are tested for studying their life lengths, some of the units fail while others may survive the test duration. The sum of all observed and incomplete life lengths is generally visualized as the total time on test statistic. When the number of items placed on test tends to infinity, the limit of this statistic is called the total time on test transform. Formal definitions of these two concepts are given in the next section. The TTT is essentially a quantile-based concept, although it is often discussed in the literature in terms of F(x).

Many papers on TTT concentrate on reliability and its engineering applications. These include the analysis of life lengths and new classes of ageing; see Abouammoh and Khalique [9], Ahmad et al. [25] and Kayid [318]. A special characteristic of TTT is that the basic ageing properties can be interpreted and determined through it. The works of Barlow and Campo [66], Bergman [89], Klefsjö [334], Abouammoh and Khalique [9] and Perez-Ocon et al. [492] are all of this nature. Properties of TTT were used for the construction of bathtub-shaped distributions by Haupt and Schabe [266] and Nair et al. [447]. Much of the literature has focused on developing test procedures, most of which test exponentiality against alternatives like IHR, IHRA, NBUE, DMRL and HNBUE. For this, one may refer to Bergman [90], Klefsjö [335, 336], Kochar and Deshpande [348], Aarset [1], Xie [592, 593], Bergman and Klefsjö [96], Wei [579] and Ahmad et al. [25].

Applications of TTT can be found in a variety of fields. Of these, the role of TTT in reliability engineering will be taken up separately in Sect. 5.5. The optimal quantum of energy that may be sold under long-term contracts using TTT is discussed in Campo [125], and risk assessment of strategies in Zhao et al. [601]. TTT plotting of censored data (Westberg and Klefsjö [578]), the repair limit problem (Dohi et al. [180]), normalized TTT plots and spacings (Ebrahimi and Spizzichino [183]), maintenance scheduling (Kumar and Westberg [357], Klefsjö and Westberg [340]), estimation from stationary observations (Csorgo and Yu [161]) and stochastic modelling (Vera and Lynch [573]) are some of the other topics discussed in the context of total time on test.

5.2 Definitions and Properties

We now give formal definitions of various concepts based on total time on test.

Definition 5.1.

Suppose n items are under test and successive failures are observed at \(X_{1:n} \leq X_{2:n} \leq \ldots \leq X_{n:n}\), and let \(X_{r:n} < t \leq X_{r+1:n}\), where the \(X_{r:n}\)'s are order statistics from the distribution of a lifetime random variable X with absolutely continuous distribution function F(x). Then, the total time on test statistic during (0, t) is defined as

$$\displaystyle\begin{array}{rcl} & & \tau (t) = nX_{1:n} + (n - 1)(X_{2:n} - X_{1:n}) + \cdots + (n - r + 1)(X_{r:n} - X_{r-1:n}) \\ & & \qquad \qquad \qquad \qquad + (n - r)(t - X_{r:n}). {}\end{array}$$
(5.1)

The above expression is arrived at by noting that the test time observed between 0 and \(X_{1:n}\) is \(nX_{1:n}\), that between \(X_{1:n}\) and \(X_{2:n}\) is \((n - 1)(X_{2:n} - X_{1:n})\) and so on, and finally that between \(X_{r:n}\) and t is \((n - r)(t - X_{r:n})\). Also, the total time up to the rth failure is

$$\displaystyle{ \tau (X_{r:n}) = nX_{1:n} + (n - 1)(X_{2:n} - X_{1:n}) + \cdots + (n - r + 1)(X_{r:n} - X_{r-1:n}). }$$
(5.2)

It may also be noted that (5.1) is equivalent to

$$\displaystyle{\tau (t) = X_{1:n} + X_{2:n} + \cdots + X_{r:n} + (n - r)t.}$$
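The equivalence of the two forms can be checked numerically; the following is a minimal sketch in which the ordered sample values and the inspection time t are arbitrary illustrations, not taken from the text.

```python
# Illustrative check that (5.1) equals the simpler form
# tau(t) = X_{1:n} + ... + X_{r:n} + (n - r) t; the sample values and the
# test time t below are arbitrary.
x = [0.8, 1.1, 2.5, 3.0, 4.2]           # ordered failure times
n = len(x)
t = 2.7                                  # X_{3:n} < t <= X_{4:n}, so r = 3
r = sum(xi < t for xi in x)

xs = [0.0] + x                           # X_{0:n} = 0
tau1 = sum((n - j + 1) * (xs[j] - xs[j - 1]) for j in range(1, r + 1))
tau1 += (n - r) * (t - x[r - 1])         # form (5.1)

tau2 = sum(x[:r]) + (n - r) * t          # equivalent form

print(round(tau1, 10), round(tau2, 10))  # both 9.8
```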

Definition 5.2.

The quantity

$$\displaystyle{ \phi _{r:n} = \frac{\tau (X_{r:n})} {\tau (X_{n:n})} = \frac{\sum _{j=1}^{r}(n - j + 1)(X_{j:n} - X_{j-1:n})} {\sum _{j=1}^{n}(n - j + 1)(X_{j:n} - X_{j-1:n})},\quad \mbox{ with }X_{0:n} = 0, }$$
(5.3)

is called the scaled total time on test statistic (scaled TTT statistic).

Noting that \(\bar{X}_{n} = \frac{1} {n}(X_{1:n} +\ldots +X_{n:n})\) is the sample mean of the n order statistics, we have \(\phi _{r:n} = \frac{\tau (X_{r:n})} {n\bar{X}_{n}}\). The empirical distribution function defined in terms of the order statistics is

$$\displaystyle{F_{n}(t) = \left \{\begin{array}{@{}l@{\quad }l@{}} 0, \quad &t < X_{1:n}, \\ \frac{r} {n},\quad &X_{r:n} \leq t < X_{r+1:n},\;r = 1,2,\ldots,n - 1,\\ 1, \quad &t \geq X_{ n:n}. \end{array} \right.}$$

If there exists an inverse function

$$\displaystyle{F_{n}^{-1}(t) =\inf [x \geq 0\vert F_{ n}(x) > t],}$$

we can verify that

$$\displaystyle\begin{array}{rcl} \int _{0}^{F_{n}^{-1}( \frac{r} {n})}\bar{F}_{n}(t)dt =\sum _{ j=1}^{r}\left (1 -\frac{j - 1} {n} \right )(X_{j:n} - X_{j-1:n}) = \frac{\tau (X_{r:n})} {n} & & {}\\ \end{array}$$

and

$$\displaystyle{ \lim _{n\rightarrow \infty }\lim _{\frac{r} {n}\rightarrow u}\int _{0}^{F_{n}^{-1}( \frac{r} {n})}\bar{F}_{n}(t)dt =\int _{ 0}^{{F}^{-1}(u) }\bar{F}(t)dt }$$
(5.4)

uniformly in u belonging to [0, 1]. The expression on the right side of (5.4), viz.,

$$\displaystyle{ \int _{0}^{{F}^{-1}(u) }\bar{F}(t)dt = H_{F}^{-1}(u), }$$
(5.5)

is called the total time on test transform. Accordingly, with the slightly different notation \(T(u)\) for \(H_{F}^{-1}(u)\), we have the following definition.

Definition 5.3.

The TTT of a lifetime random variable X is defined as

$$\displaystyle{ T(u) = H_{F}^{-1}(u) =\int _{ 0}^{{F}^{-1}(u) }\bar{F}(t)dt =\int _{ 0}^{u}(1 - p)q(p)dp. }$$
(5.6)
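As a quick numerical sketch of this definition, consider the exponential distribution with an illustrative rate λ, for which the TTT has the closed form T(u) = u∕λ; crude quadrature of the survival function reproduces it.

```python
# Sketch: T(u) = integral of Fbar(t) over (0, Q(u)) for an exponential
# lifetime with an illustrative rate lam; closed form T(u) = u / lam.
import math

lam = 2.0
Q = lambda u: -math.log(1.0 - u) / lam          # quantile function
Fbar = lambda t: math.exp(-lam * t)             # survival function

def T(u, m=20000):
    # crude midpoint quadrature of the survival function over (0, Q(u))
    b = Q(u)
    h = b / m
    return h * sum(Fbar((k + 0.5) * h) for k in range(m))

for u in (0.25, 0.5, 0.9):
    print(round(T(u), 6), round(u / lam, 6))    # pairs agree
```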

Example 5.1.

The linear hazard quantile function family of distributions specified by

$$\displaystyle{Q(u) =\log {\left( \frac{a + bu} {a(1 - u)}\right)}^{ \frac{1} {a+b} }}$$

(see Chap. 2) has

$$\displaystyle{q(u) = {[(1 - u)(a + bu)]}^{-1},}$$

and so, from (5.6), we find

$$\displaystyle{T(u) = \frac{1} {b}\log \Big(\frac{a + bu} {a} \Big).}$$
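This closed form can be verified by direct quadrature of \(\int_0^u (1-p)q(p)dp\); a sketch with illustrative parameter values a = 1 and b = 2:

```python
# Sketch: for the linear hazard quantile family, with illustrative a = 1,
# b = 2, compare quadrature of the integral of (1-p) q(p) over (0, u)
# with the closed form T(u) = (1/b) log((a + bu)/a).
import math

a, b = 1.0, 2.0
q = lambda p: 1.0 / ((1.0 - p) * (a + b * p))   # quantile density function

def T_num(u, m=20000):
    h = u / m
    return h * sum((1.0 - (k + 0.5) * h) * q((k + 0.5) * h) for k in range(m))

u = 0.7
T_closed = math.log((a + b * u) / a) / b
print(round(T_num(u), 6), round(T_closed, 6))   # both about 0.437734
```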

The expressions for TTT for some specific life distributions are presented in Table 5.1.

Table 5.1 Total time on test transforms for some specific life distributions

Some important properties of the TTT in (5.6) are the following:

  1. 1.

    T(0) = 0, T(1) = μ. T(u) is an increasing function if and only if F is continuous. In this case, T(u) is a quantile function and the corresponding distribution is called the transformed distribution;

  2. 2.

    The baseline distribution F is uniquely determined by T(u). To see this, we differentiate (5.6) to get

    $$\displaystyle{ T^{\prime}(u) = (1 - u)q(u), }$$
    (5.7)

    and thence

    $$\displaystyle{Q(u) =\int _{ 0}^{u} \frac{T^{\prime}(p)} {1 - p}dp;}$$
  3. 3.

    From Table 5.1, we see that the graph of the TTT of the exponential distribution is the diagonal line in the unit square;

  4. 4.

Many identities exist between T(u) and the basic reliability functions introduced earlier in Sects. 2.3–2.6. Directly from (5.7) and (2.30), we have

    $$\displaystyle{ T^{\prime}(u) = \frac{1} {H(u)}. }$$
    (5.8)

Again from (2.35), we find

$$\displaystyle{T(u) =\mu -\int _{u}^{1}(1 - p)q(p)dp =\mu -(1 - u)M(u)}$$

and consequently

$$\displaystyle{ M(u) = \frac{\mu -T(u)} {1 - u}, }$$
(5.9)

which relates TTT and the mean residual quantile function. On the other hand, from (2.46), we find

$$\displaystyle{V (u) = \frac{1} {1 - u}\int _{u}^{1}{M}^{2}(p)dp}$$

and hence

$$\displaystyle{ V (u) = \frac{1} {1 - u}\int _{u}^{1}{\left (\frac{\mu -T(p)} {1 - p} \right )}^{2}dp, }$$
(5.10)

or equivalently

$$\displaystyle{ T(u) =\mu -(1 - u){[V (u) - (1 - u)V ^{\prime}(u)]}^{\frac{1} {2} }. }$$
(5.11)

Next, with regard to functions in reversed time, we have

$$\displaystyle{T(u) = Q(u) -\int _{0}^{u}p\,q(p)dp}$$

or

$$\displaystyle{uq(u) = [Q(u) - T(u)]^{\prime},}$$

and so

$$\displaystyle{ \Lambda (u) = \frac{1} {[Q(u) - T(u)]^{\prime}}. }$$
(5.12)

Also from (2.50), the reversed mean residual quantile function satisfies

$$\displaystyle{uR(u) =\int _{ 0}^{u}pq(p)dp}$$

and

$$\displaystyle{uR(u) = Q(u) - T(u),}$$

and consequently

$$\displaystyle{ T(u) = Q(u) - uR(u). }$$
(5.13)

Finally, we use the reversed variance quantile function

$$\displaystyle{D(u) = \frac{1} {u}\int _{0}^{u}{R}^{2}(p)dp}$$

to write

$$\displaystyle{D(u) = \frac{1} {u}\int _{0}^{u}\frac{{[Q(p) - T(p)]}^{2}} {{p}^{2}} dp.}$$

These relationships are used in the next section to characterize the ageing properties in terms of total time on test transform.
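As a numerical sketch of these identities, one can recover Q(u) from T(u) via \(Q(u) = \int_0^u T'(p)/(1-p)\,dp\); here the exponential case is used, where T(u) = u∕λ and hence T′(p) = 1∕λ (the rate λ = 0.5 is an arbitrary illustration).

```python
# Sketch: recovering Q(u) from the transform via the integral of
# T'(p)/(1-p) over (0, u), for the exponential case T(u) = u/lam.
import math

lam = 0.5
t_prime = lambda p: 1.0 / lam                   # T'(p) for the exponential

def Q_rec(u, m=20000):
    h = u / m
    return h * sum(t_prime((k + 0.5) * h) / (1.0 - (k + 0.5) * h)
                   for k in range(m))

u = 0.8
print(round(Q_rec(u), 4), round(-math.log(1.0 - u) / lam, 4))   # both 3.2189
```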

Definition 5.4.

We say that

$$\displaystyle{ \phi (u) = \frac{\int _{0}^{u}(1 - p)q(p)dp} {\int _{0}^{1}(1 - p)q(p)dp} = \frac{T(u)} {\mu } }$$
(5.14)

is the scaled total time on test transform, or scaled transform in short, of the random variable X.

Definition 5.5.

The plot of the points \(( \frac{r} {n},\phi _{r:n})\), \(r = 1,2,\ldots,n\), when connected by consecutive straight lines, is called the TTT-plot.
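A minimal sketch of the construction follows. Instead of random data it uses a deterministic quasi-sample of unit-exponential quantiles (an illustrative choice), for which the TTT-plot should hug the diagonal of the unit square.

```python
# Sketch of a TTT-plot: the points (r/n, phi_{r:n}) of (5.3), joined by
# straight lines; for exponential data the plot is near the diagonal.
import math

n = 400
x = [-math.log(1.0 - (i - 0.5) / n) for i in range(1, n + 1)]  # ordered "sample"

xs = [0.0] + x                                  # X_{0:n} = 0
tau = [0.0]
for j in range(1, n + 1):                       # tau(X_{j:n}) built recursively
    tau.append(tau[-1] + (n - j + 1) * (xs[j] - xs[j - 1]))

phi = [tau[r] / tau[n] for r in range(1, n + 1)]   # scaled TTT statistic
pts = [(r / n, phi[r - 1]) for r in range(1, n + 1)]

dev = max(abs(u - p) for u, p in pts)
print(dev < 0.05)                               # True: near-diagonal plot
```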

The statistic \(\frac{1} {n}\tau (X_{r:n})\) converges uniformly in u to the TTT as \(n \rightarrow \infty \) and \(\frac{r} {n} \rightarrow u\). Now, we present the asymptotic distribution, which is due to Barlow and Campo [66]. Let \(\left \{\phi _{n}(p) = \frac{H_{n}^{-1}(p)} {H_{n}^{-1}(1)},\ 0 \leq p \leq 1\right \}\) be the scaled TTT process. Define

$$\displaystyle{S_{n}(p) = \sqrt{n}\left \{\frac{H_{n}^{-1}(p)} {\sum _{1}^{n}X_{j:n}} -\phi (p)\right \}}$$

for \(\frac{j-1} {n} \leq p \leq \frac{j} {n}\) and 1 ≤ j ≤ n, with \(S_{n}(0) = S_{n}(1) = 0\). Upon using

$$\displaystyle{\phi (u) = \frac{1} {\mu } \left \{(1 - u)Q(u) +\int _{ 0}^{u}Q(p)dp\right \},}$$

we see that

$$\displaystyle{\frac{H_{n}^{-1}( \frac{j} {n})} {H_{n}^{-1}(1)} =\int _{ 0}^{j/n}\frac{F_{n}^{-1}(u)} {\sum X_{j:n}} d\nu _{n}(u) + \left (1 - \frac{j} {n}\right )\frac{X_{j:n}} {\sum X_{j:n}}}$$

converges to

$$\displaystyle{\int _{0}^{u}\frac{Q(p)} {\mu } dp + \frac{(1 - u)} {\mu } Q(u)}$$

with probability one and uniformly in 0 ≤ u ≤ 1 as \(n \rightarrow \infty \), where \(\nu _{n}(u)\) puts mass \(\frac{1} {n}\) at \(u = \frac{j} {n}\). Next,

$$\displaystyle\begin{array}{rcl} S_{n}\left ( \frac{j} {n}\right )& =& \sqrt{n}\left (\frac{H_{n}^{-1}( \frac{j} {n})} {\sum X_{j:n}} -\phi (u)\right ) {}\\ & \doteq & \int _{0}^{ \frac{j} {n} }\sqrt{n}\left (\frac{X_{[nu]:n}} {\sum X_{j:n}} -\frac{Q(u)} {\mu } \right )d\nu _{n}(u) + \left (1 - \frac{j} {n}\right )\left (\frac{X_{j:n}} {\sum X_{j:n}} -\frac{{F}^{-1}( \frac{j} {n})} {\mu } \right ), {}\\ \end{array}$$

where [t] denotes the greatest integer contained in t. Then,

$$\displaystyle{\lim _{n\rightarrow \infty }\sqrt{n}\left (\frac{H_{n}^{-1}(u)} {\sum X_{j:n}} -\phi (u)\right ) =\int _{ 0}^{u}\theta (p)dp + (1 - u)\theta (u),}$$

with

$$\displaystyle{\theta (u) = -\frac{q(u)} {\mu } A(u) + \frac{Q(u)} {{\mu }^{2}} \int _{0}^{1}A(p)q(p)dp}$$

and

$$\displaystyle{\lim _{n\rightarrow \infty }\sqrt{n}\left \{\frac{X_{[nu]:n}} {\sum X_{j:n}} -\frac{Q(u)} {\mu } \right \} =\theta (u).}$$

In the above, \(\{A(u),0 \leq u \leq 1\}\) is the Brownian bridge process.

5.3 Relationships with Other Curves

The similarity between the TTT and the Lorenz curve used in economics, and the corresponding results, have been discussed by Chandra and Singpurwalla [134] and Pham and Turkkan [493]. If X is a non-negative random variable with finite mean, the Lorenz curve is defined as

$$\displaystyle{ L(u) = \frac{1} {\mu } \int _{0}^{u}Q(p)dp, }$$
(5.15)

which is itself a continuous distribution function with L(0) = 0 and L(1) = 1. It is a bow-shaped curve below the diagonal of the unit square. When used as a measure of inequality in economics, the more the bow is bent, the greater the inequality. Also, L(u) is convex and increasing, with L(u) ≤ u for 0 ≤ u ≤ 1. The Lorenz curve determines the distribution F up to a scale factor. Two well-known measures of inequality related to the Lorenz curve are the Gini index and the Pietra index. There are many analytic expressions for calculating the Gini index, including

$$\displaystyle\begin{array}{rcl} G = 2\int _{0}^{1}(u - L(u))du = 1 - 2\int _{ 0}^{1}L(u)du.& &{}\end{array}$$
(5.16)

In addition,

$$\displaystyle{G = 1 - {2\mu }^{-1}\int _{ 0}^{1}\int _{ 0}^{u}Q(p)dp\,du = 1 {-\mu }^{-1}E(X_{ 1:2}),}$$

where \(X_{1:2}\) is the smaller of a sample of size 2 from the population.

Next, the Pietra index is obtained from the maximum vertical deviation between L(u) and the line L(u) = u, given by

$$\displaystyle{ P {=\mu }^{-1}\int _{ 0}^{F(\mu )}(\mu -Q(p))dp = F(\mu ) - L(F(\mu )). }$$
(5.17)

It can be seen that P is \(\frac{1} {2\mu }\int _{0}^{\infty }\vert x -\mu \vert f(x)dx\), half the relative mean deviation. A detailed account of the results concerning L(u) and G can be found in Kleiber and Kotz [341].
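These indices are easily computed numerically. A sketch for the unit-mean exponential distribution, for which L(u) = u + (1 − u)log(1 − u), G = 1∕2, and P = e^{−1}:

```python
# Sketch: Gini index via (5.16) and Pietra index via (5.17) for the
# unit-mean exponential, where L(u) = u + (1-u) log(1-u).
import math

L = lambda u: u + (1.0 - u) * math.log(1.0 - u)

m = 20000
h = 1.0 / m
G = 1.0 - 2.0 * h * sum(L((k + 0.5) * h) for k in range(m))   # via (5.16)

u_mu = 1.0 - math.exp(-1.0)            # F(mu) for the unit exponential
P = u_mu - L(u_mu)                     # via (5.17)

print(round(G, 4), round(P, 4))        # 0.5 and 0.3679
```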

The cumulative Lorenz curve of X is given by

$$\displaystyle\begin{array}{rcl} CL(u) =\int _{ 0}^{1}L(u)du = \frac{1} {\mu } \int _{0}^{1}\int _{ 0}^{u}Q(p)dp\,du.& &{}\end{array}$$
(5.18)

Chandra and Singpurwalla [133] observed that both L(u) and \(L^{-1}(u)\) are distribution functions, L is convex, \(L^{-1}\) is concave, and that L(u) is related to the mean residual life function m(x). In the quantile set-up, the Lorenz curve can be related to all the basic reliability functions. For example, we have from (2.34) and (5.15) that

$$\displaystyle\begin{array}{rcl} Q(u) + M(u)& =& \frac{1} {1 - u}\int _{u}^{1}Q(p)dp, {}\\ \mu -\int _{0}^{u}Q(p)dp& =& (1 - u)(Q(u) + M(u)), {}\\ \mu [1 - L(u)]& =& (1 - u)(Q(u) + M(u)), {}\\ \end{array}$$

and so

$$\displaystyle{M(u) = \frac{\mu (1 - L(u))} {1 - u} - Q(u) =\mu \left [\frac{1 - L(u)} {1 - u} - L^{\prime}(u)\right ].}$$

Now, H(u) is recovered from (2.37) and V (u) from (2.46), after substituting for M(u). A much simpler expression results for the reversed mean residual quantile function R(u) as

$$\displaystyle{R(u) = Q(u) -\frac{\mu L(u)} {u} =\mu [L^{\prime}(u) - {u}^{-1}L(u)].}$$

Also,

$$\displaystyle{{[\Lambda (u)]}^{-1} = R(u) + uR^{\prime}(u)}$$

and

$$\displaystyle{D(u) = \frac{1} {u}\int _{0}^{u}{R}^{2}(p)dp.}$$

Example 5.2.

The Pareto distribution is one of the basic distributions used in modelling income data and it plays a role similar to the exponential distribution in reliability. Its quantile function is (Table 1.1)

$$\displaystyle{Q(u) =\sigma {(1 - u)}^{-\frac{1} {\alpha } }}$$

and so we obtain the following expressions:

$$\displaystyle\begin{array}{rcl} L(u)& =& 1 - {(1 - u)}^{1-\frac{1} {\alpha } }\quad \text{ since }\mu = \frac{\sigma \alpha } {\alpha -1},\ \alpha > 1, {}\\ M(u)& =& \frac{\sigma \alpha {(1 - u)}^{1-\frac{1} {\alpha } }} {(\alpha -1)(1 - u)} -\sigma {(1 - u)}^{-\frac{1} {\alpha } } = \frac{\sigma } {\alpha -1}{(1 - u)}^{-\frac{1} {\alpha } }, {}\\ H(u)& =& {[M(u) - (1 - u)M^{\prime}(u)]}^{-1} = \frac{\alpha {(1 - u)}^{\frac{1} {\alpha } }} {\sigma }, {}\\ V (u)& =& \frac{1} {1 - u}\int _{u}^{1}{M}^{2}(p)dp = \frac{{\sigma }^{2}\alpha } {{(\alpha -1)}^{2}(\alpha -2)}{(1 - u)}^{-\frac{2} {\alpha } },\quad \alpha > 2. {}\\ \end{array}$$

The functions Λ(u), R(u) and D(u) can be found similarly.

Chandra and Singpurwalla [134] obtained the following relationships between T(u), L(u) and the sample analogs corresponding to them:

  1. (a)
    $$\displaystyle{ T(u) = (1 - u)Q(u) +\mu L(u). }$$
    (5.19)

    Equation (5.19) is obtained by integrating by parts the right-hand side of (5.6) and then using (5.15). Since Q(u) = μ L ′(u), (5.19) has the alternative form

    $$\displaystyle{T(u) =\mu [(1 - u)L^{\prime}(u) + L(u)],}$$

    or equivalently

    $$\displaystyle{\phi (u) = (1 - u)L^{\prime}(u) + L(u).}$$

    Now, upon treating the last relationship as a linear differential equation in u and solving it, we obtain an integral expression for L(u) as

    $$\displaystyle{L(u) = (1 - u)\int _{0}^{u} \frac{\phi (p)} {{(1 - p)}^{2}}dp.}$$
  2. (b)

    We also have

    $$\displaystyle{C\phi (u) = 2CL(u),}$$

    where \(C\phi (u) =\int _{ 0}^{1}\phi (p)dp = \frac{1} {\mu } \int _{0}^{1}T(p)\,dp\) is called the cumulative total time on test transform. To establish the above assertion, we note that

$$\displaystyle\begin{array}{rcl} \int _{0}^{1}\int _{ 0}^{u}Q(p)dp\,du& =& \int _{0}^{1}(1 - u)Q(u)du\quad \text{(by integration by parts)}, {}\\ \int _{0}^{u}(1 - p)q(p)dp& =& (1 - u)Q(u) +\int _{ 0}^{u}Q(p)dp. {}\\ \end{array}$$

    Thus, we get

    $$\displaystyle\begin{array}{rcl} C\phi (u)& =& \frac{1} {\mu } \int _{0}^{1}\int _{ 0}^{u}(1 - p)q(p)dpdu {}\\ & =& \frac{1} {\mu } \int _{0}^{1}\left \{(1 - u)Q(u) +\int _{ 0}^{u}Q(p)dp\right \}du {}\\ & =& \frac{1} {\mu } \int _{0}^{1}\left \{\int _{ 0}^{u}Q(p)dp\right \}du + \frac{1} {\mu } \int _{0}^{1}\left \{\int _{ 0}^{u}Q(p)dp\right \}du {}\\ & =& 2CL(u), {}\\ \end{array}$$

    as required.

  3. (c)

    \(G = 1 - C\phi (u)\), which is seen as follows:

    $$\displaystyle\begin{array}{rcl} G& =& 1 - {2\mu }^{-1}\int _{ 0}^{1}\left \{\int _{ 0}^{u}Q(p)dp\right \}du {}\\ & =& 1 - 2CL(u) = 1 - C\phi (u)\text{ (by using (b)).} {}\\ \end{array}$$

    If we denote the sample Lorenz curve and the sample Gini index by

    $$\displaystyle{L_{n}(u) = \frac{\sum _{r=1}^{[nu]}X_{r:n}} {\sum _{r=1}^{n}X_{r:n}},}$$

    and

    $$\displaystyle{G_{n} = \frac{\sum _{r=1}^{n-1}r(n - r)(X_{r+1:n} - X_{r:n})} {(n - 1)\sum _{r=1}^{n}X_{r:n}},}$$

    respectively, and the cumulative total time on test statistic by

    $$\displaystyle{V _{n} = \frac{1} {n - 1}\sum _{r=1}^{n-1}\phi _{ r:n},}$$

    then we have

$$\displaystyle{\phi _{r:n} = L_{n}\left ( \frac{r} {n}\right ) + \frac{(n - r)X_{r:n}} {\sum _{j=1}^{n}X_{j:n}} }$$

    and

    $$\displaystyle{V _{n} = 1 - G_{n}.}$$

    Chandra and Singpurwalla [134] also pointed out the potential of the Lorenz curve in comparing the heterogeneity in survival data and also in characterizing the extremes of life distributions. The latter aspect is illustrated by the following theorem.
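The sample identities above are easy to check directly; a sketch on a small illustrative ordered sample (the values are arbitrary):

```python
# Sketch: check phi_{r:n} = L_n(r/n) + (n-r) X_{r:n} / sum(X) and
# V_n = 1 - G_n on an arbitrary small ordered sample.
x = [1.0, 2.0, 4.0]
n = len(x)
S = sum(x)

xs = [0.0] + x
tau = [0.0]
for j in range(1, n + 1):
    tau.append(tau[-1] + (n - j + 1) * (xs[j] - xs[j - 1]))
phi = [tau[r] / tau[n] for r in range(1, n + 1)]

Ln = [sum(x[:r]) / S for r in range(1, n + 1)]          # sample Lorenz curve
ok = all(abs(phi[r - 1] - (Ln[r - 1] + (n - r) * x[r - 1] / S)) < 1e-12
         for r in range(1, n + 1))

Vn = sum(phi[:n - 1]) / (n - 1)                          # cumulative TTT statistic
Gn = sum(r * (n - r) * (x[r] - x[r - 1]) for r in range(1, n)) / ((n - 1) * S)
print(ok, abs(Vn - (1.0 - Gn)) < 1e-12)                  # True True
```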

Theorem 5.1.

If X is IHR with mean μ, then

$$\displaystyle{L_{G}(u) \leq L_{F}(u) \leq L_{D}(u),\quad 0 \leq u \leq 1,}$$

and if X is DHR with mean μ, then

$$\displaystyle{L_{F}(u)\left \{\begin{array}{@{}l@{\quad }l@{}} \leq L_{G}(u),\quad &0 < u \leq 1 \\ \geq 0, \quad &0 \leq u < 1 \\ = 1, \quad &u = 1. \end{array} \right.}$$

Here, F and G are the distribution functions of X and exponential variable with same mean μ, respectively, and D is the distribution degenerate at μ.

The distribution which is degenerate at μ has \(h(x) = \infty \) at μ, and so \(L_{D}(u) = u\) characterizes distributions which are most IHR. Likewise, distributions with L(u) = 0 for u < 1 and L(u) = 1 for u = 1 are the most DHR.

Pham and Turkkan [493] established more results in this direction. They pointed out that ϕ(u) strictly increases in the unit square with ϕ(0) = 0 and ϕ(1) = 1. Moreover,

  1. (a)

\(\phi (F(\mu )) = 1 -\frac{E(\vert X-\mu \vert )} {2\mu } \);

  2. (b)

\(\phi (\frac{1} {2}) = \frac{1} {2} + \frac{\text{Med}\,X - E\vert X-\text{Med}\,X\vert } {2\mu }\);

  3. (c)

    In the unit square, the area between ϕ(u) and L(u) equals the area below L(u). The area above ϕ(u) is G;

  4. (d)

    \(L(u) = (1 - u)\int _{0}^{u} \frac{\phi (p)} {{(1 - p)}^{2}}dp\);

  5. (e)

If X is NBUE, then the Pietra index is less than the reliability at μ and \(E(\vert X -\text{Med}\,X\vert ) < \text{Med}\,X\);

  6. (f)

When \(\frac{1} {2} < G \leq 1\) \((0 \leq G < \frac{1} {2})\) and F(x) is a family of IHR (DHR) distributions with common mean μ, F(x) becomes more IHR (DHR) as L(u) gets closer to the diagonal and ϕ(u) gets closer to the upper (lower) side. Further, when \(G = \frac{1} {2}\), F(x) is exponential. When \(0 \leq P < {e}^{-1}\), X is IHR, and the closer P is to zero, the more IHR X becomes. X is exponential when \(P = {e}^{-1}\). Also, \({e}^{-1} < P < 1\) corresponds to DHR, and P → 1 corresponds to the most DHR.

Another curve that has been used in the context of income inequality is the Bonferroni curve. For a non-negative random variable X, the first moment distribution of X is defined by the distribution function

$$\displaystyle{F_{1}(x) = \frac{\int _{0}^{x}tf(t)dt} {\mu }.}$$

The Bonferroni curve is defined in the orthogonal plane as (F(x), B 1(x)) within the unit square, where

$$\displaystyle{B_{1}(x) = \frac{F_{1}(x)} {F(x)}.}$$

In terms of quantile functions, we have

$$\displaystyle{ B(u) = B_{1}(Q(u)) = \frac{\int _{0}^{u}Q(p)dp} {\mu u}. }$$
(5.20)

One may refer to Giorgi [218], Giorgi and Crescenzi [219] and Pundir et al. [498] and the references therein for a study of (5.20) and its properties. As u → 0, B(u) has the indeterminate form \(\frac{0} {0}\) and hence the curve does not begin from the origin. It is strictly increasing but can be convex or concave in parts of the plane. Several results concerning B 1(x) have been given by Pundir et al. [498]. We now make a comparative study of B(u) with L(u) and ϕ(u). First, we note that B(u) characterizes the distribution of X through

$$\displaystyle{ Q(u) =\mu (B(u) + uB^{\prime}(u)). }$$
(5.21)

Also,

$$\displaystyle{B(u) = {u}^{-1}L(u)}$$

and

$$\displaystyle{\phi (u) = \frac{(1 - u)Q(u)} {\mu } + \frac{1} {\mu } \int _{0}^{u}Q(p)dp,}$$

or equivalently

$$\displaystyle{ \phi (u) = B(u) + u(1 - u)B^{\prime}(u). }$$
(5.22)

Solving (5.22) as a linear differential equation, we get

$$\displaystyle{B(u) = \frac{1 - u} {u} \int _{0}^{u} \frac{\phi (p)} {{(1 - p)}^{2}}dp,}$$

relating the scaled TTT and the Bonferroni curve. From (5.20), we have

$$\displaystyle{\mu uB(u) =\int _{ 0}^{u}Q(p)dp =\mu -\int _{ u}^{1}Q(p)dp,}$$

and hence

$$\displaystyle{M(u) = \frac{\mu (1 - uB(u))} {1 - u} - Q(u) =\mu \left \{\frac{1 - B(u)} {1 - u} - uB^{\prime}(u)\right \}}$$

by virtue of (5.21). Rewriting the above equation as

$$\displaystyle{B^{\prime}(u) + \frac{B(u)} {u(1 - u)} = \frac{1} {u(1 - u)} -\frac{M(u)} {u\mu } }$$

and solving it, we see that B(u) is uniquely determined by M(u) as

$$\displaystyle{B(u) = \frac{1 - u} {u} \int _{0}^{u} \frac{1} {1 - p}\left \{ \frac{1} {1 - p} -\frac{M(p)} {\mu } \right \}dp.}$$

A more concise relationship exists between B(u) and the reversed mean residual quantile function R(u) in the form

$$\displaystyle{R(u) =\mu uB^{\prime}(u).}$$

As in the case of L(u), all other reliability functions can be derived using the relations they have with M(u) and R(u). Pundir et al. [498] showed that the Bonferroni index

$$\displaystyle{B = 1 -\int _{0}^{1}B(u)du}$$

is such that

$$\displaystyle{B \leq \frac{1} {2}(1 + G)\quad \text{ and }\quad B \leq 1 -\frac{V } {2},\;V = 1 - G.}$$

The Leimkuhler curve, which is closely related to the Lorenz curve, has also been discussed recently for its relationships with the reliability functions. It is used in economics as a plot of the cumulative proportion of productivity against the cumulative proportion of sources, and also in studying the concentration of bibliometric distributions in information science. A general definition of the curve is given in Sarabia [518], and methods of generating such curves are detailed in Sarabia et al. [519]. Balakrishnan et al. [60] have pointed out the relationships between reliability functions and the Leimkuhler curve. In terms of the quantile function, the Leimkuhler curve is defined as

$$\displaystyle\begin{array}{rcl} K(u)& =& \frac{1} {\mu } \int _{1-u}^{1}Q(p)dp \\ & =& \frac{1} {\mu } \left \{\int _{0}^{1}Q(p)dp -\int _{ 0}^{1-u}Q(p)dp\right \} \\ & =& 1 -\frac{1} {\mu } \int _{0}^{1-u}Q(p)dp. {}\end{array}$$
(5.23)

Evidently,

$$\displaystyle{K(u) = 1 - L(1 - u)\quad \text{or}\quad K(1 - u) = 1 - L(u)}$$

and so K(u) characterizes the distribution of X. The relation in (5.23) gives

$$\displaystyle\begin{array}{rcl} M(u)& =& \frac{\mu \{1 - K(1 - u)\}} {1 - u} - Q(u) {}\\ & =& \mu \left \{\frac{1 - K(1 - u)} {1 - u} - K^{\prime}(1 - u)\right \}. {}\\ \end{array}$$

Similarly, from

$$\displaystyle{\mu (1 - K(u)) =\int _{ 0}^{1-u}Q(p)dp}$$

and the definition of R(u), we obtain

$$\displaystyle{\mu (1 - K(u)) = (1 - u)\{Q(1 - u) - R(1 - u)\}.}$$

Since

$$\displaystyle{Q(1 - u) =\mu K^{\prime}(u),}$$

upon combining the expressions, we obtain

$$\displaystyle{R(u) =\mu \left [K^{\prime}(1 - u) -\frac{1 - K(1 - u)} {u} \right ].}$$

Regarding the geometric properties, it is seen from the definition that K(u) is continuous, concave and increasing with K(0) = 0 and K(1) = 1. The main difference between the Lorenz curve and the Leimkuhler curve K(u) is that in the Lorenz curve the sources are arranged in increasing order of productivity, while in the Leimkuhler curve the sources are arranged in decreasing order. The expressions of B(u), L(u) and K(u) for some distributions are presented in Table 5.2.

Table 5.2 Expressions of L(u), B(u) and K(u) for some distributions
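The relations B(u) = u^{−1}L(u) and K(u) = 1 − L(1 − u) can be illustrated numerically for the unit exponential, where L(u) = u + (1 − u)log(1 − u); the Leimkuhler curve is also checked against its quantile definition (a sketch, with μ = 1):

```python
# Sketch: for the unit exponential, check B(u) = L(u)/u and
# K(u) = 1 - L(1-u), the latter also against the quantile definition
# K(u) = integral of Q(p) = -log(1-p) over (1-u, 1).
import math

L = lambda u: u + (1.0 - u) * math.log(1.0 - u)
B = lambda u: L(u) / u                     # Bonferroni curve
K = lambda u: 1.0 - L(1.0 - u)             # Leimkuhler curve

for u in (0.2, 0.5, 0.8):
    m = 20000
    h = u / m
    k_num = h * sum(-math.log(1.0 - ((1.0 - u) + (j + 0.5) * h))
                    for j in range(m))
    assert abs(k_num - K(u)) < 1e-3        # quantile definition agrees
print("ok")
```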

5.4 Characterizations of Ageing Concepts

In this section, we discuss the role of TTT in detecting different ageing properties. In this regard, the new definitions offered below in terms of TTT provide alternative ways of interpreting and analysing lifetime data. The proofs given here assume that F is continuous and strictly increasing.

Theorem 5.2 (Barlow and Campo  [66]). 

A lifetime random variable X is IHR (DHR) if and only if the scaled transform ϕ(u) is concave (convex) for 0 ≤ u ≤ 1.

From (5.8), we have

$$\displaystyle{T^{\prime}(u) = \frac{1} {H(u)}}$$

and so

$$\displaystyle{ \frac{1} {{H}^{2}(u)}H^{\prime}(u) = -T^{\prime\prime}(u).}$$

Thus, H′(u) is positive (negative), i.e., X is IHR (DHR), if and only if T″(u) is negative (positive). This is equivalent to the concavity (convexity) of T(u) or ϕ(u). It now follows that if ϕ(u) has an inflexion point u 0 such that 0 < u 0 < 1 and ϕ(u) is convex (concave) on [0, u 0] and concave (convex) on [u 0, 1], then X has a bathtub (upside-down bathtub)-shaped hazard quantile function. This can be used for constructing life distributions with BT (UBT) hazard quantile functions.
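A numerical sketch of Theorem 5.2 for an IHR example: the Weibull quantile function Q(u) = (−log(1−u))^{1∕k} with illustrative shape k = 2 has T′(u) = (1−u)q(u) = (1∕k)(−log(1−u))^{1∕k−1}, which should be decreasing, i.e. T(u) and ϕ(u) concave.

```python
# Sketch of Theorem 5.2: Weibull with shape k = 2 (IHR), so
# T'(u) = (1-u) q(u) = (1/k)(-log(1-u))^{1/k - 1} should be decreasing.
import math

k = 2.0
def t_prime(u):
    z = -math.log(1.0 - u)
    return (1.0 / k) * z ** (1.0 / k - 1.0)

grid = [0.01 * i for i in range(1, 100)]
vals = [t_prime(u) for u in grid]
print(all(a > b for a, b in zip(vals, vals[1:])))   # True: T' is decreasing
```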

Barlow and Campo [66] have also shown that if X is IHRA (DHRA), then \(\frac{\phi (u)} {u}\) is decreasing (increasing) in 0 < u < 1. This condition is not sufficient as seen from the following life distribution (Barlow [64]) which is not IHRA, but at the same time \(\frac{\phi (u)} {u}\) is decreasing:

$$\displaystyle{F(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} 0, \quad &0 \leq x < \frac{1} {2} \\ 1 -\exp [-(c + x)],\quad &x \geq \frac{1} {2}. \end{array} \right.}$$

In this regard, we have the following results.

Theorem 5.3 (Asha and Nair [39]). 

A necessary and sufficient condition for X to be DMTTF (IMTTF) is that \(\frac{\phi (u)} {u}\) is decreasing (increasing).

Theorem 5.4.

A necessary and sufficient condition for X to be IHRA (DHRA) is that

$$\displaystyle{ \frac{1} {t(u)}\int _{0}^{u} \frac{t(p)} {1 - p}dp \geq (\leq ) -\log (1 - u), }$$
(5.24)

where t(u) = T′(u).

The proof follows from (5.7), (5.8) and the definition of IHRA distributions.

Remark 5.1.

Since T(u) is the quantile function of the transformed distribution, t(u) is the corresponding quantile density function. From (5.7), \(t(u) = (1 - u)q(u)\) and so (5.24) is equivalent to

$$\displaystyle{ t(u) \leq (\geq ) - \frac{Q(u)} {\log (1 - u)}. }$$
(5.25)

Bergman [89] has proved that X is NBUE (NWUE) if and only if ϕ(u) ≥ u (ϕ(u) ≤ u). This follows from

$$\displaystyle\begin{array}{rcl} \phi (u) \geq u& \Leftrightarrow & \frac{1} {\mu } \int _{0}^{u}(1 - p)q(p)dp \geq u {}\\ & \Leftrightarrow & \frac{1} {\mu } [\mu -(1 - u)M(u)] \geq u {}\\ & \Leftrightarrow & M(u) \leq \mu. {}\\ \end{array}$$

The proof in the case of NWUE involves simply reversing the inequalities.
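A sketch of Bergman's criterion for a Weibull distribution with shape 2 (IHR, hence NBUE; an illustrative choice): the scaled transform ϕ(u) = T(u)∕μ should satisfy ϕ(u) ≥ u on a grid.

```python
# Sketch: for a Weibull with shape k = 2 (IHR and hence NBUE),
# phi(u) = T(u)/mu >= u on (0, 1), illustrating Bergman's criterion.
import math

k = 2.0
mu = math.gamma(1.0 + 1.0 / k)                  # Weibull mean

def T(u, m=2000):
    # midpoint quadrature of T'(p) = (1-p) q(p) = (1/k)(-log(1-p))^{1/k-1}
    h = u / m
    s = 0.0
    for j in range(m):
        z = -math.log(1.0 - (j + 0.5) * h)
        s += (1.0 / k) * z ** (1.0 / k - 1.0)
    return h * s

print(all(T(0.05 * i) / mu >= 0.05 * i for i in range(1, 20)))   # True
```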

Theorem 5.5 (Klefsjö  [333]). 

A lifetime random variable X is

  1. (a)

    DMRL (IMRL) if and only if \(\frac{1-\phi (u)} {1-u}\) is decreasing (increasing) in u;

  2. (b)

    HNBUE (HNWUE) if and only if

$$\displaystyle{\phi (u) \geq (\leq )1 -\exp [-\frac{Q(u)} {\mu } ],\quad 0 \leq u \leq 1.}$$

These results are direct consequences of (5.9) and the definition of HNBUE (HNWUE).

In view of the definitions of ageing concepts in the quantile set-up in Chap. 4 and the identities between T(u), Q(u), H(u) and M(u), more ageing classes can be characterized in terms of T(u) or ϕ(u) as follows.

Theorem 5.6.

We say that X is

  1. (a)

    NBUHR (NWUHR) if and only if t(u) ≤ (≥)t(0);

  2. (b)

NBUHRA (NWUHRA) if and only if \(-\frac{Q(u)} {\log (1-u)} \leq (\geq )t(0)\);

  3. (c)

    IHRA*t 0 if and only if

    $$\displaystyle{\int _{0}^{u} \frac{t(p)} {(1 - p)}dp \geq \frac{Q(u_{0})} {\log (1 - u_{0})}\log (1 - u)\text{ for all }u \geq u_{0};}$$
  4. (d)

    UBAE (UWAE) if and only if \(T(u) \leq (\geq )\mu - (1 - u)M(1)\) , where \(T(1) =\lim _{u\rightarrow 1-}T(u)\) is finite;

  5. (e)

    DMRLHA (IMRLHA) if and only if

    $$\displaystyle{- \frac{1} {Q(u)}\log (1 -\phi (u))}$$

    is increasing (decreasing) in u;

  6. (f)

    DVRL (IVRL) if and only if

    $$\displaystyle{\int _{u}^{1}{\left (\frac{1 -\phi (p)} {1 - p} \right )}^{2}dp \leq (\geq )\frac{{(1 -\phi (u))}^{2}} {1 - u};}$$
  7. (g)

    NBU (NWU) if and only if

$$\displaystyle{\int _{0}^{u+v-uv}\frac{t(p)dp} {1 - p} \leq (\geq )Q(u) + Q(v),\;0 < v < 1,\;u + v - uv < 1;}$$
  8. (h)

NBU-\(t_{0}\) (NWU-\(t_{0}\)) if and only if

$$\displaystyle{\int _{0}^{u+u_{0}-uu_{0} }\frac{t(p)dp} {1 - p} \leq (\geq )Q(u) + Q(u_{0})}$$

    for some 0 < u 0 < 1 and all u;

  9. (i)

    NBU*u 0 (NWU*u 0 ) if and only if

$$\displaystyle{\int _{0}^{u+v-uv} \frac{t(p)} {1 - p}dp \leq (\geq )Q(u) + Q(v)}$$

    for some v ≥ u 0 and all u.

Note that in (g)–(i), Q(s) is evaluated as \(\int _{0}^{s}\frac{t(p)dp} {1-p}\).

Ahmad et al. [25] defined a new ageing class of life distributions called the new better than used in total time on test transform order (NBUT). They defined the class as distributions for which the inequality

$$\displaystyle{\int _{0}^{F_{t}^{-1}(u) }\bar{F}(x + t)dx \leq \bar{ F}(t)\int _{0}^{{F}^{-1}(u) }\bar{F}(x)dx}$$

is satisfied. It was proved that the NBUT class has the following preservation properties:

  1. (i)

    Let \(X_{1},X_{2},\ldots,X_{N}\) be a sequence of independent and identically distributed random variables and N be independent of the X i ’s. If X i ’s are NBUT, so is \(\min (X_{1},X_{2},\ldots,X_{N})\);

  2. (ii)

    The NBUT class is preserved under the formation of series systems provided that the constituent lifetime variables are independent and identically distributed;

  3. (iii)

    If X 1, X 2 and X 3 are independent and identically distributed, then

    $$\displaystyle{E\min (X_{1},X_{2},X_{3}) \geq \frac{2} {3}E\min (X_{1},X_{2}).}$$

This result is used to test exponentiality against non-exponential NBUT alternatives.
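At the exponential boundary the inequality in (iii) holds with equality, since the expected minimum of m unit exponentials is 1∕m; a small Monte Carlo sketch (the sample size and seed are arbitrary):

```python
# Sketch: property (iii) at the exponential boundary, where
# E min(X1,X2,X3) = 1/3 equals (2/3) E min(X1,X2) = (2/3)(1/2).
import random

random.seed(0)
N = 200_000
m3 = sum(min(random.expovariate(1.0) for _ in range(3)) for _ in range(N)) / N
m2 = sum(min(random.expovariate(1.0) for _ in range(2)) for _ in range(N)) / N

print(abs(m3 - 1.0 / 3.0) < 0.01, m3 >= (2.0 / 3.0) * m2 - 0.01)
```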

5.5 Some Generalizations

Several generalizations of the TTT have been proposed in the literature. The earliest one is that of Barlow and Doksum [67]. If F and G are absolutely continuous distribution functions with positive right continuous densities f and g, respectively, then the generalized total time on test transform is defined as

$$\displaystyle{H_{F}^{-1}(x) =\int _{ { F}^{-1}(0)}^{{F}^{-1}(x) }g[{G}^{-1}F(t)]dt,\quad 0 \leq x \leq 1.}$$

As before, \(H_{F}(\cdot )\) is a distribution function and \(H_{G}^{-1}(u) = u\), 0 ≤ u ≤ 1.

The generalized version can also be shown to possess properties similar to T(u). For instance, the density C F of H F is such that

$$\displaystyle{C_{F}(H_{F}^{-1}(u)) = \frac{f(Q_{F}(u))} {g(Q_{G}(u))} = h_{F}(Q(u)),\quad 0 \leq u \leq 1,}$$

where

$$\displaystyle{h_{F}(x) = \frac{f(x)} {g[{G}^{-1}(F(x))]}}$$

is referred to as the generalized failure rate function. Further, if \(S_{n}(\cdot )\) is the empirical distribution function based on a sample of size n from the life distribution F, then \(H_{F}^{-1}\) is estimated as

$$\displaystyle{H_{S_{n}}^{-1}(u) =\int _{ 0}^{S_{n}^{-1}(u) }g[{G}^{-1}S_{ n}(t)]dt}$$

and so

$$\displaystyle{H_{S_{n}}^{-1}\left ( \frac{r} {n}\right ) =\int _{ 0}^{X_{r:n} }g[{G}^{-1}F_{ n}(u)]du =\sum _{ j=1}^{r}g\left [{G}^{-1}\left (\frac{j - 1} {n} \right )\right ](X_{j:n} - X_{j-1:n})}$$

for \(r = 1,2,\ldots,n\). Neath and Samaniego [468] proved that if G is exponential and F is IFRA, then \(\frac{H_{F}^{-1}(u)} {u}\) is decreasing in u. Many reliability properties of the generalized transform, like those of T(u), are still open problems. For a study of the order relations of the general form, we refer to Bartoszewicz [73]. Yet another extension, due to Li and Shaked [388], is of the form

$$\displaystyle{T_{2}(u) =\int _{ 0}^{u}h(p)q(p)dp,}$$

where h(u) is positive on (0, 1) and zero elsewhere. The usual TTT is recovered when \(h(p) = 1 - p\). While the main focus of Li and Shaked [388] is on stochastic orders, they also point out some applications of the order considered by them in the reliability context. Various results regarding orderings can be seen in Bartoszewicz [74, 75] and Bartoszewicz and Benduch [76].
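Returning to the empirical estimate \(H_{S_{n}}^{-1}(r/n)\) above, the spacings formula can be coded directly. A minimal sketch; the helper name `generalized_ttt` and the choice of a standard exponential reference distribution G are illustrative assumptions:

```python
import numpy as np

def generalized_ttt(sample, g_of_Ginv):
    """Empirical generalized TTT H_{S_n}^{-1}(r/n), r = 1,...,n, computed
    from the spacings formula; g_of_Ginv(v) evaluates g(G^{-1}(v))."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    spacings = np.diff(np.concatenate(([0.0], x)))   # X_{j:n} - X_{j-1:n}
    weights = g_of_Ginv(np.arange(n) / n)            # g(G^{-1}((j-1)/n))
    return np.cumsum(weights * spacings)

# With G standard exponential, g(G^{-1}(v)) = 1 - v, and the ordinary TTT
# statistic is recovered; at r = n it reduces to the sample mean.
rng = np.random.default_rng(1)
x = rng.exponential(2.0, size=500)
H = generalized_ttt(x, lambda v: 1.0 - v)
```

Since the weights and the spacings are both nonnegative here, the resulting transform is nondecreasing in r, as a TTT should be.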

In a slightly different direction, Nair et al. [447] studied higher order TTT by applying Definition 5.3, recursively, to the transformed distributions.

Definition 5.6.

The TTT transform of order n (TTT-n) of the random variable X is defined recursively as

$$\displaystyle{ T_{n}(u) =\int _{ 0}^{u}(1 - p)t_{ n-1}(p)dp,\qquad n = 1,2,\ldots, }$$
(5.26)

where T 0(u) = Q(u) and \(t_{n}(u) = \frac{dT_{n}(u)} {du}\), provided that \(\mu _{n-1} =\int _{ 0}^{1}T_{n-1}(p)dp < \infty \).

The primary reasons for defining the above generalization are (i) the hierarchy of distributions generated by the iterative process reveals the reliability characteristics of the transformed models more clearly than T(u) does and (ii) the results obtained from (5.26) subsume those for T(u) = T 1(u) and will generate new models and properties. We denote by Y n the random variable with quantile function T n (u), mean μ n , hazard quantile function H n (u), and mean residual quantile function M n (u). Recall that T(u), the transform of order one, is a quantile function and consequently the successive transforms T n , \(n = 2,3,\ldots\), are also quantile functions with support (0, μ n ). Differentiating (5.26), we obtain the quantile density function of Y n as

$$\displaystyle\begin{array}{rcl} t_{n}(u) = (1 - u)t_{n-1}(u) = {(1 - u)}^{n}t_{ 0}(u) = {(1 - u)}^{n}q(u),& &{}\end{array}$$
(5.27)

and hence

$$\displaystyle{t_{n}(u) = {[H_{n-1}(u)]}^{-1} = {(1 - u)}^{n-1}{(H(u))}^{-1},}$$

where H(u) is the hazard quantile function of X = Y 0. Thus, we have an identity connecting the hazard quantile function of the baseline distribution F(x) of X and that of Y n in the form

$$\displaystyle{ H(u) = {(1 - u)}^{n}H_{ n}(u),\;n = 0,1,2,\ldots. }$$
(5.28)

Using (5.9), we have

$$\displaystyle{T_{n+1}(u) =\mu _{n} - (1 - u)M_{n}(u),}$$

or equivalently

$$\displaystyle{t_{n+1}(u) = M_{n}(u) - (1 - u)M_{n}^{\prime}(u).}$$

This, along with \(t_{n+1}(u) = {(1 - u)}^{n}t_{1}(u)\) and

$$\displaystyle{t_{1}(u) = t(u) = M(u) - (1 - u)M^{\prime}(u),}$$

yields a relationship between the mean residual quantile functions of X and Y n as

$$\displaystyle{ M_{n}(u) - (1 - u)M_{n}^{\prime}(u) = {(1 - u)}^{n}\{M(u) - (1 - u)M^{\prime}(u)\}. }$$
(5.29)

Incidentally, the definition in (5.26) also holds for negative integers, since Q(u) can be thought of as a transform of T  − 1(u) and so on. Thus,

$$\displaystyle{t_{-n}(u) = {(1 - u)}^{-n}q(u)}$$

and

$$\displaystyle{H(u) = {(1 - u)}^{-n}H_{ -n}(u),\quad n = 1,2,\ldots }$$

A remarkable feature of the recurrent transform T n (u) is that the sequence \(\langle H_{n}(u)\rangle\) increases for positive n and decreases for negative n. Thus, Y n provides a life distribution whose failure rate is larger (smaller) than that of Y n − 1 when n is positive (negative). It is therefore of interest to know and compare the ageing patterns of Y n and Y n − 1.
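Both the identity (5.28) and the monotonicity of the sequence \(\langle H_{n}(u)\rangle\) are easy to check numerically, using only \(t_{n}(u) = {(1-u)}^{n}q(u)\) from (5.27) and \(H_{n}(u) = [(1-u)t_{n}(u)]^{-1}\). A minimal sketch; the Weibull baseline with σ = 1 and λ = 0.8 is an arbitrary assumption:

```python
import numpy as np

# Weibull quantile density q(u) = (sigma/lam)(1-u)^{-1}(-log(1-u))^{1/lam - 1}
sigma, lam = 1.0, 0.8
u = np.linspace(0.01, 0.99, 99)
q = (sigma / lam) * (1.0 - u) ** (-1) * (-np.log(1.0 - u)) ** (1.0 / lam - 1.0)

def H_n(n):
    t_n = (1.0 - u) ** n * q          # quantile density of Y_n, Eq. (5.27)
    return 1.0 / ((1.0 - u) * t_n)    # hazard quantile function of Y_n
```

On the grid, `H_n(0)` coincides with \({(1-u)}^{n}H_{n}(u)\) for every n, and `H_n(n)` dominates `H_n(n-1)` pointwise, in agreement with the remark above.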

Theorem 5.7.

  1. (i)

    If X is IHR, then Y n is IHR for all n;

  2. (ii)

    If X is DHR, then Y n is DHR (IHR) if \(Q(u) \geq (\leq )Q_{L}(k, \frac{1} {n})\) and is bathtub shaped if there exists a u 0 for which \(Q(u) \geq Q_{L}(k, \frac{1} {n})\) in [0,u 0 ] and \(Q(u) \leq Q_{L}(k, \frac{1} {n})\) in [u 0 ,1], where Q L (α,C) is the quantile function of the Lomax distribution (see Table 1.1).

Proof.

Since \(t_{n+1}(u) = {(1 - u)}^{n}t_{1}(u)\), we have

$$\displaystyle{t_{n+1}^{\prime}(u) = {(1 - u)}^{n-1}\{(1 - u)t_{ 1}^{\prime}(u) - nt_{1}(u)\}.}$$

Thus,

$$\displaystyle\begin{array}{rcl} X\text{ is IHR }& \Rightarrow & t_{1}(u)\text{ is decreasing } {}\\ & \Rightarrow & t_{n+1}^{\prime}(u) < 0 {}\\ & \Rightarrow & T_{n+1}(u)\text{ is concave } {}\\ & \Rightarrow & Y _{n}\text{ is IHR}. {}\\ \end{array}$$

Similarly, when X is DHR, T 1(u) is convex and accordingly

$$\displaystyle\begin{array}{rcl} Y _{n}\text{ is DHR (IHR) }& \Rightarrow & (1 - u)t_{1}^{\prime}(u) \geq (\leq )nt_{1}(u) {}\\ & \Rightarrow & t_{1}(u) \geq (\leq )k{(1 - u)}^{-n} {}\\ & \Rightarrow & Q(u) \geq (\leq )Q_{L}\left (k, \frac{1} {n}\right ). {}\\ \end{array}$$

The last part follows from the definition of bathtub-shaped hazard quantile function in Chap. 4.

In a similar manner, by backward iteration of Q(u) = T 0(u) and using

$$\displaystyle{t_{1}^{\prime}(u) = {(1 - u)}^{-n}(n{(1 - u)}^{-1}t_{ n+1}(u) + t_{n+1}^{\prime}(u)),}$$

we get the following result.

Theorem 5.8.

  1. (i)

    If Y n is DHR, then X is DHR;

  2. (ii)

    If Y n is IHR, then X is IHR (DHR) if \(T_{n}(u) \leq (\geq )Q_{B}(k{(n + 1)}^{-1},{(n + 1)}^{-1})\) , and is upside-down bathtub shaped if there exists a u 0 for which \(T_{n}(u) \leq Q_{B}(k{(n + 1)}^{-1},{(n + 1)}^{-1})\) in [0,u 0 ] and \(T_{n}(u) \geq Q_{B}(k{(n + 1)}^{-1},{(n + 1)}^{-1})\) in [u 0 ,1]. Here, Q B (R,C) denotes the quantile function of the rescaled beta distribution.

Using Theorems 5.7 and 5.8, it is possible to construct BT and UBT distributions with finite range. Generation of BT distributions is facilitated by the choice of DHR distributions for which t n + 1(u) has a point of inflexion. On the other hand, IHR distributions can provide UBT models provided t n + 1(u) has an inflexion point for negative integers n. The following examples illustrate the procedure.

Example 5.3.

Consider the Weibull distribution with

$$\displaystyle{Q(u) =\sigma {(-\log (1 - u))}^{\frac{1} {\lambda } }.}$$

In this case, we have

$$\displaystyle{q(u) = \frac{\sigma } {\lambda (1 - u)}{(-\log (1 - u))}^{\frac{1} {\lambda } -1}}$$

and

$$\displaystyle{t_{n}(u) = \frac{\sigma } {\lambda }{(1 - u)}^{n-1}{(-\log (1 - u))}^{\frac{1} {\lambda } -1}.}$$

Hence,

$$\displaystyle{t_{n}^{\prime}(u) = \frac{\sigma } {\lambda }{(1 - u)}^{n-2}{(-\log (1 - u))}^{\frac{1} {\lambda } -2}\left [\frac{1} {\lambda } - 1 + (n - 1)\log (1 - u)\right ].}$$

Thus, when 0 < λ < 1, T n + 1(u) is convex in [0, u 0] and concave in [u 0, 1], where

$$\displaystyle{u_{0} = 1 -\exp \left \{ \frac{\lambda -1} {(n - 1)\lambda }\right \}.}$$

It follows that Y n has BT hazard quantile function for n ≥ 1. Notice that with increasing values of n, the change point u 0 becomes larger. For λ ≥ 1 and every n, Y n is IHR.
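The change point in Example 5.3 can be verified numerically by locating the maximum of t n (u) on a fine grid; the values σ = 1, λ = 0.5 and n = 3 below are arbitrary choices:

```python
import numpy as np

# Weibull of Example 5.3:
# t_n(u) = (sigma/lam) (1-u)^{n-1} (-log(1-u))^{1/lam - 1}
sigma, lam, n = 1.0, 0.5, 3

def t_n(u):
    return (sigma / lam) * (1 - u) ** (n - 1) * (-np.log(1 - u)) ** (1 / lam - 1)

u = np.linspace(1e-6, 1 - 1e-6, 200001)
u_max = u[np.argmax(t_n(u))]                    # grid maximiser of t_n
u0 = 1 - np.exp((lam - 1) / ((n - 1) * lam))    # change point from the text
```

The grid maximiser agrees with the closed-form u 0 (≈ 0.393 for these values) to within the grid resolution.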

Example 5.4.

The Burr distribution with k = 1 (see Table 1.1) has

$$\displaystyle{Q(u) = {u}^{1/\lambda }{(1 - u)}^{-\frac{1} {\lambda } }}$$

and

$$\displaystyle{t_{n+1}^{\prime}(u) = \frac{1} {\lambda } {u}^{\frac{1} {\lambda } -2}{(1 - u)}^{n-\frac{1} {\lambda } -1}\left \{\frac{1} {\lambda } - 1 - u(n - 1)\right \}.}$$

Therefore, \(u_{0} = \frac{\frac{1} {\lambda } -1} {n-1}\) is a point of inflexion when \(\frac{1} {\lambda } > 1\), that is, when λ < 1. Thus, Y n is BT in this case.

Theorem 5.9.

  1. (i)

    X is DMRL implies that Y n is DMRL;

  2. (ii)

    Y n is IMRL implies that X is IMRL.

Proof.

Theorem 5.3 gives the necessary and sufficient condition for X to be DMRL as \({(1 - u)}^{-1}(\mu -T_{1}(u))\) is decreasing in u. This condition is equivalent to

$$\displaystyle{ \mu -T_{1}(u) - (1 - u)t_{1}(u) \leq 0. }$$
(5.30)

Further,

$$\displaystyle{T_{n+1}(u) =\int _{ 0}^{u}{(1 - p)}^{n}t_{ 1}(p)dp = {(1 - u)}^{n}T_{ 1}(u) + A(u),}$$

where

$$\displaystyle{A(u) = n\int _{0}^{u}{(1 - p)}^{n-1}T_{ 1}(p)dp > 0\text{ for all }u.}$$

This gives

$$\displaystyle\begin{array}{rcl} \mu _{n} - T_{n+1}(u) - (1 - u)t_{n+1}(u)& =& \mu _{n} - {(1 - u)}^{n}T_{ 1}(u) - {(1 - u)}^{n+1}t_{ 1}(u) - A(u) {}\\ & \leq & \mu _{n} - {(1 - u)}^{n}T_{ 1}(u) - {(1 - u)}^{n+1}t_{ 1}(u) {}\\ & \leq & \mu _{1} - T_{1}(u) - (1 - u)t_{1}(u) \leq 0. {}\\ \end{array}$$

Hence, Y n is DMRL by the condition in (5.30). This proves (i), and the proof of (ii) follows similarly by taking n as a negative integer.

Theorem 5.10.

  1. (i)

    X is IHRA implies that Y n is IHRA;

  2. (ii)

    Y n is DHRA implies that X is DHRA.

Proof.

We prove only (i) since the proof of (ii) follows on the same lines. In view of Theorem 5.2, X is IHRA if and only if u  − 1 T 1(u) is decreasing, or equivalently

$$\displaystyle{ t_{1}(u) \leq {u}^{-1}T_{1}(u). }$$
(5.31)

Considering T n (u), we can write

$$\displaystyle\begin{array}{rcl} t_{n+1}(u) - {u}^{-1}T_{ n+1}(u)& =& {(1 - u)}^{n}t_{ 1}(u) - {u}^{-1}{(1 - u)}^{n}T_{ 1}(u) - {u}^{-1}A(u) {}\\ & \leq & {(1 - u)}^{n}(t_{ 1}(u) - {u}^{-1}T_{ 1}(u)) \leq 0. {}\\ \end{array}$$

Result in (i) now follows by using (5.31).

Theorem 5.11.

  1. (i)

    X is NBUE implies that Y n is NBUE;

  2. (ii)

    Y n is NWUE implies that X is NWUE.

Proof.

Recall that X is NBUE if and only if \({u}^{-1}T_{1}(u) \geq \mu\) for all u. Hence,

$$\displaystyle\begin{array}{rcl}{ u}^{-1}T_{ n}(u) -\mu _{n}& =& {u}^{-1}\{{(1 - u)}^{n}T_{ 1}(u) + A(u)\} -\mu _{n} {}\\ & \geq & {u}^{-1}{(1 - u)}^{n}T_{ 1}(u) -\mu _{1} {}\\ & \geq & {(1 - u)}^{n}\{{u}^{-1}T_{ 1}(u) -\mu _{1}\} \geq 0 {}\\ \end{array}$$

which implies that Y n is NBUE. Part (ii) follows similarly.

From the above theorems, it is evident that when X is ageing positively, the successive transforms are also ageing positively. Similar results can also be established in the case of other ageing concepts discussed in Chap. 4. It is important to mention that the converses of the above theorems need not be true (see next section).

5.6 Characterizations of Distributions

Various identities between the hazard quantile function, mean residual quantile function and the density quantile function of X and Y n enable us to mutually characterize the distributions of X and Y n . A preliminary result is that T n (u) characterizes the distribution of X. This follows from

$$\displaystyle{t_{n}(u) = {(1 - u)}^{n}q(u)}$$

and

$$\displaystyle{Q(u) =\int _{ 0}^{u}{(1 - p)}^{-n}t_{ n}(p)dp.}$$

The following theorems have been proved by Nair et al. [447].

Theorem 5.12.

The random variable Y n, \(n = 1,2,\ldots\) , has rescaled beta distribution

$$\displaystyle{Q(u) = R(1 - {(1 - u)}^{\frac{1} {C} })}$$

if and only if X is distributed as either exponential, Lomax or rescaled beta.

Proof.

To prove the if part, we observe that in the exponential case

$$\displaystyle{t_{n}(u) = {(1 - u)}^{n}q(u) {=\lambda }^{-1}{(1 - u)}^{n-1}}$$

and

$$\displaystyle{T_{n}(u) =\int _{ 0}^{u}t_{ n}(p)dp = {(\lambda n)}^{-1}\{1 - {(1 - u)}^{n}\}}$$

which is the quantile function of the rescaled beta distribution with parameters \(({(\lambda n)}^{-1},{n}^{-1})\) in the support \((0, \frac{1} {n\lambda })\). Similar calculations show that when X is Lomax, Y n is rescaled beta \((\alpha {(nC - 1)}^{-1},C{(nC - 1)}^{-1})\) with support \((0,\alpha {(nC - 1)}^{-1})\), and when X is rescaled beta (R, C), Y n has the same distribution with parameters \((R{(1 + nC)}^{-1},C{(1 + nC)}^{-1})\). Conversely, if we now assume that Y n is rescaled beta, its quantile function has the form

$$\displaystyle{T_{n}(u) = R_{n}(1 - {(1 - u)}^{ \frac{1} {C_{n}} })}$$

for some constants R n and C n  > 0. This gives

$$\displaystyle{t_{n}(u) = \frac{R_{n}} {C_{n}}{(1 - u)}^{ \frac{1} {C_{n}}-1} = {(1 - u)}^{n}q(u).}$$

The last equation means that (1 − u)n is a factor of the left-hand side and so

$$\displaystyle{ \frac{1} {C_{n}} = k_{n} + n}$$

for some real k n . Thus,

$$\displaystyle{q(u) = (k_{n} + n){(1 - u)}^{k_{n}-1}R_{ n}.}$$

Since q(u) is independent of n, taking n = 1, we have

$$\displaystyle{Q(u) = k_{1}^{-1}R_{ 1}(k_{1} + 1)\{1 - {(1 - u)}^{k_{1} }\}.}$$

Hence, for k 1 > 0, X follows the rescaled beta distribution with support \((0,R_{1}k_{1}^{-1}(k_{1} + 1))\), the Lomax law for − 1 < k 1 < 0, and the exponential distribution as k 1 → 0. Hence, the theorem.
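The exponential part of Theorem 5.12 can be checked by numerically integrating \(t_{n}(u) = {(1-u)}^{n}q(u)\) and comparing with the rescaled beta quantile function \({(\lambda n)}^{-1}\{1 - {(1-u)}^{n}\}\); the values λ = 2 and n = 3 below are arbitrary:

```python
import numpy as np

# Exponential case of Theorem 5.12: lambda = 2, n = 3 assumed.
lam, n = 2.0, 3
u = np.linspace(0.0, 0.999, 1000)
q = 1.0 / (lam * (1.0 - u))                    # exponential quantile density
t_n = (1.0 - u) ** n * q                       # = (1-u)^{n-1} / lambda
# trapezoidal cumulative integral T_n(u) = int_0^u t_n(p) dp
T_n = np.concatenate(([0.0], np.cumsum((t_n[1:] + t_n[:-1]) / 2 * np.diff(u))))
T_exact = (1.0 - (1.0 - u) ** n) / (lam * n)
```

The quadrature reproduces the rescaled beta quantile function to within the discretization error of the grid.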

Theorem 5.13.

The random variable X follows the generalized Pareto distribution with quantile function (see Table  1.1 )

$$\displaystyle{ Q(u) = \frac{b} {a}\left \{{(1 - u)}^{- \frac{a} {a+1} } - 1\right \}\quad a > -1,\,b > 0, }$$
(5.32)

if and only if, for all \(n = 0,1,2,\ldots\) and 0 < u < 1,

$$\displaystyle{ M_{n}(u) = {(na + n + 1)}^{-1}{(1 - u)}^{n}M(u). }$$
(5.33)

Proof.

Assuming (5.33) to hold, we have

$$\displaystyle{M_{n}(u) - (1 - u)M_{n}^{\prime}(u) = \frac{{(1 - u)}^{n}} {na + n + 1}\{M(u) - (1 - u)M^{\prime}(u) + nM(u)\},}$$

and then using the identity (5.29), we get

$$\displaystyle{ \frac{1} {na + n + 1}[M(u) - (1 - u)M^{\prime}(u) + nM(u)] = M(u) - (1 - u)M^{\prime}(u).}$$

The above equation simplifies to

$$\displaystyle{aM(u) = (a + 1)(1 - u)M^{\prime}(u)}$$

solving which we get

$$\displaystyle{M(u) = K{(1 - u)}^{- \frac{a} {a+1} }.}$$

Noting that \(M(0) =\mu = b\), we have K = b. Since the mean residual quantile function determines the distribution uniquely, we see from (2.48) that X has a generalized Pareto distribution with parameters (a, b). Next, we assume that X has the specified generalized Pareto distribution. Then,

$$\displaystyle{q(u) = \frac{b} {a + 1}{(1 - u)}^{- \frac{a} {a+1} -1}}$$

and

$$\displaystyle{M_{n}(u) = {(1 - u)}^{-1}\int _{ u}^{1}{(1 - p)}^{n+1}q(p)dp,}$$

and so

$$\displaystyle{M_{n}(u) = \frac{b} {na + n + 1}{(1 - u)}^{n- \frac{a} {a+1} }.}$$

Using the expression (see Table 2.5)

$$\displaystyle{M(u) = b{(1 - u)}^{- \frac{a} {a+1} },}$$

the relationship in (5.33) is easily verified. Hence, the theorem.
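The relationship (5.33) can also be confirmed by direct quadrature of \(M_{n}(u) = {(1-u)}^{-1}\int _{u}^{1}{(1-p)}^{n+1}q(p)dp\); the values a = 1.5, b = 2, n = 2 and the evaluation point u = 0.3 below are arbitrary assumptions:

```python
import numpy as np

# Generalized Pareto check of (5.33): a = 1.5, b = 2.0, n = 2, u = 0.3 assumed.
a, b, n, u0 = 1.5, 2.0, 2, 0.3
p = np.linspace(u0, 1.0 - 1e-9, 400001)
q = b / (a + 1) * (1.0 - p) ** (-a / (a + 1) - 1.0)       # quantile density
integrand = (1.0 - p) ** (n + 1) * q
# trapezoidal rule for M_n(u0)
Mn = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(p)) / (1.0 - u0)
M = b * (1.0 - u0) ** (-a / (a + 1))                      # M(u), Table 2.5
rhs = (1.0 - u0) ** n * M / (n * a + n + 1)               # right side of (5.33)
```

The two sides agree to quadrature accuracy, as the theorem asserts.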

There are other directions in which characterizations can be established. For instance, the relationship T(u) has with any reliability function is a characteristic property. It is easy to see that the simple identity

$$\displaystyle{T(u) = A + B\log H(u)}$$

holds true if and only if X follows the linear hazard quantile distribution. Recall that T(u) is also a quantile function representing some distribution. Thus, when X has a life distribution, the corresponding T(u) may also be a known life distribution. As an example, X follows power distribution if and only if the associated T(u) corresponds to the Govindarajulu distribution.

5.7 Some Applications

A direct approach to the application of TTT in data analysis is through model selection for observed data. One can either derive a model from physical considerations or postulate one that gives a reasonable fit. The TTT can then be derived and the data analysed from it. An alternative approach is to start with a functional form of the TTT and then choose the parameter values that give a satisfactory fit to the observations. The main point here is that the functional form should be flexible enough to represent different data situations. Since many of the quantile functions discussed in Chap. 3 provide great flexibility, their TTTs are natural candidates for this purpose. In such cases, to compute the descriptive measures of the distribution, one need not invert the TTT to recover the corresponding quantile function. We show that the descriptors can be obtained directly from T(u) and its derivative t(u).

For this purpose, we recall (1.38)–(1.41) and the identity \(t(u) = (1 - u)q(u)\). Then, the first four L-moments are as follows:

$$\displaystyle\begin{array}{rcl} L_{1}& =& \int _{0}^{1}(1 - p)q(p)dp =\int _{ 0}^{1}t(p)dp, {}\\ L_{2}& =& \int _{0}^{1}(p - {p}^{2})q(p)dp =\int _{ 0}^{1}pt(p)dp, {}\\ L_{3}& =& \int _{0}^{1}(3{p}^{2} - 2{p}^{3} - p)q(p)dp =\int _{ 0}^{1}p(2p - 1)t(p)dp, {}\\ L_{4}& =& \int _{0}^{1}(p - 6{p}^{2} + 10{p}^{3} - 5{p}^{4})q(p)dp =\int _{ 0}^{1}p(1 - 5p + 5{p}^{2})t(p)dp. {}\\ \end{array}$$

Example 5.5.

The quantile function of the generalized Pareto distribution (see Table 1.1) yields

$$\displaystyle{t(u) = \frac{b} {a + 1}{(1 - u)}^{- \frac{a} {a+1} }.}$$

Then, direct calculations using the above formulas result in

$$\displaystyle\begin{array}{rcl} L_{1} = b,\quad L_{2} = \frac{b(a + 1)} {a + 2},& & {}\\ \qquad L_{3} = \frac{b(a + 1)(2a + 1)} {(a + 2)(2a + 3)},\quad L_{4} = \frac{b(a + 1)(2a + 1)(3a + 2)} {(a + 2)(2a + 3)(3a + 4)}.& & {}\\ \end{array}$$

With these L-moments, descriptive measures like L-skewness and L-kurtosis can be readily derived from the formulas presented in Chap. 1.
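The L-moment formulas above can be checked by quadrature. In this sketch the generalized Pareto parameters a = −0.8, b = 2 are assumed (chosen so that t(u) is a bounded polynomial and the trapezoidal rule is essentially exact):

```python
import numpy as np

# t(u) = (b/(a+1))(1-u)^{-a/(a+1)}; for a = -0.8, b = 2 this is 10(1-u)^4.
a, b = -0.8, 2.0
u = np.linspace(0.0, 1.0, 100001)
t = b / (a + 1) * (1.0 - u) ** (-a / (a + 1))

def integral(y):
    """Trapezoidal rule over the grid u."""
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(u))

L1 = integral(t)                                 # should equal b
L2 = integral(u * t)                             # b(a+1)/(a+2)
L3 = integral(u * (2 * u - 1) * t)               # b(a+1)(2a+1)/((a+2)(2a+3))
L4 = integral(u * (1 - 5 * u + 5 * u ** 2) * t)  # fourth L-moment formula
```

For these parameter values the closed forms give L 1 = 2, L 2 = 1∕3, L 3 = −1∕7 and L 4 = 1∕28, which the quadrature reproduces.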

In preventive maintenance policies, TTT has an effective role to play. At time x = 0, a unit starts functioning and is replaced upon reaching age T or upon failure, whichever occurs first, with respective costs C 1 and C 2, where C 1 < C 2. If the unit lifetime is X, the first renewal occurs at Z = min(X, T) and

$$\displaystyle{E(Z) =\int _{ 0}^{T}\bar{F}(x)dx.}$$

The mean cost for one renewal period is

$$\displaystyle{\bar{F}(T)C_{1} + (1 -\bar{ F}(T))C_{2}}$$

and so the cost per unit time under age replacement model is

$$\displaystyle{C(T) = \frac{\bar{F}(T)C_{1} + (1 -\bar{ F}(T))C_{2}} {\int _{0}^{T}\bar{F}(x)dx}.}$$

This is equivalent to

$$\displaystyle{ C(T) = \frac{C_{1} + KF(T)} {\int _{0}^{T}\bar{F}(x)dx}, }$$
(5.34)

where \(K = C_{2} - C_{1}\). The simple replacement problem is to find an optimal interval T = T  ∗  that minimizes (5.34). In practice, one may not know the life distribution but only some observations, and so the optimal age replacement interval has to be estimated from the data. Assuming K = 1, without loss of generality, a value u  ∗  determined by u  ∗  = F(T  ∗ ) maximizes

$$\displaystyle{ \frac{1} {C(Q(u))} = \frac{T(u)} {u + C_{1}},\quad 0 \leq u \leq 1,}$$

or one that maximizes

$$\displaystyle{ \frac{\phi (u)} {u + C_{1}}.}$$

Bergman [89] and Bergman and Klefsjö [95] provide nonparametric estimation for age replacement policies. Let \((X_{1:n},X_{2:n},\ldots,X_{n:n})\) be an ordered sample from an absolutely continuous distribution. For estimating ϕ(u), we use

$$\displaystyle{u_{r} = \frac{H_{n}^{-1}( \frac{r} {n})} {H_{n}^{-1}(1)} }$$

and determine

$$\displaystyle{\hat{T}_{n} = x_{\nu:n},}$$

where ν is such that

$$\displaystyle{ \frac{u_{\nu }} { \frac{\nu }{n} + C_{1}} =\max _{1\leq r\leq n} \frac{u_{r}} {( \frac{r} {n}) + C_{1}}.}$$

Then,

  1. (i)

    \(C(\hat{T}_{n})\) tends with probability one to C(T  ∗ ) as n → ∞;

  2. (ii)

    the optimal cost C(T  ∗ ) may be estimated by \(C_{n}(\hat{T}_{n})\), where

    $$\displaystyle{C_{n}(X_{r:n}) = \frac{C_{1} + F_{n}(X_{r:n})} {\int _{0}^{X_{r:n}}\bar{F}_{n}(t)dt} }$$

    which is strongly consistent. If a unique optimal age replacement interval exists, then \(\hat{T}_{n}\) is strongly consistent. Bergman [89] explains a graphical method of determining T  ∗ . Draw the line passing through \((-\frac{C_{1}} {K},0)\) which touches the scaled transform ϕ(u) and has the largest slope. The abscissa of the point of contact is u  ∗ . One important advantage of the graphical method is that it is convenient for performing sensitivity analysis. For example, T  ∗  may be compared for different combinations of K and C 1. Suppose that instead of age replacement at T  ∗ , replacement can be thought of at T 1 and T 2 satisfying \(T_{1} < {T}^{{\ast}} < T_{2}\). Which of these ages gives the minimum cost per unit time can also be addressed with the help of TTT (Bergman [91]).
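The estimation rule above can be sketched numerically with K = 1. The helper name, the Weibull sample (shape 2, hence IHR, so a finite optimal replacement age exists) and the cost C 1 = 0.05 are illustrative assumptions:

```python
import numpy as np

def optimal_replacement_age(sample, C1):
    """Estimate the optimal age replacement interval: maximise
    u_r / (r/n + C1) over r and return the corresponding order statistic."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    spacings = np.diff(np.concatenate(([0.0], x)))
    ttt = np.cumsum((n - np.arange(n)) * spacings) / n   # H_n^{-1}(r/n)
    u = ttt / ttt[-1]                                    # scaled TTT plot u_r
    r = np.arange(1, n + 1)
    nu = np.argmax(u / (r / n + C1))                     # optimal rank (0-based)
    return x[nu]

rng = np.random.default_rng(7)
age = optimal_replacement_age(rng.weibull(2.0, 2000), C1=0.05)
```

The returned value is the order statistic \(X_{\nu :n}\), i.e., the estimate \(\hat{T}_{n}\) of the text.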

The term availability refers to the probability that a system is performing satisfactorily at a given time and is equal to the reliability if no repair takes place. A second optimality criterion is to replace the unit at age T for which the asymptotic availability is maximized. This is equivalent to minimizing

$$\displaystyle{A(T) = \frac{m_{1} + (m_{1} - m_{2})F(T)} {\int _{0}^{T}\bar{F}(t)dt},}$$

where m 1 is the mean time of preventive maintenance and m 2 is the mean time of repair (Chan and Downs [132]). Since this expression is similar to (5.34), the same method of analysis can be adopted here as well.

Klefsjö [338] discusses the age replacement problem with discounted costs, minimal repair and replacements to extend system life. When costs have to be discounted at a constant rate α, the problem reduces to minimizing

$$\displaystyle{C(\alpha,T) = \frac{C_{1} + K(1 - {e}^{-\alpha T}\bar{F}(T))} {\alpha \int _{0}^{T}{e}^{-\alpha t}\bar{F}(t)dt} -\alpha (C_{1} + K)\int _{0}^{T}{e}^{-\alpha t}\bar{F}(t)dt;}$$

see Bergman and Klefsjö [92] for details. The above expression has a minimum at the same value of T as

$$\displaystyle{\frac{C_{1} + K(1 - {e}^{-\alpha T}\bar{F}(T))} {\int _{0}^{T}{e}^{-\alpha t}\bar{F}(t)dt},}$$

which is of the same form as (5.34) in which \(\bar{F}(t)\) is replaced by \(\bar{G}(t) = {e}^{-\alpha t}\bar{F}(t)\). Consequently, the optimization problem permits the usual analysis with ϕ(u) for \(\bar{G}\). The estimation problem is also dealt with likewise by minimizing

$$\displaystyle{\frac{C + KG_{n}(T)} {\int _{0}^{T}\bar{G}_{n}(t)dt},}$$

where

$$\displaystyle{\bar{G}_{n}(t) = {e}^{-\alpha t}\left (1 - \frac{r} {n}\right ),}$$

\(X_{r:n} \leq t \leq X_{r+1:n}\), for \(r = 0,1,\ldots,n - 1\).

The condition of replacement that the unit replacing the older one is as good as new is not always tenable. We assume a milder condition that the replacement is done by a new unit with probability p and a minimal repair is accomplished with probability (1 − p). In other words, the unit is repaired to the same state with the same hazard rate as just before failure.

If C  ∗  denotes the average repair cost, the long run average cost per unit time is (Cleroux et al. [151])

$$\displaystyle{C_{p}(T) = \frac{C_{1} + (K + \frac{{C}^{{\ast}}} {p} )(1 -{\bar{F}}^{p}(T))} {\int _{0}^{T}{\bar{F}}^{p}(t)dt}.}$$

Using the TTT of the distribution with survival function \({\bar{F}}^{p}\), the above expression can also be brought to the standard form in (5.34). When the costs are discounted, the same kind of analysis applies in this case also.

Assume that the main objective is to extend system life, where the system has a vital component for which n spares are available. When the vital component fails, the system fails. Derman et al. [171] and Bergman and Klefsjö [93] then discussed the schedule of replacements of the vital component such that the system life is as long as possible. If v n is the expected life when an optimal schedule is used, they showed that v 0 = μ and

$$\displaystyle{v_{n} = v_{n-1} +\mu \max _{0\leq u\leq 1}\left \{\phi (u) -\frac{v_{n-1}} {\mu } u\right \}.}$$

Draw a line touching the ϕ(u) curve which is parallel to the line \(y = \frac{v_{n-1}} {\mu } u\). If the touching point is \((u_{n},\phi (u_{n}))\), then the optimal replacement age is x n obtained by solving F(x n ) = u n .

It is customary to test certain devices, which have high initial hazard rates under conditions of field operation, in order to eliminate or reduce such early failures before sending them to the customers. Such an operation of screening equipment for the above purpose is called burn-in. If the burn-in is excessive, it will result in a loss to the manufacturer in terms of several kinds of costs. On the other hand, if burn-in is on a reduced scale, the problem of early failures may still persist among a percentage of products, thus resulting in a return cost. So, an important problem in conducting the test is the determination of the optimal time point up to which the test has to be carried out. Test procedures based on hazard rate, mean residual life, coefficient of variation of residual life and so on have been proposed in the literature. Consider the case when a non-repairable component is scrapped if it fails during the burn-in period. Our problem is to determine the length T 0 of the burn-in period for which C(T), the expected long run cost per unit time of useful operation, is minimized. Let b be the fixed cost per unit and d be the cost per unit time of burn-in. A unit which fails in useful operation after the burn-in results in a cost C 1. Then, Bergman and Klefsjö [94] have shown that

$$\displaystyle{C(T) = \frac{1 + b + C_{1}\bar{F}(T) + d\int _{0}^{T}\bar{F}(t)dt} {\mu -\int _{0}^{T}\bar{F}(t)dt} }$$

which is minimized for the same value of T as

$$\displaystyle{ \frac{\alpha -F(T)} {\mu -\int _{0}^{T}\bar{F}(t)dt},}$$

where \(\alpha = (1 + b + d\mu + C_{1})C_{1}^{-1}\). Hence, T 0 is obtained by first graphically determining the value of u, say u 0, for which

$$\displaystyle{ \frac{\alpha -u} {1 -\phi (u)}}$$

is minimized and then solving F(T 0) = u 0; see Klefsjö [339]. Klefsjö and Westberg [340] point out that if the life distribution \(\bar{F}(t)\) is not known, it has to be estimated from the data. For complete samples, the empirical distribution function is the estimate of F. If the data are censored, i.e., in a set of n observations, k items are observed to fail and n − k are withdrawn from observation, then the Kaplan–Meier estimator

$$\displaystyle{F_{n}^{K}(t) = 1 -\prod _{ r} \frac{n - r} {n - r + 1},}$$

where r runs through those values for which t r: n  ≤ t and t r: n are observed failure times, could be used. The optimal replacement age is found by (1) drawing the TTT plot based on times to failure, (2) drawing a line from \((-\frac{C_{1}} {K},0)\) which touches the TTT plot and has the largest possible slope, and (3) taking the optimum replacement age as the failure time corresponding to the optimal point of contact. If the point of contact is (1,1), no preventive maintenance is necessary. Another major aspect of the analysis of failure data for repairable systems is the possible trend in inter-failure times. Kvaloy and Lindqvist [365] used some tests based on TTT for this purpose. Some test statistics have also been proposed for testing exponentiality against IFRA alternatives (Bergman [89]), for testing whether one distribution is more IFR than another (Wei [580]) and for testing exponentiality against IFR (DFR) alternatives (Wei [579]).
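The Kaplan–Meier product above can be sketched in a few lines; the helper name and the small data set are illustrative assumptions, and ties among the ordered times are assumed absent:

```python
import numpy as np

def kaplan_meier_at(t, times, observed):
    """F_n^K(t) = 1 - prod (n-r)/(n-r+1), the product running over observed
    failures with t_{r:n} <= t; r is the 1-based rank among all n times."""
    order = np.argsort(times)
    times = np.asarray(times, dtype=float)[order]
    observed = np.asarray(observed, dtype=bool)[order]
    n = len(times)
    surv = 1.0
    for r, (tr, failed) in enumerate(zip(times, observed), start=1):
        if tr <= t and failed:              # censored times contribute no factor
            surv *= (n - r) / (n - r + 1)
    return 1.0 - surv
```

With no censoring this reduces to the empirical distribution function, so the estimate can feed directly into the TTT plotting procedure described above.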