
3.1 Introduction

Probability distributions facilitate characterization of the uncertainty prevailing in a data set by identifying the patterns of variation. By summarizing the observations into a mathematical form that contains a few parameters, distributions also provide a means to analyse the basic structure that generates the observations. There are, in general, two approaches to finding a distribution that adequately describes a data set. One is to make assumptions about the physical characteristics that govern the data generating mechanism and then to find a model that satisfies such assumptions. This can be done either by deriving the model from the basic assumptions and relations or by adapting one of the conventional models from other disciplines, such as the physical, biological or social sciences, with appropriate conceptual interpretations. These theoretical models are later tested against the observations, for example by means of a goodness-of-fit test. A second approach to modelling is entirely data dependent; models derived in this manner are called empirical or black box models. When the data generating process is not well understood, or when the model it suggests is too complex, the objective is limited to selecting a distribution that best approximates the data. The usual procedure in such cases is to first make a preliminary assessment of the features of the available observations and then decide upon a mathematical formulation of the distribution that can approximate them. Empirical modelling problems usually focus attention on flexible families of distributions with enough parameters to produce different shapes and characteristics. The Pearson family, Johnson system, Burr family of distributions, and some others, which include several commonly occurring distributions, provide important aids in this regard. 
In this chapter, we discuss some families of distributions specified by their quantile functions that can be utilized for modelling lifetime data. Various quantile-based properties of distributions and concepts in reliability presented in the last two chapters form the background material for the ensuing discussion. The main distributions discussed here are the lambda distributions, power-Pareto model, Govindarajulu distribution and the generalized Weibull family. We also demonstrate that these models can be used as lifetime distributions while modelling real lifetime data.

3.2 Lambda Distributions

A brief historical account of developments on the lambda distributions was provided in Sect. 1.1. During the past 60 years, considerable efforts were made to generalize the basic model of Hastings et al. [ 264 ] and Tukey [ 567 ] and also to find new applications and inferential procedures. In general, the applications of different versions span a variety of fields such as inventory control (Silver [ 540 ]), logistic regression (Pregibon [ 497 ]), meteorology (Osturk and Dale [ 476 ]), survival analysis (Lefante Jr. [ 380 ]), queueing theory (Robinson and Chan [ 508 ]), random variate generation and goodness-of-fit tests (Cao and Lugosi [ 128 ]), fatigue studies (Bigerelle et al. [ 100 ]), process control (Fournier et al. [ 200 ]), biochemistry (Ramos-Fernandez et al. [ 505 ]), economics (Haritha et al. [ 260 ]), corrosion (Najjar et al. [ 456 ]) and reliability analysis (Nair and Vineshkumar [ 452 ]).

The basic model from which all other generalizations originate is the Tukey lambda distribution with quantile function

$$\displaystyle{ Q(u) = \frac{{u}^{\lambda } - {(1 - u)}^{\lambda }} {\lambda },\quad 0 \leq u \leq 1, }$$
(3.1)

defined for all non-zero lambda values. As λ → 0, we have

$$\displaystyle{Q(u) =\log \left ( \frac{u} {1 - u}\right )}$$

corresponding to the logistic distribution. van Dyke [ 571 ] compared a normalized version of (3.1) with the t-distribution. Model (3.1) was studied by Filliben [ 197 ] who used it to approximate symmetric distributions with varying tail weights. Joiner and Rosenblatt [ 304 ] studied the sample range and Ramberg and Schmeiser [ 503 ] discussed the application of the distribution in generating symmetric random variables. For λ = 1 and λ = 2, it is easy to verify that (3.1) becomes uniform over ( − 1, 1) and \((-\frac{1} {2}, \frac{1} {2})\), respectively. The density functions are U shaped for 1 < λ < 2 and unimodal for λ < 1 or λ > 2. Since (3.1) is symmetric and its support extends over negative values of X, it has limited use as a lifetime model.
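These special cases are easy to check numerically. The following Python sketch (the function name is ours, introduced for illustration) evaluates (3.1), using the logistic limit at λ = 0:

```python
import math

def tukey_lambda_q(u, lam):
    """Quantile function (3.1); the logistic limit log(u/(1-u)) is used at lam = 0."""
    if lam == 0:
        return math.log(u / (1 - u))
    return (u**lam - (1 - u)**lam) / lam

# lam = 1: Q(u) = 2u - 1, uniform on (-1, 1)
print(tukey_lambda_q(0.25, 1))   # -0.5
# lam = 2: Q(u) = u - 1/2, uniform on (-1/2, 1/2)
print(tukey_lambda_q(0.25, 2))   # -0.25
```

For small non-zero λ, the value agrees with the logistic quantile function, reflecting the limit noted above.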

Remark 3.1.

The Tukey lambda distribution defined in (3.1) is an extremal distribution that gets characterized by means of largest order statistics. To see this, let \(X_{1:n} < \cdots < X_{n:n}\) be the order statistics from a random sample of size n from a symmetric distribution F with mean 0 and variance σ 2. Then, due to the symmetry of the distribution, we have \(E(X_{n:n}) = -E(X_{1:n})\), and so we can write from (1.23) and (1.24) that

$$\displaystyle\begin{array}{rcl} E(X_{n:n}) = \frac{1} {2}\int _{0}^{1}Q(u)n({u}^{n-1} - {(1 - u)}^{n-1})du.& &{}\end{array}$$
(3.2)

By applying Cauchy–Schwarz inequality to (3.2), we readily find

$$\displaystyle\begin{array}{rcl} E(X_{n:n})& \leq & \frac{\sigma } {2}{\left \{\int _{0}^{1}{n}^{2}\left ({u}^{2n-2} + {(1 - u)}^{2n-2} - 2{u}^{n-1}{(1 - u)}^{n-1}\right )du\right \}}^{1/2} \\ & =& \frac{\sigma n} {\sqrt{2}}{\left \{ \frac{1} {2n - 1} - B(n,n)\right \}}^{1/2}, {}\end{array}$$
(3.3)

where B(a, b) = Γ(a)Γ(b) ∕ Γ(a + b), a, b > 0, is the complete beta function. Note that, from (3.3), by setting n = 2 and n = 3, we obtain the bounds

$$\displaystyle{E(X_{2:2}) \leq \frac{\sigma } {\sqrt{3}}\quad \mbox{ and }\quad E(X_{3:3}) \leq \frac{\sigma \sqrt{3}} {2}.}$$

The bound in (3.3) was established originally by Hartley and David [ 263 ] and Gumbel [ 229 ]. It is useful to note that the bound in (3.3), derived from (3.2), is attained if and only if

$$\displaystyle{Q(u) \propto {u}^{n-1} - {(1 - u)}^{n-1},\ u \in (0,1).}$$

When n = 2 and 3, we thus find \(Q(u) \propto 2u - 1\), which corresponds to the uniform distribution; see Balakrishnan and Balasubramanian [ 50 ] for some additional insight into this characterization result. Thus, we observe from (3.2) that the Tukey lambda distribution with integral values of λ is an extremal distribution and is characterized by the mean of the largest order statistic in (3.3). The same goes for the Tukey lambda distribution in (3.1) for positive real values in terms of fractional order statistics, in view of Remark 1.1.
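The attainment of the bound can be verified numerically. In the Python sketch below (function names are ours), \(E(X_{n:n})\) for the uniform distribution on (−1, 1) is computed by quadrature from \(E(X_{n:n}) =\int _{0}^{1}Q(u)n{u}^{n-1}du\) and compared with the right-hand side of (3.3); for n = 2 and n = 3 the two sides coincide, since the uniform distribution attains the bound:

```python
import math

def beta_fn(a, b):
    """Complete beta function B(a, b) via the gamma function."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def e_max_uniform(n, pts=20001):
    """E(X_{n:n}) = int_0^1 Q(u) n u^(n-1) du for Q(u) = 2u - 1,
    the uniform distribution on (-1, 1), by the trapezoidal rule."""
    h = 1.0 / (pts - 1)
    total = 0.0
    for i in range(pts):
        u = i * h
        w = 0.5 if i in (0, pts - 1) else 1.0
        total += w * (2 * u - 1) * n * u ** (n - 1)
    return total * h

def hartley_david_bound(n, sigma):
    """Right-hand side of the bound (3.3)."""
    return sigma * n / math.sqrt(2) * math.sqrt(1.0 / (2 * n - 1) - beta_fn(n, n))

sigma = 1 / math.sqrt(3)                        # uniform on (-1, 1): variance 1/3
print(round(e_max_uniform(2), 6))               # 0.333333
print(round(hartley_david_bound(2, sigma), 6))  # 0.333333 -- the bound is attained
```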

3.2.1 Generalized Lambda Distribution

Asymmetric versions of (3.1) in various forms such as

$$\displaystyle{Q(u) = A{u}^{\lambda } + B{(1 - u)}^{\theta } + C}$$

and

$$\displaystyle{Q(u) = a{u}^{\lambda } - {(1 - u)}^{\lambda }}$$

were studied subsequently (Joiner and Rosenblatt [ 304 ], Shapiro and Wilk [ 536 ]). All such versions are subsumed in the more general form

$$\displaystyle{ Q(u) =\lambda _{1} + \frac{1} {\lambda _{2}} ({u}^{\lambda _{3}} - {(1 - u)}^{\lambda _{4}}) }$$
(3.4)

introduced by Ramberg and Schmeiser [ 503 ], which is called the generalized lambda distribution. This is the most discussed member of the various lambda distributions, because of its versatility and special properties. In (3.4), λ 1 is a location parameter, λ 2 is a scale parameter, while λ 3 and λ 4 determine the shape. The distribution takes on different supports depending on the parameters λ 2, λ 3 and λ 4, while λ 1, being the location parameter, can take values on the real line in all cases (Table 3.1).

Table 3.1 Supports for the generalized lambda distribution

As a life distribution, the required constraint on the parameters is

$$\displaystyle{Q(0) =\lambda _{1} -\frac{1} {\lambda _{2}} \geq 0.}$$
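As an illustration, the quantile function (3.4) and the life-distribution constraint can be coded directly. The Python sketch below (function names are ours) assumes λ 2, λ 3, λ 4 > 0, so that Q(0) is finite and equals λ 1 − 1 ∕ λ 2:

```python
def gld_q(u, l1, l2, l3, l4):
    """Quantile function (3.4) of the generalized lambda distribution
    (Ramberg-Schmeiser parametrization)."""
    return l1 + (u**l3 - (1 - u)**l4) / l2

def is_valid_lifetime(l1, l2, l3, l4):
    """Check Q(0) = l1 - 1/l2 >= 0, assuming l2, l3, l4 > 0
    so that the left end of the support is finite."""
    return l1 - 1 / l2 >= 0

print(gld_q(0.0, 2, 1, 0.5, 0.5))        # Q(0) = 1.0, a non-negative left endpoint
print(is_valid_lifetime(2, 1, 0.5, 0.5)) # True
```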

The quantile density function is

$$\displaystyle{ q(u) =\lambda _{ 2}^{-1}[\lambda _{ 3}{u}^{\lambda _{3}-1} +\lambda _{ 4}{(1 - u)}^{\lambda _{4}-1}] }$$
(3.5)

and accordingly the density quantile function is

$$\displaystyle{ f(Q(u)) =\lambda _{2}{[\lambda _{3}{u}^{\lambda _{3}-1} +\lambda _{4}{(1 - u)}^{\lambda _{4}-1}]}^{-1} }$$
(3.6)

which has to remain non-negative for (3.4) to represent a proper distribution. This places constraints on the parameter space. A special feature of (3.4) is that it is a valid distribution only in the regions (λ 3 ≤ − 1, λ 4 ≥ 1), (λ 3 ≥ 1, λ 4 ≤ − 1), (λ 3 ≥ 0, λ 4 ≥ 0), (λ 3 ≤ 0, λ 4 ≤ 0), and for values in ( − 1 < λ 3 < 0, λ 4 > 0) for which

$$\displaystyle{\frac{{(1 -\lambda _{3})}^{1-\lambda _{3}}} {{(\lambda _{4} -\lambda _{3})}^{\lambda _{4}-\lambda _{3}}} {(\lambda _{4} - 1)}^{\lambda _{4}-1} < -\frac{\lambda _{3}} {\lambda _{4}}\,}$$

and values in \((\lambda _{3} > 1,-1 <\lambda _{4} < 0)\) for which

$$\displaystyle{\frac{{(1 -\lambda _{4})}^{1-\lambda _{4}}} {{(\lambda _{3} -\lambda _{4})}^{\lambda _{3}-\lambda _{4}}} {(\lambda _{3} - 1)}^{\lambda _{3}-1} < -\frac{\lambda _{4}} {\lambda _{3}}\;}$$

see Karian and Dudewicz [ 314 ] for a detailed study in this respect. Since

$$\displaystyle{E({X}^{r}) =\int _{ 0}^{1}{\left [\lambda _{ 1} + \frac{{p}^{\lambda _{3}} - {(1 - p)}^{\lambda _{4}}} {\lambda _{2}} \right ]}^{r}dp}$$

from (1.30), the mean is simply

$$\displaystyle{ E(X) =\mu =\lambda _{1} + \frac{1} {\lambda _{2}} \left ( \frac{1} {\lambda _{3} + 1} - \frac{1} {\lambda _{4} + 1}\right ). }$$
(3.7)

Since λ 1 is not present in the central moments, we set λ 1 = 0. Ramberg et al. [ 502 ] find that

$$\displaystyle{E({X}^{r}) =\lambda _{ 2}^{-r}\sum _{ i=0}^{r}\binom{r}{i}{(-1)}^{i}B(\lambda _{ 3}(r - i) + 1,\lambda _{4}i + 1)}$$

from which we obtain the following central moments:

$$ \displaystyle\begin{array}{rcl} {\sigma }^{2}& =& \frac{B - {A}^{2}} {\lambda _{2}^{2}} \,{}\end{array}$$
(3.8)
$$\displaystyle\begin{array}{rcl} \mu _{3}& =& \frac{C - 3AB + {A}^{3}} {\lambda _{2}^{3}} \,{}\end{array}$$
(3.9)
$$\displaystyle\begin{array}{rcl} \mu _{4}& =& \frac{D - 4AC + 6{A}^{2}B - 3{A}^{4}} {\lambda _{2}^{4}} \,{}\end{array}$$
(3.10)

where

$$\displaystyle\begin{array}{rcl} A& =& \frac{1} {\lambda _{3} + 1} - \frac{1} {\lambda _{4} + 1}\, {}\\ B& =& \frac{1} {2\lambda _{3} + 1} + \frac{1} {2\lambda _{4} + 1} - 2B(\lambda _{3} + 1,\lambda _{4} + 1)\, {}\\ C& =& \frac{1} {3\lambda _{3} + 1} - 3B(2\lambda _{3} + 1,\lambda _{4} + 1) + 3B(\lambda _{3} + 1,2\lambda _{4} + 1) - \frac{1} {3\lambda _{4} + 1}\, {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} D& =& \frac{1} {4\lambda _{3} + 1} - 4B(3\lambda _{3} + 1,\lambda _{4} + 1) + 6B(2\lambda _{3} + 1,2\lambda _{4} + 1) {}\\ & & \quad - 4B(\lambda _{3} + 1,3\lambda _{4} + 1) + \frac{1} {4\lambda _{4} + 1}. {}\\ \end{array}$$

The rth moment exists only if \(-\frac{1} {r} <\min (\lambda _{3},\lambda _{4})\). When \(\lambda _{3} =\lambda _{4}\), it is easily verified that μ 3 = 0, so that the generalized lambda distribution is symmetric in this case. A detailed study of the skewness and kurtosis for different values of λ 3 and λ 4 is given in Karian and Dudewicz [ 315 ]. The \((\beta _{1},\beta _{2})\) diagram includes the skewness and kurtosis values corresponding to the uniform, t, F, normal, Weibull, lognormal and some beta distributions. One limitation that needs to be mentioned is that the generalized lambda family does not cover the entire \((\beta _{1},\beta _{2})\) area as some other systems (like the Pearson system) do; on the other hand, it covers some areas that are not covered by others. This four-parameter distribution includes a wide range of shapes for its density function; see Fig. 3.1 for some selected shapes.
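The moment expressions above translate directly into code. The Python sketch below (function names are ours) evaluates σ 2, μ 3 and μ 4 from (3.8)–(3.10), computing the complete beta function through the gamma function; note that the second term of D is B(3λ 3 + 1, λ 4 + 1), as the binomial expansion of E(X 4) requires:

```python
from math import gamma

def beta_fn(a, b):
    """Complete beta function B(a, b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

def gld_central_moments(l2, l3, l4):
    """sigma^2, mu3, mu4 of the GLD with lambda1 = 0, via (3.8)-(3.10)."""
    A = 1 / (l3 + 1) - 1 / (l4 + 1)
    B = 1 / (2 * l3 + 1) + 1 / (2 * l4 + 1) - 2 * beta_fn(l3 + 1, l4 + 1)
    C = (1 / (3 * l3 + 1) - 3 * beta_fn(2 * l3 + 1, l4 + 1)
         + 3 * beta_fn(l3 + 1, 2 * l4 + 1) - 1 / (3 * l4 + 1))
    # second term of D: B(3*l3 + 1, l4 + 1), from the binomial expansion of E(X^4)
    D = (1 / (4 * l3 + 1) - 4 * beta_fn(3 * l3 + 1, l4 + 1)
         + 6 * beta_fn(2 * l3 + 1, 2 * l4 + 1)
         - 4 * beta_fn(l3 + 1, 3 * l4 + 1) + 1 / (4 * l4 + 1))
    var = (B - A * A) / l2 ** 2
    mu3 = (C - 3 * A * B + A ** 3) / l2 ** 3
    mu4 = (D - 4 * A * C + 6 * A * A * B - 3 * A ** 4) / l2 ** 4
    return var, mu3, mu4

# Uniform check (l2 = l3 = l4 = 1): variance 1/3, mu3 = 0, mu4 = 1/5
print(gld_central_moments(1, 1, 1))
```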

Fig. 3.1

Density plots of the generalized lambda distribution (Ramberg and Schmeiser [ 503 ] model) for different choices of \((\lambda _{1},\lambda _{2},\lambda _{3},\lambda _{4})\). (a) (1,0.2,0.13,0.13); (b) (1,0.6,1.5,-1.5); (c) (1,0.6,1.75,1.2); (d) (1,0.2,0.13,0.013); (e) (1,0.2,0.0013,0.13); (f) (1,1,0.5,4)

The basic characteristics of the distribution can also be expressed in terms of the percentiles. Using (1.6)–(1.9), we have the following:

the median:
$$\displaystyle{ M =\lambda _{1} + \frac{1} {\lambda _{2}} \left [{\left (\frac{1} {2}\right )}^{\lambda _{3}} -{\left (\frac{1} {2}\right )}^{\lambda _{4}}\right ], }$$
(3.11)
the interquartile range:
$$\displaystyle{ \text{IQR} = \frac{1} {\lambda _{2}} \left [\frac{{3}^{\lambda _{3}} - 1} {{4}^{\lambda _{3}}} + \frac{{3}^{\lambda _{4}} - 1} {{4}^{\lambda _{4}}} \right ], }$$
(3.12)
Galton’s measure of skewness:
$$\displaystyle{ S = \frac{{4}^{-\lambda _{3}}({3}^{\lambda _{3}} - {2}^{\lambda _{3}+1} + 1) - {4}^{-\lambda _{4}}({3}^{\lambda _{4}} - {2}^{\lambda _{4}+1} + 1)} {\frac{{3}^{\lambda _{3}}-1} {{4}^{\lambda _{3}}} + \frac{{3}^{\lambda _{4}}-1} {{4}^{\lambda _{4}}} } \, }$$
(3.13)
and Moors’ measure of kurtosis:
$$\displaystyle{ T = \frac{{8}^{-\lambda _{3}}({7}^{\lambda _{3}} - {5}^{\lambda _{3}} + {3}^{\lambda _{3}} - 1) + {8}^{-\lambda _{4}}({7}^{\lambda _{4}} - {5}^{\lambda _{4}} + {3}^{\lambda _{4}} - 1)} {{4}^{-\lambda _{3}}({3}^{\lambda _{3}} - 1) + {4}^{-\lambda _{4}}({3}^{\lambda _{4}} - 1)}. }$$
(3.14)
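Expressions (3.11)–(3.14) can be cross-checked by computing the four measures directly from their definitions in terms of Q. A Python sketch (function names are ours):

```python
def gld_q(u, l1, l2, l3, l4):
    """Quantile function (3.4)."""
    return l1 + (u**l3 - (1 - u)**l4) / l2

def percentile_measures(l1, l2, l3, l4):
    """Median, interquartile range, Galton skewness and Moors kurtosis,
    computed directly from the definitions applied to Q in (3.4)."""
    Q = lambda u: gld_q(u, l1, l2, l3, l4)
    M = Q(0.5)
    iqr = Q(0.75) - Q(0.25)
    S = (Q(0.75) + Q(0.25) - 2 * M) / iqr
    T = (Q(7/8) - Q(5/8) + Q(3/8) - Q(1/8)) / iqr
    return M, iqr, S, T

# Uniform case l3 = l4 = 1: Q(u) = 2u - 1 is symmetric, with Moors kurtosis 1
print(percentile_measures(0, 1, 1, 1))  # (0.0, 1.0, 0.0, 1.0)
```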

For this distribution, the L-moments have comparatively simpler expressions than the conventional moments. One can use (1.34)–(1.37) to calculate these. To simplify their expressions, we employ the notation

$$\displaystyle{(n)_{(r)} = n(n + 1)\cdots (n + r - 1)}$$

and

$$\displaystyle{{(n)}^{(r)} = n(n - 1)\cdots (n - r + 1)}$$

to denote the ascending and descending factorials, respectively. Then, the first four L-moments are as follows (Asquith [ 40 ]):

$$\displaystyle\begin{array}{rcl} L_{1}& =& \lambda _{1} + \frac{1} {\lambda _{2}} \left ( \frac{1} {\lambda _{3} + 1} - \frac{1} {\lambda _{4} + 1}\right ),{}\end{array}$$
(3.15)
$$\displaystyle\begin{array}{rcl} L_{2}& =& \frac{1} {\lambda _{2}} \left ( \frac{\lambda _{3}} {(\lambda _{3} + 1)_{(2)}} + \frac{\lambda _{4}} {(\lambda _{4} + 1)_{(2)}}\right ),{}\end{array}$$
(3.16)
$$\displaystyle\begin{array}{rcl} L_{3}& =& \frac{1} {\lambda _{2}} \left ( \frac{\lambda _{3}^{(2)}} {(\lambda _{3} + 1)_{(3)}} - \frac{\lambda _{4}^{(2)}} {(\lambda _{4} + 1)_{(3)}}\right ),{}\end{array}$$
(3.17)
$$\displaystyle\begin{array}{rcl} L_{4}& =& \frac{1} {\lambda _{2}} \left ( \frac{\lambda _{3}^{(3)}} {(\lambda _{3} + 1)_{(4)}} + \frac{\lambda _{4}^{(3)}} {(\lambda _{4} + 1)_{(4)}}\right ).{}\end{array}$$
(3.18)

Thus, the L-skewness and L-kurtosis become

$$\displaystyle{ \tau _{3} = \frac{\lambda _{3}^{(2)}(\lambda _{4} + 1)_{(3)} -\lambda _{4}^{(2)}(\lambda _{3} + 1)_{(3)}} {\lambda _{3}(\lambda _{3} + 3)(\lambda _{4} + 1)_{(3)} +\lambda _{4}(\lambda _{4} + 3)(\lambda _{3} + 1)_{(3)}} }$$
(3.19)

and

$$\displaystyle{ \tau _{4} = \frac{{(\lambda _{3})}^{(3)}(\lambda _{4} + 1)_{(4)} + {(\lambda _{4})}^{(3)}(\lambda _{3} + 1)_{(4)}} {\lambda _{3}(\lambda _{3} + 3)(\lambda _{3} + 4)(\lambda _{4} + 1)_{(4)} +\lambda _{4}(\lambda _{4} + 3)(\lambda _{4} + 4)(\lambda _{3} + 1)_{(4)}}. }$$
(3.20)

All the L-moments exist for every \(\lambda _{3},\lambda _{4} > -1\). On the other hand, the conventional moments require \(\lambda _{3},\lambda _{4} > -\frac{1} {4}\) for the evaluation of Pearson's skewness β 1 and kurtosis β 2. Thus, L-skewness and L-kurtosis permit a larger range of values in the parameter space. The problem of characterizing the generalized lambda distribution through \((\tau _{3},\tau _{4})\) has been considered by Karvanen and Nuutinen [ 313 ]. For the symmetric case, they derived the boundaries analytically, while numerical methods were used in the general case. They found that, with the exception of the smallest values of τ 4, the family (3.4) covers all possible \((\tau _{3},\tau _{4})\) pairs, and often there are two or more distributions sharing the same τ 3 and τ 4. A wider set of generalized lambda distributions can thus be characterized when L-moments are used than by the conventional moments. This is an important advantage in the context of data analysis when seeking appropriate models.
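For numerical work, the first four L-moments can be coded directly from (3.15)–(3.18). In the Python sketch below (function names are ours), the sign between the two terms of the fourth L-moment is taken as positive, which the accompanying numerical integration of Q(u) against the shifted Legendre polynomial \(P_{3}^{{\ast}}(u) = 20{u}^{3} - 30{u}^{2} + 12u - 1\) confirms:

```python
def asc(x, r):
    """Ascending factorial (x)_(r) = x(x+1)...(x+r-1)."""
    out = 1.0
    for k in range(r):
        out *= x + k
    return out

def desc(x, r):
    """Descending factorial (x)^(r) = x(x-1)...(x-r+1)."""
    out = 1.0
    for k in range(r):
        out *= x - k
    return out

def gld_l_moments(l1, l2, l3, l4):
    """First four L-moments of the generalized lambda distribution."""
    L1 = l1 + (1 / l2) * (1 / (l3 + 1) - 1 / (l4 + 1))
    L2 = (1 / l2) * (l3 / asc(l3 + 1, 2) + l4 / asc(l4 + 1, 2))
    L3 = (1 / l2) * (desc(l3, 2) / asc(l3 + 1, 3) - desc(l4, 2) / asc(l4 + 1, 3))
    L4 = (1 / l2) * (desc(l3, 3) / asc(l3 + 1, 4) + desc(l4, 3) / asc(l4 + 1, 4))
    return L1, L2, L3, L4

def l4_numeric(l1, l2, l3, l4, pts=200001):
    """L4 = int_0^1 Q(u) P3*(u) du, by the trapezoidal rule."""
    h = 1.0 / (pts - 1)
    s = 0.0
    for i in range(pts):
        u = i * h
        w = 0.5 if i in (0, pts - 1) else 1.0
        q = l1 + (u ** l3 - (1 - u) ** l4) / l2
        s += w * q * (20 * u ** 3 - 30 * u ** 2 + 12 * u - 1)
    return s * h
```

For λ 3 = λ 4 = 1 (uniform) this returns (λ 1, 1 ∕ (3λ 2), 0, 0), and for asymmetric choices the closed form agrees with the numerical integral.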

The moments of order statistics have closed forms as well. For example, the expectation of order statistics from a random sample of size n is obtained from (1.28) as

$$\displaystyle\begin{array}{rcl} E(X_{r:n})& =& \lambda _{1} + \frac{1} {\lambda _{2}} \frac{\Gamma (\lambda _{3} + r)} {\Gamma (r)} \frac{\Gamma (n + 1)} {\Gamma (\lambda _{3} + n + 1)} \\ & & -\frac{1} {\lambda _{2}} \frac{\Gamma (n +\lambda _{4} - r + 1)\Gamma (n + 1)} {\Gamma (n +\lambda _{4} + 1)\Gamma (n - r + 1)}.{}\end{array}$$
(3.21)

In particular, from (3.21), we obtain

$$\displaystyle\begin{array}{rcl} E(X_{n:n})& =& \lambda _{1} + \frac{n} {\lambda _{2}(\lambda _{3} + n)} - \frac{n!} {\lambda _{2}(\lambda _{4} + 1)_{(n)}}\, {}\\ E(X_{1:n})& =& \lambda _{1} + \frac{n!} {\lambda _{2}(\lambda _{3} + 1)_{(n)}} - \frac{n} {\lambda _{2}(n +\lambda _{4})}. {}\\ \end{array}$$

Also, the distributions of X 1: n and X n: n are given by

$$\displaystyle\begin{array}{rcl} Q_{1}(u)& =& \lambda _{1} + \frac{1} {\lambda _{2}} \left [{(1 - {(1 - u)}^{ \frac{1} {n} })}^{\lambda _{3}} - {(1 - u)}^{\frac{\lambda _{4}} {n} }\right ], {}\\ Q_{n}(u)& =& \lambda _{1} + \frac{1} {\lambda _{2}} \left [{u}^{\frac{\lambda _{3}} {n} } - {(1 - {u}^{\frac{1} {n} })}^{\lambda _{4}}\right ]. {}\\ \end{array}$$

Since there exist members of generalized lambda family with support on the positive real line, its scope as a lifetime model is apparent. However, this fact has not been exploited much. The hazard quantile function (2.30) has the simple form

$$\displaystyle{ H(u) = \frac{\lambda _{2}} {(1 - u)[\lambda _{3}{u}^{\lambda _{3}-1} +\lambda _{4}{(1 - u)}^{\lambda _{4}-1}]}. }$$
(3.22)

Similarly, the mean residual quantile function is obtained from (2.43) as

$$\displaystyle\begin{array}{rcl} M(u)& =& \frac{1} {1 - u}\int _{u}^{1}(1 - p)q(p)dp {}\\ & =& \frac{1} {\lambda _{2}(1 - u)}\left [ \frac{\lambda _{4}} {\lambda _{4} + 1}{(1 - u)}^{\lambda _{4}+1} + \frac{1 - {u}^{\lambda _{3}+1}} {\lambda _{3} + 1} - (1 - u){u}^{\lambda _{3}}\right ]. {}\\ \end{array}$$

Note that, in this case,

$$\displaystyle{M(0) =\int _{0}^{1}Q(p)dp - Q(0)}$$

or

$$\displaystyle\begin{array}{rcl} \mu & =& Q(0) + M(0). {}\\ \end{array}$$
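The identity μ = Q(0) + M(0) provides a convenient numerical check on the closed forms of H(u) and M(u). A Python sketch (function names are ours):

```python
def gld_hazard_q(u, l2, l3, l4):
    """Hazard quantile function H(u) of (3.22); lambda1 does not appear."""
    return l2 / ((1 - u) * (l3 * u ** (l3 - 1) + l4 * (1 - u) ** (l4 - 1)))

def gld_mrq(u, l2, l3, l4):
    """Mean residual quantile function M(u) in the closed form above."""
    return (1 / (l2 * (1 - u))) * (
        l4 / (l4 + 1) * (1 - u) ** (l4 + 1)
        + (1 - u ** (l3 + 1)) / (l3 + 1)
        - (1 - u) * u ** l3
    )

# Consistency check: mu = Q(0) + M(0), with mu from (3.7) and Q(0) = l1 - 1/l2
l1, l2, l3, l4 = 1.0, 1.0, 2.0, 3.0
mu = l1 + (1 / l2) * (1 / (l3 + 1) - 1 / (l4 + 1))
print(abs((l1 - 1 / l2) + gld_mrq(0.0, l2, l3, l4) - mu) < 1e-12)  # True
```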

The above identity holds, in general, whenever the left end of the support is greater than zero. The variance residual quantile function is calculated as

$$\displaystyle\begin{array}{rcl} V (u)& =& \frac{1} {1 - u}\int _{u}^{1}{Q}^{2}(p)dp -{\left [ \frac{1} {1 - u}\int _{u}^{1}Q(p)dp\right ]}^{2} {}\\ & =& A_{1}(u) - A_{2}^{2}(u), {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} A_{1}(u)& =& \frac{1} {\lambda _{2}^{2}(1 - u)}\left [\frac{1 - {u}^{2\lambda _{3}+1}} {2\lambda _{3} + 1} + \frac{{(1 - u)}^{2\lambda _{4}+1}} {2\lambda _{4} + 1} - 2B_{1-u}(\lambda _{4} + 1,\lambda _{3} + 1)\right ], {}\\ A_{2}(u)& =& \frac{1} {\lambda _{2}(1 - u)}\left [\frac{1 - {u}^{\lambda _{3}+1}} {\lambda _{3} + 1} -\frac{{(1 - u)}^{\lambda _{4}+1}} {\lambda _{4} + 1} \right ] {}\\ \end{array}$$

and \(B_{x}(m,n) =\int _{ 0}^{x}{t}^{m-1}{(1 - t)}^{n-1}dt\) is the incomplete beta function.

The term

$$\displaystyle{ \mu (u) = \frac{1} {1 - u}\int _{u}^{1}Q(p)dp }$$
(3.23)

is of interest in reliability analysis, being the quantile version of E(X | X > x). It is called the conditional mean life or the vitality function. One may refer to Kupka and Loo [ 363 ] for a detailed exposition of the properties of the vitality function and its role in explaining the ageing process. We see that from (3.23), Q(u) can be recovered up to an additive constant as

$$\displaystyle{Q(u) = - \frac{d} {du}(1 - u)\mu (u),}$$

and therefore functional forms of μ(u) will enable us to identify the life distribution. Thus, a generalized lambda distribution is determined as

$$\displaystyle{a - \frac{d} {du}(1 - u)\mu (u)}$$

if the conditional mean quantile function μ(u) satisfies

$$\displaystyle{\mu (u) = a + b\left [\frac{1 - {u}^{c}} {c} -\frac{{(1 - u)}^{d}} {d} \right ]}$$

for real a, b, c and d for which Q(0) ≥ 0.

The αth percentile residual life is calculated from (2.50) as

$$\displaystyle{P_{\alpha }(u) = \frac{1} {\lambda _{2}} [{(\alpha +u -\alpha u)}^{\lambda _{3}} - {u}^{\lambda _{3}} + {(1 - u)}^{\lambda _{4}}(1 - {(1-\alpha )}^{\lambda _{4}})].}$$

Various functions in reversed time presented in (2.50), (2.51) and (2.53) yield

$$\displaystyle\begin{array}{rcl} \Lambda (u)& =& \lambda _{2}{[u(\lambda _{3}{u}^{\lambda _{3}-1} +\lambda _{4}{(1 - u)}^{\lambda _{4}-1})]}^{-1}, {}\\ R(u)& =& \frac{1} {\lambda _{2}} \left [ \frac{\lambda _{3}} {\lambda _{3} + 1}{u}^{\lambda _{3}} - {(1 - u)}^{\lambda _{4}} + \frac{1 - {(1 - u)}^{\lambda _{4}+1}} {u(\lambda _{4} + 1)} \right ], {}\\ {D}^{{\ast}}(u)& =& B_{ 1}(u) - B_{2}^{2}(u), {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} B_{1}(u)& = \frac{1} {\lambda _{2}^{2}u}\left [\frac{{u}^{2\lambda _{3}+1}} {2\lambda _{3}+1} -\frac{{(1-u)}^{2\lambda _{4}+1}-1} {2\lambda _{4}+1} - 2B_{u}(\lambda _{3} + 1,\lambda _{4} + 1)\right ]& {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} B_{2}(u)& = \frac{1} {\lambda _{2}u}\left [\frac{{u}^{\lambda _{3}+1}} {\lambda _{3}+1} + \frac{{(1-u)}^{\lambda _{4}+1}-1} {\lambda _{4}+1} \right ].& {}\\ \end{array}$$

Like the function μ(u), one can also consider

$$\displaystyle{ \theta (u) = \frac{1} {u}\int _{0}^{u}Q(p)dp }$$
(3.24)

which is the quantile formulation of E(X | Xx). This latter function’s relationship with reversed hazard function has been used in Nair and Sudheesh [ 451 ] to characterize distributions. It has applications in several other fields like economics and risk analysis. For example, when X is interpreted as the income and x is the poverty level, the above expectation denotes the average income of the poor people and is an essential component for the evaluation of poverty index and income inequality. The form of (3.24) is convenient in identifying models, like

$$\displaystyle{\theta (u) = a + b\left [\frac{{u}^{c-1}} {c} + \frac{{(1 - u)}^{d} - 1} {du} \right ]}$$

determining the generalized lambda distribution. The formula for calculating Q(u) from θ(u) is

$$\displaystyle{ Q(u) = a + \frac{d} {du}u\theta (u). }$$
(3.25)

Finally, the reversed percentile residual life function is (2.50)

$$\displaystyle\begin{array}{rcl} q_{\alpha }(u)& =& \frac{1} {\lambda _{2}} \left [{u}^{\lambda _{3}} - {((1-\alpha )u)}^{\lambda _{3}} -\{ {(1 - u)}^{\lambda _{4}} - {(1 - (1-\alpha )u)}^{\lambda _{4}}\}\right ] {}\\ & =& \frac{1} {\lambda _{2}} \left [{u}^{\lambda _{3}}(1 - {(1-\alpha )}^{\lambda _{3}}) - {(1 - u)}^{\lambda _{4}} + {(1 - u +\alpha u)}^{\lambda _{4}}\right ]. {}\\ \end{array}$$

The potential of the generalized lambda family for empirical data modelling is beyond dispute, owing to its flexibility in representing different kinds of data situations. However, the difficulties experienced in the estimation problem, especially on the computational front, have stimulated extensive research on various methods, conventional as well as new. A popular approach for estimating the parameters of quantile functions is the method of moments, in which the first four moments of the generalized lambda distribution are matched with the corresponding moments of the sample. Instead of choosing the first four moments directly, Ramberg and Schmeiser [ 504 ] opted for the equations

$$\displaystyle\begin{array}{rcl} \mu & =& \frac{1} {n}\sum _{i=1}^{n}x_{ i}\,{}\end{array}$$
(3.26)
$$ \displaystyle\begin{array}{rcl} {\sigma }^{2}& =& \frac{1} {n}\sum _{i=1}^{n}{(x_{ i} -\bar{ x})}^{2}\,{}\end{array}$$
(3.27)
$$\displaystyle\begin{array}{rcl} r_{1}& =& \frac{{n}^{1/2}\sum {(x_{i} -\bar{ x})}^{3}} {{[\sum {(x_{i} -\bar{ x})}^{2}]}^{3/2}} \,{}\end{array}$$
(3.28)
$$\displaystyle\begin{array}{rcl} r_{2}& =& \frac{n\sum {(x_{i} -\bar{ x})}^{4}} {{[\sum {(x_{i} -\bar{ x})}^{2}]}^{2}}\,{}\end{array}$$
(3.29)

where μ and σ 2 are as given in (3.7) and (3.8), and \(\gamma _{1} =\mu _{3}/{\sigma }^{3}\) and \(\gamma _{2} =\mu _{4}/{\sigma }^{4}\) with μ 3 and μ 4 as in (3.9) and (3.10). Since γ 1 and γ 2 contain only λ 3 and λ 4, the solutions of (3.28) and (3.29) give λ 3 and λ 4. From the remaining two equations, λ 1 and λ 2 can be readily found. Even though the method looks simple in theory, in practice one has to apply numerical methods to solve the equations, as they are nonlinear. Dudewicz and Karian [ 181 ] have provided extensive tables from which the parameters can be determined for a given choice of skewness and kurtosis of the data. They also describe an algorithm that summarizes the steps in the calculation. A second method to obtain a best solution is to use computer programs that ensure that the solutions of (3.26)–(3.29) satisfy

$$\displaystyle{ \max (\vert \mu -\hat{\mu }\vert,{\vert \sigma }^{2} {-\hat{\sigma }}^{2}\vert,\vert \gamma _{ 1} -\hat{\gamma }_{1}\vert,\vert \gamma _{2} -\hat{\gamma }_{2}\vert ) <\epsilon }$$
(3.30)

for some prefixed tolerance ε > 0. This is accomplished by starting with a good set of initial values for the parameters and then searching through algorithms that satisfy (3.30). However, there is no guarantee that a given set of initial values will end up in a solution, nor that the value of ε improves in each iteration. See Karian and Dudewicz [ 315 ] for such a computational program. In both methods described above, the region specified by \(1 +\gamma _{ 1}^{2} <\gamma _{2} < 1.8 + 1.7\gamma _{1}^{2}\) is not attained, and one may not arrive at a set of lambda values that satisfy a goodness-of-fit test. These and other problems are explained in Karian and Dudewicz [ 315, 317 ].
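The moment-matching targets themselves are easy to validate by simulation: since Q(U) with U uniform on (0, 1) has the distribution (3.4), samples generated by the inverse transform should reproduce the theoretical mean (3.7) and variance (3.8). A Python sketch (function names are ours):

```python
import random
from math import gamma

def beta_fn(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

def gld_q(u, l1, l2, l3, l4):
    return l1 + (u**l3 - (1 - u)**l4) / l2

def gld_mean_var(l1, l2, l3, l4):
    """Theoretical mean and variance from (3.7) and (3.8)."""
    A = 1 / (l3 + 1) - 1 / (l4 + 1)
    B = 1 / (2 * l3 + 1) + 1 / (2 * l4 + 1) - 2 * beta_fn(l3 + 1, l4 + 1)
    return l1 + A / l2, (B - A * A) / l2 ** 2

random.seed(7)
l1, l2, l3, l4 = 0.0, 1.0, 0.2, 0.2
sample = [gld_q(random.random(), l1, l2, l3, l4) for _ in range(200000)]
m = sum(sample) / len(sample)
v = sum((x - m) ** 2 for x in sample) / len(sample)
mu, var = gld_mean_var(l1, l2, l3, l4)
print(abs(m - mu) < 0.02, abs(v - var) < 0.02)  # True True
```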

A similar logic applies to the method of L-moments prescribed in Asquith [ 40 ] and Karian and Dudewicz [ 315 ]. The equations to be solved in the latter work are

$$\displaystyle\begin{array}{rcl} L_{i}& =& l_{i},\quad i = 1,2,{}\end{array}$$
(3.31)
$$\displaystyle\begin{array}{rcl} \tau _{3}& =& t_{3},{}\end{array}$$
(3.32)
$$\displaystyle\begin{array}{rcl} \tau _{4}& =& t_{4},{}\end{array}$$
(3.33)

where \(L_{1},L_{2},\tau _{3}\) and τ 4 have the expressions in (3.15), (3.16), (3.19) and (3.20), and

$$\displaystyle\begin{array}{rcl} t_{3}& =& \frac{l_{3}} {l_{2}},\quad t_{4} = \frac{l_{4}} {l_{2}}, {}\\ l_{r}& =& \sum _{j=0}^{r-1}p_{ rj}b_{j},\quad r = 1,2,\ldots,n {}\\ \end{array}$$

where \(p_{rj} = {(-1)}^{r-1-j}\binom{r - 1}{j}\binom{r - 1 + j}{j}\) and

$$\displaystyle{b_{j} = \frac{1} {n}\sum _{i=j+1}^{n} \frac{{(i - 1)}^{(j)}} {{(n - 1)}^{(j)}}x_{i:n}.}$$

Clearly, (3.32) and (3.33) do not contain λ 1 and λ 2 and are therefore solvable for λ 3 and λ 4. The other two parameters are then found from (3.31) by using the estimates of λ 3 and λ 4.
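The sample quantities b j and l r are straightforward to compute. The Python sketch below (function names are ours) uses the standard weights \(p_{rj} = {(-1)}^{r-1-j}\binom{r-1}{j}\binom{r-1+j}{j}\):

```python
from math import comb

def sample_l_moments(data, nmom=4):
    """Sample L-moments l_1..l_nmom via probability-weighted moments b_j."""
    x = sorted(data)
    n = len(x)
    b = []
    for j in range(nmom):
        s = 0.0
        for i in range(j + 1, n + 1):            # i = j+1, ..., n (1-based rank)
            w = 1.0
            for k in range(j):
                w *= (i - 1 - k) / (n - 1 - k)   # (i-1)^(j) / (n-1)^(j)
            s += w * x[i - 1]
        b.append(s / n)
    l = []
    for r in range(1, nmom + 1):
        s = 0.0
        for j in range(r):
            s += (-1) ** (r - 1 - j) * comb(r - 1, j) * comb(r - 1 + j, j) * b[j]
        l.append(s)
    return l

l = sample_l_moments([1, 2, 3, 4])
# l[0] = 2.5 (the sample mean); l[1] is close to 5/6; l[2] and l[3]
# vanish for this symmetric sample
```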

In the work of Asquith [ 40 ], estimates of λ 3 and λ 4 are values that minimize

$$\displaystyle{ \epsilon = {(t_{3} -\hat{\tau }_{3})}^{2} + {(t_{ 4} -\hat{\tau }_{4})}^{2}, }$$
(3.34)

where \(\hat{\tau }_{i}\) (i = 3, 4) is the estimated value of τ i . After choosing initial values of λ 3 and λ 4, we arrive at the optimal value according to (3.34) and then check whether the solutions obtained meet the requirements − 1 < τ 3 < 1 and \(\frac{1} {4}(5\tau _{3}^{2} - 1) \leq \tau _{ 4} < 1\). If not, we need to choose another set of initial values and repeat the above steps. After solving for λ 2 from (3.31), compute \(\hat{\tau }_{5}\) using the expression

$$\displaystyle{\tau _{5} = \frac{{(\lambda _{3})}^{(4)}(\lambda _{4} + 1)_{(5)} - {(\lambda _{4})}^{(4)}(\lambda _{3} + 1)_{(5)}} {(\lambda _{3} + 3)_{(3)}(\lambda _{4} + 3)_{(3)}[\lambda _{3}(\lambda _{4} + 1)_{(2)} +\lambda _{4}(\lambda _{3} + 1)_{(2)}]}}$$

and seek the values that minimize \(\vert t_{5} -\hat{\tau }_{5}\vert \). Finally, we substitute these into (3.31) to find \(\hat{\lambda }_{1}\).

A third method is to match the percentiles of the distribution with those of the data. As a first step, the sample percentiles are computed as

$$\displaystyle{\xi _{p} = X_{r:n} + \frac{a} {b} (X_{r+1:n} - X_{r:n}),}$$

where \((n + 1)p = r + \frac{a} {b}\) in which r is a positive integer and \(0 < \frac{a} {b} < 1\). Karian and Dudewicz [ 315 ] considered the following four equations:

$$\displaystyle\begin{array}{rcl} \xi _{0.5}& =& Q(0.5) =\lambda _{1} + \frac{{(0.5)}^{\lambda _{3}} - {(0.5)}^{\lambda _{4}}} {\lambda _{2}} \, {}\\ \xi _{0.9} -\xi _{0.1}& =& Q(0.9) - Q(0.1) = \frac{{(0.9)}^{\lambda _{3}} - {(0.1)}^{\lambda _{4}} + {(0.9)}^{\lambda _{4}} - {(0.1)}^{\lambda _{3}}} {\lambda _{2}} \, {}\\ \frac{\xi _{0.5} -\xi _{0.1}} {\xi _{0.9} -\xi _{0.5}}& =& \frac{Q(0.5) - Q(0.1)} {Q(0.9) - Q(0.5)} = \frac{{(0.9)}^{\lambda _{4}} - {(0.1)}^{\lambda _{3}} + {(0.5)}^{\lambda _{3}} - {(0.5)}^{\lambda _{4}}} {{(0.9)}^{\lambda _{3}} - {(0.1)}^{\lambda _{4}} + {(0.5)}^{\lambda _{4}} - {(0.5)}^{\lambda _{3}}} \, {}\\ \frac{\xi _{0.75} -\xi _{0.25}} {\xi _{0.9} -\xi _{0.1}} & =& \frac{Q(0.75) - Q(0.25)} {Q(0.9) - Q(0.1)} = \frac{{(0.75)}^{\lambda _{3}} - {(0.25)}^{\lambda _{4}} + {(0.75)}^{\lambda _{4}} - {(0.25)}^{\lambda _{3}}} {{(0.9)}^{\lambda _{3}} - {(0.1)}^{\lambda _{4}} + {(0.9)}^{\lambda _{4}} - {(0.1)}^{\lambda _{3}}}. {}\\ \end{array}$$

Solving the above system of equations, we obtain the percentile-based estimates. For this purpose, one has to either resort to numerical methods or refer to the tables in Appendix D of Karian and Dudewicz [ 315 ], which give the values of λ 1, λ 2, λ 3 and λ 4 based on the sample values of the left-hand sides of the above four equations.

In all three methods discussed so far, more than one set of lambda values in the admissible regions may be obtained. The choice of the appropriate set depends on the data and some goodness-of-fit procedure. Karian and Dudewicz [ 314 ] compared the relative merits of the two moment-based approaches and the percentile method, ascertaining the quality of fit by the p-values of the chi-square goodness-of-fit test. They noted that, in general, the percentile and L-moment methods gave better fits more frequently. Further, in terms of the L 2-norm, which measures the discrepancy between two functions f(x) and g(x) by

$$\displaystyle{\int \vert g(x) - f(x){\vert }^{2}dx,}$$

the method of percentiles was found to be better than the method of moments over a broad range of values in the \((r_{1},r_{2})\) space in samples of size 1,000.

Another useful estimation procedure based on the least-square approach was proposed by Osturk and Dale [ 477 ]. Let X r: n (\(r = 1,\ldots,n\)) denote the order statistics of the data and U r: n the order statistics of the corresponding uniformly distributed random variable F(X) for \(r = 1,2,\ldots,n\). The least-square method is to find λ i such that the sum of squared differences between the observed and expected order statistics is minimum. This is achieved by minimizing

$$\displaystyle{ A(\lambda _{1},\lambda _{2},\lambda _{3},\lambda _{4}) =\sum _{ r=1}^{n}{\left \{x_{ r:n} -\lambda _{1} -\frac{1} {\lambda _{2}} (E(U_{r:n}^{\lambda _{3} } - {(1 - U_{r:n})}^{\lambda _{4}}))\right \}}^{2}. }$$
(3.35)

From the density function of uniform order statistics given by (Arnold et al. [ 37 ])

$$\displaystyle{ f_{r}(x_{r}) = \frac{1} {B(r,n - r + 1)}x_{r}^{r-1}{(1 - x_{ r})}^{n-r},\ 0 < x_{ r} < 1, }$$
(3.36)

we have

$$\displaystyle{M_{r} = E(U_{r:n}^{\lambda _{3} }) = \frac{\Gamma (n + 1)\Gamma (\lambda _{3} + r)} {\Gamma (r)\Gamma (n +\lambda _{3} + 1)} = \frac{n!} {(r - 1)!\,(\lambda _{3} + r)_{(n-r+1)}}}$$

and similarly

$$\displaystyle{N_{r} = E{(1 - U_{r:n})}^{\lambda _{4}} = \frac{n!} {(n - r)!{(\lambda _{4} + n)}^{(r)}}.}$$

Owing to the difficulties in simultaneously minimizing (3.35) with respect to all four parameters, we first minimize (3.35) with respect to λ 1 and λ 2 by treating λ 3 and λ 4 as constants. As in the case of simple linear regression, setting the derivatives of (3.35) to zero, we can solve for λ 1 and λ 2 as

$$\displaystyle{ \hat{\lambda }_{2} = \frac{\sum _{r=1}^{n}(x_{r:n} -\bar{ x})(\nu _{r}-\bar{\nu })} {\sum _{r=1}^{n}{(\nu _{r}-\bar{\nu })}^{2}} }$$
(3.37)

and

$$\displaystyle{ \hat{\lambda }_{1} =\bar{ x} -\hat{\lambda }_{2}\bar{\nu }, }$$
(3.38)

where \(\nu _{r} = M_{r} - N_{r}\) and \(\bar{\nu }= \frac{1} {n}\sum \nu _{r}\). Then, upon substituting (3.37) and (3.38) in (3.35), we get

$$\displaystyle{A(\lambda _{3},\lambda _{4}) =\sum _{ r=1}^{n}{(x_{ r:n} -\bar{ x})}^{2}\left [1 - \frac{{(\sum _{r=1}^{n}(x_{ r:n} -\bar{ x})(\nu _{r}-\bar{\nu }))}^{2}} {\sum _{r=1}^{n}{(\nu _{r}-\bar{\nu })}^{2}\sum _{r=1}^{n}{(x_{r:n} -\bar{ x})}^{2}}\right ].}$$

Thus, λ 3 and λ 4 are found by minimizing

$$\displaystyle{ - \frac{{[\sum _{r=1}^{n}(x_{r:n} -\bar{ x})(\nu _{r}-\bar{\nu })]}^{2}} {\sum _{r=1}^{n}{(\nu _{r}-\bar{\nu })}^{2}\sum _{r=1}^{n}{(x_{r:n} -\bar{ x})}^{2}}. }$$
(3.39)

Finally, the solutions from (3.39), when substituted into (3.37) and (3.38), give \(\hat{\lambda }_{1}\) and \(\hat{\lambda }_{2}\).
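The least-squares procedure above is straightforward to implement. The following Python sketch is our own illustration (the function names and the check are not from the source): it computes the weights ν r from the formulas for M r and N r via log-gamma, and then obtains the location and scale estimates by simple linear regression of the ordered data on ν r. In the parametrization of (3.35), the fitted slope estimates 1∕λ 2 and the intercept estimates λ 1.

```python
import math

def nu_weights(n, lam3, lam4):
    """nu_r = M_r - N_r for r = 1,...,n, where M_r = E(U_{r:n}^lam3) and
    N_r = E((1 - U_{r:n})^lam4); computed with log-gamma for stability."""
    lg = math.lgamma
    nus = []
    for r in range(1, n + 1):
        M = math.exp(lg(n + 1) + lg(lam3 + r) - lg(r) - lg(n + lam3 + 1))
        N = math.exp(lg(n + 1) + lg(lam4 + n - r + 1) - lg(n - r + 1) - lg(n + lam4 + 1))
        nus.append(M - N)
    return nus

def fit_location_scale(x_sorted, nus):
    """Regress x_{r:n} on nu_r as in (3.37)-(3.38); returns (intercept, slope),
    which estimate lambda_1 and 1/lambda_2 for a fixed (lambda_3, lambda_4)."""
    n = len(x_sorted)
    xbar = sum(x_sorted) / n
    nbar = sum(nus) / n
    sxv = sum((x - xbar) * (v - nbar) for x, v in zip(x_sorted, nus))
    svv = sum((v - nbar) ** 2 for v in nus)
    slope = sxv / svv
    return xbar - slope * nbar, slope
```

Minimizing (3.39) over a grid of (λ 3, λ 4) values and calling a routine such as `fit_location_scale` at the minimizer completes the procedure.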

A second version of the percentile method in Karian and Dudewicz [ 314 ] proposes equating the population median M, the interdecile range

$$\displaystyle{\text{IDR} = Q(1 - u) - Q(u),}$$

the tail weight ratio

$$\displaystyle{\text{TWR} = \frac{Q(\frac{1} {2}) - Q(u)} {Q(1 - u) - Q(\frac{1} {2})}\,}$$

and the tail weight factor

$$\displaystyle{\text{TWF} = \frac{\text{IQR}} {\text{IDR}}}$$

with the corresponding sample quantities. These give rise to the equations

$$\displaystyle\begin{array}{rcl} \lambda _{1} + \frac{{(0.5)}^{\lambda _{3}} - {(0.5)}^{\lambda _{4}}} {\lambda _{2}} & =& m,{}\end{array}$$
(3.40)
$$\displaystyle\begin{array}{rcl} \frac{1} {\lambda _{2}} [{(1 - u)}^{\lambda _{3}} - {u}^{\lambda _{4}} + {(1 - u)}^{\lambda _{4}} - {u}^{\lambda _{3}}]& =& \xi _{1-u} -\xi _{u},{}\end{array}$$
(3.41)
$$\displaystyle\begin{array}{rcl} \frac{{(1 - u)}^{\lambda _{4}} - {u}^{\lambda _{3}} + {(0.5)}^{\lambda _{3}} - {(0.5)}^{\lambda _{4}}} {{(1 - u)}^{\lambda _{3}} - {u}^{\lambda _{4}} + {(0.5)}^{\lambda _{4}} - {(0.5)}^{\lambda _{3}}} & =& \frac{\xi _{0.5} -\xi _{u}} {\xi _{1-u} -\xi _{0.5}}\,{}\end{array}$$
(3.42)
$$\displaystyle\begin{array}{rcl} \frac{{(0.75)}^{\lambda _{3}} - {(0.25)}^{\lambda _{4}} + {(0.75)}^{\lambda _{4}} - {(0.25)}^{\lambda _{3}}} {{(1 - u)}^{\lambda _{3}} - {u}^{\lambda _{4}} + {(1 - u)}^{\lambda _{4}} - {u}^{\lambda _{3}}} & =& \frac{\xi _{0.75} -\xi _{0.25}} {\xi _{1-u} -\xi _{u}}.{}\end{array}$$
(3.43)

Since (3.42) and (3.43) involve only λ 3 and λ 4, they are solved first, and these values are then inserted in (3.40) and (3.41) to estimate λ 1 and λ 2. All the equations involve u, and Karian and Dudewicz [ 314 ] suggest choosing u between 0 and \(\frac{1} {4}\). They also provide a table of

$$\displaystyle{\left [ \frac{\xi _{0.5} -\xi _{u}} {\xi _{1-u} -\xi _{0.5}}, \frac{\xi _{0.75} -\xi _{0.25}} {\xi _{1-u} -\xi _{u}} \right ]}$$

as pairs of values along with the corresponding solutions, an algorithm, and illustrations of how to use the tables.

King and MacGillivray [ 326 ] introduced a procedure called the starship method, which combines estimation of the parameters with a goodness-of-fit test. A four-dimensional grid is laid over a region covering the range of plausible parameter values, and a goodness-of-fit measure is evaluated at the points of the grid. If the fit at one point is unsatisfactory, another point is selected, and so on, the procedure terminating with the parameter values that give the best measure of fit. Lakhany and Mausser [ 371 ] and Fournier et al. [ 201 ] have pointed out that the starship method is quite time consuming, especially for large samples.

In practice, the parameters obtained by most of the methods described above need not produce an adequate model. There can also be cases where multiple solutions exist or where the solutions do not span the entire data set. So, goodness-of-fit tests have to be carried out separately after estimation, or such a test must be embedded in the procedure, as with the starship method. There have been several attempts to devise procedures that automate the restart of the algorithms and also perform the necessary tests. Lakhany and Mausser [ 371 ] devised a modification of the starship method: instead of using a full four-dimensional grid, they applied successive simplex searches from random starting points until the goodness-of-fit test does not reject the fitted distribution. It cannot, however, be guaranteed that the best fit is always realized. The GLDEX package provides fitting methods using the discretized and numerical maximum likelihood approaches (Su [ 549 ]) as well as the starship method. King and MacGillivray [ 327 ] have suggested a method of estimation with the aid of the location- and scale-free shape functionals

$$\displaystyle{S(u) = \frac{Q(u) + Q(1 - u) - 2M} {Q(1 - u) - Q(u)} }$$

and

$$\displaystyle{d(u,v) = \frac{Q(1 - u) - Q(u)} {Q(1 - v) - Q(v)} }$$

by minimizing the distance between the sample and population values of these functionals. Fournier et al. [ 201 ] proposed another method that minimizes the Kolmogorov–Smirnov distance \(D =\max \vert S_{n}(x) - F(x)\vert \), where S n (x) is the empirical distribution function, over a two-dimensional grid representing the \((\lambda _{3},\lambda _{4})\) space. Two other works in this context are the estimation of parameters for grouped data (Tarsitano [ 564 ]) and for censored data (Mercy and Kumaran [ 416 ]). Karian and Dudewicz [ 316 ] discuss the computational difficulties encountered in the estimation procedure of the generalized lambda distribution.
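A minimal version of this grid-based Kolmogorov–Smirnov criterion can be sketched as follows (a Python illustration of ours, not code from the references). Since the distribution function of the GLD has no closed form, F(x) is obtained by bisecting the monotone quantile function:

```python
def gld_Q(u, l1, l2, l3, l4):
    # Ramberg-Schmeiser quantile function of the generalized lambda distribution
    return l1 + (u**l3 - (1.0 - u)**l4) / l2

def gld_cdf(x, l1, l2, l3, l4, tol=1e-10):
    # invert the monotone quantile function by bisection: F(x) = Q^{-1}(x)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gld_Q(mid, l1, l2, l3, l4) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ks_distance(data, l1, l2, l3, l4):
    # D = max |S_n(x) - F(x)|, checking both one-sided gaps at each data point
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        F = gld_cdf(x, l1, l2, l3, l4)
        d = max(d, abs((i + 1) / n - F), abs(i / n - F))
    return d
```

Evaluating `ks_distance` over a (λ 3, λ 4) grid, with λ 1 and λ 2 supplied by one of the earlier methods, and retaining the grid point with the smallest D reproduces the spirit of the Fournier et al. procedure.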

3.2.2 Generalized Tukey Lambda Family

A major limitation of the generalized lambda family discussed above is that the distribution is valid only for certain regions in the parameter space. Freimer et al. [ 203 ] introduced a modified generalized lambda distribution defined by

$$\displaystyle{ Q(u) =\lambda _{1} + \frac{1} {\lambda _{2}} \left [\frac{{u}^{\lambda _{3}} - 1} {\lambda _{3}} -\frac{{(1 - u)}^{\lambda _{4}} - 1} {\lambda _{4}} \right ] }$$
(3.44)

which is well defined for the values of the shape parameters λ 3 and λ 4 over the entire two-dimensional space. The quantile density function has the simple form

$$\displaystyle{q(u) = \frac{1} {\lambda _{2}} [{u}^{\lambda _{3}-1} + {(1 - u)}^{\lambda _{4}-1}].}$$

Since our interest in (3.44) is as a life distribution, we should have

$$\displaystyle{Q(0) =\lambda _{1} - \frac{1} {\lambda _{2}\lambda _{3}} \geq 0}$$

in which case the support becomes \((\lambda _{1} - \frac{1} {\lambda _{2}\lambda _{3}},\lambda _{1} + \frac{1} {\lambda _{2}\lambda _{4}} )\) whenever \(\lambda _{3} > 0\) and \(\lambda _{4} > 0\), and \((\lambda _{1} - \frac{1} {\lambda _{2}\lambda _{3}},\infty )\) if λ 3 > 0 and λ 4 ≤ 0. This is a crucial point to be verified when the distribution is used to model data pertaining to non-negative random variables. The exponential distribution arises as a particular case when \(\lambda _{3} \rightarrow \infty \) and \(\lambda _{4} \rightarrow 0\). All the approximations that are valid for the generalized lambda family are valid for (3.44) as well.
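The quantile function (3.44) and the support check just described can be coded directly; the following is our own Python illustration (the names are not from the source):

```python
import math

def fmkl_Q(u, l1, l2, l3, l4):
    """Quantile function (3.44) of the Freimer et al. family (l3, l4 nonzero)."""
    return l1 + ((u**l3 - 1.0) / l3 - ((1.0 - u)**l4 - 1.0) / l4) / l2

def support(l1, l2, l3, l4):
    """Support endpoints for l3 > 0; the right endpoint is finite only if l4 > 0."""
    left = l1 - 1.0 / (l2 * l3)
    right = l1 + 1.0 / (l2 * l4) if l4 > 0 else math.inf
    return left, right

def is_life_distribution(l1, l2, l3, l4):
    """Check Q(0) >= 0, i.e. the support lies on the non-negative half line."""
    return l3 > 0 and support(l1, l2, l3, l4)[0] >= 0
```

For example, (λ 1, λ 2, λ 3, λ 4) = (2, 1, 0.5, 2) gives support (0, 2.5), so that member qualifies as a life distribution; taking λ 3 large and λ 4 near zero reproduces the exponential quantile −log(1 − u) numerically.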

The first four raw moments of this distribution are as follows:

$$\displaystyle\begin{array}{rcl} \mu & =& \lambda _{1} -\frac{1} {\lambda _{2}} \left [ \frac{1} {\lambda _{3} + 1} - \frac{1} {\lambda _{4} + 1}\right ], {}\\ \mu _{2}^{\prime}& =& \frac{1} {\lambda _{2}^{2}}\left [ \frac{1} {\lambda _{3}^{2}(2\lambda _{3} + 1)} + \frac{1} {\lambda _{4}^{2}(2\lambda _{4} + 1)} - \frac{2} {\lambda _{3}\lambda _{4}}B(\lambda _{3} + 1,\lambda _{4} + 1)\right ], {}\\ \mu _{3}^{\prime}& =& \frac{1} {\lambda _{2}^{3}}\left [ \frac{1} {\lambda _{3}^{3}(3\lambda _{3} + 1)} - \frac{1} {\lambda _{4}^{3}(3\lambda _{4} + 1)} - \frac{3} {\lambda _{3}^{2}\lambda _{4}}B(2\lambda _{3} + 1,\lambda _{4} + 1)\right. {}\\ & & \left.+ \frac{3} {\lambda _{3}\lambda _{4}^{2}}B(\lambda _{3} + 1,2\lambda _{4} + 1)\right ], {}\\ \mu _{4}^{\prime}& =& \frac{1} {\lambda _{2}^{4}}\left [ \frac{1} {\lambda _{3}^{4}(4\lambda _{3} + 1)} + \frac{1} {\lambda _{4}^{4}(4\lambda _{4} + 1)} + \frac{6} {\lambda _{3}^{2}\lambda _{4}^{2}}B(2\lambda _{3} + 1,2\lambda _{4} + 1)\right. {}\\ & & \left.- \frac{4} {\lambda _{3}^{3}\lambda _{4}}B(3\lambda _{3} + 1,\lambda _{4} + 1) - \frac{4} {\lambda _{3}\lambda _{4}^{3}}B(\lambda _{3} + 1,3\lambda _{4} + 1)\right ]. {}\\ \end{array}$$

In order to have a finite moment of order k, it is necessary that \(\min (\lambda _{3},\lambda _{4}) > -\frac{1} {k}\). An elaborate discussion of the skewness and kurtosis has been carried out in Freimer et al. [ 203 ]. The family completely covers the possible \(\sqrt{\beta _{ 1}}\) values, with two disjoint curves corresponding to any \(\sqrt{\beta _{ 1}}\) except zero. When one of the parameters is held fixed, the behaviour of the skewness is as follows. At \(\lambda _{3} = -\frac{1} {3}\), \(\sqrt{\beta _{ 1}} = -\infty \); it then increases monotonically to zero as λ 3 increases through \((-\frac{1} {3},1)\), and tends to \(\infty \) as \(\lambda _{3} \rightarrow \infty \). Similarly, as λ 4 increases from \(-\frac{1} {3}\) through 1 to \(\infty \), \(\sqrt{\beta _{ 1}}\) decreases from \(\infty \) to 0 and then to \(-\infty \). The family attains symmetry at \(\lambda _{3} =\lambda _{4}\), but \(\sqrt{\beta _{1}}\) may be zero even if \(\lambda _{3}\neq \lambda _{4}\). Considerable richness is seen in the density shapes, there being members that are unimodal, U-shaped, J-shaped and monotone, symmetric or skewed, with short, medium and long tails; see, e.g., Fig. 3.2. Also, there are members with arbitrarily large values of kurtosis, though the family does not contain the lowest possible β 2 for a given β 1. There can be more than one set of \((\lambda _{3},\lambda _{4})\) corresponding to a given \((\beta _{1},\beta _{2})\).

Fig. 3.2

Density plots of the GLD (Freimer et al. model) for different choices of \((\lambda _{1},\lambda _{2},\lambda _{3},\lambda _{4})\). (a) (2,1,2,0.5); (b) (2,1,0.5,2); (c) (2,1,0.5,0.5); (d) (3,1,1.5,2.5); (e) (3,1,1.5,1.6); (f) (1,1,2,0.1); (g) (5,1,0.1,2)

Compared to the conventional central moments, the L-moments have much simpler expressions:

$$\displaystyle\begin{array}{rcl} L_{1}& =& \mu =\lambda _{1} -\frac{1} {\lambda _{2}} \left [ \frac{1} {\lambda _{3} + 1} - \frac{1} {\lambda _{4} + 1}\right ],{}\end{array}$$
(3.45)
$$\displaystyle\begin{array}{rcl} L_{2}& =& \frac{1} {\lambda _{2}} \left [ \frac{1} {(\lambda _{3} + 1)_{(2)}} + \frac{1} {(\lambda _{4} + 1)_{(2)}}\right ],{}\end{array}$$
(3.46)
$$\displaystyle\begin{array}{rcl} L_{3}& =& \frac{1} {\lambda _{2}} \left [ \frac{\lambda _{3} - 1} {(\lambda _{3} + 1)_{(3)}} - \frac{\lambda _{4} - 1} {(\lambda _{4} + 1)_{(3)}}\right ],{}\end{array}$$
(3.47)
$$\displaystyle\begin{array}{rcl} L_{4}& =& \frac{1} {\lambda _{2}} \left [\frac{{(\lambda _{3} - 1)}^{(2)}} {(\lambda _{3} + 1)_{(4)}} +\frac{{(\lambda _{4} - 1)}^{(2)}} {(\lambda _{4} + 1)_{(4)}}\right ].{}\end{array}$$
(3.48)
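The expressions (3.45)–(3.48) can be checked numerically against the defining integrals \(L_{r} =\int _{0}^{1}Q(u)P_{r-1}^{{\ast}}(u)\,du\), with \(P_{r-1}^{{\ast}}\) the shifted Legendre polynomials, as in the following Python sketch of ours (not from the source):

```python
def fmkl_Q(u, l1, l2, l3, l4):
    # quantile function (3.44)
    return l1 + ((u**l3 - 1.0) / l3 - ((1.0 - u)**l4 - 1.0) / l4) / l2

# shifted Legendre polynomials P*_0,...,P*_3 entering the first four L-moments
P_STAR = [lambda u: 1.0,
          lambda u: 2.0 * u - 1.0,
          lambda u: 6.0 * u * u - 6.0 * u + 1.0,
          lambda u: 20.0 * u**3 - 30.0 * u * u + 12.0 * u - 1.0]

def L_numeric(r, l1, l2, l3, l4, m=20000):
    # midpoint-rule approximation of the integral defining L_r
    return sum(fmkl_Q((i + 0.5) / m, l1, l2, l3, l4) * P_STAR[r - 1]((i + 0.5) / m)
               for i in range(m)) / m

def L_closed(r, l1, l2, l3, l4):
    # closed forms (3.45)-(3.48), with the ascending factorials written out
    if r == 1:
        return l1 - (1.0 / (l3 + 1) - 1.0 / (l4 + 1)) / l2
    if r == 2:
        return (1.0 / ((l3 + 1) * (l3 + 2)) + 1.0 / ((l4 + 1) * (l4 + 2))) / l2
    if r == 3:
        return ((l3 - 1) / ((l3 + 1) * (l3 + 2) * (l3 + 3))
                - (l4 - 1) / ((l4 + 1) * (l4 + 2) * (l4 + 3))) / l2
    return ((l3 - 1) * (l3 - 2) / ((l3 + 1) * (l3 + 2) * (l3 + 3) * (l3 + 4))
            + (l4 - 1) * (l4 - 2) / ((l4 + 1) * (l4 + 2) * (l4 + 3) * (l4 + 4))) / l2
```

Such a check is a convenient safeguard against sign errors when these formulas are transcribed into software.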

The measures of location, spread, skewness and kurtosis based on percentiles are as follows:

$$\displaystyle\begin{array}{rcl} M& =& \lambda _{1} + \frac{1} {\lambda _{2}} \left [\frac{{(\frac{1} {2})}^{\lambda _{3}} - 1} {\lambda _{3}} -\frac{{(\frac{1} {2})}^{\lambda _{4}} - 1} {\lambda _{4}} \right ],{}\end{array}$$
(3.49)
$$\displaystyle\begin{array}{rcl} \text{IQR}& =& \frac{1} {\lambda _{2}}\left (\frac{{(\frac{3} {4})}^{\lambda _{3}} - {(\frac{1} {4})}^{\lambda _{3}}} {\lambda _{3}} +\frac{{(\frac{3} {4})}^{\lambda _{4}} - {(\frac{1} {4})}^{\lambda _{4}}} {\lambda _{4}} \right ),{}\end{array}$$
(3.50)
$$\displaystyle\begin{array}{rcl} S& =& \frac{\lambda _{4}\left \{{(\frac{3} {4})}^{\lambda _{3}} - 2{(\frac{1} {2})}^{\lambda _{3}} + {(\frac{1} {4})}^{\lambda _{3}}\right \} -\lambda _{ 3}\left \{{(\frac{3} {4})}^{\lambda _{4}} - 2{(\frac{1} {2})}^{\lambda _{4}} + {(\frac{1} {4})}^{\lambda _{4}}\right \}} {\lambda _{4}\left \{{(\frac{3} {4})}^{\lambda _{3}} - {(\frac{1} {4})}^{\lambda _{3}}\right \} +\lambda _{3}\left \{{(\frac{3} {4})}^{\lambda _{4}} - {(\frac{1} {4})}^{\lambda _{4}}\right \}} \,{}\end{array}$$
(3.51)
$$\displaystyle\begin{array}{rcl} T& =& \frac{\lambda _{4}\left \{{(\frac{7} {8})}^{\lambda _{3}} - {(\frac{5} {8})}^{\lambda _{3}} + {(\frac{3} {8})}^{\lambda _{3}} - {(\frac{1} {8})}^{\lambda _{3}}\right \} +\lambda _{ 3}\left \{{(\frac{7} {8})}^{\lambda _{4}} - {(\frac{5} {8})}^{\lambda _{4}} + {(\frac{3} {8})}^{\lambda _{4}} - {(\frac{1} {8})}^{\lambda _{4}}\right \}} {\lambda _{4}\left \{{(\frac{3} {4})}^{\lambda _{3}} - {(\frac{1} {4})}^{\lambda _{3}}\right \} +\lambda _{3}\left \{{(\frac{3} {4})}^{\lambda _{4}} - {(\frac{1} {4})}^{\lambda _{4}}\right \}} \ . \\ & & {}\end{array}$$
(3.52)
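Since all four measures are simple functions of Q, the closed forms (3.49)–(3.52) are easy to verify in code. The sketch below (ours, not from the source) computes each measure directly from the octiles and compares with the formulas:

```python
def fmkl_Q(u, l1, l2, l3, l4):
    # quantile function (3.44)
    return l1 + ((u**l3 - 1.0) / l3 - ((1.0 - u)**l4 - 1.0) / l4) / l2

def percentile_measures(l1, l2, l3, l4):
    """Median, interquartile range, Galton skewness and Moors kurtosis
    computed directly from the quantile function."""
    Q = lambda u: fmkl_Q(u, l1, l2, l3, l4)
    M = Q(0.5)
    iqr = Q(0.75) - Q(0.25)
    S = (Q(0.75) + Q(0.25) - 2.0 * M) / iqr
    T = (Q(7 / 8) - Q(5 / 8) + Q(3 / 8) - Q(1 / 8)) / iqr
    return M, iqr, S, T

def percentile_measures_closed(l1, l2, l3, l4):
    """Closed forms corresponding to (3.49)-(3.52)."""
    M = l1 + ((0.5**l3 - 1) / l3 - (0.5**l4 - 1) / l4) / l2
    iqr = ((0.75**l3 - 0.25**l3) / l3 + (0.75**l4 - 0.25**l4) / l4) / l2
    den = l4 * (0.75**l3 - 0.25**l3) + l3 * (0.75**l4 - 0.25**l4)
    S = (l4 * (0.75**l3 - 2 * 0.5**l3 + 0.25**l3)
         - l3 * (0.75**l4 - 2 * 0.5**l4 + 0.25**l4)) / den
    T = (l4 * (0.875**l3 - 0.625**l3 + 0.375**l3 - 0.125**l3)
         + l3 * (0.875**l4 - 0.625**l4 + 0.375**l4 - 0.125**l4)) / den
    return M, iqr, S, T
```

The two routines agree to floating-point accuracy for any admissible parameter choice.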

It can be seen that when λ 3 = 1 and \(\lambda _{4} \rightarrow \infty \), and also when \(\lambda _{3} \rightarrow \infty \) and λ 4 = 1, we have S = 0. The expected value of the rth order statistic X r: n is

$$\displaystyle\begin{array}{rcl} \mu _{r:n} = E(X_{r:n})& =& \lambda _{1} - \frac{1} {\lambda _{2}\lambda _{3}} + \frac{1} {\lambda _{2}\lambda _{4}} + \frac{1} {\lambda _{2}\lambda _{3}} \frac{\Gamma (\lambda _{3} + r)} {\Gamma (n +\lambda _{3} + 1)} \frac{n!} {(r - 1)!} {}\\ & & -\frac{1} {\lambda _{2}\lambda _{4}} \frac{n!} {(n - r)!} \frac{\Gamma (n +\lambda _{4} - r + 1)} {\Gamma (n +\lambda _{4} + 1)}. {}\\ \end{array}$$

Setting r = 1 and n, we get

$$\displaystyle\begin{array}{rcl} E(X_{1:n}) =\lambda _{1} - \frac{1} {\lambda _{2}\lambda _{3}} + \frac{1} {\lambda _{2}\lambda _{4}} + \frac{n!} {\lambda _{2}(\lambda _{3})_{(n+1)}} - \frac{n} {\lambda _{2}\lambda _{4}(\lambda _{4} + n)}& & {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} E(X_{n:n}) =\lambda _{1} - \frac{1} {\lambda _{2}\lambda _{3}} + \frac{1} {\lambda _{2}\lambda _{4}} + \frac{n} {\lambda _{2}\lambda _{3}(\lambda _{3} + n)} - \frac{n!} {\lambda _{2}(\lambda _{4})_{(n+1)}}.& & {}\\ \end{array}$$

The distributions of X 1: n and X n: n are given by

$$\displaystyle{Q_{1}(u) =\lambda _{1} + \frac{1} {\lambda _{2}} \left [\frac{{(1 - {(1 - u)}^{1/n})}^{\lambda _{3}} - 1} {\lambda _{3}} -\frac{{(1 - u)}^{\frac{\lambda _{4}} {n} } - 1} {\lambda _{4}} \right ]}$$

and

$$\displaystyle{Q_{n}(u) =\lambda _{1} + \frac{1} {\lambda _{2}} \left (\frac{{u}^{\frac{\lambda _{3}} {n} } - 1} {\lambda _{3}} -\frac{{(1 - {u}^{\frac{1} {n} })}^{\lambda _{4}} - 1} {\lambda _{4}} \right ).}$$

Various reliability functions of the model have closed-form algebraic expressions, except for the variances which contain beta functions. The hazard quantile function is

$$\displaystyle{ H(u) =\lambda _{2}{[{(1 - u)}^{\lambda _{4}} + (1 - u){u}^{\lambda _{3}-1}]}^{-1}. }$$
(3.53)

Mean residual quantile function simplifies to

$$\displaystyle{ M(u) = \frac{{(1 - u)}^{\lambda _{4}}} {\lambda _{2}(\lambda _{4} + 1)} + \frac{1 - {u}^{\lambda _{3}+1}} {\lambda _{2}\lambda _{3}(1 +\lambda _{3})(1 - u)} -\frac{{u}^{\lambda _{3}}} {\lambda _{2}\lambda _{3}}. }$$
(3.54)
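Equation (3.54) can likewise be verified against the definition \(M(u) = \frac{1} {1-u}\int _{u}^{1}Q(p)\,dp - Q(u)\), as in this Python check of ours:

```python
def fmkl_Q(u, l1, l2, l3, l4):
    # quantile function (3.44)
    return l1 + ((u**l3 - 1.0) / l3 - ((1.0 - u)**l4 - 1.0) / l4) / l2

def mrq_numeric(u, l1, l2, l3, l4, m=50000):
    # M(u) = (1/(1-u)) * int_u^1 Q(p) dp - Q(u), by the midpoint rule
    h = (1.0 - u) / m
    integral = sum(fmkl_Q(u + (i + 0.5) * h, l1, l2, l3, l4) for i in range(m)) * h
    return integral / (1.0 - u) - fmkl_Q(u, l1, l2, l3, l4)

def mrq_closed(u, l1, l2, l3, l4):
    # mean residual quantile function in closed form
    return ((1.0 - u)**l4 / (l2 * (l4 + 1))
            + (1.0 - u**(l3 + 1)) / (l2 * l3 * (1 + l3) * (1.0 - u))
            - u**l3 / (l2 * l3))
```

The same pattern of check applies to the variance and percentile residual quantile functions below.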

The variance residual quantile function is

$$\displaystyle{V (u) = A_{1}(u) - A_{2}^{2}(u),}$$

where

$$\displaystyle{A_{1}(u) = \frac{1 - {u}^{2\lambda _{3}+1}} {\lambda _{2}^{2}\lambda _{3}^{2}(2\lambda _{3} + 1)(1 - u)} + \frac{{(1 - u)}^{2\lambda _{4}}} {\lambda _{2}^{2}\lambda _{4}^{2}(2\lambda _{4} + 1)} -\frac{2B_{1-u}(\lambda _{4} + 1,\lambda _{3} + 1)} {\lambda _{2}^{2}\lambda _{3}\lambda _{4}(1 - u)} }$$

and

$$\displaystyle{A_{2}(u) = \frac{1 - {u}^{\lambda _{3}+1}} {\lambda _{2}\lambda _{3}(1 +\lambda _{3})(1 - u)} -\frac{{(1 - u)}^{\lambda _{4}}} {\lambda _{2}\lambda _{4}(\lambda _{4} + 1)}.}$$

Percentile residual life function becomes

$$\displaystyle{P_{\alpha }(u) = \frac{1} {\lambda _{2}} \left [\frac{{(1 - (1-\alpha )(1 - u))}^{\lambda _{3}} - {u}^{\lambda _{3}}} {\lambda _{3}} + \frac{{(1 - u)}^{\lambda _{4}}(1 - {(1-\alpha )}^{\lambda _{4}})} {\lambda _{4}} \right ].}$$

Expression for the reversed hazard quantile function is

$$\displaystyle{\Lambda (u) ={ \left [\frac{u} {\lambda _{2}} ({u}^{\lambda _{3}-1} + {(1 - u)}^{\lambda _{4}-1})\right ]}^{-1}.}$$

The reversed mean residual quantile function is

$$\displaystyle{R(u) = \frac{1} {\lambda _{2}} \left [ \frac{{u}^{\lambda _{3}}} {\lambda _{3} + 1} -\frac{{(1 - u)}^{\lambda _{4}}} {\lambda _{4} + 1} -\frac{{(1 - u)}^{\lambda _{4}+1}} {\lambda _{4}(\lambda _{4} + 1)u} + \frac{1} {\lambda _{4}(\lambda _{4} + 1)u}\right ],}$$

the reversed percentile residual life function is

$$\displaystyle{q_{\alpha }(u) = \frac{{u}^{\lambda _{3}}} {\lambda _{2}\lambda _{3}} (1 - {(1-\alpha )}^{\lambda _{3}}) + \frac{1} {\lambda _{2}\lambda _{4}}[{(1 - u(1-\alpha ))}^{\lambda _{4}} - {(1 - u)}^{\lambda _{4}}],}$$

and the reversed variance residual quantile function is

$$\displaystyle{{D}^{{\ast}}(u) = B_{ 1}(u) - B_{2}^{2}(u),}$$

where

$$\displaystyle{B_{1}(u) = \frac{{u}^{2\lambda _{3}}} {\lambda _{2}^{2}\lambda _{3}^{2}(2\lambda _{3} + 1)} + \frac{1 - {(1 - u)}^{2\lambda _{4}+1}} {\lambda _{2}^{2}\lambda _{4}^{2}(2\lambda _{4} + 1)u} -\frac{2B_{u}(\lambda _{3} + 1,\lambda _{4} + 1)} {u\lambda _{2}^{2}\lambda _{3}\lambda _{4}} }$$

and

$$\displaystyle{B_{2}(u) = \frac{{u}^{\lambda _{3}}} {\lambda _{2}\lambda _{3}(\lambda _{3} + 1)} -\frac{1 - {(1 - u)}^{\lambda _{4}+1}} {\lambda _{2}\lambda _{4}(\lambda _{4} + 1)u}.}$$

Although the problem of estimating the parameters of (3.44) is quite similar, and all the methods described earlier for the generalized lambda distribution are applicable in this case as well, there is comparatively less literature available on this subject. The moment-matching method and the least-squares approach were discussed by Lakhany and Mausser [ 371 ]. Since these methods involve only the replacement of the corresponding expressions for (3.44) in the previous section, the details are not presented here for the sake of brevity. Su [ 550 ] discussed two new approaches, the discretized approach and the method of maximum likelihood, tackling the estimation problem on two fronts: (a) finding suitable initial values and (b) selecting the best fit through an optimization scheme. For the distribution in (3.44), the initial values of λ 3 and λ 4 consist of low-discrepancy quasi-random numbers ranging from − 0.25 to 1.5. After generating these values, they were used to derive λ 1 and λ 2 by the method of moments, as in Lakhany and Mausser [ 371 ]. From these initial values, the GLDEX package (Su [ 551 ]) is employed to find the best set of initial values for the optimization process. In the discretized approach, the range of the data is divided into equally spaced classes, and after arranging the observations in ascending order of magnitude, the proportion falling in each class is ascertained. Then, the differences between the observed (d i ) and theoretical (t i ) proportions are minimized through either

$$\displaystyle{\sum _{i=1}^{k}{(d_{ i} - t_{i})}^{2}\quad \text{ or }\quad \sum _{ i=1}^{k}d_{ i}{(d_{i} - t_{i})}^{2},}$$

where k is the number of classes.

In the maximum likelihood method, the u i values corresponding to each x i in the data are to be computed first using Q(u). A numerical method such as Newton–Raphson can be employed for this purpose. Then, with the help of the Nelder–Mead simplex algorithm, the log-likelihood function

$$\displaystyle{\log L =\sum _{ i=1}^{n}\log \left ( \frac{\lambda _{2}} {u_{i}^{\lambda _{3}-1} + {(1 - u_{i})}^{\lambda _{4}-1}}\right )}$$

is maximized to get the final estimates. The GLDEX package provides diagnostic tests that assess the quality of the fit through the Kolmogorov–Smirnov test, quantile plots and agreement between the measures of location, spread, skewness and kurtosis of the data with those of the model fitted to the observations.
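The two ingredients of the maximum likelihood step, numerical inversion of Q to get the u i and evaluation of log L, can be sketched as follows. This is a simplified Python illustration of ours, not the GLDEX implementation: bisection is used in place of Newton–Raphson, and the optimizer itself is left out, the sketch only evaluating the log-likelihood.

```python
import math

def fmkl_Q(u, l1, l2, l3, l4):
    # quantile function (3.44)
    return l1 + ((u**l3 - 1.0) / l3 - ((1.0 - u)**l4 - 1.0) / l4) / l2

def u_of_x(x, l1, l2, l3, l4, eps=1e-9, tol=1e-12):
    """Invert the monotone quantile function by bisection; Newton-Raphson
    converges faster but needs safeguarding near the support endpoints."""
    lo, hi = eps, 1.0 - eps
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fmkl_Q(mid, l1, l2, l3, l4) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def log_lik(data, l1, l2, l3, l4):
    # log L = sum_i log( l2 / (u_i^{l3-1} + (1-u_i)^{l4-1}) )
    s = 0.0
    for x in data:
        u = u_of_x(x, l1, l2, l3, l4)
        s += math.log(l2 / (u**(l3 - 1.0) + (1.0 - u)**(l4 - 1.0)))
    return s
```

Feeding `log_lik` to any derivative-free optimizer (e.g. a Nelder–Mead routine) over (λ 1, λ 2, λ 3, λ 4) completes the procedure; data from the true model should score higher at the true parameters than at a clearly wrong parameter set.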

Haritha et al. [ 260 ] adopted a percentile method in which they matched the measures of location (median, M), spread (interquartile range), skewness (Galton’s coefficient, S) and kurtosis (Moors’ measure, T) of the population in (3.49)–(3.52) and the data. Among the solutions of the resulting equations, they chose the parameter values that gave

$$\displaystyle{e =\max (\vert \hat{M} - m\vert,\vert \hat{\text{IQR}} - iqr\vert,\vert \hat{S} - \Delta \vert,\vert \hat{T} - t\vert ) <\epsilon }$$

for the smallest ε.

3.2.3 van Staden–Loots Model

A four-parameter distribution belonging to the lambda family, proposed by van Staden and Loots [ 572 ] and different from the two versions discussed in Sects. 3.2.1 and 3.2.2, will be studied in this section. The distribution is generated by considering the generalized Pareto model in the form

$$\displaystyle{Q_{1}(u) = \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{1} {\lambda _{4}} (1 - {(1 - u)}^{\lambda _{4}})\quad &\lambda _{4}\neq 0 \\ -\log (1 - u) \quad &\lambda _{4} = 0 \end{array} \right.}$$

and its reflection

$$\displaystyle{Q_{2}(u) = \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{1} {\lambda _{4}} ({u}^{\lambda _{4}} - 1)\quad &\lambda _{4}\neq 0 \\ \log u \quad &\lambda _{4} = 0. \end{array} \right.}$$

A weighted sum of these two quantile functions with respective weights λ 3 and 1 − λ 3, 0 ≤ λ 3 ≤ 1, along with the introduction of a location parameter λ 1 and a scale parameter λ 2, provides the new form. Thus, the quantile function of this model is

$$\displaystyle{ Q(u) =\lambda _{1} +\lambda _{2}\left [(1 -\lambda _{3})\frac{{u}^{\lambda _{4}} - 1} {\lambda _{4}} -\lambda _{3}\frac{{(1 - u)}^{\lambda _{4}} - 1} {\lambda _{4}} \right ],\quad \lambda _{2} > 0. }$$
(3.55)

Equation (3.55) includes the exponential, logistic and uniform distributions as special cases. The support of this distribution is as follows for different choices of λ 3 and λ 4:

$$\displaystyle{\begin{array}{@{\extracolsep {\fill }}llll@{}} \mathrm{Region}&\lambda _{3} & \lambda _{4} & \mathrm{Support} \\ 1 &0 & \leq 0&(-\infty,\lambda _{1}) \\ & & > 0&\left (\lambda _{1} -\frac{\lambda _{2}} {\lambda _{4}},\lambda _{1}\right ) \\ 2 &(0,1)& \leq 0&(-\infty,\infty ) \\ & & > 0&\left (\lambda _{1} -\frac{\lambda _{2}(1-\lambda _{3})} {\lambda _{4}},\lambda _{1} + \frac{\lambda _{3}\lambda _{2}} {\lambda _{4}} \right ) \\ 3 &1 & \leq 0&(\lambda _{1},\infty ) \\ & & > 0&\left (\lambda _{1},\lambda _{1} + \frac{\lambda _{2}} {\lambda _{4}} \right ) \\ \end{array} }$$

For (3.55) to be a life distribution, one must have \(\lambda _{1} -\lambda _{2}(1 -\lambda _{3})\lambda _{4}^{-1} \geq 0\). This gives members with both finite and infinite support, depending on whether λ 4 is positive or negative.
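The quantile function (3.55) and the support table are easy to verify directly. The following Python snippet (our illustration, not from the source) implements both branches of (3.55) and checks the Region 2 endpoints:

```python
import math

def vsl_Q(u, l1, l2, l3, l4):
    """van Staden-Loots quantile function (3.55), including the lambda_4 = 0 limit,
    where (u^l4 - 1)/l4 -> log u and ((1-u)^l4 - 1)/l4 -> log(1-u)."""
    if l4 == 0:
        return l1 + l2 * ((1 - l3) * math.log(u) - l3 * math.log(1.0 - u))
    return l1 + l2 * ((1 - l3) * (u**l4 - 1.0) / l4
                      - l3 * ((1.0 - u)**l4 - 1.0) / l4)
```

With λ 3 = 1 and λ 4 = 0 this reduces to the exponential quantile function − λ 2 log(1 − u), in line with the special cases noted above.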

As for descriptive measures, the mean and variance are given by

$$\displaystyle{\mu =\lambda _{1} - \frac{\lambda _{2}} {(1 +\lambda _{4})}(1 - 2\lambda _{3})}$$

and

$${\displaystyle{\sigma }^{2} = \frac{\lambda _{2}^{2}} {{(1 +\lambda _{4})}^{2}}\left [\frac{\lambda _{3}^{2} + {(1 -\lambda _{3})}^{2}} {1 + 2\lambda _{4}} -\frac{2\lambda _{3}(1 -\lambda _{3})} {\lambda _{4}^{2}} ({(1 +\lambda _{4})}^{2}B(1 +\lambda _{ 4},1 +\lambda _{4}) - 1)\right ].}$$

One attractive feature of this family is that its L-moments have very simple forms; they exist for all λ 4 > − 1 and are as follows:

$$\displaystyle\begin{array}{rcl} L_{1}& =& \mu, {}\\ L_{2}& =& \frac{\lambda _{2}} {(\lambda _{4} + 1)(\lambda _{4} + 2)}\, {}\\ L_{r}& =& \lambda _{2}{(1 - 2\lambda _{3})}^{S}\frac{{(\lambda _{4} - 1)}^{(r-2)}} {(\lambda _{4} + 1)_{(r)}} \,\ r = 3,4,\ldots, {}\\ \end{array}$$

where S = 1 when r is odd and S = 0 when r is even. These values give L-skewness and L-kurtosis to be

$$\displaystyle\begin{array}{rcl} \tau _{3}& =& \frac{(\lambda _{4} - 1)(1 - 2\lambda _{3})} {\lambda _{4} + 3} {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} \tau _{4}& =& \frac{(\lambda _{4} - 1)(\lambda _{4} - 2)} {(\lambda _{4} + 3)(\lambda _{4} + 4)}\, {}\\ \end{array}$$

respectively. van Staden and Loots [ 572 ] note that, as in the case of the two four-parameter lambda families discussed in the last two sections, there is no unique \((\lambda _{3},\lambda _{4})\) pair for a given value of \((\tau _{3},\tau _{4})\). When \(\lambda _{3} = \frac{1} {2}\), the distribution is symmetric.

The L-skewness covers the entire permissible span ( − 1, 1), and the L-kurtosis is independent of λ 3, with a minimum attained at \(\lambda _{4} = \sqrt{6} - 1\). The percentile-based measures also have simple explicit forms and are given by

$$\displaystyle\begin{array}{rcl} M& =\lambda _{1} + \frac{\lambda _{2}(1-2\lambda _{3})} {\lambda _{4}} \left ({\left (\frac{1} {2}\right )}^{\lambda _{4}} - 1\right ),&{}\end{array}$$
(3.56)
$$\displaystyle\begin{array}{rcl} \text{In the symmetric case, }\quad M& =\mu =\lambda _{1},&{}\end{array}$$
(3.57)
$$\displaystyle\begin{array}{rcl} \text{IQR}& = \frac{\lambda _{2}({3}^{\lambda _{4}}-1)} {\lambda _{4}{4}^{\lambda _{4}}} \,&{}\end{array}$$
(3.58)
$$\displaystyle\begin{array}{rcl} S& = \frac{(1-2\lambda _{3})(1+{3}^{\lambda _{4}}-{2}^{\lambda _{4}+1})} {{3}^{\lambda _{4}}-1} \,&{}\end{array}$$
(3.59)
$$\displaystyle\begin{array}{rcl} T& = \frac{{7}^{\lambda _{4}}-{5}^{\lambda _{4}}+{3}^{\lambda _{4}}-1} {{2}^{\lambda _{4}}({3}^{\lambda _{4}}-1)}.&{}\end{array}$$
(3.60)

The quantile density function is

$$\displaystyle{q(u) =\lambda _{2}[(1 -\lambda _{3}){u}^{\lambda _{4}-1} +\lambda _{3}{(1 - u)}^{\lambda _{4}-1}]}$$

and so the density quantile function is

$$\displaystyle{f(Q(u)) =\lambda _{ 2}^{-1}{[(1 -\lambda _{ 3}){u}^{\lambda _{4}-1} +\lambda _{ 3}{(1 - u)}^{\lambda _{4}-1}]}^{-1}.}$$

Figure 3.3 displays some shapes of the density function.

Fig. 3.3

Density plots of the GLD proposed by van Staden and Loots [ 572 ] for varying \((\lambda _{1},\lambda _{2},\lambda _{3},\lambda _{4})\). (a) (1,1,0.5,2); (b) (2,1,0.5,3); (c) (3,2,0.25,0.5); (d) (1,2,0.1, − 1)

The expectations of the order statistics from (3.55) are as follows:

$$\displaystyle\begin{array}{rcl} E(X_{r:n})& =& \lambda _{1} + \frac{\lambda _{2}} {\lambda _{4}}\left [\frac{(1 -\lambda _{3})\Gamma (\lambda _{4} + r)} {\Gamma (r)} -\frac{\lambda _{3}\Gamma (n +\lambda _{4} - r + 1)} {\Gamma (n - r + 1)} \right ] \frac{n!} {\Gamma (\lambda _{4} + n + 1)} {}\\ & & \quad + \frac{\lambda _{2}} {\lambda _{4}}(2\lambda _{3} - 1),\quad r = 1,2,\ldots,n, {}\\ E(X_{1:n})& =& \lambda _{1} + \frac{\lambda _{2}} {\lambda _{4}}(2\lambda _{3} - 1) + \frac{\lambda _{2}} {\lambda _{4}}\left [ \frac{n!(1 -\lambda _{3})} {(\lambda _{4} + 1)_{(n)}} - \frac{n\lambda _{3}} {\lambda _{4} + n}\right ], {}\\ E(X_{n:n})& =& \lambda _{1} + \frac{\lambda _{2}} {\lambda _{4}}(2\lambda _{3} - 1) + \frac{\lambda _{2}} {\lambda _{4}}\left [\frac{n(1 -\lambda _{3})} {\lambda _{4} + n} - \frac{\lambda _{3}n!} {(\lambda _{4} + 1)_{(n)}}\right ]. {}\\ \end{array}$$

Since there are members of the family with support on the positive real line, the model will be useful for describing lifetime data. In this context, the hazard quantile function is

$$\displaystyle{ H(u) =\{\lambda _{2}(1 - u){((1 -\lambda _{3}){u}^{\lambda _{4}-1} +\lambda _{3}{(1 - u)}^{\lambda _{4}-1})\}}^{-1}. }$$
(3.61)

Similarly, the mean residual quantile function is

$$\displaystyle{ M(u) =\lambda _{2}\left [\frac{1 -\lambda _{3}} {\lambda _{4}} \left ( \frac{1 - {u}^{\lambda _{4}+1}} {(1 - u)(\lambda _{4} + 1)} - {u}^{\lambda _{4}}\right ) + \frac{\lambda _{3}} {\lambda _{4} + 1}{(1 - u)}^{\lambda _{4}}\right ], }$$
(3.62)

the reversed hazard quantile function is

$$\displaystyle{\Lambda (u) ={ \left [\lambda _{2}u\left \{(1 -\lambda _{3}){u}^{\lambda _{4}-1} +\lambda _{3}{(1 - u)}^{\lambda _{4}-1}\right \}\right ]}^{-1},}$$

and the reversed mean residual quantile function is

$$\displaystyle{R(u) =\lambda _{2}\left [\frac{(1 -\lambda _{3})} {\lambda _{4} + 1} {u}^{\lambda _{4}} -\frac{\lambda _{3}{(1 - u)}^{\lambda _{4}}} {\lambda _{4}} + \frac{\lambda _{3}(1 - {(1 - u)}^{\lambda _{4}+1})} {u\lambda _{4}(\lambda _{4} + 1)} \right ].}$$

Further, the form

$$\displaystyle{u\;\theta (u) = A + B((1-\alpha ){u}^{C} +\alpha {(1 - u)}^{C})}$$

determines the quantile function in (3.55) as

$$\displaystyle{Q(u) = A + \frac{d} {du}u\;\theta (u),}$$

where θ(u) is as defined in (3.24).

van Staden and Loots [ 572 ] prescribed the method of L-moments for the estimation of the parameters. With the aid of

$$\displaystyle{\hat{\lambda }_{4} = \frac{3 + 7t_{4} \pm {(t_{4}^{2} + 98t_{4} + 1)}^{\frac{1} {2} }} {2(1 - t_{4})} \,}$$

where t 4 is the sample L-kurtosis, λ 4 can be estimated. Using \(\hat{\lambda }_{4}\), the estimate \(\hat{\lambda }_{3}\) of \(\lambda _{3}\) can be determined from

$$\displaystyle{\hat{\lambda }_{3} = \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{1} {2}\left [1 -\frac{t_{3}(\hat{\lambda }_{4}+3)} {\hat{\lambda }_{4}-1} \right ]\quad &\hat{\lambda }_{4}\neq 1 \\ \frac{1} {2} \quad &\hat{\lambda }_{4} = 1 \end{array} \right.,}$$

where t 3 is the sample L-skewness. The other two parameter estimates are computed as

$$\displaystyle\begin{array}{rcl} \hat{\lambda }_{2}& =& l_{2}(\hat{\lambda }_{4} + 1)(\hat{\lambda }_{4} + 2), {}\\ \hat{\lambda }_{1}& =& l_{1} + \frac{\hat{\lambda }_{2}(1 - 2\hat{\lambda }_{3})} {\hat{\lambda }_{4} + 1}, {}\\ \end{array}$$

with l 1 and l 2 being the usual first two sample L-moments.
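The estimation chain above is easily coded. The sketch below (ours, not from the source) returns both roots for \(\hat{\lambda }_{4}\) with their associated estimates; the test exercises it with the population L-moments of a known member, for which the true parameter set must be recovered:

```python
import math

def vsl_lmom_fit(l1_s, l2_s, t3, t4):
    """Method-of-L-moments estimates for the van Staden-Loots family.
    l1_s, l2_s are the first two sample L-moments; t3, t4 the sample
    L-skewness and L-kurtosis. Both roots for lambda_4 are returned."""
    disc = math.sqrt(t4 * t4 + 98.0 * t4 + 1.0)
    fits = []
    for lam4 in ((3 + 7 * t4 + disc) / (2 * (1 - t4)),
                 (3 + 7 * t4 - disc) / (2 * (1 - t4))):
        if lam4 == 1.0:
            lam3 = 0.5
        else:
            lam3 = 0.5 * (1.0 - t3 * (lam4 + 3.0) / (lam4 - 1.0))
        lam2 = l2_s * (lam4 + 1.0) * (lam4 + 2.0)
        lam1 = l1_s + lam2 * (1.0 - 2.0 * lam3) / (lam4 + 1.0)
        fits.append((lam1, lam2, lam3, lam4))
    return fits
```

In practice a goodness-of-fit criterion is needed to choose between the two returned candidates, echoing the non-uniqueness of \((\lambda _{3},\lambda _{4})\) noted earlier.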

The method of percentiles can also be applied for parameter estimation. In fact, the Moors measure T is free of λ 3, so the equation

$$\displaystyle{ t = \frac{{7}^{\lambda _{4}} - {5}^{\lambda _{4}} + {3}^{\lambda _{4}} - 1} {{2}^{\lambda _{4}}({3}^{\lambda _{4}} - 1)} }$$

provides \(\hat{\lambda }_{4}\), t being the Moors measure evaluated from the data. This is used in (3.59), with the Galton measure s computed from the data, to find \(\hat{\lambda }_{3}\), and then \(\hat{\lambda }_{2}\) and \(\hat{\lambda }_{1}\) are determined from (3.58) and (3.56) by equating IQR and M with iqr and m, respectively.

3.2.4 Five-Parameter Lambda Family

Gilchrist [ 215 ] proposed a five-parameter family of distributions with quantile function

$$\displaystyle{ Q(u) =\lambda _{1} + \frac{\lambda _{2}} {2}\left [(1 -\lambda _{3})\left (\frac{{u}^{\lambda _{4}} - 1} {\lambda _{4}} \right ) - (1 +\lambda _{3})\left (\frac{{(1 - u)}^{\lambda _{5}} - 1} {\lambda _{5}} \right )\right ] }$$
(3.63)

as an extension to the Freimer et al. [ 203 ] model in (3.44). Tarsitano [ 564 ] studied this model and evaluated various estimation methods for this family. The family has its quantile density function as

$$\displaystyle{q(u) =\lambda _{2}\left [\frac{1 -\lambda _{3}} {2} {u}^{\lambda _{4}-1} + \frac{1 +\lambda _{3}} {2} {(1 - u)}^{\lambda _{5}-1}\right ].}$$

In (3.63), λ 1 controls the location, though not exclusively, λ 2 ≥ 0 is a scale parameter, and λ 3, λ 4 and λ 5 are shape parameters. It is evident that the generalized Tukey lambda family in (3.44) is a special case when λ 3 = 0. The support of the distribution is given by

$$\displaystyle\begin{array}{rcl} & & \left (\lambda _{1} -\frac{\lambda _{2}(1 -\lambda _{3})} {2\lambda _{4}},\lambda _{1} +\frac{\lambda _{2}(1 +\lambda _{3})} {2\lambda _{5}} \right )\text{ when }\lambda _{4} > 0,\;\lambda _{5} > 0, {}\\ & & \left (\lambda _{1} -\frac{\lambda _{2}(1 -\lambda _{3})} {2\lambda _{4}},\infty \right )\text{ when }\lambda _{4} > 0,\;\lambda _{5} \leq 0, {}\\ & & \left (-\infty,\lambda _{1} +\frac{\lambda _{2}(1 +\lambda _{3})} {2\lambda _{5}} \right )\text{ when }\lambda _{4} \leq 0,\;\lambda _{5} > 0. {}\\ \end{array}$$

In the case of non-negative random variables, the condition

$$\displaystyle{\lambda _{1} - \frac{\lambda _{2}} {2\lambda _{4}}(1 -\lambda _{3}) \geq 0}$$

would become necessary. The density function may be unimodal with or without truncated tails, U-shaped, S-shaped or monotone. The family also includes the exponential distribution when \(\lambda _{4} \rightarrow \infty \) and \(\lambda _{5} \rightarrow 0\), the generalized Pareto distribution when \(\lambda _{4} \rightarrow \infty \) and \(\vert \lambda _{5}\vert < \infty \), and the power distribution when \(\lambda _{5} \rightarrow \infty \) and \(\vert \lambda _{4}\vert < \infty \). Some typical shapes of the distribution are displayed in Fig. 3.4. Tarsitano [ 564 ] has provided close approximations to various symmetric and asymmetric distributions using (3.63) and recommends the model when a particular distributional form cannot be suggested by the physical situation under consideration. Setting \(Z = \frac{2(X-\lambda _{1})} {\lambda _{2}}\), Tarsitano [ 564 ] expressed the raw moments in the form

$$\displaystyle{E({Z}^{r}) =\sum _{ j=0}^{r}{(-1)}^{j}\binom{r}{j}{\left (\frac{1 -\lambda _{3}} {\lambda _{4}} \right )}^{r-j}{\left (\frac{1 +\lambda _{3}} {\lambda _{5}} \right )}^{j}B(1 + (r - j)\lambda _{ 4},1 + j\lambda _{5})}$$

provided λ 4 and λ 5 are greater than \(-\frac{1} {r}\), where B(⋅,⋅) is the complete beta function, as before. The mean and variance are deduced from the above expression as

$$ \displaystyle\begin{array}{rcl} \mu & =& \lambda _{1} -\frac{\lambda _{2}(1 -\lambda _{3})} {2(1 +\lambda _{4})} + \frac{\lambda _{2}(1 +\lambda _{3})} {2(1 +\lambda _{5})}, {}\\ {\sigma }^{2}& =& \frac{{(1 -\lambda _{3})}^{2}} {\lambda _{4}^{2}(2\lambda _{4} + 1)} + \frac{{(\lambda _{3} + 1)}^{2}} {\lambda _{5}^{2}(2\lambda _{5} + 1)} -\frac{2(1 -\lambda _{3})} {\lambda _{4}\lambda _{5}} B(\lambda _{4} + 1,\lambda _{5} + 1). {}\\ \end{array} $$

The L-moments take on simpler expressions in this case, and the first four are as follows:

$$ \displaystyle\begin{array}{rcl} L_{1}& =& \mu, {}\\ L_{2}& =& \frac{\lambda _{2}(1 -\lambda _{3})} {2(\lambda _{4} + 1)_{(2)}} + \frac{\lambda _{2}(1 +\lambda _{3})} {2(\lambda _{5} + 1)_{(2)}}, {}\\ L_{3}& =& \frac{\lambda _{2}(1 -\lambda _{3})(\lambda _{4} - 1)} {2(\lambda _{4} + 1)_{(3)}} -\frac{\lambda _{2}(1 +\lambda _{3})(\lambda _{5} - 1)} {2(\lambda _{5} + 1)_{(3)}}, {}\\ L_{4}& =& \frac{\lambda _{2}(1 -\lambda _{3}){(\lambda _{4} - 1)}^{(2)}} {2(\lambda _{4} + 1)_{(4)}} -\frac{\lambda _{2}(1 +\lambda _{3}){(\lambda _{5} - 1)}^{(2)}} {2(\lambda _{5} + 1)_{(4)}}. {}\\ \end{array} $$

Percentile-based measures of location, spread, skewness and kurtosis can also be presented, but they involve rather cumbersome expressions. For example, the median is given by

$$\displaystyle{M =\lambda _{1} -\frac{\lambda _{2}(1 -\lambda _{3})} {2\lambda _{4}} + \frac{\lambda _{2}(1 +\lambda _{3})} {2\lambda _{5}} + \frac{\lambda _{2}(1 -\lambda _{3})} {\lambda _{4}}{ \left (\frac{1} {2}\right )}^{\lambda _{4}+1} -\frac{\lambda _{2}(1 -\lambda _{3})} {\lambda _{5}}{ \left (\frac{1} {2}\right )}^{\lambda _{5}+1}.}$$

The means of order statistics are as follows:

$$ \displaystyle\begin{array}{rcl} E(X_{r:n})& =& \lambda _{1} + \frac{\lambda _{2}} {2}\left [\frac{1 +\lambda _{3}} {\lambda _{5}} -\frac{1 -\lambda _{3}} {\lambda _{4}} \right ] + \frac{\lambda _{2}(1 -\lambda _{3})} {2\lambda _{4}} \frac{B(\lambda _{4} + r - 1,n - r + 1)} {B(r,n - r + 1)} \\ & & -\frac{\lambda _{2}(1 +\lambda _{3})} {2\lambda _{5}} \frac{B(r,\lambda _{5} + n - r + 1)} {B(r,n - r + 1)},\quad r = 1,\ldots,n, \\ E(X_{1:n})& =& \lambda _{1} + \frac{\lambda _{2}} {2}\left [\frac{1 +\lambda _{3}} {\lambda _{5}} -\frac{1 -\lambda _{3}} {\lambda _{4}} \right ] + \frac{\lambda _{2}(1 -\lambda _{3})} {2\lambda _{4}} \frac{n!} {(\lambda _{4})_{(n)}} - n \frac{\lambda _{2}} {2\lambda _{5}} \frac{(1 +\lambda _{3})} {n +\lambda _{5}}, \\ E(X_{n:n})& =& \lambda _{1} + \frac{\lambda _{2}} {2}\left [\frac{1 +\lambda _{3}} {\lambda _{5}} -\frac{1 -\lambda _{3}} {\lambda _{4}} \right ] + \frac{n\lambda _{2}(1 -\lambda _{3})} {2\lambda _{4}(\lambda _{4} + n - 1)} - \frac{\lambda _{2}(1 +\lambda _{3})n!} {2(\lambda _{5} + 1)_{(n)}}.{}\end{array}$$
(3.64)
Fig. 3.4
figure 4

Density plots of the five-parameter GLD for different choices of \((\lambda _{1},\lambda _{2},\lambda _{3},\lambda _{4},\lambda _{5})\). (a) (1,1,0,10,10); (b) (1,1,0,2,2); (c) (1,1,0.5,-0.6,-0.5); (d) (1,1,-0.5,-0.6,-0.5); (e) (1,1,0.5,0.5,5)

Tarsitano [ 564 ] discussed the estimation problem through nonlinear least-squares and least absolute deviation approaches. For a random sample \(X_{1},X_{2},\ldots,X_{n}\) of size n from (3.63), under the least-squares approach, we consider

$$\displaystyle{X_{r:n} = E(X_{r:n}) +\epsilon _{r},\quad r = 1,2,\ldots,n,}$$

and then seek the parameter values that minimize

$$\displaystyle{ \sum _{r=1}^{n}{[X_{ r:n} - E(X_{r:n})]}^{2}. }$$
(3.65)

In terms of the expectations of order statistics in (3.60), note that X r: n is an estimate of the expectation in (3.64), which is linear in λ 1, λ 2 and λ 3 but nonlinear in λ 4 and λ 5. So, as in the case of the Öztürk and Dale method discussed earlier, we may fix \((\lambda _{4},\lambda _{5})\) and determine \(\lambda _{1},\lambda _{2}\) and λ 3 by linear least-squares. Then, \((\lambda _{4},\lambda _{5})\) can be found such that (3.65) is minimized. In the least absolute deviation procedure, the objective function to be minimized is

$$\displaystyle{\sum _{r=1}^{n}\vert X_{ r:n} - Q(u_{r}^{{\ast}})\vert,}$$

where

$$\displaystyle{u_{r}^{{\ast}} = Q(B_{ u}^{-1}(r,n - r + 1)).}$$
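The profiled least-squares scheme just described can be sketched numerically. The following is a minimal Python illustration, assuming the FKML-type form \(Q(u) =\lambda _{1} + \frac{\lambda _{2}} {2}\left [\frac{(1 -\lambda _{3}){u}^{\lambda _{4}}} {\lambda _{4}} -\frac{(1 +\lambda _{3}){(1 - u)}^{\lambda _{5}}} {\lambda _{5}} \right ]\) consistent with the quantile density in (3.63); the plotting positions \(u_{r} = r/(n + 1)\) stand in for the exact means of uniform order statistics, and the crude grid over \((\lambda _{4},\lambda _{5})\) would be replaced by a proper optimizer in practice (function names are ours):

```python
import numpy as np

def gld_design(u, lam4, lam5):
    # Basis of the assumed FKML-type GLD: Q(u) = lam1 + a*u**lam4/lam4
    # - b*(1-u)**lam5/lam5, with a = lam2*(1-lam3)/2, b = lam2*(1+lam3)/2.
    return np.column_stack([np.ones_like(u),
                            u**lam4 / lam4,
                            -(1.0 - u)**lam5 / lam5])

def fit_gld_ls(x, grid):
    # Profiled least squares: for each trial (lam4, lam5) the subproblem
    # in (lam1, a, b) is linear; keep the pair with smallest residual sum.
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    u = np.arange(1, n + 1) / (n + 1.0)   # plotting positions
    best = None
    for lam4 in grid:
        for lam5 in grid:
            A = gld_design(u, lam4, lam5)
            theta, *_ = np.linalg.lstsq(A, xs, rcond=None)
            rss = float(np.sum((A @ theta - xs) ** 2))
            if best is None or rss < best[0]:
                best = (rss, lam4, lam5, theta)
    rss, lam4, lam5, (lam1, a, b) = best
    return {"lam1": lam1, "lam2": a + b, "lam3": (b - a) / (a + b),
            "lam4": lam4, "lam5": lam5, "rss": rss}
```

The linear subproblem is solved in the reparameterization \(a =\lambda _{2}(1 -\lambda _{3})/2\), \(b =\lambda _{2}(1 +\lambda _{3})/2\), from which λ 2 and λ 3 are recovered.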

3.3 Power-Pareto Distribution

As seen earlier in Table 1.1, the quantile function of the power distribution is of the form

$$\displaystyle{Q_{1}(u) =\alpha {u}^{\lambda _{1}},\quad 0 \leq u \leq 1;\;\alpha,\lambda _{1} > 0,}$$

while that of the Pareto distribution is

$$\displaystyle{Q_{2}(u) =\sigma {(1 - u)}^{-\lambda _{2} },\quad 0 \leq u \leq 1;\;\sigma,\lambda _{2} > 0.}$$

A new quantile function can then be formed by taking the product of these two as

$$\displaystyle{ Q(u) = \frac{C{u}^{\lambda _{1}}} {{(1 - u)}^{\lambda _{2}}},\quad 0 \leq u \leq 1, }$$
(3.66)

where C > 0, \(\lambda _{1},\lambda _{2} \geq 0\), and at most one of the λ’s may be equal to zero. The distribution of a random variable X with (3.66) as its quantile function is called the power-Pareto distribution. Gilchrist [ 215 ] and Hankin and Lee [ 259 ] have studied the properties of this distribution. Its quantile density function is

$$\displaystyle{ q(u) = \frac{C{u}^{\lambda _{1}-1}} {{(1 - u)}^{\lambda _{2}+1}}[\lambda _{1}(1 - u) +\lambda _{2}u]. }$$
(3.67)

In (3.66), C is a scale parameter, λ 1 and λ 2 are shape parameters, with λ 1 controlling the left tail and λ 2 the right tail. The shape of the density function is displayed in Fig. 3.5 for some parameter values when the scale parameter C = 1.

Fig. 3.5
figure 5

Density plots of power-Pareto distribution for some choices of \((C,\lambda _{1},\lambda _{2})\). (a) (1,0.5,0.01); (b) (1,1,0.2); (c) (1,0.2,0.1); (d) (1,0.1,1); (e) (1,0.5,0.001); (f) (1,2,0.001)
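Since the distribution is specified directly by its quantile function, simulation is immediate through the inverse-transform method X = Q(U). A small sketch (function names are ours):

```python
import numpy as np

def power_pareto_q(u, C, lam1, lam2):
    # Quantile function (3.66): Q(u) = C * u**lam1 / (1-u)**lam2.
    u = np.asarray(u, dtype=float)
    return C * u**lam1 / (1.0 - u)**lam2

def power_pareto_rvs(n, C, lam1, lam2, rng=None):
    # X = Q(U) with U ~ Uniform(0, 1) is a draw from the power-Pareto law.
    rng = np.random.default_rng() if rng is None else rng
    return power_pareto_q(rng.uniform(size=n), C, lam1, lam2)
```

When λ 1 = λ 2, the median equals C, since Q(1∕2) = C 2^{λ 2−λ 1}.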

Conventional moments of (3.66) are given by

$$\displaystyle{E({X}^{r}) =\int _{ 0}^{1}{\left [ \frac{C{u}^{\lambda _{1}}} {{(1 - u)}^{\lambda _{2}}} \right ]}^{r}du = {C}^{r}B(1 + r\lambda _{ 1},1 - r\lambda _{2})}$$

which exists whenever \(\lambda _{2} < \frac{1} {r}\). From this, the mean and variance are obtained as

$$\displaystyle{ \mu = \frac{C\Gamma (1 +\lambda _{1})\Gamma (1 -\lambda _{2})} {\Gamma (2 +\lambda _{1} -\lambda _{2})} }$$
(3.68)

and

$${\displaystyle{\sigma }^{2} = {C}^{2}\left \{\frac{\Gamma (1 + 2\lambda _{1})\Gamma (1 - 2\lambda _{2})} {\Gamma (2 + 2\lambda _{1} - 2\lambda _{2})} -\frac{{\Gamma }^{2}(1 +\lambda _{1}){\Gamma }^{2}(1 -\lambda _{2})} {{\Gamma }^{2}(2 +\lambda _{1} -\lambda _{2})} \right \}.}$$

The ranges of skewness and kurtosis are evaluated over the region λ 1 > 0, \(0 \leq \lambda _{2} < \frac{1} {4}\). Minimum skewness and minimum kurtosis are both attained at λ 2 = 0, and both these measures show an increasing trend with respect to λ 1 and λ 2. Kurtosis is also seen to be an increasing function of skewness. Hankin and Lee [ 259 ] have mentioned that the distribution is more suitable for positively skewed data and can provide good approximations to gamma, Weibull and lognormal distributions.

Percentile-based measures are simpler and are given by

$$\displaystyle\begin{array}{rcl} M& =& C{2}^{\lambda _{2}-\lambda _{1}}, {}\\ \text{IQR}& =& C{4}^{\lambda _{2}-\lambda _{1}}({3}^{\lambda _{1}} - {3}^{-\lambda _{2}}), {}\\ S& =& \frac{{3}^{\lambda _{1}} + {3}^{-\lambda _{2}} - {2}^{\lambda _{1}-\lambda _{2}+1}} {{3}^{\lambda _{1}} - {3}^{-\lambda _{2}}}, {}\\ T& =& \frac{{2}^{\lambda _{2}-\lambda _{1}}({7}^{\lambda _{1}} - {5}^{\lambda _{1}}{3}^{-\lambda _{2}} + {3}^{\lambda _{1}}{5}^{-\lambda _{2}} - {7}^{-\lambda _{2}})} {{3}^{\lambda _{1}} - {3}^{-\lambda _{2}}}. {}\\ \end{array}$$
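These closed forms follow from evaluating Q at the relevant quartiles and octiles; the sketch below recomputes the median, interquartile range, Galton skewness S and Moors kurtosis T directly from the quantile function so that the expressions can be checked numerically (function names are ours):

```python
def ppq(u, C, l1, l2):
    # power-Pareto quantile function (3.66)
    return C * u**l1 / (1.0 - u)**l2

def percentile_measures(C, l1, l2):
    # Median, IQR, Galton skewness S and Moors kurtosis T computed
    # directly from the quantile function.
    q = lambda p: ppq(p, C, l1, l2)
    M = q(0.5)
    IQR = q(0.75) - q(0.25)
    S = (q(0.75) + q(0.25) - 2.0 * M) / IQR
    T = (q(7 / 8) - q(5 / 8) + q(3 / 8) - q(1 / 8)) / IQR
    return M, IQR, S, T
```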

Further, the first four L-moments are as follows:

$$\displaystyle\begin{array}{rcl} L_{1}& =& \mu, {}\\ L_{2}& =& \frac{C(\lambda _{1} +\lambda _{2})} {\lambda _{1} -\lambda _{2} + 2}B(\lambda _{1} + 1,1 -\lambda _{2}), {}\\ L_{3}& =& \frac{C(\lambda _{1}^{2} +\lambda _{ 2}^{2} + 4\lambda _{1}\lambda _{2} +\lambda _{2} -\lambda _{1})B(\lambda _{1} + 1,1 -\lambda _{2})} {(\lambda _{1} -\lambda _{2} + 2)_{(2)}}, {}\\ L_{4}& =& \frac{C(\lambda _{1} +\lambda _{2})(\lambda _{1}^{2} +\lambda _{ 2}^{2} + 8\lambda _{1}\lambda _{2} - 3\lambda _{1} + 3\lambda _{2} + 2)} {(\lambda _{1} -\lambda _{2} + 2)_{(3)}} B(\lambda _{1} + 1,1 -\lambda _{2}), {}\\ \end{array}$$

where B(⋅,⋅) is the complete beta function. From these expressions, we readily obtain the L-skewness and L-kurtosis measures as

$$\displaystyle{\tau _{3} = \frac{\lambda _{1}^{2} +\lambda _{2}^{2} + 4\lambda _{1}\lambda _{2} +\lambda _{2} -\lambda _{1}} {(\lambda _{1} +\lambda _{2})(\lambda _{1} -\lambda _{2} + 3)} }$$

and

$$\displaystyle{\tau _{4} = \frac{\lambda _{1}^{2} +\lambda _{2}^{2} + 8\lambda _{1}\lambda _{2} - 3\lambda _{1} + 3\lambda _{2} + 2} {(\lambda _{1} -\lambda _{2} + 3)(\lambda _{1} -\lambda _{2} + 4)}.}$$

The expected value of the rth order statistic is

$$\displaystyle{E(X_{r:n}) = C\frac{B(\lambda _{1} + r,n -\lambda _{2} - r + 1)} {B(r,n - r + 1)} \quad n >\lambda _{2} + r - 1,\quad r = 1,2,\ldots,n,}$$

while the quantile functions of X 1: n and X n: n are given by

$$\displaystyle{Q_{1}(u) = C\frac{{\left [1 - {(1 - u)}^{1/n}\right ]}^{\lambda _{1}}} {{(1 - u)}^{\lambda _{2}/n}} }$$

and

$$\displaystyle{Q_{n}(u) = C \frac{{u}^{\frac{\lambda _{1}} {n} }} {{(1 - {u}^{\frac{1} {n} })}^{\lambda _{2}}}.}$$
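The order-statistic mean can be cross-checked against its defining integral \(E(X_{r:n}) = \frac{1} {B(r,n-r+1)}\int _{0}^{1}Q(u){u}^{r-1}{(1 - u)}^{n-r}du\). The sketch below compares the beta-function expression with a midpoint-rule evaluation of that integral (function names are ours):

```python
import math
import numpy as np

def beta_fn(a, b):
    # complete beta function via log-gamma for numerical stability
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

def ev_order_stat(r, n, C, l1, l2):
    # E(X_{r:n}) = C * B(l1 + r, n - r + 1 - l2) / B(r, n - r + 1),
    # requiring n - r + 1 > l2.
    return C * beta_fn(l1 + r, n - r + 1 - l2) / beta_fn(r, n - r + 1)

def ev_numeric(r, n, C, l1, l2, m=200000):
    # Midpoint rule for the defining integral of E(X_{r:n}).
    u = (np.arange(m) + 0.5) / m
    f = C * u**(l1 + r - 1) * (1.0 - u)**(n - r - l2) / beta_fn(r, n - r + 1)
    return float(f.mean())
```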

It is easily seen that the hazard quantile function is

$$\displaystyle{ H(u) = {(1 - u)}^{\lambda _{2}}\{C{u}^{\lambda _{1}-1}{(\lambda _{1}(1 - u) +\lambda _{2}u)\}}^{-1}, }$$
(3.69)

the mean residual quantile function is

$$\displaystyle{ M(u) = \frac{C} {1 - u}B_{1-u}(1 -\lambda _{2},1 +\lambda _{1}) - \frac{C{u}^{\lambda _{1}}} {{(1 - u)}^{\lambda _{2}}}, }$$
(3.70)

the reversed hazard quantile function is

$$\displaystyle{ \Lambda (u) = {(1 - u)}^{\lambda _{2}+1}{[C{u}^{\lambda _{1}}(\lambda _{1}(1 - u) +\lambda _{2}u)]}^{-1}, }$$
(3.71)

and the reversed mean residual quantile function is

$$\displaystyle{ R(u) = \frac{C{u}^{\lambda _{1}}} {{(1 - u)}^{\lambda _{2}}} -\frac{C} {u}B_{u}(\lambda _{1} + 1,1 -\lambda _{2}). }$$
(3.72)
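Each of (3.69)–(3.72) follows from the quantile density (3.67) through the identities \(H(u) = {[(1 - u)q(u)]}^{-1}\) and \(\Lambda (u) = {[uq(u)]}^{-1}\); a quick numerical consistency check (function names are ours):

```python
def pp_qdens(u, C, l1, l2):
    # quantile density (3.67)
    return C * u**(l1 - 1) / (1.0 - u)**(l2 + 1) * (l1 * (1.0 - u) + l2 * u)

def hazard_q(u, C, l1, l2):
    # H(u) = 1 / ((1 - u) q(u)), which should reduce to (3.69)
    return 1.0 / ((1.0 - u) * pp_qdens(u, C, l1, l2))

def rev_hazard_q(u, C, l1, l2):
    # Lambda(u) = 1 / (u q(u)), which should reduce to (3.71)
    return 1.0 / (u * pp_qdens(u, C, l1, l2))
```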

Next, upon denoting the quantile function of the distribution by \(Q(u;C,\lambda _{1},\lambda _{2})\), we have the following characterization results for this family of distributions (Nair and Sankaran [ 443 ]).

Theorem 3.1.

A non-negative variable X is distributed as \(Q(u;C,\lambda _{1},0)\) if and only if

  1. (i)

    \(H(u) = k_{1}u{[(1 - u)Q(u)]}^{-1}\) , k 1 > 0;

  2. (ii)

    \(M(u) = k_{1}{[Q(u)]}^{-1}\) ;

  3. (iii)

    R(u) = k 2 Q(u), k 2 < 1;

  4. (iv)

Λ(u)R(u) = k 3, k 3 < 1,

where k i ’s are constants.

Theorem 3.2.

A non-negative random variable X is distributed as Q 1 (u;C,0,λ 2 ) if and only if

  1. 1.

    \(H(u) = A_{1}{[Q_{1}(u)]}^{-1}\) , A 1 > 0;

  2. 2.

    \(\Lambda (u) = A_{1}(1 - u){[uQ_{1}(u)]}^{-1}\) ;

  3. 3.

    \(M(u) = A_{2}Q_{1}(u)\) ;

  4. 4.

    M(u)H(u) = A 3, A 3 > 0,

where A 1, A 2 and A 3 are constants.

An interesting special case of (3.66) arises when \(\lambda _{1} =\lambda _{2} =\lambda > 0\), in which case it becomes the loglogistic distribution (see Table 1.1). A detailed analysis of the loglogistic model and its applications in reliability studies has been made by Cox and Oakes [ 158 ] and Gupta et al. [ 237 ]. In this case, we deduce the following characterizations from the above.

Theorem 3.3.

A non-negative random variable X has loglogistic distribution with

$$\displaystyle{Q(u) = C{\left ( \frac{u} {1 - u}\right )}^{\lambda },\quad C,\lambda > 0,}$$

if and only if one of the following conditions hold:

  1. (i)

    \(H(u) = \frac{ku} {Q(u)}\) ;

  2. (ii)

    \(\Lambda (u) = \frac{k(1-u)} {Q(u)}\) .

Hankin and Lee [ 259 ] proposed two methods of estimation—the least-squares and the maximum likelihood. In the least-squares method, they use

$$\displaystyle{ E(\log X_{r:n}) =\log C +\lambda _{1}E(\log U_{r:n}) -\lambda _{2}E(\log (1 - U_{r:n})), }$$
(3.73)

since logX r: n and logQ(U r: n ) have the same distribution, where U r: n is the rth order statistic from a sample of size n from the uniform distribution. Thus, from (3.36), we have

$$\displaystyle\begin{array}{rcl} E(\log U_{r:n})& =& \frac{1} {B(r,n - r + 1)}\int _{ 0}^{1}(\log u){u}^{r-1}{(1 - u)}^{n-r}du \\ & =& -\left (\frac{1} {r} + \frac{1} {r + 1} +\ldots + \frac{1} {n}\right ) {}\end{array}$$
(3.74)

and

$$\displaystyle\begin{array}{rcl} E(\log (1 - U_{r:n}))& = -\left ( \frac{1} {n-r+1} + \frac{1} {n-r+2} + \cdots + \frac{1} {n}\right ).&{}\end{array}$$
(3.75)

Then, the model parameters are estimated by minimizing

$$\displaystyle{\sum {[\log X_{r:n} - E(\log X_{r:n})]}^{2}.}$$

Substituting (3.74) and (3.75) into the expression of E(logX r: n ) in (3.73), we obtain an ordinary linear regression problem, which can be solved by standard programs available for the purpose. Maximum likelihood estimates are calculated as described earlier in Sect. 3.2.2, following the steps in Hankin and Lee [ 259 ]. In a comparison of the two methods by means of simulated variances, Hankin and Lee [ 259 ] found the least-squares method to be better for small samples when the parameters λ 1 and λ 2 are roughly equal, and the maximum likelihood method to be better otherwise.
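The regression in (3.73)–(3.75) amounts to one linear least-squares pass once the two expected-log covariates are tabulated from the harmonic sums. A compact sketch (function names are ours):

```python
import numpy as np

def expected_logs(n):
    # E(log U_{r:n})     = -(1/r + ... + 1/n),           cf. (3.74)
    # E(log(1 - U_{r:n})) = -(1/(n-r+1) + ... + 1/n),    cf. (3.75)
    h = np.concatenate([[0.0], np.cumsum(1.0 / np.arange(1, n + 1))])
    r = np.arange(1, n + 1)
    return -(h[n] - h[r - 1]), -(h[n] - h[n - r])

def fit_power_pareto_logls(x):
    # Regress sorted log X on the expected-log covariates to estimate
    # (log C, lam1, lam2) by ordinary least squares, following (3.73).
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    e1, e2 = expected_logs(n)
    A = np.column_stack([np.ones(n), e1, -e2])
    (logC, lam1, lam2), *_ = np.linalg.lstsq(A, np.log(xs), rcond=None)
    return np.exp(logC), lam1, lam2
```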

3.4 Govindarajulu’s Distribution

Govindarajulu’s [ 224 ] model is the earliest attempt to introduce a quantile function without an explicit form for its distribution function in modelling data on failure times. He considered the quantile function

$$\displaystyle{ Q(u) =\theta +\sigma \left \{(\beta +1){u}^{\beta } -\beta {u}^{\beta +1}\right \},\quad 0 \leq u \leq 1;\;\sigma,\beta > 0. }$$
(3.76)

He used it to model the data on the failure times of a set of 20 refrigerators that were run to destruction under advanced stress conditions. Even though the validity of the model and its application to nonparametric inference were studied by him, the properties of the distribution were not explored. We now present a detailed study of its properties and applications.

The support of the distribution in (3.76) is (θ, θ + σ). Since we treat it as a lifetime model, θ is set to be zero so that (3.76) reduces to

$$\displaystyle{ Q(u) =\sigma \left \{(\beta +1){u}^{\beta } -\beta {u}^{\beta +1}\right \},\quad 0 \leq u \leq 1. }$$
(3.77)

Note that there is no loss of generality in studying the properties of this distribution based on (3.77) since the transformation Y = X + θ, where X has its quantile function Q(u) as in (3.77), will provide the corresponding results for (3.76). From (3.77), the quantile density function is

$$\displaystyle{ q(u) =\sigma \beta (\beta +1){u}^{\beta -1}(1 - u). }$$
(3.78)

Equation (3.78) yields the density function of X as

$$\displaystyle{ f(x) = {[\sigma \beta (\beta +1)]}^{-1}{F}^{1-\beta }(x){(1 - F(x))}^{-1}. }$$
(3.79)

Thus, this model belongs to the class of distributions, possessing density function explicitly in terms of the distribution function, discussed by Jones [ 307 ]. Further, by differentiating (3.78), we get

$$\displaystyle{q^{\prime}(u) =\sigma \beta (\beta +1){u}^{\beta -2}[(\beta -1) -\beta u]}$$

from which we observe that the density function is monotone for β ≤ 1, and q′(u) = 0 gives \(u_{0} {=\beta }^{-1}(\beta -1)\). Thus, when β > 1, there is an antimode at u 0. Figure 3.6 shows the shapes of the density function for some choices of β.

Fig. 3.6
figure 6

Density plots of Govindarajulu’s distribution for some choices of β. (a) β = 3; (b) β = 0.5; (c) β = 2
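The antimode location can be confirmed numerically from (3.78): since f(Q(u)) = 1∕q(u), a maximum of the quantile density q corresponds to a minimum (antimode) of the density. A small sketch (function names are ours):

```python
def gov_qdens(u, sigma, beta):
    # quantile density (3.78)
    return sigma * beta * (beta + 1.0) * u**(beta - 1.0) * (1.0 - u)

def gov_antimode_u(beta):
    # stationary point u0 = (beta - 1)/beta of q, relevant for beta > 1
    return (beta - 1.0) / beta
```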

The raw moments are given by

$$\displaystyle{E({X}^{r}) =\int _{ 0}^{1}{[Q(p)]}^{r}dp {=\sigma }^{r}\sum _{ j=0}^{r}{(-1)}^{j}\binom{r}{j}{(\beta +1)}^{r-j}{\beta }^{j}/(\beta r + j + 1).}$$

In particular, the mean and variance are

$$\displaystyle\begin{array}{rcl} \mu & =& 2\sigma {(\beta +2)}^{-1}, {}\\ \text{var}& =& \frac{{\beta }^{2}(5\beta + 7){\sigma }^{2}} {(2\beta + 1)(2\beta + 3){(\beta +2)}^{2}}. {}\\ \end{array}$$

Moreover, we have the median as

$$\displaystyle{M =\sigma {2}^{-(\beta +1)}(\beta +2),}$$

the interquartile range as

$$\displaystyle{\text{IQR} =\sigma {4}^{-(\beta +1)}[{3}^{\beta }(\beta +4) - (3\beta + 4)],}$$

the skewness as

$$\displaystyle{S = \frac{\sigma [(\beta +1)\frac{{3}^{\beta }+1} {{4}^{\beta }} -\frac{\beta ({3}^{\beta +1}+1)} {{4}^{\beta }} + \frac{\beta +2} {{2}^{\beta }} ]} {\text{IQR}} }$$

and the kurtosis as

$$\displaystyle{T = \frac{\sigma \big[(\beta +1)\frac{({7}^{\beta }-{5}^{\beta }+{3}^{\beta }-1)} {{8}^{\beta }} -\frac{\beta ({7}^{\beta +1}-{5}^{\beta +1}+{3}^{\beta +1}-1)} {{8}^{\beta +1}} \big]} {\text{IQR}} }$$

as percentile-based descriptive measures. Much simpler expressions are available for the L-moments as follows:

$$\displaystyle\begin{array}{rcl} L_{1}& =& \mu, {}\\ L_{2}& =& \frac{2\beta \sigma } {(\beta +2)(\beta +3)}, {}\\ L_{3}& =& \frac{2\beta (\beta -2)\sigma } {(\beta +2)_{(3)}}, {}\\ L_{4}& =& \frac{(2{\beta }^{3} - 12{\beta }^{2} + 10\beta )\sigma } {(\beta +2)_{(4)}}. {}\\ \end{array}$$

Consequently, we have

$$\displaystyle{\tau _{3} = \frac{\beta -2} {\beta +4}\quad \text{and}\quad \tau _{4} = \frac{(\beta -5)(\beta -1)} {(\beta +4)(\beta +5)}.}$$

With τ 3 being an increasing function of β, its limits are obtained as β → 0 and β → ∞. These limits show that τ 3 lies in \((-\frac{1} {2},1)\), and so it does not cover the entire range ( − 1, 1). But the distribution has negatively skewed, symmetric (at β = 2) and positively skewed members. The L-kurtosis τ 4 is nonmonotone, decreasing initially, reaching a minimum, and then increasing towards unity.
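The L-moment expressions can be verified numerically using \(L_{r} =\int _{ 0}^{1}Q(u)P_{r-1}^{{\ast}}(u)du\), where \(P_{r}^{{\ast}}\) are the shifted Legendre polynomials. A midpoint-rule sketch (function names are ours):

```python
import numpy as np

def gov_Q(u, sigma, beta):
    # quantile function (3.77)
    return sigma * ((beta + 1.0) * u**beta - beta * u**(beta + 1.0))

def l_moments_numeric(Q, m=200000):
    # L_r = integral of Q(u) times the shifted Legendre polynomial P*_{r-1}
    u = (np.arange(m) + 0.5) / m
    q = Q(u)
    polys = [np.ones_like(u),
             2.0 * u - 1.0,
             6.0 * u**2 - 6.0 * u + 1.0,
             20.0 * u**3 - 30.0 * u**2 + 12.0 * u - 1.0]
    return [float(np.mean(q * p)) for p in polys]
```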

A particularly interesting property of Govindarajulu’s distribution is the distribution of its order statistics. The density function of X r: n is

$$\displaystyle\begin{array}{rcl} \begin{array}{cl} f_{r}(x)& = \frac{1} {B(r,n-r+1)}f(x){F}^{r-1}(x){(1 - F(x))}^{n-r} \\ & = \frac{1} {\sigma \beta (\beta +1)B(r,n-r+1)}{F}^{r-\beta }(x){(1 - F(x))}^{n-r-1}, \end{array} & &{}\end{array}$$
(3.80)

upon using (3.79). So, we have

$$\displaystyle\begin{array}{rcl} \begin{array}{cl} E(X_{r:n})& = \frac{1} {B(r,n-r+1)}\int _{0}^{1}Q(u){u}^{r-1}{(1 - u)}^{n-r}du \\ & = \frac{n!\Gamma (\beta +r)\sigma } {(r-1)!\Gamma (n+\beta +2)}\{(n + 1)(\beta +1) -\beta (r - 1)\}.\end{array} & &{}\end{array}$$
(3.81)

In particular,

$$\displaystyle\begin{array}{rcl} E(X_{1:n})& =& \frac{(n + 1)!\sigma \Gamma (\beta +2)} {\Gamma (n +\beta +2)} = \frac{(n + 1)!\sigma } {(\beta +2)_{(n)}} {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} E(X_{n:n})& =& \frac{n(n + 2\beta + 1)\sigma } {(n+\beta )(n +\beta +1)}. {}\\ \end{array}$$

The quantile functions of X 1: n and X n: n are

$$\displaystyle{Q_{1}(u) =\sigma {[(1 - {(1 - u)}^{ \frac{1} {n} })]}^{\beta }[1 +\beta {(1 - u)}^{ \frac{1} {n} }]}$$

and

$$\displaystyle{Q_{n}(u) =\sigma [(\beta +1){u}^{ \frac{\beta }{ n} } -\beta {u}^{\frac{\beta +1} {n} }].}$$

All the reliability functions also have tractable forms. The hazard quantile function is given by

$$\displaystyle{H(u) = {[\sigma \beta (\beta +1){u}^{\beta -1}{(1 - u)}^{2}]}^{-1}}$$

and the mean residual quantile function is

$$\displaystyle\begin{array}{rcl} M(u)& =& [2 - (\beta +1)(\beta +2){u}^{\beta } +\beta (\beta +2){u}^{\beta +1} -\beta (\beta +1){u}^{\beta +2}] {}\\ & & \times {[(\beta +2)(1 - u)]}^{-1}\sigma. {}\\ \end{array}$$

From the expression of the quantile function, it is evident that the parameter β largely controls the left tail and therefore facilitates in modelling reliability concepts in reversed time. Accordingly, the reversed hazard and reversed mean residual quantile functions are given by

$$\displaystyle\begin{array}{rcl} \Lambda (u)& =& {[\sigma \beta (\beta +1){u}^{\beta }(1 - u)]}^{-1}, {}\\ R(u)& =& \frac{\beta \sigma } {\beta +2}[\beta +2 - (\beta +1)u]{u}^{\beta }, {}\\ \end{array}$$

respectively. The reversed variance residual quantile function has the expression

$$\displaystyle\begin{array}{rcl} D(u)& =& {u}^{-1}\int _{ 0}^{u}{R}^{2}(p)dp {}\\ & =& \frac{{\sigma {}^{2}\beta }^{2}{u}^{2\beta }} {{(\beta +2)}^{2}}\left \{\frac{{(\beta +1)}^{2}{u}^{2}} {2\beta + 3} - (\beta +2)u + \frac{{(\beta +1)}^{2}} {2\beta + 1} \right \}. {}\\ \end{array}$$

We further note that the function

$$\displaystyle{ R(u)\Lambda (u) = \frac{{(\beta +1)}^{-1} - {(\beta +2)}^{-1}u} {1 - u} }$$
(3.82)

is a homographic function of u.

Characterization problems of life distributions by the relationship between the reversed hazard rate and the reversed mean residual life in the distribution function approach have been discussed in literature; see Chandra and Roy [ 135 ]. In this spirit, from (3.82), we have the following theorem.

Theorem 3.4.

For a non-negative random variable X, the relationship

$$\displaystyle{ R(u)\Lambda (u) = \frac{a + bu} {1 - u} }$$
(3.83)

holds for all 0 < u < 1 if and only if

$$\displaystyle{ Q(u) = K\left ( \frac{a} {1 - a}{u}^{\frac{1} {a}-1} - a{u}^{\frac{1} {a} }\right ) }$$
(3.84)

provided that a and b are real numbers satisfying

$$\displaystyle{ \frac{1} {a} + \frac{1} {b} = -1. }$$
(3.85)

Proof.

Suppose (3.83) holds. Then, we have

$$\displaystyle{ \left \{\frac{1} {u}\int _{0}^{u}p\,q(p)dp\right \}\big[{uq(u)\big]}^{-1} = \frac{a + bu} {1 - u}. }$$
(3.86)

Equation (3.86) simplifies to

$$\displaystyle{ \frac{uq(u)} {\int _{0}^{u}p\,q(p)dp} = \frac{1 - u} {u(a + bu)} = \frac{1} {au} - \frac{a + b} {a(a + bu)}.}$$

Upon integrating the above equation, we obtain

$$\displaystyle{\log \int _{0}^{u}pq(p)dp = \frac{1} {a}\log u -\frac{a + b} {ab} \log (a + bu) +\log K,}$$

or

$$\displaystyle{\int _{0}^{u}pq(p)dp = K{u}^{\frac{1} {a} }(a + bu)}$$

with the use of (3.85). By differentiation, we then obtain

$$\displaystyle{q(u) = K{u}^{\frac{1} {a}-2}(1 - u).}$$

Integrating the last expression from 0 to u and setting Q(0) = 0, we get (3.84). The converse part follows from the equations

$$\displaystyle{\Lambda (u) = {[K{u}^{\frac{1} {a}-1}(1 - u)]}^{-1}}$$

and

$$\displaystyle{R(u) = \frac{Ka} {a + 1}{u}^{\frac{1} {a}-1}(a + 1 - u).}$$

Remark 3.2.

Govindarajulu’s distribution is secured when \(a = {(1+\beta )}^{-1}\). The condition imposed on a and b in (3.85) can be relaxed to provide a more general family of quantile functions.

Regarding the estimation of the parameters σ and β, all the conventional methods like method of moments, percentiles, least-squares and maximum likelihood can be applied to the distribution quite easily. For example, in the method of moments, equating the mean and variance, we obtain

$$\displaystyle{ \bar{X} = \frac{2\sigma } {\beta +2}\quad \text{and}\quad {S}^{2} = \frac{{\beta }^{2}(5\beta + 7){\sigma }^{2}} {(2\beta + 1)(2\beta + 3){(\beta +2)}^{2}}. }$$
(3.87)

Thus, we get

$$\displaystyle{\frac{\bar{{X}}^{2}} {{S}^{2}} = \frac{4(2\beta + 1)(2\beta + 3)} {{\beta }^{2}(5\beta + 7)} }$$

which may be solved to get β. Then, by substituting it in (3.87), the estimate of σ can be found. There may be more than one solution for β, in which case a goodness-of-fit test may be applied to locate the best solution. The method of L-moments and some results comparing the different methods are presented in Sect. 3.6. Compared to the more flexible quantile functions discussed in the earlier sections, the estimation problem is easily resolved in this case with no computational complexities. One of the major limitations of Govindarajulu’s model, as mentioned earlier, is that it cannot cover the entire skewness range. In the admissible range, however, it provides good approximations to other distributions, as will be seen in Sect. 3.6.
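Because the moment ratio \(\bar{{X}}^{2}/{S}^{2}\) is decreasing in β (from + ∞ at β → 0 to 0 as β → ∞), the moment equation can be solved by a simple bisection search. A sketch, using the variance expression above (function names are ours):

```python
def mom_ratio(beta):
    # population mean^2 / variance for Govindarajulu's distribution
    return 4.0 * (2 * beta + 1) * (2 * beta + 3) / (beta**2 * (5 * beta + 7))

def solve_beta(ratio, lo=1e-6, hi=1e6, iters=200):
    # mom_ratio decreases monotonically, so bisection is safe
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mom_ratio(mid) > ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Given \(\hat{\beta }\), the scale estimate follows from the first equation in (3.87) as \(\hat{\sigma }=\bar{ X}(\hat{\beta } + 2)/2\).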

3.5 Generalized Weibull Family

This particular family of distributions is different from the distributions discussed so far in this chapter in the sense that it has a closed-form expression for the distribution function. So, all conventional methods of analysis are possible in this case. As a generalization of the Weibull distribution, the generalized Weibull family is defined by Mudholkar et al. [ 428 ] as

$$\displaystyle{ Q(u) = \left \{\begin{array}{@{}l@{\quad }l@{}} \sigma {\left [\frac{1-{(1-u)}^{\lambda }} {\lambda } \right ]}^{\alpha } \quad &,\ \lambda \neq 0 \\ \sigma {(-\log (1 - u))}^{\alpha }\quad &,\ \lambda = 0 \end{array} \right., }$$
(3.88)

for α, σ > 0 and real λ. The corresponding distribution function is

$$\displaystyle{F(x) = 1 -{\left \{1 -\lambda {\left (\frac{x} {\sigma } \right )}^{\frac{1} {\alpha } }\right \}}^{ \frac{1} {\lambda } }}$$

with support \((0,\infty )\) for λ ≤ 0 and \((0,{\sigma /\lambda }^{\alpha })\) for λ > 0. The quantile density function has the form

$$\displaystyle{ q(u) =\sigma \alpha { \left [\frac{1 - {(1 - u)}^{\lambda }} {\lambda } \right ]}^{\alpha -1}{(1 - u)}^{\lambda -1}. }$$
(3.89)
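Since both Q and F are available in closed form here, a round-trip check F(Q(u)) = u confirms that the pair (3.88) and the distribution function above invert each other. A numerical sketch (function names are ours):

```python
import numpy as np

def gw_Q(u, sigma, alpha, lam):
    # quantile function (3.88) of the generalized Weibull family
    u = np.asarray(u, dtype=float)
    if lam == 0:
        return sigma * (-np.log(1.0 - u))**alpha
    return sigma * ((1.0 - (1.0 - u)**lam) / lam)**alpha

def gw_F(x, sigma, alpha, lam):
    # distribution function, inverting (3.88)
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return 1.0 - np.exp(-(x / sigma)**(1.0 / alpha))
    return 1.0 - (1.0 - lam * (x / sigma)**(1.0 / alpha))**(1.0 / lam)
```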

The density function has a wide variety of shapes that include U-shaped, unimodal and monotone increasing or decreasing shapes. The raw moments are given by

$$\displaystyle{E({X}^{r}) = \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{B(\frac{1} {\lambda },r\alpha +1)} {{\lambda }^{r\alpha +1}} {\sigma }^{r} \quad &,\ \lambda > 0 \\ \frac{B(-r\alpha -\frac{1} {\lambda },r\alpha +1)} {{(-\lambda )}^{r\alpha +1}} {\sigma }^{r}\quad &,\ \lambda < 0 \end{array} \right..}$$

Moments of all orders exist for α > 0, λ > 0. If λ < 0, then E(X r ) exists if \(\alpha \lambda > -\frac{1} {r}\). The expressions for the percentile-based descriptive measures are as follows:

$$\displaystyle\begin{array}{rcl} M& =&{ \left (\frac{1 -{\left (\frac{1} {2}\right )}^{\lambda }} {\lambda } \right )}^{\alpha }\sigma, {}\\ \text{IQR}& =& \frac{\sigma } {{\lambda }^{\alpha }}\left [{\left (1 -{\left (\frac{1} {4}\right )}^{\lambda }\right )}^{\alpha } -{\left (1 -{\left (\frac{3} {4}\right )}^{\lambda }\right )}^{\alpha }\right ], {}\\ S& =& \frac{{\left (1 - {(\frac{1} {4})}^{\lambda }\right )}^{\alpha } +{ \left (1 - {(\frac{3} {4})}^{\lambda }\right )}^{\alpha } - 2{\left (1 - {(\frac{1} {2})}^{\lambda }\right )}^{\alpha }} {{\left (1 - {(\frac{1} {4})}^{\lambda }\right )}^{\alpha } -{\left (1 - {(\frac{3} {4})}^{\lambda }\right )}^{\alpha }}, {}\\ T& =& \frac{{\left (1 - {(\frac{1} {8})}^{\lambda }\right )}^{\alpha } -{\left (1 - {(\frac{3} {8})}^{\lambda }\right )}^{\alpha } +{ \left (1 - {(\frac{5} {8})}^{\lambda }\right )}^{\alpha } -{\left (1 - {(\frac{7} {8})}^{\lambda }\right )}^{\alpha }} {{\left (1 - {(\frac{1} {4})}^{\lambda }\right )}^{\alpha } -{\left (1 - {(\frac{3} {4})}^{\lambda }\right )}^{\alpha }}. {}\\ \end{array}$$

For the calculation of L-moments, we use the result

$$\displaystyle{\int _{0}^{1}{u}^{r}Q(u)du =\sum _{ y=0}^{r}{\frac{{(-1)}^{y}\binom{r}{y}\sigma } {{\lambda }^{\alpha +1}}} B\left (\frac{y + 1} {\lambda },\alpha +1\right )}$$

in (1.34)–(1.37). Various reliability characteristics are determined as follows:

$$\displaystyle\begin{array}{rcl} H(u)& =&{ \left [\sigma \alpha {\left (\frac{1 - {(1 - u)}^{\lambda }} {\lambda } \right )}^{\alpha -1}{(1 - u)}^{\lambda }\right ]}^{-1}, {}\\ M(u)& =& \frac{\sigma } {(1 - u){\lambda }^{\alpha +1}}B_{{(1-u)}^{\lambda }}\left (\frac{1} {\lambda },\alpha +1\right ) -\sigma {\left (\frac{1 - {(1 - u)}^{\lambda }} {\lambda } \right )}^{\alpha }, {}\\ \Lambda (u)& =&{ \left [\sigma \alpha u{\left (\frac{1 - {(1 - u)}^{\lambda }} {\lambda } \right )}^{\alpha -1}{(1 - u)}^{\lambda -1}\right ]}^{-1}, {}\\ R(u)& =& \sigma {\left (\frac{1 - {(1 - u)}^{\lambda }} {\lambda } \right )}^{\alpha } - \frac{\sigma } {u{\lambda }^{\alpha +1}}\left [B\left (\frac{1} {\lambda },\alpha +1\right ) - B_{{(1-u)}^{\lambda }}\left (\frac{1} {\lambda },\alpha +1\right )\right ], {}\\ \end{array}$$

where \(B_{x}(\cdot ,\cdot )\) denotes the incomplete beta function; the expressions for M(u) and R(u) are written for λ > 0.

The parameters of the model are estimated by the method of maximum likelihood as discussed in Mudholkar et al. [ 428 ]. Due to the variety of shapes that the hazard functions can assume (see Chap. 4 for details), it is a useful model for survival data. This distribution has also appeared in some other discussions including assessment of tests of exponentiality (Mudholkar and Kollia [ 425 ]), approximations to sampling distributions, analysis of censored survival data (Mudholkar et al. [ 428 ]), and generating samples and approximating other distributions. Chi-squared goodness-of-fit tests for this family of distributions have been discussed by Voinov et al. [ 575 ].

3.6 Applications to Lifetime Data

In order to emphasize the applications of quantile function in reliability analysis, we demonstrate here that some of the models discussed in the preceding sections can serve as useful lifetime distributions. The conditions in the parameters that make the underlying random variables non-negative have been obtained. We now fit these models for some real data on failure times. Three representative models, one each from the lambda family, the power-Pareto and Govindarajulu’s distributions, will be considered for this purpose. The first two examples have been discussed in Nair and Vineshkumar [ 452 ].

The four-parameter lambda distribution in (3.44) is applied to the data of 100 observations on failure times of aluminum coupons (data source: Birnbaum and Saunders [ 104 ], quoted in Lai and Xie [ 368 ]). The last observation in the data is excluded from the analysis so as to obtain equal frequencies in the bins, and the data are distributed, in ascending order of magnitude, into classes of ten observations each. For estimating the parameters, we use the method of L-moments. The first four sample L-moments are l 1 = 1,391.79, l 2 = 215.683, l 3 = 3.570 and l 4 = 20.7676. Thus, the model parameters need to be solved from

$$\displaystyle\begin{array}{rcl} & & \lambda _{1} + \frac{1} {\lambda _{2}} \left ( \frac{1} {\lambda _{4} + 1} - \frac{1} {\lambda _{3} + 1}\right ) = 1,391.79, {}\\ & & \frac{1} {\lambda _{2}} \left ( \frac{1} {(\lambda _{3} + 1)(\lambda _{3} + 2)} + \frac{1} {(\lambda _{4} + 1)(\lambda _{4} + 2)}\right ) = 215.683, {}\\ & & \frac{1} {\lambda _{2}} \left ( \frac{\lambda _{3} - 1} {(\lambda _{3} + 1)(\lambda _{3} + 2)(\lambda _{3} + 3)} - \frac{\lambda _{4} - 1} {(\lambda _{4} + 1)(\lambda _{4} + 2)(\lambda _{4} + 3)}\right ) = 3.570, {}\\ & & \frac{1} {\lambda _{2}} \left ( \frac{(\lambda _{3} - 1)(\lambda _{3} - 2)} {(\lambda _{3} + 1)(\lambda _{3} + 2)(\lambda _{3} + 3)(\lambda _{3} + 4)} - \frac{(\lambda _{4} - 1)(\lambda _{4} - 2)} {(\lambda _{4} + 1)(\lambda _{4} + 2)(\lambda _{4} + 3)(\lambda _{4} + 4)}\right ) {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad = 20.7676. {}\\ \end{array}$$

Among the solutions, the best fitting one, determined by the chi-square test (i.e., the parameter estimates that gave the least chi-square value), is

$$\displaystyle{ \hat{\lambda }_{1} = 1,382.18,\quad \hat{\lambda }_{2} = 0.0033,\quad \hat{\lambda }_{3} = 0.2706\quad \text{ and }\quad \hat{\lambda }_{4} = 0.2211. }$$
(3.90)

Further, the upper limit of the support is 2,750.7, and thus the estimated support (256.28, 2,750.7) covers the range of observations (370, 2,240) in the data. Using (3.44) for \(u = \frac{1} {10}, \frac{2} {10},\cdots \), and the fact that if U has a uniform distribution on [0, 1] then X and Q(U) have identical distributions, we find the observed frequencies in the classes to be 10, 10, 9, 12, 8, 11, 8, 12 and 10. Of course, under the uniform model, the expected frequencies are 10 in each class. Thus, the optimized chi-square value for the fit is χ 2 = 1.8, which does not lead to rejection of the model in (3.44) for the given data. The Q-Q plot corresponding to the model is presented in Fig. 3.7.

Fig. 3.7
figure 7

Q-Q plot for the data on lifetimes of aluminum coupons
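The sample L-moments quoted in these examples can be reproduced from the ordered data through the probability-weighted moments b r . A standard computation (a sketch, not the authors' code):

```python
import numpy as np

def sample_l_moments(x):
    # Unbiased sample L-moments l1..l4 via probability-weighted moments:
    # b_r = n^{-1} * sum_i [(i-1)...(i-r)] / [(n-1)...(n-r)] * x_{i:n}.
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    i = np.arange(1, n + 1)
    b = [xs.mean()]
    for r in (1, 2, 3):
        w = np.ones(n)
        for k in range(r):
            w *= (i - 1.0 - k) / (n - 1.0 - k)
        b.append(float(np.mean(w * xs)))
    b0, b1, b2, b3 = b
    return (b0,
            2 * b1 - b0,
            6 * b2 - 6 * b1 + b0,
            20 * b3 - 30 * b2 + 12 * b1 - b0)
```

Applied to the aluminum-coupon data, this routine should reproduce the values l 1 = 1,391.79, l 2 = 215.683, etc., quoted above.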

The second example concerns the power-Pareto distribution in (3.66). To ascertain the potential of the model, we fit it to the data on the times to first failure of 20 electric carts, presented by Zimmer et al. [ 604 ], and also quoted in Lai and Xie [ 368 ]. Here again, the method of L-moments is adopted. The sample L-moments are l 1 = 14.675, l 2 = 7.335 and l 3 = 2.4678. Equating the population L-moments L 1, L 2 and L 3 presented in Sect. 3.3 to l 1, l 2 and l 3 and solving the resulting equations, we obtain

$$\displaystyle{\hat{\lambda }_{1} = 0.2346,\quad \hat{\lambda }_{2} = 0.0967\quad \text{ and }\quad \hat{C} = 1,530.3.}$$

The corresponding Q-Q plot is presented in Fig. 3.8.

Fig. 3.8
figure 8

Q-Q plot for the data on times to first failure of electric carts

Govindarajulu’s distribution has already been shown to be a suitable model for failure times in the original paper of Govindarajulu [ 224 ]. We reinforce this by fitting it to the data on the failure times of 50 devices, reported in Lai and Xie [ 368 ]. Equating the first two L-moments with those of the sample, the estimates of the model parameters are obtained as

$$\displaystyle{\hat{\sigma }= 93.463\quad \text{ and }\quad \hat{\beta } = 2.0915.}$$

Dividing the data into five groups of ten observations each, we find by proceeding as in the first example that the chi-square value is 1.8, which does not lead to the rejection of the considered model. Figure 3.9 presents the Q-Q plot of the fit obtained.

Fig. 3.9
figure 9

Q-Q plot for the data on failure times of devices using Govindarajulu’s model

The objectives of these illustrations were limited to the purpose of demonstrating the use of quantile function models in reliability analysis. A full theoretical analysis and demonstration to real data situations of all the reliability functions vis-a-vis their ageing properties will be taken up subsequently in Chap. 4.