1 Introduction

In designing a stochastic model for a particular modeling problem, an investigator will be vitally interested to know whether their model fits the requirements of a specific underlying probability distribution. To this end, the investigator will depend on characterizations of the selected distribution. Generally speaking, the problem of characterizing a distribution is important in various fields and has recently attracted the attention of many researchers. Consequently, various characterization results have been reported in the literature. These characterizations have been established in many different directions, one of which is in terms of truncated moments. We mention here the works of Galambos and Kotz [8], Kotz and Shanbhag [20], Glänzel [9, 10], Glänzel et al. [12], Glänzel and Hamedani [11], and Hamedani [13–15].

Recently, Ahsanullah and Hamedani [3] characterized the power function and the beta of the first kind distributions based on a truncated moment of the nth order statistic and of the first order statistic, respectively, extending some known characterizations of the power function and the uniform distributions (see [1, 2]). Following [3], Hamedani et al. [17] characterized the following distributions based on a truncated moment of the first order statistic: Burr type XII (a special case), generalized beta 1, generalized beta 2 (the last two families of distributions unify many distributions employed for the size distribution of income [21]), generalized Pareto, Pareto of the first kind, and Weibull. The following families of distributions were also mentioned in [17] as special cases of Weibull: Burr type X, chi-square, extreme value type 2, gamma, and Rayleigh. Hamedani [15] established characterizations of 31 more continuous univariate distributions based on a truncated moment of the first order statistic, of the nth order statistic, of a function of the first order statistic, or of a function of the nth order statistic.

Various systems of distributions have been constructed to provide approximations to a wide variety of distributions (see, e.g., [18]). These systems are designed with the requirements of ease of computation and feasibility of algebraic manipulation in mind; to meet these requirements, a member of the system should be defined by as few parameters as possible.

One of these systems is the Pearson system. A continuous distribution belongs to this system if its probability density function (pdf) \(f\left (x\right )\) satisfies a differential equation of the form

$$\frac{1} {f\left (x\right )} \frac{\mathrm{d}f\left (x\right )} {\mathrm{d}x} = - \frac{x + a} {b{x}^{2} + cx + d}$$
(13.1)

where a, b, c, and d are real parameters such that \(f\left (x\right )\) is a pdf. The shape of the pdf depends on the values of these parameters. Pearson [22] classified the different shapes into a number of types I–VII (see Appendix A). Many well-known distributions are special cases of Pearson-type distributions, which are characterized in [15], Sects. 3–6.
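As a quick illustration of (13.1), the following short Python sketch (not part of the original development; the parameter values are chosen for illustration only) checks numerically that the standard normal pdf satisfies (13.1) with a = 0, b = 0, c = 0, and d = 1, since in that case \((1/f)\,\mathrm{d}f/\mathrm{d}x = -x\).

```python
# Numerical sanity check (illustrative only): the standard normal pdf should
# satisfy the Pearson equation (13.1) with a = 0, b = 0, c = 0, d = 1,
# because (1/f) df/dx = -x in that case.
import numpy as np
from scipy.stats import norm

x = np.linspace(-3, 3, 61)
h = 1e-5
log_deriv = (np.log(norm.pdf(x + h)) - np.log(norm.pdf(x - h))) / (2 * h)  # (1/f) df/dx
pearson_rhs = -(x + 0.0) / (0.0 * x**2 + 0.0 * x + 1.0)                    # right-hand side of (13.1)
print(np.max(np.abs(log_deriv - pearson_rhs)))                             # close to 0, so (13.1) holds
```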

Another system is the Burr system [6], which, like the Pearson system, has various types I–XII. This system, however, is not as involved and as fundamental as the Pearson system. There are also families of distributions, such as the extreme value and Pareto families, which have members of different kinds or types. These distributions are also characterized in [15], Sects. 3–6.

The families discussed in Sects. 5.3 and 5.5 of [15] were first introduced in [5] in the context of the minimum dynamic discrimination information approach to probability modeling. The families in Sects. 5.8 and 5.9 of [15] appeared in [4], where they were shown to be maximum dynamic entropy models.

The presentation of the content of this work is as follows. Section 13.2 introduces the Amoroso distribution, the natural unification of the gamma and extreme value distributions. In Sect. 13.3, we present characterizations of the Amoroso distribution based on the truncated moment of a function of the first order statistic and of a function of the nth order statistic. Section 13.4 is devoted to the definitions of the SSK, SKS, SKS-type, and SK distributions. In Sect. 13.5, we present characterizations of the SSK distribution based on a simple relationship between two truncated moments. Section 13.6 deals with characterizations of the SKS-type distribution based on the truncated moment of a function of the first order statistic and of a function of the nth order statistic; we also give a characterization of this distribution based on the conditional expectation of adjacent generalized order statistics. In Sect. 13.7, we present a characterization of the SK distribution based on a simple relationship between two truncated moments. Finally, Sect. 13.8 contains a very short concluding remark. For further characterization results in this direction, we refer the reader to Ahsanullah and Hamedani [3], Hamedani et al. [17], and Hamedani [15].

2 The Amoroso Distribution

This section introduces the Amoroso distribution. As pointed out by Crooks [7], the Amoroso distribution, a four-parameter, continuous, univariate, unimodal pdf with semi-infinite range, was originally developed to model lifetimes (see [7] for more details). Moreover, many well-known and important distributions are special cases or limiting forms of the Amoroso distribution. Table 13.1 is taken from [7] (with permission from G.E. Crooks, to whom we are grateful); it shows 35 special and four limiting cases of the Amoroso distribution. These distributions and their importance in different fields of study have been discussed in detail in [7].

Table 13.1 The Amoroso family of distributions

The pdf of the Amoroso distribution is given by

$$f\left (x;a,\alpha ,\tau ,k\right ) = \frac{1} {\Gamma \left (k\right )}\left \vert \frac{\tau } {\alpha }\right \vert {\left (\frac{x - a} {\alpha } \right )}^{\tau k-1}\exp \left \{-{\left (\frac{x - a} {\alpha } \right )}^{\tau }\right \}$$
(13.2)

for \(x,a,\alpha ,\tau \in \mathbb{R}\), k > 0, with support x ≥ a if α > 0 and x ≤ a if α < 0. As usual, \(\Gamma \left (k\right ) ={ \int }_{0}^{\infty }{u}^{k-1}{\mathrm{e}}^{-u}\mathrm{d}u\), for k > 0.

The four real parameters of the Amoroso distribution consist of a location parameter a, a scale parameter α, and two shape parameters, τ and k. The shape parameter k is positive, and most of the time, an integer, k = n, or half-integer \(k = \frac{m} {2}\). If the random variable X has the Amoroso distribution with parameters a, α, τ and k > 0, we write \(X \sim \mathrm{Amoroso}\left (a,\alpha ,\tau ,k\right )\).
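The following minimal Python sketch (not part of the original text; the parameter values are illustrative) implements the Amoroso pdf (13.2) for α > 0 and checks numerically that it integrates to 1 over its support.

```python
# Minimal numerical sketch (illustrative parameters): the Amoroso pdf (13.2)
# with a = 0 and alpha > 0 should integrate to 1 over [0, inf).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

def amoroso_pdf(x, a, alpha, tau, k):
    """Amoroso pdf (13.2); support x >= a when alpha > 0."""
    z = (x - a) / alpha
    return abs(tau / alpha) * z**(tau * k - 1) * np.exp(-z**tau) / Gamma(k)

total, _ = quad(amoroso_pdf, 0, np.inf, args=(0.0, 2.0, 1.5, 3.0))
print(total)  # ~1.0
```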

For further details about the distributions listed in Table 13.1 and their applications, we refer the reader to Crooks [7].

We give Table 13.2 displaying four cases based on the signs of α and τ for the random variable \(X \sim \mathrm{Amoroso}\left (a,\alpha ,\tau ,k\right )\). Without loss of generality we assume a = 0 throughout this work.

Table 13.2 Special rvs with generalized gamma distributions

For α > 0 and τ > 0, Amoroso(0, α, τ, k) = \(GG\left (\alpha ,\tau ,k\right )\), generalized gamma distribution. The characterizations given here are valid for the distributions of − X \(\left (\text{ when }\alpha < 0,\tau > 0\right ),\) \(\frac{1} {X}\) \(\left (\text{ when }\alpha > 0,\tau < 0\right )\), and \(-\frac{1} {X}\) \(\left (\text{ when }\alpha < 0,\tau < 0\right ).\)

Table 13.2 shows that, for α < 0, a simple change of parameter α → −α produces the cases in the second row of the table. Therefore, we investigate here the characterizations of the distribution of X when α > 0 and τ > 0 (Case I) and when α > 0 and τ < 0 (Case II).

Case I The pdf of the Amoroso random variable is given by

$$f\left (x;\alpha ,\tau ,k\right ) = \frac{\tau } {\alpha \Gamma \left (k\right )}{\left (\frac{x} {\alpha }\right )}^{\tau k-1}\exp \left \{-{\left (\frac{x} {\alpha }\right )}^{\tau }\right \},x \geq 0$$
(13.3)

where all three parameters α, τ, and k are positive.

Case II Letting γ =  − τ > 0, the pdf of the Amoroso random variable X is now

$$f\left (x;\alpha ,\gamma ,k\right ) = \frac{\gamma } {\alpha \Gamma \left (k\right )}{\left (\frac{x} {\alpha }\right )}^{-\left (\gamma k+1\right )}\exp \left \{-{\left (\frac{x} {\alpha }\right )}^{-\gamma }\right \},x \geq 0$$
(13.4)

where all three parameters α, γ, and k are positive.

The cumulative distribution functions (cdf), F, corresponding to (13.3) and (13.4) are, respectively,

$$F\left (x\right ) = \frac{1} {\Gamma \left (k\right )}{\int }_{0}^{{( \frac{x} {\alpha })}^{\tau } }{u}^{k-1}{\mathrm{e}}^{-u}\mathrm{d}u,\;x \geq 0$$
(13.5)

and

$$F\left (x\right ) = 1 - \frac{1} {\Gamma \left (k\right )}{\int }_{0}^{{( \frac{x} {\alpha })}^{-\gamma } }{u}^{k-1}{\mathrm{e}}^{-u}\mathrm{d}u,\;x \geq 0$$
(13.6)
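For numerical work, (13.5) and (13.6) can be evaluated through the regularized lower incomplete gamma function. The following short Python sketch (an illustration added here, not part of the original text; parameter values are arbitrary) compares the closed form of (13.5) with direct numerical integration of the pdf (13.3).

```python
# Since scipy.special.gammainc is the *regularized* lower incomplete gamma
# function, (13.5) reads F(x) = gammainc(k, (x/alpha)**tau) and (13.6) reads
# F(x) = 1 - gammainc(k, (x/alpha)**(-gamma)).  Illustrative parameters below.
import numpy as np
from scipy.integrate import quad
from scipy.special import gammainc, gamma as Gamma

alpha, tau, k = 2.0, 1.5, 3.0
f_case1 = lambda x: tau / (alpha * Gamma(k)) * (x / alpha)**(tau * k - 1) * np.exp(-(x / alpha)**tau)

x0 = 1.7
F_quad, _ = quad(f_case1, 0, x0)              # integrate the pdf (13.3)
F_gamma = gammainc(k, (x0 / alpha)**tau)      # closed form (13.5)
print(F_quad, F_gamma)                        # the two values should agree
```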

3 Characterizations of the Amoroso Distribution

This section is devoted to characterizations of the Amoroso distribution based on a truncated moment of a function of the first order statistic as well as of a function of the nth order statistic. As pointed out in Sect. 13.2, we present our characterizations of the Amoroso distribution in the two separate cases described there. First, however, we give the pdf of the jth order statistic.

Let \({X}_{1:n} \leq {X}_{2:n} \leq \cdots \leq {X}_{n:n}\) be the order statistics of a random sample of size n from a continuous cdf F with corresponding pdf f. The random variable \({X}_{j:n}\) denotes the jth order statistic from a random sample of n independent random variables \({X}_{1},{X}_{2},\ldots ,{X}_{n}\) with common cdf F. Then the pdf \({f}_{j:n}\) of \({X}_{j:n}\), \(j = 1,2,\ldots ,n\), is given by

$${f}_{j:n}\left (x\right ) = \frac{n!} {\left (j - 1\right )!\left (n - j\right )!}f\left (x\right ){\left (F\left (x\right )\right )}^{j-1}{\left (1 - F\left (x\right )\right )}^{n-j}.$$

The pdfs of the first and the nth order statistics are, respectively

$${f}_{1:n}\left (x\right ) = nf\left (x\right ){\left (1 - F\left (x\right )\right )}^{n-1}\text{ }and\text{ }{f}_{ n:n}\left (x\right ) = nf\left (x\right ){\left (F\left (x\right )\right )}^{n-1}.\text{ }$$

3.1 Characterizations of the Amoroso PDF (Case I)

In this subsection we present a characterization of the Amoroso distribution with pdf (13.3) in terms of a truncated moment of a function of the nth order statistic. We define the function

$${\gamma }_{1}\left [k;{\left (\frac{x} {\alpha }\right )}^{\tau }\right ] ={ \int }_{0}^{{\left ( \frac{x} {\alpha }\right )}^{\tau } }{u}^{k-1}{\mathrm{e}}^{-u}\mathrm{d}u\quad for\ \alpha > 0,\tau > 0,k > 0,\;and\;x \geq 0.$$

Proposition 13.3.1.1.

Let \(X : \Omega \rightarrow \left [0,\infty \right )\) be a continuous random variable with cdf F. The pdf of X is (13.3) if and only if

$$E\left \{{\gamma }_{1}\left [k;{\left (\frac{{X}_{n:n}} {\alpha } \right )}^{\tau }\right ]\vert {X}_{ n:n} < t\right \} = \frac{n} {n + 1}{\gamma }_{1}\left [k;{\left ( \frac{t} {\alpha }\right )}^{\tau }\right ],\ t > 0.$$
(13.7)

Proof.

Let X have pdf (13.3), then \(F\left (x\right )\) is given by (13.5). Now using (13.5) on the left-hand side of (13.7), we arrive at

$$\begin{array}{rcl} E\left \{{\gamma }_{1}\left [k;{\left (\frac{{X}_{n:n}} {\alpha } \right )}^{\tau }\right ]\vert {X}_{ n:n} < t\right \}& =& \frac{{\int }_{0}^{t}{\gamma }_{1}\left [k;{\left (\frac{x} {\alpha }\right )}^{\tau }\right ]\mathrm{d}\left ({\left (F\left (x\right )\right )}^{n}\right )} {{\left (F\left (t\right )\right )}^{n}} \\ & =& {\gamma }_{1}\left [k;{\left ( \frac{t} {\alpha }\right )}^{\tau }\right ] - \frac{\Gamma \left (k\right )} {n + 1}F\left (t\right ) \\ & =& \frac{n} {n + 1}{\gamma }_{1}\left [k;{\left ( \frac{t} {\alpha }\right )}^{\tau }\right ],\text{ }t > 0.\end{array}$$

Now, assume (13.7) holds; then

$${\int }_{0}^{t}{\gamma }_{ 1}\left [k;{\left (\frac{x} {\alpha }\right )}^{\tau }\right ]d\left ({\left (F\left (x\right )\right )}^{n}\right ) = \frac{n} {n + 1}{\gamma }_{1}\left [k;{\left ( \frac{t} {\alpha }\right )}^{\tau }\right ]{\left (F\left (t\right )\right )}^{n},\text{ }t > 0.$$

Differentiating both sides of the above equation with respect to t and upon simplification, we obtain

$$\frac{f\left (t\right )} {F\left (t\right )} = \frac{ \frac{d} {dt}{\gamma }_{1}\left [k;{\left ( \frac{t} {\alpha }\right )}^{\tau }\right ]} {{\gamma }_{1}\left [k;{\left ( \frac{t} {\alpha }\right )}^{\tau }\right ]} ,\text{ }t > 0.$$

Integrating both sides of the last equation with respect to t from x to ∞, and in view of the fact that \({\lim }_{t\rightarrow \infty }{\gamma }_{1}\left [k;{\left ( \frac{t} {\alpha }\right )}^{\tau }\right ] = \Gamma \left (k\right ),\) we obtain (13.5), which completes the proof.
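The identity (13.7) can also be checked numerically. The following Python sketch (illustrative parameter values; not part of the original proof) computes the left-hand side of (13.7) by integrating against the density of \(X_{n:n}\) and compares it with the right-hand side.

```python
# Numerical sketch of identity (13.7) for the Amoroso pdf (13.3)
# (parameter values are illustrative).
import numpy as np
from scipy.integrate import quad
from scipy.special import gammainc, gamma as Gamma

alpha, tau, k, n, t = 1.3, 2.0, 2.5, 4, 0.9

f = lambda x: tau / (alpha * Gamma(k)) * (x / alpha)**(tau * k - 1) * np.exp(-(x / alpha)**tau)
F = lambda x: gammainc(k, (x / alpha)**tau)                 # cdf (13.5)
gamma1 = lambda y: gammainc(k, y) * Gamma(k)                # lower incomplete gamma function

num, _ = quad(lambda x: gamma1((x / alpha)**tau) * n * F(x)**(n - 1) * f(x), 0, t)
lhs = num / F(t)**n                                         # E{gamma1[...] | X_{n:n} < t}
rhs = n / (n + 1) * gamma1((t / alpha)**tau)
print(lhs, rhs)                                             # the two should agree
```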

Remark 13.3.1.2.

For k = 1, the following characterization in terms of the first order statistic is given for (13.3) (see [17], Subsection (vi)).

Proposition 13.3.1.3.

Let X : Ω → ℝ + be a continuous random variable with cdf F such that lim x→∞ \({x}^{\tau }{\left (1 - F\left (x\right )\right )}^{n} = 0.\) Then X has pdf (13.3) (with k = 1) if and only if

$$E\left [{X}_{1:n}^{\tau }\vert {X}_{ 1:n}^{} > t\right ] = {t}^{\tau } + \frac{{\alpha }^{\tau }} {n} ,t > 0.$$
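For k = 1, the pdf (13.3) is the Weibull pdf with scale α and shape τ, so the identity in Proposition 13.3.1.3 can also be checked by simulation. The following Monte Carlo sketch (illustrative parameter values; not part of the original text) does this.

```python
# Monte Carlo sketch (illustrative parameters): for k = 1, (13.3) is the
# Weibull pdf with scale alpha and shape tau, and the truncated moment of
# X_{1:n}^tau should be t^tau + alpha^tau / n.
import numpy as np

rng = np.random.default_rng(0)
alpha, tau, n, t = 2.0, 1.5, 5, 1.0
samples = alpha * rng.weibull(tau, size=(200_000, n))  # rows of n i.i.d. Weibull variates
x_min = samples.min(axis=1)                            # X_{1:n}
cond = x_min[x_min > t]                                # condition on X_{1:n} > t
print(np.mean(cond**tau), t**tau + alpha**tau / n)     # the two should be close
```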

3.2 Characterizations of the Amoroso PDF (Case II)

In this subsection we present a characterization of the Amoroso distribution with pdf (13.4) in terms of a truncated moment of a function of the first order statistic.

Proposition 13.3.2.1.

Let \(X : \Omega \rightarrow \left [0,\infty \right )\) be a continuous random variable with cdf F. The pdf of X is (13.4) if and only if

$$E\left \{{\gamma }_{1}\left [k;{\left (\frac{{X}_{1:n}} {\alpha } \right )}^{-\gamma }\right ]\vert {X}_{ 1:n} > t\right \} = \frac{n} {n + 1}{\gamma }_{1}\left [k;{\left ( \frac{t} {\alpha }\right )}^{-\gamma }\right ],\ t > 0.$$
(13.8)

Proof.

Let X have pdf (13.4), then \(F\left (x\right )\) is given by (13.6), and

$$\begin{array}{rcl} E\left \{{\gamma }_{1}\left [k;{\left (\frac{{X}_{1:n}} {\alpha } \right )}^{-\gamma }\right ]\vert {X}_{ 1:n} > t\right \}& =& \frac{{\int }_{t}^{\infty }{\gamma }_{1}\left [k;{\left (\frac{x} {\alpha }\right )}^{-\gamma }\right ]nf\left (x\right ){\left (1 - F\left (x\right )\right )}^{n-1}\mathrm{d}x} {{\left (1 - F\left (t\right )\right )}^{n}} \\ & =& {\gamma }_{1}\left [k;{\left ( \frac{t} {\alpha }\right )}^{-\gamma }\right ] - \frac{\Gamma \left (k\right )} {n + 1}\left (1 - F\left (t\right )\right ) \\ & =& \frac{n} {n + 1}{\gamma }_{1}\left [k;{\left ( \frac{t} {\alpha }\right )}^{-\gamma }\right ],\text{ }t > 0.\end{array}$$

Now, assume (13.8) holds, then

$${\int }_{t}^{\infty }{\gamma }_{ 1}\left [k;{ \left ( \frac{x} {\alpha }\right )}^{-\gamma }\right ]nf\left (x\right ){\left (1 - F\left (x\right )\right )}^{n-1}\mathrm{d}x = \frac{n} {n + 1}{\gamma }_{1}\left [k;{ \left ( \frac{t} {\alpha }\right )}^{-\gamma }\right ]{\left (1 - F\left (t\right )\right )}^{n},\text{ }t > 0.$$

Differentiating both sides of the above equation with respect to t and upon simplification, we obtain

$$- \frac{f\left (t\right )} {1 - F\left (t\right )} = \frac{ \frac{\mathrm{d}} {\mathrm{dt}}{\gamma }_{1}\left [k;{\left ( \frac{t} {\alpha }\right )}^{-\gamma }\right ]} {{\gamma }_{1}\left [k;{\left ( \frac{t} {\alpha }\right )}^{-\gamma }\right ]} ,\quad t > 0.$$

Integrating both sides of this equation with respect to t from 0 to x, and in view of the fact that \({\lim }_{t\rightarrow 0}{\gamma }_{1}\left [k;{\left ( \frac{t} {\alpha }\right )}^{-\gamma }\right ] = \Gamma \left (k\right ),\) we obtain (13.6).

Remark 13.3.2.2.

For k = 1, the following characterization in terms of the nth order statistic is given for (13.4) (see [15], Subsect. 4.2).

Proposition 13.3.2.3.

Let X : Ω → ℝ + be a continuous random variable with cdf F such that lim x→0 \({x}^{-\gamma }{\left (F\left (x\right )\right )}^{n} = 0.\) Then X has pdf (13.4) (with k = 1) if and only if

$$E\left [{X}_{n:n}^{-\gamma }\vert {X}_{ n:n}^{} < t\right ] = {t}^{-\gamma } + \frac{1} {n{\alpha }^{\gamma }},\text{ }t > 0.$$

4 The SSK (Shakil–Singh–Kibria), SKS (Shakil–Kibria–Singh), SKS-Type, and SK (Shakil–Kibria) Distributions

In this section we give the definitions of the SSK, SKS, SKS-type, and SK distributions in Subsects. 13.4.1–13.4.4, respectively. Recently, some researchers have considered a generalization of (13.1) given by

$$\frac{1} {f\left (x\right )} \frac{\mathrm{d}f\left (x\right )} {\mathrm{d}x} = \frac{{\sum }_{j=0}^{m}{a}_{j}{x}^{j}} {{\sum }_{j=0}^{n}{b}_{j}{x}^{j}} ,$$
(13.9)

where \(m,n \in \mathbb{N}/\left \{0\right \}\) and the coefficients \({a}_{j}\) and \({b}_{j}\) are real parameters. The system of continuous univariate pdfs generated by (13.9) is called the generalized Pearson system, which includes a vast majority of continuous pdfs.

4.1 SSK Distribution (Product Distribution Based on the Generalized Pearson Differential Equation)

Shakil et al. [25] consider (13.9) when m = 2, n = 1, b 0 = 0, b 1 ≠ 0, and x > 0. The solution of this special case is an interesting three-parameter distribution with pdf f given by

$$f\left (x;\alpha ,\beta ,\nu \right ) = {C}_{1}{x}^{\nu }\exp \left (-\alpha {x}^{2} - \beta x\right ),\text{ }x > 0,\alpha > 0,\beta > 0,\nu > 0,$$
(13.10)

where \(\alpha = -\frac{{a}_{2}} {2{b}_{1}}\), \(\beta = -\frac{{a}_{1}} {{b}_{1}}\), \(\nu = \frac{{a}_{0}} {{b}_{1}}\), and b 1≠0 are parameters and C 1 is the normalizing constant.

Remark 13.4.1.1.

A special case of equation (13.4), with γ = 2, will also have a solution of the form (13.10).

The family of distributions represented by the pdf (13.10) can be expressed in terms of the confluent hypergeometric functions of Tricomi and Kummer. As pointed out in [25], it is a rich family which includes the product of exponential and Rayleigh pdfs, the product of gamma and Rayleigh pdfs, the product of gamma and Rice pdfs, the product of gamma and normal pdfs, and the product of gamma and half-normal pdfs, among others. For a detailed treatment (theory and applications) of this family, we refer the reader to [25]. The family of SSK distributions will be characterized in Sect. 13.5.
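The following minimal Python sketch (added for illustration; parameter values are arbitrary) evaluates the normalizing constant \(C_{1}\) of (13.10) by numerical integration rather than through the confluent hypergeometric expressions mentioned above.

```python
# Minimal sketch (illustrative parameters): the normalizing constant C_1 in
# (13.10) obtained by numerical integration.
import numpy as np
from scipy.integrate import quad

alpha, beta, nu = 1.0, 0.5, 2.0
kernel = lambda x: x**nu * np.exp(-alpha * x**2 - beta * x)
C1 = 1.0 / quad(kernel, 0, np.inf)[0]                 # normalizing constant
ssk_pdf = lambda x: C1 * kernel(x)                    # pdf (13.10)
print(quad(ssk_pdf, 0, np.inf)[0])                    # ~1.0
```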

4.2 SKS Distribution

Shakil et al. [24] consider (13.9) when m = 2p, n = p + 1, \({a}_{j} = 0\) for \(j = 1,2,\ldots ,p - 1,p + 1,\ldots ,2p - 1\); \({b}_{j} = 0\) for \(j = 1,2,\ldots ,p\), \({b}_{p+1}\neq 0\); and x > 0. The solution of this special case is an interesting four-parameter distribution with pdf f (using their notation) given by

$$\begin{array}{rcl} f\left (x;\alpha ,\beta ,\nu ,p\right ) = {C}_{2}{x}^{\nu -1}\exp \left (-\alpha {x}^{p} - \beta {x}^{-p}\right ),\text{ }x > 0,\alpha \geq 0,\beta \geq 0,\nu \in \mathbb{R},& & \\ & &\end{array}$$
(13.11)

where \(\alpha = - \frac{{a}_{2p}} {p{b}_{p+1}}\), \(\beta = \frac{{a}_{p}} {p{b}_{p+1}}\), \(\nu = \frac{({a}_{p}+{b}_{p+1})} {{b}_{p+1}}\), \({b}_{p+1}\neq 0\), and \(p \in \mathbb{N}/\left \{0\right \}\) are parameters and C 2 is the normalizing constant.

Shakil et al. [24] classified their newly proposed family into the following three classes:

$$\begin{array}{rcl} \textrm{ Class I.}\quad \alpha > 0, \beta = 0, \nu > 0, \textrm{ and }p \in \mathbb{N}/\left \{0\right \}.& & \\ \textrm{ Class II.}\quad \alpha = 0, \beta > 0, \nu < 0, \textrm{ and }p \in \mathbb{N}/\left \{0\right \}.& & \\ \textrm{Class III.}\quad \alpha > 0, \beta > 0, \nu \in \mathbb{R},\textrm{ and }p \in \mathbb{N}/\left \{0\right \}.& & \\ \end{array}$$

Shakil et al. [24] pointed out that they found their "newly proposed model fits better than gamma, log-normal and inverse Gaussian distributions in the fields of biomedicine, demography, environmental and ecological sciences, finance, lifetime data, reliability theory, traffic data, etc." They hope that the findings of their paper will be useful for practitioners in various fields of theoretical and applied sciences. They also pointed out that "it appears from the literature that not much attention has been paid to the study of the family of continuous pdfs that can be generated as a solution of the generalized Pearson differential equation (13.9), except three papers cited in [24]." For a detailed treatment of the above-mentioned three classes and their significance, as well as the related statistical analysis, we refer the reader to [24]. These classes were characterized in Hamedani [16] based on a simple relationship between two truncated moments.
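As an illustration (not part of [24]), the following Python sketch computes the normalizing constant \(C_{2}\) of (13.11) numerically for an arbitrary Class III choice of parameters.

```python
# Minimal sketch (illustrative Class III parameters): the normalizing constant
# C_2 in (13.11) computed numerically.
import numpy as np
from scipy.integrate import quad

alpha, beta, nu, p = 1.0, 0.8, 1.5, 2
kernel = lambda x: x**(nu - 1) * np.exp(-alpha * x**p - beta * x**(-p))
C2 = 1.0 / quad(kernel, 0, np.inf)[0]
sks_pdf = lambda x: C2 * kernel(x)                    # pdf (13.11)
print(quad(sks_pdf, 0, np.inf)[0])                    # ~1.0
```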

4.3 SKS-Type Distribution

The SKS distribution has support \(\left (0,\infty \right )\), and one may be interested in a similar distribution with bounded support. We present here such a distribution, which we call the SKS-type distribution, given by the pdf

$$\begin{array}{rcl} f\left (x;\alpha ,\beta ,p\right ) = Cp{x}^{-(p+1)}\left (\beta - \alpha {x}^{2p}\right )\exp \left (-\alpha {x}^{p} - \beta {x}^{-p}\right ),\text{ }0 < x <{ \left (\frac{\beta } {\alpha }\right )}^{ \frac{1} {2p} },& & \\ & &\end{array}$$
(13.12)

where α > 0, β > 0, and \(p \in {\mathbb{R}}_{+}\) are parameters and \(C =\exp \left (2\sqrt{\alpha \beta }\right )\) is the normalizing constant.

Remark 13.4.3.1.

We do not require p to be a positive integer in (13.12). If, however, \(p \in \mathbb{N}/\left \{0\right \}\), then (13.12) is a member of the generalized Pearson system defined via (13.9); indeed,

$$\frac{1} {f\left (x\right )} \frac{\mathrm{d}f\left (x\right )} {\mathrm{d}x} = \frac{{\beta }^{2}p - \beta \left (p + 1\right ){x}^{p} - 2\alpha \beta p{x}^{2p} - \alpha \left (p - 1\right ){x}^{3p} + {\alpha }^{2}p{x}^{4p}} {\beta {x}^{p+1} - \alpha {x}^{3p+1}}.$$

The cdf F corresponding to the pdf (13.12) is

$$F\left (x\right ) = C\exp \left (-\alpha {x}^{p} - \beta {x}^{-p}\right ),0 < x <{ \left (\frac{\beta } {\alpha }\right )}^{ \frac{1} {2p} }.$$
(13.13)

The family of SKS-type distributions will be characterized in Sect. 13.6.
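The following short Python sketch (illustrative parameter values; not part of the original text) confirms numerically that \(C =\exp \left (2\sqrt{\alpha \beta }\right )\) is indeed the normalizing constant of (13.12): the cdf (13.13) equals 1 at the upper endpoint \({\left (\beta /\alpha \right )}^{1/(2p)}\) and the pdf integrates to 1.

```python
# Numerical sketch (illustrative parameters): C = exp(2*sqrt(alpha*beta))
# normalizes (13.12), and the cdf (13.13) reaches 1 at the upper endpoint.
import numpy as np
from scipy.integrate import quad

alpha, beta, p = 1.2, 0.7, 2
C = np.exp(2 * np.sqrt(alpha * beta))
b = (beta / alpha)**(1 / (2 * p))                      # upper endpoint of the support

f = lambda x: C * p * x**(-(p + 1)) * (beta - alpha * x**(2 * p)) * np.exp(-alpha * x**p - beta * x**(-p))
F = lambda x: C * np.exp(-alpha * x**p - beta * x**(-p))

print(F(b))                    # = 1.0, so C is the right constant
print(quad(f, 0, b)[0])        # ~1.0
```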

4.4 SK Distribution

Shakil and Kibria [23] consider a solution of (13.9) for m = p, n = p + 1, \({a}_{j} = 0\) for \(j = 1,2,\ldots ,p - 1\); \({b}_{0} = 0\) and \({b}_{j} = 0\) for \(j = 2,\ldots ,p\); \({a}_{p}\neq 0\), \({b}_{1}\neq 0\), \({b}_{p+1}\neq 0\), and x > 0. This special five-parameter solution is given by

$$\begin{array}{rcl} f\left (x;\alpha ,\beta ,\nu ,\tau ,p\right ) = {C}_{3}{x}^{\nu -1}{\left (\alpha {x}^{p} + \beta \right )}^{-\tau },\text{ }x > 0,\alpha > 0,\beta > 0,\nu > 0,\tau > 0,p \in \mathbb{N}/\left \{0\right \},& &\end{array}$$
(13.14)

where α, β, ν, τ, p are parameters, \(\tau > \frac{\nu } {p}\), and C 3 is the normalizing constant. We refer the reader to [23] for further details and statistical analyses related to this family.
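As an illustration (not part of [23]), the following Python sketch computes the normalizing constant \(C_{3}\) of (13.14) numerically for parameter values satisfying \(\tau > \nu /p\).

```python
# Minimal sketch (illustrative parameters with tau > nu/p, as required for
# integrability): the normalizing constant C_3 in (13.14) computed numerically.
import numpy as np
from scipy.integrate import quad

alpha, beta, nu, tau, p = 1.0, 2.0, 1.5, 2.0, 2       # tau > nu / p = 0.75
kernel = lambda x: x**(nu - 1) * (alpha * x**p + beta)**(-tau)
C3 = 1.0 / quad(kernel, 0, np.inf)[0]
sk_pdf = lambda x: C3 * kernel(x)                     # pdf (13.14)
print(quad(sk_pdf, 0, np.inf)[0])                     # ~1.0
```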

Final Remark of Sect. 13.4. In view of (13.9), we would like to make the observation that the pdf f of a sub-family of the Amoroso family satisfies the generalized Pearson differential equation (13.9) with, of course, the appropriate boundary condition. For a = 0, α > 0 \(\left (\text{or }\alpha < 0\right )\), τ = −γ, \(\gamma \in \mathbb{N}/\left \{0\right \}\), and k > 0, the pdf f given by (13.2) satisfies (13.9) with \({a}_{0} = \gamma {\alpha }^{\gamma }\), \({a}_{j} = 0\) for \(j = 1,2,\ldots ,\gamma - 1\), \({a}_{\gamma } = -\left (\gamma k + 1\right )\); \({b}_{j} = 0\) for \(j = 0,1,\ldots ,\gamma \), and \({b}_{\gamma +1} = 1\), i.e.,

$$\frac{1} {f\left (x\right )} \frac{\mathrm{d}f\left (x\right )} {\mathrm{d}x} = \frac{\gamma {\alpha }^{\gamma } -\left (\gamma k + 1\right ){x}^{\gamma }} {{x}^{\gamma +1}}.$$

For a = 0, α > 0 \(\left (\text{or }\alpha < 0\right )\), τ = γ, \(\gamma \in \mathbb{N}/\left \{0\right \}\), and k > 0, the pdf f given by (13.2) satisfies (13.9) with \({a}_{0} = \gamma k - 1\), \({a}_{j} = 0\) for \(j = 1,2,\ldots ,\gamma - 1\), \({a}_{\gamma } = -\gamma {\alpha }^{-\gamma }\); \({b}_{0} = 0\) and \({b}_{1} = 1\), i.e.,

$$\frac{1} {f\left (x\right )} \frac{\mathrm{d}f\left (x\right )} {\mathrm{d}x} = \frac{\left (\gamma k - 1\right ) - \gamma {\alpha }^{-\gamma }{x}^{\gamma }} {x}.$$
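The following short Python sketch (illustrative parameter values; added here only as a check) verifies numerically that the logarithmic derivative of the pdf (13.3) with τ = γ agrees with the rational function in the last display.

```python
# Numerical check (illustrative parameters) of the last display: for the
# Amoroso pdf (13.3) with a = 0 and tau = gamma, (1/f) df/dx should equal
# ((gamma*k - 1) - gamma*alpha**(-gamma)*x**gamma) / x.
import numpy as np
from scipy.special import gamma as Gamma

alpha, g, k = 1.5, 2.0, 2.0        # here g plays the role of gamma
f = lambda x: g / (alpha * Gamma(k)) * (x / alpha)**(g * k - 1) * np.exp(-(x / alpha)**g)

x = np.linspace(0.2, 3.0, 50)
h = 1e-6
lhs = (np.log(f(x + h)) - np.log(f(x - h))) / (2 * h)          # (1/f) df/dx
rhs = ((g * k - 1) - g * alpha**(-g) * x**g) / x
print(np.max(np.abs(lhs - rhs)))                               # ~0, so the equation is satisfied
```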

5 Characterizations of the SSK Distribution

In this section we present characterizations of the pdf (13.10) in terms of a simple relationship between two truncated moments. Our characterization results presented here will employ an interesting result due to Glänzel [9], which is stated here (Theorem G) for the sake of completeness.

Theorem G. Let \(\left (\Omega ,\mathcal{F},P\right )\) be a given probability space and let \(H = \left [a,b\right ]\) be an interval for some a < b \(\left (a = -\infty \text{ and }b = +\infty \text{ might as well be allowed}\right )\). Let X : Ω → H be a continuous random variable with distribution function F and let g and h be two real functions defined on H such that

$$E\left [g\left (X\right )\vert X \geq x\right ] = E\left [h\left (X\right )\vert X \geq x\right ]\lambda \left (x\right ),x \in H$$

is defined with some real function λ. Assume that \(g,h \in {C}^{1}\left (H\right )\), \(\lambda \in {C}^{2}\left (H\right )\), and F is a twice continuously differentiable and strictly monotone function on the set H. Finally, assume that the equation λh = g has no real solution in the interior of H. Then F is uniquely determined by the functions g, h, and λ; in particular,

$$F\left (x\right ) ={ \int }_{a}^{x}C\left \vert \frac{{\lambda }^{{\prime}}\left (u\right )} {\lambda \left (u\right )h\left (u\right ) - g\left (u\right )}\right \vert \exp \left (-s\left (u\right )\right )\mathrm{d}u,$$

where the function s is a solution of the differential equation

$${s}^{{\prime}} = \frac{{\lambda }^{{\prime}}h} {\lambda h - g}$$

and C is a constant, chosen to make ∫ H dF = 1. 

Remark 13.5.1.

In Theorem G, the interval H need not be closed.

Proposition 13.5.2.

Let \(X : \Omega \rightarrow \left (0,\infty \right )\) be a continuous random variable and let \(h\left (x\right ) = {x}^{1-\nu }\exp \left (\beta x\right )\) for \(x \in \left (0,\infty \right ).\) The pdf of X is (13.10) if and only if there exist functions g and λ defined in Theorem G satisfying the differential equation

$$\frac{{\lambda }^{{\prime}}\left (x\right )} {\lambda \left (x\right )h\left (x\right ) - g\left (x\right )} = 2\alpha {x}^{\nu }\exp \left (-\beta x\right ),\text{ }x > 0.$$
(13.15)

Proof.

Let X have pdf (13.10) and let

$$g\left (x\right ) = {x}^{1-\nu }\left (2\alpha + \beta {x}^{-1}\right ),\text{ }x > 0$$

and

$$\lambda \left (x\right ) = 2\alpha \exp \left (-\beta x\right ),\text{ }x > 0.$$

Then

$$\left (1 - F\left (x\right )\right )E\left [h\left (X\right )\vert X \geq x\right ] = \frac{{C}_{1}} {2\alpha }\exp \left (-\alpha {x}^{2}\right ),\text{ }x > 0,$$
$$\left (1 - F\left (x\right )\right )E\left [g\left (X\right )\vert X \geq x\right ] = {C}_{1}\exp \left (-\alpha {x}^{2} - \beta x\right ),\text{ }x > 0,$$

where C 1 is a constant. We also have

$$\lambda \left (x\right )h\left (x\right ) - g\left (x\right ) = -\beta {x}^{-\nu } < 0\text{ }for\text{ }x > 0.$$

The differential equation (13.15) clearly holds.

Conversely, if g and λ satisfy the differential equation (13.15) , then

$${s}^{{\prime}}\left (x\right ) = \frac{{\lambda }^{{\prime}}\left (x\right )h\left (x\right )} {\lambda \left (x\right )h\left (x\right ) - g\left (x\right )} = 2\alpha x,\text{ }x > 0,$$

and hence

$$s\left (x\right ) = \alpha {x}^{2},\text{ }x > 0.$$

Now from Theorem G, X has pdf (13.10).
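As a numerical illustration of the relation behind Theorem G (not part of the original proof; parameter values are arbitrary), the following Python sketch checks that the ratio of the two truncated moments, with h and g as in the proof above, equals \(\lambda \left (x\right ) = 2\alpha \exp \left (-\beta x\right )\).

```python
# Numerical sketch (illustrative parameters): for the SSK pdf (13.10),
# E[g(X)|X>=x] / E[h(X)|X>=x] should equal 2*alpha*exp(-beta*x).
import numpy as np
from scipy.integrate import quad

alpha, beta, nu = 1.0, 0.5, 2.0
kernel = lambda x: x**nu * np.exp(-alpha * x**2 - beta * x)
C1 = 1.0 / quad(kernel, 0, np.inf)[0]
f = lambda x: C1 * kernel(x)                                  # SSK pdf (13.10)

h = lambda x: x**(1 - nu) * np.exp(beta * x)
g = lambda x: x**(1 - nu) * (2 * alpha + beta / x)

x0, upper = 0.8, 50.0   # the integrands are numerically zero beyond `upper`
Eh = quad(lambda u: h(u) * f(u), x0, upper)[0]                # (1 - F(x0)) E[h(X)|X >= x0]
Eg = quad(lambda u: g(u) * f(u), x0, upper)[0]                # (1 - F(x0)) E[g(X)|X >= x0]
print(Eg / Eh, 2 * alpha * np.exp(-beta * x0))                # should agree
```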

Corollary 13.5.3.

Let \(X : \Omega \rightarrow \left (0,\infty \right )\) be a continuous random variable and let \(h\left (x\right ) = {x}^{1-\nu }\left (2\alpha + \beta {x}^{-1}\right )\) and \(g\left (x\right ) = {x}^{1-\nu }\exp \left (\beta x\right )\) for \(x \in \left (0,\infty \right ).\) The pdf of X is (13.10) if and only if the function λ has the form

$$\lambda \left (x\right ) = \frac{1} {2\alpha }\exp \left (\beta x\right ),x > 0.$$

Remark 13.5.4.

The general solution of the differential equation (13.15) is

$$\lambda \left (x\right ) =\exp \left (\alpha {x}^{2}\right )\left [-\int 2\alpha {x}^{\nu }\exp \left (-\alpha {x}^{2} - \beta x\right )g\left (x\right )\mathrm{d}x + D\right ],\text{ }x > 0,$$

where D is a constant. One set of appropriate functions is given in Proposition 13.5.2.

6 Characterizations of the SKS-Type Distribution

In this section we present two characterizations of pdf (13.12) in terms of a truncated moment of a function of first order statistic and of a function of nth order statistic, respectively. These characterizations are consequences of the following two theorems given in Hamedani [15], which are stated here for the sake of completeness. We also present a characterization of the pdf (13.12) based on the conditional expectation of adjacent generalized order statistics.

Theorem 1 (Theorem 2.2 of [15], p 464). 

Let \(X : \Omega \rightarrow \left (a,b\right )\) , a ≥ 0 be a continuous random variable with cdf F such that lim x→b \({x}^{\delta }{\left (1 - F\left (x\right )\right )}^{n} = 0\) , for some δ > 0. Let \(g\left (x,\delta ,n\right )\) be a real-valued function which is differentiable with respect to x and \({\int }_{a}^{b} \frac{\delta {x}^{\delta -1}} {ng\left (x,\delta ,n\right )}\mathrm{d}x = \infty.\) Then

$$E\left [{X}_{1:n}^{\delta }\vert {X}_{ 1:n}^{} > t\right ] = {t}^{\delta } + g\left (t,\delta ,n\right ),\text{ }a < t < b,$$

implies that

$$F\left (t\right ) = 1 -{\left (\frac{g\left (a,\delta ,n\right )} {g\left (t,\delta ,n\right )} \right )}^{ \frac{1} {n} }\exp \left (-{\int }_{a}^{t} \frac{\delta {x}^{\delta -1}} {ng\left (x,\delta ,n\right )}\mathrm{d}x\right ),\text{ }a \leq t < b.$$

Theorem 2 (Theorem 2.8 of [15], p 469). 

Let \(X : \Omega \rightarrow \left (a,b\right )\) , a ≥ 0 be a continuous random variable with cdf F such that lim x→a \({(x - a)}^{-\delta }{\left (F\left (x\right )\right )}^{n} = 0\) , for some δ > 0. Let \(g\left (x,\delta ,n\right )\) be a real-valued function which is differentiable with respect to x and \({\int }_{a}^{b}\frac{\delta {(x-a)}^{-\delta -1}} {ng\left (x,\delta ,n\right )} \mathrm{d}x = \infty.\) Then

$$E\left [{({X}_{n:n}^{} - a)}^{-\delta }\vert {X}_{ n:n}^{} < t\right ] = {(t - a)}^{-\delta } + g\left (t,\delta ,n\right ),\text{ }a < t < b,$$

implies that

$$F\left (t\right ) ={ \left (\frac{g\left (b,\delta ,n\right )} {g\left (t,\delta ,n\right )}\right )}^{ \frac{1} {n} }\exp \left (-{\int }_{t}^{b}\frac{\delta {(x - a)}^{-\delta -1}} {ng\left (x,\delta ,n\right )} \mathrm{d}x\right ),\text{ }a \leq t < b.$$

Proposition 13.6.3.

Let \(X : \Omega \rightarrow \left (0,{\left (\frac{\beta } {\alpha }\right )}^{ \frac{1} {2p} }\right )\) be a continuous random variable with cdf F such that \({\lim }_{ x\rightarrow {\left ( \frac{\beta } {\alpha }\right )}^{ \frac{1} {2p} }}\) \({x}^{\delta }{\left (1 - F\left (x\right )\right )}^{n} = 0\) , for some δ > 0. The pdf of X is (13.12) if and only if

$$E\left [{X}_{1:n}^{\delta }\vert {X}_{ 1:n}^{} > t\right ] = {t}^{\delta } + \frac{\delta } {np}\left ( \frac{{t}^{\delta +p}} {\beta - \alpha {t}^{2p}}\right ),\text{ }0 < t <{ \left (\frac{\beta } {\alpha }\right )}^{ \frac{1} {2p} }.$$

Proof.

See Theorem 1.

Proposition 13.6.4.

Let \(X : \Omega \rightarrow \left (0,{\left (\frac{\beta } {\alpha }\right )}^{ \frac{1} {2p} }\right )\) be a continuous random variable with cdf F such that lim x→0 \({x}^{-\delta }{(F\left (x\right ))}^{n} = 0\) , for some δ > 0. The pdf of X is (13.12) if and only if

$$E\left [{X}_{n:n}^{-\delta }\vert {X}_{ n:n}^{} < t\right ] = {t}^{-\delta } - \frac{\delta } {np}\left ( \frac{{t}^{p-\delta }} {\beta - \alpha {t}^{2p}}\right ),\ \ 0 < t <{ \left (\frac{\beta } {\alpha }\right )}^{ \frac{1} {2p} }.$$

Proof.

See Theorem 2.

The concept of generalized order statistics \(\left (gos\right )\) was introduced by Kamps [19] in terms of their joint pdf. Order statistics, record values, k-record values, Pfeifer records, and progressive type II order statistics are special cases of the gos. The rvs (random variables) \(X\left (1,n,m,k\right )\), \(X\left (2,n,m,k\right ),\ldots ,X\left (n,n,m,k\right )\), k > 0, \(m \in \mathbb{R}\), are n gos from an absolutely continuous cdf F with corresponding pdf f if their joint pdf \({f}_{1,2,\ldots ,n}\left ({x}_{1},{x}_{2},\ldots ,{x}_{n}\right )\) can be written as

$$\begin{array}{rcl}{ f}_{1,2,\ldots ,n}\left ({x}_{1},{x}_{2},\ldots ,{x}_{n}\right )& =& k\left ({\Pi }_{j=1}^{n-1}{\gamma }_{ j}\right )\left [{\Pi }_{j=1}^{n-1}{\left (1 - F\left ({x}_{ j}\right )\right )}^{m}f\left ({x}_{ j}\right )\right ] \\ & & \times {\left (1 - F\left ({x}_{n}\right )\right )}^{k-1}f\left ({x}_{ n}\right ),{F}^{-1}\left (0+\right ) \\ & <& {x}_{1} < {x}_{2} < \cdots < {x}_{n} < {F}^{-1}\left (1-\right ), \end{array}$$
(13.16)

where \({\gamma }_{j} = k + \left (n - j\right )\left (m + 1\right )\) for all j, 1 ≤ j ≤ n, k is a positive integer, and m ≥ − 1. 

If k = 1 and m = 0, then \(X\left (r,n,m,k\right )\) reduces to the ordinary rth order statistic and (13.16) is the joint pdf of the order statistics \({\left ({X}_{j:n}\right )}_{1\leq j\leq n}\) from F. If k = 1 and m = −1, then (13.16) is the joint pdf of the first n upper record values of i.i.d. (independent and identically distributed) rvs with cdf F and pdf f.

Integrating out \({x}_{1},{x}_{2},\ldots ,{x}_{r-1},{x}_{r+1},\ldots ,{x}_{n}\) from (13.16), we obtain the pdf \({f}_{r,n,m,k}\) of \(X\left (r,n,m,k\right )\):

$${f}_{r,n,m,k}\left (x\right ) = \frac{{c}_{r}} {\Gamma \left (r\right )}{\left (1 - F\left (x\right )\right )}^{{\gamma }_{r}-1}f\left (x\right ){g}_{ m}^{r-1}\left (F\left (x\right )\right ),$$
(13.17)

where \({c}_{r} ={ \prod }_{j=1}^{r}{\gamma }_{j}\) and

$$\begin{array}{rcl}{ g}_{m}\left (x\right )& =& \frac{1} {m + 1}\left [1 -{\left (1 - x\right )}^{m+1}\right ],\text{ }m\neq - 1 \\ & =& -\ln \left (1 - x\right ),m = -1,x \in \left (0,1\right )\end{array}$$

Since \({\lim }_{m\rightarrow -1} \frac{1} {m+1}\left [1 -{\left (1 - x\right )}^{m+1}\right ] = -\ln \left (1 - x\right )\), we write \({g}_{m}\left (x\right ) = \frac{1} {m+1}\left [1 -{\left (1 - x\right )}^{m+1}\right ]\) for all \(x \in \left (0,1\right )\) and all m, with \({g}_{-1}\left (x\right ) {=\lim }_{m\rightarrow -1}{g}_{m}\left (x\right ).\)
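A small illustrative Python helper for the function \({g}_{m}\) (added here; not part of the original text) makes the \(m \rightarrow -1\) limit explicit.

```python
# Illustrative helper for g_m defined above, including the m -> -1 limit used
# for record values.
import numpy as np

def g_m(x, m):
    """g_m(x) = (1 - (1 - x)^(m+1)) / (m + 1) for m != -1, and -log(1 - x) at m = -1."""
    x = np.asarray(x, dtype=float)
    if m == -1:
        return -np.log1p(-x)
    return (1.0 - (1.0 - x)**(m + 1)) / (m + 1)

x = 0.4
print(g_m(x, -1), g_m(x, -1 + 1e-8))   # the m -> -1 limit matches -log(1 - x)
```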

The joint pdf of \(X\left (r,n,m,k\right )\) and \(X\left (r + 1,n,m,k\right )\), 1 ≤ r < n, is given by (see Kamps [19], p 68)

$$\begin{array}{rcl}{ f}_{r,r+1,n,m,k}\left (x,y\right ) = \frac{{c}_{r+1}} {\Gamma \left (r\right )}{\left (1-F\left (x\right )\right )}^{m}f\left (x\right ){g}_{ m}^{r-1}\left (F\left (x\right )\right ){\left (1-F\left (y\right )\right )}^{{\gamma }_{r+1}-1}f\left (y\right ),\quad x < y,& & \\ \end{array}$$

and consequently the conditional pdf of \(X\left (r + 1,n,m,k\right )\) given \(X\left (r,n,m,k\right ) = x\), for m ≥ − 1 , is

$${f}_{r+1\vert r,n,m,k}\left (y\vert x\right ) = {\gamma }_{r+1}{\left (\frac{1 - F\left (y\right )} {1 - F\left (x\right )}\right )}^{{\gamma }_{r+1}-1} \cdot \frac{f\left (y\right )} {\left (1 - F\left (x\right )\right )},\text{ }y > x,$$
(13.18)

where γ r + 1 = γ r  − 1 − m. The conditional pdf of \(X\left (r,n,m,k\right )\) given X(r + 1, n, m, k) = y, for m≠ − 1, is

$$\begin{array}{rcl}{ f}_{r\vert r+1,n,m,k}\left (x\vert y\right )& =& r{\left (1 - F\left (x\right )\right )}^{m}{\left (\frac{1 -{\left (1 - F\left (x\right )\right )}^{m+1}} {m + 1} \right )}^{r-1} \\ & & \times {\left (\frac{1 -{\left (1 - F\left (y\right )\right )}^{m+1}} {m + 1} \right )}^{-r}f\left (x\right ),\text{ }x < y.\end{array}$$
(13.19)

Our last characterization of the pdf (13.12) will be based on the conditional expectation of \(X\left (r,n,m,k\right )\) given \(X\left (r + 1,n,m,k\right )\) when m = 0. 

Proposition 13.6.5.

Let \({\left ({X}_{j}\right )}_{j\geq 1}\) be a sequence of i.i.d. rvs on \(\left (0,{\left (\frac{\beta } {\alpha }\right )}^{ \frac{1} {2p} }\right )\) with an absolutely continuous cdf F, corresponding pdf f, and with \({\lim }_{x\rightarrow 0}s\left (x\right ){\left (F\left (x\right )\right )}^{r} = 0\) , where \(s\left (x\right ) = r{C}_{{_\ast}}\left (\alpha {x}^{p} + \beta {x}^{-p}\right )\) and \({C}_{{_\ast}}\) is an arbitrary positive constant. Let \({\left (X\left (r,n,m,k\right )\right )}_{1\leq r\leq n}\) be the first n gos from F. Then

$$E\left [s\left (X\left (r,n,m,k\right )\right )\vert X\left (r + 1,n,m,k\right ) = t\right ] = s\left (t\right ) + {C}_{{_\ast}},0 < t <{ \left (\frac{\beta } {\alpha }\right )}^{ \frac{1} {2p} }$$
(13.20)

implies that

$$F\left (x\right ) = C\exp \left (-\alpha {x}^{p} - \beta {x}^{-p}\right ),\text{ }0 < x <{ \left (\frac{\beta } {\alpha }\right )}^{ \frac{1} {2p} },$$

where \(C =\exp \left (2\sqrt{\alpha \beta }\right ).\)

Proof.

From (13.20), in view of (13.19), we have

$${\int }_{0}^{t}s\left (x\right )r{\left (F\left (x\right )\right )}^{r-1}{\left (F\left (t\right )\right )}^{-r}f\left (x\right )\mathrm{d}x = s\left (t\right ) + {C}_{ {_\ast}},\text{ }0 < t <{ \left (\frac{\beta } {\alpha }\right )}^{ \frac{1} {2p} }.$$

Upon integrating by parts on the left-hand side of the last equality and in view of the assumption \({\lim }_{x\rightarrow 0}s\left (x\right ){\left (F\left (x\right )\right )}^{r} = 0\), we have

$${C}_{{_\ast}}{\left (F\left (t\right )\right )}^{r} = -{\int }_{0}^{t}{s}^{{\prime}}\left (x\right ){\left (F\left (x\right )\right )}^{r}\mathrm{d}x.$$
(13.21)

Now, differentiating both sides of (13.21) with respect to t, we arrive at

$$\frac{f\left (t\right )} {F\left (t\right )} = - \frac{1} {r{C}_{{_\ast}}}{s}^{{\prime}}\left (t\right ).$$

Integrating both sides of this equality from x to \( {\left (\frac{\beta } {\alpha }\right )}^{ \frac{1} {2p} },\) we have

$$ F\left (x\right ) = \left \{\exp \left (2\sqrt{\alpha \beta }\right )\right \}\exp \left (-\alpha {x}^{p} - \beta {x}^{-p}\right ),\text{ }0 < x <{ \left (\frac{\beta } {\alpha }\right )}^{ \frac{1} {2p} }. $$

7 Characterizations of the SK Distribution

In this section we present characterizations of the pdf (13.14) in terms of a simple relationship between two truncated moments. Our characterization results presented here will, as in Sect. 13.5, employ Theorem G.

Proposition 13.7.1.

Let \(X : \Omega \rightarrow \left (0,\infty \right )\) be a continuous random variable and let \(h\left (x\right ) = {x}^{p-\nu }\) for \(x \in \left (0,\infty \right ).\) The pdf of X is (13.14), with τ > 1, if and only if there exist functions g and λ defined in Theorem G, satisfying the differential equation

$$\frac{{\lambda }^{{\prime}}\left (x\right )} {\lambda \left (x\right )h\left (x\right ) - g\left (x\right )} = \alpha p\left (\tau - 1\right ){x}^{\nu -1}{\left (\alpha {x}^{p} + \beta \right )}^{-1},\text{ }x > 0.$$
(13.22)

Proof.

Let X have pdf (13.14) and let

$$g\left (x\right ) = {x}^{p-\nu }{\left (\alpha {x}^{p} + \beta \right )}^{-1},\text{ }x > 0,$$

and

$$\lambda \left (x\right ) = \frac{\tau - 1} {\tau }{ \left (\alpha {x}^{p} + \beta \right )}^{-1},\text{ }x > 0.$$

Then

$$\begin{array}{rcl} \left (1 - F\left (x\right )\right )E\left [h\left (X\right )\vert X \geq x\right ]& = \frac{{C}_{3}} {\alpha p\left (\tau -1\right )}{\left (\alpha {x}^{p} + \beta \right )}^{1-\tau },\text{ }x > 0,& \\ \left (1 - F\left (x\right )\right )E\left [g\left (X\right )\vert X \geq x\right ]& = \frac{{C}_{3}} {\alpha p\tau }{\left (\alpha {x}^{p} + \beta \right )}^{-\tau },\text{ }x > 0, & \\ \end{array}$$

and

$$\lambda \left (x\right )h\left (x\right ) - g\left (x\right ) = -\frac{1} {\tau }{x}^{p-\nu }{\left (\alpha {x}^{p} + \beta \right )}^{-1} < 0\text{ }for\text{ }x > 0.$$

The differential equation (13.22) clearly holds.

Conversely, if g and λ satisfy the differential equation (13.22), then

$${s}^{{\prime}}\left (x\right ) = \frac{{\lambda }^{{\prime}}\left (x\right )h\left (x\right )} {\lambda \left (x\right )h\left (x\right ) - g\left (x\right )} = \alpha p\left (\tau - 1\right ){x}^{p-1}{\left (\alpha {x}^{p} + \beta \right )}^{-1},\text{ }x > 0,$$

and hence

$$ s\left (x\right ) =\ln { \left (\alpha {x}^{p} + \beta \right )}^{\tau -1},\text{ }x > 0. $$

Now from Theorem G, X has pdf (13.14).

Corollary 13.7.2.

Let \(X : \Omega \rightarrow \left (0,\infty \right )\) be a continuous random variable and let \(h\left (x\right ) = {x}^{p-\nu }{\left (\alpha {x}^{p} + \beta \right )}^{-1}\) and \(g\left (x\right ) = {x}^{p-\nu }\) for \(x \in \left (0,\infty \right ).\) The pdf of X is (13.14), with τ > 1, if and only if the function λ has the form

$$\lambda \left (x\right ) = \frac{\tau } {\tau - 1}\left (\alpha {x}^{p} + \beta \right ),\text{ }x > 0.$$
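As a numerical illustration of Corollary 13.7.2 (not part of the original text; parameter values are arbitrary), the following Python sketch checks that the ratio of the two truncated moments, with h and g as stated in the corollary, equals \(\lambda \left (x\right ) = \frac{\tau } {\tau - 1}\left (\alpha {x}^{p} + \beta \right )\).

```python
# Numerical sketch (illustrative parameters): for the SK pdf (13.14) with
# h and g as in Corollary 13.7.2, E[g(X)|X>=x] / E[h(X)|X>=x] should equal
# (tau/(tau-1)) * (alpha*x**p + beta).
import numpy as np
from scipy.integrate import quad

alpha, beta, nu, tau, p = 1.0, 2.0, 1.5, 2.0, 2
kernel = lambda x: x**(nu - 1) * (alpha * x**p + beta)**(-tau)
C3 = 1.0 / quad(kernel, 0, np.inf)[0]
f = lambda x: C3 * kernel(x)                                  # SK pdf (13.14)

h = lambda x: x**(p - nu) / (alpha * x**p + beta)
g = lambda x: x**(p - nu)

x0 = 1.1
Eh = quad(lambda u: h(u) * f(u), x0, np.inf)[0]
Eg = quad(lambda u: g(u) * f(u), x0, np.inf)[0]
print(Eg / Eh, tau / (tau - 1) * (alpha * x0**p + beta))      # should agree
```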

8 Conclusion

In designing a stochastic model for a particular modeling problem, an investigator will be vitally interested to know whether their model fits the requirements of a specific underlying probability distribution. To this end, the investigator will depend on characterizations of the selected distribution. A good number of distributions with important applications in many different fields have been mentioned in this work, and various characterizations of these distributions have been established. We certainly hope that these results will be of interest to an investigator who believes that their model has a distribution mentioned here and is looking to justify the validity of that model.

9 Appendix A

We would like to mention that this kind of characterization based on the ratio of truncated moments is stable in the sense of weak convergence. In particular, assume that there is a sequence \(\left \{{X}_{n}\right \}\) of random variables with distribution functions \(\left \{{F}_{n}\right \}\) such that the functions \({g}_{n}\), \({h}_{n}\), and \({\lambda }_{n}\) \(\left (n \in \mathbb{N}\right )\) satisfy the conditions of Theorem G, and let \({g}_{n} \rightarrow g\), \({h}_{n} \rightarrow h\) for some continuously differentiable real functions g and h. Let, finally, X be a random variable with distribution function F. Under the condition that \({g}_{n}\left (X\right )\) and \({h}_{n}\left (X\right )\) are uniformly integrable and the family \(\left \{{F}_{n}\right \}\) is relatively compact, the sequence \({X}_{n}\) converges to X in distribution if and only if \({\lambda }_{n}\) converges weakly to λ, where

$$\lambda \left (x\right ) = \frac{E\left [g\left (X\right )\vert X \geq x\right ]} {E\left [h\left (X\right )\vert X \geq x\right ]}.$$

This stability theorem ensures that the convergence of the distribution functions is reflected by the corresponding convergence of the functions g, h, and λ. It guarantees, for instance, the “convergence” of the characterization of the Wald distribution to that of the Lévy–Smirnov distribution as α → ∞, as was pointed out in [11].

A further consequence of the stability property of Theorem G is the application of this theorem to special tasks in statistical practice, such as the estimation of the parameters of discrete distributions. For such purposes, the functions g, h, and, especially, λ should be as simple as possible. Since the function triplet is not uniquely determined, it is often possible to choose λ as a linear function. Therefore, it is worth analyzing some special cases, which helps to find new characterizations reflecting the relationship between individual continuous univariate distributions and which are appropriate in other areas of statistics.

In view of Theorem G, a characterization of the Pearson system, due to Glänzel [9], is given below.

Proposition A-2. Let X : Ω → H ⊆ ℝ be a continuous random variable and let \(g\left (x\right ) = {x}^{2} - tx - w\), \(h\left (x\right ) = rx + u\) for x ∈ H, where r, t, u, and w are real parameters such that the distribution is well defined on H. The distribution function of X belongs to Pearson’s system if and only if the function λ has the form λ = x, x ∈ H.

Remark A-3. Since it can always be assumed that the expectation of a non-strictly positive continuous random variable is zero, we let u = 0 , where appropriate, in the brief discussion below. Note that w > 0 if u = 0. 

The following cases can be distinguished:

  1. Type I.

     \(r \in \left (0,1\right )\), t≠0. (This is the family of finite beta distribution.)

  2. Type II.

     \(r \in \left (0,1\right )\), t = 0. (This is a symmetric beta distribution.)

  3. Type III.

     r = 1, t≠0. (This is the family of gamma distribution.) r = 1, t = 0. (This is the normal distribution.)

  4. Type IV.

     \(r \in \left (1 + \frac{{t}^{2}} {4w},\infty \right ),\) t≠0. 

  5. Type V.

     \(r = 1 + \frac{{t}^{2}} {4w}\), t≠0. (This is the family of inverse Gaussian distribution.)

  6. Type VI.

     \(r \in \left (1,1 + \frac{{t}^{2}} {4w}\right ),\) t≠0. (This is the family of infinite beta distribution.)

  7. Type VII.

     \(r \in \left (1 + \frac{{t}^{2}} {4w},\infty \right ),\) t = 0. 

The following proposition is given in Glänzel and Hamedani [11].

Proposition A-4. Let X : Ω → H ⊆ ℝ be a continuous random variable and let \(g\left (x\right ) = \frac{\left \{\left ({a}_{0}+1\right ){x}^{2}+\left ({a}_{ 1}+c\right )x+{a}_{2}\right \}} {\left \{{a}_{0}{x}^{2}+{a}_{1}x+{a}_{2}\right \}}\), \(h\left (x\right ) = \frac{\left \{x+c\right \}} {\left \{{a}_{0}{x}^{2}+{a}_{1}x+{a}_{2}\right \}}\) for x ∈ H, where c > 0, a 0 , a 1 , and a 2 are real parameters such that the distribution function is well defined on H. The distribution function of X belongs to Pearson’s system if and only if the function λ has the form λ = x, x ∈ H.

The families of Pearson’s system can be obtained from special choices of the parameters c, \({a}_{0}\), \({a}_{1}\), and \({a}_{2}\) (see, e.g., [18]).