1 Introduction

Modeling the lifetime of systems and components is a problem of central importance in reliability theory and survival analysis, among other sciences. In some practical situations, observations are not recorded according to the original distribution of the data. This may happen because the units of a population have unequal chances of being recorded by the investigator. In a special case, the probability that an observation from the main population is recorded depends on its size (length). A random sample drawn in this way is called a size-biased sample. Let X be a non-negative random variable (rv) with distribution function (df) F, survival function (sf) \(\bar{F}=1-F\) and probability density function (pdf) f, whenever it exists. The non-negative random variable \(X_{\theta }\) with the pdf

$$\begin{aligned} g(x\text { }|\text { }\theta )=\frac{x^{\theta }f(x)}{E\left[ X^{\theta } \right] },\quad x>0; \end{aligned}$$
(1)

is said to have the size-biased distribution of order \(\theta \in [0,\infty ).\) The parameter \(\theta \) in the model is called the moment parameter. When \(\theta =1\), the model is referred to as the length-biased distribution. Reliability relations and some applications in forestry regarding size-biased distributions have been studied by several authors (cf. Blumenthal 1967; Scheaffer 1972; Patil and Ord 1976; Patil and Rao 1978; Mahfoud and Patil 1982; Gupta and Keating 1986; Gupta and Kirmani 1990; Nanda and Jain 1999; Bartoszewicz and Skolimowska 2006; Gove 2003a, b). It is known that in many practical circumstances the parameter \(\theta \) may not be constant, for various reasons, and the resulting heterogeneity is sometimes unpredictable, unexplained and impossible to neglect. In addition, it often happens that data from several populations are mixed and information about which subpopulation gave rise to individual data points is unavailable. Mixture models are used to model such data sets. For example, measurements of the life lengths of a device may be gathered without regard to the manufacturer, or data may be gathered on humans without regard, say, to blood type. If the ignored variable (manufacturer or blood type) has a bearing on the characteristic being measured, then the data are said to come from a mixture. In fact, it is hard to find data that are not some kind of mixture, because there is almost always some relevant covariate that is not observed. The study of reliability properties of various mixture models has recently received much attention in the literature. Once reliability data are modeled by a mixture, the mixing operation can change the pattern of aging of the lifetime unit under consideration in some favorable way (see Block and Joe 1997; Finkelstein and Esaulova 2006; Gupta and Kirmani 2006; Marshall and Olkin 2007; Shaked and Shanthikumar 2007; Gupta and Gupta 2009; Li and Zhao 2011).
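As a quick computational illustration of the model in (1), the following minimal Python sketch produces a size-biased sample by resampling an ordinary sample with weights proportional to \(x^{\theta }\); the exponential baseline and the resampling scheme are illustrative assumptions, not part of the model.

```python
# A minimal sketch of size-biased sampling from model (1); the Exp(1)
# baseline and the resampling scheme are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0                                   # theta = 1: length bias
x = rng.exponential(scale=1.0, size=200_000)  # sample from the baseline f

# Resample with probability proportional to x**theta; the draws then
# follow g(x | theta) = x**theta * f(x) / E[X**theta].
w = x**theta
y = rng.choice(x, size=100_000, p=w / w.sum())

# For Exp(1) the size-biased mean is E[X^2]/E[X] = 2 (the length-biased
# law is Gamma(2, 1)); the simulated mean should be close to 2.
print(y.mean())
```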

The purpose of this paper is to propose and analyze a mixture model of size-biased distributions in which the moment parameter is taken as a random variable. In view of the model, the dependence structure between the mixing and the overall random variables is determined, and some closure properties with respect to some well-known stochastic orders are established. In addition, reliability aspects of the model are discussed using some aging properties. To underline the usefulness of the model and clarify some useful facts, some examples of interest in reliability and statistics are given. The rest of the paper is organized as follows. In Sect. 2, for ease of reference, we present some definitions and basic properties which will be used in the sequel. In Sect. 3, the new model and its representation are described, and some illustrative examples of its usefulness are provided. In Sect. 4, we discuss the dependence structure of the model, establish some closure properties with respect to some stochastic orders, and present a few preservation properties of some aging classes under the structure of the model. In Sect. 5, we obtain preservation properties of some stochastic orders in the model. Finally, in Sect. 6, we give a summary and a brief discussion to conclude the paper.

Throughout the paper, the terms increasing and decreasing stand for non-decreasing and non-increasing, respectively. All the expectations and integrals are implicitly assumed to exist wherever they appear.

2 Preliminaries

In reliability and survival studies, the hazard rate (HR) and the reversed hazard rate (RHR) functions are very important measures. For the random variable X,  the HR function is given by \(r_{X}(x)=f(x)/\bar{F}(x),\,x\ge 0\) and the RHR function is given by \(\widetilde{r}_{X}(x)=f(x)/F(x),x>0\). In the following, we present definitions of some stochastic orders and aging notions used throughout the paper. For stochastic orders we refer to Shaked and Shanthikumar (2007) and for the aging notions we refer to Barlow and Proschan (1975) and Lai and Xie (2006).

Definition 2.1

Let X and Y be two non-negative random variables with df’s F and G, sf’s \(\bar{F}\) and \(\bar{G}\), pdf’s f and g, HR functions \(r_{X}\) and \(r_{Y},\) and RHR functions \(\widetilde{r}_{X}\) and \(\widetilde{r}_{Y},\) respectively. We say that X is smaller than Y in the:

  1. (i)

    Stochastic order (denoted as \(X\le _{st}Y\)), if \(\bar{F}(x)\le \bar{G}(x)\), for all \(x\ge 0.\)

  2. (ii)

    Likelihood ratio order (denoted as \(X\le _{lr}Y\)), if \(g(x)/f(x)\) is increasing in \(x>0.\)

  3. (iii)

    Hazard rate order (denoted as \(X\le _{hr}Y\)), if \(r_{X}(x)\ge r_{Y}(x),\) for all \(x\ge 0.\)

  4. (iv)

    Reversed hazard rate order (denoted as \(X\le _{rh}Y\)), if \(\widetilde{r}_{X}(x)\le \widetilde{r}_{Y}(x),\) for all \(x>0.\)

  5. (v)

    Up likelihood ratio (up hazard rate) [up reversed hazard rate] order (denoted as \(X\le _{lr\uparrow ~(hr\uparrow )~[rh\uparrow ]}Y\)) if, for all \(x\ge 0,\) \(X-x\le _{lr~(hr)~[rh]}Y.\)

Definition 2.2

The non-negative random variable X is said to have:

  1. (i)

    Increasing (decreasing) likelihood ratio property [ILR (DLR)], if f is a log-concave (log-convex) function on \((0,\infty ).\)

  2. (ii)

    Increasing (decreasing) failure rate property [IFR (DFR)], if \(r_{X}\) is an increasing (decreasing) function on \((0,\infty ).\)

  3. (iii)

    Decreasing reversed failure rate (DRFR) property, if F is log-concave on \((0,\infty ).\)

The notion of total positivity, to which the general composition theorem of Karlin (1968) applies, and the notion of positive likelihood ratio dependence according to Nelsen (2006) are defined below.

Definition 2.3

A non-negative function \(\beta (x,y)\) is said to be totally positive (reverse regular) of order 2, denoted as \(\hbox {TP}_{2}\, (\hbox {RR}_{2})\), in \((x,y)\in \chi \times \gamma \), if

$$\begin{aligned} \left| \begin{array}{cc} \beta (x_{1},y_{1}) &{} \quad \beta (x_{1},y_{2}) \\ \beta (x_{2},y_{1}) &{} \quad \beta (x_{2},y_{2}) \end{array} \right| \ge (\le )\,0, \end{aligned}$$

for all \(x_{1}\le x_{2}\in \chi \), and \(y_{1}\le y_{2}\in \gamma ,\) in which \(\chi \) and \(\gamma \) are two subsets of the real line \(\mathbb {R}\).

Definition 2.4

Let \(h(y,\theta )\) be the joint density of \((Y,\Theta ).\) The random variables Y and \(\Theta \) are said to be positive likelihood ratio dependent (PLRD) if the joint density \(h(y,\theta )\) is TP\(_{2}\) in \((y,\theta )\).

Finally, the next identities indicate the structure of some reliability measures of the size-biased distribution with respect to those of the baseline distribution. Denote by \(G(x\text { }|\text { }\theta ),~\bar{G}(x\text { }|\text { }\theta ),~r(x\text { }|\text { }\theta )\) and \(\alpha (x\text { }|\text { }\theta )\) the df, the sf, the HR and the RHR of the random variable that has pdf (1). Then, one can observe that, for all \(x>0\)

$$\begin{aligned} G(x\text { }|\text { }\theta )= & {} \frac{E\left[ X^{\theta }\text { }|\text { }X\le x\right] }{E\left[ X^{\theta }\right] }F(x), \end{aligned}$$
(2)
$$\begin{aligned} \bar{G}(x\text { }|\text { }\theta )= & {} \frac{E\left[ X^{\theta }\text { }|\text { } X>x\right] }{E\left[ X^{\theta }\right] }\bar{F}(x), \end{aligned}$$
(3)
$$\begin{aligned} r(x\text { }|\text { }\theta )= & {} \frac{x^{\theta }}{E\left[ X^{\theta }\text { }| \text { }X>x\right] }r(x), \end{aligned}$$
(4)

and

$$\begin{aligned} \alpha (x\text { }|\text { }\theta )=\frac{x^{\theta }}{E\left[ X^{\theta } \text { }|\text { }X\le x\right] }\alpha (x). \end{aligned}$$
(5)
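As a quick check of (3), suppose, purely for illustration, that X is exponential with mean one and \(\theta =1.\) Then \(E\left[ X\text { }|\text { }X>x\right] =x+1\) by the memoryless property and \(E[X]=1,\) so (3) yields \(\bar{G}(x\text { }|\text { }1)=(x+1)e^{-x},\) which is the sf of a gamma distribution with shape 2 and unit scale; this agrees with the familiar fact that the length-biased version of an exponential random variable is gamma distributed.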

3 Structure of the model

We first introduce our mixture model of size-biased distributions using some well-known reliability measures. Then, we provide some examples to describe the usefulness of the proposed model in practical situations. Let \(\Theta \) be a non-negative rv with df H (and pdf h, if it exists). Then the model in (1) can be extended to

$$\begin{aligned} g^{*}(x)&=\int _{0}^{\infty }\frac{x^{\theta }}{E\left[ X^{\theta } \right] }f(x)~dH(\theta ) \nonumber \\&=f(x)~E\left[ \frac{x^{\Theta }}{\mu (\Theta )}\right] ,~~x>0; \end{aligned}$$
(6)

where \(\mu (\theta )=\int _{0}^{\infty }x^{\theta }f(x)dx\) (the \(\theta \)th moment of X) and the expectation is taken with respect to \(\Theta .\) The density in (6) is a weighted average of the density in (1) under a prior distribution H for the parameter \(\theta .\) In the sequel, the rv that has pdf \(g^{*}\) (and df \(G^{*}\)) is denoted by \(Y^{*}.\) In the context of the model of (6), the rvs \(X,~\Theta \) and \(Y^{*}\) are called the baseline, mixing and overall variables, respectively. It is noticeable here that \([Y^{*}\text { }|\text { }\Theta =\theta ]\) has the same pdf as in (1). Furthermore, when \(\Theta \) is degenerate at 0 (i.e. \(P(\Theta =0)=1\)) or at 1 (i.e. \(P(\Theta =1)=1\)), the model reduces to the original and to the length-biased distribution, respectively. The model in (6), therefore, gives more flexibility in modeling data based on size-biased distributions. By Fubini's theorem, the df of \(Y^{*}\) in terms of the df of X is obtained as

$$\begin{aligned} G^{*}(x)= & {} \int _{0}^{x}f(t)E\left[ \frac{t^{\Theta }}{\mu (\Theta )} \right] ~dt \nonumber \\= & {} \int _{0}^{x}\int _{0}^{\infty }\frac{t^{\theta }f(t)}{\mu (\theta )} ~dH(\theta )dt\nonumber \\= & {} \int _{0}^{\infty }\int _{0}^{x}\frac{t^{\theta }f(t)}{\mu (\theta )} ~dtdH(\theta ) \nonumber \\= & {} F(x)~E\left[ \frac{\mu (x,\Theta )}{\mu (\Theta )}\right] ,\quad \hbox {for all}\,x>0, \end{aligned}$$
(7)

where \(\mu (x,\theta )=E\left[ X^{\theta }\text { }|\text { }X\le x\right] .\) Likewise, we can get

$$\begin{aligned} \bar{G}^{*}(x)=\bar{F}(x)~E\left[ \frac{\nu (x,\Theta )}{\mu (\Theta )} \right] ,\quad \hbox {for all}~x>0, \end{aligned}$$
(8)

where \(\nu (x,\theta )=E\left[ X^{\theta }\text { }|\text { }X>x\right] .\) Denote by q and \(\beta \) the HR and the RHR functions of \(Y^{*},\) respectively. Then, in view of (4), we can get

$$\begin{aligned} q(x)&=\int _{0}^{\infty }\frac{g(x\text { }|\text { }\theta )}{\bar{G}^{*}(x)}~dH(\theta ) \nonumber \\&=\int _{0}^{\infty }r(x\text { }|\text { }\theta )\frac{\bar{G}(x\text { }| \text { }\theta )}{\bar{G}^{*}(x)}~dH(\theta ) \nonumber \\&=r(x)\int _{0}^{\infty }\frac{x^{\theta }}{\nu (x,\theta )}~d\Lambda (\theta \text { }|\text { }Y^{*}>x) \nonumber \\&=r(x)~E\left[ \frac{x^{\Theta }}{\nu (x,\Theta )}\text { }|\text { }Y^{*}>x\right] ,\quad \hbox {for all}~x>0, \end{aligned}$$
(9)

where \(\Lambda (\theta \text { }|\text { }Y^{*}>x)\) is the df of the rv \((\Theta \text { }|\text { } Y^{*}>x)\) which is given by

$$\begin{aligned} \Lambda (\theta \text { }|\text { }Y^{*}>x)=\frac{\int _{0}^{\theta }\bar{G} (x\text { }|\text { }w)~dH(w)}{\bar{G}^{*}(x)}. \end{aligned}$$

By the identity in (5), we can similarly derive

$$\begin{aligned} \beta (x)=\alpha (x)~E\left[ \frac{x^{\Theta }}{\mu (x,\Theta )}\text { }| \text { }Y^{*}\le x\right] , \end{aligned}$$
(10)

for all \(x>0.\) A brief computational sketch of the model is given next, followed by examples describing some useful situations where the mixture model of (6) is applicable.
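The sketch below draws from the overall variable \(Y^{*}\) by first drawing \(\Theta \) and then drawing from the size-biased density; it assumes an Exp(1) baseline, for which \(g(x\text { }|\text { }\theta )\) is the gamma density with shape \(\theta +1\) and unit scale, together with a hypothetical gamma prior for \(\Theta .\) Both choices are illustrative only.

```python
# A minimal sketch of drawing from the mixture model (6): first draw the
# moment parameter Theta from a prior H, then draw from g(. | Theta).
# Assumes an Exp(1) baseline, for which g(x | theta) is the
# Gamma(theta + 1, 1) density, and a hypothetical gamma prior on Theta.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
theta = rng.gamma(shape=2.0, scale=0.5, size=n)   # Theta ~ H (illustrative)
y_star = rng.gamma(shape=theta + 1.0, scale=1.0)  # [Y* | Theta] ~ g(. | Theta)

# Here E[Y* | Theta] = Theta + 1, so E[Y*] = E[Theta] + 1.
print(y_star.mean(), theta.mean() + 1.0)
```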

Example 3.1

(Weighted distributions). The random variable \(X_{w}\) is called the weighted version of X if it admits the pdf

$$\begin{aligned} f_{w}(x)=\frac{w(x)}{E\left[ w(X)\right] }f(x) \end{aligned}$$
(11)

where w is a non-negative function such that \(0<E\left[ w(X)\right] <\infty .\) Thus, the model of (6) corresponds to a weighted distribution with \(w(x)=E\left[ \frac{x^{\Theta }}{\mu (\Theta )}\right] ,\) for which \(E\left[ w(X)\right] =1.\)

Example 3.2

(Characterization of the mixing variable). Suppose that w is a weight function such that \(w(x)=\sum _{k=0}^{\infty }c_{k}~x^{k},\) for all \(x>0,\) where the \(c_{k}\)'s are non-negative constants. Let \(\mu (k)\) denote the kth moment of X. Then \(X_{w}\) can be characterized via the distribution of \(Y^{*},\) if an appropriate mixing variable is chosen. Let \(\Theta \) take on non-negative integer values. We have

$$\begin{aligned} \frac{w(x)}{E\left[ w(X)\right] }=\frac{\sum _{k=0}^{\infty } c_{k} x^k }{ \sum _{k=0}^{\infty }c_{k}~\mu (k)},~~x>0. \end{aligned}$$

In addition, it holds that

$$\begin{aligned} E\left[ \frac{x^{\Theta }}{\mu (\Theta )}\right] =\sum _{k=0}^{\infty }\frac{ P(\Theta =k)}{\mu (k)}~x^{k},~~\hbox {for all}~x>0. \end{aligned}$$

Thus the model in (6) coincides with the model in (11), if and only if

$$\begin{aligned} \sum _{k=0}^{\infty }\frac{c_{k}}{\sum _{k=0}^{\infty }c_{k}~\mu (k)} ~x^{k}=\sum _{k=0}^{\infty }\frac{P(\Theta =k)}{\mu (k)}~x^{k},~~ \hbox {for all}~x>0. \end{aligned}$$

By the uniqueness of power series expansions, the above equality guarantees

$$\begin{aligned} \frac{c_{k}}{\sum _{k=0}^{\infty }c_{k}~\mu (k)}=\frac{P(\Theta =k)}{\mu (k)},\quad \hbox {for all}~k=0,1,...~. \end{aligned}$$

That is

$$\begin{aligned} P(\Theta =k)=\frac{c_{k}~\mu (k)}{\sum _{k=0}^{\infty }c_{k}~\mu (k)} ,\quad k=0,1,...~. \end{aligned}$$

For example, the weight functions \(w(x)=\exp (ax)\) with \(a>0,\) and \(w(x)=a^{x}\) with \(a>1,\) can be applied here, provided that \(E\left[ w(X)\right] <\infty ,\) because

$$\begin{aligned} \exp (ax)=\sum _{k=0}^{\infty }\frac{a^{k}}{k!}~x^{k},\quad \hbox {for all}~x>0, \end{aligned}$$

and

$$\begin{aligned} a^{x}=\sum _{k=0}^{\infty }\frac{(\ln a)^{k}}{k!}~x^{k},\quad \hbox {for all}~x>0. \end{aligned}$$

Also, the weight function \(w(x)=1+x\) leads to a mixing variable \(\Theta \) with \(P(\Theta =0)+P(\Theta =1)=1,\) such that

$$\begin{aligned} P(\Theta =0)=\frac{1}{1+E(X)}, \end{aligned}$$

and

$$\begin{aligned} P(\Theta =1)=\frac{E(X)}{1+E(X)}, \end{aligned}$$

i.e. \(\Theta \) has a Bernoulli distribution with success probability \(\frac{E(X)}{1+E(X)}.\)
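Returning to the weight \(w(x)=\exp (ax)\) above, a small numerical check (assuming an Exp(1) baseline, so that \(\mu (k)=k!,\) and taking \(0<a<1\) so that \(E[w(X)]=1/(1-a)\) is finite) shows that the induced mixing variable is geometric, \(P(\Theta =k)=(1-a)a^{k}:\)

```python
# Numerical check of the mixing pmf induced by w(x) = exp(a*x), assuming
# an Exp(1) baseline (mu(k) = k!) and 0 < a < 1. Then c_k = a**k / k! and
# P(Theta = k) = c_k * mu(k) / sum_j c_j * mu(j) = (1 - a) * a**k.
from math import factorial

a = 0.4
K = 60                                  # series truncation, for illustration
c = [a**k / factorial(k) for k in range(K)]
mu = [factorial(k) for k in range(K)]   # k-th moment of Exp(1)
norm = sum(ck * mk for ck, mk in zip(c, mu))
pmf = [ck * mk / norm for ck, mk in zip(c, mu)]
print(pmf[:3], [(1 - a) * a**k for k in range(3)])  # the two lists agree
```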

4 Dependence, closure properties and aging properties

In this section, it is first proven that \(Y^{*}\) and \(\Theta \) in the model of (6) admit the strongest positive dependence structure. Then, some stochastic orders are established between the baseline variable X and the overall variable \(Y^{*}\) in the mixture model given in (6).

Theorem 4.1

In view of (6), \(Y^{*}\) and \(\Theta \) are PLRD.

Proof

Denote by \(h(\cdot )\) the pdf of \(\Theta .\) Note that \(Y^{*}\) and \( \Theta \) have joint density

$$\begin{aligned} h(y,\theta )&=g(y\text { }|\text { }\theta )~h(\theta ) \\&=\frac{y^{\theta }}{\mu (\theta )}~f(y)~h(\theta ),\quad y>0,~\theta \ge 0. \end{aligned}$$

For all \(y_{1}\le y_{2}\in (0,\infty ),\) and \(\theta _{1}\le \theta _{2}\in [0,\infty ),\) we get

$$\begin{aligned} \frac{h(y_{2},\theta _{1})}{h(y_{1},\theta _{1})}= & {} \frac{f(y_{2})}{f(y_{1}) }\left[ \frac{y_{2}}{y_{1}}\right] ^{\theta _{1}} \\\le & {} \frac{f(y_{2})}{f(y_{1})}\left[ \frac{y_{2}}{y_{1}}\right] ^{\theta _{2}}\\= & {} \frac{h(y_{2},\theta _{2})}{h(y_{1},\theta _{2})}. \end{aligned}$$

The inequality holds because \(y_{2}/y_{1}\ge 1.\) Hence \(h(y,\theta )\) is TP\(_{2}\) and so \((Y^{*},\Theta )\) is PLRD. \(\square \)
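The TP\(_{2}\) property in the above proof can also be checked numerically; the following sketch (assuming an Exp(1) baseline, so that the prior \(h(\theta )\) multiplies both rows and cancels from the sign of the determinant) evaluates the determinant of Definition 2.3 at one choice of points:

```python
# One numerical instance of the TP2 determinant from Definition 2.3 for
# the joint density in Theorem 4.1, assuming an Exp(1) baseline; the
# prior h(theta) cancels from the sign of the determinant.
from math import exp, gamma

def kernel(y, t):
    # y**t * f(y) / mu(t) with f the Exp(1) pdf, so mu(t) = Gamma(t + 1)
    return y**t * exp(-y) / gamma(t + 1.0)

y1, y2, t1, t2 = 1.0, 3.0, 0.5, 2.0   # y1 <= y2 and t1 <= t2
det = kernel(y1, t1) * kernel(y2, t2) - kernel(y1, t2) * kernel(y2, t1)
print(det >= 0.0)                     # True, consistent with PLRD
```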

The following result reveals some closure properties of the weighted average size-biased model given in (6) with respect to the above-mentioned stochastic orders. For similar results, we refer to Misra et al. (2008).

Theorem 4.2

Let X and \(Y^{*}\) have the pdfs f and \(g^{*},\) satisfying (6). Then:

  1. (i)

    \(X\le _{lr}Y^{*}.\)

  2. (ii)

    If X has the ILR (IFR) [DRFR] property, then \(X\le _{lr\uparrow ~(hr\uparrow )~[rh\uparrow ]}Y^{*}.\)

Proof

  1. (i)

    Note that

    $$\begin{aligned} \frac{g^{*}(x)}{f(x)}=E\left[ \frac{x^{\Theta }}{\mu (\Theta )}\right] \end{aligned}$$

    is increasing in \(x>0\), since \(x\mapsto x^{\theta }/\mu (\theta )\) is increasing for every \(\theta \ge 0.\)

  2. (ii)

    We know that X is ILR (IFR) [DRFR] if and only if

    $$\begin{aligned} \frac{f(x)}{f(t+x)}~\left( \frac{\bar{F}(x)}{\bar{F}(t+x)}\right) ~\left[ \frac{F(x)}{F(t+x)}\right] \end{aligned}$$

    is increasing in x,  for all \(t\ge 0.\) Also, \(X\le _{lr\uparrow ~(hr\uparrow )~[rh\uparrow ]}Y^{*}\) if and only if

    $$\begin{aligned} \frac{g^{*}(x)}{f(t+x)}~\left( \frac{\bar{G}^{*}(x)}{\bar{F}(t+x)} \right) ~\left[ \frac{G^{*}(x)}{F(t+x)}\right] \end{aligned}$$

    is increasing in x,  for all \(t\ge 0.\) Set \(K_{1}(x,\theta )=x^{\theta }/\mu (\theta ),\) \(K_{2}(x,\theta )=E\left[ X^{\theta }\text { }|\text { }X>x\right] /\mu (\theta ),\) and \(K_{3}(x,\theta )=E\left[ X^{\theta }\text { }|\text { }X\le x\right] /\mu (\theta ).\) It is not hard to see that \(K_{1},\) \(K_{2}\) and \(K_{3}\) are increasing in x,  for all \(\theta \ge 0.\) Then, in view of (6), (7) and (8), we have

    $$\begin{aligned} \frac{g^{*}(x)}{f(t+x)}= & {} \frac{f(x)}{f(t+x)}~E[K_{1}(x,\Theta )],\\ \frac{\bar{G}^{*}(x)}{\bar{F}(t+x)}= & {} \frac{\bar{F}(x)}{\bar{F}(t+x)} ~E[K_{2}(x,\Theta )], \end{aligned}$$

    and

    $$\begin{aligned} \frac{G^{*}(x)}{F(t+x)}=\frac{F(x)}{F(t+x)}~E[K_{3}(x,\Theta )], \end{aligned}$$

    which, being products of non-negative increasing functions of x (by the assumption on X together with the monotonicity of \(K_{1},\) \(K_{2}\) and \(K_{3}\)), are increasing in x, for all \(t\ge 0.\) \(\square \)

Theorem 4.3

Let a be a given non-negative value. Then:

  1. (i)

    \(Y^{*}\ge _{lr}[Y^{*}\text { }|\text { }\Theta =a]\) holds if \(P(\Theta \ge a)=1.\)

  2. (ii)

    \(Y^{*}\le _{lr}[Y^{*}\text { }|\text { }\Theta =a]\) holds if \( P(\Theta \le a)=1.\)

Proof

The proof is straightforward and hence we omit it. \(\square \)

In the rest of this section, we derive some implications involving aging properties in the mixture model of size-biased distributions given earlier. The following theorem deals with mixtures of size-biased distributions whose moment parameters are below a given value.

Theorem 4.4

Let \(P(\Theta \le a)=1,\) for some \(a\ge 0.\) Then

  1. (i)

    \([Y^{*}\text { }|\text { }\Theta =a]\) has ILR property if \(Y^{*}\) has ILR property.

  2. (ii)

    \(Y^{*}\) has DLR property if \([Y^{*}\text { }|\text { } \Theta =a]\) has DLR property.

Proof

We prove assertion (i); the proof of the other is similar. The identity in (6) gives, for all \(x>0\)

$$\begin{aligned} \ln g^{*}(x)=\ln f(x)+\ln E\left[ \frac{x^{\Theta }}{\mu (\Theta )} \right] , \end{aligned}$$

which by taking derivative yields

$$\begin{aligned} \frac{\frac{dg^{*}(x)}{dx}}{g^{*}(x)}=\frac{f^{^{\prime }}(x)}{f(x)}+ \frac{C(x)}{x}, \end{aligned}$$
(12)

where

$$\begin{aligned} C(x)=\frac{E\left[ \Theta \frac{x^{\Theta }}{\mu (\Theta )}\right] }{E\left[ \frac{x^{\Theta }}{\mu (\Theta )}\right] }. \end{aligned}$$

Set

$$\begin{aligned} \phi (i,\theta )=\left\{ \begin{array}{l@{\quad }l} 1, &{} \hbox {for}~i=1 \\ \theta ,&{} \hbox {for}~i=2 \end{array} \right. \end{aligned}$$

and set \(\psi (\theta ,x)=x^{\theta }/\mu (\theta ).\) Denote, for \(i=1,2\) and \(x>0\)

$$\begin{aligned} C(i,x)&=E\left[ \phi (i,\Theta )~\psi (\Theta ,x)\right] \\&=\int _{0}^{\infty }\phi (i,\theta )~\psi (\theta ,x)~dH(\theta ). \\ \end{aligned}$$

We know that \(\phi (i,\theta )\) is TP\(_{2}\) in \((i,\theta )\in \{1,2\}\times [0,\infty )\) and that \(\psi (\theta ,x)\) is \(\hbox {TP}_{2}\) in \((\theta ,x)\in [0,\infty )\times (0,\infty ).\) By the general composition theorem of Karlin (1968), it follows that \(C(i,x)\) is TP\(_{2}\) in \((i,x)\in \{1,2\}\times (0,\infty ),\) so that \(C(x)=C(2,x)/C(1,x)\) is increasing in x. Now observe that the equality in (12) can be rewritten as

$$\begin{aligned} \frac{\frac{dg^{*}(x)}{dx}}{g^{*}(x)}+\frac{1}{x}[a-C(x)]=\frac{ g^{^{\prime }}(x\mid a)}{g(x\mid a)}, \end{aligned}$$
(13)

where \(g(\cdot \mid a)\) is the pdf of \([Y^{*}\text { }|\text { }\Theta =a].\) Note that \(a-C(x)\ge 0,\) because \(P(\Theta \le a)=1.\) By assumption, \(\frac{dg^{*}(x)}{dx}/g^{*}(x)\) is decreasing in x; moreover, \(\frac{1}{x}[a-C(x)]\) is the product of two non-negative decreasing functions of x and is therefore decreasing as well. Thus the identity in (13) directly provides the proof. \(\square \)
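The monotonicity of C(x), which drives the proof, is easy to verify numerically; the sketch below assumes an Exp(1) baseline (so that \(\mu (k)=k!\)) and a hypothetical mixing pmf on \(\{0,1,2\},\) both purely for illustration.

```python
# Numerical check that C(x) = E[Theta x^Theta/mu(Theta)] / E[x^Theta/mu(Theta)]
# is increasing, assuming an Exp(1) baseline (mu(k) = k!) and a
# hypothetical mixing pmf on {0, 1, 2}.
import numpy as np
from math import factorial

p = np.array([0.5, 0.3, 0.2])        # P(Theta = k), illustrative
x = np.linspace(0.01, 10.0, 500)

num = sum(k * pk * x**k / factorial(k) for k, pk in enumerate(p))
den = sum(pk * x**k / factorial(k) for k, pk in enumerate(p))
print(np.all(np.diff(num / den) >= 0))   # True, as the TP2 argument shows
```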

The following result states that the DFR property passes from the baseline distribution into the overall distribution, if appropriate assumptions are imposed.

Theorem 4.5

Let \(\mu (\theta )<\infty ,\) for all \(\theta \) in the support of \(\Theta ,\) and let xr(x) be decreasing in x. Then

$$\begin{aligned} X~\hbox {is DFR}~\Rightarrow ~Y^{*}~\hbox {is DFR}. \end{aligned}$$

Proof

Recall that the HR of \(Y^{*}\) and the HR of X are connected via

$$\begin{aligned} q(x)=r(x)~E\left[ \frac{x^{\Theta }}{\nu (x,\Theta )}\text { }|\text { } Y^{*}>x\right] ,~~\hbox {for all}~x>0. \end{aligned}$$

By assumption r is decreasing. Thus it suffices to prove that the other term is also decreasing. We have

$$\begin{aligned} E\left[ \frac{x^{\Theta }}{\nu (x,\Theta )}\text { }|\text { }Y^{*}>x \right]&=\int _{0}^{\infty }\frac{x^{w}}{\nu (x,w)}~d\Lambda (w\text { }|\text { } Y^{*}>x) \\&=E\left[ \phi (x,W)\right] , \end{aligned}$$

where W is a non-negative rv, depending on x, with pdf

$$\begin{aligned} \gamma (w\text { }|\text { }x)=\frac{\bar{G}(x\text { }|\text { }w)~h(w)}{\bar{G}^{*}(x)},\quad x>0,~w\ge 0, \end{aligned}$$

and \(\phi (x,w)=x^{w}/\nu (x,w).\) We know that

$$\begin{aligned} \phi (x,w)=\frac{1}{E\left[ \left( \frac{X}{x}\right) ^{w}\text { }|\text { } X>x\right] }, \end{aligned}$$

which is obviously decreasing in w, for all \(x>0.\) Since \(\mu (w)<\infty ,\) for all w in the support of \(\Theta ,\) \(x^{w}\bar{F}(x)\) approaches zero as x tends to infinity. Integration by parts yields

$$\begin{aligned} \phi (x,w)&=\frac{x^{w}\bar{F}(x)}{\int _{x}^{\infty }u^{w}f(u)~du} \nonumber \\&=1-w~\frac{\int _{x}^{\infty }u^{w-1}\bar{F}(u)~du}{\int _{x}^{\infty }u^{w}f(u)~du}. \end{aligned}$$
(14)

Let \(w\ge 0\) be fixed. Define

$$\begin{aligned} \rho (i,x)=\int _{0}^{\infty }\varphi (i,u)~\psi (u,x)~du, \end{aligned}$$

where

$$\begin{aligned} \varphi (i,u)=\left\{ \begin{array}{ll} u^{w}f(u), &{} \quad \hbox {for}~i=1 \\ u^{w-1}\bar{F}(u), &{} \quad \hbox {for}~i=2 \end{array} \right. \end{aligned}$$

and \(\psi (u,x)=1\) if \(u>x\) and \(\psi (u,x)=0\) if \(u\le x.\) By the assumption that xr(x) is decreasing, \(\varphi (i,u)\) is \(\hbox {TP}_{2}\) in \( (i,u)\in \{1,2\}\times (0,\infty ).\) Also, it is easily seen that \(\psi (u,x) \) is \(\hbox {TP}_{2}\) in \((u,x)\in (0,\infty )\times (0,\infty ).\) Thus, the general composition theorem of Karlin (1968) implies that \(\rho (i,x)\) is \(\hbox {TP}_{2}\) in \((i,x)\in \{1,2\}\times (0,\infty ).\) That is

$$\begin{aligned} \frac{\int _{x}^{\infty }u^{w-1}\bar{F}(u)~du}{\int _{x}^{\infty }u^{w}f(u)~du} \end{aligned}$$

is increasing in \(x>0.\) Therefore, from (14), for all \(w\ge 0,\) \(\phi (x,w)\) is decreasing in x. By Theorem 4.1 it is concluded that \([Y^{*}\text { }|\text { }\Theta =w_{1}]\le _{lr}[Y^{*}\text { }|\text { }\Theta =w_{2}],\) for all \(w_{1}\le w_{2}\in [0,\infty ),\) which further implies that W is stochastically increasing in x. Applying Lemma 2.2(ii) in Misra and Van Der Meulen (2003), \(E\left[ \phi (x,W)\right] \) is decreasing in x. The proof is complete. \(\square \)

5 Stochastic order relations

In this section, we first assume that the baseline variable in the mixture model of size-biased distributions is fixed in law and investigate whether some stochastic orders imposed on two mixing variables are translated to the corresponding overall variables. Then, assuming that the mixing variable is fixed in law, we show that, under some conditions, some stochastic orders are translated from the baseline variables to the overall variables. Let \(\Theta _{i}\) be a non-negative rv with pdf \(h_{i},\) df \(H_{i}\) and sf \(\bar{H}_{i},\) for \(i=1,2.\) Then denote by \(Y_{i}^{*}\) a random variable that has density

$$\begin{aligned} g_{i}^{*}(x)=f(x)~E\left[ \frac{x^{\Theta _{i}}}{\mu (\Theta _{i})} \right] \end{aligned}$$
(15)

and df \(G_{i}^{*},\) for \(i=1,2.\) We assume that \(\Theta _{1}\) and \( \Theta _{2}\) are independent. The following result deals with the likelihood ratio order.

Theorem 5.1

Let the identity given in (15) hold. Then

$$\begin{aligned} \Theta _{1}\le _{ lr}\Theta _{2}~\Rightarrow ~Y_{1}^{*}\le _{lr}Y_{2}^{*}. \end{aligned}$$

Proof

Note that \(Y_{1}^{*}\le _{lr}Y_{2}^{*},\) if and only if, \( g_{i}^{*}(x)\) is TP\(_{2}\) in \((i,x)\in \{1,2\}\times (0,\infty ).\) In view of (15), we have

$$\begin{aligned} g_{i}^{*}(x)=\int _{0}^{\infty }\frac{f(x)x^{\theta }}{\mu (\theta )} h_{i}(\theta )~d\theta \end{aligned}$$

for \(i=1,2.\) We know by assumption that \(h_{i}(\theta )\) is \(\hbox {TP}_{2}\) in \((i,\theta )\in \{1,2\}\times [0,\infty ),\) because \(\Theta _{1}\le _{lr}\Theta _{2}.\) In parallel, obviously, \(\frac{f(x)x^{\theta }}{\mu (\theta )}\) is \(\hbox {TP}_{2}\) in \((x,\theta )\in (0,\infty )\times [0,\infty ).\) Thus, the general composition theorem of Karlin (1968) gives the desired result. \(\square \)
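A small numerical illustration of Theorem 5.1 is as follows; the Exp(1) baseline and the two mixing pmfs on \(\{0,1,2\}\) (ordered in likelihood ratio) are hypothetical choices for the sketch.

```python
# Numerical illustration of Theorem 5.1, assuming an Exp(1) baseline and
# two hypothetical mixing pmfs on {0, 1, 2} with p2[k]/p1[k] increasing
# in k, i.e. Theta_1 <=lr Theta_2.
import numpy as np
from math import factorial

p1 = np.array([0.5, 0.3, 0.2])
p2 = np.array([0.2, 0.3, 0.5])
x = np.linspace(0.1, 10.0, 200)

def g_star(p):
    # g*(x) = sum_k p[k] * x**k * exp(-x) / k!, the mixture of the
    # size-biased Exp(1) densities of integer orders k
    return sum(pk * x**k * np.exp(-x) / factorial(k) for k, pk in enumerate(p))

ratio = g_star(p2) / g_star(p1)
print(np.all(np.diff(ratio) >= 0))   # True: Y_1* <=lr Y_2*
```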

Theorem 5.2

In view of (15),

$$\begin{aligned} \Theta _{1}\le _{hr}\Theta _{2}~\Rightarrow ~Y_{1}^{*}\le _{hr}Y_{2}^{*}. \end{aligned}$$

Proof

Following the identity in (8), we can write, for all \(x>0\)

$$\begin{aligned} \bar{G}_{i}^{*}(x)&=\bar{F}(x)~E\left[ \frac{\nu (x,\Theta _{i})}{\mu (\Theta _{i})}\right] \\&=\int _{0}^{\infty }\psi (\theta ,x)~dH_{i}(\theta ),\quad i=1,2; \end{aligned}$$

where

$$\begin{aligned} \psi (\theta ,x)=\frac{\bar{F}(x)~\nu (x,\theta )}{\mu (\theta )} ,\quad x>0,\theta \ge 0. \end{aligned}$$

Observe that

$$\begin{aligned} \nu (x,\theta )&=E(X^{\theta }\text { }|\text { }X>x) \\&=\int _{x}^{\infty }u^{\theta }\frac{f(u)}{\bar{F}(x)}~du \\&=\int _{0}^{\infty }\varphi (\theta ,u)~\phi (u,x)~du \end{aligned}$$

in which \(\varphi (\theta ,u)=u^{\theta }\) and

$$\begin{aligned} \phi (u,x)=\left\{ \begin{array}{ll} 0, &{} \quad \hbox {for}~u\le x \\ \frac{f(u)}{\bar{F}(x)}, &{} \quad \hbox {for}~u>x. \end{array} \right. \end{aligned}$$

Clearly, \(\varphi \) is TP\(_{2}\) in \((\theta ,u)\) and \(\phi \) is TP\(_{2}\) in \((u,x)\). Hence, using the general composition theorem of Karlin (1968), \(\nu (x,\theta )\) is TP\(_{2}\) in \((x,\theta ).\) Knowing that \(\mu (\theta )=\nu (0,\theta )\) and that \(\nu \) is TP\(_{2}\) in \((x,\theta ),\) we get that \(\nu (x,\theta )/\mu (\theta )\) is increasing in \(\theta .\) Thus \(\psi (\theta ,x)\) is \(\hbox {TP}_{2}\) in \((\theta ,x)\) and is increasing in \(\theta .\) In addition, by assumption, \(\bar{H}_{i}(\theta )\) is \(\hbox {TP}_{2}\) in \((i,\theta )\in \{1,2\}\times [0,\infty ).\) Therefore, Lemma 4.2 of Li and Xu (2006) applies and gives that \(\bar{G}_{i}^{*}(x)\) is TP\(_{2}\) in \((i,x)\in \{1,2\}\times (0,\infty ),\) and hence the result follows. \(\square \)
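Theorem 5.2 can be illustrated in the same way; for an Exp(1) baseline, \(\bar{G}(x\text { }|\text { }k)\) is the Gamma\((k+1,1)\) survival function, and the hazard rate order \(Y_{1}^{*}\le _{hr}Y_{2}^{*}\) is equivalent to \(\bar{G}_{2}^{*}(x)/\bar{G}_{1}^{*}(x)\) being increasing. The baseline and mixing pmfs below are the same hypothetical choices as in the previous sketch.

```python
# Companion check for Theorem 5.2 with an Exp(1) baseline: Gbar(x | k)
# is the Gamma(k+1, 1) survival function exp(-x) * sum_{j<=k} x**j / j!.
import numpy as np
from math import factorial

p1 = np.array([0.5, 0.3, 0.2])
p2 = np.array([0.2, 0.3, 0.5])   # Theta_1 <=lr Theta_2, hence <=hr
x = np.linspace(0.1, 15.0, 300)

def sf_star(p):
    sf_gamma = lambda k: np.exp(-x) * sum(x**j / factorial(j) for j in range(k + 1))
    return sum(pk * sf_gamma(k) for k, pk in enumerate(p))

ratio = sf_star(p2) / sf_star(p1)
print(np.all(np.diff(ratio) >= 0))   # True: Y_1* <=hr Y_2*
```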

The next result states that the reversed hazard rate order of two mixing variables implies the reversed hazard rate order of the corresponding overall variables, provided that the baseline random variable is fixed in distribution.

Theorem 5.3

Let the model of (15) be given. Then

$$\begin{aligned} \Theta _{1}\le _{rh}\Theta _{2}~\Rightarrow ~Y_{1}^{*}\le _{rh}Y_{2}^{*}. \end{aligned}$$

Proof

Consider the equation given in (7). Note that \(Y_{1}^{*}\le _{rh}Y_{2}^{*},\) if and only if,

$$\begin{aligned} G_{i}^{*}(x)=F(x)~E\left[ \frac{\mu (x,\Theta _{i})}{\mu (\Theta _{i})} \right] \end{aligned}$$

is \(\hbox {TP}_{2}\) in \((i,x)\in \{1,2\}\times (0,\infty ).\) Equivalently, we need to show that

$$\begin{aligned} E\left[ \frac{\mu (x_{1},\Theta _{1})}{\mu (\Theta _{1})}~\frac{\mu (x_{2},\Theta _{2})}{\mu (\Theta _{2})}\right] \ge E\left[ \frac{\mu (x_{1},\Theta _{2})}{\mu (\Theta _{2})}~\frac{\mu (x_{2},\Theta _{1})}{\mu (\Theta _{1})}\right] \end{aligned}$$

for all \(x_{1}\le x_{2}\in (0,\infty ).\) Define

$$\begin{aligned} \phi _{1}(\theta _{1},\theta _{2})=\frac{\mu (x_{1},\theta _{2})}{\mu (\theta _{2})}\times \frac{\mu (x_{2},\theta _{1})}{\mu (\theta _{1})}, \end{aligned}$$

and

$$\begin{aligned} \phi _{2}(\theta _{1},\theta _{2})=\frac{\mu (x_{1},\theta _{1})}{\mu (\theta _{1})}\times \frac{\mu (x_{2},\theta _{2})}{\mu (\theta _{2})}. \end{aligned}$$

We have

$$\begin{aligned} \mu (x,\theta )&=E(X^{\theta }\text { }|\text { }X\le x) \\&=\int _{0}^{x}u^{\theta }\frac{f(u)}{F(x)}~du \\&=\int _{0}^{\infty }\varphi (\theta ,u)~\phi (u,x)~du \end{aligned}$$

in which \(\varphi (\theta ,u)=u^{\theta }\) and

$$\begin{aligned} \phi (u,x)=\left\{ \begin{array}{ll} \frac{f(u)}{F(x)}, &{} \quad \hbox {for}~u\le x \\ 0, &{} \quad \hbox {for}~u>x. \end{array} \right. \end{aligned}$$

Obviously, \(\varphi \) is \(\hbox {TP}_{2}\) in \((\theta ,u)\) and \(\phi \) is \(\hbox {TP}_{2}\) in \((u,x)\). Thus, using the general composition theorem of Karlin (1968), \(\mu (x,\theta )\) is \(\hbox {TP}_{2}\) in \((x,\theta ).\) Hence, for all \(\theta _{1}\le \theta _{2}\) and \(x_{1}\le x_{2}\)

$$\begin{aligned} \Delta \phi _{21}(\theta _{1},\theta _{2})&=\phi _{2}(\theta _{1},\theta _{2})-\phi _{1}(\theta _{1},\theta _{2}) \\&=\frac{1}{\mu (\theta _{1})\mu (\theta _{2})}\left[ \mu (x_{1},\theta _{1})\mu (x_{2},\theta _{2})-\mu (x_{1},\theta _{2})\mu (x_{2},\theta _{1}) \right] \\&\ge 0. \end{aligned}$$

It holds that \(\mu (\theta _{i})=\mu (\infty ,\theta _{i}),\) for \(i=1,2.\) Because \(\mu (x,\theta )\) is TP\(_{2},\) the ratio \(\mu (x_{1},\theta _{1})/\mu (\theta _{1})\) is decreasing in \(\theta _{1}\) and, furthermore, \(\mu (x_{2},\theta _{1})/\mu (x_{1},\theta _{1})\) is increasing in \(\theta _{1}.\) Therefore

$$\begin{aligned} \Delta \phi _{21}(\theta _{1},\theta _{2})=\frac{\mu (x_{1},\theta _{1})}{ \mu (\theta _{1})\mu (\theta _{2})}\left[ \mu (x_{2},\theta _{2})-\mu (x_{1},\theta _{2})\frac{\mu (x_{2},\theta _{1})}{\mu (x_{1},\theta _{1})} \right] \end{aligned}$$

is non-negative and decreasing in \(\theta _{1},\) for all \(\theta _{1}\le \theta _{2}\in [0,\infty )\) and \(x_{1}\le x_{2}\in (0,\infty ).\) Now Theorem 1.B.48 of Shaked and Shanthikumar (2007) completes the proof. \(\square \)

In the remaining part of this section, we consider another observation regarding the mixture model of size-biased distributions. Let \(X_{i}\) be a non-negative rv with pdf \(f_{i}\) and df \(F_{i},\) for \(i=1,2.\) Then denote by \(Y_{i}\) the random variable that admits the density

$$\begin{aligned} g_{i}(x)=f_{i}(x)~E\left[ \frac{x^{\Theta }}{\mu _{i}(\Theta )}\right] , \end{aligned}$$
(16)

where \(\mu _{i}(\theta )=E(X_{i}^{\theta }),\) for \(i=1,2.\) That is, we have two mixture models of the form (6), with baseline variables \(X_{1}\) and \(X_{2},\) overall variables \(Y_{1}\) and \(Y_{2},\) and a common mixing variable \(\Theta .\)

Theorem 5.4

Consider the model of (16). Let \((\Theta \,|\,Y_{1}>x)\le _{st}(\Theta \text { }|\text { }Y_{2}>x),\) for all \(x>0.\) Then

$$\begin{aligned} X_{1}\le _{hr}X_{2}~\Rightarrow ~Y_{1}\le _{hr}Y_{2}. \end{aligned}$$

Proof

Denote by \(r_{i}\) and \(q_{i},\) the HR functions of \(X_{i}\) and \(Y_{i},\) respectively, for \(i=1,2.\) As given in (9), we can write, for all \(x>0\)

$$\begin{aligned} q_{i}(x)=r_{i}(x)~E\left[ \frac{x^{\Theta }}{\nu _{i}(x,\Theta )}\text { }| \text { }Y_{i}>x\right] , \end{aligned}$$

where \(\nu _{i}(x,\theta )=E(X_{i}^{\theta }\,|\,X_{i}>x),\) for \(i=1,2.\) We know that \(X_{1}\le _{hr}X_{2},\) if and only if, \((X_{1}\text { }|\text { }X_{1}>x)\le _{st}(X_{2}\,|\,X_{2}>x),\) for all \(x>0.\) Thus it follows that

$$\begin{aligned} \nu _{1}(x,\theta )= & {} E(X_{1}^{\theta }\text { }|\text { }X_{1}>x) \\\le & {} \nu _{2}(x,\theta ) \\= & {} E(X_{2}^{\theta }\text { }|\text { }X_{2}>x), \end{aligned}$$

\(\hbox {for all}\,x>0~\hbox {and}~\theta \ge 0.\) Also, by assumption we have \(r_{1}(x)\ge r_{2}(x),\) for all \(x>0.\) Therefore, for all \(x>0\)

$$\begin{aligned} q_{1}(x)-q_{2}(x)&=\int _{0}^{\infty }\frac{x^{\theta }r_{1}(x)}{\nu _{1}(x,\theta )}~d\Lambda (\theta \text { }|\text { }Y_{1}>x)-\int _{0}^{\infty }\frac{x^{\theta }r_{2}(x)}{\nu _{2}(x,\theta )}~d\Lambda (\theta \text { }| \text { }Y_{2}>x) \nonumber \\&\ge r_{2}(x)~\int _{0}^{\infty }\frac{x^{\theta }}{\nu _{2}(x,\theta )}~d \left[ \Lambda (\theta \text { }|\text { }Y_{1}>x)-\Lambda (\theta \text { }| \text { }Y_{2}>x)\right] . \end{aligned}$$
(17)

On the other hand, \(x^{\theta }/\nu _{2}(x,\theta )\) is decreasing in \(\theta \) and, by the assumption that \((\Theta \,|\,Y_{1}>x)\le _{st}(\Theta \text { }|\text { }Y_{2}>x),\) we have

$$\begin{aligned} \int _{0}^{w}d\left[ \Lambda (\theta \text { }|\text { }Y_{1}>x)-\Lambda (\theta \text { }|\text { }Y_{2}>x)\right] \ge 0, \end{aligned}$$

for all \(w>0.\) Applying Lemma 7.1(b) of Barlow and Proschan (1975) to (17) we conclude the proof. \(\square \)

Theorem 5.5

In view of the model given in (16), let \((\Theta \,|\,Y_{1}\le x)\le _{st}(\Theta \,|\,Y_{2}\le x),\) for all \(x>0,\) and let \((Y_{1}\,|\, \Theta =\theta _{1})\le _{rh}(Y_{2}\text { }|\text { }\Theta =\theta _{2}),\) for all \( \theta _{1}\le \theta _{2}\in [0,\infty ).\) Then \(Y_{1}\le _{rh}Y_{2}.\)

Proof

Denote by \(\alpha _{i}\) and \(\beta _{i},\) the RHR functions of \(X_{i}\) and \(Y_{i},\) respectively, for \(i=1,2.\) As in the identity (10), we can write, for all \(x>0\)

$$\begin{aligned} \beta _{i}(x)=\alpha _{i}(x)~E\left[ \frac{x^{\Theta }}{\mu _{i}(x,\Theta )} \text { }|\text { }Y_{i}\le x\right] , \end{aligned}$$

where \(\mu _{i}(x,\theta )=E\left[ X_{i}^{\theta }\text { }|\text { }X_{i}\le x\right] ,\) for \(i=1,2.\) The condition that \((Y_{1}\text { }|\text { } \Theta =\theta _{1})\le _{rh}(Y_{2}\text { }|\text { }\Theta =\theta _{2}),\) for all \(\theta _{1}\le \theta _{2}\in [0,\infty ),\) implies that

$$\begin{aligned} \frac{x^{\theta }\alpha _{2}(x)}{\mu _{2}(x,\theta )}&=\frac{x^{\theta }f_{2}(x)}{\int _{0}^{x}u^{\theta }f_{2}(u)~du} \\&\ge \frac{x^{\theta }f_{1}(x)}{\int _{0}^{x}u^{\theta }f_{1}(u)~du} \\&=\frac{x^{\theta }\alpha _{1}(x)}{\mu _{1}(x,\theta )}, \end{aligned}$$

for all \(x>0,\) and \(\theta \ge 0.\) Thus

$$\begin{aligned} \beta _{2}(x)-\beta _{1}(x)&=\int _{0}^{\infty }\frac{x^{\theta }\alpha _{2}(x)}{\mu _{2}(x,\theta )}~d\Lambda (\theta \text { }|\text { }Y_{2}\le x)-\int _{0}^{\infty }\frac{x^{\theta }\alpha _{1}(x)}{\mu _{1}(x,\theta )} ~d\Lambda (\theta \text { }|\text { }Y_{1}\le x) \nonumber \\&\ge \alpha _{1}(x)~\int _{0}^{\infty }\frac{x^{\theta }}{\mu _{1}(x,\theta )}~d\left[ \Lambda (\theta \text { }|\text { }Y_{2}\le x)-\Lambda (\theta \text { }|\text { }Y_{1}\le x)\right] . \end{aligned}$$
(18)

It can be checked as in the proof of Theorem 4.5 that \(x^{\theta }/\mu _{1}(x,\theta )\) is increasing in \(\theta .\) In parallel, by assumption,

$$\begin{aligned} \int _{w}^{\infty }d\left[ \Lambda (\theta \text { }|\text { }Y_{2}\le x)-\Lambda (\theta \text { }|\text { }Y_{1}\le x)\right] \ge 0, \end{aligned}$$

for all \(w\ge 0.\) Again, Lemma 7.1(a) of Barlow and Proschan (1975) can be applied to (18), which completes the proof. \(\square \)

6 Summary and concluding remarks

Based on the concept of mixture distributions, a new mixture model of size-biased distributions has been introduced in which the moment parameter is taken as a random variable. The model is applicable in situations where we are uncertain about the exact value of the parameter \(\theta \) that indexes the size-biased model. We established the strongest dependence structure between the mixing and the overall random variables and obtained some closure properties of the model with respect to some well-known stochastic orders. In addition, we provided various closure properties of the new model with respect to the up likelihood ratio, up hazard rate and up reversed hazard rate orders and in terms of some aging classes such as DLR, DFR and DRFR. We also investigated a number of reliability aspects of the model using some aging properties. Further properties and applications of the new model can be considered in future research.