Abstract
In this paper, we study the information-generating (IG) measure of \(k\)-record values and examine some of its main properties. We establish some bounds for the IG measure of \(k\)-record values. In addition, we present some results related to the characterization of an exponential distribution by maximization (minimization) of the IG measure of record values under certain conditions. We also examine the relative information generating (RIG) measure between the distribution of record values and the corresponding underlying distribution and present some results in this regard. Several examples have been provided throughout the study to illustrate the results. We also consider the problem of estimation of the IG measure for a two-parameter Weibull distribution based on the upper \(k\)-record values.
1 INTRODUCTION
The concept of record values was formulated by [13] as successive extremes occurring in a sequence of independent and identically distributed (iid) random variables. Records are of great importance in several real-life problems involving destructive stress testing, sporting and athletic events, meteorological analysis, oil and mining surveys, hydrology, seismology, etc. Prediction of the next record value is an interesting problem in many real-life situations. For a detailed survey on the theory and applications of record values, see [2, 4, 29] and the references therein.
Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common cumulative distribution function (cdf) \(F(x)\) which is absolutely continuous. An observation \(X_{j}\) is called an upper record if its value exceeds that of all preceding observations. Thus, \(X_{j}\) is an upper record if \(X_{j}>X_{i}\) for every \(i<j\). In an analogous way, one can also define lower record values.
The characteristic features of the parent distribution can also be studied by looking at the record statistics that arise from it. However, the expected waiting time for the occurrence of each record after the first may be infinite. Additionally, an outlier in the sequence of random variables can prevent later record values from being realized. One may overcome these difficulties by considering the \(k\)-record statistics introduced by [17].
Now, for a positive integer \(k\), the sequence of upper \(k\)-records, or simply \(k\)-records, is defined as follows. The upper \(k\)-record times \(T_{n(k)}\) are defined by
and, for \(n>1\),
where \(X_{i:m}\) is the \(i\)th order statistic in a random sample of size \(m\). The sequence of the upper \(k\)-record values \(U_{n(k)}\) are then defined by
In an analogous way, one can also define lower \(k\)-record values.
If the parent distribution is absolutely continuous with survival function \(\overline{F}_{X}(x)\) and probability density function (pdf) \(f_{X}(x)\), then the pdf of the \(n\)th upper \(k\)-record value \(U_{n(k)}\) is given by (see [4])
The pdf of the \(n\)th lower \(k\)-record value \(L_{n(k)}\) is given by (see [2])
Since the ordinary record values are contained in the \(k\)-records, the results for the usual records can be obtained as special cases by putting \(k=1\). Several applications of \(k\)-records are available in the literature; for some recent applications of \(k\)-record values, see [6, 11, 12].
Information theory, introduced by [18, 31], is one of the most important branches of science. The Shannon entropy of a continuous random variable \(X\), having pdf \(f(x)\) on support \(\chi\), is defined as
The Shannon entropy measure and its extensions have been considered by several researchers. The study of entropy measures for order statistics and record values has received considerable attention recently. For more details, one may refer to [1, 5, 7, 24, 27].
In information theory, generating functions have also been defined for probability densities to determine information quantities such as Shannon information and Kullback–Leibler divergence. Golomb [19] proposed the information generating (IG) function (measure) to generate some well-known information measures. Suppose the variable \(X\) has a density function \(f(x)\). Then, the IG function of the density \(f(x)\), for any \({\alpha}>0\), is defined as
provided the integral exists. To simplify notation, we suppress \(\chi\) for integration with respect to \(x\) throughout the paper, unless a distinction is needed.
Clearly \(G_{1}(X)=1\) and \(\dfrac{\partial}{\partial{\alpha}}G_{{\alpha}}(X)|_{{\alpha}=1}=-H(X)\), where \(H(X)\) is the Shannon entropy given in (5). In particular, when \({\alpha}=2\), the IG measure is reduced to \(\int_{\chi}f^{2}(x)dx=-2J(X)\), where \(J(X)\) is the extropy given by [28], which is also known as the informational energy (IE) measure. In physics and chemistry, the IG measure is known as the entropic moment, and it is closely related to the Renyi and Tsallis entropies. The IG measure plays a significant role in information theory and physics since it generates the most popular information measures, including Shannon entropy, Renyi entropy, Tsallis entropy, and extropy measures.
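These identities are easy to verify numerically. The following sketch (our illustration, using the standard exponential density, for which \(H(X)=1\) and the extropy is \(J(X)=-1/4\)) checks \(G_{1}(X)=1\), the derivative relation with the Shannon entropy, and \(G_{2}(X)=-2J(X)\):

```python
import numpy as np
from scipy import integrate

def ig_measure(pdf, alpha, a=0.0, b=np.inf):
    """Numerically compute G_alpha(X) = integral of f^alpha over the support."""
    val, _ = integrate.quad(lambda x: pdf(x) ** alpha, a, b)
    return val

f = lambda x: np.exp(-x)  # standard exponential pdf, support (0, inf)

assert abs(ig_measure(f, 1.0) - 1.0) < 1e-8          # G_1(X) = 1

# central difference of G_alpha at alpha = 1 approximates -H(X) = -1
eps = 1e-5
deriv = (ig_measure(f, 1 + eps) - ig_measure(f, 1 - eps)) / (2 * eps)
assert abs(deriv + 1.0) < 1e-3

assert abs(ig_measure(f, 2.0) - 0.5) < 1e-8          # G_2(X) = -2 J(X) = 1/2
```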
Recently, [32] studied the IG function of record values and examined some of its properties. Clark [16] used the IG function for stochastic processes to assist in the derivation of information measures for point processes. Kharazmi and Balakrishnan [25] studied the IG measure for order statistics and its applications in the study of mixed systems. Also, Kharazmi and Balakrishnan [26] introduced the Jensen IG measure and its connections to some well-known information measures such as the Jensen–Shannon, Jensen–Taneja, and Jensen-extropy information measures.
Guiasu and Reischer [20] proposed relative information generating (RIG) measure between two density functions. Let \(X\) and \(Y\) be two random variables with density functions \(f\) and \(g\), respectively. Then, the relative information generating measure, for any \({\alpha}>0\), is defined as
provided the integral exists. It is obvious that \(R_{1}(f,g)=1\) and
the Kullback–Leibler divergence, originally defined by [23].
In this paper, we consider the IG measure of \(k\)-record values and examine some of its main properties. We also examine the relative information generating (RIG) measure between the distribution of record values and the corresponding underlying distribution. To the best of our knowledge, the estimation of the IG measure based on \(k\)-record values has not been considered in the literature. Hence, in this paper, we also consider the maximum likelihood and Bayesian estimation of the IG measure based on \(k\)-record values for the Weibull distribution.
In the present work, our goal is to study the IG measure for \(k\)-record values and then establish some results associated with it. The rest of this paper is organized as follows. In Section 2, we first examine the IG measure for the \(n\)th upper and lower \(k\)-record values. Section 3 deals with stochastic comparisons based on the IG measure of \(k\)-record values and establishes lower and upper bounds for this measure. In Section 4, some results associated with the characterization of the exponential distribution based on the IG measure of \(k\)-record values are given. Section 5 is devoted to the relative information generating divergence of \(k\)-record values. In Section 6, we consider the estimation of the IG measure for the Weibull distribution based on upper \(k\)-record values: we obtain the maximum likelihood estimators (MLEs) and the Bayes estimators of the IG measure. Finally, some concluding remarks are made in Section 7.
2 IG MEASURE OF \(k\)-RECORD VALUES
In this section, we first examine the IG measure of lower and upper \(k\)-record values and then establish some results for this measure.
Theorem 2.1. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid continuous random variables from a distribution with common distribution function \(F(x)\), pdf \(f(x)\), and quantile function \(F^{-1}(\cdot)\). Let \(U_{n(k)}\) denote the \(n\)th upper \(k\)-record. Then the IG measure of \(U_{n(k)}\) is given by
where \(U_{n,k}\sim\Gamma(n,k)\) and \(V_{n,k}\sim\Gamma({\alpha}(n-1)+1,{\alpha}(k-1)+1)\), and \(\Gamma(\lambda,{\beta})\) denotes a gamma distribution with pdf given by
Proof. From the definition of the IG measure given in (6), we have the IG measure of the \(n\)th upper \(k\)-records as
On putting \(v={-\textrm{log}\;(1-F(x))}\), we get
Since
we have
Hence the result.
Example 2.1. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common two-parameter Weibull distribution with pdf given by
Here,
Therefore,
and hence
Thus, we have
We have drawn the graphs of the IG measure of the \(n\)th upper \(k\)-records for the Weibull distribution for different values of \({\alpha}\); they are given in Fig. 1. It can be observed from Fig. 1 that \(G_{{\alpha}}(U_{n(k)})\) is increasing in \(n\) for \(0<{\alpha}<1\) and decreasing in \(n\) for \({\alpha}>1\), and that \(G_{{\alpha}}(U_{n(k)})\) is decreasing in \(k\) for \(0<{\alpha}<1\) and increasing in \(k\) for \({\alpha}>1\). Also, when \(k=1\), the IG measure of classical records is obtained.
Example 2.2. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common Pareto distribution with pdf given by
Here,
Therefore,
and hence
Thus, we have
Example 2.3. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common Rayleigh distribution with pdf given by
Here,
Therefore,
and hence
Thus, we have
Theorem 2.2. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid continuous random variables from a distribution with common distribution function \(F(x)\), pdf \(f(x)\), and quantile function \(F^{-1}(\cdot)\). Let \(L_{n(k)}\) denote the \(n\)th lower \(k\)-record. Then the IG measure of \(L_{n(k)}\) is given by
where \(U_{n,k}\sim\Gamma(n,k)\) and \(V_{n,k}\sim\Gamma({\alpha}(n-1)+1,{\alpha}(k-1)+1)\).
Proof. From the definition of the IG measure given in (6), we have the IG measure of the \(n\)th lower \(k\)-record as
On putting \(v={-\textrm{log}\;F(x)}\), we get
where \(G_{{\alpha}}(U_{n,k})\) is defined in (11). Hence the result.
Example 2.4. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common generalized exponential distribution with pdf given by
Here,
Therefore,
and hence
Thus, we have
We have drawn the graphs of the IG measure of the \(n\)th lower \(k\)-records for the generalized exponential distribution for different values of \({\alpha}\); they are given in Fig. 2. It can be observed from Fig. 2 that, for \(n>k\), \(G_{{\alpha}}(L_{n(k)})\) is decreasing in \(n\) for \(0<{\alpha}<1\) and increasing in \(n\) for \({\alpha}>1\).
Example 2.5. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common inverse exponential distribution with pdf given by
Here,
Therefore,
and hence
Thus, we have
3 PROPERTIES OF IG MEASURE OF \(k\)-RECORD VALUES
In this section, we derive some properties of IG measure of \(n\)th upper and lower \(k\)-record values. The following theorem shows the monotone behavior of IG measure of \(n\)th upper \(k\)-record value in terms of \(n\). In order to prove this theorem, we need the following definitions and lemmas.
Definition 3.1 [30]. Let \(X\) and \(Y\) be two non-negative random variables such that \(P(X>x)\leq P(Y>x)\) for all \(x\geq 0\). Then \(X\) is said to be smaller than \(Y\) in the usual stochastic order (denoted by \(X\leq_{st}Y\)).
Definition 3.2 [30]. Let \(X\) and \(Y\) be two non-negative random variables with densities \(f\) and \(g\), respectively. The random variable \(X\) is said to be smaller than \(Y\) in likelihood ratio order (denoted by \(X\leq_{lr}Y\)) if \(f(x)g(y)\geq g(x)f(y)\) for all \(x\leq y\).
Lemma 3.1 [8]. If \(X\) and \(Y\) are two continuous or discrete random variables such that \(Y\leq_{lr}X\), then \(Y\leq_{st}X\).
Theorem 3.1. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid continuous random variables from a distribution with common distribution function \(F(x)\), pdf \(f(x)\), and quantile function \(F^{-1}(\cdot)\). Let \(U_{n(k)}\) denote the \(n\)th upper \(k\)-record, and suppose \(f(x)\) is non-decreasing in \(x\). Then,
1. \(G_{{\alpha}}(U_{n(k)})\) is non-decreasing in \(n\) for \({\alpha}>1\);
2. \(G_{{\alpha}}(U_{n(k)})\) is non-increasing in \(n\) for \(0<{\alpha}<1\).
Proof. From Theorem 2.1, we have
where \(V_{n,k}\sim\Gamma({\alpha}(n-1)+1,{\alpha}(k-1)+1)\). Then,
where \(D_{n}={\alpha}n\textrm{log}\;k+\textrm{log}\;(\Gamma({\alpha}(n-1)+1))-{\alpha}\textrm{log}\;\Gamma n-({\alpha}(n-1)+1)\textrm{log}\;({\alpha}(k-1)+1)\). Therefore,
Without loss of generality, assume that \(n\) is continuous and then taking derivative with respect to \(n\), we obtain,
where \(\psi(x)=\dfrac{d}{dx}(\textrm{log}\;\Gamma x)\) is the digamma function.
Since \(\psi(x)\) is an increasing function of \(x\) and \({\alpha}(n-1)+1>n\) and \({\alpha}(k-1)+1>k\) for \({\alpha}>1\), we conclude that \(D_{n}\) is an increasing function of \(n\). It is easy to show that \(V_{n,k}\leq_{lr}V_{n+1,k}\) and so \(V_{n,k}\leq_{st}V_{n+1,k}\). Moreover, \(f^{{\alpha}-1}(F^{-1}(1-e^{-x}))\) is non-decreasing in \(x\) for all \({\alpha}>1\), because \(f(x)\) is non-decreasing in \(x\). Thus we have,
and hence
Therefore, \(G_{{\alpha}}^{*}(U_{n+1(k)})-G_{{\alpha}}^{*}(U_{n(k)})\geq 0\), which implies that \(G_{{\alpha}}(U_{n(k)})\) is non-decreasing in \(n\) for \({\alpha}>1\).
Now, for \(0<{\alpha}<1\), we have \({\alpha}(n-1)+1<n\) and \({\alpha}(k-1)+1<k\), then \(D_{n}\) is a decreasing function of \(n\). Moreover, \(f^{{\alpha}-1}(F^{-1}(1-e^{-x}))\) is non-increasing in \(x\) for all \(0<{\alpha}<1\), because \(f(x)\) is non-decreasing in \(x\). Thus,
and hence
Therefore, \(G_{{\alpha}}^{*}(U_{n+1(k)})-G_{{\alpha}}^{*}(U_{n(k)})\leq 0\), which implies that \(G_{{\alpha}}(U_{n(k)})\) is non-increasing in \(n\) for \(0<{\alpha}<1\). This completes the proof.
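The monotonicity established above can be illustrated numerically through the gamma-expectation representation of Theorem 2.1. In the sketch below (our illustration), the parent is the power distribution \(F(x)=x^{c}\) on \((0,1)\), whose density \(cx^{c-1}\) is non-decreasing for \(c\geq 1\):

```python
import numpy as np
from scipy import integrate, special, stats

def ig_upper_k_record(n, k, alpha, quantile, pdf):
    """G_alpha(U_{n(k)}) via the representation of Theorem 2.1: a constant
    times E[f^{alpha-1}(F^{-1}(1 - e^{-V}))], V ~ Gamma(a(n-1)+1, a(k-1)+1)."""
    shape, rate = alpha*(n - 1) + 1, alpha*(k - 1) + 1
    integrand = lambda v: (pdf(quantile(1.0 - np.exp(-v)))**(alpha - 1)
                           * stats.gamma.pdf(v, a=shape, scale=1.0/rate))
    e, _ = integrate.quad(integrand, 0, np.inf)
    const = (k**(alpha*n) * special.gamma(shape)
             / (special.gamma(n)**alpha * rate**shape))
    return const * e

c = 2.0                                   # power distribution F(x) = x^c on (0, 1)
quantile = lambda u: u**(1.0/c)
pdf = lambda x: c * x**(c - 1)            # non-decreasing density for c >= 1

g_hi = [ig_upper_k_record(n, 2, 2.0, quantile, pdf) for n in range(1, 6)]
g_lo = [ig_upper_k_record(n, 2, 0.5, quantile, pdf) for n in range(1, 6)]
assert all(a <= b for a, b in zip(g_hi, g_hi[1:]))   # non-decreasing for alpha > 1
assert all(a >= b for a, b in zip(g_lo, g_lo[1:]))   # non-increasing for 0 < alpha < 1
```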
Now, we present two bounds for the IG measure of the \(n\)th upper \(k\)-record value.
Theorem 3.2. Let \(X\) be a random variable with IG measure \(G_{{\alpha}}(X)<\infty\). Then the IG measure of the \(n\)th upper \(k\)-record \(U_{n(k)}\) is bounded above as
where
(i) \(U_{n,k}\sim\Gamma(n,k)\) and
(ii) \(B_{n(k)}=\dfrac{({\alpha}(k-1)+1)({\alpha}(n-1))^{{\alpha}(n-1)}}{\Gamma({\alpha}(n-1)+1)}e^{-{\alpha}(n-1)}\), provided the integral exists.
(iii) \(r(x)=\dfrac{f(x)}{1-F(x)}\) is the hazard rate function.
Proof. The mode \(m_{n,k}\) of \(\Gamma({\alpha}(n-1)+1,{\alpha}(k-1)+1)\) with density function \(g_{n,k}\) is known to be \(\dfrac{{\alpha}(n-1)}{{\alpha}(k-1)+1}\). Then we have,
Now, we get
where the last equality is obtained by using the transformation \(x=F^{-1}(1-e^{-V_{n,k}})\). Now, substituting the inequality (24) in (9) gives the required result.
Example 3.1. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common Pareto II distribution with pdf given by
Then \(r(x)=\dfrac{c}{x+{\alpha}}\) and
So, for \({\alpha}>1\), we have
Theorem 3.3. Under the assumptions of Theorem 3.2, we have
for \({\alpha}>1(0<{\alpha}<1)\) , where \(M=f(m)<\infty\) and \(m=\sup\{x:f(x)\leq M\}\) is the mode of the density \(f\) .
Proof. Since \(M=f(m)\), where \(m\) is the mode of \(X\), we have
By putting \(y=1-e^{-V_{n,k}}\), we get,
Now, for \({\alpha}>1\),
Taking expectation on both sides, we have
Then by using (9), we get
Therefore,
For \(0<{\alpha}<1\), we have
Therefore similarly, we can prove that for \(0<{\alpha}<1\)
This completes the proof.
Example 3.2. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common Gompertz distribution with pdf given by
Since the mode \(m\) of the distribution is \(\dfrac{1}{{\lambda}}\textrm{log}\;\dfrac{1}{{\beta}}\), we have
Example 3.3. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a standard half Cauchy distribution with pdf given by
Since the mode \(m\) of the distribution is \(0\), we have
Theorem 3.4. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid continuous random variables from a distribution with common distribution function \(F(x)\), density function \(f(x)\), and quantile function \(F^{-1}(\cdot)\). Let \(L_{n(k)}\) denote the \(n\)th lower \(k\)-record, and suppose \(f(x)\) is non-increasing in \(x\). Then,
1. \(G_{{\alpha}}(L_{n(k)})\) is non-decreasing in \(n\) for \({\alpha}>1\);
2. \(G_{{\alpha}}(L_{n(k)})\) is non-increasing in \(n\) for \(0<{\alpha}<1\).
Proof. From Theorem 2.2, we have
where \(V_{n,k}\sim\Gamma({\alpha}(n-1)+1,{\alpha}(k-1)+1)\). Then,
where \(D_{n}={\alpha}n\textrm{log}\;k+\textrm{log}\;(\Gamma({\alpha}(n-1)+1))-{\alpha}\textrm{log}\;\Gamma n-({\alpha}(n-1)+1)\textrm{log}\;({\alpha}(k-1)+1)\). Therefore,
Without loss of generality, assume that \(n\) is continuous and then taking derivative with respect to \(n\), we obtain
where \(\psi(x)=\dfrac{d}{dx}(\textrm{log}\;\Gamma x)\) is the digamma function.
Since \(\psi(x)\) is an increasing function of \(x\) and \({\alpha}(n-1)+1>n\) and \({\alpha}(k-1)+1>k\) for \({\alpha}>1\), we conclude that \(D_{n}\) is an increasing function of \(n\). It is easy to show that \(V_{n,k}\leq_{lr}V_{n+1,k}\) and so \(V_{n,k}\leq_{st}V_{n+1,k}\). Moreover, \(f^{{\alpha}-1}(F^{-1}(e^{-x}))\) is non-decreasing in \(x\) for all \({\alpha}>1\), because \(f(x)\) is non-increasing in \(x\). We obtain,
thus
Therefore, \(G_{{\alpha}}^{*}(L_{n+1(k)})-G_{{\alpha}}^{*}(L_{n(k)})\geq 0\), which implies that \(G_{{\alpha}}(L_{n(k)})\) is non-decreasing in \(n\) for \({\alpha}>1\).
Now, for \(0<{\alpha}<1\), we have \({\alpha}(n-1)+1<n\) and \({\alpha}(k-1)+1<k\), then \(D_{n}\) is a decreasing function of \(n\). Moreover, \(f^{{\alpha}-1}(F^{-1}(e^{-x}))\) is non-increasing in \(x\) for all \(0<{\alpha}<1\), because \(f(x)\) is non-increasing in \(x\). Thus,
thus
Therefore, \(G_{{\alpha}}^{*}(L_{n+1(k)})-G_{{\alpha}}^{*}(L_{n(k)})\leq 0\), which implies that \(G_{{\alpha}}(L_{n(k)})\) is non-increasing in \(n\) for \(0<{\alpha}<1\). Hence the theorem.
Now, we present two bounds for the IG measure of the \(n\)th lower \(k\)-record value.
Theorem 3.5. Let \(X\) be a random variable with IG measure \(G_{{\alpha}}(X)<\infty\). Then the IG measure of the \(n\)th lower \(k\)-record \(L_{n(k)}\) is bounded as
where
(i) \(U_{n,k}\sim\Gamma(n,k)\) and
(ii) \(B_{n(k)}=\dfrac{({\alpha}(k-1)+1)({\alpha}(n-1))^{{\alpha}(n-1)}}{\Gamma({\alpha}(n-1)+1)}e^{-{\alpha}(n-1)}\), provided the integral exists.
(iii) \(s(x)=\dfrac{f(x)}{F(x)}\).
Proof. The proof is omitted since it is similar to that of Theorem 3.2.
Theorem 3.6. Under the assumptions of Theorem 3.5, we have
for \({\alpha}>1(0<{\alpha}<1)\) , where \(M=f(m)<\infty\) and \(m=\sup\{x:f(x)\leq M\}\) is the mode of the density f.
Proof. Since \(M=f(m)\), where \(m\) is the mode of \(X\), we have
By putting \(y=e^{-V_{n,k}}\), we get
Now, for \({\alpha}>1\),
Taking expectation on both sides, we have
Then by using (13), we get
Therefore,
For \(0<{\alpha}<1\), we have
Therefore similarly, we can prove that for \(0<{\alpha}<1\),
This completes the proof.
Example 3.4. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common Frechet distribution with pdf given by
Here \(f(x)\) is non-increasing.
Since the mode \(m\) of the distribution is \({\beta}\left(\dfrac{{\lambda}}{1+{\lambda}}\right)^{\frac{1}{{\lambda}}}\), we have for \({\alpha}>1(0<{\alpha}<1)\),
4 CHARACTERIZATION OF EXPONENTIAL DISTRIBUTION BY IG MEASURE OF \(k\)-RECORDS
In this section, we show that exponential distribution maximizes (minimizes) IG measure of \(k\)-record values under some information constraints. Consider a class of distributions \(F\) associated with a non-negative random variable \(X\) with \(F(0)=0\) and failure rate function \(r\) that satisfies the conditions:
(i) \(r(x)=a(\theta)b(x)\),
(ii) \(b(x)\geq M\), \(M>0\),
where \(a(\theta)\) and \(b(x)=B^{\prime}(x)\) are non-negative functions of \(\theta\) and \(x\), respectively. We denote this class of distributions by \(C\). We then provide a characterization result for the class \(C\) in terms of IG measure of the \(n\)th upper \(k\)-record value \(U_{n(k)}\).
Theorem 4.1. The \(n\)th upper \(k\)-record value of the distribution \(F\) has maximum (minimum) IG measure in \(C\), for \(0<{\alpha}<1\) \(({\alpha}>1)\), if and only if
Proof. Let \(F(x:\theta)\) belong to the class \(C\) and let \(U_{n(k)}\) denote the corresponding \(n\)th upper \(k\)-record value. Then, we have
Since \(b(x)\geq M\), for any \(0<{\alpha}<1\) \(({\alpha}>1)\) we have \(b^{{\alpha}-1}\leq(\geq)M^{{\alpha}-1}\). Therefore,
which is the IG measure of the \(n\)th upper \(k\)-record of \(F(x:\theta)=1-e^{-Ma(\theta)x}\). From this, it is clear that for any \(0<{\alpha}<1({\alpha}>1)\), the \(n\)th upper \(k\)-record of exponential distribution has maximum (minimum) IG measure in class \(C\).
To prove the converse, suppose the \(n\)th upper \(k\)-record of \(F(x:\theta)\) has maximum (minimum) IG measure in class \(C\). Then from (36), we have
Since \(G_{{\alpha}}(U_{n(k)})\) is maximum (minimum) for \(0<{\alpha}<1({\alpha}>1)\), we have
and so
Hence,
The function inside the integral is a non-negative function of \(x\geq 0\), because \(b(x)\geq M\). So,
As \(X\) is a non-negative random variable, we have \(B^{-1}(0)=0\) and so \(h(\theta)=0\). Now, making the transformation \(y=\dfrac{x}{a(\theta)}\) in (39), we conclude that \(B(x)=Mx\); that is, \(X\) has an exponential distribution, as required.
5 RELATIVE INFORMATION GENERATING DIVERGENCE OF \(k\)-RECORDS
In this section, we study the \(RIG\) divergence between a given parent density and corresponding density of the \(n\)th upper and lower \(k\)-record values.
Theorem 5.1. The RIG divergence between the densities of the \(n\)th upper \(k\)-record and the parent distribution is given by the following representation,
Moreover, \(R_{{\alpha}}(f_{U_{n(k)}},f)\) is an increasing (decreasing) function of \(n\) for \({\alpha}>1(0<{\alpha}<1)\).
Proof. The RIG divergence between the densities of the \(n\)th upper \(k\)-record and the parent distribution is given by
On putting \(v={-\textrm{log}\;(1-F(x))}\), we get
where \(G_{{\alpha}}(U_{n,k})\) is defined in (11).
In order to examine the monotonicity behaviour of \(R_{{\alpha}}(f_{U_{n(k)}},f)\), we have
Now differentiating with respect to \(n\), we obtain,
Since \(\psi\) is an increasing function and, for \({\alpha}>1\) \((0<{\alpha}<1)\), we have \({\alpha}(n-1)+1>(<)\;n\) and \({\alpha}(k-1)+1>(<)\;k\), the inequality is satisfied.
Theorem 5.2. The RIG divergence between the densities of the \(n\)th lower \(k\)-record and the parent distribution is given by the following representation,
Moreover, \(R_{{\alpha}}(f_{L_{n(k)}},f)\) is an increasing (decreasing) function of n for \({\alpha}>1(0<{\alpha}<1)\).
Proof. The proof is omitted since it is similar to that of Theorem 5.1.
6 ESTIMATION OF IG MEASURE FOR WEIBULL DISTRIBUTION BASED ON \(k\)-RECORD VALUES
In this section, we consider the estimation of the IG measure for the Weibull distribution based on upper \(k\)-record values. We obtain the maximum likelihood estimators (MLEs) and the Bayes estimates of the IG measure using the MCMC method.
A two-parameter Weibull distribution has cdf given by
The pdf corresponding to the above cdf is given by
The Weibull distribution is widely used in many fields, including reliability engineering, survival analysis, hydrology, meteorology, and insurance. Furthermore, parametric inference for the Weibull distribution based on record data is of special interest because the Weibull distribution arises naturally in extreme value theory and has a significant physical interpretation in numerous practical contexts. [15] considered the estimation of the entropy of the Weibull distribution under generalized progressive hybrid censoring.
The IG measure for the Weibull distribution with cdf given in (46) is given by
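The closed form is easy to validate by quadrature. The helper below encodes our own derivation of \(G_{{\alpha}}\) under the parameterization in (46), namely \(G_{{\alpha}}={\beta}^{{\alpha}-1}{\lambda}^{({\alpha}-1)/{\beta}}\,\Gamma(s)/{\alpha}^{s}\) with \(s=({\alpha}({\beta}-1)+1)/{\beta}\), valid when \({\alpha}({\beta}-1)+1>0\); it is checked against direct numerical integration of \(f^{{\alpha}}\):

```python
import numpy as np
from scipy import integrate, special

def weibull_ig(lam, beta, alpha):
    """Closed form of G_alpha for the Weibull cdf 1 - exp(-lam * x**beta)
    (our derivation; requires alpha*(beta - 1) + 1 > 0)."""
    s = (alpha*(beta - 1) + 1) / beta
    return beta**(alpha - 1) * lam**((alpha - 1)/beta) * special.gamma(s) / alpha**s

lam, beta = 0.8, 2.5
pdf = lambda x: lam * beta * x**(beta - 1) * np.exp(-lam * x**beta)

for alpha in (0.75, 1.5):
    num, _ = integrate.quad(lambda x: pdf(x)**alpha, 0, np.inf)
    assert abs(num - weibull_ig(lam, beta, alpha)) < 1e-7
```

For \({\beta}=1\) the formula reduces to \({\lambda}^{{\alpha}-1}/{\alpha}\), the IG measure of the exponential distribution, which serves as a quick consistency check.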
6.1 Maximum Likelihood Estimation
In this subsection, we obtain the MLEs of the IG measure for the two-parameter Weibull distribution based on upper \(k\)-record values. Let \(R_{i}\), \(i=1,2,...,n\), be the first \(n\) upper \(k\)-record values arising from a Weibull distribution with cdf given in (46). Let \(D_{n}=(R_{1},R_{2},...,R_{n})\). Then from (47) the likelihood function is given by
where \(d_{n}=(r_{1},r_{2},...,r_{n})\). The natural logarithm of the likelihood function is given by
Differentiating \(\textrm{log}\;L({\lambda},{\beta}|d_{n})\) with respect to \({\beta}\) and \({\lambda}\) and equating the derivatives to zero, we obtain
and
From (50), we get
By putting the value of \(\hat{{\lambda}}\) in (49), we get
Therefore the MLE of \({\beta}\) is given by
Thus the MLE of \({\lambda}\) is obtained as
Then, by the invariance property of the MLE, the MLE of the IG measure for the Weibull distribution based on upper \(k\)-record values is given by
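Substituting \(\hat{{\lambda}}\) into the score equation for \({\beta}\) leaves a closed-form solution, so both MLEs are cheap to compute. A minimal sketch (our illustration, with hypothetical parameter values; the records are simulated using the fact that \(-k\,\textrm{log}(1-F(U_{n(k)}))\) is a partial sum of iid standard exponentials):

```python
import numpy as np

rng = np.random.default_rng(7)
lam, beta, n, k = 1.5, 2.0, 10, 3

# simulate the first n upper k-records: -k*log(1 - F(U_{i(k)})), i = 1..n,
# forms a random walk with iid Exp(1) increments
s = np.cumsum(rng.exponential(size=n))
r = (s / (k * lam))**(1.0 / beta)      # invert the Weibull cdf 1 - exp(-lam*x^beta)

# MLEs from the k-record likelihood (lambda profiled out of the score for beta)
beta_hat = n / (n * np.log(r[-1]) - np.log(r).sum())
lam_hat = n / (k * r[-1]**beta_hat)

# both score equations vanish at the estimates
score_lam = n / lam_hat - k * r[-1]**beta_hat
score_beta = (n / beta_hat + np.log(r).sum()
              - lam_hat * k * r[-1]**beta_hat * np.log(r[-1]))
assert abs(score_lam) < 1e-8 and abs(score_beta) < 1e-8
```

The plug-in (invariance) estimate of the IG measure is then obtained by evaluating (48) at \((\hat{{\lambda}},\hat{{\beta}})\).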
6.2 Bayesian Estimation
In this subsection, we consider the Bayesian estimation of the IG measure for the two-parameter Weibull distribution based on upper \(k\)-record values. Recently, Hassan and Zaky [22] studied the Bayesian estimation of entropy function for Lomax distribution based on record values and Al-Labadi and Berry [3] studied the Bayesian estimation of extropy and goodness of fit tests. Chacko and Asha [10] obtained estimators for the entropy functions of a Weibull distribution based on record values, and Chacko and Asha [9] obtained estimators for the entropy function of a generalized exponential distribution based on record values. Bayesian estimation of a two-parameter Weibull distribution using extension of Jeffreys’ prior information with three loss functions has been studied by [21].
Here, we consider the Bayesian estimation of the IG measure for the two-parameter Weibull distribution under symmetric as well as asymmetric loss functions. For a symmetric loss function, we consider the squared error loss (SEL) function, and for asymmetric loss functions, we consider both the LINEX and entropy loss functions. The Bayes estimate of any parameter \(\mu\) under SEL is the posterior mean of \(\mu\). The Bayes estimate of \(\mu\) under the LINEX loss function can be obtained as
provided \(E_{\mu}(.)\) exists. The Bayes estimate of \(\mu\) for the general entropy loss (EL) function is obtained as
Let \(R_{i},i=1,2,...,n\) be the first \(n\) upper \(k\)-record values arising from Weibull distribution with pdf given in (47). Then the likelihood function is given by
where \(d_{n}=(r_{1},r_{2},...,r_{n})\). Assume that the prior distributions of \({\beta}\) and \({\lambda}\) follow independent gamma distributions with density functions respectively given by
and
Thus the joint prior distribution of \({\beta}\) and \({\lambda}\) is given by
Then the joint posterior density of \({\beta}\) and \({\lambda}\) given \(D_{n}=d_{n}\) can be written as
Therefore the Bayes estimate of any function \(g({\beta},{\lambda})\) of \({\beta}\) and \({\lambda}\) under SEL, LL, and EL are respectively given by
and
It is not possible to compute (54)–(56) explicitly. Thus, we propose the MCMC method to find the Bayes estimates of the IG measure given in (48).
6.3 MCMC Method
In this subsection, we consider the MCMC method to generate samples from the posterior distributions and then find the Bayes estimates for IG measure. The joint posterior distribution given in (53) can be written as
From (57) the conditional posterior distribution of \({\beta}\) given \({\lambda}\) and \(d_{n}\) is given by
Again from (57), the conditional posterior distribution of \({\lambda}\) given \({\beta}\) and \(d_{n}\) is given by
Thus, from (59) we can see that, for a given \({\beta}\), the conditional posterior distribution of \({\lambda}\) is a gamma distribution with parameters \((n+c)\) and \((d+r_{n}^{\beta}k)\); that is, \({\lambda}\sim\) Gamma\((n+c,d+r_{n}^{\beta}k)\). Therefore, one can easily generate samples from the posterior distribution of \({\lambda}\). However, it is not possible to generate random variables from the posterior distribution of \({\beta}\) given in (58) using standard random number generation methods. Hence, we use the Metropolis–Hastings (M–H) algorithm to generate samples from (58) (see [14]). Since the plot of (58) resembles a normal density, we take a normal proposal density for \({\beta}\) in the M–H algorithm.
By setting initial values \({\beta}^{(0)}\) and \({\lambda}^{(0)}\), let \({\beta}^{(t)}\) and \({\lambda}^{(t)}\), \(t=1,2,...,N\), be the observations generated from (58) and (59), respectively. Then the Bayes estimators of the IG measure given in (48) under SEL, LL, and EL, taking the first \(m\) iterations as the burn-in period, are respectively given by
and
where \(G_{\alpha}({\lambda}^{(t)},{\beta}^{(t)})\) is given in (48).
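The Gibbs/M–H scheme described above can be sketched as follows. The data generation, hyperparameter values, proposal standard deviation, and chain length are illustrative choices of ours, not prescriptions from the paper; the conditional \(\pi_{1}^{*}({\beta}|{\lambda},d_{n})\) is coded up to an additive constant on the log scale:

```python
import numpy as np

rng = np.random.default_rng(11)

# illustrative data: first n upper k-records from a Weibull(beta0, lam0) parent
lam0, beta0, n, k = 1.5, 2.0, 10, 3
s = np.cumsum(rng.exponential(size=n))
r = (s / (k * lam0))**(1.0 / beta0)

a, b, c, d = 2.0, 2.0, 2.0, 2.0            # gamma prior hyperparameters
N, burn = 20_000, 2_000

def log_cond_beta(beta, lam):
    # log pi_1*(beta | lam, d_n) up to an additive constant:
    # (n + a - 1) log(beta) - b*beta + beta * sum(log r_i) - lam * k * r_n^beta
    return ((n + a - 1) * np.log(beta) - b * beta
            + beta * np.log(r).sum() - lam * k * r[-1]**beta)

beta_t, lam_t = 1.0, 1.0
draws = np.empty((N, 2))
for t in range(N):
    # Gibbs step: lam | beta ~ Gamma(n + c, rate = d + k * r_n^beta)
    lam_t = rng.gamma(n + c, 1.0 / (d + k * r[-1]**beta_t))
    # M-H step for beta with a normal random-walk proposal (reject beta <= 0)
    prop = beta_t + 0.3 * rng.standard_normal()
    if prop > 0 and np.log(rng.random()) < log_cond_beta(prop, lam_t) - log_cond_beta(beta_t, lam_t):
        beta_t = prop
    draws[t] = lam_t, beta_t

lam_post, beta_post = draws[burn:].mean(axis=0)   # SEL estimates of the parameters
```

The Bayes estimates of the IG measure under SEL, LL, and EL then follow by evaluating \(G_{\alpha}({\lambda}^{(t)},{\beta}^{(t)})\) at each retained draw and averaging as in (60)–(62).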
6.4 Simulation Study
In this subsection, we carry out a simulation study to illustrate the estimation procedures developed in the previous subsections. First, we obtain the MLEs of the IG measure using (52). We have obtained the MLEs and their corresponding MSEs for different values of \(n\) using 1000 simulated samples for different combinations of \({\beta}\) and \({\lambda}\); the results are given in Tables 1 and 2. For the simulation studies for the Bayes estimators, we take the hyperparameters of the prior distributions of \({\beta}\) and \({\lambda}\) as \(a=2\), \(b=2\), \(c=2\), and \(d=2\). We have obtained the Bayes estimators of the IG measure of the Weibull distribution based on upper \(k\)-record values under the SEL, LL, and EL functions using the MCMC method.
For that we use the following algorithm.
1. Generate upper \(k\)-record values from a two-parameter Weibull distribution with parameters \({\beta}\) and \({\lambda}\).
2. Calculate estimators of the IG measure from the generated upper \(k\)-record values using the MCMC method as described below.
(a) Start with initial values \({\beta}^{(0)}\) and \({\lambda}^{(0)}\).
(b) Set \(t=1\).
(c) Generate \({\lambda}^{(t)}\) from Gamma\((n+c,d+r_{n}^{{\beta}^{(t-1)}}k)\).
(d) Using the M–H algorithm, generate \({\beta}^{(t)}\) from \(\pi_{1}^{*}({\beta}|{\lambda}^{(t)},d_{n})\).
(e) Calculate \(\hat{G}_{\alpha}({\lambda}^{(t)},{\beta}^{(t)})\) using (48).
(f) Set \(t=t+1\).
(g) Repeat steps (c) to (f) \(N=50\,000\) times.
(h) Calculate the Bayes estimators of the IG measure \(G_{\alpha}({\lambda},{\beta})\) using (60) to (62), taking the burn-in period \(m=5000\).
3. Repeat steps 1 and 2 1000 times.
4. Calculate the Bayes estimates and the corresponding MSEs of the estimators.
The simulation study was repeated for \(n=6,8,10\) and for different values of \({\beta}\) and \({\lambda}\). The ML estimates, the Bayes estimates, and the corresponding MSEs of the IG measure under the SEL, LL, and EL functions are given in Table 1 for \({\alpha}=0.75\) and in Table 2 for \({\alpha}=1.5\). From Tables 1 and 2 we draw the following inferences.
1. The MSEs of all estimators decrease when \(n\) increases.
2. The MSEs corresponding to the Bayes estimates are smaller than those of the MLEs.
3. Among the Bayes estimators, the estimators under the EL function have the least MSE.
7 CONCLUSIONS
In this paper, we considered the IG and RIG measures for the \(n\)th upper and lower \(k\)-record values. The monotone behaviour of the IG measure of records has been established, and some bounds for the \(n\)th upper \(k\)-record value were obtained. Further, we have established some characterization results for the exponential distribution by maximization (minimization) of the IG measure of its corresponding record values under some conditions. Then, we provided a discussion of the RIG divergence between the densities of \(k\)-record values and the distribution of the underlying sequence of random variables. Finally, as an application of the IG measure, we obtained the MLEs and the Bayes estimates of the IG measure of the Weibull distribution based on upper \(k\)-record values. Among the different estimators, the Bayes estimator under the EL function performs better than the MLE and the Bayes estimators under SEL and LL in terms of MSE.
REFERENCES
M. Abbasnejad and N. R. Arghami, ‘‘Renyi entropy properties of records,’’ Journal of Statistical Planning and Inference 141 (7), 2312–2320 (2011).
M. Ahsanullah, Record Values: Theory and Applications (University Press of America, Maryland, 2004).
L. Al-Labadi and S. Berry, ‘‘Bayesian estimation of extropy and goodness of fit tests,’’ Journal of Applied Statistics 49 (2), 357–370 (2022).
B. C. Arnold, N. Balakrishnan, and H. N. Nagaraja, A First Course in Order Statistics, vol. 54 (SIAM, Philadelphia, 1992).
N. Balakrishnan, F. Buono, and M. Longobardi, ‘‘On cumulative entropies in terms of moments of order statistics,’’ Methodology and Computing in Applied Probability 24 (1), 345–359 (2022).
S. Bansal and N. Gupta, ‘‘Weighted extropies and past extropy of order statistics and \(k\)-record values,’’ Communications in Statistics-Theory and Methods 51 (17), 6091–6108 (2022).
S. Baratpour, J. Ahmadi, and N. R. Arghami, ‘‘Characterizations based on Renyi entropy of order statistics and record values,’’ Journal of Statistical Planning and Inference 138 (8), 2544–2551 (2008).
P. J. Bickel and E. L. Lehmann, Descriptive Statistics for Nonparametric Models, III: Dispersion, in: Selected Works of E. L. Lehmann (Springer, 2012), p. 499–518.
M. Chacko and P. Asha, ‘‘Estimation of entropy for generalized exponential distribution based on record values,’’ Journal of the Indian Society for Probability and Statistics 19, 79–96 (2018).
M. Chacko and P. Asha, ‘‘Estimation of entropy for Weibull distribution based on record values,’’ Journal of Statistical Theory and Applications 20 (2), 279–288 (2021).
M. Chacko and L. Muraleedharan, ‘‘Inference based on \(k\)-record values from generalized exponential distribution,’’ Statistica 78 (1), 37–56 (2018).
M. Chacko and M. Shy Mary, ‘‘Concomitants of \(k\)-record values arising from Morgenstern family of distributions and their applications in parameter estimation,’’ Statistical Papers 54 (1), 21–46 (2013).
K. Chandler, ‘‘The distribution and frequency of record values,’’ Journal of the Royal Statistical Society: Series B (Methodological) 14 (2), 220–228 (1952).
S. Chib and E. Greenberg, ‘‘Understanding the Metropolis-Hastings algorithm,’’ The American Statistician 49 (4), 327–335 (1995).
Y. Cho, H. Sun, and K. Lee, ‘‘Estimating the entropy of a Weibull distribution under generalized progressive hybrid censoring,’’ Entropy 17 (1), 102–122 (2015).
D. E. Clark, ‘‘Local entropy statistics for point processes,’’ IEEE Transactions on Information Theory 66 (2), 1155–1163 (2019).
W. Dziubdziela and B. Kopociński, ‘‘Limiting properties of the \(k\)-th record values,’’ Applicationes Mathematicae 2 (15), 187–190 (1976).
R. A. Fisher, ‘‘Tests of significance in harmonic analysis,’’ Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character 125 (796), 54–59 (1929).
S. Golomb, ‘‘The information generating function of a probability distribution (corresp.),’’ IEEE Transactions on Information Theory 12 (1), 75–77 (1966).
S. Guiasu and C. Reischer, ‘‘The relative information generating function,’’ Information Sciences 35 (3), 235–241 (1985).
C. B. Guure, N. A. Ibrahim, and A. O. M. Ahmed, ‘‘Bayesian estimation of two-parameter Weibull distribution using extension of Jeffreys’ prior information with three loss functions,’’ Mathematical Problems in Engineering (2012).
A. S. Hassan and A. N. Zaky, ‘‘Entropy Bayesian estimation for Lomax distribution based on record,’’ Thailand Statistician 19 (1), 95–114 (2021).
J. M. Joyce, Kullback-Leibler Divergence, in: International Encyclopedia of Statistical Science (Springer, New York, 2011), p. 720–722.
S. Kayal, ‘‘Characterization based on generalized entropy of order statistics,’’ Communications in Statistics-Theory and Methods 45 (15), 4628–4636 (2016).
O. Kharazmi and N. Balakrishnan, ‘‘Information generating function for order statistics and mixed reliability systems,’’ Communications in Statistics-Theory and Methods, 1–10 (2021).
O. Kharazmi and N. Balakrishnan, ‘‘Jensen-information generating function and its connections to some well-known information measures,’’ Statistics & Probability Letters 170, 108995 (2021).
V. Kumar, ‘‘Some results on Tsallis entropy measure and \(k\)-record values,’’ Physica A: Statistical Mechanics and its Applications 462, 667–673 (2016).
F. Lad, G. Sanfilippo, and G. Agro, ‘‘Extropy: Complementary dual of entropy,’’ Statistical Science 30 (1), 40–58 (2015).
V. B. Nevzorov, Records: Mathematical Theory, Translations of Mathematical Monographs, vol. 194 (American Mathematical Society, Providence, 2001).
M. Shaked and J. G. Shanthikumar, Stochastic Orders (Springer, New York, 2007).
C. E. Shannon, ‘‘A mathematical theory of communication,’’ The Bell System Technical Journal 27 (3), 379–423 (1948).
Z. Zamani, O. Kharazmi, and N. Balakrishnan, ‘‘Information Generating Function of Record Values,’’ Mathematical Methods of Statistics 31 (3), 120–133 (2022).
ACKNOWLEDGEMENTS
The authors would like to thank the editor and the reviewers for their constructive comments, which helped to improve the quality of the paper.
Ethics declarations
The authors declare that they have no conflicts of interest.
Cite this article
Chacko, M., Grace, A. Information Generating Function of \(\boldsymbol{k}\)-Record Values and Its Applications. Math. Meth. Stat. 32, 176–196 (2023). https://doi.org/10.3103/S106653072303002X