1 INTRODUCTION

Record values were formulated by [13] as successive extremes occurring in a sequence of independent and identically distributed (iid) random variables. Records are of great importance in several real-life problems involving destructive stress testing, sporting and athletic events, meteorological analysis, oil and mining surveys, hydrology, seismology, etc. Prediction of the next record value is an interesting problem in many real-life situations. For a detailed survey on the theory and applications of record values, see [2, 4, 29] and the references therein.

Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common cumulative distribution function (cdf) \(F(x)\) which is absolutely continuous. An observation \(X_{j}\) is called an upper record if its value exceeds that of all preceding observations. Thus, \(X_{j}\) is an upper record if \(X_{j}>X_{i}\) for every \(i<j\). In an analogous way, one can also define lower record values.

The characteristic features of the parent distribution can also be studied through the record statistics arising from it. However, the expected waiting time for the occurrence of each record after the first may be infinite. Additionally, the presence of an outlier in a sequence of random variables can prevent later record values from being observed. One may overcome these difficulties by considering the \(k\)-record statistics introduced by [17].

For a positive integer \(k\), the sequence of upper \(k\)-records, or simply \(k\)-records, is defined as follows. The upper \(k\)-record times \(T_{n(k)}\) are defined by

$$T_{1(k)}=k,\quad\text{with probability } 1 $$

and, for \(n>1\),

$$T_{n(k)}=\min\{j:j>T_{n-1(k)},X_{j}>X_{T_{n-1(k)}-k+1:T_{n-1(k)}}\},$$
(1)

where \(X_{i:m}\) is the \(i\)th order statistic in a random sample of size \(m\). The sequence of upper \(k\)-record values \(U_{n(k)}\) is then defined by

$$U_{n(k)}=X_{T_{n(k)}-k+1:T_{n(k)}}\quad\textrm{for}\quad n\geq 1.$$
(2)

In an analogous way, one can also define lower \(k\)-record values.
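As a simple illustration of definitions (1) and (2), the following Python sketch (the function name and the use of NumPy are ours, for illustration only) extracts the upper \(k\)-record values from a finite iid sample: a new \(k\)-record occurs each time an observation exceeds the current \(k\)th largest value.

```python
# Illustrative sketch: extracting upper k-record values from an iid sample
# according to (1)-(2). A min-heap holds the k largest observations seen so
# far, so heap[0] is always the current kth largest value X_{j-k+1:j}.
import heapq
import numpy as np

def upper_k_records(x, k):
    """Return the upper k-record values of the sequence x (len(x) >= k)."""
    heap = list(x[:k])
    heapq.heapify(heap)
    records = [heap[0]]              # U_{1(k)} = X_{1:k} at time T_{1(k)} = k
    for xj in x[k:]:
        if xj > heap[0]:             # record time: X_j exceeds the kth largest
            heapq.heapreplace(heap, xj)
            records.append(heap[0])  # U_{n(k)}: the new kth largest value
    return records

rng = np.random.default_rng(2024)
print(upper_k_records(rng.exponential(size=25), k=2))
```

Between record times no observation exceeds the current \(k\)th largest value, so that value stays constant until the next record time; this is why tracking only the \(k\) largest observations suffices.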

If the parent distribution is absolutely continuous with survival function \(\overline{F}_{X}(x)\) and probability density function (pdf) \(f_{X}(x)\), then the pdf of the \(n\)th upper \(k\)-record value \(U_{n(k)}\) is given by (see [4])

$$f_{n(k)}(x)=\dfrac{k^{n}}{\Gamma n}[{-\textrm{log}\;(1-F(x))}]^{n-1}[1-F(x)]^{k-1}f(x),\quad-\infty<x<\infty.$$
(3)

The pdf of the \(n\)th lower \(k\)-record value \(L_{n(k)}\) is given by (see [2])

$$g_{n(k)}(x)=\dfrac{k^{n}}{\Gamma n}[{-\textrm{log}\;F(x)}]^{n-1}[F(x)]^{k-1}f(x),\quad-\infty<x<\infty.$$
(4)

Since the ordinary record values are contained in the \(k\)-records, the results for the usual records can be obtained as special cases by putting \(k=1\). Several applications of \(k\)-records are available in the literature; for some recent applications of \(k\)-record values, see [6, 11, 12].

Information theory, introduced by [18, 31], is one of the most important branches of science. The Shannon entropy of a continuous random variable \(X\), having pdf \(f(x)\) on support \(\chi\), is defined as

$$H(X)=-\int\limits_{\chi}f(x)\textrm{log}\;f(x)dx.$$
(5)

The Shannon entropy measure and its extensions have been considered by several researchers. The study of entropy measures for order statistics and record values has received considerable attention recently. For more details, one may refer to [1, 5, 7, 24, 27].

In information theory, generating functions have also been defined for probability densities to determine information quantities such as Shannon information and Kullback–Leibler divergence. Golomb [19] proposed the information generating (IG) function (measure) to generate some well-known information measures. Suppose the random variable \(X\) has density function \(f(x)\). Then the IG function of the density \(f(x)\), for any \({\alpha}>0\), is defined as

$$G_{{\alpha}}(X)=\int\limits_{\chi}f^{{\alpha}}(x)dx=E[e^{({\alpha}-1)\textrm{log}f(X)}],$$
(6)

provided the integral exists. To simplify notation, we suppress \(\chi\) for integration with respect to \(x\) throughout the paper, unless a distinction is needed.

Clearly \(G_{1}(X)=1\) and \(\dfrac{\partial}{\partial{\alpha}}G_{{\alpha}}(X)|_{{\alpha}=1}=-H(X)\), where \(H(X)\) is the Shannon entropy given in (5). In particular, when \({\alpha}=2\), the IG measure reduces to \(\int_{\chi}f^{2}(x)dx=-2J(X)\), where \(J(X)\) is the extropy introduced by [28]; this quantity is also known as the informational energy (IE) measure. In physics and chemistry, the IG measure is known as the entropic moment, and it is closely related to the Renyi and Tsallis entropies. The IG measure plays a significant role in information theory and physics since it generates the most popular information measures, including the Shannon entropy, Renyi entropy, Tsallis entropy, and extropy.
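For instance, if \(X\) follows an exponential distribution with pdf \(f(x)={\lambda}e^{-{\lambda}x}\), \(x>0\), then

$$G_{{\alpha}}(X)=\int\limits_{0}^{\infty}{\lambda}^{{\alpha}}e^{-{\alpha}{\lambda}x}dx=\dfrac{{\lambda}^{{\alpha}-1}}{{\alpha}},$$

so that \(G_{1}(X)=1\) and \(\dfrac{\partial}{\partial{\alpha}}G_{{\alpha}}(X)|_{{\alpha}=1}=\textrm{log}\;{\lambda}-1=-H(X)\), since \(H(X)=1-\textrm{log}\;{\lambda}\) for this distribution.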

Recently, [32] studied the IG function of record values and examined some of its properties. Clark [16] used the IG function for stochastic processes to assist in the derivation of information measures for point processes. Kharazmi and Balakrishnan [25] studied the IG measure for order statistics and its applications in the study of mixed systems. Also, Kharazmi and Balakrishnan [26] introduced the Jensen IG measure and its connections to some well-known information measures such as the Jensen–Shannon, Jensen–Taneja, and Jensen–extropy information measures.

Guiasu and Reischer [20] proposed relative information generating (RIG) measure between two density functions. Let \(X\) and \(Y\) be two random variables with density functions \(f\) and \(g\), respectively. Then, the relative information generating measure, for any \({\alpha}>0\), is defined as

$$R_{{\alpha}}(f,g)=\int f^{{\alpha}}(x)g^{1-{\alpha}}(x)dx,$$
(7)

provided the integral exists. It is obvious that \(R_{1}(f,g)=1\) and

$$\dfrac{\partial}{\partial{\alpha}}R_{{\alpha}}(f,g)|_{{\alpha}=1}=\int f(x)\left(\textrm{log}\dfrac{f(x)}{g(x)}\right)dx=K(f,g),$$
(8)

the Kullback–Leibler divergence, originally defined by [23].
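For example, if \(f(x)={\lambda}e^{-{\lambda}x}\) and \(g(x)={\mu}e^{-{\mu}x}\), \(x>0\), are two exponential densities, then

$$R_{{\alpha}}(f,g)=\dfrac{{\lambda}^{{\alpha}}{\mu}^{1-{\alpha}}}{{\alpha}{\lambda}+(1-{\alpha}){\mu}},\quad{\alpha}{\lambda}+(1-{\alpha}){\mu}>0,$$

and differentiating at \({\alpha}=1\) recovers the Kullback–Leibler divergence \(K(f,g)=\textrm{log}\;\dfrac{{\lambda}}{{\mu}}+\dfrac{{\mu}}{{\lambda}}-1\).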

In this paper, we consider the IG measure of \(k\)-record values and examine some of its main properties. We also examine the relative information generating (RIG) measure between the distribution of record values and the corresponding underlying distribution. To the best of our knowledge, the estimation of the IG measure based on \(k\)-record values has not been considered in the literature so far. Hence, in this paper we also consider the maximum likelihood and Bayesian estimation of the IG measure based on \(k\)-record values for the Weibull distribution.

In the present work, our goal is to study the IG measure for \(k\)-record values and then establish some results associated with it. The rest of this paper is organized as follows. In Section 2, we first examine the IG measure for the \(n\)th upper and lower \(k\)-record values. Section 3 deals with some stochastic comparisons based on the IG measure of \(k\)-record values, and we examine lower and upper bounds for the IG measure of \(k\)-record values. In Section 4, some results associated with the characterization of the exponential distribution based on the IG measure of \(k\)-record values are given. Section 5 is devoted to the relative information generating divergence of \(k\)-record values. In Section 6, we consider the estimation of the IG measure for the Weibull distribution based on upper \(k\)-record values: we obtain the maximum likelihood estimators (MLEs) and the Bayes estimators of the IG measure. Finally, some concluding remarks are made in Section 7.

2 IG MEASURE OF \(k\)-RECORD VALUES

In this section, we first examine the IG measure of lower and upper \(k\)-record values and then establish some results for this measure.

Theorem 2.1. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid continuous random variables from a distribution with common distribution function \(F(x)\), pdf \(f(x)\), and quantile function \(F^{-1}(.)\). Let \(U_{n(k)}\) denote the \(n\)th upper \(k\)-record. Then the IG measure of \(U_{n(k)}\) is given by

$$G_{{\alpha}}(U_{n(k)})=G_{{\alpha}}(U_{n,k})\dfrac{{({\alpha}k)}^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}{E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]},$$
(9)

where \(U_{n,k}\sim\Gamma(n,k)\) and \(V_{n,k}\sim\Gamma({\alpha}(n-1)+1,{\alpha}(k-1)+1)\), with \(\Gamma(\lambda,{\beta})\) denoting a gamma distribution with pdf

$$g(x)=\dfrac{{\beta}^{\lambda}}{\Gamma(\lambda)}x^{\lambda-1}e^{-{\beta}x},x>0.$$

Proof. From the definition of the IG measure given in (6), we have the IG measure of the \(n\)th upper \(k\)-record as

$$G_{{\alpha}}(U_{n(k)})=\dfrac{k^{{\alpha}n}}{[\Gamma n]^{\alpha}}\int[{-\textrm{log}\;(1-F(x))}]^{{\alpha}(n-1)}[1-F(x)]^{{\alpha}(k-1)}f^{\alpha}(x)dx.$$

On putting \(v={-\textrm{log}\;(1-F(x))}\), we get

$$G_{{\alpha}}(U_{n(k)})=\dfrac{k^{{\alpha}n}}{[\Gamma n]^{\alpha}}\int\limits_{0}^{\infty}v^{{\alpha}(n-1)}e^{-v({\alpha}(k-1)+1)}{f^{{\alpha}-1}(F^{-1}(1-e^{-v}))}dv$$
$${}=\dfrac{k^{{\alpha}n}\Gamma({\alpha}(n-1)+1)}{[\Gamma n]^{\alpha}({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}{E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]}.$$
(10)

Since

$$G_{{\alpha}}(U_{n,k})=\dfrac{\Gamma({\alpha}(n-1)+1)k^{{\alpha}n}}{[\Gamma n]^{{\alpha}}({\alpha}k)^{{\alpha}(n-1)+1}},$$
(11)

we have

$$G_{{\alpha}}(U_{n(k)})=G_{{\alpha}}(U_{n,k})\dfrac{{({\alpha}k)}^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}{E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]}.$$
(12)

Hence the result.

Example 2.1. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common two-parameter Weibull distribution with pdf given by

$$f(x)={\beta}{\lambda}x^{{\beta}-1}e^{-{\lambda}x^{{\beta}}},x>0.$$

Here,

$$F^{-1}(x)=\left[\dfrac{-\textrm{log}\;(1-x)}{{\lambda}}\right]^{\frac{1}{{\beta}}}.$$

Therefore,

$$f^{{\alpha}-1}(F^{-1}(1-e^{-u}))={\lambda}^{\frac{{\alpha}-1}{{\beta}}}{\beta}^{{\alpha}-1}u^{(1-\frac{1}{{\beta}})({\alpha}-1)}e^{-u({\alpha}-1)},$$

and hence

$${E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]}={\lambda}^{\frac{{\alpha}-1}{{\beta}}}{{\beta}}^{{\alpha}-1}\dfrac{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}{\Gamma({\alpha}(n-1)+1)}\dfrac{\Gamma({\alpha}(n-\frac{1}{{\beta}})+\frac{1}{{\beta}})}{({\alpha}k)^{{\alpha}(n-\frac{1}{{\beta}})+\frac{1}{{\beta}}}}.$$

Thus, we have

$$G_{{\alpha}}(U_{n(k)})=\dfrac{k^{{\alpha}n}{\lambda}^{\frac{{\alpha}-1}{{\beta}}}{{\beta}}^{{\alpha}-1}}{(\Gamma n)^{\alpha}}\dfrac{\Gamma({\alpha}(n-\frac{1}{{\beta}})+\frac{1}{{\beta}})}{({\alpha}k)^{{\alpha}(n-\frac{1}{{\beta}})+\frac{1}{{\beta}}}}.$$

We have drawn the graphs of the IG measure of the \(n\)th upper \(k\)-records for the Weibull distribution for different values of \({\alpha}\); they are given in Fig. 1. It can be observed from Fig. 1 that \(G_{{\alpha}}(U_{n(k)})\) is increasing in \(n\) for \(0<{\alpha}<1\) and decreasing in \(n\) for \({\alpha}>1\), while it is decreasing in \(k\) for \(0<{\alpha}<1\) and increasing in \(k\) for \({\alpha}>1\). Also, when \(k=1\), the IG measure of classical records is obtained.

Fig. 1. IG measure of the \(n\)th upper \(k\)-records for the Weibull distribution for different values of \({\alpha}\) and \(k\).
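As a numerical sanity check on the closed form above, the following Python sketch (assuming NumPy and SciPy are available; the function names are ours) compares it with direct numerical integration of \(f_{n(k)}^{{\alpha}}\) obtained from (3).

```python
# Numerical check of Example 2.1: closed-form IG measure of the nth upper
# k-record for the Weibull distribution versus direct integration of (3).
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def ig_weibull_record(alpha, n, k, lam, beta):
    """Closed-form G_alpha(U_{n(k)}) from Example 2.1, computed on log scale."""
    a = alpha * (n - 1.0 / beta) + 1.0 / beta
    return np.exp(alpha * n * np.log(k) - alpha * gammaln(n)
                  + ((alpha - 1) / beta) * np.log(lam)
                  + (alpha - 1) * np.log(beta)
                  + gammaln(a) - a * np.log(alpha * k))

def ig_numeric(alpha, n, k, lam, beta):
    """G_alpha(U_{n(k)}) by integrating the record pdf (3) raised to alpha."""
    def pdf(x):   # Weibull pdf
        return beta * lam * x ** (beta - 1) * np.exp(-lam * x ** beta)
    def sf(x):    # Weibull survival function 1 - F(x)
        return np.exp(-lam * x ** beta)
    def record_pdf(x):  # Eq. (3)
        return (np.exp(n * np.log(k) - gammaln(n))
                * (-np.log(sf(x))) ** (n - 1) * sf(x) ** (k - 1) * pdf(x))
    val, _ = quad(lambda x: record_pdf(x) ** alpha, 0, np.inf)
    return val

print(ig_weibull_record(1.5, 3, 2, 1.0, 2.0))  # the two values should agree
print(ig_numeric(1.5, 3, 2, 1.0, 2.0))
```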

Example 2.2. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common Pareto distribution with pdf given by

$$f(x)=\dfrac{{\lambda}}{\sigma}\left(\dfrac{x}{{\sigma}}\right)^{-{\lambda}-1},x\geq{\sigma}>0,{\lambda}>0.$$

Here,

$$F^{-1}(x)={\sigma}\left(1-x\right)^{-\frac{1}{{\lambda}}}.$$

Therefore,

$$f^{{\alpha}-1}(F^{-1}(1-e^{-u}))=\left(\dfrac{{\lambda}}{{\sigma}}\right)^{{\alpha}-1}e^{-\frac{u}{{\lambda}}({\alpha}-1)({\lambda}+1)},$$

and hence

$${E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]}=\left(\dfrac{{\lambda}}{{\sigma}}\right)^{{\alpha}-1}\left(\dfrac{{\lambda}({\alpha}(k-1)+1)}{{\alpha}(1+{\lambda}k)-1}\right)^{{\alpha}(n-1)+1}.$$

Thus, we have

$$G_{{\alpha}}(U_{n(k)})=\dfrac{k^{{\alpha}n}}{\left(\Gamma n\right)^{{\alpha}}}\left(\dfrac{{\lambda}}{{\sigma}}\right)^{{\alpha}-1}\dfrac{{\lambda}^{{\alpha}(n-1)+1}\,\Gamma{({\alpha}(n-1)+1)}}{({\alpha}(1+{\lambda}k)-1)^{{\alpha}(n-1)+1}}.$$
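As a check, for \(n=k=1\) this reduces to \(G_{{\alpha}}(U_{1(1)})=\dfrac{{\lambda}^{{\alpha}}{\sigma}^{1-{\alpha}}}{{\alpha}({\lambda}+1)-1}=\int_{{\sigma}}^{\infty}f^{{\alpha}}(x)dx\), provided \({\alpha}({\lambda}+1)>1\).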

Example 2.3. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common Rayleigh distribution with pdf given by

$$f(x)=\dfrac{x}{{\sigma}^{2}}e^{-\frac{x^{2}}{2{\sigma}^{2}}},x>0,{\sigma}>0.$$

Here,

$$F^{-1}(x)={\sigma}\sqrt{-2\textrm{log}\;(1-x)}.$$

Therefore,

$$f^{{\alpha}-1}(F^{-1}(1-e^{-u}))=\left(\dfrac{1}{{\sigma}}\right)^{{\alpha}-1}(2u)^{\frac{{\alpha}-1}{2}}e^{-u({\alpha}-1)}$$

and hence

$${E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]}=\left(\dfrac{1}{{\sigma}}\right)^{{\alpha}-1}2^{\frac{{\alpha}-1}{2}}\dfrac{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}{\Gamma({\alpha}(n-1)+1)}\dfrac{\Gamma({\alpha}(n-\frac{1}{2})+\frac{1}{2})}{({\alpha}k)^{{\alpha}(n-\frac{1}{2})+\frac{1}{2}}}.$$

Thus, we have

$$G_{{\alpha}}(U_{n(k)})=\dfrac{k^{{\alpha}n}}{\left(\Gamma n\right)^{{\alpha}}}\left(\dfrac{1}{{\sigma}}\right)^{{\alpha}-1}2^{\frac{{\alpha}-1}{2}}\dfrac{\Gamma({\alpha}(n-\frac{1}{2})+\frac{1}{2})}{({\alpha}k)^{{\alpha}(n-\frac{1}{2})+\frac{1}{2}}}.$$

This agrees with Example 2.1, since the Rayleigh distribution is a Weibull distribution with \({\beta}=2\) and \({\lambda}=1/(2{\sigma}^{2})\).

Theorem 2.2. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid continuous random variables from a distribution with common distribution function \(F(x)\), pdf \(f(x)\), and quantile function \(F^{-1}(.)\). Let \(L_{n(k)}\) denote the \(n\)th lower \(k\)-record. Then the IG measure of \(L_{n(k)}\) is given by

$$G_{{\alpha}}(L_{n(k)})=G_{{\alpha}}(U_{n,k})\dfrac{{({\alpha}k)}^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}{E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))}]},$$
(13)

where \(U_{n,k}\sim\Gamma(n,k)\) and \(V_{n,k}\sim\Gamma({\alpha}(n-1)+1,{\alpha}(k-1)+1)\).

Proof. From the definition of the IG measure given in (6), we have the IG measure of the \(n\)th lower \(k\)-record as

$$G_{{\alpha}}(L_{n(k)})=\dfrac{k^{{\alpha}n}}{[\Gamma n]^{\alpha}}\int[{-\textrm{log}\;F(x)}]^{{\alpha}(n-1)}[F(x)]^{{\alpha}(k-1)}f^{\alpha}(x)dx.$$
(14)

On putting \(v={-\textrm{log}\;F(x)}\), we get

$$G_{{\alpha}}(L_{n(k)})=\dfrac{k^{{\alpha}n}}{[\Gamma n]^{\alpha}}\int\limits_{0}^{\infty}v^{{\alpha}(n-1)}e^{-v({\alpha}(k-1)+1)}{f^{{\alpha}-1}(F^{-1}(e^{-v}))}dv$$
$${}=\dfrac{k^{{\alpha}n}\Gamma({\alpha}(n-1)+1)}{[\Gamma n]^{\alpha}({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}{E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))}]}$$
$${}=G_{{\alpha}}(U_{n,k})\dfrac{{({\alpha}k)}^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}{E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))}]}.$$
(15)

where \(G_{{\alpha}}(U_{n,k})\) is defined in (11). Hence the result.

Example 2.4. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common generalized exponential distribution with pdf given by

$$f(x)={\beta}e^{-x}(1-e^{-x})^{{\beta}-1},x>0,{\beta}>0.$$

Here,

$$F^{-1}(x)=-\textrm{log}(1-x^{\frac{1}{{\beta}}}).$$

Therefore,

$$f^{{\alpha}-1}(F^{-1}(e^{-u}))={\beta}^{{\alpha}-1}(1-e^{-u/\beta})^{\alpha-1}e^{-u({\alpha}-1)(1-\frac{1}{{\beta}})}$$

and hence

$${E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))}]}=\sum_{p=0}^{\infty}\binom{\alpha-1}{p}(-1)^{p}{\beta}^{{\alpha}-1}\left(\dfrac{{\alpha}(k-1)+1}{{\alpha}k-\frac{{\alpha}}{{\beta}}+\frac{p+1}{{\beta}}}\right)^{{\alpha}(n-1)+1}.$$

Thus, we have

$$G_{{\alpha}}(L_{n(k)})=\dfrac{k^{{\alpha}n}}{\left(\Gamma n\right)^{\alpha}}{\beta}^{{\alpha}-1}\sum_{p=0}^{\infty}\binom{\alpha-1}{p}(-1)^{p}\dfrac{\Gamma({\alpha}(n-1)+1)}{\left({\alpha}k-\frac{{\alpha}}{{\beta}}+\frac{p+1}{{\beta}}\right)^{{\alpha}(n-1)+1}}.$$

We have drawn the graphs of the IG measure of the \(n\)th lower \(k\)-records for the generalized exponential distribution for different values of \({\alpha}\); they are given in Fig. 2. It can be observed from Fig. 2 that, for \(n>k\), \(G_{{\alpha}}(L_{n(k)})\) is decreasing in \(n\) for \(0<{\alpha}<1\) and increasing in \(n\) for \({\alpha}>1\).

Fig. 2. IG measure of the \(n\)th lower \(k\)-records for the generalized exponential distribution for different values of \({\alpha}\) and \(k\).

Example 2.5. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common inverse exponential distribution with pdf given by

$$f(x)=\dfrac{{\lambda}}{x^{2}}e^{-\frac{{\lambda}}{x}},x>0,{\lambda}>0.$$

Here,

$$F^{-1}(x)=-{\lambda}(\textrm{log}\;x)^{-1}.$$

Therefore,

$$f^{{\alpha}-1}(F^{-1}(e^{-u}))=\left(\dfrac{u^{2}}{{\lambda}}\right)^{{\alpha}-1}e^{-u({\alpha}-1)},$$

and hence

$${E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))}]}={\lambda}^{1-{\alpha}}\dfrac{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}{({\alpha}k)^{{\alpha}(n+1)-1}}\dfrac{\Gamma({\alpha}(n+1)-1)}{\Gamma({\alpha}(n-1)+1)}.$$

Thus, we have

$$G_{{\alpha}}(L_{n(k)})=\dfrac{k^{{\alpha}n}}{(\Gamma n)^{\alpha}}{\lambda}^{1-{\alpha}}\dfrac{\Gamma({\alpha}(n+1)-1)}{({\alpha}k)^{{\alpha}(n+1)-1}}.$$

3 PROPERTIES OF IG MEASURE OF \(k\)-RECORD VALUES

In this section, we derive some properties of IG measure of \(n\)th upper and lower \(k\)-record values. The following theorem shows the monotone behavior of IG measure of \(n\)th upper \(k\)-record value in terms of \(n\). In order to prove this theorem, we need the following definitions and lemmas.

Definition 3.1 [30]. Let \(X\) and \(Y\) be two non-negative random variables such that \(P(X>x)\leq P(Y>x)\) for all \(x\geq 0\). Then \(X\) is said to be smaller than \(Y\) in the usual stochastic order (denoted by \(X\leq_{st}Y\)).

Definition 3.2 [30]. Let \(X\) and \(Y\) be two non-negative random variables with densities \(f\) and \(g\), respectively. The random variable \(X\) is said to be smaller than \(Y\) in likelihood ratio order (denoted by \(X\leq_{lr}Y\)) if \(f(x)g(y)\geq g(x)f(y)\) for all \(x\leq y\).

Lemma 3.1 [8]. If \(X\) and \(Y\) are two continuous or discrete random variables such that \(Y\leq_{lr}X\), then \(Y\leq_{st}X\).

Theorem 3.1. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid continuous random variables from a distribution with common distribution function \(F(x)\), pdf \(f(x)\), and quantile function \(F^{-1}(.)\). Let \(U_{n(k)}\) denote the \(n\)th upper \(k\)-record. If \(f(x)\) is non-decreasing in \(x\), then:

1. \(G_{{\alpha}}(U_{n(k)})\) is non-decreasing in \(n\) for \({\alpha}>1\).

2. \(G_{{\alpha}}(U_{n(k)})\) is non-increasing in \(n\) for \(0<{\alpha}<1\).

Proof. From Theorem 2.1, we have

$$G_{{\alpha}}(U_{n(k)})=\dfrac{k^{{\alpha}n}\Gamma({\alpha}(n-1)+1)}{[\Gamma n]^{\alpha}({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}{E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]},$$
(16)

where \(V_{n,k}\sim\Gamma({\alpha}(n-1)+1,{\alpha}(k-1)+1)\). Then,

$$G_{{\alpha}}^{*}(U_{n(k)})=\textrm{log}\;G_{{\alpha}}(U_{n(k)})=D_{n}+\textrm{log}\;{E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]},$$
(17)

where \(D_{n}={\alpha}n\textrm{log}\;k+\textrm{log}\;(\Gamma({\alpha}(n-1)+1))-{\alpha}\textrm{log}\;\Gamma n-({\alpha}(n-1)+1)\textrm{log}\;({\alpha}(k-1)+1)\). Therefore,

$$G_{{\alpha}}^{*}(U_{n+1(k)})-G_{{\alpha}}^{*}(U_{n(k)})=D_{n+1}-D_{n}+\textrm{log}\;\dfrac{{E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n+1,k}}))}]}}{{E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]}}.$$
(18)

Treating \(n\) as a continuous variable and differentiating \(D_{n}\) with respect to \(n\), we obtain

$$\dfrac{dD_{n}}{dn}={\alpha}\left[\psi({\alpha}(n-1)+1)-\psi(n)-\textrm{log}\;\dfrac{k}{{\alpha}(k-1)+1}\right],$$

where \(\psi(x)=\dfrac{d}{dx}(\textrm{log}\;\Gamma x)\) is the digamma function.

Since \(\psi(x)\) is an increasing function of \(x\) and \({\alpha}(n-1)+1>n\) and \({\alpha}(k-1)+1>k\) for \({\alpha}>1\), we conclude that \(D_{n}\) is an increasing function of \(n\). It is easy to show that \(V_{n,k}\leq_{lr}V_{n+1,k}\) and so \(V_{n,k}\leq_{st}V_{n+1,k}\). Moreover, \(f^{{\alpha}-1}(F^{-1}(1-e^{-x}))\) is non-decreasing in \(x\) for all \({\alpha}>1\), because \(f(x)\) is non-decreasing in \(x\). Thus we have,

$${E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n+1,k}}))}]}\geq{E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]}$$
(19)

and hence

$$\textrm{log}\;\dfrac{{E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n+1,k}}))}]}}{{E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]}}\geq 0.$$
(20)

Therefore, \(G_{{\alpha}}^{*}(U_{n+1(k)})-G_{{\alpha}}^{*}(U_{n(k)})\geq 0\), which implies that \(G_{{\alpha}}(U_{n(k)})\) is non-decreasing in \(n\) for \({\alpha}>1\).

Now, for \(0<{\alpha}<1\), we have \({\alpha}(n-1)+1<n\) and \({\alpha}(k-1)+1<k\), so \(D_{n}\) is a decreasing function of \(n\). Moreover, \(f^{{\alpha}-1}(F^{-1}(1-e^{-x}))\) is non-increasing in \(x\) for all \(0<{\alpha}<1\), because \(f(x)\) is non-decreasing in \(x\). Thus,

$${E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n+1,k}}))}]}\leq{E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]},$$
(21)

and hence

$$\textrm{log}\;\dfrac{{E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n+1,k}}))}]}}{{E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]}}\leq 0.$$
(22)

Therefore, \(G_{{\alpha}}^{*}(U_{n+1(k)})-G_{{\alpha}}^{*}(U_{n(k)})\leq 0\), which implies that \(G_{{\alpha}}(U_{n(k)})\) is non-increasing in \(n\) for \(0<{\alpha}<1\). This completes the proof.

Now, we present two bounds for the IG measure of the \(n\)th upper \(k\)-record value.

Theorem 3.2. Let \(X\) be a random variable with IG measure \(G_{{\alpha}}(X)<\infty\). Then the IG measure of the \(n\)th upper \(k\)-record \(U_{n(k)}\) is bounded above as

$$G_{{\alpha}}(U_{n(k)})\leq G_{{\alpha}}(U_{n,k})\left[\dfrac{{\alpha}k}{{\alpha}(k-1)+1}\right]^{{\alpha}(n-1)+1}B_{n(k)}\int\limits_{-\infty}^{\infty}r(x)f^{{\alpha}-1}(x)dx,$$
(23)

where

(i) \(U_{n,k}\sim\Gamma(n,k)\),

(ii) \(B_{n(k)}=\dfrac{({\alpha}(k-1)+1)({\alpha}(n-1))^{{\alpha}(n-1)}}{\Gamma({\alpha}(n-1)+1)}e^{-{\alpha}(n-1)}\), and

(iii) \(r(x)=\dfrac{f(x)}{1-F(x)}\) is the hazard rate function,

provided the integral in (23) exists.

Proof. The mode \(m_{n,k}\) of \(\Gamma({\alpha}(n-1)+1,{\alpha}(k-1)+1)\) with density function \(g_{n,k}\) is known to be \(\dfrac{{\alpha}(n-1)}{{\alpha}(k-1)+1}\). Then we have,

$$g_{n,k}(v)\leq g_{n,k}(m_{n,k})=\dfrac{({\alpha}(k-1)+1)({\alpha}(n-1))^{{\alpha}(n-1)}}{\Gamma({\alpha}(n-1)+1)}e^{-{\alpha}(n-1)}=B_{n(k)}.$$

Now, we get

$${E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]}$$
$${}=\int\limits_{0}^{\infty}g_{n,k}(v){f^{{\alpha}-1}(F^{-1}(1-e^{-v}))}dv$$
$${}\leq B_{n(k)}\int\limits_{0}^{\infty}{f^{{\alpha}-1}(F^{-1}(1-e^{-v}))}dv=B_{n(k)}\int\limits_{-\infty}^{\infty}r(x)f^{{\alpha}-1}(x)dx,$$
(24)

where the last equality is obtained by using the transformation \(x=F^{-1}(1-e^{-V_{n,k}})\). Now, substituting the inequality (24) in (9) gives the required result.

Example 3.1. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a Pareto II distribution with pdf given by

$$f(x)=\dfrac{c{\sigma}^{c}}{(x+{\sigma})^{c+1}},\;{\sigma},c>0,\;x>0,$$

where the scale parameter is denoted by \({\sigma}\) to avoid confusion with the order \({\alpha}\) of the IG measure. Then \(r(x)=\dfrac{c}{x+{\sigma}}\) and

$$\int\limits_{0}^{\infty}r(x)f^{{\alpha}-1}(x)dx=\dfrac{c^{{\alpha}}}{({\alpha}-1)(c+1){\sigma}^{{\alpha}-1}},\quad\forall{\alpha}>1.$$

So, for \({\alpha}>1\), we have

$$G_{{\alpha}}(U_{n(k)})\leq G_{{\alpha}}(U_{n,k})B_{n(k)}\dfrac{c^{{\alpha}}}{(c+1)({\alpha}-1){\sigma}^{{\alpha}-1}}\left(\dfrac{{\alpha}k}{{\alpha}(k-1)+1}\right)^{{\alpha}(n-1)+1}.$$

Theorem 3.3. Under the assumptions of Theorem 3.2, we have

$$G_{{\alpha}}(U_{n(k)})\leq(\geq)G_{{\alpha}}(U_{n,k})\dfrac{({\alpha}k)^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}M^{{\alpha}-1}$$
(25)

for \({\alpha}>1\) \((0<{\alpha}<1)\), where \(M=f(m)<\infty\) and \(m\) is the mode of the density \(f\).

Proof. Since \(M=f(m)\), where \(m\) is the mode of \(X\), we have

$$f(F^{-1}(y))\leq M.$$

By putting \(y=1-e^{-V_{n,k}}\), we get,

$$f(F^{-1}(1-e^{-V_{n,k}}))\leq M.$$

Now, for \({\alpha}>1\),

$$f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))\leq M^{{\alpha}-1}.$$

Taking expectation on both sides, we have

$${E[{f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))}]}\leq M^{{\alpha}-1}.$$

Then by using (9), we get

$$\dfrac{G_{{\alpha}}(U_{n(k)})}{G_{{\alpha}}(U_{n,k})}\left[\dfrac{({\alpha}(k-1)+1)}{{\alpha}k}\right]^{{\alpha}(n-1)+1}\leq M^{{\alpha}-1}.$$

Therefore,

$$G_{{\alpha}}(U_{n(k)})\leq G_{{\alpha}}(U_{n,k})\dfrac{({\alpha}k)^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}M^{{\alpha}-1}.$$

For \(0<{\alpha}<1\), we have

$$f^{{\alpha}-1}(F^{-1}(1-e^{-V_{n,k}}))\geq M^{{\alpha}-1}.$$

Therefore similarly, we can prove that for \(0<{\alpha}<1\)

$$G_{{\alpha}}(U_{n(k)})\geq G_{{\alpha}}(U_{n,k})\dfrac{({\alpha}k)^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}M^{{\alpha}-1}.$$

This completes the proof.

Example 3.2. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common Gompertz distribution with pdf given by

$$f(x)={\lambda}{\beta}e^{{\lambda}x+{\beta}(1-e^{{\lambda}x})},\quad x>0,{\lambda},{\beta}>0.$$

Since, for \(0<{\beta}<1\), the mode \(m\) of the distribution is \(\dfrac{1}{{\lambda}}\textrm{log}\;\dfrac{1}{{\beta}}\) and \(M=f(m)={\lambda}e^{{\beta}-1}\), we have

$$G_{{\alpha}}(U_{n(k)})\leq(\geq)G_{{\alpha}}(U_{n,k})\dfrac{({\alpha}k)^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}{\lambda}^{{\alpha}-1}e^{({\alpha}-1)({\beta}-1)}\quad\forall{\alpha}>1(0<{\alpha}<1).$$

Example 3.3. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a standard half Cauchy distribution with pdf given by

$$f(x)=\dfrac{2}{\pi(1+x^{2})},\quad x\geq 0.$$

Since the mode \(m\) of the distribution is \(0\), we have

$$G_{{\alpha}}(U_{n(k)})\leq(\geq)G_{{\alpha}}(U_{n,k})\dfrac{({\alpha}k)^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}\left(\dfrac{2}{\pi}\right)^{{\alpha}-1}\quad\forall{\alpha}>1(0<{\alpha}<1).$$

Theorem 3.4. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid continuous random variables from a distribution with common distribution function \(F(x)\), density function \(f(x)\), and quantile function \(F^{-1}(.)\). Let \(L_{n(k)}\) denote the \(n\)th lower \(k\)-record. If \(f(x)\) is non-increasing in \(x\), then:

1. \(G_{{\alpha}}(L_{n(k)})\) is non-decreasing in \(n\) for \({\alpha}>1\).

2. \(G_{{\alpha}}(L_{n(k)})\) is non-increasing in \(n\) for \(0<{\alpha}<1\).

Proof. From Theorem 2.2, we have

$$G_{{\alpha}}(L_{n(k)})=\dfrac{k^{{\alpha}n}\Gamma({\alpha}(n-1)+1)}{[\Gamma n]^{\alpha}({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}{E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))}]},$$
(26)

where \(V_{n,k}\sim\Gamma({\alpha}(n-1)+1,{\alpha}(k-1)+1)\). Then,

$$G_{{\alpha}}^{*}(L_{n(k)})=\textrm{log}\;G_{{\alpha}}(L_{n(k)})=D_{n}+\textrm{log}\;{E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))}]},$$
(27)

where \(D_{n}={\alpha}n\textrm{log}\;k+\textrm{log}\;(\Gamma({\alpha}(n-1)+1))-{\alpha}\textrm{log}\;\Gamma n-({\alpha}(n-1)+1)\textrm{log}\;({\alpha}(k-1)+1)\). Therefore,

$$G_{{\alpha}}^{*}(L_{n+1(k)})-G_{{\alpha}}^{*}(L_{n(k)})=D_{n+1}-D_{n}+\textrm{log}\;\dfrac{{E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n+1,k}}))}]}}{{E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))}]}}.$$
(28)

Treating \(n\) as a continuous variable and differentiating \(D_{n}\) with respect to \(n\), we obtain

$$\dfrac{dD_{n}}{dn}={\alpha}\left[\psi({\alpha}(n-1)+1)-\psi(n)-\textrm{log}\;\dfrac{k}{{\alpha}(k-1)+1}\right],$$

where \(\psi(x)=\dfrac{d}{dx}(\textrm{log}\;\Gamma x)\) is the digamma function.

Since \(\psi(x)\) is an increasing function of \(x\) and \({\alpha}(n-1)+1>n\) and \({\alpha}(k-1)+1>k\) for \({\alpha}>1\), we conclude that \(D_{n}\) is an increasing function of \(n\). It is easy to show that \(V_{n,k}\leq_{lr}V_{n+1,k}\) and so \(V_{n,k}\leq_{st}V_{n+1,k}\). Moreover, \(f^{{\alpha}-1}(F^{-1}(e^{-x}))\) is non-decreasing in \(x\) for all \({\alpha}>1\), because \(f(x)\) is non-increasing in \(x\). We obtain,

$${E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n+1,k}}))}]}\geq{E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))}]},$$
(29)

thus

$$\textrm{log}\;\dfrac{{E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n+1,k}}))}]}}{{E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))}]}}\geq 0.$$
(30)

Therefore, \(G_{{\alpha}}^{*}(L_{n+1(k)})-G_{{\alpha}}^{*}(L_{n(k)})\geq 0\), which implies that \(G_{{\alpha}}(L_{n(k)})\) is non-decreasing in \(n\) for \({\alpha}>1\).

Now, for \(0<{\alpha}<1\), we have \({\alpha}(n-1)+1<n\) and \({\alpha}(k-1)+1<k\), so \(D_{n}\) is a decreasing function of \(n\). Moreover, \(f^{{\alpha}-1}(F^{-1}(e^{-x}))\) is non-increasing in \(x\) for all \(0<{\alpha}<1\), because \(f(x)\) is non-increasing in \(x\). Thus,

$${E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n+1,k}}))}]}\leq{E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))}]},$$
(31)

thus

$$\textrm{log}\;\dfrac{{E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n+1,k}}))}]}}{{E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))}]}}\leq 0.$$
(32)

Therefore, \(G_{{\alpha}}^{*}(L_{n+1(k)})-G_{{\alpha}}^{*}(L_{n(k)})\leq 0\), which implies that \(G_{{\alpha}}(L_{n(k)})\) is non-increasing in \(n\) for \(0<{\alpha}<1\). Hence the theorem.

Now, we present two bounds for the IG measure of the \(n\)th lower \(k\)-record value.

Theorem 3.5. Let \(X\) be a random variable with IG measure \(G_{{\alpha}}(X)<\infty\). Then the IG measure of the \(n\)th lower \(k\)-record \(L_{n(k)}\) is bounded above as

$$G_{{\alpha}}(L_{n(k)})\leq G_{{\alpha}}(U_{n,k})\left[\dfrac{{\alpha}k}{{\alpha}(k-1)+1}\right]^{{\alpha}(n-1)+1}B_{n(k)}\int\limits_{-\infty}^{\infty}s(x)f^{{\alpha}-1}(x)dx,$$
(33)

where

(i) \(U_{n,k}\sim\Gamma(n,k)\),

(ii) \(B_{n(k)}=\dfrac{({\alpha}(k-1)+1)({\alpha}(n-1))^{{\alpha}(n-1)}}{\Gamma({\alpha}(n-1)+1)}e^{-{\alpha}(n-1)}\), and

(iii) \(s(x)=\dfrac{f(x)}{F(x)}\) is the reversed hazard rate function,

provided the integral in (33) exists.

Proof. The proof is omitted since it is similar to that of Theorem 3.2.

Theorem 3.6. Under the assumptions of Theorem 3.5, we have

$$G_{{\alpha}}(L_{n(k)})\leq(\geq)G_{{\alpha}}(U_{n,k})\dfrac{({\alpha}k)^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}M^{{\alpha}-1}$$
(34)

for \({\alpha}>1\) \((0<{\alpha}<1)\), where \(M=f(m)<\infty\) and \(m\) is the mode of the density \(f\).

Proof. Since \(M=f(m)\), where \(m\) is the mode of \(X\), we have

$$f(F^{-1}(y))\leq M.$$

By putting \(y=e^{-V_{n,k}}\), we get

$$f(F^{-1}(e^{-V_{n,k}}))\leq M.$$

Now, for \({\alpha}>1\),

$$f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))\leq M^{{\alpha}-1}.$$

Taking expectation on both sides, we have

$${E[{f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))}]}\leq M^{{\alpha}-1}.$$

Then by using (13), we get

$$\dfrac{G_{{\alpha}}(L_{n(k)})}{G_{{\alpha}}(U_{n,k})}\left[\dfrac{{\alpha}(k-1)+1}{{\alpha}k}\right]^{{\alpha}(n-1)+1}\leq M^{{\alpha}-1}.$$

Therefore,

$$G_{{\alpha}}(L_{n(k)})\leq G_{{\alpha}}(U_{n,k})\dfrac{({\alpha}k)^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}M^{{\alpha}-1}.$$

For \(0<{\alpha}<1\), we have

$$f^{{\alpha}-1}(F^{-1}(e^{-V_{n,k}}))\geq M^{{\alpha}-1}.$$

Therefore similarly, we can prove that for \(0<{\alpha}<1\),

$$G_{{\alpha}}(L_{n(k)})\geq G_{{\alpha}}(U_{n,k})\dfrac{({\alpha}k)^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}M^{{\alpha}-1}.$$

This completes the proof.

Example 3.4. Let \(\{X_{i},i\geq 1\}\) be a sequence of iid random variables having a common Frechet distribution with pdf given by

$$f(x)=\dfrac{{\lambda}}{{\beta}}\left(\dfrac{x}{{\beta}}\right)^{-({\lambda}+1)}e^{-(\frac{x}{{\beta}})^{-{\lambda}}},\quad x>0,{\lambda},{\beta}>0.$$


Since the mode \(m\) of the distribution is \({\beta}\left(\dfrac{{\lambda}}{1+{\lambda}}\right)^{\frac{1}{{\lambda}}}\), we have for \({\alpha}>1(0<{\alpha}<1)\),

$$G_{{\alpha}}(L_{n(k)})\leq(\geq)G_{{\alpha}}(U_{n,k})\dfrac{({\alpha}k)^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}\left(\dfrac{{\lambda}}{{\beta}}\right)^{{\alpha}-1}\left(\dfrac{1+{\lambda}}{{\lambda}}\right)^{{\frac{(1+{\lambda})({\alpha}-1)}{{\lambda}}}}e^{-{\frac{(1+{\lambda})({\alpha}-1)}{{\lambda}}}}.$$

4 CHARACTERIZATION OF EXPONENTIAL DISTRIBUTION BY IG MEASURE OF \(k\)-RECORDS

In this section, we show that the exponential distribution maximizes (minimizes) the IG measure of \(k\)-record values under some information constraints. Consider the class of distributions \(F\) associated with a non-negative random variable \(X\) with \(F(0)=0\) and failure rate function \(r\) satisfying the conditions:

  • \(r(x)=a(\theta)b(x)\)

  • \(b(x)\geq M,M>0,\)

where \(a(\theta)\) and \(b(x)=B^{\prime}(x)\) are non-negative functions of \(\theta\) and \(x\), respectively, with \(B(x)=\int_{0}^{x}b(t)dt\). We denote this class of distributions by \(C\). We then provide a characterization result for the class \(C\) in terms of the IG measure of the \(n\)th upper \(k\)-record value \(U_{n(k)}\).

Theorem 4.1. The \(n\)th upper \(k\)-record value of the distribution \(F\) has maximum (minimum) IG measure in \(C\), for \(0<{\alpha}<1\) \(({\alpha}>1)\), if and only if

$$F(x:\theta)=1-e^{-Ma(\theta)x}.$$
(35)

Proof. Let \(F(x:\theta)\) belong to the class \(C\) and let \(U_{n(k)}\) denote the corresponding \(n\)th upper \(k\)-record value. Since \(r(x)=a(\theta)b(x)\), we have \(1-F(x:\theta)=e^{-a(\theta)B(x)}\), so that \(F^{-1}(1-e^{-x})=B^{-1}(x/a(\theta))\) and \(f(F^{-1}(1-e^{-x}))=a(\theta)e^{-x}b[B^{-1}(x/a(\theta))]\). Then, from (9), we have

$$G_{{\alpha}}(U_{n(k)})=G_{{\alpha}}(U_{n,k})\dfrac{({\alpha}k)^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}\int\limits_{0}^{\infty}x^{{\alpha}(n-1)}e^{-x({\alpha}(k-1)+1)}$$
$${}\times\left[a(\theta)e^{-x}b\left[B^{-1}\left(\dfrac{x}{a(\theta)}\right)\right]\right]^{{\alpha}-1}\dfrac{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}{\Gamma({\alpha}(n-1)+1)}dx$$
$${}=G_{{\alpha}}(U_{n,k})({\alpha}k)^{{\alpha}(n-1)+1}[a(\theta)]^{{\alpha}-1}\int\limits_{0}^{\infty}\dfrac{x^{{\alpha}(n-1)}e^{-{\alpha}kx}}{\Gamma({\alpha}(n-1)+1)}b^{{\alpha}-1}\left[B^{-1}\left(\dfrac{x}{a(\theta)}\right)\right]dx.$$
(36)

Noting that \(b(x)\geq M\), for any \(0<{\alpha}<1\) \(({\alpha}>1)\) we have \(b^{{\alpha}-1}\leq(\geq)M^{{\alpha}-1}\). Therefore,

$$G_{{\alpha}}(U_{n(k)})\leq(\geq)G_{{\alpha}}(U_{n,k})({\alpha}k)^{{\alpha}(n-1)+1}[Ma(\theta)]^{{\alpha}-1}\int\limits_{0}^{\infty}\dfrac{x^{{\alpha}(n-1)}e^{-{\alpha}kx}}{\Gamma({{\alpha}(n-1)+1})}dx$$
$${}=G_{{\alpha}}(U_{n,k})[Ma(\theta)]^{{\alpha}-1},$$
(37)

which is the IG measure of the \(n\)th upper \(k\)-record of \(F(x:\theta)=1-e^{-Ma(\theta)x}\). From this, it is clear that for any \(0<{\alpha}<1({\alpha}>1)\), the \(n\)th upper \(k\)-record of exponential distribution has maximum (minimum) IG measure in class \(C\).

To prove the converse, suppose the \(n\)th upper \(k\)-record of \(F(x:\theta)\) has maximum (minimum) IG measure in class \(C\). Then from (36), we have

$$G_{{\alpha}}(U_{n(k)})=G_{{\alpha}}(U_{n,k})\dfrac{({\alpha}k)^{{\alpha}(n-1)+1}}{\Gamma({\alpha}(n-1)+1)}[a(\theta)]^{{\alpha}-1}\int\limits_{0}^{\infty}{x^{{\alpha}(n-1)}e^{-{\alpha}kx}}\;b^{{\alpha}-1}\left[B^{-1}\left(\dfrac{x}{a(\theta)}\right)\right]dx$$
$${}=G_{{\alpha}}(U_{n,k})M^{{\alpha}-1}[a(\theta)]^{{\alpha}-1}\int\limits_{0}^{\infty}\left[\dfrac{b\left[B^{-1}\left(\frac{x}{a(\theta)}\right)\right]}{M}\right]^{{\alpha}-1}\dfrac{x^{{\alpha}(n-1)}e^{-{\alpha}kx}({\alpha}k)^{{\alpha}(n-1)+1}}{\Gamma({\alpha}(n-1)+1)}dx.$$

Since \(G_{{\alpha}}(U_{n(k)})\) is maximum (minimum) for \(0<{\alpha}<1({\alpha}>1)\), we have

$$G_{{\alpha}}(U_{n(k)})=G_{{\alpha}}(U_{n,k})M^{{\alpha}-1}[a(\theta)]^{{\alpha}-1}$$
(38)

and so

$$\int\limits_{0}^{\infty}\left[\dfrac{b\left[B^{-1}\left(\frac{x}{a(\theta)}\right)\right]}{M}\right]^{{\alpha}-1}\dfrac{x^{{\alpha}(n-1)}e^{-{\alpha}kx}({\alpha}k)^{{\alpha}(n-1)+1}}{\Gamma({\alpha}(n-1)+1)}dx=1.$$

Hence,

$$\int\limits_{0}^{\infty}\left[\left[\dfrac{b\left[B^{-1}\left(\frac{x}{a(\theta)}\right)\right]}{M}\right]^{{\alpha}-1}-1\right]\dfrac{x^{{\alpha}(n-1)}e^{-{\alpha}kx}({\alpha}k)^{{\alpha}(n-1)+1}}{\Gamma({\alpha}(n-1)+1)}dx=0.$$

Since \(b(x)\geq M\), the bracketed factor in the integrand is non-negative for \({\alpha}>1\) and non-positive for \(0<{\alpha}<1\); in either case the integrand does not change sign, so the integral can vanish only if

$$\dfrac{b\left[B^{-1}\left(\dfrac{x}{a(\theta)}\right)\right]}{M}=1,\quad\forall x>0,$$
$$\dfrac{d}{dx}\left[B^{-1}\left(\dfrac{x}{a(\theta)}\right)\right]=\dfrac{1}{Ma(\theta)},$$
$$B^{-1}\left(\dfrac{x}{a(\theta)}\right)=\dfrac{1}{Ma(\theta)}x+h(\theta).$$
(39)

As \(X\) is a non-negative random variable, we have \(B^{-1}(0)=0\) and so \(h(\theta)=0\). Now, making the transformation \(y=\dfrac{x}{a(\theta)}\) in (39), we conclude that \(B(x)=Mx\); that is, \(X\) has the exponential distribution (35), as required.

5 RELATIVE INFORMATION GENERATING DIVERGENCE OF \(k\)-RECORDS

In this section, we study the RIG divergence between a given parent density and the corresponding density of the \(n\)th upper and lower \(k\)-record values.

Theorem 5.1. The RIG divergence between the densities of the \(n\)th upper \(k\)-record and the parent distribution is given by the following representation:

$$R_{{\alpha}}(f_{U_{n(k)}},f)=G_{{\alpha}}(U_{n,k})\dfrac{({\alpha}k)^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}.$$
(40)

Moreover, \(R_{{\alpha}}(f_{U_{n(k)}},f)\) is an increasing (decreasing) function of \(n\) for \({\alpha}>1(0<{\alpha}<1)\).

Proof. The RIG divergence between the densities of the \(n\)th upper \(k\)-record and the parent distribution is given by

$$R_{{\alpha}}(f_{U_{n(k)}},f)=\dfrac{k^{n{\alpha}}}{[\Gamma n]^{\alpha}}\int\limits_{-\infty}^{\infty}f(x)[{-\textrm{log}\;(1-F(x))}]^{{\alpha}(n-1)}[1-F(x)]^{{\alpha}(k-1)}dx.$$
(41)

On putting \(v={-\textrm{log}\;(1-F(x))}\), we get

$$R_{{\alpha}}(f_{U_{n(k)}},f)=\dfrac{k^{n{\alpha}}}{[\Gamma n]^{\alpha}}\int\limits_{0}^{\infty}v^{{\alpha}(n-1)}e^{-v({\alpha}(k-1)+1)}dv$$
$${}=\dfrac{k^{n{\alpha}}}{[\Gamma n]^{\alpha}}\dfrac{\Gamma({\alpha}(n-1)+1)}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}=G_{{\alpha}}(U_{n,k})\dfrac{({\alpha}k)^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}},$$
(42)

where \(G_{{\alpha}}(U_{n,k})\) is defined in (11).

In order to examine the monotonicity behaviour of \(R_{{\alpha}}(f_{U_{n(k)}},f)\), we have

$$\textrm{log}\;R_{{\alpha}}(f_{U_{n(k)}},f)=n{\alpha}\textrm{log}\;k+\textrm{log}\;\Gamma({\alpha}(n-1)+1)-{\alpha}\textrm{log}\;(\Gamma n)$$
$${}-({\alpha}(n-1)+1)\textrm{log}\;({\alpha}(k-1)+1).$$
(43)

Now differentiating with respect to \(n\), we obtain,

$$\dfrac{d}{dn}\textrm{log}\;R_{{\alpha}}(f_{U_{n(k)}},f)={\alpha}\left[\psi({\alpha}(n-1)+1)-\psi(n)-\textrm{log}\;\dfrac{k}{{\alpha}(k-1)+1}\right].$$
(44)

Since \(\psi\) is an increasing function, for \({\alpha}>1\) \((0<{\alpha}<1)\) we have \({\alpha}(n-1)+1>(<)\,n\) and \({\alpha}(k-1)+1>(<)\,k\), so the derivative in (44) is positive (negative), which establishes the stated monotonicity. Note that the representation (40) does not depend on the parent distribution \(F\).
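The distribution-freeness of (40) can be checked numerically; the following sketch (assuming NumPy and SciPy are available; the function names are ours) integrates (41) for two different parent distributions and compares the results with the closed form.

```python
# Numerical check that the RIG divergence (40) is distribution-free: the
# integral (41) gives the same value for two different parent distributions.
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def rig_closed_form(alpha, n, k):
    a = alpha * (n - 1) + 1
    return np.exp(alpha * n * np.log(k) - alpha * gammaln(n)
                  + gammaln(a) - a * np.log(alpha * (k - 1) + 1))

def rig_numeric(alpha, n, k, pdf, cdf):
    def integrand(x):
        Fx = cdf(x)
        return (pdf(x) * (-np.log1p(-Fx)) ** (alpha * (n - 1))
                * (1 - Fx) ** (alpha * (k - 1)))
    val, _ = quad(integrand, 0, np.inf)
    return np.exp(alpha * n * np.log(k) - alpha * gammaln(n)) * val

alpha, n, k = 1.5, 3, 2
print(rig_closed_form(alpha, n, k))
# standard exponential parent
print(rig_numeric(alpha, n, k, lambda x: np.exp(-x), lambda x: 1 - np.exp(-x)))
# Weibull (beta = 2, lambda = 1) parent gives the same value
print(rig_numeric(alpha, n, k, lambda x: 2 * x * np.exp(-x ** 2),
                  lambda x: 1 - np.exp(-x ** 2)))
```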

Theorem 5.2. The RIG divergence between the densities of the \(n\)th lower \(k\)-record and the parent distribution is given by the following representation:

$$R_{{\alpha}}(f_{L_{n(k)}},f)=G_{{\alpha}}(U_{n,k})\dfrac{({\alpha}k)^{{\alpha}(n-1)+1}}{({\alpha}(k-1)+1)^{{\alpha}(n-1)+1}}.$$
(45)

Moreover, \(R_{{\alpha}}(f_{L_{n(k)}},f)\) is an increasing (decreasing) function of n for \({\alpha}>1(0<{\alpha}<1)\).

Proof. The proof is omitted since it is similar to that of Theorem 5.1.

6 ESTIMATION OF IG MEASURE FOR WEIBULL DISTRIBUTION BASED ON \(k\)-RECORD VALUES

In this section, we consider the estimation of the IG measure for the Weibull distribution based on upper \(k\)-record values. We obtain the maximum likelihood estimators (MLEs) and the Bayes estimates of the IG measure using the MCMC method.

A two-parameter Weibull distribution has cdf given by

$$F(x|{\beta},{\lambda})=1-e^{-{\lambda}x^{\beta}}.$$
(46)

The pdf corresponding to the above cdf is given by

$$f(x|{\beta},{\lambda})={\beta}{\lambda}x^{{\beta}-1}e^{-{\lambda}x^{\beta}}.$$
(47)

The Weibull distribution is widely used in many fields, including reliability engineering, survival analysis, hydrology, meteorology, and insurance. Furthermore, parametric inference for the Weibull distribution based on record data is of special interest because the Weibull distribution arises naturally in extreme value theory and has a significant physical interpretation in numerous practical contexts. [15] considered the estimation of the entropy of the Weibull distribution under generalized progressive hybrid censoring.

From Example 2.1, the IG measure of the \(n\)th upper \(k\)-record for the Weibull distribution with cdf (46) is given by

$$G_{{\alpha}}({\lambda},{\beta})=\dfrac{k^{{\alpha}n}}{\Gamma(n)^{\alpha}}{\lambda}^{\frac{{\alpha}-1}{{\beta}}}{\beta}^{{\alpha}-1}\dfrac{\Gamma({\alpha}(n-\frac{1}{{\beta}})+\frac{1}{{\beta}})}{(k{\alpha})^{{\alpha}(n-\frac{1}{{\beta}})+\frac{1}{{\beta}}}}.$$
(48)

6.1 Maximum Likelihood Estimation

In this subsection, we obtain the MLEs of the IG measure for the two-parameter Weibull distribution based on upper \(k\)-record values. Let \(R_{i},i=1,2,...,n\), be the first \(n\) upper \(k\)-record values arising from a Weibull distribution with cdf given in (46). Let \(D_{n}=(R_{1},R_{2},...,R_{n})\). Then, from (47), the likelihood function is given by

$$L({\lambda},{\beta}|d_{n})=(k{\lambda}{\beta})^{n}e^{-k{\lambda}r_{n}^{\beta}}\prod_{i=1}^{n}r_{i}^{{\beta}-1},$$

where \(d_{n}=(r_{1},r_{2},...,r_{n})\). The natural logarithm of the likelihood function is given by

$$\textrm{log}\;L({\lambda},{\beta}|d_{n})=n\textrm{log}\;(k{\lambda}{\beta})-k{\lambda}r_{n}^{\beta}+({\beta}-1)\sum_{i=1}^{n}\textrm{log}\;r_{i}.$$

Differentiating \(\textrm{log}\;L({\lambda},{\beta}|d_{n})\) with respect to \({\beta}\) and \({\lambda}\) and equating to zero, we obtain

$$\dfrac{\partial\textrm{log}L}{\partial{\beta}}=\dfrac{n}{{\beta}}-k{\lambda}r_{n}^{\beta}\textrm{log}\;r_{n}+\sum_{i=1}^{n}\textrm{log}\;r_{i}=0$$
(49)

and

$$\dfrac{\partial\textrm{log}L}{\partial{\lambda}}=\dfrac{n}{{\lambda}}-kr_{n}^{\beta}=0.$$
(50)

From (50), we get

$$\hat{{\lambda}}=\dfrac{n}{kr_{n}^{\beta}}.$$

By putting the value of \(\hat{{\lambda}}\) in (49), we get

$$\dfrac{n}{{\beta}}-n\textrm{log}\;r_{n}+\sum_{i=1}^{n}\textrm{log}\;r_{i}=0.$$
(51)

Therefore the MLE of \({\beta}\) is given by

$$\hat{{\beta}}=\dfrac{n}{n\textrm{log}\;r_{n}-\sum_{i=1}^{n}\textrm{log}\;r_{i}}.$$

Thus the MLE of \({\lambda}\) is obtained as

$$\hat{{\lambda}}=\dfrac{n}{kr_{n}^{\hat{{\beta}}}}.$$

Then, by the invariance property of the MLE, the MLE of the IG measure for the Weibull distribution based on upper \(k\)-record values is given by

$$\hat{G}_{\textrm{MLE}}=\dfrac{k^{{\alpha}n}}{\Gamma(n)^{\alpha}}\hat{{\lambda}}^{\frac{{\alpha}-1}{\hat{{\beta}}}}\hat{{\beta}}^{{\alpha}-1}\dfrac{\Gamma({\alpha}(n-\frac{1}{\hat{{\beta}}})+\frac{1}{\hat{{\beta}}})}{(k{\alpha})^{{\alpha}(n-\frac{1}{\hat{{\beta}}})+\frac{1}{\hat{{\beta}}}}}.$$
(52)
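A minimal implementation sketch of these estimators (assuming NumPy and SciPy; the function name is ours, for illustration only) is as follows.

```python
# Sketch of the MLEs of Section 6.1: beta-hat from (51), lambda-hat from (50),
# plugged into the IG measure (52) by the invariance property of the MLE.
import numpy as np
from scipy.special import gammaln

def weibull_ig_mle(r, k, alpha):
    """MLEs (beta_hat, lambda_hat, G_hat) from upper k-records r_1 < ... < r_n."""
    r = np.asarray(r, dtype=float)
    n = r.size
    log_r = np.log(r)
    beta_hat = n / (n * log_r[-1] - log_r.sum())     # Eq. (51)
    lam_hat = n / (k * r[-1] ** beta_hat)            # Eq. (50)
    a = alpha * (n - 1.0 / beta_hat) + 1.0 / beta_hat
    g_hat = np.exp(alpha * n * np.log(k) - alpha * gammaln(n)
                   + ((alpha - 1) / beta_hat) * np.log(lam_hat)
                   + (alpha - 1) * np.log(beta_hat)
                   + gammaln(a) - a * np.log(k * alpha))  # Eq. (52)
    return beta_hat, lam_hat, g_hat
```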

6.2 Bayesian Estimation

In this subsection, we consider the Bayesian estimation of the IG measure for the two-parameter Weibull distribution based on upper \(k\)-record values. Recently, Hassan and Zaky [22] studied the Bayesian estimation of the entropy function for the Lomax distribution based on record values, and Al-Labadi and Berry [3] studied the Bayesian estimation of extropy and goodness-of-fit tests. Chacko and Asha [10] obtained estimators for the entropy function of a Weibull distribution based on record values, and Chacko and Asha [9] obtained estimators for the entropy function of a generalized exponential distribution based on record values. Bayesian estimation of a two-parameter Weibull distribution using an extension of Jeffreys' prior information with three loss functions has been studied by [21].

Here, we consider the Bayesian estimation of the IG measure for the two-parameter Weibull distribution under symmetric as well as asymmetric loss functions. As a symmetric loss function we consider the squared error loss (SEL) function, and as asymmetric loss functions we consider both the LINEX loss (LL) and the entropy loss (EL) functions. The Bayes estimate of any parameter \(\mu\) under SEL is the posterior mean of \(\mu\). The Bayes estimate of \(\mu\) under the LL function can be obtained as

$$\hat{\mu}_{LB}=-\dfrac{1}{h}\textrm{log}\{E_{\mu}(e^{-h\mu}|\underline{x})\},\quad h\neq 0,$$

provided \(E_{\mu}(.)\) exists. The Bayes estimate of \(\mu\) under the general EL function is obtained as

$$\hat{\mu}_{EB}=(E_{\mu}(\mu^{-q}|\underline{x}))^{-\frac{1}{q}},\quad q\neq 0.$$

Let \(R_{i},i=1,2,...,n\), be the first \(n\) upper \(k\)-record values arising from a Weibull distribution with pdf given in (47). Then the likelihood function is given by

$$L({\lambda},{\beta}|d_{n})=(k{\lambda}{\beta})^{n}e^{-{\lambda}r_{n}^{\beta}k}\prod_{i=1}^{n}r_{i}^{{\beta}-1},$$

where \(d_{n}=(r_{1},r_{2},...,r_{n})\). Assume that \({\beta}\) and \({\lambda}\) have independent gamma prior distributions with densities respectively given by

$$\pi_{1}({\beta}|a,b)=\dfrac{b^{a}}{\Gamma a}{\beta}^{a-1}e^{-b{\beta}};\quad a>0,\quad b>0,$$

and

$$\pi_{2}({\lambda}|c,d)=\dfrac{d^{c}}{\Gamma c}{\lambda}^{c-1}e^{-d{\lambda}};\quad c>0,\quad d>0.$$

Thus the joint prior distribution of \({\beta}\) and \({\lambda}\) is given by

$$\pi({\lambda},{\beta})=\dfrac{b^{a}d^{c}}{\Gamma a\Gamma c}{\beta}^{a-1}{\lambda}^{c-1}e^{-b{\beta}}e^{-d{\lambda}}.$$

Then the joint posterior density of \({\beta}\) and \({\lambda}\) given \(D_{n}=d_{n}\) can be written as

$$\pi^{*}({\lambda},{\beta}|d_{n})=\dfrac{L({\lambda},{\beta}|d_{n})\pi({\lambda},{\beta})}{\iint L({\lambda},{\beta}|d_{n})\pi({\lambda},{\beta})d{\lambda}d{\beta}}.$$
(53)

Therefore the Bayes estimates of any function \(g({\beta},{\lambda})\) of \({\beta}\) and \({\lambda}\) under SEL, LL, and EL are respectively given by

$$\hat{g}_{S}=\dfrac{\iint g({\beta},{\lambda})L({\lambda},{\beta}|d_{n})\pi({\lambda},{\beta})d{\lambda}d{\beta}}{\iint L({\lambda},{\beta}|d_{n})\pi({\lambda},{\beta})d{\lambda}d{\beta}},$$
(54)
$$\hat{g}_{L}=-\dfrac{1}{h}\textrm{log}\left[\dfrac{\iint e^{-hg({\beta},{\lambda})}L({\lambda},{\beta}|d_{n})\pi({\lambda},{\beta})d{\lambda}d{\beta}}{\iint L({\lambda},{\beta}|d_{n})\pi({\lambda},{\beta})d{\lambda}d{\beta}}\right],$$
(55)

and

$$\hat{g}_{E}=\left[\dfrac{\iint(g({\beta},{\lambda}))^{-q}L({\lambda},{\beta}|d_{n})\pi({\lambda},{\beta})d{\lambda}d{\beta}}{\iint L({\lambda},{\beta}|d_{n})\pi({\lambda},{\beta})d{\lambda}d{\beta}}\right]^{\frac{-1}{q}}.$$
(56)

It is not possible to compute (54)–(56) explicitly. Thus we propose the MCMC method to find the Bayes estimates of the IG measure given in (48).

6.3 MCMC Method

In this subsection, we consider the MCMC method to generate samples from the posterior distributions and then find the Bayes estimates for IG measure. The joint posterior distribution given in (53) can be written as

$$\pi^{*}({\lambda},{\beta}|d_{n})\propto{\beta}^{n+a-1}{\lambda}^{c+n-1}e^{-{\lambda}(d+r_{n}^{\beta}k)}e^{-b{\beta}}e^{({\beta}-1)\sum_{i=1}^{n}\textrm{log}\;r_{i}}.$$
(57)

From (57) the conditional posterior distribution of \({\beta}\) given \({\lambda}\) and \(d_{n}\) is given by

$$\pi^{*}_{1}({\beta}|{\lambda},d_{n})\propto{\beta}^{n+a-1}e^{-{\lambda}r_{n}^{\beta}k}e^{-{\beta}(b-\sum_{i=1}^{n}\textrm{log}\;r_{i})}.$$
(58)

Again from (57), the conditional posterior distribution of \({\lambda}\) given \({\beta}\) and \(d_{n}\) is given by

$$\pi^{*}_{2}({\lambda}|{\beta},d_{n})\propto{\lambda}^{c+n-1}e^{-{\lambda}(d+r_{n}^{\beta}k)}.$$
(59)

Thus, from (59) we can see that, for a given \({\beta}\), the conditional posterior distribution of \({\lambda}\) is a gamma distribution with parameters \((n+c)\) and \((d+r_{n}^{\beta}k)\); that is, \({\lambda}\sim\) Gamma\((n+c,d+r_{n}^{\beta}k)\). Therefore one can easily generate samples from the posterior distribution of \({\lambda}\). However, it is not possible to generate random variables from the posterior distribution of \({\beta}\) given in (58) using standard random number generation methods. Hence we use the Metropolis–Hastings (M–H) algorithm to generate samples from (58) (see [14]). Since the plot of (58) resembles a normal density, we take a normal proposal density for \({\beta}\) in the M–H algorithm.

By setting initial values \({\beta}^{(0)}\) and \({\lambda}^{(0)}\), let \({\beta}^{(t)}\) and \({\lambda}^{(t)}\), \(t=1,2,...,N\), be the observations generated from (58) and (59), respectively. Then the Bayes estimators of the IG measure given in (48) under SEL, LL, and EL, taking the first \(m\) iterations as the burn-in period, are respectively given by

$$\hat{G}_{\textrm{SEL}}=\dfrac{1}{N-m}\sum_{t=m+1}^{N}G_{{\alpha}}({\lambda}^{(t)},{\beta}^{(t)}),$$
(60)
$$\hat{G}_{\textrm{LL}}=-\dfrac{1}{h}\textrm{log}\left[\dfrac{1}{N-m}\sum_{t=m+1}^{N}e^{-hG_{{\alpha}}({\lambda}^{(t)},{\beta}^{(t)})}\right],$$
(61)

and

$$\hat{G}_{\textrm{EL}}=\left[\dfrac{1}{N-m}\sum_{t=m+1}^{N}(G_{{\alpha}}({\lambda}^{(t)},{\beta}^{(t)}))^{-q}\right]^{-\frac{1}{q}},$$
(62)

where \(G_{\alpha}({\lambda}^{(t)},{\beta}^{(t)})\) is given in (48).
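A compact sketch of this sampler (assuming NumPy; the function names, the initial values, the proposal scale sd, and the loss parameters h and q below are illustrative choices, not prescribed by the text) is as follows.

```python
# Gibbs-within-Metropolis-Hastings sketch for Section 6.3: lambda is drawn
# from its gamma full conditional (59); beta is updated by a random-walk M-H
# step targeting (58); each draw is mapped to G_alpha via (48).
from math import exp, lgamma, log
import numpy as np

def ig_weibull(alpha, n, k, lam, beta):
    """G_alpha(lambda, beta) of Eq. (48)."""
    a = alpha * (n - 1.0 / beta) + 1.0 / beta
    return exp(alpha * n * log(k) - alpha * lgamma(n)
               + ((alpha - 1) / beta) * log(lam) + (alpha - 1) * log(beta)
               + lgamma(a) - a * log(k * alpha))

def log_cond_beta(beta, lam, r, k, a, b):
    """Log of the conditional posterior (58), up to an additive constant."""
    if beta <= 0:
        return -np.inf
    n = len(r)
    return ((n + a - 1) * log(beta) - lam * k * r[-1] ** beta
            - beta * (b - np.log(r).sum()))

def bayes_ig(r, k, alpha, a=2.0, b=2.0, c=2.0, d=2.0,
             N=50_000, m=5_000, sd=0.1, h=1.0, q=1.0, seed=1):
    rng = np.random.default_rng(seed)
    r = np.asarray(r, dtype=float)
    n = r.size
    beta, lam = 1.0, 1.0                      # illustrative initial values
    g = np.empty(N)
    for t in range(N):
        lam = rng.gamma(n + c, 1.0 / (d + k * r[-1] ** beta))  # Eq. (59)
        prop = rng.normal(beta, sd)                            # M-H proposal
        if np.log(rng.uniform()) < (log_cond_beta(prop, lam, r, k, a, b)
                                    - log_cond_beta(beta, lam, r, k, a, b)):
            beta = prop
        g[t] = ig_weibull(alpha, n, k, lam, beta)
    g = g[m:]                                                  # drop burn-in
    return {"SEL": g.mean(),                                   # Eq. (60)
            "LL": -np.log(np.mean(np.exp(-h * g))) / h,        # Eq. (61)
            "EL": np.mean(g ** (-q)) ** (-1.0 / q)}            # Eq. (62)
```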

6.4 Simulation Study

In this subsection, we carry out a simulation study to illustrate the estimation procedures developed in the previous subsections. First, we obtain the MLEs of the IG measure using (52). The ML estimates and the corresponding MSEs, computed from 1000 simulated samples for different values of \(n\) and different combinations of \({\beta}\) and \({\lambda}\), are given in Tables 1 and 2. For the simulation studies for the Bayes estimators, we take the hyperparameters of the prior distributions of \({\beta}\) and \({\lambda}\) as \(a=2\), \(b=2\), \(c=2\), and \(d=2\). We obtain the Bayes estimates of the IG measure of the Weibull distribution based on upper \(k\)-record values under the SEL, LL, and EL functions using the MCMC method.

Table 1 The estimate and the corresponding MSE for maximum likelihood estimator and Bayes estimator for IG measure of Weibull distribution when \({\alpha}=0.75\)
Table 2 The estimate and the corresponding MSE for maximum likelihood estimator and Bayes estimator for IG measure of Weibull distribution when \({\alpha}=1.5\)

For that, we use the following algorithm.

1. Generate upper \(k\)-record values from a two-parameter Weibull distribution with parameters \({\beta}\) and \({\lambda}\).

2. Calculate the estimates of the IG measure from the generated upper \(k\)-record values using the MCMC method as described below.

(a) Start with initial values \({\beta}^{(0)}\) and \({\lambda}^{(0)}\).

(b) Set \(t=1\).

(c) Generate \({\lambda}^{(t)}\) from Gamma\((n+c,d+r_{n}^{{\beta}^{(t-1)}}k)\).

(d) Using the M–H algorithm, generate \({\beta}^{(t)}\) from \(\pi_{1}^{*}({\beta}|{\lambda}^{(t)},d_{n})\).

(e) Calculate \(\hat{G}_{\alpha}({\lambda}^{(t)},{\beta}^{(t)})\) using (48).

(f) Set \(t=t+1\).

(g) Repeat steps (c) to (f) \(N=50\,000\) times.

(h) Calculate the Bayes estimates of the IG measure \(G_{\alpha}({\lambda},{\beta})\) using (60) to (62), taking the burn-in period \(m=5000\).

3. Repeat steps 1 and 2 1000 times.

4. Calculate the Bayes estimates and the corresponding MSEs of the estimators.

The simulation study is repeated for \(n=6,8,10\) and for different values of \({\beta}\) and \({\lambda}\). The ML estimates, the Bayes estimates, and the corresponding MSEs of the IG measure under the SEL, LL, and EL functions are given in Table 1 for \({\alpha}=0.75\) and in Table 2 for \({\alpha}=1.5\). From Tables 1 and 2, we draw the following inferences.

1. The MSEs of all estimators decrease as \(n\) increases.

2. The MSEs of the Bayes estimates are smaller than those of the MLEs.

3. Among the Bayes estimators, the estimators under the EL function have the smallest MSE.

7 CONCLUSIONS

In this paper, we considered the IG and RIG measures for the \(n\)th upper and lower \(k\)-record values. The monotone behaviour of the IG measure of records was established, and some bounds for the IG measure of the \(n\)th upper \(k\)-record value were obtained. Further, we established some characterization results for the exponential distribution through the maximization (minimization) of the IG measure of its corresponding record values under some conditions. Then, we provided a discussion of the RIG divergence between the densities of \(k\)-record values and the distribution of the underlying sequence of random variables. Finally, as an application of the IG measure, we obtained the MLEs and the Bayes estimates of the IG measure of the Weibull distribution based on upper \(k\)-record values. Among the different estimators, the Bayes estimator under the EL function performs better than the MLE and the Bayes estimators under SEL and LL in terms of MSE.