1 Introduction

The concept of ranked set sampling (RSS) was first introduced by McIntyre [13] to estimate mean pasture yields, who found that RSS is a more efficient sampling method than simple random sampling (SRS) for estimating the population mean. RSS and some of its variants have been successfully applied in different areas such as industrial statistics, environmental and ecological studies, bio-statistics and statistical genetics. Let \(X_{SRS}=\left\{ X_{i}, i=1, \ldots , n\right\} \) be an SRS of size n from a continuous distribution with probability density function (pdf) \(f (\cdot )\) and cumulative distribution function (cdf) \(F (\cdot )\). One-cycle ranked set sampling involves an initial ranking of n samples of size n as follows:

$$\begin{aligned} \begin{array}{ccccccc} 1: &{} \underline{X_{(1: n) 1}} &{} X_{(2: n) 1} &{} \cdots &{} X_{(n: n) 1} &{} \rightarrow &{} X_{(1) 1}=X_{(1: n) 1} \\ 2: &{} X_{(1: n) 2} &{} \underline{X_{(2: n) 2}} &{} \cdots &{} X_{(n: n) 2} &{} \rightarrow &{} X_{(2) 2}=X_{(2: n) 2} \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots &{} \vdots \\ n: &{} X_{(1: n) n} &{} X_{(2:n) n} &{} \cdots &{} \underline{X_{(n: n) n}} &{} \rightarrow &{} X_{(n) n}=X_{(n:n) n} \end{array} \end{aligned}$$

where \(X_{(i:n) j}\) denotes the ith order statistic from the jth SRS of size n. The resulting sample is called an RSS of size n and is denoted by \(X_{RSS}=\left\{ X_{(i)i}, i=1, \ldots , n\right\} \), where \( X_{(i)i} \) is the ith order statistic in a set of size n obtained from the ith sample, having pdf,

$$\begin{aligned} f_{(i : n)}(x)=\frac{1}{B(i, n-i+1)} f(x)[F(x)]^{i-1}[1-F(x)]^{n-i}, \end{aligned}$$

where \( B(i, n-i+1) \) is the beta function with parameters i and \( n - i + 1 \).
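As a quick illustration of the scheme, the following sketch (ours, not part of the original development) simulates one-cycle RSS under perfect ranking and checks the measured unit \(X_{(i)i}\) against the order-statistic pdf above; the standard exponential parent, the seed and the sample sizes are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 5, 20000
i = 3  # which ranked unit to check (1-based)

# one-cycle RSS: n samples of size n; from sample i keep its ith order statistic
samples = rng.exponential(size=(reps, n, n))
x_ii = np.sort(samples, axis=2)[:, i - 1, i - 1]

# under perfect ranking, F(X_(i)i) should follow Beta(i, n-i+1)
u = stats.expon.cdf(x_ii)
print(stats.kstest(u, stats.beta(i, n - i + 1).cdf))  # large p-value expected
```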

Biradar et al. [2] proposed the maximum ranked set sampling procedure with unequal samples (MaxRSSU) to estimate the mean of the exponential distribution and showed that the resulting estimator is more efficient than the estimator based on SRS. In the MaxRSSU, we draw n simple random samples, where the size of the ith sample is i for \(i=1, \ldots , n\). The one-cycle MaxRSSU involves an initial ranking of these n samples as follows:

$$\begin{aligned} \begin{array}{ccccccc} 1: &{}\underline{X_{(1: 1) 1}} &{} &{} &{} &{} \rightarrow &{} Y_{1}=X_{(1: 1) 1} \\ 2: &{} X_{(1: 2) 2} &{} \underline{X_{(2: 2) 2}} &{} &{} &{} \rightarrow &{} Y_{2}=X_{(2: 2) 2} \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots &{} \vdots \\ n: &{} X_{(1: n) n} &{} X_{(2: n) n} &{} \cdots &{} \underline{X_{(n: n) n}} &{} \rightarrow &{} Y_{n}=X_{(n: n) n} \end{array} \end{aligned}$$

where \(X_{(i : j) j}\) denotes the ith order statistic from the jth SRS of size j. The resulting sample is called a one-cycle MaxRSSU of size n and is denoted by \(Y_{MaxRSSU}=\left\{ Y_{i}, i=1,2, \ldots , n\right\} .\) Under the assumption of perfect judgment ranking (see Chen et al. [3]), \(Y_{i}\) has the same distribution as \(X_{(i) i}\), the ith order statistic (the maximum) in a set of size i obtained from the ith sample, with probability density function (pdf),

$$\begin{aligned} f_{(i) i}(x)=i f(x)[F(x)]^{i-1} \end{aligned}$$
(1.1)

and distribution function,

$$\begin{aligned} F_{(i) i}(x)=[F(x)]^{i}. \end{aligned}$$
(1.2)

In MaxRSSU, we measure accurately only n maximum order statistics out of \(\sum _{i=1}^{n} i=n(n+1) / 2\) ranked units, which allows for an increase in set size without introducing too many ranking errors. In a similar manner one can also define the procedure of minimum ranked set sampling with unequal samples (MinRSSU) as a useful modification of the RSS procedure. The one-cycle MinRSSU involves an initial ranking of n samples, where the ith sample has size i, as follows,

$$\begin{aligned} \begin{array}{ccccccc} 1: &{}\underline{X_{(1: 1) 1}} &{} &{} &{} &{} \rightarrow &{} Y_{1}=X_{(1: 1) 1} \\ 2: &{}\underline{X_{(1: 2) 2}} &{} X_{(2: 2) 2} &{} &{} &{} \rightarrow &{} Y_{2}=X_{(1: 2) 2} \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots &{} \vdots \\ n: &{} \underline{X_{(1: n) n}} &{} X_{(2: n) n} &{} \cdots &{} X_{(n: n) n} &{} \rightarrow &{} Y_{n}=X_{(1: n) n} \end{array} \end{aligned}$$
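The two unequal-sample schemes are easy to simulate. The sketch below (ours, for illustration only) draws one cycle of each under perfect ranking and checks (1.2), i.e. that \(Y_i\) from MaxRSSU has cdf \([F(x)]^{i}\); the exponential parent and the seed are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 4, 20000

def one_cycle(rng, n, keep_max=True):
    # ith sample has size i; measure only its maximum (MaxRSSU) or minimum (MinRSSU)
    pick = np.max if keep_max else np.min
    return np.array([pick(rng.exponential(size=i)) for i in range(1, n + 1)])

y = np.array([one_cycle(rng, n, keep_max=True) for _ in range(reps)])
i = n  # check the last unit: F(Y_n) should have cdf u**n, per (1.2)
u = stats.expon.cdf(y[:, i - 1])
print(stats.kstest(u, lambda t: t ** i))  # large p-value expected
```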

Eskandarzadeh et al. [4] considered information measures of MaxRSSU in terms of Shannon entropy, Rényi entropy and Kullback-Leibler information. Jozani and Ahmadi [8] explored the notions of uncertainty and information content of RSS data and compared them with their counterparts in SRS data. Tahmasebi et al. [22] obtained some results on residual entropy for ranked set samples. Raqab and Qiu [18] considered the problem of the information content of RSS data based on entropy measures and the related monotonic properties and stochastic comparisons. More recently, Qiu and Eftekharian [17] obtained the information content of MinRSSU and MaxRSSU associated with extropy. However, little work has been done in the literature on the entropy properties of the MaxRSSU (MinRSSU) designs using the quantile function, which motivates the present study.

Let X denote the lifetime of a system with pdf \(f(\cdot )\) and cdf \(F(\cdot )\). Shannon [20] introduced a measure of uncertainty associated with X as

$$\begin{aligned} H(X) = -E\left( \log f(X)\right) = -\int _{0}^{+\infty } f(x) \log (f(x)) d x, \end{aligned}$$
(1.3)

and it has been used in various branches of statistics and related fields. The measure (1.3) is additive in nature; that is, for two independent random variables X and Y, the two-dimensional version of (1.3) satisfies

$$\begin{aligned} H(X, Y) = -E\left( \log f(X, Y)\right) = H(X) + H(Y). \end{aligned}$$
(1.4)

Tsallis [25] introduced a non-additive generalization of the Shannon entropy which is given by,

$$\begin{aligned} S_{\alpha }(X)= & {} \frac{1}{1-\alpha }\left[ \int _{0}^{+\infty } f^{\alpha }(x) d x-1\right] \nonumber \\= & {} \frac{1}{1-\alpha }\left[ \int _{0}^{1} f^{\alpha -1}\left( F^{-1}(u)\right) d u-1\right] , \alpha >0, \alpha \ne 1, \end{aligned}$$
(1.5)

where \(\alpha \) is the entropic index. Clearly \(\lim _{\alpha \rightarrow 1} S_{\alpha }(X)=H(X)\). Unlike (1.4), Tsallis entropy in (1.5) is non-additive, as for any two independent random variables X and Y,

$$\begin{aligned} S_{\alpha }(X, Y) = S_{\alpha }(X)+S_{\alpha }(Y)+(1-\alpha ) S_{\alpha }(X) S_{\alpha }(Y). \end{aligned}$$
(1.6)
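A short numerical check of (1.6) (ours; the exponential rates and \(\alpha \) are arbitrary) confirms the non-additivity identity, using the fact that for independent X and Y the double integral of \(f^{\alpha }\) factorises.

```python
import numpy as np
from scipy.integrate import quad

alpha = 1.5
f = lambda x: 1.0 * np.exp(-1.0 * x)   # pdf of X ~ Exp(1)
g = lambda x: 2.0 * np.exp(-2.0 * x)   # pdf of Y ~ Exp(2)

def tsallis(pdf):
    return (quad(lambda x: pdf(x) ** alpha, 0, np.inf)[0] - 1) / (1 - alpha)

sx, sy = tsallis(f), tsallis(g)
# joint entropy of (X, Y): the double integral factorises under independence
sxy = (quad(lambda x: f(x) ** alpha, 0, np.inf)[0]
       * quad(lambda y: g(y) ** alpha, 0, np.inf)[0] - 1) / (1 - alpha)
print(np.isclose(sxy, sx + sy + (1 - alpha) * sx * sy))  # True
```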

Applications of Tsallis entropy to fluxes of cosmic rays, turbulence, folded proteins and many other areas are given in Tsallis and Brigatti [26]. Wilk and Wodarczyk [28] pointed out that there are situations in which the uncertainty can be quantified by Tsallis entropy while the Shannon entropy fails to provide it. Tsallis entropy has also been extensively used in image processing and signal processing (see Tong et al. [24] and Weili et al. [27]).

It is well known that a probability distribution can be specified either by distribution function, or by quantile function defined as

$$\begin{aligned} Q(u)=F^{-1}(u)=\inf \{t: F(t) \ge u\}, 0<u<1. \end{aligned}$$
(1.7)

Note that the quantile function is an efficient alternative to the distribution function in modeling and data analysis (see Nair et al. [15]). While dealing with real-life data, we often encounter probability models that have no closed form distribution functions but do have closed form quantile functions (see Freimer et al. [5] and Hankin and Lee [7]). Hence, studying the properties of Tsallis entropy for such models via (1.5) is difficult, and an alternative quantile-based representation of the Tsallis entropy is required. Accordingly, in the present paper, we introduce the quantile-based Tsallis entropy for MaxRSSU and MinRSSU and study some of its properties.

The paper is organized as follows: In Sect. 2, we obtain the quantile-based Tsallis entropy of MaxRSSU and MinRSSU and their comparison with SRS and RSS. We also provide monotonic properties, stochastic orders and bounds for the proposed measures. In Sect. 3, we consider the cumulative Tsallis entropy associated with MaxRSSU and MinRSSU sampling and study their various properties including some characterization results. In Sect. 4, we discuss an application of cumulative Tsallis entropy under MaxRSSU as well as its non-parametric empirical estimation.

2 Quantile-based Tsallis entropy based on RSS, MaxRSSU and MinRSSU

In this section we study the quantile-based Tsallis entropy for SRS and RSS data sets. Let \(\varvec{X_{SRS}}=\left\{ X_{i}, i=1, \ldots , n\right\} \) denote an SRS of size n from an absolutely continuous distribution with common pdf \(f (\cdot )\), cdf \(F (\cdot )\) and quantile function \( Q(\cdot ) \). For strictly increasing F, we have \( F(Q(u)) = u \), \( 0< u <1 \), which further implies \( f( Q(u)) q(u) =1 \), where f(Q(u)) and \( q(u) = \frac{d}{d u} Q(u) \) are known as the density quantile function and the quantile density function, respectively. Khammar and Jahanshahi [10] defined the quantile-based Tsallis entropy, given by,

$$\begin{aligned} {\mathcal {Q}} S_{\alpha }\left( {\varvec{X}}\right)= & {} \frac{1}{\alpha - 1 }\left[ 1-\int _{0}^{1} f^{\alpha }\left( Q(p) \right) d Q(p) \right] \\= & {} \frac{1}{\alpha - 1 }\left[ 1-\int _{0}^{1} q^{1- \alpha }\left( p \right) dp \right] . \end{aligned}$$

The corresponding quantile-based Tsallis entropy of \(\varvec{X_{\mathrm {SRS}}}\) of size n is given by,

$$\begin{aligned} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {SRS}}}\right)&=\frac{1}{\alpha -1}\left[ 1-\int _{0}^{1} \ldots \int _{0}^{1} q^{1-\alpha }\left( p_{1}\right) \ldots q^{1-\alpha }\left( p_{n}\right) d p_{1} \ldots d p_{n}\right] \nonumber \\&=\frac{1}{1-\alpha }\left[ \prod _{i=1}^{n} \int _{0}^{1} q^{1-\alpha } (p) d p -1\right] \nonumber \\&= \frac{1}{1-\alpha }\left[ \left( \int _{0}^{1} q^{1-\alpha } (p) d p \right) ^{n}-1\right] \nonumber \\&=\frac{1}{1-\alpha }\left( \left[ (1-\alpha ) {\mathcal {Q}}S_{\alpha }\left( {\varvec{X}}\right) +1\right] ^{n}-1\right) . \end{aligned}$$
(2.1)
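The last equality in (2.1) is easy to verify numerically. In the sketch below (ours) we use the exponential distribution, whose quantile density is \( q(u) = 1/(\lambda (1-u)) \); the values of \(\lambda \), \(\alpha \) and n are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

lam, alpha, n = 2.0, 0.7, 4
q = lambda u: 1.0 / (lam * (1.0 - u))

I = quad(lambda u: q(u) ** (1 - alpha), 0, 1)[0]   # equals lam**(alpha-1)/alpha
qs_x = (I - 1) / (1 - alpha)                        # QS_alpha(X)
qs_srs = (I ** n - 1) / (1 - alpha)                 # product form in (2.1)
print(np.isclose(qs_srs, (((1 - alpha) * qs_x + 1) ** n - 1) / (1 - alpha)))
```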

Kumar et al. [12] introduced the Tsallis entropy in terms of the quantile function for the ith order statistic as,

$$\begin{aligned} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{(i: n)}}\right)= & {} \frac{1}{1-\alpha }\left[ \int _{0}^{1}\left[ f^{\alpha }_{(i: n)}(Q(p))\right] d Q(p)-1\right] \\= & {} \frac{1}{1-\alpha }\left[ \int _{0}^{1}\left[ g^{\alpha }_{i}(p) q^{1-\alpha }(p)\right] dp-1\right] , \alpha >0, \alpha \ne 1, \end{aligned}$$

where \( g_{i}(p) \) is the pdf of the beta distribution with parameters i and \( n- i +1 \). Under the perfect ranking assumption, the quantile version of the Tsallis entropy of \( \varvec{X_\mathrm {RSS}} \) of size n then becomes,

$$\begin{aligned} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {RSS}}}\right)&=\frac{1}{1-\alpha }\left( \prod _{i=1}^{n} \left[ \int _{0}^{1}f^{\alpha }_{(i: n)}(Q(p)) d Q(p) \right] -1\right) \nonumber \\&=\frac{1}{1-\alpha }\left( \prod _{i=1}^{n}\left[ (1-\alpha ) {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{(i: n)}}\right) +1\right] -1\right) , \end{aligned}$$
(2.2)

where \(\varvec{X_{(i: n)}}\) is the ith order statistic from a set of size n, with pdf \(f_{(i: n)}(x)\).

Using (1.1), the corresponding quantile-based Tsallis entropy of \(\varvec{X_\mathrm {MaxRSSU}}\) of size n is obtained as

$$\begin{aligned} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {MaxRSSU}}\right)&=\frac{1}{1-\alpha }\left( \prod _{i=1}^{n} \left[ \int _{0}^{1}f^{\alpha }_{(i: i)}(Q(p)) d Q(p) \right] -1\right) \nonumber \\&=\frac{1}{1-\alpha }\left( \prod _{i=1}^{n}\left[ (1-\alpha ) {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{(i: i)}}\right) +1\right] -1\right) \nonumber \\&=\frac{1}{1-\alpha }\left( \prod _{i=1}^{n}\left[ \int _{0}^{1} i^{\alpha } p^{\alpha (i-1)} q^{1-\alpha } (p) d p\right] -1\right) \end{aligned}$$
(2.3)

where,

$$\begin{aligned} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{(i: i)}}\right) =\frac{1}{1-\alpha } \left[ \int _{0}^{1} \frac{p^{\alpha (i-1)}q^{1-\alpha }(p)}{B^{\alpha } (i,1) }dp -1 \right] , \alpha >0, \alpha \ne 1. \end{aligned}$$
(2.4)

The corresponding quantile-based Tsallis entropy for \( \varvec{X_\mathrm {MinRSSU}} \) is obtained as,

$$\begin{aligned} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {MinRSSU}}\right)&=\frac{1}{1-\alpha }\left( \prod _{i=1}^{n} \left[ \int _{0}^{1}f^{\alpha }_{(1: i)}(Q(p)) d Q(p) \right] -1\right) \\&=\frac{1}{1-\alpha }\left( \prod _{i=1}^{n}\left[ \int _{0}^{1} i^{\alpha } (1-p)^{\alpha (i-1)} q^{1-\alpha } (p) d p\right] -1\right) . \end{aligned}$$

Example 2.1

Suppose that X follows the Govindarajulu distribution [6], which does not have a closed form distribution function. Its quantile function and quantile density function are, respectively,

$$\begin{aligned} Q(u) = a \left\{ (b+1) u^{b} - b u^{b+1} \right\} \end{aligned}$$
(2.5)

and

$$\begin{aligned} q(u) = ab (b+1) (1-u) u^{b-1}, \;\; 0 \le u \le 1, \; a,b >0. \end{aligned}$$
(2.6)

The corresponding quantile-based Tsallis entropies of \(\varvec{X_\mathrm {MaxRSSU}}\) and \(\varvec{X_\mathrm {MinRSSU}}\) of size n are given by,

$$\begin{aligned}&{\mathcal {Q}} S_{\alpha }\left( \varvec{X_\mathrm {MaxRSSU}}\right) \\&= \frac{1}{(1- \alpha )} \left[ (n!)^{\alpha }\left( ab(b+1) \right) ^{n(1- \alpha ) } \prod _{i=1}^{n} B\left( \alpha i + b (1- \alpha ) , 2- \alpha \right) -1 \right] \end{aligned}$$

and

$$\begin{aligned}&{\mathcal {Q}} S_{\alpha }\left( \varvec{X_\mathrm {MinRSSU}}\right) \\&= \frac{1}{(1- \alpha )} \left[ (n!)^{\alpha } \left( ab(b+1) \right) ^{n(1- \alpha ) } \prod _{i=1}^{n} \left[ B( b+\alpha -b \alpha , \alpha i + 2 (1- \alpha ) ) \right] -1 \right] . \end{aligned}$$
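The closed form for \( {\mathcal {Q}} S_{\alpha }\left( \varvec{X_\mathrm {MaxRSSU}}\right) \) above can be checked against the defining integrals in (2.3); the sketch below (ours) does so for arbitrary test values of a, b, \(\alpha \) and n.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, factorial

a, b, alpha, n = 2.0, 0.5, 1.5, 3
q = lambda u: a * b * (b + 1) * (1 - u) * u ** (b - 1)   # (2.6)

prod_num = 1.0
for i in range(1, n + 1):   # integrals in (2.3)
    prod_num *= quad(lambda p: i ** alpha * p ** (alpha * (i - 1))
                     * q(p) ** (1 - alpha), 0, 1)[0]
qs_numeric = (prod_num - 1) / (1 - alpha)

prod_beta = np.prod([beta(alpha * i + b * (1 - alpha), 2 - alpha)
                     for i in range(1, n + 1)])
qs_closed = (factorial(n) ** alpha * (a * b * (b + 1)) ** (n * (1 - alpha))
             * prod_beta - 1) / (1 - alpha)
print(np.isclose(qs_numeric, qs_closed))  # True
```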

Theorem 2.1

Let \( \varvec{X_\mathrm {MinRSSU}} \) and \( \varvec{X_\mathrm {MaxRSSU}} \) be the MinRSSU and MaxRSSU from a population with pdf \(f(\cdot )\) and cdf \(F(\cdot )\). Then

1. \(\left[ {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {MinRSSU }}}\right) (1-\alpha )+1\right] \le (n !)^{\alpha }\left[ {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {SRS}}}\right) (1-\alpha )+1\right] \) for \(\alpha >0, \alpha \ne 1\).

2. \(\left[ {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {MaxRSSU }}}\right) (1-\alpha )+1\right] \le (n !)^{\alpha }\left[ {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {SRS}}}\right) (1-\alpha )+1\right] \) for \(\alpha >0, \alpha \ne 1\).

Proof

The proof is straightforward from Theorem 2.1 of Tahmasebi et al. [23]. \(\square \)

We now prove some properties of \({\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {MaxRSSU}}\right) \) and \({\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {MinRSSU}}\right) \) using the following definitions of stochastic orders.

Definition 2.1

The random variable X is said to be smaller than Y in the:

1. Usual stochastic order (\( X \le _{s t} Y \)), if \(P(X \ge x) \le P(Y \ge x)\) for all \(x \in {\mathbb {R}}\),

2. Hazard rate order (\(X \le _{h r} Y\)), if \(\lambda _{X}(x) \ge \lambda _{Y}(x)\) for all x, where \( \lambda _{X}(\cdot ) \) and \( \lambda _{Y}(\cdot ) \) denote the failure rates of X and Y, respectively, with \( \lambda (x) = \frac{f(x)}{{\bar{F}} (x)} \),

3. Hazard quantile order (\( X \le _{HQ} Y \)), if \( H_{X}(u) \ge H_{Y}(u) \; \forall u \in \left( 0,1 \right) \), where \( H_X(u) = \left[ (1 - u) q_X (u)\right] ^{-1} \) is the hazard quantile function,

4. Mean residual quantile function order (\( X \le _{MQ} Y \)), if \( M_{X}(u) \le M_{Y}(u) \; \forall u \in (0,1) \), where \( M_X (u) = (1 - u)^{-1} \int _{u}^{1}{\left( Q_X(p) - Q_X(u)\right) dp} \), and

5. Dispersive order (\(X \le _{d i s p} Y\)), if \(f\left( F^{-1}(u)\right) \ge g\left( G^{-1}(u)\right) \) for all \(u \in (0,1)\), where \(F^{-1}\) and \(G^{-1}\) are the right continuous inverses of F and G, respectively.

Definition 2.2

A non-negative random variable X is said to have increasing (decreasing) failure rate [IFR (DFR)] if \( \lambda (x) \) is increasing (decreasing) in x.

Theorem 2.2

Let X and Y be two non-negative random variables with pdfs \(f(\cdot )\) and \(g(\cdot )\), respectively. Then

1. If \(X \le _{disp}Y\), then \( {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {{MinRSSU}}}\right) \ge (\le ) {\mathcal {Q}}S_{\alpha }\left( \varvec{Y_\mathrm {{MinRSSU}}}\right) \) for \(\alpha >1 \; (0<\alpha < 1)\).

2. If \(X \le _{disp}Y\), then \( {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {{MaxRSSU}}}\right) \ge (\le ) {\mathcal {Q}}S_{\alpha }\left( \varvec{Y_\mathrm {{MaxRSSU}}}\right) \) for \(\alpha >1 \; (0<\alpha < 1)\).

Proof

From \(X \le _{disp}Y\), we have \(q_X(u) \le q_Y(u) \; \forall u \in (0,1) \). Then we have,

$$\begin{aligned} \int _{0}^{1} i^{\alpha } (1-u)^{\alpha (i-1) } q_X^{1-\alpha }(u) du \le \int _{0}^{1} i^{\alpha } (1-u)^{\alpha (i-1) } q_Y^{1-\alpha }(u) du, \end{aligned}$$

which yields part 1. The proof for \( \varvec{X_\mathrm {{MaxRSSU}}} \) is similar and hence omitted. \(\square \)

Theorem 2.3

For continuous non-negative random variables X and Y, if

1. X or Y is DFR and \( X \le _{h r} Y \), or

2. X and Y have the same lower end of support, \( \frac{Q_{X}(u)}{Q_Y(u)} \) is increasing in \( u \in (0,1) \) and \( X \le _{s t} Y \), or

3. \( \frac{M_{X}(u)}{M_Y(u)} \) is increasing in u and \( X \le _{MQ} Y \),

then

1. \( {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {{MinRSSU}}}\right) \ge (\le ) {\mathcal {Q}}S_{\alpha }\left( \varvec{Y_\mathrm {{MinRSSU}}}\right) \) for \(\alpha >1 \; (0<\alpha < 1)\), and

2. \( {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {{MaxRSSU}}}\right) \ge (\le ) {\mathcal {Q}}S_{\alpha }\left( \varvec{Y_\mathrm {{MaxRSSU}}}\right) \) for \(\alpha >1 \; (0<\alpha < 1)\).

Proof

Each of the conditions 1, 2 and 3 implies \( X \le _{HQ} Y \). Now using Lemma 3.3(a) of Khammar and Jahanshahi [10] together with Theorem 2.2, the proof is complete. \(\square \)

Example 2.2

Let X be a random variable following the Govindarajulu distribution with quantile function (2.5) and quantile density function (2.6). Then from (2.1), (2.2) and (2.3), for a sample of size \( n=2 \) we have,

$$\begin{aligned} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {SRS}}}\right)= & {} \frac{1}{\alpha -1} \left[ 1- \left\{ ab(b+1) \right\} ^{2(1-\alpha )}I_1^2\right] , \end{aligned}$$
(2.7)
$$\begin{aligned} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {RSS}}}\right)= & {} \frac{1}{\alpha -1} \left[ 1- \left\{ ab(b+1) \right\} ^{2(1-\alpha )}\frac{I_{1} I_{2}}{B(1,2)}\right] , \end{aligned}$$
(2.8)
$$\begin{aligned} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {MinRSSU}}\right)= & {} \frac{1}{\alpha -1} \left[ 1- 2^{\alpha }\left\{ ab(b+1) \right\} ^{2(1-\alpha )}I_{1} I_{3}\right] , \end{aligned}$$
(2.9)
$$\begin{aligned} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {MaxRSSU}}\right)= & {} \frac{1}{\alpha -1} \left[ 1- 2^{\alpha }\left\{ ab(b+1) \right\} ^{2(1-\alpha )}I_{1} I_{2}\right] , \end{aligned}$$
(2.10)

where \( I_{1} = B\{(b-1)(1-\alpha )+1,2-\alpha \} \), \( I_{2} = B\{(b-1)(1-\alpha )+2,2-\alpha \} \) and \( I_{3}=B\{(b-1)(1-\alpha )+1,2 \} \). Since the parameters of the beta function must be positive, the Tsallis index \( \alpha \) can only take values in \( 0< \alpha < 2, \; \alpha \ne 1 \). The differences between \( {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {SRS}}}\right) \), \( {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {RSS}}}\right) \), \({\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {MinRSSU}}\right) \) and \({\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {MaxRSSU}}\right) \) for \(n = 2 \) are given by,

$$\begin{aligned} \delta _1(\alpha )= & {} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {SRS}}}\right) -{\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {RSS }}}\right) = \frac{\left[ (ab(b+1))^{2(1-\alpha )}\right] I_1 }{(\alpha -1)}\left[ \frac{I_2}{B\{1,2\}}-I_1 \right] ,\\ \delta _2(\alpha )= & {} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {SRS}}}\right) -{\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {MinRSSU}}\right) = \frac{\left[ (ab(b+1))^{2(1-\alpha )}\right] I_1}{(\alpha -1)} \left[ 2^{\alpha }I_3-I_1\right] ,\\ \delta _3(\alpha )= & {} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {SRS}}\right) -{\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {MaxRSSU}}}\right) = \frac{\left[ (ab(b+1))^{2(1-\alpha )}\right] I_1}{(\alpha -1)} \left[ 2^{\alpha }I_2-I_1\right] , \\ \delta _4(\alpha )= & {} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {RSS}}}\right) -{\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {MinRSSU }}}\right) = \frac{\left[ (ab(b+1))^{2(1-\alpha )}\right] I_1}{(\alpha -1)} \left[ 2^{\alpha }I_3-\frac{I_2}{B\{1,2\}}\right] ,\\ \delta _5(\alpha )= & {} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {RSS}}}\right) -{\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {MaxRSSU}}\right) = \frac{\left[ (ab(b+1))^{2(1-\alpha )}\right] I_1I_2}{(\alpha -1)} \left[ 2^{\alpha }-\frac{1}{B\{1,2\}}\right] ,\\ \delta _6(\alpha )= & {} {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {SRS}}\right) -{\mathcal {Q}}S_{\alpha }\left( \varvec{X_{\mathrm {MaxRSSU}}}\right) = \frac{\left[ (ab(b+1))^{2(1-\alpha )}\right] 2^{\alpha } I_1}{(\alpha -1)} \left[ I_3-I_2\right] . \end{aligned}$$
Fig. 1 The graph of \(\delta _{i}(\alpha ) \) for the Govindarajulu distribution with quantile density function (2.6) and parameters \( a= 2 \) and \( b=0.5 \), for \( 1< \alpha < 2 \)

Fig. 2 The graph of \(\delta _{i}(\alpha ) \) for the Govindarajulu distribution with quantile density function (2.6) and parameters \( a= 2 \) and \( b=0.5 \), for \( 0< \alpha < 1 \)

From Figs. 1 and 2, it can be seen that \(\delta _2(\alpha ),\delta _4(\alpha )\) and \( \delta _6(\alpha ) \) are all greater than 0 for all \( 0<\alpha <2 \), while \(\delta _1(\alpha ),\delta _3(\alpha )\) and \( \delta _5(\alpha ) \) are less than 0 for all \( 0<\alpha <2 \). Combining these, we can conclude that \( {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {MaxRSSU}}\right) \ge {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {RSS}}\right) \ge {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {SRS}}\right) \ge {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {MinRSSU}}\right) \) irrespective of \( \alpha \).
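The ordering can also be checked directly from the defining integrals, without the beta-function algebra. The following sketch (ours) evaluates the four entropies for n = 2 with a = 2 and b = 0.5 at a few values of \(\alpha \); the weights are the density-quantile factors \(g(p)\) of each measured unit, as in (2.2) and (2.3).

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 0.5
q = lambda u: a * b * (b + 1) * (1 - u) * u ** (b - 1)   # (2.6)

def qs(weights, alpha):
    # product of integrals of g(p)**alpha * q(p)**(1-alpha), cf. (2.2)-(2.3)
    prod = 1.0
    for g in weights:
        prod *= quad(lambda p: g(p) ** alpha * q(p) ** (1 - alpha), 0, 1)[0]
    return (prod - 1) / (1 - alpha)

for alpha in (0.5, 1.2, 1.5):
    srs = qs([lambda p: 1.0, lambda p: 1.0], alpha)
    rss = qs([lambda p: 2 * (1 - p), lambda p: 2 * p], alpha)
    mx  = qs([lambda p: 1.0, lambda p: 2 * p], alpha)        # MaxRSSU, n = 2
    mn  = qs([lambda p: 1.0, lambda p: 2 * (1 - p)], alpha)  # MinRSSU, n = 2
    print(alpha, round(mx, 4), round(rss, 4), round(srs, 4), round(mn, 4))
```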

In the following theorem, we obtain some sharp bounds for \( {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {{MinRSSU}}}\right) \) and \( {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {{MaxRSSU}}}\right) \). The proof can be obtained from Theorem 2.9 of Tahmasebi et al. [23].

Theorem 2.4

Let \( m_1 = \frac{1}{1-\alpha }\left( \prod _{i=1}^{n}\left[ \int _{\frac{\alpha (i-1)}{\alpha (i-1)+1}}^{1} i^{\alpha }(1-p)^{\alpha (i-1)} q^{1- \alpha }(p) d p\right] -1\right) \) and \(M_1 = \frac{1}{1-\alpha }\left( \prod _{i=1}^{n}\left[ \int _{0}^{\frac{1}{\alpha (i-1)+1}} i^{\alpha } (1-p)^{\alpha (i-1)} q^{1-\alpha }(p) d p\right] -1\right) \). Also let \(m_2=\frac{1}{1-\alpha }\left( \prod _{i=1}^{n}\left[ \int _{\frac{\alpha (i-1)}{\alpha (i-1)+1}}^{1} i^{\alpha } p^{\alpha (i-1)} q^{1- \alpha }(p) d p\right] -1\right) \) and

\(M_2=\frac{1}{1-\alpha }\left( \prod _{i=1}^{n}\left[ \int _{0}^{\frac{1}{\alpha (i-1)+1}} i^{\alpha } p^{\alpha (i-1)} q^{1-\alpha }(p) d p\right] -1\right) \). Then,

1. For \(0<\alpha <1\), we have \( m_1< {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {{MinRSSU}}}\right) <M_1 \) and \( m_2< {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {{MaxRSSU}}}\right) <M_2\).

2. For \(\alpha >1\), we have \( M_1< {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {{MinRSSU}}}\right) <m_1 \) and \(M_2< {\mathcal {Q}}S_{\alpha }\left( \varvec{X_\mathrm {{MaxRSSU}}}\right) <m_2\).

3. If q(p) is non-increasing, then all the inequalities in parts 1 and 2 are reversed.

3 Quantile-based cumulative Tsallis entropy for MaxRSSU and MinRSSU

Sankaran and Sunoj [19] introduced the quantile-based cumulative entropy and its dynamic version, given by

$$\begin{aligned} \zeta =\zeta (X) =-\int _{0}^{1}(\ln p) p d Q(p) =-\int _{0}^{1}(\ln p) p q(p) d p \end{aligned}$$
(3.1)

and

$$\begin{aligned} \zeta (u)=\zeta (X ; Q(u))=\frac{\ln u}{u} \int _{0}^{u} p q(p) d p-\frac{1}{u} \int _{0}^{u}(\ln p) p q(p) d p \end{aligned}$$
(3.2)

respectively. Krishnan et al. [11] proposed a quantile version of the cumulative Tsallis entropy in past lifetime and its dynamic version as,

$$\begin{aligned} \tau _{\alpha } (X) = \frac{1}{\alpha -1 } \left( 1- \int _{ 0}^{1} p^{\alpha } q(p) dp \right) , \alpha >0 , \alpha \ne 1 \end{aligned}$$
(3.3)

and

$$\begin{aligned} \tau _{\alpha }(u)=\tau _{\alpha }(X ; Q(u)) = \frac{1}{\alpha -1 } \left( 1- \int _{ 0}^{u} \left( \frac{p}{u}\right) ^{\alpha } q(p) dp \right) , \alpha >0 , \alpha \ne 1 , 0< u <1 . \end{aligned}$$
(3.4)

Using the quantile-based Tsallis entropy of SRS given in (2.1), we have the quantile-based cumulative Tsallis entropy of SRS in past lifetime as,

$$\begin{aligned} \tau _{\alpha }(\varvec{X_{SRS}}) = \frac{1}{\alpha -1 } \left( 1- \left[ \int _{ 0}^{1} p^{\alpha } q(p) dp \right] ^n \right) , \alpha >0 , \alpha \ne 1. \end{aligned}$$
(3.5)

Krishnan et al. [11] obtained the quantile-based cumulative Tsallis entropy for the first and the nth order statistics in past lifetime as,

$$\begin{aligned} \tau _{\alpha }(\varvec{X_{1;n}} ) = \frac{1}{\alpha -1 } \left( 1- \int _{ 0}^{1} (-1)^{\alpha }(1-p)^{n \alpha } q(p) dp \right) \end{aligned}$$
(3.6)

and

$$\begin{aligned} \tau _{\alpha }(\varvec{X_{n;n}} ) = \frac{1}{\alpha -1 } \left( 1- \int _{ 0}^{1} (-1)^{\alpha }p^{n \alpha } q(p) dp \right) . \end{aligned}$$
(3.7)

The corresponding quantile-based cumulative Tsallis entropies for MinRSSU and MaxRSSU are,

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MinRSSU}} \right) = \frac{1}{\alpha -1} \left[ 1-\prod _{i=1}^{n} \left( \int _{0}^{1} (-1)^{\alpha } (1-p)^{i \alpha } q(p) dp \right) \right] . \end{aligned}$$
(3.8)

and

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) = \frac{1}{\alpha -1} \left[ 1-\prod _{i=1}^{n} \left( \int _{0}^{1} (-1)^{\alpha } p^{i \alpha } q(p) dp \right) \right] . \end{aligned}$$
(3.9)

Theorem 3.1

Let \( \varvec{X_{MaxRSSU}} \) be the MaxRSSU from a population with pdf \( f(\cdot ) \) and cdf \( F(\cdot ) \). Then we have \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \ge \tau _{\alpha }(\varvec{X_{SRS}} ) \) whenever \( n\alpha \) is an even integer greater than 1.

Proof

Since \( 0< p < 1 \), for any \( \alpha > 0 \) we have \( p^{\alpha } \ge p^{i \alpha }, \; i = 1,2,\ldots ,n \). Multiplying by q(p) and integrating over (0, 1), we get

$$\begin{aligned} \int _{0}^{1} p^{\alpha } q(p) dp \ge \int _{0}^{1} p^{ i \alpha } q(p) dp, \; \forall \; i = 1,2,\ldots ,n, \end{aligned}$$

which in turn gives,

$$\begin{aligned} 1-\left[ \int _{0}^{1} p^{\alpha } q(p) dp \right] ^{n} \le 1-\prod _{ i = 1 }^{n} \left[ \int _{0}^{1} p^{ i \alpha } q(p) dp \right] . \end{aligned}$$

Multiplying the above inequality by \( \frac{1}{\alpha -1} \), we obtain the result. \(\square \)

Theorem 3.2

If \(X \le _{d i s p} Y \) or \( X \le _{HQ} Y \), then \( \forall \alpha >1 \), we have, \( \tau _{\alpha } \left( \varvec{X_{MinRSSU}} \right) \ge (\le ) \tau _{\alpha } \left( \varvec{Y_{MinRSSU}} \right) \) and \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \ge (\le ) \tau _{\alpha } \left( \varvec{Y_{MaxRSSU}} \right) \) provided \( n \alpha \) is an even (odd) integer.

Proof

By the dispersive ordering we have \( f ( F^{-1}(u)) \ge g(G^{-1}(u)) \), which implies \( q_X(u) \le q_Y(u) \). The rest of the proof is straightforward and hence omitted. \(\square \)

Example 3.1

Suppose that X follows a power distribution with quantile function \( Q (u) = \gamma u ^{\frac{1}{\beta }} \). Then the quantile-based cumulative Tsallis entropy is \( \tau _{\alpha } (X) = \frac{1}{(\alpha -1 ) }\left[ 1 - \frac{\gamma }{\beta \alpha +1 } \right] \). Using (3.5), the quantile-based cumulative Tsallis entropy for an SRS of size \( n =2 \) is,

$$\begin{aligned} \tau _{\alpha }(\varvec{X_{SRS}}) = \frac{1}{(\alpha -1 ) } \left[ 1-\frac{ \gamma ^2}{(\beta \alpha +1)^2 }\right] . \end{aligned}$$
(3.10)

For the MinRSSU data, the quantile-based cumulative Tsallis entropy for size \( n= 2 \) is given by,

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MinRSSU}} \right) = \frac{1}{(\alpha -1)} \left[ 1-\frac{ \gamma ^2 B(\alpha +1,\frac{1}{\beta }) B(2 \alpha +1 , \frac{1}{\beta })}{\beta ^2 }\right] \; \forall \; \alpha =2,3,4\ldots \;.\nonumber \\ \end{aligned}$$
(3.11)

The difference between \( \varvec{X_{SRS}} \) and \( \varvec{X_{MinRSSU}} \) is

$$\begin{aligned}&\tau _{\alpha }(\varvec{X_{SRS}}) - \tau _{\alpha } \left( \varvec{X_{MinRSSU}} \right) \nonumber \\&\quad = \frac{1}{(\alpha -1)} \left[ \frac{ \gamma ^2 B(\alpha +1,\frac{1}{\beta }) B(2 \alpha +1 , \frac{1}{\beta })}{\beta ^2 } -\frac{ \gamma ^2}{(\beta \alpha +1)^2 } \right] . \end{aligned}$$
(3.12)

Similarly, for MaxRSSU data, the quantile-based cumulative Tsallis entropy for size \( n = 2 \) will be obtained by using (3.9),

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) = \frac{1}{(\alpha -1)} \left[ \frac{2 \beta ^2 \alpha ^2 + 3 \alpha \beta - \gamma ^2 +1}{(\beta \alpha +1 ) (2 \beta \alpha +1 ) } \right] \; \forall \; \alpha =2,3,4\ldots \; .\nonumber \\ \end{aligned}$$
(3.13)

The difference between (3.13) and (3.10), gives,

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) - \tau _{\alpha }(\varvec{X_{SRS}} ) =\frac{ \gamma ^2 \alpha \beta }{(\alpha -1 )( \alpha \beta +1 )^2 (2 \beta \alpha + 1) }. \end{aligned}$$
(3.14)

From equation (3.14) we can conclude that \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) > \tau _{\alpha } \left( \varvec{X_{SRS}} \right) \), since \( \alpha > 1 \). No general relationship holds between \( \tau _{\alpha } \left( \varvec{X_{MinRSSU}} \right) \) and \( \tau _{\alpha } \left( \varvec{X_{SRS}} \right) \), since the sign of (3.12) depends on the parameters chosen.
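A quick numerical confirmation of (3.10), (3.13) and (3.14) (ours; the parameter values are arbitrary and \( \alpha = 2 \) keeps \( (-1)^{\alpha } = 1 \)):

```python
import numpy as np
from scipy.integrate import quad

gamma_, beta_, alpha = 1.5, 2.0, 2
q = lambda u: (gamma_ / beta_) * u ** (1 / beta_ - 1)  # from Q(u) = gamma*u**(1/beta)

A = lambda k: quad(lambda p: p ** (k * alpha) * q(p), 0, 1)[0]  # = gamma/(k*alpha*beta+1)
tau_srs = (1 - A(1) ** 2) / (alpha - 1)      # (3.10)
tau_max = (1 - A(1) * A(2)) / (alpha - 1)    # (3.9) with n = 2, cf. (3.13)
rhs = gamma_**2 * alpha * beta_ / ((alpha - 1) * (alpha * beta_ + 1) ** 2
                                   * (2 * beta_ * alpha + 1))
print(np.isclose(tau_max - tau_srs, rhs))    # (3.14) holds: True
```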

The quantile-based cumulative Tsallis entropy of SRS, MinRSSU and MaxRSSU for \(n=2\) can also be obtained by using,

$$\begin{aligned} \tau _{\alpha }(\varvec{X_{SRS}})= & {} 2\tau _{\alpha } (X) + (1- \alpha ) \tau ^2_{\alpha } (X), \end{aligned}$$
(3.15)
$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MinRSSU}} \right)= & {} \tau _{\alpha }(\varvec{X_{1;2}}) + \tau _{\alpha } (X) \nonumber \\&+ (1- \alpha ) \tau _{\alpha } (X) \tau _{\alpha }(\varvec{X_{1;2}}) \; \forall \; \alpha =2,3,4\ldots \;, \end{aligned}$$
(3.16)

and

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right)= & {} \tau _{\alpha }(\varvec{X_{2;2}}) + \tau _{\alpha } (X) \nonumber \\&+ (1- \alpha ) \tau _{\alpha } (X) \tau _{\alpha }(\varvec{X_{2;2}}) \;\forall \; \alpha =2,3,4\ldots \;. \end{aligned}$$
(3.17)
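These product relations are also straightforward to verify numerically; the sketch below (ours) checks (3.15) and (3.17) for the power distribution of Example 3.1 with an even \( \alpha \).

```python
import numpy as np
from scipy.integrate import quad

gamma_, beta_, alpha = 1.5, 2.0, 2
q = lambda u: (gamma_ / beta_) * u ** (1 / beta_ - 1)

A = lambda k: quad(lambda p: p ** (k * alpha) * q(p), 0, 1)[0]
tau = (1 - A(1)) / (alpha - 1)      # tau_alpha(X), (3.3)
tau22 = (1 - A(2)) / (alpha - 1)    # tau_alpha(X_{2;2}), (3.7) with n = 2

tau_srs = (1 - A(1) ** 2) / (alpha - 1)     # (3.5), n = 2
tau_max = (1 - A(1) * A(2)) / (alpha - 1)   # (3.9), n = 2
print(np.isclose(tau_srs, 2 * tau + (1 - alpha) * tau ** 2))         # (3.15)
print(np.isclose(tau_max, tau22 + tau + (1 - alpha) * tau * tau22))  # (3.17)
```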

3.1 Past lifetime and MaxRSSU

Let \( t_X = \left( t - X \mid X < t\right) \) be a random variable describing the past lifetime at age t. The dynamic version of the cumulative Tsallis entropy for the past lifetime using the quantile function was proposed by Krishnan et al. [11]. The corresponding quantile-based dynamic cumulative past Tsallis entropies of SRS and MaxRSSU are obtained, respectively, as,

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{SRS_n}} ; u \right) = \frac{1}{(\alpha -1)} \left[ 1- \left( \frac{\int _{0}^{u}p^{\alpha } q(p) dp}{u^{\alpha }} \right) ^n \right] , \end{aligned}$$
(3.18)

and

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MaxRSSU_n}} ;u \right) = \frac{1}{(\alpha -1)} \left[ 1- \prod _{i=1}^{n}\left( \frac{\int _{0}^{u}p^{i \alpha } q(p) dp}{u^{ i \alpha }} \right) \right] . \end{aligned}$$
(3.19)

The relationship between the cumulative past Tsallis entropies of SRS and MaxRSSU and \( \tau _{\alpha }(u) \) is given by,

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{SRS_n}} ;u \right) = \frac{1}{(\alpha -1)} \left[ 1- \left\{ 1- ( \alpha -1 ) \tau _{\alpha } (u) \right\} ^n \right] \end{aligned}$$
(3.20)

and

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MaxRSSU_n}};u \right) = \frac{1}{(\alpha -1)} \left[ 1- \prod _{ i=1}^{n}\left\{ 1- ( i \alpha -1 ) \tau _{ i \alpha } (u) \right\} \right] . \end{aligned}$$
(3.21)

where \( \tau _{\alpha } (u) = \frac{1}{(\alpha -1)} \left[ 1- \left( \frac{\int _{0}^{u}p^{ \alpha } q(p) dp}{u^{ \alpha }} \right) \right] \) is the quantile-based cumulative past Tsallis entropy due to Krishnan et al. [11]. Differentiating (3.19) with respect to u, we have,

$$\begin{aligned}&\left( \alpha -1 \right) \frac{d}{du} \tau _{\alpha } \left( \varvec{X_{MaxRSSU_n}} ; u \right) = \nonumber \\&\quad \sum _{i=1}^{n} \left\{ \prod _{j=1; j \ne i }^{n} \int _{0}^{u} \frac{p^{j \alpha } q(p) dp}{u^{j \alpha }} \left\{ \frac{i \alpha }{u} \int _{0}^{u} \frac{p^{i \alpha } q(p) dp}{u^{i\alpha }} - q(u)\right\} \right\} . \end{aligned}$$
(3.22)

Theorem 3.3

If \( X_{(i:n)} \le _{d i s p} Y_{(i:n)} \; \forall \; i =1,2,\ldots ,n \), then \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU_n}} ;u \right) \le (\ge ) \; \tau _{\alpha } \left( \varvec{Y_{MaxRSSU_n}};u \right) \) for \( \alpha >1 \; (0< \alpha < 1) \).

Proof

Assume that \( X_{(i:n)} \le _{d i s p} Y_{(i:n)} \); then \( \tau _{\alpha } \left( \varvec{X_{n;n}} \right) \ge \tau _{\alpha } \left( \varvec{Y_{n;n}} \right) \) for \( \alpha > 1 \) (see Theorem 3.3 in Krishnan et al. [11]). Then we have,

$$\begin{aligned} \prod _{i=1}^{n} ( \alpha -1 ) \tau _{\alpha } \left( \varvec{X_{i;i}} \right) \ge \prod _{i=1}^{n} ( \alpha -1 ) \tau _{\alpha } \left( \varvec{Y_{i;i}} \right) . \end{aligned}$$

The remaining part is straightforward and hence omitted. For \( 0< \alpha < 1 \), the inequality gets reversed. \(\square \)

Apart from comparing random variables based on \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \), this measure also provides characterizations of some well-known probability distributions. The following theorems are to this effect.

Theorem 3.4

The quantile-based dynamic cumulative Tsallis entropy of MaxRSSU in past lifetime takes the form

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MaxRSSU_n}} ; u \right) = \frac{1}{\alpha -1 } \left\{ 1- \prod _{i=1}^{n} C_i R_{i;i}(u) \right\} \end{aligned}$$
(3.23)

if and only if X has a power distribution with quantile function \( Q (u) = \gamma u ^{\frac{1}{\beta }} \), where \( R_{i;i}(u) \) denotes the quantile-based \( i ^{th} \) order mean inactivity time, \( R_{i;i}(u) = \frac{1}{u^{i}} \int _{ 0}^{u} p^i q(p) dp \) and \( C_i = \frac{i \beta + 1}{ i \beta \alpha + 1}\).

Proof

We use induction for the proof of this theorem. When \( n= 1 \), we have \( \tau _{\alpha } \left( X ; u \right) = \frac{1}{\alpha -1 } \left\{ 1- \frac{\beta +1}{ \beta \alpha + 1 } R_{1;1}(u) \right\} \). That is,

$$\begin{aligned} \int _{0}^{u} \frac{p^{\alpha }q(p)}{u^{\alpha }} dp = \frac{\beta +1}{ \beta \alpha + 1 } R_{1;1}(u). \end{aligned}$$
(3.24)

The proof of this base case is available in Krishnan et al. [11]. Now for \( n=2 \), equation (3.23) becomes,

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MaxRSSU_n}} ; u \right) = \frac{1}{\alpha -1 } \left\{ 1- \prod _{i=1}^{2} C_i R_{i;i}(u) \right\} . \end{aligned}$$

Simplifying the above equation and using (3.24) we have,

$$\begin{aligned} \int _{0}^{u} \frac{p^{2\alpha }q(p)}{u^{2\alpha }} dp =C_2 R_{2;2}(u). \end{aligned}$$
(3.25)

The 'if' part of (3.25) follows by direct calculation. To prove the 'only if' part, we first differentiate (3.25) with respect to u, which gives

$$\begin{aligned} u^{2 \alpha } q(u) = C_2 u^{2 \alpha -1}\left( 2 \alpha R_{2;2}(u) + u R^{'}_{2;2}(u)\right) . \end{aligned}$$
(3.26)

Now using the result \( uR^{'}_{2;2}(u) = \frac{1}{\Lambda (u)} -2 R_{2;2}(u) \), where \( \Lambda (u) = (u q(u))^{-1} \), and simplifying, we have

$$\begin{aligned} R_{2;2}(u)\Lambda (u) = K^{*}, \end{aligned}$$
(3.27)

where \( K ^{*} = \frac{1-C_2}{2 C_2(\alpha -1)} \). Differentiating (3.27) with respect to u we get,

$$\begin{aligned} R_{2;2}^{'}(u)\Lambda (u) = -R_{2;2}(u)\Lambda ^{'}(u). \end{aligned}$$
(3.28)

Substituting the relationship \( uR^{'}_{2;2}(u) = \frac{1}{\Lambda (u)} -2 R_{2;2}(u) \) and (3.27) in (3.28),

$$\begin{aligned} R_{2;2}(u)\Lambda ^{'}(u) = \frac{2K^{*}-1}{u}. \end{aligned}$$
(3.29)

Differentiating \( u q(u) \Lambda (u) =1 \), we get

$$\begin{aligned} \Lambda ^{'}(u) =- \Lambda (u) \left[ \frac{1}{u} + \frac{q^{'}(u)}{q(u)}\right] . \end{aligned}$$
(3.30)

Substituting this in (3.29) and simplifying we have,

$$\begin{aligned} K^{*} \left[ \frac{1}{u} + \frac{q^{'}(u)}{q(u)} \right] = \frac{2K^{*}-1}{u}. \end{aligned}$$
(3.31)

Solving (3.31) we have \( q(u) = k_1 u^{2K^{*}-1} \), which is the quantile density function of a power distribution. For the induction step, one repeats the same procedure to complete the proof. \(\square \)

Corollary 3.1

The quantile-based dynamic cumulative Tsallis entropy of MaxRSSU in past lifetime \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU_n}} ; u \right) = C \) is a constant for all n, if and only if X has an exponential distribution with support \( ( - \infty , 0 ) \).

Similar to \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU_n}} ;u \right) \), one can also characterize certain distributions using \( \tau _{\alpha } \left( \varvec{X_{SRS_n}} ;u \right) \).

Theorem 3.5

If X is a continuous random variable such that,

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{SRS_n}} ;u \right) = \frac{1}{\alpha -1 } \left\{ 1- \left[ C(u) R(u)\right] ^n \right\} , \end{aligned}$$
(3.33)

where R(.) is the reversed mean residual quantile function. Then, for every n, R(u) takes the form,

$$\begin{aligned} R(u) = \frac{1}{C(u)}\exp \left\{ \int _{0}^{u} \frac{ \alpha (C(p)-1)}{p(1-C(p))} dp\right\} . \end{aligned}$$

Proof

Solving (3.33) we have the equation,

$$\begin{aligned} \int _{0}^{u} p^{\alpha }q(p)dp = C(u)R(u)u^{\alpha }. \end{aligned}$$

Using Theorem 2.7 of Krishnan et al. [11], the proof is complete. \(\square \)

Corollary 3.2

Let X be a non-negative random variable with quantile function Q(.). Then, for \( \alpha > 1 \),

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{SRS_n}} ;u \right) =\frac{1}{\alpha -1 } \left\{ 1- C \left( R(u)\right) ^ n \right\} \end{aligned}$$
(3.34)

where \( C > \frac{1}{\alpha ^n} \), if and only if X has a power distribution for every n.

Corollary 3.3

\( \tau _{\alpha } \left( \varvec{X_{SRS_n}} ;u \right) =C \) is a constant for every n if and only if X has an exponential distribution with negative support.

Theorem 3.6

If \( \tau _{ \alpha }(u) \) is increasing (decreasing) with respect to u, then

1. \( \tau _{\alpha } \left( \varvec{X_{SRS_n}} ;u \right) \) is also increasing (decreasing) with respect to u.

2. \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU_n}} ;u \right) \) is also increasing (decreasing) with respect to u for all \( \alpha > 1 \).

Proof

If \( \tau _{ \alpha }(u) \) is increasing (decreasing) with respect to u, then for \( u_2 > u_1 \) we have \( \tau _{ \alpha }(u_2) \ge (\le ) \tau _{ \alpha }(u_1) \). Now for \( \alpha > 1\), multiplying both sides by \( -(\alpha -1) \) and adding 1, we have,

$$\begin{aligned} 1-(\alpha -1 ) \tau _{ \alpha }(u_2) \le (\ge ) 1-(\alpha -1 ) \tau _{ \alpha }(u_1). \end{aligned}$$

Raising both sides to the power n, subtracting from 1 and multiplying by \(\frac{1}{\alpha -1} \), we have the result. The proofs for \( \alpha <1 \) and for the second part are similar and hence omitted. \(\square \)

3.2 Residual lifetime and MinRSSU

Let \( t^*_X =\left( X-t \mid X>t \right) \) be a random variable describing the residual lifetime at age t. Then the quantile-based cumulative residual Tsallis entropy is defined as,

$$\begin{aligned} \tau ^{*}_{\alpha }(u) = \frac{1}{\alpha - 1} \left\{ 1- \int _{u}^{1} \frac{(1-p)^{\alpha } q_X(p)dp}{(1-u)^{\alpha }} \right\} . \end{aligned}$$
(3.35)

Using equation (3.35), the corresponding dynamic version of the quantile-based cumulative residual Tsallis entropy for MinRSSU becomes,

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MinRSSU_n}} ;u \right) = \frac{1}{(\alpha -1)} \left[ 1- \prod _{i=1}^{n}\left( \frac{\int _{u}^{1}(1-p)^{i \alpha } q(p) dp}{(1-u)^{ i \alpha }} \right) \right] . \end{aligned}$$
(3.36)

Example 3.2

For the proportional hazards model given by \( h_Y(x) = \theta h_X(x), \; \theta >0 \), where \( h_Y(x) \) and \( h_X(x) \) respectively denote the hazard functions of Y and X, the quantile-based cumulative residual Tsallis entropy for MinRSSU is,

$$\begin{aligned} \tau _{\alpha } \left( \varvec{Y_{MinRSSU_n}} ;u \right)&=\frac{1}{\alpha -1}\left[ 1- \prod _{i=1}^{n}\left( \frac{\int _{u}^{1}(1-p)^{i \alpha } q_Y(p) dp}{(1-u)^{ i \alpha }} \right) \right] \nonumber \\&=\frac{1}{\alpha -1}\left[ 1- \prod _{i=1}^{n}\left[ 1- \left( \alpha -1\right) \tau _{ \alpha } ( \varvec{Y_{1:i}}) \right] \right] \end{aligned}$$
(3.37)

Clearly \( \forall \; i= 1,2,3\ldots ,n \),

$$\begin{aligned} \tau _{ \alpha } ( \varvec{Y_{1:i}})&=\frac{1}{\alpha -1}\left[ 1- \left( \frac{\int _{u}^{1}(1-p)^{i \alpha } q_Y(p) dp}{(1-u)^{ i \alpha }} \right) \right] \nonumber \\&=\frac{1}{\alpha -1}\left[ 1- \left( \frac{\int _{u}^{1}(1-p)^{i \alpha } (1-p)^{\frac{1}{\theta }-1} q_X(1-(1-p)^{\frac{1}{\theta }}) dp}{\theta (1-u)^{ i \alpha }} \right) \right] \nonumber \\&=\frac{1}{\alpha -1}\left[ 1- \left( \frac{\int _{1-(1-u)^{\frac{1}{\theta }}}^{1} (1-z)^{i \alpha \theta } q_X(z) dz }{(1-u)^{ i \alpha }} \right) \right] \nonumber \\&\le \frac{1}{\alpha -1}\left[ 1- \left( \frac{\int _{u}^{1}(1-z)^{i \alpha } q_X(z) dz}{(1-u)^{ i \alpha }} \right) \right] \nonumber \\&= \tau _{ \alpha } ( \varvec{X_{1:i}}) \end{aligned}$$
(3.38)

Combining equations (3.37) and (3.38), we have the inequality

$$\begin{aligned} \tau _{\alpha } \left( \varvec{Y_{MinRSSU_n}} ;u \right) \le \tau _{\alpha } \left( \varvec{X_{MinRSSU_n}} ;u \right) . \end{aligned}$$

In the next theorem we characterize the exponential distribution using \( \tau _{\alpha } \left( \varvec{X_{MinRSSU_n}} ;u \right) \).

Theorem 3.7

\( \varvec{X} \) is exponentially distributed if and only if \( \tau _{\alpha } \left( \varvec{X_{MinRSSU_n}} ;u \right) \) is a constant.

Proof

For the exponential distribution with parameter \( \lambda >0 \),

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MinRSSU_n}} ;u \right) = \frac{1}{\alpha -1}\left[ 1- \prod _{i=1}^{n}\left( \frac{1}{i \alpha \lambda } \right) \right] , \end{aligned}$$
(3.39)

a constant. To prove the ’only if’ part we can write \( \tau _{\alpha } \left( \varvec{X_{MinRSSU_n}} ;u \right) \) as,

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MinRSSU_n}} ;u \right) = \frac{1}{\alpha -1}\left[ 1- \prod _{i=1}^{n}\left[ 1- \left( \alpha -1\right) \tau _{ \alpha } ( \varvec{X_{1:i}}) \right] \right] , \end{aligned}$$
(3.40)

using (3.37). Now, \( \tau _{\alpha } \left( \varvec{X_{MinRSSU_n}} ;u \right) \) being a constant further implies that \( \tau _{ \alpha } ( \varvec{X_{1:i}}) \) is constant \( \forall \; i=1,2,\ldots ,n \), and hence \( \varvec{X} \) is exponential (see Theorem 3.2 of Sunoj et al. [21]). \(\square \)
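The 'if' direction is easy to see numerically as well. The sketch below (ours; \(\lambda \), \(\alpha \) and n are arbitrary) evaluates (3.36) for the exponential quantile density \( q(u) = 1/(\lambda (1-u)) \) at several u and returns the same constant, each factor being \( 1/(i\alpha \lambda ) \) as in (3.39).

```python
import numpy as np
from scipy.integrate import quad

lam, alpha, n = 2.0, 1.5, 3
q = lambda p: 1.0 / (lam * (1.0 - p))   # exponential quantile density

def tau_min(u):
    prod = 1.0
    for i in range(1, n + 1):
        num = quad(lambda p: (1 - p) ** (i * alpha) * q(p), u, 1)[0]
        prod *= num / (1 - u) ** (i * alpha)   # each factor = 1/(i*alpha*lam)
    return (1 - prod) / (alpha - 1)

print([round(tau_min(u), 6) for u in (0.1, 0.4, 0.7)])  # identical values
```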

Suppose \( \tau _{\alpha } \left( \varvec{X_{MinRSSU_n}} ;u \right) \) is increasing in u and \( \phi (\cdot ) \) is a non-negative, increasing and convex function. Then \( \tau _{\alpha } \left( \varvec{\phi (X)_{MinRSSU_n}} ;u \right) \) is also increasing (decreasing) if \( \alpha >1 \; (0<\alpha < 1)\). For example, if X is an exponential random variable with mean \( \lambda \) and \( \phi (X) = X^{\frac{1}{\gamma }} \) with \( 0< \gamma < 1 \), then \( \phi \) satisfies all the desired properties of non-negativity, monotonicity and convexity. Hence we can conclude that \( \tau _{\alpha } \left( \varvec{\phi (X)_{MinRSSU_n}} ;u \right) \) is also increasing when \( \alpha > 1 \). In the next theorem we find bounds for \( \tau _{\alpha } \left( \varvec{X_{MinRSSU_n}} ;u \right) \) using the hazard quantile function H(u).

Theorem 3.8

Let X be a non-negative continuous random variable with quantile function Q(u) and hazard quantile function H(u). If \( \tau _{\alpha } \left( \varvec{X_{MinRSSU_n}} ;u \right) \) is increasing in u, then for \( \alpha >1 \) we have

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MinRSSU_n}} ;u \right) \ge \frac{1}{(\alpha -1)} \left[ 1- \prod _{i=1}^{n}\left( \frac{1}{i \alpha H(u)} \right) \right] , \end{aligned}$$

and for \( \alpha < 1 \),

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MinRSSU_n}} ;u \right) \le \frac{1}{(\alpha -1)} \left[ 1- \prod _{i=1}^{n}\left( \frac{1}{i \alpha H(u)} \right) \right] . \end{aligned}$$

Proof

Using (3.40), for \( \alpha > 1 \) we have,

$$\begin{aligned} \tau _{\alpha } \left( \varvec{X_{MinRSSU_n}} ;u \right)&=\frac{1}{\alpha -1}\left[ 1- \prod _{i=1}^{n}\left[ 1- \left( \alpha -1\right) \tau _{ \alpha } ( \varvec{X_{1:i}}) \right] \right] \nonumber \\&\ge \frac{1}{\alpha -1}\left[ 1- \prod _{i=1}^{n}\left[ 1- \left( \alpha -1\right) \left( \frac{1}{\alpha -1} \left\{ 1- \frac{1}{i \alpha H(u)}\right\} \right) \right] \right] \nonumber \\&= \frac{1}{(\alpha -1)} \left[ 1- \prod _{i=1}^{n}\left( \frac{1}{i \alpha H(u)} \right) \right] . \end{aligned}$$
(3.41)

The proof for \( \alpha < 1 \) is similar hence omitted. \(\square \)

4 Applications of \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \)

4.1 Fixing the sample size associated with MaxRSSU

It is often imperative to decide the size of the sample to be selected from a given population. From (3.9) it can be seen that \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \) converges to the quantity \( \frac{1}{\alpha -1} \) once the sample size n reaches a certain value. This is because, as n increases, \( \prod _{i=1}^{n} \left( \int _{0}^{1} p^{i \alpha } q(p) dp \right) \rightarrow 0 \). This is the stage at which the maximum information, \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \rightarrow \frac{1}{\alpha -1} \), can be obtained from the sample. Hence the convergence property of \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \) can be used as a methodology for fixing the sample size n: we choose the optimum sample size n for which \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \rightarrow \frac{1}{\alpha -1 }\).

In the case of \( \tau _{\alpha } \left( \varvec{X_{SRS}} \right) \), on the other hand, it can be seen from (3.5) that \( \int _{ 0}^{1} p^{\alpha } q(p) dp \) is always positive. As n increases, if \( \int _{ 0}^{1} p^{\alpha } q(p) dp \) lies between 0 and 1, then \( \tau _{\alpha } \left( \varvec{X_{SRS}} \right) \rightarrow \frac{1}{\alpha -1} \), whereas if it exceeds 1, then \( \tau _{\alpha } \left( \varvec{X_{SRS}} \right) \) diverges to \(-\infty \). Thus the convergence or divergence of \( \tau _{\alpha } \left( \varvec{X_{SRS}} \right) \) depends on the parameters of q(p), while for any choice of the parameters in q(p), \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \rightarrow \frac{1}{\alpha -1 } \).

To elucidate the concept further, we consider three probability models, viz. the Govindarajulu, power and skew-lambda distributions, with quantile density functions \( q(u)=ab(b+1)(1-u)u^{b-1} \), \( q(u)= ab u^{\frac{1}{b} - 1} \) and \( q(u)= b(au^{b-1} + (1-u)^{b-1})\), respectively; the Govindarajulu and skew-lambda distributions do not have closed form distribution functions. For each of the three models, we determine the sample size n at which \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \) attains its limiting value \( \frac{1}{\alpha - 1} \). This is carried out for different parameter values, keeping \( \alpha =2 \) fixed; the results are given in Table 1. It can be observed that, as the parameter values increase, the optimum sample size n also increases.

Table 1 Optimum sample size n for which \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU}}\right) \) attains \( \frac{1}{\alpha - 1} \)
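A minimal computational sketch of this rule (ours; the convergence tolerance and the Govindarajulu parameters \( a=2 \), \( b=0.5 \) are arbitrary choices) increases n until \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \) is within tolerance of its limit \( \frac{1}{\alpha -1} \):

```python
import numpy as np
from scipy.integrate import quad

alpha, tol = 2, 1e-4
a, b = 2.0, 0.5
q = lambda u: a * b * (b + 1) * (1 - u) * u ** (b - 1)   # Govindarajulu

def optimum_n(q, alpha, tol, n_max=200):
    prod, n = 1.0, 0
    while n < n_max:
        n += 1
        prod *= quad(lambda p: p ** (n * alpha) * q(p), 0, 1)[0]
        tau = (1 - prod) / (alpha - 1)        # (3.9) with (-1)**alpha = 1
        if abs(tau - 1 / (alpha - 1)) < tol:  # product is effectively zero
            return n
    return n_max

print(optimum_n(q, alpha, tol))
```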

4.2 Empirical plug-in estimator for \( \tau _{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \)

Parzen [16] defined an empirical estimator for the quantile function which is given by,

$$\begin{aligned} \bar{Q_i}(u) = X_{\{r;n\}i} , \; \text {for} \; \frac{r-1}{n}< u < \frac{r}{n}, \; r=1,2,\ldots ,n, \end{aligned}$$
(4.1)

and a smoothed version of (4.1) with the form,

$$\begin{aligned} {\hat{Q}}_{i;n}(u)=n \left( \frac{r}{n} -u \right) X_{\{r-1;n\}i} + n \left( u- \frac{r-1}{n} \right) X_{\{r;n\}i}. \end{aligned}$$
(4.2)

Differentiating (4.2), we can easily obtain the empirical estimator for quantile density function,

$$\begin{aligned} {\bar{q}}_i(u_r) = n \left( X_{\{r;n\}i} - X_{\{r-1;n\}i}\right) \; \text {for} \; \frac{r-1}{n}< u < \frac{r}{n}. \end{aligned}$$
(4.3)

Substituting (4.3) in (3.9) we have the empirical estimator for \( \tau _{ \alpha }\left( \varvec{X_{MaxRSSU}} \right) \), given by

$$\begin{aligned} {\bar{\tau }}_{ \alpha }\left( \varvec{X_{MaxRSSU}} \right) = \frac{1}{\alpha -1} \left[ 1-\prod _{i=1}^{n} \left( \int _{0}^{1} (-1)^{\alpha } (p)^{i \alpha } \bar{q_i}(p) dp \right) \right] . \end{aligned}$$
(4.4)

Replacing the integral in (4.4) by a summation over the grid \( u_j = j/m, \; j=1,2,\ldots ,m \), with \( \Delta u = 1/m \), the empirical estimator modifies to

$$\begin{aligned} {\bar{\tau }}_{ \alpha }\left( \varvec{X_{MaxRSSU}} \right) = \frac{1}{\alpha -1} \left[ 1-\prod _{i=1}^{n} \left( \sum _{j=1}^{m} (-1)^{\alpha } \left( \frac{j}{m}\right) ^{i \alpha } \bar{q_i}(u_j) \, \Delta u \right) \right] , \end{aligned}$$
(4.5)

where \( \bar{q_i}(\cdot ) \) is the quantile density estimator based on the cycle measurements of \( X_{\{i:i\}} \). Since our initial simulations showed some deficiencies of \( {\bar{\tau }}_{ \alpha }\left( \varvec{X_{MaxRSSU}} \right) \) in terms of unbiasedness, we propose a further modification of (4.5) in line with Kazemi et al. [9], given by

$$\begin{aligned} {\hat{\tau }}_{ \alpha }\left( \varvec{X_{MaxRSSU}} \right) = \frac{1}{\alpha -1} \left[ 1-\prod _{i=1}^{n} \left( \sum _{j=1}^{m} (-1)^{\alpha } \left( \frac{j}{m+n+w}\right) ^{i \alpha } \bar{q_i}(u_j) \, \Delta u \right) \right] , \end{aligned}$$
(4.6)

where n is the sample size, m is the number of cycles and w is a constant chosen so that the estimator has optimally low bias and MSE. To investigate the performance of (4.6), we use the Govindarajulu model with quantile density function \( q(u)=ab(b+1)(1-u)u^{b-1} \). As in Sect. 4.1, we take the Tsallis index \( \alpha = 2 \), and we fix the number of cycles m at 10. Table 2 provides the selected value of w, the estimate \( {\hat{\tau }}_{ \alpha }\left( \varvec{X_{MaxRSSU}} \right) \), and the bias and MSE for different parameter combinations of the Govindarajulu distribution. It is evident that the proposed estimator performs well with respect to bias and MSE.
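The following simulation sketch (ours, illustrative only) implements (4.6) for the Govindarajulu model: the maximum of i uniforms has cdf \( u^i \), so the ith unit is generated as \( Q(V) \) with \( V=\max (U_1,\ldots ,U_i) \); under Parzen's estimator the spacings \( X_{(j)}-X_{(j-1)} \) equal \( \bar{q_i}(u_j)\Delta u \); and the values of n, m, w and the parameters are our choices.

```python
import numpy as np
from scipy.special import beta as B

rng = np.random.default_rng(2)
a, b, alpha = 2.0, 0.5, 2
n, m, w = 4, 10, 2

Q = lambda u: a * ((b + 1) * u ** b - b * u ** (b + 1))   # (2.5)

def tau_hat():
    prod = 1.0
    for i in range(1, n + 1):
        # m cycles of the ith MaxRSSU unit: Y_i = Q(V), V = max of i uniforms
        x = np.sort(Q(rng.uniform(size=(m, i)).max(axis=1)))
        spacings = np.diff(np.concatenate(([0.0], x)))    # X_(j) - X_(j-1)
        j = np.arange(1, m + 1)
        prod *= np.sum((j / (m + n + w)) ** (i * alpha) * spacings)
    return (1 - prod) / (alpha - 1)

print(np.mean([tau_hat() for _ in range(1000)]))  # Monte Carlo average of (4.6)

# reference: exact tau via int p**(i*alpha) q(p) dp = a*b*(b+1)*B(i*alpha+b, 2)
prod_true = np.prod([a * b * (b + 1) * B(i * alpha + b, 2) for i in range(1, n + 1)])
print((1 - prod_true) / (alpha - 1))
```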

Table 2 The estimated value, the bias and MSE of \( {\tau }_{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \) using the Govindarajulu distribution
Table 3 The selected w, the estimated value and the bias of \( {\tau }_{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \) for the Aarset [1] data set

Further, we apply the proposed estimation procedure to a real data set, given in Aarset [1], which represents the failure times of 50 devices. Nair et al. [14] fitted the Govindarajulu model with parameters \( a= 93.463 \) and \( b= 2.0915 \). Using these parameter estimates and (4.6), Table 3 provides the value of w, the estimated value and the bias corresponding to each sample size n, which further validates the usefulness of \( {\hat{\tau }}_{\alpha } \left( \varvec{X_{MaxRSSU}} \right) \) in quantifying the uncertainty of a random phenomenon.