1 Introduction

A random variable \(X\) has the log-extended exponential–geometric distribution if it has the probability density function (pdf)

$$\begin{aligned} f\left( {x;\alpha ,\beta } \right) = \frac{{\alpha (1 + \beta )x^{\alpha - 1} }}{{(1 + \beta x^\alpha )^2 }} \end{aligned}$$

with distribution function

$$\begin{aligned} F\left( {x;\alpha ,\beta } \right) = \frac{{(1 + \beta )x^\alpha }}{{1 + \beta x^\alpha }} \end{aligned}$$
(1)

where \(0<x<1\), \(\alpha >0\), \(\beta >0\). We write LEEGD\((\alpha , \beta )\) to denote the distribution defined in (1). This distribution has well-known applications to the populations of cities, the intensities of earthquakes and the sizes of power outages. For further details on its importance and applications, one may refer to Mitzenmacher (2004), Newman (2005), Sornette (2006) and Clauset et al. (2009).
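Since the distribution function in (1) has a closed-form inverse, LEEGD\((\alpha ,\beta )\) variates can be simulated by inversion: solving \(u = F(x;\alpha ,\beta )\) for \(x\) gives \(x = \left( u/(1 + \beta - \beta u)\right) ^{1/\alpha }\). The following Python sketch (the function names are ours, not from the cited works) implements the pdf, the cdf (1) and an inverse-cdf sampler; later sketches reuse these helpers.

```python
import numpy as np

def leegd_pdf(x, alpha, beta):
    """pdf of LEEGD(alpha, beta) on (0, 1)."""
    return alpha * (1 + beta) * x**(alpha - 1) / (1 + beta * x**alpha)**2

def leegd_cdf(x, alpha, beta):
    """cdf (1) of LEEGD(alpha, beta)."""
    return (1 + beta) * x**alpha / (1 + beta * x**alpha)

def leegd_rvs(alpha, beta, size, rng=None):
    """Inverse-cdf sampling: solving u = F(x) gives x = (u/(1 + beta - beta*u))**(1/alpha)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)
    return (u / (1 + beta - beta * u))**(1 / alpha)
```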

Ranked set sampling (RSS) is a sampling technique used when the measurement of sampling units is difficult or expensive in terms of cost, time or other factors, but a small set of units can easily be ranked according to the variable of interest without actual measurement. RSS was introduced by McIntyre (1952), and its mathematical theory was given by Takahasi and Wakimoto (1968). Dell and Clutter (1972) showed that RSS is more efficient than simple random sampling (SRS) even when there are errors in ranking. To reduce ranking errors, Samawi et al. (1996) introduced a modification of RSS called extreme RSS (ERSS). Another RSS scheme, median RSS (MRSS), was investigated by Muttlak (1997). For further introductions to these sampling schemes, refer to Abu-Dayyeh et al. (2013), Hassan (2013), Hussian (2014) and Esemen and Gürler (2018).

In the literature, a number of studies have focused on parametric inference for distributions under RSS and its modifications. Stokes (1995) studied parameter estimation for location-scale family distributions under RSS. Shaibu and Muttlak (2004) studied parameter estimation for the normal, exponential and gamma distributions under modified RSS. Xiaofang et al. (2018) studied parameter estimation for the log-logistic distribution under RSS. However, increasing the efficiency of RSS by increasing the sample size may also increase the errors in ranking. Al-Saleh and Al-Omari (2002) suggested a multistage RSS method that increases the efficiency of RSS for a fixed sample size. See also Jemain and Al-Omari (2006) for estimating the population mean using multistage median ranked set samples, Al-Omari and Jaber (2008) for percentile double ranked set sampling, Al-Hadhrami and Al-Omari (2009) for Bayesian inference on the variance of the normal distribution using moving extremes RSS, Shadid et al. (2011) for best linear unbiased and best linear invariant estimators of the location and scale parameters and the population mean using RSS, Al-Hadhrami and Al-Omari (2012) for Bayes estimation of the mean of the normal distribution using moving ERSS, Al-Omari and Al-Hadhrami (2011) for maximum likelihood estimation of the parameters of a modified Weibull distribution using extreme RSS, Haq et al. (2013) for partial RSS, Haq et al. (2014) for mixed RSS methods, and Al-Omari (2015) for estimating the distribution function using L RSS. For additional results and references, see Sinha et al. (1996), Chen et al. (2013, 2016, 2017), Dey et al. (2017), Chen et al. (2018), Qian et al. (2019), Chen et al. (2019) and He et al. (2019).

In the current paper, the Fisher information matrix of the log-extended exponential–geometric distribution LEEGD\((\alpha ,\beta )\) with parameters \(\alpha \) and \(\beta \) is derived based on simple random sampling (SRS), ranked set sampling (RSS), median RSS (MRSS) and extreme RSS (ERSS). These results are presented in Sects. 2, 3, 4 and 5, respectively. Comparisons and conclusions are presented in Sect. 6, where a real data set is also used for illustration.

2 Fisher Information Matrix in SRS

Let \(\left\{ {X_1 ,X_2,\ldots ,X_m} \right\} \) be a simple random sample of size m from (1). Pedro et al. (2018) obtained the Fisher information matrix based on these samples:

$$\begin{aligned} I_{\mathrm{SRS}} (\alpha ,\beta )& = \left[ {\begin{array}{ll} {I_{11,\,SRS} } &{} {I_{12,\,SRS} } \\ {I_{12,\,SRS} } &{} {I_{22,\,SRS} } \\ \end{array}} \right] \\& = \left[ {\begin{array}{ll} {\frac{m}{{\alpha ^2 }} + \frac{{2m\beta (1 + \beta )}}{{\alpha ^2 }}\Gamma _3 (\beta ,3,1)} &{} { - \frac{{2m(1 + \beta )}}{\alpha }\Gamma _3 (\beta ,2,1)} \\ { - \frac{{2m(1 + \beta )}}{\alpha }\Gamma _3 (\beta ,2,1)} &{} {\frac{m}{{3(1 + \beta )^2 }}} \\ \end{array}} \right] , \end{aligned}$$
(2)

where \(\Gamma _m(z,s,a) = \int _0^1 {\frac{{u^a \log ^{s - 1} (1/u)}}{{(1 + zu)^{m+ 1} }}} \,{\mathrm{d}}u\), \(m= 0,1,2,\ldots \), for any real numbers \(z \ge 0\), \(s\ge 1\) and \(a \ge 0\).
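For numerical work, the integrals \(\Gamma _m(z,s,a)\) can be evaluated by quadrature and substituted into (2). A minimal sketch, assuming SciPy and reusing the imports of the sketch above (the helper names are ours):

```python
from scipy.integrate import quad

def gamma_msa(z, s, a, m):
    """Gamma_m(z, s, a) = int_0^1 u^a * log(1/u)**(s-1) / (1 + z*u)**(m+1) du, by quadrature."""
    val, _ = quad(lambda u: u**a * np.log(1 / u)**(s - 1) / (1 + z * u)**(m + 1), 0, 1)
    return val

def fisher_srs(alpha, beta, m):
    """Fisher information matrix (2) for LEEGD(alpha, beta) under SRS of size m."""
    i11 = m / alpha**2 + 2 * m * beta * (1 + beta) / alpha**2 * gamma_msa(beta, 3, 1, 3)
    i12 = -2 * m * (1 + beta) / alpha * gamma_msa(beta, 2, 1, 3)
    i22 = m / (3 * (1 + beta)**2)
    return np.array([[i11, i12], [i12, i22]])
```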

3 Fisher Information Matrix in RSS

An important advantage of the RSS approach is that it improves the efficiency of estimators of the population parameters by providing a more representative sample from the target population. In this section, we will study the Fisher information matrix for LEEGD\((\alpha ,\beta )\) with model parameters \(\alpha \) and \(\beta \) under RSS. The RSS method suggested by McIntyre (1952) can be summarized as follows: One first draws \(m^2\) units at random from the population and partitions them into m sets of m units. The m units in each set are ranked without making actual measurements. From the first set of m units, the unit ranked lowest is chosen for actual quantification. From the second set of m units, the unit ranked second lowest is measured. The process is continued until the unit ranked largest is measured from the mth set of m units.
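With perfect ranking, one cycle of this procedure can be simulated directly; a sketch using the hypothetical leegd_rvs sampler from the Introduction:

```python
def rss_sample(m, alpha, beta, rng=None):
    """One RSS cycle under perfect ranking: m sets of m units, measure the i-th smallest of set i."""
    rng = np.random.default_rng() if rng is None else rng
    sets = np.sort(leegd_rvs(alpha, beta, (m, m), rng), axis=1)  # ranking within each set
    return sets[np.arange(m), np.arange(m)]                      # diagonal gives X_{i(i)}
```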

Let \(\left\{ {X_{1(1)} ,X_{2(2)} ,\ldots ,X_{m(m)} } \right\} \) be a ranked set sample of size m from (1), then the pdf of \({X_{i(i)} }\) is

$$\begin{aligned} f_{(i)} (x)& = c(i,m)F^{i - 1} (x;\alpha ,\beta )\left[ {1 - F(x;\alpha ,\beta )} \right] ^{m - i} f(x;\alpha ,\beta )\\& = c(i,m)\frac{{\alpha \left( {(1 + \beta )x^\alpha } \right) ^i }}{{x(1 + \beta x^\alpha )^{m + 1} }}(1 - x^\alpha )^{m - i}, \end{aligned}$$

where \(c(i,m) = \frac{{m!}}{{(m - i)!(i - 1)!}}\). The log-likelihood function based on these samples is

$$\begin{aligned} L^{*}_{\mathrm{RSS}}& = d_{0}+ \sum \limits _{i = 1}^m {(i - 1)\ln \frac{{\left( {1 + \beta } \right) x_{_{i(i)} }^\alpha }}{{1 + \beta x_{_{i(i)} }^\alpha }}} \\&\quad+ \sum \limits _{i = 1}^m {(m - i)\ln \frac{{1 - x_{_{i(i)} }^\alpha }}{{1 + \beta x_{_{i(i)} }^\alpha }}} + \sum \limits _{i = 1}^m {\ln \frac{{\alpha \left( {1 + \beta } \right) x_{_{i(i)} }^{\alpha - 1} }}{{\left( {1 + \beta x_{_{i(i)} }^\alpha } \right) ^2 }}}, \end{aligned}$$

where \(d_{0}\) is a constant free of \(\alpha \) and \(\beta \).

Taking the first derivatives of \(L^{*}_{\mathrm{RSS}}\), we have

$$\begin{aligned} \frac{{\partial L^{*}_{\mathrm{RSS}} }}{{\partial \alpha }}&= \frac{m}{\alpha } + \sum \limits _{i = 1}^m {(i - 1)\left( {\ln x_{i(i)} - \frac{{\beta x_{_{i(i)} }^\alpha \ln x_{i(i)} }}{{1 + \beta x_{_{i(i)} }^\alpha }}} \right) } \\&\quad - \sum \limits _{i = 1}^m {(m - i)\left( {\frac{{x_{_{i(i)} }^\alpha \ln x_{i(i)} }}{{1 - x_{_{i(i)} }^\alpha }} + \frac{{\beta x_{_{i(i)} }^\alpha \ln x_{i(i)} }}{{1 + \beta x_{_{i(i)} }^\alpha }}} \right) } \\&\quad +\sum \limits _{i = 1}^m {\ln x_{i(i)} }- 2\sum \limits _{i = 1}^m {\frac{{\beta x_{_{i(i)} }^\alpha \ln x_{i(i)} }}{{1 + \beta x_{_{i(i)} }^\alpha }}} \end{aligned}$$

and

$$\begin{aligned} \frac{{\partial L^{*}_{\mathrm{RSS}}}}{{\partial \beta }}&= \frac{{m\left( {m + 1} \right) }}{{2\left( {1 + \beta } \right) }} - \sum \limits _{i = 1}^m {(i - 1)\frac{{x_{_{i(i)} }^\alpha }}{{1 + \beta x_{_{i(i)} }^\alpha }}} \\&\quad - \sum \limits _{i = 1}^m {(m - i)} \frac{{x_{_{i(i)} }^\alpha }}{{1 + \beta x_{_{i(i)} }^\alpha }} - 2\sum \limits _{i = 1}^m {\frac{{x_{_{i(i)} }^\alpha }}{{1 + \beta x_{_{i(i)} }^\alpha }}} \\&= \frac{{m\left( {m + 1} \right) }}{{2\left( {1 + \beta } \right) }} - \left( {m + 1} \right) \sum \limits _{i = 1}^m {\frac{{x_{_{i(i)} }^\alpha }}{{1 + \beta x_{_{i(i)} }^\alpha }}}. \end{aligned}$$

Taking the second derivatives of \(L^{*}_{\mathrm{RSS}}\), we have

$$\begin{aligned} \frac{{\partial ^2 L^{*}_{\mathrm{RSS}} }}{{\partial \alpha ^2 }}&= - \frac{m}{{\alpha ^2 }} - \beta \sum \limits _{i = 1}^m {(i - 1)\left( {\frac{{x_{_{i(i)} }^\alpha \ln ^2 x_{i(i)} }}{{\left( {1 + \beta x_{_{i(i)} }^\alpha } \right) ^2 }}} \right) } \\&\quad - \sum \limits _{i = 1}^m {(m - i)\left( {\frac{{x_{_{i(i)} }^\alpha \ln ^2 x_{i(i)} }}{{\left( {1 - x_{_{i(i)} }^\alpha } \right) ^2 }} + \frac{{\beta x_{_{i(i)} }^\alpha \ln ^2 x_{i(i)} }}{{\left( {1 + \beta x_{_{i(i)} }^\alpha } \right) ^2 }}} \right) }\\&\quad - 2\beta \sum \limits _{i = 1}^m {\frac{{x_{_{i(i)} }^\alpha \ln ^2 x_{i(i)}}}{{\left( {1 + \beta x_{_{i(i)} }^\alpha } \right) ^2 }}},\\ \frac{{\partial ^2 L^{*}_{\mathrm{RSS}}}}{{\partial \beta ^2 }} &= - \frac{{m\left( {m + 1} \right) }}{{2\left( {1 + \beta } \right) ^2 }} + \left( {m + 1} \right) \sum \limits _{i = 1}^m {\frac{{x_{_{i(i)} }^{2\alpha } }}{{\left( {1 + \beta x_{_{i(i)} }^\alpha } \right) ^2 }}} \end{aligned}$$

and

$$\begin{aligned} \frac{{\partial ^2 L^{*}_{\mathrm{RSS}}}}{{\partial \alpha \partial \beta }} = - \left( {m + 1} \right) \sum \limits _{i = 1}^m {\frac{{x_{_{i(i)} }^\alpha \ln x_{i(i)} }}{{\left( {1 + \beta x_{_{i(i)} }^\alpha } \right) ^2 }}}. \end{aligned}$$

Then, we can obtain

$$\begin{aligned} I_{11,\,RSS}&= - E\left( {\frac{{\partial ^2 L^{*}_{\mathrm{RSS}}}}{{\partial \alpha ^2 }}} \right) \\&= \frac{m}{{\alpha ^2 }} + \beta m(m + 1)E\left( {\frac{{x^\alpha \ln ^2 x}}{{(1 + \beta x^\alpha )^2 }}} \right) + m(m - 1) \\&\quad E\left( {\frac{{x^\alpha \ln ^{2} x}}{{(1 - x^\alpha )^2 }}} \right) - m(m - 1) E\left( {\frac{{x^\alpha \ln ^{2} x}}{{(1 - x^\alpha )^2 }}\frac{{(1 + \beta )x^\alpha }}{{1 + \beta x^\alpha }}} \right) \\&= \frac{m}{{\alpha ^2 }} + \frac{{\beta m(m + 1)(1 + \beta )}}{{\alpha ^2 }}\int _0^1 {\frac{{t\ln ^2 t}}{{\left( {1 + \beta t} \right) ^4 }}} {\mathrm{d}}t \\&\quad + \frac{{m(m - 1)(1 + \beta )}}{{\alpha ^2 }}\int _0^1 {\frac{{t\ln ^2 t}}{{(1 - t)^2 (1 + \beta t)^2 }}} {\mathrm{d}}t \\&\quad - \frac{{m(m - 1)(1 + \beta )^2 }}{{\alpha ^2 }}\int _0^1 {\frac{{t^2 \ln ^2 t}}{{(1 - t)^2 (1 + \beta t)^3 }}} {\mathrm{d}}t \\&= \frac{m}{{\alpha ^2 }} + \frac{{m\left( {m + 1} \right) \beta \left( {1 + \beta } \right) }}{{\alpha ^2 }}\Gamma _3 \left( {\beta ,3,1} \right) \\&\quad + \frac{{2m\left( {m - 1} \right) \zeta \left( 3 \right) }}{{\alpha ^2 \left( {1 + \beta } \right) ^2 }} + \frac{{m\left( {m - 1} \right) \beta }}{{\alpha ^2 \left( {1 + \beta } \right) ^2 }}\Gamma _0 \left( {\beta ,3,0} \right) \\&\quad + \frac{{m\left( {m - 1} \right) \beta }}{{\alpha ^2 \left( {1 + \beta } \right) }}\Gamma _1 \left( {\beta ,3,0} \right) - \frac{{m\left( {m - 1} \right) }}{{\alpha ^2 }}\Gamma _2 \left( {\beta ,3,0} \right) , \end{aligned}$$
(3)
$$\begin{aligned} I_{22,\,RSS}&= - E\left( {\frac{{\partial ^2 L^{*}_{\mathrm{RSS}}}}{{\partial \beta ^2 }}} \right) = \frac{{m\left( {m + 1} \right) }}{{2\left( {1 + \beta } \right) ^2 }} \\&\quad - m\left( {m + 1} \right) E\left( {\frac{{x^{2\alpha } }}{{\left( {1 + \beta x^\alpha } \right) ^2 }}} \right) \\&= \frac{{m\left( {m + 1} \right) }}{{2\left( {1 + \beta } \right) ^2 }} - m\left( {m + 1} \right) \left( {1 + \beta } \right) \Gamma _3 \left( {\beta ,1,2} \right) \end{aligned}$$
(4)

and

$$\begin{aligned} I_{12,\,RSS}& = - E\left( {\frac{{\partial ^2 L_{\mathrm{RSS}}^* }}{{\partial \beta \partial \alpha }}} \right) = m\left( {m + 1} \right) E\left( {\frac{{x^\alpha \ln x}}{{\left( {1 + \beta x^\alpha } \right) ^2 }}} \right) \\&\quad = - m\left( {m + 1} \right) \frac{{\left( {1 + \beta } \right) }}{\alpha }\Gamma _3 \left( {\beta ,2,1} \right) , \end{aligned}$$
(5)

where \(\zeta (3) = \sum \nolimits _{k= 1}^\infty {\frac{1}{{k^3 }}}\). Combining (3), (4) and (5), we can obtain the Fisher information matrix under RSS:

$$\begin{aligned} I_{\mathrm{RSS}} (\alpha ,\beta ) = \left[ {\begin{array}{ll} {I_{11, {\mathrm{RSS}}}} &{} {I_{12, {\mathrm{RSS}}}} \\ {I_{12, {\mathrm{RSS}}}} &{} {I_{22, {\mathrm{RSS}}}} \\ \end{array}} \right] . \end{aligned}$$
(6)
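The entries (3)–(5) involve only \(\zeta (3)\) and \(\Gamma \)-integrals, so (6) is straightforward to evaluate numerically. A sketch reusing the gamma_msa helper of Sect. 2 (names ours; \(\zeta (3)\approx 1.2020569\) is Apéry's constant):

```python
ZETA3 = 1.2020569031595943  # Apery's constant zeta(3)

def fisher_rss(alpha, beta, m):
    """Fisher information matrix (6) for LEEGD(alpha, beta) under RSS, from (3)-(5)."""
    g = lambda sub, s, a: gamma_msa(beta, s, a, sub)  # Gamma_sub(beta, s, a)
    i11 = (m / alpha**2
           + m * (m + 1) * beta * (1 + beta) / alpha**2 * g(3, 3, 1)
           + 2 * m * (m - 1) * ZETA3 / (alpha**2 * (1 + beta)**2)
           + m * (m - 1) * beta / (alpha**2 * (1 + beta)**2) * g(0, 3, 0)
           + m * (m - 1) * beta / (alpha**2 * (1 + beta)) * g(1, 3, 0)
           - m * (m - 1) / alpha**2 * g(2, 3, 0))
    i22 = m * (m + 1) / (2 * (1 + beta)**2) - m * (m + 1) * (1 + beta) * g(3, 1, 2)
    i12 = -m * (m + 1) * (1 + beta) / alpha * g(3, 2, 1)
    return np.array([[i11, i12], [i12, i22]])
```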

4 Fisher Information Matrix in MRSS

The MRSS procedure, which is another RSS scheme, was investigated by Muttlak (1997). The procedure is as follows: Randomly draw \(m^{2}\) units from the infinite population for which the unknown parameter is to be estimated, and randomly partition them into m sets of m units. If m is even, select the \(\left( \frac{m}{2}\right) \)th ranked unit from each of the first \(\frac{m}{2}\) sets for actual measurement, and select the \(\left( {\frac{m}{{\mathrm{2}}}{\mathrm{+ 1}}} \right) \)th ranked unit from each of the remaining sets. Such a ranked set sample of size m is denoted by MRSSE. If m is odd, the \(\left( \frac{m+1}{2}\right) \)th ranked unit is measured from each set of m units. Such a ranked set sample of size m is denoted by MRSSO. In this section, we will study the Fisher information matrix for LEEGD\((\alpha ,\beta )\) with model parameters \(\alpha \) and \(\beta \) under MRSS.

Let \( \{X_{1(\frac{m}{2})}, X_{2(\frac{m}{2})}, \ldots ,X_{m(\frac{m}{2} + 1)} \}\) be an MRSSE of size m from (1), then the pdfs of \({X_{i\left( \frac{m}{2}\right) }}(i=1,2,\ldots ,\frac{m}{2})\) and \({X_{i(\frac{m}{2}+1)} }(i=\frac{m}{2}+1,\frac{m}{2}+2,\ldots ,m)\) are, respectively,

$$\begin{aligned} f_{(\frac{m}{2})} \left( x \right) = c\left(\frac{m}{2},m\right)\alpha \left( {1 + \beta } \right) ^{\frac{m}{2}} \frac{{(1 - x^\alpha )^{\frac{m}{2}} }}{{(1 + \beta x^\alpha )^{m + 1} }}x^{\frac{{\alpha m - 2}}{2}} \end{aligned}$$

and

$$\begin{aligned} f_{(\frac{m}{2} + 1)} \left( x \right) = c\left( \frac{m}{2} + 1,m\right) \alpha \left( {1 + \beta } \right) ^{\frac{m}{2} + 1} \frac{{(1 - x^\alpha )^{\frac{m}{2} - 1} }}{{(1 + \beta x^\alpha )^{m + 1} }}x^{\frac{{\alpha (m + 2) - 2}}{2}}. \end{aligned}$$

The log-likelihood function based on these samples is

$$\begin{aligned} L^{*}_{\mathrm{MRSSE}}&= d_{1}+ m\ln \alpha + \frac{{m(m + 1)}}{2}\ln (1 + \beta ) \\&\quad + \frac{{\alpha m - 2}}{2}\sum \limits _{i = 1}^{m/2} {\ln x_{i(\frac{m}{2})} } \\&\quad + \frac{m}{2}\sum \limits _{i = 1}^{m/2} {\ln \left( 1 - x_{_{i(\frac{m}{2})} }^\alpha \right) }\\&\quad - \left( {m + 1} \right) \sum \limits _{i = 1}^{m/2} {\ln \left( 1 + \beta x_{_{i(\frac{m}{2})} }^\alpha \right) } \\&\quad + \frac{{\alpha (m + 2) - 2}}{2}\sum \limits _{i = \frac{m}{2} + 1}^m {\ln x_{i(\frac{m}{2} + 1)} } \\&\quad + \frac{{m - 2}}{2}\sum \limits _{i = \frac{m}{2} + 1}^m {\ln \left( 1 - x_{_{i(\frac{m}{2} + 1)} }^\alpha \right) }\\&\quad- (m + 1) \sum \limits _{i = \frac{m}{2} + 1}^m {\ln \left( 1 + \beta x_{_{i(\frac{m}{2} + 1)} }^\alpha \right) }, \end{aligned}$$

where \(d_{1}\) is a constant free of \(\alpha \) and \(\beta \).

Then, the first derivative of \(L^{*}_{\mathrm{MRSSE}}\) can be written as

$$\begin{aligned} \frac{{\partial L^{*}_{\mathrm{MRSSE}}}}{{\partial \alpha }}&= \frac{m}{\alpha } + \frac{m}{2}\sum \limits _{i = 1}^{m/2} {\ln x_{i(\frac{m}{2})} } - \frac{m}{2}\sum \limits _{i = 1}^{m/2} {\frac{{x_{i(\frac{m}{2})}^\alpha \ln x_{i(\frac{m}{2})} }}{{1 - x_{i(\frac{m}{2})}^\alpha }}} - \beta (m + 1)\sum \limits _{i = 1}^{m/2} {\frac{{x_{i(\frac{m}{2})}^\alpha \ln x_{i(\frac{m}{2})} }}{{1 + \beta x_{i(\frac{m}{2})}^\alpha }}} \\&\quad + \frac{{m + 2}}{2}\sum \limits _{i = \frac{m}{2} + 1}^m {\ln x_{i(\frac{m}{2} + 1)} } - \frac{{m - 2}}{2}\sum \limits _{i = \frac{m}{2} + 1}^m {\frac{{x_{i(\frac{m}{2} + 1)}^\alpha \ln x_{i(\frac{m}{2} + 1)} }}{{1 - x_{i(\frac{m}{2} + 1)}^\alpha }}} \\&\quad - \beta (m + 1)\sum \limits _{i = \frac{m}{2} + 1}^m {\frac{{x_{i(\frac{m}{2} + 1)}^\alpha \ln x_{i(\frac{m}{2} + 1)} }}{{1 + \beta x_{i(\frac{m}{2} + 1)}^\alpha }}} \end{aligned}$$

and

$$\begin{aligned} \frac{{\partial L^{*}_{\mathrm{MRSSE}}}}{{\partial \beta }}& = \frac{{m(m + 1)}}{{2(1 + \beta )}} - (m + 1)\sum \limits _{i = 1}^{m/2} {\frac{{x_{i({\textstyle {m \over 2}})}^\alpha }}{{1 + \beta x_{i({\textstyle {m \over 2}})}^\alpha }}} \\&\quad - (m + 1)\sum \limits _{i = {\textstyle {m \over 2}} + 1}^m {\frac{{x_{i({\textstyle {m \over 2}} + 1)}^\alpha }}{{1 + \beta x_{i({\textstyle {m \over 2}} + 1)}^\alpha }}}. \end{aligned}$$

The second derivative of \(L^{*}_{\mathrm{MRSSE}}\) can be written as

$$\begin{aligned} \frac{{\partial ^2 L^{*}_{\mathrm{MRSSE}}}}{{\partial \alpha ^2}}&= - \frac{m}{{\alpha ^2 }} - \frac{m}{2}\sum \limits _{i = 1}^{m/2} {\frac{{x_{i(\frac{m}{2})}^\alpha \ln ^2 x_{i(\frac{m}{2})} }}{{\left( 1 - x_{i(\frac{m}{2})}^\alpha \right) ^2 }}} - \beta (m + 1)\sum \limits _{i = 1}^{m/2} {\frac{{x_{i(\frac{m}{2})}^\alpha \ln ^2 x_{i(\frac{m}{2})} }}{{\left( 1 + \beta x_{i(\frac{m}{2})}^\alpha \right) ^2 }}} \\&\quad - \frac{{m - 2}}{2}\sum \limits _{i = \frac{m}{2} + 1}^m {\frac{{x_{i(\frac{m}{2} + 1)}^\alpha \ln ^2 x_{i(\frac{m}{2} + 1)} }}{{\left( 1 - x_{i(\frac{m}{2} + 1)}^\alpha \right) ^2 }}} - \beta (m + 1)\sum \limits _{i = \frac{m}{2} + 1}^m {\frac{{x_{i(\frac{m}{2} + 1)}^\alpha \ln ^2 x_{i(\frac{m}{2} + 1)} }}{{\left( 1 + \beta x_{i(\frac{m}{2} + 1)}^\alpha \right) ^2 }}},\\ \frac{{\partial ^2 L^{*}_{\mathrm{MRSSE}}}}{{\partial \beta ^2 }}&= - \frac{{m(m + 1)}}{{2(1 + \beta )^2 }} + (m + 1)\sum \limits _{i = 1}^{m/2} {\frac{{x_{i(\frac{m}{2})}^{2\alpha } }}{{\left( 1 + \beta x_{i(\frac{m}{2})}^\alpha \right) ^2 }}} + (m + 1)\sum \limits _{i = \frac{m}{2} + 1}^m {\frac{{x_{i(\frac{m}{2} + 1)}^{2\alpha } }}{{\left( 1 + \beta x_{i(\frac{m}{2} + 1)}^\alpha \right) ^2 }}} \end{aligned}$$

and

$$\begin{aligned} \frac{{\partial ^2 L^{*}_{\mathrm{MRSSE}}}}{{\partial \alpha \partial \beta }}& = - \left( {m + 1} \right) \sum \limits _{i = 1}^{m/2} {\frac{{x_{_{i(\frac{m}{2})} }^\alpha \ln x_{i({\textstyle {m \over 2}})} }}{{\left( 1 + \beta x_{_{i(\frac{m}{2})} }^\alpha \right) ^2 }}} \\&\quad - \left( {m + 1} \right) \sum \limits _{i = {\textstyle {m \over 2}} + 1}^m {\frac{{x_{_{i(\frac{m}{2} + 1)} }^\alpha \ln x_{i({\textstyle {m \over 2}} + 1)} }}{{\left( 1 + \beta x_{_{i(\frac{m}{2} + 1)} }^\alpha \right) ^2 }}}. \end{aligned}$$

Then, we can obtain

$$\begin{aligned} I_{11,\,MRSSE}&=-E\left( {\frac{{\partial ^2 L^{*}_{\mathrm{MRSSE}}}}{{\partial \alpha ^2 }}} \right) \\&= \frac{m}{{\alpha ^2 }} + \frac{{m^2 }}{4}E\left( {\frac{{x_{(\frac{m}{2})}^\alpha \ln ^2 x_{(\frac{m}{2})} }}{{\left( 1 - x_{(\frac{m}{2})}^\alpha \right) ^2 }}} \right) + \frac{{\beta m(m + 1)}}{2}E\left( {\frac{{x_{(\frac{m}{2})}^\alpha \ln ^2 x_{(\frac{m}{2})} }}{{\left( 1 + \beta x_{(\frac{m}{2})}^\alpha \right) ^2 }}} \right) \\&\quad + \frac{{m(m - 2)}}{4}E\left( {\frac{{x_{(\frac{m}{2} + 1)}^\alpha \ln ^2 x_{(\frac{m}{2} + 1)} }}{{\left( 1 - x_{(\frac{m}{2} + 1)}^\alpha \right) ^2 }}} \right) + \frac{{\beta m(m + 1)}}{2}E\left( {\frac{{x_{(\frac{m}{2} + 1)}^\alpha \ln ^2 x_{(\frac{m}{2} + 1)} }}{{\left( 1 + \beta x_{(\frac{m}{2} + 1)}^\alpha \right) ^2 }}} \right) \\&= \frac{m}{{\alpha ^2 }} + \frac{{m^2 }}{{4\alpha ^2 }}c\left( \frac{m}{2},m\right) \left( {1 + \beta } \right) ^{\frac{m}{2}} \int _0^1 {\frac{{t^{\frac{m}{2}} \ln ^2 t\,(1 - t)^{\frac{m}{2} - 2} }}{{(1 + \beta t)^{m + 1} }}} \,{\mathrm{d}}t \\&\quad + \frac{{\beta m(m + 1)}}{{2\alpha ^2 }}c\left( \frac{m}{2},m\right) \left( {1 + \beta } \right) ^{\frac{m}{2}} \int _0^1 {\frac{{t^{\frac{m}{2}} \ln ^2 t\,(1 - t)^{\frac{m}{2}} }}{{(1 + \beta t)^{m + 3} }}} \,{\mathrm{d}}t \\&\quad + \frac{{m(m - 2)}}{{4\alpha ^2 }}c\left( \frac{m}{2} + 1,m\right) \left( {1 + \beta } \right) ^{\frac{m}{2} + 1} \int _0^1 {\frac{{t^{\frac{m}{2} + 1} \ln ^2 t\,(1 - t)^{\frac{m}{2} - 3} }}{{(1 + \beta t)^{m + 1} }}} \,{\mathrm{d}}t \\&\quad + \frac{{\beta m(m + 1)}}{{2\alpha ^2 }}c\left( \frac{m}{2} + 1,m\right) \left( {1 + \beta } \right) ^{\frac{m}{2} + 1} \int _0^1 {\frac{{t^{\frac{m}{2} + 1} \ln ^2 t\,(1 - t)^{\frac{m}{2} - 1} }}{{(1 + \beta t)^{m + 3} }}} \,{\mathrm{d}}t \\&= \frac{m}{{\alpha ^2 }} + \frac{{m^2 }}{{4\alpha ^2 }}c\left( \frac{m}{2},m\right) \left( {1 + \beta } \right) ^{\frac{m}{2}} \sum \limits _{k = 0}^{(m - 4)/2} {\left( {\begin{array}{c} \frac{{m - 4}}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^k \Gamma _m \left( \beta ,3,\frac{m}{2} + k\right) \\&\quad + \frac{{\beta m(m + 1)}}{{2\alpha ^2 }}c\left( \frac{m}{2},m\right) \left( {1 + \beta } \right) ^{\frac{m}{2}} \sum \limits _{k = 0}^{m/2} {\left( {\begin{array}{c} \frac{m}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^k \Gamma _{m + 2} \left( \beta ,3,\frac{m}{2} + k\right) \\&\quad + \frac{{m(m - 2)}}{{4\alpha ^2 }}c\left( \frac{m}{2} + 1,m\right) \left( {1 + \beta } \right) ^{\frac{m}{2} + 1} \sum \limits _{k = 0}^{(m - 6)/2} {\left( {\begin{array}{c} \frac{{m - 6}}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^k \Gamma _m \left( \beta ,3,\frac{{m + 2}}{2} + k\right) \\&\quad + \frac{{\beta m(m + 1)}}{{2\alpha ^2 }}c\left( \frac{m}{2} + 1,m\right) \left( {1 + \beta } \right) ^{\frac{m}{2} + 1} \sum \limits _{k = 0}^{(m - 2)/2} {\left( {\begin{array}{c} \frac{{m - 2}}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^k \Gamma _{m + 2} \left( \beta ,3,\frac{{m + 2}}{2} + k\right) , \end{aligned}$$
(7)
$$\begin{aligned} I_{22,\,MRSSE}&=-E\left( {\frac{{\partial ^2 L^{*}_{{\mathrm{MRSSE}}}}}{{\partial \beta ^2 }}} \right) \\&= \frac{{m(m + 1)}}{{2(1 + \beta )^2 }} - \frac{{m(m + 1)}}{2}E\left( {\frac{{x_{(\frac{m}{2})}^{2\alpha } }}{{\left( 1 + \beta x_{(\frac{m}{2})}^\alpha \right) ^2 }}} \right) - \frac{{m(m + 1)}}{2}E\left( {\frac{{x_{(\frac{m}{2} + 1)}^{2\alpha } }}{{\left( 1 + \beta x_{(\frac{m}{2} + 1)}^\alpha \right) ^2 }}} \right) \\&= \frac{{m(m + 1)}}{{2(1 + \beta )^2 }} - \frac{1}{2}m(m + 1)(1 + \beta )^{\frac{m}{2}} c\left( \frac{m}{2},m\right) \int _0^1 {\frac{{t^{\frac{{m + 2}}{2}} (1 - t)^{\frac{m}{2}} }}{{\left( {1 + \beta t} \right) ^{m + 3} }}\,{\mathrm{d}}t} \\&\quad - \frac{1}{2}m(m + 1)(1 + \beta )^{\frac{m}{2} + 1} c\left( \frac{m}{2} + 1,m\right) \int _0^1 {\frac{{t^{\frac{{m + 4}}{2}} (1 - t)^{\frac{m}{2} - 1} }}{{\left( {1 + \beta t} \right) ^{m + 3} }}\,{\mathrm{d}}t} \\&= \frac{{m(m + 1)}}{{2(1 + \beta )^2 }} - \frac{1}{2}m(m + 1)(1 + \beta )^{\frac{m}{2}} c\left( \frac{m}{2},m\right) \sum \limits _{k = 0}^{m/2} {\left( {\begin{array}{c} \frac{m}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^k \Gamma _{m + 2} \left( \beta ,1,\frac{{m + 2}}{2} + k\right) \\&\quad - \frac{1}{2}m(m + 1)(1 + \beta )^{\frac{m}{2} + 1} c\left( \frac{m}{2} + 1,m\right) \sum \limits _{k = 0}^{(m - 2)/2} {\left( {\begin{array}{c} \frac{{m - 2}}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^k \Gamma _{m + 2} \left( \beta ,1,\frac{{m + 4}}{2} + k\right) \end{aligned}$$
(8)

and

$$\begin{aligned} I_{12,\,MRSSE}&=-E\left( {\frac{{\partial ^2 L^{*}_{\mathrm{MRSSE}}}}{{\partial \alpha \partial \beta }}} \right) \\&= \frac{{m\left( {m + 1} \right) }}{2}E\left( {\frac{{x_{(\frac{m}{2})}^\alpha \ln x_{(\frac{m}{2})} }}{{\left( 1 + \beta x_{(\frac{m}{2})}^\alpha \right) ^2 }}} \right) + \frac{{m\left( {m + 1} \right) }}{2}E\left( {\frac{{x_{(\frac{m}{2} + 1)}^\alpha \ln x_{(\frac{m}{2} + 1)} }}{{\left( 1 + \beta x_{(\frac{m}{2} + 1)}^\alpha \right) ^2 }}} \right) \\&= \frac{1}{{2\alpha }}m\left( {m + 1} \right) \left( {1 + \beta } \right) ^{\frac{m}{2}} c\left( \frac{m}{2},m\right) \int _0^1 {\frac{{t^{\frac{m}{2}} (1 - t)^{\frac{m}{2}} \ln t}}{{\left( {1 + \beta t} \right) ^{m + 3} }}} \,{\mathrm{d}}t \\&\quad + \frac{1}{{2\alpha }}m\left( {m + 1} \right) \left( {1 + \beta } \right) ^{\frac{m}{2} + 1} c\left( \frac{m}{2} + 1,m\right) \int _0^1 {\frac{{t^{\frac{m}{2} + 1} (1 - t)^{\frac{m}{2} - 1} \ln t}}{{\left( {1 + \beta t} \right) ^{m + 3} }}} \,{\mathrm{d}}t \\&= \frac{1}{{2\alpha }}m\left( {m + 1} \right) \left( {1 + \beta } \right) ^{\frac{m}{2}} c\left( \frac{m}{2},m\right) \sum \limits _{k = 0}^{m/2} {\left( {\begin{array}{c} \frac{m}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^{k + 1} \Gamma _{m + 2} \left( \beta ,2,\frac{m}{2} + k\right) \\&\quad + \frac{1}{{2\alpha }}m\left( {m + 1} \right) \left( {1 + \beta } \right) ^{\frac{m}{2} + 1} c\left( \frac{m}{2} + 1,m\right) \sum \limits _{k = 0}^{(m - 2)/2} {\left( {\begin{array}{c} \frac{{m - 2}}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^{k + 1} \Gamma _{m + 2} \left( \beta ,2,\frac{{m + 2}}{2} + k\right) . \end{aligned}$$
(9)

Combining (7), (8) and (9), we can obtain the Fisher information matrix based on MRSSE:

$$\begin{aligned} I_{\mathrm{MRSSE}} (\alpha ,\beta ) = \left[ {\begin{array}{ll} {I_{11,\,MRSSE}} &{} {I_{12,\,MRSSE}} \\ {I_{12,\,MRSSE}} &{} {I_{22,\,MRSSE}} \\ \end{array}} \right] . \end{aligned}$$
(10)

Let \(\{X_{1(\frac{{m + 1}}{2})}, X_{2(\frac{{m + 1}}{2})}, \ldots , X_{m(\frac{{m + 1}}{2})}\}\) be an MRSSO of size m from (1), the pdf of \(X_{i(\frac{{m + 1}}{2})}\) is

$$\begin{aligned} f_{(\frac{{m + 1}}{2})} \left( x \right)& = c\left( \frac{{m + 1}}{2},m\right) \alpha \left( {1 + \beta } \right) ^{\frac{{m + 1}}{2}}\\&\quad \frac{{x^{\frac{{\alpha (m + 1) - 2}}{2}} (1 - x^\alpha )^{\frac{{m - 1}}{2}} }}{{(1 + \beta x^\alpha )^{m + 1} }}. \end{aligned}$$

Based on these samples, we can obtain

$$\begin{aligned} I_{11,\,MRSSO}&= \frac{m}{{\alpha ^2 }} + \frac{1}{{2\alpha ^2 }}m(m - 1)\left( {1 + \beta } \right) ^{\frac{{m + 1}}{2}} c\left( \frac{{m + 1}}{2},m\right) \sum \limits _{k = 0}^{(m - 5)/2} {\left( {\begin{array}{c} \frac{{m - 5}}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^k \Gamma _m \left( \beta ,3,\frac{{m + 1}}{2} + k\right) \\&\quad + \frac{1}{{\alpha ^2 }}m(m + 1)\beta \left( {1 + \beta } \right) ^{\frac{{m + 1}}{2}} c\left( \frac{{m + 1}}{2},m\right) \sum \limits _{k = 0}^{(m - 1)/2} {\left( {\begin{array}{c} \frac{{m - 1}}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^k \Gamma _{m + 2} \left( \beta ,3,\frac{{m + 1}}{2} + k\right) , \end{aligned}$$
(11)
$$\begin{aligned} I_{22,\,MRSSO}&= \frac{{m(m + 1)}}{{2(1 + \beta )^2 }} - m(m + 1)c\left( \frac{{m + 1}}{2},m\right) \left( {1 + \beta } \right) ^{\frac{{m + 1}}{2}} \sum \limits _{k = 0}^{(m - 1)/2} {\left( {\begin{array}{c} \frac{{m - 1}}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^k \Gamma _{m + 2} \left( \beta ,1,\frac{{m + 3}}{2} + k\right) \end{aligned}$$
(12)

and

$$\begin{aligned} I_{12,\,MRSSO}&= \frac{1}{\alpha }m(m + 1)\left( {1 + \beta } \right) ^{\frac{{m + 1}}{2}} c\left( \frac{{m + 1}}{2},m\right) \sum \limits _{k = 0}^{(m - 1)/2} {\left( {\begin{array}{c} \frac{{m - 1}}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^{k + 1} \Gamma _{m + 2} \left( \beta ,2,\frac{{m + 1}}{2} + k\right) . \end{aligned}$$
(13)

The derivations of (11), (12) and (13) are similar to those of (7), (8) and (9).

Combining (11), (12) and (13), we can obtain the Fisher information matrix based on MRSSO:

$$\begin{aligned} I_{MRSSO} (\alpha ,\beta ) = \left[ {\begin{array}{*{20}c} {I_{11,{\mathrm{MRSSO}}}} &{} {I_{12, {\mathrm{MRSSO}}}} \\ {I_{12, {\mathrm{MRSSO}}}} &{} {I_{22, {\mathrm{MRSSO}}}} \\ \end{array}} \right] . \end{aligned}$$
(14)
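For odd m, the entries (11)–(13) are finite sums of \(\Gamma \)-integrals, so (14) can be evaluated directly; a sketch reusing the gamma_msa helper of Sect. 2 (the MRSSE matrix (10) is assembled analogously from (7)–(9)):

```python
from math import comb, factorial

def c_im(i, m):
    """c(i, m) = m! / ((m - i)! (i - 1)!)."""
    return factorial(m) // (factorial(m - i) * factorial(i - 1))

def fisher_mrsso(alpha, beta, m):
    """Fisher information matrix (14) for LEEGD(alpha, beta) under MRSSO (m odd), from (11)-(13)."""
    r = (m + 1) // 2                         # median rank (m + 1)/2
    cc = c_im(r, m) * (1 + beta)**r          # c((m+1)/2, m) * (1 + beta)^((m+1)/2)
    i11 = m / alpha**2
    for k in range((m - 5) // 2 + 1):        # first sum in (11); empty when m = 3
        i11 += (m * (m - 1) / (2 * alpha**2) * cc * comb((m - 5) // 2, k)
                * (-1)**k * gamma_msa(beta, 3, r + k, m))
    i22 = m * (m + 1) / (2 * (1 + beta)**2)
    i12 = 0.0
    for k in range((m - 1) // 2 + 1):        # remaining sums in (11), (12) and (13)
        b = comb((m - 1) // 2, k) * (-1)**k
        i11 += m * (m + 1) * beta / alpha**2 * cc * b * gamma_msa(beta, 3, r + k, m + 2)
        i22 -= m * (m + 1) * cc * b * gamma_msa(beta, 1, r + 1 + k, m + 2)
        i12 -= m * (m + 1) / alpha * cc * b * gamma_msa(beta, 2, r + k, m + 2)
    return np.array([[i11, i12], [i12, i22]])
```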

5 Fisher Information Matrix in ERSS

Taking ranking errors into consideration, Samawi et al. (1996) introduced a modification of RSS called extreme RSS (ERSS). The ERSS procedure is as follows: Randomly draw \(m^{2}\) units from the population, and randomly partition them into m sets of m units. If m is even, select the lowest ranked unit from each of the first \(\frac{m}{2}\) sets and the largest ranked unit from each of the remaining sets for actual measurement. Such a ranked set sample of size m is denoted by ERSSE. If m is odd, select the lowest ranked unit from each of the first \(\frac{m-1}{2}\) sets, the largest ranked unit from each of the next \(\frac{m-1}{2}\) sets, and the \(\left( \frac{m+1}{2}\right) \)th ranked unit from the mth set for actual measurement. Such a ranked set sample of size m is denoted by ERSSO. In this section, we will study the Fisher information matrix for LEEGD\((\alpha ,\beta )\) with model parameters \(\alpha \) and \(\beta \) under ERSS. Let \(\{X_{1(1)} ,X_{2(1)} , \ldots ,X_{m(m)} \}\) be an ERSSE of size m from (1), then the pdfs of \(X_{i(1)}\ (i=1,2,\ldots ,\frac{m}{2})\) and \(X_{i(m)}\ (i=\frac{m}{2}+1,\frac{m}{2}+2,\ldots ,m)\) are, respectively,

$$\begin{aligned} f_{(1)} (x) = \alpha m(1 + \beta )\frac{{x^{\alpha - 1} (1 - x^\alpha )^{m - 1} }}{{(1 + \beta x^\alpha )^{m + 1} }} \end{aligned}$$

and

$$\begin{aligned} f_{(m)} (x) = \alpha m(1 + \beta )^m \frac{{x^{\alpha m - 1} }}{{(1 + \beta x^\alpha )^{m + 1} }}. \end{aligned}$$

The log-likelihood function based on these samples is

$$\begin{aligned} L^{*}_{\mathrm{ERSSE}}&= d_{\mathrm{2}} + m\ln \alpha + \frac{{m(m + 1)}}{2}\ln (1 + \beta ) \\&\quad + (m - 1)\sum \limits _{i = 1}^{m/2} {\ln (1 - x_{i(1)}^\alpha )} + (\alpha - 1)\sum \limits _{i = 1}^{m/2} {\ln x_{i(1)} }\\&\quad- (m + 1)\sum \limits _{i = 1}^{m/2} {\ln (1 + \beta x_{i(1)}^\alpha )} \\&\quad + (\alpha m - 1)\sum \limits _{i = {\textstyle {m \over 2}} + 1}^m {\ln x_{i(m)} } \\&\quad - (m + 1)\sum \limits _{i = {\textstyle {m \over 2}} + 1}^m {\ln (1 + \beta x_{i(m)}^\alpha )}, \end{aligned}$$

where \(d_{2}\) is a constant free of \(\alpha \) and \(\beta \).

Taking the first derivatives of \(L^{*}_{\mathrm{ERSSE}}\), we have

$$\begin{aligned} \frac{{\partial L^{*}_{\mathrm{ERSSE}}}}{{\partial \alpha }}&= \frac{m}{\alpha } - (m - 1)\sum \limits _{i = 1}^{m/2} {\frac{{x_{i(1)}^\alpha \ln x_{i(1)} }}{{1 - x_{i(1)}^\alpha }}}\\&\quad + \sum \limits _{i = 1}^{m/2} {\ln x_{i(1)} } - (m + 1)\sum \limits _{i = 1}^{m/2} {\frac{{\beta x_{i(1)}^\alpha \ln x_{i(1)} }}{{1 + \beta x_{i(1)}^\alpha }}} \\&\quad+ m\sum \limits _{i = {\textstyle {m \over 2}} + 1}^m {\ln x_{i(m)} } - (m + 1)\sum \limits _{i = {\textstyle {m \over 2}} + 1}^m {\frac{{\beta x_{i(m)}^\alpha \ln x_{i(m)} }}{{1 + \beta x_{i(m)}^\alpha }}} \end{aligned}$$

and

$$\begin{aligned} \frac{{\partial L^{*}_{\mathrm{ERSSE}}}}{{\partial \beta }}& = \frac{{m(m + 1)}}{{2(1 + \beta )}} - (m + 1)\sum \limits _{i = 1}^{m/2} {\frac{{x_{i(1)}^\alpha }}{{1 + \beta x_{i(1)}^\alpha }}}\\&\quad - (m + 1)\sum \limits _{i = {\textstyle {m \over 2}} + 1}^m {\frac{{x_{i(m)}^\alpha }}{{1 + \beta x_{i(m)}^\alpha }}}. \end{aligned}$$

Taking the second derivatives of \(L^{*}_{\mathrm{ERSSE}}\), we have

$$\begin{aligned} \frac{{\partial ^2L^{*}_{\mathrm{ERSSE}}}}{{\partial \alpha ^2 }}&= - \frac{m}{{\alpha ^2 }} - (m - 1)\sum \limits _{i = 1}^{m/2} {\frac{{x_{i(1)}^\alpha \ln ^2 x_{i(1)} }}{{(1 - x_{i(1)}^\alpha )^2 }}} - \beta (m + 1)\sum \limits _{i = 1}^{m/2} {\frac{{x_{i(1)}^\alpha \ln ^2 x_{i(1)} }}{{(1 + \beta x_{i(1)}^\alpha )^2 }}} \\&\quad - \beta (m + 1)\sum \limits _{i = \frac{m}{2} + 1}^m {\frac{{x_{i(m)}^\alpha \ln ^2 x_{i(m)} }}{{(1 + \beta x_{i(m)}^\alpha )^2 }}},\\ \frac{{\partial ^2 L^{*}_{\mathrm{ERSSE}}}}{{\partial \beta ^2 }}&= - \frac{{m(m + 1)}}{{2(1 + \beta )^2 }} + (m + 1)\sum \limits _{i = 1}^{m/2} {\frac{{x_{i(1)}^{2\alpha } }}{{(1 + \beta x_{i(1)}^\alpha )^2 }}} + (m + 1)\sum \limits _{i = \frac{m}{2} + 1}^m {\frac{{x_{i(m)}^{2\alpha } }}{{(1 + \beta x_{i(m)}^\alpha )^2 }}} \end{aligned}$$

and

$$\begin{aligned} \frac{{\partial ^2 L^{*}_{\mathrm{ERSSE}}}}{{\partial \alpha \partial \beta }} = - \left( {m + 1} \right) \sum \limits _{i = 1}^{m/2} {\frac{{x_{_{i(1)} }^\alpha \ln x_{i(1)} }}{{(1 + \beta x_{_{i(1)} }^\alpha )^2 }}} - \left( {m + 1} \right) \sum \limits _{i = {\textstyle {m \over 2}} + 1}^m {\frac{{x_{_{i(m)} }^\alpha \ln x_{i(m)} }}{{(1 + \beta x_{_{i(m)} }^\alpha )^2 }}}. \end{aligned}$$

Then, we can obtain

$$\begin{aligned} I_{11,\,ERSSE}&=- E\left( {\frac{{\partial ^2 L^{*}_{\mathrm{ERSSE}}}}{{\partial \alpha ^2 }}} \right) \\&= \frac{m}{{\alpha ^2 }} + \frac{{m(m - 1)}}{2}E\left( {\frac{{x_{(1)}^\alpha \ln ^2 x_{(1)} }}{{(1 - x_{(1)}^\alpha )^2 }}} \right) + \frac{{\beta m(m + 1)}}{2}E\left( {\frac{{x_{(1)}^\alpha \ln ^2 x_{(1)} }}{{(1 + \beta x_{(1)}^\alpha )^2 }}} \right) \\&\quad + \frac{{\beta m(m + 1)}}{2}E\left( {\frac{{x_{(m)}^\alpha \ln ^2 x_{(m)} }}{{(1 + \beta x_{(m)}^\alpha )^2 }}} \right) \\&= \frac{m}{{\alpha ^2 }} + \frac{1}{{2\alpha ^2 }}m^2 (m - 1)(1 + \beta )\sum \limits _{k = 0}^{m - 3} {\left( {\begin{array}{c} m - 3 \\ k \end{array}} \right) } ( - 1)^k \Gamma _m (\beta ,3,k + 1) \\&\quad + \frac{1}{{2\alpha ^2 }}m^2 (m + 1)\beta (1 + \beta )\sum \limits _{k = 0}^{m - 1} {\left( {\begin{array}{c} m - 1 \\ k \end{array}} \right) } ( - 1)^k \Gamma _{m + 2} (\beta ,3,k + 1) \\&\quad + \frac{1}{{2\alpha ^2 }}m^2 (m + 1)\beta (1 + \beta )^m \Gamma _{m + 2} (\beta ,3,m), \end{aligned}$$
(15)
$$\begin{aligned} I_{22,\,ERSSE}&=- E\left( {\frac{{\partial ^2 L^{*}_{\mathrm{ERSSE}}}}{{\partial \beta ^2 }}} \right) \\&= \frac{{m(m + 1)}}{{2(1 + \beta )^2 }} - \frac{{m(m + 1)}}{2}E\left( {\frac{{x_{(1)}^{2\alpha } }}{{(1 + \beta x_{(1)}^\alpha )^2 }}} \right) - \frac{{m(m + 1)}}{2}E\left( {\frac{{x_{(m)}^{2\alpha } }}{{(1 + \beta x_{(m)}^\alpha )^2 }}} \right) \\&= \frac{{m(m + 1)}}{{2(1 + \beta )^2 }} - \frac{1}{2}m^2 (m + 1)(1 + \beta )\sum \limits _{k = 0}^{m - 1} {\left( {\begin{array}{c} m - 1 \\ k \end{array}} \right) ( - 1)^k } \Gamma _{m + 2} (\beta ,1,k + 2) \\&\quad - \frac{1}{2}m^2 (m + 1)(1 + \beta )^m \Gamma _{m + 2} (\beta ,1,m + 1) \end{aligned}$$
(16)

and

$$\begin{aligned} I_{12,\,ERSSE}&=- E\left( {\frac{{\partial ^2 L^{*}_{\mathrm{ERSSE}}}}{{\partial \beta \partial \alpha }}} \right) \\&= \frac{{m(m + 1)}}{2}E\left( {\frac{{x_{(1)}^\alpha \ln x_{(1)} }}{{(1 + \beta x_{(1)}^\alpha )^2 }}} \right) + \frac{{m(m + 1)}}{2}E\left( {\frac{{x_{(m)}^\alpha \ln x_{(m)} }}{{(1 + \beta x_{(m)}^\alpha )^2 }}} \right) \\&= \frac{1}{{2\alpha }}m^2 (m + 1)(1 + \beta )\sum \limits _{k = 0}^{m - 1} {\left( {\begin{array}{c} m - 1 \\ k \end{array}} \right) } ( - 1)^{k + 1} \Gamma _{m + 2} (\beta ,2,k + 1) \\&\quad - \frac{1}{{2\alpha }}m^2 (m + 1)(1 + \beta )^m \Gamma _{m + 2} (\beta ,2,m). \end{aligned}$$
(17)

Combining (15), (16) and (17), we can obtain the Fisher information matrix based on ERSSE:

$$\begin{aligned} I_{\mathrm{ERSSE}} (\alpha ,\beta ) = \left[ {\begin{array}{*{20}c} {I_{11, {\mathrm{ERSSE}}}} &{} {I_{12, {\mathrm{ERSSE}}}} \\ {I_{12, {\mathrm{ERSSE}}}} &{} {I_{22, {\mathrm{ERSSE}}}} \\ \end{array}} \right] . \end{aligned}$$
(18)
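As with (10), the matrix (18) reduces to finite sums of \(\Gamma \)-integrals; a sketch for even m, with gamma_msa, comb and the other helpers as defined in the earlier sketches:

```python
def fisher_ersse(alpha, beta, m):
    """Fisher information matrix (18) for LEEGD(alpha, beta) under ERSSE (m even), from (15)-(17)."""
    i11 = m / alpha**2 + (m**2 * (m + 1) * beta * (1 + beta)**m / (2 * alpha**2)
                          * gamma_msa(beta, 3, m, m + 2))
    for k in range(m - 2):                   # k = 0, ..., m - 3 (first sum in (15))
        i11 += (m**2 * (m - 1) * (1 + beta) / (2 * alpha**2) * comb(m - 3, k)
                * (-1)**k * gamma_msa(beta, 3, k + 1, m))
    i22 = (m * (m + 1) / (2 * (1 + beta)**2)
           - m**2 * (m + 1) * (1 + beta)**m / 2 * gamma_msa(beta, 1, m + 1, m + 2))
    i12 = -(m**2 * (m + 1) * (1 + beta)**m / (2 * alpha) * gamma_msa(beta, 2, m, m + 2))
    for k in range(m):                       # k = 0, ..., m - 1 (remaining sums)
        b = comb(m - 1, k) * (-1)**k
        i11 += (m**2 * (m + 1) * beta * (1 + beta) / (2 * alpha**2) * b
                * gamma_msa(beta, 3, k + 1, m + 2))
        i22 -= m**2 * (m + 1) * (1 + beta) / 2 * b * gamma_msa(beta, 1, k + 2, m + 2)
        i12 -= m**2 * (m + 1) * (1 + beta) / (2 * alpha) * b * gamma_msa(beta, 2, k + 1, m + 2)
    return np.array([[i11, i12], [i12, i22]])
```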

Let \(\{X_{1(1)} ,X_{2(1)} , \ldots ,X_{m(\frac{{m + 1}}{2})} \}\) be an ERSSO of size m from (1); then, based on these samples, we can obtain

$$\begin{aligned} I_{11, {\mathrm{ERSSO}}}&=\frac{m}{{\alpha ^2 }} + \frac{1}{{2\alpha ^2 }}m(m - 1)^2 (1 + \beta )\sum \limits _{k = 0}^{m - 3} {\left( {\begin{array}{c} m - 3 \\ k \end{array}} \right) } ( - 1)^k \Gamma _m (\beta ,3,k + 1) \\&\quad + \frac{1}{{2\alpha ^2 }}\beta m(m^2 - 1)(1 + \beta )\sum \limits _{k = 0}^{m - 1} {\left( {\begin{array}{c} m - 1 \\ k \end{array}} \right) } ( - 1)^k \Gamma _{m + 2} (\beta ,3,k + 1) \\&\quad + \frac{1}{{2\alpha ^2 }}\beta m(m^2 - 1)(1 + \beta )^m \Gamma _{m + 2} (\beta ,3,m) \\&\quad + \frac{1}{{2\alpha ^2 }}(m - 1)(1 + \beta )^{\frac{{m + 1}}{2}} c\left( \frac{{m + 1}}{2},m\right) \sum \limits _{k = 0}^{(m - 5)/2} {\left( {\begin{array}{c} \frac{{m - 5}}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^k \Gamma _m \left( \beta ,3,\frac{{m + 1}}{2} + k\right) \\&\quad + \frac{1}{{\alpha ^2 }}\beta (m + 1)\left( {1 + \beta } \right) ^{\frac{{m + 1}}{2}} c\left( \frac{{m + 1}}{2},m\right) \sum \limits _{k = 0}^{(m - 1)/2} {\left( {\begin{array}{c} \frac{{m - 1}}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^k \Gamma _{m + 2} \left( \beta ,3,\frac{{m + 1}}{2} + k\right) , \end{aligned}$$
(19)
$$\begin{aligned} I_{22, {\mathrm{ERSSO}}}&= \frac{{m(m + 1)}}{{2(1 + \beta )^2 }} - \frac{1}{2}m(m^2 - 1)(1 + \beta )\sum \limits _{k = 0}^{m - 1} {\left( {\begin{array}{c} m - 1 \\ k \end{array}} \right) ( - 1)^k } \Gamma _{m + 2} (\beta ,1,k + 2) \\&\quad - \frac{1}{2}m(m^2 - 1)(1 + \beta )^m \Gamma _{m + 2} (\beta ,1,m + 1) \\&\quad - (m + 1)c\left( \frac{{m + 1}}{2},m\right) \left( {1 + \beta } \right) ^{\frac{{m + 1}}{2}} \sum \limits _{k = 0}^{(m - 1)/2} {\left( {\begin{array}{c} \frac{{m - 1}}{2} \\ k \end{array}} \right) ( - 1)^k } \Gamma _{m + 2} \left( \beta ,1,\frac{{m + 3}}{2} + k\right) \end{aligned}$$
(20)

and

$$\begin{aligned} I_{12, {\mathrm{ERSSO}}}&=\frac{1}{{2\alpha }}m(m^2 - 1)(1 + \beta )\sum \limits _{k = 0}^{m - 1} {\left( {\begin{array}{c} m - 1 \\ k \end{array}} \right) } ( - 1)^{k + 1} \Gamma _{m + 2} (\beta ,2,k + 1) \\&\quad - \frac{1}{{2\alpha }}m(m^2 - 1)(1 + \beta )^m \Gamma _{m + 2} (\beta ,2,m) \\&\quad + \frac{1}{\alpha }(m + 1)\left( {1 + \beta } \right) ^{\frac{{m + 1}}{2}} c\left( \frac{{m + 1}}{2},m\right) \sum \limits _{k = 0}^{(m - 1)/2} {\left( {\begin{array}{c} \frac{{m - 1}}{2} \\ k \end{array}} \right) } \left( { - 1} \right) ^{k + 1} \Gamma _{m + 2} \left( \beta ,2,\frac{{m + 1}}{2} + k\right) . \end{aligned}$$
(21)

The derivations of (19), (20) and (21) are similar to those of (15), (16) and (17).

Combining (19), (20) and (21), we can obtain the Fisher information matrix based on ERSSO:

$$\begin{aligned} I_{{\rm ERSSO}} (\alpha ,\beta ) = \left[ {\begin{array}{*{20}c} {I_{11, {\mathrm{ERSSO}}}} &{} {I_{12, {\mathrm{ERSSO}}}} \\ {I_{12, {\mathrm{ERSSO}}}} &{} {I_{22, {\mathrm{ERSSO}}}} \\ \end{array}} \right] . \end{aligned}$$
(22)

6 Comparison and Conclusions

6.1 Numerical Comparison

In this subsection, we compare the efficiency of RSS and its modifications for parameter estimation in LEEGD\((\alpha ,\beta )\). For parameter inference of \(\alpha \), the relative efficiency of RSS with respect to (w.r.t.) SRS, the relative efficiency of MRSSK (K = E or O) w.r.t. SRS and the relative efficiency of ERSSK w.r.t. SRS may be defined as

$$\begin{aligned} \hbox {RE}^1& = \frac{{I_{11,\;{\mathrm{RSS}}} }}{{I_{11,\;{\mathrm{SRS}}}}},\\ \hbox {RE}^2& = \frac{{I_{11,\;{\mathrm{MRSSK}}} }}{{I_{11,\;{\mathrm{SRS}}} }},\\ \hbox {RE}^3& = \frac{{I_{11,\;{\mathrm{ERSSK}}} }}{{I_{11,\;{\mathrm{SRS}}} }}, \end{aligned}$$

respectively. For parameter inference of \(\beta \), the relative efficiency of RSS w.r.t. SRS, the relative efficiency of MRSSK w.r.t. SRS and the relative efficiency of ERSSK w.r.t. SRS may be defined as

$$\begin{aligned} \hbox {RE}^4& = \frac{{I_{22,\;{\mathrm{RSS}}} }}{{I_{22,\;{\mathrm{SRS}}}}},\\ \hbox {RE}^5& = \frac{{I_{22,\;{\mathrm{MRSSK}}} }}{{I_{22,\;{\mathrm{SRS}}} }},\\ \hbox {RE}^6& = \frac{{I_{22,\;{\mathrm{ERSSK}}} }}{{I_{22,\;{\mathrm{SRS}}} }}, \end{aligned}$$

respectively. For parameter inference of \(\alpha \) and \(\beta \), the relative efficiency of RSS w.r.t. SRS, the relative efficiency of MRSSK w.r.t. SRS and the relative efficiency of ERSSK w.r.t. SRS may be defined as

$$\begin{aligned} \hbox {RE}^7& = \frac{{\det \left\{ {I_{\mathrm{RSS}} \left( {\alpha ,\beta } \right) } \right\} }}{{\det \left\{ {I_{\mathrm{SRS}} \left( {\alpha ,\beta } \right) } \right\} }},\\ \hbox {RE}^8& = \frac{{\det \left\{ {I_{\mathrm{MRSSK}} \left( {\alpha ,\beta } \right) } \right\} }}{{\det \left\{ {I_{\mathrm{SRS}} \left( {\alpha ,\beta } \right) } \right\} }},\\ \hbox {RE}^9& = \frac{{\det \left\{ {I_{\mathrm{ERSSK}} \left( {\alpha ,\beta } \right) } \right\} }}{{\det \left\{ {I_{\mathrm{SRS}} \left( {\alpha ,\beta } \right) } \right\} }}, \end{aligned}$$

respectively.
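Given the matrices assembled in the earlier sketches, these relative efficiencies are one-liners; for instance, for RSS versus SRS (a sketch, with fisher_srs and fisher_rss as defined above):

```python
def relative_efficiencies(beta, m, alpha=1.0):
    """RE^1, RE^4 and RE^7 of RSS w.r.t. SRS; the ratios are free of alpha."""
    i_srs = fisher_srs(alpha, beta, m)
    i_rss = fisher_rss(alpha, beta, m)
    re1 = i_rss[0, 0] / i_srs[0, 0]                    # inference on alpha
    re4 = i_rss[1, 1] / i_srs[1, 1]                    # inference on beta
    re7 = np.linalg.det(i_rss) / np.linalg.det(i_srs)  # joint inference
    return re1, re4, re7
```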

It can be seen that \({\mathrm{RE}}^i\ (i=1,2,\ldots ,9)\) are free of \(\alpha \). The results are given in Tables 1, 2 and 3, respectively.

From Table 1, we conclude the following:

(1) \({\mathrm{RE}}^1>1\), which means that for parameter inference of \(\alpha \) from LEEGD\((\alpha ,\beta )\) in which \(\beta \) is known, RSS is more efficient than SRS.

(2) \({\mathrm{RE}}^2>1\), which means that for parameter inference of \(\alpha \) from LEEGD\((\alpha ,\beta )\) in which \(\beta \) is known, MRSS is more efficient than SRS.

(3) \({\mathrm{RE}}^3>1\), which means that for parameter inference of \(\alpha \) from LEEGD\((\alpha ,\beta )\) in which \(\beta \) is known, ERSS is more efficient than SRS.

(4) Comparing \({\mathrm{RE}}^1\), \({\mathrm{RE}}^2\) and \({\mathrm{RE}}^3\), we can conclude that for parameter inference of \(\alpha \) from LEEGD\((\alpha ,\beta )\) in which \(\beta \) is known, MRSS is more efficient than the other sampling designs.

From Table 2, we conclude the following:

(5) \({\mathrm{RE}}^4>1\), which means that for parameter inference of \(\beta \) from LEEGD\((\alpha ,\beta )\) in which \(\alpha \) is known, RSS is more efficient than SRS.

(6) \({\mathrm{RE}}^5>1\), which means that for parameter inference of \(\beta \) from LEEGD\((\alpha ,\beta )\) in which \(\alpha \) is known, MRSS is more efficient than SRS.

(7) \({\mathrm{RE}}^6>1\), which means that for parameter inference of \(\beta \) from LEEGD\((\alpha ,\beta )\) in which \(\alpha \) is known, ERSS is more efficient than SRS.

(8) Comparing \({\mathrm{RE}}^4\), \({\mathrm{RE}}^5\) and \({\mathrm{RE}}^6\), we can conclude that for parameter inference of \(\beta \) from LEEGD\((\alpha ,\beta )\) in which \(\alpha \) is known, MRSS is more efficient than the other sampling designs.

From Table 3, we conclude the following:

(9) \({\mathrm{RE}}^7>1\), which means that for joint parameter inference of \(\alpha \) and \(\beta \) from LEEGD\((\alpha ,\beta )\), RSS is more efficient than SRS.

(10) \({\mathrm{RE}}^8>1\), which means that for joint parameter inference of \(\alpha \) and \(\beta \) from LEEGD\((\alpha ,\beta )\), MRSS is more efficient than SRS.

(11) \({\mathrm{RE}}^9>1\), which means that for joint parameter inference of \(\alpha \) and \(\beta \) from LEEGD\((\alpha ,\beta )\), ERSS is more efficient than SRS.

(12) Comparing \({\mathrm{RE}}^7\), \({\mathrm{RE}}^8\) and \({\mathrm{RE}}^9\), we can conclude that for joint parameter inference of \(\alpha \) and \(\beta \) from LEEGD\((\alpha ,\beta )\), RSS is more efficient than the other sampling designs.

6.2 A Real Data Application

In this subsection, we present an analysis of a real data set. The data set is available from Frees (2010) and consists of 73 observations on 7 variables. The data were collected from a questionnaire carried out with the purpose of relating cost-effectiveness to the management philosophy of controlling the company's exposure to various property and casualty losses, after adjusting for company effects such as size and industry type. These data have been previously analyzed by Schmit and Roth (1990), Gómez-Déniz et al. (2014) and Jodrá and Jiménez-Gamero (2016). Here, interest is centered on the variable FIRMCOST (divided by 100), which is a measure of the cost-effectiveness of the risk management practices of the firm. The LEEGD was fitted to the variable FIRMCOST/100; the maximum likelihood estimates of \(\alpha \) and \(\beta \) are, respectively, 1.4322 and 52.1069. It can also be checked that the correlation coefficient between the theoretical and the empirical cumulative probabilities is 0.9956. The results of the analysis are presented in Tables 4 and 5; the conclusions drawn from these tables agree with the numerical results of the previous subsection.
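For reproducibility, the reported fit can be approximated by direct minimization of the negative log-likelihood implied by (1); a sketch assuming the FIRMCOST/100 observations are stored in a NumPy array y with \(0< y < 1\) (the starting values and optimizer choice are ours):

```python
from scipy.optimize import minimize

def fit_leegd(y):
    """MLE of (alpha, beta) by minimizing the negative log-likelihood; requires 0 < y < 1."""
    def nll(theta):
        a, b = np.exp(theta)              # optimize on the log scale to keep a, b > 0
        return -np.sum(np.log(leegd_pdf(y, a, b)))
    res = minimize(nll, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    return np.exp(res.x)
```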

Table 1 The relative efficiency of the RSS, MRSS and ERSS for parameter inference of \(\alpha \)
Table 2 The relative efficiency of the RSS, MRSS and ERSS for parameter inference of \(\beta \)
Table 3 The relative efficiency of the RSS, MRSS and ERSS for parameter inference of \(\alpha \) and \(\beta \)
Table 4 The relative efficiency of the RSS, MRSS and ERSS for parameter inference of \(\alpha \)
Table 5 The relative efficiency of the RSS, MRSS and ERSS for parameter inference of \(\beta \)