1 Introduction

Ranked set sampling (RSS) was introduced by McIntyre (1952) for estimating pasture yields. It is appropriate for situations where quantification of sampling units is either costly or difficult, but ranking the units in a small set is easy and inexpensive. For further background on RSS, refer to Stokes (1995), Al-Saleh and Al-Hadhrami (2003), Abu-Dayyeh et al. (2013), Omar and Ibrahim (2013), Wangxue et al. (2013), Wangxue et al. (2016) and Wangxue et al. (2017).

The procedure of RSS involves randomly drawing \(n^2\) units from the population and then randomly partitioning them into n sets of size n. The units are then ranked within each set. Here, ranking may be based on judgement, visual inspection, covariates, or any other method that does not require actual measurement of the units. For each set, one unit is selected and measured. The basic version of RSS can be described as follows. First, the experimenter draws n independent simple random samples, each of size n, from the population. Then the units within the ith \((i = 1,2, \ldots , n)\) sample are subjected to judgement ordering, with negligible cost, and the unit with the ith lowest rank is identified. Finally, the identified units are measured. Proceeding in this way, we obtain a ranked set sample of size n. If needed, this process can be replicated r times (cycles) to yield a sample of the desired size nr.
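To make the procedure concrete, the following sketch (ours, not from the original paper) simulates one RSS draw in Python; ranking is done on the simulated values themselves (perfect ranking), and `draw` is any sampler for the population of interest (here a placeholder exponential population).

```python
import numpy as np

def ranked_set_sample(rng, draw, n, cycles=1):
    """Return a ranked set sample of size n * cycles.

    draw(rng, size): simple random draws from the population.
    In each cycle, the i-th set contributes only its i-th smallest
    unit; the remaining n - 1 units are used for ranking only.
    """
    measured = []
    for _ in range(cycles):
        for i in range(n):
            units = draw(rng, n)                 # one set of n units
            measured.append(np.sort(units)[i])   # i-th order statistic
    return np.array(measured)

rng = np.random.default_rng(seed=1)
x_rss = ranked_set_sample(rng, lambda r, k: r.exponential(size=k), n=5, cycles=4)
print(x_rss.size)  # 20 measured units out of 5 * 5 * 4 = 100 drawn
```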

A random variable X is said to have a log-logistic distribution with the scale parameter \(\alpha \) and the shape parameter \(\beta \) if its distribution function is given by

$$\begin{aligned} F(x;\alpha ,\beta )=\frac{{x^\beta }}{{x^\beta +\alpha ^\beta }}, \end{aligned}$$
(1)

where \(x>0\), \(\alpha >0\), \(\beta >0\). The probability density function (pdf) corresponding to the distribution function in (1) is then given by

$$\begin{aligned} f(x;\alpha ,\beta )=\frac{{\beta \alpha ^\beta x^{\beta -1} }}{{(x^\beta + \alpha ^\beta )^2 }}. \end{aligned}$$

We write \(LLD(\alpha ,~\beta )\) to denote the distribution defined in (1). The log-logistic distribution is widely applied in modelling wealth and income (see Fisk 1961), in hydrology for modelling stream flow rates and precipitation (see Shoukri et al. 1988), and in engineering and survival analysis (see Ashkar and Mahdi 2003). For further details on the importance and applications of the log-logistic distribution, one may refer to Bennett (1983), Ahmad et al. (1988), Robson and Reed (1999) and Geskus (2001).
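For the numerical sketches below, the CDF (1), its pdf and inverse-transform sampling can be coded as follows; inverting (1) gives \(x = \alpha \left( u/(1-u)\right) ^{1/\beta }\) for \(u \in (0,1)\). This is our illustration, not code from the paper.

```python
import numpy as np

def lld_cdf(x, alpha, beta):
    # Distribution function (1)
    return x**beta / (x**beta + alpha**beta)

def lld_pdf(x, alpha, beta):
    # Corresponding density
    return beta * alpha**beta * x**(beta - 1) / (x**beta + alpha**beta)**2

def lld_rvs(rng, alpha, beta, size):
    # Inverse-transform sampling: solve u = F(x) for x
    u = rng.uniform(size=size)
    return alpha * (u / (1.0 - u))**(1.0 / beta)
```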

Parameter estimation problems for the log-logistic distribution have been discussed by many authors. Among the existing literature, Balakrishnan et al. (1987) provided the best linear unbiased estimation of the location and scale parameters of the log-logistic distribution under simple random sampling (SRS). Zhenmin (2006) discussed interval estimation for the shape parameter of the log-logistic distribution based on SRS. Further, inference on the parameters of the log-logistic distribution has been studied under SRS by many authors, including Tiku and Suresh (1992), Gupta et al. (1999) and Kus and Kaya (2006).

It is well known that the maximum likelihood method is a good choice for estimating unknown parameters because of its attractive properties, such as consistency and asymptotic normality. In this article, we consider the maximum likelihood estimator(s) (MLE(s)) of the scale and shape parameters \(\alpha \) and \(\beta \) of the log-logistic distribution. In Sect. 2, the existence and uniqueness of the MLE of \(\alpha \) for \(LLD(\alpha , \beta )\) with known \(\beta \) are proved under SRS and RSS. Moreover, the MLE of \(\alpha \) using an RSS version based on the order statistic that maximizes the Fisher information for a fixed set size (RSSF) is considered. Since, under some regularity conditions, the asymptotic efficiency of the MLE can be obtained from the inverse of the Fisher information number, the Fisher information number for \(\alpha \) is obtained under each of the three sampling schemes. In Sect. 3, the existence and uniqueness of the MLE of \(\beta \) for \(LLD(\alpha , \beta )\) with known \(\alpha \) are proved under SRS and RSS. In addition, the MLE of \(\beta \) using RSSF is considered. The Fisher information number for \(\beta \) is obtained under the three sampling schemes, as in Sect. 2. In Sect. 4, the existence of the MLEs of \(\alpha \) and \(\beta \) from \(LLD(\alpha , \beta )\) is proved under SRS and RSS. To compare the MLEs of \(\alpha \) and \(\beta \) estimated simultaneously, the Fisher information matrices \(I_{SRS} (\alpha ,\beta )\) and \(I_{RSS} (\alpha ,\beta )\) are computed. In Sect. 5, a numerical comparison of these MLEs and the conclusions are presented; we also compare the efficiencies of all these MLEs under imperfect ranking.

2 MLE of \(\alpha \) when \(\beta \) is known

In this section, the existence and uniqueness of the MLE of \(\alpha \) for (1) with known \(\beta \) are proved under SRS and RSS. Moreover, the MLE of \(\alpha \) using RSSF is considered. Since, under some regularity conditions, the asymptotic efficiency of the MLE can be obtained from the inverse of the Fisher information number, the Fisher information number for \(\alpha \) is obtained under each of the three sampling schemes.

2.1 MLE of \(\alpha \) using SRS

In this subsection, the existence and uniqueness of the MLE of \(\alpha \) for (1) with known \(\beta \) are proved under SRS. The Fisher information number for \(\alpha \) under this sampling scheme is given.

Let \(\left\{ {X_1 ,X_2 ,X_3 , \cdots ,X_n } \right\} \) be a simple random sample of size n from (1) with known \(\beta \). The log-likelihood function based on this sample is

$$\begin{aligned} \displaystyle \ln L_{SRS} = d - n\beta \ln \alpha - 2\sum \limits _{i = 1}^n {\ln \left[ {1 + \left( {\frac{{x_i }}{\alpha }} \right) ^\beta } \right] }, \end{aligned}$$

where d is a term that does not depend on \(\alpha \). If the MLE of \(\alpha \) exists, then it is a solution of the likelihood equation

$$\begin{aligned} \displaystyle n - 2\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_i }}{\alpha }\right) ^\beta }}{{1 + \left( \frac{{x_i }}{\alpha }\right) ^\beta }}} = 0. \end{aligned}$$
(2)

In order to study the existence and uniqueness of the MLE, the second-order derivative of the log-likelihood function \(\ln L_{SRS}\) is computed as

$$\begin{aligned} \displaystyle \frac{{\partial ^2 \ln L_{SRS}}}{{\partial \alpha ^2 }} = - \frac{\beta }{{\alpha ^2 }}\left[ {n - 2\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_i }}{\alpha }\right) ^\beta }}{{1 + \left( \frac{{x_i }}{\alpha }\right) ^\beta }}} } \right] - \frac{{2\beta ^2 }}{{\alpha ^2 }}\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_i }}{\alpha }\right) ^\beta }}{{\left[ {1 + \left( \frac{{x_i }}{\alpha }\right) ^\beta } \right] ^2 }}}. \end{aligned}$$
(3)

The left-hand side (LHS) of (2) is a continuous function of \(\alpha \). When \(\alpha \rightarrow 0\), the LHS of (2) goes to \(-n\), and when \(\alpha \rightarrow \infty \), the LHS of (2) goes to n. Thus, a solution of (2) exists. Since the first term of the right-hand side (RHS) of (3) is zero at any solution of (2) and the second term of the RHS of (3) is always negative, (2) has a unique solution and this solution is the MLE of \(\alpha \). This estimator is denoted by \(\hat{\alpha }_{SRS,~MLE}.\)
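Equation (2) has no closed-form solution, but since its LHS increases continuously from \(-n\) to \(n\) in \(\alpha \), a bracketing root-finder is guaranteed to locate the unique root. A minimal sketch (with data simulated by inverting (1)):

```python
import numpy as np
from scipy.optimize import brentq

def srs_score_alpha(alpha, x, beta):
    # LHS of (2)
    t = (x / alpha)**beta
    return len(x) - 2.0 * np.sum(t / (1.0 + t))

def mle_alpha_srs(x, beta):
    # The LHS of (2) increases in alpha from -n to n, so any
    # sufficiently wide bracket contains the unique sign change.
    return brentq(srs_score_alpha, x.min() / 100.0, x.max() * 100.0, args=(x, beta))

rng = np.random.default_rng(seed=2)
u = rng.uniform(size=50)
x = 1.0 * (u / (1 - u))**(1 / 3.0)   # LLD(alpha=1, beta=3) via inverse CDF
print(mle_alpha_srs(x, beta=3.0))    # should be near the true alpha = 1
```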

The Fisher information number for \(\alpha \) under SRS

$$\begin{aligned} \displaystyle I_{SRS} (\alpha ) = \frac{{n\beta ^2 }}{{3\alpha ^2 }} \end{aligned}$$
(4)

is given by Reath et al. (2018).

2.2 MLE of \(\alpha \) using RSS

In this subsection, the existence and uniqueness of the MLE of \(\alpha \) for (1) with known \(\beta \) are proved under RSS. The Fisher information number for \(\alpha \) under this sampling scheme is obtained.

Let \(\left\{ {X_{(1)1} ,X_{(2)2} ,X_{(3)3} , \cdots ,X_{(n)n} } \right\} \) be a ranked set sample of size n from (1) with known \(\beta \). The pdf of \(X_{(i)i}\) is

$$\begin{aligned} f_{(i)} (x) = \frac{{c(i,n)\beta \left( \frac{x}{\alpha }\right) ^{i\beta - 1} }}{{\alpha \left[ {1 + \left( \frac{x}{\alpha }\right) ^\beta } \right] ^{n + 1} }}, \quad c(i,n) = \frac{{n!}}{{(n - i)!(i - 1)!}}. \end{aligned}$$

Then we have

$$\begin{aligned} L_{RSS}= \prod \limits _{i = 1}^n {\frac{{c(i,n)\beta \left( \frac{{x_{(i)i} }}{\alpha }\right) ^{i\beta - 1} }}{{\alpha \left[ {1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta } \right] ^{n + 1} }}}. \end{aligned}$$

and

$$\begin{aligned} \ln L_{RSS} = d - \frac{{n\left( {n + 1} \right) \beta }}{2}\ln \alpha - (n + 1)\sum \limits _{i = 1}^n {\ln \left[ {1 + \left( {\frac{{x_{(i)i} }}{\alpha }} \right) ^\beta } \right] }, \end{aligned}$$

where d is a term that does not depend on \(\alpha \). Taking the first derivative of \(\ln L_{RSS}\) with respect to \(\alpha \), we have

$$\begin{aligned} \frac{{\partial \ln L_{RSS}}}{{\partial \alpha }} = - \frac{(n + 1)\beta }{\alpha }\left[ {\frac{{n}}{2} -\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}{{1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}} } \right] . \end{aligned}$$

If the MLE of \(\alpha \) exists, then it is a solution of the likelihood equation

$$\begin{aligned} \frac{{n}}{2} - \sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}{{1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}} = 0. \end{aligned}$$
(5)

In order to study the existence and uniqueness of the MLE, the second-order derivative of \(\ln L_{RSS}\) is computed as

$$\begin{aligned} \displaystyle \frac{{\partial ^2 \ln L_{RSS} }}{{\partial \alpha ^2 }} = - \frac{{(n + 1)\beta }}{{\alpha ^2 }}\left[ {\frac{n}{2} - \sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}{{1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}} } \right] - \frac{{(n + 1)\beta ^2 }}{{\alpha ^2 }}\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}{{\left[ {1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta } \right] ^2 }}} . \end{aligned}$$
(6)

The LHS of (5) is a continuous function of \(\alpha \). When \(\alpha \rightarrow 0\), the LHS of (5) goes to \(\displaystyle -\frac{{n}}{2}\), and when \(\alpha \rightarrow \infty \), the LHS of (5) goes to \(\displaystyle \frac{{n}}{2}\). Thus, a solution of (5) exists. Since the first term of the RHS of (6) is zero at any solution of (5) and the second term of the RHS of (6) is always negative, (5) has a unique solution and this solution is the MLE of \(\alpha \). This estimator is denoted by \({{\hat{\alpha }}} _{RSS,~MLE}\).
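The likelihood equation (5) can be solved with the same bracketing approach, since its LHS again increases from \(-n/2\) to \(n/2\) in \(\alpha \); only the score changes. A sketch (with `x_rss[i-1]` holding \(X_{(i)i}\)):

```python
import numpy as np
from scipy.optimize import brentq

def rss_score_alpha(alpha, x_rss, beta):
    # LHS of (5)
    t = (x_rss / alpha)**beta
    return len(x_rss) / 2.0 - np.sum(t / (1.0 + t))

def mle_alpha_rss(x_rss, beta):
    return brentq(rss_score_alpha, x_rss.min() / 100.0, x_rss.max() * 100.0,
                  args=(x_rss, beta))
```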

The Fisher information number for \(\alpha \) under RSS

$$\begin{aligned} \displaystyle I_{RSS} (\alpha )= & {} - E\left[ {\frac{{\partial ^2 \ln L_{RSS}}}{{\partial \alpha ^2 }}} \right] \nonumber \\= & {} \frac{{n(n + 1)\beta }}{{2\alpha ^2 }} - \frac{{n(n + 1)\beta }}{{\alpha ^2 }}E\left[ {\frac{{\left( {\frac{x}{\alpha }} \right) ^\beta }}{{1 + \left( {\frac{x}{\alpha }} \right) ^\beta }}} \right] \nonumber \\&+ \frac{{n(n + 1)\beta ^2 }}{{\alpha ^2 }}E\left[ {\frac{{\left( {\frac{x}{\alpha }} \right) ^\beta }}{{\left( {1 + \left( {\frac{x}{\alpha }} \right) ^\beta } \right) ^2 }}} \right] \nonumber \\= & {} \frac{{n(n + 1)\beta }}{{2\alpha ^2 }} - \frac{{n(n + 1)\beta }}{{\alpha ^2 }}\int _0^\infty {\frac{t}{{\left( {1 + t} \right) ^3 }}} dt + \frac{{n(n + 1)\beta ^2 }}{{\alpha ^2 }}\int _0^\infty {\frac{t}{{\left( {1 + t} \right) ^4 }}} dt \nonumber \\ \displaystyle= & {} \frac{{n(n + 1)\beta }}{{2\alpha ^2 }} - \frac{{n(n + 1)\beta }}{{2\alpha ^2 }} + \frac{{n(n + 1)\beta ^2 }}{{6\alpha ^2 }}\nonumber \\= & {} \frac{{n(n + 1)\beta ^2 }}{{6\alpha ^2 }}. \end{aligned}$$
(7)
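The two integrals in (7) equal \(1/2\) and \(1/6\); this can be checked with numerical quadrature (a quick verification sketch):

```python
import numpy as np
from scipy.integrate import quad

i3, _ = quad(lambda t: t / (1 + t)**3, 0, np.inf)
i4, _ = quad(lambda t: t / (1 + t)**4, 0, np.inf)
print(i3, i4)  # 0.5 and 0.1666..., matching 1/2 and 1/6 in (7)
```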

2.3 MLE of \(\alpha \) under RSS based on maximizing the Fisher information number

In this subsection, the existence and uniqueness of the MLE of \(\alpha \) for (1) with known \(\beta \) are proved under RSSF. The Fisher information number for \(\alpha \) under this sampling scheme is obtained.

Lesitha and Thomas (2013) observed, while studying the best linear unbiased estimator of \(\alpha \) from (1) with known \(\beta \), that the median of a random sample contains the maximum Fisher information about \(\alpha \). So we arrange the RSS based on the median.

Case 1: n is even

Select the unit with rank \(\displaystyle \frac{n}{2}\) from each of the first \(\displaystyle \frac{n}{2}\) sets and the unit with rank \( \displaystyle {\frac{n}{2} + 1} \) from each of the remaining sets for actual measurement. Denote them by \(\left\{ {X_{\left( {\textstyle {n \over 2}}\right) 1} , \cdots , X_{\left( {\textstyle {n \over 2}}\right) {\textstyle {n \over 2}}} ,X_{\left( {\textstyle {n \over 2}} + 1\right) {\textstyle {n \over 2}} + 1} , \cdots ,X_{\left( {\textstyle {n \over 2}} + 1\right) n} } \right\} \). The pdfs of \(X_{\left( {\textstyle {n \over 2}}\right) i}\) and \(X_{\left( {\textstyle {n \over 2}} + 1\right) i}\) are respectively

$$\begin{aligned} \displaystyle f_{\left( \frac{n}{2}\right) } (x) = \frac{{c\left( {{\textstyle {n \over 2}},n} \right) \beta \left( {\frac{x}{\alpha }} \right) ^{{\textstyle {{n\beta } \over 2}} - 1} }}{{\alpha \left[ {1 + \left( {\frac{x}{\alpha }} \right) ^\beta } \right] ^{n + 1} }} \end{aligned}$$

and

$$\begin{aligned} \displaystyle f_{\left( \frac{n}{2} + 1\right) } \left( x \right) = \frac{{c\left( {{\textstyle {n \over 2}} + 1,n} \right) \beta \left( {\frac{x}{\alpha }} \right) ^{\left( {\textstyle {n \over 2}} + 1\right) \beta - 1} }}{{\alpha \left[ {1 + \left( {\frac{x}{\alpha }} \right) ^\beta } \right] ^{n + 1} }}. \end{aligned}$$

The log-likelihood function based on these samples \(\left\{ X_{\left( {\textstyle {n \over 2}}\right) 1} , \cdots ,X_{\left( {\textstyle {n \over 2}}\right) {\textstyle {n \over 2}}} ,X_{\left( {\textstyle {n \over 2}} + 1\right) {\textstyle {n \over 2}} + 1} , \cdots ,X_{\left( {\textstyle {n \over 2}} + 1\right) n} \right\} \) is

$$\begin{aligned} \ln L_{RSSFE}&= d - \frac{{n^2 \beta }}{4}\ln \alpha - \left( {n + 1} \right) \sum \limits _{i = 1}^{{\textstyle {n \over 2}}} {\ln \left[ {1 + \left( {\frac{{x_{\left( {\textstyle {n \over 2}}\right) i} }}{\alpha }} \right) ^\beta } \right] }\\&\quad - \frac{{n\left( {n + 2} \right) \beta }}{4}\ln \alpha - (n + 1)\sum \limits _{i = {\textstyle {n \over 2}} + 1}^n {\ln \left[ {1 + \left( {\frac{{x_{\left( {\textstyle {n \over 2}} + 1\right) i} }}{\alpha }} \right) ^\beta } \right] }, \end{aligned}$$

where d is a term that does not depend on \(\alpha \).

If the MLE of \(\alpha \) exists, then it is a solution of the likelihood equation

$$\begin{aligned} \frac{n}{2} - \sum \limits _{i = 1}^{\frac{n}{2}} {\frac{{\left( \frac{{x_{\left( \frac{n}{2}\right) i} }}{\alpha }\right) ^\beta }}{{1 + \left( \frac{{x_{\left( \frac{n}{2}\right) i} }}{\alpha }\right) ^\beta }}} - \sum \limits _{i = \frac{n}{2} + 1}^n {\frac{{\left( \frac{{x_{\left( \frac{n}{2} + 1\right) i} }}{\alpha }\right) ^\beta }}{{1 + \left( \frac{{x_{\left( \frac{n}{2} + 1\right) i} }}{\alpha }\right) ^\beta }}} = 0 \end{aligned}$$
(8)

In order to study the existence and uniqueness of the MLE, the second-order derivative of the log-likelihood function \(\ln L_{RSSFE}\) is computed as

$$\begin{aligned} \displaystyle \frac{{\partial ^2 \ln L_{RSSFE}}}{{\partial \alpha ^2 }}= & {} - \frac{{(n + 1)\beta }}{{\alpha ^2 }}\left[ {\frac{n}{2} - \sum \limits _{i = 1}^{\frac{n}{2}} {\frac{{\left( \frac{{x_{\left( \frac{n}{2}\right) i} }}{\alpha }\right) ^\beta }}{{1 + \left( \frac{{x_{\left( \frac{n}{2}\right) i} }}{\alpha }\right) ^\beta }}} - \sum \limits _{i = \frac{n}{2} + 1}^n {\frac{{\left( \frac{{x_{\left( \frac{n}{2} + 1\right) i} }}{\alpha }\right) ^\beta }}{{1 + \left( \frac{{x_{\left( \frac{n}{2} + 1\right) i} }}{\alpha }\right) ^\beta }}} } \right] \nonumber \\&- (n + 1)\frac{{\beta ^2 }}{{\alpha ^2 }}\left( {\sum \limits _{i = 1}^{{\textstyle {n \over 2}}} {\frac{{\left( \frac{{x_{\left( {\textstyle {n \over 2}}\right) i} }}{\alpha }\right) ^\beta }}{{\left[ {1 + \left( \frac{{x_{\left( {\textstyle {n \over 2}}\right) i} }}{\alpha }\right) ^\beta } \right] ^2 }} + \sum \limits _{i = {\textstyle {n \over 2}} + 1}^n {\frac{{\left( \frac{{x_{\left( {\textstyle {n \over 2}} + 1\right) i} }}{\alpha }\right) ^\beta }}{{\left[ {1 + \left( \frac{{x_{\left( {\textstyle {n \over 2}} + 1\right) i} }}{\alpha }\right) ^\beta } \right] ^2 }}} } } \right) . \end{aligned}$$
(9)

Note that the LHS of (8) is a continuous function of \(\alpha \). When \(\alpha \rightarrow 0\), the LHS of (8) goes to \(\displaystyle -\frac{{n}}{2}\), and when \(\alpha \rightarrow \infty \), the LHS of (8) goes to \(\displaystyle \frac{{n}}{2}\). Thus, a solution of (8) exists. Since the first term of the RHS of (9) is zero at any solution of (8) and the second term of the RHS of (9) is always negative, (8) has a unique solution and this solution is the MLE of \(\alpha \). This estimator is denoted by \({{\hat{\alpha }}} _{RSSFE,~MLE}.\)

The Fisher information number for \(\alpha \) under median-based RSS with even n is

$$\begin{aligned} \displaystyle I_{RSSFE} (\alpha )= & {} - E\left[ {\frac{{\partial ^2 \ln L_{RSSFE } }}{{\partial \alpha ^2 }}} \right] \nonumber \\= & {} - \frac{{n(n + 1)\beta }}{{2\alpha ^2 }} + \frac{\beta }{{2\alpha ^2 }}n(n + 1)(\beta + 1)c\left( \frac{n}{2},n\right) \int _0^\infty {\frac{{t^{\frac{n}{2}} }}{{(1 + t)^{n + 3} }}dt} \nonumber \\&+ \frac{\beta }{{2\alpha ^2 }}n(n + 1)c\left( \frac{n}{2},n\right) \int _0^\infty {\frac{{t^{\frac{n}{2} + 1} }}{{(1 + t)^{n + 3} }}dt} \nonumber \\&+ \frac{\beta }{{2\alpha ^2 }}n(n + 1)(\beta + 1)c\left( \frac{n}{2} + 1,n\right) \int _0^\infty {\frac{{t^{\frac{n}{2} + 1} }}{{(1 + t)^{n + 3} }}dt}\nonumber \\&+ \frac{\beta }{{2\alpha ^2 }}n(n + 1)c\left( \frac{n}{2} + 1,n\right) \int _0^\infty {\frac{{t^{\frac{n}{2} + 2} }}{{(1 + t)^{n + 3} }}dt}\nonumber \\= & {} - \frac{{n(n + 1)\beta }}{{2\alpha ^2 }} + \frac{{n^2 \beta (\beta + 1)}}{{8\alpha ^2 }} + \frac{{n^2 \beta }}{{8\alpha ^2 }} + \frac{{n^2 \beta (\beta + 1)}}{{8\alpha ^2 }} + \frac{{n(n + 4)\beta }}{{8\alpha ^2 }}\nonumber \\= & {} \frac{{n^2 \beta ^2 }}{{4\alpha ^2 }}. \end{aligned}$$
(10)

Case 2: n is odd

In each of the n sets, select the unit with rank \(\frac{{n + 1}}{2}\), and denote the measured units by \(\left\{ {X_{\left( {\textstyle {{n + 1} \over 2}}\right) 1} ,X_{\left( {\textstyle {{n + 1} \over 2}}\right) 2} ,X_{\left( {\textstyle {{n + 1} \over 2}}\right) 3} , \cdots ,X_{\left( {\textstyle {{n + 1} \over 2}}\right) n} } \right\} \). The pdf of \(X_{\left( {\textstyle {{n + 1} \over 2}}\right) i} \) is

$$\begin{aligned} f_{\left( \frac{{n + 1}}{2}\right) } (x) = \frac{{c\left( {\textstyle {{n + 1} \over 2}},n\right) \beta \left( \frac{x}{\alpha }\right) ^{{\textstyle {{n + 1} \over 2}}\beta - 1} }}{{\alpha \left[ {1 + \left( \frac{x}{\alpha }\right) ^\beta } \right] ^{n + 1} }}. \end{aligned}$$

The likelihood equation based on these samples \(\left\{ X_{\left( {\textstyle {{n + 1} \over 2}}\right) 1} ,X_{\left( {\textstyle {{n + 1} \over 2}}\right) 2} ,X_{\left( {\textstyle {{n + 1} \over 2}}\right) 3} , \cdots ,X_{\left( {\textstyle {{n + 1} \over 2}}\right) n} \right\} \) is

$$\begin{aligned} \displaystyle \frac{n}{2} - \sum \limits _{i = 1}^n {\frac{{\left( {\frac{{x_{\left( \frac{{n + 1}}{2}\right) i} }}{\alpha }} \right) ^\beta }}{{1 + \left( {\frac{{x_{\left( \frac{{n + 1}}{2}\right) i} }}{\alpha }} \right) ^\beta }}} = 0. \end{aligned}$$
(11)

It can be proved that (11) has a unique solution and that this solution is the MLE of \(\alpha \). Denote it by \({{\hat{\alpha }}} _{RSSFO,~MLE}\). The Fisher information number for \(\alpha \) under median-based RSS with odd n is

$$\begin{aligned} I_{RSSFO} (\alpha )=\frac{{n(n + 1)^2 \beta ^2 }}{{4(n + 2)\alpha ^2 }}. \end{aligned}$$
(12)

The derivations of (11) and (12) are similar to those in the even case.
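Both median designs, and the common form of their likelihood equations, can be sketched as follows; (8) and (11) each read \(n/2 - \sum t/(1+t) = 0\) over the measured units, so one score function covers both cases. This is our illustration, with perfect ranking simulated by sorting each set.

```python
import numpy as np
from scipy.optimize import brentq

def median_rss(rng, draw, n):
    # Even n: rank n/2 from the first n/2 sets, rank n/2 + 1 from the rest.
    # Odd n: rank (n + 1)/2 from every set.  (0-based indices below.)
    if n % 2 == 0:
        ranks = [n // 2 - 1] * (n // 2) + [n // 2] * (n // 2)
    else:
        ranks = [(n - 1) // 2] * n
    return np.array([np.sort(draw(rng, n))[r] for r in ranks])

def mle_alpha_rssf(x_med, beta):
    # LHS of (8) and of (11) have the same form: n/2 - sum t/(1+t)
    def score(alpha):
        t = (x_med / alpha)**beta
        return len(x_med) / 2.0 - np.sum(t / (1.0 + t))
    return brentq(score, x_med.min() / 100.0, x_med.max() * 100.0)
```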

3 MLE of \(\beta \) when \(\alpha \) is known

In this section, the existence and uniqueness of the MLE of \(\beta \) for (1) with known \(\alpha \) are proved under SRS and RSS. In addition, the MLE of \(\beta \) using RSSF is considered. The Fisher information number for \(\beta \) is obtained under the three sampling schemes, as in Sect. 2.

3.1 MLE of \(\beta \) using SRS

In this subsection, the existence and uniqueness of the MLE of \(\beta \) for (1) with known \(\alpha \) are proved under SRS. The Fisher information number for \(\beta \) under this sampling scheme is given.

Let \(\left\{ {X_1 ,X_2 ,X_3 , \cdots ,X_n } \right\} \) be a simple random sample of size n from (1) with known \(\alpha \). The log-likelihood function based on this sample is

$$\begin{aligned} \displaystyle \ln L_{SRS} = d + n\ln \beta + \beta \sum \limits _{i = 1}^n {\ln \frac{{x_i }}{\alpha }} - 2\sum \limits _{i = 1}^n {\ln \left[ {1 + \left( {\frac{{x_i }}{\alpha }} \right) ^\beta } \right] }, \end{aligned}$$

where d is a term that does not depend on \(\beta \). If the MLE of \(\beta \) exists, then it is a solution of the likelihood equation

$$\begin{aligned} \displaystyle \frac{n}{\beta } + \sum \limits _{i = 1}^n {\ln \frac{{x_i }}{\alpha }} - 2\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_i }}{\alpha }\right) ^\beta \ln \frac{{x_i }}{\alpha }}}{{1 + \left( \frac{{x_i }}{\alpha }\right) ^\beta }}} = 0. \end{aligned}$$
(13)

In order to study the existence and uniqueness of the MLE, the second-order derivative of the log-likelihood function \(\ln L_{SRS}\) is computed as

$$\begin{aligned} \displaystyle \frac{{\partial ^2 \ln L_{SRS}}}{{\partial \beta ^2 }} = - \frac{n}{{\beta ^2 }} - 2\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_i }}{\alpha }\right) ^\beta \ln ^2 \frac{{x_i }}{\alpha }}}{{\left[ 1 + \left( \frac{{x_i }}{\alpha }\right) ^\beta \right] ^2 }}}. \end{aligned}$$
(14)

The LHS of (13) is a continuous function of \(\beta \). When \(\beta \rightarrow 0\), the LHS of (13) goes to \(\infty \), and when \(\beta \rightarrow \infty \), the LHS of (13) goes to \(- \sum \limits _{i = 1}^n {\left| \ln \frac{{x_i }}{\alpha }\right| }\), which is negative provided that not all \(x_i\) equal \(\alpha \). Thus, a solution of (13) exists. Since (14) is always negative, (13) has a unique solution and this solution is the MLE of \(\beta \). This estimator is denoted by \({{\hat{\beta }}} _{SRS,~MLE}.\) The Fisher information number for \(\beta \) under SRS

$$\begin{aligned} \displaystyle I_{SRS}(\beta ) = \frac{{n(3 + \pi ^2 )}}{{9\beta ^2}} \end{aligned}$$
(15)

is given by Reath et al. (2018).
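Equation (13) can also be solved by bracketing: by (14) the score is strictly decreasing in \(\beta \), from \(+\infty \) down to a negative limit. A sketch, using `expit` for numerical stability since \((x/\alpha )^\beta /\left( 1+(x/\alpha )^\beta \right) = \operatorname {expit}\left( \beta \ln (x/\alpha )\right) \):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit

def srs_score_beta(beta, x, alpha):
    # LHS of (13); expit avoids overflow of (x/alpha)**beta for large beta
    lx = np.log(x / alpha)
    return len(x) / beta + np.sum(lx) - 2.0 * np.sum(lx * expit(beta * lx))

def mle_beta_srs(x, alpha, b_lo=1e-3, b_hi=1e3):
    # Strictly decreasing score: positive at b_lo, negative at b_hi
    return brentq(srs_score_beta, b_lo, b_hi, args=(x, alpha))
```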

3.2 MLE of \(\beta \) using RSS

In this subsection, the existence and uniqueness of the MLE of \(\beta \) for (1) with known \(\alpha \) are proved under RSS. The Fisher information number for \(\beta \) under this sampling scheme is obtained.

Let \(\left\{ {X_{(1)1} ,X_{(2)2} ,X_{(3)3} , \cdots ,X_{(n)n} } \right\} \) be a ranked set sample of size n from \(LLD(\alpha , \beta )\) with known \(\alpha \). Then we have

$$\begin{aligned} \displaystyle \ln L_{RSS} = d + n\ln \beta + \beta \sum \limits _{i = 1}^n {i\ln \frac{{x_{\left( i \right) i} }}{\alpha }} - (n + 1)\sum \limits _{i = 1}^n {\ln \left[ {1 + \left( {\frac{{x_{\left( i \right) i} }}{\alpha }} \right) ^\beta } \right] } \end{aligned}$$

and

$$\begin{aligned} \displaystyle \frac{{\partial \ln L_{RSS} }}{{\partial \beta }} = \frac{n}{\beta } + \sum \limits _{i = 1}^n {i\ln \frac{{x_{(i)i} }}{\alpha }} - (n + 1)\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta \ln \frac{{x_{(i)i} }}{\alpha }}}{{1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}}, \end{aligned}$$

where d is a term that does not depend on \(\beta \). If the MLE of \(\beta \) exists, then it is a solution of the likelihood equation

$$\begin{aligned} \displaystyle \frac{n}{\beta } + \sum \limits _{i = 1}^n {i\ln \frac{{x_{(i)i} }}{\alpha }} - (n + 1)\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta \ln \frac{{x_{(i)i} }}{\alpha }}}{{1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}} = 0. \end{aligned}$$
(16)

In order to study the existence and uniqueness of the MLE, the second-order derivative of \(\ln L_{RSS}\) is computed as

$$\begin{aligned} \displaystyle \frac{{\partial ^2 \ln L_{RSS} }}{{\partial \beta ^2 }} = - \frac{n}{{\beta ^2 }} - (n + 1)\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta \ln ^2 \frac{{x_{(i)i} }}{\alpha }}}{{\left( {1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta } \right) ^2 }}} \end{aligned}$$
(17)

The LHS of (16) is a continuous function of \(\beta \). When \(\beta \rightarrow 0\), the LHS of (16) goes to \(\infty \), and when \(\beta \rightarrow \infty \), the LHS of (16) tends to a negative limit. Thus, a solution of (16) exists. Since the RHS of (17) is always negative, (16) has a unique solution and this solution is the MLE of \(\beta \). This estimator is denoted by \(\hat{\beta }_{RSS,~MLE}.\)
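Solving (16) follows the same pattern as under SRS; only the score differs. A sketch (with `x_rss[i-1]` holding \(X_{(i)i}\)):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit

def rss_score_beta(beta, x_rss, alpha):
    # LHS of (16)
    n = len(x_rss)
    lx = np.log(x_rss / alpha)
    ranks = np.arange(1, n + 1)
    return n / beta + np.sum(ranks * lx) - (n + 1) * np.sum(lx * expit(beta * lx))

def mle_beta_rss(x_rss, alpha, b_lo=1e-3, b_hi=1e3):
    return brentq(rss_score_beta, b_lo, b_hi, args=(x_rss, alpha))
```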

The Fisher information number for \(\beta \) under RSS is

$$\begin{aligned} \displaystyle I_{RSS} (\beta )= & {} - E\left[ {\frac{{\partial ^2 \ln L_{RSS}}}{{\partial \beta ^2 }}} \right] = \frac{n}{{\beta ^2 }} + (n + 1)nE\left[ {\frac{{\left( \frac{x}{\alpha }\right) ^\beta \ln ^2 \frac{x}{\alpha }}}{{\left( {1 + \left( {\frac{x}{\alpha }} \right) ^\beta } \right) ^2 }}} \right] \nonumber \\= & {} \frac{n}{{\beta ^2 }} + \frac{{n(n + 1)}}{{\beta ^2 }}\int _0^\infty {\frac{{t\ln ^2 t}}{{(1 + t)^4 }}} dt= \frac{n}{{\beta ^2 }} + \frac{{n(n + 1)}}{{\beta ^2 }}\left( \frac{{\pi ^2 }}{{18}} - \frac{1}{3}\right) . \end{aligned}$$
(18)

3.3 MLE of \(\beta \) under RSS based on maximizing the Fisher information number

In this subsection, the existence and uniqueness of the MLE of \(\beta \) for (1) with known \(\alpha \) are proved under RSSF. The Fisher information number for \(\beta \) under this sampling scheme is obtained.

We evaluate the Fisher information contained in the order statistics arising from (1) with known \(\alpha \); Table 1 shows that the minimum of a random sample contains the maximum information about \(\beta \). So we arrange the RSS based on the minimum.

Table 1 Fisher information

In each of the n sets, select the unit with the minimum rank, and denote the measured units by \(\left\{ {X_{(1)1} ,X_{(1)2} ,X_{(1)3} , \cdots ,X_{(1)n} } \right\} \). The pdf of \(X_{(1)i}\) is

$$\begin{aligned} f_{\left( 1 \right) } \left( x \right) = \frac{{c\left( {1,n} \right) \beta \left( {\frac{x}{\alpha }} \right) ^{\beta - 1} }}{{\alpha \left[ {1 + \left( {\frac{x}{\alpha }} \right) ^\beta } \right] ^{n + 1} }}. \end{aligned}$$

The log-likelihood function based on these samples \(\left\{ {X_{(1)1} ,X_{(1)2} ,X_{(1)3} , \cdots ,X_{(1)n} } \right\} \) is

$$\begin{aligned} \ln L_{RSSF} = d + n\ln \beta + \beta \sum \limits _{i = 1}^n {\ln \frac{{x_{(1)i} }}{\alpha }} - (n + 1)\sum \limits _{i = 1}^n {\ln \left[ {1 + \left( {\frac{{x_{(1)i} }}{\alpha }} \right) ^\beta } \right] }, \end{aligned}$$

where d is a term that does not depend on \(\beta \). Taking the first derivative of \(\ln L_{RSSF}\) with respect to \(\beta \), we have

$$\begin{aligned} \displaystyle \frac{{\partial \ln L_{RSSF} }}{{\partial \beta }} = \frac{n}{\beta } + \sum \limits _{i = 1}^n {\ln \frac{{x_{(1)i} }}{\alpha }} - (n + 1)\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(1)i} }}{\alpha }\right) ^\beta \ln \frac{{x_{(1)i} }}{\alpha }}}{{1 + \left( \frac{{x_{(1)i} }}{\alpha }\right) ^\beta }}} . \end{aligned}$$

If the MLE of \(\beta \) exists, then it is a solution of the likelihood equation

$$\begin{aligned} \displaystyle \frac{n}{\beta } + \sum \limits _{i = 1}^n \ln \frac{{x_{(1)i} }}{\alpha } - (n + 1)\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(1)i} }}{\alpha }\right) ^\beta \ln \frac{{x_{(1)i} }}{\alpha }}}{{1 + \left( \frac{{x_{(1)i} }}{\alpha }\right) ^\beta }}} = 0. \end{aligned}$$
(19)

In order to study the existence and uniqueness of the MLE, the second-order derivative of \(\ln L_{RSSF}\) is computed as

$$\begin{aligned} \displaystyle \frac{{\partial ^2 \ln L_{RSSF} }}{{\partial \beta ^2 }} = - \frac{n}{{\beta ^2 }} - (n + 1)\sum \limits _{i = 1}^n {\frac{{\left( {\frac{{x_{(1)i} }}{\alpha }} \right) ^\beta \ln ^2 \frac{{x_{(1)i} }}{\alpha }}}{{\left[ {1 + \left( {\frac{{x_{(1)i} }}{\alpha }} \right) ^\beta } \right] ^2 }}}. \end{aligned}$$
(20)

The LHS of (19) is a continuous function of \(\beta \). When \(\beta \rightarrow 0\), the LHS of (19) goes to \(\infty \), and when \(\beta \rightarrow \infty \), the LHS of (19) tends to a negative limit. Thus, a solution of (19) exists. Since the RHS of (20) is always negative, (19) has a unique solution and this solution is the MLE of \(\beta \). This estimator is denoted by \(\hat{\beta }_{RSSF,~MLE}.\)

The Fisher information number for \(\beta \) under RSS based on the minimum is

$$\begin{aligned} I_{RSSF } (\beta )= & {} -E\left[ {\frac{{\partial ^2 \ln L_{RSSF}}}{{\partial \beta ^2 }}} \right] \nonumber \\= & {} \frac{n}{{\beta ^2 }} + (n + 1)\sum \limits _{i = 1}^n {E\left( {\frac{{\left( {\frac{{x_{(1)i} }}{\alpha }} \right) ^\beta \ln ^2 \frac{{x_{(1)i} }}{\alpha }}}{{\left[ {1 + \left( {\frac{{x_{(1)i} }}{\alpha }} \right) ^\beta } \right] ^2 }}} \right) }\nonumber \\= & {} \frac{n}{{\beta ^2 }} + \frac{{n(n + 1)}}{{\beta ^2 }}E\left( {\frac{{y_{(1)} \ln ^2 y_{(1)} }}{{\left( {1 + y_{(1)} } \right) ^2 }}} \right) , \end{aligned}$$
(21)

where \(y_{(1)}\) is the first order statistic (minimum) of a sample of size n from LLD(1, 1).
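Under LLD(1, 1) the minimum of n observations has pdf \(f_{(1)}(y) = n/(1+y)^{n+1}\) (from \(1-F(y) = 1/(1+y)\)), so the expectation in (21) reduces to a one-dimensional integral; a sketch evaluating \(I_{RSSF}(\beta )\) numerically:

```python
import numpy as np
from scipy.integrate import quad

def info_beta_rssf(n, beta=1.0):
    # E[y ln^2(y) / (1+y)^2] for y the minimum of n iid LLD(1,1) draws,
    # whose pdf is n / (1+y)^(n+1)
    e, _ = quad(lambda y: y * np.log(y)**2 / (1 + y)**2 * n / (1 + y)**(n + 1),
                0, np.inf)
    return (n + n * (n + 1) * e) / beta**2   # Fisher information (21)

print([round(info_beta_rssf(n), 3) for n in range(2, 8)])
```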

4 MLEs of \(\alpha \) and \(\beta \)

In this section, the existence of the MLEs of \(\alpha \) and \(\beta \) for \(LLD(\alpha , \beta )\) is proved under SRS and RSS. To compare the MLEs of \(\alpha \) and \(\beta \) estimated simultaneously, the Fisher information matrices \(I_{SRS} (\alpha ,\beta )\) and \(I_{RSS} (\alpha ,\beta )\) must be computed.

4.1 MLEs of \(\alpha \) and \(\beta \) using SRS

Let \(\left\{ {X_1 ,X_2 ,X_3 , \cdots ,X_n } \right\} \) be a simple random sample of size n from \(LLD(\alpha , \beta )\). The log-likelihood function based on this sample is

$$\begin{aligned} \displaystyle \ln L_{SRS} = d + n\ln \beta + \beta \sum \limits _{i = 1}^n {\ln \frac{{x_i }}{\alpha }} - 2\sum \limits _{i = 1}^n {\ln \left[ {1 + \left( {\frac{{x_i }}{\alpha }} \right) ^\beta } \right] } , \end{aligned}$$

where d is a constant free of \(\alpha \) and \(\beta \). Differentiating with respect to each parameter, we obtain

$$\begin{aligned} \displaystyle \frac{{\partial \ln L_{SRS} }}{{\partial \alpha }} = - \frac{\beta }{\alpha }\left[ {n - 2\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_i }}{\alpha }\right) ^\beta }}{{1 + \left( \frac{{x_i }}{\alpha }\right) ^\beta }}} } \right] \end{aligned}$$
(22)

and

$$\begin{aligned} \displaystyle \frac{{\partial \ln L_{SRS} }}{{\partial \beta }} = \frac{n}{\beta } + \sum \limits _{i = 1}^n {\ln \frac{{x_i }}{\alpha }} - 2\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_i }}{\alpha }\right) ^\beta \ln \frac{{x_i }}{\alpha }}}{{1 + \left( \frac{{x_i }}{\alpha }\right) ^\beta }}}. \end{aligned}$$
(23)

In order to prove that the solutions of (22) and (23) are the MLEs of \(\alpha \) and \(\beta \), \( \displaystyle \frac{{\partial ^2 \ln L_{SRS}}}{{\partial \alpha ^2 }} \) and \( \displaystyle \left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \alpha ^2 }}} \right) \left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \beta ^2 }}} \right) - \left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \alpha \partial \beta }}} \right) ^2 \) are computed. Note that \( \displaystyle \frac{{\partial ^2 \ln L_{SRS}}}{{\partial \alpha ^2 }}|_{({{\hat{\alpha }}} _{SRS,~MLE},~{{\hat{\beta }}} _{SRS,~MLE})}<0. \)

$$\begin{aligned}&\left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \alpha ^2 }}} \right) \left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \beta ^2 }}} \right) - \left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \alpha \partial \beta }}} \right) ^2 \nonumber \\&\quad = \left( {n - 2\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_i }}{\alpha }\right) ^\beta }}{{1 + \left( \frac{{x_i }}{\alpha }\right) ^\beta }}} } \right) \nonumber \\&\qquad \left( {\frac{n}{{\alpha ^2 \beta }} + \frac{{2\beta }}{{\alpha ^2 }}\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_i }}{\alpha }\right) ^\beta }}{{\left[ {1 + \left( \frac{{x_i }}{\alpha }\right) ^\beta } \right] ^2 }} - \frac{1}{{\alpha ^2 }}\left[ {n - 2\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_i }}{\alpha }\right) ^\beta }}{{1 + \left( \frac{{x_i }}{\alpha }\right) ^\beta }}} } \right] } } \right. \nonumber \\&\qquad - \left. {\frac{{4\beta }}{{\alpha ^2 }}\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_i }}{\alpha }\right) ^\beta \ln \frac{{x_i }}{\alpha }}}{{\left[ 1 + \left( \frac{{x_i }}{\alpha }\right) ^\beta \right] ^2 }}} } \right) + \frac{{2n}}{{\alpha ^2 }}\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_i }}{\alpha }\right) ^\beta }}{{\left[ 1 + \left( \frac{{x_i }}{\alpha }\right) ^\beta \right] ^2 }}}. \end{aligned}$$
(24)

Since the first term of (24) is zero at any solution \((\alpha ,\beta )\) of (22) and (23), while the second term \( \displaystyle \frac{{2n}}{{\alpha ^2 }}\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_i }}{\alpha }\right) ^\beta }}{{\left[ 1 + \left( \frac{{x_i }}{\alpha }\right) ^\beta \right] ^2 }}} \) is always positive, we have \( \displaystyle \left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \alpha ^2 }}} \right) \left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \beta ^2 }}} \right) - \left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \alpha \partial \beta }}} \right) ^2|_{({{\hat{\alpha }}} _{SRS,~MLE},~{{\hat{\beta }}} _{SRS,~MLE})}>0\). Thus the solutions of (22) and (23) are the MLEs of \(\alpha \) and \(\beta \).
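Numerically, the system (22)–(23) is most easily handled by minimizing the negative log-likelihood on the log scale, which enforces \(\alpha , \beta > 0\); a sketch (the optimizer and starting values are our choices, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

def negloglik_srs(params, x):
    log_a, log_b = params
    a, b = np.exp(log_a), np.exp(log_b)
    lx = np.log(x / a)
    # log-density of LLD: log(b/a) + (b-1) log(x/a) - 2 log(1 + (x/a)^b),
    # with log(1 + t) computed stably as logaddexp(0, b * log(x/a))
    return -np.sum(np.log(b / a) + (b - 1) * lx - 2.0 * np.logaddexp(0.0, b * lx))

def mle_srs(x):
    res = minimize(negloglik_srs, x0=(0.0, 0.0), args=(x,), method="Nelder-Mead")
    return np.exp(res.x)   # (alpha_hat, beta_hat)

rng = np.random.default_rng(seed=3)
u = rng.uniform(size=200)
x = 1.0 * (u / (1 - u))**(1 / 3.0)   # LLD(alpha=1, beta=3)
print(mle_srs(x))                    # should be near (1, 3)
```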

The Fisher information matrix for \(\alpha \) and \(\beta \)

$$\begin{aligned} \displaystyle I_{SRS}(\alpha , \beta )= \left( {\begin{array}{cc} {\frac{{n\beta ^2 }}{{3\alpha ^2 }}} &{} 0 \\ 0 &{} {\frac{{n(3 + \pi ^2)}}{{9\beta ^2 }}} \\ \end{array}} \right) \end{aligned}$$
(25)

is given by Reath et al. (2018).

4.2 MLEs of \(\alpha \) and \(\beta \) using RSS

Let \(\left\{ {X_{(1)1} ,X_{(2)2} ,X_{(3)3} , \cdots ,X_{(n)n} } \right\} \) be a ranked set sample of size n from \(LLD(\alpha , \beta )\). Then we have

$$\begin{aligned} \displaystyle \frac{{\partial \ln L_{RSS}}}{{\partial \alpha }} = - \frac{\beta }{\alpha }\left[ {\frac{{n(n + 1)}}{2} - (n + 1)\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}{{1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}} } \right] \end{aligned}$$
(26)

and

$$\begin{aligned} \displaystyle \frac{{\partial \ln L_{RSS} }}{{\partial \beta }} = \frac{n}{\beta } + \sum \limits _{i = 1}^n {i\ln \frac{{x_{(i)i} }}{\alpha }} - (n + 1)\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta \ln \frac{{x_{(i)i} }}{\alpha }}}{{1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}} . \end{aligned}$$
(27)

In order to prove that the solutions of (26) and (27) are the MLEs of \(\alpha \) and \(\beta \), \( \displaystyle \frac{{\partial ^2 \ln L_{RSS}}}{{\partial \alpha ^2 }} \) and \( \displaystyle \left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \alpha ^2 }}} \right) \left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \beta ^2 }}} \right) - \left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \alpha \partial \beta }}} \right) ^2 \) are computed. Note that \( \displaystyle \frac{{\partial ^2 \ln L_{RSS}}}{{\partial \alpha ^2 }}|_{({{\hat{\alpha }}} _{RSS,~MLE},~{{\hat{\beta }}} _{RSS,~MLE})}<0. \)

$$\begin{aligned}&\left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \alpha ^2 }}} \right) \left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \beta ^2 }}} \right) - \left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \alpha \partial \beta }}} \right) ^2\nonumber \\&\quad = \left[ {\frac{{n(n + 1)}}{2} - (n + 1)\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}{{1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}} } \right] \left( {\frac{n}{{\alpha ^2 \beta }} + \frac{{(n + 1)\beta }}{\alpha }\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}{{\left( {1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta } \right) ^2 }}} } \right) \nonumber \\&\qquad + \frac{{(n + 1)^2 \beta ^2 }}{{\alpha ^2 }}\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^{2\beta } \ln ^2 \frac{{x_{(i)i} }}{\alpha }}}{{\left[ 1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta \right] ^4 }}} + \frac{{(n + 1)n}}{{\alpha ^2 }}\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}{{\left[ 1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta \right] ^2 }}} \end{aligned}$$
(28)

Since the first term of (28) is zero at any solution \((\alpha ,\beta )\) of (26) and (27), while the second term \( \displaystyle \frac{{(n + 1)^2 \beta ^2 }}{{\alpha ^2 }}\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^{2\beta } \ln ^2 \frac{{x_{(i)i} }}{\alpha }}}{{\left[ 1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta \right] ^4 }}} \) and the third term \( \displaystyle \frac{{(n + 1)n}}{{\alpha ^2 }}\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}{{\left[ 1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta \right] ^2 }}} \) are always positive, we have \( \displaystyle \left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \alpha ^2 }}} \right) \left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \beta ^2 }}} \right) - \left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \alpha \partial \beta }}} \right) ^2|_{({{\hat{\alpha }}} _{RSS,~MLE},~{{\hat{\beta }}} _{RSS,~MLE})}>0 \). Thus the solutions of (26) and (27) are the MLEs of \(\alpha \) and \(\beta \).

In order to obtain the Fisher information matrix for \(\alpha \) and \(\beta \), we also need to compute

$$\begin{aligned}&E\left[ {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \alpha \partial \beta }}} \right] \nonumber \\&\quad = E\left[ \frac{{n(n + 1)}}{{2\alpha }} - \frac{{n + 1}}{\alpha }\sum \limits _{i = 1}^n \frac{{\left( {\frac{{x_{(i)i} }}{\alpha }} \right) ^\beta }}{{1 + \left( {\frac{{x_{(i)i} }}{\alpha }} \right) ^\beta }}\right. \nonumber \\&\left. \qquad - \frac{{(n + 1)\beta }}{\alpha }\sum \limits _{i = 1}^n {\frac{{\left( {\frac{{x_{(i)i} }}{\alpha }} \right) ^\beta \ln \frac{{x_{(i)i} }}{\alpha }}}{{1 + \left( {\frac{{x_{(i)i} }}{\alpha }} \right) ^\beta }} + \frac{{(n + 1)\beta }}{\alpha }\sum \limits _{i = 1}^n {\frac{{\left( {\frac{{x_{(i)i} }}{\alpha }} \right) ^{2\beta } \ln \frac{{x_{(i)i} }}{\alpha }}}{{\left[ {1 + \left( {\frac{{x_{(i)i} }}{\alpha }} \right) ^\beta } \right] ^2 }}} } \right] \nonumber \\&\quad = \frac{{n(n + 1)}}{{2\alpha }} - \frac{{n(n + 1)}}{\alpha }E\left[ {\frac{{\left( {\frac{x}{\alpha }} \right) ^\beta }}{{1 + \left( {\frac{x}{\alpha }} \right) ^\beta }}} \right] - \frac{{n(n + 1)\beta }}{\alpha }E\left[ {\frac{{\left( {\frac{x}{\alpha }} \right) ^\beta \ln \frac{x}{\alpha }}}{{1 + \left( {\frac{x}{\alpha }} \right) ^\beta }}} \right] \nonumber \\&\qquad + \frac{{n(n + 1)\beta }}{\alpha }E\left[ {\frac{{\left( {\frac{x}{\alpha }} \right) ^{2\beta } \ln \frac{x}{\alpha }}}{{\left[ {1 + \left( {\frac{x}{\alpha }} \right) ^\beta } \right] ^2 }}} \right] \nonumber \\&\quad = \frac{{n(n + 1)}}{{2\alpha }} - \frac{{n(n + 1)}}{\alpha }\int _0^\infty {\frac{t}{{(1 + t)^3 }}} dt - \frac{{n(n + 1)}}{\alpha }\int _0^\infty {\frac{{t\ln t}}{{(1 + t)^3 }}} dt\nonumber \\&\qquad + \frac{{n(n + 1)}}{\alpha }\int _0^\infty {\frac{{t^2 \ln t}}{{(1 + t)^4 }}} dt\nonumber \\&\quad = 0. \end{aligned}$$
(29)
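The cancellation in (29) can be checked numerically; each integral below equals \(1/2\), so the four terms cancel in pairs (a verification sketch):

```python
import numpy as np
from scipy.integrate import quad

vals = [quad(f, 0, np.inf)[0] for f in (
    lambda t: t / (1 + t)**3,                 # 1/2
    lambda t: t * np.log(t) / (1 + t)**3,     # 1/2
    lambda t: t**2 * np.log(t) / (1 + t)**4,  # 1/2
)]
print(vals)  # [0.5, 0.5, 0.5]: (29) reduces to (1 - 1 - 1 + 1) * n(n+1)/(2a) = 0
```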

Combining (7) and (18) with (29), we obtain the Fisher information matrix for \(\alpha \) and \(\beta \)

$$\begin{aligned} I_{RSS}(\alpha , \beta )= \left( {\begin{array}{cc} {\frac{{n(n + 1)\beta ^2 }}{{6\alpha ^2 }}} &{} 0 \\ 0 &{} {\frac{n}{{\beta ^2 }} + \frac{{n(n + 1)}}{{\beta ^2 }}\left( \frac{{\pi ^2 }}{{18}} - \frac{1}{3}\right) } \\ \end{array}} \right) . \end{aligned}$$
(30)

5 Numerical comparison

5.1 Results for perfect ranking

We now compare the above MLEs in terms of asymptotic efficiency. Combining (4), (7), (10) and (12) from Sect. 2, we can respectively obtain the asymptotic efficiencies of \({{\hat{\alpha }}} _{RSS,MLE}\) with respect to (w.r.t.) \({{\hat{\alpha }}} _{SRS,MLE}\), of \({{\hat{\alpha }}} _{RSSFK,MLE}\) (\(K=E\) or O) w.r.t. \({{\hat{\alpha }}} _{SRS,MLE}\), and of \({{\hat{\alpha }}} _{RSSFK,MLE}\) (\(K=E\) or O) w.r.t. \({{\hat{\alpha }}} _{RSS,MLE}\)

$$\begin{aligned} \displaystyle \mathrm{{AE}}^1 = \frac{{I_{\mathrm{{RSS}}} (\alpha )}}{{I_{\mathrm{{SRS}}} (\alpha )}}=\frac{n+1}{2}, \end{aligned}$$
(31)
$$\begin{aligned} \displaystyle \mathrm{{AE}}^2 = \frac{{I_{\mathrm{{~RSSFK}}} (\alpha )}}{{I_{\mathrm{{SRS}}} (\alpha )}} =\left\{ {\begin{array}{lc} {\frac{{3n}}{4}},\quad K=E\\ {} \\ {\frac{{3\left( {n + 1} \right) ^2 }}{{4\left( {n + 2} \right) }}},\quad K=O\\ \end{array}} \right. \end{aligned}$$
(32)

and

$$\begin{aligned} \displaystyle \mathrm{{AE}}^3= \frac{{I_{\mathrm{{~RSSFK}}} (\alpha )}}{{I_{\mathrm{{RSS}}} (\alpha )}} = \left\{ {\begin{array}{lc} {\frac{{3n}}{{2(n + 1)}}},\quad K=E\\ {} \\ {\frac{{3\left( {n + 1} \right) }}{{2\left( {n + 2} \right) }}},\quad K=O\\ \end{array}} \right. \end{aligned}$$
(33)

It can be seen that \(\mathrm{{AE}}^i\ge 1~(i=1,2,3)\) for \(n>2\). Combining (15), (18) and (21) from Sect. 3, the asymptotic efficiencies of \({\hat{\beta }} _{RSS,MLE}\) w.r.t. \({{\hat{\beta }}} _{SRS,MLE}\), of \({{\hat{\beta }}} _{RSSF,MLE}\) w.r.t. \({{\hat{\beta }}} _{SRS,MLE}\), and of \({{\hat{\beta }}} _{RSSF,MLE}\) w.r.t. \({{\hat{\beta }}} _{RSS,MLE}\)

$$\begin{aligned} \displaystyle \mathrm{{AE}}^4= & {} \frac{{I_{\mathrm{{~RSS}}} (\beta )}}{{I_{\mathrm{{SRS}}} (\beta )}} =\frac{9}{{3 + \pi ^2 }}\left[ {1 + (n + 1)\left( {\frac{{\pi ^2 }}{{18}} - \frac{1}{3}} \right) } \right] , \end{aligned}$$
(34)
$$\begin{aligned} \displaystyle \mathrm{{AE}}^5= & {} \frac{{I_{\mathrm{{~RSSF}}} (\beta )}}{{I_{\mathrm{{SRS}}} (\beta )}} =\frac{9}{{3 + \pi ^2 }}\left[ {1 + (n + 1)E\left( {\frac{{y_{(1)} \ln ^2 y_{(1)} }}{{\left( {1 + y_{(1)} } \right) ^2 }}} \right) } \right] \end{aligned}$$
(35)

and

$$\begin{aligned} \displaystyle \mathrm{{AE}}^6= \frac{{I_{\mathrm{{~RSSF}}} (\beta )}}{{I_{\mathrm{{RSS}}} (\beta )}} =\frac{{1 + (n + 1)E\left( {\frac{{y_{(1)} \ln ^2 y_{(1)} }}{{\left( {1 + y_{(1)} } \right) ^2 }}} \right) }}{{1 + (n + 1)\left( {\frac{{\pi ^2 }}{{18}} - \frac{1}{3}} \right) }} \end{aligned}$$
(36)

are respectively given. It can be seen that \(\mathrm{{AE}}^4>1\) for \(n\ge 2\). Combining (25) with (30) from Sect. 4, we can obtain the asymptotic efficiency of \(({{\hat{\alpha }}} _{RSS,MLE} ,{{\hat{\beta }}} _{RSS,MLE})\) w.r.t. \(({{\hat{\alpha }}} _{SRS,MLE} ,{{\hat{\beta }}} _{SRS,MLE} )\)

$$\begin{aligned} \displaystyle \mathrm{{AE}}^7=\frac{{\det \left\{ {I_{\mathrm{{RSS}}} (\alpha ,\beta )} \right\} }}{{\det \left\{ {I_{\mathrm{{SRS}}} (\alpha ,\beta )} \right\} }} =\frac{9}{{6 + 2\pi ^2 }}\left( {n + 1} \right) \left[ {1 + (n + 1)\left( {\frac{{\pi ^2 }}{{18}} - \frac{1}{3}} \right) } \right] . \end{aligned}$$
(37)

It can be seen that \(\mathrm{{AE}}^7>1\) for \(n\ge 2\).
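Since (31)–(34) and (37) depend only on the set size n, the corresponding entries of Tables 2–4 can be reproduced directly (\(\mathrm{{AE}}^5\) and \(\mathrm{{AE}}^6\) additionally require the expectation in (21), e.g. via the quadrature sketch in Sect. 3.3); a sketch:

```python
import numpy as np

def asymptotic_efficiencies(n):
    c = np.pi**2 / 18 - 1 / 3
    ae1 = (n + 1) / 2                                                   # (31)
    ae2 = 3 * n / 4 if n % 2 == 0 else 3 * (n + 1)**2 / (4 * (n + 2))   # (32)
    ae3 = (3 * n / (2 * (n + 1)) if n % 2 == 0
           else 3 * (n + 1) / (2 * (n + 2)))                            # (33)
    ae4 = 9 / (3 + np.pi**2) * (1 + (n + 1) * c)                        # (34)
    ae7 = 9 * (n + 1) / (6 + 2 * np.pi**2) * (1 + (n + 1) * c)          # (37)
    return ae1, ae2, ae3, ae4, ae7

for n in range(2, 8):
    print(n, [round(v, 3) for v in asymptotic_efficiencies(n)])
```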

Table 2 Fisher information and asymptotic efficiency of MLE of \(\alpha \)
Table 3 Fisher information and asymptotic efficiency of MLE of \(\beta \)
Table 4 Asymptotic efficiency of MLEs of \(\alpha \) and \(\beta \)

In order to observe how \(I_{SRS} (\alpha )\), \(I_{RSS} (\alpha )\), \(I_{RSSFK} (\alpha )\), \(I_{SRS} (\beta )\), \(I_{RSS} (\beta )\), \(I_{RSSF} (\beta )\), \({\det \left\{ {I_{\mathrm{{SRS}}} (\alpha ,\beta )} \right\} }\), \({\det \left\{ {I_{\mathrm{{RSS}}} (\alpha ,\beta )} \right\} }\) and \(\mathrm{{AE}}^i~(i=1,2,\ldots ,7)\) change with the set size n, numerical comparisons are given in Tables 2, 3 and 4.

Table 5 Efficiency of MLE of \(\alpha \)
Table 6 Efficiency of MLE of \(\beta \)

From Table 2, we conclude the following:

  1. (1)

    \(\mathrm{{AE}}^i~(i=1,2,3)>1\) for \(n>2\), and they increase as n increases.

  2. (2)

    \(I_{RSS} (\alpha )>I_{SRS} (\alpha )\), \(I_{RSSFK} (\alpha )>I_{SRS} (\alpha )\) and \(I_{RSSFK} (\alpha )> I_{RSS} (\alpha )\) for \(n>2\).

  3. (3)

    \({{\hat{\alpha }}} _{RSS,MLE}\) is more efficient than \(\hat{\alpha }_{SRS,MLE}\), \({{\hat{\alpha }}} _{RSSFK,MLE}\) is more efficient than \({{\hat{\alpha }}} _{SRS,MLE}\) and \({{\hat{\alpha }}} _{RSSFK,MLE}\) is more efficient than \({{\hat{\alpha }}} _{RSS,MLE}\).

  4. (4)

    In conclusion, the MLE of \(\alpha \) using RSSF is more efficient than that using RSS, which is, in turn, more efficient than that using SRS.

From Table 3, we conclude the following:

  5. (5)

    \(\mathrm{{AE}}^i~(i=4,5,6)>1\) for \(n\ge 2\), and they increase as n increases.

  6. (6)

    \(I_{RSS} (\beta )>I_{SRS} (\beta )\), \(I_{RSSF} (\beta )>I_{SRS} (\beta )\) and \(I_{RSSF} (\beta )> I_{RSS} (\beta )\) for \(n\ge 2\).

  7. (7)

    \({{\hat{\beta }}} _{RSS,MLE}\) is more efficient than \({{\hat{\beta }}} _{SRS,MLE}\), \({{\hat{\beta }}} _{RSSF,MLE}\) is more efficient than \({{\hat{\beta }}} _{SRS,MLE}\) and \({{\hat{\beta }}} _{RSSF,MLE}\) is more efficient than \({{\hat{\beta }}} _{RSS,MLE}\).

  8. (8)

    In conclusion, the MLE of \(\beta \) using RSSF is more efficient than that using RSS, which is, in turn, more efficient than that using SRS.

From Table 4, we conclude the following:

  9. (9)

    \(\mathrm{{AE}}^7>1\) for \(n\ge 2\), and it increases as n increases.

  10. (10)

    \({\det \left\{ {I_{\mathrm{{RSS}}} (\alpha ,\beta )} \right\} }>{\det \left\{ {I_{\mathrm{{SRS}}} (\alpha ,\beta )} \right\} }\).

  11. (11)

    In conclusion, the MLEs of \(\alpha \) and \(\beta \) using RSS are more efficient than those using SRS.

5.2 Results for imperfect ranking

Since perfect rankings are unlikely in practice, the efficiencies of the above MLEs under imperfect ranking are compared in this subsection. We use \({{\hat{\alpha }}}^{*}_{RSS,MLE}\) and \({{\hat{\alpha }}}^{*}_{RSSFK,MLE}\) to denote the MLEs of \(\alpha \) using imperfect RSS and imperfect RSSF, and \({{\hat{\beta }}}^{*}_{RSS,MLE}\) and \({{\hat{\beta }}}^{*}_{RSSF,MLE}\) to denote the MLEs of \(\beta \) using imperfect RSS and imperfect RSSF. The efficiencies of \({\hat{\alpha }}^{*} _{RSS,MLE}\) w.r.t. \({{\hat{\alpha }}} _{SRS,MLE}\), \({\hat{\alpha }}^{*} _{RSSFK,MLE}\) w.r.t. \({{\hat{\alpha }}} _{SRS,MLE}\), \({\hat{\alpha }}^{*} _{RSSFK,MLE}\) w.r.t. \({{\hat{\alpha }}}^{*} _{RSS,MLE}\), \({{\hat{\beta }}}^{*} _{RSS,MLE}\) w.r.t. \({{\hat{\beta }}} _{SRS,MLE}\), \({\hat{\beta }}^{*} _{RSSF,MLE}\) w.r.t. \({{\hat{\beta }}} _{SRS,MLE}\) and \({\hat{\beta }}^{*} _{RSSF,MLE}\) w.r.t. \({{\hat{\beta }}}^{*} _{RSS,MLE}\) are respectively denoted by \(\mathrm{{AE}}^{1^{*}}\), \(\mathrm{{AE}}^{2^{*}}\), \(\mathrm{{AE}}^{3^{*}}\), \(\mathrm{{AE}}^{4^{*}}\), \(\mathrm{{AE}}^{5^{*}}\) and \(\mathrm{{AE}}^{6^{*}}\). We use the simulation method considered by Dell and Clutter (1972): ranking is performed on the unit values perturbed by a random error \(\varepsilon \thicksim N(0, \sigma ^2)\). In the following simulations, we choose \(\sigma ^2=0.22,~0.54,~1.00\), \(n=5, 6, 7\), \(\alpha =1\) and \(\beta =3\); a sketch of the scheme follows. All simulation results are based on 10000 iterations.
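A sketch of this imperfect-ranking scheme (our reading of the Dell and Clutter (1972) model: units are ranked on \(X + \varepsilon \), but the true X of the judged unit is measured):

```python
import numpy as np

def lld_rvs(rng, alpha, beta, size):
    u = rng.uniform(size=size)
    return alpha * (u / (1 - u))**(1 / beta)

def imperfect_rss(rng, n, alpha, beta, sigma2):
    # Set i is ranked on X + eps; the unit judged to have rank i is measured.
    sample = np.empty(n)
    for i in range(n):
        x = lld_rvs(rng, alpha, beta, n)
        judged = np.argsort(x + rng.normal(0.0, np.sqrt(sigma2), size=n))
        sample[i] = x[judged[i]]
    return sample

# Efficiencies as in Tables 5 and 6 can then be estimated as ratios of
# Monte Carlo mean squared errors of the MLEs over 10000 iterations.
rng = np.random.default_rng(seed=4)
print(imperfect_rss(rng, n=5, alpha=1.0, beta=3.0, sigma2=0.22))
```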

Based on Tables 5 and 6 we may conclude:

  1. (1)

    \({{\hat{\alpha }}}^{*} _{RSS,MLE}\) is more efficient than \(\hat{\alpha }_{SRS,MLE}\), \({{\hat{\alpha }}}^{*} _{RSSFK,MLE}\) is more efficient than \({{\hat{\alpha }}} _{SRS,MLE}\) and \({{\hat{\alpha }}}^{*} _{RSSFK,MLE}\) is more efficient than \({{\hat{\alpha }}}^{*} _{RSS,MLE}\).

  2. (2)

    \({{\hat{\beta }}}^{*} _{RSS,MLE}\) is more efficient than \(\hat{\beta }_{SRS,MLE}\) and \({{\hat{\beta }}}^{*} _{RSSF,MLE}\) is more efficient than \({{\hat{\beta }}} _{SRS,MLE}\).

  3. (3)

    All efficiencies decrease as \(\sigma ^{2}\) gets larger, i.e. all efficiencies decrease as the error in ranking increases.

  4. (4)

    In conclusion, the MLEs of \(\alpha \) and \(\beta \) using imperfect RSS are more efficient than those using SRS.