Abstract
In this article, maximum likelihood estimators (MLEs) of the scale parameter \(\alpha \) and the shape parameter \(\beta \) of the log-logistic distribution are considered, both when one parameter is known and when both are unknown, under simple random sampling (SRS) and ranked set sampling (RSS). In addition, when one parameter is known, the MLE of the other is considered under a version of RSS based on the order statistic that maximizes the Fisher information for a fixed set size. These MLEs are compared in terms of asymptotic efficiency, and the MLEs based on RSS prove to be real competitors to those based on SRS. The efficiencies of all these MLEs are also simulated under imperfect ranking.
1 Introduction
Ranked set sampling (RSS) was introduced by McIntyre (1952) for estimating pasture yields. It is appropriate for situations where quantification of sampling units is either costly or difficult, but ranking the units in a small set is easy and inexpensive. For further background on RSS, refer to Stokes (1995), Al-Saleh and Al-Hadhrami (2003), Abu-Dayyeh et al. (2013), Omar and Ibrahim (2013), Chen et al. (2013), Chen et al. (2016) and Chen et al. (2017).
The procedure of RSS involves randomly drawing \(n^2\) units from the population and then randomly partitioning them into n sets of size n. The units are then ranked within each set; ranking may be based on judgement, visual perception, covariates, or any other method that does not require actual measurement of the units. From each set, one unit is selected and measured. The basic version of RSS can be elucidated as follows. First, the experimenter draws n independent simple random samples, each of size n, from the population. Then the units within the ith \((i = 1,2, \ldots , n)\) sample are subjected to judgement ordering, at negligible cost, and the unit with the ith lowest rank is identified. Finally, the identified units are measured. Proceeding in this way, we obtain a ranked set sample of size n. If needed, this process can be replicated r times (cycles) to yield a sample of the desired size nr.
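The scheme above can be sketched in code. The following is a minimal simulation of balanced RSS under perfect ranking, i.e. the units are ranked on their true values; the `draw` callback and the standard-normal population in the usage line are placeholders, not part of the paper's setup.

```python
import random

def ranked_set_sample(draw, n, rng, cycles=1):
    """Balanced ranked set sample of size n*cycles.

    draw(rng) returns one unit from the population.  Ranking is done on
    the drawn values themselves, i.e. perfect ranking is assumed.
    """
    sample = []
    for _ in range(cycles):
        for i in range(n):
            # the i-th set of size n, judgement-ordered
            ordered = sorted(draw(rng) for _ in range(n))
            sample.append(ordered[i])  # measure the unit with rank i+1
    return sample

rng = random.Random(0)
rss = ranked_set_sample(lambda r: r.gauss(0.0, 1.0), 4, rng, cycles=2)
```

Each measured value costs one quantification, but each cycle consumes \(n^2\) drawn units, which is why RSS pays off only when ranking is much cheaper than measurement.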
A random variable X is said to have a log-logistic distribution with the scale parameter \(\alpha \) and the shape parameter \(\beta \) if its distribution function is given by
where \(x>0\), \(\alpha >0\), \(\beta >0\). The probability density function (pdf) corresponding to the distribution function in (1) is then given by
We write \(LLD(\alpha ,~\beta )\) to denote the distribution defined in (1). The log-logistic distribution has well-known applications to wealth and income (see Fisk 1961), to hydrology for modelling stream flow rates and precipitation (see Shoukri et al. 1988) and to survival analysis in engineering (see Ashkar and Mahdi 2003). For further details on the importance and applications of the log-logistic distribution one may refer to Bennett (1983), Ahmad et al. (1988), Robson and Reed (1999) and Geskus (2001).
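For concreteness, the distribution function, pdf and quantile function can be coded directly. This is a sketch assuming the usual parameterization \(F(x)=(x/\alpha )^\beta /[1+(x/\alpha )^\beta ]\), which is consistent with the \((x_i/\alpha )^\beta \) terms appearing in the likelihood expressions later in the paper.

```python
def lld_cdf(x, alpha, beta):
    # F(x) = (x/alpha)^beta / (1 + (x/alpha)^beta), for x > 0
    t = (x / alpha) ** beta
    return t / (1.0 + t)

def lld_pdf(x, alpha, beta):
    # f(x) = (beta/x) * t / (1 + t)^2 with t = (x/alpha)^beta
    t = (x / alpha) ** beta
    return (beta / x) * t / (1.0 + t) ** 2

def lld_quantile(u, alpha, beta):
    # Inverting F gives x = alpha * (u/(1-u))**(1/beta), which is
    # convenient for inverse-transform sampling of LLD(alpha, beta).
    return alpha * (u / (1.0 - u)) ** (1.0 / beta)
```

Note that the median equals \(\alpha \) for every \(\beta \), since \(F(\alpha )=1/2\).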
Parameter estimation problems for the log-logistic distribution have been discussed by many authors. Among the recent literature, Balakrishnan et al. (1987) provided the best linear unbiased estimation of the location and scale parameters of the log-logistic distribution under simple random sampling (SRS). Chen (2006) discussed interval estimation for the shape parameter of the log-logistic distribution based on SRS. Further inference on the parameters of the log-logistic distribution has been studied by many authors under SRS, including Tiku and Suresh (1992), Gupta et al. (1999) and Kus and Kaya (2006).
It is well known that the maximum likelihood method is a good choice for estimating unknown parameters because of its various attractive properties, such as consistency and asymptotic normality. In this article, we are interested in the maximum likelihood estimators (MLEs) of the scale and shape parameters \(\alpha \) and \(\beta \) of the log-logistic distribution. In Sect. 2, the existence and uniqueness of the MLE of \(\alpha \) for \(LLD(\alpha , \beta )\) with \(\beta \) known are proved under SRS and RSS. Moreover, the MLE of \(\alpha \) is considered under a version of RSS based on the order statistic that maximizes the Fisher information for a fixed set size (RSSF). Since, under some regularity conditions, the asymptotic efficiency of the MLE can be obtained from the inverse of the Fisher information number, the Fisher information number for \(\alpha \) is obtained under each of the three sampling schemes. In Sect. 3, the existence and uniqueness of the MLE of \(\beta \) for \(LLD(\alpha , \beta )\) with \(\alpha \) known are proved under SRS and RSS. In addition, the MLE of \(\beta \) is considered under RSSF. The Fisher information numbers for \(\beta \) under the three sampling schemes are obtained as in Sect. 2. In Sect. 4, the existence of the MLEs of \(\alpha \) and \(\beta \) for \(LLD(\alpha , \beta )\) is proved under SRS and RSS. To compare the MLEs of \(\alpha \) and \(\beta \) estimated simultaneously, the Fisher information matrices \(I_{SRS} (\alpha ,\beta )\) and \(I_{RSS} (\alpha ,\beta )\) are computed. In Sect. 5, a comparison between these MLEs and the conclusions are presented, and we also compare the efficiencies of all these MLEs in the case of imperfect ranking.
2 MLE of \(\alpha \) when \(\beta \) is known
In this section, the existence and uniqueness of the MLE of \(\alpha \) for (1) in which \(\beta \) is known under SRS and RSS are proved. Moreover, the MLE of \(\alpha \) using RSSF will be considered. Since under some regularity conditions, the asymptotic efficiency of the MLE can be obtained from the inverse of the Fisher information number, the Fisher information number for \(\alpha \) under three different sampling schemes is respectively obtained.
2.1 MLE of \(\alpha \) using SRS
In this subsection, the existence and uniqueness of the MLE of \(\alpha \) for (1) in which \(\beta \) is known under SRS are proved. The Fisher information number for \(\alpha \) under this sampling is given.
Let \(\left\{ {X_1 ,X_2 ,X_3 , \cdots ,X_n } \right\} \) be a simple random sample of size n from (1) in which \(\beta \) is known. The log-likelihood function based on these samples is
where d is a value which is free of \(\alpha \). If the MLE of \(\alpha \) exists, then it is a solution of the likelihood equation
In order to study the existence and uniqueness of the MLE, the second-order derivative of log-likelihood function \(lnL_{SRS}\) is computed as
The left hand side (LHS) of (2) is a continuous function. When \(\alpha \rightarrow 0\), the LHS of (2) goes to \(-n\) and when \(\alpha \rightarrow \infty \), the LHS of (2) goes to n. Thus, a solution of (2) exists. Since the first term of the right hand side (RHS) of (3) is zero at any solution of (2) and the second term of the RHS of (3) is always negative, (2) has a unique solution and this solution is the MLE of \(\alpha \). This estimator is denoted by \(\hat{\alpha }_{SRS, MLE}.\)
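The monotonicity argument above also yields a direct numerical recipe: the score in \(\alpha \) changes sign exactly once, so \(\hat{\alpha }_{SRS, MLE}\) can be found by bisection. The sketch below assumes the usual parameterization of (1), under which the likelihood equation reduces to \(\sum _i t_i/(1+t_i) = n/2\) with \(t_i=(x_i/\alpha )^\beta \); the simulated data at the end is illustrative only.

```python
import random

def mle_alpha_srs(x, beta, lo=1e-8, hi=1e8, steps=200):
    """MLE of alpha with beta known, via bisection on the score.

    Setting dlnL/dalpha = 0 reduces to sum_i t_i/(1+t_i) = n/2 with
    t_i = (x_i/alpha)**beta; that sum is strictly decreasing in alpha,
    so the root is unique, matching the uniqueness argument in the text.
    """
    n = len(x)

    def g(a):
        s = 0.0
        for xi in x:
            t = (xi / a) ** beta
            s += t / (1.0 + t)
        return s - n / 2.0

    for _ in range(steps):
        mid = (lo * hi) ** 0.5          # bisect on the log scale
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5

# Illustrative check on simulated LLD(1, 3) data (inverse transform).
rng = random.Random(42)
beta = 3.0
data = [(u / (1.0 - u)) ** (1.0 / beta) for u in (rng.random() for _ in range(500))]
alpha_hat = mle_alpha_srs(data, beta)
```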
The information number for \(\alpha \) under SRS
is given by Reath et al. (2018).
2.2 MLE of \(\alpha \) using RSS
In this subsection, the existence and uniqueness of the MLE of \(\alpha \) for (1) in which \(\beta \) is known under RSS are proved. The Fisher information number for \(\alpha \) under this sampling is obtained.
Let \(\left\{ {X_{(1)1} ,X_{(2)2} ,X_{(3)3} , \cdots ,X_{(n)n} } \right\} \) be a ranked set sample of size n from (1) in which \(\beta \) is known. The pdf of \(X_{(i)i}\) is
Then we have
and
where d is a value which is free of \(\alpha \). Taking the first derivative for \(lnL_{RSS}\), we have
If the MLE of \(\alpha \) exists, then it is a solution of the likelihood equation
In order to study the existence and uniqueness of the MLE, the second-order derivative of \(lnL_{RSS}\) is computed as
The LHS of (5) is a continuous function. When \(\alpha \rightarrow 0\), the LHS of (5) goes to \(\displaystyle -\frac{{n}}{2}\) and when \(\alpha \rightarrow \infty \), the LHS of (5) goes to \(\displaystyle \frac{{n}}{2}\). Thus, a solution of (5) exists. Since the first term of the RHS of (6) is zero at any solution of (5) and the second term of the RHS of (6) is always negative, (5) has a unique solution and this solution is the MLE of \(\alpha \). This estimator is denoted by \({{\hat{\alpha }}} _{RSS,~MLE}\).
The Fisher information number for \(\alpha \) under RSS
2.3 MLE of \(\alpha \) under RSS based on maximizing the Fisher information number
In this subsection, the existence and uniqueness of the MLE of \(\alpha \) for (1) in which \(\beta \) is known under RSSF are proved. The Fisher information number for \(\alpha \) under this sampling is obtained.
Lesitha and Thomas (2013) observed that the median of a random sample contains the maximum information about \(\alpha \) when they studied the best linear unbiased estimator of \(\alpha \) from (1) in which \(\beta \) is known. So we will arrange the RSS based on the median.
Case 1: n is even
Select the \(\displaystyle \frac{n}{2}\)th ranked unit from each of the first \(\displaystyle \frac{n}{2}\) sets and the \( \displaystyle \left( {\frac{n}{2} + 1} \right) \)th ranked unit from each of the remaining sets for actual measurement. Denote them as \(\left\{ {X_{\left( {\textstyle {n \over 2}}\right) 1} , \cdots , X_{\left( {\textstyle {n \over 2}}\right) {\textstyle {n \over 2}}} ,X_{\left( {\textstyle {n \over 2}} + 1\right) {\textstyle {n \over 2}} + 1} , \cdots ,X_{\left( {\textstyle {n \over 2}} + 1\right) n} } \right\} \). The pdfs of \(X_{\left( {\textstyle {n \over 2}}\right) i}\) and \(X_{\left( {\textstyle {n \over 2}} + 1\right) i}\) are respectively
and
The log-likelihood function based on these samples \(\left\{ X_{\left( {\textstyle {n \over 2}}\right) 1} , \cdots ,X_{\left( {\textstyle {n \over 2}}\right) {\textstyle {n \over 2}}} ,X_{\left( {\textstyle {n \over 2}} + 1\right) {\textstyle {n \over 2}} + 1} , \cdots ,X_{\left( {\textstyle {n \over 2}} + 1\right) n} \right\} \) is
where d is a value which is free of \(\alpha \).
If the MLE of \(\alpha \) exists, then it is a solution of the likelihood equation
In order to study the existence and uniqueness of the MLE, the second-order derivative of the log-likelihood function \(lnL_{RSSFE}\) is computed as
Note that the LHS of (8) is a continuous function. When \(\alpha \rightarrow 0\), the LHS of (8) goes to \(\displaystyle -\frac{{n}}{2}\) and when \(\alpha \rightarrow \infty \), the LHS of (8) goes to \(\displaystyle \frac{{n}}{2}\). Thus, a solution of (8) exists. Since the first term of the RHS of (9) is zero at any solution of (8) and the second term of the RHS of (9) is always negative, (8) has a unique solution and this solution is the MLE of \(\alpha \). This estimator is denoted by \({{\hat{\alpha }}} _{RSSFE, MLE}.\)
The information number for \(\alpha \) under RSS based on the median with even n is
Case 2: n is odd
In each of the n samples select the unit with rank \(\frac{{n + 1}}{2}\), denote them as \(\left\{ {X_{\left( {\textstyle {{n + 1} \over 2}}\right) 1} ,X_{\left( {\textstyle {{n + 1} \over 2}}\right) 2} ,X_{\left( {\textstyle {{n + 1} \over 2}}\right) 3} , \cdots ,X_{\left( {\textstyle {{n + 1} \over 2}}\right) n} } \right\} \). The pdf of \(X_{\left( {\textstyle {{n + 1} \over 2}}\right) i} \) is
The likelihood equation based on these samples \(\left\{ X_{\left( {\textstyle {{n + 1} \over 2}}\right) 1} ,X_{\left( {\textstyle {{n + 1} \over 2}}\right) 2} ,X_{\left( {\textstyle {{n + 1} \over 2}}\right) 3} , \cdots ,X_{\left( {\textstyle {{n + 1} \over 2}}\right) n} \right\} \) is
It can be proved that (11) has a unique solution and this solution is the MLE of \(\alpha \). Denote it by \({{\hat{\alpha }}} _{RSSFO,~MLE}\). The information number for \(\alpha \) under this RSS based on the median with odd n is
The derivations of (11) and (12) are similar to those in the even case.
3 MLE of \(\beta \) when \(\alpha \) is known
In this section, the existence and uniqueness of the MLE of \(\beta \) for (1) in which \(\alpha \) is known under SRS and RSS are proved. In addition, the MLE of \(\beta \) using RSSF will be considered. The Fisher information number for \(\beta \) under three different sampling schemes is respectively obtained similar to Sect. 2.
3.1 MLE of \(\beta \) using SRS
In this subsection, the existence and uniqueness of the MLE of \(\beta \) for (1) in which \(\alpha \) is known under SRS are proved. The Fisher information number for \(\beta \) under this sampling is given.
Let \(\left\{ {X_1 ,X_2 ,X_3 , \cdots ,X_n } \right\} \) be a simple random sample of size n from (1) in which \(\alpha \) is known. The log-likelihood function based on these samples is
where d is a value which is free of \(\beta \). If the MLE of \(\beta \) exists, then it is a solution of the likelihood equation
In order to study the existence and uniqueness of the MLE, the second-order derivative of the log-likelihood function \(lnL_{SRS}\) is computed as
The LHS of (13) is a continuous function. When \(\beta \rightarrow 0\), the LHS of (13) goes to \(\infty \) and when \(\beta \rightarrow \infty \), the LHS of (13) goes to \(- \sum \limits _{i = 1}^n {\ln \frac{{x_i }}{\alpha }}\). Thus, a solution of (13) exists. Since (14) is always negative, (13) has a unique solution and this solution is the MLE of \(\beta \). This estimator is denoted by \({{\hat{\beta }}} _{SRS,~MLE}.\) The information number for \(\beta \) under SRS
is given by Reath et al. (2018).
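Here too the monotone score gives a practical solver. The sketch below again assumes the usual parameterization of (1), under which the score is \(n/\beta + \sum _i \ln (x_i/\alpha ) - 2\sum _i t_i\ln (x_i/\alpha )/(1+t_i)\) with \(t_i=(x_i/\alpha )^\beta \); the simulated data is illustrative only.

```python
import math, random

def mle_beta_srs(x, alpha, lo=1e-6, hi=64.0, steps=200):
    """MLE of beta with alpha known, via bisection on the score.

    Score(beta) = n/beta + sum_i l_i - 2*sum_i t_i*l_i/(1+t_i), where
    l_i = ln(x_i/alpha) and t_i = (x_i/alpha)**beta.  It tends to
    +infinity as beta -> 0 and is strictly decreasing (the second
    derivative is always negative), so the root is unique.
    """
    n = len(x)
    logs = [math.log(xi / alpha) for xi in x]

    def score(b):
        s = n / b + sum(logs)
        for li in logs:
            z = b * li
            if z > 50.0:               # t/(1+t) ~ 1; avoid exp overflow
                s -= 2.0 * li
            elif z > -50.0:            # for z <= -50 the term vanishes
                t = math.exp(z)
                s -= 2.0 * t * li / (1.0 + t)
        return s

    while score(hi) > 0.0:            # enlarge the bracket if needed
        hi *= 2.0
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if score(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Illustrative check on simulated LLD(1, 3) data.
rng = random.Random(7)
data = [(u / (1.0 - u)) ** (1.0 / 3.0) for u in (rng.random() for _ in range(500))]
beta_hat = mle_beta_srs(data, 1.0)
```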
3.2 MLE of \(\beta \) using RSS
In this subsection, the existence and uniqueness of the MLE of \(\beta \) for (1) in which \(\alpha \) is known under RSS are proved. The Fisher information number for \(\beta \) is obtained.
Let \(\left\{ {X_{(1)1} ,X_{(2)2} ,X_{(3)3} , \cdots ,X_{(n)n} } \right\} \) be a ranked set sample of size n from \(LLD(\alpha , \beta )\) in which \(\alpha \) is known. Then we have
and
where d is a value which is free of \(\beta \). If the MLE of \(\beta \) exists, then it is a solution of the likelihood equation
In order to study the existence and uniqueness of the MLE, the second-order derivative of \(lnL_{RSS}\) is computed as
The LHS of (16) is a continuous function. When \(\beta \rightarrow 0\), the LHS of (16) goes to \(\infty \) and when \(\beta \rightarrow \infty \), the LHS of (16) becomes negative. Thus, a solution of (16) exists. Since the RHS of (17) is always negative, (16) has a unique solution and this solution is the MLE of \(\beta \). This estimator is denoted by \(\hat{\beta }_{RSS,~MLE}.\)
The Fisher information number for \(\beta \)
3.3 MLE of \(\beta \) under RSS based on maximizing the Fisher information number
In this subsection, the existence and uniqueness of the MLE of \(\beta \) for (1) in which \(\alpha \) is known under RSSF are proved. The Fisher information number for \(\beta \) under this sampling is obtained.
We evaluate the Fisher information contained in the order statistics arising from (1) in which \(\alpha \) is known and observe in Table 1 that the minimum of a random sample contains the maximum information about \(\beta \). So we will arrange the RSS based on the minimum.
In each of the n samples select the unit with the minimum rank, and denote the resulting sample as \(\left\{ {X_{(1)1} ,X_{(1)2} ,X_{(1)3} , \cdots ,X_{(1)n} } \right\} \). The pdf of \(X_{(1)i}\) is
The log-likelihood function based on these samples \(\left\{ {X_{(1)1} ,X_{(1)2} ,X_{(1)3} , \cdots ,X_{(1)n} } \right\} \) is
where d is a value which is free of \(\beta \). Taking the first derivative for \(lnL_{RSSF}\), we have
If the MLE of \(\beta \) exists, then it is a solution of the likelihood equation
In order to study the existence and uniqueness of the MLE, the second-order derivative of \(lnL_{RSSF}\) is computed as
The LHS of (19) is a continuous function. When \(\beta \rightarrow 0\), the LHS of (19) goes to \(\infty \) and when \(\beta \rightarrow \infty \), the LHS of (19) becomes negative. Thus, a solution of (19) exists. Since the RHS of (20) is always negative, (19) has a unique solution and this solution is the MLE of \(\beta \). This estimator is denoted by \(\hat{\beta }_{RSSF,~MLE}.\)
The Fisher information number for \(\beta \) under RSS based on the minimum
where \(y_{(1)}\) is the first order statistic of a sample of size n from LLD(1, 1).
4 MLEs of \(\alpha \) and \(\beta \)
In this section, the existence of the MLEs of \(\alpha \) and \(\beta \) for \(LLD(\alpha , \beta )\) under SRS and RSS is respectively proved. To compare MLEs of \(\alpha \) and \(\beta \) estimated simultaneously, the Fisher information matrices \(I_{SRS} (\alpha ,\beta )\) and \(I_{RSS} (\alpha ,\beta )\) must be computed.
4.1 MLEs of \(\alpha \) and \(\beta \) using SRS
Let \(\left\{ {X_1 ,X_2 ,X_3 , \cdots ,X_n } \right\} \) be a simple random sample of size n from \(LLD(\alpha , \beta )\). The log-likelihood function based on these samples is
where d is a constant. This function implies that
and
In order to prove that the solutions of (22) and (23) are the MLEs of \(\alpha \) and \(\beta \), \( \displaystyle \frac{{\partial ^2 lnL_{SRS}}}{{\partial \alpha ^2 }} \) and \( \displaystyle \left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \alpha ^2 }}} \right) \left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \beta ^2 }}} \right) - \left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \alpha \partial \beta }}} \right) ^2 \) are computed. Note that \( \displaystyle \frac{{\partial ^2 lnL_{SRS}}}{{\partial \alpha ^2 }}|_{({{\hat{\alpha }}} _{SRS,~MLE},~{{\hat{\beta }}} _{SRS,~MLE})}<0. \)
Since the first term of (24) is zero at any solutions of \(\alpha \) and \(\beta \) of (22) and (23), the second term \( \displaystyle \frac{{2n}}{{\alpha ^2 }}\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_i }}{\alpha }\right) ^\beta }}{{\left[ 1 + \left( \frac{{x_i }}{\alpha }\right) ^\beta \right] ^2 }}} \) is always positive, \( \displaystyle \left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \alpha ^2 }}} \right) \left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \beta ^2 }}} \right) - \left( {\frac{{\partial ^2 \ln L_{SRS} }}{{\partial \alpha \partial \beta }}} \right) ^2|_{({{\hat{\alpha }}} _{SRS,~MLE},~{{\hat{\beta }}} _{SRS,~MLE})}>0\). Thus the solutions of (22) and (23) are MLEs of \(\alpha \) and \(\beta \).
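A simple way to solve (22) and (23) numerically is to alternate the two one-dimensional solvers, since each equation has a unique root in its own parameter. This is only a sketch under the usual parameterization of (1); a Newton scheme on the pair of equations would work equally well.

```python
import math, random

def _frac(z):
    # t/(1+t) expressed via z = ln t, computed stably
    if z > 50.0:
        return 1.0
    if z < -50.0:
        return 0.0
    e = math.exp(z)
    return e / (1.0 + e)

def joint_mle_srs(x, sweeps=30):
    """Alternating maximization for the joint MLE of (alpha, beta).

    Each sweep solves the alpha-equation at the current beta and the
    beta-equation at the current alpha by bisection; every half-step
    increases the likelihood, so the iterates settle at the stationary
    point characterized in the text.
    """
    n = len(x)
    logs_x = [math.log(xi) for xi in x]

    def solve_alpha(beta):
        # sum_i t_i/(1+t_i) = n/2 is strictly decreasing in alpha,
        # and the root lies inside the data range.
        lo, hi = min(x) / 2.0, max(x) * 2.0
        for _ in range(80):
            mid = (lo * hi) ** 0.5
            s = sum(_frac(beta * (lx - math.log(mid))) for lx in logs_x)
            lo, hi = (mid, hi) if s > n / 2.0 else (lo, mid)
        return (lo * hi) ** 0.5

    def solve_beta(alpha):
        la = math.log(alpha)

        def score(b):
            s = n / b
            for lx in logs_x:
                li = lx - la
                s += li * (1.0 - 2.0 * _frac(b * li))
            return s

        lo, hi = 1e-6, 64.0
        while score(hi) > 0.0:         # enlarge bracket if needed
            hi *= 2.0
        for _ in range(80):
            mid = (lo + hi) / 2.0
            lo, hi = (mid, hi) if score(mid) > 0.0 else (lo, mid)
        return (lo + hi) / 2.0

    alpha, beta = sorted(x)[n // 2], 1.0   # the median of LLD is alpha
    for _ in range(sweeps):
        alpha = solve_alpha(beta)
        beta = solve_beta(alpha)
    return alpha, beta

# Illustrative check on simulated LLD(2, 3) data.
rng = random.Random(3)
data = [2.0 * (u / (1.0 - u)) ** (1.0 / 3.0) for u in (rng.random() for _ in range(600))]
a_hat, b_hat = joint_mle_srs(data)
```

The sample median is a convenient starting value for \(\alpha \) precisely because the log-logistic median equals \(\alpha \) for any \(\beta \).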
The Fisher information matrix for \(\alpha \) and \(\beta \)
is given by Reath et al. (2018).
4.2 MLEs of \(\alpha \) and \(\beta \) using RSS
Let \(\left\{ {X_{(1)1} ,X_{(2)2} ,X_{(3)3} , \cdots ,X_{(n)n} } \right\} \) be a ranked set sample of size n from \(LLD(\alpha , \beta )\). Then we have
and
In order to prove that the solutions of (26) and (27) are the MLEs of \(\alpha \) and \(\beta \), \( \displaystyle \frac{{\partial ^2 lnL_{RSS}}}{{\partial \alpha ^2 }} \) and \( \displaystyle \left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \alpha ^2 }}} \right) \left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \beta ^2 }}} \right) - \left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \alpha \partial \beta }}} \right) ^2 \) are computed. Note that \( \displaystyle \frac{{\partial ^2 lnL_{RSS}}}{{\partial \alpha ^2 }}|_{({{\hat{\alpha }}} _{RSS,~MLE},~{{\hat{\beta }}} _{RSS,~MLE})}<0. \)
Since the first term of (28) is zero at any solutions of \(\alpha \) and \(\beta \) of (26) and (27), the second term \( \displaystyle \frac{{(n + 1)^2 \beta ^2 }}{{\alpha ^2 }}\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^{2\beta } \ln ^2 \frac{{x_{(i)i} }}{\alpha }}}{{\left[ 1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta \right] ^4 }}} \) and the third term \( \displaystyle \frac{{(n + 1)n}}{{\alpha ^2 }}\sum \limits _{i = 1}^n {\frac{{\left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta }}{{\left[ 1 + \left( \frac{{x_{(i)i} }}{\alpha }\right) ^\beta \right] ^2 }}} \) are always positive, \( \displaystyle \left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \alpha ^2 }}} \right) \left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \beta ^2 }}} \right) - \left( {\frac{{\partial ^2 \ln L_{RSS} }}{{\partial \alpha \partial \beta }}} \right) ^2|_{({{\hat{\alpha }}} _{RSS,~MLE},~{{\hat{\beta }}} _{RSS,~MLE})}>0 \). Thus the solutions of (26) and (27) are MLEs of \(\alpha \) and \(\beta \).
In order to obtain the Fisher information matrix for \(\alpha \) and \(\beta \), we also need to compute
Combining (7), (18) with (29), we can obtain the Fisher information matrix for \(\alpha \) and \(\beta \)
5 Numerical comparison
5.1 Results for perfect ranking
We will compare the above MLEs in terms of asymptotic efficiency. Combining (4), (7), (10) and (12) in Sect. 2, we can respectively obtain the asymptotic efficiencies of \({{\hat{\alpha }}} _{RSS,MLE}\) with respect to (w.r.t.) \({{\hat{\alpha }}} _{SRS,MLE}\), of \({{\hat{\alpha }}} _{RSSFK, MLE}\) (\(K=E\) or O) w.r.t. \({{\hat{\alpha }}} _{SRS,MLE}\) and of \({{\hat{\alpha }}} _{RSSFK, MLE}\) (\(K=E\) or O) w.r.t. \({{\hat{\alpha }}} _{RSS, MLE}\)
and
It can be seen that \(\mathrm{{AE}}^i\ge 1~(i=1,2,3)\) for \(n>2\). Combining (15), (18) and (21) in Sect. 3, the asymptotic efficiencies of \({\hat{\beta }} _{RSS,MLE}\) w.r.t. \({{\hat{\beta }}} _{SRS,MLE}\), of \({{\hat{\beta }}} _{RSSF, MLE}\) w.r.t. \({{\hat{\beta }}} _{SRS,MLE}\) and of \({{\hat{\beta }}} _{RSSF,~MLE}\) w.r.t. \({{\hat{\beta }}} _{RSS,~MLE}\)
and
are respectively given. It can be seen that \(\mathrm{{AE}}^4>1\) for \(n\ge 2\). Combining (15) with (18) in Sect. 3, we can obtain the asymptotic efficiency of \(({{\hat{\alpha }}} _{RSS,MLE} ,{{\hat{\beta }}} _{RSS,MLE})\) w.r.t. \(({{\hat{\alpha }}} _{SRS,MLE} ,{{\hat{\beta }}} _{SRS,MLE} )\)
It can be seen that \(\mathrm{{AE}}^7>1\) for \(n\ge 2\).
In order to observe how \(I_{SRS} (\alpha )\), \(I_{RSS} (\alpha )\), \(I_{RSSFK} (\alpha )\), \(I_{SRS} (\beta )\), \(I_{RSS} (\beta )\), \(I_{RSSF} (\beta )\), \({\det \left\{ {I_{\mathrm{{SRS}}} (\alpha ,\beta )} \right\} }\), \({\det \left\{ {I_{\mathrm{{RSS}}} (\alpha ,\beta )} \right\} }\) and \(\mathrm{{AE}}^i~(i=1,2,\ldots ,7)\) change with the set size n, numerical comparisons are given in Tables 2, 3 and 4.
From Table 2, we conclude the following:

(1) \(\mathrm{{AE}}^i~(i=1,2,3)>1\) for \(n>2\), and they increase as n increases.

(2) \(I_{RSS} (\alpha )>I_{SRS} (\alpha )\), \(I_{RSSFK} (\alpha )>I_{SRS} (\alpha )\) and \(I_{RSSFK} (\alpha )> I_{RSS} (\alpha )\) for \(n>2\).

(3) \({{\hat{\alpha }}} _{RSS,MLE}\) is more efficient than \(\hat{\alpha }_{SRS,MLE}\), \({{\hat{\alpha }}} _{RSSFK,MLE}\) is more efficient than \({{\hat{\alpha }}} _{SRS,MLE}\) and \({{\hat{\alpha }}} _{RSSFK,MLE}\) is more efficient than \({{\hat{\alpha }}} _{RSS, MLE}\).

(4) In conclusion, the MLE of \(\alpha \) using RSSF is more efficient than that using RSS, which is, in turn, more efficient than that using SRS.

From Table 3, we conclude the following:

(5) \(\mathrm{{AE}}^i~(i=4,5,6)>1\) for \(n\ge 2\), and they increase as n increases.

(6) \(I_{RSS} (\beta )>I_{SRS} (\beta )\), \(I_{RSSF} (\beta )>I_{SRS} (\beta )\) and \(I_{RSSF} (\beta )> I_{RSS} (\beta )\) for \(n\ge 2\).

(7) \({{\hat{\beta }}} _{RSS,MLE}\) is more efficient than \({{\hat{\beta }}} _{SRS,MLE}\), \({{\hat{\beta }}} _{RSSF,MLE}\) is more efficient than \({{\hat{\beta }}}_{SRS,MLE}\) and \({{\hat{\beta }}} _{RSSF,MLE}\) is more efficient than \({{\hat{\beta }}} _{RSS, MLE}\).

(8) In conclusion, the MLE of \(\beta \) using RSSF is more efficient than that using RSS, which is, in turn, more efficient than that using SRS.

From Table 4, we conclude the following:

(9) \(\mathrm{{AE}}^7>1\) for \(n\ge 2\), and it increases as n increases.

(10) \({\det \left\{ {I_{\mathrm{{RSS}}} (\alpha ,\beta )} \right\} }>{\det \left\{ {I_{\mathrm{{SRS}}} (\alpha ,\beta )} \right\} }\).

(11) In conclusion, the MLEs of \(\alpha \) and \(\beta \) using RSS are more efficient than those using SRS.
5.2 Results for imperfect ranking
Since perfect rankings are unlikely in practice, the efficiencies of the above MLEs under imperfect ranking are compared in this subsection. We use \({{\hat{\alpha }}}^{*}_{RSS,MLE}\) and \({{\hat{\alpha }}}^{*}_{RSSFK,MLE}\) to denote the MLEs of \(\alpha \) using imperfect RSS and imperfect RSSF, and \({{\hat{\beta }}}^{*}_{RSS,MLE}\) and \({{\hat{\beta }}}^{*}_{RSSF,MLE}\) to denote the MLEs of \(\beta \) using imperfect RSS and imperfect RSSF. The efficiencies of \({\hat{\alpha }}^{*} _{RSS,MLE}\) w.r.t. \({{\hat{\alpha }}} _{SRS,MLE}\), \({\hat{\alpha }}^{*} _{RSSFK, MLE}\) w.r.t. \({{\hat{\alpha }}} _{SRS,MLE}\), \({\hat{\alpha }}^{*} _{RSSFK, MLE}\) w.r.t. \({{\hat{\alpha }}}^{*} _{RSS, MLE}\), \({{\hat{\beta }}}^{*} _{RSS,MLE}\) w.r.t. \({{\hat{\beta }}} _{SRS,MLE}\), \({\hat{\beta }}^{*} _{RSSF, MLE}\) w.r.t. \({{\hat{\beta }}} _{SRS,MLE}\) and \({\hat{\beta }}^{*} _{RSSF,~MLE}\) w.r.t. \({{\hat{\beta }}}^{*} _{RSS,~MLE}\) are denoted by \(\mathrm{{AE}}^{1^{*}}\), \(\mathrm{{AE}}^{2^{*}}\), \(\mathrm{{AE}}^{3^{*}}\), \(\mathrm{{AE}}^{4^{*}}\), \(\mathrm{{AE}}^{5^{*}}\) and \(\mathrm{{AE}}^{6^{*}}\), respectively. We use the simulation method considered by Dell and Clutter (1972); in these simulations, we choose a random error variable \(\varepsilon \thicksim N(0, \sigma ^2)\). In the following simulation, we take \(\sigma ^2\)=0.22, 0.54, 1.00, n=5, 6, 7, \(\alpha =1\) and \(\beta =3\). All simulation results are computed from 10,000 iterations.
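The Dell and Clutter (1972) error model can be simulated as follows: units are ranked on \(X+\varepsilon \) with \(\varepsilon \thicksim N(0, \sigma ^2)\), while the true value of the selected unit is measured. This is a sketch of one cycle at the paper's settings; the function name is ours.

```python
import random

def imperfect_rss(n, alpha, beta, sigma2, rng):
    """One cycle of RSS under the Dell-Clutter imperfect-ranking model.

    Units are ranked on X + eps, eps ~ N(0, sigma2), instead of on X
    itself, so the measured unit need not carry its nominal rank; with
    sigma2 = 0 this reduces to perfect RSS.
    """
    sample = []
    for i in range(n):
        # draw one set of n units from LLD(alpha, beta) by inverse transform
        xs = [alpha * (u / (1.0 - u)) ** (1.0 / beta)
              for u in (rng.random() for _ in range(n))]
        # rank by the noisy criterion, then record the true value
        ranked = sorted(xs, key=lambda v: v + rng.gauss(0.0, sigma2 ** 0.5))
        sample.append(ranked[i])
    return sample

# One simulated cycle at alpha=1, beta=3, sigma2=0.54 from the paper's grid.
rng = random.Random(11)
s = imperfect_rss(5, 1.0, 3.0, 0.54, rng)
```

Repeating this across 10,000 iterations and re-fitting the MLEs on each sample gives the simulated efficiencies reported in Tables 5 and 6.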
Based on Tables 5 and 6 we may conclude:

(1) \({{\hat{\alpha }}}^{*} _{RSS,MLE}\) is more efficient than \(\hat{\alpha }_{SRS,MLE}\), \({{\hat{\alpha }}}^{*} _{RSSFK,MLE}\) is more efficient than \({{\hat{\alpha }}} _{SRS,MLE}\) and \({{\hat{\alpha }}}^{*} _{RSSFK,MLE}\) is more efficient than \({{\hat{\alpha }}}^{*} _{RSS, MLE}\).

(2) \({{\hat{\beta }}}^{*} _{RSS,MLE}\) is more efficient than \(\hat{\beta }_{SRS,MLE}\) and \({{\hat{\beta }}}^{*} _{RSSF,MLE}\) is more efficient than \({{\hat{\beta }}} _{SRS,MLE}\).

(3) All efficiencies decrease as \(\sigma ^{2}\) gets larger, i.e. all efficiencies decrease as the error in ranking increases.

(4) In conclusion, even under imperfect ranking, the MLEs of \(\alpha \) and \(\beta \) using RSS remain more efficient than those using SRS.
References
Abu-Dayyeh W, Assrhani A, Ibrahim K (2013) Estimation of the shape and scale parameters of pareto distribution using ranked set sampling. Stat Pap 54(1):207–225
Ahmad MI, Sinclair CD, Werritty A (1988) Log-logistic flood frequency analysis. J Hydrol 98(3):205–224
Al-Saleh MF, Al-Hadhrami SA (2003) Estimation of the mean of the exponential distribution using moving extremes ranked set sampling. Stat Pap 44(3):367–382
Ashkar F, Mahdi S (2003) Comparison of two fitting methods for the Log-logistic distribution. Water Resour Res 39:12–17
Balakrishnan N, Malik HJ, Puthenpura S (1987) Best linear unbiased estimation of location and scale parameters of the log-logistic distribution. Commun Stat Theory Methods 16:3477–3495
Bennett S (1983) Log-logistic regression models for survival data. J R Stat Soc 32(2):165–171
Chen W, Xie M, Wu M (2013) Parametric estimation for the scale parameter for scale distributions using moving extremes ranked set sampling. Stat Probab Lett 83(9):2060–2066
Chen W, Xie M, Wu M (2016) Modified maximum likelihood estimator of scale parameter using moving extremes ranked set sampling. Commun Stat Simul Comput 45(6):2232–2240
Chen W, Tian Y, Xie M (2017) Maximum likelihood estimator of the parameter for a continuous one-parameter exponential family under the optimal ranked set sampling. J Syst Sci Complex 30(6):1350–1363
Chen Z (2006) Estimating the shape parameter of the log-logistic distribution. Int J Reliab Qual Saf Eng 13(3):257–266
Dell TR, Clutter JL (1972) Ranked set sampling theory with order statistics background. Biometrics 28(2):545–555
Fisk PR (1961) The graduation of income distributions. Econometrica 29(2):171–185
Geskus RB (2001) Methods for estimating the AIDS incubation time distribution when data of seroconversion is censored. Stat Med 20:795–812
Gupta RC, Akman O, Lvin S (1999) A study of log-logistic model in survival analysis. Biom J 41(4):431–443
Kus CS, Kaya MF (2006) Estimation of parameters of the log-logistic distribution based on progressive censoring using the EM algorithm. Hacettepe J Math Stat 35(2):203–211
Lesitha G, Thomas PY (2013) Estimation of the scale parameter of a log-logistic distribution. Metrika 76(3):427–448
McIntyre GA (1952) A method of unbiased selective sampling, using ranked sets. Aust J Agric Res 3:385–390
Omar A, Ibrahim K (2013) Estimation of the shape and scale parameters of the pareto distribution using extreme ranked set sampling. Pak J Stat 29(1):33–47
Reath J, Dong J, Wang M (2018) Improved parameter estimation of the log-logistic distribution with applications. Comput Stat 33(1):339–356
Robson A, Reed D (1999) Statistical procedures for flood frequency estimation. Flood estimation handbook. Institute of Hydrology, Wallingford, p 3
Shoukri MM, Mian IUM, Tracy D (1988) Sampling properties of estimators of log-logistic distribution with application to Canadian precipitation data. Can J Stat 16(3):223–236
Stokes L (1995) Parametric ranked set sampling. Ann Inst Stat Math 47(3):465–482
Tiku ML, Suresh RP (1992) A new method of estimation for location and scale parameters. J Stat Plan Inference 30:281–292
He, X., Chen, W. & Qian, W. Maximum likelihood estimators of the parameters of the log-logistic distribution. Stat Papers 61, 1875–1892 (2020). https://doi.org/10.1007/s00362-018-1011-3