1 Introduction

Ranked set sampling enables one to impose more structure on the collected sample items and to use this structure to develop efficient inferential procedures. This approach to data collection was first proposed by McIntyre ([11, 12]) for situations where taking the actual measurements on sample observations is difficult (e.g., costly, destructive, or time-consuming), but mechanisms for either informally or formally ranking a set of sample units are relatively easy and reliable.

For discussions of some of the settings where ranked set sampling techniques have found applications, one may refer to Patil [13], Barnett and Moore [6], and Chen et al. [9]. During the last four decades, a good deal of attention has been devoted to this topic in the statistical literature. Some of this work has been geared toward specific parametric families. Many authors have used ranked set sampling for estimating the unknown parameters of specific distributions; for example, see Adatia [1] for the half-logistic distribution and Shaibu and Muttlak [16] for the normal, exponential, and gamma distributions. Al-Hadhrami et al. [2] have studied the estimation of the standard deviation of the normal distribution using moving extreme ranked set sampling, while Al-Odat [3] has suggested a modification for estimating a ratio under ranked set sampling. Sinha et al. [17] have proposed best linear unbiased estimators (BLUEs) of the parameters of the normal and exponential distributions under RSS. Stokes [18] has studied the maximum likelihood estimators, under RSS, of the parameters of the location-scale family. Sengupta and Mukhuti [15] have presented some unbiased estimators which are better than the nonparametric minimum variance quadratic unbiased estimator based on a balanced ranked set sample as well as the uniformly minimum variance unbiased estimator based on a simple random sample (SRS) of the same size. Jemain et al. [10] have studied multistage median ranked set sampling for estimating the population median. Al-Saleh et al. [4] have obtained the Bayesian estimate of the exponential parameter using the squared error loss function. In this paper, we use the LINEX loss function to derive an explicit form of the Bayesian estimate of the exponential parameter under ranked set sampling.

In Sect. 2, we present some preliminary details. In Sect. 3, we derive the Bayesian estimates of the exponential parameter based on the gamma and Jeffreys prior distributions. In Sect. 4, we develop an alternative procedure for deriving the Bayesian estimates. Finally, in Sect. 5, we present some numerical results that demonstrate the usefulness of the results developed here.

2 Preliminaries

Let \(X_1, X_2, \ldots , X_n\) be a sequence of independent and identically distributed (iid) random variables with probability density function \( f(x; \theta ) \) and cumulative distribution function \(F(x;\theta ) \), where \(\theta \) has a prior density function \(\Pi (\theta ).\) This sequence will be referred to here as a Simple Random Sample (SRS). Let

$$\begin{aligned} X_{11}, X_{12}, \ldots , X_{1n}; X_{21}, X_{22},\ldots , X_{2n}; \ldots ; X_{s1},X_{s2},\ldots , X_{sn} \end{aligned}$$

be the visual (judgment) order statistics of \(s\) sets, each based on a simple random sample of size \(n.\) Specifically, these are obtained as follows. A set of \(n\) items is drawn from the population, the items of the set are ranked by judgment, and only the item ranked the smallest is quantified. Then another set of size \(n\) is drawn and ranked, and only the item ranked the second smallest is quantified. The procedure is repeated in this way until the item ranked the largest in the \(n\)-th set is quantified. This completes one cycle of the sampling. The cycle may be repeated \(m\) times until \(nm\) units have been quantified, and these \(nm\) units form the RSS data.
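The following is a minimal computational sketch (in Python; it is our illustration and not part of the original development) of how one cycle, and then \(m\) cycles, of RSS data may be generated under perfect ranking. The function names and the choice of a standard uniform population in the example are ours.

import numpy as np

def one_cycle_rss(n, rng, draw):
    """One cycle of RSS with set size n under perfect ranking.
    `draw(rng, size)` returns a simple random sample from the population."""
    # Draw n independent sets of size n; from the j-th set, quantify only the
    # j-th smallest unit (perfect ranking lets us simply sort the values).
    return np.array([np.sort(draw(rng, n))[j] for j in range(n)])

def m_cycle_rss(n, m, rng, draw):
    """Repeating the cycle m times gives the nm quantified units forming the RSS data."""
    return np.vstack([one_cycle_rss(n, rng, draw) for _ in range(m)])

# Example: one cycle with set size n = 4 from a standard uniform population
rng = np.random.default_rng(1)
y = one_cycle_rss(4, rng, lambda rng, size: rng.uniform(size=size))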

Let \(Y_1, Y_2, \ldots , Y_s \) be a RSS from this distribution obtained using a total of \(sn\) observations. It is assumed throughout this paper that the judgmental identification of the ranks is perfect and comes at negligible cost. This assumption is made in many developments on RSS. Under this assumption, \(Y_i\) has the same distribution as \(X_{i:n}\), the \(i^{th}\) order statistic in a random sample of size \(n\), with pdf

$$\begin{aligned} f_{i:n}(x)=\frac{n!}{(i-1)!(n-i)!}(F(x))^{i-1}(1-F(x))^{n-i}f(x). \end{aligned}$$

Now, we assume that we have a RSS from an exponential distribution with parameter \(\theta \) with density function

$$\begin{aligned} f(x; \theta ) = \theta e^{-\theta x} \; , \quad \quad x>0, \; \theta > 0. \end{aligned}$$
(2.1)

In the Bayesian setup, the choice of the loss function is an important component, and it is well known that most studies use the squared error loss function (SEL) for measuring an estimator's performance; see Box and Tiao [8] and Berger [7]. This is due to its simplicity and relevance to classical procedures. However, the squared error loss function is justified only when losses are symmetric in nature. The symmetric nature of this function gives equal weight to over-estimation and under-estimation. In the estimation of the survival function or reliability function, such a symmetric loss function may be inappropriate. A number of asymmetric loss functions have been discussed in the literature, and among these, the linear-exponential (LINEX) loss function is widely used as it is a natural extension of the squared error loss function. It was originally introduced by Varian [19] and was popularized by Zellner [20].

The LINEX loss function for \(\theta \) can be expressed as

$$\begin{aligned} L(\Delta )\propto \exp (C\Delta )-C\Delta -1, \, \, C\ne 0, \end{aligned}$$
(2.2)

where \(\Delta =({\theta }^*-\theta )\), and \({\theta }^* \) is an estimate of \(\theta .\) The sign and magnitude of the shape parameter \(C\) represent the direction and degree of asymmetry, respectively. When the value of \(C \) is less than zero, the LINEX loss function gives more weight to under-estimation than to over-estimation, and the situation is reversed when the value of \(C \) is greater than zero. For \(C\) close to zero, the LINEX loss is approximately the squared error loss.
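As a small numerical illustration (ours, not part of the paper), the asymmetry of the loss (2.2) can be seen by evaluating it at equal amounts of over- and under-estimation:

from math import exp

def linex_loss(theta_hat, theta, C):
    # LINEX loss (2.2), up to the constant of proportionality
    d = theta_hat - theta
    return exp(C * d) - C * d - 1.0

# With C = 1 > 0, over-estimation is penalized more heavily than under-estimation
print(linex_loss(2.5, 2.0, C=1.0))   # over-estimate by 0.5  -> approx. 0.149
print(linex_loss(1.5, 2.0, C=1.0))   # under-estimate by 0.5 -> approx. 0.107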

In order to develop the Bayesian analysis in the case at hand, the conjugate prior for \(\theta \) is considered, i.e., \(\theta \sim Gamma(\alpha , \beta ),\) whose probability density function is given by

$$\begin{aligned} g(\theta ) \propto \theta ^{\alpha -1} \exp (-\beta \theta ), \; \; \theta >0, \end{aligned}$$
(2.3)

where \( \alpha >0\) and \(\beta >0\) are the hyperparameters. If \(\alpha =\beta =0,\) the prior (2.3) becomes the Jeffreys prior, which is given by

$$\begin{aligned} g(\theta ) \propto 1/\theta , \; \; \theta > 0. \end{aligned}$$
(2.4)

3 Bayes Estimates

In this section, we derive the Bayes estimates of the exponential parameter \(\theta \) based on both SRS and RSS. In each case, we use both the conjugate prior and the non-informative prior for the scale parameter. We also consider both the symmetric loss function (squared error) and the asymmetric loss function (LINEX) to derive the corresponding Bayesian estimates. Throughout the paper, let \(\pi (\theta |\underline{X})\) and \(\pi (\theta |\underline{Y})\) denote the posterior densities of \(\theta \), given SRS \((\underline{X})\) and RSS \((\underline{Y})\), respectively.

3.1 Bayes Estimate Based on SRS

Let \(x_1, x_2, \ldots , x_n\) be a random sample from the exponential distribution with parameter \(\theta \) in (2.1), and \(g(\theta )\) be the conjugate prior in (2.3). In this case, the posterior density based on SRS can be written as

$$\begin{aligned} \pi (\theta | \underline{X} ) = \frac{\theta ^{n+\alpha -1} \; e^{- \theta \big (n \overline{X}+\beta \big )}}{\Gamma (n+\alpha )\; (n\overline{X}+\beta )^{-(n+\alpha )} }. \end{aligned}$$
(3.1)

Hence, the Bayesian estimate of \(\theta \) based on the squared error loss function is given by

$$\begin{aligned} \widetilde{\theta }_{\textit{SEL}}(\underline{X})= \frac{n+\alpha }{n\overline{X}+\beta }, \end{aligned}$$
(3.2)

while the Bayesian estimate of \(\theta \) based on the LINEX loss function is given by

$$\begin{aligned} \widetilde{\theta }_{\textit{LINEX}}=- \frac{1}{C}\ln (E[e^{-C\theta }]), \end{aligned}$$

where

$$\begin{aligned} E[e^{-C\theta }]&= \int \frac{e^{-C\theta } \theta ^{(n+\alpha -1)}\; e^{- \theta (n\overline{X}+\beta )}}{\Gamma (n+\alpha ) \;(n\overline{X}+\beta )^{-(n+\alpha )} } \; \mathrm{d}\theta \\&= \int \frac{ \theta ^{(n+\alpha -1)} \; e^{- \theta (n \overline{X}+\beta +C)}}{\Gamma (n+\alpha ) \; \big (n\overline{X}+\beta \big )^{-(n+\alpha )} } \; \mathrm{d}\theta \\&= \Bigl (1+ \frac{C}{n \overline{X}+\beta }\Bigr )^{-(n+\alpha )}, \end{aligned}$$

and consequently

$$\begin{aligned} \widetilde{\theta }_{\textit{LINEX}}(\underline{X})=- \frac{1}{C} \ln \Bigl [ \Bigl (1+ \frac{C}{n \overline{X}+\beta }\Bigr )^{-(n+\alpha )}\Bigr ]. \end{aligned}$$
(3.3)
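For illustration, the estimates (3.2) and (3.3) can be computed directly from an SRS. The following Python sketch is ours (the function name and arguments are not from the paper):

import numpy as np

def srs_bayes_estimates(x, alpha, beta, C):
    """Bayes estimates of theta from an SRS x under the Gamma(alpha, beta) prior:
    the posterior mean (3.2) and the LINEX estimate (3.3)."""
    n = len(x)
    s = n * np.mean(x) + beta                                      # n*xbar + beta
    theta_sel = (n + alpha) / s                                    # (3.2)
    theta_linex = -np.log((1.0 + C / s) ** (-(n + alpha))) / C     # (3.3)
    return theta_sel, theta_linex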

3.2 Bayes Estimate Based on RSS

Now, let \(y_1, y_2, \ldots , y_n\) be a one-cycle RSS from the exponential distribution in (2.1), and let the prior density of \(\theta \) be as in (2.3). The density of the \(j^{th}\) order statistic \(Y_j\) is known to be (Arnold et al. [5])

$$\begin{aligned} g(y_j|\theta )&= j {n \atopwithdelims ()j}f(y_j|\theta )[F(y_j|\theta )]^{j-1}[1-F(y_j|\theta )]^{n-j}\\&= j {n \atopwithdelims ()j} \theta e^{-\theta y_j} (1-e^{-\theta y_j})^{j-1} (e^{-\theta y_j})^{n-j} \\&= \sum _{k=0}^{j-1} j {n \atopwithdelims ()j}{j-1 \atopwithdelims ()k} (-1)^k \; \theta \;(e^{-\theta y_j})^{n-j+k+1} \\&= \sum _{k=0}^{j-1} c_k(j) \; h_k(y_j|\theta ),\ y_j>0, \end{aligned}$$

where \(c_k(j)=j {n \atopwithdelims ()j}{j-1 \atopwithdelims ()k} (-1)^k\) and \(h_k(y_j|\theta )=\theta \;(e^{-\theta y_j})^{n-j+k+1}.\)

Then, the joint density of the RSS in this case, due to the independence of the \(y_j\)'s, is given by

$$\begin{aligned} g(\underline{y}|\theta )&= \prod _{j=1}^n \; g(y_j|\theta ) = \prod _{j=1}^n \sum _{k=0}^{j-1} c_k(j) \; h_k(y_j|\theta ) \\&= \sum _{i_1=0}^{0} \sum _{i_2=0}^{1} \cdots \sum _{i_n=0}^{n-1} \left[ \prod _{j=1}^n \;c_{i_j}(j)\right] \theta ^n \; e^{-\theta {\sum \limits _{j=1}^n y_j(n-j+i_j+1)}}, \ y_j>0. \end{aligned}$$

Hence, the posterior density can be derived as

$$\begin{aligned} \pi (\theta | \underline{Y} )= \frac{{\displaystyle {\sum _{i_1=0}^{0} \sum _{i_2=0}^{1} \cdots \sum _{i_n=0}^{n-1}}} \; \left[ {\displaystyle {\prod _{j=1}^n}} \;c_{i_j}(j) \right] \; \theta ^{n+\alpha -1} \; e^{-\theta \left[ \sum \limits _{j=1}^n y_j(n-j+i_j+1)+\beta \right] }}{{\displaystyle {\sum _{i_1=0}^{0} \sum _{i_2=0}^{1} \cdots \sum _{i_n=0}^{n-1} }} \; \left[ {\displaystyle {\prod _{j=1}^n}} \;c_{i_j}(j)\right] \; \Gamma (n+\alpha )\; \bigl [{\displaystyle {\sum _{j=1}^n}} y_j(n-j+i_j+1)+\beta \bigr ]^{-(n+\alpha )}}, \end{aligned}$$
(3.4)

and the Bayesian estimate of \(\theta \) based on the squared error loss function is then obtained from (3.4) as

$$\begin{aligned} \widetilde{\theta }_{\textit{SEL}} (\underline{Y}) =\frac{{\displaystyle {\sum _{i_1=0}^{0} \sum _{i_2=0}^{1} \cdots \sum _{i_n=0}^{n-1}}} \left[ {\displaystyle {\prod _{j=1}^n}} \;c_{i_j}(j)\right] (n+\alpha ){\left[ {\displaystyle {\sum _{j=1}^n}} y_j(n-j+i_j+1)+\beta \right] ^{-(n+\alpha +1)}}}{{\displaystyle {\sum _{i_1=0}^{0} \sum _{i_2=0}^{1} \cdots \sum _{i_n=0}^{n-1} }} \;\left[ {\displaystyle {\prod _{j=1}^n}} \;c_{i_j}(j)\right] \; \bigl [{\displaystyle {\sum _{j=1}^n}} y_j(n-j+i_j+1)+\beta \bigr ]^{-(n+\alpha )}}. \end{aligned}$$
(3.5)

Next, in order to derive the Bayesian estimate of \(\theta \) based on the LINEX loss function, we need to calculate the posterior expectation of \(e^{-C\theta }\) from (3.4) as

$$\begin{aligned} E[e^{-C\theta }]&= \frac{{\displaystyle {\sum _{i_1=0}^{0} \sum _{i_2=0}^{1} \cdots \sum _{i_n=0}^{n-1}}} \; \left[ {\displaystyle {\prod _{j=1}^n}} \; c_{i_j}(j) \right] \; \displaystyle \int \theta ^{n+\alpha -1} \; e^{-\theta \left[ \sum _{j=1}^n y_j(n-j+i_j+1)+\beta +C\right] }\mathrm{d}\theta }{{\displaystyle {\sum _{i_1=0}^{0} \sum _{i_2=0}^{1} \cdots \sum _{i_n=0}^{n-1} }} \;\left[ {\displaystyle {\prod _{j=1}^n}} \;c_{i_j}(j) \right] \; \Gamma (n+\alpha )\; \left[ {\displaystyle {\sum _{j=1}^n}} y_j(n-j+i_j+1)+\beta \right] ^{-(n+\alpha )}} \nonumber \\&= \frac{{\displaystyle {\sum _{i_1=0}^{0} \sum _{i_2=0}^{1} \cdots \sum _{i_n=0}^{n-1}}} \left[ {\displaystyle {\prod _{j=1}^n}} \;c_{i_j}(j)\right] {\left[ {\displaystyle {\sum _{j=1}^n}} y_j(n-j+i_j+1)+\beta +C\right] ^{-(n+\alpha )}}}{{\displaystyle {\sum _{i_1=0}^{0} \sum _{i_2=0}^{1} \cdots \sum _{i_n=0}^{n-1} }} \;\left[ {\displaystyle {\prod _{j=1}^n}} \;c_{i_j}(j)\right] \; \left[ {\displaystyle {\sum _{j=1}^n}} y_j(n-j+i_j+1)+\beta \right] ^{-(n+\alpha )}}. \end{aligned}$$
(3.6)

Then from (3.6), the Bayesian estimate of \(\theta \) based on the LINEX loss function is given by

$$\begin{aligned} \widetilde{\theta }_{\textit{LINEX}}(\underline{Y})=- \frac{1}{C}\ln \left( E[e^{-C\theta }]\right) , \end{aligned}$$
(3.7)

where \(E[e^{-C\theta }]\) is as derived in Eq. (3.6).
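The estimates (3.5) and (3.7) can be evaluated by direct summation over the indices \(i_1, \ldots , i_n\). The following sketch (ours; practical only for small \(n\), since the sum contains \(n!\) terms) assumes that y holds a one-cycle RSS:

from itertools import product
from math import comb, log, prod

def rss_bayes_estimates(y, alpha, beta, C):
    """Bayes estimates of theta from a one-cycle RSS y = (y_1, ..., y_n)
    under the Gamma(alpha, beta) prior, by direct evaluation of (3.5) and (3.7)."""
    n = len(y)
    den = num_sel = num_linex = 0.0
    for idx in product(*[range(j) for j in range(1, n + 1)]):      # i_j = 0, ..., j-1
        w = prod(j * comb(n, j) * comb(j - 1, k) * (-1) ** k       # prod_j c_{i_j}(j)
                 for j, k in zip(range(1, n + 1), idx))
        S = sum(yj * (n - j + k + 1)                               # sum_j y_j (n-j+i_j+1)
                for j, (yj, k) in enumerate(zip(y, idx), start=1))
        den += w * (S + beta) ** (-(n + alpha))
        num_sel += w * (n + alpha) * (S + beta) ** (-(n + alpha + 1))
        num_linex += w * (S + beta + C) ** (-(n + alpha))
    return num_sel / den, -log(num_linex / den) / C                # (3.5) and (3.7)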

3.3 Bayes Estimates Based on Non-informative Prior

Let \(\theta \) have the non-informative Jeffreys prior in Eq. (2.4). Then, we obtain the Bayesian estimates of \(\theta \) in this case as follows (a short computational illustration is given after the list):

  1.

    SRS

    $$\begin{aligned} \widetilde{\theta }_{\textit{SEL}}^J(\underline{X})= \frac{1}{\overline{X}} \end{aligned}$$
    (3.8)

    and

    $$\begin{aligned} \widetilde{\theta }_{\textit{LINEX}}^J(\underline{X})= - \frac{1}{C}\ln \Bigl [ \Bigl (1+ \frac{C}{n \overline{X} }\Bigr )^{-n}\Bigr ]. \end{aligned}$$
    (3.9)
  2.

    RSS

    $$\begin{aligned} \widetilde{\theta }_{\textit{SEL}}^J (\underline{Y}) = \frac{n \; {\displaystyle {\sum _{i_1=0}^{0} \sum _{i_2=0}^{1} \cdots \sum _{i_n=0}^{n-1}}} \Bigl [ {\displaystyle {\prod _{j=1}^n}} \;c_{i_j}(j)\Bigr ] {\bigl [{\displaystyle {\sum _{j=1}^n}} y_j(n-j+i_j+1)\bigr ]^{-(n+1)}}}{{\displaystyle {\sum _{i_1=0}^{0} \sum _{i_2=0}^{1} \cdots \sum _{i_n=0}^{n-1} }} \;\Bigl [ {\displaystyle {\prod _{j=1}^n}} \;c_{i_j}(j)\Bigr ] \; \bigl [{\displaystyle {\sum _{j=1}^n}} y_j(n-j+i_j+1)\bigr ]^{-n}} \end{aligned}$$
    (3.10)

    and

    $$\begin{aligned} \widetilde{\theta }_{\textit{LINEX}}^J(\underline{Y})= - \frac{1}{C}\ln \left[ \frac{{\displaystyle {\sum _{i_1=0}^{0} \sum _{i_2=0}^{1} \cdots \sum _{i_n=0}^{n-1}}} \Bigl [ {\displaystyle {\prod _{j=1}^n}} \;c_{i_j}(j)\Bigr ] {\bigl [{\displaystyle {\sum _{j=1}^n}} y_j(n-j+i_j+1)+C\bigr ]^{-n}}}{{\displaystyle {\sum _{i_1=0}^{0} \sum _{i_2=0}^{1} \cdots \sum _{i_n=0}^{n-1} }} \;\Bigl [ {\displaystyle {\prod _{j=1}^n}} \;c_{i_j}(j)\Bigr ] \; \bigl [{\displaystyle {\sum _{j=1}^n}} y_j(n-j+i_j+1)\bigr ]^{-n}}\right] . \end{aligned}$$
    (3.11)
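For example, with the illustrative functions sketched earlier, the Jeffreys-prior estimates (3.8)-(3.11) are obtained simply by setting both hyperparameters to zero (here x denotes an SRS and y a one-cycle RSS, both assumed already available):

# Jeffreys prior corresponds to alpha = beta = 0 in the gamma prior (2.3)
theta_sel_srs, theta_linex_srs = srs_bayes_estimates(x, alpha=0.0, beta=0.0, C=1.0)
theta_sel_rss, theta_linex_rss = rss_bayes_estimates(y, alpha=0.0, beta=0.0, C=1.0)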

3.4 Bayes Estimates Based on \(m\)-Cycle RSS

Based on \(m\) cycles, let

$$\begin{aligned} Y_{11}, Y_{12}, \ldots , Y_{1n};Y_{21}, Y_{22}, \ldots , Y_{2n}; \ldots ;Y_{m1}, Y_{m2}, \ldots , Y_{mn} \end{aligned}$$

be the \(m\)-cycle RSS from the exponential distribution in Eq. (2.1), and let the prior of \(\theta \) be as in Eq. (2.3). Then, the joint density in this case is given by

$$\begin{aligned} g(\underline{y}|\theta )&= \prod _{l=1}^m \sum _{i_1^l=0}^{0} \sum _{i_2^l=0}^{1} \cdots \sum _{i_n^l=0}^{n-1} \left[ \prod _{j=1}^n \;c_{i_j^l}(j)\right] \theta ^n \; e^{-\theta {\sum \limits _{j=1}^n y_{_{lj}}\big (n-j+i_j^l+1\big )}} \\&= \left[ \sum _{i_1^1=0}^{0} \sum _{i_2^1=0}^{1} \cdots \sum _{i_n^1=0}^{n-1} \right] \left[ \sum _{i_1^2=0}^{0} \sum _{i_2^2=0}^{1} \cdots \sum _{i_n^2=0}^{n-1} \right] \cdots \left[ \sum _{i_1^m=0}^{0} \sum _{i_2^m=0}^{1} \cdots \sum _{i_n^m=0}^{n-1} \right] K_{i_j^l} \theta ^{nm} \; e^{-\theta \eta _{i_j^l}},\ y_{lj}>0, \end{aligned}$$

where

$$\begin{aligned} K_{i_j^l}=\left[ \prod _{l=1}^m \prod _{j=1}^n \;c_{i_j^l}(j)\right] \end{aligned}$$

and

$$\begin{aligned} \eta _{i_j^l}={\sum \limits _{l=1}^m {\sum \limits _{j=1}^n y_{_{lj}}(n-j+i_j^l+1)}}. \end{aligned}$$

Hence, the posterior density can be expressed as

$$\begin{aligned} \pi (\theta | \underline{Y} )&= \frac{\pi (\theta ) g(\underline{y}|\theta )}{\int \limits _\theta \pi (\theta ) g(\underline{y}|\theta ) d\theta } \nonumber \\&= \frac{\left[ \prod \limits _{l=1}^m \sum \limits _{i_1^l=0}^{0} \sum \limits _{i_2^l=0}^{1} \cdots \sum \limits _{i_n^l=0}^{n-1}\right] \; K_{i_j^l} \theta ^{nm+\alpha -1} \; e^{-\theta \bigl [\eta _{i_j^l}+\beta \bigr ]}}{\left[ \prod \limits _{l=1}^m \sum \limits _{i_1^l=0}^{0} \sum \limits _{i_2^l=0}^{1} \cdots \sum \limits _{i_n^l=0}^{n-1}\right] \; K_{i_j^l} \; \Gamma (nm +\alpha ) \; \bigl [\eta _{i_j^l}+\beta \bigr ]^{-(nm+\alpha )}}, \ y_{lj}>0.\nonumber \\ \end{aligned}$$
(3.12)

From (3.12), the Bayesian estimate of \(\theta \) based on the squared error loss function is obtained as

$$\begin{aligned}&\widetilde{\theta }_{\textit{SEL}} (\underline{Y}^{(m)}) \\&=\frac{\left[ \prod \limits _{l=1}^m \sum \limits _{i_1^l=0}^{0} \sum \limits _{i_2^l=0}^{1} \cdots \sum \limits _{i_n^l=0}^{n-1}\right] \; K_{i_j^l} (nm+\alpha )\left[ \eta _{i_j^l}+\beta \right] ^{-(nm+\alpha +1)} }{\left[ \prod \limits _{l=1}^m \sum \limits _{i_1^l=0}^{0} \sum \limits _{i_2^l=0}^{1} \cdots \sum \limits _{i_n^l=0}^{n-1}\right] \; K_{i_j^l} \; \left[ \eta _{i_j^l}+\beta \right] ^{-(nm+\alpha )}}, \end{aligned}$$

while the Bayesian estimate of \(\theta \) based on the LINEX loss function is obtained to be

$$\begin{aligned} \widetilde{\theta }_{\textit{LINEX}}(\underline{Y}^{(m)}) =- \frac{1}{C}\ln \left[ \frac{\Bigl [\prod \limits _{l=1}^m \sum \limits _{i_1^l=0}^{0} \sum \limits _{i_2^l=0}^{1} \cdots \sum \limits _{i_n^l=0}^{n-1}\Bigr ] \; K_{i_j^l} \bigl [\eta _{i_j^l}+\beta +C \bigr ]^{-(nm+\alpha )} }{\Bigl [\prod \limits _{l=1}^m \sum \limits _{i_1^l=0}^{0} \sum \limits _{i_2^l=0}^{1} \cdots \sum \limits _{i_n^l=0}^{n-1}\Bigr ] \; K_{i_j^l} \; \bigl [\eta _{i_j^l} +\beta \bigr ]^{-(nm+\alpha )}} \right] . \end{aligned}$$
(3.13)

Upon using the non-informative Jeffreys prior in (2.4), we derive the Bayesian estimates based on the squared error and LINEX loss functions to be

$$\begin{aligned} \widetilde{\theta }_{\textit{SEL}}^J (\underline{Y}^{(m)}) = \frac{\Bigl [\prod \limits _{l=1}^m \sum \limits _{i_1^l=0}^{0} \sum \limits _{i_2^l=0}^{1} \cdots \sum \limits _{i_n^l=0}^{n-1}\Bigr ] \; K_{i_j^l} (nm)\bigl [\eta _{i_j^l}\bigr ]^{-(nm+1)} }{\Bigl [\prod \limits _{l=1}^m \sum \limits _{i_1^l=0}^{0} \sum \limits _{i_2^l=0}^{1} \cdots \sum \limits _{i_n^l=0}^{n-1}\Bigr ] \; K_{i_j^l} \; \bigl [\eta _{i_j^l}\bigr ]^{-(nm)}} \end{aligned}$$
(3.14)

and

$$\begin{aligned} \widetilde{\theta }_{\textit{LINEX}}^J(\underline{Y}^{(m)})= - \frac{1}{C}\ln \left[ \frac{\Bigl [\prod \limits _{l=1}^m \sum \limits _{i_1^l=0}^{0} \sum \limits _{i_2^l=0}^{1} \cdots \sum \limits _{i_n^l=0}^{n-1}\Bigr ] \; K_{i_j^l} \bigl [\eta _{i_j^l}+C \bigr ]^{-(nm)} }{\Bigl [\prod \limits _{l=1}^m \sum \limits _{i_1^l=0}^{0} \sum \limits _{i_2^l=0}^{1} \cdots \sum \limits _{i_n^l=0}^{n-1}\Bigr ] \; K_{i_j^l} \; \bigl [\eta _{i_j^l}\bigr ]^{-(nm)}} \right] , \end{aligned}$$
(3.15)

respectively.
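A direct, if computationally heavy, evaluation of (3.12) and (3.13) (and, with \(\alpha =\beta =0\), of (3.14) and (3.15)) extends the one-cycle sketch given earlier. In the following sketch (ours), the data are assumed to be stored as an \(m\times n\) array in which y[l][j-1] is the \(j\)th judgment order statistic of cycle \(l+1\); note that the sum contains \((n!)^m\) terms:

from itertools import product
from math import comb, log

def rss_bayes_estimates_mcycle(y, alpha, beta, C):
    """Bayes estimates of theta from an m-cycle RSS (rows = cycles) under the
    Gamma(alpha, beta) prior, by direct evaluation of (3.12)-(3.13)."""
    m, n = len(y), len(y[0])
    ranges = [range(j) for j in range(1, n + 1)] * m       # i_j^l = 0, ..., j-1, cycle by cycle
    den = num_sel = num_linex = 0.0
    for idx in product(*ranges):
        K, eta = 1.0, 0.0
        for l in range(m):
            for j in range(1, n + 1):
                k = idx[l * n + j - 1]
                K *= j * comb(n, j) * comb(j - 1, k) * (-1) ** k    # coefficient c_{i_j^l}(j)
                eta += y[l][j - 1] * (n - j + k + 1)                # contribution to eta
        den += K * (eta + beta) ** (-(n * m + alpha))
        num_sel += K * (n * m + alpha) * (eta + beta) ** (-(n * m + alpha + 1))
        num_linex += K * (eta + beta + C) ** (-(n * m + alpha))
    return num_sel / den, -log(num_linex / den) / C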

4 Alternative Procedure

In this section, we propose an alternative procedure for deriving the Bayesian estimates, which is simpler in nature and reduces the amount of numerical computation required for finding the estimates quite considerably. Using the independent spacings property of the exponential distribution (see Arnold et al. [5]) and then employing partial fractions (see Sen and Balakrishnan [14]), we can rewrite the density of the \(j^{th}\) order statistic \(Y_j\) as

$$\begin{aligned} g(y_j|\theta )=\sum \limits _{k=1}^j H_k(j) \; \theta \; k \; e^{-\theta \;k \; y_j}, \ y_j>0, \end{aligned}$$

where \( H_k(j)=\prod \limits _{\ell =1,\ell \ne k }^j \frac{\ell }{\ell -k}=(-1)^{k-1}\; {j \atopwithdelims ()k}\). Then, the joint density for \(m\)-cycle RSS can be expressed as

$$\begin{aligned} g(\underline{y}^{(m)}|\theta )&= \prod _{l=1}^m \prod _{j=1}^n \; g(y_{lj}|\theta ) \\&= \sum \,^{mn} Q_{i_j^l} \; (\theta \; i_j^l)^{mn} \; e^{-\theta \; \bigl [\sum \limits _{l=1}^m \sum \limits _{j=1}^n i_j^l \; y_ {lj}\bigr ]},\ y_{lj}>0, \end{aligned}$$

where we use the notation

$$\begin{aligned} \sum \,^{mn}= \left[ \sum _{i_1^1=1}^{1} \sum _{i_2^1=1}^{2} \cdots \sum _{i_n^1=1}^{n} \right] \left[ \sum _{i_1^2=1}^{1} \sum _{i_2^2=1}^{2} \cdots \sum _{i_n^2=1}^{n} \right] \cdots \left[ \sum _{i_1^m=1}^{1} \sum _{i_2^m=1}^{2} \cdots \sum _{i_n^m=1}^{n} \right] \end{aligned}$$

and

$$\begin{aligned} Q_{i_j^l}=\prod _{l=1}^m \prod _{j=1}^n \;H_{i_j^l}(j) \quad \text {and} \quad (i_j^{l})^{mn}=\prod _{l=1}^m \prod _{j=1}^n \;i_j^l. \end{aligned}$$

Then, the Bayesian estimates of \(\theta \) based on the gamma conjugate prior are obtained as follows:

$$\begin{aligned} \widetilde{\theta }_{\textit{SEL}} (\underline{Y}^{(m)}) = \frac{\sum ^{mn} \; Q_{i_j^l} \; (i_j^l)^{mn} (nm+\alpha )\left[ \left[ \sum \limits _{l=1}^m \sum \limits _{j=1}^n i_j^l \;y_ {lj}\right] +\beta \right] ^{-(nm+\alpha +1)} }{\sum ^{mn} \; \; Q_{i_j^l} \; (i_j^l)^{mn} \left[ \left[ \sum \limits _{l=1}^m \sum \limits _{j=1}^n i_j^l \;y_ {lj}\right] +\beta \right] ^{-(nm+\alpha )}}\quad \quad \end{aligned}$$
(4.1)

and

$$\begin{aligned} \widetilde{\theta }_{\textit{LINEX}}(\underline{Y}^{(m)})= - \frac{1}{C}\ln \left[ \frac{\sum ^{mn} \; Q_{i_j^l} \; (i_j^l)^{mn} \left[ \left[ \sum \limits _{l=1}^m \sum \limits _{j=1}^n i_j^l \;y_ {lj}\right] +\beta + C \right] ^{-(nm+\alpha )} }{\sum ^{mn} \; \; Q_{i_j^l} \; (i_j^l)^{mn} \left[ \left[ \sum \limits _{l=1}^m \sum \limits _{j=1}^n i_j^l \;y_ {lj}\right] +\beta \right] ^{-(nm+\alpha )}} \right] . \end{aligned}$$
(4.2)

Similarly, the Bayesian estimates of \(\theta \) based on the Jeffreys prior are obtained as follows:

$$\begin{aligned} \widetilde{\theta }_{\textit{SEL}}^J \big (\underline{Y}^{(m)}\big ) = \frac{\sum ^{mn} \; Q_{i_j^l} \; (i_j^l)^{mn} (nm)\Bigg [\sum \limits _{l=1}^m \sum \limits _{j=1}^n i_j^l \; y_ {lj}\bigg ]^{-(nm+1)} }{\sum ^{mn} \; Q_{i_j^l} \; (i_j^l)^{mn} \bigg [\sum \limits _{l=1}^m \sum \limits _{j=1}^n i_j^l \;y_ {lj}\bigg ]^{-(nm)}}, \end{aligned}$$
(4.3)

and

$$\begin{aligned} \widetilde{\theta }_{\textit{LINEX}}^J(\underline{Y}^{(m)})= - \frac{1}{C}\ln \left[ \frac{\sum ^{mn} \; Q_{i_j^l} \; (i_j^l)^{mn} \left[ \left[ \sum \limits _{l=1}^m \sum \limits _{j=1}^n i_j^l \;y_ {lj}\right] +C \right] ^{-(nm)} }{\sum ^{mn} \; \; Q_{i_j^l} \; (i_j^l)^{mn} \left[ \sum \limits _{l=1}^m \sum \limits _{j=1}^n i_j^l \; y_ {lj}\right] ^{-(nm)}} \right] . \end{aligned}$$
(4.4)
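The following sketch (ours) transcribes (4.1)-(4.2), using the closed form \(H_k(j)=(-1)^{k-1}{j \atopwithdelims ()k}\) for the coefficients; setting \(\alpha =\beta =0\) gives the Jeffreys-prior versions (4.3)-(4.4). It assumes the same \(m\times n\) data layout used in the earlier m-cycle sketch, with the indices \(i_j^l\) running over \(1,\ldots ,j\):

from itertools import product
from math import comb, log

def alt_bayes_estimates(y, alpha, beta, C):
    """Alternative-procedure estimates (4.1)-(4.2); alpha = beta = 0 gives (4.3)-(4.4)."""
    m, n = len(y), len(y[0])
    ranges = [range(1, j + 1) for j in range(1, n + 1)] * m     # i_j^l = 1, ..., j
    den = num_sel = num_linex = 0.0
    for idx in product(*ranges):
        w, T = 1.0, 0.0
        for l in range(m):
            for j in range(1, n + 1):
                k = idx[l * n + j - 1]
                w *= k * (-1) ** (k - 1) * comb(j, k)    # i_j^l * H_{i_j^l}(j)
                T += k * y[l][j - 1]                     # contribution to sum of i_j^l y_{lj}
        den += w * (T + beta) ** (-(n * m + alpha))
        num_sel += w * (n * m + alpha) * (T + beta) ** (-(n * m + alpha + 1))
        num_linex += w * (T + beta + C) ** (-(n * m + alpha))
    return num_sel / den, -log(num_linex / den) / C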

5 Numerical Results

In order to demonstrate the usefulness of the Bayesian estimates based on both SRS and RSS derived in the preceding sections (for the RSS, we use the alternative procedure of Sect. 4), we carry out Monte Carlo simulations using the following steps (a sketch of the procedure, in Python, is given after the list):

  1.

    Generate SRS and RSS samples of size \(n\) from the exponential distribution for the case when \(m=1\) (one cycle is used in many applications).

  2.

    Calculate the Bayesian estimates derived in Sects. 3 and 4 using the SRS and RSS samples;

  3.

    Repeat Steps 1 and 2 for 1,000 runs;

  4.

    Then calculate the bias and mean squared error (MSE) of all the estimates.
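The following Python sketch (ours) strings together the illustrative functions given in the earlier sections and carries out Steps 1-4 for the one-cycle case \(m=1\); for simplicity it evaluates the Sect. 3 expressions for the RSS estimates, and it is intended only to indicate one way of organizing such a simulation:

import numpy as np

def simulate(n, theta, alpha, beta, C, runs=1000, seed=0):
    """Monte Carlo bias and MSE of the SRS and RSS Bayes estimates (m = 1),
    using the illustrative functions sketched in the earlier sections."""
    rng = np.random.default_rng(seed)
    draw = lambda r, size: r.exponential(1.0 / theta, size)     # exponential with rate theta
    labels = ("SRS-SEL", "SRS-LINEX", "RSS-SEL", "RSS-LINEX")
    estimates = {lab: [] for lab in labels}
    for _ in range(runs):                                       # Step 3: repeat the runs
        x = draw(rng, n)                                        # Step 1: SRS of size n
        y = one_cycle_rss(n, rng, draw)                         # Step 1: one-cycle RSS
        values = (*srs_bayes_estimates(x, alpha, beta, C),      # Step 2: compute estimates
                  *rss_bayes_estimates(y, alpha, beta, C))
        for lab, val in zip(labels, values):
            estimates[lab].append(val)
    summary = {}
    for lab in labels:                                          # Step 4: bias and MSE
        vals = np.asarray(estimates[lab])
        summary[lab] = (vals.mean() - theta, ((vals - theta) ** 2).mean())
    return summary

# For instance: simulate(n=4, theta=2.0, alpha=1.0, beta=1.0, C=1.0)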

The results so obtained, for the cases \(n=3(1)6, \theta =2, \ \alpha =1, \ \beta =1\), and \(C=1,-1\), are all presented in Tables 1 and 2.

Table 1 Bias of the Bayesian estimates based on SRS and RSS when \(\theta =2, \ \alpha =1, \ \beta =1\) for \(n=3(1)6\)
Table 2 MSE of the Bayesian estimates based on SRS and RSS when \(\theta =2, \ \alpha =1, \ \beta =1\) for \(n=3(1)6\)

From Table 1, we first of all observe that the Bayesian estimates of \(\theta \) are all biased. Next, we observe that the estimates based on the informative gamma prior are less biased than the corresponding estimates based on the Jeffreys non-informative prior. Quite importantly, we finally observe that the Bayesian estimates based on RSS are considerably less biased than the corresponding Bayesian estimates based on SRS.

From Table 2, we first note that the mean squared errors of all the estimates decrease as \(n\) increases, as one would expect. Next, we observe that the mean squared errors of the Bayesian estimates based on the informative gamma prior are in general smaller than the corresponding values for the estimates based on the Jeffreys non-informative prior. Finally, we observe that the Bayesian estimates based on RSS have a much smaller mean squared error than the corresponding Bayesian estimates based on SRS in all cases considered. This clearly demonstrates the efficiency of inference based on RSS and also the usefulness of the Bayesian estimates based on RSS developed here for the scale parameter of the exponential distribution.