1 Introduction

The primary objective of any survey is to improve the efficiency of the estimate of a population parameter, which relies not only upon the sample size and sampling fraction but also upon the heterogeneity or variability of the population. By far the most common sampling scheme used to reduce the heterogeneity of the population is stratified simple random sampling (SSRS). Ranked set sampling (RSS) was first introduced by McIntyre [16]. Following McIntyre [16], Samawi [19] investigated the concept of stratified ranked set sampling (SRSS) as an alternative sampling scheme to SSRS, which merges the benefits of stratification and RSS to obtain an unbiased estimator of the population mean with a likely improvement in efficiency. In contrast to SSRS, the quantified observations in SRSS are not identically distributed for the purpose of drawing inference, owing to the additional structure imposed by ranking; this permits the observations to capture different aspects of the population and yields possible gains in efficiency.

In the theory of sample surveys, the aim of the survey statistician is to improve the efficiency of the proposed estimators. RSS becomes a better alternative to other sampling schemes provided the ranking of units is possible. The RSS procedure is based on drawing m simple random samples, each of size m units, from the population and ranking the m units within each set according to the variable of interest, either visually or by any cost-free measure. From the first sample, the unit with rank 1 is measured and the remaining units of the sample are discarded. From the second sample, the unit with rank 2 is measured and the remaining units of the sample are discarded. The process is repeated in the same pattern until the unit with rank m is measured from the \(m^{th}\) sample. This whole process constitutes one cycle. If the cycle is repeated r times, then it yields a ranked set sample of size \(n=mr\).
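The following minimal Python sketch illustrates one way to implement this procedure; the function name, the use of an inexpensive concomitant variable x for ranking, and the simulated data are illustrative assumptions rather than part of the original scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def ranked_set_sample(y, x, m, r):
    """One RSS of size n = m*r: in every cycle, m sets of m units are drawn,
    each set is ranked on the cheap variable x, and only the unit holding the
    i-th rank in the i-th set is actually quantified on y."""
    measured = []
    for _ in range(r):                                       # r cycles
        for i in range(m):                                   # i-th set of the cycle
            idx = rng.choice(len(y), size=m, replace=False)  # simple random set of m units
            ranked = idx[np.argsort(x[idx])]                 # rank the set on x
            measured.append(y[ranked[i]])                    # keep only the unit with rank i+1
    return np.array(measured)                                # ranked set sample of size m*r

# Illustrative use: y is expensive to measure, x is a cheap correlated ranking variable.
y = rng.normal(50, 10, size=1000)
x = y + rng.normal(0, 3, size=1000)
print(ranked_set_sample(y, x, m=3, r=3).mean())
```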

The stratified ranked set sampling procedure is based on drawing \(m_h\) independent simple random samples, each of size \(m_h\) units, from the \(h^{th}\) stratum of the population, \(h=1,2,...,L\). Ranking is then performed on the observations of each sample and the RSS procedure is applied to obtain L independent ranked set samples, each of size \(m_h\), such that \(\sum _{h=1}^{L}m_h=m\). This completes one cycle of SRSS. The whole procedure is repeated r times in order to obtain the desired sample size \(n_h=m_hr\) in each stratum.
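A stratified version is sketched below under the same illustrative assumptions: each stratum contributes an independent ranked set sample of size \(m_hr\), and the stratum RSS means are combined with the weights \(W_h=N_h/N\) defined in Sect. 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def srss_means(y, x, strata, m_h, r):
    """Stratified RSS means: draws an independent ranked set sample from every
    stratum (ranking on x, measuring y) and returns the weighted RSS means
    ybar_[srss] and xbar_(srss) with weights W_h = N_h / N."""
    N = sum(len(s) for s in strata)
    ybar_srss = xbar_srss = 0.0
    for h, units in enumerate(strata):                       # strata: list of index arrays
        W_h = len(units) / N
        ys, xs = [], []
        for _ in range(r):                                   # r cycles within stratum h
            for i in range(m_h[h]):
                s = rng.choice(units, size=m_h[h], replace=False)
                ranked = s[np.argsort(x[s])]                 # rank the set on x
                ys.append(y[ranked[i]])                      # judgment-ordered y value
                xs.append(x[ranked[i]])                      # corresponding x value
        ybar_srss += W_h * np.mean(ys)                       # contribution to \bar{y}_{[srss]}
        xbar_srss += W_h * np.mean(xs)                       # contribution to \bar{x}_{(srss)}
    return ybar_srss, xbar_srss
```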

The literature on RSS is quite broad, starting with McIntyre [16] and extending to many variations to date. SRSS, as introduced by Samawi [19], is one of those variations of RSS. Samawi and Siam [20] suggested the ratio estimator under SRSS. Using auxiliary information, Mandowara and Mehta [15] analysed some modified ratio estimators under SRSS, whereas Mehta and Mandowara [17] suggested an advanced estimator under SRSS. Linder et al. [14] suggested the regression estimator under SRSS. Khan and Shabbir [10] considered Hartley-Ross type unbiased estimators under RSS and SRSS. Saini and Kumar [18] investigated a ratio type estimator using quartiles as auxiliary information under SRSS. Bhushan et al. [4] considered the problem of mean estimation utilizing logarithmic estimators under SRSS, while Bhushan et al. [5] introduced an efficient estimation procedure for the population mean under SRSS. Bhushan et al. [6] suggested some improved classes of estimators employing SRSS. Bhushan et al. [7] investigated SRSS in the quest for an optimal class of estimators. The main objective of this paper is to study the performance of combined and separate log type classes of estimators vis-à-vis the existing classes of estimators under SRSS.

The rest of the paper is drafted as follows. In Sect. 2, the notations used throughout this article will be defined. Section 3 will consider the review of the combined and separate classes of estimators with their properties. The suggested combined and separate classes of estimators will be presented in Sect. 4, whereas the efficiency comparison of combined and separate estimators will be given in Sect. 5. In support of theoretical results, a simulation study will be carried out in Sect. 6. Some formal concluding remarks will be given in Sect. 7.

2 Notations

To obtain an estimate of the population mean \(\bar{Y}\) of the study variable y, let the ranking be executed on the auxiliary variable x. For the \(r^{th}\) cycle and \(h^{th}\) stratum, let \((X_{h(1)r}, Y_{h[1]r})\), \((X_{h(2)r}, Y_{h[2]r})\),..., \((X_{h(m_h)r}, Y_{h[m_h]r})\) denote the stratified ranked set sample with bivariate probability density function \(f(x_h,y_h)\) and bivariate cumulative distribution function (c.d.f.) \(F(x_h,y_h)\).

The notations used throughout this article are defined below.

  • N; size of population,

  • \(N_h\); size of population in stratum h,

  • n; size of sample,

  • \(n_h\); size of sample in stratum h,

  • \(W_h=N_h/N\); weight of stratum h,

  • \(\bar{y}_h=\sum _{i=1}^{n_h}y_{h_i}/n_h\); sample mean of variable Y in stratum h,

  • \(\bar{y}_{st}=\sum _{h=1}^{L}W_h\bar{y}_{h}\); sample mean of variable Y,

  • \(\bar{x}_h=\sum _{i=1}^{n_h}x_{h_i}/n_h\); sample mean of variable X in stratum h,

  • \(\bar{x}_{st}=\sum _{h=1}^{L}W_h\bar{x}_{h}\); sample mean of variable X,

  • \(\bar{y}_{[\text {srss}]}=\sum _{h=1}^{L}W_h\bar{y}_{h{[{rss}]}}\); stratified ranked set sample mean of variable Y,

  • \(\bar{x}_{(\text {srss})}=\sum _{h=1}^{L}W_h\bar{x}_{h{({rss})}}\); stratified ranked set sample mean of variable X,

  • \(\bar{y}_{h{[\text {rss}]}}=\sum _{i=1}^{m_h}\sum _{j=1}^{r}{y}_{h{[i]}j}/m_hr\); ranked set sample mean of variable Y in stratum h,

  • \(\bar{x}_{h{(\text {rss})}}=\sum _{i=1}^{m_h}\sum _{j=1}^{r}{x}_{h{(i)}j}/m_hr\); ranked set sample mean of variable X in stratum h,

  • \(\bar{Y}_h=\sum _{i=1}^{N_h}y_{h_i}/N_h\); population mean of variable Y in stratum h,

  • \(\bar{Y}=\bar{Y}_{st}=\sum _{h=1}^{L}W_h\bar{Y}_{h}\); population mean of variable Y,

  • \(\bar{X}_h=\sum _{i=1}^{N_h}x_{h_i}/N_h\); population mean of variable X in stratum h,

  • \(\bar{X}=\bar{X}_{st}=\sum _{h=1}^{L}W_h\bar{X}_{h}\); population mean of variable X,

  • \(R=\bar{Y}/\bar{X}\); population ratio,

  • \(R_h=\bar{Y}_h/\bar{X}_h\); population ratio in stratum h,

  • \(S_{y_h}^2=(N_h-1)^{-1}\sum _{i=1}^{N_h}(y_{h_i}-\bar{Y}_h)^2\); population variance of variable Y in stratum h,

  • \(S_{x_h}^2=(N_h-1)^{-1}\sum _{i=1}^{N_h}(x_{h_i}-\bar{X}_h)^2\); population variance of variable X in stratum h,

  • \(S_{xy_h}=(N_h-1)^{-1}\sum _{i=1}^{N_h}(x_{h_i}-\bar{X}_h)(y_{h_i}-\bar{Y}_h)\); population covariance between variables X and Y in stratum h,

  • \(\rho _{xy_h}=S_{xy_h}/S_{x_h}S_{y_h}\); population correlation coefficient between variables X and Y in stratum h,

  • \(C_{y_h}\); population coefficient of variation for variable Y in stratum h,

  • \(C_{x_h}\); population coefficient of variation for variable X in stratum h,

  • \(\beta _1(x_h)=(E(x_h-\bar{X}_h)^3)^2/(E(x_h-\bar{X}_h)^2)^3\); population coefficient of skewness for variable X in stratum h, and

  • \(\beta _2(x_h)=E(x_h-\bar{X}_h)^4/(E(x_h-\bar{X}_h)^2)^2\); population coefficient of kurtosis for variable X in stratum h.

In order to find the bias and mean square error of the combined estimators, the following notations will be used throughout this paper.

Let \(\bar{y}_{[srss]}=\bar{Y}(1+\varepsilon _0),~ \bar{x}_{(srss)}=\bar{X}(1+\varepsilon _1),~\text {such that}~ E(\varepsilon _0)=E(\varepsilon _1)=0\)

$$\begin{aligned} V_{r,s}=\sum \limits _{h=1}^{L}W_h^{r+s}\frac{E\left[ (\bar{x}_{h(rss)}-\bar{X}_h)^r(\bar{y}_{h[rss]}-\bar{Y}_h)^s\right] }{\bar{X}^r\bar{Y}^s} \end{aligned}$$
(2.1)

Following (2.1), we can write

\(E({\varepsilon _0}^2)=\sum \nolimits _{h=1}^{L}W_h^2\left( \gamma _h\frac{S_{y_h}^2}{\bar{Y}^2}-D^2_{y_{h[i]}}\right) =V_{02},~~ E({\varepsilon _1}^2)=\sum \nolimits _{h=1}^{L}W_h^2\left( \gamma _h\frac{S_{x_h}^2}{\bar{X}^2}-D^2_{x_{h(i)}}\right) =V_{20} ~\text {and}~ E({\varepsilon _0\varepsilon _1})=\sum \nolimits _{h=1}^{L}W_h^2\left( \gamma _h\rho _{x_hy_h}\frac{S_{x_h}}{\bar{X}}\frac{S_{y_h}}{\bar{Y}}-D_{{x_hy_h}_{[i]}}\right) =V_{11}\) 

where \(\gamma _h=1/m_hr,~~ D^2_{x_{h(i)}}=\sum \nolimits _{i=1}^{m_h}\tau _{x_{h(i)}}^2/m_h^2r\bar{X}^2,~~ D^2_{y_{h[i]}}=\sum \nolimits _{i=1}^{m_h}\tau _{y_{h[i]}}^2/m_h^2r\bar{Y}^2,~~ D_{{x_hy_h}_{[i]}}=\sum \nolimits _{i=1}^{m_h}\tau _{{x_hy_h}_{[i]}}/m_h^2r\bar{X}\bar{Y}\), \(\tau _{x_{_{h(i)}}}=(\mu _{x_{_{h(i)}}}-\bar{X}_h)\),   \(\tau _{y_{_{h[i]}}}=(\mu _{y_{_{h[i]}}}-\bar{Y}_h)\),   and  \(\tau _{{x_hy_h}_{{[i]}}}=(\mu _{x_{{h(i)}}}-\bar{X}_h)(\mu _{y{_{h[i]}}}-\bar{Y}_h)\).
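As an illustration of how these quantities combine, the short sketch below evaluates \(V_{20}\), \(V_{02}\) and \(V_{11}\) from assumed stratum-level inputs; all numerical values, including the means \(\mu _{x_{h(i)}}\) and \(\mu _{y_{h[i]}}\) of the judgment order statistics, are hypothetical.

```python
import numpy as np

# Hypothetical two-stratum inputs
W = np.array([0.4, 0.6])                       # stratum weights W_h
m = np.array([3, 3]); r = 3                    # set sizes m_h and number of cycles r
Ybar, Xbar = 52.0, 40.0                        # overall population means
Ybar_h = np.array([51.0, 52.7]); Xbar_h = np.array([39.0, 40.7])
Sy = np.array([9.0, 11.0]); Sx = np.array([6.0, 8.0]); rho = np.array([0.7, 0.7])
mu_x = [np.array([34.0, 39.0, 44.0]), np.array([35.5, 40.7, 45.9])]   # mu_{x_{h(i)}}
mu_y = [np.array([45.0, 51.0, 57.0]), np.array([46.0, 52.7, 59.4])]   # mu_{y_{h[i]}}

gamma = 1.0 / (m * r)                          # gamma_h = 1 / (m_h r)
V20 = V02 = V11 = 0.0
for h in range(len(W)):
    tau_x = mu_x[h] - Xbar_h[h]                # tau_{x_{h(i)}}
    tau_y = mu_y[h] - Ybar_h[h]                # tau_{y_{h[i]}}
    Dx2 = np.sum(tau_x**2) / (m[h]**2 * r * Xbar**2)           # D^2_{x_{h(i)}}
    Dy2 = np.sum(tau_y**2) / (m[h]**2 * r * Ybar**2)           # D^2_{y_{h[i]}}
    Dxy = np.sum(tau_x * tau_y) / (m[h]**2 * r * Xbar * Ybar)  # D_{x_h y_h [i]}
    V20 += W[h]**2 * (gamma[h] * Sx[h]**2 / Xbar**2 - Dx2)
    V02 += W[h]**2 * (gamma[h] * Sy[h]**2 / Ybar**2 - Dy2)
    V11 += W[h]**2 * (gamma[h] * rho[h] * Sx[h] * Sy[h] / (Xbar * Ybar) - Dxy)

print(V20, V02, V11)                           # E(eps1^2), E(eps0^2), E(eps0*eps1)
```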

To find the bias and mean square error of the separate estimators, the following notations will be used throughout this paper.

Let \(\bar{y}_{[srss]}=\bar{Y}(1+e_0),~~ \bar{x}_{(srss)}=\bar{X}(1+e_1),~\text {such that}~ E(e_0)=E(e_1)=0\),  

\(E({e_0}^2)=\left( \gamma _h\frac{S_{y_h}^2}{\bar{Y }_h^2}-M^2_{y_{h[i]}}\right) =U_{0},~~ E({e_1}^2)=\left( \gamma _h\frac{S_{x_h}^2}{\bar{X}_h^2}-M^2_{x_{h(i)}}\right) =U_{1} ~\text {and}~ E({e_0e_1})=\left( \gamma _h\rho _{x_hy_h}\frac{S_{x_h}}{\bar{X }_h}\frac{S_{y_h}}{\bar{Y}_h}-M_{{x_hy_h}_{[i]}}\right) =U_{10}\),  

where \(M^2_{x_{h(i)}}=\sum _{i=1}^{m_h}\tau _{x_{h(i)}}^2/m_h^2r\bar{X}_h^2,~~ M^2_{y_{h[i]}}=\sum _{i=1}^{m_h}\tau _{y_{h[i]}}^2/m_h^2r\bar{Y}_h^2,~~ M_{{x_hy_h}_{[i]}}=\sum _{i=1}^{m_h}\tau _{{x_hy_h}_{[i]}}/m_h^2r\bar{X}_h\bar{Y}_h\),   \(\tau _{x_{_{h(i)}}}=(\mu _{x_{_{h(i)}}}-\bar{X}_h)\),   \(\tau _{y_{_{h[i]}}}=(\mu _{y_{_{h[i]}}}-\bar{Y}_h)\),   and  \(\tau _{{x_hy_h}_{{[i]}}}=(\mu _{x_{_{h(i)}}}-\bar{X}_h)(\mu _{y_{_{h[i]}}}-\bar{Y}_h)\).

3 Review of estimators

The conventional mean estimator under SRSS can be defined as

$$\begin{aligned} {T}_{m}={\bar{y}_{[srss]}}=\sum \limits _{h=1}^{L}W_h\bar{y}_{h{[{rss}]}} \end{aligned}$$

having variance as

$$\begin{aligned} V(T_{m})&=\sum \limits _{h=1}^{L}W_h^2\bar{Y}^2\left( \gamma _h\frac{S_{y_h}^2}{\bar{Y}^2}-D ^2_{y_{h[i]}}\right) \end{aligned}$$
(3.1)

Further, in this section, we review some well-known existing combined and separate estimators for the estimation of the population mean under SRSS.

3.1 Combined estimators

Samawi and Siam [20] considered the classical combined ratio estimator under SRSS as

$$\begin{aligned} {T}_{r}^c=\frac{\bar{y}_{[srss]}}{\bar{x}_{(srss)}}\bar{X} \end{aligned}$$

Linder et al. [14] suggested the regression estimator under SRSS as

$$\begin{aligned} T_{\beta }^c=\bar{y}_{[srss]}+\beta (\bar{X}-\bar{x}_{(srss)}) \end{aligned}$$

where \(\beta \) is the regression coefficient of y on x.
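For concreteness, a small sketch of these two combined estimators is given below; the numerical inputs (the SRSS means, the known population mean \(\bar{X}\) and the regression coefficient) are hypothetical.

```python
def combined_ratio(ybar_srss, xbar_srss, Xbar):
    """Classical combined ratio estimator T_r^c (Samawi and Siam [20])."""
    return ybar_srss / xbar_srss * Xbar

def combined_regression(ybar_srss, xbar_srss, Xbar, beta):
    """Combined regression estimator T_beta^c (Linder et al. [14])."""
    return ybar_srss + beta * (Xbar - xbar_srss)

# Hypothetical SRSS means and known population quantities
print(combined_ratio(51.8, 39.5, Xbar=40.0))
print(combined_regression(51.8, 39.5, Xbar=40.0, beta=1.05))
```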

Utilizing the information on auxiliary variable, Mandowara and Mehta [15] proposed some ratio type estimators as

$$\begin{aligned} {T}_{mm_1}^c&=\bar{y}_{[srss]}\frac{\sum _{h=1}^{L}W_h(\bar{X}_h+C_{x_h})}{\sum _{h=1}^{L}W_h(\bar{x}_{h(rss)}+C_{x_h})} \\ {T}_{mm_2}^c&=\bar{y}_{[srss]}\frac{\sum _{h=1}^{L}W_h(\bar{X}_h+\beta _2{(x_h)})}{\sum _{h=1}^{L}W_h(\bar{x}_{h(rss)}+\beta _2{(x_h)})}\\ {T}_{mm_3}^c&=\bar{y}_{[srss]}\frac{\sum _{h=1}^{L}W_h(\bar{X}_h\beta _2{(x_h)}+C_{x_h})}{\sum _{h=1}^{L}W_h(\bar{x}_{h(rss)}\beta _2{(x_h)}+C_{x_h})}\\ {T}_{mm_4}^c&=\bar{y}_{[srss]}\frac{\sum _{h=1}^{L}W_h(\bar{X}_hC_{x_h}+\beta _2{(x_h)})}{\sum _{h=1}^{L}W_h(\bar{x}_{h(rss)}C_{x_h}+\beta _2{(x_h)})} \end{aligned}$$

On the lines of Kadilar and Cingi [9], Mehta and Mandowara [17] suggested an advanced ratio estimator under SRSS as

$$\begin{aligned} T_{mm}^c=k\frac{\bar{y}_{[srss]}}{\bar{x}_{(srss)}}\bar{X}=\hat{R}_{ksrss}\bar{X}\end{aligned}$$

where k is a suitably chosen scalar.

Following Shabbir and Gupta [21], we define the combined regression cum ratio estimator under SRSS as

$$\begin{aligned} {T}_{sg}^c=\lambda [\bar{y}_{[srss]}+\beta (\bar{X}-\bar{x}_{(srss)})]\left( \frac{\bar{z}_{(srss)}}{\bar{Z}}\right) \end{aligned}$$

where \(\bar{z}_{(srss)}=\sum _{h=1}^{L}W_h(\bar{x}_{h(rss)}+{X})\), \(\bar{Z}=\sum _{h=1}^{L}W_h(\bar{X}_h+{X})\), X is the population total, and \(\lambda \) is a suitably chosen scalar.

On the lines of Koyuncu and Kadilar [12], one may define the following combined estimator under SRSS as

$$\begin{aligned} T_{kk}^c=\lambda _k\bar{y}_{[srss]}\bigg [\frac{a\bar{X}+b}{\alpha (a\bar{x}_{(srss)}+b)+(1-\alpha )(a\bar{X }+b)}\bigg ]^g \end{aligned}$$

where \(\alpha \) is a fixed constant and g is a suitably chosen scalar which takes the values 1 and -1 to produce ratio and product type estimators, respectively, whereas \((a \ne 0)\) and b are either real numbers or functions of known parameters of the auxiliary variable x.

Following Singh and Vishwakarma [24], one may consider a combined general procedure for estimating the population mean \(\bar{Y}\) under SRSS as

$$\begin{aligned} T_{sv}^c=\Lambda _1\bar{y}_{[srss]}+\Lambda _2\bar{y}_{[srss]}\left( \frac{\bar{X}^*}{\bar{x}^*_{(srss)}}\right) \end{aligned}$$

where \(\bar{x}^*_{(srss)}=\sum _{h=1}^{L}W_h(a_h\bar{x}_h+b_h)\), \(\bar{X}^*=\sum _{h=1}^{L}W_h(a_h\bar{X}_h+b_h)\) and \(\Lambda _1\), \(\Lambda _2\) are suitably chosen scalars.

Motivated by Singh and Solanki [23], we define a new family of combined estimators for population mean \(\bar{Y}\) under SRSS as

$$\begin{aligned} T_{ss}^c=\left[ \begin{array}{l} \lambda _1\bar{y}_{[srss]}\left\{ \frac{\alpha (a\bar{x}_{(srss)}+b)+(1-\alpha )(a\bar{X}+b)}{(a\bar{X}+b)}\right\} ^{\delta }+\lambda _2\bar{y}_{[srss]}\left\{ \frac{(a\bar{X}+b)}{\alpha (a\bar{x}_{(srss)}+b)+(1-\alpha )(a\bar{X}+b)}\right\} ^g \end{array}\right] \end{aligned}$$

where \(\delta , g\), and \(\alpha \) are suitably chosen constants, whereas \(\lambda _1\) and \(\lambda _2\) are optimizing scalars to be determined later.

Saini and Kumar [18] utilized quartiles as auxiliary information and suggested a ratio type estimator under SRSS as

$$\begin{aligned} T_{sk_t}^c=\bar{y}_{[srss]}\left[ \frac{\bar{X}-\bar{x}_{(srss)}+q_{t}}{\bar{X}+\bar{x}_{(srss)}+q_{t}}\right] ;~~t=1,3 \end{aligned}$$

where \(q_t,~t=1,3\) is the \(t^{th}\) quartile.

The MSEs of these estimators are given in Appendix A for ready reference and further analysis.

3.2 Separate estimators

Samawi and Siam [20] suggested the classical separate ratio estimator under SRSS as

$$\begin{aligned} {T}_{r}^s=\sum \limits _{h=1}^{L}W_h\frac{\bar{y}_{h{[rss]}}}{\bar{x}_{h{(rss)}}}\bar{X}_h \end{aligned}$$

where \(\bar{y}_{h{[rss]}}=\frac{1}{m_hr}\sum _{i=1}^{m_h}\sum _{j=1}^{r}{y}_{h{[i]}j}\) and \(\bar{x}_{h{(rss)}}=\frac{1}{m_hr}\sum _{i=1}^{m_h}\sum _{j=1}^{r}{x}_{h{(i)}j}\).

Linder et al. [14] suggested the separate regression estimator under SRSS as

$$\begin{aligned} T_{\beta }^s=\sum _{h=1}^{L}W_h[\bar{y}_{h[rss]}+\beta _h(\bar{X}_h-\bar{x}_{h(rss)})] \end{aligned}$$

where \(\beta _h\) is the regression coefficient of y on x.

The separate versions of the Mandowara and Mehta [15] estimators under SRSS are

$$\begin{aligned} {T}_{mm_1}^s&=\sum \limits _{h=1}^{L}W_h\bar{y}_{h[rss]}\left( \frac{\bar{X}_h+C_{x_h}}{\bar{x}_{{h(rss)}}+C_{x_h}}\right) \\ {T}_{mm_2}^s&=\sum \limits _{h=1}^{L}W_h\bar{y}_{h[rss]}\left( \frac{\bar{X}_h+\beta _2{(x_h)}}{\bar{x}_{{h(rss)}}+\beta _2{(x_h)}}\right) \\ {T}_{mm_3}^s&=\sum \limits _{h=1}^{L}W_h\bar{y}_{h[rss]}\left( \frac{\bar{X}_h\beta _2{(x_h)}+C_{x_h}}{\bar{x}_{{h(rss)}}\beta _2{(x_h)}+C_{x_h}}\right) \\ {T}_{mm_4}^s&=\sum \limits _{h=1}^{L}W_h\bar{y}_{h[rss]}\left( \frac{\bar{X}_hC_{x_h}+\beta _2{(x_h)}}{\bar{x}_{{h(rss)}}C_{x_h}+\beta _2{(x_h)}}\right) \end{aligned}$$

The separate version of Mehta and Mandowara [17] estimator under SRSS is

$$\begin{aligned} T_{mm}^s=\sum \limits _{h=1}^{L}W_hk_h\frac{\bar{y}_{h[rss]}}{\bar{x}_{h(rss)}}\bar{X}_h \end{aligned}$$

where \(k_h\) is a suitably chosen scalar.

On the lines of Shabbir and Gupta [21], we define the separate regression cum ratio estimator under SRSS as

$$\begin{aligned} {T}_{sg}^s=\sum \limits _{h=1}^{L}W_h\lambda _h[\bar{y}_{h[rss]}+\beta _h(\bar{X}_h-\bar{x}_{h(rss)})]\left( \frac{\bar{z}_{h(rss)}}{\bar{Z}_h}\right) \end{aligned}$$

where \(\bar{z}_{h(rss)}=\bar{x}_{h(rss)}+{X}_h\), \(\bar{Z}_h=\bar{X}_h+{X}_h\), \(X_h\) is the population total in stratum h, and \(\lambda _h\) is a suitably chosen scalar.

Motivated by Koyuncu and Kadilar [12], one may consider the following separate estimator under SRSS as

$$\begin{aligned} {T}_{kk}^s=\sum \limits _{h=1}^{L}W_h\lambda _{k_h}\bar{y}_{h[rss]}\bigg [\frac{a_h\bar{X }_h+b_h}{\alpha _h(a_h\bar{x}_{h(rss)}+b_h)+(1-\alpha _h)(a_h\bar{X }_h+b_h)}\bigg ]^g \end{aligned}$$

where \(\alpha _h\) and g are fixed constants, and \(\lambda _{k_h}\) is a suitably chosen scalar. Also, \((a_h \ne 0)\) and \(b_h\) are either real numbers or functions of known parameters of the auxiliary variable \(x_h\) in stratum h.

Following Singh and Vishwakarma [24], one may consider a separate general procedure for estimating population mean \(\bar{Y }\) under SRSS as

$$\begin{aligned} T_{sv}^s=\sum \limits _{h=1}^{L}W_h\left[ \Lambda _{1_h}\bar{y}_{h[rss]}+\Lambda _{2_h}\bar{y}_{h[rss]}\left( \frac{\bar{X}_h}{\bar{x}_{h(rss)}}\right) \right] \end{aligned}$$

where \(\Lambda _{1_h}\) and \(\Lambda _{2_h}\) are suitably chosen scalars in stratum h.

Motivated by Singh and Solanki [23], we define a new family of separate estimators for population mean \(\bar{Y }\) under SRSS as

$$\begin{aligned} T_{ss}^s=\sum \limits _{h=1}^{L}W_h\left[ \begin{array}{l} \lambda _{1_h}\bar{y}_{h[rss]}\left\{ \frac{\alpha _h(a_{h}\bar{x}_{h(rss)}+b_{h})+(1-\alpha _h)(a_{h}\bar{X}_h+b_{h})}{(a_{h}\bar{X}_h+b_{h})}\right\} ^{\delta }\\ +\lambda _{2_h}\bar{y}_{h[rss]}\left\{ \frac{(a_{h}\bar{X}_h+b_{h})}{\alpha _h(a_{h}\bar{x}_{h(rss)}+b_{h})+(1-\alpha _h)(a_{h}\bar{X}_h+b_{h})}\right\} ^g \end{array}\right] \end{aligned}$$

where \(\delta ,~g\), and \(\alpha _h\) are suitably chosen scalars, whereas \(\lambda _{1_h}\) and \(\lambda _{2_h}\) are optimizing constants to be determined later.

The separate type of Saini and Kumar [18] estimator under SRSS is

$$\begin{aligned} T_{sk_t}^s=\sum \limits _{h=1}^{L}W_h\bar{y}_{h[rss]}\left[ \frac{\bar{X}_h-\bar{x}_{h(rss)}+q_{t_h}}{\bar{X}_h+\bar{x}_{h(rss)}+q_{t_h}}\right] ;~~t=1,3 \end{aligned}$$

The MSEs of these estimators are given in Appendix B for ready reference and further analysis.

4 Proposed estimators

The crux of this paper is to suggest some efficient combined and separate classes of estimators for the estimation of the population mean \(\bar{Y}\) under SRSS. The suggested classes of estimators are a better choice than the existing classes of estimators discussed in the previous section. Motivated by Bhushan et al. [4], we extend the work of Bhushan and Kumar [1] and suggest some combined and separate classes of estimators of the population mean under SRSS.

4.1 Combined estimators

We propose some efficient combined log type class of estimators under SRSS as

$$\begin{aligned} T_{v_1}^c&=\alpha _1\bar{y}_{[srss]}\left[ 1+\log \left( \frac{\bar{x}_{(srss)}}{\bar{X}}\right) \right] ^{\beta _1}\\ T_{v_2}^c&=\alpha _2\bar{y}_{[srss]}\left[ 1+\beta _2 \log \left( \frac{\bar{x}_{(srss)}}{\bar{X}}\right) \right] \\ T_{v_3}^c&=\alpha _3\bar{y}_{[srss]}\left[ 1+\log \left( \frac{\bar{x}^*_{(srss)}}{{\bar{X}}^*}\right) \right] ^{\beta _3} \\ T_{v_4}^c&=\alpha _4\bar{y}_{[srss]}\left[ 1+\beta _4 \log \left( \frac{\bar{x}^*_{(srss)}}{{\bar{X}}^*}\right) \right] \end{aligned}$$

where \(\alpha _i, i=1,2,3,4\) and \(\beta _i\) are suitably chosen scalars, whereas \(\bar{x}^*_{(srss)}=a\bar{x}_{(srss)}+b\) and \({\bar{X}}^*=a{\bar{X}}+b\), provided that \(a(\ne 0),~b\) are either real numbers or functions of known parameters of the auxiliary variable x.
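A minimal sketch of how \(T_{v_1}^c\) and \(T_{v_2}^c\) would be computed from a stratified ranked set sample is given below; the numerical inputs are hypothetical, and in practice \(\alpha _i\) and \(\beta _i\) would be set to their MSE-optimal values obtained in Appendix C.

```python
import numpy as np

def t_v1_combined(ybar_srss, xbar_srss, Xbar, alpha1, beta1):
    """Proposed combined log type estimator T_{v_1}^c."""
    return alpha1 * ybar_srss * (1.0 + np.log(xbar_srss / Xbar)) ** beta1

def t_v2_combined(ybar_srss, xbar_srss, Xbar, alpha2, beta2):
    """Proposed combined log type estimator T_{v_2}^c."""
    return alpha2 * ybar_srss * (1.0 + beta2 * np.log(xbar_srss / Xbar))

# Hypothetical SRSS means with illustrative (not optimal) choices of the scalars
print(t_v1_combined(51.8, 39.5, Xbar=40.0, alpha1=1.0, beta1=-1.0))
print(t_v2_combined(51.8, 39.5, Xbar=40.0, alpha2=1.0, beta2=-1.0))
```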

Theorem 4.1

The biases of the proposed estimators are given to the first order of approximation as

$$\begin{aligned} Bias(T_{v_1}^c)&=\bar{Y }\left[ \alpha _1\left\{ \begin{array}{l} 1+\beta _1\left( \frac{\beta _1}{2}-1\right) V_{20}+\beta _1V_{11} \end{array}\right\} -1\right] \\ Bias(T_{v_2}^c)&=\bar{Y }\left[ \alpha _2\left\{ \begin{array}{l} 1+\beta _2V_{11}-\frac{\beta _2}{2}V_{20} \end{array}\right\} -1\right] \\ Bias(T_{v_3}^c)&=\bar{Y }\left[ \alpha _3\left\{ \begin{array}{l} 1+\beta _3\left( \frac{\beta _3}{2}-1\right) \upsilon ^2V_{20}+\beta _3\upsilon V_{11} \end{array}\right\} -1\right] \\ Bias(T_{v_4}^c)&=\bar{Y }\left[ \alpha _4\left\{ \begin{array}{l} 1+\beta _4\upsilon V_{11}-\frac{\beta _4}{2}\upsilon ^2V_{20} \end{array}\right\} -1\right] \end{aligned}$$

Proof

A précis of the derivations is given in Appendix C. \(\square \)

Theorem 4.2

The MSEs of the proposed estimators are given to the first order of approximation as

$$\begin{aligned} MSE(T_{v_1}^c)&=\bar{Y}^2\left[ \begin{array}{l} 1+\alpha _1^2\left\{ \begin{array}{l} 1+V_{02}+2\beta _1(\beta _1-1)V_{20}+4\beta _1V_{11} \end{array}\right\} \\ -2\alpha _1\left\{ \begin{array}{l} 1+\beta _1\left( \frac{\beta _1}{2}-1\right) V_{20}+\beta _1V_{11} \end{array}\right\} \end{array}\right] \\ MSE(T_{v_2}^c)&=\bar{Y}^2\left[ \begin{array}{l} 1+\alpha _2^2\left\{ \begin{array}{l} 1+V_{02}+\beta _2(\beta _2-1)V_{20}+4\beta _2V_{11} \end{array}\right\} \\ -2\alpha _2\left\{ \begin{array}{l} 1+\beta _2V_{11}-\frac{\beta _2}{2}V_{20} \end{array}\right\} \end{array}\right] \\ MSE(T_{v_3}^c)&=\bar{Y}^2\left[ \begin{array}{l} 1+\alpha _3^2\left\{ \begin{array}{l} 1+V_{02}+2\beta _3(\beta _3-1)\upsilon ^2V_{20}+4\beta _3\upsilon V_{11} \end{array}\right\} \\ -2\alpha _3\left\{ \begin{array}{l} 1+\beta _3\left( \frac{\beta _3}{2}-1\right) \upsilon ^2V_{20}+\beta _3\upsilon V_{11} \end{array}\right\} \end{array}\right] \\ MSE(T_{v_4}^c)&=\bar{Y}^2\left[ \begin{array}{l} 1+\alpha _4^2\left\{ \begin{array}{l} 1+V_{02}+\beta _4(\beta _4-1)\upsilon ^2V_{20}+4\beta _4\upsilon V_{11} \end{array}\right\} \\ -2\alpha _4\left\{ \begin{array}{l} 1+\beta _4\upsilon V_{11}-\frac{\beta _4}{2}\upsilon ^2V_{20} \end{array}\right\} \end{array}\right] \end{aligned}$$

Proof

A précis of the derivations is given in Appendix C. \(\square \)
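For instance, the expansion underlying Theorems 4.1 and 4.2 can be sketched for \(T_{v_2}^c\); the remaining members of the class follow analogously, and the full details are given in Appendix C. Writing \(\bar{y}_{[srss]}=\bar{Y}(1+\varepsilon _0)\), \(\bar{x}_{(srss)}=\bar{X}(1+\varepsilon _1)\) and expanding \(\log (1+\varepsilon _1)\) up to second order terms,

$$\begin{aligned} T_{v_2}^c=\alpha _2\bar{Y}(1+\varepsilon _0)\big [1+\beta _2\log (1+\varepsilon _1)\big ]\approx \alpha _2\bar{Y}\left( 1+\varepsilon _0+\beta _2\varepsilon _1+\beta _2\varepsilon _0\varepsilon _1-\frac{\beta _2}{2}\varepsilon _1^2\right) \end{aligned}$$

Taking expectations and using \(E(\varepsilon _1^2)=V_{20}\) and \(E(\varepsilon _0\varepsilon _1)=V_{11}\) reproduces \(Bias(T_{v_2}^c)\) of Theorem 4.1, while squaring \((T_{v_2}^c-\bar{Y})\), retaining terms up to second order and using \(E(\varepsilon _0^2)=V_{02}\) reproduces \(MSE(T_{v_2}^c)\) of Theorem 4.2.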

Corollary 4.1

The minimum MSEs at the optimum values of \(\alpha _{i}\) and \(\beta _i\) are given as

$$\begin{aligned} minMSE(T_{v_i}^c)=\bar{Y}^2\left( 1-\frac{Q_i^2}{P_i}\right) ;~~i=1,2,3,4 \end{aligned}$$
(4.1)

Proof

A précis of the derivations is given in Appendix C. \(\square \)

4.2 Separate estimators

We propose some efficient separate log type class of estimators under SRSS as

$$\begin{aligned} T_{v_1}^s&=\sum \limits _{h=1}^{L}W_h\alpha _{1_h}\bar{y}_{h[rss]}\left[ 1+\log \left( \frac{\bar{x}_{h(rss)}}{\bar{X}_h}\right) \right] ^{\beta _{1_h}}\\ T_{v_2}^s&=\sum \limits _{h=1}^{L}W_h\alpha _{2_h}\bar{y}_{h[rss]}\left[ 1+\beta _{2_h} \log \left( \frac{\bar{x}_{h(rss)}}{\bar{X}_h}\right) \right] \\ T_{v_3}^s&=\sum \limits _{h=1}^{L}W_h\alpha _{3_h}\bar{y}_{h[rss]}\left[ 1+\log \left( \frac{\bar{x}^*_{h(rss)}}{{\bar{X}}^*_h}\right) \right] ^{\beta _{3_h}}\\ T_{v_4}^s&=\sum \limits _{h=1}^{L}W_h\alpha _{4_h}\bar{y}_{h[rss]}\left[ 1+\beta _{4_h} \log \left( \frac{\bar{x}^*_{h(rss)}}{{\bar{X}}^*_h}\right) \right] \end{aligned}$$

where \(\alpha _{i_h},~i=1,2,3,4\) and \(\beta _{i_h}\) are suitably chosen scalars. Also, \(\bar{x}^*_{h(rss)}=a_h\bar{x}_{h(rss)}+b_h\) and \({\bar{X}}_h^*=a_h{\bar{X}_h}+b_h\), provided that \(a_h(\ne 0),~b_h\) are either real numbers or functions of known parameters of the auxiliary variable \(x_h\) in stratum h.
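A corresponding sketch for the separate class is given below for \(T_{v_2}^s\); the stratum-level inputs are hypothetical, and the scalars \(\alpha _{2_h}\) and \(\beta _{2_h}\) would in practice be set to their optimal values from Appendix C.

```python
import numpy as np

def t_v2_separate(W, ybar_h, xbar_h, Xbar_h, alpha2_h, beta2_h):
    """Proposed separate log type estimator T_{v_2}^s: a stratum-wise log
    adjustment of the RSS means, combined with the stratum weights W_h."""
    W, ybar_h, xbar_h, Xbar_h, alpha2_h, beta2_h = map(
        np.asarray, (W, ybar_h, xbar_h, Xbar_h, alpha2_h, beta2_h))
    return float(np.sum(W * alpha2_h * ybar_h * (1.0 + beta2_h * np.log(xbar_h / Xbar_h))))

# Hypothetical two-stratum inputs
print(t_v2_separate(W=[0.4, 0.6], ybar_h=[48.0, 55.0], xbar_h=[38.0, 41.0],
                    Xbar_h=[39.0, 40.5], alpha2_h=[1.0, 1.0], beta2_h=[-1.0, -1.0]))
```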

Theorem 4.3

The biases of the proposed estimators are given to the first order of approximation as

$$\begin{aligned} Bias(T_{v_1}^s)&=\sum \limits _{h=1}^{L}W_h\bar{Y}_h\left[ \alpha _{1_h}\left\{ \begin{array}{l}1+\beta _{1_h}\left( \frac{\beta _{1_h}}{2}-1\right) U_{1}+\beta _{1_h}U_{10}\end{array}\right\} -1\right] \\ Bias(T_{v_2}^s)&=\sum \limits _{h=1}^{L}W_h\bar{Y}_h\left[ \alpha _{2_h}\left\{ \begin{array}{l} 1+\beta _{2_h}U_{10}-\frac{\beta _{2_h}}{2}U_{1}\end{array}\right\} -1\right] \\ Bias(T_{v_3}^s)&=\sum \limits _{h=1}^{L}W_h\bar{Y}_h\left[ \alpha _{3_h}\left\{ \begin{array}{l} 1+\beta _{3_h}\left( \frac{\beta _{3_h}}{2}-1\right) \upsilon ^2U_{1}+\beta _{3_h}\upsilon U_{10}\end{array}\right\} -1\right] \\ Bias(T_{v_4}^s)&=\sum \limits _{h=1}^{L}W_h\bar{Y}_h\left[ \alpha _{4_h}\left\{ \begin{array}{l} 1+\beta _{4_h}\upsilon U_{10}-\frac{\beta _{4_h}}{2}\upsilon ^2U_{1}\end{array}\right\} -1\right] \end{aligned}$$

Proof

A précis of the derivations is given in Appendix C. \(\square \)

Theorem 4.4

The MSEs of the proposed estimators are given to the first order of approximation as

$$\begin{aligned} MSE(T_{v_1}^s)&=\sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left[ \begin{array}{l} 1+\alpha _{1_h}^2\left\{ \begin{array}{l} 1+U_{0}+2\beta _{1_h}(\beta _{1_h}-1)U_{1}+4\beta _{1_h}U_{10}\end{array}\right\} \\ -2\alpha _{1_h}\left\{ \begin{array}{l} 1+\beta _{1_h}\left( \frac{\beta _{1_h}}{2}-1\right) U_{1}+\beta _{1_h}U_{10}\end{array}\right\} \end{array}\right] \\ MSE(T_{v_2}^s)&=\sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left[ \begin{array}{l} 1+\alpha _{2_h}^2\left\{ \begin{array}{l} 1+U_{0}+\beta _{2_h}(\beta _{2_h}-1)U_{1}+4\beta _{2_h}U_{10}\end{array}\right\} \\ -2\alpha _{2_h}\left\{ \begin{array}{l} 1+\beta _{2_h}U_{10}-\frac{\beta _{2_h}}{2}U_{1}\end{array}\right\} \end{array}\right] \\ MSE(T_{v_3}^s)&=\sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left[ \begin{array}{l} 1+\alpha _{3_h}^2\left\{ \begin{array}{l} 1+U_{0}+2\beta _{3_h}(\beta _{3_h}-1)\upsilon ^2U_{1}+4\beta _{3_h}\upsilon U_{10}\end{array}\right\} \\ -2\alpha _{3_h}\left\{ \begin{array}{l} 1+\beta _{3_h}\left( \frac{\beta _{3_h}}{2}-1\right) \upsilon ^2U_{1}+\beta _{3_h}\upsilon U_{10}\end{array}\right\} \end{array}\right] \\ MSE(T_{v_4}^s)&=\sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left[ \begin{array}{l} 1+\alpha _{4_h}^2\left\{ \begin{array}{l} 1+U_{0}+\beta _{4_h}(\beta _{4_h}-1)\upsilon ^2U_{1}+4\beta _{4_h}\upsilon U_{10}\end{array}\right\} \\ -2\alpha _{4_h}\left\{ \begin{array}{l} 1+\beta _{4_h}\upsilon U_{10}-\frac{\beta _{4_h}}{2}\upsilon ^2U_{1}\end{array}\right\} \end{array}\right] \end{aligned}$$

Proof

A précis of the derivations is given in Appendix C. \(\square \)

Corollary 4.2

The minimum MSEs at the optimum values of \(\alpha _{i_h},~i=1,2,3,4\) and \(\beta _{i_h}\) are given as

$$\begin{aligned} minMSE(T_{v_i}^s)=\sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left( 1-\frac{Q_{i_h}^2}{P_{i_h}}\right) \end{aligned}$$
(4.2)

Proof

A précis of the derivations is given in Appendix C. \(\square \)

5 Theoretical comparison

5.1 Combined estimators

On comparing the minimum MSEs of the proposed estimators \(T_{v_i}^c,~i=1,2,3,4\) given in (4.1) with the MSEs of the existing estimators given in (3.1) and (A.1)-(A.9), we get the following theoretical conditions.

$$\begin{aligned} V(T_m)&>MSE(T_{v_i}^c)\implies \frac{Q_i^2}{P_i}>1-\sum _{h=1}^{L}W_h^2\left( \gamma _h\frac{S_{y_h}^2}{\bar{Y}^2}-D ^2_{y_{h[i]}}\right) \\ MSE(T_r^c)&>MSE(T_{v_i}^c)\implies \frac{Q_i^2}{P_i}>1-V_{02}-V_{20}+2V_{11}\\ MSE(T_{\beta }^c)&>MSE(T_{v_i}^c)\implies \frac{Q_i^2}{P_i}>1-V_{02}+\frac{V_{11}^2}{V_{20}}\\ MSE(T_{mm_i}^c)&>MSE(T_{v_i}^c)\implies \frac{Q_i^2}{P_i}>1-V_{02}-\lambda _i^2V_{20}+2\lambda _iV_{11}\\ MSE(T_{mm}^c)&>MSE(T_{v_i}^c)\implies \frac{Q_i^2}{P_i}>1-(k^*-1)^2-V_{02}-k^{*^2}V_{20}+2k^*V_{11}\\ MSE(T_{sg}^c)&>MSE(T_{v_i}^c)\implies \frac{Q_i^2}{P_i}>\lambda _{s(opt)}\\ MSE(T_{kk}^c)&>MSE(T_{v_i}^c)\implies \frac{Q_i^2}{P_i}>\frac{B^2}{A}\\ MSE(T_{sv}^c)&>MSE(T_{v_i}^c)\implies \frac{Q_i^2}{P_i}>\Lambda _{1(opt)}+\Lambda _{2(opt)}D_1 \\ MSE(T_{ss}^c)&>MSE(T_{v_i}^c)\implies \frac{Q_i^2}{P_i}>\lambda _{1(opt)}E_2+\lambda _{2(opt)}A_2\\ MSE(T_{sk_t}^c)&>MSE(T_{v_i}^c)\implies \frac{Q_i^2}{P_i}>V_{02}+P_tV_{20}^2-2P_t^2V_{11} \end{aligned}$$

If the above conditions hold, then the proposed class of estimators \(T_{v_i}^c,~i=1,2,3,4\) dominates the conventional mean estimator \(T_m\), the classical ratio and regression estimators \(T_r^c\) and \(T_{\beta }^c\), the Shabbir and Gupta [21] type estimator \(T_{sg}^c\), the Koyuncu and Kadilar [12] type estimator \(T_{kk}^c\), the Singh and Vishwakarma [24] type estimator \(T_{sv}^c\), the Singh and Solanki [23] type estimator \(T_{ss}^c\), the Mandowara and Mehta [15] estimators \(T_{mm_i}^c,~i=1,2,3,4\), the Mehta and Mandowara [17] estimator \(T_{mm}^c\) and the Saini and Kumar [18] estimators \(T_{sk_t}^c,~t=1,3\), which provides the theoretical justification for the proposed estimators \(T_{v_i}^c,~i=1,2,3,4\). Reassuringly, these conditions are readily met in practical situations.

5.2 Separate estimators

On comparing the minimum MSEs of the proposed separate estimators \(T_{v_i}^s,~i=1,2,3,4\) given in (4.2) with the MSEs of the existing separate estimators given in (3.1) and (B.10)-(B.18), we get the following theoretical conditions.

$$\begin{aligned} MSE(T_{v_i}^s)< & {} V(T_m)\nonumber \\\implies \sum _{h=1}^{L}W_h^2\bar{Y}_h^2\left( 1-\frac{Q_{i_h}^2}{P_{i_h}}\right)< & {} \sum _{h=1}^{L}W_h^2\bar{Y}^2\left( \gamma _h\frac{S_{y_h}^2}{\bar{Y}^2}-D ^2_{y_{h[i]}}\right) \\ MSE(T_{v_i}^s)< & {} MSE(T_r^s)\nonumber \\\implies \sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left( 1-\frac{Q_{i_h}^2}{P_{i_h}}\right)< & {} \sum \limits _{h=1}^{L}W_h^2\bar{Y}^2_h\bigg [U_0+U_1-2U_{10}\bigg ]\\ MSE(T_{v_i}^s)< & {} MSE(T_{\beta }^s)\nonumber \\\implies \sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left( 1-\frac{Q_{i_h}^2}{P_{i_h}}\right)< & {} \sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left[ U_0-\frac{U_{10}^2}{U_1}\right] \\ MSE(T_{v_i}^s)< & {} MSE(T_{mm_i}^s)\nonumber \\\implies \sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left( 1-\frac{Q_{i_h}^2}{P_{i_h}}\right)< & {} \sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\bigg [U_0+\lambda _{i_h}^2U_1-2\lambda _{i_h}U_{10}\bigg ]\\ MSE(T_{v_i}^s)< & {} MSE(T_{mm}^s)\nonumber \\\implies \sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left( 1-\frac{Q_{i_h}^2}{P_{i_h}}\right)< & {} \sum \limits _{h=1}^{L}W_h^2\bar{Y}^2_h\bigg [(k^*_h-1)^2+\left\{ \begin{array}{l}U_0+k_h^{*^2}U_1-2k^*_h U_{10}\end{array}\right\} \bigg ]\\ MSE(T_{v_i}^s)< & {} MSE(T_{sg}^s)\nonumber \\\implies \sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left( 1-\frac{Q_{i_h}^2}{P_{i_h}}\right)< & {} \sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left[ 1-\lambda _{s_h(opt)}\right] \\ MSE(T_{v_i}^s)< & {} MSE(T_{kk}^s)\nonumber \\\implies \sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left( 1-\frac{Q_{i_h}^2}{P_{i_h}}\right)< & {} \sum \limits _{h=1}^{L}W_h^2\bar{Y}^2_h\left( 1-\frac{A_h^2}{4B_h}\right) \\ MSE(T_{v_i}^s)< & {} MSE(T_{sv}^s)\nonumber \\\implies \sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left( 1-\frac{Q_{i_h}^2}{P_{i_h}}\right)< & {} \sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left[ 1-\Lambda _{1_h(opt)}-\Lambda _{2_h(opt)}D_{1_h}\right] \\ MSE(T_{v_i}^s)< & {} MSE(T_{ss}^s)\nonumber \\\implies \sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left( 1-\frac{Q_{i_h}^2}{P_{i_h}}\right)< & {} \sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left[ 1-\lambda _{1_h(opt)}E_{2_h}-\lambda _{2_h(opt)}A_{2_h}\right] \\ MSE(T_{v_i}^s)< & {} MSE(T_{sk_t}^s)\nonumber \\\implies \sum \limits _{h=1}^{L}W_h^2\bar{Y}_h^2\left( 1-\frac{Q_{i_h}^2}{P_{i_h}}\right)< & {} \sum _{h=1}^{L}W_h^2\bar{Y }_h^2\left[ U_0+P_tU_1^2-2P_tU_{10}\right] \end{aligned}$$

Again, it follows from the above conditions that the proposed class of estimators \(T_{v_i}^s,~i=1,2,3,4\) dominates the conventional mean estimator \(T_m\), the classical ratio and regression estimators \(T_r^s\) and \(T_{\beta }^s\), the Shabbir and Gupta [21] type estimator \(T_{sg}^s\), the Koyuncu and Kadilar [12] type estimator \(T_{kk}^s\), the Singh and Vishwakarma [24] type estimator \(T_{sv}^s\), the Singh and Solanki [23] type estimator \(T_{ss}^s\), the Mandowara and Mehta [15] estimators \(T_{mm_i}^s,~i=1,2,3,4\), the Mehta and Mandowara [17] estimator \(T_{mm}^s\), and the Saini and Kumar [18] estimators \(T_{sk_t}^s\), which establishes the theoretical justification for the proposed separate estimators \(T_{v_i}^s,~i=1,2,3,4\). These conditions are also readily met in practical situations.

5.3 Comparison of proposed combined and separate estimators

On comparing the minimum MSEs of the proposed combined and separate classes of estimators \(T_{v_i}^c,~i=1,2,3,4\) and \(T_{v_i}^s\), we get

$$\begin{aligned} minMSE(T_{v_i}^c)-minMSE(T_{v_i}^s)=\sum \limits _{h=1}^{L}\left[ (\bar{Y}^2-W_h^2\bar{Y}^2_h)-\left( \bar{Y }^2\frac{Q_i^2}{P_i}-W_h^2\bar{Y}^2_h\frac{Q_{i_h}^2}{P_{i_h}}\right) \right] \end{aligned}$$
(5.1)

If the proposed class of estimators is reasonable and the relationship between the auxiliary variable \(x_{(i)}\) and the study variable \(y_{[i]}\) within each stratum is a straight line passing through the origin, then the last term of (5.1) is generally small and decreases further.

Also, unless \(R_h\) is invariant from stratum to stratum, the separate estimators are likely to be more efficient when the sample in each stratum is large enough that the approximate formula for \(MSE(T_{v_i}^s),~i=1,2,3,4\) is valid and the cumulative bias affecting the proposed estimators is negligible, whereas the proposed combined estimators are preferable when only a small sample is available in each stratum (see Cochran [8]).

6 Simulation study

In order to enhance the theoretical justification, following Singh and Horn [22] and motivated by Bhushan and Kumar [2, 3] and Kumar et al. [13], we conducted a simulation study over some artificially generated symmetric (viz. Normal, Uniform, Logistic, and t) and asymmetric (viz. F, \(\chi ^2\), \(Beta-I\), Log-normal, Exponential, Gamma, and Weibull) populations of size \(N=1200\) units by using the models given as

$$\begin{aligned} y_i&=2.8+\sqrt{(1-\rho _{xy}^2)}~y_i^*+\rho _{xy}\left( \frac{S_y}{S_x}\right) x_i^*\\ x_i&=2.4+x_i^* \end{aligned}$$

where \(x_i^*\) and \(y_i^*\) are independent variates from the respective parent distributions and the correlation coefficient is reasonably chosen as \(\rho _{x_hy_h}=0.7\), since it is neither very high nor very low. We then stratify the population into three equal, mutually exclusive and exhaustive strata and draw a ranked set sample of size 9 units, with set size 3 and 3 cycles, from each stratum by using the sampling methodology discussed in the earlier sections. Using 10,000 iterations, the percent relative efficiency (PRE) of the proposed class of estimators with respect to the conventional mean estimator is computed as

$$\begin{aligned} PRE=\frac{MSE(T_{m})}{MSE(T)}\times 100 \end{aligned}$$

where T denotes the combined or separate estimator under comparison. The results of the simulation study, which reveal the superiority of the proposed estimators over the existing estimators, are reported in Tables 1 and 2 in terms of PRE for the chosen value of the correlation coefficient \(\rho _{xy}=0.7\).
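The sketch below outlines one way such a Monte Carlo comparison can be set up in Python; for brevity it generates only the normal population, forms the three strata by ordering on x (an assumption, since the stratification rule is not spelled out here), and reports the PRE of the classical combined ratio estimator \(T_r^c\) with respect to \(T_m\), whereas the full study covers all listed distributions and all estimators of Tables 1 and 2.

```python
import numpy as np

rng = np.random.default_rng(2024)

def generate_population(N=1200, rho=0.7):
    """Population generated from the models of this section with standard
    normal parent variates x* and y*."""
    x_star = rng.normal(size=N)
    y_star = rng.normal(size=N)
    sy, sx = y_star.std(ddof=1), x_star.std(ddof=1)
    y = 2.8 + np.sqrt(1 - rho**2) * y_star + rho * (sy / sx) * x_star
    x = 2.4 + x_star
    return y, x

def rss_means(y_h, x_h, m=3, r=3):
    """Ranked set sample means of y and x within one stratum (ranking on x)."""
    ys, xs = [], []
    for _ in range(r):
        for i in range(m):
            idx = rng.choice(len(y_h), size=m, replace=False)
            ranked = idx[np.argsort(x_h[idx])]
            ys.append(y_h[ranked[i]]); xs.append(x_h[ranked[i]])
    return np.mean(ys), np.mean(xs)

def simulate_pre(n_iter=2000):
    y, x = generate_population()
    strata = np.array_split(np.argsort(x), 3)          # three equal strata (assumed rule)
    W = np.array([len(s) for s in strata]) / len(y)
    Ybar, Xbar = y.mean(), x.mean()
    sq_err_m = sq_err_r = 0.0
    for _ in range(n_iter):
        ybar = xbar = 0.0
        for h, s in enumerate(strata):
            my, mx = rss_means(y[s], x[s])
            ybar += W[h] * my
            xbar += W[h] * mx
        T_m = ybar                                      # conventional SRSS mean estimator
        T_r = ybar / xbar * Xbar                        # classical combined ratio estimator
        sq_err_m += (T_m - Ybar) ** 2
        sq_err_r += (T_r - Ybar) ** 2
    return 100.0 * sq_err_m / sq_err_r                  # PRE = MSE(T_m)/MSE(T_r^c) x 100

print(simulate_pre())
```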

It follows from the perusal of the simulation results summarized in Tables 1 and 2 that:

  (i) The proposed combined estimators \(T_{v_i}^c,~i=1,3\) dominate the ratio and regression estimators \(T_r^c\) and \(T_{\beta }^c\), Shabbir and Gupta [21] type estimator \(T_{sg}^c\), Koyuncu and Kadilar [12] type estimator \(T_{kk}^c\), Singh and Vishwakarma [24] type estimator \(T_{sv}^c\), Singh and Solanki [23] type estimator \(T_{ss}^c\), Mandowara and Mehta [15] estimators \(T_{mm_i}^c,~i=1,2,3,4\), Mehta and Mandowara [17] estimator \(T_{mm}^c\) and Saini and Kumar [18] estimators \(T_{sk_t}^c,~t=1,3\) in each population.

  (ii) The proposed combined estimators \(T_{v_i}^c,~i=2,4\) are less efficient than the Singh and Solanki [23] type estimator \(T_{ss}^c\), but more efficient than the ratio and regression estimators \(T_r^c\) and \(T_{\beta }^c\), Shabbir and Gupta [21] type estimator \(T_{sg}^c\), Koyuncu and Kadilar [12] type estimator \(T_{kk}^c\), Singh and Vishwakarma [24] type estimator \(T_{sv}^c\), Mandowara and Mehta [15] estimators \(T_{mm_i}^c,~i=1,2,3,4\), Mehta and Mandowara [17] estimator \(T_{mm}^c\) and Saini and Kumar [18] estimators \(T_{sk_t}^c,~t=1,3\) in each population.

  (iii) The proposed separate estimators \(T_{v_i}^s,~i=1,3\) dominate the ratio and regression estimators \(T_r^s\) and \(T_{\beta }^s\), Shabbir and Gupta [21] type estimator \(T_{sg}^s\), Koyuncu and Kadilar [12] type estimator \(T_{kk}^s\), Singh and Vishwakarma [24] type estimator \(T_{sv}^s\), Singh and Solanki [23] type estimator \(T_{ss}^s\), Mandowara and Mehta [15] estimators \(T_{mm_i}^s,~i=1,2,3,4\), Mehta and Mandowara [17] estimator \(T_{mm}^s\) and Saini and Kumar [18] estimators \(T_{sk_t}^s,~t=1,3\) in each population.

  (iv) The proposed separate estimators \(T_{v_i}^s,~i=2,4\) are less efficient than the Singh and Solanki [23] type estimator \(T_{ss}^s\), but more efficient than the ratio and regression estimators \(T_r^s\) and \(T_{\beta }^s\), Shabbir and Gupta [21] type estimator \(T_{sg}^s\), Koyuncu and Kadilar [12] type estimator \(T_{kk}^s\), Singh and Vishwakarma [24] type estimator \(T_{sv}^s\), Mandowara and Mehta [15] estimators \(T_{mm_i}^s,~i=1,2,3,4\), Mehta and Mandowara [17] estimator \(T_{mm}^s\) and Saini and Kumar [18] estimators \(T_{sk_t}^s,~t=1,3\) in each population.

  (v) It can also be seen that the proposed combined and separate estimators \(T_{v_i}^c,~i=1,3\) and \(T_{v_i}^s,~i=1,3\) are the most efficient among the proposed classes of estimators.

Table 1 PREs of proposed combined estimators w.r.t. conventional mean estimator \({T}_m^c\) for different distributions at \(\rho _{x_hy_h}=0.7\)
Table 2 PREs of proposed separate estimators w.r.t. conventional mean estimator \({T}_m^s\) for different distributions at \(\rho _{x_hy_h}=0.7\)

7 Concluding remarks

This paper has considered some combined and separate log type classes of estimators under SRSS along with their properties. The theoretical justification of the proposed combined and separate classes of estimators has been provided. In order to enhance the credibility of the theoretical justification, a simulation study has been performed over some artificially generated symmetric (viz. Normal, Uniform, Logistic, and t) and asymmetric (viz. F, \(\chi ^2\), \(Beta-I\), Log-normal, Exponential, Gamma, and Weibull) populations. The noteworthy observations are listed below:

  i. The simulation results reported in Tables 1 and 2 show that the proposed combined and separate estimators \(T_{v_i}^c,~i=1,3\) and \(T_{v_i}^s,~i=1,3\) dominate the combined and separate versions of the ratio and regression type estimators, the Mandowara and Mehta [15] estimators, the Mehta and Mandowara [17] estimator, the Shabbir and Gupta [21] type estimator, the Koyuncu and Kadilar [12] type estimator, the Singh and Vishwakarma [24] type estimator, the Singh and Solanki [23] type estimator and the Saini and Kumar [18] estimator, as well as the proposed classes of estimators \(T_{v_i}^c,~i=2,4\) and \(T_{v_i}^s,~i=2,4\).

  ii. The proposed combined and separate estimators \(T_{v_i}^c,~i=2,4\) and \(T_{v_i}^s,~i=2,4\) are less efficient than the combined and separate versions of the Singh and Solanki [23] type estimator but perform better than the other existing estimators of this study.

  iii. It can also be seen that the proposed combined estimators \(T_{v_i}^c,~i=1,2,3,4\) perform better than the proposed separate estimators \(T_{v_i}^s\) in each population.

  iv. Thus, the proposed combined and separate classes of estimators \(T_{v_i}^c,~i=1,3\) and \(T_{v_i}^s,~i=1,3\) are recommended for use in practice.

In future studies, the proposed estimators may be examined under stratified double RSS; for more details, see Khan et al. [11].