1 INTRODUCTION

In sample surveys, many different types of estimators have been proposed for estimating the population mean. These estimators are usually proposed under the assumption that complete information on all variables is available. However, this assumption may fail for various reasons. Therefore, many researchers have proposed estimators for the non-response situation in recent years.

First, we review the main estimators of the population mean in simple random sampling (SRS) when complete information is available.

The classical ratio estimator, the classical regression estimator, and the pioneer exponential estimator were proposed by Cochran (1940, 1977) and Bahl and Tuteja (1991), respectively. These estimators are given, respectively, by

$$t_{1}=\bar{X}\frac{\bar{y}}{\bar{x}},$$
(1)
$$t_{2}=\bar{y}+b\left(\bar{X}-\bar{x}\right),$$
(2)
$$t_{3}=\bar{y}\exp\left(\frac{\bar{X}-\bar{x}}{\bar{X}+\bar{x}}\right),$$
(3)

where \(\bar{x}\) and \(\bar{X}\) denote the sample mean and the population mean of the auxiliary variable, respectively, and \(\bar{y}\) is the sample mean of the study variable. In (2), \(b\) denotes the sample regression coefficient in SRS.

Following Bahl and Tuteja (1991), many estimators taking advantage of the exponential function have been proposed. One of these estimators was proposed by Singh et al. (2016) as

$$t_{4}=\bar{y}\exp\left(\frac{\left(\theta-1\right)\left(\bar{X}-\bar{x}\right)}{\left(\theta+1\right)\left(\bar{X}+\bar{x}\right)}\right),$$
(4)

The MSE equations of the estimators in (1)–(4), up to the first order of approximation, are given by

$$MSE\left(t_{1}\right)=\bar{Y}^{2}\lambda\left(C_{y}^{2}-2C_{xy}+C_{x}^{2}\right),$$
(5)
$$MSE\left(t_{2}\right)=MSE_{\textrm{min}}\left(t_{4}\right)=\bar{Y}^{2}\lambda C_{y}^{2}\left(1-\rho_{xy}^{2}\right),$$
(6)
$$MSE\left(t_{3}\right)=\bar{Y}^{2}\lambda\left(C_{y}^{2}-C_{xy}+\frac{C_{x}^{2}}{4}\right),$$
(7)

where \(\lambda=\frac{1-f}{n},f=\frac{n}{N},C_{x}^{2}=\frac{S_{x}^{2}}{\bar{X}^{2}},C_{y}^{2}=\frac{S_{y}^{2}}{\bar{Y}^{2}},C_{xy}=\rho_{xy}C_{x}C_{y}\), and \(\bar{Y}\) denotes the population mean of the study variable. In (6), \(\rho_{xy}\) denotes the population correlation coefficient between the study and auxiliary variables.
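As an illustration, the following minimal Python sketch evaluates the theoretical MSEs in (5)–(7) from given population quantities; the function and argument names are ours, and the inputs in the example call are assumed values, not those of any particular data set.

```python
def classical_mses(N, n, Y_bar, Cy, Cx, rho):
    """First-order MSEs (5)-(7); note that (6) is also the minimum MSE of t4."""
    lam = (1 - n / N) / n          # lambda = (1 - f) / n with f = n / N
    Cxy = rho * Cx * Cy
    mse_t1 = Y_bar**2 * lam * (Cy**2 - 2 * Cxy + Cx**2)     # ratio estimator (5)
    mse_t2 = Y_bar**2 * lam * Cy**2 * (1 - rho**2)          # regression / optimal t4 (6)
    mse_t3 = Y_bar**2 * lam * (Cy**2 - Cxy + Cx**2 / 4)     # exponential estimator (7)
    return mse_t1, mse_t2, mse_t3

# example with assumed population values
print(classical_mses(N=500, n=100, Y_bar=50.0, Cy=0.8, Cx=0.6, rho=0.7))
```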

However, these estimators are not appropriate when the information on some variables is incomplete. For this reason, Hansen and Hurwitz (1946) introduced a sub-sampling technique to cope with this problem.

In this technique, the population consists of \(N\) units \(S=(S_{1},S_{2},\ldots,S_{N})\) divided into two parts: \(N_{1}\) responding units and \(N_{2}\) non-responding units. Suppose that a sample of size \(n\) is drawn from the population by simple random sampling without replacement (SRSWOR). Accordingly, the \(n\) units are also divided into two parts: \(n_{1}\) responding units and \(n_{2}\) non-responding units \(\left(n_{2}=n-n_{1}\right)\). In addition, a sub-sample of size \(r=n_{2}/g\) is drawn from the \(n_{2}\) non-responding units by making an extra effort. Hence, the population mean can be estimated using the \(\left(n_{1}+r\right)\) units instead of the \(n\) units in this sub-sampling technique. Note that \(g\ \left(g>1\right)\) is the inverse sampling rate at the second phase.

Hansen and Hurwitz (1946) proposed the following unbiased estimator of the population mean under non-response, using the \(\left(n_{1}+r\right)\) units:

$$t_{5}=w_{1}\bar{y}_{1}+w_{2}\bar{y}_{2(r)},$$
(8)

where \(\bar{y}_{1}\) and \(\bar{y}_{2(r)}\) denote the sample means of the study variable based on the \(n_{1}\) and \(r\) units, respectively, and \(w_{1}=n_{1}/n\) and \(w_{2}=n_{2}/n\) are the responding and non-responding proportions of the sample, respectively.

The variance of the unbiased estimator is given as

$$V\left(t_{5}\right)=\bar{Y}^{2}\left(\lambda C_{y}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{y(2)}^{2}\right),$$
(9)

where \(C_{y(2)}^{2}=S_{y(2)}^{2}/\bar{Y}^{2}\) is the square of the population coefficient of variation of the study variable for the \(N_{2}\) non-responding units and \(W_{2}=N_{2}/N\) is the non-response proportion of the population.
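A minimal sketch of the Hansen and Hurwitz (1946) sub-sampling estimator in (8) and its variance in (9) is given below; the re-contacted sub-sample is drawn here by simple random sampling, and the function names and inputs are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hansen_hurwitz(y1, y2_full, g):
    """t5 in (8): y1 holds the n1 responding values, y2_full the n2 initially
    non-responding values, of which a sub-sample of size r = n2 / g is re-contacted."""
    n1, n2 = len(y1), len(y2_full)
    n = n1 + n2
    r = max(1, round(n2 / g))
    y2_r = rng.choice(y2_full, size=r, replace=False)   # extra-effort sub-sample
    return (n1 / n) * np.mean(y1) + (n2 / n) * np.mean(y2_r)

def var_t5(N, n, g, W2, Y_bar, Cy, Cy2):
    """Variance (9) of t5; Cy2 is the coefficient of variation of the non-response group."""
    lam = (1 - n / N) / n
    return Y_bar**2 * (lam * Cy**2 + W2 * (g - 1) / n * Cy2**2)
```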

After Hansen and Hurwitz (1946), many studies have proposed estimators of the population mean under two well-known non-response cases. Note that the population mean of the auxiliary variable \(\left(\bar{X}\right)\) is assumed known in both cases in these studies.

In Case I, non-response occurs only on the study variable, and the main estimators for this case are as follows.

Under Case I, Rao (1986) proposed the classical ratio and regression estimators, and Singh et al. (2009) introduced the first exponential-type estimator, given respectively by

$$t_{6}=\bar{X}\frac{\bar{y}^{*}}{\bar{x}},$$
(10)
$$t_{7}=\bar{y}^{*}+b^{*}\left(\bar{X}-\bar{x}\right),$$
(11)
$$t_{8}=\bar{y}^{*}\textrm{exp}\left(\frac{\bar{X}-\bar{x}}{\bar{X}+\bar{x}}\right).$$
(12)

Here, \(\bar{y}^{*}\) denotes the sample mean of the study variable in the presence of non-response and \(b^{*}=S_{xy}^{*}/S_{x}^{*^{2}}\).

The MSE equations of the \(t_{6}\), \(t_{7}\) and \(t_{8}\) estimators in (10)–(12) are given by

$$MSE\left(t_{6}\right)=\bar{Y}^{2}\left(\lambda C_{y}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{y(2)}^{2}+\lambda\left(C_{x}^{2}-2C_{yx}\right)\right),$$
(13)
$$MSE\left(t_{7}\right)=\bar{Y}^{2}\left(\lambda C_{y}^{2}\left(1-\rho_{xy}^{2}\right)+\frac{W_{2}\left(g-1\right)}{n}C_{y(2)}^{2}\right),$$
(14)
$$MSE\left(t_{8}\right)=\bar{Y}^{2}\left(\lambda C_{y}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{y(2)}^{2}+\lambda\left(\frac{C_{x}^{2}}{4}-C_{yx}\right)\right).$$
(15)
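For reference, the Case I MSEs in (13)–(15) can be coded directly; the sketch below uses our own function and argument names and assumes the population quantities are supplied by the user.

```python
def case1_mses(N, n, g, W2, Y_bar, Cy, Cx, rho, Cy2):
    """First-order MSEs (13)-(15) of t6, t7 and t8 under Case I (non-response on y only)."""
    lam = (1 - n / N) / n
    nr = W2 * (g - 1) / n * Cy2**2          # non-response term common to (13)-(15)
    Cyx = rho * Cy * Cx
    mse_t6 = Y_bar**2 * (lam * Cy**2 + nr + lam * (Cx**2 - 2 * Cyx))
    mse_t7 = Y_bar**2 * (lam * Cy**2 * (1 - rho**2) + nr)
    mse_t8 = Y_bar**2 * (lam * Cy**2 + nr + lam * (Cx**2 / 4 - Cyx))
    return mse_t6, mse_t7, mse_t8
```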

In Case II, non-response occurs on both the study variable and the auxiliary variable, and the main estimators for this case are as follows.

Under Case II, Cochran (1977) proposed the classical ratio and regression estimators and Singh et al. (2009) introduced the exponential-type estimator as follows:

$$t_{9}=\frac{\bar{y}^{*}}{\bar{x}^{*}}\bar{X},$$
(16)
$$t_{10}=\bar{y}^{*}+b^{*}\left(\bar{X}-\bar{x}^{*}\right),$$
(17)
$$t_{11}=\bar{y}^{*}\exp\left(\frac{\bar{X}-\bar{x}^{*}}{\bar{X}+\bar{x}^{*}}\right),$$
(18)

respectively. Note that \(\bar{x}^{*}\) is the sample mean of the auxiliary variable in the case of non-response. The MSE equations of the \(t_{9}\), \(t_{10}\), and \(t_{11}\) estimators in (16)–(18) are given by

$$MSE\left(t_{9}\right)=\bar{Y}^{2}\left(\lambda\left(C_{y}^{2}-2C_{yx}+C_{x}^{2}\right)+\frac{W_{2}\left(g-1\right)}{n}\left(C_{y(2)}^{2}-2C_{yx(2)}+C_{x(2)}^{2}\right)\right),$$
(19)
$$MSE\left(t_{10}\right)=\bar{Y}^{2}\left(\lambda C_{y}^{2}\left(1-\rho_{xy}^{2}\right)+\frac{W_{2}\left(g-1\right)}{n}\left(C_{y(2)}^{2}+\rho_{xy}^{2}\frac{C_{y}^{2}}{C_{x}^{2}}C_{x(2)}^{2}-2\rho_{xy}\frac{C_{y}}{C_{x}}C_{yx(2)}\right)\right),$$
(20)
$$MSE\left(t_{11}\right)=\bar{Y}^{2}\left(\lambda C_{y}^{2}+\lambda\frac{C_{x}^{2}}{4}-\lambda C_{yx}+\frac{W_{2}\left(g-1\right)}{n}\left(C_{y(2)}^{2}-C_{yx(2)}+\frac{C_{x(2)}^{2}}{4}\right)\right),$$
(21)

where \(C_{x(2)}^{2}=\frac{S_{x(2)}^{2}}{\bar{X}^{2}}\) and \(\rho_{yx(2)}=\frac{C_{yx(2)}}{C_{y(2)}C_{x(2)}}\) is the population correlation coefficient between the study and auxiliary variables for the non-response group.
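Similarly, the Case II MSEs in (19)–(21) can be evaluated as in the following sketch; here \(C_{yx(2)}=\rho_{yx(2)}C_{y(2)}C_{x(2)}\) is supplied as an input, and the names are our own assumptions.

```python
def case2_mses(N, n, g, W2, Y_bar, Cy, Cx, rho, Cy2, Cx2, Cyx2):
    """First-order MSEs (19)-(21) of t9, t10 and t11 under Case II
    (non-response on both y and x); Cyx2 = rho_yx(2) * Cy(2) * Cx(2)."""
    lam = (1 - n / N) / n
    w = W2 * (g - 1) / n
    Cyx = rho * Cy * Cx
    mse_t9 = Y_bar**2 * (lam * (Cy**2 - 2 * Cyx + Cx**2)
                         + w * (Cy2**2 - 2 * Cyx2 + Cx2**2))
    mse_t10 = Y_bar**2 * (lam * Cy**2 * (1 - rho**2)
                          + w * (Cy2**2 + (rho * Cy / Cx)**2 * Cx2**2
                                 - 2 * rho * (Cy / Cx) * Cyx2))
    mse_t11 = Y_bar**2 * (lam * (Cy**2 + Cx**2 / 4 - Cyx)
                          + w * (Cy2**2 - Cyx2 + Cx2**2 / 4))
    return mse_t9, mse_t10, mse_t11
```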

In this study, we adapt the estimator in (4) to the non-response situation under the two mentioned cases in Section 2. In Sections 3 and 4, theoretical and numerical comparisons are made using the MSE equations of the proposed estimators and of the other estimators in the literature, respectively, and Section 5 concludes the study.

2 THE ADAPTED ESTIMATORS

Following Singh et al. (2016), we adapt the exponential estimator in (4) to the non-response situation under Case I and Case II. Note that the population mean of the auxiliary variable is assumed known in both cases.

Case I. The first proposed estimator \(t_{P1}\) is as follows:

$$t_{P1}=\bar{y}^{*}\exp\left(\frac{\left(\theta_{1}-1\right)\left(\bar{X}-\bar{x}\right)}{\left(\theta_{1}+1\right)\left(\bar{X}+\bar{x}\right)}\right),$$
(22)

where \(\theta_{1}\) is a suitably chosen constant that minimizes the MSE. To obtain the bias and MSE of \(t_{P1}\), we write \(\bar{y}^{*}=\bar{Y}\left(e_{0}^{*}+1\right)\) and \(\bar{x}=\bar{X}\left(e_{1}+1\right)\). Then, \(E\left(e_{0}^{*}\right)=E\left(e_{1}\right)=0,\) \(E\left(e_{0}^{*^{2}}\right)=\lambda C_{y}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{y(2)}^{2},\) \(E\left(e_{1}^{2}\right)=\lambda C_{x}^{2},\) and \(E\left(e_{1}e_{0}^{*}\right)=\lambda\rho_{xy}C_{x}C_{y}\).

Now, expressing the adapted estimator \(t_{P1}\) under this case, in terms of \(e_{0}^{*}\) and \(e_{1}\), we have

$$t_{P1}=\bar{Y}\left(e_{0}^{*}+1\right)\exp\left(\frac{\left(\theta_{1}-1\right)\left(\bar{X}-\bar{X}e_{1}-\bar{X}\right)}{\left(\theta_{1}+1\right)\left(\bar{X}+\bar{X}e_{1}+\bar{X}\right)}\right)$$
$${}=\bar{Y}\left(e_{0}^{*}+1\right)\exp\left(k_{1}e_{1}\left(2+e_{1}\right)^{-1}\right),$$
(23)

where \(k_{1}=\frac{\left(1-\theta_{1}\right)}{\left(1+\theta_{1}\right)}\) for Case I. Expanding (23) to the first order of approximation and neglecting the terms involving powers of \(e_{0}^{*}\) and \(e_{1}\) greater than two, we have

$$\left(t_{P1}-\bar{Y}\right)=\bar{Y}\left(e_{0}^{*}+\frac{k_{1}}{2}e_{1}-\frac{k_{1}}{4}e_{1}^{2}+\frac{k_{1}^{2}}{8}e_{1}^{2}+\frac{k_{1}}{2}e_{0}^{*}e_{1}\right).$$
(24)

We take the expectation on both sides of (24) and we get the \(B\left(t_{P1}\right)\) as

$$B\left(t_{P1}\right)=\bar{Y}\lambda C_{x}^{2}\left(\left(\frac{k_{1}^{2}}{8}-\frac{k_{1}}{4}\right)+\frac{k_{1}}{2}\rho_{yx}\frac{C_{y}}{C_{x}}\right).$$
(25)

To obtain \(MSE\left(t_{P1}\right)\), we square both sides of (24), take expectations, and obtain

$$MSE\left(t_{P1}\right)=\bar{Y}^{2}\left(\lambda C_{y}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{y(2)}^{2}+\lambda\left(\frac{k_{1}^{2}}{4}C_{x}^{2}+k_{1}C_{yx}\right)\right).$$
(26)

The optimal value of \(k_{1}\) is obtained as \(k_{1}^{*}=-2\rho_{xy}{C_{y}}/{C_{x}},\) and we get the minimum \(MSE\left(t_{P1}\right)\) for the Case I using the value of \(k_{1}^{*}\) as

$$MSE_{\textrm{min}}\left(t_{P1}\right)=\bar{Y}^{2}\left(\lambda C_{y}^{2}\left(1-\rho_{xy}^{2}\right)+\frac{W_{2}\left(g-1\right)}{n}C_{y(2)}^{2}\right);$$
(27)

that is, it is equal to the MSE of the classical regression estimator \(t_{7}\) in (14).
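To make the role of \(k_{1}\) (and hence \(\theta_{1}\)) concrete, the sketch below evaluates (26) over a grid of \(k_{1}\) values and checks that the numerical minimizer agrees with \(k_{1}^{*}=-2\rho_{xy}C_{y}/C_{x}\) and with (27); the input values are assumed for illustration only.

```python
import numpy as np

def mse_tp1(k1, N, n, g, W2, Y_bar, Cy, Cx, rho, Cy2):
    """MSE (26) of t_P1 as a function of k1 = (1 - theta1) / (1 + theta1)."""
    lam = (1 - n / N) / n
    return Y_bar**2 * (lam * Cy**2 + W2 * (g - 1) / n * Cy2**2
                       + lam * (k1**2 / 4 * Cx**2 + k1 * rho * Cy * Cx))

pars = dict(N=500, n=100, g=2, W2=0.25, Y_bar=50.0, Cy=0.8, Cx=0.6, rho=0.7, Cy2=0.9)
k_star = -2 * pars["rho"] * pars["Cy"] / pars["Cx"]      # optimal k1
theta_star = (1 - k_star) / (1 + k_star)                 # theta1 recovered from k1
grid = np.linspace(k_star - 2.0, k_star + 2.0, 2001)
k_numeric = grid[np.argmin([mse_tp1(k, **pars) for k in grid])]
print(k_star, k_numeric, theta_star, mse_tp1(k_star, **pars))
```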

Case II. The second proposed estimator \(t_{P2}\) is as follows:

$$t_{P2}=\bar{y}^{*}\exp\left(\frac{\left(\theta_{2}-1\right)\left(\bar{X}-\bar{x}^{*}\right)}{\left(\theta_{2}+1\right)\left(\bar{X}+\bar{x}^{*}\right)}\right).$$
(28)

To obtain the bias and MSE of the second proposed estimator \(t_{P2}\), we consider

$$\bar{x}^{*}=\bar{X}\left(e_{1}^{*}+1\right),\bar{y}^{*}=\bar{Y}\left(e_{0}^{*}+1\right).$$

Then, \(E\left(e_{0}^{*}\right)=0,\) \(E\left(e_{0}^{*^{2}}\right)=\lambda C_{y}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{y(2)}^{2}\), \(E\left(e_{1}^{*}\right)=0\), \(E\left(e_{1}^{*^{2}}\right)=\lambda C_{x}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{x(2)}^{2}\) and \(E\left(e_{0}^{*}e_{1}^{*}\right)=\lambda\rho_{xy}C_{x}C_{y}+\frac{W_{2}\left(g-1\right)}{n}C_{yx(2)}\).

Now, expressing \(t_{P2}\) under this case, in terms of \(e_{0}^{*}\) and \(e_{1}^{*}\), we have

$$t_{P2}=\bar{Y}\left(e_{0}^{*}+1\right)\exp\left(\frac{\left(\theta_{2}-1\right)\left(\bar{X}-\bar{X}e_{1}^{*}-\bar{X}\right)}{\left(\theta_{2}+1\right)\left(\bar{X}+\bar{X}e_{1}^{*}+\bar{X}\right)}\right)=\bar{Y}\left(e_{0}^{*}+1\right)\exp\left(k_{2}e_{1}^{*}\left(2+e_{1}^{*}\right)^{-1}\right),$$
(29)

where \(k_{2}=\frac{\left(1-\theta_{2}\right)}{\left(1+\theta_{2}\right)}\) for Case II. Expanding (29) to the first order of approximation and neglecting the terms involving powers of \(e_{0}^{*}\) and \(e_{1}^{*}\) greater than two, we have

$$\left(t_{P2}-\bar{Y}\right)=\bar{Y}\left(e_{0}^{*}+\frac{k_{2}}{2}e_{1}^{*}-\frac{k_{2}}{4}e_{1}^{*^{2}}+\frac{k_{2}^{2}}{8}e_{1}^{*^{2}}+\frac{k_{2}}{2}e_{0}^{*}e_{1}^{*}\right).$$
(30)

To obtain the bias and MSE of the second proposed estimator, we follow the same procedure as for the first proposed estimator and obtain \(B\left(t_{P2}\right)\) and \(MSE\left(t_{P2}\right)\), respectively, as

$$B\left(t_{P2}\right)=\bar{Y}\left(\lambda C_{x}^{2}\left(\frac{k_{2}^{2}}{8}-\frac{k_{2}}{4}+\frac{k_{2}}{2}\rho_{yx}\frac{C_{y}}{C_{x}}\right){}\right.$$
$${}\left.+\frac{W_{2}\left(g-1\right)}{n}C_{x(2)}^{2}\left(\frac{k_{2}^{2}}{8}-\frac{k_{2}}{4}+\frac{k_{2}}{2}\rho_{yx(2)}\frac{C_{y(2)}}{C_{x(2)}}\right)\right),$$
(31)
$$MSE\left(t_{P2}\right)=\bar{Y}^{2}\left(\lambda\left(C_{y}^{2}+\frac{k_{2}^{2}}{4}C_{x}^{2}+k_{2}C_{yx}\right)+\frac{W_{2}\left(g-1\right)}{n}\left(C_{y(2)}^{2}+k_{2}C_{yx(2)}+\frac{k_{2}^{2}}{4}C_{x(2)}^{2}\right)\right).$$
(32)

The optimal value of \(k_{2}\) is obtained as

$$k_{2}^{*}=-2\frac{\left(\lambda C_{yx}+\frac{W_{2}\left(g-1\right)}{n}C_{yx(2)}\right)}{\left(\lambda C_{x}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{x(2)}^{2}\right)}=-2\frac{E\left(e_{0}^{*}e_{1}^{*}\right)}{E\left(e_{1}^{*^{2}}\right)}$$

and we get the minimum MSE of the second proposed estimator using the value of \(k_{2}^{*}\) as

$$MSE_{\textrm{min}}\left(t_{P2}\right)=\bar{Y}^{2}\left[\left(\lambda C_{y}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{y(2)}^{2}\right)-\frac{\left(\lambda C_{xy}+\frac{W_{2}\left(g-1\right)}{n}\rho_{yx(2)}C_{y(2)}C_{x(2)}\right)^{2}}{\left(\lambda C_{x}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{x(2)}^{2}\right)}\right].$$
(33)
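A short sketch of the optimal \(k_{2}\) and the minimum MSE in (33) is given below; as before, the names are ours and \(C_{yx(2)}=\rho_{yx(2)}C_{y(2)}C_{x(2)}\) is supplied as an assumed input.

```python
def min_mse_tp2(N, n, g, W2, Y_bar, Cy, Cx, rho, Cy2, Cx2, Cyx2):
    """Optimal k2 and minimum MSE (33) of t_P2 under Case II."""
    lam = (1 - n / N) / n
    w = W2 * (g - 1) / n
    num = lam * rho * Cy * Cx + w * Cyx2     # E(e0* e1*)
    den = lam * Cx**2 + w * Cx2**2           # E(e1*^2)
    k2_star = -2 * num / den
    mse_min = Y_bar**2 * ((lam * Cy**2 + w * Cy2**2) - num**2 / den)
    return k2_star, mse_min

# example with assumed values
print(min_mse_tp2(N=500, n=100, g=2, W2=0.25, Y_bar=50.0,
                  Cy=0.8, Cx=0.6, rho=0.7, Cy2=0.9, Cx2=0.7, Cyx2=0.3))
```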

3 EFFICIENCY COMPARISONS

In this section, we compare the \(t_{P1}\) and \(t_{P2}\) estimators theoretically with the estimators mentioned in Section 1 to show their efficiencies under Case I and Case II, respectively.

3.1 Efficiency Comparisons for the Case I

We compare the MSE of the \(t_{P1}\) estimator with the MSEs of the \(t_{5}\), \(t_{6}\), and \(t_{8}\) estimators for Case I. The comparison between \(t_{7}\) and \(t_{P1}\) is not included because their MSEs are equal, \(MSE_{\textrm{min}}\left(t_{P1}\right)=MSE\left(t_{7}\right)\).

Using (9), (13), (15), and (27), we have

$$\textrm{i) }\quad\left[MSE\left(t_{5}\right)-MSE_{\textrm{min}}\left(t_{P1}\right)\right]=\bar{Y}^{2}\lambda\rho_{xy}^{2}C_{y}^{2}>0,$$
(34)
$$\textrm{ii) }\quad\left[MSE\left(t_{6}\right)-MSE_{\textrm{min}}\left(t_{P1}\right)\right]=\bar{Y}^{2}\lambda\left(C_{x}-\rho_{yx}C_{y}\right)^{2}>0,$$
(35)
$$\textrm{iii)}\quad\left[MSE\left(t_{8}\right)-MSE_{\textrm{min}}\left(t_{P1}\right)\right]=\bar{Y}^{2}\lambda\left(\frac{C_{x}}{2}-\rho_{yx}C_{y}\right)^{2}>0.$$
(36)

It is observed that the proposed estimator \(t_{P1}\) is the most efficient among the compared estimators, as the conditions (34)–(36) are always satisfied under Case I.
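As a quick numerical sanity check, the differences in (34)–(36) can be evaluated for assumed population values (not those of the paper's data sets); they are non-negative by construction, being squared terms multiplied by positive factors.

```python
# check of (34)-(36) for assumed values
lam, Cy, Cx, rho, Y_bar = 0.008, 0.8, 0.6, 0.7, 50.0
d34 = Y_bar**2 * lam * rho**2 * Cy**2
d35 = Y_bar**2 * lam * (Cx - rho * Cy) ** 2
d36 = Y_bar**2 * lam * (Cx / 2 - rho * Cy) ** 2
assert min(d34, d35, d36) >= 0
print(d34, d35, d36)
```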

3.2 Efficiency Comparisons for the Case II

We compare the MSE of the \(t_{P2}\) estimator with the MSEs of the \(t_{5}\), \(t_{9}\), \(t_{10}\), and \(t_{11}\) estimators for Case II.

Using (9), (19)–(21), and (33), we have

$$\textrm{ i)}\quad\left[MSE\left(t_{5}\right)-MSE_{\textrm{min}}\left(t_{P2}\right)\right]=\bar{Y}^{2}\frac{\left(\lambda C_{xy}+\frac{W_{2}\left(g-1\right)}{n}C_{yx(2)}\right)^{2}}{\lambda C_{x}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{x(2)}^{2}}>0,$$
(37)
$$\textrm{ii)}\quad\left[MSE\left(t_{9}\right)-MSE_{\textrm{min}}\left(t_{P2}\right)\right]$$
$${}=\bar{Y}^{2}\frac{\left(\left(\lambda C_{x}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{x(2)}^{2}\right)-\left(\lambda C_{yx}+\frac{W_{2}\left(g-1\right)}{n}C_{yx(2)}\right)\right)^{2}}{\lambda C_{x}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{x(2)}^{2}}>0,$$
(38)
$$\textrm{iii)}\quad\left[MSE\left(t_{10}\right)-MSE_{\textrm{min}}\left(t_{P2}\right)\right]$$
$${}=\bar{Y}^{2}\frac{\left(\frac{W_{2}\left(g-1\right)}{n}\left(\rho_{xy}\frac{C_{y}}{C_{x}}C_{x(2)}^{2}-C_{yx(2)}\right)\right)^{2}}{\lambda C_{x}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{x(2)}^{2}}>0,$$
(39)
$$\textrm{iv)}\quad\left[MSE\left(t_{11}\right)-MSE_{\textrm{min}}\left(t_{P2}\right)\right]$$
$${}=\bar{Y}^{2}\frac{\left(\left(\lambda C_{xy}+\frac{W_{2}\left(g-1\right)}{n}\rho_{yx(2)}C_{y(2)}C_{x(2)}\right)-\frac{1}{2}\left(\lambda C_{x}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{x(2)}^{2}\right)\right)^{2}}{\lambda C_{x}^{2}+\frac{W_{2}\left(g-1\right)}{n}C_{x(2)}^{2}}>0.$$
(40)

It is also observed that the second proposed estimator \(t_{P2}\) is the most efficient among the compared estimators, as the conditions (37)–(40) are always satisfied under Case II.

Accordingly, the proposed estimators \(t_{P1}\) and \(t_{P2}\) are always preferred theoretically according to the results in this section.

4 EMPIRICAL STUDY

After the theoretical comparisons, we use the data sets of eleven populations, which were also used in Ünal and Kadilar (2019), to examine the efficiencies of the proposed \(t_{P1}\) and \(t_{P2}\) estimators numerically. In this section, the percent relative efficiencies (PREs) of all mentioned estimators are computed for different values of \(g\) under Case I and Case II, separately, using the following formula:

$$PRE\left(*,t_{5}\right)=\frac{V\left(t_{5}\right)}{MSE\left(*\right)}\times100.$$
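For Case I, for example, the PREs follow directly from the closed-form expressions (9), (13), (15), and (27); the following sketch uses our own function name and assumed inputs rather than the paper's populations.

```python
def pre_case1(N, n, g, W2, Y_bar, Cy, Cx, rho, Cy2):
    """PREs of t6, t8 and t_P1 with respect to t5, from (9), (13), (15) and (27)."""
    lam = (1 - n / N) / n
    w = W2 * (g - 1) / n
    Cyx = rho * Cy * Cx
    v5 = Y_bar**2 * (lam * Cy**2 + w * Cy2**2)              # V(t5) in (9)
    mse6 = v5 + Y_bar**2 * lam * (Cx**2 - 2 * Cyx)          # (13)
    mse8 = v5 + Y_bar**2 * lam * (Cx**2 / 4 - Cyx)          # (15)
    mse_p1 = v5 - Y_bar**2 * lam * rho**2 * Cy**2           # (27)
    return {name: 100 * v5 / m for name, m in
            (("t6", mse6), ("t8", mse8), ("tP1", mse_p1))}

# PREs for several values of the inverse sampling rate g (assumed inputs)
for g in (2, 3, 4, 5):
    print(g, pre_case1(N=500, n=100, g=g, W2=0.25,
                       Y_bar=50.0, Cy=0.8, Cx=0.6, rho=0.7, Cy2=0.9))
```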

The brief information for each data set is given in Table 1.

Table 1. Descriptive statistics for each data set

First, the PREs of the \(t_{6}\), \(t_{8}\), and \(t_{P1}\) estimators with respect to the \(t_{5}\) estimator are given in Tables 2–12 for Case I. The classical regression estimator \(t_{7}\) is not included, for the reason explained in Section 3.

Table 2. PREs of the \(t_{P1}\) estimator and compared estimators for population 1
Table 3. PREs of the \(t_{P1}\) estimator and compared estimators for population 2
Table 4. PREs of the \(t_{P1}\) estimator and compared estimators for population 3
Table 5. PREs of the \(t_{P1}\) estimator and compared estimators for population 4
Table 6. PREs of the \(t_{P1}\) estimator and compared estimators for population 5
Table 7. PREs of the \(t_{P1}\) estimator and compared estimators for population 6
Table 8. PREs of the \(t_{P1}\) estimator and compared estimators for population 7
Table 9. PREs of the \(t_{P1}\) estimator and compared estimators for population 8
Table 10. PREs of the \(t_{P1}\) estimator and compared estimators for population 9
Table 11. PREs of the \(t_{P1}\) estimator and compared estimators for population 10
Table 12. PREs of the \(t_{P1}\) estimator and compared estimators for population 11

We observe that the PRE of the first proposed estimator \(t_{P1}\) is higher than those of the compared estimators. Therefore, the \(t_{P1}\) estimator is recommended over the unbiased estimator \(t_{5}\), the classical ratio estimator \(t_{6}\), and the pioneer exponential-type estimator \(t_{8}\), according to the results in Tables 2–12. We also see that the PRE of the \(t_{P1}\) estimator decreases with increasing values of \(g\).

Second, the PREs of the \(t_{9}\), \(t_{10}\), \(t_{11}\), and \(t_{P2}\) estimators with respect to the \(t_{5}\) estimator, based on different values of \(g\), are given in Tables 13–23 under Case II.

Table 13. PREs of the \(t_{P2}\) estimator and compared estimators for population 1
Table 14. PREs of the \(t_{P2}\) estimator and compared estimators for population 2
Table 15. PREs of the \(t_{P2}\) estimator and compared estimators for population 3
Table 16. PREs of the \(t_{P2}\) estimator and compared estimators for population 4
Table 17. PREs of the \(t_{P2}\) estimator and compared estimators for population 5
Table 18. PREs of the \(t_{P2}\) estimator and compared estimators for population 6
Table 19. PREs of the \(t_{P2}\) estimator and compared estimators for population 7
Table 20. PREs of the \(t_{P2}\) estimator and compared estimators for population 8
Table 21. PREs of the \(t_{P2}\) estimator and compared estimators for population 9
Table 22. PREs of the \(t_{P2}\) estimator and compared estimators for population 10
Table 23. PREs of the \(t_{P2}\) estimator and compared estimators for population 11

We observe that the PRE of the second proposed estimator \(t_{P2}\) is higher than those of \(t_{5}\), \(t_{9}\), \(t_{10}\), and \(t_{11}\). Therefore, the \(t_{P2}\) estimator is also recommended according to the results in Tables 13–23. We also see that the PRE of the \(t_{P2}\) estimator decreases with increasing values of \(g\), except for Populations 5, 9, and 11.

We conclude that the numerical results support the theoretical findings that the \(t_{P1}\) and \(t_{P2}\) estimators are the most efficient estimators under both cases, respectively.

5 CONCLUSION

In this study, non-response situations, classified as Case I and Case II, are considered for estimating the population mean, and an estimator is proposed for each case. The first and the second proposed estimators are compared theoretically with the pioneer unbiased estimator, the classical ratio estimator, and the exponential-type estimators under Cases I and II, respectively. After the theoretical comparisons, we use different data sets to examine the efficiencies of the proposed estimators numerically and, consequently, we infer that the \(t_{P1}\) and \(t_{P2}\) estimators are recommended for estimating the population mean under both cases.