1 Introduction

Respondents are sometimes confronted with sensitive questions, such as those concerning gambling, alcoholism, sexual and physical abuse, drug addiction, abortion, tax evasion, illegal income, mobbing, political views, doping and homosexual activities, among many others. When respondents are asked such questions directly, they may refuse to answer or give untruthful answers, which significantly affects the quantity and quality of the survey data. Warner (1965) introduced the randomized response technique (RRT) to address this problem.

Several variations of the original RRT models, covering both binary response and quantitative response models, have been discussed by researchers, including Fox and Tracy (1986), Chaudhuri and Mukerjee (1987, 1988), Mangat and Singh (1990), Hedayat and Sinha (1991), Mangat (1994), Tracy and Mangat (1996), Mahmood et al. (1998), Singh et al. (2000), Chang and Huang (2001), Christofides (2003), Huang (2004), Chang et al. (2004a, b), and Singh and Tarray (2012).

To address the privacy problem with the Moors (1997) model, Mangat et al. (1997) and Singh et al. (2000) gave several strategies as alternatives to it, but their models may lose a large portion of the data information and require a high cost to maintain the confidentiality of the respondents. These drawbacks of the earlier alternatives to the Moors model motivated Kim and Warde (2005) to envisage a mixed RR model, using simple random sampling with replacement, that mitigates the privacy problem. The work in this paper is based on the mixed randomized response model due to Singh and Tarray (2014), so a description of their model is given below.

1.1 Singh and Tarray’s (2014) mixed randomized response model

In the model given by Singh and Tarray (2014), a single sample of size n is selected by simple random sampling with replacement (SRSWR) from the population. Each respondent in the sample is instructed to answer the direct question, “I am a member of the innocuous trait group”. If a respondent answers “Yes” to the direct question, then he or she is instructed to go to randomization device \( R_{1} \) consisting of the statements (i) “I am a member of the sensitive trait group” and (ii) “I am a member of the innocuous trait group” with probabilities of selection \( P_{1} \) and \( \left( {1 - P_{1} } \right) \), respectively. If a respondent answers “No” to the direct question, then the respondent is instructed to use a randomization procedure due to Mangat (1994). In Mangat’s (1994) RR procedure, each respondent is instructed to say “Yes” if he or she is a member of the sensitive trait group. If he or she is not a member of the sensitive trait group, then the respondent is required to use Warner’s (1965) randomization device \( R_{2} \) consisting of the statements (a) “I belong to the sensitive trait group” and (b) “I do not belong to the sensitive trait group” with probabilities \( P \) and \( \left( {1 - P} \right) \), respectively. He or she then reports “Yes” or “No” according to the outcome of the randomization device \( R_{2} \) and his or her actual status with respect to the sensitive trait group. The survey procedures are performed under the assumption that the sensitive and the innocuous questions in randomization device \( R_{1} \) are unrelated and independent. To protect the respondent’s privacy, the respondents should not disclose to the interviewer the question they answered from either \( R_{1} \) or \( R_{2} \).

Let n be the sample size confronted with the direct question, and let \( n_{1} \) and \( n_{2} \) \( \left( { = n - n_{1} } \right) \) denote the numbers of “Yes” and “No” answers from the sample. Since all the respondents using randomization device \( R_{1} \) have already responded “Yes” to the initial direct innocuous question, the proportion \( Y \) of “Yes” answers from the respondents using randomization device \( R_{1} \) is expressed as

$$ Y = P_{1} \pi_{\text{s}} + \left( {1 - P_{1} } \right)\pi_{1} = P_{1} \pi_{\text{s}} + \left( {1 - P_{1} } \right), $$
(1)

where \( \pi_{\text{s}} \) is the population proportion of the sensitive trait group and \( \pi_{1} \) is the proportion of “Yes” answers to the innocuous question, which equals 1 here because all respondents using \( R_{1} \) have already answered “Yes” to it.

An unbiased estimator of \( \pi_{\text{s}} \), in terms of the sample proportion of “Yes” responses \( \hat{Y} \), is given by

$$ \hat{\pi }_{1} = \frac{{\hat{Y} - \left( {1 - P_{1} } \right)}}{{P_{1} }}, $$
(2)

with variance

$$ V\left( {\hat{\pi }_{1} } \right) = \frac{{Y\left( {1 - Y} \right)}}{{n_{1} P_{1}^{2} }} = \frac{1}{{n_{1} }}\left[ {\left( {1 - \pi_{\text{s}} } \right)\pi_{\text{s}} + \frac{{\left( {1 - \pi_{s} } \right)\left( {1 - P_{1} } \right)}}{{P_{1} }}} \right]. $$
(3)

The proportion of “Yes” answers from the respondents using Mangat’s (1994) randomization device \( R_{2} \) is

$$ X = \pi_{\text{s}} + \left( {1 - \pi_{\text{s}} } \right)\left( {1 - P} \right). $$
(4)

An unbiased estimator of \( \pi_{\text{s}} \), in terms of the sample proportion of “Yes” responses \( \hat{X} \) is given by

$$ \hat{\pi }_{2} = \frac{{\hat{X} - \left( {1 - P} \right)}}{P}. $$
(5)

The variance of \( \hat{\pi }_{2} \) is given by

$$ V\left( {\hat{\pi }_{2} } \right) = \frac{{X\left( {1 - X} \right)}}{{n_{2} P^{2} }} = \frac{1}{{n_{2} }}\left[ {\pi_{\text{s}} \left( {1 - \pi_{\text{s}} } \right) + \frac{{\left( {1 - P} \right)\left( {1 - \pi_{\text{s}} } \right)}}{P}} \right]. $$
(6)

Giving weight \( \lambda = {{n_{1} } \mathord{\left/ {\vphantom {{n_{1} } n}} \right. \kern-0pt} n} \) to the estimator \( \hat{\pi }_{1} \) and \( \left( {1 - \lambda } \right) = {{\left( {n - n_{1} } \right)} \mathord{\left/ {\vphantom {{\left( {n - n_{1} } \right)} n}} \right. \kern-0pt} n} \) to the estimator \( \hat{\pi }_{2} \), Singh and Tarray (2014) suggested an unbiased estimator for \( \pi_{\text{s}} \) as

$$ \hat{\pi }_{\text{h}} = \lambda \hat{\pi }_{1} + \left( {1 - \lambda } \right)\hat{\pi }_{2} ,\quad {\text{for}}\quad 0 < \lambda < 1. $$
(7)

with the variance

$$ \begin{aligned} V\left( {\hat{\pi }_{\text{h}} } \right) = & \frac{\lambda }{n}\left[ {\pi_{\text{s}} \left( {1 - \pi_{\text{s}} } \right) + \frac{{\left( {1 - \pi_{\text{s}} } \right)\left( {1 - P_{1} } \right)}}{{P_{1} }}} \right] \\ & \quad + \frac{{\left( {1 - \lambda } \right)}}{n}\left[ {\pi_{\text{s}} \left( {1 - \pi_{\text{s}} } \right) + \frac{{\left( {1 - \pi_{\text{s}} } \right)\left( {1 - P} \right)}}{P}} \right]. \\ \end{aligned} $$
(8)

For \( P = \left( {2 - P_{1} } \right)^{ - 1} \), Singh and Tarray (2014) obtained the variance of \( \hat{\pi }_{\text{h}} \) as

$$ V\left( {\hat{\pi }_{\text{h}} } \right) = \frac{1}{n}\left[ {\lambda V_{1} + \left( {1 - \lambda } \right)V_{2} } \right], $$
(9)

where

$$ V_{1} = \left[ {\pi_{\text{s}} \left( {1 - \pi_{\text{s}} } \right) + \frac{{\left( {1 - \pi_{\text{s}} } \right)\left( {1 - P_{1} } \right)}}{{P_{1} }}} \right], $$
$$ V_{2} = \left[ {\pi_{\text{s}} \left( {1 - \pi_{\text{s}} } \right) + \left( {1 - \pi_{\text{s}} } \right)\left( {1 - P_{1} } \right)} \right]. $$
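
To make the mechanics of the procedure concrete, the following Monte Carlo sketch simulates the survey and evaluates the mixed estimator \( \hat{\pi }_{\text{h}} \) of (7) under the choice \( P = \left( {2 - P_{1} } \right)^{ - 1} \). It is an illustration only; the function name and all parameter values are assumptions chosen for the example, not values taken from the paper.

```python
# A minimal Monte Carlo sketch of the Singh and Tarray (2014) mixed RR model.
import numpy as np

def simulate_pi_h(n, pi_s, lam, P1, rng):
    """One survey run; returns the mixed estimator of Eq. (7)."""
    P = 1.0 / (2.0 - P1)              # the choice P = (2 - P1)^(-1)
    sensitive = rng.random(n) < pi_s  # hidden sensitive-group status
    innocuous = rng.random(n) < lam   # independent innocuous-trait status;
                                      # the realized split n1/n plays the role of lambda
    n1 = innocuous.sum()              # "Yes" to the direct question -> device R1
    # Device R1: with probability P1 answer the sensitive statement truthfully;
    # otherwise answer the innocuous statement, which is "Yes" by construction.
    r1_yes = np.where(rng.random(n) < P1, sensitive, True)
    # Mangat's (1994) procedure: sensitive respondents say "Yes" outright;
    # non-sensitive ones use the Warner device R2 and say "Yes" w.p. (1 - P).
    r2_yes = sensitive | (rng.random(n) < 1.0 - P)
    Y_hat = r1_yes[innocuous].mean()   # proportion of "Yes" answers under R1
    X_hat = r2_yes[~innocuous].mean()  # proportion of "Yes" answers under R2
    pi_1 = (Y_hat - (1.0 - P1)) / P1   # Eq. (2)
    pi_2 = (X_hat - (1.0 - P)) / P     # Eq. (5)
    return (n1 / n) * pi_1 + (1.0 - n1 / n) * pi_2  # Eq. (7)

rng = np.random.default_rng(2014)
runs = [simulate_pi_h(1000, pi_s=0.3, lam=0.6, P1=0.7, rng=rng) for _ in range(5000)]
print(np.mean(runs))  # close to pi_s = 0.3, illustrating unbiasedness
```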

In Sect. 2, we have suggested a weighted unbiased estimator for \( \pi_{\text{s}} \) and studied its properties.

2 Proposed class of unbiased estimators

We define a weighted unbiased estimator for \( \pi_{\text{s}} \) as

$$ \hat{\pi }_{\text{HS}} = \eta_{1} \hat{\pi }_{1} + \eta_{2} \hat{\pi }_{2} , $$
(10)

where \( \eta_{1} \) and \( \eta_{2} \) are suitably chosen weights such that \( \eta_{1} + \eta_{2} = 1 \).

For suitable values of \( \left( {\eta_{1} ,\eta_{2} } \right) \), a set of estimators can be identified; see, for instance, Table 1.

Table 1 Different weights \( \left( {\eta_{1} ,\eta_{2} } \right) \) and the resulting estimators of \( \pi_{\text{s}} \)

Since the two randomization devices are independent, the variance of \( \hat{\pi }_{\text{HS}} \) is given by

$$ V\left( {\hat{\pi }_{\text{HS}} } \right) = \eta_{1}^{2} V\left( {\hat{\pi }_{1} } \right) + \eta_{2}^{2} V\left( {\hat{\pi }_{2} } \right), $$
$$ = \frac{1}{n}\left\{ {\frac{{\eta_{1}^{2} }}{\lambda }\left[ {\pi_{\text{s}} \left( {1 - \pi_{\text{s}} } \right) + \frac{{\left( {1 - \pi_{\text{s}} } \right)\left( {1 - P_{1} } \right)}}{{P_{1} }}} \right] + \frac{{\eta_{2}^{2} }}{{\left( {1 - \lambda } \right)}}\left[ {\pi_{\text{s}} \left( {1 - \pi_{\text{s}} } \right) + \frac{{\left( {1 - \pi_{\text{s}} } \right)\left( {1 - P} \right)}}{P}} \right]} \right\}. $$
(11)

Inserting \( P = \left( {2 - P_{1} } \right)^{ - 1} \) in (11) we get

$$ \begin{aligned} V\left( {\hat{\pi }_{\text{HS}} } \right) = & \frac{1}{n}\left\{ {\frac{{\eta_{1}^{2} }}{\lambda }\left[ {\pi_{\text{s}} \left( {1 - \pi_{\text{s}} } \right) + \frac{{\left( {1 - \pi_{\text{s}} } \right)\left( {1 - P_{1} } \right)}}{{P_{1} }}} \right] + \frac{{\eta_{2}^{2} }}{{\left( {1 - \lambda } \right)}}\left[ {\pi_{\text{s}} \left( {1 - \pi_{\text{s}} } \right) + \left( {1 - \pi_{\text{s}} } \right)\left( {1 - P_{1} } \right)} \right]} \right\} \\ =& \frac{1}{n}\left[ {\frac{{\eta_{1}^{2} }}{\lambda }V_{1} + \frac{{\eta_{2}^{2} }}{{\left( {1 - \lambda } \right)}}V_{2} } \right] = \frac{1}{n}\left[ {\frac{{\eta_{1}^{2} }}{\lambda }V_{1} + \frac{{\left( {1 + \eta_{1}^{2} - 2\eta_{1} } \right)}}{{\left( {1 - \lambda } \right)}}V_{2} } \right] \\ =& \frac{1}{n}\left[ {\eta_{1}^{2} \left\{ {\frac{{V_{1} }}{\lambda } + \frac{{V_{2} }}{{\left( {1 - \lambda } \right)}}} \right\} - \frac{{2\eta_{1} V_{2} }}{{\left( {1 - \lambda } \right)}} + \frac{{V_{2} }}{{\left( {1 - \lambda } \right)}}} \right] \\ = &\frac{1}{{n\lambda \left( {1 - \lambda } \right)}}\left[ {\eta_{1}^{2} \left\{ {\left( {1 - \lambda } \right)V_{1} + \lambda V_{2} } \right\} - 2\eta_{1} \lambda V_{2} + \lambda V_{2} } \right]. \\ \end{aligned} $$
(12)

The variance of \( \hat{\pi }_{HS} \) at (12) is minimized for

$$ \left. \begin{aligned} \eta_{1} = \frac{{\lambda V_{2} }}{{\left[ {\left( {1 - \lambda } \right)V_{1} + \lambda V_{2} } \right]}} = \eta_{10} ({\text{say}}) \hfill \\ \eta_{2} = \frac{{\left( {1 - \lambda } \right)V_{1} }}{{\left[ {\left( {1 - \lambda } \right)V_{1} + \lambda V_{2} } \right]}} = \eta_{20} ({\text{say}}) \hfill \\ \end{aligned} \right\}, $$
(13)

Inserting (13) in (10) we get the optimum estimator (OE) for \( \pi_{\text{s}} \) as

$$ \hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} = \left( {\eta_{10} \hat{\pi }_{1} + \eta_{20} \hat{\pi }_{2} } \right). $$
(14)

Thus, the resulting minimum variance of \( \hat{\pi }_{\text{HS}} \) (i.e. the variance of the OE \( \hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} \)) is given by

$$ \begin{aligned} \hbox{min} .\;V\left( {\hat{\pi }_{\text{HS}} } \right) & = \frac{{V_{1} V_{2} }}{{n\left[ {\left( {1 - \lambda } \right)V_{1} + \lambda V_{2} } \right]}} \hfill \\ & = V\left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} } \right). \hfill \\ \end{aligned} $$
(15)

Thus, we state the following theorem.

Theorem 2.1

The variance of the weighted estimator \( \hat{\pi }_{HS} \) satisfies

$$ V\left( {\hat{\pi }_{HS} } \right) \ge \frac{{V_{1} V_{2} }}{{n\left[ {\left( {1 - \lambda } \right)V_{1} + \lambda V_{2} } \right]}} $$

with equality holding if

$$ \eta_{1} = \eta_{10} \;{\text{and}}\;\eta_{2} = \eta_{20} $$

Putting \( \eta_{1} = \lambda \) and \( \eta_{2} = \left( {1 - \lambda } \right) \) in (12), one can easily recover the variance of the Singh and Tarray (2014) estimator as given in (9).
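
As a quick numerical check of (13) and (15), the optimum weights and the minimum variance are simple closed-form expressions that can be transcribed directly; the sketch below does so, with purely illustrative inputs.

```python
# Direct transcription of V1, V2 (below Eq. (9)), the optimum weights of
# Eq. (13), and the minimum variance of Eq. (15); inputs are illustrative.
def V1_V2(pi_s, P1):
    V1 = pi_s * (1 - pi_s) + (1 - pi_s) * (1 - P1) / P1
    V2 = pi_s * (1 - pi_s) + (1 - pi_s) * (1 - P1)
    return V1, V2

def optimum_weights(pi_s, P1, lam):
    V1, V2 = V1_V2(pi_s, P1)
    denom = (1 - lam) * V1 + lam * V2
    return lam * V2 / denom, (1 - lam) * V1 / denom      # (eta10, eta20)

def min_variance(pi_s, P1, lam, n):
    V1, V2 = V1_V2(pi_s, P1)
    return V1 * V2 / (n * ((1 - lam) * V1 + lam * V2))   # Eq. (15)

print(optimum_weights(pi_s=0.2, P1=0.3, lam=0.4))
print(min_variance(pi_s=0.2, P1=0.3, lam=0.4, n=1000))
```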

2.1 Special case

For \( \eta_{1} = \frac{{\lambda P_{1} }}{1 - \lambda } \), the proposed estimator \( \hat{\pi }_{\text{HS}} \) defined by (10) reduces to an unbiased estimator

$$ \hat{\pi }_{{{\text{HS}}\left( 1 \right)}} = \frac{{\lambda P_{1} }}{{\left( {1 - \lambda } \right)}}\hat{\pi }_{1} + \frac{{\left\{ {1 - \lambda \left( {1 + P_{1} } \right)} \right\}}}{{\left( {1 - \lambda } \right)}}\hat{\pi }_{2} . $$
(16)

Here, we note that \( \left( {\lambda ,P_{1} } \right) \) are known.

Putting \( \eta_{1} = \frac{{\lambda P_{1} }}{1 - \lambda } \) in (12) we get the variance of the unbiased estimator \( \hat{\pi }_{{{\text{HS}}\left( 1 \right)}} \) as

$$ V\left( {\hat{\pi }_{{{\text{HS}}\left( 1 \right)}} } \right) = \frac{1}{{n\left( {1 - \lambda } \right)}}\left[ {\frac{{\lambda P_{1}^{2} \left\{ {\left( {1 - \lambda } \right)V_{1} + \lambda V_{2} } \right\}}}{{\left( {1 - \lambda } \right)^{2} }} - \frac{{2\lambda P_{1} V_{2} }}{{\left( {1 - \lambda } \right)}} + V_{2} } \right]. $$
(17)

Putting \( \eta_{1} = \lambda \Rightarrow \eta_{2} = \left( {1 - \lambda } \right) \) in (12) we get the variance of the Singh and Tarray’s (2014) estimator \( \hat{\pi }_{\text{h}} \) as

$$ V\left( {\hat{\pi }_{\text{h}} } \right) = \frac{1}{n}\left[ {\lambda V_{1} + \left( {1 - \lambda } \right)V_{2} } \right]. $$
(18)

From (17) and (18) we have

$$ V\left( {\hat{\pi }_{\text{h}} } \right) - V\left( {\hat{\pi }_{{{\text{HS}}\left( 1 \right)}} } \right) = \frac{\lambda }{{n\left( {1 - \lambda } \right)^{2} }}\left( {1 - \frac{{P_{1} }}{1 - \lambda }} \right)\left[ {\left\{ {\left( {1 - \lambda } \right)V_{1} + \lambda V_{2} } \right\}\left( {1 - \lambda + P_{1} } \right) - 2\left( {1 - \lambda } \right)V_{2} } \right], $$

which is positive if

$$ \left. \begin{aligned} {\text{either}}\;\left[ {\lambda^{2} \left( {V_{1} - V_{2} } \right) + \lambda \left\{ {V_{2} - \left( {2 + P_{1} } \right)\left( {V_{1} - V_{2} } \right)} \right\} + \left\{ {V_{1} \left( {1 + P_{1} } \right) - 2V_{2} } \right\}} \right] > 0,\quad P_{1} < \left( {1 - \lambda } \right) \hfill \\ {\text{or}}\;\left[ {\lambda^{2} \left( {V_{1} - V_{2} } \right) + \lambda \left\{ {V_{2} - \left( {2 + P_{1} } \right)\left( {V_{1} - V_{2} } \right)} \right\} + \left\{ {V_{1} \left( {1 + P_{1} } \right) - 2V_{2} } \right\}} \right] < 0,\quad P_{1} > \left( {1 - \lambda } \right) \hfill \\ \end{aligned} \right\}. $$
(19)

To see the performance of the suggested estimator \( \hat{\pi }_{{{\text{HS}}\left( 1 \right)}} \) at (16) relative to Singh and Tarray (2014) estimator \( \hat{\pi }_{\text{h}} \) given by (7) we have computed the percent relative efficiency (PRE) of \( \hat{\pi }_{{{\text{HS}}\left( 1 \right)}} \) with respect to \( \hat{\pi }_{\text{h}} \) using the formula given in Sect. 3 for different values of \( \left( {\lambda ,P_{1} ,\pi_{\text{s}} } \right) \).

3 Efficiency comparison

In this section, we compare the proposed weighted mixed randomized response model with Singh and Tarray’s (2014) model under the completely truthful reporting case.

We have from (9) and (15) that

$$ \begin{aligned} V\left( {\hat{\pi }_{\text{h}} } \right) - \hbox{min} .\;V\left( {\hat{\pi }_{\text{HS}} } \right) & = V\left( {\hat{\pi }_{\text{h}} } \right) - V\left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} } \right) = \frac{{\lambda \left( {1 - \lambda } \right)\left( {V_{1} - V_{2} } \right)^{2} }}{{n\left[ {\left( {1 - \lambda } \right)V_{1} + \lambda V_{2} } \right]}} \hfill \\ & = \frac{{\lambda \left( {1 - \lambda } \right)\left( {1 - \pi_{\text{s}} } \right)^{2} \left( {1 - P_{1} } \right)^{2} \left[ {\frac{1}{{P_{1} }} - 1} \right]^{2} }}{{n\left[ {\left( {1 - \lambda } \right)V_{1} + \lambda V_{2} } \right]}}, \hfill \\ \end{aligned} $$
(20)

which is always positive.

It follows that the proposed class of estimators \( \hat{\pi }_{\text{HS}} \) is more efficient than the Singh and Tarray (2014) estimator \( \hat{\pi }_{\text{h}} \) under the optimum condition. Thus, to obtain an estimator better than the Singh and Tarray (2014) estimator \( \hat{\pi }_{\text{h}} \), one has to choose the values of \( \left( {\eta_{1} ,\eta_{2} } \right) \) in the vicinity of the exact optimum values \( \left( {\eta_{10} ,\eta_{20} } \right) \).

The percent relative efficiency of the OE \( \hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} \) with respect to Singh and Tarray (2014) estimator \( \hat{\pi }_{\text{h}} \) is given by

$$ \begin{aligned} {\text{PRE}}\left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} ,\hat{\pi }_{\text{h}} } \right) =& \frac{{V\left( {\hat{\pi }_{\text{h}} } \right)}}{{V\left( {\hat{\pi }_{\text{HS}}^{{ ( {\text{o)}}}} } \right)}} \times 100 \hfill \\ =& \left[ {1 + \frac{{\lambda \left( {1 - \lambda } \right)\left( {V_{1} - V_{2} } \right)^{2} }}{{V_{1} V_{2} }}} \right] \times 100, \hfill \\ \end{aligned} $$
(21)

Further, the PRE of \( \hat{\pi }_{{{\text{HS}}\left( 1 \right)}} \) with respect to \( \hat{\pi }_{\text{h}} \) is given by

$$ \begin{aligned} {\text{PRE}}\left( {\hat{\pi }_{{{\text{HS}}\left( 1\right)}} ,\hat{\pi }_{\text{h}} } \right) = & \frac{{V\left( {\hat{\pi }_{\text{h}} } \right)}}{{V\left( {\hat{\pi }_{{{\text{HS}}\left( 1\right)}} } \right)}} \times 100 \hfill \\ = & \frac{{\left[ {\lambda V_{1} + \left( {1 - \lambda } \right)V_{2} } \right]\left( {1 - \lambda } \right)}}{{\left[ {V_{2} - \frac{{2\lambda P_{1} V_{2} }}{{\left( {1 - \lambda } \right)}} + \frac{{\lambda P_{1}^{2} \left\{ {\left( {1 - \lambda } \right)V_{1} + \lambda V_{2} } \right\}}}{{\left( {1 - \lambda } \right)^{2} }}} \right]}} \times 100, \hfill \\ \end{aligned} $$
(22)

With the help of the formula given in (13), we have computed the optimum values of \( \eta_{10} \) and \( \eta_{20} \) for different values of \( \left( {\lambda ,\;\pi_{\text{s}} ,\;P_{1} } \right) \); the findings are shown in Table 2.

Using the formulae given by (21) and (22), we have computed the values of \( {\text{PRE}}\left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} ,\hat{\pi }_{\text{h}} } \right) \) and \( {\text{PRE}}\left( {\hat{\pi }_{{{\text{HS}}\left( 1 \right)}} ,\hat{\pi }_{\text{h}} } \right) \) for different values of \( \left( {\lambda ,\;\pi_{\text{s}} ,\;P_{1} } \right) \); the findings are tabulated in Tables 3 and 4, respectively.
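
The two PRE formulas can be evaluated directly; the sketch below is a plain transcription of (21) and (22), repeating the expressions for \( V_{1} \) and \( V_{2} \) so that it stands alone, with illustrative parameter values.

```python
# PRE formulas (21) and (22); V1 and V2 repeat the expressions below Eq. (9).
def V1_V2(pi_s, P1):
    V1 = pi_s * (1 - pi_s) + (1 - pi_s) * (1 - P1) / P1
    V2 = pi_s * (1 - pi_s) + (1 - pi_s) * (1 - P1)
    return V1, V2

def pre_optimum(pi_s, P1, lam):
    """PRE of the optimum estimator, Eq. (21)."""
    V1, V2 = V1_V2(pi_s, P1)
    return 100 * (1 + lam * (1 - lam) * (V1 - V2) ** 2 / (V1 * V2))

def pre_special(pi_s, P1, lam):
    """PRE of the special-case estimator, Eq. (22)."""
    V1, V2 = V1_V2(pi_s, P1)
    num = (lam * V1 + (1 - lam) * V2) * (1 - lam)
    den = (V2 - 2 * lam * P1 * V2 / (1 - lam)
           + lam * P1 ** 2 * ((1 - lam) * V1 + lam * V2) / (1 - lam) ** 2)
    return 100 * num / den

for P1 in (0.1, 0.2, 0.3, 0.4):
    print(P1, round(pre_optimum(0.1, P1, 0.3), 2), round(pre_special(0.1, P1, 0.3), 2))
```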

Table 2 depicts the optimum values \( \left( {\eta_{10} ,\eta_{20} } \right) \) of the weights \( \left( {\eta_{1} ,\eta_{2} } \right) \) in the proposed estimator \( \hat{\pi }_{\text{HS}} \) for various values of \( \pi_{\text{s}} \), \( \lambda \), \( P_{1} \), and n = 1000. Table 2 reveals that for fixed values of \( \left( {\pi_{\text{s}} ,\lambda } \right) \), the value of \( \eta_{10} \) increases while \( \eta_{20} \) decreases as \( P_{1} \) increases. On the other hand, it is observed that for fixed values of \( \left( {\lambda ,P_{1} } \right) \), the value of \( \eta_{10} \) increases and \( \eta_{20} \) decreases as \( \pi_{\text{s}} \) increases. It follows from Table 3 that \( {\text{PRE}}\left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} ,\hat{\pi }_{\text{h}} } \right) \) decreases as \( P_{1} \) increases and as \( \pi_{\text{s}} \) increases. For fixed values of \( \left( {\pi_{\text{s}} ,P_{1} } \right) \), \( {\text{PRE}}\left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} ,\hat{\pi }_{\text{h}} } \right) \) increases as \( \lambda \) decreases.

Table 2 Optimum values of weights \( \eta_{10} \) and \( \eta_{20} \) of \( \eta_{1} \) and \( \eta_{2} \)
Table 3 Percent relative efficiency of the suggested optimum estimator \( \left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} } \right) \) with respect to Singh and Tarray (2014) estimator \( \left( {\hat{\pi }_{\text{h}} } \right) \)

There is considerable gain in efficiency from using the proposed OE \( \hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} \) over the Singh and Tarray (2014) estimator \( \hat{\pi }_{\text{h}} \) as long as \( P_{1} < \frac{1}{2} \); in general, the PRE of the proposed OE exceeds 100%.

Further, from Table 4 it is observed that

Table 4 Percent relative efficiency of the suggested estimator \( \left( {\hat{\pi }_{{{\text{HS}}\left( 1 \right)}} } \right) \) with respect to Singh and Tarray (2014) estimator \( \left( {\hat{\pi }_{\text{h}} } \right) \)
  1. There is substantial gain in efficiency from using the envisaged estimator \( \hat{\pi }_{{{\text{HS}}\left( 1 \right)}} \) over Singh and Tarray’s (2014) estimator \( \hat{\pi }_{\text{h}} \) where \( P_{1} \le 0.42 \).

  2. The proposed estimator \( \hat{\pi }_{{{\text{HS}}\left( 1 \right)}} \) is always better than Singh and Tarray’s (2014) estimator \( \hat{\pi }_{\text{h}} \) as long as \( 0 < P_{1} \le 0.42 \) and \( \lambda \in \left( {0.1,\;0.5} \right) \).

  3. \( {\text{PRE}}\left( {\hat{\pi }_{{{\text{HS}}\left( 1 \right)}} ,\hat{\pi }_{\text{h}} } \right) \) decreases as \( P_{1} \) increases.

Thus, the proposed estimator \( \hat{\pi }_{{{\text{HS}}\left( 1 \right)}} \) is to be preferred over Singh and Tarray’s (2014) estimator \( \hat{\pi }_{\text{h}} \) under the parametric restrictions in (1) and (2) above.

Further, comparing the results of Tables 3 and 4, we observe that the values in Table 3 are very close to those in Table 4. Thus, we infer that the proposed estimator \( \hat{\pi }_{{{\text{HS}}\left( 1 \right)}} \) can be used as an alternative to the optimum estimator \( \hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} \). There is a practical difficulty in using the proposed optimum estimator \( \hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} \), as it depends on the unknown parameter \( \pi_{\text{s}} \) under investigation, while the proposed estimator \( \hat{\pi }_{{{\text{HS}}\left( 1 \right)}} \) faces no such difficulty. So the estimator \( \hat{\pi }_{{{\text{HS}}\left( 1 \right)}} \) would be preferred over both the optimum estimator \( \hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} \) and the Singh and Tarray (2014) estimator \( \hat{\pi }_{\text{h}} \).

3.1 Analytical comparison between the estimators \( \hat{\pi }_{\text{h}} \) and \( \hat{\pi }_{\text{HS}} \)

From (9) and (12) we have

$$ n\lambda \left( {1 - \lambda } \right)\left[ {V\left( {\hat{\pi }_{\text{h}} } \right) - V\left( {\hat{\pi }_{\text{HS}} } \right)} \right] = \left[ {\left( {1 - \lambda } \right)\left( {\lambda^{2} - \eta_{1}^{2} } \right)V_{1} + \lambda \left\{ {\left( {1 - \lambda } \right)^{2} - \eta_{2}^{2} } \right\}V_{2} } \right], $$

which is positive if

$$ \left[ {\left( {1 - \lambda } \right)\left( {\lambda^{2} - \eta_{1}^{2} } \right)V_{1} + \lambda \left\{ {\left( {1 - \lambda } \right)^{2} - \eta_{2}^{2} } \right\}V_{2} } \right] > 0 $$

i.e. if \( \left\{ { - \left( {1 - \lambda } \right)V_{1} - \lambda V_{2} } \right\}\eta_{1}^{2} + 2\eta_{1} \lambda V_{2} + \left\{ {\lambda^{2} \left( {1 - \lambda } \right)V_{1} + \lambda \left( {1 - \lambda } \right)^{2} V_{2} - \lambda V_{2} } \right\} > 0 \)

i.e. if \( - \eta_{1}^{2} D + 2\eta_{1} \lambda V_{2} + \lambda \left\{ {\left( {1 - \lambda } \right)\lambda V_{1} + \left( {1 - \lambda } \right)^{2} V_{2} - V_{2} } \right\} > 0 \)

i.e. if \( - \eta_{1}^{2} D + 2\eta_{1} \lambda V_{2} + \lambda \left\{ {\lambda \left[ {\left( {1 - \lambda } \right)V_{1} + \lambda V_{2} - \lambda V_{2} } \right] + \left( {1 - \lambda } \right)^{2} V_{2} - V_{2} } \right\} > 0 \)

i.e. if \( - \eta_{1}^{2} D + 2\eta_{1} \lambda V_{2} + \lambda \left\{ {\lambda D - \lambda^{2} V_{2} + \left( {1 - \lambda } \right)^{2} V_{2} - V_{2} } \right\} > 0 \)

i.e. if \( - \eta_{1}^{2} D + 2\eta_{1} \lambda V_{2} + \lambda \left\{ {\lambda D - 2\lambda V_{2} } \right\} > 0 \)

i.e. if \( - \eta_{1}^{2} D + 2\eta_{1} \eta_{10} D + \lambda \left\{ {\lambda D - 2\eta_{10} D} \right\} > 0 \)

i.e. if \( - \eta_{1}^{2} + 2\eta_{1} \eta_{10} + \lambda \left\{ {\lambda - 2\eta_{10} } \right\} > 0 \)

i.e. if \( \eta_{1}^{2} - 2\eta_{1} \eta_{10} - \lambda \left\{ {\lambda - 2\eta_{10} } \right\} < 0 \)

i.e. if \( \left( {\eta_{1} - \eta_{10} } \right)^{2} - \left( {\lambda - \eta_{10} } \right)^{2} < 0 \)

i.e. if \( \left( {\eta_{1} - \eta_{10} } \right)^{2} < \left( {\lambda - \eta_{10} } \right)^{2} \)

$$ {\text{i}}.{\text{e}}.\;{\text{if}}\;\left| {\eta _{1} - \eta _{{10}} } \right|{\text{ < }}\left| {\lambda - \eta _{{10}} } \right| $$
(23)

where \( D = \left[ {\left( {1 - \lambda } \right)V_{1} + \lambda V_{2} } \right] .\)

It is observed that the OE \( \hat{\pi }_{\text{HS}}^{{ ( {\text{o)}}}} \) is hard to apply in practice, as the optimum weights involve the unknown parameter \( \pi_{\text{s}} \). However, with the help of (23), one can generate estimators from \( \hat{\pi }_{\text{HS}} \) that are better than the Singh and Tarray (2014) estimator \( \hat{\pi }_{\text{h}} \) even when the exact optimum value of \( \eta_{1} \) is unknown.

We have computed the range of \( \eta_{1} \) using (23) for different values of \( \pi_{\text{s}} ,\lambda ,P_{1} \) and n = 1000; the findings are shown in Table 5. It is observed from Table 5 that the lower limit of \( \eta_{1} \) increases as \( P_{1} \) increases for fixed values of \( \left( {\pi_{\text{s}} ,\lambda } \right) \), resulting in a shorter range of \( \eta_{1} \). We note from Tables 2 and 5 that one can obtain an efficient estimator of \( \pi_{\text{s}} \) from the proposed class of estimators \( \hat{\pi }_{\text{HS}} \) even if the value of \( \eta_{1} \) deviates from its exact optimum value \( \eta_{10} \). Thus, the proposed class of estimators \( \hat{\pi }_{\text{HS}} \) can be used in practice even if the investigator is less experienced or has little association with the population under investigation.

Table 5 Range of \( \eta_{1} \) for different value of \( \pi_{\text{s}} \), \( \lambda \), \( P_{1} \) and \( n = 1000 \)
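
A one-function sketch of how this admissible range can be computed from (23); the helper name eta1_range and the inputs are assumptions for illustration.

```python
# Admissible range of eta1 from condition (23): |eta1 - eta10| < |lam - eta10|.
def eta1_range(pi_s, P1, lam):
    V1 = pi_s * (1 - pi_s) + (1 - pi_s) * (1 - P1) / P1
    V2 = pi_s * (1 - pi_s) + (1 - pi_s) * (1 - P1)
    eta10 = lam * V2 / ((1 - lam) * V1 + lam * V2)   # Eq. (13)
    half = abs(lam - eta10)
    return eta10 - half, eta10 + half                # open interval for eta1

print(eta1_range(pi_s=0.1, P1=0.3, lam=0.4))
```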

The range of \( \lambda \) for which the estimators shown in Table 1 are better than the Singh and Tarray (2014) estimator \( \hat{\pi }_{\text{h}} \) can also be obtained from (23). For example, if we set \( \eta_{1} = \left( {1 - \lambda } \right) \), condition (23) reduces to \( \left( {1 - 2\eta_{10} } \right)\left( {1 - 2\lambda } \right) < 0 \), so that the estimator \( \hat{\pi }_{\text{HS1}} \) is more efficient than Singh and Tarray’s (2014) estimator \( \hat{\pi }_{\text{h}} \) if

$$ \lambda > \frac{1}{2}, $$
(24)

provided \( \eta_{10} < \frac{1}{2} \), i.e. \( \lambda V_{2} < \left( {1 - \lambda } \right)V_{1} \).

4 Estimation that utilizes approximate optimum value

In this section, we study the “robustness” of the OE \( \hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} \) in (14) against departure from the true optimum values \( \left( {\eta_{10} ,\eta_{20} } \right) \) of \( \left( {\eta_{1} ,\eta_{2} } \right) \).

It is to be mentioned that the OE \( \hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} \) in (14) is of little practical utility, as it depends on the optimum values \( \left( {\eta_{10} ,\eta_{20} } \right) \) in (13), which are functions of the unknown parameter \( \pi_{\text{s}} \) (under study) and the known probability \( P_{1} \). However, in many practical situations, the investigator has prior information regarding the parameter \( \pi_{\text{s}} \), and hence of \( \left( {\eta_{10} ,\eta_{20} } \right) \), either through long association with the experimental material or through past data. One can also obtain the values of \( \left( {\eta_{10} ,\eta_{20} } \right) \) from the sample data at hand. Thus, the assumption that the investigator has prior, guessed or approximate values of \( \left( {\eta_{10} ,\eta_{20} } \right) \) is quite reasonable. The estimator obtained by substituting the approximate values \( \tilde{\eta }_{10} = \alpha \eta_{10} \) and \( \tilde{\eta }_{20} = \left( {1 - \alpha \eta_{10} } \right) \), where \( \alpha \left( { > 0} \right) \) measures the departure from the true optimum value, in the estimator \( \hat{\pi }_{HS}^{(o)} \) at (14) is defined by

$$ \hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)*}} = \left( {\tilde{\eta }_{10} \hat{\pi }_{1} + \tilde{\eta }_{20} \hat{\pi }_{2} } \right) . $$
(25)

The variance of \( \hat{\pi }_{\text{HS}}^{{({\text{o}})*}} \) is given by

$$ V\left( {\hat{\pi }_{\text{HS}}^{{({\text{o}})*}} } \right) = \frac{1}{n}\left[ {\frac{{\tilde{\eta }_{10}^{2} }}{\lambda }V_{1} + \frac{{\tilde{\eta }_{20}^{2} }}{{\left( {1 - \lambda } \right)}}V_{2} } \right]. $$
(26)

From (9) and (26) we have

$$ n\left[ {V\left( {\hat{\pi }_{\text{h}} } \right) - V\left( {\hat{\pi }_{\text{HS}}^{{({\text{o}})*}} } \right)} \right] = \left[ {\left( {\lambda - \frac{{\tilde{\eta }_{10}^{2} }}{\lambda }} \right)V_{1} + \left( {\left( {1 - \lambda } \right) - \frac{{\tilde{\eta }_{20}^{2} }}{{\left( {1 - \lambda } \right)}}} \right)V_{2} } \right], $$
(27)

which is positive if

$$ \alpha^{2} - 2\alpha + \frac{\lambda }{{\eta_{10} }}\left( {2 - \frac{\lambda }{{\eta_{10} }}} \right) < 0 $$

i.e. if \( \left( {\alpha - 1} \right)^{2} - \left( {1 - \frac{\lambda }{{\eta_{10} }}} \right)^{2} < 0 \)

i.e. if \( \left( {\alpha - 1} \right)^{2} < \left( {1 - \frac{\lambda }{{\eta_{10} }}} \right)^{2} \)

i.e. if \( \left| {\alpha - 1} \right| < \left| {1 - \frac{\lambda }{{\eta_{10} }}} \right| \)

$$ {\text{i}}.{\text{e}}.{\text{ if}}\;\left( {2 - \frac{\lambda }{{\eta_{10} }}} \right) < \alpha < \frac{\lambda }{{\eta_{10} }} $$
(28)

From (9) and (26) the percent relative efficiency of the proposed estimator \( \hat{\pi }_{\text{HS}}^{{({\text{o}})*}} \) for the approximate values \( \left( {\tilde{\eta }_{10} ,\tilde{\eta }_{20} } \right) \), with respect to Singh and Tarray (2014) estimator is given as

$$ {\text{PRE}}\left( {\hat{\pi }_{\text{HS}}^{{({\text{o}})*}} ,\hat{\pi }_{\text{h}} } \right) = \frac{{V\left( {\hat{\pi }_{\text{h}} } \right)}}{{V\left( {\hat{\pi }_{\text{HS}}^{{({\text{o}})*}} } \right)}} \times 100 $$
$$ = \frac{{\lambda \left( {1 - \lambda } \right)\left[ {\lambda V_{1} + \left( {1 - \lambda } \right)V_{2} } \right]}}{{\left[ {\left( {1 - \lambda } \right)\alpha^{2} \eta_{10}^{2} V_{1} + \lambda \left( {1 - \alpha \eta_{10} } \right)^{2} V_{2} } \right]}} \times 100. $$
(29)

We have computed the range of \( \alpha \) for different values of \( \left( {\pi_{\text{s}} ,P_{1} ,\lambda } \right) \) in Table 6. It is observed from Table 6 that the upper limit of \( \alpha \) decreases while the lower limit of \( \alpha \) increases as \( P_{1} \) increases for fixed values of \( \left( {\pi_{\text{s}} ,\lambda } \right) \). Table 6 also exhibits that for fixed values of \( \left( {\lambda ,P_{1} } \right) \), the upper limit of \( \alpha \) decreases while the lower limit of \( \alpha \) increases as \( \pi_{\text{s}} \) increases.

Table 6 Range of \( \alpha \) for different values of \( \pi_{\text{s}} \), \( \lambda \), \( P_{1} \) and \( n = 1000 \)
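
A compact sketch evaluating the interval (28) and the PRE (29); the function name and parameter values are assumptions for illustration. Since \( \lambda /\eta_{10} > 1 \) whenever \( P_{1} < 1 \) (because \( V_{1} > V_{2} \)), the interval takes the form stated in (28).

```python
# Range of alpha from (28) and PRE of Eq. (29); note lam/eta10 > 1 here.
def alpha_range_and_pre(pi_s, P1, lam, alpha):
    V1 = pi_s * (1 - pi_s) + (1 - pi_s) * (1 - P1) / P1
    V2 = pi_s * (1 - pi_s) + (1 - pi_s) * (1 - P1)
    eta10 = lam * V2 / ((1 - lam) * V1 + lam * V2)
    r = lam / eta10
    pre = (100 * lam * (1 - lam) * (lam * V1 + (1 - lam) * V2)
           / ((1 - lam) * alpha ** 2 * eta10 ** 2 * V1
              + lam * (1 - alpha * eta10) ** 2 * V2))
    return (2 - r, r), pre   # admissible (open) alpha interval, PRE at alpha

print(alpha_range_and_pre(pi_s=0.1, P1=0.3, lam=0.4, alpha=0.9))
```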

Further, to appreciate the idea of robustness, we have computed the percent relative efficiency of the proposed estimator \( \hat{\pi }_{\text{HS}}^{{({\text{o}})*}} \) with respect to the Singh and Tarray (2014) estimator \( \hat{\pi }_{\text{h}} \) for various values of \( \left( {\pi_{\text{s}} ,P_{1} ,\lambda } \right) \) and \( \alpha \), as demonstrated in Tables 7, 8, 9, 10, 11, 12. It is observed that the values of \( {\text{PRE}}\left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)*}} ,\hat{\pi }_{\text{h}} } \right) \) are more than 100. Further, from Tables 7, 8, 9, 10, 11, 12 we note that the value of \( {\text{PRE}}\left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)*}} ,\hat{\pi }_{\text{h}} } \right) \) decreases as the value of \( P_{1} \) increases and increases with increasing \( \pi_{\text{s}} \). Thus, we conclude that the proposed estimator \( \hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)*}} \) retains its practical utility even if \( \alpha \) departs from unity.

Table 7 Percent relative efficiency of the suggested optimum estimator \( \left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} } \right) \) with respect to Singh and Tarray (2014) estimator \( \left( {\hat{\pi }_{h} } \right) \) when \( \alpha = 0.5 \)
Table 8 Percent relative efficiency of the suggested optimum estimator \( \left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} } \right) \) with respect to Singh and Tarray (2014) estimator \( \left( {\hat{\pi }_{\text{h}} } \right) \) when \( \alpha = 0.7 \)
Table 9 Percent relative efficiency of the suggested optimum estimator \( \left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} } \right) \) with respect to Singh and Tarray (2014) estimator \( \left( {\hat{\pi }_{h} } \right) \) when \( \alpha = 0.8 \)
Table 10 Percent relative efficiency of the suggested optimum estimator \( \left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} } \right) \) with respect to Singh and Tarray (2014) estimator \( \left( {\hat{\pi }_{\text{h}} } \right) \) when \( \alpha = 0.9 \)
Table 11 Percent relative efficiency of the suggested optimum estimator \( \left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} } \right) \) with respect to Singh and Tarray (2014) estimator \( \left( {\hat{\pi }_{\text{h}} } \right) \) when \( \alpha = 1.3 \)
Table 12 Percent relative efficiency of the suggested optimum estimator \( \left( {\hat{\pi }_{\text{HS}}^{{\left( {\text{o}} \right)}} } \right) \) with respect to Singh and Tarray (2014) estimator \( \left( {\hat{\pi }_{\text{h}} } \right) \) when \( \alpha = 1.5 \)

5 Stratified mixed randomized response model

Stratified random sampling is usually applied by decomposing the population into distinct homogeneous groups called strata. It gives a reasonably representative sample of the population. Many researchers have suggested RR techniques using stratified random sampling, for instance, Hong et al. (1994), Kim and Elam (2005) and Singh and Tarray (2014). We now present the estimator under the stratified estimation method proposed by Singh and Tarray (2014), to be used later for comparison purposes.

5.1 Singh and Tarray (2014) stratified mixed randomized response model

Singh and Tarray (2014) assumed that the population is partitioned into “r” nonoverlapping strata, and a sample is selected by simple random sampling with replacement from each stratum. To get the full benefit from stratification, they assumed that the number of units in each stratum is known. In this model, an individual respondent in a sample from each stratum is instructed to answer a direct question “I am a member of the innocuous trait group.” Respondents answer the direct question by “Yes” or “No.” If a respondent answers “Yes,” then he or she is instructed to go to the randomization device \( R_{k1} \) consisting of the statements: (i) “I am a member of the sensitive trait group” and (ii) “I am a member of the innocuous trait group” with preassigned probabilities \( Q_{k} \) and \( \left( {1 - Q_{k} } \right) \), respectively. If a respondent answers “No,” then the respondent is instructed to use a randomization procedure due to Mangat (1994). In Mangat’s (1994) RR procedure, each respondent is instructed to say “Yes” if he or she is a member of the sensitive trait group. If he or she is not a member of the sensitive trait group, then the respondent is required to use Warner’s (1965) randomization device \( R_{k2} \) consisting of the statements: (a) “I belong to the sensitive trait group” and (b) “I do not belong to the sensitive trait group” with preassigned probabilities \( P_{k} \) and \( \left( {1 - P_{k} } \right) \), respectively. Then he or she reports “Yes” or “No” according to the outcome of the randomization device \( R_{k2} \) and his or her actual status with respect to the sensitive trait group. The survey procedures are performed under the assumption that the sensitive and the innocuous questions in randomization device \( R_{k1} \) are unrelated and independent. To protect the respondent’s privacy, the respondents should not disclose to the interviewer the question they answered from either \( R_{k1} \) or \( R_{k2} \). Suppose we denote by \( m_{k} \) the number of units in the sample from stratum k and by n the total number of units in the samples from all strata. Let \( m_{k1} \) be the number of people responding “Yes” and \( m_{k2} \) the number of people responding “No” when the respondents in a sample \( m_{k} \) were asked the direct question, so that \( \sum\limits_{k = 1}^{r} {m_{k} = \sum\limits_{k = 1}^{r} {\left( {m_{k1} + m_{k2} } \right)} } \). Under the assumption that these “Yes” or “No” reports are made truthfully, and that \( Q_{k} \) and \( P_{k} \left( { \ne 0.5} \right) \) are set by the researcher, the proportion of “Yes” answers from the respondents using the randomization device \( R_{k1} \) will be

$$ \begin{aligned} Y_{k} &= Q_{k} \pi_{{S_{k} }} + \left( {1 - Q_{k} } \right)\pi_{1k} \quad {\text{for}}\quad k \, = 1,2, \ldots ,r, \hfill \\ &= Q_{k} \pi_{{S_{k} }} + \left( {1 - Q_{k} } \right)\;({\text{i}}.{\text{e}}.\;\pi_{1k} = 1). \hfill \\ \end{aligned} $$
(30)

The estimator of \( \pi_{{{\text{S}}_{k} }} \) in terms of the sample proportion of “Yes” response \( \hat{Y}_{k} \) is given as

$$ \hat{\pi }_{{{\text{h}}1_{k} }} = \frac{{\hat{Y}_{k} - \left( {1 - Q_{k} } \right)}}{{Q_{k} }}\quad {\text{for}}\quad k \, = 1,2, \ldots ,r, $$
(31)

with the variance

$$ V\left( {\hat{\pi }_{{{\text{h}}1_{k} }} } \right) = \frac{{\left( {1 - \pi_{{{\text{S}}_{k} }} } \right)\left[ {Q_{k} \pi_{{{\text{S}}_{k} }} + \left( {1 - Q_{k} } \right)} \right]}}{{m_{k1} Q_{k} }} = \frac{{V_{1k} }}{{m_{k1} }}, $$
(32)

where \( V_{1k} = \frac{{\left( {1 - \pi_{{{\text{S}}_{k} }} } \right)\left[ {Q_{k} \pi_{{{\text{S}}_{k} }} + \left( {1 - Q_{k} } \right)} \right]}}{{Q_{k} }}. \)

The proportion of “Yes” answers from the respondents using Mangat’s (1994) randomization device \( R_{k2} \) is

$$ X_{k} = \pi_{{{\text{S}}_{k} }} + \left( {1 - \pi_{{{\text{S}}_{k} }} } \right)\left( {1 - P_{k} } \right). $$
(33)

The estimator of \( \pi_{{{\text{S}}_{k} }} \) in terms of the sample proportion of “Yes” response \( \hat{X}_{k} \) is given by

$$ \hat{\pi }_{{{\text{h}}2_{k} }} = \frac{{\hat{X}_{k} - \left( {1 - P_{k} } \right)}}{{P_{k} }}, $$
(34)

with the variance (under the choice \( P_{k} = \left( {2 - Q_{k} } \right)^{ - 1} \), analogous to Sect. 1.1)

$$ V\left( {\hat{\pi }_{{{\text{h}}2_{k} }} } \right) = \frac{1}{{\left( {m_{k} - m_{k1} } \right)}}\left[ {\pi_{{{\text{S}}_{k} }} \left( {1 - \pi_{{{\text{S}}_{k} }} } \right) + \left( {1 - Q_{k} } \right)\left( {1 - \pi_{{{\text{S}}_{k} }} } \right)} \right] = \frac{{V_{2k} }}{{m_{k2} }}, $$
(35)

where \( V_{2k} = \left[ {\pi_{{S_{k} }} \left( {1 - \pi_{{{\text{S}}_{k} }} } \right) + \left( {1 - Q_{k} } \right)\left( {1 - \pi_{{{\text{S}}_{k} }} } \right)} \right] \)

The pooled unbiased estimator of \( \pi_{{{\text{S}}_{k} }} \) in terms of the sample proportion of “Yes” response \( \hat{Y}_{k} \) and \( \hat{X}_{k} \) is given as

$$ \hat{\pi }_{{{\text{mS}}_{k} }} = \frac{{m_{k1} }}{{m_{k} }}\hat{\pi }_{{{\text{h}}1_{k} }} + \frac{{m_{k} - m_{k1} }}{{m_{k} }}\hat{\pi }_{{{\text{h}}2_{k} }} ,\quad {\text{for}}\quad 0 < \frac{{m_{k1} }}{{m_{k} }} < 1. $$
(36)

with the variance

$$ V\left( {\hat{\pi }_{{{\text{mS}}_{k} }} } \right) = \frac{{\pi_{{{\text{S}}_{k} }} \left( {1 - \pi_{{{\text{S}}_{k} }} } \right)}}{{m_{k} }} + \frac{1}{{m_{k} }}\left[ {\left( {1 - \pi_{{{\text{S}}_{k} }} } \right)\left( {1 - Q_{k} } \right)\left\{ {\frac{{\lambda_{k} }}{{Q_{k} }} + \left( {1 - \lambda_{k} } \right)} \right\}} \right], $$
(37)

where \( m_{k} = m_{k1} + m_{k2} \) and \( \lambda_{k} = {{m_{k1} } \mathord{\left/ {\vphantom {{m_{k1} } {m_{k} }}} \right. \kern-0pt} {m_{k} }} \).

Thus, the unbiased estimator of \( \pi_{\text{S}} = \sum\limits_{k = 1}^{r} {w_{k} \pi_{{{\text{S}}_{k} }} } \) is given as

$$ \hat{\pi }_{\text{mS}} = \sum\limits_{k = 1}^{r} {w_{k} \hat{\pi }_{{{\text{mS}}_{k} }} } = \sum\limits_{k = 1}^{r} {w_{k} \left[ {\frac{{m_{k1} }}{{m_{k} }}\hat{\pi }_{{{\text{h}}1_{k} }} + \frac{{m_{k} - m_{k1} }}{{m_{k} }}\hat{\pi }_{{{\text{h}}2_{k} }} } \right]} . $$
(38)

The variance of \( \hat{\pi }_{\text{mS}} \) is given as

$$ V\left( {\hat{\pi }_{\text{mS}} } \right) = \sum\limits_{k = 1}^{r} {\frac{{w_{k}^{2} }}{{m_{k} }}} \left[ {\pi_{{{\text{S}}_{k} }} \left( {1 - \pi_{{{\text{S}}_{k} }} } \right) + \left( {1 - \pi_{{{\text{S}}_{k} }} } \right)\left( {1 - Q_{k} } \right)\left\{ {\frac{{\lambda_{k} }}{{Q_{k} }} + \left( {1 - \lambda_{k} } \right)} \right\}} \right]. $$
(39)

In the next section, we propose a weighted unbiased estimator of the population proportion \( \pi_{\text{S}} \) estimated by the Singh and Tarray (2014) stratified estimator and study its properties.

6 Proposed stratified mixed randomized response model using weights

Moving along the direction for the stratified mixed RR model traced by Singh and Tarray (2014), we introduce a weighted unbiased estimator for \( \pi_{{{\text{S}}_{k} }} \) as

$$ \hat{\pi }_{{{\text{mh}}_{k} }} = \eta_{1k} \hat{\pi }_{{{\text{h}}1k}} + \eta_{2k} \hat{\pi }_{{{\text{h}}2k}} , $$
(40)

where \( \eta_{1k} \) and \( \eta_{2k} \) are suitably chosen weights such that \( \eta_{1k} + \eta_{2k} = 1 \).

For \( \eta_{1k} = \frac{{\lambda_{k} Q_{k} }}{{\left( {1 - \lambda_{k} } \right)}} \) and \( \eta_{2k} = \left( {1 - \eta_{1k} } \right) = \frac{{\left\{ {1 - \lambda_{k} \left( {1 + Q_{k} } \right)} \right\}}}{{\left( {1 - \lambda_{k} } \right)}} \) in (40), we get an unbiased estimator \( \hat{\pi }_{{{\text{mh}}_{k} }} \) of \( \pi_{{{\text{S}}_{k} }} \) as

$$ \hat{\pi }_{{{\text{mh}}_{k} }} = \frac{{\lambda_{k} Q_{k} }}{{\left( {1 - \lambda_{k} } \right)}}\hat{\pi }_{{{\text{h}}1k}} + \frac{{\left\{ {1 - \lambda_{k} \left( {1 + Q_{k} } \right)} \right\}}}{{\left( {1 - \lambda_{k} } \right)}}\hat{\pi }_{{{\text{h}}2k}} . $$
(41)

We mention that in (41) \( \lambda_{k} 's \) and \( Q_{k} 's \) are known.

The variance of the estimator \( \hat{\pi }_{{{\text{mh}}_{k} }} \) is given as

$$ \begin{aligned} V\left( {\hat{\pi }_{{{\text{mh}}_{k} }} } \right) = & \frac{{\lambda_{k}^{2} Q_{k}^{2} }}{{\left( {1 - \lambda_{k} } \right)^{2} }}V\left( {\hat{\pi }_{{{\text{h}}1k}} } \right) + \frac{{\left\{ {1 - \lambda_{k} \left( {1 + Q_{k} } \right)} \right\}^{2} }}{{\left( {1 - \lambda_{k} } \right)^{2} }}V\left( {\hat{\pi }_{{{\text{h}}2k}} } \right) \hfill \\ = & \frac{{\lambda_{k} Q_{k}^{2} }}{{\left( {1 - \lambda_{k} } \right)^{2} }}\frac{{V_{1k} }}{{m_{k} }} + \frac{{\left\{ {1 - \lambda_{k} \left( {1 + Q_{k} } \right)} \right\}^{2} }}{{\left( {1 - \lambda_{k} } \right)^{2} }}\frac{{V_{2k} }}{{m_{k} \left( {1 - \lambda_{k} } \right)}} \hfill \\ = & \frac{1}{{m_{k} }}\left[ {\frac{{\lambda_{k} Q_{k}^{2} V_{1k} }}{{\left( {1 - \lambda_{k} } \right)^{2} }} + \frac{{\left\{ {1 - \lambda_{k} \left( {1 + Q_{k} } \right)} \right\}^{2} V_{2k} }}{{\left( {1 - \lambda_{k} } \right)^{3} }}} \right], \hfill \\ \end{aligned} $$
(42)
$$ = \frac{{D_{k} }}{{m_{k} }}, $$
(43)

where \( D_{k} = \frac{1}{{\left( {1 - \lambda_{k} } \right)^{2} }}\left[ {\lambda_{k} Q_{k}^{2} V_{1k} + \frac{{\left\{ {1 - \lambda_{k} \left( {1 + Q_{k} } \right)} \right\}^{2} V_{2k} }}{{\left( {1 - \lambda_{k} } \right)}}} \right] \), and \( V_{1k} \) and \( V_{2k} \) are the same as defined earlier.

The unbiased estimator of \( \pi_{S} = \sum\limits_{k = 1}^{r} {w_{k} \pi_{{S_{k} }} } \) is given by

$$ \hat{\pi }_{\text{mh}} = \sum\limits_{k = 1}^{r} {w_{k}^{{}} \hat{\pi }_{{{\text{mh}}_{k} }} } , $$
(44)

with the variance

$$ \begin{aligned} V\left( {\hat{\pi }_{\text{mh}} } \right) = &\sum\limits_{k = 1}^{r} {w_{k}^{2} V\left( {\hat{\pi }_{{{\text{mh}}_{k} }} } \right)} \hfill \\ = & \sum\limits_{k = 1}^{r} {\frac{{w_{k}^{2} }}{{m_{k} }}D_{k} } . \hfill \\ \end{aligned} $$
(45)

6.1 Variance of \( \hat{\pi }_{\text{mh}} \) under Neyman allocation

Information on \( \pi_{{{\text{S}}_{k} }} \) is usually unavailable, but if prior information about it is available from past experience, we may derive the Neyman allocation formula.

The Neyman allocation of n to \( m_{1} ,m_{2} , \ldots ,m_{r - 1} \;{\text{and}}\;m_{r} \) that minimizes the variance of \( \hat{\pi }_{\text{mh}} \) subject to \( n = \sum\nolimits_{k = 1}^{r} {m_{k} } \) is approximately given by

$$ m_{k} \propto w_{k} \sqrt {D_{k} } , $$
$$ \Rightarrow m_{k} = \frac{{nw_{k} \sqrt {D_{k} } }}{{\sum\limits_{k = 1}^{r} {w_{k} \sqrt {D_{k} } } }}. $$
(46)

Using (45) and (46) the minimal variance of \( \hat{\pi }_{\text{mh}} \) is given by

$$ V\left( {\hat{\pi }_{\text{mh}} } \right)_{\text{opt}} = \sum\limits_{k = 1}^{r} {\frac{{w_{k}^{2} }}{{nw_{k} \sqrt {D_{k} } }}\left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {D_{k} } } } \right)D_{k} } , $$
$$ = \frac{1}{n}\left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {D_{k} } } } \right)^{2} $$
(47)
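
A short numerical sketch of the allocation (46) and the resulting minimum variance (47); the stratum weights and parameter values below are illustrative assumptions.

```python
# Neyman allocation (46) and minimum variance (47) for the stratified model.
import numpy as np

def D_k(pi_sk, Qk, lamk):
    """D_k as defined below Eq. (43)."""
    V1k = (1 - pi_sk) * (Qk * pi_sk + (1 - Qk)) / Qk
    V2k = pi_sk * (1 - pi_sk) + (1 - Qk) * (1 - pi_sk)
    return (lamk * Qk ** 2 * V1k
            + (1 - lamk * (1 + Qk)) ** 2 * V2k / (1 - lamk)) / (1 - lamk) ** 2

def neyman(n, w, D):
    share = np.asarray(w) * np.sqrt(D)
    m = n * share / share.sum()   # Eq. (46): m_k proportional to w_k * sqrt(D_k)
    v_opt = share.sum() ** 2 / n  # Eq. (47)
    return m, v_opt

D = [D_k(0.1, 0.3, 0.4), D_k(0.2, 0.4, 0.4)]
print(neyman(n=1000, w=[0.6, 0.4], D=D))
```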

6.2 Efficiency comparison with Singh and Tarray (2014) model

In this section, we compare the proposed stratified mixed randomized response model with Singh and Tarray’s (2014) model by way of a variance comparison.

To compare the proposed estimator \( \hat{\pi }_{\text{mh}} \) with Singh and Tarray’s (2014) estimator \( \hat{\pi }_{\text{mS}} \), we write the variance of Singh and Tarray’s (2014) estimator \( \hat{\pi }_{\text{mS}} \) under Neyman allocation as

$$ V\left( {\hat{\pi }_{\text{mS}} } \right)_{\text{opt}} = \frac{1}{n}\left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {S_{k}^{*} } } } \right)^{2} , $$
(48)

where \( S_{k}^{*} = \left[ {\pi_{{{\text{S}}_{k} }} \left( {1 - \pi_{{{\text{S}}_{k} }} } \right) + \frac{{\left( {1 - \pi_{{{\text{S}}_{k} }} } \right)\left( {1 - Q_{k} } \right)\lambda_{k} }}{{Q_{k} }} + \left( {1 - \pi_{{{\text{S}}_{k} }} } \right)\left( {1 - Q_{k} } \right)\left( {1 - \lambda_{k} } \right)} \right] \).

From Eqs. (47) and (48), we have

$$ n\left[ {V\left( {\hat{\pi }_{\text{mS}} } \right)_{\text{opt}} - V\left( {\hat{\pi }_{\text{mh}} } \right)_{\text{opt}} } \right] = \left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {S_{k}^{*} } } } \right)^{2} - \left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {D_{k} } } } \right)^{2} , $$

which is positive if \( \left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {S_{k}^{*} } } } \right)^{2} > \left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {D_{k} } } } \right)^{2} \)

i.e. if \( \sum\limits_{k = 1}^{r} {w_{k} \sqrt {D_{k} } } < \sum\limits_{k = 1}^{r} {w_{k} \sqrt {S_{k}^{*} } } \)

i.e. if \( \sum\limits_{k = 1}^{r} {w_{k} \left( {\sqrt {D_{k} } - \sqrt {S_{k}^{*} } } \right)} < 0 \)

i.e. if \( \left( {\sqrt {D_{k} } - \sqrt {S_{k}^{*} } } \right) < 0\quad \forall \quad k = 1,2, \ldots ,r \)

$$ {\text{i}}.{\text{e}}. \, \;{\text{if}}\;D_{k} < S_{k}^{*} \quad \forall \quad k = 1,2, \ldots ,r $$
(49)

Thus, we state the following theorem.

Theorem 6.1

The proposed mixed randomized response model based on stratified random sampling is more efficient than Singh and Tarray’s (2014) stratified mixed randomized response model as long as the condition \( D_{k} < S_{k}^{*} \;\forall \;k = 1,2, \ldots ,r \) is satisfied.

To get an idea of the efficiency gain from the proposed stratified estimator \( \hat{\pi }_{\text{mh}} \), we perform a numerical study. Its performance is evaluated through the percent relative efficiency with respect to the Singh and Tarray (2014) stratified estimator \( \hat{\pi }_{\text{mS}} \) in the case of two strata (i.e., r = 2), using the formula:

$$ \begin{aligned} {\text{PRE}}\left( {\hat{\pi }_{\text{mh}} ,\hat{\pi }_{\text{mS}} } \right) = \frac{{V\left( {\hat{\pi }_{\text{mS}} } \right)}}{{V\left( {\hat{\pi }_{\text{mh}} } \right)}} \times 100 \hfill \\ = \frac{{\left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {S_{k}^{*} } } } \right)^{2} }}{{\left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {D_{k} } } } \right)^{2} }} \times 100, \hfill \\ \end{aligned} $$
(50)

where

\( \pi_{\text{S}} = w_{1} \pi_{{{\text{S}}_{1} }} + w_{2} \pi_{{{\text{S}}_{2} }} , \) \( \sqrt {S_{k}^{*} } = \left[ {\pi_{{{\text{S}}_{k} }} \left( {1 - \pi_{{{\text{S}}_{k} }} } \right) + \frac{{\left( {1 - \pi_{{{\text{S}}_{k} }} } \right)\left( {1 - Q_{k} } \right)\lambda_{k} }}{{Q_{k} }} + \left( {1 - \pi_{{{\text{S}}_{k} }} } \right)\left( {1 - Q_{k} } \right)\left( {1 - \lambda_{k} } \right)} \right]^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}} \), and \( \sqrt {D_{k} } = \frac{1}{{\left( {1 - \lambda_{k} } \right)}}\left[ {\lambda_{k} Q_{k}^{2} V_{1k} + \frac{{\left\{ {1 - \lambda_{k} \left( {1 + Q_{k} } \right)} \right\}^{2} V_{2k} }}{{\left( {1 - \lambda_{k} } \right)}}} \right]^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}} \).

Findings for the percent relative efficiency are given in Table 13 for different choices of \( \pi_{\text{S}} ,\lambda_{k} \) and \( Q_{k} \).

Table 13 Percent relative efficiency of the proposed stratified estimator \( \hat{\pi }_{\text{mh}} \) with respect to Singh and Tarray (2014) estimator \( \hat{\pi }_{\text{mS}} \)

It is observed from Table 13 that the values of \( {\text{PRE}}\left( {\hat{\pi }_{\text{mh}} ,\hat{\pi }_{\text{mS}} } \right) \) are more than 100. Thus, the proposed estimator \( \hat{\pi }_{\text{mh}} \) is more efficient than Singh and Tarray’s (2014) estimator \( \hat{\pi }_{\text{mS}} \) for the given parametric values. Further, we note that \( {\text{PRE}}\left( {\hat{\pi }_{\text{mh}} ,\hat{\pi }_{\text{mS}} } \right) \) decreases as \( Q_{1} \) increases. For fixed values of \( \left( {\pi_{\text{S}} ,Q_{1} } \right) \), \( {\text{PRE}}\left( {\hat{\pi }_{\text{mh}} ,\hat{\pi }_{\text{mS}} } \right) \) increases as \( \lambda \) increases. A larger gain in efficiency is observed as long as \( Q_{1} \) lies between 0.1 and 0.3 (i.e. \( 0.1 \le Q_{1} \le 0.3 \)) and \( Q_{2} \) lies between 0.2 and 0.5 (i.e. \( 0.2 \le Q_{2} \le 0.5 \)).
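
For reference, a minimal sketch computing the PRE of (50) for two strata, repeating \( S_{k}^{*} \) (defined below (48)) and \( D_{k} \) so the snippet stands alone; the parameter values are illustrative assumptions, not the ones behind Table 13.

```python
# PRE(pi_mh, pi_mS) of Eq. (50) for r strata.
def D_k(pi_sk, Qk, lamk):
    V1k = (1 - pi_sk) * (Qk * pi_sk + (1 - Qk)) / Qk
    V2k = pi_sk * (1 - pi_sk) + (1 - Qk) * (1 - pi_sk)
    return (lamk * Qk ** 2 * V1k
            + (1 - lamk * (1 + Qk)) ** 2 * V2k / (1 - lamk)) / (1 - lamk) ** 2

def S_k_star(pi_sk, Qk, lamk):
    return (pi_sk * (1 - pi_sk)
            + (1 - pi_sk) * (1 - Qk) * lamk / Qk
            + (1 - pi_sk) * (1 - Qk) * (1 - lamk))

def pre_stratified(w, pi_s, Q, lam):
    num = sum(wk * S_k_star(pk, qk, lk) ** 0.5
              for wk, pk, qk, lk in zip(w, pi_s, Q, lam)) ** 2
    den = sum(wk * D_k(pk, qk, lk) ** 0.5
              for wk, pk, qk, lk in zip(w, pi_s, Q, lam)) ** 2
    return 100 * num / den

print(pre_stratified(w=[0.5, 0.5], pi_s=[0.1, 0.2], Q=[0.2, 0.3], lam=[0.4, 0.4]))
```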

6.3 Estimation of population proportion using mixed randomized response model when weights \( \eta_{1k} \) and \( \eta_{2k} \) are scalars

Using the estimator \( \hat{\pi }_{{{\text{mh}}k}} \) defined at (40) we define a weighted unbiased estimator of population proportion \( \pi_{\text{S}} = \sum\nolimits_{k = 1}^{r} {w_{k} \pi_{{{\text{S}}_{k} }} } \) as

$$ \begin{aligned} \hat{\pi }_{m\eta } &= \sum\limits_{k = 1}^{r} {w_{k} \hat{\pi }_{{{\text{mh}}k}} } \hfill \\ & = \sum\limits_{k = 1}^{r} {w_{k} \left\{ {\eta_{1k} \hat{\pi }_{{{\text{h}}1k}} + \left( {1 - \eta_{1k} } \right)\hat{\pi }_{{{\text{h}}2k}} } \right\}} \hfill \\ \end{aligned} . $$
(51)

The variance of \( \hat{\pi }_{{{\text{m}}\eta }} \) is given by

$$ V\left( {\hat{\pi }_{{{\text{m}}\eta }} } \right) = \sum\limits_{k = 1}^{r} {\frac{{w_{k}^{2} }}{{m_{k} \left( {1 - \lambda_{k} } \right)\lambda_{k} }}\left[ {\lambda_{k} V_{2k} + \eta_{1k}^{2} \left\{ {\left( {1 - \lambda_{k} } \right)V_{1k} + \lambda_{k} V_{2k} } \right\} - 2\eta_{1k} \lambda_{k} V_{2k} } \right]} . $$
(52)

The variance \( V\left( {\hat{\pi }_{{{\text{m}}\eta }} } \right) \) at (52) is minimized for

$$ \left. \begin{aligned} \eta_{1k} = \frac{{\lambda_{k} V_{2k} }}{{\left[ {\left( {1 - \lambda_{k} } \right)V_{1k} + \lambda_{k} V_{2k} } \right]}} = \eta_{1ko} \hfill \\ \eta_{2k} = \frac{{\left( {1 - \lambda_{k} } \right)V_{1k} }}{{\left[ {\left( {1 - \lambda_{k} } \right)V_{1k} + \lambda_{k} V_{2k} } \right]}} = \eta_{2ko} \hfill \\ \end{aligned} \right\}. $$
(53)

Thus, the resulting minimum variance of \( \hat{\pi }_{{{\text{m}}\eta }} \) is given by

$$ V_{\hbox{min} } \left( {\hat{\pi }_{{{\text{m}}\eta }} } \right) = \sum\limits_{k = 1}^{r} {\frac{{w_{k}^{2} }}{{m_{k} }}V_{{k{\text{o}}}} } , $$
(54)

where

$$ V_{{k{\text{o}}}} = \frac{{V_{1k} V_{2k} }}{{\left[ {\left( {1 - \lambda_{k} } \right)V_{1k} + \lambda_{k} V_{2k} } \right]}}. $$
(55)

Thus, the resulting pooled optimum estimator for \( \pi_{\text{S}} \) is given by

$$ \hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} = \sum\limits_{k = 1}^{r} {w_{k} \left[ {\eta_{{1k{\text{o}}}} \hat{\pi }_{{{\text{h}}1k}} + \left( {1 - \eta_{{1k{\text{o}}}} } \right)\hat{\pi }_{{{\text{h}}2k}} } \right]} , $$
(56)

whose variance is

$$ V\left( {\hat{\pi }_{{{\text{m}}\eta_{ko} }} } \right) = V_{\hbox{min} } \left( {\hat{\pi }_{{{\text{m}}\eta }} } \right) = \sum\limits_{k = 1}^{r} {\frac{{w_{k}^{2} }}{{m_{k} }}V_{{k{\text{o}}}} } . $$
(57)

The variance of the optimum estimator (OE) under the Neyman allocation \( m_{k} \propto w_{k} \sqrt {V_{{k{\text{o}}}} } \), k = 1,2,…,r,

$$ {\text{i}}.{\text{e}}.\;\frac{{m_{k} }}{n} = \frac{{w_{k} \sqrt {V_{{k{\text{o}}}} } }}{{\sum\limits_{k = 1}^{r} {w_{k} \sqrt {V_{{k{\text{o}}}} } } }},\quad k = 1,2, \ldots ,r, $$
(58)

is given by

$$ V\left( {\hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} } \right)_{\text{opt}} = \frac{1}{n}\left( {\sum\limits_{k = 1}^{r} {w_{k} } \sqrt {V_{{k{\text{o}}}} } } \right)^{2} . $$
(59)

From (48) and (59) we have

$$ V\left( {\hat{\pi }_{\text{mS}} } \right) - V\left( {\hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} } \right) = \frac{1}{n}\left\{ {\left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {S_{k}^{*} } } } \right)^{2} - \left( {\sum\limits_{k = 1}^{r} {w_{k} } \sqrt {V_{{k{\text{o}}}} } } \right)^{2} } \right\}, $$

which is positive if

$$ \left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {S_{k}^{*} } } } \right)^{2} > \left( {\sum\limits_{k = 1}^{r} {w_{k} } \sqrt {V_{{k{\text{o}}}} } } \right)^{2} , $$

i.e. if \( \sum\limits_{k = 1}^{r} {w_{k} \sqrt {S_{k}^{*} } } - \sum\limits_{k = 1}^{r} {w_{k} } \sqrt {V_{{k{\text{o}}}} } > 0 \)

i.e. if \( \sum\limits_{k = 1}^{r} {w_{k} } \left( {\sqrt {S_{k}^{*} } - \sqrt {V_{{k{\text{o}}}} } } \right) > 0 \)

i.e. if \( \sqrt {S_{k}^{*} } - \sqrt {V_{{k{\text{o}}}} } > 0\quad \forall \quad k = 1,2, \ldots ,r \)

i.e. if \( \sqrt {S_{k}^{*} } > \sqrt {V_{{k{\text{o}}}} } \quad \forall \quad k = 1,2, \ldots ,r \)

i.e. if \( S_{k}^{*} > V_{{k{\text{o}}}} \quad \forall \quad k = 1,2, \ldots ,r \)

i.e. if \( \begin{aligned} \left[ {\pi_{{{\text{S}}_{k} }} \left( {1 - \pi_{{{\text{S}}_{k} }} } \right) + \left( {1 - \pi_{{{\text{S}}_{k} }} } \right)\left( {1 - Q_{k} } \right) + \left( {1 - \pi_{{{\text{S}}_{k} }} } \right)\left( {1 - Q_{k} } \right)\lambda_{k} \left( {\frac{1}{{Q_{k} }} - 1} \right)} \right] \hfill \\ > \frac{{V_{1k} V_{2k} }}{{\left[ {\left( {1 - \lambda_{k} } \right)V_{1k} + \lambda_{k} V_{2k} } \right]}}\quad \forall \quad k = 1,2, \ldots ,r \hfill \\ \end{aligned} \)

i.e. if \( V_{2k} + \left( {1 - \pi_{{{\text{S}}_{k} }} } \right)\left( {1 - Q_{k} } \right)\lambda_{k} \left( {\frac{1}{{Q_{k} }} - 1} \right) - \frac{{V_{1k} V_{2k} }}{{\left\{ {\left( {1 - \lambda_{k} } \right)V_{1k} + \lambda_{k} V_{2k} } \right\}}} > 0\quad \forall \quad k = 1,2, \ldots ,r \)

i.e. if \( \frac{{\left( {1 - \pi_{{{\text{S}}_{k} }} } \right)\left( {1 - Q_{k} } \right)^{2} \lambda_{k} }}{{Q_{k} }} - \frac{{\lambda_{k} V_{2k} \left( {1 - \pi_{{{\text{S}}_{k} }} } \right)\left( {1 - Q_{k} } \right)^{2} }}{{Q_{k} \left[ {\left( {1 - \lambda_{k} } \right)V_{1k} + \lambda_{k} V_{2k} } \right]}} > 0\quad \forall \quad k = 1,2, \ldots ,r \)

$$ {\text{i}} . {\text{e}}.\;{\text{if}}\;1 - \frac{{V_{2k} }}{{\left\{ {\left( {1 - \lambda_{k} } \right)V_{1k} + \lambda_{k} V_{2k} } \right\}}} > 0\quad \forall \quad k = 1,2, \ldots ,r, $$
(60)

which is always true, since \( V_{1k} > V_{2k} \).

Thus, the proposed optimum estimator (OE) \( \hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} \) is more efficient than Singh and Tarray’s (2014) estimator \( \hat{\pi }_{\text{mS}} \) under Neyman allocation.

From (47) and (59) we have

$$ n\left[ {V\left( {\hat{\pi }_{\text{mh}} } \right) - V\left( {\hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} } \right)} \right] = \left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {D_{k} } } } \right)^{2} - \left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {V_{{k{\text{o}}}} } } } \right)^{2} $$
$$ = \left\{ {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {D_{k} } } + \sum\limits_{k = 1}^{r} {w_{k} \sqrt {V_{{k{\text{o}}}} } } } \right\}\left\{ {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {D_{k} } } - \sum\limits_{k = 1}^{r} {w_{k} \sqrt {V_{{k{\text{o}}}} } } } \right\} $$
$$ = \left\{ {\sum\limits_{k = 1}^{r} {w_{k} \left( {\sqrt {D_{k} } + \sqrt {V_{{k{\text{o}}}} } } \right)} } \right\}\left\{ {\sum\limits_{k = 1}^{r} {w_{k} \left( {\sqrt {D_{k} } - \sqrt {V_{{k{\text{o}}}} } } \right)} } \right\}, $$

which is positive, since the first factor is always positive, if

$$ \left( {\sqrt {D_{k} } - \sqrt {V_{{k{\text{o}}}} } } \right) > 0,\quad \forall \quad k = 1,2, \ldots ,r $$

i.e. if \( \sqrt {D_{k} } > \sqrt {V_{{k{\text{o}}}} } ,\quad \forall \quad k = 1,2, \ldots ,r \);

i.e. if \( D_{k} > V_{{k{\text{o}}}} ,\quad \forall \quad k = 1,2, \ldots ,r \);

$$ {\text{i}} . {\text{e}}.\;{\text{if}}\;\left( {D_{k} - V_{ko} } \right) > 0,\quad \forall \quad k = 1,2, \ldots ,r. $$
(61)

Now we have

$$ \begin{aligned} \left( {D_{k} - V_{{k{\text{o}}}} } \right) = & \frac{1}{{\left( {1 - \lambda_{k} } \right)^{3} }}\left[ {\left( {1 - \lambda_{k} } \right)\lambda_{k} Q_{k}^{2} V_{1k} + \left\{ {1 - \lambda_{k} \left( {1 + Q_{k} } \right)} \right\}^{2} V_{2k} - \frac{{\left( {1 - \lambda_{k} } \right)^{3} V_{1k} V_{2k} }}{{\left\{ {\left( {1 - \lambda_{k} } \right)V_{1k} + \lambda_{k} V_{2k} } \right\}}}} \right] \\ = & \frac{1}{{\left( {1 - \lambda_{k} } \right)^{3} }}\left[ {\left( {1 - \lambda_{k} } \right)\lambda_{k} Q_{k}^{2} V_{1k} + \left\{ {\left( {1 - \lambda_{k} } \right)^{2} + \lambda_{k}^{2} Q_{k}^{2} - 2\lambda_{k} \left( {1 - \lambda_{k} } \right)Q_{k} } \right\}V_{2k} - \frac{{\left( {1 - \lambda_{k} } \right)^{3} V_{1k} V_{2k} }}{{\left\{ {\left( {1 - \lambda_{k} } \right)V_{1k} + \lambda_{k} V_{2k} } \right\}}}} \right] \\ = & \frac{1}{{\left( {1 - \lambda_{k} } \right)^{3} \left\{ {\left( {1 - \lambda_{k} } \right)V_{1k} + \lambda_{k} V_{2k} } \right\}}}\left[ {\lambda_{k} \left( {1 - \lambda_{k} } \right)^{2} Q_{k}^{2} V_{1k}^{2} + \lambda_{k} \left( {1 - \lambda_{k} - \lambda_{k} Q_{k} } \right)^{2} V_{2k}^{2} } \right. \\ & \quad - 2\lambda_{k} \left( {1 - \lambda_{k} } \right)\left( {1 - \lambda_{k} - \lambda_{k} Q_{k} } \right)\left. {Q_{k} V_{1k} V_{2k} } \right] \\ = & \frac{{\lambda_{k} }}{{\left( {1 - \lambda_{k} } \right)^{3} \left\{ {\left( {1 - \lambda_{k} } \right)V_{1k} + \lambda_{k} V_{2k} } \right\}}}\left[ {\left( {1 - \lambda_{k} } \right)^{2} Q_{k}^{2} V_{1k}^{2} - 2\left( {1 - \lambda_{k} } \right)\left( {1 - \lambda_{k} - \lambda_{k} Q_{k} } \right)Q_{k} V_{1k} V_{2k} } \right. \\ & \quad + \left( {1 - \lambda_{k} - \lambda_{k} Q_{k} } \right)^{2} \left. {V_{2k}^{2} } \right] \\ = & \frac{{\lambda_{k} \left[ {\left( {1 - \lambda_{k} } \right)Q_{k} V_{1k} - \left( {1 - \lambda_{k} - \lambda_{k} Q_{k} } \right)V_{2k} } \right]^{2} }}{{\left( {1 - \lambda_{k} } \right)^{3} \left\{ {\left( {1 - \lambda_{k} } \right)V_{1k} + \lambda_{k} V_{2k} } \right\}}} > 0. \\ \end{aligned} $$
(62)

From (62) we have

$$ D_{k} - V_{{k{\text{o}}}} > 0,\quad \forall \quad k = 1,2, \ldots ,r $$
(63)

so that condition (61) always holds.

Thus,

$$ \left[ {V\left( {\hat{\pi }_{\text{mh}} } \right) - V\left( {\hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} } \right)} \right] > 0. $$
(64)

It follows from (64) that the proposed estimator \( \hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} \) is more efficient than the proposed estimator \( \hat{\pi }_{\text{mh}} \).

Thus, we establish the following theorem.

Theorem 6.2

The proposed optimum estimator \( \hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} \) is more efficient than Singh and Tarray’s (2014) estimator \( \hat{\pi }_{\text{mS}} \) and the proposed estimator \( \hat{\pi }_{\text{mh}} \).

To see the performance of the proposed optimum estimator (OE) \( \hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} \) relative to Singh and Tarray’s (2014) estimator \( \hat{\pi }_{\text{mS}} \) under Neyman allocation, we have computed the percent relative efficiency (PRE) of the estimator \( \hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} \) with respect to the estimator \( \hat{\pi }_{\text{mS}} \) using the formula:

$$ \begin{aligned} {\text{PRE}}\left( {\hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} ,\hat{\pi }_{\text{mS}} } \right) = & \frac{{V\left( {\hat{\pi }_{\text{mS}} } \right)}}{{V\left( {\hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} } \right)}} \times 100, \\ = & \frac{{\left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {S_{k}^{*} } } } \right)^{2} }}{{\left( {\sum\limits_{k = 1}^{r} {w_{k} \sqrt {V_{{k{\text{o}}}} } } } \right)^{2} }} \times 100, \\ \end{aligned} $$
(65)

for two strata (i.e. r = 2), \( \lambda_{1} = \lambda_{2} = \lambda \) and different values of \( \left( {\pi_{{{\text{S}}1}} ,\pi_{{{\text{S}}2}} ,\pi_{\text{S}} } \right) \) and \( Q_{1} \;{\text{and}}\;Q_{2} \).
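
A minimal sketch of the computation in (65), with \( V_{{k{\text{o}}}} \) from (55) and \( S_{k}^{*} \) repeated so the snippet stands alone; the two-stratum inputs are illustrative assumptions, not the values behind Table 14.

```python
# PRE(pi_m_eta_ko, pi_mS) of Eq. (65) under Neyman allocation.
def V_ko(pi_sk, Qk, lamk):
    V1k = (1 - pi_sk) * (Qk * pi_sk + (1 - Qk)) / Qk
    V2k = pi_sk * (1 - pi_sk) + (1 - Qk) * (1 - pi_sk)
    return V1k * V2k / ((1 - lamk) * V1k + lamk * V2k)   # Eq. (55)

def S_k_star(pi_sk, Qk, lamk):
    return (pi_sk * (1 - pi_sk)
            + (1 - pi_sk) * (1 - Qk) * lamk / Qk
            + (1 - pi_sk) * (1 - Qk) * (1 - lamk))

def pre_opt_stratified(w, pi_s, Q, lam):
    num = sum(wk * S_k_star(pk, qk, lk) ** 0.5
              for wk, pk, qk, lk in zip(w, pi_s, Q, lam)) ** 2
    den = sum(wk * V_ko(pk, qk, lk) ** 0.5
              for wk, pk, qk, lk in zip(w, pi_s, Q, lam)) ** 2
    return 100 * num / den

print(pre_opt_stratified(w=[0.5, 0.5], pi_s=[0.1, 0.2], Q=[0.2, 0.3], lam=[0.4, 0.4]))
```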

Findings are shown in Table 14.

Table 14 Percent relative efficiency of the proposed estimator \( \hat{\pi }_{{{\text{m}}\eta_{\text{ko}} }} \) with respect to Singh and Tarray (2014) estimator \( \hat{\pi }_{\text{mS}} \)

Table 14 shows that the proposed optimum estimator \( \hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} \) is more efficient than the Singh and Tarray (2014) estimator \( \hat{\pi }_{\text{mS}} \). We further note that the values of \( {\text{PRE}}\left( {\hat{\pi }_{\text{mh}} ,\hat{\pi }_{\text{mS}} } \right) \) are very close to the values of \( {\text{PRE}}\left( {\hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} ,\hat{\pi }_{\text{mS}} } \right) \). There is a practical difficulty in using the proposed optimum estimator \( \hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} \), as it depends on the unknown parameter \( \pi_{\text{S}} \) under investigation, while the proposed estimator \( \hat{\pi }_{\text{mh}} \) faces no such difficulty; so the estimator \( \hat{\pi }_{\text{mh}} \) would be preferred over both the optimum estimator \( \hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} \) and the Singh and Tarray (2014) estimator \( \hat{\pi }_{\text{mS}} \). Thus, we infer that the proposed estimator \( \hat{\pi }_{\text{mh}} \) can be used as an alternative to the optimum estimator \( \hat{\pi }_{{{\text{m}}\eta_{{k{\text{o}}}} }} \).

7 Conclusion

In this article, we have suggested a weighted unbiased estimator based on a mixed randomized response model, together with its stratified version, both of which are more efficient than the Singh and Tarray (2014) models. We have also discussed a particular case obtained by choosing a suitable weight in the proposed weighted estimator and found that its relative efficiency for different parametric choices is close to that of the proposed optimum estimator. Thus, our mixed RR model and stratified mixed RR model are good alternatives to Singh and Tarray’s (2014) models.