Abstract
Mukhopadhyay and Padmanabhan (Metrika 40:121–128, 1993) considered the construction of fixed-width confidence intervals for the difference of location parameters of two negative exponential distributions via triple sampling when the scale parameters are unknown and unequal. Under the same setting, this paper deals with the problem of fixed-width confidence interval estimation for a linear combination of location parameters, using the above-mentioned three-stage procedure.
1 Introduction
Let \(\{X_{i1},X_{i2},\ldots \}\ (i=1,2)\) be two independent sequences of random variables where \(X_{i1},X_{i2},\ldots \) are independent and identically distributed (i.i.d.) random variables with the probability density function (pdf)
$$\begin{aligned} f(x;\mu _i,\sigma _i)=\sigma _i^{-1}\exp \{-(x-\mu _i)/\sigma _i\}\,I(x>\mu _i). \end{aligned}$$
Here \(I(\cdot )\) denotes the indicator function of the set \((\cdot )\) and the four parameters \(\mu _1,\,\mu _2\in (-\infty ,\infty )\), \(\sigma _1,\,\sigma _2\in (0,\infty )\) are all unknown. This distribution is known as a two-parameter negative exponential distribution (written as \(\mathrm{{E_{XP}}}(\mu _i,\sigma _i)\)) and has been widely used in many reliability and life-testing experiments to describe the failure times of complex equipment and some small electrical components. In this paper we consider a linear combination of locations, including the difference of two location parameters. For any given numbers \(b_1,\,b_2\ (b_1b_2\ne 0)\) and any preassigned numbers \(d\,(>0)\) and \(0<\alpha <1\) we would like to find appropriate sample sizes to construct a confidence interval J for a linear combination \(\delta =b_1\mu _1+b_2\mu _2\) of two location parameters based on the random samples \(\{X_{11},\ldots , X_{1\,n_1}\}\) and \(\{X_{21},\ldots , X_{2\,n_2}\}\) such that \(P\{\delta \in J\}\ge 1-\alpha \) for all fixed values of \(\mu _1,\,\mu _2,\,\sigma _1,\,\sigma _2,\,\alpha \) and d and that the length of J is fixed at 2d. Mukhopadhyay and Padmanabhan (1993) designed three-stage sampling procedures for \(\delta =\mu _1-\mu _2\) and provided the asymptotic second-order expansion of the coverage probability \(P\{\delta \in J\}=(1-\alpha )+Ad +o(d)\) as d tends to zero, where A is a certain constant. They also obtained \(P\{\delta \in J\}=(1-\alpha ) +o(d)\) by suitably choosing the “fine-tuning” factors. The theory of a three-stage procedure was first established by Hall (1981). Many authors have investigated sequential estimation problems for the difference of two negative exponential distributions by using purely sequential and/or two-stage procedures, for instance Mukhopadhyay and Hamdy (1984), Mukhopadhyay and Mauromoustakos (1987), Hamdy et al. (1989) and Singh and Chaturvedi (1991).
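Since \(\mathrm{{E_{XP}}}(\mu ,\sigma )\) is just a mean-\(\sigma \) exponential shifted by \(\mu \), its support starts at \(\mu \) and its mean is \(\mu +\sigma \), which is easy to check numerically. A minimal sampling sketch (the helper name is ours, not from the paper):

```python
import random

def rexp2(mu, sigma, n, rng):
    """Draw n i.i.d. variates from the two-parameter negative exponential
    E_XP(mu, sigma), i.e. mu plus a mean-sigma exponential variate."""
    return [mu + rng.expovariate(1.0 / sigma) for _ in range(n)]

rng = random.Random(1)
x = rexp2(2.0, 1.0, 200_000, rng)
# The support starts at mu = 2 and the mean is mu + sigma = 3.
print(min(x), sum(x) / len(x))
```

With a large sample the minimum sits just above \(\mu \) and the average is close to \(\mu +\sigma \), matching the two moments used repeatedly below.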
Mukhopadhyay and Zack (2007) dealt with bounded risk estimation of linear combinations of the location and scale parameters. Isogai and Futschik (2010) proposed a purely sequential procedure for a linear combination of locations. Honda (1992) and Yousef et al. (2013) considered the estimation of the mean by a three-stage procedure when the distribution is unspecified.
In the present paper we construct fixed-width confidence intervals for \(\delta =b_1\mu _1+b_2\mu _2\) via the three-stage procedure proposed by Mukhopadhyay and Padmanabhan (1993) when \(\sigma _1,\sigma _2\) are unknown and may be unequal, and derive the asymptotic second-order expansion of the coverage probability.
In Sect. 2 we give some preliminaries and design the three-stage procedure. Section 3 provides the main results concerning the asymptotic second-order expansion of the coverage probability. In Sect. 4 we show some simulation results. Section 5 gives all the proofs of the results in Sect. 3.
2 Preliminaries and a three-stage procedure
Having observed \(\{X_{i1},\ldots , X_{in_i}\}\) from the population \(\Pi _i:\,\mathrm{{E_{X P}}}(\mu _i,\sigma _i)\), we define for \(n_i\ge 2\)
for \(i=1,2\), where \(X_{i\,n_i(1)}\) and \(U_{i\,n_i}\) are the estimators of \(\mu _i\) and \(\sigma _i\), respectively. Let \(\underline{n}=(n_1,n_2)\), let \(b_1\) and \(b_2\ (b_1b_2\ne 0)\) be given numbers and let \(d(>0)\) be a preassigned number. We propose the fixed-width confidence interval of the parameter \(\delta =b_1\mu _1+b_2\mu _2\) with length 2d
For a preassigned number \(\alpha \in (0,1)\) we wish to conclude that \(P\{\delta \in J(\underline{n})\}\ge 1-\alpha \).
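The estimators and the interval can be illustrated concretely. The following sketch assumes the usual forms (the defining displays are not reproduced here): \(X_{i\,n_i(1)}\) is the sample minimum, \(U_{i\,n_i}\) averages the excesses over the minimum with divisor \(n_i-1\), and \(J(\underline{n})\) is taken symmetric about \(b_1 X_{1\,n_1(1)}+b_2 X_{2\,n_2(1)}\), consistent with the stated length 2d:

```python
import random

def est_location_scale(sample):
    """Location estimate: the sample minimum X_{n(1)}.
    Scale estimate U_n: sum of excesses over the minimum, divided by
    n - 1 (the usual unbiased form; assumed here)."""
    n = len(sample)
    xmin = min(sample)
    u = sum(x - xmin for x in sample) / (n - 1)
    return xmin, u

def fixed_width_interval(s1, s2, b1, b2, d):
    """Length-2d interval centred at b1*X_{1,n1(1)} + b2*X_{2,n2(1)}."""
    m1, _ = est_location_scale(s1)
    m2, _ = est_location_scale(s2)
    centre = b1 * m1 + b2 * m2
    return centre - d, centre + d

rng = random.Random(7)
s1 = [2.0 + rng.expovariate(1.0) for _ in range(500)]  # E_XP(2, 1)
s2 = [0.0 + rng.expovariate(1.0) for _ in range(500)]  # E_XP(0, 1)
lo, hi = fixed_width_interval(s1, s2, 0.5, 0.5, 0.06)
print(lo, hi)  # should bracket delta = 0.5*2 + 0.5*0 = 1
```

With \(n_i=500\) the positive bias of each minimum is of order \(\sigma _i/n_i\), far smaller than d, so the interval covers \(\delta \) with high probability.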
First of all, we want to find an appropriate sample size \(C_i\) which satisfies
for all fixed \(\mu _1,\mu _2,\sigma _1, \sigma _2, d\) and \(\alpha \). We will calculate the probability \(P\{\delta \in J(\underline{n})\}\). For \(i=1,2\), let \(V_i=|b_i|(X_{i\,n_i(1)}-\mu _i)\) and
\(V_1\) and \(V_2\) are independent and \(V_i\) is distributed as \(\mathrm{{E_{XP}}}(0,\beta _i^{-1})\) with pdf \(g_i(s)=f(s;0,\beta _i^{-1})\). First, let us treat the case \(b_1b_2>0\). We can easily see
which provides
Let \(b_1b_2<0\). By an argument similar to the above we get
Thus, utilizing the indicator function \(I(\cdot )\), we have the following lemma.
Lemma 1
For any fixed \(\underline{n}=(n_1,n_2)\) with \(n_i\ge 2\ (i=1,2)\) we have
Let any \(\alpha \in (0,1)\) be fixed. For \(b_1b_2<0\) we get
if \(e^{-\beta _i d}\le \alpha \ (i=1,2)\), which is equivalent to \(n_i\ge a|b_i|\sigma _i/d\equiv C_i\) with \(a=\ln \alpha ^{-1}\). Hence from Lemma 1 we get \(P\{\delta \in J(\underline{n})\} \ge 1-\alpha \) for all \(n_i\ge C_i\ (i=1,2)\), which gives (1). Next we consider the case \(b_1b_2>0\). Let \(u(x)=(1+x) e^{-x}\) for \(x>0\). Since \(u^{\prime }(x)=-x e^{-x}<0\), the function u(x) is strictly decreasing on \((0,\infty )\) with \(u(0)=1\) and \(u(+\infty )=0\), and hence there exists a unique solution \(a_0(>0)\) satisfying \(u(a_0)=\alpha \). Let us define the function h(x, y) on \(\mathbb {R}_+^2\) as
where \(\mathbb {R}_+ =(0,\infty )\). After some calculations we have
It follows from Lemma 1 that \(P\{\delta \in J(\underline{n})\}=1-h(\beta _1 d,\beta _2 d)\), which, together with the above inequality, yields that \(P\{\delta \in J(\underline{n})\}\ge 1-\alpha \) if \(\beta _i d\ge a_0\) for \(i=1,2\). Let \(C_i=a_0|b_i|\sigma _i/d\). From (2) we get that \(\beta _i d \ge a_0\) if \(n_i\ge C_i\) for \(i=1,2\). Therefore we have that \(P\{\delta \in J(\underline{n})\}\ge 1-\alpha \) if \(n_i\ge C_i\) for \(i=1,2,\) which gives (1). We call \(C_i\ (i=1,2)\) the optimal fixed sample size. From the above results, we obtain
Proposition 1
Let
Then for all \(n_i\ge C_i\ (i=1,2)\) we have
where \(\underline{n}=(n_1,n_2)\) with \(n_i\ge 2\ (i=1,2)\).
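Since \(a_0\) has no closed form, it must be found numerically; \(u(x)=(1+x)e^{-x}\) is strictly decreasing, so bisection suffices. The following sketch (the helper names are ours) recovers the constants quoted in Sect. 4 and the resulting optimal fixed sample size \(C_i=a_{*}|b_i|\sigma _i/d\):

```python
import math

def a_star(alpha, b1b2_positive):
    """Design constant a_*: ln(1/alpha) when b1*b2 < 0, and the root
    a_0 of (1 + x) e^{-x} = alpha when b1*b2 > 0 (bisection on (0, 50),
    valid since u is strictly decreasing there)."""
    if not b1b2_positive:
        return math.log(1.0 / alpha)
    lo, hi = 0.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (1.0 + mid) * math.exp(-mid) > alpha:
            lo = mid   # u(mid) > alpha, so the root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

def optimal_C(alpha, b, sigma, d, b1b2_positive):
    """Optimal fixed sample size C_i = a_* |b_i| sigma_i / d of (3)."""
    return a_star(alpha, b1b2_positive) * abs(b) * sigma / d

print(a_star(0.05, True))    # a_0 ~ 4.74386
print(a_star(0.05, False))   # a = ln 20 ~ 2.99573
```

For example, in the Table 1 setting of Sect. 4 (\(b_i=0.5\), \(\sigma _i=1\), \(d=0.06\), \(\alpha =0.05\)) this gives \(C_i\approx 39.5\).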
Since the optimal fixed sample size \(C_i\) of (3) is unknown, we will define a three-stage procedure which is similar to that designed by Mukhopadhyay and Padmanabhan (1993). First we take the pilot sample \(X_{i1},\ldots , X_{i m}\) and calculate \(X_{i\,m(1)}\) and \(U_{i\,m}\) for \(i=1,2\), where the starting sample size \(m(\ge 2)\) satisfies \(m=O(d^{-1/r})\) for some \(r>1\) as \(d\rightarrow 0\). We also choose and fix any two numbers \(\rho _i\in (0,1)\ (i=1,2)\). Let any \(d(>0)\) be fixed and define
If \(T_i>m\), then we take the second sample \(X_{i\,m+1},\ldots , X_{i T_i}\) for \(i=1,2\). Using the combined sample \(X_{i 1},\ldots , X_{i T_i}\), we calculate \(X_{i\,T_i(1)}\) and \(U_{i\,T_i}\) and define
where \(\langle x \rangle \) stands for the largest integer less than x. If \(N_i>T_i\), then we take the third sample \(X_{i\,T_i +1},\ldots , X_{i N_i}\) for \(i=1,2\). Using the combined samples \(X_{i 1},\ldots , X_{i N_i}\) (\(i=1,2\)), we construct a confidence interval of \(\delta =b_1\mu _1+b_2\mu _2\) as
where \(\underline{N}=(N_1,N_2)\) and \(\hat{\delta }(\underline{N})=b_1 X_{1\,N_1(1)}+b_2 X_{2\,N_2(1)}\).
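The three-stage rule for one population can be sketched as follows. The stopping displays (5) and (6) are reconstructed here from their transformed versions in the Appendix (our reading, not the authors' code): \(T_i=\max \{m,\langle \rho _i a_{*}|b_i|U_{i\,m}/d\rangle +1\}\) and \(N_i=\max \{T_i,\langle a_{*}|b_i|U_{i\,T_i}/d\rangle +1\}\):

```python
import math
import random

def floor_strict(x):
    """<x>: the largest integer strictly less than x (so <x> + 1 == ceil(x))."""
    return math.ceil(x) - 1

def u_scale(xs):
    """U_n = sum_j (X_j - X_{n(1)}) / (n - 1)."""
    n = len(xs)
    return (sum(xs) - n * min(xs)) / (n - 1)

def three_stage_size(mu, sigma, b, m, rho, d, a_star, rng):
    """One run of the assumed three-stage rule for population i:
    pilot of size m, second stage up to T, third stage up to N."""
    xs = [mu + rng.expovariate(1.0 / sigma) for _ in range(m)]
    t = max(m, floor_strict(rho * a_star * abs(b) * u_scale(xs) / d) + 1)
    xs += [mu + rng.expovariate(1.0 / sigma) for _ in range(t - m)]
    n = max(t, floor_strict(a_star * abs(b) * u_scale(xs) / d) + 1)
    xs += [mu + rng.expovariate(1.0 / sigma) for _ in range(n - t)]
    return t, n, xs

rng = random.Random(11)
t1, n1, x1 = three_stage_size(2.0, 1.0, 0.5, m=7, rho=0.4, d=0.06,
                              a_star=4.74386, rng=rng)
print(t1, n1)  # C_1 = a_0 |b_1| sigma_1 / d is about 39.5 in this setting
```

The second stage targets the fraction \(\rho _i\) of the (estimated) optimal size, and the third stage fine-tunes using the better scale estimate \(U_{i\,T_i}\).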
3 Main results
In this section we will derive the asymptotic second-order expansions of the expected sample size \(E(N_i)\) for (6) and coverage probability \(P\{\delta \in J(\underline{N})\}\) for (7). Theorem 1 gives the asymptotic second-order expansion of \(E(N_i)\) for \(i=1,2.\)
Theorem 1
We have \(E(N_i)=C_i+\eta _i+o(1)\) as \(d\rightarrow 0\) for \(i=1,2\),
where \(\eta _i=\frac{1}{2} - \rho _i^{-1}\in (-\infty ,-\frac{1}{2})\).
The following theorem shows the asymptotic second-order expansion of the coverage probability \(P\{\delta \in J(\underline{N})\}\).
Theorem 2
As \(d\rightarrow 0\) we have \(P\{\delta \in J(\underline{N})\}=(1-\alpha )+A_{\alpha }\,d+o(d),\)
where
Remark 1
Theorems 1 and 2 generalize the results of Mukhopadhyay and Padmanabhan (1993) for estimating the difference \(\delta =\mu _1-\mu _2\) (\(b_1=1\), \(b_2=-1\)).
Remark 2
The approximation to \(P\{\delta \in J(\underline{N})\}\) becomes better as \(\rho _i\) increases, since the absolute value of \(A_{\alpha }\) gets smaller as \(\rho _i\) increases.
Remark 3
When \(b_1 b_2>0\), one can consider the confidence interval
with fixed-width d. By the same arguments as Lemma 1, we have
and hence, it holds for all \(n_i\ge C_i\ (i=1,2)\) that \(P\{\delta \in J^*(\underline{n})\}\ge 1-\alpha \) for all fixed \(\mu _1\), \(\mu _2\), \(\sigma _1\), \(\sigma _2\), d and \(\alpha \). Therefore, when \(b_1 b_2>0\), the confidence interval (7) is in effect only half as long as it appears, since the width-d interval \(J^*(\underline{n})\) already attains the required coverage.
4 Simulation results
We shall present some simulation results which were carried out by means of Borland C++. We consider two cases: \(\delta =\frac{1}{2} (\mu _1 +\mu _2)\) (\(b_1 b_2>0\)) and \(\delta =\mu _1 -\mu _2\) (\(b_1 b_2<0\)). We choose \(\rho _1=\rho _2 =0.4,\,0.6\) in (5) and take \(\alpha =0.05\) (\(1-\alpha =0.95\)) in Tables 1, 2, 5 and 6 and \(\alpha =0.10,\ 0.01\) in Tables 3 and 4, respectively. Regarding (5) and (6), we have \(a_{*}=a_0=4.74386\) with \((1+a_0)e^{-a_0}=0.05\) for \(b_1 b_2>0\) and \(a_{*}=a=\ln (1/0.05)=2.99573\) for \(b_1 b_2<0\). From Taylor’s expansion and calculus, one can find an approximation \(\tilde{a}_0\) to \(a_0\) given by
For \(\alpha =0.05\), we have \(\tilde{a}_0=4.64269\) with \((1+\tilde{a}_0)e^{-\tilde{a}_0}=0.05435\). For \(\alpha =0.1\), we also have \(a_0=3.88972\) with \((1+a_0)e^{-a_0}=0.1\) and \(\tilde{a}_0=3.77691\) with \((1+\tilde{a}_0)e^{-\tilde{a}_0}=0.10936\). Further, for \(\alpha =0.01\), we have \(a_0=6.63835\) with \((1+a_0)e^{-a_0}=0.01\) and \(\tilde{a}_0=6.55596\) with \((1+\tilde{a}_0)e^{-\tilde{a}_0}=0.01074\). In all tables below, the three-stage procedure defined by (5) and (6) was carried out with 1,000,000 independent replications under \(d=0.06\) (moderate) and \(d=0.03\) (sufficiently small). In each table, \(E(T_i)\), \(E(N_i)\), \(E(\hat{\delta }(\underline{N}))\) and \(P\{\delta \in J(\underline{N})\}\) stand for the averages of 1,000,000 independent replications and “s.e.” stands for each standard error. Let the size of the pilot sample be \(m=\langle d^{-2/3} \rangle +1\) for each population. Thus, \(m=7\) for \(d=0.06\) and \(m=11\) for \(d=0.03\).
For estimating \(\delta =\frac{1}{2} (\mu _1 +\mu _2)\), we take \(\mathrm{{E_{XP}}}(2,1)\) as \(\Pi _1\) and \(\mathrm{{E_{XP}}}(0,1)\) as \(\Pi _2\) in Table 1, where the variances are equal and also take \(\mathrm{{E_{XP}}}(2,2)\) as \(\Pi _1\) and \(\mathrm{{E_{XP}}}(0,1)\) as \(\Pi _2\) in Table 2, where the variances are unequal. In both Tables 1 and 2, we estimate \(\delta =1\) with \(\alpha =0.05\) and the optimal fixed sample sizes \(C_1\) and \(C_2\) are calculated by (3) with \(b_1=b_2=0.5\) and \(a_{*}=a_0=4.74386\). We have from Theorem 1
where \(\eta _i=-2\) for \(\rho _i=0.4\) and \(\eta _i=-1.167\) for \(\rho _i=0.6\). It seems from Tables 1 and 2 that each \(N_i\) underestimates \(C_i\), as the above approximation suggests. We also have from Theorem 2
It also seems from Tables 1 and 2 that the coverage probabilities \(P\{\delta \in J(\underline{N})\}\) are less than 0.95, since \(A_{\alpha } d<0\). However, as d becomes sufficiently small (\(d=0.03\)), the coverage probabilities \(P\{\delta \in J(\underline{N})\}\) get closer to 0.95 in both tables. In Tables 3 and 4, we carried out simulations for \(\alpha =0.1\) with \(a_0=3.88972\) and \(\alpha =0.01\) with \(a_0=6.63835\), respectively, under the same settings as in Table 1. The results in Tables 3 and 4 behave as in Table 1.
In Tables 5 and 6, we consider the estimation of \(\delta =\mu _1 -\mu _2\), where our three-stage procedure defined by (5) and (6) coincides with that of Mukhopadhyay and Padmanabhan (1993). We take \(\mathrm{{E_{XP}}}(2,1)\) as \(\Pi _1\) and \(\mathrm{{E_{XP}}}(1,1)\) as \(\Pi _2\) in Table 5 and also take \(\mathrm{{E_{XP}}}(2,2)\) as \(\Pi _1\) and \(\mathrm{{E_{XP}}}(1,1)\) as \(\Pi _2\) in Table 6. In both tables, we estimate \(\delta =1\) and the optimal fixed sample sizes \(C_1\) and \(C_2\) are calculated by (3) with \(b_1=1\), \(b_2=-1\) and \(a_{*}=a=2.99573\). The simulation results in Tables 5 and 6 also seem to show the same trends as above, including the properties (8) and (9). Throughout these tables, we can verify Remark 2 for \(\rho _i\).
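A Monte Carlo study of the kind reported in Tables 5 and 6 can be sketched as follows. This is our own reduced illustration, not the authors' simulation code: the stopping rules are our reconstruction of (5) and (6) from the Appendix, the interval is taken symmetric about \(\hat{\delta }(\underline{N})\), and the replication count is cut to 2000 to keep the run quick:

```python
import math
import random

def u_scale(xs):
    """U_n = sum_j (X_j - X_{n(1)}) / (n - 1)."""
    n = len(xs)
    return (sum(xs) - n * min(xs)) / (n - 1)

def sample_min(mu, sigma, b, m, rho, d, a_star, rng):
    """Final sample size and location estimate X_{N(1)} for one population
    under the assumed three-stage rule.  Note that <x> + 1 == ceil(x) for
    the paper's <x> (largest integer below x)."""
    xs = [mu + rng.expovariate(1.0 / sigma) for _ in range(m)]
    t = max(m, math.ceil(rho * a_star * abs(b) * u_scale(xs) / d))
    xs += [mu + rng.expovariate(1.0 / sigma) for _ in range(t - m)]
    n = max(t, math.ceil(a_star * abs(b) * u_scale(xs) / d))
    xs += [mu + rng.expovariate(1.0 / sigma) for _ in range(n - t)]
    return n, min(xs)

# Table 5 setting: Pi_1 = E_XP(2,1), Pi_2 = E_XP(1,1), delta = mu1 - mu2 = 1,
# b1 = 1, b2 = -1, d = 0.06, rho_i = 0.4, a_* = ln(1/0.05).
rng = random.Random(3)
a = math.log(1 / 0.05)
reps, hits = 2000, 0
for _ in range(reps):
    _, m1 = sample_min(2.0, 1.0, 1.0, 7, 0.4, 0.06, a, rng)
    _, m2 = sample_min(1.0, 1.0, -1.0, 7, 0.4, 0.06, a, rng)
    centre = m1 - m2
    hits += (centre - 0.06 <= 1.0 <= centre + 0.06)
print(hits / reps)  # should land near the nominal 0.95
```

With only 2000 replications the Monte Carlo error is about \(\pm 0.01\), so this sketch can only reproduce the qualitative picture of the tables, not their fourth-decimal accuracy.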
Hamdy (1997), Hamdy et al. (2015) and Son et al. (1997) developed the theory of type II errors for sequential procedures and gave simulation results for the one-sample case. The corresponding problem for the present two-sample case remains open.
5 Proofs of Theorems 1 and 2
In this section we will give the proofs of two theorems in Sect. 3. Let \(\mu _1^{\prime }=b_1\mu _1\), \(\mu _2^{\prime }=-b_2\mu _2,\ \sigma _i^{\prime }=|b_i|\sigma _i\) for \(b_1b_2<0\) and \(\mu _i^{\prime }=b_i\mu _i,\ \sigma _i^{\prime }=|b_i|\sigma _i\) (\(i=1,2\)) for \(b_1b_2>0\). Then without any loss of generality \(\delta \) can be written as
Throughout this section we use this form. Thus, \(b_1=b_2=1\) for both cases.
Let \(Y_{i\,2},Y_{i\,3},\ldots \) be i.i.d. random variables according to \(\mathrm{{E_{XP}}}(0,\sigma _i)\) and \(Y_{1\,j}\)’s and \(Y_{2\,j}\)’s be independent. Also let \(\{X_{1j},X_{2j}:j\ge 1\}\) and \(\{Y_{1j},Y_{2j}:j\ge 2\}\) be independent. Set \(\overline{Y}_{i\,n}= \sum _{j=2}^nY_{i\,j}/(n-1)\) for \(n\ge 2\ (i=1,2)\). From Lemma 6.1 of Lombard and Swanepoel (1978) \(\{(n-1)U_{i\,n},\ n\ge 2\}\) and \(\{(n-1)\overline{Y}_{i\,n},\ n\ge 2\}\) are identically distributed. Let us define for \(i=1,2\)
Then we get the following lemma.
Lemma 2
For \(i=1,2\), \((T_i,N_i)\) and \((R_i,S_i)\) are identically distributed, and \(S_1\) and \(S_2\) are independent.
Proof
Let any \(m\le k\le n\) be fixed. Then
which shows that \((T_i,N_i)\) and \((R_i,S_i)\) are identically distributed. It is obvious that \(S_1\) and \(S_2\) are independent. This completes the proof. \(\square \)
Lemma 2 implies that we can use results of Mukhopadhyay (1990) for (10) below, from which we can derive the desired results for (6) and (7).
Let \(Y_{ij}^{\prime }=Y_{ij}/\sigma _i\) and \(\lambda _i=a_{*}\sigma _i d^{-1}=C_i\). Then \(Y_{i2}^{\prime },\,Y_{i3}^{\prime },\ldots \) are i.i.d. random variables according to \(\mathrm{{E_{XP}}}(0,1)\), and \(R_i\) and \(S_i\) can be rewritten as
From Theorems 2 and 3 of Mukhopadhyay (1990) we have
Lemma 3
Let \(i=1,2\).
(i) For \(k=1,2,3,\ldots \)
$$\begin{aligned}&E(S_i^k)=C_i^k+\frac{1}{2} k C_i^{k-1}\{(k-3)+\rho _i\}/\rho _i+o(C_i^{k-1})\quad \text{ and }\\&E(S_i)=C_i+\eta _i+o(1)\quad \text{ as }\ d\rightarrow 0. \end{aligned}$$
(ii) Let \(\tilde{S}_i=C_i^{-1/2}(S_i-C_i)\). Then
$$\begin{aligned} \tilde{S}_i{\mathop {\longrightarrow }\limits ^{\mathcal {D}}} \sqrt{\rho _i^{-1}}\,Z_i \ \text{ as }\ d\rightarrow 0 \end{aligned}$$and for each \(p\ge 1\) \(\{\tilde{S}_i^{\,p},\,0<d\le d_0\}\) is uniformly integrable for some \(d_0>0\), where \(Z_1\) and \(Z_2\) are independent and identically distributed random variables according to the standard normal distribution and “\(\mathop {\longrightarrow }\limits ^{\mathcal {D}}\)” stands for convergence in distribution.
The uniform integrability of \(\{\tilde{S}_i^{\,p},\,0<d\le d_0\}\) for each \(p\ge 1\) in Lemma 3 will be shown in “Appendix”.
Proof of Theorem 1
For both cases Theorem 1 is an immediate consequence of Lemmas 2 and 3. \(\square \)
Proof of Theorem 2
In view of Lemma 1 we treat the two cases \(b_1b_2<0\) and \(b_1b_2>0\) separately.
Case 1 \(b_1 b_2<0\). Thus \(\delta =\mu _1-\mu _2.\) In the proof of Theorem 1 of Mukhopadhyay and Padmanabhan (1993) they provided the equation
where K is defined similarly to Mukhopadhyay and Padmanabhan (1993). Lemmas 2 and 3 yield
Mukhopadhyay and Padmanabhan (1993) showed that \(E(K)=o(d)\). Therefore, recalling \(\sigma _i^{\prime }=|b_i|\sigma _i\), the above results give Theorem 2.
Case 2 \(b_1 b_2>0\). Thus \(\delta =\mu _1+\mu _2.\) Lemmas 4, 5 and 7 (which are given later) imply the desired result. Therefore the proof of Theorem 2 is complete. \(\square \)
Let us give Lemmas 4, 5, 6 and 7. We introduce the following real valued functions of \((x,y)\in \mathbb {R}^2\):
Throughout the rest of this section let \(Q_i=S_i/C_i\) for \(i=1,2\). Lemma 4 shows an expression of the coverage probability.
Lemma 4
Let \(b_1b_2>0\). Then we have
Proof
Lemma 1 implies
Since \(\{X_{1\,n_1(1)},\,X_{2\,n_2(1)}\}\) and \(\{U_{i\,2},\ldots ,U_{i\,n_i},\ i=1,2\}\) are independent, the two events \(\{\delta \in J(\underline{n})\}\) and \(\{\underline{N}=\underline{n}\}\) for any fixed \(\underline{n}\) are also independent. Hence from (13) and Lemma 2 we get
which leads to the lemma. Thus the proof is complete. \(\square \)
We will evaluate each quantity in (12).
Lemma 5
We have as \(d\rightarrow 0\)
Proof
Let \(h(x)=e^{-a_0 x}+a_0 x e^{-a_0 x}\) for \(x>0\). Then by using Taylor’s expansion around one and \((1+a_0)e^{-a_0}=\alpha \) we get
where \(w_1\) satisfies that \(|w_1-1|<|x-1|\). Using \(\tilde{S}_1=C_1^{-1/2}(S_1-C_1)\) in Lemma 3, we have
where \(W_1\) is a positive random variable satisfying \(|W_1-1|<|Q_1-1|\). From Lemma 3 we get
Since \(Q_1{\mathop {\longrightarrow }\limits ^{P}} 1\) by Lemma 3 (ii) where “\({\mathop {\longrightarrow }\limits ^{P}}\)” means convergence in probability, we have that \((1-a_0 W_1)e^{-a_0 W_1}\tilde{S}_1^{\,2}{\mathop {\longrightarrow }\limits ^{{\mathcal {D}}}}(1-a_0)e^{-a_0}\rho _1^{-1} Z_1^2.\) Using \(a_0 W_1>0\), we get that \(|(1-a_0 W_1)e^{-a_0 W_1}\tilde{S}_1^{\,2}|\le \tilde{S}_1^{\,2},\) which, together with Lemma 3, implies that \(\{(1-a_0 W_1)e^{-a_0 W_1}\tilde{S}_1^{\,2}\}\) is uniformly integrable. Thus we get
Therefore, combining (14)–(16), we obtain the desired result. This completes the proof. \(\square \)
The following lemma is used to evaluate the expectation \(E[B(Q_1,Q_2)]\).
Lemma 6
As \(d\rightarrow 0\) we have the following results:
(i) \(E[(Q_1-1)e^{-a_0 Q_1}]=a_0^{-1}e^{-a_0}(\eta _1-a_0 \rho _1^{-1})\sigma _1^{-1}d+o(d),\)
(ii) \(E[(Q_1-1)^2 e^{-a_0 Q_1}]=a_0^{-1}e^{-a_0}\rho _1^{-1}\sigma _1^{-1}d+o(d),\)
(iii) \(E[(Q_2-1)e^{-a_0 Q_1}]=a_0^{-1}e^{-a_0}\eta _2\sigma _2^{-1}d+o(d),\)
(iv) \(E[(Q_2-1)^2 e^{-a_0 Q_1}]=a_0^{-1}e^{-a_0}\rho _2^{-1}\sigma _2^{-1}d+o(d),\)
(v) \(E[(Q_1-1)e^{-a_0 Q_1}(Q_2-1)]=o(d),\)
(vi) \(E[(Q_1-1)^j e^{-a_0 Q_1}(Q_2-1)^{3-j}]=o(d)\) for \(j=1,2,3.\)
Proof
First we will prove (i). Let \(h(x)=e^{-a_0 x}\). Taylor’s expansion and Lemma 3 give
where \(W_1\) is a positive random variable satisfying \(|W_1-1|<|Q_1-1|\). Since \(E[ e^{-a_0 W_1} \tilde{S}_{1}^{\,2}]=e^{-a_0}\rho _1^{-1}+o(1)\), we obtain (i). (ii) follows from the fact that \(Q_1-1=(a_0\sigma _1)^{-1/2}d^{1/2}\tilde{S_1}\) and \(E[ e^{-a_0 Q_1} \tilde{S}_{1}^{\,2}]=e^{-a_0}\rho _1^{-1}+o(1)\). Similarly, we can show (iii)–(vi). This completes the proof. \(\square \)
Lemma 7
As \(d\rightarrow 0\) we have
Proof
Let g(x) be defined as in (11). Taylor’s expansion for \(e^{-x}\) implies
where \(w=\theta x\) for some \(\theta =\theta (x) \in (0,1)\). Hence from (11) we get
where \(W=\theta a_0(Q_2-Q_1)\) for some \(\theta =\theta (Q_1,Q_2) \in (0,1)\). We will evaluate each term \(K_i\) for \(i=1,2,3\). It follows from Lemma 6 that
Similarly,
Finally, we will calculate the term \(K_3\). Since \(W=\theta a_0(Q_2-Q_1)\), it is easy to see that \(e^{-a_0 Q_1}\,e^{-W}=e^{-a_0\{(1-\theta )Q_1+\theta Q_2\}}\), which implies that \(0<e^{-a_0 Q_1}\,e^{-W}\le 1\), since \(a_0\{(1-\theta )Q_1+\theta Q_2\}\ge 0\). Thus we have
Let \(s_j=(a_0\sigma _j)^{-1/2}\) for \(j=1,2\). Recall that \(Q_j-1=s_j d^{1/2}\tilde{S_j}\). The uniform integrability of \(\{\tilde{S}_j^{\,p},\,0<d\le d_0\}\) for each \(p\ge 1\) gives that \(\sup _{0<d\le d_0} E(|\tilde{S}_j|^p)\) is bounded from above for each \(p\ge 1\). Let us evaluate each term \(K_{3j}\) for \(j=1,2,3,4\). Since \(\tilde{S}_1\) and \(\tilde{S}_2\) are independent, we have for some positive constant M
In the same way we get that \(K_{3j}=o(d)\) for \(j=2,3,4\). Therefore we obtain
Combining (17)–(20), we obtain the desired result of the lemma. This completes the proof. \(\square \)
References
Chow YS, Yu KF (1981) The performance of a sequential procedure for the estimation of the mean. Ann Stat 9:184–189
Gut A (2005) Probability: a graduate course. Springer, New York
Hall P (1981) Asymptotic theory and triple sampling for sequential estimation of a mean. Ann Stat 9:1229–1238
Hamdy HI (1997) Performance of fixed width confidence intervals under type II errors: the exponential case. S Afr Stat J 31:259–269
Hamdy HI, Al-Mahmeed M, Nigm A, Son MS (1989) Three-stage estimation procedure for the exponential location parameters. Metron 47:279–294
Hamdy HI, Son MS, Yousef AS (2015) Sensitivity analysis of multistage sampling to departure of an underlying distribution from normality with computer simulations. Seq Anal 34:532–558
Honda T (1992) Estimation of the mean by three stage procedure. Seq Anal 11:73–89
Isogai E, Futschik A (2010) Sequential estimation of a linear function of location parameters of two negative exponential distributions. J Stat Plan Inference 140:2416–2424
Lombard F, Swanepoel JWH (1978) On finite and infinite confidence sequences. S Afr Stat J 12:1–24
Mukhopadhyay N (1990) Some properties of a three-stage procedure with applications in sequential analysis. Sankhyā Ser A 52:218–231
Mukhopadhyay N, Hamdy HI (1984) On estimating the difference of location parameters of two negative exponential distributions. Can J Stat 12:67–76
Mukhopadhyay N, Mauromoustakos A (1987) Three-stage estimation procedures for the negative exponential distributions. Metrika 34:83–93
Mukhopadhyay N, Padmanabhan AR (1993) A note on three-stage confidence intervals for the difference of locations: the exponential case. Metrika 40:121–128
Mukhopadhyay N, Zack S (2007) Bounded risk estimation of linear combinations of the location and scale parameters in exponential distributions under two-stage sampling. J Stat Plan Inference 137:3672–3686
Singh RK, Chaturvedi A (1991) A note on sequential estimation of the difference between location parameters of two negative exponential distributions. J Indian Stat Assoc 29:107–114
Son MS, Haugh LD, Hamdy HI, Costanza MC (1997) Controlling type II error while constructing triple sampling fixed precision confidence intervals for the normal mean. Ann Inst Stat Math 49:681–692
Yousef AS, Kimber AC, Hamdy HI (2013) Sensitivity of normal-based triple sampling sequential point estimation to the normality assumption. J Stat Plan Inference 143:1606–1618
Acknowledgements
The authors thank the anonymous referees for their constructive comments and suggestions which helped to improve the paper. The first author was supported by JSPS KAKENHI Grant-Number 26400193.
Appendix
In this appendix we will give the uniform integrability of \(\{\tilde{S}_i^{\,p},\,0<d\le d_0\}\) for each \(p\ge 1\) in Lemma 3. Let \(Y_2,\,Y_3,\ldots \) be a sequence of independent and identically distributed positive continuous random variables having a finite mean \(\theta =E(Y_2).\) We consider the following three-stage procedure defined by Mukhopadhyay (1990):
where \(N_1=\langle \rho \lambda \overline{Y}_m \rangle +1\), \(N_2=\langle \lambda \overline{Y}_R \rangle +1\), \(0<\rho <1\), \(0<\lambda <\infty \), \(\overline{Y}_n=(n-1)^{-1}\sum _{i=2}^n Y_i\) for \(n\ge 2\) and \(m=m(d)\, (\ge 2)\) is the starting sample size such that \(m\rightarrow \infty \) as \(d\rightarrow 0\). Let \(n^*=\lambda \theta \). We suppose the following conditions
and for some \(r>1\), as \(m\rightarrow \infty \)
In the following we assume that \(E(Y_2^p)<\infty \) for some \(p\ge 2\) and let M denote a generic positive constant, not depending on d. Let \(V_j=Y_j/\theta \) for \(j=2,3,\cdots \) and \(\overline{V}_n=\sum _{j=2}^{n}V_j/(n-1)\). Then \(N_1=\langle \rho n^* \overline{V}_m \rangle +1\) and \(N_2=\langle n^* \overline{V}_R \rangle +1\). For \(\varepsilon \in (0,1)\), define a set \(B_{m,\varepsilon }\) by \(B_{m,\varepsilon }=\left\{ \overline{V}_m<1-\varepsilon \right\} \).
Lemma 8
As \(d\rightarrow 0\), we have \(P(B_{m,\varepsilon })=O(m^{-p/2})\).
Proof
Since \(\{\overline{V}_n -1,\ n\ge m\}\) is a reversed martingale, we have from the submartingale inequality,
\(\square \)
Lemma 9
As \(d\rightarrow 0\), we have
Proof
Fix \(\varepsilon _0\in (0,1-\rho )\). By (21) and Lemma 8, for sufficiently small d,
which implies the left side of (23). Next,
from the left side of (23). The first term is evaluated as follows.
As in the proof of Lemma 8, we have that \(P\left( \left| \overline{V}_R-1\right| >\varepsilon _0\right) =O(m^{-p/2})\) and \(P(\rho \overline{V}_m>1-\varepsilon _0)=P\left( \left| \overline{V}_m-1\right| >(1-\varepsilon _0-\rho )/\rho \right) =O(m^{-p/2}).\) Hence, the right side of (23) holds.\(\square \)
Lemma 10
If \(0<q<p/(2r)\), where r is as in (22), then \(\left\{ (n^*/R)^{q}, 0<d\le d_0\right\} \) and \(\left\{ (n^*/S)^{q}, 0<d\le d_0\right\} \) are uniformly integrable for some \(d_0>0\).
Proof
Note that \((n^*/S)^{q}\le (n^*/R)^{q}\). From Lemma 1 of Chow and Yu (1981), it suffices to show that \( P(R<\varepsilon _1 n^*)=o(n^{*\,-q}) \) for some \(\varepsilon _1 \in (0,1)\). By choosing \(\varepsilon _1 \in (0,\rho )\), we have from (22)
Lemma 11
For \(0<q\le p, \left\{ (R/n^*)^{q}, 0<d\le d_0\right\} \) and \(\left\{ (S/n^*)^{q}, 0<d\le d_0\right\} \) are uniformly integrable for some \(d_0>0\).
Proof
From Corollary 4.1 of Gut (2005), if \(E\left\{ \sup _{0<d\le d_0}(R/n^*)^q\right\} <\infty \), then \(\left\{ (R/n^*)^{q}, 0<d\le d_0\right\} \) is uniformly integrable. By the definition of R, Doob’s maximal inequality for the reversed martingale and (21),
which yields the uniform integrability of \(\left\{ (R/n^*)^{q}, 0<d\le d_0\right\} \) for \(1<q\le p\). When \(0<q\le 1\), we have that \(\sup _{0<d\le d_0}E(R/n^*)^{q\zeta }= \sup _{0<d\le d_0}E(R/n^*)^{p}<\infty \) for \(\zeta =p/q>1\). Therefore, \(\left\{ (R/n^*)^{q}, 0<d\le d_0\right\} \) is uniformly integrable for \(0<q\le p\). Next, we shall show the uniform integrability of \(\left\{ (S/n^*)^{q}, 0<d\le d_0\right\} \). Since \(S\le N_2+R\), it suffices to show that \(E\left\{ \sup _{0<d\le d_0}(N_2/n^*)^{q}\right\} <\infty \) which can be proved similarly. \(\square \)
Lemma 12
For \(0<q\le p\),
are uniformly integrable for some \(d_0>0\).
Proof
This follows from Lemma 5 of Chow and Yu (1981) and Lemma 11. \(\square \)
Proposition 2
We assume that \(E(Y_2^p)<\infty \) for some \(p\ge 2\). Let \(\tilde{S}={n^*}^{-\frac{1}{2}}(S-n^*)\). Under the conditions (21) and (22), if \(0<q< p/(2r+1)\), then \(\left\{ \tilde{S}^{q}, 0<d\le d_0\right\} \) is uniformly integrable for some \(d_0>0\).
Proof
Now,
Since \(K_3\equiv n^{*\,-1/2}(\langle n^* \overline{V}_R \rangle +1-n^* \overline{V}_R)\le n^{*\,-1/2}\le 1\) and \(0<R/(R-1)\le 2\), we have for some \(\zeta >1\), \(u=2r+1\) and \(v=\frac{1}{2r}+1\),
by Lemmas 10 and 12. Finally, for some \(\zeta >1\), \(u_0=r+1\) and \(v_0=\frac{1}{r}+1\), we have from (23) and Lemma 11
Hence, the proposition is proved.\(\square \)
Proof of the uniform integrability
We will show the uniform integrability of \(\{\tilde{S}_i^{\,p},\ 0<d\le d_0\}\) for each \(p\ge 1\). Let \(Y_{i j}^{\prime }=Y_{i j}/\sigma _i\) and \(C_i=\lambda _i=a_{*}\sigma _i d^{-1}\), where \(Y_{ij}\) has the exponential distribution \(\mathrm{{E_{XP}}}(0,\sigma _i)\). Then \(Y_{i2}^{\prime },\,Y_{i3}^{\prime },\ldots \) are i.i.d. random variables according to \(\mathrm{{E_{XP}}}(0,1)\), and \(R_i\) and \(S_i\) defined by (10) can be written as \(R_i={\max }\left\{ m,\ N_{1i}\right\} \) and \(S_i={\max }\left\{ R_i,\ N_{2i}\right\} ,\) where \(N_{1i}=\langle \rho _i \lambda _i \overline{Y^{\prime }}_{i m} \rangle +1,\ N_{2i}=\langle \lambda _i \overline{Y^{\prime }}_{i R_i} \rangle +1\) and \(0<\rho _i<1\). Put \(n^*=C_i,\, \lambda =\lambda _i,\, \rho =\rho _i,\,R=R_i,\,S=S_i\), \(Y_j=Y_{ij}\) and \(V_j=Y_{ij}^{\prime }\) for \(i=1,2\). Since \(E(Y_{i 2}^p)<\infty \) for all \(p>0\) and \(m=O(d^{-1/r})\) for some \(r>1\), the conditions (21) and (22) are satisfied. Therefore from Proposition 2, \(\{\tilde{S}_i^{\,p},\ 0<d\le d_0\}\) is uniformly integrable for some \(d_0>0\).
\(\square \)
Isogai, E., Uno, C. Three-stage confidence intervals for a linear combination of locations of two negative exponential distributions. Metrika 81, 85–103 (2018). https://doi.org/10.1007/s00184-017-0635-y
Keywords
- Fixed-width interval
- Location parameter
- Two negative exponentials
- Three-stage procedure
- Behrens–Fisher situation
- Second-order expansions