1 Introduction

Life-testing experiments are usually time-consuming and expensive in nature. To reduce the cost and duration of experimentation, various types of censoring schemes are used. The two most commonly adopted censoring schemes in the literature are the type-I and type-II censoring schemes. However, these schemes do not allow removal of experimental units at any intermediate stage other than the final termination point. To overcome this restriction, a more general scheme known as the progressive censoring scheme is considered. The progressive type-II censoring scheme was first discussed by Cohen [14]. Further, Balakrishnan and Aggarwala [2] and Rastogi and Tripathi [26] provide elaborate treatments of the subject. Recently, the progressive censoring scheme has received considerable attention in life-testing and reliability studies. The progressive type-II censoring scheme can briefly be described as follows: assume that \(n\) units are placed on test at time zero. Immediately following the first failure, \(R_{1}\) surviving units are removed from the test at random. Similarly, after the second failure, \(R_{2}\) of the surviving units are removed at random. The process continues until, at the time of the \(m\)th failure, the remaining \(R_{m} = n - R_{1} - R_{2} - \ldots - R_{m - 1} - m\) units are removed from the test. Under this censoring scheme, the failure times of \(m\) units are completely observed.

The reliability function \(R\left( t \right)\) is defined as the probability of failure-free operation until time \(t\). Thus, if the random variable \(X\) denotes the lifetime of an item or a system, then \(R\left( t \right) = P(X > t)\). Another measure of reliability under the stress–strength set-up is the probability \(P = P(X > Y)\), which represents the reliability of an item or a system of random strength \(X\) subject to random stress \(Y\). For a brief review of the literature related to inferential procedures for \(R\left( t \right)\) and \(P\), one may refer to Bartholomew [5, 6], Johnson [17], Kelly et al. [18] and Chaturvedi and Kumari [9]. Awad and Gharraf [1] provided a simulation study comparing the minimum variance unbiased, maximum likelihood and Bayes estimators of P(Y < X) when Y and X are two independent but not identically distributed Burr random variables. Chaturvedi and Rani [11] obtained the classical as well as Bayesian estimates of reliability for the generalized Maxwell failure distribution, and Chaturvedi and Surinder [12] obtained estimates of the reliability functions for the exponential distribution under type-I and type-II censoring schemes. The classical as well as Bayesian estimates for the negative binomial distribution were obtained by Chaturvedi and Tomer [13]. Estimates of the reliability function of the four-parameter exponentiated generalized Lomax distribution were obtained by Chaturvedi and Pathak [10].

In many situations, there may exist some prior information on the parameters, the use of which may lead to improved inferential results. It is well known that an estimator that incorporates the prior information (called the restricted estimator) performs better than one that does not (called the unrestricted estimator). However, when the prior information is doubtful, one may combine the restricted and unrestricted estimators to obtain an estimator with better performance; this leads to the preliminary test estimators (PTEs). The preliminary test approach was first discussed by Bancroft [4]. Saleh and Kibria [29] combined the preliminary test and ridge regression approaches to estimate the regression parameter in a multiple regression model. Kibria [19] considered shrinkage preliminary test ridge regression estimators (SPTRRE) based on the Wald (W), likelihood ratio (LR) and Lagrange multiplier (LM) tests and derived the bias and risk functions of the proposed estimators. Saleh [28] discussed the preliminary test and related shrinkage estimation techniques. Belaghi et al. [7] proposed confidence intervals based on the preliminary test estimator, the Thompson shrinkage estimator and the Bayes estimator for the scale parameter of the Burr type-XII distribution. Belaghi et al. [8] defined Bayes and empirical Bayes preliminary test estimators on the basis of record-breaking observations in the Burr type-XII model.

Motivated by the work of Kumaraswamy [20,21,22], Cordeiro and De Castro [16] proposed the Kumaraswamy-G distribution. Nadarajah et al. [24] recommended the Kumaraswamy-G distribution as a reliability model, and Tamandi and Nadarajah [30] developed estimation procedures for its parameters based on complete samples.

The present paper develops several inferential procedures for the Kumaraswamy-G distribution. In Sect. 2, point and interval estimators are obtained for powers of the parameter, for R(t) and for P, and the corresponding testing procedures are developed. In Sect. 3, we propose the PTEs, and in Sect. 4, we develop the preliminary test confidence intervals (PTCIs). In Sect. 5, we provide simulation results and an analysis of real-life data. Lastly, Sect. 6 summarizes the findings and conclusions.

2 The Kumaraswamy-G Distribution and Related Inferential Procedures

A random variable (rv) \(X\) is said to follow the Kumaraswamy-G distribution if its probability density function (pdf) and cumulative distribution function (cdf) are given, respectively, by

$$f\left( {x;\sigma ,\gamma } \right) = \sigma \gamma g\left( x \right)G^{\gamma - 1} \left( x \right)\left\{ {1 - G^{\gamma } \left( x \right)} \right\}^{\sigma - 1} \quad ; \sigma ,\gamma > 0$$
(2.1)

and

$$F\left( {x;\sigma ,\gamma } \right) = 1 - \left\{ {1 - G^{\gamma } \left( x \right)} \right\}^{\sigma }$$
(2.2)

where G(x) is the baseline cdf and g(x) is the corresponding pdf; \(\sigma\) and \(\gamma\) are the shape parameters of the distribution. Under progressive type-II censoring, let us denote the \(m\) observed failure times by \(X_{i;m,n} , i = 1,2, \ldots ,m\).

Denoting by \(c = n \left( {n - R_{1} - 1} \right)\left( {n - R_{1} - R_{2} - 2} \right) \ldots \left( {n - R_{1} - R_{2} - \ldots - R_{m - 1} - m + 1} \right)\), from (2.1) and (2.2), the joint pdf of \(X_{i;m,n} , i = 1,2, \ldots ,m\) is given by,

$$\begin{aligned} f\left( {x_{i;m,n} ;i = 1,2, \ldots ,m; \sigma ,\gamma } \right) & = c \mathop \prod \limits_{i = 1}^{m} f(x_{i} ;\sigma ,\gamma )\left( {1 - F\left( {x_{i} ;\sigma ,\gamma } \right)} \right)^{{R_{i} }} \\ & = c \mathop \prod \limits_{i = 1}^{m} \sigma \gamma g(x_{i} )G^{\gamma - 1} \left( {x_{i} } \right)\left\{ {1 - G^{\gamma } \left( {x_{i} } \right)} \right\}^{\sigma - 1} \left\{ {1 - G^{\gamma } \left( {x_{i} } \right)} \right\}^{{\sigma R_{i} }} \\ & = c {\text{exp}}\left\{ { - \sigma \mathop \sum \limits_{i = 1}^{m} - \left( {R_{i} + 1} \right)\log \left( {1 - G^{\gamma } \left( {x_{i} } \right)} \right)} \right\}\mathop \prod \limits_{i = 1}^{m} \sigma \gamma g(x_{i} )\frac{{G^{\gamma - 1} \left( {x_{i} } \right)}}{{\left\{ {1 - G^{\gamma } \left( {x_{i} } \right)} \right\}}} \\ \end{aligned}$$
(2.3)

The following theorem establishes the relationship between Kumaraswamy-G distribution and exponential distribution.

Theorem 1

The rv \(Y = - \log \left( {1 - G^{\gamma } \left( X \right)} \right)\) follows an exponential distribution with mean 1/σ.

Proof

Let us make the transformation

$$y = - \log \left( {1 - G^{\gamma } \left( x \right)} \right),$$

or,

$$1 - G^{\gamma } \left( x \right) = e^{ - y} ,$$
(2.4)
$$\gamma g\left( x \right) G^{\gamma - 1} \left( x \right) dx = e^{ - y} dy,$$
(2.5)

Making substitutions from (2.4) and (2.5) in (2.1), we get the pdf of Y to be

$$h\left( {y;\sigma } \right) = \sigma e^{{ - \left( {\sigma - 1} \right)y}} e^{ - y} = \sigma e^{ - \sigma y}$$

and the theorem follows.

Let us consider the transformations

$$Z_{1} = nY_{1} ; Z_{2} = \left( {n - R_{1} - 1} \right)\left( {Y_{2} - Y_{1} } \right); \ldots ; Z_{m} = \left( {n - R_{1} - R_{2} - \ldots - R_{m - 1} - m + 1} \right)\left( {Y_{m} - Y_{m - 1} } \right)$$

Then the \(Z_{j} \left( {j = 1,2, \ldots ,m} \right)\) are independent and follow an exponential distribution with mean 1/\(\sigma\). Since

$$S_{m} = \mathop \sum \limits_{j = 1}^{m} - \left( {R_{j} + 1} \right)\log \left( {1 - G^{\gamma } \left( {x_{j} } \right)} \right) = \mathop \sum \limits_{j = 1}^{m} Z_{j} ,$$

applying Theorem 1 and the additive property of the exponential distribution, \(S_{m}\) follows a gamma distribution with pdf

$$w\left({s_{m};\sigma} \right) = \frac{{e^{{- \sigma s_{m}}} s_{m}^{m - 1} \sigma^{m}}}{{\Gamma \left( m \right)}}$$
(2.6)
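
As an illustration, the following minimal sketch (ours, not taken from the paper, with all parameter values hypothetical) simulates a progressively type-II censored sample from a Kumaraswamy-G model with an assumed standard-exponential baseline \(G(x) = 1 - e^{-x}\), using the spacings \(Z_{j}\) introduced above, and computes \(S_{m}\); it also checks numerically that \(\sum_{j} (R_{j} + 1)Y_{j} = \sum_{j} Z_{j}\).

```python
# Minimal sketch (ours): simulate a progressive type-II censored
# Kumaraswamy-G sample with an assumed baseline G(x) = 1 - exp(-x)
# and compute the statistic S_m.
import numpy as np

rng = np.random.default_rng(seed=1)
sigma, gamma_ = 2.0, 1.5                 # hypothetical shape parameters
n, m = 20, 10                            # units on test, observed failures
R = np.ones(m, dtype=int)                # removals R_1..R_m, sum(R) = n - m

# Z_j are i.i.d. exponential with mean 1/sigma (Theorem 1)
Z = rng.exponential(scale=1.0 / sigma, size=m)

# units still on test just before each failure: n - R_1 - ... - R_{j-1} - (j-1)
alive = n - np.concatenate(([0], np.cumsum(R[:-1] + 1)))
Y = np.cumsum(Z / alive)                 # invert the spacings: Y_1 <= ... <= Y_m

# back-transform: G(x) = (1 - e^{-y})^{1/gamma}, with G^{-1}(u) = -log(1 - u)
X = -np.log(1.0 - (1.0 - np.exp(-Y)) ** (1.0 / gamma_))

S_m = np.sum((R + 1) * Y)                # S_m = sum_j (R_j + 1) Y_j
print(S_m, Z.sum())                      # the two representations of S_m agree
```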

Using the notation \(S_{m}\), the joint pdf (2.3) can be rewritten as

$$f\left( {x_{i;m,n} ;i = 1,2, \ldots ,m; \sigma ,\gamma } \right) = c\sigma^{m} e^{{ - \sigma S_{m} }} \mathop \prod \limits_{i = 1}^{m} \gamma g(x_{i} )\frac{{G^{\gamma - 1} \left( {x_{i} } \right)}}{{\left\{ {1 - G^{\gamma } \left( {x_{i} } \right)} \right\}}}$$
(2.7)

The following theorem provides the uniformly minimum variance unbiased estimator (UMVUE) of \(\sigma^{p}\) (p ≠ 0).

Theorem 2

For \(p \ne 0\), the UMVUE of \(\sigma^{p}\) is given by

$$\hat{\sigma}_{U}^{p} = \frac{{\Gamma \left( m \right)}}{{\Gamma \left( {m - p} \right)}} S_{m}^{- p};\quad m > p$$

Proof

It follows from (2.6) and the factorization theorem that \(S_{m}\) is sufficient for σ. Moreover, since the distribution of \(S_{m}\) belongs to the exponential family, it is also complete. The theorem now follows from the Lehmann–Scheffé theorem and the fact that

$$E\left({S_{m}^{- p}} \right) = \frac{{\Gamma \left({m - p} \right)}}{{\Gamma \left( m \right)}} \sigma^{p}$$

The proof of following theorem is a direct consequence of (2.7).

Theorem 3

For \(p \ne 0\), the maximum likelihood estimator (MLE) of \(\sigma^{p}\) is given by

$$\hat{\sigma }_{\text{ML}}^{p} = \left( {\frac{m}{{S_{m} }}} \right)^{p}$$
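
Assuming \(S_{m}\) has already been computed from the data, the estimators of Theorems 2 and 3 are immediate to evaluate; the sketch below (ours) uses gammaln from SciPy so that the ratio \(\Gamma(m)/\Gamma(m-p)\) remains numerically stable for larger \(m\). The input values are hypothetical.

```python
# Sketch (ours): UMVUE (Theorem 2) and MLE (Theorem 3) of sigma^p given S_m.
import numpy as np
from scipy.special import gammaln

def sigma_p_umvue(S_m, m, p):
    # Gamma(m)/Gamma(m - p) * S_m^{-p}, valid for m > p
    return np.exp(gammaln(m) - gammaln(m - p)) * S_m ** (-p)

def sigma_p_mle(S_m, m, p):
    return (m / S_m) ** p

S_m, m, p = 4.8, 10, 1.0                 # hypothetical values
print(sigma_p_umvue(S_m, m, p), sigma_p_mle(S_m, m, p))
```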

Lemma 1

The UMVUE of the sampled pdf (2.1) at a specified point \(x\) is given by

$$\hat{f}_{U} \left( {x;\sigma ,\gamma } \right) = \left\{ {\begin{array}{*{20}l} {\frac{{\left( {m - 1} \right)\gamma g\left( x \right)G^{\gamma - 1} \left( x \right)}}{{S_{m} \left( {1 - G^{\gamma } \left( x \right)} \right)}}\left[ {1 + \frac{{\log \left( {1 - G^{\gamma } \left( x \right)} \right)}}{{S_{m} }}} \right]^{m - 2} ;} & {S_{m} > - \log \left( {1 - G^{\gamma } \left( x \right)} \right)} \\ {0;} & {\text{otherwise}} \\ \end{array} } \right.$$

Proof

Equation (2.1) may be written as

$$\begin{aligned} f\left( {x;\sigma ,\gamma } \right) & = \sigma \gamma g\left( x \right)G^{\gamma - 1} \left( x \right) { \exp }\left\{ {\left( {\sigma - 1} \right)\log \left( {1 - G^{\gamma } \left( x \right)} \right)} \right\} \\ & = \frac{{\gamma g\left( x \right)G^{\gamma - 1} \left( x \right)}}{{\left( {1 - G^{\gamma } \left( x \right)} \right)}}\mathop \sum \limits_{i = 0}^{\infty } \frac{1}{i !} \left\{ {\log \left( {1 - G^{\gamma } \left( x \right)} \right)} \right\}^{i} \sigma^{i + 1} \\ \end{aligned}$$
(2.8)

Applying Theorem 2 and a result due to Chaturvedi and Tomer [13], it follows from (2.8) that

$$\begin{aligned} \hat{f}_{U} \left({x;\sigma,\gamma} \right) & = \frac{{\gamma g\left(x \right)G^{\gamma - 1} \left(x \right)}}{{\left({1 - G^{\gamma} \left(x \right)} \right)}}\mathop \sum \limits_{i = 0}^{\infty} \frac{1}{i !} \left\{{\log \left({1 - G^{\gamma} \left(x \right)} \right)} \right\}^{i} \hat{\sigma}_{U}^{i + 1} \\ & = \frac{{\gamma g\left(x \right)G^{\gamma - 1} \left(x \right)}}{{\left({1 - G^{\gamma} \left(x \right)} \right)}}\mathop \sum \limits_{i = 0}^{m - 2} \frac{1}{i !} \left\{{\log \left({1 - G^{\gamma} \left(x \right)} \right)} \right\}^{i} \frac{{\Gamma \left( m \right)}}{{\Gamma \left({m - i - 1} \right)}} S_{m}^{- i - 1} \\ \end{aligned}$$

and the desired result is obtained on simplification.

Lemma 2

The MLE of the sampled pdf (2.1) at a specified point \(x\) is given by

$$\hat{f}_{\text{ML}} \left( {x;\sigma ,\gamma } \right) = \frac{m}{{S_{m} }}\gamma g\left( x \right)G^{\gamma - 1} \left( x \right) \left( {1 - G^{\gamma } \left( x \right)} \right)^{{\frac{m}{{S_{m} }} - 1}}$$

Proof

The result follows from (2.1) in conjunction with Theorem 3 and invariance property of MLEs.

Theorem 4

The UMVUE of \(R\left( t \right)\) is given by

$$\hat{R}_{U} \left( t \right) = \left\{ {\begin{array}{*{20}l} {\left[ {1 + \frac{{\log \left( {1 - G^{\gamma } \left( t \right)} \right)}}{{S_{m} }}} \right]^{m - 1} ;} & {S_{m} > - \log \left( {1 - G^{\gamma } \left( t \right)} \right)} \\ {0;} & {\text{otherwise}} \\ \end{array} } \right.$$

Proof

We have

$$\mathop \int \limits_{t}^{\infty } \mathop \int \limits_{0}^{\infty } f\left( {x;\sigma ,\gamma } \right)w\left( {s_{m} ;\sigma } \right) {\text{d}}x {\text{d}}s_{m} = R\left( t \right)$$

Thus,

$$\mathop \int \limits_{t}^{\infty } \hat{f}_{U} \left( {x;\sigma ,\gamma } \right) {\text{d}}x\,{\text{is}}\,{\text{ the }}\,{\text{UMVUE}}\,{\text{ of}}\,R\left( t \right)$$

Therefore, from Lemma 1,

$$\hat{R}_{U} \left( t \right) = \mathop \int \limits_{t}^{\infty } \frac{{\left( {m - 1} \right)\gamma g\left( x \right)G^{\gamma - 1} \left( x \right)}}{{S_{m} \left( {1 - G^{\gamma } \left( x \right)} \right)}}\left[ {1 + \frac{{\log \left( {1 - G^{\gamma } \left( x \right)} \right)}}{{S_{m} }}} \right]^{m - 2} {\text{d}}x$$

On substituting

$$\frac{{\log \left( {1 - G^{\gamma } \left( x \right)} \right)}}{{S_{m} }} = - v ,$$

and solving the integral, the desired result follows.

Theorem 5

The MLE of \(R\left( t \right)\) is given by

$$\hat{R}_{\text{ML}} \left( t \right) = \left[ {1 - G^{\gamma } \left( t \right)} \right]^{{\frac{m}{{S_{m} }}}}$$

Proof

The proof follows along similar lines as in the previous theorem.
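
The estimators of Theorems 4 and 5 can be evaluated directly once \(S_{m}\) is available; the sketch below (ours) again assumes a standard-exponential baseline \(G(t) = 1 - e^{-t}\) purely for illustration, with hypothetical inputs.

```python
# Sketch (ours): UMVUE (Theorem 4) and MLE (Theorem 5) of R(t),
# with an assumed baseline G(t) = 1 - exp(-t).
import numpy as np

def G(t):
    return 1.0 - np.exp(-t)

def R_umvue(t, S_m, m, gamma_):
    u = np.log(1.0 - G(t) ** gamma_)     # log(1 - G^gamma(t)) < 0
    return (1.0 + u / S_m) ** (m - 1) if S_m > -u else 0.0

def R_mle(t, S_m, m, gamma_):
    return (1.0 - G(t) ** gamma_) ** (m / S_m)

t, S_m, m, gamma_ = 0.5, 4.8, 10, 1.5    # hypothetical inputs
print(R_umvue(t, S_m, m, gamma_), R_mle(t, S_m, m, gamma_))
```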

Suppose \(X\) and \(Y\) are two independent rvs with parameters \(\left( {\sigma_{1} ,\gamma_{1} } \right)\, {\text{and}}\, \left( {\sigma_{2} ,\gamma_{2} } \right)\), respectively. Then, the UMVUE of \(P\) is given by the following theorem.

Theorem 6

The UMVUE of \(P\) is given by

$$\hat{P}_{U} = \left\{ {\begin{array}{*{20}l} {(l - 1)\mathop \int \limits_{0}^{U} \left[ {1 + \frac{{\log \left\{ {1 - G^{{\gamma_{1} }} \left( {H^{ - 1} \left( {\left( {1 - e^{{ - vT_{l} }} } \right)^{{\frac{1}{{\gamma_{2} }}}} } \right)} \right)} \right\}}}{{S_{m} }}} \right]^{m - 1} \left( {1 - v} \right)^{l - 2} {\text{d}}v;} & {G^{ - 1} \left( {\left( {1 - e^{{ - S_{m} }} } \right)^{{\frac{1}{{\gamma_{1} }}}} } \right) \le H^{ - 1} \left( {\left( {1 - e^{{ - T_{l} }} } \right)^{{\frac{1}{{\gamma_{2} }}}} } \right)} \\ {(l - 1)\mathop \int \limits_{0}^{1} \left[ {1 + \frac{{\log \left\{ {1 - G^{{\gamma_{1} }} \left( {H^{ - 1} \left( {\left( {1 - e^{{ - vT_{l} }} } \right)^{{\frac{1}{{\gamma_{2} }}}} } \right)} \right)} \right\}}}{{S_{m} }}} \right]^{m - 1} \left( {1 - v} \right)^{l - 2} {\text{d}}v;} & {G^{ - 1} \left( {\left( {1 - e^{{ - S_{m} }} } \right)^{{\frac{1}{{\gamma_{1} }}}} } \right) > H^{ - 1} \left( {\left( {1 - e^{{ - T_{l} }} } \right)^{{\frac{1}{{\gamma_{2} }}}} } \right)} \\ \end{array} } \right.$$

where

$$U = - \frac{{\log \left\{ {1 - H^{{\gamma_{2} }} \left( {G^{ - 1} \left( {\left( {1 - e^{{ - S_{m} }} } \right)^{{\frac{1}{{\gamma_{1} }}}} } \right)} \right)} \right\}}}{{T_{l} }}$$

Proof

Let the pdfs of \(X\) and \(Y\) be given by \(f_{1} \left( {x;\sigma_{1} ,\gamma_{1} } \right) \,{\text{and}} \,f_{2} \left( {y;\sigma_{2} ,\gamma_{2} } \right)\), respectively, where

$$f_{1} \left( {x;\sigma_{1} ,\gamma_{1} } \right) = \sigma_{1} \gamma_{1} g\left( x \right)G^{{\gamma_{1} - 1}} \left( x \right)\left\{ {1 - G^{{\gamma_{1} }} \left( x \right)} \right\}^{{\sigma_{1} - 1}} ;\quad \sigma_{1} ,\gamma_{1} > 0$$

and

$$f_{2} \left( {y;\sigma_{2} ,\gamma_{2} } \right) = \sigma_{2} \gamma_{2} h\left( y \right)H^{{\gamma_{2} - 1}} \left( y \right)\left\{ {1 - H^{{\gamma_{2} }} \left( y \right)} \right\}^{{\sigma_{2} - 1}} ; \quad \sigma_{2} ,\gamma_{2} > 0$$

Suppose that \(n\) units are put on test. Let \(X_{i:m:n} ;i = 1,2, \ldots ,m\) be \(m\) observed failure times from \(X\) and \(Y_{j:l:n} ;j = 1,2, \ldots ,l\) be \(l\) observed failure times from \(Y\).

We denote by \(T_{l} = \mathop \sum \limits_{j = 1}^{l} - \left( {R_{j}^{*} + 1} \right)\log \left( {1 - H^{{\gamma_{2} }} \left( {y_{j} } \right)} \right)\), where \(R_{j}^{*}\) is the censoring scheme for the second sample.

From the arguments similar to those used for Theorem 4,

$$\begin{aligned} \hat{P}_{U} & = \mathop \int \limits_{y = 0}^{\infty } \mathop \int \limits_{x = y}^{\infty } \hat{f}_{1 U} \left( {x;\sigma_{1} ,\gamma_{1} } \right)\hat{f}_{2 U} \left( {y;\sigma_{2} ,\gamma_{2} } \right) {\text{d}}x{\text{d}}y \\ & = \mathop \int \limits_{y = 0}^{\infty } \hat{R}_{1U} (y,\sigma_{1} ,\gamma_{1} ) \hat{f}_{2 U} \left( {y;\sigma_{2} ,\gamma_{2} } \right) {\text{d}}y \\ & = \left( {l - 1} \right)\mathop \int \limits_{y = 0}^{\text{c}} \left[ {1 + \frac{{\log \left( {1 - G^{{\gamma_{1} }} \left( y \right)} \right)}}{{S_{m} }}} \right]^{m - 1} \frac{{\gamma_{2} h\left( y \right)H^{{\gamma_{2} - 1}} \left( y \right)}}{{T_{l} \left( {1 - H^{{\gamma_{2} }} \left( y \right)} \right)}} \left[ {1 + \frac{{\log \left( {1 - H^{{\gamma_{2} }} \left( y \right)} \right)}}{{T_{l} }}} \right]^{l - 2} {\text{d}}y \\ \end{aligned}$$

where

$$c = { \hbox{min} }\left\{ {G^{ - 1} \left( {\left( {1 - e^{{ - S_{m} }} } \right)^{{\frac{1}{{\gamma_{1} }}}} } \right),H^{ - 1} \left( {\left( {1 - e^{{ - T_{l} }} } \right)^{{\frac{1}{{\gamma_{2} }}}} } \right)} \right\}$$

On substituting

$$\frac{{\log \left( {1 - H^{{\gamma_{2} }} \left( y \right)} \right)}}{{T_{l} }} = - v,$$

and considering the two possible values of \(c\), we get the desired result on solving the integral.

Corollary 1

When \(X\) and \(Y\) belong to the same family of distributions, i.e. \(G\left( . \right) = H\left( . \right)\), the UMVUE of \(P\) is given by

$$\hat{P}_{U} = \left\{ {\begin{array}{*{20}c} {\left( {l - 1} \right)\mathop \int \limits_{0}^{W} \left[ {1 + \frac{{\log \left\{ {1 - \left( {1 - e^{{ - vT_{l} }} } \right)^{{\frac{{\gamma_{1} }}{{\gamma_{2} }}}} } \right\}}}{{S_{m} }}} \right]^{m - 1} \left( {1 - v} \right)^{l - 2} {\text{d}}v;\quad \left( {1 - e^{{ - S_{m} }} } \right)^{{\frac{1}{{\gamma_{1} }}}} \le \left( {1 - e^{{ - T_{l} }} } \right)^{{\frac{1}{{\gamma_{2} }}}} } \\ {\left( {l - 1} \right)\mathop \int \limits_{0}^{1} \left[ {1 + \frac{{\log \left\{ {1 - \left( {1 - e^{{ - vT_{l} }} } \right)^{{\frac{{\gamma_{1} }}{{\gamma_{2} }}}} } \right\}}}{{S_{m} }}} \right]^{m - 1} \left( {1 - v} \right)^{l - 2} {\text{d}}v;\quad \left( {1 - e^{{ - S_{m} }} } \right)^{{\frac{1}{{\gamma_{1} }}}} > \left( {1 - e^{{ - T_{l} }} } \right)^{{\frac{1}{{\gamma_{2} }}}} } \\ \end{array} } \right.$$

where

$$W = - \frac{{\log \left\{ {1 - \left( {1 - e^{{ - S_{m} }} } \right)^{{\frac{{\gamma_{2} }}{{\gamma_{1} }}}} } \right\}}}{{T_{l} }}$$

Corollary 2

When \(G\left( . \right) = H\left( . \right)\) and \(\gamma_{1} = \gamma_{2}\), the UMVUE of \(P\) is given by

$$\hat{P}_{U} = \left\{ {\begin{array}{*{20}c} {\mathop \sum \limits_{i = 0}^{l - 2} \frac{{\left( { - 1} \right)^{i} \left( {l - 1} \right)!\left( {m - 1} \right)!}}{{\left( {l - 2 - i} \right)!\left( {m + i} \right)!}} \left( {\frac{{S_{m} }}{{T_{l} }}} \right)^{i + 1} ;} & {S_{m} \le T_{l} } \\ {\mathop \sum \limits_{i = 0}^{m - 1} \frac{{\left( { - 1} \right)^{i} \left( {l - 1} \right)!\left( {m - 1} \right)!}}{{\left( {l + i - 1} \right)!\left( {m - 1 - i} \right)!}} \left( {\frac{{T_{l} }}{{S_{m} }}} \right)^{i} ;} & {S_{m} > T_{l} } \\ \end{array} } \right.$$

Theorem 7

The MLE of \(P\) when \(G\left( . \right) = H\left( . \right)\) and \(\gamma_{1} = \gamma_{2} = \gamma\) (say) is given by

$$\hat{P}_{\text{ML}} = \frac{{\hat{\sigma }_{2 ML} }}{{\hat{\sigma }_{1 ML} + \hat{\sigma }_{2 ML} }}$$

Proof

We have

$$\begin{aligned} \hat{P}_{\text{ML}} & = \mathop \int \limits_{y = 0}^{\infty } \hat{R}_{1 ML} (y,\sigma_{1} ,\gamma_{1} ) \hat{f}_{2 ML} \left( {y;\sigma_{2} ,\gamma_{2} } \right) {\text{d}}y \\ & =\mathop \int \limits_{y = 0}^{\infty }\left[ {1 - G^{{\gamma_{1} }} \left( y \right)} \right]^{{\hat{\sigma }_{1ML} }} \hat{\sigma }_{2ML} \gamma_{2} h\left( y \right)H^{{\gamma_{2} - 1}} \left( y \right)\left\{ {1 - H^{{\gamma_{2} }} \left( y \right)} \right\}^{{\hat{\sigma }_{2ML} - 1}} {\text{d}}y \\ \end{aligned}$$

On substituting \(H^{{\gamma_{2} }} \left( y \right) = v\), we get

$$\hat{P}_{\text{ML}} = \mathop \int \limits_{v = 0}^{1} \left[ {1 - G^{{\gamma_{1} }} \left\{ {H^{ - 1} \left( {v^{{\frac{1}{{\gamma_{2} }}}} } \right)} \right\}} \right]^{{\hat{\sigma }_{1 ML} }} \hat{\sigma }_{2ML} \left( {1 - v} \right)^{{\hat{\sigma }_{2ML} - 1}} {\text{d}}v$$

When \(G\left( . \right) = H\left( . \right),\) we get

$$\hat{P}_{\text{ML}} = \mathop \int \limits_{v = 0}^{1} \left[ {1 - v^{{\frac{{\gamma_{1} }}{{\gamma_{2} }}}} } \right]^{{\hat{\sigma }_{1 ML} }} \hat{\sigma }_{2ML} \left( {1 - v} \right)^{{\hat{\sigma }_{2ML} - 1}} {\text{d}}v$$

Using the condition that \(\gamma_{1}\)  = \(\gamma_{2}\) = \(\gamma\) (say), we get the desired result on solving the integral.
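
For the case \(G\left( . \right) = H\left( . \right)\) and \(\gamma_{1} = \gamma_{2}\), both \(\hat{P}_{\text{ML}}\) (Theorem 7) and \(\hat{P}_{U}\) (Corollary 2) reduce to closed forms in \(\left( {m,S_{m} } \right)\) and \(\left( {l,T_{l} } \right)\); a small sketch (ours, with hypothetical inputs) is given below.

```python
# Sketch (ours): MLE of P (Theorem 7) and UMVUE of P (Corollary 2)
# for the case G = H and gamma_1 = gamma_2.
from math import factorial

def P_mle(m, S_m, l, T_l):
    s1, s2 = m / S_m, l / T_l            # MLEs of sigma_1 and sigma_2
    return s2 / (s1 + s2)

def P_umvue(m, S_m, l, T_l):
    if S_m <= T_l:
        return sum((-1) ** i * factorial(l - 1) * factorial(m - 1)
                   / (factorial(l - 2 - i) * factorial(m + i))
                   * (S_m / T_l) ** (i + 1) for i in range(l - 1))
    return sum((-1) ** i * factorial(l - 1) * factorial(m - 1)
               / (factorial(l + i - 1) * factorial(m - 1 - i))
               * (T_l / S_m) ** i for i in range(m))

print(P_mle(10, 4.8, 8, 3.9), P_umvue(10, 4.8, 8, 3.9))  # hypothetical inputs
```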

In the following theorem, we derive the critical region for testing hypotheses regarding σ.

Theorem 8

Let the null hypothesis be \(H_{0} :\sigma = \sigma_{0}\) against the alternative \(H_{1} :\sigma \ne \sigma_{0}\). Then, the critical region is given by

$$\left( {0 < S_{m} < \frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{{2\sigma_{0} }}} \right) \cup \left( {\frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{{2\sigma_{0} }} < S_{m} < \infty } \right)$$

where \(\alpha\) is the level of significance.

Proof

We know that

$$2{{\sigma }}S_{m} \sim \chi_{2m}^{2}$$

The critical region is then given by

$$\left( {0 < \chi_{2m}^{2} < \chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)} \right) \cup \left( {\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right) < \chi_{2m}^{2} < \infty } \right)$$

or

$$\left( {0 < S_{m} < \frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{{2\sigma_{0} }}} \right) \cup \left( {\frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{{2\sigma_{0} }} < S_{m} < \infty } \right)$$
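
The test of Theorem 8 is straightforward to implement; the following sketch (ours, with hypothetical inputs) computes the statistic \(2\sigma_{0} S_{m}\), which is \(\chi_{2m}^{2}\) under \(H_{0}\), and compares it with the two chi-square quantiles.

```python
# Sketch (ours): size-alpha test of H0: sigma = sigma_0 (Theorem 8).
from scipy.stats import chi2

def reject_H0_sigma(S_m, m, sigma_0, alpha=0.05):
    C1 = chi2.ppf(alpha / 2, 2 * m)
    C2 = chi2.ppf(1 - alpha / 2, 2 * m)
    stat = 2 * sigma_0 * S_m             # ~ chi2 with 2m d.f. under H0
    return stat < C1 or stat > C2

print(reject_H0_sigma(S_m=4.8, m=10, sigma_0=2.0))  # hypothetical inputs
```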

In what follows, we obtain the critical region for the hypotheses related to \(R\left( t \right)\).

Theorem 9

Let the null hypothesis be \(H_{0} :R\left( t \right) = R_{0} \left( t \right)\) against the alternative \(H_{1} :R\left( t \right) \ne R_{0} \left( t \right)\). Then, the critical region comes out to be

$$\left( {0 < S_{m} < \frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{{2\sigma_{0} }}} \right) \cup \left( {\frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{{2\sigma_{0} }} < S_{m} < \infty } \right)$$

Proof

From (2.2), we know that,

$$R\left( t \right) = \left[ {1 - G^{\gamma } \left( t \right)} \right]^{\sigma }$$

Therefore, testing \(H_{0} :R\left( t \right) = R_{0} \left( t \right)\) against the alternative \(H_{1} :R\left( t \right) \ne R_{0} \left( t \right)\) is equivalent to testing \(H_{0} :\sigma = \sigma_{0}\) against the alternative \(H_{1} :\sigma \ne \sigma_{0}\), where \(\sigma_{0} = \log R_{0} \left( t \right)/\log \left( {1 - G^{\gamma } \left( t \right)} \right)\).

Thus, the theorem follows from Theorem 8.

The following theorem provides critical region for hypotheses regarding \(P\).

Theorem 10

Let us take the null hypothesis \(H_{0} :P = P_{0}\) against the alternative \(H_{1} :P \ne P_{0}\). Then, the critical region turns out to be

$$\left( {\frac{{S_{m} }}{{T_{l} }} < \frac{mk}{l}F_{2m,2l} \left( {\frac{\alpha }{2}} \right)} \right) \cup \left( {\frac{mk}{l}F_{2m,2l} \left( {1 - \frac{\alpha }{2}} \right) < \frac{{S_{m} }}{{T_{l} }}} \right)$$

\({\text{where}}\, k = \frac{{P_{0} }}{{1 - P_{0} }}\)

Proof

We know that

$$P = \frac{{\sigma_{2} }}{{\sigma_{1} + \sigma_{2} }}$$

\(P = P_{0}\) gives \(\sigma_{2} = k \sigma_{1}\)

Therefore, \(H_{0}\) is equivalent to

$$H_{0} :\sigma_{2} = k\sigma_{1} \quad {\text{against}}\quad H_{1} :\sigma_{2} \ne k\sigma_{1}$$

As we know that,

$$S_{m} \sim {\text{Gamma}}\left( {m,\sigma_{1} } \right)\, {\text{and}} \,T_{l} \sim {\text{Gamma}}\left( {l,\sigma_{2} } \right)$$

Therefore,

$$\frac{{l\sigma_{1} S_{m} }}{{m\sigma_{2} T_{l} }} \sim F_{2m,2l}$$
(2.9)

The critical region for testing \(H_{0} :P = P_{0}\) is given by,

$$\left( {F_{2m,2l} < F_{2m,2l} \left( {\frac{\alpha }{2}} \right)} \right) \cup \left( {F_{2m,2l} \left( {1 - \frac{\alpha }{2}} \right) < F_{2m,2l} } \right).$$
(2.10)

The theorem follows using (2.9) and (2.10).
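
Analogously, the test of Theorem 10 compares the statistic \(lS_{m} /\left( {mkT_{l} } \right)\), which is \(F_{2m,2l}\) under \(H_{0}\) by (2.9), with two \(F\) quantiles; a sketch (ours, with hypothetical inputs) follows.

```python
# Sketch (ours): size-alpha test of H0: P = P0 (Theorem 10).
from scipy.stats import f

def reject_H0_P(S_m, m, T_l, l, P0, alpha=0.05):
    k = P0 / (1 - P0)
    stat = (l * S_m) / (m * k * T_l)     # ~ F(2m, 2l) under H0, by (2.9)
    return (stat < f.ppf(alpha / 2, 2 * m, 2 * l)
            or stat > f.ppf(1 - alpha / 2, 2 * m, 2 * l))

print(reject_H0_P(S_m=4.8, m=10, T_l=3.9, l=8, P0=0.5))  # hypothetical inputs
```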

The following three theorems provide confidence intervals for σ, R(t) and P, respectively. The proofs of these theorems emanate as a direct consequence of Theorems 8, 9 and 10, respectively.

Theorem 11

The \(100\left( {1 - \alpha } \right)\%\) confidence interval for σ is given by

$$\left[ {\frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{{2S_{m} }} , \frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{{2S_{m} }}} \right]$$

Theorem 12

The \(100\left( {1 - \alpha } \right)\%\) confidence interval for \(R\left( t \right)\) comes out to be

$$\left[ {{\text{exp}}\left\{ {\frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{{2S_{m} }} \log \left( {1 - G^{\gamma } \left( t \right)} \right)} \right\}, {\text{exp}}\left\{ {\frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{{2S_{m} }} \log \left( {1 - G^{\gamma } \left( t \right)} \right)} \right\}} \right]$$

Theorem 13

The \(100\left( {1 - \alpha } \right)\%\) confidence interval for \(P\) turns out to be

$$\left[ {\frac{1}{{1 + \frac{{mT_{l} }}{{lS_{m} }}F_{2m,2l} \left( {1 - \frac{\alpha }{2}} \right)}}, \frac{1}{{1 + \frac{{mT_{l} }}{{lS_{m} }}F_{2m,2l} \left( {\frac{\alpha }{2}} \right)}}} \right]$$
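
The three equal-tail intervals of Theorems 11–13 are evaluated below in a single sketch (ours); the baseline \(G(t) = 1 - e^{-t}\) and all numerical inputs are hypothetical placeholders.

```python
# Sketch (ours): equal-tail 100(1 - alpha)% CIs of Theorems 11-13,
# with an assumed baseline G(t) = 1 - exp(-t).
import numpy as np
from scipy.stats import chi2, f

def ci_sigma(S_m, m, alpha=0.05):
    return (chi2.ppf(alpha / 2, 2 * m) / (2 * S_m),
            chi2.ppf(1 - alpha / 2, 2 * m) / (2 * S_m))

def ci_R(t, S_m, m, gamma_, alpha=0.05):
    u = np.log(1 - (1 - np.exp(-t)) ** gamma_)      # log(1 - G^gamma(t)) < 0
    return (np.exp(chi2.ppf(1 - alpha / 2, 2 * m) / (2 * S_m) * u),
            np.exp(chi2.ppf(alpha / 2, 2 * m) / (2 * S_m) * u))

def ci_P(S_m, m, T_l, l, alpha=0.05):
    r = m * T_l / (l * S_m)
    return (1 / (1 + r * f.ppf(1 - alpha / 2, 2 * m, 2 * l)),
            1 / (1 + r * f.ppf(alpha / 2, 2 * m, 2 * l)))

print(ci_sigma(4.8, 10), ci_R(0.5, 4.8, 10, 1.5), ci_P(4.8, 10, 3.9, 8))
```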

3 Proposed Preliminary Test Estimators

The prior information for the parameter σ can be expressed in the form of the null hypothesis discussed in Theorem 8.

Let us suppose

$$\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right) = C_{1 } {\text{and }}\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right) = C_{2}$$

and \(I\left( A \right)\) is the indicator function of the set

$$A = \left\{ {\chi_{2m}^{2} ; C_{1} \le \chi_{2m}^{2} \le C_{2} } \right\}$$

The PTEs of \(\sigma^{p}\) based on UMVUE and MLE are then given, respectively, by

$$\hat{\sigma }_{{\text{PT}}\_U}^{p} = \hat{\sigma}_{U}^{p} - \left( {\hat{\sigma }_{U}^{p} - \sigma_{0}^{p} } \right)I\left( A \right)$$
(3.1)

and

$$\hat{\sigma }_{{\text{PT}}\_{\text{ML}}}^{p} = \hat{\sigma }_{\text{ML}}^{p} - \left( {\hat{\sigma }_{\text{ML}}^{p} - \sigma_{0}^{p} } \right)I\left( A \right),$$
(3.2)

where \(\hat{\sigma }_{U }^{p} \,{\text{and}} \,\hat{\sigma }_{\text{ML}}^{p}\) are as defined in Theorems 2 and 3, respectively.
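
Operationally, each PTE equals the hypothesized value \(\sigma_{0}^{p}\) whenever the test statistic \(2\sigma_{0} S_{m}\) falls in the acceptance region \(\left[ {C_{1} ,C_{2} } \right]\), and the corresponding unrestricted estimator otherwise; a sketch (ours, with hypothetical inputs) follows.

```python
# Sketch (ours): PTEs (3.1) and (3.2) of sigma^p.
import numpy as np
from scipy.stats import chi2
from scipy.special import gammaln

def pte_sigma_p(S_m, m, p, sigma_0, alpha=0.05):
    C1, C2 = chi2.ppf([alpha / 2, 1 - alpha / 2], 2 * m)
    accept = C1 <= 2 * sigma_0 * S_m <= C2           # I(A) = 1 iff H0 accepted
    umvue = np.exp(gammaln(m) - gammaln(m - p)) * S_m ** (-p)
    mle = (m / S_m) ** p
    pt_u = sigma_0 ** p if accept else umvue
    pt_ml = sigma_0 ** p if accept else mle
    return pt_u, pt_ml

print(pte_sigma_p(S_m=4.8, m=10, p=1.0, sigma_0=2.0))  # hypothetical inputs
```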

Let us suppose

\(\delta = \frac{\sigma }{{\sigma_{0} }}\), where \(\sigma\) denotes the true value of the parameter and \(\sigma_{0}\) its value specified under \(H_{0}\).

The bias of the PTE given at (3.1) is

$$\begin{aligned} {\text{Bias}}\left( {\hat{\sigma }_{{\text{PT}}\_U}^{p} } \right) & = E\left[ {\hat{\sigma }_{U}^{p} - \left( {\hat{\sigma }_{U}^{p} - \sigma_{0}^{p} } \right)I\left( A \right) - \sigma^{p} } \right] \\ & = \sigma_{0}^{p} P\left( A \right) - E\left( {\hat{\sigma }_{U}^{p} I\left( A \right)} \right) \\ \end{aligned}$$
$${\text{or}}, {\text{Bias}}\left( {\hat{\sigma }_{{\text{PT}}\_U}^{p} } \right) = \sigma_{0}^{p} \left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\} - \sigma^{p} \left\{ {H_{{2\left( {m - p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - p} \right)}} \left( {\delta C_{1} } \right)} \right\}$$

where \(H_{\varPsi } \left( . \right)\) denotes the cdf of the \(\chi^{2}\) distribution with \(\varPsi\) degrees of freedom.

The mean squared error (MSE) of the PTE given at (3.1) comes out to be

$$\begin{aligned} MSE\left( {\hat{\sigma }_{{\text{PT}}\_U}^{p} } \right) & = \sigma^{2p} \left[ {\frac{{\varGamma \left( {m - 2p} \right)\varGamma \left( m \right)}}{{\varGamma^{2} \left( {m - p} \right)}} - 1} \right] + \sigma^{2p} \frac{{\varGamma \left( {m - 2p} \right)\varGamma \left( m \right)}}{{\varGamma^{2} \left( {m - p} \right)}}\left\{ {H_{{2\left( {m - 2p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - 2p} \right)}} \left( {\delta C_{1} } \right)} \right\} \\ & - \sigma^{2p} \left\{ {H_{{2\left( {m - p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - p} \right)}} \left( {\delta C_{1} } \right)} \right\}^{2} \\ & + \sigma_{o}^{2p} \left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\}\left[ {1 - \left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\}} \right] \\ & - 2\sigma_{o}^{p} \sigma^{p} \left\{ {H_{{2\left( {m - p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - p} \right)}} \left( {\delta C_{1} } \right)} \right\}\left[ {1 - \left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\}} \right] \\ & - 2\sigma^{2p} \frac{{\varGamma \left( {m - 2p} \right)\varGamma \left( m \right)}}{{\varGamma^{2} \left( {m - p} \right)}}\left\{ {H_{{2\left( {m - 2p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - 2p} \right)}} \left( {\delta C_{1} } \right)} \right\} \\ & + 2\sigma_{o}^{p} \sigma^{p} \left\{ {H_{{2\left( {m - p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - p} \right)}} \left( {\delta C_{1} } \right)} \right\} + 2\sigma^{2p} \left\{ {H_{{2\left( {m - p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - p} \right)}} \left( {\delta C_{1} } \right)} \right\} \\ & - 2\sigma_{o}^{p} \sigma^{p} \left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\} \\ & + \left[ {\sigma_{0}^{p} \left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\} - \sigma^{p} \left\{ {H_{{2\left( {m - p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - p} \right)}} \left( {\delta C_{1} } \right)} \right\}} \right]^{2} \\ \end{aligned}$$

The bias of the PTE given at (3.2) is

$${\text{Bias}}\left( {\hat{\sigma }_{{PT\_ML}}^{p} } \right) = \left( {\sigma m} \right)^{p} \frac{{\Gamma \left( {m - p} \right)}}{{\Gamma \left( m \right)}}\left[ {1 - \left\{ {H_{{2\left( {m - p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - p} \right)}} \left( {\delta C_{1} } \right)} \right\}} \right] + \sigma _{o}^{p} \left\{ {H_{{2m}} \left( {\delta C_{2} } \right) - H_{{2m}} \left( {\delta C_{1} } \right)} \right\} - \sigma ^{p}$$

The MSE of the PTE given at (3.2) comes out to be

$$\begin{aligned} {\text{MSE}}\left( {\hat{\sigma }_{{PT\_ML}}^{p} } \right) & = \left( {\sigma m} \right)^{{2p}} \left\{ {\frac{{\Gamma \left( {m - 2p} \right)}}{{\Gamma \left( m \right)}} - \left( {\frac{{\Gamma \left( {m - p} \right)}}{{\Gamma \left( m \right)}}} \right)^{2} ~} \right\} \\ & \quad - \left( {\sigma m} \right)^{{2p}} \frac{{\Gamma \left( {m - 2p} \right)}}{{\Gamma \left( m \right)}}\left\{ {H_{{2\left( {m - 2p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - 2p} \right)}} \left( {\delta C_{1} } \right)} \right\} \\ & \quad - ~\left( {\sigma m} \right)^{{2p}} \left\{ {\frac{{\Gamma \left( {m - p} \right)}}{{\Gamma \left( m \right)}}} \right\}^{2} \left\{ {H_{{2\left( {m - p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - p} \right)}} \left( {\delta C_{1} } \right)} \right\}^{2} \\ & \quad + \sigma _{o}^{{2p}} \left\{ {H_{{2m}} \left( {\delta C_{2} } \right) - H_{{2m}} \left( {\delta C_{1} } \right)} \right\}\left[ {1 - \left\{ {H_{{2m}} \left( {\delta C_{2} } \right) - H_{{2m}} \left( {\delta C_{1} } \right)} \right\}} \right] \\ & \quad - ~2\sigma _{o}^{p} \left( {\sigma m} \right)^{p} \frac{{\Gamma \left( {m - p} \right)}}{{\Gamma \left( m \right)}}\left\{ {H_{{2\left( {m - p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - p} \right)}} \left( {\delta C_{1} } \right)} \right\}\left[ {1 - \left\{ {H_{{2m}} \left( {\delta C_{2} } \right) - H_{{2m}} \left( {\delta C_{1} } \right)} \right\}} \right] \\ & \quad + 2\left( {\sigma m} \right)^{p} \left\{ {\frac{{\Gamma \left( {m - p} \right)}}{{\Gamma \left( m \right)}}} \right\}^{2} \left\{ {H_{{2\left( {m - p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - p} \right)}} \left( {\delta C_{1} } \right)} \right\} \\ & \quad - ~2\left( {\sigma m} \right)^{{2p}} \frac{{\Gamma \left( {m - 2p} \right)}}{{\Gamma \left( m \right)}}~\left\{ {H_{{2\left( {m - 2p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - 2p} \right)}} \left( {\delta C_{1} } \right)} \right\} \\ & \quad + 2\sigma _{o}^{p} \left( {\sigma m} \right)^{p} \left\{ {\frac{{\Gamma \left( {m - p} \right)}}{{\Gamma \left( m \right)}}} \right\}\left\{ {H_{{2\left( {m - p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - p} \right)}} \left( {\delta C_{1} } \right)} \right\} \\ & \quad - 2\sigma _{o}^{p} \left( {\sigma m} \right)^{p} \frac{{\Gamma \left( {m - p} \right)}}{{\Gamma \left( m \right)}}\left\{ {H_{{2m}} \left( {\delta C_{2} } \right) - H_{{2m}} \left( {\delta C_{1} } \right)} \right\} \\ & \quad + \left[ {\left( {\sigma m} \right)^{p} \frac{{\Gamma \left( {m - p} \right)}}{{\Gamma \left( m \right)}}\left[ {1 - \left\{ {H_{{2\left( {m - p} \right)}} \left( {\delta C_{2} } \right) - H_{{2\left( {m - p} \right)}} \left( {\delta C_{1} } \right)} \right\}} \right] + \sigma _{o}^{p} \left\{ {H_{{2m}} \left( {\delta C_{2} } \right) - H_{{2m}} \left( {\delta C_{1} } \right)} \right\} - \sigma ^{p} } \right]^{2} \\ \end{aligned}$$
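
The bias expression for (3.1) involves only chi-square cdfs and is easy to evaluate numerically; the sketch below (ours) codes it and cross-checks it against a Monte Carlo average of the PTE, with hypothetical parameter values.

```python
# Sketch (ours): bias of the PTE (3.1), evaluated via chi-square cdfs and
# cross-checked by Monte Carlo (S_m ~ Gamma(m, scale = 1/sigma)).
import numpy as np
from scipy.stats import chi2
from scipy.special import gammaln

def bias_pte_umvue(m, p, sigma, sigma_0, alpha=0.05):
    C1, C2 = chi2.ppf([alpha / 2, 1 - alpha / 2], 2 * m)
    delta = sigma / sigma_0
    H = lambda df: chi2.cdf(delta * C2, df) - chi2.cdf(delta * C1, df)
    return sigma_0 ** p * H(2 * m) - sigma ** p * H(2 * (m - p))

rng = np.random.default_rng(2)
m, p, sigma, sigma_0, alpha = 10, 1.0, 2.2, 2.0, 0.05
S = rng.gamma(m, 1 / sigma, size=200_000)
C1, C2 = chi2.ppf([alpha / 2, 1 - alpha / 2], 2 * m)
accept = (C1 <= 2 * sigma_0 * S) & (2 * sigma_0 * S <= C2)
umvue = np.exp(gammaln(m) - gammaln(m - p)) * S ** (-p)
pte = np.where(accept, sigma_0 ** p, umvue)
print(bias_pte_umvue(m, p, sigma, sigma_0), pte.mean() - sigma ** p)
```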

The prior information for \(R\left( t \right)\) can be expressed in the form of the null hypothesis discussed in Theorem 9. Accordingly, the PTEs of \(R\left( t \right)\) based on MLE and UMVUE are given, respectively, by

$$\hat{R}_{{\text{PT}}\_{\text{ML}}} \left( t \right) = \hat{R}_{\text{ML}} \left( t \right) - \left( {\hat{R}_{\text{ML}} \left( t \right) - R_{0} \left( t \right)} \right)I\left( A \right)$$
(3.3)

where \(R_{0} \left( t \right) = \left[ {1 - G^{\gamma } \left( t \right)} \right]^{{\sigma_{0} }}\) under \(H_{0}\),

and

$$\hat{R}_{{\text{PT}}\_U} \left( t \right) = \hat{R}_{U} \left( t \right) - \left( {\hat{R}_{U} \left( t \right) - R_{0} \left( t \right)} \right)I\left( A \right)$$
(3.4)

where \(\hat{R}_{U} \left( t \right)\) and \(\hat{R}_{\text{ML}} \left( t \right)\) are as defined in Theorems 4 and 5, respectively.
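
A sketch (ours) of the PTEs (3.3) and (3.4): under \(H_{0}\), \(R_{0} \left( t \right) = \left[ {1 - G^{\gamma } \left( t \right)} \right]^{{\sigma_{0} }}\), and the estimator switches between \(R_{0} \left( t \right)\) and the unrestricted estimate according to \(I\left( A \right)\). The baseline and inputs are hypothetical.

```python
# Sketch (ours): PTEs (3.3) and (3.4) of R(t), with an assumed
# baseline G(t) = 1 - exp(-t).
import numpy as np
from scipy.stats import chi2

def pte_R(t, S_m, m, gamma_, sigma_0, alpha=0.05):
    C1, C2 = chi2.ppf([alpha / 2, 1 - alpha / 2], 2 * m)
    accept = C1 <= 2 * sigma_0 * S_m <= C2           # I(A) = 1 iff H0 accepted
    base = 1 - (1 - np.exp(-t)) ** gamma_            # 1 - G^gamma(t)
    R0 = base ** sigma_0                             # R_0(t) under H0
    R_ml = base ** (m / S_m)
    R_u = (1 + np.log(base) / S_m) ** (m - 1) if S_m > -np.log(base) else 0.0
    return (R0 if accept else R_ml), (R0 if accept else R_u)

print(pte_R(t=0.5, S_m=4.8, m=10, gamma_=1.5, sigma_0=2.0))  # hypothetical
```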

The bias of the PTE given at (3.3) is

$${\text{Bias}}\left( {\hat{R}_{{\text{PT}}\_{\text{ML}}} \left( t \right)} \right) = I1 - \varphi_{1} + R_{0} \left( t \right)\left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\} - R\left( t \right)$$

where

$$I1 = \mathop \int \limits_{0}^{\infty } \frac{1}{\varGamma \left( m \right)} e^{ - u} \left( {1 - G^{\gamma } \left( t \right)} \right)^{{\frac{\sigma m}{u}}} u^{m - 1} {\text{d}}u, \, {\text{where}}\,u = \sigma S_{m}$$
$${\text{and }}\,\varphi_{1} = \mathop \int \limits_{{\delta C_{1} /2}}^{{\delta C_{2} /2}} \frac{{u^{m - 1} }}{{\left( {m - 1} \right)!}}e^{ - u} \left( {1 - G^{\gamma } \left( t \right)} \right)^{{\frac{\sigma m}{u}}} {\text{d}}u$$

The MSE of the PTE given at (3.3) is

$$\begin{aligned} {\text{MSE}}\left( {\hat{R}_{{\text{PT}}\_{\text{ML}}} \left( t \right)} \right) & = I2 - I1^{2} - \varphi_{2} - \varphi_{1}^{2} \\ & \quad + R_{o}^{2} \left( t \right)\left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\}\left\{ {1 - \left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\}} \right\} \\ & \quad + 2R_{o} \left( t \right)\left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\}\left( {\varphi_{1} - I1} \right) + 2\varphi_{1} I1 \\ & \quad + \left[ {I1 - \varphi_{1} + R_{0} \left( t \right)\left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\} - R\left( t \right)} \right]^{2} \\ \end{aligned}$$

where

$$I2 = \int\limits_{0}^{\infty } {\frac{1}{\varGamma \left( m \right)} e^{ - u} \left( {1 - G^{\gamma } \left( t \right)} \right)^{{\frac{2\sigma m}{u}}} u^{m - 1} {\text{d}}u \, {\text{and}} \,\varphi_{2} = \mathop \int \limits_{{\delta C_{1} /2}}^{{\delta C_{2} /2}} \frac{{u^{m - 1} }}{{\left( {m - 1} \right)!}}e^{ - u} \left( {1 - G^{\gamma } \left( t \right)} \right)^{{\frac{2\sigma m}{u}}} {\text{d}}u}$$

The bias of the PTE given at (3.4) is

$${\text{Bias}}\left( {\hat{R}_{{\text{PT}}\_U} \left( t \right)} \right) = R_{o} \left( t \right)\left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\} - \varphi_{3}$$

where

$$\varphi_{3} = \mathop \int \limits_{{\delta C_{1} /2}}^{{\delta C_{2} /2}} \frac{{u^{m - 1} e^{ - u} }}{{\left( {m - 1} \right)!}} \left( {1 + \frac{{\sigma { \log }\left( {1 - G^{\gamma } \left( t \right)} \right)}}{u}} \right)^{m - 1} {\text{d}}u$$

The MSE of the PTE given at (3.4) is

$$\begin{aligned} {\text{MSE}}\left\{ {\hat{R}_{{\text{PT}}\_U} \left( t \right)} \right\} & = I3 - R^{2} \left( t \right) - \varphi_{4} - \varphi_{3}^{2} \\ & + R_{0}^{2} \left( t \right) \left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\} \left\{ {1 - \left( {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right)} \right\} \\ & + 2R_{0} \left( t \right)\left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\}\left( {\varphi_{3} - R\left( t \right)} \right) + 2\varphi_{3} R\left( t \right) \\ & + \left( {R_{o} \left( t \right)\left\{ {H_{2m} \left( {\delta C_{2} } \right) - H_{2m} \left( {\delta C_{1} } \right)} \right\} - \varphi_{3} } \right)^{2} \\ \end{aligned}$$

where

$$\varphi_{4} = \mathop \int \limits_{{\delta C_{1} /2}}^{{\delta C_{2} /2}} \frac{{u^{m - 1} e^{ - u} }}{{\left( {m - 1} \right)!}} \left( {1 + \frac{{\sigma { \log }\left( {1 - G^{\gamma } \left( t \right)} \right)}}{u}} \right)^{{2\left( {m - 1} \right)}} {\text{d}}u$$

and

$$I3 = \mathop \int \limits_{0}^{\infty } \frac{{u^{m - 1} e^{ - u} }}{{\left( {m - 1} \right)!}} \left( {1 + \frac{{\sigma { \log }\left( {1 - G^{\gamma } \left( t \right)} \right)}}{u}} \right)^{{2\left( {m - 1} \right)}} {\text{d}}u$$

The prior information for \(P\) can be expressed in the form of the null hypothesis discussed in Theorem 10.

Let I(B) be indicator function of the set

$$B = \left\{ {F_{2m,2l} ; C_{3} \le F_{2m,2l} \le C_{4} } \right\}$$

where

$$C_{3} = F_{2m,2l} \left( {\frac{\alpha }{2}} \right) \,{\text{and}} \,C_{4} = F_{2m,2l} \left( {1 - \frac{\alpha }{2}} \right)$$

The PTEs of \(P\) based on MLE and UMVUE are then given, respectively, by

$$\hat{P}_{{\text{PT}}\_{\text{ML}}} = \hat{P}_{\text{ML}} - \left( {\hat{P}_{\text{ML}} - P_{0} } \right)I\left( B \right)$$
(3.5)

and

$$\hat{P}_{{\text{PT}}\_U} = \hat{P}_{U} - \left( {\hat{P}_{U} - P_{0} } \right)I\left( B \right)$$
(3.6)

where \(\hat{P}_{U}\) and \(\hat{P}_{\text{ML}}\) are as defined in Theorems 6 and 7, respectively.
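
A sketch (ours) of the PTEs (3.5) and (3.6) for the case \(G\left( . \right) = H\left( . \right)\), \(\gamma_{1} = \gamma_{2}\), where the closed forms of Theorem 7 and Corollary 2 apply; inputs are hypothetical.

```python
# Sketch (ours): PTEs (3.5) and (3.6) of P for the case G = H and
# gamma_1 = gamma_2, where the closed forms of Theorem 7 and Corollary 2 hold.
from math import factorial
from scipy.stats import f

def pte_P(m, S_m, l, T_l, P0, alpha=0.05):
    C3, C4 = f.ppf(alpha / 2, 2 * m, 2 * l), f.ppf(1 - alpha / 2, 2 * m, 2 * l)
    k = P0 / (1 - P0)
    accept = C3 <= (l * S_m) / (m * k * T_l) <= C4   # I(B) = 1 iff H0 accepted
    P_ml = (l / T_l) / (m / S_m + l / T_l)
    if S_m <= T_l:
        P_u = sum((-1) ** i * factorial(l - 1) * factorial(m - 1)
                  / (factorial(l - 2 - i) * factorial(m + i))
                  * (S_m / T_l) ** (i + 1) for i in range(l - 1))
    else:
        P_u = sum((-1) ** i * factorial(l - 1) * factorial(m - 1)
                  / (factorial(l + i - 1) * factorial(m - 1 - i))
                  * (T_l / S_m) ** i for i in range(m))
    return (P0 if accept else P_ml), (P0 if accept else P_u)

print(pte_P(m=10, S_m=4.8, l=8, T_l=3.9, P0=0.5))    # hypothetical inputs
```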

Now, we derive bias and MSE expressions of PTEs of \(P\) based on MLE and UMVUE.

$${\text{Bias}}\left( {\hat{P}_{{\text{PT}}\_{\text{ML}}} } \right) = E\left( {\hat{P}_{\text{ML}} } \right) - E\left( {\hat{P}_{\text{ML}} I\left( B \right)} \right) + P_{o} E\left( {I\left( B \right)} \right) - P$$
(3.7)

We know that,

$$\begin{aligned} E\left( {\hat{P}_{\text{ML}} } \right) & = E\left( {\frac{{\hat{\sigma }_{2ML} }}{{\hat{\sigma }_{1ML} + \hat{\sigma }_{2ML} }}} \right) \\ & = E\left( {\hat{Q}} \right), \left( {\text{say}} \right) \\ \end{aligned}$$

We make use of the approach given by Constantine [15] to obtain the pdf of \(\hat{Q}\) by transforming to two new independent random variables \(r > 0\) and \(\theta \in \left( {0,\frac{\pi }{2}} \right)\) such that \(\hat{\sigma }_{1ML} = \frac{{\sigma_{1} r{ \sin }^{2} \theta }}{m}\) and \(\hat{\sigma }_{2ML} = \frac{{\sigma_{2} r{ \cos }^{2} \theta }}{l}\).

Putting \(\varphi = { \sin }^{2} \theta \,{\text{and}}\, \rho = \frac{{\sigma_{1} }}{{\sigma_{2} }}\), the \({\text{pdf}}\) of \(\hat{Q} = \left[ {1 + \rho \left( {\frac{l}{m}} \right)\left( {\frac{\varphi }{1 - \varphi }} \right)} \right]^{ - 1}\) is given by,

$$g\left( q \right) = \frac{1}{{\beta \left( {m,l} \right)}}\left( {\frac{{S_{m} }}{{T_{l} }}} \right)^{m} \left( {\frac{m}{l} \left( {\frac{1 - \varphi }{\varphi }} \right)} \right)^{m + 1} \frac{{q^{l - 1} \left( {1 - q} \right)^{m - 1} }}{{\left( {\varepsilon + q\left( {1 - \varepsilon } \right)} \right)^{ - m - l} }} ;0 < q < 1, \varepsilon = \frac{{S_{m} }}{{T_{l} }}\frac{m}{l}\left( {\frac{1 - \varphi }{\varphi }} \right)$$
(3.8)

When \(\varepsilon = 1,\) using (3.8), we get

$$E\left( {\hat{Q}^{c} } \right) = \frac{{\beta \left( {l + c,m} \right)}}{{\beta \left( {m,l} \right)}}\left( {\frac{{S_{m} }}{{T_{l} }}} \right)^{m} \left( {\frac{m}{l} \left( {\frac{1 - \varphi }{\varphi }} \right)} \right)^{m + 1}$$
(3.9)

When \(\varepsilon \ne 1,\) on substituting \(\varepsilon + q\left( {1 - \varepsilon } \right) = \omega ,\) (3.8) gives

$$E\left( {\hat{Q}^{c} } \right) = \left\{ {\begin{array}{*{20}c} {\frac{1}{{\beta (m,l)}}\left( {\frac{{S_{m} }}{{T_{l} }}} \right)^{m} \left( {\frac{m}{l}\left( {\frac{{1 - \varphi }}{\varphi }} \right)} \right)^{{m + 1}} \frac{{\omega ^{{m + 1}} }}{{(1 - \varepsilon )^{{m + l + c - 1}} }}\sum\limits_{{i = 0}}^{{l + c - 1}} {\left( {\begin{array}{*{20}c} {l + c - 1} \\ i \\ \end{array} } \right)( - 1)^{i} \varepsilon ^{i} \sum\limits_{{j = 0}}^{{m - 1}} {\left( {\begin{array}{*{20}c} {m - 1} \\ j \\ \end{array} } \right)( - 1)^{j} \frac{1}{{j + c - i - l}}(1 - \omega ^{{j + c - i - l}} ),\quad j + c - i - l \ne 0} } } \\ {\frac{1}{{\beta (m,l)}}\left( {\frac{{S_{m} }}{{T_{l} }}} \right)^{m} \left( {\frac{m}{l}\left( {\frac{{1 - \varphi }}{\varphi }} \right)} \right)^{{m + 1}} \frac{{\omega ^{{m + 1}} }}{{(1 - \varepsilon )^{{m + l + c - 1}} }}\sum\limits_{{i = 0}}^{{l + c - 1}} {\left( {\begin{array}{*{20}c} {l + c - 1} \\ i \\ \end{array} } \right)( - 1)^{i} \varepsilon ^{i} \sum\limits_{{j = 0}}^{{m - 1}} {\left( {\begin{array}{*{20}c} {m - 1} \\ j \\ \end{array} } \right)( - 1)^{{j + 1}} \log \,\omega ,\quad j + c - i - l = 0} } } \\ \end{array} } \right.$$
$${\text{Thus}},\,E\left( {\hat{P}_{\text{ML}} } \right) = \left\{ {\begin{array}{*{20}l} { \frac{l}{m + l}\left( {\frac{{S_{m} }}{{T_{l} }}} \right)^{m} \left( {\frac{m}{l} \left( {\frac{1 - \varphi }{\varphi }} \right)} \right)^{m + 1} ; \varepsilon = 1} \\ { \varphi_{5 } ; \quad\quad\quad\quad\qquad\quad\varepsilon \ne 1} \\ \end{array} } \right.$$
(3.10)

where \(\varphi_{5}\), which is obtained by putting c = 1 in the expression of \(E\left( {\hat{Q}^{c} } \right)\), is as follows

$$\varphi _{5} = \left\{ {\begin{array}{*{20}l} {\frac{1}{{\beta (m,l)}}\left( {\frac{{S_{m} }}{{T_{l} }}} \right)^{m} \left( {\frac{m}{l}\left( {\frac{{1 - \varphi }}{\varphi }} \right)} \right)^{{m + 1}} \frac{{\omega ^{{m + 1}} }}{{(1 - \varepsilon )^{{m + l }} }}\sum\limits_{{i = 0}}^{l} {\left( {\begin{array}{*{20}l} l \hfill \\ i \hfill \\ \end{array} } \right)( - 1)^{i} \varepsilon ^{i} \sum\limits_{{j = 0}}^{{m - 1}} {\left( {\begin{array}{*{20}l} {m - 1} \\ j \\ \end{array} } \right)( - 1)^{j} \frac{1}{{j + 1 - i - l}}(1 - \omega ^{{j + 1 - i - l}} ),\quad j + 1 - i - l \ne 0} } } \hfill \\ {\frac{1}{{\beta (m,l)}}\left( {\frac{{S_{m} }}{{T_{l} }}} \right)^{m} \left( {\frac{m}{l}\left( {\frac{{1 - \varphi }}{\varphi }} \right)} \right)^{{m + 1}} \frac{{\omega ^{{m + 1}} }}{{(1 - \varepsilon )^{{m + l}} }}\sum\limits_{{i = 0}}^{{l }} {\left( {\begin{array}{*{20}c} l \\ i \\ \end{array} } \right)( - 1)^{i} \varepsilon ^{i} \sum\limits_{{j = 0}}^{{m - 1}} {\left( {\begin{array}{*{20}c} {m - 1} \\ j \\ \end{array} } \right)( - 1)^{{j + 1}} \log {\mkern 1mu} \omega ,\quad j + 1 - i - l = 0} } } \hfill \\ \end{array} } \right.$$

Further,

$$\begin{aligned} E\left( {\hat{P}_{\text{ML}} I\left( B \right)} \right) & = \frac{1}{{\beta \left( {m,l} \right)}}\left( {\frac{m}{l}} \right)^{m} \mathop \int \limits_{{C_{3} }}^{{C_{4} }} \frac{{\upsilon^{m} \left( {1 + \left( {\frac{m}{l}} \right)\upsilon } \right)^{ - m - l} }}{{\left( {\upsilon + \frac{{\sigma_{1} }}{{\sigma_{2} }}} \right)}}{\text{d}}\upsilon \\ & = \varphi_{6} , \left( {\text{say}} \right) \\ \end{aligned}$$
(3.11)

Using (3.10) and (3.11) in (3.7), the bias of the PTE of \(P\) based on the MLE is:

$${\text{Bias}}\left( {\hat{P}_{{\text{PT}}\_{\text{ML}}} } \right) = \left\{ {\begin{array}{*{20}c} {\frac{l}{{m + l}}\left( {\frac{{S_{m} }}{{T_{l} }}} \right)^{m} \left( {\frac{m}{l}\left( {\frac{{1 - \varphi }}{\varphi }} \right)} \right)^{m + 1} - \varphi_{6} + P_{o} \left\{ {F_{2m,2l} \left( {C_{4} } \right) - F_{2m,2l} \left( {C_{3} } \right)} \right\} - P;\quad \varepsilon = 1} \\ {\varphi_{5} - \varphi_{6} + P_{o} \left\{ {F_{2m,2l} \left( {C_{4} } \right) - F_{2m,2l} \left( {C_{3} } \right)} \right\} - P;\quad \varepsilon \ne 1} \\ \end{array} } \right.$$

and the MSE of the PTE of \(P\) based on the MLE is:

$${\text{MSE}}\left( {\hat{P}_{{PT\_ML}} } \right) = \left\{ {\begin{array}{*{20}c} \begin{gathered} \varphi _{8} - \varphi _{7} - \varphi _{6} ^{2} + 2P_{0} \left\{ {F_{{2m,2l}} \left( {C_{4} } \right) - F_{{2m,2l}} \left( {C_{3} } \right)} \right\}\left\{ {\varphi _{6} - \varphi _{8} } \right\} + P_{o}^{2} \left\{ {F_{{2m,2l}} \left( {C_{4}}\right) - F_{{2m,2l}} \left( {C_{3} } \right)} \right\} \hfill \\ \left\{ {1 - \left( {F_{{2m,2l}} \left( {C_{4} } \right) - F_{{2m,2l}} \left( {C_{3} } \right)} \right)} \right\} + 2\varphi _{6} \varphi _{8} + \left( {{\text{Bias}}\left( {\hat{P}_{{PT\_ML}} } \right)} \right)^{2};{\text{when } \varepsilon = 1} \hfill \\ \end{gathered} \\ \begin{gathered} \varphi _{9} - \varphi _{5} ^{2} - \varphi _{7} - \varphi _{6} ^{2} + P_{o}^{2} \left\{ {F_{{2m,2l}} \left( {C_{4} } \right) - F_{{2m,2l}} \left( {C_{3} } \right)} \right\}~\left\{ {1 - \left( {F_{{2m,2l}} \left( {C_{4} } \right) - F_{{2m,2l}} \left( {C_{3} } \right)} \right)} \right\} \hfill \\ + 2P_{0} \left\{ {F_{{2m,2l}} \left( {C_{4} } \right) - F_{{2m,2l}} \left( {C_{3} } \right)} \right\}\left\{ {\varphi _{6} - \varphi _{5} } \right\} + 2\varphi _{5} \varphi _{6} + \left( {{\text{Bias}}\left( {\hat{P}_{{PT\_ML}} } \right)} \right)^{2} ~;{\text{when }\varepsilon \ne 1} \hfill \\ \end{gathered}\\ \end{array} } \right.$$

where

$$\varphi_{7} = \frac{1}{{\beta \left( {m,l} \right)}}\left( {\frac{m}{l}} \right)^{m} \mathop \int \limits_{{C_{3} }}^{{C_{4} }} \frac{{\upsilon^{m + 1} \left( {1 + \left( {\frac{m}{l}} \right)\upsilon } \right)^{ - m - l} }}{{\left( {\upsilon + \frac{{\sigma_{1} }}{{\sigma_{2} }}} \right)^{2} }}{\text{d}}\upsilon$$
$$\varphi_{8} = {\text{Var}}\left( {\hat{P}_{\text{ML}} } \right) = \frac{l}{m + l}\left( {\frac{{S_{m} }}{{T_{l} }}} \right)^{m} \left( {\frac{m}{l} \left( {\frac{1 - \varphi }{\varphi }} \right)} \right)^{m + 1} \left\{ {\frac{l + 1}{m + l + 1} - \frac{l}{m + l}\left( {\frac{{S_{m} }}{{T_{l} }}} \right)^{m} \left( {\frac{m}{l} \left( {\frac{1 - \varphi }{\varphi }} \right)} \right)^{m + 1} } \right\}$$
$$\varphi _{9} = \left\{ {\begin{array}{*{20}c} {\frac{1}{{\beta \left( {m,l} \right)}}\left( {\frac{{S_{m} }}{{T_{l} }}} \right)^{m} \left( {\frac{m}{l}~\left( {\frac{{1 - \varphi }}{\varphi }} \right)} \right)^{{m + 1}} \frac{{\omega ^{{m + l}} }}{{\left( {1 - \varepsilon } \right)^{{m + l + 1}} }}\mathop \sum \limits_{{i = 0}}^{{l + 1}} \left( {\begin{array}{*{20}c} {l + 1} \\ i \\ \end{array} } \right)\left( { - 1} \right)^{i} \varepsilon ^{i} \mathop \sum \limits_{{j = 0}}^{{m - 1}} \left( {\begin{array}{*{20}c} {m - 1} \\ j \\ \end{array} } \right)\left( { - 1} \right)^{j} ~\frac{1}{{j + 2 - i - l}}\left( {1 - \omega ^{{j + 2 - i - l}} } \right),~~j + 2 - i - l \ne 0} \\ {\frac{1}{{\beta \left( {m,l} \right)}}\left( {\frac{{S_{m} }}{{T_{l} }}} \right)^{m} \left( {\frac{m}{l}~\left( {\frac{{1 - \varphi }}{\varphi }} \right)} \right)^{{m + 1}} \frac{{\omega ^{{m + l}} }}{{\left( {1 - \varepsilon } \right)^{{m + l + 1}} }}\mathop \sum \limits_{{i = 0}}^{{l + 1}} \left( {\begin{array}{*{20}c} {l + 1} \\ i \\ \end{array} } \right)\left( { - 1} \right)^{i} \varepsilon ^{i} \mathop \sum \limits_{{j = 0}}^{{m - 1}} \left( {\begin{array}{*{20}c} {m - 1} \\ j \\ \end{array} } \right)\left( { - 1} \right)^{{j + 1}} ~log\omega ~,\quad ~j + 2 - i - l = 0} \\ \end{array} } \right.$$
which is obtained by putting \(c = 2\) in the expression of \(E\left( {\hat{Q}^{c} } \right)\).

Let us define

\(\varphi_{10} = \mathop \sum \limits_{i = 0}^{l - 2} \frac{{\left( { - 1} \right)^{i} \left( {l - 1} \right)!\left( {m - 1} \right)!}}{{\left( {l - i - 2} \right)!\left( {m + i} \right)!}}\left( {\frac{{l\sigma_{1} }}{{m\sigma_{2} }}} \right)^{i + 1} \mathop \int \limits_{{C_{3} }}^{{C_{4} }} \upsilon^{i + 1} \emptyset_{1} \left( \upsilon \right){\text{d}}\upsilon ,\) where \(\emptyset_{1} \left( \cdot \right)\) is the pdf of the \(F\) distribution with (2m, 2l) degrees of freedom, and

\(\varphi_{11} = \mathop \sum \limits_{i = 0}^{m - 1} \frac{{\left( { - 1} \right)^{i} \left( {l - 1} \right)!\left( {m - 1} \right)!}}{{\left( {l + i - 1} \right)!\left( {m - i - 1} \right)!}}\left( {\frac{{m\sigma_{2} }}{{l\sigma_{1} }}} \right)^{i} \mathop \int \limits_{{C_{3} }}^{{C_{4} }} \upsilon^{i} \emptyset_{2} \left( \upsilon \right){\text{d}}\upsilon\), where \(\emptyset_{2} \left( \cdot \right)\) is the pdf of the \(F\) distribution with (2l, 2m) degrees of freedom.

The bias of the PTE of \(P\) based on the UMVUE is:

$${\text{Bias}}\left( {\hat{P}_{{\text{PT}}\_U} } \right) = \left\{ {\begin{array}{*{20}c} {P_{o} P\left( B \right) - \varphi_{10} ;v \le 1} \\ {P_{o} P\left( B \right) - \varphi_{11} ;v > 1} \\ \end{array} } \right.$$

where \(v = \frac{{S_{m} }}{{T_{l} }}\) and \(P\left( B \right) = \left\{ {F_{2m,2l} \left( {C_{4} } \right) - F_{2m,2l} \left( {C_{3} } \right)} \right\}\).

To obtain \({\text{MSE}}\left( {\hat{P}_{{\text{PT}}\_U} } \right)\), consider

$$E\left( {\hat{P}_{U}^{2} } \right) = E\left( {\left. {\mathop \sum \limits_{i = 0}^{l - 2} \mathop \sum \limits_{j = 0}^{l - 2} a_{i} a_{j} \left( v \right)^{i + j + 2} } \right|v \le 1} \right)P\left( {v \le 1} \right) + E\left( {\left. {\mathop \sum \limits_{i = 0}^{m - 1} \mathop \sum \limits_{j = 0}^{m - 1} b_{i} b_{j} \left( v \right)^{{ - \left( {i + j} \right)}} } \right|v > 1} \right)P\left( {v > 1} \right)$$
(3.12)

where \(a_{i} = \frac{{\left( { - 1} \right)^{i} \left( {l - 1} \right)!\left( {m - 1} \right)!}}{{\left( {l - i - 2} \right)!\left( {m + i} \right)!}}, b_{i} = \frac{{\left( { - 1} \right)^{i} \left( {l - 1} \right)!\left( {m - 1} \right)!}}{{\left( {l + i - 1} \right)!\left( {m - i - 1} \right)!}}\)

We obtain the \({\text{pdf}}\) of \(v\) as:

$$f\left( v \right) = \frac{{\rho^{m} }}{{\beta \left( {m,l} \right)}}v^{m - 1} \left( {1 + \rho v} \right)^{ - m - l} ; v > 0$$

For c > 0,

$$E\left( {\left. {v^{c} } \right|v \le 1} \right)P\left( {v \le 1} \right) = \mathop \int \limits_{0}^{1} \frac{{\rho^{m} }}{{\beta \left( {m,l} \right)}}v^{m + c - 1} \left( {1 + \rho v} \right)^{ - m - l} {\text{d}}v.$$

Substituting \(r = \left( {1 + \rho v} \right)^{ - 1} ,\) we get on simplification

$$E\left( {\left. {v^{c} } \right|v \le 1} \right)P\left( {v \le 1} \right) = \frac{{\rho^{ - c} }}{{\beta \left( {m,l} \right)}}\mathop \sum \limits_{h = 0}^{m + c - 1} \left( { - 1} \right)^{h} \left( {\begin{array}{*{20}c} {m + c - 1} \\ h \\ \end{array} } \right)\mathop \int \limits_{\varPsi }^{1} r^{l + h - c - 1} {\text{d}}r$$
(3.13)

where

$$\mathop \int \limits_{\varPsi }^{1} r^{l + h - c - 1} {\text{d}}r = \left\{ {\begin{array}{*{20}c} {\frac{{1 - \varPsi^{l + h - c} }}{l + h - c};h \ne c - l} \\ { - { \log }\left( \varPsi \right);h = c - l} \\ \end{array} } \right.$$

and \(\varPsi = \frac{1}{1 + \rho }\).

Similarly, we can obtain

$$E\left( {\left. {v^{ - c} } \right|v > 1} \right)P\left( {v > 1} \right) = \frac{{\rho^{c} }}{{\beta \left( {m,l} \right)}}\mathop \sum \limits_{h = 0}^{l + c - 1} \left( { - 1} \right)^{h} \left( {\begin{array}{*{20}c} {l + c - 1} \\ h \\ \end{array} } \right)\mathop \int \limits_{1 - \varPsi }^{1} r^{m - 1 - c + h} {\text{d}}r$$
(3.14)

where

$$\mathop \int \limits_{1 - \varPsi }^{1} r^{m - 1 - c + h} {\text{d}}r = \left\{ {\begin{array}{*{20}c} {\frac{{1 - \left( {1 - \varPsi } \right)^{m - c + h} }}{m - c + h};\quad h \ne c - m} \\ { - { \log }\left( {1 - \varPsi } \right);\quad h = c - m} \\ \end{array} } \right.$$

Let us denote

$$\varphi_{12} = \mathop \sum \limits_{i = 0}^{l - 2} \mathop \sum \limits_{j = 0}^{l - 2} a_{i} a_{j} \left( {\frac{{l\sigma_{1} }}{{m\sigma_{2} }}} \right)^{i + j + 2} \mathop \int \limits_{{C_{3} }}^{{C_{4} }} \upsilon^{i + j + 2} \emptyset_{1} \left( \upsilon \right){\text{d}}\upsilon \, {\text{and}},\varphi_{13} = \mathop \sum \limits_{i = 0}^{m - 1} \mathop \sum \limits_{j = 0}^{m - 1} b_{i} b_{j} \left( {\frac{{m\sigma_{2} }}{{l\sigma_{1} }}} \right)^{i + j} \mathop \int \limits_{{C_{3} }}^{{C_{4} }} \upsilon^{i + j} \emptyset_{2} \left( \upsilon \right){\text{d}}\upsilon$$

Further, using (3.13) and (3.14) in (3.12), we get the expression for \(E\left( {\hat{P}_{U}^{2} } \right)\) as

$$\begin{aligned} E\left( {\hat{P}_{U}^{2} } \right) & = \mathop \sum \limits_{i = 0}^{l - 2} \mathop \sum \limits_{j = 0}^{l - 2} \frac{{a_{i} a_{j} \rho^{{ - \left( {i + j + 2} \right)}} }}{{\beta \left( {m,l} \right)}}\mathop \sum \limits_{h = 0}^{m + i + j + 1} \left( { - 1} \right)^{h} \left( {\begin{array}{*{20}c} {m + i + j + 1} \\ h \\ \end{array} } \right)\mathop \int \limits_{\varPsi }^{1} r^{l + h - i - j - 3} {\text{d}}r \\ & \quad + \mathop \sum \limits_{i = 0}^{m - 1} \mathop \sum \limits_{j = 0}^{m - 1} \frac{{b_{i} b_{j} \rho^{i + j} }}{{\beta \left( {m,l} \right)}}\mathop \sum \limits_{h = 0}^{l + i + j - 1} \left( { - 1} \right)^{h} \left( {\begin{array}{*{20}c} {l + i + j - 1} \\ h \\ \end{array} } \right)\mathop \int \limits_{1 - \varPsi }^{1} r^{m - 1 - i - j + h} {\text{d}}r = \varphi_{14} , {\text{say}} \\ \end{aligned}$$

Thus, \({\text{Var}}\left( {\hat{P}_{U} } \right) = \varphi_{14} - P^{2}\) and \({\text{Var}}\left( {\hat{P}_{U} I\left( B \right)} \right) = \left\{ {\begin{array}{*{20}c} {\varphi_{12} - \varphi_{10}^{2} ;v \le 1} \\ {\varphi_{13} - \varphi_{11}^{2} ;v > 1} \\ \end{array} } \right.\)

Finally, we obtain the MSE of the PTE of \(P\) based on the UMVUE as:

$${\text{MSE}}\left( {\hat{P}_{{\text{PT}}\_U} } \right) = \left\{ {\begin{array}{*{20}c} {\varphi_{14} - P^{2} - \varphi_{12} - \varphi_{10}^{2} + 2P\varphi_{10} + P_{o}^{2} P\left( B \right)\left( {1 - P\left( B \right)} \right) + 2P_{o} P\left( B \right)\left( {\varphi_{10} - P} \right);v \le 1} \\ {\varphi_{14} - P^{2} - \varphi_{13} - \varphi_{11}^{2} + P_{o}^{2} P\left( B \right)\left( {1 - P\left( B \right)} \right) + 2P\varphi_{11} + 2P_{o} P\left( B \right)\left( {\varphi_{11} - P} \right);v > 1} \\ \end{array} } \right.$$

4 Preliminary Test Confidence Intervals

In this section, we derive the preliminary test confidence intervals (PTCIs) for \(\sigma\), \(R\left( t \right)\) and \(P\) based on their UMVUEs and MLEs; the coverage probability is obtained subsequently.

From Theorems 2 and 3, we know that

$$\hat{\sigma }_{U} = \frac{m - 1}{{S_{m} }}$$
(4.1)

and

$$\hat{\sigma }_{\text{ML}} = \frac{m}{{S_{m} }}$$
(4.2)

Using Theorem 11 and Eqs. (4.1) and (4.2), \(100\left( {1 - \alpha } \right)\%\) equal tail CIs for σ based on UMVUE and MLE are as follows

$$I_{{{\text{ET}}\_{{\sigma }}\_U}} = \left[ {\frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{{2\left( {m - 1} \right)}}\hat{\sigma }_{U} , \frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{{2\left( {m - 1} \right)}}\hat{\sigma }_{U} } \right]$$

and

$$I_{{{\text{ET}}\_{{\sigma }}\_{\text{ML}}}} = \left[ {\frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{2m}\hat{\sigma }_{\text{ML}} , \frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{2m}\hat{\sigma }_{\text{ML}} } \right]$$

Therefore, the PTCIs of σ based on UMVUE and MLE are as follows

$$I_{{{\text{PT}}\_{{\sigma }}\_U}} = \left[ {\frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{{2\left( {m - 1} \right)}}\hat{\sigma }_{{\text{PT}}\_U} , \frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{{2\left( {m - 1} \right)}}\hat{\sigma }_{{\text{PT}}\_U} } \right]$$

and

$$I_{{{\text{PT}}\_{{\sigma }}\_{\text{ML}}}} = \left[ {\frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{2m}\hat{\sigma }_{{\text{PT}}\_{\text{ML}}} , \frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{2m}\hat{\sigma }_{{\text{PT}}\_{\text{ML}}} } \right]$$

where \(\hat{\sigma }_{{\text{PT}}\_U} \,{\text{and}}\, \hat{\sigma }_{{\text{PT}}\_{\text{ML}}}\) are as defined in (3.1) and (3.2), respectively.
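
The PTCIs for σ simply plug the preliminary test estimate (with \(p = 1\)) into the equal-tail intervals above; a sketch (ours, with hypothetical inputs) follows.

```python
# Sketch (ours): PTCIs for sigma based on the PTEs (3.1)-(3.2) with p = 1.
from scipy.stats import chi2

def ptci_sigma(S_m, m, sigma_0, alpha=0.05):
    C1, C2 = chi2.ppf([alpha / 2, 1 - alpha / 2], 2 * m)
    accept = C1 <= 2 * sigma_0 * S_m <= C2
    s_u = sigma_0 if accept else (m - 1) / S_m       # sigma_hat_PT_U
    s_ml = sigma_0 if accept else m / S_m            # sigma_hat_PT_ML
    I_u = (C1 / (2 * (m - 1)) * s_u, C2 / (2 * (m - 1)) * s_u)
    I_ml = (C1 / (2 * m) * s_ml, C2 / (2 * m) * s_ml)
    return I_u, I_ml

print(ptci_sigma(S_m=4.8, m=10, sigma_0=2.0))        # hypothetical inputs
```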

Next, we derive the PTCIs for R(t).

From Theorem 4, we can write

$$\frac{{\log \left( {1 - G^{\gamma } \left( t \right)} \right)}}{{S_{m} }} = \left( {\hat{R}_{U} (t)} \right)^{{\frac{1}{m - 1}}} - 1$$
(4.3)

From Theorem 5, we can write

$$\frac{{\log \left( {1 - G^{\gamma } \left( t \right)} \right)}}{{S_{m} }} = \frac{1}{m}\log \left( {\hat{R}_{\text{ML}} \left( t \right)} \right)$$
(4.4)

Using Theorem 12 and Eqs. (4.3) and (4.4), \(100\left( {1 - \alpha } \right)\%\) equal tail CIs for \(R\left( t \right)\) based on its UMVUE and MLE may be written as

$$I_{{\text{ET}}\_R\_U} = \left[ {{ \exp }\left\{ {\frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{2} \left\{ {\left( {\hat{R}_{U} \left( t \right)} \right)^{{\frac{1}{m - 1}}} - 1} \right\}} \right\}, { \exp }\left\{ {\frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{2} \left\{ {\left( {\hat{R}_{U} \left( t \right)} \right)^{{\frac{1}{m - 1}}} - 1} \right\}} \right\}} \right]$$

and

$$I_{{\text{ET}}\_R\_{\text{ML}}} = \left[ {{ \exp }\left\{ {\frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{2m} \log \left( {\hat{R}_{\text{ML}} \left( t \right)} \right)} \right\}, { \exp }\left\{ {\frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{2m} \log \left( {\hat{R}_{\text{ML}} \left( t \right)} \right)} \right\}} \right]$$

Therefore, the PTCIs of \(R\left( t \right)\) based on UMVUE and MLE are as follows:

$$I_{{\text{PT}}\_R\_U} = \left[ {{ \exp }\left\{ {\frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{2} \left\{ {\left( {\hat{R}_{{\text{PT}}\_U} \left( t \right)} \right)^{{\frac{1}{m - 1}}} - 1} \right\}} \right\}, \,{ \exp }\left\{ {\frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{2} \left\{ {\left( {\hat{R}_{{\text{PT}}\_U} \left( t \right)} \right)^{{\frac{1}{m - 1}}} - 1} \right\}} \right\}} \right]$$

and

$$I_{{\text{PT}}\_R\_{\text{ML}}} = \left[ {{ \exp }\left\{ {\frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{2m} \log \left( {\hat{R}_{{\text{PT}}\_{\text{ML}}} \left( t \right)} \right)} \right\}, { \exp }\left\{ {\frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{2m} \log \left( {\hat{R}_{{\text{PT}}\_{\text{ML}}} \left( t \right)} \right)} \right\}} \right]$$

where \(\hat{R}_{{\text{PT}}\_{\text{ML}}} \left( t \right) \,{\text{and}}\, \hat{R}_{{\text{PT}}\_U} \left( t \right)\) are as defined in (3.3) and (3.4), respectively.
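As a companion sketch, the MLE-based interval for \(R\left( t \right)\) is just an exponential transform of the same chi-square quantiles; substituting \(\hat{R}_{{\text{PT}}\_{\text{ML}}} \left( t \right)\) for \(\hat{R}_{\text{ML}} \left( t \right)\) gives the PTCI. The function name is illustrative.

```python
import math
from scipy.stats import chi2

def reliability_ci_ml(R_hat, m, alpha=0.05):
    """Equal tail CI for R(t) from R_hat = R_ML(t); pass R_PT_ML(t) for the PTCI."""
    lo_q = chi2.ppf(alpha / 2, 2 * m)
    hi_q = chi2.ppf(1 - alpha / 2, 2 * m)
    log_r = math.log(R_hat)                  # negative, since 0 < R(t) < 1
    # The larger quantile times a negative log gives the smaller (lower) endpoint.
    return math.exp(hi_q / (2 * m) * log_r), math.exp(lo_q / (2 * m) * log_r)
```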

Now, we derive the confidence intervals for \(P\).

From Theorem 6, we can write

$$\frac{{T_{l} }}{{S_{m} }} = \left\{ {\begin{array}{*{20}c} {\frac{{\hat{P}_{U} }}{d1} , S_{m} \le T_{l} } \\ {\frac{{\hat{P}_{U} }}{d2}, T_{l} < S_{m} } \\ \end{array} } \right.$$
(4.5)
where
$$d1 = \mathop \sum \limits_{i = 0}^{l - 2} \left( { - 1} \right)^{i} \frac{{\left( {l - 1} \right)!\left( {m - 1} \right)!}}{{\left( {l - i - 2} \right)!\left( {m + i} \right)!}}\left( {\frac{{S_{m} }}{{T_{l} }}} \right)^{i + 2} \quad {\text{and}}\quad d2 = \mathop \sum \limits_{i = 0}^{m - 1} \left( { - 1} \right)^{i} \frac{{\left( {l - 1} \right)!\left( {m - 1} \right)!}}{{ \left( {l + i - 1} \right)!\left( {m - i - 1} \right)!}}\left( {\frac{{T_{l} }}{{S_{m} }}} \right)^{i - 1}$$

From Theorem 7, we can write

$$\frac{{mT_{l} }}{{lS_{m} }} = \frac{{1 - \hat{P}_{\text{ML}} }}{{\hat{P}_{\text{ML}} }}$$
(4.6)

Using Theorem 12 and Eqs. (4.5) and (4.6), we can write \(100\left( {1 - \alpha } \right)\%\) equal tail CIs for \(P\) based on UMVUE and MLE as

$$I_{{\text{ET}}\_P\_U} = \left\{ {\begin{array}{*{20}c} {\left[ {\frac{1}{{1 + \frac{m}{l} \left( {\frac{{\hat{P}_{U} }}{d1}} \right)F_{2m,2l} \left( {1 - \frac{\alpha }{2}} \right)}}, \frac{1}{{1 + \frac{m}{l} \left( {\frac{{\hat{P}_{U} }}{d1}} \right)F_{2m,2l} \left( {\frac{\alpha }{2}} \right)}}} \right], S_{m} \le T_{l} } \\ {\left[ {\frac{1}{{1 + \frac{m}{l} \left( {\frac{{\hat{P}_{U} }}{d2}} \right)F_{2m,2l} \left( {1 - \frac{\alpha }{2}} \right)}}, \frac{1}{{1 + \frac{m}{l} \left( {\frac{{\hat{P}_{U} }}{d2}} \right)F_{2m,2l} \left( {\frac{\alpha }{2}} \right)}}} \right], S_{m} > T_{l} } \\ \end{array} } \right.$$

and

$$I_{{\text{ET}}\_P\_{\text{ML}}} = \left[ {\begin{array}{*{20}c} {\frac{1}{{1 + \left( {\frac{{1 - \hat{P}_{\text{ML}} }}{{\hat{P}_{\text{ML}} }}} \right)F_{2m,2l} \left( {1 - \frac{\alpha }{2}} \right)}}, \frac{1}{{1 + \left( {\frac{{1 - \hat{P}_{\text{ML}} }}{{\hat{P}_{\text{ML}} }}} \right)F_{2m,2l} \left( {\frac{\alpha }{2}} \right)}} } \\ \\ \end{array} } \right]$$

Therefore, PTCIs for \(P\) based on UMVUE and MLE are defined as follows:

$$I_{{\text{PT}}\_P\_U} = \left\{ {\begin{array}{*{20}c} {\left[ {\frac{1}{{1 + \frac{m}{l} \left( {\frac{{\hat{P}_{{\text{PT}}\_U} }}{d1}} \right)F_{2m,2l} \left( {1 - \frac{\alpha }{2}} \right)}}, \frac{1}{{1 + \frac{m}{l} \left( {\frac{{\hat{P}_{{\text{PT}}\_U} }}{d1}} \right)F_{2m,2l} \left( {\frac{\alpha }{2}} \right)}}} \right],\, S_{m} \le T_{l} } \\ {\left[ {\begin{array}{*{20}c} {\frac{1}{{1 + \frac{m}{l} \left( {\frac{{\hat{P}_{{\text{PT}}\_U} }}{d2}} \right)F_{2m,2l} \left( {1 - \frac{\alpha }{2}} \right)}}, \frac{1}{{1 + \frac{m}{l} \left( {\frac{{\hat{P}_{{\text{PT}}\_U} }}{d2}} \right)F_{2m,2l} \left( {\frac{\alpha }{2}} \right)}} } \\ \\ \end{array} } \right], S_{m} > T_{l} } \\ \end{array} } \right.$$

and

$$I_{{\text{PT}}\_P\_{\text{ML}}} = \left[ {\begin{array}{*{20}c} {\frac{1}{{1 + \left( {\frac{{1 - \hat{P}_{{\text{PT}}\_{\text{ML}}} }}{{\hat{P}_{{\text{PT}}\_{\text{ML}}} }}} \right)F_{2m,2l} \left( {1 - \frac{\alpha }{2}} \right)}}, \frac{1}{{1 + \left( {\frac{{1 - \hat{P}_{{\text{PT}}\_{\text{ML}}} }}{{\hat{P}_{{\text{PT}}\_{\text{ML}}} }}} \right)F_{2m,2l} \left( {\frac{\alpha }{2}} \right)}} } \\ \\ \end{array} } \right]$$

where \(\hat{P}_{{\text{PT}}\_{\text{ML}}} \,{\text{and}}\, \hat{P}_{{\text{PT}}\_U}\) are as defined in (3.5) and (3.6), respectively.
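Similarly, the MLE-based interval for \(P\) requires only \(F\) quantiles; replacing \(\hat{P}_{\text{ML}}\) by \(\hat{P}_{{\text{PT}}\_{\text{ML}}}\) yields the PTCI. A minimal sketch (function name illustrative):

```python
from scipy.stats import f

def stress_strength_ci_ml(P_hat, m, l, alpha=0.05):
    """Equal tail CI for P from P_hat = P_ML; pass P_PT_ML for the PTCI."""
    odds = (1 - P_hat) / P_hat
    lower = 1 / (1 + odds * f.ppf(1 - alpha / 2, 2 * m, 2 * l))
    upper = 1 / (1 + odds * f.ppf(alpha / 2, 2 * m, 2 * l))
    return lower, upper
```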

Now we obtain the coverage probability of the PTCI of σ based on its UMVUE. Let us suppose that \(\delta = \frac{{{\sigma }}}{{{{\sigma }}_{0} }}\), where \({{\sigma }}_{0}\) is the value of \({{\sigma }}\) specified under \(H_{0}\). We know that \(T = 2{{\sigma }}S_{m} \sim \chi_{2m}^{2}\).

$$\begin{aligned} P\left( {{{\sigma }} \in I_{{{\text{PT}}\_{{\sigma }}\_U}} } \right) & = P\left\{ {{{\sigma }} \in \left( {A_{1} {{\sigma }}_{0} ,A_{2} {{\sigma }}_{0} } \right), \chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right) \le 2{{\sigma }}_{0} S_{m} \le \chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)} \right\} \\ & + P\left\{ {{{\sigma }} \in \left( {A_{1} \hat{\sigma }_{U} ,A_{2} \hat{\sigma }_{U} } \right), 2{{\sigma }}_{0} S_{m} < \chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)} \right\} \\ & + P\left\{ {{{\sigma }} \in \left( {A_{1} \hat{\sigma }_{U} ,A_{2} \hat{\sigma }_{U} } \right), 2{{\sigma }}_{0} S_{m} > \chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)} \right\} \\ \end{aligned}$$

where \(A_{1} = \frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{{2\left( {m - 1} \right)}} \,{\text{and}}\, A_{2} = \frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{{2\left( {m - 1} \right)}}\)

$$\begin{aligned} P\left( {{{\sigma }} \in I_{{{\text{PT}}\_{{\sigma }}\_U}} } \right) & = P\left\{ {A_{1} \le {{\delta }} \le A_{2} ; {{\delta }}\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right) \le T \le {{\delta }}\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)} \right\} \\ & \quad + P\left\{ {\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right) < T < \chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right);T < {{\delta }} \chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)} \right\} \\ & \quad + P\left\{ {\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right) < T< \chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right);T> {{\delta }}\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)} \right\} \\ \end{aligned}$$

or,

$$\begin{aligned} P\left( {{{\sigma }} \in I_{{{\text{PT}}\_{{\sigma }}\_U}} } \right) & = P\left\{ {{{\delta }}\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right) \le T \le {{\delta }}\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)} \right\}I_{{A_{1} A_{2} }} \left( {{\delta }} \right) \\ & \quad + {\text{P}}\left\{ {\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right) < T < { \hbox{min} }\left( {\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right),{{\delta }}\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right) } \right)} \right\} \\ & \quad + P\left\{ {{ \hbox{max} }\left( {\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right),{{\delta }}\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)} \right) < T < \chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)} \right\} \\ \end{aligned}$$

Let us denote \(P\left\{ {{{\delta }}\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right) \le T \le {{\delta }}\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)} \right\}I_{{A_{1} A_{2} }} \left( {{\delta }} \right)\) by \(A\), where \(I_{{A_{1} A_{2} }} \left( {{\delta }} \right)\) is the indicator that \(\delta \in \left( {A_{1} ,A_{2} } \right)\). Considering all possible cases of \({{\delta }}\), we may write

$$P\left( {{{\sigma }} \in I_{{{\text{PT}}\_{{\sigma }}\_U}} } \right) = \left\{ {\begin{array}{*{20}l} {A + 1 - \alpha ,} & { 0 < {{\delta }} \le \frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}} \\ {A + 1 - \alpha , } & {{{\delta }} \ge \frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}} \\ {A + P\left\{ {\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right) < T < {{\delta }}\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)} \right\}, } & {1 < {{\delta }} < \frac{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}}{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}} \\ {A + P\left\{ {{{\delta }}\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right) < T < \chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)} \right\},} & {\frac{{\chi_{2m}^{2} \left( {\frac{\alpha }{2}} \right)}}{{\chi_{2m}^{2} \left( {1 - \frac{\alpha }{2}} \right)}} < {{\delta }} \le 1} \\ \end{array} } \right.$$
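The case-wise expression above is straightforward to evaluate numerically; plotting it against \(\delta\) reproduces curves of the kind shown in Fig. 1. A sketch, interpreting the indicator \(I_{{A_{1} A_{2} }} \left( {{\delta }} \right)\) as \(\delta \in \left( {A_{1} ,A_{2} } \right)\):

```python
from scipy.stats import chi2

def ptci_coverage_sigma_u(delta, m, alpha=0.15):
    """Coverage probability of the PTCI of sigma (UMVUE version) as a
    function of delta = sigma / sigma_0, per the case-wise formula above."""
    lo_q = chi2.ppf(alpha / 2, 2 * m)
    hi_q = chi2.ppf(1 - alpha / 2, 2 * m)
    cdf = lambda x: chi2.cdf(x, 2 * m)        # T = 2*sigma*S_m ~ chi^2_{2m}
    A1, A2 = lo_q / (2 * (m - 1)), hi_q / (2 * (m - 1))
    A = (cdf(delta * hi_q) - cdf(delta * lo_q)) if A1 < delta < A2 else 0.0
    if delta <= lo_q / hi_q or delta >= hi_q / lo_q:
        return A + 1 - alpha
    if delta > 1:                              # 1 < delta < hi_q / lo_q
        return A + cdf(delta * lo_q) - cdf(lo_q)
    return A + cdf(hi_q) - cdf(delta * hi_q)   # lo_q/hi_q < delta <= 1
```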

Similarly, the coverage probabilities of the other PTCIs may be obtained.

5 Numerical Findings

In this section, we assess the performance of the preliminary test estimators based on progressive type-II censored data. Five progressive censoring schemes, listed in Table 1, are considered.

Table 1 Different progressive type-II censoring schemes

We consider the Kumaraswamy distribution [22] as a particular case of the Kw-G family. A random variable \(X\) is said to follow the Kumaraswamy distribution if its pdf and cdf are given by

$$f\left( {x;\sigma ,\gamma } \right) = \sigma \gamma x^{\gamma - 1} \left( {1 - x^{\gamma } } \right)^{\sigma - 1} ;\quad 0 < x<1, \sigma, \gamma >0$$
(5.1)

and

$$F\left( {x;\sigma ,\gamma } \right) = 1 - \left( {1 - x^{\gamma } } \right)^{\sigma } ,\quad 0 < x<1, \sigma, \gamma >0$$
(5.2)
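For reference, (5.1), (5.2) and the quantile function \(F^{ - 1}\) needed in step 6 of the algorithm below can be coded directly (a minimal sketch; function names are illustrative):

```python
def kw_pdf(x, sigma, gamma):
    """Kumaraswamy pdf, Eq. (5.1)."""
    return sigma * gamma * x ** (gamma - 1) * (1 - x ** gamma) ** (sigma - 1)

def kw_cdf(x, sigma, gamma):
    """Kumaraswamy cdf, Eq. (5.2)."""
    return 1 - (1 - x ** gamma) ** sigma

def kw_ppf(w, sigma, gamma):
    """Quantile function: inverting (5.2) gives x = (1 - (1 - w)^(1/sigma))^(1/gamma)."""
    return (1 - (1 - w) ** (1 / sigma)) ** (1 / gamma)
```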

For the simulation studies, different progressively censored samples are generated using the algorithm proposed by Balakrishnan and Sandhu [3], which involves the following steps (a code sketch implementing these steps is given after the list):

1. Generate \(m\) independent and identically distributed (iid) random numbers \(\left( {u_{1} ,u_{2} , \ldots ,u_{m} } \right)\) from the uniform distribution \(U\left( {0,1} \right).\)

2. Set \(z_{i} = - \log \left( {1 - u_{i} } \right),\) so that the \(z_{i} 's\) are iid standard exponential variates.

3. Given the censoring scheme \(R = \left( {R_{1} ,R_{2} , \ldots ,R_{m} } \right)\), set \(y_{1} = z_{1} /n\) and \(y_{i} = y_{i - 1} + \frac{{z_{i} }}{{\left( {n - \mathop \sum \nolimits_{j = 1}^{i - 1} R_{j} - i + 1} \right)}}, i = 2, \ldots ,m.\)

4. Then \(\left( {y_{1} ,y_{2} , \ldots ,y_{m} } \right)\) is a progressive type-II censored sample from the standard exponential distribution.

5. Set \(w_{i} = 1 - \exp \left( { - y_{i} } \right),\) so that the \(w_{i} 's\) form a progressive type-II censored sample from \(U\left( {0,1} \right).\)

6. Set \(x_{i} = F^{ - 1} \left( {w_{i} } \right)\), where \(F\left( \cdot \right)\) is the cdf of the Kumaraswamy distribution as defined in (5.2).

Now \(\left( {x_{1} ,x_{2} , \ldots ,x_{m} } \right)\) is a progressive type-II censored sample from the said distribution with censoring scheme \(R = \left( {R_{1} ,R_{2} , \ldots ,R_{m} } \right).\)
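The steps above translate directly into code; a sketch using NumPy follows, with `kw_ppf` being the quantile function coded earlier.

```python
import numpy as np

def progressive_sample(n, R, sigma, gamma, rng):
    """Steps 1-6: a progressive type-II censored Kumaraswamy sample,
    where R = (R_1, ..., R_m) satisfies sum(R) + m = n."""
    m = len(R)
    z = -np.log(1 - rng.uniform(size=m))      # steps 1-2: iid standard exponentials
    y = np.empty(m)
    y[0] = z[0] / n                           # step 3: all n units at risk initially
    for i in range(1, m):
        # 0-indexed form of n - sum_{j=1}^{i} R_j - (i+1) + 1 units still at risk
        at_risk = n - sum(R[:i]) - i
        y[i] = y[i - 1] + z[i] / at_risk
    w = 1 - np.exp(-y)                        # step 5: censored sample from U(0,1)
    return kw_ppf(w, sigma, gamma)            # step 6: transform by F^{-1}
```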

For each combination of sample size, parameter values and progressive censoring scheme, we generated 1000 progressively censored samples. For each sample, the UMVUE and MLE of the parameter, \(R\left( t \right)\) and \(P\) are computed. Finally, the mean square error (MSE) of each estimator is obtained from the estimates across all 1000 replications; a sketch of this simulation loop for σ is given below.
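The sketch below assumes, as in Sect. 4, that \(2\sigma S_{m} \sim \chi_{2m}^{2}\), and takes \(S_{m} = - \sum\nolimits_{i = 1}^{m} {\left( {R_{i} + 1} \right)\log \left( {1 - x_{i}^{\gamma } } \right)}\) for the Kumaraswamy case with \(\gamma\) known; the exact form of the statistic should be taken from the earlier sections. It reuses `progressive_sample` from the previous sketch.

```python
import numpy as np
from scipy.stats import chi2

def simulate_mse_sigma(n, R, sigma, gamma, sigma_0, n_rep=1000, alpha=0.05):
    m = len(R)
    lo_q, hi_q = chi2.ppf(alpha / 2, 2 * m), chi2.ppf(1 - alpha / 2, 2 * m)
    rng = np.random.default_rng(2021)
    est = {"U": [], "ML": [], "PT_U": [], "PT_ML": []}
    for _ in range(n_rep):
        x = progressive_sample(n, R, sigma, gamma, rng)
        S_m = -np.sum((np.asarray(R) + 1) * np.log(1 - x ** gamma))  # assumed form
        s_u, s_ml = (m - 1) / S_m, m / S_m       # UMVUE and MLE of sigma
        accept = lo_q <= 2 * sigma_0 * S_m <= hi_q   # preliminary test of H0
        est["U"].append(s_u)
        est["ML"].append(s_ml)
        est["PT_U"].append(sigma_0 if accept else s_u)
        est["PT_ML"].append(sigma_0 if accept else s_ml)
    return {k: float(np.mean((np.asarray(v) - sigma) ** 2)) for k, v in est.items()}
```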

The estimates of σ along with their MSEs for different censoring schemes are presented in Table 2. Here, it is assumed for the sake of illustration that the true value of σ is 2.5. From Table 2, it can be seen that the MSE of preliminary test estimators is less than that of the classical estimators. Further, the preliminary test estimates are closer to the true value of \(\sigma\) than the classical estimates.

Table 2 Estimates and corresponding MSE for parameter σ

In Table 3, the estimates of \(R\left( t \right)\) and their MSEs are obtained for different values of time t under different censoring schemes. It can be observed that the MSE of preliminary test estimators is less than that of the classical estimators. Further, the preliminary test estimates are closer to the true value of \(R\left( t \right)\) than the classical estimates.

Table 3 Estimates and corresponding MSE for \(R\left( t \right)\)

Similarly, the estimates of \(P\) and their MSEs for different combinations of \(\left( {m, l} \right)\) and \(\left( {\sigma_{1} ,\sigma_{2} } \right)\) are obtained and presented in Table 4. Findings similar to those in the above cases may be drawn here. Therefore, it can be concluded that the preliminary test estimators of \(\sigma , R\left( t \right) \,{\text{and}} \,P\) perform better than the classical estimators, since the MSE of the preliminary test estimators is less than that of the classical estimators in all the cases under the simulated data sets.

Table 4 Estimates and corresponding MSE for \(P\)

Finally, the coverage probability (CP) of the PTCI of σ is plotted against \(\delta\) in Fig. 1. It may be seen that for fixed values of \(n, m\) and \(\alpha = 0.15\), the CP, as a function of \(\delta\), first decreases, then increases and crosses the line \(1 - \alpha\), decreases and increases again, drops to a minimum, and finally increases and tends to the line \(1 - \alpha\) as δ becomes large. Further, it is observed that for small values of \(m\) the domination interval is wider than for large values of \(m\). Therefore, we may conclude that for \(\delta\) in a specific interval, the CP of the PTCI of σ exceeds that of the equal tail confidence interval.

Fig. 1 Coverage probability of the PTCI of σ

Let us now consider the real data set used by Proschan [25], Rasouli and Balakrishnan [27] and Kumari et al. [23]. The data represent the intervals between failures (in hours) of the air conditioning system of a fleet of 13 Boeing 720 jet airplanes. Proschan [25] observed that the failure time distribution of the air conditioning system of each plane can be well approximated by the exponential distribution. For the present study, the failure times of plane ‘7913’ are taken, which are as follows: 3, 5, 5, 13, 14, 15, 22, 22, 23, 30, 36, 39, 44, 46, 50, 72, 79, 88, 97, 102, 139, 188, 197, 210

This data set is transformed to a new data set with range of unit interval by using the transformation,

$$X = \frac{X}{\hbox{max} \left( X \right) + 1}$$

The transformed data set is as follows: 0.0142, 0.0237, 0.0237, 0.0616, 0.0664, 0.0711, 0.1043, 0.1043, 0.1090, 0.1422, 0.1706, 0.1848, 0.2085, 0.2180, 0.2370, 0.3412, 0.3744, 0.4171, 0.4597, 0.4834, 0.6588, 0.8910, 0.9336, 0.9953

Kumari et al. [23] have shown that the distribution under consideration provides a good fit to this data set. Further, the ML estimates for the complete data set are \(\left( {\hat{\sigma }, \hat{\gamma }} \right) = \left( {1.0854,0.6054} \right)\).

Let us suppose that, out of this data set, the failure times of only 15 units are completely observed, the rest being progressively censored. Then, from Theorem 3, we find that for \(p = 1\), \(\hat{\sigma }_{\text{ML}} = 1.19862\).

Consider the hypothesis,

$$H_{0} : \sigma = 1.5 \,{\text{against}}\, H_{1} : \sigma \ne 1.5$$

The computed test statistic is \(2\sigma_{0} S_{m} = 37.54319\), which does not fall in the critical region; therefore, we do not reject the null hypothesis at the 5% level of significance, so that \(\hat{\sigma }_{{\text{PT}}\_{\text{ML}}} = 1.5\). Further, the estimated value \(\delta = \frac{{\sigma_{0} }}{{\hat{\sigma }_{\text{ML}} }} = 1.25144\) falls in the interval (0.422468, 2.367046), which means that the coverage probability of the PTCI of σ exceeds that of the equal tail CI.
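These numbers can be verified with a few lines, assuming the reported statistic \(2\sigma_{0} S_{m} = 37.54319\) has already been computed from the censored transformed sample with \(m = 15\):

```python
from scipy.stats import chi2

m, sigma_0, alpha = 15, 1.5, 0.05
stat = 37.54319                                # reported value of 2*sigma_0*S_m
lo_q, hi_q = chi2.ppf(alpha / 2, 2 * m), chi2.ppf(1 - alpha / 2, 2 * m)
reject = not (lo_q <= stat <= hi_q)            # (16.79, 46.98): H0 not rejected
S_m = stat / (2 * sigma_0)                     # back out S_m from the statistic
print(reject, m / S_m)                         # False, 1.19862 -> matches sigma_ML
```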

6 Conclusions

In this paper, we have obtained point as well as interval estimators for the powers of the parameter, \(R\left( t \right)\,{\text{and}}\,P\) for the Kumaraswamy-G distribution under the progressive type-II censoring scheme. The PTEs and PTCIs for the parameter, \(R\left( t \right)\) and \(P\) for the said distribution are further developed. The bias and MSE of all the PTEs are obtained. The numerical analysis shows that the proposed PTEs perform better than the classical estimators whenever the true value of the parameter is close to the prior guessed value. From the analysis of the real-life data set, it can be inferred that the PTCI of the parameter has greater coverage probability than the equal tail confidence interval in the neighbourhood of the null hypothesis. Thus, one can construct PTEs and PTCIs which are superior to their classical counterparts whenever some prior information is available.