1 Introduction

In life testing and reliability experiments, time-to-failure data obtained under normal operating conditions are used to analyze a product's failure-time distribution and its associated parameters. Continuous improvement in manufacturing design makes it difficult to obtain lifetime information for highly reliable products and materials when they are tested under normal conditions, because such life tests become very expensive and time consuming. To obtain failures quickly, a sample of these materials is tested at operating conditions more severe than the normal ones. These conditions are referred to as stresses, and may take the form of temperature, voltage, force, humidity, pressure, vibration, etc. This type of testing is called accelerated life testing (ALT), where products are run at higher-than-usual stress conditions to induce early failures in a short time. The life data collected from such accelerated tests are then analyzed and extrapolated, through a proper life-stress relationship, to estimate the life characteristics under normal operating conditions. There are situations in which a life-stress relationship is neither known nor can reasonably be assumed, so the data obtained from ALT cannot be extrapolated to normal conditions. In such situations, partially accelerated life testing (PALT) is used, in which test units are run at both normal and accelerated conditions.

1.1 Constant Stress ALT

The stresses can be applied in various ways, namely constant-stress, step-stress, and progressive-stress (see Nelson [16]). Under step-stress PALT, a test item is first run at normal use conditions and, if it does not fail within a specified time, it is then run at accelerated conditions until failure occurs or the observation is censored. A progressive-stress ALT allows the stress level to increase linearly and continuously on any surviving test units. A constant-stress PALT (CS-PALT) is the most common type, in which each test unit is subjected to only one chosen stress level until it fails or the test is terminated, whichever occurs first.

For an overview of the CS-PALT, there is a substantial body of literature on designing such tests; see, for example, Bai and Chung [3], Bai et al. [4], Abdel-Ghani [1], Hassan [9], Abdel-Hamid [2], Ismail [11], Ismail et al. [12], Wang and Cheng [21], Kamal et al. [13], Srivastava and Mittal [19, 20], Hassan et al. [10], and Mahmoud et al. [15].

1.2 Competing Risks Schemes

In reliability analysis, the failure of items may be attributable to more than one cause at the same time. These “causes” compete for the failure of the experimental unit, and the resulting setup is known in the statistical literature as the competing risks model. In competing risks data analysis, the data consist of a failure time and the associated cause of failure. The causes of failure may be assumed to be independent or dependent. In this paper, we adopt the latent failure time model, as suggested by Cox [5], in which the failure times are independently distributed. For several examples where failure is due to more than one cause, see Crowder [6]. Consider a life testing experiment with \( n \) identical units whose lifetimes are described by independent and identically distributed (i.i.d.) random variables \( X_{1} , \ldots , X_{n} \). Without loss of generality, assume that there are only two causes of failure, and let \( X_{1i} \) and \( X_{2i} \) denote the latent failure times of the ith unit under the first and second cause of failure, respectively, for \( i = 1, \ldots , n \). We assume that the latent failure times \( X_{1i} \) and \( X_{2i} \) are independent and that the pairs \( \left( {X_{1i} , X_{2i} } \right) \) are i.i.d. The observed failure time is the random variable \( T_{i} = \hbox{min} \left\{ {X_{1i} , X_{2i} } \right\} \). The survival function of the random variable \( T \) is given by

$$ \begin{aligned} \bar{F}_{T} \left( x \right) & = \Pr \left( {T > x} \right) \\ & = \Pr \left( {X_{1} > x,X_{2} > x} \right) \\ & = \Pr \left( {X_{1} > x} \right)\Pr \left( {X_{2} > x} \right) \\ & = \overline{{F_{1} }} \left( x \right)\overline{{F_{2} }} \left( x \right), \\ \end{aligned} $$

where \( \bar{F}_{k} \left( \cdot \right) = 1 - F_{k} \left( \cdot \right) \) is the survival function associated with the kth cause and \( \bar{G}\left( {x_{1} ,x_{2} } \right) = \overline{{F_{1} }} \left( {x_{1} } \right)\overline{{F_{2} }} \left( {x_{2} } \right) \) is the joint survival function of the latent failure times. Using the relation \( f\left( x \right) = - \frac{\partial }{{\partial x}}\bar{F}\left( x \right) \), the cause-specific sub-densities are

$$ g_{1} \left( x \right) = - \left. {\frac{\partial }{{\partial x_{1} }}\bar{G}\left( {x_{1} ,x_{2} } \right)} \right|_{{x_{1} = x_{2} = x}} = f_{1} \left( x \right)\overline{{F_{2} }} \left( x \right),\quad g_{2} \left( x \right) = - \left. {\frac{\partial }{{\partial x_{2} }}\bar{G}\left( {x_{1} ,x_{2} } \right)} \right|_{{x_{1} = x_{2} = x}} = f_{2} \left( x \right)\overline{{F_{1} }} \left( x \right). $$
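
To make the latent failure time construction concrete, the following minimal Python sketch simulates the two independent latent failure times, records the observed minimum and its cause, and checks the sub-density relation \( g_{1} \left( x \right) = f_{1} \left( x \right)\overline{{F_{2} }} \left( x \right) \) empirically. The Weibull-type survival functions \( \Pr \left( {X_{k} > x} \right) = e^{{ - \lambda_{k} x^{{\theta_{k} }} }} \) and all parameter values are assumed purely for illustration; the sketch is not part of the estimation procedure developed below.

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(2024)
n = 200_000

# Illustrative (hypothetical) parameters for the two causes:
# P(X_k > x) = exp(-lam_k * x**th_k), drawn via X_k = (E / lam_k)**(1/th_k) with E ~ Exp(1).
th1, lam1 = 1.5, 0.8
th2, lam2 = 2.0, 0.5
x1 = (rng.exponential(1.0, n) / lam1) ** (1.0 / th1)
x2 = (rng.exponential(1.0, n) / lam2) ** (1.0 / th2)

t = np.minimum(x1, x2)              # observed failure time T = min(X1, X2)
cause = np.where(x1 <= x2, 1, 2)    # observed cause of failure

# Sub-density check: P(T <= c, cause = 1) should equal the integral of g_1(s) = f_1(s) * F2bar(s).
c = 1.0
g1 = lambda s: th1 * lam1 * s ** (th1 - 1) * np.exp(-lam1 * s ** th1) * np.exp(-lam2 * s ** th2)
prob_theory, _ = quad(g1, 0.0, c)
prob_empirical = ((t <= c) & (cause == 1)).mean()
print(prob_empirical, prob_theory)  # the two values should agree closely
```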

Recently, several authors have investigated competing failure models in ALT; see, for example, Shi et al. [17], Han and Kundu [8], Haghighi and Bae [7], Zhang et al. [22], Shi et al. [18] and Lone et al. [14].

The Weibull distribution is a very popular model and has been used extensively over the past decades for modeling data in reliability, engineering and biological studies. In this paper, we consider the estimation problem for the CS-PALT competing failure model based on the Weibull distribution under type I censoring (TIC) and type II censoring (TIIC). The rest of this paper is organized as follows. In Sect. 2, the CS-PALT competing failure model based on the Weibull distribution under the TIC and TIIC schemes is described and some basic assumptions are given. In Sect. 3, we obtain the maximum likelihood (ML) estimators of the acceleration factors and unknown parameters for the CS-PALT competing risks model under TIC. Section 4 gives the ML estimators of the acceleration factors and unknown parameters for the CS-PALT competing risks model under TIIC. The simulation results of all proposed methods for different sample sizes and censoring schemes are presented in Sect. 5.

2 Model Description and Assumptions

This section presents the main assumptions of the product life test in the CS-PALT competing failure model, and explains the test procedures in CS-PALT based on the TIC and TIIC schemes when the lifetimes of the competing failure causes are assumed to follow Weibull distributions.

2.1 Model Description

The test procedure in CS-PALT is considered as follows:

  • A total of \( n \) items is divided into two groups:

    • Group 1 consists of \( n_{1} = n\left( {1 - \pi } \right) \) items allocated to normal conditions, where \( \left( {1 - \pi } \right) \) is the proportion of the sample run at normal conditions.

    • Group 2 consists of the remaining \( n_{2} = n\pi \) items, which are subjected to accelerated conditions.

  • Each item in Group 1 and Group 2 is run at a constant level of stress until the test terminates, namely when the censoring time \( \tau \) is reached in the case of TIC or when the rth failure occurs in the case of TIIC.

  • The lifetimes \( T_{i} ,\quad i = 1, 2, \ldots , n\left( {1 - \pi } \right), \) of items allocated to normal conditions follow a Weibull distribution with shape parameter \( \theta \) and scale parameter \( \lambda \), with probability density function (pdf) and cumulative distribution function (cdf) given by:

    $$ f\left( {t_{i} } \right) = \theta \lambda t_{i}^{\theta - 1} e^{{ - \lambda t_{i}^{\theta } }} ;\quad t_{i} , \theta , \lambda > 0, $$
    (1)

    and,

    $$ F\left( {t_{i} } \right) = 1 - e^{{ - \lambda t_{i}^{\theta } }} , $$
    (2)

    where the observed ordered failure times under TIC are \( t_{\left( 1 \right)} < \cdots < t_{{\left( {n_{u} } \right)}} < \tau \), with \( n_{u} \) the number of items that fail at normal conditions, while under TIIC the observed ordered failure times are \( t_{\left( 1 \right)} < t_{\left( 2 \right)} < \cdots < t_{\left( r \right)} \).

  • The lifetimes \( X_{j} ,\quad j = 1, 2, \ldots , n\pi , \) of items allocated to accelerated conditions follow a Weibull distribution with shape parameter \( \theta \), scale parameter \( \lambda \) and acceleration factor \( \beta \), with pdf and cdf given by:

    $$ f\left( {x_{j} } \right) = \theta \lambda \beta \left( {\beta x_{j} } \right)^{\theta - 1} e^{{ - \lambda \left( {\beta x_{j} } \right)^{\theta } }} ;\quad x_{j} , \theta , \lambda > 0 , \beta > 1, $$
    (3)

    and,

    $$ F\left( {x_{j} } \right) = 1 - e^{{ - \lambda \left( {\beta x_{j} } \right)^{\theta } }} , $$
    (4)

    where the observed ordered failure times under TIC are \( x_{\left( 1 \right)} < \cdots < x_{{\left( {n_{a} } \right)}} < \tau \), with \( n_{a} \) the number of items that fail at accelerated conditions, while under TIIC the observed ordered failure times are \( x_{\left( 1 \right)} < x_{\left( 2 \right)} < \cdots < x_{\left( r \right)} \).
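
To illustrate the two-group design described above, the following Python sketch generates competing-risks lifetimes for both groups; the parameter values are hypothetical and are not taken from the paper's tables. It uses the fact that, under cdf (4), \( \beta X \) follows the corresponding normal-condition Weibull law, so an accelerated lifetime can be drawn as a normal-condition lifetime divided by its acceleration factor.

```python
import numpy as np

rng = np.random.default_rng(7)

def weibull_sample(size, theta, lam, rng):
    # Draws from P(T > t) = exp(-lam * t**theta) via an Exp(1) inverse transform.
    return (rng.exponential(1.0, size) / lam) ** (1.0 / theta)

n, pi = 100, 0.4
n1, n2 = int(n * (1 - pi)), int(n * pi)

# Illustrative parameter choices only (shape, scale, acceleration factor for each cause):
th1, lam1, b1 = 1.5, 0.8, 1.6
th2, lam2, b2 = 2.0, 0.5, 1.4

# Group 1 (normal conditions): latent times per cause, then the observed minimum and its cause.
t1 = weibull_sample(n1, th1, lam1, rng)
t2 = weibull_sample(n1, th2, lam2, rng)
t, cause_u = np.minimum(t1, t2), np.where(t1 <= t2, 1, 2)

# Group 2 (accelerated): F(x) = 1 - exp(-lam*(b*x)**theta) means b*X has the normal-condition
# law, so an accelerated latent time is a normal-condition draw divided by b.
x1 = weibull_sample(n2, th1, lam1, rng) / b1
x2 = weibull_sample(n2, th2, lam2, rng) / b2
x, cause_a = np.minimum(x1, x2), np.where(x1 <= x2, 1, 2)
print(t[:5], x[:5])
```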

2.2 Basic Assumptions

  • The lifetimes \( T_{i} ,\quad i = 1, 2, \ldots , n\left( {1 - \pi } \right), \) of items allocated to normal conditions are i.i.d. random variables.

  • The lifetimes \( X_{j} ,\quad j = 1, 2, \ldots , n\pi , \) of items allocated to accelerated conditions are i.i.d. random variables.

  • The lifetimes \( T_{i} \) and \( X_{j} \) are mutually independent.

3 ML Estimators Under TIC Competing Risks Data

Suppose that the observed values of the lifetimes of the \( n\left( {1 - \pi } \right) \) items at normal conditions are \( t_{\left( 1 \right)} , t_{\left( 2 \right)} , \ldots , t_{{\left( {n\left( {1 - \pi } \right)} \right)}} \), and the observed values of the lifetimes of the \( n\pi \) items at accelerated conditions are \( x_{\left( 1 \right)} , x_{\left( 2 \right)} , \ldots , x_{{\left( {n\pi } \right)}} \). Let \( \delta_{ui} \) and \( \delta_{aj} \) denote the failure indicators such that

$$ \delta_{ui} = \left\{ {\begin{array}{*{20}l} {1\quad } \hfill & {t_{i} < \tau } \hfill \\ {0\quad } \hfill & {otherwise} \hfill \\ \end{array} } \right.\quad i = 1, 2, \ldots , n\left( {1 - \pi } \right), $$

and

$$ \delta_{aj} = \left\{ {\begin{array}{*{20}l} {1\quad } \hfill & {x_{j} < \tau } \hfill \\ {0\quad } \hfill & {otherwise} \hfill \\ \end{array} } \right.\quad j = 1, 2, \ldots , n\pi . $$
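
A small sketch of how these TIC indicators can be constructed from observed failure times and causes is given below; the numbers are purely illustrative.

```python
import numpy as np

tau = 1.5
# Hypothetical observed minima and their causes (1 or 2) for the normal-condition group:
time = np.array([0.40, 2.10, 0.90, 1.70, 1.20])
cause = np.array([1, 2, 2, 1, 1])

d1u = ((cause == 1) & (time < tau)).astype(int)  # delta_{1ui}: failed from cause 1 before tau
d2u = ((cause == 2) & (time < tau)).astype(int)  # delta_{2ui}: failed from cause 2 before tau
censored = 1 - d1u - d2u                         # delta-bar_{ui}: still running at tau
obs_time = np.minimum(time, tau)                 # recorded time: failure time or tau
print(d1u, d2u, censored, obs_time)
```

The indicators \( \delta_{1aj} \) and \( \delta_{2aj} \) for the accelerated group are built in exactly the same way from the \( x_{j} \) and \( \tau \).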

The likelihood function for TIC competing risks data at normal conditions, when the cause of failure is known, is given by

$$ L \propto \mathop \prod \limits_{i = 1}^{{n\bar{\pi }}} \left[ {f_{1} \left( {t_{i} } \right)\bar{F}_{2} \left( {t_{i} } \right)} \right]^{{I\left( {\delta_{i} = 1} \right)}} \left[ {f_{2} \left( {t_{i} } \right)\bar{F}_{1} \left( {t_{i} } \right)} \right]^{{I\left( {\delta_{i} = 2} \right)}} \left[ {\bar{F}_{1} \left( \tau \right) \bar{F}_{2} \left( \tau \right)} \right]^{{\bar{\delta }_{i} }} , $$
(5)

where \( t_{i} = t_{\left( i \right)} \) and \( \bar{\pi } = 1 - \pi \). Substituting (1) and (2) into the likelihood function (5), the contribution of the items tested at normal conditions becomes:

$$\begin{aligned} & L_{{1\left( {ui} \right)}} \propto \mathop \prod \limits_{i = 1}^{{n\bar{\pi }}} \left[ {\theta_{1} \lambda_{1} t_{i}^{{\theta_{1} - 1}} e^{{ - \left( {\lambda_{1} t_{i}^{{\theta_{1} }} + \lambda_{2} t_{i}^{{\theta_{2} }} } \right)}} } \right]^{{\delta_{1ui} }}\\ & \quad \left[ {\theta_{2} \lambda_{2} t_{i}^{{\theta_{2} - 1}} e^{{ - \left( {\lambda_{2} t_{i}^{{\theta_{2} }} + \lambda_{1} t_{i}^{{\theta_{1} }} } \right)}} } \right]^{{\delta_{2ui} }} \left[ {e^{{ - \left( {\lambda_{1} \tau^{{\theta_{1} }} + \lambda_{2} \tau^{{\theta_{2} }} } \right)}} } \right]^{{\bar{\delta }_{ui} }}. \end{aligned} $$

Also, the likelihood function for TIC competing risks data when the cause of failure is known at accelerated conditions is given by

$$ \begin{aligned} &L_{{1\left( {aj} \right)}} \propto \mathop \prod \limits_{j = 1}^{n\pi } \left[ {\theta_{1} \lambda_{1} \beta_{1} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} - 1}} e^{{ - \left[ {\lambda_{1} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} }} + \lambda_{2} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} }} } \right]}} } \right]^{{\delta_{1aj} }}\\ &\quad \left[ {e^{{ - \left[ {\lambda_{1} \left( {\beta_{1} \tau } \right)^{{\theta_{1} }} + \lambda_{2} \left( {\beta_{2} \tau } \right)^{{\theta_{2} }} } \right]}} } \right]^{{\bar{\delta }_{aj} }} \left[ {\theta_{2} \lambda_{2} \beta_{2} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} - 1}} e^{{ - \left[ {\lambda_{2} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} }} + \lambda_{1} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} }} } \right]}} } \right]^{{\delta_{2aj} }}. \end{aligned} $$

Since the lifetimes \( t_{1} , \ldots , t_{{n_{u} }} \) and \( x_{1} , \ldots , x_{{n_{a} }} \) are independent, the total likelihood function for TIC competing risks data, when the cause of failure is known at both normal and accelerated conditions, \( \left( {t_{1} ,\delta_{u1} ; \ldots ;t_{{n\bar{\pi }}} ,\delta_{{un\bar{\pi }}} ;\;x_{1} ,\delta_{a1} ; \ldots ;x_{n\pi } ,\delta_{an\pi } } \right) \), is given by:

$$ \begin{aligned} L_{1} & \propto L_{{1\left( {ui} \right)}} L_{{1\left( {aj} \right)}} \\ & \propto \mathop \prod \limits_{i = 1}^{{n\bar{\pi }}} \left[ {\theta_{1} \lambda_{1} t_{i}^{{\theta_{1} - 1}} e^{{ - \left( {\lambda_{1} t_{i}^{{\theta_{1} }} + \lambda_{2} t_{i}^{{\theta_{2} }} } \right)}} } \right]^{{\delta_{1ui} }} \left[ {\theta_{2} \lambda_{2} t_{i}^{{\theta_{2} - 1}} e^{{ - \left( {\lambda_{2} t_{i}^{{\theta_{2} }} + \lambda_{1} t_{i}^{{\theta_{1} }} } \right)}} } \right]^{{\delta_{2ui} }} \left[ {e^{{ - \left( {\lambda_{1} \tau^{{\theta_{1} }} + \lambda_{2} \tau^{{\theta_{2} }} } \right)}} } \right]^{{\bar{\delta }_{ui} }} \\ & \quad \times \mathop \prod \limits_{j = 1}^{n\pi } \left[ {\theta_{1} \lambda_{1} \beta_{1} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} - 1}} e^{{ - \left[ {\lambda_{1} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} }} + \lambda_{2} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} }} } \right]}} } \right]^{{\delta_{1aj} }} \left[ {\theta_{2} \lambda_{2} \beta_{2} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} - 1}} e^{{ - \left[ {\lambda_{2} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} }} + \lambda_{1} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} }} } \right]}} } \right]^{{\delta_{2aj} }} \\ & \quad \times \left[ {e^{{ - \left[ {\lambda_{1} \left( {\beta_{1} \tau } \right)^{{\theta_{1} }} + \lambda_{2} \left( {\beta_{2} \tau } \right)^{{\theta_{2} }} } \right]}} } \right]^{{\bar{\delta }_{aj} }} , \\ \end{aligned} $$

where \( \bar{\delta }_{ui} = 1 - \delta_{ui} \) and \( \bar{\delta }_{aj} = 1 - \delta_{aj} \), with \( \delta_{ui} = \delta_{1ui} + \delta_{2ui} \) and \( \delta_{aj} = \delta_{1aj} + \delta_{2aj} \). The ML estimators \( \hat{\theta }_{1} , \hat{\theta }_{2} , \hat{\lambda }_{1} ,\hat{\lambda }_{2} , \hat{\beta }_{1} \) and \( \hat{\beta }_{2} \) of the parameters and acceleration factors \( \theta_{1} , \theta_{2} , \lambda_{1} , \lambda_{2} , \beta_{1} \) and \( \beta_{2} \) are the values which maximize the likelihood function. The logarithm of the likelihood function, \( l_{1} = \ln L_{1} \), is given by:

$$ \begin{aligned} l_{1} & \propto n_{10} \ln \theta_{1} + n_{10} \ln \lambda_{1} + n_{20} \ln \theta_{2} + n_{20} \ln \lambda_{2} + n_{1a} \ln \beta_{1} + n_{2a} \ln \beta_{2} + \left( {\theta_{1} - 1} \right)\mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{1ui} \ln t_{i} \\ & \quad - \lambda_{1} \left[ {\mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{1ui} t_{i}^{{\theta_{1} }} + \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{2ui} t_{i}^{{\theta_{1} }} } \right] + \left( {\theta_{2} - 1} \right)\mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{2ui} \ln t_{i} - \lambda_{2} \left[ {\mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{1ui} t_{i}^{{\theta_{2} }} + \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{2ui} t_{i}^{{\theta_{2} }} } \right] \\ & \quad + \left( {\theta_{1} - 1} \right)\mathop \sum \limits_{j = 1}^{n\pi } \delta_{1aj} \ln \left( {\beta_{1} x_{j} } \right) - \lambda_{1} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{1aj} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} }} + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{2aj} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} }} } \right] \\ & \quad + \left( {\theta_{2} - 1} \right)\mathop \sum \limits_{j = 1}^{n\pi } \delta_{2aj} \ln \left( {\beta_{2} x_{j} } \right) - \lambda_{2} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{1aj} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} }} + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{2aj} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} }} } \right] - \left( {n\bar{\pi } - n_{u} } \right)\\ &\qquad\left[ {\lambda_{1} \tau^{{\theta_{1} }} + \lambda_{2} \tau^{{\theta_{2} }} } \right] - \left( {n\pi - n_{a} } \right)\left[\lambda_{1} \left( {\beta_{1} \tau } \right)^{{\theta_{1} }} + \lambda_{2} \left( {\beta_{2} \tau } \right)^{{\theta_{2} }} \right]. \\ \end{aligned} $$
(6)

The first derivatives of the logarithm of the likelihood function (6) with respect to \( \theta_{k} , \lambda_{k} \) and \( \beta_{k} \), for \( k, s = 1, 2 \) with \( s \ne k \), are given by:

$$ \begin{aligned} \frac{{\partial l_{1} }}{{\partial \theta_{k} }} & = \frac{{n_{k0} }}{{\theta_{k} }} + \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{kui} \ln t_{i} - \lambda_{k} \left[ {\mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{kui} t_{i}^{{\theta_{k} }} \ln t_{i} + \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{sui} t_{i}^{{\theta_{k} }} \ln t_{i} } \right] - \left( {n\bar{\pi } - n_{u} } \right)\lambda_{k} \tau^{{\theta_{k} }} \ln \tau \\ & \quad + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \ln \left( {\beta_{k} x_{j} } \right) - \lambda_{k} \beta_{k}^{{\theta_{k} }} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left( {x_{j}^{{\theta_{k} }} } \right)\ln \left( {\beta_{k} x_{j} } \right) + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left( {x_{j}^{{\theta_{k} }} } \right)\ln \left( {\beta_{k} x_{j} } \right)} \right] - \left( {n\pi - n_{a} } \right)\lambda_{k} \beta_{k}^{{\theta_{k} }} \tau^{{\theta_{k} }} \ln \left( {\beta_{k} \tau } \right), \\ \end{aligned} $$
(7)
$$ \frac{{\partial l_{1} }}{{\partial \lambda_{k} }} = \frac{{n_{k0} }}{{\lambda_{k} }} - \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{kui} t_{i}^{{\theta_{k} }} - \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{sui} t_{i}^{{\theta_{k} }} - \left( {n\bar{\pi } - n_{u} } \right)\tau^{{\theta_{k} }} - \left( {n\pi - n_{a} } \right)\beta_{k}^{{\theta_{k} }} \tau^{{\theta_{k} }} - \beta_{k}^{{\theta_{k} }} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left( {x_{j}^{{\theta_{k} }} } \right) + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left( {x_{j}^{{\theta_{k} }} } \right)} \right], $$
(8)

and

$$ \frac{{\partial l_{1} }}{{\partial \beta_{k} }} = \frac{{n_{ka} }}{{\beta_{k} }} + \left( {\theta_{k} - 1} \right)\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \beta_{k}^{ - 1} - \left( {n\pi - n_{a} } \right)\lambda_{k} \theta_{k} \tau^{{\theta_{k} }} \beta_{k}^{{\theta_{k} - 1}} - \lambda_{k} \theta_{k} \beta_{k}^{{\theta_{k} - 1}} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left( {x_{j}^{{\theta_{k} }} } \right) + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left( {x_{j}^{{\theta_{k} }} } \right)} \right], $$
(9)

where \( n_{ku} = \sum\nolimits_{i = 1}^{{n\bar{\pi }}} {\delta_{kui} } \), \( n_{ka} = \sum\nolimits_{j = 1}^{n\pi } {\delta_{kaj} } \), \( n_{k0} = n_{ku} + n_{ka} \) and \( k = 1,2 \).

Setting Eqs. (7), (8) and (9) equal to zero for \( k = 1, 2 \) gives a system of six nonlinear equations in the six unknown parameters. This system cannot be solved analytically, so numerical iterative techniques are applied to obtain the ML estimators.
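
A minimal sketch of such a numerical solution is given below. It simulates a TIC data set, evaluates the log-likelihood (6), and maximizes it with scipy.optimize.minimize (Nelder-Mead applied to the negative log-likelihood). All parameter values, sample sizes and function names are assumptions made for illustration only and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)

def weibull_min_sample(size, th1, lam1, th2, lam2, rng):
    # Competing-risks minimum of two latent times with P(X_k > x) = exp(-lam_k * x**th_k).
    w1 = (rng.exponential(1.0, size) / lam1) ** (1.0 / th1)
    w2 = (rng.exponential(1.0, size) / lam2) ** (1.0 / th2)
    return np.minimum(w1, w2), np.where(w1 <= w2, 1, 2)

def censor_type1(time, cause, tau):
    # Type-I censoring at tau: cause-specific indicators and recorded times.
    d1 = ((cause == 1) & (time < tau)).astype(float)
    d2 = ((cause == 2) & (time < tau)).astype(float)
    return np.minimum(time, tau), d1, d2

def neg_loglik(params, t, d1u, d2u, x, d1a, d2a, tau):
    # Negative of the TIC log-likelihood l_1 in Eq. (6).
    th1, th2, la1, la2, b1, b2 = params
    if min(th1, th2, la1, la2) <= 0.0 or min(b1, b2) <= 1.0:
        return np.inf
    du, da = d1u + d2u, d1a + d2a        # overall failure indicator in each group
    n10, n20 = d1u.sum() + d1a.sum(), d2u.sum() + d2a.sum()
    ll = (n10 * np.log(th1 * la1) + n20 * np.log(th2 * la2)
          + d1a.sum() * np.log(b1) + d2a.sum() * np.log(b2))
    # items tested at normal conditions
    ll += (th1 - 1) * np.sum(d1u * np.log(t)) + (th2 - 1) * np.sum(d2u * np.log(t))
    ll -= la1 * np.sum(du * t ** th1) + la2 * np.sum(du * t ** th2)
    # items tested at accelerated conditions
    ll += (th1 - 1) * np.sum(d1a * np.log(b1 * x)) + (th2 - 1) * np.sum(d2a * np.log(b2 * x))
    ll -= la1 * np.sum(da * (b1 * x) ** th1) + la2 * np.sum(da * (b2 * x) ** th2)
    # censored items in each group
    ll -= (len(t) - du.sum()) * (la1 * tau ** th1 + la2 * tau ** th2)
    ll -= (len(x) - da.sum()) * (la1 * (b1 * tau) ** th1 + la2 * (b2 * tau) ** th2)
    return -ll

# Illustrative true values and design (theta1, lambda1, theta2, lambda2, beta1, beta2; n, pi, tau):
th1, la1, th2, la2, b1, b2 = 1.5, 0.8, 2.0, 0.5, 1.6, 1.4
n, pi, tau = 100, 0.4, 1.5
t_raw, cu = weibull_min_sample(int(n * (1 - pi)), th1, la1, th2, la2, rng)
# Accelerated group: the latent survival exp(-lam*(b*x)**th) is Weibull-type with rate lam*b**th.
x_raw, ca = weibull_min_sample(int(n * pi), th1, la1 * b1 ** th1, th2, la2 * b2 ** th2, rng)
t, d1u, d2u = censor_type1(t_raw, cu, tau)
x, d1a, d2a = censor_type1(x_raw, ca, tau)

start = np.array([1.0, 1.0, 1.0, 1.0, 1.5, 1.5])   # crude starting values
fit = minimize(neg_loglik, start, args=(t, d1u, d2u, x, d1a, d2a, tau),
               method="Nelder-Mead", options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-8})
print(fit.x)   # ML estimates of (theta1, theta2, lambda1, lambda2, beta1, beta2)
```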

Additionally, the asymptotic variances and covariances of the ML estimators of \( \theta_{k} , \lambda_{k} \) and \( \beta_{k} \) can be approximated by numerically inverting the Fisher information matrix F, whose elements are the negative second and mixed partial derivatives of the natural logarithm of the likelihood function evaluated at the ML estimates. These elements are given by

$$ \begin{aligned} \frac{{\partial^{2} l_{1} }}{{\partial \theta_{k}^{2} }} & = - \frac{{n_{k0} }}{{\theta_{k}^{2} }} - \lambda_{k} \left[ {\mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{kui} t_{i}^{{\theta_{k} }} \left( {\ln t_{i} } \right)^{2} + \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{sui} t_{i}^{{\theta_{k} }} \left( {\ln t_{i} } \right)^{2} } \right] - \left( {n\bar{\pi } - n_{u} } \right)\lambda_{k} \tau^{{\theta_{k} }} \left( {\ln \tau } \right)^{2} \\ & \quad - \lambda_{k} \beta_{k}^{{\theta_{k} }} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left( {x_{j}^{{\theta_{k} }} } \right)\left[ {\ln \left( {\beta_{k} x_{j} } \right)} \right]^{2} + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left( {x_{j}^{{\theta_{k} }} } \right)\left[ {\ln \left( {\beta_{k} x_{j} } \right)} \right]^{2} } \right] \\ & \quad - \left( {n\pi - n_{a} } \right)\lambda_{k} \beta_{k}^{{\theta_{k} }} \tau^{{\theta_{k} }} \left[ {\ln \left( {\beta_{k} \tau } \right)} \right]^{2} , \\ \end{aligned} $$
$$ \frac{{\partial^{2} l_{1} }}{{\partial \lambda_{k}^{2} }} = - \frac{{n_{k0} }}{{\lambda_{k}^{2} }}, $$
$$ \begin{aligned} \frac{{\partial^{2} l_{1} }}{{\partial \beta_{k}^{2} }} & = - \frac{{n_{ka} }}{{\beta_{k}^{2} }} - \left( {\theta_{k} - 1} \right)\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \beta_{k}^{ - 2} - \left( {n\pi - n_{a} } \right)\left( {\theta_{k} - 1} \right)\theta_{k} \lambda_{k} \tau^{{\theta_{k} }} \beta_{k}^{{\theta_{k} - 2}} \\ & \quad - \lambda_{k} \theta_{k} \left( {\theta_{k} - 1} \right)\beta_{k}^{{\theta_{k} - 2}} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left( {x_{j}^{{\theta_{k} }} } \right) + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left( {x_{j}^{{\theta_{k} }} } \right)} \right], \\ \end{aligned} $$
$$ \begin{aligned} \frac{{\partial^{2} l_{1} }}{{\partial \theta_{k} \partial \lambda_{k} }} & = - \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{kui} t_{i}^{{\theta_{k} }} \ln t_{i} - \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{sui} t_{i}^{{\theta_{k} }} \ln t_{i} - \left( {n\bar{\pi } - n_{u} } \right)\tau^{{\theta_{k} }} \ln \tau \\ & \quad - \beta_{k}^{{\theta_{k} }} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left( {x_{j}^{{\theta_{k} }} } \right)\ln \left( {\beta_{k} x_{j} } \right) + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left( {x_{j}^{{\theta_{k} }} } \right)\ln \left( {\beta_{k} x_{j} } \right)} \right] - \left( {n\pi - n_{a} } \right)\beta_{k}^{{\theta_{k} }} \tau^{{\theta_{k} }} \ln \left( {\beta_{k} \tau } \right), \\ \end{aligned} $$
$$ \begin{aligned} \frac{{\partial^{2} l_{1} }}{{\partial \theta_{k} \partial \beta_{k} }} & = \mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \beta_{k}^{ - 1} - \left( {n\pi - n_{a} } \right)\lambda_{k} \beta_{k}^{{\theta_{k} - 1}} \tau^{{\theta_{k} }} \left[ {1 + \theta_{k} \ln \left( {\beta_{k} \tau } \right)} \right] \\ & \quad - \lambda_{k} \beta_{k}^{{\theta_{k} - 1}} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left\{ {x_{j}^{{\theta_{k} }} + \theta_{k} \left( {x_{j}^{{\theta_{k} }} } \right)\ln \left( {\beta_{k} x_{j} } \right)} \right\} + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left\{ {x_{j}^{{\theta_{k} }} + \theta_{k} \left( {x_{j}^{{\theta_{k} }} } \right)\ln \left( {\beta_{k} x_{j} } \right)} \right\}} \right], \\ \end{aligned} $$
$$ \frac{{\partial^{2} l_{1} }}{{\partial \lambda_{k} \partial \beta_{k} }} = - \,\theta_{k} \beta_{k}^{{\theta_{k} - 1}} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left( {x_{j}^{{\theta_{k} }} } \right) + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left( {x_{j}^{{\theta_{k} }} } \right)} \right] - \left( {n\pi - n_{a} } \right)\theta_{k} \beta_{k}^{{\theta_{k} - 1}} \tau^{{\theta_{k} }} . $$
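
Since these second and mixed derivatives are only needed numerically at the ML estimates, one convenient route is a finite-difference approximation of the Hessian of the negative log-likelihood, which is then inverted to approximate the asymptotic covariance matrix. The sketch below is generic; the quadratic test function is included only to show that the approximation recovers a known Hessian, and in practice f would be the negative log-likelihood and x the vector of ML estimates.

```python
import numpy as np

def numerical_hessian(f, x, eps=1e-4):
    # Central finite-difference approximation of the Hessian of a scalar function f at x.
    p = len(x)
    h = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            xpp = x.copy(); xpp[i] += eps; xpp[j] += eps
            xpm = x.copy(); xpm[i] += eps; xpm[j] -= eps
            xmp = x.copy(); xmp[i] -= eps; xmp[j] += eps
            xmm = x.copy(); xmm[i] -= eps; xmm[j] -= eps
            h[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4.0 * eps ** 2)
    return h

# Check on a quadratic whose Hessian is known to be [[4, 1], [1, 6]]:
quad = lambda z: 2.0 * z[0] ** 2 + z[0] * z[1] + 3.0 * z[1] ** 2
hess = numerical_hessian(quad, np.array([1.0, 2.0]))
cov = np.linalg.inv(hess)   # for a negative log-likelihood, this is the estimated covariance matrix
print(np.round(hess, 3))
```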

For interval estimation of the parameters, let \( I\left( \varPhi \right) = \left\{ {I_{u, v} } \right\} \) denote the \( 3 \times 3 \) observed information matrix for \( \varPhi = \left( {\theta_{k} , \lambda_{k} ,\beta_{k} } \right) \). Under the usual regularity conditions, the asymptotic properties of the ML method ensure that \( \sqrt n \left( {\hat{\varPhi } - \varPhi } \right)\mathop \to \limits^{d} N_{3} \left( {0, I^{ - 1} \left( \varPhi \right)} \right) \) as \( n \to \infty \), where \( \mathop \to \limits^{d} \) denotes convergence in distribution, \( 0 = \left( {0, 0, 0} \right)^{T} \) is the mean vector and \( I^{ - 1} \left( \varPhi \right) \) is the \( 3 \times 3 \) covariance matrix. Then the \( 100\left( {1 - \upsilon } \right)\% \) confidence intervals for \( \theta_{k} , \lambda_{k} \) and \( \beta_{k} \) are given, respectively, by

$$ \hat{\theta }_{k} \pm Z_{\upsilon /2}\sqrt {var\left( {\hat{\theta }_{k} } \right)} ,\quad \hat{\lambda }_{k} \pm Z_{\upsilon /2}\sqrt {var\left( {\hat{\lambda }_{k} } \right)} \quad and\quad \hat{\beta }_{k} \pm Z_{\upsilon /2}\sqrt {var\left( {\hat{\beta }_{k} } \right)} , $$
(10)

where \( Z_{\upsilon /2} \) is the \( 100\left( {1 - \upsilon /2} \right) \)th percentile of the standard normal distribution and \( var\left( \cdot \right) \) denotes the corresponding diagonal element of \( I^{ - 1} \left( \varPhi \right) \).
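
The intervals in (10) are standard Wald-type intervals; a small sketch of their computation, with purely illustrative numbers standing in for an ML estimate and the corresponding diagonal element of \( I^{ - 1} \left( \varPhi \right) \), is:

```python
import numpy as np
from scipy.stats import norm

def wald_ci(estimate, variance, level=0.95):
    # 100(1 - upsilon)% normal-approximation interval: estimate +/- z_{upsilon/2} * sqrt(variance)
    z = norm.ppf(1.0 - (1.0 - level) / 2.0)
    half = z * np.sqrt(variance)
    return estimate - half, estimate + half

# Illustrative values only, e.g. an estimate of theta_1 and its variance from the inverted matrix:
print(wald_ci(1.42, 0.031))
```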

4 ML Estimators Under TIIC Competing Risks Data

Suppose that the observed values of the lifetimes of the \( n\left( {1 - \pi } \right) \) items at normal conditions are \( t_{\left( 1 \right)} , t_{\left( 2 \right)} , \ldots , t_{\left( r \right)} \), and the observed values of the lifetimes of the \( n\pi \) items at accelerated conditions are \( x_{\left( 1 \right)} , x_{\left( 2 \right)} , \ldots , x_{\left( r \right)} \). Let \( \delta_{ui} \) and \( \delta_{aj} \) denote the failure indicators such that

$$ \delta_{ui} = \left\{ {\begin{array}{*{20}l} {1\quad } \hfill & { t_{i} \le t_{\left( r \right)} } \hfill \\ {0\quad } \hfill & {otherwise} \hfill \\ \end{array} } \right.\quad {\text{for}}\quad i = 1, 2, \ldots , n\left( {1 - \pi } \right) $$

and

$$ \delta_{aj} = \left\{ {\begin{array}{*{20}l} {1\quad } \hfill & { x_{j} \le x_{\left( r \right)} } \hfill \\ {0\quad } \hfill & {otherwise} \hfill \\ \end{array} } \right.\quad {\text{for}}\quad j = 1, 2, \ldots , n\pi . $$
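
A short sketch of how the TIIC censoring threshold and indicators can be constructed for one group is given below; the failure times and causes are purely illustrative.

```python
import numpy as np

r = 3
# Hypothetical competing-risks failure times and causes (1 or 2) for one group:
time = np.array([0.40, 2.10, 0.90, 1.70, 1.20])
cause = np.array([1, 2, 2, 1, 1])

t_r = np.sort(time)[r - 1]                        # r-th smallest failure time t_(r)
d1 = ((cause == 1) & (time <= t_r)).astype(int)   # failed from cause 1 by t_(r)
d2 = ((cause == 2) & (time <= t_r)).astype(int)   # failed from cause 2 by t_(r)
censored = 1 - d1 - d2                            # the items still running when the test stops
obs_time = np.minimum(time, t_r)                  # recorded time: failure time or t_(r)
print(t_r, d1, d2, censored)
```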

The likelihood contributions for TIIC competing risks data, when the cause of failure is known, at normal conditions \( \left( {t_{i} , \delta_{ui} } \right) \) and at accelerated conditions \( \left( {x_{j} , \delta_{aj} } \right) \) are given, respectively, by:

$$ L_{{2\left( {ui} \right)}} \propto \mathop \prod \limits_{i = 1}^{{n\bar{\pi }}} \left[ {\theta_{1} \lambda_{1} t_{i}^{{\theta_{1} - 1}} e^{{ - \left( {\lambda_{1} t_{i}^{{\theta_{1} }} + \lambda_{2} t_{i}^{{\theta_{2} }} } \right)}} } \right]^{{\delta_{1ui} }} \left[ {\theta_{2} \lambda_{2} t_{i}^{{\theta_{2} - 1}} e^{{ - \left( {\lambda_{2} t_{i}^{{\theta_{2} }} + \lambda_{1} t_{i}^{{\theta_{1} }} } \right)}} } \right]^{{\delta_{2ui} }} \left[ {e^{{ - \left( {\lambda_{1} t_{r}^{{\theta_{1} }} + \lambda_{2} t_{r}^{{\theta_{2} }} } \right)}} } \right]^{{\bar{\delta }_{ui} }} , $$

and,

$$\begin{aligned} & L_{{2\left( {aj} \right)}} \propto \mathop \prod \limits_{j = 1}^{n\pi } \left[ {\theta_{1} \lambda_{1} \beta_{1} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} - 1}} e^{{ - \left[ {\lambda_{1} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} }} + \lambda_{2} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} }} } \right]}} } \right]^{{\delta_{1aj} }}\\ & \quad \left[ {\theta_{2} \lambda_{2} \beta_{2} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} - 1}} e^{{ - \left[ {\lambda_{2} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} }} + \lambda_{1} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} }} } \right]}} } \right]^{{\delta_{2aj} }} \left[ {e^{{ - \left[ {\lambda_{1} \left( {\beta_{1} x_{r} } \right)^{{\theta_{1} }} + \lambda_{2} \left( {\beta_{2} x_{r} } \right)^{{\theta_{2} }} } \right]}} } \right]^{{\bar{\delta }_{aj} }}. \end{aligned} $$

Then, the total likelihood function for TIIC competing risks data, when the cause of failure is known at both normal and accelerated conditions, \( \left( {t_{1} ,\delta_{u1} ; \ldots ;t_{{n\bar{\pi }}} ,\delta_{{un\bar{\pi }}} ;\;x_{1} ,\delta_{a1} ; \ldots ;x_{n\pi } ,\delta_{an\pi } } \right) \), is:

$$ \begin{aligned} L_{2} & \propto L_{{2\left( {ui} \right)}} L_{{2\left( {aj} \right)}} \\ & \propto \mathop \prod \limits_{i = 1}^{{n\bar{\pi }}} \left[ {\theta_{1} \lambda_{1} t_{i}^{{\theta_{1} - 1}} e^{{ - \left( {\lambda_{1} t_{i}^{{\theta_{1} }} + \lambda_{2} t_{i}^{{\theta_{2} }} } \right)}} } \right]^{{\delta_{1ui} }} \left[ {\theta_{2} \lambda_{2} t_{i}^{{\theta_{2} - 1}} e^{{ - \left( {\lambda_{2} t_{i}^{{\theta_{2} }} + \lambda_{1} t_{i}^{{\theta_{1} }} } \right)}} } \right]^{{\delta_{2ui} }} \left[ {e^{{ - \left( {\lambda_{1} t_{r}^{{\theta_{1} }} + \lambda_{2} t_{r}^{{\theta_{2} }} } \right)}} } \right]^{{\bar{\delta }_{ui} }} \\ & \quad \times \mathop \prod \limits_{j = 1}^{n\pi } \left[ {\theta_{1} \lambda_{1} \beta_{1} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} - 1}} e^{{ - \left[ {\lambda_{1} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} }} + \lambda_{2} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} }} } \right]}} } \right]^{{\delta_{1aj} }} \left[ {\theta_{2} \lambda_{2} \beta_{2} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} - 1}} e^{{ - \left[ {\lambda_{2} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} }} + \lambda_{1} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} }} } \right]}} } \right]^{{\delta_{2aj} }} \\ & \quad \times \left[ {e^{{ - \left[ {\lambda_{1} \left( {\beta_{1} x_{r} } \right)^{{\theta_{1} }} + \lambda_{2} \left( {\beta_{2} x_{r} } \right)^{{\theta_{2} }} } \right]}} } \right]^{{\bar{\delta }_{aj} }} . \\ \end{aligned} $$

The ML estimators \( \hat{\theta }_{1} , \hat{\theta }_{2} , \hat{\lambda }_{1} ,\hat{\lambda }_{2} , \hat{\beta }_{1} \) and \( \hat{\beta }_{2} \) of the parameters and acceleration factors \( \theta_{1} , \theta_{2} , \lambda_{1} , \lambda_{2} , \beta_{1} \) and \( \beta_{2} \) are the values which maximize the likelihood function. The logarithm of the likelihood function, \( l_{2} = \ln L_{2} \), is given by:

$$ \begin{aligned} l_{2} & \propto n_{10} \ln \theta_{1} + n_{10} \ln \lambda_{1} + n_{20} \ln \theta_{2} + n_{20} \ln \lambda_{2} + n_{1a} \ln \beta_{1} + n_{2a} \ln \beta_{2} + \left( {\theta_{1} - 1} \right)\mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{1ui} \ln t_{i} \\ & \quad - \lambda_{1} \left[ {\mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{1ui} t_{i}^{{\theta_{1} }} + \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{2ui} t_{i}^{{\theta_{1} }} } \right] + \left( {\theta_{2} - 1} \right)\mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{2ui} \ln t_{i} - \lambda_{2} \left[ {\mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{1ui} t_{i}^{{\theta_{2} }} + \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{2ui} t_{i}^{{\theta_{2} }} } \right] \\ & \quad - \left( {n\bar{\pi } - n_{u} } \right)\left[ {\lambda_{1} t_{r}^{{\theta_{1} }} + \lambda_{2} t_{r}^{{\theta_{2} }} } \right] + \left( {\theta_{1} - 1} \right)\mathop \sum \limits_{j = 1}^{n\pi } \delta_{1aj} \ln \left( {\beta_{1} x_{j} } \right) - \left( {n\pi - n_{a} } \right)\left[ {\lambda_{1} \left( {\beta_{1} x_{r} } \right)^{{\theta_{1} }} + \lambda_{2} \left( {\beta_{2} x_{r} } \right)^{{\theta_{2} }} } \right] \\ & \quad - \lambda_{1} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{1aj} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} }} + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{2aj} \left( {\beta_{1} x_{j} } \right)^{{\theta_{1} }} } \right] + \left( {\theta_{2} - 1} \right)\mathop \sum \limits_{j = 1}^{n\pi } \delta_{2aj} \ln \left( {\beta_{2} x_{j} } \right) \\ & \quad - \lambda_{2} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{1aj} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} }} + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{2aj} \left( {\beta_{2} x_{j} } \right)^{{\theta_{2} }} } \right], \\ \end{aligned} $$
(11)

The first derivatives of the logarithm of the likelihood function (11) with respect to \( \theta_{k} , \lambda_{k} \) and \( \beta_{k} \), for \( k, s = 1, 2 \) with \( s \ne k \), are given by:

$$ \begin{aligned} \frac{{\partial l_{2} }}{{\partial \theta_{k} }} & = \frac{{n_{k0} }}{{\theta_{k} }} + \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{kui} \ln t_{i} - \lambda_{k} \left[ {\mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{kui} t_{i}^{{\theta_{k} }} \ln t_{i} + \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{sui} t_{i}^{{\theta_{k} }} \ln t_{i} } \right] - \left( {n\bar{\pi } - n_{u} } \right)\lambda_{k} t_{r}^{{\theta_{k} }} \ln t_{r} \\ & \quad + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \ln \left( {\beta_{k} x_{j} } \right) - \lambda_{k} \beta_{k}^{{\theta_{k} }} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left( {x_{j}^{{\theta_{k} }} } \right)\ln \left( {\beta_{k} x_{j} } \right) + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left( {x_{j}^{{\theta_{k} }} } \right)\ln \left( {\beta_{k} x_{j} } \right)} \right] \\ & \quad - \left( {n\pi - n_{a} } \right)\lambda_{k} \beta_{k}^{{\theta_{k} }} x_{r}^{{\theta_{k} }} \ln \left( {\beta_{k} x_{r} } \right), \\ \end{aligned} $$
(12)
$$ \frac{{\partial l_{2} }}{{\partial \lambda_{k} }} = \frac{{n_{k0} }}{{\lambda_{k} }} - \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{kui} t_{i}^{{\theta_{k} }} - \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{sui} t_{i}^{{\theta_{k} }} - \beta_{k}^{{\theta_{k} }} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left( {x_{j}^{{\theta_{k} }} } \right) + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left( {x_{j}^{{\theta_{k} }} } \right)} \right] - \left( {n\bar{\pi } - n_{u} } \right)t_{r}^{{\theta_{k} }} - \left( {n\pi - n_{a} } \right)\beta_{k}^{{\theta_{k} }} x_{r}^{{\theta_{k} }} , $$
(13)

and

$$ \frac{{\partial l_{2} }}{{\partial \beta_{k} }} = \frac{{n_{ka} }}{{\beta_{k} }} + \left( {\theta_{k} - 1} \right)\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \beta_{k}^{ - 1} - \left( {n\pi - n_{a} } \right)\lambda_{k} \theta_{k} x_{r}^{{\theta_{k} }} \beta_{k}^{{\theta_{k} - 1}} - \lambda_{k} \theta_{k} \beta_{k}^{{\theta_{k} - 1}} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left( {x_{j}^{{\theta_{k} }} } \right) + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left( {x_{j}^{{\theta_{k} }} } \right)} \right]. $$
(14)

Setting Eqs. (12), (13) and (14) equal to zero for \( k = 1, 2 \) gives a system of six nonlinear equations. As in the previous section, this system cannot be solved analytically, so numerical iterative techniques are applied to obtain the ML estimators.

The asymptotic variance-covariance matrix of the ML estimators of \( \theta_{k} , \lambda_{k} \) and \( \beta_{k} \) is obtained by inverting the Fisher information matrix, whose elements are as follows:

$$ \begin{aligned} \frac{{\partial^{2} l_{2} }}{{\partial \theta_{k}^{2} }} & = - \frac{{n_{k0} }}{{\theta_{k}^{2} }} - \lambda_{k} \left[ {\mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{kui} t_{i}^{{\theta_{k} }} \left( {\ln t_{i} } \right)^{2} + \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{sui} t_{i}^{{\theta_{k} }} \left( {\ln t_{i} } \right)^{2} } \right] - \left( {n\bar{\pi } - n_{u} } \right)\lambda_{k} t_{r}^{{\theta_{k} }} \left( {\ln t_{r} } \right)^{2} \\ & \quad - \lambda_{k} \beta_{k}^{{\theta_{k} }} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left( {x_{j}^{{\theta_{k} }} } \right)\left[ {\ln \left( {\beta_{k} x_{j} } \right)} \right]^{2} + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left( {x_{j}^{{\theta_{k} }} } \right)\left[ {\ln \left( {\beta_{k} x_{j} } \right)} \right]^{2} } \right] \\ & \quad - \left( {n\pi - n_{a} } \right)\lambda_{k} \beta_{k}^{{\theta_{k} }} x_{r}^{{\theta_{k} }} \left[ {\ln \left( {\beta_{k} x_{r} } \right)} \right]^{2} , \\ \end{aligned} $$
$$ \frac{{\partial^{2} l_{2} }}{{\partial \lambda_{k}^{2} }} = - \frac{{n_{k0} }}{{\lambda_{k}^{2} }}, $$
$$ \begin{aligned} \frac{{\partial^{2} l_{2} }}{{\partial \beta_{k}^{2} }} & = - \frac{{n_{ka} }}{{\beta_{k}^{2} }} - \left( {\theta_{k} - 1} \right)\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \beta_{k}^{ - 2} - \left( {n\pi - n_{a} } \right)\left( {\theta_{k} - 1} \right)\theta_{k} \lambda_{k} x_{r}^{{\theta_{k} }} \beta_{k}^{{\theta_{k} - 2}} \\ & \quad - \lambda_{k} \theta_{k} \left( {\theta_{k} - 1} \right)\beta_{k}^{{\theta_{k} - 2}} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left( {x_{j}^{{\theta_{k} }} } \right) + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left( {x_{j}^{{\theta_{k} }} } \right)} \right], \\ \end{aligned} $$
$$ \begin{aligned} \frac{{\partial^{2} l_{2} }}{{\partial \theta_{k} \partial \lambda_{k} }} & = - \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{kui} t_{i}^{{\theta_{k} }} \ln t_{i} - \mathop \sum \limits_{i = 1}^{{n\bar{\pi }}} \delta_{sui} t_{i}^{{\theta_{k} }} \ln t_{i} - \left( {n\bar{\pi } - n_{u} } \right)t_{r}^{{\theta_{k} }} \ln t_{r} \\ & \quad - \beta_{k}^{{\theta_{k} }} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left( {x_{j}^{{\theta_{k} }} } \right)\ln \left( {\beta_{k} x_{j} } \right) + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left( {x_{j}^{{\theta_{k} }} } \right)\ln \left( {\beta_{k} x_{j} } \right)} \right] - \left( {n\pi - n_{a} } \right)\beta_{k}^{{\theta_{k} }} x_{r}^{{\theta_{k} }} \ln \left( {\beta_{k} x_{r} } \right), \\ \end{aligned} $$
$$ \begin{aligned} \frac{{\partial^{2} l_{2} }}{{\partial \theta_{k} \partial \beta_{k} }} & \ {=}\ \mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \beta_{k}^{ - 1} - \left( {n\pi - n_{a} } \right)\lambda_{k} \beta_{k}^{{\theta_{k} - 1}} x_{r}^{{\theta_{k} }} \left[ {1 + \theta_{k} \ln \left( {\beta_{k} x_{r} } \right)} \right] \\ & \quad - \lambda_{k} \beta_{k}^{{\theta_{k} - 1}} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left\{ {x_{j}^{{\theta_{k} }} + \theta_{k} \left( {x_{j}^{{\theta_{k} }} } \right)\ln \left( {\beta_{k} x_{j} } \right)} \right\} + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left\{ {x_{j}^{{\theta_{k} }} + \theta_{k} \left( {x_{j}^{{\theta_{k} }} } \right)\ln \left( {\beta_{k} x_{j} } \right)} \right\}} \right], \\ \end{aligned} $$
$$ \frac{{\partial^{2} l_{2} }}{{\partial \lambda_{k} \partial \beta_{k} }} = - \theta_{k} \beta_{k}^{{\theta_{k} - 1}} \left[ {\mathop \sum \limits_{j = 1}^{n\pi } \delta_{kaj} \left( {x_{j}^{{\theta_{k} }} } \right) + \mathop \sum \limits_{j = 1}^{n\pi } \delta_{saj} \left( {x_{j}^{{\theta_{k} }} } \right)} \right] - \left( {n\pi - n_{a} } \right)\theta_{k} \beta_{k}^{{\theta_{k} - 1}} x_{r}^{{\theta_{k} }} . $$

In a similar way, the approximate confidence intervals of \( \theta_{k} , \lambda_{k} \) and \( \beta_{k} \) under TIIC competing risks data are obtained using Eq. (10).

5 Simulation Study

In this section, a simulation study is carried out to evaluate the performance of the estimates. The estimates of the acceleration factors \( \left( {\beta_{1} , \beta_{2} } \right) \) and population parameters \( \left( {\theta_{1} , \theta_{2} , \lambda_{1} , \lambda_{2} } \right) \) are evaluated in terms of their mean squared errors (MSEs) and biases. The numerical procedure is designed as follows:

  • A random sample of size \( n_{1} = n\left( {1 - \pi } \right) \), where \( \pi = 0.4 \) is the allocation proportion and \( n \) is the total sample size, is generated under normal conditions. That is, we generate samples \( W_{1} \sim Weibull\left( {\theta_{1} ,\lambda_{1} } \right) \) and \( W_{2} \sim Weibull\left( {\theta_{2} , \lambda_{2} } \right) \) of size \( n_{1} \) and form the observed lifetimes \( t = \left( {t_{\left( 1 \right)} , t_{\left( 2 \right)} , t_{\left( 3 \right)} , \ldots , t_{{\left( {n_{1} } \right)}} } \right) \), where \( T = \hbox{min} \left( {W_{1} , W_{2} } \right) \).

  • A random sample of size \( n_{2} = n\pi \) is generated under accelerated conditions. That is, we generate samples \( W_{1} \sim Weibull\left( {\theta_{1} , \lambda_{1} , \beta_{1} } \right) \) and \( W_{2} \sim Weibull\left( {\theta_{2} , \lambda_{2} , \beta_{2} } \right) \) of size \( n_{2} \) and form the observed lifetimes \( x = \left( {x_{\left( 1 \right)} , x_{\left( 2 \right)} , x_{\left( 3 \right)} , \ldots , x_{{\left( {n_{2} } \right)}} } \right) \), where \( X = \hbox{min} \left( {W_{1} , W_{2} } \right) \).

  • Under TIC the censoring time is taken as \( \tau = 1.5 \), while under TIIC the number of observed failures is r = 10; the total sample sizes considered are 50, 75 and 100.

  • For several choices of the unknown parameters and acceleration factors, the above process is repeated 1000 times.

  • The average values of the biases and MSEs are computed; a minimal sketch of this bookkeeping follows the list.
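
The bookkeeping in the last two steps can be sketched as follows; the matrix of replicate estimates is a placeholder (true values plus noise) standing in for the 1000 ML fits that the procedure above would actually produce, so the printed numbers do not reproduce the entries of Tables 1 and 2.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 1000
# Illustrative true values of (theta1, lambda1, theta2, lambda2, beta1, beta2):
true = np.array([1.5, 0.8, 2.0, 0.5, 1.6, 1.4])

# Placeholder for the R x 6 matrix of ML estimates from the repeated fits (here: true + noise,
# purely to demonstrate the bias/MSE computation, not actual simulation output).
estimates = true + rng.normal(scale=0.1, size=(R, true.size))

bias = estimates.mean(axis=0) - true
mse = ((estimates - true) ** 2).mean(axis=0)
print(np.round(bias, 4))
print(np.round(mse, 4))
```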

Numerical outcomes are listed in Tables 1 and 2. The following observations can be made:

  • The MSEs and biases decrease as n increases under TIC and TIIC data (see Tables 1, 2).

  • For fixed values of \( \left( {\lambda_{1} , \theta_{2} , \lambda_{2} , \beta_{1} , \beta_{2} } \right) \), as the value of \( \theta_{1} \) increases, the MSEs and biases of the estimates of \( \left( {\theta_{1} , \lambda_{1} , \theta_{2} , \lambda_{2} } \right) \) increase, while the MSEs and biases of the estimates of \( \beta_{1} \) and \( \beta_{2} \) decrease, under TIC data (see Table 1).

  • For fixed values of \( \left( {\theta_{2} , \lambda_{2} , \beta_{1} , \beta_{2} } \right) \), as the value of \( \theta_{1} \) decreases and \( \lambda_{1} \) increases, the MSEs and biases of the estimates of \( (\lambda_{1} , \beta_{1} , \beta_{2} ) \) increase, while the MSEs and biases of the estimates of \( \left( {\theta_{1} , \theta_{2} , \lambda_{2} } \right) \) decrease, under TIC data (see Table 1).

  • For fixed values of \( \left( {\theta_{1} , \lambda_{2} , \beta_{1} , \beta_{2} } \right) \), as the values of \( (\lambda_{1} , \theta_{2} ) \) decrease, the MSEs and biases of the estimates of \( \left( {\lambda_{2} , \beta_{1} , \beta_{2} } \right) \) increase, while the MSEs and biases of the estimates of \( \left( {\theta_{1} , \theta_{2} , \lambda_{1} } \right) \) decrease, under TIC data (see Table 1).

  • For fixed values of \( \left( {\theta_{1} , \lambda_{1} , \beta_{1} , \beta_{2} } \right) \), as the value of \( \lambda_{2} \) decreases and \( \theta_{2} \) increases, the MSEs and biases of the estimates of \( \left( {\lambda_{2} , \theta_{2} } \right) \) increase, while the MSEs and biases of the estimates of \( \left( {\theta_{1} , \lambda_{1} , \beta_{1} , \beta_{2} } \right) \) decrease, under TIC data (see Table 1).

  • For fixed values of \( \left( {\theta_{1} , \lambda_{1} , \theta_{2} , \beta_{2} } \right) \), as the values of \( \left( {\lambda_{2} , \beta_{1} } \right) \) increase, the MSEs and biases of the estimates of \( \left( {\theta_{1} , \lambda_{1} , \theta_{2} , \beta_{1} } \right) \) increase, while the MSEs and biases of the estimates of \( \lambda_{2} \) and \( \beta_{2} \) decrease, under TIC data (see Table 1).

  • For fixed values of \( \left( {\theta_{1} , \lambda_{1} , \theta_{2} , \lambda_{2} } \right) \), as the values of \( \left( {\beta_{1} , \beta_{2} } \right) \) decrease, the MSEs and biases of the estimates of \( \left( {\theta_{2} , \lambda_{1} , \lambda_{2} , \beta_{1} , \beta_{2} } \right) \) increase, while the MSEs and biases of the estimates of \( \theta_{1} \) decrease, under TIC data (see Table 1).

  • When the values of \( \left( {\lambda_{1} , \theta_{2} , \lambda_{2} , \beta_{1} , \beta_{2} } \right) \) are fixed and the value of \( \theta_{1} \) increases, the MSEs and biases of the estimates of \( \left( {\theta_{1} , \lambda_{2} , \beta_{1} , \beta_{2} } \right) \) increase, while the MSEs and biases of the estimates of \( (\lambda_{1} , \theta_{2} ) \) decrease, based on TIIC data (see Table 2).

  • For fixed values of \( \left( {\theta_{2} , \lambda_{2} , \beta_{1} , \beta_{2} } \right) \), as the value of \( \theta_{1} \) decreases and the value of \( \lambda_{1} \) increases, the MSEs and biases of the estimates of \( \left( {\lambda_{1} , \beta_{1} , \beta_{2} } \right) \) increase, while the MSEs and biases of the estimates of \( \left( {\theta_{1} , \theta_{2} , \lambda_{2} } \right) \) decrease, under TIIC data (see Table 2).

  • Under TIIC, when the values of \( \left( {\theta_{1} , \lambda_{2} , \beta_{1} , \beta_{2} } \right) \) are fixed and the values of \( \left( {\theta_{2} , \lambda_{1} } \right) \) decrease, the MSEs and biases of the estimates of \( \left( {\theta_{1} , \theta_{2} , \lambda_{1} , \lambda_{2} , \beta_{1} , \beta_{2} } \right) \) increase (see Table 2).

  • Under TIIC, when the values of \( \left( {\theta_{1} , \lambda_{1} , \beta_{1} , \beta_{2} } \right) \) are fixed, the value of \( \lambda_{2} \) decreases and the value of \( \theta_{2} \) increases, the MSEs and biases of the estimates of \( \left( {\theta_{1} , \theta_{2} , \lambda_{1} , \lambda_{2} , \beta_{1} , \beta_{2} } \right) \) decrease (see Table 2).

  • For fixed values of \( \left( {\theta_{1} , \lambda_{1} , \theta_{2} , \beta_{2} } \right) \), as the values of \( \left( {\lambda_{2} , \beta_{1} } \right) \) increase, the MSEs and biases of the estimates of \( \left( {\theta_{1} , \theta_{2} , \lambda_{2} , \beta_{1} , \beta_{2} } \right) \) increase, while the MSEs and biases of the estimates of \( \lambda_{1} \) decrease, under TIIC data (see Table 2).

  • When the values of \( \left( {\theta_{1} , \theta_{2} , \lambda_{1} , \lambda_{2} } \right) \) are fixed and the values of \( \left( {\beta_{1} , \beta_{2} } \right) \) decrease, the MSEs and biases of the estimates of \( \left( {\theta_{1} , \theta_{2} , \beta_{2} } \right) \) increase, while the MSEs and biases of the estimates of \( \left( {\lambda_{1} , \lambda_{2} , \beta_{1} } \right) \) decrease, based on TIIC data (see Table 2).

Table 1 Biases and MSEs of ML estimates under TIC competing risks data for \( \tau = 1.5 \) and \( \pi = 0.4 \)
Table 2 Biases and MSEs of ML estimates under TIIC competing risks data for \( r = 10 \) and \( \pi = 0.4 \)