1 Introduction

Product reliability is a common concern of manufacturers. To track product performance, it is necessary to collect product life data, so life testing is indispensable for the analysis and assessment of product reliability. However, with scientific and technological advances, highly reliable, long-life products are emerging in large numbers, and in practical experiments accelerated life tests (ALTs) are widely used to save time and expense. In an ALT, the lifetimes of the product are tested under several accelerated stress levels, and the life distribution under normal stress is then estimated through a suitable physical-statistical model; common choices are the Arrhenius model, the power law model and the Eyring model. In practice, the accelerating stress can be temperature, voltage, etc. In recent years, many scholars have studied ALTs based on different types of data and lifetime distributions. Ismail (2015) discussed the maximum-likelihood estimates (MLEs) and Bayes estimates (BEs) of the Pareto distribution parameters on the basis of a Type-I censored sample in a constant-stress partially ALT. Xu et al. (2016) obtained the BEs for the Weibull distribution based on a constant-stress ALT. Yan et al. (2017) obtained the MLEs and BEs of a Weibull regression model based on a general progressive Type-II censored (GPT-IIC) sample in a multi-stress life test. More detailed studies on ALTs can be found in Shi et al. (2013), Anwar (2014), Mahmoud et al. (2017), Zheng and Shi (2013) and the references therein. Traditional accelerated-test studies assume that only one parameter of the lifetime distribution changes with stress while the other parameters remain constant throughout the test. However, owing to the complexity of failure mechanisms, this hypothesis may be unsuitable in many cases. For example, Hiergeist et al. (1989) found that in capacitor tests the Weibull shape parameter depends on temperature.
Assuming that the log-life of a product obeys a location-scale distribution, the studies of Nelson (1984) and Boyko and Gerlach (1989) demonstrate that both the location and scale parameters in dielectric breakdown data depend on stress. Therefore, lifetime models with nonconstant parameters have begun to attract the attention of many scholars. For example, Seo et al. (2009) designed a new ALT sampling scheme with a non-constant shape parameter. Lv et al. (2015) discussed ALT reliability modeling with random effects and nonconstant shape parameters. Meeter and Meeker (1994) extended the existing ML theory to test plans with non-constant scale parameter models and gave test plans for a large number of actual test cases.

The research above mainly concerns the reliability analysis of complete or right-censored samples under constant-stress ALT, where one of the shape and scale parameters of the life model is non-constant and the other is constant across stress levels. The case where both the shape and scale parameters change has rarely been studied in lifetime models. Wang (2018a, 2018b) introduced the MLEs of the exponential and Weibull distributions with nonconstant parameters under constant-stress ALT. The GPT-IIC scheme is increasingly common in obtaining product failure-time data. To motivate our research, we provide a real data set from a life test of steel specimens.

The log-normal distribution is a popular alternative lifetime distribution in practice; it fits the failure data of some products, such as the steel specimen data, very flexibly. In this paper we carry out the reliability analysis of the log-normal distribution with nonconstant shape and scale parameters by adopting the maximum likelihood and Bayesian methods.

The structure of this paper is arranged as follows. In Sect. 2, the basic assumptions and the life test procedure are described. In Sect. 3, the MLEs and BEs of the model parameters, the reliability and the hazard rate functions are given. To demonstrate the effectiveness and practicability of the methods, reliability analyses are carried out in Sects. 4 and 5 through a simulation study and a real data set, respectively. Some concluding remarks are presented in Sect. 6.

2 Basic assumptions and life test procedure

2.1 Basic assumptions

This study adopts the following assumptions.

Assumption 1

Under the normal stress level \(S_{0}\) and the accelerated stress levels \(S_{i},i=1,\ldots ,k\), the product life X obeys the log-normal distribution with different parameters \((\mu _i,\sigma ^2_i)\), \(\mu _i\in (-\infty ,+\infty )\), \(\sigma _i\in (0,+\infty )\); the probability density function (PDF), cumulative distribution function (CDF), reliability and hazard rate functions are as follows:

$$\begin{aligned}&f(x;\mu _{i},\sigma _{i}) =\frac{1}{\sqrt{2\pi }\sigma _{i}x}\exp \{-\frac{(\ln x-\mu _{i})^2}{2\sigma _{i}^2}\}, \end{aligned}$$
(1)
$$\begin{aligned}&F(x;\mu _{i},\sigma _{i}) =\Phi \left( \frac{\ln x-\mu _{i}}{\sigma _{i}}\right) , \end{aligned}$$
(2)
$$\begin{aligned}&s(x;\mu _{i},\sigma _{i}) =1-\Phi \left( \frac{\ln x-\mu _{i}}{\sigma _{i}}\right) , \end{aligned}$$
(3)
$$\begin{aligned}&h(x;\mu _{i},\sigma _{i}) =\frac{\frac{1}{\sqrt{2\pi }\sigma _{i}x}\exp \{-\frac{(\ln x-\mu _{i})^2}{2\sigma _{i}^2}\}}{1-\Phi \left( \frac{\ln x-\mu _{i}}{\sigma _{i}}\right) }, \end{aligned}$$
(4)

where \(\Phi (x)=\int _{-\infty }^x\frac{1}{\sqrt{2\pi }}\exp \{-\frac{t^2}{2}\}\mathrm{d}t\), and \((\mu _0,\sigma ^2_0)\), denoted by \((\mu ,\sigma ^2)\), is the log-normal distribution parameter at the normal stress level \(S_{0}\).
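These four functions are straightforward to evaluate numerically. As a quick illustration (a sketch, not part of the derivation), Eqs. (1)-(4) can be coded with SciPy's standard normal CDF:

```python
import numpy as np
from scipy.stats import norm

def lognorm_pdf(x, mu, sigma):
    # Eq. (1): log-normal density
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma * x)

def lognorm_cdf(x, mu, sigma):
    # Eq. (2): F(x) = Phi((ln x - mu) / sigma)
    return norm.cdf((np.log(x) - mu) / sigma)

def reliability(x, mu, sigma):
    # Eq. (3): s(x) = 1 - F(x)
    return 1.0 - lognorm_cdf(x, mu, sigma)

def hazard(x, mu, sigma):
    # Eq. (4): h(x) = f(x) / s(x)
    return lognorm_pdf(x, mu, sigma) / reliability(x, mu, sigma)
```

A quick sanity check is that \(F(e^{\mu})=\Phi (0)=0.5\), so the median of the distribution is \(e^{\mu}\).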

Assumption 2

The accelerated model of the product is assumed to be log-linear, that is, the location and scale parameters and the accelerated stress \(S_i\) satisfy the following equations:

$$\begin{aligned}&\ln {\mu _i} = a_{1} + b_{1}\phi ({S_i}),~~i=0,1,\ldots ,k, \end{aligned}$$
(5)
$$\begin{aligned}&\ln {\sigma _i} = a_{2} + b_{2}\phi ({S_i}),~~i=0,1,\ldots ,k, \end{aligned}$$
(6)

where \(\phi (S)\) denotes a function of the accelerated stress S, and \(a_t,b_t\), \(t=1,2\), are constants. Typically, if the stress S is temperature, then \(\phi (S)=\frac{1}{S}\); if S is voltage, then \(\phi (S)=\log {S}\). According to Eqs. (5) and (6), we rewrite \((\mu _i,\sigma ^2_i)\) in terms of \((\mu ,\sigma ^2)\) as follows:

$$\begin{aligned} \mu _{i}&= \mu \exp \{b_{1}(\phi ({S_i}) - \phi ({S_0}))\}, \\ \sigma _{i}&= \sigma \exp \{b_{2}(\phi ({S_i}) - \phi ({S_0}))\}, \end{aligned}$$
(7)

Let \(\theta _1=\mu _1/\mu\) and \(\theta _2=\sigma _1/\sigma\); then we obtain

$$\begin{aligned} \mu _{i}= \mu \left [\exp \{b_{1}(\phi ({S_1}) - \phi ({S_0}))\}\right]^{\frac{\phi ({S_i}) - \phi ({S_0})}{\phi ({S_1}) - \phi ({S_0})}}= \mu \theta _{1}^{\phi _{i}}, \end{aligned}$$
(8)
$$\begin{aligned} \sigma _{i}= \sigma \theta _{2}^{\phi _{i}}, \end{aligned}$$
(9)

where \({\phi _i} = \frac{{\phi ({S_i}) - \phi ({S_0})}}{{\phi ({S_1}) - \phi ({S_0})}}, i=1,\ldots ,k\), and \(\theta _1,\theta _2\in (0,+\infty )\) are called the acceleration coefficients.
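Equations (8) and (9) can be implemented directly. The sketch below uses \(\phi (S)=1/S\) (temperature stress) and illustrative, assumed stress values to compute the normalized stresses \(\phi _i\) and the per-level parameters:

```python
import numpy as np

def phi_levels(S, S0, phi=lambda s: 1.0 / s):
    # phi_i = (phi(S_i) - phi(S_0)) / (phi(S_1) - phi(S_0))
    S = np.asarray(S, dtype=float)
    return (phi(S) - phi(S0)) / (phi(S[0]) - phi(S0))

def level_params(mu, sigma, theta1, theta2, phis):
    # Eqs. (8)-(9): mu_i = mu * theta1^phi_i, sigma_i = sigma * theta2^phi_i
    phis = np.asarray(phis)
    return mu * theta1 ** phis, sigma * theta2 ** phis

# illustrative temperatures in kelvin (assumed values, not from the paper)
S0, S = 300.0, [320.0, 340.0, 360.0]
phis = phi_levels(S, S0)
mus, sigmas = level_params(mu=5.0, sigma=0.8, theta1=0.9, theta2=1.1, phis=phis)
```

By construction \(\phi _1=1\), so \(\mu _1=\mu \theta _1\) and \(\sigma _1=\sigma \theta _2\), which recovers the definition of the acceleration coefficients.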

2.2 Life test procedure

Let the k accelerated stress levels be \(S_{1}<S_{2}<\cdots <S_{k}\). Assume that \(n_i\) units are tested at stress level \(S_{i}\) and put into the GPT-IIC test, which is implemented as follows.

Suppose that the failure times of the first \(r_{i}\) units are not observed. The failure time \(x_{i:r_{i}+1}\) is observed at the \((r_{i}+1)\)th failure unit, and then \(R_{i:r_{i}+1}\) surviving units are randomly withdrawn from the test. At the \((r_{i}+2)\)th failure time \(x_{i:r_{i}+2}\), \(R_{i:r_{i}+2}\) surviving units are randomly withdrawn, and so on. At the \(m_{i}\)th failure time \(x_{i:m_{i}}\), all the remaining units \(R_{i:m_{i}}=n_{i}-m_{i}-R_{i:r_{i}+1}-\cdots -R_{i:m_{i}-1}\) are finally removed and the test terminates, where the failure numbers \(m_{i},i=1,\ldots ,k\) are prefixed. The \(m_{i}-r_{i}\) failure times

$$\begin{aligned} x_{i}=(x_{i:r_{i}+1},x_{i:r_{i}+2},\ldots ,x_{i:m_{i}}) \end{aligned}$$
(10)

is a general progressive censored sample under censoring scheme

$$\begin{aligned} R_{i}=(R_{i:r_{i}+1},R_{i:r_{i}+2},\ldots ,R_{i:m_{i}}),\quad i=1,\ldots ,k. \end{aligned}$$

Apparently, \(x_{i:r_{i}+1}\le x_{i:r_{i}+2}\le \cdots \le x_{i:m_{i}}\). In this paper, we suppose that the tests under different stress levels are independent of each other, and \(m_{i}-r_{i}>0\) ensures that at least one failure time is observed.
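To make the procedure concrete, the following sketch simulates one GPT-IIC sample at a single stress level; the sample size, the number of unobserved early failures and the removal scheme are illustrative assumptions, with lifetimes drawn from the log-normal distribution of Assumption 1:

```python
import numpy as np

rng = np.random.default_rng(7)

def gpt2_sample(n, r, R, mu, sigma, rng):
    """One GPT-IIC sample: log-normal(mu, sigma) lifetimes, the first r
    failure times unobserved, R = (R_{r+1}, ..., R_m) survivors withdrawn
    at each observed failure (the last entry empties the test)."""
    alive = list(rng.lognormal(mean=mu, sigma=sigma, size=n))
    obs = []
    # the first r failures occur but their times are not recorded
    for _ in range(r):
        alive.remove(min(alive))
    for R_j in R:
        t = min(alive)              # next observed failure time
        obs.append(t)
        alive.remove(t)
        # withdraw R_j surviving units at random
        for idx in sorted(rng.choice(len(alive), size=R_j, replace=False), reverse=True):
            alive.pop(idx)
    return obs

# illustrative scheme: n = 20, r = 2, m = 7 (so m - r = 5 observed failures);
# the final removal is R_m = n - m - sum(earlier R) = 9
R = [2, 1, 1, 0, 9]
x = gpt2_sample(20, 2, R, mu=1.0, sigma=0.5, rng=rng)
```

The returned observed failure times are automatically ordered, matching \(x_{i:r_{i}+1}\le \cdots \le x_{i:m_{i}}\).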

3 MLEs and BEs for the unknown parameters

For the stress level \(S_{i}\), the likelihood function is

$$\begin{aligned} L_i({\mu _{i}} ,{\sigma _i}|{x_i})&\varpropto [F(x_{i:r_{i}+1};\mu _{i},\sigma _{i})]^{r_{i}}\prod \limits _{j = r_{i}+1}^{{m_i}}f(x_{i:j};\mu _{i},\sigma _{i}) \\ &\quad \times [1-F(x_{i:j};\mu _{i},\sigma _{i})]^{R_{i:j}}. \end{aligned}$$
(11)

Then, combining Assumptions 1 and 2 in Sect. 2.1, the likelihood and log-likelihood functions based on the GPT-IIC sample \(\varvec{x}=(x_{1},x_{2},\ldots ,x_{k})\) are given by

$$\begin{aligned}&L(\mu ,\sigma ,\theta _{1},\theta _{2}|{\varvec{x}}) \varpropto \prod \limits _{i=1}^{k}\left[ \Phi \left( \frac{\ln x_{i:r_{i}+1}-\mu \theta _{1}^{\phi _{i}}}{\sigma {\theta _{2}}^{\phi _{i}}}\right) \right] ^{r_{i}} \\ &\quad \prod \limits _{i=1}^{k}\prod \limits _{j=r_{i}+1}^{{m_i}}\frac{1}{\sigma \theta _{2}^{\phi _{i}}}\exp \left\{ -\frac{\left( \ln x_{i:j}-\mu \theta _{1}^{\phi _{i}}\right) ^2}{2\sigma ^2\theta _{2}^{2{\phi _{i}}}}\right\} \\ &\quad \times \left[ 1-\Phi \left( \frac{\ln x_{i:j}-\mu \theta _{1}^{\phi _{i}}}{\sigma {\theta _{2}}^{\phi _{i}}}\right) \right] ^{R_{i:j}},\end{aligned}$$
(12)
$$\begin{aligned}&l(\mu ,\sigma ,\theta _{1},\theta _{2}|{\varvec{x}})=\!\!\sum \limits _{i=1}^{k}r_{i}\ln \Phi (\omega _{ir_{i}+1}) \\ &\quad +\sum \limits _{i=1}^{k} \sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ R_{i:j}\ln [1-\Phi (\omega _{ij})]-(\ln \sigma \!+\!\phi _{i}\ln \theta _{2}+\frac{\omega _{ij}^2}{2})\right\} , \end{aligned}$$
(13)

where \(\omega _{ij}=\frac{\ln x_{i:j}-\mu \theta _{1}^{\phi _{i}}}{\sigma {\theta _{2}}^{\phi _{i}}}\).

3.1 MLEs

Let \(({\hat{\mu }} ,{\hat{\sigma }},{{\hat{\theta }} _1},{{\hat{\theta }} _2} )\) denote the MLEs of \((\mu ,\sigma ,\theta _1,\theta _2)\), which can be obtained by solving the following equations:

$$\begin{aligned} \frac{\partial l}{\partial \mu }&= \sum \limits _{i=1}^{k}r_{i}\frac{\varphi (\omega _{ir_{i}+1})}{\Phi (\omega _{ir_{i}+1})}\omega _{ir_{i}+1,1} \\ &\quad +\sum \limits _{i=1}^{k}\sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ -R_{i:j}\frac{\varphi (\omega _{ij})}{1-\Phi (\omega _{ij})}\omega _{ij1}-\omega _{ij}\omega _{ij1}\right\} =0 \end{aligned}$$
(14)
$$\begin{aligned} \frac{\partial l}{\partial \sigma }&= \sum \limits _{i=1}^{k}r_{i}\frac{\varphi (\omega _{ir_{i}+1})}{\Phi (\omega _{ir_{i}+1})}\omega _{ir_{i}+1,2} \\ &\quad +\sum \limits _{i=1}^{k}\sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ -R_{i:j}\frac{\varphi (\omega _{ij})}{1-\Phi (\omega _{ij})}\omega _{ij2}-\omega _{ij}\omega _{ij2}-\frac{1}{\sigma }\right\} =0 \end{aligned}$$
(15)
$$\begin{aligned} \frac{\partial l}{\partial \theta _{1}}&= \sum \limits _{i=1}^{k}r_{i}\frac{\varphi (\omega _{ir_{i}+1})}{\Phi (\omega _{ir_{i}+1})}\omega _{ir_{i}+1,3} \\ &\quad +\sum \limits _{i=1}^{k}\sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ -R_{i:j}\frac{\varphi (\omega _{ij})}{1-\Phi (\omega _{ij})}\omega _{ij3}-\omega _{ij}\omega _{ij3}\right\} =0 \end{aligned}$$
(16)
$$\begin{aligned} \frac{\partial l}{\partial \theta _{2}}&= \sum \limits _{i=1}^{k}r_{i}\frac{\varphi (\omega _{ir_{i}+1})}{\Phi (\omega _{ir_{i}+1})}\omega _{ir_{i}+1,4} \\ &\quad +\sum \limits _{i=1}^{k}\sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ -R_{i:j}\frac{\varphi (\omega _{ij})}{1-\Phi (\omega _{ij})}\omega _{ij4}-\omega _{ij}\omega _{ij4}-\frac{\phi _{i}}{\theta _{2}}\right\} =0 \end{aligned}$$
(17)

where \(l=l(\mu ,\sigma ,\theta _{1},\theta _{2}|{\varvec{x}})\), \(\varphi (\cdot )\) is the standard normal density function, and \(\omega _{ij1}=\frac{\partial \omega _{ij}}{\partial \mu }=-\frac{\theta _{1}^{\phi _{i}}}{\sigma \theta _{2}^{\phi _{i}}}\), \(\omega _{ij2}=\frac{\partial \omega _{ij}}{\partial \sigma }=-\frac{1}{\sigma }\omega _{ij}\), \(\omega _{ij3}=\frac{\partial \omega _{ij}}{\partial \theta _{1}}=-\frac{\mu \phi _{i}\theta _{1}^{\phi _{i}-1}}{\sigma \theta _{2}^{\phi _{i}}}\), \(\omega _{ij4}=\frac{\partial \omega _{ij}}{\partial \theta _{2}}=-\frac{\phi _{i}}{\theta _{2}}\omega _{ij}\). Because Eqs. (14)–(17) are complex, explicit solutions are not available, so the Newton–Raphson method is used to compute the MLEs \(({\hat{\mu }} ,{\hat{\sigma }},{{\hat{\theta }} _1},{{\hat{\theta }} _2} )\). Further, the standard deviations of the estimators can be assessed through the variance–covariance matrix, the inverse of the observed Fisher information matrix \(\mathrm{I}(\mu ,\sigma ,\theta _{1},\theta _{2} )\) evaluated at \(({\hat{\mu }} ,{\hat{\sigma }},{{\hat{\theta }} _1},{{\hat{\theta }} _2} )\), as follows:

$$\begin{aligned}&\mathrm{I}=-\left[ \begin{array}{cccc} \frac{{{\partial ^2} l}}{{\partial {\mu ^2}}}&{}\frac{{{\partial ^2}l}}{{\partial \mu \partial {\sigma }}}&{}\frac{{{\partial ^2} l}}{{\partial \mu \partial \theta _{1} }}&{}\frac{{{\partial ^2} l}}{{\partial \mu \partial \theta _{2} }}\\ \frac{{{\partial ^2} l}}{{\partial {\sigma }\partial \mu }}&{} \frac{{{\partial ^2} l}}{{\partial \sigma ^2}}&{} \frac{{{\partial ^2} l}}{{\partial {\sigma }\partial \theta _{1} }}&{}\frac{{{\partial ^2} l}}{{\partial {\sigma }\partial \theta _{2} }}\\ \frac{{{\partial ^2} l}}{{\partial \theta _{1} \partial \mu }}&{} \frac{{{\partial ^2} l}}{{\partial \theta _{1} \partial {\sigma }}}&{} \frac{{{\partial ^2} l}}{{\partial {\theta _{1} ^2}}}&{}\frac{{{\partial ^2} l}}{{\partial {\theta _{1} }\partial {\theta _{2} }}}\\ \frac{{{\partial ^2} l}}{{\partial \theta _{2} \partial \mu }}&{} \frac{{{\partial ^2} l}}{{\partial \theta _{2} \partial {\sigma }}}&{} \frac{{{\partial ^2} l}}{{\partial {\theta _{2} }\partial {\theta _{1} }}}&{}\frac{{{\partial ^2} l}}{{\partial {\theta _{2}^2 }}} \end{array}\right] \\&\mathrm{I}^{-1}= \left[ \begin{array}{cccc} Var({\hat{\mu }} )&{} Cov({\hat{\mu }} ,\hat{\sigma })&{} Cov(\hat{\mu },{\hat{\theta }}_{1} )&{}Cov(\hat{\mu },{\hat{\theta }}_{2} )\\ &{} Var({\hat{\sigma }} ) &{} Cov(\hat{\sigma },{\hat{\theta }}_{1})&{}Cov(\hat{\sigma },{\hat{\theta }}_{2})\\ &{} &{} Var({\hat{\theta }}_{1} )&{}Cov({\hat{\theta }}_{1},{\hat{\theta }}_{2})\\ &{} &{} &{}Var({\hat{\theta }}_{2} ) \end{array}\right] \end{aligned}$$
(18)
$$\begin{aligned}&\frac{\partial ^2 l(\mu ,\sigma ,\theta _{1},\theta _{2}|{\varvec{x}})}{\partial \mu ^2} \\&\quad =\sum \limits _{i=1}^{k}r_{i}\left[ \omega _{ir_{i}+1,1}^{2}\,\frac{-\omega _{ir_{i}+1}\varphi (\omega _{ir_{i}+1})\Phi (\omega _{ir_{i}+1})-\varphi ^2(\omega _{ir_{i}+1})}{\Phi ^2(\omega _{ir_{i}+1})}+\frac{\varphi (\omega _{ir_{i}+1})}{\Phi (\omega _{ir_{i}+1})}\,\omega _{ir_{i}+1,11}\right] \\&\qquad +\sum \limits _{i=1}^{k}\sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ -R_{i:j}\left[ \omega _{ij1}^{2}\,\frac{-\omega _{ij}\varphi (\omega _{ij})[1-\Phi (\omega _{ij})]+\varphi ^2(\omega _{ij})}{[1-\Phi (\omega _{ij})]^2}+\frac{\varphi (\omega _{ij})}{1-\Phi (\omega _{ij})}\,\omega _{ij11}\right] -\omega _{ij1}^{2}-\omega _{ij}\omega _{ij11}\right\} \end{aligned}$$
(19)
$$\begin{aligned}&\frac{\partial ^2 l(\mu ,\sigma ,\theta _{1},\theta _{2}|{\varvec{x}})}{\partial \mu \partial \sigma } \\&\quad =\sum \limits _{i=1}^{k}r_{i}\left[ \omega _{ir_{i}+1,1}\omega _{ir_{i}+1,2}\,\frac{-\omega _{ir_{i}+1}\varphi (\omega _{ir_{i}+1})\Phi (\omega _{ir_{i}+1})-\varphi ^2(\omega _{ir_{i}+1})}{\Phi ^2(\omega _{ir_{i}+1})}+\frac{\varphi (\omega _{ir_{i}+1})}{\Phi (\omega _{ir_{i}+1})}\,\omega _{ir_{i}+1,12}\right] \\&\qquad +\sum \limits _{i=1}^{k}\sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ -R_{i:j}\left[ \omega _{ij1}\omega _{ij2}\,\frac{-\omega _{ij}\varphi (\omega _{ij})[1-\Phi (\omega _{ij})]+\varphi ^2(\omega _{ij})}{[1-\Phi (\omega _{ij})]^2}+\frac{\varphi (\omega _{ij})}{1-\Phi (\omega _{ij})}\,\omega _{ij12}\right] -\omega _{ij1}\omega _{ij2}-\omega _{ij}\omega _{ij12}\right\} \end{aligned}$$
(20)
$$\begin{aligned}&\frac{\partial ^2 l(\mu ,\sigma ,\theta _{1},\theta _{2}|{\varvec{x}})}{\partial \mu \partial \theta _{1}} \\&\quad =\sum \limits _{i=1}^{k}r_{i}\left[ \omega _{ir_{i}+1,1}\omega _{ir_{i}+1,3}\,\frac{-\omega _{ir_{i}+1}\varphi (\omega _{ir_{i}+1})\Phi (\omega _{ir_{i}+1})-\varphi ^2(\omega _{ir_{i}+1})}{\Phi ^2(\omega _{ir_{i}+1})}+\frac{\varphi (\omega _{ir_{i}+1})}{\Phi (\omega _{ir_{i}+1})}\,\omega _{ir_{i}+1,13}\right] \\&\qquad +\sum \limits _{i=1}^{k}\sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ -R_{i:j}\left[ \omega _{ij1}\omega _{ij3}\,\frac{-\omega _{ij}\varphi (\omega _{ij})[1-\Phi (\omega _{ij})]+\varphi ^2(\omega _{ij})}{[1-\Phi (\omega _{ij})]^2}+\frac{\varphi (\omega _{ij})}{1-\Phi (\omega _{ij})}\,\omega _{ij13}\right] -\omega _{ij1}\omega _{ij3}-\omega _{ij}\omega _{ij13}\right\} \end{aligned}$$
(21)
$$\begin{aligned}&\frac{\partial ^2 l(\mu ,\sigma ,\theta _{1},\theta _{2}|{\varvec{x}})}{\partial \mu \partial \theta _{2}} \\&\quad =\sum \limits _{i=1}^{k}r_{i}\left[ \omega _{ir_{i}+1,1}\omega _{ir_{i}+1,4}\,\frac{-\omega _{ir_{i}+1}\varphi (\omega _{ir_{i}+1})\Phi (\omega _{ir_{i}+1})-\varphi ^2(\omega _{ir_{i}+1})}{\Phi ^2(\omega _{ir_{i}+1})}+\frac{\varphi (\omega _{ir_{i}+1})}{\Phi (\omega _{ir_{i}+1})}\,\omega _{ir_{i}+1,14}\right] \\&\qquad +\sum \limits _{i=1}^{k}\sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ -R_{i:j}\left[ \omega _{ij1}\omega _{ij4}\,\frac{-\omega _{ij}\varphi (\omega _{ij})[1-\Phi (\omega _{ij})]+\varphi ^2(\omega _{ij})}{[1-\Phi (\omega _{ij})]^2}+\frac{\varphi (\omega _{ij})}{1-\Phi (\omega _{ij})}\,\omega _{ij14}\right] -\omega _{ij1}\omega _{ij4}-\omega _{ij}\omega _{ij14}\right\} \end{aligned}$$
(22)
$$\begin{aligned}&\frac{\partial ^2 l(\mu ,\sigma ,\theta _{1},\theta _{2}|{\varvec{x}})}{\partial \sigma ^2} \\&\quad =\sum \limits _{i=1}^{k}r_{i}\left[ \omega _{ir_{i}+1,2}^{2}\,\frac{-\omega _{ir_{i}+1}\varphi (\omega _{ir_{i}+1})\Phi (\omega _{ir_{i}+1})-\varphi ^2(\omega _{ir_{i}+1})}{\Phi ^2(\omega _{ir_{i}+1})}+\frac{\varphi (\omega _{ir_{i}+1})}{\Phi (\omega _{ir_{i}+1})}\,\omega _{ir_{i}+1,22}\right] \\&\qquad +\sum \limits _{i=1}^{k}\sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ -R_{i:j}\left[ \omega _{ij2}^{2}\,\frac{-\omega _{ij}\varphi (\omega _{ij})[1-\Phi (\omega _{ij})]+\varphi ^2(\omega _{ij})}{[1-\Phi (\omega _{ij})]^2}+\frac{\varphi (\omega _{ij})}{1-\Phi (\omega _{ij})}\,\omega _{ij22}\right] -\omega _{ij2}^{2}-\omega _{ij}\omega _{ij22}+\frac{1}{\sigma ^2}\right\} \end{aligned}$$
(23)
$$\begin{aligned}&\frac{\partial ^2 l(\mu ,\sigma ,\theta _{1},\theta _{2}|{\varvec{x}})}{\partial \sigma \partial \theta _{1}} \\&\quad =\sum \limits _{i=1}^{k}r_{i}\left[ \omega _{ir_{i}+1,2}\omega _{ir_{i}+1,3}\,\frac{-\omega _{ir_{i}+1}\varphi (\omega _{ir_{i}+1})\Phi (\omega _{ir_{i}+1})-\varphi ^2(\omega _{ir_{i}+1})}{\Phi ^2(\omega _{ir_{i}+1})}+\frac{\varphi (\omega _{ir_{i}+1})}{\Phi (\omega _{ir_{i}+1})}\,\omega _{ir_{i}+1,23}\right] \\&\qquad +\sum \limits _{i=1}^{k}\sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ -R_{i:j}\left[ \omega _{ij2}\omega _{ij3}\,\frac{-\omega _{ij}\varphi (\omega _{ij})[1-\Phi (\omega _{ij})]+\varphi ^2(\omega _{ij})}{[1-\Phi (\omega _{ij})]^2}+\frac{\varphi (\omega _{ij})}{1-\Phi (\omega _{ij})}\,\omega _{ij23}\right] -\omega _{ij2}\omega _{ij3}-\omega _{ij}\omega _{ij23}\right\} \end{aligned}$$
(24)
$$\begin{aligned}&\frac{\partial ^2 l(\mu ,\sigma ,\theta _{1},\theta _{2}|{\varvec{x}})}{\partial \sigma \partial \theta _{2}} \\&\quad =\sum \limits _{i=1}^{k}r_{i}\left[ \omega _{ir_{i}+1,2}\omega _{ir_{i}+1,4}\,\frac{-\omega _{ir_{i}+1}\varphi (\omega _{ir_{i}+1})\Phi (\omega _{ir_{i}+1})-\varphi ^2(\omega _{ir_{i}+1})}{\Phi ^2(\omega _{ir_{i}+1})}+\frac{\varphi (\omega _{ir_{i}+1})}{\Phi (\omega _{ir_{i}+1})}\,\omega _{ir_{i}+1,24}\right] \\&\qquad +\sum \limits _{i=1}^{k}\sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ -R_{i:j}\left[ \omega _{ij2}\omega _{ij4}\,\frac{-\omega _{ij}\varphi (\omega _{ij})[1-\Phi (\omega _{ij})]+\varphi ^2(\omega _{ij})}{[1-\Phi (\omega _{ij})]^2}+\frac{\varphi (\omega _{ij})}{1-\Phi (\omega _{ij})}\,\omega _{ij24}\right] -\omega _{ij2}\omega _{ij4}-\omega _{ij}\omega _{ij24}\right\} \end{aligned}$$
(25)
$$\begin{aligned}&\frac{\partial ^2 l(\mu ,\sigma ,\theta _{1},\theta _{2}|{\varvec{x}})}{\partial {\theta _{1}}^2} \\&\quad =\sum \limits _{i=1}^{k}r_{i}\left[ \omega _{ir_{i}+1,3}^{2}\,\frac{-\omega _{ir_{i}+1}\varphi (\omega _{ir_{i}+1})\Phi (\omega _{ir_{i}+1})-\varphi ^2(\omega _{ir_{i}+1})}{\Phi ^2(\omega _{ir_{i}+1})}+\frac{\varphi (\omega _{ir_{i}+1})}{\Phi (\omega _{ir_{i}+1})}\,\omega _{ir_{i}+1,33}\right] \\&\qquad +\sum \limits _{i=1}^{k}\sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ -R_{i:j}\left[ \omega _{ij3}^{2}\,\frac{-\omega _{ij}\varphi (\omega _{ij})[1-\Phi (\omega _{ij})]+\varphi ^2(\omega _{ij})}{[1-\Phi (\omega _{ij})]^2}+\frac{\varphi (\omega _{ij})}{1-\Phi (\omega _{ij})}\,\omega _{ij33}\right] -\omega _{ij3}^{2}-\omega _{ij}\omega _{ij33}\right\} \end{aligned}$$
(26)
$$\begin{aligned}&\frac{\partial ^2 l(\mu ,\sigma ,\theta _{1},\theta _{2}|{\varvec{x}})}{\partial \theta _{1}\partial \theta _{2}} \\&\quad =\sum \limits _{i=1}^{k}r_{i}\left[ \omega _{ir_{i}+1,3}\omega _{ir_{i}+1,4}\,\frac{-\omega _{ir_{i}+1}\varphi (\omega _{ir_{i}+1})\Phi (\omega _{ir_{i}+1})-\varphi ^2(\omega _{ir_{i}+1})}{\Phi ^2(\omega _{ir_{i}+1})}+\frac{\varphi (\omega _{ir_{i}+1})}{\Phi (\omega _{ir_{i}+1})}\,\omega _{ir_{i}+1,34}\right] \\&\qquad +\sum \limits _{i=1}^{k}\sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ -R_{i:j}\left[ \omega _{ij3}\omega _{ij4}\,\frac{-\omega _{ij}\varphi (\omega _{ij})[1-\Phi (\omega _{ij})]+\varphi ^2(\omega _{ij})}{[1-\Phi (\omega _{ij})]^2}+\frac{\varphi (\omega _{ij})}{1-\Phi (\omega _{ij})}\,\omega _{ij34}\right] -\omega _{ij3}\omega _{ij4}-\omega _{ij}\omega _{ij34}\right\} \end{aligned}$$
(27)
$$\begin{aligned}&\frac{\partial ^2 l(\mu ,\sigma ,\theta _{1},\theta _{2}|{\varvec{x}})}{\partial {\theta _{2}}^2} \\&\quad =\sum \limits _{i=1}^{k}r_{i}\left[ \omega _{ir_{i}+1,4}^{2}\,\frac{-\omega _{ir_{i}+1}\varphi (\omega _{ir_{i}+1})\Phi (\omega _{ir_{i}+1})-\varphi ^2(\omega _{ir_{i}+1})}{\Phi ^2(\omega _{ir_{i}+1})}+\frac{\varphi (\omega _{ir_{i}+1})}{\Phi (\omega _{ir_{i}+1})}\,\omega _{ir_{i}+1,44}\right] \\&\qquad +\sum \limits _{i=1}^{k}\sum \limits _{j=r_{i}+1}^{m_{i}}\left\{ -R_{i:j}\left[ \omega _{ij4}^{2}\,\frac{-\omega _{ij}\varphi (\omega _{ij})[1-\Phi (\omega _{ij})]+\varphi ^2(\omega _{ij})}{[1-\Phi (\omega _{ij})]^2}+\frac{\varphi (\omega _{ij})}{1-\Phi (\omega _{ij})}\,\omega _{ij44}\right] -\omega _{ij4}^{2}-\omega _{ij}\omega _{ij44}+\frac{\phi _{i}}{\theta _{2}^{2}}\right\} \end{aligned}$$
(28)

here \(\omega _{ij11}=\frac{\partial \omega _{ij1}}{\partial \mu }=0\), \(\omega _{ij12}=\frac{\partial \omega _{ij1}}{\partial \sigma }=\frac{{\theta _{1}}^{\phi _{i}}}{\sigma ^2{\theta _{2}}^{\phi _{i}}}\), \(\omega _{ij13}=\frac{\partial \omega _{ij1}}{\partial \theta _{1}}=-\frac{\phi _{i}{\theta _{1}}^{\phi _{i}-1}}{\sigma {\theta _{2}}^{\phi _{i}}}\), \(\omega _{ij14}=\frac{\partial \omega _{ij1}}{\partial \theta _{2}}=\frac{\phi _{i}{\theta _{1}}^{\phi _{i}}}{\sigma {\theta _{2}}^{\phi _{i}+1}}\), \(\omega _{ij22}=\frac{1}{\sigma ^2}\omega _{ij}-\frac{1}{\sigma }\omega _{ij2}\), \(\omega _{ij23}=-\frac{1}{\sigma }\omega _{ij3}\), \(\omega _{ij24}=-\frac{1}{\sigma }\omega _{ij4}\), \(\omega _{ij33}=-\frac{\mu \phi _{i}(\phi _{i}-1){\theta _{1}}^{\phi _{i}-2}}{\sigma {\theta _{2}}^{\phi _{i}}}\), \(\omega _{ij34}=\frac{\mu \phi _{i}^2{\theta _{1}}^{\phi _{i}-1}}{\sigma {\theta _{2}}^{\phi _{i}+1}}\), \(\omega _{ij44}=\frac{\phi _{i}}{{\theta _{2}}^2}\omega _{ij}-\frac{\phi _{i}}{{\theta _{2}}}\omega _{ij4}\), and the mixed derivatives are symmetric, \(\omega _{ijst}=\omega _{ijts}\).
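As an alternative to hand-coded Newton-Raphson with the analytic derivatives above, the MLEs can be obtained by numerically minimizing the negative log-likelihood. The sketch below does this for the special case of complete samples (\(r_i=0\), no withdrawals, so the first and survival groups of terms in (13) vanish) at two assumed stress levels; all numbers are illustrative, not from the paper's data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)

phis = np.array([1.0, 2.0])        # normalized stresses phi_i (illustrative)
true_mu, true_sigma, true_th1, true_th2 = 2.0, 0.5, 1.2, 0.8

# complete samples (r_i = 0, no withdrawals): a special case of the GPT-IIC scheme
data = [rng.lognormal(true_mu * true_th1 ** p, true_sigma * true_th2 ** p, 300)
        for p in phis]

def neg_loglik(par):
    mu, sigma, th1, th2 = par
    nll = 0.0
    for p, x in zip(phis, data):
        m, s = mu * th1 ** p, sigma * th2 ** p
        # normal log-density of ln X; the -ln x Jacobian term is constant
        # in the parameters and therefore dropped
        nll -= norm.logpdf(np.log(x), loc=m, scale=s).sum()
    return nll

res = minimize(neg_loglik, x0=[1.0, 1.0, 1.0, 1.0], method="L-BFGS-B",
               bounds=[(None, None), (1e-6, None), (1e-6, None), (1e-6, None)])
mu_hat, sigma_hat, th1_hat, th2_hat = res.x
```

The observed information matrix (18) can then be approximated by a finite-difference Hessian of `neg_loglik` at the optimum, which avoids coding Eqs. (19)-(28) by hand.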

The \(100(1-\tau )\)% approximate confidence interval (CI) for parameter \(P_{M}\) is given by

$$\begin{aligned} \left( {{\hat{P}}_M} - {z_{\tau /2}}\sqrt{Var({{{\hat{P}}}_M})} ,{{\hat{P}}_M} +{z_{\tau /2}}\sqrt{Var({{{\hat{P}}}_M})}\right) \end{aligned}$$
(29)

where \({\hat{P}}_{M}\) = \({\hat{\mu }}\), \(\hat{\sigma }\), \({{\hat{\theta }} _1}\), or \({{\hat{\theta }} _2}\), and \(z_{\tau /2}\) is the upper \(\tau /2\) quantile of the standard normal distribution. Under the normal stress level, the estimates of s(x) and h(x) are:

$$\begin{aligned} {\hat{s}}(x) =1-\Phi \left( \frac{\ln x-{\hat{\mu }}}{{\hat{\sigma }}}\right) , \end{aligned}$$
(30)
$$\begin{aligned} {\hat{h}}(x) = \frac{\frac{1}{\sqrt{2\pi }{\hat{\sigma }} x}\exp \left\{ -\frac{(\ln x-{\hat{\mu }})^2}{2{\hat{\sigma }}^2}\right\} }{1-\Phi \left( \frac{\ln x-{\hat{\mu }}}{{\hat{\sigma }}}\right) }. \end{aligned}$$
(31)

The approximate variances of \({\hat{s}}(x)\) and \({\hat{h}}(x)\) are obtained by using the Delta method:

$$\begin{aligned} Var({\hat{s}}(x))&= \left[ \frac{\partial s(x)}{\partial \mu },\frac{\partial s(x)}{\partial \sigma },\frac{\partial s(x)}{\partial \theta _{1}},\frac{\partial s(x)}{\partial \theta _{2}} \right] \mathrm{I}^{ - 1} \\ &\left[ \frac{\partial s(x)}{\partial \mu },\frac{\partial s(x)}{\partial \sigma },\frac{\partial s(x)}{\partial \theta _{1}},\frac{\partial s(x)}{\partial \theta _{2}}\right] ^T \end{aligned}$$
(32)
$$\begin{aligned} Var({\hat{h}}(x))&= \left[ \frac{\partial h(x)}{\partial \mu },\frac{\partial h(x)}{\partial \sigma },\frac{\partial h(x)}{\partial \theta _{1}},\frac{\partial h(x)}{\partial \theta _{2}} \right] \mathrm{I}^{-1} \\ &\left[ \frac{\partial h(x)}{\partial \mu },\frac{\partial h(x)}{\partial \sigma },\frac{\partial h(x)}{\partial \theta _{1}},\frac{\partial h(x)}{\partial \theta _{2}}\right] ^T \end{aligned}$$
(33)

where

$$\begin{aligned}&\frac{\partial s(x)}{\partial \mu }=\frac{1}{\sigma }\varphi \left( \frac{\ln x-\mu }{\sigma }\right) \end{aligned}$$
(34)
$$\begin{aligned}&\frac{\partial s(x)}{\partial \sigma }=\frac{\ln x-\mu }{\sigma ^2}\varphi \left( \frac{\ln x-\mu }{\sigma }\right) \end{aligned}$$
(35)
$$\begin{aligned}&\frac{\partial h(x)}{\partial \mu }=\frac{\frac{(\ln x-\mu )}{\sqrt{2\pi }\sigma ^3 x}\exp \{-\frac{(\ln x-\mu )^2}{2\sigma ^2}\}s(x)-\frac{\partial s(x)}{\partial \mu }\frac{1}{\sqrt{2\pi }\sigma x}\exp \{-\frac{(\ln x-\mu )^2}{2\sigma ^2}\}}{\left[ 1-\Phi \left( \frac{\ln x-\mu }{\sigma }\right) \right] ^2} \end{aligned}$$
(36)
$$\begin{aligned}&\frac{\partial h(x)}{\partial \sigma }=\frac{\frac{1}{\sqrt{2\pi }\sigma x}(-\frac{1}{\sigma }+\frac{(\ln x-\mu )^2}{\sigma ^3})\exp \{-\frac{(\ln x-\mu )^2}{2\sigma ^2}\}s(x)-\frac{\partial s(x)}{\partial \sigma }\frac{1}{\sqrt{2\pi }\sigma x}\exp \{-\frac{(\ln x-\mu )^2}{2\sigma ^2}\}}{\left[ 1-\Phi \left( \frac{\ln x-\mu }{\sigma }\right) \right] ^2} \end{aligned}$$
(37)

The \(100(1-\tau )\%\) approximate CIs for s(x) and h(x) are

$$\begin{aligned} \left( {\hat{s}}(x)-{z_{\tau /2}}\sqrt{Var({\hat{s}}(x))},{\hat{s}}(x)+{z_{\tau /2}}\sqrt{Var({\hat{s}}(x))}\right) \end{aligned}$$
(38)

and

$$\begin{aligned} \left( {\hat{h}}(x)-{z_{\tau /2}}\sqrt{Var({\hat{h}}(x))},{\hat{h}}(x)+{z_{\tau /2}}\sqrt{Var({\hat{h}}(x))}\right) \end{aligned}$$
(39)

respectively.
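The Delta-method interval (38) only needs the gradient of s(x) and the covariance matrix. The sketch below uses finite-difference partials in place of (34)-(35), together with an assumed estimate vector and covariance matrix (illustrative numbers, not results from real data):

```python
import numpy as np
from scipy.stats import norm

def reliability(x, mu, sigma):
    # s(x) of Eq. (3) at the normal stress level
    return 1.0 - norm.cdf((np.log(x) - mu) / sigma)

def delta_ci(x, est, cov, tau=0.05, eps=1e-6):
    """Approximate 100(1-tau)% CI (38) for s(x) via the Delta method.
    est = (mu, sigma, theta1, theta2); s(x) depends only on (mu, sigma),
    so the partials with respect to (theta1, theta2) are zero."""
    grad = np.zeros(4)
    for i in range(2):              # finite-difference partials wrt mu, sigma
        up, dn = np.array(est, float), np.array(est, float)
        up[i] += eps
        dn[i] -= eps
        grad[i] = (reliability(x, up[0], up[1]) - reliability(x, dn[0], dn[1])) / (2 * eps)
    se = np.sqrt(grad @ cov @ grad)  # Eq. (32) as a quadratic form
    z = norm.ppf(1 - tau / 2)        # z_{tau/2}
    s = reliability(x, est[0], est[1])
    return s - z * se, s + z * se

# illustrative estimates and covariance matrix (assumed values)
est = (2.0, 0.5, 1.2, 0.8)
cov = np.diag([0.01, 0.004, 0.02, 0.02])
lo, hi = delta_ci(np.exp(2.0), est, cov)
```

The same pattern gives the interval (39) for h(x) by swapping in the hazard function.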

3.2 Bayesian estimation

In this section, the Bayes estimates of the parameters are derived. First, suppose that \(\mu ,\sigma\), \(\theta _{1}\), and \(\theta _{2}\) are independent of one another. Next, suppose that \(\sigma\) follows a Gamma prior whose PDF \(g(\sigma ;a,b)\) is

$$\begin{aligned} g(\sigma ;a,b) = \frac{b^a}{\Gamma (a)}{\sigma ^{a-1}} \mathrm{e}^{-b\sigma },\quad a>0,\quad b>0. \end{aligned}$$
(40)

The prior of the parameter \(\mu\) is assumed to have a log-concave PDF; here \(\mu\) takes a normal prior with PDF \(\pi (\mu ;c,d)\), abbreviated to \(\pi (\mu )\) and given by

$$\begin{aligned} \pi (\mu ) = \frac{1}{\sqrt{2\pi }d}e^{-\frac{(\mu -c)^2}{2d^2}} \end{aligned}$$
(41)

\(\theta _{1}\) and \(\theta _{2}\) have uniform priors with PDF \(u(\theta _i)\equiv 1\), \(i=1,2\). Then, the joint posterior PDF of \((\mu ,\sigma ,\theta _{1},\theta _{2})\) given \(\varvec{x}\) is

$$\begin{aligned} \pi (\mu , \sigma ,\theta _{1},\theta _{2} |\varvec{x}) \propto {L(\mu ,\sigma ,\theta _{1},\theta _{2}|x)\pi (\mu )g(\sigma ;{a},{b})u(\theta _{1})u(\theta _{2})} \end{aligned}$$
(42)

Under the squared error loss (SEL) function, we obtain the Bayes estimates of the parameters. The BE of any function \(P(\mu ,\sigma ,\theta _{1},\theta _{2})\) of the parameters, denoted \(\hat{P}(\mu ,\sigma ,\theta _{1},\theta _{2})\), is

$$\begin{aligned} \hat{P}(\mu ,\sigma ,\theta _{1},\theta _{2}) = \frac{{\int _0^\infty \! {\int _0^\infty \!{\int _0^\infty \!{\int _{-\infty }^\infty {P(\mu ,\sigma ,\theta _{1},\theta _{2})L(\mu ,{\sigma },\theta _{1},\theta _{2}|\varvec{x})\pi (\mu )g({\sigma };{a},{b})u(\theta _{1})u(\theta _{2})d\mu d\sigma d \theta _{1}d\theta _{2} } } } }}}{{\int _0^\infty {\int _0^\infty {\int _0^\infty {\int _{-\infty }^\infty {L(\mu ,{\sigma },\theta _{1},\theta _{2}|\varvec{x})\pi (\mu )g({\sigma };{a},{b})u(\theta _{1})u(\theta _{2})d\mu d\sigma d \theta _{1}d\theta _{2} } } } }}} \end{aligned}$$
(43)

Because Eq. (43) cannot be reduced to a closed form, the MCMC method is used. Substituting (12), (40) and (41) into (42), and given \(\sigma\), \(\theta _1\), \(\theta _2\), and \(\varvec{x}\), the conditional PDF of \(\mu\) is proportional to

$$\begin{aligned} \pi (\mu |\sigma ,\theta _{1},\theta _{2} ,\varvec{x})&\varpropto\prod \limits _{i=1}^{k}\left[ \Phi \left( \frac{\ln x_{i:r_{i}+1}-\mu \theta _{1}^{\phi _{i}}}{\sigma {\theta _{2}}^{\phi _{i}}}\right) \right] ^{r_{i}} \\ &\times \prod \limits _{j=r_{i}+1}^{{m_i}}\exp \left\{ -\frac{(\ln x_{i:j}-\mu \theta _{1}^{\phi _{i}})^2}{2\sigma ^2\theta _{2}^{2{\phi _{i}}}}\right\} \\ &\left[ 1-\Phi \left( \frac{\ln x_{i:j}-\mu \theta _{1}^{\phi _{i}}}{\sigma {\theta _{2}}^{\phi _{i}}}\right) \right] ^{R_{i:j}}\pi (\mu ). \end{aligned}$$
(44)

Given \(\mu\), \(\theta _{1}\),\(\theta _{2}\) and \(\varvec{x}\), the conditional PDF of \(\sigma\) is proportional to

$$\begin{aligned} \pi (\sigma |\mu ,\theta _{1},\theta _{2} ,\varvec{x})&\varpropto\prod \limits _{i=1}^{k}\left[ \Phi \left( \frac{\ln x_{i:r_{i}+1}-\mu \theta _{1}^{\phi _{i}}}{\sigma {\theta _{2}}^{\phi _{i}}}\right) \right] ^{r_{i}} \\ &\prod \limits _{j=r_{i}+1}^{{m_i}}\frac{1}{\sigma }\exp \left\{ -\frac{(\ln x_{i:j}-\mu \theta _{1}^{\phi _{i}})^2}{2\sigma ^2\theta _{2}^{2{\phi _{i}}}}\right\} \\ &\times \left[ 1-\Phi \left( \frac{\ln x_{i:j}-\mu \theta _{1}^{\phi _{i}}}{\sigma {\theta _{2}}^{\phi _{i}}}\right) \right] ^{R_{i:j}} \\&\sigma ^{a-1}\exp (-b\sigma ). \end{aligned}$$
(45)

Given \(\mu\), \(\sigma\), \(\theta _{2}\), and \(\varvec{x}\), the conditional PDF of \(\theta _{1}\) is proportional to

$$\begin{aligned} \pi (\theta _{1}|\mu ,\sigma ,\theta _{2} ,\varvec{x})&\varpropto\prod \limits _{i=1}^{k}\left[ \Phi \left( \frac{\ln x_{i:r_{i}+1}-\mu \theta _{1}^{\phi _{i}}}{\sigma {\theta _{2}}^{\phi _{i}}}\right) \right] ^{r_{i}} \\ &\prod \limits _{j=r_{i}+1}^{{m_i}}\exp \left\{ -\frac{(\ln x_{i:j}-\mu \theta _{1}^{\phi _{i}})^2}{2\sigma ^2\theta _{2}^{2{\phi _{i}}}}\right\} \\ &\times \left[ 1-\Phi \left( \frac{\ln x_{i:j}-\mu \theta _{1}^{\phi _{i}}}{\sigma {\theta _{2}}^{\phi _{i}}}\right) \right] ^{R_{i:j}}. \end{aligned}$$
(46)

Given \(\mu\), \(\sigma\), \(\theta _{1}\), and \(\varvec{x}\), the conditional PDF of \(\theta _{2}\) is proportional to

$$\begin{aligned} \pi (\theta _{2}|\mu ,\sigma ,\theta _{1},\varvec{x})&\varpropto\prod \limits _{i=1}^{k}\left[ \Phi \left( \frac{\ln x_{i:r_{i}+1}-\mu \theta _{1}^{\phi _{i}}}{\sigma {\theta _{2}}^{\phi _{i}}}\right) \right] ^{r_{i}} \\ &\prod \limits _{j=r_{i}+1}^{{m_i}}\frac{1}{\theta _{2}^{\phi _{i}}}\exp \left\{ -\frac{(\ln x_{i:j}-\mu \theta _{1}^{\phi _{i}})^2}{2\sigma ^2\theta _{2}^{2{\phi _{i}}}}\right\} \\ &\times \left[ 1-\Phi \left( \frac{\ln x_{i:j}-\mu \theta _{1}^{\phi _{i}}}{\sigma {\theta _{2}}^{\phi _{i}}}\right) \right] ^{R_{i:j}}. \end{aligned}$$
(47)

Property 1

The conditional posterior PDF \(\pi (\mu |\sigma ,\theta _{1},\theta _{2} ,\varvec{x})\) in Eq. (44) is log-concave.

Proof

See “Appendix A”. \(\hfill\square\)

The marginal posterior distributions of \(\mu ,\sigma ,\theta _1\) and \(\theta _2\) do not have closed forms. By Eq. (44) and Property 1, we use the adaptive rejection sampling (ARS) method of Gilks and Wild (1992), which requires a log-concave posterior PDF, to draw samples from the full conditional distribution of \(\mu\). The Metropolis–Hastings (M–H) algorithm is used to draw samples from the full conditional distributions of \(\sigma ,\theta _1\) and \(\theta _2\) based on Eqs. (45)–(47). Together these steps generate a hybrid Markov chain; the posterior samples are simulated and the BEs are obtained in turn. The combined ARS and M–H procedure is as follows:

Step 1. Set initial value \({\mu }^{(0)},\sigma ^{(0)},\theta _{1}^{(0)},\theta _{2}^{(0)}\) and the iteration counter \(j=1\);

Step 2. By Property 1, the conditional density of \(\mu\) belongs to the log-concave density family; generate a random value \(\mu ^{(j)}\) from \(\pi (\mu |\sigma ^{(j-1)},\theta _{1}^{(j-1)},\theta _{2}^{(j-1)},\varvec{x})\) using the adaptive rejection algorithm introduced by Gilks and Wild (1992).
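Full ARS maintains a piecewise-linear upper hull of the log-density that is refined at each rejection (Gilks and Wild 1992). As a much simpler illustration of the same rejection idea, the sketch below uses a fixed Gaussian envelope instead of the adaptive hull; the function names, the envelope choice, and the bound `log_M` are illustrative, not from the paper.

```python
import math
import random

def rejection_sample(log_pdf, mode, scale, log_M, rng):
    """Draw one value from an unnormalized log-concave density log_pdf.

    Simplified stand-in for adaptive rejection sampling: instead of the
    piecewise-linear upper hull of Gilks and Wild (1992), a fixed Gaussian
    envelope N(mode, scale^2) is used.  log_M must satisfy
    log_pdf(x) <= log_M + log N(x; mode, scale) for all x.
    """
    log_norm = math.log(scale * math.sqrt(2.0 * math.pi))
    while True:
        x = rng.gauss(mode, scale)                          # envelope draw
        log_env = log_M - 0.5 * ((x - mode) / scale) ** 2 - log_norm
        if math.log(rng.random()) < log_pdf(x) - log_env:   # accept test
            return x
```

For a log-concave target, any envelope that dominates the density works; ARS is preferred in practice because it tightens the envelope automatically and so wastes fewer proposals.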

Step 3. Generate a random variable \(\sigma ^{(j)}\) using the M–H algorithm as follows:

  1. (3.1)

Set \(\sigma ^{(j)}_1=\sigma ^{(j-1)}\), the current value of the chain;

  2. (3.2)

Generate a random number \(\sigma _*^{(j)}\) from the proposal Gamma distribution \(g(\sigma ;a,b)\) with known, nonnegative hyper-parameters \(a\) and \(b\);

  3. (3.3)

Compute the acceptance probability

    $$\begin{aligned} h(\sigma ^{(j)}_1,\sigma ^{(j)}_*)=\min \left( 1,\frac{\pi (\sigma ^{(j)}_*|\mu ^{(j)},\theta _{1}^{(j-1)},\theta _{2}^{(j-1)},\varvec{x})g(\sigma _1^{(j)};a,b)}{\pi (\sigma ^{(j)}_1|\mu ^{(j)},\theta _{1}^{(j-1)},\theta _{2}^{(j-1)},\varvec{x})g(\sigma _*^{(j)};a,b)}\right) \end{aligned}$$
  4. (3.4)

    Generate a random number \(u_j\) from U(0, 1);

  5. (3.5)

If \(h(\sigma ^{(j)}_1,\sigma ^{(j)}_*)>u_j\), set \(\sigma ^{(j)}=\sigma _*^{(j)}\); otherwise set \(\sigma ^{(j)}=\sigma ^{(j)}_1\);
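Steps 3.1–3.5 can be sketched as a single update function. This is illustrative: `log_post` stands for the log of the conditional posterior (45) up to an additive constant, and working on the log scale avoids overflow in the ratio.

```python
import math
import random

def mh_update_sigma(sigma_cur, log_post, a, b, rng):
    """One M-H update for sigma (Steps 3.1-3.5).

    The proposal is an independence Gamma(a, b) draw (shape a, rate b),
    so the acceptance ratio includes the proposal density at both the
    current and the proposed point.
    """
    sigma_new = rng.gammavariate(a, 1.0 / b)    # Step 3.2: proposal draw

    def log_g(s):                               # log Gamma(a, b) density
        return (a - 1.0) * math.log(s) - b * s  # up to a constant

    # Step 3.3: log acceptance ratio
    log_h = (log_post(sigma_new) + log_g(sigma_cur)
             - log_post(sigma_cur) - log_g(sigma_new))
    # Steps 3.4-3.5: accept or keep the current value
    if math.log(rng.random()) < min(0.0, log_h):
        return sigma_new
    return sigma_cur
```

The same function, with the appropriate conditional plugged in for `log_post`, serves for the \(\theta _1\) and \(\theta _2\) updates of Steps 4 and 5.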

Step 4. Generate the random variable \(\theta _{1}^{(j)}\) from \(\pi (\theta _{1} |\mu ^{(j)},\sigma ^{(j)},\theta _{2}^{(j-1)},\varvec{x})\) using the M–H algorithm; the process is similar to Step 3;

Step 5. Generate the random variable \(\theta _{2}^{(j)}\) from \(\pi (\theta _{2} |\mu ^{(j)},\sigma ^{(j)},\theta _{1}^{(j)},\varvec{x})\) using the M–H algorithm; the process is similar to Step 3;

Step 6. For a given point \(x\in (0,+\infty )\), compute

$$\begin{aligned} {{\hat{s}}^{(j)}}(x)= & {} 1-\Phi \left( \frac{\ln x-\mu ^{(j)}(\theta _1^{(j)})^{\phi _{i}}}{\sigma ^{(j)}{(\theta _2^{(j)})}^{\phi _{i}}}\right) ,\\ {{\hat{h}}^{(j)}}(x)= & {} \frac{\varphi \left( \frac{\ln x-\mu ^{(j)}(\theta _1^{(j)})^{\phi _{i}}}{\sigma ^{(j)}{(\theta _2^{(j)})}^{\phi _{i}}}\right) }{{{\hat{s}}^{(j)}}(x)},\quad i=1,\ldots ,k. \end{aligned}$$
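Step 6 reduces to evaluating the standard normal CDF \(\Phi\) and PDF \(\varphi\) at one standardized point. A minimal sketch, implementing the two displayed formulas literally (helper names are illustrative):

```python
import math

def std_norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def std_norm_pdf(z):
    """Standard normal PDF."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def s_and_h(x, mu, sigma, th1, th2, phi):
    """One draw of the reliability s(x) and hazard h(x) of Step 6
    for transformed stress value phi = phi_i."""
    z = (math.log(x) - mu * th1 ** phi) / (sigma * th2 ** phi)
    s = 1.0 - std_norm_cdf(z)
    h = std_norm_pdf(z) / s
    return s, h
```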

Step 7. Set \(j=j+1\);

Step 8. Repeat Steps 2 to 7 N times and obtain

$$\begin{aligned} \left( {{\hat{\mu }}}^{(1)},{{\hat{\sigma }} ^{(1)}},{{\hat{\theta }}_1 ^{(1)}},{{\hat{\theta }}_2 ^{(1)}},{\hat{s}^{(1)}}(x),{{\hat{h}}^{(1)}}(x)\right) , \ldots ,\left( {{\hat{\mu }}}^{(N)},{{\hat{\sigma }} ^{(N)}},{{\hat{\theta }}_1 ^{(N)}},{{\hat{\theta }}_2 ^{(N)}},{\hat{s}^{(N)}}(x),{{\hat{h}}^{(N)}}(x)\right) . \end{aligned}$$

Discarding the first \(N_{0}\) samples as burn-in, the remaining \(N-N_{0}\) samples are used to obtain the BEs

$$\begin{aligned} {\hat{P}}_{B}= \frac{1}{N-N_0}\sum \limits _{j =N_0+ 1}^{N}{\hat{P}}^{(j)}, \end{aligned}$$

under the SEL function. The posterior mean square errors (PMSEs) of the parameters \((\mu ,\sigma ,\theta _{1}\), \(\theta _{2},s(x),h(x))\) are

$$\begin{aligned} \mathrm{PMSE}({{\hat{P}}_B}) = \sqrt{\frac{1}{N-N_0}\sum \limits _{j = N_0+1}^N {{{(P^{(j)}-{{{\hat{P}}}_B})}^2}}}, \end{aligned}$$

where P can be \(\mu ,\sigma ,\theta _{1},\theta _2, s(x)\), or h(x).
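Given a stored chain for any of the quantities P, the Bayes estimate and PMSE above are one pass over the post-burn-in draws. A minimal sketch (the function name is illustrative):

```python
import math

def bayes_estimate_and_pmse(chain, n_burn):
    """SEL Bayes estimate (posterior mean) and root-PMSE,
    averaging over the N - N0 draws kept after burn-in."""
    kept = chain[n_burn:]
    p_hat = sum(kept) / len(kept)
    pmse = math.sqrt(sum((p - p_hat) ** 2 for p in kept) / len(kept))
    return p_hat, pmse
```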

Step 9. Order \(\hat{P}^{(N_{0}+1)},\ldots ,\hat{P}^{(N)}\) as \(\hat{P}_{(1)},\ldots ,\hat{P}_{(N-N_{0})}\).

Then, the \(100(1-\gamma )\%\) highest posterior density (HPD) credible intervals of P, namely \(\big (\hat{P}_{([(N-N_0)\gamma /2])}, \hat{P}_{([(N-N_0)(1-\gamma /2)])}\big )\), are obtained using the method suggested by Chen and Shao (1999), where [a] denotes the integer part of a.
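Both the ordered-sample interval displayed above and the shortest-interval construction of Chen and Shao (1999) are simple to compute from the sorted post-burn-in draws. A sketch under those assumptions (function names illustrative):

```python
def credible_interval(chain, n_burn, gamma):
    """Equal-tailed 100(1-gamma)% interval from the ordered
    post-burn-in draws, with [a] the integer part of a."""
    kept = sorted(chain[n_burn:])
    n = len(kept)
    lo = kept[max(int(n * gamma / 2.0) - 1, 0)]      # P_([n*gamma/2])
    hi = kept[int(n * (1.0 - gamma / 2.0)) - 1]      # P_([n*(1-gamma/2)])
    return lo, hi

def hpd_interval(chain, n_burn, gamma):
    """Chen-Shao HPD interval: among all intervals spanning
    [(1-gamma)*n] consecutive ordered draws, keep the shortest."""
    kept = sorted(chain[n_burn:])
    n = len(kept)
    w = int((1.0 - gamma) * n)
    j = min(range(n - w), key=lambda i: kept[i + w] - kept[i])
    return kept[j], kept[j + w]
```

For a symmetric posterior the two intervals essentially coincide; for a skewed posterior the shortest-interval HPD is narrower.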

4 Simulation study

In this section, a simulation study of the proposed method is conducted. First, suppose that the prior distribution for \(\sigma\) in Eq. (40) is the Gamma distribution with hyper-parameters \(a=0, b=0\), which is equivalent to the non-informative prior \(\frac{1}{\sigma }\), and that \(\mu ,\theta _1,\theta _2\) have uniform priors. Next, consider GPT-IIC data under a three-level constant-stress test. Suppose the lifetime model is the log-normal distribution with three accelerated temperature levels \(S_{1}=240\), \(S_{2}=260\), \(S_{3}=280\), and normal operating temperature \(S_{0}=200\). The acceleration functions are \(\ln \mu _{i}=1.95+126.43/S_{i}\) and \(\ln \sigma _{i}=-0.11+218.79/S_{i},i=1,2,3\). Therefore, \(\mu =13.2255\), \(\sigma =2.6750\), \(\theta _{1}=0.9\), \(\theta _{2}=0.8333\). For simplicity, we take \(n_{1}=n_{2}=n_{3}=n\) and \(m_{1}=m_{2}=m_{3}=m\); the following sampling schemes are used at the three stress levels:

  • [1]: \(n=30,m=20,r=5,R_{i:6}=\cdots =R_{i:{m-1}}=0,R_{i:m}=5\).

  • [2]: \(n=30,m=25,r=0,R_{i:1}=2,R_{i:2}=\cdots =R_{i:{m-1}}=0,R_{i:m}=3\).

  • [3]: \(n=40,m=30,r=0,R_{i:1}=5,R_{i:2}=\cdots =R_{i:{m-1}}=0,R_{i:m}=5\).

  • [4]: \(n=40,m=35,r=3,R_{i:4}=\cdots =R_{i:{m-1}}=0,R_{i:m}=2\).

  • [5]: \(n=50,m=40,r=1,R_{i:2}=5,R_{i:3}=\cdots =R_{i:{m-1}}=0,R_{i:m}=4\).

  • [6]: \(n=50,m=45,r=2,R_{i:3}=\cdots =R_{i:{m-1}}=0,R_{i:m}=3\). \(i=1,2,3\).

Let the GPT-IIC scheme under each stress level \(S_h\) be \(n_h,m_h,r_h\) and \(R_h=(R_{h:r_h+1}\), \(R_{h:r_h+2},\ldots , R_{h:m_h})\). The GPT-IIC samples from the log-normal distribution under \(S_h\) are generated according to the method given in Balakrishnan and Aggarwala (2000). The steps of the algorithm are as follows:

Step 1. Generate a random variable \(V_{h:m_h}\) from \(Beta(n_h-r_h,r_h+1)\).

Step 2. Generate \(m_h-r_h-1\) independent random variables \(W_{h:r_h+1},\ldots ,W_{h:m_h-1}\) from U(0, 1).

Step 3. Set \(V_{h:r_h+l}=W_{h:r_h+l}^{1/a_{h:{r_h+l}}}\), \(l=1,\ldots , m_h-r_h-1\), where \(a_{h:{r_h+l}}=l+\sum \limits _{j=m_h-l+1}^{m_h}R_{h:j}\).

Step 4. Set \(_{r_h}U_{r_h+l:m_h:n_h}=1-V_{h:m_h-l+1}V_{h:m_h-l+2}\cdots V_{h:m_h},l=1,\ldots ,m_h-r_h\).

Step 5. Finally, set \(_{r_h}X_{r_h+l:m_h:n_h}=F_h^{-1}(_{r_h}U_{r_h+l:m_h:n_h};\mu _h,\sigma _h)\), \(l=1,\ldots ,m_h-r_h\), where \(F_h^{-1}(x;\mu _h,\sigma _h)\) is the inverse function of the log-normal CDF \(F_h(x;\mu _h,\sigma _h)=\Phi (\frac{\ln x-\mu _h}{\sigma _h})\). Then \(\big (_{r_h}X_{r_h+1:m_h:n_h},\, _{r_h}X_{r_h+2:m_h:n_h},\,\ldots , \, _{r_h}X_{m_h:m_h:n_h}\big )\), abbreviated as \(\big (X_{h:r_h+1},\, X_{h:r_h+2},\,\ldots , \,X_{h:m_h}\big )\), is the required GPT-IIC sample of size \(m_h-r_h\) from the log-normal distribution with parameters \((\mu _h,\sigma _h^2)\).
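Steps 1–5 can be sketched directly. This is an illustrative implementation, not the authors' code; it assumes Python 3.8+ for `statistics.NormalDist` and that `R` holds the removal numbers \(R_{r+1},\ldots ,R_m\), so `len(R) == m - r`.

```python
import math
import random
from statistics import NormalDist

def gen_gptiic_sample(n, m, r, R, mu, sigma, rng):
    """Generate one GPT-IIC sample of size m - r from a log-normal
    distribution with parameters (mu, sigma^2), following Steps 1-5
    of Balakrishnan and Aggarwala (2000)."""
    V = {m: rng.betavariate(n - r, r + 1)}          # Step 1
    for l in range(1, m - r):                       # Steps 2 and 3
        a = l + sum(R[-l:])                         # a_{r+l}
        V[r + l] = rng.random() ** (1.0 / a)        # V_{r+l} = W^{1/a}
    nd = NormalDist()
    xs = []
    for l in range(1, m - r + 1):                   # Steps 4 and 5
        prod = 1.0
        for j in range(m - l + 1, m + 1):           # V_{m-l+1} ... V_m
            prod *= V[j]
        u = 1.0 - prod                              # U_{r+l}
        xs.append(math.exp(mu + sigma * nd.inv_cdf(u)))
    return xs                                       # X_{r+1}, ..., X_m
```

Since the \(U_{r+l}\) increase in l and the quantile function is monotone, the returned sample is already in ascending order.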

We obtain the BEs based on 4000 MCMC samples after removing the first 1000 values as burn-in. We simulate the whole process 2000 times for each scheme and obtain the MLEs and BEs of the parameters according to the methods described in Sects. 3.1 and 3.2. Finally, the means, mean square errors (MSEs), average confidence lengths (ACLs) of the 95% confidence and HPD credible intervals, and the coverage percentages (CPs) of the parameters are listed in Table 1. The MLEs and BEs of the parameters are very close to the true values, so the performance of both estimation methods is satisfactory. However, the BEs have an advantage: they are not affected by initial values, and their MSEs are generally smaller than those of the MLEs. For interval estimation, the CPs of the confidence and credible intervals are close to 95%, and the HPD credible intervals outperform the CIs with respect to ACLs and CPs. As (n, m) increases, the MSEs of both the MLEs and the BEs of the model parameters decrease.

Table 1 MLEs, BEs, ACLs and CPs for parameters

5 Real example

Table 2 shows the life data of steel specimens in 6 randomly assigned batches of 20 observations, where each batch was subjected to a different stress amplitude (Kimber 1990; Lawless 2003). First, the K–S hypothesis test is used to check whether the log-normal distribution fits the data sets. The K–S distances and p-values (in parentheses) under the six levels are 0.0987 (0.9791), 0.0692 (0.9999), 0.0994 (0.9775), 0.1330 (0.8264), 0.1270 (0.8642), and 0.0920 (0.9898), respectively, which shows that the log-normal distribution is an appropriate choice for these data. Secondly, we assess the homogeneity of variance using the Bartlett test, whose statistic is given by

$$\begin{aligned} \chi ^{2} = \frac{\sum \limits _{i=1}^{6}(n_{i}-1)\ln {s^2}-\sum \limits _{i=1}^{6}(n_{i}-1)\ln {s^2_{i}}}{C}, \end{aligned}$$
(48)

where \(s^2=\frac{\sum _{i=1}^{6}(n_{i}-1)s^2_{i}}{\sum _{i=1}^{6}(n_{i}-1)}\), \(s^2_{i} =\frac{1}{n_{i}-1}\sum _{j=1}^{n_{i}}(x_{ij}-{\bar{X}}_{i})^2\), and \(C=1+\frac{1}{3(k-1)}\Big [\sum _{i=1}^{6}\frac{1}{n_{i}-1}-\frac{1}{\sum _{i=1}^{6}(n_{i}-1)}\Big ]\) with \(k=6\) groups.
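The Bartlett statistic is straightforward to compute from the raw batches. The sketch below implements the standard form with pooled variance \(s^2\) and correction factor C (the function name is illustrative):

```python
import math

def bartlett_statistic(groups):
    """Bartlett's chi-square statistic for homogeneity of variance
    across k groups of possibly unequal sizes."""
    k = len(groups)
    ni = [len(g) for g in groups]
    si2 = []                                        # per-group sample variances
    for g in groups:
        mean = sum(g) / len(g)
        si2.append(sum((x - mean) ** 2 for x in g) / (len(g) - 1))
    dof = sum(n - 1 for n in ni)                    # total degrees of freedom
    s2 = sum((n - 1) * v for n, v in zip(ni, si2)) / dof   # pooled variance
    num = dof * math.log(s2) - sum((n - 1) * math.log(v)
                                   for n, v in zip(ni, si2))
    C = 1.0 + (sum(1.0 / (n - 1) for n in ni) - 1.0 / dof) / (3.0 * (k - 1))
    return num / C
```

Under homogeneous variances the statistic is approximately \(\chi ^2\) with \(k-1\) degrees of freedom; identical group variances give a statistic of exactly zero.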

Table 2 The lifetimes of steel specimens tested at 6 different stress levels

By calculation, the Bartlett test statistic for these data is \(\chi ^{2} = 31.1890>\chi ^2_{\frac{\alpha }{2}}(5)=12.833\) with \(\alpha =0.05\). This indicates that the variances under the six stress levels are unequal, so the method discussed in this paper is necessary in practical applications.

We assume that the normal stress level is \(s_{0}=32\), a non-informative prior for \(\sigma\) (that is, the Gamma hyper-parameters in Eq. (40) are \(a=0, b=0\)), and uniform priors for \(\mu ,\theta _1,\theta _2\). We then generate two groups of GPT-IIC samples from the original data in Table 2 under two different censoring schemes:

  • Scheme [1]: \(n=20,m=17,r=0,R_{i:1}=1,R_{i:2}=\cdots =R_{i:{m-1}}=0,R_{i:m}=2\).

  • Scheme [2]: \(n=20,m=15,r=2,R_{i:3}=1,R_{i:4}=\cdots =R_{i:{m-1}}=0,R_{i:m}=2\).

    \(i=1,2,3,4,5,6\).

The samples are listed in Tables 3 and 4. By using the MCMC algorithm described in Sect. 3.2, under the GPT-IIC scheme, samples of \(\mu , \sigma , \theta _{1}, \theta _{2}, s(x),\) and h(x) (\(x=800\)) of size 4000 are generated, and the first 500 samples are removed. Figure 1 shows the sample paths of the remaining 3500 MCMC samples under Scheme [1]. As can be seen from Fig. 1, the MCMC algorithm converges.

Table 3 The GPT-IIC sample under Scheme [1]
Table 4 The GPT-IIC sample under Scheme [2]
Fig. 1
figure 1

Sample path maps of MCMC samples under Scheme [1]

By applying the MCMC samples, the BEs of the unknown parameters of the life distribution, the corresponding reliability s(x) and hazard rate h(x) (\(x=800\)), and the lower limits (LL), upper limits (UL) and interval lengths (IL) of the HPD credible intervals are calculated; these results, together with the MLEs, are shown in Table 5. Although the BEs and MLEs are similar, the ILs of the BEs are shorter than those of the MLEs. Moreover, the MLE is not only complex to compute but is often influenced by the initial value, and it is difficult to prove its uniqueness. The Bayesian method requires no uniqueness proof, and its convergence is not affected by the initial value.

Table 5 MLEs, 95% CIs, BEs and HPDs for log-normal model parameters and s(x) and h(x) at \(x=800\)

Figure 2 shows the reliability function and hazard rate function plots at different stress levels under Scheme [1]. It can be seen from Fig. 2 that the reliability function becomes steeper as the stress level increases, which indicates that the life of the steel specimens decreases with increasing stress. The hazard rate of the steel specimens also increases with the stress level, and it is increasing at each stress level, which indicates that the monotonicity of the hazard rate function does not change with the stress level.

Fig. 2
figure 2

The reliability (left) and hazard rate (right) functions under different stress levels of Scheme [1]

6 Conclusion

In the constant-stress accelerated life test, reliability has been discussed when the data are generally progressively censored and follow the log-normal distribution. Both parameters of the log-normal model are assumed to be affected by stress, a situation often encountered in practice. Point and interval estimates of the unknown parameters of the life distribution, and of the corresponding reliability and hazard rate, are obtained using the Bayesian and maximum likelihood methods. A hybrid Markov chain Monte Carlo algorithm that combines ARS and M–H steps within Gibbs sampling was implemented to obtain the Bayes estimates. The simulation results demonstrate that the maximum likelihood and Bayes estimators perform well, showing that this paper presents an alternative and effective method for reliability test analysis. The analysis of a real data set shows that the proposed method is applicable in practice.