1 Introduction

The well-known two-parameter Birnbaum–Saunders (BS) distribution was proposed by Birnbaum and Saunders (1969) to model fatigue failure caused by cyclic loading. The BS distribution is related to the normal distribution by means of the stochastic representation \(T=(\beta /4)\left[ \alpha {Z} + \sqrt{(\alpha {Z})^2+4}\right] ^{2}\), where \(\alpha >0\) and \(\beta >0\) are shape and scale parameters, respectively, \(Z \sim \text{ N }(0, 1)\), and T is BS distributed, with notation \(T\sim {\mathrm{BS}}(\alpha , \beta )\). The probability density function (PDF) of T is given by

$$\begin{aligned} f_{\mathrm{BS}}(t;\alpha ,\beta )= & {} \frac{1}{2\sqrt{2\pi }\alpha \beta } \exp \left\{ -\frac{1}{2\alpha ^{2}}\left( \frac{t}{\beta }+\frac{\beta }{t}-2 \right) \right\} \left\{ \left( \frac{\beta }{t}\right) ^{\frac{1}{2}} +\left( \frac{\beta }{t}\right) ^{\frac{3}{2}} \right\} , \nonumber \\&t>0. \end{aligned}$$
(1)
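As a quick illustration of the stochastic representation above (ours, not from the paper), the following Python sketch draws BS samples and checks the implied mean \(\beta (1+\alpha ^2/2)\) for the normal kernel.

```python
import math
import random

def rbs(n, alpha, beta, seed=42):
    """Draw n samples from BS(alpha, beta) via the stochastic representation
    T = (beta/4) * (alpha*Z + sqrt(alpha^2 * Z^2 + 4))^2, with Z ~ N(0, 1)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        out.append((beta / 4.0) * (alpha * z + math.sqrt(alpha**2 * z**2 + 4.0)) ** 2)
    return out

# For the normal kernel, E[T] = beta * (1 + alpha**2 / 2).
sample = rbs(200_000, alpha=0.5, beta=2.0)
mean_hat = sum(sample) / len(sample)
```

The Monte Carlo mean should be close to \(2.0\,(1+0.5^2/2)=2.25\) here.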

The generalized BS (GBS) distribution was proposed by Díaz-García and Leiva (2005) in order to provide more flexible models than the BS one. The GBS distribution is obtained from \(Z=\big (\sqrt{T/\beta } - \sqrt{\beta /T}\big )/\alpha \sim \text{ ES }(g)\), where \(\text{ ES }(g)\) denotes an elliptically symmetric distribution with location parameter \(\mu = 0\), scale parameter \(\sigma = 1\), and density generator g. Then, \(T=(\beta /4)(\alpha Z + \sqrt{\alpha ^{2}Z^{2}+4})^{2}\sim \text{ GBS }(\alpha ,\beta ;g)\), and its PDF is given by

$$\begin{aligned} f_{\mathrm{GBS}}(t;\alpha ,\beta ;g)= c\,g\left( \frac{1}{\alpha ^2}\left( \frac{t}{\beta }+ \frac{\beta }{t}-2\right) \right) \frac{1}{2\alpha \beta }\left\{ \left( \frac{\beta }{t} \right) ^{\frac{1}{2}}+\left( \frac{\beta }{t} \right) ^{\frac{3}{2}}\right\} , \quad t>0, \end{aligned}$$

where c is a normalizing constant of the associated symmetric PDF, and \(\alpha \) and \(\beta \) are as in (1). The mean and variance of T are given by

$$\begin{aligned} {\mathrm{E}}[T]=\frac{\beta }{2}\left( 2+ u_{1}\alpha ^2\right) ,\quad \text{ Var }[T]=\frac{\beta ^2 \alpha ^{2}}{4}\left[ 4 u_{1}+ \left( 2u_{2}-u_{1}^{2}\right) \alpha ^{2}\right] , \end{aligned}$$
(2)

where \(u_{r} = u_{r}(g)={\mathrm{E}}[U^r]\) (see Table 1), with \( U\sim \text {G}\chi ^2(1; g)\), namely, U follows a generalized chi-squared (\(\text {G}\chi ^2(\cdot )\)) distribution with one degree of freedom and density generator g; see Fang et al. (1990).
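For the normal kernel one has \(u_{1}=1\) and \(u_{2}=3\) (the first two moments of a \(\chi ^2_{1}\) variable), and the mean and variance formulas in (2) can be checked by simulation. The Python sketch below is an illustration under that kernel, not part of the paper.

```python
import math
import random

# Normal kernel: U ~ chi-square(1), so u1 = E[U] = 1 and u2 = E[U^2] = 3.
alpha, beta, u1, u2 = 0.5, 2.0, 1.0, 3.0
mean_th = (beta / 2.0) * (2.0 + u1 * alpha**2)
var_th = (beta**2 * alpha**2 / 4.0) * (4.0 * u1 + (2.0 * u2 - u1**2) * alpha**2)

# Monte Carlo check via the stochastic representation of T.
rng = random.Random(7)
n = 400_000
ts = [(beta / 4.0) * (alpha * z + math.sqrt(alpha**2 * z**2 + 4.0)) ** 2
      for z in (rng.gauss(0.0, 1.0) for _ in range(n))]
mean_mc = sum(ts) / n
var_mc = sum((t - mean_mc) ** 2 for t in ts) / (n - 1)
```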

Table 1 Moments [\(u_{r}(g)\)] for the indicated distributions

Recently, Kundu et al. (2013) introduced a generalized multivariate BS (GMBS) distribution by using the multivariate elliptically symmetric distribution, and derived the maximum likelihood estimators (MLEs) of its parameters. Two particular cases were analyzed, the multivariate normal and multivariate Student-t distributions. A special case of the GMBS model is the generalized bivariate BS (GBBS) distribution, which in turn has the bivariate BS (BBS) distribution, proposed by Kundu et al. (2010), and the bivariate BS-Student-t (BBS-t) distribution as particular models. Amongst other things, Kundu et al. (2010, 2013) discussed different properties of these distributions and maximum likelihood (ML) estimation. Balakrishnan and Zhu (2015) studied the fitting of a regression model based on the BBS distribution introduced by Kundu et al. (2010); the authors derived the MLEs of the model parameters and then developed inferential procedures.

In this context, the main purpose of this paper is to introduce two moment-type estimation methods for the parameters of the GBBS distribution. First, we derive modified moment estimators (MMEs), which rely basically on the reciprocal property of the GBBS distribution; see Ng et al. (2003). We then derive new modified moment estimators (NMMEs), which are based on some key properties of the GBBS distribution; see Balakrishnan and Zhu (2014). These two new methods have the advantages of being easy to compute and of possessing explicit expressions as functions of the sample observations. In addition, in contrast to the MLEs, the MMEs and NMMEs always exist uniquely. We derive the asymptotic distributions of the MMEs and NMMEs, which are used to compute the probability coverages of confidence intervals.

The rest of the paper proceeds as follows. In Sect. 2, we describe briefly the GBBS distribution and some of its properties. In Sect. 3, we describe the MLEs and the corresponding inferential results. In Sect. 4, we present the proposed estimators and derive their asymptotic distributions. A comparison of the estimators via a Monte Carlo (MC) simulation study is shown in Sect. 5. In Sect. 6, we illustrate the proposed methodology by using two real data sets. Finally, in Sect. 7, we provide some concluding remarks and also point out some problems worthy of further study.

2 Generalized bivariate Birnbaum–Saunders distribution

Let \({\varvec{X}}^{\top }=(X_{1},X_{2})\) be a bivariate random vector following a bivariate elliptically symmetric (BES) distribution with location vector \({\varvec{\mu }}={\varvec{0}}\), correlation coefficient \(\rho \), and a density generator \(g_{c}(\cdot )\); see Fang et al. (1990). The PDF of \({\varvec{X}}\) is given by

$$\begin{aligned} f_{\mathrm{BES}}\left( {\varvec{x}};\rho ,g_{c}\right) =\frac{\omega _{c}}{\sqrt{1-\rho ^2}} g_{c}\left( \frac{1}{\left( 1-\rho ^2\right) }\left( x_{1}^2+x_{2}^2 -2\rho {x}_{1}x_{2}\right) \right) , \quad {\varvec{x}}\in \mathbb {R}^2, \end{aligned}$$
(3)

where \({\omega }_{c}>0\) and \(\int _{ \mathbb {R}^2}f_{\mathrm{BES}}({\varvec{x}};\rho ,g_{c})\,\mathrm{d}{\varvec{x}}=1\). In this case, the notation \({\varvec{X}}\sim {\mathrm{BES}}(\rho ,g_{c})\) is used. Alternative definitions of elliptical distributions can be found in Cambanis et al. (1981) and Abdous et al. (2005). Table 2 presents some examples of elliptically symmetric distributions.

Table 2 Constants (\({\omega }_{c}\)) and density generator (\(g_{c}(\cdot )\)) for the indicated distributions

Now, let \({\varvec{\alpha }}=(\alpha _{1},\alpha _{2})^{\top }\) and \({\varvec{\beta }}=(\beta _{1},\beta _{2})^{\top }\), with \(\alpha _{k}>0\) and \(\beta _{k}>0\) for \(k=1,2\). If the bivariate vector \({\varvec{T}}=(T_{1},T_{2})^{\top }\) with correlation coefficient \(\rho \) follows a GBBS distribution, denoted by \({\varvec{T}}\sim {\mathrm{GBBS}}({\varvec{\alpha }},{\varvec{\beta }},\rho )\), then its PDF is

$$\begin{aligned} f_{\mathrm{GBBS}}({\varvec{t}};{\varvec{\alpha }},{\varvec{\beta }},\rho )= & {} f_{\mathrm{BES}}\left( \frac{1}{\alpha _{1}} \left( \sqrt{\frac{t_{1}}{\beta _{1}}} -\sqrt{\frac{\beta _{1}}{t_{1}}}\right) ,\frac{1}{\alpha _{2}} \left( \sqrt{\frac{t_{2}}{\beta _{2}}} -\sqrt{\frac{\beta _{2}}{t_{2}}}\right) ;\rho ,g_{c} \right) \nonumber \\&\times \frac{1}{2\alpha _{1}\beta _{1}}\left\{ \left( \frac{\beta _{1}}{t_{1}} \right) ^{\frac{1}{2}}+\left( \frac{\beta _{1}}{t_{1}} \right) ^{\frac{3}{2}}\right\} \frac{1}{2\alpha _{2}\beta _{2}}\left\{ \left( \frac{\beta _{2}}{t_{2}} \right) ^{\frac{1}{2}}+\left( \frac{\beta _{2}}{t_{2}} \right) ^{\frac{3}{2}}\right\} ,\nonumber \\&\quad {\varvec{t}}>{\varvec{0}}, \end{aligned}$$
(4)

where \(f_{\mathrm{BES}}(\cdot ;\rho ,g_{c})\) is the PDF given in (3). The corresponding joint cumulative distribution function (CDF) of \({\varvec{T}}=(T_{1},T_{2})^{\top }\) is given by

$$\begin{aligned} F_{\mathrm{GBBS}}({\varvec{t}};{\varvec{\alpha }},{\varvec{\beta }},\rho )= & {} F_{\mathrm{BES}}\left( \frac{1}{\alpha _{1}} \left( \sqrt{\frac{t_{1}}{\beta _{1}}} -\sqrt{\frac{\beta _{1}}{t_{1}}}\right) ,\frac{1}{\alpha _{2}} \left( \sqrt{\frac{t_{2}}{\beta _{2}}} -\sqrt{\frac{\beta _{2}}{t_{2}}}\right) ;\rho ,g_{c} \right) , \nonumber \\&\quad {\varvec{t}}>{\varvec{0}}, \end{aligned}$$
(5)

where \(F_{\mathrm{BES}}(\cdot ;\rho ,g_{c})\) is the CDF associated with (3).
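For the normal kernel, a random vector with CDF (5) can be generated by transforming a correlated standard normal pair componentwise, as in the univariate case. The sketch below is our illustration, not an algorithm from the paper.

```python
import math
import random

def rbbs(n, a1, b1, a2, b2, rho, seed=11):
    """Draw n pairs from the BBS(alpha, beta, rho) model (normal kernel):
    build (Z1, Z2) standard normal with correlation rho, then transform
    each component with T_k = (b_k/4)*(a_k*Z_k + sqrt(a_k^2*Z_k^2 + 4))^2."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        e = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho**2) * e  # corr(Z1, Z2) = rho
        t1 = (b1 / 4.0) * (a1 * z1 + math.sqrt(a1**2 * z1**2 + 4.0)) ** 2
        t2 = (b2 / 4.0) * (a2 * z2 + math.sqrt(a2**2 * z2**2 + 4.0)) ** 2
        pairs.append((t1, t2))
    return pairs
```

Each margin then has mean \(\beta _{k}(1+\alpha _{k}^{2}/2)\), which gives a simple sanity check on the sampler.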

Theorem 1

If \({\varvec{T}}=(T_{1},T_{2})^{\top }\sim {\mathrm{GBBS}}({\varvec{\alpha }},{\varvec{\beta }},\rho )\) with PDF as given in (4), then

  a) \({\varvec{T}}^{-1}=(T_{1}^{-1},T_{2}^{-1})^{\top }\sim {\mathrm{GBBS}}({\varvec{\alpha }},{\varvec{\beta }}^{-1},\rho )\), with \({\varvec{\beta }}^{-1}=(1/\beta _{1},1/\beta _{2})^{\top }\);

  b) \({\varvec{T}}_{[1]}=(T_{1}^{-1},T_{2})^{\top }\sim {\mathrm{GBBS}}({\varvec{\alpha }},{\varvec{\beta }}_{[1]},-\rho )\), with \({\varvec{\beta }}_{[1]}=(1/\beta _{1},\beta _{2})^{\top }\);

  c) \({\varvec{T}}_{[2]}=(T_{1},T_{2}^{-1})^{\top }\sim {\mathrm{GBBS}}({\varvec{\alpha }},{\varvec{\beta }}_{[2]},-\rho )\), with \({\varvec{\beta }}_{[2]}=(\beta _{1},1/\beta _{2})^{\top }\).

Proof

By using the PDF in (4) and making suitable transformations. \(\square \)

Particular cases of the GBBS distributions are the BBS distribution proposed by Kundu et al. (2010), and the BBS-t distribution. These models are obtained by assuming the bivariate normal and bivariate Student-t kernels in Table 2, respectively.

Bivariate Birnbaum–Saunders distribution If the random vector \({\varvec{T}}=(T_{1},T_{2})^{\top }\) is BBS distributed with parameter vectors \({\varvec{\alpha }}=(\alpha _{1},\alpha _{2})^{\top }\) and \({\varvec{\beta }}=(\beta _{1},\beta _{2})^{\top }\), and correlation coefficient \(\rho \), denoted by \({\varvec{T}}\sim {\mathrm{BBS}}({\varvec{\alpha }},{\varvec{\beta }},\rho )\), then its joint PDF is given by

$$\begin{aligned} f_{\mathrm{BBS}}({\varvec{t}};{\varvec{\alpha }},{\varvec{\beta }},\rho )= & {} \phi _{2}\left( \frac{1}{\alpha _{1}} \left( \sqrt{\frac{t_{1}}{\beta _{1}}} -\sqrt{\frac{\beta _{1}}{t_{1}}}\right) ,\frac{1}{\alpha _{2}} \left( \sqrt{\frac{t_{2}}{\beta _{2}}} -\sqrt{\frac{\beta _{2}}{t_{2}}}\right) ;\rho \right) \nonumber \\&\times \frac{1}{2\alpha _{1}\beta _{1}}\left\{ \left( \frac{\beta _{1}}{t_{1}} \right) ^{\frac{1}{2}}+\left( \frac{\beta _{1}}{t_{1}} \right) ^{\frac{3}{2}}\right\} \frac{1}{2\alpha _{2}\beta _{2}}\left\{ \left( \frac{\beta _{2}}{t_{2}} \right) ^{\frac{1}{2}}+\left( \frac{\beta _{2}}{t_{2}} \right) ^{\frac{3}{2}}\right\} , \nonumber \\&\quad {\varvec{t}}>{\varvec{0}}, \end{aligned}$$
(6)

where \(\alpha _{k}>0\) and \(\beta _{k}>0\) for \(k=1,2\), \(-1<\rho <1\), and \(\phi _{2}(\cdot ,\cdot ;\rho )\) is the standard bivariate normal joint PDF given by

$$\begin{aligned} \phi _{2}(u,v;\rho )=\frac{1}{2\pi \sqrt{1-\rho ^2}} \exp \left\{ -\frac{1}{2\left( 1-\rho ^2\right) }\left( u^2+v^2-2\rho {u}{v}\right) \right\} . \end{aligned}$$

Bivariate Birnbaum–Saunders- t distribution The random vector \({\varvec{T}}=(T_{1},T_{2})^{\top }\) is said to have a BBS-t distribution with parameter vectors \({\varvec{\alpha }}=(\alpha _{1},\alpha _{2})^{\top }\) and \({\varvec{\beta }}=(\beta _{1},\beta _{2})^{\top }\), \(\nu \) degrees of freedom, and correlation coefficient \(\rho \), denoted by \({\varvec{T}}\sim {\mathrm{BBS-}}t({\varvec{\alpha }},{\varvec{\beta }},\rho ,\nu )\), if its joint PDF is given by

$$\begin{aligned} f_{\mathrm{BBS-t}}({\varvec{t}};{\varvec{\alpha }},{\varvec{\beta }},\rho ,\nu )= & {} h_{2}\left( \frac{1}{\alpha _{1}} \left( \sqrt{\frac{t_{1}}{\beta _{1}}} -\sqrt{\frac{\beta _{1}}{t_{1}}}\right) ,\frac{1}{\alpha _{2}} \left( \sqrt{\frac{t_{2}}{\beta _{2}}} -\sqrt{\frac{\beta _{2}}{t_{2}}}\right) ;\rho ,\nu \right) \nonumber \\&\times \frac{1}{2\alpha _{1}\beta _{1}}\left\{ \left( \frac{\beta _{1}}{t_{1}} \right) ^{\frac{1}{2}}+\left( \frac{\beta _{1}}{t_{1}} \right) ^{\frac{3}{2}}\right\} \frac{1}{2\alpha _{2}\beta _{2}}\left\{ \left( \frac{\beta _{2}}{t_{2}} \right) ^{\frac{1}{2}}+\left( \frac{\beta _{2}}{t_{2}} \right) ^{\frac{3}{2}}\right\} , \nonumber \\&\quad {\varvec{t}}>{\varvec{0}}, \end{aligned}$$
(7)

where \(\alpha _{k}>0\) and \(\beta _{k}>0\) for \(k=1,2\), \(-1<\rho <1\), \(\nu >0\), and \(h_{2}(\cdot ,\cdot ;\rho ,\nu )\) is the bivariate Student-t joint PDF given by

$$\begin{aligned} h_{2}(u,v;\rho ,\nu )=\frac{\varGamma \left( \frac{\nu +2}{2}\right) }{\varGamma \left( \frac{\nu }{2}\right) \nu \pi \sqrt{1-\rho ^2}} \left( 1+\frac{1}{\nu \left( 1-\rho ^2\right) } \left( u^2+v^2-2\rho {u}v\right) \right) ^{-\frac{(\nu +2)}{2}}.\nonumber \\ \end{aligned}$$
(8)

3 Maximum likelihood estimators

The MLEs of the model parameters of the BBS distribution are discussed in Kundu et al. (2010), whereas Kundu et al. (2013) addressed ML estimation for the GMBS model, which has the GBBS distribution as a special case.

Bivariate Birnbaum–Saunders distribution Let \(\{(t_{1i},t_{2i}),i=1,\ldots ,n\}\) be a bivariate random sample from the \({\mathrm{BBS}}({\varvec{\alpha }},{\varvec{\beta }},\rho )\) distribution with PDF as given in Eq. (6). Then, the MLEs of \(\beta _{1}\) and \(\beta _{2}\), denoted by \(\widehat{\beta }_{1}\) and \(\widehat{\beta }_{2}\), can be obtained by maximizing the profile log-likelihood function

$$\begin{aligned} \ell _{p}({\varvec{\beta }})= & {} -n\ln (\widehat{\alpha }_{1}(\beta _{1})) -n\ln (\beta _{1})-n\ln \left( \widehat{\alpha }_{2}(\beta _{2})\right) -n\ln (\beta _{2})\nonumber \\&-\frac{n}{2} \ln \left( 1-\widehat{\rho }^{2}(\beta _{1},\beta _{2})\right) \nonumber \\&+\sum _{i=1}^{n}\ln \left\{ \left( \frac{\beta _{1}}{t_{1i}}\right) ^{\frac{1}{2}} +\left( \frac{\beta _{1}}{t_{1i}}\right) ^{\frac{3}{2}}\right\} +\sum _{i=1}^{n}\ln \left\{ \left( \frac{\beta _{2}}{t_{2i}}\right) ^{\frac{1}{2}} +\left( \frac{\beta _{2}}{t_{2i}}\right) ^{\frac{3}{2}}\right\} , \end{aligned}$$
(9)

where

$$\begin{aligned} \widehat{\alpha }_{k}(\beta _{k})= & {} \left( \frac{s_{k}}{\beta _{k}} +\frac{\beta _{k}}{r_{k}}-2 \right) ^{\frac{1}{2}}, \quad k=1,2, \end{aligned}$$
(10)
$$\begin{aligned} \widehat{\rho }(\beta _{1},\beta _{2})= & {} \frac{\sum _{i=1}^{n}\left( \sqrt{\frac{t_{1i}}{\beta _{1}}} -\sqrt{\frac{\beta _{1}}{t_{1i}}} \right) \left( \sqrt{\frac{t_{2i}}{\beta _{2}}} -\sqrt{\frac{\beta _{2}}{t_{2i}}} \right) }{\sqrt{\sum _{i=1}^{n}\left( \sqrt{\frac{t_{1i}}{\beta _{1}}} -\sqrt{\frac{\beta _{1}}{t_{1i}}} \right) ^{2}}\sqrt{\sum _{i=1}^{n}\left( \sqrt{\frac{t_{2i}}{\beta _{2}}} -\sqrt{\frac{\beta _{2}}{t_{2i}}} \right) ^{2}}}. \end{aligned}$$
(11)

In (10), \(s_{k}=\frac{1}{n}\sum _{i=1}^{n}t_{ki}\) and \(r_{k}=[\frac{1}{n}\sum _{i=1}^{n}t_{ki}^{-1}]^{-1}\) denote the sample arithmetic and harmonic means, respectively, for \(k=1,2\). In order to maximize the function in (9) with respect to \(\beta _{1}\) and \(\beta _{2}\), one may use the Newton–Raphson algorithm or some other optimization algorithm. Once \(\widehat{\beta }_{1}\) and \(\widehat{\beta }_{2}\) are obtained, the MLEs of \({\alpha }_{1}\), \({\alpha }_{2}\) and \({\rho }\) are computed from (10) and (11). Kundu et al. (2010) showed that the asymptotic joint distribution of \(\widehat{\varvec{\theta }}\), where \({\varvec{\theta }}=(\alpha _{1},\beta _{1},\alpha _{2},\beta _{2},\rho )\), is

$$\begin{aligned} \sqrt{n}(\widehat{\varvec{\theta }}-{\varvec{\theta }})\sim \text {N}_{5}\left( {\varvec{0}},{\varvec{I}}^{-1}\right) , \end{aligned}$$

where \(\text {N}_{5}\left( {\varvec{0}},{\varvec{I}}^{-1}\right) \) is a 5-variate normal distribution with mean \({\varvec{0}}\) and covariance matrix \({\varvec{I}}^{-1}\); see Kundu et al. (2010) for the elements of the Fisher information matrix \({\varvec{I}}\).
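A direct, if crude, way to maximize (9) is a grid search over \((\beta _{1},\beta _{2})\), with \(\widehat{\alpha }_{k}\) and \(\widehat{\rho }\) profiled out via (10) and (11). The Python sketch below is our illustration of that procedure; in practice a Newton-type optimizer would replace the grid.

```python
import math

def profile_loglik(b1, b2, t1s, t2s):
    """Profile log-likelihood (9) for the BBS model: alpha_1, alpha_2 and
    rho are profiled out via (10) and (11); s_k and r_k are the sample
    arithmetic and harmonic means of each margin."""
    n = len(t1s)
    s1 = sum(t1s) / n; r1 = n / sum(1.0 / t for t in t1s)
    s2 = sum(t2s) / n; r2 = n / sum(1.0 / t for t in t2s)
    a1 = math.sqrt(s1 / b1 + b1 / r1 - 2.0)   # alpha_1-hat(beta_1), Eq. (10)
    a2 = math.sqrt(s2 / b2 + b2 / r2 - 2.0)   # alpha_2-hat(beta_2), Eq. (10)
    u = [math.sqrt(t / b1) - math.sqrt(b1 / t) for t in t1s]
    v = [math.sqrt(t / b2) - math.sqrt(b2 / t) for t in t2s]
    suv = sum(x * y for x, y in zip(u, v))    # rho-hat, Eq. (11)
    rho = suv / math.sqrt(sum(x * x for x in u) * sum(y * y for y in v))
    ll = (-n * math.log(a1) - n * math.log(b1)
          - n * math.log(a2) - n * math.log(b2)
          - 0.5 * n * math.log(1.0 - rho**2))
    ll += sum(math.log((b1 / t) ** 0.5 + (b1 / t) ** 1.5) for t in t1s)
    ll += sum(math.log((b2 / t) ** 0.5 + (b2 / t) ** 1.5) for t in t2s)
    return ll

def fit_betas(t1s, t2s, grid):
    """Coarse grid search for the maximizing (beta_1, beta_2)."""
    best = max(((profile_loglik(b1, b2, t1s, t2s), b1, b2)
                for b1 in grid for b2 in grid))
    return best[1], best[2]
```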

Bivariate Birnbaum–Saunders- t distribution Now, let \(\{(t_{1i},t_{2i}),i=1,\ldots ,n\}\) be a bivariate random sample from the \({\mathrm{BBS-t}}({\varvec{\alpha }},{\varvec{\beta }},\rho ,\nu )\) distribution with PDF as given in Eq. (7). Let also

$$\begin{aligned}&\left[ \left( \sqrt{\frac{T_{1}}{\beta _{1}}}-\sqrt{\frac{\beta _{1}}{T_{1}}} \right) ,\left( \sqrt{\frac{T_{2}}{\beta _{2}}}-\sqrt{\frac{\beta _{2}}{T_{2}}} \right) \right] ^{\top } \sim {t}_{2}({\varvec{D}}{\varvec{\varGamma }}{\varvec{D}}^{\top },\nu ), \\&\quad {\varvec{M}}={\varvec{D}}{\varvec{\varGamma }}{\varvec{D}}^{\top }=\left( \begin{array}{cc} \alpha _{1}^2 &{} \alpha _{1}\alpha _{2}\rho \\ &{}\\ \alpha _{1}\alpha _{2}\rho &{} \alpha _{2}^2 \end{array}\right) , \end{aligned}$$

where \({t}_{2}({\varvec{D}}{\varvec{\varGamma }}{\varvec{D}}^{\top },\nu )\) is a bivariate Student-t distribution with PDF as in (8) and \({\varvec{D}}=\text {diag}\{\alpha _{1},\alpha _{2}\}\). Moreover, the MLE of \({\varvec{M}}\) satisfies the fixed-point equation \({\varvec{M}}=\frac{1}{n}\sum _{i=1}^{n}\gamma _{i}{\varvec{u}}_{i}{\varvec{u}}_{i}^{\top }\), where \(\gamma _{i}=(\nu +2)/(\nu +{\varvec{u}}_{i}^{\top }{\varvec{M}}^{-1}{\varvec{u}}_{i})\) with

$$\begin{aligned} {\varvec{u}}_{i}^{\top }=\left[ \left( \sqrt{\frac{t_{1i}}{\beta _{1}}}-\sqrt{\frac{\beta _{1}}{t_{1i}}} \right) ,\left( \sqrt{\frac{t_{2i}}{\beta _{2}}}-\sqrt{\frac{\beta _{2}}{t_{2i}}} \right) \right] . \end{aligned}$$

The MLEs of \(\beta _{1}\) and \(\beta _{2}\), denoted by \(\widehat{\beta }_{1}\) and \(\widehat{\beta }_{2}\), can be obtained by maximizing the profile log-likelihood function

$$\begin{aligned} \ell _{p}({\varvec{\beta }})= & {} -\frac{n}{2} \ln \left( \left| \widehat{\varvec{\varGamma }}(\beta _{1},\beta _{2})\right| \right) -n\ln \left( \widehat{\alpha }_{1}(\beta _{1})\right) -n\ln (\beta _{1}) -n\ln \left( \widehat{\alpha }_{2}(\beta _{2})\right) -n\ln (\beta _{2})\\&-\frac{\nu +2}{2}\sum _{i=1}^{n} \ln \left( 1+\frac{{\varvec{v}}_{i}^{\top }\widehat{\varvec{\varGamma }}^{-1} (\beta _{1},\beta _{2}){\varvec{v}}_{i}}{\nu } \right) +\sum _{i=1}^{n}\ln \left\{ \left( \frac{\beta _{1}}{t_{1i}}\right) ^{\frac{1}{2}} +\left( \frac{\beta _{1}}{t_{1i}}\right) ^{\frac{3}{2}}\right\} \\&+\sum _{i=1}^{n}\ln \left\{ \left( \frac{\beta _{2}}{t_{2i}}\right) ^{\frac{1}{2}}+\left( \frac{\beta _{2}}{t_{2i}}\right) ^{\frac{3}{2}}\right\} , \end{aligned}$$

where

$$\begin{aligned} {\varvec{v}}_{i}^{\top }=\left[ \frac{1}{\alpha _{1}}\left( \sqrt{\frac{t_{1i}}{\beta _{1}}}-\sqrt{\frac{\beta _{1}}{t_{1i}}} \right) ,\frac{1}{\alpha _{2}}\left( \sqrt{\frac{t_{2i}}{\beta _{2}}}-\sqrt{\frac{\beta _{2}}{t_{2i}}} \right) \right] , \end{aligned}$$

and

$$\begin{aligned} \widehat{\alpha }_{k}(\beta _{k})=(m_{kk})^{\frac{1}{2}}, k=1,2, \quad \widehat{\varvec{\varGamma }}(\beta _{1},\beta _{2})=\widehat{\varvec{Q}}(\beta _{1},\beta _{2})\widehat{\varvec{M}}(\beta _{1},\beta _{2})\widehat{\varvec{Q}}^{\top }(\beta _{1},\beta _{2}),\nonumber \\ \end{aligned}$$
(12)

with \(\widehat{\varvec{M}}(\beta _{1},\beta _{2})=((m_{kj}(\beta _{1},\beta _{2})))\) and \(\widehat{\varvec{Q}}(\beta _{1},\beta _{2})=\text {diag}\{1/\widehat{\alpha }_{1}(\beta _{1}),1/\widehat{\alpha }_{2}(\beta _{2})\}\). The MLE of \({\varvec{M}}\) can be obtained by using an algorithm to carry out iterations successively until a certain convergence criterion is satisfied, for instance, when \(||\widehat{\varvec{M}}^{(k+1)}(\beta _{1},\beta _{2})-\widehat{\varvec{M}}^{(k)}(\beta _{1},\beta _{2}) ||\) is sufficiently small; see Nadarajah and Kotz (2008) and Kundu et al. (2013).
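The iteration for \(\widehat{\varvec{M}}\) can be sketched as the fixed-point update below. This is our illustration for the \(2\times 2\) case, assuming \(\nu \) and the vectors \({\varvec{u}}_{i}\) are given.

```python
def update_M(M, us, nu):
    """One fixed-point step M <- (1/n) * sum_i gamma_i * u_i u_i^T, where
    gamma_i = (nu + 2)/(nu + u_i^T M^{-1} u_i); M is a symmetric 2x2 list."""
    (a, b), (_, d) = M
    det = a * d - b * b
    n = len(us)
    s11 = s12 = s22 = 0.0
    for (u1, u2) in us:
        # Quadratic form u^T M^{-1} u for a 2x2 matrix, written out explicitly.
        q = (d * u1 * u1 - 2.0 * b * u1 * u2 + a * u2 * u2) / det
        g = (nu + 2.0) / (nu + q)
        s11 += g * u1 * u1
        s12 += g * u1 * u2
        s22 += g * u2 * u2
    return [[s11 / n, s12 / n], [s12 / n, s22 / n]]

def fit_M(us, nu, tol=1e-8, max_iter=500):
    """Iterate the update from the identity matrix until the change in M
    falls below tol (the convergence criterion mentioned in the text)."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(max_iter):
        M_new = update_M(M, us, nu)
        diff = max(abs(M_new[i][j] - M[i][j]) for i in range(2) for j in range(2))
        M = M_new
        if diff < tol:
            break
    return M
```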

An estimate of \(\nu \) can be obtained by using the profile likelihood, through the following two steps:

  i) For each \(\nu _{l}=l\), \(l=1,\ldots ,20\), compute the ML estimates of \(\alpha _{1}\), \(\alpha _{2}\), \(\beta _{1}\), \(\beta _{2}\) and \(\rho \) by using the above procedures, and compute the corresponding likelihood value;

  ii) The final estimate of \(\nu \) is the one that maximizes the likelihood, and the associated estimates of \(\alpha _{1}\), \(\alpha _{2}\), \(\beta _{1}\), \(\beta _{2}\) and \(\rho \) are the final ones.

4 Proposed estimators

In this section we propose two new simple estimators of the parameters of the GBBS distribution. Let \(\{(t_{1i},t_{2i}),i=1,\ldots ,n\}\) be a bivariate random sample from the \({\mathrm{GBBS}}({\varvec{\alpha }},{\varvec{\beta }},\rho )\) distribution with PDF as given in (4).

4.1 Modified moment estimators

Let the sample arithmetic and harmonic means be defined as

$$\begin{aligned} s_{k} = \frac{1}{n}\sum \limits _{i=1}^{n}t_{ki} \quad \text {and} \quad r_{k} = \left[ \frac{1}{n}\sum \limits _{i=1}^{n}t_{ki}^{-1}\right] ^{-1},\quad k=1,2, \end{aligned}$$

respectively. The MMEs are obtained by equating \({\mathrm{E}}[T_{1}]\), \({\mathrm{E}}[T_{1}^{-1}]\), \({\mathrm{E}}[T_{2}]\) and \({\mathrm{E}}[T^{-1}_{2}]\) to the corresponding sample estimates, that is,

$$\begin{aligned} {\mathrm{E}}\left[ T_{1}\right] =s_{1}, \quad {\mathrm{E}}\left[ T_{1}^{-1}\right] =r_{1}^{-1}, \quad {\mathrm{E}}\left[ T_{2}\right] =s_{2} \quad \text{ and } \quad {\mathrm{E}}\left[ T_{2}^{-1}\right] =r_{2}^{-1}. \end{aligned}$$
(13)

Thus, by using the expressions in (2), we have

$$\begin{aligned} s_{1}= & {} \frac{\beta _{1}}{2}\left( 2+ u_{11}\alpha _{1}^2\right) , \quad r_{1}^{-1}=\frac{1}{2\beta _{1}}\left( 2+ u_{11}\alpha _{1}^2\right) , \nonumber \\ s_{2}= & {} \frac{\beta _{2}}{2}\left( 2+ u_{21}\alpha _{2}^2\right) \quad \text{ and }\quad r_{2}^{-1}=\frac{1}{2\beta _{2}}\left( 2+ u_{21}\alpha _{2}^2\right) , \end{aligned}$$
(14)

where \(u_{kr} = u_{kr}(g)={\mathrm{E}}[U_{k}^r]\), with \( U_{k}\sim \text {G}\chi ^2(1; g)\); see Table 1. Solving (14) for \(\alpha _{1}\), \(\beta _{1}\), \(\alpha _{2}\) and \(\beta _{2}\), we obtain the MMEs of these parameters, denoted by \(\widetilde{\alpha }_{1}\), \(\widetilde{\beta }_{1}\), \(\widetilde{\alpha }_{2}\) and \(\widetilde{\beta }_{2}\), namely,

$$\begin{aligned} \widetilde{\alpha }_{1}= & {} \left\{ \frac{2}{u_{11}}\left[ \left( \frac{s_{1}}{r_{1}} \right) ^{\frac{1}{2}} -1\right] \right\} ^{\frac{1}{2}}, \quad \widetilde{\beta }_{1}=(s_{1}r_{1})^{\frac{1}{2}}, \\ \widetilde{\alpha }_{2}= & {} \left\{ \frac{2}{u_{21}}\left[ \left( \frac{s_{2}}{r_{2}} \right) ^{\frac{1}{2}} -1\right] \right\} ^{\frac{1}{2}}, \quad \text{ and } \quad \widetilde{\beta }_{2}=\left( s_{2}r_{2}\right) ^{\frac{1}{2}}. \end{aligned}$$
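In code, the MMEs for one margin take only a few lines, with the kernel entering solely through \(u_{k1}\) (a sketch of ours; \(u_{1}=1\) for the normal kernel, \(u_{1}=\nu /(\nu -2)\) for the Student-t):

```python
import math

def mme(ts, u1):
    """Modified moment estimates (alpha~, beta~) for one margin of the
    GBBS model, from the closed-form expressions above."""
    n = len(ts)
    s = sum(ts) / n                   # arithmetic mean s_k
    r = n / sum(1.0 / t for t in ts)  # harmonic mean r_k
    alpha = math.sqrt((2.0 / u1) * (math.sqrt(s / r) - 1.0))
    beta = math.sqrt(s * r)
    return alpha, beta
```

Since \(s\ge r\) always (arithmetic–harmonic mean inequality), the square roots are well defined for any positive sample.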

Theorem 2

The asymptotic distributions of \(\widetilde{\alpha }_{k}\) and \(\widetilde{\beta }_{k}\), for \(k=1,2\), are given by

$$\begin{aligned}&\sqrt{n}(\widetilde{\alpha }_{k}-{\alpha }_{k})\sim \text {N}\left( 0,{\alpha _{k}^2} \left[ \frac{u_{k2}-u_{k1}^2}{4u_{k1}^2}\right] \right) , \\&\quad \sqrt{n}(\widetilde{\beta }_{k}-{\beta }_{k})\sim \text {N}\left( 0, {\alpha _{k}^2\beta _{k}^2} \left[ \frac{u_{k1}+\frac{u_{k2}}{4}\alpha _{k}^2}{\left( 1+\frac{u_{k1}}{2}\alpha _{k}^2\right) ^2}\right] \right) . \end{aligned}$$

Proof

See “Appendix 1”. \(\square \)
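Theorem 2 yields plug-in asymptotic confidence intervals: for \(\alpha _{k}\) the approximate variance is \(\alpha _{k}^{2}(u_{k2}-u_{k1}^{2})/(4u_{k1}^{2}n)\), and similarly for \(\beta _{k}\). The Python sketch below is our illustration of such intervals, with the estimates plugged in for the unknown parameters.

```python
import math

def alpha_ci(alpha_hat, n, u1, u2, z=1.96):
    """Approximate 95% CI for alpha_k from the asymptotic variance
    alpha^2 * (u2 - u1^2) / (4 * u1^2 * n) in Theorem 2."""
    se = alpha_hat * math.sqrt((u2 - u1**2) / (4.0 * u1**2) / n)
    return alpha_hat - z * se, alpha_hat + z * se

def beta_ci(beta_hat, alpha_hat, n, u1, u2, z=1.96):
    """Approximate 95% CI for beta_k from the asymptotic variance
    alpha^2 * beta^2 * (u1 + (u2/4)*alpha^2) / (1 + (u1/2)*alpha^2)^2 / n."""
    var = (alpha_hat**2 * beta_hat**2
           * (u1 + (u2 / 4.0) * alpha_hat**2)
           / (1.0 + (u1 / 2.0) * alpha_hat**2) ** 2) / n
    return beta_hat - z * math.sqrt(var), beta_hat + z * math.sqrt(var)
```

For the normal kernel (\(u_{1}=1\), \(u_{2}=3\)), the variance for \(\alpha _{k}\) reduces to the familiar \(\alpha _{k}^{2}/(2n)\).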

Bivariate Birnbaum–Saunders distribution In this case, the MMEs of \(\alpha _{1}\), \(\beta _{1}\), \(\alpha _{2}\) and \(\beta _{2}\) are given by

$$\begin{aligned} \widetilde{\alpha }_{1}= & {} \left\{ 2\left[ \left( \frac{s_{1}}{r_{1}} \right) ^{\frac{1}{2}} -1\right] \right\} ^{\frac{1}{2}}, \quad \widetilde{\beta }_{1}=(s_{1}r_{1})^{\frac{1}{2}}, \\ \widetilde{\alpha }_{2}= & {} \left\{ 2\left[ \left( \frac{s_{2}}{r_{2}} \right) ^{\frac{1}{2}} -1\right] \right\} ^{\frac{1}{2}}, \quad \text{ and } \quad \widetilde{\beta }_{2}=(s_{2}r_{2})^{\frac{1}{2}}. \end{aligned}$$

Then, the MME of \(\rho \) is

$$\begin{aligned} \widetilde{\rho }= & {} \frac{\sum _{i=1}^{n} \left( \sqrt{\frac{t_{1i}}{\widetilde{\beta }_{1}}} -\sqrt{\frac{\widetilde{\beta }_{1}}{t_{1i}}} \right) \left( \sqrt{\frac{t_{2i}}{\widetilde{\beta }_{2}}}-\sqrt{\frac{\widetilde{\beta }_{2}}{t_{2i}}} \right) }{\sqrt{\sum _{i=1}^{n}\left( \sqrt{\frac{t_{1i}}{\widetilde{\beta }_{1}}}-\sqrt{\frac{\widetilde{\beta }_{1}}{t_{1i}}} \right) ^{2}}\sqrt{\sum _{i=1}^{n}\left( \sqrt{\frac{t_{2i}}{\widetilde{\beta }_{2}}} -\sqrt{\frac{\widetilde{\beta }_{2}}{t_{2i}}} \right) ^{2}}}. \end{aligned}$$

Bivariate Birnbaum–Saunders- t distribution For a given \(\nu \), the MMEs of \(\alpha _{1}\), \(\beta _{1}\), \(\alpha _{2}\) and \(\beta _{2}\) are given by

$$\begin{aligned} \widetilde{\alpha }_{1}= & {} \left\{ \frac{2}{u_{11}}\left[ \left( \frac{s_{1}}{r_{1}} \right) ^{\frac{1}{2}} -1\right] \right\} ^{\frac{1}{2}}, \quad \widetilde{\beta }_{1}=\left( s_{1}r_{1}\right) ^{\frac{1}{2}}, \\ \widetilde{\alpha }_{2}= & {} \left\{ \frac{2}{u_{21}}\left[ \left( \frac{s_{2}}{r_{2}} \right) ^{\frac{1}{2}} -1\right] \right\} ^{\frac{1}{2}}, \quad \text{ and } \quad \widetilde{\beta }_{2}=\left( s_{2}r_{2}\right) ^{\frac{1}{2}}, \end{aligned}$$

where \(u_{kr}\) is as given in (14), that is, \(u_{k1}=\frac{\nu }{\nu -2}\) with \(\nu >2\) and \(k=1,2\). The MME of \(\rho \) is given by \(\widetilde{\rho }=\gamma _{12}=\gamma _{21}\), where \(\gamma _{kl}\) is the (kl)th element of the matrix [see Eq. (12)]

$$\begin{aligned} \widetilde{\varvec{\varGamma }}=\widehat{\varvec{Q}}(\widetilde{\beta }_{1}, \widetilde{\beta }_{2})\widetilde{\varvec{M}} (\widetilde{\beta }_{1},\widetilde{\beta }_{2})\widetilde{\varvec{Q}}^{\top }(\widetilde{\beta }_{1},\widetilde{\beta }_{2}). \end{aligned}$$

The estimate of \(\nu \) can be obtained by using the same procedure presented in Sect. 3.

4.2 New modified moment estimators

Let

$$\begin{aligned} Y_{1ij} = \frac{T_{1i}}{T_{1j}}, \quad Y_{2ij} = \frac{T_{2i}}{T_{2j}}, \quad \text {for} \quad 1 \le i \ne j \le n, \end{aligned}$$

where \(Y_{1ij}=1/Y_{1ji}\) and \(Y_{2ij}=1/Y_{2ji}\), so that there are \(\left( {\begin{array}{c}n\\ 2\end{array}}\right) \) distinct pairs of \(Y_{1ij}\) or \(Y_{2ij}\). Therefore,

$$\begin{aligned} {\mathrm{E}}\left[ Y_{1ij}\right]= & {} {\mathrm{E}}\left[ T_{1i}\right] {\mathrm{E}}\left[ \frac{1}{T_{1j}}\right] =\left( 1+ \frac{u_{11}}{2}\alpha _{1}^2\right) ^2,\\ {\mathrm{E}}\left[ Y_{2ij}\right]= & {} {\mathrm{E}}\left[ T_{2i}\right] {\mathrm{E}}\left[ \frac{1}{T_{2j}}\right] =\left( 1+ \frac{u_{21}}{2}\alpha _{2}^2\right) ^2, \end{aligned}$$

where \(u_{kr}\) is as in (14). Note that the sample means of \(y_{1ij}\) and \(y_{2ij}\) (observed values of \(Y_{1ij}\) and \(Y_{2ij}\), respectively) are given by

$$\begin{aligned} \overline{y}_{1} = \frac{1}{2\left( {\begin{array}{c}n\\ 2\end{array}}\right) }\sum \limits _{1 \le i \ne j \le n}y_{1ij} \quad \text{ and } \quad \overline{y}_{2} = \frac{1}{2\left( {\begin{array}{c}n\\ 2\end{array}}\right) }\sum \limits _{1 \le i \ne j \le n}y_{2ij}. \end{aligned}$$

Then, \(\overline{y}_{1}\) and \(\overline{y}_{2}\) can be equated to \({\mathrm{E}}[Y_{1ij}]\) and \({\mathrm{E}}[Y_{2ij}]\), respectively, and solved for \(\alpha _{1}\) and \(\alpha _{2}\) to obtain the NMMEs, namely,

$$\begin{aligned} \widetilde{\alpha }_{1}^{*}= \left\{ \frac{2}{u_{11}}\left[ \sqrt{\overline{y}_{1}} -1\right] \right\} ^{\frac{1}{2}} \quad \text{ and } \quad \widetilde{\alpha }_{2}^{*}= \left\{ \frac{2}{u_{21}}\left[ \sqrt{\overline{y}_{2}} -1\right] \right\} ^{\frac{1}{2}}. \end{aligned}$$
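The double average \(\overline{y}_{k}\) need not be computed by the \(O(n^2)\) sum: splitting the double sum gives \(\sum _{i\ne j}t_{i}/t_{j}=(\sum _{i}t_{i})(\sum _{j}1/t_{j})-n\), so \(\overline{y}_{k}=(n\,s_{k}/r_{k}-1)/(n-1)\). The sketch below (ours, for illustration) uses this identity.

```python
import math

def ybar(ts):
    """y-bar via the identity sum_{i!=j} t_i/t_j = (sum t_i)(sum 1/t_j) - n,
    i.e. y-bar = (n * s/r - 1)/(n - 1), avoiding the O(n^2) double sum."""
    n = len(ts)
    s = sum(ts) / n                   # arithmetic mean
    r = n / sum(1.0 / t for t in ts)  # harmonic mean
    return (n * s / r - 1.0) / (n - 1)

def nmme_alpha(ts, u1):
    """NMME of alpha: alpha* = sqrt((2/u1) * (sqrt(y-bar) - 1))."""
    return math.sqrt((2.0 / u1) * (math.sqrt(ybar(ts)) - 1.0))
```

Since \(s_{k}\ge r_{k}\), one has \(\overline{y}_{k}\ge 1\), so the estimator is always real and non-negative.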

Furthermore, since

$$\begin{aligned} {\mathrm{E}}[\overline{T}_{1}]= & {} {\mathrm{E}}\left[ \frac{1}{n}\sum _{i=1}^{n}T_{1i} \right] =\beta _{1}\left( 1+ \frac{u_{11}}{2}\alpha _{1}^2\right) \quad \text{ and }\\ {\mathrm{E}}\left[ \overline{T}_{2}\right]= & {} {\mathrm{E}}\left[ \frac{1}{n}\sum _{i=1}^{n}T_{2i} \right] = \beta _{2}\left( 1+ \frac{u_{21}}{2}\alpha _{2}^2\right) , \end{aligned}$$

we can obtain estimators of \(\beta _{1}\) and \(\beta _{2}\), denoted by \(\widetilde{\beta }_{1}^{\circledast }\) and \(\widetilde{\beta }_{2}^{\circledast }\), as

$$\begin{aligned} \widetilde{\beta }_{1}^{\circledast }= \frac{2s_{1}}{2+u_{11}(\widetilde{\alpha }_{1}^{*})^2} =\frac{s_{1}}{\sqrt{\overline{y}_{1}}} \quad \text{ and } \quad \widetilde{\beta }_{2}^{\circledast }= \frac{2s_{2}}{2+u_{21}(\widetilde{\alpha }_{2}^{*})^2} =\frac{s_{2}}{\sqrt{\overline{y}_{2}}}. \end{aligned}$$

Also, note that

$$\begin{aligned} {\mathrm{E}}\left[ \,\overline{{T}^{-1}_{1}}\,\right]= & {} \frac{1}{n}\sum _{i=1}^{n}{\mathrm{E}}\left[ \frac{1}{T_{1i}} \right] = \frac{1}{\beta _{1}}\left( 1+ \frac{u_{11}}{2}\alpha _{1}^2\right) \quad \text{ and }\\ {\mathrm{E}}\left[ \,\overline{{T}^{-1}_{2}}\,\right]= & {} \frac{1}{n}\sum _{i=1}^{n}{\mathrm{E}}\left[ \frac{1}{T_{2i}} \right] = \frac{1}{\beta _{2}}\left( 1+ \frac{u_{21}}{2}\alpha _{2}^2\right) , \end{aligned}$$

which implies the following estimators of \(\beta _{1}\) and \(\beta _{2}\), denoted by \(\widetilde{\beta }_{1}^{\circleddash }\) and \(\widetilde{\beta }_{2}^{\circleddash }\), that is,

$$\begin{aligned} \widetilde{\beta }_{1}^{\circleddash }= \frac{r_{1}\left( 2+u_{11}(\widetilde{\alpha }_{1}^{*})^2\right) }{2}=r_{1}\sqrt{\overline{y}_{1}} \quad \text{ and } \quad \widetilde{\beta }_{2}^{\circleddash }= \frac{r_{2}\left( 2+u_{21}(\widetilde{\alpha }_{2}^{*})^2\right) }{2}=r_{2}\sqrt{\overline{y}_{2}}. \end{aligned}$$

The final NMMEs of \(\beta _{1}\) and \(\beta _{2}\), denoted by \(\widetilde{\beta }_{1}^{*}\) and \(\widetilde{\beta }_{2}^{*}\), can be obtained by merging the two estimators as

$$\begin{aligned} \widetilde{\beta }_{1}^{*}=\left( \widetilde{\beta }_{1}^{\circledast }\widetilde{\beta }_{1}^{\circleddash }\right) ^{\frac{1}{2}}=(s_{1}r_{1})^{\frac{1}{2}} \quad \text{ and } \quad \widetilde{\beta }_{2}^{*}=\left( \widetilde{\beta }_{2}^{\circledast }\widetilde{\beta }_{2}^{\circleddash }\right) ^{\frac{1}{2}}=\left( s_{2}r_{2}\right) ^{\frac{1}{2}}, \end{aligned}$$

which coincide with the MMEs.

Property 1

The NMMEs always exist uniquely.

Proof

This can be proved by showing that \(\overline{y}_{1}\ge 1\) and \(\overline{y}_{2}\ge 1\) always hold, so that \(\widetilde{\alpha }_{1}^{*}\) and \(\widetilde{\alpha }_{2}^{*}\) are always real and non-negative; this result was proved by Balakrishnan and Zhu (2014). \(\square \)

Theorem 3

The asymptotic distributions of \(\widetilde{\alpha }_{k}^{*}\) and \(\widetilde{\beta }_{k}^{*}\), for \(k=1,2\), are given by

$$\begin{aligned}&\sqrt{n}(\widetilde{\alpha }_{k}^{*}-\alpha _{k})\sim \text {N}\left( 0,{\alpha _{k}^2} \left[ \frac{u_{k2}-u_{k1}^2}{4u_{k1}^2}\right] \right) , \\&\quad \sqrt{n}(\widetilde{\beta }_{k}^{*}-\beta _{k})\sim \text {N}\left( 0, \alpha _{k}^2\beta _{k}^2\frac{u_{k1}+\frac{u_{k2}}{4} \alpha _{k}^2}{\left( 1+\frac{u_{k1}}{2}\alpha _{k}^2\right) ^2} \right) . \end{aligned}$$

Proof

Since \(\widetilde{\beta }_{k}^{*}=\widetilde{\beta }_{k}\), the asymptotic distribution of \(\widetilde{\beta }_{k}^{*}\) is the same as in Theorem 2. The proof for \(\widetilde{\alpha }_{k}^{*}\) is presented in “Appendix 2”. \(\square \)

Bivariate Birnbaum–Saunders distribution Here, the NMMEs of \(\alpha _{1}\), \(\beta _{1}\), \(\alpha _{2}\) and \(\beta _{2}\) are given by

$$\begin{aligned}&\widetilde{\alpha }_{1}^{*} = \left\{ {2}\left[ \sqrt{\overline{y}_{1}} -1\right] \right\} ^{\frac{1}{2}}, \quad \widetilde{\beta }_{1}^{*}=(s_{1}r_{1})^{\frac{1}{2}}, \quad \widetilde{\alpha }_{2}^{*} = \left\{ {2}\left[ \sqrt{\overline{y}_{2}} -1\right] \right\} ^{\frac{1}{2}}, \quad \text{ and } \\&\quad \widetilde{\beta }_{2}^{*}=\left( s_{2}r_{2}\right) ^{\frac{1}{2}}. \end{aligned}$$

Then, the NMME of \(\rho \) is

$$\begin{aligned} \widetilde{\rho }^{*}= & {} \frac{\sum _{i=1}^{n} \left( \sqrt{\frac{t_{1i}}{\widetilde{\beta }_{1}^{*}}} -\sqrt{\frac{\widetilde{\beta }_{1}^{*}}{t_{1i}}} \right) \left( \sqrt{\frac{t_{2i}}{\widetilde{\beta }_{2}^{*}}} -\sqrt{\frac{\widetilde{\beta }_{2}^{*}}{t_{2i}}} \right) }{\sqrt{\sum _{i=1}^{n}\left( \sqrt{\frac{t_{1i}}{\widetilde{\beta }_{1}^{*}}}-\sqrt{\frac{\widetilde{\beta }_{1}^{*}}{t_{1i}}} \right) ^{2}}\sqrt{\sum _{i=1}^{n}\left( \sqrt{\frac{t_{2i}}{\widetilde{\beta }_{2}^{*}}}-\sqrt{\frac{\widetilde{\beta }_{2}^{*}}{t_{2i}}} \right) ^{2}}}. \end{aligned}$$

Bivariate Birnbaum–Saunders- t distribution For a given \(\nu \), the NMMEs of \(\alpha _{1}\), \(\beta _{1}\), \(\alpha _{2}\) and \(\beta _{2}\) are given by

$$\begin{aligned}&\widetilde{\alpha }_{1}^{*} = \left\{ \frac{2}{u_{11}}\left[ \sqrt{\overline{y}_{1}} -1\right] \right\} ^{\frac{1}{2}}, \quad \widetilde{\beta }_{1}^{*}=(s_{1}r_{1})^{\frac{1}{2}}, \quad \widetilde{\alpha }_{2}^{*} = \left\{ \frac{2}{u_{21}}\left[ \sqrt{\overline{y}_{2}} -1\right] \right\} ^{\frac{1}{2}}, \quad \text{ and }\\&\quad \widetilde{\beta }_{2}^{*}=(s_{2}r_{2})^{\frac{1}{2}}, \end{aligned}$$

where \(u_{k1}\) is provided in (14), namely, \(u_{k1}=\frac{\nu }{\nu -2}\) with \(\nu >2\) and \(k=1,2\). The NMME of \(\rho \) is given by \(\widetilde{\rho }^{*}=\gamma _{12}=\gamma _{21}\), where \(\gamma _{kl}\) is the (kl)th element of the matrix [see Eq. (12)]

$$\begin{aligned} \widetilde{\varvec{\varGamma }}^{*}=\widehat{\varvec{Q}}^{*}(\widetilde{\beta }_{1}^{*},\widetilde{\beta }_{2}^{*})\widetilde{\varvec{M}}^{*}(\widetilde{\beta }_{1}^{*},\widetilde{\beta }_{2}^{*})\widetilde{\varvec{Q}}^{*\top }(\widetilde{\beta }_{1}^{*},\widetilde{\beta }_{2}^{*}). \end{aligned}$$

Here, the estimate of \(\nu \) can also be obtained by using the same procedure presented in Sect. 3.

5 Numerical evaluation

Here, we carry out an MC simulation study to evaluate the performance of the estimators proposed earlier. We focus on the BBS distribution. The simulation scenario considered the following: sample sizes \(n \in \{10, 50\}\); shape and scale parameters \(\alpha _{k}\in \{0.1,2.0\}\) and \(\beta _{k}=2.0\), for \(k=1,2\), respectively; values of \(\rho \) equal to 0.00, 0.25, 0.50 and 0.95 (the results for negative \(\rho \) are quite similar and so are omitted here); and 10,000 MC replications. The values of \(\alpha _{k}\) cover low and high skewness. We also present the 90 and \(95\%\) probability coverages of confidence intervals for the BBS model.
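Such a study requires drawing BBS samples, which can be done through the stochastic representation \(T_{k}=(\beta _{k}/4)(\alpha _{k}Z_{k}+\sqrt{\alpha _{k}^{2}Z_{k}^{2}+4})^{2}\) with \((Z_{1},Z_{2})\) standard bivariate normal with correlation \(\rho \). A minimal sketch (function name and seed handling are our own):

```python
import numpy as np

def rbbs(n, alpha1, beta1, alpha2, beta2, rho, rng=None):
    """Draw n pairs from the BBS distribution via its normal representation:
    (Z1, Z2) bivariate standard normal with correlation rho, and
    T_k = (beta_k / 4) * (alpha_k * Z_k + sqrt(alpha_k^2 Z_k^2 + 4))^2."""
    rng = np.random.default_rng(rng)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    t = np.empty_like(z)
    for k, (a, b) in enumerate([(alpha1, beta1), (alpha2, beta2)]):
        zk = z[:, k]
        t[:, k] = (b / 4.0) * (a * zk + np.sqrt((a * zk) ** 2 + 4.0)) ** 2
    return t
```

For small \(\alpha _{k}\) the transformation is nearly linear in \(Z_{k}\), so the sample correlation of \((T_{1},T_{2})\) stays close to \(\rho \).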

Tables 3 and 4 report the empirical values of the biases and mean square errors (MSEs) of the MLEs, MMEs and NMMEs for the BBS distribution. From these tables, we observe that, as n increases, the bias and MSE of all the estimators decrease, with the estimators tending to be unbiased, as expected. We also observe that the NMMEs \(\widetilde{\alpha }_{k}^{*}\), for \(k=1,2\), of the shape parameters \(\alpha _{k}\) display biases that are smaller in absolute value than those of the corresponding MLEs and MMEs for all sample sizes and values of \(\rho \) considered in the study. In terms of MSE, the performances of the three methods are quite similar.

Table 3 Simulated values of biases and MSEs (within parentheses) of the MMEs and NMMEs in comparison with those of MLEs (\(\alpha _{k}=0.1\), \(\beta _{k} = 2.0\), for \(k=1,2\)), for the BBS distribution

From Tables 3 and 4, it is also worth noting that the MLEs and MMEs are quite similar in terms of bias and MSE. Furthermore, we note that, as the values of the shape parameters \(\alpha _{k}\) increase, the performances of the estimators of the scale parameters \(\beta _{k}\) deteriorate. For example, when \(n=10\) and \(\rho =0.95\), the biases of \(\widehat{\beta }_{1}\) (MLE), \(\widetilde{\beta }_{1}\) (MME) and \(\widetilde{\beta }_{1}^{*}\) (NMME) were all 0.0017 for \(\alpha _{1}=0.1\), but 0.1934, 0.2179 and 0.2179, respectively, for \(\alpha _{1}=2.0\), an increase in bias of more than 100 times. In general, the results do not seem to depend on \(\rho \). Overall, the results favor the NMMEs.

Table 4 Simulated values of biases and MSEs (within parentheses) of the MMEs and NMMEs in comparison with those of MLEs (\(\alpha _{k}=2.0\), \(\beta _{k} = 2.0\), for \(k=1,2\)), for the BBS distribution

5.1 Probability coverage simulation results

We compute the 90 and \(95\%\) probability coverages of confidence intervals for the BBS model using the asymptotic distributions given earlier, with \(\alpha _{k}=0.5\), \(\beta _{k} = 1.0\), for \(k=1,2\). The \(100(1-\gamma )\%\) confidence intervals for \(\theta _{j}\), \(j=1,\ldots ,5\), based on the MLEs can be obtained from

$$\begin{aligned} \left[ \left( \widehat{\theta }_{j}+{\frac{z_{\gamma /2}}{\sqrt{{\varvec{I}}_{jj}(\widehat{\varvec{\varTheta }})}}}\right) , \left( \widehat{\theta }_{j}+{\frac{z_{1-\gamma /2}}{\sqrt{{\varvec{I}}_{jj}(\widehat{\varvec{\varTheta }})}}}\right) \right] , \end{aligned}$$

where \(\widehat{\varvec{\varTheta }}=(\widehat{\theta }_{1},\widehat{\theta }_{2},\widehat{\theta }_{3},\widehat{\theta }_{4},\widehat{\theta }_{5})^{\top } =(\widehat{\alpha }_{1},\widehat{\beta }_{1},\widehat{\alpha }_{2},\widehat{\beta }_{2},\widehat{\rho })^{\top }\) and \(z_{r}\) is the 100rth percentile of the standard normal distribution. The corresponding \(100(1-\gamma )\%\) confidence intervals for \(\alpha _{k}\) and \(\beta _{k}\), \(k=1,2\), based on the MMEs are given by

$$\begin{aligned}&\left[ \widetilde{\alpha }_{k}\left( 1+\frac{z_{\gamma /2}}{\sqrt{2n}}\right) ^{-1}, \widetilde{\alpha }_{k}\left( 1+\frac{z_{1-\gamma /2}}{\sqrt{2n}}\right) ^{-1}\right] ,\\&\quad \left[ \widetilde{\beta }_{k}\left( 1+\frac{z_{\gamma /2}}{\sqrt{n{h}(\widetilde{\alpha }_{k})}}\right) ^{-1}, \widetilde{\beta }_{k}\left( 1+\frac{z_{1-\gamma /2}}{\sqrt{n{h}(\widetilde{\alpha }_{k})}}\right) ^{-1}\right] , \end{aligned}$$

where \(h(x)=\frac{1+(3/4)x^2}{[1+(1/2)x^2]^2}\). Finally, the \(100(1-\gamma )\%\) confidence intervals for \(\alpha _{k}\) and \(\beta _{k}\), \(k=1,2\), based on the NMMEs are given by

$$\begin{aligned}&\left[ \widetilde{\alpha }_{k}^{*}\left( 1+\frac{z_{\gamma /2}}{\sqrt{2n}}\right) ^{-1}, \widetilde{\alpha }_{k}^{*}\left( 1+\frac{z_{1-\gamma /2}}{\sqrt{2n}}\right) ^{-1}\right] ,\\&\quad \left[ \widetilde{\beta }_{k}^{*}\left( 1+\frac{z_{\gamma /2}}{\sqrt{n{h}(\widetilde{\alpha }_{k}^{*})}}\right) ^{-1}, \widetilde{\beta }_{k}^{*}\left( 1+\frac{z_{1-\gamma /2}}{\sqrt{n{h}(\widetilde{\alpha }_{k}^{*})}}\right) ^{-1}\right] . \end{aligned}$$
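The MME- and NMME-based intervals share the same form, so one helper covers both. The sketch below encodes the two expressions above; since \(z_{\gamma /2}\) is negative, the two endpoints are sorted so that the pair is always returned as (lower, upper):

```python
import numpy as np
from statistics import NormalDist

def h(x):
    # h(x) = [1 + (3/4)x^2] / [1 + (1/2)x^2]^2, as defined in the text
    return (1.0 + 0.75 * x ** 2) / (1.0 + 0.5 * x ** 2) ** 2

def mme_intervals(alpha_hat, beta_hat, n, gamma=0.05):
    """Asymptotic 100(1-gamma)% intervals for alpha_k and beta_k based on
    an (N)MME pair (alpha_hat, beta_hat); a sketch of the formulas above."""
    z_lo = NormalDist().inv_cdf(gamma / 2)       # z_{gamma/2} (negative)
    z_hi = NormalDist().inv_cdf(1 - gamma / 2)   # z_{1-gamma/2}
    ci_alpha = tuple(sorted((alpha_hat / (1 + z_lo / np.sqrt(2 * n)),
                             alpha_hat / (1 + z_hi / np.sqrt(2 * n)))))
    d = np.sqrt(n * h(alpha_hat))
    ci_beta = tuple(sorted((beta_hat / (1 + z_lo / d),
                            beta_hat / (1 + z_hi / d))))
    return ci_alpha, ci_beta
```

Both intervals are built from the asymptotic normality of \(\widetilde{\alpha }_{k}/\alpha _{k}\) and \(\widetilde{\beta }_{k}/\beta _{k}\), which is why the estimate is divided by a factor of the form \(1+z/\sqrt{\cdot }\).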

To obtain a \(100(1-\gamma )\%\) confidence interval for \(\rho \) based on the MME (\(\widetilde{\rho }\)) and NMME (\(\widetilde{\rho }^{*}\)), we can make use of Fisher’s z-transformation (Fisher 1921) and the generalized confidence interval proposed by Krishnamoorthy and Xia (2007). The latter method is suggested by Kazemi and Jafari (2015) as one of the best approaches to construct a confidence interval for the correlation coefficient in a bivariate normal distribution.

First, note that

$$\begin{aligned} X_{1}=\frac{1}{\alpha _{1}}\left( \sqrt{\frac{T_{1}}{{\beta }_{1}}}-\sqrt{\frac{{\beta }_{1}}{T_{1}}} \right) \sim \text {N}(0,1) \quad \text{ and } \quad X_{2}=\frac{1}{\alpha _{2}}\left( \sqrt{\frac{T_{2}}{{\beta }_{2}}}-\sqrt{\frac{{\beta }_{2}}{T_{2}}} \right) \sim \text {N}(0,1). \end{aligned}$$

Note also that we can express \(\widetilde{\rho }\) (results for \(\widetilde{\rho }^{*}\) are similar) as

$$\begin{aligned} \widetilde{\rho }=\frac{\sum _{i=1}^{n}x_{1i}x_{2i}}{\sqrt{\sum _{i=1}^{n}x_{1i}^2}\sqrt{\sum _{i=1}^{n}x_{2i}^{2}}}, \end{aligned}$$

where \(x_{1i}=\frac{1}{\widetilde{\alpha _{1}}}\left( \sqrt{\frac{t_{1i}}{\widetilde{\beta }_{1}}}-\sqrt{\frac{\widetilde{\beta }_{1}}{t_{1i}}} \right) \) and \(x_{2i}=\frac{1}{\widetilde{\alpha _{2}}}\left( \sqrt{\frac{t_{2i}}{\widetilde{\beta }_{2}}}-\sqrt{\frac{\widetilde{\beta }_{2}}{t_{2i}}} \right) \). The pairs \((x_{1i},x_{2i})\) for \(i=1,\ldots ,n\) can be thought of as realizations of the pair \((X_{1},X_{2})\). Then, \(\widetilde{\rho }\) is an estimator of the correlation coefficient of a standard bivariate normal distribution. Below, we detail the two methods to compute the confidence interval.

Fisher’s z-transformation (FI) Based on Fisher’s z-transformation (Fisher 1921), we readily have

$$\begin{aligned} z=\frac{1}{2}\log \left( \frac{1+\widetilde{\rho }}{1-\widetilde{\rho }} \right) =\tanh ^{-1}(\widetilde{\rho }), \end{aligned}$$

which has an asymptotic normal distribution with mean \(\frac{1}{2}\log \left( \frac{1+\rho }{1-\rho } \right) =\tanh ^{-1}(\rho )\) and variance \(1/(n-3)\). Then, we can obtain an approximate \(100(1-\gamma )\%\) confidence interval for \({\rho }\) by

$$\begin{aligned} \left[ \tanh \left( \tanh ^{-1}(\widetilde{\rho })+\frac{z_{\gamma /2}}{\sqrt{n-3}}\right) ,\tanh \left( \tanh ^{-1}(\widetilde{\rho })+\frac{z_{1-\gamma /2}}{\sqrt{n-3}} \right) \right] . \end{aligned}$$
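The FI interval is a direct computation; a minimal sketch, using the stdlib normal quantile:

```python
import numpy as np
from statistics import NormalDist

def fisher_ci(rho_hat, n, gamma=0.05):
    """Approximate 100(1-gamma)% CI for rho via Fisher's z-transformation:
    atanh(rho_hat) is approximately N(atanh(rho), 1/(n-3))."""
    z = np.arctanh(rho_hat)                       # Fisher's z
    half = NormalDist().inv_cdf(1 - gamma / 2) / np.sqrt(n - 3)
    return np.tanh(z - half), np.tanh(z + half)   # back-transform the limits
```

Back-transforming with \(\tanh \) guarantees that both limits stay inside \((-1,1)\).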

Krishnamoorthy and Xia’s Method (KX) Based on Krishnamoorthy and Xia (2007), we can construct an approximate \(100(1-\gamma )\%\) confidence interval for \({\rho }\) from the following algorithm:

  • Step 1. Compute \(\overline{\rho }=\frac{\widetilde{\rho }}{\sqrt{1-\widetilde{\rho }^2}}\) for a given n and \(\widetilde{\rho }\);

  • Step 2. For \(i=1\) to m (1,000,000 say), generate \(U_{1}\sim \chi _{n-1}^{2}\), \(U_{2}\sim \chi _{n-2}^{2}\) and \(Z_{0}\sim {N}(0,1)\) and compute

    $$\begin{aligned} Q_{i}=\frac{\overline{\rho }\sqrt{U_{2}}-Z_{0}}{\sqrt{(\overline{\rho }\sqrt{U_{2}}-Z_{0})^{2}+U_{1}}}. \end{aligned}$$
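The two steps above, followed by taking percentiles of the \(Q_{i}\)’s, can be sketched as follows (vectorized over the m replications; function name is our own):

```python
import numpy as np

def kx_ci(rho_hat, n, gamma=0.05, m=100_000, rng=None):
    """Generalized CI for rho in the spirit of Krishnamoorthy and Xia (2007):
    Monte Carlo version of Steps 1-2 in the text."""
    rng = np.random.default_rng(rng)
    rho_bar = rho_hat / np.sqrt(1.0 - rho_hat ** 2)   # Step 1
    u1 = rng.chisquare(n - 1, size=m)                 # U1 ~ chi2_{n-1}
    u2 = rng.chisquare(n - 2, size=m)                 # U2 ~ chi2_{n-2}
    z0 = rng.standard_normal(m)                       # Z0 ~ N(0,1)
    num = rho_bar * np.sqrt(u2) - z0                  # Step 2
    q = num / np.sqrt(num ** 2 + u1)
    return np.quantile(q, [gamma / 2, 1 - gamma / 2]) # (lower, upper)
```

Because each \(Q_{i}\) is a ratio of the form \(a/\sqrt{a^{2}+u}\) with \(u>0\), the simulated values, and hence the interval limits, always lie in \((-1,1)\).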

The lower and upper limits for \(\rho \) are the \(100(\gamma /2)\)th and \(100(1-\gamma /2)\)th percentiles of the \(Q_{i}\)’s, respectively.

Table 5 presents the \(90\%\) and \(95\%\) probability coverages of confidence intervals. The results show that the asymptotic confidence intervals do not provide good results for \(\alpha _{k}\) and \(\beta _{k}\) when the sample size is small (\(n=10\)), since the coverage probabilities are much lower than the corresponding nominal values. The scenario changes when \(n=50\), with satisfactory results for both \(\alpha _{k}\) and \(\beta _{k}\). Overall, the coverages for \(\rho \) associated with the MMEs and NMMEs perform quite well, whereas those based on the MLEs perform poorly.

Table 5 Probability coverages of 90 and \(95\%\) confidence intervals for the BBS model (\(\alpha _{k}=0.5\), \(\beta _{k} = 1.0\), for \(k=1,2\))

6 Illustrative examples

We illustrate the proposed methodology by using two real data sets. The first data set corresponds to two different measurements of stiffness, whereas the second contains bone mineral density measurements for 24 individuals.

6.1 Example 1

In this example, the data set corresponds to two different measurements of stiffness, namely, shock (\(T_1\)) and vibration (\(T_2\)) of each of \(n=30\) boards. The former involves emitting a shock wave down the board, while the latter is obtained during the vibration of the board; see Johnson and Wichern (1999).

Figure 1 provides the histogram, scaled total time on test (TTT) plot and probability versus probability (PP) plot with \(95\%\) acceptance bands for each marginal \(T_{1}\) and \(T_{2}\). Acceptance bands are computed by using the relation between the Kolmogorov–Smirnov (KS) test and the PP plot; see Castro-Kuriss et al. (2014). The TTT plot allows us to have an idea about the shape of the failure rate of the marginals; see Aarset (1987) and Azevedo et al. (2012). Let the failure rate of a random variable X be \(h(x)=f(x)/[1-F(x)]\), where \(f(\cdot )\) and \(F(\cdot )\) are the PDF and CDF of X, respectively. The scaled TTT transform is given by \(W(u) = H^{-1}(u)/H^{-1}(1)\), for \({0}\le {u}\le {1}\), where \(H^{-1}(u) = \int _{0}^{F^{-1}(u)}[1-F(y)]{\mathrm{d}}y\), with \(F^{-1}(\cdot )\) being the inverse CDF of X. The corresponding empirical version of the scaled TTT transform is obtained by plotting the points \([k/n,W_{n}(k/n)]\), with \(W_{n}(k/n)= [\sum _{i=1}^{k}x_{(i)} + (n-k)x_{(k)}]/\sum _{i=1}^{n}x_{(i)}\), for \(k=1,\ldots ,n\), and \(x_{(i)}\) being the ith observed order statistic. From Fig. 1, we observe that the TTT plots suggest that the failure rates are all unimodal. Therefore, the BBS and BBS-t models are good choices, since the marginal distributions of these models allow us to model unimodal failure rates. Moreover, in Fig. 1, the PP plots support the BBS and BBS-t models.
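The empirical scaled TTT transform can be computed directly from the order statistics; a minimal sketch:

```python
import numpy as np

def scaled_ttt(x):
    """Points (k/n, W_n(k/n)) of the empirical scaled TTT transform, with
    W_n(k/n) = [sum_{i<=k} x_(i) + (n - k) x_(k)] / sum_{i=1}^n x_(i)."""
    x = np.sort(np.asarray(x, float))   # order statistics x_(1) <= ... <= x_(n)
    n = len(x)
    k = np.arange(1, n + 1)
    w = (np.cumsum(x) + (n - k) * x) / x.sum()
    return k / n, w
```

Plotting these points and comparing their shape with the diagonal (concave for decreasing, convex for increasing, and convex-then-concave for unimodal failure rates) gives the diagnostic used in Fig. 1.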

Fig. 1 Histogram, TTT plot and PP plot with acceptance bands for the two different measurements of stiffness

We now fit the BBS and BBS-t distributions to the stiffness data. From the observations, we obtain \(s_{1}=1906.1\), \(r_{1}=1857.55\), \(s_{2}=1749.53\) and \(r_{2}=1699.99\). Table 6 presents the MLEs, MMEs and NMMEs, as well as the log-likelihood values and the corresponding values of the Akaike (AIC) and Bayesian (BIC) information criteria. We note that across the models the log-likelihood values are quite similar, which suggests that the BBS model is preferable, since the BBS-t one does not substantially improve the fit for these data. The AIC and BIC values also confirm this result. Note that the estimates of \(\nu \) are quite large, indicating that the BBS-t distribution is tending to the BBS case.

In order to assess whether the BBS and BBS-t models fit these bivariate data or not, we compute the generalized Cox–Snell (GCS) residual based on each marginal. The GCS residual is given by \(r^{\mathrm{GCS}}_j = -\log (\widehat{S}(t_{ji}))\), for \(j=1,2\) and \(i=1,\ldots ,n\), where \(\widehat{S}(t_{ji})\) is the fitted survival function of the jth marginal. If the model is correctly specified, the GCS residual is unit exponential [EXP(1)] distributed; see Leiva et al. (2014).
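For a BS-normal marginal, the fitted survival function is \(\widehat{S}(t)=1-\Phi \big (\frac{1}{\widehat{\alpha }}(\sqrt{t/\widehat{\beta }}-\sqrt{\widehat{\beta }/t})\big )\), so the GCS residuals can be sketched as below; this assumes the normal kernel (a BBS-t marginal would use the Student-t CDF instead):

```python
import numpy as np
from statistics import NormalDist

def gcs_residuals(t, alpha_hat, beta_hat):
    """GCS residuals r = -log(S_hat(t)) for a fitted BS-normal marginal.
    Under a correctly specified model these behave like EXP(1) draws."""
    t = np.asarray(t, float)
    a = (np.sqrt(t / beta_hat) - np.sqrt(beta_hat / t)) / alpha_hat
    phi = np.vectorize(NormalDist().cdf)   # standard normal CDF, elementwise
    return -np.log(1.0 - phi(a))
```

A QQ plot of these residuals against EXP(1) quantiles, with a simulated envelope, yields the diagnostic displayed in Fig. 2.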

Figure 2 shows the QQ plots with simulated envelope of the GCS residuals based on the marginals of the BBS and BBS-t models and based on the MLEs. From this figure, we note that the GCS residuals present a good agreement with the EXP(1) distribution. Similar results are obtained when the QQ plots are based on the MMEs and NMMEs.

Table 6 Estimates of the parameters, log-likelihood values and AIC and BIC values for the indicated models
Fig. 2 QQ plot with envelope of the GCS residual for the indicated models and marginals, based on the MLEs

Fig. 3 Histogram, TTT plot and PP plot with acceptance bands for the BMD data

6.2 Example 2

Here, the data set corresponds to the bone mineral density (BMD) measured in g/cm\(^2\) for 24 individuals included in an experimental study; see Johnson and Wichern (1999). The data represent the BMD of the dominant radius (\(T_{1}\)) and radius (\(T_{2}\)) bones. The histogram, TTT plot and PP plot with \(95\%\) acceptance bands for each marginal \(T_{1}\) and \(T_{2}\) are shown in Fig. 3. From this figure, we note that the PP plots of \(T_{1}\) and \(T_{2}\) support the assumed BBS and BBS-t models. We also note that the TTT plots suggest unimodal hazard rates for both marginals.

From the observations, we obtain \(s_{1}=0.8409\), \(r_{1}=0.8178\), \(s_{2}=0.8101\) and \(r_{2}=0.7949\). Table 7 provides the MLEs, MMEs and NMMEs, as well as the log-likelihood values and the corresponding AIC and BIC values. The log-likelihood values and the information criteria indicate that the BBS-t model provides the best fit to this data set. Based on the MLEs, Fig. 4 shows the QQ plots with simulated envelope of the GCS residuals under the BBS and BBS-t models. These plots show that both models provide a good fit to the data.

Table 7 Estimates of the parameters, log-likelihood values and AIC and BIC values for the indicated models
Fig. 4 QQ plot with envelope of the GCS residual for the indicated models and marginals, based on the MLEs

7 Concluding remarks

In this paper, we have proposed two simple estimation methods, based on complete samples, for the generalized bivariate Birnbaum–Saunders distribution. The new estimators are easy to compute, possess good asymptotic properties, and have explicit expressions as functions of the sample observations; that is, they are obtained without the need for numerical maximization of the log-likelihood. Through a Monte Carlo simulation study, we have shown that the proposed modified moment estimators perform well. Two illustrative examples with real data have shown the usefulness of the proposed methodology. As part of future work, it would be of interest to extend the proposed estimation methods to generalized multivariate Birnbaum–Saunders distributions as well as to censored data. Work on these problems is currently in progress, and we hope to report the findings in a future paper.