1 Introduction

The Weibull distribution has attracted considerable attention in the literature owing to its inherent flexibility. The univariate Weibull distribution has the following cdf and pdf, respectively,

$$ F\left( {y;\alpha ,\beta } \right) = 1 - e^{{ - \left( {\frac{y}{\beta }} \right)^{\alpha } }} ;\quad y > 0,\quad \alpha ,\beta > 0, $$
(1.1)

and

$$ f\left( {y;\alpha ,\beta } \right) = \frac{\alpha }{\beta }\left( {\frac{y}{\beta }} \right)^{\alpha - 1} e^{{ - \left( {\frac{y}{\beta }} \right)^{\alpha } }} ;\quad y > 0,\quad \alpha ,\beta > 0, $$
(1.2)

where \( \beta \) and \( \alpha \) are the scale and shape parameters, respectively.

Several recent studies have addressed the bivariate Weibull distribution. Galiani [6] concluded that bivariate Weibull models are specifically oriented towards applications in economics, finance and risk management. Flores [4] used Weibull marginals to construct bivariate Weibull distributions. Kundu and Gupta [13] introduced the Marshall–Olkin bivariate Weibull distribution.

A copula is a convenient way to describe a multivariate distribution together with its dependence structure. Nelsen [16] defines a copula as a function that joins a multivariate distribution function to its uniform [0, 1] margins. For an n-dimensional distribution there exists a copula \( C \) such that, for all \( y_{1} , \ldots ,y_{n} \), \( F\left( { y_{1} , \ldots ,y_{n} } \right) = C\left( {F_{1} \left( {y_{1} } \right), \ldots , F_{n} \left( {y_{n} } \right)} \right) \); if the margins \( F_{1} , \ldots ,F_{n} \) are continuous, then \( C \) is uniquely defined.

Sklar [18] states that, for two random variables \( Y_{1} \) and \( Y_{2} \) with distribution functions \( F_{1} \left( {y_{1} } \right) \) and \( F_{2} \left( {y_{2} } \right) \), the joint cdf and pdf can be written in terms of a copula as, respectively,

$$ F\left( {y_{1} ,y_{2} } \right) = C\left( {F_{1} \left( {y_{1} } \right) , F_{2} \left( {y_{2} } \right)} \right), $$
(1.3)

and

$$ f\left( {y_{1} ,y_{2} } \right) = f_{1} \left( {y_{1} } \right) f_{2} \left( {y_{2} } \right) c\left( {F_{1} \left( {y_{1} } \right) , F_{2} \left( {y_{2} } \right)} \right). $$
(1.4)

The Farlie–Gumbel–Morgenstern (FGM) family is one of the most popular parametric families of copulas; it was discussed by Gumbel [8]. The joint cdf and joint pdf of the FGM copula are, respectively,

$$ C\left( {y_{1} ,y_{2} } \right) = F_{1} \left( {y_{1} } \right)F_{2} \left( {y_{2} } \right)\left( {1 + \theta \left( {1 - F_{1} \left( {y_{1} } \right)} \right)\left( {1 - F_{2} \left( {y_{2} } \right)} \right)} \right);\quad - 1 < \theta < 1 $$
(1.5)

and

$$ c\left( {y_{1} ,y_{2} } \right) = \left( {1 + \theta \left( {1 - 2F_{1} \left( {y_{1} } \right)} \right)\left( {1 - 2F_{2} \left( {y_{2} } \right)} \right)} \right). $$
(1.6)

Figure 1 shows three-dimensional plots of the pdf and cdf of the FGM copula for different values of the copula parameter \( \theta \).

Fig. 1 FGM copula with various values of \( \theta \)

Fredricks and Nelsen [5] derive the formulas for Spearman's and Kendall's correlation coefficients as follows

$$ \rho_{\text{Spearman}} = \left( {12\int_{0}^{1} \int_{0}^{1} uv\left( {1 + \theta \left( {1 - u} \right)\left( {1 - v} \right)} \right) du\, dv} \right) - 3 = \frac{\theta }{3}, $$
(1.7)
$$ \rho_{\text{Kendall}} = 1 - 4\int_{0}^{1} \int_{0}^{1} \frac{{\partial C\left( {u,v} \right)}}{\partial u}\frac{{\partial C\left( {u,v} \right)}}{\partial v}\, du\, dv = \frac{2 }{9}\theta , $$
(1.8)

such that \( \frac{ - 1}{3} \le \rho_{\text{Spearman}} \le \frac{1}{3} \) and \( \frac{ - 2}{9} \le \rho_{\text{Kendall}} \le \frac{2}{9} \).
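These closed forms are easy to check numerically. Below is a minimal R sketch for Spearman's rho of the FGM copula; the value \( \theta = 0.5 \) is an arbitrary illustrative choice, not one used later in the paper.

```r
# Numerical check of Spearman's rho for the FGM copula against the closed form theta/3
theta <- 0.5                                   # illustrative copula parameter
inner <- function(u) sapply(u, function(ui)
  integrate(function(v) ui * v * (1 + theta * (1 - ui) * (1 - v)), 0, 1)$value)
rho_spearman <- 12 * integrate(inner, 0, 1)$value - 3
c(numerical = rho_spearman, closed_form = theta / 3)
```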

In this article, we study the bivariate extension of the Weibull distribution based on the FGM copula function (FGMBW) and discuss its statistical properties. The FGMBW distribution is suitable for describing bivariate lifetime data with weak correlation between the variables. It is a good alternative to several bivariate lifetime distributions for modeling non-negative real-valued data in applications.

The objective of this article is twofold: to study the properties of the FGMBW distribution, and to estimate the parameters of the model by different estimation methods. An attractive feature of the FGMBW distribution is that its marginals are of the same form as the base distribution (Weibull). Other features of the FGMBW distribution are that it has closed forms for its cdf, product moments, moment generating function, and hazard rate function. A further motivation of the article is to develop a guideline for choosing the best estimation method for the FGMBW distribution, which we think will be of considerable interest to statisticians. A simulation study is conducted to compare the performance of the estimation methods. A real data set is also introduced and analyzed to investigate the model. The uniqueness of this study comes from the fact that we give a comprehensive description of the mathematical and statistical properties of the FGMBW distribution, with the hope that it will attract wider applications in medicine, economics, life testing and other areas of research.

The rest of this paper is organized as follows: the FGM bivariate Weibull distribution is obtained in Sect. 2. Some statistical properties of the FGMBW distribution are given in Sect. 3. Parameter estimation methods for the FGMBW distribution based on the copula are presented in Sect. 4. In Sect. 5, asymptotic confidence intervals are discussed. In Sect. 6, the potential of the new model is illustrated by a simulation study. In Sect. 7, an application to real data is discussed. Finally, concluding remarks on the FGMBW model are given in Sect. 8.

2 FGM Bivariate Weibull Distribution

According to Sklar's theorem, the joint pdf of a bivariate Weibull distribution constructed from any copula is as follows

$$ f\left( {y_{1} ,y_{2} } \right) = \frac{{\alpha_{1} }}{{\beta_{1} }}\left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} - 1}} e^{{ - \left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} }} \frac{{\alpha_{2} }}{{\beta_{2} }}\left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} - 1}} e^{{ - \left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} c\left( {\left( {1 - e^{{ - \left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} }} } \right),\quad \left( {1 - e^{{ - \left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} } \right)} \right) $$
(2.1)

The cdf of a FGMBW distribution can be expressed as

$$ F\left( {y_{1} ,y_{2} } \right) = \left( {1 - e^{{ - \left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} }} } \right)\left( {1 - e^{{ - \left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} } \right)\left[ {1 + \theta \left( {1 - \left( {1 - e^{{ - \left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} }} } \right)} \right)\left( {1 - \left( {1 - e^{{ - \left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} } \right)} \right)} \right] $$
(2.2)

The pdf of a FGMBW distribution is defined as

$$ \begin{aligned} f\left( {y_{1} ,y_{2} } \right) & = \frac{{\alpha_{1} }}{{\beta_{1} }}\left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} - 1}} e^{{ - \left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} }} \frac{{\alpha_{2} }}{{\beta_{2} }}\left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} - 1}} e^{{ - \left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} \\ & \quad \times \left[ {1 + \theta \left( {1 - 2\left( {1 - e^{{ - \left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} }} } \right)} \right)\left( {1 - 2\left( {1 - e^{{ - \left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} } \right)} \right)} \right] \\ \end{aligned} $$
(2.3)
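For numerical work, (2.2) and (2.3) translate directly into R. The following is a minimal sketch; the function names pfgmbw and dfgmbw are our own illustrative choices, not part of any package.

```r
# Sketch of the FGMBW cdf (2.2) and pdf (2.3)
pfgmbw <- function(y1, y2, a1, b1, a2, b2, theta) {
  u1 <- pweibull(y1, shape = a1, scale = b1)      # Weibull margin F1
  u2 <- pweibull(y2, shape = a2, scale = b2)      # Weibull margin F2
  u1 * u2 * (1 + theta * (1 - u1) * (1 - u2))     # FGM copula applied to the margins
}
dfgmbw <- function(y1, y2, a1, b1, a2, b2, theta) {
  u1 <- pweibull(y1, shape = a1, scale = b1)
  u2 <- pweibull(y2, shape = a2, scale = b2)
  dweibull(y1, a1, b1) * dweibull(y2, a2, b2) *
    (1 + theta * (1 - 2 * u1) * (1 - 2 * u2))     # FGM copula density (1.6)
}
```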

Figure 2 shows the correlation of the FGM copula for various values of the copula parameter, and Fig. 3 shows three-dimensional plots of the pdf and cdf of the FGMBW distribution for different values of \( \alpha_{1} ,\beta_{1} , \alpha_{2} ,\beta_{2} \,{\text{and}}\,\theta \).

Fig. 2 Correlation of the FGM copula for various values of the copula parameter

Fig. 3 The pdf and cdf of the FGMBW distribution for various values of the parameters

3 Properties of FGMBW Distribution

In this section, we give some important statistical properties of the FGMBW distribution, such as the marginal distributions, product moments, the moment generating function, the conditional distribution, random variate generation, and the reliability function. Establishing algebraic expressions for these properties of the FGMBW distribution can be more efficient than computing them directly by numerical simulation.

3.1 The Marginal Distributions

The marginal density functions of \( Y_{1} \) and \( Y_{2} \) are, respectively,

$$ f\left( {y_{1} ; \,\alpha_{1} ,\beta_{1} } \right) = \frac{{\alpha_{1} }}{{\beta_{1} }}\left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} - 1}} e^{{ - \left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} }} ;\quad y_{1} > 0,\quad \alpha_{1} ,\quad \beta_{1} > 0, $$
(3.1)
$$ f\left( {y_{2} ; \,\alpha_{2} ,\beta_{2} } \right) = \frac{{\alpha_{2} }}{{\beta_{2} }}\left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} - 1}} e^{{ - \left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} ;\quad y_{2} > 0,\quad \alpha_{2} ,\quad \beta_{2} > 0, $$
(3.2)

which are Weibull densities. The marginal density of \( Y_{i} \) can be calculated directly from the joint density by

$$ f\left( {y_{i} } \right) = \int_{{{\text{all}}\;y_{j} }} f\left( {y_{1} ,y_{2} } \right) dy_{j} ;\quad i,j = 1,2,\quad i \ne j. $$

3.2 Conditional Distribution

The conditional probability distribution of \( Y_{2} \) given \( Y_{1} \) is given as follows

$$ f\left( {y_{2} \left| {y_{1} } \right.} \right) = \frac{{\alpha_{2} }}{{\beta_{2} }}\left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} - 1}} e^{{ - \left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} \left[ {1 + \theta \left( {1 - 2u\left( {y_{1} } \right)} \right)\left( {1 - 2u\left( {y_{2} } \right)} \right)} \right], $$
(3.3)

and the conditional cdf is

$$ F\left( {y_{2} \left| {y_{1} } \right.} \right) = u\left( {y_{2} } \right)\left[ {1 - \theta + 2\theta u\left( {y_{1} } \right)} \right] + \theta v\left( {y_{2} } \right) - 2\theta u\left( {y_{1} } \right)v\left( {y_{2} } \right), $$
(3.4)

where \( u\left( {y_{i} } \right) = 1 - e^{{ - \left( {\frac{{y_{i} }}{{\beta_{i} }}} \right)^{{\alpha_{i} }} }} \) and \( v\left( {y_{i} } \right) = 1 - e^{{ - 2\left( {\frac{{y_{i} }}{{\beta_{i} }}} \right)^{{\alpha_{i} }} }} \), for \( i = 1,2 \).

3.3 Generating Random Variables

Nelsen [16] discussed generating a sample from a specified joint distribution. In the conditional distribution method, the joint density is factorized as

$$ f\left( {y_{1} ,y_{2} } \right) = f\left( {y_{1} } \right)f\left( {y_{2} \left| {y_{1} } \right.} \right) $$

Using the following steps, we can generate a bivariate sample by the conditional approach (an R sketch is given after the list):

  1. Generate \( U \) and \( V \) independently from a \( {\text{uniform}}\left( {0, 1} \right) \) distribution.

  2. Set \( Y_{1} = \beta_{1} \left[ { - \ln \left( {1 - U} \right)} \right]^{{1/\alpha_{1} }} \).

  3. Set \( F\left( {y_{2} \left| {y_{1} } \right.} \right) = V \) and solve numerically for \( Y_{2} \).

  4. Repeat Steps 1–3 \( n \) times to obtain \( \left( {y_{1i} ,y_{2i} } \right), i = 1, 2, \ldots , n \).
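A minimal R sketch of this algorithm follows; the function name rfgmbw is our own choice, and the conditional cdf \( F\left( {y_{2} \left| {y_{1} } \right.} \right) \) is inverted with a one-dimensional root finder on the copula scale, which is equivalent to Step 3.

```r
# Sketch of the conditional sampling algorithm of Sect. 3.3
rfgmbw <- function(n, a1, b1, a2, b2, theta) {
  y1 <- b1 * (-log(1 - runif(n)))^(1 / a1)     # Step 2: invert the first Weibull cdf
  v  <- runif(n)
  u1 <- 1 - exp(-(y1 / b1)^a1)                 # F1(y1)
  # Step 3: solve F(u2 | u1) = v, with F(u2 | u1) = u2 + theta*u2*(1-u2)*(1-2*u1)
  u2 <- vapply(seq_len(n), function(i)
    uniroot(function(u) u + theta * u * (1 - u) * (1 - 2 * u1[i]) - v[i],
            interval = c(0, 1))$root, numeric(1))
  y2 <- b2 * (-log(1 - u2))^(1 / a2)           # invert the second Weibull margin
  cbind(y1 = y1, y2 = y2)
}
```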

3.4 Moment Generating Function

Let \( \left( {Y_{1} ,Y_{2} } \right) \) denote a random vector with the probability density function (2.3). Then the moment generating function of \( \left( {Y_{1} ,Y_{2} } \right) \) is given by

$$ \begin{aligned} M_{{\left( {y_{1} ,y_{2} } \right)}}^{ } \left( {t_{1} t_{2} } \right) & = \mathop \sum \limits_{n = 0}^{\infty } \left( {\frac{{t_{1}^{n} \beta_{1}^{n} }}{n!}\Gamma \left( {1 + \frac{n}{{\alpha_{1} }}} \right)} \right)\mathop \sum \limits_{m = 0}^{\infty } \left( {\frac{{t_{2}^{m} \beta_{2}^{m} }}{ m!}\Gamma \left( {1 + \frac{m}{{\alpha_{2} }}} \right)} \right) \\ & \quad \times \left[ {1 + \theta - 2\theta \frac{1}{{2^{{\left( {1 + \frac{m}{{\alpha_{2} }}} \right)}} }} - 2\theta \frac{1}{{2^{{\left( {1 + \frac{n}{{\alpha_{1} }}} \right)}} }} + 4\theta \frac{1}{{2^{{\left( {1 + \frac{n}{{\alpha_{1} }}} \right)}} }}\frac{1}{{2^{{\left( {1 + \frac{m}{{\alpha_{2} }}} \right)}} }}} \right], \\ \end{aligned} $$
(3.5)

To derive the moment generating function, start with

$$ {\text{M}}_{{\left( {y_{1} ,y_{2} } \right)}}^{ } \left( {t_{1} t_{2} } \right) = E\left( {e^{{t_{1} y_{1} }} e^{{t_{2} y_{2} }} } \right) = \mathop \int \limits_{0}^{\infty } \mathop \int \limits_{0}^{\infty } e^{{t_{1} y_{1} }} e^{{t_{2} y_{2} }} f\left( {y_{1} ,y_{2} } \right) dy_{1} dy_{2} $$
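Expanding both exponentials in power series and interchanging summation and integration expresses the mgf through the product moments of Sect. 3.5, which then yields (3.5); a sketch of the intermediate step, in our notation, is

$$ M_{{\left( {Y_{1} ,Y_{2} } \right)}} \left( {t_{1} ,t_{2} } \right) = \mathop \sum \limits_{n = 0}^{\infty } \mathop \sum \limits_{m = 0}^{\infty } \frac{{t_{1}^{n} t_{2}^{m} }}{n!\,m!} E\left( {Y_{1}^{n} Y_{2}^{m} } \right) = \mathop \sum \limits_{n = 0}^{\infty } \mathop \sum \limits_{m = 0}^{\infty } \frac{{t_{1}^{n} t_{2}^{m} }}{n!\,m!}\upmu_{nm}^{'} . $$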

3.5 Product Moments

If the random vector \( \left( {Y_{1} ,Y_{2} } \right) \) is distributed as FGMBW, then its \( (r,s) \)th product moment about zero can be expressed as follows

$$ \upmu_{rs}^{'} = \beta_{1}^{r}\Gamma \left( {\frac{r}{{\alpha_{1} }} + 1} \right)\beta_{2}^{s}\Gamma \left( {\frac{s}{{\alpha_{2} }} + 1} \right)\left[ {1 + \theta - \frac{\theta }{{2^{{s/\alpha_{2} }} }} - \frac{\theta }{{2^{{r/\alpha_{1} }} }} + \frac{\theta }{{2^{{s/\alpha_{2} }} 2^{{r/\alpha_{1} }} }}} \right] $$
(3.6)

To prove this, start with

$$ \upmu_{rs}^{'} = E\left( {Y_{1}^{r} Y_{2}^{s} } \right) = \mathop \int \limits_{0}^{\infty } \mathop \int \limits_{0}^{\infty } y_{1}^{r} y_{2}^{s} f\left( {y_{1} ,y_{2} } \right) dy_{1} dy_{2} $$
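As a quick numerical check of (3.6), the closed form can be compared with a Monte Carlo estimate obtained from the sampler sketched in Sect. 3.3; the parameter values and moment orders below are arbitrary illustrative choices.

```r
# Compare the closed-form product moment (3.6) with a Monte Carlo estimate
set.seed(1)
a1 <- 1.8; b1 <- 3.5; a2 <- 1.5; b2 <- 2.5; theta <- 0.75
r <- 1; s <- 2
mu_rs <- b1^r * gamma(r / a1 + 1) * b2^s * gamma(s / a2 + 1) *
  (1 + theta - theta / 2^(s / a2) - theta / 2^(r / a1) +
     theta / (2^(s / a2) * 2^(r / a1)))
y <- rfgmbw(1e5, a1, b1, a2, b2, theta)        # sampler from the Sect. 3.3 sketch
c(closed_form = mu_rs, monte_carlo = mean(y[, "y1"]^r * y[, "y2"]^s))
```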

We use Mardia's [14] measures of bivariate skewness (SK) and kurtosis (KU) (Table 1). Mardia defined the bivariate SK and KU, respectively, as

$$ {\text{SK}} = \left( {1 - \rho^{2} } \right)^{ - 3} \left[ {\gamma_{30}^{2} + \gamma_{03}^{2} + 3\left( {1 + 2\rho^{2} } \right)\left( {\gamma_{12}^{2} + \gamma_{21}^{2} } \right) - 2\rho^{3} \gamma_{30}^{ } \gamma_{03} + 6\rho \left\{ {\gamma_{30} \left( {\rho \gamma_{12} - \gamma_{21} } \right) + \gamma_{03} \left( {\rho \gamma_{21} - \gamma_{12} } \right) - \left( {2 + \rho^{2} } \right)\gamma_{21} \gamma_{12} } \right\}} \right] $$
(3.7)
$$ {\text{KU}} = \frac{{\gamma_{40}^{ } + \gamma_{04}^{ } + 2\gamma_{22} + 4\rho \left( {\rho \gamma_{22} - \gamma_{13} - \gamma_{31} } \right)}}{{\left( {1 - \rho^{2} } \right)^{2} }} $$
(3.8)

where \( \gamma_{rs} = \frac{{\mu_{rs} }}{{\sigma_{1}^{r} \sigma_{2}^{s} }} \), \( \rho = corr\left( {Y_{1} ,Y_{2} } \right) = \frac{{E\left( {\left( {Y_{1} - E\left( {Y_{1} } \right)} \right)\left( {Y_{2} - E\left( {Y_{2} } \right)} \right)} \right)}}{{\sigma_{1} \sigma_{2} }} \), \( \mu_{rs} \) is the central moment of order \( \left( {r, s} \right) \) of \( \left( {Y_{1} ,Y_{2} } \right) \), \( \sigma_{1} = \sqrt {E\left( {Y_{1} - E\left( {Y_{1} } \right)} \right)^{2} } \), \( \sigma_{2} = \sqrt {E\left( {Y_{2} - E\left( {Y_{2} } \right)} \right)^{2} } \), and

$$ \begin{aligned}\upmu_{rs} & = E\left( {\left( {Y_{1} - E\left( {Y_{1} } \right)} \right)^{r} \left( {\left( {Y_{2} - E\left( {Y_{2} } \right)} \right)} \right)^{s} } \right) \\ & = \mathop \int \limits_{0}^{\infty } \mathop \int \limits_{0}^{\infty } \left( {y_{1} - E\left( {y_{1} } \right)} \right)^{r} \left( {\left( {y_{2} - E\left( {y_{2} } \right)} \right)} \right)^{s} f\left( {y_{1} ,y_{2} } \right) dy_{1} dy_{2} \\ \end{aligned} $$
Table 1 Covariance, skewness, and kurtosis of FGMBW distribution

3.6 Reliability Function

Osmetti and Chiodini [17] noted that it is often more convenient to express a joint survival function as a copula of its marginal survival functions. Let \( Y_{1} \) and \( Y_{2} \) be random variables with survival functions \( \bar{F}\left( {y_{1} } \right) \) and \( \bar{F}\left( {y_{2} } \right) \), as follows.

The reliability function of the marginal distributions is defined as

$$ R\left( {y_{j} ;\, \alpha_{j} ,\beta_{j} } \right) = 1 - F\left( {y_{j} ;\, \alpha_{j} ,\beta_{j} } \right) = e^{{ - \left( {\frac{{y_{j} }}{{\beta_{j} }}} \right)^{{\alpha_{j} }} }} ;\quad y_{j} > 0,\quad \alpha_{j} ,\beta_{j} > 0,\quad j = 1,2 $$

The joint survival function can be expressed through the copula of the marginal survival functions as

$$ R\left( {y_{1} ,y_{2} } \right) = C\left( {R\left( {y_{1} } \right), R\left( {y_{2} } \right)} \right) $$

Then the reliability function of FGMBW distribution is

$$ R\left( {y_{1} ,y_{2} } \right) = e^{{ - \left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} }} e^{{ - \left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} \left[ {1 + \theta \left( {1 - e^{{ - \left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} }} } \right)\left( {1 - e^{{ - \left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} } \right)} \right] $$
(3.9)

Basu [1] defined the bivariate failure rate function for the first time as

$$ h\left( {y_{1} ,y_{2} } \right) = \frac{{f\left( {y_{1} ,y_{2} } \right)}}{{R\left( {y_{1} ,y_{2} } \right)}} $$

Then the hazard rate function of FGMBW distribution is

$$ h\left( {y_{1} ,y_{2} } \right) = \frac{{\frac{{\alpha_{1} }}{{\beta_{1} }}\left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} - 1}} \frac{{\alpha_{2} }}{{\beta_{2} }}\left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} - 1}} \left[ {1 + \theta \left( { - 1 + 2e^{{ - \left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} }} } \right)\left( { - 1 + 2e^{{ - \left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} } \right)} \right]}}{{\left[ {1 + \theta \left( {1 - e^{{ - \left( {\frac{{y_{1} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} }} } \right)\left( {1 - e^{{ - \left( {\frac{{y_{2} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} } \right)} \right]}} $$
(3.10)
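For completeness, the joint reliability (3.9) and the bivariate hazard (3.10) can be evaluated numerically. The R sketch below reuses the dfgmbw function from the sketch in Sect. 2; the function names rel_fgmbw and haz_fgmbw are our own.

```r
# Joint reliability (3.9) and bivariate hazard (3.10) of the FGMBW distribution
rel_fgmbw <- function(y1, y2, a1, b1, a2, b2, theta) {
  s1 <- exp(-(y1 / b1)^a1)                     # marginal survival of Y1
  s2 <- exp(-(y2 / b2)^a2)                     # marginal survival of Y2
  s1 * s2 * (1 + theta * (1 - s1) * (1 - s2))
}
haz_fgmbw <- function(y1, y2, a1, b1, a2, b2, theta)
  dfgmbw(y1, y2, a1, b1, a2, b2, theta) /      # Basu's definition h = f / R
    rel_fgmbw(y1, y2, a1, b1, a2, b2, theta)
```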

4 Estimation Based on Copulas

In this section, we introduce different estimation methods used to estimate the parameters of the FGMBW distribution: maximum likelihood estimation (MLE), inference functions for margins (IFM) and the semi-parametric method (SP). For more information about these methods see Chen [2], Tsukahara [19] and Weiß [20].

4.1 Maximum Likelihood Estimation (MLE)

Elaal and Jarwan [3] discussed the maximum likelihood estimator, a one-step parametric method in which all model parameters are estimated jointly. The log-likelihood is given as

$$ \ln L = \mathop \sum \limits_{j = 1}^{n} \left[ {\ln \left( {f_{1} \left( {y_{1j} } \right) f_{2} \left( {y_{2j} } \right) c\left( {F_{1} \left( {y_{1j} ,\delta_{1} } \right),F_{2} \left( {y_{2j} ,\delta_{2} } \right);\theta } \right)} \right)} \right] $$

The parameter estimates are obtained by maximizing the log-likelihood function with respect to the parameters. Considering Eqs. (2.3) and (1.6), let

$$ a\left( {y_{j} ;\, \alpha_{j} , \beta_{j} } \right) = 1 - 2\left( {1 - e^{{ - \left( {\frac{{y_{j} }}{{\beta_{j} }}} \right)^{{\alpha_{j} }} }} } \right), \quad j = 1,2. $$

The likelihood function of the FGMBW distribution is defined as

$$ \begin{aligned} L & = \left( {\frac{{\alpha_{2} \alpha_{1} }}{{\beta_{2} \beta_{1} }}} \right)^{n} \mathop \prod \limits_{i = 1}^{n} \left( {\left( {\frac{{y_{1i} }}{{\beta_{1} }}} \right)^{{\alpha_{1} - 1}} \left( {\frac{{y_{2i} }}{{\beta_{2} }}} \right)^{{\alpha_{2} - 1}} } \right)e^{{ - \mathop \sum \limits_{i = 1}^{n} \left( {\frac{{y_{1i} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} - \mathop \sum \limits_{i = 1}^{n} \left( {\frac{{y_{2i} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} \\ & \quad \times \mathop \prod \limits_{i = 1}^{n} \left( {1 + \theta \left( {a\left( {y_{1i} , \alpha_{1} , \beta_{1} } \right)} \right)\left( {a\left( {y_{2i} , \alpha_{2} , \beta_{2} } \right)} \right)} \right) \\ \end{aligned} $$

and the log-likelihood function can be written as

$$ \ln L = n(\ln \alpha_{1} - \ln \beta_{1} ) + n(\ln \alpha_{2} - \ln \beta_{2} ) + \left( {\alpha_{1} - 1} \right)\mathop \sum \limits_{i = 1}^{n} \ln \left( {\frac{{y_{1i} }}{{\beta_{1} }}} \right) - \mathop \sum \limits_{i = 1}^{n} \left( {\frac{{y_{1i} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} + \left( {\alpha_{2} - 1} \right) \mathop \sum \limits_{i = 1}^{n} \ln \left( {\frac{{y_{2i} }}{{\beta_{2} }}} \right) - \mathop \sum \limits_{i = 1}^{n} \left( {\frac{{y_{2i} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} + \mathop \sum \limits_{i = 1}^{n} \ln \left( {1 + \theta \left( {a\left( {y_{1i} , \alpha_{1} , \beta_{1} } \right)} \right)\left( {a\left( {y_{2i} , \alpha_{2} , \beta_{2} } \right)} \right)} \right) $$
(4.1)

The estimates of all parameters are obtained by differentiating the log-likelihood function in (4.1) with respect to each parameter, as follows

$$ \begin{aligned} \frac{\partial L}{{\partial \alpha_{1} }} & = \frac{n}{{\alpha_{1} }} + \mathop \sum \limits_{i = 1}^{n} \ln \left( {\frac{{y_{1i} }}{{\beta_{1} }}} \right) - \mathop \sum \limits_{i = 1}^{n} \left( {\frac{{y_{1i} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} \ln \frac{{y_{1i} }}{{\beta_{1} }} + \mathop \sum \limits_{i = 1}^{n} \frac{{ - 2 \theta \,a\left( {y_{2i} , \alpha_{2} , \beta_{2} } \right) e^{{ - \left( {\frac{{y_{1i} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} }} \left( {\frac{{y_{1i} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} \ln \frac{{y_{1i} }}{{\beta_{1} }}}}{{1 + \theta \,a\left( {y_{1i} , \alpha_{1} , \beta_{1} } \right)a\left( {y_{2i} , \alpha_{2} , \beta_{2} } \right)}} \\ \frac{\partial L}{{\partial \alpha_{2} }} & = \frac{n}{{\alpha_{2} }} + \mathop \sum \limits_{i = 1}^{n} \ln \left( {\frac{{y_{2i} }}{{\beta_{2} }}} \right) - \mathop \sum \limits_{i = 1}^{n} \left( {\frac{{y_{2i} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} \ln \frac{{y_{2i} }}{{\beta_{2} }} + \mathop \sum \limits_{i = 1}^{n} \frac{{ - 2 \theta \,a\left( {y_{1i} , \alpha_{1} , \beta_{1} } \right) e^{{ - \left( {\frac{{y_{2i} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} \left( {\frac{{y_{2i} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} \ln \frac{{y_{2i} }}{{\beta_{2} }}}}{{1 + \theta \,a\left( {y_{1i} , \alpha_{1} , \beta_{1} } \right)a\left( {y_{2i} , \alpha_{2} , \beta_{2} } \right)}} \\ \frac{\partial L}{{\partial \beta_{1} }} & = \frac{ - n}{{\beta_{1} }} - \frac{{n\left( {\alpha_{1} - 1} \right)}}{{\beta_{1} }} + \frac{{\alpha_{1} }}{{\beta_{1} }}\mathop \sum \limits_{i = 1}^{n} \left( {\frac{{y_{1i} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} + \mathop \sum \limits_{i = 1}^{n} \frac{{2\alpha_{1} \theta y_{1i} \left( {\frac{{y_{1i} }}{{\beta_{1} }}} \right)^{{\alpha_{1} - 1}} e^{{ - \left( {\frac{{y_{1i} }}{{\beta_{1} }}} \right)^{{\alpha_{1} }} }} a\left( {y_{2i} , \alpha_{2} , \beta_{2} } \right)}}{{\beta_{1}^{2} \left( {1 + \theta \,a\left( {y_{1i} , \alpha_{1} , \beta_{1} } \right)a\left( {y_{2i} , \alpha_{2} , \beta_{2} } \right)} \right)}} \\ \frac{\partial L}{{\partial \beta_{2} }} & = \frac{ - n}{{\beta_{2} }} - \frac{{n\left( {\alpha_{2} - 1} \right)}}{{\beta_{2} }} + \frac{{\alpha_{2} }}{{\beta_{2} }}\mathop \sum \limits_{i = 1}^{n} \left( {\frac{{y_{2i} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} + \mathop \sum \limits_{i = 1}^{n} \frac{{2\alpha_{2} \theta y_{2i} \left( {\frac{{y_{2i} }}{{\beta_{2} }}} \right)^{{\alpha_{2} - 1}} e^{{ - \left( {\frac{{y_{2i} }}{{\beta_{2} }}} \right)^{{\alpha_{2} }} }} a\left( {y_{1i} , \alpha_{1} , \beta_{1} } \right)}}{{\beta_{2}^{2} \left( {1 + \theta \,a\left( {y_{1i} , \alpha_{1} , \beta_{1} } \right)a\left( {y_{2i} , \alpha_{2} , \beta_{2} } \right)} \right)}} \\ \end{aligned} $$

and

$$ \frac{\partial L}{\partial \theta } = \mathop \sum \limits_{i = 1}^{n} \frac{{\left( {a\left( {y_{1i} , \alpha_{1} , \beta_{1} } \right)} \right)\left( {a\left( {y_{2i} , \alpha_{2} , \beta_{2} } \right)} \right)}}{{\left( {1 + \theta \left( {a\left( {y_{1i} , \alpha_{1} , \beta_{1} } \right)} \right)\left( {a\left( {y_{2i} , \alpha_{2} , \beta_{2} } \right)} \right)} \right)}}. $$

The MLE \( \hat{\delta } = \left( {\hat{\alpha }_{1} , \hat{\beta }_{1} , \hat{\alpha }_{2} , \hat{\beta }_{2} , \hat{\theta }} \right) \) can be obtained by solving simultaneously the likelihood equations

$$ \left. {\frac{\partial L}{\partial \theta }} \right|_{{\theta = \hat{\theta }}} = 0,\quad \left. {\frac{\partial L}{{\partial \beta_{j} }}} \right|_{{\beta_{j} = \hat{\beta }_{j} }} = 0,\quad \left. {\frac{\partial L}{{\partial \alpha_{j} }}} \right|_{{\alpha_{j} = \hat{\alpha }_{j} }} = 0,\quad j = 1,2 $$

These likelihood equations have no closed-form solution, so they must be solved numerically using a nonlinear optimization algorithm, as in the sketch below.
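In practice the joint log-likelihood (4.1) is simply maximized numerically, for example with optim in R. The following is a minimal sketch, not the authors' code; the starting values and box constraints are illustrative, and y1, y2 denote the observed samples.

```r
# Joint ML estimation of (alpha1, beta1, alpha2, beta2, theta) by maximizing (4.1)
negloglik <- function(par, y1, y2) {
  a1 <- par[1]; b1 <- par[2]; a2 <- par[3]; b2 <- par[4]; th <- par[5]
  u1 <- pweibull(y1, a1, b1); u2 <- pweibull(y2, a2, b2)
  dens <- dweibull(y1, a1, b1) * dweibull(y2, a2, b2) *
    (1 + th * (1 - 2 * u1) * (1 - 2 * u2))       # FGMBW density (2.3)
  -sum(log(dens))
}
fit_mle <- optim(c(1, 1, 1, 1, 0), negloglik, y1 = y1, y2 = y2,
                 method = "L-BFGS-B",
                 lower = c(1e-4, 1e-4, 1e-4, 1e-4, -0.999),
                 upper = c(Inf, Inf, Inf, Inf, 0.999),
                 hessian = TRUE)
fit_mle$par                                      # MLEs of (alpha1, beta1, alpha2, beta2, theta)
```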

4.2 Estimation by Inference Functions for Margins (IFM)

Joe [9] introduced this parametric method, which uses two steps of estimation. In the first step, each marginal distribution is estimated separately:

$$ \ln L_{1} = \mathop \sum \limits_{j = 1}^{n} \ln f_{1} \left( {y_{1j} ,\delta_{1} } \right);\quad \ln L_{2} = \mathop \sum \limits_{j = 1}^{n} \ln f_{2} \left( {y_{2j} ,\delta_{2} } \right) $$
(4.2)

Then, in the second step, the copula parameter is estimated by maximizing the log-likelihood function of the copula density using the ML estimates of the margins \( \hat{F}_{1} \left( {y_{1j} ,\delta_{1} } \right) \) and \( \hat{F}_{2} \left( {y_{2j} ,\delta_{2} } \right) \). Considering the marginals of Eq. (2.3), the log-likelihood function of a Weibull distribution is defined as

$$ \ln L_{j} = n({\text{ln }}\alpha_{j} - \ln \beta_{j} ) + \left( {\alpha_{j} - 1} \right) \mathop \sum \limits_{i = 1}^{n} \ln \left( {\frac{{y_{ji} }}{{\beta_{j} }}} \right) - \mathop \sum \limits_{i = 1}^{n} \left( {\frac{{y_{ji} }}{{\beta_{j} }}} \right)^{{\alpha_{j} }} ; j = 1,2 $$
(4.3)

The MLEs \( \left( {\hat{\alpha }_{1} , \hat{\beta }_{1} , \hat{\alpha }_{2} , \hat{\beta }_{2} } \right) \) can be obtained by solving simultaneously the likelihood equations

$$ \left. {\frac{\partial \ln L}{{\partial \beta_{j} }}} \right|_{{\beta_{j} = \hat{\beta }_{j} }} = 0,\quad \left. {\frac{\partial \ln L}{{\partial \alpha_{j} }}} \right|_{{\alpha_{j} = \hat{\alpha }_{j} }} = 0, \quad j = 1,2 $$

then

$$ \hat{F}_{j} \left( {y_{ j} } \right) = 1 - e^{{ - \left( {\frac{{y_{j} }}{{\hat{\beta }_{j} }}} \right)^{{\hat{\alpha }_{j} }} }} ;\quad j = 1,2 $$

and, using the estimates from the previous step, the IFM log-likelihood for the copula parameter of the FGMBW distribution is defined as

$$ \ln L_{IFM} = \mathop \sum \limits_{i = 1}^{n} \ln \left( {1 + \theta \left( {1 - 2\hat{F}_{1} \left( {y_{1i} } \right)} \right)\left( {1 - 2\hat{F}_{2} \left( {y_{2i} } \right)} \right)} \right) $$
(4.4)

The estimate of the copula parameter is obtained by differentiating the log-likelihood function in (4.4) with respect to \( \theta \), which gives

$$ \frac{{\partial \ln L_{IFM} }}{\partial \theta } = \mathop \sum \limits_{i = 1}^{n} \frac{{\left( {a\left( {y_{1i} , \hat{\alpha }_{1} , \hat{\beta }_{1} } \right)} \right)\left( {a\left( {y_{2i} , \hat{\alpha }_{2} , \hat{\beta }_{2} } \right)} \right)}}{{\left( {1 + \theta \left( {a\left( {y_{1i} , \hat{\alpha }_{1} , \hat{\beta }_{1} } \right)} \right)\left( {a\left( {y_{2i} ,\hat{\alpha }_{2} ,\hat{\beta }_{2} } \right)} \right)} \right)}} $$

The estimate is obtained by numerically solving the likelihood equation

$$ \left. {\frac{{\partial \ln L_{IFM} }}{\partial \theta }} \right|_{{\theta = \hat{\theta }}} = 0 $$

There is no closed-form expression for the MLE \( \hat{\theta } \) and its computation has to be performed numerically using a nonlinear optimization algorithm.
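A compact two-step IFM sketch in R follows; it is our own illustration, with the margins fitted by optim and the copula log-likelihood (4.4) then maximized in \( \theta \) alone with optimize. As before, y1 and y2 denote the observed samples.

```r
# Step 1: fit each Weibull margin separately, cf. (4.3)
marg_nll <- function(par, y) -sum(dweibull(y, shape = par[1], scale = par[2], log = TRUE))
m1 <- optim(c(1, 1), marg_nll, y = y1, method = "L-BFGS-B", lower = c(1e-4, 1e-4))$par
m2 <- optim(c(1, 1), marg_nll, y = y2, method = "L-BFGS-B", lower = c(1e-4, 1e-4))$par

# Step 2: plug the fitted margins into the copula log-likelihood (4.4)
F1 <- pweibull(y1, m1[1], m1[2]); F2 <- pweibull(y2, m2[1], m2[2])
copll <- function(th) sum(log(1 + th * (1 - 2 * F1) * (1 - 2 * F2)))
theta_ifm <- optimize(copll, interval = c(-1, 1), maximum = TRUE)$maximum
```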

4.3 Estimation by Semi-Parametric Method [SP]

Kim et al. [10] introduced an estimation procedure carried out in two stages, as in IFM, but with the marginal distributions estimated non-parametrically by their sample empirical distribution functions. In this method, the observations are transformed into pseudo-observations using the empirical distribution function of each margin. The empirical distribution function is defined as

$$ \tilde{F}_{i} \left( {y_{i} } \right) = \frac{{\mathop \sum \nolimits_{j = 1}^{n} I\left( {Y_{i,j} \le y_{i} } \right)}}{n + 1} ;\quad i = 1,2 $$
(4.5)

Then, \( \theta \) is estimated by the maximizer of the pseudo log-likelihood

$$ \mathop \sum \limits_{i = 1}^{n} \ln c\left( {\tilde{F}_{1} \left( {Y_{{1{\text{i}}}} } \right),\tilde{F}_{2} \left( {Y_{{2{\text{i}}}} } \right); \theta } \right) $$
(4.6)

Considering Eq. (4.6), the pseudo log-likelihood function of the FGMBW distribution is defined as

$$ \ln L_{SP} = \mathop \sum \limits_{i = 1}^{n} \ln \left( {1 + \theta \left( {1 - 2\tilde{F}_{1} \left( {y_{1i} } \right)} \right)\left( {1 - 2\tilde{F}_{2} \left( {y_{2i} } \right)} \right)} \right) $$
(4.7)

There is no closed-form expression for the estimator \( \hat{\theta } \) from Eq. (4.7), and its computation has to be performed numerically using statistical software, as in the sketch below.
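A semi-parametric sketch in R: the pseudo-observations of (4.5) replace the fitted margins in the copula log-likelihood (4.7). With no ties, rank(y)/(n + 1) equals the rescaled empirical cdf in (4.5); this is our own illustration.

```r
# Pseudo-observations via the rescaled empirical cdfs (4.5)
n  <- length(y1)
U1 <- rank(y1) / (n + 1)
U2 <- rank(y2) / (n + 1)
sp_ll <- function(th) sum(log(1 + th * (1 - 2 * U1) * (1 - 2 * U2)))   # (4.7)
theta_sp <- optimize(sp_ll, interval = c(-1, 1), maximum = TRUE)$maximum
```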

5 Asymptotic Confidence Intervals

In this section, we construct asymptotic confidence intervals for the parameters; the ML, IFM and SP estimates can all be used for this purpose. We first obtain \( I\left( {\hat{\alpha }_{1} , \hat{\beta }_{1} , \hat{\alpha }_{2} , \hat{\beta }_{2} , \hat{\theta }} \right) \), the inverse of the observed Fisher information matrix, defined as:

$$ I\left( {\hat{\alpha }_{1} , \hat{\beta }_{1} , \hat{\alpha }_{2} , \hat{\beta }_{2} , \hat{\theta }} \right) = \left[ {\begin{array}{*{20}c} { - L_{{\alpha_{1} \alpha_{1} }}^{\prime \prime } } & { - L_{{\alpha_{1} \beta_{1} }}^{\prime \prime } } & { - L_{{\alpha_{1} \alpha_{2} }}^{\prime \prime } } & { - L_{{\alpha_{1} \beta_{2} }}^{\prime \prime } } & { - L_{{\alpha_{1} \theta }}^{\prime \prime } } \\ { - L_{{\beta_{1} \alpha_{1} }}^{\prime \prime } } & { - L_{{\beta_{1} \beta_{1} }}^{\prime \prime } } & { - L_{{\beta_{1} \alpha_{2} }}^{\prime \prime } } & { - L_{{\beta_{1} \beta_{2} }}^{\prime \prime } } & { - L_{{\beta_{1} \theta }}^{\prime \prime } } \\ { - L_{{\alpha_{2} \alpha_{1} }}^{\prime \prime } } & { - L_{{\alpha_{2} \beta_{1} }}^{\prime \prime } } & { - L_{{\alpha_{2} \alpha_{2} }}^{\prime \prime } } & { - L_{{\alpha_{2} \beta_{2} }}^{\prime \prime } } & { - L_{{\alpha_{2} \theta }}^{\prime \prime } } \\ { - L_{{\beta_{2} \alpha_{1} }}^{\prime \prime } } & { - L_{{\beta_{2} \beta_{1} }}^{\prime \prime } } & { - L_{{\beta_{2} \alpha_{2} }}^{\prime \prime } } & { - L_{{\beta_{2} \beta_{2} }}^{\prime \prime } } & { - L_{{\beta_{2} \theta }}^{\prime \prime } } \\ { - L_{{\theta \alpha_{1} }}^{\prime \prime } } & { - L_{{\theta \beta_{1} }}^{\prime \prime } } & { - L_{{\theta \alpha_{2} }}^{\prime \prime } } & { - L_{{\theta \beta_{2} }}^{\prime \prime } } & { - L_{\theta \theta }^{\prime \prime } } \\ \end{array} } \right]^{ - 1} = \left[ {\begin{array}{*{20}c} {I_{{\hat{\alpha }_{1} \hat{\alpha }_{1} }} } & {I_{{\hat{\alpha }_{1} \hat{\beta }_{1} }} } & {I_{{\hat{\alpha }_{1} \hat{\alpha }_{2} }} } & {I_{{\hat{\alpha }_{1} \hat{\beta }_{2} }} } & {I_{{\hat{\alpha }_{1} \hat{\theta }}} } \\ {I_{{\hat{\beta }_{1} \hat{\alpha }_{1} }} } & {I_{{\hat{\beta }_{1} \hat{\beta }_{1} }} } & {I_{{\hat{\beta }_{1} \hat{\alpha }_{2} }} } & {I_{{\hat{\beta }_{1} \hat{\beta }_{2} }} } & {I_{{\hat{\beta }_{1} \hat{\theta }}} } \\ {I_{{\hat{\alpha }_{2} \hat{\alpha }_{1} }} } & {I_{{\hat{\alpha }_{2} \hat{\beta }_{1} }} } & {I_{{\hat{\alpha }_{2} \hat{\alpha }_{2} }} } & {I_{{\hat{\alpha }_{2} \hat{\beta }_{2} }} } & {I_{{\hat{\alpha }_{2} \hat{\theta }}} } \\ {I_{{\hat{\beta }_{2} \hat{\alpha }_{1} }} } & {I_{{\hat{\beta }_{2} \hat{\beta }_{1} }} } & {I_{{\hat{\beta }_{2} \hat{\alpha }_{2} }} } & {I_{{\hat{\beta }_{2} \hat{\beta }_{2} }} } & {I_{{\hat{\beta }_{2} \hat{\theta }}} } \\ {I_{{\hat{\theta }\hat{\alpha }_{1} }} } & {I_{{\hat{\theta }\hat{\beta }_{1} }} } & {I_{{\hat{\theta }\hat{\alpha }_{2} }} } & {I_{{\hat{\theta }\hat{\beta }_{2} }} } & {I_{{\hat{\theta }\hat{\theta }}} } \\ \end{array} } \right] $$
(5.1)

Approximate 95% two-sided confidence intervals for \( \left( {\alpha_{1} , \beta_{1} , \alpha_{2} , \beta_{2} , \theta } \right) \) are, respectively,

$$ \hat{\alpha }_{i} \pm Z_{0.025} \sqrt {I_{{\hat{\alpha }_{i} \hat{\alpha }_{i} }}^{ } } , \quad \hat{\beta }_{i} \pm Z_{0.025} \sqrt {I_{{\hat{\beta }_{i} \hat{\beta }_{i} }} } ; \quad i = 1,2, \quad {\text{and}}\quad \hat{\theta } \pm Z_{0.025} \sqrt {I_{{\hat{\theta }\hat{\theta }}}^{ } } $$
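With the Hessian returned by a numerical optimizer, the standard errors and the intervals above follow directly. The sketch below continues the MLE sketch of Sect. 4.1 (fit_mle was obtained there with hessian = TRUE); it is an illustration, not the authors' code.

```r
# Observed information = Hessian of the negative log-likelihood at the MLE;
# its inverse is the asymptotic variance-covariance matrix
vcov_hat <- solve(fit_mle$hessian)
se <- sqrt(diag(vcov_hat))
ci <- cbind(lower = fit_mle$par - 1.96 * se,
            upper = fit_mle$par + 1.96 * se)
rownames(ci) <- c("alpha1", "beta1", "alpha2", "beta2", "theta")
ci                                              # approximate 95% confidence intervals
```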

6 Simulation Study

In this section, a Monte Carlo simulation is carried out to compare the copula-based estimation methods (MLE, IFM and SP) for estimating the FGMBW distribution parameters, using the R language.

Simulation algorithm: Monte Carlo experiments were carried out on data generated from the FGMBW distribution, where \( Y_{1} \) and \( Y_{2} \) are marginally Weibull distributed with shape parameters \( \alpha_{\text{i}} \) and scale parameters \( \beta_{\text{i}} \), i = 1, 2. The values of the parameters \( \alpha_{1} ,\beta_{1} , \alpha_{2} , \beta_{2} \) and θ are chosen according to the following cases for generating the random variables:

$$ \begin{aligned} & {\text{Case 1:}}\quad \left\{ {\begin{array}{*{20}c} {\left( {\alpha_{1} = 1.8,\quad \beta_{1} = 3.5,\quad \alpha_{2} = 1.5,\quad \beta_{2} = 2.5,\quad \theta = 0.25} \right)} \\ {\left( {\alpha_{1} = 1.8,\quad \beta_{1} = 3.5,\quad \alpha_{2} = 1.5,\quad \beta_{2} = 2.5,\quad \theta = 0.75} \right)} \\ {\begin{array}{*{20}c} {\left( {\alpha_{1} = 1.8,\quad \beta_{1} = 3.5,\quad \alpha_{2} = 1.5,\quad \beta_{2} = 2.5,\quad \theta = - \,0.25} \right)} \\ {\left( {\alpha_{1} = 1.8,\quad \beta_{1} = 3.5,\quad \alpha_{2} = 1.5,\quad \beta_{2} = 2.5,\quad \theta = - \,0.75} \right)} \\ \end{array} } \\ \end{array} } \right. \\ & {\text{Case 2:}}\quad \left\{ {\begin{array}{*{20}c} {\left( {\alpha_{1} = 1.5, \quad \beta_{1} = 1.2,\quad \alpha_{2} = 2.1,\quad \beta_{2} = 2.8,\quad \theta = 0.25} \right)} \\ {\left( {\alpha_{1} = 1.5, \quad \beta_{1} = 1.2,\quad \alpha_{2} = 2.1,\quad \beta_{2} = 2.8,\quad \theta = 0.75} \right)} \\ \end{array} } \right. \\ \end{aligned} $$

Different sample sizes \( n \) = 30, 50, 70, 100, 125 and 150 are considered. All computations are carried out in the R language. The estimation methods are compared by calculating the bias, the MSE and the length of the confidence interval (L.CI) for each method, as follows

$$ Bias = \left( {\hat{\delta } - \delta } \right) . $$
(6.1)

where \( \hat{\delta } \) is the estimated value of \( \delta \).

$$ MSE = Mean\left( {\hat{\delta } - \delta } \right)^{2} . $$
(6.2)

and

$$ L.CI = Upper.CI - Lower.CI $$
(6.3)

We restricted the number of repeated-samples to 1000.
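One cell of this Monte Carlo study (one parameter vector, one sample size, one estimator of \( \theta \)) can be sketched as below; this is our own skeleton, it reuses the rfgmbw sampler of Sect. 3.3, and for brevity it plugs in the true margins instead of re-estimating them in each replication, so it only approximates the IFM column of the tables.

```r
# One simulation cell: bias and MSE of the copula-parameter estimate
set.seed(123)
R <- 1000; n <- 50
a1 <- 1.8; b1 <- 3.5; a2 <- 1.5; b2 <- 2.5; theta <- 0.25   # Case 1 with theta = 0.25
est <- replicate(R, {
  d  <- rfgmbw(n, a1, b1, a2, b2, theta)                    # generate an FGMBW sample
  F1 <- pweibull(d[, 1], a1, b1); F2 <- pweibull(d[, 2], a2, b2)
  optimize(function(th) sum(log(1 + th * (1 - 2 * F1) * (1 - 2 * F2))),
           interval = c(-1, 1), maximum = TRUE)$maximum
})
c(bias = mean(est) - theta, mse = mean((est - theta)^2))    # criteria (6.1) and (6.2)
```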

Spearman's and Kendall's correlation coefficients of the FGMBW distribution for the chosen parameter values, computed from Eqs. (1.7) and (1.8), are reported in Table 2.

Table 2 Spearman and Kendall correlations of the FGMBW distribution for various values of the parameters
Fig. 4 Bias and MSE of the copula parameter estimate for different methods and different cases with varying sample size

Fig. 5 The bias and MSE of the copula parameter for the FGMBW distribution with various values of the parameters and sample size

On the basis of the results summarized in the tables and figures, the following conclusions can be drawn. As the sample size increases for a fixed parameter vector δ, the bias, MSE and length of confidence interval of the estimates decrease for all the considered methods. For large sample sizes the methods are nearly equivalent: the differences are small and there are no significant differences in bias and MSE between the alternative methods and the MLE method. A comparison between parametric and non-parametric estimation has also been made. The parametric estimation methods are better than the non-parametric method when the copula parameter is not large in absolute value, approximately within (− 0.7, 0.7). In the SP method the marginal distributions are estimated non-parametrically by their sample empirical distributions before the copula parameter is estimated; whenever the value of the copula parameter is close to − 1 or 1, the efficiency of the SP method increases relative to the other methods. For the copula parameter θ, the IFM method is better than the other methods. This is because IFM is a two-step procedure: first the marginal distribution parameters are estimated, and then the copula parameter is estimated taking the previous marginal estimates into account, which yields greater efficiency (Figs. 4, 5, Tables 3, 4, 5, 6, 7, 8).

Table 3 Estimation of the parameters of FGMBW distribution: case 1.1
Table 4 Estimation of the parameters of FGMBW distribution: case 1.2
Table 5 Estimation of the parameters of FGMBW distribution: case 2.1
Table 6 Estimation of the parameters of FGMBW distribution: case 2.2
Table 7 Estimation of the parameters of FGMBW distribution: case 1.3
Table 8 Estimation of the parameters of FGMBW distribution: case 1.4

On the basis of the results summarized in the tables, some further conclusions can be drawn. As the sample size increases for fixed values of \( \delta \), the MSE of the estimates decreases for all the considered methods, and for large sample sizes the methods are nearly equivalent; however, IFM performs better than the other methods as θ (the copula parameter) increases, while for smaller values of θ the MLE is better than the other methods in terms of MSE. The SP method, in which the marginal distributions are estimated non-parametrically by their sample empirical distributions before the copula parameter is estimated, provides the comparison between parametric and non-parametric estimation.

7 Application of Real Data

The data set consists of 30 patients from McGilchrist and Aisbett [15]. Let \( Y_{1} \) denote the first recurrence time and \( Y_{2} \) the second recurrence time, given by \( Y_{1} = (8,23,22,447,30,24,7,511,53,15,7,141,96,149,536,17,185,292,22,15,152,402,13,39,12,113,132,34,2,130) \) and \( Y_{2} = (16,13,28,318,12,245,9,30,196,154,333,8,38,70,25,4,117,114,159,108,362,24,66,46,40,201,156,30,25,26) \). Elaal and Jarwan [3] discussed the estimation of the parameters of the bivariate generalized exponential distribution for these data (Table 9).

Table 9 The correlation coefficient and test of correlation for real data
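The rank correlations and tests summarized in Table 9 can be computed along the following lines; this is a sketch with our own variable names, using base R only.

```r
# Recurrence times from McGilchrist and Aisbett [15], as listed above
y1 <- c(8, 23, 22, 447, 30, 24, 7, 511, 53, 15, 7, 141, 96, 149, 536, 17, 185,
        292, 22, 15, 152, 402, 13, 39, 12, 113, 132, 34, 2, 130)
y2 <- c(16, 13, 28, 318, 12, 245, 9, 30, 196, 154, 333, 8, 38, 70, 25, 4, 117,
        114, 159, 108, 362, 24, 66, 46, 40, 201, 156, 30, 25, 26)
cor.test(y1, y2, method = "spearman")   # Spearman's rho and its test
cor.test(y1, y2, method = "kendall")    # Kendall's tau and its test
```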

Genest et al. [7] introduced bootstrap-based goodness-of-fit tests for copulas. We use this approach to test the fit of the Farlie–Gumbel–Morgenstern (FGM) copula using an R package (Table 10).

Table 10 Goodness of fit test of FGM copula

This is done using a parametric bootstrap with N = 10,000 replications and the empirical copula estimate.

A one-sample Kolmogorov–Smirnov goodness-of-fit test is also applied to each Weibull margin (Table 11).

Table 11 Goodness of fit test of Weibull distribution
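The marginal Weibull fits and Kolmogorov–Smirnov checks of Table 11 can be carried out along these lines; the sketch assumes the MASS package for fitdistr, and the estimates are produced at run time rather than quoted from the table.

```r
library(MASS)                                  # for fitdistr()
fit1 <- fitdistr(y1, "weibull")                # ML fit of the first Weibull margin
fit2 <- fitdistr(y2, "weibull")                # ML fit of the second Weibull margin
ks.test(y1, "pweibull", shape = fit1$estimate["shape"], scale = fit1$estimate["scale"])
ks.test(y2, "pweibull", shape = fit2$estimate["shape"], scale = fit2$estimate["scale"])
```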

A comparison has been made with the FGM bivariate gamma (FGMBG) distribution, discussed by Kotz et al. [11], the bivariate Marshall–Olkin Weibull (BMOW) distribution, discussed by Kundu and Dey [12], and the FGM bivariate generalized exponential (FGMBGE) distribution, discussed by Elaal and Jarwan [3].

In Table 12, it is observed that the FGMBW model provides a better fit than the other tested models (FGMBG, FGMBGE, BMOW), since it has the smallest values of L, AIC and BIC. The FGMBW distribution is thus a good alternative to several bivariate lifetime distributions for modeling non-negative real-valued data in applications.

Table 12 The estimated parameters of the bivariate distributions

In Table 13, it is observed that the IFM method provides a better fit than the other tested methods, since it has the smallest standard deviations and L.CI for the parameters of the FGMBW distribution.

Table 13 The estimates and the corresponding standard deviations of the parameters of the FGMBW distribution

8 Conclusion

In this paper, we have proposed the FGMBW distribution based on the FGM copula function. Moreover, we have derived the reliability and hazard rate functions of the FGMBW distribution, so it can be used quite effectively for life-testing data. Additionally, the new FGMBW model can be used as an alternative to other bivariate Weibull distributions and may work better, since the marginals of the FGMBW distribution have the same form as the base distribution and its cdf and product moments have closed forms. A comparison between different estimation methods for the FGMBW distribution has been carried out. The simulation results show that the best method of estimation is the IFM method, whereas the real data application shows that MLE performs better than its counterparts. Hence, we can argue that the IFM and ML estimators are the best performing estimators for the FGMBW distribution.