1 Introduction

Discrete distributions defined on the set of integers Z have attracted scholarly interest only recently, so bivariate distributions on Z2 remain relatively undeveloped; this article aims to develop new distributions on Z2. Over the last two decades, discrete distributions on Z that take both positive and negative integer values have garnered considerable attention. Skellam [1] defined the Skellam distribution as the difference of two Poisson variates, and Karlis and Ntzoufras [2] studied the correlated case. The Skellam distribution has been applied in many fields. For instance, Jiang [3] developed an algorithm to cluster genes into distinct groups based on the differences of RNA counts between different treatments, and Koopman et al. [4] modeled the intraday stochastic volatility of stocks using a dynamic Skellam model for high-frequency tick-by-tick discrete price changes. Other univariate distributions defined on the set of integers Z have also been proposed. Kozubowski and Inusah [5] introduced the discrete skew Laplace distribution as the difference between two geometric variates. Ong et al. [6] introduced the difference between two discrete random variables from the Panjer family. Alzaid and Omair [7] developed the extended binomial distribution as a byproduct of extending Moran's characterization of the Poisson distribution to the Skellam distribution, and Omair et al. [8] defined the trinomial difference distribution as an extension of the binomial difference distribution.

Bivariate discrete distributions on N2 and their applications in different fields have developed rapidly. The monographs of Kocherlakota and Kocherlakota [9] and Johnson et al. [10] analyze some basic discrete bivariate distributions and their properties. Lai [11] presented a thorough review of methods for constructing bivariate discrete distributions, and Sarabia and Gómez-Déniz [12] reviewed the construction of multivariate distributions more generally. The most popular method for constructing a bivariate distribution is trivariate reduction, which builds two dependent variables from three independent ones. It mostly relies on the closure of families of distributions under certain operations, such as the sum or the minimum of two variables. Let Wi, i = 0, 1, 2, be three independent variables and define the two random variables X = h1(W1, W0) and Y = h2(W2, W0), where h1 and h2 are functions chosen to produce the required marginal distributions. As W0 is common to the definitions of X and Y, the variables X and Y are in general dependent.

Similar methods have been used recently to introduce bivariate distributions on Z2. Bulla et al. [13], Odhah and Alzaid [14], and Genest and Mesfioui [15] introduced the bivariate Skellam distribution and its extensions. Many real-life applications require bivariate discrete distributions on Z2. Akpoue and Angers [16] modeled the relationship between the number of points per season and the goal differential of soccer teams in terms of covariates using a multivariate Skellam distribution. Bulla et al. [17] defined bivariate autoregressive integer-valued time-series models based on the signed thinning operator.

In this paper, we define three new bivariate distributions on Z2 to enrich this class. These distributions differ in the dependence they allow between their components: some have positive correlation only, while others allow negative correlation as well. Further, the marginal distributions can be overdispersed or underdispersed. The new bivariate distributions are based on the trinomial difference distribution, the extended binomial distribution, and the discrete skew Laplace distribution. Extensions of these distributions with shifted parameters are also discussed. Their basic properties and the estimation of their parameters by maximum likelihood and the method of moments are presented. Applications of these distributions to real data are also illustrated.

The paper is organized as follows: in Sect. 2, bivariate distributions on Z2 and their properties are introduced. In Sect. 3, we consider applications of these distributions to real data, with several comparisons.

2 Bivariate Distributions on Z2

2.1 Bivariate Skellam

Definition 2.1

(Bulla et al. [13] and Odhah and Alzaid [14]) A random vector \(({\mathrm{Z}}_{1},{\mathrm{Z}}_{2})\) has the bivariate Skellam distribution with parameters \({\uptheta }_{0}\ge 0,{\uptheta }_{1}>0\) and \({\uptheta }_{2}>0\) (denoted by BSkellam(\({\uptheta }_{0},{\uptheta }_{1},{\uptheta }_{2}\))) if its joint probability mass function is given by

$$ P\left( {Z_{1} = z_{1} ,Z_{2} = z_{2} } \right) = e^{{ - \left( {\theta_{0} + \theta_{1} + \theta_{2} } \right)}} \theta_{1}^{{z_{1} }} \theta_{2}^{{z_{2} }} \, _{0} \tilde{F}_{2} \left( {;z_{1} + 1,z_{2} + 1;\theta_{0} \theta_{1} \theta_{2} } \right)\;for \, all \, \left( {z_{1} ,z_{2} } \right) \in {\mathbb{Z}}^{2} , $$
(2.1)

where \( _{0} \tilde{F}_{2} \left( {;z_{1} ,z_{2} ;\theta } \right) = \sum\limits_{k = 0}^{\infty } {\frac{{\theta^{k} }}{{k!\left( {z_{1} + k - 1} \right)!\left( {z_{2} + k - 1} \right)!}}}\).
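For readers who wish to evaluate (2.1) numerically, the regularized series \({}_{0}\tilde{F}_{2}\) can simply be truncated. The following Python sketch is ours (the names `rgamma`, `f02` and `bskellam_pmf` are not from the paper); it uses the fact that \(1/\Gamma\) vanishes at nonpositive integers, so terms with negative factorial arguments drop out automatically:

```python
import math

def rgamma(x):
    # Reciprocal gamma function 1/Gamma(x); it is 0 at nonpositive integers,
    # which makes terms with "negative factorials" vanish automatically.
    if x <= 0 and x == int(x):
        return 0.0
    return 1.0 / math.gamma(x)

def f02(c1, c2, t, terms=60):
    # Truncated regularized 0F~2(; c1, c2; t) = sum_k t^k / (k! Gamma(c1+k) Gamma(c2+k))
    return sum(t ** k / math.factorial(k) * rgamma(c1 + k) * rgamma(c2 + k)
               for k in range(terms))

def bskellam_pmf(z1, z2, t0, t1, t2):
    # Joint pmf (2.1) of BSkellam(t0, t1, t2) at integers (z1, z2)
    return math.exp(-(t0 + t1 + t2)) * t1 ** z1 * t2 ** z2 * f02(z1 + 1, z2 + 1, t0 * t1 * t2)
```

Summing `bskellam_pmf` over a sufficiently wide grid of integer pairs should recover total mass one, with sample moments matching (2.2)-(2.4) up to truncation error.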

The mean, the covariance matrix, and the correlation are, respectively,

$$ \mu = (\theta_{1} - \theta_{0} ,\theta_{2} - \theta_{0} ), $$
(2.2)
$$ {\Sigma } = \left( {\begin{array}{*{20}c} {\theta_{1} + \theta_{0} } & {\theta_{0} } \\ {\theta_{0} } & {\theta_{2} + \theta_{0} } \\ \end{array} } \right), $$
(2.3)
$$ {\text{and}}\;{\text{Corr}}\left( {Z_{1} ,Z_{2} } \right) = \frac{{\theta_{0} }}{{\sqrt {\left( {\theta_{1} + \theta_{0} } \right)\left( {\theta_{2} + \theta_{0} } \right)} }}. $$
(2.4)

Note that the marginal distributions are overdispersed and the correlation is positive.

The characteristic function is

$$ \phi_{{Z_{1} ,Z_{2} }} \left( {u,v} \right) = \exp \left\{ {\theta_{1} \left( {e^{iu} - 1} \right) + \theta_{2} \left( {e^{iv} - 1} \right) + \theta_{0} \left( {e^{ - iu - iv} - 1} \right)} \right\}. $$
(2.5)

The marginal distributions are Zi ∼ Skellam(θi, θ0), i = 1, 2.

A shifted bivariate Skellam distribution is obtained by considering the distribution of the bivariate vector \(\left( {Z_{1}^{\left( s \right)} ,Z_{2}^{\left( s \right)} } \right) = \left( {Z_{1} + k_{1} ,Z_{2} + k_{2} } \right)\) for fixed integers \(k_{1}\) and \(k_{2}\).

Note that in the shifted distribution the marginals can be overdispersed, underdispersed, or equidispersed, since shifting changes the mean but not the variance: for example, with \({\theta }_{0}=0.5\) and \({\theta }_{1}=1\), \({Z}_{1}\) has mean 0.5 and variance 1.5, whereas \({Z}_{1}+2\) has mean 2.5 and the same variance 1.5.

2.2 Bivariate Quadrinomial Difference

Omair et al. [8] introduced the trinomial difference distribution on the set of integers, which extends the binomial difference distribution. A random variable Z on the set \(\{0,\pm 1,\ldots ,\pm \mathrm{n}\}\) has the trinomial difference distribution (Z \(\sim \) TD(n, \(\alpha \), \(\gamma \))) with parameters \(n\in {\mathbb{N}}\), \(0\le \alpha ,\gamma \le 1\) and \(\alpha +\gamma <1\) if it has the following probability mass function:

$$ \begin{aligned} P\left( {Z = z} \right) & = \sum\limits_{{y = \max \left( {0, - z} \right)}}^{{\left[ {\frac{n - z}{2}} \right]}} \left( {\begin{array}{*{20}l} n \hfill \\ {z + y,y} \hfill \\ \end{array} } \right)\alpha^{z + y} \gamma^{y} (1 - \alpha - \gamma )^{n - z - 2y} ,\\ & \quad z = 0, \pm 1,\; \pm \;2, \ldots ,\, \pm \,n. \end{aligned} $$

\(\left[ {\frac{n - z}{2}} \right]\) stands for the integer part of the value \(\frac{n - z}{2}\).

The mean and variance are

$$ \mu = n\left( {\alpha - \gamma } \right), $$
$$ Var\left( Z \right) = n\left\{ {\alpha + \gamma - (\alpha - \gamma )^{2} } \right\}. $$

The corresponding characteristic function is given by

$$ \phi_{Z} \left( t \right) = (1 - \alpha - \gamma + \alpha e^{it} + \gamma e^{ - it} )^{n} . $$
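The pmf of TD(n, α, γ) is a finite sum and is straightforward to evaluate; a minimal Python sketch (the name `td_pmf` is ours):

```python
import math

def td_pmf(z, n, alpha, gamma):
    # pmf of the trinomial difference TD(n, alpha, gamma). Equivalently, Z is the
    # net sum of n independent steps taking +1 w.p. alpha, -1 w.p. gamma, 0 otherwise.
    total = 0.0
    for y in range(max(0, -z), (n - z) // 2 + 1):
        # multinomial coefficient n! / ((z+y)! y! (n-z-2y)!)
        coef = (math.factorial(n)
                // (math.factorial(z + y) * math.factorial(y) * math.factorial(n - z - 2 * y)))
        total += coef * alpha ** (z + y) * gamma ** y * (1 - alpha - gamma) ** (n - z - 2 * y)
    return total
```

Summing over \(z = -n, \ldots, n\) should give one, with mean and variance matching the displayed formulas.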

Using a similar approach, we define a bivariate quadrinomial difference distribution by the reduction method in two ways:

  1. The independent case

Here we assume that the random variables \({X}_{1}\sim Bin(n,{p}_{1})\), \({X}_{2}\sim Bin(n,{p}_{2})\) and \({X}_{3}\sim Bin(n,{p}_{3})\) are independent. The bivariate quadrinomial difference distribution is then defined as the distribution of the vector \(({Z}_{1},{Z}_{2})\), where \({Z}_{1}={X}_{1}-{X}_{3}\) and \({Z}_{2}={X}_{2}-{X}_{3}\). The characteristic function of \(({Z}_{1},{Z}_{2})\) is given by

$$ \begin{aligned} \phi_{{Z_{1} ,Z_{2} }} \left( {u,v} \right) = & E\left( {e^{{iu\left( {X_{1} - X_{3} } \right) + iv\left( {X_{2} - X_{3} } \right)}} } \right) \\ = & \left\{ {\overline{p}_{1} \overline{p}_{2} \overline{p}_{3} + p_{1} p_{2} p_{3} + p_{1} \overline{p}_{2} \overline{p}_{3} e^{iu} + p_{2} \overline{p}_{1} \overline{p}_{3} e^{iv} } \right. \\ & + \left. { p_{3} \overline{p}_{1} \overline{p}_{2} e^{{ - i\left( {u + v} \right)}} + p_{1} p_{2} \overline{p}_{3} e^{{i\left( {u + v} \right)}} + p_{1} p_{3} \overline{p}_{2} e^{ - iv} + p_{2} p_{3} \overline{p}_{1} e^{ - iu} } \right\}^{n} , \\ \end{aligned} $$
(2.6)

where \({\overline{p} }_{i}=(1-{p}_{i})\), i = 1,2,3.
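Since the same \(X_3\) is subtracted from both components, \(Cov(Z_1,Z_2)=Var(X_3)=n p_3(1-p_3)>0\), so the independent construction only produces positive correlation. This can be checked by simulation; a Monte Carlo sketch (the sampler name `sample_bqd_indep` is ours):

```python
import random

def sample_bqd_indep(n, p1, p2, p3, rng=random):
    # One draw of (Z1, Z2) = (X1 - X3, X2 - X3) with independent
    # X1 ~ Bin(n, p1), X2 ~ Bin(n, p2), X3 ~ Bin(n, p3).
    x1 = sum(rng.random() < p1 for _ in range(n))
    x2 = sum(rng.random() < p2 for _ in range(n))
    x3 = sum(rng.random() < p3 for _ in range(n))
    return x1 - x3, x2 - x3
```

Empirical means should approach \(n(p_i - p_3)\) and the empirical covariance should approach \(n p_3 (1-p_3)\).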

  2. The quadrinomial case

In this case, the vector \(({X}_{1},{X}_{2},{X}_{3})\) is assumed to follow the quadrinomial distribution with probability mass function

$$ P\left( {X_{1} = x_{1} ,X_{2} = x_{2} ,X_{3} = x_{3} } \right) = \left\{ {\begin{array}{*{20}l} {\left( {\begin{array}{*{20}c} n \\ {x_{1} ,x_{2} ,x_{3} } \\ \end{array} } \right)\alpha^{{x_{1} }} \beta^{{x_{2} }} \gamma^{{x_{3} }} \left( {1 - \alpha - \beta - \gamma } \right)^{{n - x_{1} - x_{2} - x_{3} }} ,} \hfill & {x_{1} + x_{2} + x_{3} \le n} \hfill \\ {0,} \hfill & {otherwise} \hfill \\ \end{array} } \right. $$
(2.7)

for nonnegative integers \({x}_{1},{x}_{2}\) and \({x}_{3}\).

Hence the corresponding characteristic function is

$$ \phi_{{X_{1} ,X_{2} ,X_{3} }} \left( {t_{1} ,t_{2} ,t_{3} } \right) = \left\{ {1 - \alpha - \beta - \gamma + \alpha e^{{it_{1} }} + \beta e^{{it_{2} }} + \gamma e^{{it_{3} }} } \right\}^{n} . $$
(2.8)

Now, we define \({Z}_{1}={X}_{1}-{X}_{3}\) and \({Z}_{2}={X}_{2}-{X}_{3}\). Then, from (2.8), the characteristic function of \(({Z}_{1},{Z}_{2})\) is

$$ \phi_{{Z_{1} ,Z_{2} }} \left( {u,v} \right) = \left\{ {1 - \alpha - \beta - \gamma + \alpha e^{iu} + \beta e^{iv} + \gamma e^{{ - i\left( {u + v} \right)}} } \right\}^{n} . $$
(2.9)

Definition 2.2

We say that the random vector \(({Z}_{1},{Z}_{2})\) has the bivariate quadrinomial difference distribution with parameters \(n\in {\mathbb{N}}\), \(0\le \alpha ,\beta ,\gamma \le 1\) and \(\alpha +\beta +\gamma <1\), denoted by \(({Z}_{1},{Z}_{2})\sim BQD(n,\alpha ,\beta ,\gamma )\), if its characteristic function is given by (2.9).

The probability function of the bivariate quadrinomial difference distribution is

$$ \begin{aligned} g\left( {z_{1} ,z_{2} } \right) & = \sum\limits_{{j = \max \left( {0, - z_{1} , - z_{2} } \right)}}^{{\left[ {\frac{{n - z_{1} - z_{2} }}{3}} \right]}} {\left( {\begin{array}{*{20}l} n \hfill \\ {z_{1} + j,z_{2} + j,j} \hfill \\ \end{array} } \right)} \,\alpha^{{z_{1} + j}} \beta^{{z_{2} + j}} \gamma^{j} \\ &\quad \times (1 - \alpha - \beta - \gamma )^{{n - z_{1} - z_{2} - 3j}} , \end{aligned} $$
(2.10)

where \({z}_{1},{z}_{2}=0,\pm 1,\ldots ,\pm n\) and \(-n\le {z}_{1}+{z}_{2}\le n\).
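The pmf (2.10) is again a finite sum; a Python sketch (the name `bqd_pmf` is ours) that can be used, for instance, to check the moments of Proposition 2.2 numerically:

```python
import math

def bqd_pmf(z1, z2, n, a, b, g):
    # Joint pmf (2.10) of BQD(n, alpha, beta, gamma); returns 0 outside the support,
    # because the summation range below is then empty.
    total = 0.0
    for j in range(max(0, -z1, -z2), (n - z1 - z2) // 3 + 1):
        rest = n - z1 - z2 - 3 * j
        # multinomial coefficient n! / ((z1+j)! (z2+j)! j! rest!)
        coef = (math.factorial(n)
                // (math.factorial(z1 + j) * math.factorial(z2 + j)
                    * math.factorial(j) * math.factorial(rest)))
        total += coef * a ** (z1 + j) * b ** (z2 + j) * g ** j * (1 - a - b - g) ** rest
    return total
```

Summing over the grid \(\{-n,\ldots,n\}^2\) should give one, with means \(n(\alpha-\gamma)\), \(n(\beta-\gamma)\) and covariance \(n(-\alpha\beta+\alpha\gamma+\beta\gamma+\gamma(1-\gamma))\).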

Proposition 2.1

If \(({Z}_{1},{Z}_{2})\sim BQD(n,\alpha ,\beta ,\gamma )\), then

  1. \({Z}_{1}\) has the trinomial difference distribution with parameters \((n,\alpha ,\gamma )\), and \({Z}_{2}\) has the trinomial difference distribution with parameters \((n,\beta ,\gamma )\).

  2. \({Z}_{1}-{Z}_{2}\sim TD(n,\alpha ,\beta )\).

Proof

The proof is immediate from (2.9) by setting \((u,v)\) equal to \((u,0)\) and \((0,v)\) for (1), and to \((u,-u)\) for (2).

Proposition 2.2

Let \(({Z}_{1},{Z}_{2})\sim BQD(n,\alpha ,\beta ,\gamma )\). Then

  1. The mean is

\(\mu = \left( {n\left( {\alpha - \gamma } \right),n\left( {\beta - \gamma } \right)} \right)\).

  2. The covariance matrix is

$$ \Sigma = \left( {\begin{array}{*{20}l} {n\left[ {\alpha + \gamma - \left( {\alpha - \gamma } \right)^{2} } \right]} \hfill & { - n\alpha \beta + n\alpha \gamma + n\beta \gamma + n\gamma \left( {1 - \gamma } \right)} \hfill \\ { - n\alpha \beta + n\alpha \gamma + n\beta \gamma + n\gamma \left( {1 - \gamma } \right)} \hfill & {n\left[ {\beta + \gamma - \left( {\beta - \gamma } \right)^{2} } \right]} \hfill \\ \end{array} } \right), $$

and hence the correlation is

$$ Corr\left( {Z_{1} ,Z_{2} } \right) = \frac{{ - \alpha \beta + \alpha \gamma + \beta \gamma + \gamma \left( {1 - \gamma } \right)}}{{\sqrt {\left( {\alpha + \gamma - (\alpha - \gamma )^{2} } \right)\left( {\beta + \gamma - (\beta - \gamma )^{2} } \right)} }}. $$

Proof:

The proof is straightforward since \({Z}_{1}\sim TD(n,\alpha ,\gamma )\), \({Z}_{2}\sim TD(n,\beta ,\gamma )\), and \(({X}_{1},{X}_{2},{X}_{3})\) has the quadrinomial distribution (see Omair et al. [8]).

Remark

The covariance can take positive and negative values, and the correlation has full range.

A shifted bivariate quadrinomial difference distribution is obtained by considering the bivariate vector \(({Z}_{1}^{(s)},{Z}_{2}^{(s)})=({Z}_{1}+{k}_{1},{Z}_{2}+{k}_{2})\), where \(({Z}_{1},{Z}_{2})\sim BQD(n,\alpha ,\beta ,\gamma )\).

The marginals of the shifted bivariate distribution can likewise be overdispersed, underdispersed, or equidispersed.

2.3 Bivariate Conditional Skellam Distribution

Consider two independent Skellam random variables \({X}_{1}\sim Skellam({\theta }_{1},{\theta }_{2})\) and \({X}_{2}\sim Skellam({\theta }_{3},{\theta }_{4})\), and let \(Z={X}_{1}+{X}_{2}\). Alzaid and Omair [7] defined the conditional Skellam distribution \(X={X}_{1}|Z\) with parameters \(z\in {\mathbb{Z}}\) and \({\theta }_{1},{\theta }_{2},{\theta }_{3},{\theta }_{4}>0\), denoted by \(CSkellam(z,{\theta }_{1},{\theta }_{2},{\theta }_{3},{\theta }_{4})\), if

$$ \begin{aligned} P\left( {X = x} \right) & = \frac{{\left( {\frac{{\theta_{1} }}{{\theta_{1} + \theta_{3} }}} \right)^{x}_{0} \tilde{F}_{1} \left( {;x + 1;\theta_{1} \theta_{2} } \right)\left( {\frac{{\theta_{3} }}{{\theta_{1} + \theta_{3} }}} \right)^{z - x}_{0} \tilde{F}_{1} \left( {;z - x + 1;\theta_{3} \theta_{4} } \right)}}{{{ }_{0} \tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} } \right)\left( {\theta_{2} + \theta_{4} } \right)} \right)}},\\ &\quad x = 0,\; \pm \;1, \cdots \end{aligned} $$
(2.11)

The mean of the distribution is

$$ \mu = \frac{{\theta_{1} }}{{\theta_{1} + \theta_{3} }}z + (\theta_{1} \theta_{4} - \theta_{2} \theta_{3} )H_{1} \left( {z + 2,\left( {\theta_{1} + \theta_{3} } \right)\left( {\theta_{2} + \theta_{4} } \right)} \right), $$
(2.12)

where

$$ H_{k} \left( {u_{1} + 1,u_{2} + 1,...,u_{k} + 1,\theta } \right) = \frac{{ _{0} \tilde{F}_{k} \left( {;u_{1} + 1,u_{2} + 1,...,u_{k} + 1;\theta } \right)}}{{{ }_{0} \tilde{F}_{k} \left( {;u_{1} ,u_{2} ,...,u_{k} ;\theta } \right)}}. $$
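Formulas (2.11) and (2.12) can be checked numerically by truncating the \({}_{0}\tilde{F}_{1}\) series; a Python sketch with our own helper names (`rgamma`, `f01`, `cskellam_pmf`, `cskellam_mean`):

```python
import math

def rgamma(x):
    # 1/Gamma(x); zero at nonpositive integers
    if x <= 0 and x == int(x):
        return 0.0
    return 1.0 / math.gamma(x)

def f01(c, t, terms=60):
    # Truncated regularized 0F~1(; c; t) = sum_k t^k / (k! Gamma(c+k))
    return sum(t ** k / math.factorial(k) * rgamma(c + k) for k in range(terms))

def cskellam_pmf(x, z, t1, t2, t3, t4):
    # pmf (2.11) of CSkellam(z, theta1, theta2, theta3, theta4)
    num = ((t1 / (t1 + t3)) ** x * f01(x + 1, t1 * t2)
           * (t3 / (t1 + t3)) ** (z - x) * f01(z - x + 1, t3 * t4))
    return num / f01(z + 1, (t1 + t3) * (t2 + t4))

def cskellam_mean(z, t1, t2, t3, t4):
    # Mean (2.12), with H_1(z+2, t) = 0F~1(;z+2;t) / 0F~1(;z+1;t)
    s = (t1 + t3) * (t2 + t4)
    return t1 / (t1 + t3) * z + (t1 * t4 - t2 * t3) * f01(z + 2, s) / f01(z + 1, s)
```

For moderate parameter values, summing `cskellam_pmf` over a wide integer range should give one, and the empirical mean should agree with `cskellam_mean` up to truncation error.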

Considering three independent Skellam random variables, we will identify the bivariate conditional distribution of two of them given their sum. Let \({X}_{1}\sim Skellam({\theta }_{1},{\theta }_{2})\), \({X}_{2}\sim Skellam\left({\theta }_{3},{\theta }_{4}\right),\) and \({X}_{3}\sim Skellam({\theta }_{5},{\theta }_{6})\) be independent random variables and the sum \(Z={X}_{1}+{X}_{2}+{X}_{3}\sim Skellam({\theta }_{1}+{\theta }_{3}+{\theta }_{5},{\theta }_{2}+{\theta }_{4}+{\theta }_{6})\). Hence, the bivariate conditional Skellam distribution of \(({X}_{1},{X}_{2})|Z\) will have the following probability mass function:

$$ \begin{aligned} P(X_{1} = x_{1} ,X_{2} = x_{2} |Z = z) = & \frac{{P\left( {X_{1} = x_{1} } \right)P\left( {X_{2} = x_{2} } \right)P\left( {X_{3} = z - x_{1} - x_{2} } \right)}}{{P\left( {Z = z} \right)}} \\ = & \frac{{\mathop \prod \nolimits_{k = 1}^{3} (\frac{{\theta_{2k - 1} }}{{\theta_{1} + \theta_{3} + \theta_{5} }})^{{x_{k} }}_{0} \tilde{F}_{1} \left( {;x_{k} + 1;\theta_{2k - 1} \theta_{2k} } \right)}}{{{ }_{0} \tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)}}, \\ \end{aligned} $$

where \({x}_{1},{x}_{2}=0,\pm 1,\pm 2,...\), \({x}_{3}=z-{x}_{1}-{x}_{2}\), \({\theta }_{1},{\theta }_{2},{\theta }_{3},{\theta }_{4},{\theta }_{5}\) and \({\theta }_{6}>0\).

From this result we have the following definition:

Definition 2.3

(Bivariate conditional Skellam) A random vector \(({Y}_{1},{Y}_{2})\) has the bivariate conditional Skellam distribution with parameters \(z\in {\mathbb{Z}}\), \({\theta }_{1},{\theta }_{2},{\theta }_{3},{\theta }_{4},{\theta }_{5}\) and \({\theta }_{6}>0\), denoted by \(BCSkellam(z,{\theta }_{1},{\theta }_{2},{\theta }_{3},{\theta }_{4},{\theta }_{5},{\theta }_{6})\), if its probability mass function is

$$ P\left( {Y_{1} = y_{1} ,Y_{2} = y_{2} } \right) = \frac{{\mathop \prod \nolimits_{k = 1}^{3} \left( {\frac{{\theta_{2k - 1} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{{y_{k} }}_{0} \tilde{F}_{1} \left( {;y_{k} + 1;\theta_{2k - 1} \theta_{2k} } \right)}}{{ {}_{0}\tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)}}, $$
(2.13)

where \({y}_{1},{y}_{2}=0,\pm 1,\pm 2,...\), \({y}_{3}=z-{y}_{1}-{y}_{2}\).
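A truncated-series evaluation of (2.13) is useful for verifying numerically that the pmf sums to one (Proposition 2.4) and that the marginals are conditional Skellam laws (Proposition 2.5); the helper names below are ours:

```python
import math

def rgamma(x):
    # 1/Gamma(x); zero at nonpositive integers
    if x <= 0 and x == int(x):
        return 0.0
    return 1.0 / math.gamma(x)

def f01(c, t, terms=60):
    # Truncated regularized 0F~1(; c; t)
    return sum(t ** k / math.factorial(k) * rgamma(c + k) for k in range(terms))

def bcskellam_pmf(y1, y2, z, t1, t2, t3, t4, t5, t6):
    # Joint pmf (2.13), with y3 = z - y1 - y2
    s = t1 + t3 + t5
    num = 1.0
    for y, ta, tb in ((y1, t1, t2), (y2, t3, t4), (z - y1 - y2, t5, t6)):
        num *= (ta / s) ** y * f01(y + 1, ta * tb)
    return num / f01(z + 1, s * (t2 + t4 + t6))
```

Summing out \(y_2\) over a wide range should reproduce the \(CSkellam(z,\theta_1,\theta_2,\theta_3+\theta_5,\theta_4+\theta_6)\) pmf of Proposition 2.5.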

The following proposition from [7] is needed to prove some results.

Proposition 2.3

Let Y have the conditional Skellam distribution with parameters \(z\in {\mathbb{Z}}\) and \({\theta }_{1},{\theta }_{2},{\theta }_{3},{\theta }_{4}>0\). Then

$$ (\theta_{1} + \theta_{3} )^{z} \,_{0} \tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} } \right)\left( {\theta_{2} + \theta_{4} } \right)} \right) = \sum\limits_{y = - \infty }^{\infty } {\theta_{1}^{y} \,_{0} \tilde{F}_{1} \left( {;y + 1;\theta_{1} \theta_{2} } \right)\theta_{3}^{z - y} \,_{0} \tilde{F}_{1} \left( {;z - y + 1;\theta_{3} \theta_{4} } \right)} $$

From the fact that (2.13) is a joint probability mass function, we obtain the following result.

Proposition 2.4

For any parameters \(z\in {\mathbb{Z}}\),\({\theta }_{1},{\theta }_{2},{\theta }_{3},{\theta }_{4},{\theta }_{5}\) and \({\theta }_{6}\)>0, we have

$$ \begin{aligned} & (\theta_{1} + \theta_{3} + \theta_{5} )^{z} \,_{0} \tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)\\ &\quad = \sum\limits_{{y_{1} = - \infty }}^{\infty } \sum\limits_{{y_{2} = - \infty }}^{\infty } \theta_{1}^{{y_{1} }} \theta_{3}^{{y_{2} }} \theta_{5}^{{z - y_{1} - y_{2} }} \,_{0} \tilde{F}_{1} (;y_{1} + 1;\theta_{1} \theta_{2} )\,_{0} \tilde{F}_{1} (;y_{2} + 1;\theta_{3} \theta_{4} )\,_{0} \tilde{F}_{1} \left( {;z - y_{1} - y_{2} + 1;\theta_{5} \theta_{6} } \right). \end{aligned} $$

Proposition 2.5

If \(\left({Y}_{1},{Y}_{2}\right)\sim BCSkellam\left(z,{\theta }_{1},{\theta }_{2},{\theta }_{3},{\theta }_{4},{\theta }_{5},{\theta }_{6}\right),\) then

\({Y}_{1}\sim CSkellam(z,{\theta }_{1},{\theta }_{2},({\theta }_{3}+{\theta }_{5}),({\theta }_{4}+{\theta }_{6}))\) and \({Y}_{2}\sim CSkellam(z,{\theta }_{3},{\theta }_{4},({\theta }_{1}+{\theta }_{5}),({\theta }_{2}+{\theta }_{6}))\).

Proof

$$ \begin{aligned} &P\left( {Y_{1} = y_{1} } \right) \hfill \\ &\quad = \sum\limits_{{y_{2} = - \infty }}^{\infty } \left\{ \frac{{\left( {\frac{{\theta_{1} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{{y_{1} }}_{0} \tilde{F}_{1} \left( {;y_{1} + 1;\theta_{1} \theta_{2} } \right)\left( {\frac{{\theta_{3} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{{y_{2} }}_{0} \tilde{F}_{1} \left( {;y_{2} + 1;\theta_{3} \theta_{4} } \right)\left( {\frac{{\theta_{5} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{{z - y_{1} - y_{2} }} }}{{ _{0} \tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)}}\right.\\ &\qquad \left. \times_{0} \tilde{F}_{1} \left( {;z - y_{1} - y_{2} + 1;\theta_{5} \theta_{6} } \right) \right\} \hfill \\ \end{aligned} $$

Then, from Proposition 2.3, we have

$$ \begin{aligned} & P\left( {Y_{1} = y_{1} } \right) \hfill \\ &\quad =\frac{{\left( {\frac{{\theta_{1} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{{y_{1} }}_{0} \tilde{F}_{1} \left( {;y_{1} + 1;\theta_{1} \theta_{2} } \right)\left( {\frac{{\theta_{3} + \theta_{5} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{{z - y_{1} }}_{0} \tilde{F}_{1} \left( {;z - y_{1} + 1;\left( {\theta_{3} + \theta_{5} } \right)\left( {\theta_{4} + \theta_{6} } \right)} \right)}}{{{ }_{0} \tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)}},\\ &\qquad {\text{similarly for}}\;Y_{2} . \hfill \end{aligned} $$

Proposition 2.6

If a random vector \(({Y}_{1},{Y}_{2})\sim \) BCSkellam \(\left(z,{\theta }_{1},{\theta }_{2},{\theta }_{3},{\theta }_{4},{\theta }_{5},{\theta }_{6}\right),\) then the conditional distribution of \({Y}_{2}|{Y}_{1}={y}_{1}\) is \(CSkellam(z-{y}_{1},{\theta }_{3},{\theta }_{4},{\theta }_{5},{\theta }_{6})\).

The proof is straightforward.

Proposition 2.7

If \(({Y}_{1},{Y}_{2})\sim \) BCSkellam \(\left(z,{\theta }_{1},{\theta }_{2},{\theta }_{3},{\theta }_{4},{\theta }_{5},{\theta }_{6}\right),\) then

  1. The moment generating function of \(({Y}_{1},{Y}_{2})\) is given by

$$ M_{{Y_{1} ,Y_{2} }} \left( {t_{1} ,t_{2} } \right) = \left( {\frac{{\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{z} \frac{{{ }_{0} \tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} } \right)\left( {\theta_{2} e^{{ - t_{1} }} + \theta_{4} e^{{ - t_{2} }} + \theta_{6} } \right)} \right)}}{{{ }_{0} \tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)}}. $$
(2.14)
  2. The covariance of \(({Y}_{1},{Y}_{2})\) is

$$ \begin{aligned} & Cov\left( {Y_{1} ,Y_{2} } \right) = \frac{{ - z\theta_{1} \theta_{3} }}{{(\theta_{1} + \theta_{3} + \theta_{5} )^{2} }} - \left( {\theta_{1} \theta_{4} + \theta_{2} \theta_{3} } \right)H_{1} \left( {z + 2,\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right) \\ &\quad + \left( {\left( {\theta_{1} \left( {\theta_{4} + \theta_{6} } \right) - \theta_{2} \left( {\theta_{3} + \theta_{5} } \right)} \right)\left( {\theta_{3} \left( {\theta_{2} + \theta_{6} } \right) - \theta_{4} \left( {\theta_{1} + \theta_{5} } \right)} \right)} \right) \\ & \quad \times \{ H_{1} \left( {z + 2,\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)H_{1} \left( {z + 3,\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right) \\ &\quad - (H_{1} \left( {z + 2,\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right))^{2} \} . \\ \end{aligned} $$
(2.15)

Proof 1

Using Proposition 2.4

$$ \begin{aligned} M_{{Y_{1} ,Y_{2} }} \left( {t_{1} ,t_{2} } \right) & = E\left( {e^{{t_{1} Y_{1} + t_{2} Y_{2} }} } \right) \\ & = \sum\limits_{{y_{1} = - \infty }}^{\infty } {\sum\limits_{{y_{2} = - \infty }}^{\infty } {e^{{t_{1} y_{1} + t_{2} y_{2} }} \left( {\frac{{\theta_{1} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{{y_{1} }} \left( {\frac{{\theta_{3} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{{y_{2} }} \left( {\frac{{\theta_{5} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{{z - y_{1} - y_{2} }} } } \\ & \quad \times \frac{{{}_{0}\tilde{F}_{1} (;y_{1} + 1;\theta_{1} \theta_{2} )\,{}_{0}\tilde{F}_{1} (;y_{2} + 1;\theta_{3} \theta_{4} )\,{}_{0}\tilde{F}_{1} \left( {;z - y_{1} - y_{2} + 1;\theta_{5} \theta_{6} } \right)}}{{{}_{0}\tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)}} \\ & = \left( {\frac{{\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{z} \frac{{{}_{0}\tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} } \right)\left( {\theta_{2} e^{{ - t_{1} }} + \theta_{4} e^{{ - t_{2} }} + \theta_{6} } \right)} \right)}}{{{}_{0}\tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)}} \\ \end{aligned} $$
$$ \begin{aligned} E\left( {Y_{1} Y_{2} } \right) & = \frac{{\partial^{2} M\left( {t_{1} ,t_{2} } \right)}}{{\partial t_{1} \partial t_{2} }}\bigg|_{{\left( {t_{1} ,t_{2} } \right) = \left( {0,0} \right)}} \\ \frac{{\partial M\left( {t_{1} ,t_{2} } \right)}}{{\partial t_{1} }} & = z\left( {\frac{{\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{z - 1} \left( {\frac{{\theta_{1} e^{{t_{1} }} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right) \\ & \quad \times \frac{{{}_{0}\tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} } \right)\left( {\theta_{2} e^{{ - t_{1} }} + \theta_{4} e^{{ - t_{2} }} + \theta_{6} } \right)} \right)}}{{{}_{0}\tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)}} \\ & \quad + \left( {\frac{{\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{z} \left( {\theta_{1} e^{{t_{1} }} \left( {\theta_{4} e^{{ - t_{2} }} + \theta_{6} } \right) - \theta_{2} e^{{ - t_{1} }} \left( {\theta_{3} e^{{t_{2} }} + \theta_{5} } \right)} \right) \\ & \quad \times \frac{{{}_{0}\tilde{F}_{1} \left( {;z + 2;\left( {\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} } \right)\left( {\theta_{2} e^{{ - t_{1} }} + \theta_{4} e^{{ - t_{2} }} + \theta_{6} } \right)} \right)}}{{{}_{0}\tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)}}. \\ \end{aligned} $$

Therefore,

$$ \begin{aligned} & \frac{{\partial^{2} M\left( {t_{1} ,t_{2} } \right)}}{{\partial t_{1} \partial t_{2} }} = z\left( {z - 1} \right)\left( {\frac{{\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{z - 2} \left( {\frac{{\theta_{1} \theta_{3} e^{{t_{1} }} e^{{t_{2} }} }}{{(\theta_{1} + \theta_{3} + \theta_{5} )^{2} }}} \right) \\ & \quad \times \frac{{{}_{0}\tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} } \right)\left( {\theta_{2} e^{{ - t_{1} }} + \theta_{4} e^{{ - t_{2} }} + \theta_{6} } \right)} \right)}}{{{}_{0}\tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)}} \\ & \quad + \left\{ {z\left( {\frac{{\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{z - 1} \left( {\frac{{\theta_{1} e^{{t_{1} }} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)\left( {\theta_{3} e^{{t_{2} }} \left( {\theta_{2} e^{{ - t_{1} }} + \theta_{6} } \right) - \theta_{4} e^{{ - t_{2} }} \left( {\theta_{1} e^{{t_{1} }} + \theta_{5} } \right)} \right)} \right. \\ & \quad + z\left( {\frac{{\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{z - 1} \left( {\frac{{\theta_{3} e^{{t_{2} }} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)\left( {\theta_{1} e^{{t_{1} }} \left( {\theta_{4} e^{{ - t_{2} }} + \theta_{6} } \right) - \theta_{2} e^{{ - t_{1} }} \left( {\theta_{3} e^{{t_{2} }} + \theta_{5} } \right)} \right) \\ & \quad \left. { - \left( {\frac{{\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{z} \left( {\theta_{1} \theta_{4} e^{{t_{1} - t_{2} }} + \theta_{2} \theta_{3} e^{{t_{2} - t_{1} }} } \right)} \right\}\frac{{{}_{0}\tilde{F}_{1} \left( {;z + 2;\left( {\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} } \right)\left( {\theta_{2} e^{{ - t_{1} }} + \theta_{4} e^{{ - t_{2} }} + \theta_{6} } \right)} \right)}}{{{}_{0}\tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)}} \\ & \quad + \left( {\frac{{\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} }}{{\theta_{1} + \theta_{3} + \theta_{5} }}} \right)^{z} \left( {\theta_{1} e^{{t_{1} }} \left( {\theta_{4} e^{{ - t_{2} }} + \theta_{6} } \right) - \theta_{2} e^{{ - t_{1} }} \left( {\theta_{3} e^{{t_{2} }} + \theta_{5} } \right)} \right)\left( {\theta_{3} e^{{t_{2} }} \left( {\theta_{2} e^{{ - t_{1} }} + \theta_{6} } \right) - \theta_{4} e^{{ - t_{2} }} \left( {\theta_{1} e^{{t_{1} }} + \theta_{5} } \right)} \right) \\ & \quad \times \frac{{{}_{0}\tilde{F}_{1} \left( {;z + 3;\left( {\theta_{1} e^{{t_{1} }} + \theta_{3} e^{{t_{2} }} + \theta_{5} } \right)\left( {\theta_{2} e^{{ - t_{1} }} + \theta_{4} e^{{ - t_{2} }} + \theta_{6} } \right)} \right)}}{{{}_{0}\tilde{F}_{1} \left( {;z + 1;\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)}} \\ \end{aligned} $$
$$ \begin{aligned} & \frac{{\partial^{2} M\left( {t_{1} ,t_{2} } \right)}}{{\partial t_{1} \partial t_{2} }}|_{{\left( {t_{1} ,t_{2} } \right) = \left( {0,0} \right)}} = z\left( {z - 1} \right)\frac{{\theta_{1} \theta_{3} }}{{(\theta_{1} + \theta_{3} + \theta_{5} )^{2} }} + \{ z\frac{{\theta_{1} \left( {\theta_{3} \left( {\theta_{2} + \theta_{6} } \right) - \theta_{4} \left( {\theta_{1} + \theta_{5} } \right)} \right)}}{{\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)}} \\ & \quad + z\frac{{\theta_{3} \left( {\theta_{1} \left( {\theta_{4} + \theta_{6} } \right) - \theta_{2} \left( {\theta_{3} + \theta_{5} } \right)} \right)}}{{\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)}} - \left( {\theta_{1} \theta_{4} + \theta_{2} \theta_{3} } \right)\} \\ & \quad \times H_{1} \left( {z + 2,\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right) \\ & \quad + \left( {\theta_{1} \left( {\theta_{4} + \theta_{6} } \right) - \theta_{2} \left( {\theta_{3} + \theta_{5} } \right)} \right)\left( {\theta_{3} \left( {\theta_{2} + \theta_{6} } \right) - \theta_{4} \left( {\theta_{1} + \theta_{5} } \right)} \right) \\ & \quad \times H_{1} \left( {z + 2,\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right) \\ & \quad \times H_{1} \left( {z + 3,\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right). \\ \end{aligned} $$

Since \({Y}_{1}\sim CSkellam(z,{\theta }_{1},{\theta }_{2},{\theta }_{3}+{\theta }_{5},{\theta }_{4}+{\theta }_{6})\) and \({Y}_{2}\sim CSkellam(z,{\theta }_{3},{\theta }_{4},{\theta }_{1}+{\theta }_{5},{\theta }_{2}+{\theta }_{6})\), using (2.12) we obtain

$$ E\left( {Y_{1} } \right) = z\frac{{\theta_{1} }}{{\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)}} + \left( {\theta_{1} \left( {\theta_{4} + \theta_{6} } \right) - \theta_{2} \left( {\theta_{3} + \theta_{5} } \right)} \right)H_{1} \left( {z + 2,\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right). $$

and

$$ E\left( {Y_{2} } \right) = z\frac{{\theta_{3} }}{{\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)}} + \left( {\theta_{3} \left( {\theta_{2} + \theta_{6} } \right) - \theta_{4} \left( {\theta_{1} + \theta_{5} } \right)} \right)H_{1} \left( {z + 2,\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right). $$

Hence,

$$ \begin{aligned} Cov\left( {Y_{1} ,Y_{2} } \right) & = E\left( {Y_{1} Y_{2} } \right) - E\left( {Y_{1} } \right)E\left( {Y_{2} } \right) \\ & = \frac{{ - z\theta_{1} \theta_{3} }}{{(\theta_{1} + \theta_{3} + \theta_{5} )^{2} }} - \left( {\theta_{1} \theta_{4} + \theta_{2} \theta_{3} } \right)H_{1} \left( {z + 2,\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right) \\ & \quad + \left( {\theta_{1} \left( {\theta_{4} + \theta_{6} } \right) - \theta_{2} \left( {\theta_{3} + \theta_{5} } \right)} \right)\left( {\theta_{3} \left( {\theta_{2} + \theta_{6} } \right) - \theta_{4} \left( {\theta_{1} + \theta_{5} } \right)} \right) \\ & \quad \times \left[ {H_{1} \left( {z + 2,\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)H_{1} \left( {z + 3,\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)} \right. \\ & \quad \left. { - H_{1} \left( {z + 2,\left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)} \right)^{2} } \right]. \\ \end{aligned} $$

2.4 Bivariate Extended Binomial Distribution

Alzaid and Omair [7] reduced the number of parameters of the \(CSkellam(z,{\theta }_{1},{\theta }_{2},{\theta }_{3},{\theta }_{4})\) distribution by imposing the constraint \({\theta }_{1}{\theta }_{4}-{\theta }_{2}{\theta }_{3}=0\) and the reparameterization \(p=\frac{{\theta }_{1}}{{\theta }_{1}+{\theta }_{3}}\), \(q=1-p=\frac{{\theta }_{3}}{{\theta }_{1}+{\theta }_{3}}\) and \(\theta =({\theta }_{1}+{\theta }_{3})({\theta }_{2}+{\theta }_{4})\) (with \(z\) unchanged) to define the extended binomial distribution \(X\sim EB(z,p,\theta )\) as follows:

$$ P\left( {X = x} \right) = \frac{{p^{x} q^{z - x} \,_{0} \tilde{F}_{1} (;x + 1;p^{2} \theta )\,_{0} \tilde{F}_{1} \left( {;z - x + 1;q^{2} \theta } \right)}}{{{ }_{0} \tilde{F}_{1} \left( {;z + 1;\theta } \right)}},\;\;\;x = 0, \pm 1, \pm 2,{ } \cdots . $$
(2.16)

The mean and the variance for the extended binomial distribution are given by

$$ \mu = pz, Var\left( X \right) = zpq + 2pq\theta H_{1} \left( {z + 2,\theta } \right). $$
(2.17)

The corresponding moment generating function is

$$ M_{X} \left( t \right) = (pe^{t} + q)^{z} \frac{{{ }_{0} \tilde{F}_{1} \left( {;z + 1;\theta \left( {pqe^{t} + pqe^{ - t} + 1 - 2pq} \right)} \right)}}{{{ }_{0} \tilde{F}_{1} \left( {;z + 1;\theta } \right)}}. $$

As in the univariate case, to reduce the number of parameters in the bivariate conditional Skellam distribution, we use the constraints:

$$ \left( {\theta_{1} \left( {\theta_{4} + \theta_{6} } \right) - \theta_{2} \left( {\theta_{3} + \theta_{5} } \right)} \right) = 0 $$
(2.18)
$$ \left( {\theta_{3} \left( {\theta_{2} + \theta_{6} } \right) - \theta_{4} \left( {\theta_{1} + \theta_{5} } \right)} \right) = 0 $$
(2.19)

The following reparameterization is adopted:

$$ z = z,p_{1} = \frac{{\theta_{1} }}{{\theta_{1} + \theta_{3} + \theta_{5} }},p_{2} = \frac{{\theta_{3} }}{{\theta_{1} + \theta_{3} + \theta_{5} }},q = 1 - p_{1} - p_{2} = \frac{{\theta_{5} }}{{\theta_{1} + \theta_{3} + \theta_{5} }} $$

and \(\theta = \left( {\theta_{1} + \theta_{3} + \theta_{5} } \right)\left( {\theta_{2} + \theta_{4} + \theta_{6} } \right)\).

The resulting distribution is called the bivariate extended binomial distribution.

Definition 2.4

A random vector \(({Y}_{1},{Y}_{2})\) in \({\mathbb{Z}}^{2}\) has the bivariate extended binomial distribution with parameters \(0<{p}_{1}<1\), \(0<{p}_{2}<1\) (with \(q=1-{p}_{1}-{p}_{2}\)), \(\theta >0\) and \(z\in {\mathbb{Z}}\), denoted by \(({Y}_{1},{Y}_{2})\sim BEB(z,{p}_{1},{p}_{2},\theta )\), if

$$ P\left( {Y_{1} = y_{1} ,Y_{2} = y_{2} } \right) = \frac{{p_{1}^{{y_{1} }} p_{2}^{{y_{2} }} q^{{z - y_{1} - y_{2} }} \,_{0} \tilde{F}_{1} (;y_{1} + 1;p_{1}^{2} \theta )\,_{0} \tilde{F}_{1} (;y_{2} + 1;p_{2}^{2} \theta )\,_{0} \tilde{F}_{1} \left( {;z - y_{1} - y_{2} + 1;q^{2} \theta } \right)}}{{{ }_{0} \tilde{F}_{1} \left( {;z + 1;\theta } \right)}}, $$

where \({y}_{1},{y}_{2}=0,\pm 1,\pm 2,\cdots \).

The following propositions demonstrate some properties of the bivariate extended binomial distribution.

Proposition 2.8

If \(\left({Y}_{1},{Y}_{2}\right)\sim BEB\left(z,{p}_{1},{p}_{2},\theta \right),\) then the marginal distribution of \({Y}_{i}\) is extended binomial with parameters \((z,{p}_{i},\theta )\), \(i = 1,2\).

Proof

Using the fact that (2.16) is a probability mass function (PMF), we have

\( _{0} \tilde{F}_{1} \left( {;z + 1;\theta } \right) = \sum\nolimits_{x = - \infty }^{\infty } {p^{x} q^{z - x} \,_{0} \tilde{F}_{1} (;x + 1;p^{2} \theta )\,_{0} \tilde{F}_{1} \left( {;z - x + 1;q^{2} \theta } \right)}\).

We prove the result for \(i=1\). The probability mass function of \({Y}_{1}\) is

$$ \begin{aligned} f_{{Y_{1} }} \left( {y_{1} } \right) = & \sum\limits_{{y_{2} = - \infty }}^{\infty } {\frac{{p_{1}^{{y_{1} }} p_{2}^{{y_{2} }} q^{{z - y_{1} - y_{2} }} \,_{0} \tilde{F}_{1} (;y_{1} + 1;p_{1}^{2} \theta )\,_{0} \tilde{F}_{1} (;y_{2} + 1;p_{2}^{2} \theta )\,_{0} \tilde{F}_{1} \left( {;z - y_{1} - y_{2} + 1;q^{2} \theta } \right)}}{{ _{0} \tilde{F}_{1} \left( {;z + 1;\theta } \right)}}} \\ = & \frac{{p_{1}^{{y_{1} }} q_{1}^{{z - y_{1} }} \,_{0} \tilde{F}_{1} (;y_{1} + 1;p_{1}^{2} \theta )\,_{0} \tilde{F}_{1} \left( {;z - y_{1} + 1;q_{1}^{2} \theta } \right)}}{{ _{0} \tilde{F}_{1} \left( {;z + 1;\theta } \right)}}, \\ \end{aligned} $$

where \({q}_{1}=1-{p}_{1}\).
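Proposition 2.8 lends itself to a direct numerical check. The sketch below (again our own illustrative code; the lattice truncation and parameter values are assumptions) sums the BEB joint PMF over \(y_{2}\) and compares the result with the \(EB(z,p_{1},\theta )\) PMF.

```python
import math
from scipy.special import rgamma

def f01(b, z, terms=150):
    # regularized 0F1(;b;z), finite even for nonpositive integer b
    return sum(z**k * rgamma(b + k) / math.factorial(k) for k in range(terms))

def beb_pmf(y1, y2, z, p1, p2, theta):
    """Joint PMF of BEB(z, p1, p2, theta) from Definition 2.4."""
    q = 1.0 - p1 - p2
    return (p1**y1 * p2**y2 * q**(z - y1 - y2)
            * f01(y1 + 1, p1**2 * theta) * f01(y2 + 1, p2**2 * theta)
            * f01(z - y1 - y2 + 1, q**2 * theta) / f01(z + 1, theta))

def eb_pmf(x, z, p, theta):
    """Univariate EB(z, p, theta) PMF, Eq. (2.16)."""
    q = 1.0 - p
    return (p**x * q**(z - x) * f01(x + 1, p**2 * theta)
            * f01(z - x + 1, q**2 * theta) / f01(z + 1, theta))

z, p1, p2, theta = 4, 0.3, 0.2, 1.0  # arbitrary illustrative values
errs = [abs(sum(beb_pmf(y1, y2, z, p1, p2, theta) for y2 in range(-25, 30))
            - eb_pmf(y1, z, p1, theta))
        for y1 in (-2, 0, 3)]
print(max(errs))  # ≈ 0: the y2-marginal of BEB is EB(z, p1, theta)
```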

Proposition 2.9

If \(\left({Y}_{1},{Y}_{2}\right)\sim BEB\left(z,{p}_{1},{p}_{2},\theta \right),\) then \(({Y}_{2}|{Y}_{1}={y}_{1})\sim EB(z-{y}_{1},{\alpha }_{1},{\phi }_{1})\) where \({\alpha }_{1}=\frac{{p}_{2}}{{q}_{1}}\) , \({q}_{1}=1-{p}_{1}\) and \({\phi }_{1}={q}_{1}^{2}\theta \) .

Proof

$$ \begin{aligned} f_{{Y_{2} |Y_{1} }} (y_{2} |y_{1} ) = & \frac{{\frac{{p_{1}^{{y_{1} }} p_{2}^{{y_{2} }} q^{{z - y_{1} - y_{2} }}_{0} \tilde{F}_{1} (;y_{1} + 1;p_{1}^{2} \theta )_{0} \tilde{F}_{1} (;y_{2} + 1;p_{2}^{2} \theta )_{0} \tilde{F}_{1} \left( {;z - y_{1} - y_{2} + 1;q^{2} \theta } \right)}}{{ _{0} \tilde{F}_{1} \left( {;z + 1;\theta } \right)}}}}{{\frac{{p_{1}^{{y_{1} }} q_{1}^{{z - y_{1} }} \,_{0} \tilde{F}_{1} (;y_{1} + 1;p_{1}^{2} \theta )_{0} \tilde{F}_{1} \left( {;z - y_{1} + 1;q_{1}^{2} \theta } \right)}}{{ _{0} \tilde{F}_{1} \left( {;z + 1;\theta } \right)}}}} \\ = & \frac{{p_{2}^{{y_{2} }} q^{{z - y_{1} - y_{2} }}_{0} \tilde{F}_{1} (;y_{2} + 1;p_{2}^{2} \theta )_{0} \tilde{F}_{1} \left( {;z - y_{1} - y_{2} + 1;q^{2} \theta } \right)}}{{q_{1}^{{z - y_{1} }} \,_{0} \tilde{F}_{1} \left( {;z - y_{1} + 1;q_{1}^{2} \theta } \right)}} \\ \end{aligned} $$
(2.20)

Let \({q}_{1}^{2}\theta ={\phi }_{1}\); hence

$$\theta =\frac{{\phi }_{1}}{{q}_{1}^{2}}$$

and \(p_{2}^{2} \theta = p_{2}^{2} \frac{{\phi_{1} }}{{q_{1}^{2} }} = \left( {\frac{{p_{2} }}{{q_{1} }}} \right)^{2} \phi_{1}\).

Setting \({\alpha }_{1}=\frac{{p}_{2}}{{q}_{1}}\) gives \({p}_{2}^{2}\theta ={\alpha }_{1}^{2}{\phi }_{1}\). Moreover,

$$ q^{2} \theta = \left( {\frac{q}{{q_{1} }}} \right)^{2} \phi_{1} = \left( {\frac{{1 - p_{1} - p_{2} }}{{1 - p_{1} }}} \right)^{2} \phi_{1} = \left( {1 - \frac{{p_{2} }}{{1 - p_{1} }}} \right)^{2} \phi_{1} = \left( {1 - \frac{{p_{2} }}{{q_{1} }}} \right)^{2} \phi_{1} = (1 - \alpha_{1} )^{2} \phi_{1} . $$

Multiplying and dividing (2.20) by \({q}_{1}^{{y}_{2}}\), we can rewrite it as

\(f_{{Y_{2} |Y_{1} }} (y_{2} |y_{1} ) = \frac{{\alpha_{1}^{{y_{2} }} (1 - \alpha_{1} )^{{z - y_{1} - y_{2} }}_{0} \tilde{F}_{1} (;y_{2} + 1;\alpha_{1}^{2} \phi_{1} )_{0} \tilde{F}_{1} (;z - y_{1} - y_{2} + 1;\left( {1 - \alpha_{1} )^{2} \phi_{1} } \right)}}{{{ }_{0} \tilde{F}_{1} \left( {;z - y_{1} + 1;\phi_{1} } \right)}}\).

Therefore, the conditional distribution of \({Y}_{2}\) given \({Y}_{1}={y}_{1}\) is \(EB(z-{y}_{1},{\alpha }_{1},{\phi }_{1})\). Similarly, \({Y}_{1}|{Y}_{2}={y}_{2}\sim EB(z-{y}_{2},{\alpha }_{2},{\phi }_{2})\), where \({\alpha }_{2}=\frac{{p}_{1}}{{q}_{2}}\) and \({\phi }_{2}={q}_{2}^{2}\theta \) (\({q}_{2}=1-{p}_{2}\)).
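The change of parameters in the proof can also be verified numerically: the conditional PMF (2.20) coincides with the \(EB(z-y_{1},\alpha _{1},\phi _{1})\) PMF term by term. A self-contained sketch (illustrative code with arbitrary parameter values):

```python
import math
from scipy.special import rgamma

def f01(b, z, terms=150):
    # regularized 0F1(;b;z) via its defining series
    return sum(z**k * rgamma(b + k) / math.factorial(k) for k in range(terms))

z, p1, p2, theta = 4, 0.3, 0.2, 1.0  # arbitrary illustrative values
q, q1 = 1 - p1 - p2, 1 - p1
alpha1, phi1 = p2 / q1, q1**2 * theta
y1 = 1

def cond_pmf(y2):
    """Conditional PMF f_{Y2|Y1}(y2|y1), Eq. (2.20)."""
    return (p2**y2 * q**(z - y1 - y2) * f01(y2 + 1, p2**2 * theta)
            * f01(z - y1 - y2 + 1, q**2 * theta)
            / (q1**(z - y1) * f01(z - y1 + 1, q1**2 * theta)))

def eb_pmf(x, zz, p, th):
    """Univariate EB(zz, p, th) PMF, Eq. (2.16)."""
    return (p**x * (1 - p)**(zz - x) * f01(x + 1, p**2 * th)
            * f01(zz - x + 1, (1 - p)**2 * th) / f01(zz + 1, th))

err = max(abs(cond_pmf(y2) - eb_pmf(y2, z - y1, alpha1, phi1))
          for y2 in range(-10, 14))
print(err)  # ≈ 0 up to floating-point rounding
```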

Using constraints (2.18) and (2.19), we obtain the following result as a special case of Proposition 2.7.

Proposition 2.10

For \(({Y}_{1},{Y}_{2})\sim \) BEB \(\left(z,{p}_{1},{p}_{2},\theta \right):\)

  • 1. The mean is

$$ {\varvec{\mu}} = \left( {zp_{1} ,zp_{2} } \right). $$
  • 2. The covariance matrix is

$$ {\Sigma } = \left( {\begin{array}{*{20}l} {zp_{1} q_{1} + 2p_{1} q_{1} \theta H_{1} \left( {z + 2,\theta } \right)} \hfill & { - zp_{1} p_{2} - 2p_{1} p_{2} \theta H_{1} \left( {z + 2,\theta } \right)} \hfill \\ { - zp_{1} p_{2} - 2p_{1} p_{2} \theta H_{1} \left( {z + 2,\theta } \right)} \hfill & {zp_{2} q_{2} + 2p_{2} q_{2} \theta H_{1} \left( {z + 2,\theta } \right)} \hfill \\ \end{array} } \right). $$
  • 3. The correlation is

$$ Corr\left( {Y_{1} ,Y_{2} } \right) = - \sqrt {\frac{{p_{1} p_{2} }}{{q_{1} q_{2} }}} . $$

  • 4. The moment-generating function is

$$ M\left( {t_{1} ,t_{2} } \right) = \frac{{(p_{1} e^{{t_{1} }} + p_{2} e^{{t_{2} }} + q)^{z}_{0} \tilde{F}_{1} \left( {;z + 1,\theta \left( {p_{1} e^{{t_{1} }} + p_{2} e^{{t_{2} }} + q} \right)\left( {p_{1} e^{{ - t_{1} }} + p_{2} e^{{ - t_{2} }} + q} \right)} \right)}}{{{ }_{0} \tilde{F}_{1} \left( {;z + 1,\theta } \right)}}. $$

Remark

The correlation ranges over [−1, 0]. It equals −1 when \({p}_{1}={p}_{2}=\frac{1}{2}\) and tends to zero as either \({p}_{1}\) or \({p}_{2}\) tends to zero. Additionally, the correlation does not depend on the parameters \(z\) and \(\theta \).

A shifted bivariate extended binomial distribution is obtained by considering the bivariate vector \(({Y}_{1}^{(s)},{Y}_{2}^{(s)})=({Y}_{1}+{k}_{1},{Y}_{2}+{k}_{2})\).

2.5 Bivariate Skew Laplace Distribution

Here, we consider a bivariate extension of the skew Laplace distribution.

Definition 2.5

Let \({X}_{1}\sim G({p}_{1})\), \({X}_{2}\sim G({p}_{2})\) and \({X}_{0}\sim G({p}_{0})\) be independent random variables. Define \({Y}_{1}={X}_{1}-{X}_{0}\) and \({Y}_{2}={X}_{2}-{X}_{0}\).

Then, the random vector \(({Y}_{1},{Y}_{2})\) follows a bivariate skew Laplace distribution, denoted by \(BSLaplace({p}_{0},{p}_{1},{p}_{2})\), where \(0\le {p}_{i}\le 1\), \(i=0,1,2\).

Proposition 2.11

The joint probability function of BSLaplace is given by:

$$ P\left( {Y_{1} = y_{1} ,Y_{2} = y_{2} } \right) = \left( {1 - p_{0} } \right)\left( {1 - p_{1} } \right)\left( {1 - p_{2} } \right)p_{1}^{{y_{1} }} p_{2}^{{y_{2} }} \frac{{(p_{0} p_{1} p_{2} )^{{max\left( {0, - y_{1} , - y_{2} } \right)}} }}{{1 - p_{0} p_{1} p_{2} }}, $$
(2.21)

where \({y}_{1},{y}_{2}=0,\pm 1,\pm 2,\ldots \) and \(0\le {p}_{i}\le 1\), \(i=0,1,2\).

Proposition 2.12

For \(({Y}_{1},{Y}_{2})\sim \) BSLaplace \(\left({p}_{0},{p}_{1},{p}_{2}\right):\)

  • 1. The mean is

$$ {\varvec{\mu}} = \left( {\frac{{p_{1} - p_{0} }}{{\left( {1 - p_{1} } \right)\left( {1 - p_{0} } \right)}},\frac{{p_{2} - p_{0} }}{{\left( {1 - p_{2} } \right)\left( {1 - p_{0} } \right)}}} \right). $$
  • 2. The covariance matrix is

$$ {\Sigma } = \left( {\begin{array}{*{20}l} {\frac{{p_{1} }}{{(1 - p_{1} )^{2} }} + \frac{{p_{0} }}{{(1 - p_{0} )^{2} }}} \hfill & { \frac{{p_{0} }}{{(1 - p_{0} )^{2} }}} \hfill \\ {\frac{{p_{0} }}{{(1 - p_{0} )^{2} }}} \hfill & { \frac{{p_{2} }}{{(1 - p_{2} )^{2} }} + \frac{{p_{0} }}{{(1 - p_{0} )^{2} }}} \hfill \\ \end{array} } \right). $$

  • 3. The correlation between \({Y}_{1}\) and \({Y}_{2}\) is given by

\({\text{Corr}}\left( {Y_{1} ,Y_{2} } \right) = \frac{{p_{0} \left( {1 - p_{1} } \right)\left( {1 - p_{2} } \right)}}{{\sqrt {\left[ {p_{1} (1 - p_{0} )^{2} + p_{0} (1 - p_{1} )^{2} } \right]\left[ {p_{2} (1 - p_{0} )^{2} + p_{0} (1 - p_{2} )^{2} } \right]} }}\).

  • 4. The joint moment-generating function is

\(M_{{y_{1} ,y_{2} }} \left( {t_{1} ,t_{2} } \right) = \frac{{\left( {1 - p_{0} } \right)\left( {1 - p_{1} } \right)\left( {1 - p_{2} } \right)}}{{\left( {1 - p_{1} e^{{t_{1} }} } \right)\left( {1 - p_{2} e^{{t_{2} }} } \right)\left( {1 - p_{0} e^{{ - \left( {t_{1} + t_{2} } \right)}} } \right)}}\).

Proof

The proof is direct as \({Y}_{1}={X}_{1}-{X}_{0}\) and \({Y}_{2}={X}_{2}-{X}_{0}\) and \({X}_{i}\sim G({p}_{i})\) for i = 0,1,2.
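Because \(Y_{1}=X_{1}-X_{0}\) and \(Y_{2}=X_{2}-X_{0}\) with independent geometric components, BSLaplace is straightforward to simulate, which gives a quick Monte Carlo check of the correlation formula in Proposition 2.12. In the sketch below (illustrative code; the parameter values, seed, and sample size are our own choices), note that NumPy's geometric sampler counts trials until the first success, so we pass \(1-p_{i}\) and subtract 1 to match \(P(X=k)=(1-p)p^{k}\), \(k=0,1,\ldots \).

```python
import numpy as np

def bslaplace_corr(p0, p1, p2):
    """Closed-form Corr(Y1, Y2) from Proposition 2.12."""
    num = p0 * (1 - p1) * (1 - p2)
    den = np.sqrt((p1 * (1 - p0)**2 + p0 * (1 - p1)**2)
                  * (p2 * (1 - p0)**2 + p0 * (1 - p2)**2))
    return num / den

rng = np.random.default_rng(1)
p0, p1, p2, n = 0.5, 0.6, 0.6, 400_000  # arbitrary illustrative values
# G(p) here has PMF (1-p)p^k on k = 0, 1, 2, ...
x0 = rng.geometric(1 - p0, n) - 1
y1 = rng.geometric(1 - p1, n) - 1 - x0
y2 = rng.geometric(1 - p2, n) - 1 - x0
emp = np.corrcoef(y1, y2)[0, 1]
print(round(bslaplace_corr(p0, p1, p2), 4), round(emp, 4))
```

The empirical correlation agrees with the closed form to within Monte Carlo error.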

Remark

The correlation is positive.

Proposition 2.13

If \(({Y}_{1},{Y}_{2})\sim BSLaplace({p}_{0},{p}_{1},{p}_{2})\) , then

  • 1. The correlation is monotone increasing in \({p}_{0}\) when the other parameters are held constant.

  • 2. The correlation is monotone decreasing in \({p}_{i}\), \(i=1,2\), when the other parameters are held constant.

Proof

We can write the correlation as

\(\rho = \frac{1}{{\sqrt {\left[ {\frac{{p_{1} (1 - p_{0} )^{2} }}{{p_{0} (1 - p_{1} )^{2} }} + 1} \right]\left[ {\frac{{p_{2} (1 - p_{0} )^{2} }}{{p_{0} (1 - p_{2} )^{2} }} + 1} \right]} }}\).

The proof is based on the observation that the function \(\frac{(1-u{)}^{2}}{u}\) is decreasing on \((0,1)\). Hence \(\frac{(1-{p}_{0}{)}^{2}}{{p}_{0}}\) decreases in \({p}_{0}\), while \(\frac{{p}_{i}}{(1-{p}_{i}{)}^{2}}\) increases in \({p}_{i}\), so each bracketed factor decreases in \({p}_{0}\) and increases in \({p}_{i}\), \(i=1,2\). Therefore, \(\rho \) is increasing in \({p}_{0}\) and decreasing in each \({p}_{i}\). This completes the proof.
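A quick numerical check of both monotonicity claims, using the form of \(\rho \) displayed above (illustrative code; the grids and fixed parameter values are arbitrary):

```python
import math

def rho(p0, p1, p2):
    # correlation of BSLaplace in the form used in the proof
    t1 = p1 * (1 - p0)**2 / (p0 * (1 - p1)**2) + 1
    t2 = p2 * (1 - p0)**2 / (p0 * (1 - p2)**2) + 1
    return 1.0 / math.sqrt(t1 * t2)

grid = [i / 10 for i in range(1, 10)]
in_p0 = [rho(p0, 0.4, 0.6) for p0 in grid]  # vary p0: should increase
in_p1 = [rho(0.5, p1, 0.6) for p1 in grid]  # vary p1: should decrease
print(all(a < b for a, b in zip(in_p0, in_p0[1:])),
      all(a > b for a, b in zip(in_p1, in_p1[1:])))  # True True
```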

A shifted bivariate skew Laplace can be introduced by considering the distribution of the vector \(({Y}_{1}^{(s)},{Y}_{2}^{(s)})=({Y}_{1}+{k}_{1},{Y}_{2}+{k}_{2})\).

3 Applications

3.1 Difference in the Number of Casualties to the Number of Employees on Duty on Railroads

The data record the number of casualties among employees on duty on railroads in the US state of Alabama. Data were downloaded from

http://safetydata.fra.dot.gov/OfficeofSafety/publicsite/Query/castally1.aspx.

We consider the difference in the number of casualties across 45 of Alabama’s 67 counties between the years 2013 and 2014 as Y1 and between the years 2014 and 2015 as Y2.

Figure 1 shows the 45 observations, and Table 1 provides the descriptive statistics (Fig. 2). The correlation between \({y}_{1}\) and \({y}_{2}\) is − 0.1425. Since the correlation is negative, the bivariate extended binomial, shifted bivariate extended binomial, bivariate quadrinomial difference, and shifted bivariate quadrinomial difference distributions are fitted to these data for different values of the shift parameters. Tables 2 and 3 report the corresponding Akaike information criterion (AIC) values for the shifted bivariate extended binomial and shifted bivariate quadrinomial difference distributions, respectively.

Fig. 1
figure 1

Scatter plot of the difference in the number of casualties for 2013–2014 and 2014–2015

Fig. 2
figure 2

Goal difference between Arsenal and other teams in game i in 2012/2013 and 2013/2014

Table 1 Descriptive Statistics
Table 2 AIC values for bivariate extended binomial difference distribution with different values of shifts \(({k}_{1},{k}_{2})\)
Table 3 AIC values for bivariate quadrinomial difference distribution with different values of shifts \(({k}_{1},{k}_{2})\)

According to the AIC values in Tables 2 and 3, the best fitting shifted bivariate extended binomial difference model occurs at \(({k}_{1}=0,{k}_{2}=0)\), and the best fitting shifted bivariate quadrinomial difference model occurs with shift \(({k}_{1}=-1,{k}_{2}=-2)\). In the following, we provide the results for these distributions. Table 4 reports the maximum likelihood estimates (MLE) of the parameters for the bivariate extended binomial, the bivariate quadrinomial difference, and the shifted bivariate quadrinomial distributions.

Table 4 MLE for fitted bivariate distributions

Under the null hypothesis, for large samples, \( \sum {\frac{{\left( {O - E} \right)^{2} }}{E}} \sim \chi_{l - 1 - p}^{2} \), where \(l\) is the number of cells and p = 3 is the number of estimated parameters.
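The test statistic computation can be sketched as follows (the observed and expected counts below are hypothetical placeholders, not the railroad data; only the mechanics of the Pearson chi-square test are shown):

```python
import numpy as np
from scipy import stats

# hypothetical observed counts and fitted expected counts (illustrative only)
observed = np.array([12.0, 9.0, 7.0, 6.0, 5.0, 6.0])
expected = np.array([10.5, 9.8, 8.1, 6.4, 5.2, 5.0])
p_est = 3  # number of estimated parameters

chi2_stat = np.sum((observed - expected) ** 2 / expected)
df = len(observed) - 1 - p_est  # l - 1 - p degrees of freedom
p_value = stats.chi2.sf(chi2_stat, df)  # upper-tail probability
print(df, round(chi2_stat, 3), round(p_value, 3))
```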

From Table 5, the p values imply that the bivariate extended binomial distribution and the shifted bivariate quadrinomial difference distribution fit the data well. Moreover, the shifted bivariate quadrinomial distribution offers a better fit in terms of AIC and the chi-square goodness-of-fit test.

Table 5 Pearson chi-square test for bivariate fitted distribution

3.2 Difference in the Number of Goals Scored in the English Premier League in Different Years

The data are obtained from the English Premier League and were downloaded from http://uk.soccerway.com/. We considered the difference in the number of goals scored between Arsenal and the other teams in the seasons 2012/2013 and 2013/2014 as Y1 and Y2, respectively. Descriptive statistics are given in Table 6. The correlation between the two variables is 0.337; hence, we fit the bivariate Skellam, bivariate quadrinomial difference, and bivariate skew Laplace distributions, together with their shifted versions for different values of the shift parameters. The parameters of these models are estimated by maximum likelihood. The estimates are as follows (for the shifted distributions, we report the fit with the lowest AIC over the possible shift values):

  • Bivariate Skellam

$$ \left( {\hat{\lambda }_{0} ,\hat{\lambda }_{1} ,\hat{\lambda }_{2} } \right) = \left( {1.9528,2.6715,2.4215} \right) $$
Table 6 Descriptive statistics for the difference in goals between Arsenal and other teams for 2012/2013 and 2013/2014
  • Shifted bivariate Skellam with \(({k}_{1},{k}_{2})=(0,-3)\)

    $$ \left( {\hat{\lambda }_{0} ,\hat{\lambda }_{1} ,\hat{\lambda }_{2} } \right) = \left( {1.1454,1.8642,4.6141} \right) $$
  • Bivariate skew Laplace

    $$ \left( {\hat{p}_{0} ,\hat{p}_{1} ,\hat{p}_{2} } \right) = \left( {0.5149,0.6403,0.6048} \right) $$
  • Shifted bivariate skew Laplace with \(({k}_{1},{k}_{2})=(0,2)\)

    $$ \left( {\hat{p}_{0} ,\hat{p}_{1} ,\hat{p}_{2} } \right) = \left( {0.6352,0.7110,0.1736} \right) $$
  • Bivariate quadrinomial difference

    $$ \left( {\hat{\alpha },\hat{\beta },\hat{\gamma }} \right) = \left( {0.1914,0.1748,0.1435} \right) $$
  • Shifted bivariate quadrinomial difference with \(({k}_{1},{k}_{2})=(1,-3)\)

    $$ \left( {\hat{\alpha },\hat{\beta },\hat{\gamma }} \right) = \left( {0.0908,0.3408,0.1095} \right) $$

The maximized values of the log-likelihood and AIC are shown in Table 7. The values of AIC indicate that the shifted bivariate quadrinomial difference is a better fit than the other models.

Table 7 LogL and AIC for fitted distributions of the difference in goals between Arsenal and other teams for 2012/2013 and 2013/2014

4 Conclusion

Three bivariate discrete distributions on Z2 are developed: the bivariate quadrinomial, bivariate extended binomial, and bivariate skew Laplace distributions. The bivariate Skellam distribution [13,14,15] and the bivariate skew Laplace distribution have positive correlation. The correlation of the bivariate quadrinomial distribution has the full range [−1, 1], whereas the correlation of the bivariate extended binomial distribution can only be negative. The properties and the shifted versions of these distributions have been discussed. In the application to the difference in the number of casualties among employees on duty on railroads, the correlation is negative; therefore, the bivariate Skellam and bivariate skew Laplace distributions cannot be fitted to the data. The shifted bivariate quadrinomial distribution offers a better fit than the bivariate extended binomial distribution in terms of both AIC and the chi-square goodness-of-fit test. In the application to the difference in goals scored between Arsenal and other teams in the seasons 2012/2013 and 2013/2014, the correlation is positive; therefore, the bivariate extended binomial distribution cannot be fitted to the data. The shifted bivariate quadrinomial difference gives a better fit than the other models.