1 Introduction

The gamma distribution is an important continuous distribution in probability and statistics. Several distributions, such as the exponential, Erlang, and chi-square, are special cases of this distribution, and several univariate generalizations of the gamma distribution have also been studied. The gamma distribution and its variants have been applied in different disciplines to model continuous variables that are positive and skewed. It has been used to model amounts of daily rainfall (Aksoy [1]), and in neuroscience it is often used to describe the distribution of inter-spike intervals (Robson and Troy [26]). The gamma distribution is widely used as a conjugate prior in Bayesian statistics and also plays an important role in actuarial science (Furman [9]).

Several multivariate generalizations of univariate gamma distributions are also available in the literature. Mathai and Moschopoulos [20, 21] introduced two multivariate gamma models as the joint distributions of certain linear combinations/partial sums of independent three-parameter gamma variables. All components of their multivariate gamma vectors are positively correlated and have three-parameter gamma marginals. They also indicated that their models have potential applications in stochastic processes and reliability. Furman [9] used the multivariate reduction technique to derive a multivariate probability model possessing a dependence structure and gamma marginals. Kowalczyk and Tyrcha [17] used a re-parameterized form of the gamma distribution to define a multivariate gamma vector and studied a number of properties of their distribution. Recently, Semenikhine, Furman, and Su [28] introduced a multiplicative multivariate gamma distribution with gamma marginals and applied their results in actuarial science. They proved that the correlation coefficient between any pair of variables is positive and belongs to (0, 1/2). Multivariate gamma distributions have been used in diverse fields such as hydrology, wind modeling, reliability, traffic modeling, and finance. For further results on multivariate gamma distributions, the reader may consult articles by Balakrishnan and Ristić [4], Carpenter and Diawara [5], Dussauchoy and Berland [6], Gaver [10], Krishnaiah and Rao [18], Marcus [19], Pepas et al. [23], Royen [27], Vaidyanathan and Lakshmi [33], and an excellent text by Kotz, Balakrishnan and Johnson [16]. For a good review on bivariate gamma distributions, see Balakrishnan and Lai [3], Arnold, Castillo and Sarabia [2], Hutchinson and Lai [14], and Kotz, Balakrishnan and Johnson [16]. For a review on some recent work and applications, the reader is referred to Rafiei, Iranmanesh, and Nagar [24] and references therein.

In this chapter, we introduce a multivariate gamma distribution whose marginals are finite mixtures of gamma distributions and in which the correlation between any pair of variables is negative. We organize our work as follows: In Sect. 2, we introduce the new multivariate gamma distribution. In Sects. 3 and 4, results on marginal distributions and factorizations of the multivariate gamma distribution are derived. Sections 5–8 deal with properties such as joint moments, correlation, the moment generating function, entropies, and estimation. In Sect. 9, simulations of the new distribution are performed in different ways, and the results are provided to evaluate the performance of the maximum likelihood method. Section 10 contains the conclusion. Finally, the Appendix lists a number of results used in this chapter.

2 The Multivariate Gamma Distribution

Recently, Rafiei, Iranmanesh, and Nagar [24] have defined a bivariate gamma distribution with parameters \(\alpha \), \(\beta \), and k, whose pdf is

$$\begin{aligned} \!\!\!\!f(x_1,x_2;\alpha , \beta , k)= \frac{\Gamma \left( 2\alpha \right) }{\beta ^{2\alpha + k} \Gamma ^2 \left( \alpha \right) \Gamma \left( 2\alpha +k\right) }(x_1 x_2)^{\alpha -1} (x_1 + x_2)^k \exp \left[ -\frac{1}{\beta } (x_1 + x_2)\right] , \end{aligned}$$

where \( x_1>0\), \(x_2>0\), \(\alpha >0\), \(\beta >0\), and \(k\in \mathbb {N}_0\). A natural multivariate generalization of this distribution can be given as follows.

Definition 1

The random variables \(X_{1},\ldots , X_{n}\) are said to have a generalized multivariate gamma distribution, denoted as \((X_1,\ldots ,X_n)\) \(\sim \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta , k)\), if their joint pdf is given by

$$\begin{aligned} f(x_{1},\ldots , x_{n};\alpha _1, \ldots , \alpha _n; \beta , k)= & {} C(\alpha _1, \ldots ,\alpha _n; \beta , k)\prod _{i=1}^{n}x_{i}^{\alpha _{i}-1} \left( \sum _{i=1}^{n}x_{i}\right) ^k \nonumber \\{} & {} \times \exp \left( -\frac{1}{\beta } \sum _{i=1}^{n}x_{i}\right) ,\, x_{i}>0, \, i=1, \ldots , n, \end{aligned}$$
(1)

where \(\alpha _1>0, \ldots , \alpha _n>0\), \(\beta > 0\), \(k\in \mathbb {N}_0\) and \(C(\alpha _1, \ldots ,\alpha _n; \beta , k)\) is the normalizing constant.

By integrating the joint density of \(X_1, \ldots ,X_n\) over its support set, the normalizing constant is derived as

$$\begin{aligned} \! [C(\alpha _1, \ldots ,\alpha _n; \beta , k)]^{-1}= & {} \int _{0}^{\infty }\!\cdots \! \int _{0}^{\infty }\! \prod _{i=1}^{n}x_{i}^{\alpha _{i}-1} \left( \sum _{i=1}^{n}x_{i}\!\right) ^k \exp \!\left( -\frac{1}{\beta } \sum _{i=1}^{n}x_{i}\!\right) \textrm{d}x_1\cdots \textrm{d}x_n\nonumber \\= & {} \beta ^{\sum _{i=1}^{n}\alpha _{i}+ k} \left[ \prod _{i=1}^{n} \Gamma (\alpha _{i})\right] \left( \alpha _1+\cdots + \alpha _n\right) _k, \end{aligned}$$

where the last line has been obtained by using Lemma 2. Finally, from the above expression

$$\begin{aligned} C(\alpha _1, \ldots ,\alpha _n; \beta , k) = \frac{\Gamma \left( \alpha _1+\cdots + \alpha _n\right) }{\beta ^{\sum _{i=1}^{n}\alpha _{i}+ k} \left[ \prod _{i=1}^{n} \Gamma (\alpha _{i})\right] \Gamma \left( \alpha _1+\cdots + \alpha _n+k\right) }. \end{aligned}$$
(2)
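For readers who wish to experiment with the density, (1) and (2) translate directly into code. The following Python sketch (the function name gmg_logpdf and the use of NumPy/SciPy are our choices, not part of the chapter) evaluates the density on the log scale, which is numerically safer than the raw density:

```python
import numpy as np
from scipy.special import gammaln

def gmg_logpdf(x, alpha, beta, k):
    """Log of the GMG density (1); x and alpha are length-n arrays, x_i > 0."""
    x = np.asarray(x, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    a = alpha.sum()
    # log of the normalizing constant C in Eq. (2)
    log_c = (gammaln(a) - (a + k) * np.log(beta)
             - gammaln(alpha).sum() - gammaln(a + k))
    s = x.sum()
    return log_c + ((alpha - 1.0) * np.log(x)).sum() + k * np.log(s) - s / beta

# example with hypothetical parameter values
print(np.exp(gmg_logpdf([1.0, 2.0], [1.5, 2.5], 2.0, 3)))
```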

For \(k=0\), the multivariate gamma density simplifies to the product of n independent univariate gamma densities with common scale parameter \(\beta \). For \(k=1\), the multivariate gamma density can be written as

$$\begin{aligned} \sum _{j=1}^{n}\left( \frac{\alpha _j}{\sum ^n_{i=1}\alpha _i}\right) \frac{x_j^{\alpha _j} \exp (-x_j/\beta )}{ \beta ^{\alpha _j+1} \Gamma (\alpha _j+1) } \prod _{\begin{array}{c} i=1\\ i\ne j \end{array}}^{n} \frac{x_i^{\alpha _i-1} \exp (-x_i/\beta )}{ \beta ^{\alpha _i} \Gamma (\alpha _i) },\, x_1>0, \ldots , x_n>0. \end{aligned}$$
(3)

For \(n =2 \) in (1), the bivariate gamma density is obtained as

$$\begin{aligned} C(\alpha _1,\alpha _2; \beta , k)x_{1}^{\alpha _{1}-1} x_{2}^{\alpha _{2}-1} \left( x_{1} + x_{2}\right) ^k \exp \left[ -\frac{1}{\beta } (x_{1} + x_{2})\right] ,\, x_{1}>0, \, x_{2}>0, \end{aligned}$$
(4)

where

$$\begin{aligned} C(\alpha _1,\alpha _2; \beta , k)= \frac{\Gamma ( \alpha _1 + \alpha _2)}{\Gamma (\alpha _1) \Gamma (\alpha _2)} \frac{\beta ^{-(\alpha _1 + \alpha _2 + k)}}{\Gamma (\alpha _1 + \alpha _2 + k)}. \end{aligned}$$

In a recent article, Rafiei, Iranmanesh, and Nagar [24] have studied the above distribution for \(\alpha _1=\alpha _2\). Substituting \(n=2\) in (3) or \(k=1\) in (4), the generalized bivariate gamma density takes the form

$$\begin{aligned}{} & {} \frac{\alpha _1}{\alpha _1+\alpha _2} \frac{x_1^{\alpha _1} x_2^{\alpha _2-1} \exp [-(x_1+x_2)/\beta ]}{\beta ^{\alpha _1+\alpha _2+1}\Gamma (\alpha _1+1) \Gamma (\alpha _2)} \\{} & {} \quad + \frac{\alpha _2}{\alpha _1+\alpha _2} \frac{x_1^{\alpha _1-1} x_2^{\alpha _2} \exp [-(x_1+x_2)/\beta ]}{\beta ^{\alpha _1+\alpha _2+1}\Gamma (\alpha _1) \Gamma (\alpha _2+1)}, \, x_{1}>0, \, x_{2}>0, \end{aligned}$$

which yields the marginal density of \(X_1\) as

$$\begin{aligned} \frac{\alpha _1}{\alpha _1+\alpha _2} \frac{x_1^{\alpha _1} \exp (- x_1 /\beta )}{\beta ^{\alpha _1 +1}\Gamma (\alpha _1+1) } + \frac{\alpha _2}{\alpha _1+\alpha _2} \frac{x_1^{\alpha _1-1} \exp (- x_1 /\beta )}{\beta ^{\alpha _1 } \Gamma (\alpha _1)}, \, x_{1}>0. \end{aligned}$$

Clearly, the marginal density of \(X_1\) is a mixture of two gamma densities, indicating that, in general, the marginal density of a subset of \(X_1, \ldots , X_n\) is not generalized multivariate gamma.
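This mixture form is easy to confirm numerically. A minimal sketch, assuming hypothetical parameter values \(\alpha _1=1.5\), \(\alpha _2=2.5\), \(\beta =2\): it integrates the bivariate density (4) with \(k=1\) over \(x_2\) by quadrature and compares the result with the two-component gamma mixture displayed above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln
from scipy.stats import gamma

a1, a2, b = 1.5, 2.5, 2.0   # hypothetical parameter values

def f_biv(x1, x2, k=1):
    # bivariate density (4) on the log scale, then exponentiated
    log_c = (gammaln(a1 + a2) - gammaln(a1) - gammaln(a2)
             - (a1 + a2 + k) * np.log(b) - gammaln(a1 + a2 + k))
    return np.exp(log_c + (a1 - 1) * np.log(x1) + (a2 - 1) * np.log(x2)
                  + k * np.log(x1 + x2) - (x1 + x2) / b)

def f_marg(x1):
    # the two-component mixture displayed above
    w = a1 / (a1 + a2)
    return w * gamma.pdf(x1, a1 + 1, scale=b) + (1 - w) * gamma.pdf(x1, a1, scale=b)

x1 = 1.7
print(quad(lambda x2: f_biv(x1, x2), 0, np.inf)[0], f_marg(x1))  # should agree
```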

It may be noted here that the multivariate gamma distribution defined above belongs to the Liouville family of distributions (Sivazlian [30], Gupta and Song [12], Gupta and Richards [13], Song and Gupta [31]). Because of its mathematical tractability, this distribution further enriches the class of multivariate Liouville distributions and may serve as an alternative to many existing distributions belonging to this class.

3 Marginal Distributions

In this section, we derive results on marginal distributions of the generalized multivariate gamma distribution defined in this chapter. By using the multinomial expansion of \( \left( \sum _{i=1}^{n}x_{i}\right) ^k\), namely,

$$\begin{aligned} \left( \sum _{i=1}^{n}x_{i}\right) ^k = \sum _{k_1+k_2+\cdots +k_n=k}\left( {\begin{array}{c}k\\ k_1, k_2,\ldots , k_n\end{array}}\right) x_{1}^{k_1} x_{2}^{k_2} \cdots x_{n}^{k_n} \end{aligned}$$

in (1), the joint density of \(X_{1},\ldots , X_{n}\) can be restated as

$$\begin{aligned} C(\alpha _1, \ldots ,\alpha _n; \beta , k)\sum _{k_1+k_2+\cdots +k_n=k}\left( {\begin{array}{c}k\\ k_1, k_2,\ldots , k_n\end{array}}\right) \prod _{i=1}^{n}x_{i}^{\alpha _{i}+k_i-1} \exp \left( -\frac{1}{\beta } \sum _{i=1}^{n}x_{i}\right) , \end{aligned}$$

where \(x_{i}>0\), \(i=1,2, \ldots , n\). Thus, the generalized multivariate gamma distribution is a finite mixture of products of independent gamma densities.

In the remaining part of this section and the next section, we derive marginal distributions, distribution of partial sums and several factorizations of the generalized multivariate gamma distribution.

Theorem 1

Let \((X_1,\ldots ,X_n)\) \(\sim \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta , k)\). Then, for \(1\le s \le n-1\), the marginal density of \(X_{1},\ldots , X_{s}\) is given by

$$\begin{aligned}{} & {} C\left( \alpha _1, \ldots ,\alpha _s, \sum _{i=s+1}^{n}\alpha _{i}; \beta , k\right) \prod _{i=1}^{s}x_{i}^{\alpha _{i}-1} \exp \left( -\frac{1}{\beta } \sum _{i=1}^{s}x_{i}\right) \left( \sum _{i=1}^{s}x_{i} \right) ^{k}\nonumber \\{} & {} \quad \times \beta ^{\sum _{i=s+1}^{n}\alpha _{i}}\sum _{j=0}^{k}\left( {\begin{array}{c}k\\ j\end{array}}\right) \Gamma \left( \sum _{i=s+1}^{n}\alpha _{i}+j\right) \left( \frac{\sum _{i=1}^{s}x_{i}}{\beta }\right) ^{-j},\, x_i>0, \, i=1, \ldots , s. \end{aligned}$$

Proof

Integrating out \(x_{s+1},\ldots , x_{n}\) in (1), the marginal density of \(X_{1},\ldots , X_{s}\) is derived as

$$\begin{aligned}{} & {} C(\alpha _1, \ldots ,\alpha _n; \beta , k)\prod _{i=1}^{s}x_{i}^{\alpha _{i}-1} \exp \left( -\frac{1}{\beta } \sum _{i=1}^{s}x_{i}\right) \nonumber \\{} & {} \quad \times \int _{0}^{\infty } \cdots \int _{0}^{\infty }\prod _{i=s+1}^{n}x_{i}^{\alpha _{i}-1} \left( \sum _{i=1}^{s}x_{i}+ \sum _{i=s+1}^{n}x_{i}\right) ^k \exp \left( -\frac{1}{\beta } \sum _{i=s+1}^{n}x_{i}\right) \prod _{i=s+1}^{n}\textrm{d}x_{i}\nonumber \\{} & {} = C(\alpha _1, \ldots ,\alpha _n; \beta , k)\prod _{i=1}^{s}x_{i}^{\alpha _{i}-1} \exp \left( -\frac{1}{\beta } \sum _{i=1}^{s}x_{i}\right) \nonumber \\{} & {} \quad \times \frac{\prod _{i=s+1}^{n}\Gamma (\alpha _{i})}{ \Gamma (\sum _{i=s+1}^{n}\alpha _{i})} \int _{0}^{\infty }x^{ \sum _{i=s+1}^{n}\alpha _{i}-1} \left( \sum _{i=1}^{s}x_{i}+ x\right) ^k \exp \left( -\frac{1}{\beta }x\right) \textrm{d}x \end{aligned}$$
(5)

where the last line has been obtained by using (16). Substituting \(x/\sum _{i=1}^{s}x_{i} = z\) in (5), the marginal density of \(X_{1},\ldots , X_{s}\) is rewritten as

$$\begin{aligned}{} & {} C\left( \alpha _1, \ldots ,\alpha _s, \sum _{i=s+1}^{n}\alpha _{i}; \beta , k\right) \prod _{i=1}^{s}x_{i}^{\alpha _{i}-1} \exp \left( -\frac{1}{\beta } \sum _{i=1}^{s}x_{i}\right) \left( \sum _{i=1}^{s}x_{i} \right) ^{\sum _{i=s+1}^{n}\alpha _{i}+k}\nonumber \\{} & {} \quad \times \int _{0}^{\infty } z^{ \sum _{i=s+1}^{n}\alpha _{i}-1} \left( 1+ z\right) ^{k} \exp \left[ -\frac{1}{\beta }\left( \sum _{i=1}^{s}x_{i} \right) z\right] \textrm{d}z. \end{aligned}$$
(6)

Now, writing \((1 + z)^k\) using the binomial theorem and integrating z in (6), the marginal density of \(X_{1},\ldots , X_{s}\) is derived.    \(\square \)

Alternately, the density of \(X_{1},\ldots , X_{s}\) given in Theorem 1 can be written as

$$\begin{aligned}{} & {} C\left( \alpha _1, \ldots ,\alpha _s, \sum _{i=s+1}^{n}\alpha _{i}; \beta , k\right) \beta ^{\sum _{i=s+1}^{n}\alpha _{i}+k} \prod _{i=1}^{s}x_{i}^{\alpha _{i}-1} \exp \left( -\frac{1}{\beta } \sum _{i=1}^{s}x_{i}\right) \nonumber \\{} & {} \quad \times \sum _{j=0}^{k}\left( {\begin{array}{c}k\\ j\end{array}}\right) \Gamma \left( \sum _{i=s+1}^{n}\alpha _{i}+k-j\right) \left( \frac{\sum _{i=1}^{s}x_{i}}{\beta }\right) ^{j},\, x_i>0, \, i=1, \ldots , s. \end{aligned}$$

Corollary 1

The marginal density of \(X_1\) is given by

$$\begin{aligned}{} & {} C\left( \alpha _1, \sum _{i=2}^{n}\alpha _{i}; \beta , k\right) \beta ^{\sum _{i=2}^{n}\alpha _{i}+k} x_{1}^{\alpha _{1}-1} \exp \left( -\frac{1}{\beta } x_{1}\right) \nonumber \\{} & {} \quad \times \sum _{j=0}^{k}\left( {\begin{array}{c}k\\ j\end{array}}\right) \Gamma \left( \sum _{i=2}^{n}\alpha _{i}+k-j\right) \left( \frac{ x_{1}}{\beta }\right) ^{j},\, x_1>0. \end{aligned}$$
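As a check on Corollary 1, the marginal density can be coded and integrated numerically; the integral should be close to 1. A minimal sketch with hypothetical parameter values (the function name marg_x1 is ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln
from math import comb

alpha, beta, k = np.array([1.5, 2.0, 3.0]), 2.0, 4   # hypothetical values

def marg_x1(x1):
    # marginal density of X_1 from Corollary 1
    a1, ar = alpha[0], alpha[1:].sum()
    # C(alpha_1, sum_{i>=2} alpha_i; beta, k) via Eq. (2) with n = 2
    log_c = (gammaln(a1 + ar) - (a1 + ar + k) * np.log(beta)
             - gammaln(a1) - gammaln(ar) - gammaln(a1 + ar + k))
    s = sum(comb(k, j) * np.exp(gammaln(ar + k - j)) * (x1 / beta) ** j
            for j in range(k + 1))
    return np.exp(log_c + (ar + k) * np.log(beta)
                  + (a1 - 1) * np.log(x1) - x1 / beta) * s

print(quad(marg_x1, 0, np.inf)[0])   # should be close to 1
```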

Corollary 2

The marginal density of \(X_1\) and \(X_2\) is given by

$$\begin{aligned}{} & {} C\left( \alpha _1,\alpha _2, \sum _{i=3}^{n}\alpha _{i}; \beta , k\right) \beta ^{\sum _{i=3}^{n}\alpha _{i}+k} x_{1}^{\alpha _{1}-1} x_{2}^{\alpha _{2}-1} \exp \left[ -\frac{1}{\beta } (x_{1} + x_2)\right] \nonumber \\{} & {} \quad \times \sum _{j=0}^{k}\left( {\begin{array}{c}k\\ j\end{array}}\right) \Gamma \left( \sum _{i=3}^{n}\alpha _{i}+k-j\right) \left( \frac{x_{1}+x_2}{\beta }\right) ^{j},\, x_1>0, x_2>0. \end{aligned}$$

Substituting \(u = z/(1+z)\) with \(dz = (1-u)^{-2} du\) in (6), one gets

$$\begin{aligned}{} & {} C\left( \alpha _1, \ldots ,\alpha _s, \sum _{i=s+1}^{n}\alpha _{i}; \beta , k\right) \prod _{i=1}^{s}x_{i}^{\alpha _{i}-1} \exp \left( -\frac{1}{\beta } \sum _{i=1}^{s}x_{i}\right) \left( \sum _{i=1}^{s}x_{i} \right) ^{\sum _{i=s+1}^{n}\alpha _{i}+k}\nonumber \\{} & {} \quad \times \int _{0}^{1} u^{\sum _{i=s+1}^{n}\alpha _{i}-1} (1 - u)^{-(\sum _{i=s+1}^{n}\alpha _{i} + k+1)} \exp \left[ -\frac{\left( \sum _{i=1}^{s}x_{i} \right) u}{\beta (1-u)}\right] \textrm{d}u. \end{aligned}$$
(7)

Now, writing

$$\begin{aligned} (1 - u)^{-(\sum _{i=s+1}^{n}\alpha _{i} + k+1)} \exp \left[ -\frac{\left( \sum _{i=1}^{s}x_{i} \right) u}{\beta (1-u)}\right] = \sum _{j=0}^{\infty } u^j L_{j}^{(\sum _{i=s+1}^{n}\alpha _{i} + k)} \left( \frac{\sum _{i=1}^{s}x_{i}}{\beta } \right) \end{aligned}$$

in (7) and integrating u, the marginal density of \(X_{1},\ldots , X_{s}\), as a series involving generalized Laguerre polynomials, is derived as

$$\begin{aligned}{} & {} C\left( \alpha _1, \ldots ,\alpha _s, \sum _{i=s+1}^{n}\alpha _{i}; \beta , k\right) \prod _{i=1}^{s}x_{i}^{\alpha _{i}-1} \exp \left( -\frac{1}{\beta } \sum _{i=1}^{s}x_{i}\right) \left( \sum _{i=1}^{s}x_{i} \right) ^{\sum _{i=s+1}^{n}\alpha _{i}+k}\nonumber \\{} & {} \quad \times \sum _{j=0}^{\infty }\frac{1}{\sum _{i=s+1}^{n}\alpha _{i}+j}L_{j}^{(\sum _{i=s+1}^{n}\alpha _{i} + k)} \left( \frac{\sum _{i=1}^{s}x_{i}}{\beta } \right) . \end{aligned}$$

Theorem 2

Let \((X_1,\ldots ,X_n)\) \(\sim \textrm{GMG}(\alpha _1, \ldots , \alpha _n; \beta , k)\). Then, for \(2\le r \le n\), the marginal density of \(X_{r},\ldots , X_{n}\) is given by

$$\begin{aligned}{} & {} C\left( \sum _{i=1}^{r-1}\alpha _{i}, \alpha _r, \ldots ,\alpha _n; \beta , k\right) \beta ^{\sum _{i=1}^{r-1}\alpha _{i}+k} \prod _{i=r}^{n}x_{i}^{\alpha _{i}-1} \exp \left( -\frac{1}{\beta } \sum _{i=r}^{n}x_{i}\right) \nonumber \\{} & {} \quad \times \sum _{\ell =0}^{k}\left( {\begin{array}{c}k\\ \ell \end{array}}\right) \Gamma \left( \sum _{i= 1}^{r-1}\alpha _{i}+k-\ell \right) \left( \frac{\sum _{i=r}^{n}x_{i}}{\beta }\right) ^{\ell },\, x_i>0, \, i=r, \ldots , n. \end{aligned}$$

Proof

Similar to the proof of Theorem 1.    \(\square \)

Corollary 3

The marginal density of \(X_{n}\) is given by

$$\begin{aligned}{} & {} C\left( \sum _{i=1}^{n-1}\alpha _{i}, \alpha _n; \beta , k\right) \beta ^{\sum _{i=1}^{n-1}\alpha _{i}+k} x_{n}^{\alpha _{n}-1} \exp \left( -\frac{1}{\beta } x_{n}\right) \nonumber \\{} & {} \quad \times \sum _{\ell =0}^{k}\left( {\begin{array}{c}k\\ \ell \end{array}}\right) \Gamma \left( \sum _{i= 1}^{n-1}\alpha _{i}+k-\ell \right) \left( \frac{ x_{n}}{\beta }\right) ^{\ell },\, x_n>0. \end{aligned}$$

Theorem 3

Let \((X_1,\ldots ,X_n) \sim \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta , k)\). Then, for \(r=1,\ldots ,n\), the marginal density of \(X_r\) is given by

$$\begin{aligned}{} & {} C\left( \sum _{i(\ne r)=1}^{n}\alpha _{i}, \alpha _r; \beta , k\right) \beta ^{\sum _{i(\ne r)=1}^{n}\alpha _i+k} x_{r}^{\alpha _{r}-1} \exp \left( -\frac{x_r}{\beta }\right) \\{} & {} \times \sum _{j=0}^{k} \left( {\begin{array}{c}k\\ j\end{array}}\right) \Gamma \left( \sum _{i(\ne r)= 1}^{n}\alpha _{i}+ k-j\right) \left( \frac{x_r}{\beta }\right) ^j ,\, x_r>0. \end{aligned}$$

4 Factorizations

This section deals with several factorizations of the multivariate gamma distribution defined in Sect. 2.

In the next theorem, we give the joint distribution of partial sums of random variables distributed jointly as generalized multivariate gamma.

Let \(n_1, \ldots , n_\ell \) be positive integers such that \(\sum _{i=1}^{\ell } n_i = n\) and define

$$\begin{aligned} \alpha _{(i)}=\sum _{j=n_{i-1}^{*}+1}^{n_{i}^{*}}\alpha _{j} ,\quad n_{0}^{*}= 0,\quad n_{i}^{*} = \sum _{j=1}^{i}n_{j},\quad i = 1,\ldots ,\ell . \end{aligned}$$

Theorem 4

Let \((X_1,\ldots ,X_n) \sim \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta , k)\). Define \(Z_{j}= X_{j}/X_{(i)}, j = n_{i-1}^{*} + 1,\ldots , n_{i}^{*}- 1\) and \(X_{(i)} = \sum _{j=n_{i-1}^{*}+1}^{n_{i}^{*}}X_{j}\), \(i = 1,\ldots ,\ell \). Then,

(i) \((X_{(1)},\ldots ,X_{(\ell )})\) and \((Z_{n_{i-1}^{*} +1},\ldots , Z_{n_{i}^{*}-1})\), \(i = 1,\ldots ,\ell \), are independently distributed,

(ii) \((Z_{n_{i-1}^{*} +1},\ldots , Z_{n_{i}^{*}-1})\sim \textrm{D1}(\alpha _{n_{i-1}^{*}+1},\) \(\ldots , \alpha _{n_{i}^{*} -1};\) \( \alpha _{n_{i}^{*}}),\) \(i = 1,\ldots ,\ell \), and

(iii) \((X_{(1)},\ldots ,X_{(\ell )}) \sim \text {GMG}(\alpha _{(1)},\ldots ,\alpha _{(\ell )}; \beta ,k)\).

Proof

Substituting \( x_{(i)} = \sum _{j=n_{i-1}^{*}+1}^{n_{i}^{*}}x_{j}\) and \(z_{j}= x_{j}/x_{(i)},\) \(j = n_{i-1}^{*} + 1,\ldots , n_{i}^{*}- 1\), \(i = 1,\ldots ,\ell \), with the Jacobian

$$\begin{aligned}{} & {} J(x_{1},\ldots ,x_{n}\rightarrow z_{1},\ldots ,z_{n_{1}-1}, x_{(1)},\ldots ,z_{n_{\ell -1}^{*}+1},\ldots ,z_{n-1},x_{(\ell )}) \nonumber \\{} & {} \quad = \prod _{i=1}^{\ell } J(x_{n_{i-1}^{*}+1},\ldots ,x_{n_{i}^{*}} \rightarrow z_{n_{i-1}^{*}+1},\ldots , z_{n_{i}^{*}-1},x_{(i)})\nonumber \\{} & {} \quad = \prod _{i=1}^{\ell }x_{(i)}^{n_{i}-1} \end{aligned}$$

in the density of \((X_{1},\ldots ,X_{n})\) given by (1), we get the joint density of \(Z_{{n_{i-1}^{*}}+1},\ldots ,\) \( Z_{n_{i}^{*}-1},X_{(i)}\), \(i = 1,\ldots ,\ell \) as

$$\begin{aligned}{} & {} C(\alpha _1,\ldots ,\alpha _n;\beta ,k) \prod _{i=1}^{\ell }x_{(i)}^{\alpha _{(i)}-1} \left( \sum _{i=1}^{\ell } x_{(i)}\right) ^k \exp \left( -\frac{1}{\beta } \sum _{i=1}^{\ell }x_{(i)}\right) \nonumber \\{} & {} \quad \times \prod _{i=1}^{\ell }\left[ \left( \prod _{j=n_{i-1}^{*}+1} ^{n_{i}^{*}-1}z_{j}^{\alpha _{j}-1}\right) \bigg (1-\sum _{j=n_{i-1}^{*}+1}^{n_{i}^{*}-1} z_{j}\bigg )^{\alpha _{n_{i}^{*}}-1}\right] , \end{aligned}$$
(8)

where \( x_{(i)}>0\), \(i=1,\ldots ,\ell \), \(z_{j}>0\), \(j = n_{i-1}^{*}+1,\ldots , n_{i}^{*} -1\), \(\sum _{j=n_{i-1}^{*}+1}^{n_{i}^{*}-1} z_{j} < 1\), \(i = 1,\ldots ,\ell \). From the factorization in (8), it is easy to see that \((X_{(1)},\ldots ,X_{(\ell )})\) and \((Z_{n_{i-1}^{*} +1},\ldots , Z_{n_{i}^{*}-1})\), \(i = 1,\ldots ,\ell \), are independently distributed. Further \((X_{(1)},\ldots ,X_{(\ell )}) \!\sim \! \text {GMG}(\alpha _{(1)},\ldots ,\alpha _{(\ell )}; \beta ,k)\) and \((Z_{n_{i-1}^{*} {+}1},\ldots , Z_{n_{i}^{*}-1})\!\sim \! \textrm{D1}(\alpha _{n_{i-1}^{*}{+}1},\) \(\ldots , \alpha _{n_{i}^{*} -1}\); \( \alpha _{n_{i}^{*}})\), \(i = 1,\ldots ,\ell \).    \(\square \)

Corollary 4

Let \((X_1,\ldots ,X_n) \sim \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta ,k)\). Define \(Z_{i} = X_{i}/ Z\), \(i = 1,\ldots ,n-1\), and \(Z =\sum _{j=1}^{n} X_{j}\). Then, \((Z_{1},\ldots ,Z_{n-1})\) and Z are independent, \((Z_{1},\ldots ,Z_{n-1})\sim \textrm{D1}(\alpha _{1},\ldots ,\alpha _{n-1};\alpha _{n})\) and \(Z\sim \text {G}\left( \sum _{i=1}^{n}\alpha _i+k, \beta \right) \).
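Corollary 4 amounts to an exact simulation algorithm for the GMG distribution: draw the total from \(\text {G}(\sum _{i=1}^{n}\alpha _i+k, \beta )\) and, independently, the proportions from a Dirichlet distribution with parameters \(\alpha _1,\ldots ,\alpha _n\). A minimal Python sketch (the function name gmg_sample is ours):

```python
import numpy as np

def gmg_sample(size, alpha, beta, k, rng=None):
    """Draw `size` vectors from GMG(alpha; beta, k) using Corollary 4:
    the total is Gamma(sum(alpha) + k, beta) and, independently,
    the proportions are Dirichlet(alpha)."""
    rng = np.random.default_rng() if rng is None else rng
    alpha = np.asarray(alpha, dtype=float)
    total = rng.gamma(alpha.sum() + k, beta, size=size)   # shape, scale
    props = rng.dirichlet(alpha, size=size)
    return props * total[:, None]
```

For instance, gmg_sample(5, [1.0, 2.0, 3.0], 2.0, 4) returns a \(5\times 3\) array of GMG draws.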

Corollary 5

If \((X_1,\ldots ,X_n) \sim \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta ,k)\), then \(\sum _{j=1}^{n} X_{j}\) and \( \frac{\sum _{i=1}^{s} X_{i}}{\sum _{i=1}^{n} X_{i}}\) are independent. Further

$$\begin{aligned} \frac{\sum _{i=1}^{s} X_{i}}{\sum _{i=1}^{n} X_{i}} \sim \textrm{B1}\left( \sum _{i=1}^{s} \alpha _{i},\sum _{i=s+1}^{n} \alpha _{i}\right) ,\, s <n. \end{aligned}$$
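Corollary 5 can be checked by Monte Carlo, reusing the gmg_sample sketch above with \(n=3\), \(s=2\), and hypothetical parameter values; a non-small Kolmogorov–Smirnov p-value is consistent with the stated beta law.

```python
from scipy import stats

x = gmg_sample(100_000, [1.0, 2.0, 3.0], 2.0, 4)     # sketch above
ratio = x[:, :2].sum(axis=1) / x.sum(axis=1)         # s = 2, n = 3
print(stats.kstest(ratio, "beta", args=(3.0, 3.0)))  # B1(a1 + a2, a3)
```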

Theorem 5

Let \((X_1,\ldots ,X_n) \sim \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta ,k)\). Define \( W_{j}= {X_{j}}/{X_{n_i^{*}}}, j = n_{i-1}^{*} + 1,\ldots , n_{i}^{*}- 1\) and \(X_{(i)} = \sum _{j=n_{i-1}^{*}+1}^{n_{i}^{*}}X_{j}, i = 1,\ldots ,\ell . \) Then,

(i) \((X_{(1)},\ldots ,X_{(\ell )})\) and \( (W_{n_{i-1}^{*} +1},\ldots , W_{n_{i}^{*}-1} )\), \(i = 1,\ldots ,\ell \), are independently distributed,

(ii) \( (W_{n_{i-1}^{*} +1},\ldots , W_{n_{i}^{*}-1} )\sim \textrm{D2}(\alpha _{n_{i-1}^{*}+1},\) \(\ldots , \alpha _{n_{i}^{*} -1};\) \( \alpha _{n_{i}^{*}}),\) \(i = 1,\ldots ,\ell \), and

(iii) \((X_{(1)},\ldots ,X_{(\ell )}) \sim \text {GMG}(\alpha _{(1)},\ldots ,\alpha _{(\ell )}; \beta ,k)\).

Corollary 6

Let \((X_1,\ldots ,X_n) \sim \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta ,k)\). Define \( W_{i} = {X_{i}}/{ X_n}, i = 1,\ldots ,n-1\) and \(Z =\sum _{j=1}^{n} X_{j}\). Then, \((W_{1},\ldots ,W_{n-1})\) and Z are independent, \( (W_{1},\ldots ,W_{n-1})\sim \textrm{D2}(\alpha _{1},\ldots ,\alpha _{n-1};\alpha _{n}) \) and \(Z\sim \text {G}(\sum _{i=1}^{n}\alpha _i + k, \beta )\).

Corollary 7

If \((X_1,\ldots ,X_n) \sim \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta ,k)\), then \(\sum _{j=1}^{n} X_{j}\) and \( \frac{\sum _{i=1}^{s} X_{i}}{\sum _{i=s+1}^{n} X_{i}}\) are independent. Further

$$\begin{aligned} \frac{\sum _{i=1}^{s} X_{i}}{\sum _{i=s+1}^{n} X_{i}} \sim \textrm{B2}\left( \sum _{i=1}^{s} \alpha _{i},\sum _{i=s+1}^{n} \alpha _{i}\right) ,\, s <n. \end{aligned}$$

In the next six theorems, we give several factorizations of the generalized multivariate gamma density.

Theorem 6

Let \((X_1,\ldots ,X_n) \sim \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta ,k)\). Define \(Y_n= \sum _{j=1}^{n}X_j\) and \(Y_i= \sum _{j=1}^{i}X_j/ \sum _{j=1}^{i+1}X_j\), \(i=1, \ldots , n-1\). Then, \(Y_{1},\ldots ,Y_{n}\) are independent, \(Y_{i}\sim \textrm{B1} (\sum _{j=1}^{i} \alpha _{j},\alpha _{i+1} ),\, i = 1,\ldots ,n-1\), and \(Y_{n}\sim \text {G}(\sum _{i=1}^{n} \alpha _{i}+k, \beta )\).

Proof

Substituting \(x_1= y_n \prod _{i=1}^{n-1}y_i\), \(x_2= y_n (1-y_1)\prod _{i=2}^{n-1}y_i, \) \(\ldots ,\) \( x_{n-1}= y_n\) \( (1-y_{n-2}) y_{n-1}\) and \(x_n= y_n (1-y_{n-1})\) with the Jacobian \(J(x_1, \ldots , x_n\rightarrow y_1, \ldots , y_n) = \prod _{i=2}^{n}y_i^{i-1} \) in (1) we get the desired result.    \(\square \)

Theorem 7

Let \((X_1,\ldots ,X_n) \sim \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta ,k)\). Define \(Z_n= \sum _{j=1}^{n}X_j\) and \(Z_i = X_{i+1}/ \sum _{j=1}^{i} X_j\), \(i=1, \ldots , n-1\). Then, \(Z_{1},\ldots ,Z_{n}\) are independent, \(Z_{i}\sim \textrm{B2}(\alpha _{i+1}, \sum _{j=1}^{i} \alpha _{j})\), \(i = 1,\ldots ,n-1\), and \(Z_{n}\sim \text {G}(\sum _{j=1}^{n} \alpha _{j}+k, \beta )\).

Proof

The desired result follows from Theorem 6 by noting that \((1-Y_i)/Y_i\sim \textrm{B2} (\alpha _{i+1},\) \( \sum _{j=1}^{i} \alpha _{j})\).    \(\square \)

Theorem 8

Let \((X_1,\ldots ,X_n) \sim \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta ,k)\). Define \(W_n= \sum _{j=1}^{n}X_j\) and \(W_i = \sum _{j=1}^{i} X_j/X_{i+1}\), \(i=1, \ldots , n-1\). Then, \(W_{1},\ldots ,W_{n}\) are independent, \(W_{i}\sim \textrm{B2}( \sum _{j=1}^{i} \alpha _{j},\alpha _{i+1})\), \(i = 1,\ldots ,n-1\), and \(W_{n}\sim \text {G}(\sum _{i=1}^{n} \alpha _{i}+k, \beta )\).

Proof

The result follows from Theorem 7 by noting that \(1/Z_i\sim \textrm{B2}(\sum _{j=1}^{i} \alpha _{j}, \alpha _{i+1})\).    \(\square \)

Theorem 9

Let \((X_1,\ldots ,X_n) \sim \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta ,k)\). Define \(Y_n{=}\sum _{j =1}^{n}X_j \) and \(Y_i= X_i/\sum _{j=i}^{n}X_j\), \(i=1,\ldots , n-1\). Then, \(Y_{1},\ldots ,Y_{n}\) are independent, \(Y_{i}\sim \textrm{B1} (\alpha _{i},\sum _{j=i+1}^{n} \alpha _{j}),\, i = 1,\ldots , n-1\), and \(Y_{n}\sim \text {G}( \sum _{i=1}^{n} \alpha _{i}+k, \beta )\).

Proof

Substituting \(x_1=y_n y_1 , x_2= y_ny_2(1-y_1) , \ldots , x_{n-1}= y_n y_{n-1}(1-y_1) \cdots \) \( (1-y_{n-2})\), and \(x_n= y_n (1-y_1) \cdots (1-y_{n-1})\) with the Jacobian \(J(x_1, \ldots , x_n\rightarrow y_1, \ldots , y_n) = y_n^{n-1} \prod _{i=1}^{n-2} (1-y_i)^{n-i-1}\) in (1), we get the desired result.    \(\square \)

Theorem 10

Let \((X_1,\ldots ,X_n) \sim \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta ,k)\). Define \(Z_n=\sum _{j =1}^{n}X_j \) and \(Z_i= X_i/\sum _{j=i+1}^{n}X_j\), \(i=1,\ldots , n-1\). Then, \(Z_{1},\ldots ,Z_{n}\) are independent, \(Z_{i}\sim \textrm{B2} (\alpha _{i},\sum _{j=i+1}^{n} \alpha _{j}),\, i = 1,\ldots , n-1\), and \(Z_{n}\sim \text {G}( \sum _{i=1}^{n} \alpha _{i}+k, \beta )\).

Proof

The result follows from Theorem 9 by observing that \(Y_i/(1-Y_i)\sim \textrm{B2} (\alpha _{i},\) \(\sum _{j=i+1}^{n} \alpha _{j})\).    \(\square \)

Theorem 11

Let \((X_1,\ldots ,X_n) \!\sim \! \text {GMG}(\alpha _1, \ldots , \alpha _n; \beta ,k)\). Define \(W_n=\sum _{j =1}^{n}X_j \) and \(W_i= \sum _{j=i+1}^{n}X_j/X_i\), \(i=1,\ldots , n-1\). Then, \(W_{1},\ldots ,W_{n}\) are independent, \(W_{i}\sim \text {B2} (\sum _{j=i+1}^{n} \alpha _{j},\alpha _{i}),\, i = 1,\ldots , n-1\), and \(W_{n}\sim \textrm{G}( \sum _{i=1}^{n}\alpha _{i}+k,\beta )\).

Proof

The result follows from Theorem 10 by noting that \(1/ W_i\!\sim \! \textrm{B2} (\sum _{j=i+1}^{n} \alpha _{j},\alpha _{i})\).    \(\square \)

5 Joint Moments

By definition

$$\begin{aligned} E(X_1^{r_1} \cdots X_n^{r_n})= & {} C(\alpha _1, \ldots ,\alpha _n; \beta , k) \int _{0}^{\infty }\cdots \int _{0}^{\infty } \prod _{i=1}^{n}x_{i}^{\alpha _{i}+r_i-1} \left( \sum _{i=1}^{n}x_{i}\right) ^k\nonumber \\{} & {} \qquad \qquad \times \exp \left( -\frac{1}{\beta } \sum _{i=1}^{n}x_{i}\right) \textrm{d}x_1\cdots \textrm{d}x_n\nonumber \\= & {} \frac{C(\alpha _1, \ldots ,\alpha _n; \beta , k)}{C(\alpha _1+r_1, \ldots ,\alpha _n+r_n; \beta , k)}. \end{aligned}$$

Now, simplifying the above expression by using (2), one gets

$$\begin{aligned} E(X_1^{r_1} \cdots X_n^{r_n}) = \beta ^{r} \frac{\Gamma (\alpha ) \Gamma (\alpha + r + k) }{\Gamma (\alpha +k) \Gamma (\alpha +r) } \prod _{i=1}^{n} \frac{\Gamma (\alpha _i+r_i)}{\Gamma (\alpha _i)}, \end{aligned}$$

where \(\alpha = \sum _{i=1}^{n}\alpha _i\) and \(r = \sum _{i=1}^{n}r_i\).

Further, substituting appropriately in the above expression, one gets

$$\begin{aligned} E(X_{\ell }^{r_{\ell }} X_m^{r_m}) = \beta ^{r_{\ell }+r_{m}} \frac{\Gamma (\alpha ) \Gamma (\alpha + r_{\ell }+r_{m} + k) }{\Gamma (\alpha +k) \Gamma (\alpha +r_{\ell }+r_{m}) } \frac{\Gamma (\alpha _{\ell }+r_{\ell }) \Gamma (\alpha _{m}+r_{m})}{\Gamma (\alpha _{\ell }) \Gamma (\alpha _{m})}, \end{aligned}$$
$$\begin{aligned} E(X_{\ell } X_m ) = \beta ^{2} \frac{\alpha _{\ell } \alpha _{m} (\alpha + k) (\alpha + k +1)}{ \alpha (\alpha +1) }, \end{aligned}$$
$$\begin{aligned} E(X_j) = \beta \frac{ \alpha _{j} (\alpha + k) }{ \alpha }, \end{aligned}$$

and

$$\begin{aligned} E(X_j^2) = \beta ^2 \frac{ \alpha _{j} (\alpha _{j}+1) (\alpha + k) (\alpha + k +1) }{ \alpha (\alpha +1) }. \end{aligned}$$

Finally, using \(\text {var}(X_j) = E(X_j^2) - [E(X_j)]^2\) and \(\text {cov}(X_{\ell }, X_m) = E(X_{\ell } X_m) - E(X_{\ell })E(X_m)\), we get

$$\begin{aligned} \text {var}(X_j) = \beta ^2 \frac{ \alpha _{j} (\alpha + k)[\alpha (\alpha + 1) + (\alpha - \alpha _{j})k ] }{ \alpha ^2 (\alpha + 1) } , \end{aligned}$$
$$\begin{aligned} \text {cov}(X_{\ell }, X_m) = -k \beta ^2 \frac{ \alpha _{\ell } \alpha _{m} (\alpha + k) }{ \alpha ^2 (\alpha + 1) } , \end{aligned}$$
$$\begin{aligned} \text {corr}(X_{\ell }, X_m) = -k \sqrt{\frac{ \alpha _{\ell } \alpha _{m} }{[\alpha (\alpha + 1) + (\alpha - \alpha _{\ell })k ][\alpha (\alpha + 1) + (\alpha - \alpha _{m})k ] }} . \end{aligned}$$
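These closed-form moments can be verified by Monte Carlo using the gamma–Dirichlet representation of Corollary 4. A sketch with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, k = np.array([1.0, 2.0, 3.0]), 2.0, 4   # hypothetical values
a = alpha.sum()

mean = beta * alpha * (a + k) / a                    # E(X_j)
var = (beta**2 * alpha * (a + k)
       * (a * (a + 1) + (a - alpha) * k) / (a**2 * (a + 1)))   # var(X_j)
cov01 = -k * beta**2 * alpha[0] * alpha[1] * (a + k) / (a**2 * (a + 1))

# Monte Carlo via the gamma-Dirichlet representation (Corollary 4)
z = rng.gamma(a + k, beta, 200_000)
x = rng.dirichlet(alpha, 200_000) * z[:, None]
print(mean, x.mean(axis=0))
print(var, x.var(axis=0))
print(cov01, np.cov(x[:, 0], x[:, 1])[0, 1])
```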

6 Moment Generating Function

By definition, the joint mgf of \(X_1, \ldots ,X_n\) is given by

$$\begin{aligned} M_{X_1,\ldots , X_n} (t_1,\ldots , t_n)= & {} C(\alpha _1, \ldots ,\alpha _n; \beta , k) \int _{0}^{\infty }\cdots \int _{0}^{\infty } \prod _{i=1}^{n}x_{i}^{\alpha _{i}-1} \left( \sum _{i=1}^{n}x_{i}\right) ^k\nonumber \\{} & {} \quad \times \exp \left( \sum _{i=1}^{n}t_i x_{i}-\frac{1}{\beta } \sum _{i=1}^{n}x_{i}\right) \textrm{d}x_1\cdots \textrm{d}x_n. \end{aligned}$$
(9)

Substituting \(x_1 = r_1 s, \ldots , x_{n-1} = r_{n-1}s\) and \(x_n = s (1-\sum _{i=1}^{n-1}r_i)\) in (9) with the Jacobian \(J (x_1,\ldots , x_{n-1}, x_n \rightarrow r_1, \ldots , r_{n-1}, s) = s^{n-1}\) and integrating s, we get

$$\begin{aligned}{} & {} \! M_{X_1,\ldots , X_n} (t_1,\ldots , t_n)\nonumber \\{} & {} \! =C(\alpha _1, \ldots ,\alpha _n; \beta , k) \beta ^{\sum _{i=1}^{n} \alpha _i + k}\Gamma \left( \sum _{i=1}^{n}\! \alpha _i + k\right) \!\!\!\!\mathop {\int \!\cdots \!\int }_{\textstyle {r_{1}+\cdots +r_{n-1}<1\atop 0<r_{i},\, i=1, \ldots ,n-1}} \!\!\!\! \prod _{i=1}^{n-1}r_{i}^{\alpha _{i}-1} \left( 1- \sum _{i=1}^{n-1}r_{i}\!\right) ^{\alpha _n-1} \nonumber \\{} & {} \! \quad \times \left[ \sum _{i=1}^{n-1} (1- \beta t_i)r_i + (1-\beta t_n) \left( 1-\sum _{i=1}^{n-1}r_i\right) \right] ^{-(\sum _{i=1}^{n} \alpha _i + k)} \textrm{d}r_1\cdots \textrm{d}r_{n-1}, \end{aligned}$$
(10)

where \(1-t_i \beta >0\), \(i=1, \ldots , n\). Now, writing

$$\begin{aligned} \left[ \sum _{i=1}^{n-1} (1- \beta t_i)r_i + (1-\beta t_n) \left( 1-\sum _{i=1}^{n-1}r_i\right) \right] ^{-(\sum _{i=1}^{n} \alpha _i + k)} \end{aligned}$$
$$\begin{aligned}{} & {} \qquad = (1-t_n \beta )^{-(\sum _{i=1}^{n} \alpha _i + k)} \left[ 1 - \sum _{i=1}^{n-1} r_i \left( 1- \frac{ 1-t_i \beta }{ 1- \beta t_n }\right) \right] ^{-(\sum _{i=1}^{n} \alpha _i + k)},\\{} & {} \quad \qquad \frac{ 1-t_i \beta }{ 1-t_n \beta }<1,\quad i=1, \ldots , n-1 \end{aligned}$$

in (10) and integrating r, we get

$$\begin{aligned}{} & {} \!\! M_{X_1,\ldots , X_n} (t_1,\ldots , t_n)\nonumber \\{} & {} \!\! = C(\alpha _1, \ldots ,\alpha _n; \beta , k) \beta ^{\sum _{i=1}^{n} \alpha _i + k}\Gamma \left( \sum _{i=1}^{n} \alpha _i + k\right) (1-t_n \beta )^{-(\sum _{i=1}^{n} \alpha _i + k)}\nonumber \\{} & {} \,\, \times \!\!\!\! \mathop {\int \!\cdots \!\int }_{\textstyle {r_{1}+\cdots +r_{n-1}<1\atop 0<r_{i}, i=1, \ldots ,n-1}} \!\!\! \prod _{i=1}^{n-1}r_{i}^{\alpha _{i}-1}\! \left( 1- \sum _{i=1}^{n-1}\!r_{i}\!\right) ^{\alpha _n-1} \! \left[ 1 - \sum _{i=1}^{n-1} \!r_i \frac{ \beta (t_i-t_n) }{ 1-\beta t_n } \right] ^{-(\sum _{i=1}^{n} \alpha _i + k)}\!\!\! \textrm{d}r_1\cdots \textrm{d}r_{n-1}\nonumber \\{} & {} = C(\alpha _1, \ldots ,\alpha _n; \beta , k) \beta ^{\sum _{i=1}^{n} \alpha _i + k}\Gamma \left( \sum _{i=1}^{n} \alpha _i + k\right) (1-t_n \beta )^{-(\sum _{i=1}^{n} \alpha _i + k)} \frac{\prod _{i=1}^{n}\Gamma (\alpha _i)}{\Gamma (\sum _{i=1}^{n}\alpha _i)}\nonumber \\{} & {} \quad \times F_{D}^{(n-1)} \left( \sum _{i=1}^{n}\alpha _i+k, \alpha _{1},\ldots , \alpha _{n-1};\sum _{i=1}^{n}\alpha _i; \frac{\beta (t_1-t_n)}{1- \beta t_n },\ldots , \frac{\beta (t_{n-1}-t_n)}{1- \beta t_n }\right) , \end{aligned}$$

where the last line has been obtained by using the integral representation of Lauricella's fourth hypergeometric function \(F_D\) given in (15). Finally, substituting for \(C(\alpha _1, \ldots ,\alpha _n; \beta , k)\) and simplifying, we get

$$\begin{aligned} M_{X_1,\ldots , X_n} (t_1,\ldots , t_n)= & {} (1-t_n \beta )^{-(\sum _{i=1}^{n} \alpha _i + k)}\\{} & {} \times F_{D}^{(n-1)} \left( \sum _{i=1}^{n}\alpha _i+k, \alpha _{1},\ldots , \alpha _{n-1};\sum _{i=1}^{n}\alpha _i; \frac{\beta (t_1-t_n)}{1- \beta t_n },\ldots , \frac{\beta (t_{n-1}-t_n)}{1- \beta t_n }\right) . \end{aligned}$$

For \(t_1=\cdots =t_n =t\), we have

$$\begin{aligned} M_{X_1,\ldots , X_n} (t,\ldots , t) = M_{X_1+\cdots + X_n} (t) = (1-t \beta )^{-(\sum _{i=1}^{n} \alpha _i + k)} \end{aligned}$$

which is the mgf of a gamma random variable with shape parameter \(\sum _{i=1}^{n} \alpha _i + k\) and scale parameter \(\beta \).
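This last identity is easy to check by simulation; a sketch, again drawing via Corollary 4 with hypothetical parameter values (note the requirement \(t < 1/\beta \)):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, k = np.array([1.0, 2.0, 3.0]), 2.0, 4   # hypothetical values
a = alpha.sum()
z = rng.gamma(a + k, beta, 500_000)
x = rng.dirichlet(alpha, 500_000) * z[:, None]

t = 0.1                                              # needs t < 1/beta
print(np.exp(t * x.sum(axis=1)).mean(),              # Monte Carlo mgf of the sum
      (1 - t * beta) ** (-(a + k)))                  # closed form above
```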

7 Entropies

In this section, exact forms of the Rényi and Shannon entropies are derived for the multivariate gamma distribution defined in this chapter.

Let \((\mathcal{X},\mathcal {B},\mathcal {P})\) be a probability space. Consider a pdf f associated with \(\mathcal{P}\), dominated by a \(\sigma \)-finite measure \(\mu \) on \(\mathcal {X}\). Denote by \(H_{SH}(f)\) the well-known Shannon entropy introduced in Shannon [29]. It is defined by

$$\begin{aligned} H_{SH}(f) = -\int _{\mathcal {X}} f(x) \log f(x)\,\text {d} \mu . \end{aligned}$$
(11)

One of the main extensions of the Shannon entropy was defined by Rényi [25]. This generalized entropy measure is given by

$$\begin{aligned} H_R(\eta ,f) = \frac{\log G(\eta )}{1-\eta }\qquad \text{(for } \eta >0 \text{ and } \eta \ne 1\text{) }, \end{aligned}$$
(12)

where

$$\begin{aligned} G(\eta )=\int _{\mathcal {X}}f^{\eta } d \mu . \end{aligned}$$

The additional parameter \(\eta \) is used to describe complex behavior in probability models and the associated process under study. The Rényi entropy is monotonically decreasing in \(\eta \), and the Shannon entropy (11) is obtained from (12) as \(\eta \uparrow 1\). For details, see Nadarajah and Zografos [22], Zografos and Nadarajah [36], and Zografos [35].

Theorem 12

For the generalized multivariate gamma distribution defined by the pdf (1), the Rényi and the Shannon entropies are given by

$$\begin{aligned} H_R(\eta ,f)= & {} \frac{1}{1-\eta }\bigg [ \eta \ln C(\alpha _1, \ldots ,\alpha _n; \beta , k) + \Big [\eta \sum _{i=1}^{n}(\alpha _{i}-1)+n+\eta k\Big ]\ln \left( \frac{\beta }{\eta }\right) \\{} & {} \quad +\sum _{i=1}^{n}\ln \Gamma [\eta (\alpha _{i}-1)+1] + \ln \Gamma \left[ \eta \sum _{i=1}^{n}(\alpha _{i}-1)+n+\eta k\right] \\{} & {} \quad -\ln \Gamma \left[ \eta \sum _{i=1}^{n}(\alpha _{i}-1)+n\right] \bigg ] \end{aligned}$$

and

$$\begin{aligned} H_{SH}(f)= & {} - \ln C(\alpha _1, \ldots ,\alpha _n; \beta , k) - \bigg [ \bigg (\sum _{i=1}^{n} \alpha _{i} +k-n\bigg )\ln \beta -\bigg ( \sum _{i=1}^{n} \alpha _{i} +k\bigg )\\{} & {} \quad + \sum _{i=1}^{n}(\alpha _i-1 )\psi (\alpha _i ) + \bigg (\sum _{i=1}^{n} \alpha _{i} +k-n\bigg )\psi \bigg (\sum _{i=1}^{n} \alpha _{i} +k\bigg )\\{} & {} \quad - \bigg (\sum _{i=1}^{n} \alpha _{i} -n\bigg )\psi \bigg (\sum _{i=1}^{n} \alpha _{i} \bigg ) \bigg ], \end{aligned}$$

respectively, where \(\psi (z)=\frac{d}{dz} \ln \Gamma (z) =\frac{1}{\Gamma (z)} \frac{d}{dz} \Gamma (z) \) is the digamma function.

Proof

For \(\eta >0\) and \(\eta \ne 1\), using the joint density of \(X_1, \ldots , X_n\) given by (1), we have

$$\begin{aligned} G(\eta )= & {} \int _{0}^{\infty }\cdots \int _{0}^{\infty } f^{\eta }(x_{1},\ldots , x_{n};\alpha _1, \ldots , \alpha _n; \beta , k)\,\prod _{i=1}^{n}\text {d}x_i\nonumber \\= & {} [C(\alpha _1, \ldots ,\alpha _n; \beta , k)]^{\eta }\int _{0}^{\infty }\cdots \int _{0}^{\infty } \prod _{i=1}^{n}x_{i}^{\eta (\alpha _{i}-1)} \left( \sum _{i=1}^{n}x_{i}\right) ^{\eta k}\\{} & {} \quad \times \exp \left( -\frac{\eta }{\beta } \sum _{i=1}^{n}x_{i}\right) \,\prod _{i=1}^{n}\text {d}x_i\nonumber \\= & {} [C(\alpha _1, \ldots ,\alpha _n; \beta , k)]^{\eta }\frac{\prod _{i=1}^{n}\Gamma [\eta (\alpha _{i}-1)+1]}{ \Gamma [\eta \sum _{i=1}^{n}(\alpha _{i}-1)+n]} \\{} & {} \quad \times \int _{0}^{\infty } x^{\eta \sum _{i=1}^{n}(\alpha _{i}-1)+n+\eta k -1} \exp \left( -\frac{\eta x}{\beta }\right) \,\text {d}x, \end{aligned}$$

where the last line has been obtained by using (16). Finally, evaluating the above integral by using gamma integral and simplifying the resulting expression, we get

$$\begin{aligned} G(\eta )= & {} [C(\alpha _1, \ldots ,\alpha _n; \beta , k)]^{\eta }\frac{\prod _{i=1}^{n}\Gamma [\eta (\alpha _{i}-1)+1]}{ \Gamma [\eta \sum _{i=1}^{n}(\alpha _{i}-1)+n]}\Gamma \left[ \eta \sum _{i=1}^{n}(\alpha _{i}-1)+n+\eta k\right] \\ {}{} & {} \quad \times \left( \frac{\beta }{\eta }\right) ^{\eta \sum _{i=1}^{n}(\alpha _{i}-1)+n+\eta k}. \end{aligned}$$

Now, taking the logarithm of \(G(\eta )\) and using (12), we get \(H_R(\eta ,f)\). The Shannon entropy is obtained from \(H_R(\eta ,f)\) by taking \(\eta \uparrow 1\) and using L'Hôpital's rule.    \(\square \)
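A sketch computing the closed-form Shannon entropy of Theorem 12 and comparing it with the Monte Carlo estimate of \(E[-\ln f(X)]\), with hypothetical parameter values and sampling via Corollary 4:

```python
import numpy as np
from scipy.special import digamma, gammaln

alpha, beta, k = np.array([1.0, 2.0, 3.0]), 2.0, 4   # hypothetical values
a, n = alpha.sum(), alpha.size
log_c = (gammaln(a) - (a + k) * np.log(beta)
         - gammaln(alpha).sum() - gammaln(a + k))

# closed-form Shannon entropy from Theorem 12
h = -log_c - ((a + k - n) * np.log(beta) - (a + k)
              + ((alpha - 1) * digamma(alpha)).sum()
              + (a + k - n) * digamma(a + k) - (a - n) * digamma(a))

# Monte Carlo check: H = E[-log f(X)], sampling X via Corollary 4
rng = np.random.default_rng(3)
z = rng.gamma(a + k, beta, 200_000)
x = rng.dirichlet(alpha, 200_000) * z[:, None]
log_f = (log_c + ((alpha - 1) * np.log(x)).sum(axis=1)
         + k * np.log(x.sum(axis=1)) - x.sum(axis=1) / beta)
print(h, -log_f.mean())
```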

8 Estimation

Let \((X_{11},\ldots , X_{1n}), \ldots , (X_{N1},\ldots , X_{Nn})\) be a random sample from \( \text {GMG}(\alpha _1, \ldots , \alpha _n;\) \( \beta , k)\). The log-likelihood function, denoted by \(l(\alpha _1, \ldots , \alpha _n; \beta )\), is given by

$$\begin{aligned} l(\alpha _1, \ldots , \alpha _n; \beta )= & {} N \left[ \ln \Gamma ( \alpha ) - ( \alpha +k ) \ln \beta - \sum _{i=1}^{n}\ln \Gamma \left( \alpha _i \right) -\ln \Gamma (\alpha +k )\right] \\{} & {} \quad + \sum _{h=1}^{N} \sum _{i=1}^{n}\left( \alpha _i - 1\right) \ln x_{hi} + k \sum _{h=1}^{N} \ln \left( \sum _{i=1}^{n} x_{hi} \right) -\frac{1}{\beta } \sum _{h=1}^{N} \sum _{i=1}^{n} x_{hi}, \end{aligned}$$

where \(\alpha = \sum _{i=1}^{n}\alpha _i\). Now, differentiating \(l(\alpha _1, \ldots , \alpha _n; \beta )\) w.r.t. \(\alpha _i\), we get

$$\begin{aligned} \frac{\partial l(\alpha _1, \ldots , \alpha _n; \beta )}{\partial \alpha _i} = N \left[ \psi ( \alpha ) - \ln \beta - \psi \left( \alpha _i \right) - \psi ( \alpha +k )\right] + \sum _{h=1}^{N} \ln x_{hi} . \end{aligned}$$

Further,

$$\begin{aligned} \frac{\partial l(\alpha _1, \ldots , \alpha _n; \beta )}{\partial \beta } = - \frac{N(\alpha +k) }{\beta } +\frac{1}{\beta ^2} \sum _{h=1}^{N} \sum _{i=1}^{n} x_{hi}, \end{aligned}$$
$$\begin{aligned} \frac{\partial ^2 l(\alpha _1, \ldots , \alpha _n; \beta )}{\partial \alpha _i \partial \alpha _\ell } = N \left[ \psi _1( \alpha ) - \psi _1 ( \alpha +k )\right] ,\quad 1\le i\ne \ell \le n, \end{aligned}$$
$$\begin{aligned} \frac{\partial ^2 l(\alpha _1, \ldots , \alpha _n; \beta )}{\partial \alpha _i^2} = N [\psi _1 ( \alpha ) - \psi _1 (\alpha _i ) - \psi _1 ( \alpha +k)], \end{aligned}$$

where \(\psi _1(z)\) is the trigamma function defined as the derivative of the digamma function, \(\psi _1(z)=\frac{d}{dz}\psi (z)\),

$$\begin{aligned} \frac{\partial ^2 l(\alpha _1, \ldots , \alpha _n; \beta )}{\partial \alpha _i \partial \beta } = - \frac{N}{\beta }, \end{aligned}$$
$$\begin{aligned} \frac{\partial ^2 l(\alpha _1, \ldots , \alpha _n; \beta )}{\partial \beta ^2} = \frac{N(\alpha +k) }{\beta ^2} -\frac{2}{\beta ^3} \sum _{h=1}^{N} \sum _{i=1}^{n} x_{hi}. \end{aligned}$$

Now, noting that \(\sum _{i=1}^{n} X_i\sim \text {G} (\alpha +k,\beta )\) and that the remaining second-order derivatives do not involve the data, we obtain

$$\begin{aligned} \theta _{i\ell } = \theta _{\ell i}= E\left[ \frac{\partial ^2 l(\alpha _1, \ldots , \alpha _n; \beta )}{\partial \alpha _i \partial \alpha _\ell }\right] = N\psi _1( \alpha ) - N\psi _1 ( \alpha +k ) ,\, 1\le i\ne \ell \le n, \end{aligned}$$
$$\begin{aligned} \theta _{i\, n+1}=\theta _{n+1\, i} =E\left[ \frac{\partial ^2 l(\alpha _1, \ldots , \alpha _n; \beta )}{\partial \alpha _i \partial \beta }\right] = - \frac{N}{\beta },\, 1\le i\le n, \end{aligned}$$
$$\begin{aligned} \theta _{ii} = E\left[ \frac{\partial ^2 l(\alpha _1, \ldots , \alpha _n; \beta )}{\partial \alpha _i^2}\right] = N \psi _1( \alpha ) -N \psi _1 (\alpha _i) - N \psi _1 ( \alpha +k),\, 1\le i\le n, \end{aligned}$$
$$\begin{aligned} \theta _{n+1\,n+1}=E\left[ \frac{\partial ^2 l(\alpha _1, \ldots , \alpha _n; \beta )}{\partial \beta ^2}\right] = - \frac{ N ( \alpha +k)}{\beta ^2} . \end{aligned}$$

The Fisher information matrix for the multivariate gamma distribution given by the density (1) is defined as

$$\begin{aligned} -\left( \begin{array}{ccccc} \theta _{11} &{} \theta _{12} &{} \cdots &{} \theta _{1n} &{} \theta _{1\,n+1} \\ \theta _{21} &{} \theta _{22} &{}\cdots &{} \theta _{2n} &{} \theta _{2\,n+1} \\ \vdots &{} &{} &{} &{} \vdots \\ \theta _{n1} &{} \theta _{n2} &{} \cdots &{} \theta _{nn} &{} \theta _{n\,n+1}\\ \theta _{n+1\,1} &{} \theta _{n+1\,2} &{} \cdots &{} \theta _{n+1\,n} &{} \theta _{n+1\,n+1} \end{array} \right) . \end{aligned}$$
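A sketch assembling this matrix as the negative of the \(\theta \)'s above (the function name gmg_fisher is ours; polygamma(1, ·) in SciPy is the trigamma function \(\psi _1\)):

```python
import numpy as np
from scipy.special import polygamma

def gmg_fisher(alpha, beta, k, N):
    """Expected Fisher information for (alpha_1, ..., alpha_n, beta); k fixed.
    Built as the negative of the matrix of theta's displayed above."""
    alpha = np.asarray(alpha, dtype=float)
    n, a = alpha.size, alpha.sum()
    tri = lambda z: polygamma(1, z)          # trigamma
    info = np.full((n + 1, n + 1), -N * (tri(a) - tri(a + k)))
    idx = np.arange(n)
    info[idx, idx] = -N * (tri(a) - tri(alpha) - tri(a + k))
    info[:n, n] = info[n, :n] = N / beta     # -theta_{i, n+1}
    info[n, n] = N * (a + k) / beta**2       # -theta_{n+1, n+1}
    return info

print(gmg_fisher([1.0, 2.0, 3.0], 2.0, 4, N=100))
```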

Further

$$\begin{aligned} \frac{\partial l(\alpha _1, \ldots , \alpha _n; \beta )}{\partial \beta } = - \frac{N(\alpha +k) }{\beta } +\frac{1}{\beta ^2} \sum _{h=1}^{N} \sum _{i=1}^{n} x_{hi} =0 \end{aligned}$$

gives

$$\begin{aligned} (\alpha +k) \beta = \sum _{i=1}^{n} \bar{x}_i \end{aligned}$$
(13)

and

$$\begin{aligned} \frac{\partial l(\alpha _1, \ldots , \alpha _n; \beta )}{\partial \alpha _i} = N \left[ \psi ( \alpha ) - \ln \beta - \psi \left( \alpha _i \right) - \psi ( \alpha +k )\right] + \sum _{h=1}^{N} \ln x_{hi} =0 \end{aligned}$$

gives

$$\begin{aligned} \psi (\alpha +k) - \psi (\alpha ) + \ln \beta + \psi \left( \alpha _i\right) = \ln \tilde{x}_{i},\, i=1, \ldots , n, \end{aligned}$$

where \(\tilde{x}_{i} =\prod _{h=1}^{N} x_{hi}^{1/N}\), \(i=1,2,\ldots , n\). Further, using

$$\begin{aligned} \psi (z + m) - \psi (z) =\sum _{j=0}^{m-1} \frac{1}{z+j} \end{aligned}$$

we have

$$\begin{aligned} \sum _{j=0}^{k-1} \frac{1}{ \alpha +j} + \ln \beta + \psi \left( \alpha _i\right) = \ln \tilde{x}_{i},\quad i=1, \ldots , n. \end{aligned}$$
(14)

Thus, the MLEs of \(\alpha _i\) and \(\beta \) can be obtained by solving (13) and (14) numerically.
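A sketch of this numerical solution: \(\beta \) is profiled out via (13), and (14) is solved for \(\alpha _1,\ldots ,\alpha _n\) with a generic root finder. The helper name gmg_mle and the use of scipy.optimize.fsolve are our choices; the chapter's own computations used R.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import digamma

def gmg_mle(x, k):
    """Solve the likelihood equations (13)-(14); x has shape (N, n), k fixed.
    A sketch using a generic root finder; positivity of alpha is not enforced."""
    x = np.asarray(x, dtype=float)
    log_geo = np.log(x).mean(axis=0)         # ln of the geometric means x~_i
    xbar_sum = x.mean(axis=0).sum()          # right-hand side of Eq. (13)

    def score(alpha):
        a = alpha.sum()
        beta = xbar_sum / (a + k)            # Eq. (13) solved for beta
        # Eq. (14): sum_{j<k} 1/(a+j) + ln(beta) + psi(alpha_i) - ln(x~_i)
        return (sum(1.0 / (a + j) for j in range(k))
                + np.log(beta) + digamma(alpha) - log_geo)

    alpha_hat = fsolve(score, np.ones(x.shape[1]))
    return alpha_hat, xbar_sum / (alpha_hat.sum() + k)
```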

9 Simulation

In this section, a simulation study for dimension \(n=3\) is conducted to evaluate the performance of the maximum likelihood method; for the bivariate case, see Rafiei, Iranmanesh, and Nagar [24]. Samples of size \(N =50, 200, 500\) from the density (1) for selected values of the parameters are generated by MCMC methods (Gibbs sampling, Metropolis, Gaussian Metropolis, random-walk Metropolis, and Metropolis–Hastings). We have performed the simulation for particular values of the parameters, namely, \(\alpha _1=1\), \(\alpha _2=2\), \(\alpha _3=3\), \(\beta =2\), \(k=4, 8\), and \(\alpha _1=2\), \(\alpha _2=2\), \(\alpha _3=1\), \(\beta =2\), \(k=4, 8\); the results were similar for other choices. MLEs of the parameters were computed by the numerical procedures described above. This procedure was repeated five hundred times, and the averages of the estimates \((\widehat{\alpha }_1, \widehat{\alpha }_2, \widehat{\alpha }_3, \widehat{\beta })\), the average biases (Ab), and the mean squared errors (MSE) were obtained by Monte Carlo methods (the parameter k is an integer, so its MLE is not obtained by differentiation).

Different R packages, such as MCMC, MCMCpack, gibbs.met, LearnBayes, MHadaptive, MetroHastings, and walkMetropolis, were used for the simulation. After performing the simulation using the above methods and comparing the results, it was observed that the Gibbs sampling method provides better results. Therefore, the output of the Gibbs method is presented in Tables 1, 2, 3, 4 and Figs. 1, 2, 3, 4 and 5. The MLEs of the parameters and the correlation coefficients are reported in Tables 1 and 2; the DEoptim package in R was used to calculate the MLEs. The average biases and the mean squared errors of all the estimators are reported in Tables 3 and 4. In particular, the biases of the maximum likelihood estimators of \(\alpha _1\), \(\alpha _2\), \(\alpha _3\), and \(\beta \) are close to 0, and the mean squared errors of all estimators decrease with increasing N.
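The study can also be reproduced without MCMC, since Corollary 4 provides exact draws. A condensed sketch of the bias/MSE computation, reusing gmg_sample (Sect. 4) and gmg_mle (Sect. 8) with the first set of parameter values; the replication count follows the chapter, while \(N=200\) is one of the sample sizes considered:

```python
import numpy as np

rng = np.random.default_rng(5)
true_alpha, true_beta, k = np.array([1.0, 2.0, 3.0]), 2.0, 4
reps, N = 500, 200
est = np.empty((reps, 4))
for r in range(reps):
    # exact draws via Corollary 4 (gmg_sample, Sect. 4) instead of MCMC
    data = gmg_sample(N, true_alpha, true_beta, k, rng)
    a_hat, b_hat = gmg_mle(data, k)          # likelihood equations, Sect. 8
    est[r] = np.r_[a_hat, b_hat]
truth = np.r_[true_alpha, true_beta]
print("Ab :", (est - truth).mean(axis=0))    # average bias
print("MSE:", ((est - truth) ** 2).mean(axis=0))
```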

Table 1 MLEs of parameters and correlation coefficients
Table 2 MLEs of parameters and correlation coefficients
Table 3 The average of biases and the mean squared errors of estimators
Table 4 The average of biases and the mean squared errors of estimators

Figure 1 shows 3D scatter plots of the simulated data for \(\alpha _1=1\), \(\alpha _2=2\), \(\alpha _3=3\), \(\beta =2\), \(k=4\), \(N=50, 500\). Figure 2 shows a 3D plot of the simulated data for \(\alpha _1=1\), \(\alpha _2=2\), \(\alpha _3=3\), \(\beta =2\), \(k=4\). Figures 3 and 4 show pairs plots of the simulated data for \(\alpha _1=2\), \(\alpha _2=2\), \(\alpha _3=1\), \(\beta =2\), \(k=8\), \(N=50\) and \(N=500\), respectively. Figure 5 shows trace plots for \(\alpha _1=1\), \(\alpha _2=2\), \(\alpha _3=3\), \(\beta =2\), \(k=8\), \(N=500\). Finally, simulation points and 3D contour plots for different selected values of the parameters are shown in Figs. 6, 7, 8 and 9.

Fig. 1

3D scatter plot of simulation data with \(\alpha _1=1\), \(\alpha _2=2\), \(\alpha _3=3\), \(\beta =2\), \(k=4\), \(N=50, 500\)

Fig. 2

3D plot for simulation data, \(\alpha _1=1\), \(\alpha _2=2\), \(\alpha _3=3\), \(\beta =2\), \(k=4\)

Fig. 3

Pairs plot for \(\alpha _1=2\), \(\alpha _2=2\), \(\alpha _3=1\), \(\beta =2\), \(k=8\), \(N=50\)

Fig. 4

Pairs plot for \(\alpha _1=2\), \(\alpha _2=2\), \(\alpha _3=1\), \(\beta =2\), \(k=8\), \(N=500\)

10 Conclusion

In this chapter, a new multivariate gamma distribution whose marginals are finite mixtures of gamma distributions has been defined. It is shown that the correlation between any pair of variables is negative; therefore, the newly introduced distribution could be suitable for fitting multivariate data with negative correlations. Several of its properties, such as joint moments, correlation coefficients, the moment generating function, and the Rényi and Shannon entropies, have been derived. In Sect. 8, the method of maximum likelihood has been applied to estimate the parameters; because the resulting likelihood equations are nonlinear, numerical methods have been used to solve them. Simulation studies have been conducted to evaluate the performance of the maximum likelihood method, and various tables and figures have been provided to illustrate the simulation results and the performance of the MLE method for estimating the parameters.

Fig. 5

Trace plots for \(\alpha _1=1\), \(\alpha _2=2\), \(\alpha _3=3\), \(\beta =2\), \(k=8\), \(N=500\)

Fig. 6

Simulation points and 3D contour plot for \(\alpha _1=1\), \(\alpha _2=2\), \(\alpha _3=3\), \(\beta =2\), \(k=4\), \(N=50\)

Fig. 7

Simulation points and contour plot for \(\alpha _1=1\), \(\alpha _2=2\), \(\alpha _3=3\), \(\beta =2\), \(k=4\), \(N=50\)

Fig. 8

Simulation points and contour plot for \(\alpha _1=2\), \(\alpha _2=2\), \(\alpha _3=1\), \(\beta =2\), \(k=4\), \(N=50\)

Fig. 9

Simulation points and 3D contour plot for \(\alpha _1=2\), \(\alpha _2=2\), \(\alpha _3=1\), \(\beta =2\), \(k=4\), \(N=50\)