1 Introduction

The implementation of many important likelihood ratio tests (LRTs) used in multivariate analysis is often hindered by difficulties in handling the exact distribution, which is usually too complicated to be used in practice. On the other hand, the commonly available asymptotic distributions do not deliver the necessary accuracy, particularly for small samples and/or large numbers of variables involved.

In this paper the authors show how, in the complex multivariate normal setting, it is possible to obtain simple closed form expressions for the exact distribution of the LRT statistics to test (i) independence of sets of variables, (ii) equality of mean vectors and (iii) the equality of an expected value matrix to a given matrix, and very sharp near-exact distributions for the LRT statistics to test (iv) sphericity and (v) equality of covariance matrices.

These results are obtained, based on the fact that the LRT statistics in (i)–(v) have, under the null hypothesis, a common structure for their exact distributions, which for some \({p\in {\mathbb {N}}}\) and \({u\in {\mathbb {N}}_0}\), may be written as

$$\begin{aligned} \varLambda ~\buildrel {st}\over {\sim }~ \left( \prod ^{p-1}_{j=1} \mathrm{e}^{-Z_j}\right) \times \left( \prod ^u_{k=1}Y_k\right) . \end{aligned}$$
(1)

In (1), \(\varLambda \) represents the LRT statistic, ‘\(\buildrel {st}\over {\sim }\)’ is to be read as ‘is stochastically equivalent to’, and, for \(r_j\in {\mathbb {N}}\) \({(j=1,\ldots ,p-1)}\) and \(a_k,b_k>0\) \({(k=1,\ldots ,u)}\),

$$\begin{aligned} Z_j\sim \varGamma \left( r_j,\frac{n-1-j}{n}\right) \quad \mathrm{and}\quad Y_k\sim \hbox {Beta}(a_k,b_k), \end{aligned}$$
(2)

are independent r.v.’s (random variables). In (2), \({\varGamma (r_j,\lambda _j)}\) denotes a gamma distribution with shape parameter \(r_j\) and rate parameter \(\lambda _j\), and \(n\) is the sample size. In (1), \(p\) represents the number of variables involved, for the tests in (i), (iv) and (v); the number of variables plus the number of mean vectors minus 1, for the test in (ii); or the number of variables involved plus the number of columns of the matrix \(\varvec{\mu }\) in (22), for the test in (iii). Also, in (1), we have \({u=0}\) for the LRT statistics in (i), (ii) and (iii).
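As an illustration (ours, not part of the paper), the structural form in (1)–(2) can be sampled directly, which is also a convenient way of checking any candidate pdf or cdf by simulation; the values of \(n\), the \(r_j\) and the \((a_k,b_k)\) below are arbitrary.

```python
import numpy as np

def sample_lambda(n, r, ab, size=100_000, seed=0):
    """Draw from the structure in (1): Lambda = exp(-sum_j Z_j) * prod_k Y_k,
    with Z_j ~ Gamma(r_j, rate (n-1-j)/n) and Y_k ~ Beta(a_k, b_k)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(size)
    for j, rj in enumerate(r, start=1):            # j = 1, ..., p-1
        rate = (n - 1 - j) / n
        w += rng.gamma(shape=rj, scale=1.0 / rate, size=size)
    lam = np.exp(-w)
    for a, b in ab:                                # k = 1, ..., u
        lam *= rng.beta(a, b, size=size)
    return lam

# hypothetical parameters: p = 3 (two gamma terms) and u = 1
lam = sample_lambda(n=25, r=[2, 1], ab=[(20.0, 1.5)])
assert np.all((lam > 0) & (lam < 1))               # an LRT statistic lies in (0, 1)
```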

Krishnaiah et al. (1976) used the first four moments of the LRT statistics to test independence, equality of covariance matrices and sphericity to approximate the distributions of certain powers of these LRT statistics by Pearson type I distributions, while Fang et al. (1982) used infinite mixtures of beta distributions to asymptotically approximate the distributions of the first two of those LRT statistics. Khatri (1965) obtained the exact distribution of the LRT statistic to test if an expected value matrix is null, also in terms of an infinite mixture of beta distributions, while Gupta (1971) obtained closed form expressions for the distribution of the same test statistic for the cases of two and three variables, stating the possibility of extending such representations to a general number of variables, although without obtaining explicit expressions for the parameters in the distribution. Pillai and Nagarsenker (1971) and Nagarsenker and Das (1975) addressed the exact distribution of the LRT statistic for sphericity through contour integration, obtaining representations in terms of Meijer G functions and infinite series. Nagarsenker and Nagarsenker (1981) and Nagar et al. (1985) studied a block sphericity test, which has the common sphericity test as a particular case, obtaining the exact distribution of the corresponding LRT statistic in the form of series representations: the former authors obtained an infinite mixture of beta distributions for the LRT statistic itself, or an infinite mixture of chi-square distributions for its logarithm, while the latter authors started from representations based on Meijer G and hypergeometric functions, to end up with highly complicated series representations.
Gupta (1976) and Gupta and Rathie (1983) addressed the non-null distribution of the Wilks \(\varLambda \) statistic for MANOVA. The first reference gives explicit expressions for the probability density function (pdf) of \(\varLambda \) only for \(p=2\) and \(p=3\), and only the general form of the expression for \(p>3\), while the second gives the distribution for general \(p\), in the form of Meijer G functions and what the authors claim to be computable series forms, which nevertheless remain too intricate even in the null case. Somewhat similar representations were obtained by Pillai and Jouris (1971), also for the LRT statistic for MANOVA and for the LRT statistics to test equality of two covariance matrices and independence of two groups of variables. Mathai (1973) refers to the non-computability of the representations involving Meijer G functions, which is still a reality, and obtains series representations for the distribution of the LRT statistics for the equality of two covariance matrices and for sphericity, which nevertheless have very elaborate expressions.

In this paper the authors obtain

  • very simple expressions for the exact pdf and cumulative distribution function (cdf) of the statistics in (i), (ii) and (iii), avoiding the need for the approximations in Krishnaiah et al. (1976), Fang et al. (1982), Khatri (1965) and the complex expressions in Khatri (1965), Gupta (1971), Tang and Gupta (1986), Gupta (1976), Gupta and Rathie (1983), Pillai and Jouris (1971), Mathai (1973) even for the null case,

  • very accurate manageable near-exact distributions for the statistics in (iv) and (v), which are far more precise and manageable than the approximations and expressions in Krishnaiah et al. (1976), Fang et al. (1982), Tang and Gupta (1986), Pillai and Jouris (1971), Mathai (1973), Pillai and Nagarsenker (1971), Nagar et al. (1985), Nagarsenker and Das (1975), Nagarsenker and Nagarsenker (1981).

In Sect. 2 the exact distributions for the LRT statistics in (i), (ii) and (iii) are derived and it is shown that they have simple closed form expressions, which do not involve any infinite sums. In Sects. 3 and 4 the exact distribution of the LRT statistics in (iv) and (v) is obtained in the form of (1). Then, in Sect. 5, near-exact distributions for these statistics are developed, and in Sect. 6 some numerical studies are carried out, confirming the very good performance of these near-exact distributions. Based on the results obtained, near-exact distributions for the LRT statistic to test equality of several complex normal distributions are developed in Sect. 7, and an example of application of these near-exact distributions is provided in the supplementary material in the Online Resource.

Near-exact distributions are asymptotic distributions developed under a new concept. Usually based on factorizations of the characteristic function (c.f.) of the test statistic under study, or of its logarithm, they are built by leaving one part of this c.f. unchanged and replacing the remaining part asymptotically, in such a way that the resulting c.f. corresponds to a manageable distribution. These distributions, when correctly built for statistics used in Multivariate Analysis, are not only asymptotic for increasing sample sizes but also have a marked asymptotic behavior for increasing numbers of variables and populations involved. All this amounts to obtaining asymptotic distributions that lie much closer to the exact distribution than common asymptotic distributions (Marques et al. 2011).

In this paper the definition of the complex multivariate normal distribution that will be used is the one found in Wooding (1956), Goodman (1963a), Brillinger (2001, sec. 4.2) and Anderson (2003, prob. 2.64): the random vector \(\underline{X}\) \({(p \times 1)}\) has a complex multivariate normal distribution, with expected value \(\underline{\mu }\) and Hermitian variance-covariance matrix \(\varSigma \), if the pdf of \(\underline{X}\) is

$$\begin{aligned} f^{}_{\underline{X}}(\underline{x})=\pi ^{-p}|\varSigma |^{-1}\,\mathrm{e}^{-(\overline{\underline{x}-\underline{\mu }})'\varSigma ^{-1}(\underline{x}-\underline{\mu })}, \end{aligned}$$

where \((\overline{\underline{x}-\underline{\mu }})\) denotes the complex conjugate of \((\underline{x}-\underline{\mu })\) and the prime denotes the transpose, with

$$\begin{aligned} \varSigma =2\varTheta +2\mathrm {i}\varPhi , \end{aligned}$$

where \(\varTheta =\mathrm{Cov}(\mathrm{Re}(\underline{X}))=\mathrm{Cov}(\mathrm{Im}(\underline{X}))\) and \(\varPhi =\mathrm{Cov}(\mathrm{Re}(\underline{X}),\mathrm{Im}(\underline{X}))\) are, respectively, a positive-definite and a skew-symmetric matrix. In this case we will write

$$\begin{aligned} \underline{X}\sim CN_p(\underline{\mu },\varSigma ). \end{aligned}$$
(3)

The complex multivariate normal distribution has applications in a wide range of areas, from crystallography (Pannu et al. 2003) to spectral analysis in time series (Brillinger and Krishnaiah 1982; Brillinger 2001; Shumway and Stoffer 2006; Goodman 1963a; Turin 1960; Krishnaiah et al. 1976) and to studies on satellite images, signal processing, communications and information theory (Conradsen 2012; Conradsen et al. 2003; Lehmann et al. 2007).

2 The exact distribution of the LRT statistics to test independence, the equality of mean vectors and the equality of an expected value matrix to a given matrix

In this section it is shown how, in contrast to what happens when dealing with real r.v.’s (Marques et al. 2011), in the complex case it is possible to obtain the exact distribution of the negative logarithm of the LRT statistics to test independence, the equality of mean vectors or the equality of an expected value matrix to a given matrix as generalized integer gamma (GIG) distributions (Coelho 1998), for any number of variables or mean vectors involved, thus obtaining the exact distribution of the LRT statistics in the form in (1), with \({u=0}\). The simplicity of the expressions obtained for the exact pdf’s and cdf’s is quite striking when compared with the expressions obtained by other authors (Fang et al. 1982; Gupta and Rathie 1983; Khatri 1965; Krishnaiah et al. 1976; Pillai and Jouris 1971), and although they bear some resemblance to the expressions in Tang and Gupta (1986), they have, in contrast to these, the advantage of involving only finite sums. Even when compared with the rather close representations in Gupta (1971), Mathai (1973) and Gupta (1976), they have the advantage of being more general and of having coefficients with much simpler forms. This is especially the case with respect to the representation in Gupta (1976), while in Gupta (1971) the author did not obtain explicit expressions for such coefficients.

2.1 The LRT statistic to test independence among sets of variables

Suppose that the random vector \(\underline{X}\) in (3) is split into \(m\) subvectors \(\underline{X}_k\) \((k=1,\ldots ,m)\) and that we intend to test the independence of these \(m\) subvectors. The partition of the vector \(\underline{X}\) will induce the following structure for \(\varSigma \):

$$\begin{aligned} \varSigma =\left[ \begin{array}{lllll} \varSigma _{11} &{} \cdots &{} \varSigma _{1k} &{} \cdots &{} \varSigma _{1m} \\ \vdots &{} \ddots &{} \vdots &{} &{} \vdots \\ \varSigma _{k1} &{} \cdots &{} \varSigma _{kk} &{} \cdots &{} \varSigma _{km} \\ \vdots &{} &{} \vdots &{} \ddots &{} \vdots \\ \varSigma _{m1} &{} \cdots &{} \varSigma _{mk} &{} \cdots &{} \varSigma _{mm} \end{array} \right] , \end{aligned}$$

where \({\varSigma _{kk}=\mathrm{Var}(\underline{X}_k)}\) \({(k=1,\ldots ,m)}\) and \({\varSigma _{ij}=\mathrm{Cov}(\underline{X}_i,\underline{X}_j)}\) (\({i,j\in \{1,\ldots ,m\}}\)). The hypothesis of mutual independence of the random subvectors \(\underline{X}_k\) may thus be written as

$$\begin{aligned} H_0:\,\varSigma =\hbox {diag}(\varSigma _{11},\ldots ,\varSigma _{kk},\ldots ,\varSigma _{mm})\equiv \varSigma _{ij}=0, \ \hbox {for all }i\ne j. \end{aligned}$$
(4)

Let us further suppose that, for each \(k\), \(\underline{X}_k\) has \(p_k\) variables, with \(p=\sum _{k=1}^m p_k\). Then, for a sample of size \(n\), the LRT statistic to test \(H_0\) in (4) is

$$\begin{aligned} \varLambda _1=\left( \frac{|A|}{\prod ^m_{k=1}|A_{kk}|}\right) ^n, \end{aligned}$$
(5)

where \(A\) is the maximum likelihood estimator (MLE) of \(\varSigma \) and \(A_{kk}\) its \(k\)-th diagonal block (\(k=1,\ldots ,m\)), with

$$\begin{aligned} A=\frac{1}{n}\left( \,\overline{X-\frac{1}{n}E_{nn}X}\,\right) '\left( X-\frac{1}{n}E_{nn}X\right) , \end{aligned}$$
(6)

where once again the bar denotes the complex conjugate, \(X\) is the \(n{\scriptstyle \times }p\) sample matrix and \(E_{np}\) is an \(n{\scriptstyle \times }p\) matrix of ones (see Goodman 1963a; Anderson 2003, problem 3.11 for references concerning the MLE of \(\varSigma \) for the multivariate complex normal distribution).
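For concreteness, here is a minimal numpy sketch (ours; dimensions and data are arbitrary) of the estimator in (6), checking that the resulting \(A\) is Hermitian and positive semi-definite:

```python
import numpy as np

def mle_cov(X):
    """MLE A of Sigma in (6): A = (1/n) conj(Xc)' Xc, where Xc = X - (1/n) E_nn X
    subtracts the column means from the n x p sample matrix X."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)              # (1/n) E_nn X replicates the column means
    return (np.conj(Xc).T @ Xc) / n

rng = np.random.default_rng(1)
n, p = 40, 3
X = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
A = mle_cov(X)
assert np.allclose(A, np.conj(A).T)              # A is Hermitian
assert np.all(np.linalg.eigvalsh(A) > -1e-12)    # and positive semi-definite
```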

Following a procedure similar to that used in the real case, it is possible to show that, under \(H_0\) in (4), with \(q_k=p_{k+1}+\cdots +p_m\) \((k=1,\ldots ,m-1)\), for \(h>p/n-1\),

$$\begin{aligned} E\left( \varLambda _1^h\right) =\prod _{k=1}^{m-1}\prod _{j=1}^{p_k} \frac{\varGamma \left( n-j\right) \varGamma (n-q_k-j+nh)}{\varGamma (n-q_k-j) \varGamma (n-j+nh)}=\prod _{k=1}^{m-1}\prod _{j=1}^{p_k}E\left( Y_{jk}^{nh}\right) , \end{aligned}$$
(7)

where \(Y_{jk}\sim \mathrm{Beta}\left( n-q_k-j,q_k\right) \) are \(\sum _{k=1}^{m-1}p_k\) independent r.v.’s (for a proof, see Appendix B, section B.1 in the Online Resource, or section B.5.1 for an alternative proof based on the results in Jensen (1988, Thm. 5)). Note that (7) matches the expressions for the moments of \(\varLambda _1\) in Krishnaiah et al. (1976) and Fang et al. (1982).

But then, since \(0<\varLambda _1<1\), its distribution is uniquely determined by the set of its moments, and consequently

$$\begin{aligned} \varLambda _1\,\buildrel {st}\over {\sim }\, \prod _{k=1}^{m-1}\prod _{j=1}^{p_k} \left( Y_{jk}\right) ^n. \end{aligned}$$

Therefore, the c.f. of

$$\begin{aligned} W_1=-\log \varLambda _1, \end{aligned}$$
(8)

may be written as

$$\begin{aligned} \displaystyle \varPhi ^{}_{W_1}(t) \!=\! \displaystyle E\left( \mathrm{e}^{\mathrm {i}tW_1}\right) \!=\! E\left( \varLambda _1^{-\mathrm {i}t}\right) = \displaystyle \prod _{k=1}^{m-1}\prod _{j=1}^{p_k} \frac{\varGamma \left( n-j\right) }{\varGamma (n-q_k-j)}\,\frac{\varGamma (n-q_k-j-n\mathrm {i}t)}{\varGamma (n-j-n\mathrm {i}t)},\nonumber \\ \end{aligned}$$
(9)

giving rise to Theorem 1.

Theorem 1

The exact distribution of \(W_1\) in (8) is a GIG distribution of depth \({p-1}\) with pdf (see Marques et al. (2011, App. B) for the notation on the GIG pdf and cdf)

$$\begin{aligned} f^{}_{W_1}(w)=f^\mathrm{GIG}\left( w\,\bigl |\,r_1,\ldots ,r_{p-1};\frac{n-2}{n},\ldots ,\frac{n-p}{n};p-1\right) \end{aligned}$$

and cdf

$$\begin{aligned} F^{}_{W_1}(w)=F^\mathrm{GIG}\left( w\,\bigl |\,r_1,\ldots ,r_{p-1};\frac{n-2}{n},\ldots ,\frac{n-p}{n};p-1\right) , \end{aligned}$$

where \({w>0}\) and

$$\begin{aligned} r_j=\left\{ \begin{array}{ll} h_j &{}\quad j=1\\ h_j+r_{j-1} &{}\quad j=2,\ldots ,p-1 \end{array} \right. \end{aligned}$$
(10)

with \(h_j=(\hbox {number of }p_k \hbox { greater than or equal to }j)-1\), for \({j=1,\ldots ,p-1}\).

Proof

From (9), using, for any complex number \(z\) and any \(a\in {\mathbb {N}}\), the relation

$$\begin{aligned} \frac{\varGamma (z+a)}{\varGamma (z)}=\prod _{\nu =0}^{a-1}(z+\nu ), \end{aligned}$$
(11)

it follows that

$$\begin{aligned} \displaystyle \varPhi ^{}_{W_1}(t)&= \displaystyle \prod _{k=1}^{m-1}\prod _{j=1}^{p_k}\prod _{\nu =0}^{q_k-1} \frac{n-q_k-j+\nu }{n-q_k-j+\nu -n\mathrm {i}t}= \prod _{j=1}^{p-1} \left( \frac{n-1-j}{n}\right) ^{r_j}\left( \frac{n-1-j}{n}-\mathrm {i}t\right) ^{-r_j}, \end{aligned}$$

which is the c.f. of the sum of \({p-1}\) independent gamma r.v.’s with shape parameters \(r_j\), given by (10) in the body of the Theorem, and rate parameters \(\frac{n-1-j}{n}\) \((j=1,\ldots ,p-1)\), that is, of a GIG distribution of depth \({p-1}\) with shape parameters \(r_j\) and rate parameters \(\frac{n-1-j}{n}\).
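As a numerical sanity check on this collapsing of the triple product (ours, not part of the paper), one may verify that the multiset of factors \(n-q_k-j+\nu \) coincides with the multiset \(\{n-1-j\), each with multiplicity \(r_j\}\); the values of \(n\) and of the \(p_k\) below are arbitrary.

```python
from collections import Counter

def gig_shapes(pk):
    """Shape parameters r_j from (10): h_j = #{p_k >= j} - 1, accumulated."""
    r, acc = [], 0
    for j in range(1, sum(pk)):
        acc += sum(1 for s in pk if s >= j) - 1
        r.append(acc)
    return r

n, pk = 30, [3, 2, 2]                    # arbitrary sample size and partition
qk = [sum(pk[k + 1:]) for k in range(len(pk) - 1)]

# factors n - q_k - j + nu appearing in the triple product of the proof
lhs = Counter(n - qk[k] - j + nu
              for k in range(len(pk) - 1)
              for j in range(1, pk[k] + 1)
              for nu in range(qk[k]))

# factors n - 1 - j, each with multiplicity r_j, from the collapsed product
rhs = Counter()
for j, rj in enumerate(gig_shapes(pk), start=1):
    if rj:
        rhs[n - 1 - j] += rj

assert lhs == rhs
```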

Then Corollary 1 gives the exact pdf and cdf of \(\varLambda _1=\mathrm{e}^{-W_1}\).

Corollary 1

The exact pdf and cdf of the statistic \(\varLambda _1=\mathrm{e}^{-W_1}\) in (5) are, for \({0<\ell \le 1}\),

$$\begin{aligned} f^{}_{\varLambda _1}(\ell )=f^\mathrm{GIG}\left( -\log \ell \,\bigl |\,r_1,\ldots ,r_{p-1};\frac{n-2}{n},\ldots , \frac{n-p}{n};p-1\right) \frac{1}{\ell } \end{aligned}$$

and

$$\begin{aligned} F^{}_{\varLambda _1}(\ell )=1-F^\mathrm{GIG}\left( -\log \ell \,\bigl |\,r_1,\ldots ,r_{p-1}; \frac{n-2}{n},\ldots ,\frac{n-p}{n};p-1\right) , \end{aligned}$$

for \(r_j\) given by (10).

The exact distribution of \(\varLambda _1\) is thus of the form (1), with \(u=0\).
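The counting rule in (10) is easily coded; the sketch below (ours, for a hypothetical partition) computes the \(r_j\) and also checks the bookkeeping identity \(\sum _j r_j=\sum _{k=1}^{m-1}p_kq_k\), with \(q_k=p_{k+1}+\cdots +p_m\), which follows from the proof of Theorem 1.

```python
def gig_shapes(pk):
    """Shape parameters r_1, ..., r_{p-1} from (10):
    h_j = (number of p_k >= j) - 1, accumulated over j."""
    r, acc = [], 0
    for j in range(1, sum(pk)):
        acc += sum(1 for s in pk if s >= j) - 1
        r.append(acc)
    return r

pk = [2, 2, 1]                  # hypothetical partition: m = 3 sets, p = 5
r = gig_shapes(pk)
assert r == [2, 3, 2, 1]

# bookkeeping check: sum_j r_j = sum_{k=1}^{m-1} p_k q_k, q_k = p_{k+1}+...+p_m
qk = [sum(pk[k + 1:]) for k in range(len(pk) - 1)]
assert sum(r) == sum(pi * qi for pi, qi in zip(pk, qk))
```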

2.2 The LRT statistic to test the equality of several mean vectors

Let us suppose that

$$\begin{aligned} \underline{X}_j\sim CN_p(\underline{\mu }_j,\varSigma _j),\quad j=1,\ldots ,q \end{aligned}$$

and that, assuming \(\varSigma _1=\cdots =\varSigma _q({=}\varSigma )\), the null hypothesis to be tested is

$$\begin{aligned} H_0:\,\underline{\mu }_1=\cdots =\underline{\mu }_q, \end{aligned}$$
(12)

based on \(q\) independent samples, the \(j\)-th of which is from \(\underline{X}_j\), with size \(n_j\). Also, suppose that the \(j\)-th sample is stored in the \(n_j{\scriptstyle \times }p\) matrix \(X_j\).

Then, the LRT statistic used to test \(H_0\) in (12) is given by

$$\begin{aligned} \varLambda _2=\left( \frac{|A|}{|A+B|}\right) ^n \end{aligned}$$
(13)

where \({n=\sum ^q_{j=1} n_j}\),

$$\begin{aligned} A=\sum ^q_{j=1}\left( \,\overline{X_j-E_{n_j1}\widetilde{\underline{X}}_j'}\,\right) '\biggl (X_j-E_{n_j1}\widetilde{\underline{X}}_j'\biggr ) \quad \mathrm{and}\quad B=\sum ^q_{j=1} n_j\Bigl (\widetilde{\underline{X}}_j-\widetilde{\underline{X}}\Bigr )\left( \,\overline{\widetilde{\underline{X}}_j-\widetilde{\underline{X}}}~\right) ',\nonumber \\ \end{aligned}$$
(14)

where once again the bar denotes the complex conjugate and where

$$\begin{aligned} \widetilde{\underline{X}}_j=\frac{1}{n_j}X_j'E_{n_j1}, \end{aligned}$$

is the vector of sample means from the \(j\)-th sample, and

$$\begin{aligned} \widetilde{\underline{X}}=\frac{1}{n}\sum ^q_{j=1}n_j\,\widetilde{\underline{X}}_j. \end{aligned}$$

The \(p{\scriptstyle \times }p\) matrix \(A\) has what is called a complex Wishart distribution (Goodman 1963a, b), with \(n-q\) degrees of freedom and parameter matrix \(\varSigma \). This fact is denoted by \(A\sim CW_p(n-q,\varSigma )\). Under \(H_0\) in (12), \(B\sim CW_p(q-1,\varSigma )\), and then, given the independence, for normal r.v.’s, of the MLE’s of the mean and variance, the matrices \(A\) and \(B\) are independent, so that, under \(H_0\) in (12),

$$\begin{aligned} A+B\sim CW_p(n-1,\varSigma ). \end{aligned}$$

It then follows that (see Goodman 1963b),

$$\begin{aligned} |A|\,\buildrel {st}\over {\sim }\,|\varSigma |\prod ^{p}_{j=1}\frac{1}{2}\,W_j \quad \mathrm{and}\quad |A+B|\,\buildrel {st}\over {\sim }\,|\varSigma |\prod ^{p}_{j=1}\frac{1}{2}\,Z_j, \end{aligned}$$

where \(W_j\) \((j=1,\ldots ,p)\) and \(Z_j\) \((j=1,\ldots ,p)\) are two independent sets of \(p\) independent r.v.’s, with

$$\begin{aligned} W_j\sim \chi ^2_{2(n-q-j+1)}\quad \mathrm{and}\quad Z_j\,\buildrel {st}\over {\sim }\,W_j+W^*_j, \end{aligned}$$

where each

$$\begin{aligned} W^*_j\sim \chi ^2_{2(q-1)},\quad j=1,\ldots ,p, \end{aligned}$$

is independent of \(W_j\) \((j=1,\ldots ,p)\).

Thus, under \(H_0\) in (12),

$$\begin{aligned} \varLambda _2\,\buildrel {st}\over {\sim }\,\prod ^{p}_{j=1}\left( \frac{W_j}{W_j+W^*_j}\right) ^{\!n}=\prod ^{p}_{j=1}\left( Y_j\right) ^n,\quad \hbox {with}\quad Y_j\sim \hbox {Beta}(n-q-j+1,q-1). \end{aligned}$$
(15)

But then, for \(h>(p+q)/n-1\),

$$\begin{aligned} E\left( \varLambda _2^h\right) =\prod ^{p}_{j=1}\frac{\varGamma (n-j)}{\varGamma (n-q-j+1)}\,\frac{\varGamma (n-q-j+1+nh)}{\varGamma (n-j+nh)}. \end{aligned}$$
(16)

This result may also be obtained in an alternative way as in Appendix B.2 in the Online Resource.
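Numerically, the moments in (16) are exactly those of \(\prod _{j=1}^p Y_j^n\) for independent \(Y_j\sim \hbox {Beta}(n-q-j+1,q-1)\); the check below (ours, using only the standard library, with arbitrary values of \(n\), \(p\), \(q\) and \(h\)) makes this explicit.

```python
from math import lgamma

def log_moment_16(n, p, q, h):
    """log E(Lambda_2^h), as given by (16)."""
    return sum(lgamma(n - j) - lgamma(n - q - j + 1)
               + lgamma(n - q - j + 1 + n * h) - lgamma(n - j + n * h)
               for j in range(1, p + 1))

def log_beta_moment(a, b, s):
    """log E(Y^s) for Y ~ Beta(a, b)."""
    return lgamma(a + s) + lgamma(a + b) - lgamma(a) - lgamma(a + b + s)

n, p, q, h = 20, 3, 4, 0.3               # arbitrary values with n > p + q
lhs = log_moment_16(n, p, q, h)
rhs = sum(log_beta_moment(n - q - j + 1, q - 1, n * h) for j in range(1, p + 1))
assert abs(lhs - rhs) < 1e-10
```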

From (16), the c.f. of

$$\begin{aligned} W_2=-\log \varLambda _2 \end{aligned}$$
(17)

may be written as

$$\begin{aligned} \varPhi ^{}_{W_2}(t) \!=\! E\left( \mathrm{e}^{\mathrm {i}tW_2}\right) \!=\!E\left( \varLambda _2^{-\mathrm {i}t}\right) \!=\! \prod ^{p}_{j=1}\frac{\varGamma (n-j)}{\varGamma (n\!-\!q\!-\!j\!+\!1)}\,\frac{\varGamma (n-q-j+1-n\mathrm {i}t)}{\varGamma (n-j-n\mathrm {i}t)},\nonumber \\ \end{aligned}$$
(18)

from which Theorem 2 may be established.

Theorem 2

The exact distribution of \(W_2\) in (17) is a GIG distribution of depth \({p+q-2}\) with pdf

$$\begin{aligned} f^{}_{W_2}(w)=f^\mathrm{GIG}\left( w\,\bigl |\,r_1,\ldots ,r_{p+q-2};\frac{n-2}{n},\ldots ,\frac{n-p-q+1}{n};p+q-2\right) \end{aligned}$$

and cdf

$$\begin{aligned} F^{}_{W_2}(w)=F^\mathrm{GIG}\left( w\,\bigl |\,r_1,\ldots ,r_{p+q-2};\frac{n-2}{n},\ldots ,\frac{n-p-q+1}{n};p+q-2\right) , \end{aligned}$$

where \({w>0}\) and

$$\begin{aligned} r_j=\left\{ \begin{array}{ll} h_j &{}\quad j=1\\ h_j+r_{j-1} &{}\quad j=2,\ldots ,p+q-2 \end{array} \right. \end{aligned}$$
(19)

with \(h_j=(\# \hbox { of elements in }\{p,q-1\}\hbox { greater than or equal to }j)-1,\) for \(j=1,\ldots ,p+q-2.\)

Proof

From (18), using the relation in (11), the c.f. of \(W_2\) may be written as

$$\begin{aligned} \displaystyle \varPhi ^{}_{W_2}(t)\!=\!\displaystyle \prod _{j=1}^{p}\prod _{\ell =0}^{q-2} \frac{n-q-j+1+\ell }{n-q-j+1+\ell -n\mathrm {i}t}\!=\!\prod _{j=1}^{p+q-2} \left( \frac{n-1-j}{n}\right) ^{\!r_j}\left( \frac{n-1-j}{n}-\mathrm {i}t\right) ^{\!-r_j}\nonumber \\ \end{aligned}$$
(20)

with \(r_j\) given by (19) in the body of the theorem. From (20) it is possible to conclude that the distribution of \(W_2\) is indeed a GIG distribution of depth \({p+q-2}\), with shape parameters \(r_j\) and rate parameters \(\frac{n-1-j}{n}\) \((j=1,\ldots ,p+q-2)\).

Then Corollary 2 gives the exact pdf and cdf of \(\varLambda _2=\mathrm{e}^{-W_2}\).

Corollary 2

The exact pdf and cdf of the statistic \(\varLambda _2=\mathrm{e}^{-W_2}\) in (13) are\(,\) for \({0<\ell \le 1},\)

$$\begin{aligned} f^{}_{\varLambda _2}(\ell )\!=\!f^\mathrm{GIG}\left( \!-\!\log \ell \,\bigl |\,r_1,\ldots ,r_{p+q-2};\frac{n\!-\!2}{n},\ldots ,\frac{n\!-\!p\!-\!q+1}{n};p+q-2\right) \frac{1}{\,\ell \,} \end{aligned}$$

and

$$\begin{aligned} F^{}_{\varLambda _2}(\ell )\!=\!1-F^\mathrm{GIG}\left( \!-\!\log \ell \,\bigl |\,r_1,\ldots ,r_{p+q-2};\frac{n\!-\!2}{n},\ldots ,\frac{n\!-\!p\!-\!q\!+\!1}{n};p+q-2\right) , \end{aligned}$$

for \(r_j\) given by (19).

Once again, the distribution of \(\varLambda _2\) is of the form (1) with \(u=0\).

2.3 The LRT statistic to test if an expected value matrix is equal to a given matrix

A generalization of the test for an expected value matrix developed in Khatri (1965, Subsection 3.2) will be addressed in this subsection. Let \(Z\) \((p{\scriptstyle \times }n)\) be a matrix with a complex multivariate normal distribution with expected value \({\varvec{\mu }}M\), where \(\varvec{\mu }\) is a \(p{\scriptstyle \times }q\) complex matrix and \(M\) is \(q{\scriptstyle \times }n\) of rank \(q\,({\le }n)\), and with variance \(I_n\otimes \varSigma \), that is, with \(\mathrm{var}(\mathrm{vec}(Z))=I_n\otimes \varSigma \). This fact is denoted by

$$\begin{aligned} Z_{p{\scriptstyle \times }n}\sim CN_{p{\scriptstyle \times }n}(\varvec{\mu }M,I_n\otimes \varSigma ). \end{aligned}$$
(21)

Let us then suppose that the hypothesis to be tested, for a given matrix \(\varXi \), is

$$\begin{aligned} H_0:\,\varvec{\mu }_{(p{\scriptstyle \times }q)}=\varXi _{(p{\scriptstyle \times }q)}. \end{aligned}$$
(22)

Then, according to Khatri (1965), the LRT statistic to test \(H_0\) is

$$\begin{aligned} \varLambda _3=\left( \frac{|\varPsi |}{\left| \varPsi +\frac{1}{n}(\beta -\varXi )(M\overline{M}')(\overline{\beta -\varXi })'\right| }\right) ^n , \end{aligned}$$
(23)

where

$$\begin{aligned} \varPsi = \frac{1}{n}Z\left( I_n-\overline{M}'\!\left( M\overline{M}'\right) ^{\!-1}\!M\right) \!\overline{Z}'\quad \mathrm{and}\quad \beta =Z\overline{M}'\left( M\overline{M}'\right) ^{-1} \end{aligned}$$
(24)

are, respectively, the MLE’s of \(\varSigma \) and \(\varvec{\mu }\), and, as such, independent.

But then, since \((I_n-\overline{M}'(M\overline{M}')^{-1}M)\) is the projector onto the null space of \(M\), it is idempotent with

$$\begin{aligned} \hbox {rank}\left( I_n-\overline{M}'\left( M\overline{M}'\right) ^{-1}M\right) =tr\left( I_n-\overline{M}'\left( M\overline{M}'\right) ^{-1}M\right) =n-q. \end{aligned}$$
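These projector properties are easy to confirm numerically; the sketch below (ours, with an arbitrary full-rank \(M\)) checks idempotency and that the trace, and hence the rank, equals \(n-q\).

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 12, 3
# an arbitrary full-rank q x n complex matrix standing in for M
M = rng.standard_normal((q, n)) + 1j * rng.standard_normal((q, n))

# P projects onto the null space of M
P = np.eye(n) - np.conj(M).T @ np.linalg.inv(M @ np.conj(M).T) @ M
assert np.allclose(P @ P, P)                   # idempotent
assert np.isclose(np.trace(P).real, n - q)     # trace = n - q
assert np.linalg.matrix_rank(P) == n - q       # and so is the rank
```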

Then, given the distribution of \(Z\) in (21),

$$\begin{aligned} \varPsi \sim CW_p\left( n-q,\frac{1}{n}\varSigma \right) . \end{aligned}$$
(25)

From (21),

$$\begin{aligned} \beta =Z\overline{M}'\!\left( M\overline{M}'\right) ^{-1}\sim CN_{p{\scriptstyle \times }q}\left( \varvec{\mu },\left( M\overline{M}'\right) ^{-1}\!\otimes \varSigma \right) , \end{aligned}$$

so that

$$\begin{aligned} (\beta -\varXi )\left( M\overline{M}'\right) ^{1/2}\,\sim \,CN_{p{\scriptstyle \times }q}\left( (\varvec{\mu }-\varXi )\left( M\overline{M}'\right) ^{1/2},\,I_q\otimes \varSigma \right) , \end{aligned}$$

where, under \(H_0\) in (22), \((\varvec{\mu }-\varXi )\left( M\overline{M}'\right) ^{1/2}=0\). Consequently under \(H_0\) in (22),

$$\begin{aligned} (\beta -\varXi )\left( M\overline{M}'\right) (\overline{\beta -\varXi })'\sim CW_p(q,\varSigma ), \end{aligned}$$

independent of \(\varPsi \), so that, under \(H_0\) in (22),

$$\begin{aligned} \varPsi +\frac{1}{n}(\beta -\varXi )(M\overline{M}')(\overline{\beta -\varXi })' \sim CW_p\left( n,\frac{1}{n}\varSigma \right) . \end{aligned}$$

Thus, following similar steps to those in Sect. 2.2 (see also Appendix B.2 in the Online Resource), under \(H_0\) in (22), for \({n>q+p-1}\),

$$\begin{aligned} \varLambda _3\,\buildrel {st}\over {\sim }\,\prod ^{p}_{j=1}\left( Y_j\right) ^n,\quad \hbox {with}\quad Y_j\sim \hbox {Beta}(n+1-q-j,q), \end{aligned}$$
(26)

where the \(Y_j\) are \(p\) independent r.v.’s,

so that, for \({h>(q+p-1)/n-1}\),

$$\begin{aligned} E\left( \varLambda _3^h\right) =\prod ^p_{j=1}\frac{\varGamma (n+1-j)}{\varGamma (n+1-q-j)}\,\frac{\varGamma (n+1-q-j+nh)}{\varGamma (n+1-j+nh)}. \end{aligned}$$

Thus, the c.f. of \(W_3=-\log \varLambda _3\) is

$$\begin{aligned} \displaystyle \varPhi ^{}_{W_3}(t)&= \displaystyle E\left( \mathrm{e}^{\mathrm {i}tW_3}\right) =E\left( \varLambda _3^{-\mathrm {i}t}\right) \nonumber \\&= \displaystyle \prod ^p_{j=1} \frac{\varGamma (n+1-j)\,\varGamma (n+1-q-j-n\mathrm {i}t)}{\varGamma (n+1-q-j)\,\varGamma (n+1-j-n\mathrm {i}t)} = \prod ^p_{j=1}\prod ^{q-1}_{\ell =0}\frac{n+1-q-j+\ell }{n+1-q-j+\ell -n\mathrm {i}t}\nonumber \\ \end{aligned}$$
(27)
$$\begin{aligned}&= \displaystyle \prod ^{p+q-1}_{j=1}(n-j)^{r_j}\,(n-j-n\mathrm {i}t)^{-r_j} = \prod ^{p+q-1}_{j=1}\left( \frac{n-j}{n}\right) ^{r_j}\left( \frac{n-j}{n}-\mathrm {i}t\right) ^{-r_j} \end{aligned}$$
(28)

for \(r_j\) given by (19) in Theorem 2, with \(q\) replaced by \(q+1\).

Comparing (26) with (5.2.2) and (5.2.3) in Khatri (1965) and the first expression in (27) with (5.3.1) and (5.3.3) in the same reference, it can be seen that there is a small mistake in Khatri’s paper, where \(q\) has to be subtracted from the first argument of the beta r.v.’s in (5.2.2) and (5.2.3) and also from the arguments of all gamma functions in (5.3.1) of Khatri (1965).

From (28) Theorem 3 and Corollary 3 may be established.

Theorem 3

The exact distribution of \(W_3=-\log \varLambda _3\) is a GIG distribution of depth \({p+q-1}\) with pdf

$$\begin{aligned} f^{}_{W_3}(w)=f^\mathrm{GIG}\left( w\,\bigl |\,r_1,\ldots ,r_{p+q-1};\frac{n-1}{n},\ldots ,\frac{n-p-q+1}{n};p+q-1\right) \end{aligned}$$

and cdf

$$\begin{aligned} F^{}_{W_3}(w)=F^\mathrm{GIG}\left( w\,\bigl |\,r_1,\ldots ,r_{p+q-1};\frac{n-1}{n},\ldots ,\frac{n-p-q+1}{n};p+q-1\right) , \end{aligned}$$

where \({w>0}\) and \(r_j\) \((j=1,\ldots ,p+q-1)\) are given by (19) in Theorem 2, with \(q\) replaced by \(q+1.\)

Corollary 3

The exact pdf and cdf of the statistic \(\varLambda _3=\mathrm{e}^{-W_3}\) in (23) are\(,\) for \({0<\ell \le 1},\)

$$\begin{aligned} f^{}_{\varLambda _3}(\ell )\!=\!f^\mathrm{GIG}\left( \!-\!\log \ell \,\bigl |\,r_1,\ldots ,r_{p+q-1}; \frac{n-1}{n},\ldots ,\frac{n\!-\!p\!-\!q+1}{n};p+q-1\right) \frac{1}{\ell } \end{aligned}$$

and

$$\begin{aligned} F^{}_{\varLambda _3}(\ell )\!=\!1-F^\mathrm{GIG}\left( \!-\!\log \ell \,\bigl |\,r_1,\ldots ,r_{p+q-1}; \frac{n-1}{n},\ldots ,\frac{n\!-\!p\!-\!q+1}{n};p\!+\!q\!-\!1\right) , \end{aligned}$$

for \(r_j\) given by (19)\(,\) with \(q\) replaced by \(q+1.\)

This test may be easily extended to test hypotheses of the type

$$\begin{aligned} H_0: \mu D=0, \end{aligned}$$
(29)

where \(D\) is a \({q{\scriptstyle \times }q}\) non-random matrix.

By the invariance property of the MLEs, which entails the invariance of the LRT statistics, the LRT statistic to test \(H_0\) in (29) may be obtained just by replacing \(\beta =\hat{\mu }\) in the expression of \(\varLambda _3\) by \(\beta ^*=\widehat{\mu D}=\hat{\mu }D\). Since in many cases \(D\) will not be full-rank, the distribution of the matrix \(B^*\), where

$$\begin{aligned} B^*=\frac{1}{n}\beta ^*(M\overline{M}')\overline{\beta }^{*\prime }\quad \mathrm{with} \quad \beta ^*=Z\overline{M}'(M\overline{M}')^{-1}D \end{aligned}$$
(30)

is now a complex Wishart distribution with a number of degrees of freedom equal to

$$\begin{aligned} \hbox {rank}(\overline{M}'\overline{D}'(M\overline{M}')^{-1}DM) \end{aligned}$$

which, since \(M\overline{M}'\) is full-rank, will be equal to \(\hbox {rank}(D)\).

The LRT statistic to test \(H_0\) in (29) is thus

$$\begin{aligned} \varLambda _3^*=\left( \frac{|\varPsi |}{|\varPsi +B^*|}\right) ^n, \end{aligned}$$
(31)

with \(\varPsi \sim CW_p\left( n-q,\frac{1}{n}\varSigma \right) \), as in (25), and \(B^*\) as in (30). If \(\hbox {rank}(D)=q^*(\le q)\), then

$$\begin{aligned} B^*\sim CW_p\left( q^*,\frac{1}{n}\varSigma \right) , \end{aligned}$$

independent of \(\varPsi \), so that \(\varPsi +B^*\sim CW_p\left( n-q+q^*,\frac{1}{n}\varSigma \right) \) and as such (see Appendix B.2 in the Online Resource)

$$\begin{aligned} \varLambda _3^*\,\buildrel {st}\over {\sim }\,\prod ^{p}_{j=1}\left( Y_j\right) ^n,\quad \hbox {with}\quad Y_j\sim \hbox {Beta}(n-q-j+1,q^*), \end{aligned}$$

where the \(Y_j\) are \(p\) independent r.v.’s.

Then, with an appropriate choice of the matrices \(Z\), \(M\) and \(D\), this test may be used to implement the test of equality of mean vectors in Sect. 2.2. Indeed, if \(Z\) is the sample matrix, with

$$\begin{aligned} Z=\left[ Z_1\,|\,Z_2\,|\,\cdots \,|\,Z_q\right] ,\quad \mathrm{where}\quad Z_k=\left[ Z_{kj\ell }\right] _{\begin{array}{c} {\scriptstyle j=1,\ldots ,p}\\ {\scriptstyle \ell =1,\ldots ,n_k} \end{array}}\,, \end{aligned}$$

where \(Z_{kj\ell }\) represents the \(\ell \)-th observation for the \(j\)-th variable in the \(k\)-th population, for \({k=1,\ldots ,q}\), \({j=1,\ldots ,p}\) and \({\ell =1,\ldots ,n_k}\), the matrix \(M\) the design matrix, with

$$\begin{aligned} M=\left[ \begin{array}{cccc} E_{1n_1} &{} 0 &{} \cdots &{} 0\\ 0 &{} E_{1n_2} &{} \cdots &{} 0\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \cdots &{} E_{1n_q} \end{array}\right] \quad (q{\scriptstyle \times }n), \end{aligned}$$

and

$$\begin{aligned} D=I_q-\frac{1}{n}\left[ \begin{array}{c} n_1\\ \vdots \\ n_q \end{array}\right] E_{1q}, \end{aligned}$$

clearly with \(\hbox {rank}(D)=q-1\), then the matrix \(B^*\) in (30) is exactly the same as matrix \(B\) in (14), while the matrix \(\varPsi \) in (24) is the same as matrix \(A\) in (14) and the distribution of the LRT statistic \(\varLambda _3^*\) in (31) is the same as that of \(\varLambda _2\) in Sect. 2.2.
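A concrete numerical sketch of this construction (ours; the particular choice \(D=I_q-\frac{1}{n}\,\underline{n}\,E_{1q}\), with \(\underline{n}=(n_1,\ldots ,n_q)'\), is one admissible contrast matrix satisfying \(\varvec{\mu }D=0\) if and only if all mean vectors are equal) checks that \(\hbox {rank}(D)=q-1\) and that the columns of \(\beta D\) are the deviations of the sample means from the weighted grand mean.

```python
import numpy as np

rng = np.random.default_rng(3)
p, nj = 2, [4, 5, 6]                       # hypothetical dimensions: q = 3 samples
q, n = len(nj), sum(nj)

# design matrix M (q x n): row k has ones over the columns of the k-th sample
M = np.zeros((q, n))
col = 0
for k, nk in enumerate(nj):
    M[k, col:col + nk] = 1.0
    col += nk

# one admissible contrast matrix: mu @ D = 0  <=>  all mean vectors are equal
D = np.eye(q) - np.outer(np.array(nj), np.ones(q)) / n
assert np.linalg.matrix_rank(D) == q - 1

# beta has the sample mean vectors as columns; beta @ D subtracts the
# weighted grand mean from each of them
Z = rng.standard_normal((p, n))            # stand-in (real) sample matrix
beta = Z @ M.T @ np.linalg.inv(M @ M.T)
grand = (beta * np.array(nj)).sum(axis=1) / n
assert np.allclose(beta @ D, beta - grand[:, None])
```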

3 The exact distribution of the LRT statistic to test sphericity of the covariance matrix

Suppose that \(\underline{X}\sim CN_p(\underline{\mu },\varSigma )\), and that the hypothesis to be tested is

$$\begin{aligned} H_0:\,\varSigma =\sigma ^2I_p\quad (\hbox {for some unspecified }\sigma ^2>0). \end{aligned}$$
(32)

Then, for a sample of size \(n\), the LRT statistic to test \(H_0\) in (32) is

$$\begin{aligned} \varLambda _4=\left( \frac{|A|}{\left( \hbox {tr}\,\frac{1}{p}A\right) ^p}\right) ^n, \end{aligned}$$
(33)

where \(A\) is the MLE of \(\varSigma \) [see (6) and the note after this expression for references on the MLE of \(\varSigma \)]. Under \(H_0\) in (32), the \(h\)-th moment of \(\varLambda _4\) is

$$\begin{aligned} E\left( \varLambda _4^h\right) =\prod ^{p-1}_{j=1} \frac{\varGamma \left( n-1+\frac{j}{p}\right) }{\varGamma (n-j-1)} \,\frac{\varGamma (n-j-1+nh)}{\varGamma \left( n-1+\frac{j}{p}+nh\right) }= \prod _{j=1}^{p-1}E\left( Y_j^{nh}\right) , \end{aligned}$$
(34)

for \({h>\frac{p}{n}-1}\), and where \({Y_j\sim \hbox {Beta}\left( n-j-1,\frac{j}{p}+j\right) }\) are \(p-1\) independent r.v.’s. See sections B.3 and B.5.2 of Appendix B in the Online Resource for details on the derivation of the expression for \(E\left( \varLambda _4^h\right) \).

Since the whole set of moments of \(\varLambda _4\) defines its distribution,

$$\begin{aligned} \varLambda _4\buildrel {st}\over {\sim }\prod ^{p-1}_{j=1}\left( Y_j\right) ^n. \end{aligned}$$

Then, for \({W_4=-\log \varLambda _4}\), since the expression in (34) is well defined for \(h\) in a neighborhood of zero, it follows that (see Appendix C in the Online Resource for details),

$$\begin{aligned}&\varPhi _{W_4}(t)=\displaystyle E\left( \mathrm{e}^{\mathrm {i}tW_4}\right) =E\left( \varLambda _4^{-\mathrm {i}t}\right) = \prod _{j=1}^{p-1}\frac{\varGamma \left( n-1+\frac{j}{p}\right) }{\varGamma (n-j-1)}\,\frac{\varGamma (n-j-1-n\mathrm {i}t)}{\varGamma \left( n-1+\frac{j}{p}-n\mathrm {i}t\right) }\nonumber \\&\quad =\displaystyle \underbrace{\left\{ \!\prod _{j=1}^{p-1}\!\left( \frac{n-j-1}{n}\right) ^{\!\!p-j} \left( \frac{n-j-1}{n}-\mathrm {i}t\right) ^{\!\!\!-(p-j)}\right\} \!}_{\varPhi ^{}_{1,W_4}(t)}\, \underbrace{\!\left\{ \prod _{j=1}^{p-1}\frac{\varGamma \left( n-1+ \frac{j}{p}\right) \,\varGamma (n-1-n\mathrm {i}t)}{\varGamma (n-1)\,\varGamma \left( n-1+\frac{j}{p}-n\mathrm {i}t\right) }\right\} \!}_{\varPhi ^{}_{2,W_4}(t)}\nonumber \\ \end{aligned}$$
(35)

which shows that the exact distribution of \(W_4\) is the same as the distribution of the sum of a GIG distributed r.v. of depth \(p-1\), with shape parameters \(r_j=p-j\) and rate parameters \(\frac{n-j-1}{n}\) \((j=1,\ldots ,p-1)\), with \(p-1\) independent \(\hbox {Logbeta}\left( n-1,\frac{j}{p}\right) \) r.v.’s \((j=1,\ldots ,p-1)\), each multiplied by \(n\). The distribution of \(\varLambda _4\) is thus of the form (1), with \({u=p-1}\). The possibility of expressing the exact distribution of \(\varLambda _4\) in this form will enable us to develop a very accurate near-exact distribution for \(\varLambda _4\) in Sect. 5.1.
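The factorization in (35) of the c.f. of \(W_4\) into a GIG factor \(\varPhi _{1,W_4}(t)\) and a Logbeta factor \(\varPhi _{2,W_4}(t)\) can be verified numerically. A minimal sketch (hypothetical function names; SciPy's `loggamma` accepts complex arguments):

```python
import numpy as np
from scipy.special import loggamma

def cf_W4(t, p, n):
    """Exact c.f. of W_4 = -log(Lambda_4): first line of (35)."""
    j = np.arange(1, p)
    s = np.sum(loggamma(n - 1 + j / p) - loggamma(n - j - 1)
               + loggamma(n - j - 1 - 1j * n * t) - loggamma(n - 1 + j / p - 1j * n * t))
    return np.exp(s)

def cf_W4_split(t, p, n):
    """Phi_1 (GIG of depth p-1) times Phi_2 (p-1 scaled Logbetas): second line of (35)."""
    j = np.arange(1, p)
    # GIG part: shape p-j and rate (n-j-1)/n for each j
    phi1 = np.prod(((n - j - 1) / n) ** (p - j)
                   * ((n - j - 1) / n - 1j * t) ** (j - p))
    # Logbeta part: n * Logbeta(n-1, j/p) for each j
    phi2 = np.exp(np.sum(loggamma(n - 1 + j / p) + loggamma(n - 1 - 1j * n * t)
                         - loggamma(n - 1) - loggamma(n - 1 + j / p - 1j * n * t)))
    return phi1 * phi2
```

Since the two lines of (35) are algebraically identical, the two evaluations agree to machine precision for any \(t\).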

4 The exact distribution of the LRT statistic to test equality of several covariance matrices

In this section the authors will only address the case of equal sample sizes. The case of unequal sample sizes, given its complexity, will only be addressed in the next section.

Hence, suppose that \(\underline{X}_k\sim CN_p(\underline{\mu }_k,\varSigma _k)\), \({(k=1,\ldots ,q)}\), and that, based on \(q\) independent samples, each of size \(n\), the hypothesis to be tested is

$$\begin{aligned} H_0:\,\varSigma _1=\cdots =\varSigma _q. \end{aligned}$$
(36)

Then, the LRT statistic is

$$\begin{aligned} \varLambda _5=\left( q^{pq}\,\frac{\prod _{k=1}^q|A_k|}{|A|^q}\right) ^n, \end{aligned}$$
(37)

where \(A_k\) is \(n\) times the MLE of \(\varSigma _k\) \((k=1,\ldots ,q)\), and \(A=A_1+\cdots +A_q\) [see (6) and the note after it for references on the MLEs of \(\varSigma _k\)]. Under \(H_0\) in (36), the \(h\)-th moment of \(\varLambda _5\) is

$$\begin{aligned} E\left( \varLambda _5^h\right) = \displaystyle \prod _{j=1}^p\prod _{k=1}^q \frac{\varGamma (n+nh-j)}{\varGamma (n-j)}\,\frac{\varGamma \left( n-1+ \frac{k-j}{q}\right) }{\varGamma \left( n-1+\frac{k-j}{q}+nh\right) }= \prod _{j=1}^p\prod _{k=1}^q E\left( Y_{jk}^{nh}\right) \nonumber \\ \end{aligned}$$
(38)

which may be obtained from expression (70) in Appendix B.4 in the Online Resource (see also subsection B.5.3 in Appendix B in the Online Resource for an alternative proof based on the results in Jensen (1988, Thm. 5)), and where

$$\begin{aligned} {Y_{jk}\sim \hbox {Beta}\left( n-j,j-1+\frac{k-j}{q}\right) } \end{aligned}$$

for \({(j,k)\ne (1,1)}\), are \(pq-1\) independent r.v.’s (for \({j=k=1}\) the second parameter is zero and \(Y_{11}\) is degenerate at 1). Since the distribution of \(\varLambda _5\) is determined by the set of its moments,

$$\begin{aligned} \varLambda _5\buildrel {st}\over {\sim }\mathop {\prod ^p_{j=1}\prod ^q_{k=1}}_{(\mathrm{except\ for}\ j=k=1)} \left( Y_{jk}\right) ^n. \end{aligned}$$

Then, taking \(W_5=-\log \varLambda _5\), we have (see Appendix D in the Online Resource for details)

$$\begin{aligned} \varPhi _{W_5}(t)&= \displaystyle E\left( \mathrm{e}^{\mathrm {i}t W_5}\right) =E\left( \varLambda _5^{-\mathrm {i}t}\right) = \prod ^p_{j=1}\prod ^q_{k=1}\frac{\varGamma \left( n-1+\frac{k-j}{q}\right) \varGamma \left( n-j-n\mathrm {i}t\right) }{\varGamma \left( n-1+\frac{k-j}{q}-n\mathrm {i}t\right) \varGamma \left( n-j\right) }\nonumber \\&= \displaystyle \underbrace{\left\{ \prod ^{p-1}_{j=1}\left( \frac{n-1-j}{n} \right) ^{\!\!r_j}\!\left( \frac{n-1-j}{n}-\mathrm {i}t\right) ^{\!\!-r_j} \!\right\} }_{\varPhi ^{}_{1,W_5}(t)}\nonumber \\&\times \displaystyle \underbrace{\left\{ \prod ^p_{j=1}\prod ^q_{k=1}\!\frac{\varGamma \!\left( n-1+ \frac{k-j}{q}\right) \varGamma \!\left( n-1+\left\lfloor \frac{k-j}{q}\right\rfloor -n\mathrm {i}t\!\right) }{\varGamma \!\left( n-1+\left\lfloor \frac{k-j}{q}\right\rfloor \!\right) \varGamma \!\left( n-1+\frac{k-j}{q}-n\mathrm {i}t\!\right) }\!\!\right\} \!\!}_{\varPhi ^{}_{2,W_5}(t)} \end{aligned}$$
(39)

for

$$\begin{aligned} r_j=\left\{ \begin{array}{ll} q(q-1)\left( j-\frac{1}{2}\right) &{}\quad j=1,\ldots ,\left\lceil \frac{p-1}{q}\right\rceil -1\\ \frac{1}{2}\left( p-p^2+2jpq+q-3jq-q^2(j-1)^2\right) &{}\quad j=\left\lceil \frac{p-1}{q}\right\rceil \\ q(p-j) &{}\quad j=\left\lceil \frac{p-1}{q}\right\rceil +1,\ldots ,p-1, \end{array} \right. \end{aligned}$$
(40)

which shows that the exact distribution of \(\varLambda _5\) is of the form (1), with \({u=pq-p}\), since the \(p\) factors in \(\varPhi ^{}_{2,W_5}(t)\) with \({(k-j)/q\in {\mathbb {Z}}}\) reduce to 1. As a consequence, as was the case for the statistic in the previous section, very accurate near-exact distributions can be developed for \(\varLambda _5\); see Sect. 5.2.
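The shape parameters \(r_j\) in (40), and with them the factorization in (39), can be checked numerically. The sketch below (hypothetical function names) compares the first line of (39) with the product \(\varPhi _{1,W_5}(t)\,\varPhi _{2,W_5}(t)\):

```python
import math
import numpy as np
from scipy.special import loggamma

def r_shape(j, p, q):
    """Shape parameters r_j from (40)."""
    c = math.ceil((p - 1) / q)
    if j < c:
        return q * (q - 1) * (j - 0.5)
    if j == c:
        return 0.5 * (p - p**2 + 2*j*p*q + q - 3*j*q - q**2 * (j - 1)**2)
    return float(q * (p - j))

def cf_W5(t, p, n, q):
    """Exact c.f. of W_5 = -log(Lambda_5): first line of (39)."""
    s = 0j
    for j in range(1, p + 1):
        for k in range(1, q + 1):
            s += (loggamma(n - 1 + (k - j) / q) - loggamma(n - j)
                  + loggamma(n - j - 1j * n * t)
                  - loggamma(n - 1 + (k - j) / q - 1j * n * t))
    return np.exp(s)

def cf_W5_split(t, p, n, q):
    """Phi_1 (GIG with shapes r_j) times Phi_2 (Logbeta part): second line of (39)."""
    phi1 = 1.0 + 0j
    for j in range(1, p):
        r = r_shape(j, p, q)
        phi1 *= ((n - 1 - j) / n) ** r * ((n - 1 - j) / n - 1j * t) ** (-r)
    s = 0j
    for j in range(1, p + 1):
        for k in range(1, q + 1):
            f = (k - j) // q          # floor of (k-j)/q
            s += (loggamma(n - 1 + (k - j) / q) - loggamma(n - 1 + f)
                  + loggamma(n - 1 + f - 1j * n * t)
                  - loggamma(n - 1 + (k - j) / q - 1j * n * t))
    return phi1 * np.exp(s)
```

The agreement of the two evaluations for arbitrary \(t\) provides a direct check of the piecewise expression (40).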

5 Near-exact distributions

Given the complexity of the exact distributions of the statistics in Sects. 3 and 4, or rather, of the expressions that might be obtained for their exact pdf’s and cdf’s, and the concomitant issues related to their manageability, the development of near-exact distributions for these statistics arises as a sensible goal.

As a general notation, let \(W\) stand for the negative logarithm of the LRT statistics in Sects. 3 and 4. The near-exact distributions developed in this section for \(W\) assume the form of mixtures of generalized near-integer gamma (GNIG) distributions (see Marques et al. 2011, App. B for the complete notation of the pdf and cdf of these distributions), which, for some \({m^*\in {\mathbb {N}}}\), will have pdf’s and cdf’s, respectively, of the form

$$\begin{aligned} f(w)=\sum _{\nu =0}^{m^*} \pi _\nu \,f^\mathrm{GNIG}\left( w\,\big |\,r_1,\ldots ,r_{p-1},r+\nu ; \frac{n-2}{n},\ldots ,\frac{n-p}{n},\lambda ;p\right) \end{aligned}$$
(41)

and

$$\begin{aligned} F(w)=\sum _{\nu =0}^{m^*} \pi _\nu \,F^\mathrm{GNIG}\left( w\,\big |\,r_1,\ldots ,r_{p-1},r+\nu ; \frac{n-2}{n},\ldots ,\frac{n-p}{n},\lambda ;p\right) , \qquad \end{aligned}$$
(42)

where \({w>0}\) represents a possible value of \(W\) and where, for the statistic in Sect. 3,

$$\begin{aligned} r=\frac{p-1}{2},\quad \lambda =\frac{n\!-\!1}{n}\quad \mathrm{and}\quad r_j=p-j, \end{aligned}$$
(43)

for \({j=1,\ldots ,p-1}\), while for the statistic in Sect. 4, \(r_j\) are given by (40),

$$\begin{aligned} r=p\frac{q-1}{2}\quad \mathrm{and}\quad \lambda =\lambda ^*, \end{aligned}$$

where \(\lambda ^*\) is the rate parameter in

$$\begin{aligned} \varPhi ^{**}_{2,W_5}(t)=p_1(\lambda ^*)^{s_1}(\lambda ^*-\mathrm {i}t)^{-s_1}+(1-p_1)\left( \lambda ^*\right) ^{s_2}\left( \lambda ^*-\mathrm {i}t\right) ^{-s_2}, \end{aligned}$$
(44)

which is determined together with \(s_1\), \(s_2\) and \(p_1\) in such a way that the first 4 derivatives of \(\varPhi ^{**}_{2,W_5}(t)\) and \(\varPhi ^{}_{2,W_5}(t)\) in (39) at \({t=0}\) are the same.

These near-exact distributions are built by leaving \(\varPhi _{1,W_4}(t)\) and \(\varPhi _{1,W_5}(t)\), respectively, in (35) and (39), unchanged and then asymptotically approximating the c.f.’s \(\varPhi _{2,W_4}(t)\) and \(\varPhi _{2,W_5}(t)\) in the same expressions by c.f.’s of finite mixtures of gamma distributions. This replacement is done based on the results in Sects. 5 and 6 of Tricomi and Erdélyi (1951), which show that any \(\hbox {Logbeta}(a,b)\) distribution may, for non-integer \(b\), be asymptotically approximated by an infinite mixture of \(\varGamma (b+\nu ,a)\) \({(\nu =0,1,\ldots )}\) distributions.

This yields as near-exact distributions for \(W\) finite mixtures of sums of \(p\) independent gamma r.v.’s, \(p-1\) of which have integer shape parameters \(r_j\), while the \(p\)-th one has shape parameter \(r\). For non-integer \(r\), these are finite mixtures of GNIG distributions of depth \(p\); but since \(r=\frac{p-1}{2}\) is an integer for odd \(p\), and \({r=p\frac{q-1}{2}}\) is an integer for odd \(q\), for \(\varLambda _4\) with odd \(p\) and for \(\varLambda _5\) with odd \(q\) the components of the mixtures are indeed GIG distributions (see Marques et al. 2011, App. B for the complete notation of the pdf and cdf of these distributions), although for generality of notation the components of the mixtures in (41) and (42) are denoted as GNIG distributions.

The near-exact distributions built in this way are asymptotic not only for increasing sample sizes but also for increasing numbers of variables and populations involved, as shown by the numerical studies carried out in the next section. These studies also show the extreme closeness of these near-exact distributions to the corresponding exact distributions, even for very small sample sizes. Since their parameters are very simple to determine, these distributions are easily implemented with the help of adequate software.

Further details on the construction of these near-exact distributions are given in the subsections ahead.

5.1 Near-exact distribution for the LRT statistic in Sect. 3

Using the results in Sects. 5 and 6 of Tricomi and Erdélyi (1951), as a first step, \(\varPhi ^{}_{2,W_4}(t)\) in (35), which is the c.f. of the sum of \(p-1\) independent \(\hbox {Logbeta}\left( n-1,\frac{j}{p}\right) \) r.v.’s \((j=1,\ldots ,p-1)\), each multiplied by \(n\), is replaced by the c.f. of the sum of \(p-1\) independent infinite mixtures of \(\varGamma \left( \frac{j}{p}+\nu ,n-1\right) \) distributions \({(\nu =0,1,\ldots )}\), each multiplied by \(n\), that is, the c.f. of the sum of \(p-1\) independent infinite mixtures of \(\varGamma \left( \frac{j}{p}+\nu ,\frac{n-1}{n}\right) \) distributions \({(\nu =0,1,\ldots )}\). Since the rate parameters of these gamma distributions are functions of neither \(j\) nor \(\nu \), this is in turn an infinite mixture of \({\varGamma \left( \left( \sum _{j=1}^{p-1}\frac{j}{p}\right) +\nu ,\frac{n-1}{n}\right) }\) distributions, where \({\sum ^{p-1}_{j=1}\frac{j}{p}=\frac{p-1}{2}}\). This way we obtain a representation of the distribution which bears some resemblance to the representations in Nagar et al. (1985), Nagarsenker and Das (1975) and Tang and Gupta (1986), but with coefficients which are much simpler to compute. This representation has the advantage of allowing for the easy development of very accurate near-exact approximations. Moreover, in contrast to the representation in Tang and Gupta (1986), our representation does not have any ill-defined parameters.

Therefore, to obtain a near-exact distribution for \(W_4\), \(\varPhi ^{}_{2,W_4}(t)\) in (35) will be replaced by the c.f. of a finite mixture of \(\varGamma \left( \frac{p-1}{2}+\nu ,\frac{n-1}{n}\right) \) distributions, for \(\nu =0,\ldots ,m^*\),

$$\begin{aligned} \varPhi ^*_{2,W_4}(t)=\sum ^{m^*}_{\nu =0} \pi _\nu \left( \frac{n-1}{n}\right) ^{\frac{p-1}{2}+\nu }\left( \frac{n-1}{n}-\mathrm {i}t\right) ^{-\left( \frac{p-1}{2}+\nu \right) } , \end{aligned}$$
(45)

where the weights \(\pi _0,\ldots ,\pi _{m^*-1}\) are determined as the numerical solution of the system of \(m^*\) equations

$$\begin{aligned} \left. \frac{\partial ^h}{\partial t^h}\,\varPhi ^*_{2,W_4}(t)\right| _{t=0}=\left. \frac{\partial ^h}{\partial t^h}\,\varPhi ^{}_{2,W_4}(t)\right| _{t=0}\!,\quad h=1,\ldots ,m^* \end{aligned}$$
(46)

with \(\pi _{m^*}=1-\sum ^{m^*-1}_{\nu =0}\pi _\nu \). In this way these near-exact distributions have, by construction, the first \(m^*\) moments equal to the first \(m^*\) exact moments of \(W_4\). Hence,

$$\begin{aligned} \varPhi ^*_{W_4}(t)=\varPhi _{1,W_4}(t)~\varPhi ^*_{2,W_4}(t), \end{aligned}$$

with \(\varPhi ^{}_{1,W_4}(t)\) given by (35) and \(\varPhi ^*_{2,W_4}(t)\) given by (45), will be used as a near-exact c.f. for \(W_4\), to which corresponds the pdf and the cdf in (41) and (42), with \(r\) and \(\lambda \) given by (43).
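Since the moments of the r.v. with c.f. \(\varPhi _{2,W_4}(t)\) follow from polygamma functions, the system (46) is linear in the weights. The sketch below (hypothetical function names) solves jointly for \(\pi _0,\ldots ,\pi _{m^*}\), appending the normalization \(\sum _\nu \pi _\nu =1\), which is equivalent to taking \(\pi _{m^*}=1-\sum _{\nu =0}^{m^*-1}\pi _\nu \) in (46):

```python
import math
import numpy as np
from scipy.special import polygamma, gammaln

def logbeta_sum_cumulants(p, n, m):
    """First m cumulants of n * sum of Logbeta(n-1, j/p) r.v.'s, j = 1..p-1."""
    kap = np.zeros(m)
    for r in range(1, m + 1):
        for j in range(1, p):
            a, b = n - 1.0, j / p
            kap[r - 1] += n**r * (-1)**r * (polygamma(r - 1, a) - polygamma(r - 1, a + b))
    return kap

def cumulants_to_moments(kap):
    """Raw moments mu_1..mu_m from cumulants, via the standard recursion."""
    mu = [1.0]
    for r in range(1, len(kap) + 1):
        mu.append(sum(math.comb(r - 1, k) * kap[k] * mu[r - 1 - k] for k in range(r)))
    return np.array(mu[1:])

def near_exact_weights(p, n, m_star):
    """Weights pi_0..pi_{m*} of the Gamma((p-1)/2 + nu, (n-1)/n) mixture in (45)-(46)."""
    r, lam = (p - 1) / 2, (n - 1) / n
    target = cumulants_to_moments(logbeta_sum_cumulants(p, n, m_star))
    M = np.ones((m_star + 1, m_star + 1))      # row 0: sum of weights = 1
    for h in range(1, m_star + 1):
        for nu in range(m_star + 1):
            # h-th raw moment of Gamma(r + nu, lam)
            M[h, nu] = np.exp(gammaln(r + nu + h) - gammaln(r + nu)) / lam**h
    return np.linalg.solve(M, np.concatenate(([1.0], target)))
```

By construction the resulting mixture reproduces the first \(m^*\) exact moments of the r.v. with c.f. \(\varPhi _{2,W_4}(t)\).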

Then, Theorem 4 and Corollary 4 may be stated.

Theorem 4

Let \({W_4=-\log \varLambda _4},\) where \(\varLambda _4\) is the LRT statistic in (33). Then, for some \({m^*\in {{\mathbb {N}}}}\), distributions with pdf and cdf given by (41) and (42)\(,\) with \(r,\lambda \) and \(r_j\) given by (43) and \(\pi _0,\ldots ,\pi _{m^*-1}\) determined from (46)\(,\) with \(\varPhi ^*_{2,W_4}(t)\) given by (45) and \(\pi _{m^*}=1-\sum ^{m^*-1}_{\nu =0}\pi _\nu ,\) are near-exact distributions for \(W_4.\)

From Theorem 4, Corollary 4 is then readily obtained.

Corollary 4

Let \(\varLambda _4\) be the LRT statistic in (33). Then, distributions with pdf and cdf\(,\) respectively, given by \(f(-\log \,\ell )\frac{1}{\ell }\) and \(1-F(-\log \,\ell )\) where \(f(\,\cdot \,)\) and \(F(\,\cdot \,)\) are given by (41) and (42)\(,\) with \({0<\ell \le 1}\) representing a possible value for \(\varLambda _4\) and \({r_j}\) \({(j=1,\ldots ,p-1)},r,\lambda \) and \(\pi _\nu \) \({(\nu =0,\ldots ,m^*)}\) defined as in the previous theorem\(,\) are near-exact distributions for \(\varLambda _4.\)

5.2 Near-exact distribution for the LRT statistic in Sect. 4

5.2.1 The case of equal sample sizes

For the statistic \(\varLambda _5\) in Sect. 4, in the case where all \(q\) samples have size \(n\), we may asymptotically replace \(\varPhi ^{}_{2,W_5}(t)\) in (39), which is the c.f. of the sum of \(pq-p\) independent \(\hbox {Logbeta}\left( n-1+\left\lfloor \frac{k-j}{q}\right\rfloor ,\frac{k-j}{q}-\left\lfloor \frac{k-j}{q}\right\rfloor \right) \) r.v.’s \({(j=1,\ldots ,p;\,k=1,\ldots ,q;\,(k-j)/q\notin {\mathbb {Z}})}\), each multiplied by \(n\), by the c.f. of the sum of \(pq-p\) independent infinite mixtures of \(\varGamma \left( \frac{k-j}{q}\!-\!\left\lfloor \frac{k-j}{q}\right\rfloor \!+\!\nu ,\frac{1}{n}\left( n-1\!+\!\left\lfloor \frac{k-j}{q}\right\rfloor \right) \right) \) distributions \({(\nu =0,1,\ldots )}\). If the rate parameters in these gamma distributions were functions of neither \(j\) nor \(k\), as happened in the previous subsection, this sum of mixtures would yield a simple mixture of gamma distributions. However, since the rate parameters are now functions of both \(j\) and \(k\), it is difficult to add the different mixtures of gamma distributions. For this reason, \(\varPhi ^{}_{2,W_5}(t)\) in (39) is asymptotically replaced by the c.f. of a finite mixture of \({m^*+1}\) \(\varGamma \left( \sum _{j=1}^p\sum _{k=1}^q\frac{k-j}{q}-\left\lfloor \frac{k-j}{q}\right\rfloor +\nu ,\lambda ^* \right) \) distributions \({(\nu =0,\ldots ,m^*)}\), which is

$$\begin{aligned} \varPhi ^{*}_{2,W_5}(t)=\sum ^{m^*}_{\nu =0}\pi _\nu (\lambda ^*)^{r+\nu }(\lambda ^*-\mathrm {i}t)^{-(r+\nu )}, \end{aligned}$$
(47)

with

$$\begin{aligned} r=\sum _{j=1}^p\sum _{k=1}^q\frac{k-j}{q}-\left\lfloor \frac{k-j}{q}\right\rfloor =p\,\frac{q-1}{2}, \end{aligned}$$
(48)

and where \(\lambda ^*\) is the common rate parameter of a mixture of two gamma distributions whose first four moments match the first four moments of the sum of independent logbeta r.v.’s whose c.f. is \(\varPhi _{2,W_5}(t)\) in (39), that is, where \(\lambda ^*\) is the rate parameter in \(\varPhi ^{**}_{2,W_5}(t)\) in (44), which is determined together with \(s_1\), \(s_2\) and \(p_1\) in such a way that,

$$\begin{aligned} \left. \frac{\partial ^h}{\partial t^h}\,\varPhi ^{**}_{2,W_5}(t)\right| _{t=0}=\left. \frac{\partial ^h}{\partial t^h}\,\varPhi ^{}_{2,W_5}(t)\right| _{t=0}\!,\quad h=1,\ldots ,4, \end{aligned}$$
(49)

and where the weights \(\pi _\nu \) \({(\nu =0,\ldots ,m^*-1)}\), are then determined in such a way that

$$\begin{aligned} \left. \frac{\partial ^h}{\partial t^h}\,\varPhi ^*_{2,W_5}(t)\right| _{t=0}=\left. \frac{\partial ^h}{\partial t^h}\,\varPhi ^{}_{2,W_5}(t)\right| _{t=0}\!,\quad h=1,\ldots ,m^*, \end{aligned}$$
(50)

with \({\pi _{m^*}=1-\sum _{\nu =0}^{m^*-1}\pi _\nu }\).
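The value of \(r\) in (48) follows from the fact that, for each \(j\), the fractional parts of \((k-j)/q\) over \(k=1,\ldots ,q\) run through \(\{0,1/q,\ldots ,(q-1)/q\}\), whose sum is \((q-1)/2\). A quick numerical check (hypothetical function name):

```python
import math

def frac_sum(p, q):
    """Sum over j=1..p, k=1..q of the fractional part of (k - j)/q, as in (48)."""
    return sum((k - j) / q - math.floor((k - j) / q)
               for j in range(1, p + 1) for k in range(1, q + 1))
```

For any \(p\) and \(q\), `frac_sum(p, q)` equals \(p(q-1)/2\).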

Therefore,

$$\begin{aligned} \varPhi ^*_{W_5}(t)=\varPhi _{1,W_5}(t)~\varPhi ^*_{2,W_5}(t), \end{aligned}$$

with \(\varPhi ^{}_{1,W_5}(t)\) given by (39) and \(\varPhi ^*_{2,W_5}(t)\) given by (47), will be used as a near-exact c.f. for \(W_5\), so that Theorem 5 and Corollary 5 may be stated.

Theorem 5

Let \({W_5=-\log \varLambda _5},\) where \(\varLambda _5\) is the LRT statistic in (37). Then\(,\) for some \({m^*\in {{\mathbb {N}}}},\) distributions with pdf and cdf given by (41) and (42)\(,\) with \(r_j\) given by (40)\(,\) \(r\) given by (48) and \(\lambda =\lambda ^*\) obtained by solving (49) for \(\lambda ^*,s_1,s_2\) and \(p_1,\) and \(\pi _0,\ldots ,\pi _{m^*-1}\) determined from (50)\(,\) with \(\varPhi ^*_{2,W_5}(t)\) given by (47) and \(\pi _{m^*}=1-\sum ^{m^*-1}_{\nu =0}\pi _\nu ,\) are near-exact distributions for \(W_5.\)

Corollary 5

Let \(\varLambda _5\) be the LRT statistic in (37). Then distributions with pdf and cdf\(,\) respectively, given by \(f(-\log \,\ell )\frac{1}{\ell }\) and \(1-F(-\log \,\ell )\) where \(f(\,\cdot \,)\) and \(F(\,\cdot \,)\) are given by (41) and (42)\(,\) with \({0<\ell \le 1}\) representing a possible value for \(\varLambda _5\) and \({r_j}\) \({(j=1,\ldots ,p-1)},r,\lambda \) and \(\pi _\nu \) \({(\nu =0,\ldots ,m^*)}\) defined as in the previous theorem\(,\) are near-exact distributions for \(\varLambda _5.\)

5.2.2 The case of unequal sample sizes

When the samples have different sizes, with the \(k\)-th sample having size \(n_k\), the LRT statistic \(\varLambda _5\) is now

$$\begin{aligned} \varLambda _5=\frac{N^{Np}}{\prod \nolimits _{k=1}^q n_k^{n_kp}}\,\frac{\prod \nolimits _{k=1}^q |A_k|^{n_k}}{|A|^N}, \end{aligned}$$
(51)

where \({N=\sum ^q_{k=1} n_k}\), and \(A_k\) (\({k=1,\ldots ,q}\)) is equal to \(n_k\) times the MLE of \(\varSigma _k\) and \(A=A_1+\cdots +A_q\).

The c.f. of \(W_5=-\log \varLambda _5\) may be obtained from (69) in the supplementary material in the Online Resource, replacing \(h\) by \(-\mathrm {i}t\), as

$$\begin{aligned} \varPhi ^{}_{W_5}(t)&= E\left( \varLambda _5^{-\mathrm {i}t}\right) \nonumber \\&= \frac{N^{\,-Np\mathrm {i}t}}{\prod ^q_{k=1}n_k^{\,-n_kp\mathrm {i}t}}\prod _{j=1}^p\left\{ \frac{\varGamma (N-q+1-j)}{\varGamma (N-q+1-j-N\mathrm {i}t)}\prod ^q_{k=1}\frac{\varGamma (n_k-j-n_k\mathrm {i}t)}{\varGamma (n_k-j)}\right\} . \end{aligned}$$

The exact distribution of \(\varLambda _5\) in (51) is, however, quite complicated, and it is not even possible to give it a structure of the form (1).

However, using a procedure similar to the one used in Coelho and Marques (2012), the c.f. of \(W_5\) may be written as

$$\begin{aligned} \varPhi ^{}_{W_5}(t)=\varPhi ^{}_{1,W_5}(t)\,\frac{\varPhi ^{}_{W_5}(t)}{\varPhi ^{}_{1,W_5}(t)}, \end{aligned}$$

where \({\varPhi ^{}_{1,W_5}(t)}\) is given by (39), now with \({n=N/q}\). Then, to build a near-exact approximation for \(W_5\), \({\varPhi ^{}_{1,W_5}(t)}\) will be left unchanged and \(\frac{\varPhi ^{}_{W_5}(t)}{\varPhi ^{}_{1,W_5}(t)}\) will be replaced by \({\varPhi ^{*}_{2,W_5}(t)}\), given by (47), once again with \(\lambda ^*\) defined in a manner similar to the one used in the previous subsection, and with \(r\) either equal to \(s_1\) in (44) or given by (48). As will be shown in the next section, while the first choice for \(r\) yields near-exact distributions that are asymptotic for increasing sample sizes as well as for increasing values of \(p\) and \(q\), the second choice may give slightly better approximations for small values of \(p\) and even better approximations for large sample sizes. However, these latter near-exact distributions are no longer asymptotic for increasing values of \(p\) or \(q\), but only for increasing sample sizes.

As such, in terms of near-exact distributions, for the case of unequal sample sizes, similar results to the ones in Theorem 5 and Corollary 5 still hold, with the due changes in the parameters, namely with all the occurrences of \(n\) changed to \(N/q\).

6 Numerical studies

In order to assess the proximity of the near-exact distributions developed in Sect. 5 to the exact distribution and their performance in different situations, we will use the measure

$$\begin{aligned} \varDelta =\frac{1}{2\pi }\int _{-\infty }^{+\infty } \left| \frac{\varPhi ^{}_{W}(t)-\varPhi ^*_{W}(t)}{t}\right| \mathrm{d}t, \end{aligned}$$

which yields

$$\begin{aligned} \max _{0<\ell \le 1}\left| F^{}_\varLambda (\ell )-F^*_\varLambda (\ell )\right| =\max _{w>0}\left| F^{}_W(w)-F^*_W(w)\right| \le \varDelta , \end{aligned}$$

where \(W\) represents generally either \(W_4\) or \(W_5\), \({\varLambda =\mathrm{e}^{-W}}\), \(\varPhi ^{}_W(t)\) represents the exact c.f. of \(W\) and \(\varPhi ^*_W(t)\) its approximate c.f., usually a near-exact c.f., but occasionally an asymptotic or other c.f., \({F^{}_W(\,\cdot \,)}\) and \({F^{}_\varLambda (\,\cdot \,)}\) the exact cdf’s of \(W\) and \(\varLambda \), corresponding to \(\varPhi ^{}_W(t)\), and \(F^*_W(\,\cdot \,)\) and \(F^*_\varLambda (\,\cdot \,)\) the cdf’s corresponding to \(\varPhi ^*_W(t)\). Some further details on \(\varDelta \) and its relation with the Berry–Esseen bound are discussed in Coelho and Marques (2012). Note that the value of \(\varDelta \) for any approximation to the exact distribution of either \(W_4\) or \(W_5\) is the same as the value of the measure for the corresponding approximation to the distribution of, respectively, \({\varLambda _4=\mathrm{e}^{-W_4}}\) or \({\varLambda _5=\mathrm{e}^{-W_5}}\).
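As an illustration of how \(\varDelta \) may be evaluated in practice, the sketch below (hypothetical function names; this simple two-moment gamma approximation is not one of the approximations studied in this section) computes the measure for \(W_4\), using the first line of (35) for the exact c.f. and the evenness of the integrand to integrate over \((0,\infty )\) only:

```python
import numpy as np
from scipy.special import loggamma, polygamma
from scipy.integrate import quad

def cf_W4(t, p, n):
    """Exact c.f. of W_4: first line of (35)."""
    j = np.arange(1, p)
    s = np.sum(loggamma(n - 1 + j / p) - loggamma(n - j - 1)
               + loggamma(n - j - 1 - 1j * n * t) - loggamma(n - 1 + j / p - 1j * n * t))
    return np.exp(s)

def cf_gamma_fit(t, p, n):
    """c.f. of the gamma distribution matching the mean and variance of W_4."""
    j = np.arange(1, p)
    a, ab = n - j - 1, n - 1 + j / p
    # cumulants of W_4 = -n * sum(log Y_j), Y_j ~ Beta(n-j-1, j/p + j)
    mean = n * np.sum(polygamma(0, ab) - polygamma(0, a))
    var = n**2 * np.sum(polygamma(1, a) - polygamma(1, ab))
    shape, rate = mean**2 / var, mean / var
    return (1 - 1j * t / rate) ** (-shape)

def delta(p, n):
    """Delta = (1/pi) * int_0^inf |cf_exact(t) - cf_approx(t)| / t dt (even integrand)."""
    f = lambda t: abs(cf_W4(t, p, n) - cf_gamma_fit(t, p, n)) / t if t > 0 else 0.0
    val, _ = quad(f, 0, np.inf, limit=200)
    return val / np.pi
```

The same routine, with \(\varPhi ^*_W\) taken as a near-exact c.f. instead of this crude gamma fit, yields the values reported in the tables.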

All computations are done with Mathematica\(^{{\circledR }}\), version 7.0.0.

6.1 Numerical studies on the approximations for \(\varLambda _4\)

Table 1 shows the values of the measure \(\varDelta \) for the statistic \(\varLambda _4\), for values of \(p\), the number of variables involved, ranging from 3 to 50, and sample sizes \(n\) which exceed \(p\) by 2, 12, 50 and 100. In order to compare the performance of the near-exact distributions with other approximations available in the literature, the Box type asymptotic distribution (Box 1949), used in Nagarsenker and Nagarsenker (1981), and the Pearson type I distribution used in Krishnaiah et al. (1976) have also been considered. The Pearson type I distribution was fitted by matching the first four exact moments of \(\varLambda _4^{1/nb}\), taking for \(b\) the positive integer value which gives the best fit, thus obtaining much better approximations than the ones in Krishnaiah et al. (1976). These values of \(b\) are specified in Table 1, inside square brackets, right after the value of \(\varDelta \) for the Pearson type I approximation. It may be noted that this Pearson type I approximation, with the slight change introduced, performs much better than truncations, even with a rather large number of terms, of any of the expansions in Nagarsenker and Das (1975).

Table 1 Values of the measure \(\varDelta \) for the near-exact and asymptotic distributions for the LRT statistic to test sphericity

The results in Table 1 show that although the Pearson type I distribution, with the improvement of choosing the integer value of \(b\) which gives the best fit to the exact distribution, has a good performance, with an asymptotic behavior not only for increasing sample sizes but also for increasing values of \(p\), it is still outperformed even by the near-exact distribution which matches only 4 moments. Besides, finding the integer value of \(b\) which gives the best fit for the Pearson type I distribution is not an easy task, and it was not possible to fit any such distribution for \({p=50}\) and \({n=52}\). The Box type asymptotic distribution has the poorest behavior of all the approximations, performing worse for larger values of \(p\), with values of \(\varDelta \) greater than 1, which shows that in these cases the approximating function is not a true distribution. Moreover, it produces values of \(\varDelta \) quite close to 1 in a number of other cases.

The near-exact distributions always show a very good performance, with very low values of \(\varDelta \). They exhibit a very good performance even for the smaller sample sizes, always showing an asymptotic behavior both for increasing sample sizes as well as for increasing values of \(p\), and, of course, with the near-exact distributions which match a larger number of exact moments showing an increasingly better performance.

6.2 Numerical studies on the approximations for \(\varLambda _5\)

For the case of equal sample sizes, Table 2 shows the values of the measure \(\varDelta \) for the statistic \(\varLambda _5\), for different values of \(p\), \(q\) and \(n\), respectively the number of variables involved, the number of covariance matrices involved and the common size of the \(q\) independent samples. In order to compare the performance of the near-exact distributions with other approximations available in the literature, the mixture of two beta distributions in Fang et al. (1982) and the Pearson type I distribution used by Krishnaiah et al. (1976) are used. These two distributions were fitted by matching, respectively, the first two and the first four exact moments of \(\varLambda _5^{1/nb}\), for the choice of \({b\in {{\mathbb {N}}}}\) which gives the lowest value of \(\varDelta \). These values of \(b\) are specified in Table 2, inside square brackets, right after the value of \(\varDelta \) for each of these approximations. The mixture used is a mixture of two beta distributions with the same first parameter and with second parameters given by expressions (4.7) and (4.9) in Fang et al. (1982); the first weight in the mixture and the common first parameter were then determined by equating the first two exact moments of \(\varLambda _5^{1/nb}\) to the first two moments of the mixture. This is not exactly the implementation in Fang et al. (1982). This alternative implementation was followed not only because the definition of the parameter \(s\) in (4.9) and (4.10) of Fang et al. (1982) yields a conflicting definition for \(A_2\) in the same reference, but also because, with the parameters defined as above, the approximation has a much better performance.

Table 2 Values of the measure \(\varDelta \) for the near-exact and asymptotic distributions for the LRT statistic to test the equality of \(q\) covariance matrices for equal sample sizes

The results in Table 2 show that the Pearson type I approximation always outperforms the mixture of two beta distributions, mainly for the larger sample sizes. However, the Pearson type I distribution is itself always largely outperformed by the near-exact distribution which matches only 4 moments. The near-exact distributions once again show a marked asymptotic behavior not only for increasing sample sizes but also for increasing values of both \(p\) and \(q\), as well as a consistently extremely good performance for very small sample sizes, while avoiding the cumbersome determination of the best value for the parameter \(b\) required by the Pearson type I and the mixture of betas approximations.

In the case of different sample sizes, it was not feasible to fit the Pearson type I approximation in any case; actually, the authors in Krishnaiah et al. (1976) used this approximation only for the case of equal sample sizes. It was possible to fit the mixture of two beta distributions to \(\varLambda _5^{1/b}\), but when trying to find the best integer value of \(b\), many local minima were found, making the determination of the best value of \(b\) a very frustrating and time-consuming task. This, together with the fact that the near-exact distributions also have a very good performance in this unequal-sample-size case, for all sample sizes and all values of \(p\) and \(q\), leads to the conclusion that it is indeed much preferable to use the near-exact approximations, which seem to be the only approximations with a consistently very good performance.

The near-exact distributions which use for \(\lambda ^*\) and \(r\), respectively, the values of \(\lambda ^*\) and \(s_1\) in (44), obtained by solving the system of equations in (49), exhibit a very good performance even for the smaller sample sizes and a clear asymptotic behavior not only for increasing sample sizes but also for increasing values of \(p\) and \(q\). The near-exact distributions which match only four exact moments always outperform the mixture of two beta distributions, except for the smaller sample sizes with \(p=5\), where the measures for this mixture exhibit a somewhat erratic behavior. Values of \(\varDelta \) for these near-exact distributions, as well as for the mixture of two beta distributions, with the indication, inside square brackets, of the value of \(b\) used, are given in Table 3 in the supplementary material in the Online Resource.

The near-exact distributions which use the same value of \(\lambda ^*\) as the ones above, but use for \(r\) the value given by (48), give better values of \(\varDelta \) for larger sample sizes and also for smaller values of \(p\), but they lose their asymptotic character for increasing values of \(p\), the number of variables involved. See Table 4 in the supplementary material in the Online Resource for the values of \(\varDelta \) for these near-exact distributions.

7 An application of the results obtained: the LRT statistic to test the equality of several multivariate complex normal distributions

In this section the authors show how the results obtained may be combined, as proposed in Coelho and Marques (2009), allowing a smooth path towards the development of near-exact distributions for LRT statistics for more elaborate hypotheses. As an application, the authors will show how very sharp near-exact distributions may be obtained for the LRT statistic to test the equality of several multivariate complex normal distributions, based on the results in the previous sections. Numerical results show the extreme closeness of the near-exact distribution obtained to the exact distribution of the LRT statistic.

Let us consider \(q\) independent samples of size \(n\), the \(k\)-th of which is obtained from \(CN_p(\underline{\mu }_k,\varSigma _k)\) \((k=1,\ldots ,q)\), and suppose that we want to test the hypothesis of equality of the \(q\) \(CN_p(\underline{\mu }_k,\varSigma _k)\) distributions,

$$\begin{aligned} H_0:\, \underline{\mu }_1=\cdots =\underline{\mu }_k=\cdots =\underline{\mu }_q,\,\varSigma _1=\cdots =\varSigma _k=\cdots =\varSigma _q. \end{aligned}$$
(52)

The null hypothesis \(H_0\) in (52) may be written as

$$\begin{aligned} H_0\equiv H_{02|05}\,{\small \mathrm {o}}\,H_{05} \end{aligned}$$

where “\({\small \mathrm {o}}\)” is to be read as “after” or “composed with”, and where

$$\begin{aligned} H_{05}: \varSigma _1=\cdots =\varSigma _k=\cdots =\varSigma _q \end{aligned}$$
(53)

and

$$\begin{aligned} \begin{array}{lll} H_{02|05}: &{} \underline{\mu }_{\,1}=\cdots =\underline{\mu }_{\,k}=\cdots =\underline{\mu }_{\,q},\,\\ &{} \hbox {assuming }H_{05} \end{array} \end{aligned}$$
(54)

which are the null hypotheses in Sects. 4 and 2.2.

According to Lemma 10.3.1 in Anderson (2003), the LRT statistic to test \(H_0\) in (52) will be the product of the LRT statistics to test \(H_{05}\) in (53) and \(H_{02|05}\) in (54), so that this statistic will be

$$\begin{aligned} \varLambda \,=\,\underbrace{\left( q^{pq}\,\frac{\prod _{k=1}^q |A_k|}{|A|^q}\right) ^n}_{\varLambda _5}\,\underbrace{\frac{|A|^{nq}}{|A+B|^{nq}}}_{\varLambda _{2|5}}\,=\, \left( q^{pq}\,\frac{\prod _{k=1}^q |A_k|}{|A+B|^q}\right) ^n, \end{aligned}$$
(55)

where \(\varLambda _5\) is the LRT statistic to test \(H_{05}\) in (53) (see Sect. 4), \(\varLambda _{2|5}\) is the LRT statistic to test \(H_{02|05}\) in (54) (see Sect. 2.2), \(A=A_1+\dots +A_k+\dots +A_q\), \(B\) is given by (14) with \(n_j=n\), and \(A_k\) is equal to \(n\) times the MLE of \(\varSigma _k\).
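As a quick numerical sanity check on (55), the factorization \(\varLambda =\varLambda _5\,\varLambda _{2|5}\) can be verified on simulated data. The sketch below (Python with NumPy; the dimensions \(p\), \(q\), \(n\) and the identity covariance are illustrative choices, not values from the paper) builds the matrices \(A_k\), \(A\) and \(B\) from \(q\) simulated complex normal samples and checks that the two factors multiply back to the closed form on the right-hand side of (55).

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, n = 3, 4, 20                       # illustrative dimensions and sample size

# Draw q samples of size n from CN_p(0, I_p): real and imaginary parts are
# i.i.d. N(0, 1/2), so that E[z z^H] = I_p.
Z = (rng.standard_normal((q, n, p)) + 1j * rng.standard_normal((q, n, p))) / np.sqrt(2)

means = Z.mean(axis=1)                   # the q sample mean vectors
grand = means.mean(axis=0)               # pooled mean (equal sample sizes)

# A_k = n times the MLE of Sigma_k;  A = A_1 + ... + A_q;
# B = between-samples matrix, with n_j = n for every sample
A_list = [(Zk - mk).conj().T @ (Zk - mk) for Zk, mk in zip(Z, means)]
A = sum(A_list)
B = n * sum(np.outer(mk - grand, (mk - grand).conj()) for mk in means)

det = lambda M: np.linalg.det(M).real    # Hermitian p.s.d. => real determinant

Lambda5  = (q ** (p * q) * np.prod([det(Ak) for Ak in A_list]) / det(A) ** q) ** n
Lambda25 = (det(A) / det(A + B)) ** (n * q)
Lambda   = (q ** (p * q) * np.prod([det(Ak) for Ak in A_list]) / det(A + B) ** q) ** n
```

For any data set, `Lambda5 * Lambda25` agrees with `Lambda` up to floating-point error, and all three statistics lie in \((0,1]\), since \(\prod _k|A_k|\le q^{-pq}|A|^q\) (concavity of \(\log \det \)) and \(|A|\le |A+B|\).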

Given the independence of \(\varLambda _5\) and \(\varLambda _{2|5}\) (see Anderson 2003, Lemma 10.4.1), the \(h\)-th moment of \(\varLambda \) may be obtained as

$$\begin{aligned} E\left( \varLambda ^h\right) =E\left( \varLambda _5^h\right) E\left( \varLambda _{2|5}^h\right) , \end{aligned}$$

where \(E(\varLambda _5^h)\) is the \(h\)-th moment of \(\varLambda _5\), and as such given by (38), and \(E(\varLambda _{2|5}^h)\) is the \(h\)-th moment of \(\varLambda _{2|5}\), and as such given by (16), with \(nq\) in place of \(n\). Indeed, assuming \(H_0\) in (52), \(\varLambda _5\) and \(\varLambda _{2|5}\) are independent, since \(\varLambda _5\) is independent of \(A=A_1+\dots +A_k+\dots +A_q\), which may be shown using a procedure similar to the one in Anderson (2003, Sec. 10.4).
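The moment factorization above can also be illustrated by Monte Carlo simulation. The sketch below (a numerical illustration only; the configuration \(p=2\), \(q=3\), \(n=15\) and the moment order \(h=1\) are arbitrary choices) draws \((\varLambda _5,\varLambda _{2|5})\) repeatedly under \(H_0\) and compares the empirical \(E(\varLambda ^h)\) with the product \(E(\varLambda _5^h)\,E(\varLambda _{2|5}^h)\).

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, n, reps = 2, 3, 15, 4000           # illustrative configuration

def one_draw():
    """One draw of (Lambda_5, Lambda_{2|5}) under H0: all q samples CN_p(0, I_p)."""
    Z = (rng.standard_normal((q, n, p)) + 1j * rng.standard_normal((q, n, p))) / np.sqrt(2)
    means = Z.mean(axis=1)
    grand = means.mean(axis=0)
    A_list = [(Zk - mk).conj().T @ (Zk - mk) for Zk, mk in zip(Z, means)]
    A = sum(A_list)
    B = n * sum(np.outer(mk - grand, (mk - grand).conj()) for mk in means)
    det = lambda M: np.linalg.det(M).real
    l5 = (q ** (p * q) * np.prod([det(Ak) for Ak in A_list]) / det(A) ** q) ** n
    l25 = (det(A) / det(A + B)) ** (n * q)
    return l5, l25

draws = np.array([one_draw() for _ in range(reps)])
h = 1                                    # any h > 0 works the same way
lhs = np.mean((draws[:, 0] * draws[:, 1]) ** h)              # E(Lambda^h)
rhs = np.mean(draws[:, 0] ** h) * np.mean(draws[:, 1] ** h)  # E(Lambda_5^h) E(Lambda_{2|5}^h)
```

Up to Monte Carlo error, `lhs` and `rhs` agree, reflecting the independence of \(\varLambda _5\) and \(\varLambda _{2|5}\) under \(H_0\).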

The c.f. of \(W=-\log \varLambda \) may then be written, from (39) and (20) (with \(n\) replaced by \(nq\) in the latter), as

where

$$\begin{aligned} s_j=\left\{ \begin{array}{ll} r_j^* &{} j=1,\ldots ,p-1,\ j\ne \alpha q-1\\ r_j^*+r_{\alpha q-1} &{} j=\alpha q-1, \end{array} \right. \end{aligned}$$

for \(\alpha =2,\ldots ,\left\lfloor \frac{p-1}{q}\right\rfloor \), with \(r_j\) \((j=1,\ldots ,p+q-2)\) given by (19) and \(r_j^*\) \((j=1,\ldots ,p-1)\) equal to \(r_j\) in (40).
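The index bookkeeping behind the \(s_j\) above can be sketched as a small helper. The shape vectors `r_star` (the \(r_j^*\) from (40)) and `r` (the \(r_j\) from (19)) are taken as given, since their explicit expressions appear in the earlier sections and are not reproduced here; this is only an illustration of the merging rule, not of the actual parameter values.

```python
def merge_shapes(r_star, r, p, q):
    """Combine the shapes r_j^* (length p-1) and r_j (length p+q-2) into
    s_1, ..., s_{p-1}: s_j = r_j^*, except that for
    alpha = 2, ..., floor((p-1)/q) the entry s_{alpha*q-1} also absorbs
    r_{alpha*q-1} (1-based indices, as in the text)."""
    s = list(r_star)                      # s_j = r_j^* by default
    for alpha in range(2, (p - 1) // q + 1):
        j = alpha * q - 1                 # merged position; note j <= p - 2
        s[j - 1] += r[j - 1]              # shift to 0-based storage
    return s
```

For instance, with \(p=10\) and \(q=2\) the merged positions are \(j=3,5,7\), while for \(\left\lfloor \frac{p-1}{q}\right\rfloor <2\) no merging occurs and \(s_j=r_j^*\) throughout.
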

Near-exact distributions for \(W\) and \(\varLambda \) are thus obtained by leaving \(\varPhi ^{}_{1,W}(t)\) unchanged and replacing \(\varPhi ^{}_{2,W}(t)\) by \(\varPhi ^{*}_{2,W}(t)\) in (47), with \(r\) given by (48) and \(\lambda ^*\) determined as described in Sect. 5.2.1. These near-exact distributions will have pdf's and cdf's which, for \(\varLambda \), are given by

and

(56)

where the GNIG components of the mixture may be replaced by GIG components in case the shape parameter \(r\), given by (48), is an integer.

These near-exact distributions yield very sharp approximations to the exact distribution, approximations which are asymptotic not only for increasing sample sizes but also for increasing numbers of variables in the distributions (\(p\)) and of distributions involved (\(q\)), as may be seen from the values of the measure \(\varDelta \) in Table 5 in the Online Resource.

An example of application of this test is provided in the supplementary material in the Online Resource.

8 Conclusions

The authors were able to show that the main LRT statistics used in multivariate analysis in the complex multivariate normal setting all have a distribution which can be written in the form in (1). This enabled a much deeper insight into the true structure of such distributions, made it possible to obtain very simple expressions for the exact distributions of the LRT statistics to test the independence of several groups of variables, the equality of several mean vectors and the equality of an expected value matrix to a given matrix, and allowed the development of very well-fitting near-exact approximations for the LRT statistics to test sphericity and the equality of covariance matrices.

Although (1) may seem to bear some resemblance to the form obtained for the real multivariate normal setting in Marques et al. (2011), there are major differences between the complex and real cases: in the complex case, (i) it is possible to obtain the exact distribution of the statistics discussed in Sect. 2 in a closed and very manageable form, thus avoiding the need for approximations as in Krishnaiah et al. (1976), Fang et al. (1982) and Khatri (1965), or the complex expressions in Khatri (1965), Gupta (1971), Tang and Gupta (1986), Gupta (1976), Gupta and Rathie (1983), Pillai and Jouris (1971) and Mathai (1973), even for the null case, and (ii) for the statistics in Sects. 3 and 4, the shape parameters of the gamma r.v.'s in the exact and near-exact distributions have much simpler expressions.

The near-exact approximations show very good performance even for very small sample sizes, displaying an asymptotic behavior not only for increasing sample sizes but also for increasing numbers of variables and matrices involved, and they outperform by far the other available approximations, such as those in Krishnaiah et al. (1976) and Fang et al. (1982). Using any of the available symbolic computation packages, their implementation is very simple, and their use avoids the cumbersome problems associated with determining the best value for the parameter \(b\) in the Pearson approximation. Moreover, the numerical determination of their parameters is well defined and poses no numerical problems. Given all these features, the near-exact approximations are, beyond any doubt, the recommended approximations for the statistics \(\varLambda _4\) and \(\varLambda _5\).

The common structure of the distributions of the LRT statistics addressed, established in (1), also enables the smooth development of near-exact distributions for LRT statistics for more elaborate hypotheses. In Sect. 7, this feature is used to address the test of equality of multivariate complex normal distributions and to develop near-exact distributions for its LRT statistic. Numerical studies confirm the extreme closeness between the near-exact distributions obtained and the exact distribution.

Modules for the near-exact distributions developed in this paper are available at https://sites.google.com/site/nearexactdistributions/complex-normal, and a short user's guide for these modules is provided in the Appendix.