1 Introduction

Recent developments in sensors and other devices have made it easy to obtain multivariate data. As a result, it is increasingly necessary to handle multivariate data of higher dimensions than before. At the same time, depending on the experimental design and the characteristics of the survey subjects, the data may exhibit a special covariance structure, which must be respected in order to obtain valid results. Moreover, such a special covariance structure reduces the number of parameters to be estimated, thus reducing the required sample size.

One such structure is the blocked compound symmetric (BCS) covariance structure. The BCS covariance structure for double multivariate observations is a multivariate generalization of the compound symmetric covariance structure for multivariate observations. The BCS covariance structure is defined as follows:

$$\begin{aligned} \varvec{\Sigma } = \varvec{I}_u\otimes \left( \varvec{\Sigma }_0-\varvec{\Sigma }_1\right) + \varvec{J}_u\otimes \varvec{\Sigma }_1=\begin{pmatrix} \varvec{\Sigma }_0 &{} \varvec{\Sigma }_1 &{} \cdots &{} \varvec{\Sigma }_1 \\ \varvec{\Sigma }_1 &{} \varvec{\Sigma }_0 &{} \cdots &{} \varvec{\Sigma }_1 \\ \vdots &{} \vdots &{} &{} \vdots \\ \varvec{\Sigma }_1 &{} \varvec{\Sigma }_1 &{} \cdots &{} \varvec{\Sigma }_0 \end{pmatrix}, \end{aligned}$$
(1)

where \(\varvec{I}_u\) is the \(u\times u\) identity matrix, \(\varvec{1}_u\) is a \(u\times 1\) vector of ones, \(\varvec{J}_u=\varvec{1}_u\varvec{1}_u'\), and \(\otimes \) denotes the Kronecker product. We assume that \(u \ge 2\), \(\varvec{\Sigma }_0\) is a positive definite symmetric \(p\times p\) matrix, \(\varvec{\Sigma }_1\) is a symmetric \(p\times p\) matrix, and \(\varvec{\Sigma }_0-\varvec{\Sigma }_1\) and \(\varvec{\Sigma }_0+(u-1)\varvec{\Sigma }_1\) are positive definite matrices so that \(\varvec{\Sigma }\) is positive definite. The diagonal blocks \(\varvec{\Sigma }_0\) of \(\varvec{\Sigma }\) represent the covariance matrix of the p response variables at any given site, whereas the off-diagonal blocks \(\varvec{\Sigma }_{1}\) represent the covariance matrix of the p response variables between any two sites; the matrices \(\varvec{\Sigma }_0\) and \(\varvec{\Sigma }_1\) are unstructured.
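To make the block structure concrete, here is a minimal numpy sketch of (1) with illustrative \(\varvec{\Sigma }_0\) and \(\varvec{\Sigma }_1\) (the numerical values are ours, not from the paper):

```python
import numpy as np

# Build the BCS covariance (1): Sigma = I_u kron (Sigma0 - Sigma1) + J_u kron Sigma1.
u, p = 3, 2
Sigma0 = np.array([[2.0, 0.5], [0.5, 1.5]])  # within-site covariance (illustrative)
Sigma1 = np.array([[0.4, 0.1], [0.1, 0.3]])  # between-site covariance (illustrative)

I_u, J_u = np.eye(u), np.ones((u, u))
Sigma = np.kron(I_u, Sigma0 - Sigma1) + np.kron(J_u, Sigma1)

# Diagonal blocks equal Sigma0, off-diagonal blocks equal Sigma1.
assert np.allclose(Sigma[:p, :p], Sigma0)
assert np.allclose(Sigma[:p, p:2 * p], Sigma1)
```

Positive definiteness of \(\varvec{\Sigma }\) can then be verified by checking that \(\varvec{\Sigma }_0-\varvec{\Sigma }_1\) and \(\varvec{\Sigma }_0+(u-1)\varvec{\Sigma }_1\) have positive eigenvalues.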

Leiva [11] derived maximum likelihood estimates (MLEs) under the BCS covariance structure and provided a linear discrimination method for the case in which the vectors in each training sample are equicorrelated. Roy et al. [13] and Žežula et al. [16] studied hypothesis testing for the equality of mean vectors in two populations under the BCS covariance structure. Roy et al. [14] proved that the unbiased estimators of the BCS covariance structure are optimal under normality. Coelho and Roy [2] developed hypothesis testing for the BCS covariance structure. Recently, Liang et al. [10] considered hypothesis testing in which \(\varvec{\Sigma }_0\) and \(\varvec{\Sigma }_1\) are symmetric circular Toeplitz matrices, or compound symmetric matrices.

Tsukada [15] considered hypothesis testing for independence under the BCS covariance structure, i.e.,

$$\begin{aligned} H_0: \varvec{\Sigma }_1 = \varvec{O} \text{ versus } H_1: \varvec{\Sigma }_1\not = \varvec{O}, \end{aligned}$$
(2)

where \(\varvec{O}\) is a \(p\times p\) zero matrix, and proposed a likelihood ratio (LR) test. Fonseca et al. [5] proposed an F-test for this hypothesis.

The LR criterion is known to work well in large samples, but not in high-dimensional settings. Therefore, in this study, we investigate the asymptotic properties of the LR criterion under the assumption

$$\begin{aligned} p\le n, \quad \lim _{n\rightarrow \infty }\frac{p}{n}=y\in (0,1], \end{aligned}$$
(3)

where n is the sample size, and provide a hypothesis test based on the standard normal distribution. We also investigate the moderate deviation principle as a property of the test.

The remainder of this study is organized as follows. Section 2 presents the notation, the LR criterion, and the moments of the LR criterion. Section 3 is devoted to the main results. Numerical simulations supporting our results are presented in Sect. 4. The conclusions are presented in Sect. 5.

2 Preparation

We assume that \(\varvec{x}_{r,s}\) is a p-variate vector of measurements on the r-th individual at the s-th site (\(r = 1, \ldots , n; s = 1, \ldots , u\)). The n individuals are all independent. Let \(\varvec{x}_r = (\varvec{x}_{r,1}', \ldots , \varvec{x}_{r,u}')'\) be the up-variate vector of all measurements corresponding to the r-th individual. Finally, we assume that \(\varvec{x}_1, \varvec{x}_2, \ldots , \varvec{x}_n\) is a random sample of size n drawn from the population \(N_{up}(\varvec{\mu }, \varvec{\Sigma })\), where \(\varvec{\mu } = (\varvec{\mu }_1', \ldots , \varvec{\mu }_u')'\) is a \(up\times 1\) vector and \(\varvec{\Sigma }\) is a \(up \times up\) positive-definite matrix that has the BCS covariance structure given in (1). In this section, we discuss estimators under the BCS covariance structure, the LR criterion, and the moments of the LR criterion. Roy et al. [14] derived unbiased estimators as follows:

Theorem 1

(Roy et al. [13]) Assume that \(\varvec{x}_1, \varvec{x}_2, \ldots , \varvec{x}_n\) is a random sample of size n drawn from the population \(N_{up}(\varvec{\mu }, \varvec{\Sigma })\). Let \(\bar{\varvec{x}}=\left( \bar{\varvec{x}}_1', \bar{\varvec{x}}_2', \ldots , \bar{\varvec{x}}_u'\right) '\),

$$\begin{aligned} \varvec{S}=\frac{1}{n-1}\sum _{i=1}^{n}\left( \varvec{x}_i-\bar{\varvec{x}}\right) \left( \varvec{x}_i-\bar{\varvec{x}}\right) '= \begin{pmatrix} \varvec{S}_{11} &{} \varvec{S}_{12} &{} \cdots &{} \varvec{S}_{1u} \\ \varvec{S}_{21} &{} \varvec{S}_{22} &{} \cdots &{} \varvec{S}_{2u} \\ \vdots &{} \vdots &{} &{} \vdots \\ \varvec{S}_{u1} &{} \varvec{S}_{u2} &{} \cdots &{} \varvec{S}_{uu} \end{pmatrix}, \end{aligned}$$

where \(\bar{\varvec{x}}_s=\sum _{r=1}^{n}\varvec{x}_{r,s}/n\) \((s=1, \ldots , u)\) and \(\varvec{S}_{ij}\) is a \(p\times p\) matrix. Then, \(\bar{\varvec{x}}\) is distributed as \(N_{up}(\varvec{\mu }, \varvec{\Sigma }/n)\) and is the unbiased estimator for the mean vector \(\varvec{\mu }\). The estimators

$$\begin{aligned} \tilde{\varvec{\Sigma }}_0=\frac{1}{u}\sum _{i=1}^{u}\varvec{S}_{ii}, \quad \tilde{\varvec{\Sigma }}_1=\frac{1}{u(u-1)}\sum _{\begin{array}{c} i, j=1 \\ i\ne j \end{array}}^{u}\varvec{S}_{ij} \end{aligned}$$

are unbiased estimators for \(\varvec{\Sigma }_0\) and \(\varvec{\Sigma }_1\), respectively.

Therefore, the unbiased estimator for \(\varvec{\Sigma }\) is

$$\begin{aligned} \tilde{\varvec{\Sigma }} = \varvec{I}_u \otimes \left( \tilde{\varvec{\Sigma }}_0-\tilde{\varvec{\Sigma }}_1\right) + \varvec{J}_u\otimes \tilde{\varvec{\Sigma }}_1. \end{aligned}$$
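As a concrete illustration, the estimators of Theorem 1 can be computed with a few lines of numpy; the sketch below draws a placeholder sample with identity covariance (which is BCS with \(\varvec{\Sigma }_1=\varvec{O}\)) and assumes the stacking order \(\varvec{x}_r=(\varvec{x}_{r,1}',\ldots ,\varvec{x}_{r,u}')'\):

```python
import numpy as np

# Minimal sketch of the unbiased estimators of Theorem 1; the sample is a
# placeholder drawn with identity covariance (BCS with Sigma1 = O).
rng = np.random.default_rng(0)
u, p, n = 3, 2, 50
x = rng.standard_normal((n, u * p))          # rows are x_1', ..., x_n'

xbar = x.mean(axis=0)
S = (x - xbar).T @ (x - xbar) / (n - 1)      # sample covariance matrix (up x up)
blocks = S.reshape(u, p, u, p).transpose(0, 2, 1, 3)   # blocks[i, j] = S_{ij}

Sigma0_t = sum(blocks[i, i] for i in range(u)) / u
Sigma1_t = sum(blocks[i, j] for i in range(u)
               for j in range(u) if i != j) / (u * (u - 1))
Sigma_t = np.kron(np.eye(u), Sigma0_t - Sigma1_t) \
    + np.kron(np.ones((u, u)), Sigma1_t)
```

Averaging the diagonal blocks for \(\tilde{\varvec{\Sigma }}_0\) and the off-diagonal blocks for \(\tilde{\varvec{\Sigma }}_1\) mirrors the sums in Theorem 1.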

Roy et al. [13] have also shown that

$$\begin{aligned} \varvec{W}_1&=(n-1)(u-1)\left( \tilde{\varvec{\Sigma }}_0-\tilde{\varvec{\Sigma }}_1\right) \sim W_p\left( \varvec{\Sigma }_0-\varvec{\Sigma }_1, (n-1)(u-1)\right) ,\\ \varvec{W}_2&=(n-1)\left\{ \tilde{\varvec{\Sigma }}_0+(u-1)\tilde{\varvec{\Sigma }}_1\right\} \sim W_p\left( \varvec{\Sigma }_0+(u-1)\varvec{\Sigma }_1, n-1\right) \end{aligned}$$

are independently distributed. Tsukada [15] proposed the LR criterion for testing the hypothesis (2) as follows:

Theorem 2

(Tsukada [15], p.171) Assume that \(n-1\ge p\). Let

$$\begin{aligned} \Lambda&= \frac{(nu)^{nup/2}}{\{n(u-1)\}^{n(u-1)p/2}n^{np/2}} \cdot \frac{|\varvec{W}_1|^{n(u-1)/2}|\varvec{W}_2|^{n/2}}{|\varvec{W}_1+\varvec{W}_2|^{nu/2}},\\ \rho&=1-\frac{u^2-u+1}{(n-1)u(u-1)}\cdot \frac{2p^2+3p-1}{6(p+1)}. \end{aligned}$$

Under the null hypothesis \(H_0: \varvec{\Sigma }_1=\varvec{O}\), the LR criterion \(L_n=-2\rho \log \Lambda \) is asymptotically distributed as a Chi-square distribution with \(p(p +1)/2\) degrees of freedom for the large sample size n and fixed dimension p.
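For reference, the criterion of Theorem 2 can be evaluated on the log scale with `numpy.linalg.slogdet`, which avoids overflow of the determinants; in this sketch \(\varvec{W}_1\) and \(\varvec{W}_2\) are simulated directly as Wishart matrices under \(H_0\) (the dimensions and seed are illustrative):

```python
import numpy as np

# Sketch of the LR criterion L_n of Theorem 2, evaluated on the log scale.
# W1 ~ W_p(I, (n-1)(u-1)) and W2 ~ W_p(I, n-1) are simulated under H0.
rng = np.random.default_rng(1)
u, p, n = 4, 5, 100
m1, m2 = (n - 1) * (u - 1), n - 1
Z1 = rng.standard_normal((m1, p))
Z2 = rng.standard_normal((m2, p))
W1, W2 = Z1.T @ Z1, Z2.T @ Z2                # Wishart matrices from Gaussian rows

log_Lambda = (n * u * p / 2) * np.log(n * u) \
    - (n * (u - 1) * p / 2) * np.log(n * (u - 1)) \
    - (n * p / 2) * np.log(n) \
    + (n * (u - 1) / 2) * np.linalg.slogdet(W1)[1] \
    + (n / 2) * np.linalg.slogdet(W2)[1] \
    - (n * u / 2) * np.linalg.slogdet(W1 + W2)[1]

rho = 1 - (u**2 - u + 1) / ((n - 1) * u * (u - 1)) \
    * (2 * p**2 + 3 * p - 1) / (6 * (p + 1))
L_n = -2 * rho * log_Lambda   # compare with chi-square, p(p+1)/2 d.f.
```

Since \(\Lambda \le 1\) by construction, \(\log \Lambda \le 0\) and \(L_n\ge 0\).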

From Theorem 2, let

$$\begin{aligned} V_n&= \Lambda ^{1/n} = \frac{(nu)^{up/2}}{\{n(u-1)\}^{(u-1)p/2}n^{p/2}}\cdot \frac{|\varvec{W}_1|^{(u-1)/2}|\varvec{W}_2|^{1/2}}{|\varvec{W}_1+\varvec{W}_2|^{u/2}}. \end{aligned}$$
(4)

The h-th moment of \(V_n\) can be obtained by the method of Section 10.4.2 in Anderson [1] as follows:

$$\begin{aligned}&E\left[ V_n^h\right] = \frac{(nu)^{uph/2}}{\{n(u-1)\}^{(u-1)ph/2}n^{ph/2}} \cdot \frac{\Gamma _p\left[ \frac{1}{2}(n-1)(u-1)+\frac{1}{2}h(u-1)\right] }{\Gamma _p\left[ \frac{1}{2}(n-1)(u-1)\right] } \nonumber \\&\qquad \times \frac{\Gamma _p\left[ \frac{1}{2}(n-1)+\frac{1}{2}h\right] }{\Gamma _p\left[ \frac{1}{2}(n-1)\right] } \cdot \frac{\Gamma _p\left[ \frac{1}{2}(n-1)u\right] }{\Gamma _p\left[ \frac{1}{2}(n-1)u+\frac{1}{2}hu\right] }, \end{aligned}$$
(5)

where \(\Gamma _p[\cdot ]\) is the multivariate Gamma function, which is defined as

$$\begin{aligned} \Gamma _p[z]=\pi ^{p(p-1)/4}\prod _{i=1}^{p}\Gamma \left[ z-\frac{1}{2}(i-1)\right] \end{aligned}$$
(6)

for a complex number z with \(\text{ Re }(z)>(p-1)/2\) (see Section 2.1.2 in Muirhead [12]).

3 Main Results

3.1 Hypothesis Testing in High-Dimensional Setting

The asymptotic results for the LR criterion obtained under a large sample are not useful when the sample size n is close to the dimension p, because the LR criterion uses determinants that are unstable in such a situation. We therefore consider the asymptotic behavior of the LR criterion in the high-dimensional setting (3). Our result is derived by an argument similar to that of Jiang and Yang [8].

Theorem 3

Assume that \(n-1\ge p\), \(u\ge 2\), and \(\lim _{n\rightarrow \infty }p/n=y\in (0,1]\). Let \(V_n\) be defined as in (4). Then, under \(H_0:\varvec{\Sigma }_1=\varvec{O}\), the criterion \((\log V_n-\mu _n)/\sigma _n\) converges in distribution to N(0, 1) as \(n\rightarrow \infty \), where

$$\begin{aligned}&\mu _n = \frac{1}{2}\left\{ (n-1)u^2-\left( p+\frac{1}{2}\right) u\right\} \log \left\{ 1-\frac{p}{(n-1)u}\right\} \\&\qquad - \frac{1}{2}\left\{ (n-1)(u-1)^2-\left( p+\frac{1}{2}\right) (u-1)\right\} \log \left\{ 1-\frac{p}{(n-1)(u-1)}\right\} \\&\qquad - \frac{1}{2}\left\{ (n-1)-\left( p+\frac{1}{2}\right) \right\} \log \left( 1-\frac{p}{n-1}\right) ,\\ \sigma _n^2&= \frac{1}{2} \left[ u^2\log \left\{ 1-\frac{p}{(n-1)u}\right\} \right. \\&\left. - (u-1)^2\log \left\{ 1-\frac{p}{(n-1)(u-1)}\right\} - \log \left( 1-\frac{p}{n-1}\right) \right] . \end{aligned}$$

Proof

The proof is presented in detail in Sect. 6.1. \(\square \)

Remark 1

When the dimension p is close to the sample size n, the asymptotic variance \(\sigma _n^2\) diverges, which makes the approximation unstable.
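In practice, \(\mu _n\) and \(\sigma _n\) are explicit functions of (n, p, u), so the statistic \(T_n=(\log V_n-\mu _n)/\sigma _n\) is cheap to form; a direct transcription (function name ours):

```python
import numpy as np

# Transcription of the centering mu_n and scaling sigma_n from Theorem 3.
def mu_sigma(n, p, u):
    a = np.log(1 - p / ((n - 1) * u))
    b = np.log(1 - p / ((n - 1) * (u - 1)))
    c = np.log(1 - p / (n - 1))
    mu = 0.5 * (((n - 1) * u**2 - (p + 0.5) * u) * a
                - ((n - 1) * (u - 1)**2 - (p + 0.5) * (u - 1)) * b
                - ((n - 1) - (p + 0.5)) * c)
    sigma2 = 0.5 * (u**2 * a - (u - 1)**2 * b - c)
    return mu, np.sqrt(sigma2)

mu_n, sigma_n = mu_sigma(n=100, p=60, u=4)
```

The test rejects \(H_0\) when \(|(\log V_n-\mu _n)/\sigma _n|\) exceeds a standard normal critical point.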

Next, we prove that \(\sigma ^2_n>0\) for \(n-1\ge p\) and \(u\ge 2\). By the mean-value theorem, for a differentiable function f we have

$$\begin{aligned} f(b)-f(a)\le \sup _{a< x<b}f'(x) \end{aligned}$$

whenever \(b-a=1\). Letting \(f(x)=-x^2\log \left[ 1-p/\{(n-1)x\}\right] \) and applying this with \(a=u-1\) and \(b=u\), we have

$$\begin{aligned}&-u^2\log \left\{ 1-\frac{p}{(n-1)u}\right\} + (u-1)^2\log \left\{ 1-\frac{p}{(n-1)(u-1)}\right\} \\&\quad = f(u)-f(u-1) \le \sup _{u-1< x<u}f'(x)\\&\quad = \sup _{u-1< x<u} \left[ \frac{px}{p-(n-1)x}-2x\log \left\{ 1-\frac{p}{(n-1)x}\right\} \right] . \end{aligned}$$

Since \(f'(x)\) is monotonically increasing with

$$\begin{aligned} \lim _{x\rightarrow \infty }f'(x) = \frac{p}{n-1}, \end{aligned}$$

and since \(p/(n-1)<-\log \left( 1-p/(n-1)\right) =f(1)\), we have \(\sup _{u-1< x<u}f'(x)<p/(n-1)<f(1)\). Hence \(f(u)-f(u-1)<f(1)\), i.e., \(f(u)<f(u-1)+f(1)\). Because \(\sigma _n^2=\left\{ f(u-1)+f(1)-f(u)\right\} /2\), this indicates that the asymptotic variance is positive.

3.2 Moderate Deviation Principle

Next, we investigate the deviation rate of the convergence as a property of the LR criterion. The performance of the LR criterion can be measured by the exponential rate of decay (see Jurečková et al. [9]); i.e., for any \(x>0\),

$$\begin{aligned} \lim _{a\rightarrow \infty }\lim _{n\rightarrow \infty } \frac{1}{a^2} \log \text{ Pr }\left( \frac{|\log V_n-\mu _n|}{\sigma _n} \ge ax\right) = -\frac{x^2}{2}. \end{aligned}$$

This is the conventional local asymptotic analysis for \(\log V_n\), focusing on \(a\sigma _n\)-neighborhoods. A further extension of this is the moderate deviation principle (MDP), whereby one has

$$\begin{aligned} \lim _{a_n\rightarrow \infty }\lim _{n\rightarrow \infty } \frac{1}{a_n^2} \log \text{ Pr }\left( \frac{|\log V_n-\mu _n|}{\sigma _n} \ge a_nx\right) = -\frac{x^2}{2}, \end{aligned}$$
(7)

for any sequence \(\{a_n\}\) with \(a_n\rightarrow \infty \). This is important for controlling the type I error, i.e., the probability

$$\begin{aligned} \text{ Pr }\left( \frac{|\log V_n-\mu _n|}{a_n\sigma _n} \ge x\right) ,\quad x>0, \end{aligned}$$

in hypothesis testing problems.

The LR criterion \(\log V_n\) satisfies (7), and the following theorem is obtained.

Theorem 4

Under the assumption

$$\begin{aligned} p\le n, \quad \lim _{n\rightarrow \infty }\frac{p}{n}=y\in (0,1], \end{aligned}$$

the following results are obtained.

  1. (i)

    When \(y=1\), the statistic \(\left( \log V_n-\mu _n\right) /\left( a_n\sigma _n\right) \) satisfies the moderate deviation principle with speed \(a_n^2\) and a good rate function \(x^2/2\) for all \(x>0\), where \(\{a_n \mid n\ge 1\}\) is a sequence of positive numbers satisfying

    $$\begin{aligned} \lim _{n \rightarrow \infty } a_n = \infty , \quad {\limsup _{n \rightarrow \infty }}\frac{a_n}{\sigma _n}=0. \end{aligned}$$
  2. (ii)

    When \(y\in (0,1)\), the statistic \(\left( \log V_n-\mu _n\right) /\left( a_n\sigma _n\right) \) satisfies the moderate deviation principle with speed \(a_n^2\) and a good rate function \(x^2/2\) for all \(x>0\), where \(\{a_n \mid n\ge 1\}\) is a sequence of positive numbers such that

    $$\begin{aligned} {\lim _{n \rightarrow \infty } a_n = \infty },\quad \lim _{n \rightarrow \infty } \frac{a_n}{n} = 0. \end{aligned}$$

Therefore, in both cases, we have

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{a_n^2} \log \text{ Pr } \left( \frac{1}{a_n}\left|\frac{\log V_n-\mu _n}{\sigma _n}\right|\ge x\right) = -\frac{x^2}{2}, \end{aligned}$$

for any fixed \(x> 0\).

Proof

The proof is presented in detail in Sect. 6.2. \(\square \)

Remark 2

The choices of the moderate deviation scale \(a_n\) reflect the fact that (i) if \(y=1\), then \(\sigma _n^2\) tends to infinity as \(n\rightarrow \infty \), whereas (ii) if \(y \in (0,1)\), then

$$\begin{aligned} \sigma _n^2\rightarrow \frac{u^2}{2}\log \left( 1-\frac{y}{u}\right) - \frac{(u-1)^2}{2}\log \left( 1-\frac{y}{u-1}\right) - \frac{1}{2}\log \left( 1-y\right) , \end{aligned}$$

that is, \(\sigma _n^2\) is bounded. Then we can choose \(a_n=\sqrt{\sigma _n} \) for \(y=1\) and \(a_n=\sqrt{n}\) for \(y \in (0,1)\).

Remark 3

When we take the rejection region \(\left\{ |(\log V_n-\mu _n)/\sigma _n|>ca_n\right\} \), where c is a positive constant and \(a_n\) is the scaling sequence, for testing the null hypothesis \(H_0\) against \(H_1\), the probability \(\alpha _n\) of the type I error is

$$\begin{aligned} \alpha _n = \text{ Pr }\left( |(\log V_n-\mu _n)/\sigma _n|>ca_n\right) . \end{aligned}$$

From Theorem 4, we can see

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{a_n^2}\log \text{ Pr }\left\{ \left|\frac{\log V_n-\mu _n}{\sigma _n}\right|> ca_n\right\} = -\frac{c^2}{2}. \end{aligned}$$

This implies that

$$\begin{aligned} \alpha _n = \exp \left( -\frac{c^2a_n^2}{2}\right) \left( 1+o(1) \right) \end{aligned}$$

as \(n\rightarrow \infty \), i.e., the probability \(\alpha _n\) of type I error decays to zero exponentially.

4 Numerical Simulation

In this section, we verify the results of Theorems 3 and 4 by numerical simulations. The population mean vector is the up-variate zero vector \(\textbf{0}_{up}\), and the population covariance matrix is

$$\begin{aligned} \varvec{\Sigma }&= \varvec{I}_u\otimes \left( \varvec{\Sigma }_0-\varvec{\Sigma }_1\right) +\varvec{J}_u\otimes \varvec{\Sigma }_1 , \nonumber \\ \varvec{\Sigma }_{0}&= \begin{pmatrix} 1 &{} \omega &{} \cdots &{} \omega ^{p-1} \\ \omega &{} 2 &{} \cdots &{}\omega ^{p-2} \\ \vdots &{} \vdots &{} &{}\vdots \\ \omega ^{p-1} &{} \omega ^{p-2} &{} \cdots &{} p \end{pmatrix}\equiv \varvec{\Sigma }_{0,un}, \quad \varvec{\Sigma }_1=\varvec{O}_{p\times p} \end{aligned}$$

with \(\omega =0.8\) and \(u=4\); that is, the data are generated under the null hypothesis \(H_0\). The number of simulations is \(N_s=100,000\).
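The matrix \(\varvec{\Sigma }_{0,un}\) above has diagonal entries \(1, 2, \ldots , p\) and \((i,j)\) entry \(\omega ^{|i-j|}\) off the diagonal; a small helper (name ours) makes the construction explicit:

```python
import numpy as np

# Construct Sigma_{0,un}: omega^{|i-j|} off the diagonal and 1, 2, ..., p on it.
def sigma0_un(p, omega=0.8):
    idx = np.arange(p)
    M = omega ** np.abs(idx[:, None] - idx[None, :])
    np.fill_diagonal(M, idx + 1.0)
    return M
```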

The results of the F-test by Fonseca et al. [5] are also presented here for comparison. We can perform the F test using the criterion

$$\begin{aligned} T_F=\frac{\varvec{v}'\left\{ \hat{\varvec{\Sigma }}_0+(u-1)\hat{\varvec{\Sigma }}_1\right\} \varvec{v}}{\varvec{v}'\left( \hat{\varvec{\Sigma }}_0-\hat{\varvec{\Sigma }}_1\right) \varvec{v}} \end{aligned}$$

where \(\varvec{v}\ne \varvec{0}\) is a fixed \(p\times 1\) vector; under the null hypothesis, \(T_F\) is distributed as an F distribution with \((n-1, (n-1)(u-1))\) degrees of freedom. Following Fonseca et al. [5], we also simulate using \(\varvec{v} = \varvec{1}_p\).
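The F criterion is a one-line computation given the estimators; the sketch below (function and argument names ours) uses \(\varvec{v}=\varvec{1}_p\) by default:

```python
import numpy as np

# Sketch of the F criterion T_F of Fonseca et al. [5]. Under H0 it follows an
# F distribution with (n-1, (n-1)(u-1)) degrees of freedom.
def t_f(Sigma0_hat, Sigma1_hat, u, v=None):
    p = Sigma0_hat.shape[0]
    if v is None:
        v = np.ones(p)                       # the choice v = 1_p from the paper
    num = v @ (Sigma0_hat + (u - 1) * Sigma1_hat) @ v
    den = v @ (Sigma0_hat - Sigma1_hat) @ v
    return num / den
```

When \(\hat{\varvec{\Sigma }}_1=\varvec{O}\), the numerator and denominator coincide and \(T_F=1\).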

Table 1 shows the achieved significance level of the test using criteria \(T_n=(\log V_n-\mu _n)/\sigma _n\), \(L_n=-2\rho \log \Lambda \), and \(T_F\) when the significance level \(\alpha \) is set to 0.01, 0.05, and 0.10. For the criterion \(T_n\), the achieved significance level is the probability that \(|T_n|\) exceeds the two-sided critical point \(z_{\alpha /2}\) of the standard normal distribution (e.g., \(z_{0.025}=1.95996\) for \(\alpha =0.05\)). For the criterion \(L_n\), the achieved significance level is the probability of a value greater than the critical point \(\chi ^2_{\alpha }(f)\) of the Chi-square distribution with \(f=p(p+1)/2\) degrees of freedom, and the achieved significance level for the criterion \(T_F\) is defined similarly. The sample size n is fixed at 100 and the dimension p is varied from 10 to 90 in increments of 10. In order to examine the behavior of \(T_n\) near the boundary of the condition \(p\le n-1\), we also simulated \(p=98\).

According to Filipiak et al. [4], the exact distribution of the criterion \(\Lambda \) can be calculated using the characteristic function and the R package CharFunToolR developed by Gajdoš [6]. We show the achieved significance level using the exact percentile in the bottom rows for \(T_n\) and \(L_n\). However, since the exact percentile could not be obtained with the R package for \(p=80\), 90, and 98, we used percentiles estimated from 100,000 simulations.

It is well known that the LR criterion \(L_n\) converges quickly to the Chi-square distribution in large samples, but the convergence deteriorates as the dimension p increases. In contrast, the criterion \(T_n\) converges to the standard normal distribution more slowly than \(L_n\) does in large samples with low dimension, but its overall convergence to the standard normal distribution is good. However, the approximation becomes slightly worse near the boundary of the condition \(p\le n-1\). As the dimension p and the sample size n grow together with p close to n, the approximation is expected to deteriorate further. Since the F-test is valid under normality, the exact significance level is attained regardless of the dimension.

Table 1 Achieved significance level of test using criteria \(T_n\), \(L_n\), and \(T_F\) (sample size \(n=100\), significance level \(\alpha \))

Figure 1 represents histograms of the criteria \(L_n\) and \(T_n\) when \(n=100\), \(u=4\), and p is varied over 10, 30, 60, and 90. Panels (a) to (d) are histograms of \(L_n\), and panels (e) to (h) are histograms of \(T_n\). The red lines in panels (a) to (d) of Fig. 1 are the probability density function of the Chi-square distribution with \(p(p+1)/2\) degrees of freedom, and the red lines in panels (e) to (h) are the probability density function of the standard normal distribution. For \(L_n\), the histogram and the density curve deviate from each other as the dimension p increases. In contrast, the histogram of \(T_n\) deviates slightly from the red line when \(p=10\), but coincides with it for \(p=30\) and above. Although the figures are not shown, the shapes of the histograms are almost the same when \(u=4\) is replaced by \(u=2\) or \(u=3\).

Fig. 1
figure 1

Histogram for \(L_n\) and \(T_n\) (\(n=100\), \(u=4\))

Fig. 2
figure 2

Proximity of P(x) and Q(x) for \(x\in [0,1]\) (\(n=100\), \(u=4\))

Fig. 3
figure 3

Power of test using criteria \(T_n\), \(L_n\), and \(T_F\) ( \(H_1: \varvec{\Sigma }_0=\varvec{\Sigma }_{0,cs}, \varvec{\Sigma }_1=\varvec{\Sigma }_{1,1}\), sample size \(n=100\), significance level \(\alpha =0.05\))

Fig. 4
figure 4

Power of test using criteria \(T_n\), \(L_n\), and \(T_F\) ( \(H_1: \varvec{\Sigma }_0=\varvec{\Sigma }_{0,cs}, \varvec{\Sigma }_1=\varvec{\Sigma }_{1,2}\), sample size \(n=100\), significance level \(\alpha =0.05\))

Fig. 5
figure 5

Power of test using criteria \(T_n\), \(L_n\), and \(T_F\) ( \(H_1: \varvec{\Sigma }_0=\varvec{\Sigma }_{0,to}, \varvec{\Sigma }_1=\varvec{\Sigma }_{1,1}\), sample size \(n=100\), significance level \(\alpha =0.05\))

Fig. 6
figure 6

Power of test using criteria \(T_n\), \(L_n\), and \(T_F\) ( \(H_1: \varvec{\Sigma }_0=\varvec{\Sigma }_{0,to}, \varvec{\Sigma }_1=\varvec{\Sigma }_{1,2}\), sample size \(n=100\), significance level \(\alpha =0.05\))

We continue to examine the power of the test, using criteria \(T_n\) and \(T_F\) in the range where the LR criterion is not effective. We assume the following compound symmetric matrix \(\varvec{\Sigma }_{0,cs}\) and the following Toeplitz matrix \(\varvec{\Sigma }_{0,to}\) for the covariance matrix \(\varvec{\Sigma }_0\):

$$\begin{aligned} \varvec{\Sigma }_{0,cs} = 4 \varvec{I}_p +0.8\left( \varvec{J}_p-\varvec{I}_p\right) , \quad \varvec{\Sigma }_{0,to} = \begin{pmatrix} 5 &{} \omega &{} \cdots &{} \omega ^{p-1} \\ \omega &{} 5 &{} \cdots &{}\omega ^{p-2} \\ \vdots &{} \vdots &{} &{}\vdots \\ \omega ^{p-1} &{} \omega ^{p-2} &{} \cdots &{} 5 \end{pmatrix} , \end{aligned}$$

with \(\omega =0.8\). As the alternative hypothesis, the covariance matrix \(\varvec{\Sigma }_1\) is set as follows:

$$\begin{aligned} \varvec{\Sigma }_{1,1} = k\begin{pmatrix} 0.2\ &{} 0.1 &{} \cdots &{} 0 &{} 0 \\ 0.1\ &{} 0.2 &{} \cdots &{} 0 &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots &{}\vdots \\ 0 &{} 0 &{} &{}\ 0.2 &{}\ 0.1 \\ 0 &{} 0 &{} \cdots &{}\ 0.1 &{}\ 0.2 \end{pmatrix},\quad \varvec{\Sigma }_{1,2} = \frac{k}{p}\begin{pmatrix} 0.2p\ {} &{} 0.1 &{} \cdots &{} 0.1\ {} &{} 0.1\ \\ 0.1\ {} &{} 0.2p &{} \cdots &{} 0.1\ {} &{} 0.1\ \\ \vdots &{} \vdots &{} \ddots &{} \vdots &{}\vdots \\ 0.1 &{} 0.1 &{} &{} 0.2p\ {} &{}0.1\ \\ 0.1 &{} 0.1 &{} \cdots &{} 0.1\ {} &{} 0.2p\ \end{pmatrix}. \end{aligned}$$
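For reproducibility, the two alternatives can be generated as follows (helper names ours); \(\varvec{\Sigma }_{1,1}\) is k times a tridiagonal matrix, and \(\varvec{\Sigma }_{1,2}\) is k/p times a matrix with 0.2p on the diagonal and 0.1 elsewhere:

```python
import numpy as np

# Alternatives used in the power study: Sigma_{1,1} is k times a tridiagonal
# matrix (0.2 diagonal, 0.1 first off-diagonals); Sigma_{1,2} is (k/p) times a
# matrix with 0.2p on the diagonal and 0.1 elsewhere.
def sigma_1_1(p, k):
    return k * (0.2 * np.eye(p) + 0.1 * (np.eye(p, k=1) + np.eye(p, k=-1)))

def sigma_1_2(p, k):
    return (k / p) * (0.2 * p * np.eye(p)
                      + 0.1 * (np.ones((p, p)) - np.eye(p)))
```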

The value k was varied over the range in which \(\varvec{\Sigma }\) remains a positive-definite matrix; the power of the test using \(L_n\) is shown only for \(p=10\). In Figs. 3, 4, 5 and 6, the solid line, the thick dashed line, and the dashed line represent the power of the test using criteria \(T_F\), \(T_n\), and \(L_n\), respectively.

All simulation results are presented in Figs. 3, 4, 5 and 6. In all simulations, the power of the test for small values of k decreased as the dimension increased. The tests using \(T_n\) and \(L_n\) had approximately the same power for \(p=10\).

Figures 3 and 4 represent the power of the test for \(H_1\): \(\varvec{\Sigma }_0=\varvec{\Sigma }_{0,cs}\), \(\varvec{\Sigma }_1=\varvec{\Sigma }_{1,1}\) and \(H_1\): \(\varvec{\Sigma }_0=\varvec{\Sigma }_{0,cs}\), \(\varvec{\Sigma }_1=\varvec{\Sigma }_{1,2}\), respectively. In these cases, the power of the test using \(T_n\) was the largest, and the power of the test using \(T_F\) did not increase with k.

Figures 5 and 6 represent the power of the test for \(H_1\): \(\varvec{\Sigma }_0=\varvec{\Sigma }_{0,to}\), \(\varvec{\Sigma }_1=\varvec{\Sigma }_{1,1}\) and \(H_1\): \(\varvec{\Sigma }_0=\varvec{\Sigma }_{0,to}\), \(\varvec{\Sigma }_1=\varvec{\Sigma }_{1,2}\), respectively. In these cases, the power of the test using \(T_F\) was the largest when the alternative hypothesis was close to the null hypothesis, i.e., for \(|k|\le 1.0\). When the alternative hypotheses were further from the null than \(k=\pm 1.0\), the power of the test using \(T_n\) was the largest, and the power of the test using \(T_F\) increased only gradually. Although the simulation results are not shown, a similar tendency was observed in the case of \(\varvec{\Sigma }_0=\varvec{\Sigma }_{0,un}\).

Next, we investigate the moderate deviation result in Theorem 4. We choose \(a_n=\sqrt{n}\) and define

$$\begin{aligned} P(x)&= \frac{1}{N_s} \#\left\{ |\log V_n^{(k)}-\mu _n|\ge a_n\sigma _n x ; k=1, \ldots , N_s\right\} ,\\ Q(x)&= \exp \left( -\frac{a_n^2}{2}x^2\right) \end{aligned}$$

for all \(x>0\), where \(V_n^{(k)}\) \((k=1, \ldots , N_s)\) is the value of the LR criterion in the k-th of the \(N_s\) independent simulations. Figure 2 represents the curves of P(x) and Q(x), where the blue dashed line is P(x) and the red solid line is Q(x). The two curves are close throughout and both rapidly approach zero, which confirms the MDP result in Theorem 4.
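Computing P(x) and Q(x) from a vector of simulated criteria is straightforward; the sketch below substitutes a standard normal surrogate for the simulated values of \((\log V_n-\mu _n)/\sigma _n\), purely for illustration:

```python
import numpy as np

# Empirical tail proportion P(x) versus the MDP approximation Q(x).
# The true simulated log V_n values are replaced by a N(0, 1) surrogate here.
rng = np.random.default_rng(2)
n, Ns = 100, 100_000
a_n = np.sqrt(n)
T = rng.standard_normal(Ns)        # surrogate for (log V_n - mu_n)/sigma_n

xs = np.linspace(0.05, 1.0, 20)
P = np.array([np.mean(np.abs(T) >= a_n * x) for x in xs])
Q = np.exp(-a_n**2 * xs**2 / 2)
```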

5 Conclusions

In this study, using the asymptotic expansion of the gamma function obtained from Stirling's formula, we derived the limiting distribution of the LR criterion in high dimensions and the moderate deviation principle of the LR criterion. Numerical simulations show that the probability of type I error of the test using \(T_n\) is stable over a wide range of dimensions for a moderate sample size. Hypothesis testing using \(T_n\) is recommended except when the dimension p is low and the sample size n is large, or when the dimension p is very high and very close to the sample size n.

Subjects for future study include hypothesis testing for high-dimensional samples in the case of \(\lim _{n\rightarrow \infty }p/n>1\) and hypothesis testing under non-normality.

6 Proof of Theorem

In this section, we show the proofs of Theorem 3 and Theorem 4.

6.1 Proof of Theorem 3

First, we introduce a lemma needed to obtain the expansion of the logarithmic moment generating function.

Lemma 1

(Lemma 5.4 in Jiang and Yang [8]) Let \(n>p=p_n\) and \(r_{n}=[-\log \{1-p/n\}]^{1/2}\). Assume \(p/n\rightarrow y\in (0,1]\) and \(s=s_n=O(1/r_n)\) and \(t=t_n=O(1/r_n)\) as \(n\rightarrow \infty \). Then, we have

$$\begin{aligned} \log \frac{\Gamma _p[n/2+t]}{\Gamma _p[n/2+s]}&= p(t-s)(\log n-1-\log 2)\nonumber \\&\qquad +r_{n}^2\left[ (t^2-s^2)-\left( p-n+\frac{1}{2}\right) (t-s)\right] +o(1) \end{aligned}$$
(8)

as \(n\rightarrow \infty \).

To prove \(\displaystyle {(\log V_n-\mu _n)/\sigma _n\mathop {\rightarrow }^{d} N(0,1)}\), we need to show that there exists \(\delta _0>0\) such that

$$\begin{aligned} E\left[ \exp \left\{ \frac{\log V_n-\mu _n}{\sigma _n}s\right\} \right] \rightarrow e^{s^2/2} \end{aligned}$$
(9)

as \(n\rightarrow \infty \) for all \(|s|<\delta _0\).

Under the assumption, we have

$$\begin{aligned} \sigma _n^2 \rightarrow \frac{u^2}{2}\log \left( 1-\frac{y}{u}\right) -\frac{(u-1)^2}{2}\log \left( 1-\frac{y}{u-1}\right) -\frac{1}{2}\log \left( 1-y\right) \end{aligned}$$

as \(n\rightarrow \infty \) for \(y\in (0,1)\), \(\sigma _n^2\rightarrow \infty \) as \(n\rightarrow \infty \) for \(y=1\). Therefore, we know that \(\delta _0:=\inf \{\sigma _n: n\ge 3\}>0\) is well defined. Fix \(|s|<\delta _0/2\) and set \(t=t_n=s/\sigma _n\). Then, \(\{t_n: n\ge 3\}\) is bounded and \(|t_n|<1/2\) for all \(n\ge 3\). From the moment result (5), we have

$$\begin{aligned}&E\left[ e^{t\log V_n}\right] = E\left[ V_n^t\right] \nonumber \\&\quad = \frac{(nu)^{upt/2}}{\{n(u-1)\}^{(u-1)pt/2}n^{pt/2}} \frac{\Gamma _p\left[ \frac{(n-1)(u-1)}{2}+\frac{(u-1)t}{2}\right] }{\Gamma _p\left[ \frac{(n-1)(u-1)}{2}\right] }\nonumber \\&\qquad \times \frac{\Gamma _p\left[ \frac{n-1}{2}+\frac{t}{2}\right] }{\Gamma _p\left[ \frac{n-1}{2}\right] } \cdot \frac{\Gamma _p\left[ \frac{(n-1)u}{2}\right] }{\Gamma _p\left[ \frac{(n-1)u}{2}+\frac{ut}{2}\right] } \end{aligned}$$
(10)

for all \(n\ge 3\). Let \(r_{p,n,u}^2=-\log \{1-p/(nu)\}\). Notice

$$\begin{aligned}&\frac{1}{4}t^2r^2_{p,n-1,1}\\&\quad = \frac{s^2}{4\sigma _n^2}\cdot \left\{ -\log \left( 1-\frac{p}{n-1}\right) \right\} \\&\quad \rightarrow {\left\{ \begin{array}{ll} \displaystyle {\frac{s^2}{2}\cdot \frac{\log (1-y)}{-u^2\log \left( 1-\frac{y}{u}\right) +(u-1)^2\log \left( 1-\frac{y}{u-1}\right) +\log \left( 1-y\right) }} &{} y\in (0,1),\\ \displaystyle {\frac{s^2}{2}} &{} y=1 \end{array}\right. } \end{aligned}$$

as \(n\rightarrow \infty \). Thus, we have \(t=O(1/r_{p,n-1,1})\) as \(n\rightarrow \infty \). Similarly, \(t=O(1/r_{p,n-1,u})\) and \(t=O(1/r_{p,n-1,u-1})\) can also be obtained. Using Lemma 1, we have

$$\begin{aligned}&\log \frac{\Gamma _p\left[ \frac{1}{2}(n-1)u\right] }{\Gamma _p\left[ \frac{1}{2}(n-1)u+\frac{1}{2}ut\right] } \nonumber \\&\quad =-\frac{1}{2}utp\bigg [\log \{(n-1)u\}-1-\log 2\bigg ]\nonumber \\&\qquad +r^2_{p, n-1, u}\left[ -\frac{1}{4}u^2t^2+\frac{1}{2}ut\left\{ p-(n-1)u+\frac{1}{2}\right\} \right] +o(1), \end{aligned}$$
(11)
$$\begin{aligned}&\log \frac{\Gamma _p\left[ \frac{1}{2}(n-1)(u-1)+\frac{1}{2}(u-1)t\right] }{\Gamma _p\left[ \frac{1}{2}(n-1)(u-1)\right] }\nonumber \\&\quad =\frac{1}{2}(u-1)tp\bigg [\log \{(n-1)(u-1)\}-1-\log 2\bigg ]\nonumber \\&\qquad +r^2_{p, n-1, u-1}\left[ \frac{1}{4}(u-1)^2t^2-\frac{1}{2}(u-1)t\left\{ p-(n-1)(u-1)+\frac{1}{2}\right\} \right] +o(1), \end{aligned}$$
(12)
$$\begin{aligned}&\log \frac{\Gamma _p\left[ \frac{1}{2}(n-1)+\frac{1}{2}t\right] }{\Gamma _p\left[ \frac{1}{2}(n-1)\right] } \nonumber \\&\quad =\frac{1}{2}tp\bigg [\log (n-1)-1-\log 2\bigg ]\nonumber \\&\qquad +r^2_{p, n-1, 1}\left[ \frac{1}{4}t^2-\frac{1}{2}t\left\{ p-(n-1)+\frac{1}{2}\right\} \right] +o(1). \end{aligned}$$
(13)

From the expansions (11), (12), and (13), the log moment generating function of the criterion \(V_n\) is as follows:

$$\begin{aligned} \log E\left[ e^{t\log V_n}\right]&= \frac{1}{2}utr_{p, n-1, u}^2\left\{ p-(n-1)u+\frac{1}{2}\right\} \\&\quad -\frac{1}{2}(u-1)tr_{p, n-1, u-1}^2\left\{ p-(n-1)(u-1)+\frac{1}{2}\right\} \\&\quad -\frac{1}{2}tr_{p, n-1, 1}^2\left\{ p-(n-1)+\frac{1}{2}\right\} \\&\quad +\frac{1}{4}t^2\left\{ (u-1)^2r_{p, n-1, u-1}^2 +r_{p, n-1, 1}^2 -u^2r_{p, n-1, u}^2\right\} +o(1). \end{aligned}$$

Let

$$\begin{aligned} \mu _n&= \frac{1}{2} \left[ ur^2_{p, n-1, u}\left\{ p-(n-1)u+\frac{1}{2}\right\} \right. \\&\quad -(u-1)r^2_{p, n-1, u-1}\left\{ p-(n-1)(u-1)+\frac{1}{2}\right\} \\&\quad \left. -r^2_{p, n-1, 1}\left\{ p-(n-1)+\frac{1}{2}\right\} \right] ,\\ \sigma _n^2&= \frac{1}{2} \left[ u^2\log \left\{ 1-\frac{p}{(n-1)u}\right\} \right. \\&\quad \left. -(u-1)^2\log \left\{ 1-\frac{p}{(n-1)(u-1)}\right\} -\log \left( 1-\frac{p}{n-1}\right) \right] . \end{aligned}$$

Then, we have

$$\begin{aligned} \log E\left[ e^{t\log V_n}\right]&= \mu _n t+\frac{1}{2}\sigma _n^2t^2+o(1), \end{aligned}$$
(14)

which implies (9). This completes the proof of Theorem 3. \(\square \)

6.2 Proof of Theorem 4

First, we show three lemmas used in the proof.

Lemma 2

(Lemma 4.2 in Jiang and Wang [7]) Let \(\lambda _n\), \(n\ge 1\), be a sequence of positive numbers satisfying

$$\begin{aligned} \lambda _n \rightarrow \infty , \quad \frac{\lambda _n}{n} \rightarrow 0, \quad n \rightarrow \infty . \end{aligned}$$

Assume that

$$\begin{aligned} p \rightarrow \infty , \quad \frac{p}{n} \rightarrow y\in (0,1], \quad n \rightarrow \infty . \end{aligned}$$

Then, for any \(a\in \varvec{R}\), as \(n \rightarrow \infty \), we have

$$\begin{aligned} \log \frac{\Gamma _p\left[ n+a\lambda _n\right] }{\Gamma _p[n]}&= \sum _{i=1}^{p}\left\{ \log \left( n-\frac{i-1}{2}\right) -\frac{1}{2\left\{ n-(i-1)/2\right\} }\right\} \lambda _n a\\&\quad +\sum _{i=1}^{p}\frac{1}{2n+1-i}\lambda _n^2 a^2 +\max \left\{ O(1/n), O(\lambda _n^3/n^2)\right\} , \end{aligned}$$

where the function \(\Gamma _p[z]\) is defined as

$$\begin{aligned} \Gamma _p[z] = \pi ^{p(p-1)/4}\prod _{i=1}^{p}\Gamma \left[ z-\frac{i-1}{2}\right] . \end{aligned}$$

Lemma 3

For positive integers p and n with \(n>p\), we have

$$\begin{aligned}&1-p+(n-1)\log \left( 1-\frac{1}{n}\right) -(n-p)\log \left( 1-\frac{p}{n}\right) \le \sum _{i=1}^{p} \log \left( 1-\frac{i-1}{n}\right) \\&\quad \le 1-p-(n-p+1)\log \left( 1-\frac{p-1}{n}\right) . \end{aligned}$$

Proof

Since \(\log \left( 1-x/n\right) \) is decreasing in x and equals zero at \(x=0\), comparing the sum with the corresponding Riemann integrals gives

$$\begin{aligned} \int _{1}^{p}\log \left( 1-\frac{x}{n}\right) dx \le \sum _{i=1}^{p} \log \left( 1-\frac{i-1}{n}\right) \le \int _{0}^{p-1}\log \left( 1-\frac{x}{n}\right) dx. \end{aligned}$$

From

$$\begin{aligned} \int _{0}^{p-1}\log \left( 1-\frac{x}{n}\right) dx&=1-p-(n-p+1)\log \left( 1-\frac{p-1}{n}\right) , \\ \int _{1}^{p}\log \left( 1-\frac{x}{n}\right) dx&=1-p+(n-1)\log \left( 1-\frac{1}{n}\right) -(n-p)\log \left( 1-\frac{p}{n}\right) , \end{aligned}$$

Lemma 3 is obtained. \(\square \)
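As a quick numerical sanity check of Lemma 3 (a standalone sketch, not part of the proof), the following script evaluates the sum and the two integral values for a few pairs \((n, p)\), confirming that the \(\int _1^p\) value bounds the sum from below and the \(\int _0^{p-1}\) value from above:

```python
import math

def log_sum(n, p):
    # S = sum_{i=1}^p log(1 - (i-1)/n)
    return sum(math.log(1 - (i - 1) / n) for i in range(1, p + 1))

def lower_bound(n, p):
    # Closed form of the integral of log(1 - x/n) over [1, p]
    return 1 - p + (n - 1) * math.log(1 - 1 / n) - (n - p) * math.log(1 - p / n)

def upper_bound(n, p):
    # Closed form of the integral of log(1 - x/n) over [0, p-1]
    return 1 - p - (n - p + 1) * math.log(1 - (p - 1) / n)

for n, p in [(10, 3), (100, 60), (1000, 999)]:
    s = log_sum(n, p)
    assert lower_bound(n, p) <= s <= upper_bound(n, p)
```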

Lemma 4

Let p, n, and u be positive integers with \(n\ge p\) and \(u\ge 2\). Assume that \(p/n \rightarrow y\in (0,1)\) as \(n \rightarrow \infty \). Then, we have

$$\begin{aligned} (1)&\ \lim _{n\rightarrow \infty } \sum _{i=1}^{p} \frac{u}{2(n-1)u-2(i-1)} = -\frac{u}{2}\log \left( 1-\frac{y}{u}\right) ,\\ (2)&\ \lim _{n\rightarrow \infty } \sum _{i=1}^{p} \frac{u-1}{2(n-1)(u-1)-2(i-1)} = -\frac{u-1}{2}\log \left( 1-\frac{y}{u-1}\right) ,\\ (3)&\ \lim _{n\rightarrow \infty } \sum _{i=1}^{p} \frac{1}{2(n-i)} = -\frac{1}{2}\log \left( 1-y\right) . \end{aligned}$$

Proof

$$\begin{aligned} (1)&\sum _{i=1}^{p}\frac{u}{2(n-1)u-2(i-1)} = \frac{u}{2}\cdot \frac{1}{n}\sum _{i=1}^{p}\frac{1}{u-\frac{u}{n}-\frac{i-1}{n}}\\&\rightarrow \frac{u}{2}\int _{0}^{y}\frac{dx}{u-x} = -\frac{u}{2}\log \left( 1-\frac{y}{u}\right) .\\ (2)&\sum _{i=1}^{p}\frac{u-1}{2(n-1)(u-1)-2(i-1)} = \frac{u-1}{2}\cdot \frac{1}{n}\sum _{i=1}^{p}\frac{1}{(u-1)-\frac{u-1}{n}-\frac{i-1}{n}}\\&\rightarrow \frac{u-1}{2}\int _{0}^{y}\frac{dx}{(u-1)-x} = -\frac{u-1}{2}\log \left( 1-\frac{y}{u-1}\right) .\\ (3)&\sum _{i=1}^{p}\frac{1}{2(n-i)} = \frac{1}{2}\cdot \frac{1}{n}\sum _{i=1}^{p}\frac{1}{1-\frac{1}{n}-\frac{i-1}{n}} \rightarrow \frac{1}{2}\int _{0}^{y}\frac{dx}{1-x} = -\frac{1}{2}\log \left( 1-y\right) . \end{aligned}$$

\(\square \)
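Each limit in Lemma 4 is an ordinary Riemann-sum limit, so it can also be checked numerically. A standalone sketch (with arbitrarily chosen \(u=3\), \(y=0.5\), and a large n):

```python
import math

u, y, n = 3, 0.5, 200_000
p = int(y * n)

# The three sums of Lemma 4 evaluated at a large but finite n.
s1 = sum(u / (2 * (n - 1) * u - 2 * (i - 1)) for i in range(1, p + 1))
s2 = sum((u - 1) / (2 * (n - 1) * (u - 1) - 2 * (i - 1)) for i in range(1, p + 1))
s3 = sum(1 / (2 * (n - i)) for i in range(1, p + 1))

# Compare with the claimed limits.
assert abs(s1 + (u / 2) * math.log(1 - y / u)) < 1e-3
assert abs(s2 + ((u - 1) / 2) * math.log(1 - y / (u - 1))) < 1e-3
assert abs(s3 + 0.5 * math.log(1 - y)) < 1e-3
```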

According to the Gärtner-Ellis theorem (see Section 2.3 in Dembo and Zeitouni [3]), it suffices to show that

$$\begin{aligned} \lim _{n\rightarrow \infty }\Psi _n(\lambda ) = \lim _{n\rightarrow \infty }\frac{1}{a_n^2}\log E\left[ \exp \left( \lambda a_n\frac{\log V_n-\mu _n}{\sigma _n}\right) \right] =\frac{\lambda ^2}{2} \end{aligned}$$
(15)

for any fixed \(\lambda \in \varvec{R}\). We consider the following two cases.

Case 1: \(p/n \rightarrow y = 1\) as \(n\rightarrow \infty \). In this case, \(p/\{(n-1)u\} \rightarrow 1/u<1\) and \(p/\{(n-1)(u-1)\} \rightarrow 1/(u-1)<1\), so the first two terms of \(\sigma _n^2\) remain bounded, whereas

$$\begin{aligned} -\frac{1}{2}\log \left( 1-\frac{p}{n-1}\right) \rightarrow \infty . \end{aligned}$$

Hence \(\sigma _n^2 \rightarrow \infty \).

we have

$$\begin{aligned} \frac{\lambda a_n}{\sigma _n}\rightarrow 0 \text{ as } n\rightarrow \infty \end{aligned}$$

for any sequence \(\{a_n\}\) satisfying the assumption of the theorem. Therefore, from Theorem 3, we have

$$\begin{aligned} \Psi _n(\lambda )&= \frac{1}{a_n^2}\log E\left[ V_n^{\lambda a_n/\sigma _n}\right] -\frac{\lambda \mu _n}{a_n\sigma _n}\\&=\frac{1}{a_n^2}\left[ \mu _n\cdot \frac{\lambda a_n}{\sigma _n} + \frac{1}{2}\sigma _n^2\left( \frac{\lambda a_n}{\sigma _n}\right) ^2+o(1)\right] -\frac{\lambda \mu _n}{a_n\sigma _n}\\&=\frac{\lambda ^2}{2}+o(1), \end{aligned}$$

which implies (15).

Case 2: \(p/n \rightarrow y \in (0, 1)\) as \(n\rightarrow \infty \). In this case, we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\sigma _n^2 = \frac{1}{2} \left[ -(u-1)^2\log \left( 1-\frac{y}{u-1}\right) -\log (1-y)+u^2\log \left( 1-\frac{y}{u}\right) \right] > 0, \end{aligned}$$

and hence the variance \(\sigma _n^2\) is bounded. Since \(a_n \rightarrow \infty \) by assumption, \(|\lambda a_n \sigma _n^{-1}|\rightarrow \infty \) as \(n\rightarrow \infty \) for any \(\lambda \ne 0\). Therefore, the argument used in the proof of Theorem 3 no longer applies, and a more detailed analysis is needed.

For convenience, let \(\lambda _n=\lambda a_n\sigma _n^{-1}\). From the assumption, \(a_n\ll n\), and hence \(|\lambda _n|\ll n\). This means that result (5) can be used to compute the moments of \(V_n\); from (5), we have

$$\begin{aligned} \log E[{V_n^{\lambda _n}}]&= \frac{1}{2}up\lambda _n\log (nu) - \frac{1}{2}(u-1)p\lambda _n\log \{n(u-1)\}\\&\quad - \frac{1}{2}p\lambda _n\log n + \log \frac{\Gamma _p\left[ \frac{1}{2}(n-1)(u-1)+\frac{1}{2}(u-1)\lambda _n\right] }{\Gamma _p\left[ \frac{1}{2}(n-1)(u-1)\right] }\\&\quad - \log \frac{\Gamma _p\left[ \frac{1}{2}(n-1)u+\frac{1}{2}u\lambda _n\right] }{\Gamma _p\left[ \frac{1}{2}(n-1)u\right] } + \log \frac{\Gamma _p\left[ \frac{1}{2}(n-1)+\frac{1}{2}\lambda _n\right] }{\Gamma _p\left[ \frac{1}{2}(n-1)\right] }. \end{aligned}$$

To analyze the logarithm of the multivariate gamma function in more detail, using Lemma 2, we have the following expansions:

$$\begin{aligned}&\log \frac{\Gamma _p\left[ \frac{1}{2}(n-1)u+\frac{1}{2}u\lambda _n\right] }{\Gamma _p\left[ \frac{1}{2}(n-1)u\right] } = \frac{1}{2}u\lambda _n\sum _{i=1}^{p} \log \left\{ (n-1)u-(i-1)\right\} \\&\qquad +\frac{1}{2}u\lambda _n p\log \left( \frac{1}{2}\right) -\lambda _n\sum _{i=1}^{p}\frac{u}{2(n-1)u-2(i-1)}\\&\qquad +\lambda _n^2\sum _{i=1}^{p}\frac{u^2}{4(n-1)u-4(i-1)} +\max \left\{ O(1/n), O(\lambda _n^3/n^2)\right\} ,\\&\log \frac{\Gamma _p\left[ \frac{1}{2}(n-1)(u-1)+\frac{1}{2}(u-1)\lambda _n\right] }{\Gamma _p\left[ \frac{1}{2}(n-1)(u-1)\right] }\\&\quad = \frac{u-1}{2}\lambda _n\sum _{i=1}^{p} \log \left\{ (n-1)(u-1)-(i-1)\right\} +\frac{u-1}{2}\lambda _n p\log \left( \frac{1}{2}\right) \\&\qquad -\lambda _n\sum _{i=1}^{p}\frac{u-1}{2(n-1)(u-1)-2(i-1)} +\lambda _n^2\sum _{i=1}^{p}\frac{(u-1)^2}{4(n-1)(u-1)-4(i-1)}\\&\qquad +\max \left\{ O(1/n), O(\lambda _n^3/n^2)\right\} ,\\&\log \frac{\Gamma _p\left[ \frac{1}{2}(n-1)+\frac{1}{2}\lambda _n\right] }{\Gamma _p\left[ \frac{1}{2}(n-1)\right] } = \frac{1}{2}\lambda _n\sum _{i=1}^{p} \log \left\{ (n-1)-(i-1)\right\} \\&\qquad +\frac{1}{2}\lambda _n p\log \left( \frac{1}{2}\right) -\lambda _n\sum _{i=1}^{p}\frac{1}{2(n-1)-2(i-1)}\\&\qquad +\lambda _n^2\sum _{i=1}^{p}\frac{1}{4(n-1)-4(i-1)} +\max \left\{ O(1/n), O(\lambda _n^3/n^2)\right\} . \end{aligned}$$

Then, we obtain the following expansion:

$$\begin{aligned} \log E[{V_n^{\lambda _n}}]&= -\frac{u}{2}\lambda _n\sum _{i=1}^{p} \log \left\{ 1-\frac{i-1}{(n-1)u}\right\} +\lambda _n\sum _{i=1}^{p}\frac{u}{2(n-1)u-2(i-1)}\nonumber \\&\quad +\frac{u-1}{2}\lambda _n\sum _{i=1}^{p} \log \left\{ 1-\frac{i-1}{(n-1)(u-1)}\right\} \nonumber \\&\quad -\lambda _n\sum _{i=1}^{p}\frac{u-1}{2(n-1)(u-1)-2(i-1)} +\frac{1}{2}\lambda _n\sum _{i=1}^{p} \log \left( 1-\frac{i-1}{n-1}\right) \nonumber \\&\quad -\lambda _n\sum _{i=1}^{p}\frac{1}{2(n-1)-2(i-1)} -\lambda _n^2\sum _{i=1}^{p}\frac{u^2}{4(n-1)u-4(i-1)}\nonumber \\&\quad +\lambda _n^2\sum _{i=1}^{p}\frac{(u-1)^2}{4(n-1)(u-1)-4(i-1)} +\lambda _n^2\sum _{i=1}^{p}\frac{1}{4(n-1)-4(i-1)}\nonumber \\&\quad +\max \left\{ O(1/n), O(\lambda _n^3/n^2)\right\} . \end{aligned}$$
(16)

We rewrite this expansion as

$$\begin{aligned} \log E[{V_n^{\lambda _n}}]&= \lambda _n\left( A_1 + A_2 \right) +\lambda _n^2 A_3 +\max \left\{ O(1/n), O(\lambda _n^3/n^2)\right\} , \end{aligned}$$
(17)

where

$$\begin{aligned} A_1&= -\frac{u}{2}\sum _{i=1}^{p} \log \left\{ 1-\frac{i-1}{(n-1)u}\right\} \\&\qquad +\frac{u-1}{2}\sum _{i=1}^{p} \log \left\{ 1-\frac{i-1}{(n-1)(u-1)}\right\} +\frac{1}{2}\sum _{i=1}^{p} \log \left\{ 1-\frac{i-1}{n-1}\right\} ,\\ A_2&= \sum _{i=1}^{p}\frac{u}{2(n-1)u-2(i-1)}\\&\qquad -\sum _{i=1}^{p}\frac{u-1}{2(n-1)(u-1)-2(i-1)} -\sum _{i=1}^{p}\frac{1}{2(n-1)-2(i-1)},\\ A_3&= -\sum _{i=1}^{p}\frac{u^2}{4(n-1)u-4(i-1)}\\&\qquad +\sum _{i=1}^{p}\frac{(u-1)^2}{4(n-1)(u-1)-4(i-1)} +\sum _{i=1}^{p}\frac{1}{4(n-1)-4(i-1)} . \end{aligned}$$

From Lemma 3, we have the following three inequalities:

$$\begin{aligned} (1)&\ 1-p+\left\{ (n-1)u-1\right\} \log \left\{ 1-\frac{1}{(n-1)u}\right\} -\left\{ (n-1)u-p\right\} \log \left\{ 1-\frac{p}{(n-1)u}\right\} \nonumber \\&\quad \le \sum _{i=1}^{p} \log \left\{ 1-\frac{i-1}{(n-1)u}\right\} \nonumber \\&\quad \le 1-p-\left\{ (n-1)u-p+1\right\} \log \left\{ 1-\frac{p-1}{(n-1)u}\right\} , \nonumber \\ (2)&\ 1-p+\left\{ (n-1)(u-1)-1\right\} \log \left\{ 1-\frac{1}{(n-1)(u-1)}\right\} \nonumber \\&\qquad -\left\{ (n-1)(u-1)-p\right\} \log \left\{ 1-\frac{p}{(n-1)(u-1)}\right\} \nonumber \\&\quad \le \sum _{i=1}^{p} \log \left\{ 1-\frac{i-1}{(n-1)(u-1)}\right\} \nonumber \\&\quad \le 1-p-\left\{ (n-1)(u-1)-p+1\right\} \log \left\{ 1-\frac{p-1}{(n-1)(u-1)}\right\} , \nonumber \\ (3)&\ 1-p+\left( n-2\right) \log \left( 1-\frac{1}{n-1}\right) -\left( n-p-1\right) \log \left( 1-\frac{p}{n-1}\right) \nonumber \\&\quad \le \sum _{i=1}^{p} \log \left( 1-\frac{i-1}{n-1}\right) \le 1-p-\left( n-p\right) \log \left( 1-\frac{p-1}{n-1}\right) . \end{aligned}$$
(18)

From these inequalities, we obtain

$$\begin{aligned} A_1-\mu _n&\le -\frac{u}{2} \left[ 1-p+\left\{ (n-1)u-1\right\} \log \left\{ 1-\frac{1}{(n-1)u}\right\} \right. \\&\quad \left. -\left\{ (n-1)u-p\right\} \log \left\{ 1-\frac{p}{(n-1)u}\right\} \right] \\&\quad +\frac{u-1}{2} \left[ 1-p-\left\{ (n-1)(u-1)-p+1\right\} \log \left\{ 1-\frac{p-1}{(n-1)(u-1)}\right\} \right] \\&\quad +\frac{1}{2} \left\{ 1-p-\left( n-p\right) \log \left( 1-\frac{p-1}{n-1}\right) \right\} \\&\quad -\frac{1}{2}u\left\{ (n-1)u-p-\frac{1}{2}\right\} \log \left\{ 1-\frac{p}{(n-1)u}\right\} \\&\quad +\frac{1}{2}(u-1)\left\{ (n-1)(u-1)-p-\frac{1}{2}\right\} \log \left\{ 1-\frac{p}{(n-1)(u-1)}\right\} \\&\quad +\frac{1}{2}\left\{ (n-1)-p-\frac{1}{2}\right\} \log \left( 1-\frac{p}{n-1}\right) \equiv B_1, \\ A_1-\mu _n&\ge -\frac{u}{2} \left[ 1-p-\left\{ (n-1)u-p+1\right\} \log \left\{ 1-\frac{p-1}{(n-1)u}\right\} \right] \\&\quad +\frac{u-1}{2} \left[ 1-p+\left\{ (n-1)(u-1)-1\right\} \log \left\{ 1-\frac{1}{(n-1)(u-1)}\right\} \right. \\&\quad \left. -\left\{ (n-1)(u-1)-p\right\} \log \left\{ 1-\frac{p}{(n-1)(u-1)}\right\} \right] \\&\quad +\frac{1}{2} \left\{ 1-p+\left( n-2\right) \log \left( 1-\frac{1}{n-1}\right) -\left( n-p-1\right) \log \left( 1-\frac{p}{n-1}\right) \right\} \\&\quad -\frac{1}{2}u\left\{ (n-1)u-p-\frac{1}{2}\right\} \log \left\{ 1-\frac{p}{(n-1)u}\right\} \\&\quad +\frac{1}{2}(u-1)\left\{ (n-1)(u-1)-p-\frac{1}{2}\right\} \log \left\{ 1-\frac{p}{(n-1)(u-1)}\right\} \\&\quad +\frac{1}{2}\left\{ (n-1)-p-\frac{1}{2}\right\} \log \left( 1-\frac{p}{n-1}\right) \equiv B_2, \end{aligned}$$

and

$$\begin{aligned} \lim _{n\rightarrow \infty }B_1&= \frac{u}{4}\log \left( 1-\frac{y}{u}\right) -\frac{3(u-1)}{4}\log \left( 1-\frac{y}{u-1}\right) -\frac{3}{4}\log \left( 1-y\right) ,\\ \lim _{n\rightarrow \infty }B_2&= \frac{3u}{4}\log \left( 1-\frac{y}{u}\right) -\frac{u-1}{4}\log \left( 1-\frac{y}{u-1}\right) -\frac{1}{4}\log \left( 1-y\right) . \end{aligned}$$

Therefore, \(A_1-\mu _n\) is bounded as \(n\rightarrow \infty \). From Lemma 4, we have

$$\begin{aligned} \lim _{n\rightarrow \infty }A_2&= -\frac{u}{2}\log \left( 1-\frac{y}{u}\right) +\frac{u-1}{2}\log \left( 1-\frac{y}{u-1}\right) +\frac{1}{2}\log \left( 1-y\right) , \end{aligned}$$
(19)
$$\begin{aligned} \lim _{n\rightarrow \infty }A_3&= \frac{u^2}{4}\log \left( 1-\frac{y}{u}\right) -\frac{(u-1)^2}{4}\log \left( 1-\frac{y}{u-1}\right) -\frac{1}{4}\log \left( 1-y\right) . \end{aligned}$$
(20)
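The limits (19) and (20), as well as the fact \(\lim _{n\rightarrow \infty }(A_3/\sigma _n^2)=1/2\) used below, can be checked numerically. A standalone sketch with arbitrarily chosen \(u=2\), \(y=0.5\):

```python
import math

u, y, n = 2, 0.5, 200_000
p = int(y * n)

def S(c):
    # Building-block sum: sum_{i=1}^p c / {2(n-1)c - 2(i-1)}.
    return sum(c / (2 * (n - 1) * c - 2 * (i - 1)) for i in range(1, p + 1))

# A_2 and A_3 as defined above, evaluated at a large but finite n.
A2 = S(u) - S(u - 1) - S(1)
A3 = -(u / 2) * S(u) + ((u - 1) / 2) * S(u - 1) + 0.5 * S(1)

log_u = math.log(1 - y / u)
log_u1 = math.log(1 - y / (u - 1))
log_1 = math.log(1 - y)

# Limits (19) and (20), and the limit of sigma_n^2.
A2_lim = -(u / 2) * log_u + ((u - 1) / 2) * log_u1 + 0.5 * log_1
A3_lim = (u**2 / 4) * log_u - ((u - 1) ** 2 / 4) * log_u1 - 0.25 * log_1
sigma2_lim = 0.5 * (u**2 * log_u - (u - 1) ** 2 * log_u1 - log_1)

assert abs(A2 - A2_lim) < 1e-3
assert abs(A3 - A3_lim) < 1e-3
assert sigma2_lim > 0 and abs(A3 / sigma2_lim - 0.5) < 1e-2
```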

Since \(\lambda _n/a_n^2=\lambda /(a_n\sigma _n)\rightarrow 0\) as \(n\rightarrow \infty \) and \(A_1+A_2-\mu _n\) is bounded, we see that

$$\begin{aligned} \frac{\lambda _n}{a_n^2}\left( A_1 + A_2 - \mu _n \right) = o(1). \end{aligned}$$

It follows from (20) and the fact that \(\lim _{n\rightarrow \infty }(A_3/\sigma _n^2)=1/2\) that

$$\begin{aligned} \Psi _n(\lambda ) = \frac{\lambda _n^2}{a_n^2} A_3 + o(1) = \frac{\lambda ^2}{\sigma _n^2} A_3 + o(1). \end{aligned}$$

Thus, (15) holds. The proof of Theorem 4 is complete.