Abstract
The blocked compound symmetric covariance structure for doubly multivariate observations is a multivariate generalization of the compound symmetric covariance structure for multivariate observations. Many studies have investigated the blocked compound symmetric covariance structure, some of which consider testing the hypothesis of independence. Since the large-sample results for the likelihood ratio criterion cannot be used when the dimension is close to the sample size, we derive a criterion for the high-dimensional setting based on a normal approximation. Numerical simulations show that the probability of type I error for the test using this criterion is stable.
1 Introduction
Recent developments in sensors and other devices have made it possible to obtain multivariate data easily. As a result, it is sometimes necessary to handle multivariate data of higher dimension than before. On the other hand, depending on the experimental design and the characteristics of the survey subjects, a special covariance structure may arise in the data, and it must be respected in order to obtain valid results. Moreover, such a special covariance structure reduces the number of parameters to be estimated, thus reducing the required sample size.
One such structure is the blocked compound symmetric (BCS) covariance structure. The BCS covariance structure for doubly multivariate observations is a multivariate generalization of the compound symmetric covariance structure for multivariate observations. The BCS covariance structure is defined as follows:
where \(\varvec{I}_u\) is the \(u\times u\) identity matrix, \(\varvec{1}_u\) is a \(u\times 1\) vector of ones, \(\varvec{J}_u=\varvec{1}_u\varvec{1}_u'\), and \(\otimes \) denotes the Kronecker product. We assume that \(u \ge 2\), \(\varvec{\Sigma }_0\) is a positive definite symmetric \(p\times p\) matrix, \(\varvec{\Sigma }_1\) is a symmetric \(p\times p\) matrix, and \(\varvec{\Sigma }_0-\varvec{\Sigma }_1\) and \(\varvec{\Sigma }_0+(u-1)\varvec{\Sigma }_1\) are positive definite matrices so that \(\varvec{\Sigma }\) is a positive definite matrix. The diagonals \(\varvec{\Sigma }_0\) in \(\varvec{\Sigma }\) represent the covariance matrix of the p response variables at any given site, whereas the off diagonals \(\varvec{\Sigma }_{1}\) represent the covariance matrix of the p response variables between any two sites, and the matrices \(\varvec{\Sigma }_0\) and \(\varvec{\Sigma }_1\) are unstructured.
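As a concrete check of this definition, the BCS matrix can be assembled with Kronecker products. The following is a minimal NumPy sketch; the numerical values of \(\varvec{\Sigma }_0\) and \(\varvec{\Sigma }_1\) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bcs_cov(sigma0, sigma1, u):
    """Assemble Sigma = I_u (x) (Sigma0 - Sigma1) + J_u (x) Sigma1,
    which has Sigma0 on the diagonal blocks and Sigma1 on the
    off-diagonal blocks."""
    return (np.kron(np.eye(u), sigma0 - sigma1)
            + np.kron(np.ones((u, u)), sigma1))

# Illustrative choice of Sigma0 and Sigma1 (p = 2, u = 3)
sigma0 = np.array([[1.0, 0.3], [0.3, 1.0]])
sigma1 = np.array([[0.4, 0.1], [0.1, 0.4]])
sigma = bcs_cov(sigma0, sigma1, u=3)

# Positive definiteness of Sigma is equivalent to that of
# Sigma0 - Sigma1 and Sigma0 + (u - 1) * Sigma1.
assert np.linalg.eigvalsh(sigma0 - sigma1).min() > 0
assert np.linalg.eigvalsh(sigma0 + 2 * sigma1).min() > 0
assert np.linalg.eigvalsh(sigma).min() > 0
```

The eigenvalue checks mirror the positive-definiteness condition stated above: \(\varvec{\Sigma }\) is positive definite exactly when both \(\varvec{\Sigma }_0-\varvec{\Sigma }_1\) and \(\varvec{\Sigma }_0+(u-1)\varvec{\Sigma }_1\) are.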
Leiva [11] derived maximum likelihood estimates (MLEs) under the BCS covariance structure and provided a linear discrimination method for the case in which the vectors in each training sample are equicorrelated. Roy et al. [13] and Žežula et al. [16] studied hypothesis testing for the equality of mean vectors in two populations under the BCS covariance structure. Roy et al. [14] proved that the unbiased estimators of the BCS covariance structure are optimal under normality. Coelho and Roy [2] developed hypothesis testing for the BCS covariance structure. Recently, Liang et al. [10] considered hypothesis tests in which \(\varvec{\Sigma }_0\) and \(\varvec{\Sigma }_1\) are symmetric circular Toeplitz matrices or compound symmetric matrices.
Tsukada [15] considered hypothesis testing for independence under the BCS covariance structure, i.e.,
where \(\varvec{O}\) is a \(p\times p\) zero matrix, and proposed a likelihood ratio (LR) test. Fonseca et al. [5] proposed an F-test for this hypothesis.
The LR criterion is known to work well in large samples, but not in high-dimensional settings. Therefore, in this study, we investigate the asymptotic properties of the LR criterion under the assumption
where n is the sample size, and provide a hypothesis test based on the standard normal distribution. We also investigate the moderate deviation principle as a property of this test.
The remainder of this study is organized as follows. Section 2 presents the notation, the LR criterion, and the moments of the LR criterion. Section 3 is devoted to the main results. Numerical simulations of our results are presented in Sect. 4. The conclusions are presented in Sect. 5.
2 Preparation
We assume that \(\varvec{x}_{r,s}\) is a p-variate vector of measurements on the r-th individual at the s-th site (\(r = 1, \ldots , n; s = 1, \ldots , u\)). The n individuals are all independent. Let \(\varvec{x}_r = (\varvec{x}_{ r,1}, \ldots , \varvec{x}_{r,u})'\) be the up-variate vector of all measurements corresponding to the r-th individual. Finally, we assume that \(\varvec{x}_1, \varvec{x}_2, \ldots , \varvec{x}_n\) is a random sample of size n drawn from the population \(N_{up}(\varvec{\mu }, \varvec{\Sigma })\), where \(\varvec{\mu } = (\varvec{\mu }_1, \ldots , \varvec{\mu }_u)'\) is a \(up\times 1\) vector and \(\varvec{\Sigma }\) is a \(up \times up\) positive-definite matrix that has the BCS covariance structure given in (1). In this section, we discuss estimators under the BCS covariance structure, the LR criterion, and the moments of the LR criterion. Roy et al. [14] derived unbiased estimators as follows:
Theorem 1
(Roy et al. [14]) Assume that \(\varvec{x}_1, \varvec{x}_2, \ldots , \varvec{x}_n\) is a random sample of size n drawn from the population \(N_{up}(\varvec{\mu }, \varvec{\Sigma })\). Let \(\bar{\varvec{x}}=\left( \bar{\varvec{x}}_1', \bar{\varvec{x}}_2', \ldots , \bar{\varvec{x}}_u'\right) '\),
where \(\bar{\varvec{x}}_s=\sum _{r=1}^{n}\varvec{x}_{r,s}/n\) \((s=1, \ldots , u)\) and \(\varvec{S}_{ij}\) is a \(p\times p\) matrix. Then, \(\bar{\varvec{x}}\) is distributed as \(N_{up}(\varvec{\mu }, \varvec{\Sigma }/n)\) and is the unbiased estimator for the mean vector \(\varvec{\mu }\). The estimators
are unbiased estimators for \(\varvec{\Sigma }_0\) and \(\varvec{\Sigma }_1\), respectively.
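The block-averaging form of these unbiased estimators can be sketched as follows. The sketch assumes, as in the theorem, that \(\varvec{S}_{ij}\) denote the \(p\times p\) blocks of the usual unbiased sample covariance matrix; the function name is ours.

```python
import numpy as np

def bcs_estimators(X, u, p):
    """Unbiased estimators of Sigma0 and Sigma1 obtained by averaging
    the p x p blocks S_st of the sample covariance matrix:
      Sigma0_hat = mean of the u diagonal blocks,
      Sigma1_hat = mean of the u(u-1) off-diagonal blocks.
    X is the (n, u*p) data matrix whose rows are the stacked x_r."""
    S = np.cov(X, rowvar=False)                           # (u*p, u*p), unbiased
    blocks = S.reshape(u, p, u, p).transpose(0, 2, 1, 3)  # blocks[s, t] = S_st
    sigma0_hat = blocks[range(u), range(u)].mean(axis=0)
    mask = ~np.eye(u, dtype=bool)
    sigma1_hat = blocks[mask].mean(axis=0)
    return sigma0_hat, sigma1_hat
```

Since \(\varvec{S}_{ts}=\varvec{S}_{st}'\), averaging over all off-diagonal blocks automatically yields a symmetric \(\hat{\varvec{\Sigma }}_1\).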
Therefore, the unbiased estimator for \(\varvec{\Sigma }\) is
Roy et al. [13] have also shown that
are mutually independent. Tsukada [15] proposed the LR criterion for testing the hypothesis (2) as follows:
Theorem 2
(Tsukada [15], p.171) Assume that \(n-1\ge p\). Let
Under the null hypothesis \(H_0: \varvec{\Sigma }_1=\varvec{O}\), the LR criterion \(L_n=-2\rho \log \Lambda \) is asymptotically distributed as a Chi-square distribution with \(p(p +1)/2\) degrees of freedom as the sample size n tends to infinity with the dimension p fixed.
From Theorem 2, let
We obtained the h-th moment of \(V_n\) by the method of Section 10.4.2 in Anderson [1]. The moment of the LR criterion \(V_n\) is as follows:
where \(\Gamma _p[\cdot ]\) is the multivariate Gamma function, which is defined as
for a complex number z with \(\text{ Re }(z)>(p-1)/2\) (see Section 2.1.2 in Muirhead [12]).
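For real arguments, the multivariate gamma function reduces to the product \(\Gamma _p[z]=\pi ^{p(p-1)/4}\prod _{j=1}^{p}\Gamma [z+(1-j)/2]\), which is convenient to evaluate on the log scale. A minimal sketch using only the standard library (the function name is ours):

```python
import math

def log_multigamma(p, z):
    """log of the multivariate gamma function Gamma_p(z) for real
    z > (p - 1) / 2, computed via the product formula
    Gamma_p(z) = pi^{p(p-1)/4} * prod_{j=1}^{p} Gamma(z + (1 - j) / 2)."""
    if z <= (p - 1) / 2:
        raise ValueError("need Re(z) > (p - 1) / 2")
    return (p * (p - 1) / 4) * math.log(math.pi) + sum(
        math.lgamma(z + (1 - j) / 2) for j in range(1, p + 1)
    )
```

For \(p=1\) this is the ordinary log-gamma function, and for \(p=2\), \(z=2\) it gives \(\log \Gamma _2[2]=\log (\pi /2)\), which serves as a quick sanity check.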
3 Main Results
3.1 Hypothesis Testing in High-Dimensional Setting
The large-sample asymptotic results for the LR criterion are not useful when the sample size n is close to the dimension p, because the LR criterion uses determinants, which are unstable in that situation. We therefore consider the asymptotic behavior of the LR criterion in the high-dimensional setting (3). Our result is derived by an argument similar to that of Jiang and Yang [8].
Theorem 3
Assume that \(n-1\ge p\), \(u\ge 2\), and \(\lim _{n\rightarrow \infty }p/n=y\in (0,1]\). Let \(V_n\) be defined as in (4). Then, under \(H_0:\varvec{\Sigma }_1=\varvec{O}\), the criterion \((\log V_n-\mu _n)/\sigma _n\) converges in distribution to N(0, 1) as \(n\rightarrow \infty \), where
Proof
The proof is presented in detail in Sect. 6.1. \(\square \)
Remark 1
When the dimension p is close to the sample size n, the asymptotic variance \(\sigma _n^2\) diverges, which makes the approximation unstable.
Next, we prove \(\sigma ^2_n>0\) for \(n-1\ge p\) and \(u\ge 2\) as \(n\rightarrow \infty \). From the mean-value theorem, we have
in the open interval (a, b). Letting \(f(x)=-x^2\log \left[ 1-p/\{(n-1)x\}\right] \), we have
From the fact that \(f'(x)\) is monotonically increasing and
because \(\sup _{u-1< x<u}f'(x)<p/(n-1)<f(1)\), we have \(f(u)-f(u-1)<f(1)\), i.e., \(f(u)<f(u-1)+f(1)\). This indicates that the asymptotic variance is positive.
3.2 Moderate Deviation Principle
Next, we investigate the rate of convergence as a property of the LR criterion. The performance of the LR criterion can be measured by the exponential rate of decay (see Jurečková et al. [9]), i.e., for any \(x>0\),
This is the conventional local asymptotic analysis for \(\log V_n\), focusing on \(a\sigma _n\)-neighborhoods. A further extension of this is the moderate deviation principle (MDP), whereby one has
for any sequence \(\{a_n\}\) with \(a_n\rightarrow \infty \). It is important to control the type I errors, i.e., the probability
in hypothesis testing problems.
The LR criterion \(\log V_n\) satisfies (7), and the following theorem is obtained.
Theorem 4
Under the assumption
the following results are obtained.
(i) When \(y=1\), the statistic \(\left( \log V_n-\mu _n\right) /\left( a_n\sigma _n\right) \) satisfies the moderate deviation principle with speed \(a_n^2\) and good rate function \(x^2/2\) for all \(x>0\), where \(\{a_n \mid n\ge 1\}\) is a sequence of positive numbers satisfying
$$\begin{aligned} \lim _{n \rightarrow \infty } a_n = \infty , \quad \limsup _{n \rightarrow \infty }\frac{a_n}{\sigma _n}=0. \end{aligned}$$
(ii) When \(y\in (0,1)\), the statistic \(\left( \log V_n-\mu _n\right) /\left( a_n\sigma _n\right) \) satisfies the moderate deviation principle with speed \(a_n^2\) and good rate function \(x^2/2\) for all \(x>0\), where \(\{a_n \mid n\ge 1\}\) is a sequence of positive numbers such that
$$\begin{aligned} \lim _{n \rightarrow \infty } a_n = \infty ,\quad \lim _{n \rightarrow \infty } \frac{a_n}{n} = 0. \end{aligned}$$
Therefore, in both cases, we have
for any fixed \(x> 0\).
Proof
The proof is presented in detail in Sect. 6.2. \(\square \)
Remark 2
The choice of the moderate deviation scale \(a_n\) differs between the two cases because (i) if \(y=1\), then \(\sigma _n^2\) tends to infinity as \(n\rightarrow \infty \), whereas (ii) if \(y \in (0,1)\), then
that is, \(\sigma _n^2\) is bounded. We can then choose \(a_n=\sqrt{\sigma _n} \) for \(y=1\) and \(a_n=\sqrt{n}\) for \(y \in (0,1)\).
Remark 3
When we take the rejection region \(\left\{ |(\log V_n-\mu _n)/\sigma _n|>ca_n\right\} \), where c is a constant and \(a_n\) is the scale number, for testing the null hypothesis \(H_0\) against \(H_1\), the probability \(\alpha _n\) of the type I error is
From Theorem 4, we can see
This implies that
as \(n\rightarrow \infty \), i.e., the probability \(\alpha _n\) of type I error decays to zero exponentially.
4 Numerical Simulation
In this section, we verify the results of Theorem 3 and Theorem 4 by numerical simulations. The population mean vector is the p-variate zero vector \(\textbf{0}_p\), and the population covariance matrix is
with \(\omega =0.8\) and \(u=4\); that is, the data are generated under the null hypothesis \(H_0\). The number of simulations is \(N_s=100,000\).
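To illustrate the simulation design under \(H_0\), the following sketch generates doubly multivariate normal data with \(\varvec{\Sigma }_1=\varvec{O}\) and averages the off-diagonal blocks of the sample covariance matrix, which should fluctuate around \(\varvec{O}\). The compound symmetric form of \(\varvec{\Sigma }_0\) with parameter \(\omega \) is our assumption for illustration (the paper's exact matrix is not reproduced here), and a small p is used to keep the example fast.

```python
import numpy as np

rng = np.random.default_rng(0)
p, u, n = 5, 4, 100
omega = 0.8

# Assumed illustrative form: Sigma0 compound symmetric with
# off-diagonal entries omega, and Sigma1 = O under H0.
sigma0 = (1 - omega) * np.eye(p) + omega * np.ones((p, p))
sigma = np.kron(np.eye(u), sigma0)   # BCS with Sigma1 = O is block diagonal

X = rng.multivariate_normal(np.zeros(u * p), sigma, size=n)
S = np.cov(X, rowvar=False)
blocks = S.reshape(u, p, u, p).transpose(0, 2, 1, 3)  # blocks[s, t] = S_st
mask = ~np.eye(u, dtype=bool)
sigma1_hat = blocks[mask].mean(axis=0)  # fluctuates around the zero matrix
```

Under \(H_0\), every entry of `sigma1_hat` is a mean-zero sampling fluctuation of order \(n^{-1/2}\), which is the behavior the type I error simulations probe.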
The results of the F-test of Fonseca et al. [5] are also presented for comparison. The F-test uses the criterion
which, for any \(\varvec{v}\ne \varvec{0}\), is distributed as an F distribution with \((n-1, (n-1)(u-1))\) degrees of freedom. Following Fonseca et al. [5], we simulate using \(\varvec{v} = \varvec{1}_p\).
Table 1 shows the achieved significance level of the tests using the criteria \(T_n=(\log V_n-\mu _n)/\sigma _n\), \(L_n=-2\rho \log \Lambda \), and \(T_F\) when the significance level \(\alpha \) is set to 0.01, 0.05, and 0.10. For the criterion \(T_n\), the achieved significance level is the probability that \(|T_n|\) exceeds the corresponding critical point of the standard normal distribution (for \(\alpha =0.05\), \(z_0=1.95996\)). For the criterion \(L_n\), the achieved significance level is the probability of exceeding the critical point \(\chi ^2_{0.05}(f)\) of the Chi-square distribution with \(f=p(p+1)/2\) degrees of freedom, and the achieved significance level for \(T_F\) is defined similarly. The sample size n is fixed at 100, and the dimension p is varied from 10 to 90 in increments of 10. To examine the behavior of \(T_n\) near the boundary of the condition \(p<n-1\), we also simulated \(p=98\).
According to Filipiak et al. [4], the exact distribution of the criterion \(\Lambda \) can be calculated using the characteristic function and the R package CharFunToolR developed by Gajdoš [6]. We show the achieved significance level using the exact percentile in the bottom rows for \(T_n\) and \(L_n\). However, since the exact percentile could not be obtained with the R package for \(p=80\), 90, and 98, we instead use percentiles obtained from 100,000 simulations.
It is well known that the LR criterion \(L_n\) converges quickly to the Chi-square distribution in large samples, but the convergence worsens as the dimension p increases. In contrast, the criterion \(T_n\) converges to the standard normal distribution more slowly than \(L_n\) in the large-sample, low-dimensional setting, but its overall convergence to the standard normal distribution is good. However, the approximation becomes slightly worse near the boundary of the condition \(p<n-1\), and it is expected to deteriorate further as the dimension p and the sample size n grow while remaining close to each other. Since the F-test is exact under normality, its significance level is attained regardless of the dimension.
Figure 1 shows histograms of the criteria \(L_n\) and \(T_n\) for \(n=100\), \(u=4\), and \(p=10\), 30, 60, and 90. Panels (a) to (d) are histograms of \(L_n\), and panels (e) to (h) are histograms of \(T_n\). The red lines in panels (a) to (d) are the probability density function of the Chi-square distribution with \(p(p+1)/2\) degrees of freedom, and the red lines in panels (e) to (h) are the probability density function of the standard normal distribution. For \(L_n\), the histogram and the curve deviate from each other as the dimension p increases. In contrast, the histogram of \(T_n\) deviates slightly from the red line when \(p=10\), but coincides with it when \(p=30\) or more. Although the figures are not shown, the histograms are almost the same for \(u=2\) and \(u=3\) as for \(u=4\).
We next examine the power of the tests using the criteria \(T_n\) and \(T_F\) in the range where the LR criterion is not effective. We assume the following compound symmetric matrix \(\varvec{\Sigma }_{0,cs}\) and Toeplitz matrix \(\varvec{\Sigma }_{0,to}\) for the covariance matrix \(\varvec{\Sigma }_0\):
with \(\omega =0.8\). As the alternative hypothesis, the covariance matrix \(\varvec{\Sigma }_1\) is set as follows:
The value k was varied over the range in which \(\varvec{\Sigma }\) is a positive-definite matrix, and the power of the test using \(L_n\) is shown only for \(p=10\). The solid line, the thick dashed line, and the dashed line represent the power of the tests using the criteria \(T_F\), \(T_n\), and \(L_n\), respectively (Fig. 2).
All simulation results are shown in Figs. 3, 4, 5, and 6. In all simulations, the power of the test for small values of k was smaller when the dimension was larger. The tests using \(T_n\) and \(L_n\) had approximately the same power for \(p=10\).
Figures 3 and 4 show the power of the test for \(H_1\): \(\varvec{\Sigma }_0=\varvec{\Sigma }_{0,cs}\), \(\varvec{\Sigma }_1=\varvec{\Sigma }_{1,1}\) and \(H_1\): \(\varvec{\Sigma }_0=\varvec{\Sigma }_{0,cs}\), \(\varvec{\Sigma }_1=\varvec{\Sigma }_{1,2}\), respectively. In these cases, the power of the test using \(T_n\) was the largest, and the power of the test using \(T_F\) did not increase.
Figures 5 and 6 show the power of the test for \(H_1\): \(\varvec{\Sigma }_0=\varvec{\Sigma }_{0,to}\), \(\varvec{\Sigma }_1=\varvec{\Sigma }_{1,1}\) and \(H_1\): \(\varvec{\Sigma }_0=\varvec{\Sigma }_{0,to}\), \(\varvec{\Sigma }_1=\varvec{\Sigma }_{1,2}\), respectively. In these cases, the power of the test using \(T_F\) was the largest when the alternative hypothesis was close to the null hypothesis, i.e., up to \(k=\pm 1.0\). Beyond \(k=\pm 1.0\), the power of the test using \(T_n\) was the largest, and the power of the test using \(T_F\) increased only gradually. The simulation results are not shown, but a similar tendency was observed in the case of \(\varvec{\Sigma }_0=\varvec{\Sigma }_{0,un}\).
Next, we investigate the moderate deviation result in Theorem 4. We choose \(a_n=\sqrt{n}\) and define
for all \(x>0\), where N is the number of simulations (one hundred thousand) and \(V_n^{(k)}\) \((k=1, \ldots , N)\) is the value of the LR criterion in the k-th independent simulation. Figure 2 shows the curves of P(x) and Q(x), where the blue dashed line is P(x) and the red solid line is Q(x). The two lines are close throughout, and both rapidly approach zero. This confirms the MDP result in Theorem 4.
5 Conclusions
In this study, using the asymptotic expansion of the gamma function given by Stirling's formula, we derived the limiting distribution of the LR criterion in high dimensions and the moderate deviation principle of the LR criterion. Numerical simulations show that the probability of type I error in the test using \(T_n\) is stable even when the dimension is low and the sample size is moderate. Hypothesis testing using \(T_n\) is recommended except when the dimension p is low and the sample size n is large, or when the dimension p is very high and very close to the sample size n.
Future studies will consider hypothesis testing for high-dimensional samples in the case \(\lim _{n\rightarrow \infty }p/n>1\), as well as hypothesis testing under non-normality.
6 Proof of Theorem
In this section, we show the proofs of Theorem 3 and Theorem 4.
6.1 Proof of Theorem 3
We first introduce a lemma needed to obtain the expansion of the logarithmic moment generating function.
Lemma 1
(Lemma 5.4 in Jiang and Yang [8]) Let \(n>p=p_n\) and \(r_{n}=[-\log \{1-p/n\}]^{1/2}\). Assume \(p/n\rightarrow y\in (0,1]\) and \(s=s_n=O(1/r_n)\) and \(t=t_n=O(1/r_n)\) as \(n\rightarrow \infty \). Then, we have
as \(n\rightarrow \infty \).
To prove \(\displaystyle {(\log V_n-\mu _n)/\sigma _n\mathop {\rightarrow }^{d} N(0,1)}\), we need to show that there exists \(\delta _0>0\) such that
as \(n\rightarrow \infty \) for all \(|s|<\delta _0\).
Under the assumption, we have
as \(n\rightarrow \infty \) for \(y\in (0,1)\), and \(\sigma _n^2\rightarrow \infty \) as \(n\rightarrow \infty \) for \(y=1\). Therefore, \(\delta _0:=\inf \{\sigma _n: n\ge 3\}>0\) is well defined. Fix \(|s|<\delta _0/2\) and set \(t=t_n=s/\sigma _n\). Then, \(\{t_n: n\ge 3\}\) is bounded and \(|t_n|<1/2\) for all \(n\ge 3\). From the moment result (5), we have
for all \(n\ge 3\). Let \(r_{p,n,u}^2=-\log \{1-p/(nu)\}\). Notice
as \(n\rightarrow \infty \). Thus, we have \(t=O(1/r_{p,n-1,1})\) as \(n\rightarrow \infty \). Similarly, \(t=O(1/r_{p,n-1,u})\) and \(t=O(1/r_{p,n-1,u-1})\) can be also obtained. Using Lemma 1, we have
From the expansions (11), (12), and (13), the log moment generating function of the criterion \(V_n\) is as follows:
Let
Then, we have
i.e., this implies (9). The proof of Theorem 3 is complete.
6.2 Proof of Theorem 4
First, we show three lemmas used in the proof.
Lemma 2
(Lemma 4.2 in Jiang and Wang [7]) Let \(\lambda _n\), \(n\ge 1\), be a sequence of positive numbers satisfying
Assume that
Then, for any \(a\in \varvec{R}\), as \(n \rightarrow \infty \), we have
where the function \(\Gamma _p[z]\) is defined as
Lemma 3
For any positive integer p with \(n> p\) and \(n>1\), we have
Proof
From the idea of a Riemann sum, we have
From
Lemma 3 is obtained. \(\square \)
Lemma 4
Let p, n, and u be positive integers with \(n\ge p\) and \(u\ge 2\). Assume that \(p/n \rightarrow y\in (0,1)\) as \(n \rightarrow \infty \). Then, we have
Proof
\(\square \)
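For reference, the Gärtner–Ellis reduction used in the argument below can be stated explicitly (a standard result; see Section 2.3 in Dembo and Zeitouni [3], written here in our notation): if the scaled cumulant generating function converges to \(\lambda ^2/2\), the MDP follows with the rate function given by the Legendre transform of \(\lambda ^2/2\).

```latex
\lim_{n\to\infty}\frac{1}{a_n^{2}}\log
\mathbb{E}\!\left[\exp\!\left\{\lambda a_n\,
\frac{\log V_n-\mu_n}{\sigma_n}\right\}\right]
=\frac{\lambda^{2}}{2},\qquad \lambda\in\mathbb{R},
\quad\Longrightarrow\quad
I(x)=\sup_{\lambda\in\mathbb{R}}
\left\{\lambda x-\frac{\lambda^{2}}{2}\right\}=\frac{x^{2}}{2}.
```

The supremum is attained at \(\lambda =x\), which gives the good rate function \(x^2/2\) appearing in Theorem 4.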
According to the Gärtner-Ellis theorem (see Section 2.3 in Dembo and Zeitouni [3]), we only need to show that
for any fixed \(\lambda \in \varvec{R}\). We consider the following two cases. Case 1: The case that \(p/n \rightarrow y = 1\) as \(n\rightarrow \infty \). In this case, since
we have
for any sequence \(\{a_n\}\) satisfying the assumption of the theorem. Therefore, from Theorem 3, we have
this implies (15).
Case 2: The case for which \(p/n \rightarrow y \in (0, 1)\) as \(n\rightarrow \infty \). In this case, we have
and this implies that the variance \(\sigma _n^2\) is uniformly bounded. Since \(|\lambda a_n \sigma _n^{-1}|\rightarrow \infty \) as \(n\rightarrow \infty \), the proof of Theorem 3 cannot be used, and a more detailed analysis is needed.
For convenience, let \(\lambda _n=\lambda a_n\sigma _n^{-1}\). From the assumption, \(a_n\ll n\) and \(|\lambda _n|\ll n\). This means that we can use the result (5) to compute the moments of \(V_n\). From the result (5), we have
To analyze the logarithm of the multivariate gamma function in more detail, using Lemma 2, we have the following expansions:
Then, we obtain the following expansion:
So, we represent this expansion as follows.
where
From Lemma 3, we have the following three inequalities:
From these inequalities, we obtain
and
Therefore, \(\lim _{n\rightarrow \infty }(A_1-\mu _n)\) is bounded. From Lemma 4, we have
We can see that
It follows by (20) and the fact \(\lim _{n\rightarrow \infty }(A_3/\sigma _n^2)=1/2\) that
Then, we obtain (15). The proof of Theorem 4 is complete.
References
Anderson TW (2003) An introduction to multivariate statistical analysis. Wiley, New York
Coelho CA, Roy A (2017) Testing the hypothesis of a block compound symmetric covariance matrix for elliptically contoured distributions. TEST 26(2):308–330
Dembo A, Zeitouni O (2009) Large deviations techniques and applications. Springer, Berlin Heidelberg
Filipiak K, John M, Klein D (2022) Testing independence under a block compound symmetry covariance structure. Stat Papers. https://doi.org/10.1007/s00362-022-01335-7
Fonseca M, Kozioł A, Zmyślony R (2018) Testing hypotheses of covariance structure in multivariate data. Electron J Linear Al 33:53–62
Gajdoš A (2018) CharFunToolR: the characteristic functions toolbox (R). https://github.com/gajdosandrej/CharFunToolR
Jiang H, Wang S (2017) Moderate deviation principles for classical likelihood ratio tests of high-dimensional normal distributions. J Multivariate Anal 156:57–69
Jiang T, Yang F (2013) Central limit theorems for classical likelihood ratio tests for high-dimensional normal distributions. Ann Stat 41(4):2029–2074
Jurečková J, Kallenberg W, Veraverbeke N (1988) Moderate and Cramér-type large deviation theorems for M-estimators. Stat Probabil Lett 6(3):191–199
Liang Y, Coelho CA, von Rosen T (2021) Hypothesis testing in multivariate normal models with block circular covariance structures. Biometrical J 64(3):557–576
Leiva R (2007) Linear discrimination with equicorrelated training vectors. J Multivariate Anal 98(2):384–409
Muirhead RJ (2005) Aspects of multivariate statistical theory. Wiley-Interscience, Hoboken, N.J.
Roy A, Leiva R, Žežula I, Klein D (2015) Testing the equality of mean vectors for paired doubly multivariate observations in blocked compound symmetric covariance matrix setup. J Multivariate Anal 137:50–60
Roy A, Zmyślony R, Fonseca M, Leiva R (2016) Optimal estimation for doubly multivariate data in blocked compound symmetric covariance structure. J Multivariate Anal 144:81–90
Tsukada S (2018) Hypothesis testing for independence under blocked compound symmetric covariance structure. Commun Math Stat 6(2):163–184
Žežula I, Klein D, Roy A (2018) Testing of multivariate repeated measures data with block exchangeable covariance structure. Test 27(2):360–378
Tsukada, S. Hypothesis Testing for Independence given a Blocked Compound Symmetric Covariance Structure in a High-Dimensional Setting. J Stat Theory Pract 17, 33 (2023). https://doi.org/10.1007/s42519-023-00329-4
Keywords
- Blocked compound symmetric covariance structure
- Hypothesis testing
- Central limit theorem
- Moderate deviation principles