Abstract
We consider the spherical integral of real symmetric or Hermitian matrices when the rank of one matrix is one. We prove the existence of full asymptotic expansions for these spherical integrals and derive the first and second terms of the expansion.
1 Introduction
In this paper, we consider the expansion of the spherical integral
where \(m_N^{(\beta )}\) is the Haar measure on the orthogonal group O(N) if \(\beta =1\) and on the unitary group U(N) if \(\beta =2\), and \(D_N\), \(B_N\) are deterministic \(N\times N\) real symmetric or Hermitian matrices, which we may assume to be diagonal without loss of generality. We follow [4] to investigate the asymptotics of the spherical integrals in the case \(D_{N}=\text {diag}(\theta ,0,0,0,0\ldots 0)\):
where \(e_1\) is the first column of U.
The main result of this paper can be stated as follows.
Theorem
If \(\sup _{N}\Vert B_N\Vert _{\infty }<M\), then for any \(\theta \in \mathbb {R}\) such that \(|\theta |<\frac{1}{4M^2+10M+1}\), the spherical integral has the following asymptotic expansion (up to \(O(N^{-n-1})\) for any given n):
where v and the coefficients \(\{m_i\}_{i=0}^{n}\) depend on \(\theta \) and the derivatives of the Hilbert transform of the empirical spectral distribution of \(B_N\).
The spherical integral provides a finite-dimensional analogue of the R-transform in free probability [8, 10]: if X and Y are two freely independent self-adjoint noncommutative random variables, then their R-transforms satisfy the following additive formula:
In the random matrix setting, let \(\{B_N\}\) and \(\{\tilde{B}_N\}\) be sequences of uniformly bounded real symmetric (or Hermitian) matrices whose empirical spectral distributions converge in law toward \(\tau _B\) and \(\tau _{\tilde{B}}\), respectively. Let \(V_{N}\) be a sequence of independent orthogonal (or unitary) matrices distributed according to the Haar measure. Then the noncommutative variables \(\{B_N\}\) and \(\{V_N^*\tilde{B}_NV_N\}\) are asymptotically free, and the law of their sum \(\{B_N+V_N^*\tilde{B}_NV_N\}\) converges toward \(\tau _{B+V^*\tilde{B}V}\), which is characterized by the following additive formula:
We refer to [1, Section 5] for a proof of this.
For the spherical integral, using the same notation as above, we have the following additive formula
So if we define the finite-dimensional R-transform for a random or deterministic matrix \(B_N\) as follows
then the additive law can be formulated more concisely:
Compare this with (4): formula (4) relies on the asymptotic freeness of \(\{B_N\}\) and \(\{V_N^*\tilde{B}_NV_N\}\) and only holds after taking the limit as N goes to infinity, whereas the additive formula (5) holds for every N, providing a finite-dimensional analogue of the additivity of the R-transform. Indeed, the R-transform is a limit of our \(R_N\):
Combining this with some concentration of measure estimates, namely that, almost surely, as N goes to infinity,
Guionnet and Maïda [4] showed that the additivity of the R-transform (4) is a direct consequence of (5). Moreover, with a more quantitative version of (6), the finite-dimensional analogue of the R-transform may be used to study the rate of convergence of sums of asymptotically free random matrices.
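As a quick numerical illustration of the asymptotic freeness behind (4), one can sample a Haar orthogonal \(V_N\) via QR and check that the first two free cumulants of the empirical spectra — mean and variance — add under the free convolution. A minimal sketch, assuming numpy; the two spectra below are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400
a = np.linspace(-1, 1, N)            # spectrum of B_N (illustrative)
b = np.sign(np.linspace(-1, 1, N))   # spectrum of B~_N: +-1 (illustrative)

# Haar orthogonal matrix: QR of a Gaussian matrix, with sign correction
G = rng.standard_normal((N, N))
Q, R = np.linalg.qr(G)
Q = Q * np.sign(np.diag(R))

S = np.diag(a) + Q.T @ np.diag(b) @ Q
eigs = np.linalg.eigvalsh(S)

# under free convolution the mean and variance (first two free cumulants) add
m1 = eigs.mean()                     # = mean(a) + mean(b) exactly (traces add)
m2 = (eigs**2).mean() - m1**2        # ~ var(a) + var(b) up to O(1/N)
```

Here \(m_2\) should be close to \(\mathrm {var}(a)+\mathrm {var}(b)\approx 1/3+1\), with fluctuations of order \(O(1/N)\).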
The asymptotic expansion and some related properties of spherical integrals were thoroughly studied by Guionnet et al. [4,5,6]. However, they only studied the first term of the asymptotic expansion. Here we derive the higher-order expansion terms. Our proofs differ from those in [4]: Guionnet and Maïda relied on large deviation techniques and used the central limit theorem to derive the first-order term, whereas we express the spherical integral as an integral over only two Gaussian variables, allowing for easier asymptotic analysis.
The spherical integral can also be used to study Schur polynomials, since integral (1) can be expressed in terms of them. The Harish-Chandra–Itzykson–Zuber integral formula [2, 7, 9] gives an explicit form for integral (1) in the case \(\beta =2\) when all the eigenvalues of D and B are simple:
where \(\Delta \) denotes the Vandermonde determinant,
and \(\lambda _i(\cdot )\) is the i-th eigenvalue. If we define the N-tuple \(\mu =(\lambda _i(B_N)-N+i)_{i=1}^{N}\), then the above expression (7) is the normalized Schur polynomial times an explicit factor,
In [3], Gorin and Panova studied the asymptotic expansion of the normalized Schur polynomial \(\frac{S_\mu (x_1,x_2,\ldots x_k,1,1,1\ldots 1)}{S_\mu (1,1,\ldots ,1)}\), for fixed k. Its asymptotic expansion can be obtained from a limit formula [3, Proposition 3.9], combined with the asymptotic results for \(S_{\mu }(x_i,1,1,\ldots 1)\), which correspond to the spherical integral where D is of rank one. Therefore, our methods give a new proof of [3, Proposition 4.1].
2 Some Notation
Throughout this paper, N is a parameter going to infinity. We use \(O(N^{-l})\) to denote any quantity that is bounded in magnitude by \(CN^{-l}\) for some constant \(C>0\). We use \(O(N^{-\infty })\) to denote a sequence of quantities that are bounded in magnitude by \(C_l N^{-l}\) for any \(l>0\) and some constant \(C_l>0\) depending on l.
Let B be a real symmetric or Hermitian matrix with eigenvalues \(\{\lambda _i\}_{i=1}^{N}\), and let \(\lambda _{\min }(B)\) (resp. \(\lambda _{\max }(B)\)) denote its minimal (resp. maximal) eigenvalue. We denote the Hilbert transform of its empirical spectral distribution by \(H_B\):
On the intervals \((-\infty ,\lambda _{\min }(B))\) and \((\lambda _{\max }(B),+\infty )\), \(H_B\) is monotone. The R-transform of the empirical spectral distribution of B is
on \(\mathbb {R}{\setminus }\{0\}\), where \(H_B^{-1}\) is the functional inverse of \(H_B\) from \(\mathbb {R}{\setminus }\{0\}\) to \((-\infty ,\lambda _{\min }(B))\cup (\lambda _{\max }(B),+\infty )\). In the rest of the paper, for a given nonzero real number \(\theta \), we write \(v(\theta )=R_B(2\theta )\); for simplicity we will omit the variable \(\theta \) in \(v(\theta )\).
Remark 1
Notice that v, as defined above, satisfies the following explicit formulas:
We denote the normalized k-th derivative of the Hilbert transform by
Notice that \(A_1=1\) by the definition of v. The coefficients in the asymptotic expansion (3) can be represented in terms of these \(A_k\)’s.
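The quantities v and \(A_k\) are easy to compute for a concrete spectrum. The sketch below, assuming numpy/scipy with an illustrative grid spectrum and value of \(\theta \), solves \(H_B(z)=2\theta \) to the right of the spectrum, sets \(v=z-1/(2\theta )\), and recovers the defining relation \(A_1=1\):

```python
import numpy as np
from scipy.optimize import brentq

lam = np.linspace(-1, 1, 200)   # spectrum of B_N (illustrative), so M = 1
theta = 0.05                    # satisfies |theta| < 1/(4M^2+10M+1) = 1/15

# H_B decreases from +infinity to 0 on (lambda_max, +infinity),
# so H_B(z) = 2*theta has a unique root there
H = lambda z: np.mean(1.0 / (z - lam))
z = brentq(lambda x: H(x) - 2*theta, lam.max() + 1e-9, lam.max() + 1/(2*theta))
v = z - 1/(2*theta)             # v = R_B(2 theta) = H_B^{-1}(2 theta) - 1/(2 theta)

# normalized derivatives of the Hilbert transform at v + 1/(2 theta)
A = lambda k: np.mean(1.0 / (1 - 2*theta*lam + 2*theta*v)**k)
```

Since \(1-2\theta \lambda _i+2\theta v = 2\theta (z-\lambda _i)\), the defining relation of v gives \(A_1=1\) exactly, and \(A_2\) is the quantity appearing later in Theorem 1.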
Also we denote
3 Orthogonal Case
3.1 First-Order Expansion
In this section, we consider the real case, \(\beta =1\). For simplicity, we will omit the superscript (\(\beta \)) in all notation in this section. We can assume \(B_N\) is diagonal, \(B_{N}=\text {diag}(\lambda _1,\lambda _2,\ldots , \lambda _N)\). Recall that \(e_1\) is the first column of U, which follows the Haar measure on the orthogonal group O(N). Hence \(e_1\) is uniformly distributed on \(S^{N-1}\), the unit sphere in \(\mathbb R^N\), and can be represented as a normalized Gaussian vector,
where \(g=(g_1,g_2,\ldots , g_N)^T\) is the standard Gaussian vector in \(\mathbb {R}^N\), and \(\Vert \cdot \Vert \) is the Euclidean norm in \(\mathbb {R}^N\). Plugging this into (2) gives
where \(P(\cdot )\) is the standard Gaussian probability measure on \(\mathbb {R}\). Following the paper [4], define
Then (10) can be rewritten in the following form
Next we perform the following change of measure:
With this measure, we have \(\mathbb {E}_{P_N}(\gamma _N)=0\) and \(\mathbb {E}_{P_N}(\hat{\gamma }_N)=0\). Therefore, intuitively by the law of large numbers, as N goes to infinity, \(\gamma _N\) and \(\hat{\gamma }_N\) concentrate at the origin. This concentration is exactly what we exploit in the following.
Let us also define
where the constants \(\kappa _1\) and \(\kappa _2\) satisfy \(\frac{1}{2}>\kappa _1>2\kappa _2\) and \(2\kappa _1+\kappa _2>1\). We will prove the following proposition, which shows that the difference between \(I_N(\theta , B_N)\) and \(I_N^{\kappa _1,\kappa _2}(\theta , B_N)\) is of order \(O(N^{-\infty })\). A weaker form of this proposition also appears in [4]. With this proposition, for the asymptotic expansion we only need to consider \(I_N^{\kappa _1,\kappa _2}(\theta , B_N)\).
Proposition 1
Let \(\kappa _1\) and \(\kappa _2\) be constants satisfying \(\frac{1}{2}>\kappa _1>2\kappa _2\) and \(2\kappa _1+\kappa _2>1\). Then there exist constants c, \(c'\), depending on \(\kappa _1\), \(\kappa _2\) and \(\sup \Vert B_N\Vert _{\infty }\), such that for N large enough
Proof
We can split (11) into two parts
Following the same argument as in [4, Lemma 14],
For \(E_2\), notice that
With the change of variable \(g_i=\frac{\tilde{g}_i}{\sqrt{1-2\theta \lambda _i+2\theta v}}\),
where in terms of the new variables \(\tilde{g}_i\)’s, we have
We can bound \(|E_2|\) as,
where
The last inequality comes from the concentration of measure inequality in Lemma A1 and the uniform boundedness of \(\Vert B_N\Vert _{\infty }\). Moreover, from [4, Lemma 14], we have the following lower bound for \(I_N(\theta , B_N)\):
Since \(\kappa _1>2\kappa _2\) by assumption, \(e^{|\theta | (\Vert B_N\Vert _{\infty }+v)N^{1-\kappa _1}}\ll e^{c'N^{1-2\kappa _2}}\). Therefore, for N large enough,
This finishes the proof of (11). \(\square \)
Since the difference between \(I_N(\theta , B_N)\) and \(I_N^{\kappa _1,\kappa _2}(\theta , B_N)\) is of order \(O(N^{-\infty })\), for the asymptotic expansion we only need to consider \(I_N^{\kappa _1,\kappa _2}(\theta , B_N)\).
The next step is to expand the following integral:
We recall the definitions of \(\gamma _N\) and \(\hat{\gamma }_N\) from (12). Since the denominator \(\gamma _N+1\) in the exponent has been restricted to a narrow interval centered at 1, we can essentially ignore it via Taylor expansion, at the cost of the following error \(R_1\):
Under our assumption \(2\kappa _1+\kappa _2>1\), \(\theta N\gamma _N^2\frac{\hat{\gamma }_N-v\gamma _N}{\gamma _N+1}=O(N^{1-2\kappa _1-\kappa _2})=o(1)\), which means the above error \(R_1\) is of magnitude o(1). Therefore, the error \(R_1\) does not contribute to the first term of the asymptotic expansion of integral (13), and the first term of the asymptotics of the spherical integral comes from the following integral:
We will revisit this for the higher-order expansion in the next section. In the remainder of this section, we prove the following theorem about the first term of the asymptotic expansion of the spherical integral. It strengthens [4, Theorem 3], which requires additional conditions on the speed of convergence of the spectral measure.
Theorem 1
If \(\sup _{N}\Vert B_N\Vert _{\infty }<M\), then for any \(\theta \in \mathbb {R}\) such that \(|\theta |<\frac{1}{4M^2+10M+1}\), the spherical integral has the following asymptotic expansion,
where \(A_2=\frac{1}{N}\sum _{i=1}^{N}\frac{1}{(1+2\theta v-2\theta \lambda _i)^2}\).
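Theorem 1 can be checked numerically by Monte Carlo over the Gaussian representation (2). The exponential prefactor below is our transcription of the rank-one first-order formula from [4] (the displayed equations of the theorem are not reproduced here), so it should be read as an assumption of this sketch; numpy/scipy and the grid spectrum are illustrative choices:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
N, theta = 100, 0.05
lam = np.linspace(-1, 1, N)          # spectrum of B_N (illustrative)

# v = R_B(2 theta), computed as in the notation section
z = brentq(lambda x: np.mean(1/(x - lam)) - 2*theta,
           lam.max() + 1e-9, lam.max() + 1/(2*theta))
v = z - 1/(2*theta)
A2 = np.mean(1/(1 - 2*theta*lam + 2*theta*v)**2)

# Monte Carlo for I_N(theta, B_N) = E exp(theta*N*<B e1, e1>), e1 = g/||g||
g2 = rng.standard_normal((100000, N))**2
I_mc = np.exp(theta * N * (g2 @ lam) / g2.sum(axis=1)).mean()

# first-order prediction, transcribed from [4, Theorem 3] (assumption):
# I_N ~ exp(N*theta*v - (1/2)*sum_i log(1 + 2*theta*v - 2*theta*lam_i)) / sqrt(A2)
I_pred = np.exp(N*theta*v - 0.5*np.log(1 + 2*theta*v - 2*theta*lam).sum()) / np.sqrt(A2)
ratio = I_mc / I_pred                 # should be 1 + O(1/N)
```

For these parameters the ratio should be close to 1, with deviations of order \(O(1/N)\) plus Monte Carlo noise.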
Notice that (16) is an integral of N Gaussian variables and the exponent \(-\theta N \gamma _N(\hat{\gamma }_N-v\gamma _N)\) is a quartic polynomial in the \(g_i\)’s, so one cannot compute the above integral directly. Using the following lemma, by introducing two more Gaussian variables \(x_1\) and \(x_2\), we can reduce the degree of the exponent to two, turning it into an ordinary Gaussian integral. We can then compute it directly and obtain the higher-order asymptotic expansion.
Writing the exponent in integral (15) as a sum of two squares, we can apply Lemma A2 to reduce its degree to two. Let \(b^2=\theta /2\), \( I_1=[-N^{\frac{1}{2}-\kappa _1+\epsilon },N^{\frac{1}{2}-\kappa _1+\epsilon }]\) and \( I_2=[-N^{\frac{1}{2}-\kappa _2+\epsilon },N^{\frac{1}{2}-\kappa _2+\epsilon }]\), for some \(\epsilon >0\). Then on the region \(\{|\gamma _N|\le N^{-\kappa _1}, |\hat{\gamma }_N|\le N^{-\kappa _2}\}\),
Plugging this back into (15) and ignoring the \(O(N^{-\infty })\) error,
Notice that in the inner integral, the integration domain is the region \(\mathcal {D}=\{|\gamma _N|\le N^{-\kappa _1}, |\hat{\gamma }_N|\le N^{-\kappa _2}\}\), and the Gaussian variables \(\tilde{g}_i\) lie in this region with overwhelming probability, so integrating over the whole space \(\mathbb {R}^{N}\) instead should introduce only an exponentially small error. We first compute the integral assuming that the contribution from outside \(\mathcal {D}\) is negligible, and return to this point later. Replacing the integration region \(\mathcal {D}\) by \(\mathbb {R}^N\),
where for the numerator we used the definition (8) of v, such that
Let \(\mu _i=\frac{b(\mathrm {i}(1-v+\lambda _i) x_1+(1+v-\lambda _i)x_2)}{\sqrt{N}(1-2\theta \lambda _i+2\theta v)}\). Then \(\mu _i=O(N^{\epsilon -\kappa _2})\), where \(0<\epsilon <\kappa _2\). So we have the Taylor expansion
Later we will see that \(\int e^{\sum _{i=1}^{N}\mu _i^2}\sum _{i=1}^{N}\mu _i^2\mathrm{d}P(x_1)\mathrm{d}P(x_2)=O(1)\). Thus, the first term in the asymptotics of integral (18) comes from the integral of \(e^{\sum _{i=1}^{N}\mu _i^2}\), which is
The exponent is a quadratic form and can be written as \(-\frac{1}{2}\langle x, Kx\rangle \), where K is the following \(2\times 2\) symmetric matrix,
where \(A_2\), F, G are respectively \(\frac{1}{N}\sum _{i=1}^{N}\frac{1}{(1-2\theta \lambda _i+2\theta v)^2}\), \(\frac{1}{N}\sum _{i=1}^{N}\frac{\lambda _i}{(1-2\theta \lambda _i+2\theta v)^2}\) and \(\frac{1}{N}\sum _{i=1}^{N}\frac{\lambda _i^2}{(1-2\theta \lambda _i+2\theta v)^2}\). With this notation, the above integral (21) can be rewritten as
To deal with this complex Gaussian integral, we will use Lemma A3. Now we need to verify that our matrix K satisfies the condition of Lemma A3. Write K as the following sum
Since \(\min \{\lambda _i\}\le v(\theta ) \le \max \{\lambda _i\}\), we have \(|v(\theta )|\le \max |\lambda _i|<M\). If \(|\theta |<\frac{1}{4M^2+10M+1}\), it is easy to check that the real part of the matrix K is positive definite. To use Lemma A3, it remains to compute the determinant of K. Using the algebraic relations among the parameters \(A_2\), F, G, one finds \(\det (K)=A_2\). Therefore, we obtain the first-order asymptotics of the spherical integral,
Going back to integral (18), we need to prove that the integral outside the region \(\mathcal {D}\) is of order \(O(N^{-\infty })\); then replacing the integration domain \(\mathcal {D}=\{|\gamma _N|\le N^{-\kappa _1}, |\hat{\gamma }_N|\le N^{-\kappa _2}\}\) by the whole space \(\mathbb {R}^N\) will not affect the asymptotic expansion.
Lemma 1
Consider integral (18) on the complement of \(\mathcal {D}\), i.e., on \(\{|\gamma _N|\ge N^{-\kappa _1}\text { or }|\hat{\gamma }_N|\ge N^{-\kappa _2}\}\),
Then for N large enough, the error \(R\le c'e^{-cN^{\zeta }}\), for some constant \(c,c',\zeta >0\) depending on \(\kappa _1\), \(\kappa _2\) and \(\Vert B_N\Vert _{\infty }\).
Proof
Since \(b=\sqrt{\theta /2}\), b is either real or purely imaginary. For simplicity, we only discuss the case when b is real; the case when b is purely imaginary can be proved in the same way. The above integral can be simplified as
To simplify it, we perform a change of measure. Let \(h_i=\tilde{g}_i\sqrt{1+\frac{2b(1+v-\lambda _i)x_2}{\sqrt{N}(1-2\theta \lambda _i+2\theta v)}}\),
Here \(P_h(\cdot )\) is the Gaussian measure of \((h_i)_{i=1}^{N}\). Taking \(0<\epsilon <\frac{1}{2}-\kappa _1\), we can split the above integral into two parts,
The term \(E_2\) is of the same form as (19). By the same argument, the main contribution to \(E_2\) is a Gaussian integral over \([-N^{\epsilon },N^{\epsilon }]^c\), so it decays at a stretched exponential rate: \(E_2\le c'e^{-cN^{\zeta }}\). For \(E_1\),
If we can show that on the interval \([-N^{\epsilon }, N^{\epsilon }]\), \(P_h(\mathcal {D}^c)\) is uniformly exponentially small, independent of \(x_2\), then \(E_1\) is exponentially small. For the upper bound of \(P_h(\mathcal {D}^c)\), first by the union bound,
For simplicity, we only bound the first term; the second term can be bounded in exactly the same way.
In the second to last line, we used the defining relation (8) of v, namely \(\frac{1}{N}\sum _{i=1}^{N}\frac{1}{1-2\lambda _i\theta +2v\theta }=1\). For the last inequality, since \(\epsilon -\frac{1}{2}<-\kappa _1\), the term \(O(N^{\epsilon -\frac{1}{2}})\) is negligible compared with \(N^{-\kappa _1}\). The concentration of measure inequality in Lemma A1 then implies (25). \(\square \)
3.2 Higher-Order Expansion
We recall the definition of \(\gamma _N\) and \(\hat{\gamma }_N\) in terms of the new variables \(\tilde{g}_i\)’s,
To compute the higher-order expansion of the spherical integral \(I_N(\theta ,B_N)\), we need to obtain a full asymptotic expansion of (13).
We consider the l-th summand in expression (28). As proved in [4, Theorem 3], the distribution of \((\sqrt{N}\gamma _N, \sqrt{N}\hat{\gamma }_N)\) converges to \(\Gamma \), a centered two-dimensional Gaussian measure on \(\mathbb {R}^{2}\) with covariance matrix
Moreover, the matrix R is non-degenerate. Therefore, by the central limit theorem, we can obtain the asymptotic expression of the l-th term in (28)
Therefore, if we cut off the infinite sum (28) at the l-th term, then the error terms are of magnitude \(O(N^{-(l+1)/2})\). To obtain the full expansion, we need to understand each term in expansion (28).
Define
By the dominated convergence theorem, we can interchange differentiation and integration. Integral (29) can be written as
Therefore, to understand the asymptotic expansion of (29), we only need to compute the asymptotic expansion of \(f_l(t)\) for \(l=0,1,2,\ldots \). We have the following proposition.
Proposition 2
If \(\sup _{N}\Vert B_N\Vert _{\infty }<M\), then for any \(\theta \in \mathbb {R}\) such that \(|\theta |<\frac{1}{4M^2+10M+1}\), \(f_l\) has the following asymptotic expansion (up to \(O(N^{-n-1})\) for any given n)
where \(\{m_i\}_{i=0}^{n}\) depend explicitly on t, \(\theta \), v and the derivatives of the Hilbert transform of the empirical spectral distribution of \(B_N\), namely \(A_2,A_3,A_4\ldots A_{2n+2}\), as defined in (9).
Proof
First we show that \(f_l\) has an asymptotic expansion of the form (31); then we show that the \(m_i\)’s depend explicitly only on t, \(\theta \) and \(\{A_k\}_{k=2}^{+\infty }\). Introducing two Gaussian random variables \(x_1\) and \(x_2\), as in (17) and (18), we obtain the following expression for \(f_l(t)\):
where \(b^2=t/2\), \(I_1=[-N^{\frac{1}{2}-\kappa _1+\epsilon },N^{\frac{1}{2}-\kappa _1+\epsilon }]\) and \(I_2=[-N^{\frac{1}{2}-\kappa _2+\epsilon }, N^{\frac{1}{2}-\kappa _2+\epsilon }]\), for some \(\epsilon >0\). The same argument as in Lemma 1 for replacing the integration domain of the inner integral applies here with little change, so we can replace the integration domain \(\{|\gamma _N|\le N^{-\kappa _1}, |\hat{\gamma }_N|\le N^{-\kappa _2}\}\) by \(\mathbb {R}^{N}\).
where we used (20) for the first equality, and
Notice here \(\mu _i\) can be written as a linear function of \(\nu _i\)
Formula (34) consists of two parts: a product factor \(E_1\) and a Gaussian integral \(E_2\). For \(E_1\) we can obtain the following explicit asymptotic expansion
Notice
If we regard v and \(\theta \) as constants (since they are of magnitude O(1)), then the sum \(\sum _{i=1}^N\frac{\lambda _i^m}{N(1-2\theta \lambda _i+2\theta v)^k}\) can be written as a linear combination of \(A_2,A_3,\ldots ,A_k\) for any \(0\le m\le k\). Thus we can expand (36) to obtain
where the \(g_k(x_1,x_2)\)’s are polynomials in \(x_1\) and \(x_2\). Considering t, \(\theta \) and v as constants, the coefficients of \(g_k(x_1,x_2)\) are polynomials in \(A_2,A_3,\ldots A_{k+2}\). Moreover, the degree of each monomial of \(g_k(x_1,x_2)\) is congruent to k modulo 2.
Next we compute the Gaussian integral \(E_2\) in (34). Expanding the l-th power, we obtain
Denote
then \(p_{k_j}\) is a \(k_j\)-th degree polynomial, which only depends on \(k_j\). With this notation, the Gaussian integral \(E_2\) can be written as,
By the following lemma, the above expression can be rewritten in a more symmetric way, as a sum of terms of the form
where \(k_1\ge k_2 \ge \cdots \ge k_m\), \(k_1+k_2+\cdots +k_m=l\) and the \(q_{k_j}\)’s are polynomials depending only on \(k_j\). \(\square \)
Lemma 2
Given integers \(s_1,s_2,\ldots , s_m\) and polynomials \(q_1,q_2,\ldots, q_m\), consider the following polynomial in the 2N variables \(x_1,x_2,\ldots, x_N, y_1,y_2,\ldots, y_N\):
Then h can be expressed as a sum of terms of the following form
where l, \(\{t_i\}_{i=1}^{N}\) and polynomials \(\{\tilde{q}_i\}_{i=1}^{N}\) are to be chosen.
Proof
We prove this by induction on m. If \(m=1\), then h itself is of the form (38). Assuming the statement holds for \(1,2,3,\ldots , m-1\), we prove it for m.
Notice that the summands of (39) are \(\sum _{\begin{array}{c} 1\le i_1,i_2\ldots ,i_d\le N \\ \text {distinct} \end{array}}\prod _{j=1}^{d}x_{i_j}^{\sum _{l\in \pi _j}s_l}\prod _{l\in \pi _j}q_{l}(y_{i_j})\), which are of the same form as h but with smaller m. Thus, by induction, each term in (39) can be expressed as a sum of terms of the form (38), and hence so can h. \(\square \)
In view of (37), since \(\mu _i=O(N^{\epsilon -\kappa _2})\), we can Taylor expand \(1/({1+\mu _i})\) in (37). Combining this with (35), the relation between \(\mu _i\) and \(\nu _i\), we see that (37) has the following full expansion:
where the \(h_k(x_1,x_2)\) are k-th degree polynomials in \(x_1\) and \(x_2\). Considering t, \(\theta \) and v as constants (since they are of magnitude O(1)), the coefficients of \(h_k(x_1,x_2)\) are polynomials in \(A_2,A_3,\ldots A_{k+l-m+1}\). Since the Gaussian integral \(E_2\) is a sum of terms each having the asymptotic expansion (40), it itself has a full asymptotic expansion,
where the coefficients of \(s_k(x_1,x_2)\) are polynomials in \(A_2,A_3,\ldots A_{k+1}\), and the degree of each monomial of \(s_k\) is congruent to k modulo 2. Combining the asymptotic expansions of \(E_1\) and \(E_2\), we obtain the following expansion of \(f_l(t)\) (up to an error of order \(O(N^{-\infty })\)):
where \(\tilde{K}\) is the following \(2\times 2\) matrix
Formula (42) is a Gaussian integral in \(x_1\) and \(x_2\). If we cut off at \(k=m\), this results in an error term \(O(N^{-\frac{m+1}{2}})\). Now the integrand is a finite sum. The integral is \(O(N^{-\infty })\) outside the region \(I_1\times I_2\); thus, we obtain the following asymptotic expansion,
Notice that the degree of each monomial of \(g_l(x_1,x_2)\) is congruent to l modulo 2, and the degree of each monomial of \(s_{k-l}(x_1,x_2)\) is congruent to \(k-l\) modulo 2. For any odd k, \(\sum _{l=0}^{k}g_l(x_1,x_2)s_{k-l}(x_1,x_2)\) is a sum of monomials of odd degree. Since \(x_1\), \(x_2\) are centered Gaussian variables, the integral of \(\sum _{l=0}^{k}g_l(x_1,x_2)s_{k-l}(x_1,x_2)\) vanishes. Using Lemma A5, (43) can be rewritten as
The entries of the matrix \(\tilde{K}\) and the coefficients of \(\sum _{l=0}^{2k}g_l(x_1,x_2)s_{2k-l}(x_1,x_2)\) depend only on t, \(\theta \), v and \(A_2,A_3,\ldots ,A_{2k+2}\). This implies that \(f_l(t)\) has the expansion (31). \(\square \)
From the argument above, the asymptotic expansion of (27) up to \(O(N^{-l/2})\) is a finite sum in terms of derivatives of the \(f_k(t)\)’s at \(t=\theta \). Differentiating \(f_l(t)\) term by term, we arrive at the main theorem of this paper.
Theorem 2
If \(\sup _{N}\Vert B_N\Vert _{\infty }<M\), then for any \(\theta \in \mathbb {R}\) such that \(|\theta |<\frac{1}{4M^2+10M+1}\), the spherical integral has the following asymptotic expansion (up to \(O({N^{-n-1}})\) for any given n)
where \(v=R_{B_N}(2\theta )\) and \(\{m_i\}_{i=0}^{n}\) depend on \(\theta \), v and the derivatives of the Hilbert transform of the empirical spectral distribution of \(B_N\) at \(v+1/2\theta \), namely \(A_2,A_3,\ldots A_{2n+2}\) as defined in (9). In particular, we have
Proof
In Theorem 1 of the last section, we computed the first term of the expansion, \(m_0=\frac{1}{\sqrt{A_2}}\). It remains to determine the second term \(m_1\). For this, we cut off (31) at \(l=2\),
Taking \(l=0,1,2\) in (30), we obtain the asymptotic expansions of \(f_0\), \(f_1\) and \(f_2\) (the detailed computation is given in the appendix),
Plugging them back into (44), we get
\(\square \)
4 Unitary Case
In this section, we consider the unitary case, \(\beta =2\). As we will soon see, the unitary case is a special case of the orthogonal case. With the same notation as before, let \(B_{N}=\text {diag}(\lambda _1,\lambda _2,\ldots , \lambda _N)\) and let U follow the Haar measure on the unitary group U(N). The first column \(e_1\) of U can be parametrized as a normalized complex Gaussian vector,
where \(g^{(1)}=(g_1,g_3\ldots ,g_{2N-3},g_{2N-1})^{T}\) and \(g^{(2)}=(g_{2},g_{4},\ldots , g_{2N-2},g_{2N})^{T}\) are independent Gaussian vectors in \(\mathbb {R}^N\). Then the spherical integral has the following form
Consider the \(2N\times 2N\) diagonal matrix \(D_{2N}=\text {diag}\{\lambda _1,\lambda _1,\lambda _2,\lambda _2,\ldots ,\lambda _N,\lambda _N\}\) with each \(\lambda _i\) appearing twice. Then we have the following relation,
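The doubling relation above is elementary to verify numerically: with \(e_1\) the normalized complex Gaussian vector, \(\langle B_N e_1,e_1\rangle \) coincides exactly with the corresponding real quadratic form for \(D_{2N}\). A minimal sketch, assuming numpy; the spectrum is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
lam = np.array([0.3, -0.5, 0.9, 0.1])   # spectrum of B_N (illustrative)

g = rng.standard_normal(2 * N)           # (g_1, ..., g_{2N})
gc = g[0::2] + 1j * g[1::2]              # complex Gaussian vector
e1 = gc / np.linalg.norm(gc)             # first column of a Haar unitary

lhs = lam @ np.abs(e1)**2                # <B_N e1, e1> in the unitary case
lam2 = np.repeat(lam, 2)                 # spectrum of D_{2N}
rhs = (lam2 @ g**2) / (g @ g)            # real 2N-dimensional quadratic form
```

Hence \(\theta N\langle B_N e_1,e_1\rangle = (\theta /2)(2N)\) times the real quadratic form for \(D_{2N}\), which is exactly the substitution \(\theta \mapsto \theta /2\), \(B_N\mapsto D_{2N}\) used below.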
Define \(\tilde{v}\) and \(\{\tilde{A}_i\}_{i=1}^{\infty }\) as in the notation section but with \(\theta \) replaced by \(\theta /2\) and \(B_N\) replaced by \(D_{2N}\); namely, \(\tilde{v}=R_{B_N}(\theta )\) and
Then from Theorem 2, we have the following theorem for the unitary case.
Theorem 3
If \(\sup _{N}\Vert B_N\Vert _{\infty }<M\), then for any \(\theta \in \mathbb {R}\) such that \(|\theta |<\frac{1}{4M^2+10M+1}\), the spherical integral \(I_{N}^{(2)}(\theta , B_N)\) has the following asymptotic expansion (up to \(O(N^{-n-1})\) for any given n)
where \(\tilde{v}=R_{B_N}(\theta )\) and \(\{m_i\}_{i=0}^{n}\) depend on \(\theta \), \(\tilde{v}\), and \(\{\tilde{A}_i\}_{i=2}^{2n+2}\), the derivatives of the Hilbert transform of the empirical spectral distribution of \(B_N\) at \(\tilde{v}+1/\theta \). In particular, we have
References
Anderson, G.W., Guionnet, A., Zeitouni, O.: An Introduction to Random Matrices, volume 118 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (2010)
Duistermaat, J.J., Heckman, G.J.: On the variation in the cohomology of the symplectic form of the reduced phase space. Invent. Math. 69(2), 259–268 (1982)
Gorin, V., Panova, G.: Asymptotics of symmetric polynomials with applications to statistical mechanics and representation theory. Ann. Probab. 43(6), 3052–3132 (2015)
Guionnet, A., Maïda, M.: A Fourier view on the \(R\)-transform and related asymptotics of spherical integrals. J. Funct. Anal. 222(2), 435–490 (2005)
Guionnet, A., Zeitouni, O.: Large deviations asymptotics for spherical integrals. J. Funct. Anal. 188(2), 461–515 (2002)
Guionnet, A., Zeitouni, O.: Addendum to: “Large deviations asymptotics for spherical integrals” [J. Funct. Anal. 188 (2002), no. 2, 461–515; mr1883414]. J. Funct. Anal. 216(1), 230–241 (2004)
Itzykson, C., Zuber, J.B.: The planar approximation. II. J. Math. Phys. 21(3), 411–421 (1980)
Maassen, H.: Addition of freely independent random variables. J. Funct. Anal. 106(2), 409–438 (1992)
Mehta, M.L.: Random Matrices, volume 142 of Pure and Applied Mathematics (Amsterdam), third edn. Elsevier/Academic Press, Amsterdam (2004)
Voiculescu, D.: Addition of certain noncommuting random variables. J. Funct. Anal. 66(3), 323–346 (1986)
Acknowledgements
This research was conducted at the Undergraduate Research Opportunities Program of the MIT Mathematics Department, under the direction of Prof. Alice Guionnet. I would like to express to her my warmest thanks both for introducing me to this problem and for her dedicated guidance throughout the research process. I want to also thank the anonymous reviewer for careful reading of the manuscript and helpful suggestions.
Appendices
Appendix A. Properties of Gaussian Random Variables
The following is a useful lemma on the concentration of measure for the sum of squares of Gaussian random variables.
Lemma A1
Given independent Gaussian random variables \(\{g_i\}_{i=1}^{N}\), consider the weighted sum \(\sum _{i=1}^{N}{a_i g_i^2}\), where the coefficients \(\{a_i\}_{i=1}^N\) depend on N. If there exists some constant \(c>0\), such that \(\max \{|a_i|\}\le \frac{c}{\sqrt{N}}\), then for N large enough, the weighted sum satisfies the following concentration inequality,
Proof
This can be proved by applying Markov’s inequality to \(\exp \{t\sum _{i=1}^{N}a_i(g_i^2-1)\}\) for some well-chosen value of t. \(\square \)
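A quick simulation illustrates the scale in Lemma A1; this is a sketch assuming numpy, with illustrative weights. With \(\max |a_i|\le c/\sqrt{N}\), the centered sum \(\sum _i a_i(g_i^2-1)\) has variance \(2\sum _i a_i^2=O(1)\) and Gaussian-like tails at that scale:

```python
import numpy as np

rng = np.random.default_rng(3)
N, samples = 1000, 5000
a = rng.uniform(-1, 1, N) / np.sqrt(N)      # |a_i| <= 1/sqrt(N)
g2 = rng.standard_normal((samples, N))**2
s = (g2 - 1) @ a                            # centered weighted sums

sigma = np.sqrt(2) * np.linalg.norm(a)      # std of s, since Var(g^2 - 1) = 2
tail = np.mean(np.abs(s) > 3 * sigma)       # three-sigma tail frequency
```

The three-sigma tail frequency should be small (roughly at the Gaussian level of a fraction of a percent), consistent with the sub-exponential concentration the lemma quantifies.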
The following is a useful trick: we can express \(e^{\alpha ^2/2}\) as a Gaussian integral, so that the exponents become linear in \(\alpha \).
Lemma A2
For any \(\alpha \in \mathbb {C}\) with \(|\alpha |\le CN^{\kappa }\),
where interval \(I=[-N^{\kappa +\epsilon }, N^{\kappa +\epsilon }]\) for any \(\epsilon >0\).
Proof
Recall the formula, for any \(\alpha \in \mathbb {C}\)
The main contribution of the above integral comes from where \(|x+\alpha |\) is small. More precisely,
where c and \(c'\) are constants independent of N. Therefore
\(\square \)
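The identity behind Lemma A2 is the complex shift of a Gaussian integral, \(e^{\alpha ^2/2}=\frac{1}{\sqrt{2\pi }}\int _{\mathbb {R}}e^{-x^2/2+\alpha x}\,\mathrm {d}x\); truncating the range only costs an \(O(N^{-\infty })\) error. A direct numerical check, assuming numpy, with an illustrative value of \(\alpha \):

```python
import numpy as np

alpha = 1.3 + 0.7j                    # illustrative complex alpha
x = np.linspace(-12, 12, 400001)      # truncated range; the tails are negligible
dx = x[1] - x[0]

integrand = np.exp(-x**2 / 2 + alpha * x) / np.sqrt(2 * np.pi)
val = integrand.sum() * dx            # Riemann sum of the Gaussian integral
err = abs(val - np.exp(alpha**2 / 2))
```

The error here combines the quadrature error and the truncation error, both of which are far below any polynomial scale.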
The following lemmas are useful identities about Gaussian integrals.
Lemma A3
If an n-by-n symmetric matrix K can be written as \(K=A+iB\), where A is a real positive definite matrix and B is a real symmetric matrix, then we have the Gaussian integral formula
Proof
Since A is positive definite, there exists a positive definite matrix C such that \(A=C^TC\). Since \(CBC^{T}\) is symmetric, it can be diagonalized by some special orthogonal matrix U; let \(P=UC\). Then we have \(A=P^TP\) and \(B=P^T D P\), where \(D=\text {diag}\{d_1,d_2,\ldots , d_n\}\). Thus \(K=A+iB=P^T(I+iD)P\). Plugging this back into the integral,
where the square root in (46) is the branch with positive real part; in our case, K is a \(2\times 2\) matrix. \(\square \)
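Lemma A3 can be sanity-checked numerically in the \(2\times 2\) case relevant here; this is a sketch assuming numpy, with illustrative matrices A and B. The integral of \(e^{-\frac{1}{2}\langle x, Kx\rangle }\) over \(\mathbb {R}^2\) should equal \(2\pi /\sqrt{\det K}\), with the square root taken in the branch of positive real part:

```python
import numpy as np

A = np.array([[2.0, 0.3], [0.3, 1.5]])    # real positive definite (illustrative)
B = np.array([[0.8, -0.4], [-0.4, 0.5]])  # real symmetric (illustrative)
K = A + 1j * B

x = np.linspace(-10, 10, 1201)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x)
quad = K[0, 0]*X**2 + 2*K[0, 1]*X*Y + K[1, 1]*Y**2   # <x, Kx>
val = np.exp(-0.5 * quad).sum() * h * h              # 2D Riemann sum

pred = 2*np.pi / np.sqrt(np.linalg.det(K))  # principal sqrt: positive real part
```

Since the real part of K is positive definite, the integrand decays like a Gaussian and the Riemann sum converges extremely fast, so the check is essentially exact.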
Lemma A4
Given an n-by-n symmetric matrix K whose real part is positive definite, we have the following change of variables formula for Gaussian-type integrals,
where F is a polynomial in \(\{x_i\}_{i=1}^{n}\), \(A=\text {diag}\{a_1,a_2\ldots ,a_n\}\) with \(\mathfrak {R}(a_i)>0\) for \(i=1,2\ldots n\), and \(b\in \mathbb {C}^n\).
Proof
This can be proved by reducing to the one-dimensional case. \(\square \)
Lemma A5
Given an n-by-n symmetric matrix K whose real part is positive definite, we have the following integral formula for any polynomial F (or even power series) in the variables \(\{x_i\}_{i=1}^{n}\),
Proof
Consider the Laplace transform,
Since \(e^{-\frac{1}{2}\langle x, Kx\rangle }\) is a Schwartz function, it decays fast enough to dominate \(e^{\langle x, \xi \rangle }\). By the dominated convergence theorem, we can interchange integration and differentiation.
From Lemma A3 and Lemma A4, we get
\(\square \)
Appendix B. Detailed Computation for \(m_1\)
In this section, we give the detailed computation of the coefficient \(m_1\). To compute \(f_0\), \(f_1\), \(f_2\), take \(l=0,1,2\) in (30) and follow the procedure of Section 3.2. It is not hard to derive
where
For \(f_1\) and \(f_2\),
where
Integrals (47) and (48) are purely symbolic computations and can easily be carried out with computer algebra software such as Mathematica, yielding explicit formulas for \(f_0\), \(f_1\) and \(f_2\).
In this way, we can obtain arbitrarily high-order expansion terms of the spherical integral.
Huang, J. Asymptotic Expansion of Spherical Integral. J Theor Probab 32, 1051–1075 (2019). https://doi.org/10.1007/s10959-018-0840-2