Abstract
Polynomial ensembles are a sub-class of probability measures within determinantal point processes. Examples include products of independent random matrices, with applications to Lyapunov exponents, and random matrices with an external field, which may serve as schematic models of quantum field theories with temperature. We first analyse expectation values of ratios of an equal number of characteristic polynomials in general polynomial ensembles. Using Schur polynomials, we show that polynomial ensembles constitute Giambelli compatible point processes, leading to a determinant formula for such ratios as in classical ensembles of random matrices. In the second part, we introduce invertible polynomial ensembles, given, e.g., by random matrices with an external field. Expectation values of arbitrary ratios of characteristic polynomials are expressed in terms of multiple contour integrals. This generalises previous findings by Fyodorov, Grela, and Strahov for a single ratio in the context of eigenvector statistics in the complex Ginibre ensemble.
1 Introduction
In this paper, we study correlation functions of characteristic polynomials in a sub-class of determinantal random point processes. They are called polynomial ensembles [39] and belong to biorthogonal ensembles in the sense of Borodin [10]. Polynomial ensembles are characterised by the fact that one of the two determinants in the joint density of points is given by a Vandermonde determinant, while the other one is kept general. Thus they generalise the classical ensembles of Gaussian random matrices [41]. Polynomial ensembles appear in various contexts as the joint distribution of eigenvalues (or singular values) of random matrices, see [3, 14, 19, 20, 27]. They enjoy many invariance properties on the level of the joint density, kernel and bi-orthogonal functions [35, 38] and provide examples of realisations of multiple orthogonal polynomials, see e.g. [8, 20, 39], and of Muttalib–Borodin ensembles [10, 42].
Random matrices enjoy many different applications in physics and beyond, see [2] and references therein. Polynomial ensembles in particular are relevant in the following contexts: Ensembles with an external field have been introduced as a tool to count intersection numbers of moduli spaces on Riemann surfaces [16]. In the application to the quantum field theory of the strong interactions, quantum chromodynamics (QCD), they have been used as a schematic model to study the influence of temperature on the chiral phase transition [29]. Detailed computations of Dirac operator eigenvalues [28, 45] within this class of models have so far been restricted to supersymmetric techniques; they can now be addressed in the framework of biorthogonal ensembles.
Recently, sums and products of random matrices have been shown to lead to polynomial ensembles [3, 18, 36]—see [4] for a review. This has important consequences for the spectrum of Lyapunov exponents, relating this multiplicative process to the additive process of Dyson’s Brownian motion [6]. Last but not least, polynomial ensembles of Pólya type have led to a deeper understanding of the relation between singular values and complex eigenvalues [34, 35], where a bijection between the respective point processes was constructed.
In this paper, we consider expectation values of products and ratios of characteristic polynomials within the class of polynomial ensembles. While these can be used to generate multi-point resolvents and thus arbitrary k-point density correlation functions, as well as the kernel of bi-orthogonal polynomials, they are of interest in their own right as well. Examples of applications include the partition function of QCD with an arbitrary number of fermionic flavours [46]. In mathematics, the Montgomery conjecture in conjunction with moments of the Riemann zeta-function has led to important insights [32], where moments and correlations of characteristic polynomials relevant for more general L-functions were computed.
Mathematical properties of ratios of characteristic polynomials have equally received attention, and we will not be able to do full justice to the existing literature here. Based on earlier works such as [7, 22], the determinantal structure of the expectation value of ratios of characteristic polynomials in orthogonal polynomial ensembles was expressed in several equivalent forms, given in terms of orthogonal polynomials, their Cauchy transforms or their respective kernels. This structure was generalised to products of characteristic polynomials in [1], as well as to all symmetry classes [33]. The universality of such ratios has been studied in several works [13, 47], in particular its relation to the sine- and Airy-kernel [11]. New critical behaviours have been found from such ensembles as well [15], and their universality was discussed in [9].
Moving to polynomial ensembles, expectation values of products are easy to evaluate by including them into the Vandermonde determinant, just as for orthogonal polynomial ensembles. Determinantal formulas for expectation values of characteristic polynomials and their inverse have been derived, see e.g. [8, 19, 21]. A duality in the number of products and matrix dimension, which is well known for the classical ensembles, holds also in this external field model [17]. The kernel for general polynomial ensembles has been expressed in terms of the residue of a single ratio of characteristic polynomials in [19], see also [11, 26]. Most recently the study of eigenvector statistics of random matrices has seen a revival, and also in this context expectation values of ratios of characteristic polynomials in polynomial ensembles arise [23, 24]. This has been one of the starting points of the present work.
The outline of the paper is as follows. In Sect. 2, we introduce polynomial ensembles, provide several examples, and state the main results of the present paper. In particular, Theorem 2.2 says that any polynomial ensemble is a Giambelli compatible point process in the sense of Borodin, Olshanski, and Strahov [12]. This leads to Theorem 2.3, expressing the expectation value of the ratio of an equal number of characteristic polynomials as a determinant of a single ratio, generalising [7, Theorem 3.3] to polynomial ensembles. In Sect. 2.3, we introduce a more restricted class of polynomial ensembles which we call invertible. Here, we give a nested multiple complex contour integral representation for general ratios of characteristic polynomials in Theorem 2.9. The number of integrals only depends on the number of characteristic polynomials, but not on the number of points N of the point process. This generalises the results of [24, Theorem 5.1] to rectangular random matrices, in the presence of an arbitrary number of characteristic polynomials. Several examples are given that belong to the class of invertible polynomial ensembles, including the external field models. Sections 3 and 4 are devoted to the proofs of the results stated in Sect. 2. Section 5 contains some special cases and comparison with the work by Fyodorov, Grela, and Strahov [24]. Finally, Appendix A collects properties of the Vandermonde determinant, when adding or removing factors.
2 Definitions and Statement of Results
2.1 Polynomial Ensembles
We introduce polynomial ensembles following [39]. They are defined by a probability density function on \(I^N\), where \(I \subseteq {\mathbb {R}}\) is an interval. The probability density function is given by
where \(\Delta _N(x_1,\ldots ,x_N) = \prod _{1\le i < j \le N} (x_i-x_j)=\det \left[ x_i^{N-j}\right] _{i,j=1}^N\) is the Vandermonde determinant of N variables. The \(\varphi _{1},\ldots ,\varphi _{N}\) are certain integrable real-valued functions on I, such that the normalisation constant \({\mathcal {Z}}_N\)
exists and is nonzero. The constant \({\mathcal {Z}}_N\) is also called partition function in the physics literature. Polynomial ensembles are formed by eigenvalues (or singular values) of certain \(N\times N\) random matrices H, see examples below. Here, the matrix \(G=(g_{k,l})_{k,l=1}^{N}\) is the invertible generalised moment matrix with entries
The second equality in (2.2) follows using (A.1) and the Andréief integral formula,
valid for any two sets of integrable functions \(\psi _k\) and \(\phi _l\). We will now give some explicit realisations of polynomial ensembles in terms of random matrices. The simplest example of a polynomial ensemble is given by the eigenvalues of \(N\times N\) complex Hermitian random matrices H from the Gaussian Unitary Ensemble (GUE), defined by the probability measure
The probability density function of the real eigenvalues \(x_1,\ldots ,x_N\) of H reads [41]
This is a polynomial ensemble where the resulting \(\varphi \)-functions, \(\varphi _k(x)=x^{N-k}e^{-x^2}\), are obtained after multiplying the exponential factors into one of the Vandermonde determinants. Note that the GUE is an orthogonal polynomial ensemble.
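As a quick numerical illustration (a Python sketch of our own, not part of the original derivation), the two expressions quoted above for the Vandermonde determinant, the product formula and \(\det \left[ x_i^{N-j}\right]\), can be compared directly:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
x = rng.standard_normal(N)

# product form: prod_{i<j} (x_i - x_j)
prod_form = np.prod([x[i] - x[j] for i in range(N) for j in range(i + 1, N)])

# determinant form: det[x_i^{N-j}]; np.vander uses exactly these decreasing powers
det_form = np.linalg.det(np.vander(x))

assert np.isclose(prod_form, det_form)
```

Here `np.vander` produces precisely the matrix \(\left[ x_i^{N-j}\right] _{i,j=1}^N\), with powers decreasing across columns.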
The GUE with an external source or field [14, 27] contains an additional constant, deterministic Hermitian matrix A of size \(N\times N\) that we choose to be diagonal here, \(A=\text{ diag }(a_1,\ldots ,a_N)\) with \(a_j\in {\mathbb {R}}\) for \(j=1,\ldots ,N\), without loss of generality. It will constitute our first main example and is defined by the probability measure
with the probability density function
The resulting \(\varphi _k(x)=e^{-(x-a_k)^2}\) follows from the Harish-Chandra–Itzykson–Zuber integral [30, 31] and from multiplying the Gaussian term inside the determinant. We refer to [14] for the derivation. Notice that the second determinant in (2.8) cannot be reduced to a Vandermonde determinant in general.
Our second main example is the chiral GUE with an external source, cf. [19]. It is defined in terms of a complex non-Hermitian \(N\times (N+\nu )\)-dimensional random matrix X and a deterministic matrix A of equal size, with \(\nu \ge 0\). Again without loss of generality, we can choose \(AA^\dag =\text{ diag }(a_1,\ldots ,a_N)\), with elements \(a_j\in {\mathbb {R}}_+\) for \(j=1,\ldots ,N\). The ensemble is defined by
At vanishing A, it reduces to the chiral GUE also called complex Wishart or Laguerre unitary ensemble. The probability density function of the real positive eigenvalues \(x_1,\ldots ,x_N\) of \(XX^\dag \) reads,
The \(\varphi \)-functions \(\varphi _k(x)=x^{\nu /2}e^{-(x+a_k)}I_\nu (2\sqrt{a_kx})\), containing the modified Bessel function of the first kind \(I_\nu \), follow from the Berezin–Karpelevich integral formula, cf. [44]. In principle, we may also allow the parameter \(\nu >-1\) to take real values.
In the application to QCD at finite temperature, typically the density (2.9) is endowed with \(N_f\) extra terms, \(P(X) \rightarrow P(X)\prod _{f=1}^{N_f}\det [XX^\dag +m_f^2\mathbb {1}_N]\), with \(m_{f=1,\ldots ,N_f}\in {\mathbb {R}}\), that correspond to \(N_f\) fermion flavours with masses \(m_f\), see e.g. [45] which also motivates the present study. We would like to mention that the expectation value of the ratio of two characteristic polynomials studied in [24] follows from the above ensemble, when setting \(\nu =0\) and letting \(m_f\rightarrow 0\) for all \(f=1,\ldots ,N_f\). This leads to the polynomial ensemble with \(\varphi _k(x)=x^{{\mathcal {L}}}e^{-(x+a_k)}I_0(2\sqrt{a_kx})\) of [24], with \({\mathcal {L}}= N_f\).
Further examples have been given already in the introduction, including the singular values of products of independent random matrices, see [4] for a review, where \(\varphi _k(x)\) is given by a special function, the Meijer G-function, and more generally Pólya ensembles [34, 35]. Notice that when also the Vandermonde determinant in (2.1) is replaced by a general determinant, as in the Andréief integration formula (2.4), we are back to biorthogonal ensembles [10]; an explicit example can be found in [5]. For this class, our methods below will not apply in general.
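The Andréief formula (2.4), used repeatedly below, states that \(\int _{I^N} \det \left[ \psi _k(x_j)\right] \det \left[ \phi _l(x_j)\right] \prod _j \mathrm{d}x_j = N!\,\det \left[ \int _I \psi _k(x)\phi _l(x)\mathrm{d}x\right] \). A small numerical sanity check in Python (with arbitrarily chosen test functions on \(I=[0,1]\); our own illustration) is:

```python
import math
from itertools import product
import numpy as np

# two sets of test functions on I = [0, 1]
psis = [lambda x: 1.0 + 0.0 * x, lambda x: x, lambda x: x ** 2]
phis = [lambda x: np.exp(x), lambda x: np.sin(x), lambda x: x ** 3]
N = 3

t, w = np.polynomial.legendre.leggauss(20)
t = 0.5 * (t + 1.0)                       # Gauss-Legendre nodes mapped to [0, 1]
w = 0.5 * w

def det_at(fs, xs):
    return np.linalg.det(np.array([[f(xi) for f in fs] for xi in xs]))

# left-hand side: the N-fold integral of det[psi_k(x_j)] * det[phi_l(x_j)]
lhs = 0.0
for idx in product(range(len(t)), repeat=N):
    xs = t[list(idx)]
    lhs += np.prod(w[list(idx)]) * det_at(psis, xs) * det_at(phis, xs)

# right-hand side: N! times the determinant of the moment matrix g_kl
G = np.array([[np.sum(w * pk(t) * pl(t)) for pl in phis] for pk in psis])
rhs = math.factorial(N) * np.linalg.det(G)

assert np.isclose(lhs, rhs)
```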
2.2 Polynomial Ensembles as Giambelli Compatible Point Processes
In this section, we adopt notation and definitions from Macdonald [40]. Let \(\Lambda \) be the algebra of symmetric functions. The Schur functions \(s_{\lambda }\) indexed by Young diagrams \(\lambda \) form an orthonormal basis in \(\Lambda \). Recall that Young diagrams can be written in the Frobenius notation, namely
where d equals the number of boxes on the diagonal of \(\lambda \), \(p_j\) with \(j=1,\ldots ,d\) denotes the number of boxes in the jth row of \(\lambda \) to the right of the diagonal, and \(q_l\) with \(l=1,\ldots ,d\) denotes the number of boxes in the lth column of \(\lambda \) below the diagonal. The Schur functions satisfy the Giambelli formula:
The Schur polynomial \(s_{\lambda }\left( x_1,\ldots ,x_N\right) \) is the specialisation of \(s_{\lambda }\) to the variables \(x_1\), \(\ldots \), \(x_N\). The Schur polynomial \(s_{\lambda }\left( x_1,\ldots ,x_N\right) \) corresponding to the Young diagram \(\lambda \) with \(l(\lambda )\le N\) rows of lengths \(\lambda _1\ge \cdots \ge \lambda _{l(\lambda )}>0\) can be defined by
If \(l(\lambda )>N\), then \(s_\lambda (x_1,\ldots ,x_N) \equiv 0\) (by definition).
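Definition (2.12) of the Schur polynomial as a ratio of determinants, together with the Giambelli formula above, can be probed numerically. The following Python sketch (our own illustration, using the hook Schur functions \(s_{(p|q)}=s_{(p+1,1^q)}\)) checks the Giambelli identity for \(\lambda =(2,2)=(1,0\,|\,1,0)\):

```python
import numpy as np

def schur(lam, x):
    """Schur polynomial via the bialternant formula (2.12):
    det[x_i^{lam_j + N - j}] / det[x_i^{N - j}]."""
    N = len(x)
    lam = list(lam) + [0] * (N - len(lam))        # pad lambda with zeros to N parts
    num = np.linalg.det(
        np.array([[xi ** (lam[j] + N - 1 - j) for j in range(N)] for xi in x])
    )
    return num / np.linalg.det(np.vander(x))      # denominator: Vandermonde determinant

x = np.array([0.3, 1.1, -0.7])

# sanity check: s_(1) is the first elementary symmetric polynomial
assert np.isclose(schur((1,), x), x.sum())

# Giambelli for lambda = (2,2) = (1,0 | 1,0), with hook shapes s_(p|q) = s_(p+1, 1^q):
# s_(2,2) = s_(2,1) s_(1) - s_(2) s_(1,1)
lhs = schur((2, 2), x)
rhs = schur((2, 1), x) * schur((1,), x) - schur((2,), x) * schur((1, 1), x)
assert np.isclose(lhs, rhs)
```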
The Giambelli compatible point processes form a class of point processes for which various probabilistic quantities of interest can be studied using Schur symmetric functions. This class of point processes was introduced by Borodin, Olshanski, and Strahov [12] to prove determinantal identities for averages of analogues of characteristic polynomials for ensembles originating from Random Matrix Theory, the theory of random partitions, and the representation theory of the infinite symmetric group. In the context of random point processes formed by N-point random configurations on a subset of \({\mathbb {R}}\), Giambelli compatible point processes can be defined as follows.
Definition 2.1
Assume that a point process is formed by an N-point configuration \(\left( x_1,\ldots ,x_N\right) \) on \(I\subseteq {\mathbb {R}}\). If the Giambelli formula
(valid for the Schur polynomial \(s_\lambda (x_1,\ldots ,x_N)\) parameterised by an arbitrary Young diagram \(\lambda \left( p_1,\ldots ,p_d|q_1,\ldots ,q_d\right) \)) can be extended to averages, i.e.
then the random point process is called a Giambelli compatible point process.
In the present paper, we show that the polynomial ensembles introduced in Sect. 2.1 can be understood as Giambelli compatible point processes. Namely, the following theorem holds.
Theorem 2.2
Any polynomial ensemble in the sense of Sect. 2.1 is a Giambelli compatible point process.
As it is explained in Borodin, Olshanski, and Strahov [12], the Giambelli compatibility of point processes implies determinantal formulas for averages of ratios of characteristic polynomials. Namely, we obtain
Theorem 2.3
Assume that \(x_1,\ldots ,x_N\) form a polynomial ensemble. Let \(u_1,\ldots , u_M \in {\mathbb {C}} \backslash {\mathbb {R}}\) and \(z_1,\ldots ,z_M \in {\mathbb {C}}\) for any \(M \in {\mathbb {N}}\) be pairwise distinct variables. Then
where \(D_N(z)=\prod _{n=1}^N(z-x_n)\) denotes the characteristic polynomial associated with the random variables \(x_1\), \(\ldots \), \(x_N\).
2.3 Averages of Arbitrary Ratios of Characteristic Polynomials in Invertible Ensembles
In this section, we present our results for arbitrary ratios of characteristic polynomials,
allowing the number of characteristic polynomials in the numerator, M, and in the denominator, \(L\le N\), to differ. As before, we will assume the parameters \(y_1,\ldots ,y_L \in {\mathbb {C}} \backslash {\mathbb {R}}\) and \(z_1,\ldots ,z_M \in {\mathbb {C}}\) to be pairwise distinct. We will not consider the most general polynomial ensembles (2.1) here, but restrict ourselves to functions \(\varphi _j(x)\) that satisfy certain conditions to be specified below.
Definition 2.4
Consider a polynomial ensemble defined by the probability density function (2.1). Assume that \(\varphi _l(x)=\varphi (a_l,x)\) for \(l=1,\ldots ,N\) (where \(a_1,\ldots ,a_N\) are real parameters) is analytic in both arguments, and that there exists a family \(\left\{ \pi _k\right\} _{k=0}^{\infty }\) of monic polynomials such that each polynomial \(\pi _k\) of degree k can be represented as
In addition, assume that Eq. (2.17) is invertible, i.e. there exists a function \(F: I'\times {\mathbb {C}}\rightarrow {\mathbb {C}}\) such that
where \(I^\prime \) is a certain contour in the complex plane. Then, we will refer to such a polynomial ensemble as an invertible ensemble.
Remark 2.5
Condition (2.17) together with (2.2) immediately implies that for invertible polynomial ensembles the normalising partition function simplifies as follows:
Here, we use that in (A.1) the determinant of monomials equals that of arbitrary monic polynomials.
We will now present two examples of polynomial ensembles of invertible type according to Definition 2.4 and comment on the general class of such ensembles.
Example 2.6
Our first example is given by the GUE with external field (2.8). Here, the eigenvalues take real values, \(I={\mathbb {R}}\), and the functions \(\varphi _l(x)\) can be chosen as
which are analytic. From [25, 8.951], we know the following representation of the standard Hermite polynomials \(H_n(t)\) of degree n,
that can be made monic as follows, \(2^{-n} H_n(x)=x^n+O(x^{n-2})\). This leads to the integral
from which we can read off
with \(\pi _{k}(a)=(2i)^{-k}H_{k}(ia)\), for \(k=0,1,\ldots \), which is again monic. Thus condition (2.17) is satisfied.
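As a numerical check of this Hermite example (a Python sketch of our own; we assume here the normalisation \(\varphi (a,x)=e^{-(x-a)^2}/\sqrt{\pi }\) and that (2.17) expresses the moments \(\int _{\mathbb {R}} x^k\varphi (a,x)\,\mathrm{d}x\) through \(\pi _k(a)\)), one can verify \(\int _{\mathbb {R}} x^k e^{-(x-a)^2}\,\mathrm{d}x/\sqrt{\pi } = (2i)^{-k}H_k(ia)\) directly:

```python
import numpy as np

def hermite_H(k, z):
    """Physicists' Hermite polynomial H_k(z) via H_{n+1} = 2 z H_n - 2 n H_{n-1}."""
    h0, h1 = 1.0 + 0.0j, 2.0 * z
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, 2.0 * z * h1 - 2.0 * n * h0
    return h1

a = 0.7
u, w = np.polynomial.hermite.hermgauss(40)   # quadrature for  int e^{-u^2} f(u) du

moments, piks = [], []
for k in range(6):
    # int x^k e^{-(x-a)^2} dx / sqrt(pi), after the shift x = u + a
    moments.append(np.sum(w * (u + a) ** k) / np.sqrt(np.pi))
    # pi_k(a) = (2i)^{-k} H_k(ia), which is real for real a
    piks.append(((2.0j) ** (-k) * hermite_H(k, 1.0j * a)).real)

assert np.allclose(moments, piks)
```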
For the second condition (2.18), we use the integral [25, 7.374.6]
Renaming \(y=iz\) and \(x=is\) we obtain
with \(I^\prime =i{\mathbb {R}}\) and \(F(s,z)= \frac{i}{\sqrt{\pi }}e^{(s-z)^2}\).
Remark 2.7
Example 2.6 is the simplest case of a much wider class of polynomial ensembles of Pólya type convolved with fixed matrices, as introduced in [37, Theorem II.3]. Such polynomial ensembles generalise the form (2.20) to
such that f is \((N-1)\)-times differentiable on \({\mathbb {R}}\),
analytic on \({\mathbb {C}}\), and the moments of its derivatives exist,
It immediately follows that its generalised moment matrix leads to polynomials, upon shifting the integration variable, and thus (2.17) is satisfied. It is not too difficult to show using Fourier transformation of f that also condition (2.18) of Definition 2.4 is satisfied and thus these ensembles are invertible.
Example 2.8
Our second example is the chiral GUE with external field (2.10), where \(I = {\mathbb {R}}_+\) and the functions \(\varphi _l(x)\) can be chosen as
which are analytic, with the \(a_l\) positive real numbers. The following integral is known, see e.g. [25, 6.631.10] after analytic continuation,
Here, \(L_n^\nu (y)\) is the standard generalised Laguerre polynomial of degree n, which is made monic as follows, \(n!L_n^\nu (-x)=x^n+O(x^{n-1})\). Then, the first condition (2.17) is satisfied,
with \(\pi _{k}(a)=k!L_{k}^\nu (-a)\) for \(k=0,1,\ldots .\)
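For \(\nu =0\), the moment identity behind (2.17) in this example reads \(\int _0^\infty x^k e^{-(x+a)}I_0(2\sqrt{ax})\,\mathrm{d}x = k!\,L_k(-a)\). A Python sketch (our own check, with \(I_0\) and \(L_k\) computed from their series and recurrence) verifies this numerically:

```python
import math
import numpy as np

def bessel_I0(z):
    """Modified Bessel function I_0 via its power series (adequate for moderate z)."""
    z = np.asarray(z, dtype=float)
    term = np.ones_like(z)
    total = np.ones_like(z)
    for m in range(1, 60):
        term = term * (z / 2.0) ** 2 / m ** 2
        total = total + term
    return total

def laguerre_L(k, x):
    """Standard Laguerre polynomial L_k(x) via the three-term recurrence."""
    l0, l1 = 1.0, 1.0 - x
    if k == 0:
        return l0
    for n in range(1, k):
        l0, l1 = l1, ((2 * n + 1 - x) * l1 - n * l0) / (n + 1)
    return l1

a = 0.5
t, w = np.polynomial.legendre.leggauss(200)
x = 30.0 * (t + 1.0)                     # Gauss-Legendre nodes mapped to [0, 60]
w = 30.0 * w                             # the integrand is negligible beyond x = 60

moments, targets = [], []
for k in range(5):
    integrand = x ** k * np.exp(-(x + a)) * bessel_I0(2.0 * np.sqrt(a * x))
    moments.append(np.sum(w * integrand))
    targets.append(math.factorial(k) * laguerre_L(k, -a))

assert np.allclose(moments, targets, rtol=1e-6)
```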
For the second condition (2.18), we consider the following integral, see [25, 7.421.6], which is also called Hankel transform,
Bringing factors to the other side and making the substitution \(t=-s\), so that the same monic polynomials \(n!L_n^\nu (-s)\) as above appear in the integrand, we obtain after using \(I_\nu (x)=i^{-\nu }J_\nu (ix)\)
with \(F(s,z)= (-1)^\nu \left( \frac{s}{z}\right) ^{\nu /2}e^{s+z} I_\nu \left( 2\sqrt{zs} \right) \) and \(I^\prime = {\mathbb {R}}_{-}\).
Now we state the second main result of the present paper which gives a formula for averages of products and ratios of characteristic polynomials in the case of invertible ensembles.
Theorem 2.9
Consider a polynomial ensemble (2.1) formed by \(x_1\), \(\ldots \), \(x_N\), and assume that this ensemble is invertible in the sense of Definition 2.4. Then we have for \(L\le N\)
where \(D_N(z)=\prod _{n=1}^N(z-x_n)\) denotes the characteristic polynomial associated with the random variables \(x_1\), \(\ldots \), \(x_N\), the parameters \(y_1,\ldots ,y_L \in {\mathbb {C}} \backslash {\mathbb {R}}\) and \(z_1,\ldots ,z_M \in {\mathbb {C}}\) are pairwise distinct, and all contours \(C_l\) with \(l=1,\ldots ,N\) encircle the points \(a_1,\ldots ,a_N\) counter-clockwise.
We note that Theorem 2.9 generalises Theorem 5.1 in [24] for the ratio of two characteristic polynomials, derived for the polynomial ensemble with \(\varphi (a,x)=x^{{\mathcal {L}}}e^{-x}I_0(2\sqrt{ax})\), to general ratios in invertible polynomial ensembles. Clearly, it is well suited for the asymptotic analysis when \(N\rightarrow \infty \) as the number of integrations does not depend on N.
2.4 A Formula for the Correlation Kernel for Invertible Ensembles
It is well known that each polynomial ensemble is a determinantal process. For invertible polynomial ensembles (see Definition 2.4), Theorem 2.9 enables us to deduce a double contour integration formula for the correlation kernel.
Proposition 2.10
Consider an invertible polynomial ensemble, i.e. a polynomial ensemble defined by (2.1), where the functions \(\varphi _l(x)=\varphi (a_l,x)\) satisfy the conditions specified in Definition 2.4. The correlation kernel \(K_N(x,y)\) of this ensemble can be written as
where C encircles the points \(a_1\), \(\ldots \), \(a_N\) counter-clockwise, and where \(\varphi (u,y)\) and F(s, x) are defined by Eqs. (2.17) and (2.18) correspondingly.
Proof
We use the following fact valid for any polynomial ensemble formed by \(x_1\), \(\ldots \), \(x_N\) on \(I\subseteq {\mathbb {R}}\), see Ref. [19]. Assume that
where the function \(v\rightarrow \Phi _N(x,v)\) is analytic for all \(v\in I\). Then the correlation kernel of the determinantal process formed by \(x_1,\ldots ,x_N\) is given by
In our case, Theorem 2.9 gives
which leads to the formula for the correlation kernel in the statement of the Proposition. \(\square \)
3 Proof of Theorem 2.2
Let \(x_1\), \(\ldots \), \(x_N\) form a polynomial ensemble on \(I^N\), where \(I\subseteq {\mathbb {R}}\). The probability density function of this ensemble is defined by Eq. (2.1). Denote by \({\widetilde{s}}_{\lambda }\) the expectation of the Schur polynomial \(s_{\lambda }\left( x_1,\ldots ,x_N\right) \) with respect to this ensemble,
Our aim is to show that \({\widetilde{s}}_{\lambda }\) satisfies the Giambelli formula, i.e.
where \(\lambda \) is an arbitrary Young diagram, \(\lambda =\left( p_1,\ldots ,p_d\vert q_1,\ldots ,q_d\right) \) in the Frobenius coordinates. According to Definition 2.1, this will mean that the polynomial ensemble under consideration is a Giambelli compatible point process.
The proof of Eq. (3.2) below is based on the following general fact due to Macdonald, see [40, Example I.3.21].
Proposition 3.1
Let \(\{ h_{r,s} \}\) with integer \(r \in {\mathbb {Z}}\) and non-negative integer \(s\in {\mathbb {N}}\) be a collection of commuting indeterminates such that we have
and set
where k is any number such that \(k\ge l(\lambda )\). Then we have
where \(\lambda \) is an arbitrary Young diagram, \(\lambda =\left( p_1,\ldots ,p_d\vert q_1,\ldots ,q_d\right) \) in the Frobenius coordinates.
Clearly, in order to apply Proposition 3.1 to \({\widetilde{s}}_{\lambda }\) defined by Eq. (3.1) we need to construct a collection of indeterminates \(\{ h_{r,s} \}\) such that
will hold true for an arbitrary Young diagram \(\lambda \), for an arbitrary \(k\ge l(\lambda )\) and such that condition (3.3) will be satisfied.
By Andréief’s integration formula (2.4) and the expression for the normalisation constant \({\mathcal {Z}}_N\) (2.2), we can write
where we used (A.1) and Eq. (2.12). Notice that at this point it matters that we consider polynomial ensembles and not more general bi-orthogonal ensembles. In the latter case, the Vandermonde determinant in the denominator of the Schur function (2.12) would not cancel, the Andréiéf formula would not apply, and we would not know how to compute such expectation values. Set
and denote by \(Q=\left( Q_{i,j}\right) _{i,j=1}^N\) the inverse of \({\tilde{G}}=\left( {\tilde{g}}_{i,j}\right) _{i,j=1}^N\), where \({\tilde{g}}_{i,j}=\int _I\mathrm{d}x\,x^{N-i}\varphi _j(x)\). With this notation we can rewrite Eq. (3.7) as
Since Q is the inverse of \({\tilde{G}}\), we have
or
The following Proposition will imply Theorem 2.2.
Proposition 3.2
Let \(\{ h_{r,s} \}\), with integer \(r \in {\mathbb {Z}}\) and non-negative integer \(s\in {\mathbb {Z}}_{\ge 0}\), be a collection of indeterminates defined by
The collection of indeterminates \(\{ h_{r,s} \}\) satisfies condition (3.3). Moreover, with this collection of indeterminates \(\{ h_{r,s} \}\), formula (3.6) holds true for an arbitrary Young diagram \(\lambda \), and for an arbitrary \(k\ge l(\lambda )\).
Proof
We divide the proof into several steps. First, the collection of indeterminates \(\{ h_{r,s} \}\) defined by (3.12) is shown to satisfy condition (3.3). Next, we prove that Eq. (3.6) holds true for an arbitrary Young diagram \(\lambda \), and for an arbitrary \(k\ge l(\lambda )\).
Step 1. First, we want to show that
for any \(k\ge l(\lambda )\).
Let \(\lambda \) be an arbitrary Young diagram and assume that \(k>l(\lambda )\). Consider the diagonal entries of the \(k\times k\) matrix
for \(i=j \in \{ l(\lambda )+1,\ldots ,k \}\). By definition of the \(h_{r,s}\) these entries are all equal to 1, since \(\lambda _i = 0\) for \(i \in \{ l(\lambda )+1,\ldots ,k \}\) implying \(h_{0,s}=1\) by condition (3.3). For \(r<0\), we have \(h_{r,s}=0\) (see Eq. (3.12)) and the matrix \(\left( h_{\lambda _i-i+j,j-1}\right) _{i,j=1}^k\) has the form
where the first row from the top with zeros has the label \(l(\lambda )+1\), and the first column from the left with ones has the label \(l(\lambda )+1\). The determinant of such a block matrix reduces to the product of the determinants of the blocks, which gives relation (3.13).
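The block-matrix determinant argument used in Step 1 can be illustrated with a toy Python example (our own sketch): for a block-triangular matrix whose complementary diagonal block is unitriangular, the determinant reduces to that of the remaining block.

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 3, 4
A = rng.standard_normal((k, k))                  # upper-left block (det of interest)
B = rng.standard_normal((k, m))                  # arbitrary off-diagonal block
U = np.tril(rng.standard_normal((m, m)), -1) + np.eye(m)   # unitriangular, det = 1

M = np.block([[A, B], [np.zeros((m, k)), U]])
assert np.isclose(np.linalg.det(M), np.linalg.det(A))
```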
Step 2. Assume now that \(l(\lambda ) >N\). Then it trivially holds that
by the very definition of the Schur polynomials. Here, we would like to show that it equally holds that
if \(l(\lambda ) > N\).
We have \(h_{r,s} = \delta _{r,0}\) for \(s\ge N\) and \(r\ge 0\). This implies that the matrix \(\left( h_{\lambda _i-i+j,j-1}\right) _{i,j=1}^{l(\lambda )}\), which we can write out as
has the form
Thus, we can again apply the formula for determinants of block matrices to obtain
which is true for any \(l(\lambda ) >N\) and therefore condition (3.6) is satisfied in this case.
Step 3. Now we wish to prove that
is valid for any Young diagram with \(l(\lambda )\le N\), and for \(1\le i,j\le N\). Assume that \(\lambda _i-i+j\ge 0\). Then (3.14) turns into the first equation in (3.12) with \(r=\lambda _i-i+j\), \(s=j-1\). Assume that \(\lambda _i-i+j<0\), then \(i-\lambda _i>j\). Clearly, \(i-\lambda _i\in \{1,\ldots ,N\}\) in this case, and we have
where we have used Eq. (3.11). Also, if \(\lambda _i-i+j<0\), and \(1\le i,j\le N\), then \(h_{\lambda _i-i+j,j-1}=0\) as it follows from Eq. (3.12). We conclude that (3.14) holds true for \(\lambda _i-i+j<0\) as well.
Finally, the results obtained in Steps 1–3 together with formula (3.9) give the desired formula (3.6). \(\square \)
4 Proof of Theorem 2.9
Denoting by \(S_K\) the symmetric group of permutations of K elements, we will utilise the following lemma, which was proven in [22].
Lemma 4.1
Let L be an integer with \(1\le L\le N\), and let \(x_1,\ldots ,x_N\) and \(y_1,\ldots ,y_L\) denote two sets of parameters that are pairwise distinct. Then the following identity holds
on the coset of the permutation group.
As shown in [22], this follows from the Cauchy–Littlewood formula and the determinantal formula for the Schur polynomials (2.12). We can use this identity to reduce the number of variables in the inverse characteristic polynomials from N to L. Applied to the averages of products and ratios of characteristic polynomials, we obtain
where we used the fact that each term in the sum over permutations gives the same contribution to the expectation. Hence, we can undo the permutations under the sum by a change of variables and replace the sum over \(S_N/(S_{N-L} \times S_L)\) by the cardinality \(N!/((N-L)!L!)\) of the coset space. Next, we expand the determinant \(\det \left[ \varphi _l(x_k) \right] _{k,l=1}^{N} \) and then separate the integration over the first L variables \(x_{l=1,\ldots ,L}\) from that over the remaining \(N-L\) variables \(x_{n=L+1,\ldots ,N}\), by also splitting the characteristic polynomials accordingly. This gives
Because we are aiming at an expression that will be amenable to taking the large-N limit, we now focus on the integrals over \(N-L\) variables in the second line, which we denote by J. Here, we make use of one of the properties of the Vandermonde determinant, namely the absorption of the M characteristic polynomials in J into a larger Vandermonde determinant, see (A.2), to write
We use the representation (A.1), pull the integrations \(\int _I \mathrm{d}x_k \ \varphi _{\sigma (k)}(x_k)\) into the corresponding columns, and use definition (2.3) of the generalised moment matrix to obtain
Property (2.17) of invertible polynomial ensembles enables us to rewrite J as
Property (2.18) allows us to replace again the determinant of monic polynomials by a Vandermonde determinant of size \(N-L+M\) to obtain
Let us come back to the expectation value of characteristic polynomials in the form (4.3) and insert what we have derived for J above. This gives
The integrals are now in a form suitable for the following Lemma, which will allow us to simplify (and eventually get rid of) the sum over permutations.
Lemma 4.2
Let \(S_N\) denote the permutation group of \(\left\{ 1,\ldots ,N\right\} \), and let \(S_L\) be the subgroup of \(S_N\) realised as the permutation group of the first L elements \(\{1,\ldots ,L\}\). Also, let \(S_{N-L}\) be the subgroup of \(S_N\) realised as the permutation group of the remaining \(N-L\) elements \(\{L+1,\ldots ,N\}\). Assume that F is a complex valued function on \(S_N\) which satisfies the condition \(F(\sigma h)=F(\sigma )\) for each \(\sigma \in S_N\), and each \(h\in S_L\times S_{N-L}\). Then we have
where \(\left( i_1,\ldots ,i_N\right) \) is a one-line notation for the permutation \(\left( \begin{array}{cccc} 1 &{} 2 &{} \ldots &{} N \\ i_1 &{} i_2 &{} \ldots &{} i_N \end{array} \right) \), and notation \({\check{l}}_p\) means that \(l_p\) is removed from the list.
Proof
Recall that if G is a finite group, and H is its subgroup, then there are transversal elements \(t_1,\ldots ,t_k\in G\) for the left cosets of H such that \(G=t_1H\uplus \cdots \uplus t_kH\), where \(\uplus \) denotes disjoint union. It follows that if F is a function on G with the property \(F(gh)=F(g)\) for any \(g\in G\), and any \(h\in H\), then
where |H| denotes the number of elements in H. In our situation \(G=S_N\), \(H=S_L\times S_{N-L}\), and each transversal element can be represented as a permutation
written in one-line notation, where \(1\le l_1<\cdots <l_L\le N\). Moreover, each collection of numbers \(l_1\), \(\ldots \), \(l_L\) satisfying the condition \(1\le l_1<\cdots <l_L\le N\) gives a transversal element for the left cosets of \(H=S_L\times S_{N-L}\) in \(G=S_N\). We conclude that Eq. (4.6) is reduced to Eq. (4.5). \(\square \)
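The coset decomposition (4.6) underlying this proof can be checked by brute force in Python for small N (an illustration of our own, with an arbitrary right-\(H\)-invariant test function F):

```python
from itertools import permutations, combinations
from math import factorial

N, L = 5, 2

def F(sigma):
    # Any function that depends only on the *set* of the first L entries is
    # right-invariant under H = S_L x S_{N-L}, as required in the lemma.
    return sum(10 ** i for i in sorted(sigma[:L]))

total = sum(F(s) for s in permutations(range(N)))

# transversals: the increasing tuple l_1 < ... < l_L in front, the rest behind
coset_sum = sum(
    F(tuple(subset) + tuple(i for i in range(N) if i not in subset))
    for subset in combinations(range(N), L)
)

assert total == factorial(L) * factorial(N - L) * coset_sum
```

This confirms that summing F over all of \(S_N\) equals \(|H|=L!(N-L)!\) times the sum over the transversal permutations indexed by \(1\le l_1<\cdots <l_L\le N\).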
Assume that \(\Phi (x_1,\ldots ,x_L)\) is antisymmetric under permutations \(\sigma \) of its L variables, i.e.
and that \(L\le N\). Let F be the function on \(S_N\) defined by
where f is a function of two variables. Clearly, F satisfies the condition \(F(\sigma h)=F(\sigma )\) for each \(\sigma \in S_N\), and each \(h\in S_L\times S_{N-L}\). Application of Lemma 4.2 to this function gives
where the reduced Vandermonde determinant is defined in (A.6). Taking into account that
we obtain the formula
valid for any antisymmetric function \(\Phi (x_1,\ldots ,x_L)\), and for any function f(x, y) such that the integrals in the equation above exist.
Formula (4.8) enables us to rewrite Eq. (4.4) as
We note that, due to (A.7), it holds that
In addition, we apply (2.19) to eliminate \({\mathcal {Z}}_N\), cancel signs, and see that the strict ordering of the indices \(l_1<l_2<\cdots <l_L\) can be relaxed,
Finally, we see that the sum in formula (4.9) can be written as contour integrals, because of the formula
where the contour C encircles the points \(a_1,\ldots ,a_N\) counter-clockwise. This leads to the formula in the statement of Theorem 2.9. \(\square \)
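The identity invoked in this last step is the standard residue formula: a sum over simple poles equals a contour integral over a curve encircling them. A numerical sketch, with an illustrative entire test function f rather than the specific integrand of Theorem 2.9:

```python
import numpy as np

# Distinct points a_1,...,a_N; f is entire, so the only poles of
# f(z)/prod_k (z - a_k) inside the contour are the simple poles at z = a_l.
a = np.array([0.3, -1.2, 2.5, 0.7])
f = lambda z: np.exp(z) * z**2

# Sum over residues: sum_l f(a_l) / prod_{k != l} (a_l - a_k).
lhs = sum(f(al) / np.prod([al - ak for ak in a if ak != al]) for al in a)

# Contour integral (2*pi*i)^{-1} * oint_C f(z) / prod_k (z - a_k) dz over a
# circle of radius 5 that encircles all a_k counter-clockwise, evaluated with
# the (spectrally accurate) periodic trapezoid rule.
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
z = 5.0 * np.exp(1j * theta)
vals = f(z) / np.prod([z - ak for ak in a], axis=0) * (5.0j * np.exp(1j * theta))
rhs = vals.mean() * 2.0 * np.pi / (2.0j * np.pi)

print(abs(lhs - rhs) < 1e-8)  # True
```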
5 Special Cases
In Proposition 2.10, we have used Eq. (2.33) in the case \(M=L=1\). Another case of interest is that corresponding to products of characteristic polynomials. In this case \(L=0\), and we obtain that only the first set of integrals remains in (2.33), i.e.
where
after pulling the M integrations over the \(s_j\)’s into the Vandermonde determinant of size M. This result could also have been computed directly using Lemma A.2.
As a final special case of interest, we look at the ratio of \(M+1\) characteristic polynomials over a single one at \(L=1\). This object is needed in the application to finite temperature QCD, cf. [45]. Theorem 2.9 gives
Following [19], we may use the Lagrange extrapolation formula
to rewrite
This leads to the following rewriting of (5.3)
where we have defined
In the second step in (5.6), we have first pulled all the s-integrals except the one over \(s_m\) into the Vandermonde determinant \(\Delta _M^{(m)}(s_1,\ldots ,s_{M+1})\), leading to a determinant of size M with matrix elements \({\mathcal {B}}_i(z_j)\) (5.2). We then recognise that the sum is a Laplace expansion of a determinant of size \(M+1\) with respect to the first row, containing the matrix elements \({\mathcal {A}}(z_j,u)\) (5.7). This reveals the determinantal form of the corresponding kernel.
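The Lagrange extrapolation formula used above is the classical interpolation identity: a polynomial of degree at most M is reproduced exactly from its values at M+1 distinct nodes. A numerical illustration, with arbitrarily chosen nodes and polynomial:

```python
import numpy as np

# Lagrange interpolation: for a polynomial p with deg(p) <= M and M+1
# pairwise distinct nodes s_1,...,s_{M+1},
#   p(u) = sum_m p(s_m) * prod_{k != m} (u - s_k) / (s_m - s_k).
M = 4
s = np.array([0.0, 0.7, -1.3, 2.1, 3.4])    # M+1 distinct nodes
p = np.poly1d([2.0, -1.0, 0.5, 3.0, -2.0])  # a degree-M polynomial

def lagrange(u):
    return sum(p(sm) * np.prod([(u - sk) / (sm - sk)
                                for k, sk in enumerate(s) if k != m])
               for m, sm in enumerate(s))

u = 1.234
print(abs(lagrange(u) - p(u)) < 1e-10)  # True
```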
Fyodorov, Grela, and Strahov [24] considered the probability density defined byFootnote 4
on \({\mathbb {R}}_+^N\). Note that this polynomial ensemble is invertible only for \({\mathcal {L}}=0\), as follows from Eqs. (2.30) and (2.32). However, computations of different averages with respect to \(P_N^{{\mathcal {L}}}\) can be reduced to those with respect to \(P_N^{{\mathcal {L}}=0}\), i.e. with respect to an invertible ensemble. Indeed, we have
for any function \(f\left( x_1,\ldots ,x_N\right) \) such that the expectations in the formula above exist. In particular, we can reproduce the results of [24] for the expectation value of a single characteristic polynomial, its inverse or a single ratio. Without going much into detail, we need two ingredients for this check. First, in order to perform the limit of vanishing arguments in Eq. (5.9), it is useful to antisymmetrise the product of the first \({\mathcal {L}}\) functions \(F(t_j,z_j)\) using the Vandermonde determinant \(\Delta _{{\mathcal {L}}+1}(t_1,\ldots ,t_{{\mathcal {L}}+1})\) in (2.33). We are then led to consider
after first taking the limit of degenerate arguments, which is then sent to zero. To this end, we first separate the remaining non-vanishing argument \(z_{{\mathcal {L}}+1}\) from the Vandermonde determinant via \(\Delta _{{\mathcal {L}}}(z_1,\ldots ,z_{{\mathcal {L}}}) \prod _{l=1}^{{\mathcal {L}}}(z_l-z_{{\mathcal {L}}+1}) =\Delta _{{\mathcal {L}}+1}(z_1,\ldots ,z_{{\mathcal {L}}+1})\).
Second, we need an equivalent formulation of Propositions 3.1 and 3.5 employed in [24], which are due to [19, 21], respectively.Footnote 5
Proposition 5.1
where C is the inverse of the \(N\times N\) moment matrix G, and \(c_{i,j}\) are the matrix elements of \(C^T\).
Proof
Equations (5.11) were stated in [24] following [19, 21], without the factors of \((u/y)^{N-1}\). The equivalence of the two statements can be seen as follows. Expanding the geometric series inside the determinant without these factors, we have
If we perform the integrals in the last row, we obtain infinite series over generalised moment matrices \(g_{k,l}\), the first \(N-1\) of which can be removed by subtraction of the upper \(N-1\) rows. Rewriting the last row as integrals and resumming the series, we arrive at the first line of (5.11).
The second line in (5.11) is obtained as follows. Using that \(\det [c_{i,j}]_{i,j=1}^N=1/{\mathcal {Z}}_N\) and then multiplying the matrix C with the matrix inside the determinant from the right, this leads to an identity matrix, except for the last row, as C is the inverse of the finite, \(N\times N\) dimensional matrix. Laplace expanding with respect to the last column leads to the desired result. \(\square \)
Employing Proposition 5.1 in [24], it is not difficult to see that from our Theorem 2.9 together with (5.10) we obtain an equivalent form of [24, Theorem 4.1] for a single characteristic polynomial, [24, Theorem 3.4] for its inverse and [24, Theorem 5.1] for a single ratio.
Notes
The unconvoluted polynomial ensemble has \(\varphi _j(x)=\partial ^j f(x)/\partial x^j\).
Because we take the residue of the right hand side at \(z=y\), any ratio f(v)/f(z) can be multiplied under the integral for regular functions f, without changing the value of the kernel.
Notice that due to (2.3) we have \(\det [G]=(-1)^{N(N-1)/2}\det [{\tilde{G}}]\).
In our conventions, empty products are equal to unity.
References
Akemann, G., Vernizzi, G.: Characteristic polynomials of complex random matrix models. Nucl. Phys. B 660, 532–556 (2003). arXiv:hep-th/0212051
Akemann, G., Baik, J., Di Francesco, P. (eds.): The Oxford Handbook of Random Matrix Theory. Oxford University Press, Oxford (2011)
Akemann, G., Kieburg, M., Wei, L.: Singular value correlation functions for products of Wishart random matrices. J. Phys. A Math. Theor. 46, 275205 (2013). arXiv:1303.5694
Akemann, G., Ipsen, J.R.: Recent exact and asymptotic results for products of independent random matrices. Acta Physica Polonica B 46, 1747–1784 (2015). arXiv:1502.01667
Akemann, G., Strahov, E.: Dropping the independence: singular values for products of two coupled random matrices. Commun. Math. Phys. 345, 101–140 (2016). arXiv:1504.02047
Akemann, G., Burda, Z., Kieburg, M.: From integrable to chaotic systems: universal local statistics of Lyapunov exponents. Europhys. Lett. 126, 40001 (2019). arXiv:1809.05905
Baik, J., Deift, P., Strahov, E.: Products and ratios of characteristic polynomials of random Hermitian matrices. J. Math. Phys. 44, 3657–3670 (2003). arXiv:math-ph/0304016
Bleher, P.M., Kuijlaars, A.B.J.: Random matrices with external source and multiple orthogonal polynomials. Int. Math. Res. Not. 2004, 109–129 (2004). arXiv:math-ph/0307055
Bleher, P.M., Kuijlaars, A.B.J.: Large n limit of Gaussian random matrices with external source. Part III: double scaling limit. Commun. Math. Phys. 270, 481–517 (2007). arXiv:math-ph/0602064
Borodin, A.: Biorthogonal ensembles. Nucl. Phys. B 536, 704–732 (1998). arXiv:math/9804027
Borodin, A., Strahov, E.: Averages of characteristic polynomials in random matrix theory. Commun. Pure Appl. Math. 59, 161–253 (2006). arXiv:math-ph/0407065
Borodin, A., Olshanski, G., Strahov, E.: Giambelli compatible point processes. Adv. Appl. Math. 37, 209–248 (2006). arXiv:math-ph/0505021
Breuer, J., Strahov, E.: A universality theorem for ratios of random characteristic polynomials. J. Approx. Theo. 164, 803–814 (2012). arXiv:1201.0473
Brézin, E., Hikami, S.: Correlations of nearby levels induced by a random potential. Nucl. Phys. B 479, 697–706 (1996). arXiv:cond-mat/9605046
Brézin, E., Hikami, S.: Level spacing of random matrices in an external source. Phys. Rev. E 58, 7176–7185 (1998). arXiv:cond-mat/9804024
Brézin, E., Hikami, S.: Intersection numbers of Riemann surfaces from Gaussian matrix models. JHEP 0710, 096 (2007). arXiv:0709.3378
Brézin, E., Hikami, S.: Intersection theory from duality and replica. Commun. Math. Phys. 283, 507–521 (2008). arXiv:0708.2210
Claeys, T., Kuijlaars, A.B., Wang, D.: Correlation kernels for sums and products of random matrices. Random Matrices Theory Appl. 04, 1550017 (2015). arXiv:1505.00610
Desrosiers, P., Forrester, P.J.: A note on biorthogonal ensembles. J. Approx. Theo. 152, 167–187 (2008). arXiv:math-ph/0608052
Desrosiers, P., Forrester, P.J.: Asymptotic correlations for Gaussian and Wishart matrices with external source. Int. Math. Res. Not. 2006, 27395 (2006). arXiv:math-ph/0604012
Forrester, P.J., Liu, D.-Z.: Singular values for products of complex Ginibre matrices with a source: hard edge limit and phase transition. Commun. Math. Phys. 344, 333–368 (2016). arXiv:1503.07955
Fyodorov, Y.V., Strahov, E.: An exact formula for general spectral correlation function of random Hermitian matrices. J. Phys. A Math. Gen. 36, 3203–3214 (2003). arXiv:math-ph/0204051
Fyodorov, Y.V.: On statistics of bi-orthogonal eigenvectors in real and complex Ginibre ensembles: combining partial Schur decomposition with supersymmetry. Commun. Math. Phys. 363, 579–603 (2018). arXiv:1710.04699
Fyodorov, Y.V., Grela, J., Strahov, E.: On characteristic polynomials for a generalized chiral random matrix ensemble with a source. J. Phys. A Math. Theor. 51, 134003 (2018). arXiv:1711.07061
Gradshteyn, I.S., Ryzhik, I.M.: Table of Integrals, Series, and Products, 8th edn. Academic Press, Cambridge (2015)
Grønquist, J., Guhr, T., Kohler, H.: The k-point random matrix kernels obtained from one-point supermatrix models. J. Phys. A Math. Gen. 37, 2331–2344 (2004). arXiv:math-ph/0402018
Guhr, T.: Transitions toward quantum chaos: with supersymmetry from Poisson to Gauss. Ann. Phys. 250, 145–192 (1996). arXiv:cond-mat/9510052
Guhr, T., Wettig, T.: Universal spectral correlations of the Dirac operator at finite temperatures. Nucl. Phys. B 506, 589–611 (1997). arXiv:hep-th/9704055
Halasz, M.A., Jackson, A.D., Shrock, R.E., Stephanov, M.A., Verbaarschot, J.J.M.: Phase diagram of QCD. Phys. Rev. D 58, 096007 (1998). arXiv:hep-ph/9804290
Harish-Chandra: Differential operators on a semisimple Lie algebra. Am. J. Math 79, 87–120 (1957)
Itzykson, C., Zuber, J.B.: The planar approximation II. J. Math. Phys. 21, 411–421 (1980)
Keating, J.P., Snaith, N.C.: Random matrix theory and \(\zeta (1/2+it)\). Commun. Math. Phys. 214, 57–89 (2000)
Kieburg, M., Guhr, T.: Derivation of determinantal structures for random matrix ensembles in a new way. J. Phys. A Math. Gen. 43, 075201 (2010). arXiv:0912.0654
Kieburg, M., Kösters, H.: Exact relation between singular value and eigenvalue statistics. Random Matrices Theo. Appl. 5, 1650015 (2016). arXiv:1601.02586
Kieburg, M., Kösters, H.: Products of random matrices from polynomial ensembles. Ann. Inst. H. Poincaré Probab. Stat. 55, 98–126 (2019). arXiv:1601.03724
Kieburg, M., Kuijlaars, A.B.J., Stivigny, D.: Singular value statistics of matrix products with truncated unitary matrices. Int. Math. Res. Not. 2016, 3392–3424 (2016). arXiv:1501.03910
Kieburg, M.: Additive matrix convolutions of Pólya ensembles and polynomial ensembles. Random Matrices Theo. Appl. 09, 2150002 (2020). arXiv:1710.09481
Kuijlaars, A.B.J.: Transformations of polynomial ensembles. In: Hardin, D.P., Lubinsky, D.S., Simanek, B. (eds.) Modern Trends in Constructive Function Theory. Contemporary Mathematics, vol. 661, pp. 253–268 (2016). arXiv:1501.05506
Kuijlaars, A., Stivigny, D.: Singular values of products of random matrices and polynomial ensembles. Random Matrices Theory Appl. 3, 1450011 (2014). arXiv:1404.5802
Macdonald, I.G.: Symmetric Functions and Hall Polynomials, 2nd edn. Oxford University Press, Oxford (1995)
Mehta, M.L.: Random Matrices, Volume 142 of Pure and Applied Mathematics (Amsterdam), 3rd edn. Elsevier/Academic Press, Cambridge (2004)
Muttalib, K.A.: Random matrix models with additional interactions. J. Phys. A Math. Gen. 28(5), L159–L164 (1995)
Sagan, B.E.: The Symmetric Group: Representations, Combinatorial Algorithms, and Symmetric Functions. Graduate Texts in Mathematics, 2nd edn. Springer, Berlin (2000)
Schlittgen, B., Wettig, T.: Generalizations of some integrals over the unitary group. J. Phys. A Math. Gen. 36, 3195–3202 (2003). arXiv:math-ph/0209030
Seif, B., Wettig, T., Guhr, T.: Spectral correlations of the massive QCD Dirac operator at finite temperature. Nucl. Phys. B 548, 475–490 (1999). arXiv:hep-th/9811044
Shuryak, E.V., Verbaarschot, J.J.M.: Random matrix theory and spectral sum rules for the Dirac operator in QCD. Nucl. Phys. A 560, 306–320 (1993). arXiv:hep-th/9212088
Strahov, E., Fyodorov, Y.V.: Universal results for correlations of characteristic polynomials: Riemann–Hilbert approach. Commun. Math. Phys. 241, 343–382 (2003). arXiv:math-ph/0210010
Zinn-Justin, P.: Universality of correlation functions of Hermitian random matrices in an external field. Commun. Math. Phys. 194, 631–650 (1998). arXiv:cond-mat/9705044
Acknowledgements
G.A. and T.W. were supported by the German research council DFG through the Grant CRC 1283 “Taming uncertainty and profiting from randomness and low regularity in analysis, stochastics and their applications”. E.S. was partially supported by the BSF Grant 2018248 “Products of random matrices via the theory of symmetric functions”. We thank Maurice Duits, Peter Forrester and Mario Kieburg for useful discussions, and the Department of Mathematics at the Royal Institute of Technology (KTH) Stockholm for hospitality (G.A. and T.W.). The School of Mathematics and Statistics of the University of Melbourne is equally thanked for support and hospitality, where part of this work was completed (G.A.). Last but not least, we thank one of the referees for many useful remarks.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Communicated by Claude-Alain Pillet.
Appendix A. Properties of Vandermonde Determinants
In this appendix, we first define the Vandermonde determinant in various equivalent ways. We then collect several of its properties when the number of variables is extended by multiplying with characteristic polynomials, and when it is reduced by dividing out the corresponding factors.
Definition A.1
The Vandermonde determinant of N pairwise distinct variables \(x_1,\ldots ,x_N\) is denoted by \(\Delta _N (x_1,\ldots ,x_N)\) and can be represented in the following equivalent ways:
For \(N=1\) it follows that \(\Delta _1(x_1)=1\), and we also formally define \(\Delta _0=1\) in the absence of parameters.
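The equivalence of the determinant and product representations is easy to confirm numerically; the convention assumed below is \(\Delta _N(x_1,\ldots ,x_N)=\det [x_j^{i-1}]_{i,j=1}^N=\prod _{i<j}(x_j-x_i)\):

```python
import numpy as np

# Compare the determinant form det[x_j^{i-1}] with the product over pairs
# prod_{i<j} (x_j - x_i) for a set of distinct sample points.
x = np.array([0.5, -1.0, 2.0, 3.5])
N = len(x)

det_form = np.linalg.det(np.array([[xj**i for xj in x] for i in range(N)]))
prod_form = np.prod([x[j] - x[i] for i in range(N) for j in range(i + 1, N)])

print(abs(det_form - prod_form) < 1e-9)  # True
```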
The Vandermonde determinant can be extended from N to \(N+M\) variables in the following way, when multiplied by M characteristic polynomials.
Lemma A.2
The following extension formula holds for a Vandermonde determinant of size N. Let the M parameters \(\{ z_1,\ldots ,z_M \}\) be pairwise distinct. Then it holds that
Proof
We proceed by induction over M. Defining \(z_1 \equiv x_{N+1}\), the \(M=1\) case can be seen from inserting the definition (A.1) in product form
We now assume that Eq. (A.2) is valid for any M. The induction step \(M \rightarrow M+1\) is straightforward:
Using the induction assumption, multiplying by a factor of unity \(\frac{\prod _{l=1}^M(z_l-z_{M+1})}{\prod _{l=1}^M(z_l-z_{M+1})}\) and using the definition (A.1) in product form, the formula (A.2) for \(M+1\) follows. \(\square \)
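With the product convention \(\Delta _N(x)=\prod _{i<j}(x_j-x_i)\), the extension formula of Lemma A.2 can be checked numerically as follows; the normalisation used here should be compared against the precise statement of (A.2):

```python
import numpy as np

def vdm(v):
    """Vandermonde Delta(v) = prod_{i<j} (v[j] - v[i]); equals 1 if len(v) <= 1."""
    return np.prod([v[j] - v[i] for i in range(len(v)) for j in range(i + 1, len(v))])

x = np.array([0.2, -0.9, 1.4])   # N = 3 variables
z = np.array([2.5, -1.7])        # M = 2 pairwise distinct parameters

# Extending Delta_N by M characteristic polynomials prod_i (z_m - x_i):
#   Delta_{N+M}(x_1,...,x_N, z_1,...,z_M)
#     = Delta_N(x) * prod_m prod_i (z_m - x_i) * Delta_M(z)
lhs = vdm(np.concatenate([x, z]))
rhs = vdm(x) * np.prod([zm - xi for zm in z for xi in x]) * vdm(z)

print(abs(lhs - rhs) < 1e-9)  # True
```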
For extended Vandermonde determinants, it holds that
by permuting rows in the determinant form in (A.1).
Let us introduce a notation for the Vandermonde determinant with a reduced number of indices. For \(L\le N\) ordered indices \(l_1,\ldots ,l_L\), we define the reduced Vandermonde determinant of size \(N-L\) by
where the parameters \(x_j\) with \(j=l_1,\ldots ,l_L\) are absent. From Definition A.1, we obtain that for \(L=N\) both sides are equal to unity. The reduced Vandermonde determinant is obtained by the following formula.
Lemma A.3
For \(N\ge L\), the Vandermonde determinant of size \(N-L\) obtained by removing the variables \(x_{l_j}\), with \(1\le l_1<\cdots <l_L\le N\), from the variables \(x_{1},\ldots ,x_N\) can be obtained via
Proof
The proof is again done by induction. For \(L=1\), we have for the right-hand side of (A.7)Footnote 6
For the induction step, we assume that (A.7) holds for any \(N>L\ge 1\). From the definition of the reduced Vandermonde (A.6) as a product, it is not difficult to see that
Using the induction assumption for the left-hand side and solving this equation for the reduced Vandermonde determinant of size \(N-L-1\) on the right hand side, we obtain
which finishes the proof. \(\square \)
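For \(L=1\), the content of Lemma A.3 amounts to dividing out of \(\Delta _N\) exactly the factors involving the removed variable. A numerical check of this reduction (the sign convention may differ from the precise statement of (A.7) and should be compared against the text):

```python
import numpy as np

def vdm(v):
    """Vandermonde Delta(v) = prod_{i<j} (v[j] - v[i]); equals 1 if len(v) <= 1."""
    return np.prod([v[j] - v[i] for i in range(len(v)) for j in range(i + 1, len(v))])

x = np.array([0.4, -1.1, 2.3, 3.0, -0.6])   # N = 5 variables
l = 2                                        # 0-based index of the removed x_l

# Removing one variable x_l divides Delta_N by the factors involving x_l:
#   Delta_{N-1}(x_1,...,x_{l-1},x_{l+1},...,x_N)
#     = Delta_N(x) / ( prod_{i<l} (x_l - x_i) * prod_{j>l} (x_j - x_l) )
reduced = vdm(np.delete(x, l))
factors = (np.prod([x[l] - x[i] for i in range(l)])
           * np.prod([x[j] - x[l] for j in range(l + 1, len(x))]))

print(abs(reduced - vdm(x) / factors) < 1e-9)  # True
```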
Akemann, G., Strahov, E. & Würfel, T.R. Averages of Products and Ratios of Characteristic Polynomials in Polynomial Ensembles. Ann. Henri Poincaré 21, 3973–4002 (2020). https://doi.org/10.1007/s00023-020-00963-9