1 Introduction and Statement of the Main Results

1.1 Biorthogonal Laguerre Ensembles

The biorthogonal Laguerre ensembles refer to n particles \(x_1< \cdots < x_n\) distributed over the positive real axis, following a probability density function of the form

$$\begin{aligned} \frac{1}{Z_n}\Delta (x_1,\ldots ,x_n)\Delta (x_1^\theta ,\ldots ,x_n^\theta )\prod _{j=1}^nx_j^\alpha e^{-x_j}, \quad \alpha >-1, \quad \theta >0, \end{aligned}$$
(1.1)

where

$$\begin{aligned} Z_n=Z_n(\alpha ,\theta )=\int _{[0,\infty )^n}\Delta (x_1,\ldots ,x_n)\Delta (x_1^\theta ,\ldots ,x_n^\theta )\prod _{j=1}^nx_j^\alpha e^{-x_j}\,\mathrm {d}x_j \end{aligned}$$

is the normalization constant, and

$$\begin{aligned} \Delta (\lambda _1,\ldots ,\lambda _n)=\prod _{1\le i <j \le n}(\lambda _j-\lambda _i) \end{aligned}$$

is the standard Vandermonde determinant.

Densities of the form (1.1) were first introduced by Muttalib [38], who pointed out that, due to the appearance of the two-body interaction term \(\Delta (x_1,\ldots ,x_n)\Delta (x_1^\theta ,\ldots ,x_n^\theta )\), these ensembles provide a more effective description of disordered conductors in the metallic regime than classical random matrix theory. A more concrete physical example that leads to (1.1) (with \(\theta =2\)) can be found in [36], where the authors proposed a random matrix model for disordered bosons. These ensembles were further studied by Borodin [10] under a more general framework, namely, that of biorthogonal ensembles. It is also worthwhile to mention the work of Cheliotis [12], where the author constructed certain triangular random matrices in terms of a Wishart matrix whose squared singular values are distributed according to (1.1); see also [22]. Note that when \(\theta =1\), (1.1) reduces to the well-known Wishart–Laguerre unitary ensemble, which plays a fundamental role in random matrix theory; cf. [6, 19].

A nice property of (1.1) is that, as proved in [38], these ensembles form so-called determinantal point processes [27, 45]. This means that there exists a correlation kernel \(K_n^{(\alpha , \theta )}(x,y)\) such that the joint probability density function (1.1) can be rewritten in the determinantal form

$$\begin{aligned} \frac{1}{n!}\det \left( K_n^{(\alpha ,\theta )}(x_i,x_j)\right) _{i,j=1}^n. \end{aligned}$$

The kernel \(K_n^{(\alpha , \theta )}(x,y)\) has a representation in terms of the so-called biorthogonal polynomials (cf. [29] for a definition). Let

$$\begin{aligned} p_j^{(\alpha ,\theta )}(x)=\kappa _j x^j+\cdots , \quad q_k^{(\alpha ,\theta )}(x)= x^k+\cdots , \quad \kappa _j > 0, \end{aligned}$$
(1.2)

be two sequences of polynomials depending on the parameters \(\alpha \) and \(\theta \), of degrees j and k, respectively, which satisfy the orthogonality conditions

$$\begin{aligned} \int _0^\infty p_j^{(\alpha ,\theta )}(x)q_k^{(\alpha ,\theta )}(x^\theta )x^\alpha e^{-x}\,\mathrm {d}x=\delta _{j,k}, \quad j,k=0,1,2,\ldots . \end{aligned}$$
(1.3)

Note that the polynomial \(q_k^{(\alpha , \theta )}\) is normalized to be monic. We then have

$$\begin{aligned} K_n^{(\alpha , \theta )}(x,y)=\sum _{j=0}^{n-1}p_j^{(\alpha , \theta )}(x)q_j^{(\alpha , \theta )}(y^\theta ) x^\alpha e^{-x}. \end{aligned}$$
(1.4)

The families \(\{p_j,j=0,1,\ldots \}\) and \(\{q_k,k=0,1,\ldots \}\), which are called Laguerre biorthogonal polynomials, exist and are unique, since the associated bimoment matrix is nonsingular; see (2.3) and (2.6) below. The study of these polynomials (with \(\theta =2\)) can be traced back to [46], in the context of the penetration and diffusion of X-rays through matter. Later, intensive studies were conducted on the case \(\theta \in \mathbb {N}=\{1,2,\ldots \}\) in [11, 23, 24, 30, 43, 44, 47], where general properties including explicit formulas, recurrence relations, generating functions, Rodrigues-type formulas, etc., are derived.
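
For instance, in the simplest case \(\theta =1\), the biorthogonality (1.3) is nothing but the classical orthogonality of the generalized Laguerre polynomials \(L_k^{(\alpha )}\); comparing normalizations (recall that \(\int _0^\infty L_j^{(\alpha )}(x)L_k^{(\alpha )}(x)x^\alpha e^{-x}\,\mathrm {d}x=\frac{\Gamma (k+\alpha +1)}{k!}\delta _{j,k}\)) gives

$$\begin{aligned} q_k^{(\alpha ,1)}(x)=(-1)^k k!\, L_k^{(\alpha )}(x), \qquad p_k^{(\alpha ,1)}(x)=\frac{(-1)^k}{\Gamma (\alpha +1+k)}\, L_k^{(\alpha )}(x), \end{aligned}$$

which is also consistent with the explicit formula (2.7) below.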

For determinantal point processes, a fundamental problem is to establish the large n limit of the correlation kernel (1.4) in both the macroscopic and microscopic regimes. By expressing \(K_n^{(\alpha ,\theta )}(x,y)\) as a finite series in terms of \(x^{k}y^{\theta r}\), \(k,r=0,1,\ldots ,n-1\), it was shown by Borodin [10, Theorem 4.2] that

$$\begin{aligned} \lim _{n \rightarrow \infty }\frac{K_n^{(\alpha , \theta )}\left( \frac{x}{n^{1/\theta }},\frac{y}{n^{1/\theta }}\right) }{n^{1/\theta }}&=\sum _{k,l=0}^{\infty }\frac{(-1)^kx^{\alpha +k}}{k!\Gamma \left( \frac{\alpha +1+k}{\theta }\right) }\frac{(-1)^ly^{\theta l}}{l!\Gamma (\alpha +1+\theta l)}\frac{\theta }{\alpha +1+k+\theta l} \nonumber \\&=\theta x^{\alpha }\int _0^1J_{\frac{\alpha +1}{\theta },\frac{1}{\theta }}(ux)J_{\alpha +1,\theta }((uy)^{\theta })u^\alpha \,\mathrm {d}u, \end{aligned}$$
(1.5)

where \(J_{a,b}\) is Wright’s generalization of the Bessel function [17] given by

$$\begin{aligned} J_{a,b}(x)=\sum _{j=0}^\infty \frac{(-x)^j}{j!\Gamma (a+bj)}; \end{aligned}$$
(1.6)

see also [36] for the special case \(\theta =2\), \(\alpha \in \mathbb {N}\cup \{0\}\). These non-symmetric hard edge scaling limits generalize the classical Bessel kernels [18, 50] (corresponding to \(\theta =1\)), and possess some nice symmetry properties. Moreover, they also appear in the studies of large n limits of correlation kernels for biorthogonal Jacobi and biorthogonal Hermite ensembles [10]. When \(\theta =M\in \mathbb {N}\) or \(1/\theta = M\), the limiting kernels coincide with the hard edge scaling limits of specified parameters arising from products of M Ginibre matrices [33], as shown in [32].
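
As a quick illustration of (1.6), for \(b=1\) Wright's function reduces to a classical Bessel function of the first kind, namely

$$\begin{aligned} J_{\alpha +1,1}(x)=\sum _{j=0}^\infty \frac{(-x)^j}{j!\Gamma (\alpha +1+j)}=x^{-\alpha /2}J_{\alpha }(2\sqrt{x}), \end{aligned}$$

which is consistent with the \(\theta =1\) reduction to the classical Bessel kernel discussed after Corollary 1.2 below.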

The macroscopic behavior of the particles as \(n\rightarrow \infty \) has recently been investigated in [13], where the expressions for the associated equilibrium measures are given for quite general potentials and \(\theta \ge 1\). According to [13], as \(n\rightarrow \infty \), the (rescaled) particles in (1.1) are distributed over a finite interval \([0,(1+\theta )^{1+1/\theta }]\), with the density function given by

$$\begin{aligned} f_\theta (x)= \frac{\theta }{2 \pi x i }(I_+(x)-I_-(x)), \quad x\in (0,(1+\theta )^{1+1/\theta }). \end{aligned}$$
(1.7)

Here, \(I_{\pm }(x)\) (with \(\mathrm {Im}\,(I_+(x))>0\)) stand for two complex conjugate solutions of the equation

$$\begin{aligned} J(z)=\theta (z+1)\left( \frac{z+1}{z}\right) ^{1/\theta }=x, \quad x\in (0,(1+\theta )^{1+1/\theta }). \end{aligned}$$

Moreover, by [13, Remark 1.9], the density blows up at the rate \(x^{-1/(1+\theta )}\) near the origin (the hard edge), while it vanishes like a square root near \((1+\theta )^{1+1/\theta }\) (the soft edge). This in particular suggests non-trivial hard edge scaling limits (as shown in (1.5)), as well as the expectation that the classical bulk and soft edge universality [31] (via the sine kernel and the Airy kernel, respectively) should hold in the bulk and at the right edge, as in the case \(\theta =1\). A more explicit description was given later in [21]. After the change of variables \(x_i \rightarrow \theta x_i^{1/\theta }\), the (rescaled) particles are distributed over \(\left[ 0,(1+\theta )^{1+\theta }/\theta ^\theta \right] \) and the limiting mean distribution is recognized as the Fuss–Catalan distribution [5, 7, 40]. Its kth moment is given by the Fuss–Catalan number

$$\begin{aligned} \frac{1}{(1+\theta )k+1}\left( {\begin{array}{c}(1+\theta )k+1\\ k\end{array}}\right) ,\quad k=0,1,2,\ldots . \end{aligned}$$
(1.8)
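
As a consistency check, for \(\theta =1\) these numbers reduce to the Catalan numbers

$$\begin{aligned} \frac{1}{2k+1}\left( {\begin{array}{c}2k+1\\ k\end{array}}\right) =\frac{1}{k+1}\left( {\begin{array}{c}2k\\ k\end{array}}\right) =1,\,1,\,2,\,5,\,14,\ldots , \end{aligned}$$

which are the moments of the Marchenko–Pastur law on \([0,4]\), in agreement with the Wishart–Laguerre case \(\theta =1\) of (1.1).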

The density function of the Fuss–Catalan distribution can be written down explicitly in several ways; cf. [42] in terms of Meijer G-functions (see e.g. [8, 37, 41] and the “Appendix” below for a brief introduction) or [34] in terms of multivariate integrals. Perhaps the simplest representation for general \(\theta \) follows from the following parametrization of the argument [9, 21, 25, 39]:

$$\begin{aligned} x=\frac{(\sin ((1+\theta )\varphi ))^{1+\theta }}{\sin \varphi (\sin ( \theta \varphi ))^\theta }, \quad 0<\varphi <\frac{\pi }{1+\theta }. \end{aligned}$$
(1.9)

It is readily seen that this parametrization is strictly decreasing in \(\varphi \), and thus gives a one-to-one mapping from \((0, \pi /(1+\theta ))\) onto \((0, (1 + \theta )^{1 + \theta }/\theta ^\theta )\). The density function in terms of \(\varphi \) is then given by

$$\begin{aligned} \rho (\varphi ) = \frac{1}{\pi x} \frac{\sin ((1+\theta )\varphi )}{\sin (\theta \varphi )} \sin \varphi = \frac{1}{\pi }\frac{(\sin \varphi )^2(\sin (\theta \varphi ))^{\theta -1}}{ (\sin ((1+\theta )\varphi ))^\theta },\quad 0<\varphi <\frac{\pi }{1+\theta }. \end{aligned}$$
(1.10)
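
For later reference, we record the local behavior at the two endpoints; this is a short sketch of the computation behind the next paragraph. As \(\varphi \rightarrow 0\), (1.9) gives \(x\rightarrow (1+\theta )^{1+\theta }/\theta ^\theta \) while \(\rho (\varphi )=\mathcal {O}(\varphi )\), whereas setting \(\varepsilon =\frac{\pi }{1+\theta }-\varphi \rightarrow 0\) yields

$$\begin{aligned} x=\frac{\left( (1+\theta )\varepsilon \right) ^{1+\theta }}{\sin \left( \frac{\pi }{1+\theta }\right) \left( \sin \frac{\theta \pi }{1+\theta }\right) ^{\theta }}\left( 1+\mathcal {O}(\varepsilon )\right) , \qquad \rho (\varphi )=\frac{\left( \sin \frac{\pi }{1+\theta }\right) ^{2}\left( \sin \frac{\theta \pi }{1+\theta }\right) ^{\theta -1}}{\pi \left( (1+\theta )\varepsilon \right) ^{\theta }}\left( 1+\mathcal {O}(\varepsilon )\right) , \end{aligned}$$

so that \(\rho \) blows up like a constant multiple of \(x^{-\theta /(1+\theta )}\) as \(x\rightarrow 0\), and vanishes like a constant multiple of \(\sqrt{(1+\theta )^{1+\theta }/\theta ^\theta -x}\) at the other endpoint.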

From (1.9) and (1.10), one can check directly that \(\rho \) blows up at the rate \(x^{-\theta /(1+\theta )}\) near the origin, and vanishes like a square root near \((1+\theta )^{1+\theta }/\theta ^\theta \), which is compatible with the change of variables. We finally note that another description of the macroscopic behavior, in terms of the notion of a DT-element [16], can be found in [12].

The main aim of this paper is to establish local universality for the biorthogonal Laguerre ensembles (1.1). Due to the lack of a simple Christoffel–Darboux formula for the Laguerre biorthogonal polynomials, we have to adopt an approach that is different from the conventional one. The starting point is an explicit integral representation of \(K_n^{(\alpha ,\theta )}\). Our main results are stated in the next section.

1.2 Statement of the Main Results

Our first result is stated as follows:

Theorem 1.1

(Double contour integral representation of \(K_n^{(\alpha ,\theta )}\)) With \(K_n^{(\alpha ,\theta )}\) defined in (1.4), we have

$$\begin{aligned} K_n^{(\alpha , \theta )}(x,y) \!=\! \frac{\theta }{(2\pi i)^2} \int _{c-i\infty }^{c+i\infty } \,\mathrm {d}s \oint _{\Sigma } \,\mathrm {d}t \frac{\Gamma (s+1)\Gamma (\alpha +1+\theta s )}{\Gamma (t+1)\Gamma (\alpha +1+\theta t)} \frac{\Gamma (t-n+1)}{\Gamma (s-n+1)} \frac{x^{-\theta s -1}y^{\theta t}}{s-t}, \end{aligned}$$
(1.11)

for \(x,y>0\), where

$$\begin{aligned} c=\frac{\max \{0,1-\frac{\alpha +1}{\theta }\}-1}{2}<0, \end{aligned}$$
(1.12)

and \(\Sigma \) is a closed contour going around \(0, 1, \ldots , n-1\) in the positive direction and \(\mathrm {Re}\,t > c\) for \(t \in \Sigma \).
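
As a quick consistency check of (1.11) (which is not needed in what follows), consider \(n=1\): the only pole of \(\Gamma (t-n+1)=\Gamma (t)\) inside \(\Sigma \) is at \(t=0\), with residue 1, so that, after evaluating the t-integral, substituting \(\sigma =\theta s\) and using the Mellin pair (4.4) below,

$$\begin{aligned} K_1^{(\alpha ,\theta )}(x,y)=\frac{\theta }{2\pi i\,\Gamma (\alpha +1)} \int _{c-i\infty }^{c+i\infty }\frac{\Gamma (s+1)\Gamma (\alpha +1+\theta s)}{\Gamma (s)}\frac{x^{-\theta s-1}}{s}\,\mathrm {d}s =\frac{1}{2\pi i\,\Gamma (\alpha +1)} \int _{\theta c-i\infty }^{\theta c+i\infty }\Gamma (\alpha +1+\sigma )\,x^{-\sigma -1}\,\mathrm {d}\sigma =\frac{x^{\alpha }e^{-x}}{\Gamma (\alpha +1)}, \end{aligned}$$

which agrees with \(K_1^{(\alpha ,\theta )}(x,y)=p_0^{(\alpha ,\theta )}(x)q_0^{(\alpha ,\theta )}(y^\theta )x^\alpha e^{-x}\) from (1.4), since \(q_0^{(\alpha ,\theta )}=1\) and \(p_0^{(\alpha ,\theta )}=1/\Gamma (\alpha +1)\).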

We highlight that this contour integral representation bears a resemblance to those appearing recently in the studies of products of random matrices [20, 28, 32, 33], where the integrands of the double contour integral representations for the correlation kernels again consist of ratios of gamma functions. When \(\theta \in \mathbb {N}\), \(K_n^{(\alpha ,\theta )}\) is indeed related to certain correlation kernels arising from products of Ginibre matrices; see Sect. 3 below. We also note that, in the context of products of random matrices, the correlation kernels can be written as integrals involving Meijer G-functions; for biorthogonal Laguerre ensembles, however, this does not seem to be the case for general parameters \(\alpha \) and \(\theta \).

An immediate consequence of the above theorem is the following new representations of hard edge scaling limits.

Corollary 1.2

(Hard edge scaling limits of \(K_n^{(\alpha ,\theta )}\)) With \(\alpha > -1\), \(\theta \ge 1\) being fixed, we have

$$\begin{aligned} \lim _{n \rightarrow \infty }\frac{K_n^{(\alpha , \theta )}\left( \frac{x}{n^{1/\theta }},\frac{y}{n^{1/\theta }}\right) }{n^{1/\theta }} = K^{(\alpha ,\theta )}(x,y), \end{aligned}$$
(1.13)

uniformly for x, y in compact subsets of the positive real axis, where

$$\begin{aligned} K^{(\alpha ,\theta )}(x,y) =\frac{\theta }{(2\pi i)^2} \int _{c-i\infty }^{c+i\infty } \,\mathrm {d}s \oint _{\Sigma } \,\mathrm {d}t \frac{\Gamma (s+1)\Gamma (\alpha +1+\theta s )}{\Gamma (t+1)\Gamma (\alpha +1+\theta t)} \frac{\sin \pi s}{\sin \pi t} \frac{x^{-\theta s -1}y^{\theta t }}{s-t}\nonumber \\ \end{aligned}$$
(1.14)

and where c is given in (1.12), \(\Sigma \) is a contour starting from \(+\infty \) in the upper half plane and returning to \(+\infty \) in the lower half plane which encircles the positive real axis and \(\mathrm {Re}\,t>c\) for \(t\in \Sigma \). Alternatively, by setting

$$\begin{aligned} p^{(\alpha ,\theta )}(x)=\frac{1}{2 \pi i}\int _{\kappa -i\infty }^{\kappa +i\infty } \frac{\Gamma (\alpha +s)}{\Gamma ((1-s)/\theta )}x^{-s} \,\mathrm {d}s, \quad \kappa >-\alpha , \end{aligned}$$
(1.15)

and

$$\begin{aligned} q^{(\alpha ,\theta )}(x)=\frac{1}{2 \pi i}\int _\gamma \frac{\Gamma (t/\theta )}{\Gamma (\alpha +1-t)}x^{-t} \,\mathrm {d}t, \end{aligned}$$
(1.16)

where \(\gamma \) is a loop starting from \(-\infty \) in the lower half plane and returning to \(-\infty \) in the upper half plane which encircles the negative real axis, we have

$$\begin{aligned} K^{(\alpha ,\theta )}(x,y)=\int _0^1p^{(\alpha ,\theta )}(ux)q^{(\alpha ,\theta )}(uy)\,\mathrm {d}u. \end{aligned}$$
(1.17)

In Corollary 1.2, we require \(\theta \ge 1\) to make sure the integral is convergent. Note that when \(\theta =1\), we have (see [41, formula 10.9.23])

$$\begin{aligned} p^{(\alpha ,1)}(ux)=(ux)^{\alpha /2}J_{\alpha }(2\sqrt{ux}),\quad q^{(\alpha ,1)}(uy)=(uy)^{-\alpha /2}J_{\alpha }(2\sqrt{uy}), \end{aligned}$$

where \(J_\alpha \) denotes the Bessel function of the first kind of order \(\alpha \). It then follows from (1.17) that

$$\begin{aligned} K^{(\alpha ,1)}(x,y)&=\left( \frac{x}{y}\right) ^{\alpha /2}\int _0^1 J_{\alpha }(2\sqrt{ux})J_{\alpha }(2\sqrt{uy}) \,\mathrm {d}u \\&= 4 \left( \frac{x}{y}\right) ^{\alpha /2} K^\mathrm{Bes}_{\alpha }(4x,4y), \end{aligned}$$

where

$$\begin{aligned} K^\mathrm{Bes}_{\alpha }(x,y) = \frac{J_\alpha (\sqrt{x})\sqrt{y} J'_\alpha (\sqrt{y})-\sqrt{x} J_\alpha '(\sqrt{x})J_\alpha (\sqrt{y})}{2(x-y)},\quad \alpha >-1, \end{aligned}$$

is the Bessel kernel of order \(\alpha \) that appears as the scaling limit of the Laguerre unitary ensembles at the hard edge [18, 50], as expected. Furthermore, a comparison of (1.5) and (1.13)–(1.14) gives us the following identity

$$\begin{aligned}&\theta x^{\alpha }\int _0^1J_{\frac{\alpha +1}{\theta },\frac{1}{\theta }}(ux)J_{\alpha +1,\theta }((uy)^{\theta })u^\alpha \,\mathrm {d}u \nonumber \\&=\frac{\theta }{(2\pi i)^2} \int _{c-i\infty }^{c+i\infty } \,\mathrm {d}s \oint _{\Sigma } \,\mathrm {d}t \frac{\Gamma (s+1)\Gamma (\alpha +1+\theta s )}{\Gamma (t+1)\Gamma (\alpha +1+\theta t)} \frac{\sin \pi s}{\sin \pi t} \frac{x^{-\theta s -1}y^{\theta t }}{s-t},\quad \theta \ge 1.\nonumber \\ \end{aligned}$$
(1.18)

For a direct proof of the above formula, see Remark 3.2 below.

We believe that the new integral representations (1.14) and (1.17) for \(K^{(\alpha ,\theta )}\) will also facilitate further investigations of relevant quantities, say, the differential equations for the associated Fredholm determinants, as done in [48, 49, 51]. The studies of these aspects will be the topics of future research.

By performing an asymptotic analysis for the double contour integral representation (1.11), we are able to confirm the bulk and soft edge universality for biorthogonal Laguerre ensembles, which are left open in [13]. The relevant results are stated as follows.

Theorem 1.3

(Bulk and soft edge universality) For \(x_0\in (0, ( 1+\theta )^{1+\theta }/\theta ^\theta )\), which is parameterized through (1.9) by \(\varphi = \varphi (x_0) \in (0, \pi /(1+\theta ))\), we have, with \(\alpha ,\theta \) being fixed,

$$\begin{aligned}&\lim _{n \rightarrow \infty } \frac{e^{-\pi \eta \cot \varphi }}{e^{-\pi \xi \cot \varphi }} \frac{1}{\rho (\varphi )x_0^{1-\frac{1}{\theta }}} K_n^{(\alpha ,\theta )}\left( n \theta \left( x_0+\frac{\xi }{n\rho (\varphi )}\right) ^{\frac{1}{\theta }}, n \theta \left( x_0+\frac{\eta }{n\rho (\varphi )}\right) ^{\frac{1}{\theta }}\right) \nonumber \\&\quad =K_{\sin }(\xi ,\eta ), \end{aligned}$$
(1.19)

uniformly for \(\xi \) and \(\eta \) in any compact subset of \(\mathbb {R}\), where \(\rho (\varphi )\) is defined in (1.10) and

$$\begin{aligned} K_{\sin }(x,y):=\frac{\sin \pi (x-y)}{\pi (x-y)} \end{aligned}$$
(1.20)

is the normalized sine kernel.

For the soft edge, we have

$$\begin{aligned}&\lim _{n \rightarrow \infty } \frac{e^{-2^{-\frac{1}{3}}(1+\theta )^{\frac{2}{3}} \eta n^{\frac{1}{3}}}}{e^{-2^{-\frac{1}{3}}(1+\theta )^{\frac{2}{3}} \xi n^{\frac{1}{3}}}}\frac{(1+\theta )^{\frac{2}{3}+\frac{1}{\theta }}}{2^{\frac{1}{3}}} n^{\frac{1}{3}} K_n^{(\alpha ,\theta )}\left( n\theta \left( x_* + \frac{ c_* \xi }{n^{\frac{2}{3}}}\right) ^{\frac{1}{\theta }}, n\theta \left( x_* + \frac{ c_* \eta }{n^{\frac{2}{3}}}\right) ^{\frac{1}{\theta }}\right) \nonumber \\&\quad = K_{{{\mathrm{Ai}}}}(\xi , \eta ) \end{aligned}$$
(1.21)

uniformly for \(\xi \) and \(\eta \) in any compact subset of \(\mathbb {R}\), where

$$\begin{aligned} x_* = \frac{( 1+\theta )^{ 1+\theta }}{\theta ^\theta }, \quad c_*=\frac{(1+\theta )^{ \frac{2}{3}+\theta }}{2^{\frac{1}{3}} \theta ^{\theta - 1}}, \end{aligned}$$
(1.22)

and

$$\begin{aligned} K_{{{\mathrm{Ai}}}}(x,y):=\frac{{{\mathrm{Ai}}}(x){{\mathrm{Ai}}}'(y)-{{\mathrm{Ai}}}'(x){{\mathrm{Ai}}}(y)}{x-y} =\frac{1}{(2\pi i)^2}\int _{\gamma _R}\,\mathrm {d}\mu \int _{\gamma _L}\,\mathrm {d}\lambda \frac{e^{\frac{\mu ^3}{3}-x\mu }}{e^{\frac{\lambda ^3}{3}-y\lambda }}\frac{1}{\mu -\lambda }\nonumber \\ \end{aligned}$$
(1.23)

is the Airy kernel. In (1.23), \(\gamma _R\) and \(\gamma _L\) are symmetric with respect to the imaginary axis, and \(\gamma _R\) is a contour in the right-half plane going from \(e^{-\pi i/3}\cdot \infty \) to \(e^{\pi i/3}\cdot \infty \); see Fig. 1 for an illustration.

Fig. 1 The contours \(\gamma _L\) and \(\gamma _R\) in the definition of the Airy kernel

In the special case \(\theta =2\), \(\alpha \in \mathbb {N}\cup \{0 \}\), the bulk universality was first proved in [36].

Remark 1.1

The soft edge universality (1.21) also implies that the distribution of the largest particle in the biorthogonal Laguerre ensembles converges, after proper scaling, to the well-known Tracy–Widom distribution [6, Theorem 3.1.5].

1.3 Organization of the Rest of the Paper

The rest of this paper is organized as follows. Our main results are proved in Sect. 2. The proofs of Theorem 1.1 and Corollary 1.2 are given in Sects. 2.2 and 2.3, respectively, and rely on two propositions concerning contour integral representations of \(p_k^{(\alpha ,\theta )}\) and \(q_k^{(\alpha ,\theta )}\) established in Sect. 2.1. These formulas can be viewed as extensions of the intensively studied case \(\theta \in \mathbb {N}\), and we give direct proofs here. The nice structure of these formulas then allows us to simplify (1.4) into a closed integral form as well as to obtain the hard edge scaling limits, following the idea in recent work of the author with Kuijlaars [33]. The bulk and soft edge universality stated in Theorem 1.3 is proved in Sect. 2.4. We will perform a steepest descent analysis of the double contour integral (1.11), whose integrand consists of products and ratios of gamma functions with large arguments. It turns out that the strategy developed by Liu et al. [35] (see also [1]) works well in the present case. Roughly speaking, the strategy is to approximate the logarithms of the gamma functions by elementary functions for large n, which then play the role of phase functions. There are two complex conjugate saddle points in the bulk regime, corresponding to the sine kernel, while in the edge regime these two saddle points coalesce into a single one, which leads to the Airy kernel. A crucial feature of the analysis is the construction of suitable contours of integration with the aid of the parametrization (1.9). Since the asymptotic analysis is carried out in a manner similar to that in [35], emphasis will be placed on the key steps and the basic ideas in the proof of Theorem 1.3, and we refer to [35] for some of the technical issues.

Finally, in Sect. 3, we focus on the cases when \(\theta =M \in \mathbb {N}\) and relate \(K_n^{(\alpha ,M)}\) to correlation kernels of specified parameters arising from products of M Ginibre matrices. Some remarks are made in accordance with this relation to conclude the paper. For the convenience of the reader, we include a short introduction to the Meijer G-function in the “Appendix”.

2 Proofs of the Main Results

2.1 Contour Integral Representations of \(p_k^{(\alpha ,\theta )}(x)\) and \(q_k^{(\alpha ,\theta )}(x)\)

Proposition 2.1

We have for \(x>0\),

$$\begin{aligned} q_k^{(\alpha ,\theta )}(x)&= (-1)^k\sum _{j=0}^k \left( {\begin{array}{c}k\\ j\end{array}}\right) \frac{(-x)^j}{\Gamma (\alpha +1+j\theta )}\Gamma (\alpha +1+k\theta ) \nonumber \\&= \frac{\Gamma (\alpha +1+k\theta ) k!}{2 \pi i}\oint _{\Sigma }\frac{\Gamma (t-k)x^t}{\Gamma (t+1)\Gamma (\alpha +1+\theta t)} \,\mathrm {d}t, \end{aligned}$$
(2.1)

where \(\Sigma \) is a closed contour that encircles \(0, 1, \ldots , k\) once in the positive direction.

Proof

The first identity in (2.1) follows from the determinantal expressions for the polynomials \(q_k^{(\alpha ,\theta )}\). By setting the bimoments

$$\begin{aligned} m_{j,k}=\int _0^{+\infty }x^{\alpha +j+\theta k}e^{-x}\,\mathrm {d}x =\Gamma (\alpha +j+k\theta +1), \quad j,k\in \mathbb {N}\cup \{0\}, \end{aligned}$$
(2.2)

we define

$$\begin{aligned} D_n&=\det (m_{j,k})_{j,k=0,\ldots ,n} \nonumber \\&= \det \begin{pmatrix} \Gamma (\alpha +1) &{} \Gamma (\alpha +1+\theta ) &{} \cdots &{} \Gamma (\alpha +1+n\theta ) \\ \Gamma (\alpha +2) &{} \Gamma (\alpha +2+\theta ) &{} \cdots &{} \Gamma (\alpha +2+n\theta ) \\ \vdots &{} \vdots &{} \vdots &{} \vdots \\ \Gamma (\alpha +1+n) &{} \Gamma (\alpha +1+n+\theta ) &{} \cdots &{}\Gamma (\alpha +1+n(\theta +1)) \end{pmatrix}. \end{aligned}$$
(2.3)

From the general theory of biorthogonal polynomials (cf. [15, Proposition 2]), it follows that

$$\begin{aligned} q_k^{(\alpha ,\theta )}(x)&=\frac{1}{D_{k-1}}\det \begin{pmatrix} m_{0,0} &{} m_{0,1} &{} \cdots &{} m_{0,k} \\ \vdots &{} \vdots &{} \vdots &{} \vdots \\ m_{k-1,0} &{} m_{k-1,1} &{} \cdots &{} m_{k-1,k} \\ 1 &{} x &{} \cdots &{} x^k \end{pmatrix} \nonumber \\&=\frac{1}{D_{k-1}} \det \begin{pmatrix} \Gamma (\alpha +1) &{} \Gamma (\alpha +1+\theta ) &{} \cdots &{} \Gamma (\alpha +1+k\theta ) \\ \vdots &{} \vdots &{} \vdots &{} \vdots \\ \Gamma (\alpha +k) &{} \Gamma (\alpha +k+\theta ) &{} \cdots &{} \Gamma (\alpha +k(1+\theta )) \\ 1 &{} x &{} \cdots &{} x^k \end{pmatrix}, ~~k \ge 1 \end{aligned}$$
(2.4)

with \(q_0^{(\alpha ,\theta )}(x)=1\). With the aid of functional relation

$$\begin{aligned} \Gamma (z+1)=z\Gamma (z), \end{aligned}$$
(2.5)

an easy Gauss elimination process gives us

$$\begin{aligned} D_n=\prod _{k=0}^{n} k! \theta ^k \Gamma (\alpha +1+k\theta ). \end{aligned}$$
(2.6)
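
For instance, for \(n=1\) one checks directly, using (2.5), that

$$\begin{aligned} D_1=\Gamma (\alpha +1)\Gamma (\alpha +2+\theta )-\Gamma (\alpha +2)\Gamma (\alpha +1+\theta ) =\theta \,\Gamma (\alpha +1)\Gamma (\alpha +1+\theta ), \end{aligned}$$

in agreement with (2.6).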

Similarly, by expanding the determinant in (2.4) along the last row and evaluating the associated minors, it follows that

$$\begin{aligned} q_k^{(\alpha ,\theta )}(x)=(-1)^k\sum _{j=0}^k \left( {\begin{array}{c}k\\ j\end{array}}\right) \frac{(-x)^j}{\Gamma (\alpha +1+j\theta )}\Gamma (\alpha +1+k\theta ), \end{aligned}$$
(2.7)

see also [30] for a proof of (2.7) by checking the orthogonality directly if \(\theta =M\).

To show the second identity in (2.1), we note that the integrand on the right-hand side of (2.1) is meromorphic on \(\mathbb C\) with simple poles at \(0, 1, \ldots , k\) (the poles of the numerator at the negative integers are canceled by the poles of the factor \(\Gamma (t+1)\) in the denominator). Hence, by the residue theorem and a straightforward calculation, we obtain

$$\begin{aligned}&\frac{\Gamma (\alpha +1+k\theta ) k!}{2 \pi i}\oint _{\Sigma }\frac{\Gamma (t-k)x^t}{\Gamma (t+1)\Gamma (\alpha +1+\theta t)} \,\mathrm {d}t \nonumber \\&\quad =\Gamma (\alpha +1+k\theta ) k! \sum _{j=0}^k {{\mathrm{Res}}}_{t=j} \left( \frac{\Gamma (t-k)}{\Gamma (t+1)\Gamma (\alpha +1+\theta t)} \right) x^j \nonumber \\&\quad =\Gamma (\alpha +1+k\theta ) k! \sum _{j=0}^k \frac{(-1)^{k-j}x^j}{(k-j)!j!\Gamma (\alpha +1+j\theta )} \nonumber \\&\quad =(-1)^k\sum _{j=0}^k \left( {\begin{array}{c}k\\ j\end{array}}\right) \frac{(-x)^j}{\Gamma (\alpha +1+j\theta )}\Gamma (\alpha +1+k\theta ). \end{aligned}$$
(2.8)

This completes the proof of Proposition 2.1. \(\square \)
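
As a simple illustration of (2.1), for \(k=1\) the residues at \(t=0\) and \(t=1\) give

$$\begin{aligned} q_1^{(\alpha ,\theta )}(x)=x-\frac{\Gamma (\alpha +1+\theta )}{\Gamma (\alpha +1)}, \end{aligned}$$

and indeed \(\int _0^\infty q_1^{(\alpha ,\theta )}(x^\theta )\,x^\alpha e^{-x}\,\mathrm {d}x=\Gamma (\alpha +1+\theta )-\frac{\Gamma (\alpha +1+\theta )}{\Gamma (\alpha +1)}\,\Gamma (\alpha +1)=0\), in accordance with (1.3) for \(j=0\), \(k=1\).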

Proposition 2.2

For \( p_k^{(\alpha ,\theta )}\), we have the following Mellin–Barnes integral representation

$$\begin{aligned} x^\alpha e^{-x} p_k^{(\alpha ,\theta )}(x) = \frac{1}{2\pi i \Gamma (\alpha +1+k\theta ) k!} \int _{c-i\infty }^{c+i\infty } \frac{\Gamma \left( \frac{s}{\theta }+1-\frac{1}{\theta }\right) }{\Gamma \left( \frac{s}{\theta }+1-\frac{1}{\theta }-k\right) } \Gamma (\alpha +s)x^{-s} \,\mathrm {d}s,\nonumber \\ \end{aligned}$$
(2.9)

where \(c > \max \{-\alpha , 1-\theta \}\) and \(x>0\).

Proof

Since all the poles of the integrand lie to the left of the line \(\mathrm {Re}\,s = c\), the integral on the right-hand side of (2.9) is well defined. On account of the uniqueness of the biorthogonal functions, our strategy is to check that the integral representation satisfies

  • the orthogonality conditions

    $$\begin{aligned} \frac{1}{2\pi i \Gamma (\alpha +1+k\theta ) k!}\int _0^\infty x^{j\theta } \int _{c-i\infty }^{c+i\infty } \frac{\Gamma \left( \frac{s}{\theta }+1-\frac{1}{\theta }\right) }{\Gamma \left( \frac{s}{\theta }+1-\frac{1}{\theta }-k\right) } \Gamma (\alpha +s)x^{-s} \,\mathrm {d}s \,\mathrm {d}x =\delta _{j,k},\nonumber \\ \end{aligned}$$
    (2.10)

    for \(j=0,1,\ldots ,k\);

  • the integral \(\frac{1}{2\pi i } \int _{c-i\infty }^{c+i\infty } \frac{\Gamma \left( \frac{s}{\theta }+1-\frac{1}{\theta }\right) }{\Gamma \left( \frac{s}{\theta }+1-\frac{1}{\theta }-k\right) } \Gamma (\alpha +s)x^{-s} \,\mathrm {d}s\) belongs to the linear span of \(x^\alpha e^{-x}, x^{\alpha +1}e^{-x}, \ldots , x^{\alpha +k}e^{-x}\).

To show (2.10), we make use of the inversion formula for the Mellin transform and obtain

$$\begin{aligned}&\frac{1}{2\pi i \Gamma (\alpha +1+k\theta ) k!}\int _0^\infty x^{j\theta } \int _{c-i\infty }^{c+i\infty } \frac{\Gamma \left( \frac{s}{\theta }+1-\frac{1}{\theta }\right) }{\Gamma \left( \frac{s}{\theta }+1-\frac{1}{\theta }-k\right) } \Gamma (\alpha +s)x^{-s} \,\mathrm {d}s \,\mathrm {d}x \nonumber \\&\quad =\left. \frac{\Gamma \left( \frac{s}{\theta }+1-\frac{1}{\theta }\right) }{\Gamma (\alpha +1+k\theta ) k!\Gamma \left( \frac{s}{\theta }+1-\frac{1}{\theta }-k\right) } \Gamma (\alpha +s) \right| _{s=j\theta +1} \nonumber \\&\quad =\frac{(j+1-k)_k\Gamma (1+\alpha +j\theta )}{\Gamma (\alpha +1+k\theta ) k!}=\delta _{j,k}. \end{aligned}$$
(2.11)

To check the second statement, recall the Pochhammer symbol \((a)_k=\frac{\Gamma (a+k)}{\Gamma (a)}=a(a+1)\cdots (a+k-1)\). It is readily seen that

$$\begin{aligned} \frac{\Gamma \left( \frac{s}{\theta }+1-\frac{1}{\theta }\right) }{\Gamma \left( \frac{s}{\theta }+1-\frac{1}{\theta }-k\right) }= \left( \frac{s}{\theta }+1-\frac{1}{\theta }-k\right) _k=\left( \frac{s}{\theta }+1-\frac{1}{\theta }-k\right) \cdots \left( \frac{s}{\theta }+1-\frac{1}{\theta }-1\right) \end{aligned}$$

is a polynomial of degree k in s; the integral is then a linear combination of the weights \(w_j^{(\alpha )}(x)\), \(j=0,\ldots ,k\), where

$$\begin{aligned} w_j^{(\alpha )}(x)&= \frac{1}{2\pi i} \int _{c-i\infty }^{c+i\infty } s^j \Gamma (\alpha +s) x^{-s} \,\mathrm {d}s . \end{aligned}$$
(2.12)

Thus, it suffices to check that \(w_j^{(\alpha )}(x)\) belongs to the linear span of \(x^\alpha e^{-x}, x^{\alpha +1}e^{-x}, \ldots , x^{\alpha +j}e^{-x}\). We now expand the monomial \(s^j\) in terms of the basis \((\alpha +s)_l\), \(l=0,\ldots ,j\), i.e.,

$$\begin{aligned} s^j=\sum _{l=0}^j a_l(\alpha +s)_l= \sum _{l=0}^j a_l\frac{\Gamma (\alpha +l+s)}{\Gamma (\alpha +s)} \end{aligned}$$

for some constants \(a_l\) with \(a_j=1\). Inserting the above formula into (2.12), it follows that

$$\begin{aligned} w_j^{(\alpha )}(x) = \frac{1}{2\pi i} \sum _{l=0}^j a_l \int _{c-i\infty }^{c+i\infty } \Gamma (\alpha +l+s) x^{-s} \,\mathrm {d}s =\sum _{l=0}^j a_l x^{\alpha +l}e^{-x}, \end{aligned}$$
(2.13)

as desired, where we have made use of the fact that

$$\begin{aligned} \frac{1}{2\pi i} \int _{c-i\infty }^{c+i\infty } \Gamma (\nu +s) x^{-s} \,\mathrm {d}s=x^\nu e^{-x}, \quad \nu >-1; \end{aligned}$$

see (4.4) below.

This completes the proof of Proposition 2.2. \(\square \)

2.2 Proof of Theorem 1.1

With a change of variable \(s\rightarrow \theta s+1-\theta \) in (2.9) and contour deformation, we rewrite \(x^\alpha e^{-x} p_k^{(\alpha ,\theta )}(x)\) as

$$\begin{aligned} \frac{\theta x^{\theta -1}}{2\pi i \Gamma (\alpha +1+k\theta ) k! } \int _{c-i\infty }^{c+i\infty } \frac{\Gamma \left( s\right) }{\Gamma \left( s-k\right) } \Gamma (\theta s +1-\theta +\alpha )x^{-\theta s } \,\mathrm {d}s, \end{aligned}$$
(2.14)

where \(c>\max \{0,1-\frac{\alpha +1}{\theta }\}\). This, together with (1.4) and (2.1), implies that

$$\begin{aligned} K_n^{(\alpha ,\theta )}(x,y) = \frac{\theta x^{\theta -1}}{(2\pi i)^2} \int _{c-i\infty }^{c+i\infty } \,\mathrm {d}s \oint _{\Sigma } \,\mathrm {d}t \frac{\Gamma (s)\Gamma ( \theta s +1-\theta +\alpha )}{\Gamma (t+1)\Gamma (\alpha +1+\theta t)} \sum _{k=0}^{n-1} \frac{\Gamma (t-k)}{\Gamma (s-k)} x^{-\theta s }y^{\theta t}.\nonumber \\ \end{aligned}$$
(2.15)

We now follow the idea in [33]. From the functional equation (2.5), one can easily check that

$$\begin{aligned} (s-t-1) \frac{\Gamma (t-k)}{\Gamma (s-k)} = \frac{\Gamma (t-k)}{\Gamma (s-k-1)} - \frac{\Gamma (t-k+1)}{\Gamma (s-k)}, \end{aligned}$$

which means that there is a telescoping sum

$$\begin{aligned} (s-t-1) \sum _{k=0}^{n-1} \frac{\Gamma (t-k)}{\Gamma (s-k)} = \frac{\Gamma (t-n+1)}{\Gamma (s-n)} -\frac{\Gamma (t+1)}{\Gamma (s)}. \end{aligned}$$
(2.16)

To make sure that \(s-t-1 \ne 0\) when \(s \in c + i\mathbb R\) and \(t \in \Sigma \), we make the following choices. Since \(\max \{0,1-\frac{\alpha +1}{\theta }\}<1\) for \(\alpha > -1\) and \(\theta >0\), we take

$$\begin{aligned} c=\frac{1+\max \{0,1-\frac{\alpha +1}{\theta }\}}{2}<1 \end{aligned}$$

and let \(\Sigma \) go around \(0, 1, \ldots , n-1\) but with \(\mathrm {Re}\,t > c-1\) for \(t \in \Sigma \). Then we insert (2.16) into (2.15) and get

$$\begin{aligned} K_n^{(\alpha , \theta )}(x,y) =\,&\frac{\theta x^{\theta -1}}{(2\pi i)^2} \int _{c-i\infty }^{c+i\infty } \,\mathrm {d}s \oint _{\Sigma } \,\mathrm {d}t \frac{\Gamma (s)\Gamma (\theta s +1-\theta +\alpha )}{\Gamma (t+1)\Gamma (\alpha +1+\theta t)} \frac{\Gamma (t-n+1)}{\Gamma (s-n)} \frac{x^{-\theta s}y^{\theta t }}{s-t-1} \nonumber \\&- \,\frac{\theta x^{\theta -1}}{(2\pi i)^2} \int _{c-i\infty }^{c+i\infty } \,\mathrm {d}s \oint _{\Sigma } \,\mathrm {d}t \frac{\Gamma (\theta s +1-\theta +\alpha )}{\Gamma (\alpha +1+\theta t)} \frac{x^{-\theta s }y^{\theta t }}{s-t-1}. \end{aligned}$$

The t-integral in the second double integral vanishes by Cauchy's theorem, since the integrand has no singularities inside \(\Sigma \). With a change of variable \(s \mapsto s+1\) in the first double integral, we obtain (1.11).

This completes the proof of Theorem 1.1.

2.3 Proof of Corollary 1.2

The proof is now straightforward by taking the limit in (1.11), as in [33]. Recalling the reflection formula for the gamma function

$$\begin{aligned} \Gamma (t) \Gamma (1-t) = \frac{\pi }{\sin \pi t}, \end{aligned}$$
(2.17)

it is readily seen that

$$\begin{aligned} \frac{\Gamma (t-n+1)}{\Gamma (s-n+1)} =\frac{\Gamma (n-s)}{\Gamma (n-t)} \frac{\sin \pi s}{\sin \pi t}. \end{aligned}$$
(2.18)

As \(n \rightarrow \infty \), we have (cf. [41, formula 5.11.13])

$$\begin{aligned} \frac{\Gamma (n-s)}{\Gamma (n-t)} = n^{t-s} \left( 1+ O(n^{-1})\right) , \end{aligned}$$
(2.19)

which can be easily verified using Stirling's formula for the gamma function. By deforming the contour \(\Sigma \) in (1.11) from a closed contour around \(0, 1, \ldots , n-1\) to a two-sided unbounded contour starting from \(+\infty \) in the upper half plane and returning to \(+\infty \) in the lower half plane, encircling the positive real axis with \(\mathrm {Re}\,t>c\) for \(t\in \Sigma \), the scaling limit (1.14) follows. The interchange of limit and integrals can be justified by combining elementary estimates of the sine and gamma functions with the dominated convergence theorem, as explained in [33].

To show (1.17), we note that

$$\begin{aligned} \frac{x^{-\theta s -1}y^{t \theta }}{s-t} = - \theta \int _0^1 (ux)^{-\theta s -1} (uy)^{\theta t } \,\mathrm {d}u, \end{aligned}$$
(2.20)

and, by (2.17),

$$\begin{aligned} \frac{\sin \pi s}{\sin \pi t} =\frac{\Gamma (1+t)\Gamma (-t)}{\Gamma (1+s) \Gamma (-s)}. \end{aligned}$$

Inserting the above two formulas into (1.14), it is readily seen that

$$\begin{aligned} K^{(\alpha ,\theta )}(x,y)= & {} - \int _0^1 \left( \frac{\theta }{2\pi i} \int _{c-i\infty }^{c+i\infty } \frac{\Gamma (\alpha +1+\theta s)}{\Gamma (-s)} (ux)^{-\theta s-1} \,\mathrm {d}s \right) \\&\times \,\left( \frac{\theta }{2\pi i} \int _{\Sigma } \frac{\Gamma (-t)}{\Gamma (\alpha +1+\theta t)} (uy)^{\theta t} \,\mathrm {d}t \right) \,\mathrm {d}u. \end{aligned}$$

The change of variables \(s \mapsto \theta s+1\) and \(t \mapsto -\theta t\) takes the two integrals into the two functions \(p^{(\alpha ,\theta )}\) and \(q^{(\alpha ,\theta )}\) defined in (1.15) and (1.16), respectively. The identity (1.17) then follows.

This completes the proof of Corollary 1.2.

2.4 Proof of Theorem 1.3

We start with a scaling of the correlation kernel \(K_n^{(\alpha ,\theta )}(x,y) \rightarrow K_n^{(\alpha ,\theta )}(\theta x^{\frac{1}{\theta }},\theta y^{\frac{1}{\theta }})\). By (1.11), it then follows that

$$\begin{aligned}&K_n^{(\alpha , \theta )}(\theta x^{\frac{1}{\theta }},\theta y^{\frac{1}{\theta }})\nonumber \\&\quad = \frac{1}{(2\pi i)^2 x^{\frac{1}{\theta }}} \int _{\mathcal {C}} \,\mathrm {d}s \oint _{\Sigma } \,\mathrm {d}t \frac{\Gamma (s+1)\Gamma (\alpha +1+\theta s )}{\Gamma (t+1)\Gamma (\alpha +1+\theta t)} \frac{\Gamma (t-n+1)}{\Gamma (s-n+1)} \frac{\theta ^{-\theta s} x^{-s}\theta ^{\theta t} y^{t}}{s-t},\qquad \qquad \end{aligned}$$
(2.21)

where \(\mathcal {C}\) and \(\Sigma \) are two contours to be specified later, depending on the choices of reference points.

By setting

$$\begin{aligned} F(z; a) := \log \left( \frac{\Gamma (z+1)\Gamma (\alpha +1+ \theta z)}{\Gamma (z-n+1)} \theta ^{-\theta z} a^{-z} \right) , \quad a\ge 0, \end{aligned}$$
(2.22)

where the branch cut of the logarithm is taken along the negative real axis and we assume that the value of \(\log z\) for \(z \in (-\infty , 0)\) is obtained as the limit from above, we can rewrite (2.21) as

$$\begin{aligned} K_n^{(\alpha , \theta )}\left( \theta x^{\frac{1}{\theta }},\theta y^{\frac{1}{\theta }}\right) = \frac{1}{(2\pi i)^2 x^{\frac{1}{\theta }}} \int _{\mathcal {C}} \,\mathrm {d}s \oint _{\Sigma } \,\mathrm {d}t \frac{e^{F(s; x)}}{e^{F(t; y)}} \frac{1}{s - t}. \end{aligned}$$
(2.23)

We will then perform an asymptotic analysis of (2.23). The basic idea is the following. It is clear that the function F in (2.23) plays the role of a phase function. For large z and proper scalings, F can be approximated by a more elementary function \(\hat{F}\) (see (2.29) below) with the help of Stirling's formula for the gamma function. In general there are two complex conjugate saddle points \(w_{\pm }\) (see (2.32) below) of \(\hat{F}\). In the proof of bulk universality, the two contours are deformed so that one of them passes through the pair of saddle points. It turns out that the main contribution to the integral does not come from the saddle points alone, but from the vertical line segment connecting the two points. In the proof of soft edge universality, the two saddle points coalesce into a single real one. The phase function then behaves like a cubic polynomial around the saddle point (see (2.47) below), which accounts for the appearance of the Airy kernel.

We also note the following possibilities to deform the contours in (2.23). Firstly, it is readily seen that the integration contour for s can be replaced by any infinite contour \(\mathcal {C}\) oriented from \(-i\infty \) to \(i\infty \), as long as \(\Sigma \) lies on the right side of \(\mathcal {C}\). One can further deform \(\mathcal {C}\) such that \(\Sigma \) lies on its left, and the resulting double contour integral remains the same. To see this, let \(\mathcal {C}\) and \(\mathcal {C}'\) be two infinite contours from \(-i\infty \) to \(i\infty \) such that \(\Sigma \) lies between \(\mathcal {C}\) and \(\mathcal {C}'\). Applying the residue theorem to the s-integral (the residue of the integrand at \(s=t\) is \(e^{F(t;x)-F(t;y)}=(y/x)^t\)) gives

$$\begin{aligned} \int _\mathcal {C} \,\mathrm {d}s \oint _{\Sigma } \,\mathrm {d}t \frac{e^{F(s; x)}}{e^{F(t; y)}} \frac{1}{s - t} - \int _{\mathcal {C}'} \,\mathrm {d}s \oint _{\Sigma } \,\mathrm {d}t \frac{e^{F(s; x)}}{e^{F(t; y)}} \frac{1}{s - t} = 2\pi i \int _{\Sigma } \left( \frac{y}{x} \right) ^t \,\mathrm {d}t = 0.\qquad \end{aligned}$$
(2.24)

Hence, the double contour integral does not change if \(\mathcal {C}\) is replaced by \(\mathcal {C}'\). We will use this kind of contour deformation in the proof of the soft edge universality. Similarly, one can show that if \(\Sigma \) is split into two disjoint closed counterclockwise contours \(\Sigma = \Sigma _1 \cup \Sigma _2\), which jointly enclose the poles \(0, 1, \ldots , n-1\), and \(\mathcal {C}\) is an infinite contour from \(-i\infty \) to \(i\infty \) such that \(\Sigma _1\) lies on the left side of \(\mathcal {C}\) and \(\Sigma _2\) on the right side of \(\mathcal {C}\), then the formula (2.23) remains valid. We will use this kind of contour in the proof of the bulk universality.

We now derive the asymptotic behavior of F. Recall that Stirling's formula for the gamma function [41, formula 5.11.1] reads

$$\begin{aligned} \log \Gamma (z) = \left( z-\frac{1}{2}\right) \log z-z+\frac{1}{2}\log (2\pi )+\mathcal {O}\left( \frac{1}{z}\right) \end{aligned}$$
(2.25)

as \(z\rightarrow \infty \) in the sector \(|\arg z |\le \pi -\epsilon \) for some \(\epsilon > 0\). It then follows that if \(|z |\rightarrow \infty \) and \(|z - n |\rightarrow \infty \), while \(\arg z\) and \(\arg (z - n)\) are in \((-\pi + \epsilon , \pi - \epsilon )\), then uniformly

$$\begin{aligned} F(z; a) = \tilde{F}(z; a) + \frac{1}{2}( \log z - \log (z - n))+ \frac{1}{2} \log (2\pi ) +\mathcal {O}(\min (|z |, |z - n |)^{-1}),\nonumber \\ \end{aligned}$$
(2.26)

where

$$\begin{aligned} \tilde{F}(z; a) = (1+\theta )z(\log z - 1) - (z - n)(\log (z - n) - 1) - z\log a. \end{aligned}$$
(2.27)

Furthermore, we have

$$\begin{aligned} \tilde{F}(nz; n^\theta a) = n \hat{F}(z; a) + n\log n, \end{aligned}$$
(2.28)

where

$$\begin{aligned} \hat{F}(z; a) = (1+\theta )z(\log z - 1) - (z - 1)(\log (z - 1) - 1) - z\log a. \end{aligned}$$
(2.29)

Note that if \(\theta =M\in \mathbb {N}\), we encounter the same \(\tilde{F}(z; a)\) and \(\hat{F}(z; a)\) as in [35].

Since

$$\begin{aligned} \hat{F}_{z}(z;x)=(1+\theta )\log z-\log (z-1)-\log x, \end{aligned}$$
(2.30)

the saddle point of \(\hat{F}(z;x)\) satisfies the equation

$$\begin{aligned} z^{1+\theta }=x(z-1). \end{aligned}$$
(2.31)

In particular, if \(x=x_0\in (0, (1 + \theta )^{1 + \theta }/\theta ^\theta )\), which is parameterized through (1.9) by \(\varphi = \varphi (x_0) \in (0, \pi /(1+\theta ))\), one can find two complex conjugate solutions of (2.31) explicitly given by

$$\begin{aligned} w_{\pm } = \frac{\sin ((1+\theta )\varphi )}{\sin (\theta \varphi )} e^{\pm i\varphi }. \end{aligned}$$
(2.32)
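
Indeed, writing \(w_{\pm }=re^{\pm i\varphi }\) with \(r=\sin ((1+\theta )\varphi )/\sin (\theta \varphi )\), the elementary identity \(\sin ((1+\theta )\varphi )e^{i\varphi }-\sin (\theta \varphi )=\sin \varphi \, e^{i(1+\theta )\varphi }\) shows that

$$\begin{aligned} w_{+}-1=\frac{\sin \varphi }{\sin (\theta \varphi )}\,e^{i(1+\theta )\varphi }, \qquad x_0(w_{+}-1)=\left( \frac{\sin ((1+\theta )\varphi )}{\sin (\theta \varphi )}\right) ^{1+\theta }e^{i(1+\theta )\varphi }=w_{+}^{1+\theta }, \end{aligned}$$

where (1.9) is used in the second equality; the relation for \(w_{-}\) follows by taking complex conjugates. Hence \(w_{\pm }\) indeed solve (2.31) with \(x=x_0\).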

For later use, we also define a closed contour

$$\begin{aligned} \left. \tilde{\Sigma }=\left\{ z=\frac{ \sin ( (1+\theta ) \phi ) }{ \sin ( \theta \phi ) } \, e^{i\phi } ~\right| ~ -\frac{\pi }{1+\theta } \le \phi \le \frac{\pi }{1+\theta } \right\} , \end{aligned}$$
(2.33)

which passes through \(w_\pm \) and intersects the real line only at 0 (when \(\phi = \pm \pi /(1 + \theta )\)) and at \(1 + \theta ^{-1}\) (when \(\phi = 0\)). Since the origin is one of the poles of the integrand in (2.23), we further deform \(\tilde{\Sigma }\) slightly near the origin by setting

$$\begin{aligned} \tilde{\Sigma }^{\epsilon } := \{ z \in \tilde{\Sigma } \mid |z |\ge \epsilon \} \cup \text {the arc of }\{ |z |= \epsilon \}\text { connecting }\tilde{\Sigma } \cap \{ |z |= \epsilon \}\text { and passing through }-\epsilon ,\nonumber \\ \end{aligned}$$
(2.34)

with counterclockwise orientation.

Fig. 2 The contours \(\mathcal {C}\) and \(\Sigma \) used in the proof of bulk universality

With the above preparations, we are ready to prove the bulk and soft edge universality for \(K_n^{(\alpha ,\theta )}\).

Proof of (1.19) In view of (1.19), we scale the arguments x and y in (2.23) such that

$$\begin{aligned} x = n^\theta \left( x_0 + \frac{ \xi }{n \rho (\varphi )}\right) , \quad y =n^\theta \left( x_0 + \frac{\eta }{n \rho (\varphi )}\right) , \end{aligned}$$
(2.35)

where \(\xi \) and \(\eta \) are in a compact subset of \(\mathbb {R}\) and \(\rho (\varphi )\) is given in (1.10).

The contours \(\mathcal {C}\) and \(\Sigma \) are chosen in the following way. The contour \(\mathcal {C}\) is simply taken to be an upward straight line passing through the two scaled saddle points \(n w_{\pm }\). This line divides \(n \tilde{\Sigma }^{r}\) into two parts, where r is a small parameter depending on \(\theta \). By further separating these two parts, we define

$$\begin{aligned} \Sigma = \Sigma _{{{\mathrm{cur}}}} \cup \Sigma _{{{\mathrm{ver}}}}, \end{aligned}$$
(2.36)

where \(\Sigma _{{{\mathrm{cur}}}}\) is the part coming from \(n \tilde{\Sigma }^{r}\), and \(\Sigma _{{{\mathrm{ver}}}}\) consists of two vertical line segments connecting the endpoints of \(\Sigma _{{{\mathrm{cur}}}}\). The distance between these two segments is taken to be \(2\epsilon \), with \(\mathcal {C}\) lying in the middle of them; see Fig. 2 for an illustration. The main point here is that, with these choices of \(\mathcal {C}\) and \(\Sigma \), \(\mathrm {Re}\,\hat{F}(z; x_0)\) defined in (2.29) attains its global maximum at \(z=w_{\pm }\) for \(nz \in \mathcal {C}\) and its global minimum at \(z = w_{\pm }\) for \(z \in \tilde{\Sigma }\), which can be proved rigorously with estimates as in [35, Lemma 3.1].

By taking the limit \(\epsilon \rightarrow 0\), it follows that

$$\begin{aligned} K_n^{(\alpha , \theta )}(\theta x^{\frac{1}{\theta }},\theta y^{\frac{1}{\theta }}) = I_1 + I_2, \end{aligned}$$
(2.37)

where (\({{\mathrm{p.v.}}}\) means the Cauchy principal value)

$$\begin{aligned} I_1:= & {} \lim _{\epsilon \rightarrow 0} \frac{1}{(2\pi i)^2 x^{\frac{1}{\theta }}} \int _\mathcal {C} \,\mathrm {d}s \int _{\Sigma _{{{\mathrm{cur}}}}} \,\mathrm {d}t \frac{e^{F(s; x)}}{e^{F(t; y)}} \frac{1}{s - t} \nonumber \\= & {} \frac{1}{(2\pi i)^2 x^{\frac{1}{\theta }}} {{\mathrm{p.v.}}}\int _{n\tilde{\Sigma }^r} \left( \int _\mathcal {C} \,\mathrm {d}s \frac{e^{F(s; x)}}{e^{F(t; y)}} \frac{1}{s - t} \right) \,\mathrm {d}t , \end{aligned}$$
(2.38)

and, by an interchange of integrals and Cauchy's theorem,

$$\begin{aligned} I_2:= & {} \lim _{\epsilon \rightarrow 0} \frac{1}{(2\pi i)^2 x^{\frac{1}{\theta }}} \int _\mathcal {C} \,\mathrm {d}s \int _{\Sigma _{{{\mathrm{ver}}}}} \,\mathrm {d}t \frac{e^{F(s; x)}}{e^{F(t; y)}} \frac{1}{s - t} = \frac{1}{2\pi i x^{\frac{1}{\theta }} } \int ^{nw_+}_{nw_-} \frac{e^{F(s; x)}}{e^{F(s; y)}}\,\mathrm {d}s \nonumber \\= & {} \frac{1}{2\pi i x^{\frac{1}{\theta }}} \int ^{nw_+}_{nw_-} \left( \frac{y}{x} \right) ^s \,\mathrm {d}s = \frac{1}{2\pi i x^{\frac{1}{\theta }} \log (\frac{y}{x})} \left( \left( \frac{y}{x} \right) ^{nw_+} - \left( \frac{y}{x} \right) ^{nw_-} \right) . \end{aligned}$$
(2.39)

Here we note that, as \(\epsilon \rightarrow 0\), the vertical line segment \((nw_-,nw_+)\) is enclosed by \(\Sigma _{{{\mathrm{ver}}}}\), hence Cauchy's theorem is applicable in the first step.

With the values of x, y given in (2.35) and \(w_\pm \) given in (2.32), a straightforward calculation gives us

$$\begin{aligned} I_2 = {}&\frac{\rho (\varphi )x_0^{1-\frac{1}{\theta }}}{2\pi i (\eta - \xi )\left( 1 + \mathcal {O}\left( n^{-1}\right) \right) } \left( e^{\frac{(\eta - \xi )w_+}{\rho (\varphi )x_0}} \left( 1 + \mathcal {O}\left( n^{-1}\right) \right) - e^{\frac{(\eta - \xi )w_-}{\rho (\varphi )x_0}} \left( 1 + \mathcal {O}\left( n^{-1}\right) \right) \right) \nonumber \\ = {}&\rho (\varphi ) x_0^{1-\frac{1}{\theta }} \frac{e^{\pi \cot \varphi \eta }}{e^{\pi \cot \varphi \xi }} \frac{\sin \pi (\xi - \eta )}{\pi (\xi - \eta )} + \mathcal {O}\left( n^{-1}\right) \end{aligned}$$
(2.40)
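
Here, in the second step of (2.40), we have used (1.9), (1.10) and (2.32), which give

$$\begin{aligned} \rho (\varphi )x_0=\frac{\sin \varphi \,\sin ((1+\theta )\varphi )}{\pi \sin (\theta \varphi )}, \qquad \frac{w_{\pm }}{\rho (\varphi )x_0}=\frac{\pi e^{\pm i\varphi }}{\sin \varphi }=\pi \cot \varphi \pm \pi i, \end{aligned}$$

so that \(e^{\frac{(\eta -\xi )w_{+}}{\rho (\varphi )x_0}}-e^{\frac{(\eta -\xi )w_{-}}{\rho (\varphi )x_0}}=2i\,e^{\pi \cot \varphi \,(\eta -\xi )}\sin (\pi (\eta -\xi ))\).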

On the other hand, one can show, in a manner similar to the estimates in [35, Lemma 2.1], that \(F(z; n^\theta x_0)\) attains its global maximum at \(z=nw_{\pm }\) for \(z \in \mathcal {C}\) and its global minimum at \(z = n w_{\pm }\) for \(z \in \Sigma \), which leads to \(I_1=\mathcal {O}(n^{-1/2})\). This, together with (2.37) and (2.40), implies (1.19).

Proof of (1.21) On account of the scalings of x, y in (1.21), we set

$$\begin{aligned} x = n^\theta \left( x_*+ \frac{c_*\xi }{n^{2/3}}\right) , \quad y = n^\theta \left( x_*+ \frac{c_*\eta }{n^{2/3}}\right) , \end{aligned}$$
(2.41)

where \(\xi ,\eta \in \mathbb {R}\), and \(x_*\) and \(c_*\) are given in (1.22).

In this case, the two saddle points \(w_\pm \) coalesce into a single one, i.e.,

$$\begin{aligned} w_+=w_-=z_0=1+\frac{1}{\theta }. \end{aligned}$$
(2.42)

We now select the contours \(\Sigma \) and \(\mathcal {C}\) as illustrated in Fig. 3.

Fig. 3 The contours \(\mathcal {C}\) and \(\Sigma \) used in the proof of soft edge universality

The contour \(\Sigma \) is still a deformation of \(n\tilde{\Sigma }^{r}\), while near the scaled saddle point \(nz_0\), the local part \(\Sigma _{{{\mathrm{loc}}}}\) is defined by

$$\begin{aligned} \Sigma _{{{\mathrm{loc}}}}= & {} \left. \left. \left\{ nz_0 + c_1 n^{\frac{2}{3}} re^{2\pi i/3} ~\right| ~ r \in \left[ 1, n^{\frac{1}{30}}\right] \right\} \cup \left\{ nz_0 + c_1 n^{\frac{2}{3}} re^{-2\pi i/3} ~\right| ~ r \in \left[ 1, n^{\frac{1}{30}}\right] \right\} \nonumber \\&\quad \left. \cup \left\{ nz_0 - \frac{c_1 n^{\frac{2}{3}}}{2} + ic_1 n^{\frac{2}{3}} r ~\right| ~ r \in \left[ -\frac{\sqrt{3}}{2}, \frac{\sqrt{3}}{2}\right] \right\} ,\qquad \end{aligned}$$
(2.43)

where

$$\begin{aligned} c_1:=x_*/c_*=2^{\frac{1}{3}}(1+\theta )^{\frac{1}{3}}/\theta . \end{aligned}$$
(2.44)

The contour \(\mathcal {C}\) is obtained by deforming a straight line. Around \(nz_0\), the local part is defined by

$$\begin{aligned} \left. \left. \mathcal {C}_{{{\mathrm{loc}}}} = \left\{ nz_0 + c_1 n^{\frac{2}{3}} re^{\pi i/3} ~\right| ~ r \in \left[ 1, n^{\frac{1}{30}}\right] \right\} \cup \left\{ nz_0 + c_1 n^{\frac{2}{3}} re^{-\pi i/3} ~\right| ~ r \in \left[ 1, n^{\frac{1}{30}}\right] \right\} \nonumber \\ \left. \cup \left\{ nz_0 + \frac{c_1 n^{\frac{2}{3}}}{2} + ic_1 n^{\frac{2}{3}} r ~\right| ~ r \in \left[ -\frac{\sqrt{3}}{2}, \frac{\sqrt{3}}{2}\right] \right\} .\qquad \end{aligned}$$
(2.45)

As in [35, Eq. 2.69], one can show that the main contribution to the integral (2.23), as \(n\rightarrow \infty \), comes from the part \(\mathcal {C}_{\mathrm {loc}} \times \Sigma _{\mathrm {loc}}\), while the remaining part of the integral is negligible. When \((s,t)\in \mathcal {C}_{\mathrm {loc}} \times \Sigma _{\mathrm {loc}}\), we can approximate \(F(s; n^\theta x_*)\) and \(F(t; n^\theta x_*)\) by \(\tilde{F}\) given in (2.27) (see (2.26)), and further by \(\hat{F}\) defined in (2.29).

With \(z_0\) given in (2.42), it is readily seen that

$$\begin{aligned} \hat{F}_z(z_0; x_*) = 0, \quad \hat{F}_{zz}(z_0; x_*) = 0, \quad \hat{F}_{zzz}(z_0; x_*) = \frac{\theta ^3}{1+\theta }. \end{aligned}$$
(2.46)
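
These identities follow directly from (2.30): we have \(\hat{F}_z(z_0; x_*)=0\) since \(z_0\) solves (2.31) with \(x=x_*\), while

$$\begin{aligned} \hat{F}_{zz}(z;x_*)=\frac{1+\theta }{z}-\frac{1}{z-1}, \qquad \hat{F}_{zzz}(z;x_*)=-\frac{1+\theta }{z^2}+\frac{1}{(z-1)^2}, \end{aligned}$$

so that, at \(z_0=1+1/\theta \), \(\hat{F}_{zz}(z_0;x_*)=\theta -\theta =0\) and \(\hat{F}_{zzz}(z_0;x_*)=-\frac{\theta ^2}{1+\theta }+\theta ^2=\frac{\theta ^3}{1+\theta }\). Note also that, with \(c_1\) from (2.44), \(\frac{1}{6}\hat{F}_{zzz}(z_0;x_*)c_1^3=\frac{1}{3}\), which is responsible for the cubic phase in (2.47) below.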

Hence,

$$\begin{aligned}&\hat{F}(z_0 + n^{-\frac{1}{3}}c_1u; x_*) \nonumber \\&\quad = \hat{F}(z_0; x_*) + \hat{F}_z(z_0; x_*)c_1u n^{-\frac{1}{3}} + \frac{1}{2} \hat{F}_{zz}(z_0; x_*)c^2_1u^2 n^{-\frac{2}{3}} + \frac{1}{6} \hat{F}_{zzz}(z_0; x_*)c^3_1u^3 n^{-1}\nonumber \\&\qquad + \mathcal {O}\left( n^{-\frac{6}{5}}\right) \nonumber \\&\quad = \hat{F}(z_0; x_*) + \frac{u^3}{3n} + \mathcal {O}\left( n^{-\frac{6}{5}}\right) . \end{aligned}$$
(2.47)

By changes of variables

$$\begin{aligned} s = nz_0 + n^{\frac{2}{3}} c_1 u, \quad t = nz_0 + n^{\frac{2}{3}} c_1 v, \end{aligned}$$
(2.48)

it follows from (2.23), (2.26), (2.41), (2.44) and (2.47) that

$$\begin{aligned}&\frac{1}{(2\pi i)^2 x^{\frac{1}{\theta }}} \int _{\mathcal {C}_\mathrm {loc}} \,\mathrm {d}s \oint _{\Sigma _\mathrm {loc}} \,\mathrm {d}t \frac{e^{F(s; x)}}{e^{F(t; y)}} \frac{1}{s - t} \nonumber \\&\quad = \frac{1}{(2\pi i)^2 x^{\frac{1}{\theta }}} \int _{\mathcal {C}_{\mathrm {loc}}} \,\mathrm {d}s \oint _{\Sigma _{\mathrm {loc}}} \,\mathrm {d}t \frac{e^{F(s; n^\theta  x_*)}\left( 1 + n^{-\frac{2}{3}} c^{-1}_1 \xi \right) ^{-s}}{e^{F(t; n^\theta  x_*)} \left( 1 + n^{-\frac{2}{3}} c^{-1}_1 \eta \right) ^{-t}} \frac{1}{s - t} \nonumber \\&\quad = \frac{e^{2^{-\frac{1}{3}}(1+\theta )^{\frac{2}{3}} (\eta - \xi ) n^{\frac{1}{3}}}c_1}{n^{\frac{1}{3}} x_*^{\frac{1}{\theta }} } \left( \frac{1}{(2\pi i)^2} \int _{\mathcal {C}_{0}} \,\mathrm {d}u \int _{\Sigma _{0}} \,\mathrm {d}v \frac{e^{ \frac{1}{3}u^3 - u\xi }}{e^{\frac{1}{3}v^3 - v\eta }} \frac{1}{u - v} + \mathcal {O}\left( n^{-\frac{1}{5}}\right) \right) \nonumber \\&\quad = \frac{2^{\frac{1}{3}}e^{2^{-\frac{1}{3}}( 1 + \theta )^{\frac{2}{3}} (\eta - \xi ) n^{\frac{1}{3}}}}{n^{ \frac{1}{3}}(1+\theta )^{\frac{1}{\theta }+\frac{2}{3}}} \left( K_{{{\mathrm{Ai}}}}(\xi , \eta ) + \mathcal {O}\left( n^{-\frac{1}{5}}\right) \right) , \end{aligned}$$
(2.49)

where \(\mathcal {C}_0\) and \(\Sigma _0\) are the images of \(\mathcal {C}_{\mathrm {loc}}\) and \(\Sigma _{\mathrm {loc}}\) (see (2.45) and (2.43)) under the change of variables (2.48), and the last equality follows from the integral representation of the Airy kernel given in (1.23).

This completes the proof of Theorem 1.3.

3 The Cases when \(\theta =M\in \mathbb {N}\)

In this section, we will show a remarkable connection between \(K^{(\alpha ,\theta )}_n\) and the correlation kernels arising from products of Ginibre random matrices when \(\theta \in \mathbb {N}\). In the limiting case, this relation has been established in [32]. Our result gives new insight into the relation between these two different determinantal point processes. In particular, it provides another perspective on the appearance of the Fuss–Catalan distribution in biorthogonal Laguerre ensembles; see Remark 3.1 below. We start with an introduction to the correlation kernels appearing in recent investigations of products of Ginibre matrices.

3.1 Correlation Kernels Arising from Products of M Ginibre Matrices

Let \(X_j\), \(j=1,\ldots ,M\) be independent complex matrices of size \((n+\nu _j)\times (n+\nu _{j-1})\) with \(\nu _0=0\) and \(\nu _j\ge 0\). Each matrix has independent and identically distributed standard complex Gaussian entries. These matrices are also known as Ginibre random matrices. We then form the product

$$\begin{aligned} Y_M = X_M X_{M-1} \cdots X_1. \end{aligned}$$
(3.1)

When \(M=1\), \(Y_1 = X_1\) defines the Wishart–Laguerre unitary ensemble and it is well-known that the squared singular values of \(Y_1\) form a determinantal point process with the correlation kernel expressed in terms of Laguerre polynomials. Recent studies show that the determinantal structures still hold for general M [2, 4]. According to [2], the joint probability density function of the squared singular values is given by (see [2, formula 18])

$$\begin{aligned} P(x_1, \ldots , x_n) = \frac{1}{\mathcal {Z}_n} \Delta (x_1,\ldots ,x_n)\, \det \left[ w_{k-1}(x_j) \right] _{j,k=1, \ldots , n}, \quad x_j > 0, \end{aligned}$$
(3.2)

where the function \(w_k\) is a Meijer G-function

$$\begin{aligned} \left. w_k(x) = \mathop {{G^{{M,0}}_{{0,M}}}}\nolimits \!\left( {\begin{array}{c} - \\ \nu _M, \nu _{M-1}, \ldots , \nu _2, \nu _1 + k \end{array}} \right| x\right) , \end{aligned}$$
(3.3)

and the normalization constant (see [2, formula 21]) is

$$\begin{aligned} \mathcal {Z}_n = n!\prod _{i=1}^{n}\prod _{j=0}^M \Gamma (i+\nu _j). \end{aligned}$$

Note that the Meijer G-function \(w_k(x)\) can be written as a Mellin–Barnes integral

$$\begin{aligned} w_k(x) = \frac{1}{2\pi i} \int _{c-i\infty }^{c+i\infty } \Gamma (s+\nu _1 + k) \prod _{j=2}^{M} \Gamma (s+\nu _j) x^{-s} \,\mathrm {d}s, \quad k=0, 1, \ldots , \end{aligned}$$
(3.4)

with \(c > 0\). As a consequence of (4.4), it is readily seen that if \(M=1\), (3.2) is equivalent to (1.1) with \(\theta =1\).

The determinantal point process (3.2) again is a biorthogonal ensemble. Hence, one can write the correlation kernel as

$$\begin{aligned} K_n^{\mathbf {\nu }}(x,y) = \sum _{k=0}^{n-1} P_k^\nu (x) Q_k^\nu (y), \end{aligned}$$
(3.5)

where \(\nu \) stands for the collection of parameters \(\nu _1,\ldots ,\nu _M\) and the biorthogonal functions \(P_k^\nu \) and \(Q_k^\nu \) are defined as follows. For each \(k = 0, 1, \ldots ,n-1\), \(P_k^\nu \) is a monic polynomial of degree k and \(Q_k^\nu \) is a linear combination of \(w_0, \ldots , w_k\); they are uniquely determined by the orthogonality

$$\begin{aligned} \int _0^{\infty } P_j^\nu (x) Q_k^\nu (x) \,\mathrm {d}x = \delta _{j,k}. \end{aligned}$$
(3.6)

In particular, we have the following explicit formulas for \(P_k^\nu \) and \(Q_k^\nu \) in terms of Mellin–Barnes integrals and Meijer G-functions [2]. The function \(Q_k^\nu \) admits the representation

$$\begin{aligned} Q_k^{\nu }(x) = \frac{1}{2\pi i \, k!\prod _{j=1}^{M}\Gamma (k+\nu _j+1)} \int _{c-i\infty }^{c+i\infty } \frac{\Gamma (s)}{\Gamma (s-k)}\prod _{j=1}^{M} \Gamma (s+\nu _j)\, x^{-s} \,\mathrm {d}s, \quad c>\max \{0,-\nu _1,\ldots ,-\nu _M\}, \end{aligned}$$
(3.7)

and

$$\begin{aligned} P_n^\nu (x)&= -\prod _{j=0}^M\Gamma (n+\nu _j+1)\mathop {{G^{{0,1}}_{{1,M+1}}}}\nolimits \!\left. \left( {\begin{array}{c} n+1 \\ -\nu _0, -\nu _{1}, \ldots , -\nu _{M-1}, -\nu _{M} \end{array}}\right| x\right) \nonumber \\&= (-1)^n \prod _{j=1}^M \frac{\Gamma (n+\nu _j+1)}{\Gamma (\nu _j+1)} {\; }_1 F_M \left. \left( {\begin{array}{c} -n \\ 1+ \nu _1, \ldots , 1+\nu _M \end{array}} \right| x \right) , \end{aligned}$$
(3.8)

where

$$\begin{aligned} \left. {\; }_p F_q \left( {\begin{array}{c} a_1,\ldots , a_p \\ b_1,\ldots ,b_q \end{array}} \right| z \right) =\sum _{k=0}^\infty \frac{(a_1)_k\cdots (a_p)_k}{(b_1)_k \cdots (b_q)_k}\frac{z^k}{k!} \end{aligned}$$
(3.9)

is the generalized hypergeometric function with

$$\begin{aligned} (a)_k=\frac{\Gamma (a+k)}{\Gamma (a)}=a(a+1)\cdots (a+k-1) \end{aligned}$$
(3.10)

being the Pochhammer symbol; see (4.2) for the second equality in (3.8). The polynomials \(P_k^\nu \) can also be interpreted as multiple orthogonal polynomials [26] with respect to the first M weight functions \(w_j\), \(j=0,\ldots ,M-1\), as shown in [33]. More properties of these polynomials (or in special cases) can be found in [14, 33, 39, 52, 53, 55].

With the aid of (3.7) and (3.8), it is shown in [33, Proposition 5.1] that the correlation kernel admits the following double contour integral representation

$$\begin{aligned} K_n^\nu (x,y) = \frac{1}{(2\pi i)^2} \int _{-1/2-i\infty }^{-1/2+i\infty } \,\mathrm {d}s \oint _{\Sigma } \,\mathrm {d}t \prod _{j=0}^M \frac{\Gamma (s+\nu _j+1)}{\Gamma (t+\nu _j+ 1)} \frac{\Gamma (t-n+1)}{\Gamma (s-n+1)} \frac{x^t y^{-s-1}}{s-t},\nonumber \\ \end{aligned}$$
(3.11)

where \(\Sigma \) is a closed contour going around \(0, 1, \ldots , n-1\) in the positive direction with \(\mathrm {Re}\,t > -1/2\) for \(t \in \Sigma \). For recent progress in the study of products of random matrices, see [3].

We point out that the kernel (3.11) (as well as the biorthogonal functions \(P_k^\nu \) and \(Q_k^\nu \)) is well-defined as long as \(\nu _i>-1\), \(i=1,\ldots ,M\), and has a random matrix interpretation if the \(\nu _i\) are non-negative integers, in which case the particles correspond to the squared singular values of the matrix \(Y_M\).

3.2 Connections Between \(K_n^{(\alpha ,M)}\) and \(K_n^\nu \)

Our final result of this paper is stated as follows.

Theorem 3.1

(Relating \(K_n^{(\alpha ,M)}\) to \(K_n^\nu \)) Let \(p_k^{(\alpha ,\theta )}\), \(q_k^{(\alpha ,\theta )}\), \(P_k^\nu \) and \(Q_k^\nu \) be the functions defined through biorthogonalities (1.3) and (3.6), respectively. If \(\theta =M\in \mathbb {N}\), we have

$$\begin{aligned} \begin{aligned} q_k^{(\alpha ,M)}(x)&=M^{kM}P_{k}^{\tilde{\nu }}\left( \frac{x}{M^M} \right) ,\\ x^\alpha e^{-x} p_k^{(\alpha ,M)}(x)&= Q_{k}^{\tilde{\nu }}\left( \frac{x^M}{M^M}\right) \frac{x^{M-1}}{M^{(k+1)M-1}}, \end{aligned} \end{aligned}$$
(3.12)

where the parameter \(\tilde{\nu }\) is given by an arithmetic sequence

$$\begin{aligned} \tilde{\nu }_j=\frac{\alpha }{M}+\frac{j}{M}-1,\quad j=1,\ldots ,M. \end{aligned}$$
(3.13)

As a consequence, we have

$$\begin{aligned} K_n^{(\alpha ,M)}(x,y)=\frac{x^{M-1}}{M^{M-1}}K_n^{\tilde{\nu }}\left( \frac{y^M}{M^M},\frac{x^M}{M^M}\right) \end{aligned}$$
(3.14)

where \(K_n^{(\alpha ,\theta )}\) and \(K_n^\nu \) are two correlation kernels defined in (1.4) and (3.5), respectively.

Proof

Suppose now that the parameters in \(P_k^\nu \) are given by \(\tilde{\nu }\) in (3.13). We then see from (3.8) that

$$\begin{aligned} P_k^{\tilde{\nu }}(x)&=(-1)^k \prod _{j=1}^M \frac{\Gamma (k+\tilde{\nu }_j+1)}{\Gamma (\tilde{\nu }_j+1)} {\; }\sum _{i=0}^\infty \frac{(-k)_i}{(1+\tilde{\nu }_1)_i \cdots (1+\tilde{\nu }_M)_i}\frac{x^i}{i!} \nonumber \\&=(-1)^k \prod _{j=1}^M \Gamma \left( k+\frac{\alpha +j}{M}\right) {\; }\sum _{i=0}^k \left( {\begin{array}{c}k\\ i\end{array}}\right) \frac{(-x)^i}{\Gamma \left( i+\frac{\alpha +1}{M}\right) \cdots \Gamma \left( i+\frac{\alpha +M}{M}\right) }, \end{aligned}$$
(3.15)

where the second equality follows from the definition of the Pochhammer symbol (3.10). In view of Gauss's multiplication formula [41, formula 5.5.6]

$$\begin{aligned} \Gamma (nz)=(2\pi )^{(1-n)/2}n^{nz-(1/2)}\prod _{k=0}^{n-1}\Gamma \left( z+\frac{k}{n}\right) \end{aligned}$$
(3.16)

with \(z=i+\frac{\alpha +1}{M}\) and \(n=M\), we can further simplify (3.15) to obtain

$$\begin{aligned} P_k^{\tilde{\nu }}(x)=(-1)^k\sum _{i=0}^k \left( {\begin{array}{c}k\\ i\end{array}}\right) \frac{\Gamma (\alpha +1+kM)(-xM^M)^i}{\Gamma (\alpha +1+iM)M^{kM}}. \end{aligned}$$
(3.17)

Combining (3.17) with (2.7), it is readily seen that

$$\begin{aligned} q_k^{(\alpha ,M)}(x)=M^{kM}P_{k}^{\tilde{\nu }}\left( \frac{x}{M^M} \right) , \end{aligned}$$
(3.18)

which is the first identity in (3.12). Note that both \(q_k^{(\alpha ,M)}\) and \(P_{k}^{\tilde{\nu }}\) are monic polynomials of degree k.

To show the second identity in (3.12), we obtain from (3.7) and (3.13) that

$$\begin{aligned}&Q_k^{\tilde{\nu }}\left( \frac{x^M}{M^M}\right) \\&\quad =\frac{1}{2\pi i k! \prod _{j=1}^M \Gamma \left( k+\frac{\alpha +j}{M}\right) } \int _{c-i\infty }^{c+i\infty } \frac{\Gamma (s)}{\Gamma (s-k)}\prod _{j=1}^M \Gamma \left( s+\frac{\alpha }{M}-1+\frac{j}{M}\right) \left( \frac{x}{M}\right) ^{-Ms} \,\mathrm {d}s \\&\quad = \frac{M^{(k+1)M}}{2\pi i k! \Gamma (\alpha +1+kM)} \int _{c-i\infty }^{c+i\infty } \frac{\Gamma (s)}{\Gamma (s-k)} \Gamma (Ms+1-M+\alpha ) x^{-Ms} \,\mathrm {d}s, \end{aligned}$$

where we have made use of (3.16) again in the second step. This, together with (2.14), implies that

$$\begin{aligned} Q_{k}^{\tilde{\nu }}\left( \frac{x^M}{M^M}\right) \frac{x^{M-1}}{M^{(k+1)M-1}}= x^\alpha e^{-x} p_k^{(\alpha ,M)}(x),\end{aligned}$$

as desired.

Finally, the relation (3.14) follows immediately from a combination of (1.4), (3.5) and (3.12). Alternatively, this relation can also be checked directly from the double contour integral representations (1.11) and (3.11) with the help of the multiplication formula (3.16).

This completes the proof of Theorem 3.1. \(\square \)
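As an elementary illustration of Theorem 3.1, take \(k=1\): formulas (3.17) and (3.18) then give

$$\begin{aligned} P_1^{\tilde{\nu }}(x)=x-\frac{\Gamma (\alpha +M+1)}{\Gamma (\alpha +1)M^M}, \qquad q_1^{(\alpha ,M)}(x)=x-\frac{\Gamma (\alpha +M+1)}{\Gamma (\alpha +1)}, \end{aligned}$$

and one checks directly that \(\int _0^\infty q_1^{(\alpha ,M)}(x^M)x^\alpha e^{-x}\,\mathrm {d}x=\Gamma (\alpha +M+1)-\frac{\Gamma (\alpha +M+1)}{\Gamma (\alpha +1)}\Gamma (\alpha +1)=0\), as required by the biorthogonality (1.3).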

Remark 3.1

By setting \(x=y\) in (3.14), we simply have that \(K_n^{(\alpha ,M)}\) is related to \(K_n^{\tilde{\nu }}\) via an \(M\)th root transformation. Letting \(n\rightarrow \infty \), this in turn provides another perspective on the appearance of the Fuss–Catalan distribution in biorthogonal Laguerre ensembles, since it is well known that the Fuss–Catalan distribution characterizes the limiting mean distribution of the squared singular values of products of random matrices [5, 7, 40]. As a concrete example, we may focus on the case \(\theta =M=2\). According to [33, 54], the empirical measure of the scaled squared singular values of the product of two Ginibre matrices converges weakly and in moments to a probability measure with density given by

$$\begin{aligned} \frac{\sqrt{3}}{2^{\frac{4}{3}} \pi x^{\frac{2}{3}}}\left( \left( 1+\sqrt{1-\frac{4x}{27}}\right) ^{1/3}-\left( 1-\sqrt{1-\frac{4x}{27}}\right) ^{1/3}\right) ,\quad x\in \left( 0,\frac{27}{4} \right) . \end{aligned}$$
(3.19)

On the other hand, by [36] (see also [13]), the limiting mean distribution of the scaled particles from the biorthogonal Laguerre ensembles (1.1) with \(\theta =2\) has density given by

$$\begin{aligned} \frac{\sqrt{3}}{2 \pi x^{\frac{1}{3}}}\left( \left( 1+\sqrt{1-\frac{x^2}{27}}\right) ^{1/3}-\left( 1-\sqrt{1-\frac{x^2}{27}}\right) ^{1/3}\right) ,\quad x\in \left( 0,3^{\frac{3}{2}} \right) . \end{aligned}$$
(3.20)

Clearly, the density (3.20) reduces to (3.19) under the change of variables \(x\rightarrow 2\sqrt{x}\), as expected.
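Indeed, substituting \(x=2\sqrt{u}\) in (3.20), so that \(\frac{x^2}{27}=\frac{4u}{27}\), and multiplying by the Jacobian \(\frac{\mathrm {d}x}{\mathrm {d}u}=u^{-1/2}\), we obtain for \(u\in \left( 0,\frac{27}{4}\right) \)

$$\begin{aligned} \frac{\sqrt{3}}{2\pi (2\sqrt{u})^{\frac{1}{3}}}\left( \left( 1+\sqrt{1-\frac{4u}{27}}\right) ^{1/3}-\left( 1-\sqrt{1-\frac{4u}{27}}\right) ^{1/3}\right) \frac{1}{\sqrt{u}} =\frac{\sqrt{3}}{2^{\frac{4}{3}} \pi u^{\frac{2}{3}}}\left( \left( 1+\sqrt{1-\frac{4u}{27}}\right) ^{1/3}-\left( 1-\sqrt{1-\frac{4u}{27}}\right) ^{1/3}\right) , \end{aligned}$$

which is precisely (3.19) with \(x\) replaced by \(u\).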

Remark 3.2

From [33, Theorem 5.3], it follows that, with \(K_n^\nu \) defined in (3.5) and \(\nu _1, \ldots , \nu _M\) fixed,

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n} K_n^{\nu } \left( \frac{x}{n}, \frac{y}{n} \right) = K^{\nu }(x,y), \end{aligned}$$
(3.21)

uniformly for \(x, y\) in compact subsets of the positive real axis, where

$$\begin{aligned}&K^{\nu } (x,y) \nonumber \\&\quad =\frac{1}{(2\pi i)^2} \int _{-1/2-i\infty }^{-1/2+i\infty } \,\mathrm {d}s \int _{\Sigma } \,\mathrm {d}t \prod _{j=0}^M \frac{\Gamma (s+\nu _j+1)}{\Gamma (t+\nu _j+ 1)}\frac{\sin \pi s}{\sin \pi t} \frac{x^t y^{-s-1}}{s-t}\nonumber \\&\quad =\int _0^1 \mathop {{G^{{1,0}}_{{0,M+1}}}}\nolimits \!\left. \left( {\begin{array}{c} - \\ -\nu _0, -\nu _1, \ldots , -\nu _M \end{array}}\right| ux \right) \mathop {{G^{{M,0}}_{{0,M+1}}}}\nolimits \!\left. \left( {\begin{array}{c} - \\ \nu _1, \ldots , \nu _M,\nu _0 \end{array}}\right| uy \right) \,\mathrm {d}u, \end{aligned}$$
(3.22)

and where \(\Sigma \) is a contour starting from \(+\infty \) in the upper half plane and returning to \(+\infty \) in the lower half plane, which encircles the positive real axis and satisfies \(\mathrm {Re}\,t>-1/2\) for \(t\in \Sigma \). This fact, together with our relation (3.14) and the hard edge scaling limit of Borodin (1.5), implies that

$$\begin{aligned} M x^{\alpha }\int _0^1J_{\frac{\alpha +1}{M},\frac{1}{M}}(ux)J_{\alpha +1,M}(\left( uy\right) ^{M})u^\alpha \,\mathrm {d}u = \frac{x^{M-1}}{M^{M-1}}K^{\tilde{\nu }}\left( \frac{y^M}{M^M},\frac{x^M}{M^M}\right) . \end{aligned}$$
(3.23)

The identity (3.23) was first proved in [32], where the authors gave a direct proof by noting that the Wright generalized Bessel functions \(J_{a,b}\) defined in (1.6) can be expressed in terms of Meijer G-functions if \(b\) is a rational number. Since it is easily seen from (3.22) and the multiplication formula (3.16) that

$$\begin{aligned} \frac{x^{M-1}}{M^{M-1}}K^{\tilde{\nu }}\left( \frac{y^M}{M^M},\frac{x^M}{M^M}\right) =K^{(\alpha ,M)}(x,y), \end{aligned}$$
(3.24)

where \(K^{(\alpha ,M)}(x,y)\) is given in (1.14), the argument presented in [32] also gives a direct proof of the identity (1.18) when \(\theta =M\in \mathbb {N}\). To show (1.18) for general \(\theta \ge 1\), we first observe from (1.16), the residue theorem and (1.6) that

$$\begin{aligned} q^{(\alpha ,\theta )}(x)=\sum _{k=0}^{\infty }\frac{(-1)^k}{k!}\frac{\theta x^{k\theta }}{\Gamma (\alpha +1+k\theta )} =\theta J_{\alpha +1,\theta }(x^\theta ). \end{aligned}$$
(3.25)

Similarly, by deforming the vertical line in (1.15) into a loop starting from \(-\infty \) in the lower half plane and returning to \(-\infty \) in the upper half plane that encircles the negative real axis, we again obtain from the residue theorem that

$$\begin{aligned} p^{(\alpha ,\theta )}(x)=\sum _{k=0}^{\infty }\frac{(-1)^k}{k!}\frac{x^{\alpha +k}}{\Gamma \left( \frac{\alpha +1+k}{\theta }\right) } =x^\alpha J_{\frac{\alpha +1}{\theta },\frac{1}{\theta }}(x). \end{aligned}$$
(3.26)

A combination of the above two formulas, (1.14) and (1.17) gives us (1.18).
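We also record, as a final consistency check, that for \(\theta =1\) both (3.25) and (3.26) reduce to the classical Bessel function of the first kind \(J_\alpha \): comparing the series term by term,

$$\begin{aligned} q^{(\alpha ,1)}(x)=J_{\alpha +1,1}(x)=x^{-\alpha /2}J_\alpha (2\sqrt{x}), \qquad p^{(\alpha ,1)}(x)=x^\alpha J_{\alpha +1,1}(x)=x^{\alpha /2}J_\alpha (2\sqrt{x}), \end{aligned}$$

in line with the well-known appearance of the Bessel kernel at the hard edge when \(\theta =1\).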