1 Introduction

In this paper we consider a directed random polymer model in random media in two dimensions (one discrete and one continuous) introduced by O’Connell and Yor [59]. For N independent one-dimensional standard Brownian motions \(B_j(t),~j=1,\ldots ,N\) and a parameter \(\beta (>0)\) representing the inverse temperature, the polymer partition function is defined by

$$\begin{aligned} Z_N(t)=\int _{0<s_1<\cdots <s_{N-1}<t}e^{\beta \left( B_1(s_1)+B_2(s_1,s_2)+\cdots +B_N(s_{N-1},t)\right) }ds_1\cdots ds_{N-1}. \end{aligned}$$
(1.1)

Here \(B_j(s,t)=B_j(t)-B_j(s),~j=2,\ldots ,N\) for \(s<t\), and \(-B_1(s_1)-B_2(s_1,s_2)-\cdots -B_N(s_{N-1},t)\) represents the energy of the polymer. In the last fifteen years much progress has been made on this O’Connell-Yor polymer model, which gives us access to explicit information about \(Z_N(t)\) and the polymer free energy \(F_N(t)=-\log (Z_N(t))/\beta \) [7, 10, 11, 36, 41, 45–47, 54, 57, 72]. The first breakthrough was made in the zero-temperature \((\beta \rightarrow \infty )\) case. In this limit, \(-F_N(t)\) becomes

$$\begin{aligned} f_N(t):=-\lim _{\beta \rightarrow \infty }F_N(t) =\max _{0<s_1<\cdots <s_{N-1}<t} \left( B_1(s_1)+B_2(s_1,s_2)+\cdots +B_N(s_{N-1},t)\right) \end{aligned}$$
(1.2)

where \(-f_N(t)\) is the ground state energy. For \(f_N(t)\), the following relation was established [7, 36]:

$$\begin{aligned}&\text {Prob}\left( f_N(t)\le s\right) =\int _{(-\infty ,s]^N}\prod _{j=1}^Ndx_j \cdot P_{\text {GUE}}(x_1,\ldots ,x_N;t),\end{aligned}$$
(1.3)
$$\begin{aligned}&P_{\text {GUE}}(x_1,\ldots ,x_N;t)= \prod _{j=1}^N\frac{e^{-x_j^2/2t}}{j!t^{j-1}\sqrt{2\pi t}}\cdot \prod _{1\le j<k\le N}(x_k-x_j)^2, \end{aligned}$$
(1.4)

where \(P_{\text {GUE}}(x_1,\ldots ,x_N;t)\) is the probability density function of the eigenvalues in the Gaussian unitary ensemble (GUE) of random matrix theory [3, 35, 52]. This type of connection between the ground state energy of a directed polymer in random media and random matrix theory was first obtained for a directed random polymer model on the discrete space \(\mathbb {Z}_+^2\) [42] by using the Robinson–Schensted–Knuth (RSK) correspondence. Equation (1.3) can be regarded as its continuous analogue. Note that (1.4) contains the square of the Vandermonde determinant \(\prod _{1\le j<k\le N}(x_k-x_j)\), i.e. it is written as a product of two determinants. This feature implies that the m-point correlation function is described by an \(m\times m\) determinant, i.e. the eigenvalues of the GUE are a typical example of determinantal point processes [73]. In addition, based on this fact and an explicit expression of the correlation kernel, one can study the asymptotic behavior of \(f_N(t)\) in the limit \(N\rightarrow \infty \). In [7, 36], it has been shown that under a proper scaling, the limiting distribution of \(f_N(t)\) becomes the GUE Tracy–Widom distribution [75].
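
As an illustration, the identity (1.3) can be checked by direct simulation for small N. The following minimal sketch (in Python; the time discretization, sample sizes and parameter values are ad hoc choices and introduce a small downward bias in the estimate of \(f_N(t)\)) computes \(f_N(t)\) by dynamic programming over discretized Brownian increments and compares its empirical distribution function with that of the largest eigenvalue of an \(N\times N\) GUE matrix scaled as in (1.4):

```python
import numpy as np

# Monte Carlo check of (1.3): Prob(f_N(t) <= s) should equal the probability that the
# largest eigenvalue of an N x N GUE matrix with eigenvalue density (1.4) is <= s.
rng = np.random.default_rng(0)
N, t = 4, 1.0
n_samples, n_steps = 4000, 800
dt = t / n_steps

# f_N(t) by dynamic programming on the time grid:
#   M_j[k] = max_{0<=m<=k} ( M_{j-1}[m] + B_j(k*dt) - B_j(m*dt) ),  f_N(t) ~ M_{N-1}[last].
dB = rng.normal(0.0, np.sqrt(dt), size=(n_samples, N, n_steps))
B = np.concatenate([np.zeros((n_samples, N, 1)), np.cumsum(dB, axis=2)], axis=2)
M = B[:, 0, :]
for j in range(1, N):
    M = B[:, j, :] + np.maximum.accumulate(M - B[:, j, :], axis=1)
f_N = M[:, -1]

# Largest eigenvalue of the GUE normalized as in (1.4): E[H_jj^2] = t, E[|H_jk|^2] = t.
A = (rng.normal(0, np.sqrt(t / 2), (n_samples, N, N))
     + 1j * rng.normal(0, np.sqrt(t / 2), (n_samples, N, N)))
H = (A + A.conj().transpose(0, 2, 1)) / np.sqrt(2)
lam_max = np.linalg.eigvalsh(H)[:, -1]

for s in (1.0, 2.0, 3.0):
    print(s, (f_N <= s).mean(), (lam_max <= s).mean())
```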

In this paper, we provide a representation for a moment generating function of the polymer partition function (1.1) which holds for arbitrary \(\beta (>0)\):

$$\begin{aligned} \mathbb {E}\left[ \exp \left( -\frac{e^{-\beta u} Z_N(t)}{\beta ^{2(N-1)}} \right) \right] =\int _{\mathbb {R}^N}\prod _{j=1}^Ndx_j \, f_F(x_j-u) \cdot W(x_1,\ldots ,x_N;t), \end{aligned}$$
(1.5)
$$\begin{aligned} W(x_1,\ldots ,x_N;t) =\prod _{j=1}^N\frac{1}{j!}\cdot \prod _{1\le j<k\le N} (x_k-x_j)\cdot \det \left( \psi _{k-1}(x_j;t)\right) _{j,k=1}^N, \end{aligned}$$
(1.6)

where \(f_F(x)=1/(e^{\beta x}+1)\) is the Fermi distribution function and

$$\begin{aligned} \psi _k(x;t)=\frac{1}{2\pi }\int _{-\infty }^{\infty }dw\, e^{-iwx-w^2t/2} \frac{(iw)^k}{\Gamma \left( 1+iw/\beta \right) ^N}. \end{aligned}$$
(1.7)

For more details see Definition 1 and Theorem 2 below. This is a simple generalization of (1.3) to the case of finite temperature. We easily find that it recovers (1.3) in the zero-temperature limit (\(\beta \rightarrow \infty \)). Note that the function \(W(x_1,\ldots ,x_N;t)\) is also written as a product of two determinants and thus retains the determinantal structure in (1.4).

In most cases, finding a finite-temperature generalization of zero-temperature results is highly nontrivial and in fact often impossible. But for the O’Connell-Yor polymer model and a few related models, rich mathematical structures have been discovered at finite temperature, and the study of this topic has entered a new stage [2, 10, 25, 37, 57, 61, 66–69]. O’Connell [57] found a connection to the quantum Toda lattice, and based on the developments in its study and on the geometric RSK correspondence, it was revealed that the law of the free energy \(F_N(t)\) is expressed as

$$\begin{aligned} \text {Prob}\left( -F_N(t)+\frac{N-1}{\beta }\log \beta ^2\le s\right) = \int _{(-\infty , s]}dx_1\int _{\mathbb {R}^{N-1}}\prod _{j=2}^{N} dx_{j}\cdot m(x_1,\ldots ,x_N;t). \end{aligned}$$
(1.8)

Here the probability measure \(m(x_1,\ldots ,x_N;t)\prod _{j=1}^Ndx_j\), which is called the Whittaker measure, is defined by the density function \(m(x_1,\ldots ,x_N;t)\) in terms of the Whittaker function \(\Psi _{\lambda }(x_1,\ldots ,x_N)\) (for the definition, see [57]) and the Sklyanin measure \(s_N(\lambda )d\lambda \) (see (2.10) below) as follows,

$$\begin{aligned}&m(x_1,\ldots ,x_N;t)\nonumber \\&\quad =\Psi _0(\beta x_1,\ldots ,\beta x_N)\int _{(i\mathbb {R})^N}d\lambda \, \Psi _{-\lambda /\beta }(\beta x_1,\ldots ,\beta x_N)e^{\sum _{j=1}^N\lambda _j^2t/2}s_N(\lambda /\beta ), \end{aligned}$$
(1.9)

where \(\lambda \) represents \((\lambda _1,\ldots ,\lambda _N)\). In contrast to (1.4), the density function (1.9) is not known to be expressed as a product of determinants, and the process associated with (1.9) does not seem to be determinantal. Nevertheless some determinantal formulas for the O’Connell-Yor polymer have been found. First, in [57], O’Connell showed a determinantal representation for the moment generating function (LHS of (1.5)) in terms of the Sklyanin measure (see (2.9) below). Next, in [10], Borodin and Corwin obtained a Fredholm determinant representation for the same moment generating function (see (4.23) below). A direct proof of the equivalence between the two determinantal expressions was given in [13]. In [10], by considering a continuous limit, the authors also obtained an explicit representation of the free energy distribution for the directed random polymer in two continuous dimensions described by the stochastic heat equation (SHE) [10, 11]. The distribution in this limit, which describes the universal crossover between the Kardar–Parisi–Zhang (KPZ) and the Edwards–Wilkinson universality classes, was first obtained in [2, 66–69] and can also be interpreted as the height distribution for the KPZ equation [44]. Furthermore, [10] considers not only the O’Connell-Yor model but a whole class of stochastic processes with similar Fredholm determinant expressions, the Macdonald processes: probability measures on sequences of partitions written in terms of the Macdonald symmetric functions, which include the Whittaker measure (1.9) as a limiting case.

The purpose of this paper is to investigate further the mechanism behind the appearance of such determinantal structures, and (1.5) is the central formula in our study. Although \(W(x_1,\ldots ,x_N;t)\prod _{j=1}^Ndx_j\) defined by (1.6) is not a probability measure but a signed measure except when \(\beta \rightarrow \infty \), a remarkable feature of this measure is that it is determinantal for arbitrary \(\beta \), in contrast to the Whittaker measure (1.9). This determinantal structure allows us to use the conventional techniques developed in random matrix theory, and from the relation we readily get a Fredholm determinant representation with a kernel built from biorthogonal functions, which can be regarded as a generalization of the Hermite-polynomial kernel for the GUE. In (1.5), the parameter \(\beta \), which originally represents the inverse temperature of the polymer model, appears on the RHS both in the Fermi distribution function \(f_F(x-u)\), with u playing the role of a chemical potential, and in \(\psi _k(x;t)\) (1.7). Together with the determinantal structure, this suggests that the RHS might be related to free fermions at finite temperature. In this connection, a curious relation between the height of the KPZ equation and fermions has been discussed in [28].

To establish (1.5), we introduce a measure on a larger space \(\mathbb {R}^{N(N+1)/2}\). By integrating this measure in two different ways, we get its two marginal weights. In one of them appears a determinant which solves the N-dimensional diffusion equation with a certain condition (see (2.11), (3.6), and (3.7)), while the other one, after a symmetrization, is exactly the RHS of (1.5). The relation (1.5) follows immediately from the equivalence of these two expressions. Our approach is similar to the one by Warren [78] for obtaining the relation (1.3). Actually, in the zero-temperature limit \(\beta \rightarrow \infty \), we see that the integration of the measure is written in terms of the probability measure introduced in [78], which describes the positions of reflected Brownian particles on the Gelfand–Tsetlin cone. Note that the Macdonald processes (in particular the Whittaker process in our case) [10] are other generalizations of [78]. Although the Whittaker process has rich integrable properties, it does not inherit the determinantal structure of [78]. On the other hand, our measure is described without using the Whittaker functions and keeps the determinantal structure. Furthermore, combining (1.5) with the fact that this quantity can be rewritten as the Fredholm determinant found in [10] (Corollary 13 and Proposition 15 below), our approach can be considered as another proof of the equivalence between (4.23) and (2.9) established in [13]. One feature of our proof is that it brings to light the larger determinantal structure behind the two relations.

This paper is organized as follows. In the next section, after stating the definition of a determinantal measure, we give our main result, Theorem 2, and its proof. The proof consists of two major steps: we first introduce in Lemma 3 a determinantal representation for the moment generating function which is a deformed version of the representation (2.9) in [57]. Next we introduce another determinantal measure on the larger space \(\mathbb {R}^{N(N+1)/2}\) and then find two relations about its integrations which play a key role in deriving our main result. In Sect. 3 we show that this approach can be considered as an extension of the one in [78]. In Sect. 4, we consider the Fredholm determinant formula with a biorthogonal kernel obtained by applying conventional random matrix techniques to our main result. The scaling limit to the KPZ equation is discussed in Sect. 5, where we check that our kernel goes to the one obtained in the studies of the KPZ equation. A concluding remark is given in the last section.

2 Main Result

In this section, we introduce a measure \(W(x_1,\ldots ,x_N;t)\prod _{j=1}^Ndx_j\) (1.6), state our main result and give its proof.

2.1 Definition and Result

Definition 1

Let \(\psi _k(x;t),~k=0,1,2,\ldots \) be

$$\begin{aligned} \psi _k(x;t)=\frac{1}{2\pi }\int _{-\infty }^{\infty }dw\, e^{-iwx-w^2t/2} \frac{(iw)^k}{\Gamma \left( 1+iw/\beta \right) ^N}. \end{aligned}$$
(2.1)

For \((x_1,\ldots ,x_N)\in \mathbb {R}^N\), a function \(W(x_1,\ldots ,x_N;t)\) is defined by

$$\begin{aligned} W(x_1,\ldots ,x_N;t)=\prod _{j=1}^N\frac{1}{j!}\cdot \prod _{1\le l<m\le N}(x_m-x_l)\cdot \det \left( \psi _{j-1}(x_k;t)\right) _{j,k=1}^N. \end{aligned}$$
(2.2)

Remark

We find that \(W(x_1,\ldots ,x_N;t)\) is a real function on \(\mathbb {R}^N,\) since by definition \(\psi _k(x;t)\) is real for any \(k=0,1,2,\ldots , N-1\), \(\beta >0\) and \(t>0\). But in general, the positivity of this measure is not guaranteed. For example, \(\psi _0(x;t)\) shows a damped oscillation and can take negative values for some x. Thus at least for the case \(N=1\), \(W(x;t)=\psi _0(x;t)\) can be negative.
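
This damped oscillation is easy to see numerically. A minimal sketch (in Python; the parameters \(N=3\), \(\beta =t=1\) and the truncation of the w-integral in (2.1) are ad hoc choices) evaluating \(\psi _0(x;t)\) on a grid:

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import trapezoid

# Evaluate psi_0(x;t) of (2.1) by quadrature and locate negative values.
beta, t, N = 1.0, 1.0, 3
w = np.linspace(-30.0, 30.0, 20001)              # e^{-w^2 t/2} makes this truncation harmless
gamma_factor = gamma(1.0 + 1j * w / beta) ** N

def psi0(x):
    return trapezoid(np.exp(-1j * w * x - w**2 * t / 2.0) / gamma_factor, w).real / (2.0 * np.pi)

xs = np.linspace(-25.0, 15.0, 801)
vals = np.array([psi0(x) for x in xs])
print("min of psi_0 on the grid:", vals.min())    # strictly negative for these parameters
print("x at the minimum        :", xs[vals.argmin()])
print("integral of psi_0       :", trapezoid(vals, xs))   # should be close to 1
```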

We discuss the zero-temperature limit \(\beta \rightarrow \infty \) of \(W(x_1,\ldots ,x_N;t)\). Noting \(\Gamma (1)=1\), we see

$$\begin{aligned}&\lim _{\beta \rightarrow \infty }\psi _k(x;t)=\frac{1}{2\pi } \int _{-\infty }^{\infty }dw\, e^{-iwx-w^2t/2}(iw)^k =\frac{e^{-x^2/2t}}{\sqrt{2\pi t}} \left( \frac{1}{2t}\right) ^{\frac{k}{2}} H_k\left( \frac{x}{\sqrt{2t}}\right) , \end{aligned}$$
(2.3)

where we used the integral representation of the nth order Hermite polynomial \(H_n(x)\) (see e.g. Sect. 6.1 in [5]),

$$\begin{aligned} H_n(x)=\frac{(-2i)^n}{\sqrt{\pi }} \int _{-\infty }^{\infty }du\, u^ne^{-(u-ix)^2}. \end{aligned}$$
(2.4)

Note that \((t/2)^{k/2}H_k(x/\sqrt{2t})\) is a monic polynomial (i.e. the coefficient of the highest degree is 1) and

$$\begin{aligned} \lim _{\beta \rightarrow \infty }\det \left( \psi _{k-1}(x_j;t)\right) _{j,k=1}^N =\prod _{j=1}^N\frac{e^{-x_j^2/2t}}{t^{j-1}\sqrt{2\pi t}}\cdot \prod _{1\le j<k\le N}(x_k-x_j). \end{aligned}$$
(2.5)

Thus we find

$$\begin{aligned} \lim _{\beta \rightarrow \infty }W(x_1,\ldots ,x_N;t)= P_{\text {GUE}}(x_1,\ldots ,x_N;t), \end{aligned}$$
(2.6)

where \(P_{\text {GUE}}(x_1,\ldots ,x_N;t)\) is defined by (1.4). The function \(W(x_1,\ldots ,x_N;t)\) can be regarded as a deformation of  (1.4) which keeps its determinantal structure.
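
The convergence (2.3) behind this limit can also be observed numerically. A small sketch (the parameters are arbitrary; \(\beta \) is simply taken increasingly large) comparing \(\psi _k(x;t)\) with the Hermite expression on the RHS of (2.3):

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import trapezoid
from numpy.polynomial.hermite import hermval

# Compare psi_k(x;t) of (2.1) with its beta -> infinity limit (2.3).
N, t, k = 3, 1.0, 2
w = np.linspace(-30.0, 30.0, 20001)

def psi_k(x, beta):
    vals = np.exp(-1j * w * x - w**2 * t / 2.0) * (1j * w) ** k / gamma(1 + 1j * w / beta) ** N
    return trapezoid(vals, w).real / (2.0 * np.pi)

def psi_k_limit(x):
    coeff = np.zeros(k + 1); coeff[k] = 1.0       # physicists' Hermite polynomial H_k
    return (np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)
            * (1.0 / (2 * t)) ** (k / 2) * hermval(x / np.sqrt(2 * t), coeff))

for x in (-2.0, 0.0, 1.5):
    print(x, [round(psi_k(x, b), 5) for b in (5.0, 20.0, 100.0)], round(psi_k_limit(x), 5))
```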

In this paper, we provide a determinantal representation for the moment generating function of the polymer partition function (1.1) in terms of the function (2.2).

Theorem 2

$$\begin{aligned} \mathbb {E}\left( e^{-\frac{e^{-\beta u} Z_N(t)}{\beta ^{2(N-1)}}}\right) =\int _{\mathbb {R}^N}\prod _{j=1}^Ndx_j \, f_F(x_j-u) \cdot W(x_1,\ldots ,x_N;t) \end{aligned}$$
(2.7)

where \(f_F(x)=1/(e^{\beta x}+1)\) is the Fermi distribution function.

By (1.2), (2.6) and the simple facts

$$\begin{aligned} \lim _{\beta \rightarrow \infty }e^{-e^{\beta x}}=\lim _{\beta \rightarrow \infty }f_F(x)=\Theta (-x), \end{aligned}$$
(2.8)

we find that the zero temperature limit of (2.7) becomes (1.3).
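
For \(N=1\), Theorem 2 can also be tested directly: the partition function (1.1) reduces to \(Z_1(t)=e^{\beta B_1(t)}\) (there are no integration variables) and \(W(x;t)=\psi _0(x;t)\). A minimal numerical sketch (Monte Carlo for the LHS, quadrature for the RHS; the grids and the values of \(\beta ,t,u\) are ad hoc choices):

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import trapezoid

# Check (2.7) for N = 1, where Z_1(t) = e^{beta B_1(t)} and W(x;t) = psi_0(x;t):
#   E[exp(-e^{-beta u} Z_1(t))]  vs  int dx f_F(x-u) psi_0(x;t).
rng = np.random.default_rng(1)
beta, t, u, N = 1.0, 1.0, 0.5, 1

# LHS by Monte Carlo over B_1(t) ~ N(0, t).
B = rng.normal(0.0, np.sqrt(t), size=2_000_000)
lhs = np.mean(np.exp(-np.exp(beta * (B - u))))

# RHS by quadrature: psi_0 from (2.1), then the x-integral against the Fermi factor.
w = np.linspace(-15.0, 15.0, 6001)
gam = gamma(1 + 1j * w / beta) ** N
x = np.linspace(-30.0, 15.0, 1001)
psi0 = np.array([trapezoid(np.exp(-1j * w * xx - w**2 * t / 2) / gam, w).real for xx in x]) / (2 * np.pi)
fermi = 1.0 / (np.exp(beta * (x - u)) + 1.0)
rhs = trapezoid(fermi * psi0, x)

print("LHS (Monte Carlo):", lhs)   # the two numbers should agree to about three decimals
print("RHS (quadrature): ", rhs)
```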

Because of the determinantal structure of \(W(x_1,\ldots ,x_N;t)\), we can get the Fredholm determinant representation for the moment generating function by using the techniques in random matrix theory. Recently another Fredholm determinant representation has been given based on properties of Macdonald difference operators [10]. The equivalence between them will be shown in Sect. 4.

2.2 Proof

Here we provide a proof of Theorem 2. Our starting point is the representation for the moment generating function given in [57]:

$$\begin{aligned} \mathbb {E}\left( e^{-\frac{e^{-\beta u} Z_N(t)}{\beta ^{2(N-1)}}}\right) =\int _{(i\mathbb {R}-\epsilon )^N} \prod _{j=1}^N \frac{d\lambda _j}{\beta }e^{-u\lambda _j+\lambda _j^2t/2}\Gamma \left( -\frac{\lambda _j}{\beta }\right) ^N\cdot s_N\left( \frac{\lambda }{\beta }\right) , \end{aligned}$$
(2.9)

where \(0<\epsilon <\beta \) and \(s_N(\lambda )d\lambda \) is the Sklyanin measure defined by

$$\begin{aligned} s_N(\lambda )=\frac{1}{(2\pi i)^NN!} \prod _{i<j}\frac{\sin \pi (\lambda _i-\lambda _j)}{\pi } \prod _{i>j}\left( \lambda _i-\lambda _j\right) . \end{aligned}$$
(2.10)

This relation was obtained by using the properties of the Whittaker functions [22, 74] and the Whittaker measure (1.9).

Lemma 3

$$\begin{aligned} \mathbb {E}\left( e^{-\frac{e^{-\beta u} Z_N(t)}{\beta ^{2(N-1)}}}\right) = \int _{\mathbb {R}^N}\prod _{\ell =1}^N dx_\ell f_F(x_\ell -u)\cdot G(x_1,\ldots ,x_N;t) \end{aligned}$$
(2.11)

where \(f_F(x)\) is defined below (2.7) and

$$\begin{aligned}&G(x_1,\ldots ,x_N;t)=\det \left( F_{jk}(x_{N-j+1};t)\right) _{j,k=1}^N, \end{aligned}$$
(2.12)
$$\begin{aligned}&F_{jk}(x;t)=\int _{i\mathbb {R}-\epsilon }\frac{d\lambda }{2\pi i} \frac{e^{-\lambda x+\lambda ^2 t/2}}{\Gamma \left( \frac{\lambda }{\beta }+1\right) ^N} \left( \frac{\pi }{\beta }\cot \frac{\pi \lambda }{\beta }\right) ^{j-1} \lambda ^{k-1} \end{aligned}$$
(2.13)

with \(0<\epsilon <\beta \).

We will discuss an interpretation of (2.12) in the next section. In this definition, we have arranged \(x_i\)’s in the reversed order so as to relate (3.17), the zero-temperature limit of (2.12), to the stochastic processes defined later in (3.20).

Proof

Noting the relation

$$\begin{aligned} \prod _{1\le i<j\le N}{\sin (x_i-x_j)}= & {} \prod _{1\le i<j\le N}\sin x_i \sin x_j \left( \cot x_j-\cot x_i\right) \nonumber \\= & {} \prod _{j=1}^N\sin ^{N-1}x_j\cdot \prod _{1\le k<\ell \le N}\left( \cot x_\ell -\cot x_k\right) \nonumber \\= & {} \prod _{j=1}^N\sin ^{N-1}x_j\cdot \det \left( \cot ^{\ell -1}x_k \right) _{k,\ell =1}^N, \end{aligned}$$
(2.14)

we rewrite RHS of (2.9) as

$$\begin{aligned}&\int _{(i\mathbb {R}-\epsilon )^N}\prod _{j=1}^N \frac{d\lambda _j}{\beta }e^{-u\lambda _j+\lambda _j^2t/2}\Gamma \left( -\frac{\lambda _j}{\beta }\right) ^N\cdot s_N\left( \frac{\lambda }{\beta }\right) \nonumber \\&\quad = \frac{1}{N!}\int _{(i\mathbb {R}-\epsilon )^N} \prod _{j=1}^N\frac{d\lambda _j}{2\pi i\beta }e^{-u\lambda _j+\lambda _j^2t/2} \Gamma \left( -\frac{\lambda _j}{\beta } \right) ^N\left( \frac{\sin \frac{\pi }{\beta }\lambda _j}{\pi }\right) ^{N-1}\nonumber \\&\quad \quad \, \times \det \left( \left( \frac{\pi }{\beta }\cot \frac{\pi }{\beta }\lambda _j\right) ^{k-1}\right) _{j,k=1}^N \det \left( {\lambda _j}^{k-1}\right) _{j,k=1}^N \nonumber \\&\quad =\det \left( \int _{i\mathbb {R}-\epsilon } \frac{d\lambda }{2\pi i\beta }e^{-u\lambda +\lambda ^2t/2} \Gamma \left( -\frac{\lambda }{\beta } \right) ^N\left( \frac{\sin \frac{\pi }{\beta }\lambda }{\pi }\right) ^{N-1} \left( \frac{\pi }{\beta }\cot \frac{\pi }{\beta }\lambda \right) ^{j-1} \lambda ^{k-1} \right) _{j,k=1}^N \end{aligned}$$
(2.15)

where in the last equality, we used the Andréief identity (also known as the Cauchy-Binet identity) [4]: For the functions \(g_j(x),~h_j(x)\), \(j=1,2,\dots ,N,\) such that all integrations below are well-defined, we have

$$\begin{aligned} \frac{1}{N!}\int _{\mathbb {R}^N}\prod _{j=1}^Ndx_j\cdot \det \left( g_k(x_j)\right) _{j,k=1}^N\det \left( h_k(x_j)\right) _{j,k=1}^N=\det \left( \int _{\mathbb {R}}dx g_j(x)h_k(x)\right) _{j,k=1}^N. \end{aligned}$$
(2.16)

We notice that the factor \(e^{-u\lambda }\Gamma (-\lambda /\beta )^N({\sin (\pi \lambda /\beta )}/{\pi })^{N-1}\) in (2.15) can be written as

$$\begin{aligned} e^{-u\lambda }\Gamma \left( -\frac{\lambda }{\beta }\right) ^N \left( \frac{\sin \frac{\pi }{\beta }\lambda }{\pi }\right) ^{N-1}&=\frac{(-1)^{N-1}}{\Gamma \left( 1+\frac{\lambda }{\beta }\right) ^N} \frac{\pi e^{-u\lambda }}{-\sin \frac{\pi }{\beta }\lambda }\nonumber \\&= \frac{(-1)^{N-1}}{\Gamma \left( 1+\frac{\lambda }{\beta }\right) ^N} \int _{-\infty }^{\infty }\beta \frac{e^{-x\lambda }}{e^{\beta (x-u)}+1}dx \end{aligned}$$
(2.17)

where we used the reflection formula for the Gamma function and the relation (4.31). From (2.15) and (2.17), we arrive at the desired expression (2.11). \(\square \)
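
The Fermi-factor integral used in the second equality of (2.17), namely \(\int _{-\infty }^{\infty }\beta e^{-x\lambda }f_F(x-u)dx=-\pi e^{-u\lambda }/\sin (\pi \lambda /\beta )\) for \(-\beta<\text {Re}~\lambda <0\), can be verified numerically. A small sketch (the point \(\lambda \) on \(i\mathbb {R}-\epsilon \) and the truncation of the x-integral are arbitrary choices):

```python
import numpy as np
from scipy.integrate import trapezoid

# Check the Fermi-factor integral behind the second equality of (2.17):
#   int beta * exp(-lambda x) / (exp(beta (x-u)) + 1) dx = -pi exp(-u lambda) / sin(pi lambda / beta),
# valid for -beta < Re(lambda) < 0.
beta, u = 1.0, 0.3
lam = -0.4 + 2.0j                                # a point on i*R - eps with eps = 0.4

x = np.linspace(-60.0, 60.0, 240001)             # integrand decays exponentially at both ends
integrand = beta * np.exp(-lam * x) / (np.exp(beta * (x - u)) + 1.0)
lhs = trapezoid(integrand, x)
rhs = -np.pi * np.exp(-u * lam) / np.sin(np.pi * lam / beta)

print("LHS:", lhs)
print("RHS:", rhs)
```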

From (2.11), we see that for the derivation of our main result (2.7), it is sufficient to prove the relation

$$\begin{aligned} \int _{\mathbb {R}^N} \prod _{\ell =1}^N dx_\ell f_F(x_\ell -u)\cdot G(x_1,\ldots ,x_N;t) =\int _{\mathbb {R}^N}\prod _{j=1}^Ndx_j\, f_F(x_j-u) \cdot W(x_1,\ldots ,x_N;t). \end{aligned}$$
(2.18)

where \(f_F(x)\) is defined below (2.7) and \(W(x_1,\ldots , x_N;t)\) is given in Definition 1. Note that this is a relation for the integrated values on \(\mathbb {R}^N\). To establish this we introduce a measure on the larger space \(\mathbb {R}^{N(N+1)/2}\).

Definition 4

Let \(\underline{x}_k\) be an array \((x^{(1)},\ldots , x^{(k)})\) where \(x^{(j)}=(x^{(j)}_1,\ldots ,x^{(j)}_j)\in \mathbb {R}^{j}\) and \(d\underline{x}_k=\prod _{j=1}^k\prod _{i=1}^jdx^{(j)}_i\). We define a measure \(R_u(\underline{x}_N;t)d\underline{x}_N\) by

$$\begin{aligned} R_u(\underline{x}_N;t)=\prod _{1\le i\le j\le N} f_i(x^{(j)}_i-x^{(j-1)}_{i-1}) \cdot \det \left( F_{1i}(x^{(N)}_j;t)\right) _{i,j=1}^N. \end{aligned}$$
(2.19)

Here \(x^{(j-1)}_0=u\), \(F_{1j}(x;t)\) is given by \(F_{ij}(x;t)\) (2.13) with \(i=1\) and \(f_i(x), i=1,2,\ldots \) is defined by using the Fermi and Bose distribution functions, \(f_F(x):=1/(e^{\beta x}+1)\) and \(f_B(x):=1/(e^{\beta x}-1)\) respectively as follows.

$$\begin{aligned} f_i(x)= {\left\{ \begin{array}{ll} f_F(x), &{} i=1,\\ f_B(x), &{} i\ge 2. \end{array}\right. } \end{aligned}$$
(2.20)

Remark

The reason why both the Bose and Fermi distributions appear in our approach is not clear. The interrelations between them (see (2.28)–(2.30) below) will play an important role in the following discussions.

As in Fig. 1, we usually represent the array \(\underline{x}_N\) graphically as a triangular shape. Although no ordering is imposed on \(\underline{x}_N\), in the zero-temperature limit \(R_u(\underline{x}_N;t)\) has its support on the ordered arrays as in Fig. 1a (see (3.34)). Figure 1b represents the other ordered array, called the Gelfand–Tsetlin pattern (see (3.23)).

Fig. 1 Triangular arrays (\(k=3\)). a An element of \(V_k\) (3.34). b The Gelfand–Tsetlin pattern (an element of \(\text {GT}_k\) (3.23))

As discussed later we will find that the moment generating function of the O’Connell-Yor polymer model is expressed as the integration of this measure \(R_u(\underline{x}_N;t)\) over \(\mathbb {R}^{N(N+1)/2}\). We have other choices for the definition of \(R_u(\underline{x}_N;t)\) which give the same integration value. One example is

$$\begin{aligned} \bar{R}_u(\underline{x}_N;t)= \prod _{\ell =1}^N\frac{1}{\ell !} \det \left( f_i(x^{(\ell )}_j-x^{(\ell -1)}_{i-1})\right) _{i,j=1}^\ell \cdot \det \left( F_{1i}(x^{(N)}_j;t)\right) _{i,j=1}^N. \end{aligned}$$
(2.21)

This comes from the following consideration. Let \(f_{\text {sym}}(\underline{x}_N)\) be a function which is symmetric under permutations of \(x^{(j)}_1,\ldots ,x^{(j)}_j\) for each \(j\in \{1,2,\ldots ,N\}\). Then we see that \(R_u(\underline{x}_N;t)\) (2.19) and \(\bar{R}_u(\underline{x}_N;t)\) have the same integration value:

$$\begin{aligned} \int _{\mathbb {R}^{N(N+1)/2}}d\underline{x}_N f_\mathrm{{sym}}(\underline{x}_N)R_u(\underline{x}_N;t) = \int _{\mathbb {R}^{N(N+1)/2}} d\underline{x}_N f_\mathrm{{sym}}(\underline{x}_N)\bar{R}_{u}(\underline{x}_N;t). \end{aligned}$$
(2.22)

It can be shown as follows. From the symmetry of \(f_{\text {sym}}(\underline{x}_N)\), LHS of the equation above becomes

$$\begin{aligned} \int _{\mathbb {R}^{N(N+1)/2}}d\underline{x}_N f_\mathrm{{sym}}(\underline{x}_N)R_u(\underline{x}_N;t)= \int _{\mathbb {R}^{N(N+1)/2}}d\underline{x}_N f_\mathrm{{sym}}(\underline{x}_N)\tilde{R}_u(\underline{x}_N;t). \end{aligned}$$
(2.23)

Here \(\tilde{R}_u(\underline{x}_N;t)\) is defined by

$$\begin{aligned} \tilde{R}_u(\underline{x}_N;t)=\prod _{\ell =1}^N\frac{1}{\ell !}\sum _{\sigma ^{(j)}\in S_j,~j=1,\ldots ,N} R_u\left( \underline{x}_N^\sigma ;t\right) , \end{aligned}$$
(2.24)

where \(S_j\) is the set of permutations of \(1,2,\ldots ,j\) and \(\underline{x}_N^\sigma \) denotes \((x^{\sigma ^{(1)}},\ldots , x^{\sigma ^{(N)}})\) with \(x^{\sigma ^{(j)}}=(x^{(j)}_{\sigma ^{(j)}(1)},\ldots , x^{(j)}_{\sigma ^{(j)}(j)})\). We easily find the equivalence \(\tilde{R}_u(\underline{x}_N;t)=\bar{R}_u(\underline{x}_N;t)\). Note that

$$\begin{aligned} R_{u}(\underline{x}_N^\sigma ;t)&= \prod _{1\le i\le j\le N} f_i\left( x^{(j)}_{\sigma ^{(j)}(i)}-x^{(j-1)}_{\sigma ^{(j-1)}(i-1)}\right) \cdot \det \left( F_{1i}(x^{(N)}_{\sigma ^{(N)}(j)};t)\right) _{i,j=1}^N\nonumber \\&=\mathrm{{sgn}}\sigma ^{(N)}\prod _{j=1}^N\prod _{i=1}^{j} f_i\left( x^{(j)}_{\sigma ^{(j)}(i)}-x^{(j-1)}_{\sigma ^{(j-1)}(i-1)}\right) \cdot \det \left( F_{1i}(x^{(N)}_{j};t)\right) _{i,j=1}^N\nonumber \\&=\prod _{j=1}^N\mathrm{{sgn}}\tau ^{(j)} \prod _{i=1}^{j} f_i\left( x^{(j)}_{\tau ^{(j)}(i)}-x^{(j-1)}_{i-1}\right) \cdot \det \left( F_{1i}(x^{(N)}_{j};t)\right) _{i,j=1}^N. \end{aligned}$$
(2.25)

Here in the last equality, \(\tau ^{(j)}\in S_j,~j=1,2,\ldots ,N\) is defined by using \(\sigma ^{(j-1)}\) and \(\sigma ^{(j)}\) as \(\sigma ^{(j-1)}\tau ^{(j)}(k)=\sigma ^{(j)}(k),~k=1,\ldots ,j\), where we regard \(\sigma ^{(j-1)}\) as an element of \(S_j\) with \(\sigma ^{(j-1)}(j)=j\); there we also used \(\sigma ^{(N)}=\prod _{j=1}^N\tau ^{(j)}\). Substituting (2.25) into (2.24) and using the definition of the determinant, we have \(\tilde{R}_u(\underline{x}_N;t)=\bar{R}_u(\underline{x}_N;t)\).

The function \(\bar{R}_u(\underline{x}_N;t)\) (2.21) has a similar determinantal structure to the Schur process [60]. The Schur process is a probability measure on the sequence of partitions \(\{\lambda ^{(j)}\}_{j=1,\ldots ,N},\) where \(\lambda ^{(j)}:=\{(\lambda ^{(j)}_1,\ldots ,\lambda ^{(j)}_j)|\lambda ^{(j)}_i\in \mathbb {Z},~\lambda ^{(j)}_1 \ge \cdots \ge \lambda ^{(j)}_j\ge 0\}\), described as products of the skew Schur functions \(s_{\lambda /\mu }(x_1,\ldots ,x_n)\). For the ascending case (see Definition 2.7 in [10]), the probability measure is expressed as

$$\begin{aligned} \prod _{i,j=1}^N\frac{1}{1-a_ib_j}\cdot \prod _{k=1}^{N}s_{\lambda ^{(k)}/\lambda ^{(k-1)}} (a_k)\cdot s_{\lambda ^{(N)}}(b_1,\ldots , b_N), \end{aligned}$$
(2.26)

where \(a_j,~b_j,~j=1,\ldots ,N\) are positive variables. We note that \(s_{\lambda ^{(k)}/\lambda ^{(k-1)}} (a_k)\) is expressed as a kth order determinant and \(s_{\lambda ^{(N)}}(b_1,\ldots , b_N)\) as an Nth order determinant by the Jacobi-Trudi identity [50],

$$\begin{aligned} s_{\lambda /\mu }(x_1,\ldots ,x_n)=\det \left( h_{\lambda _i-\mu _j+j-i}(x_1,\ldots ,x_n)\right) _{i,j=1}^{\ell (\lambda )}, \end{aligned}$$
(2.27)

where \(h_{k}(x_1,\ldots ,x_n)\) is the complete homogeneous symmetric polynomial of degree k and \(\ell (\lambda )\) is the length of the partition \(\lambda \). Thus (2.21) and (2.26) have a common structure: N products of determinants of increasing size times an Nth order determinant.

In the following we provide the relations for two marginals of \(R_u(\underline{x}_N;t)\) (2.19), from which (2.18) immediately follows. For this purpose, we give two formulas for \(f_F(x)\) and \(f_B(x)\) (2.20). First we define a multiple convolution \(g^{*(m)}f(x),~m=0,1,2,\ldots \), for a function f(x) on \(\mathbb {R}\) and an integral operator g with the kernel \(g(x-y)\), as

$$\begin{aligned} g^{*(0)}f(x)=f(x),~g^{*(k)}f(x)=\int _{-\infty }^{\infty } dy\, g(x-y)g^{*(k-1)}f(y),~k=1,2,\ldots . \end{aligned}$$
(2.28)

Using this definition, the formulas are written as follows:

Lemma 5

We regard all integrations below as the Cauchy principal values. For \(\beta >0\), \(a\in \mathbb {C}\) with \(-\beta <\text {Re~}a<0\) and \(m=0,1,2,\ldots \), we have

$$\begin{aligned}&f^{*(m)}_Be^{ax} =\left( \frac{\pi }{\beta } \cot \left( \frac{\pi a}{\beta } \right) \right) ^{m} e^{ax}, \end{aligned}$$
(2.29)
$$\begin{aligned}&f^{*(m)}_Bf_F(x) =q_m(x)f_F(x), \end{aligned}$$
(2.30)

where \(q_m(x)\) is an mth order polynomial with the coefficient of the highest degree being \(1/m!\).

A proof of this lemma will be given in Appendix 1. The polynomial \(q_m(x)\) in (2.30) is defined inductively by (7.11)-(7.13). But in our later discussion we will not use its explicit form.
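
The case \(m=1\) of (2.29) can be checked numerically. In the sketch below (with arbitrary parameters; the folding of the principal value onto the half line is only a computational device and is not part of the argument), the substitution \(z=x-y\) followed by symmetrization in \(z\rightarrow -z\) removes the singularity of \(f_B\) at the origin:

```python
import numpy as np
from scipy.integrate import trapezoid

# Check (2.29) for m = 1:  PV int f_B(x-y) e^{a y} dy = (pi/beta) cot(pi a/beta) e^{a x}.
# With z = x - y this equals e^{a x} * PV int f_B(z) e^{-a z} dz; folding the principal
# value onto z > 0 gives a regular integrand (its value at z -> 0 is -(2a/beta + 1)).
beta, a, x = 1.0, -0.3, 0.7                      # need -beta < a < 0

def f_B(z):
    return 1.0 / (np.exp(beta * z) - 1.0)

z = np.linspace(1e-6, 80.0, 800001)              # the omitted piece [0, 1e-6] contributes O(1e-6)
folded = f_B(z) * np.exp(-a * z) + f_B(-z) * np.exp(a * z)
lhs = np.exp(a * x) * trapezoid(folded, z)
rhs = (np.pi / beta) / np.tan(np.pi * a / beta) * np.exp(a * x)

print("LHS:", lhs)
print("RHS:", rhs)
```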

From (2.13) and (2.29), we readily obtain for \(m=0,1,2,\ldots \),

$$\begin{aligned} \tilde{f}_B^{*(m)}F_{jk}\left( x;t\right) =F_{j+m,k}(x;t), \end{aligned}$$
(2.31)

where we define \(\tilde{f}_B(x):=f_B(-x)\).

Using (2.30) and (2.31), we obtain the following relations.

Theorem 6

Let the measures \(dA_1\) and \(dA_2\) be

$$\begin{aligned} dA_1=\prod _{2\le i\le j\le N}d x^{(j)}_i,~dA_2=\prod _{1\le i\le j\le N-1}dx^{(j)}_i. \end{aligned}$$
(2.32)

Then we have

$$\begin{aligned}&\int _{\mathbb {R}^{N(N-1)/2}}dA_1 R_u(\underline{x}_N;t) = G(x^{(1)}_1,\ldots ,x^{(N)}_1;t)\prod _{j=1}^N f_F(x^{(j)}_1-u), \end{aligned}$$
(2.33)
$$\begin{aligned}&\int _{\mathbb {R}^{N(N-1)/2}}dA_2 R_u(\underline{x}_N;t) =\bar{W}(x^{(N)}_1,\ldots ,x^{(N)}_N;t)\prod _{j=1}^N f_F(x^{(N)}_j-u). \end{aligned}$$
(2.34)

Here \(G(x^{(1)}_1,\ldots ,x^{(N)}_1;t)\) is defined by (2.12) and

$$\begin{aligned} \bar{W}(x^{(N)}_1,\ldots ,x^{(N)}_N;t)=\prod _{j=1}^{N-1} q_j\left( x^{(N)}_{j+1}-u\right) \cdot \det \left( F_{1j}\left( x^{(N)}_k;t\right) \right) _{j,k=1}^N, \end{aligned}$$
(2.35)

where \(q_j(x)\) is defined below (2.30).

We easily see that (2.18) can be obtained from these relations (2.33) and (2.34): integrating both sides of them over the remaining degrees of freedom (\((x^{(1)}_1,\ldots , x^{(N)}_1)\) for (2.33) and \((x^{(N)}_1,\ldots ,x^{(N)}_N)\) for (2.34)), we get two different expressions for the integrated value of \(R_u(\underline{x}_N;t)\)

$$\begin{aligned}&\int _{\mathbb {R}^{N(N+1)/2}}d\underline{x}_N R_u(\underline{x}_N;t) =\int _{\mathbb {R}^N}\prod _{j=1}^N dx^{(j)}_1f_F\left( x^{(j)}_1-u \right) \cdot G\left( x^{(1)}_1,\ldots ,x^{(N)}_1;t \right) , \end{aligned}$$
(2.36)
$$\begin{aligned}&\int _{\mathbb {R}^{N(N+1)/2}}d\underline{x}_N R_u(\underline{x}_N;t) =\int _{\mathbb {R}^N}\prod _{j=1}^N dx^{(N)}_j f_F\left( x^{(N)}_j-u \right) \cdot \bar{W}\left( x^{(N)}_1,\ldots ,x^{(N)}_N;t \right) , \end{aligned}$$
(2.37)

where \(d\underline{x}_N=\prod _{1\le i\le j\le N}dx^{(j)}_i\). The RHS of the second relation is further rewritten as

$$\begin{aligned} \int _{\mathbb {R}^N}\prod _{j=1}^Ndx^{(j)}_1f_F(x^{(j)}_1-u)\cdot \frac{1}{N!} \sum _{\sigma ^{(N)}\in S_N}\bar{W}\left( x_{\sigma ^{(N)}(1)},\ldots ,x_{\sigma ^{(N)}(N)};t \right) , \end{aligned}$$
(2.38)

and the symmetrized \(\bar{W}(x_1,\ldots ,x_N;t)\) in this equation is nothing but \(W(x^{(N)}_1,\ldots ,x^{(N)}_N;t)\) (2.2) since

$$\begin{aligned}&\frac{1}{N!}\sum _{\sigma ^{(N)}\in S_N}\bar{W}(x_{\sigma ^{(N)}(1)},\ldots ,x_{\sigma ^{(N)}(N)};t)\nonumber \\&\quad =\frac{1}{N!}\cdot \det \left( q_{j-1}(x^{(N)}_k)\right) _{j,k=1}^N\det \left( F_{1j}(x^{(N)}_k;t)\right) _{j,k=1}^N\nonumber \\&\quad =W(x^{(N)}_1,\ldots ,x^{(N)}_N;t). \end{aligned}$$
(2.39)

Here in the second equality we used the fact that \(q_{j}(x)\) is a jth order polynomial with the coefficient of the highest degree being \(1/j!\) and that \(F_{1j}(x;t)=\psi _{j-1}(x;t)\).

Proof of Theorem 6

First we derive (2.33). By the definition of (2.19), LHS of (2.33) becomes

$$\begin{aligned} \prod _{j=1}^N f_F(x^{(j)}_1-u) \cdot \det \left( \tilde{f}_B^{*(k-1)}F_{1j}\left( x^{(N-k+1)}_1;t\right) \right) _{j,k=1}^N. \end{aligned}$$
(2.40)

Here \(\tilde{f}_B(x)\) is defined below (2.31). Applying (2.31) to this equation we obtain (2.33).

Next we derive (2.34). We see that the factor \(dA_2\prod _{1\le i\le j\le N}f_i(x^{(j)}_i-x^{(j-1)}_{i-1})\) in \(R_u(\underline{x_N};t)\) (2.19) can be decomposed to

$$\begin{aligned}&dA_2\prod _{1\le i\le j\le N} f_{i}\left( x^{(j)}_{i}-x^{(j-1)}_{i-1}\right) \nonumber \\&\quad =\prod _{k=1}^{N-1} \left( \prod _{i=1}^{N-k}dx^{(i+k-1)}_i\cdot \prod _{j=1}^{N-k+1} f_j\left( x^{(j+k-1)}_j-x^{(j+k-2)}_{j-1}\right) \right) , \end{aligned}$$
(2.41)

and from (2.30) the integration of the factor for each k is represented as

$$\begin{aligned}&\int _{\mathbb {R}^{N-k}}\prod _{1\le i\le N-k}dx^{(i+k-1)}_i \prod _{j=1}^{N-k+1}f_j\left( x^{(j+k-1)}_j-x^{(j+k-2)}_{j-1} \right) =f^{*(N-k)}_Bf_F\left( x^{(N)}_{N-k+1}-u \right) \nonumber \\&\quad =q_{N-k}\left( x^{(N)}_{N-k+1}-u \right) f_F\left( x^{(N)}_{N-k+1}-u \right) \end{aligned}$$
(2.42)

where \(q_m(x)\) is given in (2.30). Eq. (2.34) follows immediately from this relation. \(\square \)

3 Dynamics of the Two Marginals

The purpose of this section is to have a better understanding of the two quantities, \(W(x_1,\ldots ,x_N;t)\) (2.2) and \(G(x_1,\ldots ,x_N;t)\) (2.12), which arose as partially integrated quantities of \(R_u(\underline{x}_N;t)\) (2.19) in Theorem 6 (for W a symmetrization is also necessary, see (2.39)). We will first consider the evolution equations of these two quantities. Next we will see that the zero-temperature limit of the equation for \(W(x_1,\ldots ,x_N;t)\) is nothing but the evolution equation for the Brownian particles with reflection interaction while \(W(x_1,\ldots ,x_N;t)\) satisfies the one for the GUE Dyson’s Brownian motion [31] regardless of the value of \(\beta \). Furthermore we will find that our idea using \(R_u(\underline{x}_N;t)\) in an enlarged space \(\mathbb {R}^{N(N+1)/2}\) (Theorem 6) is similar to the argument in [78] although we need a modification of  [78] about the ordering in an enlarged space.

3.1 Evolution Equations of \(G(x_1,\ldots ,x_N;t)\) and \(W(x_1,\ldots ,x_N;t)\)

Let us first summarize the properties of \(F_{jk}(x;t)\) \(j,k\in \{1,2,\ldots \}\) (2.13) all of which are easily confirmed by simple observations:

$$\begin{aligned}&F_{1k}(x;t)=\psi _{k-1}(x;t), \end{aligned}$$
(3.1)
$$\begin{aligned}&\frac{\partial }{\partial t}F_{jk}(x;t)=\frac{1}{2}\frac{\partial ^2}{\partial x^2}F_{jk}(x;t),\end{aligned}$$
(3.2)
$$\begin{aligned}&\int _{-\infty }^\infty dx \tilde{f}_B(x-y) F_{jk}(x;t)=F_{j+1k}(y;t), \end{aligned}$$
(3.3)
$$\begin{aligned}&-\frac{\beta ^2}{\pi ^2}\int _{-\infty }^\infty dx\frac{e^{\frac{\beta }{2}(x-y)}}{e^{{\beta }(x-y)}-1}F_{j+1k}(x;t)=F_{jk}(y;t), \end{aligned}$$
(3.4)

where \(\psi _k(x;t)\) in (3.1) and \(\tilde{f}_B(x)\) in (3.3) are defined by (2.1) and below (2.31). Eq. (3.3) is equivalent to (2.31) while (3.4) is obtained from the relation

$$\begin{aligned} \frac{\beta ^2}{\pi ^2}\int _{-\infty }^{\infty }dx\,\frac{e^{\frac{\beta }{2}(x-y)}}{e^{\beta (x-y)}-1}e^{-bx} =-\frac{\beta }{\pi } \tan \left( \frac{\pi }{\beta }b\right) e^{-by}, \end{aligned}$$
(3.5)

for \(|\text {Re}~b|<\beta /2\). This relation is easily given by (2.29) with \(a=b-\beta /2\).

We see that due to (3.2) and the multilinearity of a determinant, \(G(x_1,\ldots ,x_N;t)\) (2.12) satisfies the diffusion equation.

$$\begin{aligned} \frac{\partial }{\partial t}G(x_1,\ldots ,x_N;t) =\frac{1}{2}\sum _{j=1}^N\frac{\partial ^2}{\partial x_j^2}G(x_1,\ldots ,x_N;t). \end{aligned}$$
(3.6)

In addition, by (3.4), it satisfies the condition

$$\begin{aligned} -\frac{\beta ^2}{\pi ^2}\int _{-\infty }^\infty dx_{j}\frac{e^{-\frac{\beta }{2}(x_j-x_{j+1})}}{e^{{\beta }(x_j-x_{j+1})}-1}G(x_1,\ldots ,x_N;t)=0, \end{aligned}$$
(3.7)

for \(j=1,2,\ldots , N-1\). Though this condition is unusual, we will see that it is regarded as a finite temperature generalization of the Neumann boundary conditions at \(x_j=x_{j+1},~j=1,\ldots , N-1\) in the zero temperature limit (see (3.19)).

On the other hand, from (3.2) with the harmonicity of the Vandermonde determinant in (2.2), we see that \(W(x_1,\ldots ,x_N;t)\) satisfies the Kolmogorov forward equation of the GUE Dyson’s Brownian motion [31], which is a dynamical generalization of the GUE,

$$\begin{aligned} \frac{\partial }{\partial t}W(x_1,\ldots ,x_N;t)&=\frac{1}{2}\sum _{j=1}^N\frac{\partial ^2}{\partial x_j^2}W(x_1,\ldots ,x_N;t)\nonumber \\&\qquad \;\;-\,\sum _{ j=1}^N \frac{\partial }{\partial x_j}\left( \sum _{\begin{array}{c} m=1\\ m\ne j \end{array}}^N\frac{1}{x_j-x_m} \right) W(x_1,\ldots ,x_N;t). \end{aligned}$$
(3.8)

The time evolution equation for the GUE Dyson’s Brownian motion can be transformed to the imaginary-time Schrödinger equation with free-Fermionic Hamiltonian (e.g. see Chapter 11 in [35]). On the other hand note that the density function of the Whittaker measure (1.9) does not solve such a simple free-Fermionic time evolution equation (3.8).

3.2 The Zero-Temperature Limit and a Brownian Particle System with Reflection Interactions

Let us consider the zero temperature limit of the Eqs. (3.6) with (3.7) and (3.8). Note that for \(x\ne 0\),

$$\begin{aligned} -\lim _{\beta \rightarrow \infty }\tilde{f}_B(x)=1_{>0}(x),~-\lim _{\beta \rightarrow \infty }f_B(x) =\lim _{\beta \rightarrow \infty }f_F(x)=1_{<0}(x), \end{aligned}$$
(3.9)

where \(\tilde{f}_B(x)\) is defined below (2.31) and \(1_{>0}(x)\) and \(1_{<0}(x)\) are the step functions defined by

$$\begin{aligned} 1_{>0}(x)= {\left\{ \begin{array}{ll} 1, &{} x >0,\\ 0, &{} x\le 0, \end{array}\right. } ~~1_{<0}(x)= {\left\{ \begin{array}{ll} 0, &{} x>0,\\ 1, &{} x\le 0. \end{array}\right. } \end{aligned}$$
(3.10)

In addition we have

$$\begin{aligned} \lim _{\beta \rightarrow \infty }F_{jk}(x;t)=\mathcal {F}_{j-k}(x;t), \end{aligned}$$
(3.11)

where \(\mathcal {F}_n(x;t)\) is defined for \(n\in \mathbb {Z}\) and \(\epsilon >0\) as

$$\begin{aligned} \mathcal {F}_{n}(x;t)=\int _{i\mathbb {R}-\epsilon }\frac{d\lambda }{2\pi i} \frac{e^{-\lambda x+\lambda ^2 t/2}}{\lambda ^n}. \end{aligned}$$
(3.12)

Here we summarize a few properties of the function which are the zero temperature limit of (3.1)–(3.4) for \(F_{jk}(x;t)\).

$$\begin{aligned}&\mathcal {F}_{-k}(x;t)=\lim _{\beta \rightarrow \infty }\psi _k(x;t)= \frac{e^{-x^2/2t}}{\sqrt{2\pi t}} \left( \frac{1}{2t}\right) ^{\frac{k}{2}}H_k(x/\sqrt{2t}),~k=0,1,2,\ldots ,\end{aligned}$$
(3.13)
$$\begin{aligned}&\frac{\partial }{\partial t}\mathcal {F}_n(x;t)=\frac{1}{2}\frac{\partial ^2}{\partial x^2}\mathcal {F}_n(x;t),\end{aligned}$$
(3.14)
$$\begin{aligned}&\int _{-\infty }^{y}dx\mathcal {F}_n(x;t)=-\mathcal {F}_{n+1}(y;t), \end{aligned}$$
(3.15)
$$\begin{aligned}&\frac{\partial }{\partial x }\mathcal {F}_n(x;t)\left| _{x\rightarrow y}\right. =-\mathcal {F}_{n-1}(y;t), \end{aligned}$$
(3.16)

where in (3.13), \(\psi _k(x;t)\) is defined by (2.1) and \(H_k(x)\) is the kth order Hermite polynomial [5]. The second equality in (3.13) has already appeared as (2.3). Note that (3.16) corresponds to the zero-temperature limit of (3.4): the RHS of (3.5) goes to \(-be^{-by}\) in the zero-temperature limit, so the integral operator appearing in (3.4) acts as differentiation in this limit when its action is restricted to exponentials \(e^{-bx}\).

Let \(\mathcal {G}(x_1,\ldots ,x_N;t)\) be the zero-temperature limit of \(G(x_1,\ldots ,x_N;t)\) (2.12) defined on \(\mathbb {R}^N\). From (3.11), we find

$$\begin{aligned} \mathcal {G}(x_1,\ldots ,x_N;t)=\det \left( \mathcal {F}_{j-k}(x_{N-j+1};t)\right) _{j,k=1}^N. \end{aligned}$$
(3.17)

The function \(\mathcal {G}(x_1,\ldots ,x_N;t)\) appeared as a solution to the Schrödinger equation for the derivative nonlinear Schrödinger type model [70]. As discussed in [70], using (3.14) and (3.16) with basic properties of a determinant, we find that for \(x_1\ne \cdots \ne x_N\), \(\mathcal {G}(x_1,\ldots ,x_N;t)\) satisfies the diffusion equation,

$$\begin{aligned}&\frac{\partial }{\partial t}\mathcal {G}(x_1,\ldots ,x_N;t) =\frac{1}{2}\sum _{j=1}^N\frac{\partial ^2}{\partial x_j^2} \mathcal {G}(x_1,\ldots ,x_N;t), \end{aligned}$$
(3.18)

with the boundary condition

$$\begin{aligned} \frac{d}{dx_{j}}\mathcal {G}(x_1,\ldots ,x_N;t)|_{x_{j}\rightarrow x_{j+1}}=0, {\text {~for~}}j=1,\ldots ,N-1. \end{aligned}$$
(3.19)

The probabilistic interpretation of \(\mathcal {G}(x_1,\ldots ,x_N;t)\) has been given in [78]. Let \(X_i(t),~i=1,\ldots ,N\) be the N-component stochastic process described by

$$\begin{aligned} X_i(t)=y_i+B_i(t)+L^-_i(t), \end{aligned}$$
(3.20)

where \(y_i\in \mathbb {R}\) satisfying \(y_1<y_2<\cdots <y_N\) represent the initial positions, \(B_i(t)\) denotes the standard Brownian motion, and \(L^-_i(t)\) is twice the semimartingale local time at zero of \(X_i-X_{i-1}\) for \(i=2,\ldots ,N\) while \(L^-_1(t)=0\). The system (3.20) describes N Brownian particles with one-sided reflection interaction, i.e. the ith particle is reflected off the \((i-1)\)th particle for \(i=2,3,\ldots ,N\). Warren [78] found that the transition density of this system from \(y_i\) to \(x_i\), \(i=1,\ldots ,N\), is written as \(\mathcal {G}(x_1-y_1,\ldots ,x_N-y_N;t)\). Such a determinantal transition density was first obtained for the totally asymmetric simple exclusion process (TASEP) in [71]. Furthermore, based on the determinantal structures, various techniques for discussing the space-time joint distributions of the particle positions or the current have been developed for TASEP [15, 17–21, 56, 64, 65] and for the reflected Brownian particle system (3.20) [32, 33].
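
A crude way to see (3.20) and (3.22) at work is to simulate the reflected system directly. The sketch below (an Euler-type discretization of the one-sided reflection with ad hoc step size and sample size, which introduces a small bias) starts all particles at the origin and compares the distribution of the top particle \(X_N(t)\) with that of the largest eigenvalue of the GUE scaled as in (1.4), as suggested by (3.22):

```python
import numpy as np

# Euler-type simulation of the one-sided reflected system (3.20) with y_i -> 0.
# By (3.22), the top particle X_N(t) should be distributed (approximately, after
# discretization) as the largest eigenvalue of the GUE with density (1.4).
rng = np.random.default_rng(2)
N, t = 4, 1.0
n_steps, n_samples = 800, 20000
dt = t / n_steps

X = np.zeros((n_samples, N))
for _ in range(n_steps):
    X += rng.normal(0.0, np.sqrt(dt), size=(n_samples, N))
    for i in range(1, N):
        X[:, i] = np.maximum(X[:, i], X[:, i - 1])   # reflection: push X_i up off X_{i-1}
top = X[:, -1]

A = (rng.normal(0, np.sqrt(t / 2), (n_samples, N, N))
     + 1j * rng.normal(0, np.sqrt(t / 2), (n_samples, N, N)))
H = (A + A.conj().transpose(0, 2, 1)) / np.sqrt(2)   # E[H_jj^2] = t, E[|H_jk|^2] = t
lam_max = np.linalg.eigvalsh(H)[:, -1]

for s in (1.0, 2.0, 3.0):
    print(s, (top <= s).mean(), (lam_max <= s).mean())
```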

On the other hand, we have seen in (2.6) that the zero-temperature limit of \(W(x_1,\ldots ,x_N;t)\) (2.2) is the GUE density \(P_{\text {GUE}}(x_1,\ldots ,x_N;t)\) (1.4). Note that \(P_{\text {GUE}}(x_1,\ldots ,x_N;t)\) also satisfies (3.8), since (3.8) holds for arbitrary \(\beta \):

$$\begin{aligned} \frac{\partial }{\partial t}P_{\text {GUE}}(x_1,\ldots ,x_N;t)&=\frac{1}{2}\sum _{j=1}^N\frac{\partial ^2}{\partial x_j^2}P_{\text {GUE}}(x_1,\ldots ,x_N;t)\nonumber \\&\quad -\,\sum _{ j=1}^N \frac{\partial }{\partial x_j}\left( \sum _{\begin{array}{c} m=1\\ m\ne j \end{array}}^N\frac{1}{x_j-x_m} \right) P_{\text {GUE}}(x_1,\ldots ,x_N;t). \end{aligned}$$
(3.21)

From (2.6), (3.9), and (3.11), we find that the zero-temperature limit of (2.18) is

$$\begin{aligned} \int _{(-\infty ,u]^{N}}\prod _{\ell =1}^Ndx_{\ell }\cdot \mathcal {G}(x_1,\ldots ,x_N;t) = \int _{(-\infty ,u]^{N}}\prod _{j=1}^Ndx_{j}\cdot P_{\text {GUE}}(x_1,\ldots ,x_N;t). \end{aligned}$$
(3.22)

Warren [78] showed that this relation, which connects the two different processes, is obtained in the following way. First one introduces a process on the \(N(N+1)/2\)-dimensional Gelfand–Tsetlin cone whose two marginals describe the above two processes. The Gelfand–Tsetlin cone \(\text {GT}_k,~k=1,2,\ldots \) is defined as

$$\begin{aligned} \text {GT}_k:=\{(x^{(1)},\ldots ,x^{(k)})|~&x^{(i)}=(x^{(i)}_1,\ldots ,x^{(i)}_i)\in \mathbb {R}^i \text {~with~} i=1,\ldots ,k, \nonumber \\&x^{(m+1)}_{\ell +1}\le x^{(m)}_{\ell } \le x^{(m+1)}_{\ell } \text {~with~}1\le \ell \le m\le k-1\}. \end{aligned}$$
(3.23)

For the graphical representation of an element of \(\text {GT}_k\), see Fig. 1b. Next we introduce the following stochastic process on \(\text {GT}_N\). Let \((X^{(1)}(t),\ldots ,X^{(N)}(t))\) with \(X^{(j)}(t)=(X^{(j)}_1(t),\ldots ,X^{(j)}_j(t))\) be a process defined by

$$\begin{aligned} X^{(j)}_i(t)=B^{(j)}_i(t)+y^{(j)}_i+L^{(j)-}_i(t)-L^{(j)+}_i(t),~1\le i\le j\le N, \end{aligned}$$
(3.24)

where \(B^{(j)}_i(t)\) are the \(N(N+1)/2\) independent Brownian motions starting at the origin, \(y^{(j)}_i\) represent the initial positions and the process \(L^{(j)-}_i(t)\) and \(L^{(j)+}_i(t)\) are twice the semimartingale local time at zero of \(X^{(j)}_i-X^{(j-1)}_{i}\) and \(X^{(j)}_i-X^{(j-1)}_{i-1}\) respectively. Equation (3.24) describes the interacting particle systems where each \(X^{(j)}_i(t)\) is a Brownian motion reflected from \(X^{(j-1)}_{i-1}(t)\) to a negative direction and from \(X^{(j-1)}_i(t)\) to a positive direction. In [16], Borodin and Ferrari also introduced similar processes on the discrete Gelfand–Tsetlin cone where the probability measure at a particular time is described by the Schur process [60].

The pdf of the system (3.24) at time t can be given explicitly: for the case \(y^{(j)}_i=0\), it is expressed as

$$\begin{aligned} \mathcal {Q}_\mathrm{GT}(\underline{x}_N;t)=\prod _{1\le i<j\le N} \left( x^{(N)}_i-x^{(N)}_j\right) \cdot \prod _{k=1}^N \frac{\exp {\left( -\left( x^{(N)}_k\right) ^2/2t\right) }}{t^{k-1}\sqrt{2\pi t}}\cdot 1_{\text {GT}}(\underline{x}_N), \end{aligned}$$
(3.25)

where \(\underline{x}_N\) is defined above (2.19) and \(1_{\text {GT}}(\underline{x}_k)\) represents the indicator function on GT\(_k\). The pdfs of the two marginals, \((x^{(1)}_1,\ldots ,x^{(N)}_1)\) and \((x^{(N)}_1,\ldots ,x^{(N)}_N)\), of \(\mathcal {Q}_{\text {GT}}(\underline{x}_N;t)\) were obtained as follows:

Proposition 7

(Propositions 6 and 8 in [78])

$$\begin{aligned}&\int _{\mathbb {R}^{N(N-1)/2}}dA_1\mathcal {Q}_\mathrm{GT}(\underline{x}_N;t) =\mathcal {G}(x^{(1)}_1,\ldots ,x^{(N)}_1;t)\prod _{j=1}^{N-1}1_{>0}(x^{(j+1)}_1-x^{(j)}_1), \end{aligned}$$
(3.26)
$$\begin{aligned}&\int _{\mathbb {R}^{N(N-1)/2}}dA_2\mathcal {Q}_\mathrm{GT}(\underline{x}_N;t)= N!P_\mathrm{{GUE}}(x^{(N)}_1,\ldots ,x^{(N)}_N;t)\prod _{j=1}^{N-1}1_{>0}(x^{(N)}_j-x^{(N)}_{j+1}), \end{aligned}$$
(3.27)

where \(\mathcal {G}(x_1,\ldots ,x_N;t)\), \(P_\mathrm{{GUE}}(x^{(N)}_1,\ldots ,x^{(N)}_N;t)\), \(1_{>0}(x)\) and \(dA_1,~dA_2\) are defined by (3.17), (1.4), (3.10) and (2.32) respectively.

Remark

Note that \(\mathcal {G}(x^{(1)}_1,\ldots ,x^{(N)}_1;t)\) in (3.26) can be replaced by an arbitrary function on \(\mathbb {R}^N\) which coincides with \(\mathcal {G}(x^{(1)}_1,\ldots ,x^{(N)}_1;t)\) in the region \(x^{(1)}_1<x^{(2)}_1<\cdots <x^{(N)}_1\). For the later discussion on the generalization to finite temperature, we choose it to be \(\mathcal {G}(x^{(1)}_1,\ldots ,x^{(N)}_1;t)\) on the whole of \(\mathbb {R}^N\).

We see that the relation (3.22) is obtained from this proposition. By decomposing the integral over \(\underline{x}_N\) in two different ways, we clearly have

$$\begin{aligned} \int _{(-\infty ,u]^{N(N+1)/2}}d\underline{x}_N\mathcal {Q}_{\text {GT}}(\underline{x}_N;t)&=\int _{(-\infty ,u]^N}\prod _{j=1}^{N}dx^{(j)}_1 \int _{\mathbb {R}^{N(N-1)/2}}dA_1\mathcal {Q}_\mathrm{GT}(\underline{x}_N;t)\nonumber \\&=\int _{(-\infty ,u]^N}\prod _{j=1}^{N}dx^{(N)}_{j} \int _{\mathbb {R}^{N(N-1)/2}}dA_2\mathcal {Q}_\mathrm{GT}(\underline{x}_N;t). \end{aligned}$$
(3.28)

Applying (3.26) and (3.27) to this equation, we get

$$\begin{aligned}&\int _{(-\infty ,u]^N}\prod _{j=1}^{N}dx^{(j)}_1 \mathcal {G}\left( x^{(1)}_1,\ldots ,x^{(N)}_1;t)\prod _{j=1}^{N-1}1_{>0}(x^{(j+1)}_1-x^{(j)}_1\right) \nonumber \\&\quad =\int _{(-\infty ,u]^N}\prod _{j=1}^{N}dx^{(N)}_{j} N!P_\mathrm{{GUE}}\left( x^{(N)}_1,\ldots ,x^{(N)}_N;t\right) \prod _{j=1}^{N-1}1_{>0}\left( x^{(N)}_j-x^{(N)}_{j+1}\right) . \end{aligned}$$
(3.29)

Due to the symmetry of \(P_{\text {GUE}}(x_1,\ldots ,x_N;t)\) under the permutations of \(x_1,\ldots ,x_N\), we readily see that RHS of this equation is equal to RHS of (3.22). Also we find that LHS of (3.29) becomes

$$\begin{aligned}&~\int _{(-\infty ,u]^N}\prod _{j=1}^{N}dx^{(j)}_1\cdot \mathcal {G}\left( x^{(1)}_1,\ldots ,x^{(N)}_1;t\right) \prod _{j=1}^{N-1}1_{>0}\left( x^{(j+1)}_1-x^{(j)}_1\right) \nonumber \\&\quad =\int _{(-\infty ,u]^N}\prod _{j=1}^{N}dx^{(j)}_1\cdot \mathcal {G}\left( x^{(1)}_1,\ldots ,x^{(N)}_1;t\right) \prod _{j=1}^{N-1}\left( 1_{>0}\left( x^{(j+1)}_1-x^{(j)}_1\right) +1_{>0}(x^{(j)}_1-x^{(j+1)}_1)\right) \nonumber \\&\quad =\int _{(-\infty ,u]^N}\prod _{j=1}^{N}dx^{(j)}_1\cdot \mathcal {G}\left( x^{(1)}_1,\ldots ,x^{(N)}_1;t\right) , \end{aligned}$$
(3.30)

where in the first equality we used for \(k=2,\ldots , N\) and \((x^{(1)}_1,\ldots ,x^{(N)}_1)\in (-\infty ,u]^N\)

$$\begin{aligned} \int _{(-\infty ,u]}dx^{(k)}_1 \mathcal {G}\left( x^{(1)}_1,\ldots ,x^{(N)}_1;t\right) \prod _{j=1}^{k-2}1_{>0}\left( x^{(j+1)}_1-x_1^{(j)}\right) \cdot 1_{>0}\left( x^{(k-1)}_1-x^{(k)}_1\right) =0. \end{aligned}$$
(3.31)

Note that \(\mathcal {G}(x^{(1)}_1,\ldots ,x^{(N)}_1;t)\) is defined on \(\mathbb {R}^N\) and is finite even outside the region \(x^{(1)}_1<x^{(2)}_1<\cdots <x^{(N)}_1\) (see the Remark after Proposition 7). Eq. (3.31) is obtained from the following observation: absorbing the last factor \(1_{>0}(x^{(k-1)}_1-x^{(k)}_1)\) together with the integration over \(x^{(k)}_1\) into the corresponding row of the determinant \(\mathcal {G}(x^{(1)}_1,\ldots ,x^{(N)}_1;t)\) and applying (3.15), we get a determinant which has two identical rows.

Thus (3.22) is obtained from Proposition 7. This is similar to the situation of (2.18) and Theorem 6. This naive observation gives us the impression that the pdf \(\mathcal {Q}_{\text {GT}}(\underline{x}_N;t)\) (3.25) is the zero-temperature limit of the weight \(R_u(\underline{x}_N;t)\) (2.19). However in fact this is not the case. Let \(\mathcal {R}_u(\underline{x}_N;t):=\lim _{\beta \rightarrow \infty }R_u(\underline{x}_N;t)\). From (3.9) and (3.11) one has

$$\begin{aligned} \mathcal {R}_u(\underline{x}_N;t)=(-1)^{N(N-1)/2}\det \left( \mathcal {F}_{1-i}(x^{(N)}_j;t)\right) _{i,j=1}^N \prod _{1\le j\le k\le N}1_{>0}\left( x^{(k-1)}_{j-1}-x^{(k)}_{j}\right) . \end{aligned}$$
(3.32)

From (2.5) and (3.13), it is further rewritten as

$$\begin{aligned} \mathcal {R}_u(\underline{x}_N;t)= \prod _{1\le i<j\le N} \left( x^{(N)}_i-x^{(N)}_j\right) \cdot \prod _{k=1}^N \frac{\exp {\left( -\left( {x^{(N)}_k}\right) ^2/{2t}\right) }}{t^{k-1}\sqrt{2\pi t}} 1_{>0}\left( u-x^{(k)}_1\right) \cdot 1_{V_N}(\underline{x}_N), \end{aligned}$$
(3.33)

where \(1_{V_k}(\underline{x}_k)\) is the indicator function on an ordered set \(V_k\) defined by

$$\begin{aligned} V_k:=\{(x^{(1)},\ldots ,x^{(k)})|~x^{(j)}=(x^{(j)}_1,\ldots ,x^{(j)}_j)\in \mathbb {R}^j, x^{(m+1)}_{\ell +1} \le x^{(m)}_{\ell },~1\le \ell \le m\le k-1\}. \end{aligned}$$
(3.34)

For the graphical representation of an element of (3.34), see Fig. 1a. Comparing (3.25) with (3.33), we see that they have the same form but their supports (\(\text {GT}_N\) and \(V_N\)) are different. We further notice that \(V_N\) with an additional order \(x^{(m)}_{\ell }\le x^{(m+1)}_{\ell },~1\le \ell \le m\le N-1\) corresponds to GT\(_N\).

Hence our approach using \(\mathcal {R}_u(\underline{x}_N;t)\) can be regarded as a modification of Warren’s arguments on \(\text {GT}_N\) to ones on the partially ordered space \(V_N\). Let us focus on the two marginals \((x^{(1)}_1,x^{(2)}_1,\ldots , x^{(N)}_1)\) and \((x^{(N)}_1,x^{(N)}_2, \ldots , x^{(N)}_N)\) of \(\mathcal {R}_u(\underline{x}_N;t)\) (3.33). By taking the zero-temperature limit of Theorem 6, we have the following analogue of Proposition 7:

Proposition 8

$$\begin{aligned}&\int _{\mathbb {R}^{N(N-1)/2}}dA_1\,\mathcal {R}_u(\underline{x}_N;t) =\mathcal {G}(x^{(1)}_1,\ldots ,x^{(N)}_1;t)\prod _{j=1}^N1_{>0}(u-x^{(j)}_1), \end{aligned}$$
(3.35)
$$\begin{aligned}&~\int _{\mathbb {R}^{N(N-1)/2}}d\,A_2~\mathcal {R}_{u}(\underline{x}_N;t)= P_u\left( x^{(N)}_1,\ldots ,x^{(N)}_N;t\right) \prod _{j=1}^N1_{>0}\left( u-x^{(N)}_{j}\right) , \end{aligned}$$
(3.36)

where for the definition of \(dA_1\) and \(dA_2\), see (2.32), \(\mathcal {G}(x^{(1)}_1,\ldots ,x^{(N)}_1;t)\) is given by (3.17) and

$$\begin{aligned} P_u\left( x^{(N)}_1,\ldots ,x^{(N)}_N;t\right) =\prod _{j=1}^{N} \frac{(u-x^{(N)}_{j})^{j-1}}{(j-1)!t^{j-1}}\cdot \prod _{1\le j<k\le N}\left( x^{(N)}_j-x^{(N)}_k\right) \cdot \prod _{j=1}^N \frac{e^{-\left( x^{(N)}_j\right) ^2/2t}}{\sqrt{2\pi t}}. \end{aligned}$$
(3.37)

Proof

It is obtained by taking the zero-temperature limit \((\beta \rightarrow \infty )\) in Theorem 6. \(\square \)

As discussed in (2.39), \(P_{\text {GUE}}(x_1,\ldots ,x_N;t)\) (1.4) can be interpreted as the symmetric version of \(P_u(x_1,\ldots ,x_N;t)\):

$$\begin{aligned} \frac{1}{N!}\sum _{\sigma ^{(N)}\in S_N}P_u\left( x_{\sigma ^{(N)}(1)},\ldots ,x_{\sigma ^{(N)}(N)};t\right) =P_{\text {GUE}}\left( x^{(N)}_1,\ldots ,x^{(N)}_N;t\right) . \end{aligned}$$
(3.38)

Therefore, by a discussion similar to that around (3.28), we see that the relation (3.22) is obtained also from Proposition 8.

The fact that both Propositions 7 and 8 lead to (3.22) implies the relation

$$\begin{aligned} \int _{\mathbb {R}^{N(N+1)/2}} d\underline{x}_N \mathcal {R}_{u}(\underline{x}_N;t) = \int _{(-\infty ,u]^{N(N+1)/2}}d\underline{x}_N \mathcal {Q}_\mathrm{GT}(\underline{x}_N;t). \end{aligned}$$
(3.39)

This equivalence of their integration values is generalized in the following way.

Proposition 9

Let \(f_\mathrm{sym}(\underline{x}_N) \) be the function defined above (2.22). Then we have

$$\begin{aligned} \int _{\mathbb {R}^{N(N+1)/2}} d\underline{x}_N f_\mathrm{{sym}}(\underline{x}_N)\mathcal {R}_{u}(\underline{x}_N;t) = \int _{(-\infty ,u]^{N(N+1)/2}}d\underline{x}_N f_\mathrm{{sym}}(\underline{x}_N)\mathcal {Q}_\mathrm{GT}(\underline{x}_N;t) \end{aligned}$$
(3.40)

An essential step in the proof of this proposition is the following lemma.

Lemma 10

$$\begin{aligned} \sum _{\sigma ^{(j)}\in S_j,j=1,\ldots ,N}\mathrm{{sgn}}\sigma ^{(N)}1_{V_N}(\underline{x}_N^\sigma ) =\sum _{\sigma ^{(j)}\in S_j,j=1,\ldots ,N}\mathrm{{sgn}}\sigma ^{(N)}1_\mathrm{{GT}}(\underline{x}_N^\sigma ) \end{aligned}$$
(3.41)

The proof of this lemma will be given in Appendix 2. Using this lemma we readily derive Proposition 9.

Proof of Proposition 9

Substituting the definition of \(\mathcal {R}_u(\underline{x}_N;t)\) (3.33) into (3.40), we see that the LHS of (3.40) is rewritten as

$$\begin{aligned}&\int _{\mathbb {R}^{N(N+1)/2}}d\underline{x}_Nf_{\text {sym}}(\underline{x}_N) \prod _{k=1}^N1_{>0}(u-x^{(k)}_1)e^{-\left( x^{(N)}_k\right) ^2/2t}\cdot \prod _{1\le i<j\le N}\left( x^{(N)}_i-x^{(N)}_j\right) \nonumber \\&\qquad \times \sum _{\sigma ^{(j)}\in S_j,j=1,\ldots ,N} \mathrm{{sgn}}\sigma ^{(N)}1_V(\underline{x}_N^{\sigma })\nonumber \\&\quad = \int _{\mathbb {R}^{N(N+1)/2}}d\underline{x}_Nf_{\text {sym}}(\underline{x}_N) \prod _{k=1}^N1_{>0}(u-x^{(k)}_1)e^{-\left( x^{(N)}_k\right) ^2/2t}\cdot \prod _{1\le i<j\le N}\left( x^{(N)}_i-x^{(N)}_j\right) \nonumber \\&\qquad \times \sum _{\sigma ^{(j)}\in S_j,j=1,\ldots ,N} \mathrm{{sgn}}\sigma ^{(N)}1_{\text {GT}}(\underline{x}_N^{\sigma })\nonumber \\&\quad =\int _{\mathbb {R}^{N(N+1)/2}}d\underline{x}_Nf_{\text {sym}}(\underline{x}_N)\prod _{k=1}^N1_{>0}(u-x^{(k)}_1) \sum _{\sigma ^{(j)}\in S_j,j=1,\ldots ,N} \mathcal {Q}_{\text {GT}}(\underline{x}_N^{\sigma };t) \end{aligned}$$
(3.42)

where in the second equality we use Lemma 10. \(\square \)

4 Fredholm Determinant Formulas

4.1 A Fredholm Determinant with a Biorthogonal Kernel

The function \(W(x_1,\ldots ,x_N;t)\) (1.6) has a notable determinantal structure: it is described by a product of two determinants. This allows us to apply the results on random matrix theory and determinantal point processes developed in [43, 76] and to obtain a Fredholm determinant representation.

To see this we provide a lemma. Let \(\phi _j(x;t),~j=0,1,2,\ldots \) be

$$\begin{aligned} \phi _j(x;t)=\frac{1}{2\pi i}\oint dv\, e^{vx-v^2t/2}\frac{\Gamma (1+v/\beta )^N}{v^{j+1}}, \end{aligned}$$
(4.1)

where the contour encloses the origin anticlockwise with radius smaller than \(\beta \). We find \(\phi _j(x;t)\) and \(\psi _k(x;t)\) (2.1) satisfy the biorthonormal relation:

Lemma 11

For \(j,k\in \{0,1,2,\ldots \}\), we have

$$\begin{aligned} \int _{-\infty }^{\infty }dx\,\phi _j(x;t)\psi _k(x;t)=\delta _{j,k}. \end{aligned}$$
(4.2)

Proof

Substituting the definitions (2.1) and (4.1) into LHS of (4.2), one has

$$\begin{aligned}&\int _{-\infty }^{\infty }dx\,\phi _j(x;t)\psi _k(x;t)\nonumber \\&\quad =\frac{1}{(2\pi )^2i} \oint dv\int _{-\infty }^{\infty }dw\, e^{-(w^2+v^2)t/2} \left( \frac{\Gamma (1+v/\beta )}{\Gamma (1+iw/\beta )}\right) ^N \frac{(iw)^k}{v^{j+1}} \int _{-\infty }^{\infty }dx\,e^{(v-iw)x}. \end{aligned}$$
(4.3)

As the integrand in this equation is analytic on \(\mathbb {C}\) with respect to w, we can shift the integration path as \(w=w'-i v,~w'\in \mathbb {R}\). Then using

$$\begin{aligned} \frac{1}{2\pi }\int _{-\infty }^{\infty } dx\, e^{(v-iw)x}= \frac{1}{2\pi }\int _{-\infty }^{\infty } dx\, e^{-iw'x}= \delta (w'), \end{aligned}$$
(4.4)

we find

$$\begin{aligned} \int _{-\infty }^{\infty }dx\,\phi _j(x;t)\psi _k(x;t) =\frac{1}{2\pi i}\oint dv\,v^{k-j-1}=\delta _{j,k}. \end{aligned}$$
(4.5)

\(\square \)
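
Lemma 11 can be confirmed numerically: \(\phi _j\) is evaluated by the trapezoidal rule on a small circle around the origin and \(\psi _k\) by quadrature on a truncated w-line. In the minimal sketch below all grids, truncations and parameter values are ad hoc choices; the printed \(3\times 3\) matrix should be close to the identity:

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import trapezoid

# Numerical check of the biorthogonality (4.2) for j, k = 0, 1, 2.
beta, t, N = 1.0, 1.0, 3
x = np.linspace(-35.0, 20.0, 1101)

# psi_k(x;t), Eq. (2.1): quadrature on a truncated w-line.
w = np.linspace(-25.0, 25.0, 2001)
phase_w = np.exp(-1j * np.outer(x, w) - w**2 * t / 2) / gamma(1 + 1j * w / beta) ** N
psis = [trapezoid(phase_w * (1j * w) ** k, w, axis=1).real / (2 * np.pi) for k in range(3)]

# phi_j(x;t), Eq. (4.1): trapezoidal rule on the circle |v| = 1/2 < beta, using
# (1/(2 pi i)) * oint dv f(v) = average over the circle of v f(v).
theta = np.linspace(0.0, 2 * np.pi, 257)[:-1]
v = 0.5 * np.exp(1j * theta)
ring = np.exp(np.outer(x, v) - v**2 * t / 2) * gamma(1 + v / beta) ** N
phis = [np.mean(ring * v ** (-j), axis=1).real for j in range(3)]

for j in range(3):
    print([round(trapezoid(phis[j] * psis[k], x), 5) for k in range(3)])
```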

The residue calculus shows that the function \(\phi _j(x;t)\) is a jth order polynomial in x and the coefficient of the highest order is \(1/j!\). As the Vandermonde determinant in (2.2) is expressed as

$$\begin{aligned} \prod _{1\le j<k\le N}(x_k-x_j)=\det \left( x^{j-1}_k\right) _{j,k=1}^N =\det \left( (j-1)!\phi _{j-1}(x_k,t)\right) _{j,k=1}^N, \end{aligned}$$
(4.6)

\(W(x_1,\ldots ,x_N;t)\) is rewritten as a product of two determinants

$$\begin{aligned} W(x_1,\ldots ,x_N;t)=\frac{1}{N!}\det \left( \phi _{j-1}(x_k;t)\right) _{j,k=1}^N \det \left( \psi _{j-1}(x_k;t)\right) _{j,k=1}^N. \end{aligned}$$
(4.7)

From Lemma 11 and (4.7), we obtain a Fredholm determinant representation for the moment generating function. Throughout this paper, we follow [10] for the notation on Fredholm determinants.

Proposition 12

$$\begin{aligned} \int _{-\infty }^{\infty }\prod _{j=1}^Ndx_j\, g(x_j)\cdot W(x_1,\ldots ,x_N;t) =\det \left( 1-\bar{g}K\right) _{L^2(\mathbb {R})} \end{aligned}$$
(4.8)

where g(x) is an arbitrary function such that the left hand side is well-defined and in the right hand side \(\det \left( 1-\bar{g}K\right) _{L^2(\mathbb {R})}\) represents a Fredholm determinant defined by

$$\begin{aligned} \det \left( 1-\bar{g}K\right) _{L^2(\mathbb {R})}=\sum _{k=0}^{\infty }\frac{(-1)^k}{k!}\int _{\mathbb {R}^k} \prod _{j=1}^k dx_j\, \bar{g}(x_j)\cdot \det \left( K(x_l,x_m;t) \right) _{l,m=1}^k. \end{aligned}$$
(4.9)

Here \(\bar{g}(x)=1-g(x)\) and K(xyt) is written in terms of the biorthogonal functions \(\psi _j(x,t)\) (2.1) and \(\phi _k(x,t)\) (4.1) as

$$\begin{aligned}&K(x,y;t)=\sum _{k=0}^{N-1}\phi _k(x;t)\psi _k(y;t). \end{aligned}$$
(4.10)

Proof

We readily obtain this representation by applying the techniques in [76] with Lemma 11 to LHS of (4.8). For reference, here is an outline of the proof. First, using the Andréief (Cauchy-Binet) identity (2.16), we have

$$\begin{aligned}&\int _{\mathbb {R}^N}\prod _{j=1}^Ndx_j\, g(x_j)\cdot W(x_1,\ldots ,x_N;t) =\det \left( \int _\mathbb {R}dx\, g(x)\phi _{j-1}(x;t)\psi _{k-1}(x;t)\right) _{j,k=1}^N\nonumber \\&\quad =\det \left( \int _\mathbb {R}dx\,\phi _{j-1}(x;t)\psi _{k-1}(x;t)- \int _\mathbb {R}dx\,\bar{g}(x)\phi _{j-1}(x;t)\psi _{k-1}(x;t)\right) _{j,k=1}^N \nonumber \\&\quad =\det \left( \delta _{j,k}-A_{j,k}\right) _{j,k=1}^N, \end{aligned}$$
(4.11)

where \(A_{j,k},~j,k=1,\ldots ,N\) is defined as

$$\begin{aligned} A_{jk}=\int _\mathbb {R}dx\, \bar{g}(x)\phi _{j-1}(x;t)\psi _{k-1}(x;t). \end{aligned}$$
(4.12)

In the first equality of (4.11), we used (4.7) with (2.16) and in the last one we used Lemma 11. We further rewrite \(A_{jk}\) as

$$\begin{aligned} A_{jk}=\int _{\mathbb {R}}dx\, B(j,x)C(x,k) \end{aligned}$$
(4.13)

by using

$$\begin{aligned} B(j,x)=\phi _{j-1}(x;t), ~C(x,k)=\bar{g}(x)\psi _{k-1}(x;t). \end{aligned}$$
(4.14)

Applying the identity for Fredholm determinants,

$$\begin{aligned} \det \left( \delta _{j,k}-A_{j,k}\right) _{j,k=1}^N=\det (1-BC)_{L^2(\{1,2,\ldots ,N\})}=\det (1-CB)_{L^2(\mathbb {R})}, \end{aligned}$$
(4.15)

and noting

$$\begin{aligned} (CB)(x,y)=\bar{g}(x)\sum _{k=0}^{N-1}\phi _{k}(x;t)\psi _k(y;t), \end{aligned}$$
(4.16)

we arrive at our desired expression. \(\square \)
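The identity (4.15) used above has an elementary finite-dimensional analogue, \(\det (1-BC)=\det (1-CB)\) for rectangular matrices, which can be checked directly; a minimal sketch assuming NumPy, with arbitrary random matrices.

```python
# Finite-dimensional analogue of (4.15): det(I_n - BC) = det(I_m - CB).
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 7
B = rng.normal(size=(n, m))
C = rng.normal(size=(m, n))

print(np.linalg.det(np.eye(n) - B @ C))
print(np.linalg.det(np.eye(m) - C @ B))   # the two determinants agree
```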

Combining this proposition with Theorem 2, we readily obtain

Corollary 13

$$\begin{aligned} \mathbb {E}\left( e^{-\frac{e^{-\beta u} Z_N(t)}{\beta ^{2(N-1)}}}\right) =\det \left( 1-\bar{f}_uK\right) _{L^2(\mathbb {R})} \end{aligned}$$
(4.17)

where the right hand side is the Fredholm determinant (4.9) with \(\bar{g}=\bar{f}_u\), i.e. with the kernel \(\bar{f}_u(x)K(x,y;t)\), where \(\bar{f}_u(x)=1-f_F(x-u)\) and \(K(x,y;t)\) is defined in (4.10).
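Since the kernel (4.10) has rank N, the Fredholm determinant in (4.17) reduces, as in (4.11), to an \(N\times N\) determinant, which gives a direct way to evaluate the moment generating function numerically. The following is a rough sketch assuming NumPy/SciPy; the values of \(\beta \), t, N, u and the integration cutoffs are illustrative choices, not values taken from the paper.

```python
# Numerical evaluation of the RHS of (4.17) via the finite-rank reduction (4.11):
# det(delta_{jk} - A_{jk}) with A_{jk} = int fbar_u(x) phi_{j-1}(x;t) psi_{k-1}(x;t) dx.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

beta, t, N, u = 1.0, 1.0, 2, 0.0    # illustrative parameters

def psi(k, x):      # (2.1)
    f = lambda w: np.real(np.exp(-1j*w*x - w**2*t/2)*(1j*w)**k / gamma(1 + 1j*w/beta)**N)/(2*np.pi)
    return quad(f, -30, 30, limit=200)[0]

def phi(j, x, r=0.5):   # (4.1), contour |v| = r < beta
    th = np.linspace(0, 2*np.pi, 2001)[:-1]
    v = r*np.exp(1j*th)
    return np.real(np.mean(np.exp(v*x - v**2*t/2)*gamma(1 + v/beta)**N / v**j))

def fbar(x):        # bar f_u(x) = 1 - f_F(x - u)
    return 1.0/(1.0 + np.exp(-beta*(x - u)))

A = np.array([[quad(lambda x: fbar(x)*phi(j, x)*psi(k, x), -15, 15, limit=200)[0]
               for k in range(N)] for j in range(N)])
print(np.linalg.det(np.eye(N) - A))   # approximates E[exp(-e^{-beta u} Z_N(t)/beta^{2(N-1)})]
```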

As in (2.3), we see

$$\begin{aligned} \lim _{\beta \rightarrow \infty }\phi _k(x;t)= \frac{1}{2\pi i}\oint dv\, \frac{e^{vx-v^2t/2}}{ {v^{k+1}}}=\frac{1}{k!}\left( \frac{t}{2}\right) ^{\frac{k}{2}}H_k\left( \frac{x}{\sqrt{2t}}\right) , \end{aligned}$$
(4.18)

which follows from the contour integral representation of the nth order Hermite polynomial \(H_n(x)\) (see e.g. Sect. 6.1 in [5]),

$$\begin{aligned} H_n(x)=\frac{n!}{2\pi i}\oint dz\, \frac{e^{2xz-z^2}}{z^{n+1}}, \end{aligned}$$
(4.19)

where the contour encloses the origin anticlockwise. From (2.3) and (4.18), we find

$$\begin{aligned} \lim _{\beta \rightarrow \infty }K(x_1,x_2;t) =\frac{e^{-x_2^2/2t}}{\sqrt{2\pi t}}\sum _{k=0}^{N-1}\frac{H_k(x_1/\sqrt{2t}) H_k(x_2/\sqrt{2t})}{2^kk!}. \end{aligned}$$
(4.20)

Here the RHS is the correlation kernel of the eigenvalues of the GUE random matrices [52].
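The contour representation (4.19) can be checked symbolically, since the residue at the origin is the coefficient of \(z^n\) in \(e^{2xz-z^2}\); a small sketch assuming SymPy.

```python
# Symbolic check of (4.19): n! times the residue of e^{2xz - z^2}/z^{n+1} at z = 0 equals H_n(x).
import sympy as sp

x, z = sp.symbols('x z')
for n in range(5):
    residue = sp.series(sp.exp(2*x*z - z**2), z, 0, n + 2).removeO().coeff(z, n)
    assert sp.expand(sp.factorial(n)*residue - sp.hermite(n, x)) == 0
print("(4.19) checked for n = 0,...,4")
```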

Thus \(K(x,y;t)\) is a simple biorthogonal deformation of the Hermite kernel which appears in the eigenvalue correlations of \(N\times N\) GUE random matrices. Using the Fredholm determinant expression (4.17), we can study asymptotic properties of the partition function by applying saddle point analyses to the kernel, as will be discussed in Sect. 5.

4.2 A Representation from the Macdonald Processes

In [57], O’Connell introduced a probability measure on \(\mathbb {R}^N\) called the Whittaker measure, \(m(x_1,\ldots ,x_N;t)\prod _{j=1}^Ndx_j\), whose density function \(m(x_1,\ldots ,x_N;t)\) is defined in terms of the Whittaker function \(\Psi _{\lambda }(x_1,\ldots ,x_N)\) (see [57]),

$$\begin{aligned}&m(x_1,\ldots ,x_N;t)\nonumber \\&\quad =\Psi _0(\beta x_1,\ldots ,\beta x_N) \int _{(i\mathbb {R})^N}d\lambda \, \Psi _{-\lambda /\beta }(\beta x_1,\ldots ,\beta x_N)e^{\sum _{j=1}^N\lambda _j^2t/2}s_N(\lambda /\beta ), \end{aligned}$$
(4.21)

where throughout this paper we denote \(\lambda =(\lambda _1,\ldots ,\lambda _N)\) and \(s_N(\lambda )\) is defined by (2.10). Then he showed the following relation about the distribution of the free energy \(F_N(t)=-\log (Z_N(t))/\beta \) (see Theorem 3.1 and Corollary 4.1 in [57]),

$$\begin{aligned} \text {Prob}\left( -F_N(t)+\frac{N-1}{\beta }\log \beta ^2\le s\right) = \int _{(-\infty , s]}dx_1\int _{\mathbb {R}^{N-1}}\prod _{j=2}^{N} dx_{j}\cdot m(x_1,\ldots ,x_N;t). \end{aligned}$$
(4.22)

The density function \(m(x_1, \ldots , x_N;t)\) (4.21) is also a finite temperature extension of \(P_{\text {GUE}}(x_1, \ldots , x_N;t)\) (1.4): it is known that \(m(x_1,\ldots ,x_N;t)\) converges to \(P_{\text {GUE}}(x_1,\ldots ,x_N;t)\) in the zero-temperature limit (see Sect. 6 in [57]). In contrast to \(W(x_1,\ldots , x_N;t)\) (2.2), however, this extension does not inherit the determinantal structure which \(P_{\text {GUE}}(x_1,\ldots ,x_N;t)\) has, and thus we cannot apply the techniques in random matrix theory which are useful especially for asymptotic analyses of the GUE. This fact necessitated the development of new methods [2, 10, 11, 13, 14, 24, 29, 30, 66–69]. By using the techniques of the Macdonald difference operators [10] and the duality [14], one can get a Fredholm determinant expression for the moment generating function of the partition function, which allows us to access the asymptotic properties.

Proposition 14

([10])

$$\begin{aligned} \mathbb {E}\left( e^{-\frac{e^{-\beta u} Z_N(t)}{\beta ^{2(N-1)}}}\right) =\det \left( 1+L\right) _{L^2(C_0)} \end{aligned}$$
(4.23)

where \(C_0\) denotes the contour enclosing only the origin positively with radius \(r<\beta /2\) and the kernel \(L(v,v';t)\) is written as

$$\begin{aligned} L(v,v';t)=\frac{1}{2\pi i}\int _{i\mathbb {R}+\delta }dw\,\frac{\pi /\beta }{\sin \left[ (v'-w)\pi /\beta \right] } \frac{w^Ne^{w^2t/2-wu}}{v'^Ne^{v'^2t/2-v'u}}\frac{1}{w-v}\frac{\Gamma (1+v'/\beta )^N}{\Gamma (1+w/\beta )^N}. \end{aligned}$$
(4.24)

Here \(\delta \) satisfies the condition \(r<\delta <\beta -r\).

We can show the equivalence between the two expressions (4.17) and (4.23).

Proposition 15

$$\begin{aligned} \det (1-\bar{f}_uK)_{L^2(\mathbb {R})}=\det (1+L)_{L^2(C_0)} \end{aligned}$$
(4.25)

where \(\bar{f}_u(x)=1-f_F(x-u)\), and \(K(x,x';t)\) and \(L(v,v';t)\) are defined in (4.10) and (4.24), respectively.

Proof

Substituting the definitions (2.1) and (4.1) into (4.10), we have

$$\begin{aligned} K(x,x';t)&=\oint _{C_0} dv\int _{i\mathbb {R}+\delta }dw\, e^{vx-wx'-(v^2-w^2)t/2} \frac{\Gamma (1+v/\beta )^N}{\Gamma (1+w/\beta )^N} \frac{1}{v}\sum _{k=0}^{N-1}\left( \frac{w}{v}\right) ^k\nonumber \\&=\oint _{C_0} dv\int _{i\mathbb {R}+\delta } dw\, e^{vx-wx'-(v^2-w^2)t/2} \frac{\Gamma (1+v/\beta )^N}{\Gamma (1+w/\beta )^N} \frac{1-(w/v)^N}{v-w}. \end{aligned}$$
(4.26)

For the definition of \(C_0\), see below (4.23). Here we changed \(w\rightarrow -iw\) in (2.1) and shifted the path of w to \(i\mathbb {R}+\delta \), where \(\delta \) is larger than the radius of the v-contour. We notice that although the last expression in (4.26) consists of two terms, proportional to \(1/(v-w)\) and \((w/v)^N/(v-w)\), the v-integration of the term proportional to \(1/(v-w)\) vanishes, since its integrand has no pole inside \(C_0\). Thus we see

$$\begin{aligned} \bar{f}_u(x)K(x,x')&=-\bar{f}_u(x)\oint _{C_{0}} dv\int _{i\mathbb {R}+\delta } dw\,\frac{e^{vx-wx'-(v^2-w^2)t/2}}{w-v} \left( \frac{\Gamma (1+v/\beta )}{\Gamma (1+w/\beta )} \frac{w}{v}\right) ^N\nonumber \\&=-\oint _{C_{0}}dv\, A(x,v)B(v,x') \end{aligned}$$
(4.27)

where we set

$$\begin{aligned}&A(x,v)=\bar{f}_u(x)e^{vx-v^2t/2}\left( \frac{\Gamma (1+v/\beta )}{v}\right) ^N,\end{aligned}$$
(4.28)
$$\begin{aligned}&B(v,x')=\int _{i\mathbb {R}+\delta }dw\,\frac{e^{-wx'+w^2t/2}}{w-v} \left( \frac{w}{\Gamma (1+w/\beta )}\right) ^N. \end{aligned}$$
(4.29)

Here we use the relation for Fredholm determinants, \(\det (1-AB)_{L^2(\mathbb {R})}=\det (1-BA)_{L^2(C_0)}\), where the kernel \(-(BA)(v,v')\) on RHS reads

$$\begin{aligned}&-\int _{-\infty }^\infty dx\, B(v,x)A(x,v') \nonumber \\&\quad =-\int _{i\mathbb {R}+\delta }dw\,\frac{e^{(w^2-v'^2)t/2}}{w-v} \left( \frac{w\Gamma (1+v'/\beta )}{v'\Gamma (1+w/\beta )}\right) ^N \int _{-\infty }^{\infty }dx\, \bar{f}_u(x)e^{(v'-w)x}. \end{aligned}$$
(4.30)

Using the relation

$$\begin{aligned} \int _{-\infty }^{\infty }dx\,\frac{e^{ax}}{1+e^{x}}=\frac{\pi }{\sin \pi a},~~ \text {for~}0<\text {Re}~a<1, \end{aligned}$$
(4.31)

we perform the integration over x in (4.30) as

$$\begin{aligned} -~\int _{-\infty }^{\infty }dx\, \bar{f}_u(x)e^{(v'-w)x} =\int _{-\infty }^{\infty }dx\, \frac{-e^{\beta (x-u)+(v'-w)x}}{1+e^{\beta (x-u)}} =\frac{e^{(v'-w)u}\pi /\beta }{\sin \left[ (v'-w)\pi /\beta \right] }. \end{aligned}$$
(4.32)

Note that because of the conditions \(0<r<\beta /2\) and \(r<\delta <\beta -r\) (see below (4.23) and (4.24) respectively), (4.31) is applicable to the above equation. Thus from (4.30) and (4.32), we have

$$\begin{aligned}&-\int _{-\infty }^\infty dx\, B(v,x)A(x,v')\nonumber \\&\quad = \frac{1}{2\pi i}\int _{i\mathbb {R}+\delta }dw\,\frac{\pi /\beta }{\sin \left[ (v'-w)\pi /\beta \right] } \frac{w^Ne^{w^2t/2-wu}}{v'^Ne^{v'^2t/2-v'u}}\frac{1}{w-v}\frac{\Gamma (1+v'/\beta )^N}{\Gamma (1+w/\beta )^N}\nonumber \\&\quad =L(v,v';t). \end{aligned}$$
(4.33)

\(\square \)
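We also note that the elementary identity (4.31) used in the proof is easy to confirm numerically; a minimal check assuming SciPy, where the value of a is an arbitrary point in (0, 1).

```python
# Numerical check of (4.31): int e^{ax}/(1+e^x) dx = pi/sin(pi a) for 0 < a < 1.
import numpy as np
from scipy.integrate import quad

a = 0.3   # arbitrary test value in (0, 1)
lhs = quad(lambda x: np.exp(a*x)/(1 + np.exp(x)), -100, 100)[0]
print(lhs, np.pi/np.sin(np.pi*a))   # the two values agree
```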

5 The Scaling Limit to the KPZ Equation

In this section, we discuss a scaling limit of the O’Connell-Yor polymer model. When both N and t are large with their ratio N / t fixed, it is known that the polymer free energy \(F_N(t)\) defined below (1.1) is proportional to N on average and that the fluctuation around the average is of order \(N^{1/3}\) [54, 72]. Furthermore it has recently been shown in [10] that the limiting distribution of the free energy fluctuation under the \(N^{1/3}\) scaling is the GUE Tracy–Widom distribution [75]. This type of limit theorem has also been obtained for other models related to the O’Connell-Yor model [6, 13, 26, 34, 58, 77]. These results reflect the strong universality known as the KPZ universality class.

Although we expect that the same Tracy–Widom asymptotics can be obtained from our representation (4.17), we consider here another scaling limit, in which the partition function converges to the solution of the stochastic heat equation (SHE) (or equivalently, the free energy converges to the solution of the Kardar–Parisi–Zhang (KPZ) equation). This scaling limit to the KPZ equation is also known to be universal, although in a weaker sense than the KPZ universality stated above [1, 9, 27]. The height distribution of the KPZ equation has been obtained for the droplet initial data in [2, 66–69]. Since then, explicit forms of the height distribution have been given for the KPZ equation and related models for several other initial conditions [10–12, 23, 38, 39, 49, 62, 63]. In particular for the O’Connell-Yor model (1.1), the limiting distribution of the polymer free energy has been obtained by applying the saddle point method to the kernel (4.24) [10, 11].

In this section, we confirm that a similar saddle point analysis is applicable to our biorthogonal kernel (4.10). Since our kernel has a simple form, we find that the nontrivial part of this problem reduces to the asymptotic analysis of the functions \(\psi _k(x;t)\) (2.1) and \(\phi _k(x;t)\) (4.1).

5.1 The O’Connell-Yor Polymer Model and the KPZ Equation

Before discussing the saddle point analysis, let us briefly review the scaling limit to the KPZ equation. Hereafter we write the dependence of the polymer partition function (1.1) on \(\beta \) explicitly as \(Z_{N,\beta }(t)\).

Let \(\tilde{Z}_{j,\beta }(t):=e^{-t-\beta ^2t/2}{Z}_{j,\beta }(t),~j=1,\ldots ,N\). By Itô’s formula, we easily find that they satisfy the stochastic differential equations

$$\begin{aligned} d\tilde{Z}_{j,\beta }(t)=\left( \tilde{Z}_{j-1,\beta }(t)-\tilde{Z}_{j,\beta }(t)\right) dt+\beta \tilde{Z}_{j,\beta }(t) dB_j(t), \end{aligned}$$
(5.1)

where we set \(\tilde{Z}_{0,\beta }(t)=0\) and interpret the second term on the RHS of this equation in the Itô sense. Now let us take the diffusion scaling for (5.1): we set

$$\begin{aligned} t=TM,~~ N=TM-X\sqrt{M} \end{aligned}$$
(5.2)

and at the same time we scale \(\beta \) as

$$\begin{aligned} \beta =M^{-1/4}, \end{aligned}$$
(5.3)

and then take the large M limit. The scaling exponent \(-1/4\) in (5.3) is known to be universal: it characterizes the disorder regime referred to as the intermediate disorder regime [1], which lies between the weak and strong disorder regimes of directed polymer models in random media in \(1+1\) dimension.

This \(M^{-1/4}\) scaling can be explained in the following heuristic way. Let \(B_{j}(t),~j=1,\ldots ,N\) be N independent one-dimensional standard Brownian motions. For \(N_1,N_2\in \{1,2,\ldots ,N\}\), we have

$$\begin{aligned} \langle B_{N_1}(t)B_{N_2}(t)\rangle =t \delta _{N_1,N_2}, \end{aligned}$$
(5.4)

where \(\langle \cdot \rangle \) represents the expectation value with respect to the Brownian motions. Now we consider its large M limit under the same scaling as (5.2), i.e. \(t=TM\), and

$$\begin{aligned} N_k=TM-X_k\sqrt{M},~k=1,2. \end{aligned}$$
(5.5)

Noting that \(\lim _{M\rightarrow \infty }\sqrt{M}\delta _{N_1,N_2}=\delta (X_1-X_2)\) under (5.5), we see

$$\begin{aligned} \lim _{M\rightarrow \infty }M^{-1/2}\langle B_{N_1}(t)B_{N_2}(t)\rangle = T\delta (X_1-X_2). \end{aligned}$$
(5.6)

This suggests in a heuristic sense,

$$\begin{aligned} \lim _{M\rightarrow \infty }M^{-1/4}B_{N_k}(t)= \int _0^Tds\,\eta (s,X_k),~k=1,2. \end{aligned}$$
(5.7)

Here \(\eta (T,X)\) with \(T>0\) and \(X\in \mathbb {R}\) is the space-time white noise with mean 0 and \(\delta \)-function covariance,

$$\begin{aligned} \langle \eta (T,X)\rangle =0, ~~\langle \eta (T,X)\eta (T',X')\rangle =\delta (T-T')\delta (X-X'). \end{aligned}$$
(5.8)

Thus, in view of (5.7), we choose the scaling (5.3) for \(\beta \).

Under the scaling (5.2) and (5.3), the following limiting property is established.

$$\begin{aligned} \lim _{M\rightarrow \infty }\sqrt{M}\tilde{Z}_{N,\beta }(t) =\mathcal {Z}(T,X). \end{aligned}$$
(5.9)

Here \(\mathcal {Z}(T,X)\) is the solution to the SHE with the \(\delta \)-function initial condition,

$$\begin{aligned}&\frac{\partial }{\partial T}\mathcal {Z}(T,X)=\frac{1}{2}\frac{\partial ^2}{\partial X^2}\mathcal {Z}(T,X)+\eta (T,X)\mathcal {Z}(T,X), \end{aligned}$$
(5.10)
$$\begin{aligned}&\mathcal {Z}(0,X)=\delta (X), \end{aligned}$$
(5.11)

where \(\eta (T,X)\) is the space-time white noise with mean 0 and \(\delta \)-function covariance (5.8). The SHE (5.10) is known to be well-defined if we interpret the multiplicative noise term as Itô-type [8, 55]. Using this equation, the solution to the KPZ equation can be defined via

$$\begin{aligned} h(T,X)=\log (\mathcal {Z}(T,X)), \end{aligned}$$
(5.12)

which is called the Cole-Hopf solution to the KPZ equation. Recently a new regularization for the KPZ equation was developed in [37] (see also [48]).
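The convergence (5.9) can also be explored numerically by discretizing the system (5.1) with an explicit Euler–Maruyama scheme and imposing the scaling (5.2)–(5.3). The following is a minimal sketch assuming NumPy; the values of T, X, M, the time step, and the initial condition \(\tilde{Z}_{1,\beta }(0)=1\), \(\tilde{Z}_{j,\beta }(0)=0\) for \(j\ge 2\) (our reading of (1.1), since \(Z_1(t)=e^{\beta B_1(t)}\)) are illustrative choices.

```python
# Euler-Maruyama discretization of (5.1) under the scaling (5.2)-(5.3).
# One run produces a single sample of sqrt(M)*Ztilde_N(t), whose law should be close
# to that of the SHE solution Z(T, X) for large M, cf. (5.9). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
T, X, M = 1.0, 0.0, 400
t_final = T*M                        # (5.2)
N = int(round(T*M - X*np.sqrt(M)))   # (5.2)
beta = M**(-0.25)                    # (5.3)

dt = 0.05
steps = int(t_final/dt)
Z = np.zeros(N + 1)                  # Z[0] plays the role of Ztilde_0 = 0
Z[1] = 1.0                           # Ztilde_1(0) = 1, Ztilde_j(0) = 0 for j >= 2

for _ in range(steps):
    dB = rng.normal(0.0, np.sqrt(dt), size=N)
    Z[1:] += (Z[:-1] - Z[1:])*dt + beta*Z[1:]*dB    # Ito (explicit Euler) update of (5.1)

print(np.sqrt(M)*Z[N])               # one sample of the rescaled partition function
```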

According to [10], a rigorous estimate of the convergence (5.9) to the SHE has been obtained for the O’Connell-Yor model in [53], based on the results in [1]. This type of convergence has also been discussed for interacting particle processes [9, 27]. For reference we offer a sketch of the derivation of (5.9). For this purpose, we provide the following lemma.

Lemma 16

For \(\tilde{Z}_{N,\beta }(t)\) defined above (5.1), one has

$$\begin{aligned} \tilde{Z}_{N,\beta }(t) = \sum _{k=0}^{\infty }\beta ^k\sum _{1\le N_1\le \cdots \le N_k\le N} \int _{\Delta _k(0,t)} \prod _{j=1}^kdB_{N_j}(t_j) \cdot \prod _{j=1}^{k+1}Po\left( t_j-t_{j-1},N_{j}-N_{j-1}\right) \end{aligned}$$
(5.13)

where \(Po(t,n):=e^{-t}t^n/n!\) denotes the Poissonian density and \(N_0=1, N_{k+1}=N, s_0=t_0=0, s_N=t_{k+1}=t\). \(\Delta _n(s,t)\) denotes the region of integration \(s<t_1<\dots <t_n<t\) and the Itô integrals on RHS, referred to as the multiple Itô integrals [40, 51], are performed in time order (i.e. the order of \(t_1,\ldots ,t_k\)).

Proof

By the definition of \(Z_N(t)\) (1.1), we have

$$\begin{aligned} \tilde{Z}_{N,\beta }(t)=e^{-t}\int _{0<s_1<\cdots <s_{N-1}<t} \prod _{j=1}^{N-1}ds_j \cdot \prod _{j=1}^Ne^{\beta \left( B_j(s_j)-B_j(s_{j-1})-\frac{\beta (s_j-s_{j-1})}{2}\right) }, \end{aligned}$$
(5.14)

with \(s_0=0,~s_N=t\), and the integrand on the RHS is expressed as

$$\begin{aligned}&\prod _{j=1}^Ne^{\beta \left( B_j(s_j)-B_j(s_{j-1})-\frac{\beta (s_j-s_{j-1})}{2}\right) } =\prod _{j=1}^N\left( 1+e^{\beta \left( B_j(s_j)-B_j(s_{j-1})-\frac{\beta (s_j-s_{j-1})}{2}\right) }-1\right) \nonumber \\&\quad =\sum _{m=0}^{\infty }\sum _{1\le M_1<\cdots <M_m\le N}\prod _{j=1}^m \left( e^{\beta \left( B_{M_j}(s_{M_j})-B_{M_j}(s_{M_j-1})-\frac{\beta (s_{M_j}-s_{M_j-1})}{2}\right) }-1\right) . \end{aligned}$$
(5.15)

Here we use the relation on a one-dimensional standard Brownian motion B(t): one has for \(t>s>0\) and \(\beta >0\),

$$\begin{aligned} e^{\beta \left( B(t)-B(s)-\frac{\beta (t-s)}{2}\right) }=\sum _{n=0}^\infty \beta ^n\int _{\Delta _n(s,t)} \prod _{j=1}^n dB(t_j), \end{aligned}$$
(5.16)

where the Itô integrals on RHS, referred to as the multiple Itô integrals, are performed in time order (i.e. the order of \(t_1,\ldots ,t_n\)) [40, 51]. Using this, we get

$$\begin{aligned}&\prod _{j=1}^Ne^{\beta \left( B_j(s_j)-B_j(s_{j-1})-\frac{\beta (s_j-s_{j-1})}{2}\right) }\nonumber \\&\quad =\sum _{m=0}^{\infty }\sum _{1\le M_1<\cdots <M_m\le N} \prod _{j=1}^m\sum _{n_j=1}^{\infty }\beta ^{n_j}\int _{\Delta _{n_j}(s_{M_j-1},s_{M_j})}\prod _{\ell =1}^{n_j}dB_{M_j}(t_{M_j,\ell })\nonumber \\&\quad =\sum _{k=0}^{\infty }\beta ^k\sum _{m=0}^{\infty }\sum _{1\le M_1<\cdots <M_m\le N} \sum _{\begin{array}{c} n_1,\ldots ,n_m=1 \\ n_1+\cdots +n_m=k \end{array}}^{\infty } \prod _{j=1}^m\int _{\Delta _{n_j}(s_{M_j-1},s_{M_j})}\prod _{\ell =1}^{n_j}dB_{M_j}(t_{M_j,\ell }). \end{aligned}$$
(5.17)

Substituting this into (5.14), and performing the integration on \(s_1,\ldots ,s_{N-1}\), we have

$$\begin{aligned} \tilde{Z}_{N,\beta }(t)= & {} \sum _{k=0}^{\infty }\beta ^k\sum _{m=0}^{\infty }\sum _{1\le M_1<\cdots <M_m\le N} \sum _{\begin{array}{c} n_1,\ldots ,n_m=1 \\ n_1+\cdots +n_m=k \end{array}}^{\infty }\int _{\Delta _k(0,t)} \prod _{j=1}^m\prod _{\ell =1}^{n_j}dB_{M_j}(t_{M_j,\ell })\nonumber \\&\times \, e^{-t}\prod _{j=1}^{m+1}\frac{(t_{M_j,1}-t_{M_{j-1},n_{j-1}})^{M_j-M_{j-1}}}{(M_j-M_{j-1})!} \end{aligned}$$
(5.18)

where we set \(M_0=1,~M_{m+1}=N\). Now we introduce the new variables \(N_j,~t_j,~j=1,\ldots ,k\) by the relation

$$\begin{aligned} N_{n_1+\cdots +n_{j-1}+\ell }=M_j,~t_{n_1+\cdots +n_{j-1}+\ell }=t_{M_j,\ell }~\text {~for~} \ell =1,\ldots ,n_j,~ j=1,\ldots ,m. \end{aligned}$$
(5.19)

Then one has \( dB_{M_j}(t_{M_j,\ell })=dB_{N_{n_1+\cdots +n_{j-1}+\ell }}(t_{n_1+\cdots +n_{j-1}+\ell })\) leading to

$$\begin{aligned} \prod _{j=1}^m\prod _{\ell =1}^{n_j} dB_{M_j}(t_{M_j,\ell })=\prod _{j=1}^kdB_{N_j}(t_j). \end{aligned}$$
(5.20)

Further from (5.19), we have

$$\begin{aligned} e^{-t}\prod _{j=1}^{m+1}\frac{(t_{M_j,1}-t_{M_{j-1},n_{j-1}})^{M_j-M_{j-1}}}{(M_j-M_{j-1})!} =\prod _{j=1}^{k+1}e^{-(t_j-t_{j-1})}\frac{(t_j-t_{j-1})^{N_j-N_{j-1}}}{(N_j-N_{j-1})!} \end{aligned}$$
(5.21)

where we set \(N_0=1,~N_{k+1}=N\). Substituting (5.20) and (5.21) into (5.18) and noting that the summations \(\sum _{m=0}^{\infty }\sum _{1\le M_1<\cdots <M_m\le N} \sum _{\begin{array}{c} n_1,\ldots ,n_m=1 \\ n_1+\cdots +n_m=k \end{array}}^{\infty }\) can be combined into the single sum \(\sum _{1\le N_1\le \cdots \le N_k\le N}\), we obtain (5.13). \(\square \)

Note that under the scaling (5.2), the Poissonian density \(Po(t,N)\) goes to the Gaussian density \(g(T,X)=\exp ({-{X^2}/{2T}})/{\sqrt{2\pi T}}\), i.e.

$$\begin{aligned} \lim _{M\rightarrow \infty }\sqrt{M}Po(t,N-1)= g(T,X). \end{aligned}$$
(5.22)
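This local central limit behavior is easy to check numerically; a quick sketch assuming SciPy, where M, T, X are illustrative values.

```python
# Numerical check of (5.22): sqrt(M) * Po(t, N-1) approaches g(T, X) under the scaling (5.2).
import numpy as np
from scipy.stats import poisson

M, T, X = 10_000, 1.0, 0.5
t = T*M
N = int(round(T*M - X*np.sqrt(M)))
lhs = np.sqrt(M)*poisson.pmf(N - 1, t)            # sqrt(M) * Po(t, N-1)
rhs = np.exp(-X**2/(2*T))/np.sqrt(2*np.pi*T)      # g(T, X)
print(lhs, rhs)   # the two values approach each other as M grows
```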

Furthermore by Theorems 4.3 and 4.5 in [1], for a function \(f(t_1,\ldots ,t_k, N_1,\ldots ,N_k)\) that converges to \({\mathfrak {f}}(u_1,\ldots ,u_k;y_1,\ldots ,y_k)\) under the scaling \(t_i=u_i M\) and \(N_i=u_iM-y_i\sqrt{M},~i=1,\ldots ,k\), we have

$$\begin{aligned}&\lim _{M\rightarrow \infty }\frac{1}{M^{3k/4}}\sum _{1\le N_1\le \cdots \le N_k\le N} \int _{\Delta _k(0,t)} \prod _{j=1}^k dB_{N_j}(t_j) \cdot f(t_1,\ldots ,t_k; N_1,\ldots ,N_k)\nonumber \\&\quad =\int _{\Delta _k(0,T)} \prod _{j=1}^k du_j \cdot \int _{\mathbb {R}^k} \prod _{j=1}^k dy_j \cdot \prod _{m=1}^k\eta (u_m,y_m)\cdot \mathfrak {f}(u_1,\ldots ,u_k;y_1,\ldots ,y_k) \end{aligned}$$
(5.23)

where \(\eta (t,y)\) is the space-time white noise with the \(\delta \)-covariances (5.8). Thus from (5.13), (5.22) and (5.23), we have under the scaling (5.2),

$$\begin{aligned}&\lim _{M\rightarrow \infty }\sqrt{M}\tilde{Z}_{N,\beta }(t)\nonumber \\&\quad =\lim _{M\rightarrow \infty }\sum _{k=0}^{\infty }(\beta M^{1/4})^k \frac{1}{M^{3k/4}} \sum _{1\le N_1\le \cdots \le N_k\le N} \int _{\Delta _k(0,t)} \prod _{j=1}^k dB_{N_j}(t_j) \nonumber \\&\qquad \times \prod _{j=1}^{k+1}M^{1/2}Po(t_j-t_{j-1},N_{j}-N_{j-1})\nonumber \\&\quad =\sum _{k=0}^\infty \int _{\Delta _k(T)} \prod _{j=1}^k dt_j \cdot \int _{\mathbb {R}^k}\prod _{j=1}^k dy_j\cdot \prod _{m=1}^k\eta (t_m,y_m)\cdot \prod _{\ell =1}^{k+1}g(t_{\ell }-t_{\ell -1},y_{\ell }-y_{\ell -1}), \end{aligned}$$
(5.24)

where \(t_0=0,t_{k+1}=T,y_0=0,y_{k+1}=X\). Since the RHS of this equation is easily seen to be the solution of the SHE (5.10) with the \(\delta \)-function initial data (5.11), we obtain (5.9).

5.2 The Asymptotics of the Kernel

In [10], Borodin and Corwin discussed the asymptotics of the Fredholm determinant (4.23) under the scaling limit to the KPZ equation, especially the limiting property of the kernel (4.24) based on the saddle point method. Here we check that a similar saddle point method is applicable to our biorthogonal kernel (4.10). The scaling limit we consider is  (5.9) discussed above, but here we adopt its rephrased version stated in [10],

$$\begin{aligned} \lim _{N\rightarrow \infty }\frac{Z_{N,\beta =1}(t=\sqrt{TN}+X)}{C(N,T,X)}=\mathcal {Z}(T,X), \end{aligned}$$
(5.25)

where C(NTX) is

$$\begin{aligned} C(N,T,X):=\exp \left( N+\frac{\sqrt{TN}+X}{2}+X\sqrt{\frac{N}{T}}\right) \left( \frac{T}{N}\right) ^{\frac{N}{2}}, \end{aligned}$$
(5.26)

which is more suitable for our purpose. To see the equivalence between (5.9) and (5.25), we rewrite the relation (5.9) as

$$\begin{aligned} \lim _{N\rightarrow \infty }\beta ^{-2}\tilde{Z}_{N,\beta }(t)=\mathcal {Z}(T,X), \end{aligned}$$
(5.27)

where we scale \(t,~\beta \) as

$$\begin{aligned} t=N+X\sqrt{\frac{N}{T}},~ \beta =\left( \frac{N}{T}\right) ^{-1/4}. \end{aligned}$$
(5.28)

Furthermore focusing on the scaling property of the partition function \(Z_{N,\beta }(t)=Z_{N,1}(\beta ^2 t) /\beta ^{2(N-1)}\), we find

$$\begin{aligned} \beta ^{-2}\tilde{Z}_{N,\beta }(t)=\frac{1}{\beta ^{2N}e^{t+\beta ^2t/2}} Z_{N,1}(\beta ^2 t) \end{aligned}$$
(5.29)

in distribution. Noticing that, under the scaling (5.28),

$$\begin{aligned} \beta ^2t=\sqrt{TN}+X,~ \beta ^{2N}e^{t+\beta ^2t/2} =C(N,T,X), \end{aligned}$$
(5.30)

where \(C(N,T,X)\) is defined in (5.26), we find that (5.27) is equivalent to (5.25).
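The identities (5.30) are elementary consequences of (5.28) and (5.26) and can be confirmed by direct arithmetic; a trivial check assuming NumPy, with arbitrary illustrative values of N, T, X.

```python
# Check of (5.30): beta^2 t = sqrt(TN) + X and beta^{2N} e^{t + beta^2 t/2} = C(N,T,X) under (5.28).
import numpy as np

N, T, X = 7, 2.3, 0.4
t = N + X*np.sqrt(N/T)        # (5.28)
beta = (N/T)**(-0.25)         # (5.28)

C = np.exp(N + (np.sqrt(T*N) + X)/2 + X*np.sqrt(N/T))*(T/N)**(N/2)   # (5.26)
print(beta**2*t - (np.sqrt(T*N) + X))                 # vanishes up to rounding
print(beta**(2*N)*np.exp(t + beta**2*t/2) - C)        # vanishes up to rounding
```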

For the moment generating function,  (5.25) implies

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathbb {E}\left( e^{-e^{-u}Z_{N,1}\left( \sqrt{TN}+X\right) }\right) =\mathbb {E}\left( e^{-e^{-u'}\mathcal {Z}(T,X)}\right) =\mathbb {E}\left( e^{-e^{-u'+h(T,X)}}\right) , \end{aligned}$$
(5.31)

where on LHS, u is set to be

$$\begin{aligned} u=u'+\log C(N,T,X),~ \end{aligned}$$
(5.32)

with \(C(N,T,X)\) (5.26), and in the last equality in (5.31) we used (5.12). The scaling exponents of the KPZ universality class tell us that the fluctuation of the height \(h(T,X)\) and the position X scale as \(T^{1/3}\) and \(T^{2/3}\) respectively for large T. Taking this into account, we set

$$\begin{aligned} h\left( T,2\gamma _T^2Y\right) =-\frac{\gamma _T^3}{12}+\gamma _T(\tilde{h}(T,Y)-Y^2), \end{aligned}$$
(5.33)

where \(\gamma _T=(T/2)^{1/3}\). The first term \(-\gamma _T^3/12=-T/24\) represents the macroscopic growth with a constant velocity. The height fluctuation is expressed as \(\tilde{h}(T,Y)\) and the term \(Y^2\) reflects the fact that the SHE with the delta-function initial data (5.11) corresponds to the parabolic growth in the KPZ equation [2, 66, 69]. Thus substituting \(u'=\gamma _Ts-\gamma _T^3/12-\gamma _TY^2\), \(X=2\gamma _T^2Y\) into (5.32), we arrive at the modified scaling

$$\begin{aligned} u=\gamma _Ts-\frac{\gamma _T^3}{12}-\gamma _TY^2+N+\frac{\sqrt{TN}+2\gamma _T^2Y}{2}+2\gamma _T^2Y\sqrt{\frac{N}{T}}+\frac{N}{2}\log \frac{T}{N}. \end{aligned}$$
(5.34)

Hence (5.31) is rewritten as

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathbb {E}\left( e^{-e^{-u}Z_{N,1}\left( \sqrt{TN}+2\gamma _T^2Y\right) }\right) =\mathbb {E}\left( e^{-e^{\gamma _T(\tilde{h}(T,Y)-s)}}\right) \end{aligned}$$
(5.35)

with the scaling  (5.34). This is the scaling limit of the moment generating function from the O’Connell-Yor polymer to the KPZ equation.

It is known that the RHS of this equation can be represented as a Fredholm determinant [24, 29, 30],

$$\begin{aligned} \mathbb {E}\left( e^{-e^{\gamma _T(\tilde{h}(T,Y)-s)}}\right) =\det \big (1-\mathcal {K}_{\text {KPZ}}\big )_{L^2(\mathbb {R})}, \end{aligned}$$
(5.36)

where the kernel \(\mathcal {K}_{\text {KPZ}}(\xi _1,\xi _2)\) is expressed as

$$\begin{aligned} \mathcal {K}_{\text {KPZ}}(\xi _1,\xi _2)=\frac{e^{\gamma _T(\xi _1-s)}}{e^{\gamma _T(\xi _1-s)}+1} \int _{0}^{\infty }d\lambda \, \mathrm{Ai}(\xi _1+\lambda )\mathrm{Ai}(\xi _2+\lambda ). \end{aligned}$$
(5.37)

Note that Y does not appear on the RHS of this equation. This kernel first appeared in the studies of the KPZ equation for the narrow wedge initial condition [2, 66–69]. From the relation (5.36) we readily get the distribution of the scaled height \(\tilde{h}(T,Y)\) given in (5.33).
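For reference, the Fredholm determinant (5.36) with the kernel (5.37) can be evaluated numerically by a quadrature (Nyström-type) discretization. The following is a rough sketch assuming NumPy/SciPy; the values of \(\gamma _T\) and s, the truncation window, and the number of nodes are illustrative choices.

```python
# Quadrature approximation of det(1 - K_KPZ) on L^2(R) with the kernel (5.37).
import numpy as np
from scipy.integrate import quad
from scipy.special import airy
from numpy.polynomial.legendre import leggauss

gamma_T, s = 1.0, 0.0                 # illustrative parameters
Ai = lambda z: airy(z)[0]

def airy_tail(x1, x2):                # int_0^infty Ai(x1+lam) Ai(x2+lam) dlam (truncated)
    return quad(lambda lam: Ai(x1 + lam)*Ai(x2 + lam), 0, 40, limit=200)[0]

def K_KPZ(x1, x2):                    # the kernel (5.37)
    sigma = 1.0/(1.0 + np.exp(-gamma_T*(x1 - s)))
    return sigma*airy_tail(x1, x2)

a, b, n = -10.0, 10.0, 40             # truncation window and number of Gauss-Legendre nodes
z, w = leggauss(n)
xs = 0.5*(b - a)*z + 0.5*(b + a)
ws = 0.5*(b - a)*w

Kmat = np.array([[K_KPZ(xi, xj)*wj for xj, wj in zip(xs, ws)] for xi in xs])
print(np.linalg.det(np.eye(n) - Kmat))   # approximates E[exp(-e^{gamma_T(htilde(T,Y)-s)})]
```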

By combining the formula (4.17) for the O’Connell-Yor polymer and the limiting relation (5.35) from the O’Connell-Yor polymer to the KPZ equation, we can obtain (5.36) by showing

$$\begin{aligned} \lim _{N\rightarrow \infty }\det \left( 1-\bar{f}_uK\right) _{L^2(\mathbb {R})} =\det \left( 1-\mathcal {K}_{\text {KPZ}}\right) _{L^2(\mathbb {R})} \end{aligned}$$
(5.38)

under (5.34). This was indeed already discussed in [10] by using the kernel (4.24). Here we show that the kernel (5.37) appears rather easily from the scaling limit of our biorthogonal kernel (4.10). Using the saddle point method, we get the following:

Proposition 17

$$\begin{aligned} \lim _{N\rightarrow \infty }\bar{f}_u(x_1)K\left( x_1,x_2;\sqrt{TN}+2\gamma _T^2Y\right) = e^{\frac{\gamma _T}{2}(\xi _1-\xi _2)} \mathcal {K}_\mathrm{{KPZ}}(\xi _1,\xi _2). \end{aligned}$$
(5.39)

Here the kernel is expressed in terms of \(\phi _k(x_1;t)\) and \(\psi _k(x_2;t)\) defined by (4.1) and (2.1) respectively as

$$\begin{aligned} \bar{f}_u(x_1)K(x_1,x_2;t)=\frac{e^{x_1-u}}{e^{x_1-u}+1}\sum _{k=0}^{N-1}\phi _k(x_1;t)\psi _k(x_2;t), \end{aligned}$$
(5.40)

and we set u to be (5.34) and

$$\begin{aligned} x_i=\gamma _T\xi _i-\frac{\gamma _T^3}{12}-\gamma _TY^2+N+ \frac{(TN)^{1/2}+2\gamma _T^2Y}{2}+2\gamma _T^2Y\sqrt{\frac{N}{T}} +\frac{N}{2}\log \frac{T}{N}. \end{aligned}$$
(5.41)

Since the factor \(e^{\frac{\gamma _T}{2}(\xi _1-\xi _2)}\) in (5.39) does not contribute to the Fredholm determinant, we get (5.38) (though for a complete proof one has to prove the convergence of the Fredholm determinant itself, not only of the kernel). Note that (5.40) has a similar structure to the kernel (4.20) of the GUE random matrices. When we discuss certain large N limits of the GUE, such as the bulk and the edge scaling limits, the nontrivial step reduces to the scaling limit of the Hermite polynomials in (4.20). The same is true in our case: the only nontrivial step for getting (5.39) is the asymptotics of the functions \(\psi _k(x;t)\) (2.1) and \(\phi _k(x;t)\) (4.1). Based on the saddle point method, we obtain the following result, whose proof is given in Appendix 3.

Lemma 18

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{\gamma _T}{C(N)}\psi _k(x_i;t)=\lim _{N\rightarrow \infty }\frac{N^{1/2}C(N)}{(2\gamma _T)^{1/2}}\phi _k(x_i;t)={\mathrm{Ai}(\xi _i-\lambda )}, ~~i=1,2, \end{aligned}$$
(5.42)

where we set \(x_i\) as (5.41) and k and t as

$$\begin{aligned} k=N+\frac{N^{1/2}}{(2\gamma _T)^{1/2}}\lambda ,~t=\sqrt{TN}+2\gamma _T^2Y. \end{aligned}$$
(5.43)

The constant C(N) is represented as \( C(N)=e^{\sum _{j=1}^5C_j} \) in terms of \(C_1,\ldots ,C_5\) defined by (9.10), (9.14) and (9.16) in Appendix 3.

On the other hand, when we take the same limit for the other representation (4.23), we can also apply the saddle point analysis to the kernel (4.24) and obtain a limiting kernel. Since it does not correspond to the kernel (5.37) directly, however, an additional step is needed to show the equivalence between the Fredholm determinant with that limiting kernel and the one with (5.37) (see Sect. 5.4.3 in [10]).

Proof of Proposition 17

Combining the estimate (5.42) with the simple fact

$$\begin{aligned} \frac{e^{x_i-u}}{e^{x_i-u}+1}=\frac{e^{\gamma _T(\xi _i-s)}}{e^{\gamma _T(\xi _i-s)}+1},~i=1,2, \end{aligned}$$
(5.44)

under (5.34) and (5.41), we immediately obtain the result (5.39). \(\square \)

6 Conclusion

For the O’Connell-Yor directed random polymer model, we have established the representation (2.7) of the moment generating function of the partition function in terms of a determinantal function, which can be regarded as a one-parameter deformation of the eigenvalue density function of the GUE random matrices.

There are some special mathematical structures behind the O’Connell-Yor model which play a crucial role in deriving the relation. The first is the determinantal representation (2.11), which is essentially the one with the Sklyanin measure in [57]. Next, we introduced another determinantal measure on enlarged degrees of freedom (2.19). Our main theorem then follows readily from a simple fact about two marginals of this measure (Theorem 6).

We can regard our approach as a generalization of the one in [78] which retains its determinantal structure. To see this, we needed to reinterpret the dynamics on the Gelfand–Tsetlin cone introduced in [78] in terms of the weight (3.33) supported on the partially ordered space \(V_N\) (3.34); from this viewpoint, our approach is a natural generalization of [78]. It would be an interesting future problem to find a clear relation with the Macdonald processes [10], which give another generalization of [78].

Applying familiar techniques of random matrix theory to the main result, we have readily obtained a Fredholm determinant representation of the moment generating function, whose kernel is expressed in terms of biorthogonal functions, both families of which are simple deformations of the Hermite polynomials. The asymptotics of the kernel under the scaling limit to the KPZ equation can then be estimated easily by a saddle point analysis.