1 Introduction

We study an exactly solvable one-dimensional random walk in a space–time i.i.d. random environment. It is a random walk on \(\mathbb {Z}\) which performs nearest neighbour steps according to transition probabilities following the Beta distribution, drawn independently at each time and each location. We call this model the Beta RWRE. Using methods of integrable probability, we find an exact Fredholm determinantal formula for the Laplace transform of the quenched probability distribution of the walker’s position. An asymptotic analysis of this formula allows us to prove a very precise limit theorem. It was already known that such a random walk satisfies a quenched large deviation principle [34]. We show that for the Beta RWRE, the second order correction to the large deviation principle fluctuates on the cube-root scale with Tracy–Widom statistics. This brings random walks in dynamic random environment within the scope of KPZ universality, and the Beta RWRE is the first RWRE for which such a limit theorem has been proved. Moreover, our result can be translated into a statement about the maximum of the locations of independent walkers in the same environment. Hence, the Beta RWRE can also be considered as a toy model for studying maxima of strongly correlated random variables.

Our route to discover the exact solvability of the Beta RWRE was through an equivalent directed polymer model with Beta weights, which is itself a limit of the q-Hahn TASEP (introduced in [30] and further studied in [15]). However, we show that the RWRE/polymer model can be analysed independently of its interacting particle system origin, via a rigorous variant of the replica method.

Our work generalizes a study in a similar spirit, where a limit of the discrete-time geometric q-TASEP [3] was related to the strict-weak lattice polymer [17] (see also [29]). It should be emphasized that this procedure of translating the algebraic structure of interacting particle systems to directed polymer models had already been fruitful in [4], where formulas for the q-TASEP made it possible to study the law of continuous directed polymers related to the KPZ equation.

2 Definitions and main results

2.1 Random walk in space–time i.i.d. Beta environment

Definition 2.1

Let \((B_{x, t})_{x\in \mathbb {Z}, t\in \mathbb {Z}_{\geqslant 0}}\) be a collection of independent random variables following the Beta distribution, with parameters \(\alpha \) and \(\beta \). We call this collection of random variables the environment of the walk. Recall that if a random variable B is drawn according to the \(Beta(\alpha , \beta )\) distribution, then for \(0\leqslant r \leqslant 1\),

$$\begin{aligned} \mathbb {P}\left( B\leqslant r\right) = \int _0^r x^{\alpha -1} (1-x)^{\beta -1} \frac{\Gamma (\alpha +\beta )}{\Gamma (\alpha )\Gamma (\beta )}\ \mathrm {d}x. \end{aligned}$$

In this environment, we define the random walk in space–time Beta environment (abbreviated Beta-RWRE) as a random walk \((X_t)_{t\in \mathbb {Z}_{\geqslant 0}}\) in \(\mathbb {Z}\), starting from 0 and such that

  • \(X_{t+1}=X_t +1\) with probability \(B_{X_t, t}\) and

  • \(X_{t+1}=X_t -1\) with probability \(1- B_{X_t, t}\).

A sample path is depicted in Fig. 1. We denote by \(\mathsf {P}\) and \(\mathsf {E}\) (resp. \(\mathbb {P}\) and \(\mathbb {E}\)) the measure and expectation associated to the random walk (resp. to the environment).

Fig. 1
figure 1

The graph of \(t\mapsto X_t\) for the Beta RWRE. One sees that the random walk \(\mathbf {X}_t:=(t, X_t)\) is also a (directed) random walk in a random environment in \(\mathbb {Z}^2\)
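The dynamics of Definition 2.1 are straightforward to simulate. The following sketch (Python standard library only; all function names are ours) draws an environment on a finite window and runs one walk in it:

```python
import random

def sample_environment(alpha, beta, t_max, x_range):
    """Draw i.i.d. Beta(alpha, beta) variables B[(x, t)] for the environment."""
    return {(x, t): random.betavariate(alpha, beta)
            for t in range(t_max) for x in range(-x_range, x_range + 1)}

def sample_path(env, t_max):
    """Run one walk in a fixed environment: step +1 w.p. B[(X_t, t)], else -1."""
    x, path = 0, [0]
    for t in range(t_max):
        x += 1 if random.random() < env[(x, t)] else -1
        path.append(x)
    return path

random.seed(1)
env = sample_environment(2.0, 2.0, t_max=50, x_range=50)
path = sample_path(env, 50)
```

Running several walks with the same `env` but different seeds for the walk gives independent walkers in a common environment, the situation considered in Sect. 6.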

Let \(P(t, x)=\mathsf {P}(X_t\geqslant x)\). This is a random variable with respect to \(\mathbb {P}\). Our first aim is to show that the Beta RWRE model is exactly solvable, in the sense that we are able to find the distribution of P(t, x) by exploiting an exact formula for the Laplace transform of P(t, x).
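Because the environment is i.i.d. in space and time, the quenched distribution of \(X_t\), and hence P(t, x), can be computed exactly for a sampled environment by propagating the one-step transition probabilities forward in time. A minimal Python sketch (function name ours):

```python
import random

def quenched_tail(env, t, x):
    """P(t, x) = P(X_t >= x), computed exactly by forward dynamic programming
    in a fixed environment env[(y, s)] of Beta variables."""
    p = {0: 1.0}  # quenched law of X_0
    for s in range(t):
        q = {}
        for y, mass in p.items():
            b = env[(y, s)]
            q[y + 1] = q.get(y + 1, 0.0) + mass * b
            q[y - 1] = q.get(y - 1, 0.0) + mass * (1.0 - b)
        p = q
    return sum(mass for y, mass in p.items() if y >= x)

random.seed(0)
env = {(y, s): random.betavariate(1.0, 1.0) for s in range(10) for y in range(-10, 11)}
# sanity checks: P(t, -t) = 1 and the tail is monotone in x
assert abs(quenched_tail(env, 10, -10) - 1.0) < 1e-12
assert quenched_tail(env, 10, 2) <= quenched_tail(env, 10, 0)
```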

Remark 2.2

The random walk \((\mathbf {X}_t)_t\) in \(\mathbb {Z}^2\), where \(\mathbf {X}_t:=(t, X_t)\), is a random walk in random environment in the classical sense, i.e. the environment is not dynamic (see Fig. 1). It is a very particular case of a random walk in Dirichlet random environment [21]. Dirichlet RWREs have generated some interest because it can be shown, using connections between the Dirichlet law and the Pólya urn scheme, that the annealed law of such random walks is the same as that of oriented-edge-reinforced random walks [20]. However, since the random walk \((\mathbf {X}_t)\) can go through a given edge of \(\mathbb {Z}^2\) at most once, the connection to self-reinforced random walks is irrelevant for the Beta RWRE.

Remark 2.3

  • The Beta distribution with parameters (1, 1) is the uniform distribution on (0, 1).

  • For B a random variable with \(Beta(\alpha , \beta )\) distribution, \(1-B\) is distributed according to a Beta distribution with parameters \((\beta , \alpha )\). Consequently, exchanging the parameters \(\alpha \) and \(\beta \) of the Beta RWRE corresponds to applying a symmetry with respect to the time axis.

Fig. 2
figure 2

The thick line represents a possible polymer path in the point-to-point Beta polymer model. The dotted thick part represents a modification of the polymer path that is admissible if one considers the half-line to point polymer (see Sect. 2.2.2). The partition function \(\tilde{Z}(s,k)\) for the half-line to point model at the point (s, k) shown in gray equals 1

2.2 Definition of the Beta polymer

2.2.1 Point to point Beta polymer

Definition 2.4

A point-to-point Beta polymer is a measure \(Q_{t,n}\) on lattice paths \(\pi \) between (0, 1) and (t, n). At each site (s, k) the path is allowed to

  • jump horizontally to the right from (s, k) to \((s+1,k)\),

  • or jump diagonally to the up-right from (s, k) to \((s+1,k+1)\).

An admissible path is shown in Fig. 2. Let \(B_{i,j}\) be independent random variables distributed according to the Beta distribution with parameters \(\mu \) and \(\nu -\mu \) where \(0<\mu <\nu \). The measure \(Q_{t,n}\) is defined by

$$\begin{aligned} Q_{t,n}\left( \pi \right) = \frac{\prod _{e\in \pi } w_e}{Z(t,n)} \end{aligned}$$

where the product is taken over the edges of \(\pi \) and the weights \(w_e\) are defined by

$$\begin{aligned} w_e = {\left\{ \begin{array}{ll} B_{ij} &{}\text{ if } e=(i-1,j)\rightarrow (i,j) \\ 1 &{}\text{ if } e=(i-1, i)\rightarrow (i,i+1)\\ 1-B_{i,j}&{}\text{ if } e=(i-1,j-1)\rightarrow (i,j) \quad \text{ with } i\geqslant j, \end{array}\right. } \end{aligned}$$

and Z(t, n) is a normalisation constant called the partition function,

$$\begin{aligned} Z(t,n) = \sum _{\pi : (0,1)\rightarrow (t,n)} \prod _{e\in \pi } w_e. \end{aligned}$$

The free energy of the Beta polymer is \(\log Z(t,n)\). The partition function of the Beta polymer satisfies the recurrence

$$\begin{aligned} {\left\{ \begin{array}{ll} Z(t, n) = Z(t-1,n) B_{t,n} + Z(t-1,n-1)(1- B_{t,n}) &{} \text {for } t \geqslant n > 1 ,\\ Z(t,t+1) = Z(t-1, t) &{}\text {for }t>0, \\ Z(t,1) = Z(t-1,1) B_{t,1} &{}\text {for }t>0. \end{array}\right. } \end{aligned}$$
(1)

with the initial data

$$\begin{aligned} Z(0,1) = 1. \end{aligned}$$
(2)
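The recurrence (1), together with the initial data (2), translates directly into a dynamic program over the triangle \(1\leqslant n\leqslant t+1\). A Python sketch (function name ours):

```python
import random

def beta_polymer_Z(mu, nu, t_max):
    """Partition functions Z(t, n) via the recurrence (1) with Z(0, 1) = 1.
    The B[(t, n)] are i.i.d. Beta(mu, nu - mu) weights."""
    B = {(t, n): random.betavariate(mu, nu - mu)
         for t in range(1, t_max + 1) for n in range(1, t_max + 2)}
    Z = {(0, 1): 1.0}
    for t in range(1, t_max + 1):
        Z[(t, 1)] = Z[(t - 1, 1)] * B[(t, 1)]          # bottom boundary
        for n in range(2, t + 1):                       # bulk recurrence
            Z[(t, n)] = Z[(t - 1, n)] * B[(t, n)] + Z[(t - 1, n - 1)] * (1 - B[(t, n)])
        Z[(t, t + 1)] = Z[(t - 1, t)]                   # top boundary
    return Z

random.seed(2)
Z = beta_polymer_Z(mu=1.0, nu=2.0, t_max=20)
```

Consistently with Proposition 2.6 below, every value produced this way lies in [0, 1], and the top boundary values equal 1.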

Remark 2.5

In the \(\nu \rightarrow \infty \) limit, one recovers the strict-weak lattice polymer described in [17, 29]. As \(\nu \) goes to infinity,

$$\begin{aligned} \nu \cdot Beta(\mu , \nu -\mu ) \Rightarrow Gamma(\mu ), \end{aligned}$$

and \(1-Beta(\mu , \nu -\mu )\Rightarrow 1\). There are \(t-n+1\) horizontal edges in any admissible lattice path from (0, 1) to (t, n), and thus

$$\begin{aligned} \bar{Z}(t,n) :=\lim _{\nu \rightarrow \infty } \nu ^{t-n+1} Z(t,n) \end{aligned}$$

is the partition function of the strict-weak polymer. Indeed, in the strict-weak polymer, the horizontal edges have \(Gamma(\mu )\)-distributed weights whereas the diagonal edges have weight 1.
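The convergence \(\nu \cdot Beta(\mu ,\nu -\mu )\Rightarrow Gamma(\mu )\) can be checked on moments: for B a Beta\((\mu ,\nu -\mu )\) variable, \(\mathbb {E}[B^k]=\Gamma (\mu +k)\Gamma (\nu )/(\Gamma (\mu )\Gamma (\nu +k))\), so \(\mathbb {E}[(\nu B)^k]\rightarrow \Gamma (\mu +k)/\Gamma (\mu )\), which is the k-th moment of Gamma\((\mu )\). A numerical sketch of this computation (function names ours):

```python
import math

def scaled_beta_moment(k, mu, nu):
    """k-th moment of nu * Beta(mu, nu - mu), from the Beta moment formula
    E[B^k] = Gamma(mu + k) Gamma(nu) / (Gamma(mu) Gamma(nu + k))."""
    return math.exp(k * math.log(nu) + math.lgamma(mu + k) + math.lgamma(nu)
                    - math.lgamma(mu) - math.lgamma(nu + k))

def gamma_moment(k, mu):
    """k-th moment of the Gamma(mu) distribution: Gamma(mu + k) / Gamma(mu)."""
    return math.exp(math.lgamma(mu + k) - math.lgamma(mu))

mu = 1.5
for k in (1, 2, 3):
    assert abs(scaled_beta_moment(k, mu, nu=1e7) - gamma_moment(k, mu)) < 1e-4
```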

2.2.2 Half-line to point Beta polymer

Another (equivalent) possible interpretation of the same quantity Z(t, n) is as the partition function of an ensemble of polymer paths starting from the “half-line” \(\lbrace (0,n) :n>0\rbrace \). Fix \(t\geqslant 0\) and \(n>0\). One considers paths starting from any point (0, m) for \(0<m\leqslant n\) and ending at (t, n). As for the point-to-point Beta polymer, paths are allowed to make right and diagonal steps. The weight of any path is the product of the weights of each edge along the path, and the weight \(\tilde{w}_e\) of the edge e is now defined by

$$\begin{aligned} \tilde{w}_e = {\left\{ \begin{array}{ll} B_{ij} &{}\text{ if } e\text { is the horizontal edge }(i-1,j)\rightarrow (i,j), \\ 1-B_{i,j}&{}\text{ if } e\text { is the diagonal edge }(i-1,j-1)\rightarrow (i,j). \end{array}\right. } \end{aligned}$$

Let us denote by \(\tilde{Z}(t,n)\) the partition function in the half-line to point model. It is characterized by the recurrence

$$\begin{aligned} \tilde{Z}(t, n) = \tilde{Z}(t-1,n) B_{t,n} + \tilde{Z}(t-1,n-1)(1- B_{t,n}) \end{aligned}$$

for all \(t,n>0\) and the initial condition \(\tilde{Z}(0,n)=1\) for \(n>0\). With the above definition of weights, we can see by induction that for any \(t\geqslant 0\) and \(n>t\), \(\tilde{Z}(t, n)=1\). For example, in Fig. 2, the possible paths leading to (s, k) are shown in gray. On the figure, one has

$$\begin{aligned} \tilde{Z}(s,k)= & {} \tilde{Z}(2,6)\\= & {} B_{1,6}B_{2,6} + (1-B_{1,6})B_{2,6} + B_{1,5}(1-B_{2,6}) + (1-B_{1,5})(1-B_{2,6}) =1. \end{aligned}$$

Consequently, the partition functions of the half-line-to-point and the point-to-point model coincide for \(t+1\geqslant n\). In the following, we drop the tilde above Z, even when considering the half-line-to-point model, since the models are equivalent.
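The fact that \(\tilde{Z}(t,n)=1\) whenever \(n>t\) can be confirmed numerically from the recurrence; a Python sketch (function name ours):

```python
import random

def half_line_Z(B, t_max, n_max):
    """Half-line-to-point partition functions via the recurrence
    Z~(t, n) = Z~(t-1, n) B[t, n] + Z~(t-1, n-1) (1 - B[t, n]),
    with Z~(0, n) = 1 for n > 0 (paths may start anywhere on the half-line)."""
    Z = {(0, n): 1.0 for n in range(1, n_max + 1)}
    for t in range(1, t_max + 1):
        for n in range(1, n_max + 1):
            below = Z.get((t - 1, n - 1), 0.0)  # no path enters from n = 0
            Z[(t, n)] = Z[(t - 1, n)] * B[(t, n)] + below * (1 - B[(t, n)])
    return Z

random.seed(3)
B = {(t, n): random.betavariate(1.0, 1.0) for t in range(1, 9) for n in range(1, 13)}
Z = half_line_Z(B, t_max=8, n_max=12)
# as claimed, the partition function equals 1 whenever n > t
assert all(abs(Z[(t, n)] - 1.0) < 1e-12 for t in range(9) for n in range(t + 1, 13))
```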

By deforming the lattice so that admissible paths are up/right, and reversing the orientation of the paths, one sees that the Beta polymer and the Beta-RWRE are closely related models, in the sense of Proposition 2.6. This proposition is proved in Sect. 3.3.

Proposition 2.6

Consider the Beta-RWRE with parameters \(\alpha , \beta >0\) and the Beta polymer with parameters \(\mu =\alpha \) and \(\nu =\alpha +\beta \). For any fixed \(t, n\in \mathbb {Z}_{\geqslant 0}\) such that \(t+1\geqslant n\), we have the equality in law

$$\begin{aligned} Z(t, n) = P(t, t-2n+2). \end{aligned}$$

Moreover, conditioning on the environment of the Beta polymer corresponds to conditioning on the environment of the Beta RWRE.
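Proposition 2.6 can be probed numerically at the annealed level. Taking expectations in the recurrence (1), and using that \(B_{t,n}\) is independent of the partition functions at time \(t-1\), replaces every weight by its mean \(p=\mu /\nu =\alpha /(\alpha +\beta )\); the result should match \(\mathsf {P}(X_t\geqslant t-2n+2)\) for the annealed walk, which is a simple random walk stepping \(+1\) with probability p. A Python sketch of this consistency check (function names ours):

```python
import math

def annealed_Z(p, t_max):
    """E[Z(t, n)]: recurrence (1) with every Beta weight replaced by its mean p,
    valid since B_{t,n} is independent of the partition functions at time t-1."""
    Z = {(0, 1): 1.0}
    for t in range(1, t_max + 1):
        Z[(t, 1)] = Z[(t - 1, 1)] * p
        for n in range(2, t + 1):
            Z[(t, n)] = Z[(t - 1, n)] * p + Z[(t - 1, n - 1)] * (1 - p)
        Z[(t, t + 1)] = Z[(t - 1, t)]
    return Z

def srw_tail(t, x, p):
    """P(X_t >= x) for a walk stepping +1 w.p. p: X_t >= x iff the number of
    down-steps k satisfies k <= (t - x) / 2."""
    kmax = (t - x) // 2
    return sum(math.comb(t, k) * (1 - p) ** k * p ** (t - k)
               for k in range(0, kmax + 1))

alpha, beta, t = 2.0, 3.0, 7
p = alpha / (alpha + beta)  # mu / nu with mu = alpha, nu = alpha + beta
Z = annealed_Z(p, t)
for n in range(1, t + 2):
    assert abs(Z[(t, n)] - srw_tail(t, t - 2 * n + 2, p)) < 1e-12
```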

2.3 Bernoulli-Exponential directed first passage percolation

Let us introduce the “zero-temperature” counterpart of the Beta RWRE.

Definition 2.7

Let \((E_e)\) be a family of independent exponential random variables indexed by the horizontal and vertical edges e in the lattice \(\mathbb {Z}^2\), such that \(E_e\) is distributed according to the exponential law with parameter a (i.e. with mean 1/a) if e is a vertical edge, and \(E_e\) is distributed according to the exponential law with parameter b if e is a horizontal edge. Let \((\xi _{i,j})\) be a family of independent Bernoulli random variables with parameter \( b/(a+b)\). For an edge e of the lattice \(\mathbb {Z}^2\), we define the passage time \(t_e\) by

$$\begin{aligned} t_e= {\left\{ \begin{array}{ll} \xi _{i,j}E_e &{}\text {if }e\text { is the vertical edge }(i,j) \rightarrow (i, j+1),\\ (1-\xi _{i,j})E_e&{}\text {if }e\text { is the horizontal edge }(i,j)\rightarrow (i+1, j). \end{array}\right. } \end{aligned}$$
(3)

The first passage time T(n, m) in the Bernoulli-Exponential first passage percolation model is given by

$$\begin{aligned} T(n, m) = \min _{\pi :(0, 0)\rightarrow D_{n, m}}\ \sum _{e\in \pi } \ t_e , \end{aligned}$$

where the minimum is taken over all up/right paths \(\pi \) from (0, 0) to \(D_{n, m}\), which is the set of points

$$\begin{aligned} D_{n, m} = \lbrace (i, n+m-i) :0\leqslant i\leqslant n\rbrace . \end{aligned}$$
Fig. 3
figure 3

Percolation cluster for the Bernoulli-Exponential model with parameters \(a=b=1\) in a grid of size \(100\times 100\). The different shades of gray correspond to different times: the black line corresponds to the percolation cluster at time 0 and the other shades of gray correspond to times 0.2, 0.5 and 1.2. This implies that for n and m chosen as on the figure, \( 0.2\leqslant T(n,m)\leqslant 0.5\)
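Since only up/right paths are allowed, the passage times can be sampled by a dynamic program that examines each edge of the grid exactly once. A simulation sketch (Python, function names ours; edge times follow (3)):

```python
import random

def passage_times(a, b, N, M):
    """Point-to-point first passage times T^pp(i, j) on the grid [0, N] x [0, M],
    via dynamic programming over up/right paths from (0, 0)."""
    xi = {(i, j): random.random() < b / (a + b)
          for i in range(N + 1) for j in range(M + 1)}
    T = {(0, 0): 0.0}
    for i in range(N + 1):
        for j in range(M + 1):
            if (i, j) == (0, 0):
                continue
            best = float("inf")
            if i > 0:  # horizontal edge (i-1, j) -> (i, j), time (1 - xi) * Exp(b)
                t_e = (1 - xi[(i - 1, j)]) * random.expovariate(b)
                best = min(best, T[(i - 1, j)] + t_e)
            if j > 0:  # vertical edge (i, j-1) -> (i, j), time xi * Exp(a)
                t_e = xi[(i, j - 1)] * random.expovariate(a)
                best = min(best, T[(i, j - 1)] + t_e)
            T[(i, j)] = best
    return T

def T_half_line(T, n, m):
    """T(n, m): minimum over the endpoints D_{n,m} = {(i, n+m-i) : 0 <= i <= n}."""
    return min(T[(i, n + m - i)] for i in range(n + 1))

random.seed(4)
T = passage_times(a=1.0, b=1.0, N=20, M=20)
assert T_half_line(T, 8, 10) <= T[(8, 10)]  # point-to-point dominates
```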

Although the quantity that we are fully able to study is T(n, m), which is a point-to-half-line passage time, it is also natural to introduce the point-to-point passage time \(T^{pp}(n,m)\) defined by

$$\begin{aligned} T^{pp}(n, m) = \min _{\pi :(0, 0)\rightarrow (n,m)}\ \sum _{e\in \pi } \ t_e, \end{aligned}$$

where the minimum is taken over up/right paths between the points (0, 0) and (n, m). We define the percolation cluster C(t) by

$$\begin{aligned} C(t) = \lbrace (n,m) :T^{pp}(n,m) \leqslant t\rbrace . \end{aligned}$$

It can be constructed in a dynamic way (see Fig. 3). At each time t, C(t) is the union of points visited by (portions of) several directed up/right random walks in the quarter plane \(\mathbb {Z}_{\geqslant 0}^2\). The evolution is as follows:

  • At time 0, the percolation cluster contains the points of the path of a directed random walk starting from (0, 0). Indeed, since for any i, j, \(\xi _{i,j}\) is a Bernoulli random variable in \(\lbrace 0,1\rbrace \), either the passage time from (i, j) to \((i+1,j)\) is zero, or the passage time from (i, j) to \((i,j+1)\) is zero. This implies that there exists a unique infinite up-right path starting from (0, 0) with zero passage time. This path is distributed as a directed random walk.

  • At time t, from each point on the boundary of the percolation cluster where a random walk can branch, we add the path of that random walk to the percolation cluster after an exponentially distributed waiting time. Paths starting with a vertical (resp. horizontal) edge are added at rate a (resp. b). This random walk almost surely crosses the percolation cluster somewhere, and we add to the percolation cluster only the points of the walk path up to the first hitting point. Indeed, any edge \(e=(x,y)\) from a point x inside C(t) to a point y outside C(t) has a positive passage time. Hence, one adds the point y to the percolation cluster after an exponentially distributed waiting time \(t_e\). Once the point y is added, one immediately adds to C(t) all the points that one can reach from y with zero passage time. These points form a portion of a random walk path that will almost surely coalesce with the initial random walk path C(0).

Fig. 4
figure 4

An admissible path for the Bernoulli-Exponential FPP model is shown on the figure. T(n, m) is the passage time between (0, 0) and \(D_{n,m}\) (thick gray line). Note that the first passage time to \(D_{n,m}\) is also the first passage time to \(\tilde{D}_{n,m}\), depicted in dotted gray on the figure (cf. Remark 2.8)

Remark 2.8

Denote by \(\tilde{D}_{n,m}\) the set of points \(\lbrace (i, m) :0\leqslant i\leqslant n\rbrace \) (see Fig. 4). Any path going from (0, 0) to \(D_{n,m}\) has to go through a point of \(\tilde{D}_{n,m}\). Moreover, the first passage time from any point of \(\tilde{D}_{n,m}\) to the set \(D_{n,m}\) is zero. Hence the first passage time from (0, 0) to \(\tilde{D}_{n,m}\) is also T(n, m).

Remark 2.9

When b tends to infinity, \(E_e\) tends to 0 for all horizontal edges, and one recovers the first passage percolation model introduced in [28], which is the zero temperature limit of the strict-weak lattice polymer as explained in [17, 29].

Let us show how the Bernoulli-Exponential first passage percolation model is a limit of the Beta RWRE.

Proposition 2.10

Let \(\alpha _{\epsilon }=\epsilon a\) and \(\beta _{\epsilon } = \epsilon b\). Let \(P_{\epsilon }(t,x)\) be the quenched probability \(\mathsf {P}(X_t\geqslant x)\) for the Beta-RWRE with parameters \(\alpha _{\epsilon }\) and \(\beta _{\epsilon }\), and T(n, m) the first passage time in the Bernoulli-Exponential FPP model with parameters a, b. Then, for all \( n, m\geqslant 0\), \(-\epsilon \log (P_{\epsilon }(n+m,m-n))\) converges weakly, as \(\epsilon \) goes to zero, to T(n, m), the first passage time from (0, 0) to \(D_{n,m}\) in the Bernoulli-Exponential FPP model.

Proposition 2.10 is proved in Sect. 5.

2.4 Exact formulas

Our first result is an exact formula for the mixed moments of the polymer partition function \(\mathbb {E}[Z(t,n_1) \cdots Z(t,n_k) ]\). In light of Proposition 3.1, this result can be seen as a limit, when q goes to 1, of the formula from Theorem 1.8 in [15]. Even so, we prove it in an independent way in Sect. 4, via a rigorous polymer replica trick method (see Proposition 4.4).

Proposition 2.11

For \(n_1 \geqslant n_2\geqslant \dots \geqslant n_k\geqslant 1\), one has the following moment formula,

$$\begin{aligned}&\mathbb {E}[Z(t,n_1) \ldots Z(t,n_k) ]\nonumber \\&\quad =\frac{1}{(2i\pi )^k} \int \dots \int \prod _{1\leqslant A<B\leqslant k} \frac{z_A-z_B}{z_A-z_B-1}\prod _{j=1}^k \left( \frac{\nu +z_j}{z_j}\right) ^{n_j} \left( \frac{\mu + z_j}{\nu +z_j}\right) ^t\frac{\mathrm {d}z_j}{\nu +z_j},\nonumber \\ \end{aligned}$$
(4)

where the contour for \(z_k\) is a small circle around the origin, and the contour for \(z_j\) contains the contour for \(z_{j+1} + 1\) for all \(j=1, \dots , k-1\), as well as the origin, but all contours exclude \(-\nu \).
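For k = 1, the right-hand side of (4) is a single contour integral that can be evaluated numerically and compared with \(\mathbb {E}[Z(t,n)]\), which, by the independence structure of the recurrence (1), is obtained by running (1) with every Beta weight replaced by its mean \(\mu /\nu \). A sketch of this check (function names ours):

```python
import cmath

def first_moment_contour(t, n, mu, nu, npts=4000, r=0.5):
    """The k = 1 case of (4): a Riemann sum for the integral over the circle
    |z| = r (enclosing 0 but not -nu) of ((nu+z)/z)^n ((mu+z)/(nu+z))^t dz/(nu+z),
    divided by 2 pi i."""
    total = 0j
    for idx in range(npts):
        z = r * cmath.exp(2 * cmath.pi * 1j * idx / npts)
        f = ((nu + z) / z) ** n * ((mu + z) / (nu + z)) ** t / (nu + z)
        total += f * z  # dz = i z dtheta; the i cancels the 1/(2 pi i)
    return (total / npts).real

def annealed_mean(t_max, p):
    """E[Z(t, n)] from recurrence (1) with each Beta weight replaced by its mean p."""
    Z = {(0, 1): 1.0}
    for t in range(1, t_max + 1):
        Z[(t, 1)] = Z[(t - 1, 1)] * p
        for n in range(2, t + 1):
            Z[(t, n)] = Z[(t - 1, n)] * p + Z[(t - 1, n - 1)] * (1 - p)
        Z[(t, t + 1)] = Z[(t - 1, t)]
    return Z

mu, nu, t = 1.0, 2.5, 6
Z = annealed_mean(t, mu / nu)
for n in range(1, t + 2):
    assert abs(first_moment_contour(t, n, mu, nu) - Z[(t, n)]) < 1e-8
```

The trapezoidal rule converges geometrically here because the integrand is analytic in a neighbourhood of the circle.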

The previous proposition provides a formula for the moments of the partition function Z(tn). Using tools developed in the study of Macdonald processes [4] (see also [13, 18]), one is able to take the moment generating series, which yields a Fredholm determinant representation for the Laplace transform of Z(tn). We refer to [4, Section 3.2.2] for background about Fredholm determinants.

Theorem 2.12

For \(u\in \mathbb {C} {\setminus } \mathbb {R}_{>0}\), fix \(n,t\geqslant 0\) with \(n\leqslant t+1\) and \(\nu >\mu >0 \). Then one has

$$\begin{aligned} \mathbb {E}[ e^{uZ(t,n)} ] = \det (I+K^{\mathrm {BP}}_u)_{\mathbb {L}^2(C_0)} \end{aligned}$$

where \(C_0\) is a small positively oriented circle containing 0 but not \(-\nu \) nor \(-1\), and \(K^{\mathrm {BP}}_u :\mathbb {L}^2(C_0)\rightarrow \mathbb {L}^2(C_0)\) is defined by its integral kernel

$$\begin{aligned} K^{{{\mathrm {BP}}}}_u(v,v') = \frac{1}{2i\pi } \int _{1/2-i\infty }^{1/2+i\infty } \frac{\pi }{\sin (\pi s)} (-u)^s\frac{g^{{{\mathrm {BP}}}}(v)}{g^{{{\mathrm {BP}}}}(v+s)} \frac{{{\mathrm {d}}}s}{s+v - v'} \end{aligned}$$

where

$$\begin{aligned} g^{\mathrm {BP}}(v) = \left( \frac{\Gamma (v)}{\Gamma (\nu +v)} \right) ^n \left( \frac{\Gamma (\nu +v)}{\Gamma (\mu +v)}\right) ^t \Gamma (\nu + v). \end{aligned}$$
(5)

In light of the relation between the Beta RWRE and the Beta polymer given in Proposition 2.6, we have a similar Fredholm determinant representation for the Laplace transform of P(tx).

Theorem 2.13

For \(u\in \mathbb {C} {\setminus } \mathbb {R}_{>0}\), fix \(t\in \mathbb {Z}_{\geqslant 0}\), \(x\in \lbrace -t, \dots , t\rbrace \) with the same parity, and \(\alpha , \beta >0 \). Then one has

$$\begin{aligned} \mathbb {E}[ e^{u P(t,x)} ] = \det (I+K^{\mathrm {RW}}_u)_{\mathbb {L}^2(C_0)} \end{aligned}$$
(6)

where \(C_0\) is a small positively oriented circle containing 0 but not \(-\alpha -\beta \) nor \(-1\), and \(K^{\mathrm {RW}}_u :\mathbb {L}^2(C_0)\rightarrow \mathbb {L}^2(C_0)\) is defined by its integral kernel

$$\begin{aligned} K^{\mathrm {RW}}_u(v,v') = \frac{1}{2i\pi } \int _{1/2-i\infty }^{1/2+i\infty } \frac{\pi }{\sin (\pi s)} (-u)^s\frac{g^{\mathrm {RW}}(v)}{g^{\mathrm {RW}}(v+s)} \frac{\mathrm {d}s}{s+v - v'} \end{aligned}$$

where

$$\begin{aligned} g^{\mathrm {RW}}(v) = \left( \frac{\Gamma (v)}{\Gamma (\alpha +v)} \right) ^{(t-x)/2} \left( \frac{\Gamma (\alpha +\beta +v)}{\Gamma (\alpha +v)}\right) ^{(t+x)/2} \Gamma ( v). \end{aligned}$$

2.5 Limit theorem for the random walk

A quenched large deviation principle is proved in [34, Section 4] for a wide class of random walks in random environment that includes the Beta-RWRE model. More precisely, the setting of [34] applies to the random walk \(\mathbf {X}_t=(t, X_t)\) (see Remark 2.2). The condition that one has to check is that the logarithm of the probability of each possible step has nice properties with respect to the environment (the random variables must belong to the class \(\mathcal {L}\) defined in [34, Definition 2.1]). Using the fact that if B is a \(Beta(\alpha , \beta )\) random variable, \(\log (B)\) and \(\log (1-B)\) have finite moments of all orders, Ref. [34, Lemma A.4] ensures that the condition is satisfied. The limit

$$\begin{aligned} \lambda (z) := \lim _{t\rightarrow \infty } \frac{1}{t} \log (\mathsf {E}[ e^{zX_t}]) \end{aligned}$$

exists \(\mathbb {P}\)-almost surely. Let I be the Legendre transform of \(\lambda \). Then, we have [34, Section 4] that for \(x>(\alpha -\beta )/(\alpha +\beta )\),

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{1}{t} \log (\mathsf {P}(X_t>x t)) = -I(x) \ \ \mathbb {P} \text { a.s.} \end{aligned}$$
(7)

Remark 2.14

In the language of polymers, the limit (7) states the existence of the quenched free energy. Theorem 4.3 in [33] states that for such random walks in random environment, we have that

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{1}{t} \log (\mathsf {P}(X_t=\lfloor x t\rfloor )) = \lim _{t\rightarrow \infty } \frac{1}{t} \log (\mathsf {P}(X_t>x t))= -I(x). \end{aligned}$$

In other terms, the point-to-point free energy and the point-to-half-line free energies are equal.

In [34, Theorem 3.1], a formula is given for I in terms of a variational problem over a space of measures. We provide a closed formula in the present case. It would be interesting to see how the variational problem recovers the formulas that we now present.

For the Beta-RWRE, a critical point analysis of the Fredholm determinant shows that the function I is implicitly defined by

$$\begin{aligned} x(\theta ) = \frac{\Psi _1(\theta +\alpha +\beta ) +\Psi _1(\theta )- 2 \Psi _1(\theta + \alpha )}{\Psi _1(\theta ) - \Psi _1(\theta + \alpha +\beta )} \end{aligned}$$
(8)

and

$$\begin{aligned} I(x(\theta ))= & {} \frac{\Psi _1(\theta +\alpha +\beta ) - \Psi _1(\theta + \alpha )}{\Psi _1(\theta ) - \Psi _1(\theta + \alpha +\beta )} (\Psi (\theta + \alpha +\beta )- \Psi (\theta ) )\nonumber \\&+\, \Psi (\theta + \alpha +\beta )- \Psi (\theta +\alpha ), \end{aligned}$$
(9)

where \(\Psi \) is the digamma function (\(\Psi (z)= \Gamma '(z)/\Gamma (z)\)) and \(\Psi _1\) is the trigamma function (\(\Psi _1(z)=\Psi '(z)\)). The parameter \(\theta \) may not seem natural at first sight. It is convenient because it will turn out to be the position of the critical point in the asymptotic analysis. When \(\theta \) ranges from 0 to \(+\infty \), \(x(\theta )\) ranges from 1 to \((\alpha -\beta )/(\alpha +\beta )\). This covers the whole interesting range of large deviation events, since \((\alpha -\beta )/(\alpha +\beta )\) is the expected drift of the random walk, and we know that \(\mathsf {P}(X_t>x t)=0\) for \(x>1\).

Moreover, we define \(\sigma (\theta )>0\) such that

$$\begin{aligned} 2\sigma (\theta )^3= & {} \Psi _2(\theta +\alpha ) - \Psi _2(\alpha +\beta +\theta )\nonumber \\&+ \frac{\Psi _1(\alpha +\theta ) - \Psi _1(\alpha +\beta +\theta )}{\Psi _1(\theta ) - \Psi _1(\alpha +\beta +\theta )}\left( \Psi _2(\alpha +\beta +\theta ) - \Psi _2(\theta )\right) . \end{aligned}$$
(10)

In the case \(\alpha =\beta =1\), that is when the \(B_{x,t}\) variables are distributed uniformly on (0, 1), the expressions for \(x(\theta )\) and \(I(x(\theta ))\) simplify. We find that

$$\begin{aligned} x(\theta ) = \frac{1+2\theta }{\theta ^2+(\theta +1)^2} \end{aligned}$$

and

$$\begin{aligned} I(x(\theta )) = \frac{1}{\theta ^2+(\theta +1)^2} , \end{aligned}$$

so that the rate function I is simply the function \(I:x \mapsto 1-\sqrt{1-x^2}\).
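Formulas (8) and (9) can be checked against these \(\alpha =\beta =1\) closed forms using numerical derivatives of the log-gamma function for \(\Psi \) and \(\Psi _1\); a sketch (the step sizes are ad hoc):

```python
import math

def psi(z, h=1e-5):
    """Digamma via a central difference of log-gamma (numerical sketch)."""
    return (math.lgamma(z + h) - math.lgamma(z - h)) / (2 * h)

def psi1(z, h=1e-4):
    """Trigamma via a central second difference of log-gamma."""
    return (math.lgamma(z + h) - 2 * math.lgamma(z) + math.lgamma(z - h)) / h ** 2

def x_of_theta(theta, alpha, beta):
    """Formula (8)."""
    num = psi1(theta + alpha + beta) + psi1(theta) - 2 * psi1(theta + alpha)
    return num / (psi1(theta) - psi1(theta + alpha + beta))

def rate_I(theta, alpha, beta):
    """Formula (9)."""
    c = (psi1(theta + alpha + beta) - psi1(theta + alpha)) \
        / (psi1(theta) - psi1(theta + alpha + beta))
    return (c * (psi(theta + alpha + beta) - psi(theta))
            + psi(theta + alpha + beta) - psi(theta + alpha))

theta = 0.3
x = x_of_theta(theta, 1.0, 1.0)
# closed forms for alpha = beta = 1
assert abs(x - (1 + 2 * theta) / (theta ** 2 + (theta + 1) ** 2)) < 1e-5
assert abs(rate_I(theta, 1.0, 1.0) - (1 - math.sqrt(1 - x ** 2))) < 1e-5
```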

The following theorem gives a second order correction to the large deviation principle satisfied by the position of the walker at time t.

Theorem 2.15

For \(0<\theta <1/2\) and \(\alpha =\beta =1\), we have that

$$\begin{aligned} \lim _{t\rightarrow \infty } \mathbb {P}\left( \frac{\log (P(t, x(\theta )t)) + I(x(\theta ))t}{t^{1/3}\sigma (\theta )} \leqslant y \right) = F_\mathrm{GUE}(y). \end{aligned}$$
(11)

Remark 2.16

As we explain in Sect. 6, we expect Theorem 2.15 to hold more generally, for arbitrary parameters \(\alpha , \beta >0\) and \(\theta >0\). The assumption \(\alpha =\beta \) is made to simplify the computations, whereas the assumption \(\theta <1/2\) is present because certain deformations of contours are justified only for \(\theta <\min \lbrace 1/2, \alpha +\beta \rbrace \). The condition \(\theta >0\) is natural: it corresponds to looking at \(x(\theta )<1\). We know that for \(x(\theta )>1\), \(P(t, x(\theta )t)=0\).

In the case \(\alpha =\beta =1\), the condition \(\theta <1/2\) corresponds to \(x(\theta )>4/5\).

Remark 2.17

The Tracy–Widom limit theorem from Theorem 2.15 should be understood as an analogue of limit theorems for the free energy fluctuations of exactly solvable random directed polymers. Similar results are proved in [1, 6] for the continuum polymer, in [4, 6] for the O’Connell-Yor semi-discrete polymer, in [9] for the log-gamma polymer, and in [17, 29] for the strict-weak lattice polymer.

In light of KPZ universality for directed polymers, we expect the conclusion of Theorem 2.15 to hold for more general weight distributions; the Beta RWRE is simply the first RWRE for which it has been verified.

In Sect. 6, we also provide an interesting corollary of Theorem 2.15. Corollary 6.8 states that if one considers an exponential number of Beta RWREs drawn in the same environment, then the maximum of their endpoints satisfies a Tracy–Widom limit theorem. It turns out that even though the rescaled endpoint of a random walk converges in distribution to a Gaussian random variable for large t, the limit theorem that we get is quite different from the one satisfied by Gaussian random variables having the same dependence structure.

2.6 Localization of the paths

The localization properties of random walks in random environment are quite different from localization properties of random directed polymers in \(1+1\) dimensions. For instance, in the log-gamma polymer model, the endpoint of a polymer path of size n fluctuates on the scale \(n^{2/3}\) [35], and localizes in a region of size \(\mathcal {O}(1)\) when one conditions on the environment [14]. For random walks in random environment, it is clear by the central limit theorem that the endpoint of a path of size n fluctuates on the scale \(\sqrt{n}\).

Remarkably, the central limit theorem also holds if one conditions on the environment. A general quenched central limit theorem is proved in [31] for space–time i.i.d. random walks in \(\mathbb {Z}^d\). The only hypotheses are that the environment is not deterministic, and that the expectation over the environment of the variance of an elementary increment is finite. These two conditions are clearly satisfied by the Beta-RWRE model. In the particular case of one-dimensional random walks, and when transition probabilities have mean 1 / 2, the result was also proved in [12]. However, most of the other papers proving a quenched central limit theorem for similar RW models assume a strict ellipticity condition, which is not satisfied by the Beta-RWRE. See also [11, 32] for similar results about random walks in random environment under weaker conditions.

In any case, if we let the environment vary, the fluctuations of the endpoint at time t in the Beta RWRE live on the \(\sqrt{t}\) scale. For the Beta-RWRE, Proposition 6.13 shows that the expected overlap up to time t between two random walks drawn independently in a common environment is of order \(\sqrt{t}\). The \(\sqrt{t}\) order of magnitude had already been proved in [31, Lemma 2], based on results from [22], and our Proposition 6.13 provides the precise asymptotic equivalent.

Let us give an intuitive argument explaining the difference of behaviour between polymers and random walks. Assume that the environment of the random walk (resp. the polymer) has been drawn, and consider a random walk starting from the point 0 (resp. a point-to-point polymer starting from 0). The quenched probability that the random walk performs a first step upward depends only on the environment at the point 0 (i.e. the random variable \(B_{0,0}\) in the case of the Beta RWRE). However, the probability for the polymer path to start with a step upward depends on the global environment. For instance, if the weight on some edge is very high, this will influence the probability that the first step of the polymer path is upward or downward, so as to enable the polymer path to go through the edge with high weight. This explains why two independent paths in the same environment have a greater tendency to overlap in the polymer model.

In [27], a random walk in dynamic random environment is associated to a random directed polymer in \(1+1\) dimensions, under a condition called north-east induction on the edge-weights. For the log-gamma polymer, it turns out that the associated random walk has Beta distributed transition probabilities. However, the environment is correlated, so that this RWRE is very different from the Beta RWRE. The random walk considered in [27] defines a measure on lattice paths which can be seen as a limit of point-to-point polymer measures. Hence, as pointed out in [27, Remark 8.3], it has very different localization properties from the random walks in space–time i.i.d. random environment that we consider in the present paper.

2.7 Limit theorem at zero-temperature

Turning to the zero-temperature limit, Theorem 2.13 degenerates to the following for the Bernoulli-Exponential FPP model:

Theorem 2.18

For \(r\in \mathbb {R}_{>0}\), fix \(n,m\geqslant 0\) and consider T(n, m), the first passage time to the set \(D_{n,m}\) in the Bernoulli-Exponential FPP model with parameters \(a,b>0\). Then, one has

$$\begin{aligned} \mathbb {P}( T(n,m) > r) = \det (I+K^{\mathrm {FPP}}_r)_{\mathbb {L}^2(C'_0)} \end{aligned}$$

where \(C'_0\) is a small positively oriented circle containing 0 but not \(-a-b\), and \(K^{\mathrm {FPP}}_r :\mathbb {L}^2(C'_0)\rightarrow \mathbb {L}^2(C'_0)\) is defined by its integral kernel

$$\begin{aligned} K^{\mathrm {FPP}}_r(u,u') = \frac{1}{2i\pi } \int _{1/2-i\infty }^{1/2+i\infty } \frac{e^{rs}}{s} \frac{g^{\mathrm {FPP}}(u)}{g^{\mathrm {FPP}}(u+s)} \frac{\mathrm {d}s}{s+u-u'}, \end{aligned}$$
(12)

where

$$\begin{aligned} g^{\mathrm {FPP}}(u) = \left( \frac{a+u}{u}\right) ^n \left( \frac{a+u}{a+b+u} \right) ^m \frac{1}{u}. \end{aligned}$$
(13)

The integral in (12) is an improper oscillatory integral if one integrates over the vertical line \(1/2+i\mathbb {R}\). One could justify a deformation of the integration contour (so that the tails go to \(\infty e^{\pm i2\pi /3}\), for instance) in order to have an absolutely convergent integral, but it turns out that the vertical contour is more practical for analyzing the asymptotic behaviour of \(\det (I+K^{\mathrm {FPP}}_r)\) in Sect. 7.

One has a Tracy–Widom limit theorem for the fluctuations of the first passage time \(T(n,\kappa n)\) when n goes to infinity, for slopes \(\kappa >a/b\). Theorem 2.19 is proved as Theorem 7.1 in Sect. 7.

Theorem 2.19

We have that for any \(\theta >0\) and parameters \(a,b>0\),

$$\begin{aligned} \lim _{n \rightarrow \infty }\mathbb {P}\left( \frac{T\big (n, \kappa (\theta )n\big ) - \tau (\theta )n}{\rho (\theta )n^{1/3}}\leqslant y \right) = F_{\mathrm {TW}}(y), \end{aligned}$$

where \(\kappa (\theta ), \tau (\theta )\) and \(\rho (\theta )\) are explicit constants (see Sect. 7) such that when \(\theta \) ranges from 0 to infinity, \(\kappa (\theta )\) ranges from \(+\infty \) to a / b.

Notice that in Theorem 2.19, we do not have any restriction on the range of the parameters a, b and \(\theta \).

Another direction of study for the Bernoulli-Exponential FPP model is to compute the asymptotic shape of the percolation cluster C(t) for a fixed time t (but looking very far from the origin). In Sect. 7.3 we explain, based on a degeneration of the results of Theorem 2.19, what should be the limit shape of the convex envelope of the percolation cluster, and we guess the scale of the fluctuations. However, these arguments are based on a non-rigorous interchange of limits and we leave a rigorous proof for future consideration.

Outline of the paper In Sect. 3, we introduce the q-Hahn TASEP [15, 30] and explain how some observables of the q-Hahn TASEP converge to the partition function of the Beta polymer (and likewise to the endpoint distribution of the Beta RWRE). This already leads to a proof of the Fredholm determinant formulas in Theorems 2.12 and 2.13, using results on the q-Hahn TASEP. We do not write here the technical details necessary to make this approach rigorous. Rather, in Sect. 4, we give a direct proof of Theorems 2.12 and 2.13 using an approach which can be seen as a rigorous instance of the replica method. In Sect. 5, we show that the Beta RWRE converges to the Bernoulli-Exponential FPP, and prove the Fredholm determinant formula of Theorem 2.18. In Sect. 6, we perform an asymptotic analysis of the Fredholm determinant from Theorem 2.13 to prove Theorem 2.15. We also discuss Corollary 6.8, which concerns the maximum of the endpoints of several Beta RWREs drawn in a common environment, and we relate this result to extreme value theory. In Sect. 7, we perform an asymptotic analysis of the Bernoulli-Exponential FPP model to prove Theorem 2.19.

3 From the q-Hahn TASEP to the Beta RWRE

In this section, we explain how the Beta RWRE and the Beta polymer arise as limits of the q-Hahn TASEP introduced in [30] (see also [15]). We first show that some observables of the q-Hahn TASEP converge to the partition function of the polymer model (Proposition 3.1). Discarding technical details (which are written in full detail in the arXiv version of this paper), this leads to a first proof of Theorem 2.12. Then we prove that the Beta RWRE and the Beta polymer model are equivalent models in the sense of Proposition 2.6.

3.1 The q-Hahn TASEP

Let us recall the definition of the q-Hahn TASEP: it is a discrete time interacting particle system on the one-dimensional integer lattice. Fix \(0<q<1\) and \(0\leqslant \bar{\nu } \leqslant \bar{\mu }<1\). Then the N-particle q-Hahn TASEP is a discrete time Markov chain \(\vec {x}(t) = \lbrace x_n(t) \rbrace _{n=0}^{N} \in \mathbb {X}_N\), where the state space \(\mathbb {X}_N\) is

$$\begin{aligned} \mathbb {X}_N = \lbrace +\infty = x_0 > x_1 >\dots >x_N :\forall i, x_i\in \mathbb {Z} \rbrace . \end{aligned}$$

At time \(t+1\), each coordinate \(x_n(t)\) is updated independently and in parallel to \(x_n(t+1) = x_n(t)+j_n\) where \(0\leqslant j_n \leqslant x_{n-1}(t)-x_n(t)-1\) is drawn according to the q-Hahn probability distribution \(\varphi _{q, \bar{\mu }, \bar{\nu }}(j_n\vert x_{n-1}(t)-x_n(t)-1) \). The q-Hahn probability distribution on \(j\in \lbrace 0, 1, \dots ,m\rbrace \) is defined by the probabilities

$$\begin{aligned} \varphi _{q, \bar{\mu }, \bar{\nu }}(j\vert m) = \bar{\mu }^j \frac{(\bar{\nu }/\bar{\mu } ; q)_{j}(\bar{\mu } ; q)_{m-j}}{(\bar{\nu };q)_{m}} \frac{(q;q)_m}{(q;q)_{j}(q;q)_{m-j}}, \end{aligned}$$
(14)

where for \(a\in \mathbb {C}\) and \(n\in \mathbb {Z}_{\geqslant 0} \cup \lbrace +\infty \rbrace \), \((a;q)_n\) is the q-Pochhammer symbol

$$\begin{aligned} (a ; q)_n = (1-a)(1-qa) \dots (1-q^{n-1}a). \end{aligned}$$
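As a quick numerical sanity check (our own sketch, with an arbitrary admissible parameter choice), one can verify that the weights (14) sum to one and approach the Beta-binomial probabilities as q goes to 1 (cf. Remark 4.3 and Lemma 4.2 below):

```python
import math

def qpoch(a, q, n):
    """q-Pochhammer symbol (a; q)_n."""
    out = 1.0
    for i in range(n):
        out *= 1.0 - a * q**i
    return out

def q_hahn(j, m, q, mu_bar, nu_bar):
    """q-Hahn weight phi_{q, mu_bar, nu_bar}(j | m), as in (14)."""
    return (mu_bar**j * qpoch(nu_bar / mu_bar, q, j) * qpoch(mu_bar, q, m - j)
            / qpoch(nu_bar, q, m)
            * qpoch(q, q, m) / (qpoch(q, q, j) * qpoch(q, q, m - j)))

def poch(a, k):
    """Ordinary Pochhammer symbol (a)_k."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

mu, nu, m = 1.5, 4.0, 6      # arbitrary choice with mu < nu
for q in (0.3, 0.7, 0.99):
    w = [q_hahn(j, m, q, q**mu, q**nu) for j in range(m + 1)]
    assert all(x > -1e-12 for x in w)    # the weights are nonnegative
    assert abs(sum(w) - 1.0) < 1e-9      # and sum to one

# As q -> 1, the weights approach the Beta-binomial probabilities
# binom(m, j) (nu-mu)_j (mu)_{m-j} / (nu)_m.
q = 0.9999
for j in range(m + 1):
    bb = math.comb(m, j) * poch(nu - mu, j) * poch(mu, m - j) / poch(nu, m)
    assert abs(q_hahn(j, m, q, q**mu, q**nu) - bb) < 0.02
```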

3.2 Convergence of the q-Hahn TASEP to the Beta polymer

An interesting interpretation of the q-Hahn distribution is provided in Section 4 of [26]. The authors define a q-analogue of the Pólya urn process: one considers two urns, initially empty, to which balls are added sequentially. When the first urn contains k balls and the second urn contains \(n-k\) balls, one adds a ball to the first urn with probability \( q^{\mu +n-k}[\nu -\mu +k]_q/[\nu +n]_q \), where for any integer m, \([m]_q=(1-q^m)/(1-q)\) denotes the q-deformed integer, and we set \(\bar{\mu } = q^{\mu }\) and \(\bar{\nu } = q^{\nu }\). One adds a ball to the second urn with the complementary probability \([\mu +n-k]_q/[\nu +n]_q\). Then \(\varphi _{q, \bar{\mu }, \bar{\nu }}(j\vert m)\) is the probability that after m steps, the first urn contains j balls. When q goes to 1, one recovers the classical Pólya urn process.
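This matching can be checked exactly by dynamic programming. The sketch below (ours) evolves the urn with the transition probability \(q^{\mu +n-k}[\nu -\mu +k]_q/[\nu +n]_q\) for the first urn when it contains k of the n balls (our reading of the q-Pólya dynamics, chosen so that the match with (14) is exact) and compares the resulting distribution with the q-Hahn weights:

```python
def qpoch(a, q, n):
    out = 1.0
    for i in range(n):
        out *= 1.0 - a * q**i
    return out

def q_hahn(j, m, q, mb, nb):
    """q-Hahn weight (14)."""
    return (mb**j * qpoch(nb / mb, q, j) * qpoch(mb, q, m - j) / qpoch(nb, q, m)
            * qpoch(q, q, m) / (qpoch(q, q, j) * qpoch(q, q, m - j)))

def qint(x, q):
    """q-deformed integer [x]_q = (1 - q^x)/(1 - q)."""
    return (1.0 - q**x) / (1.0 - q)

mu, nu, q, steps = 1.5, 4.0, 0.6, 6
dist = [1.0]          # dist[k] = P(first urn holds k balls); urns start empty
for n in range(steps):
    new = [0.0] * (n + 2)
    for k, pr in enumerate(dist):
        p_first = q**(mu + n - k) * qint(nu - mu + k, q) / qint(nu + n, q)
        new[k + 1] += pr * p_first        # ball goes into the first urn
        new[k] += pr * (1.0 - p_first)    # ball goes into the second urn
    dist = new

for j in range(steps + 1):
    assert abs(dist[j] - q_hahn(j, steps, q, q**mu, q**nu)) < 1e-12
```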

For the classical Pólya urn, it is known that after n steps, the number of balls in the first urn is distributed according to the Beta-Binomial distribution. Further, the proportion of balls in the first urn converges in distribution to the Beta distribution as the number of added balls tends to infinity. Thus, it is natural to consider the q-Hahn distribution as a q-analogue of the Beta-Binomial distribution. Further, we expect that if X is a random variable drawn according to the q-Hahn distribution on \(\lbrace 0, \dots , m\rbrace \) with parameters \((q, \bar{\mu }, \bar{\nu })\), the q-deformed proportion \([X]_q /[m]_q\) converges as m goes to infinity to a q-analogue of the Beta distribution, which in turn converges as q goes to 1 to the Beta distribution with parameters \((\nu -\mu , \mu )\).

This interpretation of the q-Hahn distribution as a q-analogue of the Beta-Binomial distribution explains why the partition function of the Beta polymer is a limit of observables of the q-Hahn TASEP. Let \(Z^{\epsilon }(t,n)\) be the rescaled quantity

$$\begin{aligned} Z^{\epsilon }(t,n) = q^{x_n(t)+n}, \end{aligned}$$
(15)

where \(x_n(t)\) is the location of the \(n^\mathrm{th}\) particle in the q-Hahn TASEP, and we set \(q=e^{-\epsilon }, \bar{\mu } = q^{ \mu }\) and \( \bar{\nu } = q^{ \nu }\).

Proposition 3.1

For \(t\geqslant 0\) and \(n\geqslant 1\) such that \(n\leqslant t+1\), the sequence of random variables \(\left( Z^{\epsilon }(t,n) \right) _{\epsilon }\) converges in distribution as \(\epsilon \rightarrow 0\) to a limit Z(t, n), and one has

$$\begin{aligned} Z(t,n) = Z(t-1,n) B_{t,n} + Z(t-1,n-1)(1- B_{t,n}) \end{aligned}$$

where \(B_{t,n}\) are i.i.d. Beta distributed random variables with parameters \((\mu , \nu -\mu )\). Additionally, we have the weak convergence of processes

$$\begin{aligned} \lbrace Z^{\epsilon }(t,n)\rbrace _{t\geqslant 0, n\geqslant 1} \Rightarrow \lbrace Z(t,n) \rbrace _{t\geqslant 0, n\geqslant 1}, \end{aligned}$$
(16)

where Z(t, n) is the partition function of the Beta polymer.

Proof

See the arXiv version of this paper for a detailed proof. \(\square \)
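The limiting recursion is easy to simulate. The sketch below (our own illustration, with arbitrary parameters) samples Z(t, n) from the recursion and compares the empirical mean with the annealed value: taking expectations linearizes the recursion, replacing \(B_{t,n}\) by its mean \(\mu /\nu \), and one checks by induction that \(\mathbb {E}[Z(t,n)]\) equals the binomial tail \(\mathbb {P}(\mathrm {Bin}(t, 1-\mu /\nu ) \leqslant n-1)\):

```python
import math, random

random.seed(0)
mu, nu = 1.5, 4.0     # arbitrary parameters; B ~ Beta(mu, nu - mu)
t, n = 5, 3

def sample_Z(t, n):
    """One sample of Z(t, n) from Z(t,n) = Z(t-1,n) B + Z(t-1,n-1)(1-B),
       with half-line initial data Z(0, m) = 1 for m >= 1, and Z(s, m) = 0
       for m < 1."""
    Z = {m: (1.0 if m >= 1 else 0.0) for m in range(0, t + 2)}
    for s in range(1, t + 1):
        new = {0: 0.0}
        for m in range(1, t + 2):
            B = random.betavariate(mu, nu - mu)   # fresh i.i.d. weight B_{s,m}
            new[m] = Z[m] * B + Z[m - 1] * (1.0 - B)
        Z = new
    return Z[n]

samples = [sample_Z(t, n) for _ in range(30000)]
# Z is a quenched probability, hence lies in [0, 1] (up to float roundoff)
assert all(-1e-9 <= z <= 1.0 + 1e-9 for z in samples)

p = mu / nu                                        # = E[B]
annealed = sum(math.comb(t, d) * (1 - p)**d * p**(t - d) for d in range(n))
assert abs(sum(samples) / len(samples) - annealed) < 0.015
```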

One has the following Fredholm determinant representation for the \(e_q\)-Laplace transform of \(x_n(t)\).

Theorem 3.2

(Theorem 1.10 in [15]) Consider q-Hahn TASEP started from step initial data \(x_n(0)=-n \ \forall n \geqslant 1\). Then for all \(\zeta \in \mathbb {C}{\setminus } \mathbb {R}_{>0}\),

$$\begin{aligned} \mathbb {E}\left[ \frac{1}{(\zeta q^{x_n(t)+n} ; q)_{\infty }} \right] = \det (I+K^{\mathrm {qHahn}}_{\zeta })_{\mathbb {L}^2(C_1)} \end{aligned}$$
(17)

where \(C_1\) is a small positively oriented circle containing 1 but not \(1/\bar{\nu }, 1/q\) nor 0, and \(K^{\mathrm {qHahn}}_{\zeta } :\mathbb {L}^2(C_1)\rightarrow \mathbb {L}^2(C_1)\) is defined by its integral kernel

$$\begin{aligned} K^{{{\mathrm {qHahn}}}}_{\zeta } (w,w') =\frac{1}{2i\pi } \int _{1/2+i\mathbb R} \frac{\pi }{\sin (\pi s) } (-\zeta )^s \frac{g^{{{\mathrm {qHahn}}}}(w)}{g^{{{\mathrm {qHahn}}}}(q^sw)} \frac{\mathrm{d}s}{q^s w -w'} \end{aligned}$$

with

$$\begin{aligned} g^{\mathrm {qHahn}}(w) = \left( \frac{(\bar{\nu } w ; q)_{\infty }}{(w ; q)_{\infty }} \right) ^n \left( \frac{(\bar{\mu } w ; q)_{\infty }}{(\bar{\nu } w ; q)_{\infty }}\right) ^t \frac{1}{(\bar{\nu } w ; q)_{\infty }}. \end{aligned}$$

Let us scale the parameter \(\zeta \) as

$$\begin{aligned} \zeta = (1-q) u, \end{aligned}$$

and scale the other parameters as previously: \(q=e^{-\epsilon }, \bar{\mu } = q^{\mu }, \bar{\nu } = q^{\nu }\). Then we have

$$\begin{aligned} \mathbb {E}\left[ \frac{1}{(\zeta q^{x_n(t)+n} ; q)_{\infty }} \right] = \mathbb {E}[ e_q(uZ^{\epsilon }(t,n) ) ] \end{aligned}$$

where

$$\begin{aligned} e_q(x) = \frac{1}{((1-q)x ; q)_{\infty }} \end{aligned}$$

is the \(e_q\)-exponential function. Since \(e_q(x)\rightarrow e^x\) uniformly for x in a compact set, we have, using the convergence of processes (16) and the fact that the \(Z^{\epsilon }(t,n)\) are uniformly bounded by 1, that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0}\mathbb {E}\left[ \frac{1}{(\zeta q^{x_n(t)+n} ; q)_{\infty }} \right] = \mathbb {E}[e^{u Z(t,n)}]. \end{aligned}$$
(18)

Hence, in order to prove Theorem 2.12, one could take the limit as \(\epsilon \) goes to zero of the Fredholm determinant in the right-hand-side of (17). This is indeed possible, but requires good control of the integrand of the kernel as q goes to 1. Since we provide another proof of Theorem 2.12 independent of the q-Hahn TASEP in Sect. 4, we do not write the required estimates—but refer to the arXiv version of this paper, where a complete proof is written.

More precisely, we have the following.

Proposition 3.3

$$\begin{aligned} \lim _{\epsilon \rightarrow 0}\mathbb {E}\left[ \frac{1}{(\zeta q^{x_n(t)+n} ; q)_{\infty }} \right] =\det (I+K^{\mathrm {BP}}_u)_{\mathbb {L}^2(C_0)} \end{aligned}$$

where \(C_0\) is a small positively oriented circle containing 0 but not \(-\nu \) nor \(-1\), and \(K^{\mathrm {BP}}_u :\mathbb {L}^2(C_0)\rightarrow \mathbb {L}^2(C_0)\) is defined by its integral kernel

$$\begin{aligned} K^{\mathrm {BP}}_u(v,v') = \frac{1}{2i\pi } \int _{1/2-i\infty }^{1/2+i\infty } \frac{\pi }{\sin (\pi s)} (-u)^s\frac{g^{\mathrm {BP}}(v)}{g^{\mathrm {BP}}(v+s)} \frac{\mathrm {d}s}{s+v - v'} \end{aligned}$$
(19)

where

$$\begin{aligned} g^{\mathrm {BP}}(v) = \left( \frac{\Gamma (v)}{\Gamma (\nu +v)} \right) ^n \left( \frac{\Gamma (\nu +v)}{\Gamma (\mu +v)}\right) ^t \Gamma (\nu + v). \end{aligned}$$

Proof

See the arXiv version of this paper. \(\square \)

Proposition 3.3 combined with (18) yields the Fredholm determinant formula for the Laplace transform of Z(t, n) given in Theorem 2.12. In order to deduce Theorem 2.13, we use the equivalence between the Beta polymer and the Beta RWRE from Proposition 2.6, proved in Sect. 3.3.

3.3 Equivalence Beta-RWRE and Beta polymer

We show that the Beta RWRE and the Beta polymer are equivalent models in the sense that if the parameters \(\alpha , \beta \) of the random walk and the parameters \(\mu , \nu \) of the polymer are such that \(\mu =\alpha \) and \(\nu =\alpha +\beta \), we have the equality in law

$$\begin{aligned} Z(t, n) = P(t, t-2n+2). \end{aligned}$$

The equality in law is true for fixed t and n. However, as families of random variables, \(\left( Z(t,n) \right) \) and \(\left( P(t,t-2n+2) \right) \) for \(t+1\geqslant n\geqslant 1\) have different laws.

Proof of Proposition 2.6

Let us first notice that since \(\mu =\alpha \) and \(\nu =\alpha +\beta \), the i.i.d. collection of Beta random variables defining the environment for the Beta polymer, and the i.i.d. collection of r.v. defining the environment of the Beta RWRE, have the same law.

Also, as was already pointed out in Sect. 2.2.2, the point-to-point Beta polymer is equivalent to a half-line to point Beta polymer.

Let t and x have the same parity, and write \(x=t-2n+2\). The random variable \(P(t, t-2n+2)\) is the probability for the Beta RWRE to arrive at or above \(t-2n+2\) at time t. This is also the probability for the Beta RWRE to make at most \(n-1\) downward steps up to time t. Let us imagine that we deform the underlying lattice of the Beta polymer so that Beta polymer paths become up-right paths, and let us consider the path from (t, n) to its initial point. Then the polymer path is the trajectory of a random walk, and one can interpret the weight of this polymer path as the quenched probability of the corresponding random walk trajectory (compare the polymer path depicted in Fig. 2 with the RWRE path depicted in Fig. 5, using the correspondence shown in Fig. 6). Moreover, the event that the random walk makes at most \(n-1\) downward steps is equivalent to the event that the polymer path starts with positive n-coordinate. These events correspond to the fact that the path intersects the thick gray half-lines in Figs. 2 and 5.

Fig. 5
figure 5

A possible path for the Beta RWRE is shown. It corresponds to the half-line to point polymer path in Fig. 2. P(t, x) is the (quenched) probability that the random walk ends at time t in the gray region

Fig. 6
figure 6

Illustration of the deformation of the underlying lattice for the Beta polymer. The left picture corresponds to the Beta polymer whereas the right picture corresponds to the RWRE. Black arrows represent possible steps for the polymer path (resp. the RWRE) with their associated weights (resp. probabilities)

Finally, for any fixed \(t, n\in \mathbb {Z}_{\geqslant 0}\) such that \(t+1\geqslant n\), if we set \(x=t-2n+2\), then P(t, x) and Z(t, n) have the same probability law. Moreover, conditioning on the environment of the Beta polymer corresponds to conditioning on the probability of each step for the Beta RWRE. \(\square \)
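A Monte Carlo sketch of this equivalence (our own illustration): with \(\mu =\alpha \) and \(\nu =\alpha +\beta \), sample Z(t, n) from the polymer recursion and P(t, t-2n+2) by backward induction over the quenched transition probabilities (assuming, as above, that the walker steps \(+1\) with probability \(B\sim Beta(\alpha ,\beta )\) drawn independently at each space–time point), and compare the first two moments:

```python
import random

random.seed(1)
alpha, beta = 1.5, 2.5          # arbitrary RWRE parameters
mu, nu = alpha, alpha + beta    # matched polymer parameters
t, n = 6, 3
x = t - 2 * n + 2               # target level for the walker

def sample_Z():
    """Z(t, n) from the polymer recursion in a fresh Beta environment."""
    Z = {m: (1.0 if m >= 1 else 0.0) for m in range(0, t + 2)}
    for s in range(1, t + 1):
        new = {0: 0.0}
        for m in range(1, t + 2):
            B = random.betavariate(mu, nu - mu)
            new[m] = Z[m] * B + Z[m - 1] * (1.0 - B)
        Z = new
    return Z[n]

def sample_P():
    """P(t, x): quenched probability that the walk started at 0 sits at or
       above x at time t, computed by backward induction in a fresh
       environment."""
    h = {y: (1.0 if y >= x else 0.0) for y in range(-t - 1, t + 2)}
    for s in range(t - 1, -1, -1):
        new = {}
        for y in range(-s, s + 1):
            B = random.betavariate(alpha, beta)
            new[y] = B * h[y + 1] + (1.0 - B) * h[y - 1]
        h = new
    return h[0]

N = 20000
zs = [sample_Z() for _ in range(N)]
ps = [sample_P() for _ in range(N)]
for k in (1, 2):     # the first two moments should agree (equality in law)
    mz = sum(z**k for z in zs) / N
    mp = sum(p**k for p in ps) / N
    assert abs(mz - mp) < 0.02
```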

4 Rigorous replica method for the Beta polymer

4.1 Moment formulas

Let \(\mathbb {W}^{k}\) be the Weyl chamber

$$\begin{aligned} \mathbb {W}^{k} = \lbrace \vec {n}\in \mathbb {Z}^k :n_1\geqslant n_2\geqslant \dots \geqslant n_k \rbrace . \end{aligned}$$

For \(\vec {n}\in \mathbb {W}^{k}\), let us define

$$\begin{aligned} u(t, \vec {n}) = \mathbb {E}[Z(t, n_1)\dots Z(t, n_k)], \end{aligned}$$
(20)

with the convention that \(Z(t,n)=0\) for \(n<1\). The recurrence relation (1) implies a recurrence relation for \(u(t, \vec {n})\). We solve this recurrence to find a closed formula for \(u(t, \vec {n})\), using a variant of the Bethe ansatz. It is the analogue of Section 5 in [17]. Besides the strict weak polymer [17], such “replica method” calculations have been performed to study moments of the partition function for the continuum polymer [4, 13, 18], the semi-discrete polymer [4, 10], and the log-gamma polymer [4, 36]. However, in those models the moment problems are ill-posed, and one cannot rigorously recover the distribution from them. In the present case, since \(Z(t,n) \in [0,1]\), the moments do determine the distribution, as explained in Sect. 4.2.

Using the recurrence relation (1),

$$\begin{aligned} u(t+1,\vec {n}) = \mathbb {E}\left[ \prod _{i=1}^k ( (1-B_{t+1,n_i}) Z(t,n_i) + B_{t+1, n_i} Z(t, n_i-1))\right] . \end{aligned}$$
(21)

Let us first simplify this expression when \(k=c\) and \(\vec {n} = (n, \dots , n)\) is a vector of length c with all components equal. In this case, writing \(B=B_{t+1, n}\) to simplify the notation, we have

$$\begin{aligned} u(t+1,\vec {n})&= \sum _{j=0}^c \left( {\begin{array}{c}c\\ j\end{array}}\right) \mathbb {E}[ (1-B)^j B^{c-j} Z(t, n-1)^j Z(t, n)^{c-j} ]\\&=\sum _{j=0}^c \left( {\begin{array}{c}c\\ j\end{array}}\right) \mathbb {E}[ (1-B)^j B^{c-j}]u(t, n, \dots , n,\underbrace{n-1, \dots , n-1}_{j}). \end{aligned}$$

The recurrence relation can be further simplified using the next lemma.

Lemma 4.1

Let B be a random variable following the \(Beta(\mu , \nu -\mu )\) distribution. Then for integers \(0\leqslant j\leqslant c\),

$$\begin{aligned} \mathbb {E}[(1-B)^jB^{c-j}] = \frac{(\nu -\mu )_j(\mu )_{c-j}}{(\nu )_c}, \end{aligned}$$

where \((a)_k\) is the Pochhammer symbol \( (a)_k= a (a+1) \dots (a+k-1)\) and \((a)_0 =1 \).

Proof

By the definition of the Beta law, we have

$$\begin{aligned} \mathbb {E}[(1-B)^jB^{c-j}]&= \frac{\Gamma (\nu )}{\Gamma (\mu )\Gamma (\nu -\mu )}\int _{0}^1 (1-x)^j x^{c-j}\, x^{\mu -1} (1-x)^{\nu -\mu -1}\,\mathrm {d}x,\\&= \frac{\Gamma (\nu )}{\Gamma (\mu )\Gamma (\nu -\mu )}\frac{\Gamma (\mu +c-j)\Gamma (\nu -\mu + j)}{\Gamma (\nu +c)},\\&=\frac{(\nu -\mu )_j(\mu )_{c-j}}{(\nu )_c}. \end{aligned}$$

\(\square \)
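Lemma 4.1 is easy to check numerically; the sketch below (ours) uses integer parameters so that the Beta density is polynomial and composite Simpson quadrature is essentially exact:

```python
import math

def poch(a, k):
    """Pochhammer symbol (a)_k."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) panels."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

mu, nu, c = 2.0, 5.0, 4     # integer parameters keep the integrand polynomial
norm = math.gamma(nu) / (math.gamma(mu) * math.gamma(nu - mu))
for j in range(c + 1):
    moment = simpson(
        lambda x: norm * (1 - x)**j * x**(c - j)
                  * x**(mu - 1) * (1 - x)**(nu - mu - 1),
        0.0, 1.0)
    assert abs(moment - poch(nu - mu, j) * poch(mu, c - j) / poch(nu, c)) < 1e-8
```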

In order to write the general case, we need a little more notation. For \(\vec {n}\in \mathbb {W}^{k}\), we denote by \(c_1, c_2, \dots , c_\ell \) the sizes of the clusters of equal components in \(\vec {n}\). More precisely, \(c_1, c_2, \dots , c_{\ell }\) are positive integers such that \(\sum c_i = k\) and

$$\begin{aligned} n_1 = \dots = n_{c_1}>n_{c_1+1} = \dots = n_{c_1+c_2} >\dots >n_{c_1+\dots + c_{\ell -1}+1} = \dots = n_k. \end{aligned}$$

Define also the operator \(\tau ^{(i)}\) acting on a function \(f:\mathbb {W}^{k}\rightarrow \mathbb {R}\) by

$$\begin{aligned} \tau ^{(i)} f(\vec {n}) = f(n_1, \ldots , n_i-1,\ldots , n_k). \end{aligned}$$

Using Lemma 4.1, we have that

$$\begin{aligned} u(t+1,\vec {n}) = \sum _{j_1=0}^{c_1}\dots \sum _{j_{\ell }=0}^{c_{\ell }} \left( \prod _{i=1}^{\ell }\left( {\begin{array}{c}c_i\\ j_i\end{array}}\right) \frac{(\nu -\mu )_{j_i}(\mu )_{c_i-j_i}}{(\nu )_{c_i}} \prod _{r=0}^{j_i-1} \tau ^{(c_1 + \dots + c_{i} - r)} \right) u(t,\vec {n}). \end{aligned}$$
(22)

In words, for each \(\ell \)-tuple \(j_1, \dots , j_{\ell }\) such that \(0\leqslant j_i \leqslant c_i\), we decrease the \(j_i\) last coordinates of the cluster i in \(\vec {n}\), for each cluster, and multiply by

$$\begin{aligned} \prod _{i=1}^{\ell }\left( {\begin{array}{c}c_i\\ j_i\end{array}}\right) \frac{(\nu -\mu )_{j_i}(\mu )_{c_i-j_i}}{(\nu )_{c_i}}. \end{aligned}$$

Lemma 4.2

Let XY generate an associative algebra such that

$$\begin{aligned} YX = \frac{1}{1+\nu }XX + \frac{\nu -1}{1+\nu } XY + \frac{1}{1+\nu } YY. \end{aligned}$$

Then we have the following non-commutative binomial identity:

$$\begin{aligned} (pX+(1-p)Y )^n = \sum _{j=0}^n \left( {\begin{array}{c}n\\ j\end{array}}\right) \frac{(\nu -\mu )_j(\mu )_{n-j}}{(\nu )_n}X^j Y^{n-j}, \end{aligned}$$

where \(p=\frac{\nu -\mu }{\nu }\).

Proof

It is shown in [30, Theorem 1] that if X and Y satisfy the quadratic homogeneous relation

$$\begin{aligned} YX= \alpha XX + \beta XY + \gamma YY, \end{aligned}$$

with

$$\begin{aligned} \alpha = \frac{\bar{\nu }(1-q)}{1-q\bar{\nu }}, \quad \beta = \frac{q-\bar{\nu }}{1-q\bar{\nu }}, \quad \gamma = \frac{1-q}{1-q\bar{\nu }}, \end{aligned}$$

and

$$\begin{aligned} \bar{\mu } = \bar{p}+\bar{\nu }(1-\bar{p}), \end{aligned}$$

then

$$\begin{aligned} (\bar{p}X+(1-\bar{p})Y)^n = \sum _{j=0}^{n}\varphi _{q, \bar{\mu }, \bar{\nu }}(j\vert n) X^jY^{n-j}, \end{aligned}$$

where \(\varphi _{q, \bar{\mu }, \bar{\nu }}(j\vert n)\) are the q-Hahn weights defined in (14). Our lemma is the \(q\rightarrow 1\) degeneration of this result. \(\square \)
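The q→1 degeneration invoked here can be checked numerically. The sketch below (ours, with an arbitrary parameter choice) verifies that \(\alpha , \beta , \gamma \) and \(\bar{p}\) converge to the coefficients \(\frac{1}{1+\nu }\), \(\frac{\nu -1}{1+\nu }\), \(\frac{1}{1+\nu }\) and \(p=\frac{\nu -\mu }{\nu }\) appearing in Lemma 4.2:

```python
import math

nu, mu = 4.0, 1.5                    # arbitrary parameters with mu < nu
for eps in (1e-4, 1e-6):
    q = math.exp(-eps)
    nb, mb = q**nu, q**mu            # nu_bar = q^nu, mu_bar = q^mu
    alpha = nb * (1 - q) / (1 - q * nb)
    beta = (q - nb) / (1 - q * nb)
    gamma = (1 - q) / (1 - q * nb)
    pbar = (mb - nb) / (1 - nb)      # solves mu_bar = p_bar + nu_bar (1 - p_bar)
    assert abs(alpha - 1 / (1 + nu)) < 100 * eps
    assert abs(beta - (nu - 1) / (1 + nu)) < 100 * eps
    assert abs(gamma - 1 / (1 + nu)) < 100 * eps
    assert abs(pbar - (nu - mu) / nu) < 100 * eps
```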

Let \( \mathcal {L}^{\mathrm {cluster}}_{c}\) denote the operator

$$\begin{aligned} \mathcal {L}^{\mathrm {cluster}}_c = \sum _{j=0}^{c} \left( {\begin{array}{c}c\\ j\end{array}}\right) \frac{(\nu -\mu )_{j}(\mu )_{c-j}}{(\nu )_{c}} \prod _{r=0}^{j-1} \tau ^{(c - r)} \end{aligned}$$

which appears in the R.H.S. of (22), and \(\mathcal {L}^{free}_{c}\) the operator

$$\begin{aligned} \mathcal {L}^{free}_{c} = \prod _{i=1}^{c} \nabla _i, \end{aligned}$$

where \(\nabla _i = p\tau ^{(i)} + (1-p)\). It is worth noticing that for \(c=1\), \(\mathcal {L}^{\mathrm {cluster}}_{c} = \mathcal {L}^{free}_{c}\).

For a function \(f:\mathbb {Z}^c\rightarrow \mathbb {C}\), we formally identify monomials \(X_1 X_2 \dots X_c\) where \(X_i\in \lbrace X,Y\rbrace \) with terms \(f(\vec {n})\) where for all \(1\leqslant i \leqslant c\), \(n_{c-i}=n-1\) if \(X_i=X\) and \(n_{c-i}=n\) if \(X_i=Y\). Using this identification, the binomial formula from Lemma 4.2 says that the operators \(\mathcal {L}^{free}_{c}\) and \(\mathcal {L}^{\mathrm {cluster}}_{c}\) act identically on functions f satisfying the condition

$$\begin{aligned} \forall \, 1\leqslant i <c, \quad \left( \frac{1}{1+\nu } \tau ^{(i)}\tau ^{(i+1)} + \frac{\nu -1}{1+\nu }\tau ^{(i+1)} +\frac{1}{1+\nu } -\tau ^{(i)}\right) f(n, \ldots ,n) = 0. \end{aligned}$$
(23)

One notices that the operator involved in (22) acts independently on each cluster of equal components, as \(\mathcal {L}^{\mathrm {cluster}}_{c_i}\) on the cluster of size \(c_i\). It follows that if a function \(u :\mathbb {Z}_{\geqslant 0}\times \mathbb {Z}^k \rightarrow \mathbb {C}\) satisfies the boundary condition

$$\begin{aligned} \left( \frac{1}{1+\nu } \tau ^{(i)}\tau ^{(i+1)} + \frac{\nu -1}{1+\nu }\tau ^{(i+1)} +\frac{1}{1+\nu } -\tau ^{(i)}\right) u(t, \vec {n}) = 0, \end{aligned}$$
(24)

for all \(\vec {n}\) such that \(n_i=n_{i+1}\) for some \(1\leqslant i < k\), and satisfies the free evolution equation

$$\begin{aligned} u(t+1, \vec {n}) = \left( \prod _{i=1}^{k} \nabla _i \right) u(t, \vec {n}), \end{aligned}$$
(25)

for all \(\vec {n}\in \mathbb {Z}^k\), then the restriction of \(u(t,\vec {n})\) to \(\mathbb {W}^{k}\) satisfies the true evolution equation (22).

Remark 4.3

The coefficients \(\left( {\begin{array}{c}c\\ j\end{array}}\right) \frac{(\nu -\mu )_j(\mu )_{c-j}}{(\nu )_c}\) that appear in the true evolution equation (22) are probabilities of the Beta-binomial distribution with parameters \(c, \mu , \nu -\mu \). Hence, the true evolution equation could be interpreted as the “evolution equation” for a series of urns where each urn evolves according to the Pólya urn scheme. Such dynamics could be interpreted as the \(q\rightarrow 1\) degeneration of the q-Hahn Boson, which is dual to the q-Hahn TASEP [15].

Proposition 4.4

For \(n_1 \geqslant n_2\geqslant \dots \geqslant n_k\geqslant 1\), one has the following moment formula,

$$\begin{aligned}&\mathbb {E}[Z(t,n_1) \ldots Z(t,n_k) ]\nonumber \\&\quad = \frac{1}{(2i\pi )^k} \int \dots \int \prod _{1\leqslant A<B\leqslant k} \frac{z_A-z_B}{z_A-z_B-1} \prod _{j=1}^k \left( \frac{\nu +z_j}{z_j}\right) ^{n_j} \left( \frac{\mu + z_j}{\nu +z_j}\right) ^t\frac{\mathrm {d}z_j}{\nu +z_j},\nonumber \\ \end{aligned}$$
(26)

where the contour for \(z_k\) is a small circle around the origin, and the contour for \(z_j\) contains the contour for \(z_{j+1}\) shifted by \(+1\) for all \(j=1, \dots , k-1\), as well as the origin, but all contours exclude \(-\nu \).

Proof

We show that the right-hand-side of (26) satisfies the free evolution equation, the boundary condition and the initial condition for \(u(0,\vec {n})\) for \(\vec {n}\in \mathbb {W}^{k}\) (the initial condition outside \(\mathbb {W}^{k}\) is inconsequential). The above discussion shows that the restriction to \(\vec {n}\in \mathbb {W}^{k}\) then solves the true evolution equation (22). By the definition of the function u in (20) and the initial condition for the half-line to point polymer, \(u(0,\vec {n}) = \prod _{i=1}^k \mathbbm {1}_{n_i\geqslant 1}=\mathbbm {1}_{n_k\geqslant 1}\) (the second equality holds because the \(n_i\)’s are ordered). Let us consider the right-hand-side of (26) when \(t=0\). If \(n_k\leqslant 0\), there is no pole at zero, so one can shrink the \(z_k\) contour to a point, and consequently \(u(0,\vec {n})=0\). When \(n_k>0\) (and consequently all \(n_i\)’s are positive), there is no pole at \(-\nu \) for \(t=0\), so that one can successively send the contours for the variables \(z_k\), \(z_{k-1}, \dots \) to infinity. Since the residue at infinity is one for each variable, \(u(0,\vec {n})=1\). Hence, the initial condition is satisfied.

In order to show that the boundary condition is satisfied, we assume that \(n_i=n_{i+1}\) for some i. Let us apply the operator

$$\begin{aligned} \left( \frac{1}{1+\nu } \tau ^{(i)}\tau ^{(i+1)} + \frac{\nu -1}{1+\nu }\tau ^{(i+1)} +\frac{1}{1+\nu } -\tau ^{(i)}\right) \end{aligned}$$

inside the integrand. This brings into the integrand a factor

$$\begin{aligned}&\frac{1}{1+\nu } \frac{z_i}{\nu +z_i}\frac{z_{i+1}}{\nu +z_{i+1}} + \frac{\nu -1}{\nu +1} \frac{z_{i+1}}{\nu +z_{i+1}} + \frac{1}{1+\nu } - \frac{z_i}{\nu + z_i} \\&\quad = \frac{-\nu ^2(z_i-z_{i+1}-1)}{(1+\nu )(\nu + z_i)(\nu +z_{i+1})}. \end{aligned}$$

Since it cancels the pole for \(z_i=z_{i+1}+1\), one can use the same contour for both variables, and since the integrand is now antisymmetric in the variables \((z_i, z_{i+1})\) the integral is zero as desired.

In order to show that the free evolution equation is satisfied, it is enough to show that applying the operator \(p\tau ^{(i)} + (1-p)\) for i from 1 to k inside the integrand brings an extra factor \(\prod _{j=1}^{k}\frac{\mu +z_j}{\nu +z_j}\). This is clearly true since

$$\begin{aligned} ( p\tau ^{(i)} + (1-p)) \left( \frac{\nu +z_i}{z_i}\right) ^{n_i} = \left( \frac{\nu +z_i}{z_i}\right) ^{n_i} \frac{\mu +z_i}{\nu +z_i}. \end{aligned}$$

\(\square \)
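For \(k=1\), formula (26) can be evaluated numerically and compared with \(\mathbb {E}[Z(t,n)]\) computed from the expectation of the recurrence (which is linear, so \(B\) may be replaced by \(\mathbb {E}[B]=\mu /\nu \)). A small sketch (ours, arbitrary parameters), using the trapezoid rule on a circle of radius 1/2:

```python
import cmath

mu, nu, t, n = 1.5, 4.0, 5, 3

def integrand(z):
    return ((nu + z) / z)**n * ((mu + z) / (nu + z))**t / (nu + z)

# (1/2i pi) times the contour integral over |z| = 1/2 (the pole -nu is outside)
M, r = 400, 0.5
moment = sum(integrand(r * cmath.exp(2j * cmath.pi * k / M))
             * r * cmath.exp(2j * cmath.pi * k / M) for k in range(M)) / M
moment = moment.real

# E[Z(t, n)] from the mean recurrence, with Z(0, m) = 1 for m >= 1
p = mu / nu
m_val = {j: (1.0 if j >= 1 else 0.0) for j in range(-1, n + 1)}
for s in range(t):
    m_val = {j: (0.0 if j < 1 else p * m_val[j] + (1 - p) * m_val[j - 1])
             for j in range(-1, n + 1)}
assert abs(moment - m_val[n]) < 1e-8
```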

Remark 4.5

It is possible to prove a generalization of Proposition 4.4 where the parameter \(\mu \) depends on t. In this generalization, each edge starting from a point (s, n) for any n would have a weight B or \(1-B\) (depending on the direction of the edge), where B is a random variable distributed according to the Beta distribution with parameters \((\mu _s, \nu -\mu _s)\). In the formula (26), the factor \(\left( \frac{\mu + z_j}{\nu +z_j}\right) ^t\) would be replaced by

$$\begin{aligned} \prod _{s=0}^{t-1} \frac{\mu _s + z_j}{\nu +z_j}. \end{aligned}$$

Such moment formulas with time-inhomogeneous parameters have been proved for the discrete-time q-TASEP [3] and for the q-Hahn TASEP in [15, Section 2.4] (see also the discussion in [16, Section 5.7], which deals with a generalization of the q-Hahn TASEP). In all these cases, this allows one to prove Fredholm determinant formulas with time-dependent parameters, using the same method as in the homogeneous case. It is not clear, however, whether one can find moment formulas with a parameter inhomogeneity depending on n (e.g. with the parameter \(\nu \) depending on n).

Proposition 4.4 provides an integral formula for the moments of Z(t, n). In order to form the generating series, it is convenient to transform the formula so that all integrations are over the same contour.

Proposition 4.6

For all \(n,t \geqslant 0\), we have

$$\begin{aligned} \mathbb {E}[ Z(t,n)^k]= & {} k!\sum _{\lambda \vdash k} \frac{1}{m_1!m_2!\dots }\frac{1}{(2i\pi )^{\ell (\lambda )}} \int \dots \int \det \left( \frac{1}{v_i+ \lambda _i-v_j}\right) _{i,j=1}^{\ell (\lambda )}\nonumber \\&\times \prod _{j=1}^{\ell (\lambda )} f(v_j)f(v_j+1)\dots f(v_j+\lambda _j-1) \mathrm {d}v_1 \dots \mathrm {d}v_{\ell (\lambda )}, \end{aligned}$$
(27)

where

$$\begin{aligned} f(v) = \frac{g^{\mathrm {BP}}(v)}{g^{\mathrm {BP}}(v+1)} = \left( \frac{\nu +v}{v} \right) ^n \left( \frac{\mu +v}{\nu +v}\right) ^t \frac{1}{v+\nu }, \end{aligned}$$

where \(g^{\mathrm {BP}}\) is defined in (5), the integration contour is a small circle around 0 excluding \(-\nu \), and for a partition \(\lambda \vdash k\) (i.e. \(\sum _i \lambda _i =k\)) we write \(\lambda =1^{m_1} 2^{m_2}\dots \), meaning that \(m_j\) is the number of indices i such that \(\lambda _i=j\); and \(\ell (\lambda )=\sum _i m_i\) is the number of non-zero components.

Proof

This type of deduction, called the contour shift argument, has already appeared in the context of the q-Whittaker process in [4, Section 3.2.1]. See [8], in particular Proposition 7.4, and references therein for more background on the contour shift argument. The present formulation corresponds to a degeneration as \(q\rightarrow 1\) of Proposition 3.2.1 in [4].

One starts with the moment formula given by Proposition 4.4:

$$\begin{aligned} \mathbb {E}[ Z(t,n)^k] = \frac{1}{(2i\pi )^k} \int \dots \int \prod _{A<B} \frac{z_A-z_B}{z_A-z_B-1} \prod _{j=1}^k f(z_j)\mathrm {d}z_j. \end{aligned}$$

We need to shrink all contours to a small circle around 0. During the deformation of contours, one encounters all the poles of the product \(\prod _{A<B} \frac{z_A-z_B}{z_A-z_B-1}\). Thus, a direct proof would amount to careful bookkeeping of the residues. Although one could adapt the proof of [8, Proposition 7.4] to the present setting, we refer to Proposition 6.2.7 in [4], which provides a very similar formula. The only modification is that the function f that we consider has a pole at \(-\nu \), but this does not play any role in the deformation of contours.

It is also worth remarking that applying Proposition 3.2.1 in [4] to q-Hahn moment formula [15, Theorem 1.8] and taking a suitable limit yields the statement of Proposition 4.6. \(\square \)
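A numerical cross-check (our own sketch) of Proposition 4.6 against Proposition 4.4 for \(k=2\) and \(n_1=n_2=n\), taking the determinant entries as \(1/(v_i+\lambda _i-v_j)\) (the sign convention consistent with the \(k=1\) case) and using the trapezoid rule on circles:

```python
import cmath

mu, nu, t, n = 1.5, 4.0, 4, 2

def f(v):
    return ((nu + v) / v)**n * ((mu + v) / (nu + v))**t / (nu + v)

def circle(rad, M=200):
    return [rad * cmath.exp(2j * cmath.pi * k / M) for k in range(M)]

# E[Z^2] from the nested-contour formula (26) with n_1 = n_2 = n
small, big = circle(0.25), circle(1.5)   # |z_1|=3/2 encloses the z_2 circle shifted by +1
nested = 0.0 + 0.0j
for z1 in big:
    inner = sum((z1 - z2) / (z1 - z2 - 1) * f(z2) * z2 for z2 in small) / len(small)
    nested += inner * f(z1) * z1
nested = (nested / len(big)).real

# E[Z^2] from the single-contour formula; the partitions of k = 2 are (1,1) and (2)
I11 = 0.0 + 0.0j
for v1 in small:
    for v2 in small:
        det = 1.0 - 1.0 / ((v1 + 1 - v2) * (v2 + 1 - v1))
        I11 += det * f(v1) * f(v2) * v1 * v2
I11 /= len(small)**2
I2 = sum(0.5 * f(v) * f(v + 1) * v for v in small) / len(small)
single = (2.0 * (I11 / 2.0 + I2)).real

assert abs(nested - single) < 1e-8
```

(Shrinking the \(z_1\) contour onto the small circle produces the residue term \(\oint f(z_2)f(z_2+1)\), and symmetrizing the cross factor in \((z_1,z_2)\) produces the \(2\times 2\) determinant, which is how the two expressions match.)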

4.2 Proof of Theorem 2.12

Thanks to Proposition 4.6, the moments of Z(t, n) have a suitable form for taking the generating series. Let us denote \(\mu _k = \mathbb {E}\left[ Z(t,n)^k\right] \). A degeneration as q goes to 1 of Proposition 3.2.8 in [4] shows that

$$\begin{aligned} \sum _{k\geqslant 0} \mu _k \frac{u^k}{k!} = \det (I+K)_{\mathbb {L}^2(\mathbb {Z}_{>0}\times C_0)}, \end{aligned}$$

where \(\det (I+K)\) is the formal Fredholm determinant expansion of the operator K defined by the integral kernel

$$\begin{aligned} K(n_1, v_1 ; n_2, v_2) = \frac{u^{n_1} f(v_1) f(v_1+1) \dots f(v_1+n_1-1)}{v_1+n_1-v_2}, \end{aligned}$$

and \(C_0\) is a positively oriented circular contour around 0 excluding \(-\nu \). Since \(f(v+n_1)\) is uniformly bounded for \(v\in C_0\) and \(n_1\geqslant 1\), and \(v_1+n_1-v_2\) is uniformly bounded away from 0 for \(v_1, v_2 \in C_0\), \(n_1\geqslant 1\), the identity also holds numerically. Since \(\vert Z(t, n)\vert \leqslant 1\), one can exchange summation and expectation, so that for any \(u\in \mathbb {C}\)

$$\begin{aligned} \sum _{k\geqslant 0} \mu _k \frac{u^k}{k!} = \mathbb {E}[e^{uZ(t,n)} ]. \end{aligned}$$

It is useful to notice that

$$\begin{aligned} f(v_1) f(v_1+1) \dots f(v_1+n_1-1) = \frac{g^{\mathrm {BP}}(v_1)}{g^{\mathrm {BP}}(v_1+n_1)}. \end{aligned}$$

Next, we want to rewrite \(\det (I+K)\) as the Fredholm determinant of an operator acting on a single contour. For that purpose we use the following Mellin–Barnes integral formula:

Lemma 4.7

For \(u\in \mathbb {C}{\setminus } \mathbb {R}_{>0}\) with \(\vert u\vert <1\),

$$\begin{aligned}&\sum _{n=1}^{\infty } u^n \frac{g^{\mathrm {BP}}(v)}{g^{\mathrm {BP}}(v+n)}\frac{1}{v+n-v'}\nonumber \\&\quad = \frac{1}{2i\pi } \int _{1/2-i\infty }^{1/2+i\infty } \Gamma (-s)\Gamma (1+s) (-u)^s \frac{g^{\mathrm {BP}}(v)}{g^{\mathrm {BP}}(v+s)}\frac{\mathrm {d}s}{v+s-v'}, \end{aligned}$$
(28)

where \(z^s\) is defined with respect to a branch cut along \(z\in \mathbb {R}_{\leqslant 0}\).

Proof

The statement of the Lemma is very similar to [4, Lemma 3.2.13].

Since \(\mathrm{Res}_{s=k} \left( \Gamma (-s)\Gamma (1+s)\right) = (-1)^{k+1}\), we have that

$$\begin{aligned}&\sum _{n=1}^{\infty } u^n \frac{g^{\mathrm {BP}}(v)}{g^{\mathrm {BP}}(v+n)}\frac{1}{v+n-v'}\nonumber \\&\quad = \frac{1}{2i\pi } \int _{\mathcal {H}} \Gamma (-s)\Gamma (1+s) (-u)^s \frac{g^{\mathrm {BP}}(v)}{g^{\mathrm {BP}}(v+s)}\frac{\mathrm {d}s}{v+s-v'}, \end{aligned}$$
(29)

where \(\mathcal {H}\) is a negatively oriented integration contour enclosing all positive integers. For the identity to be valid, the L.H.S. of (29) must converge, and the contour must be approximated by a sequence of contours \(\mathcal {H}_k\) enclosing the integers \(1, \dots , k\) such that the integral along the symmetric difference \(\mathcal {H}{\setminus } \mathcal {H}_k\) goes to zero.

The following estimates show that one can choose the contour \(\mathcal {H}_k\) to be the rectangular contour connecting the points \(1/2+i\), \(k+1/2+i\), \(k+1/2-i\) and \(1/2-i\); and the contour \(\mathcal {H}\) to be the infinite contour from \(\infty -i\) to \(1/2-i\) to \(1/2+i\) to \(\infty +i\).

We first need an estimate for the Gamma function [19, Chapter 1, 1.18 (2)]: for any \(\delta >0\)

$$\begin{aligned} \Gamma (z) = \sqrt{2\pi } e^{-z} z^{z-1/2} (1+\mathcal {O}\left( 1/z\right) )\quad \text {as } \vert z\vert \rightarrow \infty , \quad \vert \arg (z)\vert <\pi -\delta . \end{aligned}$$
(30)
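This estimate is easy to sanity-check numerically (our quick check at a moderate real argument; the relative error of the leading term is of order 1/z):

```python
import math

z = 60.0
stirling = math.sqrt(2 * math.pi) * math.exp(-z) * z**(z - 0.5)
assert abs(stirling / math.gamma(z) - 1.0) < 1e-2   # relative error ~ 1/(12z)
```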

Then recall that

$$\begin{aligned} g^{\mathrm {BP}}(v+s) = \left( \frac{\Gamma (v+s)}{\Gamma (\nu +v+s)} \right) ^n \left( \frac{\Gamma (\nu +v+s)}{\Gamma (\mu +v+s)}\right) ^t \Gamma (\nu + v+s). \end{aligned}$$

Using (30),

$$\begin{aligned} g^{\mathrm {BP}}(v+s) = \sqrt{2\pi } e^{-\nu -v-s}(\nu +v+s)^{\nu +v+s-1/2} \frac{(\nu +v+s)^{(\nu -\mu )t}}{(\nu +v+s)^{\nu n}} \left( 1+\mathcal {O}\left( \frac{1}{s}\right) \right) . \end{aligned}$$

This implies that for s going to \(\infty e^{i\phi }\) with \(\phi \in [-\pi /2, \pi /2]\), \(1/g^{\mathrm {BP}}(v+s)\) decays exponentially in \(\vert s\vert \). Moreover, for s going to \(\infty e^{i\phi }\) with \(\phi \in [-\pi /2, \pi /2]\) and \(\phi \ne 0\),

$$\begin{aligned} (-u) ^s \frac{\pi }{\sin (\pi s)} \frac{1}{v+s-v'} \end{aligned}$$

is bounded. Thus, one can freely deform the integration contour \(\mathcal {H}\) in (29) to become the straight line from \(1/2 -i\infty \) to \(1/2+i\infty \). \(\square \)

This shows that for any \(u\in \mathbb {C}{\setminus } \mathbb {R}_{>0}\) with \(\vert u\vert <1\), one has that

$$\begin{aligned} \mathbb {E}[e^{uZ(t,n)} ] = \det (I+K^{\mathrm {BP}}_u)_{\mathbb {L}^2(C_0)}, \end{aligned}$$
(31)

where the kernel \(K^{\mathrm {BP}}_u\) is defined in the statement of Theorem 2.12. One extends the result to any \(u\in \mathbb {C}{\setminus } \mathbb {R}_{>0}\) by analytic continuation. The left-hand side in (31) is analytic since \(\vert Z(t,n)\vert <1\). The right-hand side is analytic because the Fredholm determinant expansion is absolutely summable and integrable. Indeed, first notice that since the Fredholm determinant contour is finite, it is clear that \(K^{\mathrm {BP}}_{u}(v, v')\) is uniformly bounded for \(v,v'\) in the contour \(C_0\). Moreover, each term in the Fredholm determinant expansion

$$\begin{aligned} \det (I+K^{\mathrm {BP}}_{u}) = 1 + \sum _{n=1}^{\infty } \frac{1}{n!} \int \dots \int \det (K^{\mathrm {BP}}_{u}(v_i, v_j))_{i,j=1}^{n} \mathrm {d}v_1\dots \mathrm {d}v_n, \end{aligned}$$

can be bounded using Hadamard's bound, so that the sum converges absolutely.

5 Zero-temperature limit

5.1 Proof of Proposition 2.10

In this section, we prove that the Bernoulli-Exponential first passage percolation model is the zero-temperature limit of the Beta-RWRE. The zero-temperature limit corresponds to sending the parameters \(\alpha , \beta \) of the Beta RWRE to zero.

Proof

We first show how the transition probabilities of the Beta RWRE degenerate in the zero-temperature limit.

Lemma 5.1

Fix \(a,b >0\). For \(\epsilon >0\), let \(B_{\epsilon }\) be a Beta distributed random variable with parameters \((\epsilon a, \epsilon b)\). We have the convergence in distribution

$$\begin{aligned} (-\epsilon \log (B_{\epsilon }), -\epsilon \log (1-B_{\epsilon }) ) \Longrightarrow (\xi E_a , (1-\xi )E_b) \end{aligned}$$

as \(\epsilon \) goes to zero, where \(\xi \) is a Bernoulli random variable with parameter \(b/(a+b)\) and \((E_a, E_b)\) are exponential random variables with parameters a and b, independent of \(\xi \).

Proof

Let \(f, g :\mathbb {R}\rightarrow \mathbb {R}\) be continuous bounded functions.

$$\begin{aligned}&\mathbb {E}[ f(-\epsilon \log (B_{\epsilon }))g(-\epsilon \log (1-B_{\epsilon }))] \nonumber \\&\quad =\int _0^{1} f(-\epsilon \log (x)) g(-\epsilon \log (1-x)) x^{\epsilon a -1} (1-x)^{\epsilon b-1} \frac{\Gamma (\epsilon a+\epsilon b)}{\Gamma (\epsilon a) \Gamma (\epsilon b)} \mathrm {d}x.\quad \quad \end{aligned}$$
(32)

In order to compute the limit of (32), we evaluate separately the contribution of the integral between 0 and 1/2, and between 1/2 and 1. By making the change of variables \(z=-\epsilon \log (x)\), we have that

$$\begin{aligned}&\int _0^{1/2} f(-\epsilon \log (x)) g(-\epsilon \log (1-x)) x^{\epsilon a -1} (1-x)^{\epsilon b-1} \frac{\Gamma (\epsilon a+\epsilon b)}{\Gamma (\epsilon a) \Gamma (\epsilon b)} \mathrm {d}x \nonumber \\&\quad =\frac{\Gamma (\epsilon a+\epsilon b)}{\epsilon \Gamma (\epsilon a) \Gamma (\epsilon b)} \int _{\epsilon \log (2)}^{\infty } f(z) g(-\epsilon \log (1-e^{-z/\epsilon })) e^{-az} e^{(\epsilon b-1)\log (1-e^{-z/\epsilon })} \mathrm {d}z.\nonumber \\ \end{aligned}$$
(33)

Since

$$\begin{aligned} \frac{\Gamma (\epsilon a+\epsilon b)}{\epsilon \,\Gamma (\epsilon a) \Gamma (\epsilon b)} \xrightarrow [\epsilon \rightarrow 0]{} \frac{ab}{a+b}, \end{aligned}$$

the limit of the right-hand-side in (33) is

$$\begin{aligned} \frac{b}{a+b} \int _{0}^{\infty } f(z)g(0) ae^{-az} \mathrm {d}z = \frac{b}{a+b} \mathbb {E}[f(E_a)g(0)]. \end{aligned}$$
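Since \(\Gamma (x)\sim 1/x\) as \(x\rightarrow 0\), the Gamma prefactor behaves like \(\epsilon \, ab/(a+b)\) to leading order in \(\epsilon \). A quick pure-Python sanity check of this behaviour (an illustration only):

```python
import math

def gamma_ratio(eps, a, b):
    # Gamma(eps*a + eps*b) / (Gamma(eps*a) * Gamma(eps*b))
    return math.gamma(eps * (a + b)) / (math.gamma(eps * a) * math.gamma(eps * b))

a, b = 2.0, 3.0
for eps in (1e-2, 1e-3, 1e-4):
    # The ratio vanishes linearly in eps, with slope a*b/(a+b):
    print(eps, gamma_ratio(eps, a, b) / eps)  # tends to a*b/(a+b) = 1.2
```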

The contribution of the integral in (32) between 1 / 2 and 1 is computed in the same way, and we find that

$$\begin{aligned}&\lim _{\epsilon \rightarrow 0} \mathbb {E}[ f(-\epsilon \log (B_{\epsilon }))g(-\epsilon \log (1-B_{\epsilon }))]\\&\quad = \frac{b}{a+b} \mathbb {E}[f(E_a)g(0)]+ \frac{a}{a+b} \mathbb {E}[f(0)g(E_b)]\\&\quad = \mathbb {E}[ f(\xi E_a)g((1-\xi )E_b)], \end{aligned}$$

which proves the claim. \(\square \)

Remark 5.2

Whether \(E_a\) and \(E_b\) are independent of each other does not matter. However, it is important that \(E_a\) and \(E_b\) are independent of the Bernoulli random variable \(\xi \).
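A quick Monte Carlo illustration of Lemma 5.1 (a sketch, not part of the argument; the parameter values are arbitrary): for small \(\epsilon \), a \(\mathrm {Beta}(\epsilon a, \epsilon b)\) sample is close to either 0 or 1, and it is close to 0 with probability approximately \(b/(a+b)\), matching \(\mathbb {P}(\xi =1)\).

```python
import random

random.seed(0)
a, b, eps, n = 1.0, 2.0, 1e-2, 20000

samples = [random.betavariate(eps * a, eps * b) for _ in range(n)]
near_zero = sum(s < 0.5 for s in samples) / n
mean = sum(samples) / n

print(near_zero)  # approximately b/(a+b) = 2/3
print(mean)       # the mean of Beta(eps*a, eps*b) is exactly a/(a+b) = 1/3 for every eps
```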

Let \(\alpha _{\epsilon }=\epsilon a\), \(\beta _{\epsilon } = \epsilon b\), and let \(P_{\epsilon }(t,x)\) be the (quenched) distribution function of the endpoint at time t for the Beta random walk with parameters \(\alpha _{\epsilon }\) and \(\beta _{\epsilon }\). Let \(T(n,m)\) be the first-passage time in the Bernoulli-Exponential model with parameters \(a, b\).

It is convenient to define the analogue of the set of weights \(w_e\) of the Beta polymer in the context of the Beta RWRE. For an edge e in \((\mathbb {Z}_{\geqslant 0})^2 \) we define \(p_e\) by

$$\begin{aligned} p_e= {\left\{ \begin{array}{ll} B_{j-i, i+j}&{} \text {if }e\text { is the vertical edge }(i,j)\rightarrow (i, j+1)\\ 1-B_{j-i, i+j}&{} \text {if }e\text { is the horizontal edge }(i,j)\rightarrow (i+1,j);\\ \end{array}\right. } \end{aligned}$$

where the variables \(B_{\cdot , \cdot }\) define the environment of the random walk. Lemma 5.1 implies that as \(\epsilon \) goes to zero, we have the weak convergence

$$\begin{aligned} \min _{\pi : (0,0)\rightarrow D_{n,m}}\left\{ \sum _{e\in \pi } -\epsilon \log (p_e)\right\} \Rightarrow \min _{\pi : (0,0)\rightarrow D_{n,m}}\left\{ \sum _{e\in \pi } t_e\right\} , \end{aligned}$$

where the minimum is taken over up-right paths, and the passage times \(t_e\) are defined in (3).

Since the times \(t_e\) in the FPP model are either zero or exponential, and there is at most one path with zero passage time, the minimum over paths of \(\sum _{e\in \pi } t_e\) is attained for a unique path with probability one. We know by the principle of the largest term that as \(\epsilon \rightarrow 0\),

$$\begin{aligned} -\epsilon \log \big (P_{\epsilon }(n+m,m-n)\big ) = -\epsilon \log \left( \sum _{\pi : (0,0) \rightarrow D_{n,m}} \exp \left( \sum _{e\in \pi } \log (p_e)\right) \right) \end{aligned}$$

has the same limit as

$$\begin{aligned} \min _{\pi :(0,0)\rightarrow D_{n,m}} \left\{ \sum _{e\in \pi } -\epsilon \log (p_e) \right\} . \end{aligned}$$

Since the family of rescaled weights \((-\epsilon \log (p_e))_e\) weakly converges to \((t_e)_e\), then

$$\begin{aligned} \min _{\pi : (0,0)\rightarrow D_{n,m}} \left\{ \sum _{e\in \pi } -\epsilon \log (p_e) \right\} \Rightarrow \min _{\pi : (0,0)\rightarrow D_{n,m}}\left\{ \sum _{e\in \pi } t_e\right\} . \end{aligned}$$

Hence for any \(n, m\geqslant 0\), \(-\epsilon \log (P_{\epsilon }(n+m, m-n))\) weakly converges, as \(\epsilon \) goes to zero, to \(T(n,m)\). \(\square \)

5.2 Proof of Theorem 2.18

Theorem 2.18 states that for \(r\in \mathbb {R}_{>0}\), one has

$$\begin{aligned} \mathbb {P}( T(n,m) > r) = \det (I+K^{\mathrm {FPP}}_r)_{\mathbb {L}^2(C'_0)} \end{aligned}$$

where \(C'_0\) is a small positively oriented circle containing 0 but not \(-a\), and \(K^{\mathrm {FPP}}_r :\mathbb {L}^2(C'_0)\rightarrow \mathbb {L}^2(C'_0)\) is defined by its integral kernel

$$\begin{aligned} K^{\mathrm {FPP}}_r(u,u') = \frac{1}{2i\pi } \int _{1/2-i\infty }^{1/2+i\infty } \frac{e^{rs}}{s} \frac{g^{\mathrm {FPP}}(u)}{g^{\mathrm {FPP}}(u+s)} \frac{\mathrm {d}s}{s+u-u'} \end{aligned}$$
(34)

where

$$\begin{aligned} g^{\mathrm {FPP}}(u) = \left( \frac{a+u}{u}\right) ^n \left( \frac{a+u}{a+b+u} \right) ^m \frac{1}{u}. \end{aligned}$$
(35)

Proof

The proof splits into two pieces. We first show that under appropriate scalings, the Laplace transform \(\mathbb {E}[ e^{u P_{\epsilon }(n+m, m-n)} ]\) converges to \(\mathbb {P}(T(n,m)\geqslant r)\). Then we show that the Fredholm determinant \(\det (I+K^{\mathrm {BP}}_u)\) from Theorem 2.13 converges to \(\det (I+K^{\mathrm {FPP}}_r)_{\mathbb {L}^2(C'_0)}\).

First step We have an exact formula for \(\mathbb {E}[ e^{u P_{\epsilon }(n+m, m-n)} ]\). Let us scale u as \(u=-\exp (\epsilon ^{-1}r)\) so that

$$\begin{aligned} \mathbb {E}[ e^{u P_{\epsilon }(n+m,m-n)} ] = \mathbb {E}[\exp (-e^{-\epsilon ^{-1}(-\epsilon \log (P_{\epsilon }(n+m,m-n))-r)}) ]. \end{aligned}$$

If \(f_{\epsilon }(x):= \exp (-e^{-\epsilon ^{-1}x})\), then the sequence of functions \(\lbrace f_{\epsilon } \rbrace \) maps \(\mathbb {R}\) to (0, 1), is strictly increasing with a limit of 1 at \(+\infty \) and 0 at \(-\infty \), and for each \(\delta >0\), on \(\mathbb {R}{\setminus } [-\delta , \delta ]\) converges uniformly to \(\mathbbm {1}_{x>0}\). We define the r-shift of \(f_{\epsilon }\) as \(f_{\epsilon }^r(x) = f_{\epsilon }(x-r)\). Then,

$$\begin{aligned} \mathbb {E}[ e^{u P_{\epsilon }(n+m, m-n)} ] = \mathbb {E}[ f_{\epsilon }^r(-\epsilon \log (P_{\epsilon }(n+m, m-n)))]. \end{aligned}$$

Since the variable T(nm) has an atom in zero, we are not exactly in the situation of Lemma 4.1.38 in [4], but we can adapt the proof. Let \(s<r<u\). By the properties of the functions \(f_{\epsilon }\) mentioned above, we have that for any \( \eta >0\), there exists an \(\epsilon _0\) such that for any \(\epsilon <\epsilon _0\),

$$\begin{aligned} \mathbb {P}\big ( -\epsilon \log \big (P_{\epsilon }(n+m, m-n)\big ) \geqslant u \big )&\leqslant \mathbb {E}[ f_{\epsilon }^r(-\epsilon \log (P_{\epsilon }(n+m, m-n)))]\\&\leqslant \mathbb {P}\big ( -\epsilon \log (P_{\epsilon }(n+m, m-n)) \geqslant s \big ). \end{aligned}$$

Since we have established the weak convergence of \(-\epsilon \log (P_{\epsilon }(n+m, m-n))\), one can take limits as \(\epsilon \) goes to zero in the probabilities, and we find that

$$\begin{aligned} \mathbb {P}( T(n,m)\geqslant u )&\leqslant \liminf _{\epsilon \rightarrow 0} \mathbb {E}[ f_{\epsilon }^r(-\epsilon \log (P_{\epsilon }(n+m, m-n)))] \nonumber \\&\leqslant \limsup _{\epsilon \rightarrow 0} \mathbb {E}[ f_{\epsilon }^r(-\epsilon \log (P_{\epsilon }(n+m, m-n)))]\leqslant \mathbb {P}( T(n,m) \geqslant s ). \end{aligned}$$

Now we let s and u tend to r, and notice that the law of \(T(n,m)\) can be decomposed as an atom at zero plus an absolutely continuous part. Thus, for any \(r>0\),

$$\begin{aligned} \mathbb {P}( T(n,m)> r ) = \lim _{\epsilon \rightarrow 0} \mathbb {E}[ f_{\epsilon }^r(-\epsilon \log (P_{\epsilon }(n+m, m-n)))]. \end{aligned}$$

Second step We shall prove that the limit when \(\epsilon \) goes to zero of \(\mathbb {E}[ e^{u P_{\epsilon }(n+m, m-n)} ]\) is \(\det (I+K^{\mathrm {FPP}}_r)_{\mathbb {L}^2(C_0)}\), where \(K_r^{\mathrm {FPP}}\) is defined as in Theorem 2.18. For that purpose, we take the limit of the Fredholm determinant of the kernel \(K^{RW}\) from Theorem 2.13. Let us use the change of variables

$$\begin{aligned} v=\epsilon \tilde{v},\quad v'=\epsilon \tilde{v}',\quad s=\epsilon \tilde{s}. \end{aligned}$$

Assuming that the limit of the Fredholm determinant is the Fredholm determinant of the limit, which we prove below, we have to take the limit of \(\epsilon K^{RW}(\epsilon \tilde{v}, \epsilon \tilde{v}')\). The factor \(\epsilon \) in front of \(K^{RW}\) is a priori necessary; it comes from the Jacobian of the change of variables \(v=\epsilon \tilde{v}\) and \(v'=\epsilon \tilde{v}'\). For any \(0<\epsilon <1\), the kernel \(K^{RW}(v,v')\) can be written as an integral over \(\frac{1}{2} \epsilon + i\mathbb {R}\) instead of an integral over \(\frac{1}{2}+i\mathbb {R}\), since we do not cross any singularity of the integrand during the contour deformation, and the integrand has exponential decay. Thus, one can write

$$\begin{aligned} \epsilon K^{RW}(\epsilon \tilde{v}, \epsilon \tilde{v}') = \frac{1}{2i\pi } \int _{1/2 -i\infty }^{1/2 +i\infty } \frac{\epsilon \pi }{\sin (\pi \epsilon \tilde{s})} (-u)^{\epsilon \tilde{s}} \frac{g^{RW}(\epsilon \tilde{v})}{g^{RW}(\epsilon \tilde{v}+\tilde{s})} \frac{\mathrm {d}\tilde{s}}{\tilde{s}+\tilde{v}-\tilde{v}'}. \end{aligned}$$
(36)

With \(u=-\exp \left( \epsilon ^{-1}r\right) \), we have that \((-u)^{\epsilon \tilde{s}} = e^{\tilde{s}r}\). Moreover, since

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \epsilon \Gamma (\epsilon z) = \frac{1}{z}, \end{aligned}$$

we have that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \frac{g^{RW}(\epsilon \tilde{v})}{g^{RW}(\epsilon \tilde{v}+\tilde{s})} = \frac{g^{\mathrm {FPP}}(\tilde{v})}{g^{\mathrm {FPP}}(\tilde{v}+\tilde{s})}, \end{aligned}$$

where \(g^{\mathrm {FPP}}\) is defined in (35), and

$$\begin{aligned} \lim _{\epsilon \rightarrow 0}\frac{\epsilon \pi }{\sin (\pi \epsilon \tilde{s})} = \frac{1}{\tilde{s}}. \end{aligned}$$
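Both elementary limits used in this step, \(\epsilon \Gamma (\epsilon z)\rightarrow 1/z\) and \(\epsilon \pi /\sin (\pi \epsilon \tilde{s})\rightarrow 1/\tilde{s}\), are easy to check numerically (a pure-Python illustration; note the factor \(\pi \) inside the sine):

```python
import cmath
import math

z = 3.0         # a real test point for eps * Gamma(eps * z) -> 1/z
s = 0.5 + 2.0j  # a test point on the contour 1/2 + iR
for eps in (1e-2, 1e-4, 1e-6):
    print(eps * math.gamma(eps * z) - 1.0 / z)                           # -> 0
    print(abs(eps * math.pi / cmath.sin(math.pi * eps * s) - 1.0 / s))   # -> 0
```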

Because the integrand in (34) is not absolutely integrable, one cannot apply dominated convergence directly. Instead, we will split the integral (36) into two pieces: the integral over \(\tilde{s}\) when \({{\mathrm {Im}}}[\epsilon \tilde{s} ]<1/4\) and the integral over \(\tilde{s}\) when \({{\mathrm {Im}}}[\epsilon \tilde{s} ] \geqslant 1/4\). Let us begin with some estimates. Since the function \(z\mapsto z/\sin (z)\) is holomorphic in a disc of radius 1/2 around zero, there exists a constant \(C>0\) such that for \(\tilde{s}\in 1/2+i\mathbb {R}\) and \(\epsilon >0\) such that \(\vert \epsilon \tilde{s} \vert <1/2\), we have

$$\begin{aligned} \left| \frac{\epsilon \pi }{\sin (\pi \epsilon \tilde{s})} -\frac{1}{\tilde{s}} \right| <C\epsilon . \end{aligned}$$

In order to lighten the notation, we denote

$$\begin{aligned} G(\epsilon , \tilde{s}) = \frac{g^{RW}(\epsilon \tilde{v})}{g^{RW}(\epsilon \tilde{v}+\epsilon \tilde{s})} \frac{1}{\tilde{s}+\tilde{v}-\tilde{v}'} . \end{aligned}$$

The variables \(\tilde{v}\) and \(\tilde{v}'\) are fixed for the moment. We know that \(G(\epsilon , \tilde{s})\) is bounded for \(\epsilon \) close to zero and \( \tilde{s}\in 1/2+i\mathbb {R}\). Moreover, there exists a constant \(C'>0\) such that for \(\vert \epsilon \tilde{s} \vert <1/2\),

$$\begin{aligned} \left| G(\epsilon , \tilde{s}) - \frac{g^{\mathrm {FPP}}(\tilde{v})}{g^{\mathrm {FPP}}(\tilde{v}+\tilde{s})} \frac{1}{\tilde{v}+\tilde{s}-\tilde{v}'} \right| <C'\epsilon . \end{aligned}$$

We have the decomposition

$$\begin{aligned}&\frac{1}{2i\pi } \int _{\frac{1}{2} -\frac{i\epsilon ^{-1}}{4}}^{\frac{1}{2} +\frac{i\epsilon ^{-1}}{4}} \frac{\epsilon \pi }{\sin (\pi \epsilon \tilde{s})} e^{r\tilde{s}} G(\epsilon , \tilde{s})\mathrm {d}\tilde{s}\nonumber \\&\quad = \frac{1}{2i\pi } \int _{\frac{1}{2} -\frac{i\epsilon ^{-1}}{4}}^{\frac{1}{2} +\frac{i\epsilon ^{-1}}{4}} \left( \frac{\epsilon \pi }{\sin (\pi \epsilon \tilde{s})} - \frac{1}{\tilde{s}}\right) e^{r\tilde{s}} G(\epsilon , \tilde{s})\mathrm {d}\tilde{s}\nonumber \\&\quad \quad +\frac{1}{2i\pi } \int _{\frac{1}{2} -\frac{i\epsilon ^{-1}}{4}}^{\frac{1}{2} +\frac{i\epsilon ^{-1}}{4}} \frac{e^{r\tilde{s}}}{\tilde{s}} ( G(\epsilon , \tilde{s})-G(0, \tilde{s}))\mathrm {d}\tilde{s} + \frac{1}{2i\pi } \int _{\frac{1}{2} -\frac{i\epsilon ^{-1}}{4}}^{\frac{1}{2} +\frac{i\epsilon ^{-1}}{4}} \frac{e^{r\tilde{s}}}{\tilde{s}}G(0, \tilde{s}) \mathrm {d}\tilde{s}.\quad \quad \quad \end{aligned}$$
(37)

The first integral in the R.H.S. of (37) can be bounded by

$$\begin{aligned} C \epsilon \frac{1}{2\pi } \int _{\frac{1}{2}\epsilon -\frac{i}{4}}^{\frac{1}{2}\epsilon +\frac{i}{4}} \vert \Gamma (1-s)\vert e^{r/2} \vert G(\epsilon , s\epsilon ^{-1})\vert \mathrm {d}s, \end{aligned}$$

which is \(\mathcal {O}(\epsilon )\). The second integral in the R.H.S. of (37) can be bounded by

$$\begin{aligned} C'\epsilon \frac{1}{2\pi } \int _{\frac{1}{2}-\frac{i\epsilon ^{-1}}{4}}^{\frac{1}{2} +\frac{i\epsilon ^{-1}}{4}} \frac{e^{r/2}}{\vert \tilde{s}\vert } \mathrm {d}\tilde{s}, \end{aligned}$$

which is \(\mathcal {O}(\epsilon \log (\epsilon ^{-1}))\). The third integral in the R.H.S. of (37) converges to a limit as \(\epsilon \) goes to zero, even though the integrand is not absolutely integrable. The limit is the improper integral

$$\begin{aligned} \frac{1}{2i\pi } \int _{1/2-i\infty }^{1/2+i\infty } \frac{e^{r\tilde{s}}}{\tilde{s}} \frac{g^{\mathrm {FPP}}(\tilde{v})}{g^{\mathrm {FPP}}(\tilde{v}+\tilde{s})} \frac{\mathrm {d}\tilde{s}}{\tilde{v}+\tilde{s}-\tilde{v}'} = K^{\mathrm {FPP}}_r(\tilde{v}, \tilde{v}'). \end{aligned}$$

It remains to show that we have made a negligible error when cutting the tails of the integral. We have

$$\begin{aligned} \frac{1}{2i\pi } \int _{\frac{1}{2} +\frac{i\epsilon ^{-1}}{4}}^{\frac{1}{2} +i\infty } \frac{\epsilon \pi }{\sin (\pi \epsilon \tilde{s})} e^{r\tilde{s}} G(\epsilon , \tilde{s})\mathrm {d}\tilde{s}&= \frac{1}{2i\pi } \int _{\frac{1}{2} \epsilon +\frac{i}{4}}^{\frac{1}{2} \epsilon +i\infty } \frac{\pi }{\sin (\pi s)} e^{rs \epsilon ^{-1}} G(\epsilon , s\epsilon ^{-1})\mathrm {d}s \nonumber \\&= \frac{1}{2i\pi } \int _{\frac{1}{2} \epsilon +\frac{i}{4}}^{\frac{1}{2} \epsilon +i\infty } \frac{\pi }{\sin (\pi s)} e^{rs \epsilon ^{-1}} (G(\epsilon , s\epsilon ^{-1})-1)\mathrm {d}s\nonumber \\&\quad + \frac{1}{2i\pi } \int _{\frac{1}{2} \epsilon +\frac{i}{4}}^{\frac{1}{2} \epsilon +i\infty } \frac{\pi }{\sin (\pi s)} e^{rs \epsilon ^{-1}} \mathrm {d}s. \end{aligned}$$
(38)

The first integral in the R.H.S. of (38) goes to zero by dominated convergence, and the second integral in the R.H.S. of (38) goes to zero by the Riemann–Lebesgue lemma. At this point we have shown that for any \(\tilde{v}, \tilde{v}'\in C_0\),

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \epsilon K^{RW}(\epsilon \tilde{v}, \epsilon \tilde{v}') = K^{\mathrm {FPP}}_r(\tilde{v}, \tilde{v}'). \end{aligned}$$

Observe now that the kernel \(K^{\mathrm {FPP}}_r(\tilde{v},\tilde{v}')\) is bounded as \(\tilde{v}, \tilde{v}'\) vary along their contour. Using Hadamard’s bound, one can bound the Fredholm series expansion of \(K^{\mathrm {FPP}}_r\) by an absolutely convergent series of integrals, and conclude by dominated convergence that under the scalings above

$$\begin{aligned} \det (I+K_u^{\mathrm {RW}})_{\mathbb {L}^2(C_0)} \xrightarrow [\epsilon \rightarrow 0]{}\det (I+K_r^{\mathrm {FPP}})_{\mathbb {L}^2(C_0)}. \end{aligned}$$

\(\square \)

6 Asymptotic analysis of the Beta RWRE

Let us first define the Tracy–Widom distribution governing the fluctuations of the extreme eigenvalues of Gaussian Hermitian random matrices. We refer to [4, Section 3.2.2] for an introduction to Fredholm determinants.

Definition 6.1

The distribution function \(F_\mathrm{GUE}(x)\) of the GUE Tracy–Widom distribution is defined by \(F_\mathrm{GUE}(x)=\det (I-K_\mathrm{Ai})_{\mathbb {L}^2(x,+\infty )}\) where \(K_\mathrm{Ai}\) is the Airy kernel,

$$\begin{aligned} K_\mathrm{Ai} (u, v) = \frac{1}{(2i\pi )^2} \int _{e^{-2i\pi /3}\infty }^{e^{2i\pi /3}\infty } \mathrm {d}w \int _{e^{-i\pi /3}\infty }^{e^{i\pi /3}\infty } \mathrm {d}z \frac{e^{z^3/3-zu}}{e^{w^3/3-wv}}\frac{1}{z-w}, \end{aligned}$$

where the contours for z and w do not intersect. There is some freedom in the choice of contours. For instance, one can choose the contour for z (resp. w) to consist of two infinite rays departing from 1 (resp. 0) in directions \(\pi /3\) and \(-\pi /3\) (resp. \(2\pi /3\) and \(-2\pi /3\)).

6.1 Fredholm determinant asymptotics

We consider a Beta RWRE \((X_t)_{t\geqslant 0}\) with parameters \(\alpha , \beta >0\). For a parameter \(\theta >0\), we define the quantity

$$\begin{aligned} x(\theta ) = \frac{\Psi _1(\theta +\alpha +\beta ) +\Psi _1(\theta )- 2 \Psi _1(\theta + \alpha )}{\Psi _1(\theta ) - \Psi _1(\theta + \alpha +\beta )} \end{aligned}$$
(39)

and the function \(I:\big (\frac{\alpha -\beta }{\alpha +\beta }, 1\big ) \rightarrow \mathbb {R}_{>0}\) such that

$$\begin{aligned} I(x(\theta ))&= \frac{\Psi _1(\theta +\alpha +\beta ) - \Psi _1(\theta + \alpha )}{\Psi _1(\theta ) - \Psi _1(\theta + \alpha +\beta )} (\Psi (\theta + \alpha +\beta )- \Psi (\theta ) )\nonumber \\&\quad + \Psi (\theta + \alpha +\beta )- \Psi (\theta +\alpha ), \end{aligned}$$
(40)

where \(\Psi \) is the digamma function (\(\Psi (z)= \Gamma '(z)/\Gamma (z)\)) and \(\Psi _1\) is the trigamma function (\(\Psi _1(z)=\Psi '(z)\)). Moreover, we define a real-valued \(\sigma (\theta )>0\) such that

$$\begin{aligned} 2\sigma (\theta )^3&= \Psi _2(\theta +\alpha ) - \Psi _2(\alpha +\beta +\theta ) \nonumber \\&\quad + \frac{\Psi _1(\alpha +\theta ) - \Psi _1(\alpha +\beta +\theta )}{\Psi _1(\theta ) - \Psi _1(\alpha +\beta +\theta )}\left( \Psi _2(\alpha +\beta +\theta ) - \Psi _2(\theta )\right) . \end{aligned}$$
(41)

The fact that we can choose \(\sigma (\theta )>0\) is proved in Lemma 6.3. A critical-point asymptotic analysis of the Fredholm determinant indicates that for all \(\theta >0\) and \(\alpha , \beta >0\),

$$\begin{aligned} \lim _{t\rightarrow \infty } \mathbb {P}\left( \frac{\log (P(t, x(\theta )t)) + I(x(\theta ))t}{t^{1/3}\sigma (\theta )} \leqslant y \right) = F_\mathrm{GUE}(y). \end{aligned}$$
(42)

However, due to increased technical challenges in the general parameter case, we presently prove rigorously only the case of Theorem 6.2, which deals with \(\alpha =\beta =1\) (i.e. when the \(B_{x,t}\) variables are distributed uniformly on (0, 1)).

When \(\alpha =\beta =1\), the expressions for \(x(\theta )\) and \(I(x(\theta ))\) simplify. We find that

$$\begin{aligned} x(\theta ) = \frac{1+2\theta }{\theta ^2+(\theta +1)^2} \end{aligned}$$

and

$$\begin{aligned} I\big (x(\theta )\big ) = \frac{1}{\theta ^2+(\theta +1)^2} , \end{aligned}$$

so that the rate function I is simply the function \(I:x \mapsto 1-\sqrt{1-x^2}\). We also find that for \(\alpha =\beta =1\),

$$\begin{aligned} \sigma (\theta )^3 = \frac{1}{\theta + 3 \theta ^2+ 4\theta ^3 + 2\theta ^4} =\frac{2(1-\sqrt{1-x^2} )^2}{\sqrt{1-x^2}} = \frac{2 I(x)^2}{1-I(x)}, \end{aligned}$$
(43)

where \(x=x(\theta )\).
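These closed forms can be cross-checked numerically against the general-parameter definitions (39)–(41), approximating the polygamma functions by finite differences of math.lgamma (a pure-Python sketch; the step sizes are ad hoc numerical choices, not part of the argument):

```python
import math

def psi(x, h=1e-6):
    # digamma via a central first difference of log-Gamma
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def psi1(x, h=1e-4):
    # trigamma via a central second difference of log-Gamma
    return (math.lgamma(x + h) - 2 * math.lgamma(x) + math.lgamma(x - h)) / h**2

def psi2(x, h=1e-3):
    # Psi_2 via a central third difference of log-Gamma
    return (math.lgamma(x + 2*h) - 2*math.lgamma(x + h)
            + 2*math.lgamma(x - h) - math.lgamma(x - 2*h)) / (2 * h**3)

alpha = beta = 1.0
theta = 0.3

# General definitions (39)-(41)
den = psi1(theta) - psi1(theta + alpha + beta)
x = (psi1(theta + alpha + beta) + psi1(theta) - 2 * psi1(theta + alpha)) / den
I = ((psi1(theta + alpha + beta) - psi1(theta + alpha)) / den
     * (psi(theta + alpha + beta) - psi(theta))
     + psi(theta + alpha + beta) - psi(theta + alpha))
sigma3 = 0.5 * (psi2(theta + alpha) - psi2(theta + alpha + beta)
                + (psi1(theta + alpha) - psi1(theta + alpha + beta)) / den
                * (psi2(theta + alpha + beta) - psi2(theta)))

# Closed forms for alpha = beta = 1
q = theta**2 + (theta + 1)**2
assert abs(x - (1 + 2*theta) / q) < 1e-4
assert abs(I - 1.0 / q) < 1e-4
assert abs(sigma3 - 1.0 / (theta + 3*theta**2 + 4*theta**3 + 2*theta**4)) < 1e-3
# Identities from (43): I(x) = 1 - sqrt(1 - x^2) and sigma^3 = 2 I^2 / (1 - I)
assert abs(I - (1 - math.sqrt(1 - x*x))) < 1e-4
assert abs(sigma3 - 2 * I * I / (1 - I)) < 1e-3
```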

Theorem 6.2

For \(0<\theta <1/2\) and \(\alpha =\beta =1\), we have that

$$\begin{aligned} \lim _{t\rightarrow \infty } \mathbb {P}\left( \frac{\log (P(t, x(\theta )t)) + I(x(\theta ))t}{t^{1/3}\sigma (\theta )} \leqslant y \right) = F_\mathrm{GUE}(y). \end{aligned}$$
(44)

The rest of this section is devoted to the proof of Theorem 6.2. Most arguments in the proof apply equally to general parameters \(\alpha , \beta \), except for the deformation of contours, which is valid only for small \(\theta \), and Lemma 6.5, which is only valid for \(\alpha =\beta =1\). We expect the result to hold for general \(\alpha , \beta , \theta \), but we do not attempt to extend it to that case.

We first observe that we do not need to invert the Laplace transform of \(P(t, x(\theta )t)\). Setting \(u=-e^{t I(x(\theta )) - t^{1/3} \sigma (\theta ) y}\), one has that

$$\begin{aligned} \lim _{t\rightarrow \infty }\mathbb {E}[ e^{u P(t,x(\theta )t)} ] = \lim _{t\rightarrow \infty } \mathbb {P}\left( \frac{\log (P(t, x(\theta ) t))+ I(x(\theta ))t}{t^{1/3}\sigma (\theta )} <y \right) . \end{aligned}$$
(45)

This convergence is justified by Lemma 4.1.39 in [4], provided that the limit is a continuous probability distribution function, and we see later that this is the case. Hence, in order to prove Theorem 6.2, one has to take the \(t\rightarrow \infty \) limit of the Fredholm determinant (6) in the statement of Theorem 2.13.

The asymptotic analysis of this Fredholm determinant proceeds by steepest descent analysis, and is very close to the analysis presented in the recent papers [2, 5, 6, 17, 24, 38], that deal with similar kernels. Let us assume for the moment that the contour \(C_0\) is a circle around 0 with very small radius. One can make the change of variables \(v+s=z\) in the kernel \(K_u^{\mathrm {RW}}\) so that, with the value of u that we choose,

$$\begin{aligned} K^{\mathrm {RW}}_u(v,v') = \frac{1}{2i\pi } \int _{1/2-i\infty }^{1/2+i\infty } \frac{\pi }{\sin (\pi (z-v))} e^{(z-v)(tI(x(\theta ))-t^{1/3}\sigma (\theta )y)}\frac{g^{\mathrm {RW}}(v)}{g^{\mathrm {RW}}(z)} \frac{\mathrm {d}z}{z - v'}, \end{aligned}$$

and the contour for z can be chosen as \(1/2+i\mathbb {R}\). The kernel can be rewritten

$$\begin{aligned} K^{\mathrm {RW}}_u(v,v')&= \frac{1}{2i\pi } \int _{1/2-i\infty }^{1/2+i\infty } \frac{\pi }{\sin (\pi (z-v))} \exp \big ( t(h(z)-h(v))\nonumber \\&\quad -t^{1/3}\sigma (\theta )y (z-v)\big ) \frac{\Gamma (v)}{\Gamma (z)}\frac{\mathrm {d}z}{z - v'}, \end{aligned}$$
(46)

where

$$\begin{aligned} h(z) = I\big (x(\theta )\big ) z + \frac{1-x(\theta )}{2}\log \left( \frac{\Gamma (\alpha +z)}{\Gamma (z)} \right) + \frac{1+x(\theta )}{2}\log \left( \frac{\Gamma (\alpha +z)}{\Gamma (\alpha +\beta +z)} \right) . \end{aligned}$$

The function h governs the asymptotic behaviour of the Fredholm determinant of \(K_u^{\mathrm {RW}}\). The principle of the steepest-descent method is to deform the integration contours (both the contour in the definition of \(K_u^{\mathrm {RW}}\) and the \(\mathbb {L}^2\) contour) so that they pass through a critical point of the function h. Then one needs to prove that only the integration in a neighbourhood of the critical point contributes in the limit, where all terms can be approximated by their Taylor expansions around the critical point.

The first derivatives of h are

$$\begin{aligned} h'(z)&= I(x(\theta )) + \Psi (\alpha +z) - \frac{1}{2} \Psi (z) -\frac{1}{2} \Psi (\alpha +\beta +z) \\&\quad +\frac{x(\theta )}{2}(\Psi (z) - \Psi (\alpha +\beta + z) ), \end{aligned}$$

and

$$\begin{aligned} h''(z)&= \Psi _1(\alpha +z) - \frac{1}{2} \Psi _1(z) -\frac{1}{2} \Psi _1(\alpha +\beta +z) \nonumber \\&\quad +\frac{x(\theta )}{2}(\Psi _1(z) - \Psi _1(\alpha +\beta + z) ). \end{aligned}$$

One readily sees that the expressions for \(x(\theta )\) and \(I(x(\theta ))\) in (39) and (40) are precisely chosen so that \(h'(\theta )=h''(\theta )=0\). Let us give an expression of \(h'\) in terms of \(\theta \):

$$\begin{aligned} h'(z)&= \Psi (z+\alpha ) - \Psi (\alpha +\beta +z) +\frac{\Psi _1(\alpha +\theta ) - \Psi _1(\alpha +\beta +\theta )}{\Psi _1(\theta ) - \Psi _1(\alpha +\beta +\theta )} ( \Psi (\alpha +\beta +z) - \Psi (z)) \nonumber \\&\quad - \left( \Psi (\theta +\alpha ) - \Psi (\alpha +\beta +\theta ) +\frac{\Psi _1(\alpha +\theta ) - \Psi _1(\alpha +\beta +\theta )}{\Psi _1(\theta ) - \Psi _1(\alpha +\beta +\theta )}( \Psi (\alpha +\beta +\theta ) - \Psi (\theta )) \right) . \end{aligned}$$
(47)

Expressions are much simpler in the case \(\alpha =\beta =1\). In that case we have

$$\begin{aligned} h'(z)&= \frac{1}{\theta +1}-\frac{1}{z+1} + \frac{1}{1+ \left( \frac{\theta +1}{\theta } \right) ^2} \left( \frac{2z+1}{z(z+1)}- \frac{2\theta +1}{\theta (\theta +1)} \right) ,\nonumber \\&= \frac{(\theta -z)^2}{z(1+z)(1+2\theta +2 \theta ^2)}. \end{aligned}$$
(48)
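The algebraic simplification in (48) is easy to verify numerically (a pure-Python sketch; the test points are arbitrary):

```python
def hprime_first(z, theta):
    # first line of (48)
    c = 1.0 / (1.0 + ((theta + 1) / theta) ** 2)
    return (1/(theta + 1) - 1/(z + 1)
            + c * ((2*z + 1) / (z * (z + 1)) - (2*theta + 1) / (theta * (theta + 1))))

def hprime_second(z, theta):
    # second line of (48)
    return (theta - z) ** 2 / (z * (1 + z) * (1 + 2*theta + 2*theta**2))

theta = 0.3
for z in (0.1, 0.7, 2.0):
    assert abs(hprime_first(z, theta) - hprime_second(z, theta)) < 1e-12
# double root at z = theta, consistent with h'(theta) = h''(theta) = 0
assert hprime_second(theta, theta) == 0.0
```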

In order to understand the behaviour of \({{\mathrm {Re}}}[h]\) around the critical point \(\theta \), we also need the sign of the third derivative of h.

Lemma 6.3

For any \(\alpha , \beta , \theta >0\), we have that \(h'''(\theta )>0 \).

Lemma 6.3 is proved in Sect. 6.2.

By the definition of \(\sigma (\theta )\) in (41), \(\sigma (\theta )=\left( \frac{h'''(\theta )}{2}\right) ^{1/3}\). Then, using Taylor expansion, we have that for z in a neighbourhood of \(\theta \),

$$\begin{aligned} h(z) - h(\theta ) \approx \frac{( \sigma (\theta )(z-\theta ))^3}{3}. \end{aligned}$$
(49)
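For \(\alpha =\beta =1\), the Gamma ratios in h collapse (\(\Gamma (1+z)/\Gamma (z)=z\) and \(\Gamma (1+z)/\Gamma (2+z)=1/(1+z)\)), so \(h(z) = I(x(\theta ))z + \frac{1-x(\theta )}{2}\log z - \frac{1+x(\theta )}{2}\log (1+z)\), and the cubic approximation (49) can be tested numerically (an illustrative pure-Python sketch):

```python
import math

theta = 0.3
q = theta**2 + (theta + 1)**2             # = 2 theta^2 + 2 theta + 1
I = 1.0 / q
x = (1 + 2 * theta) / q
sigma3 = 1.0 / (theta * (1 + theta) * q)  # sigma(theta)^3 = h'''(theta)/2

def h(z):
    # h for alpha = beta = 1
    return I * z + 0.5 * (1 - x) * math.log(z) - 0.5 * (1 + x) * math.log(1 + z)

for delta in (0.05, 0.02, 0.01):
    lhs = h(theta + delta) - h(theta)
    rhs = sigma3 * delta**3 / 3.0
    # the ratio tends to 1 as delta -> 0 (the error in (49) is of order delta^4)
    print(delta, lhs / rhs)
```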

We now deform the integration contour in (46) and the Fredholm determinant contour, which was initially a small circle around 0. Let \(\mathcal {D}_{\theta }\) be the vertical line \(\mathcal {D}_{\theta } = \lbrace \theta + iy :y\in \mathbb {R}\rbrace \), and \(\mathcal {C}_{\theta }\) be the circle centred at 0 with radius \(\theta \). This deformation of contours leaves the Fredholm determinant \(\det (I+K_u^{\mathrm {RW}})\) unchanged only if

  • All the poles of the inverse sine in (46), corresponding to \(z-v\in \mathbb {Z}_{>0}\), stay on the right of \(\mathcal {D}_{\theta }\).

  • We do not cross the pole of h at \(-\alpha -\beta \) when deforming the \(\mathbb {L}^2\) contour.

Hence, we will assume that \(\theta <\min (\alpha +\beta , \frac{1}{2})\) so that the two above conditions are satisfied.

Lemma 6.4

For any parameters \(\alpha , \beta >0\), and \(\theta >0\), the contour \(\mathcal {D}_{\theta }\) is steep-descent for the function \({{\mathrm {Re}}}[h]\) in the sense that \(y\mapsto {{\mathrm {Re}}}[h(\theta +iy)]\) is decreasing for y positive and increasing for y negative.

Lemma 6.4 is proved in Sect. 6.2. The step which prevents us from proving Theorem 6.2 for arbitrary parameters \(\alpha , \beta >0\) is establishing the steep-descent property of the contour \(\mathcal {C}_{\theta }\).

Lemma 6.5

Assume \(\alpha =\beta =1\). Then the contour \(\mathcal {C}_{\theta }\) is steep descent for the function \(-{{\mathrm {Re}}}[h]\), in the sense that \(\phi \mapsto {{\mathrm {Re}}}[h(\theta e^{i\phi })]\) is increasing for \(\phi \in (0, \pi )\) and decreasing for \(\phi \in (-\pi , 0)\).

Lemma 6.5 is proved in Sect. 6.2. Proving Lemma 6.5 for arbitrary parameters \(\alpha , \beta \) turns out to be computationally difficult, and we do not pursue that here.
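The steep-descent properties of Lemmas 6.4 and 6.5 can at least be checked numerically for \(\alpha =\beta =1\) (an illustration only, using the explicit form \(h(z) = I z + \frac{1-x}{2}\log z - \frac{1+x}{2}\log (1+z)\) valid in that case; the sample points are arbitrary):

```python
import cmath

theta = 0.3
q = theta**2 + (theta + 1)**2
I, x = 1.0 / q, (1 + 2 * theta) / q

def re_h(z):
    # Re h(z) for alpha = beta = 1, with the principal branch of the logarithm
    return (I * z + 0.5 * (1 - x) * cmath.log(z)
            - 0.5 * (1 + x) * cmath.log(1 + z)).real

# Lemma 6.4: Re h(theta + iy) decreases as y > 0 grows along D_theta
vals_D = [re_h(theta + 1j * y) for y in (0.0, 0.5, 1.0, 3.0)]
assert all(a > b for a, b in zip(vals_D, vals_D[1:]))

# Lemma 6.5: Re h(theta * e^{i phi}) increases in phi on (0, pi) along C_theta
vals_C = [re_h(theta * cmath.exp(1j * phi)) for phi in (0.1, 1.0, 2.0, 3.0)]
assert all(a < b for a, b in zip(vals_C, vals_C[1:]))
```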

In the rest of this section, although the proofs are quite general and do not depend on the value of parameters, we assume that \(\alpha =\beta =1\) so that we can use Lemma 6.5. Let us show that the only part of the contours that contributes to the limit of the Fredholm determinant when t tends to infinity is a neighbourhood of the critical point \(\theta \).

Proposition 6.6

Let \(B(\theta , \epsilon )\) be the ball of radius \(\epsilon \) centred at \(\theta \). We denote by \(\mathcal {C}_{\theta }^{\epsilon }\) (resp. \(\mathcal {D}_{\theta }^{\epsilon }\)) the part of the contour \(\mathcal {C}_{\theta }\) (resp. \(\mathcal {D}_{\theta }\)) inside the ball \(B(\theta , \epsilon )\). Then, for any \(\epsilon >0\),

$$\begin{aligned} \lim _{t\rightarrow \infty } \det (I+K^{\mathrm {RW}}_u)_{\mathbb {L}^2(\mathcal {C}_{\theta })} = \lim _{t\rightarrow \infty } \det (I+K^{\mathrm {RW}}_{y, \epsilon })_{\mathbb {L}^2(\mathcal {C}_{\theta }^{\epsilon })} \end{aligned}$$

where \(K^{\mathrm {RW}}_{y, \epsilon }\) is defined by the integral kernel

$$\begin{aligned} K^{\mathrm {RW}}_{y, \epsilon }(v,v')&= \frac{1}{2i\pi } \int _{\mathcal {D}_{\theta }^{\epsilon }} \frac{\pi }{\sin (\pi (z-v))} \exp ( t(h(z)-h(v)) -t^{1/3}\sigma (\theta )y (z-v))\nonumber \\&\quad \times \frac{\Gamma (v)}{\Gamma (z)}\frac{\mathrm {d}z}{z - v'}. \end{aligned}$$
(50)

Proof

By Lemmas 6.4 and 6.5, there exists a constant \(C>0\) such that if \(v\in \mathcal {C}_{\theta }\) and \(z\in \mathcal {D}_{\theta }{\setminus } \mathcal {D}_{\theta }^{\epsilon }\), then

$$\begin{aligned} {{\mathrm {Re}}}[h(z)-h(v)]<-C, \end{aligned}$$

and consequently

$$\begin{aligned} \exp ( t(h(z)-h(v)) -t^{1/3}\sigma (\theta )y (z-v))\frac{\mathrm {d}z}{z - v'} \xrightarrow [t\rightarrow \infty ]{} 0. \end{aligned}$$

Since \(\frac{\pi }{\sin (\pi (z-v))\Gamma (z)}\) has exponential decay in the imaginary part of z, the contribution of the integration over \(\mathcal {D}_{\theta }{\setminus } \mathcal {D}_{\theta }^{\epsilon }\) is negligible (by dominated convergence). Thus, \(K^{\mathrm {RW}}_{y}(v,v')\) and \(K^{\mathrm {RW}}_{y, \epsilon }(v,v')\) have the same limit when t goes to infinity.

By Lemmas 6.4 and 6.5, there exists another constant \(C'>0\) such that if \(v\in \mathcal {C}_{\theta }{\setminus }\mathcal {C}_{\theta }^{\epsilon }\) and \(z\in \mathcal {D}_{\theta }\), then

$$\begin{aligned} {{\mathrm {Re}}}[h(z)-h(v)]<-C'. \end{aligned}$$

Consider the Fredholm determinant expansion

$$\begin{aligned} \det (I+K^{\mathrm {RW}}_{u}) = 1 + \sum _{n=1}^{\infty } \frac{1}{n!} \int \dots \int \det (K^{\mathrm {RW}}_{u}(w_i, w_j))_{i,j=1}^{n} \mathrm {d}w_1\dots \mathrm {d}w_n. \end{aligned}$$

The \(k{\mathrm{th}}\) term can be decomposed as the sum of the integration over \((\mathcal {C}_{\theta }^{\epsilon })^k\) plus the integration over \((\mathcal {C}_{\theta })^k{\setminus }(\mathcal {C}_{\theta }^{\epsilon })^k\). The second contribution goes to zero since a factor \(e^{-C't}\) can be extracted from the integrand. Finally, the proposition is proved using dominated convergence again on the Fredholm series expansion, which is absolutely summable by Hadamard's bound. \(\square \)

Let us rescale the variables around \(\theta \) by the change of variables

$$\begin{aligned} z=\theta + t^{-1/3}\tilde{z},\quad v=\theta + t^{-1/3}\tilde{v}, \quad v'=\theta + t^{-1/3}\tilde{v}'. \end{aligned}$$

The Fredholm determinant of \(K^{\mathrm {RW}}_{y, \epsilon }\) on the contour \(\mathcal {C}_{\theta }^{\epsilon }\) equals the Fredholm determinant of the rescaled kernel

$$\begin{aligned} K^t_{y, \epsilon }(\tilde{v}, \tilde{v'}) = t^{-1/3}K^{\mathrm {RW}}_{y, \epsilon }(\theta + t^{-1/3}\tilde{v}, \theta + t^{-1/3}\tilde{v}') \end{aligned}$$

acting on the contour \(\mathcal {C}_{\theta }^{t^{1/3}\epsilon }\).

It is more convenient to change the contours once more. For \(L\in \mathbb {R}_{>0}\), define the contour

$$\begin{aligned} \mathcal {C}^L := \lbrace \vert y \vert e^{i(\pi -\phi ) \cdot \mathrm{sgn}(y)} :y\in [-L,L]\rbrace , \end{aligned}$$
(51)

where \(\phi \) is some angle \(\phi \in (\pi /6, \pi /2)\) to be chosen later. We also set

$$\begin{aligned} \mathcal {C} := \lbrace \vert y \vert e^{i(\pi -\phi ) \cdot \mathrm{sgn}(y)} :y \in \mathbb {R}\rbrace . \end{aligned}$$
(52)

The contour \(\mathcal {C}_{\theta }^{\epsilon }\) is an arc of a circle and crosses \(\theta \) vertically. For \(\epsilon \) small enough, one can replace the contour \(\mathcal {C}_{\theta }^{\epsilon }\) by \(\mathcal {C}^L\) without changing the Fredholm determinants. The values of L and \(\phi \) have to be chosen so that the endpoints of the contours coincide.

We define the rescaled contour for the variable \(\tilde{z}\) by

$$\begin{aligned} \mathcal {D}^L := \left\{ i y :y\in [-L,L]\right\} , \end{aligned}$$

and we set \(\mathcal {D}:=i\mathbb {R}\).

Proposition 6.7

We have that

$$\begin{aligned} \lim _{t\rightarrow \infty } \det (I+K^{\mathrm {RW}}_{y, \epsilon })_{\mathbb {L}^2(\mathcal {C}_{\theta }^{\epsilon })}= \det (I-K_y)_{\mathbb {L}^2(\mathcal {C})}, \end{aligned}$$

where \(K_y\) is defined by its integral kernel

$$\begin{aligned} K_y(w,w') = \frac{1}{2i\pi } \int _{\infty e^{-i\pi /3}}^{\infty e^{i\pi /3}} \frac{\mathrm {d}z}{(z-w')(w-z)} \frac{e^{z^3/3-yz} }{e^{w^3/3-yw}} \end{aligned}$$

where the contour for z is a wedge-shaped contour consisting of two rays going to infinity in the directions \(e^{-i\pi /3}\) and \(e^{i\pi /3}\), chosen so that it does not intersect \(\mathcal {C}\).

The proof of Proposition 6.7 follows the lines of [24, Proposition 6.4] (see also [5, Proposition 6.13]).

Proof

We take the limit of the rescaled kernel \(K^t_{y, \epsilon }(\tilde{v}, \tilde{v}')\). Let us first examine the pointwise convergence. Under the scalings above,

$$\begin{aligned} \frac{t^{-1/3}\pi }{\sin (\pi (z-v))}&\xrightarrow [t\rightarrow \infty ]{} \frac{1}{\tilde{z}-\tilde{v}},\\ \frac{\mathrm {d}z}{z-v'}&\xrightarrow [t\rightarrow \infty ]{} \frac{\mathrm {d}\tilde{z}}{\tilde{z}-\tilde{v}'},\\ \frac{\Gamma (v)}{\Gamma (z)}&\xrightarrow [t\rightarrow \infty ]{}1,\\ t(h(z)-h(v))&\xrightarrow [t\rightarrow \infty ]{} \frac{\sigma (\theta )^3}{3}(\tilde{z}^3 - \tilde{v}^3). \end{aligned}$$

Now we justify that one can take the pointwise limit. We take \(\mathcal {D}^{\epsilon t^{1/3}}\) as the integration contour for the \(\tilde{z}\) variable. Since \(\tilde{z}\) is pure imaginary, \(\exp (\tilde{z}^3/3 - \tilde{z} y \sigma (\theta ))\) has modulus one. Moreover for fixed \(\tilde{v}\) and \(\tilde{v}'\), we can find a constant \(C'''>0\) such that

$$\begin{aligned} \left| \frac{t^{-1/3}\pi }{\sin (\pi (z-v))} \frac{1}{z-v'}\right| <\frac{C'''}{{{\mathrm {Im}}}(\tilde{z})^2}. \end{aligned}$$

This means that the integrand of \(K^t_{y, \epsilon }(\tilde{v}, \tilde{v}')\) has quadratic decay, which is enough to apply dominated convergence. It follows that

$$\begin{aligned} \lim _{t\rightarrow \infty } K^t_{y, \epsilon }(\tilde{v}, \tilde{v'}) = \frac{1}{2i\pi } \int _{\mathcal {D}^{\infty }} \frac{e^{\tilde{z}^3\sigma (\theta )^3/3- \tilde{z}y\sigma (\theta ) }}{e^{\tilde{v}^3\sigma (\theta )^3/3- \tilde{v}y\sigma (\theta ) }}\frac{1}{\tilde{z}-\tilde{v}}\frac{\mathrm {d}\tilde{z}}{\tilde{z}-\tilde{v}'}. \end{aligned}$$

Now we need to prove that one can exchange the limit with the Fredholm determinant. By Taylor expansion, there exists a constant \(C>0\) such that for \(\vert v-\theta \vert <\epsilon \),

$$\begin{aligned} \left| t \cdot h(v) - \frac{\sigma (\theta )^3}{3}(\tilde{v})^3\right| <Ct(v-\theta )^4. \end{aligned}$$
(53)

Since \(\vert v-\theta \vert <\epsilon \), we have that \(Ct(v-\theta )^4<C\epsilon \vert \tilde{v}\vert ^3\). Hence, for \(\epsilon \) small enough, one can factor out \(\exp (-C'\vert \tilde{v}\vert ^3)\) for some \(C'>0\). Using the same bound as before for the other factors in the integrand of \(K^t_{y, \epsilon }\), there exist constants \(C', C''>0\) such that

$$\begin{aligned} \vert K^t_{y, \epsilon }(\tilde{v}, \tilde{v}')\vert < C'' \exp (-C' \vert \tilde{v}\vert ^3). \end{aligned}$$

As \(\exp (- \tilde{v}^3)\) decays exponentially in the directions \(\infty e^{\pm i\phi }\) for \(\phi \in (\pi /2, 5\pi /6)\), for \(\epsilon \) small enough the integrand of the rescaled kernel decays exponentially and we can apply dominated convergence. Now recall that we can take \(\epsilon \) arbitrarily small in Proposition 6.6. Thus, the Fredholm expansion of \(K^t\) is integrable and summable (using Hadamard’s bound), and dominated convergence implies that the limit of \( \det (I+K^{\mathrm {RW}}_{y, \epsilon })_{\mathbb {L}^2(\mathcal {C}_{\theta }^{\epsilon })}\) is the Fredholm determinant of an operator \(\tilde{K}_y\) acting on \(\mathcal {C}\), defined by the integral kernel

$$\begin{aligned} \tilde{K}_y(\tilde{v}, \tilde{v}') =\frac{1}{2i\pi } \int _{\mathcal {D}^{\infty }} \frac{e^{\tilde{z}^3\sigma (\theta )^3/3- \tilde{z}y\sigma (\theta ) }}{e^{\tilde{v}^3\sigma (\theta )^3/3- \tilde{v}y\sigma (\theta ) }}\frac{1}{\tilde{z}-\tilde{v}}\frac{\mathrm {d}\tilde{z}}{\tilde{z}-\tilde{v}'}. \end{aligned}$$

Since the integrand of \(\tilde{K}_y\) has quadratic decay on the tails of the contour \(\mathcal {D}^{\infty }\), one can freely deform the z-contour so that it goes from \(\infty e^{-i\pi /3}\) to \(\infty e^{i\pi /3}\) without intersecting \(\mathcal {C}\). Finally, by another change of variables eliminating the dependency on \(\sigma (\theta )\) in the integrand, one recovers the Fredholm determinant of \(K_y\) as claimed. \(\square \)

Using the \(\det (I+AB)=\det (I+BA)\) trick, one can reformulate the Fredholm determinant of \(K_y\) as the Fredholm determinant of an operator on \(\mathbb {L}^2(y, \infty )\) (see e.g. [6, Lemma 8.6]). It turns out that

$$\begin{aligned} \det (I-K_y)_{\mathbb {L}^2(\mathcal {C})} =\det (I-K_\mathrm{Ai})_{\mathbb {L}^2(y,+\infty )}, \end{aligned}$$

and this concludes the proof of Theorem 6.2.

6.2 Precise estimates and steep-descent properties

The following series representations will be useful:

$$\begin{aligned} \Psi (z)-\Psi (w) = \sum _{n=0}^{\infty } \frac{z-w}{(n+z)(n+w)}, \end{aligned}$$
(54)

which is valid for z and w away from the non-positive integers. We also use

$$\begin{aligned} \Psi _1(z)- \Psi _1(w) = \sum _{n=0}^{\infty }\left[ \frac{1}{(n+z)^2} - \frac{1}{(n+w)^2}\right] . \end{aligned}$$
(55)

Proof of Lemma 6.3

Given the expression (47) for the first derivative of h, we have

$$\begin{aligned} h'''(\theta )= & {} \Psi _2(\theta +\alpha ) - \Psi _2(\alpha +\beta +\theta )\nonumber \\&+ \frac{\Psi _1(\alpha +\theta ) - \Psi _1(\alpha +\beta +\theta )}{\Psi _1(\theta ) - \Psi _1(\alpha +\beta +\theta )}( \Psi _2(\alpha +\beta +\theta ) - \Psi _2(\theta )), \end{aligned}$$
(56)

where \(\Psi _2\) is the polygamma function of order two (\(\Psi _2(z)=\frac{\mathrm {d}}{\mathrm {d}z}\Psi _1(z)\)). Hence \(h'''(\theta )>0\) is equivalent to

$$\begin{aligned}&(\Psi _2(\theta +\alpha +\beta ) - \Psi _2(\theta +\alpha ))(\Psi _1(\theta +\alpha +\beta ) -\Psi _1(\theta ) )\\&\quad - (\Psi _1(\theta +\alpha +\beta ) - \Psi _1(\theta +\alpha ))(\Psi _2(\theta +\alpha +\beta ) -\Psi _2(\theta ) ) >0, \end{aligned}$$

which is equivalent to

$$\begin{aligned} \frac{\Psi _2(\theta +\alpha +\beta ) - \Psi _2(\theta +\alpha )}{\Psi _1(\theta +\alpha +\beta ) - \Psi _1(\theta +\alpha )}>\frac{\Psi _2(\theta +\alpha +\beta ) -\Psi _2(\theta )}{\Psi _1(\theta +\alpha +\beta ) -\Psi _1(\theta ) }. \end{aligned}$$
(57)

The trigamma function \(\Psi _1\) is positive and decreasing on \(\mathbb {R}_{>0}\). The function \(\Psi _2\) is negative and increasing. One recognizes in (57) difference quotients for the function \(\Psi _2 \circ \Psi _1^{-1}\). Thus, it is enough to prove that \( \Psi _2 \circ \Psi _1^{-1}\) is strictly concave. The derivative of \( \Psi _2 \circ \Psi _1^{-1}\) is \( (\Psi _3\circ \Psi _1^{-1}) / (\Psi _2\circ \Psi _1^{-1})\). Since \(\Psi _1\) is decreasing, it is enough to show that \(\Psi _3/\Psi _2\) is increasing, which, by taking the derivative, is equivalent to \(\Psi _4 \Psi _2 >\Psi _3 \Psi _3\).

For all \(n\geqslant 1\), one has the integral representation

$$\begin{aligned} \Psi _n(x) = - \int _0^{\infty } \frac{(-t)^n e^{-xt}}{1-e^{-t}} \mathrm {d}t. \end{aligned}$$
(58)

Thus for \(x>0\), \(\Psi _4(x) \Psi _2(x) >\Psi _3(x) \Psi _3(x)\) is equivalent to

$$\begin{aligned} \int _0^{\infty } \int _0^{\infty }\frac{e^{-xt-xu}}{(1-e^{-t})(1-e^{-u})}t^3 u^3 \,\mathrm {d}t\,\mathrm {d}u < \int _0^{\infty } \int _0^{\infty } \frac{e^{-xt-xu}}{(1-e^{-t})(1-e^{-u})}t^2 u^4 \,\mathrm {d}t\,\mathrm {d}u. \end{aligned}$$

By symmetrizing the right-hand-side, the inequality is equivalent to

$$\begin{aligned} \int _0^{\infty } \int _0^{\infty } \frac{e^{-xt-xu} t^2 u^2 }{(1-e^{-t})(1-e^{-u})}tu \,\mathrm {d}t\,\mathrm {d}u < \int _0^{\infty } \int _0^{\infty } \frac{e^{-xt-xu} t^2 u^2}{(1-e^{-t})(1-e^{-u})}\frac{t^2+u^2}{2} \,\mathrm {d}t\,\mathrm {d}u, \end{aligned}$$

which is true for all \(x>0\) since \(tu\leqslant \frac{t^2+u^2}{2}\), with equality only when \(t=u\). \(\square \)
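As a purely numerical sanity check, outside the argument of the paper, the inequality \(\Psi _4 \Psi _2 >\Psi _3 \Psi _3\) can be tested directly by computing the polygamma functions from the series \(\Psi _n(x) = (-1)^{n+1} n! \sum _{k\geqslant 0}(k+x)^{-n-1}\); the following sketch uses only the Python standard library, with a hard-coded truncation of the series:

```python
import math

def polygamma(n, x, terms=20000):
    """Polygamma function of order n >= 1 via the truncated series
    Psi_n(x) = (-1)^(n+1) * n! * sum_{k>=0} 1/(k+x)^(n+1)."""
    s = sum(1.0 / (k + x) ** (n + 1) for k in range(terms))
    return (-1) ** (n + 1) * math.factorial(n) * s

# Check Psi_4(x) Psi_2(x) > Psi_3(x)^2 on a grid of x > 0.
for x in [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]:
    lhs = polygamma(4, x) * polygamma(2, x)
    rhs = polygamma(3, x) ** 2
    assert lhs > rhs, (x, lhs, rhs)
```

The margins are large (for instance at \(x=1\) the two sides are roughly 59.8 and 42.2), so the crude truncation is harmless.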

Proof of Lemma 6.4

By symmetry, it is enough to treat only the case \(y>0\). Hence we show that if \(y>0\), then \({{\mathrm {Im}}}[h'(\theta +iy)]>0.\) Using (47), \({{\mathrm {Im}}}[h'(\theta +iy)]>0\) is equivalent to

$$\begin{aligned}&( \Psi _1(\theta ) - \Psi _1(\alpha +\beta +\theta )) {{\mathrm {Im}}}[\Psi (\alpha + \theta +iy)-\Psi (\alpha +\beta +\theta +iy)]\nonumber \\&\quad + ( \Psi _1(\alpha +\theta ) - \Psi _1(\alpha +\beta +\theta )) {{\mathrm {Im}}}[\Psi (\alpha +\beta + \theta +iy)-\Psi (\theta +iy)]>0.\quad \quad \end{aligned}$$
(59)

Using the series representations (54), Eq. (59) is equivalent to

$$\begin{aligned}&( \Psi _1(\theta ) - \Psi _1(\alpha +\beta +\theta )){{\mathrm {Im}}}\sum _{m=0}^{\infty } \frac{-\beta }{(m+\theta +\alpha + iy)(m+\theta +\alpha +\beta +iy)} \nonumber \\&\quad + ( \Psi _1(\alpha +\theta ) - \Psi _1(\alpha +\beta +\theta )){{\mathrm {Im}}}\sum _{m=0}^{\infty } \frac{\alpha +\beta }{(m+\theta + iy)(m+\theta +\alpha +\beta +iy)}>0,\nonumber \\ \end{aligned}$$
(60)

We have that

$$\begin{aligned}&{{\mathrm {Im}}}\left[ \frac{-\beta }{(m+\theta +\alpha + iy)(m+\theta +\alpha +\beta +iy)}\right] \\&\quad =y\left( \frac{1}{(m+\theta +\alpha )^2+y^2} - \frac{1}{(m+\theta +\alpha +\beta )^2+y^2}\right) \end{aligned}$$

and

$$\begin{aligned}&{{\mathrm {Im}}}\left[ \frac{-(\alpha +\beta )}{(m+\theta + iy)(m+\theta +\alpha +\beta +iy)}\right] \\&\quad = y\left( \frac{1}{(m+\theta )^2+y^2} - \frac{1}{(m+\theta +\alpha +\beta )^2+y^2}\right) . \end{aligned}$$

It yields that (60) can be rewritten as

$$\begin{aligned}&( \Psi _1(\theta ) - \Psi _1(\alpha +\beta +\theta ))(\Phi (\theta +\alpha )- \Phi (\theta +\alpha +\beta )) \nonumber \\&\quad >( \Psi _1(\theta +\alpha ) - \Psi _1(\alpha +\beta +\theta ))(\Phi (\theta )- \Phi (\theta +\alpha +\beta )), \end{aligned}$$
(61)

where

$$\begin{aligned} \Phi (x) = \sum _{n\geqslant 0} \frac{1}{(n+x)^2+y^2} . \end{aligned}$$

The inequality (61) is equivalent to

$$\begin{aligned} \frac{ \Psi _1(\theta ) - \Psi _1(\theta +\alpha )}{\Phi (\theta ) - \Phi (\theta +\alpha )}> \frac{ \Psi _1(\theta +\alpha ) - \Psi _1(\theta +\alpha +\beta )}{\Phi (\theta +\alpha ) - \Phi (\theta +\alpha +\beta )}. \end{aligned}$$
(62)

Using Cauchy’s mean value theorem, there exist \(\theta _1\in (\theta , \theta +\alpha )\) and \(\theta _2\in (\theta +\alpha , \theta +\alpha +\beta )\) such that (62) is equivalent to

$$\begin{aligned} \frac{\Psi _2(\theta _1)}{\Phi '(\theta _1)} >\frac{\Psi _2(\theta _2)}{\Phi '(\theta _2)}. \end{aligned}$$

Finally, this last inequality is always true for \(\theta _1<\theta _2\) since we have the series of equivalences

$$\begin{aligned} \frac{\Psi _2(\theta _1)}{\Phi '(\theta _1)}> \frac{\Psi _2(\theta _2)}{\Phi '(\theta _2)}&\Leftrightarrow \Psi _2(\theta _1)\Phi '(\theta _2)> \Psi _2(\theta _2)\Phi '(\theta _1) \nonumber \\&\Leftrightarrow \sum _{n=0}^{\infty } \frac{2}{(n+\theta _1)^3} \sum _{m=0}^{\infty }\frac{2(m+\theta _2)}{((m+\theta _2)^2+y^2)^2}\nonumber \\&> \sum _{n=0}^{\infty } \frac{2}{(n+\theta _2)^3} \sum _{m=0}^{\infty }\frac{2(m+\theta _1)}{((m+\theta _1)^2+y^2)^2} \nonumber \\&\Leftrightarrow \sum _{n,m=0}^{\infty } \frac{1}{(n+\theta _1)^3(m+\theta _2)^3} \frac{1}{1+\frac{2y^2}{(m+ \theta _2)^2}+\frac{y^4}{(m+\theta _2)^4}} \nonumber \\&> \sum _{n,m=0}^{\infty } \frac{1}{(n+\theta _2)^3(m+\theta _1)^3} \frac{1}{1+\frac{2y^2}{(m+ \theta _1)^2}+\frac{y^4}{(m+\theta _1)^4}}. \end{aligned}$$
(63)

The inequality (63) is satisfied because \(\theta _1<\theta _2\). \(\square \)
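As an illustration (not part of the proof), the key inequality (62) can be sanity-checked numerically by truncating the defining series of \(\Psi _1\) and of \(\Phi \); the parameter values below are arbitrary choices:

```python
def trigamma(x, terms=50000):
    """Psi_1(x) = sum_{k>=0} 1/(k+x)^2, truncated."""
    return sum(1.0 / (k + x) ** 2 for k in range(terms))

def Phi(x, y, terms=50000):
    """Phi(x) = sum_{k>=0} 1/((k+x)^2 + y^2), as in the proof of Lemma 6.4."""
    return sum(1.0 / ((k + x) ** 2 + y * y) for k in range(terms))

def check_62(theta, alpha, beta, y):
    """Evaluate both sides of inequality (62) and compare them."""
    lhs = (trigamma(theta) - trigamma(theta + alpha)) / \
          (Phi(theta, y) - Phi(theta + alpha, y))
    rhs = (trigamma(theta + alpha) - trigamma(theta + alpha + beta)) / \
          (Phi(theta + alpha, y) - Phi(theta + alpha + beta, y))
    return lhs > rhs

# A few sample values of (theta, alpha, beta, y)
for params in [(0.3, 1.0, 1.0, 0.5), (0.7, 0.5, 2.0, 1.5), (1.2, 2.0, 0.3, 3.0)]:
    assert check_62(*params)
```

Truncation errors largely cancel in the differences, so a moderate number of terms suffices.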

Proof of Lemma 6.5

We give the proof in the case \(\alpha =\beta =1\). We have that

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}\phi } {{\mathrm {Re}}}[h(\theta e^{i\phi })] = {{\mathrm {Re}}}[i\theta e^{i\phi } h'(\theta e^{i\phi })] . \end{aligned}$$

Using formula (48), we have

$$\begin{aligned} h'(\theta e^{i\phi }) = \frac{\theta (1-e^{i\phi })^2}{e^{i\phi }(\theta e^{i\phi }+1)((\theta +1)^2+\theta ^2)}. \end{aligned}$$

We have to show that for any \(\phi \in (0, \pi )\), \({{\mathrm {Re}}}[i\theta e^{i\phi } h'(\theta e^{i\phi })]>0\). We may discard the factor \(\theta /((\theta +1)^2+\theta ^2)\), which is positive. Thus, we have to show that

$$\begin{aligned} {{\mathrm {Im}}}\left[ \frac{(1-e^{i\phi })^2}{(\theta e^{i\phi } +1)}\right] <0. \end{aligned}$$

One can see that the inequality is equivalent to

$$\begin{aligned} 2 \sin (\phi )(\cos (\phi )-1) <0, \end{aligned}$$

which is always true for \(\phi \in (0,\pi )\). \(\square \)
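The last equivalence can also be observed numerically: multiplying the quotient by the (positive) squared modulus of the denominator, its imaginary part equals \(2\sin (\phi )(\cos (\phi )-1)\), independently of \(\theta \). A short sketch, purely illustrative:

```python
import cmath
import math

def im_quotient_numerator(theta, phi):
    """Im[(1 - e^{i phi})^2 * conj(theta e^{i phi} + 1)], which has the same
    sign as Im[(1 - e^{i phi})^2 / (theta e^{i phi} + 1)]."""
    num = (1 - cmath.exp(1j * phi)) ** 2
    den = theta * cmath.exp(1j * phi) + 1
    return (num * den.conjugate()).imag

for theta in [0.1, 0.5, 1.0, 3.0]:
    for k in range(1, 20):
        phi = math.pi * k / 20
        val = im_quotient_numerator(theta, phi)
        # matches 2 sin(phi)(cos(phi) - 1), with no dependence on theta
        assert abs(val - 2 * math.sin(phi) * (math.cos(phi) - 1)) < 1e-12
        assert val < 0
```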

6.3 Relation to extreme value theory

Let us now state a corollary of Theorem 6.2. Let \((X^{(1)}_t)_{t\in \mathbb {Z}_{\geqslant 0}}, \dots , (X^{(N)}_t)_{t\in \mathbb {Z}_{\geqslant 0}}\) be N independent random walks drawn in the same Beta environment (Definition 2.1). We denote by \(\mathcal {P}\) and \(\mathcal {E}\) the measure and expectation associated with the probability space which is the product of the environment probability space and the N random walks probability space (for f a function of the environment and the N random walk paths, we have \(\mathcal {E}[ f] = \mathbb {E}[ \mathsf {E}^{\otimes N}[f] ]\) and \(\mathcal {P}(A) = \mathcal {E}[\mathbbm {1}_{A}]\)).

Corollary 6.8

Assume \(\alpha =\beta =1\). We set \(N=\lfloor e^{ct}\rfloor \) for some \(c\in \left( \frac{2}{5},1\right) \), and \(x_0 = I^{-1}(c) = \sqrt{1-(1-c)^2}\). Then we have

$$\begin{aligned} \lim _{t\rightarrow \infty } \mathcal {P}\left( \frac{ \max _{i=1, \dots , \lfloor e^{ct}\rfloor }\lbrace X^{(i)}_t \rbrace - t x_0}{t^{1/3} d} \leqslant y \right) = F_\mathrm{GUE}(y), \end{aligned}$$
(64)

where

$$\begin{aligned} d = \frac{( 2c^2\sqrt{1-c} )^{1/3}}{\sqrt{1-(1-c)^2}}. \end{aligned}$$

Remark 6.9

The condition \(c>2/5\) is equivalent to \(x_0>4/5\). It is also equivalent to the condition \(\theta <1/2\) in Theorem 6.2. Hence, this restriction is most probably purely technical.

Remark 6.10

We expect that Corollary 6.8 holds more generally for arbitrary parameters \(\alpha , \beta >0\). One would have the following result:

Let \(N=\lfloor e^{ct}\rfloor \) such that there exists \(x_0 >\frac{\alpha -\beta }{\alpha +\beta }\) and \(\theta _0>0\) with \(x(\theta _0)=x_0\) and \(I(x(\theta _0)) = c\). Then

$$\begin{aligned} \lim _{t\rightarrow \infty } \mathcal {P}\left( \frac{ \max _{i=1, \dots , \lfloor e^{ct}\rfloor }\lbrace X^{(i)}_t \rbrace - t x_0}{t^{1/3}\sigma (x_0)/I'(x_0)} \leqslant y \right) = F_\mathrm{GUE}(y), \end{aligned}$$
(65)

where \(I'(x)= \frac{\mathrm {d}}{\mathrm {d} x}I(x)\).

Remark 6.11

The range of the parameter c in Corollary 6.8 is a priori \(c\in (0,1)\). The reason why the upper bound is precisely 1 is because we are in the \(\alpha =\beta =1\) case. In general, the upper bound is I(1), which is always finite. It is natural that c is bounded. Indeed, we know that for all i, \(X_t^{(i)}\leqslant t\) (because the random walk performs \(\pm 1\) steps), and for c very large there exists some i such that \(X_t^{(i)}=t\) with high probability. Hence, one expects that for c large enough, the maximum \(\max _{i=1, \dots , \lfloor e^{ct}\rfloor }\lbrace X^{(i)}_t \rbrace \) is exactly t with a probability going to 1 as t goes to infinity, and there cannot be random fluctuations in that case.

If one considers N simple symmetric random walks (corresponding to the annealed model), the threshold is \(\log (2)\) (i.e. for \(c>\log (2)\), \((1-(1/2)^t)^N\rightarrow 0\) and for \(c<\log (2)\), \((1-(1/2)^t)^N\rightarrow 1\)). One can calculate the large deviations rate function \(I^{a}\) for the simple random walk and check that \(I^a(1)=\log (2)\).
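The sharpness of this annealed threshold can be illustrated numerically (this is an illustration, not part of the argument): writing \((1-2^{-t})^N = \exp (N\log (1-2^{-t}))\) with \(N=\lfloor e^{ct}\rfloor \) and evaluating in log space, the transition at \(c=\log 2 \approx 0.693\) is already visible for moderate t:

```python
import math

def prob_all_below_t(c, t):
    """(1 - 2^{-t})^{e^{ct}}: probability that none of N = e^{ct} independent
    simple random walks sits at its maximal position t at time t.
    Computed in log space; dropping the floor in N is harmless here."""
    return math.exp(math.exp(c * t) * math.log1p(-(2.0 ** -t)))

t = 60
assert prob_all_below_t(0.6, t) > 0.99    # c < log 2: the max is < t whp
assert prob_all_below_t(0.75, t) < 1e-10  # c > log 2: some walker reaches t whp
```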

Proof of Corollary 6.8

This proof relies on Theorem 6.2, which deals only with \(\alpha =\beta =1\). However, this type of deduction would also hold in the general parameter case, and we write the proof using expressions in their general form. From Theorem 6.2, we have that writing

$$\begin{aligned} \log (\mathsf {P}(X_t>xt)) = -I(x) t + t^{1/3} \sigma (x) \chi _t, \end{aligned}$$
(66)

then \(\chi _t\) weakly converges to the Tracy–Widom GUE distribution, provided x can be written \(x=x(\theta )\) with \(0<\theta <1/2\). For any realization of the environment, we have on the one hand

$$\begin{aligned} \mathsf {P}\left( \max _{i=1, \dots , \lfloor e^{ct}\rfloor }\lbrace X^{(i)}_t\rbrace \leqslant x t \right)= & {} (1- \mathsf {P}(X_t>xt))^{\lfloor e^{ct}\rfloor } \\= & {} \exp ( \lfloor e^{c t}\rfloor \log (1- \mathsf {P}(X_t>xt))). \end{aligned}$$

On the other hand, setting \(x = x_0 + \frac{t^{-2/3}\sigma (x_0) y}{I'(x_0)}\), we have that

$$\begin{aligned} \mathcal {P}\left( \max _{i=1, \dots , \lfloor e^{ct}\rfloor }\lbrace X^{(i)}_t\rbrace \leqslant x t \right) =\mathcal {P}\left( \frac{ \max _{i=1, \dots , \lfloor e^{ct}\rfloor }\lbrace X^{(i)}_t \rbrace - t x_0}{t^{1/3}\sigma (x_0)/I'(x_0)} \leqslant y \right) . \end{aligned}$$
(67)

By Taylor expansion, we have as t goes to infinity

$$\begin{aligned} I(x) = I(x_0) + t^{-2/3}\sigma (x_0) y + \mathcal {O}(t^{-4/3}), \end{aligned}$$

and

$$\begin{aligned} \sigma (x)= \sigma (x_0) + t^{-2/3}\frac{\sigma '(x_0) \sigma (x_0) y}{I'(x_0)} + \mathcal {O}(t^{-4/3}). \end{aligned}$$

Hence, the R.H.S. of (66) is approximated by

$$\begin{aligned} -I(x)t+ t^{1/3}\sigma (x)\chi _t = -I(x_0) t + t^{1/3}\sigma (x_0)(\chi _t -y) + \mathcal {O}(t^{-1/3}) + \mathcal {O}(t^{-1/3}\chi _t). \end{aligned}$$
(68)

Choosing \(x_0\) such that \(I(x_0)= c\), we have

$$\begin{aligned} \mathcal {P}\left( \max _{i=1, \dots , \lfloor e^{ct}\rfloor }\lbrace X^{(i)}_t\rbrace \leqslant x t \right)&= \mathbb {E}\exp (\lfloor e^{ct }\rfloor \log (1- \mathsf {P}(X_t>xt)) ) \\&= \mathbb {E}\exp (-\lfloor e^{ct }\rfloor P(t, xt) + \mathcal {O}(e^{ct} P(t, xt)^2) ) \\&= \mathbb {E}\exp (-e^{t^{1/3}\sigma (x_0)(\chi _t -y) + \mathcal {O}(t^{-1/3}(1+\chi _t))} +\mathcal {O}( P(t, xt))\\&\quad +\mathcal {O}(e^{ct} P(t, xt)^2) ). \end{aligned}$$

The second equality relies on the Taylor expansion of the logarithm around 1. The third equality is a consequence of (66) and (68). Since \(\chi _t\) converges in distribution, \(t^{-1/3}(1+\chi _t)\) converges in probability to zero by Slutsky’s theorem. Hence, the term \(\mathcal {O}(t^{-1/3}(1+\chi _t))\) inside the exponential converges in probability to zero. Recalling that \(I(x_0)=c\), we have

$$\begin{aligned} P(t, xt)^2 = \exp ( 2\log (P(t, xt)) )= & {} \exp ( 2( -c t +\mathcal {O}( t^{1/3} \chi _t) ) )\\= & {} \exp ( -2c t +2t^{2/3}\mathcal {O}( t^{-1/3} \chi _t) ), \end{aligned}$$

and since \(\mathcal {O}(t^{-1/3}(1+\chi _t))\) converges to zero in probability, we have that \(P(t, xt)^2\) is smaller than \(e^{-\frac{3}{2}ct}\) with probability going to 1 as t goes to infinity, so that the term \(\mathcal {O}(e^{ct} P(t, xt)^2)\) can be neglected. Similarly, one can bound \(\mathcal {O}(P(t, xt))\) by \(e^{-\frac{1}{2}ct}\) with probability going to 1. Thus,

$$\begin{aligned} \lim _{t\rightarrow \infty } \mathcal {P}\left( \frac{ \max _{i=1, \dots , \lfloor e^{ct}\rfloor }\lbrace X^{(i)}_t \rbrace - t x_0}{t^{1/3}\sigma (x_0)/I'(x_0)} \leqslant y \right)&= \lim _{t\rightarrow \infty } \mathcal {P}\left( \max _{i=1, \dots , \lfloor e^{ct}\rfloor }\lbrace X^{(i)}_t\rbrace \leqslant x t \right) \\&=\lim _{t\rightarrow \infty }\mathbb {P}(\chi _t\leqslant y) \\&= F_ \mathrm{GUE}(y). \end{aligned}$$

In the case \(\alpha =\beta =1\), we have seen that \(I(x) = 1-\sqrt{1-x^2}\) so that \(x_0 = \sqrt{1-(1-c)^2}\). Moreover, using (43),

$$\begin{aligned} \sigma (x_0)/I'(x_0) = d = \frac{( 2c^2\sqrt{1-c} )^{1/3}}{x_0}=\frac{( 2c^2\sqrt{1-c} )^{1/3}}{\sqrt{1-(1-c)^2}}, \end{aligned}$$

as in the statement of Corollary 6.8. Finally, we have that \(I(x(1/2)) = 2/5\) and \(x(1/2) = 4/5\), so that the hypotheses of Corollary 6.8 match those of Theorem 6.2. \(\square \)
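These specializations can be checked numerically; the sketch below (purely illustrative, with the \(\alpha =\beta =1\) formulas hard-coded) verifies that \(x_0\) inverts the rate function and that \(c=2/5\) corresponds to \(x_0=4/5\):

```python
import math

def I(x):
    """Quenched rate function for alpha = beta = 1: I(x) = 1 - sqrt(1 - x^2)."""
    return 1 - math.sqrt(1 - x * x)

def x0(c):
    """I^{-1}(c) = sqrt(1 - (1-c)^2)."""
    return math.sqrt(1 - (1 - c) ** 2)

def d(c):
    """Scaling constant of Corollary 6.8."""
    return (2 * c ** 2 * math.sqrt(1 - c)) ** (1 / 3) / x0(c)

for c in [0.41, 0.5, 0.7, 0.9]:
    assert abs(I(x0(c)) - c) < 1e-12   # x0 really inverts I
assert abs(x0(2 / 5) - 4 / 5) < 1e-12  # c = 2/5 corresponds to x0 = 4/5
assert d(0.5) > 0
```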

In order to put Corollary 6.8 in the perspective of extreme value statistics, recall that if \((G_i)_{i=1, \dots , \lfloor e^{ct}\rfloor }\) is a sequence of independent centred Gaussian random variables with variance 1, then we have [25, Section 2.3.2] the weak convergence

$$\begin{aligned} \sqrt{2ct}\max _{i=1, \dots , \lfloor e^{ct}\rfloor } \left\{ G_i\right\} - 2 ct + \frac{1}{2}\log (t) + \frac{1}{2}\log (4\pi c)\Longrightarrow \mathcal {G}, \end{aligned}$$

where \(\mathcal {G}\) is a Gumbel random variable with cumulative distribution function \(\exp (-e^{-x})\).

For the Beta-RWRE with general parameters \(\alpha , \beta >0\), the variables \(X^{(i)}_t\) have mean \(\frac{\alpha -\beta }{\alpha +\beta }t\) and variance \( \mathcal {O}(t)\) [see Proposition 6.12 (1) and (2)]. Let us write

$$\begin{aligned} R_t^{(i)}:=\frac{X_t^{(i)} - \frac{\alpha -\beta }{\alpha +\beta }t}{\sqrt{t}}. \end{aligned}$$

We know that \(R_t^{(i)}\) converges weakly to the Gaussian distribution by the central limit theorem. Moreover, conditionally on the environment, \(R_t^{(i)}\) converges weakly to the Gaussian distribution (this is proved in [31]; see the discussion in Sect. 2.6). However, if we let the environment vary, the variables \(R^{(i)}_t\) are not independent since the random walks all share the same environment.

The next proposition characterizes the covariance structure of the family \((X^{(i)}_t)_{i\geqslant 1}\). We state the result for any parameters \(\alpha , \beta >0\).

Proposition 6.12

  1.

    For all \(i\geqslant 1\), we have \(\mathcal {E}[X^{(i)}_t ]= t\frac{\alpha -\beta }{\alpha +\beta }\).

  2.

    For all \(i\geqslant 1\), we have \(\mathcal {E}[(X^{(i)}_t)^2 ]= \left( \frac{\alpha -\beta }{\alpha +\beta } \right) ^2t^2 + \frac{4\alpha \beta }{(\alpha +\beta )^2} t \).

  3.

    For all \(i\ne j \geqslant 1\), we have

    $$\begin{aligned} \mathcal {E}[X^{(i)}_t X^{(j)}_t ]= \left( t\frac{\alpha -\beta }{\alpha +\beta }\right) ^2 + \frac{4\alpha \beta \sum _{s=0}^{t-1}\mathcal {P}(X^{(i)}_s= X^{(j)}_s)}{(\alpha +\beta )^2 (\alpha +\beta +1)}. \end{aligned}$$
    (69)
  4.

    For two random variables X and Y measurable with respect to \(\mathcal {P}\), we denote their correlation coefficient as

    $$\begin{aligned} \rho (X,Y) = \frac{\mathcal {E}[XY]-\mathcal {E}[X]\mathcal {E}[Y]}{\sqrt{\left( \mathcal {E}[X^2]-\mathcal {E}[X]^2\right) \left( \mathcal {E}[Y^2]-\mathcal {E}[Y]^2\right) }}. \end{aligned}$$

    For all \(i\ne j \geqslant 1\), the correlation coefficient \(\rho (X^{(i)}_t, X^{(j)}_t )\) equals \(1/(\alpha +\beta +1)\) times the \(\mathcal {E}\)-expected proportion of overlap between the walks \(X^{(i)}_t\) and \(X^{(j)}_t\), up to time t.

Proof

The points (1) and (2) are trivial since \(X_t\) is actually a simple random walk if we do not condition on the environment. In any case, let us explain each computation explicitly.

  1.

    Let us write \(\Delta _t = X_{t+1}-X_{t}\). Then \(X_t=\sum _{i=0}^{t-1} \Delta _i\). \(\Delta _i\) is a random variable that takes the value 1 with probability \(\mathbb {E}[B]\) and the value \(-1\) with probability \(\mathbb {E}[1-B]\) for some \(Beta(\alpha , \beta )\) random variable B. We find that \(\mathcal {E}\left[ \Delta _t \right] = \frac{\alpha -\beta }{\alpha +\beta }\), and

    $$\begin{aligned} \mathcal {E}\left[ X_t\right] = \sum _{i=0}^{t-1} \mathcal {E}\left[ \Delta _i \right] = t\frac{\alpha -\beta }{\alpha +\beta }. \end{aligned}$$
  2.

    We have

    $$\begin{aligned} \mathcal {E}[ (X_t)^2] =\mathcal {E}\left[ \sum _{i=0}^{t-1} \Delta _i\sum _{j=0}^{t-1} \Delta _j\right] . \end{aligned}$$

    For \(i\ne j\), \(\mathcal {E}[ \Delta _i \Delta _j] = \mathcal {E}[ \Delta _i ]\mathcal {E}[ \Delta _j]\), and since \(\Delta _i\) equals plus or minus one, \(\mathcal {E}[ (\Delta _i)^2 ]=1\). Hence,

    $$\begin{aligned} \mathcal {E}[ (X_t)^2] = t(t-1)\left( \frac{\alpha -\beta }{\alpha +\beta }\right) ^2 + t = \left( t\frac{\alpha -\beta }{\alpha +\beta }\right) ^2 +t\frac{4\alpha \beta }{(\alpha +\beta )^2}. \end{aligned}$$
  3.

    Let us write \(\Delta _t^{(i)} = X^{(i)}_{t+1} -X^{(i)}_t\) and \(\Delta _t^{(j)} = X^{(j)}_{t+1} -X^{(j)}_t\). We have

    $$\begin{aligned} \mathcal {E}[X^{(i)}_t X^{(j)}_t] = \mathcal {E}\left[ \sum _{n=0}^{t-1}\Delta _n^{(i)}\sum _{m=0}^{t-1}\Delta _m^{(j)} \right] . \end{aligned}$$

    For \(n\ne m\), since the increments and the environments corresponding to different times are independent,

    $$\begin{aligned} \mathcal {E}[\Delta _n^{(i)}\Delta _m^{(j)} ] = \mathcal {E}[\Delta _n^{(i)}]\mathcal {E}[\Delta _m^{(j)} ] = \left( \frac{\alpha -\beta }{\alpha +\beta }\right) ^2. \end{aligned}$$

    However, \(\mathcal {E}[\Delta _n^{(i)}\Delta _n^{(j)} ]\) depends on whether \( X^{(i)}_n=X^{(j)}_n\) or not. More precisely,

    $$\begin{aligned} \mathcal {E}[\Delta _n^{(i)}\Delta _n^{(j)}\vert X^{(i)}_n \ne X^{(j)}_n ] = \mathcal {E}[\Delta _n^{(i)}]\mathcal {E}[\Delta _n^{(j)} ] = \left( \frac{\alpha -\beta }{\alpha +\beta }\right) ^2, \end{aligned}$$

    and

    $$\begin{aligned} \mathcal {E}[\Delta _n^{(i)}\Delta _n^{(j)}\vert X^{(i)}_n = X^{(j)}_n ] = \mathbb {E}[ \mathsf {E}[\Delta _n^{(i)} ]\mathsf {E}[\Delta _n^{(j)} ]\vert X^{(i)}_n = X^{(j)}_n] = \mathbb {E}[ (2B-1)^2], \end{aligned}$$

    for some \(Beta(\alpha , \beta )\) random variable B. This yields

    $$\begin{aligned} \mathcal {E}[\Delta _n^{(i)}\Delta _n^{(j)} ] = \mathcal {P}(X^{(i)}_n\ne X^{(j)}_n)\left( \frac{\alpha -\beta }{\alpha +\beta }\right) ^2 + \mathcal {P}(X^{(i)}_n= X^{(j)}_n)\mathbb {E}[ (2B-1)^2]. \end{aligned}$$

    Using \( \mathbb {E}[B^2] = \frac{\alpha (\alpha +1)}{(\alpha +\beta )(\alpha +\beta +1)},\) we find that

    $$\begin{aligned} \mathcal {E}[X^{(i)}_t X^{(j)}_t] = t^2\left( \frac{\alpha -\beta }{\alpha +\beta }\right) ^2 + \left( \sum _{s=0}^{t-1}\mathcal {P}( X^{(i)}_s= X^{(j)}_s) \right) \frac{4\alpha \beta }{(\alpha +\beta )^2(\alpha +\beta +1)}. \end{aligned}$$
  4.

    The \(\mathcal {E}\)-expected proportion of overlap between the walks \(X^{(i)}_t\) and \(X^{(j)}_t\) up to time t is

    $$\begin{aligned} \frac{1}{t}\mathcal {E}\left[ \sum _{s=0}^{t-1}\mathbbm {1}_{X^{(i)}_s=X^{(j)}_s}\right] = \frac{1}{t}\sum _{s=0}^{t-1}\mathcal {P}(X^{(i)}_s= X^{(j)}_s). \end{aligned}$$

    Hence, the point (4) is a direct consequence of (1)–(3).

\(\square \)
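The identity (69) can be tested by direct Monte Carlo simulation. The sketch below is an illustration outside the paper's arguments: it draws a shared Beta(1,1) (i.e. uniform) environment lazily, runs two walkers in it, and compares both sides of (69), which for \(\alpha =\beta =1\) reads \(\mathcal {E}[X^{(1)}_t X^{(2)}_t] = \frac{1}{3}\,\mathcal {E}[\text {number of overlaps}]\); the tolerance accounts for Monte Carlo error.

```python
import random

def sample_pair(t, rng):
    """Run two walkers for t steps in one shared Beta(1,1) (uniform)
    environment, drawn lazily site by site; return the final positions and
    the number of times 0 <= s < t at which the walkers share a site."""
    env = {}
    x1 = x2 = overlap = 0
    for s in range(t):
        if x1 == x2:
            overlap += 1
        b1 = env.setdefault((s, x1), rng.random())
        b2 = env.setdefault((s, x2), rng.random())
        x1 += 1 if rng.random() < b1 else -1
        x2 += 1 if rng.random() < b2 else -1
    return x1, x2, overlap

rng = random.Random(0)
t, n_env = 16, 30000
mean_prod = mean_overlap = 0.0
for _ in range(n_env):
    x1, x2, ov = sample_pair(t, rng)
    mean_prod += x1 * x2
    mean_overlap += ov
mean_prod /= n_env
mean_overlap /= n_env
# (69) with alpha = beta = 1, zero drift
assert abs(mean_prod - mean_overlap / 3) < 0.4
```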

One can precisely describe the behaviour of \(\sum _{s=0}^{t-1}\mathcal {P}(X^{(i)}_s= X^{(j)}_s)\). For simplicity, we restrict the study to the case where the random walks have no drift, that is \(\alpha =\beta \).

Proposition 6.13

Consider \((X_t^{(1)})_{t\in \mathbb {Z}_{\geqslant 0}}\) and \((X_t^{(2)})_{t\in \mathbb {Z}_{\geqslant 0}}\) two Beta-RWRE drawn independently in the same environment with parameters \(\alpha =\beta \). Then

$$\begin{aligned} \sqrt{t} \ \cdot \ \mathcal {P}(X_t^{(1)} = X_t^{(2)}) \xrightarrow [t\rightarrow \infty ]{} \frac{2\alpha +1}{2\alpha } \frac{1}{\sqrt{\pi }}, \end{aligned}$$

and consequently

$$\begin{aligned} \sqrt{t}\ \cdot \ \mathcal {E}\left[ \frac{X^{(i)}_t}{\sqrt{t}} \frac{X^{(j)}_t }{\sqrt{t}}\right] \xrightarrow [t\rightarrow \infty ]{} \frac{1}{\alpha \sqrt{\pi }}. \end{aligned}$$

Proof

First, notice that \((X^{(1)}_t - X^{(2)}_t)_{t\geqslant 0}\) is a random walk. Let \(Y_t:= X^{(1)}_t - X^{(2)}_t\). The transition probabilities depend on whether \(Y_t=0\) or not. If \(Y_t=0\), then

$$\begin{aligned} Y_{t+1}-Y_t = {\left\{ \begin{array}{ll} +2 &{}\text{ with } \text{ probability } \mathbb {E}[B(1-B)]\\ 0 &{}\text{ with } \text{ probability } \mathbb {E}[B^2 + (1-B)^2]\\ -2 &{}\text{ with } \text{ probability } \mathbb {E}[B(1-B)]\end{array}\right. } \end{aligned}$$

where B is a \(Beta(\alpha , \alpha )\) random variable. If \(Y_t\ne 0\), then

$$\begin{aligned} Y_{t+1}-Y_t = {\left\{ \begin{array}{ll} +2 &{}\text{ with } \text{ probability } 1/4\\ 0 &{}\text{ with } \text{ probability } 1/2\\ -2 &{}\text{ with } \text{ probability } 1/4.\end{array}\right. } \end{aligned}$$

In the following, we denote \(r=\mathbb {E}[B(1-B)] = \frac{\alpha }{4\alpha +2}\). We also denote \(P_t:= \mathcal {P}\left( Y_t=0\right) \), which is the quantity that we want to approximate.
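The value of r follows from the exact Beta moments \(\mathbb {E}[B]=\frac{\alpha }{\alpha +\beta }\) and \(\mathbb {E}[B^2]=\frac{\alpha (\alpha +1)}{(\alpha +\beta )(\alpha +\beta +1)}\) via \(r=\mathbb {E}[B]-\mathbb {E}[B^2]\). As an illustrative check (restricted to integer \(\alpha \) so that exact rational arithmetic applies):

```python
from fractions import Fraction

def beta_moments(alpha, beta):
    """Exact first and second moments of B ~ Beta(alpha, beta),
    integer parameters only."""
    m1 = Fraction(alpha, alpha + beta)
    m2 = Fraction(alpha * (alpha + 1), (alpha + beta) * (alpha + beta + 1))
    return m1, m2

for a in [1, 2, 3, 7]:
    m1, m2 = beta_moments(a, a)
    r = m1 - m2                     # r = E[B(1 - B)]
    assert r == Fraction(a, 4 * a + 2)
    assert 2 * r + (1 - 2 * r) == 1  # the three transition probabilities sum to one
```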

We introduce an auxiliary random walk starting from 0 and having transitions

$$\begin{aligned} {\left\{ \begin{array}{ll} +2 &{}\text{ with } \text{ probability } 1/4,\\ 0 &{}\text{ with } \text{ probability } 1/2,\\ -2 &{}\text{ with } \text{ probability } 1/4.\end{array}\right. } \end{aligned}$$

We denote by \(Q_{t}\) the probability for the auxiliary random walk to arrive at zero at time t and stay in the non-negative region between times 0 and t.

By conditioning on the first return to zero of the random walk \((Y_t)_t\), we claim that for \(t\geqslant 2\),

$$\begin{aligned} P_t = (1-2r)P_{t-1} + 2\sum _{i=2}^{t} r \frac{1}{4} Q_{i-2}P_{t-i}. \end{aligned}$$
(70)

Let us explain more precisely Eq. (70) (see Fig. 7):

  • The term \((1-2r)P_{t-1}\) corresponds to the case when the first return to zero occurs at time 1.

  • The factor 2 in front of the sum in (70) accounts for the fact that the walk can stay either in the positive or in the negative region before the first return to zero, with equal probability.

  • The factor r is the probability that \(Y_1=2\) (which is also the probability that \(Y_1=-2\)).

  • The factor 1/4 is the probability of the last step before the first return to zero.

Fig. 7

A possible trajectory of the random walk \(Y_t\) is decomposed to explain the recurrence (70). The trajectory in the gray box has the same probability as that of the auxiliary random walk
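The renewal identity (70) can be verified exactly in small cases: compute \(P_t\) by evolving the law of \(Y_t\), compute \(Q_t\) by a dynamic program over non-negative paths of the auxiliary walk, and compare. The sketch below is purely illustrative, with \(\alpha =\beta =1\) (so \(r=1/6\)) and exact rational arithmetic:

```python
from fractions import Fraction

r = Fraction(1, 6)  # E[B(1-B)] for alpha = beta = 1
T = 25

# P_t = P(Y_t = 0), by evolving the distribution of Y_t (even integers)
dist = {0: Fraction(1)}
P = [Fraction(1)]
for _ in range(T):
    new = {}
    for y, p in dist.items():
        if y == 0:
            steps = {2: r, 0: 1 - 2 * r, -2: r}
        else:
            steps = {2: Fraction(1, 4), 0: Fraction(1, 2), -2: Fraction(1, 4)}
        for dy, q in steps.items():
            new[y + dy] = new.get(y + dy, Fraction(0)) + p * q
    dist = new
    P.append(dist.get(0, Fraction(0)))

# Q_t: auxiliary lazy walk from 0 back to 0, staying >= 0 throughout
f = {0: Fraction(1)}
Q = [Fraction(1)]
for _ in range(T):
    new = {}
    for y, p in f.items():
        for dy, q in ((2, Fraction(1, 4)), (0, Fraction(1, 2)), (-2, Fraction(1, 4))):
            if y + dy >= 0:
                new[y + dy] = new.get(y + dy, Fraction(0)) + p * q
    f = new
    Q.append(new.get(0, Fraction(0)))

# Check the renewal identity (70) exactly for 2 <= t <= T
for t in range(2, T + 1):
    rhs = (1 - 2 * r) * P[t - 1] + \
          2 * sum(r * Fraction(1, 4) * Q[i - 2] * P[t - i] for i in range(2, t + 1))
    assert P[t] == rhs
```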

By conditioning on the first return to zero of the auxiliary random walk, one can see that \(Q_t\) satisfies the recurrence

$$\begin{aligned} Q_t = \frac{1}{2}Q_{t-1} +\sum _{i=2}^{t}\frac{1}{16}Q_{i-2}Q_{t-i} \quad \text {for }t\geqslant 2. \end{aligned}$$

This implies that if \(Q(z)=\sum _{n\geqslant 0} Q_n z^n \) is the generating function of the sequence \((Q_n)_n\), then

$$\begin{aligned} Q(z)- 1- 1/2 z = 1/2 z(Q(z)-1) + 1/16 z^2 Q(z)^2. \end{aligned}$$

This yields

$$\begin{aligned} Q(z) = \frac{8-4z - 8\sqrt{1-z}}{z^2}. \end{aligned}$$

Now, let us denote by \(G(z)= \sum _{n\geqslant 0} P_n z^n\) the generating function of the sequence \((P_n)_n\). The recurrence (70) implies that

$$\begin{aligned} G(z)-1-(1-2r)z = (1-2r) z (G(z)-1) + 2 r (1/4) z^2 G(z)Q(z). \end{aligned}$$

This yields

$$\begin{aligned} G(z) =\frac{1}{1+z(4r-1) + 4 r (\sqrt{1-z}-1)}. \end{aligned}$$

The function G(z) is analytic in the open unit disk, and admits a power series expansion around 0 with radius of convergence 1. The nature of its singularities on the unit circle gives the leading order of the asymptotic behaviour of its series coefficients. As \(z\rightarrow 1\) (for \(z\in \mathbb {C}{\setminus } D\) where D is the cone \(D=\lbrace z :\vert \arg (z-1) \vert <\epsilon \rbrace \), for some \(\epsilon >0\) arbitrarily small, and taking the branch cut of \(\sqrt{1-z} \) along \(\mathbb {R}_{\geqslant 1}\)),

$$\begin{aligned} G(z) \sim \frac{1}{4r\sqrt{1-z}}, \end{aligned}$$

where \(\sim \) means that the ratio of the two sides tends to 1 as \(z\rightarrow 1\) and z belongs to the domain described above. We deduce (from e.g. [23, Corollary VI.1]) that

$$\begin{aligned} P_t \sim \frac{1}{4r} \frac{1}{\sqrt{\pi t}}. \end{aligned}$$

This clearly implies that

$$\begin{aligned} \frac{\sum _{s=0}^{t-1}P_s}{\sqrt{t}} \xrightarrow [t\rightarrow \infty ]{} \frac{1}{2r\sqrt{\pi }}. \end{aligned}$$
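The asymptotics \(P_t \sim \frac{1}{4r}\frac{1}{\sqrt{\pi t}}\) can likewise be checked numerically by extracting the coefficients of G(z) through power series inversion; a Python sketch (the choice \(\alpha =1\), i.e. \(r=1/6\), is an illustrative assumption):

```python
import math

r = 1 / 6                       # corresponds to alpha = 1 via r = alpha/(4*alpha + 2)
N = 500

# Taylor coefficients of sqrt(1-z)
s = [1.0]
for k in range(1, N + 1):
    s.append(-s[-1] * (0.5 - (k - 1)) / k)

# denominator of G: D(z) = 1 + z(4r-1) + 4r(sqrt(1-z) - 1)
D = [1.0, (4 * r - 1) + 4 * r * s[1]] + [4 * r * s[k] for k in range(2, N + 1)]

# invert the series: G(z) D(z) = 1 gives P_n = -sum_{k=1}^n D_k P_{n-k}
P = [1.0]
for n in range(1, N + 1):
    P.append(-sum(D[k] * P[n - k] for k in range(1, n + 1)))

ratio = P[N] * 4 * r * math.sqrt(math.pi * N)
assert abs(ratio - 1) < 0.05    # P_t * 4r * sqrt(pi*t) should be close to 1
```

The inversion is numerically stable here since all the coefficients involved in the recursion enter with positive sign.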

Since \(r= \frac{\alpha }{4\alpha +2}\) and using (69), we get

$$\begin{aligned} \sqrt{t}\mathcal {E}\left[ \frac{X^{(i)}_t}{\sqrt{t}} \frac{X^{(j)}_t }{\sqrt{t}}\right] \xrightarrow [t\rightarrow \infty ]{} \frac{1}{\alpha \sqrt{\pi }}. \end{aligned}$$

\(\square \)

Comparison to correlated Gaussian variables Consider for simplicity only the case \(\alpha =\beta \). We denote as before \(R_t^{(i)}=X_t^{(i)}/\sqrt{t}\). As already mentioned in Sect. 2.6, \(R_t^{(i)}\) converges weakly, as t goes to infinity, to the Gaussian distribution \(\mathcal {N}(0,1)\) (whether we condition on the environment or not). It is tempting to ask whether the same limit theorem for the maximum holds when one replaces the \(R_t^{(i)}\) by the corresponding limiting collection of Gaussian random variables (this would correspond to first taking the limit as t goes to infinity and then studying the maximum as N goes to infinity). The theory of extreme value statistics provides a negative answer.

Let \(\Sigma _N(\lambda )\) be the \(N\times N\) matrix

$$\begin{aligned} \Sigma _N(\lambda ):= \left( \begin{matrix} 1 &{} \frac{\lambda }{\sqrt{\log (N)}} &{}\dots &{} \frac{\lambda }{\sqrt{\log (N)}}\\ \frac{\lambda }{\sqrt{\log (N)}}&{}1&{} \ddots &{}\vdots \\ \vdots &{} \ddots &{} \ddots &{} \frac{\lambda }{\sqrt{\log (N)}}\\ \frac{\lambda }{\sqrt{\log (N)}}&{} \dots &{} \frac{\lambda }{\sqrt{\log (N)}} &{}1 \end{matrix}\right) , \end{aligned}$$

where \(\lambda >0\) is a parameter. If we set \(N=\lfloor e^{ct}\rfloor \), and look at the maximum of the sequence \(\lbrace R_t^{(i)}\rbrace _{1\leqslant i\leqslant N} \) as t goes to infinity, the correlation matrix of the sequence is asymptotically \(\Sigma _N(\lambda )\) with \(\lambda = \frac{\sqrt{c/\pi }}{\alpha }\) (cf. Proposition 6.13).

Let \(G_N:=(G^{(1)}, \dots , G^{(N)})\) be a Gaussian vector with covariance matrix \(\Sigma _N(\lambda )\) and denote the maximum by \(M_N:= \max _{i=1, \dots , N}\lbrace G^{(i)} \rbrace \). Theorem 3.8.1 in [25] implies that we have the convergence in distribution

$$\begin{aligned} \frac{M_N- \sqrt{2\log (N)} + \lambda \sqrt{2}}{(\lambda ^{-1}\sqrt{\log (N)})^{-1/2}} \Longrightarrow \mathcal {N}(0,1). \end{aligned}$$

In particular, we have the convergence in probability of \(M_N/\sqrt{\log (N)}\) to \(\sqrt{2}\).
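A convenient way to realize such a Gaussian vector is the equicorrelated representation \(G^{(i)} = \sqrt{\rho }\, Z + \sqrt{1-\rho }\, \xi _i\) with \(\rho = \lambda /\sqrt{\log (N)}\) and \(Z, \xi _1, \dots , \xi _N\) i.i.d. standard Gaussians; note that the normalization \((\lambda ^{-1}\sqrt{\log (N)})^{-1/2}\) in the theorem is exactly \(\sqrt{\rho }\). A numerical sketch (the values of N and \(\lambda \) are arbitrary illustrations) checking that this representation indeed has covariance matrix \(\Sigma _N(\lambda )\):

```python
import numpy as np

rng = np.random.default_rng(42)
N, reps = 5, 200_000
lam = 0.5
rho = lam / np.sqrt(np.log(N))          # off-diagonal entry of Sigma_N(lambda)

# equicorrelated Gaussian vector: common factor Z plus independent noise
Z = rng.standard_normal(reps)
xi = rng.standard_normal((N, reps))
G = np.sqrt(rho) * Z + np.sqrt(1 - rho) * xi

C = np.cov(G)                           # empirical N x N covariance matrix
assert np.allclose(np.diag(C), 1.0, atol=0.02)
assert np.allclose(C[np.triu_indices(N, 1)], rho, atol=0.02)
```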

Thus, we have seen that the maximum of \((R_t^{(i)})_{1\leqslant i\leqslant N}\) and the maximum of \((G^{(i)})_{1\leqslant i\leqslant N}\) obey very different limit theorems: both the scales and the limiting laws are different.

Remark 6.14

By Corollary 6.8, we have the convergence in probability

$$\begin{aligned} \frac{\max _{i=1, \dots , N}\lbrace R_{\log (N)/c}^{(i)} \rbrace }{\sqrt{\log (N)}} \xrightarrow [N\rightarrow \infty ]{\mathcal {P}} \frac{x_0}{\sqrt{c}}, \end{aligned}$$

where \(c=I(x_0)\). Since \(I''(0)=1\) for any \(\alpha \) with \(\beta =\alpha \), we notice that when \(x_0\rightarrow 0\), the first-order approximation coincides with the Gaussian case. To substantiate this parallel, one would need to extend Corollary 6.8 to the full parameter range \(\alpha , \beta >0\) and \(0<c<1\), beyond \(\alpha =\beta =1\) and \(c>2/5\) (see also Remark 6.9).

Remark 6.15

It is clear that the sequence \((X_t^{(i)} )_{1\leqslant i \leqslant N}\) is exchangeable. There exist general results for maxima of exchangeable sequences. In some cases, one can prove that the maximum, properly renormalized, converges to a mixture of one of the classical extreme value laws (see the discussion in Section 3.2 and the results of Section 3.6 in [25]). However, it seems that our particular setting does not fit into this theory.

7 Asymptotic analysis of the Bernoulli-Exponential directed first passage percolation

7.1 Statement of the result

We investigate the behaviour of the first passage time \(T(n,\kappa n)\) as n goes to infinity, for a slope \(\kappa >\frac{a}{b}\). When \(\kappa =\frac{a}{b}\), the first passage time \(T(n, \kappa n)\) should go to zero. The case \(\kappa <\frac{a}{b}\) is similar to the case \(\kappa >\frac{a}{b}\) by symmetry.

As in Theorem 6.2, we parametrize the slope \(\kappa \) by a parameter \(\theta \) (which turns out to be the position of the critical point in the asymptotic analysis). Let

$$\begin{aligned} \kappa (\theta )&:= \dfrac{\dfrac{1}{\theta ^2} -\dfrac{1}{(a+\theta )^2}}{\dfrac{1}{(a+\theta )^2}-\dfrac{1}{(a+b+\theta )^2}}, \end{aligned}$$
(71)
$$\begin{aligned} \tau (\theta )&:= \frac{1}{a+\theta } - \frac{1}{\theta } +\kappa (\theta )\left( \frac{1}{a+\theta }-\frac{1}{a+b+\theta }\right) = \frac{a(a+b)}{\theta ^2 (2a+b+2\theta )}, \end{aligned}$$
(72)

and

$$\begin{aligned} \rho (\theta ) := \left[ \frac{1}{\theta ^3} -\frac{1}{(a+\theta )^3} + \kappa (\theta )\left( \frac{1}{(a+b+\theta )^3} -\frac{1}{(a+\theta )^3} \right) \right] ^{1/3}. \end{aligned}$$
(73)

When \(\theta \) ranges from 0 to \(+\infty \), \(\kappa (\theta )\) ranges from \(+\infty \) to a / b and \(\tau (\theta )\) ranges from \(+\infty \) to 0.
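These claims are easy to confirm symbolically; a small sympy sketch (a sanity check, not part of the proof) verifying the second equality in (72) and the boundary behaviour of \(\kappa \) and \(\tau \):

```python
from sympy import symbols, simplify, limit, oo

theta, a, b = symbols('theta a b', positive=True)
kappa = (1/theta**2 - 1/(a+theta)**2) / (1/(a+theta)**2 - 1/(a+b+theta)**2)
tau_def = 1/(a+theta) - 1/theta + kappa*(1/(a+theta) - 1/(a+b+theta))
tau = a*(a+b) / (theta**2 * (2*a + b + 2*theta))

# the two expressions for tau(theta) in (72) agree
assert simplify(tau_def - tau) == 0
# kappa decreases from +infinity to a/b, tau from +infinity to 0
assert limit(kappa, theta, 0, '+') == oo
assert simplify(limit(kappa, theta, oo) - a/b) == 0
assert limit(tau, theta, 0, '+') == oo
assert limit(tau, theta, oo) == 0
```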

Theorem 7.1

We have that for any \(\theta >0\) and parameters \(a,b>0\),

$$\begin{aligned} \lim _{n \rightarrow \infty }\mathbb {P}\left( \frac{T(n, \kappa (\theta )n) - \tau (\theta )n}{\rho (\theta )n^{1/3}}\leqslant y \right) = F_{\mathrm {GUE}}(y). \end{aligned}$$

By Theorem 2.18, we have a Fredholm determinant representation for the probability

$$\begin{aligned} \mathbb {P}(T(n, \kappa (\theta ) n)>r). \end{aligned}$$

We set \( r=\tau (\theta ) n +\rho (\theta ) n^{1/3} y\). Thus, we have that

$$\begin{aligned} \mathbb {P}(T(n, \kappa (\theta ) n)>\tau (\theta ) n +\rho (\theta ) n^{1/3} y) = \det (I-K^{\mathrm {FPP}}_r)_{\mathbb {L}^2(C'_0)}, \end{aligned}$$

where

$$\begin{aligned} K^{\mathrm {FPP}}_r(u,u')= & {} \frac{1}{2i\pi } \int _{1/2-i\infty }^{1/2+i\infty } \exp (n(H(u+s)- H(u))+ \rho (\theta ) n^{1/3} ys)\\&\times \frac{u+s}{u} \frac{\mathrm {d}s}{s(s+u-u')}, \end{aligned}$$

and

$$\begin{aligned} H(z):= \tau (\theta ) z +\log \left( \frac{z}{a+z} \right) +\kappa (\theta )\log \left( \frac{a+b+z}{a+z}\right) . \end{aligned}$$

We have

$$\begin{aligned} H'(z) = \tau (\theta ) +\frac{1}{z} - \frac{1}{a+z} +\kappa (\theta )\left( \frac{1}{a+b+z}-\frac{1}{a+z}\right) . \end{aligned}$$

and

$$\begin{aligned} H''(z) = \frac{1}{(a+z)^2} -\frac{1}{z^2} +\kappa (\theta ) \left( \frac{1}{(a+z)^2} - \frac{1}{(a+b+z)^2}\right) . \end{aligned}$$

We can see from the expressions for the derivatives of H why it is natural to parametrize \(\kappa , \tau \) and \(\rho \) as in (71)–(73): with this choice, we have that \(H'(\theta ) = H''(\theta )=0\).

As in Sect. 6, we assume for the moment that the Fredholm determinant contour is a small circle around 0. We do the change of variables \(z=u+s\) in the definition of the kernel, so that

$$\begin{aligned} K^{\mathrm {FPP}}_r(u,u')= & {} \frac{1}{2i\pi } \int _{1/2-i\infty }^{1/2+i\infty } \exp (n(H(z)- H(u))+ \rho (\theta ) n^{1/3} y(z-u))\nonumber \\&\times \frac{z}{u} \frac{\mathrm {d}z}{(z-u)(z-u')}. \end{aligned}$$
(74)

Lemma 7.2

For any parameters \(a,b>0\) and \(\theta >0\), we have \(H'''(\theta )>0\).

Proof

We have

$$\begin{aligned} H'''(\theta ) = \frac{2}{\theta ^3} -\frac{2}{(a+\theta )^3} + \dfrac{\frac{1}{\theta ^2} -\frac{1}{(a+\theta )^2}}{\frac{1}{(a+\theta )^2}-\frac{1}{(a+b+\theta )^2}}\left( \frac{2}{(a+b+\theta )^3} -\frac{2}{(a+\theta )^3} \right) . \end{aligned}$$

Hence we have to show that

$$\begin{aligned}&\left( \frac{2}{\theta ^3} -\frac{2}{(a+\theta )^3}\right) \left( \frac{1}{(a+\theta )^2}-\frac{1}{(a+b+\theta )^2} \right) \nonumber \\&\quad >\left( \frac{2}{(a+\theta )^3}-\frac{2}{(a+b+\theta )^3} \right) \left( \frac{1}{\theta ^2} -\frac{1}{(a+\theta )^2}\right) . \end{aligned}$$
(75)

Putting each side over a common denominator, we arrive at

$$\begin{aligned}&b(a+b+\theta )(2\theta +2a + b)((a+\theta )^3- \theta ^3) \\&\quad > a\theta (2\theta + a) ((a+b+\theta )^3-(a+\theta )^3 )\\&\quad \Leftrightarrow a b (a + b) (a + \theta )^2 (2 a + b + 3 \theta )>0 \end{aligned}$$

which clearly holds. \(\square \)
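The polynomial identity used in the last step can be verified symbolically; a brief sympy sketch checking the algebra:

```python
from sympy import symbols, expand

a, b, theta = symbols('a b theta', positive=True)
lhs = b*(a+b+theta)*(2*theta+2*a+b)*((a+theta)**3 - theta**3)
rhs = a*theta*(2*theta+a)*((a+b+theta)**3 - (a+theta)**3)

# the difference of the two sides factors exactly as stated
diff = expand(lhs - rhs - a*b*(a+b)*(a+theta)**2*(2*a+b+3*theta))
assert diff == 0
```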

We notice that given the expression (73), \(H'''(\theta ) = 2(\rho (\theta ))^3\). By Taylor expansion around \(\theta \),

$$\begin{aligned} H(z) -H(\theta ) = \frac{(\rho (\theta )(z-\theta ))^3}{3} + \mathcal {O}((z-\theta )^4). \end{aligned}$$
(76)
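The relations \(H'(\theta ) = H''(\theta ) = 0\) and \(H'''(\theta ) = 2\rho (\theta )^3\) underlying this expansion can also be confirmed symbolically; a sympy sketch:

```python
from sympy import symbols, log, diff, simplify

z, theta, a, b = symbols('z theta a b', positive=True)
kappa = (1/theta**2 - 1/(a+theta)**2) / (1/(a+theta)**2 - 1/(a+b+theta)**2)
tau = a*(a+b) / (theta**2 * (2*a + b + 2*theta))
H = tau*z + log(z/(a+z)) + kappa*log((a+b+z)/(a+z))
rho3 = 1/theta**3 - 1/(a+theta)**3 + kappa*(1/(a+b+theta)**3 - 1/(a+theta)**3)

# theta is a double critical point of H, and H'''(theta) = 2 rho(theta)^3
assert simplify(diff(H, z, 1).subs(z, theta)) == 0
assert simplify(diff(H, z, 2).subs(z, theta)) == 0
assert simplify(diff(H, z, 3).subs(z, theta) - 2*rho3) == 0
```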

7.2 Deformation of contours

We need to find steep-descent contours for the variables z and u. For the z variable, we choose the contour \(\mathcal {D}_{\theta } = \theta +i\mathbb {R}\) as in Sect. 6. For the u variable, we notice that since we are integrating on a finite contour, it will be enough that \({{\mathrm {Re}}}[H(z)]> {{\mathrm {Re}}}[H(\theta )]\) along the contour (see [7, 37]).

Lemma 7.3

The contour \(\mathcal {D}_{\theta }\) is steep-descent for the function \({{\mathrm {Re}}}[H]\) in the sense that \(y\mapsto {{\mathrm {Re}}}[H(\theta +iy)]\) is decreasing for y positive and increasing for y negative.

Proof

Since \(\frac{\mathrm {d}}{\mathrm {d}y}{{\mathrm {Re}}}[H(\theta +iy)] = {{\mathrm {Im}}}[H'(\theta +iy)]\), and using symmetry with respect to the real axis, it is enough to show that for \(y>0\), \({{\mathrm {Im}}}[H'(\theta +iy)]>0\). We have

$$\begin{aligned} {{\mathrm {Im}}}[H'(\theta +iy)]= & {} \frac{y}{(\theta +a)^2+y^2} - \frac{y}{\theta ^2+y^2}\\&+\kappa (\theta )\left( \frac{y}{(\theta +a)^2+y^2}-\frac{y}{(\theta +a+b)^2+y^2}\right) . \end{aligned}$$

Given the expression (71) for \(\kappa (\theta )\), we have to show that

$$\begin{aligned}&\left( \frac{1}{\theta ^2+y^2} - \frac{1}{(\theta +a)^2+y^2}\right) \left( \frac{1}{(a+\theta )^2}-\frac{1}{(a+b+\theta )^2} \right) \nonumber \\&\quad <\left( \frac{1}{(\theta +a)^2+y^2}-\frac{1}{(\theta +a+b)^2+y^2}\right) \left( \frac{1}{\theta ^2} -\frac{1}{(a+\theta )^2}\right) . \end{aligned}$$
(77)

Factoring both sides in the inequality (77) and cancelling equal factors, one readily sees that it is equivalent to

$$\begin{aligned} \frac{1}{(\theta ^2+y^2)(a+b+\theta )^2} <\frac{1}{ \left( (\theta +a+b)^2+y^2\right) \theta ^2}, \end{aligned}$$

which is always satisfied. \(\square \)
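The steep-descent property can also be checked numerically along \(\mathcal {D}_{\theta }\); a short Python sketch (the parameter values \(a=b=\theta =1\) are an arbitrary illustration):

```python
import cmath

a = b = theta = 1.0
kappa = (1/theta**2 - 1/(a+theta)**2) / (1/(a+theta)**2 - 1/(a+b+theta)**2)
tau = a*(a+b) / (theta**2 * (2*a + b + 2*theta))

def re_H(z):
    """Real part of H(z) = tau*z + log(z/(a+z)) + kappa*log((a+b+z)/(a+z))."""
    return (tau*z + cmath.log(z/(a+z)) + kappa*cmath.log((a+b+z)/(a+z))).real

# Re[H(theta + iy)] should be strictly decreasing in y > 0
vals = [re_H(complex(theta, 0.1*k)) for k in range(1, 200)]
assert all(v1 > v2 for v1, v2 in zip(vals, vals[1:]))
```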

Instead of finding a steep-descent path for the \(\mathbb {L}^2\) contour as in Sect. 6, we prove that we can find a contour with suitable properties for asymptotics analysis, following the approach of [7].

Lemma 7.4

There exists a closed continuous path \(\gamma \) in the complex plane, such that

  • The path \(\gamma \) encloses 0 but not \(-a-b\).

  • The path \(\gamma \) crosses the point \(\theta \) and departs \(\theta \) with angles \(\phi \) and \(-\phi \), for some \(\phi \in (\pi /2, 5\pi /6)\).

  • Let \(B(\theta , \epsilon )\) be the ball of radius \(\epsilon \) centred at \(\theta \). For any \(\epsilon >0\), there exists \(\eta >0\) such that for all \(z\in \gamma {\setminus } B(\theta , \epsilon )\), \({{\mathrm {Re}}}[H(z)]-{{\mathrm {Re}}}[H(\theta )]>\eta \).

Proof

Since H is analytic away from its singularities, \({{\mathrm {Re}}}[H]\) is a harmonic function. It turns out that the shape of the level lines \({{\mathrm {Re}}}[H(z)] = {{\mathrm {Re}}}[H(\theta )]\) is constrained by the nature and positions of the singularities of H, and, provided H is not too complicated (does not have too many singularities), one can describe these level lines.

We know that level lines can cross only at singularities or critical points. In our case, three branches of the level line \({{\mathrm {Re}}}[H(z)] = {{\mathrm {Re}}}[H(\theta )]\) cross at \(\theta \) making angles \(\pi /6, \pi /2\) and \(5\pi /6\). This can be seen from the Taylor expansion (76).

The function H has only three singularities of logarithmic type, at 0, \(-a\) and \(-a-b\). When z goes to infinity, \({{\mathrm {Re}}}[H(z)] = {{\mathrm {Re}}}[H(\theta )]\) implies \({{\mathrm {Re}}}[\tau (\theta ) z ]\approx {{\mathrm {Re}}}[H(\theta )]\). Hence, there are two branches that go to infinity in the direction \(\pm \infty i+{{\mathrm {Re}}}[H(\theta )]/\tau (\theta )\). Additionally, one knows by the maximum principle that any closed path formed by portions of level lines must enclose a singularity. Finally, one knows the sign of \({{\mathrm {Re}}}[H(z)]\) around each singularity:

  • \({{\mathrm {Re}}}[H(z)]<0\) for z near 0,

  • \({{\mathrm {Re}}}[H(z)]<0\) for z near \(-a-b\),

  • \({{\mathrm {Re}}}[H(z)]>0\) for z near \(-a\).

This is enough to conclude that the level lines of \({{\mathrm {Re}}}[H(z)] = {{\mathrm {Re}}}[H(\theta )]\) are necessarily as shown in Fig. 8 (modulo a continuous deformation of the lines that does not cross any singularity).
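The signs listed above can be checked numerically near each singularity; a Python sketch (again with the illustrative choice \(a=b=\theta =1\)):

```python
import cmath

a = b = theta = 1.0
kappa = (1/theta**2 - 1/(a+theta)**2) / (1/(a+theta)**2 - 1/(a+b+theta)**2)
tau = a*(a+b) / (theta**2 * (2*a + b + 2*theta))

def re_H(z):
    """Real part of H(z) = tau*z + log(z/(a+z)) + kappa*log((a+b+z)/(a+z))."""
    return (tau*z + cmath.log(z/(a+z)) + kappa*cmath.log((a+b+z)/(a+z))).real

eps = 1e-6
assert re_H(complex(eps, 0)) < 0             # Re[H] -> -infinity near 0
assert re_H(complex(-a - b + eps, 0)) < 0    # Re[H] -> -infinity near -a-b
assert re_H(complex(-a + eps, 0)) > 0        # Re[H] -> +infinity near -a
```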

Fig. 8

The solid lines are contour lines \({{\mathrm {Re}}}[H(z)] = {{\mathrm {Re}}}[H(\theta )]\) in the case \(\theta =a=b=1\). Dashed lines are contour lines \({{\mathrm {Re}}}[H(z)] = {{\mathrm {Re}}}[H(\theta )]+2\eta \) with \(\eta =0.05\)

It follows that one can find a path \(\gamma \) having the required properties. It departs \(\theta \) with angles \(\pm \phi \) with \(\phi \in (\pi /2, 5\pi /6)\), and stays between the level lines that depart \(\theta \) with angles \( \pm \pi /2\) and the level lines that depart \(\theta \) with angles \(\pm 5\pi /6\) (for instance, one could follow the level lines of \({{\mathrm {Re}}}[H(z)] = {{\mathrm {Re}}}[H(\theta )]+ 2\eta \) outside of a neighbourhood of \(\theta \)). \(\square \)

We have the analogue of Proposition 6.6.

Proposition 7.5

Let \(B(\theta , \epsilon )\) be the ball of radius \(\epsilon \) centred at \(\theta \). We denote by \(\gamma ^{\epsilon }\) (resp. \(\mathcal {D}_{\theta }^{\epsilon })\) the part of the contour \(\gamma \) (resp. \(\mathcal {D}_{\theta })\) inside the ball \(B(\theta , \epsilon )\). Then, for any \(\epsilon >0\),

$$\begin{aligned} \lim _{n\rightarrow \infty } \det (I+K^{\mathrm {FPP}}_r)_{\mathbb {L}^2(\mathcal {C}_{\theta })} = \lim _{n\rightarrow \infty } \det (I+K^{\mathrm {FPP}}_{y, \epsilon })_{\mathbb {L}^2(\gamma ^{\epsilon })} \end{aligned}$$

where \(K^{\mathrm {FPP}}_{y, \epsilon }\) is defined by the integral kernel

$$\begin{aligned} K^{\mathrm {FPP}}_{y, \epsilon }(u,u') = \frac{1}{2i\pi } \int _{\mathcal {D}_{\theta }^{\epsilon }} \frac{\pi }{\sin (\pi (z-u))} \exp ( n(H(z)-H(u)) -n^{1/3}\rho (\theta )y (z-u))\frac{\mathrm {d}z}{z - u'}. \end{aligned}$$
(78)

Proof

The proof is similar to the proof of Proposition 6.6. The two main differences are

  1. 1.

    The integral defining \(K^{\mathrm {FPP}}_y\) in (74) is an improper integral, which prevents a direct application of dominated convergence.

  2. 2.

    The \(\mathbb {L}^2\) contour (i.e. the contour \(\gamma \)) is not steep-descent.

The point (2) is not an issue since in the proof of Proposition 6.6, we actually only used the fact that for any \(\epsilon >0\) there exists a constant \(C'>0\) such that \({{\mathrm {Re}}}[h(z)]-{{\mathrm {Re}}}[h(\theta )]>C'\) for \(z\in \mathcal {C}_{\theta }{\setminus } \mathcal {C}_{\theta }^{\epsilon }\). This property is still satisfied by the contour \(\gamma \).

The point (1) is resolved by bounding the integral over \(\mathcal {D}_{\theta }{\setminus } \mathcal {D}_{\theta }^{\epsilon }\) with the same kind of estimates as in the proof of Theorem 2.18. More precisely, one writes

$$\begin{aligned}&\left| \frac{1}{2i\pi } \int _{\theta +i\epsilon }^{\theta +i\infty } \exp (n(H(z)- H(u))+ \rho (\theta ) n^{1/3} y(z-u))\frac{z}{u} \frac{\mathrm {d}z}{(z-u)(z-u')} \right| \nonumber \\&\quad <\exp (-C n + n^{1/3}\rho (\theta )y (\theta -u) )\nonumber \\&\quad \quad \bigg \vert \frac{1}{2i\pi } \int _{\theta +i\epsilon }^{\theta +i\infty } \exp (i \rho (\theta ) n^{1/3} y {{\mathrm {Im}}}[z])\frac{z}{u} \frac{\mathrm {d}z}{(z-u)(z-u')} \bigg \vert . \end{aligned}$$
(79)

The integral on the right-hand side of (79) is an oscillatory integral that can be bounded uniformly in n (in fact, it goes to zero by the Riemann–Lebesgue lemma), so that it goes to zero when multiplied by \(\exp (-C n + n^{1/3}\rho (\theta )y (\theta -u) )\). \(\square \)

The rest of the proof is similar to Sect. 6. One makes the change of variables

$$\begin{aligned} z=\theta +\tilde{z}n^{-1/3},\quad u=\theta +\tilde{u}n^{-1/3},\quad u'=\theta +\tilde{u}'n^{-1/3}. \end{aligned}$$

It is again convenient to slightly deform the contours for u and \(u'\) so that the contour for \(\tilde{u}\) and \(\tilde{u}'\) becomes \(\mathcal {C}^{\epsilon n^{1/3}}\) as in Sect. 6 (\(\mathcal {C}^L\) is defined in (51)).

Proposition 7.6

We have that

$$\begin{aligned} \lim _{n\rightarrow \infty } \det (I+K^{\mathrm {FPP}}_{y,\epsilon })_{\mathbb {L}^2(\gamma ^{\epsilon })}= \det (I-K_y)_{\mathbb {L}^2(\mathcal {C})}, \end{aligned}$$

where the contour \(\mathcal {C}\) is defined in (52) and \(K_y\) is defined by its integral kernel

$$\begin{aligned} K_y(w,w') = \frac{1}{2i\pi } \int _{\infty e^{-i\pi /3}}^{\infty e^{i\pi /3}} \frac{\mathrm {d}z}{(z-w')(w-z)} \frac{e^{z^3/3-yz} }{e^{w^3/3-yw}} \end{aligned}$$

and the contour for z does not intersect \(\mathcal {C}\).

Proof

Identical to the proof of Proposition 6.7. \(\square \)

7.3 Limit shape of the percolation cluster for fixed t

As \(\theta \) goes to infinity, \(\kappa (\theta ), \tau (\theta )\) and \(\rho (\theta )\) admit the asymptotic expansions

$$\begin{aligned} \kappa (\theta )&= \frac{a}{b} + \frac{3a(a+b)}{2b} \left( \frac{1}{\theta }\right) + \mathcal {O}\left( \frac{1}{\theta }\right) ^2, \\ \tau (\theta )&= \frac{1}{2} a(a+b) \left( \frac{1}{\theta }\right) ^3+ \mathcal {O}\left( \frac{1}{\theta }\right) ^4, \\ \rho (\theta )&= \left( \frac{3}{2} a(a+b)\right) ^{1/3}\left( \frac{1}{\theta }\right) ^{5/3}\left( 1+\mathcal {O}\left( \frac{1}{\theta }\right) \right) . \end{aligned}$$

On the other hand, we have from Theorem 7.1 the convergence in distribution

$$\begin{aligned} \frac{ T(n, \kappa (\theta )n) -\tau (\theta ) n }{\rho (\theta ) n^{1/3}} \Longrightarrow \mathcal {L}_{GUE}, \end{aligned}$$

where \(\mathcal {L}_{GUE}\) is the GUE Tracy–Widom distribution.

Fig. 9

Percolation set in the Bernoulli-FPP model at different times for parameters \(a=b=1\). The different shades of gray correspond to times 0, 0.1, 0.2, 0.3, 0.4, 0.6, 1 and 4. Although the picture suggests that the convex envelope of the percolation cluster at time \(t=4\) is asymptotically a cone, this is an effect due to the relatively small size of the grid (\(300\times 300\)), and it is not true asymptotically: \(n=300\) is not enough to discriminate between c n and \(c'n^{2/3}\) (see Sect. 7.3)

Scaling \(\theta \) by \(n^{1/3}\) suggests a limit theorem for the shape of the convex envelope of the percolation cluster after a fixed time. Of course, there is a non-rigorous interchange of limits here, and one should use the Fredholm determinant representation in order to make this rigorous (we do not include this here).

Let us set \(\theta =n^{1/3}\). Then

$$\begin{aligned} \kappa (\theta )n = \frac{a}{b}n+\frac{3a(a+b)}{2b}n^{2/3} + \mathcal {O}(n^{1/3}) \end{aligned}$$

and

$$\begin{aligned} \tau (\theta )n = \frac{1}{2} a(a+b) + \mathcal {O}(n^{-1/3}). \end{aligned}$$

This suggests that the border of the percolation cluster at time \(\frac{1}{2} a(a+b)\) is asymptotically at a distance \(\frac{3a(a+b)}{2b}n^{2/3}\) from the point \(\frac{a}{b}n\) (see Fig. 9). The fact that \(\rho (\theta )n^{1/3} = \mathcal {O}(n^{-2/9})\) suggests an anomalous scaling for the fluctuations of the border of the percolation cluster. We leave this for future consideration.
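The claimed \(n^{-2/9}\) order of \(\rho (\theta )n^{1/3}\) under the scaling \(\theta =n^{1/3}\) can be checked numerically by estimating the exponent from two values of n; a Python sketch with \(a=b=1\):

```python
import math

a = b = 1.0

def kappa(t):
    return (1/t**2 - 1/(a+t)**2) / (1/(a+t)**2 - 1/(a+b+t)**2)

def rho(t):
    return (1/t**3 - 1/(a+t)**3 + kappa(t)*(1/(a+b+t)**3 - 1/(a+t)**3)) ** (1/3)

def fluct_scale(n):
    """rho(theta) * n^{1/3} with theta = n^{1/3}."""
    return rho(n ** (1/3)) * n ** (1/3)

# estimate the exponent from the log-ratio at two large values of n
n1, n2 = 1e9, 1e12
exponent = math.log(fluct_scale(n2) / fluct_scale(n1)) / math.log(n2 / n1)
assert abs(exponent - (-2/9)) < 0.01
```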