1 Introduction

Interfaces evolving according to local stochastic growth rules have been extensively studied in physics, biology, materials science, and engineering. Over the last 25 years there has been significant theoretical success in describing such rough, self-affine interfaces which evolve due to local processes. At short times these interfaces display properties particular to their local dynamics. However, in the long time limit, it is predicted that certain universal scaling exponents and exact statistical distributions should accurately describe the long time properties of a wide variety of rough interfaces. These predictions have been repeatedly confirmed through Monte-Carlo simulations as well as experiments. What is lacking, however, is a theoretical understanding and prediction of the temporal evolution of these interfaces in the large-time scaling limit. In this article we provide two complementary descriptions of this universal temporal evolution of interfaces: one from a PDE perspective and the other from an exact solvability perspective. As of yet, neither of these descriptions is justified by mathematical proof. However, a variety of predictions arising from these perspectives can be rigorously confirmed (e.g. [13–16, 32]).

The \(1+1\) dimensional KPZ universality class includes a wide variety of forms of stochastic interface growth [5, 20, 26] on a one dimensional substrate, randomly stirred one dimensional fluids (the stochastic Burgers equation) [21], polymer chains directed in one dimension and fluctuating transversally in the other due to a random potential [22], and various lattice models such as the driven lattice gas model of ASEP and the ground state polymer model of last passage percolation [24]. All of these models can be mapped onto a kinetically roughening, growing interface whose evolution reflects the competition between growth in a direction normal to the surface, a surface tension smoothing force, and a stochastic term which tends to roughen the interface. Numerical simulations along with some theoretical results have confirmed that in the long time \(t\) scaling limit, fluctuations in the height of such evolving interfaces scale like \(t^{1/3}\) and display non-trivial spatial correlations on the scale \(t^{2/3}\) [5, 12, 21, 25, 30]. These scales were confirmed experimentally in studies involving paper wetting, burning fronts, bacterial colonies and liquid crystals [5, 34].

Beyond the KPZ scalings, the universality class is characterized in terms of the long-time limits of the probability distribution of fluctuations. These depend on the initial data or geometry. Starting from (i) a narrow wedge, or droplet, one sees the GUE Tracy–Widom distribution of random matrix theory, \(F_\mathrm{GUE}\), which describes the asymptotic fluctuations of the largest eigenvalue of a random matrix from the Gaussian Unitary Ensemble [37]; while starting from (ii) a flat substrate, the analogous GOE Tracy–Widom distribution, \(F_\mathrm{GOE}\), associated to the Gaussian Orthogonal Ensemble [38], arises. A recent series of spectacular experiments involving turbulent liquid crystals [35, 36] has been able to confirm not only the predicted scaling laws but also the statistics (skewness and kurtosis) of the distribution of these fluctuations. The multi-point joint distributions of scaled fluctuations are likewise given by the (i) Airy\(_2\) [27] and (ii) Airy\(_1\) [8, 9] processes, and [35, 36] could also demonstrate that certain statistics involving the two-point correlation function agree with the predictions. A further natural initial geometry is two sided Brownian motion, for which one sees, at a later time, a new (though correlated) Brownian motion, with a global height shift given by the \(F_0\) distribution [4]. Note that all these spatial processes have \(n\)-point distributions given by Fredholm determinants.

In this work we consider two questions: (1) What are the exact statistics and multi-point joint distributions for growth off of more general initial geometries; and (2) Can one predict multi-time statistics and distributions? Our partial answers follow from our investigation into the KPZ renormalization fixed point which we denote as \(\mathfrak {h}\). We describe its Markovian evolution in two complementary ways: (1) Through a variational formulation similar to that of a stochastically forced Burgers equation, but with a new, nontrivial (but unfortunately not very explicit) driving noise, which we call the Airy sheet, and with the maximization occurring along a network of paths called the polymer fixed point; and (2) Through a formula for the transition probabilities, derived by employing non-rigorous methods of replica Bethe ansatz for the KPZ equation [11, 18, 28]. These transition probabilities should enable one to compute multi-point statistics for general initial geometries, and using the Markov property, this should enable one to compute multi-time statistics, thus answering both questions. A complete description or characterization of the noise arising in the forced Burgers equation could provide a stochastic analysis approach to show universality of the KPZ fixed point.

Besides the usual issues with the replica method for the KPZ equation (e.g. the moment problem is not well-posed and consequently one must deal with trying to sum divergent series), in deriving our transition probability formulas we employ an asymptotic factorization ansatz on the Bethe wavefunctions. For narrow wedge initial data, this ansatz has been shown to lead in the long time limit to the expected multi-point joint distributions [19, 23, 28]. However, since we are dealing with general initial data it is not clear whether this factorization approximation (after asymptotics) leads to the true transition probability formulas. (Such transition probability formulas are only known for a few types of initial data, such as flat and two sided Brownian motion.) As such, our transition probability formulas should be treated as a plausible answer; they pass several basic tests such as scaling invariance and the Markov property. They also reproduce the known formulas for one dimensional distributions for general initial data. In the Appendix we consider the two-point distribution for flat initial data and produce a formula using our transition probability formulas. Unfortunately, we have not been able to match this (or to show that this does not match) with the expected Airy\(_1\) process formulas.

Although the connection is not yet understood, our transition probability formula should be accessible via asymptotics of a less explicit determinantal formula derived earlier for the microscopic model TASEP [9, 33]. Another possible route to make rigorous our transition probability formula (or disprove it) is through the rigorous replica Bethe ansatz developed in [7, 10] for \(q\)-TASEP.

The results of this article should not be treated as mathematically rigorous; rather, they are intended to provide conjectural descriptions of the KPZ renormalization fixed point. Significant and serious mathematical challenges remain in making any part of these conjectures rigorous.

2 Variational Formulation

2.1 The KPZ Equation

In 1986, Kardar–Parisi–Zhang (KPZ) [25] proposed the model equation (which now bears their names)

$$\begin{aligned} \partial _t h = \tfrac{1}{2} (\partial _xh)^2 + \tfrac{1}{2} \partial _x^2 h + \xi , \end{aligned}$$
(1)

from which the universality class takes its name. The noise \(\xi \) is Gaussian space-time white noise with formal covariance \(\langle \xi (t,x) \xi (s,y)\rangle = \delta (t-s)\delta (y-x)\). It is, in fact, a mathematical challenge to even define this equation, let alone study its long-time scaling behavior. The physically relevant notion of solution is provided through the Hopf–Cole transform \(Z = e^h\) which (formally) transforms the KPZ equation into the well-posed stochastic heat equation (SHE) with multiplicative noise

$$\begin{aligned} \partial _tZ = \tfrac{1}{2} \partial _x^2 Z + \xi Z. \end{aligned}$$
(2)

We will always work with the so-called Hopf–Cole solution to the KPZ equation which is defined as

$$\begin{aligned} h(t,x) = \log Z(t,x). \end{aligned}$$

The narrow wedge initial conditions correspond to starting \(Z\) with a delta function, and the flat initial conditions correspond to starting with \(Z\equiv 1\). Correspondingly, in the liquid crystal experiments the laser excites a single point, or a line.

Many discrete growth models have a tunable asymmetry, and (1) appears as a continuum limit in the diffusive time scale as this parameter is critically tuned close to zero [1, 3, 6]. Let us demonstrate this idea by explaining how the KPZ equation scales. For real \(b,z\) define the scaled KPZ solution

$$\begin{aligned} h_{\epsilon ;b,z}(t,x) = \epsilon ^{b} h(\epsilon ^{-z}t,\epsilon ^{-1}x) \end{aligned}$$

Under this scaling,

$$\begin{aligned} \partial _t h_{\epsilon ;b,z} = \tfrac{1}{2}\epsilon ^{2-z} \partial _{x}^2 h_{\epsilon ;b,z} + \tfrac{1}{2} \epsilon ^{2-z-b} (\partial _x h_{\epsilon ;b,z})^2 + \epsilon ^{b-z/2+1/2}\xi . \end{aligned}$$

Note that the noise on the right-hand side is not literally the same for different \(\epsilon \), though in distribution it is. It is natural to ask whether there are any scalings under which the KPZ equation is invariant. If so, then one could hope to scale a given growth process in the same way to arrive at the KPZ equation. However, one checks that there is no way to do this. On the other hand, there are certain weak scalings which do fix the KPZ equation. By weak we mean that, simultaneously as we scale time, space and fluctuations, we also put tuning parameters in front of certain terms in the KPZ equation and scale them with \(\epsilon \). In other words, we simultaneously scale time, space, fluctuations, and the model itself. Let us consider two weak scalings.
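Concretely, demanding that all three coefficients in the rescaled equation equal one yields the overdetermined system

$$\begin{aligned} 2-z=0,\qquad 2-z-b=0,\qquad b-\tfrac{z}{2}+\tfrac{1}{2}=0, \end{aligned}$$

whose first two equations force \(z=2\) and \(b=0\), while the third then reads \(-\tfrac{1}{2}=0\); hence no strict scaling invariance is possible.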

Weak Non-linearity Scaling Take \(b=1/2,z=2\). The first and third terms stay fixed, but the middle term blows up. Thus, insert a constant \(\lambda _{\epsilon }\) in front of the non-linear term \((\partial _xh)^2\) and set \(\lambda _{\epsilon }=\epsilon ^{1/2}\). Under this scaling, the KPZ equation is mapped to itself.

Weak Noise Scaling Take \(b=0,z=2\). Under this scaling, the linear \(\partial _x^2 h\) and non-linear \((\partial _x h)^2\) terms stay fixed, but now the noise blows up. So insert a constant \(\beta _{\epsilon }\) in front of the noise term and set \(\beta _{\epsilon }=\epsilon ^{1/2}\), and again the KPZ equation stays invariant.
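In formulas, starting from (1) with a coefficient \(\lambda \) (respectively \(\beta \)) placed in front of the non-linearity (respectively the noise), the two rescaled equations read

$$\begin{aligned} \partial _t h_{\epsilon ;1/2,2}&= \tfrac{1}{2}\lambda \,\epsilon ^{-1/2} (\partial _xh_{\epsilon ;1/2,2})^2 + \tfrac{1}{2} \partial _x^2 h_{\epsilon ;1/2,2} + \xi ,\\ \partial _t h_{\epsilon ;0,2}&= \tfrac{1}{2} (\partial _xh_{\epsilon ;0,2})^2 + \tfrac{1}{2} \partial _x^2 h_{\epsilon ;0,2} + \beta \,\epsilon ^{-1/2}\xi , \end{aligned}$$

so that the choices \(\lambda =\lambda _{\epsilon }=\epsilon ^{1/2}\) and \(\beta =\beta _{\epsilon }=\epsilon ^{1/2}\) return (1) exactly.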

One can hope that these rescalings are attractive, in the sense that if one takes models with a parameter (non-linearity or noise) that can be tuned, then these models will all converge to the same limiting object. There are a handful of rigorous mathematical results showing that weak non-linearity scaling can be applied to particle growth processes [3, 6, 17] and weak noise scaling can be applied to directed polymers.

A third scaling of interest is the following.

KPZ Scaling It was predicted by [21, 25] that under the scaling \(b=1/2\) and \(z=3/2\) the KPZ equation should have non-trivial limiting behavior. It is this scaling and its limit which is of primary interest in this paper. Figure 1 summarizes these scalings and the role of the KPZ equation, KPZ fixed point and EW fixed point.

Fig. 1

Three types of scalings for the KPZ equation. Weak noise and weak non-linearity scaling fix the KPZ equation whereas under KPZ scaling, the KPZ equation should go to the KPZ fixed point. It is believed (and in some cases shown) that this extends to a variety of growth processes and directed polymer models. The KPZ equation represents a heteroclinic orbit connecting the Edwards–Wilkinson (EW) and KPZ fixed points.

2.2 The Renormalization Operator

We now fix the KPZ scaling \(b=1/2\) and \(z=3/2\) and thus set

$$\begin{aligned} h_\epsilon (t,x)=(R_\epsilon h)(t,x) =\epsilon ^{1/2} h( \epsilon ^{-3/2} t, \epsilon ^{-1}x). \end{aligned}$$
(3)

Under these changes of variables, \(h_\epsilon \) satisfies (1) with renormalized coefficients,

$$\begin{aligned} \partial _t h_\epsilon = \tfrac{1}{2} (\partial _xh_\epsilon )^2 + \epsilon ^{1/2}\tfrac{1}{2} \partial _x^2 h_\epsilon + \epsilon ^{1/4} \xi . \end{aligned}$$
(4)

Note that the initial data for this equation also gets rescaled, that is, if (1) has initial data \(h(0,x)=h^0(x)\), then the initial data for \(h_\epsilon \) in (4) is \(h_\epsilon (0,x)=(R_\epsilon h)(0,x)=\epsilon ^{1/2}h^0(\epsilon ^{-1}x)\). It would seem, at first glance, that as \(\epsilon \rightarrow 0\) all of the coefficients on the right-hand side go to zero, except that of the non-linearity. Thus, formally one might expect the limit to satisfy the deterministic inviscid Burgers equation \(\partial _t h = \tfrac{1}{2}(\partial _xh)^2\). However, this cannot be true, since it would imply two false statements: that the solution with narrow wedge initial data becomes deterministic in the limit, and that Brownian motion is not invariant.

The KPZ renormalization fixed point \(\mathfrak {h}\) should be the \(\epsilon \rightarrow 0\) limit of the (properly centered) process \(\bar{h}_\epsilon \). We now provide a description of what this limit should be.

Let \(h(u,y;t,x)\) be the solution of (1) for times \(t>u\) with \(Z=e^h\) started at time \(u\) with a delta function at \(y\), all using the same noise (for different \(u,y\)). To center, set

$$\begin{aligned} \bar{h}(u,y;t,x) ={h}(u,y;t,x) -\tfrac{1}{24}(t-u) -\log \sqrt{2\pi (t-u)} \end{aligned}$$

and define \(A_{1}\) by

$$\begin{aligned}&\bar{h}(u,y;t,x) = -\tfrac{ (x-y)^2}{ 2(t-u)} + A_{1}(u,y;t,x).&\end{aligned}$$

After the rescaling (3),

$$\begin{aligned} R_\epsilon \bar{h}(u,y;t,x) = -\tfrac{ (x-y)^2}{ 2(t-u)} + A_{\epsilon }(u,y;t,x) \end{aligned}$$

where \(A_{\epsilon }=R_\epsilon A_{1}\). As \(\epsilon \rightarrow 0\), \(A_{\epsilon }(u,y;t,x)\) should converge to a four-parameter field which we henceforth call the space-time Airy sheet \(\mathcal {A}(u,y;t,x)\). In each spatial variable it should be an \(\text {Airy}_2\) process [27] and it should enjoy several nice properties:

  1. Independent increments: \(\mathcal {A}(u,y;t,x)\) is independent of \(\mathcal {A}(u',y';t',x')\) if \((u,t)\cap (u',t')=\emptyset \);

  2. Space and time stationarity: \( \mathcal {A}(u,y;t,x) \mathop {=}\limits ^\mathrm{dist} \mathcal {A}(u+h,y;t+h,x) \mathop {=}\limits ^\mathrm{dist} \mathcal {A}(u,y+z;t,x+z) \);

  3. Scaling: \( \mathcal {A}(0,y;t,x) \mathop {=}\limits ^\mathrm{dist} t^{1/3} \mathcal {A}(0,t^{-2/3}y;1,t^{-2/3}x)\);

  4. Semi-group property: for \(u<s<t\),

    $$\begin{aligned} \mathcal {A}(u,y;t,x) = \sup _{z\in \mathbb R} \big \{\tfrac{(x-y)^2}{2(t-u)} -\tfrac{(z-y)^2}{2(s-u)}-\tfrac{(x-z)^2}{2(t-s)} + \mathcal {A}(u,y;s,z)+\mathcal {A}(s,z;t,x) \big \}. \end{aligned}$$

Using \(\mathcal {A}(u,y;t,x)\) we construct our conjectural description of the KPZ fixed point \(\mathfrak {h}(t,x)\). By the Hopf–Cole transformation and the linearity of the SHE, the centered solution of (1), \(\bar{h}(t,x) = h(t,x) - \tfrac{t}{24} - \log \sqrt{2\pi t}\), with initial data \(h^0\), can be written after rescaling as

$$\begin{aligned} R_{\epsilon } \bar{h}(t,x) = \epsilon ^{1/2}\ln \int \exp \Big ( \epsilon ^{-1/2} \big \{ -\tfrac{(x-y)^2}{2t} + A_\epsilon (0,y;t,x) +R_\epsilon h^0(y)\big \} \Big ) dy. \end{aligned}$$

If we choose initial data \(h^0_\epsilon \) so that \(R_\epsilon h^0_{\epsilon }\) converges to some (possibly random) function \(f\) as \(\epsilon \rightarrow 0\), we can use Laplace’s method to evaluate \( \mathfrak {h}(t,x)=\lim _{\epsilon \rightarrow 0}R_\epsilon \bar{h}(t,x) = T_{0,t} f(x) \) where

$$\begin{aligned} T_{u,t}f(x):= \sup _{y\in \mathbb R}\left\{ - \tfrac{ (x-y)^2}{ 2(t-u)} + \mathcal {A}(u,y;t,x) + f(y) \right\} . \end{aligned}$$
(5)
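Although no explicit description of \(\mathcal {A}\) is available, the variational formula (5) is straightforward to discretize. The following minimal sketch (a numerical illustration only; the array A is a placeholder standing in for values of the Airy sheet on a grid, since no exact sampler for \(\mathcal {A}\) is known) shows the structure of the sup-convolution:

```python
import numpy as np

def evolve(f, A, y, x, t_minus_u):
    """Discrete analogue of (5): T_{u,t} f(x_j) ~ max_i { -(x_j-y_i)^2/(2(t-u)) + A[i,j] + f[i] }.

    f : values of the initial profile on the grid y (numpy array, -inf allowed)
    A : A[i, j] plays the role of the Airy sheet A(u, y_i; t, x_j)  -- placeholder data only
    """
    Y, X = np.meshgrid(y, x, indexing="ij")   # Y[i, j] = y_i, X[i, j] = x_j
    return np.max(-(X - Y) ** 2 / (2.0 * t_minus_u) + A + f[:, None], axis=0)
```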

The operators \(T_{u,t}\), \(0<u<t\), form a semi-group, i.e. \(T_{u,t}=T_{s,t}T_{u,s}\) for \(u<s<t\), which is stationary with independent increments, and such that

$$\begin{aligned} T_{0,t} \mathop {=}\limits ^\mathrm{dist} (R_{t^{-2/3}})^{-1}T_{0,1}R_{t^{-2/3}}. \end{aligned}$$

Additionally, if \(\alpha >0\) then

$$\begin{aligned} T_{0,t}\big (\alpha f(\alpha ^{-2}\,\cdot \,)\big )(x)\mathop {=}\limits ^\mathrm{dist}\alpha \, T_{0,\alpha ^{-3}t}f(\alpha ^{-2}x). \end{aligned}$$
(6)

By the Markov property, the joint distribution of the marginal spatial process of \(\mathfrak {h}\) (for initial data \(f\)) at a set of times \(t_1< t_2<\cdots < t_n\) should be given by

$$\begin{aligned} (\mathfrak {h}(t_1),\ldots , \mathfrak {h}(t_n))\mathop {=}\limits ^\mathrm{dist} (T_{0,t_1}f, \ldots , T_{t_{n-1},t_n}\cdots T_{0,t_1}f). \end{aligned}$$

The process of randomly evolving functions can be thought of as a high dimensional analogue of Brownian motion (with function-valued state space), and the \(T_{t_{i},t_{i+1}}\) as analogous to the independent increments.
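In terms of the sketch following (5) (illustration only: the arrays A1, A2 below are zero placeholders, not genuine Airy sheet samples; their independence is what property 1 of \(\mathcal {A}\) would provide), a two-time marginal would be computed by composing two evolution steps:

```python
import numpy as np
y = x = np.linspace(-3.0, 3.0, 121)
f = np.where(np.abs(y) < 1e-9, 0.0, -np.inf)   # narrow-wedge-like data: 0 at the origin, -inf elsewhere
A1 = np.zeros((y.size, x.size))                # placeholders for two independent copies of the sheet
A2 = np.zeros((x.size, x.size))
h1 = evolve(f, A1, y, x, t_minus_u=1.0)        # ~ T_{0,1} f on the grid x
h2 = evolve(h1, A2, x, x, t_minus_u=1.0)       # ~ T_{1,2} T_{0,1} f
```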

The solution \(h(u,y;t,x)\) of (1) (with \(e^h\) started at time \(u\) with a delta function at \(y\)) corresponds to the free energy of a directed random polymer \(x(s)\), \(u<s<t\) starting at \(y\) and ending at \(x\), with quenched random energy (see [2] for a rigorous construction of this measure)

$$\begin{aligned} \int _u^t \{|\dot{x}(s)|^2 -\xi (s,x(s)) \}\,ds. \end{aligned}$$
(7)

Under the rescaling (3) this probability measure on paths should converge to the polymer fixed point: a continuous path \(\pi _{u,y;t,x}(s)\), \(u\le s\le t\), from \(y\) to \(x\) which, at a discrete set of times \(u=s_0<s_1<\cdots <s_{m-1}<t\), is given by the argmax over \(x_1,\ldots , x_{m-1}\) of

$$\begin{aligned} (T_{u,s_1}\delta _y)(x_1) + (T_{s_1,s_2} \delta _{x_1})(x_2) +\cdots + (T_{s_{m-1},t}\delta _{x_{m-1}})(x). \end{aligned}$$
(8)

This is the analogue in the present context of the minimization of the action, and the polymer fixed point paths are analogous to characteristics in the randomly forced Burgers equation. One might hope to take the analogy further, find a limit of the renormalizations of (7), and minimize it to obtain the path \(\pi _{u,y;t,x}\). However, the limit of the energy (7) does not appear to exist, so one has to be satisfied with a limit of the path measures themselves. The path \(\pi _{0,y;t,x}\) should be Hölder continuous with exponent \(1/3-\), as compared to Brownian motion where the Hölder exponent is \(1/2-\). As the mesh of times is made finer, a limit \(\mathcal {E}(\pi _{0,y;t,x})\) of (8) should exist, and through it we can write the time evolution of the KPZ fixed point in terms of the polymer fixed point through the analogue of the Lax–Oleinik variational formula,

$$\begin{aligned} T_{u,t}f(x) = \sup _{y\in \mathbb R} \{ \mathcal {E}(\pi _{u,y;t,x})+ f(\pi _{u,y;t,x}(u))\}. \end{aligned}$$
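Since (8) is a sum of terms each coupling only consecutive points, its maximizer over \(x_1,\ldots ,x_{m-1}\) can be found by dynamic programming. A minimal sketch (the arrays G_list[i] are placeholders standing in for the kernels \((T_{s_i,s_{i+1}}\delta _{z_a})(z_b)\) on a common spatial grid \(z\); these kernels are not explicitly computable) reads:

```python
import numpy as np

def argmax_path(G_list, y_idx, x_idx):
    """Maximizing path of (8) by dynamic programming and backtracking.

    G_list[i][a, b] stands in for (T_{s_i, s_{i+1}} delta_{z_a})(z_b)  (placeholder data);
    the path is pinned to z[y_idx] at the initial time and z[x_idx] at the final time.
    """
    best = G_list[0][y_idx, :].copy()        # best score over paths from y to each grid point
    back = []
    for G in G_list[1:]:
        scores = best[:, None] + G           # scores[a, b]: arrive at a, then move to b
        back.append(np.argmax(scores, axis=0))
        best = np.max(scores, axis=0)
    path = [x_idx]                           # backtrack from the pinned endpoint
    for bp in reversed(back):
        path.append(bp[path[-1]])
    path.append(y_idx)
    return path[::-1]                        # grid indices of the path at times s_0, ..., s_m
```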

The KPZ fixed point, space-time Airy sheet, and polymer fixed point should be universal and arise in random polymers, last passage percolation and growth models: anything in the KPZ universality class. Just as for (1), for some models at the microscopic scale, approximate versions of the variational problem (5) hold, becoming exact as \(\epsilon \rightarrow 0\). For example, consider the PNG model [27] with a finite collection of nucleations spaced order \(\epsilon ^{-1}\) apart (see Fig. 2). At time \(\epsilon ^{-3/2}t\) we look at fluctuations on the scale \(\epsilon ^{-1/2}\) at spatial locations \(\epsilon ^{-1}x\). As \(\epsilon \) goes to zero, these fluctuations (after proper centering) should converge to \(\mathfrak {h}\) where the initial data \(f\) is \(-\infty \) except at the nucleation points, where it is zero. By introducing additional nucleations at times of order \(\epsilon ^{-1/2}\) and at spatial locations order \(\epsilon ^{-1}\) apart, it is possible to modulate the value of \(f\) at these non-\(-\infty \) points. Taking the number of nucleation points large allows one to recover any \(f\). The experiment of [35, 36] is well described by the KPZ fixed point with a single nucleation. Future experiments could probe the effect of additional nucleations. Using statistics to differentiate between types of initial data given finite-time observations is a driving force for the development of the following exact formulas, which provide theoretical predictions.

Fig. 2

Polynuclear growth with two nucleations distance \(\epsilon ^{-1}\) apart observed at times \(\epsilon ^{-3/2}t_i\) (\(i=1,2,3\)) in an \(\epsilon ^{-1}\) spatial scale and \(\epsilon ^{-1/2}\) fluctuation scale

3 Transition Probability Formulation

We start with a simple version of our proposed formulas which are the results of computations described in Sect. 3.1.

Given \({c}_i, y_i\in \mathbb R\), \(i=1,\ldots ,M\), and \(s_j,x_j\in \mathbb R\), \(j=1,\ldots ,K\), let

$$\begin{aligned} \mathfrak {h}(t,x)=(T_t f)(x)\quad \text {with}\quad f(y_i)=-\big (\tfrac{t}{2}\big )^{1/3} [c_i -(y_i-x_1)^2]\quad \text {and}\quad f=-\infty ~\text {otherwise}. \end{aligned}$$

Then the computations of Sect. 3.1 suggest that

$$\begin{aligned} \mathbb {P}\big (\mathfrak {h}(t,x_j)\le \big (\tfrac{t}{2}\big )^{1/3}[s_j -(x_j-x_1)^2], ~j=1,\ldots ,K\big ) = \det (I - \bar{L}) \end{aligned}$$
(9)

where \(\bar{L}\) is the operator with kernel

$$\begin{aligned}&\bar{L}(z,z') \nonumber \\&\quad = \int _{A(\mathbf{{s},\mathbf {c}})} dv_1\cdots dv_{M+1} \, du_1\cdots du_{K}\, e^{(y_1-x_1)H}K_{\mathrm{Ai}}|v_1\rangle \langle v_1| e^{(y_2-y_1)H}|v_2\rangle \cdots \langle v_M| e^{(x_1-y_M)H}|v_{M+1}\rangle \nonumber \\&\qquad \times \,\delta (u_1 - v_{M+1}) \langle u_1|e^{ ( x_1 -x_2) H}|u_2\rangle \cdots \langle u_{K-1}|e^{ (x_{K- 1} - x_{K}) H}|u_{K}\rangle \langle u_{K}|e^{(x_{K} - x_1) H}K_{\mathrm{Ai}}\end{aligned}$$
(10)

acting on \(L^2(\mathbb R)\), where \(H=-\frac{d^2}{dx^2} +x\) is the Airy operator, \(K_{\mathrm{Ai}}\) is the \(\mathrm{Airy}_2\) operator which is the spectral projection of \(H\) onto its negative eigenvalues, and

$$\begin{aligned} A(\mathbf{{s},\mathbf {c}}) =\big \{ (\mathbf{{u},\mathbf {v}}): \max _{i=1,\ldots ,K}\{u_i-s_i\}+ \max _{j=1,\ldots ,M}\{v_j -c_j\} \ge v_{M+1}\big \}. \end{aligned}$$
(11)

We assume here that \(x_1<x_2<\cdots <x_K\) and \(y_1>\cdots >y_M\). The formulas give a consistent family of finite dimensional distributions. Specializing to \(K=1\) or \(M=1\) one checks that the resulting process (in \(x_1\) and \(y_1\), respectively) is the Airy\(_2\) process in each variable. Note that, in view of (6), the left-hand side of (9) can be rewritten as \(\mathbb {P}\big (\mathfrak {h}(2,x_j)\le [s_j -(x_j-x_1)^2], ~j=1,\ldots ,K\big )\) where \(\mathfrak {h}(2,x)=T_2\tilde{f}(x)\) with \(\tilde{f}(y_j)=-[c_j -(2/t)^{4/3}y_j^2]\) and \(\tilde{f}=-\infty \) otherwise.
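Determinantal expressions of this kind are amenable to numerical evaluation by a standard quadrature (Nyström-type) discretization of the Fredholm determinant. As an illustration of that technique only (this is not the kernel (10) itself), the following sketch evaluates \(\det (I-K_{\mathrm{Ai}})_{L^2(s,\infty )}=F_\mathrm{GUE}(s)\), the one-point distribution arising for narrow wedge initial data:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import airy

def airy_kernel(x, y, umax=20.0, nu=200):
    # K_Ai(x, y) = \int_0^infty Ai(x+u) Ai(y+u) du, truncated at u = umax
    u, wu = leggauss(nu)
    u = 0.5 * umax * (u + 1.0); wu = 0.5 * umax * wu
    Ax = airy(np.add.outer(x, u))[0]     # Ai(x_i + u_k)
    Ay = airy(np.add.outer(y, u))[0]
    return (Ax * wu) @ Ay.T

def F_GUE(s, L=12.0, n=80):
    # det(I - K_Ai) on L^2(s, infinity), discretized by Gauss-Legendre quadrature on [s, s+L]
    z, w = leggauss(n)
    x = s + 0.5 * L * (z + 1.0); w = 0.5 * L * w
    sw = np.sqrt(w)
    K = airy_kernel(x, x)
    return np.linalg.det(np.eye(n) - (sw[:, None] * K) * sw[None, :])

print(F_GUE(-2.0))   # roughly 0.41, the GUE Tracy-Widom distribution function at s = -2
```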

From the above the transition probabilities of \(T_t=T_{0,t}\) can be obtained by approximation. Define the operator \(Y_{[a,b]}^{g}\) via its action \(Y_{[a,b]}^{g}f(y) =u(b,y)\) where \(u(t,x)\) solves the boundary value problem

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \partial _t u = -H u, &{} a<t<b, \\ u(t,x)=0, &{} x\ge g(t),\\ \end{array} \right. \end{aligned}$$
(12)

with initial data \(u(a,x) = f(x)\). Let \(c(\cdot )\) and \(s(\cdot )\) be (nice) functions such that \(s(\cdot )\) has finite support in \([L_s,R_s]\) and \(c(\cdot )\) has finite support in \([L_c,R_c]\) (here by finite support we mean that outside that interval the function is \(+\infty \)). Then

$$\begin{aligned} \mathbb {P}\big ({T}_t f (x)\le (\tfrac{t}{2})^{1/3}[s(x)-(x-L_s)^2], x\in [L_s,R_s]\big ) = \det (I-K_{\mathrm{Ai}}+\hat{L}) \end{aligned}$$
(13)

where \(f(y) =-(\frac{t}{2})^{1/3}\left( c(y) - (y-L_s)^2\right) \) and

$$\begin{aligned} \hat{L}(z,z') = \iint \! dm \,du\,\Theta _{1,m,u}(u,z)\frac{d}{dm} \Theta _{2,-m}(z',u), \end{aligned}$$
(14)
$$\begin{aligned} \Theta _{1,m,u_1}(u_2,z) = \left( Y_{[L_s,R_s]}^{s(\cdot )+u_1+m} e^{(R_s-L_s) H}K_{\mathrm{Ai}}\right) (u_2,z), \end{aligned}$$
$$\begin{aligned} \Theta _{2,m'}(z',v_{M+1})=\left( K_{\mathrm{Ai}}e^{(R_c-L_s)H}Y_{[L_c,R_c]}^{\hat{c}(\cdot )+m'} e^{(L_s-L_c) H}\right) (z',v_{M+1}), \end{aligned}$$
(15)

where \(\hat{c}(y)=c(L_c+R_c-y)\).

3.1 Replica Bethe Ansatz

In this section we follow the methods of [11, 18] as further developed in [28, 29]. Let \({Z}(y,x;t)\) be the solution of (2) at \(x\) at time \(t\) with initial data \(\delta _{y}\) at time \(0\). Given \(s_k, x_k\), \(k=1,\ldots ,K\) and \(c_m, y_m\), \(m=1,\ldots ,M\) consider \(\sum _{k=1}^Ke^{-s_k} {Z}(t,x_k)\) where \({Z}(t,x)\) solves the SHE with initial data \(\sum _{m=1}^{M} e^{-c_m}\delta _{y_m}\). From the linearity of the SHE this can be written as \(\sum _{k=1}^{K}\sum _{m=1}^{M}e^{-s_k-c_m} {Z}(y_m,x_k;t)\). Following [11, 18] we compute the generating function

$$\begin{aligned} G(s,x;c,y) = \mathbb {E}\bigg [\!\exp \!\Big (\!-e^{\tfrac{t}{24}} \big \{\sum _{k=1}^{K}\sum _{m=1}^{M}e^{-s_k-c_m} {Z}(y_m,x_k;t)\big \}\Big )\bigg ]. \end{aligned}$$

By studying asymptotics under KPZ scaling we obtain the formula for the transition probabilities.

Expanding the generating function exponential we write this using replicas as

$$\begin{aligned} G(s,x;c,y) = 1 + \sum _{N=1}^{\infty } \frac{(-1)^N e^{tN/24}}{N!} \left\langle I^N| e^{-H_N t}|F^N\right\rangle \end{aligned}$$
(16)

where \(\mathrm{H}_N\) is the Hamiltonian of the attractive \(N\)-particle delta-Bose gas

$$\begin{aligned} \mathrm{H}_N= -\frac{1}{2}\sum _{i=1}^{N} \partial _{x_i}^2 - \frac{1}{2} \sum _{i\ne j=1}^{N} \delta (x_i-x_j) \end{aligned}$$
(17)

and the \(N\)-particle states are \(\langle I^N|\!=\!\! \sum _{m_1,\ldots m_N=1}^{M} e^{-\sum _jc_{m_j}} \langle y_{m_1}\cdots y_{m_N}|\) and \(|F^N\rangle \!=\!\! \sum _{k_1,\ldots ,k_N=1}^{K} e^{-\sum _js_{k_j}} |x_{k_1}\cdots x_{k_N}\!\rangle \). The wave functions are symmetric, so the propagator is only needed on the symmetric subspace. Thus we may employ the eigenfunction expansion of \(\mathrm{H}_N\) (e.g. [11, 18]),

$$\begin{aligned} \langle I^N| e^{-H_N t}|F^N\rangle = \sum _r e^{-t E_r} \left\langle F^N|\psi _r\rangle \langle \psi _r|I^N\right\rangle . \end{aligned}$$
(18)

The eigenfunctions are given by

$$\begin{aligned} \psi _\mathbf{q, n}^{(L)} =\mathop {\mathop {\sum }\nolimits ^{\prime }}\limits _{p\in {\mathcal P}} A_p(\mathbf{y}) \exp \Big \{ i \sum _{\alpha =1}^L q_\alpha \sum _{c\in \Omega _{\alpha (p)}} y_c - \frac{1}{4}\sum _{\alpha =1}^L \sum _{c,c'\in \Omega _{\alpha (p)}}|y_c-y_{c'}| \Big \}. \end{aligned}$$
(19)

Here \(1\le L\le N\) is the number of clusters of particles, \(n_\alpha \) is the number of particles in the \(\alpha \)-th cluster, and \(N=\sum _{\alpha =1}^L n_\alpha \) is the total number of particles. \(\sum ^{\prime }_{p\in {\mathcal P}}\) denotes the sum over those permutations of \(N\) elements which permute particles between different clusters. \(\mathbf{q} = (q_1,\ldots ,q_L)\) are the momenta, and \(\mathbf{y} = (y_1,\ldots ,y_N)\). See [18] for the definitions of the coefficients \(A_p(\mathbf{y}) \) and the partition \(\Omega _\alpha (p)\), \(\alpha =1,\ldots , L\) of \(\{1,\ldots ,N\}\).

Expanding the tensor products given above, and still writing \(r\) to index the eigenfunctions for simplicity, we get

$$\begin{aligned}&G(s,x;c,y) \\&\quad = 1 +\sum _{N=1}^{\infty } \frac{(-1)^N e^{tN/24}}{N!} \sum _{r}e^{-tE_r}|\psi _r(\mathbf{0})|^2 \sum _{k_1,\ldots , k_N=1}^{K} e^{-(s_{k_1}+\cdots + s_{k_N})} \frac{\psi _r(x_{k_1},\ldots , x_{k_N})}{\psi _r(\mathbf{0})}\\&\qquad \times \sum _{m_1,\ldots , m_N=1}^{M} e^{-(c_{m_1}+\cdots + c_{m_N})} \left[ \frac{\psi _r(y_{m_1},\ldots , y_{m_N})}{\psi _r(\mathbf{0})}\right] ^*. \end{aligned}$$

Using the factorization approximation of [28] with the identity \(e^{au+bv+cuv}=e^{c\partial _{a}\partial _b}e^{au+bv}\),

$$\begin{aligned}&\sum _{k_1,\ldots , k_N=1}^{K} e^{-(s_{k_1}+\cdots + s_{k_N})} \frac{\psi _r(x_{k_1},\ldots , x_{k_N})}{\psi _r(\mathbf{0})}\nonumber \\&\quad \approx \prod _{\alpha =1}^{L} e^{-\tfrac{1}{4} \sum _{a,b=1}^{K} |x_a-x_b|\partial _{s_a}\partial _{s_b}} \left( \sum _{k=1}^{K}e^{-s_k+i q_{\alpha }x_k}\right) ^{n_{\alpha }}\!. \end{aligned}$$

This approximation is believed to hold asymptotically in \(t\), the evidence being the recovery of the Airy\(_2\) process in the limit when \(M=1\) [28].
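For the record, the operator identity invoked in this factorization is elementary: expanding the exponential of the mixed derivative,

$$\begin{aligned} e^{c\partial _a\partial _b}e^{au+bv} = \sum _{n\ge 0}\frac{c^n}{n!}(uv)^n\, e^{au+bv} = e^{au+bv+cuv}. \end{aligned}$$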

We now follow many of the manipulations made in [28] in the case when \(M=1\). Plugging in the above factorization we have an approximate generating function,

$$\begin{aligned}&G^{\#}(s,x;c,y)\\&\quad = 1 \!+\! \sum _{N=1}^{\infty } \frac{(-1)^N e^{\frac{1}{24}tN}}{N!} \sum _{r}e^{-tE_r}|\psi _r(\mathbf{0})|^2 \prod _{\alpha =1}^{L} e^{-\frac{1}{4} \sum _{a,b=1}^{K} |x_a-x_b|\partial _{s_a}\partial _{s_b}} \left( \sum _{k=1}^{K}e^{-s_k+i q_{\alpha }x_k}\right) ^{n_{\alpha }} \\&\qquad \quad \times \prod _{\alpha =1}^{L} e^{-\frac{1}{4} \sum _{a,b=1}^{M} |y_a-y_b|\partial _{c_a}\partial _{c_b}} \left( \sum _{l=1}^{M}e^{-c_l-i q_{\alpha }y_l}\right) ^{n_{\alpha }}\!\!\!\!. \end{aligned}$$

Now recall [18] that the eigenenergies are \(E_r = \frac{1}{2}\sum _{j=1}^{L} n_j q_j^2 - \frac{1}{24} \sum _{j=1}^{L} (n_j^3-n_j)\), that

$$\begin{aligned} |\psi _r(\mathbf{0})|^2 = N! \det \left( \frac{1}{\tfrac{1}{2}(n_j+n_k)+i(q_j-q_k)}\right) _{j,k=1,\ldots ,L}, \end{aligned}$$

and that the normalized sum over eigenstates is given by

$$\begin{aligned} \sum _r= \sum _{L=1}^{\infty } \frac{1}{L!} \prod _{j=1}^{L} \left( \int _{-\infty }^{\infty } \frac{dq_j}{2\pi } \sum _{n_j=1}^{\infty }\right) \mathbf{1}_{N=\sum _{j}n_j}. \end{aligned}$$

Thus we can write

$$\begin{aligned}&G^{\#}\\&\quad =1+ \sum _{L=1}^{\infty } \frac{1}{L!}\left[ \prod _{j=1}^{L}\int _{-\infty }^{\infty } \frac{dq_j}{2\pi } \sum _{n_j=1}^{\infty } e^{\frac{1}{24}tn_j^3-J}\left\{ -e^{-\frac{1}{2}tq_j^2}\left( \sum _{a=1}^{K} e^{ix_a q_j-s_a}\right) \left( \sum _{a=1}^{M} e^{-iy_a q_j-c_a}\right) \right\} ^{n_j}\right] \\&\qquad \times \det \left( \frac{1}{\tfrac{1}{2}(n_j+n_k)+i(q_j-q_k)}\right) \end{aligned}$$

where

$$\begin{aligned} J=\tfrac{1}{4}\textstyle \sum \nolimits _{a,b=1}^{K}|x_a-x_b|\partial _{s_a}\partial _{s_b} +\tfrac{1}{4}\sum \nolimits _{a,b=1}^{M}|y_a-y_b|\partial _{c_a}\partial _{c_b}. \end{aligned}$$

At this point one is forced to adopt a choice \(e^{tm^3/3}= \int \! du\,\mathrm{Ai}(u) e^{umt^{1/3}}\) of analytic continuation to complex \(m\). Then one observes, following [18], that \(G^{\#}\) can be written as a Fredholm determinant \(G^{\#} = \det (1+R)\) where \(R\) has kernel \(R(q,m;q',m') \) given by

$$\begin{aligned} \frac{1}{2\pi (\tfrac{1}{2}(m+m') +i (q-q'))}e^{\tfrac{tm^3}{24}-J}\left\{ -e^{-\tfrac{tq^2}{2}} {\textstyle \left( \sum _{a=1}^{K} e^{ix_a q-s_a}\right) \left( \sum _{a=1}^{M} e^{-iy_a q-c_a}\right) }\right\} ^{m}\!\!. \end{aligned}$$

Define for vectors \(\mathbf{s}=(s_1,\ldots , s_K)\) and \(\mathbf{c}=(c_1,\ldots , c_{M})\) the function

$$\begin{aligned} \Phi (\mathbf{s};\mathbf{c})= \frac{ (e^{s_1}+\cdots + e^{s_{K}})(e^{c_1}+\cdots + e^{c_{M}})}{ 1+ (e^{s_1}+\cdots + e^{s_{K}})(e^{c_1}+\cdots + e^{c_{M}})}. \end{aligned}$$

Following [28] we obtain that \(G^{\#} = \det (1-\tilde{N})\) with

$$\begin{aligned} \tilde{N}(z,z') = \mathbf{1}_{z,z'>0}e^{-\hat{J}}\int du \mathrm{Ai}(u+z)\mathrm{Ai}(u+z') \Phi (\alpha \mathbf{u -s}; -\mathbf{c}) \end{aligned}$$

where \(\mathbf{u} = (u, \ldots , u)\) and where, for \(\alpha = (t/2)^{1/3}\),

$$\begin{aligned} \textstyle \hat{J} = J+ (2\alpha )^{-1}(\partial _z-\partial _{z'}) \left( \sum _{a=1}^{K} x_a\partial _{s_a}-\sum _{b=1}^{M} y_b\partial _{c_b}\right) . \end{aligned}$$

We can follow the method of [28] equation (4.21) to replace \(\partial _z\) by either \(- \partial _{z'} +\alpha \sum _{b=1}^K\partial _{s_b}\) or \( -\partial _{z'} +\alpha \sum _{b=1}^M\partial _{c_b}\) because \(\sum _{b=1}^K\partial _{s_b}\) and \(\sum _{b=1}^M\partial _{c_b}\) have the same action on \(\Phi \). Hence we get

$$\begin{aligned} \tilde{N}(z,z') = \tau _{ \mathbf{x}^2/2t, \mathbf{y}^2/2t } \mathbf{1}_{z,z'>0}e^{J_\mathbf{y}}e^{J_\mathbf{x}}\int du \mathrm{Ai}(u+z)\!\mathrm{Ai}(u+z') \Phi (\alpha \mathbf{u -s}; -\mathbf{c}) \end{aligned}$$
(20)

where

$$\begin{aligned} J_\mathbf{x} ={\textstyle -\sum _{a>b}^{K}x_a\partial _{s_a}\partial _{s_b}-\tfrac{1}{2}\sum _{a=1}^{K} x_a\partial _{s_a}^2 -\tfrac{1}{4\alpha ^3}\sum _{a=1}^{K} x_a^2 \partial _{s_a} +\tfrac{\partial _{z'}}{\alpha } \sum _{a=1}^{K} x_a\partial _{s_a}}, \end{aligned}$$

\(J_\mathbf{y}\) is the analogue of \(J_\mathbf{x}\) with \(\mathbf{s},\mathbf{x}\) replaced by \(\mathbf{c},-\mathbf{y}\), and

$$\begin{aligned} \tau _{ \mathbf{x}^2/2t, \mathbf{y}^2/2t }= \exp \left\{ {\textstyle \frac{1}{4\alpha ^3} \sum _{a=1}^{K} x_a^2 \partial _{s_a} +\frac{1}{4\alpha ^3} \sum _{a=1}^{M} y_a^2 \partial _{c_a} }\right\} \end{aligned}$$

is the parabolic shift (recall that \(f(s+d)=\exp (d\partial _s)f(s)\)). From now on we move into the frame with the parabolic shifts removed (this accounts for the parabolas in the transition probability formula (9); the shift by \(x_1\) will be explained later), and call the associated operator \(\tilde{L}\) (instead of \(\tilde{N}\)). Thus, in view of (16), we are computing

$$\begin{aligned} \mathbb {E}\!\left( \exp \!\left[ -\sum _{k=1}^K e^{\frac{t}{24}+h(t,x_k)-(s_k -x_k^2/2t)}\right] \right) \approx \det (I - \tilde{L}) \end{aligned}$$
(21)

with \(h(t,x)=\log (Z(t,x))\) the Hopf–Cole solution to the KPZ equation with initial condition given by \(Z(0,\cdot )=\sum _{m=1}^Me^{-c_m+y_m^2/2t}\delta _{y_m}\).
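The passage from expectations of this type to the probabilities appearing in (9) rests, as usual in this context, on the elementary limit

$$\begin{aligned} \exp \Big (-\sum _{k=1}^K e^{\epsilon ^{-1/2}a_k}\Big )\longrightarrow \prod _{k=1}^K\mathbf{1}_{a_k< 0} \qquad \text {as } \epsilon \rightarrow 0 \quad (a_k\ne 0), \end{aligned}$$

so that once the parameters are rescaled as at the end of this section, each exponent in (21) is, to leading order, \(\epsilon ^{-1/2}\) times an order-one quantity and the left-hand side of (21) converges to the joint probability in (9).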

Observe (as in [28]) that one may write (setting \(x_{K+1}=y_{M+1}=0\)) \( e^{J_\mathbf{x}} = e^{\tfrac{x_1}{2\alpha ^2}z'}A^\mathbf{x}_1\cdots A^\mathbf{x}_K, \) and \( e^{J_\mathbf{y}} = e^{\tfrac{-y_1}{2\alpha ^2}z'}A^\mathbf{y}_1\cdots A^\mathbf{y}_M, \) where

$$\begin{aligned} A^\mathbf{x}_\ell =\exp \Big \{{\textstyle -\frac{x_\ell -x_{\ell +1}}{2}(\sum _{a=1}^{\ell }\partial _{s_{a}})^2-\frac{(x_{\ell }-x_{\ell +1})^2}{4\alpha ^3}\sum _{a=1}^{\ell }\partial _{s_{a}} -\frac{x_{\ell }-x_{\ell +1}}{2\alpha ^2}z'-\frac{x_\ell -x_{\ell +1}}{\alpha }\partial _{z'}(\sum _{a=1}^{\ell }\partial _{s_{a}})}\Big \} \end{aligned}$$

and \(A^\mathbf{y}_\ell \) is the analogous operator with \(\mathbf{s},\mathbf{x}\) replaced by \(\mathbf{c},-\mathbf{y}\). By the Baker–Campbell–Hausdorff formula,

$$\begin{aligned} A^\mathbf{y}_1\cdots A^\mathbf{y}_Me^{\tfrac{x_1}{2\alpha ^2}z'}= e^{\tfrac{x_1}{2\alpha ^3} \sum _{\ell =1}^My_\ell \partial _{c_\ell } + \tfrac{x_1}{2\alpha ^2}z'}A^\mathbf{y}_1\cdots A^\mathbf{y}_M. \end{aligned}$$
(22)

This gives for our kernel

$$\begin{aligned} \mathbf{1}_{z,z'>0} e^{\tfrac{x_1}{2\alpha ^3} \sum _{\ell =1}^My_\ell \partial _{c_\ell } + \tfrac{x_1-y_1}{2\alpha ^2}z'} \int \! du \,A^\mathbf{y}_1\cdots A^\mathbf{y}_MA^\mathbf{x}_1\cdots A^\mathbf{x}_K\Phi (\alpha \mathbf{u -s}, -\mathbf{c})\mathrm{Ai}(u+z)\mathrm{Ai}(u+z'). \end{aligned}$$
(23)

Introducing an auxiliary variable \(u_{K+1}\), the integral in (23) becomes

$$\begin{aligned} \int du_{K+1}\,du\,\delta (u_{K+1} - u) A^\mathbf{y}_1\cdots A^\mathbf{y}_MA^\mathbf{x}_1\cdots A^\mathbf{x}_K\Phi (\alpha \mathbf{u -s}, -\mathbf{c})\mathrm{Ai}(u_{K+1}+z)\mathrm{Ai}(u+z'). \end{aligned}$$
(24)

Now observe the identity

$$\begin{aligned} A^\mathbf{x}_\ell \Phi (\alpha \mathbf{u-s}, \star )\mathrm{Ai}(\star )\mathrm{Ai}(u+z') = e^{\frac{x_\ell -x_{\ell +1}}{2\alpha ^2}H_u}\Phi (\alpha \mathbf{u-s}, \star )\mathrm{Ai}(\star )\mathrm{Ai}(u+z') \end{aligned}$$

where \(\star \) represents that all subsequent terms do not depend on \(u\), and where \(H_u\) is the Airy operator acting in \(u\), \(H_u = -\partial _u^2 + u\). Likewise,

$$\begin{aligned} A^\mathbf{y}_\ell \Phi (\star ,\alpha \mathbf{u-c})\mathrm{Ai}(\star )\mathrm{Ai}(u+z') = e^{\frac{y_{\ell +1}-y_{\ell }}{2\alpha ^2}H_u}\Phi (\star ,\alpha \mathbf{u-c})\mathrm{Ai}(\star )\mathrm{Ai}(u+z'). \end{aligned}$$

From the first identity we can replace the operator \(A^\mathbf{x}_{K}\) by \(e^{\frac{x_K-x_{K+1}}{2\alpha ^2}H_u}\) applied to \(\Phi \) times the Airy function terms only. Introducing an additional variable \(u_{K}\) and a delta function \(\delta (u_{K}-u)\) we get

$$\begin{aligned}&\int \!du_{K}\,du_{K+1}\,du\,\delta (u_K-u)\delta (u_{K+1}-u_K) A^\mathbf{y}_1\cdots A^\mathbf{y}_MA^\mathbf{x}_1\cdots A^\mathbf{x}_{K-1}e^{\frac{x_K-x_{K+1}}{2\alpha ^2}H_u}\\&\quad \times \big [\Phi (\alpha \mathbf{u -s}, -\mathbf{c}) \!\mathrm{Ai}(u_{K+1}+z)\!\mathrm{Ai}(u+z')\big ]. \end{aligned}$$

Then we can integrate by parts so as to move the action onto only the delta function, and writing \(A\delta (v-u)=A(u,v)\) for an operator \(A\) acting on \(u\), (24) becomes

$$\begin{aligned}&\int \!du_{K}\,du_{K+1}\,du\, \delta (u_{K}-u)e^{\frac{x_K-x_{K+1}}{2\alpha ^2}H}(u_{K},u_{K+1})\\&\quad \times A^\mathbf{y}_1\cdots A^\mathbf{y}_MA^\mathbf{x}_1\cdots A^\mathbf{x}_{K-1} \Phi \big (\alpha u -s_1,\ldots ,\alpha u-s_{K-1}, \alpha u_{K} \!-\! s_K, -\mathbf{c}\big )\!\mathrm{Ai}(u_{K+1}+z)\!\mathrm{Ai}(u+z'). \end{aligned}$$

By introducing this additional delta function we have been able to replace the variable \(u\) in the term \(\alpha u -s_{K}\) by \(\alpha u_{K}-s_{K}\) and likewise \(e^{\frac{x_K-x_{K+1}}{2\alpha ^2}H}(u,u_{K+1})\) by \(e^{\frac{x_K-x_{K+1}}{2\alpha ^2}H}(u_{K},u_{K+1})\). Iterating this procedure \(K-1\) times we obtain

$$\begin{aligned}&\int du_1\cdots du_{K+1}\,du\, \delta (u_1-u)e^{\frac{x_1-x_2}{2\alpha ^2} H}(u_1,u_2)\cdots e^{\frac{x_K-x_{K+1}}{2\alpha ^2} H}(u_K,u_{K+1})\\&\quad \times A^\mathbf{y}_1\cdots A^\mathbf{y}_{M}\Phi \big (\alpha u_1 -s_1,\ldots \alpha u_{K} -s_K, -\mathbf{c}\big ) \mathrm{Ai}(u_{K+1}+z)\mathrm{Ai}(u+z'). \end{aligned}$$

So far the manipulations have followed exactly those of [28]. Now in order to apply a similar procedure for the \(A^\mathbf{y}_l\) operators, apply the change of variables \(u_i\mapsto u_i+u\) for \(i=1,\ldots , K\) (but not \(K+1\)), which yields

$$\begin{aligned}&\int \!du_1\cdots du_{K+1}du\, \delta (u_1) \langle u_1|e^{ \frac{ x_1-x_2}{2\alpha ^2} H}| u_{2}\rangle \cdots \left\langle u_K| e^{ \frac{ x_K-x_{K+1}}{2\alpha ^2} H}|u_{K+1}-u\right\rangle \\&\quad \times A^\mathbf{y}_1\cdots A^\mathbf{y}_{M}\Phi \big (\alpha u_1 -s_1,\ldots \alpha u_{K} -s_K, \alpha \mathbf{u- c}\big )\! \mathrm{Ai}(u_{K+1}+z)\!\mathrm{Ai}(u+z'). \end{aligned}$$

The key point is that we now have \(\alpha u -c_i\) in the second set of slots of \(\Phi \). We proceed now for the \(A^\mathbf{y}_{\ell }\) just as we did for the \(A^\mathbf{x}_{\ell }\). Introduce a new variable \(v_{M+1}\) and a delta function \(\delta (v_{M+1}-u)\), use the formula above to replace \(A^\mathbf{y}_{M}\) by \(\exp \{\frac{y_{M+1}-y_{M}}{2\alpha ^2}H_u\}\) applied to the product of the \(\Phi \) and \(\mathrm{Ai}\) functions, integrate by parts and finally introduce yet another new variable \(v_{M}\). Doing this once yields

$$\begin{aligned}&\int \!dv_{M}\, dv_{M+1} \,du_1\cdots du_{K+1}\,du\, \delta (u_1) \langle u_1|e^{ \frac{ x_1-x_2}{2\alpha ^2} H}| u_{2}\rangle \cdots \langle u_K| e^{ \frac{ x_K-x_{K+1}}{2\alpha ^2} H}|u_{K+1}\!-\!v_{M+1}\rangle \\&\quad \times \delta (v_{M}-u) (e^{\frac{y_{M+1}-y_{M}}{2\alpha ^2}H}\delta ( v_{M+1}-v_M) ) A^\mathbf{y}_1\cdots A^\mathbf{y}_{M-1}\\&\quad \times \Phi \big (\alpha u_1 \!-\! s_1,\ldots \alpha u_{K} \!-\! s_K, \alpha u-c_1,\ldots ,\alpha u \!-\! c_{M-1}, \alpha v_{M}-c_M\big ) \!\mathrm{Ai}(u_{K+1}+z)\!\mathrm{Ai}(u+z'). \end{aligned}$$

We may iterate this \(M-1\) more times to get

$$\begin{aligned}&\int \! dv_{1}\cdots dv_{M+1}\, du_1\cdots du_{K+1}\,du\, \delta (u_1) \langle u_1|e^{ \frac{ x_1-x_2}{2\alpha ^2} H}| u_{2}\rangle \cdots \langle u_K| e^{ \frac{ x_K-x_{K+1}}{2\alpha ^2} H}|u_{K+1}\!-\!v_{M+1}\rangle \\&\quad \times \delta (v_{1}-u)\langle v_1| e^{\frac{y_2-y_{1}}{2\alpha ^2}H}| v_2\rangle \cdots \langle v_M| e^{\frac{y_{M+1}-y_{M}}{2\alpha ^2}H}| v_{M+1}\rangle \\&\quad \times \Phi \big (\alpha u_1 -s_1,\ldots \alpha u_{K} -s_K; \alpha v_1-c_1,\ldots , \alpha v_{M}-c_M\big )\!\mathrm{Ai}(u_{K+1}+z)\!\mathrm{Ai}(u+z'). \end{aligned}$$

Now we make the change of variables \(u_i\mapsto u_i -v_{M+1}\) for \(i=1,\ldots , K\) (but not \(K+1\)). Because of the shift invariance of the problem we can assume without loss of generality that \(x_1=0\) (this accounts for the shift by \(x_1\) in (9)). Since \(x_{K+1}=0\) by assumption as well, the term that comes from the shift of the operators telescopes to zero. Also we introduce the function \(\mathrm{Ai}_z(s) = \mathrm{Ai}(s+z)\). We can gather the terms involving \(u_{K+1}\) and, also including the \(\mathbf{1}_{z>0}\) term, we have \(\langle u_{K}|e^{\frac{x_{K}}{2\alpha ^2}H}K_{\mathrm{Ai}}|\mathrm{Ai}_z\rangle \). We can also gather the terms involving \(u\) along with \(e^{\frac{-y_1}{2\alpha ^2}z'}\) and \(\mathbf{1}_{z'>0}\). Observe that \(\int _{-\infty }^{\infty } du\,\mathbf{1}_{z'>0}\mathrm{Ai}_{z'}(u) e^{\frac{-y_1}{2\alpha ^2}z'} \delta (v_1-u) = \langle \mathrm{Ai}_{z'}|e^{\frac{y_1}{2\alpha ^2}H}K_{\mathrm{Ai}}|v_1\rangle .\) The final result for our operator \(\tilde{L}\) is

$$\begin{aligned}&\tilde{L}(z,z')\nonumber \\&\quad = \int dv_1 \cdots dv_{M+1} du_1\cdots du_{K} \delta (u_1 - v_{M+1}) \langle u_1|e^{ \frac{ x_1-x_2}{2\alpha ^2} H}|u_2\rangle \cdots \langle u_{K-1}|e^{ \frac{x_{K-1}-x_{K}}{2\alpha ^2} H}|u_{K}\rangle \nonumber \\&\qquad \times \langle v_1| e^{\frac{y_2-y_{1}}{2\alpha ^2}H}|v_2\rangle \cdots \langle v_M| e^{\frac{y_{M+1}-y_{M}}{2\alpha ^2}H}|v_{M+1}\rangle \langle u_{K}|e^{\frac{x_{K}}{2\alpha ^2}H}K_{\mathrm{Ai}}|\mathrm{Ai}_z\rangle \langle \mathrm{Ai}_{z'}|e^{\frac{y_1}{2\alpha ^2}H}K_{\mathrm{Ai}}|v_1\rangle \nonumber \\&\qquad \times \Phi \big (\alpha (u_1-v_{M+1}) -s_1,\ldots \alpha (u_{K}-v_{M+1}) -s_K, \alpha v_1-c_1,\ldots , \alpha v_{M}-c_M\big ). \end{aligned}$$
(25)

Recall now that we are interested in the asymptotics of this formula under the KPZ scaling (3). In particular, we need to scale \((t,x)\) as \((\epsilon ^{-3/2}t,\epsilon ^{-1}x)\), which leads to setting \(\alpha =\epsilon ^{-1/2}(t/2)^{1/3}\). Observe that, with this choice, \(2\alpha ^2=2^{1/3}\epsilon ^{-1}t^{2/3}\), and thus in order to obtain the desired asymptotics we replace \(x_{a} \mapsto 2^{1/3}\epsilon ^{-1}t^{2/3}x_{a}\), \(y_{a} \mapsto 2^{1/3}\epsilon ^{-1}t^{2/3} y_{a}\), \(s_{a}\mapsto \epsilon ^{-1/2}(t/2)^{1/3} s_{a}\), and \(c_{a}\mapsto \epsilon ^{-1/2}(t/2)^{1/3} c_{a}\). Note that under KPZ scaling \(-c_i+y_i^2/2t\) rescales to \((t/2)^{1/3}(-c_i+y_i^2)\) while, similarly, \(s_j-x_j^2/2t\) rescales to \((t/2)^{1/3}(s_j-x_j^2)\). With this scaling, the left-hand side of (21) leads to

$$\begin{aligned} \mathbb {P}\!\left( \mathfrak {h}(t,x_k)\le (t/2)^{1/3}(s_k-x_k^2),\,k=1,\ldots ,K\right) \end{aligned}$$

with \(\mathfrak {h}(t,x)=T_tf(x)\) and \(f(y_m)=-(t/2)^{1/3}(c_m-y_m^2)\) for \(y=y_m\) and \(f=-\infty \) otherwise which, in view of (9) is exactly what we are looking for (recall that we have set \(x_1=0\)). The formula for the kernel \(\bar{L}\) follows from taking \(\epsilon \rightarrow 0\), or alternatively \(\alpha \rightarrow \infty \) in (25). Note that

$$\begin{aligned} \Phi \big (\alpha (u_1-v_{M+1} -s_1),\ldots ,\alpha (u_{K}-v_{M+1} -s_K), \alpha (v_1-c_1),\ldots , \alpha (v_{M}-c_M)\big )\rightarrow \mathbf{1}_{A(\mathbf{s,c})} \end{aligned}$$
(26)

where \(A(\mathbf{{s},\mathbf {c}})\) is given in (11); indeed, \(\Phi =S/(1+S)\) where \(S\) is the product of the two exponential sums, and as \(\alpha \rightarrow \infty \) one has \(S\rightarrow \infty \) or \(S\rightarrow 0\) according to whether \(\max _{i}\{u_i-s_i\}+\max _{j}\{v_j-c_j\}-v_{M+1}\) is positive or negative. For \(x_1=0\), this gives (10) after a similarity transformation. The formula for general \(x_1\) follows by simply shifting the \(x\) and \(y\) coordinates by \(x_1\).

Finally, in order to pass to the continuum limit we first write (10) as \(K_{\mathrm{Ai}}-\int _{A^\mathrm{c}(\mathbf{{s},\mathbf {c}})}\) (if the restriction to \(A(\mathbf{{s},\mathbf {c}})\) is dropped altogether, the exponents of the \(e^{\,\cdot \,H}\) factors telescope to zero and, since \(K_{\mathrm{Ai}}\) commutes with these factors and \(K_{\mathrm{Ai}}^2=K_{\mathrm{Ai}}\), the unrestricted integral is just \(K_{\mathrm{Ai}}\)), where the complementary set can be written

$$\begin{aligned} A^\mathrm{c}(\mathbf{{s},\mathbf {c}}) = \lim _{\gamma \rightarrow 0} \bigcup _{m\in \gamma \mathbb Z}\big \{ (\mathbf{{u},\mathbf {v}}): \max \{u_1-s_1,\ldots , u_K-s_K\}\le m+ v_{M+1},\\ \min \{ c_1-v_1 ,\ldots , c_M- v_M\} \in [m,m+\gamma )\big \}. \end{aligned}$$

We obtain

(27)

where

$$\begin{aligned}&B_{m,m'} \\&\quad = \big \{ (\mathbf{{u},\mathbf {v}}): \max \{u_1-s_1,\ldots , u_K\!-\! s_K\}\!\le \! m\!+\! v_{M+1}, \max \{ v_1-c_1 ,\ldots , v_M- c_M\} \!\le \! m'\big \}. \end{aligned}$$

Observe that

$$\begin{aligned} \mathbf{1}_{B_{m,m'}} = \prod _{i=1}^{K} \mathbf{1}_{u_i\le s_i+m+v_{M+1}} \prod _{i=1}^{M} \mathbf{1}_{v_i \le c_i+m'}. \end{aligned}$$

This implies, using the fact that \(K_{\mathrm{Ai}}\) is self adjoint and a projection to move it to the right entirely, that the limit in (27) is

$$\begin{aligned} \lim _{\gamma \rightarrow 0}\sum _{m\in \gamma \mathbb Z} \int \! dv_{M+1}\, \tilde{\Theta }^M_{2,m}(z',v_{M+1})\delta (u_1\!-\!v_{M+1})\tilde{\Theta }^K_{1,m,v_{M+1}}(u_1,z), \end{aligned}$$

where, writing \(\bar{P}_a\) for the operator of multiplication by \(\mathbf{1}_{(-\infty ,a]}\),

$$\begin{aligned}&\tilde{\Theta }^K_{1,m,v_{M+1}}(u_1,z) \\&\quad = \bar{P}_{s_1+m+v_{M+1}} e^{( x_1-x_2) H}\cdots \bar{P}_{s_{K-1}+m+v_{M+1}} e^{ (x_{K-1}-x_{K}) H} \bar{P}_{s_K+m+v_{M+1}}e^{(x_{K}-x_1)H}K_{\mathrm{Ai}}, \end{aligned}$$

and \(\tilde{\Theta }^M_{2,m} = \tilde{\Theta }^M_{3,-m+\gamma } - \tilde{\Theta }^M_{3,-m}\) where

$$\begin{aligned}&\tilde{\Theta }^M_{3,m'}(z',v_{M+1})\\&\quad =e^{(y_1-x_1)H}\bar{P}_{c_1+m'} e^{( y_2-y_1) H}\cdots \bar{P}_{c_{M-1}+m'} e^{ (y_{M}-y_{M-1}) H} \bar{P}_{c_M+m'}e^{(x_{1}-y_{M})H}K_{\mathrm{Ai}}. \end{aligned}$$

Fix two intervals \([L_s,R_s]\) and \([L_c,R_c]\), and functions \(s:[L_s,R_s]\rightarrow \mathbb R\) and \(c:[L_c,R_c]\rightarrow \mathbb R\). Now let \(K=M=n\) and let \(L_s\le x_{1}<\cdots < x_n\le R_s\) and \(R_c\ge y_1>\cdots > y_n\ge L_c\) be evenly spaced within these intervals. The limit as the mesh goes to zero (i.e., as \(n\) goes to infinity) in (27) is then given by

$$\begin{aligned} \lim _{\gamma \rightarrow 0}\sum _{m\in \gamma \mathbb Z} \int dv_{M+1} \tilde{\Theta }^\infty _{2,m}(z',v_{M+1})\delta (u_1\! -\!v_{M+1})\tilde{\Theta }^\infty _{1,m,v_{M+1}}(u_1,z), \end{aligned}$$

where \(\tilde{\Theta }^\infty _{2,m} = \tilde{\Theta }^\infty _{3,-m+\gamma } - \tilde{\Theta }^\infty _{3,-m}\) as before, and where \(\tilde{\Theta }^\infty _{1,m,v_{M+1}} = Y_{[L_s,R_s]}^{s(\cdot )+v_{M+1}+m} e^{(R_s-L_s) H}K_{\mathrm{Ai}}\) and \(\tilde{\Theta }^\infty _{3,m'}=Y_{[L_c,R_c]}^{\hat{c}(\cdot )+m'} e^{(L_s-L_c)H}\) with \(Y_{[a,b]}^g\) defined in (12) and with \(\hat{c}(y) = c( L_c+R_c-y)\). We may now take \(\gamma \) to zero. We include a multiplicative factor of \(\gamma \) so that the sum over \(m\) converges to an integral, and divide \(\tilde{\Theta }^\infty _{2,m}\) by \(\gamma \) so that it converges to a derivative. This yields (14)–(15).