1 Introduction

The KPZ equation [40] is a stochastic partial differential equation describing the growth by normal deposition of an interface in \((d+1)\) space dimensions, see e.g. [6, 14]. By definition the time evolution of the height \(h(t,x)\), \(x\in \mathbb {R}^d\), is given by

$$\begin{aligned} \partial _t h(t,x)=\nu \Delta h(t,x)+\lambda |\nabla h|^2 +\sqrt{D}\, \eta (t,x), \qquad x\in \mathbb {R}^d \end{aligned}$$
(1.1)

where \(\eta (t,x)\) is a regularized white noise, and \(\nu ,\lambda ,D>0\) are constants. Three terms contribute to Eq. (1.1): a viscous term proportional to the viscosity \(\nu \), leading to a smoothening of the interface; a growth by normal deposition with rate \(\lambda \), called the deposition rate, which plays the rôle of a coupling constant; and a random rise or lowering of the interface modelling molecular diffusivity, with coefficient D called the noise strength. In a related context, h also represents the free energy of directed polymers in a random environment [15, 20, 34]. It makes sense to consider more general nonlinearities of the form \(V(\nabla h)\) with V, say, positive and convex, instead of \(|\nabla h|^2\), which is in any case an approximation of \(2(\sqrt{1+|\nabla h(t,x)|^2}-1)\), assuming that the gradient \(|\nabla h|\) (the slope of the interface) remains small enough throughout for the evolution to make physical sense, precluding e.g. overhangs.
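
The small-slope approximation invoked here is a one-line Taylor computation; spelling it out:

```latex
% Small-slope expansion, with u := |\nabla h|^2:
2\left(\sqrt{1+u}-1\right) = u - \frac{u^2}{4} + O(u^3),
% so the quadratic KPZ nonlinearity |\nabla h|^2 approximates the
% normal-deposition speed up to an error of order O(|\nabla h|^4).
```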

The interest here is in the large-scale limit of this equation, for t and/or x large. A well-known naive rescaling argument gives some idea of how this limit depends on the dimension. Namely, the linearized equation, a stochastic heat (or infinite-dimensional Ornstein-Uhlenbeck [50]) equation called the Edwards–Wilkinson model [6] in the physics literature,

$$\begin{aligned} \partial _t \phi (t,x)=\nu \Delta \phi (t,x)+ \sqrt{D}\, \eta (t,x), \quad (t,x)\in \mathbb {R}_+\times \mathbb {R}^d \end{aligned}$$
(1.2)

– where \(\eta \) requires no regularization – is invariant under the rescaling \(\phi (t,x)\mapsto \phi ^{\varepsilon }(t,x):= \varepsilon ^{-{1\over 2}(\frac{d}{2}-1)}\phi (\varepsilon ^{-1} t,\varepsilon ^{-{1\over 2}} x)\); we used here the equality in distribution \(\eta (\varepsilon ^{-1} t,\varepsilon ^{-{1\over 2}}x){\mathop {=}\limits ^{(d)}} \varepsilon ^{{1\over 2}(1+\frac{d}{2})}\eta (t,x)\). Assuming instead that \(\phi \) is a solution of the KPZ equation yields after rescaling

$$\begin{aligned} \partial _t \phi ^{\varepsilon }(t,x)=\nu \Delta \phi ^{\varepsilon }(t,x)+ \varepsilon ^{{1\over 2}(\frac{d}{2}-1)} \lambda |\nabla \phi ^{\varepsilon }(t,x)|^2 + \sqrt{D}\, \eta ^{\varepsilon }(t,x), \end{aligned}$$
(1.3)

where (up to a change of regularization) \(\eta ^{\varepsilon }{\mathop {=}\limits ^{(d)}}\eta \). For \(d>2\), the prefactor \(\varepsilon ^{{1\over 2}(\frac{d}{2}-1)}\) vanishes in the limit \(\varepsilon \rightarrow 0\); in other words, the KPZ equation is infra-red super-renormalizable, hence (power-like) asymptotically free at large scales in \(d\ge 3\) dimensions. It is therefore expected to behave, in a small-coupling (also called small-disorder) regime where \(\lambda \ll 1\), like the corresponding linearized equation, up to a redefinition (called renormalization) of the diffusion constant \(\nu \) and of the noise strength D.
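
The equality in distribution used in the rescaling may be checked at the level of covariances (ignoring the regularization):

```latex
% White noise has covariance
\langle \eta(t,x)\,\eta(t',x')\rangle = \delta(t-t')\,\delta^{(d)}(x-x'),
% so, using the scaling \delta(a\,u) = a^{-1}\delta(u) of the Dirac distribution,
\big\langle \eta(\varepsilon^{-1}t,\varepsilon^{-\frac12}x)\,
            \eta(\varepsilon^{-1}t',\varepsilon^{-\frac12}x')\big\rangle
  = \varepsilon^{1+\frac{d}{2}}\,\delta(t-t')\,\delta^{(d)}(x-x'),
% which is precisely the covariance of
% \varepsilon^{\frac12(1+\frac{d}{2})}\,\eta(t,x).
```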

Let us emphasize the striking difference with the one-dimensional \({\mathrm {KPZ}}_1\) equation. For this equation, the scaling behavior, see (1.3), is reversed with respect to \(d\ge 3\); in other words, KPZ\(_1\) is (power-like) asymptotically free at small scales (i.e. in the ultra-violet), or equivalently (in the PDE analysts’ terminology) sub-critical. A large part of the interest in this equation comes from the fact that the large-scale, strongly coupled theory [3, 20] is understood by comparison with integrable discrete statistical physics models [21, 52, 55, 56] related to the weakly asymmetric exclusion process [7] or to the Tracy–Widom distribution of the largest eigenvalue of random matrices, connected with the Bethe Ansatz [56], free fermions and determinantal processes [35],... Note that \({\mathrm {KPZ}}_2\) is believed by perturbative QFT arguments to be strongly coupled at large scales [6, 14], and its large-scale limit is not at all understood.

In the present work we prove the diffusive limit of the d-dimensional KPZ equation \((d\ge 3)\) with small coupling, thus establishing on firm mathematical ground old predictions of physicists, see e.g. Cardy [14]. The space dimension d does not really matter as long as \(d\ge 3\). In the small-coupling regime, contrary to the 1d case, we fall into the Edwards–Wilkinson universality class.

In comparison with the achievements made in the study of strongly coupled, large-scale \({\mathrm {KPZ}}_1\), this problem may look at first sight less important and less difficult. We believe that the interest of our result lies in the precision of our asymptotics, and in the potentially wide scope of applicability of our methods.

Namely, the KPZ model is one particular instance of a large variety of dynamical problems in statistical physics, modelled as interacting particle systems or as parabolic SPDEs heuristically derived through some mesoscopic limit. These problems have been turned into a functional integral form analogous to the Gibbs measure of equilibrium statistical mechanics, \(e^{-\int {{\mathcal {L}}}_0- g\int {{\mathcal {L}}}_{int}}\), using the so-called response field (RF), or Martin-Siggia-Rose (MSR), formalism, and studied by means of standard perturbative expansions originating from quantum field theory (QFT); for reviews see e.g. [14] or [2]. Despite its lack of mathematical rigor, this formalism yields a correct description of the qualitative behaviour of such dynamical problems in the large-scale limit.

The Feynman perturbative approach, see e.g. [43], consists in expanding \(\exp -g\int \mathcal{L}_{int}\) into a series in g and making a clever resummation of some truncation of it into so-called counterterms, represented in terms of a sum of diagrams. As such, it is non-rigorous, since it yields N-point functions in terms of an asymptotic expansion in the coupling parameter g which is divergent in all interesting cases (at least for bosonic theories). A few years ago, however, Gubinelli, Hairer, Weber,... [5, 12, 13, 16,17,18, 29,30,31,32, 47], drawing sometimes on a dynamical approach to the construction of equilibrium measures advocated by Nelson [49], Parisi-Wu [51], and Jona-Lasinio, Mitter and Sénéor [36,37,38], started developing this philosophy in a systematic way to solve sub-critical parabolic SPDEs rigorously, i.e. beyond perturbation theory. Such SPDEs have only a finite number of counterterms, each counterterm being the sum of a finite number of terms (which can be interpreted in terms of Feynman diagrams); this makes the task considerably easier, but still far from trivial.

Constructive approaches, developed in the context of statistical physics by mathematical physicists from the mid-1960s on, see e.g. [22,23,24,25,26, 33, 44, 45, 48, 60] and surveys [27, 46, 53, 54, 58], have produced sophisticated, systematic truncation methods making it possible to control the error terms. The partial resummations are interpreted in the manner of K. Wilson [61, 62] as a scale-by-scale, finite renormalization of the parameters \(\nu ,D,\lambda \) of the Lagrangian \(\mathcal{L}_0+g\mathcal{L}_{int}\). In many instances it has proved possible to subtract scale-by-scale counterterms explicitly by hand and prove that the remainder is finite, yielding some description of the effective, large-scale theory; see e.g. works in diverse contexts (random walks in random environment, KAM theory, etc.) by Bricmont, Gawedzki, Kupiainen and coauthors [9,10,11], and recent extensions to the study of sub-critical parabolic SPDEs [41, 42], as an alternative to the “global counterterm” strategy mentioned in the last paragraph. However, the implementation of a full-fledged, multi-scale constructive scheme has so far been limited to equilibrium statistical physics models.

The present work is, to the best of our knowledge, the first attempt to use such a scheme in the context of non-equilibrium statistical mechanics, here for a parabolic SPDE. Instead of using the MSR formalism, we develop (as all previously mentioned mathematically rigorous approaches do) a more straightforward approach, starting directly from the equation and cutting the propagator \(e^{t\nu \Delta }\) into scales. We actually work on the following model.

The model

Let \(d\ge 3\). We consider the following equation on \(\mathbb {R}_+\times \mathbb {R}^d\),

$$\begin{aligned} (\partial _t-\nu ^{(0)}\Delta )h(t,x)=\lambda |\nabla h(t,x)|^2+ \sqrt{D^{(0)}}\, (\eta (t,x)-v^{(0)}), \qquad h\big |_{t=0}=h_0 \end{aligned}$$
(1.4)

where \(\eta \) is a white noise regularized in time and in space; \(h_0\) is a smooth, bounded, integrable initial condition, i.e. \(||h_0||_{L^{\infty }}:=\sup _{x\in \mathbb {R}^d} |h_0(x)|, ||h_0||_{L^1}:=\int _{\mathbb {R}^d} dx\, |h_0(x)|\) are \(<\infty \); \(\lambda >0\) is small enough; and \(v^{(0)}\) is a constant, average interface velocity which we shall fix later on.

The precise choice of regularization for the white noise is unimportant; one should just keep in mind that local (in time and space) solvability of (1.1) in a strong sense requires that, for every compact set \(\bar{\Delta }\subset \mathbb {R}^d\) [equivalently, for any \(\bar{\Delta }\in \bar{\mathbb {D}}^0\) as in Definition 3.1 (iii)], \(t\mapsto \sup _{x\in \bar{\Delta }} \left( |\eta (t,x)| +|\nabla \eta (t,x)| \right) \) is locally integrable. For simplicity of exposition, we define \(\eta \) to be a smooth, stationary Gaussian noise with short-range covariance. To be definite:

We fix a smooth, isotropic (i.e. invariant under space rotations) function \(\omega :\mathbb {R}\times \mathbb {R}^d\rightarrow \mathbb {R}\) with support \(\subset B(0,{1\over 2})\) and \(L^1\)-norm \(\int dt\, dx\, \omega (t,x)=1\), and let

$$\begin{aligned} \langle \eta (t,x)\eta (t',x')\rangle&:= (\omega *\omega )(t-t',x-x')\nonumber \\&= \int dt''\, \int dx''\, \omega (t-t'',x-x'')\,\omega (t''-t',x''-x'). \end{aligned}$$
(1.5)

Our main result is the following. Gaussian expectation with respect to \(\eta \) is denoted either by \(\langle \ \cdot \ \rangle \), or \(\langle \ \cdot \rangle _{\lambda }\), or also \(\langle \ \cdot \rangle _{\lambda ;\nu ^{(0)},D^{(0)},v^{(0)}}\) if one wants to emphasize the dependence on the parameters \(\nu ^{(0)},D^{(0)},\lambda ,v^{(0)}\); the result obviously also depends on the initial condition \(h_0\). By convention, \(\langle \cdot \rangle _{0;\nu ,D}\) refers to the expectation with respect to the measure of the Edwards–Wilkinson equation \((\partial _t-\nu \Delta )\phi (t,x)=\sqrt{D}\, \eta (t,x)\) with zero initial condition, where \(\eta \) is a standard (unregularized) space-time white noise; for this equation we implicitly set \(v=0\). By definition, \(\phi (t,x)=\sqrt{D} \int _0^t ds\, \left( e^{(t-s)\nu \Delta } \eta _s\right) (x)\) is a centered Gaussian process.

Theorem 1.1

(Main Theorem) Let \(d\ge 3\). Fix \(D^{(0)},\nu ^{(0)}>0\) and a smooth, bounded, integrable initial condition \(h_0\). Let \(\lambda \ge 0\) be small enough, \(\lambda \le \lambda _{max}=\lambda _{max}(||h_0||_{L^1}, ||h_0||_{L^{\infty }})\). Then there exist three coefficients \(D_{eff}= D^{(0)}+O(\lambda ^2)\), \(\nu _{eff}=\nu ^{(0)}+O(\lambda ^2)\) and \(v^{(0)}=v^{(0)}(\lambda )=O(\lambda )\), all independent of the initial condition \(h_0\), such that the solution h of the KPZ equation (1.4) satisfies the following asymptotic properties:

  1.

    for all \((t,x)\) with \(t>0\),

    $$\begin{aligned} \left\langle h_{\varepsilon ^{-1}t}\big (\varepsilon ^{-{1\over 2}}x\big )\right\rangle _{\lambda ;\nu ^{(0)},D^{(0)},v^{(0)}}= O_{\varepsilon \rightarrow 0}\big (\varepsilon ^{d/2}\big ); \end{aligned}$$
    (1.6)
  2.

    for all \((t_1,x_1),\ldots ,(t_{2N},x_{2N})\), \(N\ge 1\) with \(t_i>0\), \(i=1,\ldots ,2N\) and \((t_i,x_i)\not =(t_j,x_j), i\not =j\), letting \(h_i:=\big \langle h_{\varepsilon ^{-1}t_i}\big (\varepsilon ^{-{1\over 2}}x_i\big )\big \rangle _{\lambda ;v^{(0)},\nu ^{(0)},D^{(0)}}\),

    $$\begin{aligned} \left\langle \prod _{i=1}^{2N} \left( h_{\varepsilon ^{-1} t_i}\big (\varepsilon ^{-{1\over 2}} x_i\big ) -h_i\right) \right\rangle _{\lambda ;v^{(0)},\nu ^{(0)},D^{(0)}} \sim _{\varepsilon \rightarrow 0} \varepsilon ^{N(\frac{d}{2}-1)} \left\langle \prod _{i=1}^{2N} h_{ t_i}(x_i)\right\rangle _{0;\nu _{eff},D_{eff}}. \nonumber \\ \end{aligned}$$
    (1.7)

Since \(\langle \ \cdot \ \rangle _{0;\nu _{eff},D_{eff}}\) is a Gaussian measure, part 2 may be rephrased as follows. Let

$$\begin{aligned} K_{eff}(t_1,x_1;t_2,x_2):=&\ \lim _{\varepsilon \rightarrow 0} \varepsilon ^{-\big (\frac{d}{2}-1\big )} \big \langle \left( h_{\varepsilon ^{-1}t_1}\big (\varepsilon ^{-1/2}x_1\big )-h_1\right) \nonumber \\&\quad \left( h_{\varepsilon ^{-1}t_2}\big (\varepsilon ^{-1/2}x_2\big )-h_2\right) \big \rangle _{\lambda ; v^{(0)},\nu ^{(0)},D^{(0)}} \end{aligned}$$
(1.8)

(\(t_1,t_2>0\), \((t_1,x_1)\not =(t_2,x_2)\)). Then

$$\begin{aligned} K_{eff}(t,x;t',x')= \langle h(t,x)h(t',x')\rangle _{0;\nu _{eff},D_{eff}} \end{aligned}$$
(1.9)

and

$$\begin{aligned}&\Big \langle \prod _{i=1}^{2N} \left( h_{\varepsilon ^{-1} t_i}(\varepsilon ^{-{1\over 2}} x_i) -h_i\right) \Big \rangle _{\lambda ;v^{(0)},\nu ^{(0)},D^{(0)}} \sim _{\varepsilon \rightarrow 0} \varepsilon ^{N(\frac{d}{2}-1)} \nonumber \\&\quad \sum _{{\mathrm {pairings}}} \prod _{j=1}^N K_{eff}(t_{i_{2j-1}},x_{i_{2j-1}};t_{i_{2j}}, x_{i_{2j}}) \end{aligned}$$
(1.10)

where the sum ranges over all pairings \((i_1,i_2),\ldots ,(i_{2N-1},i_{2N})\) of the 2N indices \(1,2,\ldots ,2N\).

In other words, up to a Galilei transformation \(h_t(x)\mapsto h_t(x)- \sqrt{D^{(0)}}\, v^{(0)}t\), the N-point functions of the KPZ equation \((\partial _t-\nu ^{(0)}\Delta )h=\lambda |\nabla h|^2+\sqrt{D^{(0)}} \, \eta \) behave asymptotically in the large-scale limit as the N-point functions of the solution of the Edwards–Wilkinson equation with renormalized coefficients \(D_{eff},\nu _{eff}\),

$$\begin{aligned} (\partial _t -\nu _{eff} \Delta ) h_t(x)=\sqrt{D_{eff}}\, \eta (t,x) \qquad (t\ge 0), \qquad h_0\equiv 0 \end{aligned}$$
(1.11)

where \(\eta \) requires no regularization. Generally speaking, the main corrections to the above asymptotic behaviour (1.6), (1.10) are smaller by \(O(\varepsilon ^{(1/2)^-})\), as proved in §5.3 D. The effective coefficients \(D_{eff},\nu _{eff}\) have a (divergent) asymptotic expansion in terms of \(\lambda \); lowest-order corrections in \(O(\lambda ^2)\) are computed in (5.28) and (6.37). The \(O(\varepsilon ^{d/2})\)-term in (1.6) is a contribution due to the initial condition; further contributions of the initial condition to N-point functions come with an extra multiplicative factor in \(O(\lambda \varepsilon ^{\frac{d}{2}-1})\), which is the scaling of the vertex. Corrections to Gaussianity of N-point functions, of order \(O(\lambda ^2 \varepsilon ^{\frac{d}{2}-1})\), are examined in (2) a few pages below. Furthermore, our multi-scale scheme actually involves an effective propagator differing slightly from the effective Edwards–Wilkinson propagator \(e^{(t-s)\nu _{eff}\Delta }\), see Appendix 2; this implies a correction with respect to the r.h.s. of (1.10) with a small extra prefactor, which is proved to be \(O(\varepsilon )\) but could easily be improved to \(O(\varepsilon ^n)\) with n arbitrarily large.

Remark

A more common choice of regularization for \(\eta \) is to take a discretized “kick force”: namely, we pave \(\mathbb {R}_+\) by unit size intervals \([n,n+1)\), \(n\ge 0\), and let \(\xi _{n+{1\over 2}}:=\eta \big |_{[n,n+1)}\), \(n=0,1,\ldots \) be independent, centered Gaussian fields on \(\mathbb {R}^d\) which are constant in time and have, for instance, a smooth, space-translation invariant covariance kernel with finite range. This does not change the conclusion of Theorem 1.1, except that, the law of \(\eta \) being now only \(\mathbb {Z}\)-periodic in time, \(h_{\infty }(t):=\lim _{n\rightarrow +\infty } \langle h_{n+t}(0)\rangle \) is now a 1-periodic function instead of the constant 0. This regularization has several advantages (see Sect. 2); it allows in particular an explicit representation of \(v^{(0)}\) in probabilistic terms. The scheme of proof extends without any significant modification if the covariance kernel decreases heat-kernel-like in space, e.g. if \(\xi _{n+{1\over 2}}\overset{(d)}{=}e^{c\Delta } \xi \) where \(\xi \) is a standard space white noise and \(c>0\) is some constant.

Furthermore, it follows from the proof (see Sect. 6) that the value of \(v^{(0)}\) may be obtained by equating it to the constant \(\tilde{v}^{(0)}\) such that \(\langle w(t,0)\rangle _{\tilde{v}^{(0)}}=O(1)\) independently of t, where w is the Cole-Hopf transform of h (see below); this is consistent with the value obtained by Carmona and Hu [15] in a discrete setting for a random directed polymer measure (see Sect. 3.1). Let us note that the equality between \(v^{(0)}\) and \(\tilde{v}^{(0)}\) points to the fact that we are in a weak disorder regime, in which the annealed and quenched free energies coincide. However, our proof is independent of that of Carmona and Hu (see [15], Theorem 1.5), which is based on Gaussian concentration inequalities.

The proof follows closely the article by Iagolnitzer and Magnen [33] on weakly self-avoiding polymers in four dimensions, which is the main reference for the present work. Namely, up to the change of function \(h\mapsto w:=e^{\frac{\lambda }{\nu }h}\) (called Cole-Hopf transform) and of coupling constant, \(g:=\frac{\lambda }{\nu }\sqrt{D}\), the KPZ equation is equivalent to the linear equation \((\partial _t-\nu \Delta )w=g\eta w\), solved as \(w(t,x):=\int dy\, G_{\eta }((t,x),(0,y))w_0(y)\), where \(G_{\eta }\equiv \left( \partial _t-\nu \Delta -g\eta \right) ^{-1}\) is a random resolvent. Formally then, our problem is a parabolic counterpart to the large-scale analysis of polymers in a weak random potential solved in [33] by studying the equilibrium resolvent \(\left( \Delta +\mathrm{i}g\eta \right) ^{-1}\), where the “\(\mathrm{i}\)”-coefficient is the Edwards model representation of the self-avoiding condition (the model is solved for \(g\ll 1\) but the self-avoiding condition is recovered for \(g=1\)). Though the two models are physically unrelated, one must analyze similar mathematical objects. As is often the case, the model with a time evolution (i.e. the parabolic one) turns out to be easier than the equilibrium model (i.e. the elliptic one), because of the causality constraint.

The general scheme of proof, following, as mentioned above, the philosophy of constructive field theory, is to introduce a multi-scale expansion and define a renormalization mapping, \(\nu =\nu ^{(0)}\longrightarrow \nu ^{(1)}\longrightarrow \cdots \longrightarrow \nu ^{(\infty )}:=\nu _{eff}\), \(D=D^{(0)}\longrightarrow D^{(1)}\longrightarrow \cdots \longrightarrow D^{(\infty )}:=D_{eff}\), or equivalently \(g^{(0)}:=\frac{\lambda }{\nu ^{(0)}} \sqrt{D^{(0)}}\rightarrow g^{(1)}\longrightarrow \cdots \longrightarrow g^{(\infty )}\equiv g_{eff}=\frac{\lambda }{\nu _{eff}} \sqrt{D_{eff}}\) (later on interpreted as the flow of the coupling constant through the Cole-Hopf transform), \(v=v^{(0)}\longrightarrow v^{(1)}\longrightarrow \cdots \longrightarrow v^{(\infty )}\equiv v_{eff}:=0\), ensuring the convergence of the expansion at each scale and allowing one to control error terms. The average interface velocity \(v^{(0)}\) is fixed by requiring that the asymptotic velocity \(v_{eff}\) vanishes. The original parameters \(\nu ^{(0)},D^{(0)},v^{(0)}\), called bare parameters, describe the theory at scale O(1), while the Edwards–Wilkinson model with scale j parameters \(\nu ^{(j)},D^{(j)}\) and drift velocity \(v^{(j)}\) gives a good approximation of the theory at time distances of order \(\varepsilon ^{-1}=2^{j}\), which becomes asymptotically exact in the infra-red limit, when \(j\rightarrow \infty \). This goal is achieved in general by using a phase-space expansion, i.e. a horizontal cluster expansion casting into the form of a series the interactions at a given energy-momentum level between the degrees of freedom, and a vertical cluster, or momentum-decoupling, expansion separating the different energy-momentum levels. Energy, resp. momentum, are the Fourier conjugate variables of time and space; here a given energy-momentum level j is adequately defined by considering heat-kernel propagators

$$\begin{aligned} G_{\nu }((t,x),(t',x'))=e^{\nu (t-t')\Delta }(x-x')=p_{\nu (t-t')}(x-x') \end{aligned}$$

with \(t-t'\approx 2^j\). Then the above series (roughly speaking, a truncated power series in the coupling constants with a bounded integral, Taylor-like remainder) converges if the bare coupling constant \(g^{(0)}\) is small enough.

With our choice of covariance function for \(\eta \), however, the flow of the parameters \(\nu ,v\) is actually trivial starting from \(j=1\), i.e. \(\nu ^{(j)}=\nu _{eff},\, v^{(j)}=v_{eff}=0\) for \(j\ge 1\); and the noise strength D, defined by resumming connected diagrams with four external legs, though scale-dependent, requires no renormalization at all, because the equation is infra-red super-renormalizable and the total correction (obtained by summing over scales) is finite. This, together with the causality condition preventing the so-called low-momentum field accumulation problem [22, 26, 58], leads to a much simplified framework, from which the phase-space analysis has almost disappeared. Only scale-0 two-point diagrams need to be renormalized, with a contribution at near-zero momentum \(\varvec{k}\)

$$\begin{aligned} v^{(0)}+(\nu _{eff}-\nu ^{(0)})|\varvec{k}|^2\equiv v^{(0)}-(\nu _{eff}-\nu ^{(0)})\Delta , \end{aligned}$$

leaving a remainder of parabolic order three in the momenta, i.e. \(O(\nabla ^3)\) or \(O(\nabla \partial _t)\). Scale 0 diagrams are connected by “low-momentum” heat-kernel propagators \(G((t,\cdot ),(t',\cdot ))\) with \(t-t'\approx 2^j\), \(j\ge 1\). A crucial point in the proof is that, thanks to the \(\nabla ^3\), remainders integrated over space-time cost a factor O(1), namely [see (3.19) and (6.21)]

$$\begin{aligned}&\int _{t''}^t dt'\, \int dx'\, G((t,x),(t',x')) \, |\nabla ^3 G((t',x'),(t'',x''))| \nonumber \\&\quad \lesssim \left( \int dt'\, (1+|t'-t''|)^{-3/2}\right) \, p_{\nu (t-t'')}(c|x-x''|) = O(1)\ p_{\nu (t-t'')}(c|x-x''|) \nonumber \\ \end{aligned}$$
(1.12)

or, simply said, \(G\nabla ^3 G\lesssim G\). What is left of the cluster expansions is adequately resummed as in [33] into the random resolvent in the form of localized “vertex insertions” (see Sect. 6), thereby suppressing combinatorial factors which make the series divergent. Then the contribution of all vertex insertions is bounded by some contour integral of a modified resolvent through the use of Cauchy’s formula.

An extra complication comes however from the inverse Cole-Hopf transform. Applying cluster expansions—which is done in practice by differentiation with respect to some additional parameters—to \(\log (w)\) leads to rational expressions of the form \(\frac{``D_1 w'' \cdots ``D_n w''}{w^n}\), where the \(D_i\)’s are differential operators, acting on “replicas” of w. Then the scale 0 diagrams requiring renormalization can be factorized, hence averaged with respect to the measure \(\langle \, \cdot \, \rangle \). Remaining terms are shown to yield a convergent series in the form of a sum over “polymers” for \(\lambda \) small enough.

The \(\lambda \)- and \(\varepsilon \)-prefactors contained in Theorem 1.1 may be guessed from the following guiding principles, brought to light by the cluster expansion.

(1) First, the two-point function of the renormalized Edwards–Wilkinson equation,

$$\begin{aligned}&\langle h(\varepsilon ^{-1} t,\varepsilon ^{-1/2} x)h(\varepsilon ^{-1} t',\varepsilon ^{-{1\over 2}} x' )\rangle _{0;\nu _{eff},D_{eff}} \nonumber \\&\qquad = D_{eff}\int _{0}^{\varepsilon ^{-1} t'} ds\, \int dy\, p_{\nu _{eff}(\varepsilon ^{-1}t-s)}(\varepsilon ^{-{1\over 2}}x-y) p_{\nu _{eff}(\varepsilon ^{-1}t'-s)}(y-\varepsilon ^{-{1\over 2}}x') \nonumber \\&\qquad = D_{eff} \int _{0}^{\varepsilon ^{-1} t'} ds\, p_{\nu _{eff}(\varepsilon ^{-1}(t+t')-2s)}(\varepsilon ^{-{1\over 2}}(x-x')) \end{aligned}$$
(1.13)

(\(t\ge t'>0\)), scales like \(\varepsilon ^{\frac{d}{2}-1}\), as can be seen by simply rescaling variables \(t'\rightarrow \varepsilon t', (x,x')\rightarrow (\varepsilon ^{1/2}x,\varepsilon ^{1/2}x')\) in the integral. There are two regimes: the equilibrium regime (\(t-t'\lesssim |x-x'|^2\)), in which \(\langle h(t,x)h(t',x')\rangle _{0;\nu _{eff},D_{eff}}\approx \int _{0}^t ds\, p_{2(t-s)}(x-x')\approx |x-x'|^{-(d-2)}\) is essentially the equilibrium Green function of the Laplacian; the dynamical regime (\(t-t'\gtrsim |x-x'|^2\)), in which \(\langle h(t,x)h(t',x')\rangle _{0;\nu _{eff},D_{eff}}\approx \int _{0}^{t'} ds\, p_{t+t'-2s}(0)\approx |t-t'|^{-(\frac{d}{2}-1)}\).
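
The \(\varepsilon ^{\frac{d}{2}-1}\) prefactor may be checked explicitly on (1.13); spelling out the change of variables:

```latex
% Substituting s = \varepsilon^{-1}\sigma, so that ds = \varepsilon^{-1}d\sigma,
% and using the heat-kernel scaling
p_{\varepsilon^{-1}u}\big(\varepsilon^{-\frac12}z\big)
  = \varepsilon^{\frac{d}{2}}\, p_u(z),
% the two-point function (1.13) becomes
D_{eff}\int_0^{\varepsilon^{-1}t'} ds\,
  p_{\nu_{eff}(\varepsilon^{-1}(t+t')-2s)}\big(\varepsilon^{-\frac12}(x-x')\big)
= \varepsilon^{\frac{d}{2}-1}\, D_{eff}\int_0^{t'} d\sigma\,
  p_{\nu_{eff}(t+t'-2\sigma)}(x-x').
```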

(2) The connected quantities \(\Big \langle \prod _{i=1}^{2N} h_{\varepsilon ^{-1} t_i}(\varepsilon ^{-{1\over 2}} x_i) \Big \rangle ^{{\mathrm {connected}}}_{\lambda ;v^{(0)},\nu ^{(0)},D^{(0)}}\) (also called truncated 2N-point functions) are \(O\left( \left[ \lambda ^{2} \varepsilon ^{\frac{d}{2}-1}\right] ^{2N-1}\right) \). Namely, Gaussian pairwise contractions yield the expected scaling in \(O(\varepsilon ^{N(\frac{d}{2}-1)})\), i.e. \(O(\varepsilon ^{\frac{d}{2}-1})\) per link, as expected from (1); whereas the connected expectation requires \(N-1\) supplementary links and twice as many vertices (since these are not present in the linear theory) in the expansion, contributing an extra small \(O\left( (\lambda ^2 \varepsilon ^{\frac{d}{2}-1})^{N-1}\right) \) prefactor. The cluster expansion makes it possible to develop those links explicitly.

Recent related results

Gu et al. [28] consider the same scaling for the Cole-Hopf transform \(e^{(\lambda /\nu ) h}\) of the KPZ field, though starting from an initial condition varying at a macroscopic scale, while ours varies at a microscopic scale. To lowest order in \(\varepsilon \), they prove convergence to an Ornstein-Uhlenbeck process with effective, renormalized parameters independent of the initial conditions; see their Theorem 1.1 and §1.2 for a comparison with our results. However, entirely different methods are used, based on Itô's formula, homogenization-type results and a martingale central limit theorem for the fluctuations of the Cole-Hopf transform.

The plan of the article is as follows. We start by recalling the Cole-Hopf transform in Sect. 2, and make the bridge to previous results on the subject stated in terms of the associated directed polymer measure. We then introduce in Sect. 3 a multi-scale expansion for the propagators, together with multi-scale estimates (also called “power-counting”), which are the building blocks of our approach. Sections 4, 5, and 6 are the heart of the article. The dressed equation, and the cluster expansion thereof, are presented in Sect. 4. Section 5 is dedicated to renormalization; the scale-0 counterterms obtained by factorizing two-point functions through a supplementary Mayer expansion are bounded there. Then we show in Sect. 6 how to bound the sum of all terms produced by the expansion, and obtain final bounds for N-point functions, thus proving our main result, Theorem 1.1. Finally, there are two appendices. In the first one, we provide detailed combinatorial formulas for the horizontal and Mayer cluster expansions. The second one is dedicated to a technical result. Pictures are provided to help the reader visualize the outcome of the various expansions.

Notations

  1.

    (parabolic distance) Let \(d((t,x),(t',x')):=\sqrt{|t-t'|+|x-x'|^2}\) \((t,t'\in \mathbb {R}_+, \, x,x'\in \mathbb {R}^d)\). Similarly, for \(U,U'\subset \mathbb {R}_+\times \mathbb {R}^d\), \(d((t,x),U):=\inf _{(t',x')\in U} d((t,x),(t',x'))\), \(d(U,U'):=\max \left( \sup _{(t,x)\in U} d((t,x),U'), \sup _{(t',x')\in U'} d(U,(t',x')) \right) \) (Hausdorff distance). Then \({\bar{d}}\) is the space projection of the distance d, i.e. \({\bar{d}}(x,x'):=d((0,x),(0,x'))=|x-x'|\), etc.

  2.

    Let \(f,g:E\rightarrow \mathbb {R}\) be two functions on some set E. We write \(|f(z)|\lesssim |g(z)|\) if there exists some inessential constant C (possibly depending on the parameters \(D,\nu \) and on the space dimension d), uniform in \(\lambda \) for \(\lambda \) small enough, such that \(|f(z)|\le C|g(z)|\). Then, by definition, \(|g(z)|\gtrsim |f(z)|\). If \(|f(z)|\lesssim |g(z)|\) and \(|g(z)|\lesssim |f(z)|\), we write \(|f(z)|\approx |g(z)|\).

  3.

    In many situations, one obtains \((t,x)\)-dependent functions \(f(t,x)\) such that f decays Gaussian-like, \(f(t,x)\le e^{-c|x|^2/t}\), for some positive constant c bounded away from 0. We then write \(f(t,x)\le e^{-c|x|^2/t}\) without further specifying the value of c, which may change from line to line. For instance, if \(p_{\nu t}(x)=e^{\nu t\Delta }(x)\) is the heat kernel, then we may write \(p_{\nu t}(x)\lesssim t^{-d/2} e^{-c|x|^2/\nu t}\lesssim t^{-d/2} e^{-c'|x|^2/t}\), leaving out the dependence on the parameter \(\nu \) as explained in 2. Note however that, if \(\nu '\le \nu \), \(p_{\nu ' t}(x)\lesssim p_{\nu t}(x)\), whereas the inequality \(p_{\nu t}(x)\lesssim p_{\nu ' t}(x)\) does not hold uniformly in x because the space decay of \(p_{\nu t}(\cdot )\) is slower than that of \(p_{\nu ' t}(\cdot )\).

2 Cole-Hopf Transform

It is well-known that \(w:=e^{\frac{\lambda }{\nu ^{(0)}} h}\) is a solution of the linear equation with multiplicative noise,

$$\begin{aligned} (\partial _t-\nu ^{(0)}\Delta )w(t,a)=g^{(0)} \left( \eta (t,a) - v^{(0)} \right) w(t,a) \end{aligned}$$
(2.1)

where

$$\begin{aligned} g^{(0)}:= \frac{\lambda }{\nu ^{(0)}}\sqrt{D^{(0)}}=O(\lambda ) \end{aligned}$$
(2.2)

plays the rôle of a bare coupling constant, from which (representing the solution as a Wiener integral by Feynman–Kac’s formula)

$$\begin{aligned} h(T,a)=\frac{\nu ^{(0)}}{\lambda } \log w(T,a), \qquad w(T,a)= {\mathbb {E}}^a\left[ e^{g^{(0)}\int _0^T dt\, \left( \eta (T-t,B_{t}) - v^{(0)} \right) } e^{\frac{\lambda }{\nu ^{(0)}} h_0(B_T)} \right] , \nonumber \\ \end{aligned}$$
(2.3)

where the expectation \({\mathbb {E}}^a\) is relative to the Wiener measure on d-dimensional Brownian paths \((B_t)_{0\le t\le T}\) issued from \(a\in \mathbb {R}^d\) with \(\nu ^{(0)}\)-normalization, i.e. \({\mathbb {E}}^a[ (B_t^i-a^i)^2]=2\nu ^{(0)}t\), \(i=1,\ldots ,d\). Thus \(w(T,a)\) may be interpreted as the partition function of a directed polymer, see e.g. [15] and references therein, but we shall not need this interpretation in the article. Note that \((B_t)_{t\ge 0} \overset{(d)}{=} (W_{2\nu ^{(0)} t})_{t\ge 0}\), where W is now a standard Brownian motion, from which – forgetting about the regularization and using the variable \(2\nu ^{(0)} t\) instead of t –

$$\begin{aligned} \int _0^T dt\, \eta (T-t,B_t)\sim \frac{1}{2\nu ^{(0)}} \int _0^{2\nu ^{(0)} T} du\, \eta \left( \frac{u}{2\nu ^{(0)}},W_u\right) \overset{(d)}{=} \frac{1}{\sqrt{2\nu ^{(0)}}} \int _0^{2\nu ^{(0)} T} du\, \eta (u,W_u). \end{aligned}$$

Thus \(w(T,a)\) may be expanded in a series in the parameter \(g:=\frac{g^{(0)}}{\sqrt{2\nu ^{(0)}}}= \frac{1}{\sqrt{2}}\frac{\lambda }{(\nu ^{(0)})^{3/2}} \sqrt{D^{(0)}}\). Similarly, \(\nabla w=\frac{\lambda }{\nu ^{(0)}} e^{\frac{\lambda }{\nu ^{(0)}} h}\nabla h\), or conversely \(\nabla h=\frac{\nu ^{(0)}}{\lambda } \frac{\nabla w}{w}\), from which

$$\begin{aligned} \nabla h(T,a)=&\; e^{-\frac{\lambda }{\nu ^{(0)}} h(T,a)} \left( {\mathbb {E}}^a\left[ e^{g^{(0)}\int _0^T dt\, \left( \eta (T-t,B_{t}) - v^{(0)} \right) }\ e^{\frac{\lambda }{\nu ^{(0)}} h_0(B_T)} \nabla h_0(B_T) \right] \right. \nonumber \\&\left. +\, \sqrt{D^{(0)}}\ {\mathbb {E}}^a \left[ \int _0^T dt\, e^{g^{(0)}\int _0^t ds\, \left( \eta (T-s,B_{s}) - v^{(0)} \right) } \, \nabla \eta (T-t,B_{t}) \, e^{\frac{\lambda }{\nu ^{(0)}} h_{T-t}(B_{t})} \right] \right) \nonumber \\ \end{aligned}$$
(2.4)

Without using the general theory developed in [57, 59], Eqs. (2.3) and (2.4) show that, almost surely, h and \(\nabla h\) exist and are \(C^1\), provided \(h_0\) is, say, \(C^1\) and compactly supported. The Cole-Hopf solution coincides with the solution defined for more general Hamilton-Jacobi equations in [57, 59].

For the rest of the subsection only, we assume that \(\eta \) is a discretized “kick force”, i.e. the \(\eta \big |_{[n,n+1)}=:\xi _{n+{1\over 2}}\) are independent and constant in time, in order to compare with the existing literature. Since the \((\eta |_{[n,n+1)})_{n\ge 0}\) are independent fields, letting \(v^{(0)}:=\tilde{v}^{(0)}\), where

$$\begin{aligned} \tilde{v}^{(0)}:=\frac{1}{g^{(0)}} \log \, \left\langle {\mathbb {E}}^0\left[ e^{g^{(0)} \int _0^1 dt\, \eta (0,B_t)}\right] \right\rangle \end{aligned}$$
(2.5)

leads to \(\langle w(n,a)\rangle _{\tilde{v}^{(0)}}=1\) for any \(n\in \mathbb {N}\) and \(a\in \mathbb {R}^d\) if \(w_0=1\), whence more generally

$$\begin{aligned} \langle w(n,a)\rangle _{\tilde{v}^{(0)}}=O(1). \end{aligned}$$
(2.6)

Expanding the exponential in (2.5) and using

$$\begin{aligned} \Big \langle \Big |\int _0^1 dt\, \eta (0,B_t)\Big |^p \Big \rangle \le \int _0^1 dt \, \langle |\eta (0,B_t)|^p \rangle \lesssim C^p \Gamma (p/2) \langle \eta ^2(0,B_t) \rangle ^{p/2}=O( (C')^p \Gamma (p/2)),\nonumber \\ \end{aligned}$$
(2.7)

one gets: \(\langle {\mathbb {E}}^0\left[ e^{g^{(0)} \int _0^1 dt\, \eta (0,B_t)} \right] \rangle =e^{O(\lambda ^2)}\), whence \(\tilde{v}^{(0)}=O(\lambda )\).
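The mechanism behind the \(e^{O(\lambda ^2)}\) bound is the quadratic small-coupling behavior of the log-moment generating function. A toy numerical check, with a centered Gaussian random variable X standing in for \(\int _0^1 dt\, \eta (0,B_t)\) (the variance below is an arbitrary illustration):

```python
import random
import math

random.seed(1)
sigma2 = 0.5        # stand-in variance for the noise functional, arbitrary
n = 400_000
samples = [random.gauss(0.0, math.sqrt(sigma2)) for _ in range(n)]

def log_mgf(g):
    # Monte Carlo estimate of log < e^{g X} > for X ~ N(0, sigma2)
    return math.log(sum(math.exp(g * x) for x in samples) / n)

for g in (0.1, 0.2, 0.4):
    # exact Gaussian log-MGF is g^2 * sigma2 / 2, i.e. O(g^2)
    print(g, log_mgf(g), g * g * sigma2 / 2)
```

With \(g\propto \lambda \), the log of the averaged exponential is \(O(\lambda ^2)\), matching the bound in the text.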

Let us state an easy preliminary result, adapted from Carmona and Hu [15].

Lemma 2.1

There exists some positive constant \(v^{(0)}\) such that the solution of the KPZ equation with zero bare velocity,

$$\begin{aligned} (\partial _t-\nu ^{(0)}\Delta )h(t,x)=\sqrt{D^{(0)}}\eta (t,x)+ \lambda |\nabla h(t,x)|^2 \end{aligned}$$
(2.8)

verifies

$$\begin{aligned} \frac{1}{T}\langle h(T,x)\rangle \rightarrow _{T\rightarrow \infty } \sup _{T>0} \frac{1}{T}\langle h(T,x)\rangle =:v^{(0)}. \end{aligned}$$
(2.9)

Furthermore, \(0\le v^{(0)}\le \tilde{v}^{(0)}\).

Proof

(see [15], Lemma 3.1) Let, for a general forcing term f,

$$\begin{aligned} w_T(a|f):= {\mathbb {E}}^0 \left[ e^{g^{(0)}\int _0^T dt\, f(t,a+B_{T-t}) } \right] \end{aligned}$$
(2.10)

and

$$\begin{aligned} w_T(a,b|f):= {\mathbb {E}}^0 \left[ e^{g^{(0)}\int _0^T dt\, f(t,a+B_{T-t}) } \ \left| \right. a+B_T=b \right] \end{aligned}$$
(2.11)

Conditioning with respect to the terminal condition, \(a+B_T=b\), means that we average with respect to the law of the Brownian bridge from (0, a) to \((T,b)\) (see e.g. [39]). Then, for \(T,T'\in \mathbb {N}\),

$$\begin{aligned} w_{T+T'}(x|\eta )= & {} \int p_T(x,dy) w_T(x,y|\eta (\cdot +T')) w_{T'}(y|\eta ) \nonumber \\= & {} w_T(x| \eta (\cdot +T')) \int p_T(x,dy) \pi _{T,T'}(x,y|\eta (\cdot +T')) w_{T'}(y|\eta ) \end{aligned}$$
(2.12)

where \(\pi _{T,T'}(x,y|\eta (\cdot +T')):=\frac{w_T(x,y|\eta (\cdot +T'))}{w_T(x|\eta (\cdot +T'))}\). By construction, \(\int p_T(x,dy) \pi _{T,T'}(x,y|\eta (\cdot +T'))=1\). Hence (by concavity of the log)

$$\begin{aligned} h_{T+T'}(x)\ge h_T(x) + \int p_T(x,dy)\pi _{T,T'}(x,y|\eta (\cdot +T')) h_{T'}(y). \end{aligned}$$
(2.13)

Taking the expectation with respect to the noise and using independence of \(\eta (\cdot +T')\) from \(\eta \big |_{[0,T']}\), together with space translation invariance, one gets the superadditive inequality,

$$\begin{aligned} \langle h_{T+T'}(x)\rangle =\langle h_{T+T'}(0)\rangle \ge \langle h_T(0)\rangle + \langle h_{T'}(0)\rangle . \end{aligned}$$
(2.14)

On the other hand, by convexity of exp, \(\langle h_T(0)\rangle \ge 0\). Fekete’s superadditive lemma allows us to conclude the existence of some constant \(v^{(0)}\) verifying (2.9). This is the constant whose existence is asserted in the Main Theorem [see (1.6)]. Furthermore, by Jensen’s inequality, \(v^{(0)}\le \tilde{v}^{(0)}\), as observed already in [15], Prop. 1.4. \(\square \)
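The superadditivity argument can be illustrated on a toy sequence (not the one from the lemma): for any superadditive sequence, \(a_{m+n}\ge a_m+a_n\), Fekete's lemma gives \(a_n/n\rightarrow \sup _n a_n/n\). A minimal sketch:

```python
import math

def a(n):
    # a toy superadditive sequence, with a(n)/n -> 2
    return 2.0 * n - math.log(1.0 + n)

# verify superadditivity a(m+n) >= a(m) + a(n) on a range of pairs
for m in range(1, 60):
    for n in range(1, 60):
        assert a(m + n) >= a(m) + a(n)

# Fekete's lemma: a(n)/n converges to sup_n a(n)/n (= 2 here), from below
ratios = [a(n) / n for n in (10, 100, 1000, 10_000)]
print(ratios)
```

Superadditivity holds here because \(\log (1+m+n)\le \log ((1+m)(1+n))\); the ratios increase monotonically towards the limit 2.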

As mentioned in the Introduction, Carmona and Hu [15] actually prove the existence of a limit random variable \(h_{\infty }(0):=\) a.s.-\(\lim _{t\rightarrow \infty } h(t,0)\) for the solution of the KPZ equation with velocity \(\tilde{v}^{(0)}\), and a Gaussian lower large-deviation theorem (Theorem 1.5 in [15]) for \(h_{\infty }(0)\) of the form

$$\begin{aligned} {\mathbb {P}}[h_{\infty }(0)\le -A]\lesssim e^{-cA^2}, \qquad A>0 \end{aligned}$$
(2.15)

from which it is clear in particular that \(v^{(0)}=\tilde{v}^{(0)}\).

Because the equation for w is linear, there exists a random kernel \(G_{\eta }=G_{\eta }((t,x),(t',x'))\) (\(t>t'\)) such that

$$\begin{aligned} w_t(x|\eta )=\int dx'\, G_{\eta }((t,x),(t',x'))w_{t'}(x'|\eta ). \end{aligned}$$
(2.16)

From the above formulas one sees that

$$\begin{aligned} G_{\eta }((T,a),(0,b))\equiv w_T(a,b|\eta ). \end{aligned}$$
(2.17)

The kernel \(G_{\eta }\), called random propagator, is the subject of the next subsection.

3 Multi-scale Expansion and Vertex Representation

We discuss in this section two different points of view on the KPZ equation (1.1):

  1.

    First (see Sect. 2), due to our specific choice of quadratic nonlinearity \(V(\nabla h)=|\nabla h|^2\), the Cole-Hopf transform maps (1.1) into a linear equation for a Cole-Hopf field w with multiplicative noise, which is explicitly solved in terms of an average over Brownian paths, giving rise to Cole-Hopf solutions. Conjugating by the Cole-Hopf transform, these may be seen to coincide with the \(\mathcal{W}\)-solutions introduced elsewhere [59]. This point of view, in combination with martingale theorems and Gaussian concentration inequalities, is extensively used in the literature [8, 15, 19, 34], where interest has focused at least as much on the resulting weighted measure on paths, interpreted as a directed polymer measure. Many properties of this measure have been derived in all dimensions, in the small (\(\lambda \ll 1\)) or large (\(\lambda \gg 1\)) disorder regime, with attention focused on asymptotic theorems, large-deviation properties, scaling exponents, etc. However, not much can be derived therefrom concerning the asymptotic behavior of N-point functions of the original KPZ field h for \(N\ge 2\), because they are not directly accessible from the directed polymer measure, due to the necessity of taking the inverse Cole-Hopf transform.

  2.

    Second (see § 3.2)—and this is our approach here—starting either directly from the KPZ equation or from the Cole-Hopf transformed linear equation, one may try to expand the solution in powers of \(\lambda \) for \(\lambda \) small enough. In the first case, the idea is more or less to apply the Duhamel expansion iteratively. In the second case, one is led to a vertex representation based on an expansion of the random resolvent.

The second point of view may look very naive to mathematicians at first sight—though physicists have long known how to build predictions out of perturbative expansions. Such approaches in PDE theory lead in general only to existence “in the small”, i.e. for a small enough initial condition; because here we have an SPDE with a right-hand side, one may expect to get only short-time existence. However, it turns out that combining it with very basic finite-time bounds for the solution in a finite box, and with the apparatus of cluster expansions and renormalization, yields exact asymptotics for N-point functions in the large-scale limit! Thus this semi-perturbative approach for \(\lambda \ll 1\) is much more successful than the previous approaches, whose results are not required, and can actually be rederived directly up to some point. The key point is to assess the precise amount of expansion needed to get the leading large-scale behavior without producing at the same time diverging series.

3.1 Multi-scale Decompositions and Power-Counting

In the following somewhat technical section, we cut propagators into scales, and space-time into scaled boxes, paving the way for the cluster expansions of Sect. 4. The more PDE-minded reader may find it more reassuring to read Sect. 3 first, and then navigate between Sects. 2 and 4.

Definition 3.1

(Phase space)

  (i)

    (boxes) Let

    $$\begin{aligned} \mathbb {D}^j:= & {} \cup _{(k_0,\varvec{k})\in \mathbb {N}\times \mathbb {Z}^d} [k_0 2^j,(k_0+1) 2^j)\times [k_1 2^{j/2},(k_1+1)2^{j/2}]\\&\quad \times \cdots \times [k_d 2^{j/2},(k_d+1)2^{j/2}] \end{aligned}$$

    \((j\ge 0)\) and \(\mathbb {D}=\cup _{j=0}^{+\infty } \mathbb {D}^j\). If \((t,x)\in \Delta \) with \(\Delta \in \mathbb {D}^j\), we write \(\Delta ^j_{(t,x)}:=\Delta \).

  (ii)

    (momentum-decoupling \(\tau \)-parameters) If \(\tau ^0:\mathbb {D}^0\rightarrow [0,1]\), we write \(\tau ^0_{(t,x)}:=\tau ^0(\Delta ^0_{(t,x)})\).

  (iii)

    (space projection) If \(\Delta \in \mathbb {D}^j\), \(\Delta =[k_0 2^j,(k_0+1) 2^j)\times [k_1 2^{j/2},(k_1+1)2^{j/2}]\times \cdots \times [k_d 2^{j/2},(k_d+1)2^{j/2}]\), we let \(\bar{\Delta }:= [k_1 2^{j/2},(k_1+1)2^{j/2}]\times \cdots \times [k_d 2^{j/2},(k_d+1)2^{j/2}]\). Then \(\bar{\mathbb {D}}^j\) is the set of all such cubes in \(\mathbb {R}^d\).

Let \(\nu >0\). We let \(G_{\nu }:=(\partial _t-\nu \Delta )^{-1}\) be the heat kernel with diffusion coefficient \(\nu \),

$$\begin{aligned} G_{\nu }(t,x;t',x'):=p_{\nu (t-t')}(x-x') \ {\mathrm {if}} \ t,t'\ge 0 \ {\mathrm {and}}\ t-t'>0, \qquad 0 \ {\mathrm {else}} \end{aligned}$$
(3.1)

where \(p_{\nu (t-t')}(x-x')=\frac{e^{-|x-x'|^2/4\nu (t-t')}}{(4\pi \nu (t-t'))^{d/2}}\) is the kernel of the heat operator \(e^{\nu (t-t')\Delta }\). When \(\nu :=\nu ^{(0)}\) is the bare viscosity, we write simply \(G_{\nu ^{(0)}}=:G\).

In the following definition, if \(f:\mathbb {R}_+\rightarrow \mathbb {R}\), we let: \(f^j(t):=f(2^{-j}t)\) (\(j\ge 1\)).

Definition 3.2

(Multi-scale decompositions) Choose a smooth partition of unity \(1=\chi ^{0}*\chi ^{0} + \sum _{j=1}^{+\infty } (\chi *\chi )^j\) of \(\mathbb {R}_+\) for some smooth functions \(\chi :\mathbb {R}_+\rightarrow [0,1]\) with compact support \(\subset [{1\over 2},2]\), and \(\chi ^{0}:\mathbb {R}_+\rightarrow [0,1]\) with compact support \(\subset [0,1]\). Let \(A^j(t,t'),B^j(t,t')\) (\(j\ge 0\), \(t>t'>0\)) be the operator-valued, time-convolution kernels defined by

$$\begin{aligned} A_{\nu }^0(t,t')\equiv B_{\nu }^0(t,t'):= \chi ^{ 0}(t-t') e^{\nu (t-t')\Delta } \end{aligned}$$
(3.2)

and, for \(j\ge 1\),

$$\begin{aligned} A_{\nu }^j(t,t')\equiv B_{\nu }^j(t,t') := 2^{-j/2} \chi ^j(t-t') e^{\nu (t-t')\Delta }. \end{aligned}$$
(3.3)

They define operators \(A^j_{\nu },B^j_{\nu }:L^2(\mathbb {R}_+\times \mathbb {R}^d)\rightarrow L^2(\mathbb {R}_+\times \mathbb {R}^d)\) through \((A^j f)(t):=\int _0^t dt'\, A^j(t,t')f(t'),\ (B^j f)(t):=\int _0^t dt'\, B^j(t,t')f(t')\).

Remark

If \((t,x)\) is connected to \((t',x')\) by some \(A^j\) or \(B^j\) with \(j\ge 1\), then \(t-t'>1\), hence \(\langle \eta (t,x)\eta (t',x')\rangle =0\). This property (due to an adequate choice of cut-offs) is convenient since it implies that two-point functions require only a scale 0 renormalization (see §5.1).

Note that \((\chi *\chi )^j=(2^{-j/2}\chi ^j)*(2^{-j/2}\chi ^j)\) \((j\ge 1)\). Hence, by construction,

  • The \(A^j_{\nu }\)’s provide a decomposition of the kernel \(G_{\nu }\) into a sum of positive kernels: namely,

    $$\begin{aligned} \sum _{j\ge 0} A_{\nu }^j B_{\nu }^j(t,t')= & {} (\chi ^{0}*\chi ^{0})(t-t') e^{\nu (t-t')\Delta }\, \nonumber \\&+\, \sum _{j\ge 1} ( (2^{-j/2}\chi ^j)*(2^{-j/2}\chi ^j))(t-t') e^{\nu (t-t')\Delta }=G_{\nu }(t,t'). \nonumber \\ \end{aligned}$$
    (3.4)

    Furthermore, letting

    $$\begin{aligned} G^j_{\nu }:= A^j_{\nu }B^j_{\nu }, \qquad j\ge 0 \end{aligned}$$
    (3.5)

    we have \(\sum _{j\ge 0} G^j_{\nu }=G_{\nu }\), and \(G^j_{\nu }\) is “roughly” \(2^{j/2} A^j_{\nu }\) (we say “roughly”, because \((\chi *\chi )^j\) and \(\chi ^j\) do not have exactly the same time support—a more precise statement may be e.g. that \(2^{j/2} A^j_{c\nu }(\cdot ,\cdot )\lesssim G^j_{\nu }\lesssim 2^{j/2} A^j_{\nu /c}(\cdot ,\cdot )\) for some \(0<c<1\)).
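The telescoping mechanism behind such dyadic partitions of unity can be sketched numerically. The toy construction below uses plain scale pieces \(\psi (2^{-j}t)\) with \(\psi (t)=\chi ^0(t)-\chi ^0(2t)\), supported in \([{1\over 2},2]\), rather than the convolution squares \(\chi *\chi \) of Definition 3.2 (which serve to split \(G^j=A^jB^j\)); the sum over scales is exactly 1:

```python
import math

def chi0(t):
    # smooth cutoff: equals 1 on [0,1], 0 on [2,inf), smooth in between
    if t <= 1.0:
        return 1.0
    if t >= 2.0:
        return 0.0
    u = t - 1.0
    f = math.exp(-1.0 / u)            # vanishes to all orders at u = 0
    g = math.exp(-1.0 / (1.0 - u))    # vanishes to all orders at u = 1
    return g / (f + g)

def psi(t):
    # scale piece supported in [1/2, 2]: psi(t) = chi0(t) - chi0(2t) >= 0
    return chi0(t) - chi0(2.0 * t)

# telescoping partition of unity on R_+: chi0(t) + sum_{j>=1} psi(2^{-j} t) = 1
for t in (0.3, 1.7, 5.0, 123.0, 4096.5):
    total = chi0(t) + sum(psi(t / 2.0 ** j) for j in range(1, 40))
    print(t, total)  # = 1 up to rounding
```

The sum telescopes to \(\chi ^0(2^{-J}t)\rightarrow 1\) as the number of scales J grows, for any fixed \(t\ge 0\).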

Definition 3.3

  1.

    Let \(A_{\nu }(\cdot ;\cdot ,\cdot )\) be the following kernel on \((\mathbb {R}_+\times \mathbb {R}^d)\times (\mathbb {N}\times \mathbb {R}_+\times \mathbb {R}^d)\),

    $$\begin{aligned} A_{\nu }((t,x);j,(t',x')):= A_{\nu }^j((t,x),(t',x')). \end{aligned}$$
    (3.6)
  2.

    Let \(B_{\nu }(\cdot ,\cdot ;\cdot )\) be the following kernel in \((\mathbb {N}\times \mathbb {R}_+\times \mathbb {R}^d)\times (\mathbb {R}_+\times \mathbb {R}^d)\),

    $$\begin{aligned} B_{\nu }(j,(t,x);(t',x')):= B_{\nu }^j((t,x),(t',x')). \end{aligned}$$
    (3.7)

In other words, letting \(\mathcal{H}\) be an auxiliary separable Hilbert space with orthonormal basis denoted by \(\varvec{e}^j\), \(j\ge 0\), or equivalently, \(|j\rangle \) (in quantum mechanical notation), \(A_{\nu }(\cdot ,\cdot )\) is the kernel of the operator

$$\begin{aligned} A_{\nu }: \mathcal{H}\otimes L^2(\mathbb {R}_+\times \mathbb {R}^d) \rightarrow L^2(\mathbb {R}_+\times \mathbb {R}^d) \end{aligned}$$
(3.8)

defined by \(A_{\nu }(\varvec{e}^j \otimes f)=A^j_{\nu }(f)\); equivalently, \(A_{\nu }:=\sum _{j\ge 0} A^j_{\nu } \langle j|\) has a linear form-valued kernel on \((\mathbb {R}_+\times \mathbb {R}^d)\times (\mathbb {R}_+\times \mathbb {R}^d)\),

$$\begin{aligned} A_{\nu }(\cdot ,\cdot )\equiv \sum _{j\ge 0} A^j(\cdot ,\cdot ) \langle j|. \end{aligned}$$
(3.9)

Dualizing, \(B_{\nu }(\cdot ,\cdot )\) is the kernel of the operator

$$\begin{aligned} B_{\nu }: L^2(\mathbb {R}_+\times \mathbb {R}^d) \rightarrow \mathcal{H}\otimes L^2(\mathbb {R}_+\times \mathbb {R}^d) \end{aligned}$$
(3.10)

defined by \(B_{\nu }(f)=\sum _{j\ge 0} B^j_{\nu }(f) \varvec{e}^j\); in other words, \(B_{\nu }:=\sum _{j\ge 0} B^j_{\nu } |j\rangle \), with associated vector-valued kernel

$$\begin{aligned} B_{\nu }(\cdot ,\cdot )\equiv \sum _{j\ge 0} B^j(\cdot ,\cdot ) |j\rangle . \end{aligned}$$
(3.11)

Thus the decomposition of \(G_{\nu }\), see (3.4), is equivalent to the identity

$$\begin{aligned} A_{\nu }B_{\nu }=\sum _{j,j'\ge 0} A^j_{\nu } B^{j'}_{\nu } \langle j|j'\rangle = \sum _{j\ge 0} A^j_{\nu }B^j_{\nu }=G_{\nu } \end{aligned}$$
(3.12)

which lies at the core of the vertex representation in § 3.2.

As in the case of \(G_{\nu }\), we write simply \(A_{\nu ^{(0)}}=:A\), \(B_{\nu ^{(0)}}=:B\).

The following estimates for the kernel \(A^{j}(t,x;t',x')=B^{j}(t,x;t',x')\) of \(A^j=B^j\) are easily shown:

Lemma 3.4

(multi-scale estimates for A and B) Let \(j\ge 1\).

  (i)

    (single-scale estimates)

    $$\begin{aligned}&|\partial ^{\kappa '}_t \nabla ^{\varvec{\kappa }} A^{j}(t,x;t',x')|\lesssim (2^{-j/2})^{2\kappa '+|\varvec{\kappa }|} (2^{-j/2})^{d+1} e^{-c2^{-j}|x-x'|^2} \mathbf{1}_{t-t'\approx 2^j}; \qquad \end{aligned}$$
    (3.13)
    $$\begin{aligned}&\int dt' \, dx'\, A^{j}(t,x;t',x') \approx 2^{j/2}; \end{aligned}$$
    (3.14)
    $$\begin{aligned}&||A^j f||_{L^2} \lesssim (2^{-j/2})^{d/2} ||f||_{L^2}. \end{aligned}$$
    (3.15)
  (ii)

    (two-scale estimates) Let \(j\ge 1\) and \(\varvec{\kappa },\varvec{\kappa }'\ge 0\); then

    $$\begin{aligned} |(\nabla ^{\varvec{\kappa }}A^j \ \nabla ^{\varvec{\kappa }'}B^{j})((t,x),(t',x'))| \lesssim (2^{-j/2})^{d+|\varvec{\kappa }|+|\varvec{\kappa '}|} e^{-c2^{-j}|x-x'|^2} \mathbf{1}_{t-t'\approx 2^{j}}. \end{aligned}$$
    (3.16)

From (ii) it follows that \((\nabla ^{\varvec{\kappa }} A^j \ \nabla ^{\varvec{\kappa }'} B^{j})(\cdot ,\cdot )\) scales like \(\nabla ^{\varvec{\kappa }+\varvec{\kappa }'} G^{j}(\cdot ,\cdot )\)—or, more precisely, like \(\nabla ^{\varvec{\kappa }+\varvec{\kappa }'} G^{j}_{\nu }(\cdot ,\cdot )\), with \(\nu \approx \nu ^{(0)}\), or equivalently, like \(2^{j/2} \nabla ^{\varvec{\kappa }+\varvec{\kappa }'} B^{j}_{\nu }(\cdot ,\cdot )\). Also, it is clear that \(G(\cdot ,\cdot )\lesssim \sum _{k\ge 0} 2^{k/2} A^k_{\nu }(\cdot ,\cdot )\). As an immediate corollary, expanding G over scales, one obtains

$$\begin{aligned} (B^j G)(\cdot ,\cdot ) \lesssim 2^{j/2} G^{\rightarrow j}_{\nu }(\cdot ,\cdot ). \end{aligned}$$
(3.17)
$$\begin{aligned} |(\nabla ^3 G^j \, \cdot \, G)(\cdot ,\cdot )|,\qquad |(\partial _t\nabla G^j\, \cdot \, G)(\cdot ,\cdot )| \lesssim 2^{-j/2} G^{\rightarrow j}_{\nu }(\cdot ,\cdot ) \end{aligned}$$
(3.18)

and finally the first of our two key power-counting estimates,

$$\begin{aligned} |(\nabla ^3 G\, \cdot \, G)(\cdot ,\cdot )|,\qquad |(\partial _t\nabla G \, \cdot \, G)(\cdot ,\cdot )|\lesssim G_{\nu }(\cdot ,\cdot ), \end{aligned}$$
(3.19)

whereas \(\partial _t^{\kappa _0}\nabla ^{\varvec{\kappa }} G\, \cdot \, G\), \(|\kappa |:=2\kappa _0+|\varvec{\kappa }|\equiv 2\kappa _0+\kappa _1+\cdots +\kappa _d\) diverges in the stationary limit when \(|\kappa |\le 2\), i.e. \( (\partial _t^{\kappa _0}\nabla ^{\varvec{\kappa }} G\, \cdot \, G)((t,x),(0,x))\approx t^{1-|\kappa |/2}\) \((|\kappa |<2)\), \(\log (t)\) (\(|\kappa |=2\)), therefore \(\overset{t\rightarrow +\infty }{\longrightarrow } +\infty \). In all these estimates it is understood that \(\nu \approx \nu ^{(0)}\).

Proof

  (i)

    Immediate consequence of the elementary heat kernel estimates, \(|\partial _t^{\kappa '}\nabla ^{\varvec{\kappa }} p_{\nu (t-t')}(x)| \lesssim (t-t')^{-\kappa '-|\varvec{\kappa }|/2} p_{2\nu (t-t')}(x).\) Note that the time support and scaled exponential space decay leave an effective space-time integration volume \(O(2^{j(1+d/2)})\). The \(L^2\)-norm estimate is also a consequence of \(||A^j f||_{L^2}^2\lesssim \int _{t-t'\approx 2^j} \frac{dt}{t-t'} ||e^{(t-t')\nu \Delta }f_{t'}||_{L^2}^2\) and the easy inequality \(||e^{(t-t')\nu \Delta }f||_{L^2}^2\lesssim (t-t')^{-d/2} ||f||_{L^2}^2\) (standard parabolic estimate).

  (ii)

    Integrating \(\int dt''\, \int dx''\, \nabla ^{\varvec{\kappa }}A^j((t,x),(t'',x'')) \nabla ^{\varvec{\kappa }'}B^{j}((t'',x''),(t',x'))\) by parts with respect to \(t''\), and remarking that \(t''\) ranges in a time-interval of size \(O(2^{j})\), we obtain

    $$\begin{aligned}&\Big | (\nabla ^{\varvec{\kappa }} A^j \ \nabla ^{\varvec{\kappa }'}B^{j})((t,x),(t',x')) \Big | = \Big | (A^j\ \nabla ^{\varvec{\kappa }+\varvec{\kappa }'}B^{j})((t,x),(t',x')) \Big | \nonumber \\&\qquad \lesssim \left( 2^{j/2} e^{2^j \frac{\nu ^{(0)}}{c} \Delta }\, \cdot \, (2^{-j/2})^{1+|\varvec{\kappa }|+|\varvec{\kappa }'|} e^{2^{j} \frac{\nu ^{(0)}}{c} \Delta } \right) ((t,x),(t',x')) \nonumber \\&\qquad \lesssim (2^{-j/2})^{|\varvec{\kappa }|+|\varvec{\kappa }'|} e^{2^{j}\frac{\nu ^{(0)}}{c'}\Delta }((t,x),(t',x')). \end{aligned}$$
    (3.20)

\(\square \)

One gets similarly

$$\begin{aligned} G^j(t,x;t',x') \lesssim 2^{-jd/2} e^{-c2^{-j}|x-x'|^2} \ \mathbf{1}_{t-t'\approx 2^j} \end{aligned}$$
(3.21)
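The elementary heat-kernel estimate used in the proof of Lemma 3.4, \(|\nabla p_{\nu t}(x)|\lesssim t^{-1/2}p_{2\nu t}(x)\), can be checked numerically in dimension \(d=1\), where \(\partial _x p_{\nu t}(x)=-\frac{x}{2\nu t}p_{\nu t}(x)\). The constant below is the explicit one obtained by maximizing \(u\,e^{-u^2/8\nu }\) over \(u=|x|/\sqrt{t}\) (a sketch with an arbitrary value of \(\nu \)):

```python
import math

nu = 0.7  # arbitrary viscosity for the illustration

def p(tau, x):
    # kernel of e^{tau * Laplacian} in d = 1
    return math.exp(-x * x / (4.0 * tau)) / math.sqrt(4.0 * math.pi * tau)

# explicit constant: C = sqrt(2) * e^{-1/2} / sqrt(nu)
C = math.sqrt(2.0) * math.exp(-0.5) / math.sqrt(nu)

worst = 0.0
for t in (0.01, 0.1, 1.0, 10.0, 100.0):
    for k in range(-200, 201):
        x = 0.05 * k * math.sqrt(t)   # grid scaled to the diffusive length
        grad = abs(x) / (2.0 * nu * t) * p(nu * t, x)      # |d/dx p_{nu t}(x)|
        bound = C / math.sqrt(t) * p(2.0 * nu * t, x)
        worst = max(worst, grad / bound)

print(worst)  # <= 1: the pointwise bound holds with this constant
```

The ratio is maximized near \(|x|=2\sqrt{\nu t}\), where it equals 1 by the choice of C; the scan confirms the bound uniformly over the grid.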

At this point we introduce a very useful

Universal notation Let \(f=\sum _{j=0}^{+\infty } f^{(j)}\) be a function/random field/multi-scale diagram/... decomposed into its scale components, then

$$\begin{aligned} f^{\rightarrow j}:=\sum _{k\ge j} f^{(k)}=\cdots +f^{(j+2)}+f^{(j+1)}+f^{(j)} \end{aligned}$$
(3.22)

is the scale j low-momentum part of f, while

$$\begin{aligned} f^{j\rightarrow }:=\sum _{k\le j} f^{(k)}=f^{(j)}+f^{(j-1)}+\cdots +f^{(1)}+f^{(0)} \end{aligned}$$
(3.23)

is the scale j high-momentum part of f.

In the particular case of the kernels A and B, the following is understood,

$$\begin{aligned} A^{\rightarrow j}(\cdot ,\cdot ):= & {} \sum _{k\ge j} A^k(\cdot ,\cdot ) \langle k|, \qquad A^{j\rightarrow } (\cdot ,\cdot ):=\sum _{k\le j} A^k(\cdot ,\cdot ) \langle k| \end{aligned}$$
(3.24)
$$\begin{aligned} B^{\rightarrow j}(\cdot ,\cdot ):= & {} \sum _{k\ge j} B^k(\cdot ,\cdot ) |k\rangle , \qquad B^{j\rightarrow }(\cdot ,\cdot ):=\sum _{k\le j} B^k(\cdot ,\cdot ) |k\rangle . \end{aligned}$$
(3.25)

3.2 The Vertex Representation

Consider the KPZ equation (1.4). Recall that \((\nu ^{(0)},D^{(0)},v^{(0)})\) are the bare parameters. Blindly expanding the exponential in the Feynman–Kac formula (2.3) would yield a series in the bare coupling constant \(g^{(0)}=\frac{\lambda }{\nu ^{(0)}}\sqrt{D^{(0)}}\). This is the starting point for our expansion. In the end (see Sect. 6), we shall see that it is possible to make partial resummations, and thus obtain expressions bounded by products of short-time kernels \(G_{\eta }((t,x),(t',x'))\) with \(t-t'=O(1)\), which are in turn bounded using (2.3).

Let us start with some general considerations. Let \(f=f(t,x)\) be any right-hand side, and \(\nu >0\). The integral version of the equation

$$\begin{aligned} (\partial _t-\nu \Delta )w(t,x)=f(t,x)w(t,x), \end{aligned}$$
(3.26)

coinciding—up to the replacement of \(\nu ^{(0)}\) by \(\nu \)—with (2.1) when \(f(t,x):=g^{(0)} ( \eta (t,x) - v^{(0)}),\) is

$$\begin{aligned} w(t,x)=G_{\nu }((t,x),(0,\cdot ))w_0(\cdot )+G_{\nu }((t,x),\cdot )(f w)(\cdot ). \end{aligned}$$
(3.27)

Iterating yields

$$\begin{aligned} w(t,x)=\left( G_{\nu }+G_{\nu }f G_{\nu }+G_{\nu }fG_{\nu }f G_{\nu }+\cdots \right) ((t,x),(0,\cdot ))w_0(\cdot ). \end{aligned}$$
(3.28)

The series converges under suitable hypotheses on f, and the general term in the series has the form of a chronological sequence, or string of propagators \(G_{\nu }\) with f’s sandwiched in-between, namely,

$$\begin{aligned}&\left( G_{\nu }f\cdots f G_{\nu }\right) (t,x;0,y) =\int _0^t dt_1 \int dx_1\, G_{\nu }(t,x;t_1,x_1) f(t_1,x_1) \nonumber \\&\qquad \int _0^{t_1} dt_2\int dx_2\, G_{\nu }(t_1,x_1;t_2,x_2) f(t_2,x_2) \int _0^{t_2} dt_3\int dx_3\, \cdots \end{aligned}$$
(3.29)

We now turn to a representation in terms of the operators \(A_{\nu },B_{\nu }\) defined in Definition 3.3 by means of the auxiliary space \(\mathcal{H}\) indexing the scales.

To an arbitrary function f, we associate the following general vertex

$$\begin{aligned} V_{\nu }(f)(t,x):=B_{\nu }(\cdot ,(t,x))f(t,x) A_{\nu }((t,x),\cdot ). \end{aligned}$$
(3.30)

Since \(A_{\nu }B_{\nu }=G_{\nu }\), one sees immediately by expanding \((1-X)^{-1}=1+X+X^2+\cdots \) that

$$\begin{aligned} w=A_{\nu } \left( 1-\int dt\, dx\, V_{\nu }(f)(t,x)\right) ^{-1}B_{\nu }w_0. \end{aligned}$$
(3.31)

Here \((1-\int dt\, dx\, V_{\nu }(f)(t,x))^{-1}\) plays manifestly the rôle of a resolvent.
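The algebra behind (3.28)–(3.31) is conveniently visualized in finite dimensions, where the causal propagator becomes a strictly lower-triangular matrix, hence nilpotent, so the Neumann series is a finite sum. A minimal sketch (all matrices below are arbitrary illustrations, not objects from the text):

```python
import random

random.seed(2)
n = 6

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(X, v):
    return [sum(X[i][k] * v[k] for k in range(n)) for i in range(n)]

# discrete causal propagator G (strictly lower triangular) and diagonal "potential" F
G = [[random.uniform(0.5, 1.0) if i > j else 0.0 for j in range(n)] for i in range(n)]
F = [[random.uniform(-0.3, 0.3) if i == j else 0.0 for j in range(n)] for i in range(n)]
w0 = [1.0] * n

# Duhamel/Neumann series  w = (G + G F G + G F G F G + ...) w0, cf. (3.28);
# nilpotency of G makes the series terminate after n terms
S = [row[:] for row in G]
term = [row[:] for row in G]
for _ in range(n):
    term = matmul(matmul(term, F), G)
    S = [[S[i][j] + term[i][j] for j in range(n)] for i in range(n)]
w = matvec(S, w0)

# fixed-point check: w solves the discrete analogue of (3.27), w = G w0 + G F w
lhs = w
rhs = [a + b for a, b in zip(matvec(G, w0), matvec(matmul(G, F), w))]
print(max(abs(a - b) for a, b in zip(lhs, rhs)))  # ~ 0
```

Factorizing \(G=AB\) and setting \(V=BFA\) turns the same sum into \(A(1-V)^{-1}Bw_0\), which is the finite-dimensional analogue of the resolvent formula (3.31).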

Remark

Other choices of vertices and scale decompositions are possible; for instance, letting instead \(B_{\nu }\equiv A_{\nu }:=\sqrt{G_{\nu }}=\int _0^{+\infty } e^{\nu t\Delta } \frac{dt}{\sqrt{2t}}\), and decomposing \(B_{\nu },A_{\nu }\) into scales in a similar way as in Definition 3.2, Eq. (3.30) defines a scalar vertex. However, the orthogonal projection structure of (3.9), (3.11) yields significant simplifications, see (5.6) and Appendix 2.

Let \(\nu =\nu ^{(0)}\). Recall that we write for short in this case \(G\equiv G_{\nu ^{(0)}},A\equiv A_{\nu ^{(0)}},B\equiv B_{\nu ^{(0)}}\). Choosing \(f=g^{(0)}(\eta -v^{(0)})\), we obtain the

Definition 3.5

(Cole-Hopf vertex)

$$\begin{aligned} V_{\eta }(t,x):=B(\cdot ,(t,x)) \left( g^{(0)}(\eta (t,x)-v^{(0)}) \right) A((t,x),\cdot ) \end{aligned}$$
(3.32)

Then the solution of (2.1) is

$$\begin{aligned} w=A \left( 1-\int dt\, dx\, V_{\eta }(t,x)\right) ^{-1}Bw_0. \end{aligned}$$
(3.33)

In other words, letting

Definition 3.6

(Random resolvent/propagator)

$$\begin{aligned} R_{\eta }:=\left( 1-\int dt\, dx\, V_{\eta }(t,x)\right) ^{-1}, \qquad G_{\eta }:=AR_{\eta }B \end{aligned}$$
(3.34)

we have

$$\begin{aligned} w(t,x)=(A R_{\eta } B)((t,x),(0,\cdot ))w_0(\cdot )=G_{\eta }((t,x),(0,\cdot ))w_0(\cdot ). \end{aligned}$$
(3.35)

4 Cluster Expansions

The general principle of multi-scale expansions is that each field has one degree of freedom per box in \(\mathbb {D}=\cup _{j\ge 0}\mathbb {D}^j\) (an idea made precise by wavelet expansions). In order to understand the effect of the weak coupling between the degrees of freedom belonging to different boxes, one interpolates between the totally decoupled theory and the coupled theory by introducing parameters. These are of two kinds. Horizontal parameters (denoted by the letter s) test the coupling between two boxes of the same scale. Vertical parameters (denoted by the letter \(\tau \)) test the coupling between a given box \(\Delta \in \mathbb {D}^j\), \(j\ge 0\), and the boxes below it, i.e. the boxes \(\Delta ^k\in \mathbb {D}^k\), \(k\ge j\) (one per scale) such that \(\Delta ^k\supset \Delta ^j\). (In the case of the KPZ equation in its Cole-Hopf formulation, the only essential counterterms for renormalization are produced at scale 0, so we shall only test the coupling between a box in \(\mathbb {D}^0\) and the boxes below it.) For the coupled theory, these parameters are equal to 1; for the totally decoupled theory, on the other hand, they are equal to 0. Taylor expanding to some order around 0 with respect to the s- and \(\tau \)-parameters produces in general a combinatorial sum over products of so-called multi-scale polymers (unions of boxes). Any polymer is connected by links between boxes for which the relevant parameter, s or \(\tau \), is \(>0\); such terms are written in terms of Taylor integral remainders. In equilibrium statistical field theory, there appear pieces totally isolated from the remaining boxes; they correspond to vacuum diagrams, and—as is well known—disappear when one computes connected expectations. In our context, these do not appear (\(Z=1\) automatically for dynamical theories, because the noise measure is normalized from the beginning).
On the other hand, renormalization is in general a necessity in either setting, for the following reason. Differentiating with respect to a \(\tau \)-link originating from a box \(\Delta ^j\in \mathbb {D}^j\) produces low-momentum fields in some box \(\Delta ^k\supset \Delta ^j\), \(k> j\). Imagine one applies one or more differentiations with respect to some of the vertical parameters located in boxes at the bottom of the polymer, in total \(N_{ext}\) of them, and then sets all of these vertical parameters to 0. The polymer then “floats” at a certain height with respect to its external legs, measured by the difference \(j_{ext,min}-j_{int,max}=\) (min of the scales k of the \(N_{ext}\) low-momentum fields) − (max of the scales j of the bottom boxes). The volume-integrated quantity obtained by summing over all possible locations of the polymer with respect to its external legs is not a vacuum diagram; it is to be seen rather as an insertion contributing to the evaluation of the polymers located below. Computations show that, for \(N_{ext}\) small enough (in our case, for \(N_{ext}=2\) only), this contribution diverges in the limit \(j_{ext,min}-j_{int,max}\rightarrow \infty \). Thus such insertions contribute to the large-scale limit. The idea of Wilson’s renormalization scheme is to absorb the diverging part of these insertions into a scale-by-scale redefinition of the parameters of the theory.

Here an essential simplification comes through the fact that only scale 0 diagrams need to be renormalized, but the general philosophy remains the same.

In most theories, N-point functions are of the form \(\langle P(h)\rangle \), where P(h) is a polynomial in the random field \(h=h(t,x)\); here, however, h is the logarithm of w. This is a feature specific to this particular model. Let us write down explicitly the effect of successive differentiations on an expression of the form \(P(\log w)\). Incorporating the interpolating parameters transforms w(t, x) into \(w(\tau ^0,\varvec{s};t,x)\), where \(\varvec{s}\) and \(\tau ^0\) are scale 0 parameters. Now, we need to differentiate with respect to the s- and \(\tau ^0\)-parameters the N-point function \(\langle \log (w_1(\tau ^0,\varvec{s})) \cdots \log (w_N(\tau ^0,\varvec{s}))\rangle \), where we have let \(w_k(\cdot ):=w(\cdot ;t_k,x_k)\). Then (letting \(D_1,D_2,\ldots \) denote derivatives with respect to various s- or \(\tau ^0\)-parameters)

$$\begin{aligned}&D_1 \log (w_k(\cdot ))=\frac{D_1 w_k(\cdot )}{w_k(\cdot )},\ D_2 D_1 \log (w_k(\cdot ))=\frac{D_2 D_1 w_k(\cdot )}{w_k(\cdot )} - \frac{D_1 w_k(\cdot ) D_2 w_k(\cdot )}{(w_k(\cdot ))^2}, \cdots \end{aligned}$$
(4.1)
$$\begin{aligned}&D_n\cdots D_1 \log (w_k(\cdot ))=\sum _{m=1}^n (-1)^{m-1} (m-1)! \sum _{i_1+\cdots +i_m=n} \sum _{I_1,\ldots ,I_m} \nonumber \\&\qquad \qquad \qquad \qquad \qquad \qquad \,\times \,\frac{ \left[ \prod _{i\in I_1} D_i \right] w_k(\cdot ) \cdots \left[ \prod _{i\in I_m} D_i\right] w_k(\cdot )}{(w_k(\cdot ))^m}, \end{aligned}$$
(4.2)

where the last sum ranges over all partitions of \(\{1,\ldots ,n\}\) into m disjoint subsets \(I_1,\ldots ,I_m\) with \(|I_1|=i_1,\ldots ,|I_m|=i_m\). Thus the derivatives apply to a product \(w_{k,1}\cdots w_{k,m}\) of m “replicas” of \(w_k\). The latter expression generalizes easily to some combinatorial expression of the same type for \(D_n\cdots D_1 \left\{ \log (w_1(\cdot ))\cdots \log (w_N(\cdot )) \right\} \) which is of the general form

$$\begin{aligned} \sum _{(I_{k,j})} c_{\varvec{I}} \frac{ \prod _{k\le N} \prod _{j\le m_k}\Big ( \left[ \prod _{i\in I_{k,j}} D_i\right] w_k(\cdot ) \Big ) }{\prod _k (w_k(\cdot ))^{m_k}} \end{aligned}$$
(4.3)

with \(\uplus _k \uplus _{j\le m_k} I_{k,j}=\{1,\ldots ,n\}\), for some coefficients \(c_{\varvec{I}}\) depending on the choice of the sets \((I_{k,j})\), plus terms involving one or several \(\log w\) which have not been differentiated. The conclusion of this discussion is that we need only evaluate the \(D_i\)’s on so-called replicated products \(\prod _{k=1}^{N'} w_k(\varvec{s},\tau ^0;t,x)\) taking into account the “replicas”, or equivalently [by (3.35)] on a product of \(N':=\sum _{k\le N} m_k\) noisy resolvents \(R_{\eta }\). This is what we do in the next paragraphs.
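The \(n=2\) case of (4.1) is easy to test numerically on an explicit positive function w of two interpolation parameters (a toy choice, not from the model): compare \(D_2D_1\log w\) computed by finite differences with the replica-type expression \(\frac{D_2D_1 w}{w}-\frac{D_1 w\,D_2 w}{w^2}\):

```python
import math

def w(s1, s2):
    # explicit positive toy "partition function" of two parameters
    return 1.0 + 0.3 * s1 + 0.5 * s2 + 0.7 * s1 * s2 + 0.2 * s1 * s1

# exact partial derivatives of w
def D1w(s1, s2):  return 0.3 + 0.7 * s2 + 0.4 * s1
def D2w(s1, s2):  return 0.5 + 0.7 * s1
def D12w(s1, s2): return 0.7

s1, s2 = 0.4, 0.9

# replica-type right-hand side of (4.1) for n = 2
rhs = D12w(s1, s2) / w(s1, s2) - D1w(s1, s2) * D2w(s1, s2) / w(s1, s2) ** 2

# mixed second derivative of log w by central finite differences
h = 1e-4
lhs = (math.log(w(s1 + h, s2 + h)) - math.log(w(s1 + h, s2 - h))
       - math.log(w(s1 - h, s2 + h)) + math.log(w(s1 - h, s2 - h))) / (4.0 * h * h)

print(lhs, rhs)  # agree up to O(h^2) discretization error
```

The general formula (4.2) iterates this pattern, each denominator power counting one "replica" of w.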

Let us finally mention our implicit integration convention: whenever a formula contains more space-time variables in the r.-h.s. than in the l.-h.s., supplementary variables are implicitly integrated over.

4.1 The Dressed Equation

We now proceed—as a preparation to the renormalization step—to separate the 0-th scale from the others. The outcome is a “dressed” vertex \(V(\tau ^0)\). Let \(\tau ^0:\mathbb {D}^0\rightarrow [0,1]\). First we need to dress the operators A, B.

Definition 4.1

(Dressed fields)

  1.

    (A-field)

    Let

    $$\begin{aligned} A(\tau ^0;(t,x),\cdot ) := A^{0}((t,x),\cdot ) \langle 0| \, +\, \tau ^0_{(t,x)} A^{\rightarrow 1}((t,x),\cdot ) \end{aligned}$$
    (4.4)
  2.

    (B-field) The dressing procedure is the same, except that it acts on the second set of variables, namely,

    $$\begin{aligned} B(\tau ^0;\cdot ,(t',x')) := B^{0}(\cdot ,(t',x')) |0\rangle \, +\, \tau ^0_{(t',x')} B^{\rightarrow 1}(\cdot ,(t',x')) \end{aligned}$$
    (4.5)

The idea is the following. Start from a space-time dependent field, say, \(\phi (t,x)\), and make it \(\tau \)-dependent as indicated. Then Taylor’s formula, \(\phi (\tau _{t,x}^0;t,x)=\phi (0;t,x)+\tau ^0_{t,x} \partial _{\tau ^0_{t,x}} \phi (0;t,x)+\cdots \) reads simply \(\phi (\tau _{t,x}^0;t,x)=\phi ^{(0)}(t,x)+\tau _{t,x}^0 \phi ^{\rightarrow 1}(t,x)\). In other words, by differentiating \(\phi (\tau _{t,x}^0;t,x)\) with respect to \(\tau _{t,x}^0\), one separates the zeroth scale component \(\phi ^{(0)}(t,x)\) from the low-momentum field \(\phi ^{\rightarrow 1}(t,x)\).

Renormalization involves a priori the introduction of scale counterterms \(\delta g^{(j)}:=g^{(j+1)}-g^{(j)}\) (recall \(g^{(j)}:=\frac{\lambda }{\nu _{eff}} \sqrt{D^{(j)}}\) by definition), \(\delta v^{(j)}:=v^{(j+1)}-v^{(j)}\), \(\delta \nu ^{(j)}:=\nu ^{(j+1)}-\nu ^{(j)}\). Due to our hypotheses on the covariance kernel \(\langle \eta (t,x)\eta (t',x')\rangle \), it actually happens (as proved in Sect. 5) that only two-point scale 0 diagrams absolutely need renormalization; thus we choose to take \(\delta g^{(j)}=0\) for all \(j\ge 0\), and \(\delta \nu ^{(j)},\delta v^{(j)}\equiv 0\) for every \(j\ge 1\). Since we want \(\nu ^{(j)}\rightarrow _{j\rightarrow \infty } \nu _{eff}\), \(v^{(j)}\rightarrow _{j\rightarrow \infty } 0\), this implies simply that \(g^{(j)}=g^{(0)},\, \nu ^{(j)}=\nu _{eff}, \, v^{(j)}=0\) for all \(j\ge 1\). Thus dressing the vertex is a very simple matter. First (in order to avoid having to differentiate characteristic functions of scale 0 boxes coming out of the horizontal cluster, see § 4.2), we introduce

$$\begin{aligned} \Delta ^{\rightarrow 0}:=\bar{\chi }^{(0)}*\Delta , \end{aligned}$$
(4.6)

where \(\bar{\chi }^{(0)}:\mathbb {R}^d\rightarrow \mathbb {R}\) is any normalized smooth “bump” function, such that e.g. \({\mathrm {supp}}(\bar{\chi }^{(0)})\subset B(0,1)\), \(\int dx\, \bar{\chi }^{(0)}(x)=1\); \(\Delta ^{\rightarrow 0}\) is a regularized version of \(\Delta \). It is useful, though, to assume that \(\bar{\chi }^{(0)}\) is isotropic (see Appendix 2), which improves the precision of the asymptotics in Theorem 0.1.
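As a sanity check on (4.6), one can verify that mollification commutes with the Laplacian, so that \(\bar{\chi }^{(0)}*\Delta f = \Delta (\bar{\chi }^{(0)}*f)\). The following is a hedged numerical sketch, not from the paper: the one-dimensional periodic setting, grid size, and bump profile are our own choices.

```python
import numpy as np

# 1d periodic sketch: chi plays the role of the normalized bump chi^{(0)},
# the second finite difference plays Delta.
N = 256
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = x[1] - x[0]

chi = np.zeros(N)
m = np.abs(x) < 1.0                       # supp(chi) inside B(0,1)
chi[m] = np.exp(-1.0 / (1.0 - x[m] ** 2))
chi /= chi.sum() * dx                     # normalization: integral of chi = 1

def lap(f):
    # periodic discrete Laplacian
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx ** 2

def conv(f, g):
    # periodic convolution, normalized as an integral
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g))) * dx

f = np.sin(3 * x)
lhs = conv(chi, lap(f))                   # (chi * Delta) f, i.e. Delta^{->0} f
rhs = lap(conv(chi, f))                   # Delta (chi * f)
print(np.max(np.abs(lhs - rhs)))
```

Both operations are translation invariant, hence commute; the printed discrepancy is pure floating-point noise.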

Definition 4.2

(Dressed vertex and effective propagators) Let, for \(\tau ^0:\mathbb {D}^0\rightarrow [0,1]\),

  1. (i)
    $$\begin{aligned} V_{\eta }(\tau ^0;t,x)&:= B(\tau ^0;\cdot ,(t,x)) \left( g^{(0)}(\eta (t,x)-v^{(0)}) \right) A(\tau ^0;(t,x),\cdot ) \nonumber \\&\quad + B^{\rightarrow 1}(\cdot ,(t,x)) \left( (1-(\tau ^0_{t,x})^2) (\nu _{eff}-\nu ^{(0)}) \Delta ^{\rightarrow 0} \right) A^{\rightarrow 1}((t,x),\cdot ) \nonumber \\ \end{aligned}$$
    (4.7)
  2. (ii)
    $$\begin{aligned} R_{\eta }(\tau ^0):=\left( 1-\int dt\, dx\, V_{\eta }(\tau ^0;t,x)\right) ^{-1}.\end{aligned}$$
    (4.8)
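The inverse in (4.8) is to be understood as the sum of a Neumann series in the vertex, as made explicit in (4.11) below. A minimal finite-dimensional sketch (the matrix size and norm are arbitrary choices of ours):

```python
import numpy as np

# Toy resolvent R = (1 - V)^{-1} as a convergent Neumann series sum_n V^n,
# valid whenever the operator norm of V is < 1 (small coupling regime).
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
V = 0.5 * M / np.linalg.norm(M, 2)        # operator norm 1/2 by construction
R = np.linalg.inv(np.eye(6) - V)

S, term = np.eye(6), np.eye(6)
for _ in range(60):                       # partial sums of the Neumann series
    term = term @ V
    S = S + term
print(np.max(np.abs(R - S)))
```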

Let us comment on formula (4.7), which is the starting point of all subsequent computations.

The first line of (4.7),

$$\begin{aligned} V^{(0)}_{\eta }(\tau ^0;t,x) := B(\tau ^0;\cdot ,(t,x)) \left( g^{(0)}(\eta (t,x)-v^{(0)}) \right) A(\tau ^0;(t,x),\cdot ) \end{aligned}$$
(4.9)

is simply a dressed version of the Cole-Hopf vertex (3.32).

The second line,

$$\begin{aligned} \delta V_{\eta }(\tau ^0;t,x):=B^{\rightarrow 1}(\cdot ,(t,x)) \left( (1-(\tau ^0_{t,x})^2) (\nu _{eff}-\nu ^{(0)}) \Delta ^{\rightarrow 0} \right) A^{\rightarrow 1}((t,x),\cdot ) \end{aligned}$$
(4.10)

vanishes when \(\tau ^0\equiv 1\), which ensures that one recovers the original Cole-Hopf vertex, i.e. \(V_{\eta }(\tau ^0\equiv 1;\cdot )=V_{\eta }(\cdot ).\) It may be decomposed into two pieces, which are proportional but play a very different rôle. The first one, \(-(\tau ^0_{t,x})^2 B^{\rightarrow 1}(\cdot ,(t,x)) \left( (\nu _{eff}-\nu ^{(0)}) \Delta ^{\rightarrow 0} \right) A^{\rightarrow 1}((t,x),\cdot )\), is a low-momentum counterterm which resums the corresponding zero-momentum contribution of scale 0 two-point functions (see §5.1). The second one,

\(+B^{\rightarrow 1}(\cdot ,(t,x)) \left( (\nu _{eff}-\nu ^{(0)}) \Delta ^{\rightarrow 0} \right) A^{\rightarrow 1}((t,x),\cdot )\), leads to an effective propagator

$$\begin{aligned} \tilde{G}_{eff}:= & {} A^{\rightarrow 1} \ \cdot \ \sum _{n\ge 0} \Big (\delta V_{\eta }(\tau ^0\equiv 0)\Big )^n \ \cdot \ B^{\rightarrow 1} \nonumber \\= & {} A^{\rightarrow 1} \left( 1-(\nu _{eff}-\nu ^{(0)}) B^{\rightarrow 1} \Delta ^{\rightarrow 0} A^{\rightarrow 1} \right) ^{-1} B^{\rightarrow 1} \end{aligned}$$
(4.11)

which plays an essential rôle in the large-scale limit discussed in Sect. 6. As proved in Lemma 8.2, \(\tilde{G}_{eff}\) may be replaced in that limit by \(G_{eff}:=(\partial _t-\nu _{eff}\Delta )^{-1}\) with an excellent approximation. Thus \(\nu _{eff}\) is, indeed, an effective viscosity. Namely, it is shown in §7 that

$$\begin{aligned} \tilde{G}_{eff}((\varepsilon ^{-1}t,\varepsilon ^{-1/2}x),(\varepsilon ^{-1}t',\varepsilon ^{-1/2}x'))=G_{eff}((\varepsilon ^{-1}t,\varepsilon ^{-1/2}x),(\varepsilon ^{-1}t',\varepsilon ^{-1/2}x')) +``O(\varepsilon )'', \nonumber \\ \end{aligned}$$
(4.12)

meaning the following (see Lemma 8.2). Assume \(t-t'\approx 1\) and \(\varepsilon \approx 2^{-j}\ll 1\), so that \(\varepsilon ^{-1}(t-t')\approx 2^j\). Then the error term \(``O(\varepsilon )''\) is equal to \(O(\varepsilon )\) times an exponentially decreasing kernel which is bounded by \(G_{\nu ^{(0)}+O(\lambda ^2)}((\varepsilon ^{-1}t,\varepsilon ^{-1/2}x),(\varepsilon ^{-1}t',\varepsilon ^{-1/2}x')) = \varepsilon ^{d/2} G_{\nu ^{(0)}+O(\lambda ^2)}((t,x),(t',x'))\) in a very large space-time region including the “normal regime” \(\frac{|x-x'|^2}{t-t'}\lesssim 1\).
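The parabolic scaling used in the last equality can be checked directly on the heat kernel: in d dimensions, \(G_{\nu }(\varepsilon ^{-1}t,\varepsilon ^{-1/2}x)=\varepsilon ^{d/2}\, G_{\nu }(t,x)\) exactly. A quick numerical sketch (parameter values are ours):

```python
import math

# Heat kernel G_nu(t, x) = (4 pi nu t)^(-d/2) exp(-|x|^2 / (4 nu t)) in d dims;
# under the parabolic rescaling t -> t/eps, x -> x/sqrt(eps) it picks up
# exactly a factor eps^(d/2).
def G(nu, t, x, d):
    r2 = sum(xi * xi for xi in x)
    return (4 * math.pi * nu * t) ** (-d / 2) * math.exp(-r2 / (4 * nu * t))

nu, d, eps = 0.7, 3, 1.0 / 1024
t, x = 1.3, (0.2, -0.5, 0.9)
lhs = G(nu, t / eps, tuple(xi / math.sqrt(eps) for xi in x), d)
rhs = eps ** (d / 2) * G(nu, t, x, d)
print(abs(lhs - rhs) / rhs)
```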

4.2 Horizontal Cluster Expansion

The general principle is outlined in Appendix 1. We only need a scale 0 cluster expansion, which we apply using (7.4) to

$$\begin{aligned} F\equiv F(A^0,B^0|\eta ;A^{\rightarrow 1},B^{\rightarrow 1}):=\log (w_1(\tau ^0,\varvec{s}=1))\cdots \log (w_N(\tau ^0,\varvec{s}=1)) , \end{aligned}$$
(4.13)

where the \(A^{\rightarrow 1}\)’s and \(B^{\rightarrow 1}\)’s are only spectators. To be specific, \(w(\tau ^0,\varvec{s})\) in the above expression is defined as follows:

$$\begin{aligned}&w(\tau ^0,\varvec{s};t,x)=(AR_{\eta }(\tau ^0)(\varvec{s}) B)((t,x),(0,\cdot ))w_0(\cdot ); \end{aligned}$$
(4.14)
$$\begin{aligned}&R_{\eta }(\tau ^0)(\varvec{s})(t,x;t',x'):=\delta (t-t')\delta (x-x')+ \sum _{n=1}^{\infty }\int dx_{1}...dx_{n} \nonumber \\&\quad \prod _{i=1}^n \Big [ \ \ \left( s_{\Delta ^0_{t_{i-1},x_{i-1}},\Delta ^0_{t_i,x_i}} B^0((t_{i-1},x_{i-1}), (t_i,x_i))\, |0\rangle \, + \tau ^0_{t_i,x_i} B^{\rightarrow 1} ((t_{i-1},x_{i-1}),(t_i,x_i)) \right) \nonumber \\&\quad \cdot \ (g^{(0)}(\eta (t_i,x_i)-v^{(0)})) \nonumber \\&\quad \cdot \ \left( s_{\Delta ^0_{t_{i},x_{i}},\Delta ^0_{t_{i+1},x_{i+1}}} A^0((t_{i},x_{i}), (t_{i+1},x_{i+1}))\, \langle 0|\, + \tau ^0_{t_i,x_i} A^{\rightarrow 1} ((t_{i},x_{i}),(t_{i+1},x_{i+1})) \right) \nonumber \\&\quad \ +\ B^{\rightarrow 1}(t_{i-1},x_{i-1}),(t_i,x_i)) \left( (1-(\tau ^0_{t_i,x_i})^2) (\nu _{eff}-\nu ^{(0)}) \Delta ^{\rightarrow 0} \right) A^{\rightarrow 1}((t_i,x_i),(t_{i+1},x_{i+1}))\ \Big ]\nonumber \\ \end{aligned}$$
(4.15)

where (by convention) \((t_0,x_0)\equiv (t,x),(t_{n+1},x_{n+1})\equiv (t',x')\). This way, F appears as a functional of \(A^0,B^0\), to which the BKAR cluster expansion formula (7.4) applies.

The outcome is an expression of F in terms of a sum over scale 0 forests \({\mathbb {F}}^0\),

$$\begin{aligned} \langle F(A^0,B^0|\eta ) \rangle =\sum _{{\mathbb {F}}^0\in \mathcal{F}^0} \left( \prod _{\ell \in L({\mathbb {F}}^0)}\int _0^1 dw_{\ell }\right) \left( \left( \prod _{\ell \in L({\mathbb {F}}^0)} \frac{d}{d s_{\ell }}\right) \langle F(A^0(\varvec{s}(\varvec{w})),B^0(\varvec{s}(\varvec{w})))|\eta \rangle _{\varvec{s}(\varvec{w})} \right) , \nonumber \\ \end{aligned}$$
(4.16)

see Appendix 1 for detailed notations.
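The BKAR forest formula can be tested numerically in its smallest nontrivial case, three boxes, where the interpolated coupling along a two-link tree sits at the minimum of the two w-parameters along the path. The following sketch is ours (test function and quadrature are arbitrary choices); it verifies that the sum over forests reproduces \(F(\varvec{s}=1)\) exactly.

```python
import numpy as np

# BKAR check for F(s) = exp(a s12 + b s13 + c s23) on vertices {1,2,3}.
# Forests: empty; three single links; three two-link spanning trees.
# For a forest, s_ij(w) = min of the w's on the path joining i,j (0 if none).
a, b, c = 0.3, -0.4, 0.5

xg, wg = np.polynomial.legendre.leggauss(48)
xg, wg = (xg + 1) / 2, wg / 2             # Gauss-Legendre nodes on [0,1]

def line(cd):
    # single-link forest: int_0^1 dw d/ds e^{cd s} at s = w
    return np.sum(wg * cd * np.exp(cd * xg))

def tri(f):
    # integral of f(w1, w2) over {0 < w1 < w2 < 1}, via w1 = u * w2
    S, T = np.meshgrid(xg, xg, indexing="ij")
    return np.sum(np.outer(wg, wg) * T * f(S * T, T))

def pair(c1, c2, cmin):
    # two-link tree: derivative couplings c1 (link variable w1), c2 (w2);
    # the remaining coupling is evaluated at min(w1, w2)
    return (tri(lambda w1, w2: c1 * c2 * np.exp(c1 * w1 + c2 * w2 + cmin * w1))
            + tri(lambda w2, w1: c1 * c2 * np.exp(c1 * w1 + c2 * w2 + cmin * w2)))

total = (1.0                                # empty forest: F(0,0,0)
         + line(a) + line(b) + line(c)
         + pair(a, b, c) + pair(a, c, b) + pair(b, c, a))
print(abs(total - np.exp(a + b + c)))       # should vanish: F(1,1,1) recovered
```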

Let \(\ell =(\Delta ,\Delta ')\), \(\Delta ,\Delta '\in \mathbb {D}^0\) be a pair of linked boxes. We use the shortened notation \(V_{\eta }(\tau ^0)(\varvec{s}(\varvec{w})):=V_{\eta }(\tau ^0)(A^0(\varvec{s}(\varvec{w})),B^0(\varvec{s}(\varvec{w})))\) and \(R_{\eta }(\tau ^0)(\varvec{s}(\varvec{w})):=\frac{1}{1-\int dt\, dx\, V_{\eta }(\tau ^0;t,x)(A^0(\varvec{s}(\varvec{w})),B^0(\varvec{s}(\varvec{w})))}\). A direct computation yields

$$\begin{aligned} \frac{\partial }{\partial s_{\ell }} R_{\eta }(\tau ^0)(\varvec{s}(\varvec{w}))= R_{\eta }(\tau ^0)(\varvec{s}(\varvec{w})) \left( \frac{d}{ds_{\ell }} \int dt\, dx\, V(\tau ^0;t,x)(\varvec{s}(\varvec{w})) \right) R_{\eta }(\tau ^0)(\varvec{s}(\varvec{w})) \nonumber \\ \end{aligned}$$
(4.17)

Then

$$\begin{aligned}&\frac{\partial }{\partial s_{\ell }} \int dt\, dx\, V_{\eta }(\tau ^0;t,x)(\varvec{s}(\varvec{w})) = \int dt\, dx\, B(\tau ^{0},\varvec{s}(\varvec{w}))(\cdot ,(t,x))\ \cdot \nonumber \\&\quad \cdot \ \left( g^{(0)}(\eta (t,x)-v^{(0)}) \right) \left( \frac{d}{ds_{\ell }} A(\tau ^{0},\varvec{s}(\varvec{w}))((t,x),\cdot )\right) \nonumber \\&\quad \ +\int dt'\, dx'\, \left( \frac{d}{ds_{\ell }} B(\tau ^{0},\varvec{s}(\varvec{w}))(\cdot ,(t',x')) \right) \ \cdot \nonumber \\&\quad \cdot \ \left( g^{(0)}(\eta (t',x')-v^{(0)}) \right) \left( A(\tau ^{0},\varvec{s}(\varvec{w}))((t',x'),\cdot )\right) \nonumber \\ \end{aligned}$$
(4.18)

Finally, if \((t,x)\in \Delta \), \((t',x')\in \Delta '\), \(\Delta ,\Delta '\in \mathbb {D}^0\),

$$\begin{aligned} \frac{\partial }{\partial s_{\ell }}A(\tau ^0,\varvec{s}(\varvec{w}))((t,x),(t',x'))= & {} \frac{\partial }{\partial s_{\ell }}A^0(\varvec{s}(\varvec{w}))((t,x),(t',x')) \langle 0| \nonumber \\= & {} A^0((t,x),(t',x')) \langle 0| \ \cdot \ \mathbf{1}_{\ell =\{\Delta ,\Delta '\}} \end{aligned}$$
(4.19)

and similarly for B. Hence \((t,x)\), resp. \((t',x')\), in (4.18) is integrated over \(\Delta \), resp. \(\Delta '\).
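The resolvent identity (4.17) is a purely algebraic fact and can be checked on finite matrices against a finite difference (the matrix size, seed, and coupling scale below are arbitrary choices of ours):

```python
import numpy as np

# If R(s) = (1 - V(s))^{-1} with V(s) = V0 + s V1, then dR/ds = R V1 R.
rng = np.random.default_rng(1)
V0 = 0.1 * rng.standard_normal((5, 5))
V1 = 0.1 * rng.standard_normal((5, 5))
I = np.eye(5)

def R(s):
    return np.linalg.inv(I - (V0 + s * V1))

s, h = 0.3, 1e-5
fd = (R(s + h) - R(s - h)) / (2 * h)      # centered finite difference
exact = R(s) @ V1 @ R(s)                  # the identity (4.17)
print(np.max(np.abs(fd - exact)))
```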

On the other hand [see (7.6)], \(\frac{d}{ds_{\ell }}\) also acts on the covariance kernel of \(\eta \), according to the rules:

$$\begin{aligned}&\frac{d}{ds_{\ell }} \Big \langle \big (\, \cdot \, \big ) \Big \rangle _{\varvec{s}(\varvec{w})} \equiv \int _{\Delta _{\ell }}dz_{\ell } \int _{\Delta '_{\ell }}dz'_{\ell } \, \langle \eta (z_{\ell })\eta (z'_{\ell })\rangle _{\varvec{s}=1} \ \cdot \ \Big \langle \frac{\delta }{\delta \eta (z_{\ell })} \frac{\delta }{\delta \eta (z'_{\ell })} \big (\, \cdot \, \big ) \Big \rangle _{\varvec{s}(\varvec{w})} \end{aligned}$$
(4.20)
$$\begin{aligned}&\frac{\delta }{\delta \eta (z_{\ell })} R_{\eta }(\tau ^0)(\varvec{s}(\varvec{w}))= R_{\eta }(\tau ^0)(\varvec{s}(\varvec{w})) \left( \frac{\delta }{\delta \eta (z_{\ell })} \int dz\, V(\tau ^0)(\varvec{s}(\varvec{w}))(z) \right) R_{\eta }(\tau ^0)(\varvec{s}(\varvec{w})) \nonumber \\ \end{aligned}$$
(4.21)
$$\begin{aligned}&\frac{\delta }{\delta \eta (z_{\ell })} \int dz\, V(\tau ^0)(\varvec{s}(\varvec{w}))(z)= B(\tau ^0,\varvec{s}(\varvec{w}))(\cdot ,z_{\ell }) g^{(0)} A(\tau ^0,\varvec{s}(\varvec{w}))(z_{\ell },\cdot ), \nonumber \\ \end{aligned}$$
(4.22)

with now averages defined with respect to the \(\varvec{s}\)-dependent Gaussian measure \(\langle \, \cdot \, \rangle _{\varvec{s}(\varvec{w})}.\)

Clearly, \(\frac{d}{ds_{\ell }}\) (or \(\frac{\delta }{\delta \eta (z)}\), \(z=z_{\ell }\) or \(z'_{\ell }\)) can also act directly on one of the \(A(\tau ^0,\varvec{s}(\varvec{w}))(\cdot ,\cdot )\), \(B(\tau ^0,\varvec{s}(\varvec{w}))(\cdot ,\cdot )\) or \(\eta \)’s produced by previous differentiations.

Summarizing, turning to the specific case (4.13), the result of the expansion (4.16) may be rewritten, using the notations of (4.3), separating the action of the s-derivatives on the covariance kernel of \(\eta \) from their action on the propagators \(A^0,B^0\), and splitting the s-derivatives according to the index \((k,j)\) of the string on which they act—or possibly the pair of indices \((k,j),(k',j')\) for \(\eta \)-pairings between two different strings—

$$\begin{aligned}&\sum _{{\mathbb {F}}^0\in \mathcal{F}^0} \left( \prod _{\ell \in L({\mathbb {F}}^0)}\int _0^1 dw_{\ell }\right) \sum _{(L_G)_{k,j},(L_{\eta })_{k,j},(L_{\eta })_{(k,j),(k',j')}} c_{\varvec{I}} \ \ \Big ( \prod _{\ell \in L_{\eta } } \int _{\Delta _{\ell }}dz_{\ell }\ \int _{\Delta '_{\ell }}dz'_{\ell } \langle \eta (z_{\ell })\eta (z'_{\ell })\rangle _{\varvec{s}=1} \Big ) \nonumber \\&\ \left\langle \frac{ \prod _{k\le N} \prod _{j\le m_k} \Big ( \Big [ (D_G)_{k,j} (D_{\eta })_{k,j} (D_{\eta })_{(k,j),\cdot } (D_{\eta })_{\cdot ,(k,j)} \Big ] w_k(\tau ^0,\varvec{s};\cdot ) \Big )}{\prod _{k\le N} (w_k(\tau ^0,\varvec{s};\cdot ))^{m_k}} \right\rangle _{\varvec{s}(\varvec{w})} , \end{aligned}$$
(4.23)

where: \(c_{\varvec{I}}\) is as in (4.3);

\(L({\mathbb {F}}^0)=L_G\uplus L_{\eta }\);

\(L_G=\uplus _{(k,j)} (L_G)_{k,j}\) (propagator links);

\(L_{\eta }=\uplus _{k,j} (L_{\eta })_{k,j}\uplus _{(k,j),(k',j')} (L_{\eta })_{(k,j),(k',j')}\) (noise links);

\((L_{\eta })_{(k,j),\cdot }:=\uplus _{(k',j')\not =(k,j)} (L_{\eta })_{(k,j),(k',j')}, (L_{\eta })_{\cdot ,(k,j)}:=\uplus _{(k',j')\not =(k,j)} (L_{\eta })_{(k',j'),(k,j)}\) (noise links between two strings);

\((D_G)_{k,j}:= \prod _{\ell \in (L_G)_{k,j}} \frac{\partial }{\partial s_{\ell }}\) (derivatives acting on propagators \(A^0\) or \(B^0\));

\((D_{\eta })_{k,j}:= \prod _{\ell \in (L_{\eta })_{k,j}} \frac{\delta ^2}{\delta \eta (z_{\ell })\delta \eta (z'_{\ell })} \) (double derivatives acting on two noise fields located on the same string);

\((D_{\eta })_{(k,j),\cdot }:=\prod _{\ell \in (L_{\eta })_{(k,j),\cdot }} \frac{\delta ^2}{\delta \eta (z_{\ell })\delta \eta (z'_{\ell })}, \ (D_{\eta })_{\cdot ,(k,j)}:= \prod _{\ell '\in (L_{\eta })_{\cdot ,(k,j)}} \frac{\delta ^2}{\delta \eta (z_{\ell '})\delta \eta (z'_{\ell '})}\) (resp. on two different strings, including that of index \((k,j)\));

\(m_k={\mathrm {Card}}\Big \{j \,|\, (L_G)_{k,j}\cup (L_{\eta })_{k,j} \cup (L_{\eta })_{(k,j),\cdot } \cup (L_{\eta })_{\cdot ,(k,j)} \not =\emptyset \Big \}\)

with \(w_k(\tau ^0;\varvec{s})\) defined as in (4.14,4.15).

In other words:

  1. (i)

    [see (4.17,4.18,4.19)], each s-derivative along a link acting on a random resolvent (i) singles out a localized \(A^0\)- or \(B^0\)-propagator between the two boxes connected by the link, and produces (ii) a supplementary \(B\)-, resp. \(A\)-propagator ending, resp. starting in one of the two boxes; (iii) a “renormalized” noise field

    $$\begin{aligned} \tilde{\eta }(t,x):=g^{(0)}(\eta (t,x)-v^{(0)}) \end{aligned}$$
    (4.24)

    sandwiched between the localized scale 0 propagator, and another propagator with unspecified scale; (iv) and supplementary resolvents \(R_{\eta }(\tau ^0)(\varvec{s}(\varvec{w}))\), whose scale 0 components \(R_{\eta }^{(0)}\) will later on be produced explicitly by the vertical expansion. Because all these scale 0 operators are causal, they may be seen as beads stringed on an (open) string propagating causally, with dangling \(\tilde{\eta }\)-ends on each bead. See Fig. 1 below.

    Sequences \(\int _{\Delta } dt\, dx\, B^{\bullet }(\cdot ,(t,x)) \tilde{\eta }(t,x)A^{\bullet }((t,x),\cdot )\) integrated in a box \(\Delta \in \mathbb {D}^0\) are called vertices, by reference to Definition 4.2.

  2. (ii)

    an s-derivative acting directly on some A or B turns it into an \(A^0\) or \(B^0\) linking two specified boxes;

  3. (iii)

    the cluster in \(\eta \) [see in particular (4.20,4.21,4.22)] produces from 0 to 2 vertices (depending on whether the \(\frac{\delta }{\delta \eta (z_{\ell })},\frac{\delta }{\delta \eta (z'_{\ell })}\) act on a resolvent or directly on some dangling \(\tilde{\eta }\)), and a local link between two vertices, by which we mean that one gets some pairing of (old or new) vertices \(\int _{\Delta } dz\, B^{\bullet }(\cdot ,z) A^{\bullet }(z,\cdot )\), \(\int _{\Delta '} dz'\, B(\cdot ,z')A(z',\cdot )\), multiplied with the finite-range kernel \(\langle \eta (z)\eta (z')\rangle _{\varvec{s}=1}\), which forces \(d(\Delta ,\Delta ')=O(1)\).

A general term in (4.23) takes the form of a product of \(N'\) strings with beads or inserted vertices and dangling \(\tilde{\eta }\)-ends; schematically, letting \(z_i^j:=(t_i^j,x_i^j)\) (\(1\le i\le N'\), \(j\ge 1\)) be intermediate coordinates implicitly integrated over, with \(t_i>t_i^1>t_i^2>\cdots >t_i^{3n_i} \equiv 0\),

$$\begin{aligned}&\left( \prod _{j=1}^{n_1-1} \tilde{\eta }\Big (z_1^{3j}\Big ) \right) \ A^{\bullet }\Big ((t_1,x_1),z_1^1\Big ) \nonumber \\&\quad \left( R_{\eta }\Big (z^1_1,z^2_1\Big ) \prod _{j=1}^{n_1-1} B^{\bullet }\Big (z^{3j-1}_1, z_1^{3j}\Big )A^{\bullet }\Big (z_1^{3j},z_1^{3j+1}\Big ) R_{\eta }\Big (z_1^{3j+1},z_1^{3j+2}\Big ) \right) B^{\bullet } \Big (z_1^{3n_1-1},z_1^{3n_1}\Big ) w_0\Big (z_1^{3n_1}\Big )\nonumber \\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \vdots \nonumber \\&\quad \left( \prod _{j=1}^{n_{N'}-1} \tilde{\eta }\Big (z_{N'}^{3j}\Big ) \right) \ A^{\bullet }\Big ((t_{N'},x_{N'}),z_{N'}^1\Big ) \nonumber \\&\quad \left( R_{\eta }\Big (z^1_{N'},z^2_{N'}\Big ) \prod _{j=1}^{n_{N'}-1} B^{\bullet }\Big (z^{3j-1}_{N'}, z_{N'}^{3j}\Big )A^{\bullet }\Big (z_{N'}^{3j},z_{N'}^{3j+1}\Big ) R_{\eta }\Big (z_{N'}^{3j+1},z_{N'}^{3j+2}\Big ) \right) B^{\bullet } \Big (z_{N'}^{3n_{N'}-1},z_{N'}^{3n_{N'}}\Big ) w_0\Big (z_{N'}^{3n_{N'}}\Big )\nonumber \\ \end{aligned}$$
(4.25)

averaged with respect to the measure \(\langle \ \cdot \ \rangle _{\varvec{s}(\varvec{w})}\), where some of the B’s and A’s are localized, 0-th scale propagators, the others being “grey” for the moment (i.e. of unspecified scale), and \(\tilde{\eta }(\cdot )=g^{(0)}(\eta (\cdot )-v^{(0)})\), see Eq. (4.24). As seen from the previous formulas in this subsection, such terms should be summed over forests and integrated with respect to the interpolation coefficients \(\varvec{w}\). Intermediate coordinates \(z^j_i\) are integrated over 0-scale boxes \(\Delta _{\ell },\Delta '_{\ell }\). Also missing are coefficients \(c_{\varvec{I}}(\varvec{z})\), now depending on \(\varvec{z}:= (z_{\ell },z'_{\ell })_{\ell \in L(\mathcal{F}^0)}\) through the pairing factors \(\langle \eta (z_{\ell })\eta (z'_{\ell })\rangle \) due to the cluster expansion in \(\eta \). A more explicit expression will be given at the very end of Sect. 4, after we have completed the vertical cluster expansion.

4.3 Vertical Cluster or Momentum-Decoupling Expansion

After performing the scale 0 horizontal cluster expansion, one must still perform on the contribution associated to a given forest \({\mathbb {F}}^0\) another expansion called vertical cluster or momentum-decoupling expansion. This consists simply in applying the operator

$$\begin{aligned} {\mathrm {Vert}}^0= \prod _{\Delta \in \mathbb {D}^0} \left( \sum _{\mu _{\Delta }=0}^{2} \partial ^{\mu _{\Delta }}_{\tau _{\Delta }}\big |_{\tau _{\Delta }=0} + \int _0^1 d\tau _{\Delta } \frac{(1-\tau _{\Delta })^{2}}{ 2!} \partial _{\tau _{\Delta }}^{3} \right) . \end{aligned}$$
(4.26)
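The operator \({\mathrm {Vert}}^0\) is, in each box variable \(\tau _{\Delta }\), Taylor's formula at order 2 with integral remainder. A one-variable numerical check (a sketch of ours, with the \(1/\mu !\) factors written explicitly):

```python
import math

# Taylor's formula with integral remainder, checked on f(tau) = exp(k tau):
#   f(1) = f(0) + f'(0) + f''(0)/2 + int_0^1 (1-tau)^2/2! f'''(tau) dtau.
k = 1.3
f = lambda t, n=0: k ** n * math.exp(k * t)   # n-th derivative of exp(k t)

M = 20000                                     # midpoint rule for the remainder
rem = sum((1 - t) ** 2 / 2 * f(t, 3)
          for t in ((i + 0.5) / M for i in range(M))) / M
taylor = f(0) + f(0, 1) + f(0, 2) / 2 + rem
print(abs(taylor - f(1)))
```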

Fix a box \(\Delta ^0\in \mathbb {D}^0\). A derivative \(\partial _{\tau ^0}\), acting on a dressed field \(A(\tau ;\cdot )\), simply beheads \(A^0\)—the highest-momentum component of A—and yields \(A^{\rightarrow 1}\). On the other hand, if \((t,x)\in \Delta ^0\),

$$\begin{aligned}&\partial _{\tau _{\Delta ^0}} R_{\eta }(\tau )=R_{\eta }(\tau ) \left( \partial _{\tau _{\Delta ^0}} V_{\eta }(\tau )\right) R_{\eta }(\tau )\end{aligned}$$
(4.27)
$$\begin{aligned} \partial _{\tau _{\Delta ^0}} V_{\eta }(\tau ;t,x)&= B^{\rightarrow 1}(\cdot ,(t,x)) \left( g^{(0)} (\eta (t,x)-v^{(0)}) \right) A(\tau ;(t,x),\cdot ) \nonumber \\&\quad + B(\tau ;\cdot ,(t,x)) \left( g^{(0)}(\eta (t,x)-v^{(0)}) \right) A^{\rightarrow 1}((t,x),\cdot ) \nonumber \\&\quad + B^{\rightarrow 1}(\cdot ,(t,x)) \left( -2\tau ^0_{t,x} (\nu ^{(0)}-\nu _{eff})\Delta ^{\rightarrow 0} \right) A^{\rightarrow 1}((t,x),\cdot )\nonumber \\ \end{aligned}$$
(4.28)

Therefore, the vertical cluster expansion acts by inserting vertices, just as the horizontal cluster expansion does. On the other hand, these vertices comprise at least one low-momentum field. The 0-scale boxes in which these low-momentum fields are integrated (here \(\Delta ^0\)) constitute the external boxes or (looking more precisely at the nature (A or B) and the scale of the low-momentum fields) the external structure of the associated polymers. Such low-momentum fields are called external legs of the polymer. The order of differentiation in \(\tau _{\Delta }\) is denoted by \(\mu _{\Delta }\); for the Taylor remainder in (4.26) one has \(\mu _{\Delta }=3\). Since each \(\tau \)-derivative contributes an external leg, the number of external legs of a polymer is equal to the number of \(\tau \)-derivatives that have been applied to it. Thus \(\mu _{\Delta }\) can be interpreted as a multiplicity, by which we mean that a polymer containing \(\Delta \) has \(\mu _{\Delta }\) external legs starting from the box \(\Delta \).

Now that we have completed the cluster expansion, a fundamental observation to be made is the following. Let \(\Delta :=\Delta ^0_{t,x},\Delta ':=\Delta ^0_{t',x'}\). If \(\Delta ,\Delta '\) belong to different components of \({\mathbb {F}}^0\), then \(R_{\eta }(\tau )(\varvec{s}(\varvec{w}))((t,x),(t',x'))=0\). In the contrary case, letting \({\mathbb {T}}^0\) be the tree containing \(\Delta \) and \(\Delta '\), \(R_{\eta }(\tau )(\varvec{s}(\varvec{w}))((t,x),(t',x'))\) depends only on the values of \(\eta \) in the image \(|{\mathbb {T}}^0|:=\{\Delta \in \mathbb {D}^0\ |\ \Delta \in {\mathbb {T}}^0\}\) of the polymer.

We illustrate the double horizontal/vertical cluster expansion in Fig. 1, where the following pictorial conventions are used. Wavy lines are pairings \(\langle \eta (t,x)\eta (t',x')\rangle \) produced by the cluster expansion in \(\eta \); the attached \(d/ds\) is a reminder of the action of the cluster operator \(d/ds\) which produced the pairing. Wavy half-lines with added symbol \(\tilde{\eta }\) stand for dangling \(\tilde{\eta }\)-ends; when evaluating averaged N-point functions, they are contracted inside their connected component (polymer). Scale 0 thick lines are space-time convolutions \(A^0 R^{(0)}(\tau ^0=0)B^0\); an attached \(d/ds\) signals the fact that either \(A^0\) or \(B^0\) has been produced by the propagator cluster. Scale j thick lines (\(j\ge 1\)) are either \(A^j\) or \(B^j\) or \(G^j=A^j B^j\).

Fig. 1
figure 1

Cluster expansions

The final outcome of this section is the following compact expression, where \(V({\mathbb {F}}^0)\) is the set of vertices connected by a forest \({\mathbb {F}}^0\), and \(n_{\Delta }\), \(\Delta \in V({\mathbb {F}}^0)\) is the coordination number of a given vertex of the forest:

$$\begin{aligned}&\Big \langle \log (w_1(\tau ^0,\varvec{s}=1;t_1,x_1))\cdots \log (w_N(\tau ^0,\varvec{s}=1;t_N,x_N)) \Big \rangle =\nonumber \\&\quad \sum _{{\mathbb {F}}^0\in \mathcal{F}^0} \left( \prod _{\ell \in L({\mathbb {F}}^0)}\int _0^1 dw_{\ell }\right) \sum _{L_G, L_{\eta },L_{vert},\varvec{\mu }} c_{\varvec{I}} \ \ \Big ( \prod _{\ell \in L_{\eta } } \int _{\Delta _{\ell }}dz_{\ell }\ \int _{\Delta '_{\ell }}dz'_{\ell } \langle \eta (z_{\ell })\eta (z'_{\ell })\rangle _{\varvec{s}=1} \Big ) \nonumber \\&\quad \ \left\langle \frac{ \prod _{k\le N} \prod _{j\le m_k} \Big ( \Big [ (D_G)_{k,j} (D_{\eta })_{k,j} (D_{\eta })_{(k,j),\cdot } (D_{\eta })_{\cdot ,(k,j)} (D_{\tau })_{k,j} \Big ] w_k(\tau ^0,\varvec{s};\cdot ) \Big )}{\prod _{k\le N} (w_k(\tau ^0,\varvec{s};\cdot ))^{m_k}} \right\rangle _{\varvec{s}(\varvec{w})},\nonumber \\ \end{aligned}$$
(4.29)

where: \(\varvec{\mu }:=(\mu _{\Delta })_{\Delta \in \mathbb {D}^0}\); \((D_{\tau })_{k,j}=\prod _{\Delta \in L_{k,j}} D_{\tau _{\Delta }}(\mu _{\Delta })\), and

$$\begin{aligned} D_{\tau _{\Delta }}(\mu _{\Delta })=\partial _{\tau _{\Delta }}^{\mu _{\Delta }} \Big |_{\tau _{\Delta }=0} \quad (\mu _{\Delta }=0,1,2),\qquad D_{\tau _{\Delta }}(3)=\int _0^1 d\tau _{\Delta } \, \frac{(1-\tau _{\Delta })^2}{2!} \partial ^3_{\tau _{\Delta }},\nonumber \\ \end{aligned}$$
(4.30)

with \(\tilde{\eta }=g^{(0)}(\eta -v^{(0)})\), featuring a product of strings indexed by kj, where, for each box \(\Delta \in {\mathbb {F}}^0\):

  1. (i)

    the horizontal cluster expansion has produced \(0\le n'_{\Delta }\le n_{\Delta }\) vertices integrated over \(z'_n(\Delta )\in \Delta \), \(n=0,\ldots ,n'_{\Delta }\);

  2. (ii)

    the vertical cluster expansion has produced \(0\le n''_{\Delta }\le \mu _{\Delta } \le 3\) vertices integrated over \(z''_n(\Delta )\in \Delta \), \(n=0,\ldots ,n''_{\Delta }\);

and \(\{z'_n(\Delta ),z''_n(\Delta )\}_{\Delta ,n} = \{z_{k,j}^i\}\).

5 Renormalization

We now proceed to the renormalization stage. As explained in the introduction to Sect. 4, renormalization consists in general in computing, and compensating by equal counterterms, the “diverging part” of the sum of diagrams with a given external structure. In a multi-scale setting, one considers instead the so-called “local part” of the sum of all polymers with internal legs of scale \(\le j\) and given external structure, made up of a product of external legs of scale \(>j\); such local parts are compensated by counterterms of scale j.

Given the simplicity of this stage in the present model, we spare the reader a full-length explanation of these ideas (that can be found e.g. in [45] or [58]), and describe instead what we do in simple terms.

The main step is the estimation of the two-point function. The idea is roughly the following. Low-momentum propagators \(G^{\rightarrow 1}((t_i,x_i),(t_f,x_f))=\sum _{j\ge 1} A^j\langle j|\, B^j|j\rangle \, ((t_i,x_i),(t_f,x_f))\), occupying the section of a string between initial time \(t_i\) and final time \(t_f\), may be cut anywhere into two parts by a scale 0 vertex insertion, according to the rule

$$\begin{aligned}&G^{\rightarrow 1}((t_i,x_i),(t_f,x_f))\rightsquigarrow \sum _{j,k\ge 1} A^j((t_i,x_i),\cdot )\langle j|\, \Big [ B^j|j\rangle \Big ( g^{(0)}\eta A^0 \langle 0|\, B^0 |0\rangle \, g^{(0)}\eta +\cdots \Big ) A^k \langle k|\, \Big ](\cdot ,\cdot ) \ \cdot \nonumber \\&\qquad \cdot \ B^k(\cdot ,(t_f,x_f))\, |k\rangle \ = G^{\rightarrow 1}((t_i,x_i),\cdot )\ K_{\eta }(\cdot ,\cdot )\ G^{\rightarrow 1}(\cdot ,(t_f,x_f)) \end{aligned}$$
(5.1)

The random kernel between parentheses,

$$\begin{aligned}&K_{\eta }((t,x),(t',x')):= \Big (g^{(0)}\eta A^0 \langle 0| \frac{1}{1- B^0 |0\rangle \, g^{(0)}\eta A^0 \langle 0| } B^0 |0\rangle \, g^{(0)} \eta \Big ) ((t,x),(t',x')) \nonumber \\&= \Big ( g^{(0)}\eta A^0 \langle 0|\, B^0 |0\rangle \, g^{(0)}\eta +\cdots \Big )((t,x),(t',x')) \end{aligned}$$
(5.2)

containing only \(A^0\)- and \(B^0\)-components, is (as can be shown) O(1) in average, and decreases exponentially fast when \(d((t,x),(t',x'))\) is large, while

$$\begin{aligned} G^j((t',x'),\cdot )\simeq G^j((t,x),\cdot ) \end{aligned}$$
(5.3)

if \(d((t,x),(t',x'))=O(1)\) and \(j\gg 1\). Thus it makes sense to assume that its main contribution to the string is the averaged zero-momentum quantity \(v(t):=\int dt'\, dx'\, \langle K_{\eta }((t,x),(t',x'))\rangle \) (later on identified as \(g^{(0)}v^{(0)}\), up to some small correction). Assuming for simplicity that \(v(t)\equiv v\) is a constant, we must consider the sum of the geometric series \(G^{\rightarrow 1}+ G^{\rightarrow 1} v G^{\rightarrow 1} + G^{\rightarrow 1} vG^{\rightarrow 1}vG^{\rightarrow 1}+\cdots \). Since \((G *G)((t,x),(t',x'))=\int _{t'}^t dt''\, \int dx''\, p_{t-t''}(x-x'')p_{t''-t'}(x''-x')=(t-t')G((t,x),(t',x'))\), one sees that the large-scale (i.e. \(t-t'\rightarrow +\infty \)) correction to \(G^{\rightarrow 1}\) is infinite. On the other hand, the geometric series may be resummed exactly, \(G+GvG+GvGvG+\cdots = (\partial _t-\nu ^{(0)}\Delta -v)^{-1}\). This explains why we incorporated \(v^{(0)}\) into the equation. Considering instead a second-order Taylor expansion in \(x-x'\) in (5.3) yields [see similarly (5.12)] a contribution \(\delta \nu \), compensated by \(\delta V\) [see (4.7, 4.10)], creating a geometric series \(\simeq G+G\delta \nu \Delta G+G\delta \nu \Delta G\delta \nu \Delta G+\cdots =(\partial _t-\nu ^{(0)}\Delta - \delta \nu \Delta )^{-1}\); thus \(\nu _{eff}:=\nu ^{(0)}+\delta \nu \) may be interpreted as an effective viscosity. Finally, further corrections, of the type \(G^{\rightarrow 1}\rightsquigarrow G^{\rightarrow 1} \partial ^{\kappa }G^{\rightarrow 1}\) with \(|\kappa |\ge 3\), are finite in the large-scale limit by our first key power-counting estimate (3.19), and need not be considered.
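The identity \((G*G)((t,x),(t',x'))=(t-t')\,G((t,x),(t',x'))\), which is the source of the linearly divergent correction resummed into \((\partial _t-\nu ^{(0)}\Delta -v)^{-1}\), follows from the semigroup property of the heat kernel and can be checked numerically in \(d=1\). A sketch of ours (grid, cutoffs, and parameter values are arbitrary):

```python
import numpy as np

# p_t(x) = (4 pi nu t)^(-1/2) exp(-x^2 / (4 nu t)): 1d heat kernel.
# Check: int_{t'}^{t} dt'' int dx'' p_{t-t''}(x-x'') p_{t''-t'}(x''-x')
#        = (t - t') p_{t-t'}(x-x').
nu = 0.5
def p(t, x):
    return np.exp(-x ** 2 / (4 * nu * t)) / np.sqrt(4 * np.pi * nu * t)

t, tp = 2.0, 0.0
x, xp = 0.4, -0.3
xs = np.linspace(-15, 15, 7501)           # spatial grid with large cutoff
dx = xs[1] - xs[0]
M = 200
dt = (t - tp) / M
ts = tp + (np.arange(M) + 0.5) * dt       # midpoint rule in t''

conv = sum(dt * dx * np.sum(p(t - tt, x - xs) * p(tt - tp, xs - xp))
           for tt in ts)
print(abs(conv - (t - tp) * p(t - tp, x - xp)))
```

The inner spatial integral equals \(p_{t-t'}(x-x')\) for every intermediate time (semigroup property), so the time integral simply contributes the factor \(t-t'\).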

In a general renormalizable theory, only a finite number of N-point functions yield infinite contributions in the large-scale limit. It turns out here, however, that only \(N=2\) point functions yield an infinite contribution, because of our second key power-counting estimate (6.29). We content ourselves with briefly discussing diagrammatics for \(N=4\) in §5.2.

5.1 Two-Point Function

Consider a piece \(\mathcal{S}\) of a string \(A((t_{init},x_{init}),\cdot ) (1-V_{\eta })^{-1}(\cdot ,\cdot ) B(\cdot ,(0,y)) e^{\frac{\lambda }{\nu ^{(0)}} h_0(y)}\) running from initial position \((t_{init},x_{init})\) to final position (0, y), connected by the horizontal cluster alone (i.e. obtained by letting \(\tau ^0\equiv 0\)). By construction, it has two external legs, one at each temporal end. Then (letting \(\tilde{\eta }(t,x):=g^{(0)}(\eta (t,x)-v^{(0)})\)—see Definition 4.2—, and \(L_{\eta }({\mathbb {F}}^{0})\) be the set of cluster links coming from the perturbation of the measure on \(\eta \)—compare with Eq. (4.23), while now \(k=j=1\) since there is only one string, and \(I_{1,1}=L_{\eta }({\mathbb {F}}^0)\)—)

$$\begin{aligned}&\mathcal{S}:= \Big (\prod _{\ell \in L({\mathbb {F}}^{0})} \int dw_{\ell } \Big ) \sum _{L_G,L_{\eta }} \Big (\prod _{\ell \in L_{\eta }} \int _{\Delta _{\ell }} dz_{\ell }\, \int _{\Delta '_{\ell }} dz'_{\ell }\, \langle \eta (z_{\ell }) \eta (z'_{\ell }) \rangle _{\varvec{s}=\varvec{1}} \Big )\ \cdot \ \nonumber \\&\qquad D_G D_{\eta } \Big \{ B^{\rightarrow 1}(\cdot ,(t,x)) \frac{1}{1-\int dt''\, dx''\, V^{(0)}_{\eta }(\tau ^0=0)(\varvec{s}(\varvec{w}))(t'',x'')}((t,x),(t',x')) A^{\rightarrow 1}((t',x'),\cdot ) \Big \} \nonumber \\&\quad = B^{\rightarrow 1}(\cdot ,(t,x)) \ \cdot \ \tilde{\eta }(t,x)\ \cdot \ A^{\rightarrow 1}((t,x),\cdot ) \nonumber \\&\qquad +\sum _{n\ge 0} \int dt_1\, dx_1\cdots \int dt_n\, dx_n\, \Big (\prod _{\ell \in L({\mathbb {F}}^{0})} \int dw_{\ell } \Big )\sum _{L_G,L_{\eta }} \Big (\prod _{\ell \in L_{\eta }} \int _{\Delta _{\ell }} dz_{\ell } \int _{\Delta '_{\ell }} dz'_{\ell } \, \langle \eta (z_{\ell }) \eta (z'_{\ell }) \rangle \Big )\nonumber \\&\qquad D_G D_{\eta } \Big \{ B^{\rightarrow 1}(\cdot , (t,x)) \ \cdot \ \Big [ \tilde{\eta }(t,x) A^0(\varvec{s}(\varvec{w}))((t,x),\cdot ) \langle 0| \ \cdot \ R_{\eta }^{(0)}(\tau ^0=0)(\varvec{s}(\varvec{w}))(\cdot ,\cdot ) \ \cdot \ \nonumber \\&\qquad \big ( B^0(\varvec{s}(\varvec{w}))(\cdot ,(t_1,x_1)) |0\rangle \, \tilde{\eta }(t_1,x_1)A^0(\varvec{s}(\varvec{w}))((t_1,x_1),\cdot ) \langle 0| \big ) \ \cdot \ R_{\eta }^{(0)}(\tau ^0=0)(\cdot ,\cdot ) \nonumber \\&\qquad \cdots \ \big ( B^0(\varvec{s}(\varvec{w}))(\cdot ,(t_i,x_i)) |0\rangle \, \tilde{\eta }(t_i,x_i)A^0(\varvec{s}(\varvec{w}))((t_i,x_i),\cdot ) \langle 0| \big )\ \cdot \ R_{\eta }^{(0)}(\tau ^0=0)(\cdot ,\cdot ) \nonumber \\&\qquad \cdots \ \big ( B^0(\varvec{s}(\varvec{w}))(\cdot ,(t_n,x_n)) |0\rangle \, \tilde{\eta }(t_n,x_n)A^0(\varvec{s}(\varvec{w}))((t_n,x_n),\cdot ) \langle 0| \big )\ \cdot \ R_{\eta }^{(0)}(\tau ^0=0)(\cdot ,\cdot ) \nonumber \\&\qquad B^0(\varvec{s}(\varvec{w}))(\cdot ,(t',x')) |0\rangle \, \tilde{\eta }(t',x') \Big ] \cdot \ A^{\rightarrow 1}((t',x'),\cdot ) \Big \} \end{aligned}$$
(5.4)

where n is the number of internal vertices, and \(R_{\eta }^{(0)}(\tau ^0=0)\) are “scale 0 resolvents”,

$$\begin{aligned} R^{(0)}_{\eta }(\tau ^0=0)(\varvec{s}(\varvec{w}))((t,x),(t',x'))=\Big (1-\int dt\, dx\, V^{(0)}_{\eta }(\tau ^0=0)(\varvec{s}(\varvec{w}))(t,x)\Big )^{-1}.\qquad \end{aligned}$$
(5.5)

The operators \(D_G\), \(D_{\eta }\) are as in (4.29), with only one string involved, say, \(D_G\equiv (D_G)_{1,1}, D_{\eta }\equiv (D_{\eta })_{1,1}\). Each \(\partial /\partial s_{\ell }\) appearing in \(D_G\) suppresses one of the s-factors in front of the propagators; each \(\frac{\delta }{\delta \eta (z_{\ell })\delta \eta (z'_{\ell })}\) appearing in \(D_{\eta }\) takes out the corresponding pair of \(\tilde{\eta }\)’s. How this is done is specified by the choice of \(({\mathbb {F}}^0,L_G,L_{\eta })\). Thus the action of \(D_G,D_{\eta }\) is extremely simple and produces no extra combinatorial factors.

It is convenient to describe the lone term in the first line of (5.4), obtained simply by differentiating twice with respect to \(\tau ^0_{\Delta ^0}\) in a box \(\Delta ^0=\Delta ^0_{t,x}\) untouched by the horizontal cluster, as an “\(n=-1\)” contribution; note that it contains implicitly a Dirac function \(\delta ((t,x),(t',x'))\).

Assume that the \(\eta \)’s inside the brackets \(\Big [ \ \cdot \ \Big ]\) contract pairwise, or equivalently, that no \(\eta \)-field on \(\mathcal S\) pairs to an \(\eta \)-field on another string. By the first property below Definition 3.2, namely that \(\langle \eta (t_i,x_i)\eta (t_{i'},x_{i'})\rangle =0\) if \((t_i,x_i)\) is connected to \((t_{i'},x_{i'})\) by some low-momentum propagator \(A^{\rightarrow 1}\) or \(B^{\rightarrow 1}\), only scale 0 diagrams contribute; this explains why we need not consider generalizations of (5.4) with brackets \(\Big [ \ \cdot \ \Big ]\) including lower-momentum A’s and B’s. Note that, since \(R_{\eta }^{(0)}(\tau ^0=0)={\mathrm {Id}}+B^0(\varvec{s}(\varvec{w}))(\cdot ,\cdot ) |0\rangle \, \tilde{\eta }(\cdot ,\cdot ) A^0 \langle 0| +\cdots \), other choices of external legs are not allowed; for instance,

$$\begin{aligned} \Big [\ \cdot \ \Big ]\ \cdot \ B^{\rightarrow 1}= \Big [ \cdots A^0 \langle 0| \Big ] \Big (B^1 |1\rangle + B^2 |2\rangle +\cdots \Big )\equiv 0 \end{aligned}$$
(5.6)

because the basis \((|j\rangle )_{j\ge 0}\) is orthonormal.

Let \(\Sigma _0((t,x),(t',x'))\) be the average with respect to the measure in \(\eta \) of the sum of all contributions like the one in \(\left[ \ \cdot \ \right] \) in (5.4); the kernel \(\Sigma _0((t,x),(t',x'))\) must be seen as a deterministic insertion on the string between (tx) and \((t',x')\). For reasons explained in C. below, we symmetrize the kernel \(\Sigma _0\) by letting \(\Sigma _0((t',x'),(t,x)):=\Sigma _0((t,x),(t',x'))\) if \(t'<t\). We split the discussion into a number of steps.

A A first step consists in displacing the final external leg \(A^{\rightarrow 1}((t',x'),\cdot )\) to the location (tx) of the initial external leg \(B^{\rightarrow 1}(\cdot ,(t,x))\) (or conversely, see below). Namely,

$$\begin{aligned}&B^{\rightarrow 1}(\cdot ,(t,x))\Sigma _0((t,x),(t',x'))A^{\rightarrow 1}((t',x'),\cdot ) \nonumber \\&\quad = B^{\rightarrow 1}(\cdot ,(t,x)) A^{\rightarrow 1}((t,x),\cdot )\Sigma _0((t,x),(t',x')) \nonumber \\&\qquad +B^{\rightarrow 1}(\cdot ,(t,x))\Sigma _0((t,x),(t',x'))\ \big [ A^{\rightarrow 1}((t',x'),\cdot )-A^{\rightarrow 1}((t,x),\cdot )\big ]. \end{aligned}$$
(5.7)

Then we Taylor expand \( A^{\rightarrow 1}((t',x'),.)-A^{\rightarrow 1}((t,x),.) \) to parabolic order three:

$$\begin{aligned}&A^{\rightarrow 1}((t',x'),.)-A^{\rightarrow 1}((t,x),.) \nonumber \\&\quad =\Big ((t'-t)\partial _{t}+(x'-x)\cdot \nabla _{x} +{1\over 2}\sum _{i,j}(x'-x)_{i} (x'-x)_{j}\partial _{x_i}\partial _{x_j} \Big )A^{\rightarrow 1}((t,x),.) \nonumber \\&\qquad +\int _{0}^{1} du\, \frac{(1-u)^{2}}{2} \frac{d^{3}}{du^{3}} \big \{ A^{\rightarrow 1}(((1-u^2)t+u^2 t',(1-u)x+ux'),.) \big \} \end{aligned}$$
(5.8)

See Fig. 2 for an illustration.

Fig. 2 Displacement of external legs
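The interpolation path in (5.8) is quadratic in time and linear in space, so that the third \(u\)-derivative collects exactly the terms of parabolic order \(\ge 3\). As a sanity check, the following sketch verifies the Taylor identity with integral remainder numerically on a smooth stand-in for \(A^{\rightarrow 1}\) (the function `f` below is an arbitrary test function, not the actual propagator):

```python
import math

# Numerical check (sketch) of the parabolic Taylor formula (5.8), in one
# space dimension.  Time is interpolated quadratically in u, space linearly.
def f(t, x):   return math.exp(-t) * math.sin(x)     # toy stand-in for A^{->1}
def ft(t, x):  return -math.exp(-t) * math.sin(x)    # d/dt f
def fx(t, x):  return  math.exp(-t) * math.cos(x)    # d/dx f
def fxx(t, x): return -math.exp(-t) * math.sin(x)    # d^2/dx^2 f

t, x, tp, xp = 0.3, 0.5, 0.7, 0.9                    # (t,x) and (t',x')

def g(u):      # f along the interpolation path of (5.8)
    return f(t + u * u * (tp - t), x + u * (xp - x))

def g3(u, h=1e-3):   # third u-derivative by central finite differences
    return (g(u + 2*h) - 2*g(u + h) + 2*g(u - h) - g(u - 2*h)) / (2 * h**3)

# first line of (5.8): (t'-t) d_t + (x'-x) d_x + (1/2)(x'-x)^2 d_x^2 at (t,x)
main = (tp - t) * ft(t, x) + (xp - x) * fx(t, x) + 0.5 * (xp - x)**2 * fxx(t, x)

# integral remainder int_0^1 (1-u)^2/2 g'''(u) du, midpoint rule
N = 2000
rem = sum((1 - (k + 0.5)/N)**2 / 2 * g3((k + 0.5)/N) for k in range(N)) / N

err = abs((f(tp, xp) - f(t, x)) - (main + rem))
print(err)   # small: the identity holds up to discretization error
```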

The integral remainder term in (5.8) is a sum of derivatives of parabolic order \(\ge 3\) (more precisely, ranging in \(\{3,\ldots ,6\}\)),

$$\begin{aligned}&\left| \frac{d^{3}}{du^{3}} A^{\rightarrow 1}(\cdot ,\cdot )\right| \lesssim |x-x'|^3 |\nabla ^3 A^{\rightarrow 1}(\cdot ,\cdot )|+ (t-t') |x-x'|\ |\partial _{t}\nabla A^{\rightarrow 1}(\cdot ,\cdot )| \nonumber \\&\quad + (t-t')^{2} |\partial _{t}^{2} A^{\rightarrow 1}(\cdot ,\cdot )|+ (t-t')^{3} \, | \partial _t^3 A^{\rightarrow 1}(\cdot ,\cdot )| + (t-t')^2 |x-x'| \ |\partial ^2_t\nabla A^{\rightarrow 1} (\cdot ,\cdot )| \nonumber \\&\quad +(t-t') |x-x'|^{2} |\partial _{t}\nabla ^{2} A^{\rightarrow 1}(\cdot ,\cdot )| \end{aligned}$$
(5.9)

The main terms in (5.9) are those on the first line; splitting \(A^{\rightarrow 1}\) into its constituent scales \(\sum _{j'\ge 1} A^{j'}\langle j'|\), we know from Sect. 1 that \(\nabla ^3 A^{j'}, \partial _t\nabla A^{j'}\sim 2^{-3j'/2} A^{j'}\), whereas \(|x-x'|^n |t-t'|^m \Sigma _0((t,x),(t',x'))=O(\Sigma _0((t,x),(t',x')))\) for all \(n,m\ge 0\) (due to the exponential decrease in \(d((t,x),(t',x'))\), see below), altogether a gain of \(O(2^{-3j'/2})\).
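The gain \(\nabla ^3 A^{j'}\sim 2^{-3j'/2}A^{j'}\) is the usual parabolic scaling: each spatial derivative of a kernel living at time scale \(2^{j'}\) costs a factor \(2^{-j'/2}\). This can be illustrated on a toy heat kernel (an arbitrary stand-in, not the actual propagator \(A^{j'}\)):

```python
import math

# Toy illustration: for a heat kernel at time scale t ~ 2^j, each spatial
# derivative costs t^{-1/2}, so the third derivative is down by t^{-3/2} = 2^{-3j/2}.
def p(t, x):
    return math.exp(-x * x / (4 * t)) / math.sqrt(4 * math.pi * t)

def d3p(t, x, h=1e-3):   # third x-derivative by central finite differences
    return (p(t, x + 2*h) - 2*p(t, x + h) + 2*p(t, x - h) - p(t, x - 2*h)) / (2 * h**3)

ratios = []
for j in [2, 4, 6]:
    t = 2.0 ** j
    # |d^3 p| / p at one kernel-width from the origin, rescaled by t^{3/2}
    ratios.append(abs(d3p(t, math.sqrt(t))) / p(t, math.sqrt(t)) * t ** 1.5)

print(ratios)   # roughly constant in j, as the 2^{-3j/2} scaling predicts
```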

The other terms, namely the first term in the r.-h.s. of (5.7) and the first line in the r.-h.s. of (5.8) (more precisely, only the second-order, traced term \(\frac{1}{2d} |x'-x|^2 \Delta _x A^{\rightarrow 1}((t,x),\cdot )\), since the other ones vanish by symmetry), are dealt with below; they contribute to the renormalization of the two-point function.

In order to get the smaller of the two factors, we displace instead the initial external leg \(B^{\cdot }(\cdot ,(t,x))\) to the location of the final external leg \(A^{\cdot }((t',x'),\cdot )\) if the B-leg is of strictly lower momentum than the A-leg, i.e. if its scale index satisfies \(j> j'\), yielding a small factor \(O(2^{-3j/2})\).

Summarizing: were it not for (i) the boundary conditions at initial time \(t_{init}\) and final time 0, and (ii) the non-overlapping condition between the scale 0 boxes chosen by the horizontal cluster expansion, the contribution would be (taking into account the symmetrization of the kernel \(\Sigma _0\), and considering—as an intermediate step only—the natural extension of the model to negative times)

$$\begin{aligned}&{1\over 2}\sum _{1\le j\le j'} \Big ( \Big \{\int _{-\infty }^{+\infty } dt'\, dx'\, \Sigma _0((t,x),(t',x')) \Big \}\, B^{j}(\cdot ,(t,x))A^{j'}((t,x),\cdot ) \nonumber \\&\quad + \Big \{\frac{1}{2d} \int _{-\infty }^{+\infty } dt'\, dx'\, |x-x'|^2 \Sigma _0((t,x),(t',x')) \Big \} \, B^{j}(\cdot ,(t,x)) \Delta A^{j'}((t,x),\cdot ) \, \Big ), \nonumber \\ \end{aligned}$$
(5.10)

plus the same expression up to the exchange of A, B and \((t,x),(t',x')\) summed over \(j>j'\ge 1\). Choosing \(v^{(0)}\) such that \(\int dt'\, dx'\, \Sigma _0((t,x),(t',x'))=0\), and letting

$$\begin{aligned} \delta \nu :=\frac{1}{4d} \int _{-\infty }^{+\infty } dt'\, dx'\, |x-x'|^2\, \Sigma _0((t,x),(t',x')), \end{aligned}$$
(5.11)

this is equivalent to the addition to the vertex \(V_{\eta }\) of \(\int dt\, dx\, {1\over 2}\frac{d^2}{d(\tau ^0_{t,x})^2} B^{\rightarrow 1}(\cdot ,(t,x)) ((\tau ^0_{t,x})^2\delta \nu \Delta ) A^{\rightarrow 1}((t,x),\cdot )\), compensating the term proportional to \((\tau ^0_{t,x})^2\) in \(\delta V_{\eta }\), see (4.7).

Let us consider objections (i) and (ii) separately. First, because of the boundary conditions, the integral \({1\over 2}\int _{-\infty }^{+\infty } dt'\, dx' \, (\cdots )=\int _{-\infty }^t dt'\, (\cdots ) \) in (4.9) must be replaced by \(\int _0^t dt'\, (\cdots )\). Similarly, if \(j>j'\), the integral \({1\over 2}\int _{-\infty }^{+\infty } dt\, dx\, (\cdots )=\int _{t'}^{+\infty } dt\, (\cdots )\) must be replaced by \(\int _{t'}^{t_{init}} \, (\cdots )\). Differences \(\big (\int _{-\infty }^t - \int _0^t\big ) dt'\, (\cdots )=\int _{-\infty }^0 dt'\, (\cdots )\), resp. \(\big (\int _{t'}^{+\infty }-\int _{t'}^{t_{init}}\big ) dt\, (\cdots )=\int _{t_{init}}^{+\infty } dt\, (\cdots )\), are shown in D. to be exponentially small in the distance to the boundary, \(t-0\), resp. \(t_{init}-t'\). Thus one may equivalently define \(\delta \nu \) by an integral over positive times, which is more natural given that we are considering an initial-value problem,

$$\begin{aligned} \delta \nu :=\frac{1}{2d} \lim _{t\rightarrow +\infty } \int _{0}^t dt'\, dx'\, |x-x'|^2\, \Sigma _0((t,x),(t',x')). \end{aligned}$$
(5.12)

Next, due to the non-overlapping condition, the factorization of \(\Sigma _0\) fails. The solution to this well-known problem is through a Mayer expansion.

B (Mayer expansion) Namely, we shall now apply the restricted cluster expansion, see Proposition 7.2, to the result of our expansion. Cluster expansions have produced a scale 0 forest \({\mathbb {F}}^0\) of boxes, whose tree components, together with their external structure made up of low-momentum A’s and B’s, are called polymers, and denoted by \(\mathbb {P}_1,\ldots ,\mathbb {P}_N\). The objects are now scale 0 polymers \(\mathbb {P}\) in \(\mathcal{O}=\{\mathbb {P}_1,\ldots ,\mathbb {P}_N\}\); a link \(\ell \in L(\mathcal{O})\) is a pair of polymers \(\{\mathbb {P}_n,\mathbb {P}_{n'}\}\), \(n\not =n'\). Objects of type 2 are polymers with \(>2\) external legs, whose non-overlap conditions we shall not remove at this stage, because these polymers are already convergent, hence do not need to be renormalized. Objects of type 1 are then polymers with two external legs; note that, due to the displacement of external legs operated in A., the two external legs are located in the same scale 0 box.

Implicit in the outcome of the cluster expansions is the non-overlapping condition,

$$\begin{aligned} {\mathrm {NonOverlap}}(\mathbb {P}_1,\ldots ,\mathbb {P}_N):= & {} \prod _{(\mathbb {P}_n,\mathbb {P}_{n'}) } \mathbf{1}_{\mathbb {P}_n,\mathbb {P}_{n'}\ {\mathrm {non}}-{\mathrm {overlapping}}} \nonumber \\= & {} \prod _{(\mathbb {P}_n,\mathbb {P}_{n'}) } \prod _{\Delta \in \varvec{\Delta }(\mathbb {P}_n),\Delta '\in \varvec{\Delta }(\mathbb {P}_{n'})} \left( 1 + \left( \mathbf{1}_{\Delta \not =\Delta '}-1 \right) \right) \nonumber \\ \end{aligned}$$
(5.13)

stating that a box \(\Delta \) belonging to the image of \(\mathbb {P}_n\) and a box \(\Delta '\) belonging to the image of \(\mathbb {P}_{n'}\) are necessarily distinct. As in the proof of the BKAR formula (see Proposition 7.1), we choose some polymer, say \(\mathbb {P}_1\), with 2 external legs, and weaken the non-overlap condition between \(\mathbb {P}_1\) and all the other polymers by introducing a parameter \(S_1\),

$$\begin{aligned}&{\mathrm {NonOverlap}}(\mathbb {P}_1,\ldots ,\mathbb {P}_N) (S_1)= \prod _{\{\mathbb {P}_n,\mathbb {P}_{n'}\}_{n,n'\not =1} } \prod _{\Delta \in \mathbf{\Delta }(\mathbb {P}_n),\Delta '\in \mathbf{\Delta }(\mathbb {P}_{n'})} \mathbf{1}_{\Delta \not =\Delta '} \ \cdot \nonumber \\&\quad \prod _{n'\not =1}\ \prod _{ (\Delta ,\Delta ')\in \varvec{\Delta }(\mathbb {P}_1)\times \varvec{\Delta }(\mathbb {P}_{n'})\setminus \varvec{\Delta }_{ext}(\mathbb {P}_1)\times \varvec{\Delta }_{ext}(\mathbb {P}_{n'}) } \left( 1 + S_1 \left( \mathbf{1}_{\Delta \not =\Delta '}-1 \right) \right) , \end{aligned}$$
(5.14)

where \(\varvec{\Delta }_{ext}(\mathbb {P})\subset \varvec{\Delta }(\mathbb {P})\) is the subset of boxes \(\Delta \) with external legs, i.e. those that have been differentiated with respect to \(\tau _{\Delta }\), and Taylor expand in \(S_1\) to order 1; each factor

$$\begin{aligned} \mathbf{1}_{\Delta \not =\Delta '}-1 =-\mathbf{1}_{\Delta =\Delta '} \end{aligned}$$
(5.15)

produced by differentiation is a Mayer link between \(\mathbb {P}_1\) and some \(\mathbb {P}_{n'}, n'\not =1\), or more precisely, some box \(\Delta \in \varvec{\Delta }(\mathbb {P}_1)\) and some box \(\Delta '\in \varvec{\Delta }(\mathbb {P}_{n'})\), implying an explicit overlap between \(\mathbb {P}_1\) and \(\mathbb {P}_{n'}\), and adding a link to the forest \({\mathbb {F}}^0\). Iterating the procedure and applying Proposition 7.2 to the weakened non-overlap condition

$$\begin{aligned}&{\mathrm {NonOverlap}}(\mathbb {P}_1,\ldots ,\mathbb {P}_N) (\varvec{S}): = \prod _{\{\mathbb {P}_n,\mathbb {P}_{n'}\} } \prod _{\Delta \in \mathbf{\Delta }_{ext}(\mathbb {P}_n),\Delta '\in \mathbf{\Delta }_{ext}(\mathbb {P}_{n'})} \mathbf{1}_{\Delta \not =\Delta '} \ \cdot \nonumber \\&\quad \prod _{(\Delta ,\Delta ')\in \varvec{\Delta }(\mathbb {P}_n)\times \varvec{\Delta }(\mathbb {P}_{n'})\setminus \mathbf{\Delta }_{ext}(\mathbb {P}_n)\times \mathbf{\Delta }_{ext}(\mathbb {P}_{n'})} \left( 1 + S_{\mathbb {P}_n ,\mathbb {P}_{n'}} \left( \mathbf{1}_{\Delta \not =\Delta '}-1 \right) \right) , \end{aligned}$$
(5.16)

one obtains the sum

$$\begin{aligned}&\sum _{{\mathbb {G}}^0\in \mathcal{F}_{res}(\mathcal{O})} \Big ( \prod _{\ell \in L({\mathbb {G}}^0)} \int _0^1 dW_{\ell }\Big )\ \ {\mathrm {NonOverlap}}(\varvec{S}(\varvec{W})), \nonumber \\&\quad {\mathrm {NonOverlap}}(\varvec{S}(\varvec{W})):= \Big [ \Big (\prod _{\ell \in L({\mathbb {G}}^0)} \frac{\partial }{\partial S_{\ell }} \Big ) {\mathrm { NonOverlap}}(\mathbb {P}_1,\ldots ,\mathbb {P}_N)\Big ] (\varvec{S}(\varvec{W})) \nonumber \\ \end{aligned}$$
(5.17)

Links \(\ell =\ell _{\mathbb {P}_n,\mathbb {P}_{n'}}\in L({\mathbb {G}}^0)\) are obtained as links between polymers; however, the corresponding differentiation \(\frac{\partial }{\partial S_{\ell }}\) is immediately rewritten as a sum over pairs of boxes \((\Delta ,\Delta ')\in \varvec{\Delta }(\mathbb {P}_n)\times \varvec{\Delta }(\mathbb {P}_{n'})\). Thus we see Mayer links as links between boxes. As such they add to the set of links \(L({\mathbb {F}}^0)\) produced by the horizontal cluster expansion, producing a forest \(\bar{{\mathbb {F}}}^0\) with the same vertices as \({\mathbb {F}}^0\) but a larger set of links \(L(\bar{{\mathbb {F}}}^0)\equiv L({\mathbb {F}}^0)\uplus L_{{\mathrm {Mayer}}}\), where \(L_{{\mathrm {Mayer}}}\) (in bijection with \(L({\mathbb {G}}^0)\)) is the set of Mayer links. Since a forest is characterized by its set of links, we rewrite in practice (5.17) as

$$\begin{aligned}&\sum _{L_{{\mathrm {Mayer}}}} \Big ( \prod _{\ell \in L_{{\mathrm {Mayer}}}} \int _0^1 dW_{\ell }\Big )\ \ {\mathrm {Mayer}}(\varvec{S}(\varvec{W})), \nonumber \\&\quad {\mathrm {Mayer}}(\varvec{S}(\varvec{W})):= \Big [ \Big (\prod _{\ell \in L_{{\mathrm {Mayer}}}} \frac{\partial }{\partial S_{\ell }} \Big ) {\mathrm { NonOverlap}}(\mathbb {P}_1,\ldots ,\mathbb {P}_N)\Big ] (\varvec{S}(\varvec{W})) . \end{aligned}$$
(5.18)

The number of external legs of a set of polymers connected by Mayer links is the sum of the number of external legs of each of the polymers. In particular, any Mayer connected component containing at least two polymers has \(\ge 4\) external legs; it has become convergent.
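The algebra behind this Mayer expansion is the identity \(\prod _{\ell }(1+f_{\ell })=\sum _{G}\prod _{\ell \in G}f_{\ell }\), where \(f=\mathbf{1}_{\Delta \not =\Delta '}-1\) is the Mayer link factor of (5.15) and the sum runs over all graphs G on the polymers. A toy check (the polymers below are arbitrary sets of box labels):

```python
from itertools import combinations

def f(P, n, m):          # Mayer link factor: 1_{disjoint} - 1 = -1_{overlap}
    return 0 if P[n].isdisjoint(P[m]) else -1

def nonoverlap(P):       # hard-core constraint prod_{n<m} 1_{P_n, P_m disjoint}
    out = 1
    for n, m in combinations(range(len(P)), 2):
        out *= 1 + f(P, n, m)
    return out

def mayer_sum(P):        # sum over all graphs G of prod_{links in G} f
    pairs = list(combinations(range(len(P)), 2))
    total = 0
    for r in range(len(pairs) + 1):
        for G in combinations(pairs, r):
            prod = 1
            for n, m in G:
                prod *= f(P, n, m)
            total += prod
    return total

for P in ([{0, 1}, {1, 2}, {3}, {2, 4}],      # overlapping polymers
          [{0}, {1, 2}, {3}, {4, 5}]):        # pairwise disjoint polymers
    assert nonoverlap(P) == mayer_sum(P)      # the two sides agree exactly
print("Mayer identity verified")
```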

Let us now make a few necessary points precise. Since the Mayer expansion is really applied to the non-overlap function NonOverlap and not to the outcome of the expansion, one must still extend the outcome of the expansion to the case when the \(\mathbb {P}_n\), \(n=1,\ldots ,N\) have some overlap. The natural way to do this is to assume that the random variables \((\eta \big |_{\mathbb {P}_n})_{n=1,\ldots ,N}\) remain independent even when the polymers overlap. This may be understood in the following way. Choose a different color for each polymer \(\mathbb {P}_n=\mathbb {P}_1,\ldots ,\mathbb {P}_N\), and paint with that color all boxes \(\Delta \in \mathbb {P}_n\cap \mathbb {D}^0\). If \(\Delta \in \mathbf{\Delta }_{ext}(\mathbb {P}_n)\), then its external links to the \(A^{\rightarrow 1}, B^{\rightarrow 1}\) below it are left in black. The previous discussion implies that boxes with different colors may overlap; on the other hand, external inclusion links may not, so that low-momentum fields \(B^{\rightarrow 1}((\cdot ),(t,x)), A^{\rightarrow 1}((t,x),\cdot )\), \((t,x)\in \Delta ^{0}\) with \(\Delta ^{0}\in \mathbf{\Delta }_{ext}(\mathbb {P}_n)\), do not overlap and may be left in black.

Hence one must see \(\eta \) as living on the product set \(\mathbb {D}^{0}\times \{ {\mathrm {colors}}\}\), so that copies of \(\eta \) with different colors are independent of each other. This defines a new resolvent \(\tilde{R}_{\eta }^{(0)}(\tau ^0=0)\), extended by colors and restricted to the zeroth scale, associated to an extended field \(\eta :\mathbb {R}_+\times \mathbb {R}^d\times \{{\mathrm {colors}}\}\rightarrow \mathbb {R}\), and Mayer-extended polymers. By abuse of notation, we shall skip the tilde in the sequel, and always implicitly extend the fields and the measures of scale 0 by taking colors into account.

C (counterterms) We now define \(\Sigma ((t,x),(t',x'))\) to be the Mayerization of the sum of all contributions like the one in \(\left[ \ \cdot \ \right] \) in (5.4), in which the two external legs have been displaced into the same box as in A., so that there is no non-overlapping restriction on the support but for the box containing (tx). Note that Mayer links between polymers with two external legs produce Mayer polymers with \(\ge 4\) external legs, which are therefore convergent (see §4.2).

Then (provided that the limit does exist)

$$\begin{aligned} g^{(0)}v^{(0)}:= & {} \lim _{T\rightarrow +\infty } \ \int _0^T dt' \, \int dx'\, \Sigma ((T,x),(t',x')) \nonumber \\= & {} \lim _{T\rightarrow +\infty } \int ^T_{t'} dt\, \int dx\, \Sigma ((t,x),(t',x')). \end{aligned}$$
(5.19)

The result does not depend on x. Furthermore, as shown below, letting

$$\begin{aligned} g^{(0)}v^{(0)}(T):= & {} \int _0^T dt' \, \int dx'\, \Sigma ((T,x),(t',x'))\nonumber \\= & {} \int _{t'}^{T+t'} dt \, \int dx\, \Sigma ((t,x),(t',x')), \end{aligned}$$
(5.20)

with \(T=t\), resp. \(t_{init}-t'\), the boundary correction \(\delta v^{(0)}(T)\) to \(v^{(0)}\) decreases exponentially with T, namely,

$$\begin{aligned} \delta v^{(0)}(T):= v^{(0)}-v^{(0)}(T)=O((Cg^{(0)})^{cT})\rightarrow _{T\rightarrow +\infty } 0 \end{aligned}$$
(5.21)

for some constants \(C,c>0\).

Consider once again the first term in the r.-h.s. of (5.7) and the first line in the r.-h.s. of (5.8), but this time after the Mayer expansion; summing, we get if \(j'\ge j\) (with a factor \({1\over 2}\) due to the symmetrization of \(\Sigma _0\))

$$\begin{aligned}&{1\over 2}\int dt'\, dx'\, B^{j}(\cdot , (t,x)) \Sigma ((t,x),(t',x')) \nonumber \\&\quad \Big (1\ +\ (t'-t)\partial _{t}+(x'-x)\cdot \nabla _{x} +{1\over 2}\sum _{k,l}(x'-x)_{k} (x'-x)_{l}\partial _{x_k}\partial _{x_l} \Big ) A^{j'}((t,x),\cdot )\nonumber \\ \end{aligned}$$
(5.22)

The first term in (5.22) vanishes for an adequate choice of \(v^{(0)}\), as shown below. Then the second (thanks to the symmetrization) and third terms vanish by parity, and the fourth one vanishes for \(k \ne l \) by isotropy. The remaining term in (5.22) may be absorbed into a redefinition of \(\nu \). Namely, we define for any \(i=1,\ldots ,d\),

$$\begin{aligned} \nu _{eff}-\nu ^{(0)}:=\frac{1}{4}\int dt' \, dx' (x'_i-x_i)^2 \Sigma ((t,x),(t',x')). \end{aligned}$$
(5.23)

Thus

$$\begin{aligned}&{1\over 2}\int dt'\, dx'\, B^{j}(\cdot , (t,x)) \Sigma ((t,x),(t',x')) A^{j'}((t',x'),\cdot ) = v^{(0)} B^{j}(\cdot ,(t,x)) A^{j'}((t,x),\cdot ) \nonumber \\&\quad + (\nu _{eff}-\nu ^{(0)}) B^{j}(\cdot ,(t,x)) \Delta ^{\rightarrow 0}_x A^{j'}((t,x),\cdot ) + \text{ remainders } \end{aligned}$$
(5.24)

Remainders include the previously discussed integral remainder term in (5.8), and the cut-off difference

$$\begin{aligned} (\nu _{eff}-\nu ^{(0)}) B^{j'\rightarrow }(\cdot ,(t,x)) (\Delta _x-\Delta ^{\rightarrow 0}_x)A^{j'}((t,x),\cdot ), \end{aligned}$$
(5.25)

which is bounded in absolute value by \(O(|\nu _{eff}-\nu ^{(0)}|\, B^{j'\rightarrow }(\cdot ,(t,x)))\) times

$$\begin{aligned} \int dx'\, \bar{\chi }^0(x') |\nabla ^2 A^{j'}((t,x),\cdot )-\nabla ^2 A^{j'}((t,x+x'),\cdot )| \sim 2^{-3j'/2} A^{j'}((t,x),\cdot ), \end{aligned}$$
(5.26)

of the same order as the integral remainder term.

The leading-order contribution in the coupling constant to \(\nu _{eff}-\nu ^{(0)}\) is obtained (as seen from (5.23), letting \((t',x')=(0,0)\) and integrating in (t,x) instead) by contracting the \(\eta \)’s in the expression

$$\begin{aligned} {1\over 2}(g^{(0)})^2 \int _0^{\infty } dt \, \int dx\, x_1^2\, \eta (t,x)(A^0 B^0)((t,x),(0,0)) \eta (0,0). \end{aligned}$$
(5.27)

This is the \(n=2\) term in (5.4) with \(R_{\eta }^{(0)}(\cdot ,\cdot )\) substituted by its leading order term \(\delta (\cdot ,\cdot )\). By (1.5), one gets

$$\begin{aligned} \nu _{eff}-\nu ^{(0)}={1\over 2}\frac{\lambda ^2 D^{(0)}}{(\nu ^{(0)})^2} \int _0^{\infty } dt\, \int dx\, x_1^2 (\omega *\omega )(t,x) (A^0 B^0)(t,x). \end{aligned}$$
(5.28)

The simplest contributions to \(v^{(0)}\) are obtained by taking \(n=-1,0\) in (5.4) and replacing

$$\begin{aligned} R_{\eta }^{(0)}(\tau ^0=0)(\varvec{s}(\varvec{w}))=\frac{1}{1-\int dt\, dx\, V^{(0)}(\tau ^0=0)(\varvec{s}(\varvec{w}))(t,x)} \end{aligned}$$
(5.29)

by its lowest-order term 1. Demanding that the “\(n=-1\)” term compensates exactly the sum for \(n\ge 0\), we get an implicit equation for \(v^{(0)}\),

$$\begin{aligned} g^{(0)} v^{(0)}= & {} (g^{(0)})^2 \int dt' \, dx'\, G^0((t,x),(t',x'))\langle (\eta (t,x)-v^{(0)})(\eta (t',x')-v^{(0)})\rangle \nonumber \\&+ O((g^{(0)}+g^{(0)}v^{(0)})^2 g^{(0)}v^{(0)})+ O((g^{(0)}+g^{(0)}v^{(0)})^4) \end{aligned}$$
(5.30)

The implicit function theorem yields a unique solution

$$\begin{aligned} v^{(0)}=g^{(0)} \int dt'\, dx'\, G^0((t,x),(t',x')) \, \langle \eta (t,x) \eta (t',x')\rangle + O ((g^{(0)})^3), \end{aligned}$$
(5.31)

provided one can show that the series in n converges, and that subleading terms are indeed bounded as suggested in (5.30) and (5.31). This is our next task.
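The structure of (5.30)–(5.31) is that of a fixed-point equation: schematically \(v=ag+O(g^2v)+O(gv^2)\), whose unique small solution, by the implicit function theorem, is \(v=ag+O(g^3)\). A toy iteration (the correction terms below are made-up stand-ins of the schematic size):

```python
# Toy fixed-point sketch for the implicit equation (5.30): v = a*g + corrections,
# with made-up corrections of the schematic sizes O(g^2 v) and O(g v^2).
def solve(g, a=1.0, tol=1e-15, max_iter=200):
    v = 0.0
    for _ in range(max_iter):
        v_new = a * g + g * g * v + g * v * v
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v

g = 1e-3
v = solve(g)
print(v, abs(v - g))   # the deviation from the leading term a*g is O(g^3)
```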

D (bounds) We now proceed to bound \(v^{(0)}\) and \(\nu _{eff}-\nu ^{(0)}\).

Let us first bound scale 0 resolvents. They are of the form (5.29), where

$$\begin{aligned} V_{\eta }^{(0)}(\tau ^0=0)(\varvec{s}(\varvec{w}))(t,x):=B^0(\varvec{s}(\varvec{w}))(\cdot ,(t,x))\tilde{\eta }(t,x) A^0(\varvec{s}(\varvec{w}))((t,x),\cdot ) \end{aligned}$$
(5.32)

where \(\tilde{\eta }(t,x):=g^{(0)}(\eta (t,x)-v^{(0)})\). Now, as explained in § 4.2, only the \(\tilde{\eta }\)’s belonging to the image \(|{\mathbb {T}}|\) of the connected component \({\mathbb {T}}\) (i.e. polymer) of \({\mathbb {F}}^0\) containing (tx) contribute. Denote then \(\tilde{\eta }_{|{\mathbb {T}}|}(t,x):=\mathbf{1}_{(t,x)\in |{\mathbb {T}}|}\tilde{\eta }(t,x)\) the restriction of \(\tilde{\eta }\) to \(|{\mathbb {T}}|\). Expanding each \(R^{(0)}_{\eta }(\tau ^0=0)(\varvec{s}(\varvec{w}))((t_i,x_i),(t_{i+1},x_{i+1}))\) yields \(\delta ((t_i,x_i),(t_{i+1},x_{i+1}))+\Big (R^{(0)}_{\eta }(\tau ^0=0)(\varvec{s}(\varvec{w}))((t_i,x_i),(t_{i+1},x_{i+1})) - \delta ((t_i,x_i),(t_{i+1},x_{i+1})) \Big )\), with [expanding (5.29)]

$$\begin{aligned}&\Big | R^{(0)}_{\eta }(\tau ^0=0)(\varvec{s}(\varvec{w}))((t_i,x_i),(t_{i+1},x_{i+1})) - \delta ((t_i,x_i),(t_{i+1},x_{i+1})) \Big | \nonumber \\&\quad = \Big | B^0(\varvec{s}(\varvec{w}))((t_i,x_i),\cdot ) |0\rangle \, \tilde{\eta }(\cdot ) A^0(\varvec{s}(\varvec{w}))(\cdot ,(t_{i+1},x_{i+1})) \langle 0|\, \nonumber \\&\qquad + B^0(\varvec{s}(\varvec{w}))((t_i,x_i),\cdot ) |0\rangle \ \tilde{\eta }(\cdot ) A^0(\varvec{s}(\varvec{w}))(\cdot ,\cdot ) \langle 0|\, B^0(\varvec{s}(\varvec{w}))(\cdot ,\cdot ) |0\rangle \ \nonumber \\&\qquad \tilde{\eta }(\cdot ) A^0(\varvec{s}(\varvec{w}))(\cdot ,(t_{i+1},x_{i+1})) \langle 0|\, +\cdots \Big | \nonumber \\&\quad \le B^0((t_i,x_i),\cdot ) |0\rangle \ |\tilde{\eta }_{|{\mathbb {T}}|}(\cdot )|\, A^0(\cdot ,(t_{i+1},x_{i+1})) \langle 0| \nonumber \\&\qquad + B^0((t_i,x_i),(t'_i,x'_i)) |0\rangle \ |\tilde{\eta }_{|{\mathbb {T}}|}(t'_i,x'_i)| \ \cdot \ G_{|\eta _{|{\mathbb {T}}|}|}((t'_i,x'_i),(t'_{i+1},x'_{i+1})) \ \cdot \ \nonumber \\&\qquad \cdot \ |\tilde{\eta }_{|{\mathbb {T}}|}(t'_{i+1},x'_{i+1})| \, A^0((t'_{i+1},x'_{i+1}),(t_{i+1},x_{i+1})) \langle 0|. \nonumber \\ \end{aligned}$$
(5.33)

Remark that (as follows from causality and from the fact that boxes of \(\mathcal S\) are connected through \(A^0\)s and \(B^0\)s) \(t'_i-t'_{i+1}\le t_i-t_{i+1}\le 2\).

Thus (letting \(|\eta _{\Delta }|:=\sup _{(t,x)\in \Delta } |\eta (t,x)|\) for \(\Delta \in \mathbb {D}^0\))

$$\begin{aligned}&\left\langle G_{|\eta _{|{\mathbb {T}}|}|}((t'_i,x'_i),(t'_{i+1},x'_{i+1})\right\rangle \le G^{2\rightarrow }((t'_i,x'_i),(t'_{i+1},x'_{i+1})) \max \Big \langle \prod _{\Delta \in {\mathbb {T}}} e^{\theta _{\Delta }g^{(0)}|\eta _{\Delta }|} \Big \rangle \nonumber \\&\quad \le G^{2\rightarrow }((t'_i,x'_i),(t'_{i+1},x'_{i+1})) \max e^{c(g^{(0)})^2 \sum _{\Delta }\theta _{\Delta }^2} \le G^{2\rightarrow }((t'_i,x'_i),(t'_{i+1},x'_{i+1})) e^{c'(g^{(0)})^2} \nonumber \\ \end{aligned}$$
(5.34)

if the maximum ranges over all possible choices of occupation times \(\theta _{\Delta }:=|\{0\le s\le t'_i-t'_{i+1}\ |\ B_s\in \Delta \}|\) for the Brownian bridge from \((0,x'_i)\) to \((t'_i-t'_{i+1},x'_{i+1})\), since \(\sum _{\Delta } \theta _{\Delta }^2\lesssim \sum _{\Delta } \theta _{\Delta }=t'_i-t'_{i+1}\lesssim 1\). The bound for \(\langle \prod _{\Delta \in {\mathbb {T}}} e^{\theta _{\Delta }g^{(0)}|\eta _{\Delta }|} \rangle \) is obtained by rewriting the product \(\prod _{\Delta \in {\mathbb {T}}} (\cdots )\) as a finite product, \(\prod _{\varvec{\varepsilon }} \left( \prod _{\Delta \in {\mathbb {T}}_{\varvec{\varepsilon }}} (\cdots ) \right) \), where \(\varvec{\varepsilon }\in \{0,1\}^{d+1}\) and \({\mathbb {T}}_{\varvec{\varepsilon }}:=\{\Delta =[k_0,k_0+1)\times [k_1,k_1+1)\times \cdots \times [k_d,k_d+1)\ |\ k_i-\varepsilon _i\equiv 0 \mod 2, i=0,\ldots ,d\}\), each of these being a product of independent variables, and using Hölder’s inequality, \(\big |\big \langle \prod _{\varvec{\varepsilon }} X_{\varvec{\varepsilon }} \big \rangle \big | \le \prod _{\varvec{\varepsilon }} \big (\big \langle (X_{\varvec{\varepsilon }})^{2^{d+1}} \big \rangle \big )^{2^{-(d+1)}}\).
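The parity-class decomposition used in this Hölder argument can be sketched as follows: grouping unit boxes by the parity of their integer corners yields \(2^{d+1}\) classes, and within one class two distinct corners differ by at least 2 in some coordinate, so the corresponding boxes do not even touch.

```python
from itertools import combinations, product

# Sketch of the parity-class decomposition: unit boxes are indexed by integer
# corners k in Z^{d+1}, grouped into 2^{d+1} classes by k mod 2.  Within one
# class, distinct corners differ by >= 2 in some coordinate, so the boxes
# [k, k+1) are separated and the eta-restrictions on them are independent.
d = 2
corners = list(product(range(4), repeat=d + 1))
classes = {}
for k in corners:
    classes.setdefault(tuple(ki % 2 for ki in k), []).append(k)

assert len(classes) == 2 ** (d + 1)     # one Hoelder factor per class
for cls in classes.values():
    for a, b in combinations(cls, 2):
        assert max(abs(ai - bi) for ai, bi in zip(a, b)) >= 2
print("classes:", len(classes))
```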

However, because the \(G_{|\eta _{|{\mathbb {T}}|}|}((t'_i,x'_i),(t'_{i+1},x'_{i+1}))\), \(i\ge 1\) are not independent in general, one should make the following easy adaptation of the argument around (5.34). Split the total time interval \([t',t]\) in (5.4) into a union \(I_1\cup I_2\cup \cdots \), \(I_1:=[t_{i_1},t_{i_0}]\), \(I_2:=[t_{i_2},t_{i_1}],\ldots \), \(i_0<i_1<i_2<\ldots \), in such a way that \(t_{i_{k-1}}-t_{i_k-1}<1<t_{i_{k-1}}-t_{i_k}\), and bound as in (5.34) the products \(\langle Y_k^2\rangle :=\Big \langle \Big ( \prod _{i=i_{k-1}}^{i_k-2} G_{|\eta _{|{\mathbb {T}}|}|}((t'_i,x'_i),(t'_{i+1},x'_{i+1})) \Big )^2 \Big \rangle \). Since \(t_{i_{k-1}}-t_{i_{k+1}}>2\), the random variables \((Y_{2k+\varepsilon })_k\), \(\varepsilon =0,1\) are independent, hence one concludes as above using \(|\langle \prod _k Y_k\rangle |\le (\prod _k \langle Y_{2k}^2\rangle )^{1/2} (\prod _k \langle Y_{2k+1}^2\rangle )^{1/2}\).

Let us now bound in average the product of the \(\eta \)-dependent terms along \(\mathcal{S}\), namely, the product of the dangling \(\tilde{\eta }\)’s with the \(R^{(0)}_{\eta }\)’s. Compared to (5.34), one must now face the case when \(O(n(\Delta ))\) dangling \(\tilde{\eta }\)’s are produced inside a box \(\Delta \), where \(n(\Delta )\) is the (a priori arbitrarily large) coordination number of \(\Delta \) in \({\mathbb {T}}\). Keeping aside for further use the small factor \(O(g^{(0)})\) per vertex, this leads to replacing \(\langle e^{\theta _{\Delta }g^{(0)}|\eta _{\Delta }|} \rangle \) in (5.34) by \(\langle (|\eta _{\Delta }|+O(1))^{n(\Delta )} e^{\theta _{\Delta }g^{(0)}|\eta _{\Delta }|} \rangle =e^{c'(g^{(0)})^2\theta _{\Delta }^2}\ \cdot \ O(C^{n(\Delta )}\Gamma (n(\Delta )/2))\), with \(C=O(1)\). These factors, traditionally called local factorials, are easily shown to pose no real threat to the convergence of the sum over all polymers. Namely, if \(\Delta \sim _{{\mathbb {T}}} \Delta _1,\ldots ,\Delta _{n(\Delta )-1}\), \(d(\Delta ,\Delta _1)\le \ldots \le d(\Delta ,\Delta _{n(\Delta )-1})\), then (i) \(d(\Delta ,\Delta _n)\gtrsim n^{1/d}\); (ii) for each \(n=1,\ldots ,n(\Delta )-1\), the string \(\mathcal S\) contains a propagator, either \(A^0((t,x),(t_n,x_n))\) or \(B^0((t,x),(t_n,x_n))\), with \((t,x)\in \Delta ,(t_n,x_n)\in \Delta _n\). Rewriting \(A^0((t,x),(t_n,x_n))\) as \(e^{-\frac{c}{2} |x-x_n|^2} \ \cdot \ \tilde{A}^0((t,x),(t_n,x_n))\), where c is as in Lemma 3.4, one sees that \(\tilde{A}^0(\cdot ,\cdot )\) has the same scaling properties as \(A^0(\cdot ,\cdot )\), and has furthermore retained the same Gaussian-type space decay, only with different constants. Putting (i) and (ii) together, one sees easily that

$$\begin{aligned} C^{n(\Delta )} \Gamma (n(\Delta )/2) \ \cdot \ \prod _n e^{-\frac{c}{2}|x-x_n|^2} \lesssim C^{n(\Delta )} \Gamma (n(\Delta )/2) e^{-c'n(\Delta )^{1+2/d}} =O(1). \end{aligned}$$
(5.35)

Thus (at the price of replacing the \(A^0\)’s and \(B^0\)’s along the string by propagators \(\tilde{A}^0\)’s, \(\tilde{B}^0\)’s with equivalent bounds), one has got rid of local factorials.
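The mechanism in (5.35), a local factorial \(C^{n}\Gamma (n/2)\) beaten by the super-exponential decay \(e^{-c'n^{1+2/d}}\), is easily checked numerically (the constants below are arbitrary, taken here with d = 1):

```python
import math

# (5.35) sketch: C^n * Gamma(n/2) grows factorially, but is dominated by
# exp(-c' n^{1+2/d}); their product stays bounded.  Constants are arbitrary.
C, c_prime, d = 2.0, 0.5, 1
vals = [C**n * math.gamma(n / 2) * math.exp(-c_prime * n ** (1 + 2 / d))
        for n in range(1, 40)]
print(max(vals))   # bounded uniformly in n; the terms tend to 0
```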

Finally, \(|v^{(0)}|\), \(|\nu _{eff}-\nu ^{(0)}|\) and more generally

$$\begin{aligned} I_{p,q}:=\int dt'\, \int dx'\, |x-x'|^p |t-t'|^q \Sigma ((t,x),(t',x')), \end{aligned}$$
(5.36)

\(p+q\le 3\), see (5.9), are simply bounded by a sum over the number n of vertices,

$$\begin{aligned}&I_{p,q}\le \sum _{n\ge 1} (Cg^{(0)})^{n+1} \int dt_1\, dx_1\, A^0((t,x),(t_1,x_1)) \nonumber \\&\qquad \qquad \qquad \int dt'_1\, dx'_1\, {\underline{G}}^{2\rightarrow }((t_1,x_1),(t'_1,x'_1)) \int dt_2\, dx_2\, B^0((t'_1,x'_1),(t_2,x_2)) \nonumber \\&\qquad \cdots \int dt_{n}\, dx_{n}\, A^0((t_{n-1},x_{n-1}),(t_{n},x_{n})) \int dt'_{n}\, dx'_{n}\, {\underline{G}}^{2\rightarrow }((t_{n},x_{n}),(t'_{n},x'_{n})) \nonumber \\&\qquad \qquad \qquad \int dt'\, dx'\, F_3((t,x),(t_1,x_1),(t'_1,x'_1),\ldots ,(t'_n,x'_n),(t',x'))\, B^0((t'_{n},x'_{n}),(t',x')). \nonumber \\ \end{aligned}$$
(5.37)

where \({\underline{G}}^{2\rightarrow }(\cdot ,\cdot ):=\delta (\cdot ,\cdot )+G^{2\rightarrow }(\cdot ,\cdot )\), \(C=O(1)\) and (by Hölder’s inequality)

$$\begin{aligned} F_3(\cdot )= & {} O(n^2) \Big [ (1+t-t_1)^3+(1+t_1-t'_1)^3+\cdots +(1+t_n-t'_n)^3+(1+t'_n-t')^3 \nonumber \\&+ (1+|x-x_1|)^3+(1+|x_1-x'_1|)^3+\cdots +(1+|x_n-x'_n|)^3+(1+|x'_n-x'|)^3 \Big ].\nonumber \\ \end{aligned}$$
(5.38)

Integrating space-time variables in chronological order, and using

$$\begin{aligned}&(1+t-t'+|x-x'|)^3 A^0((t,x),(t',x')),(1+t-t'+|x-x'|)^3 G^{2\rightarrow }((t,x),(t',x'))\nonumber \\&\quad \lesssim (t-t')^{-d/2} e^{-c|x-x'|^2/(t-t')} \, \cdot \, \mathbf{1}_{t-t'=O(1)}, \end{aligned}$$
(5.39)

one gets

$$\begin{aligned} g^{(0)}|v^{(0)}|,|\nu _{eff}-\nu ^{(0)}|\le \sum _{n\ge 1} n^3 (C'g^{(0)})^{n+1}=O((g^{(0)})^2) \end{aligned}$$
(5.40)

for another constant \(C'=O(1)\).
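The series in (5.40) is a convergent power series whose leading behavior is quadratic in \(g^{(0)}\); a quick numerical check (with the constant \(C'\) set to 1 for simplicity):

```python
# Check of (5.40): sum_{n>=1} n^3 q^{n+1} behaves like q^2 for small q
# (taking the constant C' = 1, so q = g).
def series(q, N=400):
    return sum(n**3 * q ** (n + 1) for n in range(1, N + 1))

for g in [1e-1, 1e-2, 1e-3]:
    print(g, series(g) / g**2)   # the ratio tends to 1 as g -> 0
```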

Lastly, it is clear from (5.37) that \(v^{(0)}-v^{(0)}(T)\) involves only terms in the sum for which \(n\gtrsim T\); it is thus of order \(O((C'g^{(0)})^{cT})\) for some constant \(c>0\).

5.2 Four-Point Function

Next, we discuss briefly connected four-point functions, which contribute corrections to the noise strength D. The correct way to get an understanding à la Wilson of the induced flow for the parameter D is a priori to sum inductively for each fixed scale j over all diagrams of lowest scale \(\le j\) with four external A- or B-propagators of scale \(>j\). In practice this would require introducing further scale counterterms of the form \(V^{(j)}(t,x)=B^{\rightarrow j}(\cdot ,(t,x)) g^{(j)}\eta _j(t,x) A^{\rightarrow j}((t,x),\cdot )\), where \((\eta _j)_{j\ge 0}\) are independent copies of \(\eta \), with scale \(\tau \)-prefactors, yielding the whole machinery of multi-scale cluster expansions. Fortunately, since the insertion of such diagrams inside the expansion yields power-like vanishing contributions in the large-scale limit, such counterterms need not be introduced by hand to make the expansion convergent. We shall actually compute directly in § 6.4 an effective value \(D_{eff}\equiv D^{(\infty )}\) for D by considering the large-scale limit of the connected two-point function \(\langle h(\cdot )h(\cdot )\rangle \). We shall be content here with a few indications about how four-point functions are produced by the expansion. This subsection may be skipped since it is not used in the proof of our Main Theorem.

In order to obtain a four-point function, one needs two strings. Let us denote by the index \(\alpha \) vertices produced on the first string, and by the index \(\beta \) those produced on the second string. A component connected by the cluster expansion is made up of a piece of string \(\mathcal{S}_{\alpha }\) and a piece of string \(\mathcal{S}_{\beta }\), both of the type (5.4). One thus obtains a diagram with 4 external vertices, \(B^{\rightarrow (j+1)}(\cdot ,(t_{\alpha },x_{\alpha })) \, \cdot \, \Big [ \, \cdot \, \Big ]\, A^{\rightarrow (j+1)}((t'_{\alpha },x'_{\alpha }),\cdot ) \ \cdot \ B^{\rightarrow (j+1)}(\cdot ,(t_{\beta },x_{\beta })) \, \cdot \, \Big [ \, \cdot \, \Big ]\, A^{\rightarrow (j+1)}((t'_{\beta },x'_{\beta }),\cdot )\). To get a connected contribution, we assume that \(\eta (t_{\alpha },x_{\alpha })\) contracts with \(\eta (t_{\beta },x_{\beta })\), and similarly, \(\eta (t'_{\alpha },x'_{\alpha })\) contracts with \(\eta (t'_{\beta },x'_{\beta })\). We thus obtain a very simple “ladder diagram”, whose leading term is

$$\begin{aligned}&B^{\rightarrow (j+1)}(\cdot ,(t_{\alpha },x_{\alpha })) \, \cdot \, \nonumber \\&\quad \cdot \, \Big [ \eta (t_{\alpha },x_{\alpha }) A^{j\rightarrow }((t_{\alpha },x_{\alpha }),\cdot ) B^{j\rightarrow } (\cdot ,(t'_{\alpha },x'_{\alpha })) \eta (t'_{\alpha },x'_{\alpha }) \Big ]\, \cdot \, A^{\rightarrow (j+1)}((t'_{\alpha },x'_{\alpha }),\cdot ) \, \cdot \nonumber \\&\quad \cdot \ B^{\rightarrow (j+1)}(\cdot ,(t_{\beta },x_{\beta })) \ \cdot \nonumber \\&\qquad \cdot \, \big [ \eta (t_{\beta },x_{\beta }) A^{j\rightarrow }((t_{\beta },x_{\beta }),\cdot ) B^{j\rightarrow }(\cdot ,(t'_{\beta },x'_{\beta })) \eta (t'_{\beta },x'_{\beta }) \Big ] \, \cdot A^{\rightarrow (j+1)}((t'_{\beta },x'_{\beta }),\cdot ) \nonumber \\ \end{aligned}$$
(5.41)

with \(d((t_{\alpha },x_{\alpha }),(t_{\beta },x_{\beta })),d((t'_{\alpha },x'_{\alpha }),(t'_{\beta },x'_{\beta }))=O(1)\). Renormalization corrections are due precisely to these (and more complicated) ladder diagrams, with \((t_{\alpha },x_{\alpha }),(t_{\beta },x_{\beta })\), resp. \((t'_{\alpha },x'_{\alpha }),(t'_{\beta },x'_{\beta })\) belonging to \(\Delta \), resp. \(\Delta '\), where \(\Delta \), \(\Delta '\) are two distinct scale 0 boxes where the \(\eta \)’s contract two-by-two. If \(j=0\) then short-distance “crossed” \(\eta \)-contractions are also possible.

6 Final Bounds

We are now, at long last, ready to prove our Main Theorem. Roughly speaking, N-point functions \(\langle h(t_1,x_1)\cdots h(t_N,x_N)\rangle \) have been rewritten in terms of a series, that is, an (infinite) sum over polymers. Obviously, the first task is to ensure that this series is convergent. This turns out to be the main point of the section; once this is understood, the scaling behavior of N-point functions will be obtained essentially by looking at the terms of lowest order in \(g^{(0)}\) in the series.

6.1 Small Noise/Large Noise Boxes

Definition 6.1

Let \(\Delta \in \mathbb {D}^0\). Then \(\Delta \) is said to be a size k large field box \((k\ge 0)\) if \(2^k \lambda ^{-1/2}<\sup _{\Delta } |\eta |\le 2^{k+1}\lambda ^{-1/2}\).

Denote by \(\mathbb {D}^0_{LF,k}\) the set of size k large field boxes, by \(\mathbb {D}^0_{LF}:=\uplus _{k\ge 0} \mathbb {D}^0_{LF,k}\) the set of all large field boxes, and by \(\mathbb {D}^0_{SF}:=\mathbb {D}^0 \setminus \mathbb {D}^0_{LF}\) its complement. The region \(\mathbb {D}^0_{LF}\) is called the large field region, and the region \(\mathbb {D}^0_{SF}\) the small field region.

By standard Gaussian deviations, if \(\Delta \in \mathbb {D}^0\), then

$$\begin{aligned} {\mathbb {P}}[\Delta \in \mathbb {D}^0_{{\mathrm {LF}},k}]\le e^{-c2^{2k}/\lambda }, \qquad k\ge 0. \end{aligned}$$
(6.1)

The bound (6.1) also holds trivially if \(\Delta \in \mathbb {D}^0_{SF}\) by formally letting \(k=-\infty \). This trick allows one to handle small noise and large noise boxes on an equal footing.
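As a toy numerical illustration of (6.1) (illustrative only: we replace \(\sup _{\Delta }|\eta |\) by a single standard Gaussian variable, so the constant c differs from the one in the actual multi-box estimate):

```python
import math

def large_field_prob(k, lam):
    """Toy stand-in for P[Δ ∈ D⁰_{LF,k}]: the tail P[|η| > 2^k λ^{-1/2}]
    for a single standard Gaussian η (taking the sup over a box only changes c)."""
    threshold = 2**k / math.sqrt(lam)
    return math.erfc(threshold / math.sqrt(2))  # = P[|N(0,1)| > threshold]

lam = 0.1
probs = [large_field_prob(k, lam) for k in range(4)]
# each probability is dominated by 2·exp(-2^{2k}/(2λ)), the analogue of (6.1)
```

Each doubling of the threshold squares the exponent, which is what makes the later sum over large-field indices k converge so fast.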

6.2 Vertex Insertions and Contour Integrals

Let us recapitulate the previous steps. We start from an N-point function,

$$\begin{aligned} \langle h(t_1,x_1)\cdots h(t_N,x_N)\rangle =\left( \frac{\nu ^{(0)}}{\lambda }\right) ^N F_N, \end{aligned}$$
(6.2)

where

$$\begin{aligned} F_N:= & {} \langle \log (w(t_1,x_1))\cdots \log (w(t_N,x_N))\rangle \nonumber \\= & {} \left\langle \log \left( \int dy_1 \, A((t_1,x_1),\cdot ) (1-V_{\eta })^{-1} (\cdot ,\cdot ) B(\cdot ,(0,y_1))\, e^{\frac{\lambda }{\nu ^{(0)}} h_0(y_1)} \right) \cdots \right. \nonumber \\&\left. \log \left( \int dy_N\, A((t_N,x_N),\cdot ) (1-V_{\eta })^{-1} (\cdot ,\cdot ) B(\cdot ,(0,y_N))\, e^{\frac{\lambda }{\nu ^{(0)}} h_0(y_N)} \right) \right\rangle \end{aligned}$$
(6.3)

and \(V_{\eta }:=\int dt\, dx\, V_{\eta }(\tau =1)(t,x)=\int dt\, dx\, B(\cdot ,(t,x))\left( g^{(0)}(\eta (t,x)-v^{(0)})\right) A((t,x),\cdot )\). Then we:

  1. apply to \(F_N\) the horizontal and vertical cluster expansions; this results in a sum over forests \({\mathbb {F}}^0\in \mathcal{F}^0\) of a rational function [see (4.2)] in strings;

  2. displace external legs;

  3. contract the dangling \(\eta \)’s;

  4. apply Mayer’s expansion to scale 0 two-point diagrams;

  5. factorize the scale 0 two-point diagram contributions. By construction these are exactly compensated by the counterterms.

Consider now the various vertex insertions (in form of a kernel),

$$\begin{aligned} \tilde{V}_{\alpha }(\varvec{s}(\varvec{w});z_{\alpha }):=c_{\alpha }(\varvec{s}(\varvec{w})) {\left\{ \begin{array}{ll} V_{\alpha }(z_{\alpha }) \qquad \ \ {\mathrm {if}}\ z_{\alpha }=z_{\ell }, \ell \in L_{\eta } \\ \tilde{\eta }(z_{\alpha })V_{\alpha }(z_{\alpha }) \qquad {\mathrm {otherwise}} \end{array}\right. } \end{aligned}$$
(6.4)

where

$$\begin{aligned} c_{\alpha }(\varvec{s}(\varvec{w})):=\tilde{s}_{\Delta '_{\alpha },\Delta _{\alpha }} \tilde{s}_{\Delta _{\alpha },\Delta ''_{\alpha }}, \qquad \tilde{s}_{\Delta ,\Delta '}={\left\{ \begin{array}{ll} 1 \qquad {\mathrm {if}}\ \{\Delta ,\Delta '\}\in L_G \\ s_{\Delta ,\Delta '} \qquad {\mathrm {otherwise}} \end{array}\right. } \end{aligned}$$
(6.5)

and

$$\begin{aligned} V_{\alpha }(z_{\alpha })(z'_{\alpha },z''_{\alpha }):=g^{(0)} \partial ^{\kappa '_{\alpha }} B^{j'_{\alpha }}(z'_{\alpha },z_{\alpha }) \partial ^{\kappa ''_{\alpha }} A^{j''_{\alpha }}(z_{\alpha },z''_{\alpha }) \end{aligned}$$
(6.6)

on the strings, with \(z_{\alpha }=(t_{\alpha },x_{\alpha }),z'_{\alpha }=(t'_{\alpha },x'_{\alpha })\in \Delta '_{\alpha },z''_{\alpha }=(t''_{\alpha },x''_{\alpha })\in \Delta ''_{\alpha }\), \(\partial ^{\kappa '_{\alpha }}:=\partial _{t'}^{\kappa '_{\alpha ,0}} \partial _{x'}^{\varvec{\kappa }'_{\alpha }}\), and similarly for \(\partial ^{\kappa ''_{\alpha }}\); \(\alpha \) is some dummy index. Let \(|\kappa '_{\alpha }|:=2\kappa '_{\alpha ,0}+|\varvec{\kappa }'_{\alpha }|\) be the parabolic order of derivation; in particular, \(|\kappa '_{\alpha }|=3\) if and only if \(\partial ^{\kappa '_{\alpha }}=\partial _{t'} \nabla _{x'}\), or \(\partial ^{\kappa '_{\alpha }}=\nabla _{x'}^{\varvec{\kappa }'_{\alpha }}\) with \(|\varvec{\kappa }'_{\alpha }|=3\); and similarly for \(\kappa ''_{\alpha }\). By assumption \(z_{\alpha }\) ranges over some box \(\Delta ^0_{\alpha }\) of scale 0, and \(\Delta '_{\alpha } \in \mathbb {D}^{j'_{\alpha }},\Delta ''_{\alpha }\in \mathbb {D}^{j''_{\alpha }}\). There are four cases:

  (i) (no \(\tau \)-derivative, 0-th scale vertices) \(j'_{\alpha },j''_{\alpha }=0\), and \(\kappa '_{\alpha }=\kappa ''_{\alpha }=0\);

  (ii) (one \(\tau \)-derivative, beginning of 0-th scale cluster) \(j'_{\alpha }>0\), \(j''_{\alpha }=0\), \(\kappa ''_{\alpha }=0\), \(|\kappa '_{\alpha }|=0\) or \(\ge 3\);

  (iii) (one \(\tau \)-derivative, end of 0-th scale cluster) \(j'_{\alpha }=0\), \(\kappa '_{\alpha }=0\), and \(j''_{\alpha }>0\), \(|{\kappa }''_{\alpha }|=0\) or \(\ge 3\);

  (iv) (second \(\tau \)-derivative) \(j'_{\alpha }>0\), \(j''_{\alpha }>0\), and \(\kappa '_{\alpha }=\kappa ''_{\alpha }=0\).

To these, one must add insertions of a particular type, proportional to \(\delta v^{(0)}\) [see (5.21)],

  (v) (boundary terms) \(V_{\alpha }(z_{\alpha })(z'_{\alpha },z''_{\alpha }):= \delta v^{(0)}(t_{\alpha }) B^{j'_{\alpha }}(z'_{\alpha },z_{\alpha }) A^{j''_{\alpha }}(z_{\alpha },z''_{\alpha })\) \((j''_{\alpha }\ge j'_{\alpha })\), resp. \(\delta v^{(0)}(t_{init}-t_{\alpha }) B^{j'_{\alpha }}(z'_{\alpha },z_{\alpha }) A^{j''_{\alpha }}(z_{\alpha },z''_{\alpha })\) \((j''_{\alpha }<j'_{\alpha })\), where \(t_{init}:=t_i\) if \(V_{\alpha }(z_{\alpha })\) is inserted on the i-th string, \(i=1,\ldots ,N\).

Vertices of type (iv) are responsible for the production of the \(v^{(0)}\) (see “\(n=-1\)” term in §5.1) and \(\delta \nu \) [see second line of (4.7)] counterterms; the \(v^{(0)}\)-counterterm is chosen in such a way as to cancel the two-point function (see §5.1), while \(\delta \nu \)-counterterms are resummed into the effective propagator \(\tilde{G}_{eff}\) (see §6.3 A.). The contribution of scale 0 vertices (i) is bounded in § 6.3 A.

Vertex insertions of type (ii), (iii) have been differentiated by the scale 0 renormalization. More precisely, letting \(\kappa '_{\alpha }\) be the order of differentiation of a low-momentum \(B^{j'_{\alpha }}\)-propagator entering a given 0-th scale cluster (ii), and \(\kappa ''_{\alpha '}\) that of a low-momentum \(A^{j''_{\alpha '}}\)-propagator exiting the same 0-th scale cluster, one sets as in § 5.1: \((|\kappa '_{\alpha }|\ge 3,\kappa ''_{\alpha '}=0)\) if \(j'_{\alpha }\ge j''_{\alpha '}\), \((\kappa '_{\alpha }=0,|\kappa ''_{\alpha '}|\ge 3)\) if \(j'_{\alpha }< j''_{\alpha '}\). From the point of view of power-counting (see below), we have thus produced an essential small factor

$$\begin{aligned} O(2^{-\frac{3}{2}\max (j'_{\alpha },j''_{\alpha '})}), \end{aligned}$$
(6.7)

that is, \(O(2^{-\frac{3}{2}j})\) per half of the low-momentum fields \(A^j\) or \(B^j\), having the same effect as \(\nabla ^{3}\), or (considering a chronological sequence \(A^j(\cdot ,\cdot ) \langle j| \ B^j(\cdot ,\cdot ) |j\rangle =G^j(\cdot ,\cdot )\)), \(O(\nabla ^3)\) in average per low-momentum G-field.

Finally, boundary vertices of type (v) enjoy an exponentially small factor. Namely, assuming e.g. that \(j''_{\alpha }\ge j'_{\alpha }\), the boundary correction \(\delta v^{(0)}(t_{\alpha })\) to \(v^{(0)}\) is \(O((Cg^{(0)})^{ct_{\alpha }})\), which is \(\lesssim t_{\alpha }^{-3/2}\lesssim 2^{-\frac{3}{2}j''_{\alpha }}\). Hence such vertices may and will be considered, from the power-counting point of view, as O(1) times a vertex of type (ii) or (iii).

Recalling that these vertices are produced anywhere along the strings by the cluster expansion, their contributions may be resummed as follows. We first need some notation. Let:

  • \(I({\mathbb {F}}^{0})=\{I_{\alpha }\}_{\alpha }\) be the set of vertex insertions;

  • \(\Delta _{\alpha }\in \mathbb {D}^0\) be the scale-0 box where \(z_{\alpha }\) [see (6.6)] is located;

  • \(L({\mathbb {F}}^{0})\) be the set of horizontal cluster links, and \(L_{\eta }\subset L({\mathbb {F}}^0)\) those coming specifically from the cluster on \(\eta \) (compare with §5.1);

  • \(L_{vert}\) be the set of links coming from the vertical cluster expansion;

  • \(L_{Mayer}\) be the set of Mayer links.

Then

$$\begin{aligned} F_N= & {} \sum _{{\mathbb {F}}^{0}\in \mathcal{F}^{0}} \sum _{L_G,L_{\eta },L_{vert},L_{Mayer},\varvec{\mu }}\Big (\prod _{\ell \in L({\mathbb {F}}^{0})} \int _{0}^{1} dw_{\ell }\Big ) \Big (\prod _{\ell \in L_{Mayer}({\mathbb {F}}^{0})} \int _{0}^{1} dS_{\ell }\Big )\ \ {\mathrm {Mayer}}(\varvec{S}) \nonumber \\&\quad \Big (\prod _{\ell \in L_{\eta }({\mathbb {F}}^{0})} \langle \eta (z_{\ell }) \eta (z'_{\ell })\rangle \Big ) \qquad \cdot \qquad \Big \langle \Big (\prod _{\alpha \in I({\mathbb {F}}^{0}) }\frac{d}{d\gamma _{\alpha }}|_{\gamma _{\alpha }=0}\Big )\nonumber \\&\quad \prod _{j=1}^{N} \log \left( \int dy_{j} \, \tilde{A}(\varvec{s}(\varvec{w}))((t_j,x_j),\cdot ) \frac{1}{1-V_{\eta }(\tau )-\sum _{\alpha \in I({\mathbb {F}}^{0}) }\gamma _{\alpha }\tilde{V}_{\alpha }} \right. \nonumber \\&\left. \quad (\cdot ,\cdot ) \tilde{B}(\varvec{s}(\varvec{w}))(\cdot ,(0,y_j)) \, e^{\frac{\lambda }{\nu ^{(0)}} h_0(y_j)} \right) \Big \rangle _{s(w)} \end{aligned}$$
(6.8)

where \(\tilde{C}(\varvec{s}(\varvec{w})):=\tilde{C}^0(\varvec{s}(\varvec{w}))+C^{\rightarrow 1}\), \(\tilde{C}^0(\varvec{s}(\varvec{w}))(z,z'):=\! {\left\{ \begin{array}{ll} C^0(z,z') \qquad {\mathrm {if}}\ (\Delta ^0_z,\Delta ^0_{z'})\in L_G \\ s_{\Delta ^0_z, \Delta ^0_{z'}} C^0(z,z') \qquad {\mathrm {otherwise}} \end{array}\right. }\) \((C=A,B)\); \(\tilde{V}_{\alpha }:=\int _{\Delta _{\alpha }} dz_{\alpha }\, \tilde{V}_{\alpha }(\varvec{s}(\varvec{w});z_{\alpha })\), and \(V_{\eta }(\tau ):=\int dt\, dx\, V_{\eta }(\tau )(\varvec{s}(\varvec{w}))(t,x)\) is the space-time integration of the dressed vertex (4.7). Note that the s-dependence in this expression is trivial when it comes to bounds, since \(|\tilde{C}(\varvec{s}(\varvec{w}))(\cdot ,\cdot )|\le |C(\cdot ,\cdot )|\), \(C=A,B\), and similarly \(|c_{\alpha }(\varvec{s}(\varvec{w}))|\le 1\) [see (6.4)].

By causality, the vertex insertions may be re-expanded along the string number \(i=1,\ldots ,N\) into a finite sum \(\mathcal{S}_i\) as follows: letting \(\varvec{\gamma }:=(\gamma _{\alpha })_{\alpha }\),

$$\begin{aligned} \mathcal{S}_i(\varvec{\gamma }):= & {} \tilde{A}(\varvec{s}(\varvec{w}))((t_i,x_i),\cdot ) (1-V_{\eta }(\tau )-\sum _{\alpha }\gamma _{\alpha }\tilde{V}_{\alpha })^{-1}(\cdot ,\cdot ) \int dy_i\, \tilde{B}(\varvec{s}(\varvec{w}))(\cdot ,(0,y_i)) \, e^{\frac{\lambda }{\nu ^{(0)}} h_0(y_i)} \nonumber \\= & {} \sum _{\alpha _1,\alpha _2,\ldots } \tilde{A}(\varvec{s}(\varvec{w}))((t_i,x_i),\cdot ) \ \cdot \ \nonumber \\&\cdot \ \left( \int _{\Delta '_{\alpha _1}} dz'_1\, \int _{\Delta _{\alpha _1}} dz_{\alpha _1} \int _{\Delta ''_{\alpha _1}}dz''_1 \right) R_{\eta }(\tau )(\cdot ,z'_1) \Big \{\gamma _{\alpha _1} \tilde{V}_{\alpha _1}(\varvec{s}(\varvec{w});z_{\alpha _1})(z'_1,z''_1) \Big \} \ \cdot \nonumber \\&\cdot \ \left( \int _{\Delta '_{\alpha _2}} dz'_2\, \int _{\Delta _{\alpha _2}} dz_{\alpha _2}\, \int _{\Delta ''_{\alpha _2}}dz''_2 \right) R_{\eta }(\tau )(z''_1,z'_2) \Big \{ \gamma _{\alpha _2} \tilde{V}_{\alpha _2}(\varvec{s}(\varvec{w});z_{\alpha _2})(z'_2,z''_2) \Big \} \ \cdots , \nonumber \\ \end{aligned}$$
(6.9)

with main term (disregarding propagator renormalization, see §6.3 A.)

$$\begin{aligned}&A((t_i,x_i),\cdot ) \int dy_i\, B(\cdot ,(0,y_i))e^{\frac{\lambda }{\nu ^{(0)}} h_0(y_i)} = \int dy_i\, G((t_i,x_i),(0,y_i)) e^{\frac{\lambda }{\nu ^{(0)}} h_0(y_i)}\nonumber \\&\quad = 1+ e^{\nu ^{(0)} t_i\Delta } (e^{\frac{\lambda }{\nu ^{(0)}} h_0}-1)(x_i) \le 1+\frac{\lambda }{\nu ^{(0)}}\ e^{\frac{\lambda }{\nu ^{(0)}} ||h_0||_{\infty }} \, \ \cdot \ (e^{\nu ^{(0)}t_i\Delta } |h_0|)(x_i) \nonumber \\&\quad = 1+O(\lambda e^{\frac{\lambda }{\nu ^{(0)}} ||h_0||_{\infty }}) \, \min (||h_0||_{L^{\infty }}, t_i^{-d/2} ||h_0||_{L^1}). \end{aligned}$$
(6.10)
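The last bound in (6.10) uses the smoothing property of the heat semigroup, \(\Vert e^{\nu t\Delta }g\Vert _{\infty }\le \min (\Vert g\Vert _{\infty },(4\pi \nu t)^{-d/2}\Vert g\Vert _{L^1})\). A quick numerical sanity check in \(d=1\) (hypothetical data \(g(y)=e^{-y^2}\) and \(\nu =1\), chosen only for illustration):

```python
import math

def heat(t, x, g, nu=1.0, h=0.01, L=10.0):
    """(e^{νtΔ} g)(x): Riemann-sum convolution with the heat kernel
    (4πνt)^{-1/2} exp(-|x-y|²/4νt)."""
    ker = lambda y: math.exp(-(x - y) ** 2 / (4 * nu * t)) / math.sqrt(4 * math.pi * nu * t)
    n = int(2 * L / h)
    return sum(ker(-L + i * h) * g(-L + i * h) * h for i in range(n))

g = lambda y: math.exp(-y * y)        # ‖g‖_∞ = 1, ‖g‖_{L¹} = √π
t = 5.0
val = heat(t, 0.0, g)
bound = min(1.0, math.sqrt(math.pi) / math.sqrt(4 * math.pi * t))
```

For these Gaussian data the convolution is exactly \(\sqrt{1/(4t+1)}\), and `val` indeed stays below the smoothing bound; this \(t^{-d/2}\) decay is the source of the \(\min (\Vert h_0\Vert _{L^{\infty }},t_i^{-d/2}\Vert h_0\Vert _{L^1})\) factor.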

Since the result is analytic in the parameters \(\varvec{\gamma }\) in a neighborhood of 0, we may replace \(\frac{d}{d\gamma _{\alpha }}\big |_{\gamma _{\alpha }=0} F(\gamma _{\alpha })\) by the Cauchy contour integral

$$\begin{aligned} \frac{1}{2\mathrm{i}\pi } \oint _{\partial B(0,r_{\alpha })} \frac{d\gamma _{\alpha }}{\gamma _{\alpha }^2} F(\gamma _{\alpha }), \end{aligned}$$

with (defining \(k_{\alpha }\) to be the size of the large-field zone of \(\Delta _{\alpha }\) if \(\Delta _{\alpha }\) is large-field, i.e. \(\Delta _{\alpha }\in \mathbb {D}^0_{LF,k_{\alpha }}\), \(k_{\alpha }\ge 0\), and \(k_{\alpha }=-\infty \) if \(\Delta _{\alpha }\in \mathbb {D}^0_{SF}\))

$$\begin{aligned} r_{\alpha }\equiv r_{\alpha }(k_{\alpha }):=r'_{\alpha }(k_{\alpha }) r''_{\alpha }, \end{aligned}$$
(6.11)

where

$$\begin{aligned} (r'_{\alpha }(k_{\alpha }))^{-1}:= & {} C (2^{k_{\alpha }+1})^{n(\Delta _{\alpha })} e^{\lambda ^{1/2} 2^{k_{\alpha }+1}} \nonumber \\ (r''_{\alpha })^{-1}:= & {} C g^{(0)} \int _{\Delta '_{\alpha }} dz'_{\alpha } \int _{\Delta _{\alpha }} dz_{\alpha } \int _{\Delta ''_{\alpha }} dz''_{\alpha } \, |V_{\alpha }(z_{\alpha })(z'_{\alpha },z''_{\alpha })| \end{aligned}$$
(6.12)

for some large enough uniform constant C. Then

$$\begin{aligned} |\gamma _{\alpha }|=r_{\alpha },\qquad \frac{1}{2\pi } \oint _{\partial B(0,r_{\alpha })} \frac{|d\gamma _{\alpha }|}{|\gamma _{\alpha }|^2}= r_{\alpha }^{-1}. \end{aligned}$$
(6.13)
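As a sanity check of this contour-integral device (with the hypothetical analytic function \(F(\gamma )=e^{3\gamma }\), so that \(F'(0)=3\); the choice of F and r is ours, for illustration only):

```python
import cmath

def contour_derivative(F, r, n=2000):
    """F'(0) = (1/2πi) ∮_{|γ|=r} F(γ)/γ² dγ, by the trapezoid rule on the circle.
    With γ = r e^{iθ} one has dγ = iγ dθ, so the integrand reduces to F(γ)/γ · dθ/2π."""
    total = 0j
    for k in range(n):
        gamma = r * cmath.exp(2j * cmath.pi * k / n)
        total += F(gamma) / gamma
    return total / n

d = contour_derivative(lambda g: cmath.exp(3 * g), r=0.1)   # exact value: 3
```

Taking absolute values inside the integral reproduces the bound \(|F'(0)|\le r^{-1}\sup _{|\gamma |=r}|F(\gamma )|\), which is exactly the mechanism behind the factor \(r_{\alpha }^{-1}\) per vertex below.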

As we shall see, the \(r_{\alpha }\equiv |\gamma _{\alpha }|\) have been chosen small enough (depending on the order of magnitude of the \((|\eta _{\Delta _{\alpha }}|)_{\alpha }\)) so that each \(\mathcal{S}_i(\varvec{\gamma })\) is equal to \(1+o(1)\), yielding

$$\begin{aligned} F_N({\mathbb {F}}^0,\varvec{k}|\eta ):= & {} \prod _{\alpha } \mathbf{1}_{\Delta _{\alpha }\in \mathbb {D}^0_{LF,k_{\alpha }}} \ \cdot \ \left[ \prod _{\alpha } \left( \frac{1}{2\mathrm{i}\pi } \oint _{\partial B(0,r_{\alpha })} \frac{d\gamma _{\alpha }}{\gamma _{\alpha }^2} \right) \right] \ \left\{ \prod _{i=1}^N \log ( \mathcal{S}_i(\varvec{\gamma })) \right\} \nonumber \\= & {} O(1) \prod _{\alpha } r^{-1}_{\alpha }(k_{\alpha }), \end{aligned}$$
(6.14)

a deterministic estimate (but depending on \(\varvec{k}:=(k_{\alpha })_{\alpha }\)). This is step B. in §6.3.

The next step (see § 6.3, step C.) is to show that the averaged infinite sum \(\langle \sum _{\varvec{k}} F_N({\mathbb {F}}^0,\varvec{k}|\eta ) \rangle \) is \(\lesssim \prod _{\alpha } (r''_{\alpha })^{-1}\); or rather, to be precise, \(\lesssim \prod _{\alpha } (\tilde{r}''_{\alpha })^{-1}\), where (as in § 5.1) \(\tilde{r}''_{\alpha }\) is \(r''_{\alpha }\) up to the replacement of \(A^j,B^j\) with equivalent kernels \(\tilde{A}^j,\tilde{B}^j\).

The final step is to show that the infinite sum \(\sum _{{\mathbb {F}}^0}\prod _{\alpha } (\tilde{r}''_{\alpha })^{-1}\) converges; see step D. in § 6.3.

Obviously, in the course of the proof one must extract the lowest order terms, which will give the leading behavior of the KPZ truncated functions.

6.3 KPZ 1-Point Function

Let us first consider the case of the 1-point function \(\langle h(t,x)\rangle = \frac{\nu ^{(0)}}{\lambda } \langle \log w(t,x)\rangle \), where there is only one string. One must prove that \(\langle h(t,x)\rangle \overset{t\rightarrow \infty }{\rightarrow } 0\). We decompose the proof into four points (see the discussion at the end of §6.2); the first point A. is a preparatory step. Except that A. must be supplemented with a new power-counting argument (see A’.), the same scheme of proof of convergence is used for KPZ truncated functions of higher order, see §6.4, 6.5, where details are skipped, so that one can concentrate on the asymptotic large-scale scaling functions.

A. :

(Contribution of the random resolvents) On each string, one finds a number of random resolvents \(R^{(0)}_{\eta }(\tau ^0=0)(\varvec{s}(\varvec{w}))((t_i,x_i),(t_{i+1},x_{i+1}))\). As in §4.1 C., such resolvents may be expanded to order two, see (5.33),

$$\begin{aligned}&R^{(0)}_{\eta }(\tau ^0=0)((t_i,x_i),(t_{i+1},x_{i+1}))=\delta ((t_i,x_i),(t_{i+1},x_{i+1})) \nonumber \\&\quad + B^0(\varvec{s}(\varvec{w}))((t_i,x_i),\cdot ) \tilde{\eta }(\cdot ) A^0(\varvec{s}(\varvec{w}))(\cdot ,(t_{i+1},x_{i+1})) \nonumber \\&\quad +B^0(\varvec{s}(\varvec{w}))((t_i,x_i),(t'_i,x'_i)) \tilde{\eta }(t'_i,x'_i) \ \cdot \ G_{\eta }(\varvec{s}(\varvec{w}))((t'_i,x'_i),(t'_{i+1},x'_{i+1})) \ \cdot \ \nonumber \\&\quad \cdot \ \tilde{\eta }(t'_{i+1},x'_{i+1}) A^0(\varvec{s}(\varvec{w}))((t'_{i+1},x'_{i+1}),(t_{i+1},x_{i+1})) \end{aligned}$$
(6.15)

with \(t'_i-t'_{i+1}\le 2\). Then \(A^0(\varvec{s}(\varvec{w}))(\cdot ,\cdot )\le A^0(\cdot ,\cdot )\), \(B^0(\varvec{s}(\varvec{w}))(\cdot ,\cdot )\le B^0(\cdot ,\cdot )\) and

$$\begin{aligned}&G_{\eta }(\varvec{s}(\varvec{w}))((t'_i,x'_i),(t'_{i+1},x'_{i+1}))\le G_{|\eta |_{{\mathbb {T}}}} ((t'_i,x'_i),(t'_{i+1},x'_{i+1})) \nonumber \\&\quad \le G^{2\rightarrow } ((t'_i,x'_i),(t'_{i+1},x'_{i+1})) \max \prod _{\Delta \in {\mathbb {T}}} e^{\theta _{\Delta }g^{(0)}|\eta _{\Delta }|}. \end{aligned}$$
(6.16)
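The order-two expansion (6.15), with its exact remainder, rests on the algebraic identity \((1-V)^{-1}=1+V+V(1-V)^{-1}V\). A toy check on a \(2\times 2\) matrix (the numerical entries are arbitrary illustrative values, standing in for the operator \(V_{\eta }\)):

```python
def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

I = [[1.0, 0.0], [0.0, 1.0]]
V = [[0.10, -0.20], [0.05, 0.15]]                                    # small perturbation
R = inv2([[I[i][j] - V[i][j] for j in range(2)] for i in range(2)])  # (1-V)^{-1}

# order-two expansion with exact remainder: (1-V)^{-1} = 1 + V + V (1-V)^{-1} V
VRV = matmul(V, matmul(R, V))
rhs = [[I[i][j] + V[i][j] + VRV[i][j] for j in range(2)] for i in range(2)]
```

The remainder \(V(1-V)^{-1}V\) carries two explicit factors of V, which in (6.15) is what exhibits the two \(\tilde{\eta }\) fields per resolvent.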

Furthermore, the expansion (6.15) has produced new \(\eta \) fields, \(O(n(\Delta ))\) per box \(\Delta \in {\mathbb {T}}\). Thus, to each large-field box \(\Delta _{\alpha }\in {\mathbb {T}}\) corresponds a factor \(r'_{\alpha }(k_{\alpha }) |\eta _{\Delta _{\alpha }}|^{O(n(\Delta _{\alpha }))} e^{\theta _{\Delta _{\alpha }}g^{(0)}|\eta _{\Delta _{\alpha }}|}=o(1)\). In conclusion, a scale 0 resolvent \(R^{(0)}_{\eta }(\tau ^0=0)(\cdot ,\cdot )\) may be replaced by \(\delta (\cdot ,\cdot )+ O(g^{(0)})G^{2\rightarrow }(\cdot ,\cdot )\). On the other hand, one also finds low-momentum resolvents

$$\begin{aligned} \delta R_{\eta }:=(1-\delta \nu B^{\rightarrow 1}\Delta ^{\rightarrow 0}A^{\rightarrow 1})^{-1} \end{aligned}$$
(6.17)

[see second line of (4.7)]. Sandwiched between a \(\partial ^{\kappa ''_{\alpha }} A^{j''_{\alpha }} \, \langle j''_{\alpha }|\)-propagator on the left side, and a \(\partial ^{\kappa '_{\alpha '}}B^{j'_{\alpha '}} \, |j'_{\alpha '}\rangle \)-propagator on the right side, they produce, as proved in Lemma 8.2, a propagator \(\partial ^{\kappa ''_{\alpha }+\kappa '_{\alpha '}} \tilde{G}_{eff}^{j,j'}(z''_{\alpha },z'_{\alpha '})\) having a priori three scales—\(j,j'\) and \(\lfloor \log _2(t''_{\alpha }-t'_{\alpha '})\rfloor \)—which may be resummed into an effective propagator \(\partial ^{\kappa ''_{\alpha }+\kappa '_{\alpha '}}\tilde{G}_{eff}(z''_{\alpha },z'_{\alpha '})\). Thus in the sequel these are evaluated as a constant O(1), times a contraction

\(\partial ^{\kappa ''_{\alpha }} \tilde{A}^{\tilde{j}''_{\alpha }} \, \langle \tilde{j}''_{\alpha }|\ \partial ^{\kappa '_{\alpha '}}\tilde{B}^{\tilde{j}'_{\alpha '}}\, |\tilde{j}'_{\alpha '}\rangle \ (z''_{\alpha },z'_{\alpha '})\), where \(\tilde{A}=A_{\nu ^{(0)}+O(\lambda ^2)}\), \(\tilde{B}=B_{\nu ^{(0)}+O(\lambda ^2)}\), and \(\tilde{j}''_{\alpha }=\tilde{j}'_{\alpha '}= \log _2(t''_{\alpha }-t'_{\alpha '})+O(1)\).

B. :

(Deterministic bound for the sum (6.14)) The (deterministic) product of the \(V_{\alpha }\)’s is compensated by the product \(\prod _{\alpha } r''_{\alpha }\), leaving only a small coefficient \(C^{-1}\) per vertex. Thus the sum \(\mathcal{S}_1(\varvec{\gamma })\) (6.9) converges to a constant \(1+O(C^{-2})\). For C large enough this lies in the complex disk B(1, 1 / 2), so \( \mathbf{1}_{\Delta _{\alpha }\in \mathbb {D}^0_{LF,k_{\alpha }}} \ \cdot \ \log (\mathcal{S}_1(\varvec{\gamma }))\) is well-defined, and

$$\begin{aligned} \mathbf{1}_{\Delta _{\alpha }\in \mathbb {D}^0_{LF,k_{\alpha }}} \ \cdot \log \mathcal{S}_1(\varvec{\gamma }) \simeq \mathbf{1}_{\Delta _{\alpha }\in \mathbb {D}^0_{LF,k_{\alpha }}} \ \cdot \left( \mathcal{S}_1(\varvec{\gamma })-1\right) . \end{aligned}$$
(6.18)

We must now sum the scaling coefficient \(\prod _{\alpha }r_{\alpha }^{-1}\) over all vertex locations, i.e. over all forests \({\mathbb {F}}\). Since the \((r''_{\alpha })^{-1}\)’s give (up to a constant O(1) per vertex) the correct order of magnitude of the vertex insertions, we may assume that we want to sum over all large-field indices \(\varvec{k}\) (see C.), then over all forests \({\mathbb {F}}\) (see D.) the string \(\mathcal{S}_1-1\), see (6.9), where one has set: \(\gamma _{\alpha }=O(1)\) and \(R_{\eta }(\tau )(\cdot ,\cdot )=\delta (\cdot ,\cdot )+G^{2\rightarrow }(\cdot ,\cdot )\).

C. :

(Convergence of the average in \(\eta \)) The main issue here is to show, using standard Gaussian large deviations, that our estimates are integrable in \(\eta \). Proceeding as in §5.1 C., we rewrite \(A^j((t,x),(t',x'))\) as \(e^{-\frac{c}{2} |x-x'|^2/2^j} \ \cdot \ \tilde{A}^j((t,x),(t',x'))\), where \(\tilde{A}^j\) has the same scaling properties as \(A^j\), and has furthermore retained the same Gaussian-type space-decay, only with different constants; and similarly for \(B^j\), \(\tilde{B}^j\). Up to a multiplicative constant O(1), this is equivalent to replacing \(\nu ^{(0)}\) by \(\tilde{\nu }^{(0)}\approx \nu ^{(0)}\). In the process, we have gained a small factor \(\prod _{\alpha } 2^{-cn(\Delta _{\alpha })^{1+2/d}}\), see (5.35). Next, we split into two the large-deviation factor \(LF({\mathbb {F}},\varvec{k})=\prod _{\alpha } {\mathbb {P}}[\Delta _{\alpha }\in \mathbb {D}^0_{LF,k_{\alpha }}]\lesssim \left( \prod _{\alpha } e^{-\frac{c}{2} 2^{2k_{\alpha }}/\lambda } \right) ^2\). Then

$$\begin{aligned} \Big [ LF({\mathbb {F}},\varvec{k}) \Big ]^{1/2} \ \cdot \ \Big [\prod _{\alpha } (r'_{\alpha }(k_{\alpha }))^{-1} 2^{-cn(\Delta _{\alpha })^{1+2/d}} \Big ]=O(1).\end{aligned}$$
(6.19)

This is easily shown using the space-decay, resp. large-deviation factor when \(k_{\alpha }\le n(\Delta _{\alpha })\), resp. \(\ge n(\Delta _{\alpha })\). The remaining factor \(\left[ {\mathrm {LF}}({\mathbb {F}},\varvec{k})\right] ^{1/2}\) makes the sum over large-field indices converge to a factor O(1) per vertex, \(\sum _{k_{\alpha }\in \{-\infty \}\cup \mathbb {N}} e^{-\frac{c}{2} 2^{2k_{\alpha }}/\lambda }=1+o(1)\). Thus [see (6.14)] \(\Big | \sum _{{\mathbb {F}}} \sum _{\varvec{k}} \big \langle F_1({\mathbb {F}},\varvec{k}|\eta ) \big \rangle \Big |\lesssim \sum ^{\star }_{{\mathbb {F}}} \prod _{\alpha } (\tilde{r}''_{\alpha })^{-1} \equiv \left( \sum _{{\mathbb {F}}}\prod _{\alpha } (\tilde{r}''_{\alpha })^{-1} \right) -1, \) where: \(\sum ^*_{{\mathbb {F}}} f({\mathbb {F}}):=\sum _{{\mathbb {F}}\not =\emptyset } f({\mathbb {F}})\), and \((\tilde{r}''_{\alpha })^{-1}\) is given by the same formula as (6.12), but with \(V_{\alpha }(z_{\alpha })\) [see (6.6)] replaced by \(\tilde{V}_{\alpha }(z_{\alpha }):= g^{(0)} \partial ^{\kappa '_{\alpha }} \tilde{B}^{j'_{\alpha }}(\cdot ,z_{\alpha })\partial ^{\kappa ''_{\alpha }}\tilde{A}^{j''_{\alpha }}(z_{\alpha },\cdot )\).
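The convergence of the sum over large-field indices is elementary; a quick numerical check (with illustrative values \(c=1\), \(\lambda =0.05\) of our choosing):

```python
import math

def large_field_index_sum(c, lam, kmax=30):
    """Σ_{k ∈ {-∞} ∪ ℕ} e^{-(c/2) 2^{2k}/λ}: the k = -∞ (small-field) term
    contributes 1, and the k ≥ 0 terms are doubly exponentially small."""
    return 1.0 + sum(math.exp(-(c / 2) * 4**k / lam) for k in range(kmax))

total = large_field_index_sum(c=1.0, lam=0.05)
# total = 1 + e^{-10} + e^{-40} + ... = 1 + o(1) as λ → 0
```

The k = 0 term dominates the remainder, so the whole sum is \(1+O(e^{-c/2\lambda })\), i.e. a factor O(1) per vertex as claimed.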

D. :

(Convergence of the sum over forests) Following the same technique as in § 5.1, we integrate space-time variables in chronological order, yielding for n vertices

$$\begin{aligned}&\int d{\bar{z}}_{\alpha _1} \cdots d{\bar{z}}_{\alpha _n} \sum _{j'_{\alpha _1},\ldots , j'_{\alpha _n}\ge 0} \sum _{j''_{\alpha _1},\ldots ,j''_{\alpha _n}\ge 0} \ \int dz''\, A((t_1,x_1),z'') \ \cdot \ \nonumber \\&\quad \cdot \ \int dz_{\alpha _1} \partial ^{\kappa '_{\alpha _1}} \tilde{B}^{j'_{\alpha _1}} (z'',z_{\alpha _1}) |j'_{\alpha _1}\rangle g^{(0)} {\underline{G}}^{2\rightarrow }(z_{\alpha _1},{\bar{z}}_{\alpha _1}) \int dz''_{\alpha _1} \partial ^{\kappa ''_{\alpha _1}}\tilde{A}^{j''_{\alpha _1}}({\bar{z}}_{\alpha _1},z''_{\alpha _1}) \langle j''_{\alpha _1}| \cdot \nonumber \\&\quad \!\!\!\!\cdot \ \int dz_{\alpha _2}\, \partial ^{\kappa '_{\alpha _2}} \tilde{B}^{j'_{\alpha _2}} (z''_{\alpha _1},z_{\alpha _2}) |j'_{\alpha _2}\rangle g^{(0)}{\underline{G}}^{2\rightarrow }(z_{\alpha _2},{\bar{z}}_{\alpha _2}) \!\int dz''_{\alpha _2}\, \partial ^{\kappa ''_{\alpha _2}} \tilde{A}^{j''_{\alpha _2}} (z_{\alpha _2},z''_{\alpha _2}) \langle j''_{\alpha _2}| \cdots \nonumber \\&\quad \cdot \ \, \int dz_{\alpha _n}\, \partial ^{\kappa '_{\alpha _n}} \tilde{B}^{j'_{\alpha _n}} (z''_{\alpha _{n-1}},z_{\alpha _n}) |j'_{\alpha _n}\rangle g^{(0)} {\underline{G}}^{2\rightarrow }(z_{\alpha _n}, {\bar{z}}_{\alpha _n}) \int dz''_{\alpha _n}\partial ^{\kappa ''_{\alpha _n}} \tilde{A}^{j''_{\alpha _n}} (z_{\alpha _n},z''_{\alpha _n}) \langle j''_{\alpha _n}| \cdot \nonumber \\&\quad \cdot \int dy_1\, B(z''_{\alpha _n},(0,y_1))\, e^{\frac{\lambda }{\nu ^{(0)}} h_0(y_1)} \end{aligned}$$
(6.20)

where \({\underline{G}}^{2\rightarrow }(\cdot ,\cdot ):=\delta (\cdot ,\cdot )+G^{2\rightarrow }(\cdot ,\cdot )\) as in (5.37). Since, for \(t\gtrsim 1\), \(G^{2\rightarrow }e^{\tilde{\nu }^{(0)}t\Delta }(\cdot ,\cdot )\lesssim \int _0^{O(1)} dt'\, e^{\tilde{\nu }^{(0)}(t'+ct)\Delta }(\cdot ,\cdot )\lesssim e^{c'\tilde{\nu }^{(0)}t\Delta }(\cdot ,\cdot )\), the contribution of the integration in the \({\bar{z}}\)’s may be absorbed into the coupling constant \(g^{(0)}\) and a redefinition of \(\tilde{\nu }^{(0)}\). Thus we may assume that \({\bar{z}}_{\alpha _i}\equiv z_{\alpha _i}\). Furthermore, since \((|j\rangle )_{j\ge 0}\) is an orthonormal basis, \(j''_{\alpha _i}=j'_{\alpha _{i+1}}\), and \(\partial ^{\kappa ''_{\alpha _i}}\tilde{A}^{j''_{\alpha _i}}({z}_{\alpha _i},\cdot )\langle j''_{\alpha _i}|\ \cdot \ \partial ^{\kappa '_{\alpha _{i+1}}} \tilde{B}^{j'_{\alpha _{i+1}}}(\cdot ,z_{\alpha _{i+1}}) |j'_{\alpha _{i+1}}\rangle = \partial ^{\kappa ''_{\alpha _i}+\kappa '_{\alpha _{i+1}}} \tilde{G}^{j''_{\alpha _i}}(z_{\alpha _i},z_{\alpha _{i+1}})\). We let \(z_{\alpha _i}\equiv (t_{i+1},x_{i+1})\); rewrite the derivatives \(\partial ^{\kappa ''_{\alpha _i}+\kappa '_{\alpha _{i+1}}}\) as \(\partial ^{\kappa _{i+1}}\), with \(|\kappa _{i+1}|=0,3\) or 6, which produces an equivalent factor \(O((1+t_{i+1}-t_{i+2})^{-|\kappa _{i+1}|/2})\); bound \(\sum _{j\ge 0}\tilde{G}^j((t,x),(t',x'))\) by \(O(1) p_{\frac{\nu ^{(0)}}{c}(t-t')}(x-x')\); and use for a sequence of two low-momentum \(\tilde{G}\)-propagators our first power-counting estimate (compare with (3.19)),

$$\begin{aligned}&\int _{t_{i+2}}^{t_i} dt_{i+1}\, (1+t_i-t_{i+1})^{-|\kappa _i|/2} (p_{\frac{\nu ^{(0)}}{c}(t_i-t_{i+1})}*p_{\frac{\nu ^{(0)}}{c}(t_{i+1}-t_{i+2})})(x_i-x_{i+2}) \nonumber \\&\quad \le O(1)\, p_{\frac{\nu ^{(0)}}{c}(t_i-t_{i+2})}(x_i-x_{i+2}), \end{aligned}$$
(6.21)
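For \(|\kappa _i|=0\) the estimate (6.21) reduces to the exact semigroup (Chapman–Kolmogorov) property \(p_s*p_t=p_{s+t}\) of the heat kernel, which can be checked numerically (a toy check in \(d=1\), \(\nu =1\), with arbitrary illustrative arguments):

```python
import math

def p(t, x, nu=1.0):
    """Heat kernel p_t(x) = (4πνt)^{-1/2} exp(-x²/4νt) in d = 1."""
    return math.exp(-x * x / (4 * nu * t)) / math.sqrt(4 * math.pi * nu * t)

def convolve(s, t, x, h=0.01, L=20.0):
    """Riemann-sum approximation of the spatial convolution (p_s * p_t)(x)."""
    n = int(2 * L / h)
    return sum(p(s, x - (-L + i * h)) * p(t, -L + i * h) * h for i in range(n))

lhs = convolve(1.0, 2.0, 0.7)   # (p_1 * p_2)(0.7)
rhs = p(3.0, 0.7)               # p_3(0.7)
```

Estimate (6.21) then says that the extra polynomial factor \((1+t_i-t_{i+1})^{-|\kappa _i|/2}\), \(|\kappa _i|>2\), is integrable in \(t_{i+1}\) and does not spoil this convolution structure.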

an estimate similar to but more precise than (3.19), valid for \(|\kappa _i|>2\). If \(|\kappa _i|=3\), resp. 6, then we apply (6.21) with \(|\kappa _i|\) replaced by \(2^+\), keeping \((1+t_{i}-t_{i+1})^{-(\frac{1}{2})^-}\), resp. \((1+t_{i}-t_{i+1})^{-2^-}\) in store. If \(\kappa _i=0\) then \(\kappa _{i+1}\not =0\) and \(t_{i+1}-t_{i+2}\gtrsim t_i-t_{i+1}\); we obtain similarly a factor O(1) and keep in store \((1+t_{i+1}-t_{i+2})^{-(\frac{1}{2})^-}\), resp. \((1+t_{i+1}-t_{i+2})^{-2^-}\). Extra factors \((1+t_{i}-t_{i+1})^{-3/2}\) are used to iterate, so that there remains in store exactly \(\prod _i (1+t_{i}-t_{i+1})^{-({1\over 2})^-}\), where the product ranges over low-momentum propagators. Each scale 0 propagator \(\tilde{G}^0((t_i,\cdot ),(t_{i+1},\cdot ))\) \((t_i-t_{i+1}\le 1)\), on the other hand, has \(\kappa _i=0\), but benefits from a small factor \(O(g^{(0)})\) which can be rewritten in the form \((1+t_i-t_{i+1})^{-({1\over 2})^-} O(g^{(0)})\). The conclusion is the following. Rescale the coordinates, \((t_1,x_1)\rightsquigarrow (\varepsilon ^{-1}t_1,\varepsilon ^{-1/2}x_1)\). The main term in (6.20) is

$$\begin{aligned}&\int dy_1\, \tilde{G}_{eff}((\varepsilon ^{-1}t_1,\varepsilon ^{-1/2} x_1),(0,y_1)) e^{\frac{\lambda }{\nu ^{(0)}} h_0}(y_1) \nonumber \\&\quad = 1+e^{\nu _{eff}\varepsilon ^{-1} t_1\Delta } (e^{\frac{\lambda }{\nu ^{(0)}} h_0}-1)(\varepsilon ^{-1/2}x_1) \nonumber \\&\qquad + O(\varepsilon ) e^{(\nu ^{(0)}+O(\lambda ^2))\varepsilon ^{-1}t_1\Delta } (e^{\frac{\lambda }{\nu ^{(0)}} h_0}-1)(\varepsilon ^{-1/2}x_1) \nonumber \\&\quad = 1+O(\lambda e^{\frac{\lambda }{\nu ^{(0)}} ||h_0||_{\infty }}) \, \varepsilon ^{d/2} ||h_0||_{L^1} \end{aligned}$$
(6.22)

(\(n=0\)), while terms with \(n\ge 1\) are bounded by \(O(\lambda \varepsilon ^{d/2})\) times a prefactor

$$\begin{aligned} \prod _{i=1}^n \big [ O(\lambda )(1+t_{i}-t_{i+1})^{-(\frac{1}{2})^-} \big ] \le O(\lambda ) t_1^{- (\frac{1}{2})^-}. \end{aligned}$$
(6.23)

yielding after rescaling an error term \(O(\lambda \varepsilon ^{({1\over 2})^-})\). Hence we simply get

$$\begin{aligned} \langle h(\varepsilon ^{-1}t_1,\varepsilon ^{-1/2}x_1)\rangle \equiv \frac{\nu ^{(0)}}{\lambda } \langle \log (w(\varepsilon ^{-1}t_1,\varepsilon ^{-1/2}x_1))\rangle =O( e^{\frac{\lambda }{\nu ^{(0)}} ||h_0||_{\infty }}) \, \varepsilon ^{d/2} ||h_0||_{L^1}.\nonumber \\ \end{aligned}$$
(6.24)

6.4 KPZ Truncated 2-Point Function

We are now interested in the large-scale behavior of the connected 2-point function (i.e. the covariance function),

$$\begin{aligned} \langle h(t_1,x_1)h(t_2,x_2)\rangle _{c}= & {} \left( \frac{\nu ^{(0)}}{\lambda }\right) ^2 \langle \log (w(t_1,x_1))\log (w(t_2,x_2))\rangle _{c}\nonumber \\=: & {} \left( \frac{\nu ^{(0)}}{\lambda }\right) ^2 F_{2,c}((t_1,x_1),(t_2,x_2)). \end{aligned}$$
(6.25)

A simple way to generate the connected two-point function is to consider two independent replicas \(\eta _1,\eta _2\) of \(\eta \); then

$$\begin{aligned} \langle h(t_1,x_1)h(t_2,x_2)\rangle _c= {1\over 2}\Big \langle \Big (h(t_1,x_1|\eta _1)-h(t_1,x_1|\eta _2)\Big ) \Big ( h(t_2,x_2|\eta _1)- h(t_2,x_2|\eta _2) \Big ) \Big \rangle ,\nonumber \\ \end{aligned}$$
(6.26)

where \(\langle \, \cdot \, \rangle \) now refers to the expectation with respect to the pair \((\eta _1,\eta _2)\). We make a cluster expansion as above in the propagators and in the covariance kernels of \(\eta _1\) and \(\eta _2\), and get an expression similar to (6.8). By symmetry \((1\leftrightarrow 2)\), there is at least one four-leg vertex, which means that there is (at least) one pairing \(\langle \eta _p(z_{\beta _1})\eta _p(z_{\beta _2})\rangle \) \((p=1,2)\), \(z_{\beta _i}=(t_{\beta _i},x_{\beta _i})\) \((i=1,2)\), coming from a \(V_{\beta _1}(z_{\beta _1})\) insertion on the 1st string, and a \(V_{\beta _2}(z_{\beta _2})\) insertion on the 2nd string; the pairing vanishes unless \(z_{\beta _1},z_{\beta _2}\) are in the same box \(\Delta \in \mathbb {D}^0\) or in neighboring boxes.
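The replica representation (6.26) is a polarization identity: for any two observables f, g of \(\eta \) and independent copies \(\eta _1,\eta _2\), one has \({1\over 2}\langle (f(\eta _1)-f(\eta _2))(g(\eta _1)-g(\eta _2))\rangle =\langle fg\rangle -\langle f\rangle \langle g\rangle \). A toy check, with \(\eta \) uniform on three points and hypothetical observables of our choosing:

```python
from itertools import product

etas = [-1.0, 0.0, 2.0]                     # η uniform on a 3-point set
f = lambda e: e * e                         # stand-ins for h(t1,x1|η), h(t2,x2|η)
g = lambda e: e ** 3 + 1.0

n = len(etas)
mean = lambda h: sum(h(e) for e in etas) / n
cov = mean(lambda e: f(e) * g(e)) - mean(f) * mean(g)

# replica form: (1/2) E[(f(η1) - f(η2)) (g(η1) - g(η2))] over the independent pair
replica = sum((f(a) - f(b)) * (g(a) - g(b)) for a, b in product(etas, repeat=2)) / (2 * n * n)
```

Expanding the product and using the independence of \(\eta _1,\eta _2\) makes the cross terms factorize, which is why only genuinely connected contributions survive in (6.26).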

Choose among the existing such \(V_{\beta _i}(z_{\beta _i})\), \(i=1,2\), the earliest one anti-chronologically, i.e. the one with the largest \(t_{\beta _i}\). Because of the finite-range nature of the kernel \(\langle \eta (\cdot )\eta (\cdot )\rangle \), there exists a pairing \(\langle \eta _p(z_{\beta _1}) \eta _p(z_{\beta '_2})\rangle \) with \(d(\Delta _{\beta _2},\Delta _{\beta '_2})=O(1)\), and similarly a pairing \(\langle \eta _p(z_{\beta '_1})\eta _p(z_{\beta _2})\rangle \) with \(d(\Delta _{\beta _1},\Delta _{\beta '_1})=O(1)\); it may of course happen that \(\beta _1=\beta '_1\), \(\beta _2=\beta '_2\). Call \(\mathcal{F}^0_{\Delta _1,\Delta _2}\) the set of forests such that \(z_{\beta _i}\), \(i=1,2\), belong to fixed boxes \(\Delta _{\beta _1}:=\Delta _1\), \(\Delta _{\beta _2}:=\Delta _2\); \(\mathcal{F}^0_{\Delta _1,\Delta _2}\) is empty unless \(d(\Delta _1,\Delta _2)=O(1)\). Applying explicitly the operator \(\frac{d}{d\gamma _{\beta _1}} \frac{d}{d\gamma _{\beta _2}}\) to the r.-h.s. of (6.8), one gets

$$\begin{aligned}&F_{2,c}((t_1,x_1),(t_2,x_2))=\sum _{\Delta _1,\Delta _2\in \mathbb {D}^0} \sum _{{\mathbb {F}}\in \mathcal{F}^0_{\Delta _1,\Delta _2}} \prod _{\alpha } \frac{1}{2\mathrm{i}\pi } \oint _{\partial B(0,r_{\alpha })} \frac{d\gamma _{\alpha }}{\gamma _{\alpha }^2} \nonumber \\&\quad \left\langle \mathcal{S}_1(\varvec{\gamma })^{-1} \mathcal{S}_2 (\varvec{\gamma })^{-1} \ \cdot \ \frac{d}{d\gamma _{\beta _1}} \left( \mathcal{S}_1(\varvec{\gamma })\right) \ \frac{d}{d\gamma _{\beta _2}} \left( \mathcal{S}_2(\varvec{\gamma }) \right) \right\rangle \end{aligned}$$
(6.27)

Compared to the previous subsection, we must now add a supplementary estimate in the preparatory phase.

A’. (Power-counting factors for \(\eta \)-pairings between strings) As mentioned in §3.2 and illustrated in §5.2, \(\eta \)-pairings produce outer contractions linking different strings. In contrast to inner contractions inside 0-th scale clusters, which contribute to the two-point function, outer contractions produce four-point functions, which have not been renormalized. We must now show that the power-counting effect of an outer contraction is comparable to that described in (6.7). For that, consider parallel chronological sequences on two strings,

$$\begin{aligned}&\int _{\Delta _1^0} dz'_1\, \int _{\Delta _{2}^0} dz'_{2}\, \langle \eta (z'_1)\eta (z'_{2})\rangle _{\varvec{s}(\varvec{w})} \nonumber \\&\quad \left( \cdots A^{j_1}(\cdot ,\cdot )\langle j_1|\ B^{j_1}(\cdot ,z'_1) |j_1\rangle \, A^{j'_1}(z'_1,\cdot ) \langle j'_1|\ \cdots \right) \nonumber \\&\quad \left( \cdots A^{j_{2}}(\cdot ,\cdot ) \langle j_{2}|\ B^{j_{2}}(\cdot ,z'_{2}) |j_{2}\rangle \ A^{j'_{2}}(z'_{2},\cdot ) \langle j'_2|\, \cdots \right) \end{aligned}$$
(6.28)

in which vertex integration points \(z'_1,z'_{2}\) are located in neighboring boxes so that the average \(\langle \eta (z'_1) \eta (z'_{2})\rangle _{\varvec{s}(\varvec{w})}\) does not vanish. Ladder diagrams considered in §5.2, see (5.41), are of this type. Following the chronological integration procedure of D., we replace (supposedly already integrated) outgoing legs \(A^{j'_1},A^{j'_{2}}\) by 1 and integrate over \(z'_1,z'_{2}\). Since \(d(\Delta ^0_1,\Delta ^0_2)=O(1)\), we may just as well assume (up to a volume prefactor O(1)) that \(\Delta ^0_1=\Delta ^0_{2}\). We are free to choose the ordering of the strings and may therefore suppose that \(j_1\le j_{2}\). Thanks to the exponential decay of \(B^{j_1},B^{j_{2}}\), the space-time integration \(\int dz'_1\, \int dz'_{2}\) costs a volume factor \(O(2^{j_1(1+\frac{d}{2})})\). On the other hand, were \(z'_1,z'_{2}\) not constrained to be located in the same scale 0 box, we would get instead a volume factor \(O\left( \prod _{i=1}^{2} (2^{j_i(1+\frac{d}{2})}) \right) \). The overall gain is therefore bounded up to a constant by

$$\begin{aligned} 2^{-j_2(1+\frac{d}{2})} \le \prod _{i=1}^{2} 2^{-5j_i/4} \end{aligned}$$
(6.29)

if \(d\ge 3\), which is our second key power-counting estimate. This shows that we have produced a small factor \(O(2^{-\frac{5}{4}j})\) per low-momentum field \(G^j\), or equivalently \(O((1+t-t')^{-5/4})\) per low-momentum field \(G((t,x),(t',x'))\), \(t-t'\gtrsim 1\).
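On exponents, the bound (6.29) reduces to \(j_2(1+\frac{d}{2})\ge \frac{5}{4}(j_1+j_2)\) for \(0\le j_1\le j_2\) and \(d\ge 3\); a brute-force verification over a grid of scales and dimensions (illustrative only):

```python
from fractions import Fraction

# Check (6.29) on exponents: 2^{-j2(1+d/2)} <= prod_i 2^{-5 j_i/4}
# amounts to j2*(1+d/2) >= (5/4)*(j1+j2) whenever 0 <= j1 <= j2, d >= 3.
ok = all(
    j2 * (1 + Fraction(d, 2)) >= Fraction(5, 4) * (j1 + j2)
    for d in range(3, 8)          # a few sample dimensions d >= 3
    for j2 in range(30)           # scale indices with j1 <= j2
    for j1 in range(j2 + 1)
)
print(ok)  # True

# Saturation at d = 3 and j1 = j2: both exponents equal 5j/2,
# consistent with the optimality claim in the Remark below
sharp = all(j * Fraction(5, 2) == Fraction(5, 4) * (2 * j) for j in range(10))
print(sharp)  # True
```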

Remark

In the case \(d=3\), this upper bound is optimal (for \(j_2=j_1+O(1)\)), and not quite as good as the \(O(2^{-\frac{3}{2}j})\) factor due to renormalization, compare with (6.7). However, one easily shows that the resulting small factor is actually comparable or smaller than (6.7) if \(d\ge 4\). In any case, in order to be able to integrate (see D.) we simply need a small factor \(O(2^{-(1+2\kappa )j})\) per low-momentum field \(G^j\), with \(\kappa >0\). In the KPZ\(_2\) case (\(d=2\)), one finds \(\kappa =0\); this borderline case is no longer super-renormalizable in the infra-red: four-point functions are superficially divergent in the QFT terminology, which leads to a floating (i.e. scale-dependent) coupling constant g.
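The following is an extrapolation on our part, not a formula from the text: distributing the gain symmetrically over the two fields as \(2^{-j_2(1+\frac{d}{2})}\le \prod _{i=1}^2 2^{-j_i(1+\frac{d}{2})/2}\) (valid for \(j_1\le j_2\)) gives a per-field exponent \(\frac{1}{2}(1+\frac{d}{2})=1+2\kappa \), i.e. \(\kappa =\frac{d-2}{8}\), which reproduces \(\kappa =0\) at \(d=2\) and is consistent with the exponent \(\frac{5}{4}=1+2\cdot \frac{1}{8}\) at \(d=3\):

```python
from fractions import Fraction

def kappa(d):
    # Solve (1 + d/2)/2 = 1 + 2*kappa for the per-field gain exponent;
    # this parametrization is our inference from the Remark, not the paper's.
    return (Fraction(1, 2) * (1 + Fraction(d, 2)) - 1) / 2

print(kappa(2))  # 0   -> borderline KPZ_2 case, no small factor
print(kappa(3))  # 1/8 -> per-field exponent 1 + 2*(1/8) = 5/4, as in (6.29)
```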

So much for A’. Resuming now our previous discussion, and proceeding as in §5.2, one can prove in exactly the same way that \(F_{2,c}=O(1)\). The only difference is that (compare with the discussion below (6.21)), using our second power-counting estimate leaves, in the worst case, only a factor \((1+t_i-t_{i+1})^{-(\frac{1}{4})^-}\) per low-momentum G.

It remains to see how one gets the prefactor \(O(\varepsilon ^{\frac{d}{2}-1})\) and the scaling function \(K_{eff}\). For that, we remark, proceeding as in D., that [see (6.17)] \( \frac{d}{d\gamma _{\beta _1}} \mathcal{S}_i(\varvec{\gamma })\), \(i=1,2\) is equal to

$$\begin{aligned}&\Big \{ \tilde{G}_{eff}((t_i,x_i),(t'_i,x'_i)) + O((t_i-t'_i)^{-({1\over 2})^-}) \, G_{\nu ^{(0)}/c}((t_i,x_i),(t'_i,x'_i))\Big \} \ \cdot \ \nonumber \\&\quad R_{\eta }^{(0)}(\tau ^0=0) \gamma _{\beta _1}V_{\beta _i}(\cdot )( \cdot ,\cdot )\cdots \end{aligned}$$
(6.30)

where \(t'_i\in \Delta _i\equiv [t^+_{\Delta _i}-1,t^+_{\Delta _i})\times \bar{\Delta }_i\). Then the main term of \(\mathcal{S}_i(\varvec{\gamma })^{-1}\) is

$$\begin{aligned} \int dy_i\, \tilde{G}_{eff}((t_i,x_i),(0,y_i))e^{\frac{\lambda }{\nu ^{(0)}}h_0(y_i)}=1+O(\lambda e^{\frac{\lambda }{\nu ^{(0)}}||h_0||_{\infty }})\, t_i^{-d/2}\, ||h_0||_{L^1}. \end{aligned}$$
(6.31)

Error terms take into account: vertex insertions along any of the two strings, costing either the already accounted for \(O((t_i-t'_i)^{-1/2})\) or \(O((t'_i)^{-1/2})\) for the two numerators, and \(O(t_i^{-1/2})\) for the two denominators; \(\eta \)-pairings between strings (see A’.), by construction at times \(\le t^+_{\Delta _1}+O(1)=t^+_{\Delta _2}+O(1)\), costing \(O(((t^+_{\Delta _1})^{-(\frac{1}{4})^-})^2)=O((t^+_{\Delta _1})^{-({1\over 2})^-})\); corrections in \(O(t_i^{-d/2})\) or \(O((t'_i)^{-d/2})\) due to the initial condition.

Concluding: replacing the sum over boxes \(\Delta _i=\Delta _{\beta _i}\), \(\Delta _{\beta '_i}\), and the integral over \(z_{\beta _i},z_{\beta '_i}\), \(i=1,2\), by O(1) times a single integral over a single space-time variable \((t,x)\) located at distance O(1) of all of these, and rescaling the coordinates, we get asymptotically in the limit \(\varepsilon \rightarrow 0\) if \(t_1\ge t_2\)

$$\begin{aligned}&F_{2,c}((\varepsilon ^{-1}t_1,\varepsilon ^{-1/2}x_1),(\varepsilon ^{-1}t_2,\varepsilon ^{-1/2}x_2)) \equiv \frac{(g^{(0)})^2}{D^{(0)}} \langle h(\varepsilon ^{-1}t_1,\varepsilon ^{-1/2}x_1) h(\varepsilon ^{-1} t_2,\varepsilon ^{-1/2}x_2)\rangle _c \end{aligned}$$
(6.32)
$$\begin{aligned}&\sim _{\varepsilon \rightarrow 0} F(\lambda )\ (g^{(0)})^2 \ \cdot \ \varepsilon ^{-1-d/2} \int _{0}^{t_2} dt \int dx\ \cdot \ \varepsilon ^{d/2}p_{\nu _{eff}(t_1-t)}(x_1-x) \cdot \, \varepsilon ^{d/2}p_{\nu _{eff}(t_2-t)}(x_2-x) \nonumber \\&\quad \sim _{\varepsilon \rightarrow 0} F(\lambda ) \frac{(g^{(0)})^2}{D^{(0)}} \varepsilon ^{\frac{d}{2}-1} \ \langle h(t_1,x_1) h(t_2,x_2)\rangle _{0;\nu _{eff},D^{(0)}} \end{aligned}$$
(6.33)

up to error terms smaller by a factor \(O(\varepsilon ^{({1\over 2})^-})\), for some function \(F(\lambda )=1+O(\lambda ^2)\) independent of the coordinates \((t_1,x_1),(t_2,x_2)\). Letting

$$\begin{aligned} D_{eff}:=F(\lambda )D^{(0)}, \end{aligned}$$
(6.34)

and comparing (6.32) with (6.33), one sees that

$$\begin{aligned} \langle h(\varepsilon ^{-1}t_1,\varepsilon ^{-1/2}x_1) h(\varepsilon ^{-1} t_2,\varepsilon ^{-1/2}x_2)\rangle _c \sim _{\varepsilon \rightarrow 0} \varepsilon ^{\frac{d}{2}-1} \langle h(t_1,x_1) h(t_2,x_2)\rangle _{\lambda =0;\nu _{eff},D_{eff}},\nonumber \\ \end{aligned}$$
(6.35)

where the coefficient \(D_{eff}=D^{(0)}(1+O(\lambda ^2))\) is interpreted as the effective noise strength.
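For reference, the limiting covariance in (6.33) and (6.35) is the standard Edwards–Wilkinson covariance: solving the linear equation (1.2) with zero initial data by Duhamel's formula, and approximating the regularized noise \(\eta \) by white noise at large scales, one gets for \(t_1\ge t_2\) (a standard computation, spelled out here for the reader's convenience)

```latex
\langle \phi(t_1,x_1)\phi(t_2,x_2)\rangle
  = D\int_0^{t_2} dt \int_{\mathbb{R}^d} dx\;
    p_{\nu(t_1-t)}(x_1-x)\, p_{\nu(t_2-t)}(x_2-x),
\qquad
p_{\nu t}(x) = \frac{e^{-|x|^2/4\nu t}}{(4\pi \nu t)^{d/2}},
```

which is precisely the integral appearing in (6.33), with \(\nu =\nu _{eff}\).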

The leading term for \(D_{eff}-D^{(0)}\) may be computed as follows. Following the expansion in the number of vertices made in §4.1, see in particular (5.28), the main term in \(\langle h(\varepsilon ^{-1}t_1,\varepsilon ^{-1/2}x_1) h(\varepsilon ^{-1} t_2,\varepsilon ^{-1/2}x_2)\rangle _c\) is obtained from (6.30) by simply contracting a vertex \(V_{\beta _1}\equiv B((t_1,x_1),(t'_1,x'_1)) g^{(0)}(\eta (t'_1,x'_1)-v^{(0)})A((t'_1,x'_1),\cdot )\) on the first string with a vertex \(V_{\beta _2}\) on the second string. Next comes the leading-order correction, obtained by double-contracting \(n=2\) vertex contributions on each string, yielding as in (5.27)

$$\begin{aligned}&\Big \langle \Big [ (g^{(0)})^2 \eta (t'_1,x'_1) A((t'_1,x'_1),\cdot )B(\cdot ,(t''_1,x''_1)) \eta (t''_1,x''_1) \Big ] \ \cdot \nonumber \\&\quad \Big [ (g^{(0)})^2 \eta (t'_2,x'_2) A((t'_2,x'_2),\cdot )B(\cdot ,(t''_2,x''_2)) \eta (t''_2,x''_2) \Big ] \Big \rangle \end{aligned}$$
(6.36)

Displacing the four outer \(B\)- and \(A\)-propagators \(B(\cdot ,(t'_1,x'_1))\), \(A((t''_1,x''_1),\cdot )\), \(B(\cdot ,(t'_2,x'_2))\), \(A((t''_2,x''_2),\cdot )\) to the same point \((t'_1,x'_1)\), integrating over \((t''_1,x''_1),(t'_2,x'_2),(t''_2,x''_2)\) and taking the limit \(t'_1\rightarrow +\infty \) yields an effective contribution

$$\begin{aligned} C_4:= & {} (g^{(0)})^4 \lim _{t'_1\rightarrow +\infty } \int _0^{t'_1} dt''_1 \int dx''_1 \ \int _0^{+\infty } dt'_2 \int dx'_2\ \int _0^{t'_2} dt''_2\int dx''_2 \nonumber \\&(\omega *\omega )(t'_1-t'_2,x'_1-x'_2) \, G(t'_1-t''_1,x'_1-x''_1)\nonumber \\&\quad G(t'_2-t''_2,x'_2-x''_2)\, (\omega *\omega )(t''_1-t''_2,x''_1-x''_2) \end{aligned}$$
(6.37)

Neglected terms involving e.g. \(A((t'_1,x'_1),\cdot )-A((t''_1,x''_1),\cdot )\) involve a low-momentum gradient, whence an extra \(O(\varepsilon ^{1/2})\) which vanishes in the scaling limit. Then \(C_4\) is added to the main term, which (after displacing outer \(B\)- and \(A\)-propagators) becomes \(C_2:=(g^{(0)})^2 \int _0^{+\infty } dt'_2 \int dx'_2\, (\omega *\omega )(t'_1-t'_2,x'_1-x'_2)\). Thus \(\frac{D_{eff}}{D^{(0)}}-1\) is given to leading order by the quotient \(C_4/C_2=O((g^{(0)})^2)=O(\lambda ^2)\).
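Summarizing the above in a single display (our paraphrase, with \(C_2,C_4\) as just defined):

```latex
\frac{D_{eff}}{D^{(0)}} - 1 = \frac{C_4}{C_2} + \text{(higher-order corrections)},
\qquad
\frac{C_4}{C_2} = O\big((g^{(0)})^2\big) = O(\lambda^2).
```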

6.5 Higher-Order KPZ Truncated Functions

We must still prove that higher-order truncated functions

$$\begin{aligned} \langle h(t_1,x_1)\cdots h(t_N,x_N)\rangle _c=:\left( \frac{\nu ^{(0)}}{\lambda }\right) ^N F_{N,c}((t_1,x_1),\ldots ,(t_N,x_N)), \end{aligned}$$
(6.38)

\((N>2)\) are negligible in the large-scale limit, so that the KPZ field is asymptotically Gaussian, with correlations given by \(K_{eff}=F_{2,c}\). To be specific we prove this for \(N=4\), but the reader may easily adapt the following arguments to arbitrary N. Let \(F_{4,c}((t_i,x_i)_{i\le 4}):=\langle \log (w(t_1,x_1))\cdots \log (w(t_4,x_4))\rangle _c\). The “replica trick” of §6.4 extends, with now 4 replicas of \(\eta \),

$$\begin{aligned} F_{4,c}:=\frac{1}{4} \Big \langle \prod _{\ell =1}^4 \sum _{k=0}^3 e^{\mathrm{i}k\pi /2} \log w(t_{\ell },x_{\ell }|\eta _{k+1}) \Big \rangle , \end{aligned}$$
(6.39)

a classical formula immediately generalized to arbitrary N as

$$\begin{aligned} F_{N,c}:=\frac{1}{N} \Big \langle \prod _{\ell =1}^N \sum _{k=0}^{N-1} e^{2\mathrm{i}k\pi /N} \log w(t_{\ell },x_{\ell }|\eta _{k+1}) \Big \rangle , \end{aligned}$$
(6.40)

originally proved by P. Cartier. Then the connected function \(F_{4,c}\) is obtained by selecting in (6.8) those contributions for which there is a permutation \(\sigma \) of the index set \(\{1,\ldots ,4\}\), and for each \(i=1,2,3\), paired vertex insertions \(V_{\beta _i}\), \(V_{\beta '_i}\) on strings number \(\sigma (i),\sigma (i+1)\). Proceeding as in §5.4, we obtain an O(1) denominator of order \(3\times 2=6\), multiplied by an expression bounded by (after coordinate rescaling) \((\varepsilon ^{\frac{d}{2}-1})^3\) instead of the expected overall scaling

$$\begin{aligned}&\sum _{\text{ pairings }\ \sigma } K_{eff}((\varepsilon ^{-1} t_{\sigma (1)},\varepsilon ^{-1/2} x_{\sigma (1)}), (\varepsilon ^{-1} t_{\sigma (2)},\varepsilon ^{-1/2} x_{\sigma (2)})) \ \cdot \ \nonumber \\&\quad \cdot \ K_{eff}((\varepsilon ^{-1} t_{\sigma (3)},\varepsilon ^{-1/2} x_{\sigma (3)}),(\varepsilon ^{-1} t_{\sigma (4)},\varepsilon ^{-1/2} x_{\sigma (4)}))=O( (\varepsilon ^{\frac{d}{2}-1})^2) \end{aligned}$$
(6.41)

for a four-point function.
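The replica formula (6.40) can be tested directly at small N. Since, after expanding the product over independent replicas, the identity is polynomial in the joint moments \(m_S=\langle \prod _{\ell \in S}X_\ell \rangle \) of a single replica, arbitrary numerical values of the \(m_S\) suffice; a sketch for \(N=3\), comparing with the third joint cumulant:

```python
import cmath
from itertools import product

# Check (6.40) for N = 3: with three independent replicas,
# (1/3) < prod_{l=1}^3 sum_{k=0}^2 w^k X_l^{(k+1)} >, w = e^{2 i pi/3},
# equals the third joint cumulant of (X_1, X_2, X_3).
N = 3
w = cmath.exp(2j * cmath.pi / N)

# Arbitrary values for the moments m_S = <prod_{l in S} X_l> of one replica
m = {(1,): 0.7, (2,): -1.3, (3,): 2.1,
     (1, 2): 0.4, (1, 3): -0.8, (2, 3): 1.9, (1, 2, 3): -2.5}

def moment(ks):
    # <prod_l X_l^{(k_l+1)}> factorizes over blocks sharing a replica index
    blocks = {}
    for l, k in enumerate(ks, start=1):
        blocks.setdefault(k, []).append(l)
    val = 1.0
    for B in blocks.values():
        val *= m[tuple(B)]
    return val

lhs = sum(w ** sum(ks) * moment(ks)
          for ks in product(range(N), repeat=N)) / N

cum3 = (m[(1, 2, 3)]
        - m[(1, 2)] * m[(3,)] - m[(1, 3)] * m[(2,)] - m[(2, 3)] * m[(1,)]
        + 2 * m[(1,)] * m[(2,)] * m[(3,)])

print(abs(lhs - cum3) < 1e-10)  # True
```

The roots-of-unity coefficients kill all block structures except those weighted as in the cumulant: same-replica triples get weight 1, pairs weight \(-1\), fully split terms weight \(+2\).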

6.6 A Remark on Lower Large-Deviations for h

Similar computations can be made for \(\langle w^{-N}(t,x)\rangle \), where \(N=1,2,\ldots \), \(N=O(1)\). Compared with the previous subsections, we now get a product \(\prod _{i=1}^N \Big (\mathcal{S}_i(\varvec{\gamma })\Big )^{-1}\) instead of \(\prod _{i=1}^N \log \Big (\mathcal{S}_i(\varvec{\gamma })\Big )\). It is easy to see that we get in the end

$$\begin{aligned} \langle w^{-N}(t,x)\rangle =O(1).\end{aligned}$$
(6.42)

Using Markov’s inequality, e.g. for \(N=1\), then implies for \(A>0\)

$$\begin{aligned} {\mathbb {P}}[h(t,x)<-A]={\mathbb {P}}[w^{-1}(t,x)>e^{\frac{\lambda }{\nu ^{(0)}}A}]=O(1)\ e^{-\frac{\lambda }{\nu ^{(0)}}A},\end{aligned}$$
(6.43)

an exponential lower large-deviation estimate for \(h(t,x)\).
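The step from (6.42) to (6.43) is just Markov's inequality applied to \(w^{-1}\); spelled out, using the Cole–Hopf relation \(h=\frac{\nu ^{(0)}}{\lambda }\log w\) implicit in the exponent of (6.43):

```latex
\mathbb{P}\big[h(t,x)<-A\big]
 = \mathbb{P}\Big[w^{-1}(t,x)>e^{\frac{\lambda}{\nu^{(0)}}A}\Big]
 \le e^{-\frac{\lambda}{\nu^{(0)}}A}\,\big\langle w^{-1}(t,x)\big\rangle
 = O(1)\, e^{-\frac{\lambda}{\nu^{(0)}}A}.
```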

This is, however, weaker than the expected Gaussian lower large-deviation bound

$$\begin{aligned} {\mathbb {P}}[h(t,x)<-A]\lesssim e^{-cA^2}, \end{aligned}$$
(6.44)

proved in a deterministic setting using Gaussian concentration inequalities in Carmona and Hu [15], Theorem 1.5. It is plausible that their results extend to our setting by generalizing to regularized white noise classical large deviation results for Lipschitz functions of vector-valued Gaussian random variables, see e.g. [4], §7.3.