1 Introduction

We study the emergence of continuum mean curvature interface flow from a class of microscopic interacting particle systems. Such a concern in the context of phase-separating interface evolution is a long-standing one in statistical physics; see Spohn [44] for a discussion. The aim of this paper is to understand the formation of a continuum mean curvature interface flow, with a homogenized ‘surface tension-mobility’ parameter reflecting microscopic rates, as a scaling limit in a general class of reaction-diffusion interacting particle systems. We focus on so-called Glauber+Zero-range processes on discrete tori \({\mathbb {T}}^d_N= ({\mathbb {Z}}/N{\mathbb {Z}})^d\) for dimensions \(d\ge 2\) and scaling parameter N, where the Glauber part governs reaction rates favoring two levels of mass density, and the Zero-range part controls nonlinear rates of exploration.

A ‘two step’ approach to derive the continuum interface flow would consider scaling the Zero-range part of the dynamics, but not speeding up the Glauber rates. The first step would be to obtain the space-time mass hydrodynamic limit in terms of an Allen–Cahn reaction-diffusion PDE. The second step would be to scale the reaction term in this Allen–Cahn PDE and to obtain mean-curvature interface flow in this limit.

However, in a nutshell, our purpose is to obtain ‘directly’ the mean curvature interface flow, up to the time of singularity, by scaling both the Glauber and Zero-range parts simultaneously. The Zero-range part is diffusively scaled while the Glauber part is scaled at a lesser level. By means of a probabilistic relative entropy method, and a new ‘Boltzmann–Gibbs’ principle, we show that the microscopic system may be approximated by a ‘discretized’ Allen–Cahn equation whose reaction term is being speeded up; see (1.5).

1.1 Motion by mean curvature and Allen–Cahn equation with linear diffusion

In the continuum, motion by mean curvature is a time evolution of a \((d-1)\)-dimensional hypersurface \(\Gamma _t\) in \({\mathbb {T}}^d:= ({\mathbb {R}}/{\mathbb {Z}})^d=[0,1)^d\) with periodic boundary conditions, or in \({\mathbb {R}}^d\), defined by

$$\begin{aligned} V=\kappa , \end{aligned}$$
(1.1)

where V is the normal velocity and \(\kappa \) is the mean curvature of \(\Gamma _t\) multiplied by \(d-1\). Such a flow is of course a well-studied geometric object (cf. the book by Bellettini [5]).

Mean curvature flow is known to arise from Allen–Cahn equations with linear diffusion, which are reaction-diffusion equations of the form

$$\begin{aligned} {\partial _t u} = \Delta u + \frac{1}{\varepsilon ^2} f(u), \quad t>0,\; x\in D, \end{aligned}$$
(1.2)

in terms of a ‘sharp interface limit’ as \(\varepsilon \downarrow 0\). Here, \(D={\mathbb {T}}^d\) or a domain in \({\mathbb {R}}^d\), for \(d\ge 2\), with Neumann boundary conditions at \(\partial D\), \(\varepsilon >0\) is a small parameter and f is a bistable function with stable points \(\alpha _\pm \) and unstable point \(\alpha _* \in (\alpha _-,\alpha _+)\) satisfying the balance condition:

$$\begin{aligned} \int _{\alpha _-}^{\alpha _+} f(u) \, du \; \big (= F(\alpha _-)-F(\alpha _+)\big )=0, \end{aligned}$$

where F is the potential associated with f such that \(f=-F'\). The sharp interface limit is as follows: The solution \(u=u^\varepsilon \) of the Allen–Cahn equation satisfies

$$\begin{aligned} u^\varepsilon (t, x) \underset{\varepsilon \downarrow 0}{\longrightarrow } \chi _{\Gamma _t}(x) := \left\{ \begin{aligned} \alpha _+,&\quad \text {on one side of } \Gamma _t, \\ \alpha _-,&\quad \text {on the other side of } \Gamma _t, \end{aligned} \right. \end{aligned}$$

where \(\Gamma _t\) moves according to the motion by mean curvature (1.1), and the sides are determined from \(\Gamma _0\). This limit has a long history; among other works, see Alfaro et al. [2], Bellettini [5], Chen et al. [11], Funaki [24], Chapter 4 of Funaki [25] and references therein. Although we do not consider the case \(d=1\), we remark that the phenomenon in dimension \(d=1\) is quite different, given that the ‘interface’ consists of points; see Carr et al. [8].
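For concreteness, the balance condition can be checked numerically for the classical bistable choice \(f(u)=u-u^3\) (an illustration on our part, not a choice made in this paper), whose stable zeros are \(\alpha _\pm =\pm 1\) and whose unstable zero is \(\alpha _*=0\):

```python
import numpy as np

# Classical bistable nonlinearity (illustrative choice, not from this paper):
# f(u) = u - u^3 with stable zeros at -1, +1 and unstable zero at 0.
f = lambda u: u - u**3

# Balance condition: int_{alpha_-}^{alpha_+} f(u) du = F(alpha_-) - F(alpha_+) = 0,
# i.e. the double-well potential F (with f = -F') has wells of equal depth.
u = np.linspace(-1.0, 1.0, 100001)
y = f(u)
balance = float(np.sum((y[:-1] + y[1:]) / 2) * (u[1] - u[0]))  # trapezoid rule
print(balance)  # vanishes up to rounding error
```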

1.2 Glauber+Zero-range process, its scaling limits and main result

Informally, the Zero-range process follows a collection of continuous time random walks on \({\mathbb {T}}^d_N\) such that each particle interacts infinitesimally only with the particles at its location: At a site x, one of the particles there jumps with rate given by a function of the occupation number \(\eta _x\) at x, say \(g(\eta _x)\), and is then displaced by y with rate p(y). We will consider the case that jumps occur only to neighboring sites with equal rate, that is \(p(y) = 1(|y|=1)\). It is known that, under the diffusive scaling in space and time, namely when space is squeezed by N while time is speeded up by \(N^2\), in the limit as \(N\rightarrow \infty \), the evolution of the macroscopic mass density profile of the microscopic particles, namely the ‘hydrodynamics’, follows a nonlinear PDE (cf. [36])

$$\begin{aligned} \partial _t u = \Delta \varphi (u), \end{aligned}$$

where \(\varphi \) can be seen as a homogenization of the microscopic rate g. We remark that when \(g(k)\equiv k\), and so \(\varphi (u)\equiv u\), the associated Zero-range process is a system of independent particles.

We may add the effect of Glauber dynamics to the Zero-range process. Namely, we now allow creation and annihilation of particles at a location with rates which depend on the occupation numbers nearby. This mechanism is also speeded up by a factor \(K=K(N)\nearrow \infty \) as \(N\rightarrow \infty \). We will impose that K grows much more slowly than the time scale \(N^2\) of the Zero-range part; in fact we will take \(K = O((\log N)^{\sigma /2})\) with a certain \(\sigma \in (0,1)\) in our main Theorem 2.1; see below for some discussion.

If K were kept constant with respect to N, the associated hydrodynamic mass density solves a nonlinear reaction-diffusion equation, a type of Allen–Cahn equation with nonlinear diffusion, in the diffusive scaling limit:

$$\begin{aligned} \partial _t u = \Delta \varphi (u) + Kf(u) \end{aligned}$$
(1.3)

where f reflects a homogenization of the Glauber creation and annihilation rates; cf. [39], see also [13, 18] in which related Glauber+Kawasaki dynamics was studied.

As mentioned above, with the notation \(1/\varepsilon ^2\) in place of K as in the PDE literature, taking the limit of the solutions \(u=u^{(K)}\) of these Allen–Cahn equations as \(K\uparrow \infty \), when say \(\varphi (u)\equiv u\) and f is bistable, that is \(f(u)=-F'(u)\) with F a ‘balanced’ double-well potential, is called the sharp interface limit. This scaling limit leads to a continuum motion by mean curvature of an interface separating two phases, here say two levels of mass density.

In our stochastic setting, by properly choosing the rates of creation and annihilation of particles in the Glauber part, we observe that, in the microscopic system itself, the whole domain \({\mathbb {T}}^d_N\) separates in a short time into ‘denser’ and ‘sparser’ regions of particles, with an interface region of width \(O(K^{-1/2})\) between them (cf. Theorems 6.1 and 6.2). In particular, our paper derives as a main result, as \(N\uparrow \infty \), the motion of a continuum interface by mean curvature directly from these microscopic particle systems, as a combination of the ideas of the hydrodynamic limit (probabilistic part) and the sharp interface limit (PDE part); cf. Theorem 2.1.

1.3 Probabilistic vs PDE arguments

In the probabilistic part (Sects. 4 and 7), for the hydrodynamic limit, we apply the so-called relative entropy method originally due to Yau [45]. As a consequence of the method, we show that the microscopic configurations are not far from the solution of a deterministic discrete approximation to the nonlinear Allen–Cahn equation (cf. Theorem 2.2); see Eq. (1.5). To control the errors in this approximation, we will need a new ‘quantified’ replacement estimate, which can be seen as a type of ‘Boltzmann–Gibbs’ principle (cf. Theorem 3.4). \(L^\infty \)-bounds on second discrete derivatives of the solution of the discretized Allen–Cahn equation (1.5) (cf. Theorem 3.3), derived by Nash and Schauder estimates in [27], play an important role.

In the continuum/discrete PDE part (Sects. 5 and 6, respectively), we compare the discretized Allen–Cahn equation (1.5) with its continuous counterpart (1.3) with nonlinear diffusion and, by a comparison argument, construct super and sub solutions in terms of those for the continuum PDE; see Theorem 2.3 for the main result of the PDE part. We note that a sharp interface limit for the Allen–Cahn equation, now with nonlinear diffusion term \(\Delta \varphi (u)\), is shown in a companion paper [21], and summarized in Theorems 5.1 and 5.2. Such a derivation is obtained by keeping a ‘corrector’ term, or second order term in \(\varepsilon =K^{-1/2}\), in the expansion of the solutions \(u=u^{(K)}\) in variables depending on the distance to a certain level set; see Sect. 5. It seems this sharp interface limit for the nonlinear Allen–Cahn equation was previously unknown even in the continuum setting.

1.4 Comparison to previous works and differences

Previous work on such problems in particle systems with creation and annihilation rates concentrates on Glauber+Kawasaki dynamics (where the Zero-range part is replaced by Kawasaki dynamics) [7, 17, 28, 29, 34]. In these papers, the Kawasaki part is a simple exclusion process. For K fixed with respect to N, the macroscopic mass hydrodynamic equation is the more standard Allen–Cahn PDE (1.2) with linear diffusion \(\Delta u\) (instead of \(\Delta \varphi (u)\)) and K instead of \(\varepsilon ^{-2}\),

$$\begin{aligned} \partial _t u = \Delta u + Kf(u). \end{aligned}$$

See also related work on Glauber dynamics with Kac type long range mean field interaction [6, 15, 16, 35], on fast-reaction limit for two-component Kawasaki dynamics [14], and on spatial coalescent models of population genetics [22].

Phenomenologically, when there is a nonlinear Laplacian, say \(\Delta \varphi (u)\), as in our case of the Glauber+Zero-range process, this nonlinearity affects the limit motion of the hypersurface interface. When f satisfies a modified balance condition due to the nonlinearity [cf. condition (BS)], we obtain in the limit a mean curvature motion speeded up by an in general nontrivial ‘surface tension-mobility’ speed \(\lambda _0\), reflecting a homogenization of the Glauber and Zero-range microscopic rates,

$$\begin{aligned} V=\lambda _0\kappa \end{aligned}$$
(1.4)

[cf. flow \((P^0)\) (2.13)]. We derive two formulas for \(\lambda _0\), one of them below in (1.6), and the other found in (5.11), from which \(\lambda _0\) is seen as the ‘surface tension’ multiplied by the ‘mobility’ of the interface; see the Appendix of El Kettani et al. [21]. We remark that, in the case of Glauber+Kawasaki dynamics, or for independent particles, the speed \(\lambda _0 = 1\) is not affected by the microscopic rates. See also derivations in [15] with respect to Glauber dynamics with Kac type long range mean field interactions.

The discretized hydrodynamic equation, or discretized Allen–Cahn PDE,

$$\begin{aligned} \partial _t u^N = \Delta ^N \varphi (u^N) + Kf(u^N), \end{aligned}$$
(1.5)

with discrete Laplacian \(\Delta ^N\) [see (2.18)], serves to cancel the first order terms in the occupation numbers in the computation of the time derivative of the relative entropy of the law of the microscopic configuration at time t with respect to a local equilibrium measure with average profile given by \(u^N\). But, in the present situation, the problem is more complex than, say, in the application to Glauber+Kawasaki dynamics, since we need to handle nonlinear functions of the occupation numbers, which do not appear in the Glauber+Kawasaki process, by replacing them by linear ones. Once this is done, in a quantified way, the relative entropy can be suitably estimated, yielding that the microscopic configuration on \({\mathbb {T}}^d_N\) is ‘near’ the values \(u^N\).
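To make (1.5) concrete, the following minimal numerical sketch integrates it by forward Euler on a periodic two-dimensional grid. The choices \(\varphi (u)=u/(1+u)\) (matching the rate \(g(k)=1(k\ge 1)\) discussed later) and the bistable cubic f below are our illustrative assumptions, not the scheme or the rates analyzed in this paper:

```python
import numpy as np

# Minimal sketch (not the paper's analysis): forward-Euler integration of the
# discretized hydrodynamic equation  d/dt u^N = Delta^N phi(u^N) + K f(u^N)
# on the periodic grid T^2_N.  phi(u) = u/(1+u) corresponds to the zero-range
# rate g(k) = 1(k >= 1); f is a generic bistable cubic with stable zeros
# 0.5 and 1.5 and unstable zero 1 (both choices are illustrative).
N, K = 64, 10.0
phi = lambda u: u / (1.0 + u)
f = lambda u: -(u - 0.5) * (u - 1.0) * (u - 1.5)

def discrete_laplacian(v):
    """Delta^N v: N^2 times the nearest-neighbour second difference, periodic."""
    return N**2 * (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                   + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4.0 * v)

rng = np.random.default_rng(0)
u = 1.0 + 0.1 * rng.standard_normal((N, N))  # density profile near the unstable zero
dt = 0.1 / N**2                              # step size respecting diffusive stability
for _ in range(2000):
    u += dt * (discrete_laplacian(phi(u)) + K * f(u))

# The reaction term drives u toward the stable levels 0.5 and 1.5.
print(float(u.min()), float(u.max()))
```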

The replacement scheme, a type of ‘quantified’ second-order estimate or ‘Boltzmann–Gibbs’ principle, plays an important role here. This estimate, in comparison with a related bound for Glauber+Kawasaki systems in [28], seems to hold in more generality, and its proof is quite different. In particular, the technique used in [28], relying on the structure of the Kawasaki generator, does not seem to apply to Glauber+Zero-range processes. Moreover, as a byproduct of the ‘quantified’ second order estimate here, the form of the discretized hydrodynamic equation found turns out to satisfy a comparison theorem without any additional assumptions, such as the assumption (A3) on the creation and annihilation rates in [28]. This is another advantage of our Boltzmann–Gibbs principle, beyond its more general validity (cf. Remark 2.1). We remark, in passing, that different ‘quantitative’ replacement estimates, in other settings, have recently been considered [19, 32]. See also in this context the non-quantitative estimates in [30, 31, 33].

1.5 Outline of the paper

The outline of the paper is as follows: In Sect. 2, we introduce the Glauber+Zero-range process in detail. In particular, we describe a class of invariant measures \(\nu _\rho \) [cf. (2.2)] and a spectral gap assumption (SP) for the Zero-range part, and then specify a proper choice of the creation and annihilation rates for the Glauber part, favoring two levels of mass density [cf. (2.11) and (2.12)], so that the corresponding macroscopic reaction function f satisfies a form of balanced bistability, matched to the nonlinear diffusion term \(\Delta \varphi (u)\) obtained from the Zero-range part [cf. condition (BS)].

Our main result on the direct passage from the microscopic system to the continuum interface flow is formulated in Theorem 2.1. Its proof, given in Sect. 2.3, relies on two theorems: Theorem 2.2, which is probabilistic, stating that the microscopic system is close to that of a discretized reaction-diffusion equation, and Theorem 2.3, which is PDE related, stating that the discrete PDE evolution is close to the continuum interface flow. Theorem 2.2 follows as a combination of the relative entropy method developed in Sect. 4 and a Boltzmann–Gibbs principle stated in Sect. 3.4 and proved in Sect. 7. On the other hand, Theorem 2.3 is shown via PDE arguments for the sharp interface limit in terms of ‘generation’ and ‘propagation’ of the interface phenomena, in Sect. 6.

In Sect. 3, we develop, in addition to stating the Boltzmann–Gibbs principle, some preliminary results for the discrete PDE, namely a comparison theorem, a priori energy estimates, and \(L^\infty \)-bounds on discrete derivatives due to Nash and Schauder estimates shown in [27].

In Sect. 4, we prove Theorem 2.2 by implementing the method of relative entropy: We compute the time derivative of the relative entropy of our dynamics \(\mu _t^N\) at time t with respect to the local equilibrium state \(\nu _t^N\) constructed from the solution of the discretized hydrodynamic Eq. (1.5) or (2.16). As remarked earlier, in the case of Kawasaki dynamics instead of the Zero-range process, the first order terms appearing in these computations are all already written in terms of the occupation numbers \(\eta _x\) or their normalized variables; see [28]. In our case, in contrast, nonlinear functions of \(\eta _x\) appear, namely the jump rate \(g(\eta _x)\) of the Zero-range part, as well as the reaction rates \(c_x^\pm (\eta )\) of the Glauber part. We mention that, in [28], the relative entropy method of Jara and Menezes [32], a variant of [45], was applied. This method does not seem to apply to Glauber+Zero-range processes. However, because of our Boltzmann–Gibbs principle, the original method of Yau [45] turns out to be enough.

The Boltzmann–Gibbs principle with a quantified error is essential in our work to replace nonlinear functions of \(\eta _x\), for instance \(g(\eta _x)\) and those arising from the Glauber part, by linearizations in terms of the occupation numbers \(\eta _x\). Its proof is given in Sect. 7. The argument makes use of time averaging and mixing properties of the Zero-range process in the form of a spectral gap condition (SP), verified for a wide variety of rates g. Nonlinear functions, such as \(g(\eta _x)\), are estimated by their conditional expectation given local average densities \(\eta ^\ell _x= \ell ^{-d}\sum _{y: |y-x|\le \ell } \eta _y\). In the standard ‘one-block’ estimate of Guo-Papanicolaou-Varadhan (cf. [36]), which gives errors of order o(1) without quantification, \(\ell \) is of the order N, and so \(\eta ^\ell _x\) is close to the local macroscopic density. Here, errors multiplied by diverging functions of K need to be controlled, because of the form of certain terms in the discrete hydrodynamic equation. The idea then is to consider \(\ell = N^\alpha \) where \(\alpha >0\) is small, and so \(\eta ^\ell _x\) is a type of ‘mesoscopic’ average. The spectral gap condition (SP) is also an ingredient used to quantify the errors suitably.

The growth of K of order \(O((\log N)^{\sigma /2})\) that we impose is due to the Schauder estimate [27] for the discrete hydrodynamic equation that we formulate in Sect. 3.3. In the case of the Glauber+Kawasaki model, a growth order of \(O(\sqrt{\log N})\) was obtained in [28], afforded by the linear diffusion term in its discrete hydrodynamic equation, as opposed to the nonlinear one \(\Delta ^N \varphi (u^N)\) which seems not as well behaved. We remark that, in the work of Bonaventura [7] and Katsoulakis and Souganidis [34], for Glauber+Kawasaki processes, K can be of order \(O(N^\beta )\) for a small \(\beta >0\), the difference being that the method of correlation functions was used instead of relative entropy. This method, relying on the structure of the Kawasaki model, does not seem to generalize to the systems considered here.

Finally, we explain the PDE part. In Sect. 5.1 we discuss informally our derivation of the sharp interface limit from the Allen–Cahn PDE with nonlinear diffusion. To study the limit as \(K\uparrow \infty \), it is essential to consider the asymptotic expansion of the solution up to the second order term in K. This plays the role of the corrector in homogenization theory and, by the averaging effect for the nonlinear diffusion operator, a constant speed \(\lambda _0\) arises in the motion by mean curvature,

$$\begin{aligned} \lambda _0 =\frac{\displaystyle \int _{\alpha _-} ^{\alpha _+} \varphi '(u)\sqrt{W(u)}du}{\displaystyle \int _{\alpha _-} ^{\alpha _+} \sqrt{W(u)} du}\, \end{aligned}$$
(1.6)

and the potential W is defined by

$$\begin{aligned} W(u)= \int _ u ^{\alpha _+} f(s) \varphi '(s) ds\,. \end{aligned}$$
(1.7)

We refer also to (5.11) for the other formula for \(\lambda _0\) in terms of surface tension and mobility of the interface.
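As a sanity check on (1.6) and (1.7), one can evaluate \(\lambda _0\) numerically; the concrete \(\varphi \) and f below are hypothetical illustrations of ours, not choices from the paper. With linear diffusion, \(\varphi '\equiv 1\), the formula gives \(\lambda _0=1\), consistent with the remark above on Glauber+Kawasaki dynamics; for, say, \(\varphi '(u)=1+u^2\) with the \(\varphi \)-balanced choice \(f(u)=(u-u^3)/\varphi '(u)\) on \([\alpha _-,\alpha _+]=[-1,1]\), a short computation gives \(\lambda _0=6/5\):

```python
import numpy as np

# Numerical evaluation of lambda_0 in (1.6), with W as in (1.7).
# The concrete phi' and f below are hypothetical illustrations.
a_minus, a_plus = -1.0, 1.0
u = np.linspace(a_minus, a_plus, 200001)
du = u[1] - u[0]

def trap(y):
    """Trapezoid rule on the grid u."""
    return float(np.sum((y[:-1] + y[1:]) / 2) * du)

def lambda0(phi_prime, f):
    integrand = f(u) * phi_prime(u)
    # running integral int_{a_-}^{u} f phi' ds, so W(u) = total - running
    running = np.concatenate(
        ([0.0], np.cumsum((integrand[:-1] + integrand[1:]) / 2 * du)))
    W = np.clip(running[-1] - running, 0.0, None)  # clip rounding noise at the zeros
    sqrtW = np.sqrt(W)
    return trap(phi_prime(u) * sqrtW) / trap(sqrtW)

f = lambda s: s - s**3                                   # balanced bistable cubic
print(lambda0(lambda s: np.ones_like(s), f))             # linear diffusion: exactly 1
phi_p = lambda s: 1.0 + s**2                             # a nonlinear phi'
print(lambda0(phi_p, lambda s: (s - s**3) / phi_p(s)))   # analytically 6/5
```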

Section 5.2 summarizes results obtained in [21] on the ‘generation’ of interface, or ‘initial layer’ property (cf. Theorem 5.1) and the ‘propagation’ of interface, or motion by mean curvature with a homogenized ‘surface tension-mobility’ speed, for the continuum Allen–Cahn equation with nonlinear diffusion (cf. Theorem 5.2).

Sections 5.3 and 5.4 give an outline of the proofs of these two theorems, in particular recording estimates (cf. Lemmas 5.4 and 5.5) useful in application to the discrete PDE (1.5).

In Sect. 6, we extend the ‘generation’ and ‘propagation’ of the interface results to the discrete PDE (1.5) as \(N\uparrow \infty \) and \(K=K(N)\uparrow \infty \), in Theorems 6.1 and 6.2, by employing a comparison argument. Finally, as a consequence, the proof of Theorem 2.3 is completed in Sect. 6.3.

2 Models and Main Results

We now introduce the Glauber+Zero-range model in detail in Sect. 2.1, and state our main results, Theorems 2.1, 2.2 (probabilistic part) and 2.3 (PDE part), in Sect. 2.2. Section 2.3 gives a proof of Theorem 2.1 assuming Theorems 2.2 and 2.3.

2.1 Glauber+Zero-range processes

Let \({\mathbb {T}}_N^d :=({\mathbb {Z}}/N{\mathbb {Z}})^d = \{1,2,\ldots ,N\}^d\) be the d-dimensional lattice of size N with periodic boundary conditions. We consider, on \({\mathbb {T}}^d_N\), Glauber+Zero-range processes. The configuration space is \(\mathcal {X}_N = \{0,1,2,\ldots \}^{{\mathbb {T}}_N^d} \equiv {\mathbb {Z}}_+^{{\mathbb {T}}_N^d}\), and its elements are denoted by \(\eta =\{\eta _x\}_{x\in {\mathbb {T}}_N^d}\), where \(\eta _x\) represents the number of particles at the site x. The generator of our process is of the form \(L_N = N^2L_{ZR} + KL_G\), where \(L_{ZR}\) and \(L_{G}\) are the Zero-range and Glauber operators, respectively, defined as follows. Here, K is a parameter which will later depend on the scaling parameter N.

Zero-range specification To define the Zero-range part, to avoid degeneracy, let the jump rate \(g=\{g(k)\ge 0\}_{k\in {\mathbb {Z}}_+}\) be given such that \(g(0)=0\) and \(\inf _{k\ge 1}g(k)>0\). Consider the symmetric simple Zero-range process with generator given, for a function f on \(\mathcal {X}_N\), by

$$\begin{aligned} L_{ZR} f(\eta ) = \sum _{x\in {\mathbb {T}}_N^d}\sum _{e\in {\mathbb {Z}}^d: |e|=1} g(\eta _x) \{f(\eta ^{x,x+e})-f(\eta )\}, \end{aligned}$$
(2.1)

where \(\eta =\{\eta _x\}_{x\in {\mathbb {T}}_N^d}\in \mathcal {X}_N\), \(|e| = \sum _{i=1}^d |e_i|\) for \(e=(e_i)_{i=1}^d \in {\mathbb {Z}}^d\) and \(\eta ^{x,y}\in \mathcal {X}_N\) for \(x, y \in {\mathbb {T}}_N^d\) is defined from \(\eta \) satisfying \(\eta _x\ge 1\) by

$$\begin{aligned} (\eta ^{x,y})_z = \left\{ \begin{array}{ll} \eta _x-1 &{} \mathrm{when \ } z=x\\ \eta _{y}+1 &{} \mathrm{when \ }z=y\\ \eta _z &{} \mathrm{otherwise,} \end{array}\right. \end{aligned}$$

for \(z\in {\mathbb {T}}_N^d\); \(\eta ^{x,y}\) describes the configuration after one particle at x in \(\eta \) jumps to y.

We remark that the case \(g(k)\equiv k\) corresponds to the motion of independent particles; when g is not linear, however, the infinitesimal interaction is nontrivial.
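For intuition, the zero-range jumps in (2.1) can be simulated directly. The following Gillespie-type sketch is purely illustrative (the nonlinear rate \(g(k)=\sqrt{k}\) is our assumption, and nothing here enters the paper's proofs); it performs single-particle jumps on \({\mathbb {T}}^2_N\) and exhibits conservation of the total particle number by the Zero-range part:

```python
import numpy as np

# Schematic Gillespie simulation of the Zero-range jumps in (2.1) on T^2_N.
# The rate g(k) = sqrt(k) is an illustrative nonlinear choice with g(0) = 0
# and inf_{k>=1} g(k) > 0.
rng = np.random.default_rng(1)
N, d = 8, 2
g = lambda k: np.sqrt(k)
eta = rng.poisson(2.0, size=(N, N)).astype(np.int64)  # configuration on T^2_N
total = int(eta.sum())                                # conserved by L_ZR

t = 0.0
for _ in range(5000):
    rates = g(eta)                  # a particle leaves x at total rate 2d * g(eta_x)
    R = 2 * d * rates.sum()
    t += rng.exponential(1.0 / R)   # exponential waiting time to the next jump
    # departure site x chosen with probability proportional to g(eta_x);
    # empty sites are never chosen since g(0) = 0
    x = np.unravel_index(rng.choice(N * N, p=(rates / rates.sum()).ravel()), (N, N))
    dx, dy = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
    y = ((x[0] + dx) % N, (x[1] + dy) % N)
    eta[x] -= 1                     # one particle jumps from x to its neighbour y
    eta[y] += 1

print(int(eta.sum()) == total)      # mass conservation under the zero-range part
```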

The invariant measures of the Zero-range process are translation-invariant product measures \(\{{\bar{\nu }}_\varphi : 0\le \varphi < \varphi ^*:=\liminf _{k\rightarrow \infty } g(k)\}\) on \(\mathcal {X}_N\) with one site marginal given by

$$\begin{aligned} {\bar{\nu }}_\varphi (k) \equiv {\bar{\nu }}_\varphi (\eta _x=k)= \frac{1}{Z_\varphi } \frac{\varphi ^k}{g(k)!}, \quad Z_\varphi = \sum _{k=0}^\infty \frac{\varphi ^k}{g(k)!}. \end{aligned}$$
(2.2)

Here, \(g(k)!=g(1)\cdots g(k)\) for \(k\ge 1\) and \(g(0)!=1\); see Section 2.3 of [36].

  • (De) We assume that \(\rho (\varphi )=\sum _{k= 0}^\infty k {\bar{\nu }}_\varphi (k)\) diverges as \(\varphi \uparrow \varphi ^*\), meaning that all densities \(0\le \rho <\infty \) are possible in the system.

We define, for \(\rho \ge 0\),

$$\begin{aligned} \nu _\rho := {\bar{\nu }}_{\varphi (\rho )} \end{aligned}$$

by changing the parameter so that the mean of the marginal is \(\rho \). In fact, \(\rho \) and \(\varphi =\varphi (\rho )\) are related by

$$\begin{aligned} \rho =\varphi (\log Z_\varphi )' \left( = \frac{1}{Z_\varphi }\sum _{k=0}^\infty k \frac{\varphi ^k}{g(k)!} =: \langle k \rangle _{{\bar{\nu }}_\varphi }\right) . \end{aligned}$$

Also, note that

$$\begin{aligned} \varphi = \langle g(k) \rangle _{{\bar{\nu }}_\varphi } \left( := \frac{1}{Z_\varphi }\sum _{k=1}^\infty \frac{\varphi ^k}{g(k-1)!} \right) . \end{aligned}$$

Moreover, one can compute that \(\varphi '(\rho )= \varphi (\rho )/E_{\nu _\rho }\big [(\eta _0 - \rho )^2\big ]>0\), and so \(\varphi =\varphi (\rho )\) is a strictly increasing function.

We observe that when \(g(k)\equiv k\), the marginals of \(\nu _\rho \) are Poisson distributions with mean \(\rho \). When \(ak\le g(k)\le bk\) for all \(k\ge 0\) with \(0<a<b<\infty \), we have \(a\rho \le \varphi (\rho )\le b\rho \) for \(\rho \ge 0\). When \(g(k)=1(k\ge 1)\), i.e., \(g(k)=1\) for \(k\ge 1\) and 0 for \(k=0\), we have \(\varphi (\rho )= \rho /(1+\rho )\) for \(\rho \ge 0\).
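These special cases are easy to verify numerically from the one-site marginal (2.2); the following sketch is our illustration (truncating the state space at a finite kmax) and recovers \(\rho =\varphi \) for \(g(k)\equiv k\) (Poisson marginals) and \(\varphi (\rho )=\rho /(1+\rho )\) for \(g(k)=1(k\ge 1)\):

```python
import math

# Truncated one-site marginal (2.2): nu_phi(k) = phi^k / (g(k)! Z_phi).
# Illustrative numerical check of the rho <-> phi relations stated above.
def marginal(g_factorial, phi, kmax=100):
    w = [phi**k / g_factorial(k) for k in range(kmax + 1)]
    Z = sum(w)
    return [wk / Z for wk in w]

def mean(p):
    return sum(k * pk for k, pk in enumerate(p))

# g(k) = k: g(k)! = k!, the marginal is Poisson(phi), hence rho(phi) = phi.
rho1 = mean(marginal(lambda k: math.factorial(k), phi=1.5))
print(rho1)  # ~1.5

# g(k) = 1(k>=1): g(k)! = 1, the marginal is geometric with ratio phi; then
# rho = phi/(1-phi), equivalently varphi(rho) = rho/(1+rho) as in the text.
rho2 = mean(marginal(lambda k: 1.0, phi=0.5))
print(rho2, rho2 / (1.0 + rho2))  # rho2 ~ 1, and rho2/(1+rho2) recovers phi = 0.5
```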

We will need the following condition to use and prove the ‘Boltzmann–Gibbs principle’ (cf. proofs of Theorems 2.2 and 3.4).

  • (LG) We assume \(g(k) \le Ck\) for all \(k\ge 0\) with some \(C>0\).

Later, we also consider \({\bar{\nu }}_\varphi \) and \(\nu _\rho \) as the product measures on the configuration space \(\mathcal {X}={\mathbb {Z}}_+^{{\mathbb {Z}}^d}\) on an infinite lattice \({\mathbb {Z}}^d\) instead of \({\mathbb {T}}_N^d\).

Let \(u: {\mathbb {T}}_N^d \rightarrow [0,\infty )\) be a function. We define the (inhomogeneous) product measure on \(\mathcal {X}_N\) by

$$\begin{aligned} \nu _{u(\cdot )}(\eta ) = \prod _{x\in {\mathbb {T}}_N^d} \nu _{u(x)}(\eta _x), \quad \eta =\{\eta _x\}_{x\in {\mathbb {T}}_N^d}, \end{aligned}$$
(2.3)

with means \(u(\cdot ) = \{u(x)\}_{x\in {\mathbb {T}}_N^d}\) over sites in \({\mathbb {T}}_N^d\).

In the sequel, we will assume a certain ‘spectral gap’ bound on the Zero-range operator: let \(\Lambda _k = \{x\in {\mathbb {T}}^d_N: |x|\le k\}\) for \(k\ge 1\) with N large enough. Let \(L_{ZR,k}\) be the restriction of \(L_{ZR}\) to \(\Lambda _k\), that is

$$\begin{aligned} L_{ZR,k}f(\eta ) = \sum _{{\mathop {x,y\in \Lambda _k}\limits ^{|x-y|=1}}}g(\eta _x)\big \{f(\eta ^{x,y}) - f(\eta )\big \}. \end{aligned}$$

When there are \(j\ge 0\) particles on \(\Lambda _k\), the process generated by \(L_{ZR,k}\) is an irreducible continuous-time Markov chain. The operator \(L_{ZR,k}\) is self-adjoint with respect to the unique canonical invariant measure \(\nu _{k,j}=\nu _\beta \big \{ \cdot | \sum _{x\in \Lambda _k} \eta _x = j\big \}\); here, \(\nu _{k,j}\) does not depend on \(\beta >0\): Indeed, in terms of a partition function \(Z_{k,j}\),

$$\begin{aligned} \nu _{k,j}(\eta _x = {\bar{k}}_x, x\in \Lambda _k) = \frac{1}{Z_{k,j}}\prod _{x\in \Lambda _k} \frac{1}{g({\bar{k}}_x)!} \end{aligned}$$

for \(\{{\bar{k}}_x\}\) such that \(\sum _{x\in \Lambda _k} {\bar{k}}_x = j\). For the operator \(-L_{ZR,k}\), the value 0 is the bottom of the spectrum. Let gap(k, j) denote the next smallest eigenvalue.

  • (SP) There exists \(C_{gp}>0\) so that \(gap(k,j)^{-1} \le C_{gp}k^2(1+j/|\Lambda _k|)^2\) for all \(k\ge 2\) and \(j\ge 0\).

Such bounds have been shown for Zero-range processes with different jump rates g:

  • Suppose there are C, \(r_1>0\) and \(r_2\ge 1\) such that \(g(k)\le Ck\) and \(g(k+r_2)\ge g(k) + r_1\) for all \(k\ge 0\). Then, there is a constant \(C_{gp}>0\) such that \(gap(k,j)^{-1}\le C_{gp}k^2\), independently of j [37].

  • Suppose \(g(k) = k^\gamma \) for \(0<\gamma <1\). Then, there is a \(C_{gp}>0\) such that \(gap(k,j)^{-1}\le C_{gp}k^2(1 + j/|\Lambda _k|)^{1-\gamma }\) [40].

  • Suppose \(g(k)=1(k\ge 1)\). Then, there is a \(C_{gp}>0\) such that \(gap(k,j)^{-1}\le C_{gp}k^2(1 + j/|\Lambda _k|)^2\) [37, 38].

We remark that all of these g’s satisfy (De) and (LG).

Glauber specification For the Glauber part, we consider the creation and annihilation of a single particle at each change, though it is possible to consider the case that several particles are created or annihilated at once. Let \(\tau _x\) be the shift acting on \(\mathcal {X}_N\) so that \(\tau _x\eta = \eta _{\cdot + x}\) for \(\eta \in \mathcal {X}_N\) and \(\tau _xf(\eta ) = f(\tau _x \eta )\) for functions f on \(\mathcal {X}_N\).

The generator of the Glauber part, acting on functions f on \(\mathcal {X}_N\), is given by

$$\begin{aligned} L_G f(\eta ) = \sum _{x\in {\mathbb {T}}_N^d}\Big [ c_x^+(\eta ) \{f(\eta ^{x,+})-f(\eta )\} + c_x^-(\eta ) 1(\eta _x\ge 1)\{f(\eta ^{x,-})-f(\eta )\}\Big ], \end{aligned}$$
(2.4)

where \(\eta ^{x,\pm }\in \mathcal {X}_N\) are determined from \(\eta \in \mathcal {X}_N\) by \((\eta ^{x,\pm })_z = \eta _x\pm 1\) when \(z=x\) and \((\eta ^{x,\pm })_z=\eta _z\) when \(z\ne x\); note that \(\eta ^{x,-}\) is defined only for \(\eta \in \mathcal {X}_N\) satisfying \(\eta _x \ge 1\). Here, \(c_x^\pm (\eta )= \tau _x c^\pm (\eta )\), and \(c^\pm (\eta )\) are nonnegative local functions on \(\mathcal {X}\), that is, functions depending on finitely many \(\{\eta _x\}\), so that they can be viewed as functions on \(\mathcal {X}_N\) for N large enough. We assume that \(c^\pm (\eta )\) are written in the form

$$\begin{aligned} c^\pm (\eta ) = {\hat{c}}^{\pm }(\eta ) {\hat{c}}^{0,\pm }(\eta _0), \end{aligned}$$
(2.5)

where \({\hat{c}}^{\pm }\) are functions of \(\{\eta _y\}_{y\not =0}\) and \({\hat{c}}^{0,\pm }\) are functions of \(\eta _0\) only. Moreover, since the rate of annihilation at an empty site vanishes, namely \(c^-(\eta ) = c^-(\eta )1(\eta _0\ge 1)\), we may take \({\hat{c}}^{0,-}(0)=0\), so that \({\hat{c}}^{0,-}(\eta _0)={\hat{c}}^{0,-}(\eta _0)1(\eta _0\ge 1)\). In particular, we may drop the factor \(1(\eta _x\ge 1)\) in (2.4), since it is already included in \(c_x^-(\eta )\) by the specification \({\hat{c}}^{0,-}(0)=0\).

As an example, we may choose

$$\begin{aligned} {\hat{c}}^{0,+}(\eta _0) = \frac{1}{g(\eta _0+1)} \quad \text { and } \quad {\hat{c}}^{0,-}(\eta _0) = 1(\eta _0\ge 1) \end{aligned}$$
(2.6)

and therefore

$$\begin{aligned} c_x^+(\eta ) = \frac{{\hat{c}}_x^+(\eta )}{g(\eta _x+1)} \quad \text { and } \quad c_x^-(\eta ) = {\hat{c}}_x^-(\eta ) 1(\eta _x\ge 1) \end{aligned}$$
(2.7)

with \({\hat{c}}_x^\pm (\eta ) = \tau _x {\hat{c}}^\pm (\eta )\); see (2.11) and (2.12) below with further choices of \({\hat{c}}^\pm (\eta )\).

Glauber+Zero-range specification Let now \(\eta ^N(t) = \{\eta _x(t)\}_{x\in {\mathbb {T}}_N^d}\) be the Markov process on \(\mathcal {X}_N\) corresponding to the Glauber+Zero-range generator \(L_N=N^2L_{ZR}+ KL_G\). The macroscopically scaled empirical measure on \({\mathbb {T}}^d (=[0,1)^d\) with the periodic boundary) associated with \(\eta \in \mathcal {X}_N\) is defined by

$$\begin{aligned}&\alpha ^N(dv;\eta ) = \frac{1}{N^d} \sum _{x\in {\mathbb {T}}_N^d} \eta _x \delta _{\frac{x}{N}}(dv),\quad v \in {\mathbb {T}}^d, \end{aligned}$$

and we denote

$$\begin{aligned}&\alpha ^N(t,dv) = \alpha ^N(dv;\eta ^N(t)), \quad t \ge 0. \end{aligned}$$

Define \(\langle \alpha ,\phi \rangle \) to be the integral \(\int \phi d\alpha \) with respect to test functions \(\phi \) and measure \(\alpha \) on \({\mathbb {T}}^d\). Sometimes, when \(\alpha \) has a density, \(\alpha = rdv\), we will write \(\langle r, \phi \rangle = \int \phi rdv\) when the context is clear.

When K is a fixed parameter, a hydrodynamic limit can be shown: For each test function \(\phi \), the pairing \(\langle \alpha ^N(t,dv), \phi \rangle \) converges to \(\langle \rho (t,v)dv, \phi \rangle \) in probability as \(N\rightarrow \infty \), provided this convergence holds initially at \(t=0\), where \(\rho (t,v)\) is the unique weak solution of the reaction-diffusion, or ‘nonlinear’ Allen–Cahn, equation

$$\begin{aligned} \partial _t\rho = \Delta \varphi (\rho )+K f(\rho ), \quad v\in {\mathbb {T}}^d, \end{aligned}$$
(2.8)

with an initial value \(\rho _0(x)=\rho (0,x)\). Here, functions \(\varphi \) and f are defined by

$$\begin{aligned}&\varphi (\rho ) \equiv {\tilde{g}}(\rho ) = E_{\nu _\rho }[g(\eta _0)], \end{aligned}$$
(2.9)
$$\begin{aligned}&f(\rho ) \equiv \widetilde{c^+}(\rho ) - \widetilde{c^-}(\rho ) = E_{\nu _\rho }[c^+(\eta )] - E_{\nu _\rho }[c^-(\eta )], \end{aligned}$$
(2.10)

respectively, where \(E_{\nu _\rho }\) is expectation with respect to \(\nu _\rho \). As noted earlier, \(\varphi \) is an increasing function since \(\varphi '(\rho )=\varphi (\rho )/E_{\nu _\rho }[(\eta _0-\rho )^2]>0\).

More generally, we denote the ensemble averages of local functions \(h=h(\eta )\) on \(\mathcal {X}\) under \(\nu _\rho \) by

$$\begin{aligned} {\tilde{h}}(\rho ) \equiv \langle h\rangle _{\nu _\rho } := E_{\nu _\rho }[h], \quad \rho \ge 0. \end{aligned}$$

It is known that \({\tilde{h}}\) is \(C^\infty \)-smooth, and so in particular both \(\varphi , f\in C^\infty \).

Such hydrodynamic limits, and our later results, do not depend on knowledge of the invariant measures of the Glauber+Zero-range process. Indeed, when the process rates are irreducible, there is a unique invariant measure, but it is not explicit. See [20] for some discussion of these measures in infinite volume.

We now impose the following assumptions on the rates \(c^{\pm }\):

  • (P) \(c^{\pm }(\eta )\ge 0\).

  • (BR) \(\Vert c^+(\eta )g(\eta _0+1)\Vert _{L^\infty }<\infty \) and \(\Vert c^-(\eta ^{0,+})g^{-1}(\eta _0+1)\Vert _{L^\infty }<\infty \).

  • (BS) f is a ‘bistable’ function with three zeros at \(\alpha _-, \alpha _*, \alpha _+\) such that \(0<\alpha _-<\alpha _*<\alpha _+\), \(f'(\alpha _-)<0\), \(f'(\alpha _*)>0\) and \(f'(\alpha _+)<0\). Also, the ‘\(\varphi \)-balance’ condition \(\int _{\alpha _-}^{\alpha _+}f(\rho )\varphi '(\rho )d\rho =0\) holds.

The first assumption (P) was already mentioned. We mention that, under the choice (2.6), since \(g(k)\ge C_0>0\) for \(k\ge 1\), condition (BR) is implied by

$$\begin{aligned} \Vert {\hat{c}}^\pm (\eta )\Vert _{L^\infty }< \infty . \end{aligned}$$

Note also that \(\varphi (\rho )=\rho \) for the linear Laplacian so that \(\varphi '(\rho )=1\), in which case the ‘\(\varphi \)-balance’ condition is the more familiar ‘balance’ condition \(\int _{\alpha _-}^{\alpha _+} f(\rho )d\rho = 0\).

An example of the rates \(c^\pm (\eta )\) and the corresponding reaction term \(f(\rho )\) determined by (2.10) is the following. Define, with respect to (2.6) and (2.7), that

$$\begin{aligned}&c^+(\eta ) = \frac{C}{g(\eta _0+1)}\big \{(a_-+a_*+a_+)1(\eta _{e_1}\ge 1)1(\eta _{e_2}\ge 1)+ a_-a_*a_+\big \}, \end{aligned}$$
(2.11)
$$\begin{aligned}&c^-(\eta ) = \frac{C}{g(\eta _{e_3}+1)}\big \{1(\eta _{e_1}\ge 1)1(\eta _{e_2}\ge 1)+ (a_-a_* + a_-a_+ + a_*a_+)\big \}1(\eta _0\ge 1), \end{aligned}$$
(2.12)

where \(C>0\) and \(a_-, a_+, a_*>0\). Here, \(e_1, e_2, e_3 \in {\mathbb {Z}}^d\) are distinct points not equal to \(0\in {\mathbb {Z}}^d\). In this case, setting \(r(\rho )=E_{\nu _\rho }[1(\eta _0\ge 1)]\) and \(v(\rho ) = E_{\nu _\rho }[g(\eta _0 +1)^{-1}]= r(\rho )/\varphi (\rho )\), we have

$$\begin{aligned} f(\rho ) = -Cv(\rho )(r(\rho ) - a_-)(r(\rho )-a_*) (r(\rho ) - a_+), \end{aligned}$$

which has three zeros since \(r(\rho )\) is strictly increasing from 0 to 1 as \(\rho \) increases from 0 to \(\infty \).

One can find \(0<a_-<a_*<a_+<1\) so that \(\int _{\alpha _-}^{\alpha _+}f(\rho )\varphi '(\rho )d\rho = 0\), where \(\alpha _\pm = r^{-1}(a_\pm )\). Indeed, take \(0<a_-<a_+<1\) arbitrarily and observe that this integral is negative if \(a_*\in (a_-,a_+)\) is close to \(a_+\), while it is positive if \(a_*\) is close to \(a_-\). Hence, the rates \(c^\pm \) satisfy conditions (P), (BR) and (BS).
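To illustrate, the selection of \(a_*\) satisfying the \(\varphi \)-balance condition can be carried out numerically. The following sketch (Python) specializes, purely as an illustrative assumption, to \(g(k)=k\), so that \(\nu _\rho \) is Poisson with mean \(\rho \), \(\varphi (\rho )=\rho \), \(\varphi '\equiv 1\), \(r(\rho )=1-e^{-\rho }\) and \(\alpha _\pm =-\log (1-a_\pm )\); the monotonicity of the balance integral in \(a_*\) observed above justifies bisection.

```python
import math

# Illustrative specialization (an assumption, not the paper's general setup):
# g(k) = k, so nu_rho is Poisson(rho), phi(rho) = rho, phi'(rho) = 1,
# r(rho) = P(eta_0 >= 1) = 1 - exp(-rho), v(rho) = r(rho)/rho.
def r(rho): return 1.0 - math.exp(-rho)
def v(rho): return r(rho) / rho
def alpha(a): return -math.log(1.0 - a)   # r^{-1}(a)

C, a_minus, a_plus = 1.0, 0.2, 0.8

def balance(a_star, n=2000):
    """Trapezoidal approximation of int_{alpha_-}^{alpha_+} f(rho) phi'(rho) drho."""
    lo, hi = alpha(a_minus), alpha(a_plus)
    h = (hi - lo) / n
    def f(rho):
        rr = r(rho)
        return -C * v(rho) * (rr - a_minus) * (rr - a_star) * (rr - a_plus)
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return s * h

# balance(a_*) is strictly decreasing in a_*: positive near a_-, negative
# near a_+, so bisection locates the unique a_* giving the balance condition.
lo, hi = a_minus + 1e-9, a_plus - 1e-9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if balance(mid) > 0:
        lo = mid
    else:
        hi = mid
a_star = 0.5 * (lo + hi)
```

The located \(a_*\) lies strictly between \(a_-\) and \(a_+\), as the argument above requires.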

2.2 Main results

Let now \(\mu _0^N\) be the initial distribution of \(\eta ^N(0)\) on \({{\mathcal {X}}}_N\). Let also \(\{u^N(0,x)\}_{x\in {\mathbb {T}}_N^d}\) be a collection of nonnegative values and consider the inhomogeneous product measure \(\nu _0^N := \nu _{u^N(0,\cdot )}\) defined by (2.3).

We make the following assumptions on \(\{u^N(0,x)\}_{x\in {\mathbb {T}}_N^d}\):

  • (BIP1) \(u_- \le u^N(0,x) \le u_+\) for some \(0<u_- <u_+\).

  • (BIP2) \(u^N(0,x)= u_0(\tfrac{x}{N}), x\in {\mathbb {T}}_N^d\) with some \(u_0\in C^5({\mathbb {T}}^d)\). Further, \(\Gamma _0:=\{v\in {\mathbb {T}}^d; u_0(v)=\alpha _*\}\) is a \((d-1)\)-dimensional \(C^{5+\theta }\), \(\theta >0\), hypersurface in \({\mathbb {T}}^d\) without boundary such that \(\nabla u_0\) is non-degenerate in the direction normal to \(\Gamma _0\) at every point \(v\in \Gamma _0\). Also, \(u_0>\alpha _*\) in \(D_0^+\) and \(u_0<\alpha _*\) in \(D_0^-\), where \(D_0^\pm \) are the regions separated by \(\Gamma _0\).

Consider a family of closed smooth \(C^{5+\theta }\), \(\theta >0\), hypersurfaces \(\{\Gamma _t\}_{t\in [0,T]}\) in \({\mathbb {T}}^d\), without boundary, whose evolution is governed by a ‘homogenized’ mean curvature motion:

$$\begin{aligned} (P^{\;\!0})\quad {\left\{ \begin{array}{ll} \, V= \lambda _0\kappa \quad \text{ on } \Gamma _t \\ \, \Gamma _t\big |_{t=0}=\Gamma _0\,, \end{array}\right. } \end{aligned}$$
(2.13)

where V is the normal velocity of \(\Gamma _t\) from the \(\alpha _-\)-side to the \(\alpha _+\)-side defined below, \(\kappa \) is the mean curvature at each point of \(\Gamma _t\) multiplied by \(d-1\), and the constant \(\lambda _0= \lambda _0(\varphi ,f)\) is given by (1.6).

In the linear case of independent particles, that is, when \(g(k)\equiv k\) and so \(\varphi (u)\equiv u\), we recover the value \(\lambda _0 =1\). Here, \(T>0\) is a time such that \(\Gamma _t\) is smooth for \(t\le T\). If \(\Gamma _0\) is smooth, such a \(T>0\) always exists; see Sect. 5.1.

We comment that the full \(C^{5+\theta }\) strength of the smoothness assumption (BIP2) is used only in Sect. 6.2 with respect to ‘propagation of a discrete interface’.

Denote

$$\begin{aligned} \chi _{\Gamma _t}(v) = \left\{ \begin{array}{rl} \alpha _- &{} \text { for } v \text { on one side of } \Gamma _t,\\ \alpha _+ &{} \text { for } v \text { on the other side of } \Gamma _t.\end{array}\right. \end{aligned}$$
(2.14)

These sides are determined by how \(u_0\) is arranged with respect to \(\Gamma _0\), and are then maintained continuously in time for \(\Gamma _t\).

We will also denote by \(\mathbb {P}_\mu \) and \({\mathbb E}_\mu \) the process measure and expectation with respect to \(\eta ^N(\cdot )\) starting from initial measure \(\mu \). When \(\mu = \mu ^N_0\), we abbreviate \(\mathbb {P}_{\mu ^N_0} = \mathbb {P}_N\) and \({\mathbb E}_{\mu ^N_0}={\mathbb E}_N\). Let also \(E_\mu \) denote expectation with respect to the measure \(\mu \).

Recall that the relative entropy between two probability measures \(\mu \) and \(\nu \) on \(\mathcal {X}_N\) is given as

$$\begin{aligned} H(\mu |\nu ) := \int _{\mathcal {X}_N} \frac{d\mu }{d\nu } \log \frac{d\mu }{d\nu } d\nu . \end{aligned}$$

The main result of this article is now formulated as follows.

Theorem 2.1

Suppose \(d\ge 2\) and the assumptions (De), (LG), (SP), (P), (BR), (BS) stated in Sect. 2.1 and (BIP1), (BIP2). Suppose also that the relative entropy at \(t=0\) behaves as

$$\begin{aligned} H(\mu ^N_0|\nu _0^N) = O(N^{d-\epsilon }) \end{aligned}$$

as \(N\uparrow \infty \), where \(\epsilon >0\). Suppose further that \(K=K(N)\uparrow \infty \) as \(N\uparrow \infty \) and satisfies \(1\le K(N)\le \delta (\log N)^{\sigma /2}\) for a sufficiently small \(\delta = \delta (\epsilon , T)\), where \(\sigma \in (0,1)\) is the Hölder exponent determined by a Nash estimate; see Theorem 3.3.

Then, for \(0< t\le T\), \(\varepsilon >0\) and \(\phi \in C^\infty ({\mathbb {T}}^d)\), we have that

$$\begin{aligned} \lim _{N\rightarrow \infty } \mathbb {P}_N\Big (\big |\langle \alpha ^N(t),\phi \rangle - \langle \chi _{\Gamma _t}, \phi \rangle \big |>\varepsilon \Big ) = 0. \end{aligned}$$
(2.15)

As we will see in Theorem 6.2, the macroscopic width of the interface is \(O(K^{-1/2})\). Our result (2.15) shows that, away from this region, the local particle density, that is, the local empirical average of particle numbers, is close to either \(\alpha _-\) or \(\alpha _+\). In other words, the whole domain is separated into sparse and dense regions of particles, and the interface \(\Gamma _t\) separating these two regions moves macroscopically according to the motion by mean curvature \((P^0)\).

Remark 2.1

In [7, 34], the growth condition for K was \(K=O(N^\beta )\) for a small power \(\beta >0\), whereas in [28], the growth condition was \(K\le \delta _0\sqrt{\log N}\). The condition here on K is worse primarily due to the nonlinearity of the Zero-range rates.

The proof of Theorem 2.1 is given in two main parts. The first part establishes that the microscopic evolution is close to a discrete PDE motion through use of the relative entropy method and the Boltzmann–Gibbs principle, Theorem 2.2. The second part shows that the discrete PDE evolution converges to that of the ‘homogenized’ mean curvature flow desired, Theorem 2.3.

To state Theorem 2.2, let \(u^N(t,\cdot ) = \{u^N(t,x)\}_{x\in {\mathbb {T}}_N^d}\) be the nonnegative solution of the discretized hydrodynamic equation (1.5), that is,

$$\begin{aligned} \partial _t u^N(t,x)&= \sum _{i=1}^d \Delta _i^N \{ \varphi (u^N(t,x))\} + K f(u^N(t,x)), \end{aligned}$$
(2.16)

with initial values \(u^N(0,\cdot )=\{u^N(0,x)\}_{x\in {\mathbb {T}}^d_N}\), where

$$\begin{aligned} \Delta _i^N \varphi (u(x)) := N^2 \left( \varphi (u(x+e_i)) + \varphi (u(x-e_i)) - 2\varphi (u(x))\right) , \end{aligned}$$
(2.17)

where \(u(\cdot ) = \{u(x)\}_{x\in {\mathbb {T}}_N^d}\) and \(\{e_i\}_{i=1}^d\) are standard unit basis vectors of \({\mathbb {Z}}^d\). Recall also that \(\varphi \) and f are functions given by (2.9) and (2.10), respectively. We will later denote

$$\begin{aligned} \Delta ^N = \sum _{i=1}^d \Delta _i^N. \end{aligned}$$
(2.18)
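As an illustration of the scheme (2.16)–(2.18), the following sketch (Python) integrates the discretized equation by forward Euler in \(d=1\); the concrete choices of N, K, \(\varphi \) and the bistable f below are assumptions made only for the example, and the time step respects the stiffness of the \(N^2\)-scaled discrete Laplacian.

```python
import math

# Forward-Euler sketch of the discretized hydrodynamic equation (2.16) in d = 1
# (the equation makes sense for all d >= 1). All concrete choices below are
# illustrative assumptions, not taken from the paper.
N, K = 32, 4.0
phi = lambda u: u + 0.1 * u * u                      # increasing, nonlinear
f   = lambda u: -(u - 0.2) * (u - 0.5) * (u - 0.8)   # bistable: zeros 0.2 < 0.5 < 0.8

def step(u, dt):
    """One Euler step of du/dt = Delta^N phi(u) + K f(u) on the periodic lattice."""
    p = [phi(v) for v in u]
    return [u[x] + dt * (N * N * (p[(x + 1) % N] + p[(x - 1) % N] - 2 * p[x])
                         + K * f(u[x]))
            for x in range(N)]

# smooth initial profile crossing the unstable zero 0.5
u = [0.5 + 0.3 * math.sin(2 * math.pi * x / N) for x in range(N)]
dt = 0.1 / (N * N)        # CFL-type restriction for the N^2-scaled Laplacian
for _ in range(2000):
    u = step(u, dt)
```

Consistent with the comparison bounds of Sect. 3.1, the iterates remain between the outer zeros of f for this initial datum.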

Let \(\nu ^N_t = \nu _{u^N(t,\cdot )}\) be the inhomogeneous product measure with Zero-range marginals defined by (2.3) from \(u^N(t,\cdot )\) for \(t\ge 0\).

The next theorem shows that the ‘microscopic motion is close to the discretized hydrodynamic equation’. We note this result holds in all \(d\ge 1\).

Theorem 2.2

Suppose \(d\ge 1\) and let \(\mu ^N_t\) be the distribution of \(\eta ^N(t)\) on \(\mathcal {X}_N\). Suppose all conditions in Sect. 2.1 and that (BIP1) holds with respect to \(u^N(0)\) and the initial measure \(\mu _0^N\) is such that

$$\begin{aligned} H(\mu _0^N|\nu _0^N) = O(N^{d-\epsilon }) \end{aligned}$$

as \(N\rightarrow \infty \) for some \(\epsilon >0\). Then, when \(K=K(N)\) is a sequence as in the statement of Theorem 2.1, we have, for some \(0<\epsilon _1 = \epsilon _1(\epsilon , d)\), that

$$\begin{aligned} H(\mu _t^N|\nu _t^N) = O(N^{d-\epsilon _1}) \end{aligned}$$

for \(t\in [0,T]\) as \(N\rightarrow \infty \).

We comment that \(\epsilon _1\) can be taken as \(\epsilon _1 = (\varepsilon _0\wedge \epsilon )/2\) where \(\varepsilon _0 = 2d/(9d+2)\).

We now capture the behavior of \(u^N(t)\) as \(N\uparrow \infty \) in terms of the motion by mean curvature \((P^0)\) when \(d\ge 2\). Define the step function

$$\begin{aligned} u^N(t,v) = \sum _{x\in {\mathbb {T}}_N^d} u^N(t,x) 1_{B(\frac{x}{N},\frac{1}{N})}(v), \quad v \in {\mathbb {T}}^d, \end{aligned}$$
(2.19)

where \(B(\frac{x}{N},\frac{1}{N}) = \prod _{i=1}^d [\frac{x_i}{N} - \frac{1}{2N}, \frac{x_i}{N} + \frac{1}{2N})\) is a box with center \(\frac{x}{N}\), \(x=(x_i)_{i=1}^d\), and side length \(\frac{1}{N}\). The following theorem is shown in Sect. 6.3.

Theorem 2.3

Let \(d\ge 2\) and assume (BS), (BIP1) and (BIP2). Then, for \(v\not \in \Gamma _t\) and \(t\in (0,T]\), we have that

$$\begin{aligned} \lim _{N\rightarrow \infty } u^N(t,v) =\chi _{\Gamma _t}(v). \end{aligned}$$

2.3 Proof of Theorem 2.1

As we mentioned, Theorem 2.1 is shown mainly as a combination of Theorems 2.2 and 2.3. To make this precise, define, for \(\varepsilon >0\) and a test function \(\phi \in C^\infty ({\mathbb {T}}^d)\), the event

$$\begin{aligned} \mathcal {A}^\varepsilon _{N,t} = \{\eta \in \mathcal {X}_N; |\langle \alpha ^N,\phi \rangle - \langle u^N(t,\cdot ),\phi \rangle |>\varepsilon \}. \end{aligned}$$

Proposition 2.4

There exists \(C=C(\varepsilon )>0\) such that

$$\begin{aligned} \nu ^N_t(\mathcal {A}^\varepsilon _{N,t}) \le e^{-CN^d}. \end{aligned}$$

Proof

Write

$$\begin{aligned} \langle \alpha ^N,\phi \rangle - \langle u^N(t,\cdot ),\phi \rangle = \frac{1}{N^d} \sum _{x\in {\mathbb {T}}^d_N} (\eta _x - u^N(t,x))\phi (x/N) + o(1). \end{aligned}$$

Under \(\nu ^N_t\), the variable \(\eta _x\) has mean \(u^N(t,x)\) and a variance \(\sigma ^2_{x,t}\) determined by \(u^N(t,x)\). Under the condition (BIP1), by the comparison Lemma 3.1, we have that \(u^N(t,\cdot )\), and so also \(\sigma ^2_{x,t}\), is uniformly bounded away from 0 and \(\infty \).

The desired bound, since \(\phi \) is uniformly bounded, follows from a standard application of exponential Markov inequalities. \(\square \)

Now note that the entropy inequality, for an event A, gives

$$\begin{aligned} \mu ^N_t(A) \le \frac{\log 2 + H(\mu ^N_t|\nu ^N_t)}{\log \{1+ 1/\nu ^N_t(A)\}}. \end{aligned}$$

Combined with Proposition 2.4 and the relative entropy Theorem 2.2, we have that

$$\begin{aligned} \lim _{N\rightarrow \infty } \mu ^N_t(\mathcal {A}^\varepsilon _{N,t}) = 0. \end{aligned}$$

On the other hand, the discrete PDE convergence Theorem 2.3 shows that \(\langle u^N(t,\cdot ), \phi \rangle \rightarrow \langle \chi _{\Gamma _t}, \phi \rangle \) as \(N\uparrow \infty \), finishing the proof of Theorem 2.1.
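The entropy inequality used above can be checked numerically on a small state space; in the following sketch (Python; the two distributions are arbitrary choices), the bound is verified over all nontrivial events A.

```python
import math
from itertools import combinations

def H(mu, nu):
    """Relative entropy on a finite state space."""
    return sum(m * math.log(m / n) for m, n in zip(mu, nu) if m > 0)

# arbitrary illustrative distributions on a 4-point space
mu = [0.70, 0.15, 0.10, 0.05]
nu = [0.25, 0.25, 0.25, 0.25]
h = H(mu, nu)

def check(A):
    """Verify mu(A) <= (log 2 + H(mu|nu)) / log(1 + 1/nu(A)) for the event A."""
    muA = sum(mu[i] for i in A)
    nuA = sum(nu[i] for i in A)
    bound = (math.log(2) + h) / math.log(1 + 1 / nuA)
    return muA <= bound + 1e-12

ok = all(check(A) for k in range(1, 4) for A in combinations(range(4), k))
```

For instance, for \(A=\{0\}\) the bound evaluates to about 0.724 against \(\mu (A)=0.7\), so the inequality can be rather tight.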

3 Comparison, a Priori Estimates, and a ‘Boltzmann–Gibbs’ Principle

Let \(u^N(t, \cdot ) = \{u^N(t,x)\}_{x\in {\mathbb {T}}_N^d}\) be the nonnegative solution of the discretized hydrodynamic equation (2.16) or (1.5) with given sequence \(1\le K=K(N)\). In this section, we do not impose a growth condition on \(K=K(N)\), stating results in terms of K.

3.1 Comparison theorem

Equation (2.16) satisfies a comparison theorem; cf. [27], Section 2.5. We will say that profiles \(u(\cdot )=(u_x)_{x\in {\mathbb {T}}_N^d}\) and \(v(\cdot ) =(v_x)_{x\in {\mathbb {T}}_N^d}\) are ordered, \(u(\cdot )\ge v(\cdot )\), when \(u_y\ge v_y\) for all \(y\in {\mathbb {T}}_N^d\).

We say that \(u^+(t,\cdot )\) and \(u^-(t,\cdot )\) are super and sub solutions of (2.16), if \(u^+\) and \(u^-\) satisfy (2.16) with “\(\ge \)” and “\(\le \)” instead of “\(=\)” respectively.

Lemma 3.1

Suppose initial conditions \(u^-(0,\cdot ) \le u^+(0,\cdot )\). Then, the corresponding super and sub solutions \(u^+(t,\cdot )\) and \(u^-(t,\cdot )\) to the discrete PDE (2.16), for all \(t\ge 0\), satisfy

$$\begin{aligned} u^-(t,\cdot ) \le u^+(t,\cdot ). \end{aligned}$$

Furthermore, suppose (BIP1) holds: \(u_-\le u^N(0,x)\le u_+\) for some \(0<u_-<u_+<\infty \). Then, for \(t\ge 0\) and \(x\in {\mathbb {T}}^d_N\), we have

$$\begin{aligned} u_-\wedge \alpha _- \le u^N(t,x)\le u_+\vee \alpha _+. \end{aligned}$$

Proof

Assume that \(u^+(t,\cdot )\ge u^-(t,\cdot )\) and that \(u^-(t,x)=u^+(t,x)\) holds at some space-time point (t, x). Then, since the reaction term f cancels at (t, x), and \(\varphi \) is an increasing function, we have

$$\begin{aligned} \partial _t (u^+-u^-)(t,x)&\ge \Delta ^N \{\varphi (u^+)-\varphi (u^-)\}(t,x) + K \big (f(u^+(t,x))- f(u^-(t,x))\big ) \\&= N^2\sum _{\pm e_i} \big \{ (\varphi (u^+)-\varphi (u^-))(t,x\pm e_i) - (\varphi (u^+)-\varphi (u^-))(t,x) \big \} \\&= N^2\sum _{\pm e_i} \{\varphi (u^+)-\varphi (u^-)\} (t,x\pm e_i) \ge 0. \end{aligned}$$

This implies \(\partial _t (u^+-u^-)(t,x) \ge 0\) and shows that \(u^-(t)\) cannot exceed \(u^+(t)\) for any \(t>0\).

In particular, if we take \(u^+(0,x)\equiv u_+\vee \alpha _+\), then by the condition (BS), the solution \(u^+(t,\cdot )\) with this initial datum is decreasing in t so that we obtain \(u^N(t,\cdot ) \le u^+(t,\cdot ) \le u_+\vee \alpha _+\). We can similarly show \(u^N(t,\cdot ) \ge u_-\wedge \alpha _-\). \(\square \)
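The order preservation in Lemma 3.1 can also be observed numerically: an Euler discretization of (2.16) is monotone for small enough time steps and keeps an ordered pair of profiles ordered. A sketch in \(d=1\) (Python; \(\varphi \), f, N, K are illustrative assumptions, not part of the model):

```python
import math

# Numeric check that an ordered pair of initial profiles stays ordered under
# Euler stepping of (2.16), provided dt is small enough that the scheme is
# monotone (1 - 2 dt N^2 phi' + dt K f' >= 0 on the relevant range).
N, K = 32, 4.0
phi = lambda u: u + 0.1 * u * u
f   = lambda u: -(u - 0.2) * (u - 0.5) * (u - 0.8)

def step(u, dt):
    p = [phi(v) for v in u]
    return [u[x] + dt * (N * N * (p[(x + 1) % N] + p[(x - 1) % N] - 2 * p[x])
                         + K * f(u[x]))
            for x in range(N)]

lo = [0.30 + 0.05 * math.sin(2 * math.pi * x / N) for x in range(N)]  # sub profile
hi = [0.60 + 0.05 * math.cos(2 * math.pi * x / N) for x in range(N)]  # super profile
dt = 0.1 / (N * N)
ordered = True
for _ in range(2000):
    lo, hi = step(lo, dt), step(hi, dt)
    ordered = ordered and all(a <= b + 1e-12 for a, b in zip(lo, hi))
```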

3.2 A priori estimates

Define for \(\{u_x = u(x)\}_{x\in {\mathbb {T}}^d}\) and \(1\le i \le d\),

$$\begin{aligned} \nabla _i^Nu(x)&= N\big (u(x+e_i)-u(x)\big ), \ \ \mathrm{and \ }\\ \nabla ^Nu(x)&= \big (\nabla _i^Nu(x)\big )_{i=1}^d. \end{aligned}$$

Lemma 3.2

(cf. [27], Sect. 4.4). Suppose bounds (BIP1) hold for \(u^N(0,\cdot )\). Then, for a constant \(C>0\), we have

$$\begin{aligned}&\frac{1}{2}\sum _{x\in {\mathbb {T}}_N^d} u^N(T,x)^2 + c_0 \int _0^T \sum _{x\in {\mathbb {T}}_N^d} |\nabla ^Nu^N(t,x)|^2 dt \le \frac{1}{2}\sum _{x\in {\mathbb {T}}_N^d} u^N(0,x)^2 + CKTN^d, \end{aligned}$$

where \(c_0:= \inf _{\rho>0}\varphi '(\rho )>0\) (see [36], p. 30), and as a consequence

$$\begin{aligned} \frac{N^2}{\ell ^2}\frac{1}{N^d}\int _0^T \sum _{x\in {\mathbb {T}}_N^d} \Big (\frac{1}{(2\ell +1)^d}\sum _{|z-x|\le \ell } u^N(t, z) - u^N(t,x)\Big )^2 dt \le \frac{CKT}{c_0}, \end{aligned}$$
(3.1)

where \(|x|= \sum _{i=1}^d |x_i|\) for \(x = (x_i)_{i=1}^d \in {\mathbb {Z}}^d\).

Proof

Recall \(u^N(t,\cdot )\) is the solution of (2.16). By Lemma 3.1, we have that \(u^N(t,\cdot )\) is between \(u_-^* = u_-\wedge \alpha _-\) and \(u_+^* = u_+\vee \alpha _+\) uniformly in time. Since \(\varphi '(u)\ge c_0>0\) and f(u) is bounded for u between \(u_-^*\) and \(u_+^*\), we have by the mean-value theorem that

$$\begin{aligned} \tfrac{1}{2} \partial _t \sum _{x\in {\mathbb {T}}^d_N} u^N(t,x)^2&= \sum _{x\in {\mathbb {T}}^d_N} u^N(t,x) \left( \Delta ^N \varphi (u^N(t,x)) +Kf(u^N(t,x))\right) \\&= - \sum _{x\in {\mathbb {T}}^d_N} \sum _{i=1}^d\nabla _i^N u^N(t,x) \nabla _i^N \varphi (u^N(t,x))\\&\quad + K\sum _{x\in {\mathbb {T}}^d_N} u^N(t,x) f(u^N(t,x)) \\&\le - c_0 \sum _{x\in {\mathbb {T}}^d_N} \sum _{i=1}^d|\nabla _i^Nu^N(t,x)|^2 + C KN^d. \end{aligned}$$

Integrating in time gives the first inequality in the lemma. The second inequality now follows from the first, utilizing Jensen’s inequality and the relation \((a_1+\cdots + a_j)^2 \le j(a_1^2 + \cdots + a_j^2)\). \(\square \)
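The discrete summation-by-parts step \(\sum _x u\, \Delta ^N\varphi (u) = -\sum _{x,i} \nabla _i^Nu\, \nabla _i^N\varphi (u)\) used above holds exactly on the periodic lattice and can be checked directly; a sketch in \(d=1\) (Python; the profile and \(\varphi \) are arbitrary choices):

```python
import math

# Direct check of discrete summation by parts on the periodic lattice:
#   sum_x u(x) * Delta^N phi(u)(x) = - sum_x grad^N u(x) * grad^N phi(u)(x),
# with Delta^N as in (2.17) and grad^N u(x) = N (u(x+1) - u(x)).
N = 16
phi = lambda u: u + 0.1 * u * u
u = [0.4 + 0.3 * math.sin(2 * math.pi * x / N)
     + 0.1 * math.cos(6 * math.pi * x / N) for x in range(N)]
p = [phi(v) for v in u]

lhs = sum(u[x] * N * N * (p[(x + 1) % N] + p[(x - 1) % N] - 2 * p[x])
          for x in range(N))
rhs = -sum(N * (u[(x + 1) % N] - u[x]) * N * (p[(x + 1) % N] - p[x])
           for x in range(N))
```

The two sides agree up to floating-point roundoff, with no boundary terms, since the lattice is periodic.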

3.3 \(L^\infty \)-Estimates on discrete derivatives

We next state the \(L^\infty \)-estimates for the (macroscopic) discrete derivatives of the solution \(u^N(t,x)\) of (2.16). We define the norm \(\Vert u^N\Vert _{C_N^n}\) for \(u^N=\{u^N(x)\}_{x\in {\mathbb {T}}_N^d}\) and \(n=0,1,2,\ldots \) by

$$\begin{aligned} \Vert u^N\Vert _{C_N^n} = \sum _{k=0}^n \sum _{1\le i_1,\ldots ,i_k\le d} \max _{x\in {\mathbb {T}}_N^d} |\nabla _{i_k}^N\cdots \nabla _{i_1}^Nu^N(x)| \end{aligned}$$

where for \(n=0\) the norm reduces to \(\Vert u^N\Vert _{L^\infty ({\mathbb {T}}^d_N)}\). The following Schauder estimate is shown in [27] for quasilinear discrete PDEs. The constant \(\sigma \in (0,1)\) appears as the Hölder exponent in the Nash estimate; see [27] for details. Note that we wrote \(u^N(x)\) or \(u^N(t,x)\) as \(u^N(\frac{x}{N})\) or \(u^N(t,\frac{x}{N})\) in [27], using macroscopic spatial variables \(\frac{x}{N}\) instead of microscopic ones x, but these two descriptions are equivalent.

Theorem 3.3

Suppose \(\Vert u^N(0)\Vert _{C_N^4}\le C_0\) and condition (BIP1): \(0<u_-\le u^N(0,x)\le u_+<\infty \) for all \(x\in {\mathbb {T}}_N^d\). Then, we have

$$\begin{aligned}&\Vert u^N(t)\Vert _{C_N^2} \le CK^{2/\sigma }, \end{aligned}$$
(3.2)

for all \(t\in [0,T]\) and some \(C>0\).

In particular, we have

$$\begin{aligned} \Vert \Delta ^N\varphi (u^N(t,\cdot ))\Vert _{L^\infty ({\mathbb {T}}^d_N)} \le CK^{2/\sigma }. \end{aligned}$$
(3.3)

We note that \(\Vert u^N(0)\Vert _{C_N^4}\le C_0\) holds under the condition (BIP2): \(u^N(0,x)=u_0(x/N)\) and \(u_0\in C^5({\mathbb {T}}^d)\).

3.4 A ‘Boltzmann–Gibbs’ principle

For a local function \(h=h(\eta )\), with support in a finite box denoted \(\Lambda _h\subset {\mathbb {T}}_N^d\), and parameter \(\beta \ge 0\), let

$$\begin{aligned} {\tilde{h}}(\beta ) = E_{\nu _\beta }[h]. \end{aligned}$$

In this section, we suppose that the function h satisfies, in terms of constants \(C_1, C_2\), the bound

$$\begin{aligned} |h(\eta )|\le C_1\sum _{y\in \Lambda _h}\eta _y + C_2. \end{aligned}$$
(3.4)

With respect to an evolution \(\{u^N(t,x)\}_{x\in {\mathbb {T}}^d_N}\) satisfying the discrete PDE (2.16), let

$$\begin{aligned} f_x(\eta ) = \tau _xh(\eta ) -{\tilde{h}}(u^N(t,x)) -{\tilde{h}}'(u^N(t,x))\big (\eta _x - u^N(t,x)\big ). \end{aligned}$$
(3.5)

Recall that \(\mathbb {P}_N\) is the underlying process measure governing \(\eta ^N(\cdot )\) starting from \(\mu ^N_0\), and that \(\mu ^N_t\) is the distribution of \(\eta ^N(t)\) for \(t\ge 0\). Recall that \(K=K(N)\ge 1\) for \(N\ge 1\) is the speed of the Glauber jumps in the process \(\eta ^N(\cdot )\) with generator \(L_N\). We will not impose a growth condition on K here, but state results in terms of K.

Recall \(\nu ^N_t=\nu _{u^N(t,\cdot )}\), defined below (2.18).

We now state a so-called ‘Boltzmann–Gibbs’ principle, under the relative entropy assumption \(H(\mu ^N_0|\nu ^N_0) = O(N^d)\), weaker than the one assumed for Theorem 2.2. It is a ‘second-order’ estimate valid in \(d\ge 1\) with a remainder given in terms of a relative entropy term and a certain error.

Theorem 3.4

Suppose bounds (BIP1) hold for the initial values \(\{u^N(0,x)\}_{x\in {\mathbb {T}}^d_N}\), and the initial relative entropy \(H(\mu ^N_0|\nu ^N_0) = O(N^d)\). Suppose \(\{a_{t,x}: x\in {\mathbb {T}}^d_N, t\ge 0\}\) are non-random coefficients with uniform bound

$$\begin{aligned} \sup _{x\in {\mathbb {T}}^d_N, t\ge 0} |a_{t,x}| \le M. \end{aligned}$$
(3.6)

Then, there exist \(\epsilon _0, C>0\) such that

$$\begin{aligned} {\mathbb E}_{N} \left| \int _0^T \sum _{x\in {\mathbb {T}}^d_N} a_{t,x} f_x dt\right| \le O(MKTN^{d-\epsilon _0}) + CM \int _0^T H(\mu _t^N| \nu ^N_t) \, dt. \end{aligned}$$
(3.7)

Moreover, we may take \(\epsilon _0 = 2d/(9d+2)\).

The proof of Theorem 3.4 is given in Sect. 7.

Remark 3.1

We remark that this proof relies on the form of the discrete PDE (2.16) only in that \(u^N\) satisfies the statements in Lemmas 3.1 and 3.2.

4 Microscopic Motion is Close to the ‘Discrete PDE’: Proof of Theorem 2.2

Recall the Glauber+Zero-range process \(\eta ^N(t)\) generated by \(L_N=N^2L_{ZR}+K(N)L_G\), where \(K=K(N)\). For a function f on \(\mathcal {X}_N\) and a measure \(\nu \) on \(\mathcal {X}_N\), set

$$\begin{aligned} \mathcal {D}_N(f;\nu ) = 2N^2 \mathcal {D}_{ZR}(f;\nu ) + K \mathcal {D}_G(f;\nu ), \end{aligned}$$

where

$$\begin{aligned} \mathcal {D}_{ZR}(f;\nu )&= \frac{1}{4} \sum _{{\mathop {x,y\in {\mathbb {T}}_N^d}\limits ^{|x-y|=1}}} \int _{\mathcal {X}_N} g(\eta _x) \{f(\eta ^{x,y})-f(\eta )\}^2 d\nu , \nonumber \\ \mathcal {D}_G(f;\nu )&= \sum _{x\in {\mathbb {T}}_N^d} \int _{\mathcal {X}_N} c^+_x(\eta )\{f(\eta ^{x,+})-f(\eta )\}^2 + c^-_x(\eta )\{f(\eta ^{x,-})-f(\eta )\}^2 d\nu , \end{aligned}$$
(4.1)

and recall \(c_x^-(\eta )=0\) when \(\eta _x=0\).

Recall \(\mu _t^N\) is the law of \(\eta ^N(t)\) on \(\mathcal {X}_N\) and \(\nu ^N_t = \nu _{u^N(t,\cdot )}\). Let m be a reference measure on \(\mathcal {X}_N\) with full support in \(\mathcal {X}_N\). Define

$$\begin{aligned} \psi ^N_t := \frac{d\nu ^N_t}{dm}. \end{aligned}$$

In general, we denote the adjoint of an operator L on \(L^2(\nu ^N_t)\) by \(L^{*,\nu ^N_t}\).

We now state an estimate for the derivative of relative entropy. Such estimates go back to the work of Guo–Papanicolaou–Varadhan (cf. [36]) and Yau [45]. A more recent bound is the following; see [26, 28, 32] for a proof.

Proposition 4.1

$$\begin{aligned} \frac{d}{dt} H(\mu _t^N|\nu ^N_t) \le - \mathcal {D}_N\left( \sqrt{\frac{d\mu _t^N}{d\nu ^N_t}}; \nu ^N_t\right) + \int _{\mathcal {X}_N} (L_N^{*,\nu ^N_t}1 - \partial _t \log \psi ^N_t) d\mu _t^N. \end{aligned}$$

We remark that in our later development we need only the inequality, originally derived in [45], where the Dirichlet form term is dropped:

$$\begin{aligned} \frac{d}{dt} H(\mu _t^N|\nu ^N_t) \le \int _{\mathcal {X}_N} (L_N^{*,\nu ^N_t}1 - \partial _t \log \psi ^N_t) d\mu _t^N. \end{aligned}$$
(4.2)

To control the relative entropy \(H(\mu ^N_t| \nu ^N_t)\), we will develop a bound on the right-hand side of (4.2) in the following subsection. With the aid of these bounds, which use a ‘Boltzmann–Gibbs’ estimate shown in Sect. 7, we give the proof of Theorem 2.2 in Sect. 4.2.

4.1 Computation of \(L^{*, \nu ^N_t}_N 1 - \partial _t \log \psi ^N_t(\eta )\)

We first formulate a few lemmas in the abstract. Let \(\{u(x)\ge 0\}_{x\in {\mathbb {T}}_N^d}\) be given and let \(\nu =\nu _{u(\cdot )}\) be the product measure given as in (2.3). Recall that \(\Delta ^N_i\) and \(\Delta ^N\) are defined in (2.17) and (2.18), respectively.

Lemma 4.2

We have

$$\begin{aligned} L_{ZR}^{*,\nu } 1&= \sum _{x\in {\mathbb {T}}_N^d} \frac{N^{-2}(\Delta ^N\varphi )(u(x))}{\varphi (u(x))} g(\eta _x) \\&= \sum _{x\in {\mathbb {T}}_N^d} \frac{N^{-2}(\Delta ^N\varphi )(u(x))}{\varphi (u(x))} \{g(\eta _x) - \varphi (u(x))\}. \end{aligned}$$

Proof

Similar computations are found in [36], pp. 120–121. Take any \(f=f(\eta )\) on \(\mathcal {X}_N\) as a test function and compute

$$\begin{aligned} \int L_{ZR}^{*,\nu } 1\cdot f d \nu&= \int L_{ZR}f d \nu \\&= \sum _{x\in {\mathbb {T}}_N^d}\sum _{|e|=1} \sum _{\eta \in \mathcal {X}_N} g(\eta _x) \{f(\eta ^{x,x+e})-f(\eta )\} \nu (\eta ). \end{aligned}$$

Then, by fixing x and e, and making the change of variables \(\zeta = \eta ^{x,x+e}\), we have

$$\begin{aligned} \sum _\eta g(\eta _x) f(\eta ^{x,x+e}) \nu (\eta ) = \sum _\zeta g(\zeta _x+1) f(\zeta ) \nu (\zeta ^{x+e,x}). \end{aligned}$$

However, since

$$\begin{aligned} \nu (\zeta ^{x+e,x})&= \frac{\nu _{u(x+e)}(\zeta _{x+e}-1)}{\nu _{u(x+e)}(\zeta _{x+e})} \frac{\nu _{u(x)}(\zeta _x+1)}{\nu _{u(x)}(\zeta _x)} \nu (\zeta ) \\&= \frac{g(\zeta _{x+e})}{\varphi (u(x+e))} \frac{\varphi (u(x))}{g(\zeta _x+1)} \nu (\zeta ), \end{aligned}$$

we obtain

$$\begin{aligned} L_{ZR}^{*,\nu } 1&= \sum _{x,e} \left\{ \frac{\varphi (u(x))}{\varphi (u(x+e))} g(\eta _{x+e}) - g(\eta _x)\right\} \\&= \sum _{x,e} \left\{ \frac{\varphi (u(x-e))}{\varphi (u(x))} -1 \right\} g(\eta _x) = \sum _{x} \frac{N^{-2}(\Delta ^N\varphi )(u(x))}{\varphi (u(x))} g(\eta _x). \end{aligned}$$

Here, the second expression in the statement of the lemma follows by noting that \(\sum _{x} (\Delta ^N\varphi )(u(x))=0\). \(\square \)

Lemma 4.3

We have

$$\begin{aligned} L_G^{*,\nu } 1 = \sum _{x\in {\mathbb {T}}_N^d} \left\{ c_x^+(\eta ^{x,-}) \frac{g(\eta _x)}{\varphi (u(x))} + c_x^-(\eta ^{x,+}) \frac{\varphi (u(x))}{g(\eta _x+1)} -c_x^+(\eta ) -c_x^-(\eta ) \right\} . \end{aligned}$$

Proof

Taking any \(f=f(\eta )\) on \(\mathcal {X}_N\), we have

$$\begin{aligned} \int L_G^{*,\nu } 1\cdot f d \nu&= \int L_G f d \nu \\&= \sum _{x\in {\mathbb {T}}_N^d} \sum _{\eta \in \mathcal {X}_N} \Big \{c^+_x(\eta ) \{f(\eta ^{x,+})-f(\eta )\} + c^-_x(\eta )1(\eta _x\ge 1) \{f(\eta ^{x,-})-f(\eta )\} \Big \}\nu (\eta ) \end{aligned}$$

Then, by making change of variables \(\zeta = \eta ^{x,\pm }\), we have

$$\begin{aligned}&\sum _\eta c^+_x(\eta ) f(\eta ^{x,+})\nu (\eta ) = \sum _\zeta c^+_x(\zeta ^{x,-})1(\zeta _x\ge 1) f(\zeta )\nu (\zeta ^{x,-}), \\&\sum _\eta c^-_x(\eta )1(\eta _x\ge 1)f(\eta ^{x,-})\nu (\eta ) = \sum _\zeta c^-_x(\zeta ^{x,+}) f(\zeta )\nu (\zeta ^{x,+}). \end{aligned}$$

However, since

$$\begin{aligned}&\nu (\zeta ^{x,-})1(\zeta _x\ge 1) = 1(\zeta _x\ge 1)\frac{\nu _{u(x)}(\zeta _x-1)}{\nu _{u(x)}(\zeta _x)} \nu (\zeta ) = \frac{g(\zeta _x)}{\varphi (u(x))} \nu (\zeta ),\\&\nu (\zeta ^{x,+}) = \frac{\nu _{u(x)}(\zeta _x+1)}{\nu _{u(x)}(\zeta _x)} \nu (\zeta ) = \frac{\varphi (u(x))}{g(\zeta _x+1)} \nu (\zeta ), \end{aligned}$$

we obtain

$$\begin{aligned} L_G^{*,\nu } 1&= \sum _{x} \left\{ c_x^+(\eta ^{x,-}) \frac{g(\eta _x)}{\varphi (u(x))} + c_x^-(\eta ^{x,+}) \frac{\varphi (u(x))}{g(\eta _x+1)} -c_x^+(\eta ) -c_x^-(\eta )1(\eta _x\ge 1)\right\} . \end{aligned}$$

Finally, by our convention with respect to \(c_x^-\), we have that \(c_x^-(\eta )1(\eta _x\ge 1) = c_x^-(\eta )\). \(\square \)

Example 4.1

If we choose \(c_x^\pm (\eta )\) as in (2.7), noting that \({\hat{c}}_x^\pm (\eta )\) do not depend on \(\eta _x\), we have that \(L_G^{*,\nu }1\) equals

$$\begin{aligned} \sum _{x\in {\mathbb {T}}_N^d} {\hat{c}}_x^+(\eta ) \left( \frac{1(\eta _x\ge 1)}{\varphi (u(x))} -\frac{1}{g(\eta _x+1)}\right) + \sum _{x\in {\mathbb {T}}_N^d} {\hat{c}}_x^-(\eta ) \left( \frac{\varphi (u(x))}{g(\eta _x + 1)} -1(\eta _x\ge 1)\right) . \end{aligned}$$

Lemma 4.4

Now we take \(u(\cdot )=\{u^N(t,x)\}_{x\in {\mathbb {T}}_N^d}\). Then, we have

$$\begin{aligned} \partial _t \log \psi ^N_t(\eta ) = \sum _{x\in {\mathbb {T}}_N^d} \frac{\partial _t \varphi (u^N(t,x))}{\varphi (u^N(t,x))} (\eta _x-u^N(t,x)). \end{aligned}$$

Proof

Since

$$\begin{aligned} \psi ^N_t(\eta ) = \frac{\nu _{u^N(t,\cdot )}(\eta )}{m(\eta )} = \frac{\prod _x\nu _{u^N(t,x)}(\eta _x)}{m(\eta )}, \end{aligned}$$

we have

$$\begin{aligned} \partial _t \log \psi ^N_t(\eta ) = \sum _{x\in {\mathbb {T}}^d_N} \frac{\partial _t \nu _{u^N(t,x)}(\eta _x)}{\nu _{u^N(t,x)}(\eta _x)}. \end{aligned}$$

Here,

$$\begin{aligned} \partial _t \nu _{u^N(t,x)}(k)&= \partial _t \left( \frac{1}{Z_{\varphi (u^N(t,x))}} \frac{\varphi (u^N(t,x))^k}{g(k)!}\right) \\&= \frac{1}{Z_{\varphi (u^N(t,x))}} \frac{k\varphi (u^N(t,x))^{k-1}}{g(k)!} \partial _t \varphi (u^N(t,x))\\&\quad -\, \frac{Z_{\varphi (u^N(t,x))}'\partial _t \varphi (u^N(t,x))}{Z_{\varphi (u^N(t,x))}^2} \frac{\varphi (u^N(t,x))^k}{g(k)!} \\&= \nu _{u^N(t,x)}(k) \partial _t \varphi (u^N(t,x)) \frac{1}{ \varphi (u^N(t,x)) } (k-u^N(t,x)), \end{aligned}$$

where we have used the formula \( \frac{\partial }{\partial \varphi } \log Z_\varphi = {\rho }/{\varphi }\). This shows the conclusion. \(\square \)

These three lemmas, combined with the comparison estimates, discrete derivative bounds, and Boltzmann–Gibbs principle in Sect. 3, are the main ingredients for the following theorem.

Theorem 4.5

Suppose \(u^N(t,x)\) satisfies (2.16), with \(K\ge 1\). Then, there are \(\varepsilon _0, C>0\) such that

$$\begin{aligned}&\int _0^T \int _{\mathcal {X}_N} \Big \{L^{*,\nu ^N_t}_N 1 - \partial _t\log \psi ^N_t\Big \}d\mu ^N_t dt\\&\quad \le CK^{2/\sigma }\int _0^TH(\mu ^N_t|\nu ^N_t)dt+ O\big (K^{1+2/\sigma }N^{d-\varepsilon _0}\big ). \end{aligned}$$

Proof

By Lemmas 4.2–4.4, we have that \(L^{*,\nu ^N_t}_N 1 - \partial _t\log \psi ^N_t\) equals

$$\begin{aligned}&\sum _{x} \frac{(\Delta ^N\varphi )(u^N(t,x))}{\varphi (u^N(t,x))} \{g(\eta _x) - \varphi (u^N(t,x))\}\nonumber \\&\quad +\, K \sum _{x\in {\mathbb {T}}^d_N} \left\{ c_x^+(\eta ^{x,-})\frac{g(\eta _x)}{\varphi (u^N(t,x))} - c_x^+(\eta ) + c_x^-(\eta ^{x,+})\frac{\varphi (u^N(t,x))}{g(\eta _x+1)} - c_x^-(\eta )1(\eta _x\ge 1) \right\} \nonumber \\&\quad -\,\sum _{x\in {\mathbb {T}}^d_N} \frac{\partial _t \varphi (u^N(t,x))}{\varphi (u^N(t,x))} (\eta _x-u^N_x(t)). \end{aligned}$$
(4.3)

First, let \(h_0(\eta ) = g(\eta _0)\) in (3.5). By the assumption (LG), \(h_0\) satisfies the bound in (3.4). Observe that \({\tilde{h}}_0(\beta ) \equiv E_{\nu _\beta }[h_0] = \varphi (\beta )\) for \(\beta \ge 0\), whence \(f_x(\eta ) = g(\eta _x) - \varphi (u^N(t,x))-\varphi '(u^N(t,x))\big (\eta _x - u^N(t,x)\big )\).

Let now \(a^0_{t,x} = \Delta ^N\varphi (u^N(t,x))/\varphi (u^N(t,x))\). Since \(u^N\) is bounded between \(u_-\wedge \alpha _-\) and \(u_+\vee \alpha _+\) according to Lemma 3.1, \(\varphi (u^N(t,x))\) is uniformly bounded away from 0. Also, by Theorem 3.3, we have the estimate \(\Vert \Delta ^N\varphi (u^N(t,\cdot ))\Vert _{L^\infty } = O(K^{2/\sigma })\). Then, we conclude that \(\Vert a^0(t,\cdot )\Vert _{L^\infty } = O(K^{2/\sigma })\).

Therefore, by the Boltzmann–Gibbs principle (Theorem 3.4), applied with \(h_0\) and \(a^0_{t,x}\), we obtain that

$$\begin{aligned}&{\mathbb E}_{N}\Big | \int _0^T \sum _{x\in {\mathbb {T}}^d_N} \frac{(\Delta ^N\varphi )(u^N(t,x))}{\varphi (u^N(t,x))} \left( g(\eta _x(t))-\varphi (u^N(t,x))\right) dt \\&\qquad - \int _0^T\sum _{x\in {\mathbb {T}}^d_N} \frac{(\Delta ^N\varphi )(u^N(t,x))}{\varphi (u^N(t,x))} \varphi '(u^N(t,x))\big (\eta _x(t)-u^N(t,x)\big )dt \Big | \\&\quad \le CK^{2/\sigma }\int _0^T H(\mu ^N_t| \nu ^N_t)dt + O(K^{1+2/\sigma } N^{d-\varepsilon _0}). \end{aligned}$$

Secondly, let \(h_1(\eta ) = c^-(\eta ^{0,+})/g(\eta _0 +1)\) and \(a^1_{t,x} = K\varphi (u^N(t,x))\), and also \(h_2(\eta ) = c^-(\eta )1(\eta _0\ge 1)\) and \(a^2_{t,x}=K\). Note that \(h_1(\eta )\) is bounded by assumption (BR), and \(h_2(\eta ) = \big (c^-(\eta )/g(\eta _0)\big )g(\eta _0)\) is bounded by \(C\eta _0\) by assumptions (BR) and (LG). Hence, \(h_1\) and \(h_2\) both satisfy condition (3.4). Also, by Lemma 3.1, \(\Vert a^j_{t,x}\Vert _{L^\infty } =O(K)\) for \(j=1,2\).

Observe that

$$\begin{aligned} {\tilde{h}}_1(\beta )= & {} \widetilde{\Big (\frac{c^-(\eta ^{0,+})}{g(\eta _0 + 1)}\Big )}(\beta ) \equiv E_{\nu _\beta } \left[ \frac{c^-(\eta ^{0,+})}{g(\eta _0+1)}\right] \nonumber \\= & {} \frac{1}{\varphi (\beta )}E_{\nu _\beta } [c^-(\eta )1(\eta _0\ge 1)] = \frac{{\tilde{h}}_2(\beta )}{\varphi (\beta )}. \end{aligned}$$
(4.4)

Indeed, recall \(c^-(\eta ) = {\hat{c}}^-(\eta ) {\hat{c}}^{0,-}(\eta _0)\) where \({\hat{c}}^-\) does not depend on \(\eta _0\). Then,

$$\begin{aligned} E_{\nu _\beta }[c^-(\eta ^{0,+})g(\eta _0+1)^{-1}] = E_{\nu _\beta }[{\hat{c}}^-(\eta )]E_{\nu _\beta }[{\hat{c}}^{0,-}(\eta _0 +1) g(\eta _0+1)^{-1}]. \end{aligned}$$

The factor \(E_{\nu _\beta }[{\hat{c}}^{0,-}(\eta _0 +1) g(\eta _0+1)^{-1}]\) is rewritten as

$$\begin{aligned} \frac{1}{Z_\varphi } \sum _{k=0}^\infty \frac{{\hat{c}}^{0,-}(k+1)}{g(k+1)} \frac{\varphi ^k}{g(k)!}&= \frac{1}{Z_\varphi } \varphi ^{-1} \sum _{k=0}^\infty \frac{\varphi ^{k+1}}{g(k+1)!}{\hat{c}}^{0,-}(k+1) \\&= \varphi ^{-1} \frac{1}{Z_\varphi } \sum _{k=0}^\infty {\hat{c}}^{0,-}(k)1(k\ge 1) \frac{\varphi ^k}{g(k)!} = \frac{1}{\varphi }E_{\nu _\beta }[{\hat{c}}^{0,-}(\eta _0)1(\eta _0\ge 1)], \end{aligned}$$

where \(\varphi =\varphi (\beta )\). This shows (4.4) by noting the independence of \({\hat{c}}^-(\eta )\) and functions of \(\eta _0\) under \(\nu _\beta \).
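The summation-shift computation above can be checked numerically. The sketch below (with hypothetical choices \(g(k)=k\) and a bounded \({\hat{c}}^{0,-}\), which are illustrative assumptions, not the rates of the model) verifies the identity \(E_{\nu _\beta }[{\hat{c}}^{0,-}(\eta _0+1)/g(\eta _0+1)] = \varphi ^{-1}E_{\nu _\beta }[{\hat{c}}^{0,-}(\eta _0)1(\eta _0\ge 1)]\) for the zero-range marginal with weights \(\varphi ^k/g(k)!\):

```python
def check_shift_identity(g, c_hat, phi, K=60):
    """Check E[c_hat(eta0+1)/g(eta0+1)] = (1/phi) E[c_hat(eta0) 1(eta0>=1)]
    under the zero-range marginal with weights phi^k / g(k)!,
    where g(k)! = g(1) g(2) ... g(k) and g(0)! = 1 (series truncated at K)."""
    gfact = [1.0]
    for k in range(1, K + 1):
        gfact.append(gfact[-1] * g(k))
    w = [phi ** k / gfact[k] for k in range(K + 1)]
    Z = sum(w)
    lhs = sum(c_hat(k + 1) / g(k + 1) * w[k] for k in range(K)) / Z
    rhs = sum(c_hat(k) * w[k] for k in range(1, K + 1)) / (phi * Z)
    return lhs, rhs

# hypothetical rates: g(k) = k (independent walkers) and a bounded c_hat
lhs, rhs = check_shift_identity(lambda k: float(k), lambda k: 1.0 / (1.0 + k), 0.7)
```

Each term on the left with index \(k\) matches the term with index \(k+1\) on the right, which is exactly the index shift used in the display above.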

We also have

$$\begin{aligned} {\tilde{h}}'_1(\beta ) = \frac{d}{d\beta }\Big (\frac{{\tilde{h}}_2(\beta )}{\varphi (\beta )}\Big ), \quad \mathrm{and }\quad {\tilde{h}}'_2(\beta ) = \frac{d}{d\beta }\Big (\frac{{\tilde{h}}_2(\beta )}{\varphi (\beta )}\Big ) \varphi (\beta ) + \Big (\frac{{\tilde{h}}_2(\beta )}{\varphi (\beta )}\Big )\varphi '(\beta ). \end{aligned}$$

We may form the corresponding f’s in (3.5) with respect to \(h_1\) and \(h_2\) at \(\beta = u^N(t,x)\).

By the Boltzmann–Gibbs principle, Theorem 3.4, applied separately to pairs \(h_1\), \(a^1_{t,x}\), and \(h_2\), \(a^2_{t,x}\), and then subtracting the estimates, we conclude that

$$\begin{aligned}&{\mathbb E}_{N}\Big |K\int _0^T \sum _{x\in {\mathbb {T}}^d_N} \left\{ c_x^{-}(\eta ^{x,+}(t)) \frac{\varphi (u^N(t,x))}{g(\eta _x(t) +1)} - c_x^-(\eta (t))1(\eta _x(t)\ge 1) \right\} dt \\&\qquad +\, K\int _0^T \sum _{x\in {\mathbb {T}}^d_N} E_{\nu _{u^N(t,x)}}[c^-(\eta )]\frac{\varphi '(u^N(t,x))}{\varphi (u^N(t,x))} \big (\eta _x(t) - u^N(t,x)\big )dt \Big |\\&\quad \le CK \int _0^T H(\mu ^N_t|\nu ^N_t)dt + O(K^2N^{d-\varepsilon _0}). \end{aligned}$$

Thirdly, consider \(h_3(\eta ) = c^+(\eta ^{0,-})g(\eta _0)\) and \(a^3_{t,x} = K/\varphi (u^N(t,x))\), and also \(h_4(\eta ) = c^+(\eta )\) and \(a^4_{t,x} = K\). Again, by the assumption (BR), \(h_3(\eta )\) is bounded, and \(h_4(\eta ) = \big (c^+(\eta )g(\eta _0+1)\big )/g(\eta _0+1)\) is bounded by (BR), recalling \(\inf _{k\ge 1}g(k)>0\); hence, both \(h_3\) and \(h_4\) satisfy (3.4). Moreover, \(\Vert a^j_{t,x}\Vert _{L^\infty } = O(K)\) for \(j=3,4\). Also, from a calculation similar to (4.4), we see that

$$\begin{aligned} {\tilde{h}}_3(\beta )&= E_{\nu _\beta }[c^+(\eta )]\varphi (\beta )= {\tilde{h}}_4(\beta )\varphi (\beta ). \end{aligned}$$

Therefore,

$$\begin{aligned} {\tilde{h}}_3'(\beta ) = E_{\nu _\beta }[c^+(\eta )]\varphi '(\beta ) + {\tilde{h}}'_4(\beta )\varphi (\beta ). \end{aligned}$$

Again, we may write functions f in (3.5) with respect to \(h_3\) and \(h_4\) at \(\beta =u^N(t,x)\).

Once more, by the Boltzmann–Gibbs principle, Theorem 3.4, applied separately to pairs \(h_3\), \(a^3_{t,x}\) and \(h_4\), \(a^4_{t,x}\), and taking the difference, we have that

$$\begin{aligned}&{\mathbb E}_{N}\Big | K\int _0^T \sum _{x\in {\mathbb {T}}^d_N} \left\{ c_x^+(\eta ^{x,-}(t))\frac{g(\eta _x(t))}{\varphi (u^N(t,x))} - c_x^+(\eta (t)) \right\} dt \\&\qquad -K\int _0^T \sum _{x\in {\mathbb {T}}^d_N} E_{\nu _{u^N(t,x)}}[c^+(\eta )]\frac{\varphi '(u^N(t,x))}{\varphi (u^N(t,x))}\big (\eta _x(t) - u^N(t,x)\big ) dt\Big |\\&\quad \le CK\int _0^T H(\mu ^N_t|\nu ^N_t)dt + O(K^2N^{d-\varepsilon _0}). \end{aligned}$$

Finally, we note, with respect to the third line of (4.3), that

$$\begin{aligned} \partial _t\varphi (u^N(t,x)) = \varphi '(u^N(t,x))\partial _t u^N(t,x). \end{aligned}$$

Then, combining these observations, \(\int _0^T \big ( L^{*,\nu ^N_t}_N 1 - \partial _t\log \psi ^N_t\big ) dt\) is approximated in \(L^1(\mathbb {P}_N)\) by

$$\begin{aligned}&\int _0^T \sum _{x} \left[ \frac{(\Delta ^N\varphi )(u^N(t,x))}{\varphi (u^N(t,x))} \varphi '(u^N(t,x))\{\eta _x(t) - u^N(t,x)\} \right. \nonumber \\&\qquad + \,K\sum _{x} \frac{\varphi '(u^N(t,x))}{\varphi (u^N(t,x))}E_{\nu _{u^N(t,x)}}\big [c^+(\eta ) - c^-(\eta )\big ]\{\eta _x(t) - u^N(t,x)\}\nonumber \\&\qquad \left. -\,\sum _{x\in {\mathbb {T}}^d_N} \frac{\varphi '(u^N(t,x))}{\varphi (u^N(t,x))} \partial _t u^N(t,x) \{\eta _x(t)-u^N(t,x)\} \right] dt \end{aligned}$$
(4.5)

with error \(CK^{2/\sigma }\int _0^T H(\mu ^N_t|\nu ^N_t)dt + O(K^{1+2/\sigma }N^{d-\varepsilon _0})\). Since \(u^N(t,x)\) satisfies the discretized equation (2.16), the display (4.5) vanishes. Hence, \(\int _0^T \big ( L^{*,\nu ^N_t}_N 1 - \partial _t\log \psi ^N_t \big ) dt\) is within the \(L^1\) error bound desired. \(\square \)

4.2 Proof of Theorem 2.2

From (4.2) and Theorem 4.5, we have, for \(t\in [0,T]\), that

$$\begin{aligned} H(\mu ^N_t|\nu ^N_t) \le H(\mu ^N_0|\nu ^N_0) + C K^{2/\sigma } \int _0^t H(\mu ^N_s| \nu _s^N) ds + O(K^{1+2/\sigma } N^{d-\varepsilon _0}), \end{aligned}$$

where \(\varepsilon _0=2d/(9d+2)\). Then, by Gronwall’s estimate, we obtain, for \(t\in [0,T]\), that

$$\begin{aligned} H(\mu ^N_t| \nu ^N_t) \le \left\{ H(\mu ^N_0| \nu ^N_0) + O(K^{1+2/\sigma } N^{d-\varepsilon _0}) \right\} \exp \big \{CTK^{2/\sigma }\big \}. \end{aligned}$$

Suppose now that

$$\begin{aligned} K(N)\le \delta (\log N)^{\sigma /2} \end{aligned}$$

for \(\delta >0\) such that \(CT\delta ^{2/\sigma }< (\varepsilon _0\wedge \epsilon )/2\). Since the initial entropy \(H(\mu ^N_0|\nu ^N_0)= O(N^{d-\epsilon })\), we will have for \(t\le T\) that

$$\begin{aligned} H(\mu ^N_t| \nu ^N_t) = o(N^{d-(\varepsilon _0\wedge \epsilon )/2}). \end{aligned}$$

This finishes the proof. \(\square \)

5 Interface Limit for Continuum Allen–Cahn Equations with Nonlinear Diffusion

We first discuss a formal derivation of the interface motion in the continuous PDE setting in Sect. 5.1, before stating the precise results from [21] in Sect. 5.2. We then outline the proofs of Theorems 5.1 and 5.2 on generation and propagation of the continuous interface motion in Sects. 5.3 and 5.4. In particular, we gather the bounds needed for the discrete PDE in Sect. 6; see Lemmas 5.3–5.5.

5.1 Formal derivation

We first give, through formal asymptotic expansions, the derivation of the interface motion equation corresponding to Problem

$$\begin{aligned} (P^\varepsilon )~~ {\left\{ \begin{array}{ll} \partial _t u = \Delta \varphi (u) + \displaystyle { \frac{1}{\varepsilon ^2}} f(u) &{} \text{ in } [0,\infty )\times {\mathbb {T}}^d \\ u(0,v) = u_0(v) &{}\text { for } v \in {\mathbb {T}}^d, \end{array}\right. } \end{aligned}$$
(5.1)

where the unknown function u denotes, say, a ‘mass density’, \(d\ge 2\), and \(\varepsilon > 0\) is a small parameter. We remark that the parameter \(\varepsilon \) can be viewed in terms of K, which we use to describe the microscopic Glauber+Zero-range dynamics, as \(\varepsilon = K^{-1/2}\), that is, \(\varepsilon ^{-2} = K\).

This equation is determined by the first two terms of the asymptotic expansion. We refer to Nakamura et al. [41], Alfaro [1], and Alfaro et al. [3] for a similar formal analysis of other equations with a bistable nonlinear reaction term. Let us also mention the papers Alikakos et al. [4], Fife [23] and Rubinstein et al. [43] involving the method of matched asymptotic expansions for related phase transition problems.

Problem \((P^{\;\!\varepsilon })\) possesses a unique solution \(u^\varepsilon \). As \(\varepsilon \rightarrow 0\), the qualitative behavior of this solution is the following. In the very early stage, the nonlinear diffusion term is negligible compared with the reaction term \(\varepsilon ^{-2}f(u)\). Hence, rescaling time by \(\tau =t/\varepsilon ^2\), the equation is well approximated by the ordinary differential equation \(u_\tau =f(u)\) where \(u_\tau = \partial _\tau u\). In view of the bistable nature of f, \(u^\varepsilon \) quickly approaches the values \(\alpha _-\) or \(\alpha _+\), the stable equilibria of the ordinary differential equation, and an interface is formed between the regions \(\{u^\varepsilon \approx \alpha _-\}\) and \(\{u^\varepsilon \approx \alpha _+\}\). Once such an interface is developed, the nonlinear diffusion term becomes large near the interface, and comes to balance with the reaction term so that the interface starts to propagate, on a much slower time scale.
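The fast reaction stage can be illustrated numerically: for a bistable f, solutions of \(u_\tau =f(u)\) starting off \(\alpha _*\) relax quickly to one of \(\alpha _\pm \). The cubic below, with stable zeros 0.2 and 0.9 and unstable zero 0.5, is a hypothetical choice for illustration only, not the f of the particle model:

```python
def f(u, am=0.2, astar=0.5, ap=0.9):
    # bistable reaction: stable zeros am, ap; unstable zero astar
    return -(u - am) * (u - astar) * (u - ap)

def relax(u0, tau=200.0, dt=0.01):
    # forward Euler integration of u_tau = f(u) (sketch)
    u = u0
    for _ in range(int(tau / dt)):
        u += dt * f(u)
    return u

# data starting on either side of alpha_* relax to alpha_- or alpha_+
lo = relax(0.45)   # below alpha_*: flows toward alpha_- = 0.2
hi = relax(0.55)   # above alpha_*: flows toward alpha_+ = 0.9
```

This is the mechanism behind the quick formation of the two phases \(\{u^\varepsilon \approx \alpha _-\}\) and \(\{u^\varepsilon \approx \alpha _+\}\) after the time rescaling \(\tau =t/\varepsilon ^2\).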

To study such interfacial behavior, it is useful to consider a formal asymptotic limit of \((P^{\;\!\varepsilon })\) as \(\varepsilon \rightarrow 0\). Then, the limit solution will be a step function taking the value \(\alpha _-\) on one side of the interface, and \(\alpha _+\) on the other side. This sharp interface, which we will denote by \(\Gamma _t\), obeys a certain law of motion, which is expressed as \((P^0)\) [cf. (2.13)].

It follows from the standard local existence theory for parabolic equations that Problem \((P^{\;\!0})\) possesses locally in time a unique smooth solution. In fact, by using an appropriate parametrization, one can express \(\Gamma _t\) as a graph over a \((d-1)\)-dimensional manifold without boundary and transfer the motion equation \((P^{\;\!0})\) into a parabolic equation on the manifold, at least locally in time. Let \(0\le t <T^{max}\), \(T^{max} \in (0,\infty ]\), be the maximal time interval for the existence of the solution of \((P^{\;\!0})\) and denote this solution by \(\Gamma =\cup _{0\le t < T^{max}} (\{t\}\times \Gamma _t)\). Hereafter, we fix T such that \(0<T<T^{max}\) and work on [0, T]. Since \(\Gamma _0\) is a \(C^{5+\theta }\) hypersurface, we also see that \(\Gamma \) is of class \(C^{ \frac{5+\theta }{2}, 5+\theta }\). For more details concerning problems related to \((P^{\;\!0})\), we refer to Chen [9, 10] or Chen and Reitich [12].

In fact, formal derivation of the interface motion from \((P^\varepsilon )\) is discussed in a companion paper [21], under the Neumann boundary condition. We repeat the argument on \({\mathbb {T}}^d\) for readers’ convenience. We set \(Q_T:=(0,T) \times {\mathbb {T}}^d\) and, for each \(t\in [0,T]\), we denote by \(\Omega ^{(1)}_t\) the region of one side of the hypersurface \(\Gamma _t\), and by \(\Omega ^{(2)}_t\) the region of the other side of \(\Gamma _t\). We define a step function \({\tilde{u}}(t,v)\) by

$$\begin{aligned} {\tilde{u}}(t,v)={\left\{ \begin{array}{ll} \, \alpha _- &{}\text {in } \Omega ^{(1)}_t\\ \, \alpha _+ &{}\text {in } \Omega ^{(2)}_t \end{array}\right. } \quad \text {for } t\in [0,T]\,, \end{aligned}$$
(5.2)

which represents the formal asymptotic limit of \(u^\varepsilon \) (or the sharp interface limit) as \(\varepsilon \rightarrow 0\).

More specifically, we define \(\Gamma ^\varepsilon _t\) from the solution \(u^\varepsilon \) of \((P^\varepsilon )\) as follows:

$$\begin{aligned} \Gamma ^\varepsilon _t := \{ v \in {\mathbb {T}}^d : u^\varepsilon (t,v) = \alpha _* \}. \end{aligned}$$

Assume that, for some \(T > 0\), \(\Gamma ^\varepsilon _t\) is a smooth hypersurface without boundary for each \(t \in [0,T]\) and \(\varepsilon > 0\). Define the signed distance function to \(\Gamma ^\varepsilon _t\) as follows:

$$\begin{aligned} {\overline{d}}^\varepsilon (t,v) := {\left\{ \begin{array}{ll} \mathrm{dist}(v,\Gamma ^\varepsilon _t) &{} \text { for } v \in \overline{D^{\varepsilon ,-}_t} \\ - \mathrm{dist}(v,\Gamma ^\varepsilon _t) &{} \text { for } v \in D^{\varepsilon ,+}_t \end{array}\right. } \end{aligned}$$

where \(D^{\varepsilon ,-}_t\) is the region ‘enclosed’ by \(\Gamma ^\varepsilon _t\) and \(D^{\varepsilon ,+}_t := {\mathbb {T}}^d \setminus \{ D^{\varepsilon ,-}_t \cup \Gamma ^\varepsilon _t \}\). Note that \({\overline{d}}^\varepsilon = 0\) on \(\Gamma ^\varepsilon _t\) and \(| \nabla {\overline{d}}^\varepsilon | = 1\) near \(\Gamma _t^\varepsilon \). Suppose further that \({\overline{d}}^\varepsilon \) is expanded in the form

$$\begin{aligned} {\overline{d}}^\varepsilon (t,v) = {\overline{d}}_0(t,v) + \varepsilon {\overline{d}}_1(t,v) + \varepsilon ^2 {\overline{d}}_2(t,v) + \cdots . \end{aligned}$$

Define

$$\begin{aligned}&\Gamma _t := \{v \in {\mathbb {T}}^d : {\overline{d}}_0(t,v) = 0 \}, \qquad \; \; \Gamma := \cup _{0 \le t \le T} ( \{ t \} \times \Gamma _t ), \\&D^-_t := \{v \in {\mathbb {T}}^d : {\overline{d}}_0(t,v) > 0 \}, \qquad D^+_t := \{ v \in {\mathbb {T}}^d : {\overline{d}}_0(t,v) < 0\}. \end{aligned}$$

As we will see later, the values of \(u^\varepsilon \) are close to \(\alpha _\pm \) on the domains \(D_t^\pm \), which is consistent with \(D_0^\pm \) in (BIP2) and (5.18).

Assume that \(u^\varepsilon \) has the expansions

$$\begin{aligned} u^\varepsilon (t,v) = \alpha _\pm + \varepsilon u^\pm _1(t,v) + \varepsilon ^2 u^\pm _2(t,v) + \cdots \end{aligned}$$

away from the interface \(\Gamma \) and

$$\begin{aligned} u^\varepsilon (t,v) = U_0(t,v,\xi ) + \varepsilon U_1(t,v,\xi ) + \varepsilon ^2 U_2(t,v,\xi ) + \cdots \end{aligned}$$
(5.3)

near \(\Gamma \), where \(\displaystyle {\xi = \frac{{\overline{d}}_0}{\varepsilon }}\). Here the variable \(\xi \) is introduced to describe the rapid transition between the regions \(\{ u^\varepsilon \simeq \alpha _+ \}\) and \( \{ u^\varepsilon \simeq \alpha _- \}\). In addition, we normalize \(U_0\) and \(U_k\) in such a way that

$$\begin{aligned} U_0(t,v,0) = \alpha _*, \quad U_k(t,v,0) = 0. \end{aligned}$$
(5.4)

To match the inner and outer expansions, we require that

$$\begin{aligned} U_0(t,v,\pm \infty ) = \alpha _\mp , \quad U_k(t,v,\pm \infty ) = u^\mp _k(t,v) \end{aligned}$$
(5.5)

for all \(k \ge 1\).

After substituting the expansion (5.3) into \((P^\varepsilon )\), we collect the \(\varepsilon ^{-2}\) terms, which yields the following equation

$$\begin{aligned} (\varphi (U_0))_{zz} + f(U_0) = 0. \end{aligned}$$

Since the equation depends only on the variable z, we may assume \(U_0(t,v,z) = U_0(z)\). In view of the conditions (5.4) and (5.5), we find that \(U_0\) is the unique solution of the following problem

$$\begin{aligned} {\left\{ \begin{array}{ll} (\varphi (U_0))_{zz} + f(U_0) = 0, \\ U_0(-\infty ) = \alpha _+, U_0(0)= \alpha _*, U_0(\infty ) = \alpha _-. \end{array}\right. } \end{aligned}$$
(5.6)

To understand this more clearly, for \(u\ge 0\), we set

$$\begin{aligned} b(u) := f(\varphi ^{-1}(u)), \end{aligned}$$

where \(\varphi ^{-1}\) is the inverse function of \(\varphi :{\mathbb {R}}_+\rightarrow {\mathbb {R}}_+\), and define \(V_0(z) := \varphi (U_0(z))\); note that such a transformation is possible by the condition (5.13). The condition (BS) on f implies that b(u) has exactly three zeros \(\varphi (\alpha _-)\), \(\varphi (\alpha _*)\) and \(\varphi (\alpha _+)\) where

$$\begin{aligned} b'(\varphi (\alpha _-))<0, \ b'(\varphi (\alpha _*))>0, \ \ \mathrm{and \ \ } b'(\varphi (\alpha _+))<0. \end{aligned}$$

Substituting \(V_0\) into Eq. (5.6) yields

$$\begin{aligned} {\left\{ \begin{array}{ll} V_{0zz} + b(V_0) = 0, \\ V_0(-\infty ) = \varphi (\alpha _+), V_0(0)= \varphi (\alpha _*), V_0(\infty ) = \varphi (\alpha _-). \end{array}\right. } \end{aligned}$$
(5.7)

Condition (5.14) then implies

$$\begin{aligned} \int _{\varphi (\alpha _-)}^{\varphi (\alpha _+)} b(u)du =0, \end{aligned}$$

which gives existence and uniqueness, up to translation, of the solution of (5.7); in particular, in our case, the speed of the traveling wave solution \(V_0\) vanishes.
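For the toy balanced nonlinearity \(b(u)=u-u^3\) (an illustrative assumption in which \(\varphi (\alpha _\pm )=\pm 1\), unlike the positive values of the particle model), the standing wave of (5.7) is explicit, and one can check numerically that it satisfies the time-independent equation, i.e., that the wave speed vanishes:

```python
import math

def V0(z):
    # toy balanced nonlinearity b(u) = u - u**3 (zeros -1, 0, 1):
    # the standing wave joining +1 (at -infinity) to -1 is -tanh(z/sqrt(2))
    return -math.tanh(z / math.sqrt(2.0))

def residual(z, h=1e-3):
    # V0'' + b(V0), with the second derivative by central differences
    vzz = (V0(z + h) - 2.0 * V0(z) + V0(z - h)) / (h * h)
    return vzz + V0(z) - V0(z) ** 3

# residual is at discretization-error level uniformly on a grid
res = max(abs(residual(k / 10.0)) for k in range(-50, 51))
```

Here the balance \(B(1)=B(-1)\) for \(B'=b\) is what forces the zero speed, mirroring the role of (5.14).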

Next, we consider the collection of \(\varepsilon ^{-1}\) terms in the asymptotic expansion. In view of the definition of \(U_0(z)\) and the condition (5.4), for each (tv), this yields the following problem

$$\begin{aligned} {\left\{ \begin{array}{ll} (\varphi '(U_0) \overline{U_1})_{zz} + f'(U_0)\overline{U_1} = U_{0z} \partial _t{\overline{d}}_0 - (\varphi (U_0))_z \Delta {\overline{d}}_0,\\ \overline{U_1}(t,v,0) = 0, ~~~ \varphi '(U_0) \overline{U_1} \in L^\infty (\mathbb {R}). \end{array}\right. } \end{aligned}$$
(5.8)

To see the existence of the solution of (5.8) we perform the change of unknown function \(\overline{V_1} = \varphi '(U_0)\overline{U_1}\), which yields the problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \overline{V_{1}}_{zz} + b'(V_0)\overline{V_1} = \displaystyle {\frac{V_{0z}}{\varphi '(\varphi ^{-1} (V_0) )}} \partial _t{\overline{d}}_0 - V_{0z} \Delta {\overline{d}}_0, \\ \overline{V_1}(t,v,0) = 0, ~~~ \overline{V_1} \in L^\infty (\mathbb {R}). \end{array}\right. } \end{aligned}$$
(5.9)

Lemma 2.2 of [3] implies the existence of \(\overline{V_1}\) provided that

$$\begin{aligned} \int _{\mathbb {R}}\left( \frac{1}{\varphi '(\varphi ^{-1}(V_0))} \partial _t{\overline{d}}_0 - \Delta {\overline{d}}_0 \right) V_{0z}^2dz = 0. \end{aligned}$$

Substituting \(V_0 = \varphi (U_0)\) and \( V_{0z} = \varphi '(U_0) U_{0z} \) in the above equation yields

$$\begin{aligned} \partial _t{\overline{d}}_0 = \frac{\int _{\mathbb {R}}V_{0z}^2dz}{\int _{\mathbb {R}}\frac{V_{0z}^2}{\varphi '(\varphi ^{-1}(V_0))}dz} \Delta {\overline{d}}_0 = \frac{\int _{\mathbb {R}}(\varphi '(U_0) U_{0z})^2dz}{\int _{\mathbb {R}}\varphi '(U_0) U_{0z}^2dz} \Delta {\overline{d}}_0. \end{aligned}$$
(5.10)

It is well known that \(\partial _t{\overline{d}}_0\) is equal to the normal velocity V of the interface \(\Gamma _t\), and \(\Delta {\overline{d}}_0\) is equal to \(\kappa \) where \(\kappa \) is the mean curvature of \(\Gamma _t\) multiplied by \(d-1\). Thus, we obtain the interface motion equation on \(\Gamma _t\):

$$\begin{aligned} V = \lambda _0 \kappa , \end{aligned}$$

where

$$\begin{aligned} \lambda _0 = \frac{\int _{\mathbb {R}}(\varphi '(U_0) U_{0z})^2dz}{\int _{\mathbb {R}}\varphi '(U_0) U_{0z}^2dz}. \end{aligned}$$
(5.11)

This speed \(\lambda _0\) is interpreted as the ‘surface tension’ multiplied by the ‘mobility’ of the interface; see Appendix of El Kettani et al. [21] and also [44]. The constant \(\lambda _0\) has another explicit form (1.6). Its derivation is given in the last part of Sect. 2 of [21].
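The quadratures in (5.11) are straightforward to evaluate numerically. The sketch below uses a hypothetical transition profile (not the actual solution of (5.6), whose profile depends on \(\varphi \)); the only claims tested are structural: for linear diffusion \(\varphi (u)=u\) the ratio collapses to 1, recovering the classical Allen–Cahn mobility, while for nonlinear \(\varphi \) the value \(\lambda _0\) is a weighted average of \(\varphi '(U_0)\) and hence lies between its extreme values:

```python
import math

def lambda0(phi_prime, U0, U0z, z_lo=-20.0, z_hi=20.0, n=4000):
    # midpoint-rule quadrature for (5.11):
    #   lambda_0 = int (phi'(U0) U0z)^2 dz / int phi'(U0) U0z^2 dz
    dz = (z_hi - z_lo) / n
    num = den = 0.0
    for i in range(n):
        z = z_lo + (i + 0.5) * dz
        p, s = phi_prime(U0(z)), U0z(z)
        num += (p * s) ** 2 * dz
        den += p * s * s * dz
    return num / den

# hypothetical transition profile, for illustration only
U0 = lambda z: math.tanh(z)
U0z = lambda z: 1.0 / math.cosh(z) ** 2

lin = lambda0(lambda u: 1.0, U0, U0z)       # linear diffusion phi(u) = u
nl = lambda0(lambda u: 2.0 + u, U0, U0z)    # a nonlinear phi' taking values in (1, 3)
```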

5.2 Results on Allen–Cahn equation with nonlinear diffusion

Here we briefly summarize the results obtained in [21] on generation and propagation of interface properties for an Allen–Cahn equation \((P^\varepsilon )\) with nonlinear diffusion, and state estimates on sub and super solutions needed to study the discrete Allen–Cahn equation in Sect. 6.

The nonlinear functions \(\varphi \) and f satisfy the following properties: In line with the previous specification of the microscopic dynamics, we assume (minimally) that \(f \in C^2(\mathbb {R}_+)\) has exactly three zeros \(f(\alpha _-) = f(\alpha _+) = f(\alpha _*) = 0\), where \({\mathbb {R}}_+ = [0,\infty )\), \(0<\alpha _-< \alpha _* < \alpha _+\), and

$$\begin{aligned} f'(\alpha _-)< 0, f'(\alpha _+) < 0, f'(\alpha _*) > 0. \end{aligned}$$
(5.12)

Also, \(f(0)>0\), so that the evolution considered later, starting from positive data, stays positive.

In addition, we assume that \(\varphi \in C^4(\mathbb {R}_+)\) and, for any \(0<u_-<u_+\),

$$\begin{aligned} \varphi '(u) \ge C(\varphi , u_-, u_+) \ \ \mathrm{for \ \ } u_-\le u\le u_+ \end{aligned}$$
(5.13)

for some positive constant \(C(\varphi , u_-, u_+)\). We give one more assumption on f and \(\varphi \), namely

$$\begin{aligned} \int _{\alpha _-}^{\alpha _+} \varphi '(s) f(s) ds = 0. \end{aligned}$$
(5.14)

We note in the particle system context that \(\varphi ,f\in C^\infty ({\mathbb {R}}_+)\) and \(\varphi '(u)>0\) for \(u>0\), and so \(\varphi '(u)\) is bounded away from 0 and \(\infty \) for \(u\in [u_-, u_+]\).
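The balance condition (5.14) is, after the change of variables \(u=\varphi (s)\), precisely the condition \(\int _{\varphi (\alpha _-)}^{\varphi (\alpha _+)} b(u)\,du=0\) used for (5.7). This substitution can be checked by quadrature; the \(\varphi \) and f below are hypothetical choices for illustration (the balance itself need not hold for them, only the equality of the two integrals):

```python
import math

def quad(fn, a, b, n=20000):
    # midpoint rule
    h = (b - a) / n
    return sum(fn(a + (i + 0.5) * h) for i in range(n)) * h

phi = lambda s: s * s                  # hypothetical phi, increasing on s > 0
phi_prime = lambda s: 2.0 * s
phi_inv = lambda u: math.sqrt(u)
f = lambda s: (s - 0.2) * (0.5 - s) * (s - 0.9)   # bistable sign pattern as in (BS)

am, ap = 0.2, 0.9
lhs = quad(lambda s: phi_prime(s) * f(s), am, ap)
rhs = quad(lambda u: f(phi_inv(u)), phi(am), phi(ap))   # b(u) = f(phi^{-1}(u))
```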

As for the initial condition \(u_0\), following (BIP1) and (BIP2), we assume \(u_0 \in C^5({\mathbb {T}}^d)\) and \(0<u_-\le u_0\le u_+\). As a consequence, \(u(t,\cdot )\) is also bounded between \(u_-\) and \(u_+\). We define \(C_0\) as follows,

$$\begin{aligned} C_0 := \Vert u_0 \Vert _{C^0 \left( {\mathbb {T}}^d \right) } + \Vert \nabla u_0 \Vert _{C^0 \left( {\mathbb {T}}^d \right) } + \Vert \Delta u_0 \Vert _{C^0 \left( {\mathbb {T}}^d \right) }. \end{aligned}$$
(5.15)

Furthermore we define \(\Gamma _0\) by

$$\begin{aligned} \Gamma _0 := \{ v \in {\mathbb {T}}^d: u_0(v) = \alpha _* \}. \end{aligned}$$
(5.16)

In addition, recalling assumption (BIP2), we suppose \(\Gamma _0\) is a \(C^{5+\theta }\) hypersurface, \(0< \theta < 1\), without boundary such that

$$\begin{aligned}&\nabla u_0(v) \cdot n(v) \ne 0 \text { if}~ v \in \Gamma _0 \end{aligned}$$
(5.17)
$$\begin{aligned}&u_0 > \alpha _* \text { in } D_0^+, \quad u_0 < \alpha _* \text { in } D_0^- \end{aligned}$$
(5.18)

where \(D_0^\pm \) denote the regions separated by \(\Gamma _0\) and n is the outward normal vector to \(D_0^+\). It is standard that Problem \((P^\varepsilon ) \) possesses a unique classical solution \(u^\varepsilon \).

The goal is to study the singular limit of \(u^\varepsilon \) as \(\varepsilon \downarrow 0\). We first present the generation of interface result (cf. [21], Theorem 1.2). We will use below the following notation:

$$\begin{aligned} \gamma = f'(\alpha _*) , \quad t^\varepsilon = \gamma ^{-1} \varepsilon ^2 |\log \varepsilon | , \quad \delta _0 := \min (\alpha _* - \alpha _-, \alpha _+ - \alpha _*). \end{aligned}$$
(5.19)

Theorem 5.1

Let \(u^\varepsilon \) be the solution of the problem \((P^\varepsilon )\), \(\delta \) be an arbitrary constant satisfying \(0< \delta < \delta _0\). Then, there exist positive constants \(\varepsilon _0\) and \(M_0\) such that, for all \(\varepsilon \in (0, \varepsilon _0)\), we have the following:

  1. (1)

    For all \(v \in {\mathbb {T}}^d\),

    $$\begin{aligned} \alpha _- - \delta \le u^\varepsilon (t^\varepsilon ,v) \le \alpha _+ + \delta . \end{aligned}$$
    (5.20)
  2. (2)

    If \(u_0(v) \ge \alpha _* + M_0 \varepsilon \), then

    $$\begin{aligned} u^\varepsilon (t^\varepsilon ,v) \ge \alpha _+ - \delta . \end{aligned}$$
    (5.21)
  3. (3)

    If \(u_0(v) \le \alpha _* - M_0 \varepsilon \), then

    $$\begin{aligned} u^\varepsilon (t^\varepsilon ,v) \le \alpha _- + \delta . \end{aligned}$$
    (5.22)

To better understand this statement, we remark that the assumption (5.17) implies that \(u_0(v)\) is away from \(\alpha _*\) when v is away from \(\Gamma _0\).

After the interface has been generated, the diffusion term has the same order as the reaction term. As a result the interface starts to propagate slowly. Later we will prove that the interface moves according to the motion equation \((P^0)\) [cf. (2.13)].

Let \(D^{+}_t\) denote the region ‘enclosed’ by the interface \(\Gamma _t\), continuously determined from \(D_0^+\), and set \(D^-_t := {\mathbb {T}}^d \setminus \overline{D^+_t}\). Let \({\overline{d}}(t,v)\) be the signed distance function to \(\Gamma _t\) defined by

$$\begin{aligned} {\overline{d}}(t,v) := {\left\{ \begin{array}{ll} \mathrm{dist}(v, \Gamma _t) &{} \text { for } v \in \overline{D^-_t} \\ - \mathrm{dist}(v, \Gamma _t) &{} \text { for } v \in D^+_t. \end{array}\right. } \end{aligned}$$

The second result concerns the propagation of the interface (cf. [21], Theorem 1.3).

Theorem 5.2

Under the conditions given in Theorem 5.1 and those mentioned above, for any given \(0< \delta < \delta _0\) there exist \(\varepsilon _0 > 0\) and \(C > 0\) such that

$$\begin{aligned} u^\varepsilon (t,v) \in {\left\{ \begin{array}{ll} \, [\alpha _- - \delta , \alpha _+ + \delta ] &{} \text {for all } v \in {\mathbb {T}}^d, \\ \, [\alpha _+ - \delta , \alpha _+ + \delta ] &{} \text {if } v \in D^+_t \text { and } {\overline{d}}(t,v) \le -C\varepsilon , \\ \, [\alpha _- - \delta , \alpha _- + \delta ] &{} \text {if } v \in D^-_t \text { and } {\overline{d}}(t,v) \ge C\varepsilon \end{array}\right. } \end{aligned}$$
(5.23)

for all \(\varepsilon \in (0, \varepsilon _0)\) and for all \(t \in (t^\varepsilon ,T]\).

5.3 Generation of the interface: outline of Proof of Theorem 5.1

The main idea of the proof is based on the comparison principle. Thus, we need to construct appropriate sub and super solutions for the problem \((P^\varepsilon )\). In this first stage, we expect the solution to behave like that of the corresponding ordinary differential equation, and we construct sub and super solutions in terms of the solution \(Y\) of the following initial value problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _\tau Y(\tau , \zeta ) = f(Y(\tau ,\zeta )), &{} \tau > 0,\\ Y(0,\zeta ) = \zeta , &{} \zeta \in {\mathbb {R}}_+. \end{array}\right. } \end{aligned}$$
(5.24)

Recall \(C_0\) defined in (5.15), \(\gamma = f'(\alpha _*), t^\epsilon , \delta _0\) defined in (5.19), and set

$$\begin{aligned} -{\bar{\gamma }} = \min _{\zeta \in [u_- \wedge \alpha _-, u_+\vee \alpha _+]}f'(\zeta ); \end{aligned}$$

note that \(\gamma , {\bar{\gamma }}>0\). The following bounds on \(Y(\tau ,\zeta )\) are used for the proofs of Lemma 5.4 and also Theorem 6.1 below.

Lemma 5.3

Let \(\delta \in (0, \delta _0)\) be arbitrary.

  1. (1)

    There exists a constant \(C_1 = C_1(\delta )>0\) such that

    $$\begin{aligned} 0<e^{-{\bar{\gamma }} \tau }< Y_\zeta (\tau ,\zeta ) \le C_1 e^{\gamma \tau } \end{aligned}$$

    for all \(\zeta \in [u_-,u_+]\) and \(\tau \ge 0\).

  2. (2)

    There exists a constant \(C_2 = C_2(\delta )>0\) such that, for all \(\tau > 0\) and all \(\zeta \in (0, 2C_0)\),

    $$\begin{aligned} \left| \frac{Y_{\zeta \zeta }(\tau , \zeta )}{Y_\zeta (\tau , \zeta )} \right| \le C_2 (e^{\gamma \tau } - 1), \ \ \ |Y_{\zeta \zeta }(\tau , \zeta )|\le C_2(e^{\gamma \tau }-1)e^{\gamma \tau }, \ \ \mathrm{and } \end{aligned}$$
    $$\begin{aligned} |Y_{\zeta \zeta \zeta }(\tau , \zeta )| \le 2 C_2 (e^{2\gamma \tau }-1)e^{\gamma \tau }. \end{aligned}$$
    (5.25)
  3. (3)

    There exist constants \(\varepsilon _0, C_3>0\) such that for all \(\varepsilon \in (0, \varepsilon _0)\):

    1. (a)

For all \(\zeta \in (0, 2C_0)\), where \(C_0>0\) is the constant in (5.15),

      $$\begin{aligned} \alpha _- - \delta \le Y(\gamma ^{-1} |\log \varepsilon |, \zeta ) \le \alpha _+ + \delta . \end{aligned}$$
      (5.26)
    2. (b)

      If \(\zeta \ge \alpha _* + C_3 \varepsilon \), then

      $$\begin{aligned} Y(\gamma ^{-1} |\log \varepsilon |, \zeta ) \ge \alpha _+ - \delta . \end{aligned}$$
      (5.27)
    3. (c)

      If \(\zeta \le \alpha _* - C_3 \varepsilon \), then

      $$\begin{aligned} Y(\gamma ^{-1} |\log \varepsilon |, \zeta ) \le \alpha _- + \delta . \end{aligned}$$
      (5.28)

Proof

We refer to Alfaro et al. [3] and El Kettani et al. [21], Lemma 2, for the proof, except for that of (5.25). To show (5.25), we use

$$\begin{aligned}&Y_{\zeta \zeta }(\tau ,\zeta ) = A(\tau ,\zeta )Y_\zeta (\tau ,\zeta ), \quad A(\tau ,\zeta ) = \int _0^\tau f''(Y(r,\zeta )) Y_\zeta (r,\zeta ) dr, \\&|A(\tau ,\zeta )| \le C_A(e^{\gamma \tau }-1), \end{aligned}$$

given in Lemmas 3.3 and 3.4 of [3] where \(C_A>0\) is some constant. Indeed, we have

$$\begin{aligned} Y_{\zeta \zeta \zeta }(\tau ,\zeta ) = A_\zeta (\tau ,\zeta )Y_\zeta (\tau ,\zeta ) + A(\tau ,\zeta )Y_{\zeta \zeta }(\tau ,\zeta ). \end{aligned}$$

Thus, there exists \(C' > 0\) such that \(A_\zeta \) in the first term can be estimated as

$$\begin{aligned} |A_\zeta (\tau ,\zeta )|&= \left| \int _0^\tau \left\{ f'''(Y(r,\zeta )) Y_\zeta ^2(r,\zeta ) + f''(Y(r,\zeta )) Y_{\zeta \zeta }(r,\zeta ) \right\} dr\right| \\&\le C'\int _0^\tau e^{2\gamma r}dr \le \frac{C'}{2\gamma }(e^{2\gamma \tau }-1). \end{aligned}$$

Thus, by choosing \(C_2\) bigger if necessary, we obtain

$$\begin{aligned} |Y_{\zeta \zeta \zeta }(\tau ,\zeta )| \le C_2(e^{2\gamma \tau }-1) e^{\gamma \tau } + C_2(e^{\gamma \tau }-1)^2 e^{\gamma \tau } \le 2 C_2(e^{2\gamma \tau }-1) e^{\gamma \tau }. \end{aligned}$$

\(\square \)

Define sub and super solutions on \({\mathbb {T}}^d\) for the proof of Theorem 5.1 as follows

$$\begin{aligned} w^{\pm }_\varepsilon (t,v) = Y \left( \frac{t}{\varepsilon ^2}, u_0(v) \pm P(t) \right) , \end{aligned}$$
(5.29)

where

$$\begin{aligned} P(t) = \varepsilon ^2 C_4 \left( e^{\gamma t/\varepsilon ^2} - 1\right) , \end{aligned}$$

for some constant \(C_4>0\). Note that \(P(t)\le \varepsilon ^2C_4(\varepsilon ^{-1}-1)\le \varepsilon C_4\) for \(t\le t^\varepsilon \), where \(t^\varepsilon \) is defined in (5.19). In particular, since \(u_0(v)\ge u_->0\), we have \(u_0(v)-P(t)>0\) for sufficiently small \(\varepsilon >0\). Given that we work on the torus \({\mathbb {T}}^d\), or on \({\mathbb {R}}^d\) with periodic \(u_0\), the constructed sub and super solutions \(w_\varepsilon ^\pm (t,v)\) are periodic for all \(t\in [0,t^\varepsilon ]\).

Denote also the operator \(\mathcal {L}\) by

$$\begin{aligned} \mathcal {L} u = \partial _tu - \Delta \varphi (u) - \frac{1}{\varepsilon ^2} f(u). \end{aligned}$$

We set also, noting \(\varphi (u), \varphi '(u)>0\),

$$\begin{aligned} C_\varphi := \max \varphi (u) + \max \varphi '(u) + \max |\varphi ''(u)|, \end{aligned}$$

where ‘\(\max \)’ is the maximum over \(u\in [0,(2C_0)\vee \alpha _+]\). Then, we have the following bounds; see [21], Lemma 3.

Lemma 5.4

There exist constants \(\varepsilon _0, C_4>0\) such that, for all \(\varepsilon \in (0,\varepsilon _0)\), \(w^{\pm }_\varepsilon \) is a pair of sub and super solutions of \((P^\varepsilon )\) in the domain \([0, t^\varepsilon ]\times {\mathbb {T}}^d\).

In particular, in terms of a constant \(C_5>0\), we have

$$\begin{aligned} \mathcal {L} w^+_\varepsilon \ge C_5e^{-{\bar{\gamma }} \tau /\varepsilon ^2} \quad \mathrm{and }\quad \mathcal {L} w^-_\varepsilon \le -C_5 e^{-\bar{\gamma }\tau /\varepsilon ^2}, \ (\tau ,v)\in [0,t^\varepsilon ]\times {\mathbb {T}}^d. \end{aligned}$$
(5.30)

Remark 5.1

It follows from \(\mathcal {L}w_\varepsilon ^-\le 0\le \mathcal {L}w_\varepsilon ^+\) that \(w_\varepsilon ^\pm \) are sub and super solutions. However, the stronger estimate (5.30) will be useful in the proof of Theorem 6.1 in the discrete setting.

5.4 Propagation of the interface: outline of Proof of Theorem 5.2

We now argue the propagation of the interface given in Theorem 5.2. Again, we will need to construct appropriate sub and super solutions, but now in terms of the function \(U_0\) in (5.6) and a function \(U_1\) similar to that in (5.8).

We first introduce a cut-off signed distance function \(d=d(t,v)\) as follows. Choose \(d_0 > 0\) small enough so that the signed distance function \({\overline{d}}={\overline{d}}(t,v)\) from the interface \(\Gamma _t\) evolving under \((P^0)\) is smooth in the set

$$\begin{aligned} \{ (t,v) \in [0,T] \times {\mathbb {T}}^d, | {\overline{d}}(t,v) | < 3 d_0 \}. \end{aligned}$$

Let h(s) be a smooth non-decreasing function on \(\mathbb {R}\) such that

$$\begin{aligned} h(s) = {\left\{ \begin{array}{ll} s &{} \text {if}~ |s| \le d_0\\ -2d_0 &{} \text {if}~ s \le -2d_0\\ 2d_0 &{} \text {if}~ s \ge 2d_0. \end{array}\right. } \end{aligned}$$

We then define the cut-off signed distance function d by

$$\begin{aligned} d(t,v) = h({\overline{d}}(t,v)), ~~~ (t,v) \in [0,T]\times {\mathbb {T}}^d. \end{aligned}$$

Note that, as d coincides with \({\overline{d}}\) in the region

$$\begin{aligned} \{ (t,v) \in [0,T]\times {\mathbb {T}}^d : | d(t,v)| < d_0 \}, \end{aligned}$$

we have

$$\begin{aligned} \partial _t d = \lambda _0 \Delta d \text { on } \Gamma _t. \end{aligned}$$

Moreover, d is constant far away from \(\Gamma _t\).

In terms of this function d, we now define \(U_1 : [0,T] \times {\mathbb {T}}^d \times {\mathbb {R}}\rightarrow {\mathbb {R}}\) satisfying the following problem

$$\begin{aligned} {\left\{ \begin{array}{ll} (\varphi '(U_0) U_1)_{zz} + f'(U_0)U_1 = (\lambda _0 U_{0z} - (\varphi (U_0))_z) \Delta d(t,v)\\ U_1(t,v,0) = 0, ~~~ \varphi '(U_0) U_1(t,v) \in L^\infty ({\mathbb {R}}) \end{array}\right. } \end{aligned}$$

where \(U_0\) is the solution of (5.6). Since \(d \in C^{ \frac{{5}+\theta }{2}, {5}+\theta }([0,T] \times {\mathbb {T}}^d)\), we have that \(\Delta d \in C^{ \frac{{3}+\theta }{2}, {3}+\theta }([0,T] \times {\mathbb {T}}^d)\). As a consequence, we have \(U_1(\cdot ,\cdot , z) \in C^{ \frac{{3}+\theta }{2}, {3}+\theta }([0,T] \times {\mathbb {T}}^d)\) for each \(z \in {\mathbb {R}}\). Moreover, \(U_1(t,v,\cdot ) \in C^3({\mathbb {R}})\) for each \((t,v) \in [0,T] \times {\mathbb {T}}^d\) by a similar argument given in the proof of Lemma 6 of [21].

We construct the sub and super solutions as follows: Given \(0<\varepsilon <1\), we define

$$\begin{aligned} u^\pm (t,v) \equiv u_\varepsilon ^\pm (t,v) = U_0 \left( \frac{d(t,v) \pm \varepsilon p(t)}{\varepsilon } \right) + \varepsilon U_1 \left( t,v, \frac{d(t,v) \pm \varepsilon p(t)}{\varepsilon } \right) \pm q(t), \end{aligned}$$
(5.31)

where

$$\begin{aligned}&p(t) = e^{- \beta t/ \varepsilon ^2} - e^{Lt} - \hat{L},\\&q(t) = {\tilde{\sigma }} \left( \beta e^{- \beta t / \varepsilon ^2} + \varepsilon ^2 L e^{Lt} \right) . \end{aligned}$$

Here \(\beta , {\tilde{\sigma }}, L, \hat{L}>0\) are constants determined by Lemma 5.5 below. Although we work on \({\mathbb {T}}^d\), if we take the viewpoint of working on \({\mathbb {R}}^d\), we may regard the signed distance function d as periodic with period 1 so that \(u^\pm (t,v)\) are periodic as well for all \(t \in [0,T]\). Then, we have the following bounds; see [21], Lemma 10 and Section 4.4.

Lemma 5.5

One can choose \(\beta , {\tilde{\sigma }}>0\) such that, for each \(\hat{L} > 1\), there exist \(L > 0\) large enough, \(\varepsilon _0 > 0\) small enough, and a constant \(C>0\) such that

$$\begin{aligned} \mathcal {L} u^- \le -C< C \le \mathcal {L}u^+ \text { in } [0,T]\times {\mathbb {T}}^d \end{aligned}$$
(5.32)

for every \(\varepsilon \in (0, \varepsilon _0)\), and

$$\begin{aligned} u^-(0,v) \le u^\varepsilon (t^\varepsilon ,v) \le u^+(0,v) \end{aligned}$$

holds. Hence, \(u^\pm (t - t^\varepsilon ,v)\) are sub and super solutions for Problem \((P^\varepsilon )\) for \(t \in [t^\varepsilon ,T]\).

Remark 5.2

To show that \(u^\pm \) are sub and super solutions, it would have been enough in the above proof to show that \(\mathcal {L}u^-\le 0\le \mathcal {L}u^+\). However, the stronger estimate (5.32) will be useful in the proof of Theorem 6.2 in the discrete setting.

6 Generation and Propagation of the Interface for the ‘Discrete PDE’: Proof of Theorem 2.3

Recall that the initial data \(\{u^N(0,x)\}_{x\in {\mathbb {T}}_N^d}\) of the discrete PDE (2.16) satisfy (BIP1) and (BIP2). Previously, in (5.29) and (5.31), we have constructed super and sub solutions

$$\begin{aligned} w_\varepsilon ^\pm (t,v) \equiv w_K^\pm (t,v) \quad \text {and}\quad u_\varepsilon ^\pm (t,v) \equiv u_K^\pm (t,v), \qquad t\ge 0,\ v\in {\mathbb {T}}^d, \end{aligned}$$

of the problem \((P^\varepsilon )\) with \(\varepsilon =K^{-1/2}\).

We will show that these functions, \(w_K^\pm (t,v)\) and \(u_K^\pm (t,v)\), restricted to the discrete torus \(\frac{1}{N} {\mathbb {T}}_N^d\) actually play the role of super and sub solutions of the discretized hydrodynamic equation (2.16). As noted earlier, we abuse notation and write both \(\frac{x}{N}\) and x for the discrete spatial variables. The proof relies on a comparison argument.

More precisely, we show

$$\begin{aligned} \mathcal {L}^{N,K}w_K^+ \ge 0 \ge \mathcal {L}^{N,K}w_K^- \quad \text {and}\quad \mathcal {L}^{N,K}u_K^+\ge 0\ge \mathcal {L}^{N,K}u_K^-, \end{aligned}$$

where \(\mathcal {L}^{N,K}\) is the operator associated with (2.16). These estimates will follow from estimates shown in the continuum setting, namely \(\mathcal {L} w_\varepsilon ^+ \ge C_5 e^{-{\bar{\gamma }} \tau /\varepsilon ^2} > - C_5 e^{-{\bar{\gamma }} \tau /\varepsilon ^2} \ge \mathcal {L} w_\varepsilon ^-\) [cf. (5.30)], and \(\mathcal {L} u_\varepsilon ^+ \ge C > - C\ge \mathcal {L} u_\varepsilon ^-\) [cf. (5.32)], in combination with the error estimates on \((\mathcal {L} - \mathcal {L}^{N,K})w_K^\pm \) and \((\mathcal {L} - \mathcal {L}^{N,K})u_K^\pm \).

6.1 Generation of a discrete interface

Recall \(Y(\tau )=Y(\tau ,\zeta )\) for \(\tau \ge 0, \zeta \in {\mathbb {R}}_+\), is the solution of the ordinary differential equation (5.24), with the initial value \(Y(0)=\zeta \).

Theorem 6.1

Let \(u^N(t,\cdot )\) be the solution of the discrete PDE (2.16) with initial value \(u^N(0,\cdot )\). Let also \(\delta \in (0,\delta _0)\) where \(\delta _0 = \min \{\alpha _*-\alpha _-,\alpha _+-\alpha _*\}\), and \(t^N = \tfrac{1}{2\gamma K} \log K\). Suppose that \(K\equiv K(N)=o(N^{2\gamma /(3\gamma +{\bar{\gamma }})})\). Then, there exist \(N_0, M_0>0\) such that the following hold for every \(N \ge N_0\):

  1. For all \(x\in {\mathbb {T}}_N^d\),

     $$\begin{aligned} \alpha _--\delta \le u^N(t^N,x) \le \alpha _++\delta . \end{aligned}$$
  2. If \(u_0(\tfrac{x}{N}) \ge \alpha _*+M_0K^{-1/2}\), then

     $$\begin{aligned} u^N(t^N,x) \ge \alpha _+-\delta . \end{aligned}$$
  3. If \(u_0(\tfrac{x}{N}) \le \alpha _*-M_0K^{-1/2}\), then

     $$\begin{aligned} u^N(t^N,x) \le \alpha _-+\delta . \end{aligned}$$

Proof

Recalling (5.29), we define sub and super solutions of the continuous system as

$$\begin{aligned} w_K^\pm (t,v) = Y(Kt, u_0(v)\pm P(t)), \quad v\in {\mathbb {T}}^d, \end{aligned}$$

where \(P(t) = C_4(e^{K\gamma t}-1)/K\). Define the operators \(\mathcal {L}^K\) and \(\mathcal {L}^{N,K}\) by

$$\begin{aligned} \mathcal {L}^K u = \partial _t u - \Delta \varphi (u) -K f(u), \quad v \in {\mathbb {T}}^d, \end{aligned}$$

with respect to the continuous Laplacian \(\Delta \) on \({\mathbb {T}}^d\) and also continuous functions \(u=\{u(t,v)\}_{v\in {\mathbb {T}}^d}\), and

$$\begin{aligned} \mathcal {L}^{N,K} u = \partial _t u - \Delta ^N \varphi (u) -K f(u), \quad x \in {\mathbb {T}}_N^d, \end{aligned}$$

for discrete functions \(u=\{u(t,x)\}_{x\in {\mathbb {T}}_N^d}\), respectively.

We now make use of an estimate in the proof of Theorem 5.1: in Lemma 5.4, it is shown that

$$\begin{aligned} \mathcal {L}^K w_K^+ \ge C_5 e^{-{\bar{\gamma }} Kt^N} = C_5K^{-{\bar{\gamma }}/2\gamma }>0 \end{aligned}$$

holds for some \(C_5>0\) and large enough K; note that \(t^N=t^\varepsilon =\gamma ^{-1}\varepsilon ^2 |\log \varepsilon |\) and \(K=\varepsilon ^{-2}\). On the other hand,

$$\begin{aligned} \mathcal {L}^{N,K} w_K^+ = \mathcal {L}^K w_K^+ + (\Delta \varphi (w_K^+) - \Delta ^N \varphi (w_K^+)), \end{aligned}$$

and, by Taylor’s formula, the second term is bounded by

$$\begin{aligned} \tfrac{C_2}{N} \sup _{v\in {\mathbb {T}}^d} \left| D_v^3 \{ \varphi (w_K^+(t,v))\}\right| , \end{aligned}$$

where \(|D^3_v \{\, \cdot \,\}|\) means the sum of the absolute values of all third derivatives in v.

Since \(u_0\in C^3({\mathbb {T}}^d)\) and \(\varphi \in C^3({\mathbb {R}}_+)\) (note that \(w_K^\pm \) takes only bounded values, so effectively \(\varphi \in C_b^3([0,M])\)), from items (1)–(3) of Lemma 5.3, especially (5.25), and noting \(e^{3\gamma Kt^N} = K^{3/2}\), we obtain

$$\begin{aligned} \sup _{0\le t \le t^N, x\in {\mathbb {T}}_N^d} |\Delta \varphi (w_K^+(t,\tfrac{x}{N}))- \Delta ^N \varphi (w_K^+(t,\tfrac{x}{N}))| \le \tfrac{C_3}{N} K^{3/2}. \end{aligned}$$

Thus, this term is absorbed by \(C_5K^{-{\bar{\gamma }}/2\gamma }\) if \(K=o(N^{2\gamma /(3\gamma +{\bar{\gamma }})})\) and N is large enough.
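The absorption step is a direct comparison of exponents: under the stated assumption on K,

$$\begin{aligned} \frac{C_3}{N}K^{3/2} \le C_5 K^{-{\bar{\gamma }}/2\gamma } \quad \Longleftrightarrow \quad K^{\frac{3}{2}+\frac{{\bar{\gamma }}}{2\gamma }} = K^{\frac{3\gamma +{\bar{\gamma }}}{2\gamma }} \le \frac{C_5}{C_3}\,N, \end{aligned}$$

which holds for all large N precisely when \(K=o(N^{2\gamma /(3\gamma +{\bar{\gamma }})})\).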

Therefore, we obtain \(\mathcal {L}^{N,K} w_K^+ \ge 0\) for \(N\ge N_0\) with some \(N_0>0\). By Lemma 3.1, we see \(u^N(t,x) \le w_K^+(t,\tfrac{x}{N})\). Similarly, one can show \(w_K^-(t,\frac{x}{N})\le u^N(t,x)\). Thus, the proof of the theorem is concluded similarly to the proof of Theorem 5.1; see [21]. \(\square \)

6.2 Propagation of a discrete interface

Recall the interface flow \(\Gamma _t\), and the two functions \(u^{\pm }(t,v) \equiv u^{\pm }_K(t,v)\) defined by (5.31), namely

$$\begin{aligned} u^\pm (t,v) = U_0\left( K^{1/2}d(t,v) \pm p(t)\right) + K^{-1/2} U_1\left( t, v, K^{1/2}d(t,v) \pm p(t)\right) \pm q(t), \end{aligned}$$

and \(u^N(t,v)\) defined in (2.19) from the discretized hydrodynamic equation (2.16).

Theorem 6.2

Assume that the inequality (6.1) below holds at \(t=0\), and that \(K=K(N) \uparrow \infty \) satisfies \(K =o(N^{2/3})\). Then, taking \(\beta , {\tilde{\sigma }}, L, \hat{L} >0\) in p(t) and q(t) as in Lemma 5.5, there exists \(N_0\in {\mathbb {N}}\) such that

$$\begin{aligned} u^-(t,v) \le u^N(t+t^N,v) \le u^+(t,v), \end{aligned}$$
(6.1)

holds for every \(t\in [0,T-t^N]\), \(v= x/N, x \in {\mathbb {T}}_N^d\) and \(N\ge N_0\).

Proof

The upper bound in (6.1) follows from Lemma 3.1, once we can show that

$$\begin{aligned} \mathcal {L}^{N,K} u^+ = \partial _t u^+ - \Delta ^N \varphi (u^+) - K f(u^+) \ge 0, \quad x \in {\mathbb {T}}_N^d, \end{aligned}$$
(6.2)

for every \(N\ge N_0\) with some \(N_0\in {\mathbb {N}}\). As in the proof of Theorem 6.1, we decompose

$$\begin{aligned} \mathcal {L}^{N,K} u^+ = \mathcal {L}^K u^+ +( \Delta \varphi (u^+) - \Delta ^N\varphi (u^+)), \end{aligned}$$
(6.3)

where \(\mathcal {L}^K u^+ = \partial _t u^+ -\Delta \varphi (u^+) -K f(u^+)\).

We now make use of an estimate derived in the proof of Theorem 5.2: by Lemma 5.5, the first term \(\mathcal {L}^Ku^+\) in (6.3) is bounded on \([0,T]\times {\mathbb {T}}^d\) as

$$\begin{aligned} \mathcal {L}^Ku^+ \ge C>0, \end{aligned}$$
(6.4)

provided the parameters \(\beta , {\tilde{\sigma }}, L, \hat{L}>0\) there are chosen properly.

For the second term in (6.3), since \(u^+ \in C^{ \frac{{3}+\theta }{2}, {3}+\theta }\) by the regularity of \(d, U_0\) and \(U_1\) [cf. discussion above (5.31)], and also

$$\begin{aligned} \sup _{t\in [0,T], v \in {\mathbb {T}}^d} \big | (\nabla _v)^i u^+(t,v)\big | \le CK^{i/2}, \quad i=1,2,3, \end{aligned}$$

we have

$$\begin{aligned} \left| \Delta \varphi (u^+(t,\tfrac{x}{N})) - \Delta ^N \varphi (u^+(t,\tfrac{x}{N}))\right| \le C_1 \tfrac{K^{3/2}}{N}. \end{aligned}$$

Indeed, this follows from Taylor expansion for \(\Delta ^N \varphi (u^+)\) up to the third order term, noting that \(\varphi \in C^3({\mathbb {R}}_+)\) and \(u^+(t,v)\) is bounded. Therefore, if \(K=o(N^{2/3})\), this term is absorbed by the positive constant C in (6.4) for \(\mathcal {L}^Ku^+\). This proves (6.2).
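The exponent count behind this absorption is the same as in Theorem 6.1, now without the factor \(K^{-{\bar{\gamma }}/2\gamma }\):

$$\begin{aligned} C_1 \frac{K^{3/2}}{N} \rightarrow 0 \quad \Longleftrightarrow \quad K^{3/2} = o(N) \quad \Longleftrightarrow \quad K = o(N^{2/3}), \end{aligned}$$

so that the discretization error is eventually dominated by the fixed constant \(C>0\).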

The lower bound by \(u^-(t,v)\) is shown similarly. \(\square \)

6.3 Proof of Theorem 2.3

The proof of Theorem 2.3 follows from Theorems 6.1 and 6.2. By the assumption (BIP2), \(\nabla u_0(v)\cdot n(v) \ne 0\) for \(v\in \Gamma _0\). Hence, for \(v\not \in \Gamma _0\), we have \(u_0(v) \ne \alpha _*\). Then, for N large enough, \(|u_0(v) - \alpha _*|\ge \epsilon _v> M_0K^{-1/2}\), where \(M_0\) is the constant in Theorem 6.1.

Recall \(u^N(t,v)\) in (2.19). By Theorem 6.1, at time \(t^N = (2\gamma K)^{-1}\log K\), either \(u^N(t^N,v)\ge \alpha _+ - \delta \) or \(u^N(t^N,v)\le \alpha _-+\delta \) for a small \(\delta >0\).

Since for large N, we have \(u^-(0,v)\le u^N(t^N,v)\le u^+(0,v)\), thinking of \(u^N(t^N, \cdot )\) as an initial condition, by Theorem 6.2, we can ‘propagate’ and obtain \(u^-(t-t^N, v)\le u^N(t,v)\le u^+(t-t^N,v)\) for \(t^N\le t\le T\). As \(N\uparrow \infty \), we obtain, for each \(0<t\le T\) and \(v\not \in \Gamma _t\) that \(u^N(t,v)\rightarrow \chi _{\Gamma _t}(v)\), concluding the proof. \(\square \)

7 A ‘Boltzmann–Gibbs’ Principle: Proof of Theorem 3.4

The strategy of the replacement is, roughly, in two steps: first, estimate via a time average the closeness of \(\sum _x a_{t,x} f_x\) to a conditional mean given the particle mass in a block of mesoscopic width of order \(N^{\epsilon _0}\); then, estimate the time integral of this conditional mean through careful local central limit and large deviation bounds. In the first step, in terms of a translation-invariant reference measure \(\nu _\beta \), a Rayleigh spectral bound is quantified using the spectral gap assumption (SP). In the second step, the local bounds are formulated with respect to an inhomogeneous product measure \(\nu _{u^N(t,\cdot )}\) [cf. (2.3)]. Some pre-processing in terms of truncation bounds is done before the main estimates.

We now give an outline of the proof of Theorem 3.4, referring to statements proved in the following subsections. In this section, \(C>0\) denotes a constant depending only on fixed parameters, which may change from line to line.

We have, by Lemmas 7.4 and 7.5, bounding \(|a_{t,x}|\le M\), that

$$\begin{aligned}&{\mathbb E}_{N} \left| \int _0^T \sum _{x\in {\mathbb {T}}^d_N} a_{t,x}f_x dt\right| \nonumber \\&\quad \le {\mathbb E}_{N} \left| \int _0^T \sum _{x\in {\mathbb {T}}^d_N} a_{t,x}f_x 1(\sum _{y\in \Lambda _h}\eta _{y+x}\le A) dt\right| + \int _0^T CMH(\mu ^N_t|\nu ^N_t)dt + \frac{CMTN^d}{A}\nonumber \\&\quad \le {\mathbb E}_{N} \left| \int _0^T \sum _{x\in {\mathbb {T}}^d_N} a_{t,x}f_x 1(\sum _{y\in \Lambda _h}\eta _{y+x}\le A)1(\eta ^\ell _x\le B) dt\right| \nonumber \\&\qquad +\, \int _0^T CM\left( 1+ \frac{A+1}{B}\right) H(\mu ^N_t|\nu ^N_t)dt + \frac{CMTN^d(A+1)}{B} + \frac{CMTN^d}{A}. \end{aligned}$$
(7.1)

The expectation in the right-side of (7.1) is bounded by

$$\begin{aligned}&{\mathbb E}_{N}\left| \int _0^T \sum _{x\in {\mathbb {T}}^d_N} a_{t,x}m_x dt\right| + {\mathbb E}_{N}\left| \int _0^T E_{\nu _\beta }\Big [\sum _{x\in {\mathbb {T}}^d_N} a_{t,x}f_x1\big (\sum _{y\in \Lambda _h}\eta _{y+x}\le A\big ) \big |\eta ^\ell _x\Big ]1(\eta ^\ell _x\le B)dt\right| \end{aligned}$$
(7.2)

where \(m_x\) is defined in (7.10). The first and second terms in (7.2) are bounded by Lemma 7.6 and by Lemmas 7.9 and 7.10, respectively. Adding these bounds together, with simple overestimates, we find that (7.2) is bounded by

$$\begin{aligned}&\frac{C(T+1)MKN^d}{G} + \frac{CTMG\ell ^{d+2}A^2B^2N^d}{N^2} \\&\quad + 2CM\int _0^T H(\mu ^N_t|\nu ^N_t)dt + \frac{2CTMN^d}{\ell ^d} + \frac{CTMKN^d\ell ^2(B+1)}{N^2} + \frac{CTMN^d}{A}. \end{aligned}$$

Here, \(A,B, G, \ell \) are in form \(A=N^{\alpha _A}\), \(B=N^{\alpha _B}\), \(G=N^{\alpha _{G}}\) and \(\ell =N^{\alpha _\ell }\) for parameters \(\alpha _A, \alpha _B,\alpha _{G}, \alpha _\ell >0\). By the assumptions of Lemmas 7.5 and 7.6, we assume that \(\alpha _B = 2\alpha _A\) and

$$\begin{aligned} \alpha _A+\alpha _{G} + (d+2)\alpha _\ell + 2\alpha _B -2 = 5\alpha _A +\alpha _{G}+ (d+2)\alpha _\ell -2<0. \end{aligned}$$
(7.3)

Combining the estimates, since \(A/B = 1/A\le 1\) and \(K\ge 1\), the left-hand side of (7.1) is bounded by

$$\begin{aligned} C\int _0^T M H(\mu _t^N| \nu ^N_t)dt + C(T+1)MN^d\Big (\frac{1}{A} + \frac{K}{G} + \frac{G\ell ^{d+2}A^2B^2}{N^2}+\frac{1}{\ell ^d} + \frac{KB\ell ^2}{N^2} \Big ). \end{aligned}$$
(7.4)

To ensure that the second term on the right-hand side of (7.4) is bounded by \(CTMKN^{d-\kappa }\) for some \(\kappa >0\), we now fix \(\alpha _B=2\alpha _A\) and choose \(\alpha _A\), \(\alpha _{G}\), \(d\alpha _\ell \) so that \(2-[\alpha _{G} +[(d+2)/d]d\alpha _\ell + 2\alpha _A + 2\alpha _B]>0\). Then, the constraint (7.3) also holds. A convenient choice is \(\varepsilon _0=\alpha _A=\alpha _{G}=d\alpha _\ell \) with \(\varepsilon _0= 2- (7+ (d+2)/d)\varepsilon _0\), that is, \(\varepsilon _0=2d/(9d+2)\). Inserting into (7.4) yields the right-hand side of (3.7) as desired. \(\square \)
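The exponent bookkeeping in the last paragraph can be checked mechanically. The following sketch (our own notation; the function `epsilon0` is not from the paper) verifies, in exact arithmetic for several dimensions, that \(\varepsilon _0 = 2d/(9d+2)\) satisfies both the defining relation and the constraint (7.3):

```python
from fractions import Fraction

def epsilon0(d):
    # Solve eps0 = 2 - (7 + (d+2)/d)*eps0 exactly, i.e. eps0*(8 + (d+2)/d) = 2,
    # which simplifies to eps0 = 2d/(9d+2).
    return Fraction(2) / (8 + Fraction(d + 2, d))

for d in range(2, 7):
    e0 = epsilon0(d)
    assert e0 == Fraction(2 * d, 9 * d + 2)
    # exponent choices: alpha_A = alpha_G = eps0, d*alpha_ell = eps0, alpha_B = 2*eps0
    aA = aG = e0
    a_ell = e0 / d
    aB = 2 * e0
    # constraint (7.3): 5*alpha_A + alpha_G + (d+2)*alpha_ell - 2 < 0
    assert 5 * aA + aG + (d + 2) * a_ell - 2 < 0
    # the slack 2 - [alpha_G + (d+2)*alpha_ell + 2*alpha_A + 2*alpha_B] equals eps0 itself
    assert 2 - (aG + (d + 2) * a_ell + 2 * aA + 2 * aB) == e0
```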

We now turn to the estimates used in the proof of Theorem 3.4. We will assume throughout this section condition (BIP1) and that \(H(\mu ^N_0|\nu ^N_0)=O(N^d)\).

To simplify notation, we will drop t-dependence in the notation \(\eta _x = \eta _x(t)\), and related quantities when the context is clear.

7.1 Preliminary estimates

Recall the ‘entropy inequality’ following from the variational form of the relative entropy between two probability measures \(\mu \) and \(\nu \):

$$\begin{aligned} E_{\mu }[F] \le H(\mu |\nu ) + \log E_{\nu }\big [e^{F}\big ]. \end{aligned}$$
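Indeed, this is immediate from the Donsker–Varadhan variational formula for relative entropy,

$$\begin{aligned} H(\mu |\nu ) = \sup _{F}\big \{ E_{\mu }[F] - \log E_{\nu }[e^{F}] \big \}, \end{aligned}$$

where the supremum runs over bounded measurable F; taking any fixed F as a test function and rearranging gives the stated inequality.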

Lemma 7.1

We have, for a small \(\gamma >0\), and uniformly over \(t\in [0,T]\) that

$$\begin{aligned} E_{\mu ^N_t}\big [\sum _{x\in {\mathbb {T}}^d_N} \eta _x\big ] \le \frac{H(\mu ^N_t|\nu ^N_t)}{\gamma } +O(N^d). \end{aligned}$$

Proof

Write

$$\begin{aligned} E_{\mu ^N_t}\big [\sum _{x\in {\mathbb {T}}^d_N} \eta _x\big ]&\le \frac{H(\mu ^N_t|\nu ^N_t)}{\gamma } + \frac{1}{\gamma }\log E_{\nu ^N_t}e^{\gamma \sum _{x\in {\mathbb {T}}^d_N} \eta _x}\\&\le \frac{H(\mu ^N_t|\nu ^N_t)}{\gamma } + \frac{N^d}{\gamma }\max _{x} E_{\nu ^N_t}e^{\gamma \eta _x}\ \le \ \frac{H(\mu ^N_t|\nu ^N_t)}{\gamma } + O(N^d), \end{aligned}$$

given \(\max _{x\in {\mathbb {T}}^d_N} E_{\nu ^N_t}e^{\gamma \eta _x}<\infty \) for a \(\gamma >0\) small (relative to \(\varphi ^*\) defined near (2.2) say), noting the uniform estimate on \(u^N\) in Lemma 3.1. \(\square \)

Lemma 7.2

For \(\beta >0\) and the \(\gamma \) in Lemma 7.1, uniformly over \(t\in [0,T]\), we have

$$\begin{aligned} H(\mu ^N_t|\nu _\beta ) \le (1+C(\beta , u_+)\gamma ^{-1})H(\mu ^N_t|\nu ^N_t) + O(N^d). \end{aligned}$$

In particular, when \(H(\mu ^N_0|\nu ^N_0)=O(N^d)\), we have \(H(\mu ^N_0|\nu _\beta )=O(N^d)\).

Proof

Write

$$\begin{aligned} H(\mu ^N_t|\nu _\beta ) = \int \log \frac{d\mu ^N_t}{d\nu _\beta }d\mu ^N_t = H(\mu ^N_t|\nu ^N_t) + \int \log \frac{d\nu ^N_t}{d\nu _\beta }d\mu ^N_t \end{aligned}$$
(7.5)

and \( \frac{d\nu ^N_t}{d\nu _\beta } =\prod _x \frac{d\nu ^N_t}{d\nu _\beta }(\eta _x) \).

From Lemma 3.1, we have that \(u^N\) is uniformly bounded between \(c_- =u_-\wedge \alpha _-\) and \(c_+=u_+\vee \alpha _+\). Since \(Z_{u^N(t,x)}=\sum _{k\ge 0} \varphi (u^N(t,x))^k/g(k)!\) and \(\varphi \) is an increasing function, we also have \(Z_{u^N(t,x)}\ge Z_{c_-}\). In addition, \(\varphi (u^N(t,x))\le \varphi (c_+)\). Then,

$$\begin{aligned} \frac{d\nu _{u^N(t,x)}}{d\nu _\beta }(k) = \frac{Z_{u^N(t,x)}^{-1} \frac{\varphi (u^N(t,x))^k}{g(k)!} }{Z_{\beta }^{-1} \frac{\varphi (\beta )^k}{g(k)!} } = \frac{Z_\beta }{Z_{u^N(t,x)}} \frac{\varphi (u^N(t,x))^k}{\varphi (\beta )^k} \le \frac{Z_\beta }{Z_{c_-}} \left( \frac{\varphi (c_+)}{\varphi (\beta )} \right) ^k. \end{aligned}$$

Therefore, the right-hand side of (7.5) is bounded by

$$\begin{aligned} H(\mu ^N_t|\nu ^N_t) + N^d \log \frac{Z_\beta }{Z_{c_-}} + \log \big (\tfrac{\varphi (c_+)}{\varphi (\beta )}\big )E_{\mu ^N_t}\big [\sum _{x\in {\mathbb {T}}^d_N} \eta _x\big ]. \end{aligned}$$

Noting that \(E_{\mu ^N_t}\big [\sum _{x\in {\mathbb {T}}^d_N} \eta _x\big ] \le \gamma ^{-1}H(\mu ^N_t|\nu ^N_t)+ O(N^d)\) by Lemma 7.1, the proof is complete. \(\square \)

We now give an estimate to be used several times in the sequel. Recall \(\Lambda _{k}=\{x\in {\mathbb {T}}^d_N: |x|\le k\}\) is a cube of width \(2k+1\). Let \(q=q(\eta )\) be a function supported in \(\Lambda _k\). Denote \(q_x = \tau _x q\) for \(x\in {\mathbb {T}}^d_N\). Consider the collection of \(|\Lambda _k|\) regular sublattices \({\mathbb {T}}^d_{N, z, k}\subset {\mathbb {T}}^d_N\), where \(z\in \Lambda _k\) and neighboring points in the grid are separated by \(2k+1\).

Lemma 7.3

We have, uniformly over \(t\in [0,T]\), that

$$\begin{aligned} \log E_{\nu ^N_t}\big [ e^{\sum _{x\in {\mathbb {T}}^d_N} q_x}\big ]&\le \frac{1}{|\Lambda _k|} \sum _{z\in \Lambda _k} \log E_{\nu ^N_t} \big [e^{|\Lambda _k| \sum _{w\in {\mathbb {T}}^d_{N,z,k}} q_w}\big ] \\&= \frac{1}{|\Lambda _k|}\sum _{x\in {\mathbb {T}}^d_N} \log E_{\nu ^N_t} \big [e^{|\Lambda _k| q_x}\big ].\nonumber \end{aligned}$$
(7.6)

Proof

One can write \(\sum _{x\in {\mathbb {T}}^d_N} q_x = \sum _{z\in \Lambda _k}\sum _{w\in {\mathbb {T}}^d_{N,z,k}} q_w\). The inequality in (7.6) results from Hölder's inequality. The last equality follows since, for each \(z\in \Lambda _k\), the random variables \(\{q_w: w\in {\mathbb {T}}^d_{N,z,k}\}\) are independent under \(\nu ^N_t\). \(\square \)
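For concreteness, the Hölder step is the generalized Hölder inequality with \(|\Lambda _k|\) factors, applied with \(X_z = \sum _{w\in {\mathbb {T}}^d_{N,z,k}} q_w\):

$$\begin{aligned} E_{\nu ^N_t}\Big [ \prod _{z\in \Lambda _k} e^{X_z} \Big ] \le \prod _{z\in \Lambda _k} E_{\nu ^N_t}\Big [ e^{|\Lambda _k| X_z} \Big ]^{1/|\Lambda _k|}, \end{aligned}$$

since the exponents \(1/|\Lambda _k|\), one per \(z\in \Lambda _k\), sum to 1; taking logarithms gives the inequality in (7.6).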

7.2 Truncation estimates

We now develop some truncation estimates, since under the Glauber+Zero-range dynamics, there is no a priori bound on the number of particles at a site \(x\in {\mathbb {T}}^d_N\).

The first estimate limits the number of particles in \(\tau _x \Lambda _h\), where we recall that \(\Lambda _h\) denotes a finite box containing the support of the function h through which \(f_x\) is defined in (3.5).

Lemma 7.4

Let \(A=A_N = N^{\alpha _A}\) for \(\alpha _A>0\). Then, uniformly over \(t\in [0,T]\), we have

$$\begin{aligned} E_{\mu ^N_t}\Big [ \sum _{x\in {\mathbb {T}}^d_N} |f_x| 1(\sum _{y\in \Lambda _h} \eta _{y+x}>A)\Big ] \le CH(\mu _t^N| \nu ^N_t) + \frac{CN^d}{A}. \end{aligned}$$

Proof

Write, through the entropy inequality and Lemma 7.3, with respect to a \(\gamma _1>0\), that

$$\begin{aligned}&E_{\mu ^N_t} \Big [\sum _{x\in {\mathbb {T}}^d_N} |f_x| 1(\sum _{y\in \Lambda _h} \eta _{y+x}>A) \Big ]\\&\quad \le \frac{ H(\mu _t^N| \nu ^N_t)}{\gamma _1} + \frac{1}{\gamma _1}\log E_{\nu ^N_t}\Big [ e^{\gamma _1\sum _{x\in {\mathbb {T}}^d_N} |f_x|1(\sum _{y\in \Lambda _h}\eta _{y+x}>A)}\Big ]\\&\quad \le \frac{ H(\mu _t^N| \nu ^N_t)}{\gamma _1} +\frac{1}{\gamma _1|\Lambda _h|}\sum _{x\in {\mathbb {T}}^d_N}\log E_{\nu ^N_t}\Big [e^{\gamma _1|\Lambda _h||f_x|1(\sum _{y\in \Lambda _h}\eta _{y+x}>A)}\Big ]\\&\quad = \frac{ H(\mu _t^N| \nu ^N_t)}{\gamma _1}\\&\qquad + \frac{1}{\gamma _1|\Lambda _h|}\sum _{x\in {\mathbb {T}}^d_N}\log \Big \{1- P_{\nu ^N_t}\big (\sum _{y\in \Lambda _h}\eta _{y+x}>A\big ) + E_{\nu ^N_t}\Big [1(\sum _{y\in \Lambda _h}\eta _{y+x}>A) e^{\gamma _1|\Lambda _h| |f_x|}\Big ]\Big \}. \end{aligned}$$

The last line is further estimated with the inequality \(\log (1+x)\le x\) for \(x\ge 0\), and then Markov’s inequality:

$$\begin{aligned}&\frac{ H(\mu _t^N| \nu ^N_t)}{\gamma _1} + \frac{1}{\gamma _1|\Lambda _h|}\sum _{x\in {\mathbb {T}}^d_N} E_{\nu _t}\Big [1(\sum _{y\in \Lambda _h}\eta _{y+x}>A) \big (e^{\gamma _1 |\Lambda _h| |f_x|} - 1\big )\Big ]\nonumber \\&\quad \le \frac{ H(\mu _t^N| \nu ^N_t)}{\gamma _1} + \frac{1}{\gamma _1|\Lambda _h|A}\sum _{x\in {\mathbb {T}}^d_N} E_{\nu ^N_t}\Big [\sum _{y\in \Lambda _h}\eta _{y+x}e^{\gamma _1 |\Lambda _h| |f_x|}\Big ]. \end{aligned}$$
(7.7)

We note that \(|f_x(\eta )| \le C_1\sum _{y\in \Lambda _h}\eta _{x+y} + C_2\) through the bounds (3.4). Then, by the uniform estimate Lemma 3.1, we may choose \(\gamma _1\) small enough [relative to \(\varphi ^*\) defined above (2.2)] so that

$$\begin{aligned} \sup _{x\in {\mathbb {T}}^d_N} E_{\nu ^N_t}\big [\sum _{y\in \Lambda _h}\eta _{y+x}e^{\gamma _1|\Lambda _h| |f_x|}\big ]<\infty . \end{aligned}$$

The display (7.7) is then bounded by \(CH(\mu _t^N| \nu ^N_t) + CN^d/A\), as desired. \(\square \)

We now truncate the average number of particles in a block of width \(\ell \) around x. Define

$$\begin{aligned} \eta ^\ell _x = \frac{1}{(2\ell +1)^d}\sum _{z\in \Lambda _\ell }\eta _{z+x}. \end{aligned}$$
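The block average satisfies \(\sum _{x} \eta ^\ell _x = \sum _{x}\eta _x\) over the torus, since each site is counted exactly \((2\ell +1)^d\) times; this identity is used in the proof of Lemma 7.5 below. A minimal numerical sketch (the function name and the test configuration are illustrative only, not from the paper):

```python
import itertools

def block_average_field(eta, N, ell, d):
    """eta: dict mapping each site of the torus (Z/NZ)^d to a particle count.
    Returns {x: eta^ell_x}, the average of eta over the cube x + Lambda_ell."""
    width = (2 * ell + 1) ** d
    shifts = list(itertools.product(range(-ell, ell + 1), repeat=d))
    return {
        x: sum(eta[tuple((xi + zi) % N for xi, zi in zip(x, z))] for z in shifts) / width
        for x in eta
    }

# Summing the block averages over the torus recovers the total particle number.
N, ell, d = 5, 1, 2
sites = list(itertools.product(range(N), repeat=d))
eta = {x: (3 * x[0] + x[1]) % 4 for x in sites}   # arbitrary configuration
avg = block_average_field(eta, N, ell, d)
assert abs(sum(avg.values()) - sum(eta.values())) < 1e-9
```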

Lemma 7.5

Let \(B=B_N=A^2_N = N^{2\alpha _A}\) for \(\alpha _A>0\), and \(\ell \ge 1\). Then, for large N, uniformly over \(t\in [0,T]\), we have

$$\begin{aligned}&E_{\mu _t^N} \left[ \sum _{x\in {\mathbb {T}}^d_N} |f_x| 1(\sum _{y\in \Lambda _h}\eta _{y+x}\le A) 1(\eta ^\ell _x>B) \right] \\&\quad \le \frac{C (A+1)}{B}H(\mu _t^N|\nu ^N_t) + \frac{C(A+1)}{B}N^d. \end{aligned}$$

Proof

Since \(|f_x(\eta )| \le C_1\sum _{y\in \Lambda _h} \eta _{x+y} + C_2\) by (3.4), and \(\sum _{x\in {\mathbb {T}}^d_N} \eta ^\ell _x = \sum _{x\in {\mathbb {T}}^d_N} \eta _x\), we have

$$\begin{aligned}&E_{\mu _t^N} \Big [\sum _{x\in {\mathbb {T}}^d_N} |f_x| 1(\sum _{y\in \Lambda _h}\eta _{y+x}\le A) 1(\eta ^\ell _x>B)\Big ] \end{aligned}$$
(7.8)
$$\begin{aligned}&\quad \le \frac{\max (C_1,C_2)(A+1)}{B}E_{\mu _t^N}\big [\sum _{x\in {\mathbb {T}}^d_N} \eta ^\ell _x\big ] \ = \ \frac{\max (C_1,C_2)(A+1)}{B}E_{\mu _t^N}\big [\sum _{x\in {\mathbb {T}}^d_N} \eta _x\big ]. \end{aligned}$$
(7.9)

Now, by Lemma 7.1, \(E_{\mu _t^N}\big [\sum _{x\in {\mathbb {T}}^d_N} \eta _x\big ] \le CH(\mu _t^N| \nu ^N_t) + O(N^d)\). Then, the display (7.9) is bounded as desired, for large N, by

$$\begin{aligned} \frac{C (A+1)}{B}H(\mu _t^N|\nu ^N_t) + \frac{C (A+1)}{B}N^d. \end{aligned}$$

We note this bound does not depend on the size of \(\ell \ge 1\). \(\square \)

7.3 Main estimates

We now estimate the remaining portions of \(\sum _{x\in {\mathbb {T}}^d_N} a_{t,x}f_x\). In Sect. 7.3.1, we show that \(f_x\) is in a sense close to its conditional mean given the local density of particles. In Sect. 7.3.2, we estimate this conditional mean.

7.3.1 Bound on ‘concentration’ around conditional mean

Fix \(\beta >0\) and, for \(x\in {\mathbb {T}}^d_N\), let

$$\begin{aligned} m_x =\Big (f_x 1(\sum _{y\in \Lambda _h}\eta _{y+x}\le A) - E_{\nu _\beta }\big [f_x 1(\sum _{y\in \Lambda _h}\eta _{y+x}\le A)|\eta ^\ell _x\big ]\Big )1(\eta ^\ell _x\le B). \end{aligned}$$
(7.10)

We remark that the quantified estimate on the spectral gap in (SP) is used now in the proof of the following Lemma 7.6.

Lemma 7.6

Let \(\ell = \ell _N = N^{\alpha _\ell }\) and \(G=G_N=N^{\alpha _{G}}\) for \(\alpha _\ell ,\alpha _{G}>0\). Suppose \(\alpha _A + \alpha _{G} + (d+2)\alpha _\ell + 2\alpha _B -2<0\). Then, we have

$$\begin{aligned}&{\mathbb E}_{N}\Big | \int _0^T \sum _{x\in {\mathbb {T}}^d_N} a_{t,x}m_x dt \Big | \le \frac{C(T+1)MKN^d}{G} + \frac{CTMG\ell ^{d+2} A^2 B^2 N^d}{N^2}. \end{aligned}$$

Proof

We apply the entropy inequality, with respect to the Zero-range invariant measure \(\nu _\beta \), to obtain

$$\begin{aligned} {\mathbb E}_{N}\Big | \int _0^T \sum _{x\in {\mathbb {T}}^d_N} a_{t,x}m_x dt \Big | \le \frac{H(\mu _0^N| \nu _\beta )}{\gamma } + \frac{1}{\gamma } \log {\mathbb E}_{\nu _\beta }\left[ e^{\gamma | \int _0^T \sum _{x\in {\mathbb {T}}^d_N} a_{t,x}m_x dt |} \right] , \end{aligned}$$

for every \(\gamma >0\). The second term on the right-hand side of the display, noting \(e^{|z|}\le e^{z}+e^{-z}\), is bounded by the Feynman–Kac formula in Appendix 1.7 of [36] (whose proof does not require \(\nu _\beta \) to be an invariant measure of \(L_N\)).

Then, considering \(\gamma = G/M\), we have

$$\begin{aligned}&{\mathbb E}_{N}\Big | \int _0^T \sum _{x\in {\mathbb {T}}^d_N} a_{t,x}m_x dt \Big |\\&\quad \le \frac{MH(\mu _0^N| \nu _\beta )}{G} + M\sup _\pm \int _0^T \sup _h\Big \{\langle M^{-1}\sum _{x\in {\mathbb {T}}^d_N} \pm a_{t,x}m_x, h\rangle _{\nu _\beta } + \frac{1}{G}D_N(\sqrt{h})\Big \}dt + \frac{M}{G}\log 2, \end{aligned}$$

where h is a density with respect to \(\nu _\beta \), and \(D_N(f) = E_{\nu _\beta }[f(S_N f)] = E_{\nu _\beta }[f(L_Nf)]\) is the quadratic form given in terms of \(S_N = (L_N + L_N^*)/2\) and the \(L^2(\nu _\beta )\) adjoint \(L_N^*\).

By Lemma 7.2 and our initial assumption, \(H(\mu _0^N|\nu _\beta ) \le O\big (H(\mu ^N_0|\nu ^N_0)\big ) + O(N^d) = O(N^d)\). We will drop the \(\sup _\pm \) as the next estimates exactly hold for \(-m_x\) replacing \(m_x\).

To estimate the supremum, write \(D_N(f) = -2N^2 D_{ZR}(f) + KQ_G(f)\). By (4.1),

$$\begin{aligned} D_{ZR}(f)&= E_{\nu _\beta }[f(-L_{ZR}f)] = {{\mathcal {D}}}_{ZR}(f;\nu _\beta )\\&= \frac{1}{4}\sum _{{\mathop {x,y\in {\mathbb {T}}^d_N}\limits ^{|x-y|=1 }}}E_{\nu _\beta }\big [g(\eta _x)\big (f(\eta ^{x,y}) - f(\eta )\big )^2\big ]. \end{aligned}$$

Also, \(Q_G(f) = E_{\nu _\beta }[(L_G f)f]\) is explicit, following calculations as in Lemma 4.3:

$$\begin{aligned} Q_G(f)= & {} -\sum _{x\in {\mathbb {T}}^d_N} E_{\nu _\beta } \Big [c^+_x(\eta )\big (f(\eta ^{x,+})-f(\eta )\big )^2\Big ]\\&-\,\sum _{x\in {\mathbb {T}}^d_N} E_{\nu _\beta } \Big [c^-_x(\eta ) 1(\eta _x\ge 1)\big (f(\eta ^{x,-})-f(\eta )\big )^2\Big ]\\&-\,\sum _{x\in {\mathbb {T}}^d_N} E_{\nu _\beta }\Big [f(\eta )f(\eta ^{x,+})c^+_x(\eta ) + f(\eta )f(\eta ^{x,-})c_x^-(\eta )1(\eta _x\ge 1)\Big ]\\&+\, \sum _{x\in {\mathbb {T}}^d_N} E_{\nu _\beta }\Big [f^2(\eta )\Big (c_x^+(\eta ^{x,-})\frac{g(\eta _x)}{\varphi (\beta )} + c_x^-(\eta ^{x,+})\frac{\varphi (\beta )}{g(\eta _x+1)}\Big )\Big ]. \end{aligned}$$

As the rates \(c^\pm \ge 0\), for nonnegative f only the last line in the display for \(Q_G(f)\) is nonnegative. By our assumption (BR), however, we have that \(c_x^+(\eta ^{x,-})g(\eta _x)\) and \(c_x^-(\eta ^{x,+})/g(\eta _x+1)\) are bounded. Hence, when f is a nonnegative function such that \(f^2\) is a density with respect to \(\nu _\beta \), that is \(E_{\nu _\beta }[f^2(\eta )]=1\), we have the upper bound

$$\begin{aligned} \frac{1}{G}D_N(f) = -\frac{2N^2}{G}D_{ZR}(f) + \frac{K}{G}Q_G(f) \le -\frac{N^2}{G}D_{ZR}(f) + \frac{CKN^d}{G} \end{aligned}$$

and therefore,

$$\begin{aligned}&{\mathbb E}_{N}\left| \int _0^T \sum _{x\in {\mathbb {T}}^d_N} a_{t,x}m_x dt \right| \le \frac{CMN^d}{G}\nonumber \\&\quad + \,M\int _0^T\sup _h\Big \{M^{-1}\langle \sum _{x\in {\mathbb {T}}^d_N} a_{t,x}m_x, h\rangle _{\nu _\beta } - \frac{N^2}{G}D_{ZR}(\sqrt{h})\Big \} dt+ \frac{CTMKN^d}{G}. \end{aligned}$$
(7.11)

To analyze further, define

$$\begin{aligned} D_{\ell , x}(f) = E_{\nu _\beta }[f(-L_{\ell ,x}f)] = \frac{1}{4}\sum _{{\mathop {w,z\in \Lambda _{\ell ,x}}\limits ^{|w-z|=1}} } E_{\nu _\beta }\Big [g(\eta _w)\big (f(\eta ^{w,z})-f(\eta )\big )^2\Big ] \end{aligned}$$

where \(L_{\ell ,x}\) is the Zero-range generator restricted to sites \(\Lambda _{\ell ,x} = \{y +x: |y|\le \ell \}\).

Define also the associated canonical process on \(\Lambda _{\ell ,x}\) where the number of particles \(\sum _{y\in \Lambda _{\ell ,x}}\eta _y = j\) is fixed for \(j\ge 0\). Let \(L_{\ell ,x,j}\) denote its generator and let \(\nu _{\ell , x,j} = \nu _\beta (\cdot |\sum _{y\in \Lambda _{\ell ,x}}\eta _y=j)\) be its canonical invariant measure on the configuration space \(\big \{\{\eta _z\}_{z\in \Lambda _{\ell ,x}}:\sum _{y\in \Lambda _{\ell ,x}}\eta _y=j\big \}\). By translation-invariance, \(\nu _{\ell ,x,j}\) does not depend on x.

Then, counting the overlaps, we have

$$\begin{aligned} \sum _{x\in {\mathbb {T}}^d_N} D_{\ell , x}(\sqrt{h}) = (2\ell +1)^d D_{ZR}(\sqrt{h}). \end{aligned}$$

The supremum on the right-hand side of (7.11) is less than

$$\begin{aligned} \sum _{x\in {\mathbb {T}}^d_N}\sup _h \Big \{ E_{\nu _\beta }[(a_{t,x}/M)m_x h] - \frac{N^2}{G\ell _*^d}D_{\ell , x}(\sqrt{h})\Big \} \end{aligned}$$
(7.12)

where \(\ell _*= 2\ell +1\).

Recall that \(G= N^{\alpha _{G}}\) for a small \(\alpha _{G}>0\), and that \(m_x\) vanishes unless the density of particles in the \(\ell \)-block is bounded, \(\eta ^\ell _x\le B\). By conditioning on the number of particles in \(\Lambda _{\ell ,x}\), and dividing and multiplying by \(E_{\nu _\beta }[h|\sum _{z\in \Lambda _{\ell ,x}}\eta _z=j]\), we have for each x and t that

$$\begin{aligned}&\sup _h \Big \{ E_{\nu _\beta }[(a_{t,x}/M)m_x h] - \frac{N^2}{G\ell _*^d}D_{\ell , x}(\sqrt{h})\Big \}\\&\quad \le \sup _{j\le B\ell _*^d} \sup _h \Big \{ E_{\nu _{\ell , x, j}}[(a_{t,x}/M)m_x h] - \frac{N^2}{G\ell _*^d}D_{\ell , x, j}(\sqrt{h})\Big \} \end{aligned}$$

where h is a density with respect to \(\nu _{\ell ,x,j}\).

Now, by the Rayleigh estimate in [36] p. 375, Theorem 1.1, in terms of the spectral gap \(gap(\ell ,j)\) of the canonical process, which does not depend on x by translation-invariance, the last display is bounded by

$$\begin{aligned} \sup _{j\le B\ell _*^d}\frac{G\ell _*^d}{N^2} \frac{ E_{\nu _{\ell , x, j}}[(a_{t,x}/M)m_x \{(-L_{\ell , x,j})^{-1} (a_{t,x}/M)m_x\}]}{1-2\Vert (a_{t,x}/M)m_x\Vert _{L^\infty } \frac{G\ell _*^d}{N^2}gap(\ell , j)^{-1}}. \end{aligned}$$
(7.13)

By the bounds on \(f_x\) via (3.4) and those on \(\{a_{t,x}\}\), we have \(\Vert (a_{t,x}/M)m_x\Vert _{L^\infty } = O(A)\). Since \(m_x\) is mean-zero with respect to \(\nu _{\ell ,x,j}\), we have

$$\begin{aligned} E_{\nu _{\ell ,x,j}}\big [ (a_{t,x}/M)m_x\{ (-L_{\ell ,x,j})^{-1}(a_{t,x}/M)m_x\}\big ] \le gap(\ell ,j)^{-1}\Vert (a_{t,x}/M)m_x\Vert ^2_{L^\infty }. \end{aligned}$$

Recall the spectral gap assumption (SP) that \(gap(\ell ,j)^{-1} \le C_{gp}\ell ^2 (j/\ell ^d)^2\). Since \(j/\ell ^d \le CB\), we have that \(gap(\ell ,j)^{-1}\le C_{gp}\ell ^2B^2\). Choosing \(\alpha _A + \alpha _{G} + (d+2)\alpha _\ell + 2\alpha _B -2<0\), we have

$$\begin{aligned} A G\ell ^d gap(\ell , j)^{-1}/N^2 \le C_{gp} AG\ell ^{d+2}B^2N^{-2} = o(1) \end{aligned}$$

and so the denominator in (7.13) is bounded below.

Hence, summing over x, (7.12) is bounded above by

$$\begin{aligned} \frac{CG\ell ^d}{N^2}\Vert m_x\Vert ^2_{L^\infty }\ell ^{2}N^d \le \frac{CG\ell ^d A^2\ell ^2B^2N^d}{N^2}, \end{aligned}$$

and the desired estimate follows by inserting back into (7.11). \(\square \)

7.3.2 Bound on conditional mean

To treat the conditional expectation

$$\begin{aligned} E_{\nu _\beta }\big [a_{t,x}f_x 1(\sum _{y\in \Lambda _h}\eta _{y+x}\le A)|\eta _x^\ell \big ]1(\eta ^\ell _x\le B), \end{aligned}$$
(7.14)

we will need two preliminary estimates (Lemmas 7.7 and 7.8).

Note that \(f_x\) is mean-zero with respect to \(\nu _{u^N(t,x)}\) and \(E_{\nu _{u^N(t,x) + \kappa }}[f_x] = {\tilde{h}}\big (u^N(t,x) + \kappa \big ) - {\tilde{h}}\big (u^N(t,x)\big ) - {\tilde{h}}'\big (u^N(t,x)\big )\kappa \) from the definition (3.5).

Lemma 7.7

For \(x\in {\mathbb {T}}^d_N\), let \(y_x = \eta ^\ell _x - u^N(t,x)\). Fix also \(\delta >0\). We have, uniformly over \(t\in [0,T]\), that

$$\begin{aligned} \Big |E_{\nu _\beta }\big [(a_{t,x}/M)f_x 1(\sum _{y\in \Lambda _h}\eta _{y+x}\le A)|\eta _x^\ell \big ]\Big |1(|y_x|\le \delta ) \le C y_x^21(|y_x|\le \delta ) + \frac{C}{\ell ^d} + \frac{C}{A}. \end{aligned}$$

Proof

The argument makes use of an equivalence of ensembles estimate and properties of \(f_x\). Let \(b_x = (a_{t,x}/M)f_x 1(\sum _{y\in \Lambda _h}\eta _{y+x}\le A)\). Recall \(\Vert a_{t,x}\Vert _\infty /M \le 1\). By Corollary 1.7 in Appendix 2 of [36], when \(|y_x|\le \delta \), we have that

$$\begin{aligned} \Big |E_{\nu _\beta }[b_x |\eta _x^\ell ]\Big | \le \big |E_{\nu _{u^N(t,x) + y_x}}[b_x ]\big | + \frac{C}{\ell ^d}. \end{aligned}$$

We now expand \(E_{\nu _{u^N(t,x) + y_x}}[b_x]\) in terms of \(y_x\) around 0. Choose \(\lambda = \lambda (y_x)\) so that

$$\begin{aligned} \frac{E_{\nu _{u^N(t,x)}}[\eta _x e^{\lambda (\eta _x - u^N(t,x))}]}{E_{\nu _{u^N(t,x)}}[e^{\lambda (\eta _x - u^N(t,x))}]} = u^N(t,x) + y_x. \end{aligned}$$
(7.15)

Note from (7.15) that \(\lambda (0)=0\) and \(\lambda '(0):=\frac{d}{dy_x} \lambda (0) = E_{\nu _{u^N(t,x)}}[(\eta _x-u^N(t,x))^2]^{-1}\). In terms of this change of measure,

$$\begin{aligned} \frac{d}{dy_x} E_{\nu _{u^N(t,x)+y_x}}[b_x]\big |_{y_x=0}&= \lambda '(0)E_{\nu _{u^N(t,x)}}\Big [b_x\big (\sum _{y\in \Lambda _h}(\eta _{y+x}-u^N(t,x))\big )\Big ]. \end{aligned}$$

Since \(u^N\) is uniformly bounded away from 0 and infinity (Lemma 3.1), \(\lambda '(0)\) is bounded. Also, from (7.15), the second derivative \(\lambda ''(a) = \frac{d^2}{dy^2_x} \lambda (a)\) is bounded, say by \(C(\delta )\), for \(|a|\le \delta \). Then,

$$\begin{aligned} E_{\nu _{u^N(t,x) + y_x}}[b_x] = E_{\nu _{u^N(t,x)}}[b_x] + \Big [\frac{d}{dy_x}E_{\nu _{u^N(t,x)+y_x}}[b_x]\big |_{y_x=0}\Big ]y_x + r_x \end{aligned}$$
(7.16)

where \(|r_x|\le (C(\delta )/2)y_x^2\).

To finish the argument, we now show that the first two terms on the right-hand side of (7.16) are of order \(A^{-1}\). Indeed,

$$\begin{aligned}&|E_{\nu _{u^N(t,x)}}[b_x]| = \big |E_{\nu _{u^N(t,x)}}[(a_{t,x}/M)f_x] - E_{\nu _{u^N(t,x)}}[(a_{t,x}/M)f_x1(\sum _{y\in \Lambda _h}\eta _{y+x}> A)]\big |\\&\le \frac{1}{A}E_{\nu _{u^N(t,x)}}\big [|f_x|\sum _{y\in \Lambda _h}\eta _{y+x}\big ] \le \frac{C}{A} \end{aligned}$$

as \(f_x\) is mean-zero with respect to \(\nu _{u^N(t,x)}\) and \(E_{\nu _{u^N(t,x)}}[|f_x|\sum _{y\in \Lambda _h}\eta _{y+x}]\) is uniformly bounded as \(u^N(t,\cdot )\) is uniformly bounded in Lemma 3.1.

The other term is similar:

$$\begin{aligned}&\big |\frac{d}{dy_x}E_{\nu _{u^N(t,x)+y_x}}[b_x]|_{y_x=0}\big |\\&\quad \le \big |\frac{d}{dy_x}E_{\nu _{u^N(t,x)+y_x}}[(a_{t,x}/M)f_x]|_{y_x=0}\big | \\&\qquad +\, \big |\frac{d}{dy_x}E_{\nu _{u^N(t,x)+y_x}}[(a_{t,x}/M)f_x1(\sum _{y\in \Lambda _h}\eta _{y+x}> A)]|_{y_x=0}\big |\\&\quad \le \frac{\lambda '(0)}{A} \Big |E_{\nu _{u^N(t,x)}}\big [(a_{t,x}/M)f_x(\sum _{y\in \Lambda _h}\eta _{y+x})(\sum _{y\in \Lambda _h}(\eta _{y+x}-u^N(t,x)))\big ]\Big | \le \frac{C}{A}, \end{aligned}$$

since first \(a_{t,x}\) is non-random and \(f_x\) satisfies \(0=\frac{d}{dy_x}E_{\nu _{u^N(t,x)+y_x}}[f_x]|_{y_x=0}\) (cf. (3.5)), and second

$$\begin{aligned} E_{\nu _{u^N(t,x)}}\big [(a_{t,x}/M)f_x\big (\sum _{y\in \Lambda _h}\eta _{y+x}\big )\big (\sum _{y\in \Lambda _h}(\eta _{y+x}-u^N(t,x))\big )\big ] \end{aligned}$$

is uniformly bounded as \(u^N(t,\cdot )\) is uniformly bounded (Lemma 3.1). \(\square \)
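The derivative identity \(\lambda '(0) = E_{\nu _{u^N(t,x)}}[(\eta _x-u^N(t,x))^2]^{-1}\) underlying the expansion above can be checked numerically. The following sketch is an illustration only, not part of the proof: the Poisson marginal, truncation level, and parameter values are our choices. It tilts a Poisson law as in (7.15), inverts the tilted-mean map by bisection, and compares a finite-difference derivative of \(\lambda (y)\) at 0 with the reciprocal variance.

```python
import math

u = 1.3    # reference density (our choice)
K = 200    # truncation of the support (ample for the tilts used here)

def tilted_mean(lam):
    # mean of eta under the lambda-tilted Poisson(u) law, as in (7.15)
    w = [math.exp(-u + k*math.log(u) - math.lgamma(k+1) + lam*k) for k in range(K)]
    Z = sum(w)
    return sum(k*wk for k, wk in zip(range(K), w)) / Z

def lam_of_y(y, lo=-5.0, hi=5.0):
    # invert y(lambda) = tilted_mean(lambda) - u by bisection (monotone in lambda)
    for _ in range(60):
        mid = (lo + hi) / 2
        if tilted_mean(mid) - u < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

eps = 1e-4
deriv = (lam_of_y(eps) - lam_of_y(-eps)) / (2*eps)  # finite-difference lambda'(0)
print(deriv, 1/u)  # variance of Poisson(u) is u, so lambda'(0) should be ~ 1/u
```

Here the tilted Poisson\((u)\) law with weight \(e^{\lambda k}\) is again Poisson, with mean \(u e^{\lambda }\), so the exact answer is \(\lambda (y) = \log (1+y/u)\) and \(\lambda '(0)=1/u\), matching the reciprocal variance.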

Recall \(\ell _* = 2\ell +1\) and let now

$$\begin{aligned} {\tilde{y}}_x = \frac{1}{(2\ell +1)^d}\sum _{|z|\le \ell } \big (\eta _{z+x} - u^N(t,z+x)\big ). \end{aligned}$$

We will need that the following exponential moment is uniformly bounded.

Lemma 7.8

For \(\gamma , \delta >0\) small, uniformly over \(t\in [0,T]\), we have

$$\begin{aligned} \sup _\ell E_{\nu ^N_t} \Big [e^{\gamma \ell _*^d {\tilde{y}}_x^2 }1(|{\tilde{y}}_x|\le \delta )\Big ] <\infty . \end{aligned}$$

To get a feel for this estimate, consider the case that the variables are i.i.d. Poisson with parameter \(\kappa \). Then, \({\tilde{y}}_x\) has the distribution of \(\ell _*^{-d}\) times a centered Poisson\((\ell _*^d\kappa )\) random variable. In this case, the expectation in this lemma equals

$$\begin{aligned} \sum _{|k-\ell _*^d\kappa |\le \ell _*^d \delta } e^{\gamma \ell _*^{-d}(k-\ell _*^d\kappa )^2} e^{-\ell _*^d \kappa } \frac{\big (\ell _*^d\kappa \big )^k}{k!}. \end{aligned}$$

A typical summand, say with \(k\sim (\kappa + \delta )\ell _*^d\), is estimated by Stirling’s formula as

$$\begin{aligned} e^{\gamma \ell _*^d\delta ^2}e^{-\ell _*^d\kappa }\frac{\big (\ell _*^d\kappa \big )^{\ell _*^d(\kappa + \delta )}}{\big (\ell _*^d(\kappa + \delta )\big )!} \sim e^{-c\ell _*^d}, \end{aligned}$$

with \(c>0\), when \(\gamma \delta ^2 < \kappa \). Since there are only on the order of \(\ell _*^d\) summands, Lemma 7.8 holds in this setting.
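This Poisson heuristic can also be checked numerically. The sketch below is purely illustrative: the values of \(\kappa \), \(\gamma \), \(\delta \), and the block sizes are our choices, subject to the smallness constraint. It evaluates the truncated exponential moment exactly for several block sizes n, playing the role of \(\ell _*^d\), and observes that it stays bounded.

```python
import math

kappa, gamma, delta = 1.0, 0.3, 0.2  # our choices; gamma*delta**2 < kappa

def truncated_moment(n):
    # E[exp(gamma*n*y^2); |y| <= delta], where n*y + n*kappa ~ Poisson(n*kappa)
    lam = n * kappa
    total = 0.0
    for k in range(max(0, math.ceil(lam - n*delta)), math.floor(lam + n*delta) + 1):
        logp = -lam + k*math.log(lam) - math.lgamma(k+1)  # Poisson log-pmf
        total += math.exp(gamma*(k - lam)**2/n + logp)
    return total

vals = [truncated_moment(n) for n in (101, 1001, 5001)]
print(vals)  # bounded uniformly in n
```

By the central limit theorem, the values stabilize near \(E[e^{\gamma Z^2}] = (1-2\gamma \kappa )^{-1/2} \approx 1.58\) for \(Z\sim N(0,\kappa )\), consistent with the uniform bound claimed in Lemma 7.8.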

We now give an argument for the general case using a local central limit theorem. Denote now \(\kappa =\ell _*^{-d}\sum _{|z|\le \ell }u^N(t,x+z)\).

Proof of Lemma 7.8

Write the expectation in the display of Lemma 7.8 as

$$\begin{aligned}&\sum _{|k-\ell _*^d\kappa |<\ell _*^d\delta } e^{\gamma \ell _*^{-d}(k-\ell _*^d\kappa )^2} \nu ^N_t\big (\sum _{|z|\le \ell } \eta _{z+x} = k\big )\nonumber \\&\quad = \sum _{k=\lceil \ell _*^d\kappa \rceil }^{\lfloor \ell _*^d(\kappa +\delta )\rfloor }e^{\gamma \ell _*^{-d}(k-\ell _*^d\kappa )^2} {\nu ^N_t}\big (\sum _{|z|\le \ell } \eta _{z+x} = k\big )\nonumber \\&\qquad + \,\sum _{k=\lceil \ell _*^d(\kappa -\delta )\rceil }^{\lceil \ell _*^d\kappa \rceil - 1}e^{\gamma \ell _*^{-d}(k-\ell _*^d\kappa )^2} \nu ^N_t\big (\sum _{|z|\le \ell } \eta _{z+x} = k\big ). \end{aligned}$$
(7.17)

We now bound uniformly the first sum, and discuss the second sum afterwards.

Write the first sum on the right-hand side of (7.17), in terms of a positive constant a, as

$$\begin{aligned}&\sum _{k=\lceil \ell _*^d\kappa \rceil }^{\lceil \ell _*^d\kappa \rceil + a\lceil \ell _*^{d/2}\rceil } e^{\gamma \ell _*^{-d}(k-\ell _*^d\kappa )^2} \nu ^N_t\big (\sum _{|z|\le \ell } \eta _{z+x} = k\big ) \nonumber \\&\quad +\, \sum _{k=\lceil \ell _*^d\kappa \rceil + a\lceil \ell _*^{d/2}\rceil +1}^{\lfloor \ell _*^d(\kappa +\delta )\rfloor } e^{\gamma \ell _*^{-d}(k-\ell _*^d\kappa )^2} \nu ^N_t\big (\sum _{|z|\le \ell } \eta _{z+x} = k\big ). \end{aligned}$$
(7.18)

The first term in (7.18), since \(0\le k-\ell _*^d\kappa \le a\ell _*^{d/2}\), is bounded by \(e^{a^2\gamma }\).

To estimate the second term in (7.18), noting \(\ell _*^{d/2}{\tilde{y}}_x = \ell _*^{-d/2}\big (\sum _{|z|\le \ell } \eta _{z+x} - \ell _*^d \kappa \big )\), we write the probability in the sum as a difference of \(1-F(\ell _*^{-d/2}(k-1-\ell _*^d\kappa ))\) and \(1-F(\ell _*^{-d/2}(k-\ell _*^d\kappa ))\), where F is the distribution function of \(\ell _*^{d/2}{\tilde{y}}_x\). Then, we may rewrite the second term in (7.18), summing by parts, as

$$\begin{aligned}&\sum _{k=\lceil \ell _*^d\kappa \rceil + a\lceil \ell _*^{d/2}\rceil +1}^{\lfloor \ell _*^d(\kappa +\delta )\rfloor -1} \big [e^{\gamma \ell _*^{-d}(k+1 - \ell _*^d\kappa )^2} - e^{\gamma \ell _*^{-d}(k - \ell _*^d\kappa )^2}\big ]\big [1- F(\ell _*^{-d/2}(k-\ell _*^d\kappa ))\big ]\nonumber \\&\quad +\, e^{\gamma \ell _*^{-d}(k-\ell _*^d\kappa )^2}\big [1-F(\ell _*^{-d/2}(k-1-\ell _*^d\kappa ))\big ]|_{k=\lceil \ell _*^d\kappa \rceil + a\lceil \ell _*^{d/2}\rceil +1}\nonumber \\&\quad -\, e^{\gamma \ell _*^{-d}(k-\ell _*^d\kappa )^2}\big [1-F(\ell _*^{-d/2}(k-\ell _*^d\kappa ))\big ]|_{k=\lfloor \ell _*^d(\kappa +\delta )\rfloor }. \end{aligned}$$
(7.19)

Theorem 10 of Chapter 8 in [42] (p. 230) gives a uniform estimate on the tail of the distribution function, subject to the assumptions that Eqs. (2.3)–(2.5) in Chapter 8 of [42] hold. These assumptions hold when there is a small \(H>0\) such that \(R_{z,t}(u)= \log E_{\nu ^N_t} e^{u\eta _z}\) is uniformly bounded in z and t for \(|u|\le H\), and when \(\sigma ^2_{z,t}=E_{\nu ^N_t}[(\eta _z-u^N(t,z))^2]\) is uniformly bounded away from 0 in z and t. Both conditions follow straightforwardly from the uniform bounds on \(u^N\) (Lemma 3.1).

Then, \(v_\ell := \sqrt{\ell _*^{-d}\sum _{|z|\le \ell }\sigma ^2_{z,t}}\) is uniformly bounded away from 0 and \(\infty \).

Therefore, by Theorem 10 in Chapter 8 [42], there is a constant \(\tau \) such that for \(0\le x\le \tau \ell _*^{d/2}\) we have

$$\begin{aligned} 1-F(x)&\le C(\tau )\big (1-\Phi (x/v_\ell )\big ) \exp \Big \{ \frac{x^3}{v^3_\ell \ell _*^{d/2}} \kappa ^1(x/(v_\ell \ell _*^{d/2}))\Big \} \ \ \mathrm{and}\nonumber \\ F(-x)&\le C(\tau )\Phi (-x/v_\ell )\exp \Big \{ - \frac{x^3}{v^3_\ell \ell _*^{d/2}}\kappa ^1(-x/(v_\ell \ell _*^{d/2}))\Big \}, \end{aligned}$$
(7.20)

where \(\kappa ^1(\cdot )\) is uniformly bounded for small arguments, and \(\Phi \) is the Normal(0, 1) distribution function. Note that

$$\begin{aligned}&\exp \big \{\gamma \ell _*^{-d}(k+1 - \ell _*^d\kappa )^2\big \} - \exp \big \{\gamma \ell _*^{-d}(k - \ell _*^d\kappa )^2\big \}\\&\quad = \exp \big \{\gamma \ell _*^{-d} (k-\ell _*^d\kappa )^2\big \}\Big (\exp \big \{2\gamma \ell _*^{-d}(k-\ell _*^d\kappa ) + \gamma \ell _*^{-d}\big \} -1\Big ). \end{aligned}$$

Also, when \(x/v_\ell =\ell _*^{-d/2}(k-\ell _*^d\kappa )/v_\ell \ge 1\), which is the case when \(k\ge \ell _*^d\kappa + a\ell _*^{d/2}\) and a is fixed large enough, we have that

$$\begin{aligned}&\Big \{1-\Phi (\ell _*^{-d/2}(k-\ell _*^d\kappa )/v_\ell ) \Big \} \exp \big \{\kappa ^1\big (x/(v_\ell \ell _*^{d/2})\big )v_\ell ^{-3}\ell _*^{-2d}(k-\ell _*^d\kappa )^3\big \}\\&\quad \le \frac{1}{\sqrt{2\pi }} \exp \big \{-\ell _*^{-d}(k-\ell _*^d\kappa )^2/(2v^2_\ell )\big \} \exp \big \{\kappa ^1 \big (x/(v_\ell \ell _*^{d/2})\big ) v_\ell ^{-3}\ell _*^{-2d}(k-\ell _*^d\kappa )^3\big \}. \end{aligned}$$

With these observations, we now deduce that (7.19) is uniformly bounded in \(\ell \). Indeed, to see that the sum in (7.19) is bounded, observe, since \(a\ell _*^{d/2}\le k-\ell _*^d\kappa \le \delta \ell _*^d\), that

$$\begin{aligned} \exp \big \{2\gamma \ell _*^{-d}(k-\ell _*^d\kappa ) + \gamma \ell _*^{-d}\big \} -1 \le 2\Big (2\gamma \ell _*^{-d}(k-\ell _*^d\kappa ) + \gamma \ell _*^{-d}\Big ), \end{aligned}$$

and \(\kappa ^1(x/(v_\ell \ell _*^{d/2})) \le {\bar{\kappa }}\) where \({\bar{\kappa }}\) is a constant, when \(\gamma \) and \(\delta \) are small. Then, each summand in the sum in (7.19) is bounded by

$$\begin{aligned} \frac{2}{\sqrt{2\pi }}\big (2\gamma \ell _*^{-d}(k-\ell _*^d\kappa ) + \gamma \ell _*^{-d}\big ) \exp \Big \{\big (\ell _*^{-d}(k-\ell _*^d\kappa )^2\big )\big [\gamma -\frac{1}{2v_\ell ^2} + \frac{\delta {\bar{\kappa }}}{v_\ell ^3}\big ]\Big \}, \end{aligned}$$

which in turn is bounded by a multiple of

$$\begin{aligned} \gamma \ell _*^{-d}(k-\ell _*^d\kappa + 1)e^{-c(\gamma ,\delta )\ell _*^{-d}(k-\ell _*^d\kappa )^2} \end{aligned}$$

for \(\gamma ,\delta >0\) chosen small, with \(c(\gamma ,\delta )>0\). Hence, the sum may be bounded uniformly in \(\ell \) in terms of the integral \(C(\gamma )\int _{a}^{\infty } z e^{-c(\gamma ,\delta ) z^2}dz\), for some constant \(C(\gamma )\).

The other two terms in (7.19) are bounded using similar ideas.

Finally, the second sum in (7.17) is bounded uniformly in \(\ell \) analogously, using the left tail estimate in (7.20). \(\square \)

With these preliminary bounds in place, we resume the argument and consider the conditional expectation (7.14) when \(|y_x|\le \delta \).

Lemma 7.9

For \(\delta >0\) small, we have

$$\begin{aligned}&\int _0^TE_{\mu ^N_t}\Big [\sum _{x\in {\mathbb {T}}^d_N}E_{\nu _\beta }[a_{t,x}f_x 1(\sum _{y\in \Lambda _h}\eta _{y+x}\le A)|\eta _x^\ell ]1(\eta ^\ell _x\le B)1(|y_x|\le \delta )\Big ]dt\\&\quad \le CM\int _0^T H(\mu ^N_t|\nu ^N_t)dt + \frac{CTMN^d}{\ell ^d} + \frac{CTMKN^d\ell ^2}{N^2} + \frac{CTMN^d}{A}. \end{aligned}$$

Proof

We first divide and multiply the left-hand side of the display by M. By Lemma 7.7, we first bound the term

$$\begin{aligned}&\Big |E_{\nu _\beta }[(a_{t,x}/M)f_x 1(\sum _{y\in \Lambda _h}\eta _{y+x}\le A)|\eta _x^\ell ]1(\eta ^\ell _x\le B)1(|y_x|\le \delta )\Big |\\&\quad \le \ C y_x^21\big (|y_x|\le \delta \big ) + \frac{C}{\ell ^d} + \frac{C}{A}. \end{aligned}$$

The last two terms when multiplied by M, summed over \(x\in {\mathbb {T}}^d_N\), and integrated over [0, T], give rise to those terms \(CTMN^d/\ell ^d + CTMN^d/A\) present in the right-hand side of the display of Lemma 7.9.

We now concentrate on the terms \(y_x^21(|y_x|\le \delta )\). Recall \({\tilde{y}}_x = \ell _*^{-d}\sum _{|z-x|\le \ell } (\eta _z - u^N(t,z))\). Bound

$$\begin{aligned} 1(|y_x|\le \delta )&\le 1(|{\tilde{y}}_x| \le 2\delta )+1(|y_x|\le \delta ) 1(|y_x - {\tilde{y}}_x|\ge \delta ). \end{aligned}$$

Hence,

$$\begin{aligned} M\int _0^T E_{\mu ^N_t} \Big [\sum _{x\in {\mathbb {T}}^d_N} y_x^21(|y_x|\le \delta )\Big ]dt&\le M\int _0^T E_{\mu ^N_t}\Big [ \sum _{x\in {\mathbb {T}}^d_N} y_x^2 1(|{\tilde{y}}_x|\le 2\delta )\Big ]dt\nonumber \\&\quad +\, M\int _0^T E_{\mu ^N_t} \Big [\sum _{x\in {\mathbb {T}}^d_N} y_x^2 1(|y_x|\le \delta )1(|y_x - {\tilde{y}}_x|\ge \delta )\Big ]dt. \end{aligned}$$
(7.21)

To bound the second term in (7.21), since \(|y_x - {\tilde{y}}_x| = \big |\ell _*^{-d}\sum _{|z-x|\le \ell } u^N(t,z)-u^N(t,x)\big |\), we have by Markov’s inequality and Lemma 3.2 that

$$\begin{aligned}&M\int _0^TE_{\mu ^N_t}\Big [\sum _{x\in {\mathbb {T}}^d_N} y_x^2 1(|y_x|\le \delta )1(|y_x-{\tilde{y}}_x|\ge \delta )\Big ] dt \le \delta ^2\frac{CTMKN^d\ell ^2}{\delta ^2 N^2}. \end{aligned}$$
(7.22)

To bound the first term in (7.21), write

$$\begin{aligned} y_x^2 \le 2{\tilde{y}}^2_x + 2\Big (\frac{1}{\ell _*^d}\sum _{|z-x|\le \ell } (u^N(t, z) - u^N(t,x))\Big )^2. \end{aligned}$$
(7.23)

By Lemma 3.2 again,

$$\begin{aligned}&M\int _0^T E_{\mu ^N_t} \Big [\sum _{x\in {\mathbb {T}}^d_N} y_x^21(|{\tilde{y}}_x|\le 2\delta )\Big ]dt \nonumber \\&\quad \le \frac{CTMKN^d\ell ^2}{N^2} + 2M\int _0^T E_{\mu ^N_t}\Big [\sum _{x\in {\mathbb {T}}^d_N} {\tilde{y}}_x^2 1(|{\tilde{y}}_x|\le 2\delta )\Big ] dt. \end{aligned}$$
(7.24)

The sum of the first terms on the right-hand sides of (7.22) and (7.24) gives the third term in Lemma 7.9, writing 2C as C.

To address the remaining second term in (7.24), write

$$\begin{aligned} M E_{\mu ^N_t} \Big [\sum _{x\in {\mathbb {T}}^d_N} {\tilde{y}}_x^2 1(|{\tilde{y}}_x|\le 2\delta )\Big ]&\le \frac{M H(\mu ^N_t| \nu ^N_t)}{\gamma _2} + \frac{M}{\gamma _2}\log E_{\nu ^N_t}\Big [ e^{\gamma _2 \sum _{x\in {\mathbb {T}}^d_N} {\tilde{y}}_x^21(|{\tilde{y}}_x|\le 2\delta )}\Big ]\nonumber \\&\le \frac{M H(\mu ^N_t| \nu ^N_t)}{\gamma _2} + \frac{M}{\gamma _2\ell _*^d} \sum _{x\in {\mathbb {T}}^d_N} \log E_{\nu ^N_t}\Big [ e^{\gamma _2 \ell _*^d {\tilde{y}}_x^2 1(|{\tilde{y}}_x|\le 2\delta )}\Big ], \end{aligned}$$
(7.25)

using Lemma 7.3 where the grid spacing is \(2\ell +1\). By Lemma 7.8, we have that

$$\begin{aligned} \log E_{\nu ^N_t} \Big [e^{\gamma _2 \ell _*^d {\tilde{y}}_x^2 1(|{\tilde{y}}_x|\le 2\delta )} \Big ]&\le \log \Big \{ 1 + E_{\nu ^N_t} \Big [e^{\gamma _2 \ell _*^d {\tilde{y}}_x^2} 1(|{\tilde{y}}_x|\le 2\delta )\Big ]\Big \}\\&\le E_{\nu ^N_t} \Big [e^{\gamma _2 \ell _*^d {\tilde{y}}_x^2} 1(|{\tilde{y}}_x|\le 2\delta )\Big ] \end{aligned}$$

is uniformly bounded in \(\ell \) for small \(\gamma _2, \delta >0\). Hence, the right-hand side of (7.25) is bounded by \(M H(\mu ^N_t| \nu ^N_t)/\gamma _2 + CMN^d/(\gamma _2\ell ^d)\), finishing the argument. \(\square \)
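For orientation, the entropy inequality from Lemma 7.3 invoked in (7.25) is, in its basic form (recalled here only as a guide; the statement of Lemma 7.3 appears earlier in the paper), the standard bound

```latex
E_{\mu}[V] \;\le\; \frac{1}{\gamma}\, H(\mu\,|\,\nu)
  \;+\; \frac{1}{\gamma}\, \log E_{\nu}\big[ e^{\gamma V} \big],
  \qquad \gamma > 0,
```

applied with \(V = \sum _{x} {\tilde{y}}_x^2 1(|{\tilde{y}}_x|\le 2\delta )\). The grid-spacing variant distributes the exponential over the \(\ell _*^d\) translates of a sub-grid of spacing \(2\ell +1\), on each of which the block averages are independent under the product measure \(\nu ^N_t\); Hölder's inequality then yields the factor \(\ell _*^{-d}\) in front of the sum of single-block logarithms in (7.25).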

Finally, our last estimate bounds the conditional expectation in (7.14) when \(|y_x|>\delta \).

Lemma 7.10

We have, for \(\delta >0\) and small \(\gamma _3=\gamma _3(\delta )>0\), in terms of a constant \(c_1=c_1(\delta )>0\), that

$$\begin{aligned}&\int _0^TE_{\mu _t^N}\Big [ \sum _{x\in {\mathbb {T}}^d_N} E_{\nu _\beta }\big [|a_{t,x}||f_x| \big |\eta _x^\ell \big ]1(\eta ^\ell _x\le B)1(|y_x|>\delta )\Big ]dt \nonumber \\&\quad \le \frac{M}{\gamma _3}\int _0^TH(\mu ^N_t|\nu ^N_t)dt + \frac{CTMKBN^d\ell ^2}{\delta ^2N^2}+ \frac{CTMN^d}{\gamma _3\ell ^d} e^{-c_1\ell ^d}. \end{aligned}$$
(7.26)

Proof

First, we have \(|a_{t,x}|\le M\). Next, by our assumptions on \(f_x\) [cf. (3.4)], using exchangeability of the canonical measure and the uniform bounds on \(u^N\) in Lemma 3.1, we have that

$$\begin{aligned} E_{\nu _\beta }\big [|f_x|\,\big |\,\eta _x^\ell \big ] \le C(|\Lambda _h|)\Big \{ \eta ^\ell _x + C\Big \}&= C(|\Lambda _h|) \Big \{{\tilde{y}}_x + \frac{1}{\ell _*^d}\sum _{|z-x|\le \ell } u^N(t,z) + C\Big \}\\&\le C|{\tilde{y}}_x| + C. \end{aligned}$$

Hence, we need only bound \(M E_{\mu _t^N}\big [\sum _{x\in {\mathbb {T}}^d_N} \big (C|{\tilde{y}}_x| + C\big ) 1(\eta ^\ell _x\le B)1(|y_x|>\delta )\big ]\). Since

$$\begin{aligned} 1(|y_x|>\delta ) \le 1\Big (\Big |\frac{1}{\ell _*^d}\sum _{|z-x|\le \ell } \big (u^N(t,x) - u^N(t,z)\big )\Big |>\delta /2\Big ) + 1(|{\tilde{y}}_x|>\delta /2) \end{aligned}$$

and \(|{\tilde{y}}_x|1(\eta ^\ell _x\le B) \le B + \sup _x |u^N(t,x)|\le 2B\), say, by Lemma 3.1 for large N, it in turn suffices to bound

$$\begin{aligned}&\big (1+ \delta ^{-1}\big )\int _0^T ME_{\mu ^N_t}\Big [\sum _{x\in {\mathbb {T}}^d_N} |{\tilde{y}}_x| 1(|{\tilde{y}}_x|>\delta /2)\Big ]dt \nonumber \\&\quad +\, \int _0^T \frac{8MB}{\delta ^2}\sum _{x\in {\mathbb {T}}^d_N} \Big (\frac{1}{\ell _*^d}\sum _{|z-x|\le \ell } \big (u^N(t,x) - u^N(t,z)\big )\Big )^2dt. \end{aligned}$$
(7.27)

The second term in (7.27) is bounded by \(CTMKBN^d \ell ^2/\big (\delta ^2 N^2)\) via Lemma 3.2, giving one of the terms in (7.26).

Meanwhile, the integrand of the first term in (7.27) is bounded, by Lemma 7.3 with grid spacing \(2\ell +1\), by

$$\begin{aligned}&\frac{MH(\mu _t^N| \nu ^N_t)}{\gamma _3} + \frac{M}{\gamma _3\ell _*^d}\sum _{x\in {\mathbb {T}}^d_N} \log \Big (1- \nu ^N_t(|{\tilde{y}}_x|>\delta /2) + E_{\nu ^N_t} \Big [e^{\gamma _3 \ell _*^d|{\tilde{y}}_x|}1(|{\tilde{y}}_x|>\delta /2)\Big ]\Big ). \end{aligned}$$

By Schwarz inequality, we have

$$\begin{aligned}&E_{\nu ^N_t} \Big [e^{\gamma _3 \ell _*^d|{\tilde{y}}_x|}1(|{\tilde{y}}_x|>\delta /2)\Big ] \le \Big \{E_{\nu ^N_t} \Big [e^{2\gamma _3 \ell _*^d |{\tilde{y}}_x| }\Big ]\cdot {\nu ^N_t}(|{\tilde{y}}_x|>\delta /2)\Big \}^{1/2}. \end{aligned}$$

Now, for \(s>0\),

$$\begin{aligned}&{\nu ^N_t}(|{\tilde{y}}_x|>\delta ) \le E_{\nu ^N_t} \big [e^{s{\tilde{y}}_x\ell _*^d}\big ] e^{-s\ell _*^d\delta } + E_{\nu ^N_t}\big [e^{-s{\tilde{y}}_x\ell _*^d} \big ]e^{-s\ell _*^d\delta }\\&\quad \le \prod _{|z|\le \ell }\Big (E_{\nu ^N_t}\big [e^{s(\eta _{x+z} - u^N(t,x+z))}\big ]e^{-s\delta }\Big ) + \prod _{|z|\le \ell }\Big (E_{\nu ^N_t}\big [e^{-s(\eta _{x+z} - u^N(t,x+z))}\big ]e^{-s\delta }\Big ). \end{aligned}$$

Moreover, recalling \(\sigma ^2_{z,t} = E_{\nu ^N_t}[(\eta _z-u^N(t,z))^2]\), we have

$$\begin{aligned} \log E_{\nu ^N_t}\big [e^{\pm s(\eta _y- u^N(t,y))}\big ] = s^2\sigma ^2_{y,t}/2 + o(s^2). \end{aligned}$$

Hence, with \(s=\varepsilon \delta \) and \(\varepsilon >0\) small, noting that \(\sigma ^2_{x+z,t}\) is uniformly bounded away from 0 and infinity by Lemma 3.1, we have

$$\begin{aligned} {\nu ^N_t}\big (|{\tilde{y}}_x|>\delta \big ) \le 2\prod _{|z|\le \ell } e^{-\delta ^2(\varepsilon - \varepsilon ^2\sigma ^2_{x+z,t}/2)} \le e^{-c\ell ^d} \end{aligned}$$

for a constant \(c>0\) depending on \(\delta \).
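The exponential bound just obtained can be illustrated in the same i.i.d. Poisson toy setting as before (our choice of marginal, not part of the model; n plays the role of \(\ell _*^d\)): the exact tail probability \(P(|S/n-\kappa |>\delta )\) for \(S\sim \) Poisson\((n\kappa )\) decays exponentially in n, i.e. \(-\log P/n\) stays bounded away from 0.

```python
import math

kappa, delta = 1.0, 0.2  # our choices

def log_pmf(k, lam):
    # Poisson(lam) log-pmf via lgamma, stable for large lam
    return -lam + k*math.log(lam) - math.lgamma(k+1)

def tail(n):
    # P(|S/n - kappa| > delta) for S ~ Poisson(n*kappa), summed exactly
    lam = n * kappa
    lo, hi = math.ceil(lam - n*delta), math.floor(lam + n*delta)
    p = sum(math.exp(log_pmf(k, lam)) for k in range(0, lo))
    p += sum(math.exp(log_pmf(k, lam)) for k in range(hi + 1, int(2*lam) + 1))
    return p

rates = [-math.log(tail(n))/n for n in (200, 400, 800)]
print(rates)  # roughly constant and positive: exponential decay in n
```

The rates hover near the Cramér rate \(\min \{I(\kappa -\delta ), I(\kappa +\delta )\}\) of the Poisson law, mirroring the constant \(c(\delta )\) in the display above.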

At the same time, for \(\gamma _3>0\) small, as the means \(u^N(t,\cdot )\) are uniformly bounded via Lemma 3.1 again, we have, in terms of \(0\le \gamma '_3\le \gamma _3\), that

$$\begin{aligned}&\log E_{\nu ^N_t} \big [e^{2\gamma _3 \ell _*^d |{\tilde{y}}_x| }\big ]\\&\quad \le \log \Big [\prod _{|z-x|\le \ell } E_{\nu ^N_t} \big [e^{2\gamma _3(\eta _z - u^N(t,z))}\big ] + \prod _{|z-x|\le \ell } E_{\nu ^N_t} \big [e^{-2\gamma _3(\eta _z - u^N(t,z))}\big ]\Big ]\\&\quad \le C\gamma _3^2\sum _{|z-x|\le \ell } E_{\nu ^N_t} \big [(\eta _z-u^N(t,z))^2e^{2\gamma '_3|\eta _z - u^N(t,z)|}\big ] = O(\gamma _3^2 \ell ^d). \end{aligned}$$

Hence, with \(c_1(\delta ) = c/4\), we bound the integrand in the first term in (7.27), taking \(\gamma _3=\gamma _3(\delta )>0\) small enough compared to c, by

$$\begin{aligned} \frac{CM}{\gamma _3}H(\mu _t^N|\nu ^N_t) + \frac{CMN^d}{\gamma _3\ell ^d}e^{-c_1(\delta )\ell ^d}, \end{aligned}$$

which when integrated in time yields the other two terms in (7.26). \(\square \)