1 Introduction and main results

We study the long-time behavior of solutions to models of reactive processes, such as combustion and population dynamics, in random environments—specifically, reaction–diffusion equations and G-equations. The former are the PDE

$$\begin{aligned} u_t={\mathcal L}_\omega u+f(t,x,u,\omega ), \end{aligned}$$
(1.1)

with \(f:{\mathbb {R}}^{d+1}\times [0,1]\times \Omega \rightarrow {\mathbb {R}}\) some non-linear reaction function, a second-order linear term

$$\begin{aligned} {\mathcal L}_\omega u(t,x):=\sum _{i,j=1}^d A_{ij}(t,x,\omega )u_{x_ix_j}(t,x)+\sum _{i=1}^d b_i(t,x,\omega )u_{x_i}(t,x), \end{aligned}$$
(1.2)

and \(\omega \) an element from some probability space \((\Omega ,{\mathbb {P}},{\mathcal F})\). One also typically assumes that f vanishes at \(u=0,1\), and solutions \(0\le u\le 1\) represent normalized temperature or density, which is subject to reaction, advection, and diffusion. The simplest reaction–diffusion model involves \({\mathcal L}_\omega \equiv \Delta _x\) and \(f=f(u)\), but we will consider here the general non-isotropic, space-time-dependent, random reaction–advection–diffusion setting of (1.1).

We will mainly concentrate on this case, but our methods equally apply to the related first-order flame propagation model

$$\begin{aligned} u_t+v(t,x,\omega )\cdot \nabla u=c(t,x,\omega )|\nabla u|. \end{aligned}$$
(1.3)

This Hamilton–Jacobi PDE is called the G-equation (it is often considered with \(c\equiv 1\) only), where \(c>0\) is the flame speed and v is some (incompressible) background advection.

We will consider (1.1) with the KPP (a.k.a. Fisher-KPP) reactions, first studied by Kolmogorov, Petrovskii, and Piskunov [19] and Fisher [11] in 1937. We will therefore assume the following uniform KPP hypotheses.

Definition 1.1

A Lipschitz function \(f:{\mathbb {R}}^{d+1}\times [0,1] \times \Omega \rightarrow {\mathbb {R}}\) is a KPP reaction if \(f(\cdot ,\cdot ,0,\cdot )\equiv 0\equiv f(\cdot ,\cdot ,1,\cdot )\) and \(f(t,x,u,\omega )\le f_u(t,x,0,\omega )u \) for all \((t,x,u,\omega )\in {\mathbb {R}}^{d+1}\times [0,1] \times \Omega \) (with \(f_u(\cdot ,\cdot ,0,\cdot )\) existing pointwise), and the following uniform hypotheses hold. We have \(\inf _{(t,x,\omega )\in {\mathbb {R}}^{d+1}\times \Omega }f(t,x,u,\omega )>0\) for each \(u\in (0,1)\), as well as \(\inf _{(t,x,\omega )\in {\mathbb {R}}^{d+1}\times \Omega }f_u(t,x,0,\omega )>0\) and

$$\begin{aligned} \lim _{u\rightarrow 0}\sup _{(t,x,\omega )\in {\mathbb {R}}^{d+1}\times \Omega } \left( f_u(t,x,0,\omega )-\frac{f(t,x,u,\omega )}{u} \right) =0. \end{aligned}$$
(1.4)
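For orientation, a standard example (included here only as an illustration) is \(f(t,x,u,\omega ):=a(t,x,\omega )\,u(1-u)\) with a Lipschitz \(a\) satisfying \(0<a_-\le a(t,x,\omega )\le a_+<\infty \). Then \(f_u(t,x,0,\omega )=a(t,x,\omega )\ge a_->0\), \(f\le f_u(\cdot ,\cdot ,0,\cdot )u\), \(\inf _{(t,x,\omega )}f(t,x,u,\omega )\ge a_-u(1-u)>0\) for each \(u\in (0,1)\), and

$$\begin{aligned} f_u(t,x,0,\omega )-\frac{f(t,x,u,\omega )}{u}=a(t,x,\omega )\,u\le a_+u \rightarrow 0 \qquad \text {as } u\rightarrow 0, \end{aligned}$$

so all the hypotheses of Definition 1.1 hold.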

When physical processes occur in random media, one often expects them to exhibit an effectively homogeneous dynamic on large space-time scales due to large-scale averaging of the variations in the environment. Our main results show that this phenomenon, called homogenization, indeed occurs for (1.1) and (1.3) in very general settings under suitable hypotheses. The two main such hypotheses are always stationarity of the environment and some mixing assumption on it, without which one cannot reasonably hope for homogenization to occur. We state our versions of these next, with H being either \((A,b,f_u(\cdot ,\cdot ,0,\cdot ))\) or \((c,v)\) (see below for why it suffices to only include \(f_u(\cdot ,\cdot ,0,\cdot )\) here in the KPP reaction case).

Definition 1.2

Let \((\Omega ,{\mathcal F},{\mathbb {P}})\) be some probability space and H a measurable function on \({\mathbb {R}}^{d+1}\times \Omega \) with values in some measurable space. We say that H is space-time stationary if there is a group of measure-preserving bijections \(\{{\Upsilon _{(s,y)}:\Omega \rightarrow \Omega }\}_{(s,y)\in {\mathbb {R}}^{d+1}}\) with \(\Upsilon _{(0,0)}=\textrm{Id}_{\Omega }\) and \(\Upsilon _{(s,y)}\circ \Upsilon _{(r,z)}=\Upsilon _{(s+r,y+z)}\) for any \((s,y),(r,z)\in {\mathbb {R}}^{d+1}\), and for any \((t,x,s,y,\omega )\in {\mathbb {R}}^{2d+2}\times \Omega \) we have

$$\begin{aligned} H \left( t,x,\Upsilon _{(s,y)}\omega \right) =H(t+s,x+y,\omega ). \end{aligned}$$
(1.5)

For any \(t\in {\mathbb {R}}\), we let \({\mathcal F}_t^\pm (H)\) be the \(\sigma \)-algebra generated by the family of random variables

$$\begin{aligned} \left\{ H(s,x,\cdot )\,\big |\, \pm (s-t)\ge 0 \text { and } x\in {\mathbb {R}}^d\right\} . \end{aligned}$$

We also define for each \(s\ge 0\),

$$ \begin{aligned} \phi _H(s):=\sup \left\{ \left| {\mathbb {P}}[F|E] -{\mathbb {P}}[F]\right| \,\big |\,\, t\in {\mathbb {R}}\,\, \& \,\, (E,F)\in {\mathcal F}_t^-(H) \times {\mathcal F}_{t+s}^+(H) \,\, \& \,\, {\mathbb {P}}[E]>0 \right\} . \end{aligned}$$

So \(\phi _H\) is clearly non-increasing, and it vanishes at some \(s\ge 0\) precisely when \({\mathcal F}^-_t(H)\) and \({\mathcal F}^+_{t+s}(H)\) are \({\mathbb {P}}\)-independent for each \(t\in {\mathbb {R}}\) (in that case H has a finite temporal range of dependence). The mixing hypothesis mentioned above will in our case be the assumption that \(\lim _{s\rightarrow \infty } \phi _H(s)=0\) (possibly at some rate) for the appropriate function H. That is, it will involve mixing in time but not necessarily in space.

Long-time propagation of solutions to (1.1) is well known to be ballistic, with solutions converging locally uniformly to 1. One should therefore expect homogenization to take the following form. First, solutions starting from compactly supported initial data should approximate the characteristic function of \(t{\mathcal S}\) for some open bounded convex set \({\mathcal S}\ni 0\) (called Wulff shape) as \(t\rightarrow \infty \). This means that, as \(t\rightarrow \infty \) the \(\theta \)-level set of the solution should, after scaling by \(\frac{1}{t}\) in space, converge in Hausdorff distance to \(\partial {\mathcal S}\) for each \(\theta \in (0,1)\). And, of course, this should hold for a large set of \(\omega \in \Omega \) in the probabilistic sense, with the Wulff shape being deterministic (i.e., \(\omega \)-independent).

Second, (1.1) should exhibit a homogenized large-scale dynamic in the ballistic scaling

$$\begin{aligned} u^{\varepsilon }(t,x, \omega ):=u\left( \varepsilon ^{-1} t, \varepsilon ^{-1} x, \omega \right) , \end{aligned}$$
(1.6)

with \(\varepsilon >0\) small. This of course turns (1.1) into its large-space-time-scale version

$$\begin{aligned} u^{\varepsilon }_{t}= {\mathcal L}_\omega ^\varepsilon u^{\varepsilon }+ \varepsilon ^{-1} f\left( \varepsilon ^{-1} t,\varepsilon ^{-1} x, u^{\varepsilon }, \omega \right) , \end{aligned}$$
(1.7)

where

$$\begin{aligned} {\mathcal L}_\omega ^\varepsilon u^\varepsilon (t,x):= \varepsilon \sum _{i,j=1}^d A_{ij} \left( \varepsilon ^{-1} t,\varepsilon ^{-1} x,\omega \right) u^\varepsilon _{x_ix_j}(t,x) + \sum _{i=1}^d b_i \left( \varepsilon ^{-1} t,\varepsilon ^{-1} x,\omega \right) u^\varepsilon _{x_i} (t,x). \end{aligned}$$
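For the record, the chain-rule computation behind this is the following routine check: \(u^{\varepsilon }_t=\varepsilon ^{-1}u_t\), \(u^{\varepsilon }_{x_i}=\varepsilon ^{-1}u_{x_i}\), and \(u^{\varepsilon }_{x_ix_j}=\varepsilon ^{-2}u_{x_ix_j}\), with the right-hand sides evaluated at \((\varepsilon ^{-1}t,\varepsilon ^{-1}x)\), so evaluating (1.1) at \((\varepsilon ^{-1}t,\varepsilon ^{-1}x)\) yields

$$\begin{aligned} \varepsilon \, u^{\varepsilon }_{t}(t,x)= \varepsilon ^2\sum _{i,j=1}^d A_{ij} \left( \varepsilon ^{-1} t,\varepsilon ^{-1} x,\omega \right) u^\varepsilon _{x_ix_j}(t,x) + \varepsilon \sum _{i=1}^d b_i \left( \varepsilon ^{-1} t,\varepsilon ^{-1} x,\omega \right) u^\varepsilon _{x_i} (t,x) + f\left( \varepsilon ^{-1} t,\varepsilon ^{-1} x, u^{\varepsilon }(t,x), \omega \right) , \end{aligned}$$

and dividing by \(\varepsilon \) gives exactly (1.7).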

Then one hopes that, again for a large set of \(\omega \in \Omega \), solutions to (1.7) with some \(\varepsilon \)-independent initial datum \(u_0\) converge as \(\varepsilon \rightarrow 0\) to a function \(\bar{u}\) that solves some homogeneous PDE with the same initial value (the term “homogenization” usually refers to this type of result).

Such stochastic homogenization results were obtained previously in several works for time-independent \((A,b,f)\) in one spatial dimension, where the geometry of the level sets of solutions is trivial (they are typically two points ballistically traveling to \(\pm \infty \)). The interested reader can consult, for instance, papers [3, 5, 13, 24, 25, 27, 33, 34] and references therein (yet others involve spatially periodic rather than random \((A,b,f)\)), which study KPP reactions as well as ignition and bistable reactions (for which \(f (\cdot ,\cdot ,u,\cdot )\) vanishes or is negative when \(u>0\) is close to 0).

Progress in the multi-dimensional (and still time-independent) case \(d\ge 2\) has been much more limited, due to the geometry of the level sets of solutions substantially complicating the analysis. Stochastic homogenization results for stationary ergodic ignition reactions and \((A,b)=(I,0)\) (i.e., \({\mathcal L}_\omega =\Delta \)) in dimensions \(d\le 3\) were recently obtained by the second author and Lin [22] as well as by both authors [29, 30] (homogenization results in spatially periodic multidimensional media appear in, e.g., [1, 3, 7, 13, 22, 24]), but the only such results for KPP reactions that we are aware of are Theorem 9.3 in [23] by Lions and Souganidis, and Theorem 1.4 in the companion paper [37] by the second author (the latter even holds in the time-periodic \((A,b,f_u(\cdot ,\cdot ,0,\cdot ))\) case, which is closely related to the time-independent setting). However, we note that Theorem 9.3 in [23] is stated without a proof, and the authors only indicated that methods developed by them and in other works can be used to obtain one. Moreover, we know of no other prior homogenization results even in the simpler case of time-independent and spatially periodic KPP reactions (although existence of Wulff shapes and front speeds in the periodic case goes back to work of Gärtner and Freidlin [13]).

In light of the above discussion, our main result for KPP reactions (Theorem 1.3 below) appears to be the first one in the general time-dependent setting for any reaction and in any dimension. In fact, homogenization results in time-dependent environments seem to be rather sparse even in the much more studied and developed setting of Hamilton–Jacobi equations (see below). We note that the proof of Theorem 1.3 uses two new ingredients: a non-autonomous version of Kingman's classical subadditive ergodic theorem [18] (Theorem 2.1 below) and the principle of virtual linearity for (1.1), from the companion papers [31, 36].

In addition, together with Theorem 1.4 in [37], Theorem 1.3 appears to be the first multi-dimensional stochastic homogenization result for (1.1) that provides an explicit formula for the solution to the homogenized dynamic (except in the special case of isotropic ignition reactions, see below). The results in [22, 29, 30] for ignition reactions, as well as Theorem 9.3 in [23] for KPP reactions show that in the relevant settings, solutions to (1.7) with common initial datum \( u_0\) converge as \(\varepsilon \rightarrow 0\) to a discontinuous viscosity solution to the homogeneous Hamilton–Jacobi equation

$$\begin{aligned} \bar{u}_t=c^* \left( -\nabla \bar{u} |\nabla \bar{u}|^{-1} \right) |\nabla \bar{u}| \end{aligned}$$
(1.8)

that only takes values in \(\{0,1\}\) for all \(t>0\), where \(c^*(e)\) is some \((A,b,f)\)-dependent deterministic front speed in direction \(e\in {\mathbb {S}}^{d-1}\) (see [22, 29, 37] for its definition). This yields an implicit formula for the homogenized solutions. However, here and in [37] we show that for KPP reactions (including in the time-dependent case), one in fact has the explicit formula

$$\begin{aligned} \bar{u} := \chi _{G+t{\mathcal S}}, \end{aligned}$$
(1.9)

where (essentially) \(G:={\textrm{supp}}\, u_0\) and \({\mathcal S}\) is the Wulff shape for \((A,b,f)\) (this then also implies that \(c^*(e)\) exists for each \(e\in {\mathbb {S}}^{d-1}\) and \(c^*(e)=\sup _{y\in {\mathcal S}} \,y\cdot e\)). Moreover, we show that the dependence of \({\mathcal S}\) on f is only through \(f_u(\cdot ,\cdot ,0,\cdot )\) in the KPP reaction case.
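To fix ideas, in the classical spatially homogeneous case \({\mathcal L}_\omega =\Delta \) and \(f=f(u)\) with \(f'(0)=a>0\) (a standard benchmark rather than part of the random setting studied here), the KPP spreading speed is \(2\sqrt{a}\) in every direction, so

$$\begin{aligned} {\mathcal S}=B_{2\sqrt{a}}(0) \qquad \text {and}\qquad c^*(e)=\sup _{y\in {\mathcal S}}\, y\cdot e=2\sqrt{a} \quad \text {for every } e\in {\mathbb {S}}^{d-1}, \end{aligned}$$

and (1.9) then recovers the classical fact that compactly supported initial data spread asymptotically like the ball \(B_{2\sqrt{a}\,t}(0)\).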

The reason for \(\bar{u}\) only taking values in \(\{0,1\}\) is the hair-trigger effect, discussed in Sect. 3 below, which shows that solutions to (1.1) transition from values arbitrarily close to 0 to values arbitrarily close to 1 within a uniformly bounded time (which is therefore also \(\varepsilon \)-independent). This then becomes an instantaneous transition from value 0 to 1 in the \(\varepsilon \rightarrow 0\) limit for (1.7). However, our proofs show that this transition also becomes sharp in space in this limit, which is not surprising but also not an obvious corollary. Of course, this means that for solutions to (1.1), the spatial transition from values close to 0 to those close to 1 happens over distances of size o(t). This shows that in the setting of (1.1), it makes most sense to consider initial data that are also characteristic functions of sets in \({\mathbb {R}}^d\), but our main results in fact hold for more general initial data (see (1.11) below).

This contrasts with the case of ignition reactions in dimensions \(d\le 3\), where the second author proved that the above spatial transition for (1.1) occurs on distances of size O(1) [35] (calling this the bounded width property of solutions). For this it is crucial that the hair trigger effect is not present for ignition reactions, and the argument was based on the solution dynamic being pushed for ignition reactions when \(d\le 3\) (which may fail when \(d\ge 4\) [35]). On the other hand, for KPP reactions it is pulled due to the crucial hypothesis \(f(t,x,u,\omega )\le f_u(t,x,0,\omega )u \), which guarantees that the dynamic depends on f only through \(f_u(\cdot ,\cdot ,0,\cdot )\) [36]. See [35, 36] for details on these concepts and further discussion.

We note that the explicit formula (1.9) also holds for time-independent stationary ergodic reactions if (1.1) has a Wulff shape \({\mathcal S}\) and this \({\mathcal S}\) has no corners [22] (i.e., it has a unique unit outer normal vector at each \(x\in \partial {\mathcal S}\)). However, the latter hypothesis has previously only been verified for isotropic ignition reactions in dimensions \(d\le 3\) [22], when \({\mathcal S}\) is a ball (this clearly also holds in the settings of Theorem 1.3 below and Theorem 1.4 in [37]), and it is known that it can fail even for non-isotropic periodic ignition reactions in two dimensions. In fact, an example constructed by Caffarelli, Lee, and Mellet in [7] was used in [22] to show that not only \({\mathcal S}\) can have corners in this setting, but (1.9) can also fail for non-KPP reactions.

Let us now state our main result for (1.1), in which we can also accommodate \(\varepsilon \)-dependent shifts \(y_\varepsilon \) of the initial value and perturbations that decay in an appropriate sense as \(\varepsilon \rightarrow 0\). To simplify the relevant notation, let \(B_r:=B_r(0)\subseteq {\mathbb {R}}^d\) for \(r>0\) and \(B_0:=\{0\}\), then let \(B_r(G):=G+B_r\) and \(G^0_r:=G\backslash \overline{B_r(\partial G)}\) for \(G\subseteq {\mathbb {R}}^d\) and \(r\ge 0\) (so \(G^0_0\) is the interior of G).
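To illustrate this notation on a simple example (ours): if \(G=(0,1)\subseteq {\mathbb {R}}\) and \(0<r<\frac{1}{2}\), then \(\partial G=\{0,1\}\) and

$$\begin{aligned} B_r(G)=(-r,1+r) \qquad \text {and}\qquad G^0_r=G\backslash \overline{B_r(\partial G)}=(r,1-r), \end{aligned}$$

so \(B_r(G)\) enlarges G by r while \(G^0_r\) shrinks it by r.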

Theorem 1.3

Let f be a KPP reaction and let \({\mathcal L}_\omega \) be from (1.2), where \(A=(A_{ij})\) is a bounded symmetric matrix with \(A\ge \lambda I\) for some \(\lambda >0\), and the vector \(b=(b_1,\dots ,b_d)\) satisfies

$$\begin{aligned} \Vert b\Vert _{L^\infty }^2< 4 \lambda \inf _{(t,x,\omega )\in {\mathbb {R}}^{d+1} \times \Omega } f_u(t,x,0,\omega ). \end{aligned}$$
(1.10)

Also assume that \(H:=(A,b,f_u(\cdot ,\cdot ,0,\cdot ))\) is space-time stationary.

(i) If \(\lim _{s\rightarrow \infty }s^\alpha \phi _H(s)=0\) for some \(\alpha >0\), then there is a convex bounded open set \({\mathcal S}\subseteq {\mathbb {R}}^d\) containing 0 (called Wulff shape), which depends only on H, and the following holds for almost all \(\omega \in \Omega \). If \(G\subseteq {\mathbb {R}}^d\) is open, \(\theta \in (0,1)\), \(\Lambda <\infty \), and \(u^\varepsilon (\cdot ,\cdot ,\omega )\) solves (1.7) with

$$\begin{aligned} \theta \chi _{(G+y_\varepsilon )^0_{\rho (\varepsilon )}}\le u^\varepsilon (0,\cdot ,\omega )\le \chi _{B_{\rho (\varepsilon )}(G+y_\varepsilon )} \end{aligned}$$
(1.11)

for each \(\varepsilon >0\), with some \(y_\varepsilon \in B_\Lambda \) and \(\lim _{\varepsilon \rightarrow 0}\rho (\varepsilon )=0\) (when \(y_\varepsilon =0\) and \(\rho (\varepsilon )= 0\), this becomes just \(\theta \chi _{G} \le u^\varepsilon (0,\cdot ,\omega )\le \chi _{G}\)), then

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}u^\varepsilon (t,x+y_\varepsilon ,\omega )=\chi _{G^{\mathcal S}}(t,x) \end{aligned}$$
(1.12)

locally uniformly on \(([0,\infty )\times {\mathbb {R}}^d)\backslash \partial G^{\mathcal S}\), where \(G^{\mathcal S}:=\{(t,x)\in {\mathbb {R}}^+\times {\mathbb {R}}^d\,|\,x\in G+t{\mathcal S}\}\).

(ii) If \(\lim _{s\rightarrow \infty } \phi _H(s)=0\), then (i) holds with \(\Lambda =\infty \) and with (1.12) replaced by

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} {\mathbb {P}}\Big [ (G+(1-\delta )t {\mathcal S})\cap B_{\delta ^{-1}}\subseteq \Gamma _{\theta '}^{\varepsilon }(t,\cdot )\cap B_{\delta ^{-1}} \subseteq (G+(1+\delta )t {\mathcal S})\cap B_{\delta ^{-1}}\,\, \forall t\in [\delta ,\delta ^{-1}]\Big ] =1 \end{aligned}$$
(1.13)

for any \(\delta ,\theta '\in (0,1)\), where \(\Gamma _{\theta '}^{\varepsilon }(t,\omega ):=\{x\in {\mathbb {R}}^d\,|\, u^\varepsilon (t,x+y_\varepsilon ,\omega )\ge \theta ' \}\).

Remarks

1. The hypothesis on \(\phi _H\) is of course satisfied in both (i) and (ii) when H has a finite temporal range of dependence.

2. Since the homogenized dynamic only depends on f via \(f_u(\cdot ,\cdot ,0,\cdot )\), the full reaction f need not be space-time-stationary or have the required temporal dependence properties.

3. It is shown in [36] that the bound (1.10) is necessary (and sharp) for solutions to spread with positive speeds in all directions (i.e., for \(0\in {\mathcal S}\)).

4. Allowing for \(y_\varepsilon \ne 0\) makes (i) more general, but this is not the case for (ii) due to space stationarity of H.

5. In the course of the proof we also show in Theorems 3.2 and 4.2 below that \({\mathcal S}\) is the Wulff shape for (3.1) in the sense of propagation from compactly supported initial data.

6. In (ii) we also have that \(\limsup _{\varepsilon \rightarrow 0}u^\varepsilon (t,x+y_\varepsilon ,\omega )\le \chi _{G^{\mathcal S}}(t,x)\) locally uniformly on \(([0,\infty )\times {\mathbb {R}}^d)\backslash \partial G^{\mathcal S}\) (see the remark at the end of Sect. 3).
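Complementing the homogeneous example after (1.9), one can also see where the constant in (1.10) comes from (see Remark 3 above for its necessity and sharpness) by considering the constant-coefficient case \(A=\lambda I\), constant \(b\), and \(f=f(u)\) with \(f'(0)=a>0\); this is a textbook computation included here only for orientation. Writing \(w(t,x):=u(t,x-bt)\) turns (1.1) into \(w_t=\lambda \Delta w+f(w)\), whose solutions spread with the classical KPP speed \(2\sqrt{\lambda a}\) in every direction, so solutions to (1.1) spread with speed \(c_e=2\sqrt{\lambda a}-b\cdot e\) in each direction \(e\in {\mathbb {S}}^{d-1}\) (and \({\mathcal S}=B_{2\sqrt{\lambda a}}(0)-b\)). Hence

$$\begin{aligned} \min _{e\in {\mathbb {S}}^{d-1}} c_e=2\sqrt{\lambda a}-|b|>0 \qquad \Longleftrightarrow \qquad \Vert b\Vert _{L^\infty }^2<4\lambda a, \end{aligned}$$

which is precisely (1.10) in this case.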

Let us now turn to the G-equation (1.3). The scaling (1.6) transforms it into

$$\begin{aligned} u^\varepsilon _t+v(\varepsilon ^{-1}t,\varepsilon ^{-1}x,\omega )\cdot \nabla u^\varepsilon =c(\varepsilon ^{-1}t,\varepsilon ^{-1}x,\omega )|\nabla u^\varepsilon |, \end{aligned}$$
(1.14)

and the goal is again to show that the dynamic for this PDE converges in an appropriate sense to that for some deterministic homogeneous equation as \(\varepsilon \rightarrow 0\).

The G-equation is a (first-order) Hamilton–Jacobi equation and there is a vast literature on periodic and stochastic homogenization for general Hamilton–Jacobi equations

$$\begin{aligned} u_t=H(t,x,\nabla u,\omega ), \end{aligned}$$

as well as their second-order (viscous) analogs. We will not attempt to review it here, and will only focus on homogenization results involving time-dependent Hamiltonians. Kosygina and Varadhan proved homogenization for space-time stationary ergodic super-linear (in \(p\)) Hamiltonians in the presence of diffusion represented by the Laplacian [20], Schwab addressed the same case but without diffusion [26], and Jing, Souganidis, and Tran treated the cases of space-time stationary ergodic super-quadratic Hamiltonians with possibly degenerate diffusions [15]. The last three authors also considered (1.3) with \(v\equiv 0\) and \(c\) that is either periodic in time and stationary-ergodic in space or vice versa [14, 16]. All these papers considered Hamiltonians that are convex and coercive in \(\nabla u\) (the latter means that \(\lim _{|p|\rightarrow \infty }H(t,x,p,\omega )=\infty \) for all \((t,x,\omega )\in {\mathbb {R}}^{d+1}\times \Omega \)), which is a frequent hypothesis in the theory, even in the time-independent case.

The Hamiltonian \(H(t,x,p,\omega ):=c(t,x,\omega )|p|-v(t,x,\omega )\cdot p\) from (1.3) is convex when \(c\ge 0\), but it is only coercive when \(|v|<c\). Hence none of the above results are applicable to the G-equation when this fails, and we in fact know of only two prior homogenization results in the non-coercive time-dependent case. Cardaliaguet, Nolen, and Souganidis obtained homogenization for (1.3) with \(c\equiv 1\) and space-time periodic \(v\) with a not-too-large divergence [8] (independently, Xin and Yu addressed the divergence-free time-independent space-periodic case at the same time [28]; see also the work of Cardaliaguet and Souganidis [9] for the general spatially stationary ergodic case). More recently, Burago, Ivanov, and Novikov proved it with \(c\equiv 1\) and space-time stationary divergence-free \(v\) that is not too large in average over very large balls (specifically, (1.18) below holds with \(c\equiv 1\)) and has a finite temporal range of dependence [6]. The latter was thus the first (and, prior to our results, the only) stochastic homogenization result in the non-coercive time-dependent setting.

Our approach to homogenization for KPP reaction–advection–diffusion equations, via the non-autonomous subadditive theorem from the next section, turns out to easily extend to the setting of G-equations with general \((c,v)\) that have infinite temporal ranges of dependence, provided their temporal correlations decay in an appropriate sense. Our main result for (1.3) is Theorem 1.5 below, which we discuss next. We note that besides the ability to accommodate some environments with infinite temporal ranges of dependence, another advantage of our method is that it applies to second-order equations, including (1.1) (the method in [6] does not seem to be fully extendable to this setting, due to the need for a control representation formula for solutions, such as (1.16) below). It will be used to study homogenization for other (viscous) Hamilton–Jacobi PDE elsewhere [32].

In the setting of G-equations, we again have the concept of a Wulff shape, which is now the asymptotic shape of the reachable sets in the sense of the following definition.

Definition 1.4

We say that \((t_1,x_1)\in {\mathbb {R}}^{d+1}\) is \(\omega \)-reachable from \((t_0,x_0)\in (-\infty ,t_1]\times {\mathbb {R}}^{d}\) if there is an absolutely continuous path \(\gamma :[t_0,t_1]\rightarrow {\mathbb {R}}^d\) such that \(\gamma (t_j)=x_j\) (\(j=0,1\)) and

$$\begin{aligned} \left| \gamma '(t)-v(t,\gamma (t),\omega )\right| \le c(t,\gamma (t),\omega ) \end{aligned}$$

for almost all \(t\in [t_0,t_1]\). For any \(t\ge 0\), we let

$$\begin{aligned} \Gamma (t,\omega ;t_0,x_0):=\left\{ x\in {\mathbb {R}}^d\,\big |\, (t_0+t,x)\text { is}\, \omega \text {-reachable from }(t_0,x_0)\right\} \end{aligned}$$
(1.15)

be the \(\omega \)-reachable set from \((t_0,x_0)\) at time t, and denote \(\Gamma (t,\omega ):=\Gamma (t,\omega ;0,0)\).
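As a simple illustration of this definition (not needed in what follows), consider constant \(c\equiv c_0>0\) and constant \(v\equiv v_0\). Substituting \(\eta (t):=\gamma (t)-v_0(t-t_0)\) reduces the constraint to \(|\eta '|\le c_0\), so

$$\begin{aligned} \Gamma (t,\omega ;t_0,x_0)=\overline{B_{c_0t}(x_0+v_0t)} \qquad \text {and hence}\qquad {\mathcal S}=v_0+B_{c_0}(0), \end{aligned}$$

which contains 0 (i.e., reachable sets spread ballistically in all directions) precisely when \(|v_0|<c_0\), that is, when the Hamiltonian above is coercive.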

These sets allow one to explicitly solve (1.3) via a well known control representation formula (see, e.g., Theorem 7.2 in [12]), under reasonable hypotheses on c and v, so that after scaling we obtain

$$\begin{aligned} u^\varepsilon (t,x,\omega )=\sup _{x\in \varepsilon \Gamma (\varepsilon ^{-1}t,\omega ;0,\varepsilon ^{-1}y)} u^\varepsilon (0,y,\omega ) \end{aligned}$$
(1.16)

for solutions to (1.14). If there is \({\mathcal S}\subseteq {\mathbb {R}}^d\) such that \(\Gamma (t,\omega )\) approaches \(t{\mathcal S}\) as \(t\rightarrow \infty \), for a large set of \(\omega \) in the probabilistic sense (space-time stationarity then shows that the same holds for \(\Gamma (t,\omega ;t_0,x_0)\) for any \((t_0,x_0)\in {\mathbb {R}}^{d+1}\)), then \({\mathcal S}\) is the Wulff shape for (1.3). In this case it follows from (1.16) that if the solutions \(u^\varepsilon \) share the same initial datum \(u_0\), then they converge in an appropriate sense to the function

$$\begin{aligned} \bar{u}(t,x):=\sup _{x\in y+t{\mathcal S}} u_0(y) = \sup _{y\in x-t{\mathcal S}} u_0(y), \end{aligned}$$
(1.17)

which again also solves (1.8) with \(c^*(e)=\sup _{y\in {\mathcal S}} \,y\cdot e\).

We note that unlike for (1.1), here the transition time from one value of u to another should be roughly proportional to the inverse of the spatial gradient of the solution, and therefore can increase as \(O(\frac{1}{\varepsilon })\) if the solution gradient is \(O(\varepsilon )\). It then makes perfect sense to consider initial data for (1.14) equal to or approximating some continuous function as \(\varepsilon \rightarrow 0\), which is what we will therefore do in the following analog of Theorem 1.3 for the G-equation (nevertheless, our arguments easily extend to discontinuous initial data).

Theorem 1.5

Let \((c,v)\) be bounded, uniformly continuous in \(t\), and (uniformly) Lipschitz in \(x\), with \(v\) divergence-free (i.e., \(\nabla _x \cdot v(t,\cdot ,\omega )=0\) holds a.e., for all \((t,\omega )\in {\mathbb {R}}\times \Omega \)) and

$$\begin{aligned} \inf _{L>0}\sup _{(t,x,\omega )\in {\mathbb {R}}^{d+1}\times \Omega }\left| \frac{1}{L^d} \int _{[0,L]^d}v(t,x+y,\omega )\,dy\right| <\inf _{(t,x,\omega )\in {\mathbb {R}}^{d+1}\times \Omega }c(t,x,\omega ). \end{aligned}$$
(1.18)

Also assume that \(H:=(c,v)\) is space-time stationary.

(i) If \(\lim _{s\rightarrow \infty }s^\alpha \phi _H(s)=0\) for some \(\alpha >0\), then there is a convex bounded open set \({\mathcal S}\subseteq {\mathbb {R}}^d\) containing 0 (called Wulff shape) such that the following holds for almost all \(\omega \in \Omega \). If \(u_0\) and \(u^\varepsilon (0,\cdot ,\omega )\) for each \((\varepsilon ,\omega )\in (0,1)\times \Omega \) are uniformly continuous on \({\mathbb {R}}^d\), \(\Lambda <\infty \), and \(u^\varepsilon (\cdot ,\cdot ,\omega )\) solves (1.14) in the viscosity sense with

$$\begin{aligned} \sup _{\omega \in \Omega } \Vert u^\varepsilon (0,\cdot +y_\varepsilon ,\omega )- u_0\Vert _{L^\infty } \le \rho (\varepsilon ) \end{aligned}$$
(1.19)

for some \(y_\varepsilon \in B_\Lambda \) and \(\lim _{\varepsilon \rightarrow 0}\rho (\varepsilon )=0\) (when \(y_\varepsilon =0\) and \(\rho (\varepsilon )= 0\), this becomes just \(u^\varepsilon (0,\cdot ,\omega )=u_0\)), then

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} u^\varepsilon (t,x+y_\varepsilon ,\omega )=\sup _{y\in x-t{\mathcal S}} u_0(y) \end{aligned}$$
(1.20)

locally uniformly on \([0,\infty )\times {\mathbb {R}}^d\).

(ii) If \(\lim _{s\rightarrow \infty } \phi _H(s)=0\), then (i) holds with \(\Lambda =\infty \) and with (1.20) replaced by

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}{\mathbb {P}}\left[ \,\left| u^\varepsilon (t,x+y_\varepsilon ,\omega )-\sup _{y\in x-t{\mathcal S}} u_0(y) \right| \le \delta \,\,\, \forall (t,x)\in [0,\delta ^{-1})\times B_{\delta ^{-1}}\right] =1 \end{aligned}$$

for any \(\delta >0\).

Remarks

1. Similarly to (1.10) in Theorem 1.3, hypotheses (1.18) and \(\nabla _x\cdot v\equiv 0\) guarantee positive spreading speed of reachable sets in all directions [6]. We note that while the main result in [10] may seem to allow for weakening of these hypotheses, it in fact only guarantees the above spreading to be ballistic after a time that is almost surely finite but not uniformly bounded (we need the latter here).

2. Again, in (ii) we also have that \(\limsup _{\varepsilon \rightarrow 0}u^\varepsilon (t,x+y_\varepsilon ,\omega )\le \sup _{y\in x-t{\mathcal S}} u_0(y)\) locally uniformly on \([0,\infty )\times {\mathbb {R}}^d\) (this is analogous to the proof of Remark 6 after Theorem 1.3).

The rest of the paper is organized as follows. In Sect. 2 we state a subadditive theorem from [31]. We then prove the two parts of Theorem 1.3 in Sects. 3 and 4, and show how to extend these arguments to the case of Theorem 1.5 in Sect. 5.

2 A subadditive theorem in time-dependent environments

In this section we provide, for the convenience of the reader, the new non-autonomous subadditive theorem from [31] (Theorem 1.2 there; see also Remark 3 following it), which is a crucial ingredient in the proofs of our main results. Specifically, it will be used in the proofs of Lemmas 3.1 and 4.1 below.

Theorem 2.1

Let \((\Omega ,{\mathbb {P}},{\mathcal F})\) be a probability space, and \(\{{\mathcal F}^{\pm }_t\}_{t\ge 0}\) two filtrations such that

$$\begin{aligned} {\mathcal F}^{-}_s\subseteq {\mathcal F}^{-}_{t}\subseteq {\mathcal F}\qquad \text {and}\qquad {\mathcal F}\supseteq {\mathcal F}^{+}_{s}\supseteq {\mathcal F}^{+}_{t} \end{aligned}$$

for all \(t\ge s\ge 0\). For any \(t\ge 0\) and integers \(n>m\ge 0\), let \(X_{m,n}^t:\Omega \rightarrow [0,\infty )\) be a random variable. Let there be \(C\ge 0\) such that the following statements hold for all such tmn.

  1. (1)

    \(X_{m,n}^t \le X_{m,k}^t+X_{k,n}^{t+X_{m,k}^t}\) for all \(k\in \{m+1,\dots ,n-1\}\);

  2. (2)

    \(X_{0,1}^0\le C\);

  3. (3)

    the joint distribution of \(\{X_{m,m+1}^t, X_{m,m+2}^t, \dots \}\) is independent of (tm);

  4. (4)

    \(X_{m,n}^t\) is \({\mathcal F}^+_t\)-measurable, and \(\{\omega \in \Omega \,|\, X_{m,n}^t(\omega )\le s\}\in {\mathcal F}^{-}_{t+s}\) for any \(s\ge 0\);

  5. (5)

    For some \(\alpha >0\) we have \(\lim _{s\rightarrow \infty }s^\alpha \phi (s)=0\), where

    $$ \begin{aligned} \phi (s):=\sup \left\{ \left| {\mathbb {P}}[F|E] -{\mathbb {P}}[F]\right| \,\big |\,\, t\ge 0 \,\, \& \,\, (E,F)\in {\mathcal F}_t^- \times {\mathcal F}_{t+s}^+ \,\, \& \,\, {\mathbb {P}}[E]>0 \right\} . \end{aligned}$$
  6. (6)

    \(X_{m,n}^t \le X_{m,n}^{t+s}+s\) for all \(s\in [C,C+c]\), with some \(c>0\).

Then

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{X_{0,n}^0 }{n}= \lim _{n\rightarrow \infty }\frac{{\mathbb {E}}\left[ X_{0,n}^0\right] }{n} \qquad \text { almost surely.} \end{aligned}$$
(2.1)

Moreover, if in (5) we only have \(\lim _{s\rightarrow \infty } \phi (s)=0\), then (2.1) holds in probability, as well as

$$\begin{aligned} \liminf _{n\rightarrow \infty }\frac{X_{0,n}^0 }{n}\ge \lim _{n\rightarrow \infty }\frac{{\mathbb {E}}\left[ X_{0,n}^0\right] }{n} \qquad \text { almost surely.} \end{aligned}$$
(2.2)
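As a consistency check (ours, not a statement from [31]): if the random variables do not depend on the starting time, that is, \(X_{m,n}^t=X_{m,n}\) for all \(t\ge 0\), then hypothesis (1) reduces to the classical subadditivity

$$\begin{aligned} X_{m,n}\le X_{m,k}+X_{k,n} \qquad \text {for all } m<k<n, \end{aligned}$$

and (2.1) becomes the conclusion of Kingman's subadditive ergodic theorem, with (3) and (5) playing the roles of its stationarity and ergodicity hypotheses. This is the sense in which Theorem 2.1 is a non-autonomous version of that classical result.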

3 Proof of Theorem 1.3(i)

We will consider general initial times \(t_0\in {\mathbb {R}}\), and it will be convenient to rewrite (1.1) as

$$\begin{aligned} u_t={\mathcal L}_{\Upsilon _{(t_0,0)}(\omega )} u+f(t_0+t,x,u,\omega ) \end{aligned}$$
(3.1)

(recall (1.5)), so that the solutions we will consider will always be defined on \({\mathbb {R}}^+\times {\mathbb {R}}^d\). Then for each \((t_0,x_0,\omega )\in {\mathbb {R}}^{d+1}\times \Omega \), let \(u(\cdot ,\cdot ,\omega ;{t_0},{x_0})\) be the solution to (3.1) satisfying the initial condition \(u(0,\cdot ,\omega ;t_0,x_0):=\frac{1}{2}\chi _{B_1(x_0)}\). For any \(\theta \in (0,1)\), we let

$$\begin{aligned} \Gamma _{\theta }(t,\omega ;t_0,x_0):=\left\{ x\in {\mathbb {R}}^d\,\big |\, u(t,x,\omega ;{t_0},{x_0})\ge \theta \right\} \end{aligned}$$

be its \(\theta \)-super-level set at time \(t\ge 0\). Let us also denote \(\Gamma _\theta (t,\omega ):=\Gamma _\theta (t,\omega ;0,0)\).

It is proved in [36] (and explicitly stated in the proof of Theorem 1.4 in [37]) that a uniform hair-trigger effect holds under the hypotheses of Theorem 1.3. Specifically, for any fixed \(\theta \in (0,1)\) we have that any solution to (3.1) with \(u(0,\cdot )\ge \theta \chi _{B_1(0)}\) converges locally uniformly on \({\mathbb {R}}^d\) to 1 as \(t\rightarrow \infty \), and this convergence is uniform in all \((A,b,f)\) (as well as in all \((t_0,\omega )\in {\mathbb {R}}\times \Omega \)) that satisfy the hypotheses of Theorem 1.3 uniformly—that is, with the same

$$\begin{aligned} \gamma \in \left( 0, \min \left\{ \lambda ,\, \Vert A\Vert _\infty ^{-1}, \,\Vert f_u(\cdot ,\cdot ,0,\cdot )\Vert _{L^{\infty }}^{-1}, \,4 \lambda \inf _{(t,x,\omega )\in {\mathbb {R}}^{d+1} \times \Omega } f_u(t,x,0,\omega ) - \Vert b\Vert _{L^\infty }^2 \right\} \right] , \end{aligned}$$

the same Lipschitz lower bound \(f_0:(0,1)\rightarrow (0,\infty )\) on \({\tilde{f}}(u):=\inf _{(t,x,\omega )\in {\mathbb {R}}^{d+1}\times \Omega }f(t,x,u,\omega )\), and the \(\sup \) in (1.4) bounded above by the same \(\psi (u)\) with \(\lim _{u\rightarrow 0} \psi (u)=0\). Of course, the uniformity in \(\omega \) then also extends the uniform convergence to any spatial shift of the initial datum, after accounting for the corresponding shift in the solution (because shifting the medium by \(z\in {\mathbb {R}}^{d}\) simply amounts to changing \(\omega \) to \(\Upsilon _{(0,z)}(\omega )\)). Note that bootstrapping this claim then yields at least ballistic spreading of each super-level set in all directions (with the same positive lower bound on the spreading speed for all the super-level sets, because such lower bound for the \(\frac{1}{2}\)-super-level set also applies to all other \(\theta \in (0,1)\) due to the hair-trigger effect). On the other hand, a finite upper bound on the spreading speeds follows from \(e^{at-(x-x_0)\cdot e}\) being a super-solution to (3.1) for any \((e,x_0)\in {\mathbb {S}}^{d-1}\times {\mathbb {R}}^d\), provided a is large enough (depending only on \(\gamma \) above—see, e.g., the proof of Theorem 2.1 in [36]).

In particular, there is \(M\ge 1\) (which depends only on \(\gamma ,f_0,\psi \) above) such that under the hypotheses of Theorem 1.3 we have for all \((t_0,x_0,\omega )\in {\mathbb {R}}^{d+1}\times \Omega \),

$$\begin{aligned} B_{M^{-1}t}(x_0)\subseteq \Gamma _{1/2}(t,\omega ;t_0,x_0)\subseteq B_{{M}t}(x_0)\qquad \text { when}\, t\ge M. \end{aligned}$$
(3.2)

This immediately yields \(u(s,\cdot ,\omega ;t_0,x_0)\ge u(0,\cdot ,\omega ;t_0+s,z)\) for any \((t_0,x_0,z,\omega )\in {\mathbb {R}}^{2d+1}\times \Omega \) and \(s\ge {M}(|z-x_0|+1)\). Hence the comparison principle shows that for any \(t\ge 0\),

$$\begin{aligned} \Gamma _{1/2}(t,\omega ;t_0+s,z )\subseteq \Gamma _{1/2}(t+s,\omega ;t_0,x_0 )\qquad \text {when }s\ge {M}(|z-x_0|+1). \end{aligned}$$
(3.3)

The parabolic Harnack inequality [21, Corollary 7.42] shows that there is \(\theta >0\) (depending only on \(\gamma \)) such that if \(x\in \Gamma _{1/2}(t+s,\omega ;t_0,x_0 )\) and \(t+s\ge 1\), then \(u(t+s+1,\cdot ,\omega ;t_0,x_0)\ge \theta \chi _{B_1(x)}\). Hence, if we increase M to three times the maximum of its current value and a time \(\tau \ge 1\) such that under the hypotheses of Theorem 1.3, any solution to (3.1) with \(u(0,\cdot )\ge \theta \chi _{B_1(0)}\) satisfies \(u(\tau ,\cdot )\ge \frac{1}{2}\chi _{B_1(0)}\) (such \(\tau \) exists due to the hair-trigger effect), then from (3.3) we obtain with any \(s'\ge M\),

$$\begin{aligned} B_{M^{-1}s'} \left( \Gamma _{1/2}(t,\omega ;t_0+s,z ) \right) \subseteq \Gamma _{1/2}(t+s+s',\omega ;t_0,x_0 )\qquad \text {when }s\ge {M}(|z-x_0|+1) \end{aligned}$$
(3.4)

by the hair-trigger effect and another usage of (3.2).

Next let the travel time to a point \(x\in {\mathbb {R}}^d\), when starting at \((t_0,x_0)\in {\mathbb {R}}^{d+1}\), be

$$\begin{aligned} \tau ^{t_0}({x_0,x},\omega ):=\inf \left\{ t\ge 0\,\big |\, B_1(x)\subseteq \Gamma _{1/2}(t,\omega ;t_0,x_0) \right\} . \end{aligned}$$

The comparison principle yields a space-time subadditivity property for these times, namely

$$\begin{aligned} \tau ^{t_0}({x_0,x},\omega )\le \tau ^{t_0}({x_0,z},\omega )+\tau ^{t_0+\tau ^{t_0}({x_0,z},\omega )}({z,x},\omega ) \end{aligned}$$
(3.5)

for any \((z,\omega )\in {\mathbb {R}}^d \times \Omega \). Due to (3.2), we also have

$$\begin{aligned} \tau ^{t_0}({x_0,x},\omega )\le {M}(|x-x_0|+1) \end{aligned}$$
(3.6)

for all \((t_0,x_0,x,\omega )\in {\mathbb {R}}^{2d+1} \times \Omega \). Combining this with (3.5) yields

$$\begin{aligned} \tau ^{t_0}({x_0,x},\omega )\le \tau ^{t_0}({x_0,z},\omega )+{M}(|x-z|+1) \end{aligned}$$
(3.7)

for all \((t_0,x_0,x,z,\omega )\in {\mathbb {R}}^{3d+1} \times \Omega \). Finally, from (3.3) we have

$$\begin{aligned} \tau ^{t_0}({x_0,x},\omega )\le \tau ^{t_0+t}({z,x},\omega )+t\qquad \text { when}\, t\ge M(|z-x_0|+1). \end{aligned}$$
(3.8)

We can now prove Theorem 1.3(i). We will first assume that \(H:=(A,b,f)\) is space-time stationary (with f understood as a C([0, 1])-valued function on \({\mathbb {R}}^{d+1}\times \Omega \)), rather than just \((A,b,f_u(\cdot ,\cdot ,0,\cdot ))\). We will also denote \({\mathcal F}_t^\pm :={\mathcal F}_t^\pm (H)\) for simplicity (see Definition 1.2).

We start with a lemma that shows that the travel times in any fixed direction are asymptotically linear.

Lemma 3.1

Assume the hypotheses of Theorem 1.3(i) but with \(H:=(A,b,f)\) being space-time stationary. Then for each \(e\in {\mathbb {S}}^{d-1}\) there are \(\Omega _e\subseteq \Omega \) and

$$\begin{aligned} \bar{\tau }(e)\in \left[ M^{-1}, M \right] , \end{aligned}$$
(3.9)

with M from (3.2), such that \({\mathbb {P}}[\Omega _e]=1\) and for each \((t_0,x_0,\omega )\in {\mathbb {R}}^{d+1}\times \Omega _e\) we have

$$\begin{aligned} \lim _{r\rightarrow \infty }\frac{\tau ^{t_0}({x_0,x_0+re},\omega )}{r}=\lim _{r\rightarrow \infty }\frac{{\mathbb {E}}\left[ \tau ^{t_0}({x_0,x_0+re},\cdot )\right] }{r}=\bar{\tau }(e). \end{aligned}$$
(3.10)

Moreover, for any \(e,e'\in {\mathbb {S}}^{d-1}\) we have

$$\begin{aligned} \max \left\{ |\bar{\tau }(e)-\bar{\tau }(e')|,\left| \frac{1}{\bar{\tau }(e)}-\frac{1}{\bar{\tau }(e')}\right| \right\} \le M^3|e-e'|. \end{aligned}$$
(3.11)

Proof

For any \(e\in {\mathbb {S}}^{d-1}\), the hypotheses of Theorem 2.1 hold with \(X_{m,n}^t:=\tau ^{t}(me,ne,\cdot )\). Indeed: (1) follows from (3.5); (2) from (3.6); (3) from space-time stationarity of H; (4) from the definition of \({\mathcal F}_t^\pm \); (5) from the hypothesis; and (6) with \((C,c):=(M,\infty )\) from (3.8) with \(z=x_0\). Hence Theorem 2.1 and (3.7) yield (3.10) with \((t_0,x_0)=(0,0)\) for almost all \(\omega \in \Omega \), with (3.9) following from (3.2). Space-time stationarity of H yields some full measure set \(\Omega _e\subseteq \Omega \) such that (3.10) holds for all \((t_0,x_0)\in {\mathbb {Z}}^{d+1}\) when \(\omega \in \Omega _e\), and this then extends to all \((t_0,x_0)\in {\mathbb {R}}^{d+1}\) by (3.8).

Next, (3.5) and (3.6) yield for any \(e,e'\in {\mathbb {S}}^{d-1}\),

$$\begin{aligned} \tau ^0(0,ne,\omega )\le \tau ^0(0,ne',\omega )+\tau ^{\tau ^{0}(0,ne',\omega )}(ne',ne,\omega )\le \tau ^0(0,ne',\omega )+{M}(n|e-e'|+1). \end{aligned}$$

After dividing this by n and taking \(n\rightarrow \infty \), we obtain \(\bar{\tau }(e)\le \bar{\tau }(e')+{M}|e-e'|\). This and \(\bar{\tau }\ge M^{-1}\) yield (3.11). \(\square \)

Next, we show that solutions to (3.1) with localized initial data asymptotically approximate characteristic functions of a ballistically expanding deterministic Wulff shape

$$\begin{aligned} {\mathcal S}:=\{se\,\big |\,e\in {\mathbb {S}}^{d-1}\text { and }s\in [0,w(e))\}, \end{aligned}$$
(3.12)

where the deterministic spreading speed in direction \(e\in {\mathbb {S}}^{d-1}\) is

$$\begin{aligned} w(e):=\bar{\tau }(e)^{-1}\in [{M}^{-1},{M}]. \end{aligned}$$
(3.13)

Note that \({\mathcal S}\) is bounded due to (3.13) and open due to (3.11).

Theorem 3.2

Under the hypotheses of Lemma 3.1, \({\mathcal S}\) from (3.12) is convex, and it is a strong deterministic Wulff shape for (1.1). The latter means that for almost all \(\omega \in \Omega \),

$$\begin{aligned} \begin{aligned} \lim _{t\rightarrow \infty }\inf _{|x_0|\le \Lambda t} \inf _{x\in (1-\delta )t{\mathcal S}}u(t,x_0+x,\omega ;0,x_0)&=1,\\ \lim _{t\rightarrow \infty }\sup _{|x_0|\le \Lambda t} \sup _{x\not \in (1+\delta )t{\mathcal S}}u(t,x_0+x,\omega ;0,x_0)&=0 \end{aligned} \end{aligned}$$
(3.14)

hold for each \(\delta \in (0,1)\) and \(\Lambda \ge 0\).

Proof

Convexity of \({\mathcal S}\) will be proved in Theorem 4.2 under slightly more general hypotheses.

For each \(e\in {\mathbb {S}}^{d-1}\), let \(\Omega _e\) be the set from Lemma 3.1. Let Q be a countable dense subset of \({\mathbb {S}}^{d-1}\) and define \(\Omega ':=\cap _{e\in Q}\Omega _e\) (so \({\mathbb {P}}[\Omega ']=1\)). Now fix any \(\delta \in (0,1)\) and \(\omega \in \Omega '\). We will first show that there is \(C_{\delta ,\omega }>0\) such that for all \(t\ge C_{\delta ,\omega }\),

$$\begin{aligned} (1-\delta )t{\mathcal S}\subseteq \Gamma _{1/2}(t,\omega ) \subseteq (1+\delta ) t{\mathcal S}. \end{aligned}$$
(3.15)

Let \(\varepsilon :=\frac{\delta }{3M}\) and let \(e_1,\dots ,e_N\in {\mathcal S}{\setminus }\{0\}\) be such that \(\frac{e_i}{|e_i|}\in Q\) and \({\mathcal S}\subseteq \bigcup _{i=1}^NB_\varepsilon (e_i)\). Hence for any \(t\ge 0\) and \(v\in t{\mathcal S}\), there is \(i\in \{1,\dots ,N\}\) such that \(|v-te_i|\le t\varepsilon \). Then (3.7) shows that

$$\begin{aligned} \left| \tau ^0(0,v,\omega )-\tau ^0(0,te_i,\omega )\right| \le {M}(t\varepsilon +1). \end{aligned}$$
(3.16)

By Lemma 3.1, for all large enough t we have

$$\begin{aligned} \sup _{i\in \{1,\cdots ,N\}} \left| \frac{\tau ^0(0,te_i,\omega )}{t|e_i|}-\frac{1}{w({e_i}{|e_i|^{-1}})}\right| \le \varepsilon \end{aligned}$$
(3.17)

Using (3.16), (3.17), and \(|e_i|\le w(\frac{e_i}{|e_i|})\le {M}\) (by (3.12) and (3.13)), for all large t we obtain

$$\begin{aligned} \sup _{v\in t{\mathcal S}}\tau ^0(0,v,\omega )\le \max _{i\in \{1,\cdots ,N\}} \tau ^0(0,te_i,\omega )+{M}(t\varepsilon +1)\le t+{M}(2t\varepsilon +1). \end{aligned}$$

Hence

$$\begin{aligned} t{\mathcal S}\subseteq \Gamma _{1/2} (t+{M}(2t\varepsilon +1),\omega ) \end{aligned}$$
(3.18)

holds for all large enough t. Since \(2M\varepsilon < \delta \), the first inclusion in (3.15) follows.

Next let \(e_1',\dots ,e_{N'}'\in {\mathbb {R}}^d\setminus {\mathcal S}\) be such that \(\frac{e_i'}{|e_i'|}\in Q\) and \(B_{{M}}(0)\backslash {\mathcal S}\subseteq \bigcup _{i=1}^{N'} B_\varepsilon (e_i') \). Note that

$$\begin{aligned} v\notin \Gamma _{1/2}(t,\omega ) \qquad \text { whenever }\, t\ge M\, \text {and}\, v\in B_{{M}t}(0)^c \end{aligned}$$
(3.19)

due to (3.2). For each \(v\in B_{{M}t}(0)\backslash t{\mathcal S}\), there is \(e_i'\) such that \(|v-te_i'|\le t\varepsilon \) and then

$$\begin{aligned} \tau ^0(0,v,\omega )\ge \tau ^0(0,te_i',\omega )- {M}(t\varepsilon +1) \end{aligned}$$

by (3.7). Moreover, since now \(w(\frac{e_i'}{|e_i'|}) \le |e_i'| \le {M}\) and (3.17) holds with \(e_i'\) and \(N'\) in place of \(e_i\) and N, we obtain

$$\begin{aligned} \inf _{v\in B_{{M}t}(0)\backslash t{\mathcal S}}\tau ^0(0,v,\omega )\ge \min _{i\in \{1, \cdots ,N'\}} \tau ^0(0,te_i',\omega )-{M}(t\varepsilon +1)\ge t-{M}(2t\varepsilon +1). \end{aligned}$$

This and (3.19) yield \(\Gamma _{1/2}(t-2{M}(t\varepsilon +1),\omega )\subseteq t{\mathcal S}\) for all large enough t, so the second inclusion in (3.15) again follows by \(2M\varepsilon < \delta \).

We next want to upgrade (3.15) to the claim that for almost all \(\omega \) we have for any \(\Lambda \ge 1\) and \(\delta \in (0,1)\),

$$\begin{aligned} x_0+(1-\delta ) t{\mathcal S}\subseteq \Gamma _{1/2}(t,\omega ;0,x_0) \subseteq x_0+(1+\delta ) t{\mathcal S}\end{aligned}$$
(3.20)

for all large enough t (depending on \(\omega ,\delta ,\Lambda \)) and all \(x_0\in B_{\Lambda t}(0)\). This will finish the proof because the parabolic Harnack inequality and the hair-trigger effect show that for each \(\theta \in (0,1)\), there is \(C_\theta >0\) such that for all \((t_0,x_0,\omega )\in {\mathbb {R}}^{d+1}\times \Omega \) and \(t\ge C_\theta +1\),

$$\begin{aligned} \Gamma _{1/2}(t-C_\theta ,\omega ;t_0,x_0)\subseteq \Gamma _\theta (t,\omega ;t_0,x_0)\subseteq \Gamma _{1/2}(t+C_\theta ,\omega ;t_0,x_0). \end{aligned}$$

It therefore remains to show (3.20). Fix any \(\Lambda \ge 1\) and

$$\begin{aligned} \delta \in \left( 0, ({26M\Lambda })^{-1} \right) . \end{aligned}$$
(3.21)

By (3.15) and Egorov’s Theorem, there are \(\tau _{\delta }>0\) and \(D_{\delta }\subseteq \Omega \) with \({\mathbb {P}}[D_{\delta }]\ge 1-\delta ^{d+1}\) such that for each \(\omega \in D_{\delta }\) and \(t\ge \tau _{\delta }\),

$$\begin{aligned} (1-\delta ) t{\mathcal S}\subseteq \Gamma _{1/2}(t,\omega ) \subseteq (1+\delta ) t{\mathcal S}. \end{aligned}$$
(3.22)

It is clear that we can in fact pick \(D_{\delta }\) from the \(\sigma \)-algebra generated by \(\bigcup _{i\in {\mathbb {N}}} ({\mathcal F}_0^+\cap {\mathcal F}_i^-)\). Let \({\mathcal F}'\) be the \(\sigma \)-algebra generated by \(\bigcup _{i\in {\mathbb {N}}}({\mathcal F}_{-i}^+\cap {\mathcal F}_i^-)\) (or just replace \({\mathcal F}\) by \({\mathcal F}'\) from the very start) and apply Wiener’s ergodic theorem (see, e.g., [4, Theorems 2 and 3]) with the group of transformations \(\{\Upsilon _{(s,y)}\}_{(s,y)\in {\mathbb {R}}^{d+1}}\) on the probability space \((\Omega ,{\mathcal F}',{\mathbb {P}})\). It shows that there is \(\Omega _{\delta }\in {\mathcal F}'\) with \({\mathbb {P}}[\Omega _{\delta }]=1\) such that the following holds, with

$$\begin{aligned} \varphi _{\delta ,r}(\omega ):=\frac{1}{|{{\mathcal B}}_{ r}|}\int _{{{\mathcal B}}_{ r}} \chi _{D_{\delta }} \left( \Upsilon _{(s,y)}(\omega ) \right) dsdy \end{aligned}$$

and \({\mathcal B}_r\subseteq {\mathbb {R}}^{d+1}\) the space-time ball of radius \(r>0\) centered at the origin. The limit

$$\begin{aligned} \varphi _{\delta }(\omega ):= \lim _{r\rightarrow \infty } \varphi _{\delta ,r}(\omega ) \, \in [0,1] \end{aligned}$$

(which is \({\mathcal F}'\)-measurable because \(\varphi _{\delta ,r}\) is measurable with respect to the \(\sigma \)-algebra generated by \(\bigcup _{i\in {\mathbb {N}}} ({\mathcal F}_{-r}^+\cap {\mathcal F}_i^-)\)) exists for each \(\omega \in \Omega _{\delta }\), is invariant under the transformations \(\{\Upsilon _{(s,y)}\}_{(s,y)\in {\mathbb {R}}^{d+1}}\), and satisfies

$$\begin{aligned} {\mathbb {E}}[\varphi _{\delta }(\cdot )]={\mathbb {E}}[\chi _{D_{\delta }}(\cdot )]={\mathbb {P}}[D_{\delta }]. \end{aligned}$$

Next we claim that \(\varphi _{\delta }\) is a constant almost everywhere on \(\Omega \). If not, then there are \(k\in {\mathbb {N}}\), \(c>0\), and \(A_1,A_2\in {\mathcal F}_{-k}^+\cap {\mathcal F}_{k}^-\) with \({\mathbb {P}}[A_1]{\mathbb {P}}[A_2]> 0\) such that

$$\begin{aligned} \left| \frac{1}{{\mathbb {P}}[A_1]}{\mathbb {E}}[\varphi _{\delta }(\cdot )\chi _{A_1}(\cdot )]- \frac{1}{{\mathbb {P}}[A_2]}{\mathbb {E}}[\varphi _{\delta }(\cdot )\chi _{A_2}(\cdot )]\right| \ge c. \end{aligned}$$
(3.23)

Fix \(C'>0\) and note that \(\varphi _{\delta ,r}(\Upsilon _{(r+k+C',0)}(\cdot ))\) is \({\mathcal F}_{k+C'}^+\)-measurable. Hence the definition of \(\phi _H\), the fact that \(A_j\in {\mathcal F}^-_k\), and \(0\le \varphi _{\delta ,r} \le 1\) yield for \(j=1,2\),

$$ \begin{aligned} {\mathbb {E}}[\varphi _{\delta ,r}(\Upsilon _{(r+k+C',0)}(\cdot ))\chi _{A_j}(\cdot )]&=\int _0^1{\mathbb {P}}\left[ \varphi _{\delta ,r}(\Upsilon _{(r+k+C',0)}(\cdot ))>\mu \,\, \& \,\,\chi _{A_j}=1\right] d\mu \\&\le \int _0^1\left( {\mathbb {P}}\left[ \varphi _{\delta ,r}(\Upsilon _{(r+k+C',0)}(\cdot ))>\mu \right] +\phi _H(C')\right) {\mathbb {P}}\left[ A_j\right] d\mu \\&\le {\mathbb {E}}[\varphi _{\delta ,r}(\Upsilon _{(r+k+C',0)}(\cdot ))]{\mathbb {P}}\left[ A_j\right] +\phi _H(C'). \end{aligned}$$

Similarly,

$$\begin{aligned} {\mathbb {E}}[\varphi _{\delta ,r}(\Upsilon _{(r+k+C',0)}(\cdot ))\chi _{A_j}(\cdot )] \ge {\mathbb {E}}[\varphi _{\delta ,r}(\Upsilon _{(r+k+C',0)}(\cdot ))]{\mathbb {P}}\left[ A_j\right] -\phi _H(C'). \end{aligned}$$

Since \(\lim _{s\rightarrow \infty }\phi _H(s)=0\) and \({\mathbb {P}}[A_j]>0\) for \(j=1,2\), taking sufficiently large \(C'\) yields

$$\begin{aligned} \left| \frac{1}{{\mathbb {P}}\left[ A_j\right] }{\mathbb {E}}[\varphi _{\delta ,r}(\Upsilon _{(r+k+C',0)}(\cdot ))\chi _{A_j}(\cdot )] -{\mathbb {E}}[\varphi _{\delta ,r}(\Upsilon _{(r+k+C',0)}(\cdot ))]\right| \le \frac{c}{4}. \end{aligned}$$

Since \(\varphi _{\delta ,r}\rightarrow \varphi _{\delta }\) almost surely as \(r\rightarrow \infty \), and \(0\le \varphi _{\delta ,r} \le 1\), we thus obtain

$$\begin{aligned} \left| \frac{1}{{\mathbb {P}}[A_j]}{\mathbb {E}}[\varphi _{\delta }(\Upsilon _{(r+k+C',0)}(\cdot ))\chi _{A_j}(\cdot )] -{\mathbb {E}}[\varphi _{\delta ,r}(\Upsilon _{(r+k+C',0)}(\cdot ))]\right| <\frac{c}{2} \end{aligned}$$

for all large enough r and \(j=1,2\). However, since \(\varphi _{\delta }\circ \Upsilon _{(r+k+C',0)} = \varphi _{\delta }\), this contradicts (3.23). Thus we see that \(\varphi _{\delta }(\omega ) = {\mathbb {P}}[D_{\delta }]\) for almost all \(\omega \in \Omega \).

This means that there is \(\Omega _{\delta }'\subseteq \Omega _{\delta }\) with \({\mathbb {P}}[\Omega _{\delta }']=1\) such that for each \(\omega \in \Omega _{\delta }'\) we have

$$\begin{aligned} \lim _{r\rightarrow \infty } \frac{1}{|{{\mathcal B}}_{ r}|}\int _{{{\mathcal B}}_{ r}} \chi _{D_{\delta }}(\Upsilon _{(s,y)}(\omega ))dsdy={\mathbb {P}}[D_{\delta }]\ge 1-\delta ^{d+1}. \end{aligned}$$

Thus there is \(t_{\omega ,\delta ,\Lambda }\ge \max \{\tau _\delta , \frac{1}{\delta }\} \) such that for all \(t\ge t_{\omega ,\delta ,\Lambda }\) we have

$$\begin{aligned} \left| \left\{ (s,z)\in {\mathcal B}_{2\Lambda t}\,|\,\Upsilon _{(s,z)}(\omega )\notin D_{\delta }\right\} \right| \le 2\delta ^{d+1}\left| {\mathcal B}_{2\Lambda t}\right| . \end{aligned}$$

For any \(t\ge t_{\omega ,\delta ,\Lambda }\), let \(C_t:=M(2\delta \Lambda t+1)\le 3M\delta \Lambda t\) (because \(t\ge \frac{1}{\delta \Lambda }\)). Then for any \(x_0\in B_{\Lambda t}(0)\), there are

$$\begin{aligned} (s_\pm ,z_\pm )\in [C_t,C_t+8\delta \Lambda t] \times B_{2\delta \Lambda t}(x_0) \subseteq {\mathcal B}_{2\Lambda t} \end{aligned}$$

satisfying \(\Upsilon _{(\pm s_\pm ,z_\pm )}(\omega )\in D_{\delta }\) (note that \((-2\delta \Lambda t,2\delta \Lambda t) \times B_{2\delta \Lambda t}(0) \supseteq {\mathcal B}_{2\delta \Lambda t}\), while (3.21) implies \(C_t+10\delta \Lambda t \le 13\,M\delta \Lambda t \le \Lambda t\)). Now let \(c_{\delta ,\Lambda }:=13\,M\delta \Lambda \) (\(\le \frac{1}{2}\) by (3.21)). Since we have \(s_\pm \ge C_t\ge M(|z_\pm -x_0|+1)\) and \(2\,M\delta \Lambda t\ge M\), as well as

$$\begin{aligned} s_\pm + 2M\delta \Lambda t \le C_t+10M\delta \Lambda t \le c_{\delta ,\Lambda } t, \end{aligned}$$

from (3.4), (3.22), and \(\Upsilon _{(\pm s_\pm ,z_\pm )}(\omega ) \in D_{\delta }\) we obtain

$$\begin{aligned} \Gamma _{1/2}(t,\omega ;0,x_0)-x_0&\subseteq \Gamma _{1/2}(t+s_-+2M\delta \Lambda t,\omega ; -s_-,z_-)-z_-\\&= \Gamma _{1/2}(t+s_- +2M\delta \Lambda t ,\Upsilon _{(-s_-,z_-)}(\omega )) \subseteq (1+\delta )(1+c_{\delta ,\Lambda }) t{\mathcal S}\end{aligned}$$

and

$$\begin{aligned} \Gamma _{1/2}(t,\omega ;0,x_0) -x_0&\supseteq \Gamma _{1/2}(t-s_+-2M\delta \Lambda t,\omega ;s_+,z_+) -z_+\\&= \Gamma _{1/2}(t-s_+-2M\delta \Lambda t,\Upsilon _{(s_+,z_+)}(\omega )) \supseteq (1-\delta )(1-c_{\delta ,\Lambda }) t{\mathcal S}\end{aligned}$$

for any \(\omega \in \Omega '_\delta \) and \(t\ge t_{\omega ,\delta ,\Lambda }\). Since \(\lim _{\delta \rightarrow 0}c_{\delta ,\Lambda }=0\) for each \(\Lambda \ge 1\), this shows that for each \(\omega \in \Omega '':= \bigcap _{L\in (26\,M,\infty )\cap {\mathbb {N}}} \Omega '_{1/L}\) (so \({\mathbb {P}}[\Omega '']=1\)) and \((\delta ,\Lambda )\in (0,1)\times [1,\infty )\), we indeed have (3.20) when t is large enough and \(x_0\in B_{\Lambda t}(0)\). \(\square \)

If now only \(H:=(A,b,f_u(\cdot ,\cdot ,0,\cdot ))\) is space-time stationary (rather than (Abf)), we let

$$\begin{aligned} f'(t,x,u,\omega ):=f_u(t,x,0,\omega ) \min \{u,1-u\}, \end{aligned}$$
(3.24)

so that Lemma 3.1 and Theorem 3.2 apply to (1.1) with \(f'\) in place of f. We will use the virtual linearity property of (1.1) with KPP f, expressed in Theorem 1.2 in [36], to show that the leading order solution dynamic (as \(t\rightarrow \infty \)) of (1.1) with f coincides with the “Wulff shape spreading dynamic” (1.9) (where \({\mathcal S}\) is the strong deterministic Wulff shape for \(f'\)), and hence conclude Theorem 1.3(i) for f.
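Let us also record why \(f'\) is itself a KPP reaction in the sense of Definition 1.1 (a quick check included for completeness): it vanishes at \(u=0,1\), is Lipschitz in u with constant \(\Vert f_u(\cdot ,\cdot ,0,\cdot )\Vert _{L^\infty }\), and since \(\min \{u,1-u\}\le u\) we have

$$\begin{aligned} f'_u(t,x,0,\omega )=f_u(t,x,0,\omega ), \qquad f'(t,x,u,\omega )\le f_u(t,x,0,\omega )u, \qquad f'_u(t,x,0,\omega )-\frac{f'(t,x,u,\omega )}{u}=0 \quad \text {for } u\le \tfrac{1}{2}, \end{aligned}$$

while \(\inf _{(t,x,\omega )\in {\mathbb {R}}^{d+1}\times \Omega }f'(t,x,u,\omega )\ge \min \{u,1-u\}\inf _{(t,x,\omega )\in {\mathbb {R}}^{d+1}\times \Omega }f_u(t,x,0,\omega )>0\) for each \(u\in (0,1)\). In particular, (1.10) holds for \(f'\) with the same \(\lambda \) and b as for f.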

First, note that taking \(t\rightarrow \infty \) for any fixed \(\delta >0\) in Theorem 1.2 in [36] with initial conditions \(u(0,\cdot )=\frac{1}{2} \chi _{B_1(x_0)}\) (\(x_0\in {\mathbb {R}}^d\)), and then taking \(\delta \rightarrow 0\), immediately shows that the strong deterministic Wulff shape \({\mathcal S}\) for \(f'\) is also a strong deterministic Wulff shape for f (here we also use the hair-trigger effect property discussed at the start of this section, but with \(u(0,\cdot )\ge \theta \chi _{B_1(0)}\) replaced by \(u(0,\cdot )\ge \theta \chi _{B_1(0)\cap (0,1)^d}\), which holds equally). Hence Lemma 3.1 and Theorem 3.2 remain valid if only \((A,b,f_u(\cdot ,\cdot ,0,\cdot ))\) is space-time stationary, with \({\mathcal S}\) only depending on \((A,b,f_u(\cdot ,\cdot ,0,\cdot ))\).

Let now \(\Omega '\in {\mathcal F}\) with \({\mathbb {P}}[\Omega ']=1\) be a set (of almost all \(\omega \in \Omega \)) from Theorem 3.2 for \(f'\) in place of f, and fix any \(\omega \in \Omega '\). Note that “unscaled” versions of (1.11) and (1.12) are

$$\begin{aligned} \theta \chi _{(\varepsilon ^{-1} (G+y_\varepsilon ))_{\varepsilon ^{-1} \rho (\varepsilon )}^0} \le u_\varepsilon (0,\cdot ,\omega )\le \chi _{B_{\varepsilon ^{-1} \rho (\varepsilon )}(\varepsilon ^{-1} (G+y_\varepsilon )) } \end{aligned}$$
(3.25)

and

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} u_\varepsilon (\varepsilon ^{-1} T,\varepsilon ^{-1} (x+ y_\varepsilon ),\omega )= \chi _{G^{\mathcal S}}(T,x), \end{aligned}$$
(3.26)

with \(u_\varepsilon \) solving (1.1) and t replaced by T so that applying (3.14) later does not cause confusion. Let us first consider the case of bounded G, that is, we have \(G\subseteq B_\Lambda (0)\) after possibly increasing \(\Lambda \) from the statement of Theorem 3.2. Fix any \(T_0>0\).

Applying now Theorem 1.2 in [36] to the initial values from (3.25), together with the first claim in (3.14) with \(\frac{2\Lambda }{T_0}\) in place of \(\Lambda \), and with the fact that \({\mathcal S}\) is the strong deterministic Wulff shape for (1.1) with \(f'\) in place of f, shows that for any \(\delta >0\) we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \inf _{T\ge T_0} \, \inf _{z\in \varepsilon ^{-1} \left( (G+y_\varepsilon )_{ \rho (\varepsilon )}^0 + T(1-\delta )^2 {\mathcal S}\right) } u_\varepsilon (\varepsilon ^{-1}T,z,\omega )=1 \end{aligned}$$

(here we again use the hair-trigger effect if \(\theta <\frac{1}{2}\)). Since the set under the \(\inf \) contains \(\varepsilon ^{-1} (G + T(1-\delta )^3 {\mathcal S}+y_\varepsilon )\) for any \(T\ge T_0\) as long as \(\varepsilon >0\) is small enough, taking \(\delta \rightarrow 0\) yields (3.26) locally uniformly on \(G^{\mathcal S}\). We can use a similar argument based on the second claim in (3.14) to show that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \sup _{T\ge T_0} \, \sup _{z\notin \varepsilon ^{-1} \left( B_{ \rho (\varepsilon )}(G+y_\varepsilon ) + T(1+\delta )^2 {\mathcal S}\right) } u_\varepsilon (\varepsilon ^{-1}T,z,\omega )=0, \end{aligned}$$
(3.27)

provided that we also have \(u_\varepsilon (0,\cdot ,\omega )\le \frac{1}{2}\). We obviously obtain the same result for (1.1) with \(f'\) in place of f. But this means that we now get (3.27) without the additional hypothesis \(u_\varepsilon (0,\cdot ,\omega )\le \frac{1}{2}\) because \(\frac{1}{2} u_\varepsilon \) is a subsolution to (1.1) with \(f'\) in place of f that is initially \(\le \frac{1}{2}\). So after again taking \(\delta \rightarrow 0\), we obtain (3.26) locally uniformly on \(((0,\infty )\times {\mathbb {R}}^d)\setminus \overline{G^{\mathcal S}}\).

So to obtain Theorem 1.3(i) for all bounded G, it suffices to extend this convergence to the union of the above set and \(\{0\}\times ({\mathbb {R}}^d\setminus \overline{G})\). But this follows from the spreading speeds of solutions to (1.1) being uniformly bounded above, namely by

$$\begin{aligned} a:=(1+d+d^2) \max \left\{ \max _{i,j} \Vert A_{i,j}\Vert _{L^\infty }, \max _{i} \Vert b_{i}\Vert _{L^\infty }, \Vert f_u(\cdot ,\cdot ,0,\cdot )\Vert _{L^\infty } \right\} \end{aligned}$$

because \(v(t,x):=e^{at-(x-x_0)\cdot e}\) is a supersolution to (1.1) for any \((x_0,e)\in {\mathbb {R}}^d\times {\mathbb {S}}^{d-1}\). So if G is convex, then for each \(x\notin \overline{G}\) and any solution \(u_\varepsilon \) to (1.1) with \(u_\varepsilon (0,\cdot ,\omega )\le \chi _{B_{\varepsilon ^{-1} \rho (\varepsilon )} (\varepsilon ^{-1} (G+y_\varepsilon ))}\) we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \sup _{T\in [0, a^{-1}d(x,G)-\delta ]} u_\varepsilon (\varepsilon ^{-1} T, \varepsilon ^{-1}(x+y_\varepsilon ),\omega ) =0 \end{aligned}$$

for any \(\delta >0\). If G is not convex, a similar result is obtained by instead using supersolutions \(e^{a(t-t_0)}\sum _{i=1}^d (e^{(x-x_0)\cdot e_i} + e^{-(x-x_0)\cdot e_i})\), where \(\{e_1,\dots ,e_d\}\) is the standard basis in \({\mathbb {R}}^d\).
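For completeness, here is the elementary computation behind the supersolution claim (a routine check using only \(f(t,x,u,\omega )\le f_u(t,x,0,\omega )u\) and the definition of a): with \(v(t,x)=e^{at-(x-x_0)\cdot e}\) we have \(v_t=av\), \(v_{x_i}=-e_iv\), and \(v_{x_ix_j}=e_ie_jv\), so \(|e_ie_j|,|e_i|\le 1\) yields

$$\begin{aligned} {\mathcal L}_\omega v+f(t,x,v,\omega )\le \left( \sum _{i,j=1}^d \Vert A_{ij}\Vert _{L^\infty }+\sum _{i=1}^d \Vert b_{i}\Vert _{L^\infty }+\Vert f_u(\cdot ,\cdot ,0,\cdot )\Vert _{L^\infty }\right) v\le av=v_t \end{aligned}$$

wherever \(v\le 1\) (which is where comparison with solutions \(0\le u\le 1\) is needed), because a dominates the \(d^2+d+1\) terms in the parenthesis. The same computation applies, term by term, to the supersolutions used in the non-convex case above.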

Finally, if G is unbounded, then it clearly suffices to prove (3.26) locally uniformly on \(([0,M]\times B_M(0)){\setminus }\partial G^{\mathcal S}\) for any \(M>0\). The last argument above (together with Theorem 1.2 in [36]) shows that if we only consider \((T,x)\) in this set, we can replace G by \(G\cap B_{(2+a^{-1})M}(0)\) because the values of \(u_\varepsilon (0,\cdot ,\omega )\) at points outside \(B_{\varepsilon ^{-1}(2+a^{-1})M}(0)\) will have no effect on \(u_\varepsilon (\cdot ,\cdot ,\omega )\) on the set \([0,\varepsilon ^{-1}M]\times B_{\varepsilon ^{-1}M}(0)\) in the limit \(\varepsilon \rightarrow 0\). But since \(G\cap B_{(2+a^{-1})M}(0)\) is bounded, the argument in the bounded G case applies and yields Theorem 1.3(i) for unbounded G as well.

4 Proof of Theorem 1.3(ii)

The arguments from the start of the previous section (prior to Lemma 3.1) also apply here, and we will again consider (3.1) as well as \(u(t,x,\omega ;{t_0},{x_0})\), \(\Gamma _\theta (t,\omega ;{t_0},{x_0})\), and \(\tau ^{t_0}(x_0,x,\omega )\) as above. We will also again first assume that \(H:=(A,b,f)\) is space-time stationary, and denote \({\mathcal F}_t^\pm := {\mathcal F}_t^\pm (H)\). We now have the following analogs of Lemma 3.1 and Theorem 3.2.

Lemma 4.1

Assume the hypotheses of Lemma 3.1, but with \(\lim _{s\rightarrow \infty }\phi _H(s)=0\) for \(H:=(A,b,f)\) instead of \(\lim _{s\rightarrow \infty }s^\alpha \phi _H(s)=0\). Then for each \(e\in {\mathbb {S}}^{d-1}\), there is \(\bar{\tau }(e)\) satisfying (3.9) and (3.11) such that for each \((t_0,x_0)\in {\mathbb {R}}^{d+1}\) we have

$$\begin{aligned} \lim _{r\rightarrow \infty }\frac{\tau ^{t_0}(x_0,x_0+re,\cdot )}{r} =\lim _{r\rightarrow \infty }\frac{{\mathbb {E}}\left[ \tau ^{t_0}(x_0,x_0+re,\cdot )\right] }{r}=\bar{\tau }(e)\qquad \text { in probability}. \end{aligned}$$
(4.1)

Proof

The proof is the same as that of Lemma 3.1, now using the convergence in probability claim in Theorem 2.1. \(\square \)

Theorem 4.2

Under the hypotheses of Lemma 4.1, \({\mathcal S}\) from (3.12) with w from (3.13) is convex, and it is a strong Wulff shape for (1.1) in probability. The latter means that

$$\begin{aligned} \begin{aligned} \lim _{t\rightarrow \infty }{\mathbb {P}}\left[ (1-\delta ) s{\mathcal S}\subseteq \Gamma _{\theta }(s,\cdot ;0,x_0)-x_0\subseteq (1+\delta ) s{\mathcal S}\,\,\forall (s,x_0)\in [\delta t ,\delta ^{-1}t] \times B_{\Lambda t} \right] =1 \end{aligned} \end{aligned}$$
(4.2)

holds for each \(\delta ,\theta \in (0,1)\) and \(\Lambda \ge 0\).

Proof

Let \(e_1,e_2\in {\mathbb {S}}^{d-1}\) be arbitrary with \(e_2\ne -e_1\), and let \(e':=\frac{e_1+e_2}{|e_1+e_2|}\). From (3.5) and \(|e_1+e_2|e'=e_1+e_2\) we obtain for each \(r>0\) and \(\omega \in \Omega \),

$$\begin{aligned} \tau ^0(0,|e_1+e_2|re',\omega )\le \tau ^0(0,re_1,\omega )+\tau ^{\tau ^0(0,re_1,\omega )}(re_1,r(e_1+e_2),\omega ). \end{aligned}$$
(4.3)

Then (4.1) shows that for each \(\varepsilon >0\) and any large enough r, there is \(\omega _{r,\varepsilon }\in \Omega \) such that

$$\begin{aligned} \max \left\{ |\tau ^0(0,|e_1+e_2|re',\omega _{r,\varepsilon })-|e_1+e_2|r\bar{\tau }(e')|,\, |\tau ^0(0,re_1,\omega _{r,\varepsilon })- r\bar{\tau }(e_1)| \right\} \le r\varepsilon , \end{aligned}$$

as well as (using also (3.8))

$$\begin{aligned} \tau ^{\tau ^0(0,re_1,\omega _{r,\varepsilon })}(re_1,r(e_1+e_2),\omega _{r,\varepsilon })&\le \tau ^{r\bar{\tau }(e_1)+r\varepsilon }(re_1,r(e_1+e_2),\omega _{r,\varepsilon })+2r\varepsilon \\&\le r\bar{\tau }(e_2)+3r\varepsilon . \end{aligned}$$

From these and (4.3) we obtain

$$\begin{aligned} |e_1+e_2|\bar{\tau }(e')\le \bar{\tau }(e_1)+\bar{\tau }(e_2)+5\varepsilon , \end{aligned}$$

and after taking \(\varepsilon \rightarrow 0\) this becomes

$$\begin{aligned} w(e')\ge \frac{w(e_1)w(e_2)}{w(e_1)+w(e_2)}|e_1+e_2|. \end{aligned}$$

The angle bisector theorem now shows that \({\mathcal S}\) is convex.
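
For completeness, here is one way to unpack this last step. Let \(2\vartheta \in [0,\pi )\) be the angle between \(e_1\) and \(e_2\), so that \(|e_1+e_2|=2\cos \vartheta \) and \(e'\) bisects this angle, and let \(P_k:=w(e_k)e_k\in \partial {\mathcal S}\). The ray \(\{re'\,|\,r\ge 0\}\) meets the segment \([P_1,P_2]\) at a point Q, and the standard formula for the length of the angle bisector in the triangle with vertices \(0,P_1,P_2\) (a consequence of the angle bisector theorem) gives

$$\begin{aligned} |Q|=\frac{2w(e_1)w(e_2)\cos \vartheta }{w(e_1)+w(e_2)}=\frac{w(e_1)w(e_2)}{w(e_1)+w(e_2)}\,|e_1+e_2|\le w(e'), \end{aligned}$$

so \(Q\in {\mathcal S}\). Applying this to all pairs of boundary points and iterating the bisections (using that \({\mathcal S}\) is closed and star-shaped with respect to the origin) yields \([P_1,P_2]\subseteq {\mathcal S}\), and convexity of \({\mathcal S}\) follows because every chord of \({\mathcal S}\) lies in a triangle \(0P_1P_2\) of this form.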

It remains to prove (4.2). Let \(\varepsilon :=\frac{\delta ^2}{10M(1+M\delta )}\), let \(s_1,\ldots ,s_N\in [\delta ,\delta ^{-1}]\) be such that \([\delta ,\delta ^{-1}]\subseteq \bigcup _{j=1}^NB_\varepsilon (s_j)\), and let \(e_1,\ldots ,e_N\in {\mathcal S}\backslash \{0\}\) be such that \({\mathcal S}\subseteq \bigcup _{i=1}^NB_\varepsilon (e_i)\). Thus for any \(t>0\), and any \(s\in [\delta t, \delta ^{-1}t]\) and \(v\in s{\mathcal S}\), there are \(i,j\in \{1,\ldots ,N\}\) such that \(|s-ts_j|\le t\varepsilon \) and \(|v-se_i|\le s\varepsilon \le \delta ^{-1} t\varepsilon \). It follows from (3.7) and \(|e_i|\le M\) that

$$\begin{aligned} \left| \tau ^0(0,v,\omega )-\tau ^0(0,ts_je_i,\omega )\right| \le {M}(|v-ts_je_i|+1)\le M(\delta ^{-1} t\varepsilon +Mt\varepsilon +1). \end{aligned}$$
(4.4)

By Lemma 4.1, we also have

$$\begin{aligned} \lim _{t\rightarrow \infty } {\mathbb {P}}\left[ \left| \frac{{\tau ^0}(0,ts_je_i,\cdot )}{ts_j|e_i|}-\frac{{1}}{w({e_i}{|e_i|^{-1}})}\right| \ge \varepsilon \right] =0 \end{aligned}$$
(4.5)

for each \(i,j\in \{1,\ldots ,N\}\). From (4.4) and \(s_j|e_i|\le M\delta ^{-1} \) we obtain

$$\begin{aligned}&{\mathbb {P}}\left[ \sup _{v\in s{\mathcal S}}\tau ^0(0,v,\cdot )\ge (s+t\varepsilon )+ M\delta ^{-1}t\varepsilon +M(\delta ^{-1} t\varepsilon +Mt\varepsilon +1)\,\text { for some }s\in [\delta t, \delta ^{-1}t]\right] \\&\quad \le {\mathbb {P}}\left[ \max _{i,j\in \{1,\cdots ,N\}} \tau ^0(0,ts_je_i,\cdot )\ge ts_j+M\delta ^{-1} t\varepsilon \right] \\&\quad \le \sum _{i,j=1}^N{\mathbb {P}}\left[ \tau ^0(0,ts_je_i,\cdot )\ge ts_j+ ts_j|e_i|\varepsilon \right] , \end{aligned}$$

which converges to 0 as \(t\rightarrow \infty \) by (4.5) and \(|e_i|\le w(\frac{e_i}{|e_i|})\). This implies that

$$\begin{aligned} \lim _{t\rightarrow \infty }{\mathbb {P}}\left[ s{\mathcal S}\subseteq \Gamma _{1/2}(s+2M(\delta ^{-1}+M)t\varepsilon +M,\cdot )\,\,\forall s\in [\delta t, \delta ^{-1}t]\right] =1. \end{aligned}$$
(4.6)
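
The passage from the last display to (4.6) is elementary arithmetic using \(M\ge 1\): the threshold appearing there equals

$$\begin{aligned} (s+t\varepsilon )+ M\delta ^{-1}t\varepsilon +M(\delta ^{-1} t\varepsilon +Mt\varepsilon +1)=s+(1+2M\delta ^{-1}+M^2)t\varepsilon +M\le s+2M(\delta ^{-1}+M)t\varepsilon +M, \end{aligned}$$

so on the complement of that event, every \(v\in s{\mathcal S}\) satisfies \(\tau ^0(0,v,\cdot )\le s+2M(\delta ^{-1}+M)t\varepsilon +M\) for each \(s\in [\delta t,\delta ^{-1}t]\), which is precisely the event in (4.6).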

Next, let \(e_1',\ldots ,e_{N'}'\in {\mathbb {R}}^d\backslash {\mathcal S}\) be such that \(B_{M}(0)\backslash {\mathcal S}\subseteq \bigcup _{i=1}^{N'} B_\varepsilon (e_i') \). It is clear that (3.19) still holds, as does (4.5) with \(e_i'\) and \(N'\) in place of \(e_i\) and N. For any \(t\ge M\delta ^{-1}\), and any \(s\in [\delta t, \delta ^{-1}t]\) and \(v\in B_{{M}s}(0)\backslash s{\mathcal S}\), there are \(i\in \{1,\ldots ,N'\}\) and \(j\in \{1,\ldots ,N\}\) such that (4.4) holds with \(e_i'\) in place of \(e_i\). This, together with (3.19) and \(s_j|e_i'|\le M\delta ^{-1}\), yields

$$ \begin{aligned}&{\mathbb {P}}\left[ \inf _{v\in ( s{\mathcal S})^c}\tau ^0(0,v,\cdot )\le s-t\varepsilon - M\delta ^{-1}t\varepsilon -{M}(\delta ^{-1} t\varepsilon +Mt\varepsilon +1)\,\text { for some }s\in [\delta t, \delta ^{-1}t]\right] \\&\quad \le {\mathbb {P}}\left[ \min _{i\in \{1,\cdots ,N'\}\, \& \, j\in \{1,\cdots ,N\}} \tau ^0(0,ts_j e_i',\cdot )\le ts_j-M\delta ^{-1} t\varepsilon \right] \\&\quad \le \sum _{i=1}^{N'}\sum _{j=1 }^N {\mathbb {P}}\left[ \tau ^0(0,ts_je_i',\cdot )\le ts_j-ts_j|e_i'|\varepsilon \right] , \end{aligned}$$

which converges to 0 as \(t\rightarrow \infty \) by (4.5) and \(|e_i'|\ge w(\frac{e_i'}{|e_i'|})\). Therefore we get

$$\begin{aligned} \lim _{t\rightarrow \infty }{\mathbb {P}}\left[ \Gamma _{1/2}(s-2M(\delta ^{-1}+M)t\varepsilon -M,\cdot )\subseteq s{\mathcal S}\,\, \forall s\in [\delta t, \delta ^{-1}t]\right] =1. \end{aligned}$$

Since \(\varepsilon <\frac{\delta }{2M(\delta ^{-1}+M)}\), this and (4.6) yield (4.2) with \((\theta ,\Lambda )=(\frac{1}{2},0)\).

Let us now extend this to the general case. Fix any \(\delta \in (0,1)\) and again let \(\varepsilon :=\frac{\delta ^2}{10M(1+M\delta )}\). Stationarity of \((A,b,f)\) and (4.2) with \((\theta ,\Lambda )=(\frac{1}{2},0)\) show that for each \({\sigma }\in (0,1)\), there is \(C_{{\sigma }}\ge \frac{1}{\varepsilon }\) such that for any \((t_0,z)\in {\mathbb {R}}^{d+1}\) and \(t\ge C_{{\sigma }}\),

$$\begin{aligned} {\mathbb {P}}\left[ (1-\delta ) s{\mathcal S}\subseteq \Gamma _{1/2}(t_0+s,\cdot ;t_0,z)-z\subseteq (1+\delta ) s{\mathcal S}\,\,\forall s\in [2^{-1}\delta t,2\delta ^{-1} t]\right] \ge 1-{\sigma }. \end{aligned}$$
(4.7)

Fix any \(\Lambda \ge 0\) and let \(y_1,\ldots ,y_{N''}\in B_\Lambda (0)\) be such that \(B_\Lambda (0)\subseteq \bigcup _{i=1}^{N''} B_\varepsilon (y_i)\). For each \(t\ge C_{{\sigma }}\) and \(x_0\in B_{\Lambda t}(0)\), there is \(i\in \{1,\ldots ,N''\}\) such that \(|x_0-ty_i|\le t\varepsilon \). Then (3.4) yields for all \(s\ge M(2t\varepsilon +1)\),

$$\begin{aligned} \begin{aligned} \Gamma _{1/2}(s-M(2t\varepsilon +1),\cdot ;M(t\varepsilon +1),ty_i)-ty_i&\subseteq \Gamma _{1/2}(s,\cdot ;0,x_0)-x_0\\&\subseteq \Gamma _{1/2}(s+M(2t\varepsilon +1),\cdot ;-M(t\varepsilon +1),ty_i)-ty_i \end{aligned} \end{aligned}$$
(4.8)

(using (3.4) twice with \(s':=Mt\varepsilon \ge M\)). Since \(\varepsilon \le \frac{\delta ^2}{10M}\) and \(t\ge \frac{1}{\varepsilon }\), for any \(s\ge \delta t\) we have

$$\begin{aligned} M(3t\varepsilon +2)\le 5Mt\varepsilon \le \frac{\delta ^2 t}{2} \le \frac{\delta s}{2}, \end{aligned}$$
(4.9)

and therefore

$$\begin{aligned} (1-2\delta ) s{\mathcal S}\subseteq (1-\delta )(s-M(3t\varepsilon +2)){\mathcal S}\end{aligned}$$
(4.10)

and

$$\begin{aligned} (1+\delta )(s+M(3t\varepsilon +2)){\mathcal S}\subseteq (1+2\delta ) s{\mathcal S}. \end{aligned}$$
(4.11)
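
Indeed, since \({\mathcal S}\) is convex and contains the origin, we have \(r{\mathcal S}\subseteq r'{\mathcal S}\) whenever \(0\le r\le r'\), so (4.10) and (4.11) follow from (4.9) and \(\delta \in (0,1)\) via

$$\begin{aligned} (1-\delta )(s-M(3t\varepsilon +2))&\ge (1-\delta )\left( 1-\frac{\delta }{2}\right) s\ge (1-2\delta )s,\\ (1+\delta )(s+M(3t\varepsilon +2))&\le (1+\delta )\left( 1+\frac{\delta }{2}\right) s\le (1+2\delta )s. \end{aligned}$$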

Since also \(M(2t\varepsilon +1)\le \delta t\le s\), (4.8)–(4.11) imply

$$\begin{aligned}&{\mathbb {P}}\big [(1-2\delta ) s{\mathcal S}\not \subseteq \Gamma _{1/2}(s,\cdot ;0,x_0)-x_0 \text { for some }(s,x_0)\in [\delta t, \delta ^{-1}t]\times B_{\Lambda t}(0) \big ]\\&\quad \le \sum _{i=1}^{N''} {\mathbb {P}}\big [(1-\delta ) (s-M(3t\varepsilon +2)) {\mathcal S}\not \subseteq \Gamma _{1/2}(s-M(2t\varepsilon +1),\cdot ;M(t\varepsilon +1),ty_i)-ty_i\\&\qquad \qquad \qquad \text { for some }s\in [\delta t, \delta ^{-1}t]\big ] \end{aligned}$$

and

$$\begin{aligned}&{\mathbb {P}}\big [ \Gamma _{1/2}(s,\cdot ;0,x_0)-x_0\not \subseteq (1+2\delta ) s{\mathcal S}\text { for some }(s,x_0)\in [\delta t, \delta ^{-1}t]\times B_{\Lambda t}(0) \big ]\\&\quad \le \sum _{i=1}^{N''} {\mathbb {P}}\big [ \Gamma _{1/2}(s+M(2t\varepsilon +1),\cdot ;-M(t\varepsilon +1),ty_i)-ty_i \not \subseteq (1+\delta ) (s+M(3t\varepsilon +2)) {\mathcal S}\\&\qquad \qquad \qquad \text { for some }s\in [\delta t, \delta ^{-1}t]\big ]. \end{aligned}$$

Both right-hand sides are \(\le {\sigma }\), due to (4.7) with \((s,t_0,z)=(s\mp M(3t\varepsilon +2),\pm M(t\varepsilon +1),ty_i)\) (note that (4.9) shows that \(s\pm M(3t\varepsilon +2)\in [2^{-1}\delta t,2\delta ^{-1}t]\) when \(s\in [\delta t, \delta ^{-1}t]\)). Thus, after taking \({\sigma }\rightarrow 0\) and then replacing \(\delta \) by \(\frac{\delta }{2}\), we obtain (4.2) with \(\theta =\frac{1}{2}\). The general case follows from the parabolic Harnack inequality and the hair-trigger effect as in the proof of Theorem 3.2. \(\square \)

We now note that it suffices to prove Theorem 1.3(ii) with \(y_\varepsilon =0\) for all \(\varepsilon \in (0,1)\) because we assume space-time stationarity of H. Recall that for now we assume that \(H=(A,b,f)\), although this claim also holds in the general case.

Let us first assume that f equals \(f'\) from (3.24), and that G is bounded. We may then assume without loss of generality that \(\delta \in (0,1)\) is such that \(G\subseteq B_{\delta ^{-1}}(0)\). Note that the “unscaled” version of (1.11) (with \(y_\varepsilon =0\)) is

$$\begin{aligned} \theta \chi _{(\varepsilon ^{-1}G)^0_{\varepsilon ^{-1}\rho (\varepsilon )}}\le u_\varepsilon (0,\cdot ,\omega )\le \chi _{B_{\varepsilon ^{-1}\rho (\varepsilon )}(\varepsilon ^{-1}G)} \end{aligned}$$
(4.12)

for \(u_\varepsilon (\cdot ,\cdot ,\omega ):=u^\varepsilon (\varepsilon \,\cdot ,\varepsilon \,\cdot ,\omega )\), which solves (1.1). Let us define

$$\begin{aligned} \Gamma _{\theta ',\varepsilon }(t,\omega ):=\left\{ x\in {\mathbb {R}}^d\,|\, u_\varepsilon (t,x,\omega )\ge \theta '\right\} \end{aligned}$$

for each \(\theta '\in (0,1)\). Therefore (1.13) will follow if we prove

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}{\mathbb {P}}\left[ \varepsilon ^{-1}G+\varepsilon ^{-1}(1-\delta )t{\mathcal S}\subseteq \Gamma _{\theta ',\varepsilon }(\varepsilon ^{-1}t,\cdot ) \subseteq \varepsilon ^{-1}G +\varepsilon ^{-1}(1+\delta )t{\mathcal S}\,\,\forall t\in [\delta ,\delta ^{-1}]\right] =1. \end{aligned}$$
(4.13)

The hair-trigger effect, (3.2), (4.12), and the comparison principle imply that there is \(M'\ge M\) such that with \(s_\varepsilon :=M\varepsilon ^{-1}\rho (\varepsilon )+M'\) we have \(B_{1}(\varepsilon ^{-1}G)\subseteq \Gamma _{1/2,\varepsilon }(s_\varepsilon ,\omega )\) for all \((\varepsilon ,\omega )\in (0,1)\times \Omega \). Then \(u(0,\cdot ,\omega ;c_\varepsilon , \varepsilon ^{-1}x_0) \le u_\varepsilon (s_\varepsilon ,\cdot ,\omega )\) for any \(x_0\in G\), so the comparison principle yields for all \((t,x_0)\in {\mathbb {R}}^+\times G\),

$$\begin{aligned} \Gamma _{\theta '}(\varepsilon ^{-1}t,\omega ; c_\varepsilon ,\varepsilon ^{-1}x_0)\subseteq \Gamma _{\theta ',\varepsilon }(\varepsilon ^{-1}t+s_\varepsilon ,\omega ). \end{aligned}$$
(4.14)

Since space-time stationarity of H and Theorem 4.2 with \((\frac{\delta }{2}, \theta ',\frac{1}{\delta })\) in place of \((\delta ,\theta ,\Lambda )\) yield

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}{\mathbb {P}}\left[ \varepsilon ^{-1}G+\varepsilon ^{-1}(1- 2^{-1}\delta ) t{\mathcal S}\subseteq \bigcup _{x_0\in G}\Gamma _{\theta '}(\varepsilon ^{-1}t,\cdot ;c_\varepsilon ,\varepsilon ^{-1}x_0)\,\,\forall t\in \big [2^{-1}\delta ,2\delta ^{-1}\big ]\right] =1, \end{aligned}$$

(4.14) and \(\lim _{\varepsilon \rightarrow 0}\varepsilon c_\varepsilon =0\) show that the probability of just the first inclusion in (4.13) holding for all \(t\in [\delta ,\delta ^{-1}]\) converges to 1 as \(\varepsilon \rightarrow 0\).

Next, from Theorem 1.2 in [36], and Remarks 2 and 3 following it, we see that for each \(\delta >0\), there is \(C_{\delta }>0\) such that for each \((\varepsilon ,\omega )\in (0,1)\times \Omega \) and \(t\ge C_\delta \) we get from (4.12) that

$$\begin{aligned} u_\varepsilon (t,\cdot ,\omega ) \le \delta + \sup _{ z\in B_{\varepsilon ^{-1}\rho (\varepsilon )}(\varepsilon ^{-1}G) }u_z \left( (1+ 4^{-1}\delta )t,\cdot ,\omega \right) , \end{aligned}$$

where \(u_z\) is a solution to (1.1) with initial data \(u_\varepsilon (0,\cdot ,\omega )\chi _{B_1(z)}\) (unit cubes were used in [36] instead of unit balls, but simple scaling shows that these can be replaced by cubes of side-length \(\frac{1}{d}\), which are contained in the corresponding unit balls; recall also that we now have \(f=f'\)). This shows that for any \((\theta ',\omega )\in (0,1)\times \Omega \) and \(t\ge C_\delta \) we have

$$\begin{aligned} \Gamma _{\theta ',\varepsilon }(t,\omega )\subseteq \bigcup _{z\in B_{\varepsilon ^{-1}\rho (\varepsilon )}(\varepsilon ^{-1}G) }\Gamma _{\theta '-\delta }^z((1+4^{-1}\delta )t,\omega ), \end{aligned}$$
(4.15)

where \(\Gamma _{\theta '}^z(t,\omega ):=\{x\in {\mathbb {R}}^d\,|\, u_z(t,x,\omega )\ge \theta '\}\). Since \(\frac{1}{2}\min \{u,1-u\}\le \min \{\frac{1}{2} u,1-\frac{1}{2} u\}\), we see that \(\frac{1}{2} u_z\) is a subsolution to (1.1) and \(\frac{1}{2}u_z(0,\cdot ,\omega )\le \frac{1}{2}\chi _{B_1(z)}\). The comparison principle then shows that

$$\begin{aligned} \Gamma _{\theta '-\delta }^z \left( (1+4^{-1}\delta )t,\omega \right) \subseteq \Gamma _{(\theta '-\delta )/2} \left( (1+4^{-1}\delta )t,\omega ;0,z \right) . \end{aligned}$$
(4.16)
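
To unpack the subsolution claim behind (4.16): if, as the displayed inequality suggests (we assume this form here purely for illustration, since (3.24) is not restated in this section), \(f'(t,x,u,\omega )=g(t,x,\omega )\min \{u,1-u\}\) for some bounded \(g\ge 0\), then \(w:=\frac{1}{2} u_z\) satisfies

$$\begin{aligned} w_t-{\mathcal L}_\omega w=\tfrac{1}{2} f'(t,x,u_z,\omega )=g(t,x,\omega )\,\tfrac{1}{2}\min \{u_z,1-u_z\}\le g(t,x,\omega )\min \left\{ \tfrac{1}{2} u_z,1-\tfrac{1}{2} u_z\right\} =f'(t,x,w,\omega ), \end{aligned}$$

so w is indeed a subsolution of (1.1) with \(f=f'\); together with \(w(0,\cdot ,\omega )\le \frac{1}{2}\chi _{B_1(z)}\), the comparison principle then gives (4.16).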

Hence (4.15), (4.16), and Theorem 4.2 with \((\frac{\delta }{2}, \frac{\theta '-\delta }{2}, \frac{2}{\delta })\) in place of \((\delta ,\theta ,\Lambda )\) yield for any \(\delta <\theta '\),

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}{\mathbb {P}}\left[ \Gamma _{\theta ',\varepsilon }(t,\cdot ) \subseteq \varepsilon ^{-1}B_{\rho (\varepsilon )}(G)+\varepsilon ^{-1}(1+2^{-1}\delta )(1+4^{-1}\delta )t{\mathcal S}\,\,\forall t\in [\delta ,\delta ^{-1}] \right] =1. \end{aligned}$$

But this, \((1+2^{-1}\delta )(1+4^{-1}\delta )<1+\delta \), and \(\lim _{\varepsilon \rightarrow 0} \rho (\varepsilon )=0\) show that the probability of just the second inclusion in (4.13) holding for all \(t\in [\delta ,\delta ^{-1}]\) converges to 1 as \(\varepsilon \rightarrow 0\). Therefore we proved (4.13) and hence Theorem 1.3(ii) for all bounded G when \(f=f'\).

This now extends to general f because Theorem 1.2 in [36] shows that for any \(\delta >0\) and all large enough t we have

$$\begin{aligned} \pm \big [ u(t,\cdot ,\omega ;0,0) - u' \left( (1\pm \delta )t,\cdot ,\omega ;0,0\right) \big ] \le \delta , \end{aligned}$$

where \(u'(\cdot ,\cdot ,\omega ;0,z)\) solves (1.1) with \(f'\) in place of f and \(u'(0,\cdot ,\omega ;0,z):=\frac{1}{2}\chi _{B_1(z)}\). This also shows that f and \(f'\) have the same \({\mathcal S}\), which thus only depends on \(H:=(A,b,f_u(\cdot ,\cdot ,0,\cdot ))\).

Finally, the extension to unbounded G is obtained as in the proof of Theorem 1.4 in [37] (this uses that, similarly to (3.2), perturbations to initial data propagate with speeds \(\le M\)). Hence the proof of Theorem 1.3(ii) is finished.

Remark

To prove Remark 6 after Theorem 1.3, one can use (2.2). This yields Lemma 3.1 with (3.10) replaced by

$$\begin{aligned} \liminf _{r\rightarrow \infty }\frac{\tau ^{t_0}({x_0,x_0+re},\omega )}{r} \ge \lim _{r\rightarrow \infty }\frac{{\mathbb {E}}\left[ \tau ^{t_0}({x_0,x_0+re},\cdot )\right] }{r}=\bar{\tau }(e), \end{aligned}$$

which then implies the second claim in (3.14) as in the proof of Theorem 3.2. The argument at the end of Sect. 3 then proves the remark.

5 Proof of Theorem 1.5

Let us fix any \(\omega \in \Omega \). Theorem 7.2 in [12] shows that under our hypotheses on \((c,v)\),

$$\begin{aligned} u(t,x,\omega ):=\sup _{y:\, x\in \Gamma (t,\omega ;0,y)} u(0,y,\omega ) \end{aligned}$$
(5.1)

is a viscosity solution to (1.3). We claim that \(u(\cdot ,\cdot ,\omega )\) is uniformly continuous on \([0,T]\times {\mathbb {R}}^d\) for each \(T>0\) whenever \(u(0,\cdot ,\omega )\) is uniformly continuous. Indeed, let \((t,x,y)\in {\mathbb {R}}^{2d+1}\) be such that \((t,x)\) is \(\omega \)-reachable from (0, y). Then there are \(\alpha :[0,t]\rightarrow \overline{B_1(0)}\subseteq {\mathbb {R}}^d\) and an absolutely continuous path \(\gamma :[0,t]\rightarrow {\mathbb {R}}^d\) such that \(\gamma (0)=y\), \(\gamma (t)=x\), and

$$\begin{aligned} \gamma '(s)=v(s,\gamma (s),\omega )+c(s,\gamma (s),\omega ) \alpha (s) \end{aligned}$$

for a.e. \(s\in [0,t]\). Pick any \((\tau ,z)\in {\mathbb {R}}^{d+1}\), and extend \(\alpha \) to \((t,\tau ]\) by 0 if \(\tau >t\). Then define \(\beta :[0,\tau ]\rightarrow {\mathbb {R}}^d\) via the terminal value problem \(\beta (\tau )=z\) and

$$\begin{aligned} \beta '(s)=v(s,\beta (s),\omega )+c(s,\beta (s),\omega )\alpha (s) \end{aligned}$$

for a.e. \(s\in [0,\tau ]\). Let \(\tau ':=\min \{t,\tau \}\) and \(y':=\beta (0)\). The two ODEs then yield

$$\begin{aligned} |y-y'|\le e^{C\tau '}(|x-z|+C|t-\tau |), \end{aligned}$$
(5.2)

where

$$\begin{aligned} C:=\Vert v\Vert _{L^\infty } + \Vert c\Vert _{L^\infty } + \Vert \nabla _xv\Vert _{L^\infty }+\Vert \nabla _x c\Vert _{L^\infty }. \end{aligned}$$

It is clear that \((\tau ,z)\) is \(\omega \)-reachable from \((0,y')\), so uniform continuity of \(u(\cdot ,\cdot ,\omega )\) on \([0,T]\times {\mathbb {R}}^d\) for each \(T>0\) follows from uniform continuity of \(u(0,\cdot ,\omega )\). Then by, e.g. Exercise 3.9 in [2], we obtain that there is a unique uniformly continuous viscosity solution to (1.3) for any uniformly continuous initial data.
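
For the reader's convenience, here is a sketch of the estimate behind (5.2), assuming (as reflected in the definition of C above) that C dominates the sup-norms of v and c as well as of their spatial gradients. On the time interval between \(\tau '\) and \(\max \{t,\tau \}\), whichever of the two curves is still defined has speed at most \(\Vert v\Vert _{L^\infty }+\Vert c\Vert _{L^\infty }\le C\), so \(|\gamma (\tau ')-\beta (\tau ')|\le |x-z|+C|t-\tau |\). On \([0,\tau ']\) the two curves are driven by the same control \(\alpha \), hence

$$\begin{aligned} |(\gamma -\beta )'(s)|\le \left( \Vert \nabla _xv\Vert _{L^\infty }+\Vert \nabla _x c\Vert _{L^\infty }\right) |\gamma (s)-\beta (s)|\le C|\gamma (s)-\beta (s)| \end{aligned}$$

for a.e. \(s\in [0,\tau ']\), and Grönwall's inequality applied backward in time from \(\tau '\) yields \(|y-y'|=|\gamma (0)-\beta (0)|\le e^{C\tau '}|\gamma (\tau ')-\beta (\tau ')|\le e^{C\tau '}(|x-z|+C|t-\tau |)\), which is (5.2).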

Let us now prove (i). Since \(\nabla _x\cdot v(t,\cdot ,\omega )=0\) a.e. for all \((t,\omega )\in {\mathbb {R}}\times \Omega \) and we have (1.18), it follows from Corollary 1.3 in [6] that there is \(M\ge 1\) such that for any \((t_0,x_0,\omega )\in {\mathbb {R}}^{d+1}\times \Omega \),

$$\begin{aligned} B_{M^{-1}t}(x_0)\subseteq \Gamma (t,\omega ;t_0,x_0) \qquad \text { for all}\, t\ge M, \end{aligned}$$
(5.3)

with \(\Gamma \) from (1.15). (We note that that result yields (5.3) when \(c(t,x,\omega )\) in (1.3) is replaced by the constant \(\inf _{(t,x,\omega )\in {\mathbb {R}}^{d+1}\times \Omega }c(t,x,\omega )\), and then (5.3) for our c follows from the comparison principle.) Since v and c are bounded, after possibly increasing M we also obtain

$$\begin{aligned} \Gamma (t,\omega ;t_0,x_0)\subseteq B_{{M}t}(x_0)\qquad \text { for all}\, t\ge 0. \end{aligned}$$
(5.4)

Thus (3.2) holds for all \((t_0,x_0,\omega )\in {\mathbb {R}}^{d+1}\times \Omega \) with \(\Gamma \) in place of \(\Gamma _{1/2}\). We also define the arrival time to a point \(x\in {\mathbb {R}}^d\), when starting at \((t_0,x_0)\in {\mathbb {R}}^{d+1}\), by

$$\begin{aligned} \tau ^{t_0}(x_0,x,\omega ):=\inf \{t\ge 0\,|\,x\in \Gamma (t,\omega ;t_0,x_0)\}. \end{aligned}$$
(5.5)

The same arguments as in Sect. 3 yield (3.3) and (3.5)–(3.8), with \(\Gamma \) in place of \(\Gamma _{1/2}\). Moreover, (1.15) shows that if \(x'\in \Gamma (t,\omega ;t_0,x_0)\), then \(\Gamma (s',\omega ;t_0+t,x')\subseteq \Gamma (t+s',\omega ;t_0,x_0)\) for any \(s'\ge 0\). This with \(t+s\) in place of t, together with (3.3) and (5.3) with \(s'\) in place of t, now yields (3.4) with \(\Gamma \) in place of \(\Gamma _{1/2}\).

All this shows that if \(H:=(c,v)\) is space-time stationary and \(\lim _{s\rightarrow \infty }s^\alpha \phi _H(s)=0\) for some \(\alpha >0\), then Lemma 3.1 holds with \(\tau ^{t_0}\) from (5.5) and with the same proof. So we can again define \(\bar{\tau }\), w, and \({\mathcal S}\) via (3.10), (3.13), and (3.12). The proof of Theorem 3.2 also extends to this setting, with \(\Gamma \) in place of \(\Gamma _{1/2}\) (and without the sentence after (3.20)). We thus obtain for any \(\delta \in (0,1)\) and \(\Lambda \ge 1\) some \(\Omega _{\Lambda ,\delta }\subseteq \Omega \) with \({\mathbb {P}}[\Omega _{\Lambda ,\delta }]=1\) such that

$$\begin{aligned} x+(1-\delta ) t{\mathcal S}\subseteq \Gamma (t,\omega ;0,x) \subseteq x+(1+\delta ) t{\mathcal S}\end{aligned}$$
(5.6)

holds for each \((x,\omega )\in B_{\Lambda t}(0)\times \Omega _{\Lambda ,\delta }\) whenever t is sufficiently large, depending on \(\omega ,\delta ,\Lambda \) (this is just (3.20) above). Note that we need neither the parabolic Harnack inequality nor the hair-trigger effect here due to (1.15).

Fix any \(R>0\). From (1.16) we obtain

$$\begin{aligned} u^\varepsilon (t,x+y_\varepsilon ,\omega )=\sup _{y:\, x+y_\varepsilon \in \varepsilon \Gamma (\varepsilon ^{-1}t,\omega ;0,\varepsilon ^{-1}y)} u^\varepsilon (0,y,\omega ). \end{aligned}$$
(5.7)

This, (1.19), (1.17), (5.6) with \(\Lambda +R+M\) in place of \(\Lambda \), and (5.4) yield that for each \(\omega \in \Omega ':=\bigcap _{n\in {\mathbb {N}}}\Omega _{n,1/n}\) (so \({\mathbb {P}}[\Omega ']=1\)) and \(x\in B_R(0)\) we have

$$\begin{aligned} \bar{u}((1-\delta )t,x)-\rho (\varepsilon )\le u^\varepsilon (t,x+y_\varepsilon ,\omega )\le \bar{u}((1+\delta )t,x)+\rho (\varepsilon ) \end{aligned}$$
(5.8)

for any \(\delta >0\) and \(t\in [\frac{1}{R},R]\) whenever \(\varepsilon \) is small enough (depending on \(\delta ,R,\Lambda ,\omega \)). If \(\varphi \) is a modulus of continuity for \(u_0\) (with \(\lim _{r\rightarrow 0}\varphi (r)=0\)), from (1.17), (5.4), and \({\mathcal S}\subseteq B_M(0)\), we also have

$$\begin{aligned} |\bar{u}(t,x)-\bar{u}(t',x')|\le \varphi (|x-x'|+M|t-t'|) \end{aligned}$$
(5.9)

for any \((t,t',x,x',\omega )\in [0,\infty )^2\times {\mathbb {R}}^{2d}\times \Omega \). This and (5.8) show that \(u^\varepsilon (\cdot ,\cdot +y_\varepsilon ,\omega )\rightarrow \bar{u}\) locally uniformly on \({\mathbb {R}}^+\times {\mathbb {R}}^d\) as \(\varepsilon \rightarrow 0\) (for each \(\omega \in \Omega '\)). This then easily extends to locally uniform convergence on \([0,\infty ) \times {\mathbb {R}}^d\) (i.e., up to time 0), using (5.4) with \(t_0=0\), \({\mathcal S}\subseteq B_M(0)\), (1.16), (1.17), (1.19), and \(\lim _{r\rightarrow 0}\varphi (r)=\lim _{\varepsilon \rightarrow 0}\rho (\varepsilon )=0\). This finishes the proof of (i).

Let us now turn to (ii). As in the proof of Theorem 1.3(ii), it suffices to consider \(y_\varepsilon =0\). Since we have (3.2)–(3.8) with \(\Gamma \) in place of \(\Gamma _{1/2}\), the proofs of Lemma 4.1 and Theorem 4.2 with \(\Gamma \) in place of \(\Gamma _{\theta }\) extend to the present setting (again with no need for the parabolic Harnack inequality or the hair-trigger effect). Hence, for any \(R>0\) and \(\delta \in (0,1)\) we have

$$\begin{aligned} \lim _{t\rightarrow \infty }{\mathbb {P}}\left[ (1-\delta ) s{\mathcal S}\subseteq \Gamma (s,\cdot ;0,x_0)-x_0\subseteq (1+\delta ) s{\mathcal S}\,\,\forall (s,x_0)\in [R^{-1} t,R t]\times B_{R t}\right] =1. \end{aligned}$$
(5.10)

But then the argument proving (5.8) again applies, so after using (5.10) and (5.9) we obtain

$$\begin{aligned} \begin{aligned} \lim _{\varepsilon \rightarrow 0}{\mathbb {P}}\big [ |u^\varepsilon (t,x,\omega )-\bar{u}(t,x)|\le \varphi (M\delta R)+\rho (\varepsilon )\,\,\forall (t,x)\in [R^{-1},R]\times B_{R}\big ]=1. \end{aligned} \end{aligned}$$

The result now follows by taking \(\delta :=R^{-2}\) and letting \(\varepsilon \rightarrow 0\), although with \((t,x)\) ranging over \([\delta ,\delta ^{-1}]\times B_{\delta ^{-1}}\) inside the probability. The extension up to time 0 is the same as in part (i).