Abstract
We consider the linear stochastic heat equation on \(\mathbb {R}^\ell \), driven by a Gaussian noise which is colored in time and space. The spatial covariance satisfies general assumptions and includes examples such as the Riesz kernel in any dimension and the covariance of the fractional Brownian motion with Hurst parameter \(H\in (\frac{1}{4}, \frac{1}{2}]\) in dimension one. First we establish the existence of a unique mild solution and we derive a Feynman-Kac formula for its moments using a family of independent Brownian bridges and assuming a general integrability condition on the initial data. In the second part of the paper we compute Lyapunov exponents and lower and upper exponential growth indices in terms of a variational quantity.
1 Introduction
The purpose of this paper is to study the stochastic heat equation
where \(t\ge 0\), \(x\in \mathbb {R}^\ell \) \((\ell \ge 1)\) and W is a centered Gaussian field, which is correlated in both temporal and spatial variables. We assume that the noise W is described by a centered Gaussian family \(W=\{ W(\phi ), \phi \in \mathcal {S}(\mathbb {R}_+\times \mathbb {R}^\ell )\}\), with covariance
where \(\gamma _0\) is a nonnegative and nonnegative definite locally integrable function, \(\mu \) is a tempered measure and \(\mathcal {F}\) denotes the Fourier transform in the spatial variables. Throughout the paper, we denote by \(|\cdot |\) the Euclidean norm in \(\mathbb {R}^\ell \) and by \(x\cdot y\) the usual inner product between two vectors x, y in \(\mathbb {R}^\ell \). We are going to consider two types of spatial covariances:
- (H.1) \(\ell =1\), the spectral measure \(\mu \) is absolutely continuous with respect to the Lebesgue measure on \(\mathbb {R}\) with density f, that is \(\mu (d \xi )=f(\xi )d \xi \), and f satisfies:
  - (a) For all \(\xi ,\eta \) in \(\mathbb {R}\) and for some constant \(\kappa _0>0\),
$$\begin{aligned} f(\xi +\eta )\le \kappa _0 (f(\xi )+f(\eta )). \end{aligned}$$(1.3)
  - (b)
$$\begin{aligned} \int _\mathbb {R}\frac{f^2(\xi )}{1+|\xi |^2}d \xi <\infty \,. \end{aligned}$$(1.4)
To state the second type of covariance, we recall that the space of Schwartz functions is denoted by \({\mathcal {S}}(\mathbb {R}^\ell )\). The Fourier transform of a function \(u \in {\mathcal {S}}(\mathbb {R}^\ell )\) is defined with the normalization
so that the inverse Fourier transform is given by \(\mathcal {F}^{- 1} u ( \xi ) = ( 2 \pi )^{- \ell } \mathcal {F}u ( - \xi )\).
- (H.2) The inverse Fourier transform of \(\mu \) is a nonnegative locally integrable function (or generalized function) denoted by \(\gamma \):
$$\begin{aligned} \gamma (x)=\frac{1}{(2 \pi )^\ell }\int _{\mathbb {R}^\ell } e^{i \xi \cdot x}\mu (d \xi )\,, \end{aligned}$$(1.5)and \(\mu \) satisfies Dalang’s condition
$$\begin{aligned} \int _{ \mathbb {R}^\ell }\frac{\mu (d\xi ) }{1+ |\xi |^2} <\infty . \end{aligned}$$(1.6)
For the case (H.2), \(\gamma \) is nonnegative definite and (1.2) can be written as
Examples of covariance functions satisfying condition (H.2) are the Riesz kernel \(\gamma (x)=|x|^{-\eta }\), with \(0<\eta <2\wedge \ell \), the space-time white noise in dimension one, where \(\gamma =\delta _0\), and the multidimensional fractional Brownian motion, where \(\gamma (x)= \prod _{i=1}^\ell H_i (2H_i-1) |x_i|^{2H_i-2}\), assuming that \(\sum _{i=1}^\ell H_i >\ell -1\) and \(H_i \in (\frac{1}{2},1)\) for \(i=1,\dots , \ell \).
In the case (H.1), the inverse Fourier transform of \(\mu \) is at best a distribution and the expression (1.5) is only formal. The right-hand side of (1.7), however, makes sense via the pairing between Schwartz functions and distributions. For convenience, we will refer to \(\gamma \) as a generalized covariance function if its Fourier transform is a (nonnegative) tempered measure. It is also worth noting that, by Jensen’s inequality, (1.4) implies Dalang’s condition (1.6). The basic example of a noise satisfying (H.1) is the rough fractional noise, whose spectral density is given by \(f(\xi )=c_H |\xi |^{1-2H}\), with \(H \in (\frac{1}{4}, \frac{1}{2}]\) and \(c_H= \Gamma (2H+1) \sin (\pi H)\). In this case, the noise W has the covariance of a fractional Brownian motion with Hurst parameter \(H \in (\frac{1}{4}, \frac{1}{2}]\) in the spatial variable. Condition (a) holds with \(\kappa _0=1\) and condition (b) holds because of the restriction \(H>\frac{1}{4}\).
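As a numerical sanity check (an illustrative sketch, not part of the argument), condition (a) for the rough fractional density follows from \(|\xi +\eta |^p\le |\xi |^p+|\eta |^p\) for \(p=1-2H\in [0,1]\); the constant \(c_H\) is set to 1 below since it cancels in (1.3):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(xi, H):
    # spectral density of the rough fractional noise, with c_H normalized to 1
    return np.abs(xi) ** (1.0 - 2.0 * H)

xi, eta = rng.normal(size=100_000), rng.normal(size=100_000)
for H in (0.3, 0.4, 0.5):
    # condition (a) with kappa_0 = 1: f(xi + eta) <= f(xi) + f(eta)
    assert np.all(f(xi + eta, H) <= f(xi, H) + f(eta, H) + 1e-12)
```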
These types of spatial covariances were introduced in our paper [10], where the noise is white in time. In [10] we proved the existence of a unique mild solution formulated using Itô-type stochastic integrals, derived Feynman-Kac formulas for the moments of the solution using a family of independent Brownian bridges, and computed Lyapunov exponents and lower and upper exponential growth indices. The purpose of this paper is to carry out this program when the noise is not white in time. While the general methodology of the current article is similar to [10], the case of noise colored in time has some distinct features and requires a different treatment. In particular, the existence and estimation of Lyapunov exponents present new difficulties that call for techniques different from those of the white-in-time case.
After a section on preliminaries, Sect. 3 is devoted to showing (see Theorem 3.2) the existence of a unique mild solution to Eq. (1.1), when the stochastic integral is understood in the Skorohod sense and the initial condition satisfies the general integrability condition (3.2). We mention that the existence and uniqueness of a solution for the (H.2) type covariance in the case of a noise colored in time has also been obtained in the recent paper [1] by Balan and Chen. Then, in Sect. 4 we establish Feynman-Kac formulas for the moments of the solution in terms of independent Brownian motions or Brownian bridges (see Proposition 4.3 and Corollary 4.4). Section 5 is devoted to obtaining Lyapunov exponents for exponential functionals of Brownian bridges, assuming that \(\gamma _0(t)= |t|^{-\alpha _0}\) and \(\gamma (cx)=c^{-\alpha } \gamma (x)\) for all \(c>0\), where \(\alpha _0\in (0,1)\) and \(\alpha \in (0,2)\). The main result of this section is Theorem 5.2, whose proof is inspired by Theorem 1.1 of [2]. While Chen’s article [2] deals with exponential functionals of Brownian motions, we deal with exponential functionals of Brownian bridges. Another difference is that we allow noises satisfying condition (H.1). In this case, the spatial covariance is generally a distribution, and even if it is a function, it is not necessarily non-negative and may change sign. The former issue is resolved by an appropriate approximation procedure. For the latter issue, the compact folding argument in [2] is no longer applicable here. Instead, we use a moment comparison between Brownian motions and Ornstein-Uhlenbeck processes, which was observed by Donsker and Varadhan [7]. We refer to [3] for related results on temporal asymptotics for the fractional parabolic Anderson model.
Finally, in Sect. 6 we study the speed of propagation of intermittent peaks. The propagation of the farthest high peaks was first considered by Conus and Khoshnevisan [6] for a one-dimensional heat equation driven by a space-time white noise with compactly supported initial condition, where it is shown that there are intermittency fronts that move linearly with time as \(\lambda t\). More precisely, they defined the lower and upper exponential growth indices as follows:
and
Generalizing previous results by Chen and Dalang [4], we proved in [10] that, assuming that \(u_0\) is nonnegative,
where \(\mathcal {E}_n(\gamma )\) is the nth Lyapunov exponent. In particular, if \(u_0\) is nontrivial and supported on a compact set, then
In the reference [8], using the Feynman-Kac formula for the moments of the solution established in [9], the authors have obtained lower and upper bounds for the exponential growth indices when the noise has a general covariance of the form \(\mathbb {E}[\dot{W}(t,x) \dot{W}(s,y)]=\gamma _0(t-s) \gamma (x-y)\), where \(\gamma _0\) is locally integrable and the spatial covariance \(\gamma \) satisfies (H.2). Here \(\dot{W}(t,x)\) stands for \(\frac{\partial ^{\ell +1}W}{\partial t \partial x_1 \cdots \partial x_\ell }\). In this general situation, to obtain non-trivial limits, the factor \(t^{-1}\) and the set \(\{|x| \ge \lambda t\}\) appearing in the definition of the exponential growth indices need to be changed. In the particular case \(\gamma _0(t)=|t|^{-\alpha _0}\) and \(\gamma (x)= |x|^{-\alpha }\), we need to replace \(t^{-1}\) by \(t^{-a}\) and the set \(\{|x| \ge \lambda t\}\) by \(\{|x| \ge \lambda t^{\frac{a+1}{2}}\}\), where \( a=\frac{4-\alpha -\alpha _0}{2-\alpha }. \) In the present paper, we complete this analysis with the methodology developed in [10], based on large deviations.
As a consequence of the large deviation results obtained in Sects. 5 and 6, under suitable scaling hypotheses on the covariance of the noise, we deduce the following results on the exponential growth indices, that should be compared with (1.8) and (1.9):
- (i) If the initial condition \(u_0\) is a nonnegative function such that \( \int _{\mathbb {R}^\ell } e^{\beta |y|^b}u_0(y)dy<\infty \), where \(b= \frac{4- \alpha -2 \alpha _0}{3- \alpha - \alpha _0}\), then
$$\begin{aligned} \lambda ^*(n) \le g_\beta ^{-1} \left( \left( \frac{n-1}{2} \right) ^{ \frac{2}{2-\alpha }} {\mathcal {E}}(\alpha _0,\gamma )\right) , \end{aligned}$$
where \(g_\beta \) is an increasing function on \((0,\infty )\) defined by Eq. (6.9), and \({\mathcal {E}}(\alpha _0,\gamma )\) is a variational quantity defined in (5.1).
- (ii) Suppose \(u_0\) is bounded below on a ball of radius M and, for technical reasons, assume that the spatial covariance satisfies (H.2). Then,
$$\begin{aligned} \lambda _*(n) \ge a^{\frac{a}{2}}(a+1)^{-\frac{a+1}{2}} \sqrt{2 \left( \frac{n-1}{2} \right) ^{ \frac{2}{2-\alpha }} {\mathcal {E}}(\alpha _0,\gamma ) }\,, \end{aligned}$$
where \(a=\frac{4-\alpha -2\alpha _0}{2-\alpha }\). Moreover, as \(\beta \) tends to infinity, the function \(g_\beta (x)\) converges to \(\sqrt{2x}\), and in the compactly supported case, the two bounds above differ only in the constant \(a^{\frac{a}{2}} (a+1) ^{-\frac{a+1}{2}}\).
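For orientation, the constant \(a^{\frac{a}{2}}(a+1)^{-\frac{a+1}{2}}\) separating the two bounds is always a strict fraction over the admissible parameter range. A small numerical sketch (the parameter grid is ours, chosen for illustration):

```python
import numpy as np

def gap_constant(alpha, alpha0):
    # a = (4 - alpha - 2*alpha0) / (2 - alpha); the constant is a^{a/2} (a+1)^{-(a+1)/2}
    a = (4.0 - alpha - 2.0 * alpha0) / (2.0 - alpha)
    return a ** (a / 2.0) * (a + 1.0) ** (-(a + 1.0) / 2.0)

# over alpha in (0,2) and alpha0 in (0,1), the exponent a is positive
# and the constant lies strictly between 0 and 1
for alpha in np.linspace(0.1, 1.9, 19):
    for alpha0 in np.linspace(0.05, 0.95, 19):
        assert 0.0 < gap_constant(alpha, alpha0) < 1.0
```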
2 Preliminaries
Let \(\mathcal {H}\) be the completion of \({\mathcal {S}}(\mathbb {R}_+\times \mathbb {R}^\ell )\) endowed with the inner product
The mapping \(\varphi \rightarrow W(\varphi )\) defined on \({\mathcal {S}}(\mathbb {R}_+\times \mathbb {R}^\ell )\) can be extended to a linear isometry between \(\mathcal {H}\) and the Gaussian space spanned by W. We will denote this isometry by
for \(\phi \in \mathcal {H}\). If \(\mu \) satisfies (H.2), the right-hand side of (2.1) can be written in Cartesian coordinates as \(\int _{\mathbb {R}_+^2\times \mathbb {R}^{2\ell }}\varphi (s,x)\psi (t,y)\gamma _0(s-t)\gamma (x-y)dxdydsdt\). Hence, a standard approximation (still assuming (H.2)) shows that \(\mathcal {H}\) contains the class of measurable functions \(\phi \) on \(\mathbb {R}_+\times \mathbb {R}^\ell \) such that
2.1 Elements of Malliavin calculus
We denote by D the Malliavin derivative. That is, if F is a smooth and cylindrical random variable of the form
with \(\phi _i \in \mathcal {H}\), \(f \in C^{\infty }_p (\mathbb {R}^n)\) (that is, f and all its partial derivatives have polynomial growth), then DF is the \(\mathcal {H}\)-valued random variable defined by
The operator D is closable from \(L^2(\Omega )\) into \(L^2(\Omega ; \mathcal {H})\), and we define the Sobolev space \(\mathbb {D}^{1,2}\) as the closure of the space of smooth and cylindrical random variables under the norm
We denote by \(\delta \) the adjoint of the derivative operator given by the duality formula
for any \(F \in \mathbb {D}^{1,2}\) and any element \(u \in L^2(\Omega ; \mathcal {H})\) in the domain of \(\delta \). The operator \(\delta \) is also called the Skorohod integral because in the case of the Brownian motion, it coincides with an extension of the Itô integral introduced by Skorohod.
If \(F\in \mathbb {D}^{1,2}\) and h is an element of \( \mathcal {H}\), then Fh is Skorohod integrable and, by definition, the Wick product equals the Skorohod integral of Fh, that is,
We refer to the book [14] of Nualart for a detailed account of the Malliavin calculus with respect to a Gaussian process.
When handling the stochastic heat equation in the Skorohod sense we will make use of chaos expansions, which we briefly describe in the following. For any integer \(n\ge 0\) we denote by \(\mathbf {H}_n\) the nth Wiener chaos of W. We observe that \(\mathbf {H}_0\) is \(\mathbb {R}\) and for \(n\ge 1\), \(\mathbf {H}_n\) is the closed linear subspace of \(L^2(\Omega )\) generated by the family of random variables \(\{ H_n(W(h)), h \in \mathcal {H}, \Vert h\Vert _{\mathcal {H}}=1 \}\). Here \(H_n\) is the nth Hermite polynomial. For any \(n\ge 1\), we denote by \(\mathcal {H}^{\otimes n}\) (resp. \(\mathcal {H}^{\odot n}\)) the nth tensor product (resp. the nth symmetric tensor product) of \(\mathcal {H}\). Then, the mapping \(I_n(h^{\otimes n})= H_n(W(h))\) can be extended to a linear isometry between \(\mathcal {H}^{\odot n}\), equipped with the modified norm \(\sqrt{n!}\Vert \cdot \Vert _{\mathcal {H}^{\otimes n}}\), and \(\mathbf {H}_n\).
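The normalization behind the isometry can be illustrated numerically: for a standard Gaussian \(Z=W(h)\) with \(\Vert h\Vert _{\mathcal {H}}=1\), the Hermite polynomials satisfy \(\mathbb {E}[H_n(Z)H_m(Z)]=n!\,\delta _{nm}\), matching the modified norm \(\sqrt{n!}\Vert \cdot \Vert _{\mathcal {H}^{\otimes n}}\). A Monte Carlo sketch, assuming the probabilists' normalization of the Hermite polynomials:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite polynomials He_n
from math import factorial

rng = np.random.default_rng(2)
Z = rng.normal(size=2_000_000)  # samples of a standard Gaussian, playing the role of W(h)

def He(n, x):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return hermeval(x, coeffs)

# orthogonality of the chaoses: E[H_n(Z) H_m(Z)] = n! * delta_{nm}
for n in range(4):
    for m in range(4):
        emp = np.mean(He(n, Z) * He(m, Z))
        target = float(factorial(n)) if n == m else 0.0
        assert abs(emp - target) < 0.3
```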
Let us consider a random variable \(F\in L^2(\Omega )\) which is measurable with respect to the \(\sigma \)-field \(\mathcal {F}^W\) generated by W. This random variable admits the following expansion, called the Wiener chaos expansion of F:
where the series converges in \(L^2(\Omega )\), and the elements \(f_n \in \mathcal {H}^{\odot n}\), \(n\ge 1\), are determined by F.
The Skorohod integral (or the divergence) of a random field u can be computed using the Wiener chaos expansion. More precisely, suppose that \(u=\{u(t,x), (t,x) \in \mathbb {R}_+ \times \mathbb {R}^\ell \}\) is a random field such that for each (t, x), u(t, x) is an \(\mathcal {F}^W\)-measurable and square integrable random variable. Then, for each (t, x), u(t, x) has the Wiener chaos expansion of the form
Suppose additionally that the trajectories of u belong to \(\mathcal {H}\) and \(\mathbb {E}[\Vert u\Vert ^2_{\mathcal {H}}] <\infty \). Then, we can interpret u as a square integrable random function with values in \(\mathcal {H}\) and the kernels \(f_n\) in the expansion (2.6) are functions in \(\mathcal {H} ^{\otimes (n+1)}\) which are symmetric in the first n time-space variables. In this situation, u belongs to the domain of the divergence (that is, u is Skorohod integrable with respect to W) if and only if the following series converges in \(L^2(\Omega )\)
where \({\widetilde{f}}_{n}\) denotes the symmetrization of \(f_n\) in all its \(n+1\) time-space variables.
2.2 Brownian bridges
Let \(\{B(s), s\ge 0\}\) be an \(\ell \)-dimensional Brownian motion starting at 0. For every fixed time \(t>0\) and \(x,y\in \mathbb {R}^\ell \), the process
is an \(\ell \)-dimensional Brownian bridge from x to y, i.e. \(\widetilde{B}(0)=x\) and \(\widetilde{B}(t)=y\). Away from the terminal time t, the law of the Brownian bridge admits a density with respect to that of Brownian motion. Indeed, it is shown in [13, Lemma 3.1] that for every bounded measurable function F,
Throughout the paper, we denote by \(\{B_{0,t}(s),0\le s\le t\}\) an \(\ell \)-dimensional Brownian bridge which starts and ends at the origin. A Brownian bridge from x to y can be expressed as
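These representations can be checked by simulation. A sketch assuming the standard identities \(B_{0,t}(s)=B(s)-\frac{s}{t}B(t)\) (a bridge from 0 to 0, uncorrelated with \(B(t)\)) and \(\mathbb {E}[B_{0,t}(s)B_{0,t}(r)]=\frac{s(t-r)}{t}\) for \(s\le r\), per component:

```python
import numpy as np

rng = np.random.default_rng(1)
t, s, r, n_paths = 2.0, 0.5, 1.2, 200_000  # fixed times with 0 < s < r < t

# simulate the vector (B(s), B(r), B(t)) from independent Gaussian increments
Bs = rng.normal(0.0, np.sqrt(s), n_paths)
Br = Bs + rng.normal(0.0, np.sqrt(r - s), n_paths)
Bt = Br + rng.normal(0.0, np.sqrt(t - r), n_paths)

# bridge from 0 to 0 on [0, t]: B_{0,t}(u) = B(u) - (u / t) * B(t)
Xs = Bs - (s / t) * Bt
Xr = Br - (r / t) * Bt

# covariance E[B_{0,t}(s) B_{0,t}(r)] = s * (t - r) / t, and the bridge is uncorrelated with B(t)
assert abs(np.mean(Xs * Xr) - s * (t - r) / t) < 0.02
assert abs(np.mean(Xs * Bt)) < 0.02
```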
3 Existence and uniqueness of a solution via chaos expansions
We denote by \(p_{t}(x)\) the \(\ell \)-dimensional heat kernel \(p_{t}(x)=(2\pi t)^{-\ell /2}e^{-|x|^2/2t} \), for any \(t > 0\), \(x \in \mathbb {R}^\ell \). For each \(t\ge 0\) let \(\mathcal {F}_t\) be the \(\sigma \)-field generated by the random variables \(W(\varphi )\), where \(\varphi \) has support in \([0,t ]\times \mathbb {R}^\ell \). We say that a random field \(u=\{u({t,x}), (t,x) \in \mathbb {R}_+\times \mathbb {R}^\ell \}\) is adapted if for each (t, x) the random variable \(u_{t,x}\) is \(\mathcal {F}_t\)-measurable.
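The heat kernel just defined has unit mass and satisfies the semigroup identity \(p_t*p_s=p_{t+s}\), which underlies the mild formulation below. A quick quadrature check in dimension \(\ell =1\) (grid and test point are arbitrary choices):

```python
import numpy as np

def heat_kernel(t, x, ell=1):
    # p_t(x) = (2*pi*t)^(-ell/2) * exp(-|x|^2 / (2*t)); here x is a point or array in 1-d
    return (2.0 * np.pi * t) ** (-ell / 2.0) * np.exp(-x ** 2 / (2.0 * t))

t, s, x0 = 0.7, 0.4, 1.3
xs = np.linspace(-20.0, 20.0, 40001)
dx = xs[1] - xs[0]

# unit total mass
assert abs(heat_kernel(t, xs).sum() * dx - 1.0) < 1e-4

# semigroup (Chapman-Kolmogorov) identity: (p_t * p_s)(x0) = p_{t+s}(x0)
conv = (heat_kernel(t, x0 - xs) * heat_kernel(s, xs)).sum() * dx
assert abs(conv - heat_kernel(t + s, x0)) < 1e-4
```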
We assume that the initial condition \(u_0\) is a measurable function satisfying the condition
where \(p_t *|u_0|\) denotes the convolution of the heat kernel \(p_t\) and the function \(|u_0|\). This condition is equivalent to
for all \(\kappa >0\).
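For instance, initial data with exponential growth are admissible. A numerical sketch, assuming the equivalent form of (3.2) is \(\int _{\mathbb {R}^\ell }e^{-\kappa |y|^2}|u_0(y)|dy<\infty \) for all \(\kappa >0\), with the illustrative choice \(u_0(y)=e^{|y|}\):

```python
import numpy as np

ys = np.linspace(-60.0, 60.0, 600001)
dy = ys[1] - ys[0]
u0 = np.exp(np.abs(ys))  # grows exponentially, yet Gaussian damping dominates it

# int exp(|y| - kappa * y^2) dy < infinity for every kappa > 0
for kappa in (0.05, 0.5, 5.0):
    assert np.isfinite((u0 * np.exp(-kappa * ys ** 2)).sum() * dy)

# hence (p_t * u0)(x) is finite, e.g. at t = 1, x = 0; the tail beyond |y| = 30 is negligible,
# showing that the truncated quadrature has already converged
pt = (2.0 * np.pi) ** -0.5 * np.exp(-ys ** 2 / 2.0)
full_val = (pt * u0).sum() * dy
half = np.abs(ys) <= 30.0
half_val = (pt[half] * u0[half]).sum() * dy
assert abs(full_val - half_val) / full_val < 1e-10
```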
We define the solution of Eq. (1.1) as follows.
Definition 3.1
An adapted random field \(u=\{u({t,x}), t \ge 0, x \in \mathbb {R}^\ell \}\) such that \(\mathbb {E}u^2({t,x}) < \infty \) for all (t, x) is a mild solution to Eq. (1.1) with initial condition \(u_0\) satisfying (3.2), if for any \((t,x) \in [0, \infty )\times \mathbb {R}^\ell \), the process \(\{p_{t-s}(x-y)u({s,y})\mathbf{1}_{[0,t)}(s) , s \ge 0, y \in \mathbb {R}^\ell \}\) is Skorohod integrable, and the following equation holds
Suppose now that \(u=\{u({t,x}), t\ge 0, x \in \mathbb {R}^\ell \}\) is a mild solution to Eq. (3.3). Then according to (2.5), for any fixed (t, x) the random variable u(t, x) admits the following Wiener chaos expansion
where for each (t, x), \(f_n(\cdot ,t,x)\) is a symmetric element in \(\mathcal {H}^{\otimes n}\). Thanks to (2.7) and using an iteration procedure, one can then find an explicit formula for the kernels \(f_n\) for \(n \ge 1\)
where \(\sigma \) denotes the permutation of \(\{1,2,\dots ,n\}\) such that \(0<s_{\sigma (1)}<\cdots<s_{\sigma (n)}<t\) (see, for instance, equation (4.4) in [11], where this formula is established in the case of a noise which is white in space). Then, to show the existence and uniqueness of the solution it suffices to show that for all (t, x) we have
Theorem 3.2
Suppose that the spatial covariance satisfies (H.1) or (H.2). Then relation (3.5) holds for each \((t,x) \in (0,\infty )\times \mathbb {R}^{\ell }\). Consequently, Eq. (1.1) admits a unique mild solution in the sense of Definition 3.1.
Proof
Notice that the kernel \(f_n\) can be written as
where \(s=(s_1, \dots , s_n)\), \(y=(y_1,\dots , y_n)\) and
Then
where \(\xi =(\xi ^1, \dots , \xi ^n)\), \(\mu (d\xi )= \prod _{i=1}^n \mu (d\xi ^i)\),
\(ds=ds_1\cdots ds_n\) and \(dr=dr_1\cdots dr _n\). Using the inequality \(ab \le \frac{1}{2} (a^2+b^2)\) and the fact that \(\gamma _0\) is locally integrable, we obtain
By symmetry, this leads to
where for each \(n\ge 2\), we denote
Fix \(0<s_1<s_2< \cdots< s_n <t\). Notice that \((y,z) \mapsto n! g_n(s,y,t,z)\) is the joint density of the random vector \((B_{s_1}, B_{s_2}, \dots , B_{s_n}, B_t)\) at the point \((y_1-z, y_2-z, \dots , y_n-z,x-z)\), where \(B=\{B_t, t\ge 0\}\) is an \(\ell \)-dimensional Brownian motion. Therefore, \(n! g_n(s,\cdot ,t,z) /p_t(x-z)\) is the conditional density of \((B_{s_1}, B_{s_2}, \dots , B_{s_n})\) given \(B_t=x-z\), which coincides with the law of the random vector
The characteristic function of this vector is given by
where we recall that \(\{B_{0,t}(s), s\in [0,t]\}\) denotes an \(\ell \)-dimensional Brownian bridge from zero to zero. This implies
Substituting the previous estimate into (3.7) yields
Finally, from Lemmas 9.1 and 9.4 of [10] we conclude that (3.5) holds. \(\square \)
4 Feynman-Kac formulas for the moments of the solution
For any \(\varepsilon >0\), we define \(\gamma _\varepsilon \) by
Notice that for each \(\varepsilon >0\), the spectral measure of \(\gamma _ \varepsilon \) is \(\mu _\varepsilon (d\xi ):= e^{-\varepsilon |\xi |^2}\mu (d \xi )\), which has finite total mass because \(\mu \) is a tempered measure. Thus, \(\gamma _ \varepsilon \) is a bounded positive definite function. The next proposition is the key ingredient in the proof of the Feynman-Kac formula for the moments of the solution to Eq. (1.1) using Brownian bridges.
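As an illustration (our own numerical sketch, using the rough fractional density \(f(\xi )=|\xi |^{1-2H}\) with \(c_H\) set to 1), one can compute \(\gamma _\varepsilon \) by quadrature and confirm that it is bounded and positive definite, i.e. that Gram matrices \([\gamma _\varepsilon (x_i-x_j)]\) are positive semidefinite:

```python
import numpy as np

H, eps = 0.4, 0.1
xi = np.linspace(0.0, 60.0, 120001)   # half-line grid; the integrand is even in xi
dxi = xi[1] - xi[0]
dens = xi ** (1.0 - 2.0 * H) * np.exp(-eps * xi ** 2)  # e^{-eps |xi|^2} f(xi), c_H = 1

def gamma_eps(x):
    # gamma_eps(x) = (2*pi)^{-1} * int e^{i xi x} e^{-eps xi^2} f(xi) dxi  (real and even)
    return (1.0 / np.pi) * (np.cos(xi * x) * dens).sum() * dxi

pts = np.linspace(-2.0, 2.0, 15)
G = np.array([[gamma_eps(a - b) for b in pts] for a in pts])

assert np.isfinite(gamma_eps(0.0))            # gamma_eps is bounded (by gamma_eps(0))
assert np.min(np.linalg.eigvalsh(G)) > -1e-8  # Gram matrix is positive semidefinite
```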
Proposition 4.1
Suppose that the spatial covariance satisfies (H.1) or (H.2). Let \(\kappa \) be a real number. Let \(\{B^j_{0,t}(s), s\in [0,t]\}\), \(j=1,\dots , n\), be independent \(\ell \)-dimensional Brownian bridges from 0 to 0. Then for each \(\varepsilon >0\), the function \(F_\varepsilon : (\mathbb {R}^\ell )^n \rightarrow \mathbb {R}\) given by
is well-defined and continuous. Moreover, as \(\varepsilon \downarrow 0\), \(F_ \varepsilon \) converges uniformly to a limit function denoted by
Remark 4.2
Actually, for each \(1 \le j<k \le n\), the integral
converges in \(L^p(\Omega )\) as \(\varepsilon \) tends to zero, for each \(p\ge 1\), and we can also denote the limit as
Proof
We claim that for every \(\kappa \in \mathbb {R}\)
By Hölder’s inequality, it suffices to show the previous inequality for \(n=2\). For every \(d \in \mathbb {N}\), we have
where we use the notation \(\mu _\varepsilon (d\xi ) = \prod _{k=1}^d e^{-\varepsilon |\xi ^k|^2}\mu (d\xi ^k)\) and \(ds = ds_1 \cdots ds_d\). Using the independence of \(B^1\) and \(B^2\), the Cauchy-Schwarz inequality, the inequality \(ab \le \frac{1}{2}(a^2+b^2)\) and the fact that \(\gamma _0\) is locally integrable, we obtain
where \([0,t]^d_<\) is defined in (3.8). Then (4.3) follows from the Taylor expansion of \(e^x\) and Lemmas 9.1 and 9.4 in [10]. Finally, the proof of the uniform convergence of \(F_ \varepsilon \) as \(\varepsilon \) tends to zero can be done by the same arguments as in the proof of Proposition 4.2 in [10]. Notice that Lemma 4.1 in [10] has to be replaced by the inequality
where \(\kappa \in \mathbb {R}\), \(G=(G^1,\dots ,G^n)\in (\mathbb {R}^\ell )^n\) is a centered Gaussian process indexed by [0, t] and \(y=(y^{jk})_{1\le j<k\le n} :[0,t]^2\rightarrow (\mathbb {R}^\ell )^{n(n-1)/2}\) is a measurable matrix-valued function. \(\square \)
As an application, we have the following Feynman-Kac formula based on Brownian bridges.
Proposition 4.3
Suppose that the spatial covariance satisfies (H.1) or (H.2). Suppose that \(\{B_{0,t}^j(s),s\in [0,t]\}\), \(j=1,\dots , n\) are \(\ell \)-dimensional independent Brownian bridges from zero to zero. Then for every \(x^1,\dots ,x^n\in \mathbb {R}^\ell \),
Proof
For any \(\varepsilon >0\) we denote by \(u_\varepsilon (t,x)\) the solution to the stochastic heat equation
where \(\dot{W}_\varepsilon \) is a centered Gaussian noise with covariance
From the results of Hu, Huang, Nualart and Tindel [9] we have the following Feynman-Kac formula for the moments of \(u_\varepsilon \)
where \(\{B^j, j=1,\dots , n\}\) are independent \(\ell \)-dimensional standard Brownian motions. We remark that in [9] it is required that \(\gamma \) is a non-negative function, which is not necessarily true for \(\gamma _\varepsilon \). However, \(\gamma _\varepsilon \) is bounded, and, in this case, it is not difficult to show that (4.7) still holds. Also, [9] assumes that \(u_0\) is bounded, but it is not difficult to show that (4.7) still holds assuming (3.2).
For each \(j=1,\dots ,n\) and every fixed \(t>0\), the Brownian motion \(B^j\) admits the following decomposition
where \(\{B_{0,t}^j(s),s\in [0,t]\}\), \(j=1,\dots ,n\), are Brownian bridges on \(\mathbb {R}^\ell \) independent from \(\{B^j(t), 1\le j \le n\}\) and from each other. Thus, identity (4.7) can be written as
From Proposition 4.1 and the dominated convergence theorem, the right-hand side of (4.9) converges to the right-hand side of (4.6). From the Wiener chaos expansion of the solution and the computations in the proof of Theorem 3.2, it follows easily that \(u_\varepsilon (t,x)\) converges in \(L^2(\Omega )\) to u(t, x). On the other hand, from (4.9) it follows that the moments of all orders of \(u_\varepsilon (t,x)\) are uniformly bounded in \(\varepsilon \). As a consequence, the left-hand side of (4.9) converges to the left-hand side of (4.6). This completes the proof. \(\square \)
Corollary 4.4
Under the assumptions of Proposition 4.3 we have, for any \(x\in \mathbb {R}^\ell \)
where \(B^j\), \(j=1,\dots , n\), are independent \(\ell \)-dimensional Brownian motions.
Remark 4.5
If the initial condition \(u_0\) is nonnegative, one can show that \(u(t,x) \ge 0\) a.s., for all \(t\ge 0\) and \(x\in \mathbb {R}^\ell \). This follows from the fact that \(u_\varepsilon (t,x)\) is nonnegative for any \(\varepsilon \), where \(u_\varepsilon \) is the random field introduced in the proof of Proposition 4.3.
5 Lyapunov exponents of Brownian bridges
The following variational formula occurs frequently in our considerations,
where \(\mathcal {A}_\ell \) is the class of functions defined by
where \(\alpha _0 \in [0,1)\) and \(\gamma \) is a generalized covariance function.
In general, if \(\eta _0\) is a covariance function (nonnegative and nonnegative definite locally integrable function) on \(\mathbb {R}\) and \(\eta \) is a generalized covariance function on \(\mathbb {R}^\ell \) with spectral measure \(\nu \), we can define the variational quantity
It is evident that \(\mathcal {E}(\alpha _0,\gamma )=\mathcal {E}(|\cdot |^{-\alpha _0},\gamma )\). The first integral in (5.3) is defined through Fourier transforms,
A priori, \(\mathcal {E}(\eta _0,\eta )\) can be infinite. However, if \(\eta _0\) belongs to \(L^1([-1,1])\) and \(\eta \) satisfies Dalang’s condition (1.6) (as in all cases in the current article), then \(\mathcal {E}(\eta _0,\eta )\) is finite. Indeed, applying the Cauchy-Schwarz inequality, we have
Moreover, for each \(s\in [0,1]\), \( | \mathcal {F} g^2(s,\cdot )(\xi )|\) is bounded by 1 and by \(\frac{2\sqrt{\ell }}{|\xi |} \Vert \nabla _x g(s,\cdot )\Vert _{L^2(\mathbb {R}^\ell )}\). In fact, integrating by parts, we have
It follows that for every \(R>0\),
Since \(\nu \) satisfies Dalang’s condition \(\int _{\mathbb {R}^\ell }\frac{\nu (d \xi )}{1+|\xi |^2}<\infty \), we can choose \(R>0\) such that
This implies that the right-hand side of (5.3) is at most \({\Vert \eta _0\Vert _{L^1([-1,1])} }(2 \pi )^{-\ell }\int _{|\xi |<R}\nu (d \xi )\), which is also an upper bound for \(\mathcal {E}(\eta _0,\eta )\).
To conclude our discussion on basic properties of \(\mathcal {E}(\eta _0,\eta )\), we describe a useful comparison principle. Suppose \(\eta _0,\tilde{\eta }_0\) are covariance functions on \(\mathbb {R}\) and \(\eta ,\tilde{\eta }\) are generalized covariance functions on \(\mathbb {R}^\ell \) such that the spectral measures of \(\eta _0, \eta \) are less than the spectral measures of \(\tilde{\eta }_0, \tilde{\eta }\) respectively. In other words, \(\eta _0\le \tilde{\eta }_0\) and \(\eta \le \tilde{\eta }\) in quadratic sense. Then
This is immediate from (5.3).
In the remainder of the article, we consider the following scaling condition on the noise:
- (S) There exist \(\alpha _0\in (0,1)\) and \(\alpha \in (0,2)\) such that \(\gamma _0(t)=|t|^{-\alpha _0}\) and \(\gamma (cx)=c^{-\alpha } \gamma (x)\) for all \(t,c>0\) and \(x\in \mathbb {R}^\ell \).
Under the scaling assumption (S), it is easy to check that for every \(\theta >0\),
Proposition 5.1
Let K and \(\psi \) be symmetric functions in \(L^2(\mathbb {R}^\ell )\) and \(L^2(\mathbb {R})\) respectively. We assume in addition that \(\psi \) is nonnegative and \(\psi '\) exists and belongs to \(L^2(\mathbb {R})\). The functions \(\eta _0=\psi *\psi \) and \(\eta =K*K\) are bounded and nonnegative definite functions. Then for every \(\theta >0\) and every integer \(n\ge 1\),
where \(\mathcal {E}(\eta _0,\eta )\) is the variational quantity defined in (5.3).
Before giving the proof, let us explain our contribution. This result, together with Theorem 5.2 below, extends the result of Chen in [2, Section 4], where \(\eta \) is assumed to be nonnegative. In the aforementioned paper, the author uses a compact folding argument. When \(\eta \) changes sign, this argument no longer works; in particular, [2, inequality (4.15)] fails. Here, we replace the compact folding argument by a moment comparison between Brownian motions and Ornstein-Uhlenbeck processes, which was observed earlier by Donsker and Varadhan [7] [see (5.9) below]. Unlike Brownian motions, Ornstein-Uhlenbeck processes are ergodic, which allows the essential arguments of [2] to carry through. Lastly, although the occupation times of Ornstein-Uhlenbeck processes satisfy (strong) large deviation principles, these cannot be applied here due to the time-dependent structure (namely \(\eta _0\)).
Proof of Proposition 5.1
We first observe that
In conjunction with the independence of the Brownian motions, we see that the left-hand side of (5.7) is at most
Hence, it suffices to show
The proof is now divided into several steps.
Step 1. For each \(\kappa >0\), let \(\mathbb {P}_\kappa \) be the law of an Ornstein-Uhlenbeck process in \(\mathbb {R}^\ell \) starting from 0 with generator \(\frac{1}{2} \Delta - \kappa x\cdot \nabla \). Let \(\mathbb {E}_\kappa \) denote the expectation with respect to \(\mathbb {P}_\kappa \). We will show that
We note that
Hence, it suffices to check that for each integer \(d\ge 1\)
Since \(\eta _0\) is nonnegative, this amounts to showing
for arbitrary times \(s_1,r_1,\dots ,s_d,r_d\) in [0, t]. By writing \(\eta (z)=(2 \pi )^{-\ell }\int _{\mathbb {R}^\ell }e^{i \xi \cdot z}|\mathcal {F}K(\xi )|^2 d \xi \), we see that
Hence, (5.10) is evident provided that
An observation made by Donsker-Varadhan [7, proof of Lemma 3.10] is that \(\mathbb {E}[B(s)\otimes B(r)]\ge \mathbb {E}_\kappa [B(s)\otimes B(r)]\) in quadratic sense. This fact implies the above inequality.
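In one dimension this comparison is explicit: the Ornstein-Uhlenbeck process \(dX=-\kappa X\,ds+dB\), \(X(0)=0\), has covariance \(\frac{1}{2\kappa }\big (e^{-\kappa |s-r|}-e^{-\kappa (s+r)}\big )\), which is dominated by the Brownian covariance \(s\wedge r\). A numerical sketch of this domination:

```python
import numpy as np

rng = np.random.default_rng(3)

def cov_bm(s, r):
    # Brownian motion from 0: E[B(s) B(r)] = min(s, r)
    return np.minimum(s, r)

def cov_ou(s, r, kappa):
    # OU process dX = -kappa X ds + dB, X(0) = 0:
    # E[X(s) X(r)] = (exp(-kappa |s - r|) - exp(-kappa (s + r))) / (2 kappa)
    return (np.exp(-kappa * np.abs(s - r)) - np.exp(-kappa * (s + r))) / (2.0 * kappa)

s, r = rng.uniform(0.0, 10.0, 10_000), rng.uniform(0.0, 10.0, 10_000)
for kappa in (0.1, 1.0, 5.0):
    # Donsker-Varadhan comparison: the Brownian covariance dominates the OU one
    assert np.all(cov_bm(s, r) >= cov_ou(s, r, kappa) - 1e-12)
```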
Step 2. As a consequence, (5.8) is reduced to showing
For each \(t>0\) and each path B, we denote
and observe that
In particular \(Z_{t,B}\) belongs to \(L^2(\mathbb {R}^{\ell +1})\) and
Let N be a fixed positive number and denote \(\Omega _{t,N}=\{B:\frac{1}{t}\int _0^t|B(s)|ds\le N \}\). The only advantage of \(\mathbb {P}_\kappa \) over \(\mathbb {P}\) that we need is the following inequality
In fact, by Girsanov’s theorem we have
It follows that
where the last inequality is a consequence of the Cauchy-Schwarz inequality. Hence, in conjunction with Chebyshev’s inequality, we obtain
The estimate (5.13) follows directly from here.
The set \(M=\{Z_{t,B}\}_{B\in \Omega _{t,N},t>0}\) is then a subset of \(L^2(\mathbb {R}^{\ell +1})\). We will show that M is relatively compact in \(L^2(\mathbb {R}^{\ell +1})\). Indeed, we verify that \(\mathcal {F}M=\{\mathcal {F}Z_{t,B}\}_{B\in \Omega _{t,N},t>0}\) satisfies the Kolmogorov-Riesz compactness criterion in \(L^2(\mathbb {R}^{\ell +1})\) (cf. [12, Theorem 5]). More precisely, we check that
Notice that (5.15) is evident from (5.12). We can easily compute the Fourier transform of \(Z_{t,B}\)
Hence,
which implies (5.16). To show (5.17), let us first fix \(\varepsilon >0\) and choose a function g in \(C_c^\infty (\mathbb {R}^{\ell +1})\) such that \(\Vert \mathcal {F}\psi \otimes \mathcal {F}K-g\Vert _{L^2(\mathbb {R}^{\ell +1})}< \varepsilon \). We denote \(Y_{t,B}(\tau ,\xi )=g(\tau ,\xi )\frac{1}{t}\int _0^t e^{-i \tau \frac{s}{t}-i \xi \cdot B(s)}ds\) and observe that for every path B in \(\Omega _{t,N}\) and \(|(\tau ',\xi ')|<\rho \), we have
It follows that
Since g is uniformly continuous, the last limit above vanishes. Hence,
for every \(\varepsilon >0\). This in turn implies (5.17).
Step 3. Applying (5.12),
Together with (5.13), the previous estimate yields
To deal with the limit on the right-hand side above, we adopt an argument from [2]. Let \(\varepsilon \) be a fixed positive number and define
The collection \(\{{\mathcal {O}}_h\}_{h\in M}\) forms an open cover of M in \(L^2(\mathbb {R}^{\ell +1})\). Since M is relatively compact, we can find deterministic functions \(h_1,\dots ,h_m\in M\) such that \(M\subset \cup _{j=1}^m {\mathcal {O}}_{h_j}\). It follows that for every \(t>0\) and \(B\in \Omega _{t,N}\),
and hence,
We note that
where
Since \({\bar{h}}_j\) is the convolution of \(L^2\)-functions, it is continuous and bounded. Moreover, since \(\psi '\) belongs to \(L^2(\mathbb {R})\), \(\partial _s{\bar{h}}_j\) exists and \(\Vert \partial _s{\bar{h}}_j\Vert _{L^\infty }\le \Vert h_j\Vert _{L^2}\Vert \partial _s \psi \otimes K\Vert _{L^2}\). In particular, \({\bar{h}}_j\) satisfies the hypothesis of [5, Proposition 3.1]. We also note that from (5.14), \(\frac{d\mathbb {P}_\kappa }{d\mathbb {P}}\big |_{[0,t]}\le e^{\frac{\ell }{2} \kappa t}\). In conjunction with [5, Proposition 3.1], it follows that
where, for each \(g\in \mathcal {A}_\ell \), we conventionally set \(g(s,x)=0\) if \(s\notin [0,1]\). Combining our arguments starting from (5.19), we obtain
Applying the Cauchy-Schwarz inequality \(-\Vert h_j\Vert ^2_{L^2}+2 \langle h_j,(\psi \otimes K)*g^2\rangle _{L^2}\le \Vert (\psi \otimes K)*g^2\Vert ^2_{L^2}\), we further have
Together with (5.9), (5.18) and the fact that
we see that the left-hand side of (5.8) is at most
We now send \(N\rightarrow \infty \), \(\kappa \downarrow 0\) and \(\varepsilon \downarrow 0\) to obtain (5.8) and complete the proof. \(\square \)
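The elementary bound used in the final estimate, \(-\Vert h\Vert ^2_{L^2}+2\langle h,v\rangle _{L^2}\le \Vert v\Vert ^2_{L^2}\), is just the expansion \(0\le \Vert v-h\Vert ^2_{L^2}\). A purely illustrative numerical sanity check, with random vectors standing in for \(h_j\) and \((\psi \otimes K)*g^2\):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    # h stands in for h_j, v for (psi tensor K) * g^2
    h = rng.standard_normal(8)
    v = rng.standard_normal(8)
    lhs = -np.dot(h, h) + 2.0 * np.dot(h, v)
    # -||h||^2 + 2<h,v> <= ||v||^2, since 0 <= ||v - h||^2
    assert lhs <= np.dot(v, v) + 1e-12
```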
The following result provides an upper bound for the Lyapunov exponents of Brownian bridges.
Theorem 5.2
Suppose that the covariance of the noise satisfies condition (H.1) or (H.2) and also (S). Assume that the spectral density \(f(\xi )\) exists. Suppose that \(\{B^j_{0,t} (s), s\in [0,t]\}\), \(j=1,\dots , n\), are independent \(\ell \)-dimensional Brownian bridges from zero to zero. Then,
where we recall that \(a=\frac{4-\alpha - 2\alpha _0}{2-\alpha }\).
Proof
For suitable distributions \(\eta _0,\eta \), we are going to make use of the notation
Let \(\gamma _0(s)=|s|^{-\alpha _0}\) denote the temporal covariance. With this notation, the expectation in (5.20) can be written as \(\mathbb {E}\exp \left\{ t^{-\alpha _0}Q_t(\gamma _0,\gamma ) \right\} \). We note that \(\gamma _0=\psi *\psi \), where \(\psi (s)=c(\alpha _0) |s|^{-\frac{1+\alpha _0}{2}} \) for some suitable constant \(c(\alpha _0)\). For each \(\delta >0\) we set \(\psi _\delta =p_{\delta /2}* \psi \) and \(\gamma _{0,\delta }=\psi _\delta *\psi _\delta \). To prove (5.20), the main ideas are: first, approximate the singular covariances \(\gamma _0,\gamma \) by the regular covariances \(\gamma _{0,\delta },\gamma _\varepsilon \); then, bound the exponential functional of Brownian bridges by that of Brownian motions. At the final stage, we apply Proposition 5.1. This procedure is carried out in detail in several steps below. For the moment, let us put \(t_n=(n-1)^{\frac{2}{2- \alpha }}t^a\) and observe that, by making the change of variables \(s\rightarrow \frac{t}{t_n}s\), \(r\rightarrow \frac{t}{t_n}r\) and using the scaling properties of \(\gamma \) and of Brownian bridges (i.e. (S) and \(\{B_{0,t}(\lambda s),s\le t/ \lambda \}\mathop {=}\limits ^{d} \sqrt{\lambda } \{B_{0,\frac{t}{\lambda }}(s),s\le t/\lambda \} \) for any \(\lambda >0\)), we have
Therefore, in conjunction with (5.6), (5.20) is equivalent to
Step 1. Fix \(\varepsilon >0\). For any \(p,q>1\) with \(\frac{1}{p} + \frac{2}{q} =1\), applying Hölder's inequality, we have
We claim that
and
Let us focus on (5.24). By Hölder’s inequality, it suffices to show that for any \(\kappa \in \mathbb {R}\)
For each integer \(d\ge 1\), we can write
Then, using the Cauchy-Schwarz inequality and the inequality \(ab \le \frac{1}{2}(a^2+b^2)\), we obtain
for some constant C depending only on \(\kappa \). Therefore, the claim (5.26) is reduced to
which follows from Lemma 5.3 in [10]. This completes the proof of (5.24).
To show (5.25), we use the estimate
to obtain
This implies (5.25) since \(\gamma _0\in L^1([0,1])\) and \(\lim _{\delta \downarrow 0}\gamma _{0,\delta }=\gamma _0\) in \(L^1([0,1])\).
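The factorization \(\gamma _0=\psi *\psi \) with \(\psi (s)=c(\alpha _0)|s|^{-\frac{1+\alpha _0}{2}}\) used in this proof is an instance of the one-dimensional Riesz convolution identity \(|s|^{-a}*|s|^{-b}=C\,|s|^{-(a+b-1)}\) for \(a,b\in (0,1)\) with \(a+b>1\). The following numerical sketch (illustrative only; the constant \(c(\alpha _0)\) is not computed) checks the resulting homogeneity of degree \(-\alpha _0\) for \(\alpha _0=1/2\):

```python
import numpy as np
from scipy.integrate import quad

alpha0 = 0.5
beta = (1 + alpha0) / 2  # exponent of psi; here 0.75

def conv(s):
    # (psi * psi)(s) up to the constant c(alpha0)^2:
    # integrate |r|^-beta * |s - r|^-beta over the real line,
    # splitting at the integrable singularities r = 0 and r = s
    f = lambda r: abs(r) ** (-beta) * abs(s - r) ** (-beta)
    pieces = [(-np.inf, -1.0), (-1.0, 0.0), (0.0, s / 2), (s / 2, s),
              (s, s + 1.0), (s + 1.0, np.inf)]
    return sum(quad(f, lo, hi)[0] for lo, hi in pieces)

# homogeneity of degree -alpha0: (psi*psi)(2) / (psi*psi)(1) = 2^(-alpha0)
ratio = conv(2.0) / conv(1.0)
assert abs(ratio - 2.0 ** (-alpha0)) < 1e-2
```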
Step 2. We claim that
Notice that the function \(\gamma _\varepsilon \) is bounded and can be expressed in the form \(\gamma _\varepsilon =K_\varepsilon * K_\varepsilon \), where the function \(K_\varepsilon \), defined by
is bounded, has bounded first derivatives, is symmetric and belongs to \(L^2 (\mathbb {R}^\ell )\). Let \(\lambda \) be a fixed number in (0, 1) and set \(\rho _{\lambda }=\int _{[0,1]^2\setminus [0,\lambda ]^2} \gamma _{0,\delta }(s-r)dsdr\). For each \(j\ne k\), we use the estimate
together with (2.8) to obtain
At this point, we apply Proposition 5.1 to get
Letting \(\lambda \uparrow 1\) and noting that \(\rho _\lambda \rightarrow 0\), we obtain (5.27).
Step 3. We combine (5.23), (5.24), (5.25) and (5.27) to get
Note that the order of the limits \(\delta \downarrow 0\) and \(\varepsilon \downarrow 0\) cannot be interchanged. It is straightforward to check that \(\gamma _{0,\delta }\le \gamma _0\) and \(\gamma _\varepsilon \le \gamma \) in the quadratic sense. Hence, using (5.5) we have
Finally, letting \(p\downarrow 1\), we obtain (5.22), which completes the proof. \(\square \)
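The three-factor Hölder splitting employed in Step 1 (exponents \(p,q>1\) with \(\frac{1}{p}+\frac{2}{q}=1\)) is the generalized Hölder inequality \(\mathbb {E}[XYZ]\le \Vert X\Vert _p\Vert Y\Vert _q\Vert Z\Vert _q\). A minimal Monte Carlo sanity check with the admissible pair \(p=2\), \(q=4\) (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 2.0, 4.0
assert abs(1 / p + 2 / q - 1.0) < 1e-12  # admissible Hoelder exponents

# nonnegative random variables on a finite sample space
X, Y, Z = rng.random((3, 10_000))
lhs = np.mean(X * Y * Z)
rhs = (np.mean(X ** p) ** (1 / p)
       * np.mean(Y ** q) ** (1 / q)
       * np.mean(Z ** q) ** (1 / q))
assert lhs <= rhs + 1e-12
```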
Remark 5.3
The case of time-independent noises corresponds to \(\alpha _0=0\). In this case, the function \(\gamma _0\equiv 1\) cannot be written as the convolution of a function with itself. Thus the proof of Proposition 5.1 does not apply in this case.
Corollary 5.4
Suppose that the covariance of the noise satisfies (H.1) or (H.2), condition (S) holds and the spectral measure \(\mu \) is absolutely continuous. Let u(t, x) be the solution to (1.1) with nonnegative initial condition \(u_0\) satisfying condition (3.1). Then for any integer \(n \ge 2\),
Proof
Let \(\{B_{0,t}^j(s), s \in [0,t]\}\), \(j=1,\dots , n\), be \(\ell \)-dimensional Brownian bridges from zero to zero. Using the moment formula (4.6) for the solution proved in Proposition 4.3, we have
where the last inequality follows from (4.5). Then, the upper bound is a consequence of Theorem 5.2. \(\square \)
Remark 5.5
Using the approach developed in [2], we can also show the corresponding lower bound in (5.20), assuming (H.1) or (H.2) and (S) (but not necessarily the absolute continuity of \(\mu \)). However, a lower bound similar to that proved in Corollary 5.4 cannot be obtained. For this reason, the proof of a lower bound for the exponential growth indices needs a direct approach as it is done in the next section.
6 Exponential growth indices
In this section we denote by u(t, x) the solution to (1.1) with nonnegative initial condition \(u_0\) satisfying condition (3.1). The exponential growth indices are defined as follows:
and
where we recall that \(a=\frac{4-\alpha -2\alpha _0}{2-\alpha }\).
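Since \(a=\frac{4-\alpha -2\alpha _0}{2-\alpha }=1+\frac{2-2\alpha _0}{2-\alpha }\), one sees that \(a>1\) whenever \(\alpha _0<1\) and \(\alpha <2\), and that \(a\rightarrow 1\) as \(\alpha _0\uparrow 1\) (the degenerate regime discussed in Remark 6.4). A short symbolic confirmation (the grid of test values below is arbitrary):

```python
import sympy as sp

alpha, alpha0 = sp.symbols('alpha alpha0', positive=True)
a = (4 - alpha - 2 * alpha0) / (2 - alpha)

# algebraic identity: a = 1 + (2 - 2*alpha0) / (2 - alpha)
assert sp.simplify(a - (1 + (2 - 2 * alpha0) / (2 - alpha))) == 0

# a -> 1 as alpha0 -> 1
assert sp.simplify(sp.limit(a, alpha0, 1) - 1) == 0

# spot check a > 1 on a grid with alpha0 in (0,1), alpha in (0,2)
for al in (0.1, 0.9, 1.9):
    for al0 in (0.1, 0.5, 0.9):
        assert a.subs({alpha: al, alpha0: al0}) > 1
```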
6.1 Upper bound for \(\lambda ^*(n)\)
Set
It may be helpful to note that \(a,b\in (1,2)\). For each positive number \(\beta \), we define two auxiliary functions \(\psi _ \beta \) and \(\phi _ \beta \). The function \(\psi _ \beta :(0,\infty )\rightarrow (0,\infty )\) is defined by
and \(\phi _ \beta :(0,\infty )\rightarrow (0,\infty )\) is uniquely defined by the relation
For every fixed \(x>0\), \(\phi _ \beta (x)\) can be recognized as the unique minimizer of the function
on \((0,\infty )\). Together with (6.5), it follows that
for every \(\beta ,x>0\). Relation (6.5) implies that \(\phi _ \beta \) is strictly increasing.
Theorem 6.1
Assume conditions (H.1) or (H.2), condition (S) and the absolute continuity of \(\mu \). Suppose that \(u_0\) satisfies
for some \(\beta >0\), then
where the function \(g_\beta (\lambda )=\psi _ \beta (\phi _ \beta (\lambda ))\) is given by
and \(\phi _\beta \) is characterized by (6.5).
Proof
It suffices to show that for any \(\lambda >0\),
We write
Together with Corollary 5.4, it suffices to show the inequality
We observe that by the triangle inequality,
Hence, together with (6.7), we see that for every \(|x|\ge \lambda t^{\frac{a+1}{2}} \)
Thus,
which implies (6.10). \(\square \)
As \(\beta \) tends to infinity, \(\phi _\beta (\lambda )\) tends to zero and behaves as \(\left( \lambda / b\beta \right) ^{\frac{1}{b-1}}\). Therefore, \(g_\beta (\lambda )\) behaves as \(\frac{1}{2} \lambda ^2\). These facts lead to the following result.
Corollary 6.2
Under the assumptions of Theorem 6.1, if \(u_0\) satisfies \( \int _{\mathbb {R}^\ell } e^{\beta |y|^b}u_0(y)dy<\infty \) for all \(\beta >0\), then
6.2 Lower bound for \( \lambda _*(n) \)
The main result is the following.
Theorem 6.3
Assume conditions (H.2) and (S). Suppose that \(u_0\) is non-trivial and non-negative. In addition, we assume that
for any \(\lambda >0\). Then,
Proof
Set
To derive a lower bound for \(I_t\) we proceed as follows. We will make use of the notation
Then, by the Feynman-Kac formula for the moments of the solution in terms of Brownian bridges proved in Proposition 4.3, we have
For each \(\varepsilon >0\), \(p>1\), applying Hölder's inequality, we see that
Notice that, from (4.5) we can write
Substituting (6.13) into (6.12) yields
Choosing \(\varepsilon =\varepsilon (t)=\delta t^{1-a}\) with \(\delta >0\), from (5.24) we obtain
In addition, from our assumption,
In other words, \(I_1(t)\) and \(I_2(t)\) are negligible in the limits \(t\rightarrow \infty \), \(\delta \rightarrow 0\) and \(p\rightarrow 1\).
We now consider \(I_3\). It can be written as
where \(u_{\varepsilon ,p}(t,x)\) denotes the solution of Eq. (1.1) with initial condition \(u_0\) and spatial covariance \(\frac{1}{p}\gamma _{\varepsilon (t)}\), where \(\varepsilon =\varepsilon (t) = \delta t^{1-a}\). Define \(\mathcal {H}_{p,\varepsilon }\) as in (2.1), but with \(\mu (d\xi )\) replaced by \(\frac{1}{p}e^{-\varepsilon |\xi |^2}\mu (d\xi )\).
For every \(\phi \) in \(\mathcal {H}_{p,\varepsilon }\), we denote by \(Z(\phi )\) the (Wick) exponential functional
By the Feynman-Kac formula for the solution of Eq. (1.1), when the spatial covariance is bounded, we obtain
where \(\psi _{x,y}(s,z)= \delta (B_{0,t} (t-s) +x + \frac{t-s}{t} y -z)\mathbf{1}_{[0,t]}(s) \) and
Let q be such that \(\frac{1}{n}+\frac{1}{q}=1\). Using Hölder's inequality, for any \(\phi \in \mathcal {H}_{p,\varepsilon }\) we have
We are going to choose an element \(\phi \), which depends on t and x.
Our next step is the computation of the inner product \( \langle \phi , \psi _{x,y} \rangle _{\mathcal {H}_{p,\varepsilon }} \). We can write
Set
where \(a=\frac{4-\alpha -2\alpha _0}{2-\alpha }\) and \(c=(n-1)^{\frac{2}{2-\alpha }}\). Making the change of variables \(s \rightarrow \frac{t}{t_n} s\) and \(r \rightarrow tr\) and using the scaling property for Brownian bridge, we obtain that
Finally, the change of variables \(z\rightarrow \sqrt{\frac{t}{t_n}}z\) yields
Choosing \(\phi \) of the form
where \(\widehat{\phi }\) satisfies
we can write
Set
Then, we obtain
On the other hand, for this choice of \(\phi \), we obtain
The change of variables \(s\rightarrow t-ts\), \(r\rightarrow t-tr\), \(z\rightarrow \sqrt{\frac{t}{t_n }} z + x\) and \(w\rightarrow \sqrt{\frac{t}{t_n }} w + x\) leads to
Substituting (6.17) and (6.18) into (6.15), we get
This together with (6.14) leads to the inequality, for any \(K>0\),
where
and
We are going to analyze these three terms and this will be done in several steps.
Step 1. Using the properties of the initial condition, we claim that if \(\lambda < K\sqrt{c}\), then
Notice first that \(\sqrt{tt_n}= \sqrt{c} t^{\frac{a+1}{2}}\). Since \(u_0\) is non-trivial, there exists \(M>0\) such that \(\int _{|y|\le M}u_0(y)dy>0\). For t large enough, \( \lambda t^{\frac{ a+1}{2}} +M \le K\sqrt{c} t^{\frac{a+1}{2}}\). Therefore, choosing \(x_0\) such that \(|x_0| =\lambda t^{\frac{ a+1}{2}}\) implies that
Thus we obtain
which is (6.19).
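The identity \(\sqrt{tt_n}= \sqrt{c}\, t^{\frac{a+1}{2}}\) invoked at the start of this step follows directly from \(t_n=c\,t^a\) with \(c=(n-1)^{\frac{2}{2-\alpha }}\); a one-line symbolic confirmation:

```python
import sympy as sp

t, c, a = sp.symbols('t c a', positive=True)
t_n = c * t ** a  # t_n = c * t^a, with c = (n-1)^(2/(2-alpha))
lhs = sp.sqrt(t * t_n)
rhs = sp.sqrt(c) * t ** ((a + 1) / 2)
assert sp.simplify(lhs - rhs) == 0
```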
Step 2. We can write
For any \(\rho \in (0,1)\), we can write
From (2.8), we get
where \(A_R =\{ \sup _{0\le s\le \rho t} |B(s)| \le R\}\) for \(R >0\). Notice that, if \(|y| \le Kt\), on the set \(A_R\) we have
On the other hand, by Proposition 3.1 of [5] we obtain
where
and \(\mathcal {F}_\ell (B_R)\) is the set of smooth functions on \(B_R:=\{ x: |x| \le R\}\) with \(\Vert g\Vert _{L^2(B_R)}=1\) and \(g(\partial B_R)=\{0\}\). For this result we need that for each \(0\le s\le 1\), the function \(f(\rho s, \cdot )\) is bounded and continuous and the family of functions \(\{s\rightarrow f(\rho s, x), x\in \mathbb {R}^\ell \}\) is equicontinuous in [0, 1]. These properties are a consequence of assumption (6.16). In conclusion, we have proved that
From (6.22), (6.21) and (6.19), letting \(K\downarrow \lambda /\sqrt{c}\) and \(R\uparrow \infty \), we obtain
for any function g(s, x) in \(\mathcal {A}_\ell \), where \(\mathcal {A}_\ell \) has been defined in (5.2). We can write
Making the change of variables \(s\rho \rightarrow s\) yields
Now choose the function \(\widehat{\phi }\) of the form \(\widehat{\phi } (r,x)= g^2(\frac{r}{\rho }, x) \mathbf{1}_{[0,\rho ]} (r)\). With this choice we obtain
Step 3. With the above choice for \(\widehat{\phi }\) and letting \(p\rightarrow 1\), the term \(I_{3,1}\) can be written as
Finally, from (6.23) and (6.24), we obtain
Letting \(\delta \downarrow 0\), we obtain
Now we write \(\widehat{g}(r,x)= \sqrt{\varkappa } g(r,\varkappa x)\), where \(\varkappa \) is a constant whose value will be chosen below, and we obtain, using the scaling properties of \(\gamma \),
Finally, choosing \(\varkappa =2^{ \frac{1}{\alpha -2}} \rho ^{\frac{1-\alpha _0}{2-\alpha }}\) and taking the supremum over g, we obtain
Optimizing in \(\rho \) produces the lower bound
The proof is now complete. \(\square \)
Remark 6.4
Putting together the results from Corollary 6.2 and Theorem 6.3 we obtain, for a nontrivial \(u_0\) with compact support and assuming a covariance satisfying conditions (H.2), (S) and the absolute continuity of \(\mu \),
Notice that when \(\alpha _0 \uparrow 1\) the constant a converges to 1 and the above factor converges to \(\frac{1}{2}\). In this sense, in comparison with (1.9), our result is not optimal. We conjecture that the constant on the left-hand side should be 1, but our techniques do not allow us to prove this.
References
Balan, R., Chen, L.: Parabolic Anderson model with space-time homogeneous Gaussian noise and rough initial condition. J. Theor. Probab. (to appear)
Chen, X.: Moment asymptotics for parabolic Anderson equation with fractional time-space noise: in Skorohod regime. Ann. Inst. Henri Poincaré Probab. Stat. 53, 819–841 (2017)
Chen, X., Hu, Y., Song, J., Song, X.: Temporal asymptotics for fractional parabolic Anderson model, arXiv preprint
Chen, L., Dalang, R.C.: Moments and growth indices for the nonlinear stochastic heat equation with rough initial conditions. Ann. Probab. 43(6), 3006–3051 (2015)
Chen, X., Hu, Y., Song, J., Xing, F.: Exponential asymptotics for time-space Hamiltonians. Ann. Inst. Henri Poincaré Probab. Stat. 51(4), 1529–1561 (2015)
Conus, D., Khoshnevisan, D.: On the existence and position of the farthest peaks of a family of stochastic heat and wave equations. Probab. Theory Relat. Fields 152(3–4), 681–701 (2012)
Donsker, M.D., Varadhan, S.R.S.: Asymptotics for the polaron. Comm. Pure Appl. Math. 36(4), 505–528 (1983)
Hu, Y., Huang, J., Nualart, D.: On the intermittency front of stochastic heat equation driven by colored noises. Electron. Commun. Probab. 21, Paper No. 21, 13 pp. (2016)
Hu, Y., Huang, J., Nualart, D., Tindel, S.: Stochastic heat equations with general multiplicative Gaussian noises: Hölder continuity and intermittency. Electron. J. Probab. 20, Paper No. 55, 50 pp. (2015)
Huang, J., Lê, K., Nualart, D.: Large time asymptotic for the parabolic Anderson model driven by spatially correlated noise. Ann. Inst. Henri Poincaré Probab. Stat. (to appear)
Hu, Y., Nualart, D.: Stochastic heat equation driven by fractional noise and local time. Probab. Theory Relat. Fields 143(1–2), 285–328 (2009)
Hanche-Olsen, H., Holden, H.: The Kolmogorov-Riesz compactness theorem. Expo. Math. 28(4), 385–394 (2010)
Nakao, S.: On the spectral distribution of the Schrödinger operator with random potential. Jpn. J. Math. 3(1), 111–139 (1977)
Nualart, D.: The Malliavin Calculus and Related Topics, 2nd edn. Probability and Its Applications. Springer, Berlin (2006)
Acknowledgements
J. Huang and K. Lê were supported by the NSF Grant No. 0932078 000 while they visited the Mathematical Sciences Research Institute in Berkeley, California in Fall 2015, during which the project was initiated. D. Nualart was supported by the NSF Grant No. DMS1512891 and the ARO Grant No. FED0070445. K. Lê thanks PIMS for its support through the Postdoctoral Training Centre in Stochastics. The authors wish to thank the referees who pointed out a gap in the original proof of Theorem 5.2, which led to a new proof in the revised version.
Cite this article
Huang, J., Lê, K. & Nualart, D. Large time asymptotics for the parabolic Anderson model driven by space and time correlated noise. Stoch PDE: Anal Comp 5, 614–651 (2017). https://doi.org/10.1007/s40072-017-0099-0