Abstract
In this paper, we present a quantitative central limit theorem for the d-dimensional stochastic heat equation driven by a Gaussian multiplicative noise, which is white in time and has a spatial covariance given by the Riesz kernel. We show that the spatial average of the solution over a Euclidean ball is close to a Gaussian distribution when the radius of the ball tends to infinity. Our central limit theorem is stated in the total variation distance and is proved using Malliavin calculus and Stein's method. We also provide a functional central limit theorem.
1 Introduction
We consider the stochastic heat equation
$$\begin{aligned} \frac{\partial u}{\partial t} = \frac{1}{2}\Delta u + \sigma (u)\, {\dot{W}}(t,x) \end{aligned}$$(1.1)
on \({\mathbb {R}}_+\times {\mathbb {R}}^d\) with initial condition \(u(0,x)=1\), where \({\dot{W}}(t,x)\) is a centered Gaussian noise with covariance \( {\mathbb {E}}\big [ {\dot{W}}(t,x) {\dot{W}}(s,y)\big ] = \delta _0(t-s) | x-y|^{-\beta }\), \( 0<\beta < \min (d,2)\). In this paper, \(\vert \cdot \vert \) denotes the Euclidean norm and we assume the nonlinear term \(\sigma \) is a Lipschitz function with \(\sigma (1)\ne 0\); see point (iii) in Remark 3.
Our first result is the following quantitative central limit theorem concerning the spatial average of the solution u(t, x) over \(B_R:= \{ x\in {\mathbb {R}}^d: |x| \le R\}\).
Theorem 1.1
For all \( t > 0\), there exists a constant \(C = C(t,\beta )\), depending on t and \(\beta \), such that
$$\begin{aligned} d_{\mathrm{TV}}\left( \frac{1}{\sigma _R}\int _{B_R} \big [ u(t,x) - 1 \big ]\, dx\,,\, Z \right) \le C R^{-\beta /2}, \end{aligned}$$
where \(d_{\mathrm{TV}}\) denotes total variation distance (see (1.2)), \(Z\sim N(0,1)\) is a standard normal random variable and \(\sigma _R^2 = \mathrm{Var} \big ( \int _ {B_R} [ u(t,x) - 1 ]\,dx \big )\). Moreover, the normalization \(\sigma _R\) is of order \(R^{d-\frac{\beta }{2}}\), as \(R\rightarrow +\infty \) (see (1.8)).
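The growth rate \(\sigma _R \asymp R^{d - \beta /2}\) can be seen in a toy discrete analogue, which is not from the paper: the correlation \((1+|i-j|)^{-\beta }\) and the exact-summation approach below are purely illustrative assumptions. For a stationary Gaussian family with such Riesz-type correlations in dimension one, the variance of partial sums grows like \(n^{2-\beta }\), mirroring \(\sigma _R^2 \asymp R^{2d-\beta }\) with \(d=1\).

```python
import numpy as np

def partial_sum_variance(n, beta):
    """Var(X_1 + ... + X_n) for a stationary family with
    Cov(X_i, X_j) = (1 + |i - j|)**(-beta), a Riesz-type toy covariance."""
    k = np.arange(1, n)
    # sum over all pairs: n diagonal terms plus 2 * (n - k) pairs at lag k
    return n + 2.0 * np.sum((n - k) * (1.0 + k) ** (-beta))

beta = 0.5
v1 = partial_sum_variance(2000, beta)
v2 = partial_sum_variance(4000, beta)
growth = np.log2(v2 / v1)  # doubling exponent; should approach 2 - beta = 1.5
```

Doubling n multiplies the variance by roughly \(2^{2-\beta }\), which is the discrete counterpart of the statement that \(\sigma _R^2\) is of order \(R^{2d-\beta }\).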
Remark 1
In the above result we have assumed \(u(0,x)=1\) for the sake of simplicity. However, one can easily extend the result to cover a more general deterministic initial condition u(0, x). In fact, this is the topic of Corollary 3.3.
Our proof of the above result relies mainly on the Malliavin-Stein approach. This approach was introduced by Nourdin and Peccati in [9] to, among other things, quantify Nualart and Peccati's fourth moment theorem in [14]. Notably, if F is a Malliavin differentiable (that is, F belongs to the Sobolev space \({\mathbb {D}}^{1,2}\)) centered functional of a Gaussian field with unit variance, the well-known Malliavin-Stein bound implies
$$\begin{aligned} d_{\mathrm{TV}}(F, Z) \le 2\, {\mathbb {E}}\big [ \big \vert 1 - \langle DF, -DL^{-1}F \rangle _{\mathfrak {H}} \big \vert \big ], \end{aligned}$$
where D is the Malliavin derivative, \(L^{-1}\) is the pseudo-inverse of the Ornstein-Uhlenbeck operator and \(\langle \cdot , \cdot \rangle _{\mathfrak {H}}\) denotes the inner product in the Hilbert space \({\mathfrak {H}}\) associated with the covariance of W; see also the monograph [10].
It was observed in [16] that instead of \( \langle DF, -DL^{-1}F \rangle _{\mathfrak {H}}\), one can work with the term \( \langle DF, v \rangle _{\mathfrak {H}}\) once F can be represented as a Skorohod integral \(\delta (v)\); see the following result from [16, Proposition 3.1] (see also [8, 12]).
Proposition 1.2
If F is a centered random variable in the Sobolev space \({\mathbb {D}}^{1,2}\) with unit variance such that \(F = \delta (v)\) for some \({\mathfrak {H}}\)-valued random variable v, then, with \(Z \sim N (0,1)\),
$$\begin{aligned} d_{\mathrm{TV}}(F, Z) \le 2 \sqrt{ \mathrm{Var} \big [ \langle DF, v \rangle _{\mathfrak {H}} \big ] }. \end{aligned}$$(1.4)
This estimate enables us to bring in tools from stochastic analysis.
Remark 2
Throughout this paper we only work with the total variation distance. However, we point out that we can get the same rate in other frequently used distances. Indeed, if \(\texttt {d}\) denotes either the Kolmogorov distance, the Wasserstein distance or the Fortet-Mourier distance, we have as well
where F, Z are given as in (1.4); see e.g. [10, Chapter 3] for the properties of Stein's solution. Thus, proceeding along exactly the same lines as in this paper, we would obtain the same rate in these distances.
It is known (see e.g. [4, 18]) that Eq. (1.1) has a unique mild solution \(\{u(t,x): t\ge 0, x \in {\mathbb {R}}^d\}\), in the sense that it is adapted to the filtration generated by W, uniformly bounded in \(L^2(\Omega )\) over \( [0,T]\times {\mathbb {R}}^d\) (for any finite T) and satisfies the following integral equation in the sense of Dalang-Walsh
$$\begin{aligned} u(t,x) = 1 + \int _0^t \int _{{\mathbb {R}}^d} p_{t-s}(x-y)\, \sigma (u(s,y))\, W(ds, dy), \end{aligned}$$(1.5)
where \(p_t(x) = (2\pi t)^{-d/2}\exp \big ( - |x|^2/ (2t) \big )\) denotes the heat kernel and is the fundamental solution to the corresponding deterministic heat equation.
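Since the proofs repeatedly combine heat kernels via the semigroup property \(p_s * p_t = p_{s+t}\), here is a quick numerical sanity check in \(d=1\); the grid and step size below are arbitrary choices of ours, not taken from the paper.

```python
import numpy as np

def heat_kernel(t, x):
    """d = 1 heat kernel p_t(x) = (2*pi*t)**(-1/2) * exp(-x**2 / (2t))."""
    return np.exp(-x**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

dx = 0.01
x = np.arange(-20.0, 20.0, dx)
s, t = 0.3, 0.7
# Riemann-sum approximation of the convolution (p_s * p_t)(0),
# compared with the semigroup identity p_{s+t}(0)
conv_at_0 = np.sum(heat_kernel(s, x) * heat_kernel(t, -x)) * dx
err = abs(conv_at_0 - heat_kernel(s + t, 0.0))
```

The Riemann sum over a wide grid reproduces \(p_{s+t}(0) = (2\pi (s+t))^{-1/2}\) to high accuracy, since the integrand is smooth and rapidly decaying.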
Let us introduce some handy notation. For fixed \(t > 0\), we define
$$\begin{aligned} G_R(t) := \int _{B_R} \big [ u(t,x) - 1 \big ]\, dx \quad \text {and} \quad \varphi _R(s,y) := \int _{B_R} p_{t-s}(x-y)\, dx. \end{aligned}$$(1.6)
If we put \(F_R(t) := G_R(t)/\sigma _R\), then, by Fubini's theorem, we can write
$$\begin{aligned} F_R(t) = \delta (v_R), \end{aligned}$$(1.7)
with \(v_R(s,y) = \sigma _R^{-1}{\mathbf {1}} _{[0,t]}(s) \varphi _R(s,y) \sigma (u(s,y) )\) taking into account that the Dalang-Walsh integral (1.5) is a particular case of the Skorohod integral; see [11].
By Proposition 1.2, the proof of Theorem 1.1 reduces to estimating the variance of \( \langle DF_R(t), v_R\rangle _{\mathfrak {H}}\). One of the key steps, our Proposition 3.2, provides the exact asymptotic behavior of the normalization \(\sigma _R^2\):
$$\begin{aligned} \sigma _R^2 \sim k_\beta R^{2d-\beta } \int _0^t \eta ^2(s)\, ds, \quad \text {as } R \rightarrow +\infty , \end{aligned}$$(1.8)
and throughout this paper, we will reserve the notation \(k_\beta \) and \(\eta (\cdot )\) for
$$\begin{aligned} k_\beta := \int _{B_1^2} \vert x - y \vert ^{-\beta }\, dx\, dy \quad \text {and} \quad \eta (s) := {\mathbb {E}}\big [ \sigma (u(s,y)) \big ]. \end{aligned}$$(1.9)
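Assuming, as the box analogue in Remark 3(v) suggests, that \(k_\beta \) is the double integral of the Riesz kernel over \(B_1^2\), in \(d = 1\) (where \(B_1 = [-1,1]\)) it has the closed form \(2^{3-\beta }/((1-\beta )(2-\beta ))\). The following sketch, our illustration with the arbitrary choice \(\beta = 0.5\), checks this with a midpoint rule that never evaluates the integrable singularity.

```python
import numpy as np

beta = 0.5
# closed form of k_beta = int_{[-1,1]^2} |x - y|^{-beta} dx dy  (d = 1)
k_exact = 2.0 ** (3.0 - beta) / ((1.0 - beta) * (2.0 - beta))

# numerical check: reduce to 2 * int_0^2 (2 - u) u^{-beta} du and apply the
# midpoint rule, which avoids the integrable singularity at u = 0
n = 200000
u = (np.arange(n) + 0.5) * (2.0 / n)
k_num = 2.0 * np.sum((2.0 - u) * u ** (-beta)) * (2.0 / n)
```

For \(\beta = 0.5\) the exact value is \(16\sqrt{2}/3 \approx 7.5425\), and the quadrature agrees to well under one percent.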
Remark 3
-
(i)
The definition of \(\eta \) does not depend on the spatial variable, due to the strict stationarity of the solution, meaning that the finite-dimensional distributions of the process \(\{u(t, x+y),x\in {\mathbb {R}}^d\}\) do not depend on y; see [4].
-
(ii)
Another consequence of the strict stationarity is that the quantity
$$\begin{aligned} K_p(t):={\mathbb {E}}\big [ |u(t,x)| ^p \big ] \end{aligned}$$(1.10)
does not depend on x. Moreover, \(K_p(\cdot )\) is uniformly bounded on compact sets.
-
(iii)
Under our constant initial condition (i.e. \(u(0,x)\equiv 1\)), the assumption \(\sigma (1) \ne 0\) is both necessary and sufficient for our purposes. It is necessary, in view of the usual Picard iteration, to exclude the situation where \(u(t,x)\equiv 1\) is the unique solution, and it is sufficient to guarantee that the integral in (1.8) is nonzero. Moreover, we have the following chain of equivalences
$$\begin{aligned} \sigma (1)=0 \Leftrightarrow \sigma _R=0,\ \forall R>0 \Leftrightarrow \sigma _R=0\hbox { for some }R>0 \Leftrightarrow {\displaystyle \lim _{R\rightarrow \infty }\sigma _R^2 R^{\beta -2d}= 0,} \end{aligned}$$
whose verification can be done in the same way as in [5, Lemma 3.4]; we leave it as an exercise for interested readers.
-
(iv)
Due to the stationarity again, Theorem 1.1 still holds true when we replace \(B_R\) by \( \{ x\in {\mathbb {R}}^d: | x - a_R | \le R \}\), with \(a_R\) possibly varying as \(R\rightarrow +\infty \). Moreover, the normalization \(\sigma _R\) in this translated version remains unchanged. To see this translational “invariance”, one can alternatively go through exactly the same arguments as in the proof of Theorem 1.1. Note that this invariance under translation justifies the statement in our abstract.
-
(v)
By going through exactly the same arguments as in the proof of Theorem 1.1, we can obtain the following result: if we replace the ball \(B_R\) by the box \(\Lambda _R: = \big \{ (x_1,\ldots , x_d)\in {\mathbb {R}}^d: \vert x_i\vert \le R, i=1,\dots , d \big \}\), Theorem 1.1 still holds true with a slightly different normalization \(\sigma '_R\) given by
$$\begin{aligned} \sigma '_R \sim \left( \int _0^t \eta ^2(s)\int _{\Lambda _1^2} |z- y | ^{-\beta } dzdy \, ds\right) ^{1/2} R^{d-\frac{\beta }{2}}\,, \quad \text {as }R\rightarrow +\infty . \end{aligned}$$
Our second main result is the following functional version of Theorem 1.1.
Theorem 1.3
Fix any finite \(T>0\) and recall (1.9). Then, as \(R\rightarrow +\infty \), we have
$$\begin{aligned} \Big \{ R^{\frac{\beta }{2}-d} \int _{B_R} \big [ u(t,x) - 1 \big ]\, dx \Big \}_{t\in [0,T]} \Rightarrow \Big \{ \sqrt{k_\beta } \int _0^t \eta (s)\, dY_s \Big \}_{t\in [0,T]}, \end{aligned}$$
where Y is a standard Brownian motion and the above weak convergence takes place on the space of continuous functions C([0, T]).
A similar problem for the stochastic heat equation on \({\mathbb {R}}\) has recently been considered in [8], but only in the case of a space-time white noise, where the factor \(\eta ^2(s)\) appearing in the limiting variance (1.8) is replaced by \( {\mathbb {E}}\big [ \sigma ^2(u(s,y)) \big ]\). The same phenomenon appeared for the one-dimensional wave equation in the recent paper [5], whose authors also considered the Riesz kernel for a spatially fractional noise, which corresponds to the case \(\beta =2-2H\).
Remark 4
When \(\sigma (x)=x\), the random variable \(G_R(t)\) defined in (1.6) has an explicit Wiener chaos expansion:
In this linear case, the asymptotic result (1.8) reduces to \(\sigma _R^2 \sim t k_\beta R^{2d-\beta }\), while the first chaotic component of \(G_R(t)\) is centered Gaussian with variance equal to
see (3.9). Therefore, due to the orthogonality of Wiener chaoses with different orders, we can conclude that \(F_R(t) = G_R(t)/\sigma _R\) is asymptotically Gaussian. It is worth pointing out that, unlike in our case, for the linear stochastic heat equation driven by space-time white noise as considered in [8], the central limit is chaotic, meaning that each projection on the Wiener chaos contributes to the Gaussian limit. In this case, the proof of asymptotic normality could be based on the chaotic central limit theorem from [7, Theorem 3] (see also [10, Section 6.3]).
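The variance computation behind this remark can be sketched as follows; this display is ours, not verbatim from the paper, and uses the notation of Sect. 1. The first chaos of \(G_R(t)\) is the Wiener integral of \(\varphi _R\), and its variance reproduces the asymptotics (1.8) with \(\eta \equiv 1\):

```latex
% First Wiener chaos of G_R(t) when sigma(x) = x (sketch):
J_1(t) = \int_0^t \int_{\mathbb{R}^d} \varphi_R(s,y)\, W(ds, dy),
\qquad \varphi_R(s,y) = \int_{B_R} p_{t-s}(x-y)\, dx,
\\
\operatorname{Var} J_1(t)
  = \int_0^t \int_{\mathbb{R}^{2d}} \varphi_R(s,y)\, \varphi_R(s,z)\,
    |y - z|^{-\beta}\, dy\, dz\, ds
  \sim t\, k_\beta\, R^{2d-\beta},
\qquad R \to +\infty.
```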
The rest of the paper is organized as follows. We present the proofs in Sects. 3 and 4, after we briefly recall some preliminaries in Sect. 2.
Throughout the paper, we will denote by C a generic constant that may depend on the fixed time T, the parameter \(\beta \) and the function \(\sigma \), and that can vary from line to line. We also denote by \(\Vert \cdot \Vert _p\) the \(L^p(\Omega )\)-norm and by L the Lipschitz constant of \(\sigma \).
2 Preliminaries
Let us build an isonormal Gaussian process from the colored noise W as follows. We begin with a centered Gaussian family of random variables \( \big \{W(\psi ):\, \psi \in C_c^{\infty }\left( [0,\infty )\times {\mathbb {R}}^{d}\right) \big \} \), defined on some complete probability space, such that
$$\begin{aligned} {\mathbb {E}}\big [ W(\psi ) W(\phi ) \big ] = \int _0^\infty \int _{{\mathbb {R}}^{2d}} \psi (s,x)\, \phi (s,y)\, \vert x - y \vert ^{-\beta }\, dx\, dy\, ds =: \langle \psi , \phi \rangle _{\mathfrak {H}}, \end{aligned}$$
where \(C_c^{\infty }\left( [0,\infty )\times {\mathbb {R}}^{d}\right) \) is the space of infinitely differentiable functions with compact support on \([0,\infty )\times {\mathbb {R}}^{d}\). Let \({\mathfrak {H}}\) be the Hilbert space defined as the completion of \(C_c^{\infty }\left( [0,\infty )\times {\mathbb {R}}^{d}\right) \) with respect to the above inner product. By a density argument, we obtain an isonormal Gaussian process \(W:=\big \{W(\psi ):\, \psi \in {\mathfrak {H}}\big \}\), meaning that for any \( \psi , \phi \in {\mathfrak {H}}\),
$$\begin{aligned} {\mathbb {E}}\big [ W(\psi ) W(\phi ) \big ] = \langle \psi , \phi \rangle _{\mathfrak {H}}. \end{aligned}$$
Let \({\mathbb {F}}=({\mathcal {F}}_t,t\ge 0)\) be the natural filtration of W, with \({\mathcal {F}}_t\) generated by \(\big \{W(\phi ):\phi \) continuous and with compact support in \([0,t]\times {\mathbb {R}}^d\big \}\). Then for any \({\mathbb {F}}\)-adapted, jointly measurable random field X such that
$$\begin{aligned} \int _0^\infty \int _{{\mathbb {R}}^{2d}} {\mathbb {E}}\big [ \vert X(s,y) X(s,y') \vert \big ]\, \vert y - y'\vert ^{-\beta }\, dy\, dy'\, ds < \infty , \end{aligned}$$(2.1)
the stochastic integral
is well-defined in the sense of Dalang-Walsh and the following isometry property is satisfied:
$$\begin{aligned} {\mathbb {E}}\left[ \left( \int _0^\infty \int _{{\mathbb {R}}^d} X(s,y)\, W(ds,dy) \right) ^2 \right] = \int _0^\infty \int _{{\mathbb {R}}^{2d}} {\mathbb {E}}\big [ X(s,y) X(s,y') \big ]\, \vert y - y'\vert ^{-\beta }\, dy\, dy'\, ds. \end{aligned}$$(2.2)
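The covariance structure above can be probed numerically. The following sketch is a discretization we chose for illustration, not taken from the paper: in \(d=1\) on \([0,1]\), the integral of the Riesz kernel over a pair of grid cells has a closed form, which gives an exact covariance matrix for the cell noises \(W({\mathbf {1}}_{\text {cell}_i})\); sampling them by Cholesky factorization lets us check the isometry against the corresponding quadratic form.

```python
import numpy as np

beta, n = 0.5, 50
edges = np.linspace(0.0, 1.0, n + 1)
a, b = edges[:-1], edges[1:]

def Phi(u):
    # second antiderivative of |u|^{-beta}: Phi''(u) = |u|^{-beta}
    return np.abs(u) ** (2.0 - beta) / ((1.0 - beta) * (2.0 - beta))

# C[i, j] = int_{cell_i} int_{cell_j} |x - y|^{-beta} dx dy, via exact
# second differences of Phi (no quadrature error)
C = (Phi(b[:, None] - a[None, :]) - Phi(a[:, None] - a[None, :])
     - Phi(b[:, None] - b[None, :]) + Phi(a[:, None] - b[None, :]))

# sample the cell noises jointly and compare the empirical variance of
# W(psi) ~ sum_i psi(mid_i) W(1_{cell_i}) with the quadratic form psi^T C psi
rng = np.random.default_rng(0)
L = np.linalg.cholesky(C + 1e-12 * np.eye(n))  # tiny jitter for numerical safety
psi = 0.5 * (a + b)                            # psi(x) = x at cell midpoints
samples = rng.standard_normal((20000, n)) @ L.T
emp_var = np.var(samples @ psi)
target = psi @ C @ psi
```

The total mass C.sum() telescopes to \(2\Phi (1) = 2/((1-\beta )(2-\beta ))\), and the empirical variance matches the quadratic form up to Monte Carlo error.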
In the sequel, we recall some basic facts on Malliavin calculus and we refer readers to the books [11, 12] for any unexplained notation and result.
Denote by \(C_p^{\infty }({\mathbb {R}}^n)\) the space of smooth functions with all their partial derivatives having at most polynomial growth at infinity. Let \({\mathcal {S}}\) be the space of simple functionals of the form \(F = f(W(h_1), \dots , W(h_n)) \) for \(f\in C_p^{\infty }({\mathbb {R}}^n)\) and \(h_i \in {\mathfrak {H}}\), \(1\le i \le n\), \(n\in {\mathbb {N}}\). Then, DF is the \({\mathfrak {H}}\)-valued random variable defined by
$$\begin{aligned} DF = \sum _{i=1}^n \frac{\partial f}{\partial x_i}\big ( W(h_1), \dots , W(h_n) \big )\, h_i. \end{aligned}$$
The derivative operator D is closable from \(L^p(\Omega )\) into \(L^p(\Omega ; {\mathfrak {H}})\) for any \(p \ge 1\) and we let \({\mathbb {D}}^{1,p}\) be the completion of \({\mathcal {S}}\) with respect to the norm
$$\begin{aligned} \Vert F \Vert _{1,p} := \Big ( {\mathbb {E}}\big [ \vert F \vert ^p \big ] + {\mathbb {E}}\big [ \Vert DF \Vert _{\mathfrak {H}}^p \big ] \Big )^{1/p}. \end{aligned}$$
We denote by \(\delta \) the adjoint of D, characterized by the integration-by-parts formula
$$\begin{aligned} {\mathbb {E}}\big [ F \delta (u) \big ] = {\mathbb {E}}\big [ \langle DF, u \rangle _{\mathfrak {H}} \big ] \end{aligned}$$
for any \(F \in {\mathbb {D}}^{1,2}\) and \(u\in \mathrm{Dom} \, \delta \subset L^2(\Omega ; {\mathfrak {H}})\), the domain of \(\delta \). The operator \(\delta \) is also called the Skorohod integral because, in the case of the Brownian motion, it coincides with Skorohod's extension of the Itô integral (see e.g. [6, 13]). In our context, any adapted random field X satisfying (2.1) belongs to \(\text {Dom}\,\delta \), and \(\delta (X)\) coincides with the Dalang-Walsh stochastic integral. As a consequence, the mild formulation (1.5) can be rewritten as
$$\begin{aligned} u(t,x) = 1 + \delta \big ( {\mathbf {1}}_{[0,t]}(\cdot )\, p_{t-\cdot }(x-*)\, \sigma (u(\cdot ,*)) \big ). \end{aligned}$$
It is known that for any \((t,x)\in {\mathbb {R}}_+\times {\mathbb {R}}^d\), \(u(t,x)\in {\mathbb {D}}^{1,p}\) for any \(p\ge 2\) and the derivative satisfies, for \(t\ge s\), the linear equation
$$\begin{aligned} D_{s,y} u(t,x) = p_{t-s}(x-y)\, \sigma (u(s,y)) + \int _s^t \int _{{\mathbb {R}}^d} p_{t-r}(x-z)\, \Sigma (r,z)\, D_{s,y} u(r,z)\, W(dr, dz), \end{aligned}$$
where \(\Sigma (r,z)\) is an adapted process, bounded by L. If in addition \(\sigma \in C^1({\mathbb {R}})\), then \(\Sigma (r,z)= \sigma '(u(r,z))\). This result is proved in [11, Proposition 2.4.4] in the case of the stochastic heat equation with Dirichlet boundary conditions on [0, 1] driven by a space-time white noise. Its proof can be easily adapted to our case, see also [1, 15].
At the end of this section, we record a technical lemma, whose proof can be found in [3, Lemma 3.11].
Lemma 2.1
For any \(p\in [ 2,+\infty )\), \(0 \le s \le t \le T \) and \(x,y \in {\mathbb {R}}^d\), we have
$$\begin{aligned} \Vert D_{s,y} u(t,x) \Vert _p \le C\, p_{t-s}(x-y), \end{aligned}$$(2.3)
for some constant \(C= C_{T,p}\) that may depend on T and p.
Note that in [5], the p-norm of the Malliavin derivative \(D_{s,y}u(t, x)\) is bounded by a multiple of the fundamental solution of the wave equation. So it is natural to expect that an estimate similar to (2.3) holds for a large family of stochastic partial differential equations.
3 Proof of Theorem 1.1
We first state a lemma, which will be used below.
Lemma 3.1
Let Z be a standard Gaussian vector on \({\mathbb {R}}^d\) and \(\beta \in (0,d)\). Then, for all \(s>0\) and \(x\in {\mathbb {R}}^d\),
$$\begin{aligned} {\mathbb {E}}\big [ \vert x + \sqrt{s}\, Z \vert ^{-\beta } \big ] \le C \big ( \sqrt{s} + \vert x \vert \big )^{-\beta } \end{aligned}$$(3.1)
for some constant C that only depends on d and \(\beta \).
Proof
We want to show that
where C does not depend on s. We first prove this for \(s=1\).
where the constants \(C_1\) and \(C_2\) only depend on \(d,\beta \). So we have proved (3.1) for \(s=1\).
Therefore, we have for general \(s>0\),
$$\begin{aligned} {\mathbb {E}}\big [ \vert x + \sqrt{s}\, Z \vert ^{-\beta } \big ] = s^{-\beta /2}\, {\mathbb {E}}\big [ \vert x/\sqrt{s} + Z \vert ^{-\beta } \big ] \le C s^{-\beta /2} \big ( 1 + \vert x \vert /\sqrt{s} \big )^{-\beta } = C \big ( \sqrt{s} + \vert x \vert \big )^{-\beta }. \end{aligned}$$
That is, we have proved (3.1) for general \(s>0\).
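The exact scaling identity used in this proof, \({\mathbb {E}}\vert x + \sqrt{s} Z \vert ^{-\beta } = s^{-\beta /2}\, {\mathbb {E}}\vert x/\sqrt{s} + Z \vert ^{-\beta }\), can be checked by Monte Carlo; the choices \(d=2\), \(\beta =0.5\), \(s=4\) and the sample size below are arbitrary illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
d, beta, s = 2, 0.5, 4.0
x = np.array([1.0, 0.0])
N = 400000
Z1 = rng.standard_normal((N, d))  # independent samples for each side
Z2 = rng.standard_normal((N, d))

# E|x + sqrt(s) Z|^{-beta}  versus  s^{-beta/2} E|x/sqrt(s) + Z|^{-beta}
lhs = np.mean(np.linalg.norm(x + np.sqrt(s) * Z1, axis=1) ** (-beta))
rhs = s ** (-beta / 2) * np.mean(np.linalg.norm(x / np.sqrt(s) + Z2, axis=1) ** (-beta))
```

Since \(2\beta < d\), the integrand has finite second moment, so the two independent estimates agree up to small Monte Carlo error.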
Now we begin with the estimate (1.8) recalled below.
Proposition 3.2
With the notation (1.6) and (1.9), we have
$$\begin{aligned} \sigma _R^2 = \mathrm{Var}\big ( G_R(t) \big ) \sim k_\beta R^{2d-\beta } \int _0^t \eta ^2(s)\, ds, \quad \text {as } R \rightarrow +\infty . \end{aligned}$$
Proof
We define \( \Psi (s, y)={\mathbb {E}}\big [ \sigma (u(s,0))\sigma (u(s,y)) \big ] \). Then
We claim that
$$\begin{aligned} \lim _{\vert \xi \vert \rightarrow \infty } \sup _{s \in [0,t]} \big \vert \Psi (s,\xi ) - \eta ^2(s) \big \vert = 0. \end{aligned}$$(3.2)
To see (3.2), we apply a version of the Clark-Ocone formula for square integrable functionals of the noise W and we can write
As a consequence, \( {\mathbb {E}}\big [ \sigma (u(s,y) )\sigma (u(s,z) ) \big ] = \eta ^2(s)+ T(s, y,z), \) where
By the chain rule (see [11]), \( D_{r,\gamma }\big ( \sigma (u(s,y)) \big ) = \Sigma (s,y) D_{r,\gamma } u(s,y)\), with \(\Sigma (\cdot ,*)\) \({\mathbb {F}}\)-adapted and bounded by L (in particular, \(\Sigma (s,y) = \sigma '(u(s,y))\) when \(\sigma \) is differentiable). This implies, using (2.3),
for some constant C. Therefore, substituting (3.4) into (3.3) and using the semigroup property in (3.5), we can write
by Lemma 3.1. This completes the verification of (3.2).
Let us continue our proof of Proposition 3.2 by proving that
as \(R\rightarrow +\infty \). Notice that
From (3.2) we deduce that, given any \(\varepsilon > 0\), there exists \(K = K_\varepsilon > 0\) such that \(| \Psi (s, \xi ) -\eta ^2(s)| < \varepsilon \) for all \(s\in [0,t]\) and \(|\xi |\ge K\). Now we divide the integration domain in (3.7) into two parts, \(\vert \xi \vert \le K\) and \(\vert \xi \vert > K\).
Case (i): On the region \(| \xi | \le K\), since \(\Psi (s,y-z) - \eta ^2(s) = T(s,y,z )\) is uniformly bounded, using (3.8), we can write
Case (ii): On the region \([0,t]\times B_K^c\), \(| \Psi (s, \xi ) -\eta ^2(s)| < \varepsilon \). Thus,
with Z a standard Gaussian random vector on \({\mathbb {R}}^d\). This establishes the limit (3.7), since \(\varepsilon > 0\) is arbitrary.
Now, it suffices to show that
as \(R\rightarrow +\infty \). This follows from arguments similar to those used above. In fact, previous computations imply that the left-hand side of (3.9) is equal to
In view of Lemma 3.1 and the dominated convergence theorem, we obtain the limit in (3.9), and hence the proof of the proposition is complete. \(\square \)
Remark 5
By using the same argument as in the proof of Proposition 3.2, we obtain an asymptotic formula for \({\mathbb {E}}\big [ G_R(t_i) G_R(t_j) \big ]\) with \(t_i, t_j\in {\mathbb {R}}_+\), which is a useful ingredient in our proof of Theorem 1.3. For \(t_i , t_j\in {\mathbb {R}}_+\), we write
with \( \varphi ^{(i)}_R(s,y) = \int _{B_R} p_{t_i-s}(x-y)\, dx\), and we obtain
We are now ready to present the proof of Theorem 1.1.
Proof of Theorem 1.1
Recall from (1.7) that
Moreover,
and by (2.2) and Fubini’s theorem, we can write
Therefore, we have the following decomposition \( \langle DF_R,v_R\rangle _{{\mathfrak {H}}} = B_1 +B_2, \) with
and
By Proposition 1.2, \(d_{\mathrm{TV}}(F_R, Z) \le 2 \sqrt{ \mathrm{Var} \big [\langle DF_R,v_R\rangle _{{\mathfrak {H}}} \big ] } \le 2\sqrt{2}(A_1+A_2), \) with
and
The proof will be done in two steps:
Step 1: Let us first estimate the term \(A_2\). By Lemma 2.1,
where \(K_4(t)\) is introduced in (1.10). By Proposition 3.2, we have
where the integral in the spatial variables can be rewritten as
Making the change of variables \(\big (\theta = x-z\), \({\tilde{\theta }}= {\tilde{x}} -{\tilde{z}}\), \(\eta '= x'-y'\), \({\tilde{\eta }}'= {\tilde{x}}' -{\tilde{y}}'\), \(\eta =y-z\) and \({\tilde{\eta }} = {\tilde{y}} -{\tilde{z}}\big )\) yields,
This can be written as
with \(Z_1, \ldots , Z_6\) i.i.d. standard Gaussian vectors on \({\mathbb {R}}^d\); here we have used Lemma 3.1 repeatedly for \(Z_5, Z_3, Z_6, Z_4\) to obtain the last inequality. Making the change of variables \(x\rightarrow Rx\), \({\tilde{x}} \rightarrow R{\tilde{x}} \), \( x' \rightarrow Rx'\) and \({\tilde{x}} ' \rightarrow R {\tilde{x}}'\), we can write
where the second inequality follows from the change of variables \(x-x'=y_1\), \({\tilde{x}} -{\tilde{x}} ' =y_2\), \(x-{\tilde{x}} =y_3\). Taking into account the fact that
we obtain
and it follows immediately that \( A_2 \le C R^{-\beta /2} \).
Step 2: We now estimate \(A_1\). We begin by estimating the covariance
Using a version of the Clark–Ocone formula for square integrable functionals of the noise W, we can write
Then, we represent the covariance (3.11) as
By the chain rule, \( D_{r,z} \big (\sigma (u(s,y) ) \sigma (u(s,y') )\big ) = \sigma \big (u(s,y)\big ) \Sigma (s,y') D_{r,z} u(s,y') + \sigma \big (u(s,y')\big ) \Sigma (s,y) D_{r,z} u(s,y)\). Therefore, \(\big \Vert {\mathbb {E}}\big [ D_{r,z} (\sigma (u(s,y) ) \sigma (u(s,y') ) )| {\mathcal {F}}_r \big ] \big \Vert _2\) is bounded by \( 2K_4(t) L \big \{ \Vert D_{r,z} u(s,y) \Vert _4+ \Vert D_{r,z} u(s,y') \Vert _4\big \} \). Using Lemma 2.1 again, we see that the covariance (3.11) is bounded by
Consequently, the spatial integral in the expression of \(A_1\) can be bounded by
Then, it follows from exactly the same argument as in the estimation of \({\mathbf {I}}_R\) in the previous step that \({\mathbf {J}}_R \le C R^{4d-3\beta }\). Indeed, we have
where we have used Lemma 3.1 for \(Z_5, Z_2, Z_4\) to obtain (3.12). This gives us the desired estimate for \(A_1\) and finishes our proof. \(\square \)
With a slight modification of the above proof, we can extend Theorem 1.1 to more general initial conditions.
Corollary 3.3
Assume that there are two positive constants \(c_1,c_2\) such that \(c_1\le u(0,x)\le c_2\), and assume one of the following two conditions:
- (1)
\(\sigma \) is a non-negative, nondecreasing Lipschitz function with \(\sigma (c_1) > 0\).
- (2)
\(-\sigma \) is a non-negative, nondecreasing Lipschitz function with \(\sigma (c_1) < 0\).
Define
where
for every \(R>0\). Then, there exists a constant \(C = C(t,\beta )\), depending on t and \(\beta \), such that
Proof
We write \({\widetilde{F}}_R(t)\) as \({\widetilde{G}}_R(t) / {\widetilde{\sigma }}_R\). Let \(u_1(t,x)\) be the solution to Eq. (1.1) with initial condition \(u(0,x)=c_1\). According to the weak comparison principle (see [2, Theorem 1.1]), \(u(t,x)\ge u_1(t,x)\) almost surely for every \(t\ge 0\) and \(x\in {\mathbb {R}}^d\), which immediately implies
so that
In view of point (iii) from Remark 3, our assumption \(\sigma (c_1)\ne 0\) guarantees that the last integral (3.15) has the exact order \(R^{2d-\beta }\); more precisely,
where \(\eta _1(s) := {\mathbb {E}}\big [ \sigma (u_1(s,y))\big ]\) does not depend on y and \(\int _0^t \eta _1(s)^2 ds\in (0,\infty )\); see also (1.9).
Now let \(u_2(t,x)\) be the solution to Eq. (1.1) with initial condition \(u(0,x)=c_2\). Notice that by our assumption on \(\sigma \), we get \(\sigma (c_2)\ne 0\). Applying the same weak comparison principle, we deduce that
where \(\eta _2(s) := {\mathbb {E}}\big [ \sigma (u_2(s,y))\big ]\) does not depend on y and \(\int _0^t \eta _2(s)^2 ds\in (0,\infty )\). This implies that
The rest of the proof follows the same lines as the proof of Theorem 1.1. \(\square \)
4 Proof of Theorem 1.3
In order to prove Theorem 1.3, we need to establish the convergence of the finite-dimensional distributions as well as the tightness.
Convergence of finite-dimensional distributions. Fix \(0\le t_1< \cdots <t_m \le T\) and consider
where
with \( \varphi ^{(i)}_R(s,y) = \int _{B_R} p_{t_i -s} (x-y) dx\). Set \({\mathbf {F}}_R=\big ( F_R(t_1), \dots , F_R(t_m) \big )\) and let Z be a centered Gaussian vector on \({\mathbb {R}}^m\) with covariance \((C_{i,j})_{1\le i,j\le m}\) given by
To proceed, we need the following generalization of [10, Theorem 6.1.2]; see [8, Proposition 2.3] for a proof.
Lemma 4.1
Let \(F=( F^{(1)}, \dots , F^{(m)})\) be a random vector such that \(F^{(i)} = \delta (v^{(i)})\) for \(v^{(i)} \in \mathrm{Dom}\, \delta \) and \( F^{(i)} \in {\mathbb {D}}^{1,2}\), \(i = 1,\dots , m\). Let Z be an m-dimensional centered Gaussian vector with covariance \((C_{i,j})_{ 1\le i,j\le m} \). For any \(C^2\) function \(h: {\mathbb {R}}^m \rightarrow {\mathbb {R}}\) with bounded second partial derivatives, we have
where \(\Vert h ''\Vert _\infty : = \sup \big \{ \big \vert \frac{\partial ^2}{\partial x_i\partial x_j} h(x) \big \vert \,:\, x\in {\mathbb {R}}^m\, , \, i,j=1, \ldots , m \big \}\).
In view of Lemma 4.1, the proof of \({\mathbf {F}}_R\xrightarrow {\mathrm{law}} Z\) boils down to the \(L^2(\Omega )\)-convergence of \(\langle DF_R(t_i), v_R^{(j)} \rangle _{{\mathfrak {H}}}\) to \(C_{i,j} \) for each i, j, as \(R\rightarrow +\infty \). The case \(i=j\) has been covered in the proof of Theorem 1.1 and for the other cases, we need to show
and
as \(R\rightarrow +\infty \). The point (4.1) has been established in Remark 5. To see point (4.2), we put
and
so that \(\langle DF_R(t_i), v_R^{(j)} \rangle _{{\mathfrak {H}}} = B_1(i,j) +B_2(i,j) \). Therefore,
Following the same lines as in the estimation of \(A_1\) and \(A_2\), one can easily verify that both \( \mathrm{Var}\big [ B_1(i,j) \big ]\) and \( \mathrm{Var} \big [ B_2(i,j)\big ]\) vanish asymptotically as R tends to infinity. Recalling that \(C_{i,j} = \lim _{R \rightarrow +\infty } {\mathbb {E}}\big [ B_1(i,j)\big ]\), the convergence of the finite-dimensional distributions is thus established. \(\square \)
Tightness. The following proposition, together with Kolmogorov's criterion, ensures the tightness of the processes \(\big \{ R^{\frac{\beta }{2}-d} \int _{B_R} \big [ u(t,x) -1 \big ] \,dx \,, t\in [0,T] \big \}\), \(R > 0\).
Proposition 4.2
Recall the notation (1.6). For any \(0\le s < t\le T\), for any \(p\in [ 2,\infty )\) and any \( \alpha \in (0, 1)\), it holds that
where the constant \(C = C_{p,T,\alpha }\) may depend on T, p and \(\alpha \).
Notice that the pth moment of an increment of the solution, \(u(t,x) -u(s,x)\), is bounded by a constant times \(|t-s|^ {\frac{\alpha p }{2}(1-\frac{\beta }{2})}\) for any \( \alpha \in (0, 1)\); see [17]. Here the spatial averaging improves the Hölder continuity.
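Kolmogorov's criterion converts a moment bound \({\mathbb {E}}\vert X_t - X_s \vert ^p \le C \vert t-s \vert ^{1+\gamma }\) into tightness and Hölder continuity of any order below \(\gamma /p\). As a toy illustration, using standard Brownian motion rather than the averaged solution (the values of t, s below are arbitrary), one has \(p=4\) and \(\gamma =1\), since \({\mathbb {E}}\vert B_t - B_s \vert ^4 = 3 \vert t-s \vert ^2\):

```python
import numpy as np

rng = np.random.default_rng(2)
t, s = 0.9, 0.4
# B_t - B_s ~ N(0, t - s); check the fourth-moment increment bound
increments = np.sqrt(t - s) * rng.standard_normal(1000000)
fourth_moment = np.mean(increments ** 4)
target = 3.0 * (t - s) ** 2
```

Here the exponent \(1+\gamma = 2\) exceeds one, which is exactly what Kolmogorov's criterion requires.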
Now we present the proof of Proposition 4.2. Let \(0\le s < t\le T\) and set
Then,
so that
Therefore, taking into account that \( \Vert \sigma (u(r,y))\sigma (u(r,y'))\Vert _{p/2}\) is uniformly bounded, we obtain
Estimation of the term \({\mathbf {T}}_1\). We need [2, Lemma 3.1]: for all \(\alpha \in (0,1)\), \(x,y\in {\mathbb {R}}^d\) and \(t'\ge t>0\),
Thus,
by the change of variable \(\theta = x - y\), \({\tilde{\theta }} = {\tilde{x}} - y'\). Consequently, with Z a standard Gaussian random vector on \({\mathbb {R}}^d\), we continue to write
Estimation of the term \({\mathbf {T}}_2\). We use a similar change of variable as before:
Combining the above two estimates, we obtain (4.3). \(\square \)
References
Chen, L., Hu, Y., Nualart, D.: Regularity and strict positivity of densities for the nonlinear stochastic heat equation. To appear in Memoirs of the American Mathematical Society (2018)
Chen, L., Huang, J.: Comparison principle for stochastic heat equation on \({\mathbb{R}}^{d}\). To appear in Annals of Probability
Chen, L., Huang, J.: Regularity and strict positivity of densities for the stochastic heat equation on \({\mathbb{R}}^d\). arXiv preprint: arxiv:1902.02382 (2019)
Dalang, R.C.: Extending the martingale measure stochastic integral with applications to spatially homogeneous SPDEs. Electron. J. Probab. 4(6), 29 (1999)
Delgado-Vences, F., Nualart, D., Zheng, G.: A central limit theorem for the stochastic wave equation with fractional noise. arXiv preprint: arxiv:1812.05019 (2018)
Gaveau, B., Trauber, P.: L’intégrale stochastique comme opérateur de divergence dans l’espace fonctionnel. J. Funct. Anal. 46, 230–238 (1982)
Hu, Y., Nualart, D.: Renormalized self-intersection local time for fractional Brownian motion. Ann. Probab. 33, 948–983 (2005)
Huang, J., Nualart, D., Viitasaari, L.: A central limit theorem for the stochastic heat equation. arXiv preprint: arxiv:1810.09492 (2018)
Nourdin, I., Peccati, G.: Stein’s method on Wiener chaos. Probab. Theory Relat. Fields 145(1), 75–118 (2009)
Nourdin, I., Peccati, G.: Normal Approximations with Malliavin Calculus. From Stein’s Method to Universality. Cambridge Tracts in Mathematics, 192. Cambridge University Press, Cambridge, xiv+239 pp (2012)
Nualart, D.: The Malliavin Calculus and Related Topics. Second edition. Probability and its Applications (New York). Springer, Berlin, xiv+382 pp (2006)
Nualart, D., Nualart, E.: Introduction to Malliavin Calculus. IMS Textbooks, Cambridge University Press, Cambridge (2018)
Nualart, D., Pardoux, E.: Stochastic calculus with anticipating integrands. Probab. Theory Relat. Fields 78, 535–581 (1988)
Nualart, D., Peccati, G.: Central limit theorems for sequences of multiple stochastic integrals. Ann. Probab. 33(1), 177–193 (2005)
Nualart, D., Quer-Sardanyons, L.: Existence and smoothness of the density for spatially homogeneous SPDEs. Potential Anal. 27, 281–299 (2007)
Nualart, D., Zhou, H.: Total variation estimates in the Breuer-Major theorem. arXiv preprint: arxiv:1807.09707 (2018)
Sanz-Solé, M., Sarrà, M.: Hölder continuity for the stochastic heat equation with spatially correlated noise. In: Dalang, R.C., Dozzi, M., Russo, F. (eds.) Seminar on Stochastic Analysis, Random Fields and Applications, pp. 259–268. Birkhäuser, Basel (2002)
Walsh, J.B.: An Introduction to Stochastic Partial Differential Equations. In: École d’été de probabilités de Saint-Flour, XIV—1984, pp. 265–439. Lecture Notes in Mathematics 1180. Springer, Berlin (1986)
Acknowledgements
The authors thank the anonymous referee for many constructive suggestions that improved this paper.
David Nualart is supported by NSF Grant DMS 1811181.
Huang, J., Nualart, D., Viitasaari, L. et al. Gaussian fluctuations for the stochastic heat equation with colored noise. Stoch PDE: Anal Comp 8, 402–421 (2020). https://doi.org/10.1007/s40072-019-00149-3