Abstract
In this article, we study the large time asymptotic behavior of the stochastic heat equation.
1 Introduction and Main Results
Suppose that \(W=\{W(t,x), t\ge 0, x\in {\mathbb {R}}\}\) is a two-parameter Wiener process. That is, W is a zero-mean Gaussian process with covariance function given by
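The displayed covariance is the standard Brownian-sheet convention; we record it here for completeness (this is an assumption on the convention used, as it is the usual one for a two-parameter Wiener process indexed by \({\mathbb {R}}_+\times {\mathbb {R}}\)):

```latex
{\mathbb{E}}\bigl[ W(t,x)\, W(s,y) \bigr]
  = (t \wedge s)\,\bigl( |x| \wedge |y| \bigr)\, \mathbf{1}_{\{xy \ge 0\}},
  \qquad s,t \ge 0, \quad x,y \in {\mathbb{R}}.
```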
Consider the stochastic heat equation
where \(\varphi : {\mathbb {R}}\rightarrow {\mathbb {R}}\) is a given Borel measurable function such that for each \(t \ge 0\) and \(x\in {\mathbb {R}}\),
Throughout the paper, \(p_t(x)\) denotes the one-dimensional heat kernel, that is, \(p_t(x)= (2\pi t)^{-1/2} e^{ -x^2/2t}\) for \(t>0\) and \(x\in {\mathbb {R}}\). The mild solution to Eq. (1.1) with initial condition \( u(0,x)=0 \) is given by
where the stochastic integral is well defined in view of condition (1.2).
We are interested in the asymptotic behavior as \(t\rightarrow \infty \) of u(t, x) for \(x\in {\mathbb {R}}\) fixed. Notice first that, in the particular case \(\varphi (x)\equiv c\), u(t, x) is a centered Gaussian random variable with variance
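given, by the Itô–Walsh isometry and the identity \(\int _{{\mathbb {R}}} p_u(y)^2\,dy = (4\pi u)^{-1/2}\), as follows (a routine computation, reproduced here for completeness):

```latex
\operatorname{Var} u(t,x)
  = c^2 \int_0^t \int_{{\mathbb{R}}} p_{t-s}(x-y)^2 \, dy\, ds
  = c^2 \int_0^t \frac{ds}{\sqrt{4\pi (t-s)}}
  = c^2 \sqrt{\frac{t}{\pi}}.
```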
Therefore, \(t^{-\frac{1}{4}} u(t,x)\) has the law \(N(0, c^2/ \sqrt{\pi })\). In the general case, using the change of variables \(s \rightarrow ts\) and \(y \rightarrow \sqrt{t }y\), we can write
By the scaling properties of the two-parameter Wiener process, it follows that u(t, x) has the same law as
The asymptotic behavior of u(t, x) will depend on the properties of the function \(\varphi \). We will consider three classes of functions for which different behaviors appear. We are going to use the following notion of convergence, which is stronger than convergence in distribution (see, for instance, [6, Chapter VIII]).
Definition 1.1
Let \(\{F_{n}\}\) be a sequence of X-valued random variables defined on a probability space \((\Omega , {\mathcal {F}}, {\mathbb {P}})\), X being a complete separable metric space. Let F be an X-valued random variable defined on some extended probability space \((\Omega ', {\mathcal {F}}', {\mathbb {P}}')\). We say that \(F_{n}\) converges stably to F, written \(F_n {\mathop {\longrightarrow }\limits ^\mathrm{stably}} F\), if for every bounded \({\mathcal {F}}\)–measurable random variable G, \((G,F_n)\) converges in law to (G, F).
When \(X={\mathbb {R}}\), \(F_n {\mathop {\longrightarrow }\limits ^\mathrm{stably}} F\) is equivalent to saying that
for every \(\lambda \in {\mathbb {R}}\) and every bounded \({\mathcal {F}}\)–measurable random variable G.
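Explicitly, the condition in question is the following standard reformulation (see [6]):

```latex
\lim_{n \to \infty} {\mathbb{E}}\bigl[ G\, e^{i\lambda F_n} \bigr]
  = {\mathbb{E}}'\bigl[ G\, e^{i\lambda F} \bigr].
```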
The first theorem deals with the case where \(\varphi \) is a homogeneous-type function.
Theorem 1.2
Suppose that \( \varphi :{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a measurable function, bounded on compact sets, such that \(\lim _{x \rightarrow \pm \infty } |x|^{-\alpha }\varphi (x) =c_\pm \) (see Footnote 1) for some constants \(c_+, c_-\) and \( \alpha \ge 0 \). Then, as \(t\rightarrow \infty \),
Here, \(\widehat{W}\) is a two-parameter Wiener process independent of W.
Note that in the case \( c_+=c_-\) and \( \alpha =0 \), the limit is Gaussian. One may also consider the case \(\lim _{x \rightarrow \pm \infty } |x|^{-\alpha _\pm }\varphi (x) =c_\pm \) (see Footnote 2) for some constants \(c_+, c_-\) and \(\alpha _+,\alpha _-\ge 0 \). In this case, the renormalization factor is \( t^{-\frac{3(\alpha _+\vee \alpha _- )+1}{4}} \), and the limit only has contributions from the largest exponent \(\alpha =\alpha _+\vee \alpha _- \).
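The role of this normalization can be checked numerically. For \(\varphi (x)=|x|^\alpha \) and \(x=0\), the Itô–Walsh isometry gives \({\mathbb {E}}[u(t,0)^2] = m_{2\alpha }\int _0^t \int _{{\mathbb {R}}} p^2_{t-s}(y)\, (s|y|)^\alpha \,dy\,ds\), with \(m_{2\alpha }={\mathbb {E}}|Z|^{2\alpha }\) for Z standard normal, and this integral grows like \(t^{\frac{3\alpha +1}{2}}\). The following sketch (our own illustration, not code from the paper; the function name and grid size are arbitrary) checks the growth exponent after integrating out y in closed form:

```python
import math

import numpy as np

def second_moment(t, alpha, n=100_000):
    # E[u(t,0)^2] up to the constant m_{2*alpha}: after the y-integral
    #   int_R p_u(y)^2 |y|^alpha dy = Gamma((alpha+1)/2) u^{(alpha-1)/2} / (2*pi),
    # only a one-dimensional integral in s remains.
    c = math.gamma((alpha + 1) / 2) / (2 * math.pi)
    s = (np.arange(n) + 0.5) * (t / n)        # midpoint rule on (0, t)
    return c * np.sum((t - s) ** ((alpha - 1) / 2) * s**alpha) * (t / n)

alpha = 1.0
ratio = second_moment(2.0, alpha) / second_moment(1.0, alpha)
print(ratio)   # ≈ 2^{(3*alpha+1)/2}, matching the t^{-(3*alpha+1)/4} normalization
```

Doubling t multiplies the second moment by \(2^{(3\alpha +1)/2}\), which is exactly what the normalization \(t^{-\frac{3\alpha +1}{4}}\) compensates in \(L^2\).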
In the second theorem, we consider the case where \(\varphi \) satisfies some integrability properties with respect to the Lebesgue measure on \({\mathbb {R}}\). The limit involves a weighted local time of the two-parameter Wiener process, and the proof has been inspired by the work of Nualart and Xu [7] on the central limit theorem for an additive functional of the fractional Brownian motion.
Theorem 1.3
Suppose that \(\varphi \in L^2({\mathbb {R}}) \cap L^p({\mathbb {R}})\) for some \(p<2\). Then, as \(t\rightarrow \infty \),
where \(\widehat{W}\) is a two-parameter Wiener process, Z is an N(0, 1) random variable, and \(\widehat{W}\), W and Z are mutually independent.
In Theorem 1.3, \(L_{0,1}:=\int _0^1 \int _{{\mathbb {R}}} p^2_{1-s} (y ) \delta _0(\widehat{W}(s, y)) dyds\) is a weighted local time of the random field \(\widehat{W}\) that can be defined (see Lemma 2.1) as the limit in \(L^2(\Omega )\) of
as \(\varepsilon \) tends to zero.
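This approximation can also be explored by simulation. The sketch below is entirely our own construction (grid sizes, the value of \(\varepsilon \), and all names are arbitrary choices): it simulates Brownian sheets on a grid, evaluates \(\int _0^1\int _{{\mathbb {R}}} p^2_{1-s}(y)\, p_\varepsilon (\widehat{W}(s,y))\,dy\,ds\) by a Riemann sum, and compares the Monte Carlo mean with the exact expectation \({\mathbb {E}}\, p_\varepsilon (\widehat{W}(s,y)) = (2\pi (s|y|+\varepsilon ))^{-1/2}\) evaluated on the same truncated grid:

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, ycut = 50, 60, 3.0               # time steps, y>0 steps, y-cutoff
ds, dy = 1.0 / m, ycut / k
s = (np.arange(m) + 0.5) * ds          # midpoints of (0, 1)
y = (np.arange(k) + 0.5) * dy          # midpoints of (0, ycut)

def p(t, x):
    """Heat kernel p_t(x) = (2*pi*t)^{-1/2} exp(-x^2 / (2t))."""
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

eps = 0.1
w = p(1 - s[:, None], y[None, :]) ** 2          # weight p_{1-s}(y)^2

def L_eps(rng):
    """One sample of the mollified weighted local time (both half-planes)."""
    tot = 0.0
    for _ in range(2):                          # y>0 and y<0 are independent sheets
        inc = rng.standard_normal((m, k)) * np.sqrt(ds * dy)
        W = np.cumsum(np.cumsum(inc, axis=0), axis=1)
        tot += np.sum(w * p(eps, W)) * ds * dy  # p_eps(W) mollifies delta_0
    return tot

mc = np.mean([L_eps(rng) for _ in range(200)])
exact = 2 * np.sum(w / np.sqrt(2 * np.pi * (s[:, None] * y[None, :] + eps))) * ds * dy
print(mc, exact)   # agree up to Monte Carlo and discretization error
```

The agreement of the two numbers reflects that \(L^\varepsilon _{0,1}\) has a finite, computable mean for each \(\varepsilon >0\); the content of Lemma 2.1 is the much stronger \(L^2(\Omega )\) convergence as \(\varepsilon \rightarrow 0\).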
The present study originates from a nonlinear stochastic heat equation, which is described in Sect. 4. Although we have not been able to obtain explicit results in this general situation, the results presented here provide a range of techniques related to this problem.
The paper is organized as follows. Section 2 contains the proofs of the above theorems, and in Sect. 3, we discuss an extension of these results in the case where we consider also space averages on an interval \([-R,R]\) and both R and t tend to infinity. A brief discussion of the case of nonlinear equations is provided in Sect. 4.
2 Proofs
Proof of Theorem 1.2
We know that \(t^{-\frac{ 3\alpha +1}{4}} u(t,x)\) has the same law as
We divide the study of v into two parts, according to the behavior of \(\varphi \) on compact sets. For any compact set K, consider \(\varphi _K(x)= \varphi (x)\mathbf {1}_{x\in K}\). We will prove that \( v_K(x)\rightarrow 0 \) in \( L^2(\Omega ) \), where
Indeed, since \(\varphi _K\) is bounded by a constant, say M, we have
The above quantity clearly converges to zero as \(t\rightarrow \infty \), if one considers separately the cases \( \alpha >0 \) and \(\alpha =0 \).
Given the above result, we can assume without loss of generality that \( \varphi (x)=f(x)|x|^\alpha \) with a bounded measurable function f such that \( \lim _{x\rightarrow \pm \infty }f(x)=c_\pm \).
For this, we fix \(t_0>0\) and compute the conditional characteristic function of \(t^{-\frac{ 3\alpha +1}{4}} u(t,x)\) given \({\mathcal {F}}_{t_0}\), where \( t>t_0 \) and \(\{ {\mathcal {F}}_t, t\ge 0\}\) denotes the natural filtration of the two-parameter Wiener process W used in the definition of u (see Footnote 3). For any \(\lambda \in {\mathbb {R}}\), we have
It is easy to show that \( A_t \rightarrow 0\) in \(L^2(\Omega )\) as \( t\rightarrow \infty \). In fact, using the rescaling properties, we obtain that as \( t\rightarrow \infty \)
Now, we continue with the term \( B_t \) for which we will use the decomposition
Then, we can write
where \(\widehat{{\mathbb {E}}}\) denotes the mathematical expectation with respect to the two-parameter Wiener process \(\widehat{W}\). By the same renormalization arguments as in (1.3) leading to (1.4), this gives
Here,
Letting \(t\rightarrow \infty \) and using \(\lim _{x \rightarrow \pm \infty } |x|^{-\alpha }\varphi (x) =c_\pm \), we obtain that \(B_t\) converges almost surely to
where \(F_\alpha (x)=c_+\mathbf {1}_{x>0}+c_-\mathbf {1}_{x<0} \). The above formula is the characteristic function of X. As a consequence, for every bounded \({\mathcal {F}}_{t_0}\)-measurable random variable G, we obtain
This can be extended to any bounded random variable G measurable with respect to the two-parameter Wiener process W, and this provides the desired stable convergence in the sense of Definition 1.1. \(\square \)
For the proof of Theorem 1.3, we need the following lemma on the existence of the weighted local time \(L_{0,r}\).
Lemma 2.1
For any \(r\in [0,1]\), the limit in \(L^2(\Omega )\), as \(\varepsilon \) tends to zero, of
exists and the limit random variable will be denoted by \(L_{0,r}:=\int _0^r \int _{{\mathbb {R}}} p^2_{1-s} (y ) \delta _0(W(s, y)) dyds\).
Proof
Using the inverse Fourier transform formula for a Gaussian law, we have
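The formula in question is presumably the following standard identity (stated here as an assumption, since it is the usual route): writing \(p_\varepsilon (x) = (2\pi )^{-1} \int _{{\mathbb {R}}} e^{i\eta x - \varepsilon \eta ^2/2}\, d\eta \) and taking expectations of the product of two such factors yields

```latex
{\mathbb{E}}\bigl[ p_\varepsilon(W(s,y))\, p_{\varepsilon'}(W(s',y')) \bigr]
  = \frac{1}{(2\pi)^2} \int_{{\mathbb{R}}^2}
    e^{-\frac{1}{2} {\mathbb{E}}\left( |\eta\, W(s,y) - \eta'\, W(s',y')|^2 \right)}\,
    e^{-\frac{1}{2}(\varepsilon \eta^2 + \varepsilon' \eta'^2)}
    \, d\eta\, d\eta'.
```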
Note that, in order to prove the statement of the lemma, it is enough to show that \( {\mathbb {E}}( L^\varepsilon _{0,r} L^{\varepsilon '}_{0,r}) \) converges as \( \varepsilon , \varepsilon '\rightarrow 0 \). Therefore, consider
As \(\varepsilon \) and \(\varepsilon '\) tend to zero, we obtain the limit
where \(f(s,y,s',y')\) is the density at (0, 0) of the random vector \((W(s,y), W(s',y'))\). We claim that \(I<\infty \). Indeed, first notice that \(f(s,y,s',y')\) is bounded by
To show that \(I<\infty \), it suffices to consider the integral over the set \(\{ yy'>0 \}\), because the integral over \(\{ yy' \le 0\}\) is clearly finite. By symmetry, we only need to show that the integral
is finite. With the change of variables \(sy=z\) and \(s'y'=z'\), we obtain
Fix \(\delta >0\) and let \(K_\delta = \sup _{s\in [0,1],z>0} \left( \frac{z^2}{ s^2(1-s)} \right) ^\delta e^{-\frac{z^2}{s^2(1-s)}}\). Then,
Integrating first in z and later in \(z'\), it is easy to show that the above integral is finite if \(\delta <\frac{1}{4}\). This allows us to conclude the proof of the lemma. \(\square \)
Proof of Theorem 1.3
Consider the random variable \(\widetilde{u}(t,x)\) defined in (1.4). We can put \( \widetilde{u}(t,x)= t^{-\frac{3}{8}} M_t(1,x)\), where for \(r\in {\mathbb {R}}_+\),
where \(\{Z_t, t\ge 0\}\) is a standard Brownian motion independent of W, defined, if necessary, on an enlarged probability space. Then, \(\{M_t(\cdot , x), t\ge 0 \}\) is a family of continuous martingales on the time interval \([0,\infty )\). We note that the essential part of the definition of \( M_t(r,x) \) corresponds to \( r\in [0,1] \); for this reason, we concentrate on this interval, leaving the calculations and corresponding modifications for \( r\in [1,\infty )\) to the reader.
We will find the limit as \(t\rightarrow \infty \) of the quadratic variation of these martingales. We have for \( r\in [0,1] \)
The proof of the theorem will be done in several steps.
Step 1 In this step, we prove that \(\langle M_t(\cdot ,x) \rangle _{r}\) converges in \(L^1(\Omega ) \) to the weighted local time \(L_{0,r}\), for \(r\in [0,1]\). First, we claim that
This follows from the fact that
and for fixed \( r\in [0,1] \)
Note that the bound in (2.2) also shows that the martingale \( M_t(\cdot ,x) \) is square integrable. On the other hand, for any fixed t, we have
where
To show (2.3), notice first that
Therefore,
which converges to zero as \(\varepsilon \) tends to zero because \(\int _0^r \int _{\mathbb {R}}p^2_{1-s} (y) (2\pi s |y|) ^{-\frac{1}{2}} dy ds <\infty \) and \(\varphi \in L^2({\mathbb {R}})\).
We also claim that
where
To show (2.4), we write
which leads to the estimate
for any \(\beta \in [0,1]\). Then, by the dominated convergence theorem, the limit (2.4) follows from
which follows from the fact that \((2\pi )^{-2} \int _{{\mathbb {R}}^2} e^ {-\frac{1}{2} {\mathbb {E}}( |\eta W(s,y)- \eta ' W(s',y') |^2)} d\eta d\eta ' \) is the density at (0, 0) of the random vector \((W(s,y), W(s',y'))\) (see the proof of Lemma 2.1).
By Lemma 2.1, \( \int _0^r \int _{\mathbb {R}}p^2_{1-s} (y) p_{\varepsilon } (W(s, y) ) dy ds\) converges in \(L^2(\Omega )\) as \(\varepsilon \rightarrow 0\) to the weighted local time \(L_{0,r}\). As a consequence, from (2.1), (2.3), (2.4) and Lemma 2.1, we deduce that \(\langle M_t(\cdot ,x) \rangle _{r}\) converges in \(L^1(\Omega ) \), as \( t\rightarrow \infty \), to the weighted local time \(L_{0,r}=\int _0^r \int _{{\mathbb {R}}} p^2_{1-s} (y ) \delta _0(W(s, y)) dyds\).
Step 2 Fix an orthonormal basis \(\{e_i, i\ge 1\}\) of \(L^2({\mathbb {R}})\) formed by bounded functions and consider the martingales for \( r\in {\mathbb {R}}_+ \)
We claim that the total variation of the joint quadratic variation \(\langle M_t(\cdot , x), M^i \rangle _r\) converges to zero in \(L^1(\Omega )\) as \(t\rightarrow \infty \). Indeed,
Then, using the fact that \(\varphi \in L^p({\mathbb {R}})\) for some \(p<2\), we can write for \( r\in [0,1] \)
where \(\frac{1}{p} + \frac{1}{q} =1\). Then, the claim follows because \(\frac{3}{8}- \frac{3}{ 4p} <0\) for \( p\in (1,2) \) and
Step 3 Given a sequence \(t_n \uparrow \infty \), set \(M^n_{0,r} = M_{t_n}(r,x)\) and \(M^n_{i,r}= M^i(r)\) for \(i \ge 1\). These martingales possess Dambis-Dubins-Schwarz Brownian motions \(\beta ^n_i\), such that
and
We proved in Step 2 that \( \sup _{r\in {\mathbb {R}}_+ } |\langle M^n_i, M^n_0 \rangle |_r \rightarrow 0\) in probability as \(n\rightarrow \infty \). Moreover, it is clear that \( \langle M^n_i, M^n_j \rangle _ r =0\) for any \(1\le i <j\). Then, by the asymptotic Ray–Knight theorem (see Theorem 5.1 in the Appendix), we conclude that the Brownian motions \(\{\beta ^n_{i,y}, y \ge 0\}\), \(i\ge 0\), converge in law to a family of independent Brownian motions \(\{\beta _{i,y}, y \ge 0\}\), \(i\ge 0\). As a consequence, we deduce, in particular, the stable convergence, in the sense of Definition 1.1, of the sequence of Brownian motions \(\beta ^n_0\) to a Brownian motion \(\beta _0\), with \({\mathcal {F}} \) the \( \sigma \)-field generated by the white noise W on \({\mathbb {R}}_+ \times {\mathbb {R}}\), which is independent of \( \beta _0 \).
We are going to show, using Step 1, that \( M_{t_n} (1,x)\) converges stably to \(\beta _{0, L_{0,1} \Vert \varphi \Vert ^2_{L^2({\mathbb {R}})} }\) as \(n\rightarrow \infty \), where \(\beta _0\) is independent of W. Indeed, for any \(\lambda \in {\mathbb {R}}\) and \(\epsilon , K>0\), with the notation \(c= \Vert \varphi \Vert ^2_{L^2({\mathbb {R}})}\), and for any W-measurable random variable G such that \(|G|\le 1\), we can write
By Step 1, \(\lim _{n\rightarrow \infty } A^1_{n,\epsilon } =0\), and the stable convergence of \(\beta ^n_0\) to \(\beta _0\) implies \(\lim _{n\rightarrow \infty } A^4_n =0\). For fixed K, \(A^3_{n,\epsilon ,K}\) does not depend on n and converges to zero as \(\epsilon \rightarrow 0\); finally, \(A^2_K\) tends to zero as \(K \uparrow \infty \). Thus, we have proved the stable convergence \( t^{\frac{3}{8}} \widetilde{u}(t,x) {\mathop {\longrightarrow }\limits ^\mathrm{stably}} \beta _{0, L_{0,1} \Vert \varphi \Vert ^2_{L^2({\mathbb {R}})} }\) as \(t\rightarrow \infty \), where \(L_{0,1}=\int _0^1 \int _{{\mathbb {R}}} p^2_{1-s} (y ) \delta _0(W(s, y)) dyds\) and \(\beta _0\) is independent of W.
Step 4 Fix \(\lambda \in {\mathbb {R}}\) and \(t_0 \ge 0\). To complete the proof, we follow a similar argument as in the proof of Theorem 1.2 based on the method of characteristic functions. In fact, we can write
As in the proof of Theorem 1.2, it is easy to show that \(\lim _{t\rightarrow \infty } A_t =0\) in \(L^2(\Omega )\). On the other hand, with the decomposition
for the term \(B_t\), we can write
where \(\widehat{{\mathbb {E}}}\) denotes the mathematical expectation with respect to the two-parameter Wiener process \(\widehat{W}\) defined by \(\widehat{W}(s,y)= W(s+ t_0,y) -W(t_0,y)\). By the same arguments as before, this leads to
Here, we have used the notation \(\Phi _{t,y}= (t-t_0) ^{-\frac{3}{4}} W(t_0, \sqrt{t-t_0} y)\). Rather than considering the limit of the above expression, we would like to consider instead the limit of
In order to be able to do this, consider the residual term
We claim that
in \(L^2(\Omega )\). Indeed, using bounds for the density of \(\widehat{W}(s,y) \), we obtain
Taking into account that \(\int _{{\mathbb {R}}} \left| \varphi (z+ \Phi _{t,y} ) -\varphi (z ) \right| ^2 dz \le 2 \Vert \varphi \Vert ^2_{L^2({\mathbb {R}})}\), and using that
almost surely, for any \(y\in {\mathbb {R}}\), we conclude the proof of (2.5) by the dominated convergence theorem.
Furthermore, proceeding as in Steps 1–3, we can show that, as \(t\rightarrow \infty \), \(B^{(1)}_t\) converges to
where \(L_{0,1}=\int _0^1 \int _{{\mathbb {R}}} p^2_{1-s} (y ) \delta _0(\widehat{W}(s, y)) dyds\) and \(\beta _0\) is independent of \(\widehat{W}\). As a consequence, for every bounded and \({\mathcal {F}}_{t_0}\)-measurable random variable G, we obtain
This completes the proof. \(\square \)
3 Large Times and Space Averages
The asymptotic behavior of the spatial averages \(\int _{-R} ^R u(t,x) dx\) as \(R\rightarrow \infty \) has recently been studied in [3,4,5]. In these articles, u(t, x) is the solution to a stochastic partial differential equation with initial condition \(u(0,x)=1\) and a Lipschitz nonlinear coefficient \(\sigma (u)\). The solution process is stationary in \(x\in {\mathbb {R}}\), and the limit is Gaussian after a proper normalization. In the case considered here, the lack of stationarity creates different limit behaviors. In order to obtain a limit, we will consider the case where both R and t tend to infinity.
Set
As before, \(u_R(t)\) has the same law as
Consider first the case where \(\varphi \) is a homogeneous function.
Theorem 3.1
Suppose that \(\varphi (x) = |x| ^\alpha \) for some \(\alpha >0\). Suppose that \(t_R \rightarrow \infty \) as \(R\rightarrow \infty \). Then, if we let \(\widehat{W}\) be a two-parameter Wiener process independent of W, the following stable convergences hold true:
(i)
If \(\frac{R}{\sqrt{t_R}} \rightarrow c\), with \(c\in (0,\infty )\),
$$\begin{aligned} t_R^{-\frac{3}{4} (\alpha +1)} u_R(t_R) {\mathop {\longrightarrow }\limits ^\mathrm{stably}}\int _{-c} ^c \int _0^1 \int _{{\mathbb {R}}} p_{1-s} (x-y) |\widehat{W}(s, y)|^\alpha \widehat{W}(ds,dy)dx. \end{aligned}$$
(ii)
If \(\frac{R}{\sqrt{t_R}} \rightarrow 0\),
$$\begin{aligned} R^{-1} t_R^{-\frac{3\alpha +1}{4}} u_R(t_R) {\mathop {\longrightarrow }\limits ^\mathrm{stably}}2 \int _0^1 \int _{{\mathbb {R}}} p_{1-s} (y) |\widehat{W} (s, y)|^\alpha \widehat{W}(ds,dy). \end{aligned}$$
(iii)
If \(\frac{R}{\sqrt{t_R}} \rightarrow \infty \),
$$\begin{aligned} R ^{-\frac{ \alpha +1}{2}} t_R^{-\frac{\alpha +1}{2}} u_R(t_R) {\mathop {\longrightarrow }\limits ^\mathrm{stably}}\int _{-1} ^{ 1} \int _0^1 |\widehat{W}(s,y)| ^{\alpha } \widehat{W}(ds,dy). \end{aligned}$$
Proof
We have, with the change of variable \(\frac{x}{\sqrt{t_R}} \rightarrow x \),
and therefore, (i) follows by letting \(R\rightarrow \infty \).
If \(\frac{R}{\sqrt{t_R}} \rightarrow 0\), with the change of variable \(x \rightarrow Rx \), we can write
which implies (ii).
The proof of (iii) is more involved. Making the change of variable \(y \rightarrow y \frac{R}{\sqrt{t_R}}\) in (3.1) yields
Finally, the stochastic integral
converges in \(L^2(\Omega )\) as \(R\rightarrow \infty \) to
The stable character of the convergence can be proved by the same arguments, based on the conditional characteristic function, as in the proof of Theorem 1.2. \(\square \)
For a function which satisfies integrability conditions with respect to the Lebesgue measure, we have the following result.
Theorem 3.2
Suppose that \(\varphi \in L^2({\mathbb {R}})\) and that \(t_R \rightarrow \infty \) as \(R\rightarrow \infty \). Then, with Z a N(0, 1) random variable and \(\widehat{W}\) an independent two-parameter Wiener process such that \((Z, \widehat{W})\) are independent of W, the following stable convergences hold true:
(i)
If \(\frac{R}{\sqrt{t_R}} \rightarrow c\), with \(c\in (0,\infty )\),
$$\begin{aligned} t_R^{-\frac{3}{8}} u_R(t_R) {\mathop {\longrightarrow }\limits ^\mathrm{stably}}Z \left( \Vert \varphi \Vert ^2_{L^2({\mathbb {R}})}\int _0^1 \int _{{\mathbb {R}}} \left( \int _{-c} ^c p_{1-s} (x-y) dx \right) ^2 \delta _0(\widehat{W}(s,y)) dyds \right) ^{\frac{1}{2}}. \end{aligned}$$
(ii)
If \(\frac{R}{\sqrt{t_R}} \rightarrow 0\),
$$\begin{aligned} R^{-1} t_R^{\frac{1}{2}} u_R(t_R) {\mathop {\longrightarrow }\limits ^\mathrm{stably}}Z \left( 2\Vert \varphi \Vert ^2_{L^2({\mathbb {R}})}\int _0^1 \int _{{\mathbb {R}}} p^2_{1-s} (y) \delta _0(\widehat{W}(s,y)) dyds \right) ^{\frac{1}{2}}. \end{aligned}$$
(iii)
If \(\frac{R}{\sqrt{t_R}} \rightarrow \infty \),
$$\begin{aligned} R ^{-\frac{1}{2}} t_R^{\frac{1}{4}} u_R(t_R) {\mathop {\longrightarrow }\limits ^\mathrm{stably}}Z \left( 2 \Vert \varphi \Vert ^2_{L^2({\mathbb {R}})}\int _0^1 \int _{-1}^1 p^2_{1-s} (y) \delta _0(\widehat{W}(s,y)) dyds \right) ^{\frac{1}{2}}. \end{aligned}$$
Proof
Let us first prove case (i). We have, with the change of variable \(\frac{x}{\sqrt{t_R}} \rightarrow x \),
Consider the family of martingales
\(r\in [0,1]\). We can write
Then, as in the proof of Theorem 1.3, we can show that \(\langle M_R(\cdot , x )\rangle _r\) converges in \(L^1(\Omega )\) as \(R\rightarrow \infty \) to the weighted local time
This completes the proof of (i).
If \(\frac{R}{\sqrt{t_R}} \rightarrow 0\), with the change of variable \(x \rightarrow Rx \), we can write
As before, the stochastic integral
converges in law to
which implies (ii). To show (iii), we make the change of variable \(y \rightarrow y \frac{R}{\sqrt{t_R}}\) in (3.2) to get
Finally, the stochastic integral
converges in law as \(R\rightarrow \infty \) to
The stable character of the convergence can be proved by the same arguments, based on the conditional characteristic function, as in the proof of Theorem 1.3. \(\square \)
4 Case of a Nonlinear Coefficient \(\sigma \)
In this section, we discuss the case of a nonlinear stochastic heat equation
with initial condition \(u(0,z)=1\), where \(\sigma : {\mathbb {R}}\rightarrow {\mathbb {R}}\) is a Lipschitz function. The mild solution to Eq. (4.1) is given by
We are interested in the asymptotic behavior of u(t, x) as t tends to infinity. As before, we consider different cases:
Case 1. Suppose that \(\sigma (u)=u\). In this case, the solution has a Wiener chaos expansion given by
with
and \(\Delta _n(t) =\{ (s_1, \dots , s_n): 0<s_1< \cdots< s_n <t\}\). Here, \(I_n\) denotes the multiple stochastic integral of order n with respect to the noise W. If we consider the projection of u(t, x) on a fixed Wiener chaos, we can write with the change of variables \(s_i \rightarrow ts_i\) and \(x_i \rightarrow \sqrt{t }y_i\),
By the scaling properties of the two-parameter Wiener process, it follows that \( I_n(f_{t,x,n} )\) has the same law as
As a consequence, \(t^{-\frac{3}{4} n} I_n(f_{t,x,n} )\) converges stably to
where \(\widehat{W}\) is a two-parameter Wiener process independent of W and with the convention \(s_0=1\) and \(x_0=0\).
Notice that the rate of convergence depends polynomially on the order of the Wiener chaos. This property hints at a possible exponential-type behavior of u(t, x). This is consistent with the asymptotic behavior of \(\log u(t,x)\), when \(u(0,x)=\delta _0(x)\), obtained by Amir, Corwin and Quastel in [1].
Case 2. When \(\sigma \) is a Lipschitz function that belongs to \(L^2({\mathbb {R}})\), the problem is much more involved, and we give here only some ideas on how to proceed. We can write
Furthermore,
By the scaling properties of the two-parameter Wiener process, as a function of \(\widehat{W}\), \(u( ts, \sqrt{t} y)\) has the same law as
Therefore, u(t, x) has the same law as
Then,
The quadratic variation of the martingale part of the above stochastic integral is
where \(Z^t(s,y)\) satisfies
From these computations, we conjecture that \(t^{-\frac{1}{6}}\) is the right normalization and that the limit would satisfy an equation involving a weighted local time of the solution. Proving these facts, however, is a challenging problem that will not be treated in this paper.
Notes
1. From here onward, we use this notation to indicate that \( \lim _{x \rightarrow +\infty } |x|^{-\alpha }\varphi (x) =c_+\) and \( \lim _{x \rightarrow -\infty } |x|^{-\alpha }\varphi (x) =c_- \).
2. Here, this means \( \lim _{x \rightarrow +\infty } |x|^{-\alpha _+}\varphi (x) =c_+ \) and \( \lim _{x \rightarrow -\infty } |x|^{-\alpha _-}\varphi (x) =c_- \).
3. That is, \( {\mathcal {F}}_t \) is generated by \( W(s,x), s\le t, x\in {\mathbb {R}}\).
References
Amir, G., Corwin, I., Quastel, J.: Probability distribution of the free energy of the continuum directed random polymer in \(1+1\) dimension. Commun. Pure Appl. Math. 64(4), 466–537 (2011)
Campese, S.: A limit theorem for moments in space of the increments of Brownian local time. Ann. Probab. 45(3), 1512–1542 (2017)
Delgado-Vences, F., Nualart, D., Zheng, G.: A Central Limit Theorem for the stochastic wave equation with fractional noise. Preprint
Huang, J., Nualart, D., Viitasaari, L.: A central limit theorem for the stochastic heat equation. Preprint
Huang, J., Nualart, D., Viitasaari, L., Zheng, G.: Gaussian fluctuations for the stochastic heat equation with colored noise. Preprint
Jacod, J., Shiryaev, A.N.: Limit Theorems for Stochastic Processes. Springer, Berlin (1987)
Nualart, D., Xu, F.: Central limit theorem for an additive functional of the fractional Brownian motion. Ann. Probab. 42(1), 168–203 (2014)
Pitman, J., Yor, M.: Asymptotic laws of planar Brownian motion. Ann. Probab. 14(3), 733–779 (1986)
Revuz, D., Yor, M.: Continuous martingales and Brownian motion, 3rd ed., Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 293. Springer, Berlin (1999)
Acknowledgements
We would like to thank two anonymous referees for a number of valuable comments that helped to improve this paper.
The research of the second author was supported by KAKENHI Grants 16K05215 and 16H03642. D. Nualart is supported by NSF Grant DMS 1811181.
Appendix: The Asymptotic Ray–Knight Theorem
The following version has been adapted from [2] (see also [8]). Here, \(H^2_{0,\text {loc}} \) denotes the space of locally square integrable martingales.
Theorem 5.1
For \(k \ge 2\) and \(n \ge 1\), define a sequence \((M_{1,r}^n,M_{2,r}^n, \dots , M_{k,r}^n)\) of k-tuples of continuous local martingales \((M_{j,r}^n)_{r \ge 0} \in H^2_{0,\text {loc}}\) such that for fixed \(j=1,\dots ,k\) the limit \(\left\langle M_j^n, M_j^n \right\rangle _{\infty }\) is either infinite for all n, or finite for all n. After possibly enlarging the underlying probability space, each \(M_j^n\) possesses a Dambis–Dubins–Schwarz Brownian motion \(\beta ^n_j\) and an associated time change \(T_j^n(y)\), such that
for \(r\ge 0\) and \(1 \le j \le k\) (for a proof, see, for example, [9, Ch. V, Thm. 1.7]). If for \(a \ge 0\) and \(1 \le i,j \le k\) with \(i \ne j\)
in probability, then the k-dimensional process \(\beta ^n_r =(\beta _{1,r}^n,\beta _{2,r}^n, \dots ,\beta _{k,r}^n)_{r \ge 0}\) converges in distribution to a k-dimensional Brownian motion \((\beta _r)_{r\ge 0}\).
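The Dambis–Dubins–Schwarz time change underlying this statement can be illustrated by a small simulation (our own sketch; the integrand H and all parameters are arbitrary choices, not from the paper): running a martingale \(M=\int H\,dB\) up to the first time its quadratic variation reaches a level y produces a value whose variance is y, as it should be for the time-changed Brownian motion \(\beta _y = M_{T(y)}\).

```python
import numpy as np

rng = np.random.default_rng(1)
npaths, n, dt = 2000, 1500, 1e-3

dB = rng.standard_normal((npaths, n)) * np.sqrt(dt)
B = np.cumsum(dB, axis=1)

# predictable integrand H in {1, 2}, depending on the past of the path
H = np.ones((npaths, n))
H[:, 1:] += (B[:, :-1] > 0)

M = np.cumsum(H * dB, axis=1)          # M_t = int_0^t H_s dB_s
QV = np.cumsum(H**2 * dt, axis=1)      # <M>_t = int_0^t H_s^2 ds

# T(1) = inf{t : <M>_t >= 1}; beta_1 = M_{T(1)} should be ~ N(0, 1)
idx = np.argmax(QV >= 1.0, axis=1)
beta1 = M[np.arange(npaths), idx]
print(beta1.var())                     # ≈ 1
```

Since \(H\ge 1\), the quadratic variation reaches level 1 on every path; by optional stopping, \({\mathbb {E}}[\beta _1^2] = {\mathbb {E}}[\langle M\rangle _{T(1)}] \approx 1\), up to the overshoot of one time step.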
Kohatsu-Higa, A., Nualart, D. Large Time Asymptotic Properties of the Stochastic Heat Equation. J Theor Probab 34, 1455–1473 (2021). https://doi.org/10.1007/s10959-020-01007-y