Abstract
We consider sample path properties of the solution to the stochastic heat equation, in \({\mathbb {R}}^d\) or bounded domains of \({\mathbb {R}}^d\), driven by a Lévy space–time white noise. When viewed as a stochastic process in time with values in an infinite-dimensional space, the solution is shown to have a càdlàg modification in fractional Sobolev spaces of index less than \(-\frac{d}{2}\). Concerning the partial regularity of the solution in time or space when the other variable is fixed, we determine critical values for the Blumenthal–Getoor index of the Lévy noise such that noises with a smaller index entail continuous sample paths, while Lévy noises with a larger index entail sample paths that are unbounded on any non-empty open subset. Our results apply to additive as well as multiplicative Lévy noises, and to light- as well as heavy-tailed jumps.
1 Introduction
Let \(T>0\) and consider, on a stochastic basis \((\Omega ,{\mathcal {F}},({\mathcal {F}}_t)_{t\in [0,T]},{\mathbb {P}})\) satisfying the usual conditions, the stochastic heat equation driven by a Lévy space–time white noise on \([0,T]\times D\) with Dirichlet boundary conditions:

$$\begin{aligned} \frac{\partial u}{\partial t}(t,x)&= \Delta u(t,x) + \sigma (u(t,x))\, \dot{L}(t,x), \quad (t,x)\in (0,T]\times D, \nonumber \\ u(0,x)&=u_0(x), \quad x\in D, \qquad u(t,x) = 0, \quad (t,x)\in (0,T]\times \partial D, \end{aligned}$$
(1.1)
where D is the whole space \({\mathbb {R}}^d\) or a bounded domain in \({\mathbb {R}}^d\), \(\sigma :{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a Lipschitz function, \(u_0:{\bar{D}}\rightarrow {\mathbb {R}}\) is a bounded continuous initial condition vanishing on \(\partial D\), and \(\dot{L}\) is a Lévy space–time white noise. If \(D={\mathbb {R}}^d\), the boundary conditions on u and \(u_0\) are considered void.
A predictable random field \(u=\left( u(t,x):(t,x) \in [0,T]\times D \right) \) is called a mild solution to (1.1) if for all \((t,x) \in [0,T]\times D\),

$$\begin{aligned} u(t,x) = V(t,x) + \int _0^t \int _D G_D(t-s;x,y)\, \sigma (u(s,y))\, L(\mathrm {d}s,\mathrm {d}y) \end{aligned}$$
(1.2)

almost surely, where

$$\begin{aligned} V(t,x) = \int _D G_D(t;x,y)\, u_0(y)\,\mathrm {d}y \end{aligned}$$
(1.3)

is the solution to the homogeneous version of (1.1).
In (1.2) and (1.3), \(G_D\) denotes the Green’s function of the heat operator on D, which for \(D={\mathbb {R}}^d\) equals the Gaussian density

$$\begin{aligned} g(t,x) = \frac{1}{(4\pi t)^{d/2}} \exp \left( -\frac{|x|^2}{4t}\right) , \quad (t,x)\in (0,T]\times {\mathbb {R}}^d \end{aligned}$$
(1.4)

(when \(t=0\), we interpret g(0, x) as the Dirac delta function \(\delta _0(x)\)), while on a bounded domain D with smooth boundary it has the spectral representation

$$\begin{aligned} G_D(t;x,y) = \sum _{j\geqslant 1} e^{-\lambda _j t}\, \Phi _j(x)\, \Phi _j(y), \end{aligned}$$
(1.5)
where \((\lambda _j)_{j\geqslant 1}\) are the eigenvalues of \(-\Delta \) with vanishing Dirichlet boundary conditions, and \((\Phi _j)_{j\geqslant 1}\) are the corresponding eigenfunctions forming a complete orthonormal basis of \(L^2(D)\).
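For the model case \(D=(0,\pi )\) in \(d=1\) (the setting of Sect. 2.1), these eigenpairs are explicit: \(\lambda _j=j^2\) and \(\Phi _j(x)=\sqrt{2/\pi }\,\sin (jx)\). The following sketch (a numerical illustration, not part of the paper; the grid size is an arbitrary choice) checks the orthonormality and the eigenvalue relation by quadrature and finite differences.

```python
import numpy as np

# Model case D = (0, pi) in d = 1: the Dirichlet Laplacian has eigenvalues
# lambda_j = j^2 with eigenfunctions Phi_j(x) = sqrt(2/pi) sin(j x).
# (Illustrative sketch; the grid size 20001 is an arbitrary choice.)
x = np.linspace(0.0, np.pi, 20001)
h = x[1] - x[0]

def phi(j):
    return np.sqrt(2.0 / np.pi) * np.sin(j * x)

def l2_inner(j, k):
    """Trapezoid-rule approximation of the L^2(0, pi) inner product <Phi_j, Phi_k>."""
    f = phi(j) * phi(k)
    return float(h * (0.5 * f[0] + 0.5 * f[-1] + f[1:-1].sum()))

def eigen_residual(j):
    """Max norm of -Phi_j'' - j^2 Phi_j, via central finite differences."""
    p = phi(j)
    lap = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / h**2
    return float(np.max(np.abs(-lap - j**2 * p[1:-1])))

print(l2_inner(2, 2), l2_inner(2, 3), eigen_residual(3))
```

With these eigenpairs, the spectral representation on \([0,\pi ]\) becomes the explicit sine series used throughout Sect. 2.1.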
In the special case where \(\dot{L}\) is a Gaussian noise, the existence, uniqueness and regularity of solutions to Eq. (1.1) have been extensively studied in the literature, see e.g. [3, 10, 23, 39] for the case of space–time white noise, [15, 36, 37] for noises that are white in time but colored in space, and [22] for noises that may exhibit temporal covariances as well. In all cases, the mild solution defined by (1.2) is jointly locally Hölder continuous in space and time, with exponents that depend on the covariance structure of the noise.
By contrast, suppose that \(\dot{L}\) is a Lévy space–time white noise without Gaussian part, that is,

$$\begin{aligned} L(\mathrm {d}t,\mathrm {d}x) = b\,\mathrm {d}t\,\mathrm {d}x + \int _{|z|\leqslant 1} z\, {\tilde{J}}(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z) + \int _{|z|>1} z\, J(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z), \end{aligned}$$
(1.6)
where \(b\in {\mathbb {R}}\), J is an \(({\mathcal {F}}_t)_{t\in [0,T]}\)-Poisson random measure on \([0,T]\times D \times {\mathbb {R}}\) with intensity \(\mathrm {d}t \, \mathrm {d}x \, \nu (\mathrm {d}z)\), and \({{\tilde{J}}}\) is the compensated version of J. Here \(\nu \) is a Lévy measure, that is, \(\nu (\{0\}) =0\) and \(\int _{\mathbb {R}}\left( z^2\wedge 1 \right) \,\nu (\mathrm {d}z)<+\infty \), and we assume that \(\nu \) is not identically zero. The existence and uniqueness of solutions for equations like (1.1) with Lévy noise have been investigated in [1, 2, 11, 12, 31, 35].
Already in the linear case with \(\sigma (x)\equiv 1\), due to the singularity of the Green’s kernel on the diagonal \(x=y\) near \(t=0\), each jump of the noise creates a Dirac mass for the solution. Even worse, if \(\nu ({\mathbb {R}})=\infty \), these space–time jump points form a dense subset of \([0,T]\times D\). Hence one cannot expect the solution to have any continuity properties jointly in space and time.
In this article, we thus take two different viewpoints and consider
1. The path properties of \(t\mapsto u(t,\cdot )\) as a process with values in an infinite-dimensional space;
2. The path properties of the partial maps \(t\mapsto u(t,x)\) for fixed \(x\in D\), and of \(x\mapsto u(t,x)\) for fixed \(t\in [0,T]\).
For each \(t\geqslant 0\), \(u(t,\cdot )\) may take values in \(L^p(D)\) for some \(p>0\) almost surely, but since each atom of the Lévy noise introduces a Dirac delta into the solution, the process \(t\mapsto u(t,\cdot )\) cannot have a càdlàg version in such a space (see also [7] or [31, Proposition 9.25]). Instead, one should consider spaces of distributions containing delta functions, such as negative fractional Sobolev spaces \(H_r(D)\) for \(r<-\frac{d}{2}\) (see Sects. 2.1.1, 2.2.2 and 2.3.1). If \(\sigma =1\) and the noise has a finite second moment, the existence of a càdlàg modification in such spaces follows from a result of [24] on maximal inequalities for stochastic convolutions in an infinite-dimensional setting, see also [31, Chapter 9.4.2]. This type of result has also been obtained in the case of additive (possibly colored) Lévy noise in [8, 9, 32]. To the best of our knowledge, the question of existence of càdlàg versions in the case of multiplicative noise has only been studied in [21]. For the relation of the results of [21] to ours, see Remark 2.16.
In Sect. 2 of this paper, we substantially generalize the aforementioned results in the case of a Lévy space–time white noise (1.6): Under no assumptions beyond those required for the existence of solutions, we prove in Theorems 2.5, 2.15 and 2.19, for both a bounded domain D and the case \(D={\mathbb {R}}^d\), that \(t\mapsto u(t,\cdot )\) has a càdlàg modification in \(H_r(D)\) and \(H_{r,loc}({\mathbb {R}}^d)\), respectively, for any \(r<-\frac{d}{2}\). To this end, we start our analysis by considering the stochastic heat equation on the interval \(D=[0,\pi ]\) in Sect. 2.1. Treating this basic case first has the advantage that we can directly proceed to the main steps of the proof while avoiding the technical difficulties of the general case. Next, in Sect. 2.2, we demonstrate how the proof for \(D=[0,\pi ]\) can be directly extended to the case \(D={\mathbb {R}}^d\), provided \(\sigma \) is bounded and \(\nu \) has finite second moments. But in order to cover the general case of Lipschitz continuous \(\sigma \) and heavy-tailed noises, we need to use stopping time techniques from [12] to deal with the (infinitely many) large jumps of the noise, as well as results from the integration theory for general random measures (see the Appendix) to compensate for the absence of finite second moments for \(d\geqslant 2\), due to the singularity of the heat kernel and the small jumps of the noise. Finally, the proof for \(D=[0,\pi ]\) does not extend to bounded domains in \({\mathbb {R}}^d\) with \(d\geqslant 2\) because the eigenfunctions are typically no longer uniformly bounded. Instead, the proof we give in Sect. 2.3 makes use of the fact that in the interior of D, the Green’s function \(G_D\) can be decomposed into the Gaussian density g (where we can use the results of Sect. 2.2) and a smooth function. With the methods reviewed in the Appendix, we also obtain sufficient control at the boundary of D.
Regarding the partial regularity of \(t\mapsto u(t,x)\) and \(x\mapsto u(t,x)\), [35, Section 2] obtained the following result on \(D={\mathbb {R}}^d\): If the Lévy measure \(\nu \) of L satisfies \(\int _{\mathbb {R}}|z|^p \,\nu (\mathrm {d}z)<+\infty \) for some \(p<\frac{2}{d}\), then for fixed t, the process \(x\mapsto u(t,x)\) has a continuous modification. Similarly, if \(\int _{\mathbb {R}}|z|^p \,\nu (\mathrm {d}z)<+\infty \) for some \(p<1\), then there exists a continuous modification of \(t\mapsto u(t,x)\) for every fixed x. Extending the results of [35], our Theorems 3.1 and 3.5, which also apply to bounded domains, show that it suffices that \(\int _{[-1,1]} |z|^p \,\nu (\mathrm {d}z)\) be finite, which includes, for example, \(\alpha \)-stable noises with \(\alpha <\frac{2}{d}\) (for spatial regularity) and \(\alpha <1\) (for temporal regularity). Furthermore, these conditions are essentially sharp, as we show in Theorems 3.3 and 3.7: If \(\sigma \equiv 1\), and if \(\nu \) has the same behavior near the origin as the Lévy measure of an \(\alpha \)-stable noise, then for \(\frac{2}{d} \leqslant \alpha < 1+\frac{2}{d}\) (resp. \(1\leqslant \alpha <1+\frac{2}{d}\)), the paths of \(x\mapsto u(t,x)\) (resp. \(t\mapsto u(t,x)\)) are unbounded on any non-empty open subset of D (resp. [0, T]). Let us remark that the last conclusion was observed in [29] for an \(\alpha \)-stable noise \(\Lambda \) and the (non-Lipschitz) function \(\sigma (x)=x^{\frac{1}{\alpha }}\) with \(\alpha \in (1,1+\frac{2}{d})\) via a connection between the resulting equation and stable super-Brownian motion (note that our \(\alpha \) is \(1+\beta \) in this reference).
In what follows, the letter C, occasionally with subscripts indicating the parameters that it depends on, denotes a strictly positive finite number whose value may change from line to line.
2 Regularity of the solution in fractional Sobolev spaces
2.1 The stochastic heat equation on an interval
For the interval \(D=[0,\pi ]\), the Green’s function \(G=G_D\) has the explicit representation

$$\begin{aligned} G(t;x,y) = \frac{2}{\pi }\sum _{k\geqslant 1} e^{-k^2 t} \sin (kx)\sin (ky), \quad (t,x,y)\in (0,T]\times [0,\pi ]^2. \end{aligned}$$
(2.1)
The existence and uniqueness of mild solutions to (1.1) in this case basically follow from [11].
Proposition 2.1
Let \(\sigma :{\mathbb {R}}\rightarrow {\mathbb {R}}\) be a Lipschitz function, \(u_0:[0,\pi ]\rightarrow {\mathbb {R}}\) be continuous with \(u_0(0)=u_0(\pi )=0\), and L be a pure jump Lévy white noise as in (1.6). Furthermore, define

$$\begin{aligned} \tau _N = \inf \left\{ t\in [0,T] : J\left( [0,t]\times [0,\pi ]\times \{z : |z|>N\}\right)> 0\right\} , \end{aligned}$$
(2.2)
with the convention \(\inf \emptyset = +\infty \). Then \((\tau _N)_{N\geqslant 1}\) is an increasing sequence of stopping times such that, almost surely, \(\tau _N>0\) for every N and \(\tau _N = +\infty \) for all sufficiently large N. In addition, up to modifications, (1.1) has a mild solution u satisfying

$$\begin{aligned} \sup _{(t,x)\in [0,T]\times [0,\pi ]} {\mathbb {E}}\left[ |u(t,x)|^p\, \mathbb {1}_{\{t\leqslant \tau _N\}}\right] <+\infty \end{aligned}$$
(2.3)
for any \(0<p<3\) and \(N\in {\mathbb {N}}\). Furthermore, up to modifications, this solution is unique among all predictable random fields that satisfy (2.3).
Proof
Since \([0,\pi ]\) is a bounded interval, almost surely, there is only a finite number of jumps larger than N in \([0,T]\times [0,\pi ]\). This immediately implies the statements about \((\tau _N)_{N\geqslant 1}\). Next, by [3, (B.5)], we know that \(G(t;x,y)\leqslant Cg (t,x-y)\) for any \((t,x,y)\in [0,T]\times [0,\pi ]^2\), with g as in (1.4). Consequently, (1) to (4) of Assumption B of [11] are satisfied, and we can apply [11, Theorem 3.5] to obtain the existence of a unique mild solution to (1.1) satisfying (2.3) for all \(p\in (0,2]\). In order to extend this to all \(p\in (2,3)\), we notice that the only step in the proof of [11, Theorem 3.5] that uses \(p\leqslant 2\) is the moment estimate (6.9) given in [11, Lemma 6.1(2)] with respect to the martingale part \(L^M\). We now elaborate how this estimate can be extended to exponents \(2< p<3\). For predictable processes \(\phi _1\) and \(\phi _2\), we can use [28, Theorem 1] to get the upper bound
Using Hölder’s inequality with respect to the measure \(|G(t-s;x,y)|^2\,\mathrm {d}s\, \mathrm {d}y\), the first term is further bounded by
Since \(G(t;x,y)\leqslant Cg (t,x-y)\) for any \((t,x,y)\in [0,T]\times [0,\pi ]^2\), and \(\int _0^T \int _{\mathbb {R}}g^2(t,x) \,\mathrm {d}t\,\mathrm {d}x<+\infty \), we obtain the following estimate from (2.4) and the simple inequality \(|x|^2 \leqslant |x|+|x|^p\) for all \(p\geqslant 2\):
Since \(\int _0^T \int _{\mathbb {R}} g^p(t,x)\,\mathrm {d}x\,\mathrm {d}t < +\infty \) for \(p<3\) (see e.g. [11, (3.20)]), this is exactly the extension of [11, Lemma 6.1(2)] needed to complete the proof of [11, Theorem 3.5]. \(\square \)
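The finiteness of this Gaussian integral can be verified directly: with g as in (1.4) for \(d=1\),

$$\begin{aligned} \int _0^T \int _{{\mathbb {R}}} g^p(t,x)\,\mathrm {d}x\,\mathrm {d}t = \int _0^T (4\pi t)^{-\frac{p}{2}} \left( \frac{4\pi t}{p}\right) ^{\frac{1}{2}} \mathrm {d}t = C_p \int _0^T t^{\frac{1-p}{2}}\,\mathrm {d}t, \end{aligned}$$

using \(\int _{{\mathbb {R}}} e^{-\frac{p x^2}{4t}}\,\mathrm {d}x = \big ( \frac{4\pi t}{p}\big ) ^{\frac{1}{2}}\); the last integral is finite if and only if \(\frac{p-1}{2}<1\), that is, \(p<3\).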
2.1.1 The fractional Sobolev spaces \(H_r([0,\pi ])\)
For any function \(f\in L^2([0,\pi ])\), we can define its Fourier sine coefficients

$$\begin{aligned} a_n(f) = \sqrt{\frac{2}{\pi }} \int _0^\pi f(x)\sin (nx)\,\mathrm {d}x, \quad n\geqslant 1. \end{aligned}$$
(2.5)
Then, by Parseval’s identity, \(\Vert f\Vert ^2_{L^2([0,\pi ])}=\sum _{n\geqslant 1}a_n(f)^2\). For any \(r\geqslant 0\), we define

$$\begin{aligned} H_r([0,\pi ]) = \left\{ f\in L^2([0,\pi ]) : \Vert f\Vert _{H_r}^2 := \sum _{n\geqslant 1} \left( 1+n^2\right) ^r a_n(f)^2 <+\infty \right\} . \end{aligned}$$
This is a Hilbert space for the inner product \(\left\langle f, h \right\rangle _{H_r}:= \sum _{n\geqslant 1} \left( 1+n^2\right) ^r a_n(f) a_n(h)\). For \(r>0\), we define \(H_{-r}([0,\pi ])\) as the dual space of \(H_{r}([0,\pi ])\), that is, the space of continuous linear functionals on \(H_{r}([0,\pi ])\). Then \(H_{-r}([0,\pi ])\) is isomorphic to the space of sequences \(b=(b_n)_{n\geqslant 1}\) such that

$$\begin{aligned} \Vert b\Vert _{H_{-r}}^2 := \sum _{n\geqslant 1} \left( 1+n^2\right) ^{-r} b_n^2 <+\infty . \end{aligned}$$
More precisely, for \(r>0\) and \(\tilde{f}\in H_{-r}([0,\pi ])\), the coefficients \(b_n\) are given by \(b_n= \tilde{f} \big (\sqrt{\frac{2}{\pi }}\sin (n\cdot )\big )\). Then, \(\Vert \tilde{f}\Vert _{H_{-r}} = \Vert b \Vert _{H_{-r}}\) and the duality between \(H_{-r}([0,\pi ])\) and \(H_{r}([0,\pi ])\) is given by

$$\begin{aligned} \left\langle \tilde{f}, f\right\rangle = \sum _{n\geqslant 1} b_n\, a_n(f), \quad f\in H_r([0,\pi ]). \end{aligned}$$
For example, it is easy to check that \(\delta _x \in H_r([0,\pi ])\) for any \(x\in (0,\pi )\) and \(r<-\frac{1}{2}\). Indeed, \(\delta _x(\sin (n\cdot ))=\sin (nx)\), and for any \(r<-\frac{1}{2}\),

$$\begin{aligned} \Vert \delta _x\Vert _{H_r}^2 = \frac{2}{\pi }\sum _{n\geqslant 1} \left( 1+n^2\right) ^r \sin ^2(nx) \leqslant \frac{2}{\pi }\sum _{n\geqslant 1} \left( 1+n^2\right) ^r <+\infty . \end{aligned}$$
2.1.2 Existence of a càdlàg solution in \(H_r([0,\pi ])\) with \(r<-\frac{1}{2}\)
In order to motivate why we consider fractional Sobolev spaces \(H_r([0,\pi ])\) with \(r<-\frac{1}{2}\), we start with a special case. Suppose that \(b=0\) and that \(\nu \) is a symmetric measure with \(\nu ({\mathbb {R}})<+\infty \). Then we can rewrite \(L=\sum _{i=1}^{N_T} Z_i \delta _{(T_i, X_i)}\), where \((T_i,X_i,Z_i)\) are the atoms of the Poisson random measure J, and \(N_T=J([0,T]\times [0,\pi ]\times {\mathbb {R}})\) is the almost surely finite number of these atoms. The stochastic integral in (1.2) then reduces to a finite sum of terms of the form \(c_i\, \mathbb {1}_{\{t>T_i\}}\, G(t-T_i;\cdot ,X_i)\) with random coefficients \(c_i\).
In this case, it suffices to check whether for fixed \(i\geqslant 1\), \(t\mapsto G(t-T_i; \cdot , X_i)\) is càdlàg in \(H_r([0,\pi ])\). Using the series representation (2.1), we immediately see that the function \(x\mapsto G(t-T_i; x,X_i)\) belongs to \(H_r([0,\pi ])\) if and only if

$$\begin{aligned} \sum _{k\geqslant 1} \left( 1+k^2\right) ^r e^{-2k^2(t-T_i)} \sin ^2(kX_i) <+\infty . \end{aligned}$$
This is the case for any \(r\in {\mathbb {R}}\) if \(t\ne T_i\). However, for \(t\downarrow T_i\), we have to restrict to \(r<-\frac{1}{2}\). Indeed, for the càdlàg property, the only point where a problem might appear is at \(t=T_i\). At this point, the existence of a left limit is obvious since \(G(t-T_i; \cdot , X_i)=0\) for any \(t<T_i\). For right-continuity, we use the fact that \((1-e^{-k^2 h})^2\leqslant k^{2\varepsilon }h^\varepsilon \) for any \(0<\varepsilon < -\frac{1}{2} -r\), so

$$\begin{aligned} \Vert G(t-T_i;\cdot ,X_i)-\delta _{X_i}\Vert _{H_r}^2 = \frac{2}{\pi }\sum _{k\geqslant 1} \left( 1+k^2\right) ^r \left( 1-e^{-k^2(t-T_i)}\right) ^2 \sin ^2(kX_i) \leqslant \frac{2}{\pi }\,(t-T_i)^\varepsilon \sum _{k\geqslant 1} \left( 1+k^2\right) ^{r+\varepsilon } \longrightarrow 0 \end{aligned}$$

as \(t\downarrow T_i\).
Therefore, \(t\mapsto u(t,\cdot )\) is càdlàg in \(H_r([0,\pi ])\). For the general case, we first treat the drift term.
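These two computations can be explored numerically. The sketch below (an illustration only; the truncation levels and the point \(x_0\) are arbitrary choices) evaluates partial sums of \(\Vert \delta _{x_0}\Vert _{H_r}^2\) and of the \(H_r\) distance between \(G(h;\cdot ,x_0)\) and \(\delta _{x_0}\), whose Fourier sine coefficients are \(\sqrt{2/\pi }\, e^{-k^2 h}\sin (kx_0)\) and \(\sqrt{2/\pi }\sin (kx_0)\), respectively.

```python
import numpy as np

x0 = 1.0  # a point in (0, pi); arbitrary choice

def delta_norm_sq(r, K):
    """Partial sum up to K of ||delta_{x0}||_{H_r}^2 = (2/pi) sum (1+k^2)^r sin^2(k x0)."""
    k = np.arange(1, K + 1, dtype=float)
    return float((2.0 / np.pi) * np.sum((1.0 + k**2) ** r * np.sin(k * x0) ** 2))

def dist_to_delta_sq(r, h, K=200_000):
    """Partial sum of ||G(h; ., x0) - delta_{x0}||_{H_r}^2 via (1 - e^{-k^2 h})^2."""
    k = np.arange(1, K + 1, dtype=float)
    return float(
        (2.0 / np.pi)
        * np.sum((1.0 + k**2) ** r * (1.0 - np.exp(-(k**2) * h)) ** 2 * np.sin(k * x0) ** 2)
    )

# delta_{x0} lies in H_r iff r < -1/2: for r = -0.6 the partial sums stabilize,
# while for r = -0.4 they keep growing with the truncation level.
grow_conv = delta_norm_sq(-0.6, 10**6) / delta_norm_sq(-0.6, 10**5)
grow_div = delta_norm_sq(-0.4, 10**6) / delta_norm_sq(-0.4, 10**5)
# Right-continuity at a jump time: the H_r distance shrinks as h decreases (r = -1.0).
dists = [dist_to_delta_sq(-1.0, h) for h in (1e-1, 1e-2, 1e-3, 1e-4)]
print(grow_conv, grow_div, dists)
```

The contrast between the two growth ratios reflects the threshold \(r=-\frac{1}{2}\), and the decreasing distances illustrate the right-continuity argument at a jump time.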
Lemma 2.2
Assume that u is the unique solution to (1.1) as in Proposition 2.1. Then,

$$\begin{aligned} F(t,x) := V(t,x) + b\int _0^t \int _0^\pi G(t-s;x,y)\, \sigma (u(s,y))\,\mathrm {d}s\,\mathrm {d}y \end{aligned}$$
is jointly continuous in \((t,x)\in [0,T]\times [0,\pi ]\). In particular, for every \(r\leqslant 0\), the process \(t\mapsto F(t,\cdot )\) is continuous in \(H_r([0,\pi ])\).
Proof
The continuity of V is standard, and with (2.1), the integral term in F equals
Each term in this series is jointly continuous in (t, x). Hence, it suffices to show the uniform convergence of the series. Using Hölder’s inequality and the fact that u has uniformly bounded moments of any order \(p<3\), we obtain this from
Then, to prove the continuity of \(t\mapsto F(t,\cdot )\) in \(H_r([0,\pi ])\), it suffices to show the continuity in \(L^2([0,\pi ])\) because \(L^2([0,\pi ]) \hookrightarrow H_r([0,\pi ])\) (that is, \(L^2([0,\pi ])\) is continuously embedded in \(H_r([0,\pi ])\)). The continuity in \(L^2([0,\pi ])\) in turn follows from the fact that F is uniformly continuous on the compact domain \([0,T]\times [0,\pi ]\). \(\square \)
Proposition 2.3
Let L be a pure jump Lévy white noise, and let \(\sigma \) be a bounded and Lipschitz function. Let u be the mild solution to the stochastic heat equation (1.1). Then, for any \(r<-\frac{1}{2}\), the stochastic process \((u(t, \cdot ))_{t\in [0,T]}\) has a càdlàg version in \(H_r([0,\pi ])\).
Proof
For \(N\in {\mathbb {N}}\), consider the truncated noise

$$\begin{aligned} L_N(\mathrm {d}t,\mathrm {d}x) = b\,\mathrm {d}t\,\mathrm {d}x + \int _{|z|\leqslant 1} z\,{\tilde{J}}(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z) + \int _{1<|z|\leqslant N} z\, J(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z), \end{aligned}$$
(2.6)
as well as the mild solution \(u_N\) to (1.1) driven by \(L_N\), that is,

$$\begin{aligned} u_N(t,x) = V(t,x) + \int _0^t \int _0^\pi G(t-s;x,y)\, \sigma (u_N(s,y))\, L_N(\mathrm {d}s,\mathrm {d}y) \end{aligned}$$
(2.7)
for \((t, x) \in [0, T] \times [0, \pi ]\). Then, by definition, we have \(L_N = L\) and therefore also \(u=u_N\) on the event \(\{T\leqslant \tau _N\}\), where \(\tau _N\) was defined in (2.2). Since almost surely, \(\tau _N=+\infty \) for sufficiently large N, we have stationary convergence of the processes \(u_N\) in (2.7) to u (that is, almost surely, for large enough N, \(u_N(t,x)=u(t,x)\) for all \((t,x)\in [0,T]\times [0,\pi ]\)). As we are interested in sample path properties of the mild solution to (1.1), and these properties are identical to those of \(u_N\) for sufficiently large N, it is enough to consider \(u_N\) instead of u in the following. The value of the parameter N has no importance in our study, so we take \(N=1\) for simplicity and drop the dependency in N. Therefore, it suffices to consider the solution to the integral equation

$$\begin{aligned} u(t,x) = V(t,x) + \int _0^t \int _0^\pi G(t-s;x,y)\, \sigma (u(s,y))\, L(\mathrm {d}s,\mathrm {d}y), \end{aligned}$$
(2.8)
in other words, to assume that all jumps of L are bounded by 1. Furthermore, by Lemma 2.2, it only remains to consider the process

$$\begin{aligned} u^M(t,x) = \int _0^t \int _0^\pi G(t-s;x,y)\, \sigma (u(s,y))\, L^M(\mathrm {d}s,\mathrm {d}y), \end{aligned}$$
(2.9)

where \(L^M(\mathrm {d}t,\mathrm {d}x) = \int _{|z|\leqslant 1} z\,{\tilde{J}}(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z)\) is the martingale part of L.
For this purpose, we need to calculate the Fourier sine coefficients defined in (2.5). To lighten the notations, in what follows, we will denote these coefficients by \(a_k(t)\). Then, by definition, for every \(k \geqslant 1\),

$$\begin{aligned} a_k(t) = a_k\left( u^M(t,\cdot )\right) = \sqrt{\frac{2}{\pi }}\int _0^\pi u^M(t,x)\sin (kx)\,\mathrm {d}x. \end{aligned}$$
We want to exchange the stochastic integral and the Lebesgue integral, and because all involved terms are square-integrable, Theorem A.3 with \(p=2\) allows us to do so. Therefore,

$$\begin{aligned} a_k(t) = \sqrt{\frac{2}{\pi }}\int _0^t \int _0^\pi e^{-k^2(t-s)}\sin (ky)\, \sigma (u(s,y))\, L^M(\mathrm {d}s,\mathrm {d}y). \end{aligned}$$
(2.10)
We gather some moment estimates for the family of integrals

$$\begin{aligned} I_a^b(k) = \int _a^b \int _0^\pi e^{k^2 s}\sin (ky)\, \sigma (u(s,y))\, L^M(\mathrm {d}s,\mathrm {d}y), \end{aligned}$$
(2.11)
where \(0 \leqslant a < b\leqslant T\). Since \(\sigma \) is bounded, we can estimate the second and fourth moments of \(I_a^b(k)\) using [28, Theorem 1]:
where C also depends on \(\int _{|z|\leqslant 1} z^2\, \nu (\mathrm {d}z)\) and \(\int _{|z|\leqslant 1} z^4\, \nu (\mathrm {d}z)\), both of which are finite.
Also, for \(0\leqslant a<b\leqslant c <d \leqslant T\), still assuming that \(\sigma \) is bounded,
where we used the fact that \(I_a^b(k)\) is \(\mathcal F_c\)-measurable, and (2.12) in the second inequality.
We will use [18, Chapter III, Sect. 4, Theorem 1] to show the existence of a càdlàg version of \(t\mapsto u^M(t,\cdot )\). By [18, Chapter III, Sect. 2, Theorem 1], u has a separable version, which is, because of [11, Theorem 4.7] and [3, Lemma B.1], continuous in \(L^2(\Omega )\), and therefore \(t\mapsto u^M(t,\cdot )\) is continuous in \(L^2(\Omega )\) as a process with values in \(L^2([0,\pi ])\) (and thus in \(H_r([0,\pi ])\) since \(r<-\frac{1}{2}\)). Then it suffices to show that for any \(t\in [0,T]\), \(u(t, \cdot ) \in H_r([0,\pi ])\), and that for some \(\delta >0\),

$$\begin{aligned} {\mathbb {E}}\left[ \Vert u^M(t+h,\cdot )-u^M(t,\cdot )\Vert _{H_r}^2\, \Vert u^M(t,\cdot )-u^M(t-h,\cdot )\Vert _{H_r}^2\right] \leqslant C h^{1+\delta } \end{aligned}$$
(2.14)
for any \(h\in (0,1)\). By (2.12), we have \({\mathbb {E}}\left[ a_k^2(t) \right] \leqslant CT\), so for \(r<-\frac{1}{2}\), we have

$$\begin{aligned} \sum _{k\geqslant 1}\left( 1+k^2\right) ^r a_k^2(t) <+\infty \end{aligned}$$
almost surely, and \(u(t,\cdot )\in H_r([0,\pi ])\) is proved. Next,
so using (2.10),
Therefore, using the classical inequality \((a+b)^2\leqslant 2 (a^2+b^2)\),
for some constant C, where
We treat each of the four terms separately.
\(\underline{A_1(j,k):}\)
By (2.13), we can write
where we used \(1-e^{-2k^2h} \leqslant 2k^2h\) and \(1-e^{-2j^2(t-h)} \leqslant 1\) in the last inequality. Since \((1-e^{-k^2h})^2\leqslant 1\) and \((1-e^{-j^2h})^2\leqslant j^2 h\), we deduce that
Also, by the Cauchy–Schwarz inequality,
By (2.12) and subadditivity of the square root,
Let \(0<\delta <\frac{3}{2}\), to be chosen later. Then, multiplying each term by \((1-e^{-k^2h})^2\) and using \((1-e^{-k^2h})^2\leqslant k^{2} h\) for the first term, and \((1-e^{-k^2h})^2=(1-e^{-k^2h})^{\frac{1}{2} +\delta }(1-e^{-k^2h})^{\frac{3}{2} -\delta } \leqslant k^{1+2\delta } h^{\frac{1}{2}+\delta }\) for the second term of the sum, we get
A similar calculation yields
Then, we combine (2.16), (2.18) and (2.19) to obtain
Therefore, (2.15) and (2.20) give
\(\underline{A_2(j,k):}\) We treat this term in a similar way to \(A_1(j,k)\):
In the same way as for the term \({\tilde{A}}_2(j,k)\), we get
We use the Cauchy–Schwarz inequality to deal with the term \(B_2(j,k)\):
As in (2.17), we get
and similarly \({\mathbb {E}}\left[ I_{t-h}^{t}(k)^4 \right] ^{\frac{1}{2}} \leqslant C e^{2k^2t} \left( h+ \sqrt{h} \right) \). Also, for \(0<\delta <1\), since \((1-e^{-k^2h})^2\leqslant k^{2\delta }h^\delta \),
\(\underline{A_3(j,k):}\) By (2.13),
Then, since \((1-e^{-j^2h})^2 \leqslant j^2h\) and \(1-e^{-2k^2h} \leqslant 2k^2h\), we get
\(\underline{A_4(j,k):}\) Again, by (2.13),
Therefore, as for the previous term we get
Now, for every \(r<-\frac{1}{2}\), we can pick \(0<\delta <1\) such that \(r+\delta <-\frac{1}{2}\). Then,
so we deduce that \((u^M(t,\cdot ))_{t\geqslant 0}\) has a càdlàg version in \(H_r([0,\pi ])\) for any \(r<-\frac{1}{2}\). \(\square \)
Remark 2.4
The result of Proposition 2.3 is in fact valid for any predictable random field u whose Fourier sine coefficients can be written in the form
where Z is another predictable and bounded random field.
For unbounded \(\sigma \), we deduce the result from Proposition 2.3 via an approximation argument.
Theorem 2.5
Let u be the mild solution to the stochastic heat equation (1.1) constructed in Proposition 2.1. Then, for any \(r<-\frac{1}{2}\), the process \((u(t, \cdot ))_{t\in [0,T]}\) has a càdlàg version in \(H_r([0,\pi ])\).
Remark 2.6
The constraint \(r<-\frac{1}{2}\) in Theorem 2.5 is optimal. This follows from the discussion at the beginning of Sect. 2.1.2 and the fact that the Dirac delta distribution \(\delta _a\), \(a\in (0,\pi )\), does not belong to \(H_s([0,\pi ])\) for any \(s\geqslant -\frac{1}{2}\).
Proof of Theorem 2.5
By the argument given at the beginning of the proof of Proposition 2.3 and by Lemma 2.2, we only need to consider \(u^M\) as defined in (2.9). Let \(\sigma _n(u)= \sigma (u)\mathbb {1}_{|u|\leqslant n}\). We define

$$\begin{aligned} u^M_n(t,x) = \int _0^t \int _0^\pi G(t-s;x,y)\, \sigma _n(u(s,y))\, L^M(\mathrm {d}s,\mathrm {d}y). \end{aligned}$$
As in (2.10), the Fourier sine coefficients of \(t\mapsto u^M(t, \cdot )-u^M_n(t,\cdot )\) are given by
Therefore, for any \(t\in [0,T]\),
Then, using \(e^{-k^2(t-s)}=1-\int _s^t k^2 e^{-k^2(t-r)}\,\mathrm {d}r\) and Theorem A.3 and (A.3) with \(p=2\), we can rewrite
where \(\sigma _{(n)} (s,y):= \sigma (u(s,y))- \sigma _n(u(s,y))\). Therefore,
where C does not depend on k. So by Doob’s inequality, we deduce that
By (2.3), (2.23) and (2.25), it follows from dominated convergence that for any \(r<-\frac{1}{2}\),

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{t\in [0,T]} \Vert u^M(t,\cdot )-u^M_n(t,\cdot )\Vert _{H_r}^2\right] \rightarrow 0 \end{aligned}$$
as \(n\rightarrow +\infty \). Therefore, \( \sup _{t\in [0,T]} \Vert u(t,\cdot )-u_n(t,\cdot )\Vert _{H_r}\rightarrow 0\) in \(L^2(\Omega )\) as \(n\rightarrow +\infty \), and there is a subsequence \((n_k)_{k\geqslant 0}\) such that \(\sup _{t\in [0,T]} \Vert u(t,\cdot )-u_{n_k}(t,\cdot )\Vert _{H_r}\rightarrow 0\) almost surely as \(k\rightarrow +\infty \). This means that \(u_{n_k}(t,\cdot )\) converges to \(u(t,\cdot )\) in \(H_r([0,\pi ])\) uniformly in time for any \(r<-\frac{1}{2}\). Since \(\sigma _{n_k}\) is bounded, \(t \mapsto u_{n_k}(t,\cdot )\) has a càdlàg version in \(H_r([0,\pi ])\) by Proposition 2.3 and Remark 2.4. Therefore, \(t \mapsto u(t,\cdot )\) has a càdlàg version in \(H_r([0,\pi ])\) for any \(r<-\frac{1}{2}\). \(\square \)
2.2 The stochastic heat equation on \({\mathbb {R}}^d\)
In [12], the first author proved the existence of a solution to the stochastic heat equation on \({\mathbb {R}}^d\) under assumptions on the driving noise that are general enough to include the case of \(\alpha \)-stable noises. More specifically, suppose that \(D={\mathbb {R}}^d\) in (1.1) and that the following hypotheses hold:
- (H): There exists \(0<p<1+\frac{2}{d}\) and \(\frac{p}{1+\left( 1+\frac{2}{d} -p\right) }<q\leqslant p\) such that
$$\begin{aligned} \int _{|z|\leqslant 1}|z| ^p \,\nu (\mathrm {d}z)+\int _{|z|>1}|z|^q\,\nu ( \mathrm {d}z)<+\infty . \end{aligned}$$

If \(p<1\), we assume that \(b_0:=b-\int _{|z|\leqslant 1} z\,\nu (\mathrm {d}z)=0\).
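As an illustration of (H) (a sketch under the assumption of a symmetric \(\alpha \)-stable noise with \(\nu (\mathrm {d}z)=|z|^{-1-\alpha }\,\mathrm {d}z\), which is only one of the cases covered), both integrals can be evaluated in closed form: the small-jump integral is finite exactly when \(p>\alpha \), and the big-jump integral exactly when \(q<\alpha \).

```python
from fractions import Fraction as F

# Sketch for a symmetric alpha-stable noise, nu(dz) = |z|^{-1-alpha} dz:
# int_{|z|<=1} |z|^p nu(dz) = 2/(p - alpha) if p > alpha, else infinite;
# int_{|z|>1}  |z|^q nu(dz) = 2/(alpha - q) if q < alpha, else infinite.
def H_holds(alpha, d, p, q):
    small_jumps_finite = p > alpha
    big_jumps_finite = q < alpha
    p_range = 0 < p < 1 + F(2, d)
    q_range = p / (1 + (1 + F(2, d) - p)) < q <= p
    return small_jumps_finite and big_jumps_finite and p_range and q_range

# alpha = 1/2 in d = 2: (H) holds with p = 3/5, q = 2/5 ...
ok = H_holds(F(1, 2), 2, F(3, 5), F(2, 5))
# ... but fails once q >= alpha, since the big jumps then have no q-th moment.
bad = H_holds(F(1, 2), 2, F(3, 5), F(11, 20))
print(ok, bad)
```

Since \(\alpha <1+\frac{2}{d}\) allows choosing p slightly above and q slightly below \(\alpha \), every \(\alpha \)-stable noise with \(\alpha <1+\frac{2}{d}\) satisfies (H); in the symmetric case with \(p<1\), the extra condition \(b_0=0\) holds automatically.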
In contrast to the situation on a bounded domain, we can no longer use the stopping times \(\tau _N\) in (2.2) (with \([0,\pi ]\) replaced by \({\mathbb {R}}^d\)) to localize Eq. (1.1). Indeed, since \({\mathbb {R}}^d\) is unbounded, the \(\mathrm {d}t\,\mathrm {d}x\,\nu (\mathrm {d}z)\)-measure of \([0,T]\times {\mathbb {R}}^d \times [-N,N]^c\) will in general be infinite for any \(N\in {\mathbb {N}}\). In particular, on any time interval \([0,\varepsilon ]\) where \(\varepsilon >0\), we already have infinitely many jumps of arbitrarily large size, which implies that \(\tau _N=0\) almost surely for all \(N\in {\mathbb {N}}\).
Therefore, instead of using the stopping times (2.2), the idea is to use truncation levels that increase with the distance to the origin. More precisely, let \(h:{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) be the function \(h(x)=1+|x|^\eta \), for some \(\eta \) to be chosen later, and define for \(N\in {\mathbb {N}}\),

$$\begin{aligned} \tau _N = \inf \left\{ t\in [0,T] : J\left( \left\{ (s,x,z) : s\leqslant t,\ |z|>Nh(x)\right\} \right)> 0\right\} . \end{aligned}$$
(2.26)
For every \(N\geqslant 1\), we can now introduce a truncation of L by

$$\begin{aligned} L_N(\mathrm {d}t,\mathrm {d}x) = b\,\mathrm {d}t\,\mathrm {d}x + \int _{|z|\leqslant 1} z\,{\tilde{J}}(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z) + \int _{1<|z|\leqslant Nh(x)} z\, J(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z) \end{aligned}$$
(2.27)
(do not confuse this with \(L_N\) defined in (2.6), which was for the case of the interval \([0,\pi ]\)), which in turn gives rise to the equation

$$\begin{aligned} u_N(t,x) = V(t,x) + \int _0^t \int _{{\mathbb {R}}^d} g(t-s,x-y)\, \sigma (u_N(s,y))\, L_N(\mathrm {d}s,\mathrm {d}y). \end{aligned}$$
(2.28)
Proposition 2.7
Let \(\sigma \) be Lipschitz continuous, \(u_0\) be bounded and continuous, and L be a Lévy white noise as in (1.6) satisfying (H) for some \(p,q>0\). Then, if we choose \(\frac{d}{q}<\eta <\frac{2-d(p-1)}{p-q}\), we have, almost surely, \(\tau _N>0\) for every \(N\geqslant 1\) and \(\tau _N= +\infty \) for large N (recall the convention \(\inf \emptyset = +\infty \)). Moreover, for any \(N\geqslant 1\), there exists a solution \(u_N\) to (2.28) such that for some constant \(C_N<+\infty \), we have
for all \(R>1\), where \(E_{\alpha ,\beta }(z)=\sum _{k \geqslant 0} \frac{z^k}{\Gamma (\alpha k+\beta )}\) for \(\alpha ,\beta > 0\) and \(z\in {\mathbb {R}}\) are the Mittag–Leffler functions.
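The Mittag–Leffler series converges for every real z and can be evaluated by truncation; the sketch below (the number of terms is an arbitrary choice, and the evaluation is done in log space to avoid overflow of the Gamma function, valid for \(z>0\)) recovers the classical special cases \(E_{1,1}(z)=e^z\) and \(E_{2,1}(z)=\cosh \sqrt{z}\).

```python
import math

# Truncated series for the Mittag-Leffler function E_{alpha, beta}(z) from the
# bound (2.29), computed in log space so that Gamma(alpha k + beta) never
# overflows (sketch for z > 0; 400 terms is an arbitrary choice).
def mittag_leffler(alpha, beta, z, terms=400):
    s = 1.0 / math.gamma(beta)  # k = 0 term
    logz = math.log(z)
    for k in range(1, terms):
        s += math.exp(k * logz - math.lgamma(alpha * k + beta))
    return s

# Special cases: E_{1,1}(z) = e^z and E_{2,1}(z) = cosh(sqrt(z)).
print(mittag_leffler(1.0, 1.0, 2.0), mittag_leffler(2.0, 1.0, 4.0))
```

For \(\alpha =\beta =1\) the bound (2.29) thus reduces to an exponential moment estimate.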
Furthermore, we have \(u_N(t,x) = u_{N+1}(t,x)\) on \(\{t \leqslant \tau _N \}\), and the random field u defined by \(u(t,x)=u_N(t,x)\) on \(\{t\leqslant \tau _N\}\) is a mild solution to (1.1) on \(D={\mathbb {R}}^d\).
Proof
The result is a direct application of [12, Theorem 3.1] except for the moment property (2.29). The finiteness of the left-hand side is included in the cited theorem for \(p< 1+\frac{2}{d}\). In the case \(d=1\) and \(2<p<3\), the only thing we need is an extension of [12, Lemma 3.3(2)], which can be obtained by combining the arguments given in the proof of [12, Lemma 3.3(2)] and the proof of Proposition 2.1. Indeed, for predictable \(\phi _1\) and \(\phi _2\), proceeding as in the proof of [12, Lemma 3.3] but using the moment inequalities of [28, Theorem 1], we see that
In order to obtain the bound involving the Mittag–Leffler functions, observe from the calculations between the last display on page 2272 and Eq. (3.13) of [12] that for every fixed \(N\geqslant 1\), there exists \(C_N<+\infty \) such that
The first series converges for our choice of \(\eta \). Furthermore, for all \(a>0\), we have by Stirling’s formula that \(\Gamma (1+ax)^\frac{1}{p\vee 1} \geqslant C \Gamma (\frac{1+ax}{p\vee 1})\) when \(x>1\). Hence,
which is (2.29). \(\square \)
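The Stirling-type comparison in the last step can be sanity-checked numerically (a sketch with arbitrary sample values of a and \(p\vee 1\), not part of the proof): the log-difference \(\frac{1}{p\vee 1}\log \Gamma (1+ax)-\log \Gamma \big (\frac{1+ax}{p\vee 1}\big )\) is increasing in x, since its derivative is a positive multiple of \(\psi (1+ax)-\psi \big (\frac{1+ax}{p\vee 1}\big )\) and the digamma function \(\psi \) is increasing; hence it is bounded below for \(x>1\).

```python
import math

# Sketch (arbitrary sample parameters a = 1/2, p v 1 = 2) of the Stirling-type
# bound Gamma(1 + a x)^{1/(p v 1)} >= C * Gamma((1 + a x)/(p v 1)) for x > 1:
# the log-difference below should be increasing in x, hence bounded below.
def log_diff(a, pv1, x):
    y = 1.0 + a * x
    return math.lgamma(y) / pv1 - math.lgamma(y / pv1)

a, pv1 = 0.5, 2.0
xs = [1.0 + 0.5 * i for i in range(100)]
vals = [log_diff(a, pv1, x) for x in xs]
increasing = all(v2 >= v1 for v1, v2 in zip(vals, vals[1:]))
print(increasing, min(vals))
```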
2.2.1 Stationarity of the solution
The proof of Proposition 2.7 heavily relies on the stopping times \(\tau _N\) introduced in (2.26). These are “centered” around the origin in the sense that large jumps are permitted if they occur far enough from \(x=0\). As a consequence, even if the initial condition is constant, it does not follow a priori from Proposition 2.7 that the solution u to (1.1) is stationary in space. On the other hand, of course, choosing to center \(\tau _N\) around the origin is completely arbitrary. So in this section, we show that the solution constructed in Proposition 2.7 remains the same if we take other spatial reference points for \(\tau _N\), from which the stationarity of the solution in space will follow.
To this end, let \(\frac{d}{q}<\eta <\frac{2-d(p-1)}{p-q}\) and define the family of stopping times \(\tau _N^a\) by

$$\begin{aligned} \tau _N^a = \inf \left\{ t\in [0,T] : J\left( \left\{ (s,x,z) : s\leqslant t,\ |z|>Nh(x-a)\right\} \right)> 0\right\} . \end{aligned}$$
In particular, \(\tau _N^0\) is the same as \(\tau _N\) defined in (2.26). Since the intensity measure of J is invariant under translation in the space variable, \(\tau _N^a\) has the same law as \(\tau _N\), and the conclusions of Proposition 2.7 are valid for \(\tau _N^a\). In particular, for any \(N\geqslant 1\), almost surely \(\tau _N^a>0\), and \(\tau _N^a= +\infty \) for large N. Furthermore, by definition, on the event \(\left\{ t\leqslant \tau _N^a\right\} \), \(L(\mathrm {d}t, \mathrm {d}x)=L_N^a(\mathrm {d}t, \mathrm {d}x)\), where

$$\begin{aligned} L_N^a(\mathrm {d}t,\mathrm {d}x) = b\,\mathrm {d}t\,\mathrm {d}x + \int _{|z|\leqslant 1} z\,{\tilde{J}}(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z) + \int _{1<|z|\leqslant Nh(x-a)} z\, J(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z). \end{aligned}$$
Proposition 2.8
Let \(\sigma :{\mathbb {R}}\rightarrow {\mathbb {R}}\) be Lipschitz continuous, \(u_0\) be bounded and continuous, and L be a Lévy white noise as in (1.6) fulfilling the assumption (H) with \(p,q>0\). Then for any \(N\in {\mathbb {N}}\) and \(a\in {\mathbb {R}}^d\), there exists a mild solution \(u_N^a\) to (1.1) with noise \(L^a_N\) instead of L such that (2.29) also holds for \(u_N^a\).
Moreover, for \(a,b\in {\mathbb {R}}^d\), \(N\in {\mathbb {N}}\) and \((t,x) \in [0,T]\times {\mathbb {R}}^d\), we have

$$\begin{aligned} u_N^a(t,x)\,\mathbb {1}_{\{t\leqslant \tau _N^a\wedge \tau _N^b\}} = u_N^b(t,x)\,\mathbb {1}_{\{t\leqslant \tau _N^a\wedge \tau _N^b\}} \quad \text {almost surely}. \end{aligned}$$
(2.30)
Proof
The first part is proved in the same way as Proposition 2.7. For (2.30), we observe that \(L_N^a=L_N^b\) on \(\{t< \tau _N^a\wedge \tau _N^b\}\). Then we use the construction of the solutions \(u_N^a\) and \(u_N^b\) via a Picard iteration scheme as in the proof of [12, Theorem 3.1] and show that at each step of the scheme,

$$\begin{aligned} u_N^{a,n}(t,x)\,\mathbb {1}_{\{t\leqslant \tau _N^a\wedge \tau _N^b\}} = u_N^{b,n}(t,x)\,\mathbb {1}_{\{t\leqslant \tau _N^a\wedge \tau _N^b\}} \quad \text {almost surely}. \end{aligned}$$
(2.31)
For \(n=0\), we clearly have \(u_N^{a,0}(t,x)=u_N^{b,0}(t,x)=u_0(x)\). Now if (2.31) holds for some \(n\geqslant 0\), then
Since \(u_N^{a,n}(t,x) \rightarrow u_N^{a}(t,x)\) and \(u_N^{b,n}(t,x) \rightarrow u_N^{b}(t,x)\) as \(n\rightarrow +\infty \) in \(L^p(\Omega )\), we deduce (2.30). \(\square \)
In [15, Definition 5.1], the second author introduced the property (S) for a stochastic process and a martingale measure, which is a sort of stationarity property in the space variable. In our case, the noise is not necessarily a martingale measure, but we can use a similar definition:
Definition 2.9
We say the family of random fields \(u_N^a\) has property (S) if the law of the process

$$\begin{aligned} \left( u_N^a(t,a+x) : (t,x)\in [0,T]\times {\mathbb {R}}^d\right) \end{aligned}$$
does not depend on a.
Lemma 2.10
If \(u_0(x)\equiv u_0\) is constant, then the family \((u_N^a:a\in {\mathbb {R}}^d)\) has property (S).
Proof
Similarly to the proof of Proposition 2.8, it is enough to show property (S) for the Picard iterates \(u_N^{a,n}\) for each \(n\geqslant 0\). For \(n=0\), we obviously have \(u^{a,0}_N(t,x+a)=u_0=u^{0,0}_N(t,x)\). So property (S) for \(u_N^{a,n}\) follows from the fact that the law of \(\left( L_N^a\left( [0,t]\times \left( a+B\right) \right) , (t,B) \in [0,T]\times \mathcal B_b({\mathbb {R}}^d) \right) \) does not depend on a. Next, assume that \(u^{a,n}_N\) has the property (S). Since
we can use the same argument as in [15, Lemma 18], since the proof only relies on the fact that L has a law that is invariant under translation in the space variable. \(\square \)
Theorem 2.11
If \(u_0(x)\equiv u_0\) is constant, then for any \(a\in {\mathbb {R}}^d\), the random field \((u(t,a+x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\) has the same law as the random field \((u(t,x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\).
Proof
By (2.30), \(u_N^a(t,a+x)\mathbb {1}_{t\leqslant \tau _N^a\wedge \tau _N^0}= u_N^0(t,a+x)\mathbb {1}_{t\leqslant \tau _N^a\wedge \tau _N^0}\) almost surely. Taking the stationary limit as \(N\rightarrow +\infty \), we get that \(u^a(t,a+x)=u^0(t,a+x)\) almost surely for any \((t,x)\in [0,T]\times {\mathbb {R}}^d\). Also, by the property (S) of the family of random fields \((u_N^a:a\in {\mathbb {R}}^d)\) (see Lemma 2.10), the random field \((u_N^a(t,a+x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\) has the same law as the random field \((u_N^0(t,x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\). Again, taking the stationary limit as \(N\rightarrow +\infty \), we get that the random field \((u^a(t,a+x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\) has the same law as the random field \((u^0(t,x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\). Therefore, the random field \((u^0(t,a+x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\) has the same law as the random field \((u^0(t,x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\). \(\square \)
2.2.2 Existence of a càdlàg solution in \(H_{r,loc}({\mathbb {R}}^d)\) with \(r<-\frac{d}{2}\)
In the following, we want to establish a regularity result for the paths of the mild solution to (1.1) in the case \(D={\mathbb {R}}^d\), analogous to Theorem 2.5 which concerns \(D=[0,\pi ]\). Since \(D={\mathbb {R}}^d\) is unbounded, and the solution may not decay in space (see Theorem 2.11), we consider the mild solution \(u:t \mapsto u(t, \cdot )\) as a distribution-valued process in a local fractional Sobolev space, and prove that it has a càdlàg version in this space.
Recall to this end the Schwartz space \({\mathcal {S}}({\mathbb {R}}^d)\) of smooth functions \(\varphi :{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) such that \(\sup _{x\in {\mathbb {R}}^d}\left| x^\alpha \varphi ^{(\beta )}(x) \right| <+\infty \) for any multi-indices \(\alpha , \, \beta \in {\mathbb {N}}^d\), equipped with the topology induced by the semi-norms \(\sum _{|\alpha |, |\beta | \leqslant p} \sup _{x\in {\mathbb {R}}^d} \left| x^\alpha \varphi ^{(\beta )}(x) \right| \) for \(p\in {\mathbb {N}}\). Here, \(\varphi ^{(\beta )}(x) =\partial _{x_1}^{\beta _1}\ldots \partial _{x_d}^{\beta _d}\varphi (x)\) if \(\beta =(\beta _1,\ldots ,\beta _d)\). Its topological dual is called the space of tempered distributions and is denoted by \({\mathcal {S}}'({\mathbb {R}}^d)\). The classical Fourier transform \(\mathcal F( \varphi )(\xi ):= \int _{{\mathbb {R}}^d} e^{-i\xi \cdot x}\varphi (x) \,\mathrm {d}x\) with \(\xi \in {\mathbb {R}}^d\) and \(\varphi \in {\mathcal {S}}({\mathbb {R}}^d)\) can be extended by duality to \(f\in {\mathcal {S}}'({\mathbb {R}}^d)\):
Definition 2.12
The (local) fractional Sobolev space of order \(r\in {\mathbb {R}}\) is defined by
The topology on \(H_r({\mathbb {R}}^d)\) is induced by the norm
and we have \(f_n\rightarrow f\) in \(H_{r,loc}({\mathbb {R}}^d)\) if \(\theta f_n\rightarrow \theta f\) in \(H_{r}({\mathbb {R}}^d)\) as \(n\rightarrow +\infty \) for any \(\theta \in C^\infty _c({\mathbb {R}}^d)\).
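As a purely numerical illustration of this definition (not part of the argument), the \(H_r({\mathbb {R}})\) norm can be approximated via the FFT, using the standard convention \(\Vert f\Vert _{H_r}^2=(2\pi )^{-d}\int _{{\mathbb {R}}^d}(1+|\xi |^2)^r|{\mathcal {F}}f(\xi )|^2\,\mathrm {d}\xi \) (normalizations differ between references). All function names below are ours:

```python
import numpy as np

def h_r_norm(f_vals, dx, r):
    """Approximate the H_r(R) norm of a sampled function via the FFT.

    Uses the convention ||f||_{H_r}^2 = (2 pi)^{-1} * integral of
    (1+|xi|^2)^r |Ff(xi)|^2 dxi; normalizations differ between references.
    """
    n = len(f_vals)
    f_hat = np.fft.fft(f_vals) * dx               # approximates the continuous FT
    xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)      # angular frequencies
    dxi = 2 * np.pi / (n * dx)
    return np.sqrt(np.sum((1 + xi**2) ** r * np.abs(f_hat) ** 2) * dxi / (2 * np.pi))

x = np.linspace(-20.0, 20.0, 4096, endpoint=False)
f = np.exp(-x**2)                                 # a Schwartz function
norms = {r: h_r_norm(f, x[1] - x[0], r) for r in (1.0, 0.0, -1.0)}
```

For \(r=0\) this recovers the \(L^2\) norm \((\pi /2)^{1/4}\) of the Gaussian by Parseval's theorem, and the norms decrease as r decreases, reflecting the inclusions \(H_{r_1}\subset H_{r_2}\) for \(r_1\geqslant r_2\).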
We now proceed to study the regularity of \(u:t \mapsto u(t, \cdot )\) in \(H_{r,{loc}}({\mathbb {R}}^d)\). As in the case of a bounded interval in dimension one, the drift part is easy to handle.
Lemma 2.13
Let Z be a bounded measurable random field and
Then the process \(t\mapsto F(t,\cdot )\) is continuous in \(H_{r,{loc}}({\mathbb {R}}^d)\) for any \(r\leqslant 0\).
Proof
Since \(g\in L^1([0,T]\times {\mathbb {R}}^d)\) and Z is bounded, it follows from [6, Corollary 3.9.6] that the sample paths of F are jointly continuous in (t, x) almost surely. Therefore, \(t\mapsto F(t,\cdot )\) is continuous in \(H_{0,{loc}}({\mathbb {R}}^d)=L^2_{{loc}}({\mathbb {R}}^d)\), hence also in \(H_{r,{loc}}({\mathbb {R}}^d)\) for any \(r\leqslant 0\). \(\square \)
Next, we consider the situation where \(\sigma \) is a bounded function. Already in this restricted case, the unboundedness of space and the possibility of having infinitely many large jumps require a more careful analysis of the different parts of the solution.
Proposition 2.14
Let \(\sigma \) be a bounded and Lipschitz function, and u be the mild solution to the stochastic heat equation (1.1) constructed in Proposition 2.7 under hypothesis (H). Then, for any \(r<-\frac{d}{2}\), the stochastic process \((u(t, \cdot ))_{t\in [0,T]}\) has a càdlàg version in \(H_{r,\text {loc}}({\mathbb {R}}^d)\).
Proof
Since the mild solution \(u_N\) to the truncated equation (2.28) agrees with the mild solution u to the stochastic heat equation (1.1) on \(\left\{ t \leqslant \tau _N\right\} \) (see the last statement of Proposition 2.7), the sample path properties of u and \(u_N\) are the same, and we can restrict to the study of the regularity of the sample paths of \(u_N\). Furthermore, there is no loss of generality if we take \(N=1\). Therefore, we suppose that
where \(L_1\) is the truncated noise from (2.27) with \(N=1\). We use the decomposition
where for \(A>0\) and \(Z(s,y):=\sigma (u(s,y))\),
where \(L^M\) is defined in (1.6), and \(L^P_1\) is the noise obtained by applying the truncation (2.27) with \(N=1\) to \(L^P\) from (1.6). It is clear that V is jointly continuous in (t, x), and the same holds for \(u^3\) as pointed out in the proof of Lemma 2.13. Furthermore, on \([0,T]\times [-2A,2A]^d\), the noise \(L^P_1\) consists of only finitely many jumps. So upon a change of the drift b and increasing the truncation level for \(L^M\) from 1 to the largest size of these jumps (which clearly does not affect the arguments below), we may assume that \(u^{2,1}=0\). The remaining terms are now treated separately.
\(\underline{u^{1,1}(t,x):}\) By definition of the Fourier transform, we have for \(\varphi \in {\mathcal {S}}({\mathbb {R}}^d)\),
Permuting the stochastic integral and the Lebesgue integral, which is possible by Theorem A.3 together with the estimate (A.3) because \(\int _{|z|\leqslant 1} |z|^p \,\nu (\mathrm {d}z)<+\infty \), \(g\in L^p([0,T]\times {\mathbb {R}}^d)\) and Z is bounded, yields
which implies that \(\mathcal F( u^{1,1}(t, \cdot )) (\xi )\) is given by
Thus,
Since the function \(t\mapsto e^{-|\xi |^2 t}\) is continuous, and the stochastic integral in \(a_\xi (t)\) exists in \(L^2(\Omega )\), \(t\mapsto a_\xi (t)\) is continuous in \(L^2(\Omega )\). Furthermore,
for some constant C that does not depend on \(\xi \), so by the dominated convergence theorem (which applies since \(r<-\frac{d}{2}\)),
and the process \(t\mapsto u^{1,1}(t,\cdot )\) is continuous in \(L^2(\Omega )\) as a process with values in \(H_r({\mathbb {R}}^d)\).
In order to apply [18, Chapter III, Sect. 4, Theorem 1] to deduce the existence of a càdlàg modification of \(t\mapsto u^{1,1}(t,\cdot )\) in \(H_r({\mathbb {R}}^d)\) for any \(r<-\frac{d}{2}\), it remains to prove
for some \(\delta >0\). Upon defining, similarly to (2.11),
for \(0\leqslant a<b \leqslant T\) and \(\xi \in {\mathbb {R}}^d\), the proof is identical to that of Proposition 2.3 for the equation on a bounded interval if we make the following replacements:
\(\underline{u^{1,2}(t,x):}\) If \(f:{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) is a smooth function, then for \(a,b \in {\mathbb {R}}^d\) with \(a_i\leqslant b_i\) for all \(1\leqslant i \leqslant d\),
where for \({\mathbf {k}}=(k_1, \ldots , k_i)\) with \(k_1<\cdots < k_i\), we define \(\left( c_{{\mathbf {k}}}(a,r)\right) _j:= a_j \mathbb {1}_{j\notin {\mathbf {k}}} +r_j \mathbb {1}_{j \in {\mathbf {k}}}\) for \(1\leqslant j\leqslant d\). This formula is easily proved by induction on the dimension. Since the heat kernel \(g(t-s,x-y)\) is smooth for \(y\notin [-2A, 2A]^d\) and \(x\in [-A, A]^d\), (2.35) with \(a=(s,-A, \ldots , -A)\) and \(b=(t,x)\) gives
where \({\mathbf {A}}:= (A,\ldots , A)\). Another application of Theorem A.3 and (A.3) shows that \(u^{1,2}(t,x)\) equals
We see from this expression that \(u^{1,2}\) is jointly continuous in (t, x). By the argument at the end of the proof of Lemma 2.13, we deduce that \(t\mapsto u^{1,2}(t, \cdot )\mathbb {1}_{[-A,A]^d}\) is continuous in \(H_r({\mathbb {R}}^d)\) for \(r\leqslant 0\).
\(\underline{u^{2,2}(t,x):}\) This process takes into account only the jumps that are far away from x, but that can be arbitrarily large. We can write \(u^{2,2}\) as a sum:
We first observe that each term of this sum is jointly continuous in \((t,x) \in [0,T]\times [-A,A]^d\) almost surely. We show that this sum converges uniformly in \((t,x)\in [0,T]\times [-A,A]^d\). Choose A large enough such that \(T< \frac{A^2}{2d} \). Because \(|x-X_i|>A\), Lemma 3.6 below shows that the maximum of the function \(t\mapsto g(t, x-X_i)\) is attained at \(t=T\):
where \(p_A\) is the projection on the convex set \([-A, A]^d\). Then, for \(\beta =1 \wedge q\),
Therefore, the sum defining \(u^{2,2}\) converges uniformly in \((t,x) \in [0,T]\times [-A,A]^d\), and \(u^{2,2}\) is jointly continuous. Thus, \(t\mapsto u^{2,2}(t, \cdot )\mathbb {1}_{[-A,A]^d}\) is continuous in \(H_r({\mathbb {R}}^d)\) for every \(r\leqslant 0\).
Since A can be chosen arbitrarily large, the assertion of the proposition follows. \(\square \)
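The key analytic fact used for \(u^{2,2}\) above (formalized in Lemma 3.6 below) is that \(t\mapsto g(t,x)=(4\pi t)^{-d/2}e^{-|x|^2/(4t)}\) is increasing as long as \(t\leqslant \frac{|x|^2}{2d}\): indeed \(\partial _t \log g(t,x)=-\frac{d}{2t}+\frac{|x|^2}{4t^2}>0\) iff \(2dt<|x|^2\), which is why A is chosen with \(T<\frac{A^2}{2d}\). A numerical sanity check (ours, not from the paper):

```python
import numpy as np

def heat_kernel(t, x):
    """Gaussian heat kernel g(t, x) = (4 pi t)^{-d/2} exp(-|x|^2 / (4 t)) on R^d."""
    d = len(x)
    r2 = float(np.dot(x, x))
    return (4 * np.pi * t) ** (-d / 2) * np.exp(-r2 / (4 * t))

# d = 2 and |x|^2 = 8: t -> g(t, x) increases up to t = |x|^2 / (2 d) = 2.
x = np.array([2.0, 2.0])
ts = np.linspace(0.05, 2.0, 40)
vals = [heat_kernel(t, x) for t in ts]
```

The sampled values are strictly increasing up to the threshold \(t=2\) and decrease beyond it.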
In order to pass from bounded to unbounded nonlinearities \(\sigma \), the basic strategy remains the same as in the proof of Theorem 2.5. However, it was crucial in that proof that the solution have a finite second moment. Unfortunately, in dimensions \(d\geqslant 2\), the mild solution u to (1.1) has no finite second moments as a result of the singularity of g. And it is easy to convince oneself that taking powers \(p<2\) instead of 2 does not combine well with the \(\Vert \cdot \Vert _{H_r({\mathbb {R}}^d)}\)-norms. Instead, in the proof we propose below, the idea is to consider an equivalent probability measure \({\mathbb {Q}}\) (which obviously does not affect the path properties of u) under which the solution has a finite second moment. Although L might not be a Lévy noise under \({\mathbb {Q}}\) anymore, it follows from the theory of integration against random measures, which we briefly recall in the Appendix, that there exists a particularly clever choice of \({\mathbb {Q}}\) such that we have sufficient control on the second moments of both integrands and integrators under \({\mathbb {Q}}\).
Theorem 2.15
If u is the mild solution to the stochastic heat equation (1.1) constructed under the assumptions of Proposition 2.7, then, for any \(r<-\frac{d}{2}\), the stochastic process \((u(t, \cdot ))_{t\in [0,T]}\) has a càdlàg version in \(H_{r,\text {loc}}({\mathbb {R}}^d)\).
Proof
We first consider the case \(p\geqslant 1\) in assumption (H). As in Proposition 2.14, we can suppose that u is the solution to (2.28) with \(N=1\), and use the decomposition (2.32) with \(A>0\). The terms V and \(u^{2,1}\) can be dealt with as in Proposition 2.14. For the remaining terms, we use different arguments.
\(\underline{u^{1,1}(t,x):}\) Let \(\sigma _n(u)= \sigma (u)\mathbb {1}_{|u|\leqslant n}\) and define \(u^{1,1}_n\) as in (2.32) but with Z replaced by \(\sigma _n(u)\). Then,
and
Writing \(\sigma _{(n)} (s,y)= \sigma (u(s,y))- \sigma _n(u(s,y))\), we obtain
as in (2.34). With similar calculations as in (2.24), but using Theorem A.3 with \(1<p<1+\frac{2}{d}\), one can show that
where C does not depend on \(\xi \). With notation from the Appendix, the fact that \(\sigma (u) \in L^{1,p}(L^M,{\mathbb {P}})\) implies that there exists a probability measure \({\mathbb {Q}}\) that is equivalent to \({\mathbb {P}}\) such that the process \(\sigma (u)\) belongs to \(L^{1,2}(L^M,{\mathbb {Q}})\), see Theorem A.4. Consequently, using the notation in (A.1), we deduce from (2.38) that
The last integral is finite because \(r<-\frac{d}{2}\). Moreover, \(\sigma _{(n)}(\omega ,s,y) \rightarrow 0\), pointwise in \((\omega ,s,y)\), and is bounded by \(\sigma (u(\omega ,s,y))\), which belongs to \(L^{1,2}(L^M,{\mathbb {Q}})\) by assumption. Hence, by Theorem A.1, the left-hand side of (2.40) converges to 0 as \(n\rightarrow +\infty \). As before, we may extract a subsequence that converges uniformly in [0, T] almost surely with respect to \({\mathbb {Q}}\), and hence \({\mathbb {P}}\). We now deduce that \(t\mapsto u^{1,1}(t,\cdot )\) has a càdlàg modification because the processes \(u_n^{1,1}\) have càdlàg modifications by Proposition 2.14.
\(\underline{u^{1,2}(t,x):}\) The proof is identical to the corresponding part in Proposition 2.14, provided we can still apply Theorem A.3 in (2.36). In order to justify this, observe that
by (2.29) and [19, Theorems 4.3 and 4.4] with some polynomial P. Next, for every multi-index \(\alpha \in {\mathbb {N}}^{1+d}\), it is easily verified by induction that \(\partial ^\alpha g(t,x)\) takes the form \(Q(t^{-\frac{1}{2}},x)\exp (-\frac{|x|^2}{4t})\) for some polynomial Q. So if l denotes the degree of Q, we have for every \(t\in [0,T]\) and \(|x|^2\geqslant 2lT\),
Hence, as \(|c_{{\mathbf {k}}}(-{\mathbf {A}},r)- y|\geqslant A\) for \(y \notin [-2A, 2A]^d\), we obtain for sufficiently large A that the expectation in the penultimate display is bounded by
Since \(|c_{{\mathbf {k}}}(-{\mathbf {A}},r)|\leqslant \sqrt{d}A\), \(|x-y|^b\leqslant 2^{b-1} (|x|^b+|y|^b)\) for \(b\geqslant 1\), and \(P(|x-y|)\leqslant C(1+|x|^m + |y|^m)\) where m is the degree of P, one can find another polynomial \({\tilde{P}}\) such that the last integral is further bounded by
which is independent of r, and finite because it is possible by assumption (H) to choose \(\eta >\frac{d}{q}\) such that \(\frac{2\eta (p-q)}{2-d (p-1)}< 2\) is satisfied. Theorem A.3 is therefore applicable by (A.3).
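The structural claim used above, that every partial derivative \(\partial ^\alpha g(t,x)\) is of the form \(Q(t^{-\frac{1}{2}},x)\exp (-\frac{|x|^2}{4t})\) for some polynomial Q, can be verified symbolically in small cases. A sketch in \(d=1\) for \(\alpha =2\), using SymPy (our illustration only):

```python
import sympy as sp

t, x, s = sp.symbols('t x s', positive=True)
g = (4 * sp.pi * t) ** sp.Rational(-1, 2) * sp.exp(-x**2 / (4 * t))

# Second spatial derivative, with the Gaussian factor stripped off.
Q = sp.simplify(sp.diff(g, x, 2) / sp.exp(-x**2 / (4 * t)))

# Substituting s = t^{-1/2} should leave a polynomial in s and x.
Q_in_s = sp.expand(Q.subs(t, s**-2))
```

Here \(Q_{\text {in}\,s}=\frac{x^2 s^5}{8\sqrt{\pi }}-\frac{s^3}{4\sqrt{\pi }}\), a polynomial in \(s=t^{-\frac{1}{2}}\) and x, consistent with the claim (with degree \(l=5\) in s for this \(\alpha \)).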
\(\underline{u^{2,2}(t,x):}\) The argument remains the same as in Proposition 2.14, except that we have to replace the final bound in (2.37) by
which is finite by an argument similar to the one for \(u^{1,2}\).
\(\underline{u^3(t,x):}\) Consider the decomposition \(u^3(t,x)=u^{3,1}(t,x)+u^{3,2}(t,x)\) where
If \(u^{3,1}_n\) is the process obtained from \(u^{3,1}\) by replacing \(\sigma (u(s,y))\) by \(\sigma _n(u(s,y))\), then, as in (2.39),
Consequently, we have
Recalling that u is the solution to (2.28) with \(N=1\), the expectation of the left-hand side tends to 0 as \(n\rightarrow +\infty \) by (2.29) and the dominated convergence theorem. Hence, \(t\mapsto u^{3,1}(t,\cdot )\) inherits the càdlàg sample paths of \(u^{3,1}_n\), see Lemma 2.13. Concerning \(u^{3,2}\), the continuity of \((t,x)\mapsto u^{3,2}(t,x)\) on \([0,T]\times [-A,A]^d\) is shown in the same way as for \(u^{1,2}\). Instead of the stochastic Fubini theorem, one can use the ordinary Fubini theorem because
almost surely. This is verified by showing that the expectation of the integral in brackets is finite and uniformly bounded in u and \(r_{k_1}, \ldots , r_{k_i}\). This concludes the proof for \(p\geqslant 1\).
For \(0<p<1\), we have to modify the proof in the following way. Because L has drift \(b_0=0\) and summable jumps by the assumption \(\int _{|z|\leqslant 1} |z|^p \,\nu (\mathrm {d}z)<+\infty \), we can write u in the same form as (2.32) with \(L^M(\mathrm {d}t,\mathrm {d}x)\) replaced by \(\int _{|z|\leqslant 1} z \, J(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z)\) and \(u^3 = 0\). An inspection of the proof above shows that the arguments for V, \(u^{2,1}\), \(u^{2,2}\) remain valid, and in principle also for \(u^{1,1}\) and \(u^{1,2}\) if changing the order of integration in (2.33) and (2.36), respectively, is permitted. The justification is comparable to the situation for \(p\geqslant 1\); one only has to use (A.4) instead of (A.3):
\(\square \)
Remark 2.16
The paper [21] studies the existence of càdlàg modifications in certain Banach spaces of solutions to a class of stochastic PDEs driven by Poisson random measures. Example 2.3 in [21] particularizes to the case of the stochastic heat equation with a multiplicative Lévy space–time white noise. However, this example contains an error since the measure \(\nu \) in the first display on p. 1502 is not a Lévy measure (it is infinite on sets of the form \(\{|x|>\delta \}\) for all sufficiently small values of \(\delta \), contradicting Remark 3.1 in [21]). After private communication with the author, it seems that this example could be rewritten for the case of a bounded domain, but cannot be extended to the case where \(D={\mathbb {R}}^d\) (because the stopping times in (2.2) with \([0,\pi ]\) replaced by \({\mathbb {R}}^d\) are 0 almost surely for all \(N\in {\mathbb {N}}\), cf. the discussion at the beginning of Sect. 2.2).
2.3 The stochastic heat equation on bounded domains
Let D be a \(C^\infty \)-regular domain of \({\mathbb {R}}^d\), where \(d\geqslant 2\), that is, we assume that D is a bounded open set whose boundary \(\partial D\) is a smooth \((d-1)\)-dimensional manifold, and whose closure \({\bar{D}}\) has the same boundary \(\partial {\bar{D}}=\partial D\). For the stochastic heat equation (1.1) on such a domain D, we assume:
- \((\mathbf{H }')\): There exists \(0<p<1+\frac{2}{d}\) such that \(\int _{|z|\leqslant 1}|z|^p \,\nu (\mathrm {d}z)<+\infty \).
As in the case of an interval (Sect. 2.1), the stopping times
are almost surely strictly positive and equal to \(+\infty \) for large N.
Proposition 2.17
Let D be a \(C^\infty \)-regular domain, \(\sigma :{\mathbb {R}}\rightarrow {\mathbb {R}}\) be a Lipschitz function and let L be a pure jump Lévy white noise as in (1.6) such that \((\mathbf{H }')\) is satisfied. Then there exists a predictable mild solution u to (1.1) such that for all \(0<p<1+\frac{2}{d}\),
Furthermore, up to modifications, the solution is unique among all predictable random fields that satisfy (2.43).
Proof
By [16, Corollary 3.2.8], \(G_D(t;x,y) \leqslant C{t^{-\frac{d}{2}}} e^{-\frac{|x-y|^2}{6t}}\), so [11, Theorem 3.5] applies. \(\square \)
As in the proof of Proposition 2.3, the stopping times \(\tau _N\) allow us to ignore the big jumps for the analysis of path properties of the solution. So we only need to consider
where \(b_N:= b-\int _{1<|z| \leqslant N}z \,\nu (\mathrm {d}z)\), and the corresponding mild solutions to
For simplicity, we take \(N=1\) in the following, so that our equation becomes
2.3.1 The fractional Sobolev spaces \(H_r(D)\)
The operator \(-\Delta \) on D with vanishing Dirichlet boundary conditions admits a complete orthonormal system in \(L^2(D)\) of smooth eigenfunctions \((\Phi _j)_{j\geqslant 1}\), with eigenvalues \((\lambda _j)_{j\geqslant 1}\). Then we have the following properties (see for example [39, Chapter V, p. 343]):
The Green’s function \(G_D\) has the representation (1.5) and we have the decomposition
for every \(f\in L^2(D)\) where \(a_j(f)=\left\langle f, \Phi _j \right\rangle _{L^2(D)}\). For \(r\geqslant 0\), we now define
which becomes a Hilbert space with the inner product \(\left\langle f, h \right\rangle _{H_r}:= \sum _{j\geqslant 1} \left( 1+\lambda _j\right) ^r a_j(f) a_j(h)\). We denote by \(H_{-r}\left( D \right) \) the topological dual space of \(H_{r}\left( D \right) \), which turns out to be isomorphic to the space of sequences \(b=(b_j)_{j\geqslant 1}\) such that
In fact, with \(b_j= \tilde{f}(\Phi _j)\) for \(\tilde{f}\in H_{-r}\left( D \right) \), we have \(\Vert \tilde{f}\Vert _{H_{-r}} = \Vert b \Vert _{H_{-r}}\) and the pairing between \(H_{-r}\left( D \right) \) and \(H_{r}\left( D \right) \) is given by
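As an illustration of the spectral definition (not part of the argument), take \(D=(0,\pi )\) in \(d=1\), where \(\Phi _j(x)=\sqrt{2/\pi }\sin (jx)\) and \(\lambda _j=j^2\). The sketch below computes \(\Vert f\Vert _{H_r}=\big (\sum _{j\geqslant 1}(1+\lambda _j)^r a_j(f)^2\big )^{1/2}\) for \(f(x)=x(\pi -x)\); the function names are ours:

```python
import numpy as np

# Dirichlet Laplacian on D = (0, pi): eigenfunctions sqrt(2/pi) sin(j x),
# eigenvalues lambda_j = j^2.
xs = np.linspace(0.0, np.pi, 20001)
dx = xs[1] - xs[0]
w = np.ones_like(xs)
w[0] = w[-1] = 0.5                      # trapezoidal quadrature weights
f = xs * (np.pi - xs)                   # vanishes on the boundary

def coeff(j):
    """a_j(f) = <f, Phi_j> in L^2(D), computed by quadrature."""
    return float(np.sum(w * f * np.sqrt(2 / np.pi) * np.sin(j * xs)) * dx)

def h_r_norm(r, J=200):
    """Truncated spectral H_r(D) norm, summing over j <= J."""
    return float(np.sqrt(sum((1 + j**2) ** r * coeff(j) ** 2
                             for j in range(1, J + 1))))
```

For \(r=0\), Parseval's identity recovers \(\Vert f\Vert _{L^2(D)}=\sqrt{\pi ^5/30}\), and the norm decreases as r decreases.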
We need the following technical lemma, for which we could not find a reference in the literature.
Lemma 2.18
For \(r\leqslant 0\), the restriction of \(H_r({\mathbb {R}}^d)\) to D is continuously embedded in \(H_r(D)\).
Proof
For \(m\in {\mathbb {N}}\), let \(H^m(D)=\{ u :\ u^{(\alpha )} \in L^2(D) \ \text {for all} \ |\alpha |\leqslant m \}\), with an upper index m, denote the “usual” Sobolev space as in [27, p. 3]. For real \(r\geqslant 0\), let m be the smallest even integer with \(m\geqslant r\). Following [27, Chapitre 1, (9.1)], we define, with a superscript index,
where the right-hand side is the notation of [27, Chapitre 1, Définition 2.1] for interpolation spaces.
Furthermore, define \(H_0^r(D)\) for \(r\geqslant 0\) as the closure in \(H^r(D)\) of the set of smooth functions with compact support in D, see [27, Chapitre 1, (11.1)]. Similarly, as in [20, Definition 8.1], let \(H^r_B(D)\) for \(r>\frac{1}{2}\) be the closed subspace of \(H^r(D)\) such that its elements are equal to zero on \(\partial D\).
By the definition of interpolation spaces, there exists for each \(\theta \in [0,1]\) some self-adjoint positive operator \(\Lambda \) in \(L^2(D)\) with domain \(H^m_B(D)\) such that
The notion of domain is as in [27, Chapitre 1, p. 12], and the power in this case is to be understood as the spectral power of the operator \(\Lambda \). By [27, Chapitre 1, Remarque 2.3], \(\text {dom} (\Lambda ^{1-\theta })\) coincides with \(\text {dom}( {\tilde{\Lambda }}^{1-\theta })\) for any other self-adjoint positive operator \({\tilde{\Lambda }}\) in \(L^2(D)\) with domain \(H^m_B(D)\). In particular, we can choose \({\tilde{\Lambda }}=(-\Delta )^{\frac{m}{2}}\), where \(\Delta \) is the Dirichlet Laplacian, and the power \(\frac{m}{2}\), an integer because m is even, is to be understood as the composition of partial differential operators. Then, from [20, Théorème 8.1], we deduce that
and with the choice \(\theta =1-\frac{r}{m}\) further that
Let \(f\in L^2(D)\) be as in (2.49). Then, see e.g. [30, (2.12)], we have that \({\tilde{\Lambda }}^{\frac{r}{m}}f=\sum _{j\geqslant 1} \mu _j^{\frac{r}{m}} a_j(f) \Phi _j\), where \(\mu _j=\lambda _j^{\frac{m}{2}}\) is the jth eigenvalue of \({\tilde{\Lambda }}\). The previous sum converges in \(L^2(D)\) if and only if \(\sum _{j\geqslant 1}\lambda _j^r \left| a_j(f) \right| ^2<+\infty \), so together with (2.50), we obtain
Therefore, by [20, Théorème 8.1] and the discussion that follows, we have \(H_r(D)\hookrightarrow H^r(D)\).
If \(r\leqslant \frac{1}{2}\), we have by [30, (2.13)]
where \(H_{00}^{\frac{1}{2}}(D)\) is the Lions–Magenes space satisfying \(H_{00}^{\frac{1}{2}}(D)\hookrightarrow H_{0}^{\frac{1}{2}}(D)\) by [27, Chapitre 1, Théorème 11.7]. In addition, for any \(r\geqslant 0\), \(H_{0}^{r}(D)\hookrightarrow H^{r}(D)\) and thus \(H_r(D)\hookrightarrow H^r(D)\). Next, by [27, Chapitre 1, Théorèmes 9.1, 9.2 and (7.1)], there exists a constant C such that any function \(u\in H^r(D)\) is the restriction of a function \({\tilde{u}} \in H_r({\mathbb {R}}^d)\) to D with \(\Vert {\tilde{u}} \Vert _{H_r({\mathbb {R}}^d)}\leqslant C \Vert u \Vert _{H^r(D)}\). Therefore, \(H_r(D)\hookrightarrow H^r(D) \hookrightarrow H_r({\mathbb {R}}^d)|_D\) for any \(r\geqslant 0\), and by duality, we have \(H_r({\mathbb {R}}^d)|_D\hookrightarrow H^r(D)\hookrightarrow H_r(D)\) for \(r\leqslant 0\). \(\square \)
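The spectral power \({\tilde{\Lambda }}^{1-\theta }\) appearing in the proof above can be made concrete on a finite-difference discretization of the Dirichlet Laplacian: diagonalize the (symmetric) matrix, raise the eigenvalues to the desired power, and transform back. A small sketch (ours; the discretization serves only as illustration):

```python
import numpy as np

n = 50
h = np.pi / (n + 1)
# Tridiagonal finite-difference Dirichlet Laplacian on (0, pi).
lap = (np.diag(2.0 * np.ones(n))
       - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / h**2
lam, vecs = np.linalg.eigh(lap)         # lam[0] approximates lambda_1 = 1

def spectral_power(theta):
    """Lambda^theta via the spectral theorem for the symmetric matrix lap."""
    return vecs @ np.diag(lam ** theta) @ vecs.T

half = spectral_power(0.5)              # a square root of lap
```

By construction \( \texttt{half} \cdot \texttt{half} \) recovers the discrete Laplacian, mirroring the relation \(({\tilde{\Lambda }}^{\frac{1}{2}})^2={\tilde{\Lambda }}\) used for the even integer powers in the proof.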
2.3.2 Existence of a càdlàg solution in \(H_{r}(D)\) with \(r<-\frac{d}{2}\)
Theorem 2.19
The mild solution u to (1.1) constructed in Proposition 2.17 has a càdlàg modification in \(H_r(D)\) for any \(r<-\frac{d}{2}\).
In contrast to the case \(D=[0,\pi ]\), the eigenfunctions of \(-\Delta \) on a general domain D in \({\mathbb {R}}^d\) may not be uniformly bounded, see (2.48). Thus, the proof of Theorem 2.5 does not extend to higher dimensions. Instead, we use [17, Theorem 1] to write
where g is the heat kernel on \({\mathbb {R}}^d\) and H is a function such that for any \(\varepsilon >0\), \((t,x,y) \mapsto H(t;x,y)\) is smooth on \([0,T]\times D \times B_\varepsilon ^c(\partial D)\), where \(B_\varepsilon (\partial D)\) is the \(\varepsilon \)-neighborhood of \(\partial D\). Away from the boundary \(\partial D\), g can be dealt with as in Theorem 2.15, and H is smooth and therefore easily handled. In order to control the behavior close to the boundary, the change of measure technique (see the Appendix and also the proof of Theorem 2.15) is again fruitful.
Proof of Theorem 2.19
We may assume that u satisfies (2.46). For \(\varepsilon >0\), split u(t, x) into the sum of three terms:
By (2.43), the same proof as in Theorem 2.15 for \(u^{1,1}\) and \(u^3\) shows that \(t\mapsto u_\varepsilon ^1(t,\cdot )\) has a càdlàg version in \(H_{r,{loc}}({\mathbb {R}}^d)\) for \(r<-\frac{d}{2}\), where the spatial variable takes values in the whole space \({\mathbb {R}}^d\) rather than just D. Thus, as a process with \(x\in D\), it has a càdlàg version in \(H_r(D)\) for \(r<-\frac{d}{2}\) by Lemma 2.18. Regarding \(u^2_\varepsilon \), since \((t,x,y) \mapsto H(t;x,y)\) is smooth on \([0,T]\times D \times B_\varepsilon ^c(\partial D)\), we can mimic the part of the proof of Theorem 2.15 concerning \(u^{1,2}\) in order to get that \((t,x) \mapsto u_\varepsilon ^2(t,x)\) is jointly continuous, and in fact, uniformly continuous since D is bounded. In particular, \(t\mapsto u_\varepsilon ^2(t,\cdot )\) is continuous in \(H_r(D)\) for any \(r\leqslant 0\). For the last term \(u^3_\varepsilon \), we want to show that it converges to 0 in \(H_r(D)\), uniformly in \(t\in [0,T]\). As a first step, we have
where \(a_k^\varepsilon (t) := \int _D \Phi _k(x)u_\varepsilon ^3(t,x)\, \mathrm {d}x\). As in (2.24) and (2.39), one can then show that
Next, let \(\Phi (x):=\big (\sum _{k\geqslant 1} (1+\lambda _k)^r \Phi _k^2(x)\big )^{\frac{1}{2}}\), which belongs to \(L^2(D)\) by (2.47) since \(\Vert \Phi _k\Vert _{L^2(D)} = 1\) and \(r<-\frac{d}{2}\). Hence, assuming without loss of generality that p in \((\mathbf{H }')\) satisfies \(1\leqslant p< 1+\frac{2}{d}\), we have by Lemma A.2:
Thus, by Theorem A.4, there exists an equivalent probability measure \({\mathbb {Q}}\) such that
Furthermore, the Doob–Meyer decomposition of L under \({\mathbb {Q}}\) is given by \(L={L^{B,{\mathbb {Q}}}}+ L^{M,{\mathbb {Q}}}\), where
with some predictable random function Y, see [13, Theorem 3.6]. By [4, Theorem 4.14], we deduce that \(\Vert \Phi \sigma (u)\Vert _{L^{B,{\mathbb {Q}}},2,{\mathbb {Q}}}<+\infty \) and \(\Vert \Phi \sigma (u)\Vert _{L^{M,{\mathbb {Q}}},2,{\mathbb {Q}}}<+\infty \). As a consequence,
and
We obtain
For the first term in parentheses, we use the Cauchy–Schwarz inequality to obtain
as \(\varepsilon \rightarrow 0\) by (2.53) and dominated convergence. Similarly, (2.54) implies that
as \(\varepsilon \rightarrow 0\). Altogether, there exists a subsequence of \(u^3_\varepsilon (t,\cdot )\) that converges almost surely to 0 in \(H_r(D)\) for \(r<-\frac{d}{2}\), uniformly in \(t\in [0,T]\), which completes the proof. \(\square \)
3 Partial regularity of the solution
In Sect. 2, we have established the existence of a version such that \(t\mapsto u(t,\cdot )\) has càdlàg paths in (local) fractional Sobolev spaces. The goal of the current section is to investigate the partial regularity of the solution, that is, the behavior of the partial functions \(t\mapsto u(t,x)\) for fixed \(x\in D\) and \(x\mapsto u(t,x)\) for fixed \(t\in [0,T]\). In the case where the Lévy noise L has locally finite intensity (so L is a compound Poisson noise), it is clear that almost surely, no jump will fall onto a fixed t- or x-section of the solution. Because the Green’s function \(G_D(t;x,y)\) is smooth outside \(\{0\}\times \{(x,x):x\in D\}\), the partial functions are continuous, and even smooth, in this case. However, a general Lévy noise can have infinitely many jumps on any compact subset of \([0,T]\times D\), which may even fail to be summable. Still, the jumps never lie on a fixed section, but they may come arbitrarily close to it, so the regularity of the partial functions is unclear a priori. As we shall show, the answer critically depends on the Blumenthal–Getoor index of the noise (that is, the infimum of all p for which \(\int _{[-1,1]} |z|^p\,\nu (\mathrm {d}z)\) is finite), and both continuous and locally unbounded sample paths may arise.
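The role of the Blumenthal–Getoor index is explicit for an \(\alpha \)-stable noise with Lévy density \(|z|^{-1-\alpha }\): for \(p\ne \alpha \), \(\int _{\varepsilon \leqslant |z|\leqslant 1}|z|^p\,|z|^{-1-\alpha }\,\mathrm {d}z=\frac{2(1-\varepsilon ^{p-\alpha })}{p-\alpha }\), which stays bounded as \(\varepsilon \rightarrow 0\) iff \(p>\alpha \). A short check of this dichotomy (our illustration only):

```python
import numpy as np

def small_jump_moment(p, alpha, eps):
    """Integral of |z|^p against the alpha-stable Levy density |z|^{-1-alpha}
    over eps <= |z| <= 1, in closed form; eps -> 0 probes the singularity at 0."""
    if p == alpha:
        return 2.0 * np.log(1.0 / eps)
    return 2.0 * (1.0 - eps ** (p - alpha)) / (p - alpha)

alpha = 1.5
conv = [small_jump_moment(1.8, alpha, e) for e in (1e-2, 1e-4, 1e-6)]  # p > alpha
div = [small_jump_moment(1.2, alpha, e) for e in (1e-2, 1e-4, 1e-6)]   # p < alpha
```

For \(p>\alpha \) the truncated moments converge to \(\frac{2}{p-\alpha }\), while for \(p<\alpha \) they blow up like \(\varepsilon ^{-(\alpha -p)}\), so the Blumenthal–Getoor index of this noise is exactly \(\alpha \).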
Throughout this section, we consider the stochastic heat equation (1.1) on a bounded \(C^\infty \)-regular domain or \(D={\mathbb {R}}^d\), with some Lipschitz continuous \(\sigma :{\mathbb {R}}\rightarrow {\mathbb {R}}\) and some bounded continuous \(u_0:{\bar{D}}\rightarrow {\mathbb {R}}\) that is zero on \(\partial D\). Furthermore, let L be a pure-jump Lévy white noise as in (1.6) and u be the mild solution constructed under the hypotheses in Propositions 2.1, 2.7 or 2.17, respectively. In particular, if \(D={\mathbb {R}}^d\), we are given \(p,q>0\) such that (H) is satisfied; and if D is a bounded domain, there exists \(p>0\) such that \((\mathbf{H }')\) holds.
3.1 Regularity in space at a fixed time
Theorem 3.1
In the setting described above, assume that \(p<\frac{2}{d}\). Then, for any \(t\in [0,T]\), the process \(x\mapsto u(t,x)\) has a continuous modification.
Proof
The solution u is the stationary limit of the mild solution \(u_N\) to the truncated equation defined in (2.7), (2.28) or (2.45) with noise \(L_N\) given in (2.6), (2.27) or (2.44), respectively. Therefore, we can suppose that \(u=u_N\) for some \(N\geqslant 1\), and for simplicity, we only consider \(N=1\). We prove the claim using different approaches depending on the value of p.
\(\underline{1<p<2:}\) Notice that \(1<p<2\) can only occur in \(d=1\) because of the hypothesis \(p< \frac{2}{d}\). However, we keep the exponent d since we will use similar ideas in the next case. Let \(A>0\) be such that \(x\in (-A,A)^d\), and split u into seven parts according to (2.32) and (2.41), with obvious changes when D is bounded. In the latter case, we further assume that \(A>0\) is large enough that \(D\subset (-A,A)^d\). Clearly, V is jointly continuous, and as shown in the proof of Theorem 2.15, the same is true for \(u^{1,2}\mathbb {1}_{[-A,A]^d}\), \(u^{2,2}\mathbb {1}_{[-A,A]^d}\) and \(u^{3,2}\) in the case \(D={\mathbb {R}}^d\), while they are zero if D is bounded. Furthermore, almost surely, \(u^{2,1}\) consists of finitely many jumps, none of which occur at time t, so \(x\mapsto u^{2,1}(t,x)\) is smooth because the Green’s function \(G_D(t;x,y)\) is so for \(t>0\). It remains to consider \(u^{1,1}+u^{3,1}\), for which we apply the Kolmogorov continuity criterion. We have
for any \(x,z\in D\). Then, using Hölder’s inequality and (2.3) or (2.29),
For \(D={\mathbb {R}}\), the last term is bounded by \(C |x-z|^p\), see [35, Lemme A2]; for \(D=[0,\pi ]\), we can take the power p inside the integral by Hölder’s inequality, and further assume that \(\frac{3}{2}<p<2\) (by (2.3) there is no harm in taking a larger value of p on bounded domains). Then the upper bound becomes \(C |x-z|^{3-p}\) by [3, Lemma B.1(a)]. For the martingale part, we have by [28, Theorem 1] and (2.29),
If \(D={\mathbb {R}}^d\), then according to [35, Lemme A2], this is bounded by
and if \(D=[0,\pi ]\), and we take \(\frac{3}{2}<p<2\), then the upper bound we obtain is again \(C|x-z|^{3-p}\) by [3, Lemma B.1(a)]. Thus, the Kolmogorov continuity criterion (see e.g. [39, Chapter 1, Corollary 1.2]) ensures the existence of a continuous modification of \(u^{1,1}+u^{3,1}\) in the space variable x.
\(\underline{0<p\leqslant 1:}\) We use the same decomposition of u as above, except that we replace b by \(b_0\) and \(L^M(\mathrm {d}t,\mathrm {d}x)\) by \(\int _{|z|\leqslant 1} z \,J(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z)\). The proofs for V, \(u^{2,1}\) and \(u^{2,2}\) are not affected by this change, and up to the modification indicated at the end of the proof of Theorem 2.15, the proof for \(u^{1,2}\) is not affected, either. Since \(\int _{|z|\leqslant 1} |z|\,\nu (\mathrm {d}z) < +\infty \), the small jumps of L are summable and \(u^{1,1}\) is actually a sum of possibly infinitely many terms, each of which is continuous in x because no jump occurs exactly at time t. Furthermore, by [16, Corollary 3.2.8], we have for \(x_0\in D\),
which is finite because \(p<\frac{2}{d}\). So the sum defining \(u^{1,1}\) converges locally uniformly in x, which implies that \(x\mapsto u^{1,1}(t,x)\) is continuous almost surely. Recalling that \(b_0 = 0\) for \(p<1\) and \(D={\mathbb {R}}^d\), \(u^{3,1}\) and \(u^{3,2}\) are non-zero in this case only if \(p=1\). Then \(u^{3,2}(t,x)\) is jointly continuous in (t, x), as shown in the proof of Theorem 2.15. If D is bounded, \(u^{3,2}\) is zero. For \(u^{3,1}\) we use the approximation sequence \(u^{3,1}_n\) defined after (2.41). For each n, the process \(x\mapsto u^{3,1}_n(t,x)\) is continuous because
as \(x'\rightarrow x\), see [35, Lemme A2] for \(D={\mathbb {R}}^d\) and the proof of [37, Proposition 5] for bounded D. Hence, it suffices to prove that for fixed t, \(u^{3,1}_n(t,x)\) converges to \(u^{3,1}(t,x)\), locally uniformly in x. By dominated convergence, this can be reduced to showing
But this follows from (3.1) together with (2.3), (2.29) and (2.43), respectively, by taking expectation. \(\square \)
Remark 3.2
In particular, any (tempered) \(\alpha \)-stable noise with \(\alpha \in (0,\frac{2}{d})\) (and \(b_0=0\) if \(D={\mathbb {R}}^d\) and \(\alpha <1\)) satisfies the hypothesis of Theorem 3.1. The same holds for (variance-)gamma noises for all \(d\geqslant 1\), inverse Gaussian noises for \(d=1,2,3\) and normal inverse Gaussian noises for \(d=1\) (cf. [14]).
The next theorem shows that the value \(\frac{2}{d}\) in the previous theorem is essentially optimal.
Theorem 3.3
Let \(\sigma =1\) and suppose there is \(\delta >0\) such that the Lévy measure of L satisfies
for \(z\in [-\delta , \delta ]\), where \(\alpha \in [\frac{2}{d},(1+\frac{2}{d})\wedge 2)\) and \(f:[-\delta ,\delta ]\rightarrow [0,+\infty )\) is measurable with \(f(0)\ne 0\) and
for some \(0<r<\frac{2}{d}\). Then for fixed \(t\in [0,T]\), the path \(x\mapsto u(t,x)\) is unbounded on any non-empty open subset of D with probability 1.
Proof
Fix \(t\in [0,T]\). Since V is continuous in (t, x), it suffices to show the unboundedness of
We start with the case where f is constant, that is, L is an \(\alpha \)-stable noise. Then \((Y(x):x\in D)\) is an \(\alpha \)-stable process given in the form of [34, (10.1.1)] with \(E=[0,T]\times D\) and control measure \(\mathrm {d}s \, \mathrm {d}y\). We shall check that the necessary condition [34, (10.2.14)] for sample path boundedness in [34, Theorem 10.2.3] is not satisfied, namely that for \(x_0 \in D\) and \(\delta \) such that \(B_{x_0}(\delta ) \subset D\),
Indeed, by [38, Theorem 2 and Lemma 9], for any \(x,y \in B_{x_0}(\delta )\),
which implies that
and (3.4) is proved. In the case of general f, we write
and decompose L accordingly into \(L_1-L_2+L_3+L_4\) such that for \(1\leqslant i\leqslant 4\), \(L_i\) has Lévy measure \(\nu _i\) and is independent of the other three parts. If \(u_i\) solves the additive heat equation with driving noise \(L_i\), then by (3.3) and Proposition 3.1, for any fixed \(t\in [0,T]\), the maps \(x\mapsto u_1(t,x)\), \(x\mapsto u_2(t,x)\) and \(x\mapsto u_4(t,x)\) each have a continuous version. Since the first part of the proof shows that \(x\mapsto u_3(t,x)\) is unbounded on any open subset of D, the same property holds for \(x\mapsto u(t,x)\). \(\square \)
Remark 3.4
Taking \(f\equiv 1\), Theorem 3.3 shows that for the solution u to the heat equation with additive \(\alpha \)-stable noise, where \(\frac{2}{d} \leqslant \alpha <1+\frac{2}{d}\) (since \(\alpha <2\), this can only occur for \(d\geqslant 2\)), the path \(x\mapsto u(t,x)\) is unbounded on any non-empty open subset of D for every \(t\in [0,T]\). The same holds true if \(\nu \) has the form (3.2) with \(\alpha \) in the same range and some f that is \((\alpha -\frac{2}{d} +\varepsilon )\)-Hölder continuous at 0. For \(d\geqslant 2\), this includes tempered stable Lévy noises (with stability index in the indicated range) and normal inverse Gaussian noises.
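The spatial dichotomy in this remark can be summarized in a short script (an illustrative sketch only; the function `spatial_regularity` and its interface are ours, not part of the paper):

```python
def spatial_regularity(alpha, d):
    """Classify the path x -> u(t, x) for an additive alpha-stable noise
    in dimension d, following Proposition 3.1 (continuity when alpha < 2/d)
    and Theorem 3.3 (unboundedness when 2/d <= alpha < 1 + 2/d)."""
    if not 0 < alpha < 2:
        raise ValueError("stability index must lie in (0, 2)")
    critical = 2.0 / d
    if alpha < critical:
        return "continuous"
    if alpha < min(1 + critical, 2):
        return "unbounded on every open set"
    return "not covered by these results"

# In d = 1 the critical value 2/d = 2 exceeds every admissible alpha,
# so spatial continuity always holds there.
print(spatial_regularity(1.5, 1))  # continuous
print(spatial_regularity(1.5, 2))  # unbounded on every open set
print(spatial_regularity(0.5, 3))  # continuous
```

Note that for \(\alpha \geqslant 1+\frac{2}{d}\) neither result applies, which the sketch reports explicitly.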
3.2 Regularity in time at a fixed space point
Theorem 3.5
In the set-up described at the beginning of Sect. 3, assume that \(0<p<1\). Then for any \(x\in D\), the process \(t\mapsto u(t,x)\) has a continuous modification.
We need the following elementary lemma.
Lemma 3.6
If g is the heat kernel (1.4), then there exists \(C>0\) such that for every \(T>0\),
Proof of Theorem 3.5
Again, by a stopping time argument, it suffices to show the regularity of \(u_N\) for any \(N\geqslant 1\), as defined in (2.7), (2.28) or (2.45), respectively. We only consider \(N=1\), and decompose \(u=V+u^{1,1}+u^{1,2}+u^{2,1}+u^{2,2}+u^3\) as in the part “\(0<p\leqslant 1\)” of the proof of Proposition 3.1 (with \(A>0\) such that \(x\in (-A,A)^d\) and \(D\subset (-A,A)^d\) if D is bounded). There we explained that V, \(u^{1,2}\mathbb {1}_{[-A,A]^d}\) and \(u^{2,2}\mathbb {1}_{[-A,A]^d}\) are jointly continuous. The term \(u^{2,1}\) is again a finite sum of weighted heat kernels, so \(t\mapsto u^{2,1}(t,x)\) is smooth because almost surely none of the jumps occurs exactly at the given \(x\in D\). Moreover, for \(u^{1,1}\), we can use [35, Théorème 2.2.2] in the case \(D={\mathbb {R}}^d\) because \({\mathbb {E}}[|\sigma (u(t,x))|^p] = {\mathbb {E}}[|\sigma (u_1(t,x))|^p]\) is uniformly bounded on \([0,T]\times [-2A,2A]^d\). The proof also applies to bounded D because the heat kernel \(G_D\) is majorized by a multiple of the heat kernel on \({\mathbb {R}}^d\) by [16, Corollary 3.2.8]. Finally, since \(p<1\) and \(b_0=0\) if \(D={\mathbb {R}}^d\), \(u^3\) only needs to be considered for bounded D. With \(1<r<1+\frac{2}{d}\) and \(0\leqslant t' < t \leqslant T\), we can use Hölder’s inequality, the moment bound (2.3) or (2.43) and [37, Proposition 5] (and the proof therein) to deduce for every \(\gamma \in (0,1)\),
Thus, by choosing \(\gamma \) close to 1, the claim follows from Kolmogorov’s continuity theorem. \(\square \)
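For the reader's convenience, the version of Kolmogorov's continuity theorem invoked in this last step can be stated as follows (a standard formulation with generic exponents; the symbols \(q,\beta ,C\) below are placeholders and not the specific \(r,\gamma \) of the proof above):

```latex
% Kolmogorov continuity criterion (standard form): if a real-valued
% process $(X_t)_{t\in[0,T]}$ satisfies, for some $q,\beta,C>0$,
\[
  \mathbb{E}\bigl[\,|X_t - X_s|^q\,\bigr] \;\leqslant\; C\,|t-s|^{1+\beta}
  \qquad \text{for all } s,t\in[0,T],
\]
% then $X$ admits a modification whose sample paths are H\"older
% continuous of every order $\theta < \beta/q$.
```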
Theorem 3.7
If \(\sigma \equiv 1\) and \(\nu \) satisfies (3.2) for some \(\alpha \in [1,1+\frac{2}{d})\), \(\delta >0\), and some measurable \(f:[-\delta ,\delta ]\rightarrow [0,+\infty )\) with \(f(0)\ne 0\) and (3.3) for some \(0<r<1\), then for any \(x\in D\), the process \(t\mapsto u(t,x)\) is unbounded on any non-empty open interval in [0, T] with probability 1.
Proof
The proof is analogous to the proof of Theorem 3.3. It suffices to show that
for \(x\in D\) and \(0\leqslant t_1 < t_2 \leqslant T\). By [38, Theorem 2 and Lemma 9], it suffices to consider \(D={\mathbb {R}}^d\). In this case, the integral above is bounded from below by
so (3.6) follows from Lemma 3.6:
\(\square \)
Remark 3.8
In contrast to the results on regularity in space, the critical exponent \(p=1\) for temporal regularity does not depend on the dimension d. In particular, for all \(d\geqslant 1\), we obtain continuity of \(t\mapsto u(t,x)\), x fixed, for any (tempered) \(\alpha \)-stable noise with \(\alpha \in (0,1)\) (and \(b_0=0\) if \(D={\mathbb {R}}^d\)), any (variance-)gamma noise and inverse Gaussian noise. In the case of additive noise, \(t\mapsto u(t,x)\) is almost surely unbounded on any non-empty open subinterval for (tempered) \(\alpha \)-stable noises with \(\alpha \in [1,1+\frac{2}{d})\) and normal inverse Gaussian noises.
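The temporal dichotomy can be sketched in the same informal way as the spatial one (again our own illustrative function, not part of the paper); the dimension-free critical exponent 1 is visible in the code:

```python
def temporal_regularity(alpha, d):
    """Classify the path t -> u(t, x) for an additive alpha-stable noise:
    continuity when alpha < 1 (Theorem 3.5) and unboundedness on every
    open interval when 1 <= alpha < 1 + 2/d (Theorem 3.7)."""
    if not 0 < alpha < 2:
        raise ValueError("stability index must lie in (0, 2)")
    if alpha < 1:
        return "continuous"
    if alpha < 1 + 2.0 / d:
        return "unbounded on every open interval"
    return "not covered by these results"

# The temporal critical exponent is 1 in every dimension:
for d in (1, 2, 3):
    print(d, temporal_regularity(1.2, d))
```

In contrast, the spatial critical exponent \(\frac{2}{d}\) shrinks with the dimension.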
References
1. Balan, R.M.: SPDEs with \(\alpha \)-stable Lévy noise: a random field approach. Int. J. Stoch. Anal. (2014). https://doi.org/10.1155/2014/793275
2. Balan, R.M.: Integration with respect to Lévy colored noise, with applications to SPDEs. Stochastics 83(3), 363–381 (2015)
3. Bally, V., Millet, A., Sanz-Solé, M.: Approximation and support theorem in Hölder norm for parabolic stochastic partial differential equations. Ann. Probab. 23(1), 178–222 (1995)
4. Bichteler, K., Jacod, J.: Random measures and stochastic integration. In: Kallianpur, G. (ed.) Theory and Application of Random Fields, pp. 1–18. Springer, Berlin (1983)
5. Bichteler, K., Lin, S.J.: On the stochastic Fubini theorem. Stoch. Stoch. Rep. 54(3–4), 271–279 (1995)
6. Bogachev, V.I.: Measure Theory, vol. I. Springer, Berlin (2007)
7. Brzeźniak, Z., Goldys, B., Imkeller, P., Peszat, S., Priola, E., Zabczyk, J.: Time irregularity of generalized Ornstein–Uhlenbeck processes. C. R. Acad. Sci. Paris Ser. I 348(5–6), 273–276 (2010)
8. Brzeźniak, Z., Hausenblas, E.: Maximal regularity for stochastic convolutions driven by Lévy processes. Probab. Theory Relat. Fields 145(3–4), 615–637 (2009)
9. Brzeźniak, Z., Zabczyk, J.: Regularity of Ornstein–Uhlenbeck processes driven by a Lévy white noise. Potential Anal. 32(2), 153–188 (2010)
10. Chen, L., Dalang, R.C.: Hölder-continuity for the nonlinear stochastic heat equation with rough initial conditions. Stoch. Partial Differ. Equ. Anal. Comput. 2(3), 316–352 (2014)
11. Chong, C.: Lévy-driven Volterra equations in space and time. J. Theor. Probab. 30(3), 1014–1058 (2017)
12. Chong, C.: Stochastic PDEs with heavy-tailed noise. Stoch. Process. Appl. 127(7), 2262–2280 (2017)
13. Chong, C., Klüppelberg, C.: Integrability conditions for space-time stochastic integrals: theory and applications. Bernoulli 21(4), 2190–2216 (2015)
14. Cont, R., Tankov, P.: Financial Modelling with Jump Processes. Chapman & Hall, Boca Raton (2004)
15. Dalang, R.C.: Extending the martingale measure stochastic integral with applications to spatially homogeneous s.p.d.e.’s. Electron. J. Probab. 4(6), 1–29 (1999)
16. Davies, E.B.: Heat Kernels and Spectral Theory. Cambridge University Press, Cambridge (1990)
17. Èĭdel’man, S.D., Ivasišen, S.D.: Investigation of the Green matrix for a homogeneous parabolic boundary value problem. Trans. Moscow Math. Soc. 23, 179–242 (1970)
18. Gihman, I.I., Skorohod, A.V.: The Theory of Stochastic Processes, vol. 1. Springer, New York (1974)
19. Gorenflo, R., Kilbas, A.A., Mainardi, F., Rogosin, S.V.: Mittag-Leffler Functions, Related Topics and Applications. Springer, Berlin (2014)
20. Grisvard, P.: Caractérisation de quelques espaces d’interpolation. Arch. Ration. Mech. Anal. 25(1), 40–63 (1967)
21. Hausenblas, E.: Existence, uniqueness and regularity of parabolic SPDEs driven by Poisson random measure. Electron. J. Probab. 10(46), 1496–1546 (2005)
22. Hu, Y., Huang, J., Nualart, D., Tindel, S.: Stochastic heat equations with general multiplicative Gaussian noises: Hölder continuity and intermittency. Electron. J. Probab. 20(55), 1–50 (2015)
23. Khoshnevisan, D.: Analysis of Stochastic Partial Differential Equations. American Mathematical Society, Providence (2014)
24. Kotelenez, P.: A submartingale type inequality with applications to stochastic evolution equations. Stochastics 8(2), 139–151 (1982)
25. Lebedev, V.A.: Behavior of random measures under filtration change. Theory Probab. Appl. 40(4), 645–652 (1996)
26. Lebedev, V.A.: The Fubini theorem for stochastic integrals with respect to \({L}^0\)-valued random measures depending on a parameter. Theory Probab. Appl. 40(2), 285–293 (1996)
27. Lions, J.-L., Magenes, E.: Problèmes aux limites non homogènes et applications, vol. 1. Dunod, Paris (1968)
28. Marinelli, C., Röckner, M.: On maximal inequalities for purely discontinuous martingales in infinite dimensions. In: Donati-Martin, C., Lejay, A., Rouault, A. (eds.) Séminaire de Probabilités XLVI, pp. 293–315. Springer, Cham (2014)
29. Mytnik, L., Perkins, E.: Regularity and irregularity of \((1+\beta )\)-stable super-Brownian motion. Ann. Probab. 31(3), 1413–1440 (2003)
30. Nochetto, R.H., Otárola, E., Salgado, A.J.: A PDE approach to fractional diffusion in general domains: a priori error analysis. Found. Comput. Math. 15(3), 733–791 (2015)
31. Peszat, S., Zabczyk, J.: Stochastic Partial Differential Equations with Lévy Noise: An Evolution Equation Approach. Cambridge University Press, Cambridge (2007)
32. Peszat, S., Zabczyk, J.: Time regularity of solutions to linear equations with Lévy noise in infinite dimensions. Stoch. Process. Appl. 123(3), 719–751 (2013)
33. Protter, P.E.: Stochastic Integration and Differential Equations, 2nd edn. Springer, Berlin (2005)
34. Samorodnitsky, G., Taqqu, M.S.: Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. Chapman & Hall, Boca Raton (1994)
35. Saint Loubert Bié, E.: Étude d’une EDPS conduite par un bruit poissonnien. Probab. Theory Relat. Fields 111(2), 287–321 (1998)
36. Sanz-Solé, M., Sarrà, M.: Hölder continuity for the stochastic heat equation with spatially correlated noise. In: Dalang, R.C., Dozzi, M., Russo, F. (eds.) Seminar on Stochastic Analysis, Random Fields and Applications III. Birkhäuser, Basel (2002)
37. Sanz-Solé, M., Vuillermot, P.-A.: Equivalence and Hölder–Sobolev regularity of solutions for a class of non-autonomous stochastic partial differential equations. Ann. Inst. H. Poincaré Probab. Statist. 39(4), 703–742 (2003)
38. Van den Berg, M.: A Gaussian lower bound for the Dirichlet heat kernel. Bull. Lond. Math. Soc. 24(5), 475–477 (1992)
39. Walsh, J.B.: An introduction to stochastic partial differential equations. In: Hennequin, P.L. (ed.) École d’Été de Probabilités de Saint-Flour, XIV–1984, pp. 265–439. Springer, Berlin (1986)
Acknowledgements
We thank two anonymous referees for their careful reading of our manuscript.
This research is partially supported by the Swiss National Foundation for Scientific Research.
Appendix A: Integration with respect to random measures
Just as Lévy processes are special instances of semimartingales, a Lévy noise as in (1.6) is a random measure as introduced in [4]. We give a short introduction to the associated integration theory, concentrating on the main ideas and results that we need for our purposes. All details not mentioned or explained can be found in [4, 13, 25, 26]. Given a filtered probability space \((\Omega ,{\mathcal {F}},({\mathcal {F}}_t)_{t\in [0,T]},{\mathbb {P}})\) satisfying the usual conditions, consider a Polish space E (e.g., \(E=D\), the spatial domain in (1.1)), and denote by \({\mathcal {P}}\) the \(\sigma \)-field \({\mathcal {P}}_0\otimes {\mathcal {B}}(E)\), where \({\mathcal {P}}_0\) is the usual predictable \(\sigma \)-field. With an abuse of terminology, \({\mathcal {P}}\)-measurable mappings from \({\tilde{\Omega }} :=\Omega \times [0,T]\times E\) to \({\mathbb {R}}\) are again called predictable, and their collection is again denoted by \({\mathcal {P}}\).
Given a sequence \(({\tilde{\Omega }}_k)_{k\geqslant 1}\) in \({\mathcal {P}}\) satisfying \({\tilde{\Omega }}_k \uparrow {\tilde{\Omega }}\), a mapping \(M:{\mathcal {P}}_M:=\bigcup _{k\geqslant 1} {\mathcal {P}}|_{{\tilde{\Omega }}_k} \rightarrow L^p\), where \(p\in [0,+\infty )\), is called an \(L^p\)-random measure if for every sequence \((A_i)_{i\geqslant 1}\) of pairwise disjoint sets in \({\mathcal {P}}_M\) with \(\bigcup _{i\geqslant 1} A_i\in {\mathcal {P}}_M\), we have \(M(\bigcup _{i\geqslant 1} A_i) = \sum _{i\geqslant 1} M(A_i)\) in \(L^p\), and some additional “\(({\mathcal {F}}_t)_{t\in [0,T]}\)-adaptedness” conditions are satisfied. In our example of a Lévy noise on \([0,T]\times D\), we can take \({\tilde{\Omega }}_k = \Omega \times [0,T]\times D\) if D is bounded, and \({\tilde{\Omega }}_k = \Omega \times [0,T]\times [-k,k]^d\) if \(D={\mathbb {R}}^d\).
The stochastic integral of a simple integrand of the form \(S=\sum _{i=1}^r a_i \mathbb {1}_{A_i}\), where \(r\in {\mathbb {N}}\), \(a_i\in {\mathbb {R}}\) and \(A_i\in {\mathcal {P}}_M\), is defined in the canonical way by
Denoting by \({\mathcal {S}}_M\) the collection of such simple integrands, the extension of the integral to a larger subset of \({\mathcal {P}}\) is carried out using the Daniell mean
A predictable process H is called p-integrable with respect to M if there exists a sequence \((S_n)_{n\geqslant 1}\) of simple integrands with \(\Vert H-S_n \Vert _{M,p} \rightarrow 0\) as \(n\rightarrow +\infty \). The collection of p-integrable processes is denoted by \(L^{1,p}(M)\) (or \(L^{1,p}(M,{\mathbb {P}})\) if we want to emphasize the probability measure). The stochastic integral of H with respect to M is then defined as the \(L^p\)-limit of \(\int _0^T\int _E S_n(t,x)\,M(\mathrm {d}t,\mathrm {d}x)\) (which exists and does not depend on the choice of \(S_n\)). In all notions introduced, the prefix p is suppressed if \(p=0\). The constructed integral obeys the dominated convergence theorem, see [4, (2.6)].
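For orientation, we recall the Daniell mean referred to above in the following form (our transcription, to be checked against [4]; it is consistent with the bound \(|S|\leqslant |H|\) appearing in the proof of Lemma A.2): for predictable H,

```latex
% Daniell mean of a predictable process H with respect to M,
% taken over simple integrands dominated by |H|:
\[
  \Vert H\Vert _{M,p}
  \;=\;
  \sup \left\{ \left\Vert \int _0^T\!\!\int _E S(t,x)\,M(\mathrm{d}t,\mathrm{d}x)
  \right\Vert _{L^p} \;:\; S\in {\mathcal {S}}_M,\ |S|\leqslant |H| \right\}.
\]
```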
Theorem A.1
If \((H_n)_{n\geqslant 1}\) are predictable and converge pointwise to H, and \(|H_n|\leqslant H_0\) for all \(n\geqslant 1\) and some \(H_0\in L^{1,p}(M)\), then \(H_n, H \in L^{1,p}(M)\) and \(\Vert H-H_n\Vert _{M,p}\rightarrow 0\) as \(n\rightarrow +\infty \).
In this paper, we are particularly interested in the case where M is a linear combination of random measures of one of the following forms:
(a) M is a predictable strict random measure, that is, almost every realization of M is a measure on \([0,T]\times E\) and \(t\mapsto \int _0^T \int _E \mathbb {1}_A(s,y) \mathbb {1}_{[0,t]}(s)\,M(\mathrm {d}s,\mathrm {d}y)\) is a predictable process for all \(A\in {\mathcal {P}}_M\).
(b) \(M(\mathrm {d}t,\mathrm {d}x) = \int _{z\in {\mathbb {R}}} W(t,x,z)\,{\tilde{J}}(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z)\), where J is an \(({\mathcal {F}}_t)_{t\in [0,T]}\)-Poisson random measure with intensity measure \(\nu (\mathrm {d}t,\mathrm {d}x,\mathrm {d}z)\), \({\tilde{J}} = J-\nu \), and \(W\mathbb {1}_{{\tilde{\Omega }}_k}\) is 1-integrable with respect to \({\tilde{J}}\) (a random measure on \(E\times {\mathbb {R}}\)) in the sense above for every \(k\geqslant 1\).
(c) M is a strict random measure of the form \(M(\mathrm {d}t,\mathrm {d}x) = \int _{z\in {\mathbb {R}}} W(t,x,z)\,J(\mathrm {d}t,\mathrm {d}x, \mathrm {d}z)\), where \(W\mathbb {1}_{{\tilde{\Omega }}_k}\) is integrable with respect to J for every \(k\geqslant 1\).
In these cases, the Daniell mean can be computed (or estimated) explicitly.
Lemma A.2
For \(0<p<1\), let \(\Vert X\Vert _{L^p} = {\mathbb {E}}[|X|^p]\), and for \(p\geqslant 1\), let \(\Vert X\Vert _{L^p} = ({\mathbb {E}}[|X|^p])^{\frac{1}{p}}\) be the usual \(L^p\)-norm.
1. In the case (a) above, we have for every \(0<p<+\infty \) and \(H\in {\mathcal {P}}\),
$$\begin{aligned} \Vert H\Vert _{M,p} = \left\| \int _0^T \int _E |H(t,x)|\,|M|(\mathrm {d}t,\mathrm {d}x) \right\| _{L^p}, \end{aligned}$$(A.2)
where |M| is the total variation measure of M.
2. In the case (b) above, there exist for every \(p\geqslant 1\) constants \(c_p, C_p>0\) such that for all \(H\in {\mathcal {P}}\),
$$\begin{aligned}&c_p \left\| \left( \int _0^T \int _E H^2(t,x)\,[M](\mathrm {d}t,\mathrm {d}x) \right) ^{\frac{1}{2}} \right\| _{L^p} \\&\quad \leqslant \Vert H\Vert _{M,p}\leqslant C_p \left\| \left( \int _0^T \int _E H^2(t,x)\,[M](\mathrm {d}t,\mathrm {d}x) \right) ^{\frac{1}{2}} \right\| _{L^p}, \end{aligned}$$
where \([M](\mathrm {d}t,\mathrm {d}x) = \int _{z\in {\mathbb {R}}} W^2(t,x,z)\,J(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z)\) is the quadratic variation measure of M. In particular,
$$\begin{aligned} \Vert H\Vert _{M,p}\leqslant C_p \left( \int _0^T\int _E\int _{\mathbb {R}}\Vert H(t,x) W(t,x,z)\Vert _{L^p}^p\,\nu (\mathrm {d}t,\mathrm {d}x,\mathrm {d}z)\right) ^{\frac{1}{p}}. \end{aligned}$$(A.3)
3. In the case (c) above, we have for \(0<p\leqslant 1\) and \(H\in {\mathcal {P}}\),
$$\begin{aligned} \Vert H\Vert _{M,p} \leqslant \int _0^T \int _E \int _{\mathbb {R}}\Vert H(t,x)W(t,x,z)\Vert _{L^p}\, \nu (\mathrm {d}t,\mathrm {d}x,\mathrm {d}z). \end{aligned}$$
Proof
For the first statement, the “\(\leqslant \)”-part follows from
for all S with \(|S|\leqslant |H|\). For the “\(\geqslant \)”-part, observe that the right-hand side of (A.2) equals \(\Vert H\Vert _{|M|,p}\) by dominated convergence. Next, consider the measure \(\mu (\mathrm {d}\omega ,\mathrm {d}t,\mathrm {d}x) = M(\omega ,\mathrm {d}t,\mathrm {d}x)\,{\mathbb {P}}(\mathrm {d}\omega )\) on \({\mathcal {P}}\) and let \(D(\omega ,t,x)\) be its Radon–Nikodym derivative with respect to \(|\mu |(\mathrm {d}\omega ,\mathrm {d}t,\mathrm {d}x)\). Then D is predictable, \(|D|\equiv 1\) and \(|M|(\omega ,\mathrm {d}t,\mathrm {d}x)=D(\omega ,t,x)\,M(\omega ,\mathrm {d}t,\mathrm {d}x)\). Hence,
and (A.2) is proved.
For the second statement, we observe that \(t\mapsto \int _0^T\int _E S(s,y)\mathbb {1}_{[0,t]}(s)\,M(\mathrm {d}s,\mathrm {d}y)\) is a local martingale for all \(S\in {\mathcal {S}}_M\), see [4, Proposition 4.9(b)]. So the statement for \(S\in {\mathcal {S}}_M\) follows from the Burkholder–Davis–Gundy inequalities. The general case is again a consequence of the dominated convergence theorem. Inequality (A.3) can be proved along the lines of the third statement, which we now establish.
Let \((T_i,X_i,Z_i)_{i\geqslant 1}\) be the points of J in \([0,T]\times E\times {\mathbb {R}}\). Then, using \((x+y)^p \leqslant x^p+y^p\) for \(x,y>0\) and \(0<p\leqslant 1\), we get
and the proof is complete. \(\square \)
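The elementary inequality \((x+y)^p \leqslant x^p + y^p\) for \(x,y>0\) and \(0<p\leqslant 1\) used in this step is easy to sanity-check numerically (a throwaway illustration, not part of the argument; the helper `subadditive_ok` is ours):

```python
import itertools

def subadditive_ok(p, xs):
    """Check the inequality (x + y)**p <= x**p + y**p on a grid of
    positive pairs (up to a tiny floating-point tolerance)."""
    return all((x + y) ** p <= x ** p + y ** p + 1e-12
               for x, y in itertools.product(xs, repeat=2))

grid = [0.1, 0.5, 1.0, 3.0, 10.0]
print(subadditive_ok(0.5, grid))   # True: holds for 0 < p <= 1
print(subadditive_ok(1.0, grid))   # True: equality when p = 1
print(subadditive_ok(1.5, grid))   # False: fails as soon as p > 1
```

The failure for \(p>1\) shows why the third statement of Lemma A.2 is restricted to \(0<p\leqslant 1\).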
With the help of the Daniell mean, one can obtain the following stochastic Fubini theorem.
Theorem A.3
Let \((A,{\mathcal {A}},\mu )\) be a \(\sigma \)-finite measure space, M be an \(L^p\)-random measure for some \(p>0\), and H be a \({\mathcal {P}}\otimes {\mathcal {A}}\)-measurable function.
1. If \(p\geqslant 1\) and \(\int _A \Vert H(\cdot ,\cdot ,a)\Vert _{M,p}\,\mu (\mathrm {d}a)<+\infty \), then
$$\begin{aligned} \int _A \left( \int _0^T \int _E H(t,x,a)\, M(\mathrm {d}t,\mathrm {d}x)\right) \,\mu (\mathrm {d}a) \end{aligned}$$
and
$$\begin{aligned} \int _0^T \int _E \left( \int _A H(t,x,a)\,\mu (\mathrm {d}a) \right) \,M(\mathrm {d}t,\mathrm {d}x) \end{aligned}$$
are equal almost surely, and all integrals involved are well defined.
2. If \(0<p\leqslant 1\) and M is a random measure as in (c) above, the conclusion of the first part continues to hold if
$$\begin{aligned} \int _0^T\int _E \int _{\mathbb {R}}\left\| \int _A |H(t,x,a)|\,\mu (\mathrm {d}a) |W(t,x,z)| \right\| _{L^p} \,\nu (\mathrm {d}t,\mathrm {d}x,\mathrm {d}z)<+\infty .\qquad \end{aligned}$$(A.4)
The first part has been proved in [26, Theorem 2] for \(p=1\) (see also [5] for processes indexed only by time), but it is also valid for \(p>1\) by the monotonicity of \(L^p\)-norms. In particular, Lemma A.2 can be used to verify the integrability assumption. The second part follows from the ordinary Fubini theorem (M is a strict random measure here) together with an argument as in the proof of the third part of Lemma A.2.
A last result that we need relates to the possibility of recovering \(L^2\)-integrability from \(L^p\)-integrability, \(0\leqslant p<2\), upon an equivalent change of probability measure. For semimartingales, this result is well known, see [33, Chapter IV, Theorem 34], for example. For random measures, it is proved in [25, Corollary of Theorem 2].
Theorem A.4
If M is an \(L^p\)-random measure and \(H\in L^{1,p}(M)\) for some \(0\leqslant p<2\), then there exists a probability measure \({\mathbb {Q}}\) that is equivalent to \({\mathbb {P}}\) on \({\mathcal {F}}\) such that \(\frac{\mathrm {d}{\mathbb {Q}}}{\mathrm {d}{\mathbb {P}}}\) is bounded, \(\frac{\mathrm {d}{\mathbb {P}}}{\mathrm {d}{\mathbb {Q}}} \in L^{\frac{p}{2-p}}({\mathbb {P}})\), M is an \(L^2\)-random measure under \({\mathbb {Q}}\), and \(H\in L^{1,2}(M,{\mathbb {Q}})\).
Chong, C., Dalang, R.C. & Humeau, T. Path properties of the solution to the stochastic heat equation with Lévy noise. Stoch PDE: Anal Comp 7, 123–168 (2019). https://doi.org/10.1007/s40072-018-0124-y