1 Introduction

Let \(T>0\) and consider, on a stochastic basis \((\Omega ,{\mathcal {F}},({\mathcal {F}}_t)_{t\in [0,T]},{\mathbb {P}})\) satisfying the usual conditions, the stochastic heat equation driven by a Lévy space–time white noise on \([0,T]\times D\) with Dirichlet boundary conditions:

$$\begin{aligned} \left\{ \begin{array}{ll} \frac{\partial u}{\partial t}(t,x) = \Delta u(t,x)+\sigma (u(t,x)) \dot{L}(t, x), &{}\quad (t,x) \in (0,T)\times D,\\ u(t,x)=0, &{}\quad \text {for all}\quad (t,x)\in [0,T]\times \partial D, \\ u(0,x)=u_0(x), &{}\quad \text {for all}\quad x\in D, \end{array}\right. \qquad \end{aligned}$$
(1.1)

where D is the whole space \({\mathbb {R}}^d\) or a bounded domain in \({\mathbb {R}}^d\), \(\sigma :{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a Lipschitz function, \(u_0:{\bar{D}}\rightarrow {\mathbb {R}}\) is a bounded continuous initial condition vanishing on \(\partial D\), and \(\dot{L}\) is a Lévy space–time white noise. If \(D={\mathbb {R}}^d\), the boundary conditions on u and \(u_0\) are considered void.

A predictable random field \(u=\left( u(t,x):(t,x) \in [0,T]\times D \right) \) is called a mild solution to (1.1) if for all \((t,x) \in [0,T]\times D\),

$$\begin{aligned} u(t,x)=V(t,x)+\int _0^t \int _D G_D(t-s;x,y) \sigma (u(s,y)) \,L(\mathrm {d}s, \mathrm {d}y) \end{aligned}$$
(1.2)

almost surely, where

$$\begin{aligned} V(t,x)=\int _D G_D(t;x,y)u_0(y)\,\mathrm {d}y,\quad (t,x)\in [0,T]\times D, \end{aligned}$$
(1.3)

is the solution to the homogeneous version of (1.1).

In (1.2) and (1.3), \(G_D\) denotes the Green’s function of the heat operator on D, which for \(D={\mathbb {R}}^d\) equals the Gaussian density

$$\begin{aligned} g(t,x) = (4\pi t)^{-\frac{d}{2}}e^{-\frac{|x|^2}{4t}}\mathbb {1}_{t\geqslant 0} \end{aligned}$$
(1.4)

(when \(t=0\), we interpret g(0, x) as the Dirac delta function \(\delta _0(x)\)), while on a bounded domain D with smooth boundary it has the spectral representation

$$\begin{aligned} G_D(t;x,y)=\sum _{j\geqslant 1} \Phi _j(x) \Phi _j (y) e^{-\lambda _j t}\mathbb {1}_{t\geqslant 0}, \quad \text {for all}~x,y \in D, \end{aligned}$$
(1.5)

where \((\lambda _j)_{j\geqslant 1}\) are the eigenvalues of \(-\Delta \) with vanishing Dirichlet boundary conditions, and \((\Phi _j)_{j\geqslant 1}\) are the corresponding eigenfunctions forming a complete orthonormal basis of \(L^2(D)\).
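Although not needed for the analysis, the interpretation of g as an approximate identity can be checked numerically. The following sketch (Python with numpy; the grid and time points are arbitrary choices of ours) verifies that \(g(t,\cdot )\) has total mass one for each \(t>0\) and concentrates near the origin as \(t\downarrow 0\), consistent with reading \(g(0,\cdot )\) as \(\delta _0\):

```python
import numpy as np

def g(t, x, d=1):
    # Free-space heat kernel (1.4), evaluated here in dimension d = 1.
    return (4.0 * np.pi * t) ** (-d / 2.0) * np.exp(-(x ** 2) / (4.0 * t))

x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]

# Total mass equals 1 for every t > 0 (Riemann sum on a wide grid).
masses = [float(np.sum(g(t, x)) * dx) for t in (1.0, 0.1, 0.01)]

# The mass inside (-1/2, 1/2) increases towards 1 as t decreases,
# consistent with interpreting g(0, .) as the Dirac delta at 0.
inner = np.abs(x) < 0.5
concentration = [float(np.sum(g(t, x[inner])) * dx) for t in (1.0, 0.1, 0.01)]
```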

In the special case where \(\dot{L}\) is a Gaussian noise, the existence, uniqueness and regularity of solutions to Eq. (1.1) have been extensively studied in the literature, see e.g. [3, 10, 23, 39] for the case of space–time white noise, [15, 36, 37] for noises that are white in time but colored in space, and [22] for noises that may exhibit temporal covariances as well. In all cases, the mild solution to (1.2) is jointly locally Hölder continuous in space and time, with exponents that depend on the covariance structure of the noise.

By contrast, suppose that \(\dot{L}\) is a Lévy space–time white noise without Gaussian part, that is,

$$\begin{aligned} \begin{aligned} L( \mathrm {d}t, \mathrm {d}x)&= b\,\mathrm {d}t\,\mathrm {d}x+\int _{|z|\leqslant 1} z\, {{\tilde{J}}}(\mathrm {d}t, \mathrm {d}x, \mathrm {d}z)+\int _{|z|> 1} z \,J(\mathrm {d}t, \mathrm {d}x, \mathrm {d}z)\\&=:L^B(\mathrm {d}t,\mathrm {d}x)+L^M(\mathrm {d}t, \mathrm {d}x) +L^P(\mathrm {d}t , \mathrm {d}x), \end{aligned} \end{aligned}$$
(1.6)

where \(b\in {\mathbb {R}}\), J is an \(({\mathcal {F}}_t)_{t\in [0,T]}\)-Poisson random measure on \([0,T]\times D \times {\mathbb {R}}\) with intensity \(\mathrm {d}t \, \mathrm {d}x \, \nu (\mathrm {d}z)\), and \({{\tilde{J}}}\) is the compensated version of J. Here \(\nu \) is a Lévy measure, that is, \(\nu (\{0\}) =0\) and \(\int _{\mathbb {R}}\left( z^2\wedge 1 \right) \,\nu (\mathrm {d}z)<+\infty \), and we assume that \(\nu \) is not identically zero. The existence and uniqueness of solutions for equations like (1.1) with Lévy noise have been investigated in [1, 2, 11, 12, 31, 35].

Already in the linear case with \(\sigma (x)\equiv 1\), due to the singularity of the Green’s kernel on the diagonal \(x=y\) near \(t=0\), each jump of the noise creates a Dirac mass for the solution. Even worse, if \(\nu ({\mathbb {R}})=\infty \), these space–time jump points form a dense subset of \([0,T]\times D\). Hence one cannot expect the solution to have any continuity properties jointly in space and time.

In this article, we thus take two different viewpoints and consider

  1. The path properties of \(t\mapsto u(t,\cdot )\) as a process with values in an infinite-dimensional space;

  2. The path properties of the partial maps \(t\mapsto u(t,x)\) for fixed \(x\in D\), and of \(x\mapsto u(t,x)\) for fixed \(t\in [0,T]\).

For each \(t\geqslant 0\), \(u(t,\cdot )\) may take values in \(L^p(D)\) for some \(p>0\) almost surely, but since each atom of the Lévy noise introduces a Dirac delta into the solution, the process \(t\mapsto u(t,\cdot )\) cannot have a càdlàg version in such a space (see also [7] or [31, Proposition 9.25]). Instead, one should consider spaces of distributions containing delta functions, such as negative fractional Sobolev spaces \(H_r(D)\) for \(r<-\frac{d}{2}\) (see Sects. 2.1.1, 2.2.2 and 2.3.1). If \(\sigma =1\) and the noise has a finite second moment, the existence of a càdlàg modification in such spaces follows from a result of [24] on maximal inequalities for stochastic convolutions in an infinite-dimensional setting, see also [31, Chapter 9.4.2]. This type of result has also been obtained in the case of additive (possibly colored) Lévy noise in [8, 9, 32]. To the best of our knowledge, the question of existence of càdlàg versions in the case of multiplicative noise has only been studied in [21]. For the relation of the results of [21] to ours, see Remark 2.16.

In Sect. 2 of this paper, we substantially generalize the aforementioned results in the case of a Lévy space–time white noise (1.6): Without any further assumptions than those required for the existence of solutions, we prove in Theorems 2.5, 2.15 and 2.19, for both a bounded domain D and the case \(D={\mathbb {R}}^d\), that \(t\mapsto u(t,\cdot )\) has a càdlàg modification in \(H_r(D)\) and \(H_{r,loc}({\mathbb {R}}^d)\), respectively, for any \(r<-\frac{d}{2}\). To this end, we start our analysis by considering the stochastic heat equation on the interval \(D=[0,\pi ]\) in Sect. 2.1. Treating this basic case first has the advantage that we can directly proceed to the main steps of the proof while avoiding the technical difficulties of the general case. Next, in Sect. 2.2, we demonstrate how the proof for \(D=[0,\pi ]\) can be directly extended to the case \(D={\mathbb {R}}^d\), provided \(\sigma \) is bounded and \(\nu \) has finite second moments. But in order to cover the general case of Lipschitz continuous \(\sigma \) and heavy-tailed noises, we need to use stopping time techniques from [12] to deal with the (infinitely many) large jumps of the noise, as well as results from the integration theory for general random measures (see the Appendix) to compensate the absence of finite second moments for \(d\geqslant 2\), due to the singularity of the heat kernel and the small jumps of the noise. Finally, the proof for \(D=[0,\pi ]\) does not extend to bounded domains in \({\mathbb {R}}^d\) with \(d\geqslant 2\) because the eigenfunctions are typically no longer uniformly bounded. Instead, the proof we give in Sect. 2.3 makes use of the fact that in the interior of D, the Green’s function \(G_D\) can be decomposed into the Gaussian density g (where we can use the results of Sect. 2.2) and a smooth function. With the methods reviewed in the Appendix, we also obtain sufficient control at the boundary of D.

Regarding the partial regularity of \(t\mapsto u(t,x)\) and \(x\mapsto u(t,x)\), [35, Section 2] obtained the following result on \(D={\mathbb {R}}^d\): If the Lévy measure \(\nu \) of L satisfies \(\int _{\mathbb {R}}|z|^p \,\nu (\mathrm {d}z)<+\infty \) for some \(p<\frac{2}{d}\), then for fixed t, the process \(x\mapsto u(t,x)\) has a continuous modification. Similarly, if \(\int _{\mathbb {R}}|z|^p \,\nu (\mathrm {d}z)<+\infty \) for some \(p<1\), then there exists a continuous modification of \(t\mapsto u(t,x)\) for every fixed x. Extending the results of [35], our Theorems 3.1 and 3.5, which also apply to bounded domains, show that it suffices to require \(\int _{[-1,1]} |z|^p \,\nu (\mathrm {d}z)<+\infty \), which includes, for example, \(\alpha \)-stable noises with \(\alpha <\frac{2}{d}\) (for spatial regularity) and \(\alpha <1\) (for temporal regularity). Furthermore, these conditions are essentially sharp as we show in Theorems 3.3 and 3.7: If \(\sigma \equiv 1\), and if \(\nu \) has the same behavior near the origin as the Lévy measure of an \(\alpha \)-stable noise, then for \(\frac{2}{d} \leqslant \alpha < 1+\frac{2}{d}\) (resp. \(1\leqslant \alpha <1+\frac{2}{d}\)), the paths of \(x\mapsto u(t,x)\) (resp. \(t\mapsto u(t,x)\)) are unbounded on any non-empty open subset of D (resp. [0, T]). Let us remark that the last conclusion was observed in [29] for an \(\alpha \)-stable noise \(\Lambda \) and the (non-Lipschitz) function \(\sigma (x)=x^{\frac{1}{\alpha }}\) with \(\alpha \in (1,1+\frac{2}{d})\) via a connection between the resulting equation and stable super-Brownian motion (note that our \(\alpha \) is \(1+\beta \) in this reference).

In what follows, the letter C, occasionally with subscripts indicating the parameters that it depends on, denotes a strictly positive finite number whose value may change from line to line.

2 Regularity of the solution in fractional Sobolev spaces

2.1 The stochastic heat equation on an interval

For the interval \(D=[0,\pi ]\), the Green’s function \(G=G_D\) has the explicit representation

$$\begin{aligned} G(t;x,y):=G_D(t; x,y)=\frac{2}{\pi }\sum _{k\geqslant 1} \sin (kx) \sin (ky) e^{-k^2 t}\mathbb {1}_{t\geqslant 0}. \end{aligned}$$
(2.1)

The existence and uniqueness of mild solutions to (1.1) in this case essentially follow from [11].
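The existence proof below relies on the Gaussian domination \(G(t;x,y)\leqslant Cg(t,x-y)\) from [3, (B.5)]. As a numerical illustration (Python with numpy; the truncation level and evaluation grid are our choices), the truncated series (2.1) stays below the free-space kernel (1.4) at interior points, consistent with domain monotonicity of Dirichlet heat kernels, and essentially coincides with it at an interior point for small t:

```python
import numpy as np

def G_series(t, x, y, kmax=400):
    # Truncated Dirichlet Green's function (2.1) on [0, pi].
    k = np.arange(1, kmax + 1)
    return float((2.0 / np.pi) * np.sum(np.sin(k * x) * np.sin(k * y) * np.exp(-(k ** 2) * t)))

def g1(t, x):
    # One-dimensional Gaussian density (1.4).
    return (4.0 * np.pi * t) ** -0.5 * np.exp(-(x ** 2) / (4.0 * t))

# Interior point, small time: the boundary is "not felt" and the series
# essentially coincides with the free-space kernel.
t0, x0 = 0.01, np.pi / 2
rel_err = abs(G_series(t0, x0, x0) - g1(t0, 0.0)) / g1(t0, 0.0)

# Domination G(t; x, y) <= g(t, x - y), checked on an interior grid up to
# floating-point cancellation error in the truncated series.
grid = np.linspace(0.3, np.pi - 0.3, 7)
vals = [(G_series(s, xx, yy), g1(s, xx - yy)) for s in (0.01, 0.1, 0.5) for xx in grid for yy in grid]
excess = max(G - g for G, g in vals)
min_val = min(G for G, g in vals)
```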

Proposition 2.1

Let \(\sigma :{\mathbb {R}}\rightarrow {\mathbb {R}}\) be a Lipschitz function, \(u_0:[0,\pi ]\rightarrow {\mathbb {R}}\) be continuous with \(u_0(0)=u_0(\pi )=0\), and L be a pure jump Lévy white noise as in (1.6). Furthermore, define

$$\begin{aligned} \tau _N=\inf \left\{ t\in [0,T] :J\left( [0,t]\times [0,\pi ]\times [-N,N]^c\right) \ne 0\right\} ,\quad N\in {\mathbb {N}}, \end{aligned}$$
(2.2)

with the convention \(\inf \emptyset = +\infty \). Then \((\tau _N)_{N\geqslant 1}\) is an increasing sequence of stopping times such that, almost surely, \(\tau _N>0\) for every N and \(\tau _N = +\infty \) for all sufficiently large N. In addition, up to modifications, (1.1) has a mild solution u satisfying

$$\begin{aligned} \sup _{(t,x)\in [0,T]\times [0,\pi ]}{\mathbb {E}}\left[ |u(t,x)|^p \mathbb {1}_{t\leqslant \tau _N}\right] <+\infty , \end{aligned}$$
(2.3)

for any \(0<p<3\) and \(N\in {\mathbb {N}}\). Furthermore, up to modifications, this solution is unique among all predictable random fields that satisfy (2.3).

Proof

Since \([0,\pi ]\) is a bounded interval and \(\nu \) assigns finite mass to \([-N,N]^c\) for every \(N\geqslant 1\), almost surely, there are only finitely many jumps of size larger than N in \([0,T]\times [0,\pi ]\). This immediately implies the statements about \((\tau _N)_{N\geqslant 1}\). Next, by [3, (B.5)], we know that \(G(t;x,y)\leqslant Cg (t,x-y)\) for any \((t,x,y)\in [0,T]\times [0,\pi ]^2\), with g as in (1.4). Consequently, (1) to (4) of Assumption B of [11] are satisfied, and we can apply [11, Theorem 3.5] to obtain the existence of a unique mild solution to (1.1) satisfying (2.3) for all \(p\in (0,2]\). In order to extend this to all \(p\in (2,3)\), we notice that the only step in the proof of [11, Theorem 3.5] that uses \(p\leqslant 2\) is the moment estimate (6.9) given in [11, Lemma 6.1(2)] with respect to the martingale part \(L^M\). We now explain how this estimate can be extended to exponents \(2< p<3\). For predictable processes \(\phi _1\) and \(\phi _2\), we can use [28, Theorem 1] to get the upper bound

$$\begin{aligned}&{\mathbb {E}}\left[ \left| \int _0^t\int _{0}^\pi \int _{|z|\leqslant N} G(t-s;x,y)\left( \sigma (\phi _1(s,y))-\sigma (\phi _2(s,y))\right) \,{\tilde{J}} (\mathrm {d}s,\mathrm {d}y,\mathrm {d}z)\right| ^p \right] \nonumber \\&\quad \leqslant C{\mathbb {E}}\left[ \left( \int _0^t \int _0^\pi \int _{|z|\leqslant N} |G(t-s;x,y)|^2 |\phi _1(s,y)-\phi _2(s,y)|^2 |z|^2 \,\mathrm {d}s\,\mathrm {d}y \,\nu (\mathrm {d}z) \right) ^{\frac{p}{2}}\right] \nonumber \\&\qquad +\,C{\mathbb {E}}\left[ \int _0^t \int _0^\pi \int _{|z|\leqslant N} |G(t-s;x,y)|^p |\phi _1(s,y)-\phi _2(s,y)|^p |z|^p \,\mathrm {d}s\,\mathrm {d}y \,\nu (\mathrm {d}z) \right] .\qquad \quad \end{aligned}$$
(2.4)

Using Hölder’s inequality with respect to the measure \(|G(t-s;x,y)|^2\,\mathrm {d}s\, \mathrm {d}y\), the first term is further bounded by

$$\begin{aligned}&C_N\left( \int _0^t\int _{0}^\pi |G(t-s;x,y)|^2\,\mathrm {d}s\,\mathrm {d}y \right) ^{\frac{p}{2}-1}\\&\quad \times \int _0^t \int _0^\pi |G(t-s;x,y)|^2 \, {\mathbb {E}}[|\phi _1(s,y)-\phi _2(s,y)|^p]\,\mathrm {d}s\,\mathrm {d}y \end{aligned}$$

Since \(G(t;x,y)\leqslant Cg (t,x-y)\) for any \((t,x,y)\in [0,T]\times [0,\pi ]^2\), and \(\int _0^T \int _{\mathbb {R}}g^2(t,x) \,\mathrm {d}t\,\mathrm {d}x<+\infty \), we obtain the following estimate from (2.4) and the simple inequality \(|x|^2 \leqslant |x|+|x|^p\) for all \(p\geqslant 2\):

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\left[ \left| \int _0^t\int _{0}^\pi \int _{|z|\leqslant N} G(t-s;x,y)\left( \sigma (\phi _1(s,y))-\sigma (\phi _2(s,y))\right) \,{\tilde{J}}(\mathrm {d}s,\mathrm {d}y,\mathrm {d}z)\right| ^p \right] \\&\quad \leqslant C_N\int _0^t \int _{0}^\pi (|G(t-s;x,y)|+|G(t-s;x,y)|^p){\mathbb {E}}[|\phi _1(s,y)-\phi _2(s,y)|^p] \,\mathrm {d}s\,\mathrm {d}y. \end{aligned} \end{aligned}$$

Since \(\int _0^T \int _{\mathbb {R}} g^p(t,x)\,\mathrm {d}t\,\mathrm {d}x < +\infty \) for \(p<3\) (see e.g. [11, (3.20)]), this is exactly the extension of [11, Lemma 6.1(2)] needed to complete the proof of [11, Theorem 3.5]. \(\square \)
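For completeness, the integrability threshold used in the last step can be checked directly from (1.4) in dimension one (the computation below is ours; compare [11, (3.20)]):

```latex
$$\begin{aligned} \int _0^T\int _{\mathbb {R}} g^p(t,x)\,\mathrm {d}x\,\mathrm {d}t = \int _0^T (4\pi t)^{-\frac{p}{2}} \int _{\mathbb {R}} e^{-\frac{p|x|^2}{4t}}\,\mathrm {d}x \,\mathrm {d}t = p^{-\frac{1}{2}} \int _0^T (4\pi t)^{\frac{1-p}{2}}\,\mathrm {d}t, \end{aligned}$$
```

and the last integral is finite if and only if \(\frac{p-1}{2}<1\), that is, \(p<3\).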

2.1.1 The fractional Sobolev spaces \(H_r([0,\pi ])\)

For any function \(f\in L^2([0,\pi ])\), we can define its Fourier sine coefficients

$$\begin{aligned} a_n(f)=\sqrt{\frac{2}{\pi }} \int _0^\pi f(x) \sin (nx) \,\mathrm {d}x, \quad n\in {\mathbb {N}}. \end{aligned}$$
(2.5)

Then, by Parseval’s identity, \(\Vert f\Vert ^2_{L^2([0,\pi ])}=\sum _{n\geqslant 1}a_n(f)^2\). For any \(r\geqslant 0\), we define

$$\begin{aligned} H_r([0,\pi ]) :=\left\{ f\in L^2([0,\pi ]):\Vert f \Vert ^2_{H_r} := \sum _{n\geqslant 1}\left( 1+n^2\right) ^r a_n(f)^2 <+\infty \right\} . \end{aligned}$$

This is a Hilbert space for the inner product \(\left\langle f, h \right\rangle _{H_r}:= \sum _{n\geqslant 1} \left( 1+n^2\right) ^r a_n(f) a_n(h)\). For \(r>0\), we define \(H_{-r}([0,\pi ])\) as the dual space of \(H_{r}([0,\pi ])\), that is, the space of continuous linear functionals on \(H_{r}([0,\pi ])\). Then \(H_{-r}([0,\pi ])\) is isomorphic to the space of sequences \(b=(b_n)_{n\geqslant 1}\) such that

$$\begin{aligned} \Vert b \Vert ^2_{H_{-r}}:= \sum _{n\geqslant 1}\left( 1+n^2\right) ^{-r} b_n^2 <+\infty \, . \end{aligned}$$

More precisely, for \(r>0\) and \(\tilde{f}\in H_{-r}([0,\pi ])\), the coefficients \(b_n\) are given by \(b_n= \tilde{f} \big (\sqrt{\frac{2}{\pi }}\sin (n\cdot )\big )\). Then, \(\Vert \tilde{f}\Vert _{H_{-r}} = \Vert b \Vert _{H_{-r}}\) and the duality between \(H_{-r}([0,\pi ])\) and \(H_{r}([0,\pi ])\) is given by

$$\begin{aligned} \left\langle b, h \right\rangle =\sum _{n\geqslant 1} b_n a_n(h)\leqslant \Vert b\Vert _{H_{-r}} \, \Vert h\Vert _{H_r}. \end{aligned}$$

For example, it is easy to check that \(\delta _x \in H_r([0,\pi ])\) for any \(x\in (0,\pi )\) and \(r<-\frac{1}{2}\). Indeed, \(\delta _x(\sin (n\cdot ))=\sin (nx)\), and for any \(r<-\frac{1}{2}\),

$$\begin{aligned} \Vert \delta _x\Vert _{H_{r}}^2= \frac{2}{\pi }\sum _{n\geqslant 1}\left( 1+n^2\right) ^{r} \sin ^2(nx) \leqslant \frac{2}{\pi }\sum _{n\geqslant 1}\left( 1+n^2\right) ^{r}<+\infty \, .\end{aligned}$$
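The dichotomy at \(r=-\frac{1}{2}\) is visible numerically. In the sketch below (Python with numpy; the point \(x_0=1\) and the truncation levels are arbitrary choices of ours), the partial sums of \(\Vert \delta _{x_0}\Vert _{H_r}^2\) stabilize for \(r=-\frac{3}{4}\) but keep growing for \(r=-\frac{1}{2}\), where \(\sin ^2(nx_0)\) averages to \(\frac{1}{2}\) and the series behaves like a harmonic series:

```python
import numpy as np

def delta_norm_sq(x0, r, n_max):
    # Partial sum of ||delta_{x0}||_{H_r}^2 = (2/pi) sum_n (1+n^2)^r sin^2(n x0).
    n = np.arange(1, n_max + 1, dtype=float)
    return float((2.0 / np.pi) * np.sum((1.0 + n ** 2) ** r * np.sin(n * x0) ** 2))

x0 = 1.0  # any fixed point in (0, pi)

# r = -3/4 < -1/2: the partial sums stabilise (the series converges).
incr_conv = delta_norm_sq(x0, -0.75, 200000) - delta_norm_sq(x0, -0.75, 100000)

# r = -1/2: the series diverges like a harmonic series, so doubling the
# truncation level still adds a fixed positive amount.
incr_div = delta_norm_sq(x0, -0.5, 200000) - delta_norm_sq(x0, -0.5, 100000)
```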

2.1.2 Existence of a càdlàg solution in \(H_r([0,\pi ])\) with \(r<-\frac{1}{2}\)

In order to motivate why we consider fractional Sobolev spaces \(H_r([0,\pi ])\) with \(r<-\frac{1}{2}\), we start with a special case. Suppose that \(b=0\), that \(u_0\equiv 0\) (so that \(V\equiv 0\)), and that \(\nu \) is a symmetric measure with \(\nu ({\mathbb {R}})<+\infty \). Then we can rewrite \(L=\sum _{i=1}^{N_T} Z_i \delta _{(T_i, X_i)}\), where \((T_i,X_i,Z_i)\) are the atoms of the Poisson random measure J, and

$$\begin{aligned} u(t,x)=\sum _{i= 1}^{N_T} G(t-T_i; x, X_i) \sigma ( u(T_i, X_i) )Z_i. \end{aligned}$$

In this case, it suffices to check that, for each fixed \(i\geqslant 1\), \(t\mapsto G(t-T_i; \cdot , X_i)\) is càdlàg in \(H_r([0,\pi ])\). Using the series representation (2.1), we immediately see that the function \(x\mapsto G(t-T_i; x,X_i)\) belongs to \(H_r([0,\pi ])\) if and only if

$$\begin{aligned} \sum _{k\geqslant 1}(1+k^2)^r \sin (kX_i)^2 e^{-2k^2 (t-T_i)}\mathbb {1}_{t\geqslant T_i} <+\infty . \end{aligned}$$

This is the case for any \(r\in {\mathbb {R}}\) if \(t\ne T_i\). However, for \(t\downarrow T_i\), we have to restrict to \(r<-\frac{1}{2}\). Indeed, for the càdlàg property, the only point where a problem might appear is at \(t=T_i\). At this point, the existence of a left limit is obvious since \(G(t-T_i; \cdot , X_i)=0\) for any \(t<T_i\). For right-continuity, we use the fact that \((1-e^{-k^2 h})^2\leqslant k^{2\varepsilon }h^\varepsilon \) for any \(0<\varepsilon < -\frac{1}{2} -r\), so

$$\begin{aligned} \left\| G(h; \cdot , X_i)-G(0; \cdot , X_i)\right\| _{H_r}^2&=\frac{2}{\pi }\sum _{k\geqslant 1}(1+k^2)^r \sin (kX_i)^2 \left( 1-e^{-k^2 h}\right) ^2\\&\leqslant \frac{2}{\pi }\sum _{k\geqslant 1}(1+k^2)^r k^{2\varepsilon }h^\varepsilon \leqslant C h^\varepsilon \rightarrow 0 \quad \text {as}~h\rightarrow 0. \end{aligned}$$

Therefore, \(t\mapsto u(t,\cdot )\) is càdlàg in \(H_r([0,\pi ])\). For the general case, we first treat the drift term.
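The right-continuity estimate above can be illustrated numerically. In the sketch below (Python with numpy; \(x_0\), r and the truncation level are arbitrary choices of ours), the truncated series for \(\Vert G(h;\cdot ,x_0)-G(0;\cdot ,x_0)\Vert _{H_r}^2\) decreases to zero as \(h\downarrow 0\), in line with the bound \(Ch^\varepsilon \):

```python
import numpy as np

def gap_sq(h, x0, r, kmax=200000):
    # Truncated series for ||G(h; ., x0) - G(0; ., x0)||_{H_r}^2
    #   = (2/pi) sum_k (1+k^2)^r sin^2(k x0) (1 - exp(-k^2 h))^2.
    k = np.arange(1, kmax + 1, dtype=float)
    return float(
        (2.0 / np.pi)
        * np.sum((1.0 + k ** 2) ** r * np.sin(k * x0) ** 2 * (1.0 - np.exp(-(k ** 2) * h)) ** 2)
    )

x0, r = 1.3, -0.75  # any x0 in (0, pi) and any r < -1/2
gaps = [gap_sq(h, x0, r) for h in (1e-1, 1e-2, 1e-3, 1e-4)]
# each term decreases pointwise in h, so the gaps decrease monotonically
```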

Lemma 2.2

Assume that u is the unique solution to (1.1) as in Proposition 2.1. Then,

$$\begin{aligned} F(t,x)=V(t,x)+\int _0^t\int _0^\pi G(t-s;x,y)\sigma (u(s,y))\,\mathrm {d}s\,\mathrm {d}y \end{aligned}$$

is jointly continuous in \((t,x)\in [0,T]\times [0,\pi ]\). In particular, for every \(r\leqslant 0\), the process \(t\mapsto F(t,\cdot )\) is continuous in \(H_r([0,\pi ])\).

Proof

The continuity of V is standard, and with (2.1), the integral term in F equals

$$\begin{aligned} \frac{2}{\pi } \sum _{k\geqslant 1} \sin (kx)\int _0^t\int _0^\pi e^{-k^2(t-s)} \sin (ky)\sigma (u(s,y))\,\mathrm {d}s\,\mathrm {d}y. \end{aligned}$$

Each term in this series is jointly continuous in \((t,x)\). Hence, it suffices to show the uniform convergence of the series. Using Hölder’s inequality and the fact that u has uniformly bounded moments of any order \(p<3\), we obtain this from

$$\begin{aligned}&{\mathbb {E}}\left[ \sum _{k\geqslant 1} \sup _{(t,x)\in [0,T]\times [0,\pi ]} \left| \sin (kx) \int _0^t\int _0^\pi e^{-k^2(t-s)} \sin (ky)\sigma (u(s,y))\,\mathrm {d}s\,\mathrm {d}y \right| \right] \\&\quad \leqslant C \sum _{k\geqslant 1}{\mathbb {E}}\left[ \sup _{(t,x)\in [0,T]\times [0,\pi ]} \left( \int _0^t e^{-\frac{5}{3} k^2(t-s)} \, \mathrm {d}s \right) ^{\frac{3}{5}} \left( \int _0^t\int _0^\pi |\sigma (u(s,y))|^{\frac{5}{2}}\,\mathrm {d}s\,\mathrm {d}y\right) ^{\frac{2}{5}}\right] \\&\quad \leqslant C\sum _{k\geqslant 1}\left( \int _0^T e^{-\frac{5}{3} k^2s}\, \mathrm {d}s\right) ^{\frac{3}{5}}\left( \int _0^T\int _0^\pi {\mathbb {E}}[|\sigma (u(s,y))|^\frac{5}{2}]\,\mathrm {d}s\,\mathrm {d}y \right) ^{\frac{2}{5}} \leqslant C \sum _{k\geqslant 1} {k^{-\frac{6}{5}}} < +\infty . \end{aligned}$$

Then, to prove the continuity of \(t\mapsto F(t,\cdot )\) in \(H_r([0,\pi ])\), it suffices to show the continuity in \(L^2([0,\pi ])\) because \(L^2([0,\pi ]) \hookrightarrow H_r([0,\pi ])\) (that is, \(L^2([0,\pi ])\) is continuously embedded in \(H_r([0,\pi ])\)). The continuity in \(L^2([0,\pi ])\) in turn follows from the fact that F is uniformly continuous on the compact domain \([0,T]\times [0,\pi ]\). \(\square \)
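The summability used in the last display can be made explicit: \(\big (\int _0^T e^{-\frac{5}{3}k^2 s}\,\mathrm {d}s\big )^{\frac{3}{5}} \leqslant \big (\frac{3}{5k^2}\big )^{\frac{3}{5}} = Ck^{-\frac{6}{5}}\), and \(\sum _{k\geqslant 1}k^{-\frac{6}{5}}<+\infty \). A quick numerical sanity check (Python with numpy; T and the truncation level are arbitrary choices of ours):

```python
import numpy as np

T = 1.0  # any fixed time horizon
k = np.arange(1.0, 100001.0)

# Closed form: int_0^T exp(-(5/3) k^2 s) ds = (3/(5 k^2)) (1 - exp(-(5/3) k^2 T)),
# so its (3/5)-th power is at most (3/(5 k^2))^(3/5) = C k^(-6/5).
exact = ((3.0 / (5.0 * k ** 2)) * (1.0 - np.exp(-(5.0 / 3.0) * (k ** 2) * T))) ** 0.6
bound = (3.0 / 5.0) ** 0.6 * k ** -1.2

partial_sum = float(np.sum(exact))  # partial sum of the convergent series
```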

Proposition 2.3

Let L be a pure jump Lévy white noise, and let \(\sigma \) be a bounded and Lipschitz function. Let u be the mild solution to the stochastic heat equation (1.1). Then, for any \(r<-\frac{1}{2}\), the stochastic process \((u(t, \cdot ))_{t\in [0,T]}\) has a càdlàg version in \(H_r([0,\pi ])\).

Proof

For \(N\in {\mathbb {N}}\), consider the truncated noise

$$\begin{aligned} L_N(\mathrm {d}t,\mathrm {d}x)= & {} b_N\,\mathrm {d}t\,\mathrm {d}x + \int _{|z|\leqslant N} z\,{\tilde{J}}(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z),\nonumber \\ b_N= & {} b-\int _{1<|z|\leqslant N} z\,\nu (\mathrm {d}z), \end{aligned}$$
(2.6)

as well as the mild solution \(u_N\) to (1.1) driven by \(L_N\), that is,

$$\begin{aligned} u_N(t,x)= & {} V(t,x)+\int _0^t \int _{0}^\pi G(t-s; x,y) \sigma (u_N(s,y))\, L_N(\mathrm {d}s, \mathrm {d}y) \end{aligned}$$
(2.7)

for \((t, x) \in [0, T] \times [0, \pi ]\). Then, by definition, we have \(L_N = L\) and therefore also \(u=u_N\) on the event \(\{T\leqslant \tau _N\}\), where \(\tau _N\) was defined in (2.2). Since, almost surely, \(\tau _N=+\infty \) for sufficiently large N, we have stationary convergence of the processes \(u_N\) in (2.7) to u (that is, almost surely, for large enough N, \(u_N(t,x)=u(t,x)\) for all \((t,x)\in [0,T]\times [0,\pi ]\)). As we are interested in sample path properties of the mild solution to (1.1), and these properties are identical to those of \(u_N\) for sufficiently large N, it is enough to consider \(u_N\) instead of u in the following. The value of the parameter N has no importance in our study, so we take \(N=1\) for simplicity and drop the dependence on N. Therefore, it suffices to consider the solution to the integral equation

$$\begin{aligned} u(t,x)= & {} V(t,x)+{ b\int _0^t \int _0^\pi G(t-s;x,y) \sigma (u(s,y)) \, \mathrm {d}s \, \mathrm {d}y}\nonumber \\&+ \int _0^t \int _0^\pi G(t-s;x,y) \sigma (u(s,y)) \,L^M(\mathrm {d}s, \mathrm {d}y), \end{aligned}$$
(2.8)

in other words, to assume that all jumps of L are bounded by 1. Furthermore, by Lemma 2.2, it only remains to consider the process

$$\begin{aligned} u^M(t,x)=\int _0^t \int _0^\pi G(t-s;x,y) \sigma (u(s,y)) \,L^M(\mathrm {d}s, \mathrm {d}y). \end{aligned}$$
(2.9)

For this purpose, we need to calculate the Fourier sine coefficients of \(u^M(t,\cdot )\), as defined in (2.5). To lighten notation, in what follows, we will denote these coefficients by \(a_k(t)\). Then, by definition, for every \(k \geqslant 1\),

$$\begin{aligned} a_k(t)= & {} \sqrt{\frac{2}{\pi }} \int _0^\pi \left( \int _{0}^t \int _0^\pi \int _{|z|\leqslant 1} \sin (kx) G(t-s;x,y) \sigma (u(s,y))z \,{\tilde{J}}( \mathrm {d}s, \mathrm {d}y, \mathrm {d}z) \right) \mathrm {d}x. \end{aligned}$$

We want to exchange the stochastic integral and the Lebesgue integral, and because all involved terms are square-integrable, Theorem A.3 with \(p=2\) allows us to do so. Therefore,

$$\begin{aligned} \begin{aligned} a_k(t)&= \int _{0}^t \int _0^\pi \int _{|z|\leqslant 1} \sqrt{\frac{2}{\pi }} \left( \int _0^\pi \sin (kx) G(t-s;x,y) \,\mathrm {d}x \right) \sigma (u(s,y))z \,{\tilde{J}}(\mathrm {d}s, \mathrm {d}y, \mathrm {d}z)\\&= \sqrt{\frac{2}{\pi }} e^{-k^2t} \int _{0}^t \int _0^\pi \sigma (u(s,y)) \sin (ky) e^{k^2 s} \,L^M(\mathrm {d}s, \mathrm {d}y ). \end{aligned} \end{aligned}$$
(2.10)

We gather some moment estimates for the family of integrals

$$\begin{aligned} I_a^b(k):=\int _a^b \int _0^\pi \sin (ky) e^{k^2 s} \sigma (u(s,y)) \,L^M(\mathrm {d}s, \mathrm {d}y), \end{aligned}$$
(2.11)

where \(0 \leqslant a < b\leqslant T\). Since \(\sigma \) is bounded, we can estimate the second and fourth moments of \(I_a^b(k)\) using [28, Theorem 1]:

$$\begin{aligned} {\mathbb {E}}\left[ I_a^b(k)^2\right]\leqslant & {} C \int _a^b \int _0^\pi \sin ^2 (ky) e^{2k^2s} {\mathbb {E}}\left[ \left| \sigma (u(s,y)) \right| ^2 \right] \,\mathrm {d}s \,\mathrm {d}y \nonumber \\\leqslant & {} C \int _a^b e^{2k^2s} \,\mathrm {d}s= C\frac{ e^{2k^2b}- e^{2k^2a}}{2k^2},\nonumber \\ {\mathbb {E}}\left[ I_a^b(k)^4\right]\leqslant & {} C \left( {\mathbb {E}}\left[ \left( \int _a^b \int _0^\pi \sin ^2 (ky) e^{2k^2s} \left| \sigma (u(s,y)) \right| ^2 \,\mathrm {d}s\, \mathrm {d}y\right) ^2 \right] \right. \nonumber \\&\left. + \int _a^b \int _0^\pi \sin ^4 (ky) e^{4k^2s} {\mathbb {E}}\left[ \left| \sigma (u(s,y)) \right| ^4 \right] \, \mathrm {d}s \,\mathrm {d}y \right) \nonumber \\\leqslant & {} C\left( \frac{ (e^{2k^2b}- e^{2k^2a})^2}{4k^4} +\frac{ e^{4k^2b}- e^{4k^2a}}{4k^2}\right) , \end{aligned}$$
(2.12)

where C also depends on \(\int _{|z|\leqslant 1} z^2\, \nu (\mathrm {d}z)\) and \(\int _{|z|\leqslant 1} z^4\, \nu (\mathrm {d}z)\), both of which are finite.

Also, for \(0\leqslant a<b\leqslant c <d \leqslant T\), still assuming that \(\sigma \) is bounded,

$$\begin{aligned} {\mathbb {E}}\left[ \left( I_a^b(k) I_c^d(j)\right) ^2 \right]= & {} {\mathbb {E}}\left[ \left( \int _c^d \int _0^\pi I_a^b(k) \sin (jy) e^{j^2 s} \sigma (u(s,y)) \,L^M(\mathrm {d}s, \mathrm {d}y)\right) ^2 \right] \nonumber \\\leqslant & {} C \int _c^d \int _0^\pi {\mathbb {E}}\left[ I_a^b(k) ^2 \sigma (u(s,y))^2 \right] \sin ^2(jy) e^{2j^2 s} \,\mathrm {d}s \,\mathrm {d}y\nonumber \\\leqslant & {} C\left( \int _a^b e^{2k^2 s} \,\mathrm {d}s\right) \left( \int _c^d e^{2j^2 s} \,\mathrm {d}s \right) \nonumber \\= & {} C \frac{e^{2k^2b}-e^{2k^2 a}}{2k^2} \frac{e^{2j^2d}-e^{2j^2 c}}{2j^2}, \end{aligned}$$
(2.13)

where we used the fact that \(I_a^b(k)\) is \(\mathcal F_c\)-measurable, and (2.12) in the second inequality.

We will use [18, Chapter III, Sect. 4, Theorem 1] to show the existence of a càdlàg version of \(t\mapsto u^M(t,\cdot )\). By [18, Chapter III, Sect. 2, Theorem 1], u has a separable version, which is, because of [11, Theorem 4.7] and [3, Lemma B.1], continuous in \(L^2(\Omega )\), and therefore \(t\mapsto u^M(t,\cdot )\) is continuous in \(L^2(\Omega )\) as a process with values in \(L^2([0,\pi ])\) (and thus in \(H_r([0,\pi ])\) since \(r<-\frac{1}{2}\)). Then it suffices to show that for any \(t\in [0,T]\), \(u^M(t, \cdot ) \in H_r([0,\pi ])\), and that for some \(\delta >0\),

$$\begin{aligned} {\mathbb {E}}\left[ \left\| u^M(t+h, \cdot )-u^M(t,\cdot )\right\| _{H_r}^2 \left\| u^M(t-h, \cdot )-u^M(t,\cdot )\right\| _{H_r}^2 \right] \leqslant C h^{1+\delta } \end{aligned}$$

for any \(h\in (0,1)\). By (2.12), \({\mathbb {E}}\left[ a_k^2(t) \right] \leqslant CT\), so for \(r<-\frac{1}{2}\), we have

$$\begin{aligned} \sum _{k\geqslant 1} (1+k^2)^r a_k^2(t) <+\infty \end{aligned}$$

almost surely, and \(u^M(t,\cdot )\in H_r([0,\pi ])\) is proved. Next,

$$\begin{aligned} \Vert u^M(t\pm h, \cdot )-u^M(t,\cdot )\Vert _{H_r}^2=\sum _{k\geqslant 1} (1+k^2)^r \left( a_k(t\pm h)-a_k(t)\right) ^2, \end{aligned}$$

so using (2.10),

$$\begin{aligned} a_k(t+h)-a_k(t)= & {} -\sqrt{\frac{2}{\pi }} e^{-k^2t} \left[ \left( 1-e^{-k^2h}\right) I_0^t(k)-e^{-k^2h} I_t^{t+h}(k) \right] , \\ a_j(t-h)-a_j(t)= & {} -\sqrt{\frac{2}{\pi }}e^{-j^2(t-h)} \left[ \left( e^{-j^2h}-1\right) I_0^{t-h}(j)+ e^{-j^2h} I_{t-h}^{t}(j) \right] . \end{aligned}$$

Therefore, using the classical inequality \((a+b)^2\leqslant 2 (a^2+b^2)\),

$$\begin{aligned}&\left\| u^M(t+h, \cdot )-u^M(t,\cdot )\right\| _{H_r}^2 \left\| u^M(t-h, \cdot )-u^M(t,\cdot )\right\| _{H_r}^2 \nonumber \\&\quad \leqslant \frac{4}{\pi ^2} \sum _{k,j \geqslant 1} (1+k^2)^r(1+j^2)^r \left( |A_1(j,k)|+|A_2(j,k)|+|A_3(j,k)|+|A_4(j,k)| \right) ^2\nonumber \\&\quad \leqslant C \sum _{k,j \geqslant 1} (1+k^2)^r(1+j^2)^r \left( A_1(j,k)^2+A_2(j,k)^2+A_3(j,k)^2+A_4(j,k)^2 \right) ,\nonumber \\ \end{aligned}$$
(2.14)

for some constant C, where

$$\begin{aligned} A_1(j,k):= & {} e^{-k^2t}e^{-j^2(t-h)}\left( 1-e^{-k^2h}\right) \left( e^{-j^2h}-1\right) I_0^t(k)I_0^{t-h}(j),\\ A_2(j,k):= & {} e^{-k^2t}e^{-j^2(t-h)}\left( 1-e^{-k^2h}\right) e^{-j^2h}I_0^t(k)I_{t-h}^t(j),\\ A_3(j,k):= & {} e^{-k^2t}e^{-j^2(t-h)}e^{-k^2h}\left( e^{-j^2h}-1\right) I_t^{t+h}(k)I_0^{t-h}(j),\\ A_4(j,k):= & {} e^{-k^2t}e^{-j^2(t-h)}e^{-k^2h}e^{-j^2h}I_t^{t+h}(k)I_{t-h}^t(j). \end{aligned}$$

We treat each of the four terms separately.

\(\underline{A_1(j,k):}\)

$$\begin{aligned} {\mathbb {E}}\left[ A_1(j,k)^2 \right]= & {} e^{-2k^2t}e^{-2j^2(t-h)}\left( 1-e^{-k^2h}\right) ^2\left( e^{-j^2h}-1\right) ^2{\mathbb {E}}\left[ \left( I_0^t(k)I_0^{t-h}(j)\right) ^2 \right] \\\leqslant & {} C e^{-2k^2t}e^{-2j^2(t-h)}\left( 1-e^{-k^2h}\right) ^2\left( e^{-j^2h}-1\right) ^2 \\&\times {\mathbb {E}}\left[ \left( I_0^{t-h}(k)I_0^{t-h}(j)\right) ^2 + \left( I_{t-h}^t(k)I_0^{t-h}(j)\right) ^2 \right] \\=: & {} {\tilde{A}}_1(j,k) + {\tilde{A}}_2(j,k). \end{aligned}$$

By (2.13), we can write

$$\begin{aligned} {\mathbb {E}}\left[ \left( I_{t-h}^t(k)I_0^{t-h}(j)\right) ^2 \right]\leqslant & {} C\, \frac{ e^{2j^2(t-h)}-1 }{2j^2}\, \frac{ e^{2k^2t}-e^{2k^2(t-h)}}{2k^2} \\\leqslant & {} Ce^{2j^2(t-h)}\, \frac{1-e^{-2j^2(t-h)}}{2j^2} e^{2k^2t}\, \frac{1-e^{-2k^2h}}{2k^2}\\\leqslant & {} C\, \frac{e^{2k^2t}e^{2j^2(t-h)}}{2j^2}\, h, \end{aligned}$$

where we used \(1-e^{-2k^2h} \leqslant 2k^2h\) and \(1-e^{-2j^2(t-h)} \leqslant 1\) in the last inequality. Since \((1-e^{-k^2h})^2\leqslant 1\) and \((1-e^{-j^2h})^2\leqslant j^2 h\), we deduce that

$$\begin{aligned} {\tilde{A}}_2(j,k)\leqslant C h^2. \end{aligned}$$
(2.15)

Also, by the Cauchy–Schwarz inequality,

$$\begin{aligned} {\mathbb {E}}\left[ ( I_{0}^{t-h}(k)I_0^{t-h}(j)) ^2 \right] \leqslant {\mathbb {E}}\left[ I_{0}^{t-h}(k)^4 \right] ^{\frac{1}{2}} {\mathbb {E}}\left[ I_{0}^{t-h}(j)^4 \right] ^{\frac{1}{2}}. \end{aligned}$$
(2.16)

By (2.12) and subadditivity of the square root,

$$\begin{aligned} {\mathbb {E}}\left[ I_{0}^{t-h}(k)^4 \right] ^{\frac{1}{2}}\leqslant & {} C\Bigg (\frac{e^{2k^2(t-h)}-1}{2k^2}+ \Bigg ( \frac{e^{4k^2(t-h)}-1}{4k^2} \Bigg )^{\frac{1}{2}} \Bigg )\nonumber \\\leqslant & {} C e^{2k^2t} \Bigg (\frac{e^{-2k^2h}-e^{-2k^2t}}{2k^2}+ \Bigg ( \frac{e^{-4k^2h}-e^{-4k^2t}}{4k^2} \Bigg )^{\frac{1}{2}} \Bigg )\nonumber \\\leqslant & {} C e^{2k^2t} \left( \frac{1}{2k^2}+ \frac{1}{2k} \right) . \end{aligned}$$
(2.17)

Let \(0<\delta <\frac{3}{2}\), to be chosen later. Then, multiplying each term by \((1-e^{-k^2h})^2\) and using \((1-e^{-k^2h})^2\leqslant k^{2} h\) for the first term, and \((1-e^{-k^2h})^2=(1-e^{-k^2h})^{\frac{1}{2} +\delta }(1-e^{-k^2h})^{\frac{3}{2} -\delta } \leqslant k^{1+2\delta } h^{\frac{1}{2}+\delta }\) for the second term of the sum, we get

$$\begin{aligned} \left( 1-e^{-k^2h}\right) ^2 {\mathbb {E}}\left[ I_{0}^{t-h}(k)^4 \right] ^{\frac{1}{2}}\leqslant C e^{2k^2t} \left( h + k^{2\delta } h^{\frac{1}{2} +\delta } \right) . \end{aligned}$$
(2.18)

A similar calculation yields

$$\begin{aligned} \left( 1-e^{-j^2h}\right) ^2{\mathbb {E}}\left[ I_{0}^{t-h}(j)^4 \right] ^{\frac{1}{2}} \leqslant C e^{2j^2(t-h)} \left( h + j^{2\delta } h^{\frac{1}{2} +\delta } \right) . \end{aligned}$$
(2.19)

Then, we combine (2.16), (2.18) and (2.19) to obtain

$$\begin{aligned} {\tilde{A}}_1(j,k)\leqslant C \left( h^2+j^{2\delta }k^{2\delta } h^{1+2\delta }\right) . \end{aligned}$$
(2.20)

Therefore, (2.15) and (2.20) give

$$\begin{aligned} {\mathbb {E}}\left[ A_1(j,k)^2 \right] \leqslant C\left( h^2+ j^{2\delta }k^{2\delta } h^{1+2\delta }\right) . \end{aligned}$$
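The elementary bounds on \((1-e^{-x})^2\) used here and below (with \(x=k^2h\) or \(x=j^2h\)) all follow from \(1-e^{-x}\leqslant \min (1,x)\), which gives \((1-e^{-x})^2\leqslant x^\theta \) for every \(0<\theta \leqslant 2\). A minimal numerical check (Python with numpy; the grid and exponents are arbitrary choices of ours):

```python
import numpy as np

x = np.logspace(-8.0, 8.0, 2001)  # covers k^2 h across many scales
lhs = (1.0 - np.exp(-x)) ** 2

# Since 1 - exp(-x) <= min(1, x): for x <= 1, x^2 <= x^theta, and for
# x > 1, 1 <= x^theta, so (1 - exp(-x))^2 <= x^theta whenever 0 < theta <= 2.
ok = {theta: bool(np.all(lhs <= x ** theta + 1e-12)) for theta in (0.5, 0.75, 1.0, 2.0)}
```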

\(\underline{A_2(j,k):}\) We treat this term in a similar way to \(A_1(j,k)\):

$$\begin{aligned} {\mathbb {E}}\left[ A_2(j,k)^2\right]= & {} e^{-2k^2t}e^{-2j^2(t-h)}\left( 1-e^{-k^2h}\right) ^2e^{-2j^2h}{\mathbb {E}}\left[ \left( I_0^t(k)I_{t-h}^t(j)\right) ^2 \right] \\\leqslant & {} C e^{-2k^2t}e^{-2j^2(t-h)}\left( 1-e^{-k^2h}\right) ^2e^{-2j^2h}\\&\times {\mathbb {E}}\left[ \left( I_0^{t-h}(k)I_{t-h}^{t}(j)\right) ^2+ \left( I_{t-h}^t(k)I_{t-h}^t(j)\right) ^2 \right] \\=: & {} B_1(j,k)+ B_2(j,k). \end{aligned}$$

In the same way as for the term \({\tilde{A}}_2(j,k)\), we get

$$\begin{aligned} B_1(j,k)\leqslant Ch^2. \end{aligned}$$
(2.21)

We use the Cauchy–Schwarz inequality to deal with the term \(B_2(j,k)\):

$$\begin{aligned} B_2(j,k)\leqslant C e^{-2k^2t}e^{-2j^2(t-h)}\left( 1-e^{-k^2h}\right) ^2e^{-2j^2h} {\mathbb {E}}\left[ I_{t-h}^{t}(k)^4 \right] ^{\frac{1}{2}} {\mathbb {E}}\left[ I_{t-h}^{t}(j)^4 \right] ^{\frac{1}{2}}. \end{aligned}$$

As in (2.17), we get

$$\begin{aligned} {\mathbb {E}}\left[ I_{t-h}^{t}(j)^4 \right] ^{\frac{1}{2}}&\leqslant C e^{2j^2t} \Bigg (\frac{1-e^{-2j^2h}}{2j^2}+ \Bigg ( \frac{1-e^{-4j^2h}}{4j^2} \Bigg )^{\frac{1}{2}} \Bigg ) \leqslant C e^{2j^2t} \left( h+ \sqrt{h} \right) , \end{aligned}$$

and similarly \({\mathbb {E}}\left[ I_{t-h}^{t}(k)^4 \right] ^{\frac{1}{2}} \leqslant C e^{2k^2t} \left( h+ \sqrt{h} \right) \). Also, for \(0<\delta <1\), since \((1-e^{-k^2h})^2\leqslant k^{2\delta }h^\delta \),

$$\begin{aligned} B_2(j,k)\leqslant C k^{2\delta } h^{\delta } \left( h+ \sqrt{h} \right) ^2 \leqslant C k^{2\delta } h^{1+\delta }. \end{aligned}$$
(2.22)

By (2.21) and (2.22),

$$\begin{aligned} {\mathbb {E}}\left[ A_2(j,k)^2\right] \leqslant C k^{2\delta }h^{1+\delta }. \end{aligned}$$

\(\underline{A_3(j,k):}\) By (2.13),

$$\begin{aligned} {\mathbb {E}}\left[ A_3(j,k)^2\right]= & {} e^{-2k^2t}e^{-2j^2(t-h)}e^{-2k^2h}\left( e^{-j^2h}-1\right) ^2 {\mathbb {E}}\left[ \left( I_t^{t+h}(k)I_0^{t-h}(j)\right) ^2 \right] \\\leqslant & {} C e^{-2k^2t}e^{-2j^2(t-h)}e^{-2k^2h}\left( e^{-j^2h}-1\right) ^2 \, \frac{e^{2j^2(t-h)}-1}{2j^2} \, \frac{e^{2k^2(t+h)}-e^{2k^2t}}{2k^2}\\\leqslant & {} C \, \left( e^{-j^2h}-1\right) ^2 \, \frac{1-e^{-2j^2(t-h)}}{2j^2} \, \frac{1-e^{-2k^2h}}{2k^2} \leqslant C \, \frac{\left( e^{-j^2h}-1\right) ^2}{2j^2}\, \frac{1-e^{-2k^2h}}{2k^2}. \end{aligned}$$

Then, since \((1-e^{-j^2h})^2 \leqslant j^2h\) and \(1-e^{-2k^2h} \leqslant 2k^2h\), we get

$$\begin{aligned} {\mathbb {E}}\left[ A_3(j,k)^2\right] \leqslant C h^2. \end{aligned}$$

\(\underline{A_4(j,k):}\) Again, by (2.13),

$$\begin{aligned} {\mathbb {E}}\left[ A_4(j,k)^2\right]= & {} e^{-2k^2t}e^{-2j^2(t-h)}e^{-2k^2h}e^{-2j^2h}{\mathbb {E}}\left[ \left( I_t^{t+h}(k)I_{t-h}^t(j)\right) ^2\right] \\\leqslant & {} C e^{-2k^2t}e^{-2j^2(t-h)}e^{-2k^2h}e^{-2j^2h}\, \frac{e^{2j^2t}-e^{2j^2(t-h)}}{2j^2}\, \frac{e^{2k^2(t+h)}-e^{2k^2t}}{2k^2}\\\leqslant & {} C\, \frac{1-e^{-2j^2h}}{2j^2}\, \frac{1-e^{-2k^2h}}{2k^2}. \end{aligned}$$

Therefore, as for the previous term we get

$$\begin{aligned} {\mathbb {E}}\left[ A_4(j,k)^2\right] \leqslant C h^2. \end{aligned}$$

Finally, for every \(r<-\frac{1}{2}\), we can pick \(0<\delta <1\) such that \(r+\delta <-\frac{1}{2}\). Summing the bounds on \(A_1(j,k),\ldots ,A_4(j,k)\) against the weights \((1+j^2)^r(1+k^2)^r\), and noting that \(\sum _{k\geqslant 1} k^{2\delta }(1+k^2)^{r}<+\infty \) because \(r+\delta <-\frac{1}{2}\), we obtain

$$\begin{aligned} {\mathbb {E}}\left[ \Vert u^M(t+h, \cdot )-u^M(t,\cdot )\Vert _{H_r}^2 \Vert u^M(t-h, \cdot )-u^M(t,\cdot )\Vert _{H_r}^2\right] \leqslant C h^{1+\delta }, \end{aligned}$$

so we deduce that \((u^M(t,\cdot ))_{t\geqslant 0}\) has a càdlàg version in \(H_r([0,\pi ])\) for any \(r<-\frac{1}{2}\). \(\square \)

Remark 2.4

The result of Proposition 2.3 is in fact valid for any predictable random field u whose Fourier sine coefficients can be written in the form

$$\begin{aligned} a_k(u(t,\cdot )) = C e^{-k^2t} \int _0^t\int _0^\pi \sin (ky) e^{k^2s} Z(s,y)\, {L(\mathrm {d}s, \mathrm {d}y)}, \end{aligned}$$

where Z is another predictable and bounded random field.

For unbounded \(\sigma \), we deduce the result from Proposition 2.3 via an approximation argument.

Theorem 2.5

Let u be the mild solution to the stochastic heat equation (1.1) constructed in Proposition 2.1. Then, for any \(r<-\frac{1}{2}\), the process \((u(t, \cdot ))_{t\in [0,T]}\) has a càdlàg version in \(H_r([0,\pi ])\).

Remark 2.6

The constraint \(r<-\frac{1}{2}\) in Theorem 2.5 is optimal. This follows from the discussion at the beginning of Sect. 2.1.2 and the fact that the Dirac delta distribution \(\delta _a\), \(a\in (0,\pi )\), does not belong to \(H_s([0,\pi ])\) for any \(s\geqslant -\frac{1}{2}\).
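The claim about \(\delta _a\) can be made concrete through its Fourier sine coefficients \(a_k(\delta _a)=\sqrt{\frac{2}{\pi }}\sin (ka)\): the partial sums of \(\Vert \delta _a\Vert _{H_s}^2=\frac{2}{\pi }\sum _{k\geqslant 1}(1+k^2)^s\sin ^2(ka)\) grow logarithmically at the borderline \(s=-\frac{1}{2}\) (since \(\sin ^2(ka)\) averages to \(\frac{1}{2}\)) and stabilize for \(s<-\frac{1}{2}\). A small illustrative computation:

```python
import math

def partial_Hs_norm_sq(a, s, K):
    """Partial sum of ||delta_a||_{H_s}^2 = (2/pi) sum_k (1+k^2)^s sin^2(k a),
    computed from the sine coefficients a_k(delta_a) = sqrt(2/pi) sin(k a)."""
    return (2.0 / math.pi) * sum(
        (1.0 + k * k) ** s * math.sin(k * a) ** 2 for k in range(1, K + 1))

a = 1.0  # any point of (0, pi)
# borderline s = -1/2: partial sums keep growing, roughly like (1/pi) * log(K)
diverging = [partial_Hs_norm_sq(a, -0.5, 10 ** j) for j in (2, 3, 4)]
# s = -1 < -1/2: partial sums stabilize
converging = [partial_Hs_norm_sq(a, -1.0, 10 ** j) for j in (2, 3, 4)]
```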

Proof of Theorem 2.5

By the argument given at the beginning of the proof of Proposition 2.3 and by Lemma 2.2, we only need to consider \(u^M\) as defined in (2.9). Let \(\sigma _n(u)= \sigma (u)\mathbb {1}_{|u|\leqslant n}\). We define

$$\begin{aligned} u^M_n(t,x)=\int _0^t \int _0^\pi G(t-s;x,y) \sigma _n(u(s,y)) \,L^M(\mathrm {d}s, \mathrm {d}y). \end{aligned}$$

As in (2.10), the Fourier sine coefficients of \(t\mapsto u^M(t, \cdot )-u^M_n(t,\cdot )\) are given by

$$\begin{aligned} a_{k,n}(t)=\sqrt{\frac{2}{\pi }} \int _0^t \int _0^\pi \sin (ky) e^{-k^2(t-s)}( \sigma (u(s,y))- \sigma _n(u(s,y)) )\, L^M(\mathrm {d}s, \mathrm {d}y). \end{aligned}$$

Therefore, for any \(t\in [0,T]\),

$$\begin{aligned} \Vert u^M(t,\cdot )-u^M_n(t,\cdot )\Vert _{H_r}^2=\sum _{k\geqslant 1} (1+k^2)^r a_{k,n}^2(t). \end{aligned}$$
(2.23)

Then, using \(e^{-k^2(t-s)}=1-\int _s^t k^2 e^{-k^2(t-r)}\,\mathrm {d}r\) together with Theorem A.3 and (A.3) for \(p=2\), we can rewrite

$$\begin{aligned} a_{k,n}(t)&=\sqrt{\frac{2}{\pi }}\left( \int _0^t \int _0^\pi \sin (ky) \sigma _{(n)} (s,y) \,L^M(\mathrm {d}s, \mathrm {d}y)\right. \\&\quad \left. - \int _0^t \int _0^\pi \sin (ky) \left( \int _s^t k^2 e^{-k^2(t-r)}\,\mathrm {d}r\right) \sigma _{(n)} (s,y)\,L^M(\mathrm {d}s, \mathrm {d}y) \right) \\&=\sqrt{\frac{2}{\pi }}\left( \int _0^t \int _0^\pi \sin (ky) \sigma _{(n)} (s,y)\, L^M(\mathrm {d}s, \mathrm {d}y)\right. \\&\quad \left. - \int _0^t k^2 e^{-k^2(t-r)} \left( \int _0^r \int _0^\pi \sin (ky) \sigma _{(n)} (s,y)\,L^M(\mathrm {d}s, \mathrm {d}y) \right) \, \mathrm {d}r \right) , \end{aligned}$$

where \(\sigma _{(n)} (s,y):= \sigma (u(s,y))- \sigma _n(u(s,y))\). Therefore,

$$\begin{aligned} \left| a_{k,n}(t) \right|\leqslant & {} C \sup _{r \in [0,t]} \left| \int _0^r \int _0^\pi \sin (ky) \sigma _{(n)} (s,y)\, L^M(\mathrm {d}s, \mathrm {d}y) \right| \left( 1+ \int _0^t k^2 e^{-k^2(t-r)}\,\mathrm {d}r \right) \nonumber \\\leqslant & {} C \sup _{r \in [0,t]} \left| \int _0^r \int _0^\pi \sin (ky) \sigma _{(n)} (s,y)\,L^M(\mathrm {d}s, \mathrm {d}y) \right| , \end{aligned}$$
(2.24)

where C does not depend on k. So by Doob’s inequality, we deduce that

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{t\in [0,T]} a_{k,n}^2(t) \right] \leqslant C \int _0^T \int _0^\pi \sin ^2(ky) {\mathbb {E}}\left[ \sigma _{(n)}^2 (s,y) \right] \mathrm {d}s \,\mathrm {d}y. \end{aligned}$$
(2.25)

By (2.3), (2.23) and (2.25), it follows from dominated convergence that for any \(r<-\frac{1}{2}\),

$$\begin{aligned}&{\mathbb {E}}\left[ \sup _{t\in [0,T]} \Vert u(t,\cdot )-u_n(t,\cdot )\Vert _{H_r}^2\right] \\&\quad \leqslant C \sum _{k\geqslant 1} (1+k^2)^r\int _0^T \int _0^\pi \sin ^2(ky) {\mathbb {E}}\left[ \sigma _{(n)}^2 (s,y) \right] \mathrm {d}s \,\mathrm {d}y\rightarrow 0 \end{aligned}$$

as \(n\rightarrow +\infty \). Therefore, \( \sup _{t\in [0,T]} \Vert u(t,\cdot )-u_n(t,\cdot )\Vert _{H_r}\rightarrow 0\) in \(L^2(\Omega )\) as \(n\rightarrow +\infty \), and there is a subsequence \((n_k)_{k\geqslant 0}\) such that \(\sup _{t\in [0,T]} \Vert u(t,\cdot )-u_{n_k}(t,\cdot )\Vert _{H_r}\rightarrow 0\) almost surely as \(k\rightarrow +\infty \). This means that \(u_{n_k}(t,\cdot )\) converges to \(u(t,\cdot )\) in \(H_r([0,\pi ])\) uniformly in time for any \(r<-\frac{1}{2}\). Since \(\sigma _{n_k}\) is bounded, \(t \mapsto u_{n_k}(t,\cdot )\) has a càdlàg version in \(H_r([0,\pi ])\) by Proposition 2.3 and Remark 2.4. Therefore, \(t \mapsto u(t,\cdot )\) has a càdlàg version in \(H_r([0,\pi ])\) for any \(r<-\frac{1}{2}\). \(\square \)

2.2 The stochastic heat equation on \({\mathbb {R}}^d\)

In [12], the first author proved the existence of a solution to the stochastic heat equation on \({\mathbb {R}}^d\) under assumptions on the driving noise that are general enough to include the case of \(\alpha \)-stable noises. More specifically, suppose that \(D={\mathbb {R}}^d\) in (1.1) and that the following hypotheses hold:

(H) :

There exists \(0<p<1+\frac{2}{d}\) and \(\frac{p}{1+\left( 1+\frac{2}{d} -p\right) }<q\leqslant p\) such that

$$\begin{aligned} \int _{|z|\leqslant 1}|z| ^p \,\nu (\mathrm {d}z)+\int _{|z|>1}|z|^q\,\nu ( \mathrm {d}z)<+\infty . \end{aligned}$$

If \(p<1\), we assume that \(b_0:=b-\int _{|z|\leqslant 1} z\,\nu (\mathrm {d}z)=0\).
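As an example, for a symmetric \(\alpha \)-stable noise with \(\nu (\mathrm {d}z)=|z|^{-1-\alpha }\,\mathrm {d}z\) (this normalization is assumed here purely for illustration), the two integrals in (H) are finite precisely when \(p>\alpha \) and \(q<\alpha \), with values \(\frac{2}{p-\alpha }\) and \(\frac{2}{\alpha -q}\); the lower constraint on q in (H) is exactly what makes the window \((\frac{d}{q}, \frac{2-d(p-1)}{p-q})\) for the exponent \(\eta \) of Proposition 2.7 nonempty. A sketch of this bookkeeping:

```python
def stable_H_check(alpha, p, q, d):
    """Hypothesis (H) for the (assumed) alpha-stable measure nu(dz)=|z|^{-1-alpha}dz:
    int_{|z|<=1} |z|^p nu(dz) = 2/(p - alpha)  (finite iff p > alpha),
    int_{|z|>1}  |z|^q nu(dz) = 2/(alpha - q)  (finite iff q < alpha).
    Returns these two values and the window (d/q, (2-d(p-1))/(p-q)) for the
    exponent eta used in Proposition 2.7."""
    assert 0 < q < alpha < p < 1.0 + 2.0 / d
    assert q > p / (1.0 + (1.0 + 2.0 / d - p))  # lower constraint in (H)
    small_jumps = 2.0 / (p - alpha)
    big_jumps = 2.0 / (alpha - q)
    return small_jumps, big_jumps, d / q, (2.0 - d * (p - 1.0)) / (p - q)
```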

In contrast to the situation on a bounded domain, we can no longer use the stopping times \(\tau _N\) in (2.2) (with \([0,\pi ]\) replaced by \({\mathbb {R}}^d\)) to localize Eq. (1.1). Indeed, since \({\mathbb {R}}^d\) is unbounded, the \(\mathrm {d}t\,\mathrm {d}x\,\nu (\mathrm {d}z)\)-measure of \([0,T]\times {\mathbb {R}}^d \times [-N,N]^c\) will in general be infinite for any \(N\in {\mathbb {N}}\). In particular, on any time interval \([0,\varepsilon ]\) where \(\varepsilon >0\), we already have infinitely many jumps of arbitrarily large size, which implies that \(\tau _N=0\) almost surely for all \(N\in {\mathbb {N}}\).

Therefore, instead of using the stopping times (2.2), the idea is to use truncation levels that increase with the distance to the origin. More precisely, let \(h:{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) be the function \(h(x)=1+|x|^\eta \), for some \(\eta \) to be chosen later, and define for \(N\in {\mathbb {N}}\),

$$\begin{aligned} \tau _N= \inf \left\{ t\in [0,T] :J\left( [0,t] \times \left\{ (x,z):|z|> Nh(x)\right\} \right) >0 \right\} . \end{aligned}$$
(2.26)
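For orientation, in the \(\alpha \)-stable example the jumps exceeding the moving threshold Nh(x) form a Poisson process in time: since \(\nu (\{|z|>u\})=\frac{2}{\alpha }u^{-\alpha }\) (again with the assumed normalization \(\nu (\mathrm {d}z)=|z|^{-1-\alpha }\,\mathrm {d}z\)), its rate is \(\lambda _N=\frac{2}{\alpha }\int _{{\mathbb {R}}^d}(Nh(x))^{-\alpha }\,\mathrm {d}x\), which is finite precisely when \(\eta \alpha >d\), consistent with \(\eta >\frac{d}{q}\) for \(q<\alpha \). Consequently \(\tau _N\) is exponential with rate \(\lambda _N=N^{-\alpha }\lambda _1\) (truncated at T), so \({\mathbb {P}}(\tau _N>t)=e^{-\lambda _N t}>0\) and \(\lambda _N\rightarrow 0\). An illustrative sketch for \(d=1\):

```python
import math, random

def rate_lambda(N, alpha, eta, n=100000, xmax=200.0):
    """lambda_N = (2/alpha) * int_R (N h(x))^{-alpha} dx with h(x) = 1 + |x|^eta,
    d = 1, approximated by a midpoint rule on [-xmax, xmax]; the integral is
    finite iff eta * alpha > 1."""
    step = xmax / n
    return 2.0 * sum(
        (2.0 / alpha) * (N * (1.0 + ((i + 0.5) * step) ** eta)) ** (-alpha)
        for i in range(n)) * step

def sample_tau_N(N, alpha, eta, T=1.0):
    """tau_N of (2.26) in this example: exponential of rate lambda_N, set to
    +infinity beyond T (convention inf of the empty set = +infinity)."""
    t = random.expovariate(rate_lambda(N, alpha, eta))
    return t if t <= T else math.inf
```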

For every \(N\geqslant 1\), we can now introduce a truncation of L by

$$\begin{aligned} L_N( \mathrm {d}t, \mathrm {d}x)= & {} b\,\mathrm {d}t\,\mathrm {d}x+\int _{|z|\leqslant 1} z\, {\tilde{J}}(\mathrm {d}t, \mathrm {d}x, \mathrm {d}z)\nonumber \\&+\int _{1<|z|\leqslant Nh(x)} z \,J(\mathrm {d}t, \mathrm {d}x, \mathrm {d}z) \end{aligned}$$
(2.27)

(do not confuse this with \(L_N\) defined in (2.6), which was for the case of the interval \([0,\pi ]\)), which in turn gives rise to the equation

$$\begin{aligned} u_N(t,x)= V(t,x)+ \int _0^t \int _{{\mathbb {R}}^d} g(t-s, x-y) \sigma (u_N(s,y))\, L_N(\mathrm {d}s, \mathrm {d}y).\qquad \end{aligned}$$
(2.28)

Proposition 2.7

Let \(\sigma \) be Lipschitz continuous, \(u_0\) be bounded and continuous, and L be a Lévy white noise as in (1.6) satisfying (H) for some \(p,q>0\). If we choose \(\frac{d}{q}<\eta <\frac{2-d(p-1)}{p-q}\), then almost surely, \(\tau _N>0\) for every \(N\geqslant 1\), and \(\tau _N= +\infty \) for all large N (recall the convention \(\inf \emptyset = +\infty \)). Moreover, for any \(N\geqslant 1\), there exists a solution \(u_N\) to (2.28) such that for some constant \(C_N<+\infty \), we have

$$\begin{aligned} \sup _{(t,x)\in [0,T]\times [-R,R]^d}{\mathbb {E}}\left[ \left| u_N(t,x) \right| ^p \right]\leqslant & {} E_{\frac{2-d (p-1)}{2(p\vee 1)},\frac{1}{p\vee 1}}\Big ( C_N R^{\frac{\eta (p-q)}{p\vee 1}}\Big ) < + \infty \qquad \end{aligned}$$
(2.29)

for all \(R>1\), where \(E_{\alpha ,\beta }(z)=\sum _{k \geqslant 0} \frac{z^k}{\Gamma (\alpha k+\beta )}\) for \(\alpha ,\beta > 0\) and \(z\in {\mathbb {R}}\) are the Mittag–Leffler functions.

Furthermore, we have \(u_N(t,x) = u_{N+1}(t,x)\) on \(\{t \leqslant \tau _N \}\), and the random field u defined by \(u(t,x)=u_N(t,x)\) on \(\{t\leqslant \tau _N\}\) is a mild solution to (1.1) on \(D={\mathbb {R}}^d\).

Proof

The result is a direct application of [12, Theorem 3.1] except for the moment property (2.29). The finiteness of the left-hand side is included in the cited theorem for \(p< 1+\frac{2}{d}\). In the case \(d=1\) and \(2<p<3\), the only thing we need is an extension of [12, Lemma 3.3(2)], which can be obtained by combining the arguments given in the proof of [12, Lemma 3.3(2)] and the proof of Proposition 2.1. Indeed, for predictable \(\phi _1\) and \(\phi _2\), proceeding as in the proof of [12, Lemma 3.3] but using the moment inequalities of [28, Theorem 1], we see that

$$\begin{aligned}&{\mathbb {E}}\left[ \left| \int _0^t\int _{{\mathbb {R}}} g(t-s,x-y)\left( \sigma (\phi _1(s,y))-\sigma (\phi _2(s,y))\right) L_N(\mathrm {d}s,\mathrm {d}y)\right| ^p \right] \\&\quad \leqslant C\int _0^t \int _{\mathbb {R}}(g(t-s,x-y)+g^p(t-s,x-y)){\mathbb {E}}[|\phi _1(s,y)-\phi _2(s,y)|^p] h(y)^{p-q}\mathrm {d}s\,\mathrm {d}y. \end{aligned}$$

In order to obtain the bound involving the Mittag–Leffler functions, observe from the calculations between the last display on page 2272 and Eq. (3.13) of [12] that for every fixed \(N\geqslant 1\), there exists \(C_N<+\infty \) such that

$$\begin{aligned}&\sup _{(t,x)\in [0,T]\times [-R,R]^d}{\mathbb {E}}[|u_N(t,x)|^p]\\&\quad \leqslant \sum _{n\geqslant 1} C_N^n\left( \frac{\left( R^{n\eta (p-q)}+\Gamma \left( \textstyle \frac{1+{n\eta (p-q)}}{2}\right) \right) \Gamma \left( 1-\frac{d}{2}(p-1)\right) ^n}{\Gamma \left( 1+(1-\frac{d}{2}(p-1))n\right) } \right) ^{\frac{1}{p\vee 1}}\\&\quad \leqslant \sum _{n\geqslant 1} C_N^n \left( \frac{\Gamma \left( \textstyle \frac{1+{n\eta (p-q)}}{2}\right) }{\Gamma \left( 1+(1-\frac{d}{2}(p-1))n\right) }\right) ^{\frac{1}{p\vee 1}} \\&\qquad + \sum _{n\geqslant 1} C_N^n \left( \frac{R^{n\eta (p-q)}}{\Gamma \left( 1+(1-\frac{d}{2}(p-1))n\right) }\right) ^{\frac{1}{p\vee 1}}. \end{aligned}$$

The first series converges for our choice of \(\eta \). Furthermore, for all \(a>0\), we have by Stirling’s formula that \(\Gamma (1+ax)^\frac{1}{p\vee 1} \geqslant C \Gamma (\frac{1+ax}{p\vee 1})\) when \(x>1\). Hence,

$$\begin{aligned} \sup _{(t,x)\in [0,T]\times [-R,R]^d}{\mathbb {E}}[|u_N(t,x)|^p]\leqslant & {} C_N + \sum _{n\geqslant 1} \frac{\big (C_N R^{\frac{\eta (p-q)}{p\vee 1}}\big )^n}{\Gamma \left( \frac{1}{p\vee 1} +\frac{(2-d(p-1))n}{2(p\vee 1)} \right) }\\\leqslant & {} E_{\frac{2-d (p-1)}{2(p\vee 1)},\frac{1}{p\vee 1}}\Big ( C_N R^{\frac{\eta (p-q)}{p\vee 1}}\Big ), \end{aligned}$$

which is (2.29). \(\square \)
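Two ingredients of this proof lend themselves to a direct numerical check: the Mittag–Leffler series appearing in the bound (2.29), and the Stirling-type inequality \(\Gamma (1+ax)^{\frac{1}{p\vee 1}}\geqslant C\Gamma (\frac{1+ax}{p\vee 1})\). The sketch below evaluates the series through the log-Gamma function to avoid overflow and inspects the ratio in the Stirling bound; it is illustrative only.

```python
import math

def mittag_leffler(alpha, beta, z, terms=300):
    """E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta) for
    alpha, beta > 0 and z >= 0, summed via lgamma to avoid overflow."""
    assert alpha > 0 and beta > 0 and z >= 0
    logz = math.log(z) if z > 0 else float("-inf")
    total = 0.0
    for k in range(terms):
        log_term = (k * logz if k > 0 else 0.0) - math.lgamma(alpha * k + beta)
        total += math.exp(log_term)
    return total

def log_stirling_ratio(m, y):
    """log( Gamma(y)^{1/m} / Gamma(y/m) ) for m > 1: bounded below and
    eventually increasing in y (with y = 1 + a*x, m = p v 1), which is what
    produces the constant C in the proof."""
    return math.lgamma(y) / m - math.lgamma(y / m)
```

Sanity checks: \(E_{1,1}(z)=e^z\) and \(E_{2,1}(z)=\cosh \sqrt{z}\).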

2.2.1 Stationarity of the solution

The proof of Proposition 2.7 heavily relies on the stopping times \(\tau _N\) introduced in (2.26). These are “centered” around the origin in the sense that large jumps are permitted if they occur far enough from \(x=0\). As a consequence, even if the initial condition is constant, it does not follow a priori from Proposition 2.7 that the solution u to (1.1) is stationary in space. On the other hand, of course, choosing to center \(\tau _N\) around the origin is completely arbitrary. So in this section, we show that the solution constructed in Proposition 2.7 remains the same if we take other spatial reference points for \(\tau _N\), from which the stationarity of the solution in space will follow.

To this end, let \(\frac{d}{q}<\eta <\frac{2-d(p-1)}{p-q}\) and define the family of stopping times \(\tau _N^a\) by

$$\begin{aligned} \tau _N^a:=\inf \left\{ t\in [0,T]:J\left( [0,t]\times \left\{ (x,z): |z|>Nh(x-a) \right\} \right) >0 \right\} ,\quad N\in {\mathbb {N}},\quad a\in {\mathbb {R}}^d. \end{aligned}$$

In particular, \(\tau _N^0\) is the same as \(\tau _N\) defined in (2.26). Since the intensity measure of J is invariant under translation in the space variable, \(\tau _N^a\) has the same law as \(\tau _N\), and the conclusions of Proposition 2.7 are valid for \(\tau _N^a\). Hence, for any \(N\geqslant 1\), almost surely \(\tau _N^a>0\), and \(\tau _N^a= +\infty \) for large N. Furthermore, by definition, on the event \(\left\{ t\leqslant \tau _N^a\right\} \), \(L(\mathrm {d}t, \mathrm {d}x)=L_N^a(\mathrm {d}t, \mathrm {d}x)\), where

$$\begin{aligned} L_N^a(\mathrm {d}t, \mathrm {d}x):= b\, \mathrm {d}t \,\mathrm {d}x + \int _{|z|\leqslant 1} z \, {\tilde{J}}(\mathrm {d}t, \mathrm {d}x, \mathrm {d}z) +\int _{1<|z| \leqslant Nh(x-a)} z \, J(\mathrm {d}t, \mathrm {d}x, \mathrm {d}z). \end{aligned}$$

Proposition 2.8

Let \(\sigma :{\mathbb {R}}\rightarrow {\mathbb {R}}\) be Lipschitz continuous, \(u_0\) be bounded and continuous, and L be a Lévy white noise as in (1.6) fulfilling the assumption (H) with \(p,q>0\). Then for any \(N\in {\mathbb {N}}\) and \(a\in {\mathbb {R}}^d\), there exists a mild solution \(u_N^a\) to (1.1) with noise \(L^a_N\) instead of L such that (2.29) also holds for \(u_N^a\).

Moreover, for \(a,b\in {\mathbb {R}}^d\), \(N\in {\mathbb {N}}\) and \((t,x) \in [0,T]\times {\mathbb {R}}^d\), we have

$$\begin{aligned} u_N^a(t,x)\mathbb {1}_{t\leqslant \tau _N^a\wedge \tau _N^b}=u_N^b(t,x)\mathbb {1}_{t\leqslant \tau _N^a\wedge \tau _N^b} \quad \text {a.s.} \end{aligned}$$
(2.30)

Proof

The first part is proved in the same way as Proposition 2.7. For (2.30), we observe that \(L_N^a=L_N^b\) on \(\{t< \tau _N^a\wedge \tau _N^b\}\). Then we use the construction of the solutions \(u_N^a\) and \(u_N^b\) via a Picard iteration scheme as in the proof of [12, Theorem 3.1] and show that at each step of the scheme,

$$\begin{aligned} u_N^{a,n}(t,x)\mathbb {1}_{t\leqslant \tau _N^a\wedge \tau _N^b}=u_N^{b,n}(t,x)\mathbb {1}_{t\leqslant \tau _N^a\wedge \tau _N^b} \quad \text {a.s.} \end{aligned}$$
(2.31)

For \(n=0\), we clearly have \(u_N^{a,0}(t,x)=u_N^{b,0}(t,x)=u_0(x)\). Now if (2.31) holds for some \(n\geqslant 0\), then

$$\begin{aligned} u_N^{a,n+1}(t,x)\mathbb {1}_{t\leqslant \tau _N^a\wedge \tau _N^b}= & {} \mathbb {1}_{t\leqslant \tau _N^a\wedge \tau _N^b}\int _0^t\int _{{\mathbb {R}}^d} g (t-s,x-y) \sigma \left( u_N^{a,n}(s,y) \right) \, L_N^a(\mathrm {d}s, \mathrm {d}y)\nonumber \\= & {} \mathbb {1}_{t\leqslant \tau _N^a\wedge \tau _N^b}\int _0^t\int _{{\mathbb {R}}^d} g (t-s,x-y) \sigma \left( u_N^{b,n}(s,y) \right) \mathbb {1}_{s\leqslant \tau _N^a\wedge \tau _N^b} \, L_N^a(\mathrm {d}s, \mathrm {d}y)\\= & {} \mathbb {1}_{t\leqslant \tau _N^a\wedge \tau _N^b}\int _0^t\int _{{\mathbb {R}}^d} g (t-s,x-y) \sigma \left( u_N^{b,n}(s,y) \right) \mathbb {1}_{s\leqslant \tau _N^a\wedge \tau _N^b}L_N^b(\mathrm {d}s, \mathrm {d}y)\\= & {} \mathbb {1}_{t\leqslant \tau _N^a\wedge \tau _N^b}\int _0^t\int _{{\mathbb {R}}^d} g (t-s,x-y) \sigma \left( u_N^{b,n}(s,y) \right) \, L_N^b(\mathrm {d}s, \mathrm {d}y)\\= & {} u_N^{b,n+1}(t,x)\mathbb {1}_{t\leqslant \tau _N^a\wedge \tau _N^b}. \end{aligned}$$

Since \(u_N^{a,n}(t,x) \rightarrow u_N^{a}(t,x)\) and \(u_N^{b,n}(t,x) \rightarrow u_N^{b}(t,x)\) as \(n\rightarrow +\infty \) in \(L^p(\Omega )\), we deduce (2.30). \(\square \)
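The Picard scheme used here and in the proof of Proposition 2.7 is the usual fixed-point iteration \(u^{n+1}=V+\int G\,\sigma (u^n)\,\mathrm {d}L\). The following deterministic toy version (noise replaced by \(\mathrm {d}s\), no space variable; an assumed stand-in only, not the scheme of [12]) illustrates the mechanism of iterating the integral map until the fixed point is reached.

```python
def picard(sigma, u0, T=1.0, n=400, iters=30):
    """Toy Picard iteration for the integral equation
    u(t) = u0 + int_0^t sigma(u(s)) ds on a uniform grid of [0, T];
    a deterministic stand-in for the stochastic scheme in the proof."""
    h = T / n
    u = [u0] * (n + 1)
    for _ in range(iters):
        new, acc = [u0] * (n + 1), 0.0
        for i in range(n):
            acc += sigma(u[i]) * h        # left-endpoint quadrature
            new[i + 1] = u0 + acc
        u = new
    return u

# fixed point of u(t) = 1 + int_0^t u(s) ds is u(t) = e^t
u = picard(lambda v: v, 1.0)
```

The iterates converge geometrically (the n-th correction carries a factor \(T^n/n!\)), mirroring the contraction estimate behind [12, Theorem 3.1].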

In [15, Definition 5.1], the second author introduced the property (S) for a stochastic process and a martingale measure, which is a sort of stationarity property in the space variable. In our case, the noise is not necessarily a martingale measure, but we can use a similar definition:

Definition 2.9

We say the family of random fields \(u_N^a\) has property (S) if the law of the process

$$\begin{aligned} \left( \left( u_N^a(t,a+x), (t,x)\in [0,T]\times {\mathbb {R}}^d \right) ;\left( L_N^a\left( [0,t]\times \left( a+B\right) \right) , (t,B) \in [0,T]\times \mathcal B_b({\mathbb {R}}^d) \right) \right) , \end{aligned}$$

does not depend on a.

Lemma 2.10

If \(u_0(x)\equiv u_0\) is constant, then the family \((u_N^a:a\in {\mathbb {R}}^d)\) has property (S).

Proof

Similarly to the proof of Proposition 2.8, it is enough to show property (S) for the Picard iterates \(u_N^{a,n}\) for each \(n\geqslant 0\). For \(n=0\), we obviously have \(u^{a,0}_N(t,x+a)=u_0=u^{0,0}_N(t,x)\), so property (S) for \(u_N^{a,0}\) follows from the fact that the law of \(\left( L_N^a\left( [0,t]\times \left( a+B\right) \right) , (t,B) \in [0,T]\times \mathcal B_b({\mathbb {R}}^d) \right) \) does not depend on a. Next, assume that \(u^{a,n}_N\) has property (S). Since

$$\begin{aligned} u_N^{a,n+1}(t,x)= u_0 + \int _0^t\int _{{\mathbb {R}}^d} g (t-s,x-y) \sigma \left( u_N^{a,n}(s,y)\right) \, L_N^a(\mathrm {d}s, \mathrm {d}y), \end{aligned}$$

we can use the same argument as in [15, Lemma 18], since the proof only relies on the fact that L has a law that is invariant under translation in the space variable. \(\square \)

Theorem 2.11

If \(u_0(x)\equiv u_0\) is constant, then for any \(a\in {\mathbb {R}}^d\), the random field \((u(t,a+x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\) has the same law as \((u(t,x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\).

Proof

By (2.30), \(u_N^a(t,a+x)\mathbb {1}_{t\leqslant \tau _N^a\wedge \tau _N^0}= u_N^0(t,a+x)\mathbb {1}_{t\leqslant \tau _N^a\wedge \tau _N^0}\) almost surely. Letting \(N\rightarrow +\infty \) (the sequence is eventually constant in N because \(\tau _N^a\wedge \tau _N^0=+\infty \) for large N), we get that \(u^a(t,a+x)=u^0(t,a+x)\) almost surely for any \((t,x)\in [0,T]\times {\mathbb {R}}^d\). Also, by property (S) of the family \((u_N^a:a\in {\mathbb {R}}^d)\) (see Lemma 2.10), the random field \((u_N^a(t,a+x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\) has the same law as \((u_N^0(t,x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\). Letting \(N\rightarrow +\infty \) again, \((u^a(t,a+x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\) has the same law as \((u^0(t,x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\). Combining the two statements, \((u^0(t,a+x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\) has the same law as \((u^0(t,x):(t,x)\in [0,T]\times {\mathbb {R}}^d)\). \(\square \)

2.2.2 Existence of a càdlàg solution in \(H_{r,loc}({\mathbb {R}}^d)\) with \(r<-\frac{d}{2}\)

In the following, we want to establish a regularity result for the paths of the mild solution to (1.1) in the case \(D={\mathbb {R}}^d\), analogous to Theorem 2.5 which concerns \(D=[0,\pi ]\). Since \(D={\mathbb {R}}^d\) is unbounded, and the solution may not decay in space (see Theorem 2.11), we consider the mild solution \(u:t \mapsto u(t, \cdot )\) as a distribution-valued process in a local fractional Sobolev space, and prove that it has a càdlàg version in this space.

Recall to this end the Schwartz space \({\mathcal {S}}({\mathbb {R}}^d)\) of smooth functions \(\varphi :{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) such that \(\sup _{x\in {\mathbb {R}}^d}\left| x^\alpha \varphi ^{(\beta )}(x) \right| <+\infty \) for any multi-indices \(\alpha , \, \beta \in {\mathbb {N}}^d\), equipped with the topology induced by the semi-norms \(\sum _{|\alpha |, |\beta | \leqslant p} \sup _{x\in {\mathbb {R}}^d} \left| x^\alpha \varphi ^{(\beta )}(x) \right| \) for \(p\in {\mathbb {N}}\). Here, \(\varphi ^{(\beta )}(x) =\partial _{x_1}^{\beta _1}\ldots \partial _{x_d}^{\beta _d}\varphi (x)\) if \(\beta =(\beta _1,\ldots ,\beta _d)\). Its topological dual is called the space of tempered distributions and is denoted by \({\mathcal {S}}'({\mathbb {R}}^d)\). The classical Fourier transform \(\mathcal F( \varphi )(\xi ):= \int _{{\mathbb {R}}^d} e^{-i\xi \cdot x}\varphi (x) \,\mathrm {d}x\), with \(\xi \in {\mathbb {R}}^d\) and \(\varphi \in {\mathcal {S}}({\mathbb {R}}^d)\), can be extended by duality to \(f\in {\mathcal {S}}'({\mathbb {R}}^d)\):

$$\begin{aligned} \left\langle \mathcal F(f), \varphi \right\rangle := \left\langle f, \mathcal F (\varphi ) \right\rangle , \quad \varphi \in {\mathcal {S}}({\mathbb {R}}^d). \end{aligned}$$

Definition 2.12

The (local) fractional Sobolev space of order \(r\in {\mathbb {R}}\) is defined by

$$\begin{aligned} H_r({\mathbb {R}}^d)&:=\left\{ f\in \mathcal S'({\mathbb {R}}^d) :\xi \mapsto \left( 1+|\xi | ^2\right) ^{\frac{r}{2}} \mathcal F (f) (\xi ) \in L^2({\mathbb {R}}^d) \right\} \\ \bigg (~ H_{r,\text {loc}}({\mathbb {R}}^d)&:=\left\{ f\in \mathcal S'({\mathbb {R}}^d) :\left( \forall \theta \in C^\infty _c({\mathbb {R}}^d) :\theta f \in H_r({\mathbb {R}}^d) \right) \right\} ~ \bigg ). \end{aligned}$$

The topology on \(H_r({\mathbb {R}}^d)\) is induced by the norm

$$\begin{aligned} \left\| f\right\| _{H_r({\mathbb {R}}^d)}:=\left\| (1+|\cdot |^2)^{\frac{r}{2}} \mathcal F(f)(\cdot ) \right\| _{L^2({\mathbb {R}}^d)}, \end{aligned}$$

and we have \(f_n\rightarrow f\) in \(H_{r,loc}({\mathbb {R}}^d)\) if \(\theta f_n\rightarrow \theta f\) in \(H_{r}({\mathbb {R}}^d)\) as \(n\rightarrow +\infty \) for any \(\theta \in C^\infty _c({\mathbb {R}}^d)\).
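For a concrete feel for these norms in \(d=1\): with the convention \(\mathcal F(f)(\xi )=\int e^{-i\xi x}f(x)\,\mathrm {d}x\) above, Plancherel gives \(\Vert f\Vert _{H_0({\mathbb {R}})}^2=2\pi \Vert f\Vert _{L^2({\mathbb {R}})}^2\), and the norms decrease as r decreases. A crude Riemann-sum approximation (an illustrative sketch, not an efficient method):

```python
import math, cmath

def Hr_norm_sq(f, r, xlim=8.0, xilim=30.0, n=300):
    """Approximate ||f||_{H_r(R)}^2 = int (1 + xi^2)^r |F f(xi)|^2 dxi with
    F f(xi) = int e^{-i xi x} f(x) dx, both integrals by midpoint rules (d = 1)."""
    dx, dxi = 2.0 * xlim / n, 2.0 * xilim / n
    xs = [-xlim + (i + 0.5) * dx for i in range(n)]
    total = 0.0
    for j in range(n):
        xi = -xilim + (j + 0.5) * dxi
        Ff = sum(cmath.exp(-1j * xi * x) * f(x) for x in xs) * dx
        total += (1.0 + xi * xi) ** r * abs(Ff) ** 2 * dxi
    return total

gauss = lambda x: math.exp(-x * x / 2.0)  # F(gauss)(xi) = sqrt(2*pi) * e^{-xi^2/2}
```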

We now proceed to study the regularity of \(u:t \mapsto u(t, \cdot )\) in \(H_{r,{loc}}({\mathbb {R}}^d)\). As in the case of a bounded interval in dimension one, the drift part is easy to handle.

Lemma 2.13

Let Z be a bounded measurable random field and

$$\begin{aligned} F(t,x):= \int _0^t \int _{{\mathbb {R}}^d} g(t-s, x-y) Z(s,y)\, \mathrm {d}s\, \mathrm {d}y. \end{aligned}$$

Then the process \(t\mapsto F(t,\cdot )\) is continuous in \(H_{r,{loc}}({\mathbb {R}}^d)\) for any \(r\leqslant 0\).

Proof

Since \(g\in L^1([0,T]\times {\mathbb {R}}^d)\) and Z is bounded, it follows from [6, Corollary 3.9.6] that the sample paths of F are jointly continuous in \((t,x)\) almost surely. Therefore, \(t\mapsto F(t,\cdot )\) is continuous in \(H_{0,{loc}}({\mathbb {R}}^d)=L^2_{{loc}}({\mathbb {R}}^d)\), hence also in \(H_{r,{loc}}({\mathbb {R}}^d)\) for any \(r\leqslant 0\). \(\square \)

Next, we consider the situation where \(\sigma \) is a bounded function. Already in this restricted case, the unboundedness of space and the possibility of having infinitely many large jumps require a more careful analysis of the different parts of the solution.

Proposition 2.14

Let \(\sigma \) be a bounded and Lipschitz function, and u be the mild solution to the stochastic heat equation (1.1) constructed in Proposition 2.7 under hypothesis (H). Then, for any \(r<-\frac{d}{2}\), the stochastic process \((u(t, \cdot ))_{t\in [0,T]}\) has a càdlàg version in \(H_{r,\text {loc}}({\mathbb {R}}^d)\).

Proof

Since the mild solution \(u_N\) to the truncated equation (2.28) agrees with the mild solution u to the stochastic heat equation (1.1) on \(\left\{ t \leqslant \tau _N\right\} \) (see the last statement of Proposition 2.7), the sample path properties of u and \(u_N\) are the same, and we can restrict to the study of the regularity of the sample paths of \(u_N\). Furthermore, there is no loss of generality if we take \(N=1\). Therefore, we suppose that

$$\begin{aligned} u(t,x)=V(t,x)+\int _0^t \int _{{\mathbb {R}}^d} g(t-s,x-y) \sigma (u(s,y)) \, L_1(\mathrm {d}s, \mathrm {d}y), \end{aligned}$$

where \(L_1\) is the truncated noise from (2.27) with \(N=1\). We use the decomposition

$$\begin{aligned} u(t,x)= & {} V(t,x)+u^{1,1}(t,x)+u^{1,2}(t,x) +u^{2,1}(t,x)+u^{2,2}(t,x)\nonumber \\&+u^3(t,x), \end{aligned}$$
(2.32)

where for \(A>0\) and \(Z(s,y):=\sigma (u(s,y))\),

$$\begin{aligned} u^{1,1}(t,x)&:=\int _0^t \int _{{\mathbb {R}}^d} g(t-s, x-y)Z(s,y) \mathbb {1}_{y \in [-2A, 2A]^d}\, L^M(\mathrm {d}s, \mathrm {d}y),\\ u^{1,2}(t,x)&:=\int _0^t \int _{{\mathbb {R}}^d} g(t-s, x-y)Z(s,y) \mathbb {1}_{y \notin [-2A, 2A]^d} \, L^M(\mathrm {d}s, \mathrm {d}y),\\ u^{2,1}(t,x)&:=\int _0^t \int _{{\mathbb {R}}^d} g(t-s, x-y)Z(s,y) \mathbb {1}_{y \in [-2A, 2A]^d} \, L_1^P(\mathrm {d}s , \mathrm {d}y),\\ u^{2,2}(t,x)&:=\int _0^t \int _{{\mathbb {R}}^d} g(t-s, x-y)Z(s,y) \mathbb {1}_{y \notin [-2A, 2A]^d} \, L_1^P(\mathrm {d}s , \mathrm {d}y),\\ u^{3}(t,x)&:=b\int _0^t \int _{{\mathbb {R}}^d} g(t-s, x-y)Z(s,y)\,\mathrm {d}s\,\mathrm {d}y, \end{aligned}$$

where \(L^M\) is defined in (1.6), and \(L^P_1\) is the noise obtained by applying the truncation (2.27) with \(N=1\) to \(L^P\) from (1.6). It is clear that V is jointly continuous in \((t,x)\), and the same holds for \(u^3\) as pointed out in the proof of Lemma 2.13. Furthermore, on \([0,T]\times [-2A,2A]^d\), the noise \(L^P_1\) consists of only finitely many jumps. So upon a change of the drift b and increasing the truncation level for \(L^M\) from 1 to the largest size of these jumps (which clearly does not affect the arguments below), we may assume that \(u^{2,1}=0\). The remaining terms are now treated separately.

\(\underline{u^{1,1}(t,x):}\) By definition of the Fourier transform, we have for \(\varphi \in {\mathcal {S}}({\mathbb {R}}^d)\),

$$\begin{aligned} \left\langle \mathcal F\left( u^{1,1}(t,\cdot )\right) , \varphi \right\rangle= & {} \left\langle u^{1,1}(t,\cdot ), \mathcal F \left( \varphi \right) \right\rangle =\int _{{\mathbb {R}}^d} u^{1,1}(t,x)\mathcal F(\varphi ) (x) \,\mathrm {d}x\\= & {} \int _{{\mathbb {R}}^d} \left( \int _0^t \int _{{\mathbb {R}}^d} g(t-s, x-y)Z(s,y) \mathbb {1}_{y \in [-2A, 2A]^d} \,L^M(\mathrm {d}s , \mathrm {d}y)\right) \\&\times \mathcal F(\varphi ) (x)\, \mathrm {d}x. \end{aligned}$$

Interchanging the stochastic integral and the Lebesgue integral (which is possible by Theorem A.3 together with the estimate (A.3), because \(\int _{|z|\leqslant 1} |z|^p \,\nu (\mathrm {d}z)<+\infty \), \(g\in L^p([0,T]\times {\mathbb {R}}^d)\) and Z is bounded) yields

$$\begin{aligned} \left\langle \mathcal F( u^{1,1}(t,\cdot )), \varphi \right\rangle= & {} \int _0^t \int _{[-2A, 2A]^d} \left( \int _{{\mathbb {R}}^d}\mathcal F(\varphi )(x) g(t-s, x-y) \,\mathrm {d}x \right) \nonumber \\&\times Z(s,y) L^M(\mathrm {d}s , \mathrm {d}y) \nonumber \\= & {} \int _0^t \int _{[-2A, 2A]^d} \left( \int _{{\mathbb {R}}^d} e^{-i\xi \cdot y -(t-s)|\xi |^2} \varphi (\xi ) \, \mathrm {d}\xi \right) \nonumber \\&\times Z(s,y) L^M(\mathrm {d}s , \mathrm {d}y)\nonumber \\= & {} \int _{{\mathbb {R}}^d} \left( \int _0^t \int _{[-2A, 2A]^d} e^{-i\xi \cdot y -(t-s)|\xi |^2} Z(s,y)\, L^M(\mathrm {d}s , \mathrm {d}y\right) \nonumber \\&\times \varphi (\xi ) \, \mathrm {d}\xi , \end{aligned}$$
(2.33)

which implies that \(\mathcal F( u^{1,1}(t, \cdot )) (\xi )\) is given by

$$\begin{aligned} \begin{aligned} a_\xi (t):=e^{-|\xi |^2 t}\int _0^t \int _{{\mathbb {R}}^d} e^{-i\xi \cdot y}e^{ s |\xi |^2} Z(s,y) \mathbb {1}_{y \in [-2A, 2A]^d} \,L^M(\mathrm {d}s , \mathrm {d}y). \end{aligned} \end{aligned}$$
(2.34)

Thus,

$$\begin{aligned} \left\| u^{1,1}(t\pm h,\cdot )-u^{1,1}(t,\cdot )\right\| _{H_r({\mathbb {R}}^d)}^2=\int _{{\mathbb {R}}^d} (1+|\xi |^2)^r \left| a_\xi (t\pm h)-a_\xi (t) \right| ^2\,\mathrm {d}\xi . \end{aligned}$$

Since the function \(t\mapsto e^{-|\xi |^2 t}\) is continuous, and the stochastic integral in \(a_\xi (t)\) exists in \(L^2(\Omega )\), \(t\mapsto a_\xi (t)\) is continuous in \(L^2(\Omega )\). Furthermore,

$$\begin{aligned} {\mathbb {E}}\left[ \left| a_\xi (t) \right| ^2 \right] \leqslant C \frac{1-e^{-2|\xi |^2 t}}{2|\xi |^2} \leqslant C, \end{aligned}$$

for some constant C that does not depend on \(\xi \), so by the dominated convergence theorem (which applies since \(r<-\frac{d}{2}\)),

$$\begin{aligned} {\mathbb {E}}\left[ \left\| u^{1,1}(t+h,\cdot )-u^{1,1}(t,\cdot )\right\| _{H_r({\mathbb {R}}^d)}^2 \right] \rightarrow 0, \quad \text {as} \ h\rightarrow 0, \end{aligned}$$

and the process \(t\mapsto u^{1,1}(t,\cdot )\) is continuous in \(L^2(\Omega )\) as a process with values in \(H_r({\mathbb {R}}^d)\).

In order to apply [18, Chapter III, Sect. 4, Theorem 1] to deduce the existence of a càdlàg modification of \(t\mapsto u^{1,1}(t,\cdot )\) in \(H_r({\mathbb {R}}^d)\) for any \(r<-\frac{d}{2}\), it remains to prove

$$\begin{aligned} {\mathbb {E}}\left[ \Vert u^{1,1}(t+h, \cdot )-u^{1,1}(t,\cdot )\Vert _{H_r({\mathbb {R}}^d)}^2 \Vert u^{1,1}(t-h, \cdot )-u^{1,1}(t,\cdot )\Vert _{H_r({\mathbb {R}}^d)}^2\right] \leqslant C h^{1+\delta } \end{aligned}$$

for some \(\delta >0\). Upon defining, similarly to (2.11),

$$\begin{aligned} I_a^b(\xi ):=\int _a^b \int _{{\mathbb {R}}^d} \int _{\mathbb {R}}e^{-i\xi \cdot y}e^{ s |\xi |^2} Z(s,y) z\mathbb {1}_{|z|\leqslant 1} \mathbb {1}_{y \in [-2A, 2A]^d} \,{\tilde{J}}(\mathrm {d}s , \mathrm {d}y, \mathrm {d}z) \end{aligned}$$

for \(0\leqslant a<b \leqslant T\) and \(\xi \in {\mathbb {R}}^d\), the proof is identical to that of Proposition 2.3 for the equation on a bounded interval if we make the following replacements:

$$\begin{aligned}{}[0,\pi ] \longleftrightarrow [-2A, 2A]^d,\quad k \longleftrightarrow \xi ,\quad \sin (ky) \longleftrightarrow e^{-i \xi \cdot y}. \end{aligned}$$

\(\underline{u^{1,2}(t,x):}\) If \(f:{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) is a smooth function, then for \(a,b \in {\mathbb {R}}^d\) with \(a_i\leqslant b_i\) for all \(1\leqslant i \leqslant d\),

$$\begin{aligned} f(b)= f(a)+\sum _{i=1}^d \, \sum _{1\leqslant k_1<\cdots <k_i\leqslant d}\, \int _{a_{k_1}}^{b_{k_1}} \mathrm {d}r_{k_1} \cdots \int _{a_{k_i}}^{b_{k_i}} \mathrm {d}r_{k_i} \partial _{x_{k_1} \ldots x_{k_i}} f( c_{{\mathbf {k}}}(a,r) ),\nonumber \\ \end{aligned}$$
(2.35)

where for \({\mathbf {k}}=(k_1, \ldots , k_i)\) with \(k_1<\cdots < k_i\), we define \(\left( c_{{\mathbf {k}}}(a,r)\right) _j:= a_j \mathbb {1}_{j\notin {\mathbf {k}}} +r_j \mathbb {1}_{j \in {\mathbf {k}}}\) for \(1\leqslant j\leqslant d\). This formula is easily proved by induction on the dimension. Since the heat kernel \(g(t-s,x-y)\) is smooth in y on the set \(y\notin [-2A, 2A]^d\) for \(x\in [-A, A]^d\), (2.35) with \(a=(s,-A, \ldots , -A)\) and \(b=(t,x)\) gives

$$\begin{aligned} g(t-s,x-y)= & {} \sum _{i=1}^{d+1} \, \sum _{1\leqslant k_1<\cdots<k_i\leqslant d+1} \, \int _{a_{k_1}}^{b_{k_1}} \mathrm {d}r_{k_1}\\&\cdots \int _{a_{k_i}}^{b_{k_i}} \mathrm {d}r_{k_i} \partial _{x_{k_1} \ldots x_{k_i}} g ( c_{{\mathbf {k}}}(a,r)-(s,y) )\\= & {} \sum _{i=1}^{d} \, \sum _{1\leqslant k_1<\cdots <k_i\leqslant d} \, \int _s^t \mathrm {d}u \int _{-A}^{x_{k_1}} \mathrm {d}r_{k_1} \\&\cdots \int _{-A}^{x_{k_i}} \mathrm {d}r_{k_i} \partial _{x_{k_1} \ldots x_{k_i}} \partial _t g( u-s , c_{{\mathbf {k}}}(-{\mathbf {A}},r)- y ), \end{aligned}$$

where \({\mathbf {A}}:= (A,\ldots , A)\). Another application of Theorem A.3 and (A.3) shows that \(u^{1,2}(t,x)\) equals

$$\begin{aligned}&\sum _{i=1}^{d} \, \sum _{1\leqslant k_1<\cdots <k_i\leqslant d} \, \int _0^t \mathrm {d}u \int _{-A}^{x_{k_1}} \mathrm {d}r_{k_1} \cdots \int _{-A}^{x_{k_i}} \mathrm {d}r_{k_i} \nonumber \\&\quad \left( \int _0^u \int _{{\mathbb {R}}^d} \partial _{x_{k_1}}\ldots \partial _{x_{k_i}} \partial _t g\left( u-s, c_{{\mathbf {k}}}(-{\mathbf {A}},r)- y \right) Z(s,y) \mathbb {1}_{y \notin [-2A, 2A]^d} \, L^M(\mathrm {d}s , \mathrm {d}y) \right) .\nonumber \\ \end{aligned}$$
(2.36)

We see from this expression that \(u^{1,2}\) is jointly continuous in \((t,x)\). By the argument at the end of the proof of Lemma 2.13, we deduce that \(t\mapsto u^{1,2}(t, \cdot )\mathbb {1}_{[-A,A]^d}\) is continuous in \(H_r({\mathbb {R}}^d)\) for \(r\leqslant 0\).
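As an illustrative sanity check (not part of the argument), the identity (2.35) can be verified symbolically in dimension \(d=2\): the right-hand side consists of the two one-dimensional increments plus the mixed double integral. The test function below is an arbitrary smooth choice.

```python
import sympy as sp

x, y, r1, r2 = sp.symbols('x y r1 r2')
a1, a2, b1, b2 = sp.symbols('a1 a2 b1 b2')

f = sp.exp(x)*sp.sin(y)   # arbitrary smooth test function

# (2.35) for d = 2: f(a) + single increments + mixed double integral
rhs = (f.subs({x: a1, y: a2})
       + sp.integrate(sp.diff(f, x).subs({x: r1, y: a2}), (r1, a1, b1))
       + sp.integrate(sp.diff(f, y).subs({x: a1, y: r2}), (r2, a2, b2))
       + sp.integrate(sp.diff(f, x, y).subs({x: r1, y: r2}),
                      (r1, a1, b1), (r2, a2, b2)))

assert sp.expand(rhs - f.subs({x: b1, y: b2})) == 0   # recovers f(b)
```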

\(\underline{u^{2,2}(t,x):}\) This process takes into account only the jumps that are far away from x, but that can be arbitrarily large. We can write \(u^{2,2}\) as a sum:

$$\begin{aligned} u^{2,2}(t,x)=\sum _{i\geqslant 1} g(t-T_i, x-X_i) Z(T_i, X_i) Z_i \mathbb {1}_{X_i \notin [-2A, 2A]^d,\, 1< |Z_i| < 1+ |X_i|^\eta ,\, T_i \leqslant t}. \end{aligned}$$

We first observe that each term of this sum is jointly continuous in \((t,x) \in [0,T]\times [-A,A]^d\) almost surely. We show that this sum converges uniformly in \((t,x)\in [0,T]\times [-A,A]^d\). Choose A large enough such that \(T< \frac{A^2}{2d} \). Because \(|x-X_i|>A\), Lemma 3.6 below shows that the maximum of the function \(t\mapsto g(t, x-X_i)\) is attained at \(t=T\):

$$\begin{aligned} \sup _{t\leqslant T, x\in [-A,A]^d} g(t-T_i, x-X_i)&\leqslant \sup _{x\in [-A,A]^d} C {T^{-\frac{d}{2}}} e^{-\frac{|x-X_i|^2}{4T}}\leqslant C {T^{-\frac{d}{2}}} e^{-\frac{|p_A(X_i)-X_i|^2}{4T}}, \end{aligned}$$

where \(p_A\) is the projection on the convex set \([-A, A]^d\). Then, for \(\beta =1 \wedge q\),

$$\begin{aligned}&{\mathbb {E}}\Bigg [ \Bigg ( \sum _{i\geqslant 1} \sup _{t\leqslant T, x\in [-A,A]^d} \left| g(t-T_i, x-X_i) Z(T_i, X_i) Z_i \mathbb {1}_{X_i \notin [-2A, 2A]^d \, , \, 1< |Z_i|< 1+ |X_i|^\eta ,\,T_i \leqslant t} \right| \Bigg )^\beta \Bigg ]\nonumber \\&\quad \leqslant \frac{C}{T^{\frac{\beta d}{2}}} {\mathbb {E}}\Bigg [ \Bigg ( \sum _{i\geqslant 1} \left| e^{-\frac{|p_A(X_i)-X_i|^2}{4T}} Z_i \mathbb {1}_{X_i \notin [-2A, 2A]^d, \, 1< |Z_i|, \, T_i \leqslant T} \right| \Bigg )^\beta \Bigg ] \nonumber \\&\quad \leqslant \frac{C}{T^{\frac{\beta d}{2}}} {\mathbb {E}}\left[ \sum _{i\geqslant 1} \left| e^{-\frac{|p_A(X_i)-X_i|^2}{4T}} Z_i \mathbb {1}_{X_i \notin [-2A, 2A]^d, \, 1< |Z_i|, \, T_i \leqslant T } \right| ^\beta \right] \nonumber \\&\quad \leqslant C \int _0^T \int _{y\notin [-2A, 2A]^d} \int _{|z|>1} |z|^\beta e^{-\beta \frac{|p_A(y)-y|^2}{4T}}\, \mathrm {d}s \, \mathrm {d}y \, \nu ( \mathrm {d}z) <+\infty . \end{aligned}$$
(2.37)

Therefore, the sum defining \(u^{2,2}\) converges uniformly in \((t,x) \in [0,T]\times [-A,A]^d\), and \(u^{2,2}\) is jointly continuous. Thus, \(t\mapsto u^{2,2}(t, \cdot )\mathbb {1}_{[-A,A]^d}\) is continuous in \(H_r({\mathbb {R}}^d)\) for every \(r\leqslant 0\).
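The claim that \(t\mapsto g(t,x-X_i)\) attains its maximum at the right endpoint rests on the sign of \(\partial_t g\), which is positive as long as \(|x-X_i|^2>2dt\). A quick numerical illustration (with arbitrary sample values, \(d=2\)):

```python
import numpy as np

def g(t, x):
    # heat kernel on R^d, here d = 2, evaluated at the point x for t > 0
    d = len(x)
    return (4*np.pi*t)**(-d/2) * np.exp(-np.dot(x, x)/(4*t))

T = 1.0
x = np.array([3.0, 0.0])                 # |x|^2 = 9 > 2*d*T = 4
ts = np.linspace(0.01, T, 500)
vals = np.array([g(t, x) for t in ts])

assert np.all(np.diff(vals) > 0)         # t -> g(t, x) is increasing on (0, T]
assert np.argmax(vals) == len(vals) - 1  # maximum attained at t = T
```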

Since A can be chosen arbitrarily large, the assertion of the proposition follows. \(\square \)

In order to pass from bounded to unbounded nonlinearities \(\sigma \), the basic strategy remains the same as in the proof of Theorem 2.5. However, it was crucial in that proof that the solution have a finite second moment. Unfortunately, in dimensions \(d\geqslant 2\), the mild solution u to (1.1) has no finite second moments as a result of the singularity of g. And it is easy to convince oneself that taking powers \(p<2\) instead of 2 does not combine well with the \(\Vert \cdot \Vert _{H_r({\mathbb {R}}^d)}\)-norms. Instead, in the proof we propose below, the idea is to consider an equivalent probability measure \({\mathbb {Q}}\) (which obviously does not affect the path properties of u) under which the solution has a finite second moment. Although L might not be a Lévy noise under \({\mathbb {Q}}\) anymore, it follows from the theory of integration against random measures, which we briefly recall in the Appendix, that there exists a particularly clever choice of \({\mathbb {Q}}\) such that we have sufficient control on the second moments of both integrands and integrators under \({\mathbb {Q}}\).

Theorem 2.15

If u is the mild solution to the stochastic heat equation (1.1) constructed under the assumptions of Proposition 2.7, then, for any \(r<-\frac{d}{2}\), the stochastic process \((u(t, \cdot ))_{t\in [0,T]}\) has a càdlàg version in \(H_{r,\text {loc}}({\mathbb {R}}^d)\).

Proof

We first consider the case \(p\geqslant 1\) in assumption (H). As in Proposition 2.14, we can suppose that u is the solution to (2.28) with \(N=1\), and use the decomposition (2.32) with \(A>0\). The terms V and \(u^{2,1}\) can be dealt with as in Proposition 2.14. For the remaining terms, we use different arguments.

\(\underline{u^{1,1}(t,x):}\) Let \(\sigma _n(u)= \sigma (u)\mathbb {1}_{|u|\leqslant n}\) and define \(u^{1,1}_n\) as in (2.32) but with Z replaced by \(\sigma _n(u)\). Then,

$$\begin{aligned} u^{1,1}(t,x)-u^{1,1}_n(t,x)= & {} \int _0^t \int _{y\in [-2A,2A]^d} g(t-s, x-y)\\&\times \left( \sigma (u(s,y)) - \sigma _n(u(s,y)) \right) L^M(\mathrm {d}s , \mathrm {d}y), \end{aligned}$$

and

$$\begin{aligned} \Vert u^{1,1}(t,\cdot )-u^{1,1}_n(t,\cdot )\Vert _{H_r({\mathbb {R}}^d)}^2=\int _{{\mathbb {R}}^d} (1+|\xi |^2)^r \left| \mathcal F\left( u^{1,1}(t,\cdot )-u^{1,1}_n(t,\cdot ) \right) (\xi )\right| ^2 \,\mathrm {d}\xi . \end{aligned}$$
(2.38)

Writing \(\sigma _{(n)} (s,y)= \sigma (u(s,y))- \sigma _n(u(s,y))\), we obtain

$$\begin{aligned} \mathcal F(u^{1,1}(t,\cdot )-u^{1,1}_n(t,\cdot ))(\xi )= \int _0^t \int _{[-2A,2A]^d} e^{-i\xi \cdot y}e^{ -(t-s) |\xi |^2}\sigma _{(n)}(s,y) \,L^M(\mathrm {d}s, \mathrm {d}y) \end{aligned}$$

as in (2.34). By calculations similar to those in (2.24), but using Theorem A.3 with \(1<p<1+\frac{2}{d}\), one can show that

$$\begin{aligned} \begin{aligned}&\sup _{t\in [0,T]} \left| \mathcal F(u^{1,1}(t,\cdot )-u^{1,1}_n(t,\cdot ) ) (\xi ) \right| \\&\quad \leqslant C \sup _{t \in [0,T]} \left| \int _0^t \int _{[-2A,2A]^d}e^{-i\xi \cdot y} \sigma _{(n)} (s,y)\,L^M(\mathrm {d}s, \mathrm {d}y) \right| , \end{aligned} \end{aligned}$$
(2.39)

where C does not depend on \(\xi \). With notation from the Appendix, the fact that \(\sigma (u) \in L^{1,p}(L^M,{\mathbb {P}})\) implies that there exists a probability measure \({\mathbb {Q}}\) that is equivalent to \({\mathbb {P}}\) such that the process \(\sigma (u)\) belongs to \(L^{1,2}(L^M,{\mathbb {Q}})\), see Theorem A.4. Consequently, using the notation in (A.1), we deduce from (2.38) that

$$\begin{aligned}&{\mathbb {E}}_{\mathbb {Q}}\left[ \sup _{t\in [0,T]} \Vert u^{1,1}(t,\cdot )-u^{1,1}_n(t,\cdot )\Vert _{H_r({\mathbb {R}}^d)}^2 \right] \nonumber \\&\quad \leqslant C \int _{{\mathbb {R}}^d} (1+|\xi |^2)^r {\mathbb {E}}_{\mathbb {Q}}\left[ \sup _{t \in [0,T]} \left| \int _0^t \int _{y\in [-2A,2A]^d}e^{-i\xi \cdot y} \sigma _{(n)} (s,y)\,L^M(\mathrm {d}s, \mathrm {d}y) \right| ^2\right] \,\mathrm {d}\xi \nonumber \\&\quad \leqslant C \int _{{\mathbb {R}}^d} (1+|\xi |^2)^r \Vert e^{-i\xi \cdot (\cdot )}\sigma _{(n)}\Vert ^2_{L^M,2,{\mathbb {Q}}} \,\mathrm {d}\xi \leqslant C\Vert \sigma _{(n)} \Vert ^2_{L^M,2,{\mathbb {Q}}}\int _{{\mathbb {R}}^d} (1+|\xi |^2)^r\,\mathrm {d}\xi .\nonumber \\ \end{aligned}$$
(2.40)

The last integral is finite because \(r<-\frac{d}{2}\). Moreover, \(\sigma _{(n)}(\omega ,s,y) \rightarrow 0\), pointwise in \((\omega ,s,y)\), and is bounded by \(\sigma (u(\omega ,s,y))\), which belongs to \(L^{1,2}(L^M,{\mathbb {Q}})\) by assumption. Hence, by Theorem A.1, the left-hand side of (2.40) converges to 0 as \(n\rightarrow +\infty \). As before, we may extract a subsequence that converges uniformly in [0, T] almost surely with respect to \({\mathbb {Q}}\), and hence \({\mathbb {P}}\). We now deduce that \(t\mapsto u^{1,1}(t,\cdot )\) has a càdlàg modification because the processes \(u_n^{1,1}\) have càdlàg modifications by Proposition 2.14.

\(\underline{u^{1,2}(t,x):}\) The proof is identical to the corresponding part in Proposition 2.14, provided we can still apply Theorem A.3 in (2.36). In order to justify this, observe that

$$\begin{aligned}&{\mathbb {E}}\left[ \int _0^u \int _{{\mathbb {R}}^d} \int _{|z|\leqslant 1} |\partial _{x_{k_1}\ldots x_{k_i}}\partial _t g\left( u-s, c_{{\mathbf {k}}}(-{\mathbf {A}},r)- y \right) \sigma (u(s,y))z|^p \mathbb {1}_{y \notin [-2A, 2A]^d} \,\mathrm {d}s\,\mathrm {d}y\,\nu (\mathrm {d}z) \right] \\&\quad \leqslant C\int _0^u \int _{{\mathbb {R}}^d} |\partial _{x_{k_1}\ldots x_{k_i}}\partial _t g( u-s, c_{{\mathbf {k}}}(-{\mathbf {A}},r)- y )|^p E_{\frac{2-d (p-1)}{2(p\vee 1)},\frac{1}{p\vee 1}}\Big ( C |y|^{\frac{\eta (p-q)}{p\vee 1}}\Big )\mathbb {1}_{y \notin [-2A, 2A]^d} \,\mathrm {d}s\,\mathrm {d}y\\&\quad \leqslant C\int _0^u \int _{{\mathbb {R}}^d} |\partial _{x_{k_1}\ldots x_{k_i}}\partial _t g( u-s, c_{{\mathbf {k}}}(-{\mathbf {A}},r)- y )|^p \exp \Big (C|y|^{\frac{2\eta (p-q)}{2-d (p-1)}}\Big )P(|y|)\mathbb {1}_{y \notin [-2A, 2A]^d} \,\mathrm {d}s\,\mathrm {d}y \end{aligned}$$

by (2.29) and [19, Theorems 4.3 and 4.4] with some polynomial P. Next, for every multi-index \(\alpha \in {\mathbb {N}}^{1+d}\), it is easily verified by induction that \(\partial ^\alpha g(t,x)\) takes the form \(Q(t^{-\frac{1}{2}},x)\exp (-\frac{|x|^2}{4t})\) for some polynomial Q. So if l denotes the degree of Q, we have for every \(t\in [0,T]\) and \(|x|^2\geqslant 2lT\),

$$\begin{aligned} |\partial ^\alpha g(t,x)| \leqslant C(1+t^{-\frac{l}{2}}+|x|^l)e^{-\frac{|x|^2}{4t}} \leqslant C(1+T^{-\frac{l}{2}}+|x|^l)e^{-\frac{|x|^2}{4T}} =: {\tilde{Q}}(x)e^{-\frac{|x|^2}{4T}}. \end{aligned}$$
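The structural claim that \(\partial^\alpha g(t,x)=Q(t^{-\frac{1}{2}},x)\exp (-\frac{|x|^2}{4t})\) can also be checked symbolically; the following sketch (illustrative only, for \(d=1\) and \(\alpha =(1,2)\)) verifies that, after dividing out the Gaussian factor, what remains is a polynomial in x and \(t^{-\frac{1}{2}}\):

```python
import sympy as sp

t, x, s = sp.symbols('t x s', positive=True)
g = (4*sp.pi*t)**sp.Rational(-1, 2) * sp.exp(-x**2/(4*t))  # heat kernel, d = 1

expr = sp.diff(g, t, 1, x, 2)             # the derivative d_t d_x^2 g
ratio = sp.simplify(expr * sp.exp(x**2/(4*t)))
assert not ratio.has(sp.exp)              # Gaussian factor divides out exactly

poly = sp.expand(ratio.subs(t, s**-2))    # substitute s = t^(-1/2)
assert poly.is_polynomial(x, s)           # remainder is Q(t^(-1/2), x)
```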

Hence, as \(|c_{{\mathbf {k}}}(-{\mathbf {A}},r)- y|\geqslant A\) for \(y \notin [-2A, 2A]^d\), we obtain for sufficiently large A that the expectation in the penultimate display is bounded by

$$\begin{aligned}&C\int _{{\mathbb {R}}^d} |{\tilde{Q}}(c_{{\mathbf {k}}}(-{\mathbf {A}},r)- y)|^p e^{-\frac{p|c_{{\mathbf {k}}}(-{\mathbf {A}},r)- y|^2}{4T}}\exp \Big (C|y|^{\frac{2\eta (p-q)}{2-d (p-1)}}\Big )P(|y|)\mathbb {1}_{y \notin [-2A, 2A]^d} \,\mathrm {d}y\\&\quad \leqslant C\int _{{\mathbb {R}}^d} |{\tilde{Q}}(c_{{\mathbf {k}}}(-{\mathbf {A}},r)- y)|^p e^{-\frac{p|c_{{\mathbf {k}}}(-{\mathbf {A}},r)- y|^2}{4T}}\exp \Big (C|y|^{\frac{2\eta (p-q)}{2-d (p-1)}}\Big )P(|y|)\,\mathrm {d}y\\&\quad =C\int _{{\mathbb {R}}^d} |{\tilde{Q}}(y)|^p e^{-\frac{p|y|^2}{4T}}\exp \Big (C|c_{{\mathbf {k}}}(-{\mathbf {A}},r)- y|^{\frac{2\eta (p-q)}{2-d (p-1)}}\Big )P(|c_{{\mathbf {k}}}(-{\mathbf {A}},r)- y|)\,\mathrm {d}y. \end{aligned}$$

Since \(|c_{{\mathbf {k}}}(-{\mathbf {A}},r)|\leqslant \sqrt{d}A\), \(|x-y|^b\leqslant 2^{b-1} (|x|^b+|y|^b)\) for \(b\geqslant 1\), and \(P(|x-y|)\leqslant C(1+|x|^m + |y|^m)\) where m is the degree of P, one can find another polynomial \({\tilde{P}}\) such that the last integral is further bounded by

$$\begin{aligned} C\int _{{\mathbb {R}}^d} |{\tilde{Q}}(y)|^p e^{-\frac{p|y|^2}{4T}}\exp \Big (C|y|^{\frac{2\eta (p-q)}{2-d (p-1)}}\Big ){\tilde{P}}(y)\,\mathrm {d}y, \end{aligned}$$

which is independent of r, and finite because it is possible by assumption (H) to choose \(\eta >\frac{d}{q}\) such that \(\frac{2\eta (p-q)}{2-d (p-1)}< 2\) is satisfied. Theorem A.3 is therefore applicable by (A.3).

\(\underline{u^{2,2}(t,x):}\) The argument remains the same as in Proposition 2.14, except that we have to replace the final bound in (2.37) by

$$\begin{aligned} C\int _0^T \int _{y\notin [-2A, 2A]^d} \int _{|z|>1} |z|^\beta e^{-\beta \frac{|p_A(y)-y|^2}{4T}}\left( E_{\frac{2-d (p-1)}{2(p\vee 1)},\frac{1}{p\vee 1}}\Big ( C |y|^{\frac{\eta (p-q)}{p\vee 1}}\Big )\right) ^{\frac{\beta }{p}}\, \mathrm {d}s \, \mathrm {d}y \, \nu ( \mathrm {d}z), \end{aligned}$$

which is finite by an argument similar to the one for \(u^{1,2}\).

\(\underline{u^3(t,x):}\) Consider the decomposition \(u^3(t,x)=u^{3,1}(t,x)+u^{3,2}(t,x)\) where

$$\begin{aligned} u^{3,1}(t,x)= & {} b\int _0^t \int _{y\in [-2A, 2A]^d} g(t-s, x-y) \sigma (u(s,y))\, \mathrm {d}s\, \mathrm {d}y,\nonumber \\ u^{3,2}(t,x)= & {} b\int _0^t \int _{y\notin [-2A, 2A]^d} g(t-s, x-y) \sigma (u(s,y)) \, \mathrm {d}s \,\mathrm {d}y. \end{aligned}$$
(2.41)

If \(u^{3,1}_n\) is the process obtained from \(u^{3,1}\) by replacing \(\sigma (u(s,y))\) by \(\sigma _n(u(s,y))\), then, as in (2.39),

$$\begin{aligned} \sup _{t\in [0,T]} \left| \mathcal F( u^{3,1}(t,\cdot )-u^{3,1}_n(t,\cdot ) ) (\xi ) \right|&\leqslant C \sup _{t \in [0,T]} \left| \int _0^t \int _{{\mathbb {R}}^d}e^{-i\xi \cdot y} \sigma _{(n)} (s,y) \mathbb {1}_{y\in [-2A, 2A]^d}\,\mathrm {d}s\,\mathrm {d}y \right| \\&\leqslant C\int _0^T \int _{{\mathbb {R}}^d} |\sigma _{(n)}(s,y)|\mathbb {1}_{y\in [-2A, 2A]^d}\,\mathrm {d}s\,\mathrm {d}y. \end{aligned}$$

Consequently, we have

$$\begin{aligned} \sup _{t\in [0,T]} \Vert u^{3,1}(t,\cdot )-u^{3,1}_n(t,\cdot )\Vert _{H_r({\mathbb {R}}^d)}^2&=\sup _{t\in [0,T]}\int _{{\mathbb {R}}^d} (1+|\xi |^2)^r \left| \mathcal F( u^{3,1}(t,\cdot )-u^{3,1}_n(t,\cdot ) ) (\xi )\right| ^2 \,\mathrm {d}\xi \\&\leqslant C \left( \int _0^T \int _{{\mathbb {R}}^d} |\sigma _{(n)}(s,y)|\mathbb {1}_{y\in [-2A, 2A]^d}\,\mathrm {d}s\,\mathrm {d}y \right) ^2 \\&\quad \times \int _{{\mathbb {R}}^d} (1+|\xi |^2)^r \,\mathrm {d}\xi . \end{aligned}$$

Recalling that u is the solution to (2.28) with \(N=1\), the expectation of the left-hand side tends to 0 as \(n\rightarrow +\infty \) by (2.29) and the dominated convergence theorem. Hence, \(t\mapsto u^{3,1}(t,\cdot )\) inherits the càdlàg sample paths of \(u^{3,1}_n\), see Lemma 2.13. Concerning \(u^{3,2}\), the continuity of \((t,x)\mapsto u^{3,2}(t,x)\) on \([0,T]\times [-A,A]^d\) is shown in the same way as for \(u^{1,2}\). Instead of the stochastic Fubini theorem, one can use the ordinary Fubini theorem because

$$\begin{aligned}&\int _0^t \mathrm {d}u \int _{-A}^{x_{k_1}} \mathrm {d}r_{k_1} \cdots \int _{-A}^{x_{k_i}} \mathrm {d}r_{k_i}\\&\quad \left( \int _0^u \int _{{\mathbb {R}}^d} |\partial _{x_{k_1}\ldots x_{k_i}}\partial _t g\left( u-s , c_{{\mathbf {k}}}(-{\mathbf {A}},r)- y \right) \sigma (u(s,y))| \mathbb {1}_{y \notin [-2A, 2A]^d} \,\mathrm {d}s\,\mathrm {d}y\right) \\&\quad <+\infty \end{aligned}$$

almost surely. This is verified by showing that the expectation of the integral in brackets is finite and uniformly bounded in u and \(r_{k_1}, \ldots , r_{k_i}\). This concludes the proof for \(p\geqslant 1\).

For \(0<p<1\), we have to modify the proof in the following way. Because L has drift \(b_0=0\) and summable jumps by the assumption \(\int _{|z|\leqslant 1} |z|^p \,\nu (\mathrm {d}z)<+\infty \), we can write u in the same form as (2.32) with \(L^M(\mathrm {d}t,\mathrm {d}x)\) replaced by \(\int _{|z|\leqslant 1} z \, J(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z)\) and \(u^3 = 0\). An inspection of the proof above shows that the arguments for V, \(u^{2,1}\), \(u^{2,2}\) remain valid, and in principle also for \(u^{1,1}\) and \(u^{1,2}\) if changing the order of integration in (2.33) and (2.36), respectively, is permitted. The justification is comparable to the situation for \(p\geqslant 1\); one only has to use (A.4) instead of (A.3):

$$\begin{aligned}&\int _0^t\int _{{\mathbb {R}}^d}\int _{{ |z|\leqslant 1}} \left( \int _{{\mathbb {R}}^d}|\mathcal F(\varphi )(x) | g(t-s, x-y) \,\mathrm {d}x \right) ^p\\&\quad \times {\mathbb {E}}[|\sigma (u(s,y))|^p] |z|^p \mathbb {1}_{y \in [-2A, 2A]^d} \,\mathrm {d}s\,\mathrm {d}y\,\nu (\mathrm {d}z)\\&\quad \leqslant C\int _0^T\int _{{\mathbb {R}}^d} \left( \int _{{\mathbb {R}}^d} g(t-s, x-y) \,\mathrm {d}x \right) ^p\mathbb {1}_{y \in [-2A, 2A]^d} \,\mathrm {d}s\,\mathrm {d}y \\&\quad = CT(4A)^d<+\infty . \end{aligned}$$

\(\square \)

Remark 2.16

The paper [21] studies the existence of càdlàg modifications in certain Banach spaces of solutions to a class of stochastic PDEs driven by Poisson random measures. Example 2.3 in [21] particularizes to the case of the stochastic heat equation with a multiplicative Lévy space–time white noise. However, this example contains an error since the measure \(\nu \) in the first display on p. 1502 is not a Lévy measure (it is infinite on sets of the form \(\{|x|>\delta \}\) for all sufficiently small values of \(\delta \), contradicting Remark 3.1 in [21]). Private communication with the author suggests that this example can be rewritten for the case of a bounded domain, but cannot be extended to the case where \(D={\mathbb {R}}^d\) (because the stopping times in (2.2) with \([0,\pi ]\) replaced by \({\mathbb {R}}^d\) are 0 almost surely for all \(N\in {\mathbb {N}}\), cf. the discussion at the beginning of Sect. 2.2).

2.3 The stochastic heat equation on bounded domains

Let D be a \(C^\infty \)-regular domain of \({\mathbb {R}}^d\), where \(d\geqslant 2\), that is, we assume that D is a bounded open set whose boundary \(\partial D\) is a smooth \((d-1)\)-dimensional manifold, and whose closure \({\bar{D}}\) has the same boundary \(\partial {\bar{D}}=\partial D\). For the stochastic heat equation (1.1) on such a domain D, we assume:

\((\mathbf{H }')\) :

There exists \(0<p<1+\frac{2}{d}\) such that \(\int _{|z|\leqslant 1}|z| ^p \,\nu (\mathrm {d}z)<+\infty \).

As in the case of an interval (Sect. 2.1), the stopping times

$$\begin{aligned} \tau _N=\inf \left\{ t\in [0,T]:J\left( [0,t]\times D \times [-N,N]^c \right) \ne 0\right\} \end{aligned}$$
(2.42)

are almost surely strictly positive and equal to \(+\infty \) for large N.

Proposition 2.17

Let D be a \(C^\infty \)-regular domain, \(\sigma :{\mathbb {R}}\rightarrow {\mathbb {R}}\) be a Lipschitz function and let L be a pure jump Lévy white noise as in (1.6) such that \((\mathbf{H }')\) is satisfied. Then there exists a predictable mild solution u to (1.1) such that for all \(0<p<1+\frac{2}{d}\),

$$\begin{aligned} \sup _{(t,x)\in [0,T]\times D}{\mathbb {E}}\left[ |u(t,x)|^{p} \mathbb {1}_{t\leqslant \tau _N}\right] <+\infty . \end{aligned}$$
(2.43)

Furthermore, up to modifications, the solution is unique among all predictable random fields that satisfy (2.43).

Proof

By [16, Corollary 3.2.8], \(G_D(t;x,y) \leqslant C{t^{-\frac{d}{2}}} e^{-\frac{|x-y|^2}{6t}}\), so [11, Theorem 3.5] applies. \(\square \)
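The Gaussian domination of \(G_D\) used in this proof can be illustrated numerically in the model case \(D=(0,\pi )\) with \(d=1\) (purely illustrative; the truncation level and test values are ad hoc choices, and the constant turns out to be below 1 for this configuration):

```python
import numpy as np

def G(t, x, y, n=200):
    # spectral Dirichlet heat kernel on D = (0, pi): eigenfunctions sqrt(2/pi) sin(j x)
    j = np.arange(1, n + 1)
    return (2.0/np.pi)*np.sum(np.sin(j*x)*np.sin(j*y)*np.exp(-j**2*t))

t = 0.1
xs = np.linspace(0.05, np.pi - 0.05, 40)
ratio = max(G(t, x, y)/(t**(-0.5)*np.exp(-(x - y)**2/(6*t)))
            for x in xs for y in xs)

assert 0 < ratio < 1.0   # consistent with G_D <= C t^{-d/2} e^{-|x-y|^2/(6t)}
```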

As in the proof of Proposition 2.3, the stopping times \(\tau _N\) allow us to ignore the big jumps for the analysis of path properties of the solution. So we only need to consider

$$\begin{aligned} L_N( \mathrm {d}t, \mathrm {d}x)= b_N\,\mathrm {d}t\,\mathrm {d}x+\int _{|z|\leqslant N} z \,{\tilde{J}}(\mathrm {d}t, \mathrm {d}x, \mathrm {d}z), \end{aligned}$$
(2.44)

where \(b_N:= b-\int _{1<|z| \leqslant N}z \,\nu (\mathrm {d}z)\), and the corresponding mild solutions to

$$\begin{aligned} u_N(t,x)=V(t,x)+\int _0^t \int _D G_D(t-s; x,y) \sigma (u_N(s,y))\, L_N(\mathrm {d}s , \mathrm {d}y). \end{aligned}$$
(2.45)

For simplicity, we take \(N=1\) in the following, so that our equation becomes

$$\begin{aligned} u(t,x)= & {} V(t,x)+ b\int _0^t \int _D G_D(t-s;x,y) \sigma (u(s,y)) \, \mathrm {d}s \, \mathrm {d}y\nonumber \\&+ \int _0^t \int _D G_D(t-s;x,y) \sigma (u(s,y)) L^M(\mathrm {d}s , \mathrm {d}y). \end{aligned}$$
(2.46)

2.3.1 The fractional Sobolev spaces \(H_r(D)\)

The operator \(-\Delta \) on D with vanishing Dirichlet boundary conditions admits a complete orthonormal system in \(L^2(D)\) of smooth eigenfunctions \((\Phi _j)_{j\geqslant 1}\), with eigenvalues \((\lambda _j)_{j\geqslant 1}\). Then we have the following properties (see for example [39, Chapter V, p. 343]):

$$\begin{aligned}&\sum _{j\geqslant 1} (1+\lambda _j)^r<+\infty , \quad \text {for any}~r<-\frac{d}{2}, \end{aligned}$$
(2.47)
$$\begin{aligned}&\left\| \Phi _j \right\| _{L^\infty (D)}\leqslant C (1+\lambda _j)^{\frac{\alpha }{2}}, \quad \text {for any}~\alpha >\frac{d}{2}. \end{aligned}$$
(2.48)
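Property (2.47) reflects Weyl's asymptotics \(\lambda _j \sim C j^{2/d}\). For the model square \(D=(0,\pi )^2\), where the Dirichlet eigenvalues are \(j^2+k^2\), it can be checked numerically (an illustrative sketch, not part of the argument):

```python
import numpy as np

def partial_sum(r, n):
    # Dirichlet eigenvalues of -Delta on the square (0, pi)^2: lambda = j^2 + k^2
    j, k = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1))
    return np.sum((1.0 + j**2 + k**2)**r)

# r = -2 < -d/2 = -1: the series converges, partial sums stabilize
assert abs(partial_sum(-2.0, 200) - partial_sum(-2.0, 100)) < 1e-3
# r = -1/2 > -d/2: the series diverges, partial sums keep growing
assert partial_sum(-0.5, 200) - partial_sum(-0.5, 100) > 10
```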

The Green’s function \(G_D\) has the representation (1.5) and we have the decomposition

$$\begin{aligned} f(x)=\sum _{j\geqslant 1}a_j(f) \Phi _j(x), \quad x\in D, \end{aligned}$$
(2.49)

for every \(f\in L^2(D)\) where \(a_j(f)=\left\langle f, \Phi _j \right\rangle _{L^2(D)}\). For \(r\geqslant 0\), we now define

$$\begin{aligned} H_r(D):=\Bigg \{ f\in L^2\left( D \right) :\Vert f \Vert ^2_{H_r} := \sum _{j\geqslant 1}\left( 1+\lambda _j\right) ^r a_j(f)^2 <+\infty \Bigg \}, \end{aligned}$$

which becomes a Hilbert space with the inner product \(\left\langle f, h \right\rangle _{H_r}:= \sum _{j\geqslant 1} \left( 1+\lambda _j\right) ^r a_j(f) a_j(h)\). We denote by \(H_{-r}\left( D \right) \) the topological dual space of \(H_{r}\left( D \right) \), which turns out to be isomorphic to the space of sequences \(b=(b_j)_{j\geqslant 1}\) such that

$$\begin{aligned} \Vert b \Vert ^2_{H_{-r}}:= \sum _{j\geqslant 1}\left( 1+\lambda _j \right) ^{-r} b_j^2 <+\infty . \end{aligned}$$

In fact, with \(b_j= \tilde{f}(\Phi _j)\) for \(\tilde{f}\in H_{-r}\left( D \right) \), we have \(\Vert \tilde{f}\Vert _{H_{-r}} = \Vert b \Vert _{H_{-r}}\) and the pairing between \(H_{-r}\left( D \right) \) and \(H_{r}\left( D \right) \) is given by

$$\begin{aligned} \left\langle b, h \right\rangle =\sum _{j\geqslant 1} b_j a_j(h)\leqslant \Vert b\Vert _{H_{-r}} \, \Vert h\Vert _{H_r}. \end{aligned}$$
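For the model interval \(D=(0,\pi )\), where \(\lambda _j=j^2\) and \(\Phi _j(x)=\sqrt{2/\pi }\sin (jx)\), the spectral norm \(\Vert \cdot \Vert _{H_r}\) can be computed numerically; a minimal sketch (the test function is an arbitrary choice, and for \(r=0\) Parseval's identity recovers the \(L^2\) norm):

```python
import numpy as np

def hr_norm_sq(f, r, n_modes=100, n_grid=4000):
    # ||f||_{H_r}^2 = sum_j (1 + lambda_j)^r a_j(f)^2 on D = (0, pi),
    # where lambda_j = j^2 and Phi_j(x) = sqrt(2/pi) sin(j x)
    x = np.linspace(0.0, np.pi, n_grid)
    dx = x[1] - x[0]
    fx = f(x)
    total = 0.0
    for j in range(1, n_modes + 1):
        phi = np.sqrt(2.0/np.pi)*np.sin(j*x)
        a_j = np.sum(fx*phi)*dx            # a_j(f) = <f, Phi_j>_{L^2(D)}
        total += (1.0 + j**2)**r * a_j**2
    return total

f = lambda u: u*(np.pi - u)                # arbitrary test function, zero on the boundary

# Parseval: the r = 0 norm is the squared L^2 norm, here exactly pi^5/30
assert abs(hr_norm_sq(f, 0.0) - np.pi**5/30.0) < 1e-3
```

The weights \((1+\lambda _j)^r\) are strictly smaller than 1 for \(r<0\), so the norm decreases as r decreases, matching the scale of spaces \(H_r(D)\).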

We need the following technical lemma, for which we could not find a reference in the literature.

Lemma 2.18

For \(r\leqslant 0\), the restriction of \(H_r({\mathbb {R}}^d)\) to D is continuously embedded in \(H_r(D)\).

Proof

For \(m\in {\mathbb {N}}\), let \(H^m(D)=\{ u :\ u^{(\alpha )} \in L^2(D) \ \text {for all} \ |\alpha |\leqslant m \}\), with an upper index m, be the “usual” Sobolev spaces as in [27, p. 3]. For real \(r\geqslant 0\), let m be the smallest even integer with \(m\geqslant r\). Following [27, Chapitre 1, (9.1)], we define, with a superscript index,

$$\begin{aligned} H^r(D):=\left[ H^m(D), L^2(D)\right] _{1-\frac{r}{m}}, \end{aligned}$$

where the right-hand side is the notation of [27, Chapitre 1, Définition 2.1] for interpolation spaces.

Furthermore, define \(H_0^r(D)\) for \(r\geqslant 0\) as the closure in \(H^r(D)\) of the set of smooth functions with compact support in D, see [27, Chapitre 1, (11.1)]. Similarly, as in [20, Definition 8.1], let \(H^r_B(D)\) for \(r>\frac{1}{2}\) be the closed subspace of \(H^r(D)\) such that its elements are equal to zero on \(\partial D\).

By the definition of interpolation spaces, there exists for each \(\theta \in [0,1]\) some self-adjoint positive operator \(\Lambda \) in \(L^2(D)\) with domain \(H^m_B(D)\) such that

$$\begin{aligned} \left[ H^m_B(D), L^2(D) \right] _\theta =\text {dom}\left( \Lambda ^{1-\theta } \right) . \end{aligned}$$

The notion of domain is as in [27, Chapitre 1, p. 12], and the power in this case is to be understood as the spectral power of the operator \(\Lambda \). By [27, Chapitre 1, Remarque 2.3], \(\text {dom} (\Lambda ^{1-\theta })\) coincides with \(\text {dom}( {\tilde{\Lambda }}^{1-\theta })\) for any other self-adjoint positive operator \({\tilde{\Lambda }}\) in \(L^2(D)\) with domain \(H^m_B(D)\). In particular, we can choose \({\tilde{\Lambda }}=(-\Delta )^{\frac{m}{2}}\), where \(\Delta \) is the Dirichlet Laplacian, and the power \(\frac{m}{2}\), an integer because m is even, is to be understood as the composition of partial differential operators. Then, from [20, Théorème 8.1], we deduce that

$$\begin{aligned} \left[ H^m_B(D), L^2(D) \right] _\theta =H_B^{m(1-\theta )}(D), \end{aligned}$$

and with the choice \(\theta =1-\frac{r}{m}\) further that

$$\begin{aligned} \text {dom}\left( {\tilde{\Lambda }}^{\frac{r}{m}} \right) =H_B^{r}(D). \end{aligned}$$
(2.50)

Let \(f\in L^2(D)\) be as in (2.49). Then, see e.g. [30, (2.12)], we have that \({\tilde{\Lambda }}^{\frac{r}{m}}f=\sum _{j\geqslant 1} \mu _j^{\frac{r}{m}} a_j(f) \Phi _j\), where \(\mu _j=\lambda _j^{\frac{m}{2}}\) is the jth eigenvalue of \({\tilde{\Lambda }}\). The previous sum converges in \(L^2(D)\) if and only if \(\sum _{j\geqslant 1}\lambda _j^r \left| a_j(f) \right| ^2<+\infty \), so together with (2.50), we obtain

$$\begin{aligned} H_r(D)=\text {dom}\left( {\tilde{\Lambda }}^{\frac{r}{m}} \right) =H^r_B(D). \end{aligned}$$

Therefore, by [20, Théorème 8.1] and the discussion that follows, we have \(H_r(D)\hookrightarrow H^r(D)\).

If \(r\leqslant \frac{1}{2}\), we have by [30, (2.13)]

$$\begin{aligned} H_r(D)=\left\{ \begin{array}{ll} H^r(D) &{} \text {if}\quad r< \frac{1}{2}, \\ H_{00}^{\frac{1}{2}}(D) &{} \text {if}\quad r=\frac{1}{2}, \end{array}\right. \end{aligned}$$

where \(H_{00}^{\frac{1}{2}}(D)\) is the Lions–Magenes space satisfying \(H_{00}^{\frac{1}{2}}(D)\hookrightarrow H_{0}^{\frac{1}{2}}(D)\) by [27, Chapitre 1, Théorème 11.7]. In addition, for any \(r\geqslant 0\), \(H_{0}^{r}(D)\hookrightarrow H^{r}(D)\) and thus \(H_r(D)\hookrightarrow H^r(D)\). Next, by [27, Chapitre 1, Théorèmes 9.1, 9.2 and (7.1)], there exists a constant C such that any function \(u\in H^r(D)\) is the restriction of a function \({\tilde{u}} \in H_r({\mathbb {R}}^d)\) to D with \(\Vert {\tilde{u}} \Vert _{H_r({\mathbb {R}}^d)}\leqslant C \Vert u \Vert _{H^r(D)}\). Therefore, \(H_r(D)\hookrightarrow H^r(D) \hookrightarrow H_r({\mathbb {R}}^d)|_D\) for any \(r\geqslant 0\), and by duality, we have \(H_r({\mathbb {R}}^d)|_D\hookrightarrow H^r(D)\hookrightarrow H_r(D)\) for \(r\leqslant 0\). \(\square \)

2.3.2 Existence of a càdlàg solution in \(H_{r}(D)\) with \(r<-\frac{d}{2}\)

Theorem 2.19

The mild solution u to (1.1) constructed in Proposition 2.17 has a càdlàg modification in \(H_r(D)\) for any \(r<-\frac{d}{2}\).

In contrast to the case \(D=[0,\pi ]\), the eigenfunctions of \(-\Delta \) on a general domain D in \({\mathbb {R}}^d\) may not be uniformly bounded, see (2.48). Thus, the proof of Theorem 2.5 does not extend to higher dimensions. Instead, we use [17, Theorem 1] to write

$$\begin{aligned} G_D(t;x,y)= g(t,x-y)+ H(t; x,y), \end{aligned}$$
(2.51)

where g is the heat kernel on \({\mathbb {R}}^d\) and H is a function such that for any \(\varepsilon >0\), \((t,x,y) \mapsto H(t;x,y)\) is smooth on \([0,T]\times D \times B_\varepsilon ^c(\partial D)\), where \(B_\varepsilon (\partial D)\) is the \(\varepsilon \)-neighborhood of \(\partial D\). Away from the boundary \(\partial D\), g can be dealt with as in Theorem 2.15, and H is smooth and therefore easily handled. In order to control the behavior close to the boundary, the change of measure technique (see the Appendix and also the proof of Theorem 2.15) is again fruitful.

Proof of Theorem 2.19

We may assume that u satisfies (2.46). For \(\varepsilon >0\), split \(u(t,x)\) into the sum of three terms:

$$\begin{aligned} u_\varepsilon ^1(t,x)&:=\int _0^t \int _D g(t-s,x-y) \sigma (u(s,y)) \mathbb {1}_{y \in B_\varepsilon ^c(\partial D)} \,L(\mathrm {d}s, \mathrm {d}y), \\ u_\varepsilon ^2(t,x)&:=\int _0^t \int _D H(t-s;x,y) \sigma (u(s,y)) \mathbb {1}_{y \in B_\varepsilon ^c(\partial D)} \,L(\mathrm {d}s, \mathrm {d}y),\\ u^3_\varepsilon (t,x)&:= \int _0^t \int _D G_D(t-s;x,y) \sigma (u(s,y)) \mathbb {1}_{y \in B_\varepsilon (\partial D)} \,L(\mathrm {d}s, \mathrm {d}y). \end{aligned}$$

By (2.43), the same proof as in Theorem 2.15 for \(u^{1,1}\) and \(u^3\) shows that \(t\mapsto u_\varepsilon ^1(t,\cdot )\) has a càdlàg version in \(H_{r,\text {loc}}({\mathbb {R}}^d)\) for \(r<-\frac{d}{2}\), where the spatial variable takes values in the whole space \({\mathbb {R}}^d\) rather than just D. Thus, as a process with \(x\in D\), it has a càdlàg version in \(H_r(D)\) for \(r<-\frac{d}{2}\) by Lemma 2.18. Regarding \(u^2_\varepsilon \), since \((t,x,y) \mapsto H(t;x,y)\) is smooth on \([0,T]\times D \times B_\varepsilon ^c(\partial D)\), we can mimic the part of the proof of Theorem 2.15 concerning \(u^{1,2}\) in order to get that \((t,x) \mapsto u_\varepsilon ^2(t,x)\) is jointly continuous, and in fact, uniformly continuous since D is bounded. In particular, \(t\mapsto u_\varepsilon ^2(t,\cdot )\) is continuous in \(H_r(D)\) for any \(r\leqslant 0\). For the last term \(u^3_\varepsilon \), we want to show that it converges to 0 in \(H_r(D)\) as \(\varepsilon \rightarrow 0\), uniformly in \(t\in [0,T]\). As a first step, we have

$$\begin{aligned} \left\| u_\varepsilon ^3 (t,\cdot ) \right\| _{H_r}^2=\sum _{k\geqslant 1} (1+\lambda _k)^r \left( a_k^\varepsilon (t)\right) ^2, \end{aligned}$$

where \(a_k^\varepsilon (t) := \int _D \Phi _k(x)u_\varepsilon ^3(t,x)\, \mathrm {d}x\). As in (2.24) and (2.39), one can then show that

$$\begin{aligned} \left| a_k^\varepsilon (t) \right|&\leqslant C\sup _{t\in [0,T]}\left| \int _0^t \int _D \Phi _k(y) \sigma (u(s,y)) \mathbb {1}_{y \in B_\varepsilon (\partial D)}\,L(\mathrm {d}s, \mathrm {d}y) \right| . \end{aligned}$$

Next, let \(\Phi (x):=\big (\sum _{k\geqslant 1} (1+\lambda _k)^r \Phi _k^2(x)\big )^{\frac{1}{2}}\), which belongs to \(L^2(D)\) by (2.47) since \(\Vert \Phi _k\Vert _{L^2(D)} = 1\) and \(r<-\frac{d}{2}\). Hence, assuming without loss of generality that p in \((\mathbf{H }')\) satisfies \(1\leqslant p< 1+\frac{2}{d}\), we have by Lemma A.2:

$$\begin{aligned} \Vert \Phi \sigma (u)\Vert _{L,p}&\leqslant \Vert \Phi \sigma (u)\Vert _{L^B,p} + \Vert \Phi \sigma (u)\Vert _{L^M,p} \leqslant |b| \left\| \int _0^T\int _D |\Phi (x)\sigma (u(t,x))|\,\mathrm {d}t\,\mathrm {d}x\right\| _{L^p} \\&\quad + C\left\| \bigg (\int _0^T\int _D\int _{|z|\leqslant 1} |\Phi (x)\sigma (u(t,x))z|^2\,J(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z)\bigg )^{\frac{1}{2}}\right\| _{L^p} \\&\leqslant \int _0^T\int _D \Phi (x) \Vert \sigma (u(t,x))\Vert _{L^p}\,\mathrm {d}t\,\mathrm {d}x\\&\quad +C \left( \int _0^T\int _D \Phi ^p(x) {\mathbb {E}}[|\sigma (u(t,x))|^p]\,\mathrm {d}t\,\mathrm {d}x\right) ^{\frac{1}{p}}\\&\leqslant C (\Vert \Phi \Vert _{L^1(D)} + \Vert \Phi \Vert _{L^p(D)}) < +\infty . \end{aligned}$$

Thus, by Theorem A.4, there exists an equivalent probability measure \({\mathbb {Q}}\) such that

$$\begin{aligned} \Vert \Phi \sigma (u)\Vert _{L,2,{\mathbb {Q}}}<+\infty . \end{aligned}$$
(2.52)

Furthermore, the Doob–Meyer decomposition of L under \({\mathbb {Q}}\) is given by \(L={L^{B,{\mathbb {Q}}}}+ L^{M,{\mathbb {Q}}}\), where

$$\begin{aligned} {L^{B,{\mathbb {Q}}}}(\mathrm {d}t,\mathrm {d}x)= & {} b^{\mathbb {Q}}(t,x)\,\mathrm {d}t\,\mathrm {d}x= \left( b + \int _{|z|\leqslant 1} (Y(t,x,z)-1)\,\nu (\mathrm {d}z) \right) \,\mathrm {d}t\,\mathrm {d}x,\\ L^{M,{\mathbb {Q}}}(\mathrm {d}t,\mathrm {d}x)= & {} \int _{|z|\leqslant 1} z \,{\tilde{J}}^{\mathbb {Q}}(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z) ,\\ {\tilde{J}}^{\mathbb {Q}}(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z)= & {} J(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z) - Y(t,x,z)\,\mathrm {d}t\,\mathrm {d}x \,\nu (\mathrm {d}z) \end{aligned}$$

with some predictable random function Y, see [13, Theorem 3.6]. By [4, Theorem 4.14], we deduce that \(\Vert \Phi \sigma (u)\Vert _{L^{B,{\mathbb {Q}}},2,{\mathbb {Q}}}<+\infty \) and \(\Vert \Phi \sigma (u)\Vert _{L^{M,{\mathbb {Q}}},2,{\mathbb {Q}}}<+\infty \). As a consequence,

$$\begin{aligned} {\mathbb {E}}_{\mathbb {Q}}\left[ \left( \int _0^T\int _D \Phi (x)|\sigma (u(t,x))b^{\mathbb {Q}}(t,x)|\,\mathrm {d}t\,\mathrm {d}x \right) ^2\right] < +\infty \end{aligned}$$
(2.53)

and

$$\begin{aligned} {\mathbb {E}}_{\mathbb {Q}}\left[ \int _0^T\int _D \int _{|z|\leqslant 1} (\Phi (x)\sigma (u(t,x))Y(t,x,z))^2\,\mathrm {d}t\,\mathrm {d}x\,\nu (\mathrm {d}z)\right] <+\infty . \end{aligned}$$
(2.54)

We obtain

$$\begin{aligned} {\mathbb {E}}_{\mathbb {Q}}\left[ \left\| u_\varepsilon ^3 (t,\cdot ) \right\| _{H_r}^2\right]&=\sum _{k\geqslant 1} (1+\lambda _k)^r {\mathbb {E}}_{\mathbb {Q}}[(a_k^\varepsilon (t))^2]\leqslant C \sum _{k\geqslant 1} (1+\lambda _k)^r \Vert \Phi _k\sigma (u)\mathbb {1}_{B_\varepsilon (\partial D)}\Vert ^2_{L,2,{\mathbb {Q}}}\\&\leqslant C\sum _{k\geqslant 1} (1+\lambda _k)^r \left( \Vert \Phi _k\sigma (u)\mathbb {1}_{B_\varepsilon (\partial D)}\Vert ^2_{L^{B,{\mathbb {Q}}},2,{\mathbb {Q}}} + \Vert \Phi _k\sigma (u)\mathbb {1}_{B_\varepsilon (\partial D)}\Vert ^2_{L^{M,{\mathbb {Q}}},2,{\mathbb {Q}}}\right) . \end{aligned}$$

For the first term in parentheses, we use the Cauchy–Schwarz inequality to obtain

$$\begin{aligned}&\sum _{k\geqslant 1} (1+\lambda _k)^r \Vert \Phi _k\sigma (u)\mathbb {1}_{B_\varepsilon (\partial D)}\Vert ^2_{L^{B,{\mathbb {Q}}},2,{\mathbb {Q}}} \\&\quad = \sum _{k\geqslant 1} (1+\lambda _k)^r {\mathbb {E}}_{\mathbb {Q}}\left[ \left( \int _0^T\int _D |\Phi _k(x)\sigma (u(t,x))\mathbb {1}_{x\in B_\varepsilon (\partial D)}b^{\mathbb {Q}}(t,x)|\,\mathrm {d}t\,\mathrm {d}x\right) ^2\right] \\&\quad = {\mathbb {E}}_{\mathbb {Q}}\bigg [ \sum _{k\geqslant 1} (1+\lambda _k)^r \int _0^T\int _D \int _0^T\int _D |\Phi _k(x)\Phi _k(y)\sigma (u(t,x))\sigma (u(s,y))\mathbb {1}_{x,y\in B_\varepsilon (\partial D)}\\&\qquad \times b^{\mathbb {Q}}(t,x)b^{\mathbb {Q}}(s,y)|\,\mathrm {d}t\,\mathrm {d}x\,\mathrm {d}s\,\mathrm {d}y\bigg ]\\&\quad \leqslant {\mathbb {E}}_{\mathbb {Q}}\left[ \int _0^T\int _D\int _0^T\int _D |\Phi (x)\Phi (y)\sigma (u(t,x))\sigma (u(s,y))\mathbb {1}_{x,y\in B_\varepsilon (\partial D)}\right. \\&\qquad \left. \times b^{\mathbb {Q}}(t,x)b^{\mathbb {Q}}(s,y)|\mathrm {d}t\,\mathrm {d}x\,\mathrm {d}s\,\mathrm {d}y\right] \\&\quad = {\mathbb {E}}_{\mathbb {Q}}\left[ \left( \int _0^T\int _D|\Phi (x)\sigma (u(t,x))\mathbb {1}_{x\in B_\varepsilon (\partial D)}b^{\mathbb {Q}}(t,x)|\,\mathrm {d}t\,\mathrm {d}x\right) ^2\right] \rightarrow 0 \end{aligned}$$

as \(\varepsilon \rightarrow 0\) by (2.53) and dominated convergence. Similarly, (2.54) implies that

$$\begin{aligned}&\sum _{k\geqslant 1} (1+\lambda _k)^r \Vert \Phi _k\sigma (u)\mathbb {1}_{B_\varepsilon (\partial D)}\Vert ^2_{L^{M,{\mathbb {Q}}},2,{\mathbb {Q}}} \\&\quad \leqslant C \sum _{k\geqslant 1} (1+\lambda _k)^r {\mathbb {E}}_{\mathbb {Q}}\left[ \int _0^T \int _D \int _{|z|\leqslant 1}(\Phi _k(x)\sigma (u(t,x)) Y(t,x,z))^2 \mathbb {1}_{x\in B_\varepsilon (\partial D)} \,\mathrm {d}t\,\mathrm {d}x\,\nu (\mathrm {d}z)\right] \\&\quad = C{\mathbb {E}}_{\mathbb {Q}}\left[ \int _0^T \int _D \int _{|z|\leqslant 1}(\Phi (x)\sigma (u(t,x)) Y(t,x,z))^2 \mathbb {1}_{x\in B_\varepsilon (\partial D)} \,\mathrm {d}t\,\mathrm {d}x\,\nu (\mathrm {d}z)\right] \rightarrow 0 \end{aligned}$$

as \(\varepsilon \rightarrow 0\). Altogether, there exists a subsequence of \(u^3_\varepsilon (t,\cdot )\) that converges almost surely to 0 in \(H_r(D)\) for \(r<-\frac{d}{2}\), uniformly in \(t\in [0,T]\), which completes the proof. \(\square \)

3 Partial regularity of the solution

In Sect. 2, we have established the existence of a version such that \(t\mapsto u(t,\cdot )\) has càdlàg paths in (local) fractional Sobolev spaces. The goal of the current section is to investigate the partial regularity of the solution, that is, the behavior of the partial functions \(t\mapsto u(t,x)\) for fixed \(x\in D\) and \(x\mapsto u(t,x)\) for fixed \(t\in [0,T]\). In the case where the Lévy noise L has locally finite intensity (so L is a compound Poisson noise), it is clear that almost surely, no jump will fall onto a fixed t- or x-section of the solution. Because the Green’s function \(G_D(t;x,y)\) is smooth outside \(\{0\}\times \{(x,x):x\in D\}\), the partial functions are continuous, and even smooth, in this case. However, a general Lévy noise can have infinitely many jumps on any compact subset of \([0,T]\times D\), and these jumps may even fail to be summable. Although they still never lie on a fixed section almost surely, they may come arbitrarily close to it, so the regularity of the partial functions is unclear a priori. As we shall show, the answer critically depends on the Blumenthal–Getoor index of the noise (that is, the infimum of all \(p>0\) for which \(\int _{[-1,1]} |z|^p\,\nu (\mathrm {d}z)\) is finite), and both continuous and locally unbounded sample paths may arise.
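As a numerical illustration of the Blumenthal–Getoor index (added here as a sketch, not part of the original argument): for the model measure \(\nu (\mathrm {d}z)=|z|^{-\alpha -1}\,\mathrm {d}z\) on \([-1,1]\), the truncated moments can be computed in closed form, and they show that \(\int _{[-1,1]}|z|^p\,\nu (\mathrm {d}z)\) is finite precisely for \(p>\alpha \). The function name and truncation scheme below are illustrative choices.

```python
import math

# Sketch: for nu(dz) = |z|^{-alpha-1} dz on [-1, 1], the moment
# int_{|z| <= 1} |z|^p nu(dz) is finite exactly when p > alpha,
# so the Blumenthal-Getoor index of this model noise equals alpha.

def truncated_moment(p: float, alpha: float, eps: float) -> float:
    """Closed form of int_{eps <= |z| <= 1} |z|^p * |z|^(-alpha-1) dz."""
    e = p - alpha
    if e == 0.0:
        return 2.0 * math.log(1.0 / eps)
    return 2.0 * (1.0 - eps ** e) / e

alpha = 0.5
# p below the index: the truncated moments blow up as eps -> 0 ...
print([round(truncated_moment(0.25, alpha, 10.0 ** -k), 2) for k in (2, 4, 8)])
# ... while for p above the index they converge (here to 2/(p - alpha) = 8).
print([round(truncated_moment(0.75, alpha, 10.0 ** -k), 2) for k in (2, 4, 8)])
```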

Throughout this section, we consider the stochastic heat Eq. (1.1) on a bounded \(C^\infty \)-regular domain or \(D={\mathbb {R}}^d\), with some Lipschitz continuous \(\sigma :{\mathbb {R}}\rightarrow {\mathbb {R}}\) and some bounded continuous \(u_0:{\bar{D}}\rightarrow {\mathbb {R}}\) that is zero on \(\partial D\). Furthermore, let L be a pure-jump Lévy white noise as in (1.6) and u be the mild solution constructed under the hypotheses in Propositions 2.1, 2.7 or 2.17, respectively. In particular, if \(D={\mathbb {R}}^d\), we are given \(p,q>0\) such that (H) is satisfied; and if D is a bounded domain, there exists \(p>0\) such that \((\mathbf{H }')\) holds.

3.1 Regularity in space at a fixed time

Theorem 3.1

In the setting described above, assume that \(p<\frac{2}{d}\). Then, for any \(t\in [0,T]\), the process \(x\mapsto u(t,x)\) has a continuous modification.

Proof

The solution u is the stationary limit of the mild solution \(u_N\) to the truncated equation defined in (2.7), (2.28) or (2.45) with noise \(L_N\) given in (2.6), (2.27) or (2.44), respectively. Therefore, we can suppose that \(u=u_N\) for some \(N\geqslant 1\), and for simplicity, we only consider \(N=1\). We prove the claim using different approaches depending on the value of p.

\(\underline{1<p<2:}\) Notice that \(1<p<2\) can only occur in \(d=1\) because of the hypothesis \(p< \frac{2}{d}\). However, we keep the exponent d since we will use similar ideas in the next case. Let \(A>0\) be such that \(x\in (-A,A)^d\), and split u into seven parts according to (2.32) and (2.41), with obvious changes when D is bounded. In this case, we further assume that \(A>0\) is large enough such that \(D\subset (-A,A)^d\). Clearly, V is jointly continuous, and as shown in the proof of Theorem 2.15, the same is true for \(u^{1,2}\mathbb {1}_{[-A,A]^d}\), \(u^{2,2}\mathbb {1}_{[-A,A]^d}\) and \(u^{3,2}\) in the case \(D={\mathbb {R}}^d\), while they are zero if D is bounded. Furthermore, almost surely, \(u^{2,1}\) consists of finitely many jumps, none of which occurs at time t, so \(x\mapsto u^{2,1}(t,x)\) is smooth because the Green’s function \(G_D(t;x,y)\) is smooth in x for \(t>0\). It remains to consider \(u^{1,1}+u^{3,1}\), to which we apply the Kolmogorov continuity criterion. We have

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\left[ \left| \left( u^{1,1}+u^{3,1}\right) (t,x)- \left( u^{1,1}+u^{3,1}\right) (t,z)\right| ^p\right] \\&\quad ={\mathbb {E}}\left[ \left| b\int _0^t \int _{[-2A,2A]^d} \left( G_D(t-s; x,y) - G_D(t-s; z,y)\right) \sigma (u(s,y))\, \mathrm {d}s\, \mathrm {d}y \right. \right. \\&\qquad +\,\left. \left. \int _0^t \int _{[-2A,2A]^d} \left( G_D(t-s;x,y) - G_D(t-s; z,y) \right) \sigma (u(s,y)) \,L^M(\mathrm {d}s , \mathrm {d}y) \right| ^p \right] \\&\quad \leqslant C \left( {\mathbb {E}}\left[ \left| b\int _0^t \int _{[-2A,2A]^d} \left( G_D(t-s;x,y) - G_D(t-s; z,y)\right) \sigma (u(s,y)) \,\mathrm {d}s \,\mathrm {d}y \right| ^p \right] \right. \\&\qquad +\,\left. {\mathbb {E}}\left[ \left| \int _0^t \int _{[-2A,2A]^d} \left( G_D(t-s;x,y) - G_D(t-s; z,y) \right) \sigma (u(s,y))\, L^M(\mathrm {d}s , \mathrm {d}y) \right| ^p \right] \right) \end{aligned} \end{aligned}$$

for any \(x,z\in D\). Then, using Hölder’s inequality and (2.3) or (2.29),

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\left[ \left| b\int _0^t \int _{[-2A,2A]^d} \left( G_D(t-s;x,y) - G_D(t-s; z,y)\right) \sigma (u(s,y)) \,\mathrm {d}s\, \mathrm {d}y \right| ^p \right] \\&\quad \leqslant \left( \int _0^t \int _{[-2A,2A]^d} \left| G_D(t-s;x,y) - G_D(t-s; z,y)\right| {\mathbb {E}}\left[ \left| \sigma (u(s,y)) \right| ^p \right] \, \mathrm {d}s\, \mathrm {d}y\right) \\&\qquad \times \left( \int _0^t \int _{[-2A,2A]^d} \left| G_D(t-s;x,y)- G_D(t-s; z,y)\right| \, \mathrm {d}s \, \mathrm {d}y \right) ^{p-1}\\&\quad \leqslant C \left( \int _0^t \int _{D} \left| G_D(t-s;x,y) - G_D(t-s; z,y)\right| \, \mathrm {d}s \, \mathrm {d}y \right) ^{p}. \end{aligned} \end{aligned}$$

For \(D={\mathbb {R}}\), the last term is bounded by \(C |x-z|^p\), see [35, Lemme A2]; for \(D=[0,\pi ]\), we can take the power p inside the integral by Hölder’s inequality, and further assume that \(\frac{3}{2}<p<2\) (by (2.3) there is no harm in taking a larger value of p on bounded domains). Then the upper bound becomes \(C |x-z|^{3-p}\) by [3, Lemma B.1(a)]. For the martingale part, we have by [28, Theorem 1] and (2.29),

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\left[ \left| \int _0^t \int _{[-2A,2A]^d} \left( G_D(t-s;x,y) - G_D(t-s; z,y)\right) \sigma (u(s,y)) L^M(\mathrm {d}s , \mathrm {d}y) \right| ^p \right] \\&\quad \leqslant C \int _0^t \int _{[-2A,2A]^d} \int _{|z|\leqslant 1} |z|^p \left| G_D(t-s;x,y) -G_D(t-s; z,y) \right| ^p \\&\qquad \, \times {\mathbb {E}}\left[ \left| \sigma (u(s,y))\right| ^p \right] \,\mathrm {d}s\, \mathrm {d}y \, \nu (\mathrm {d}z)\\&\quad \leqslant C \int _0^t \int _{D} \left| G_D(t-s;x,y)- G_D(t-s; z,y)\right| ^p \mathrm {d}s\, \mathrm {d}y. \end{aligned} \end{aligned}$$

If \(D={\mathbb {R}}^d\), then according to [35, Lemme A2], this is bounded by

$$\begin{aligned} {\left\{ \begin{array}{ll} C |x-z|^p, &{} \text {if}\quad p<\frac{3}{2}, \\ C|x-z|^{\frac{3}{2}}\left| \log |x-z|\right| , &{} \text {if} \quad p=\frac{3}{2}, \\ C|x-z|^{3-p}, &{} \text {if} \quad p>\frac{3}{2}; \end{array}\right. } \end{aligned}$$

and if \(D=[0,\pi ]\), and we take \(\frac{3}{2}<p<2\), then the upper bound we obtain is again \(C|x-z|^{3-p}\) by [3, Lemma B.1(a)]. Thus, the Kolmogorov continuity criterion (see e.g. [39, Chapter 1, Corollary 1.2]) ensures the existence of a continuous modification of \(u^{1,1}+u^{3,1}\) in the space variable x.

\(\underline{0<p\leqslant 1:}\) We use the same decomposition of u as above, except that we replace b by \(b_0\) and \(L^M(\mathrm {d}t,\mathrm {d}x)\) by \(\int _{|z|\leqslant 1} z \,J(\mathrm {d}t,\mathrm {d}x,\mathrm {d}z)\). The proofs for V, \(u^{2,1}\) and \(u^{2,2}\) are not affected by this change, and up to the modification indicated at the end of the proof of Theorem 2.15, the proof for \(u^{1,2}\) is not affected, either. Since \(\int _{|z|\leqslant 1} |z|\,\nu (\mathrm {d}z) < +\infty \), the small jumps of L are summable and \(u^{1,1}\) is actually a sum of possibly infinitely many terms, each of which is continuous in x because no jump occurs exactly at time t. Furthermore, by [16, Corollary 3.2.8], we have for \(x_0\in D\),

$$\begin{aligned}&{\mathbb {E}}\left[ \left( \int _0^t \int _{[-2A,2A]^d} \int _{|z|\leqslant 1} \sup _{x: |x-x_0|\leqslant 1} G_D(t-s;x,y) \left| z \sigma (u(s,y))\right| \,J(\mathrm {d}s , \mathrm {d}y, \mathrm {d}z) \right) ^p \right] \nonumber \\&\quad \leqslant \int _0^t \int _{[-2A,2A]^d} \int _{|z|\leqslant 1} |z|^p \sup _{x: |x-x_0|\leqslant 1} G^p_D(t-s;x,y) {\mathbb {E}}\left[ \left| \sigma (u(s,y)) \right| ^p \right] \, \mathrm {d}s \, \mathrm {d}y \, \nu (\mathrm {d}z) \nonumber \\&\quad \leqslant C \int _0^t \int _{{\mathbb {R}}^d} \sup _{x: |x-x_0|\leqslant 1} g^p(t-s, x-y) \, \mathrm {d}s \, \mathrm {d}y \nonumber \\&\quad = C\left( \int _0^t \int _{|y-x_0|\leqslant 1} g^p(s, 0) \,\mathrm {d}s \, \mathrm {d}y +\int _0^t \int _{|y-x_0|>1}\left( 4\pi s\right) ^{-\frac{p d}{2} }e^{-\frac{p (|y-x_0|-1)^2}{4s}} \,\mathrm {d}s \, \mathrm {d}y \right) ,\nonumber \\ \end{aligned}$$
(3.1)

which is finite because \(p<\frac{2}{d}\). So the sum defining \(u^{1,1}\) converges locally uniformly in x, which implies that \(x\mapsto u^{1,1}(t,x)\) is continuous almost surely. Recalling that \(b_0 = 0\) for \(p<1\) and \(D={\mathbb {R}}^d\), \(u^{3,1}\) and \(u^{3,2}\) are non-zero in this case only if \(p=1\). Then \(u^{3,2}(t,x)\) is jointly continuous in \((t,x)\), as shown in the proof of Theorem 2.15. If D is bounded, \(u^{3,2}\) is zero. For \(u^{3,1}\) we use the approximation sequence \(u^{3,1}_n\) defined after (2.41). For each n, the process \(x\mapsto u^{3,1}_n(t,x)\) is continuous because

$$\begin{aligned}&\left| u^{3,1}_n(t,x)-u^{3,1}_n(t,x')\right| \\&\leqslant C \int _0^T\int _D \left| G_D(t-s;x,y)-G_D(t-s;x',y)\right| \,\mathrm {d}s\,\mathrm {d}y\rightarrow 0 \end{aligned}$$

as \(x'\rightarrow x\), see [35, Lemme A2] for \(D={\mathbb {R}}^d\) and the proof of [37, Proposition 5] for bounded D. Hence, it suffices to prove that for fixed t, \(u^{3,1}_n(t,x)\) converges to \(u^{3,1}(t,x)\), locally uniformly in x. By dominated convergence, this can be reduced to showing

$$\begin{aligned} \int _0^t \int _{[-2A,2A]^d} \sup _{x: |x-x_0|\leqslant 1} G_D(t-s;x,y)|\sigma (u(s,y))|\,\mathrm {d}s\,\mathrm {d}y<+\infty \quad \text {a.s.} \end{aligned}$$

But this follows from (3.1) together with (2.3), (2.29) and (2.43), respectively, by taking expectation. \(\square \)
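For completeness, the finiteness of (3.1) asserted in the proof reduces to the following power counting; the second, off-diagonal term is finite for every \(p>0\) because of the Gaussian tail:

```latex
\int_0^t g^p(s,0)\,\mathrm{d}s
  = (4\pi)^{-\frac{pd}{2}} \int_0^t s^{-\frac{pd}{2}}\,\mathrm{d}s
  < +\infty
  \quad\Longleftrightarrow\quad
  \frac{pd}{2} < 1
  \quad\Longleftrightarrow\quad
  p < \frac{2}{d}.
```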

Remark 3.2

In particular, any (tempered) \(\alpha \)-stable noise with \(\alpha \in (0,\frac{2}{d})\) (and \(b_0=0\) if \(D={\mathbb {R}}^d\) and \(\alpha <1\)) satisfies the hypothesis of Theorem 3.1. The same holds for (variance-)gamma noises for all \(d\geqslant 1\), inverse Gaussian noises for \(d=1,2,3\) and normal inverse Gaussian noises for \(d=1\) (cf. [14]).

The next theorem shows that the value \(\frac{2}{d}\) in the previous theorem is essentially optimal.

Theorem 3.3

Let \(\sigma =1\) and suppose there is \(\delta >0\) such that the Lévy measure of L satisfies

$$\begin{aligned} \nu (\mathrm {d}z)=\frac{f(z)}{|z|^{\alpha +1}} \,\mathrm {d}z \end{aligned}$$
(3.2)

for \(z\in [-\delta , \delta ]\), where \(\alpha \in [\frac{2}{d},(1+\frac{2}{d})\wedge 2)\) and \(f:[-\delta ,\delta ]\rightarrow [0,+\infty )\) is measurable with \(f(0)\ne 0\) and

$$\begin{aligned} \int _{-\delta }^\delta \frac{|f(z)-f(0)|}{|z|^{\alpha +1}}|z|^r\,\mathrm {d}z<+\infty \end{aligned}$$
(3.3)

for some \(0<r<\frac{2}{d}\). Then for fixed \(t\in [0,T]\), the path \(x\mapsto u(t,x)\) is unbounded on any non-empty open subset of D with probability 1.

Proof

Fix \(t\in [0,T]\). Since V is continuous in \((t,x)\), it suffices to show the unboundedness of

$$\begin{aligned} Y(x)=\int _0^t \int _{D} G_D(t-s;x,y)\,L(\mathrm {d}s, \mathrm {d}y),\quad x\in D.\end{aligned}$$

We start with the case where f is constant, that is, L is an \(\alpha \)-stable noise. Then \((Y(x):x\in D)\) is an \(\alpha \)-stable process given in the form of [34, (10.1.1)] with \(E=[0,T]\times D\) and control measure \(\mathrm {d}s \, \mathrm {d}y\). We shall check that the necessary condition [34, (10.2.14)] for sample path boundedness in [34, Theorem 10.2.3] is not satisfied, in particular that for \(x_0 \in D\), and \(\delta \) such that \(B_{x_0}(\delta ) \subset D\),

$$\begin{aligned} \int _{0}^{t}\int _{D} \left( \sup _{x\in B_{x_0}(\delta )} G_D(t-s,x,y) \right) ^\alpha \, \mathrm {d}s \, \mathrm {d}y =+\infty . \end{aligned}$$
(3.4)

Indeed, by [38, Theorem 2 and Lemma 9], for any \(x,y \in B_{x_0}(\delta )\),

$$\begin{aligned} G_D(t-s,x,y)\geqslant C g(t-s,x-y), \end{aligned}$$
(3.5)

which implies that

$$\begin{aligned} \int _{0}^{t}\int _{D} \sup _{x\in B_{x_0}(\delta )} G_D^\alpha (t-s;x,y)\, \mathrm {d}s \, \mathrm {d}y&\geqslant C \int _{0}^{t}\int _{B_{x_0}(\delta )} \frac{1}{\left( 4\pi (t-s) \right) ^{\frac{\alpha d}{2}}} \,\mathrm {d}s \, \mathrm {d}y= +\infty , \end{aligned}$$

and (3.4) is proved. In the case of general f, we write

$$\begin{aligned} \nu (\mathrm {d}z)= & {} \nu _1(\mathrm {d}z) - \nu _2(\mathrm {d}z)+\nu _3(\mathrm {d}z) +\nu _4(\mathrm {d}z)\\= & {} \left( \frac{\left( f(z)-f(0)\right) _+}{|z|^{\alpha +1}} - \frac{\left( f(z)-f(0)\right) _-}{|z|^{\alpha +1}}+ \frac{f(0)}{|z|^{\alpha +1}}\right) \mathbb {1}_{z \in [-\delta , \delta ]}\,\mathrm {d}z\\&+\,\mathbb {1}_{z \in [-\delta , \delta ]^c}\,\nu (\mathrm {d}z), \end{aligned}$$

and decompose L accordingly into \(L_1-L_2+L_3+L_4\) such that for \(1\leqslant i\leqslant 4\), \(L_i\) has Lévy measure \(\nu _i\) and is independent of the other three parts. If \(u_i\) solves the additive heat equation with driving noise \(L_i\), then by (3.3) and Theorem 3.1, for any fixed \(t\in [0,T]\), \(x\mapsto u_1(t,x)\), \(x\mapsto u_2(t,x)\) and \(x\mapsto u_4(t,x)\) each has a continuous version. Since the first part of the proof shows that \(x\mapsto u_3(t,x)\) is unbounded on any open subset of D, the same property holds for \(x\mapsto u(t,x)\). \(\square \)
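For completeness, the divergence (3.4) used in the first part of the proof is elementary power counting in the time variable:

```latex
\int_0^t \big(4\pi (t-s)\big)^{-\frac{\alpha d}{2}}\,\mathrm{d}s
  = (4\pi)^{-\frac{\alpha d}{2}} \int_0^t v^{-\frac{\alpha d}{2}}\,\mathrm{d}v
  = +\infty
  \quad\Longleftrightarrow\quad
  \frac{\alpha d}{2} \geqslant 1
  \quad\Longleftrightarrow\quad
  \alpha \geqslant \frac{2}{d},
```

which holds precisely on the assumed range of \(\alpha \).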

Remark 3.4

Taking \(f\equiv 1\), Theorem 3.3 shows that for the solution u to the heat equation with an additive \(\alpha \)-stable noise where \(\frac{2}{d} \leqslant \alpha <1+\frac{2}{d}\) (since \(\alpha <2\), this can only occur for \(d\geqslant 2\)), for any \(t\in [0,T]\), \(x\mapsto u(t,x)\) is unbounded on any non-empty open subset of D. The same holds true if \(\nu \) has the form (3.2) with \(\alpha \) in the same range and some f that is \((\alpha -\frac{2}{d} +\varepsilon )\)-Hölder continuous at 0. For \(d\geqslant 2\), this includes tempered stable Lévy noise (with stability index in the indicated range) and normal inverse Gaussian noises.

3.2 Regularity in time at a fixed space point

Theorem 3.5

In the set-up described at the beginning of Sect. 3, assume that \(0<p<1\). Then for any \(x\in D\), the process \(t\mapsto u(t,x)\) has a continuous modification.

We need the following elementary lemma.

Lemma 3.6

If g is the heat kernel (1.4), then there exists \(C>0\) such that for every \(T>0\),

$$\begin{aligned} \sup _{t\in [0,T]} g(t,x) = {\left\{ \begin{array}{ll} C T^{-{\frac{d}{2}}} e^{-\frac{|x|^2}{4T}}, &{} \text {if}\quad T<\frac{|x|^2}{2d},\\ C|x|^{-d},&{} \text {if}\quad T\geqslant \frac{|x|^2}{2d}. \end{array}\right. } \end{aligned}$$
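The lemma is one-variable calculus: \(\partial _t \log g(t,x) = -\frac{d}{2t}+\frac{|x|^2}{4t^2}\) vanishes at \(t_*=\frac{|x|^2}{2d}\), where \(g(t_*,x)\) is proportional to \(|x|^{-d}\), while for \(T<t_*\) the kernel is still increasing at \(t=T\). The following sketch checks this profile numerically (an illustration added here, not part of the original; plain Python):

```python
import math

def g(t: float, x: float, d: int) -> float:
    # Gaussian heat kernel (1.4) on R^d, evaluated at radius |x| = x
    return (4.0 * math.pi * t) ** (-d / 2.0) * math.exp(-x * x / (4.0 * t))

def sup_g(T: float, x: float, d: int, n: int = 100000) -> float:
    # brute-force supremum over a fine grid of t in (0, T]
    return max(g(T * (k + 1) / n, x, d) for k in range(n))

d, x = 2, 1.0
t_star = x * x / (2 * d)  # critical point of t -> g(t, x)

# T < t_star: g(., x) is still increasing, so the sup sits at the endpoint T
T = 0.5 * t_star
assert abs(sup_g(T, x, d) - g(T, x, d)) < 1e-9
# T >= t_star: the sup sits at t_star, which scales like |x|^{-d}
T = 4.0 * t_star
assert abs(sup_g(T, x, d) - g(t_star, x, d)) < 1e-9
print("profile of Lemma 3.6 confirmed")
```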

Proof of Theorem 3.5

Again, by a stopping time argument, it suffices to show the regularity of \(u_N\) for any \(N\geqslant 1\), as defined in (2.7), (2.28) or (2.45), respectively. We only consider \(N=1\), and decompose \(u=V+u^{1,1}+u^{1,2}+u^{2,1}+u^{2,2}+u^3\) as in the part “\(0<p\leqslant 1\)” of the proof of Theorem 3.1 (with \(A>0\) such that \(x\in (-A,A)^d\) and \(D\subset (-A,A)^d\) if D is bounded). There we explained that V, \(u^{1,2}\mathbb {1}_{[-A,A]^d}\) and \(u^{2,2}\mathbb {1}_{[-A,A]^d}\) are jointly continuous. The term \(u^{2,1}\) is again a finite sum of weighted heat kernels, so \(t\mapsto u^{2,1}(t,x)\) is smooth because none of the jumps falls exactly on a given \(x\in D\). Moreover, for \(u^{1,1}\), we can use [35, Théorème 2.2.2] in the case \(D={\mathbb {R}}^d\) because \({\mathbb {E}}[|\sigma (u(t,x))|^p] = {\mathbb {E}}[|\sigma (u_1(t,x))|^p]\) is uniformly bounded on \([0,T]\times [-2A,2A]^d\). The proof also applies to bounded D because the heat kernel \(G_D\) is majorized by a multiple of the heat kernel on \({\mathbb {R}}^d\) by [16, Corollary 3.2.8]. Finally, since \(p<1\) and \(b_0=0\) if \(D={\mathbb {R}}^d\), \(u^3\) only needs to be considered for bounded D. With \(1<r<1+\frac{2}{d}\) and \(0\leqslant t' < t \leqslant T\), we can use Hölder’s inequality, the moment bound (2.3) or (2.43) and [37, Proposition 5] (and the proof therein) to deduce for every \(\gamma \in (0,1)\),

$$\begin{aligned} {\mathbb {E}}[|u^3(t,x)-u^3(t',x)|^r]&\leqslant C\left( \int _0^t \int _D |G_D(t-s;x,y)-G_D(t'-s;x,y)|\,\mathrm {d}s\,\mathrm {d}y\right) ^{r-1}\\&\quad \times \int _0^t \int _D |G_D(t-s;x,y) -G_D(t'-s;x,y)|\\&\quad \times {\mathbb {E}}[|\sigma (u(s,y))|^r] \,\mathrm {d}s\,\mathrm {d}y\\&\leqslant C\left( \int _0^t \int _D |G_D(t-s;x,y)-G_D(t'-s;x,y)|\,\mathrm {d}s\,\mathrm {d}y\right) ^r \\&\leqslant C|t-t'|^{\gamma r}. \end{aligned}$$

Thus, by choosing \(\gamma \) close to 1, the claim follows from Kolmogorov’s continuity theorem. \(\square \)

Theorem 3.7

If \(\sigma \equiv 1\) and \(\nu \) satisfies (3.2) for some \(\alpha \in [1,1+\frac{2}{d})\), \(\delta >0\), and some measurable \(f:[-\delta ,\delta ]\rightarrow [0,+\infty )\) with \(f(0)\ne 0\) and (3.3) for some \(0<r<1\), then for any \(x\in D\), the process \(t\mapsto u(t,x)\) is unbounded on any non-empty open interval in [0, T] with probability 1.

Proof

The proof is analogous to the proof of Theorem 3.3. It suffices to show that

$$\begin{aligned} \int _{t_1}^{t_2}\int _{D} \left( \sup _{t\in [t_1, t_2]} G_D(t-s;x,y) \right) ^\alpha \,\mathrm {d}s \, \mathrm {d}y =+\infty \end{aligned}$$
(3.6)

for \(x\in D\) and \(0\leqslant t_1 < t_2 \leqslant T\). By [38, Theorem 2 and Lemma 9], it suffices to consider \(D={\mathbb {R}}^d\). In this case, the integral above is bounded from below by

$$\begin{aligned} \int _{t_1}^{t_2}\int _{{\mathbb {R}}^d} \sup _{t\in [t_1,t_2]} g(t-s,x-y)^\alpha \, \mathrm {d}s \, \mathrm {d}y \geqslant \int _{0}^{t_2-t_1}\int _{{\mathbb {R}}^d} \sup _{v\in [0,s]} g(v,x-y)^\alpha \,\mathrm {d}s \, \mathrm {d}y, \end{aligned}$$

so (3.6) follows from Lemma 3.6:

$$\begin{aligned}&\int _{t_1}^{t_2}\int _{{\mathbb {R}}^d} \left( \sup _{t\in [t_1, t_2]} g(t-s,x-y) \right) ^\alpha \mathrm {d}s \, \mathrm {d}y\\&\qquad \geqslant \int _0^{t_2-t_1} \int _{|x-y| \leqslant \sqrt{2ds}} \frac{C}{|x-y|^{d\alpha }} \mathrm {d}s \, \mathrm {d}y=+\infty \, .\end{aligned}$$

\(\square \)
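For completeness, the divergence in the last display is elementary power counting in polar coordinates: for any \(\rho >0\) and a dimensional constant \(c_d\),

```latex
\int_{|w|\leqslant \rho} |w|^{-d\alpha}\,\mathrm{d}w
  = c_d \int_0^{\rho} r^{\,d-1-d\alpha}\,\mathrm{d}r
  = +\infty
  \quad\Longleftrightarrow\quad
  d\alpha \geqslant d
  \quad\Longleftrightarrow\quad
  \alpha \geqslant 1,
```

which is why the critical exponent for temporal regularity is \(1\), independently of d, as Remark 3.8 points out.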

Remark 3.8

In contrast to the results on regularity in space, the critical exponent \(p=1\) for temporal regularity does not depend on the dimension d. In particular, for all \(d\geqslant 1\), we obtain continuity of \(t\mapsto u(t,x)\), x fixed, for any (tempered) \(\alpha \)-stable noise with \(\alpha \in (0,1)\) (and \(b_0=0\) if \(D={\mathbb {R}}^d\)), any (variance-)gamma noise and inverse Gaussian noise. In the case of additive noise, \(t\mapsto u(t,x)\) is almost surely unbounded on any non-empty open subinterval for (tempered) \(\alpha \)-stable noises with \(\alpha \in [1,1+\frac{2}{d})\) and normal inverse Gaussian noises.