1 Introduction

In this paper we study one-dimensional stochastic Volterra equations (SVEs) of the form

$$\begin{aligned} X_t&=x_0(t)+\int _0^t (t-s)^{-\alpha }\mu (s,X_s)\,\textrm{d}s+\int _0^t (t-s)^{-\alpha }\sigma (s,X_s)\,\textrm{d}B_s, \quad t\in [0,T], \end{aligned}$$
(1.1)

where \(\alpha \in [0,\frac{1}{2})\), \(x_0:[0,T]\rightarrow {\mathbb {R}}\) is a continuous function, \(\mu ,\sigma :[0,T]\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) are measurable functions and \((B_t)_{t\in [0,T]}\) is a standard Brownian motion. Although the stochastic integral in (1.1) is defined as a classical stochastic Itô integral, a potential solution of this SVE is, in general, neither a semimartingale nor a Markov process. Assuming that \(\mu \) is Lipschitz continuous and \(\sigma \) is \(\xi \)-Hölder continuous for \(\xi \in (\frac{1}{2(1-\alpha )},1]\), we show that pathwise uniqueness for the SVE (1.1) holds and, consequently, that there exists a unique strong solution.

Stochastic Volterra equations have been investigated in probability theory since the seminal works of Berger and Mizel [9, 10] and serve as mathematical models that allow, in particular, the representation of dynamical systems with memory effects such as population growth, the spread of epidemics and turbulent flows. Recently, stochastic Volterra equations of the form (1.1) with non-Lipschitz continuous coefficients have been shown to fit historical and implied volatilities of financial markets remarkably well, see e.g. [8], motivating the use of so-called rough volatility models in mathematical finance, see e.g. [4, 15]. Moreover, SVEs with non-Lipschitz continuous coefficients like (1.1) arise as scaling limits of branching processes in population genetics, see [1, 27].

The existence of strong solutions and pathwise uniqueness for stochastic Volterra equations with sufficiently regular kernels and Lipschitz continuous coefficients are well-known due to classical results such as [9, 10, 30], which have been generalized in various directions, e.g., allowing for anticipating and path-dependent coefficients, see [6, 17, 28, 29]. As long as the kernels of a one-dimensional SVE are sufficiently regular, i.e. excluding the singular kernel \((t-s)^{-\alpha }\) in (1.1), the existence of unique strong solutions can still be obtained when the diffusion coefficients are only 1/2-Hölder continuous, see [4, 32]. The latter results rely crucially on the observation that solutions to SVEs with sufficiently regular kernels are semimartingales, which allows approaches in the spirit of Yamada–Watanabe [36] to be implemented rather directly. Assuming a Lipschitz condition on the coefficients, the existence of unique strong solutions to SVEs with singular kernels was proven in [11, 12], and a slight extension beyond Lipschitz continuous coefficients can be found in [35].

Similarly to the case of ordinary stochastic differential equations (SDEs), the regularity assumptions on the coefficients and on the kernels of a stochastic Volterra equation can be significantly relaxed by considering the concept of weak solutions instead of strong solutions. While weak solutions to a certain class of one-dimensional SVEs were first treated by Mytnik and Salisbury in [27], a comprehensive study of weak solutions to stochastic Volterra equations of convolutional type was recently developed by Abi Jaber, Cuchiero, Larsson and Pulido [2], see also [1, 5]. By introducing a local martingale problem associated to SVEs of convolutional type, Abi Jaber et al. [2] derived the existence of weak solutions to SVEs of convolutional type with sufficiently integrable kernels and continuous coefficients. Assuming additionally that the coefficients of the SVE lead to affine Volterra processes, weak uniqueness was obtained in [1, 3, 13, 27]. The concept of weak solutions to SVEs with general kernels was investigated in [31].

A major challenge in proving pathwise uniqueness for the SVE (1.1) with its singular kernel \((t-s)^{-\alpha }\) is the missing natural semimartingale representation of its potential solution. Assuming that the drift coefficient \(\mu \) does not depend on the solution \((X_t)_{t\in [0,T]}\) and that the diffusion coefficient \(\sigma \) is \(\xi \)-Hölder continuous for \(\xi \in (\frac{1}{2(1-\alpha )},1]\), Mytnik and Salisbury [27] established pathwise uniqueness for the SVE (1.1) by equivalently reformulating the SVE as a stochastic partial differential equation, which then allows a proof of pathwise uniqueness to be carried out in the spirit of Yamada–Watanabe, relying on the methodology developed in [25, 26]. In the present paper, we generalize the results and the method of Mytnik and Salisbury [27] to derive pathwise uniqueness for the stochastic Volterra Eq. (1.1) with general time-inhomogeneous coefficients. As classical transformations that remove the drift of an SDE are not applicable to the SVE (1.1), the general time-inhomogeneous drift coefficient \(\mu \) creates severe novel challenges. For the sake of readability, all proofs are presented in a self-contained manner although some intermediate steps can already be found in the work [27] of Mytnik and Salisbury.

The existence of a unique strong solution to the stochastic Volterra Eq. (1.1) follows from a general version of the Yamada–Watanabe theorem (see [20, 36]), which states that the combination of pathwise uniqueness and the existence of weak solutions to the SVE (1.1) (as obtained in [31]) guarantees the existence of a strong solution. Let us remark that strong existence and pathwise uniqueness play a crucial role in the context of large deviations and as key ingredients to fully justify some numerical schemes, see e.g. [14, 23].

Organization of the paper: Sect. 2 presents the main results on the pathwise uniqueness and strong existence of solutions to stochastic Volterra equations. Section 3 contains the main steps in the proof of pathwise uniqueness, while the remaining Sects. 4-7 provide the necessary auxiliary results to implement these main steps.

Acknowledgments: D. Scheffels gratefully acknowledges financial support by the Research Training Group “Statistical Modeling of Complex Systems” (RTG 1953) funded by the German Science Foundation (DFG).

2 Main results

Let \((\Omega ,{\mathcal {F}},({\mathcal {F}}_t)_{t\in [0,T]},{\mathbb {P}})\) be a filtered probability space, which satisfies the usual conditions, \((B_t)_{t\in [0,T]}\) be a standard Brownian motion and \(T\in (0,\infty )\). We consider the one-dimensional stochastic Volterra equation (SVE)

$$\begin{aligned} X_t&=x_0(t)+\int _0^t (t-s)^{-\alpha }\mu (s,X_s)\,\textrm{d}s+\int _0^t (t-s)^{-\alpha }\sigma (s,X_s)\,\textrm{d}B_s, \quad t\in [0,T], \end{aligned}$$
(2.1)

where \(\alpha \in [0,\frac{1}{2})\), \(x_0:[0,T]\rightarrow {\mathbb {R}}\) is a deterministic continuous function and \(\mu ,\sigma :[0,T]\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) are deterministic, measurable functions. Furthermore, \(\int _0^t (t-s)^{-\alpha }\mu (s,X_s)\,\textrm{d}s\) is defined as a Riemann–Stieltjes integral and \(\int _0^t (t-s)^{-\alpha }\sigma (s,X_s)\,\textrm{d}B_s\) as an Itô integral.

The regularity of the coefficients \(\mu \) and \(\sigma \) and of the initial condition \(x_0\) is determined in the following assumption.

Assumption 2.1

Let \(\alpha \in [0,\frac{1}{2})\), let \(x_0\) be deterministic and \(\beta \)-Hölder continuous for every \(\beta \in (0,\frac{1}{2}-\alpha )\) and let \(\mu ,\sigma :[0,T]\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) be measurable functions such that

  1. (i)

    \(\mu \) and \(\sigma \) are of linear growth, i.e. there is a constant \(C_{\mu ,\sigma }>0\) such that

    $$\begin{aligned} |\mu (t,x)|+|\sigma (t,x)|\le C_{\mu ,\sigma }(1+|x|), \end{aligned}$$

    for all \(t\in [0,T]\) and \(x\in {\mathbb {R}}\).

  2. (ii)

    \(\mu \) is Lipschitz continuous and \(\sigma \) is Hölder continuous of order \(\xi \) in the space variable, uniformly in time, for some \(\xi \in [\frac{1}{2},1]\) such that

    $$\begin{aligned} \xi > \frac{1}{2(1-\alpha )}, \end{aligned}$$

    where in the case of \(\alpha =0\) even equality is allowed; a numerical illustration of this threshold is given directly after this assumption. Hence, there are constants \(C_\mu ,C_\sigma >0\) such that

    $$\begin{aligned} |\mu (t,x)-\mu (t,y)|\le C_\mu |x-y| \quad \text {and}\quad |\sigma (t,x)-\sigma (t,y)|\le C_\sigma |x-y|^{\xi } \end{aligned}$$

    hold for all \(t\in [0,T]\) and \(x,y\in {\mathbb {R}}\).

  3. (iii)

    For every \(K>0\), there is some constant \(C_K>0\) such that, for every \(t\in [0,T]\) and every \(x,y\in [-K,K]\),

    $$\begin{aligned} \bigg | \frac{\mu (t,x)-\mu (t,y)}{\sigma (t,x)-\sigma (t,y)}\bigg | \le C_K, \end{aligned}$$

    where we use the convention \(0/0:=1\).
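To get a feeling for the interplay between \(\alpha \) and \(\xi \) in Assumption 2.1 (ii), here are two sample values of the threshold \(\frac{1}{2(1-\alpha )}\) (a simple numerical illustration):

$$\begin{aligned} \alpha =\tfrac{1}{4}:\quad \xi >\frac{1}{2(1-\frac{1}{4})}=\frac{2}{3}, \qquad \qquad \alpha =0:\quad \xi \ge \frac{1}{2}. \end{aligned}$$

In particular, the admissible Hölder exponent \(\xi \) has to approach 1 as \(\alpha \) approaches \(\frac{1}{2}\), while for \(\alpha =0\) the classical Yamada–Watanabe threshold \(\xi =\frac{1}{2}\) is recovered.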

Assumption 2.1 is a standing assumption throughout the entire paper. Although not always explicitly stated, all results are proven under Assumption 2.1.

Remark 2.2

Assumption 2.1 (iii) is, for example, satisfied by any Lipschitz continuous function \(\mu \) together with \(\sigma \) of the form \(\sigma (t,x)=\mathop {\textrm{sgn}}(x)|x|^\xi \) for \(\xi \in [1/2,1]\). Note that, in interesting cases like the rough Heston model in mathematical finance, solutions to (2.1) are non-negative (see [3, Theorem A.2]), so that the \(\mathop {\textrm{sgn}}\) in the definition of \(\sigma \) does not influence the dynamics of the associated SVE. Then, for \(|x|,|y|\le K\), using the inequality \(\big |\mathop {\textrm{sgn}}(x)|x|^\xi -\mathop {\textrm{sgn}}(y)|y|^\xi \big | \ge K^{-1}|x-y|\), we get

$$\begin{aligned} \bigg | \frac{\mu (t,x)-\mu (t,y)}{\sigma (t,x)-\sigma (t,y)} \bigg | \le C_\mu \frac{|x-y|}{\big |\mathop {\textrm{sgn}}(x)|x|^\xi -\mathop {\textrm{sgn}}(y)|y|^\xi \big |} \le C_\mu \frac{|x-y|}{K^{-1}{|x-y|}}=C_\mu K<\infty . \end{aligned}$$

Nevertheless, while Assumption 2.1 (iii) is crucial for applying a Girsanov transformation in the proof of Theorem 6.4 below, it is not a necessary condition. Indeed, if \(\sigma \) depends only on t, then Assumption 2.1 (iii) cannot be satisfied for general Lipschitz continuous functions \(\mu \), but there exists a unique strong solution by classical results, see e.g. [35].

Based on Assumption 2.1, we obtain a unique strong solution of the stochastic Volterra Eq. (2.1). To this end, let us briefly recall the concepts of strong solutions and pathwise uniqueness. For \(p\ge 1\), let \(L^p(\Omega \times [0,T])\) be the space of all real-valued, p-integrable functions on \(\Omega \times [0,T]\). An \(({\mathcal {F}}_t)_{t\in [0,T]}\)-progressively measurable stochastic process \((X_t)_{t\in [0,T]}\) in \(L^p(\Omega \times [0,T])\), on the given probability space \((\Omega ,{\mathcal {F}},({\mathcal {F}}_t)_{t\in [0,T]},{\mathbb {P}})\), is called (strong) \(L^p\)-solution to the SVE (2.1) if \( \int _0^t (|(t-s)^{-\alpha }\mu (s,X_s)|+|(t-s)^{-\alpha }\sigma (s,X_s)|^2 )\,\textrm{d}s<\infty \) for all \(t\in [0,T]\) and the integral Eq. (2.1) holds a.s. We often call a strong \(L^1\)-solution simply a solution to the SVE (2.1). We say that pathwise uniqueness in \(L^p(\Omega \times [0,T])\) holds for the SVE (2.1) if \({\mathbb {P}}(X_t=\tilde{X}_t, \,\forall t\in [0,T])=1\) for any two \(L^p\)-solutions \((X_t)_{t\in [0,T]}\) and \((\tilde{X}_t)_{t\in [0,T]}\) to the SVE (2.1) defined on the same probability space \((\Omega ,{\mathcal {F}},({\mathcal {F}}_t)_{t\in [0,T]},{\mathbb {P}})\). Moreover, we say that there exists a unique strong \(L^p\)-solution \((X_t)_{t\in [0,T]}\) to the SVE (2.1) if \((X_t)_{t\in [0,T]}\) is a strong \(L^p\)-solution to the SVE (2.1) and pathwise uniqueness in \(L^p\) holds for the SVE (2.1). We say that \((X_t)_{t\in [0,T]}\) is \(\beta \)-Hölder continuous for \(\beta \in (0,1]\) if there exists a modification of \((X_t)_{t\in [0,T]}\) whose sample paths are almost surely \(\beta \)-Hölder continuous.

Note that the kernels \(K_\mu (s,t)=K_\sigma (s,t)=(t-s)^{-\alpha }\) with \(\alpha \in (0,1/2)\) fulfill the assumptions of Lemma 3.1 and Lemma 3.4 in [31] for every

$$\begin{aligned} \varepsilon \in \bigg (0,\frac{1}{\alpha }-2\bigg ) \end{aligned}$$

with

$$\begin{aligned} \gamma =\frac{1}{2+\varepsilon }-\alpha . \end{aligned}$$

This means that, to use the results of [31, Lemma 3.1 and Lemma 3.4], we need to consider \(L^p\)-solutions with

$$\begin{aligned} p>\max \bigg \lbrace \frac{1}{\gamma },1+\frac{2}{\varepsilon } \bigg \rbrace = \max \bigg \{ \frac{ 2+\varepsilon }{1-2\alpha -\varepsilon \alpha },1+\frac{2}{\varepsilon } \bigg \rbrace . \end{aligned}$$
(2.2)

The right-hand side of (2.2) is minimized at \(\varepsilon ^\star =\frac{1-2\alpha }{1+\alpha }\), where the two terms in the maximum coincide. Hence, inserting \(\varepsilon ^\star \) into (2.2), we consider in the following \(L^p\)-solutions and \(L^p\)-pathwise uniqueness for some

$$\begin{aligned} p>3+\frac{6\alpha }{1-2\alpha }. \end{aligned}$$
(2.3)
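For completeness, here is the short computation behind \(\varepsilon ^\star \) and (2.3): equating the two terms in the maximum in (2.2) gives

$$\begin{aligned} \frac{2+\varepsilon }{1-2\alpha -\varepsilon \alpha }=1+\frac{2}{\varepsilon } \quad \Longleftrightarrow \quad \varepsilon =1-2\alpha -\varepsilon \alpha \quad \Longleftrightarrow \quad \varepsilon =\frac{1-2\alpha }{1+\alpha }=\varepsilon ^\star , \end{aligned}$$

and inserting \(\varepsilon ^\star \) yields \(1+\frac{2}{\varepsilon ^\star }=1+\frac{2(1+\alpha )}{1-2\alpha }=3+\frac{6\alpha }{1-2\alpha }\), which is exactly the bound in (2.3). Note also that the restriction \(\varepsilon \in (0,\frac{1}{\alpha }-2)\) is precisely what guarantees \(\gamma =\frac{1}{2+\varepsilon }-\alpha >0\).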

The following theorem states that pathwise uniqueness for the stochastic Volterra Eq. (2.1) holds, which is the main result of the present work.

Theorem 2.3

Suppose Assumption 2.1 and let p be given by (2.3). Then, \(L^p\)-pathwise uniqueness holds for the stochastic Volterra Eq. (2.1).

The proof of Theorem 2.3 will be summarized in Sect. 3 and the subsequent Sects. 4-7 provide the necessary auxiliary results. Relying on the pathwise uniqueness and the classical Yamada–Watanabe theorem, we get the existence of a unique strong solution.

Corollary 2.4

Suppose Assumption 2.1 and let p be given by (2.3). Then, there exists a unique strong \(L^p\)-solution to the stochastic Volterra Eq. (2.1).

Proof

The \(L^p\)-pathwise uniqueness is provided by Theorem 2.3. The existence of a strong \(L^p\)-solution follows from the existence of a weak \(L^p\)-solution to the stochastic Volterra Eq. (2.1), which is provided by [32, Theorem 3.3]; the latter is applicable since the kernel \((t-s)^{-\alpha }\), \(\alpha \in [0,\frac{1}{2})\), fulfills the required assumptions of [32, Theorem 3.3], cf. [32, Remark 3.5]. Thanks to the Yamada–Watanabe theorem (see [36, Corollary 1], or [20, Theorem 1.5] for a generalized version), the existence of a weak \(L^p\)-solution and \(L^p\)-pathwise uniqueness imply the existence of a unique strong \(L^p\)-solution. \(\square \)

Furthermore, we obtain the following regularity properties of solutions to the SVE (2.1).

Lemma 2.5

Suppose Assumption 2.1, and let \((X_t)_{t\in [0,T]}\) be a strong \(L^p\)-solution to the stochastic Volterra Eq. (2.1) with p given by (2.3). Then, \(\sup _{t\in [0,T]}{\mathbb {E}}[|X_t|^q]<\infty \) for any \(q\ge 1\) and the sample paths of \((X_t)_{t\in [0,T]}\) are \(\beta \)-Hölder continuous for any \(\beta \in (0,\frac{1}{2}-\alpha )\).

Proof

The statements follow by [31, Lemma 3.1 and Lemma 3.4] since the kernel \((t-s)^{-\alpha }\) fulfills the regularity assumption of [31, Lemma 3.1 and Lemma 3.4] as shown in [32, Remark 3.5]. \(\square \)

For \(k\in {\mathbb {N}}\cup \lbrace \infty \rbrace \), we write \(C^k({\mathbb {R}})\), \(C^k({\mathbb {R}}_+)\) and \(C^k([0,T]\times {\mathbb {R}})\) for the spaces of continuous functions mapping from \({\mathbb {R}}\), \({\mathbb {R}}_+\) resp. \([0,T]\times {\mathbb {R}}\) to \({\mathbb {R}}\), that are k-times continuously differentiable. We use an index 0 to indicate compact support, e.g. \(C_0^\infty ({\mathbb {R}})\) denotes the space of smooth functions with compact support on \({\mathbb {R}}\). The space of square integrable functions \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is denoted by \(L^2({\mathbb {R}})\) and equipped with the usual scalar product \(\langle \cdot , \cdot \rangle \). Moreover, a ball in \({\mathbb {R}}\) around x with radius \(R>0\) is defined by \(B(x,R):=\{y\in {\mathbb {R}}: |y-x|\le R\}\) and we use the notation \(A_{\eta }\lesssim B_{\eta }\) for a generic parameter \(\eta \), meaning that \(A_{\eta }\le CB_{\eta }\) for some constant \(C>0\) independent of \(\eta \).

3 Proof of pathwise uniqueness

We prove Theorem 2.3 by generalizing the well-known techniques of Yamada–Watanabe (cf. [36, Theorem 1]) and the work of Mytnik and Salisbury [27]. One of the main challenges is the missing semimartingale property of a solution \((X_t)_{t\in [0,T]}\) to the SVE (2.1). Therefore, we transform (2.1) into a random field in Step 1, for which we can derive a semimartingale decomposition in (3.2). Then, we implement an approach in the spirit of Yamada–Watanabe in Steps 2–5 and conclude the pathwise uniqueness by using a Grönwall inequality for weak singularities in Step 6.

Proof of Theorem 2.3

Suppose there are two strong \(L^p\)-solutions \((X^1_t)_{t\in [0,T]}\) and \((X^2_t)_{t\in [0,T]}\) to the stochastic Volterra Eq. (2.1).

Step 1: To induce a semimartingale structure, we introduce the random fields

$$\begin{aligned} X^i(t,x):=x_0(t)+\int _0^t p_{t-s}^\theta (x)\mu (s,X_s^i)\,\textrm{d}s+\int _0^tp_{t-s}^\theta (x)\sigma (s,X_s^i)\,\textrm{d}B_s, \end{aligned}$$
(3.1)

for \(t\in [0,T]\), \(x\in {\mathbb {R}}\) and \(i=1,2\), where the densities \(p_t^\theta :{\mathbb {R}}\rightarrow {\mathbb {R}}\) are defined in (4.3) and \(\theta >0\) is chosen such that \(\alpha =\frac{1}{2+\theta }\), cf. (4.5). By Proposition 4.12, we get that \(X^i\in C([0,T]\times {\mathbb {R}})\) and

$$\begin{aligned} \begin{aligned} \int _{{\mathbb {R}}}X^i(t,x)\Phi _t(x)\,\textrm{d}x&=\int _{{\mathbb {R}}}\bigg ( x_0(0)\Phi _0(x)+\int _0^t\Phi _s(x)\frac{\partial }{\partial s}x_0(s)\,\textrm{d}s\bigg )\,\textrm{d}x\\&\quad +\int _0^t\int _{{\mathbb {R}}}X^i(s,x)\bigg ( \Delta _\theta \Phi _s(x)+\frac{\partial }{\partial s}\Phi _s(x) \bigg )\,\textrm{d}x\,\textrm{d}s\\&\quad +\int _0^t\mu (s,X^i(s,0))\Phi _s(0)\,\textrm{d}s +\int _0^t\sigma (s,X^i(s,0))\Phi _s(0)\,\textrm{d}B_s, \end{aligned} \end{aligned}$$
(3.2)

for \(t\in [0,T]\) and every \(\Phi \in C^2_0([0,T]\times {\mathbb {R}})\), where the differential operator \(\Delta _\theta \) is defined in (4.2) and \(\frac{\partial }{\partial s}x_0(s)\) is meant in the sense of distributions. Notice that, due to (3.2), the stochastic process \(t\mapsto \int _{{\mathbb {R}}}X^i(t,x)\Phi _t(x)\,\textrm{d}x\) is a semimartingale and that \(X^i(t,0)=X^i_t\) for \(t\in [0,T]\).

Step 2: We define suitable sequences \((\Phi _x^m)\subset C^2_0({\mathbb {R}})\), for \(x\in {\mathbb {R}}\), and \((\phi _n)\subset C^\infty ({\mathbb {R}})\) of test functions, see (6.1) and (5.3) for the precise definitions, such that

$$\begin{aligned} \Phi _x^m \rightarrow \delta _x\quad \text {as }m\rightarrow \infty ,\quad \text {for every } x\in {\mathbb {R}}, \quad \text {and}\quad \phi _n\rightarrow |\cdot |\quad \text {as }n\rightarrow \infty . \end{aligned}$$

Applying Proposition 5.1 (which is based on Itô’s formula and (3.2)) and setting \(\tilde{X}(t):=\tilde{X}(t,\cdot ):=X^1(t,\cdot )-X^2(t,\cdot )\) for \(t\in [0,T]\), we get

$$\begin{aligned} \phi _n(\langle \tilde{X}(t),\Phi _{x}^m \rangle )&=\int _0^t \phi _n'(\langle \tilde{X}(s), \Phi _x^m \rangle )\langle \tilde{X}(s),\Delta _\theta \Phi _{x}^m \rangle \,\textrm{d}s \\&\quad +\int _0^t \phi _n'(\langle \tilde{X}(s),\Phi _x^m \rangle )\Phi _x^m(0) \big ( \mu (s,X^1(s,0))-\mu (s,X^2(s,0)) \big )\,\textrm{d}s\\&\quad +\int _0^t \phi _n'(\langle \tilde{X}(s), \Phi _x^m \rangle )\Phi _x^m(0) \big ( \sigma (s,X^1(s,0))-\sigma (s,X^2(s,0))\big ) \,\textrm{d}B_s\\&\quad + \frac{1}{2}\int _0^t \psi _n(| \langle \tilde{X}(s),\Phi _x^m \rangle |)\Phi _x^m(0)^2 \big ( \sigma (s,X^1(s,0))-\sigma (s,X^2(s,0)) \big )^2\,\textrm{d}s , \end{aligned}$$

where \(\langle \cdot , \cdot \rangle \) denotes the scalar product on \(L^2({\mathbb {R}})\).

Step 3: To implement an approach in the spirit of Yamada–Watanabe, we need to introduce another suitable test function \(\Psi \in C([0,T]\times {\mathbb {R}})\) (satisfying Assumption 5.2 below). Denoting by \({\dot{\Psi }}:=\frac{\partial }{\partial s}\Psi \) the time derivative of \(\Psi \), Proposition 5.3 leads to

$$\begin{aligned}&\langle \phi _n(\langle \tilde{X}(t),\Phi _{\cdot }^m \rangle ),\Psi _t \rangle \\&\quad = \int _0^t \langle \phi _n'(\langle \tilde{X}(s),\Phi _\cdot ^m \rangle )\langle \tilde{X}(s),\Delta _\theta \Phi _{\cdot }^m \rangle ,\Psi _s \rangle \,\textrm{d}s\\&\quad \quad +\int _0^t \langle \phi _n'(\langle \tilde{X}(s),\Phi _\cdot ^m \rangle )\Phi _\cdot ^m(0),\Psi _s \rangle \big ( \mu (s,X^1(s,0))-\mu (s,X^2(s,0)) \big )\,\textrm{d}s\\&\quad \quad +\int _0^t \langle \phi _n'(\langle \tilde{X}(s),\Phi _\cdot ^m \rangle )\Phi _\cdot ^m(0),\Psi _s \rangle \big ( \sigma (s,X^1(s,0))-\sigma (s,X^2(s,0)) \big )\,\textrm{d}B_s\\&\quad \quad +\frac{1}{2}\int _0^t \langle \psi _n(| \langle \tilde{X}(s),\Phi _\cdot ^m \rangle |)\Phi _\cdot ^m(0)^2,\Psi _s \rangle \big ( \sigma (s,X^1(s,0))-\sigma (s,X^2(s,0)) \big )^2 \,\textrm{d}s\\&\quad \quad +\int _0^t \langle \phi _n(\langle \tilde{X}(s),\Phi _\cdot ^m \rangle ),{\dot{\Psi }}_s \rangle \,\textrm{d}s. \end{aligned}$$

Step 4: Using the stopping time \(T_{\xi ,K}\) defined in (6.47), taking expectations and sending \(n,m\rightarrow \infty \), Proposition 6.11 states that

$$\begin{aligned}&{\mathbb {E}}\big [\big \langle |\tilde{X}(t\wedge T_{\xi ,K})|,\Psi _{t\wedge T_{\xi ,K}} \big \rangle \big ] \\&\quad \lesssim {\mathbb {E}} \bigg [ \int _0^{t\wedge T_{\xi ,K}}\int _{{\mathbb {R}}}|\tilde{X}(s,x)|\Delta _\theta \Psi _s(x)\,\textrm{d}x\,\textrm{d}s \bigg ]\\&\quad \quad +\int _0^{t\wedge T_{\xi ,K}}\Psi _s(0){\mathbb {E}}[|\tilde{X}(s,0)|]\,\textrm{d}s + {\mathbb {E}}\bigg [ \int _0^{t\wedge T_{\xi ,K}}\int _{{\mathbb {R}}}|\tilde{X}(s,x)|{\dot{\Psi }}_s(x)\,\textrm{d}x\,\textrm{d}s \bigg ]. \end{aligned}$$

Step 5: Since \(T_{\xi ,K}\rightarrow T\) as \(K\rightarrow \infty \) a.s. by Corollary 6.8, applying Fatou’s lemma yields

$$\begin{aligned} \int _{{\mathbb {R}}}{\mathbb {E}}[ |\tilde{X}(t,x)|]\Psi _t(x)\,\textrm{d}x&\lesssim \int _0^{t}\int _{{\mathbb {R}}}{\mathbb {E}}[|\tilde{X}(s,x)|] | \Delta _\theta \Psi _s(x)+{\dot{\Psi }}_s(x)|\,\textrm{d}x\,\textrm{d}s \nonumber \\&\quad + \int _0^t \Psi _s(0) {\mathbb {E}}[|\tilde{X}(s,0)|]\,\textrm{d}s. \end{aligned}$$
(3.3)

Finally, we choose appropriate test functions \((\Psi _{N,M})_{N,M\in {\mathbb {N}}}\) (satisfying Assumption 5.2) to approximate the Dirac distribution around 0 with \(\Psi _{N,M}(t,\cdot )\). Thus, choosing \(\Psi _t(x)=\Psi _{N,M}(t,x)\) in (3.3) and sending \(N,M\rightarrow \infty \) yields, by Proposition 7.3, that

$$\begin{aligned} {\mathbb {E}}[ |\tilde{X}(t,0)|]\lesssim \int _0^t (t-s)^{-\alpha }{\mathbb {E}}[|\tilde{X}(s,0)| ]\,\textrm{d}s, \quad t\in [0,T]. \end{aligned}$$

Step 6: Due to \(\alpha \in (0,\frac{1}{2})\), Grönwall’s inequality for weak singularities (see e.g. [18, Lemma A.2]) reveals

$$\begin{aligned} {\mathbb {E}}[ |\tilde{X}(t,0)|]=0, \quad t\in [0,T], \end{aligned}$$

and therefore \(X^1_t=X^2_t\) a.s. for every \(t\in [0,T]\). By the continuity of \(X^1\) and \(X^2\) (see Lemma 2.5), we conclude the claimed pathwise uniqueness. \(\square \)
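For the reader's convenience, we sketch the weakly singular Grönwall argument used in Step 6; the precise statement we invoke is [18, Lemma A.2], and the iteration below is a standard way to obtain it. The function \(f(t):={\mathbb {E}}[|\tilde{X}(t,0)|]\) is bounded on \([0,T]\) by Lemma 2.5 and satisfies \(f(t)\le C\int _0^t(t-s)^{-\alpha }f(s)\,\textrm{d}s\) for some constant \(C>0\). Iterating this inequality n times and using the convolution identity for the kernels \(t^{\beta -1}/\Gamma (\beta )\) gives

$$\begin{aligned} f(t)\le C^n\,\frac{\Gamma (1-\alpha )^n}{\Gamma (n(1-\alpha ))}\int _0^t (t-s)^{n(1-\alpha )-1}f(s)\,\textrm{d}s \le C^n\,\frac{\Gamma (1-\alpha )^n\,T^{n(1-\alpha )}}{\Gamma (n(1-\alpha )+1)}\,\sup _{s\in [0,T]}f(s), \end{aligned}$$

and the right-hand side tends to 0 as \(n\rightarrow \infty \) since the Gamma function in the denominator grows super-exponentially; hence \(f\equiv 0\) on \([0,T]\).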

4 Step 1: Transformation into an SPDE

Recall that, in general, a solution \((X_t)_{t\in [0,T]}\) of the SVE (2.1) is not a semimartingale due to the t-dependence of the kernel. In this section we transform the SVE (2.1) into a stochastic partial differential equation (SPDE) in distributional form, see (3.2), which allows us to recover a semimartingale structure and, thus, to implement an approach in the spirit of Yamada–Watanabe.

To that end, we consider the evolution equation

$$\begin{aligned} \begin{aligned} \frac{\partial u}{\partial t}(t,x)&=\Delta _\theta u(t,x),\quad t\in [0,T],\,x\in {\mathbb {R}}_+,\\ u(0,x)&=\delta _0(x), \end{aligned} \end{aligned}$$
(4.1)

where the differential operator \(\Delta _\theta \) is defined by

$$\begin{aligned} \Delta _\theta := \frac{2}{(2+\theta )^2} \frac{\partial }{\partial x}|x|^{-\theta }\frac{\partial }{\partial x} \end{aligned}$$
(4.2)

for some constant \(\theta >0\). Note that we will later also consider the evolution Eq. (4.1) on \(t\in [0,T]\), \(x\in {\mathbb {R}}\). It can be seen that the following densities solve (4.1) if restricted to \(x\in {\mathbb {R}}_+\):

$$\begin{aligned} p_t^\theta (x):=c_\theta t^{-\frac{1}{2+\theta }}e^{-\frac{|x|^{2+\theta }}{2t}},\quad t\in [0,T],\,x\in {\mathbb {R}}_+, \end{aligned}$$
(4.3)

which we extend to \({\mathbb {R}}\) by setting

$$\begin{aligned} p_{t}^\theta (x):=p_{t}^\theta (|x|), \quad t\in [0,T],\, x\in {\mathbb {R}}. \end{aligned}$$

Since \(\int _0^\infty p_t^\theta (x)\,\textrm{d}x\) is independent of \(t\in (0,T]\), one can verify that if we choose the constant

$$\begin{aligned} c_\theta := (2+\theta )2^{-\frac{1}{2+\theta }}\Gamma \bigg (\frac{1}{2+\theta }\bigg )^{-1}, \end{aligned}$$
(4.4)

where \(\Gamma \) denotes the Gamma function, then \(p_t^\theta :{\mathbb {R}}_+\rightarrow {\mathbb {R}}_+\) defines a probability density on \({\mathbb {R}}_+\). The reason why we consider (4.1) is that, by choosing \(\theta >0\) such that

$$\begin{aligned} \alpha =\frac{1}{2+\theta }, \end{aligned}$$
(4.5)

we get that for \(x=0\) the solution \(p_{t-s}^\theta (0)\) represents the kernel in the SVE (2.1) up to a constant. Therefore, we obtain the following lemma.
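Explicitly, evaluating the density (4.3) at \(x=0\) and using (4.5), we record for later reference that

$$\begin{aligned} p_{t-s}^\theta (0)=c_\theta (t-s)^{-\frac{1}{2+\theta }}=c_\theta (t-s)^{-\alpha }, \quad 0\le s<t\le T, \end{aligned}$$

i.e. the kernel of the SVE (2.1) is recovered up to the multiplicative constant \(c_\theta \).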

Lemma 4.1

Every strong \(L^p\)-solution \((X_t)_{t\in [0,T]}\) of the SVE (2.1) defines an a.s. continuous strong solution \((X(t,x))_{t\in [0,T],x\in {\mathbb {R}}}\) of

$$\begin{aligned} X(t,x)=x_0(t)&+\int _0^t p_{t-s}^\theta (x)\mu (s,X(s,0))\,\textrm{d}s \nonumber \\&+\int _0^tp_{t-s}^\theta (x)\sigma (s,X(s,0))\,\textrm{d}B_s,\quad t\in [0,T],x\in {\mathbb {R}}, \end{aligned}$$
(4.6)

with \(\theta >0\) chosen such that (4.5) holds, i.e., on the probability space \((\Omega ,{\mathcal {F}},({\mathcal {F}}_t)_{t\in [0,T]},\mathbb {P})\), there is a random field \((X(t,x))_{t\in [0,T],x\in {\mathbb {R}}}\) such that \(X\in C([0,T]\times {\mathbb {R}})\) a.s., \((X(t,x))_{t\in [0,T]}\) is \(({\mathcal {F}}_t)\)-progressively measurable for \(x\in {\mathbb {R}}\),

$$\begin{aligned} \int _0^t\big (|p_{t-s}^\theta (x)\mu (s,X(s,0))|+|p^\theta _{t-s}(x)\sigma (s,X(s,0))|^2 \big )\,\textrm{d}s<\infty \end{aligned}$$

and (4.6) holds a.s. Conversely, every strong solution of (4.6) defines a strong solution of the stochastic Volterra Eq. (2.1).

Proof

First, we assume that there is a solution to the SVE (2.1). Since \(p_{t-s}^\theta (0)=c_\theta (t-s)^{-\alpha }\), this yields, after absorbing the constant \(c_\theta \) into the coefficients \(\mu \) and \(\sigma \) (which does not affect Assumption 2.1), a solution Y to the SVE

$$\begin{aligned} Y_t=x_0(t)+\int _0^t p_{t-s}^\theta (0)\mu (s,Y_s)\,\textrm{d}s+\int _0^t p_{t-s}^\theta (0)\sigma (s,Y_s)\,\textrm{d}B_s. \end{aligned}$$

We define, for \(t\in [0,T], x\in {\mathbb {R}}\),

$$\begin{aligned} X(t,x):=x_0(t)+\int _0^t p_{t-s}^\theta (x)\mu (s,Y_s)\,\textrm{d}s+\int _0^tp_{t-s}^\theta (x)\sigma (s,Y_s)\,\textrm{d}B_s. \end{aligned}$$

Then, since \(X(t,0)=Y_t\), X solves

$$\begin{aligned} X(t,x)=x_0(t)+\int _0^t p_{t-s}^\theta (x)\mu (s,X(s,0))\,\textrm{d}s+\int _0^tp_{t-s}^\theta (x)\sigma (s,X(s,0))\,\textrm{d}B_s. \end{aligned}$$

By the adaptedness of the Itô integral and the Riemann–Stieltjes integral, \((X(t,x))_{t\in [0,T]}\) is \(({\mathcal {F}}_t)\)-progressively measurable for every \(x\in {\mathbb {R}}\). By the continuity of \(p_t^\theta (x)\), \(X(t,x)\) is continuous in the x-direction. By the continuity of the initial condition \(x_0\) and of the integrals, it is also continuous in the t-direction.

Conversely, if \(X=(X(t,x))_{t\in [0,T],x\in {\mathbb {R}}}\) solves (4.6), \(Y_t:=X(t,0)\) is a solution of (2.1). \(\square \)

Due to the transformation of the SVE (2.1) into the SPDE (4.6), we shall study continuous solutions \(X\in C([0,T]\times {\mathbb {R}})\) of the SPDE (4.6) instead of solutions to the SVE (2.1) directly. The next goal is to derive a regularity result for solutions of the SPDE (4.6). For this purpose, we first investigate the densities \(p_t^{\theta }\). We introduce some auxiliary lemmas, which are helpful for a better understanding of the densities \(p_t^\theta \), and suppress the dependence on \(\theta \) by writing (recall from (4.5) that \(\alpha =\frac{1}{2+\theta }\), so that \(2+\theta =\frac{1}{\alpha }\))

$$\begin{aligned} p_t(x):=ct^{-\alpha }e^{-\frac{|x|^{\frac{1}{\alpha }}}{2t}}\quad \text {for a fixed } \alpha \in (0, 1/2). \end{aligned}$$

Lemma 4.2

For any \(x,y\in {\mathbb {R}}\), \(t\in [0,T]\) and \(\beta \in [0,1]\), one has

$$\begin{aligned} |p_t(x)-p_t(y)| \lesssim t^{-\alpha }\bigg ( \frac{|x-y|}{t}\bigg )^\beta \max (|x|,|y|)^{(\frac{1}{\alpha }-1)\beta }. \end{aligned}$$

Proof

First, let us fix \(t\in [0,T]\) and consider the function \(x\mapsto e^{-\frac{|x|^{1/\alpha }}{2t}}\). By applying the mean value theorem and assuming w.l.o.g. \(|y|<|x|\), we obtain, for some \(z\in [|y|,|x|]\),

$$\begin{aligned} \frac{e^{-\frac{|x|^{\frac{1}{\alpha }}}{2t}} - e^{-\frac{|y|^{\frac{1}{\alpha }}}{2t}}}{|x|-|y|} = -\frac{z^{\frac{1}{\alpha }-1}}{2t\alpha }e^{-\frac{z^{1/\alpha }}{2t}}, \end{aligned}$$

which reveals that

$$\begin{aligned} \bigg | e^{-\frac{|x|^{\frac{1}{\alpha }}}{2t}}-e^{-\frac{|y|^{\frac{1}{\alpha }}}{2t}}\bigg | \le \frac{|x-y|}{2t\alpha }|x|^{\frac{1}{\alpha }-1}. \end{aligned}$$
(4.7)

Using that the difference of the two exponentials is bounded by one (so that it is dominated by its \(\beta \)-th power for \(\beta \in [0,1]\)) and inequality (4.7), we bound

$$\begin{aligned} |p_t(x)-p_t(y)| \lesssim t^{-\alpha }\bigg | e^{-\frac{|x|^{\frac{1}{\alpha }}}{2t}}-e^{-\frac{|y|^{\frac{1}{\alpha }}}{2t}}\bigg |^\beta \lesssim t^{-\alpha }\bigg ( \frac{|x-y|}{t} \bigg )^\beta \max (|x|,|y|)^{(\frac{1}{\alpha }-1)\beta }. \end{aligned}$$

\(\square \)

Corollary 4.3

For any \(x,y\in [-1,1]\), \(t\in [0,T]\) and \(\beta \in (0,1-\alpha )\), one has

$$\begin{aligned} \int _0^t|p_s(x)-p_s(y)| \,\textrm{d}s\lesssim |x-y|^\beta . \end{aligned}$$

Proof

By Lemma 4.2, we see that

$$\begin{aligned} \int _0^t|p_s(x)-p_s(y)| \,\textrm{d}s&\lesssim \int _0^t s^{-\alpha }\bigg (\frac{|x-y|}{s}\bigg )^\beta \max (|x|,|y|)^{(\frac{1}{\alpha }-1)\beta }\,\textrm{d}s\\&\lesssim |x-y|^\beta \int _0^t s^{-\alpha -\beta }\,\textrm{d}s\lesssim |x-y|^\beta . \end{aligned}$$

\(\square \)

Lemma 4.4

For any \(0<t<t'\le T\) and \(x\in {\mathbb {R}}\), one has

$$\begin{aligned} \int _0^t ( p_{t'-s}(x)-p_{t-s}(x))^2\,\textrm{d}s \lesssim |t'-t|^{1-2\alpha }. \end{aligned}$$

Proof

We assume w.l.o.g. that \(t'-t\le t\) and use the linearity of the integral together with \(|e^{-x}|\le 1\) for non-negative x to get

$$\begin{aligned} \int _0^t | p_{t'-s}(x)-p_{t-s}(x)|^{2}\,\textrm{d}s&\lesssim \int _{t-|t'-t|}^t | (t'-s)^{-\alpha }-(t-s)^{-\alpha }|^{2}\,\textrm{d}s\\&\quad +\int _0^{t-|t'-t|} |p_{t'-s}(x)-p_{t-s}(x) |^{2}\,\textrm{d}s\\&\lesssim \int _{t-|t'-t|}^t (t-s)^{-2\alpha }\,\textrm{d}s\\&\quad +\int _0^{t-|t'-t|} | (t-s)^{-\alpha }-(t'-s)^{-\alpha } |^{2}e^{-\frac{|x|^{\frac{1}{\alpha }}}{2(t-s)}}\,\textrm{d}s\\&\qquad +\int _0^{t-|t'-t|} (t'-s)^{-2\alpha }\bigg |e^{-\frac{|x|^{\frac{1}{\alpha }}}{2(t-s)}}-e^{-\frac{|x|^{\frac{1}{\alpha }}}{2(t'-s)}}\bigg |\,\textrm{d}s\\&=:I_1+I_2+I_3. \end{aligned}$$

For \(I_1\), we directly compute

$$\begin{aligned} I_1=\bigg [ \frac{-(t-s)^{1-2\alpha }}{1-2\alpha } \bigg ]_{t-|t'-t|}^t \lesssim |t'-t|^{1-2\alpha }. \end{aligned}$$

For \(I_2\), we use \(|a-b|^2\le a^2-b^2\) for \(a>b\) to bound

$$\begin{aligned} I_2&\le \int _0^{t-|t'-t|} (t-s)^{-2\alpha } \,\textrm{d}s - \int _0^{t-|t'-t|} (t'-s)^{-2\alpha }\,\textrm{d}s\\&=\bigg [ \frac{-(t-s)^{1-2\alpha }}{1-2\alpha } \bigg ]_0^{t-|t'-t|} - \bigg [ \frac{-(t'-s)^{1-2\alpha }}{1-2\alpha } \bigg ]_{0}^{t-|t'-t|}\\&\lesssim |t'-t|^{1-2\alpha }. \end{aligned}$$

For \(I_3\), we use the mean value theorem for the function \(t\mapsto e^{-\frac{|x|^{\frac{1}{\alpha }}}{2(t-s)}}\), similarly as we did in (4.7), to get the inequality

$$\begin{aligned} \bigg | e^{-\frac{|x|^{\frac{1}{\alpha }}}{2(t-s)}}-e^{-\frac{|x|^{\frac{1}{\alpha }}}{2(t'-s)}}\bigg | \le (t'-t)\frac{|x|^{\frac{1}{\alpha }}}{2(t-s)^2} e^{-\frac{|x|^{\frac{1}{\alpha }}}{2(t'-s)}}. \end{aligned}$$

Using this and the inequality \(e^{-x}\le x^{-1}\) for all \(x> 0\), as well as \(\frac{t'-t}{t-s}\le 1\) and \(\frac{t'-s}{t-s}\le \frac{2(t-s)}{t-s}=2\) due to \(s\le t-|t'-t|\), we get

$$\begin{aligned} I_3&\le (t'-t) \int _0^{t-|t'-t|} (t-s)^{-2\alpha }\bigg (\frac{|x|^{\frac{1}{\alpha }}}{2(t-s)^2} e^{-\frac{|x|^{\frac{1}{\alpha }}}{2(t'-s)}}\bigg ) \,\textrm{d}s\\&\lesssim \int _0^{t-|t'-t|} (t-s)^{-2\alpha }\frac{(t'-t)(t'-s)}{(t-s)^2}\,\textrm{d}s\\&\lesssim \int _0^{t-|t'-t|} (t-s)^{-2\alpha } \,\textrm{d}s \lesssim |t'-t|^{1-2\alpha }, \end{aligned}$$

which yields the statement. \(\square \)

Lemma 4.5

For any \(x,y\in [-1,1]\), \(t\in [0,T]\) and \(\beta \in (0,\frac{1}{2}-\alpha )\), one has

$$\begin{aligned} \int _0^t (p_{t-s}(x)-p_{t-s}(y))^2 \,\textrm{d}s\lesssim \max \big (|x|,|y|\big )^{(\frac{1}{\alpha }-1)2\beta }|x-y|^{1-2\alpha }. \end{aligned}$$

Proof

W.l.o.g. we may assume \(t\ge |x-y|\) and split the integral into

$$\begin{aligned} \int _0^t ( p_{t-s}(x)-p_{t-s}(y))^{2} \,\textrm{d}s&\le \int _0^{t-|x-y|} ( p_{t-s}(x)-p_{t-s}(y) )^{2} \,\textrm{d}s\\&\quad + \int _{t-|x-y|}^t ( p_{t-s}(x)-p_{t-s}(y))^{2} \,\textrm{d}s\\&=:I_1+I_2. \end{aligned}$$

For \(I_1\), we apply Lemma 4.2 with \(\beta =1\) to get

$$\begin{aligned} I_1&\lesssim \max (|x|,|y|)^{(\frac{1}{\alpha }-1)2} \int _0^{t-|x-y|} |x-y|^{2} (t-s)^{-2\alpha -2}\,\textrm{d}s\\&= \max (|x|,|y|)^{(\frac{1}{\alpha }-1)2}|x-y|^{2} \bigg [ \frac{-(t-s)^{1-2\alpha -2}}{1-2\alpha -2}\ \bigg ]_0^{t-|x-y|}\\&\lesssim \max (|x|,|y|)^{(\frac{1}{\alpha }-1)2}|x-y|^{2} \big ( t^{-2\alpha -1} +|x-y|^{-2\alpha -1} \big )\\&\lesssim \max (|x|,|y|)^{(\frac{1}{\alpha }-1)2\beta }|x-y|^{1-2\alpha } \end{aligned}$$

with \(t\ge |x-y|\).

For \(I_2\), Lemma 4.2 again, but with \(\beta \in (0,1/2-\alpha )\) such that \(2\alpha +2\beta <1\), yields

$$\begin{aligned} I_2&\lesssim \max (|x|,|y|)^{(\frac{1}{\alpha }-1)2\beta } |x-y|^{2\beta } \int _{t-|x-y|}^t (t-s)^{-2\alpha -2\beta }\,\textrm{d}s\\&\lesssim \max (|x|,|y|)^{(\frac{1}{\alpha }-1)2\beta } |x-y|^{2\beta } \bigg [ \frac{-(t-s)^{1-2\alpha -2\beta }}{1-2\alpha -2\beta } \bigg ]_{t-|x-y|}^t\\&\lesssim \max (|x|,|y|)^{(\frac{1}{\alpha }-1)2\beta } |x-y|^{2\beta } |x-y|^{1-2\alpha -2\beta }\\&\lesssim \max (|x|,|y|)^{(\frac{1}{\alpha }-1)2\beta } |x-y|^{1-2\alpha }. \end{aligned}$$

\(\square \)

With these auxiliary results at hand, we are ready to prove the following regularity result for solutions of the SPDE (4.6).

Proposition 4.6

Suppose Assumption 2.1 and let \(X\in C([0,T]\times {\mathbb {R}})\) be a strong solution of the SPDE (4.6).

  1. (i)

    For any \(p\in (0,\infty )\), one has

    $$\begin{aligned} \sup \limits _{t\in [0,T]} \sup \limits _{x\in {\mathbb {R}}}{\mathbb {E}}[ |X(t,x)|^p ]<\infty . \end{aligned}$$
  2. (ii)

    We define the random field \((Z(t,x))_{t\in [0,T],x\in {\mathbb {R}}}\) by

    $$\begin{aligned} Z(t,x)&:=X(t,x)-x_0(t)\\&=\int _0^t p_{t-s}^\theta (x)\mu (s,X(s,0))\,\textrm{d}s + \int _0^t p_{t-s}^\theta (x)\sigma (s,X(s,0))\,\textrm{d}B_s. \end{aligned}$$

    For any \(0\le t\le t'\le T\), \(|x|,|y|\le 1\) and \(p\in [2,\infty )\), we get

    $$\begin{aligned} {\mathbb {E}}\big [| Z(t,x)-Z(t',y)|^p \big ]\lesssim |t'-t|^{(\frac{1}{2}-\alpha )p}+|x-y|^{(\frac{1}{2}-\alpha )p}. \end{aligned}$$

Proof

(i) Let us assume that \(p\ge 2\). For \(p\in (0,2)\), the statement then follows from the monotonicity of the \(L^p(\Omega )\)-norms. From Lemma 4.1 we know that \(Y_t:=X(t,0)\) is a solution of the SVE (2.1) and from Lemma 2.5 we know that its moments are finite. Thus, applying Hölder's inequality and the Burkholder–Davis–Gundy inequality, the linear growth condition on \(\mu \) and \(\sigma \) from Assumption 2.1, and Lemma 2.5, we get

$$\begin{aligned} {\mathbb {E}}[ |X(t,x)|^p ]&\lesssim 1+ {\mathbb {E}}\bigg [ \bigg | \int _0^t p_{t-s}^\theta (x)\mu (s,Y_s)\,\textrm{d}s \bigg |^p \bigg ] +{\mathbb {E}}\bigg [ \bigg | \int _0^t p_{t-s}^\theta (x)\sigma (s,Y_s)\,\textrm{d}B_s \bigg |^p \bigg ]\\&\lesssim 1+\bigg ( \int _0^t \big ( p_{t-s}^\theta (x)\big )^2\,\textrm{d}s \bigg )^{\frac{p}{2}} +\bigg ( \int _0^t\big ( p_{t-s}^\theta (x) \big )^2 \,\textrm{d}s \bigg )^{\frac{p}{2}}\\&\lesssim 1+\bigg ( \int _0^tc_\theta ^2 (t-s)^{-2\alpha }e^{-2\frac{|x|^{2+\theta }}{2(t-s)}} \,\textrm{d}s \bigg )^{\frac{p}{2}}\\&\lesssim 1+\bigg ( \int _0^t (t-s)^{-2\alpha }\,\textrm{d}s \bigg )^{\frac{p}{2}}<\infty . \end{aligned}$$

(ii) With

$$\begin{aligned} Z(t,x)=\int _0^t p_{t-s}^\theta (x)\mu (s,X(s,0))\,\textrm{d}s + \int _0^t p_{t-s}^\theta (x)\sigma (s,X(s,0))\,\textrm{d}B_s \end{aligned}$$

and by splitting the integrals, we get

$$\begin{aligned}&Z(t',x)-Z(t,y)\\&\quad =\int _0^t \big ( p_{t'-s}^\theta (x) - p_{t-s}^\theta (x) \big )\mu (s,X(s,0))\,\textrm{d}s \\ {}&\quad + \int _0^t \big ( p_{t-s}^\theta (x) - p_{t-s}^\theta (y) \big )\mu (s,X(s,0))\,\textrm{d}s + \int _t^{t'} p_{t'-s}^\theta (x) \mu (s,X(s,0))\,\textrm{d}s\\&\qquad +\int _0^t \big ( p_{t'-s}^\theta (x) - p_{t-s}^\theta (x) \big )\sigma (s,X(s,0))\,\textrm{d}B_s \\ {}&\qquad + \int _0^t \big ( p_{t-s}^\theta (x) - p_{t-s}^\theta (y) \big )\sigma (s,X(s,0))\,\textrm{d}B_s+ \int _t^{t'} p_{t'-s}^\theta (x) \sigma (s,X(s,0))\,\textrm{d}B_s\\&\quad =:D_1+D_2+D_3+S_1+S_2+S_3. \end{aligned}$$

We use Lemma 4.4, Lemma 4.5, Hölder’s and the Burkholder–Davis–Gundy inequality, Fubini’s theorem as well as (i) to get the following estimates:

$$\begin{aligned}&{\mathbb {E}}[|D_1|^p]\le \bigg (\int _0^t \big ( p_{t'-s}^\theta (x) - p_{t-s}^\theta (x) \big )^2 \,\textrm{d}s\bigg )^{\frac{p}{2}}\lesssim |t'-t|^{p(\frac{1}{2}-\alpha )},\\&{\mathbb {E}}[|S_1|^p]\le \bigg (\int _0^t \big ( p_{t'-s}^\theta (x) - p_{t-s}^\theta (x) \big )^2 \,\textrm{d}s\bigg )^{\frac{p}{2}}\lesssim |t'-t|^{p(\frac{1}{2}-\alpha )},\\&{\mathbb {E}}[|D_2|^p]\le \bigg (\int _0^t \big ( p_{t-s}^\theta (x) - p_{t-s}^\theta (y) \big )^2 \,\textrm{d}s\bigg )^{\frac{p}{2}}\lesssim |x-y|^{p(\frac{1}{2}-\alpha )},\\&{\mathbb {E}}[|S_2|^p]\lesssim |x-y|^{p(\frac{1}{2}-\alpha )},\\&{\mathbb {E}}[|D_3|^p]\le \bigg (\int _t^{t'} p_{t'-s}^\theta (x)^2 \,\textrm{d}s\bigg )^{\frac{p}{2}}\lesssim \bigg (\int _t^{t'} (t'-s)^{-2\alpha } \,\textrm{d}s\bigg )^{\frac{p}{2}}\lesssim |t'-t|^{p(\frac{1}{2}-\alpha )}, \\&{\mathbb {E}}[|S_3|^p]\lesssim |t'-t|^{p(\frac{1}{2}-\alpha )}. \end{aligned}$$

Hence, we obtain the desired statement. \(\square \)

4.1 Transformation to an SPDE in distributional form

The next aim is to transform the SPDE (4.6) into an SPDE in distributional form. To that end, we consider the evolution Eq. (4.1) on the whole \([0,T]\times {\mathbb {R}}\), i.e.

$$\begin{aligned} \begin{aligned} \frac{\partial u}{\partial t}(t,x)&=\Delta _\theta u(t,x),\quad t\in [0,T],\,x\in {\mathbb {R}},\\ u(0,x)&=\delta _0(x). \end{aligned} \end{aligned}$$
(4.8)

We are interested in the fundamental solution \(p^\theta :[0,T]\times {\mathbb {R}}\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) of (4.8), in the sense that for any sufficiently regular \(g:{\mathbb {R}}\rightarrow {\mathbb {R}}\), \(\big (\int _{\mathbb {R}}p_t^\theta (x,y)g(y)\,\textrm{d}y\big )_{t\in [0,T],x\in {\mathbb {R}}}\) is a solution of (4.8) with initial condition g instead of \(\delta _0\).

The semigroup \((S_t)_{t\in [0,T]}\) generated by \(\Delta _\theta \) is then defined by \(S_t:C_0^\infty ({\mathbb {R}})\rightarrow C_0^\infty ({\mathbb {R}})\) via

$$\begin{aligned} S_t\phi (x):=\int _{{\mathbb {R}}}p_t^\theta (x,y)\phi (y)\,\textrm{d}y, \quad \phi \in C_0^\infty ({\mathbb {R}}). \end{aligned}$$
(4.9)

First, we go back to the system (4.1) where only \(x\in {\mathbb {R}}_+\) is allowed and denote its fundamental solutions by

$$\begin{aligned} p^{|\cdot |} :[0,T]\times {\mathbb {R}}\times {\mathbb {R}}\rightarrow {\mathbb {R}}\end{aligned}$$
(4.10)

and suppress the \(\theta \)-dependence for the sake of readability.

To find explicit formulas for the \(p^{|\cdot |}\), we need the following preparations:

  • A squared Bessel process \(Z_t\ge 0\) of dimension \(n\in {\mathbb {R}}\) is given by the stochastic differential equation

    $$\begin{aligned} \textrm{d}Z_t=2\sqrt{Z_t}\,\textrm{d}B_t + n\,\textrm{d}t,\quad t\in [0,T]. \end{aligned}$$
  • The generator of a squared Bessel process of dimension n is given by

    $$\begin{aligned} (Lf)(x)=n\frac{\partial }{\partial x}f(x) + 2x \frac{\partial ^2}{\partial x^2}f(x),\quad x\in {\mathbb {R}}_+, \end{aligned}$$
    (4.11)

    for \(f\in C_0^\infty ({\mathbb {R}}_+)\), see [34, page 443].

  • The semigroup \((S_t)_{t\in [0,T]}\), defined in (4.9), fulfills

    $$\begin{aligned} \frac{\partial }{\partial t}(S_tf)=\Delta _\theta (S_tf) \end{aligned}$$
    (4.12)

    for all \(f\in C_0^\infty ({\mathbb {R}}_+)\), since \(p^\theta \) is the fundamental solution of (4.8). Analogously, the semigroup \((S^{|\cdot |}_t)_{t\in [0,T]}\), which we define as in (4.9) but with \(p^{|\cdot |}\) in place of \(p^\theta \), fulfills

    $$\begin{aligned} \frac{\partial }{\partial t}(S^{|\cdot |}_tf)=\Delta _\theta (S^{|\cdot |}_tf) \end{aligned}$$

    for all \(f\in C_0^\infty ({\mathbb {R}}_+)\).

  • Denote by \((\xi _t)_{t\in [0,T]}\) the Markov process generated by the semigroup \(\big (S_t^{|\cdot |}\big )_{t\in [0,T]}\), that is, it has the transition densities \((p_t^{|\cdot |})_{t\in [0,T]}\). We define the semigroup \((T_t)_{t\in [0,T]}\) by

    $$\begin{aligned} (T_tg)(x):= (S_t(g\circ \tilde{f}))(x) ={\mathbb {E}}_{x}[ g( \tilde{f}(\xi _t ) )] \end{aligned}$$

    for the fixed function \(\tilde{f}(x):=x^{2+\theta }\) and for \(g\in C_0^\infty ({\mathbb {R}}_+)\).

Our ultimate aim is to find bounds on the densities \(p^\theta \). To this end, we will derive explicit formulas for the densities \(p^{|\cdot |}\) and then use the bound

$$\begin{aligned} p_t^{\theta }(x,y)\le p_t^{\theta }(x,y) + p_t^{\theta }(x,-y) = p^{|\cdot |}_t(|x|,|y|),\quad \forall x,y\in {\mathbb {R}}. \end{aligned}$$
(4.13)

We derive the following Bessel property for the process \((\xi _t^{2+\theta })_{t\in [0,T]}\).

Lemma 4.7

The process \((\xi _t^{2+\theta })_{t\in [0,T]}\) is a squared Bessel process of dimension  \(\frac{2}{2+\theta }<1\).

Proof

We show that the generator of \(\tilde{f}(\xi _t)\) coincides with the generator of the squared Bessel process in (4.11) with dimension \(\frac{2}{2+\theta }\). To this end, we use the semigroup \(T_t\) and denote by G its generator. For appropriate functions g we get, by the definition of the generator and by (4.12),

$$\begin{aligned} (Gg)(x)&=\frac{\partial }{\partial t}(T_tg)|_{t\rightarrow 0}(x) =\frac{\partial }{\partial t}( S_t(g\circ \tilde{f}))|_{t\rightarrow 0}(x) =\Delta _\theta S_0(g\circ \tilde{f})(x)\\&=\Delta _\theta (g\circ \tilde{f})(x). \end{aligned}$$

Note that the set \(\lbrace t\in [0,T]:\xi _t=0 \rbrace \) has Lebesgue measure zero. Therefore, we can explicitly calculate, for \(x> 0\),

$$\begin{aligned} (Gg)(x)&=\frac{2}{(2+\theta )^2}\frac{\partial }{\partial x} \bigg ( x^{-\theta }\frac{\partial }{\partial x}\big ( g(x^{2+\theta })\big )\bigg )\\&=\frac{2}{(2+\theta )^2}\frac{\partial }{\partial x}(x^{-\theta } g'(x^{2+\theta })(2+\theta )x^{1+\theta }) \\&=\frac{2}{(2+\theta )}\frac{\partial }{\partial x}(x g'(x^{2+\theta })) \\&=\frac{2}{(2+\theta )}( g'(x^{2+\theta }) + x g''(x^{2+\theta })(2+\theta )x^{1+\theta } )\\&=\frac{2}{(2+\theta )}\frac{\partial g}{\partial x}(x^{2+\theta }) + 2 x^{2+\theta }\frac{\partial ^2 g}{\partial x^2}(x^{2+\theta }) \\&= (Lg)(u), \end{aligned}$$

where L is the generator of a squared Bessel process of dimension \(\frac{2}{2+\theta }\) and \(u:=x^{2+\theta }\). \(\square \)

Next, we derive explicit formulas for the transition densities of \((\xi _t)_{t\in [0,T]}\). Note that the transition densities for the squared Bessel process of dimension n are for \(t>0\) and \(y>0\) given by (see e.g. [34, Corollary XI.1.4])

$$\begin{aligned}&q_t^n(x,y) =\frac{1}{2t}\bigg ( \frac{y}{x}\bigg )^{\frac{\nu }{2}}e^{-\frac{x+y}{2t}}I_\nu \bigg (\frac{\sqrt{xy}}{t}\bigg )\quad \text {for }x>0 \quad \text {and} \end{aligned}$$
(4.14)
$$\begin{aligned}&q_t^n(0,y)=2^{-\nu }t^{-(\nu +1)}\Gamma (\nu +1)^{-1}y^{2\nu +1}e^{-\frac{y^2}{2t}}, \end{aligned}$$
(4.15)

where \(\nu :=\frac{n}{2}-1\) denotes the index of the Bessel process and \(I_\nu \) is the modified Bessel function that is given by

$$\begin{aligned} I_\nu (x):=\sum \limits _{k=0}^{\infty } \frac{( x/2 )^{2k+\nu }}{k!\Gamma (\nu +k+1)} \end{aligned}$$
(4.16)

for \(\nu \ge -1\) and \(x>0\).

Lemma 4.8

The transition densities of the Markov process \((\xi _t)_{t\in [0,T]}\) are, for \(t>0\), given by

$$\begin{aligned} p_t^{|\cdot |}(x,y)=\frac{(2+\theta )}{2t}|xy|^{\frac{(1+\theta )}{2}} e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}}I_\nu \bigg (\frac{|xy|^{1+\frac{\theta }{2}}}{t}\bigg )\quad \text {for }x,y>0, \end{aligned}$$
(4.17)

and for \(x=0\), \(y> 0\) with \(p_t^{|\cdot |}(0,y)=p_t^\theta (y)\) defined in (4.3). Consequently, (4.17) are explicit formulas for the fundamental solutions \(p^{|\cdot |}\) defined in (4.10).

Proof

Denote, for fixed \(\theta >0\), by \(q_t\) the transition density of the squared Bessel process \((|\xi _t|^{2+\theta })_{t\in [0,T]}\) of dimension \(\frac{2}{2+\theta }\) (see Lemma 4.7), which is given by (4.14) with \(\nu =\frac{1}{2+\theta }-1\).

Now, by noting that, for all \(x,t,s>0\) and Borel sets \(A\in {\mathcal {B}}({\mathbb {R}}_+)\),

$$\begin{aligned} {\mathbb {E}}\Big [\mathbbm {1}_{A}(|\xi _{t+s}|^{2+\theta })|\xi _{t+s}|^{2+\theta }\,\Big |\,|\xi _{t}|^{2+\theta }=x \Big ] = {\mathbb {E}}\Big [\mathbbm {1}_{A}(|\xi _{t+s}|^{2+\theta })|\xi _{t+s}|^{2+\theta } \,\Big |\,|\xi _{t}|=x^{\frac{1}{2+\theta }} \Big ] \end{aligned}$$

holds, we get with the notation \(B:=\lbrace b\in {\mathbb {R}}_+:\,b^{2+\theta }\in A \rbrace \) the relation

$$\begin{aligned} \int _{A}q_t(x,y)y\,\textrm{d}y&= \int _{B}p_t^{|\cdot |}\Big (x^{\frac{1}{2+\theta }},y\Big )y^{2+\theta }\,\textrm{d}y\nonumber \\&= \frac{1}{2+\theta }\int _{A}p_t^{|\cdot |}\Big (x^{\frac{1}{2+\theta }},z^{\frac{1}{2+\theta }}\Big )z\, z^{\frac{1}{2+\theta }-1}\,\textrm{d}z\nonumber \\&= \frac{1}{2+\theta }\int _{A}p_t^{|\cdot |}\Big (x^{\frac{1}{2+\theta }},y^{\frac{1}{2+\theta }}\Big )y^{\frac{1}{2+\theta }-1}y \,\textrm{d}y, \end{aligned}$$
(4.18)

where we substituted \(z:=y^{2+\theta }\) and thus \(\textrm{d}y=\frac{1}{2+\theta }z^{\frac{1}{2+\theta }-1}\,\textrm{d}z\). Since (4.18) must hold for all Borel sets A, we can compare both sides of the equation to see with the notation

$$\begin{aligned} {\hat{x}}:=x^{\frac{1}{2+\theta }} \quad \text {and}\quad {\hat{y}}:=y^{\frac{1}{2+\theta }} \end{aligned}$$

that, with \(\nu =\frac{1}{2+\theta }-1=-(\frac{1+\theta }{2+\theta })\),

$$\begin{aligned} p_t^{|\cdot |}({\hat{x}},{\hat{y}})&=(2+\theta )q_t\Big ({\hat{x}}^{2+\theta },{\hat{y}}^{2+\theta }\Big )y^{1-\frac{1}{2+\theta }}\\&=\frac{(2+\theta )}{2t}\bigg | \frac{{\hat{y}}}{{\hat{x}}}\bigg |^{\frac{(2+\theta )\nu }{2}}e^{-\frac{|{\hat{x}}|^{2+\theta }+|{\hat{y}}|^{2+\theta }}{2t}}I_\nu \bigg (\frac{|{\hat{x}}{\hat{y}}|^{1+\frac{\theta }{2}}}{t}\bigg )|{\hat{y}}|^{1+\theta }\\&=\frac{(2+\theta )}{2t}\bigg |\frac{{\hat{y}}}{{\hat{x}}}\bigg |^{-\frac{(1+\theta )}{2}}e^{-\frac{|{\hat{x}}|^{2+\theta }+|{\hat{y}}|^{2+\theta }}{2t}}I_\nu \bigg (\frac{|{\hat{x}}{\hat{y}}|^{1+\frac{\theta }{2}}}{t}\bigg )|{\hat{y}}|^{1+\theta }\\&=\frac{(2+\theta )}{2t}|{\hat{x}}{\hat{y}}|^{\frac{(1+\theta )}{2}}e^{-\frac{|{\hat{x}}|^{2+\theta }+|{\hat{y}}|^{2+\theta }}{2t}}I_\nu \bigg (\frac{|{\hat{x}}{\hat{y}}|^{1+\frac{\theta }{2}}}{t}\bigg ). \end{aligned}$$

By a very similar calculation, (4.15) can be used to derive (4.3) in the case of \(x=0\):

$$\begin{aligned} \int _B q_t^\theta (0,y)y\,\textrm{d}y&=\int _A q_t^\theta (0,z^{1+\theta /2})z^{\theta /2}(1+\theta /2)z^{1+\theta /2}\,\textrm{d}z\\&=(1+\theta /2)2^{\frac{1+\theta }{2+\theta }}\Gamma (\nu +1)^{-1}\int _A t^{-(\nu +1)}z^{-\theta /2}e^{-\frac{|z|^{2+\theta }}{2t}}z^{\theta /2}z^{1+\theta /2}\,\textrm{d}z\\&=(2+\theta )2^{-\frac{1}{2+\theta }}\Gamma \bigg (\frac{1}{2+\theta }\bigg )^{-1}\int _A t^{-\frac{1}{2+\theta }}e^{-\frac{|z|^{2+\theta }}{2t}}z^{1+\theta /2}\,\textrm{d}z\\&=\int _A p_t^{|\cdot |}(0,z)z^{1+\theta /2}\,\textrm{d}z \end{aligned}$$

with \(p_t^{|\cdot |}(0,z)=p_t^\theta (z)\) as in (4.3) and choosing \(c_\theta \) as in (4.4). \(\square \)

Corollary 4.9

The fundamental solutions \(p^\theta :[0,T]\times {\mathbb {R}}\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) of (4.8) fulfill for all \(t\in [0,T]\),

$$\begin{aligned} p_t^{\theta }(x,y)\le \frac{(2+\theta )}{2t}|xy|^{\frac{(1+\theta )}{2}} e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}}I_\nu \bigg (\frac{|xy|^{1+\frac{\theta }{2}}}{t}\bigg )\quad \text {for }x,y\ne 0, \end{aligned}$$

and

$$\begin{aligned} p_t^{\theta }(x,0)\le c_\theta t^{-\frac{1}{2+\theta }}e^{-\frac{|x|^{2+\theta }}{2t}}\quad \text {for }x\ne 0. \end{aligned}$$

Proof

This is a direct consequence of (4.13) and Lemma 4.8. \(\square \)

Having the bound from Corollary 4.9, we introduce a partial integration formula for the operator \(\Delta _\theta \) using the fundamental solutions \(p_t^\theta \) of (4.1).

Lemma 4.10

For \(\Delta _\theta =\frac{2}{(2+\theta )^2}\frac{\partial }{\partial x}|x|^{-\theta }\frac{\partial }{\partial x}\), the partial integration formula

$$\begin{aligned} \int _{{\mathbb {R}}} p_t(x,y)\Delta _\theta \phi (x)\,\textrm{d}x=\int _{{\mathbb {R}}}\big (\Delta _\theta p_t(x,y)\big ) \phi (x)\,\textrm{d}x,\quad t\in [0,T],y\in {\mathbb {R}}, \end{aligned}$$

holds for any \(\phi \in C_0^2({\mathbb {R}})\).

Proof

Setting \(\phi _{2,t}(x):=|x|^{-\theta }\frac{\partial }{\partial x}\phi (x)\), we note that \(\phi _{2,t}\) also has compact support, and we get, by the classical partial integration formula,

$$\begin{aligned}&\int _{{\mathbb {R}}}p_t(x,y) \frac{\partial }{\partial x}|x|^{-\theta }\frac{\partial }{\partial x}\phi (x)\,\textrm{d}x =\int _{{\mathbb {R}}}p_t(x,y) \frac{\partial }{\partial x}\phi _{2,t}(x)\,\textrm{d}x\\&\quad =-\int _{{\mathbb {R}}}\frac{\partial }{\partial x}p_t(x,y) \phi _{2,t}(x)\,\textrm{d}x =-\int _{{\mathbb {R}}}\bigg ( \frac{\partial }{\partial x}p_t(x,y) \bigg )|x|^{-\theta }\frac{\partial }{\partial x}\phi (x)\,\textrm{d}x. \end{aligned}$$

Then, again by partial integration, we get, as claimed,

$$\begin{aligned} \int _{{\mathbb {R}}}p_t(x,y) \frac{\partial }{\partial x}|x|^{-\theta }\frac{\partial }{\partial x}\phi (x)\,\textrm{d}x=\int _{{\mathbb {R}}}\frac{\partial }{\partial x}\bigg ( \bigg ( \frac{\partial }{\partial x}p_t(x,y) \bigg )|x|^{-\theta } \bigg )\phi (x)\,\textrm{d}x. \end{aligned}$$

\(\square \)

With these auxiliary results at hand, we are in a position to carry out the transformation into an SPDE in distributional form. We consider test functions \(\Phi \in C_0^2([0,T]\times {\mathbb {R}})\), to which we can apply the operator \(\Delta _\theta \) such that

$$\begin{aligned} \Delta _\theta \Phi _t(x)=\frac{2}{(2+\theta )^2}\frac{\partial }{\partial x}|x|^{-\theta }\frac{\partial }{\partial x}\Phi _t(x) \end{aligned}$$

is well-defined for all \(t\in [0,T]\) and \(x\in {\mathbb {R}}{\setminus } \lbrace 0\rbrace \).

Lemma 4.11

Every strong solution \((X(t,x))_{t\in [0,T],x\in {\mathbb {R}}}\) of (4.6) is a strong solution to the following SPDE in distributional form

$$\begin{aligned} \begin{aligned}&\int _{{\mathbb {R}}}X(t,x)\Phi _t(x)\,\textrm{d}x\\&\quad =\int _{{\mathbb {R}}}\bigg (x_0(0)\Phi _0(x)+\int _0^t \Phi _s(x) \frac{\partial }{\partial s}x_0(s)\,\textrm{d}s\bigg )\,\textrm{d}x \\&\quad \quad +\int _0^t\int _{{\mathbb {R}}}X(s,x)\bigg ( \Delta _\theta \Phi _s(x)+\frac{\partial }{\partial s}\Phi _s(x) \bigg )\,\textrm{d}x \,\textrm{d}s \\&\quad \quad +\int _0^t\mu (s,X(s,0))\Phi _s(0)\,\textrm{d}s +\int _0^t\sigma (s,X(s,0))\Phi _s(0)\,\textrm{d}B_s, \quad t\in [0,T], \end{aligned} \end{aligned}$$
(4.19)

for every test function \(\Phi \in C^2_0([0,T]\times {\mathbb {R}})\).

Proof

Let X be a solution to (4.6) and \(\Phi \) be as in the statement. We first observe that

$$\begin{aligned}&\int _0^t \langle X(s,\cdot ),\Delta _\theta \Phi _s \rangle \,\textrm{d}s\nonumber \\&\quad =\int _0^t\int _{{\mathbb {R}}} x_0(s) \Delta _\theta \Phi _s(x)\,\textrm{d}x\,\textrm{d}s+\int _0^t\int _{{\mathbb {R}}}\int _0^s p_{s-u}^\theta (x)\sigma (u,X(u,0))\,\textrm{d}B_u\,\Delta _\theta \Phi _s(x)\,\textrm{d}x\,\textrm{d}s\nonumber \\&\quad \quad +\int _0^t\int _{{\mathbb {R}}}\int _0^sp_{s-u}^\theta (x)\mu (u,X(u,0))\,\textrm{d}u\,\Delta _\theta \Phi _s(x)\,\textrm{d}x \,\textrm{d}s\nonumber \\&\quad =:I_1+I_2+I_3. \end{aligned}$$
(4.20)

We use the fact that \(p^\theta _s(x,\cdot )\) is a probability density to write \(x_0(s)=\int _{{\mathbb {R}}}p_s^\theta (x,y)x_0(s) \,\textrm{d}y\), and then apply Fubini's theorem, the partial integration formula from Lemma 4.10 and the fact that \(p_t^\theta \) is a fundamental solution to get

$$\begin{aligned} I_1&=\int _0^t\int _{{\mathbb {R}}}\int _{{\mathbb {R}}}p_s^\theta (x,y)x_0(s)\,\textrm{d}y\,\Delta _\theta \Phi _s(x)\,\textrm{d}x\,\textrm{d}s\\&=\int _0^t x_0(s)\int _{{\mathbb {R}}}\int _{{\mathbb {R}}}p_s^\theta (x,y)\Delta _\theta \Phi _s(x)\,\textrm{d}x \,\textrm{d}y\,\textrm{d}s\\&=\int _{{\mathbb {R}}}\int _{{\mathbb {R}}}\int _0^t x_0(s)\Big (\Delta _\theta p_s^\theta (x,y)\Big ) \Phi _s(x)\,\textrm{d}s\,\textrm{d}y\,\textrm{d}x\\&=\int _{{\mathbb {R}}}\int _{{\mathbb {R}}}\int _0^t \bigg (\frac{\partial }{\partial s} p_s^\theta (x,y)\bigg )x_0(s) \Phi _s(x)\,\textrm{d}s\,\textrm{d}y\,\textrm{d}x. \end{aligned}$$

We denote the summands on the right-hand side of (4.6) by \(X_i(t,x)\) for \(i=2,3\), that is, \(X(t,x)=x_0(t)+X_2(t,x)+X_3(t,x)\). Due to the s-dependence of \(x_0(s)\) and \(\Phi _s\), we apply the product rule to get

$$\begin{aligned} I_1&=\int _{{\mathbb {R}}}\int _{{\mathbb {R}}}\int _0^t\frac{\partial }{\partial s}\Big (x_0(s) p_s^\theta (x,y) \Phi _s(x)\Big )\,\textrm{d}s\,\textrm{d}y\,\textrm{d}x\nonumber \\&\quad - \int _{{\mathbb {R}}}\int _{{\mathbb {R}}}\int _0^t p_s^\theta (x,y) \frac{\partial }{\partial s}\Big (x_0(s)\Phi _s(x)\Big )\,\textrm{d}s\,\textrm{d}y\,\textrm{d}x\nonumber \\&=\langle x_0(t),\Phi _t\rangle - \langle x_0(0),\Phi _0\rangle \nonumber \\&\quad - \int _0^t\int _{{\mathbb {R}}}x_0(s) \frac{\partial }{\partial s}\Phi _s(x)\,\textrm{d}x\,\textrm{d}s - \int _0^t\int _{{\mathbb {R}}}\Phi _s(x) \frac{\partial }{\partial s}x_0(s)\,\textrm{d}x\,\textrm{d}s. \end{aligned}$$
(4.21)

Similarly, using the stochastic Fubini theorem, we get

$$\begin{aligned} I_2&=\int _0^t\int _{{\mathbb {R}}}\int _0^s p_{s-u}^\theta (x)\sigma (u,X(u,0))\,\textrm{d}B_u \, \Delta _\theta \Phi _s(x)\,\textrm{d}x\,\textrm{d}s\nonumber \\&=\int _0^t\int _{{\mathbb {R}}}\int _u^t \bigg (\frac{\partial }{\partial s} p_{s-u}^\theta (x)\bigg ) \Phi _s(x)\,\textrm{d}s\,\textrm{d}x\, \sigma (u,X(u,0))\,\textrm{d}B_u\nonumber \\&=\int _0^t\int _{{\mathbb {R}}}\int _u^t \frac{\partial }{\partial s}\bigg ( p_{s-u}^\theta (x) \Phi _s(x)\bigg )\,\textrm{d}s\,\textrm{d}x\, \sigma (u,X(u,0))\,\textrm{d}B_u \nonumber \\&\quad - \int _0^t\int _{{\mathbb {R}}}\int _u^t p_{s-u}^\theta (x) \bigg (\frac{\partial }{\partial s}\Phi _s(x)\bigg )\,\textrm{d}s\,\textrm{d}x \, \sigma (u,X(u,0))\,\textrm{d}B_u\nonumber \\&=\langle X_2(t,\cdot ),\Phi _t\rangle -\int _0^t\int _{{\mathbb {R}}}p_0^\theta (x,0)\Phi _u(x)\,\textrm{d}x\,\sigma (u,X(u,0))\,\textrm{d}B_u\nonumber \\&\quad - \int _0^t\int _{{\mathbb {R}}}\int _0^s p_{s-u}^\theta (x) \sigma (u,X(u,0))\,\textrm{d}B_u\,\bigg (\frac{\partial }{\partial s}\Phi _s(x)\bigg )\,\textrm{d}x\,\textrm{d}s\nonumber \\&=\langle X_2(t,\cdot ),\Phi _t\rangle -\int _0^t\Phi _u(0)\sigma (u,X(u,0))\,\textrm{d}B_u\nonumber \\&\quad - \int _0^t\int _{{\mathbb {R}}}X_2(s,x)\bigg (\frac{\partial }{\partial s}\Phi _s(x)\bigg )\,\textrm{d}x\,\textrm{d}s \end{aligned}$$
(4.22)

and

$$\begin{aligned} I_3&=\int _0^t\int _{{\mathbb {R}}}\int _0^sp_{s-u}^\theta (x)\mu (u,X(u,0))\,\textrm{d}u\,\Delta _\theta \Phi _s(x)\,\textrm{d}x\,\textrm{d}s\nonumber \\&=\int _0^t\int _{{\mathbb {R}}}\int _u^t\frac{\partial }{\partial s}\Big ( p_{s-u}^\theta (x) \Phi _s(x)\Big )\,\textrm{d}s\,\textrm{d}x\,\mu (u,X(u,0))\,\textrm{d}u\nonumber \\&\quad -\int _0^t\int _{{\mathbb {R}}}\int _u^t p_{s-u}^\theta (x)\bigg (\frac{\partial }{\partial s}\Phi _s(x)\bigg )\,\textrm{d}s\,\textrm{d}x\,\mu (u,X(u,0))\,\textrm{d}u\nonumber \\&=\langle X_3(t,\cdot ),\Phi _t\rangle - \int _0^t \Phi _u(0)\mu (u,X(u,0))\,\textrm{d}u\nonumber \\&\quad -\int _0^t\int _{{\mathbb {R}}}X_3(s,x)\bigg (\frac{\partial }{\partial s}\Phi _s(x)\bigg )\,\textrm{d}x\,\textrm{d}s. \end{aligned}$$
(4.23)

Plugging (4.21), (4.22) and (4.23) into (4.20) and rearranging the terms yields

$$\begin{aligned} \langle X(t,\cdot ),\Phi _t\rangle&=\int _{{\mathbb {R}}}\bigg (x_0(0)\Phi _0(x)+\int _0^t \Phi _s(x) \frac{\partial }{\partial s}x_0(s)\,\textrm{d}s\bigg ) \,\textrm{d}x\\&\quad +\int _0^t\int _{{\mathbb {R}}}X(s,x)\bigg (\Delta _\theta \Phi _s(x)+\frac{\partial }{\partial s}\Phi _s(x) \bigg )\,\textrm{d}x\,\textrm{d}s\\&\quad +\int _0^t\mu (s,X(s,0))\Phi _s(0)\,\textrm{d}s +\int _0^t\sigma (s,X(s,0))\Phi _s(0)\,\textrm{d}B_s, \end{aligned}$$

for \(t\in [0,T]\), which shows that (4.19) holds. \(\square \)

We summarize the findings of Step 1 in the following proposition.

Proposition 4.12

Every strong \(L^p\)-solution \((X_t)_{t\in [0,T]}\) to the SVE (2.1) with p given by (2.3) generates a strong solution \((X(t,x))_{t\in [0,T],x\in {\mathbb {R}}}\), as defined in (3.1), to the distributional SPDE (4.19) with \(X\in C([0,T]\times {\mathbb {R}})\) a.s. Furthermore, \(\sup _{t\in [0,T],x\in {\mathbb {R}}}{\mathbb {E}}[|X(t,x)|^q]<\infty \) for all \(q\in (0,\infty )\) and, for \(Z(t,x):=X(t,x)-x_0(t)\) and \(q\in [2,\infty )\),

$$\begin{aligned} {\mathbb {E}}[| Z(t,x)-Z(t',x')|^q ] \lesssim |t'-t|^{(\frac{1}{2}-\alpha )q}+|x-x'|^{(\frac{1}{2}-\alpha )q}, \end{aligned}$$

for all \(t,t'\in [0,T]\) and \(x,x'\in [-1,1]\).

Proof

The fact that a solution to the SVE (2.1) induces a solution to the SPDE (4.19) follows from Lemma 4.1 and Lemma 4.11, the continuity from Lemma 4.1 and the remaining properties from Proposition 4.6. \(\square \)

5 Steps 2 and 3: Implementing Yamada–Watanabe’s approach

The next steps use the classical approximation of the absolute value function introduced by Yamada and Watanabe [36], which allows us to apply Itô’s formula. Recall that, by Assumption 2.1 (ii), \(\sigma \) is \(\xi \)-Hölder continuous for some \(\xi \in [\frac{1}{2},1]\). Hence, there exists a strictly increasing function \(\rho :[0,\infty )\rightarrow [0,\infty )\) such that \(\rho (0)=0\),

$$\begin{aligned} |\sigma (t,x)-\sigma (t,y)|\le C_{\sigma } |x-y|^{\xi } \le \rho (|x-y|) \quad \text {for } t\in [0,T] \text { and }x,y \in {\mathbb {R}}\end{aligned}$$

and

$$\begin{aligned} \int _0^{\varepsilon } \frac{1}{\rho (x)^2}\,\textrm{d}x=\infty \quad \text {for all } \varepsilon >0. \end{aligned}$$

Based on \(\rho \), we define a sequence \((\phi _n)_{n\in {\mathbb {N}}}\) of functions mapping from \({\mathbb {R}}\) to \({\mathbb {R}}\) that approximates the absolute value in the following way: Let \((a_n)_{n\in {\mathbb {N}}}\) be a strictly decreasing sequence with \(a_0=1\) such that \(a_n\rightarrow 0\) as \(n\rightarrow \infty \) and

$$\begin{aligned} \int _{a_n}^{a_{n-1}}\frac{1}{\rho (x)^2}\,\textrm{d}x=n. \end{aligned}$$
(5.1)

Furthermore, we define a sequence of mollifiers: let \(\psi _n\in C_0^{\infty }({\mathbb {R}})\), \(n\in {\mathbb {N}}\), be smooth functions with compact support such that \(\textrm{supp}(\psi _n)\subset (a_n,a_{n-1})\),

$$\begin{aligned} 0\le \psi _n(x)\le \frac{2}{n\rho (x)^2}\le \frac{2}{nx}, \quad x\in {\mathbb {R}}, \quad \text {and}\quad \int _{a_n}^{a_{n-1}}\psi _n(x)\,\textrm{d}x=1. \end{aligned}$$
(5.2)

We set

$$\begin{aligned} \phi _n(x):=\int _0^{|x|} \bigg (\int _0^y \psi _n(z)\,\textrm{d}z \bigg )\,\textrm{d}y,\quad x\in {\mathbb {R}}. \end{aligned}$$
(5.3)

By (5.2) and the compact support of \(\psi _n\), it follows that \(\phi _n(\cdot )\rightarrow |\cdot |\) uniformly as \(n\rightarrow \infty \). Since every \(\psi _n\) and, thus, every \(\phi _n\) is zero in a neighborhood of zero, the functions \(\phi _n\) are smooth with

$$\begin{aligned} \Vert \phi _n'\Vert _\infty \le 1, \quad \phi _n'(x)=\textrm{sgn}(x)\int _0^{|x|}\psi _n(y)\,\textrm{d}y \quad \text {and}\quad \phi _n''(x)=\psi _n(|x|), \quad \text {for } x\in {\mathbb {R}}. \end{aligned}$$

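To make the construction above concrete, the following purely illustrative Python sketch (not part of the argument) implements the sequences \((a_n)_{n\in {\mathbb {N}}}\), \((\psi _n)_{n\in {\mathbb {N}}}\) and \((\phi _n)_{n\in {\mathbb {N}}}\) for the prototypical choice \(\rho (x)=\sqrt{x}\), for which (5.1) yields \(a_n=e^{-n(n+1)/2}\); a simple non-smooth bump replaces the smooth mollifier \(\psi _n\), which is enough to visualize the uniform convergence \(\phi _n\rightarrow |\cdot |\).

```python
# Purely illustrative sketch of the Yamada-Watanabe approximation of |x|
# for rho(x) = sqrt(x), so that (5.1) gives a_n = exp(-n(n+1)/2).
import numpy as np

def a(n: int) -> float:
    """a_n = exp(-n(n+1)/2); then int_{a_n}^{a_{n-1}} rho(x)^{-2} dx = n."""
    return float(np.exp(-n * (n + 1) / 2.0))

def psi(n: int, x: np.ndarray) -> np.ndarray:
    """Non-smooth stand-in for the mollifier psi_n of (5.2):
    psi_n(x) = 1/(n x) on (a_n, a_{n-1}), which satisfies 0 <= psi_n(x) <= 2/(n x)
    and integrates to one over (a_n, a_{n-1})."""
    inside = (x > a(n)) & (x < a(n - 1))
    return np.where(inside, 1.0 / (n * np.where(inside, x, 1.0)), 0.0)

def phi(n: int, xs: np.ndarray) -> np.ndarray:
    """phi_n(x) = int_0^{|x|} int_0^y psi_n(z) dz dy, computed by trapezoidal rules
    on a logarithmic grid, since the support of psi_n sits very close to zero."""
    z = np.concatenate(([0.0], np.geomspace(a(n) / 10.0, 1.0, 20000)))
    dz = np.diff(z)
    inner = np.concatenate(([0.0], np.cumsum(0.5 * (psi(n, z[1:]) + psi(n, z[:-1])) * dz)))
    inner = inner / inner[-1]   # renormalize so that the numerical integral of psi_n is one
    outer = np.concatenate(([0.0], np.cumsum(0.5 * (inner[1:] + inner[:-1]) * dz)))
    return np.interp(np.abs(xs), z, outer)

xs = np.linspace(-1.0, 1.0, 401)
for n in (2, 4, 6):
    gap = float(np.max(np.abs(xs) - phi(n, xs)))
    print(f"n={n}: sup_x (|x| - phi_n(x)) = {gap:.3e}   (at most a_(n-1) = {a(n - 1):.3e})")
```
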
Let \(X^1\) and \(X^2\) be two strong solutions to the SPDE (4.19) for a given Brownian motion \((B_t)_{t\in [0,T]}\) such that \(X^1,X^2\in C([0,T]\times {\mathbb {R}})\) a.s. We define \(\tilde{X}:=X^1-X^2\) and consider, for a fixed \(x\in {\mathbb {R}}\), \(m\in {\mathbb {R}}_+\) and some \(\Phi _x^m\in C^2_0({\mathbb {R}})\) (we will later choose m depending on n; note that \(\Phi _x^m\) does not depend on t):

$$\begin{aligned} \langle \tilde{X}_t,\Phi _x^m \rangle = \int _{{\mathbb {R}}}\tilde{X}(t,y)\Phi _x^m(y)\,\textrm{d}y, \end{aligned}$$

where \(\langle \cdot ,\cdot \rangle \) denotes the scalar product on \(L^2({\mathbb {R}})\).

Proposition 5.1

For a fixed \(x\in {\mathbb {R}}\) and \(m\in {\mathbb {R}}_+\), let \(\Phi _x^m\in C^2_0({\mathbb {R}})\) be such that \(\Delta _\theta \Phi _x^m\) is well-defined. Then, for \(t\in [0,T]\), one has

$$\begin{aligned} \phi _n(\langle \tilde{X}_t,\Phi _{x}^m \rangle )&= \int _0^t \phi _n'(\langle \tilde{X}_s, \Phi _x^m \rangle )\langle \tilde{X}_s,\Delta _\theta \Phi _{x}^m \rangle \,\textrm{d}s\nonumber \\&\quad +\int _0^t \phi _n'(\langle \tilde{X}_s,\Phi _x^m \rangle )\Phi _x^m(0) ( \mu (s,X^1(s,0))-\mu (s,X^2(s,0)) )\,\textrm{d}s\nonumber \\&\quad +\int _0^t \phi _n'(\langle \tilde{X}_s, \Phi _x^m \rangle )\Phi _x^m(0) ( \sigma (s,X^1(s,0))-\sigma (s,X^2(s,0)) )\,\textrm{d}B_s\nonumber \\&\quad +\frac{1}{2}\int _0^t \psi _n(| \langle \tilde{X}_s,\Phi _x^m \rangle |)\Phi _x^m(0)^2 ( \sigma (s,X^1(s,0))-\sigma (s,X^2(s,0)) )^2\,\textrm{d}s. \end{aligned}$$
(5.4)

Proof

By (4.19), \((\langle \tilde{X}_t,\Phi _x^m \rangle )_{t\in [0,T]}\) is a semimartingale. Therefore, we are able to apply Itô’s formula to \(\phi _n\), which yields the result. \(\square \)

Note that (5.4) defines a function of x, which we want to integrate against another non-negative test function with the following properties.

Assumption 5.2

Let \(\Psi \in C^2([0,T]\times {\mathbb {R}})\) be such that

  1. (i)

    \(\Psi _t(0)>0\) for all \(t\in [0,T]\),

  2. (ii)

    \(\Gamma (t):=\lbrace x\in {\mathbb {R}}:\,\exists s\le t \text { s.t. } |\Psi _s(x)|>0 \rbrace \subset B(0,J(t))\) for some \(0<J(t)<\infty \),

  3. (iii)
    $$\begin{aligned} \sup \limits _{s\le t}\bigg | \int _{{\mathbb {R}}}|x|^{-\theta }\bigg ( \frac{\partial \Psi _s(x)}{\partial x} \bigg )^2\,\textrm{d}x \bigg |<\infty , \quad t\in [0,T]. \end{aligned}$$

We will later choose an explicit function \(\Psi \) and show that it fulfills Assumption 5.2. Then, we get the following equality, where the extra term \(I_5^{m,n}\) arises due to the t-dependence of \(\Psi \).

Proposition 5.3

For \(\Psi \) fulfilling Assumption 5.2, we have

$$\begin{aligned}&\langle \phi _n(\langle \tilde{X}_t,\Phi _{\cdot }^m \rangle ),\Psi _t \rangle \nonumber \\&\quad = \int _0^t \langle \phi _n'(\langle \tilde{X}_s,\Phi _\cdot ^m \rangle )\langle \tilde{X}_s,\Delta _\theta \Phi _{\cdot }^m \rangle ,\Psi _s \rangle \,\textrm{d}s\nonumber \\&\qquad +\int _0^t \langle \phi _n'(\langle \tilde{X}_s,\Phi _\cdot ^m \rangle )\Phi _\cdot ^m(0),\Psi _s \rangle ( \mu (s,X^1(s,0))-\mu (s,X^2(s,0)) )\,\textrm{d}s\nonumber \\&\qquad + \int _0^t \langle \phi _n'(\langle \tilde{X}_s,\Phi _\cdot ^m \rangle )\Phi _\cdot ^m(0),\Psi _s \rangle ( \sigma (s,X^1(s,0))-\sigma (s,X^2(s,0)) )\,\textrm{d}B_s\nonumber \\&\qquad +\frac{1}{2}\int _0^t \langle \psi _n(| \langle \tilde{X}_s,\Phi _\cdot ^m \rangle |)\Phi _\cdot ^m(0)^2,\Psi _s \rangle ( \sigma (s,X^1(s,0))-\sigma (s,X^2(s,0)))^2\,\textrm{d}s\nonumber \\&\qquad +\int _0^t \langle \phi _n(\langle \tilde{X}_s,\Phi _\cdot ^m \rangle ),{\dot{\Psi }}_s \rangle \,\textrm{d}s\nonumber \\&\quad =:I_1^{m,n}(t)+I_2^{m,n}(t)+I_3^{m,n}(t)+I_4^{m,n}(t)+I_5^{m,n}(t), \end{aligned}$$
(5.5)

for \(t\in [0,T]\), where \({\dot{\Psi }}_s(x):=\frac{\partial }{\partial s}\Psi _s(x)\).

Proof

We discretize \(\Psi _t(x)\) in its time variable, let the grid size go to zero and show that the resulting expression converges to (5.5). To that end, let \(t_i:=i2^{-k}\), \(i=0,1,\dots ,\lfloor t2^k\rfloor +1=:K_t^k\), where \(\lfloor \cdot \rfloor \) denotes the integer part, such that \(t_{\lfloor t2^k\rfloor }\le t< t_{K_t^k}\), and denote

$$\begin{aligned} \Psi _{t}^k(x):=2^k\int _{t_{i-1}}^{t_i}\Psi _s(x)\,\textrm{d}s,\quad t\in [t_{i-1},t_i), x\in {\mathbb {R}}. \end{aligned}$$
(5.6)

Then, we can form the telescoping sum

$$\begin{aligned} \langle \phi _n(\langle \tilde{X}_t,\Phi _{\cdot }^m \rangle ),\Psi _t \rangle&=\sum \limits _{i=1}^{K_t^k}\langle \phi _n(\langle \tilde{X}_{t_i},\Phi _{\cdot }^m \rangle ),\Psi _{t_i}^k \rangle - \langle \phi _n(\langle \tilde{X}_{t_{i-1}},\Phi _{\cdot }^m \rangle ),\Psi _{t_{i-1}}^k \rangle \nonumber \\&\quad -\langle \phi _n(\langle \tilde{X}_{t_{K_t^k}},\Phi _{\cdot }^m \rangle ),\Psi _{t_{K_t^k}}^k \rangle + \langle \phi _n(\langle \tilde{X}_{t},\Phi _{\cdot }^m \rangle ),\Psi _{t} \rangle . \end{aligned}$$
(5.7)

By the continuity of \(\tilde{X}\), \(\Psi \) and \(\phi _n\), the sum of the last two terms approaches zero as \(t_{K_t^k}\rightarrow t\) and thus as \(k\rightarrow \infty \).

For the terms in the summation, we use the continuity of \(\tilde{X}\) and the notation \(f(t_{i}-):=\lim \limits _{s<t_{i},s\rightarrow t_{i}}f(s)\) to get the equality

$$\begin{aligned}&\langle \phi _n(\langle \tilde{X}_{t_{i}},\Phi _{\cdot }^m \rangle ),\Psi _{t_{i}}^k \rangle =\langle \phi _n(\langle \tilde{X}_{t_{i}-},\Phi _{\cdot }^m \rangle ),\Psi _{t_{i}-}^k \rangle + \langle \phi _n(\langle \tilde{X}_{t_{i}},\Phi _{\cdot }^m \rangle ),\Psi _{t_{i}}^k-\Psi _{t_{i-1}}^k \rangle . \end{aligned}$$

By plugging this into (5.7), we get

$$\begin{aligned} \langle \phi _n(\langle \tilde{X}_t,\Phi _{\cdot }^m \rangle ),\Psi _t \rangle&=\sum \limits _{i=1}^{K_t^k}\langle \phi _n(\langle \tilde{X}_{t_{i}-},\Phi _{\cdot }^m \rangle ),\Psi _{t_{i}-}^k \rangle - \langle \phi _n(\langle \tilde{X}_{t_{i-1}},\Phi _{\cdot }^m \rangle ),\Psi _{t_{i-1}}^k \rangle \nonumber \\&\quad +\sum \limits _{i=1}^{K_t^k}\langle \phi _n(\langle \tilde{X}_{t_{i}},\Phi _{\cdot }^m \rangle ),\Psi _{t_{i}}^k-\Psi _{t_{i-1}}^k \rangle =:A_t^k+C_t^k. \end{aligned}$$

For \(A_t^k\), we get, by applying Itô’s formula, that

$$\begin{aligned} A_t^k&=\sum \limits _{i=1}^{K_t^k}\langle \phi _n(\langle \tilde{X}_{t_{i}},\Phi _{\cdot }^m \rangle ),\Psi _{t_{i-1}}^k \rangle - \langle \phi _n(\langle \tilde{X}_{t_{i-1}},\Phi _{\cdot }^m \rangle ),\Psi _{t_{i-1}}^k \rangle \\&\rightarrow I_1^{m,n}(t)+I_2^{m,n}(t)+I_3^{m,n}(t)+I_4^{m,n}(t) \quad \text {as }k\rightarrow \infty , \end{aligned}$$

by the continuity of \(\Psi \).

Thus, it remains to show that \(C_t^k\) converges to \(I_5^{m,n}(t)\). To that end, we use the construction (5.6) and Fubini’s theorem to conclude that

$$\begin{aligned} C_t^k&=\sum \limits _{i=1}^{K_t^k}\bigg \langle \phi _n(\langle \tilde{X}_{t_{i}},\Phi _{\cdot }^m \rangle ),2^k\int _{t_{i-1}}^{t_i} (\Psi _{s}-\Psi _{s-2^{-k}})\,\textrm{d}s \bigg \rangle \\&=\sum \limits _{i=1}^{K_t^k}\bigg \langle \phi _n(\langle \tilde{X}_{t_{i}},\Phi _{\cdot }^m \rangle ),2^k\int _{t_{i-1}}^{t_i} \int _{s-2^{-k}}^s {\dot{\Psi }}_r\,\textrm{d}r \,\textrm{d}s \bigg \rangle \\&=2^k\sum \limits _{i=1}^{K_t^k}\int _{t_{i-1}}^{t_i} \int _{s-2^{-k}}^s\langle \phi _n(\langle \tilde{X}_{t_{i}},\Phi _{\cdot }^m \rangle ), {\dot{\Psi }}_r \rangle \,\textrm{d}r \,\textrm{d}s\\&=2^k\sum \limits _{i=1}^{K_t^k}\int _{t_{i-1}}^{t_i} \int _{s-2^{-k}}^s\langle \phi _n(\langle \tilde{X}_{t_{i}},\Phi _{\cdot }^m \rangle ), {\dot{\Psi }}_r \rangle - \langle \phi _n(\langle \tilde{X}_{r},\Phi _{\cdot }^m \rangle ), {\dot{\Psi }}_r \rangle \,\textrm{d}r \,\textrm{d}s\\&\quad + 2^k\sum \limits _{i=1}^{K_t^k}\int _{t_{i-1}}^{t_i} \int _{s-2^{-k}}^s \langle \phi _n(\langle \tilde{X}_{r},\Phi _{\cdot }^m \rangle ), {\dot{\Psi }}_r \rangle \,\textrm{d}r \,\textrm{d}s. \end{aligned}$$

The first summand can be bounded by

$$\begin{aligned} \int _0^t \sup \limits _{u\le t,|u-r|\le 2^{-k}}\big | \langle \phi _n(\langle \tilde{X}_{u},\Phi _{\cdot }^m \rangle ), {\dot{\Psi }}_r \rangle - \langle \phi _n(\langle \tilde{X}_{r},\Phi _{\cdot }^m \rangle ), {\dot{\Psi }}_r \rangle \big |\,\textrm{d}r, \end{aligned}$$

which converges to zero a.s. as \(k\rightarrow \infty \) by the continuity and boundedness of \(\tilde{X}\). Furthermore, we get, by

$$\begin{aligned} 2^k\int _{s-2^{-k}}^s \langle \phi _n(\langle \tilde{X}_{r},\Phi _{\cdot }^m \rangle ), {\dot{\Psi }}_r \rangle \,\textrm{d}r\rightarrow \langle \phi _n(\langle \tilde{X}_{s},\Phi _{\cdot }^m \rangle ), {\dot{\Psi }}_s \rangle \quad \text {as } k\rightarrow \infty \end{aligned}$$

and the dominated convergence theorem, that

$$\begin{aligned} C_t^k \rightarrow \int _{0}^{t} \langle \phi _n(\langle \tilde{X}_{s},\Phi _{\cdot }^m \rangle ), {\dot{\Psi }}_s \rangle \,\textrm{d}s \quad \text {as } k\rightarrow \infty , \end{aligned}$$

which proves the proposition. \(\square \)
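
As a purely numerical illustration of the preceding proof (and not part of it), the following sketch replaces the random field by smooth, deterministic, hypothetical stand-ins and checks that the quantity \(C_t^k\) built from the piecewise time averages (5.6) indeed approaches the \(I_5^{m,n}\)-type integral as \(k\rightarrow \infty \).

```python
# Toy check (illustration only) of the time-discretization argument in Proposition 5.3:
# for deterministic stand-ins f and Psi, the telescoping quantity C_t^k built from the
# piecewise averages (5.6) converges to int_0^t <f(s,.), dPsi_s/ds> ds as k -> infinity.
import numpy as np

t = 0.73                                       # hypothetical terminal time in [0, T]
x = np.linspace(-1.0, 1.0, 801)                # spatial grid; Psi vanishes outside [-1, 1]
dx = x[1] - x[0]

def Psi(s):                                    # compactly supported in x, smooth in s
    return (1.0 + s) * np.maximum(1.0 - x**2, 0.0) ** 2

def dPsi(s):                                   # time derivative of Psi
    return np.maximum(1.0 - x**2, 0.0) ** 2

def f(s):                                      # bounded continuous stand-in for x -> phi_n(<X_s, Phi_x^m>)
    return np.cos(np.pi * s) / (1.0 + x**2)

def bracket(g1, g2):                           # <g1, g2> = int g1(x) g2(x) dx (Riemann sum)
    return float(np.sum(g1 * g2) * dx)

# Reference value int_0^t <f(s,.), dPsi_s> ds via a fine Riemann sum over midpoints in s.
s_fine = np.linspace(0.0, t, 4000, endpoint=False) + t / 8000.0
ref = sum(bracket(f(s), dPsi(s)) for s in s_fine) * (t / 4000.0)

for k in (4, 6, 8, 10):
    h = 2.0 ** (-k)
    K = int(np.floor(t * 2**k)) + 1            # K_t^k as in the proof
    C = 0.0
    for i in range(1, K + 1):
        # 2^k int_{t_{i-1}}^{t_i} (Psi_s - Psi_{s-2^{-k}}) ds, with the convention
        # Psi_s := Psi_0 for s < 0, approximated by averaging over midpoints.
        ss = (i - 1) * h + (np.arange(8) + 0.5) * h / 8.0
        avg = sum(Psi(s) - Psi(max(s - h, 0.0)) for s in ss) / 8.0
        C += bracket(f(i * h), avg)
    print(f"k={k:2d}:  C_t^k = {C:+.5f}   I_5-type limit = {ref:+.5f}")
```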

We will bound the expectation of the terms \(I_1^{m,n}\) to \(I_5^{m,n}\) as \(m,n\rightarrow \infty \) in Sect. 6.

6 Step 4: Passing to the limit

Before we can pass to the limit in (5.5), we need to choose, for fixed \(x\in {\mathbb {R}}\) and \(m\in {\mathbb {R}}_+\), a sequence \((\Phi _x^{m,n})_{n\in {\mathbb {N}}}\) of smooth functions \(\Phi _x^{m,n}\in C_0^\infty ({\mathbb {R}})\) which explicitly approximates the Dirac distribution \(\delta _x\). We will choose \(m=m^{(n)}\) depending on the index n of the Yamada–Watanabe approximation and, for notational simplicity, we suppress the m-dependence and simply write \((\Phi _x^n)_{n\in {\mathbb {N}}}\).

6.1 Explicit choice of the test function

We want \(\Phi _x^{n}\) to approximate the Dirac distribution centered at \(x\in {\mathbb {R}}\). Therefore, we choose it to coincide, when x and y are close, with the sum of two Gaussian kernels with means x and y, respectively, and standard deviation \(m^{-1}\). The reason for this construction is that we want to keep the mass of \(\Phi _x^{n}\) in \(B(x,\frac{1}{m^{(n)}})\) constant as \(n\rightarrow \infty \). For this purpose, we define

$$\begin{aligned} {\tilde{\Phi }}_x^{m}(y):=\frac{1}{\sqrt{2\pi m^{-2}}}e^{-\frac{(y-x)^2}{2m^{-2}}} \end{aligned}$$

and, to construct the compact support, let \({\tilde{\psi }}^{m,n}_x\) be smooth functions for \(n\in {\mathbb {N}}\) and fixed \(x\in {\mathbb {R}}\) with

$$\begin{aligned} {\tilde{\psi }}^{m,n}_x(y):= \left\{ \begin{array}{ll} 1, &{} \text {if }y\in B(x,\frac{1}{m}) \\ 0, &{} \text {if }y\in {\mathbb {R}}\setminus B(x,\frac{1}{m}+b_n) \\ \end{array} \right. \end{aligned}$$

and \(0\le {\tilde{\psi }}^{m,n}_x(y)\le 1\) for y elsewhere such that \({\tilde{\psi }}^{m,n}_x\) is smooth. Here, let \((b_n)_{n\in {\mathbb {N}}}\) be a sequence such that \(b_n>0\) and

$$\begin{aligned} \mu _n\bigg (B\bigg (x,\frac{1}{m}+b_n\bigg )\setminus B\bigg (x,\frac{1}{m}\bigg )\bigg )=\frac{a_n}{2}, \end{aligned}$$

where \(\mu _n(A):=\int _A {\tilde{\Phi }}_x^{m}(y)\,\textrm{d}y\) denotes the measure in terms of the above normal distribution and \(a_n:=e^{-\frac{n(n+1)}{2}}\) comes from the Yamada–Watanabe sequence. It is always possible to find such a \(b_n>0\) since the mass of \({\tilde{\Phi }}_x^{m}\) in \(B(x,\frac{1}{m})\) is \(\approx 0.6827\), which is independent of n, and \(\frac{a_n}{2}<0.3\) for all \(n\in {\mathbb {N}}\).

Then, we define

$$\begin{aligned} \Phi _x^{n}(y):=c\Big ({\tilde{\psi }}_x^{m,n}(y){\tilde{\Phi }}_x^{m}(y)+{\tilde{\Phi }}_y^{m}(x){\tilde{\psi }}_y^{m,n}(x)\Big ), \end{aligned}$$
(6.1)

with \(c:=1/(2m_\sigma )\), where \(m_\sigma \approx 0.6827\) denotes the mass of a normal distribution \({\mathcal {N}}(\mu ,\sigma ^2)\) inside the interval \([\mu -\sigma ,\mu +\sigma ]\). With that choice of c, \(\Phi _x^n\) approximates the Dirac distribution \(\delta _x\) as \(n\rightarrow \infty \). Note that \(\Phi _x^{n}(y)\) is symmetric in x and y. Furthermore, \(\Phi _x^{n}\) has the following properties, which we will need later. To that end, let us introduce the following stopping time for \(K>0\):

$$\begin{aligned} T_K:=\inf \bigg \{ t\in [0,T]:\, \sup \limits _{x\in [-\frac{1}{2},\frac{1}{2}]}( |X^1(t,x)|+|X^2(t,x)|)>K \bigg \}, \end{aligned}$$
(6.2)

where we use the convention \(\inf \emptyset :=\infty \). Note that, by the continuity of \(X^1\) and \(X^2\), \(T_K\rightarrow \infty \) a.s. as \(K\rightarrow \infty \).
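
Before turning to Proposition 6.1, the following purely illustrative sketch (not part of the proof) builds the test function (6.1) numerically, with a piecewise-linear stand-in for the smooth cutoff \({\tilde{\psi }}_x^{m,n}\) and with \(b_n\) obtained from the Gaussian distribution function; it also checks the normalization and the bounds appearing in Proposition 6.1 (ii) and (iii) below.

```python
# Illustration only: numerical construction of the test function (6.1).
import math
import numpy as np

def a(n: int) -> float:
    return math.exp(-n * (n + 1) / 2.0)            # the sequence a_n used in Sect. 6.1

def gauss(x, y, m):                                 # tilde Phi_x^m(y): density of N(x, m^{-2})
    return m / math.sqrt(2.0 * math.pi) * np.exp(-0.5 * (m * (np.asarray(y) - x)) ** 2)

def find_b(n: int, m: float) -> float:
    # Choose b_n such that the Gaussian mass of B(x, 1/m + b_n) \ B(x, 1/m) equals a_n / 2.
    target = math.erf(1.0 / math.sqrt(2.0)) + a(n) / 2.0
    lo, hi = 0.0, 20.0 / m
    for _ in range(80):                             # bisection on the radius increment
        mid = 0.5 * (lo + hi)
        if math.erf((1.0 / m + mid) * m / math.sqrt(2.0)) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def Phi(x, y, n, m):                                # the test function (6.1), symmetric in x and y
    b = find_b(n, m)
    cutoff = np.clip((1.0 / m + b - np.abs(np.asarray(y) - x)) / b, 0.0, 1.0)  # stand-in for tilde psi
    c = 1.0 / (2.0 * math.erf(1.0 / math.sqrt(2.0)))                            # c = 1/(2 m_sigma)
    return 2.0 * c * cutoff * gauss(x, y, m)        # the two summands of (6.1) coincide by symmetry

n, m = 3, 50.0                                      # hypothetical choices of n and m = m^(n)
xs = np.linspace(-0.5, 0.5, 20001)
dx = xs[1] - xs[0]
vals = Phi(xs, 0.0, n, m)                           # the map x -> Phi_x^n(0)
print("int Phi_x^n(0)   dx =", round(float(np.sum(vals) * dx), 4), " (<= 2, cf. Proposition 6.1 (iii))")
print("int Phi_x^n(0)^2 dx =", round(float(np.sum(vals**2) * dx), 2), " (of order m =", m, ", cf. Proposition 6.1 (ii))")
```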

Proposition 6.1

For fixed \(x\in {\mathbb {R}}\), \(\Phi _x^{n}\), as defined in (6.1), fulfills:

  1. (i)

    \(\Delta _{\theta ,x}\Phi _x^{n}(y)=\Delta _{\theta ,y}\Phi _x^{n}(y)\) for all \(x,y\in {\mathbb {R}}\), where \(\Delta _{\theta ,x}\) denotes \(\Delta _\theta \) acting on x;

  2. (ii)

    \(\int _{{\mathbb {R}}}\Phi _x^{n}(0)^2\,\textrm{d}x \lesssim m^{(n)}\) for all \(n\in {\mathbb {N}}\);

  3. (iii)

    \(\int _{{\mathbb {R}}}\Phi _x^{n}(0)\,\textrm{d}x\le 2\) for all \(n\in {\mathbb {N}}\);

  4. (iv)

    for all \((s,x)\in [0,T]\times {\mathbb {R}}\),

    $$\begin{aligned} \langle \tilde{X}_s,\Phi _x^{n} \rangle \rightarrow \tilde{X}(s,x) \quad \text {and}\quad \phi _n'(\langle \tilde{X}_s,\Phi _x^{n} \rangle )\langle \tilde{X}_s,\Phi _x^{n} \rangle \rightarrow |\tilde{X}(s,x)|,\quad \text {as } n\rightarrow \infty ; \end{aligned}$$
  5. (v)

    given \(s\in [0,T_K]\), there exists a constant \(C_K>0\), independent of n, such that, if

    $$\begin{aligned} \bigg | \int _{{\mathbb {R}}} \tilde{X}(s,y)\Phi _x^{n}(y)\,\textrm{d}y\bigg | \le a_{n-1} \end{aligned}$$

    holds, then there is some \({\hat{x}}\in B(x,\frac{1}{m})\) such that \(|\tilde{X}(s,{\hat{x}})|\le C_Ka_{n-1}\).

Proof

  1. (i)

    This statement is clear since \(\Phi _x^{n}(y)\) is symmetric in x and y.

  2. (ii)

    Within this computation only, we write \(c:=\frac{1}{\sqrt{2\pi }}\) to get

    $$\begin{aligned} \int _{{\mathbb {R}}}\Phi _x^{n}(0)^2\,\textrm{d}x \le \int _{{\mathbb {R}}}\bigg ( cm e^{-\frac{|x|^{2}}{2m^{-2}}}\bigg )^2\,\textrm{d}x \le c m \int _{{\mathbb {R}}} c m e^{-\frac{|x|^{2}}{2m^{-2}}} \,\textrm{d}x= c m. \end{aligned}$$
  3. (iii)

    \(\int _{{\mathbb {R}}}\Phi _x^{n}(0)\,\textrm{d}x \le 2\int _{{\mathbb {R}}}{\tilde{\Phi }}_x^{m}(0)\,\textrm{d}x=2\).

  4. (iv)

    From the construction of \(\Phi _x^{n}\) we get that

    $$\begin{aligned} \int _{{\mathbb {R}}}\tilde{X}(s,y)\Phi _x^{n}(y)\,\textrm{d}y \rightarrow \int _{{\mathbb {R}}}\tilde{X}(s,y)\delta _x(y)\,\textrm{d}y =\tilde{X}(s,x)\quad \text {as } n\rightarrow \infty . \end{aligned}$$

    Furthermore, we know that \(\phi _n'(x)x \rightarrow |x|\) as \(n\rightarrow \infty \) uniformly in \(x\in {\mathbb {R}}\) and thus the second statement follows.

  5. (v)

    Let us write

    $$\begin{aligned} \int _{{\mathbb {R}}} \tilde{X}(s,y)\Phi _x^{n}(y)\,\textrm{d}y =\int _{B(x,\frac{1}{m})} \tilde{X}(s,y)\Phi _x^{n}(y)\,\textrm{d}y +\int _{{\mathbb {R}}\setminus B(x,\frac{1}{m})} \tilde{X}(s,y)\Phi _x^{n}(y)\,\textrm{d}y.\nonumber \\ \end{aligned}$$
    (6.3)

By the construction of \({\tilde{\psi }}_x^{m,n}\) we know that \(\Phi _x^{n}\) vanishes outside the ball \(B(x,\frac{1}{m}+b_n)\), and, by the choice of \(b_n\), we know that the mass of \(\Phi _x^{n}\) in \(B(x,\frac{1}{m}+b_n)\setminus B(x,\frac{1}{m})\) is at most \(a_{n-1}/2\). Since we have that \(s\le T_K\), we can bound

$$\begin{aligned} \bigg | \int _{{\mathbb {R}}\setminus B(x,\frac{1}{m})} \tilde{X}(s,y)\Phi _x^{n}(y)\,\textrm{d}y\bigg | \le 2K \int _{{\mathbb {R}}\setminus B(x,\frac{1}{m})} \Phi _x^{n}(y)\,\textrm{d}y \le Ka_{n-1}. \end{aligned}$$

Thus, by assumption and (6.3), we have that

$$\begin{aligned} \bigg | \int _{B(x,\frac{1}{m})} \tilde{X}(s,y)\Phi _x^{n}(y)\,\textrm{d}y \bigg | \le (K+1)a_{n-1}, \end{aligned}$$

and, if \(\tilde{X}(s,\cdot )\) changes sign on \(B(x,\frac{1}{m})\), it has a zero \({\hat{x}}\) there by continuity and the claim holds trivially; otherwise, since \(\Phi _x^{n}\) is the sum of two Gaussian densities with standard deviation \(\frac{1}{m}\), we know that its mass inside the ball is \(\approx 2\cdot 0.6827\) and can conclude that

$$\begin{aligned} (K+1)a_{n-1} \ge \int _{B(x,\frac{1}{m})} \Phi _x^{n}(y)\,\textrm{d}y \inf \limits _{y\in B(x,\frac{1}{m})}|\tilde{X}(s,y)| \ge 1.3 \inf \limits _{y\in B(x,\frac{1}{m})}|\tilde{X}(s,y)|, \end{aligned}$$

and thus, the statement holds with \(C_K=(K+1)/1.3\). \(\square \)

6.2 Bounding the Yamada–Watanabe terms

We start with the summands \(I_1^{m,n}\), \(I_2^{m,n}\), \(I_3^{m,n}\) and \(I_5^{m,n}\) in (5.5) and will analyze \(I_4^{m,n}\) later. To that end, we need the following elementary estimate.

Lemma 6.2

If \(f\in C_0^2({\mathbb {R}})\) is non-negative and not identically zero, then

$$\begin{aligned} \sup \limits _{x\in {\mathbb {R}}:\,f(x)>0}\lbrace ( f'(x))^2f(x)^{-1} \rbrace \le 2\Vert f''\Vert _\infty . \end{aligned}$$

Proof

Choose some \(x\in {\mathbb {R}}\) with \(f(x)>0\) and assume w.l.o.g. that \(f'(x)>0\). Let

$$\begin{aligned} x_1:=\sup \lbrace x'<x:f'(x')=0 \rbrace , \end{aligned}$$

which exists due to the compact support of f. By the extended mean value theorem (see [7, Theorem 4.6]), applied to f and \((f')^2\), there exists an \(x_2\in (x_1,x)\) such that

$$\begin{aligned} ( f'(x)^2-f'(x_1)^2)f'(x_2)=( f(x)-f(x_1))\frac{\partial (f')^2}{\partial x}(x_2). \end{aligned}$$

By the choice of \(x_1\), we know that \(f'(x_2)>0\), and thus with \(f'(x_1)=0\),

$$\begin{aligned} f'(x)^2=( f(x)-f(x_1))2f''(x_2). \end{aligned}$$

Since f is strictly increasing on \((x_1,x)\) and non-negative, we conclude

$$\begin{aligned} \frac{f'(x)^2}{f(x)}\le \frac{f'(x)^2}{f(x)-f(x_1)}= 2f''(x_2)\le 2\Vert f''\Vert _\infty . \end{aligned}$$

\(\square \)
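
As a quick numerical illustration (not a substitute for the proof), the inequality of Lemma 6.2 can be checked for an explicit compactly supported \(C^2\) bump:

```python
# Illustration only: Lemma 6.2 for the compactly supported C^2 bump
# f(x) = (1 - x^2)^3 on [-1, 1] (and f = 0 outside).
import numpy as np

x = np.linspace(-1.0, 1.0, 200001)[1:-1]       # interior points, where f > 0
f = (1.0 - x**2) ** 3
fp = -6.0 * x * (1.0 - x**2) ** 2              # f'
fpp = (1.0 - x**2) * (30.0 * x**2 - 6.0)       # f''

lhs = float(np.max(fp**2 / f))                 # sup_{f > 0} (f')^2 / f  (analytically 9)
rhs = 2.0 * float(np.max(np.abs(fpp)))         # 2 ||f''||_inf           (analytically 12)
print(f"sup (f')^2/f = {lhs:.4f}  <=  2 ||f''||_inf = {rhs:.4f}")
```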

We want to take expectations on both sides of (5.5) and then send \(m,n\rightarrow \infty \).

Lemma 6.3

For any stopping time \({\mathcal {T}}\) and fixed \(t\in [0,T]\) we have:

  1. (i)

    \(\lim \limits _{m,n\rightarrow \infty } {\mathbb {E}}[ I_1^{m,n}(t\wedge {\mathcal {T}}) ]\le {\mathbb {E}}\big [ \int _0^{t\wedge {\mathcal {T}}}\int _{{\mathbb {R}}}|\tilde{X}(s,x)|\Delta _\theta \Psi _s(x)\,\textrm{d}x\,\textrm{d}s \big ]\);

  2. (ii)

    \( \lim _{m,n\rightarrow \infty } {\mathbb {E}}[ I_2^{m,n}(t\wedge {\mathcal {T}}) ]\lesssim \int _0^{t\wedge {\mathcal {T}}}\Psi _s(0)\,{\mathbb {E}}[|\tilde{X}(s,0)|]\,\textrm{d}s\);

  3. (iii)

    \({\mathbb {E}}[ I_3^{m,n}(t\wedge {\mathcal {T}})]=0\) for all \(m,n\in {\mathbb {N}}\);

  4. (iv)

    \(\lim \limits _{m,n\rightarrow \infty } {\mathbb {E}}[ I_5^{m,n}(t\wedge {\mathcal {T}})]= {\mathbb {E}}\big [ \int _0^{t\wedge {\mathcal {T}}}\int _{{\mathbb {R}}}|\tilde{X}(s,x)|{\dot{\Psi }}_s(x)\,\textrm{d}x\,\textrm{d}s \big ]\).

Proof

(i) We need to rewrite \(I_1^{m,n}\). We use the property of \(\Phi _x^n\) from Proposition 6.1 (i) and the product rule to get

$$\begin{aligned} I_1^{m,n}(t)&=\int _0^t \int _{{\mathbb {R}}} \phi _n'(\langle \tilde{X}_s,\Phi _x^n \rangle )\int _{{\mathbb {R}}} \tilde{X}(s,y)\Delta _{y,\theta } \Phi _{x}^n(y)\,\textrm{d}y\, \Psi _s(x) \,\textrm{d}x \,\textrm{d}s\\&=\int _0^t \int _{{\mathbb {R}}} \phi _n'(\langle \tilde{X}_s,\Phi _x^n \rangle )\Delta _{x,\theta }(\langle \tilde{X}_s, \Phi _{x}^n\rangle ) \Psi _s(x) \,\textrm{d}x \,\textrm{d}s\\&=2\alpha ^2\int _0^t \int _{{\mathbb {R}}} \phi _n'(\langle \tilde{X}_s,\Phi _x^n \rangle ) \Big (\frac{\partial }{\partial x}|x|^{-\theta }\frac{\partial }{\partial x}\langle \tilde{X}_s, \Phi _{x}^n\rangle \Big ) \Psi _s(x) \,\textrm{d}x \,\textrm{d}s\\&\quad +2\alpha ^2\int _0^t \int _{{\mathbb {R}}} \phi _n'(\langle \tilde{X}_s,\Phi _x^n \rangle ) |x|^{-\theta }\Big (\frac{\partial ^2}{\partial x^2}\langle \tilde{X}_s, \Phi _{x}^n\rangle \Big ) \Psi _s(x) \,\textrm{d}x \,\textrm{d}s. \end{aligned}$$

Now, we use integration by parts for both summands and the compact support of \(\Psi _s\) for every \(s\in [0,T]\) to get

$$\begin{aligned} I_1^{m,n}(t)&=-2\alpha ^2\int _0^t \int _{{\mathbb {R}}}\psi _n(\langle \tilde{X}_s,\Phi _x^n \rangle ) |x|^{-\theta }\bigg (\frac{\partial }{\partial x}\langle \tilde{X}_s, \Phi _{x}^n\rangle \bigg )^2 \Psi _s(x) \,\textrm{d}x \,\textrm{d}s\nonumber \\&\quad -2\alpha ^2\int _0^t \int _{{\mathbb {R}}} \phi _n'(\langle \tilde{X}_s,\Phi _x^n \rangle ) |x|^{-\theta }\frac{\partial }{\partial x}\langle \tilde{X}_s, \Phi _{x}^n\rangle \frac{\partial }{\partial x} \Psi _s(x) \,\textrm{d}x \,\textrm{d}s. \end{aligned}$$
(6.4)

By a very similar partial integration we see that

$$\begin{aligned}&\int _0^t \int _{{\mathbb {R}}} \phi _n'(\langle \tilde{X}_s,\Phi _x^n \rangle )\langle \tilde{X}_s,\Phi _x^n \rangle \Delta _\theta \Psi _s(x)\,\textrm{d}x\,\textrm{d}s\nonumber \\&\quad =-2\alpha ^2 \int _0^t \int _{{\mathbb {R}}} \psi _n(\langle \tilde{X}_s,\Phi _x^n \rangle ) \frac{\partial }{\partial x}\langle \tilde{X}_s,\Phi _x^n \rangle \langle \tilde{X}_s,\Phi _x^n \rangle |x|^{-\theta }\frac{\partial }{\partial x} \Psi _s(x)\,\textrm{d}x\,\textrm{d}s\nonumber \\&\quad \quad -2\alpha ^2 \int _0^t \int _{{\mathbb {R}}} \phi _n'(\langle \tilde{X}_s,\Phi _x^n \rangle )\frac{\partial }{\partial x}\langle \tilde{X}_s,\Phi _x^n \rangle |x|^{-\theta }\frac{\partial }{\partial x} \Psi _s(x)\,\textrm{d}x\,\textrm{d}s. \end{aligned}$$
(6.5)

By identifying that the second term in (6.4) coincides with the second term in (6.5), we can plug in the latter one into the first one to get

$$\begin{aligned} I_1^{m,n}(t)&=-2\alpha ^2\int _0^t \int _{{\mathbb {R}}}\psi _n(\langle \tilde{X}_s,\Phi _x^n \rangle ) |x|^{-\theta }\bigg (\frac{\partial }{\partial x}\langle \tilde{X}_s, \Phi _{x}^n\rangle \bigg )^2 \Psi _s(x) \,\textrm{d}x \,\textrm{d}s\nonumber \\&\quad +2\alpha ^2\int _0^t \int _{{\mathbb {R}}} \psi _n(\langle \tilde{X}_s,\Phi _x^n \rangle ) \frac{\partial }{\partial x}\langle \tilde{X}_s,\Phi _x^n \rangle \langle \tilde{X}_s,\Phi _x^n \rangle |x|^{-\theta }\frac{\partial }{\partial x} \Psi _s(x)\,\textrm{d}x\,\textrm{d}s\nonumber \\&\quad +\int _0^t \int _{{\mathbb {R}}} \phi _n'(\langle \tilde{X}_s,\Phi _x^n \rangle )\langle \tilde{X}_s,\Phi _x^n \rangle \Delta _\theta \Psi _s(x)\,\textrm{d}x\,\textrm{d}s\nonumber \\&=\int _0^t \big (I_{1,1}^{m,n}(s) + I_{1,2}^{m,n}(s) + I_{1,3}^{m,n}(s)\big )\,\textrm{d}s. \end{aligned}$$
(6.6)

In order to deal with the various parts of \(I_1^{m,n}\), we start by treating \(I_{1,1}^{m,n}\) and \(I_{1,2}^{m,n}\). Since we want to show that the sum of these parts is bounded above by a quantity vanishing as \(n\rightarrow \infty \), we define, for fixed \(s\in [0,t]\):

$$\begin{aligned} A^s&:=\bigg \{ x\in {\mathbb {R}}\,:\,\bigg ( \frac{\partial }{\partial x}\langle \tilde{X}_s,\Phi _x^n \rangle \bigg )^2 \Psi _s(x)\le \langle \tilde{X}_s,\Phi _x^n \rangle \frac{\partial }{\partial x} \langle \tilde{X}_s,\Phi _x^n \rangle \frac{\partial }{\partial x}\Psi _s(x) \bigg \}\\&\quad \quad \cap \lbrace x\in {\mathbb {R}}\,: \,\Psi _s(x)>0 \}\\&=A^{+,s} \cup A^{-,s} \cup A^{0,s}, \end{aligned}$$

with

$$\begin{aligned}&A^{+,s}:=A^s\cap \bigg \lbrace \frac{\partial }{\partial x}\langle \tilde{X}_s,\Phi _x^n \rangle >0 \bigg \rbrace ,\quad A^{-,s}:=A^s\cap \bigg \lbrace \frac{\partial }{\partial x}\langle \tilde{X}_s,\Phi _x^n \rangle <0 \bigg \rbrace \quad \text {and}\\&A^{0,s}:=A^s\cap \bigg \lbrace \frac{\partial }{\partial x}\langle \tilde{X}_s,\Phi _x^n \rangle =0 \bigg \rbrace . \end{aligned}$$

By Assumption 5.2 (i) and (iii), we can find an \(\varepsilon >0\) such that

$$\begin{aligned} B(0,\varepsilon )\subset \Gamma (t)\quad \text {and}\quad \inf \limits _{s\le t, x\in B(0,\varepsilon )}\Psi _s(x)>0. \end{aligned}$$
(6.7)

On \(A^{+,s}\) we have, by the definition of \(A^s\), that

$$\begin{aligned} 0<\bigg ( \frac{\partial }{\partial x}\langle \tilde{X}_s,\Phi _x^n \rangle \bigg )\Psi _s(x)\le \langle \tilde{X}_s,\Phi _x^n \rangle \frac{\partial }{\partial x}\Psi _s(x), \end{aligned}$$

and, therefore, we can bound the \(A^{+,s}\)-part of \(I_{1,2}^{m,n}\) for any \(t\in [0,T]\) by

$$\begin{aligned}&\int _0^t \int _{A^{+,s}} \psi _n(\langle \tilde{X}_s,\Phi _x^n \rangle ) \frac{\partial }{\partial x}\langle \tilde{X}_s,\Phi _x^n \rangle \langle \tilde{X}_s,\Phi _x^n \rangle |x|^{-\theta }\frac{\partial }{\partial x} \Psi _s(x)\,\textrm{d}x\,\textrm{d}s\\&\quad \le \int _0^t \int _{A^{+,s}} \psi _n(\langle \tilde{X}_s,\Phi _x^n \rangle )|x|^{-\theta } \langle \tilde{X}_s,\Phi _x^n \rangle ^2 \frac{(\frac{\partial }{\partial x} \Psi _s(x))^2}{\Psi _s(x)} \,\textrm{d}x\,\textrm{d}s\\&\quad \le \int _0^t \int _{A^{+,s}}\frac{2}{n} \mathbbm {1}_{\lbrace a_{n}\le |\langle \tilde{X}_s,\Phi _x^n \rangle | \le a_{n-1}\rbrace } |x|^{-\theta } \langle \tilde{X}_s,\Phi _x^n \rangle \frac{(\frac{\partial }{\partial x} \Psi _s(x))^2}{\Psi _s(x)} \,\textrm{d}x\,\textrm{d}s\\&\quad \le \frac{2a_{n-1}}{n}\int _0^t \int _{{\mathbb {R}}} \mathbbm {1}_{\lbrace \Psi _s(x)>0\rbrace } |x|^{-\theta } \frac{(\frac{\partial }{\partial x} \Psi _s(x))^2}{\Psi _s(x)} \,\textrm{d}x\,\textrm{d}s. \end{aligned}$$

Next, we split the integral by using \(\varepsilon \) from (6.7) to be able to apply Assumption 5.2 and Lemma 6.2 and get

$$\begin{aligned}&\int _0^t \int _{A^{+,s}} \psi _n(\langle \tilde{X}_s,\Phi _x^n \rangle ) \frac{\partial }{\partial x}\langle \tilde{X}_s,\Phi _x^n \rangle \langle \tilde{X}_s,\Phi _x^n \rangle |x|^{-\theta }\frac{\partial }{\partial x} \Psi _s(x)\,\textrm{d}x\,\textrm{d}s\\&\quad \le \frac{2a_{n-1}}{n}\int _0^t \bigg ( \int _{B(0,\varepsilon )} |x|^{-\theta } \frac{(\frac{\partial }{\partial x} \Psi _s(x))^2}{\Psi _s(x)} \,\textrm{d}x +2\Vert D^2\Psi _s\Vert _\infty \int _{\Gamma (t)\setminus B(0,\varepsilon )}|x|^{-\theta }\,\textrm{d}x\bigg ) \,\textrm{d}s\\&\quad =:\frac{2a_{n-1}}{n}C(\Psi ,t). \end{aligned}$$

Note that \(\varepsilon >0\) is fixed and thus the \(\varepsilon \)-dependence of \(C(\Psi ,t)\) does not matter.

On the set \(A^{-,s}\),

$$\begin{aligned} 0>\bigg ( \frac{\partial }{\partial x}\langle \tilde{X}_s,\Phi _x^n \rangle \bigg )\Psi _s(x) \ge \langle \tilde{X}_s,\Phi _x^n \rangle \frac{\partial }{\partial x}\Psi _s(x), \end{aligned}$$
(6.8)

holds and, since both terms in (6.8) are negative, we can use the same calculation as above to get

$$\begin{aligned} \int _0^t \int _{A^{-,s}} \psi _n(\langle \tilde{X}_s,\Phi _x^n \rangle ) \frac{\partial }{\partial x}\langle \tilde{X}_s,\Phi _x^n \rangle \langle \tilde{X}_s,\Phi _x^n \rangle |x|^{-\theta }\frac{\partial }{\partial x} \Psi _s(x)\,\textrm{d}x\,\textrm{d}s \le \frac{2a_{n-1}}{n}C(\Psi ,t). \end{aligned}$$

Finally, on the set \(A^{0,s}\),

$$\begin{aligned} \int _0^t \int _{A^{0,s}} \psi _n(\langle \tilde{X}_s,\Phi _x^n \rangle ) \frac{\partial }{\partial x}\langle \tilde{X}_s,\Phi _x^n \rangle \langle \tilde{X}_s,\Phi _x^n \rangle |x|^{-\theta }\frac{\partial }{\partial x} \Psi _s(x)\,\textrm{d}x\,\textrm{d}s=0 \end{aligned}$$

and thus

$$\begin{aligned} {\mathbb {E}}[I_{1,1}^{m,n}(t\wedge {\mathcal {T}})+I_{1,2}^{m,n}(t\wedge {\mathcal {T}})] \le 4\alpha ^2C(\Psi ,t)\frac{a_{n-1}}{n} \rightarrow 0\quad \text {as } n\rightarrow \infty . \end{aligned}$$

The remaining term in (6.6) that we have to deal with is

$$\begin{aligned} I_{1,3}^{m,n}=\int _0^t \int _{{\mathbb {R}}} \phi _n'(\langle \tilde{X}_s,\Phi _x^n \rangle )\langle \tilde{X}_s,\Phi _x^n \rangle \Delta _\theta \Psi _s(x)\,\textrm{d}x\,\textrm{d}s. \end{aligned}$$

Therefore, we apply Proposition 6.1 (iv) to get the pointwise convergence

$$\begin{aligned} \phi _n'(\langle \tilde{X}_s,\Phi _x^n \rangle )\langle \tilde{X}_s,\Phi _x^n \rangle \rightarrow \tilde{X}(s,x) \quad \text {as } m,n\rightarrow \infty . \end{aligned}$$

To complete the proof, we only need to show that the family \(\lbrace |\phi _n'(\langle \tilde{X}_s,\Phi _x^n \rangle )\langle \tilde{X}_s,\Phi _x^n \rangle |:m,n\in {\mathbb {N}}\rbrace \) is uniformly integrable on \([0,T]\times B(0,J(t))\times \Omega \), since \(\Psi \) vanishes outside \(B(0,J(t))\). First, by the inequality \(|\phi _n'|\le 1\), we can bound

$$\begin{aligned} |\phi _n'(\langle \tilde{X}_s,\Phi _x^n \rangle )\langle \tilde{X}_s,\Phi _x^n \rangle |\le \langle |\tilde{X}_s|,\Phi _x^n \rangle . \end{aligned}$$

Inserting the function \(\Phi _x^{n}\) from (6.1), taking expectations and using Proposition 4.6 (i), we can bound

$$\begin{aligned} {\mathbb {E}}[|\langle |\tilde{X}_s|,\Phi _x^n \rangle |] \le {\mathbb {E}}\bigg [ \int _{{\mathbb {R}}}|\tilde{X}(s,y)|2{\tilde{\Phi }}_x^{m}(y) \,\textrm{d}y \bigg ] \le 2\sup \limits _{y\in {\mathbb {R}}} {\mathbb {E}}[|\tilde{X}(s,y)|] \int _{{\mathbb {R}}}{\tilde{\Phi }}_x^{m^{(n)}}(y)\,\textrm{d}y<\infty , \end{aligned}$$
(6.9)

thus the claimed integrability holds and we get

$$\begin{aligned} \lim \limits _{m,n\rightarrow \infty } {\mathbb {E}}[ I_{1,3}^{m,n}(t\wedge {\mathcal {T}})] \le {\mathbb {E}}\bigg [ \int _0^{t\wedge {\mathcal {T}}}\int _{{\mathbb {R}}}|\tilde{X}(s,x)|\Delta _\theta \Psi _s(x)\,\textrm{d}x\,\textrm{d}s \bigg ] \end{aligned}$$

and, altogether, we have shown the statement.

(ii) Again the inequality \(|\phi _n'|\le 1\) and the Lipschitz continuity of \(\mu \) yield

$$\begin{aligned} {\mathbb {E}}[ I_2^{m,n}(t\wedge {\mathcal {T}})] \lesssim \int _0^{t\wedge {\mathcal {T}}}\bigg (\int _{{\mathbb {R}}}\Phi _x^n(0)\Psi _s(x)\,\textrm{d}x\bigg ){\mathbb {E}}[|\tilde{X}(s,0)|]\,\textrm{d}s. \end{aligned}$$

Sending \(m,n\rightarrow \infty \) gives the statement as \(\Phi ^n_x(0)\rightarrow \delta _0(x)\).

(iii) We set \(g_{m,n}(s):=\langle \phi _n'(\langle \tilde{X}_s,\Phi _\cdot ^n \rangle )\Phi _\cdot ^n(0),\Psi _s \rangle \). Then, by \(|\phi _n'|\le 1\), one has

$$\begin{aligned} |g_{m,n}(s)| =\bigg | \int _{{\mathbb {R}}} \phi _n'(\langle \tilde{X}_s,\Phi _x^n \rangle )\Phi _x^n(0)\Psi _s(x) \,\textrm{d}x \bigg | \le \Vert \Psi \Vert _\infty \int _{{\mathbb {R}}} 2{\tilde{\Phi }}_0^{m}(x) \,\textrm{d}x =2\Vert \Psi \Vert _\infty \end{aligned}$$

by the construction of \(\Phi ^{n}\) in (6.1). Thus, \(I_3^{m,n}(t\wedge {\mathcal {T}})\) is a continuous local martingale with quadratic variation

$$\begin{aligned} \langle I_3^{m,n} \rangle _{t\wedge {\mathcal {T}}}&\le 4\Vert \Psi \Vert _\infty ^2 \int _0^{t\wedge {\mathcal {T}}}( \sigma (s,X^1(s,0))-\sigma (s,X^2(s,0)))^2\,\textrm{d}s\\&\lesssim \int _0^{t\wedge {\mathcal {T}}}(|X^1(s,0)|+|X^2(s,0)|+2)^2\,\textrm{d}s \end{aligned}$$

by the growth condition on \(\sigma \) and, consequently, by Proposition 4.6,

$$\begin{aligned} {\mathbb {E}}[\langle I_3^{m,n} \rangle _{t\wedge {\mathcal {T}}}]<\infty , \end{aligned}$$

such that \(I_3^{m,n}(t\wedge {\mathcal {T}})\) is a square integrable martingale with mean 0.

(iv) We want to calculate the limit as \(n,m\rightarrow \infty \) of the term

$$\begin{aligned} {\mathbb {E}}[I_5^{m,n}(t\wedge {\mathcal {T}})] ={\mathbb {E}}\bigg [ \int _0^{t\wedge {\mathcal {T}}} \langle \phi _n(\langle \tilde{X}_s,\Phi _\cdot ^n \rangle ),{\dot{\Psi }}_s \rangle \,\textrm{d}s \bigg ]. \end{aligned}$$

Therefore, the same argument as in (i), together with the uniform integrability in (6.9) and the boundedness of \({\dot{\Psi }}_s\) as a continuous function with compact support, yields

$$\begin{aligned} \lim \limits _{m,n\rightarrow \infty }{\mathbb {E}}[ I_5^{m,n}(t\wedge {\mathcal {T}}) ] = {\mathbb {E}}\bigg [ \int _0^{t\wedge {\mathcal {T}}}\int _{{\mathbb {R}}}|\tilde{X}(s,x)|{\dot{\Psi }}_s(x)\,\textrm{d}x\,\textrm{d}s \bigg ]. \end{aligned}$$

\(\square \)

6.3 Key argument: Bounding the quadratic variation term

What is left to bound in (5.5) is the expectation of the quadratic variation term \(I_4^{m,n}\). The main ingredient for doing so is the following Theorem 6.4.

Let us first introduce some definitions that we need to formulate Theorem 6.4. Recall the definition of \(T_K\) in (6.2). Moreover, we define a semimetric on \([0,T]\times {\mathbb {R}}\) by

$$\begin{aligned} d((t,x),(t',x')):=|t-t'|^\alpha +|x-x'|,\quad t,t'\in [0,T],x,x'\in {\mathbb {R}}, \end{aligned}$$

and, for \(K>0\), \(N\in {\mathbb {N}}\) and \(\zeta \in (0,1)\), the set

$$\begin{aligned} Z_{K,N,\zeta }&:= \left\{ (t,x)\in [0,T]\times [-1/2,1/2]: \begin{array}{l} t\le T_K, \,|x|\le 2^{-N\alpha -1},\\ |t-{\hat{t}}|\le 2^{-N},\,|x-{\hat{x}}|\le 2^{-N\alpha }, \\ \text {for some }({\hat{t}},{\hat{x}})\in [0,T_K]\times [-1/2,1/2]\\ \text {satisfying }|\tilde{X}({\hat{t}},{\hat{x}})|\le 2^{-N\zeta } \end{array} \right\} . \end{aligned}$$
(6.10)
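
For readers who prefer an operational reading of the cluttered definition (6.10), the following is a literal transcription (illustration only; the finite set of candidate points \(({\hat{t}},{\hat{x}})\) and all names are ad hoc) of the semimetric d and of membership in \(Z_{K,N,\zeta }\):

```python
# Illustration only: literal transcription of the semimetric d and of the set Z_{K,N,zeta} in (6.10).
from typing import Callable, Iterable, Tuple

def d(p: Tuple[float, float], q: Tuple[float, float], alpha: float) -> float:
    """Semimetric d((t,x),(t',x')) = |t - t'|^alpha + |x - x'|."""
    (t, x), (tp, xp) = p, q
    return abs(t - tp) ** alpha + abs(x - xp)

def in_Z(t: float, x: float, X_tilde: Callable[[float, float], float],
         T_K: float, N: int, zeta: float, alpha: float,
         candidates: Iterable[Tuple[float, float]]) -> bool:
    """Check the defining conditions of Z_{K,N,zeta} over a finite set of candidate
    points (hat_t, hat_x) in [0, T_K] x [-1/2, 1/2]."""
    if not (t <= T_K and abs(x) <= 2.0 ** (-N * alpha - 1)):
        return False
    return any(
        ht <= T_K and abs(hx) <= 0.5
        and abs(t - ht) <= 2.0 ** (-N) and abs(x - hx) <= 2.0 ** (-N * alpha)
        and abs(X_tilde(ht, hx)) <= 2.0 ** (-N * zeta)
        for (ht, hx) in candidates
    )
```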

The following theorem improves the regularity of \(\tilde{X}(t,x)\) when |x| is small. For two measures \({\mathbb {Q}}_1\) and \({\mathbb {Q}}_2\) on some measurable space \(({\tilde{\Omega }},\tilde{{\mathscr {F}}})\), we call \({\mathbb {Q}}_1\) absolutely continuous with respect to \({\mathbb {Q}}_2\), denoted by \({\mathbb {Q}}_1\ll {\mathbb {Q}}_2\), if \({\mathscr {N}}_{1}\supseteq {\mathscr {N}}_{2}\), where \({\mathscr {N}}_i\subseteq \tilde{{\mathscr {F}}}\) denotes the collection of \({\mathbb {Q}}_i\)-null sets in \(({\tilde{\Omega }},\tilde{{\mathscr {F}}})\).

Theorem 6.4

Suppose Assumption 2.1 and let \(\tilde{X}:=X^1-X^2\), where \(X^i\) is a solution of the SPDE (4.6) with \(X^i\in C([0,T]\times {\mathbb {R}})\) a.s. for \(i=1,2\). Let \(\zeta \in (0,1)\) satisfy:

$$\begin{aligned} \exists N_\zeta&= N_\zeta (K,\omega )\in {\mathbb {N}}\text { a.s. such that, for any }N\ge N_\zeta \text { and any }(t,x)\in Z_{K,N,\zeta }:\nonumber \\&\left. \begin{array}{c} |t'-t|\le 2^{-N},t'\le T_K\\ |y-x|\le 2^{-N\alpha } \end{array} \right\} \quad \Rightarrow \quad |\tilde{X}(t,x)-\tilde{X}(t',y)|\le 2^{-N\zeta }. \end{aligned}$$
(6.11)

Let \(\frac{1}{2}-\alpha<\zeta ^1 < ( \zeta \xi +\frac{1}{2}-\alpha ) \wedge 1\). Then, there is an \(N_{\zeta ^1}(K,\omega ,\zeta )\in {\mathbb {N}}\) a.s. such that, for any \(N\ge N_{\zeta ^1}\) and any \((t,x)\in Z_{K,N,\zeta ^1}\):

$$\begin{aligned}&\left. \begin{array}{c} |t'-t|\le 2^{-N},t'\le T_K\\ |y-x|\le 2^{-N\alpha } \end{array} \right\} \quad \Rightarrow \quad |\tilde{X}(t,x)-\tilde{X}(t',y)|\le 2^{-N\zeta ^1}. \end{aligned}$$
(6.12)

Moreover, there is some measure \({\mathbb {Q}}^{X,K}\) on \((\Omega ,{\mathscr {F}})\) such that \({\mathbb {Q}}^{X,K}\ll \mathbb {P}\) on \((\Omega ,{\mathscr {F}})\) and \(\mathbb {P}\ll {\mathbb {Q}}^{X,K}\) on \((\Omega ,{\mathscr {F}}^K)\), where \({\mathscr {F}}^K:=\lbrace A\cap \lbrace T_K\ge T \rbrace :A\in {\mathscr {F}}\rbrace \subseteq {\mathscr {F}}\) is the \(\sigma \)-algebra restricted to \(\lbrace T_K\ge T \rbrace \), and there are constants \(R>1\) and \(\delta ,C,c_2>0\) depending on \(\zeta \) and \(\zeta ^1\) (not on K) and \(N(K)\in {\mathbb {N}}\) such that

$$\begin{aligned} {\mathbb {Q}}^{X,K}(N_{\zeta ^1}\ge N)\le C\bigg ( {\mathbb {Q}}^{X,K}\bigg (N_\zeta \ge \frac{N}{R}\bigg )+ Ke^{-c_22^{N\delta }} \bigg ) \end{aligned}$$
(6.13)

for \(N\ge N(K)\).

Proof of Theorem 6.4

From the assumptions of Theorem 6.4 and Assumption 2.1, we are given the variables \(\alpha \in [0,\frac{1}{2})\), \(\zeta \in (0,1)\), \(\xi \in (\frac{1}{2(1-\alpha )},1]\) and \(\zeta _1<(\zeta \xi +\frac{1}{2}-\alpha )\wedge 1\). Moreover, fix arbitrary \((t,x),(t',y)\in [0,T_K]\times [-\frac{1}{2},\frac{1}{2}]\) such that w.l.o.g. \(t\le t'\) and given some \(N\ge N_\zeta \),

$$\begin{aligned} |t-t'|\le \varepsilon :=2^{-N},\quad |x|\le 2^{-N\alpha }\quad \text {and}\quad |x-y|\le 2^{-N\alpha }. \end{aligned}$$
(6.14)

We define small numbers \(\delta ,\delta ',\delta _1,\delta _2>0\) in the following way. We choose \(\delta \in (0,\frac{1}{2}-\alpha )\) such that

$$\begin{aligned} \zeta _1<\bigg ( \bigg (\zeta \xi +\frac{1}{2}-\alpha \bigg )\wedge 1\bigg )-\alpha \delta <1. \end{aligned}$$

Fixing \(\delta '\in (0,\delta )\), we choose \(\delta _1\in (0,\delta ')\) sufficiently small that

$$\begin{aligned} \zeta _1<\bigg ( \bigg (\zeta \xi +\frac{1}{2}-\alpha \bigg )\wedge 1\bigg )-\alpha \delta +\alpha \delta _1<1. \end{aligned}$$
(6.15)

Furthermore, we define \(\delta _2>0\) sufficiently small such that

$$\begin{aligned} \delta '-\delta _2>\delta _1, \end{aligned}$$
(6.16)

and we set

$$\begin{aligned} p:=\bigg ( \bigg (\zeta \xi +\frac{1}{2}-\alpha \bigg )\wedge 1\bigg )-\alpha \bigg (\frac{1}{2}-\alpha \bigg )+\alpha \delta _1 \end{aligned}$$
(6.17)

and

$$\begin{aligned} {\hat{p}}&:=p+\alpha (\delta '-\delta _2-\delta _1)=\bigg (\bigg (\zeta \xi +\frac{1}{2}-\alpha \bigg )\wedge 1\bigg )-\alpha \bigg (\frac{1}{2}-\alpha \bigg )+\alpha (\delta '-\delta _2). \end{aligned}$$
(6.18)

By (6.16), we see that \({\hat{p}}>p\).

Moreover, we introduce

$$\begin{aligned} D^{x,y,t,t'}(s):=|p_{t-s}(x)-p_{t'-s}(y)|^2|\tilde{X}(s,0)|^{2\xi } \quad \text {and}\quad D^{x,t'}(s):=p_{t'-s}(x)^2|\tilde{X}(s,0)|^{2\xi }. \end{aligned}$$
(6.19)

Our goal is to bound the following expression, where the measure \({\mathbb {Q}}\), as in the statement of the theorem, and the random variable \(N_1:=N_1(\omega )\) will be determined explicitly later (the latter in (6.37)):

$$\begin{aligned}&{\mathbb {Q}}\bigg ( |\tilde{X}(t,x)-\tilde{X}(t,y)|\ge |x-y|^{\frac{1}{2}-\alpha -\delta }\varepsilon ^p,(t,x)\in Z_{K,N,\zeta },N\ge N_1\bigg )\nonumber \\&+{\mathbb {Q}}\bigg ( |\tilde{X}(t',x)-\tilde{X}(t,x)|\ge |t'-t|^{\alpha ( \frac{1}{2}-\alpha -\delta )}\varepsilon ^p,\,(t,x)\in Z_{K,N,\zeta },N\ge N_1\bigg )\nonumber \\&\quad \le {\mathbb {Q}}\bigg ( |\tilde{X}(t,x)-\tilde{X}(t,y)|\ge |x-y|^{\frac{1}{2}-\alpha -\delta }\varepsilon ^p,(t,x)\in Z_{K,N,\zeta },N\ge N_1,\nonumber \\&\quad \quad \quad \quad \int _0^t D^{x,y,t,t}(s)\,\textrm{d}s \le |x-y|^{1-2\alpha -2\delta '}\varepsilon ^{2p} \bigg )\nonumber \\&\quad \quad + {\mathbb {Q}}\bigg ( |\tilde{X}(t',x)-\tilde{X}(t,x)|\ge |t'-t|^{\alpha ( \frac{1}{2}-\alpha -\delta )}\varepsilon ^p,(t,x)\in Z_{K,N,\zeta },N\ge N_1,\nonumber \\&\quad \quad \quad \quad \int _t^{t'}D^{x,t'}(s)\,\textrm{d}s+ \int _0^t D^{x,x,t,t'}(s)\,\textrm{d}s \le (t'-t)^{2\alpha ( \frac{1}{2}-\alpha -\delta ')}\varepsilon ^{2p} \bigg )\nonumber \\&\quad \quad + {\mathbb {Q}}\bigg ( \int _0^t D^{x,y,t,t}(s)\,\textrm{d}s> |x-y|^{1-2\alpha -2\delta '}\varepsilon ^{2p},(t,x)\in Z_{K,N,\zeta },N\ge N_1 \bigg )\nonumber \\&\quad \quad + {\mathbb {Q}}\bigg ( \int _t^{t'}D^{x,t'}(s)\,\textrm{d}s+ \int _0^t D^{x,x,t,t'}(s)\,\textrm{d}s >(t'-t)^{2\alpha ( \frac{1}{2}-\alpha -\delta ' )}\varepsilon ^{2p} ,\nonumber \\&\quad \quad \quad \quad \quad (t,x)\in Z_{K,N,\zeta },N\ge N_1\bigg )\nonumber \\&\quad =:Q_1+Q_2+Q_3+Q_4. \end{aligned}$$
(6.20)

We will proceed in three steps to prove the theorem:

Step (i): explicitly choosing a measure \({\mathbb {Q}}^{X,K}\) as in the statement of the theorem, such that \(Q_1\) and \(Q_2\) in (6.20) fulfill \(Q_1+Q_2\le ce^{-c'|t'-t|^{-2\alpha \delta ''}}\) for some \(c,c'>0\);

Step (ii): showing that \(Q_3=Q_4=0\) holds w.r.t. \(\mathbb {P}\) (and hence also w.r.t. \({\mathbb {Q}}^{X,K}\), since \({\mathbb {Q}}^{X,K}\ll \mathbb {P}\)), if we choose the random variable \(N_1:=cN_\zeta \) for some large enough deterministic constant \(c>0\);

Step (iii): completing the proof, using Step (i) and Step (ii).

Step (i): Consider first the term \(Q_1\). Note that on the measurable space \((\Omega ,{\mathscr {F}}^K)\), where the restricted \(\sigma \)-algebra \({\mathscr {F}}^K\) on \(\lbrace T_K\ge T\rbrace \) is defined in the statement of the theorem, Assumption 2.1 (iii) yields the existence of some constant \(C_K>0\) such that

$$\begin{aligned} \bigg | \frac{\mu (s,X^1(s,0))-\mu (s,X^2(s,0))}{\sigma (s,X^1(s,0))-\sigma (s,X^2(s,0))}\bigg |\le C_K<\infty , \end{aligned}$$

for all \(s\in [0,T]\) \(\mathbb {P}\)-a.s. on \((\Omega ,{\mathscr {F}}^K)\) and, thus, we can apply Girsanov’s theorem (see [19, Theorem 3.5.1]) with the adapted process \((L_t)_{t\in [0,T]}\) defined by

$$\begin{aligned} L_t:=-\int _0^t \frac{\mu (s,X^1(s,0))-\mu (s,X^2(s,0))}{\sigma (s,X^1(s,0))-\sigma (s,X^2(s,0))}\,\textrm{d}B_s, \end{aligned}$$

whose stochastic exponential \(({\mathscr {E}}(L)_t)_{t\in [0,T]}\) is a martingale due to Novikov’s condition (see [19, Proposition 3.5.12]). We define \({\mathbb {Q}}^{X,K}\) via the Radon–Nikodym derivative \(\frac{\textrm{d}{\mathbb {Q}}^{X,K}}{\textrm{d}\mathbb {P}}:={\mathscr {E}}(L)_T\). Under \({\mathbb {Q}}^{X,K}\), the process \((\tilde{B}^{X,K}_t)_{t\in [0,T]}\) is a Brownian motion, where \(\tilde{B}^{X,K}_t=B_t-\langle B,L \rangle _t=B_t+A_t \) with \(A_t:=\int _0^t \frac{\mu (s,X^1(s,0))-\mu (s,X^2(s,0))}{\sigma (s,X^1(s,0))-\sigma (s,X^2(s,0))}\,\textrm{d}s\) on \([0,T_K]\).

To avoid measurability problems we re-define \({\mathbb {Q}}^{X,K}\) as a measure on \((\Omega ,{\mathscr {F}})\) by setting

$$\begin{aligned} {\mathbb {Q}}^{X,K}(A):={\mathbb {Q}}^{X,K}(A\cap \lbrace T_K\ge T\rbrace ) \end{aligned}$$

for \(A\in {\mathscr {F}}\). Girsanov’s theorem implies that \({\mathbb {Q}}^{X,K}\ll \mathbb {P}\) on \((\Omega ,{\mathscr {F}})\) and \(\mathbb {P}\ll {\mathbb {Q}}^{X,K}\) on \((\Omega ,{\mathscr {F}}^K)\). With this notation, we see that

$$\begin{aligned}&\tilde{X}(t,x)-\tilde{X}(t,y)\\&\quad =\int _0^t p_{t-s}^\theta (x)\Big ( \sigma (s,X^1(s,0))-\sigma (s,X^2(s,0))\Big )\,\textrm{d}( B_s+ A_s)\\&\quad \quad - \int _0^t p_{t-s}^\theta (y)\Big ( \sigma (s,X^1(s,0))-\sigma (s,X^2(s,0))\Big )\,\textrm{d}( B_s+ A_s)\\&\quad =\int _0^t \big ( p_{t-s}^\theta (x)-p_{t-s}^\theta (y)\big )\Big ( \sigma (s,X^1(s,0))-\sigma (s,X^2(s,0))\Big )\,\textrm{d}\tilde{B}_s^{X,K}. \end{aligned}$$

For fixed \(t\in [0,T]\) and \(x,y\in [-\frac{1}{2},\frac{1}{2}]\), the process

$$\begin{aligned} S_{\tilde{t}}^{x,y}= \int _0^{\tilde{t}} ( p_{t-s}^\theta (x)-p_{t-s}^\theta (y))(\sigma (s,X^1(s,0))-\sigma (s,X^2(s,0)))\,\textrm{d}\tilde{B}^{X,K}_s,\quad \tilde{t}\in [0,t], \end{aligned}$$

is a local \({\mathbb {Q}}^{X,K}\)-martingale with quadratic variation

$$\begin{aligned} \langle S^{x,y}\rangle _{\tilde{t}}&=\int _0^{\tilde{t}} ( p_{t-s}^\theta (x)-p_{t-s}^\theta (y))^2 (\sigma (s,X^1(s,0))-\sigma (s,X^2(s,0)))^2\,\textrm{d}s\nonumber \\&\le C_\sigma ^2 \int _0^{\tilde{t}} (p_{t-s}^\theta (x)-p_{t-s}^\theta (y))^2|\tilde{X}(s,0)|^{2\xi }\,\textrm{d}s\nonumber \\&=C_\sigma ^2 \int _0^{\tilde{t}}D^{x,y,t,t}(s)\,\textrm{d}s. \end{aligned}$$

Thus, working under \({\mathbb {Q}}^{X,K}\) in (6.20), we can bound the term \(Q_1\) as follows:

$$\begin{aligned} Q_1&\le {\mathbb {Q}}^{X,K}\bigg (| S_t^{x,y} |\ge |x-y|^{\frac{1}{2}-\alpha -\delta }\varepsilon ^p,\,\int _0^t D^{x,y,t,t}(s)\,\textrm{d}s \le |x-y|^{1-2\alpha -2\delta '}\varepsilon ^{2p} \bigg )\\&\le {\mathbb {Q}}^{X,K}\big ( |S_{t}^{x,y}|\ge |x-y|^{\frac{1}{2}-\alpha -\delta }\varepsilon ^p,\, \langle S^{x,y}\rangle _{t}\le C_\sigma ^2 |x-y|^{1-2\alpha -2\delta '}\varepsilon ^{2p}\big ) \end{aligned}$$

by the definition of \(D^{x,y,t,t}\).

Next, we apply the Dambis–Dubins–Schwarz theorem, which states that the local \({\mathbb {Q}}^{X,K}\)-martingale \(S_{\tilde{t}}^{x,y}\) can be embedded into a \({\mathbb {Q}}^{X,K}\)-Brownian motion \((\tilde{W}_{\tilde{t}})_{\tilde{t}\in [0,t]}\) such that \(S_{\tilde{t}}^{x,y}=\tilde{W}_{\langle S^{x,y} \rangle _{\tilde{t}}}\) holds for all \(\tilde{t}\in [0,t]\). Thus, with \(z:=C_\sigma ^2 |x-y|^{1-2\alpha -2\delta '}\varepsilon ^{2p}\) we obtain

$$\begin{aligned} Q_1&\le {\mathbb {Q}}^{X,K}\bigg ( | \tilde{W}_{\langle S^{x,y}\rangle _t}| \ge |x-y|^{\frac{1}{2}-\alpha -\delta }\varepsilon ^p, \, \langle S^{x,y}\rangle _{t}\le z \bigg )\\&\le {\mathbb {Q}}^{X,K}\bigg (\sup \limits _{0\le s\le z}| \tilde{W}_{s}| \ge |x-y|^{\frac{1}{2}-\alpha -\delta }\varepsilon ^p \bigg ), \end{aligned}$$

since the first event always implies the second one. Thus, with the notation \(\tilde{W}^*(t):=\sup \limits _{0\le s\le t}|\tilde{W}_{s}|\), the scaling property of Brownian motion and the reflection principle, we get

$$\begin{aligned} Q_1&\le {\mathbb {Q}}^{X,K}\big ( \tilde{W}^*(C_\sigma ^2 |x-y|^{1-2\alpha -2\delta '}\varepsilon ^{2p})\ge |x-y|^{\frac{1}{2}-\alpha -\delta }\varepsilon ^p \big )\\&= {\mathbb {Q}}^{X,K}\big ( \tilde{W}^*(1)C_\sigma |x-y|^{\frac{1}{2}-\alpha -\delta '}\varepsilon ^{p} \ge |x-y|^{\frac{1}{2}-\alpha -\delta }\varepsilon ^p \big )\\&= 2{\mathbb {Q}}^{X,K}\big ( \tilde{W}(1) \ge C_\sigma ^{-1} |x-y|^{-\delta ''} \big ) \end{aligned}$$

with \(\delta '':=\delta -\delta '>0\) and, applying the concentration inequality \({\mathbb {Q}}^{X,K}(N>a)\le e^{-\frac{a^2}{2}}\) for a standard normally distributed random variable N, we get

$$\begin{aligned} Q_1 \le 2e^{-\frac{1}{2C_\sigma ^2}|x-y|^{-2\delta ''}} =:ce^{-c'|x-y|^{-2\delta ''}}, \end{aligned}$$
(6.21)

for some constants \(c,c'>0\). With a very similar argument, using the probability measure \({\mathbb {Q}}^{X,K}\) and proceeding as above, we derive the bound

$$\begin{aligned} Q_2\le ce^{-c'|t'-t|^{-2\alpha \delta ''}}, \end{aligned}$$

where c and \(c'\) are the same constants as in (6.21).
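
The chain of estimates for \(Q_1\) and \(Q_2\) rests on three standard ingredients: Brownian scaling, the reflection principle and the Gaussian tail bound. As a purely illustrative sanity check (with hypothetical parameters, not part of the proof), the following Monte Carlo sketch verifies a bound of exactly this Gaussian-tail form, namely \(\mathbb {P}(\sup _{s\le z}|W_s|\ge \lambda )\le 2e^{-\lambda ^2/(2z)}\) for a standard Brownian motion W.

```python
# Illustration only: Monte Carlo check of P(sup_{s<=z} |W_s| >= lam) <= 2 exp(-lam^2/(2z)).
import numpy as np

rng = np.random.default_rng(0)
z, lam = 0.04, 0.40                        # hypothetical values with lam / sqrt(z) = 2
n_paths, n_steps = 100_000, 1_000
dt = z / n_steps

w = np.zeros(n_paths)                      # current value of each simulated Brownian path
running_max = np.zeros(n_paths)            # running maximum of |W| along each path
for _ in range(n_steps):
    w += rng.normal(0.0, np.sqrt(dt), size=n_paths)
    np.maximum(running_max, np.abs(w), out=running_max)

empirical = float(np.mean(running_max >= lam))   # slightly below the continuous-time probability
bound = float(2.0 * np.exp(-lam**2 / (2.0 * z)))
print(f"P(sup_(s<=z) |W_s| >= lam) ~ {empirical:.4f}   <=   2 exp(-lam^2/(2z)) = {bound:.4f}")
```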

Step (ii): We want to show that the terms \(Q_3\) and \(Q_4\) in (6.20) vanish \(\mathbb {P}\)-a.s., if we choose \(N_1\) large enough. Therefore, we consider \((t,x)\in Z_{K,N,\zeta }\) and \((t',y)\) as in (6.14) and begin by showing the following bound on \(|\tilde{X}(s,0)|\) for \(s\le t'\):

$$\begin{aligned} |\tilde{X}(s,0)| \le \left\{ \begin{array}{ll} 3\varepsilon ^\zeta &{} \text {if } s\in [t-\varepsilon ,t'], \\ (4+K)2^{\zeta N_\zeta }(t-s)^\zeta &{} \text {if } s\in [0,t-\varepsilon ]. \\ \end{array}\right. \end{aligned}$$
(6.22)

To see (6.22), we choose for \((t,x)\in Z_{K,N,\zeta }\) some \(({\hat{t}},{\hat{x}})\) as in the definition of \(Z_{K,N,\zeta }\) in (6.10) such that

$$\begin{aligned} |t-{\hat{t}}|\le \varepsilon =2^{-N},\quad |x-{\hat{x}}|\le \varepsilon ^\alpha \quad \text {and}\quad |\tilde{X}({\hat{t}},{\hat{x}})|\le 2^{-N\zeta }=\varepsilon ^\zeta . \end{aligned}$$

Then, for \(s\in [t-\varepsilon ,t']\), we see that \(|t-s|\le \varepsilon \) by (6.14). Thus, by (6.11), we obtain that

$$\begin{aligned} |\tilde{X}(s,0)|&\le |\tilde{X}({\hat{t}},{\hat{x}})|+|\tilde{X}({\hat{t}},{\hat{x}})-\tilde{X}(t,x)|+|\tilde{X}(t,x)-\tilde{X}(s,0)|\\&\le 3\cdot 2^{-N\zeta }=3\varepsilon ^\zeta . \end{aligned}$$

For \(s\in [t-2^{-N_\zeta },t-\varepsilon ]\), we can choose some \(\tilde{N}\ge N_\zeta \) such that \(2^{-(\tilde{N}+1)}\le t-s\le 2^{-\tilde{N}}\) due to \(t-\varepsilon \ge s\), i.e. \(t-s\ge 2^{-N}\). Thus, we get

$$\begin{aligned} |\tilde{X}(s,0)|&\le |\tilde{X}({\hat{t}},{\hat{x}})|+|\tilde{X}({\hat{t}},{\hat{x}})-\tilde{X}(t,x)|+|\tilde{X}(t,x)-\tilde{X}(s,0)|\\&\le 2^{-N\zeta }+2^{-N\zeta }+2^{-\tilde{N}\zeta }\le 2\cdot (t-s)^\zeta +2^\zeta 2^{-(\tilde{N}+1)\zeta }\\&\le 4(t-s)^\zeta . \end{aligned}$$

Finally, for \(s\in [0,t-2^{-N_\zeta }]\) with \(s\le T_K\), so that \(\tilde{X}\) is bounded by \(K>0\) and \(t-s\ge 2^{-N_\zeta }\), we can bound

$$\begin{aligned} |\tilde{X}(s,0)| \le K= K(t-s)^{-\zeta }(t-s)^\zeta \le K2^{N_\zeta \zeta }(t-s)^\zeta , \end{aligned}$$

which shows the bound (6.22).

For \(Q_3\), using (6.22) and the definition of \(D^{x,y,t,t'}\) in (6.19), we can bound the term inside \(Q_3\) by

$$\begin{aligned} \int _0^t D^{x,y,t,t}(s)\,\textrm{d}s&\le 3^{2\xi }\int _{t-\varepsilon }^t ( p_{t-s}(x)-p_{t-s}(y))^2 \varepsilon ^{2\zeta \xi }\,\textrm{d}s\nonumber \\&\quad + (4+K)^{2\xi }2^{2\xi \zeta N_\zeta }\int _0^{t-\varepsilon }( p_{t-s}(x)-p_{t-s}(y))^2 (t-s)^{2\zeta \xi }\,\textrm{d}s\nonumber \\&=:D_1(t)+D_2(t). \end{aligned}$$
(6.23)

Now, by Lemma 4.5 with \(\beta =\frac{1}{2}-\alpha -\delta '\) and \(\max (|x|,|y|)\le 2\varepsilon ^\alpha \), we can bound

$$\begin{aligned} D_1(t)&\lesssim \varepsilon ^{2\zeta \xi }|x-y|^{1-2\alpha }\max (|x|,|y|)^{(\frac{1}{\alpha }-1)2\beta }\nonumber \\&\lesssim \varepsilon ^{2\zeta \xi +2\delta '}|x-y|^{1-2\alpha -2\delta '}\varepsilon ^{(1-\alpha )2\beta }\nonumber \\&=\varepsilon ^{2(\frac{1}{2}-\alpha (\frac{3}{2}-\alpha )+\alpha \delta '+\xi \zeta )}|x-y|^{1-2\alpha -2\delta '}\nonumber \\&\le \varepsilon ^{2{\hat{p}}}|x-y|^{1-2\alpha -2\delta '} \end{aligned}$$
(6.24)

by the definition of \({\hat{p}}\) in (6.18). For \(D_2(t)\), we use Lemma 4.2 with \(\beta =1\) to bound

$$\begin{aligned} D_2(t)&\lesssim 2^{2\xi \zeta N_\zeta }\int _0^{t-\varepsilon } |x-y|^2 (t-s)^{2\zeta \xi -2\alpha -2}\varepsilon ^{2(1-\alpha )} \,\textrm{d}s\nonumber \\&= 2^{2\xi \zeta N_\zeta }|x-y|^{1-2\alpha -2\delta '}|x-y|^{1+2\alpha +2\delta '} \varepsilon ^{2(1-\alpha )} \bigg [ \frac{(t-s)^{-2\alpha -1+2\xi \zeta }}{-2\alpha -1+2\xi \zeta } \bigg ]_0^{t-\varepsilon }\nonumber \\&\lesssim 2^{2\xi \zeta N_\zeta }|x-y|^{1-2\alpha -2\delta '}\varepsilon ^{\alpha (1+2\alpha +2\delta ')} \varepsilon ^{2(1-\alpha )} \varepsilon ^{((-2\alpha -1+2\xi \zeta )\wedge 0)-2\alpha \delta _2}\nonumber \\&= 2^{2\xi \zeta N_\zeta }|x-y|^{1-2\alpha -2\delta '}\varepsilon ^{2{\hat{p}}}. \end{aligned}$$
(6.25)

Hence, by inserting (6.24) and (6.25) into (6.23), we obtain

$$\begin{aligned} \int _0^t D^{x,y,t,t}(s)\,\textrm{d}s\lesssim 2^{2\xi \zeta N_\zeta }|x-y|^{1-2\alpha -2\delta '}\varepsilon ^{2{\hat{p}}}. \end{aligned}$$
(6.26)

For \(Q_4\), we can use (6.22) to bound the first summand in the definition of \(Q_4\) by

$$\begin{aligned} \int _t^{t'}D^{x,t'}(s)\,\textrm{d}s&=\int _{t}^{t'}p_{t'-s}(x)^2|\tilde{X}(s,0)|^{2\xi }\,\textrm{d}s\nonumber \\&\lesssim \int _{t}^{t'}(t'-s)^{-2\alpha }\varepsilon ^{2\zeta \xi }\,\textrm{d}s\nonumber \\&\lesssim \varepsilon ^{2\zeta \xi }|t'-t|^{1-2\alpha }\nonumber \\&\lesssim \varepsilon ^{2\xi \zeta }\varepsilon ^{2(\frac{1}{2}-\alpha -\alpha (\frac{1}{2}-\alpha )+\alpha \delta ')}|t'-t|^{2\alpha (\frac{1}{2}-\alpha -\delta ')}\nonumber \\&\lesssim \varepsilon ^{2{\hat{p}}}|t'-t|^{2\alpha (\frac{1}{2}-\alpha -\delta ')}, \end{aligned}$$
(6.27)

where we used that \(|t-t'|\le \varepsilon \) and \({\hat{p}}\le \xi \zeta +\frac{1}{2}-\alpha -\alpha (\frac{1}{2}-\alpha )+\alpha \delta '\). We split the second summand as before:

$$\begin{aligned} \int _0^t D^{x,x,t,t'}(s)\,\textrm{d}s=\int _{t-\varepsilon }^t D^{x,x,t,t'}(s)\,\textrm{d}s+\int _0^{t-\varepsilon } D^{x,x,t,t'}(s)\,\textrm{d}s=:D_3(t)+D_4(t).\nonumber \\ \end{aligned}$$
(6.28)

By Lemma 4.4, we estimate

$$\begin{aligned} D_3(t)&=\int _{t-\varepsilon }^t|p_{t-s}(x)-p_{t'-s}(x)|^2|\tilde{X}(s,0)|^{2\xi }\,\textrm{d}s\nonumber \\&\lesssim \varepsilon ^{2\xi \zeta }|t'-t|^{1-2\alpha }\nonumber \\&\lesssim \varepsilon ^{2{\hat{p}}}|t'-t|^{2\alpha (\frac{1}{2}-\alpha -\delta ')}, \end{aligned}$$
(6.29)

where the last estimate follows as in (6.27).

For \(D_4(t)\), using the inequality \((a+b)^2\le 2(a^2+b^2)\), we obtain

$$\begin{aligned} D_4(t)&=\int _0^{t-\varepsilon }|p_{t-s}(x)-p_{t'-s}(x)|^2|\tilde{X}(s,0)|^{2\xi }\,\textrm{d}s\nonumber \\&\le 2(4+K)^{2\xi }2^{2\xi \zeta N_\zeta }\int _0^{t-\varepsilon } \bigg | ( (t-s)^{-\alpha }-(t'-s)^{-\alpha })e^{-\frac{|x|^{1/\alpha }}{t-s}} \bigg |^2(t-s)^{2\xi \zeta }\,\textrm{d}s\nonumber \\&\quad +2(4+K)^{2\xi }2^{2\xi \zeta N_\zeta }\int _0^{t-\varepsilon }\bigg | (t'-s)^{-\alpha } \bigg ( e^{-\frac{|x|^{1/\alpha }}{t-s}}-e^{-\frac{|x|^{1/\alpha }}{t'-s}}\bigg ) \bigg |^2(t-s)^{2\xi \zeta }\,\textrm{d}s\nonumber \\&=:D_{4,1}+D_{4,2}. \end{aligned}$$
(6.30)

For \(D_{4,1}\), we use the inequality

$$\begin{aligned} ((t-s)^{-\alpha }-(t'-s)^{-\alpha })e^{-\frac{|x|^{1/\alpha }}{t-s}}\le (t-s)^{-\alpha -1}(t'-t). \end{aligned}$$
(6.31)

To see this, note that

$$\begin{aligned} e^{-\frac{|x|^{1/\alpha }}{t-s}} \le \bigg ( \frac{t-s}{t'-s} \bigg )^\alpha e^{-\frac{|x|^{1/\alpha }}{t-s}} +\frac{t'-t}{t-s}, \end{aligned}$$

which holds since

$$\begin{aligned} \bigg (\frac{t-s}{t'-s} \bigg )^\alpha +\frac{t'-t}{t-s}&\ge \frac{t-s}{t'-s} +\frac{t'-t}{t-s}\nonumber \\&= \frac{t-s}{t'-s} +\frac{t'-s}{t-s}-1\ge 1 \end{aligned}$$
(6.32)

since \(x\mapsto \frac{1}{x} +x\ge 2\) on (0, 1]. Thus, using (6.31), we get

$$\begin{aligned} D_{4,1}&\lesssim 2^{2\xi \zeta N_\zeta }\int _0^{t-\varepsilon }(t-s)^{-2\alpha -2}(t'-t)^2(t-s)^{2\xi \zeta }\,\textrm{d}s\nonumber \\&\lesssim 2^{2\xi \zeta N_\zeta }(t'-t)^2 \varepsilon ^{((-2\alpha -1+\xi \zeta )\wedge 0)-2\alpha \delta _2}\nonumber \\&\lesssim 2^{2\xi \zeta N_\zeta }(t'-t)^{2\alpha (\frac{1}{2}-\alpha -\delta ')}\varepsilon ^{2-2\alpha (\frac{1}{2}-\alpha -\delta ')} \varepsilon ^{((-2\alpha -1+\xi \zeta )\wedge 0)-2\alpha \delta _2}\nonumber \\&= 2^{2\xi \zeta N_\zeta }(t'-t)^{\alpha (1-2\alpha -2\delta ')} \varepsilon ^{2((-\alpha +\frac{1}{2}+\xi \zeta )\wedge 1)-\alpha \delta _2-\alpha (\frac{1}{2}-\alpha -\delta ')}\nonumber \\&= 2^{2\xi \zeta N_\zeta }(t'-t)^{\alpha (1-2\alpha -2\delta ')}\varepsilon ^{2{\hat{p}}}. \end{aligned}$$
(6.33)

For \(D_{4,2}\), we use the inequality \(|e^{-a}-e^{-b}|\le |a-b|\) and then the bound \(\frac{1}{t-s}-\frac{1}{t'-s}\le \frac{t'-t}{(t-s)^2}\), which holds as in (6.32), to get

$$\begin{aligned} D_{4,2}&\lesssim 2^{2\xi \zeta N_\zeta }\int _0^{t-\varepsilon }(t'-s)^{-2\alpha }\bigg | \frac{|x|^{1/\alpha }}{t-s}-\frac{|x|^{1/\alpha }}{t'-s} \bigg |^2 (t-s)^{2\xi \zeta }\,\textrm{d}s\nonumber \\&\lesssim 2^{2\xi \zeta N_\zeta }|x|^{2/\alpha }\int _0^{t-\varepsilon }(t'-s)^{-2\alpha }(t-s)^{-4}(t'-t)^2 (t-s)^{2\xi \zeta }\,\textrm{d}s\nonumber \\&\lesssim 2^{2\xi \zeta N_\zeta }|x|^{2/\alpha }\varepsilon ^{-3-2\alpha +2\xi \zeta } (t'-t)^2\nonumber \\&\lesssim 2^{2\xi \zeta N_\zeta }|x|^{2/\alpha }\varepsilon ^{-3-2\alpha +2\xi \zeta } (t'-t)^{2\alpha (\frac{1}{2}-\alpha -\delta ')}\varepsilon ^{2-2\alpha (\frac{1}{2}-\alpha -\delta ')}\nonumber \\&= 2^{2\xi \zeta N_\zeta }|x|^{2/\alpha } \varepsilon ^{2(\frac{1}{2}-\alpha +\xi \zeta -\alpha (\frac{1}{2}-\alpha )+\alpha \delta ')}(t'-t)^{\alpha (1-2\alpha -2\delta ')}\nonumber \\&= 2^{2\xi \zeta N_\zeta }|x|^{2/\alpha } \varepsilon ^{2{\hat{p}}}(t'-t)^{\alpha (1-2\alpha -2\delta ')}. \end{aligned}$$
(6.34)

Hence, by (6.27) and by plugging (6.29), (6.30), (6.33) and (6.34) into (6.28), we obtain

$$\begin{aligned} \int _t^{t'}D^{x,t'}(s)\,\textrm{d}s + \int _0^t D^{x,x,t,t'}(s)\,\textrm{d}s\lesssim 2^{2\xi \zeta N_\zeta } |t'-t|^{\alpha (1-2\alpha -2\delta ')}\varepsilon ^{2{\hat{p}}}. \end{aligned}$$
(6.35)

Combining (6.26) and (6.35), let \(C>0\) denote the maximum of the two generic constants occurring in these estimates. We conclude that, if we can ensure that

$$\begin{aligned} C2^{2\xi \zeta N_\zeta }\varepsilon ^{2{\hat{p}}} < \varepsilon ^{2p}, \end{aligned}$$
(6.36)

then the conditions inside \(Q_3\) and \(Q_4\) are never fulfilled and, thus, we get that \(Q_3=Q_4=0\). Since \(\varepsilon =2^{-N}\), (6.36) is equivalent to

$$\begin{aligned} C<2^{2N({\hat{p}}-p)-2N_\zeta \xi \zeta }, \end{aligned}$$

which, since \({\hat{p}}-p>0\), is fulfilled for all

$$\begin{aligned} N>\frac{2\xi \zeta N_\zeta + \log _2(C)}{2({\hat{p}}-p)}. \end{aligned}$$

Therefore, we can find a deterministic constant \(c_{K,\zeta ,\delta ,\delta _1,\delta ',\delta _2}\) such that, for all

$$\begin{aligned} N\ge N_1(\omega ):= c_{K,\zeta ,\delta ,\delta _1,\delta ',\delta _2}N_\zeta (\omega ), \end{aligned}$$
(6.37)

\(Q_3=Q_4=0\) holds.

Step (iii): We discretize \(\tilde{X}(t,y)\) for \(t\in [0,T_K]\) and \(y\in [-\frac{1}{2},\frac{1}{2}]\) as follows:

$$\begin{aligned} M_{n,N,K}:=\max \Big \{&\Big | \tilde{X}(j2^{-n},(z+1)2^{-\alpha n}) - \tilde{X}(j2^{-n},z2^{-\alpha n})\Big |\\&\quad +\Big | \tilde{X}((j+1)2^{-n},z2^{-\alpha n}) - \tilde{X}(j2^{-n},z2^{-\alpha n}) \Big |:\\&\quad |z|\le 2^{\alpha n-1},(j+1)2^{-n}\le T_K,j\in {\mathbb {Z}}_+,z\in {\mathbb {Z}}, \\&\quad (j2^{-n},z2^{-\alpha n})\in Z_{K,N,\zeta } \Big \}. \end{aligned}$$

Moreover, we define the event

$$\begin{aligned} A_N:=\big \lbrace \omega \in \Omega :\, \text {for some }n\ge N,\, M_{n,N,K}\ge 2^{-n\alpha (\frac{1}{2}-\alpha -\delta )}2^{-Np},\,N\ge N_1 \big \rbrace . \end{aligned}$$

Then, we get, by using (6.20), Step (i) and Step (ii), that for all \(N\ge N_1\) as in (6.37):

$$\begin{aligned} {\mathbb {Q}}^{X,K}\bigg ( \bigcup \limits _{N'\ge N}A_{N'}\bigg )&\le \sum \limits _{N'=N}^{\infty }\sum \limits _{n=N'}^{\infty }{\mathbb {Q}}^{X,K} (M_{n,N',K}\ge 2\cdot 2^{-n\alpha (\frac{1}{2}-\alpha -\delta )}2^{-Np})\\&\lesssim \sum _{N'=N}^\infty \sum _{n=N'}^\infty 2^{(\alpha +1)n}e^{-c'2^{n\delta ''\alpha }}, \end{aligned}$$

since the total number of partition elements in each \(M_{n,N,K}\) is at most \(2~\cdot ~2^{\alpha n-1}\cdot ~K\cdot 2^n \lesssim K2^{(\alpha +1)n}\) (if \(T_K=T\)). Furthermore, we used that \(|t-{\hat{t}}|\le 2^{-n}\) and \(|x-{\hat{x}}|\le 2^{-n\alpha }\), which follow from the construction of \(M_{n,N,K}\).

We use the inequality \(2^{x+y}\ge 2^x+2^y\) for \(x,y\ge 0\) to estimate

$$\begin{aligned} {\mathbb {Q}}^{X,K}\bigg (\bigcup \limits _{N'\ge N}A_{N'}\bigg )&\lesssim \sum \limits _{N'=N}^{\infty }\sum \limits _{n=0}^{\infty } 2^{(\alpha +1)(n+N')}e^{-c'2^{(n+N')\delta ''\alpha }}\nonumber \\&\le \sum \limits _{N'=N}^{\infty }2^{(\alpha +1)N'}\sum \limits _{n=0}^{\infty } 2^{(\alpha +1)n}e^{-c'(2^{n\delta ''\alpha }+2^{N'\delta ''\alpha })}\nonumber \\&= \sum \limits _{N'=N}^{\infty }2^{(\alpha +1)N'}e^{-c'2^{N'\delta ''\alpha }}\sum \limits _{n=0}^{\infty } 2^{(\alpha +1)n}e^{-c'2^{n\delta ''\alpha }}\nonumber \\&= 2^{(\alpha +1)N}e^{-c'2^{N\delta ''\alpha }}\sum \limits _{N'=0}^{\infty }2^{(\alpha +1)N'}e^{-c'2^{N'\delta ''\alpha }}\sum \limits _{n=0}^{\infty } 2^{(\alpha +1)n}e^{-c'2^{n\delta ''\alpha }}\nonumber \\&\lesssim e^{(\alpha +1)N}e^{-c'2^{N\delta ''\alpha }}\\&\lesssim e^{-c_22^{N\delta ''\alpha }}, \end{aligned}$$

for some constant \(c_2>0\), where the two series in the fourth line converge, and are thus finite, by the ratio test

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\Big | 2^{\alpha +1}e^{-c'(2^{(n+1)\delta ^{\prime \prime }\alpha }-2^{n\delta ^{\prime \prime }\alpha })}\Big |=0. \end{aligned}$$
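
Indeed, since \(\delta ''\alpha >0\), the exponent difference appearing in this ratio factorizes as

$$\begin{aligned} 2^{(n+1)\delta ''\alpha }-2^{n\delta ''\alpha }=2^{n\delta ''\alpha }\big ( 2^{\delta ''\alpha }-1 \big )\longrightarrow \infty \quad \text {as }n\rightarrow \infty , \end{aligned}$$

so the ratio of consecutive terms indeed tends to zero.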

Therefore, we get for

$$\begin{aligned} N_2(\omega ):=\min \lbrace N\in {\mathbb {N}} :\,\omega \in A_{N'}^c\,\forall N'\ge N \rbrace , \end{aligned}$$

where the superscript c denotes the complement of a set, that

$$\begin{aligned} {\mathbb {Q}}^{X,K}(N_2>N)={\mathbb {Q}}^{X,K}\bigg (\bigcup \limits _{N'\ge N}A_{N'}\bigg )\lesssim e^{-c_22^{N\delta ''\alpha }}, \end{aligned}$$
(6.38)

and thus \(N_2<\infty \) \({\mathbb {Q}}^{X,K}\)-a.s.

We fix some \(m\in {\mathbb {N}}\) with \(m>3/\alpha \) and choose \(N(\omega )\ge (N_2(\omega )+m)\vee (N_1+m)\), which is finite a.s., such that the following holds:

$$\begin{aligned} \forall n\ge N: M_{n,N,K}< 2^{-n\alpha (\frac{1}{2}-\alpha -\delta )}2^{-Np}\quad \text {a.s.} \end{aligned}$$
(6.39)

and \(Q_3=Q_4=0\).

Furthermore, we choose \((t,x)\in Z_{K,N,\zeta }\) and \((t',y)\) such that

$$\begin{aligned} d((t',y),(t,x)):=|t'-t|^\alpha +|y-x|\le 2^{-N\alpha }, \end{aligned}$$

and we choose points near \((t,x)\) as follows: for \(n\ge N\), we denote by \(t_n\in 2^{-n}{\mathbb {Z}}_+\) and \(x_n\in 2^{-\alpha n}{\mathbb {Z}}\) the unique points such that

$$\begin{aligned}&t_n\le t<t_n+2^{-n},\\&x_n\le x<x_n+2^{-\alpha n} \text { for }x\ge 0 \quad \text {or}\quad x_n-2^{-\alpha n}< x \le x_n \text { for }x< 0. \end{aligned}$$

We define \(t_n',y_n\) analogously. Let \(({\hat{t}},{\hat{x}})\) be the point from the definition of \(Z_{K,N,\zeta }\) with \(|\tilde{X}({\hat{t}},{\hat{x}})|\le 2^{-N\zeta }\). Then, for \(n\ge N\), we observe that

$$\begin{aligned} d((t_n',y_n),({\hat{t}},{\hat{x}}))&\le d((t_n',y_n),(t',y)) + d((t',y),(t,x)) + d((t,x),({\hat{t}},{\hat{x}}))\nonumber \\&\le |t_n'-t|^\alpha +|y-y_n|+2^{-N\alpha }+2\cdot 2^{-N\alpha }\nonumber \\&\le 6\cdot 2^{-N\alpha }<2^{3-N\alpha }=2^{-\alpha (N-\frac{3}{\alpha })}\nonumber \\&<2^{-\alpha (N-m)}, \end{aligned}$$
(6.40)

which implies \((t_n',y_n)\in Z_{K,N-m,\zeta }\). We will use this to formulate our final bound. Moreover, by the continuity of \(\tilde{X}\) and the construction of the \(t_n,x_n\), we get that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\tilde{X}(t_n,x_n)=\tilde{X}(t,x)\quad \text {a.s.} \end{aligned}$$

and the same for \(t_n',y_n\). Thus, by the triangle inequality:

$$\begin{aligned} |\tilde{X}(t,x)-\tilde{X}(t',y)|&=\bigg | \sum \limits _{n=N}^\infty \bigg ( (\tilde{X}(t_{n+1},x_{n+1})-\tilde{X}(t_n,x_n)) + ( \tilde{X}(t'_{n},y_{n})-\tilde{X}(t'_{n+1},y_{n+1}) ) \bigg )\\&\quad +\tilde{X}(t_N,x_N)-\tilde{X}(t_N',y_N) \bigg |\\&\le \sum \limits _{n=N}^\infty | \tilde{X}(t_{n+1},x_{n+1})-\tilde{X}(t_n,x_n)| + | \tilde{X}(t'_{n},y_{n})-\tilde{X}(t'_{n+1},y_{n+1}) |\\&\quad + |\tilde{X}(t_N,x_N)-\tilde{X}(t_N',y_N)|. \end{aligned}$$

Since \(t_n,x_n\) and \(t'_n,y_n\) are of the form of the discrete points in \(M_{n,N,K}\) and since we have (6.40), we can continue to estimate

$$\begin{aligned} | \tilde{X}(t,x)-\tilde{X}(t',y) | \le \sum \limits _{n=N}^\infty 2M_{n+1,N-m,K}+ |\tilde{X}(t_N,x_N)-\tilde{X}(t_N',y_N)|. \end{aligned}$$

Because \(|t-t'|\le 2^{-N}\) and by our construction of \(t_N,t'_N\), these points must be equal or adjacent in \(2^{-N}{\mathbb {Z}}_+\), and analogously for \(x_N,y_N\). Thus, we get

$$\begin{aligned} | \tilde{X}(t,x)-\tilde{X}(t',y) |&\le \sum \limits _{n=N}^\infty 2M_{n+1,N-m,K} + M_{N,N-m,K}\\&\le 2\sum \limits _{n=N}^\infty M_{n,N-m,K}\\&\lesssim \sum \limits _{n=N}^\infty 2^{-n\alpha (\frac{1}{2}-\alpha -\delta )}2^{-(N-m)p}\\&= 2^{-(N-m)p} \sum \limits _{n=0}^\infty 2^{-(n+N)\alpha (\frac{1}{2}-\alpha -\delta )}\\&\lesssim 2^{mp}2^{-N(\alpha (\frac{1}{2}-\alpha -\delta )+p)} \\&< 2^{-N\zeta _1}, \end{aligned}$$

where the last inequality follows from \(\alpha (\frac{1}{2}-\alpha -\delta )+p>\zeta _1\), which holds by (6.15) and (6.17), and holds for all

$$\begin{aligned} N\ge N_3 \end{aligned}$$
(6.41)

for some deterministic \(N_3\) chosen large enough that the factor \(2^{mp}\) is dominated; in particular, \(N_3\) depends only on \(p\). Therefore, we have proven Theorem 6.4 with

$$\begin{aligned} N_{\zeta _1}(\omega ):=\max \lbrace N_2(\omega )+m,N_{\zeta }(\omega )+m,c_{K,\zeta ,\delta ,\delta _1,\delta ',\delta _2}N_{\zeta }(\omega )+m,N_3 \rbrace \end{aligned}$$

where \(N_{\zeta _1}\) is chosen in this way due to (6.39), Step (ii), (6.37) and (6.41). If we denote \(R':=1\vee c_{K,\zeta ,\delta ,\delta _1,\delta ',\delta _2}\) and consider some \(N\ge 2m\vee N_3\), (6.38) implies

$$\begin{aligned} {\mathbb {Q}}^{X,K}(N_{\zeta _1}\ge N)&\le {\mathbb {Q}}^{X,K}(N_{2}\ge N-m)+2{\mathbb {Q}}^{X,K}\bigg (N_{\zeta }\ge \frac{N-m}{R'}\bigg )\\&\le CKe^{-c_22^{(N-m)\delta ''\alpha }}+2{\mathbb {Q}}^{X,K}(N_\zeta \ge N/R) \end{aligned}$$

for \(R=2R'\) and \(C>0\) not depending on K, which shows the probability bound in (6.13) by re-defining \(\delta :=\delta ''\alpha >0\) and thus completes the proof. \(\square \)

In the following, we sometimes write a.s. when we mean \(\mathbb {P}\)-a.s.; since \({\mathbb {Q}}^{X,K}\ll \mathbb {P}\), this also implies \({\mathbb {Q}}^{X,K}\)-a.s.

Corollary 6.5

With the hypotheses of Theorem 6.4 and \(\frac{1}{2}-\alpha<\zeta < \frac{\frac{1}{2}-\alpha }{1-\xi }\wedge 1\), there is an a.s. finite positive random variable \(C_{\zeta ,K}(\omega )\) such that, for any \(\varepsilon \in (0,1]\), \(t\in [0,T_K]\) and \(|x|<\varepsilon ^\alpha \), if \(|\tilde{X}(t,{\hat{x}})|\le \varepsilon ^\zeta \) for some \(|{\hat{x}}-x|\le \varepsilon ^\alpha \), then

$$\begin{aligned} |\tilde{X}(t,y)|\le C_{\zeta ,K}\varepsilon ^\zeta , \end{aligned}$$
(6.42)

whenever \(|x-y|\le \varepsilon ^\alpha \).

Moreover, there are constants \(\delta , C_1, c_2,\tilde{R}>0\), depending on \(\zeta \) (but not on K), and \(r_0(K)>0\) such that

$$\begin{aligned} {\mathbb {Q}}^{X,K}(C_{\zeta ,K}\ge r) \le C_1 \bigg [ {\mathbb {Q}}^{X,K}\bigg ( N_{\frac{\alpha }{2}(\frac{1}{2}-\alpha )}\ge \frac{1}{\tilde{R}}\log _2\bigg (\frac{r-6}{K+1}\bigg ) \bigg ) +K e^{ -c_2\big ( \frac{r-6}{K+1} \big )^\delta } \bigg ]\nonumber \\ \end{aligned}$$
(6.43)

for all \(r\ge r_0(K)> 6+(K+1)\), where \({\mathbb {Q}}^{X,K}\) is the probability measure from Theorem 6.4.

Proof

We will derive the statement by an appropriate induction. We start by choosing

$$\begin{aligned} \zeta _0:=\frac{\alpha }{2}\bigg (\frac{1}{2}-\alpha \bigg ), \end{aligned}$$

to be able to use the regularity result from Proposition 4.6. Indeed, by Proposition 4.6 (ii) and Kolmogorov’s continuity theorem, we get the inequality (6.11) with \(\zeta _0\).

Now, we define

$$\begin{aligned} \zeta _{n+1}:=\bigg [ \bigg ( \zeta _n \xi +\frac{1}{2}-\alpha \bigg ) \wedge 1\bigg ]\bigg (1-\frac{1}{n+d}\bigg ) \end{aligned}$$

for some \(d\in {\mathbb {R}}\). Given \(\zeta _0\), we choose \(d\) large enough such that \(\zeta _1>\frac{1}{2}-\alpha \). Moreover, clearly \(\zeta _{n+1}>\zeta _n\). Thus, we get inductively that \(\zeta _n\uparrow \frac{\frac{1}{2}-\alpha }{1-\xi }\wedge 1\) and, for every fixed \(\zeta \in \Big ( \frac{1}{2}-\alpha ,\frac{\frac{1}{2}-\alpha }{1-\xi }\wedge 1 \Big )\) as in the statement, we can find \(n_0\in {\mathbb {N}}\) such that \(\zeta _{n_0}\ge \zeta >\zeta _{n_0-1}\). By applying Theorem 6.4 \(n_0\) times, we get (6.11) for \(\zeta _{n_0-1}\) and, hence, (6.12) for \(\zeta _{n_0}\).
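
Purely for illustration (these parameter values are not used anywhere else), take \(\alpha =\frac{1}{4}\), \(\xi =\frac{4}{5}\) and \(d=12\); then

$$\begin{aligned} \zeta _0=\frac{\alpha }{2}\bigg (\frac{1}{2}-\alpha \bigg )=\frac{1}{32},\qquad \zeta _1=\bigg [\bigg (\zeta _0\xi +\frac{1}{2}-\alpha \bigg )\wedge 1\bigg ]\bigg (1-\frac{1}{d}\bigg )=\frac{11}{40}\cdot \frac{11}{12}=\frac{121}{480}>\frac{1}{4}=\frac{1}{2}-\alpha , \end{aligned}$$

and the sequence \((\zeta _n)_{n\in {\mathbb {N}}}\) increases towards \(\frac{\frac{1}{2}-\alpha }{1-\xi }\wedge 1=1\).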

We now derive the estimate (6.42) for all \(0<\varepsilon \le 1\). To this end, we first consider \(\varepsilon \le 2^{-N_{\zeta _{n_0}}}\), where \(N_{\zeta _{n_0}}\) is obtained from the application of Theorem 6.4 to \(\zeta _{n_0-1}\). Further, we choose \(N\in {\mathbb {N}}\) such that \(2^{-N-1}<\varepsilon \le 2^{-N}\) and, thus, \(N\ge N_{\zeta _{n_0}}\). Also, we choose \(t\le T_K\) and \(|x|\le \varepsilon ^\alpha \le 2^{-N\alpha }\) such that, by the assumption of Theorem 6.4, for some \(|{\hat{x}}-x|\le \varepsilon ^\alpha \le 2^{-N\alpha }\),

$$\begin{aligned} |\tilde{X}(t,{\hat{x}})|\le \varepsilon ^\zeta \le 2^{-N\zeta }\le 2^{-N\zeta _{n_0-1}}. \end{aligned}$$

Hence, \((t,x)\in Z_{K,N,\zeta _{n_0-1}}\). For any y such that \(|y-x|\le \varepsilon ^\alpha \), we get, by (6.12),

$$\begin{aligned} |\tilde{X}(t,y)|&\le |\tilde{X}(t,{\hat{x}})|+|\tilde{X}(t,{\hat{x}})-\tilde{X}(t,x)|+|\tilde{X}(t,x)-\tilde{X}(t,y)|\\&\le 2^{-N\zeta }+2^{-N\zeta _{n_0}}+2^{-N\zeta _{n_0}}\le 3\cdot 2^{-N\zeta }\le 6\varepsilon ^\zeta . \end{aligned}$$

Now, we consider \(\varepsilon \in (2^{-N_{\zeta _{n_0}}},1]\). Then, for \((t,x)\) and \((t,y)\) as in the assumption, we get

$$\begin{aligned} |\tilde{X}(t,y)|&\le |\tilde{X}(t,x)|+|\tilde{X}(t,y)-\tilde{X}(t,x)|\\&\le K+2^{-N\zeta }\le (K+1)2^{N_{\zeta _{n_0}}\zeta }\varepsilon ^\zeta \end{aligned}$$

by \(\varepsilon 2^{N_{\zeta _{n_0}}}>1\) and, therefore, we have shown (6.42) with \(C_{\zeta ,K}=(K+1)2^{N_{\zeta _{n_0}}\zeta }+6\).
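One way to see the last estimate in the previous display is the following elementary bound, using that \(2^{-N\zeta }\le 1\) and \(\varepsilon 2^{N_{\zeta _{n_0}}}>1\):

$$\begin{aligned} K+2^{-N\zeta }\le K+1\le (K+1)\big (\varepsilon 2^{N_{\zeta _{n_0}}}\big )^{\zeta }=(K+1)2^{N_{\zeta _{n_0}}\zeta }\varepsilon ^{\zeta }. \end{aligned}$$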

It remains to show the estimate (6.43). To this end, we use (6.13) to conclude that

$$\begin{aligned} {\mathbb {Q}}^{X,K}\bigg ( C_{\zeta ,K}\ge r \bigg )&={\mathbb {Q}}^{X,K}\bigg ( 2^{N_{\zeta _{n_0}}\zeta }\ge \frac{r-6}{K+1}\bigg ) ={\mathbb {Q}}^{X,K}\bigg ( N_{\zeta _{n_0}}\ge \frac{1}{\zeta } \log _2\bigg (\frac{r-6}{K+1}\bigg )\bigg )\\&\le C\bigg ({\mathbb {Q}}^{X,K}\bigg ( N_{\zeta _{n_0-1}}\ge \frac{1}{R\zeta } \log _2\bigg (\frac{r-6}{K+1} \bigg )\bigg )+ K\exp \bigg (-c_22^{\frac{\delta }{\zeta }\log _2\big (\frac{r-6}{K+1} \big )}\bigg ) \bigg ). \end{aligned}$$
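
Here and in the next display we repeatedly use the identity \(2^{c\log _2(x)}=x^{c}\) for \(x,c>0\), so that, for instance,

$$\begin{aligned} \exp \bigg (-c_2 2^{\frac{\delta }{\zeta }\log _2\big (\frac{r-6}{K+1}\big )}\bigg )=\exp \bigg (-c_2\Big (\frac{r-6}{K+1}\Big )^{\frac{\delta }{\zeta }}\bigg ). \end{aligned}$$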

Applying (6.13) \(n_0\)-times, we end up with

$$\begin{aligned}&{\mathbb {Q}}^{X,K}( C_{\zeta ,K}\ge r )\\&\quad \le C^{n_0}{\mathbb {Q}}^{X,K}\bigg ( N_{\frac{\alpha }{2}(\frac{1}{2}-\alpha )} \ge \frac{1}{\zeta R^{n_0}}\log _2\bigg (\frac{r-6}{K+1}\bigg ) \bigg ) + \sum \limits _{i=0}^{n_0}C^i K e^{-c_22^{R^{-i-1}\frac{\delta }{\zeta }\log _2\big (\frac{r-6}{K+1}\big )}}\\&\quad \le C^{n_0}n_0\bigg ({\mathbb {Q}}^{X,K}\bigg ( N_{\frac{\alpha }{2}(\frac{1}{2}-\alpha )}\ge \frac{1}{\tilde{R}}\log _2\bigg (\frac{r-6}{K+1}\bigg ) \bigg ) + K e^{-c_2\big (\frac{r-6}{K+1}\big )^{\frac{\delta }{\zeta R^{n_0}}}}\bigg )\\&\quad =: C_1\bigg ({\mathbb {Q}}^{X,K}\bigg ( N_{\frac{\alpha }{2}(\frac{1}{2}-\alpha )}\ge \frac{1}{\tilde{R}}\log _2\bigg (\frac{r-6}{K+1}\bigg ) \bigg ) + K e^{-c_2\big (\frac{r-6}{K+1}\big )^{{\tilde{\delta }}}}\bigg ), \end{aligned}$$

where \(C_1,{\tilde{\delta }},\tilde{R}>0\) depend on \(\zeta \) but not on K. \(\square \)

Next, we estimate the probability of the event appearing on the right-hand side of (6.43), now again under the measure \(\mathbb {P}\).

Proposition 6.6

In the setup and notation of Corollary 6.5, one has

$$\begin{aligned} \mathbb {P}\bigg ( N_{\frac{\alpha }{2}(\frac{1}{2}-\alpha )}\ge \frac{1}{\tilde{R}}\log _2\bigg (\frac{r-6}{K+1}\bigg ) \bigg ) \lesssim \bigg (\frac{r-6}{K+1}\bigg )^{-\varepsilon }, \end{aligned}$$

for some \(\varepsilon >0\).

Proof

We show that, for every \(M\in {\mathbb {R}}_+\),

$$\begin{aligned} {\mathbb {P}}\big ( N_{\frac{\alpha }{2}(\frac{1}{2}-\alpha )}\ge M \big ) \lesssim 2^{-M\varepsilon } \end{aligned}$$

for some \(\varepsilon >0\), which then yields the statement.

Indeed, from Proposition 4.6 (ii), we have that

$$\begin{aligned} {\mathbb {E}}[ |\tilde{X}(t,x)-\tilde{X}(t',x')|^p]\lesssim |t-t'|^{(\frac{1}{2}-\alpha )p}+|x-x'|^{(\frac{1}{2}-\alpha )p}, \end{aligned}$$

for all \(p\ge 2\), \(t,t'\in [0,T]\) and \(|x|,|x'|\le 1\). By choosing \((t,x)\in Z_{K,N,\zeta }\), \((t',x')\) from the definition of \(Z_{K,N,\zeta }\) and \(p>2\) such that \(\alpha p(\frac{1}{2}-\alpha )= 1+\beta \) for some \(\beta >0\), it holds that

$$\begin{aligned} {\mathbb {E}} [|\tilde{X}(t,x)-\tilde{X}(t',x')|^p ] \lesssim 2^{-N(1+\beta )} + 2^{-N(1+\beta )} \lesssim 2^{-N(1+\beta )}. \end{aligned}$$

We discretize \([0,T]\times [-1,1]\) on the dyadic rational numbers. For simplicity, we assume \(T=1\). First, for some \(n\in {\mathbb {N}}\), we keep some space variable \(x\in \lbrace k2^{-n}:k=-2^n,\dots ,2^n \rbrace \) fixed and apply Markov’s inequality to get

$$\begin{aligned} {\mathbb {P}}\Big ( |\tilde{X}(k2^{-n},x)-\tilde{X}((k-1)2^{-n},x)|\ge 2^{-\zeta n} \Big ) \lesssim 2^{\zeta n p}2^{-n(1+\beta )}= 2^{-n(1+\beta -\zeta p)} \end{aligned}$$

for any \(k\in \lbrace 1,\dots ,2^n\rbrace \). Next, we define the following events:

$$\begin{aligned}&A_n=A_n(\zeta ):=\bigg \lbrace \max \limits _{k\in \lbrace -2^n+1,\dots ,2^n \rbrace } |\tilde{X}(k2^{-n},x)-\tilde{X}((k-1)2^{-n},x)| \ge 2^{-\zeta n-1}\bigg \rbrace ,\\&B_n:=\bigcup \limits _{m=n}^\infty A_m,\quad N:=\limsup \limits _{n\rightarrow \infty }A_n=\bigcap \limits _{n=1}^\infty B_n. \end{aligned}$$

Then, for every \(n\in {\mathbb {N}}\),

$$\begin{aligned} {\mathbb {P}}(A_n)&\le \sum \limits _{k=-2^n+1}^{2^n}{\mathbb {P}}\Big ( |\tilde{X}(k2^{-n},x)-\tilde{X}((k-1)2^{-n},x)|\ge 2^{-\zeta n-1} \Big )\nonumber \\&\lesssim 2^{n+2}2^{-n(1+\beta -\zeta p)+p}=2^{2+p}2^{-n(\beta -\zeta p)}. \end{aligned}$$
(6.44)

We choose, for \(\zeta =\frac{\alpha }{2}(\frac{1}{2}-\alpha )\),

$$\begin{aligned} p>\max \bigg \lbrace \frac{1+\beta }{\alpha (\frac{1}{2}-\alpha )},\frac{1}{\frac{\alpha }{2}-\zeta -\alpha ^2} \bigg \rbrace . \end{aligned}$$

Note that \(\frac{\alpha }{2}-\zeta -\alpha ^2=\frac{\alpha }{2}-\frac{\alpha }{2}(\frac{1}{2}-\alpha )-\alpha ^2=\frac{\alpha }{4}-\frac{\alpha ^2}{2}>0\) as \(\alpha <\frac{1}{2}\). Then, we have that

$$\begin{aligned} 0< p \bigg (\frac{\alpha }{2}-\zeta -\alpha ^2 \bigg )-1=\alpha p \bigg (\frac{1}{2}-\alpha \bigg )-1-\zeta p=\beta -\zeta p \end{aligned}$$
(6.45)

and from (6.44) it follows by the geometric series that

$$\begin{aligned} {\mathbb {P}}(B_n) \le \sum \limits _{m=n}^\infty {\mathbb {P}}(A_m)\lesssim 2^{2+p}\frac{2^{-n(\beta -\zeta p)}}{1-2^{\zeta p-\beta }}\rightarrow 0 \quad \text {as } n\rightarrow \infty , \end{aligned}$$

where \(2^{\zeta p-\beta }<1\) because of (6.45).
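
In particular, since the bound in (6.44) is summable in \(n\) whenever \(\beta -\zeta p>0\), the Borel–Cantelli lemma yields

$$\begin{aligned} \sum \limits _{n=1}^{\infty }{\mathbb {P}}(A_n)<\infty \quad \Longrightarrow \quad {\mathbb {P}}\Big (\limsup \limits _{n\rightarrow \infty }A_n\Big )=0, \end{aligned}$$

so that, a.s., only finitely many of the events \(A_n\) occur.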

Analogously, we fix some time variable \(t\) and get an analogous version of inequality (6.44). Now, we fix \(\omega \in \Omega \) and some

$$\begin{aligned} N\ge N_{\frac{\alpha }{2}(\frac{1}{2}-\alpha )}(\omega ), \end{aligned}$$

where \(N_{\frac{\alpha }{2}(\frac{1}{2}-\alpha )}(\omega )\) is such that

$$\begin{aligned} \omega \notin \bigcup \limits _{n=N_{\frac{\alpha }{2}(\frac{1}{2}-\alpha )}}^\infty A_n, \end{aligned}$$

and this should also hold for the union of the analogous sets for fixed \(t\), which we denote by \(A_n^{(2)}\).

Let \(t,t',x,x'\in D_N\) with \(|t-t'|\le 2^{-N}\) and \(|x-x'|\le 2^{-\alpha N}\). Then, we have

$$\begin{aligned} |\tilde{X}(t,x,\omega )-\tilde{X}(t',x',\omega )|&\le |\tilde{X}(t,x,\omega )-\tilde{X}(t',x,\omega )| + |\tilde{X}(t',x,\omega )-\tilde{X}(t',x',\omega )|\\&\le 2\cdot 2^{-\zeta N-1}=2^{-\zeta N}. \end{aligned}$$

Then, we get from (6.44) that

$$\begin{aligned} {\mathbb {P}}(N_\zeta \ge M) \le \sum \limits _{m=M}^\infty {\mathbb {P}}(A_m) + \sum \limits _{m=M}^\infty {\mathbb {P}}(A_m^{(2)}) \lesssim \sum \limits _{m=M}^\infty 2^{-m(\beta -\zeta p)} =\frac{2^{-M(\beta -\zeta p)}}{1-2^{\zeta p-\beta }} \lesssim 2^{-M\varepsilon } \end{aligned}$$

with \(\varepsilon :=\beta -\zeta p\), by the geometric series with \(\beta -\zeta p>0\).

By the density of the dyadic rational numbers in the reals and the continuity of \(\tilde{X}\), the regularity extends to the whole \([0,T]\times [-1,1]\) and, thus, the statement holds. \(\square \)

We now want to fix some \(\zeta \in (0,1)\) that fulfills the requirements of the previous corollary.

Lemma 6.7

With fixed \(\alpha \in (0,\frac{1}{2})\) and \(\xi \in (\frac{1}{2},1)\) satisfying

$$\begin{aligned} 1>\xi>\frac{1}{2(1-\alpha )}>\frac{1}{2}, \end{aligned}$$

we can choose \(\zeta \in (0,1)\) such that

$$\begin{aligned} \frac{\alpha }{2\xi -1}<\zeta <\bigg ( \frac{\frac{1}{2}-\alpha }{1-\xi }\wedge 1 \bigg ). \end{aligned}$$
(6.46)

In particular, dividing the lower bound in (6.46) by \(\alpha \), we get

$$\begin{aligned} \eta := \frac{\zeta }{\alpha }>\frac{1}{2\xi -1}. \end{aligned}$$

Proof

First, we consider \(\frac{\frac{1}{2}-\alpha }{1-\xi }<1\). In this case, we have that

$$\begin{aligned} \frac{\frac{1}{2}-\alpha }{1-\xi }-\frac{\alpha }{2\xi -1}&=\frac{(\frac{1}{2}-\alpha )(2\xi -1)-\alpha (1-\xi )}{(1-\xi )(2\xi -1)}\\&=\frac{\xi -\frac{1}{2}-2\alpha \xi +\alpha -\alpha +\alpha \xi }{(1-\xi )(2\xi -1)}=\frac{\xi (1-\alpha )-\frac{1}{2}}{(1-\xi )(2\xi -1)}>0, \end{aligned}$$

by the assumption on \(\xi \).

On the other hand, if \(\frac{\frac{1}{2}-\alpha }{1-\xi }\ge 1\), then \(\alpha \le \xi -\frac{1}{2}\), i.e. \(\frac{\alpha }{2\xi -1}\le \frac{1}{2}\), and we can fix \(\zeta \) such that (6.46) holds. \(\square \)
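
To illustrate Lemma 6.7 with concrete (purely exemplary) values: for \(\alpha =\frac{1}{4}\) and \(\xi =\frac{4}{5}\), so that \(\xi >\frac{1}{2(1-\alpha )}=\frac{2}{3}\), condition (6.46) reads

$$\begin{aligned} \frac{\alpha }{2\xi -1}=\frac{5}{12}<\zeta <\bigg ( \frac{\frac{1}{2}-\alpha }{1-\xi }\wedge 1 \bigg )=\frac{5}{4}\wedge 1=1, \end{aligned}$$

so that, e.g., \(\zeta =\frac{7}{10}\) is admissible, with \(\eta =\frac{\zeta }{\alpha }=\frac{14}{5}>\frac{1}{2\xi -1}=\frac{5}{3}\).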

Let us finally introduce the following stopping time, which plays a central role in Lemma 6.9 below and is the reason why we needed Corollary 6.5 and Proposition 6.6:

$$\begin{aligned} T_{\zeta ,K}:=\inf \limits _{t\ge 0} \left\{ \begin{array}{l} t\le T_K \text { and there exist }\varepsilon \in (0,1],{\hat{x}},x,y\in {\mathbb {R}}\text { with }\\ |x|\le \varepsilon ^\alpha ,|\tilde{X}(t,{\hat{x}})|\le \varepsilon ^\zeta ,|x-{\hat{x}}|\le \varepsilon ^\alpha ,|x-y|\le \varepsilon ^\alpha \\ \text { such that }|\tilde{X}(t,y)|>c_0(K)\varepsilon ^\zeta \\ \end{array} \right\} \wedge T_K\wedge T, \end{aligned}$$
(6.47)

where \(c_0(K):=r_0(K)\vee K^2>0\) with \(r_0(K)\) from Corollary 6.5.

Corollary 6.8

The stopping time \(T_{\zeta ,K}\) fulfills \(T_{\zeta ,K}\rightarrow T\) as \(K\rightarrow \infty \) a.s.

Proof

We fix arbitrary \(K,\tilde{K}>0\) such that \(\tilde{K}\le K\). For any \(t\in [0,T)\), we can bound

$$\begin{aligned} \mathbb {P}\big ( T_{\zeta ,K}\le t \big )&\le \mathbb {P}\Big ( \lbrace T_{\zeta ,K}\le t\rbrace \cap \lbrace T_{\tilde{K}}\ge T \rbrace \Big ) +\mathbb {P}\big ( T_{\tilde{K}}<T \big )\nonumber \\&=: P_1^{K,\tilde{K}}+P_2^{\tilde{K}}. \end{aligned}$$
(6.48)

We show that \(\lim _{K\rightarrow \infty }P_1^{K,\tilde{K}}=0\). For this purpose, we consider the probability measure \({\mathbb {Q}}^{X,\tilde{K}}\) from Corollary 6.5. By the definition of \(T_{\zeta ,K}\) and Corollary 6.5, we obtain that

$$\begin{aligned}&{\mathbb {Q}}^{X,\tilde{K}}\Big ( \lbrace T_{\zeta ,K}\le t\rbrace \cap \lbrace T_{\tilde{K}}\ge T \rbrace \Big )\nonumber \\&\qquad \le {\mathbb {Q}}^{X,\tilde{K}}\big (T_K\le t\big ) + {\mathbb {Q}}^{X,\tilde{K}}\big ( C_{\zeta ,K}>c_0(K) \big )\nonumber \\&\qquad \le {\mathbb {Q}}^{X,\tilde{K}}\big (T_K\le t\big ) + C_1\bigg [ {\mathbb {Q}}^{X,\tilde{K}}\Big ( N_{\frac{\alpha }{2}(\frac{1}{2}-\alpha )}\ge \frac{1}{\tilde{R}}\log _2\Big ( \frac{K^2-6}{\tilde{K}+1} \Big ) \Big )+\tilde{K}e^{-c_2\big ( \frac{K^2-6}{\tilde{K}+1} \big ) \delta } \bigg ]. \end{aligned}$$
(6.49)

By Proposition 6.6 we know that the counterpart of the second probability on the right-hand side of (6.49), with \(\mathbb {P}\) in place of \({\mathbb {Q}}^{X,\tilde{K}}\), tends to zero as \(K\rightarrow \infty \). Since \({\mathbb {Q}}^{X,\tilde{K}}\ll \mathbb {P}\) holds on \((\Omega ,{\mathscr {F}})\), \(\lim _{K\rightarrow \infty } \mathbb {P}(A_K)=0\) implies \(\lim _{K\rightarrow \infty } {\mathbb {Q}}^{X,\tilde{K}}(A_K)=0\) for any sequence \((A_K)_{K\in {\mathbb {N}}}\) of events in \(\Omega \) (see e.g. [33, Theorem 6.11]). Moreover, since \(T_K\rightarrow \infty \) as \(K\rightarrow \infty \) a.s. by the continuity of the solutions \(X^1\) and \(X^2\), we conclude that the whole right-hand side of (6.49) tends to zero as \(K\rightarrow \infty \). Hence, since \(\mathbb {P}\ll {\mathbb {Q}}^{X,\tilde{K}}\) on \((\Omega ,{\mathscr {F}}^{\tilde{K}})\) and the event inside \(P_1^{K,\tilde{K}}\) trivially belongs to \({\mathscr {F}}^{\tilde{K}}\), the corresponding \(\mathbb {P}\)-probability also tends to zero and we obtain \(\lim \limits _{K\rightarrow \infty }P_1^{K,\tilde{K}}=0\).

Therefore, using the continuity of \(X^1\) and \(X^2\) again, we can, for every \(\varepsilon >0\), find some \(\tilde{K}>0\) such that (6.48) yields

$$\begin{aligned} \lim \limits _{K\rightarrow \infty }\mathbb {P}\big ( T_{\zeta ,K}\le t \big )\le \mathbb {P}\big ( T_{\tilde{K}}<T \big )<\varepsilon \end{aligned}$$

and we obtain \(\lim \limits _{K\rightarrow \infty }\mathbb {P}\big ( T_{\zeta ,K}\le t \big )=0\), which yields the statement. \(\square \)

Recall that we have a fixed constant \(\eta >\frac{1}{2\xi -1}\), determined by Lemma 6.7. We use this to fix the sequence \((m^{(n)})_{n\in {\mathbb {N}}}\) by defining

$$\begin{aligned} m^{(n)}:=a_{n-1}^{-\frac{1}{\eta }}>1, \end{aligned}$$

where \(a_n\) is the Yamada–Watanabe sequence defined in (5.1). With this, we get the following crucial lemma, which bounds \(\tilde{X}\) based on the smallness of the approximation \(|\langle \tilde{X},\Phi ^{n} \rangle |\).

Lemma 6.9

For all \(x\in B(0,\frac{1}{m})\) and \(s\in [0,T_{\zeta ,K}]\), if \(|\langle \tilde{X}_s,\Phi _x^{n} \rangle |\le a_{n-1}\), then

$$\begin{aligned} \sup \limits _{y\in B(x,\frac{1}{m})} |\tilde{X}(s,y)|\le \tilde{C}_Ka_{n-1}, \end{aligned}$$

for some \(\tilde{C}_K>0\) only dependent on K.

Proof

By the assumption \(|\langle \tilde{X}_s,\Phi _x^{n} \rangle |\le a_{n-1}\), we can apply Proposition 6.1 (v) to get that there exists \({\hat{x}}\in B(x,\frac{1}{m})\) with \(|\tilde{X}(s,{\hat{x}})|\le C_Ka_{n-1}\).

For fixed \(n\ge 1\), we define \(\varepsilon _n>0\) such that

$$\begin{aligned} \varepsilon _n^\alpha = \frac{1}{m^{(n)}}C_K^{\frac{1}{\eta }} \end{aligned}$$

holds and, thus, by the choice \(\eta =\frac{\zeta }{\alpha }\),

$$\begin{aligned} C_Ka_{n-1} =C_K \bigg (\frac{1}{m} \bigg )^\eta =\bigg (\frac{C_K^{\frac{1}{\eta }}}{m}\bigg )^\eta =\varepsilon _n^\zeta . \end{aligned}$$

We use this and the definition of \(T_{\zeta ,K}\) in (6.47) to get the desired result with \(\tilde{C}_K=C_Kc_0(K)\). \(\square \)

Finally, we can handle the term \(I_4^{m,n}\) from (5.5).

Lemma 6.10

With \(I_4^{m,n}\) from (5.5) and \(T_{\zeta ,K}\) defined in (6.47), one has

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } {\mathbb {E}} [| I_4^{m,n}(t\wedge T_{\zeta ,K})|]=0. \end{aligned}$$

Proof

We use the Hölder continuity of \(\sigma \) as well as the bounded support of \(\psi _n\), the inequality \(\psi _n(x)\le \frac{2}{nx}\mathbbm {1}_{\lbrace a_n\le x\le a_{n-1} \rbrace }\), the boundedness of \(\Psi \), Lemma 6.9 and Proposition 6.1 (ii) to get

$$\begin{aligned} |I_4^{m,n}(t\wedge T_{\zeta ,K})|&\lesssim \bigg | \int _0^{t\wedge T_{\zeta ,K}} \int _{{\mathbb {R}}} \psi _n(|\langle \tilde{X}_s,\Phi _{x}^{n} \rangle |)\Phi _{x}^{n}(0)^2 \Psi _s(x) \,\textrm{d}x |\tilde{X}(s,0)|^{2\xi } \,\textrm{d}s \bigg |\nonumber \\&\lesssim \int _0^{t\wedge T_{\zeta ,K}} \int _{{\mathbb {R}}}\mathbbm {1}_{\lbrace a_n\le |\langle \tilde{X}_s,\Phi _{x}^{n} \rangle | \le a_{n-1} \rbrace } \frac{2}{n a_n}\Phi _{x}^{n}(0)^2 \Psi _s(x) \,\textrm{d}x |\tilde{X}(s,0)|^{2\xi } \,\textrm{d}s\nonumber \\&\le \frac{\Vert \Psi \Vert _{\infty }}{n a_n} \int _0^{t\wedge T_{\zeta ,K}} \int _{{\mathbb {R}}} \Phi _{x}^{n}(0)^2 \,\textrm{d}x (\tilde{C}_K a_{n-1})^{2\xi } \,\textrm{d}s \nonumber \\&\lesssim \frac{a_{n-1}^{2\xi }}{n a_n} \int _0^{t\wedge T_{\zeta ,K}} \int _{{\mathbb {R}}} \Phi _{x}^{n}(0)^2 \,\textrm{d}x \,\textrm{d}s \nonumber \\&\lesssim \frac{a_{n-1}^{2\xi }}{n a_n} m^{(n)}\lesssim \frac{a_{n-1}^{2\xi }}{n a_n} a_{n-1}^{-\frac{1}{\eta }}=\frac{1}{n} \frac{a_{n-1}^{2\xi -\frac{1}{\eta }}}{ a_n} . \end{aligned}$$
(6.50)

We know that \(\frac{a_{n-1}}{a_n}=e^n\) and \(a_0=1\) and, thus, get inductively that \(a_n=e^{-\frac{n(n+1)}{2}}\), so that \(\frac{a_{n-1}^{2\xi -\eta ^{-1}}}{a_n}=\exp \big ( \frac{n(n+1)}{2}-(2\xi -\eta ^{-1})\frac{(n-1)n}{2} \big )\). Therefore, (6.50) tends to zero as \(n\rightarrow \infty \) if

$$\begin{aligned} n(n+1)-(2\xi - \eta ^{-1})(n-1)n<0 \end{aligned}$$

for n large, which holds if and only if \(1-(2\xi -\eta ^{-1})<0\), i.e., \(\xi >\frac{1}{2}+\frac{1}{2\eta }\), which holds by Lemma 6.7. \(\square \)

We summarize the essential findings for the proof of Theorem 2.3 in the next proposition.

Proposition 6.11

With \(\Psi \) that fulfills Assumption 5.2 and \(T_{\zeta ,K}\) defined in (6.47) for \(K>0\), one has, for \(t\in [0,T]\), that

$$\begin{aligned} \int _{{\mathbb {R}}}{\mathbb {E}}[|\tilde{X}(t\wedge T_{\zeta ,K},x)|]\Psi _{t\wedge T_{\zeta ,K}}(x)\,\textrm{d}x&\lesssim \int _0^{t\wedge T_{\zeta ,K}}\int _{{\mathbb {R}}}{\mathbb {E}}[|\tilde{X}(s,x)|]| \Delta _\theta \Psi _s(x)+{\dot{\Psi }}_s(x)|\,\textrm{d}x\,\textrm{d}s \nonumber \\&\quad + \int _0^{t\wedge T_{\zeta ,K}} \Psi _s(0) {\mathbb {E}}[|\tilde{X}(s,0)|]\,\textrm{d}s. \end{aligned}$$
(6.51)

Proof

By Proposition 5.3, Lemma 6.3 and Lemma 6.10, sending \(n\rightarrow \infty \) after applying Fatou’s lemma to interchange the limit and the integral, we get

$$\begin{aligned}&\int _{{\mathbb {R}}}{\mathbb {E}}[ |\tilde{X}(t\wedge T_{\zeta ,K},x)|]\Psi _{t\wedge T_{K,\zeta }}(x)\,\textrm{d}x\nonumber \\&\quad = \int _{{\mathbb {R}}}\liminf \limits _{n\rightarrow \infty } {\mathbb {E}} [\phi _n (\langle \tilde{X}_{t\wedge T_{\zeta ,K}},\Phi _x^{n}\rangle )]\Psi _{t\wedge T_{K,\zeta }}(x)\,\textrm{d}x\nonumber \\&\quad \le \liminf \limits _{n\rightarrow \infty } \int _{{\mathbb {R}}} {\mathbb {E}}[\phi _n (\langle \tilde{X} _{t\wedge T_{\zeta ,K}},\Phi _x^{n}\rangle ) ]\Psi _{t\wedge T_{K,\zeta }}(x)\,\textrm{d}x\nonumber \\&\quad \lesssim {\mathbb {E}}\bigg [ \int _0^{t\wedge T_{\zeta ,K}}\int _{{\mathbb {R}}}|\tilde{X}(s,x)|\big ( \Delta _\theta \Psi _s(x)+{\dot{\Psi }}_s(x) \big )\,\textrm{d}x \,\textrm{d}s \bigg ]\nonumber \\&\quad \quad +{\mathbb {E}}\bigg [ \int _0^{t\wedge T_{\zeta ,K}} \Psi _s(0)|\tilde{X}(s,0)|\,\textrm{d}s \bigg ]. \end{aligned}$$
(6.52)

Applying Fubini’s theorem then yields (6.51). \(\square \)

7 Step 5: Removing the auxiliary localizations

We want to construct appropriate test functions \(\Psi \in C_0^{\infty }([0,t],{\mathbb {R}})\) for some fixed \(t\in [0,T]\). They will be of the form

$$\begin{aligned} \Psi _{N,M}(s,x):=(S_{t-s}\phi _M(x))g_N(x) \end{aligned}$$
(7.1)

for \(N,M\in {\mathbb {N}}\), where \((S_u)_{u\in [0,T]}\) denotes the semigroup generated by \(\Delta _\theta \) and we specify the sequences of functions \(\phi _M,g_N \in C_0^\infty ({\mathbb {R}})\) in the following.

With the sequence \((\phi _M)_{M\in {\mathbb {N}}}\) we want to approximate the Dirac distribution around 0. To that end, we define

$$\begin{aligned} \phi _M(x):=Me^{-M^2x^2}\mathbbm {1}_{\lbrace |x|\le \frac{1}{M}\rbrace }+s_M(x),\quad M\ge 2, \end{aligned}$$

where the function \(s_M\) extends \(\phi _M\) smoothly to zero outside the ball \(B(0,\frac{1}{M-1})\) such that \(\lim _{M\rightarrow \infty }\phi _M(x)=\delta _0(x)\) pointwise.

Moreover, let \((g_N)_{N\in {\mathbb {N}}}\) be a sequence of functions in \(C_0^\infty ({\mathbb {R}})\) such that \(g_N:{\mathbb {R}}\rightarrow [0,1]\),

$$\begin{aligned} B(0,N)\subset \lbrace x\in {\mathbb {R}}:\, g_N(x)=1\rbrace ,\quad B(0,N+1)^C \subset \lbrace x\in {\mathbb {R}}:\, g_N(x)=0\rbrace , \end{aligned}$$

and

$$\begin{aligned} \sup \limits _{N\in {\mathbb {N}}}\Big [ \Vert |x|^{-\theta }g_N'(x) \Vert _\infty + \Vert \Delta _\theta g_N(x) \Vert _\infty \Big ] =:C_g<\infty . \end{aligned}$$
(7.2)
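
One possible (by no means unique) choice of such a sequence is \(g_N(x):=\chi (|x|-N)\) for a fixed \(\chi \in C^\infty ({\mathbb {R}},[0,1])\) with \(\chi =1\) on \((-\infty ,0]\) and \(\chi =0\) on \([1,\infty )\). Since \(g_N\) is constant near the origin and all derivatives of \(g_N\) are supported in \(\lbrace N\le |x|\le N+1\rbrace \), where \(|x|^{-\theta }\le 1\) and \(|x|^{-\theta -1}\le 1\), one obtains, with the convention \(\Delta _\theta f=2\alpha ^2\frac{\partial }{\partial x}\big (|x|^{-\theta }\frac{\partial }{\partial x}f\big )\) used in (7.4)–(7.5),

$$\begin{aligned} \Vert |x|^{-\theta }g_N'(x) \Vert _\infty + \Vert \Delta _\theta g_N(x) \Vert _\infty \le \Vert \chi '\Vert _\infty + 2\alpha ^2\big ( \theta \Vert \chi '\Vert _\infty +\Vert \chi ''\Vert _\infty \big ) \end{aligned}$$

uniformly in \(N\), so that (7.2) holds.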

We simplify the term on the right-hand side of (6.52) in the next corollary.

Corollary 7.1

With \(\Psi _{N,M}\) constructed in (7.1), one has that

$$\begin{aligned}&\Delta _\theta \Psi _{N,M}(s,x) + {\dot{\Psi }}_{N,M}(s,x) \nonumber \\&\quad = 4\alpha ^2 |x|^{-\theta }\Big ( \frac{\partial }{\partial x}S_{t-s}\phi _M(x) \Big )\Big ( \frac{\partial }{\partial x}g_N(x) \Big ) + S_{t-s}\phi _M(x)\Delta _\theta g_N(x). \end{aligned}$$
(7.3)

Proof

Recall that, by the definition of the semigroup \((S_t)_{t\in [0,T]}\) in (4.9) and using the fundamental solution of (4.1), we get

$$\begin{aligned} \Delta _\theta S_t \phi (x)=\frac{\partial }{\partial t}S_t \phi (x),\quad t\in [0,T], \end{aligned}$$

for all \(\phi \in C_0^\infty ({\mathbb {R}})\). Therefore, the second term on the left-hand side of (7.3) equals

$$\begin{aligned} {\dot{\Psi }}_{N,M}(s,x)&= g_N(x)\frac{\partial }{\partial s}\big (S_{t-s} \phi _M(x)\big )\nonumber \\&= -g_N(x)\Delta _\theta \big ( S_{t-s} \phi _M(x)\big )\nonumber \\&= -2\alpha ^2 g_N(x)\frac{\partial }{\partial x}\Big ( |x|^{-\theta }\frac{\partial }{\partial x}\big ( S_{t-s}\phi _M(x) \big ) \Big )\nonumber \\&= -2\alpha ^2 g_N(x)\Big (\frac{\partial }{\partial x} |x|^{-\theta }\Big )\Big (\frac{\partial }{\partial x} S_{t-s}\phi _M(x) \Big ) -2\alpha ^2 g_N(x) |x|^{-\theta }\Big (\frac{\partial ^2}{\partial x^2} S_{t-s}\phi _M(x) \Big ). \end{aligned}$$
(7.4)

For the first term on the left-hand side of (7.3), we calculate

$$\begin{aligned}&\Delta _\theta \Psi _{N,M}(s,x)\nonumber \\&\quad = 2\alpha ^2 \frac{\partial }{\partial x}\Big ( |x|^{-\theta }\frac{\partial }{\partial x}\Psi _{N,M}(s,x) \Big )\nonumber \\&\quad = 2\alpha ^2 |x|^{-\theta }\frac{\partial ^2}{\partial x^2}\Big ( S_{t-s}\phi _M(x)g_N(x) \Big ) + 2\alpha ^2 \Big (\frac{\partial }{\partial x} |x|^{-\theta }\Big )\Big (\frac{\partial }{\partial x}S_{t-s}\phi _M(x)g_N(x) \Big )\nonumber \\&\quad = 4\alpha ^2 |x|^{-\theta }\Big (\frac{\partial }{\partial x} S_{t-s}\phi _M(x)\Big )\Big (\frac{\partial }{\partial x}g_N(x)\Big ) + 2\alpha ^2 |x|^{-\theta }g_N(x)\Big (\frac{\partial ^2}{\partial x^2}S_{t-s}\phi _M(x) \Big )\nonumber \\&\quad \quad + 2\alpha ^2 |x|^{-\theta }\Big ( S_{t-s}\phi _M(x)\Big )\Big (\frac{\partial ^2}{\partial x^2}g_N(x) \Big ) \nonumber \\&\quad \quad + 2\alpha ^2 \Big (\frac{\partial }{\partial x} |x|^{-\theta }\Big )\Big (\frac{\partial }{\partial x}S_{t-s}\phi _M(x)\Big )g_N(x)\nonumber \\&\quad \quad + 2\alpha ^2 \Big (\frac{\partial }{\partial x} |x|^{-\theta }\Big )\big (S_{t-s}\phi _M(x)\big )\Big (\frac{\partial }{\partial x}g_N(x)\Big ). \end{aligned}$$
(7.5)

Hence, adding up (7.4) and (7.5), the second and fourth summands on the right-hand side of (7.5) cancel against the two summands in (7.4), and we obtain

$$\begin{aligned}&\Delta _\theta \Psi _{N,M}(s,x) + {\dot{\Psi }}_{N,M}(s,x)\\&\quad = 4\alpha ^2 |x|^{-\theta }\Big (\frac{\partial }{\partial x} S_{t-s}\phi _M(x)\Big )\Big (\frac{\partial }{\partial x}g_N(x)\Big ) + 2\alpha ^2 |x|^{-\theta }\Big ( S_{t-s}\phi _M(x)\Big )\Big (\frac{\partial ^2}{\partial x^2}g_N(x) \Big )\\&\qquad + 2\alpha ^2 \Big (\frac{\partial }{\partial x} |x|^{-\theta }\Big )\big (S_{t-s}\phi _M(x)\big )\Big (\frac{\partial }{\partial x}g_N(x)\Big )\\&\quad = 4\alpha ^2 |x|^{-\theta }\Big (\frac{\partial }{\partial x} S_{t-s}\phi _M(x)\Big )\Big (\frac{\partial }{\partial x}g_N(x)\Big ) + S_{t-s}\phi _M(x)\Delta _\theta g_N(x). \end{aligned}$$

\(\square \)

With these observations, we want to show that the semigroup \((S_t)_{t\in [0,T]}\) can be exponentially bounded in the following way.

Lemma 7.2

For any \(\phi \in C_0^\infty ({\mathbb {R}})\), \(t\in [0,T]\) and for any \(\lambda >0\), there is a constant \(C_{\lambda ,\phi ,t}>0\) such that

$$\begin{aligned} \bigg | S_t \phi (x) + \frac{\partial }{\partial x}( S_t \phi (x)) \bigg |\mathbbm {1}_{\lbrace N+1>|x|>N\rbrace } \le C_{\lambda ,\phi ,t}e^{-\lambda |x|}\mathbbm {1}_{\lbrace N+1> |x|>N\rbrace } \end{aligned}$$

for any \(N\ge 1\) and \(x\in {\mathbb {R}}\).

Proof

For \(t=0\), the statement is trivial due to \(S_0\phi (x)+\frac{\partial }{\partial x}(S_0\phi (x))=\phi (x)+\phi '(x)\), which is bounded with compact support. Thus, we fix \(t>0\) and consider the first summand without the derivative. We use the inequality

$$\begin{aligned} I_\nu (b)< \bigg (\frac{b}{a}\bigg )^\nu e^{b-a}\bigg ( \frac{a+\nu +\frac{1}{2}}{b+\nu +\frac{1}{2}} \bigg )^{\nu +\frac{1}{2}} I_\nu (a),\quad 0<a<b,\nu >-1, \end{aligned}$$
(7.6)

from [16, Theorem 2.1 (ii)], with \(a=\frac{|y|^{1+\frac{\theta }{2}}}{t}\) and \(b=\frac{|xy|^{1+\frac{\theta }{2}}}{t}\) such that \(b>a\) due to \(|x|>N\ge 1\). By the bound on \(p_t^\theta (x,y)\) from Corollary 4.9, due to the compact support of \(\phi \), which we denote by \(S_\phi \), and using (7.6), we get

$$\begin{aligned} S_t\phi (x)&\le \int _{{\mathbb {R}}}\frac{(2+\theta )}{2t}|xy|^{\frac{(1+\theta )}{2}}e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}}I_\nu \bigg (\frac{|xy|^{1+\frac{\theta }{2}}}{t}\bigg )\phi (y)\,\textrm{d}y\nonumber \\&\le C_\phi \int _{S_\phi }\frac{(2+\theta )}{2t}|xy|^{\frac{(1+\theta )}{2}}e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}}|x|^{\nu (1+\frac{\theta }{2})}e^{\frac{|xy|^{1+\frac{\theta }{2}}}{t}-\frac{|y|^{1+\frac{\theta }{2}}}{t}} I_\nu \bigg (\frac{|y|^{1+\frac{\theta }{2}}}{t}\bigg ) \,\textrm{d}y\nonumber \\&\le C_\phi \bigg (\int _{{\mathbb {R}}}\frac{(2+\theta )}{2t}|y|^{\frac{(1+\theta )}{2}}e^{-\frac{1^{2+\theta }+|y|^{2+\theta }}{2t}}I_\nu \bigg (\frac{|y|^{1+\frac{\theta }{2}}}{t}\bigg ) \,\textrm{d}y\bigg ) |x|^{(\nu +1)(1+\frac{\theta }{2})}e^{-\frac{|x-1|^{2+\theta }}{2t}}\nonumber \\&\quad \times e^{c_\phi (|x|^{1+\frac{\theta }{2}}-1)} \nonumber \\&=C_\phi \bigg (\int _{{\mathbb {R}}}p_t^\theta (1,y) \,\textrm{d}y\bigg ) |x|^{(\nu +1)(1+\frac{\theta }{2})}e^{-\frac{|x-1|^{2+\theta }}{2t}+c_\phi (|x|^{1+\frac{\theta }{2}}-1)+\lambda |x|}e^{-\lambda |x|} \nonumber \\&\le C_{\lambda ,\phi ,t}e^{-\lambda |x|}, \end{aligned}$$
(7.7)

since the function \(x\mapsto |x|^{(\nu +1)(1+\frac{\theta }{2})}e^{-\frac{|x-1|^{2+\theta }}{2t}+c_\phi (|x|^{1+\frac{\theta }{2}}-1)+\lambda |x|}\) attains a maximum on \({\mathbb {R}}\) for all \(c_\phi >0\).
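
Indeed, as \(|x|\rightarrow \infty \), the exponent of this function satisfies

$$\begin{aligned} -\frac{|x-1|^{2+\theta }}{2t}+c_\phi \big (|x|^{1+\frac{\theta }{2}}-1\big )+\lambda |x| \longrightarrow -\infty , \end{aligned}$$

since \(2+\theta \) exceeds both \(1+\frac{\theta }{2}\) and \(1\); hence the function is continuous, tends to zero as \(|x|\rightarrow \infty \) and is therefore bounded on \({\mathbb {R}}\).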

For the second summand, we substitute \(z=\frac{|xy|^{1+\frac{\theta }{2}}}{t}\) such that \(\frac{\partial }{\partial x}=\frac{1+\frac{\theta }{2}}{t}y|xy|^{\frac{\theta }{2}}\frac{\partial }{\partial z}\), apply the product rule and \(\frac{\partial }{\partial z}I_\nu (z)=\frac{\nu }{z}I_\nu (z)+I_{\nu +1}(z)\) (see [24, page 67]) to get, for \(|x|>1\),

$$\begin{aligned} \frac{\partial }{\partial x}( S_t\phi (x))&\le \frac{\partial }{\partial x}\int _{{\mathbb {R}}}\frac{(2+\theta )}{2t}|xy|^{\frac{(1+\theta )}{2}}e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}}I_\nu \bigg (\frac{|xy|^{1+\frac{\theta }{2}}}{t}\bigg )\phi (y)\,\textrm{d}y\nonumber \\&= \frac{(2+\theta )}{2t}\int _{{\mathbb {R}}}\frac{\partial }{\partial z}\bigg (|xy|^{\frac{(1+\theta )}{2}}\frac{1+\frac{\theta }{2}}{t}y|xy|^{\frac{\theta }{2}}e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}}I_\nu (z)\bigg )\phi (y)\,\textrm{d}y\nonumber \\&= \frac{(2+\theta )}{2t}\int _{{\mathbb {R}}}\bigg (\frac{\partial }{\partial z}\bigg (|xy|^{\frac{(1+\theta )}{2}}\frac{1+\frac{\theta }{2}}{t}y|xy|^{\frac{\theta }{2}}e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}}\bigg ) I_\nu (z) \nonumber \\&\quad \quad \quad \quad \quad \quad + |xy|^{\frac{(1+\theta )}{2}}\frac{1+\frac{\theta }{2}}{t}y|xy|^{\frac{\theta }{2}}e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}}\frac{\partial }{\partial z}(I_\nu (z))\bigg )\phi (y)\,\textrm{d}y\nonumber \\&= \frac{(2+\theta )}{2t}\int _{{\mathbb {R}}}\bigg (\frac{1+\theta }{2}y|xy|^{\frac{(\theta -1)}{2}}e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}} \nonumber \\&\qquad \qquad \qquad - \frac{2+\theta }{2t}|x|^{1+\theta }|xy|^{\frac{1+\theta }{2}}e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}}\bigg ) I_\nu \bigg ( \frac{|xy|^{1+\frac{\theta }{2}}}{t}\bigg )\phi (y)\,\textrm{d}y \nonumber \\&\quad + \frac{(2+\theta )}{2t}\int _{{\mathbb {R}}}\bigg (|xy|^{\frac{(1+\theta )}{2}}\frac{1+\frac{\theta }{2}}{t}y|xy|^{\frac{\theta }{2}}e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}} \nonumber \\&\qquad \qquad \qquad \bigg ( \nu \frac{t}{|xy|^{1+\frac{\theta }{2}}} I_\nu \bigg ( \frac{|xy|^{1+\frac{\theta }{2}}}{t}\bigg ) +I_{\nu +1}\bigg ( \frac{|xy|^{1+\frac{\theta }{2}}}{t} \bigg ) \bigg )\bigg )\phi (y)\,\textrm{d}y \nonumber \\&\le C_{t,\phi }\int _{S_\phi }\bigg (|xy|^{\frac{(1+\theta )}{2}}e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}} \nonumber \\&\qquad \qquad \qquad + |x|^{1+\theta }|xy|^{\frac{(1+\theta )}{2}}e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}}\bigg ) I_\nu \bigg ( \frac{|xy|^{1+\frac{\theta }{2}}}{t}\bigg )\,\textrm{d}y \nonumber \\&\quad + \int _{S_\phi }\bigg (|xy|^{\frac{(1+\theta )}{2}}e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}} \bigg ( I_\nu \bigg ( \frac{|xy|^{1+\frac{\theta }{2}}}{t}\bigg ) +I_{\nu +1}\bigg ( \frac{|xy|^{1+\frac{\theta }{2}}}{t} \bigg ) \bigg )\bigg )\,\textrm{d}y \nonumber \\&\le C_{t,\phi }\int _{S_\phi } |x|^{1+\theta }|xy|^{\frac{(1+\theta )}{2}}e^{-\frac{|x|^{2+\theta }+|y|^{2+\theta }}{2t}}\bigg ( I_\nu \bigg ( \frac{|xy|^{1+\frac{\theta }{2}}}{t}\bigg ) +I_{\nu +1}\bigg ( \frac{|xy|^{1+\frac{\theta }{2}}}{t} \bigg ) \bigg )\,\textrm{d}y, \end{aligned}$$
(7.8)

where \(S_\phi :=\lbrace y\in {\mathbb {R}}:\phi (y)\ne 0 \rbrace \). The integrands in (7.8) vanish for \(y=0\) by the definition of \(I_\nu \) in (4.16) with \(\nu =\frac{1}{2+\theta }-1<\frac{1+\theta }{2}\). If we thus show that, for any \(\nu >-1\), there is a constant \(C_\nu >0\) such that

$$\begin{aligned} I_\nu (z)+I_{\nu +1}(z)\le C_\nu \big ( z^{\nu +1}+z^{\nu +2} \big )e^z \end{aligned}$$
(7.9)

holds for all \(z>0\), then the statement will follow, since, similarly to (7.7), all the \(x\)-polynomials in (7.8) and the Bessel function terms are dominated by the factor \(e^{-\frac{|x|^{2+\theta }}{2t}}\), and the \(y\)-terms can be bounded using the compact support of \(\phi \).

To get (7.9), we use the equality (see [21, (5.7.9), page 110])

$$\begin{aligned} I_\nu (z)=2(\nu +1)I_{\nu +1}(z)+I_{\nu +2}(z), \end{aligned}$$
(7.10)

and, since \(\nu +1,\nu +2>-\frac{1}{2}\), we can then apply the following inequality from [22, (6.25), page 63], for \(x>0\):

$$\begin{aligned} I_\nu (x)<\frac{e^x+e^{-x}}{2\Gamma (\nu +1)}\bigg ( \frac{x}{2} \bigg )^\nu <\frac{e^x}{\Gamma (\nu +1)}\bigg ( \frac{x}{2} \bigg )^\nu . \end{aligned}$$
(7.11)

(7.10) and (7.11) yield, as \(\Gamma (x)>0\) for \(x>0\), that

$$\begin{aligned} I_\nu (z)+I_{\nu +1}(z)&=2\bigg (\nu +\frac{3}{2}\bigg )I_{\nu +1}(z)+I_{\nu +2}(z)\\&< 2\bigg (\nu +\frac{3}{2}\bigg )\frac{e^z}{\Gamma (\nu +2)}\bigg ( \frac{z}{2} \bigg )^{\nu +1} +\frac{e^z}{\Gamma (\nu +3)}\bigg ( \frac{z}{2} \bigg )^{\nu +2} \\&\le C_\nu \big ( z^{\nu +1}+z^{\nu +2} \big )e^z, \end{aligned}$$

which proves (7.9). \(\square \)

Proposition 7.3

It holds that

$$\begin{aligned} {\mathbb {E}}[|\tilde{X}(t,0)|] \lesssim \int _0^t (t-s)^{-\alpha }{\mathbb {E}}[|\tilde{X}(s,0)|]\,\textrm{d}s, \qquad t\in [0,T]. \end{aligned}$$

Proof

First, to apply Proposition 6.11, we need to show that \(\Psi _{N,M}\) defined in (7.1) fulfills Assumption 5.2. We have \(\Psi _{N,M}\in C^2([0,T]\times {\mathbb {R}})\), and the conditions \(\Psi _{N,M}(s,0)>0\) and \(\Gamma (t)\in B(0,J(t))\) for some \(J(t)>0\) follow by construction. Moreover, Lemma 7.2 directly yields that the remaining condition holds:

$$\begin{aligned} \sup _{s\le t}\bigg | \int _{{\mathbb {R}}}|x|^{-\theta }\bigg (\frac{\partial }{\partial x}\Psi _{N,M}(s,x)\bigg )^2\,\textrm{d}x \bigg |&\le C \int _{{\mathbb {R}}}|x|^{-\theta }e^{-2\lambda |x|}\,\textrm{d}x, \end{aligned}$$

which is clearly finite as \(\theta <1\). Hence, Assumption 5.2 holds.
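
For completeness, the last integral can be bounded explicitly: substituting \(u=2\lambda x\),

$$\begin{aligned} \int _{{\mathbb {R}}}|x|^{-\theta }e^{-2\lambda |x|}\,\textrm{d}x=2\int _0^{\infty }x^{-\theta }e^{-2\lambda x}\,\textrm{d}x=2(2\lambda )^{\theta -1}\Gamma (1-\theta )<\infty \quad \text {for }\theta <1. \end{aligned}$$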

Thus, Proposition 6.11 applies. Plugging (7.1) into (6.51), sending \(K\rightarrow \infty \) such that \(T_{\zeta ,K}\rightarrow T\) by Corollary 6.8, and using Corollary 7.1, (7.2) and Lemma 7.2, we get

$$\begin{aligned}&\int _{{\mathbb {R}}}{\mathbb {E}}[ |\tilde{X}(t,x)|]\phi _M(x)g_N(x)\,\textrm{d}x\nonumber \\&\quad \lesssim \int _0^{t}\int _{{\mathbb {R}}}{\mathbb {E}}[|\tilde{X}(s,x)|] \bigg | 4\alpha ^2 |x|^{-\theta }\Big ( \frac{\partial }{\partial x}S_{t-s}\phi _M(x) \Big )\Big ( \frac{\partial }{\partial x}g_N(x) \Big ) + S_{t-s}\phi _M(x)\Delta _\theta g_N(x)\bigg | \,\textrm{d}x\,\textrm{d}s\nonumber \\&\qquad + \int _0^t \Psi _{N,M}(s,0) {\mathbb {E}}[|\tilde{X}(s,0)| ]\,\textrm{d}s\nonumber \\&\quad \lesssim \int _0^{t}\int _{{\mathbb {R}}}{\mathbb {E}}[|\tilde{X}(s,x)|] \Big ( \Big | \frac{\partial }{\partial x} S_{t-s} \phi _M(x) \Big | + \big | S_{t-s} \phi _M(x) \big | \Big )\mathbbm {1}_{\lbrace N+1>|x|>N\rbrace } \,\textrm{d}x\,\textrm{d}s\nonumber \\&\qquad + \int _0^t \Psi _{N,M}(s,0) {\mathbb {E}}[|\tilde{X}(s,0)|]\,\textrm{d}s\nonumber \\&\quad \lesssim \int _0^{t}\int _{{\mathbb {R}}}{\mathbb {E}}[|\tilde{X}(s,x)|] e^{-\lambda |x|}\mathbbm {1}_{\lbrace N+1>|x|>N \rbrace } \,\textrm{d}x\,\textrm{d}s + \int _0^t \Psi _{N,M}(s,0) {\mathbb {E}}[|\tilde{X}(s,0)|]\,\textrm{d}s. \end{aligned}$$
(7.12)

We want to send \(N,M\rightarrow \infty \). By Proposition 4.6 (i) we get that

$$\begin{aligned}&\int _0^{t}\int _{{\mathbb {R}}}{\mathbb {E}}[|\tilde{X}(s,x)|] e^{-\lambda |x|}\mathbbm {1}_{\lbrace N+1>|x|>N \rbrace } \,\textrm{d}x\,\textrm{d}s \lesssim t \int _{N}^{N+1}e^{-\lambda x} \,\textrm{d}x\rightarrow 0\quad \text {as }N\rightarrow \infty . \end{aligned}$$

Moreover, we get

$$\begin{aligned} \int _0^t \Psi _{N,M}(s,0) {\mathbb {E}}[|\tilde{X}(s,0)| ]\,\textrm{d}s&=\int _0^t ( S_{t-s}\phi _M(0))g_N(0) {\mathbb {E}}[|\tilde{X}(s,0)|]\,\textrm{d}s\\&= \int _0^t \bigg ( \int _{\mathbb {R}}p_{t-s}^\theta (y,0)\phi _M(y)\,\textrm{d}y \bigg ) {\mathbb {E}}[|\tilde{X}(s,0)| ]\,\textrm{d}s\\&{\mathop {\rightarrow }\limits ^{M\rightarrow \infty }} \int _0^t p_{t-s}^\theta (0) {\mathbb {E}}[|\tilde{X}(s,0)| ]\,\textrm{d}s, \end{aligned}$$

which gives

$$\begin{aligned} \lim \limits _{M\rightarrow \infty }\int _0^t \Psi _{N,M}(s,0) {\mathbb {E}}[|\tilde{X}(s,0)| ]\,\textrm{d}s = c_\theta \int _0^t (t-s)^{-\alpha } {\mathbb {E}}[|\tilde{X}(s,0)|]\,\textrm{d}s. \end{aligned}$$

Hence, sending \(N,M\rightarrow \infty \) in (7.12) yields

$$\begin{aligned} {\mathbb {E}}[ |\tilde{X}(t,0)|] \lesssim \int _0^t (t-s)^{-\alpha }{\mathbb {E}}[|\tilde{X}(s,0)|]\,\textrm{d}s. \end{aligned}$$

\(\square \)