1 Introduction

The purpose of this paper is to introduce a new class of Gaussian self-similar stochastic processes related to stochastic partial differential equations, and to establish a decomposition in law and a central limit theorem for the Hermite variations of the increments of such processes.

Consider the d-dimensional stochastic heat equation

$$\displaystyle{ \frac{\partial u} {\partial t} = \frac{1} {2}\Delta u +\dot{ W},\;t \geq 0,\;x \in \mathbb{R}^{d}, }$$
(1.1)

with zero initial condition, where \(\dot{W}\) is a zero mean Gaussian field with a covariance of the form

$$\displaystyle{\mathbb{E}\left [\dot{W}(t,x)\dot{W}(s,y)\right ] =\gamma _{ 0}(t - s)\Lambda (x - y),\quad s,t \geq 0,\;x,y \in \mathbb{R}^{d}.}$$

We are interested in the process \(U =\{ U_{t},\;t \geq 0\}\), where \(U_{t} = u(t,0)\).

Suppose that \(\dot{W}\) is white in time, that is, \(\gamma _{0} =\delta _{0}\), and the spatial covariance is the Riesz kernel, that is, \(\Lambda (x) = c_{d,\beta }\vert x\vert ^{-\beta }\), with β < min(d, 2) and \(c_{d,\beta } =\pi ^{-d/2}2^{\beta -d}\Gamma (\beta /2)/\Gamma ((d-\beta )/2)\). Then U has the covariance (see [14])

$$\displaystyle{ \mathbb{E}[U_{t}U_{s}] = D\left ((t + s)^{1-\frac{\beta }{2} } -\vert t - s\vert ^{1-\frac{\beta }{2} }\right ),\quad s,t \geq 0, }$$
(1.2)

for some constant

$$\displaystyle{ D = (2\pi )^{-d}(1 -\beta /2)^{-1}\int _{ \mathbb{R}^{d}}e^{-\frac{\vert \xi \vert ^{2}} {2} } \frac{d\xi } {\vert \xi \vert ^{d-\beta }}. }$$
(1.3)

Up to a constant, the covariance (1.2) is the covariance of the bifractional Brownian motion with parameters \(H = \frac{1} {2}\) and \(K = 1 - \frac{\beta } {2}\). We recall that, given constants H ∈ (0, 1) and K ∈ (0, 1), the bifractional Brownian motion \(B^{H,K} =\{ B_{t}^{H,K},\;t \geq 0\}\), introduced in [4], is a centered Gaussian process with covariance

$$\displaystyle{R_{H,K}(s,t) = \frac{1} {2^{K}}\left ((t^{2H} + s^{2H})^{K} -\vert t - s\vert ^{2HK}\right ),\quad s,t \geq 0.}$$

When K = 1, the process \(B^{H} = B^{H,1}\) is simply the fractional Brownian motion (fBm) with Hurst parameter H ∈ (0, 1), with covariance \(R_{H}(s,t) = R_{H,1}(s,t)\). In [5], Lei and Nualart obtained the following decomposition in law for the bifractional Brownian motion

$$\displaystyle{B_{t}^{H,K} = C_{1}B_{t}^{HK} + C_{2}Y _{t^{2H}}^{K},\quad t \geq 0,}$$

where \(B^{HK}\) is a fBm with Hurst parameter HK, the process \(Y ^{K}\) is given by

$$\displaystyle{ Y _{t}^{K} =\int _{ 0}^{\infty }y^{-\frac{1+K} {2} }(1 - e^{-yt})dW_{y}, }$$
(1.4)

with \(W =\{ W_{y},\;y \geq 0\}\) a standard Brownian motion independent of \(B^{H,K}\), and \(C_{1},C_{2}\) are constants given by \(C_{1} = 2^{\frac{1-K} {2} }\) and \(C_{2} = \sqrt{ \frac{2^{-K } } {\Gamma (1-K)}}\). The process \(Y ^{K}\) has trajectories which are infinitely differentiable on \((0,\infty )\) and Hölder continuous of order HK − ε on any interval [0, T], for any ε > 0. In particular, this leads to a decomposition in law of the process U with covariance (1.2) as the sum of a fractional Brownian motion with Hurst parameter \(\frac{1} {2} - \frac{\beta } {4}\) plus a regular process.
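Since the kernel in (1.4) is explicit, the regular component \(Y ^{K}\) is easy to explore numerically. The following minimal sketch (not part of the original construction; the truncation level y_max and the grid size m are illustrative choices) approximates the Wiener integral (1.4) by a Riemann sum over a truncated grid in y.

```python
import numpy as np

# Minimal sketch: approximate the Wiener integral (1.4) by a Riemann sum over a
# truncated grid in y. The truncation y_max and the resolution m are illustrative choices.
rng = np.random.default_rng(0)

def simulate_YK(K, t_grid, y_max=50.0, m=20_000):
    y = np.linspace(y_max / m, y_max, m)          # integration grid, avoiding y = 0
    dy = y_max / m
    dW = rng.normal(scale=np.sqrt(dy), size=m)    # independent Brownian increments dW_y
    # Y_t ≈ sum_i  y_i^{-(1+K)/2} (1 - exp(-y_i t)) dW_i
    return np.array([np.sum(y ** (-(1 + K) / 2) * (1 - np.exp(-y * t)) * dW)
                     for t in t_grid])

t_grid = np.linspace(0.0, 1.0, 101)
path = simulate_YK(K=0.5, t_grid=t_grid)          # one sample path of Y^K on [0, 1]
print(path[:5])
```

The same recipe, run with K = α (see Theorem 1 below), produces, up to a constant, the regular component of the decomposition of the process studied in this paper.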

The classical one-dimensional space-time white noise can also be considered as an extension of the covariance (1.2) if we take β = 1. In this case the covariance corresponds, up to a constant, to that of a bifractional Brownian motion with parameters \(H = K = \frac{1} {2}\).

The case where the noise term \(\dot{W}\) is a fractional Brownian motion with Hurst parameter \(H \in (\frac{1} {2},1)\) in time and a spatial covariance given by the Riesz kernel, that is,

$$\displaystyle{\mathbb{E}\left [\dot{W}^{H}(t,x)\dot{W}^{H}(s,y)\right ] =\alpha _{ H}c_{d,\beta }\vert s - t\vert ^{2H-2}\vert x - y\vert ^{-\beta },}$$

where 0 < β < min(d, 2) and \(\alpha _{H} = H(2H - 1)\), has been considered by Tudor and Xiao in [14]. In this case the corresponding process U has the covariance

$$\displaystyle{ \mathbb{E}[U_{t}U_{s}] = D\alpha _{H}\int _{0}^{t}\int _{ 0}^{s}\vert u - v\vert ^{2H-2}(t + s - u - v)^{-\gamma }dudv, }$$
(1.5)

where D is given in (1.3) and \(\gamma = \frac{d-\beta } {2}\). This process is self-similar with parameter \(H - \frac{\gamma } {2}\) and it has been studied in a series of papers [1, 8, 12–14]. In particular, in [14] it is proved that the process U can be decomposed into the sum of a scaled fBm with parameter \(H - \frac{\gamma } {2}\), and a Gaussian process V with continuously differentiable trajectories. This decomposition is based on the stochastic heat equation. As a consequence, one can derive the exact uniform and local moduli of continuity and Chung-type laws of the iterated logarithm for this process. In [12], assuming that d = 1, 2 or 3, a central limit theorem is obtained for the renormalized quadratic variation

$$\displaystyle{V _{n} = n^{2H-\gamma -\frac{1} {2} }\sum _{j=0}^{n-1}\left \{(U_{(\,j+1)T/n} - U_{jT/n})^{2} - \mathbb{E}\left [(U_{(\,j+1)T/n} - U_{jT/n})^{2}\right ]\right \},}$$

assuming \(\frac{1} {2} <H <\frac{3} {4}\), extending well-known results for fBm (see for example [6, Theorem 7.4.1]).

The purpose of this paper is to establish a decomposition in law, similar to that obtained by Lei and Nualart in [5] for the bifractional Brownian motion, and a central limit theorem for the Hermite variations of the increments, for a class of self-similar processes that includes the covariance (1.5). Consider a centered Gaussian process {X t , t ≥ 0} with covariance

$$\displaystyle{ R(s,t) = \mathbb{E}[X_{s}X_{t}] = \mathbb{E}\left [\left (\int _{0}^{t}Z_{ t-r}dB_{r}^{H}\right )\left (\int _{ 0}^{s}Z_{ s-r}dB_{r}^{H}\right )\right ], }$$
(1.6)

where

  1. (i)

    \(B^{H} =\{ B_{t}^{H},\;t \geq 0\}\) is a fBm with Hurst parameter H ∈ (0, 1).

  2. (ii)

    \(Z =\{ Z_{t},\;t > 0\}\) is a zero-mean Gaussian process, independent of \(B^{H}\), with covariance

    $$\displaystyle{ \mathbb{E}[Z_{s}Z_{t}] = (s + t)^{-\gamma }, }$$
    (1.7)

    where 0 < γ < 2H.

In other words, X is a Gaussian process with the same covariance as the process \(\{\int _{0}^{t}Z_{t-r}\,dB_{r}^{H},\;t \geq 0\}\), which is not Gaussian.

When \(H \in (\frac{1} {2},1)\), the covariance (1.6) coincides with (1.5) with D = 1. However, we allow the range of parameters 0 < H < 1 and 0 < γ < 2H. In other words, up to a constant, X has the law of the solution in time of the stochastic heat equation (1.1), when H ∈ (0, 1), d ≥ 1 and β = d − 2γ. Also of interest is that X can be constructed as a sum of stochastic integrals with respect to the Brownian sheet (see the proof of Theorem 1).

1.1 Decomposition of the Process X

Our first result is the following decomposition in law of the process X as the sum of a fractional Brownian motion with Hurst parameter \(\frac{\alpha }{2} = H - \frac{\gamma } {2}\) plus a process with regular trajectories.

Theorem 1

The process X has the same law as \(\{\sqrt{\kappa }B_{t}^{ \frac{\alpha }{ 2} } + Y _{t},t \geq 0\}\) , where here and in what follows, α = 2H − γ,

$$\displaystyle{ \kappa = \frac{1} {\Gamma (\gamma )}\int _{0}^{\infty } \frac{z^{\gamma -1}} {1 + z^{2}}\ dz, }$$
(1.8)

\(B^{ \frac{\alpha }{ 2} }\) is a fBm with Hurst parameter \(\frac{\alpha }{2}\) , and Y (up to a constant) has the same law as the process \(Y ^{K}\) defined in ( 1.4 ), with K = α, that is, Y is a centered Gaussian process with covariance given by

$$\displaystyle{\mathbb{E}\left [Y _{t}Y _{s}\right ] =\lambda _{1}\int _{0}^{\infty }y^{-\alpha -1}(1 - e^{-yt})(1 - e^{-ys})\ dy,}$$

where

$$\displaystyle{\lambda _{1} = \frac{4\pi } {\Gamma (\gamma )\Gamma (2H + 1)\sin (\pi H)}\int _{0}^{\infty }\frac{\eta ^{1-2H}} {1 +\eta ^{2}}d\eta.}$$

The proof of this theorem is given in Sect. 3.

1.2 Hermite Variations of the Process

For each integer q ≥ 0, the qth Hermite polynomial is given by

$$\displaystyle{H_{q}(x) = (-1)^{q}e^{\frac{x^{2}} {2} } \frac{d^{q}} {dx^{q}}e^{-\frac{x^{2}} {2} }.}$$

See [6, Sect. 1.4] for a discussion of properties of these polynomials. In particular, it is well known that the family \(\{ \frac{1} {\sqrt{q!}}H_{q},q \geq 0\}\) constitutes an orthonormal basis of the space \(L^{2}(\mathbb{R},\gamma )\), where γ is the N(0, 1) measure.
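The polynomials above are the probabilists' Hermite polynomials; they satisfy H 0 = 1, H 1(x) = x and the recurrence \(H_{q+1}(x) = xH_{q}(x) - qH_{q-1}(x)\). The short numerical sketch below (an illustration only; the sample size is arbitrary) evaluates them through this recurrence and checks the stated orthonormality under the N(0, 1) measure by Monte Carlo.

```python
import numpy as np
from math import factorial

def hermite(q, x):
    """Probabilists' Hermite polynomial H_q via the recurrence H_{k+1} = x H_k - k H_{k-1}."""
    h_prev, h = np.ones_like(x), x
    if q == 0:
        return h_prev
    for k in range(1, q):
        h_prev, h = h, x * h - k * h_prev
    return h

rng = np.random.default_rng(1)
z = rng.standard_normal(1_000_000)                 # samples from the N(0, 1) measure
for p in range(4):
    row = [np.mean(hermite(p, z) * hermite(q, z)) / np.sqrt(factorial(p) * factorial(q))
           for q in range(4)]
    print([round(v, 2) for v in row])              # approximately the 4 x 4 identity matrix
```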

Suppose \(\{Z_{n},\;n \geq 1\}\) is a stationary Gaussian sequence, where each \(Z_{n}\) follows the N(0, 1) distribution, with covariance function \(\rho (k) = \mathbb{E}\left [Z_{n}Z_{n+k}\right ]\). If \(\sum _{k=1}^{\infty }\vert \rho (k)\vert ^{q} <\infty \), it is well known that as n tends to infinity, the Hermite variation

$$\displaystyle{ V _{n} = \frac{1} {\sqrt{n}}\sum _{j=1}^{n}H_{ q}(Z_{j}) }$$
(1.9)

converges in distribution to a Gaussian random variable with mean zero and variance given by \(\sigma ^{2} = q!\sum _{k\in \mathbb{Z}}\rho (k)^{q}\). This result was proved by Breuer and Major in [3]. In particular, if \(B^{H}\) is a fBm, then the sequence \(\{Z_{j,n},\;0 \leq j \leq n - 1\}\) defined by

$$\displaystyle{Z_{j,n} = n^{H}\left (B_{\frac{ j+1} {n} }^{H} - B_{\frac{ j} {n} }^{H}\right )}$$

is a stationary sequence with unit variance. As a consequence, if \(H <1 -\frac{1} {q},\) we have that

$$\displaystyle{ \frac{1} {\sqrt{n}}\sum _{j=0}^{n-1}H_{ q}\left (n^{H}\left (B_{\frac{ j+1} {n} }^{H} - B_{\frac{ j} {n} }^{H}\right )\right )}$$

converges to a normal law with variance given by

$$\displaystyle{ \sigma _{q}^{2} = \frac{q!} {2^{q}}\sum _{m\in \mathbb{Z}}\left (\vert m + 1\vert ^{2H} - 2\vert m\vert ^{2H} + \vert m - 1\vert ^{2H}\right )^{q}. }$$
(1.10)

See [3] and Theorem 7.4.1 of [6].
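For a concrete sense of the size of this limit, the series (1.10) can be evaluated by truncating the sum over m; the summand decays like \(\vert m\vert ^{q(2H-2)}\), which is summable under the stated condition on H, so the tail is negligible. A minimal sketch (the truncation level M is an illustrative choice):

```python
import numpy as np
from math import factorial

def sigma_q_squared(H, q, M=100_000):
    """Truncated evaluation of the limiting variance (1.10)."""
    m = np.arange(-M, M + 1, dtype=float)
    # second difference of |m|^{2H}, i.e. twice the covariance rho(m) of the increment sequence
    d2 = np.abs(m + 1) ** (2 * H) - 2 * np.abs(m) ** (2 * H) + np.abs(m - 1) ** (2 * H)
    return factorial(q) / 2 ** q * np.sum(d2 ** q)

print(sigma_q_squared(H=0.6, q=3))   # Hermite rank q = 3, with H < 1 - 1/q
```

The variance σ² in (1.11) below has exactly the same form, with 2H replaced by α.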

The above Breuer-Major theorem cannot be applied directly to our process because the normalized increments of X do not form a stationary sequence. However, we have a comparable result.

Theorem 2

Let q ≥ 2 be an integer and fix a real T > 0. Suppose that \(\alpha <2 -\frac{1} {q}\) , where α is defined in Theorem  1 . For t ∈ [0, T], define,

$$\displaystyle{F_{n}(t) = n^{-\frac{1} {2} }\sum _{j=0}^{\lfloor nt\rfloor -1}H_{q}\left ( \frac{\Delta X_{\frac{j} {n} }} {\left \|\Delta X_{\frac{j} {n} }\right \|_{L^{2}(\Omega )}}\right ),}$$

where \(H_{q}(x)\) denotes the qth Hermite polynomial. Then as \(n \rightarrow \infty \), the stochastic process \(\{F_{n}(t),\;t \in [0,T]\}\) converges in law in the Skorohod space D([0, T]) to a scaled Brownian motion \(\{\sigma B_{t},\;t \in [0,T]\}\), where \(\{B_{t},\;t \in [0,T]\}\) is a standard Brownian motion and \(\sigma = \sqrt{\sigma ^{2}}\) is given by

$$\displaystyle{ \sigma ^{2} = \frac{q!} {2^{q}}\sum _{m\in \mathbb{Z}}\left (\vert m + 1\vert ^{\alpha }- 2\vert m\vert ^{\alpha } + \vert m - 1\vert ^{\alpha }\right )^{q}. }$$
(1.11)

The proof of this theorem is given in Sect. 4.
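As an informal numerical illustration (not part of the paper's arguments): by Theorem 1 the increments of X are, in law, those of a scaled fBm with Hurst parameter α∕2 plus a much smoother term, so the limiting variance (1.11) can already be previewed on the fBm component alone. The sketch below simulates that component by a Cholesky factorization of its covariance (the sample sizes and the choices q = 2, α = 1.2 are illustrative) and compares the sample variance of the Hermite variation at t = 1 with σ².

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(2)

def fbm_path(hurst, n):
    """fBm on the grid {1/n, ..., 1}, simulated via a Cholesky factor of its covariance."""
    t = np.arange(1, n + 1) / n
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst) - np.abs(s - u) ** (2 * hurst))
    return np.linalg.cholesky(cov + 1e-12 * np.eye(n)) @ rng.standard_normal(n)

alpha, q, n, reps = 1.2, 2, 400, 300              # note alpha < 2 - 1/q
samples = []
for _ in range(reps):
    incr = np.diff(np.concatenate(([0.0], fbm_path(alpha / 2, n))))
    z = n ** (alpha / 2) * incr                   # normalized increments, standard Gaussian
    samples.append(np.sum(z ** 2 - 1) / np.sqrt(n))   # H_2(x) = x^2 - 1

m = np.arange(-10_000, 10_001, dtype=float)
d2 = np.abs(m + 1) ** alpha - 2 * np.abs(m) ** alpha + np.abs(m - 1) ** alpha
sigma2 = factorial(q) / 2 ** q * np.sum(d2 ** q)  # the variance (1.11)
print(np.var(samples), sigma2)                    # the two should be of comparable size
```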

2 Preliminaries

2.1 Analysis on the Wiener Space

The reader may refer to [6, 7] for a detailed coverage of this topic. Let \(Z =\{ Z(h),h \in \mathbb{H}\}\) be an isonormal Gaussian process on a probability space \((\Omega,\mathbb{F},P)\), indexed by a real separable Hilbert space \(\mathbb{H}\). This means that Z is a family of Gaussian random variables such that \(\mathbb{E}[Z(h)] = 0\) and \(\mathbb{E}\left [Z(h)Z(g)\right ] = \left <h,g\right>_{\mathbb{H}}\) for all \(h,g \in \mathbb{H}\).

For integers q ≥ 1, let \(\mathbb{H}^{\otimes q}\) denote the qth tensor product of \(\mathbb{H}\), and \(\mathbb{H}^{\odot q}\) denote the subspace of symmetric elements of \(\mathbb{H}^{\otimes q}\).

Let \(\{e_{n},\;n \geq 1\}\) be a complete orthonormal system in \(\mathbb{H}\). For elements \(f,g \in \mathbb{H}^{\odot q}\) and \(p \in \{ 0,\ldots,q\}\), we define the pth-order contraction of f and g as that element of \(\mathbb{H}^{\otimes 2(q-p)}\) given by

$$\displaystyle{ f \otimes _{p}g =\sum _{ i_{1},\ldots,i_{p}=1}^{\infty }\left <\,f,e_{ i_{1}} \otimes \cdots \otimes e_{i_{p}}\right>_{\mathbb{H}^{\otimes p}} \otimes \left <g,e_{i_{1}} \otimes \cdots \otimes e_{i_{p}}\right>_{\mathbb{H}^{\otimes p}}, }$$
(2.1)

where \(f \otimes _{0}g = f \otimes g\). Note that, if \(f,g \in \mathbb{H}^{\odot q}\), then \(f \otimes _{q}g = \left <\,f,g\right>_{\mathbb{H}^{\odot q}}\). In particular, if f, g are real-valued functions in \(\mathbb{H}^{\otimes 2} = L^{2}(\mathbb{R}^{2},\mathbb{B}^{2},\mu ^{2})\) for a non-atomic measure μ, then we have

$$\displaystyle{ f \otimes _{1}g =\int _{\mathbb{R}}f(s,t_{1})g(s,t_{2})\ \mu (ds). }$$
(2.2)

Let \(\mathbb{H}_{q}\) be the qth Wiener chaos of Z, that is, the closed linear subspace of \(L^{2}(\Omega )\) generated by the random variables \(\{H_{q}(Z(h)),h \in \mathbb{H},\|h\|_{\mathbb{H}} = 1\}\), where H q (x) is the qth Hermite polynomial. It can be shown (see [6, Proposition 2.2.1]) that if Z, YN(0, 1) are jointly Gaussian, then

$$\displaystyle{ \mathbb{E}\left [H_{p}(Z)H_{q}(Y )\right ] = \left \{\begin{array}{@{}l@{\quad }l@{}} p!\left (\mathbb{E}\left [ZY \right ]\right )^{p}\quad &if\;p = q \\ 0 \quad &\mathrm{otherwise} \end{array} \right.. }$$
(2.3)

For q ≥ 1, it is known that the map

$$\displaystyle{ I_{q}(h^{\otimes q}) = H_{ q}(Z(h)) }$$
(2.4)

provides a linear isometry between \(\mathbb{H}^{\odot q}\) (equipped with the modified norm \(\sqrt{q!}\| \cdot \|_{\mathbb{H}^{\otimes q}}\)) and \(\mathbb{H}_{q}\), where I q (⋅ ) is the generalized Wiener-Itô stochastic integral (see [6, Theorem 2.7.7]). By convention, \(\mathbb{H}_{0} = \mathbb{R}\) and I 0(x) = x.

We use the following integral multiplication theorem from [7, Proposition 1.1.3]. Suppose \(f \in \mathbb{H}^{\odot p}\) and \(g \in \mathbb{H}^{\odot q}\). Then

$$\displaystyle{ I_{p}(\,f)I_{q}(g) =\sum _{ r=0}^{p\wedge q}r!\binom{p}{r}\binom{q}{r}I_{ p+q-2r}(\,f\widetilde{ \otimes }_{r}g), }$$
(2.5)

where \(f\widetilde{ \otimes }_{r}g\) denotes the symmetrization of f r g. For a product of more than two integrals, see Peccati and Taqqu [9].
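For instance, in the simplest case p = q = 1, formula (2.5) reduces to the elementary identity

$$\displaystyle{I_{1}(\,f)I_{1}(g) = I_{2}(\,f\widetilde{ \otimes }g) + \left <\,f,g\right>_{\mathbb{H}},}$$

that is, the product of two elements of the first chaos is a second-chaos term plus its expectation \(\mathbb{E}\left [I_{1}(\,f)I_{1}(g)\right ] = \left <\,f,g\right>_{\mathbb{H}}\).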

2.2 Stochastic Integration and fBm

We refer to the ‘time domain’ and ‘spectral domain’ representations of fBm. The reader may refer to [10, 11] for details. Let \(\mathbb{E}\) denote the set of real-valued step functions on \(\mathbb{R}\). Let B H denote fBm with Hurst parameter H. For this case, we view B H as an isonormal Gaussian process on the Hilbert space \(\mathfrak{H}\), which is the closure of \(\mathbb{E}\) with respect to the inner product \(\left <\,f,g\right>_{\mathfrak{H}} = \mathbb{E}\left [I(\,f)I(g)\right ]\). Consider also the inner product space

$$\displaystyle{\tilde{\Lambda }_{H} = \left \{f: f \in L^{2}(\mathbb{R}),\int _{ \mathbb{R}}\vert \mathbb{F}f(\xi )\vert ^{2}\vert \xi \vert ^{1-2H}d\xi <\infty \right \},}$$

where \(\mathbb{F}f =\int _{\mathbb{R}}f(x)e^{i\xi x}dx\) is the Fourier transform, and the inner product of \(\tilde{\Lambda }_{H}\) is given by

$$\displaystyle{ \left <\,f,g\right>_{\tilde{\Lambda }_{H}} = \frac{1} {C_{H}^{2}}\int _{\mathbb{R}}\mathbb{F}f(\xi )\overline{\mathbb{F}g(\xi )}\vert \xi \vert ^{1-2H}d\xi, }$$
(2.6)

where \(C_{H} = \left ( \frac{2\pi } {\Gamma (2H+1)\sin (\pi H)}\right )^{\frac{1} {2} }\). It is known (see [10, Theorem 3.1]) that the space \(\tilde{\Lambda }_{H}\) is isometric to a subspace of \(\mathfrak{H}\), and \(\tilde{\Lambda }_{H}\) contains \(\mathbb{E}\) as a dense subset. The inner product (2.6) is expressed in terms of the spectral measure of fBm. In the case \(H \in (\frac{1} {2},1)\), there is another isometry from the space

$$\displaystyle{\left \vert \Lambda _{H}\right \vert = \left \{f:\int _{ 0}^{\infty }\int _{ 0}^{\infty }\vert f(u)\vert \vert f(v)\vert \vert u - v\vert ^{2H-2}du\ dv <\infty \right \}}$$

to a subspace of \(\mathfrak{H}\), where the inner product is defined as

$$\displaystyle{\left <\,f,g\right>_{\vert \Lambda _{H}\vert } = H(2H - 1)\int _{0}^{\infty }\int _{ 0}^{\infty }f(u)g(v)\vert u - v\vert ^{2H-2}du\ dv,}$$

see [10] or [7, Sect. 5.1].

3 Proof of Theorem 1

For any γ > 0 and λ > 0, we can write

$$\displaystyle{\lambda ^{-\gamma } = \frac{1} {\Gamma (\gamma )}\int _{0}^{\infty }y^{\gamma -1}e^{-\lambda y}dy,}$$

where \(\Gamma\) is the Gamma function defined by \(\Gamma (\gamma ) =\int _{ 0}^{\infty }y^{\gamma -1}e^{-y}dy\). As a consequence, the covariance (1.7) can be written as

$$\displaystyle{ \mathbb{E}[Z_{s}Z_{t}] = \frac{1} {\Gamma (\gamma )}\int _{0}^{\infty }y^{\gamma -1}e^{-(t+s)y}dy. }$$
(3.1)

Notice that this representation implies the covariance (1.7) is positive definite. Taking first the expectation with respect to the process Z, and using formula (3.1), we obtain

$$\displaystyle\begin{array}{rcl} R(s,t)& =& \frac{1} {\Gamma (\gamma )}\int _{0}^{\infty }\mathbb{E}\left [\left (\int _{ 0}^{t}e^{yu}dB_{ u}^{H}\right )\left (\int _{ 0}^{s}e^{yu}dB_{ u}^{H}\right )\right ]y^{\gamma -1}e^{-(t+s)y}dy {}\\ & =& \frac{1} {\Gamma (\gamma )}\int _{0}^{\infty }\left \langle e^{yu}\mathbf{1}_{ [0,t]}(u),e^{yv}\mathbf{1}_{ [0,s]}(v)\right \rangle _{\mathfrak{H}}y^{\gamma -1}e^{-(t+s)y}dy. {}\\ \end{array}$$

Using the isometry between \(\tilde{\Lambda }_{H}\) and a subspace of \(\mathfrak{H}\) (see Sect. 2.2), we can write

$$\displaystyle\begin{array}{rcl} \left \langle e^{yu}\mathbf{1}_{ [0,t]}(u),e^{yv}\mathbf{1}_{ [0,s]}(v)\right \rangle _{\mathfrak{H}}& =& C_{H}^{-2}\int _{ \mathbb{R}}\vert \xi \vert ^{1-2H}(\mathbb{F}\mathbf{1}_{ [0,t]}e^{y\cdot })(\overline{\mathbb{F}\mathbf{1}_{ [0,s]}e^{y\cdot }})\ d\xi {}\\ & =& C_{H}^{-2}\int _{ \mathbb{R}} \frac{\vert \xi \vert ^{1-2H}} {y^{2} +\xi ^{2}} \left (e^{yt+i\xi t} - 1\right )\left (e^{ys-i\xi s} - 1\right )\ d\xi, {}\\ \end{array}$$

where \((\mathbb{F}\mathbf{1}_{[0,t]}e^{x\cdot })\) denotes the Fourier transform and \(C_{H} = \left ( \frac{2\pi } {\Gamma (2H+1)\sin (\pi H)}\right )^{\frac{1} {2} }\). This allows us to write, making the change of variable ξ = ηy,

$$\displaystyle\begin{array}{rcl} R(s,t)& =& \frac{1} {\Gamma (\gamma )C_{H}^{2}}\int _{0}^{\infty }\int _{ \mathbb{R}}y^{\gamma -1}\frac{\vert \xi \vert ^{1-2H}} {y^{2} +\xi ^{2}} \left (e^{i\xi t} - e^{-yt}\right )\left (e^{-i\xi s} - e^{-ys}\right )\ d\xi \ dy \\ & =& \frac{1} {\Gamma (\gamma )C_{H}^{2}}\int _{0}^{\infty }\int _{ \mathbb{R}}y^{-\alpha -1}\frac{\vert \eta \vert ^{1-2H}} {1 +\eta ^{2}} \left (e^{i\eta yt} - e^{-yt}\right )\left (e^{-i\eta ys} - e^{-ys}\right )\ d\eta \ dy,{}\end{array}$$
(3.2)

where α = 2H − γ. By Euler's identity, adding and subtracting 1 to compensate the singularity of \(y^{-\alpha -1}\) at the origin, we can write

$$\displaystyle{ e^{i\eta yt} - e^{-yt} = (\cos (\eta yt) - 1 + i\sin (\eta yt)) + (1 - e^{-yt}). }$$
(3.3)

Substituting (3.3) into (3.2) and taking into account that the integral of the imaginary part vanishes because it is an odd function, we obtain

$$\displaystyle\begin{array}{rcl} R(s,t)& =& \frac{2} {\Gamma (\gamma )C_{H}^{2}}\int _{0}^{\infty }\int _{ 0}^{\infty }y^{-\alpha -1}\frac{\eta ^{1-2H}} {1 +\eta ^{2}}\Big((1 -\cos (\eta yt))(1 -\cos (\eta ys)) {}\\ & & +\sin (\eta yt)\sin (\eta ys) + (\cos (\eta ys) - 1)(1 - e^{-yt}) + (\cos (\eta yt) - 1)(1 - e^{-ys}) {}\\ & & +(1 - e^{-yt})(1 - e^{-ys})\Big)\ d\eta \ dy. {}\\ \end{array}$$

Let \(B^{(\,j)} =\{ B^{(\,j)}(\eta,t),\;\eta \geq 0,\;t \geq 0\}\), j = 1, 2, denote two independent Brownian sheets. That is, for j = 1, 2, \(B^{(\,j)}\) is a continuous Gaussian field with mean zero and covariance given by

$$\displaystyle{\mathbb{E}\left [B^{(\,j)}(\eta,t)B^{(\,j)}(\xi,s)\right ] =\min (\eta,\xi ) \times \min (t,s).}$$

We define the following stochastic processes:

$$\displaystyle{ U_{t} = \frac{\sqrt{2}} {\sqrt{\Gamma (\gamma )}C_{H}}\int _{0}^{\infty }\int _{ 0}^{\infty }y^{-\frac{\alpha }{2} -\frac{1} {2} }\sqrt{\frac{\eta ^{1-2H } } {1 +\eta ^{2}}}\left (\cos (\eta yt) - 1\right )B^{(1)}(d\eta,dy), }$$
(3.4)
$$\displaystyle{ V _{t} = \frac{\sqrt{2}} {\sqrt{\Gamma (\gamma )}C_{H}}\int _{0}^{\infty }\int _{ 0}^{\infty }y^{-\frac{\alpha }{2} -\frac{1} {2} }\sqrt{\frac{\eta ^{1-2H } } {1 +\eta ^{2}}}\left (\sin (\eta yt)\right )B^{(2)}(d\eta,dy), }$$
(3.5)
$$\displaystyle{ Y _{t} = \frac{\sqrt{2}} {\sqrt{\Gamma (\gamma )}C_{H}}\int _{0}^{\infty }\int _{ 0}^{\infty }y^{-\frac{\alpha }{2} -\frac{1} {2} }\sqrt{\frac{\eta ^{1-2H } } {1 +\eta ^{2}}}\left (1 - e^{-yt}\right )B^{(1)}(d\eta,dy), }$$
(3.6)

where the integrals are Wiener-Itô integrals with respect to the Brownian sheet. We then define the stochastic process \(X =\{ X_{t},\;t \geq 0\}\) by \(X_{t} = U_{t} + V _{t} + Y _{t}\), and we have \(\mathbb{E}\left [X_{s}X_{t}\right ] = R(s,t)\) as given in (3.2). These processes have the following properties:

  1. (I)

    The process \(W_{t} = U_{t} + V _{t}\) is a fractional Brownian motion with Hurst parameter \(\frac{\alpha }{2}\) scaled with the constant \(\sqrt{\kappa }\). In fact, the covariance of this process is

    $$\displaystyle\begin{array}{rcl} \mathbb{E}[W_{t}W_{s}]& =& \frac{2} {\Gamma (\gamma )C_{H}^{2}}\int _{0}^{\infty }\int _{ 0}^{\infty }y^{-\alpha -1}\frac{\eta ^{1-2H}} {1 +\eta ^{2}}\Big((\cos (\eta yt) - 1)(\cos (\eta ys) - 1) {}\\ & & +\sin (\eta yt)\sin (\eta ys)\Big)d\eta dy {}\\ & =& \frac{1} {\Gamma (\gamma )C_{H}^{2}}\int _{0}^{\infty }\int _{ \mathbb{R}}y^{\gamma -1}\frac{\vert \xi \vert ^{1-2H}} {y^{2} +\xi ^{2}} (e^{i\xi t} - 1)(e^{-i\xi s} - 1)d\xi dy. {}\\ \end{array}$$

    Integrating in the variable y we finally obtain

    $$\displaystyle{\mathbb{E}[W_{t}W_{s}] = \frac{c_{1}} {\Gamma (\gamma )C_{H}^{2}}\int _{\mathbb{R}} \frac{(e^{i\xi t} - 1)(e^{-i\xi s} - 1)} {\vert \xi \vert ^{\alpha +1}} d\xi,}$$

    where \(c_{1} =\int _{ 0}^{\infty } \frac{z^{\gamma -1}} {1+z^{2}} dz =\kappa \Gamma (\gamma )\). Taking into account the Fourier transform representation of fBm (see [11, p. 328]), this implies \(\kappa ^{-\frac{1} {2} }W\) is a fractional Brownian motion with Hurst parameter \(\frac{\alpha }{2}\).

  2. (II)

    The process Y coincides, up to a constant, with the process \(Y ^{K}\) introduced in (1.4) with K = α. In fact, the covariance of this process is given by

    $$\displaystyle{ \mathbb{E}[Y _{t}Y _{s}] = \frac{2c_{2}} {\Gamma (\gamma )C_{H}^{2}}\int _{0}^{\infty }y^{-\alpha -1}(1 - e^{-yt})(1 - e^{-ys})dy, }$$
    (3.7)

    where

    $$\displaystyle{c_{2} =\int _{ 0}^{\infty }\frac{\eta ^{1-2H}} {1 +\eta ^{2}}d\eta.}$$

Notice that the process X is self-similar with exponent \(\frac{\alpha }{2}\). This concludes the proof of Theorem 1.

4 Proof of Theorem 2

Throughout the proof, the symbol C denotes a generic positive constant, which may change from line to line. The value of C may depend on the parameters of the process and on T, but not on the increment width \(n^{-1}\).

For integers n ≥ 1, define a partition of \([0,\infty )\) composed of the intervals \(\{[ \frac{j} {n}, \frac{j+1} {n} ),j \geq 0\}\). For the process X and related processes U, V, W, Y defined in Sect. 3, we introduce the notation

$$\displaystyle{\Delta X_{\frac{j} {n} } = X_{\frac{j+1} {n} } - X_{\frac{j} {n} }\;\text{and }\,\Delta X_{0} = X_{\frac{1} {n} },}$$

with corresponding notation for U, V, W, Y. We start the proof of Theorem 2 with two technical results about the components of the increments.

4.1 Preliminary Lemmas

Lemma 3

Using the above notation, with integers n ≥ 2 and j, k ≥ 0, we have

  1. (a)

    \(\mathbb{E}\left [\Delta W_{\frac{j} {n} }\Delta W_{\frac{k} {n} }\right ] = \frac{\kappa } {2}n^{-\alpha }\left (\vert \,j - k + 1\vert ^{\alpha }- 2\vert \,j - k\vert ^{\alpha } + \vert \,j - k - 1\vert ^{\alpha }\right )\) , where κ is defined in ( 1.8 ).

  2. (b)

    For j + k ≥ 1,

    $$\displaystyle{\left \vert \mathbb{E}\left [\Delta Y _{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]\right \vert \leq Cn^{-\alpha }(\,j + k)^{\alpha -2}}$$

    for a constant C > 0 that is independent of j, k and n.

Proof

Property (a) is well-known for fractional Brownian motion. For (b), we have from (3.7):

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\Delta Y _{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]& =& \frac{2c_{2}} {\Gamma (\gamma )C_{H}^{2}n^{\alpha }}\int _{0}^{\infty }y^{-\alpha -1}\left (e^{-yj} - e^{-y(\,j+1)}\right )\left (e^{-yk} - e^{-y(k+1)}\right )dy {}\\ & =& \frac{2c_{2}} {\Gamma (\gamma )C_{H}^{2}n^{\alpha }}\int _{0}^{\infty }y^{-\alpha +1}\int _{ 0}^{1}\int _{ 0}^{1}e^{-y(\,j+k+u+v)}du\ dv\ dy. {}\\ \end{array}$$

Note that the above integral is nonnegative, and we can bound this with

$$\displaystyle\begin{array}{rcl} \left \vert \mathbb{E}\left [\Delta Y _{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]\right \vert & \leq & Cn^{-\alpha }\int _{ 0}^{\infty }y^{-\alpha +1}e^{-y(\,j+k)}\ dy {}\\ & =& Cn^{-\alpha }(\,j + k)^{\alpha -2}\int _{ 0}^{\infty }u^{-\alpha +1}e^{-u}du {}\\ & \leq & Cn^{-\alpha }(\,j + k)^{\alpha -2}. {}\\ \end{array}$$

Lemma 4

For n ≥ 2 fixed and integers j, k ≥ 1,

$$\displaystyle{\left \vert \mathbb{E}\left [\Delta W_{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]\right \vert \leq Cn^{-\alpha }j^{2H-2}k^{-\gamma }}$$

for a constant C > 0 that is independent of j, k and n.

Proof

From (3.4)–(3.6) in the proof of Theorem 1, observe that

$$\displaystyle{\mathbb{E}\left [\Delta W_{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ] = \mathbb{E}\left [(\Delta U_{\frac{j} {n} } + \Delta V _{\frac{j} {n} })\Delta Y _{\frac{k} {n} }\right ] = \mathbb{E}\left [\Delta U_{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ].}$$

Assume s, t > 0. By self-similarity we can define the covariance function ψ by \(\mathbb{E}\left [U_{t}Y _{s}\right ] = s^{\alpha }\mathbb{E}\left [U_{t/s}Y _{1}\right ] = s^{\alpha }\psi (t/s)\), where, using the change-of-variable θ = ηx,

$$\displaystyle\begin{array}{rcl} \psi (x)& =& \int _{0}^{\infty }\int _{ 0}^{\infty }y^{-\alpha -1}\frac{\eta ^{1-2H}} {1 +\eta ^{2}}\left (\cos (y\eta x) - 1\right )(1 - e^{-y})\ d\eta \ dy {}\\ & =& \int _{0}^{\infty }y^{-\alpha -1}(1 - e^{-y})\int _{ 0}^{\infty }\frac{\theta ^{1-2H}x^{2H}} {x^{2} +\theta ^{2}} \left (\cos (y\theta ) - 1\right )\ d\theta \ dy. {}\\ \end{array}$$

Then using the fact that

$$\displaystyle{ \left \vert \frac{\theta ^{1-2H}x^{2H}} {x^{2} +\theta ^{2}} \right \vert \leq \vert \theta ^{-2H}\vert \ \vert x\vert ^{2H-1}, }$$
(4.1)

we see that | ψ(x) | ≤ Cx 2H−1, and

$$\displaystyle\begin{array}{rcl} \psi '(x)& =& 2H\int _{0}^{\infty }y^{-\alpha -1}(1 - e^{-y})\int _{ 0}^{\infty }\frac{\theta ^{1-2H}x^{2H-1}} {x^{2} +\theta ^{2}} \left (\cos (y\theta ) - 1\right )\ d\theta \ dy {}\\ & & \quad - 2\int _{0}^{\infty }y^{-\alpha -1}(1 - e^{-y})\int _{ 0}^{\infty }\frac{\theta ^{1-2H}x^{2H+1}} {(x^{2} +\theta ^{2})^{2}} \left (\cos (y\theta ) - 1\right )\ d\theta \ dy. {}\\ \end{array}$$

Using (4.1) and similarly

$$\displaystyle{ \left \vert \frac{\theta ^{1-2H}x^{2H+1}} {(x^{2} +\theta ^{2})^{2}} \right \vert \leq \vert \theta ^{-2H}\vert \ \vert x\vert ^{2H-2}, }$$
(4.2)

we can write

$$\displaystyle{\left \vert \psi '(x)\right \vert \leq x^{2H-2}\vert 2H - 2\vert \int _{ 0}^{\infty }y^{-\alpha -1}(1 - e^{-y})\int _{ 0}^{\infty }\theta ^{-2H}\left (\cos (y\theta ) - 1\right )\ d\theta \ dy \leq Cx^{2H-2}.}$$

By continuing the computation, we can find that | ψ″(x) | ≤ Cx 2H−3. We have for j, k ≥ 1,

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\Delta U_{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]& =& n^{-\alpha }(k + 1)^{\alpha }\left (\psi \left (\frac{j + 1} {k + 1}\right ) -\psi \left ( \frac{j} {k + 1}\right )\right ) {}\\ & & \quad - n^{-\alpha }k^{\alpha }\left (\psi \left (\frac{j + 1} {k} \right ) -\psi \left (\frac{j} {k}\right )\right ) {}\\ & =& n^{-\alpha }\left ((k + 1)^{\alpha } - k^{\alpha }\right )\left (\psi \left (\frac{j + 1} {k + 1}\right ) -\psi \left ( \frac{j} {k + 1}\right )\right ) {}\\ & & +n^{-\alpha }k^{\alpha }\left (\psi \left (\frac{j + 1} {k + 1}\right ) -\psi \left ( \frac{j} {k + 1}\right ) -\psi \left (\frac{j + 1} {k} \right ) +\psi \left (\frac{j} {k}\right )\right ). {}\\ \end{array}$$

With the above bounds on ψ and its derivatives, the first term is bounded by

$$\displaystyle\begin{array}{rcl} & & n^{-\alpha }\left \vert (k + 1)^{\alpha } - k^{\alpha }\right \vert \left \vert \psi \left (\frac{j + 1} {k + 1}\right ) -\psi \left ( \frac{j} {k + 1}\right )\right \vert {}\\ & &\phantom{n^{-\alpha }\vert (k} \leq \alpha n^{-\alpha }\int _{ 0}^{1}(k + u)^{\alpha -1}du\int _{ 0}^{ \frac{1} {k+1} }\left \vert \psi '\left ( \frac{j} {k + 1} + v\right )\right \vert \ dv {}\\ & & \phantom{n^{-\alpha }\vert (k} \leq Cn^{-\alpha }k^{\alpha -2}\left (\frac{j} {k}\right )^{2H-2} \leq Cn^{-\alpha }k^{-\gamma }j^{2H-2}, {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} & & n^{-\alpha }k^{\alpha }\left \vert \psi \left (\frac{j + 1} {k + 1}\right ) -\psi \left ( \frac{j} {k + 1}\right ) -\psi \left (\frac{j + 1} {k} \right ) +\psi \left (\frac{j} {k}\right )\right \vert {}\\ & &\phantom{n^{-\alpha }k^{\alpha }\vert } = n^{-\alpha }k^{\alpha }\left \vert \int _{ 0}^{ \frac{1} {k+1} }\psi '\left ( \frac{j} {k + 1} + u\right )\ du -\int _{0}^{ \frac{1} {k} }\psi '\left (\frac{j} {k} + u\right )\ du\right \vert {}\\ & &\phantom{n^{-\alpha }}\leq n^{-\alpha }k^{\alpha }\int _{ \frac{ 1} {k+1} }^{ \frac{1} {k} }\left \vert \psi '\left (\frac{j} {k} + u\right )\right \vert \ du +\int _{ 0}^{ \frac{1} {k+1} }\int _{ \frac{j} {k+1} }^{ \frac{j} {k} }\left \vert \psi ''(u + v)\right \vert \ dv\ du {}\\ & & \phantom{n^{-\alpha }k^{\alpha }\vert }\leq Cn^{-\alpha }k^{\alpha -2}\left (\frac{j} {k}\right )^{2H-2} + Cn^{-\alpha }k^{\alpha -3}j\left (\frac{j} {k}\right )^{2H-3} \leq Cn^{-\alpha }k^{-\gamma }j^{2H-2}. {}\\ \end{array}$$

This concludes the proof of the lemma. □

4.2 Proof of Theorem 2

We will make use of the notation \(\beta _{j,n} = \left \|\Delta X_{\frac{j} {n} }\right \|_{L^{2}(\Omega )}\). We have for integer j ≥ 1,

$$\displaystyle{\beta _{j,n}^{2} = \mathbb{E}\left [\Delta W_{\frac{ j} {n} }^{2}\right ] + \mathbb{E}\left [\Delta Y _{\frac{ j} {n} }^{2}\right ] + 2\mathbb{E}\left [\Delta W_{\frac{ j} {n} }\Delta Y _{\frac{j} {n} }\right ] =\kappa n^{-\alpha }(1 +\theta _{ j,n}),}$$

where

$$\displaystyle{\kappa n^{-\alpha }\theta _{ j,n} = \mathbb{E}\left [\Delta Y _{\frac{j} {n} }^{2}\right ] + 2\mathbb{E}\left [\Delta W_{\frac{ j} {n} }\Delta Y _{\frac{j} {n} }\right ].}$$

It follows from Lemmas 3 and 4 that \(\vert \theta _{j,n}\vert \leq Cj^{\alpha -2}\) for some constant C > 0. Notice that, in the definition of \(F_{n}(t)\), it suffices to consider the sum over \(j \geq n_{0}\) for a fixed integer \(n_{0}\), since the finitely many omitted terms are bounded in \(L^{2}(\Omega )\) and carry the factor \(n^{-\frac{1} {2} }\), so they do not affect the limit. Then, we can choose \(n_{0}\) in such a way that \(Cn_{0}^{\alpha -2} \leq \frac{1} {2}\), which implies

$$\displaystyle{ \beta _{j,n}^{2} \geq \kappa n^{-\alpha }(1 - Cj^{\alpha -2}) }$$
(4.3)

for any \(j \geq n_{0}\).

By (2.4),

$$\displaystyle{\beta _{j,n}^{q}H_{ q}\left (\beta _{j,n}^{-1}\Delta X_{\frac{ j} {n} }\right ) = I_{q}^{X}\left (\left (\mathbf{1}_{ [ \frac{j} {n},\frac{j+1} {n} )}\right )^{\otimes q}\right ),}$$

where \(I_{q}^{X}\) denotes the multiple stochastic integral of order q with respect to the process X. Thus, we can write

$$\displaystyle{F_{n}(t) = n^{-\frac{1} {2} }\sum _{j=n_{ 0}}^{\lfloor nt\rfloor -1}\beta _{ j,n}^{-q}I_{ q}^{X}\left (\mathbf{1}_{\left [ \frac{j} {n},\frac{j+1} {n} \right )}^{\otimes q}\right ).}$$

The decomposition X = W + Y leads to

$$\displaystyle{I_{q}^{X}\left (\mathbf{1}_{\left [ \frac{j} {n},\frac{j+1} {n} \right )}^{\otimes q}\right ) =\sum _{ r=0}^{q}\binom{q}{r}I_{ r}^{W}\left (\mathbf{1}_{\left [ \frac{j} {n},\frac{j+1} {n} \right )}^{\otimes r}\right )I_{ q-r}^{Y }\left (\mathbf{1}_{\left [ \frac{j} {n},\frac{j+1} {n} \right )}^{\otimes q-r}\right ).}$$

We are going to show that the terms with \(r = 0,\ldots,q - 1\) do not contribute to the limit. Define

$$\displaystyle{G_{n}(t) = n^{-\frac{1} {2} }\sum _{j=n_{ 0}}^{\lfloor nt\rfloor -1}\beta _{ j,n}^{-q}I_{ q}^{W}\left (\mathbf{1}_{\left [ \frac{j} {n},\frac{j+1} {n} \right )}^{\otimes q}\right )}$$

and

$$\displaystyle{\widetilde{G}_{n}(t) = n^{-\frac{1} {2} }\sum _{j=n_{ 0}}^{\lfloor nt\rfloor -1}\left \|\Delta W_{ j/n}\right \|_{L^{2}(\Omega )}^{-q}I_{ q}^{W}\left (\mathbf{1}_{\left [ \frac{j} {n},\frac{j+1} {n} \right )}^{\otimes q}\right ).}$$

Consider the decomposition

$$\displaystyle{F_{n}(t) = (F_{n}(t) - G_{n}(t)) + (G_{n}(t) -\widetilde{ G}_{n}(t)) +\widetilde{ G}_{n}(t).}$$

Notice that all these processes vanish at t = 0. We claim that for any \(0 \leq s <t \leq T\), we have

$$\displaystyle{ \mathbb{E}[\vert F_{n}(t) - G_{n}(t) - (F_{n}(s) - G_{n}(s))\vert ^{2}] \leq \frac{(\lfloor nt\rfloor -\lfloor ns\rfloor )^{\delta }} {n} }$$
(4.4)

and

$$\displaystyle{ \mathbb{E}[\vert G_{n}(t) -\widetilde{ G}_{n}(t) - (G_{n}(s) -\widetilde{ G}_{n}(s))\vert ^{2}] \leq \frac{(\lfloor nt\rfloor -\lfloor ns\rfloor )^{\delta }} {n}, }$$
(4.5)

where 0 ≤ δ < 1. By Lemma 3, \(\left \|\Delta W_{j/n}\right \|_{L^{2}(\Omega )}^{2} =\kappa n^{-\alpha }\) for every j. As a consequence, using (2.4) we can also write

$$\displaystyle{\widetilde{G}_{n}(t) = n^{-\frac{1} {2} }\sum _{j=n_{ 0}}^{\lfloor nt\rfloor -1}H_{ q}\left (\kappa ^{-\frac{1} {2} }n^{ \frac{\alpha }{2} }\Delta W_{\frac{j} {n} }\right ).}$$

Since \(\kappa ^{-\frac{1} {2} }W\) is a fractional Brownian motion, the Breuer-Major theorem implies that the process \(\widetilde{G}\) converges in D([0, T]) to a scaled Brownian motion {σB t , t ∈ [0, T]}, where σ 2 is given in (1.11). By the fact that all the p-norms are equivalent on a fixed Wiener chaos, the estimates (4.4) and (4.5) lead to

$$\displaystyle{ \mathbb{E}[\vert F_{n}(t) - G_{n}(t) - (F_{n}(s) - G_{n}(s))\vert ^{2p}] \leq \frac{(\lfloor nt\rfloor -\lfloor ns\rfloor )^{\delta p}} {n^{p}} }$$
(4.6)

and

$$\displaystyle{ \mathbb{E}[\vert G_{n}(t) -\widetilde{ G}_{n}(t) - (G_{n}(s) -\widetilde{ G}_{n}(s))\vert ^{2p}] \leq \frac{(\lfloor nt\rfloor -\lfloor ns\rfloor )^{\delta p}} {n^{p}}, }$$
(4.7)

for all p ≥ 1. Letting n tend to infinity, we deduce from (4.6) and (4.7) that for any t ∈ [0, T] the sequences \(F_{n}(t) - G_{n}(t)\) and \(G_{n}(t) -\widetilde{ G}_{n}(t)\) converge to zero in \(L^{2p}(\Omega )\) for any p ≥ 1. This implies that the finite dimensional distributions of the processes \(F_{n} - G_{n}\) and \(G_{n} -\widetilde{ G}_{n}\) converge to zero in law. Moreover, by Billingsley [2, Theorem 13.5], (4.6) and (4.7) also imply that the sequences \(F_{n} - G_{n}\) and \(G_{n} -\widetilde{ G}_{n}\) are tight in D([0, T]). Therefore, these sequences converge to zero in the topology of D([0, T]). Together with the convergence of \(\widetilde{G}_{n}\) to the scaled Brownian motion \(\{\sigma B_{t},\;t \in [0,T]\}\), this yields the conclusion of Theorem 2, so it only remains to prove the claims (4.4) and (4.5).

Proof of (4.4)

We can write

$$\displaystyle{\mathbb{E}\left [\vert F_{n}(t) - G_{n}(t) - (F_{n}(s) - G_{n}(s))\vert ^{2}\right ] \leq C\sum _{ r=0}^{q-1}\mathbb{E}[\Phi _{ r,n}^{2}],}$$

where

$$\displaystyle{\Phi _{r,n} = n^{-\frac{1} {2} }\sum _{j=\lfloor ns\rfloor \vee n_{ 0}}^{\lfloor nt\rfloor -1}\beta _{ j,n}^{-q}I_{ r}^{W}\left (\mathbf{1}_{\left [ \frac{j} {n},\frac{j+1} {n} \right )}^{\otimes r}\right )I_{ q-r}^{Y }\left (\mathbf{1}_{\left [ \frac{j} {n},\frac{j+1} {n} \right )}^{\otimes q-r}\right ).}$$

We have, using (4.3),

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}[\Phi _{r,n}^{2}] \leq n^{-1+q\alpha } {}\\ & & \quad \times \sum _{j,k=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}\left \vert \mathbb{E}\left [I_{ r}^{W}\left (\mathbf{1}_{\left [ \frac{j} {n},\frac{j+1} {n} \right )}^{\otimes r}\right )I_{ q-r}^{Y }\left (\mathbf{1}_{\left [ \frac{j} {n},\frac{j+1} {n} \right )}^{\otimes q-r}\right )I_{ r}^{W}\left (\mathbf{1}_{\left [ \frac{k} {n},\frac{k+1} {n} \right )}^{\otimes r}\right )I_{ q-r}^{Y }\left (\mathbf{1}_{\left [ \frac{k} {n},\frac{k+1} {n} \right )}^{\otimes q-r}\right )\right ]\right \vert.{}\\ \end{array}$$

Using a diagram method for the expectation of four stochastic integrals (see [9]), we find that, for any j, k, the above expectation consists of a sum of terms of the form

$$\displaystyle{\left (\mathbb{E}\left [\Delta W_{\frac{j} {n} }\Delta W_{\frac{k} {n} }\right ]\right )^{a_{1} }\left (\mathbb{E}\left [\Delta Y _{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]\right )^{a_{2} }\left (\mathbb{E}\left [\Delta W_{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]\right )^{a_{3} }\left (\mathbb{E}\left [\Delta Y _{\frac{j} {n} }\Delta W_{\frac{k} {n} }\right ]\right )^{a_{4} },}$$

where the \(a_{i}\) are nonnegative integers such that \(a_{1} + a_{2} + a_{3} + a_{4} = q\), \(a_{1} \leq r \leq q - 1\), and \(a_{2} \leq q - r\). First, consider the case with \(a_{3} = a_{4} = 0\), so that we have the sum

$$\displaystyle{n^{-1+q\alpha }\sum _{ j,k=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}\left (\mathbb{E}\left [\Delta W_{\frac{ j} {n} }\Delta W_{\frac{k} {n} }\right ]\right )^{a_{1} }\left (\mathbb{E}\left [\Delta Y _{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]\right )^{q-a_{1} },}$$

where \(0 \leq a_{1} \leq q - 1\). Applying Lemma 3, we can control each of the terms in the above sum by

$$\displaystyle{n^{-q\alpha }(\vert \,j - k + 1\vert ^{\alpha }- 2\vert \,j - k\vert ^{\alpha } + \vert \,j - k - 1\vert ^{\alpha })^{a_{1} }(\,j + k)^{(q-a_{1})(\alpha -2)},}$$

which gives

$$\displaystyle\begin{array}{rcl} & & n^{-1+q\alpha }\sum _{ j,k=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}\left \vert \mathbb{E}\left [\Delta W_{\frac{ j} {n} }\Delta W_{\frac{k} {n} }\right ]\right \vert ^{a_{1} }\left \vert \mathbb{E}\left [\Delta Y _{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]\right \vert ^{q-a_{1} } \\ & & \qquad \leq Cn^{-1}\left (\sum _{ j=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}j^{\alpha -2} +\sum _{ j,k=\lfloor ns\rfloor \vee n_{0},j\not =k}^{\lfloor nt\rfloor -1}\vert \,j - k\vert ^{(q-1)(\alpha -2)}(\,j + k)^{\alpha -2}\right ) \\ & & \qquad \leq Cn^{-1}\sum _{ j=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}\left (j^{\alpha -2} + j^{q(\alpha -2)+1}\right ) \\ & & \qquad \leq Cn^{-1}\left (\lfloor nt\rfloor -\lfloor ns\rfloor )^{(\alpha -1)\vee 0} + (\lfloor nt\rfloor -\lfloor ns\rfloor )^{[q(\alpha -2)+2]\vee 0}\right ). {}\end{array}$$
(4.8)

Next, we consider the case where \(a_{3} + a_{4} \geq 1\). By Lemma 3, we have that, up to a constant C,

$$\displaystyle{\left \vert \mathbb{E}\left [\Delta Y _{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]\right \vert \leq C\left \vert \mathbb{E}\left [\Delta W_{\frac{j} {n} }\Delta W_{\frac{k} {n} }\right ]\right \vert,}$$

so we may assume \(a_{2} = 0\), and have to handle the term

$$\displaystyle{ n^{-1+q\alpha }\sum _{ j,k=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}\left \vert \mathbb{E}\left [\Delta W_{\frac{ j} {n} }\Delta W_{\frac{k} {n} }\right ]\right \vert ^{q-a_{3}-a_{4} }\left \vert \mathbb{E}\left [\Delta W_{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]\right \vert ^{a_{3} }\left \vert \mathbb{E}\left [\Delta Y _{\frac{j} {n} }\Delta W_{\frac{k} {n} }\right ]\right \vert ^{a_{4} } }$$
(4.9)

for all allowable values of \(a_{3},a_{4}\) with \(a_{3} + a_{4} \geq 1\). Consider the decomposition

$$\displaystyle\begin{array}{rcl} & & n^{-1+q\alpha }\sum _{ j,k=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}\left \vert \mathbb{E}\left [\Delta W_{\frac{ j} {n} }\Delta W_{\frac{k} {n} }\right ]\right \vert ^{q-a_{3}-a_{4} }\left \vert \mathbb{E}\left [\Delta W_{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]\right \vert ^{a_{3} }\left \vert \mathbb{E}\left [\Delta Y _{\frac{j} {n} }\Delta W_{\frac{k} {n} }\right ]\right \vert ^{a_{4} } {}\\ & & \phantom{n^{-1+q\alpha }}\; = n^{q\alpha -1}\sum _{ j=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}\left \vert \mathbb{E}\left [\Delta W_{\frac{ j} {n} }^{2}\right ]\right \vert ^{q-a_{3}-a_{4} }\ \left \vert \mathbb{E}\left [\Delta W_{\frac{j} {n} }\Delta Y _{\frac{j} {n} }\right ]\right \vert ^{a_{3}+a_{4} } {}\\ & & \phantom{n^{-1+q\alpha }}\; + n^{q\alpha -1}\sum _{ j=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}\sum _{ k=\lfloor ns\rfloor \vee n_{0}}^{j-1}\left \vert \mathbb{E}\left [\Delta W_{\frac{ j} {n} }\Delta W_{\frac{k} {n} }\right ]\right \vert ^{q-a_{3}-a_{4} }\left \vert \mathbb{E}\left [\Delta W_{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]\right \vert ^{a_{3} }\left \vert \mathbb{E}\left [\Delta Y _{\frac{j} {n} }\Delta W_{\frac{k} {n} }\right ]\right \vert ^{a_{4} } {}\\ & & \phantom{n^{-1+q\alpha }}\; + n^{q\alpha -1}\sum _{ k=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}\sum _{ j=\lfloor ns\rfloor \vee n_{0}}^{k-1}\left \vert \mathbb{E}\left [\Delta W_{\frac{ j} {n} }\Delta W_{\frac{k} {n} }\right ]\right \vert ^{q-a_{3}-a_{4} }\left \vert \mathbb{E}\left [\Delta W_{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]\right \vert ^{a_{3} }\left \vert \mathbb{E}\left [\Delta Y _{\frac{j} {n} }\Delta W_{\frac{k} {n} }\right ]\right \vert ^{a_{4} }.{}\\ \end{array}$$

We have, by Lemmas 3 and 4,

$$\displaystyle\begin{array}{rcl} & & n^{-1+q\alpha }\sum _{ j,k=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}\left \vert \mathbb{E}\left [\Delta W_{\frac{ j} {n} }\Delta W_{\frac{k} {n} }\right ]\right \vert ^{q-a_{3}-a_{4} }\left \vert \mathbb{E}\left [\Delta W_{\frac{j} {n} }\Delta Y _{\frac{k} {n} }\right ]\right \vert ^{a_{3} }\left \vert \mathbb{E}\left [\Delta Y _{\frac{j} {n} }\Delta W_{\frac{k} {n} }\right ]\right \vert ^{a_{4} } \\ & & \phantom{n^{-1+q\alpha }}\quad \leq Cn^{-1}\sum _{ j=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}j^{(a_{3}+a_{4})(\alpha -2)} \\ & & \phantom{n^{-1+q\alpha }}\qquad + Cn^{-1}\sum _{ j=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}j^{a_{3}(2H-2)-a_{4}\gamma }\sum _{k=\lfloor ns\rfloor \vee n_{0}}^{j-1}k^{-a_{3}\gamma +a_{4}(2H-2)}\left \vert j - k\right \vert ^{(q-a_{3}-a_{4})(\alpha -2)} \\ & & \phantom{n^{-1+q\alpha }}\qquad + Cn^{-1}\sum _{ k=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}k^{-a_{3}\gamma +a_{4}(2H-2)}\sum _{ j=\lfloor ns\rfloor \vee n_{0}}^{k-1}j^{a_{3}(2H-2)-a_{4}\gamma }\left \vert k - j\right \vert ^{(q-a_{3}-a_{4})(\alpha -2)} \\ & & \phantom{n^{-1+q\alpha }}\quad \leq Cn^{-1}\left ((\lfloor nt\rfloor -\lfloor ns\rfloor )^{[(a_{3}+a_{4})(\alpha -2)+1]\vee 0} + (\lfloor nt\rfloor -\lfloor ns\rfloor )^{[q(\alpha -2)+2]\vee 0}\right. \\ & & \phantom{n^{-1+q\alpha }}\qquad \left.+(\lfloor nt\rfloor -\lfloor ns\rfloor )^{[a_{3}(2H-2)-a_{4}\gamma +1]\vee 0} + (\lfloor nt\rfloor -\lfloor ns\rfloor )^{[a_{4}(2H-2)-a_{3}\gamma +1]\vee 0}\right ). {}\end{array}$$
(4.10)

Then (4.8) and (4.10) imply (4.4) because \(\alpha <2 -\frac{1} {q}\).

Proof of (4.5)

We have

$$\displaystyle{G_{n}(t) -\widetilde{ G}_{n}(t) = n^{-\frac{1} {2} }\sum _{j=n_{ 0}}^{\lfloor nt\rfloor -1}\left (\beta _{ j,n}^{-q} -\left \|\Delta W_{\frac{ j} {n} }\right \|_{L^{2}(\Omega )}^{-q}\right )I_{ q}^{W}\left (\mathbf{1}_{ [ \frac{j} {n},\frac{j+1} {n} )}^{\otimes q}\right )}$$

and we can write, using (4.3), for any \(j \geq n_{0}\),

$$\displaystyle{\left \vert \beta _{j,n}^{-q} -\left \|\Delta W_{\frac{ j} {n} }\right \|_{L^{2}(\Omega )}^{-q}\right \vert = (\kappa ^{-1}n^{\alpha })^{\frac{q} {2} }\left \vert (1 +\theta _{j,n})^{-\frac{q} {2} } - 1\right \vert \leq C\left (\kappa ^{-1}n^{\alpha }j^{\alpha -2}\right )^{\frac{q} {2} }.}$$

This leads to the estimate

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}\left [\left \vert G_{n}(t) -\widetilde{ G}_{n}(t) - (G_{n}(s) -\widetilde{ G}_{n}(s))\right \vert ^{2}\right ] \leq Cn^{-1} {}\\ & & \qquad \times \left (\sum _{j=\lfloor ns\rfloor \vee n_{0}}^{\lfloor nt\rfloor -1}j^{\alpha -2} +\sum _{ j,k=\lfloor ns\rfloor \vee n_{0},j\not =k}^{\lfloor nt\rfloor -1}\vert \,j - k\vert ^{q(\alpha -2)}\right ) {}\\ & & \leq Cn^{-1}\left (\lfloor nt\rfloor -\lfloor ns\rfloor )^{(\alpha -1)\vee 0} + (\lfloor nt\rfloor -\lfloor ns\rfloor )^{[q(\alpha -2)+2]\vee 0}\right ), {}\\ \end{array}$$

which implies (4.5).

This concludes the proof of Theorem 2.