1 Introduction

Brownian semi-stationary (\(\mathcal{B}\mathcal{S}\mathcal{S}\)) processes were originally introduced in [2] for modeling turbulent flows in physics. This class consists of processes \((X_{t})_{t\in \mathbb{R}}\) of the form

$$X_{t} = \mu + \int \limits _{-\infty }^{t}g(t - s)\sigma _{ s}W(ds) + \int \limits _{-\infty }^{t}q(t - s)a_{ s}ds,$$
(1)

where μ is a constant, \(g,q : \mathbb{R}_{>0} \rightarrow \mathbb{R}\) are memory functions, \((\sigma _{s})_{s\in \mathbb{R}}\) is a càdlàg intermittency process, \((a_{s})_{s\in \mathbb{R}}\) is a càdlàg drift process and W is the Wiener measure. When \((\sigma _{s})_{s\in \mathbb{R}}\) and \((a_{s})_{s\in \mathbb{R}}\) are stationary, the process \((X_{t})_{t\in \mathbb{R}}\) is stationary as well, which explains the name Brownian semi-stationary processes. In the following we concentrate on \(\mathcal{B}\mathcal{S}\mathcal{S}\) models without the drift part (i.e. a ≡ 0), but we come back to the original process (1) in Example 1.

The path properties of the process \((X_{t})_{t\in \mathbb{R}}\) crucially depend on the behaviour of the weight function g near 0. When \(g(x) \simeq {x}^{\beta }\) (here \(g(x) \simeq h(x)\) means that g(x) ∕ h(x) is slowly varying at 0) with \(\beta \in (-\frac{1} {2},0) \cup (0, \frac{1} {2})\), X has r-Hölder continuous paths for any \(r < \beta + \frac{1} {2}\) and, more importantly, X is not a semimartingale, because g′ is not square integrable in the neighborhood of 0 (see e.g. [10] for a detailed study of conditions under which Brownian moving average processes are semimartingales). In the following, whenever \(g(x) \simeq {x}^{\beta }\), the index β is referred to as the smoothness parameter of X.

In practice the stochastic process X is observed at high frequency, i.e. the data points \(X_{i\Delta _{n}}\), \(i = 0,\ldots ,[t/\Delta _{n}]\), are given, and we are in the framework of infill asymptotics, that is \(\Delta _{n} \rightarrow 0\). For modeling and for practical applications in physics it is extremely important to infer the integrated powers of intermittency, i.e.

$$\int \limits _{0}^{t}\vert \sigma _{ s}{\vert }^{p}ds,\qquad p > 0,$$

and to estimate the smoothness parameter β. A very powerful instrument for analyzing those estimation problems is the normalized multipower variation that is defined as

$$MPV (X,p_{1},\ldots ,p_{k})_{t}^{n} = \Delta _{ n}\tau _{n}^{-{p}^{+} } \sum \limits _{i=1}^{[t/\Delta _{n}]-k+1}\vert \Delta _{ i}^{n}X{\vert }^{p_{1} }\cdots \vert \Delta _{i+k-1}^{n}X{\vert }^{p_{k} },$$
(2)

where \(\Delta _{i}^{n}X\,=\,X_{i\Delta _{n}} - X_{(i-1)\Delta _{n}}\), \(p_{1},\ldots ,p_{k} \geq 0\), \({p}^{+}\,=\,\sum \limits _{l=1}^{k}p_{l}\), and τ n is a certain normalizing sequence which depends on the weight function g and n (to be defined later). The concept of multipower variation was originally introduced in [3] for the semimartingale setting. Power and multipower variations of semimartingales have been intensively studied in numerous papers; see e.g. [36, 13, 15, 17, 22] for theory and applications.
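For intuition, the statistic (2) is straightforward to evaluate from a discrete sample. The sketch below (our own illustration in Python/NumPy; the function name is ours) takes the normalization τ_n as an argument. As a sanity check we use a Brownian motion path, for which \(\tau _{n} = \sqrt{\Delta _{n}}\) is the correct normalization; the quadratic case then reduces to the realised variance.

```python
import numpy as np

def mpv(x, dt, powers, tau):
    """Normalized multipower variation (2) of a discretely observed path x."""
    dx = np.abs(np.diff(x))              # |Delta_i^n X|
    k, p_plus = len(powers), sum(powers)
    terms = np.ones(len(dx) - k + 1)
    for l, p in enumerate(powers):       # product of k shifted power factors
        terms *= dx[l:len(dx) - k + 1 + l] ** p
    return dt * tau ** (-p_plus) * terms.sum()

# Sanity check on Brownian motion (g = 1_{(0,t)}, sigma = 1), tau_n = sqrt(dt):
rng = np.random.default_rng(1)
dt = 1e-4
x = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(10_000))))
rv = mpv(x, dt, (2,), np.sqrt(dt))       # realised variance, close to t = 1
bp = mpv(x, dt, (1, 1), np.sqrt(dt))     # bipower variation, close to 2 / pi
```

Here `rv` approximates the quadratic variation t of the Brownian path, while `bp` approximates \((\mathbb{E}\vert U\vert )^{2}t = (2/\pi )t\) with U ∼ N(0,1).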

However, as mentioned above, \(\mathcal{B}\mathcal{S}\mathcal{S}\) processes of the form (1) typically do not belong to the class of semimartingales. Thus, different probabilistic tools are required to determine the asymptotic behaviour of the multipower variation \(MPV (X,p_{1},\ldots ,p_{k})_{t}^{n}\) of \(\mathcal{B}\mathcal{S}\mathcal{S}\) processes. In [8] we applied techniques from Malliavin calculus, which was originally introduced in [18, 19] and [20], to show the consistency, i.e.

$$MPV (X,p_{1},\ldots ,p_{k})_{t}^{n} - \rho _{ p_{1},\ldots ,p_{k}}^{n} \int \limits _{0}^{t}\vert \sigma _{ s}{\vert }^{{p}^{+} }ds{ \text{ u.c.p.} \atop \rightarrow } 0,$$

where \(\rho _{p_{1},\ldots ,p_{k}}^{n}\) is a certain constant and \({Y }^{n}{ \text{ u.c.p.} \atop \rightarrow } Y\) stands for \(\sup _{t\in [0,T]}\vert Y _{t}^{n} - Y _{t}\vert { \mathbb{P} \atop \rightarrow } 0\) for all T > 0. This holds for all smoothness parameters \(\beta \in (-\frac{1} {2},0) \cup (0, \frac{1} {2})\), and we proved the associated (stable) central limit theorem for \(\beta \in (-\frac{1} {2},0)\).

Unfortunately, the restriction to \(\beta \in (-\frac{1} {2},0)\) in the central limit theorem is not satisfactory for applications as in turbulence we usually have \(\beta \in (0, \frac{1} {2})\) at ultra high frequencies. The theoretical reason for this restriction is two-fold: (i) long memory effects which lead to non-normal limits for \(\beta \in (\frac{1} {4}, \frac{1} {2})\) and more importantly (ii) a hidden drift in X which leads to an even stronger restriction \(\beta \in (-\frac{1} {2},0)\).

The main aim of this paper is to overcome both problems by considering multipower variations of higher order differences of \(\mathcal{B}\mathcal{S}\mathcal{S}\) processes. We will show the law of large numbers and prove the associated central limit theorem for all values of the smoothness parameter \(\beta \in (-\frac{1} {2},0) \cup (0, \frac{1} {2})\). Furthermore, we discuss possible extensions to other types of processes. We apply the asymptotic results to estimate the smoothness parameter β of a \(\mathcal{B}\mathcal{S}\mathcal{S}\) process X. Let us mention that the idea of using higher order differences to diminish the long memory effects is not new; we refer to [12, 16] for theoretical results in the Gaussian framework. However, the derivation of the corresponding theory for \(\mathcal{B}\mathcal{S}\mathcal{S}\) processes is more complicated due to their more involved structure.

This paper is organized as follows: in Sect. 2 we introduce our setting and present the main assumptions on the weight function g and the intermittency σ. Section 3 is devoted to limit theorems for the multipower variation of the second order differences of \(\mathcal{B}\mathcal{S}\mathcal{S}\) processes. In Sect. 4 we apply our asymptotic results to derive three estimators (the realised variation ratio, the modified realised variation ratio and the change-of-frequency estimator) for the smoothness parameter. Finally, all proofs are collected in Sect. 5.

2 The Setting and the Main Assumptions

We consider a filtered probability space \((\Omega ,\mathcal{F}, \mathbb{F} = (\mathcal{F}_{t})_{t\in \mathbb{R}}, \mathbb{P})\) on which we define a \(\mathcal{B}\mathcal{S}\mathcal{S}\) process \(X = (X_{t})_{t\in \mathbb{R}}\) without a drift as

$$X_{t} = \mu + \int \limits _{-\infty }^{t}g(t - s)\sigma _{ s}W(ds),$$
(3)

where W is an \(\mathbb{F}\)-adapted Wiener measure, σ is an \(\mathbb{F}\)-adapted càdlàg process and \(g \in {\mathbb{L}}^{2}(\mathbb{R}_{>0})\). We assume that

$$\int \limits _{-\infty }^{t}{g}^{2}(t - s)\sigma _{ s}^{2}ds < \infty \quad \text{ a.s.}$$

to ensure that \(X_{t}\) is finite almost surely. We introduce a Gaussian process \(G\,=\,(G_{t})_{t\in \mathbb{R}}\), associated to X, as

$$G_{t} = \int \limits _{-\infty }^{t}g(t - s)W(ds).$$
(4)

Notice that G is a stationary process with the autocorrelation function

$$r(t) = \text{ corr}(G_{s},G_{s+t}) = \frac{\int \limits _{0}^{\infty }g(u)g(u + t)du} {\vert \vert g\vert \vert _{{\mathbb{L}}^{2}}^{2}}.$$
(5)

We also define the variance function \(\overline{R}\) of the increments of the process G as

$$\overline{R}(t) = \mathbb{E}(\vert G_{s+t} - G_{s}{\vert }^{2}) = 2\vert \vert g\vert \vert _{{ \mathbb{L}}^{2}}^{2}(1 - r(t)).$$
(6)

Now, we assume that the process X is observed at time points \(t_{i}\,=\,i\Delta _{n}\), \(i = 0,\ldots ,[t/\Delta _{n}]\), with \(\Delta _{n} \rightarrow 0\), and define the second order differences of X by

$$\lozenge _{i}^{n}\!\!X = X_{ i\Delta _{n}} - 2X_{(i-1)\Delta _{n}} + X_{(i-2)\Delta _{n}}.$$
(7)

Our main object of interest is the multipower variation of the second order differences of the \(\mathcal{B}\mathcal{S}\mathcal{S}\) process X, i.e.

$${ MPV }^{\lozenge }(X,p_{ 1},\ldots ,p_{k})_{t}^{n} = \Delta _{ n}{(\tau _{n}^{\lozenge })}^{-{p}^{+} } \sum \limits _{i=2}^{[t/\Delta _{n}]-2k+2}\ \prod \limits _{l=0}^{k-1}\vert \lozenge _{ i+2l}^{n}\!X{\vert }^{p_{l+1} },$$
(8)

where \({(\tau _{n}^{\lozenge })}^{2} = \mathbb{E}(\vert \lozenge _{i}^{n}G{\vert }^{2})\) and \({p}^{+} = \sum \limits _{l=1}^{k}p_{l}\). To determine the asymptotic behaviour of the functional \({MPV }^{\lozenge }{(X,p_{1},\ldots ,p_{k})}^{n}\) we require a set of assumptions on the memory function g and the intermittency process σ. Below, the functions \(L_{\overline{R}},L_{{\overline{R}}^{(4)}},L_{g},L_{{g}^{(2)}} : \mathbb{R}_{>0} \rightarrow \mathbb{R}\) are assumed to be continuous and slowly varying at 0, f (k) denotes the k-th derivative of a function f and β denotes a number in \((-\frac{1} {2},0) \cup (0, \frac{1} {2})\).
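Before turning to the assumptions, note that (8) is just as easy to evaluate from data as (2); second order differences are iterated first differences. The following sketch (our own illustration; the function name is ours, and the user supplies \(\tau _{n}^{\lozenge }\)) checks the quadratic case on a Brownian motion path, for which \(\mathbb{E}(\vert \lozenge _{i}^{n}W{\vert }^{2}) = 2\Delta _{n}\).

```python
import numpy as np

def mpv_diamond(x, dt, powers, tau):
    """Normalized multipower variation (8) based on second order differences."""
    d2 = np.abs(np.diff(x, n=2))         # |diamond_i^n X|, i = 2, ..., n
    k, p_plus = len(powers), sum(powers)
    m = len(d2) - 2 * (k - 1)            # number of summands in (8)
    terms = np.ones(m)
    for l, p in enumerate(powers):       # blocks are shifted by 2l, as in (8)
        terms *= d2[2 * l:2 * l + m] ** p
    return dt * tau ** (-p_plus) * terms.sum()

# Brownian motion toy example: E|diamond_i^n W|^2 = 2 dt, so with
# tau = sqrt(2 dt) the quadratic statistic should be close to t = 1.
rng = np.random.default_rng(2)
dt = 1e-4
w = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(10_000))))
out = mpv_diamond(w, dt, (2,), np.sqrt(2 * dt))
```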

Assumption 1.

It holds that

  1. (i)

    \(g(x) = {x}^{\beta }L_{g}(x)\).

  2. (ii)

    \({g}^{(2)}(x) = {x}^{\beta -2}L_{{g}^{(2)}}(x)\) and, for any \(\epsilon > 0\), we have \({g}^{(2)} \in {\mathbb{L}}^{2}((\epsilon ,\infty ))\). Furthermore, \(\vert {g}^{(2)}\vert \) is non-increasing on the interval \((a,\infty )\) for some a > 0.

  3. (iii)

    For any t > 0

    $$F_{t} = \int \limits _{1}^{\infty }\vert {g}^{(2)}(s){\vert }^{2}\sigma _{ t-s}^{2}ds < \infty.$$
    (9)

Assumption 2.

For the smoothness parameter β from Assumption 1 it holds that

  1. (i)

    \(\overline{R}(x) = {x}^{2\beta +1}L_{\overline{R}}(x)\).

  2. (ii)

    \({\overline{R}}^{(4)}(x) = {x}^{2\beta -3}L_{{\overline{R}}^{(4)}}(x)\).

  3. (iii)

    There exists a b ∈ (0, 1) such that

    $$\limsup _{x\rightarrow 0}\sup _{y\in [x,{x}^{b}]}\left \vert \frac{L_{{\overline{R}}^{(4)}}(y)} {L_{\overline{R}}(x)} \right \vert < \infty.$$

Assumption 3-γ.

For any p > 0, it holds that

$$\mathbb{E}(\vert \sigma _{t} - \sigma _{s}{\vert }^{p}) \leq C_{ p}\vert t - s{\vert }^{\gamma p}$$
(10)

for some γ > 0 and C p  > 0.

Some remarks are in order to explain the rather long list of conditions.

  • The memory function g: We remark that \(g(x) \simeq {x}^{\beta }\) implies \({g}^{(2)}(x) \simeq {x}^{\beta -2}\) under rather weak assumptions on g (due to the Monotone Density Theorem; see e.g. [11, p. 38]). Furthermore, Assumption 1(ii) and Karamata’s Theorem (see again [11]) imply that

    $$\int \limits _{\epsilon }^{1}\vert g(x + 2\Delta _{ n}) - 2g(x + \Delta _{n}) + g(x){\vert }^{2}dx \simeq {\epsilon }^{2\beta -3}\Delta _{ n}^{4}$$
    (11)

    for any \(\epsilon \in [\Delta _{n},1)\). This fact will play an important role in the following discussion. Finally, let us note that Assumptions 1(i)–(ii) and 2 are satisfied for the parametric class

    $$g(x) = {x}^{\beta }\exp (-\lambda x),$$

    where \(\beta \in (-\frac{1} {2},0) \cup (0, \frac{1} {2})\) and λ > 0, which is used to model turbulent flows in physics (see [2]). This class constitutes the most important example in this paper. □ 

  • The central decomposition and the concentration measure: Observe the decomposition

    $$\begin{array}{rcl} \lozenge _{i}^{n}X& =& \int \limits _{(i-1)\Delta _{n}}^{i\Delta _{n} }g(i\Delta _{n} - s)\sigma _{s}W(ds) \\ & & \quad + \int \limits _{(i-2)\Delta _{n}}^{(i-1)\Delta _{n} }\left (g(i\Delta _{n} - s) - 2g((i - 1)\Delta _{n} - s)\right )\sigma _{s}W(ds) \\ & & \quad + \int \limits _{-\infty }^{(i-2)\Delta _{n} }\!\left (g(i\Delta _{n} - s) - 2g((i - 1)\Delta _{n} - s)\! +\! g((i - 2)\Delta _{n} - s)\right )\sigma _{s}W(ds), \end{array}$$
    (12)

    and the same type of decomposition holds for \(\lozenge _{i}^{n}G\). We deduce that

    $$\begin{array}{rcl}{ (\tau _{n}^{\lozenge })}^{2}& =& \int \limits _{0}^{\Delta _{n} }{g}^{2}(x)dx + \int \limits _{0}^{\Delta _{n} }{\left (g(x + \Delta _{n}) - 2g(x)\right )}^{2}dx \\ & +& \int \limits _{0}^{\infty }{\left (g(x + 2\Delta _{ n}) - 2g(x + \Delta _{n}) + g(x)\right )}^{2}dx.\end{array}$$

    One of the most essential steps in proving the asymptotic results for the functionals \({MPV }^{\lozenge }{(X,p_{1},\ldots ,p_{k})}^{n}\) is the approximation \(\lozenge _{i}^{n}X \approx \sigma _{(i-2)\Delta _{n}}\lozenge _{i}^{n}G\). The justification of this approximation is not trivial: while the first two summands in the decomposition (12) depend only on the intermittency σ around \((i - 2)\Delta _{n}\), the third summand involves the whole path \((\sigma _{s})_{s\leq (i-2)\Delta _{n}}\). We need to guarantee that the influence of the intermittency path away from \((i - 2)\Delta _{n}\) on the third summand of (12) is asymptotically negligible. For this reason we introduce the measure

    $$\pi _{n}^{\lozenge }(A) = \frac{\int \limits _{A}{\left (g(x + 2\Delta _{n}) - 2g(x + \Delta _{n}) + g(x)\right )}^{2}dx} {{(\tau _{n}^{\lozenge })}^{2}} \,<\,1,\qquad A \in \mathcal{B}(\mathbb{R}_{>0}),$$
    (13)

    and define \(\overline{\pi }_{n}^{\lozenge }(x) = \pi _{n}^{\lozenge }((x,\infty ))\). To justify the negligibility of the influence of the intermittency path outside of \((i - 2)\Delta _{n}\) we need to ensure that

    $$\overline{\pi }_{n}^{\lozenge }(\epsilon ) \rightarrow 0$$

    for all \(\epsilon > 0\). Indeed, this convergence follows from Assumptions 1(i)–(ii) (due to (11)). □ 

  • The correlation structure: By the stationarity of the process G we deduce that

    $$\begin{array}{rcl} r_{n}^{\lozenge }(j)& =& \text{ corr}(\lozenge _{ i}^{n}G,\lozenge _{ i+j}^{n}G) \\ & & = \frac{-\overline{R}((j + 2)\Delta _{n}) + 4\overline{R}((j + 1)\Delta _{n}) - 6\overline{R}(j\Delta _{n}) + 4\overline{R}(\vert j - 1\vert \Delta _{n}) -\overline{R}(\vert j - 2\vert \Delta _{n})} {2{(\tau _{n}^{\lozenge })}^{2}}. \end{array}$$
    (14)

    Since \({(\tau _{n}^{\lozenge })}^{2} = 4\overline{R}(\Delta _{n}) -\overline{R}(2\Delta _{n})\) we obtain by Assumption 2(i) the convergence

    $$\begin{array}{rl} r_{n}^{\lozenge }(j)& \rightarrow {\rho }^{\lozenge }(j) \\ & = \frac{-{(j + 2)}^{1+2\beta } + 4{(j + 1)}^{1+2\beta } - 6{j}^{1+2\beta } + 4\vert j - 1{\vert }^{1+2\beta } -\vert j - 2{\vert }^{1+2\beta }} {2\left (4 - {2}^{1+2\beta }\right )}. \end{array}$$
    (15)

    We remark that \({\rho }^{\lozenge }\) is the correlation function of the normalized second order fractional noise \(\left (\lozenge _{i}^{n}{B}^{H}/\sqrt{\text{ var} (\lozenge _{i }^{n }{B}^{H } )}\right )_{i\geq 2}\), where B H is a fractional Brownian motion with Hurst parameter \(H = \beta + \frac{1} {2}\). Notice that

    $$\vert {\rho }^{\lozenge }(j)\vert \sim {j}^{2\beta -3},$$

    where we write \(a_{j} \sim b_{j}\) when \(a_{j}/b_{j}\) is bounded. In particular, it implies that \(\sum \limits _{j=1}^{\infty }\vert {\rho }^{\lozenge }(j)\vert < \infty \). This absolute summability has an important consequence: it leads to standard central limit theorems for the appropriately normalized version of the functional \({\mathit{MPV }}^{\lozenge }{(G,p_{1},\ldots ,p_{k})}^{n}\) for all \(\beta \in (-\frac{1} {2},0) \cup (0, \frac{1} {2})\). □ 

  • Sufficient conditions: Instead of considering Assumptions 1 and 2, we can alternatively state sufficient conditions on the correlation function \(r_{n}^{\lozenge }\) and the measure \(\pi _{n}^{\lozenge }\) directly, as was done for the case of first order differences in [8]. To ensure the consistency of \({\mathit{MPV }}^{\lozenge }(X,p_{1},\ldots ,p_{k})_{t}^{n}\) we require the following assumptions: there exists a sequence h(j) with

    $$\vert r_{n}^{\lozenge }(j)\vert \leq h(j),\qquad \Delta _{ n} \sum \limits _{j=1}^{[1/\Delta _{n}]}{h}^{2}(j) \rightarrow 0,$$
    (16)

    and \(\overline{\pi }_{n}^{\lozenge }(\epsilon ) \rightarrow 0\) for all \(\epsilon > 0\) (cf. condition (LLN) in [8]). For the proof of the associated central limit theorem we need some stronger conditions: \(r_{n}^{\lozenge }(j) \rightarrow {\rho }^{\lozenge }(j)\) for all j ≥ 1, there exists a sequence h(j) with

    $$\vert r_{n}^{\lozenge }(j)\vert \leq h(j),\qquad \sum \limits _{j=1}^{\infty }{h}^{2}(j) < \infty ,$$
    (17)

    Assumption 3-γ holds for some γ ∈ (0, 1] with \(\gamma (p \wedge 1) > \frac{1} {2}\), \(p =\max _{1\leq i\leq k}(p_{i})\), and there exists a constant λ > 1 ∕ (p ∧ 1) such that for all κ ∈ (0, 1) and \(\epsilon _{n} = \Delta _{n}^{\kappa }\) we have

    $$\overline{\pi }_{n}^{\lozenge }(\epsilon _{ n}) = O\left (\Delta _{n}^{\lambda (1-\kappa )}\right ).$$
    (18)

    (cf. condition (CLT) in [8]). In Sect. 5 we will show that Assumptions 1 and 2 imply the conditions (16)–(18). □ 
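As a numerical illustration of Assumption 2(i) (our own sketch; the parameter values β = 0.2, λ = 1 and the grid sizes are illustrative choices), one can check the scaling of \(\overline{R}\) for the gamma kernel \(g(x) = {x}^{\beta }\exp (-\lambda x)\) directly. We use the cancellation-free representation \(\overline{R}(t) = \int _{0}^{t}{g}^{2}(u)du + \int _{0}^{\infty }{(g(u + t) - g(u))}^{2}du\), which follows from (4), and verify that the log-log slope of \(\overline{R}\) near 0 is close to 2β + 1.

```python
import numpy as np

beta, lam = 0.2, 1.0
g = lambda x: x ** beta * np.exp(-lam * x)       # gamma kernel

def trap(y, x):                                  # simple trapezoid rule
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def R_bar(t):
    # E|G_{s+t} - G_s|^2 in a cancellation-free form; geometric grids
    # resolve the singularity of g at 0
    u1 = np.geomspace(1e-12, t, 4_000)
    u2 = np.geomspace(1e-12, 60.0, 60_000)
    return trap(g(u1) ** 2, u1) + trap((g(u2 + t) - g(u2)) ** 2, u2)

t = 1e-3
slope = np.log(R_bar(2 * t) / R_bar(t)) / np.log(2.0)
# slope should be close to 2 * beta + 1 = 1.4
```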

3 Limit Theorems

In this section we present the main results of the paper. Recall that the multipower variation process is defined in (8) as

$${ MPV }^{\lozenge }(X,p_{ 1},\ldots ,p_{k})_{t}^{n} = \Delta _{ n}{(\tau _{n}^{\lozenge })}^{-{p}^{+} } \sum \limits _{i=2}^{[t/\Delta _{n}]-2k+2}\ \prod \limits _{l=0}^{k-1}\vert \lozenge _{ i+2l}^{n}X{\vert }^{p_{l+1} }$$

with \({(\tau _{n}^{\lozenge })}^{2} = \mathbb{E}(\vert \lozenge _{i}^{n}G{\vert }^{2})\) and \({p}^{+} = \sum \limits _{l=1}^{k}p_{l}\). We introduce the quantity

$$\rho _{p_{1},\ldots ,p_{k}}^{n} = \mathbb{E}\left (\prod \limits _{l=0}^{k-1}{\left \vert \frac{\lozenge _{i+2l}^{n}G} {\tau _{n}^{\lozenge }} \right \vert }^{p_{l+1} }\right ).$$
(19)

Notice that in the case k = 1, \(p_{1} = p\), we have \(\rho _{p}^{n}\,=\,\mathbb{E}(\vert U{\vert }^{p})\) with U ∼ N(0, 1). We start with the consistency of the functional \({MPV }^{\lozenge }(X,p_{1},\ldots ,p_{k})_{t}^{n}\).
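For reference, the absolute moments of a standard normal are explicit: \(\mathbb{E}(\vert U{\vert }^{p}) = {2}^{p/2}\Gamma ((p + 1)/2)/\sqrt{\pi }\) for p > 0, a classical fact. A quick cross-check (our own sketch) against a Monte Carlo estimate:

```python
import math
import numpy as np

def abs_moment(p):
    """E|U|^p for U ~ N(0,1), i.e. rho_p^n in the case k = 1."""
    return 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)

# Closed-form values: p = 1 -> sqrt(2/pi), p = 2 -> 1, p = 4 -> 3.
u = np.random.default_rng(0).standard_normal(1_000_000)
mc = np.mean(np.abs(u) ** 1.5)           # Monte Carlo check for p = 3/2
```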

Theorem 1.

Let the Assumptions  1 and  2 hold. Then we obtain

$${ MPV }^{\lozenge }(X,p_{ 1},\ldots ,p_{k})_{t}^{n} - \rho _{ p_{1},\ldots ,p_{k}}^{n} \int \limits _{0}^{t}\vert \sigma _{ s}{\vert }^{{p}^{+} }ds{ \text{ u.c.p.} \atop \rightarrow } 0.$$
(20)

Proof.

See Sect. 5. □ 

As we have mentioned in the previous section, under Assumption 2(i) we deduce the convergence \(r_{n}^{\lozenge }(j) \rightarrow {\rho }^{\lozenge }(j)\) for all j ≥ 1 (see (15)). Consequently, it holds that

$$\rho _{p_{1},\ldots ,p_{k}}^{n} \rightarrow \rho _{ p_{1},\ldots ,p_{k}} = \mathbb{E}\left (\prod \limits _{l=0}^{k-1}{\left \vert \frac{\lozenge _{i+2l}^{n}{B}^{H}} {\sqrt{\text{ var} (\lozenge _{i+2l }^{n }{B}^{H } )}}\right \vert }^{p_{l+1} }\right ),$$
(21)

where B H is a fractional Brownian motion with Hurst parameter \(H = \beta + \frac{1} {2}\) (notice that the right-hand side of (21) does not depend on n, because B H is a self-similar process). Thus, we obtain the following result.

Lemma 1.

Let the Assumptions  1 and  2 hold. Then we obtain

$${ MPV }^{\lozenge }(X,p_{ 1},\ldots ,p_{k})_{t}^{n}{ \text{ u.c.p.} \atop \rightarrow } \rho _{p_{1},\ldots ,p_{k}} \int \limits _{0}^{t}\vert \sigma _{ s}{\vert }^{{p}^{+} }ds.$$
(22)

Next, we present a multivariate stable central limit theorem for the family \(({MPV }^{\lozenge }{(X,p_{1}^{j},\ldots ,p_{k}^{j})}^{n})_{1\leq j\leq d}\) of multipower variations. We say that a sequence of d-dimensional processes Z n converges stably in law to a d-dimensional process Z, where Z is defined on an extension \((\Omega {^\prime},\mathcal{F}{^\prime}, \mathbb{P}{^\prime})\) of the original probability space \((\Omega ,\mathcal{F}, \mathbb{P})\), in the space \(\mathcal{D}{([0,T])}^{d}\) equipped with the uniform topology (\({Z}^{n}{ st \atop \rightarrow } Z\)) if and only if

$$\lim _{n\rightarrow \infty }\mathbb{E}(f({Z}^{n})V ) = \mathbb{E}{^\prime}(f(Z)V )$$

for any bounded and continuous function \(f : \mathcal{D}{([0,T])}^{d} \rightarrow \mathbb{R}\) and any bounded \(\mathcal{F}\)-measurable random variable V. We refer to [1, 14] or [21] for a detailed study of stable convergence.

Theorem 2.

Let the Assumptions  1 , 2 and  3-γ be satisfied for some γ ∈ (0,1] with \(\gamma (p \wedge 1) > \frac{1} {2}\) , \(p =\max _{1\leq i\leq k,1\leq j\leq d}(p_{i}^{j})\) . Then we obtain the stable convergence

$$\Delta _{n}^{-1/2}\left ({\mathit{MPV }}^{\lozenge }(X,p_{ 1}^{j},\ldots ,p_{ k}^{j})_{ t}^{n} - \rho _{ p_{1}^{j},\ldots ,p_{k}^{j}}^{n} \int \limits _{0}^{t}\vert \sigma _{ s}{\vert }^{p_{j}^{+} }ds\right )_{1\leq j\leq d}{ st \atop \rightarrow } \int \limits _{0}^{t}A_{ s}^{1/2}dW{^\prime}_{ s},$$
(23)

where W′ is a d-dimensional Brownian motion that is defined on an extension of the original probability space \((\Omega ,\mathcal{F}, \mathbb{P})\) and is independent of \(\mathcal{F}\) , A is a d × d-dimensional process given by

$$A_{s}^{ij} = \mu _{ ij}\vert \sigma _{s}{\vert }^{p_{i}^{+}+p_{ j}^{+} },\qquad 1 \leq i,j \leq d,$$
(24)

and the d × d matrix \(\mu = (\mu _{ij})_{1\leq i,j\leq d}\) is defined as

$$\mu _{ij} =\lim _{n\rightarrow \infty }\Delta _{n}^{-1}\text{ cov}\left ({MPV }^{\lozenge }({B}^{H},p_{ 1}^{i},\ldots ,p_{ k}^{i})_{ 1}^{n},{MPV }^{\lozenge }({B}^{H},p_{ 1}^{j},\ldots ,p_{ k}^{j})_{ 1}^{n}\right )$$
(25)

with B H being a fractional Brownian motion with Hurst parameter \(H = \beta + \frac{1} {2}\) .

Proof.

See Sect. 5. □ 

We remark that the conditions of Theorem 2 imply that \(\max _{1\leq i\leq k,1\leq j\leq d}(p_{i}^{j}) > \frac{1} {2}\) since γ ∈ (0, 1].

Remark 1.

Notice that the limit process in (23) is mixed normal, because the Brownian motion W′ is independent of the process A. In fact, we can transform the convergence result of Theorem 2 into a standard central limit theorem due to the properties of stable convergence; we demonstrate this transformation in Sect. 4. We remark that the limit in (25) is indeed finite; see Theorem 2 in [8] and its proof for more details. □ 

Remark 2.

In general, the convergence in (23) does not remain valid when \(\rho _{p_{1}^{j},\ldots ,p_{k}^{j}}^{n}\) is replaced by its limit \(\rho _{p_{1}^{j},\ldots ,p_{k}^{j}}\) defined by (21). However, when the convergence in (21) is faster than \(\Delta _{n}^{1/2}\), we can also use the quantity \(\rho _{p_{1}^{j},\ldots ,p_{k}^{j}}\) without changing the stable central limit theorem in (23). This is the case when the convergence

$$\Delta _{n}^{-1/2}(r_{ n}^{\lozenge }(j) - {\rho }^{\lozenge }(j)) \rightarrow 0$$

holds for any j ≥ 1. Obviously, the latter depends on the behaviour of the slowly varying function \(L_{\overline{R}}\) from Assumption 2(i) near 0. It can be shown that for our main example

$$g(x) = {x}^{\beta }\exp (-\lambda x),$$

where \(\beta \in (-\frac{1} {2},0) \cup (0, \frac{1} {4})\) and λ > 0, \(\rho _{p_{1}^{j},\ldots ,p_{k}^{j}}^{n}\) can indeed be replaced by the quantity \(\rho _{p_{1}^{j},\ldots ,p_{k}^{j}}\) without changing the limit in Theorem 2. □ 

Remark 3 (Second order differences vs. increments). 

Let us demonstrate some advantages of using the second order differences \(\lozenge _{i}^{n}X\) instead of the first order increments \(\Delta _{i}^{n}X\).

  1. (i)

    First of all, taking second order differences weakens the autocorrelations, which leads to normal limits for the normalized version of the functional \({MPV }^{\lozenge }{(G,p_{1},\ldots ,p_{k})}^{n}\) (and hence to mixed normal limits for \({MPV }^{\lozenge }{(X,p_{1},\ldots ,p_{k})}^{n}\)) for all \(\beta \in (-\frac{1} {2},0) \cup (0, \frac{1} {2})\). This can be explained as follows: to obtain normal limits it has to hold that

    $$\sum \limits _{j=1}^{\infty }\vert {\rho }^{\lozenge }(j){\vert }^{2} < \infty $$

    where \({\rho }^{\lozenge }(j)\) is defined in formula (15) (it relies on the fact that the function \(\vert x{\vert }^{p} - \mathbb{E}(\vert N(0,1){\vert }^{p})\) has Hermite rank 2; see also condition (17)). This is clearly satisfied for all \(\beta \,\in \,(-\frac{1} {2},0)\,\cup \,(0, \frac{1} {2})\), because we have \(\vert {\rho }^{\lozenge }(j)\vert \,\sim \,{j}^{2\beta -3}\).

    In the case of first order increments \(\Delta _{i}^{n}X\) we obtain the correlation function ρ of the fractional noise \((B_{i}^{H} - B_{i-1}^{H})_{i\geq 1}\) with \(H = \beta + \frac{1} {2}\) as the limit autocorrelation function (see e.g. (4.15) in [8]). As \(\vert \rho (j)\vert \sim {j}^{2\beta -1}\) it holds that

    $$\sum \limits _{j=1}^{\infty }\vert \rho (j){\vert }^{2} < \infty $$

    only for \(\beta \in (-\frac{1} {2},0) \cup (0, \frac{1} {4})\). □ 

  2. (ii)

    As we have mentioned in the previous section, to show the law of large numbers we need to ensure that \(\overline{\pi }_{n}^{\lozenge }(\epsilon ) \rightarrow 0\) for all \(\epsilon > 0\), where the measure \(\pi _{n}^{\lozenge }\) is defined by (13). But for proving the central limit theorem we require a more precise treatment of the quantity

    $$\overline{\pi }_{n}^{\lozenge }(\epsilon ) = \frac{\int \limits _{\epsilon }^{\infty }{\left (g(x + 2\Delta _{n}) - 2g(x + \Delta _{n}) + g(x)\right )}^{2}dx} {{(\tau _{n}^{\lozenge })}^{2}}.$$

    In particular, we need to show that the above quantity is small enough (see condition (18)) to prove the negligibility of the error that is due to the first order approximation \(\lozenge _{i}^{n}X \approx \sigma _{(i-2)\Delta _{n}}\lozenge _{i}^{n}G\). The corresponding term in the case of increments is essentially given as

    $$\overline{\pi }_{n}(\epsilon ) = \frac{\int \limits _{\epsilon }^{\infty }{\left (g(x + \Delta _{n}) - g(x)\right )}^{2}dx} {\tau _{n}^{2}} ,$$

    where \(\tau _{n}^{2} = \mathbb{E}(\vert \Delta _{i}^{n}G{\vert }^{2})\) (see [8]). Under the Assumptions 1 and 2 the denominators \({(\tau _{n}^{\lozenge })}^{2}\) and \(\tau _{n}^{2}\) have the same order, but the numerator of \(\overline{\pi }_{n}^{\lozenge }(\epsilon )\) is much smaller than the numerator of \(\overline{\pi }_{n}(\epsilon )\). This has an important consequence: the central limit theorems for the multipower variation of the increments of X hold only for \(\beta \in (-\frac{1} {2},0)\) while the corresponding results for the second order differences hold for all \(\beta \in (-\frac{1} {2},0) \cup (0, \frac{1} {2})\). □ 
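The decay rates quoted in (i) are easy to verify numerically from the explicit formula (15). The sketch below (our own code; the value β = 0.45 is an illustrative choice near the boundary) tabulates \({\rho }^{\lozenge }(j)\) and confirms both the bound \(\vert {\rho }^{\lozenge }(j)\vert \sim {j}^{2\beta -3}\) and the absolute summability.

```python
import numpy as np

def rho_diamond(j, beta):
    """Limit correlation (15); j = 0 gives 1 by construction."""
    h = lambda x: abs(x) ** (1 + 2 * beta)
    num = -h(j + 2) + 4 * h(j + 1) - 6 * h(j) + 4 * h(j - 1) - h(j - 2)
    return num / (2 * (4 - 2 ** (1 + 2 * beta)))

beta = 0.45                                    # close to the worst case 1/2
j = np.arange(10, 200)
r = np.array([rho_diamond(x, beta) for x in j])
bound = np.max(np.abs(r) * j ** (3 - 2 * beta))        # should stay bounded
total = sum(abs(rho_diamond(x, beta)) for x in range(1, 10_000))  # finite
```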

Another advantage of using second order differences \(\lozenge _{i}^{n}X\) is the higher robustness to the presence of smooth drift processes. Let us consider the process

$$Y _{t} = X_{t} + D_{t},\qquad t \geq 0,$$
(26)

where X is a \(\mathcal{B}\mathcal{S}\mathcal{S}\) model of the form (3) and D is a stochastic drift. We obtain the following result.

Proposition 1.

Assume that the conditions of Theorem  2 hold and \(D \in {C}^{v}(\mathbb{R}_{\geq 0})\) for some v ∈ (1,2), i.e. \(D \in {C}^{1}(\mathbb{R}_{\geq 0})\) (a.s.) and D′ has (v − 1)-Hölder continuous paths (a.s.). If v − β > 1, then

$$\Delta _{n}^{-1/2}\left ({\mathit{MPV }}^{\lozenge }(Y,p_{ 1}^{j},\ldots ,p_{ k}^{j})_{ t}^{n} - \rho _{ p_{1}^{j},\ldots ,p_{k}^{j}}^{n} \int \limits _{0}^{t}\vert \sigma _{ s}{\vert }^{p_{j}^{+} }ds\right )_{1\leq j\leq d}{ st \atop \rightarrow } \int \limits _{0}^{t}A_{ s}^{1/2}dW{^\prime}_{ s},$$

where the limit process is given in Theorem  2. That is, the central limit theorem is robust to the presence of the drift D.

Proof.

Proposition 1 follows by a direct application of the Cauchy-Schwarz and Minkowski inequalities (see Proposition 6 in [8] for more details). □ 

The idea behind Proposition 1 is rather simple. Notice that \(\lozenge _{i}^{n}X = O_{\mathbb{P}}(\Delta _{n}^{\beta +\frac{1} {2} })\) (this follows from Assumption 2) whereas \(\lozenge _{i}^{n}D = O_{\mathbb{P}}(\Delta _{n}^{v})\). It can be easily seen that the drift process D does not influence the central limit theorem if \(v - \beta -\frac{1} {2} > \frac{1} {2}\), because \(\Delta _{n}^{-1/2}\) is the rate of convergence; this explains the condition of Proposition 1.

Notice that we obtain better robustness properties than in the case of first order increments: we still have \(\Delta _{i}^{n}X = O_{\mathbb{P}}(\Delta _{n}^{\beta +\frac{1} {2} })\), but now \(\Delta _{i}^{n}D = O_{\mathbb{P}}(\Delta _{n})\). Thus, the drift process D is negligible only when β < 0, which is obviously a more restrictive condition.
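This order comparison is easy to see numerically: for a smooth path, second order differences are of size \(\vert D{^\prime}{^\prime}\vert \Delta _{n}^{2}\) while first order increments are of size \(\vert D{^\prime}\vert \Delta _{n}\). A toy sketch (our own; the drift path \(D_{t} = \sin t\) is a hypothetical smooth example, smoother than the C^v paths of Proposition 1):

```python
import numpy as np

dt = 1e-3
d = np.sin(np.arange(0.0, 1.0, dt))          # hypothetical smooth drift path
first = np.max(np.abs(np.diff(d)))           # increments:          O(dt)
second = np.max(np.abs(np.diff(d, n=2)))     # second differences:  O(dt^2)
# second / first is of order dt: at the CLT rate dt^{-1/2} only the
# increments feel the drift, in line with the condition v - beta > 1.
```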

Example 1.

Let us come back to the original \(\mathcal{B}\mathcal{S}\mathcal{S}\) process from (1), which is of the form (26) with

$$D_{t} = \int \limits _{-\infty }^{t}q(t - s)a_{ s}ds.$$

For the ease of exposition we assume that

$$q(x) = {x}^{\overline{\beta }}1_{\{x\in (0,1)\}},\qquad \overline{\beta } > -1,$$

and the drift process a is càdlàg and bounded. Observe the decomposition

$$D_{t+\epsilon } - D_{t} = \int \limits _{t}^{t+\epsilon }q(t + \epsilon - s)a_{ s}ds + \int \limits _{-\infty }^{t}(q(t + \epsilon - s) - q(t - s))a_{ s}ds.$$

We conclude that the process D has Hölder continuous paths of order \((\overline{\beta } + 1) \wedge 1\). Consequently, Theorem 1 is robust to the presence of the drift process D when \(\overline{\beta } > \beta -\frac{1} {2}\). Furthermore, for \(\overline{\beta } \geq 0\) we deduce that

$$D{^\prime}_{t} = q(0)a_{t} + \int \limits _{0}^{\infty }q{^\prime}(s)a_{ t-s}ds.$$

By Proposition 1 we conclude that Theorem 2 is robust to the presence of D when the process a has Hölder continuous paths of order bigger than β. □ 

Remark 4 (Higher order differences). 

Clearly, we can also formulate asymptotic results for multipower variation of q-order differences of \(\mathcal{B}\mathcal{S}\mathcal{S}\) processes X. Define

$${ \mathit{MPV }}^{(q)}(X,p_{ 1},\ldots ,p_{k})_{t}^{n} = \Delta _{ n}{(\tau _{n}^{(q)})}^{-{p}^{+} } \sum \limits _{i=q}^{[t/\Delta _{n}]-qk+q}\ \prod \limits _{l=0}^{k-1}\vert \Delta _{ i+ql}^{(q)n}X{\vert }^{p_{l+1} },$$

where \(\Delta _{i}^{(q)n}X\) denotes the q-th order difference of X at stage n (defined analogously to (7)) and \({(\tau _{n}^{(q)})}^{2} = \mathbb{E}(\vert \Delta _{i}^{(q)n}G{\vert }^{2})\). Then the results of Theorems 1 and 2 remain valid for the class \({\mathit{MPV }}^{(q)}{(X,p_{1},\ldots ,p_{k})}^{n}\) with \(\rho _{p_{1},\ldots ,p_{k}}^{n}\) defined as

$$\rho _{p_{1},\ldots ,p_{k}}^{n} = \mathbb{E}\left (\prod \limits _{l=0}^{k-1}{\left \vert \frac{\Delta _{i+ql}^{(q)n}G} {\tau _{n}^{(q)}} \right \vert }^{p_{l+1} }\right ).$$

The Assumptions 1 and 2 have to be modified as follows: (a) g (2) has to be replaced by g (q) in Assumption 1(ii) and 1(iii), and (b) \({\overline{R}}^{(4)}\) has to be replaced by \({\overline{R}}^{(2q)}\) in Assumption 2(ii).

However, let us remark that going from second order differences to q-order differences with q > 2 does not give any new theoretical advantages (with respect to robustness etc.). It might though have some influence in finite samples. □ 
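In code, q-order differences are simply iterated first differences, equivalently alternating binomial sums \(\Delta _{i}^{(q)n}X = \sum _{m=0}^{q}{(-1)}^{m}\binom{q}{m}X_{(i-m)\Delta _{n}}\). A one-line check (our own sketch):

```python
import numpy as np
from math import comb

x = np.random.default_rng(3).standard_normal(50)
q = 3
via_diff = np.diff(x, n=q)                   # q-order differences of x
# the same quantity written as the alternating binomial sum
via_binom = sum((-1) ** m * comb(q, m) * x[q - m:len(x) - m]
                for m in range(q + 1))
```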

Remark 5 (An extension to other integral processes). 

In [8] and [9] we considered processes of the form

$$Z_{t} = \mu + \int \limits _{0}^{t}\sigma _{ s}dG_{s},$$
(27)

where \((G_{s})_{s\geq 0}\) is a Gaussian process with centered and stationary increments. Define

$$\overline{R}(t) = \mathbb{E}(\vert G_{s+t} - G_{s}{\vert }^{2})$$

and assume that Assumption 2 holds for \(\overline{R}\) (we use the same notation as for the process (3) to underline the parallels between the models (27) and (3)). We remark that the integral in (27) is well-defined in the Riemann-Stieltjes sense when the process \(\sigma \) has finite r-variation with r < 1 ∕ (1 ∕ 2 − β) (see [8] and [23]), which we assume in the following discussion. We associate \(\tau _{n}^{\lozenge }\) and \({\mathit{MPV }}^{\lozenge }(Z,p_{1},\ldots ,p_{k})_{t}^{n}\) with the process Z by (8). Then Theorem 1 remains valid for the model (27) and Theorem 2 also holds if we further assume that Assumption 3-γ is satisfied for some γ ∈ (0, 1] with \(\gamma (p \wedge 1) > \frac{1} {2}\), \(p =\max _{1\leq i\leq k,1\leq j\leq d}(p_{i}^{j})\).

We remark that the justification of the approximation \(\lozenge _{i}^{n}Z \approx \sigma _{(i-2)\Delta _{n}}\lozenge _{i}^{n}\!G\) is easier to provide for the model (27) (see e.g. [8]). All other proof steps are performed in exactly the same way as for the model (3). □ 

Remark 6 (Some further extensions). 

We remark that the use of the power functions in the definition of \({\mathit{MPV }}^{\lozenge }(X,p_{1},\ldots ,p_{k})_{t}^{n}\) is not essential for the proof of Theorems 1 and 2. In principle, both theorems can be proved for a more general class of functionals

$${ \mathit{MPV }}^{\lozenge }(X,H)_{ t}^{n} = \Delta _{ n} \sum \limits _{i=2}^{[t/\Delta _{n}]-2k+2}H\left (\frac{\lozenge _{i}^{n}X} {\tau _{n}^{\lozenge }} ,\ldots , \frac{\lozenge _{i+2(k-1)}^{n}X} {\tau _{n}^{\lozenge }} \right ),$$

where \(H : {\mathbb{R}}^{k} \rightarrow \mathbb{R}\) is a measurable even function with polynomial growth (cf. Remark 2 in [8]). However, we dispense with the exact exposition.

Another useful extension of Theorem 2 is a joint central limit theorem for functionals \({\mathit{MPV }}^{\lozenge }(X,p_{1},\ldots ,p_{k})_{t}^{n}\) computed at different frequencies (this result will be applied in Sect. 4.3). For r ≥ 1, define the multipower variation computed at frequency \(r\Delta _{n}\) as

$$\mathit{MPV }_{r}^{\lozenge }(X,p_{ 1},\ldots ,p_{k})_{t}^{n} = \Delta _{ n}{(\tau _{n,r}^{\lozenge })}^{-{p}^{+} } \sum \limits _{i=2r}^{[t/\Delta _{n}]-2k+2}\ \prod \limits _{l=0}^{k-1}\vert \lozenge _{ i+2lr}^{n,r}\!X{\vert }^{p_{l+1} },$$
(28)

where \(\lozenge _{i}^{n,r}X = X_{i\Delta _{n}} - 2X_{(i-r)\Delta _{n}} + X_{(i-2r)\Delta _{n}}\) and \({(\tau _{n,r}^{\lozenge })}^{2} = \mathbb{E}(\vert \lozenge _{i}^{n,r}\!G{\vert }^{2})\). Then, under the conditions of Theorem 2, we obtain the stable central limit theorem

$$\Delta _{n}^{-1/2}\left (\begin{array}{c} \mathit{MPV }_{r_{1}}^{\lozenge }(X,p_{ 1},\ldots ,p_{k})_{t}^{n} - \rho _{ p_{1},\ldots ,p_{k}}^{n,r_{1}} \int \limits _{ 0}^{t}\vert \sigma _{ s}{\vert }^{{p}^{+} }ds \\ \mathit{MPV }_{r_{2}}^{\lozenge }(X,p_{1},\ldots ,p_{k})_{t}^{n} - \rho _{p_{1},\ldots ,p_{k}}^{n,r_{2}} \int \limits _{0}^{t}\vert \sigma _{s}{\vert }^{{p}^{+} }ds \end{array} \right ){ st \atop \rightarrow } \int \limits _{0}^{t}\vert \sigma _{ s}{\vert }^{{p}^{+} }{\mu }^{1/2}dW{^\prime}_{ s},$$
(29)

where W′ is a 2-dimensional Brownian motion independent of \(\mathcal{F}\),

$$\rho _{p_{1},\ldots ,p_{k}}^{n,r} = \mathbb{E}\left (\prod \limits _{l=0}^{k-1}{\left \vert \frac{\lozenge _{i+2lr}^{n,r}G} {\tau _{n,r}^{\lozenge }} \right \vert }^{p_{l+1} }\right )$$

and the \(2 \times 2\) matrix \(\mu = (\mu _{ij})_{1\leq i,j\leq 2}\) is defined as

$$\mu _{ij} =\lim _{n\rightarrow \infty }\Delta _{n}^{-1}\text{ cov}\left (\mathit{MPV }_{ r_{i}}^{\lozenge }({B}^{H},p_{ 1},\ldots ,p_{k})_{1}^{n},\mathit{MPV }_{ r_{j}}^{\lozenge }({B}^{H},p_{ 1},\ldots ,p_{k})_{1}^{n}\right )$$

with B H being a fractional Brownian motion with Hurst parameter \(H = \beta \ +\ \frac{1} {2}\).

Clearly, an analogous result can be formulated for any d-dimensional family \((r_{j};p_{1}^{j},\ldots ,p_{k}^{j})_{1\leq j\leq d}\). □ 
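For readers who wish to experiment numerically, a minimal sketch of the statistic in (28) is given below. The helper names are ours, and \(\tau _{n,r}^{\lozenge }\) must be supplied by the user (it depends on the memory function g); boundary indices are handled in the simplest way.

```python
import numpy as np

def second_diff(x, r=1):
    """Second-order difference at lag r: x[i] - 2 x[i-r] + x[i-2r]."""
    return x[2 * r:] - 2.0 * x[r:-r] + x[:-2 * r]

def mpv(x, powers, dt, tau, r=1):
    """Multipower variation at frequency r*dt in the spirit of (28);
    tau stands in for tau_{n,r}^{diamond} and must be supplied."""
    d = np.abs(second_diff(x, r)) / tau          # standardized |differences|
    k = len(powers)
    n = len(d) - 2 * r * (k - 1)                 # number of complete products
    prod = np.ones(n)
    for l, p in enumerate(powers):
        prod *= d[2 * r * l: 2 * r * l + n] ** p
    return dt * prod.sum()
```

For example, `mpv(x, (2, 0), dt, tau)` corresponds to \({\mathit{MPV }}^{\lozenge }(X,2,0)_{t}^{n}\) up to boundary terms.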

4 Estimation of the Smoothness Parameter

In this section we apply our probabilistic results to obtain consistent estimates of the smoothness parameter \(\beta \in (-\frac{1} {2},0) \cup (0, \frac{1} {2})\). We propose three different estimators for β: the realised variation ratio (\({\mathit{RVR}}^{\lozenge }\)), the modified realised variation ratio (\({\overline{\mathit{RVR}}}^{\lozenge }\)) and the change-of-frequency estimator (\({\mathit{COF }}^{\lozenge }\)). Throughout this section we assume that

$$\Delta _{n}^{-1/2}(r_{ n}^{\lozenge }(j) - {\rho }^{\lozenge }(j)) \rightarrow 0$$
(30)

for any j ≥ 1, where \(r_{n}^{\lozenge }(j)\) and \({\rho }^{\lozenge }(j)\) are defined in (14) and (15), respectively. This condition guarantees that \(\rho _{p_{1}^{j},\ldots ,p_{k}^{j}}^{n}\) can be replaced by the quantity \(\rho _{p_{1}^{j},\ldots ,p_{k}^{j}}\) in Theorem 2 without changing the limit (see Remark 2). Recall that the condition (30) holds for our canonical example

$$g(x) = {x}^{\beta }\exp (-\lambda x)$$

when \(\beta \in (-\frac{1} {2},0) \cup (0, \frac{1} {4})\) and λ > 0.

4.1 The Realised Variation Ratio

We define the realised variation ratio based on the second order differences as

$$\mathit{RVR}_{t}^{\lozenge n} = \frac{{\mathit{MPV }}^{\lozenge }(X,1,1)_{ t}^{n}} {{\mathit{MPV }}^{\lozenge }(X,2,0)_{t}^{n}}.$$
(31)

This type of statistic has been successfully applied in semimartingale models to test for the presence of the jump part (see e.g. [4]). In the \(\mathcal{B}\mathcal{S}\mathcal{S}\) framework the statistic \(\mathit{RVR}_{t}^{\lozenge n}\) is used to estimate the smoothness parameter β.

Let us introduce the function \(\psi : (-1,1) \rightarrow ( \frac{2} {\pi },1)\) given by

$$\psi (x) = \frac{2} {\pi }(\sqrt{1 - {x}^{2}} + x\arcsin x).$$
(32)

We remark that \(\psi (x) = \mathbb{E}(\vert U_{1}U_{2}\vert )\), where U 1, U 2 are two standard normal variables with correlation x. Let us further notice that while the computation of the value of \({\mathit{MPV }}^{\lozenge }(X,p_{1},\ldots ,p_{k})_{t}^{n}\) requires the knowledge of the quantity \(\tau _{n}^{\lozenge }\) (and hence the knowledge of the memory function g), the statistic \(\mathit{RVR}_{t}^{\lozenge n}\) is purely observation based since

$$\mathit{RVR}_{t}^{\lozenge n} = \frac{\sum \limits _{i=2}^{[t/\Delta _{n}]-2}\vert \lozenge _{i}^{n}\!X\vert \vert \lozenge _{i+2}^{n}\!X\vert } {\sum \limits _{i=2}^{[t/\Delta _{n}]}\vert \lozenge _{i}^{n}\!X{\vert }^{2}}.$$
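The observation-based character of the statistic is easy to see in code; below is a minimal sketch (helper names are ours). As a sanity check, for a Brownian path (formally the boundary case β = 0, where \({\rho }^{\lozenge }(2) = 0\)) the ratio should be close to \(\psi (0) = 2/\pi \).

```python
import numpy as np

def rvr(x):
    """Realised variation ratio (31): tau_n^{diamond} cancels between
    numerator and denominator, so only the observations enter."""
    d = np.abs(x[2:] - 2.0 * x[1:-1] + x[:-2])   # |second-order differences|
    return (d[:-2] * d[2:]).sum() / (d ** 2).sum()

def psi(x):
    """psi(x) = E(|U1 U2|) for standard normals with correlation x, cf. (32)."""
    return (2.0 / np.pi) * (np.sqrt(1.0 - x ** 2) + x * np.arcsin(x))
```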

Our first result is the consistency of \(\mathit{RVR}_{t}^{\lozenge n}\), which follows directly from Theorem 1 and Lemma 1.

Proposition 2.

Assume that the conditions of Theorem  1 hold. Then we obtain

$$\mathit{RVR}_{t}^{\lozenge n}{ \text{ u.c.p.} \atop \rightarrow } \psi ({\rho }^{\lozenge }(2)),$$
(33)

where \({\rho }^{\lozenge }(j)\) is defined by (15) .

Note that

$${\rho }^{\lozenge }(2) = \frac{-{4}^{1+2\beta } + 4 \cdot {3}^{1+2\beta } - 6 \cdot {2}^{1+2\beta } + 4} {2\left (4 - {2}^{1+2\beta }\right )} ,$$

As a function of \(\beta \in (-\frac{1} {2},0) \cup (0, \frac{1} {2})\), \({\rho }^{\lozenge }(2) = \rho _{\beta }^{\lozenge }(2)\) is invertible; it is positive for \(\beta \in (-\frac{1} {2},0)\) and negative for \(\beta \in (0, \frac{1} {2})\).
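The formula above is straightforward to evaluate numerically; a small sketch (with a helper name of our own) that also exhibits the sign behaviour:

```python
def rho2(beta):
    """rho^{diamond}(2) as a function of beta, for the pure power case
    g(x) = x^beta, i.e. fBm with H = beta + 1/2."""
    a = 1.0 + 2.0 * beta
    num = -(4.0 ** a) + 4.0 * 3.0 ** a - 6.0 * 2.0 ** a + 4.0
    return num / (2.0 * (4.0 - 2.0 ** a))
```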

Obviously, the function ψ is only invertible on the interval ( − 1, 0) or (0, 1). Thus, we can recover the absolute value of \({\rho }^{\lozenge }(2)\), but not its sign (which is not a big surprise, because we use absolute values of the second order differences in the definition of \(\mathit{RVR}_{t}^{\lozenge n}\)). In the following proposition we restrict ourselves to \(\beta \in (0, \frac{1} {2})\) as those values typically appear in physics.

Proposition 3.

Assume that the conditions of Theorem  2 and condition  (30) hold. Let \(\beta \ \in \ (0, \frac{1} {2})\) , \(\rho _{\beta }^{\lozenge }(2) : (0, \frac{1} {2}) \rightarrow (-1,0)\) , \(\psi : (-1,0) \rightarrow ( \frac{2} {\pi },1)\) and set \(f = \psi \circ \rho _{\beta }^{\lozenge }(2)\) . Then we obtain for h = f −1

$$h(\mathit{RVR}_{t}^{\lozenge n}){ \text{ u.c.p.} \atop \rightarrow } \beta ,$$
(34)

and

$$\frac{\Delta _{n}^{-1/2}(h(\mathit{RVR}_{t}^{\lozenge n}) - \beta ){\mathit{MPV }}^{\lozenge }(X,2,0)_{t}^{n}} {\sqrt{\frac{1} {3}\vert h{^\prime}(\mathit{RVR}_{t}^{\lozenge n}){\vert }^{2}(1,-\mathit{RVR}_{t}^{\lozenge n})\mu {(1,-\mathit{RVR}_{t}^{\lozenge n})}^{T}{\mathit{MPV }}^{\lozenge }(X,4,0)_{t}^{n}}}{ d \atop \rightarrow } N(0,1),$$
(35)

for any t > 0, where \(\mu = (\mu _{ij})_{1\leq i,j\leq 2}\) is given by

$$\begin{array}{rcl} \mu _{11}& =& \lim _{n\rightarrow \infty }\Delta _{n}^{-1}\text{ var}\left ({\mathit{MPV }}^{\lozenge }({B}^{H},1,1)_{ 1}^{n}\right ), \\ \mu _{12}& =& \lim _{n\rightarrow \infty }\Delta _{n}^{-1}\text{ cov}\left ({\mathit{MPV }}^{\lozenge }({B}^{H},1,1)_{ 1}^{n},{\mathit{MPV }}^{\lozenge }({B}^{H},2,0)_{ 1}^{n}\right ), \\ \mu _{22}& =& \lim _{n\rightarrow \infty }\Delta _{n}^{-1}\text{ var}\left ({\mathit{MPV }}^{\lozenge }({B}^{H},2,0)_{ 1}^{n}\right ), \\ \end{array}$$

with \(H = \beta + \frac{1} {2}\) .

Proposition 3 is a direct consequence of Theorem 2, of the delta-method for stable convergence and of the fact that the true centering \(\psi (r_{n}^{\lozenge }(2))\) in (23) can be replaced by its limit \(\psi ({\rho }^{\lozenge }(2))\), because of the condition (30) (see Remark 2). We note that the normalized statistic in (35) is again self-scaling, i.e. we do not require the knowledge of \(\tau _{n}^{\lozenge }\), and consequently we can immediately build confidence regions for the smoothness parameter \(\beta \in (0, \frac{1} {2})\).
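Since f does not admit a closed-form inverse, h = f −1 has to be evaluated numerically. A minimal sketch using bisection follows (helper names are ours); f is increasing on \((0, \frac{1}{2})\) because \(\rho _{\beta }^{\lozenge }(2)\) is negative and decreasing there while ψ decreases on ( − 1, 0).

```python
import numpy as np

def rho2(beta):
    a = 1.0 + 2.0 * beta
    return (-(4.0 ** a) + 4.0 * 3.0 ** a - 6.0 * 2.0 ** a + 4.0) / (2.0 * (4.0 - 2.0 ** a))

def psi(x):
    return (2.0 / np.pi) * (np.sqrt(1.0 - x ** 2) + x * np.arcsin(x))

def h(v, tol=1e-10):
    """Numerical inverse of f = psi o rho2 on (0, 1/2) by bisection."""
    f = lambda b: psi(rho2(b))
    lo, hi = 1e-8, 0.5 - 1e-8
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < v:      # f increasing: the root lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```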

Remark 7.

The constants \(\mu _{ij}\) , 1 ≤ i, j ≤ 2, can be expressed as

$$\begin{array}{rcl} \mu _{11}& =& \text{ var}(\vert Q_{1}\vert \vert Q_{3}\vert ) + 2\sum \limits _{k=1}^{\infty }\text{ cov}(\vert Q_{ 1}\vert \vert Q_{3}\vert ,\vert Q_{1+k}\vert \vert Q_{3+k}\vert ), \\ \mu _{12}& =& \text{ cov}(Q_{2}^{2},\vert Q_{ 1}\vert \vert Q_{3}\vert ) + 2\sum \limits _{k=0}^{\infty }\text{ cov}(Q_{ 1}^{2},\vert Q_{ 1+k}\vert \vert Q_{3+k}\vert ), \\ \mu _{22}& =& \text{ var}(Q_{1}^{2}) + 2\sum \limits _{k=1}^{\infty }\text{ cov}(Q_{ 1}^{2},Q_{ 1+k}^{2}) = 2 + 4\sum \limits _{k=1}^{\infty }\vert {\rho }^{\lozenge }(k){\vert }^{2}, \\ \end{array}$$

with \(Q_{i} = \lozenge _{i}^{n}{B}^{H}/\sqrt{\text{ var} (\lozenge _{i }^{n }{B}^{H } )}\). The above quantities can be computed using formulas for absolute moments of the multivariate normal distribution. □ 

4.2 The Modified Realised Variation Ratio

Recall that the restriction \(\beta \in (0, \frac{1} {2})\) is required to formulate Proposition 3. To obtain estimates for all values \(\beta \in (-\frac{1} {2},0) \cup (0, \frac{1} {2})\) let us consider a modified (and, in fact, more natural) version of \(\mathit{RVR}_{t}^{\lozenge n}\):

$$\overline{\mathit{RVR}}_{t}^{\lozenge n} = \frac{\sum \limits _{i=2}^{[t/\Delta _{n}]-2}\lozenge _{i}^{n}X\lozenge _{i+2}^{n}\!X} {\sum \limits _{i=2}^{[t/\Delta _{n}]}\vert \lozenge _{i}^{n}\!X{\vert }^{2}}.$$
(36)

Notice that \(\overline{\mathit{RVR}}_{t}^{\lozenge n}\) is an analogue of the classical autocorrelation estimator. The following result describes the asymptotic behaviour of \(\overline{\mathit{RVR}}_{t}^{\lozenge n}\).
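A minimal sketch of this statistic (helper name ours); keeping the signed products preserves the sign of \({\rho }^{\lozenge }(2)\), and hence of β. For a Brownian path (formally β = 0) the statistic should be close to zero.

```python
import numpy as np

def rvr_bar(x):
    """Modified realised variation ratio (36): an autocorrelation-type
    estimator built from signed second-order differences."""
    d = x[2:] - 2.0 * x[1:-1] + x[:-2]
    return (d[:-2] * d[2:]).sum() / (d ** 2).sum()
```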

Proposition 4.

Assume that the conditions of Theorem  2 and condition (30) hold, and let \(h = {(\rho _{\beta }^{\lozenge }(2))}^{-1}\) . Then we obtain

$$h(\overline{\mathit{RVR}}_{t}^{\lozenge n}){ \text{ u.c.p.} \atop \rightarrow } \beta ,$$
(37)

and, with \({\overline{\mathit{MPV }}}^{\lozenge }(X,1,1)_{t}^{n} = \Delta _{n}{(\tau _{n}^{\lozenge })}^{-2} \sum \limits _{i=2}^{[t/\Delta _{n}]-2}\lozenge _{i}^{n}X\lozenge _{i+2}^{n}\!X\) ,

$$\frac{\Delta _{n}^{-1/2}(h(\overline{\mathit{RVR}}_{t}^{\lozenge n}) - \beta ){\mathit{MPV }}^{\lozenge }(X,2,0)_{t}^{n}} {\sqrt{\frac{1} {3}\vert h{^\prime}(\overline{\mathit{RVR}}_{t}^{\lozenge n}){\vert }^{2}(1,-\overline{\mathit{RVR}}_{t}^{\lozenge n})\mu {(1,-\overline{\mathit{RVR}}_{t}^{\lozenge n})}^{T}{\mathit{MPV }}^{\lozenge }(X,4,0)_{t}^{n}}}{ d \atop \rightarrow } N(0,1),$$
(38)

for any t > 0, where \(\mu = (\mu _{ij})_{1\leq i,j\leq 2}\) is given by

$$\begin{array}{rcl} \mu _{11}& =& \lim _{n\rightarrow \infty }\Delta _{n}^{-1}\text{ var}\left ({\overline{\mathit{MPV }}}^{\lozenge }({B}^{H},1,1)_{ 1}^{n}\right ), \\ \mu _{12}& =& \lim _{n\rightarrow \infty }\Delta _{n}^{-1}\text{ cov}\left ({\overline{\mathit{MPV }}}^{\lozenge }({B}^{H},1,1)_{ 1}^{n},{\mathit{MPV }}^{\lozenge }({B}^{H},2,0)_{ 1}^{n}\right ), \\ \mu _{22}& =& \lim _{n\rightarrow \infty }\Delta _{n}^{-1}\text{ var}\left ({\mathit{MPV }}^{\lozenge }({B}^{H},2,0)_{ 1}^{n}\right ), \\ \end{array}$$

with \(H = \beta + \frac{1} {2}\) .

Remark 8.

Note that Proposition 4 follows from Remark 6, because the function H(x, y) = xy is an even function. In fact, its proof is much easier than the corresponding result of Theorem 2. The most essential step is the joint central limit theorem for the numerator and the denominator of \(\overline{\mathit{RVR}}_{t}^{\lozenge n}\) when X = G (i.e. σ ≡ 1). The latter can be shown by using Wiener chaos expansion and Malliavin calculus. Let \(\mathbb{H}\) be a separable Hilbert space generated by the triangular array \((\lozenge _{i}^{n}G/\tau _{n}^{\lozenge })_{n\geq 1,1\leq i\leq [t/\Delta _{n}]}\) with scalar product \(\langle \cdot ,\cdot \rangle _{\mathbb{H}}\) induced by the covariance function of the process \((\lozenge _{i}^{n}G/\tau _{n}^{\lozenge })_{n\geq 1,1\leq i\leq [t/\Delta _{n}]}\). Setting \(\chi _{i}^{n} = \lozenge _{i}^{n}G/\tau _{n}^{\lozenge }\) we deduce the identities

$$\begin{array}{rcl} & & \Delta _{n}^{1/2} \sum \limits _{i=2}^{[t/\Delta _{n}]-2}\left (\chi _{ i}^{n}\chi _{ i+2}^{n} - {\rho }^{\lozenge }(2)\right ) = I_{ 2}(f_{n}^{(1)}),\quad f_{ n}^{(1)} = \Delta _{ n}^{1/2} \sum \limits _{i=2}^{[t/\Delta _{n}]-2}\chi _{ i}^{n} \otimes \chi _{ i+2}^{n}, \\ & & \Delta _{n}^{1/2} \sum \limits _{i=2}^{[t/\Delta _{n}]}\left (\vert \chi _{ i}^{n}{\vert }^{2} - 1\right ) = I_{ 2}(f_{n}^{(2)}),\quad f_{ n}^{(2)} = \Delta _{ n}^{1/2} \sum \limits _{i=2}^{[t/\Delta _{n}]}{(\chi _{ i}^{n})}^{\otimes 2}, \\ \end{array}$$

where I 2 is the second multiple integral. The joint central limit theorem for the above statistics follows from [19] once we show the contraction conditions

$$\vert \vert f_{n}^{(1)} \otimes _{ 1}f_{n}^{(1)}\vert \vert _{{ \mathbb{H}}^{\otimes 2}} \rightarrow 0,\qquad \vert \vert f_{n}^{(2)} \otimes _{ 1}f_{n}^{(2)}\vert \vert _{{ \mathbb{H}}^{\otimes 2}} \rightarrow 0,$$

and identify the asymptotic covariance structure by computing \(2\lim _{n\rightarrow \infty }\langle f_{n}^{(i)},f_{n}^{(j)}\rangle _{{\mathbb{H}}^{\otimes 2}}\) for 1 ≤ i, j ≤ 2. We refer to the appendix of [7] for a more detailed proof of such central limit theorems. □ 

Remark 9.

The constants \(\mu _{ij}\) , 1 ≤ i, j ≤ 2, are now much easier to compute. They are given as

$$\begin{array}{rcl} \mu _{11}& =& \text{ var}(Q_{1}Q_{3}) + 2\sum \limits _{k=1}^{\infty }\text{ cov}(Q_{ 1}Q_{3},Q_{1+k}Q_{3+k}) \\ & =& 1 + \vert {\rho }^{\lozenge }(2){\vert }^{2} + 2\sum \limits _{k=1}^{\infty }\left (\vert {\rho }^{\lozenge }(k){\vert }^{2} + {\rho }^{\lozenge }(k + 2){\rho }^{\lozenge }(\vert k - 2\vert )\right ), \\ \mu _{12}& =& \text{ cov}(Q_{2}^{2},Q_{ 1}Q_{3}) + 2\sum \limits _{k=0}^{\infty }\text{ cov}(Q_{ 1}^{2},Q_{ 1+k}Q_{3+k}) \\ & =& 2\vert {\rho }^{\lozenge }(1){\vert }^{2} + 4\sum \limits _{k=0}^{\infty }{\rho }^{\lozenge }(k){\rho }^{\lozenge }(k + 2), \\ \mu _{22}& =& \text{ var}(Q_{1}^{2}) + 2\sum \limits _{k=1}^{\infty }\text{ cov}(Q_{ 1}^{2},Q_{ 1+k}^{2}) = 2 + 4\sum \limits _{k=1}^{\infty }\vert {\rho }^{\lozenge }(k){\vert }^{2}, \\ \end{array}$$

with \(Q_{i} = \lozenge _{i}^{n}{B}^{H}/\sqrt{\text{ var} (\lozenge _{i }^{n }{B}^{H } )}\). This follows from a well-known formula

$$\text{ cov}(Z_{1}Z_{2},Z_{3}Z_{4}) = \text{ cov}(Z_{1},Z_{3})\text{ cov}(Z_{2},Z_{4})+\text{ cov}(Z_{2},Z_{3})\text{ cov}(Z_{1},Z_{4})$$

whenever \((Z_{1},Z_{2},Z_{3},Z_{4})\) is normal. □ 
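The product-covariance formula is easy to check by simulation; below is a small Monte Carlo sanity check (not a proof), where the mixing matrix A is an arbitrary example of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.3, 0.2, 1.0, 0.0],
              [0.1, 0.4, 0.6, 1.0]])
C = A @ A.T                                    # covariance of Z = A g, g ~ N(0, I)
Z = rng.standard_normal((400000, 4)) @ A.T     # samples of (Z1, Z2, Z3, Z4)

# cov(Z1 Z2, Z3 Z4) = cov(Z1,Z3) cov(Z2,Z4) + cov(Z2,Z3) cov(Z1,Z4)
empirical = np.cov(Z[:, 0] * Z[:, 1], Z[:, 2] * Z[:, 3])[0, 1]
formula = C[0, 2] * C[1, 3] + C[1, 2] * C[0, 3]
```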

4.3 Change-of-Frequency Estimator

Another idea for estimating β is to change the frequency Δ n at which the second order differences are built. We recall that \({(\tau _{n}^{\lozenge })}^{2} = 4\overline{R}(\Delta _{n}) -\overline{R}(2\Delta _{n})\) and consequently we obtain the relationship

$${(\tau _{n}^{\lozenge })}^{2} \simeq \Delta _{ n}^{2\beta +1}$$

by Assumption 2(i). Observing the latter we define the statistic

$$COF_{t}^{n} = \frac{\sum \limits _{i=4}^{[t/\Delta _{n}]}\vert \lozenge _{i}^{n,2}\!\!X{\vert }^{2}} {\sum \limits _{i=2}^{[t/\Delta _{n}]}\vert \lozenge _{i}^{n}\!X{\vert }^{2}} ,$$
(39)

which is essentially the ratio of \({\mathit{MPV }}^{\lozenge }(X,2,0)_{t}^{n}\) computed at frequencies 2Δ n and Δ n . Recall that \({(\tau _{n,2}^{\lozenge })}^{2} = \mathbb{E}(\vert \lozenge _{i}^{n,2}G{\vert }^{2}) = 4\overline{R}(2\Delta _{n}) -\overline{R}(4\Delta _{n})\) and observe

$$\frac{{(\tau _{n,2}^{\lozenge })}^{2}} {{(\tau _{n}^{\lozenge })}^{2}} \rightarrow {2}^{2\beta +1}.$$

As a consequence we deduce the convergence

$$COF_{t}^{n}{ \text{ u.c.p.} \atop \rightarrow } {2}^{2\beta +1}.$$
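Since the limit is \({2}^{2\beta +1}\), one can read off β as \((\log _{2}(c) - 1)/2\) from an observed value c of the statistic. A minimal sketch (helper names ours); for a Brownian path (formally β = 0) the statistic should be close to 2.

```python
import numpy as np

def cof(x):
    """Change-of-frequency statistic (39): squared second differences
    at lag 2 over squared second differences at lag 1."""
    d1 = x[2:] - 2.0 * x[1:-1] + x[:-2]      # lozenge^{n}   (r = 1)
    d2 = x[4:] - 2.0 * x[2:-2] + x[:-4]      # lozenge^{n,2} (r = 2)
    return (d2 ** 2).sum() / (d1 ** 2).sum()

def beta_from_cof(c):
    """Invert c = 2^{2 beta + 1}: beta = (log2(c) - 1)/2."""
    return 0.5 * (np.log2(c) - 1.0)
```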

The following proposition is a direct consequence of (29) and the properties of stable convergence.

Proposition 5.

Assume that the conditions of Theorem  2 and condition (30) hold, and let \(h(x) = (\log _{2}(x) - 1)/2\) . Then we obtain

$$h(COF_{t}^{n}){ \text{ u.c.p.} \atop \rightarrow } \beta ,$$
(40)

and

$$\frac{\Delta _{n}^{-1/2}(h(\mathit{COF}_{t}^{n}) - \beta ){\mathit{MPV }}^{\lozenge }(X,2,0)_{t}^{n}} {\sqrt{\frac{1} {3}\vert h{^\prime}(\mathit{COF}_{t}^{n}){\vert }^{2}(1,-\mathit{COF}_{t}^{n})\mu {(1,-\mathit{COF}_{t}^{n})}^{T}{\mathit{MPV }}^{\lozenge }(X,4,0)_{t}^{n}}}{ d \atop \rightarrow } N(0,1),$$
(41)

for any t > 0, where \(\mu = (\mu _{ij})_{1\leq i,j\leq 2}\) is given by

$$\begin{array}{rcl} \mu _{11}& =& \lim _{n\rightarrow \infty }\Delta _{n}^{-1}\text{ var}\left (\mathit{MPV }_{ 2}^{\lozenge }({B}^{H},2,0)_{ 1}^{n}\right ), \\ \mu _{12}& =& \lim _{n\rightarrow \infty }\Delta _{n}^{-1}\text{ cov}\left (\mathit{MPV }_{ 2}^{\lozenge }({B}^{H},2,0)_{ 1}^{n},{\mathit{MPV }}^{\lozenge }({B}^{H},2,0)_{ 1}^{n}\right ), \\ \mu _{22}& =& \lim _{n\rightarrow \infty }\Delta _{n}^{-1}\text{ var}\left ({\mathit{MPV }}^{\lozenge }({B}^{H},2,0)_{ 1}^{n}\right ), \\ \end{array}$$

with \(H = \beta + \frac{1} {2}\) .

Let us emphasize that the normalized statistic in (41) is again self-scaling. We recall that the approximation

$$\frac{{(\tau _{n,2}^{\lozenge })}^{2}} {{(\tau _{n}^{\lozenge })}^{2}} - {2}^{2\beta +1} = o(\Delta _{ n}^{1/2}),$$

which follows from (30), holds for our main example \(g(x) = {x}^{\beta }\exp (-\lambda x)\) when \(\beta \in (-\frac{1} {2},0) \cup (0, \frac{1} {4})\) and λ > 0.

Remark 10.

Observe the identity

$$X_{i\Delta _{n}} - 2X_{(i-2)\Delta _{n}} + X_{(i-4)\Delta _{n}} = \lozenge _{i}^{n}X + 2\lozenge _{ i-1}^{n}X + \lozenge _{ i-2}^{n}X.$$

The latter implies that

$$\begin{array}{rcl} \mu _{11}& & = 2 + {2}^{-4\beta } \sum \limits _{k=1}^{\infty }\vert {\rho }^{\lozenge }(k + 2) - 4{\rho }^{\lozenge }(k + 1) + 6{\rho }^{\lozenge }(k) - 4{\rho }^{\lozenge }(\vert k - 1\vert ) \\ & & \quad + {\rho }^{\lozenge }(\vert k - 2\vert ){\vert }^{2}, \\ \mu _{12}& & = {2}^{-2\beta }({\rho }^{\lozenge }(1) - 1) + {2}^{1-2\beta } \sum \limits _{k=0}^{\infty }\vert {\rho }^{\lozenge }(k + 2) - 2{\rho }^{\lozenge }(k + 1) + {\rho }^{\lozenge }(k){\vert }^{2}, \\ \mu _{22}& & = 2 + 4\sum \limits _{k=1}^{\infty }\vert {\rho }^{\lozenge }(k){\vert }^{2}.\end{array}$$

 □ 
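The correlations \({\rho }^{\lozenge }(k)\) entering these expressions can be evaluated from the variogram \(\overline{R}(t) = {t}^{2H}\); a minimal sketch (helper names ours), including a truncated evaluation of μ22:

```python
def rho_diamond(j, beta):
    """Correlation of second-order fBm differences at lag j (H = beta + 1/2),
    obtained by applying the second-difference operator twice to R(t) = t^{2H}."""
    a = 1.0 + 2.0 * beta                      # a = 2H
    R = lambda t: float(abs(t)) ** a
    num = 0.5 * (-R(j + 2) + 4.0 * R(j + 1) - 6.0 * R(j)
                 + 4.0 * R(abs(j - 1)) - R(abs(j - 2)))
    return num / (4.0 * R(1) - R(2))          # tau^2 = 4 R(1) - R(2)

def mu22(beta, kmax=20000):
    """Truncated series mu_22 = 2 + 4 sum_{k>=1} |rho(k)|^2; the terms
    decay like k^{4 beta - 6}, so the truncation error is tiny."""
    return 2.0 + 4.0 * sum(rho_diamond(k, beta) ** 2 for k in range(1, kmax + 1))
```

For a Brownian path (β = 0) one finds \({\rho }^{\lozenge }(1) = -\frac{1}{2}\) and \({\rho }^{\lozenge }(k) = 0\) for k ≥ 2, so μ22 = 3.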

5 Proofs

Let us start by noting that the intermittency process σ is assumed to be càdlàg, and thus the left-limit process \(\sigma _{-}\) is locally bounded. Consequently, w.l.o.g. σ can be assumed to be bounded on compact intervals by a standard localization procedure (see e.g. Sect. 3 in [5] for more details). We also remark that the process F defined by (9) is continuous. Hence, F is locally bounded and can be assumed to be bounded on compact intervals w.l.o.g. by the same localization procedure.

Below, all positive constants are denoted by C or C p if they depend on some parameter p. In the following we present three technical lemmas.

Lemma 2.

Under Assumption  1 we have that

$$\mathbb{E}(\vert \lozenge _{i}^{n}\!\!X{\vert }^{p}) \leq C_{ p}{(\tau _{n}^{\lozenge })}^{p},\qquad i = 2,\ldots ,[t/\Delta _{ n}]$$
(42)

for all p > 0.

Proof of Lemma  2: Recall that due to Assumption 1(ii) the function | g (2) | is non-increasing on \((a,\infty )\) for some a > 0 and assume w.l.o.g. that a > 1. By the decomposition (12) and Burkholder’s inequality we deduce that

$$\begin{array}{rcl} \mathbb{E}(\vert \lozenge _{i}^{n}\!\!X{\vert }^{p})& \leq & C_{ p}\left ({(\tau _{n}^{\lozenge })}^{p}\right. \\ & +& \mathbb{E}\left.{\left (\int \limits _{0}^{\infty }{\left (g(s + 2\Delta _{ n}) - 2g(s + \Delta _{n}) + g(s)\right )}^{2}\sigma _{ (i-2)\Delta _{n}-s}^{2}ds\right )}^{p/2}\right ),\\ \end{array}$$

since σ is bounded on compact intervals. We immediately obtain the estimates

$$\int \limits _{0}^{1}{\left (g(s + 2\Delta _{ n}) - 2g(s + \Delta _{n}) + g(s)\right )}^{2}\sigma _{ (i-2)\Delta _{n}-s}^{2}ds \leq C{(\tau _{ n}^{\lozenge })}^{2},$$
$$\int \limits _{1}^{a}{\left (g(s + 2\Delta _{ n}) - 2g(s + \Delta _{n}) + g(s)\right )}^{2}\sigma _{ (i-2)\Delta _{n}-s}^{2}ds \leq C\Delta _{ n}^{2},$$

because g (2) is continuous on \((0,\infty )\) and σ is bounded on compact intervals. On the other hand, since | g (2) | is non-increasing on \((a,\infty )\), we deduce that

$$\int \limits _{a}^{\infty }{\left (g(s + 2\Delta _{ n}) - 2g(s + \Delta _{n}) + g(s)\right )}^{2}\sigma _{ (i-2)\Delta _{n}-s}^{2}ds \leq \Delta _{ n}^{2}F_{ (i-2)\Delta _{n}}.$$

Finally, the boundedness of the process F implies (42). □ 

Next, for any stochastic process f and any s > 0, we define the (possibly infinite) measure

$$\pi _{f,s}^{\lozenge n}(A) = \frac{\int \limits _{A}{\left (g(x + 2\Delta _{n}) - 2g(x + \Delta _{n}) + g(x)\right )}^{2}f_{s-x}^{2}dx} {{(\tau _{n}^{\lozenge })}^{2}} ,\qquad A \in \mathcal{B}(\mathbb{R}_{>0}),$$
(43)

and set \(\overline{\pi }_{f,s}^{\lozenge n}(x) = \pi _{f,s}^{\lozenge n}(\{y :\ y > x\})\).

Lemma 3.

Under Assumption  1 it holds that

$$\sup _{s\in [0,t]}\overline{\pi }_{\sigma ,s}^{\lozenge n}(\epsilon ) \leq C\overline{\pi }_{ n}^{\lozenge }(\epsilon )$$
(44)

for any \(\epsilon > 0\) , where the measure \(\pi _{n}^{\lozenge }\) is given by  (13) .

Proof of Lemma  3: Recall again that | g (2) | is non-increasing on \((a,\infty )\) for some a > 0, and assume w.l.o.g. that \(a > \epsilon \). Since the processes σ and F are bounded we deduce exactly as in the previous proof that

$$\begin{array}{rcl} & & \int \limits _{\epsilon }^{\infty }{\left (g(x + 2\Delta _{ n}) - 2g(x + \Delta _{n}) + g(x)\right )}^{2}\sigma _{ s-x}^{2}dx \\ & & = \int \limits _{\epsilon }^{a}{\left (g(x + 2\Delta _{ n}) - 2g(x + \Delta _{n}) + g(x)\right )}^{2}\sigma _{ s-x}^{2}dx \\ & & +\int \limits _{a}^{\infty }{\left (g(x + 2\Delta _{ n}) - 2g(x + \Delta _{n}) + g(x)\right )}^{2}\sigma _{ s-x}^{2}dx \leq C(\overline{\pi }_{ n}^{\lozenge }(\epsilon ) + \Delta _{ n}^{2}).\end{array}$$

This completes the proof of Lemma 3. □ 

Finally, the last lemma gives a bound for the correlation function \(r_{n}^{\lozenge }(j)\).

Lemma 4.

Under Assumption  2 there exists a sequence \((h(j))_{j\geq 1}\) such that

$$\vert r_{n}^{\lozenge }(j)\vert \leq h(j),\qquad \sum \limits _{j=1}^{\infty }h(j) < \infty ,$$
(45)

for all j ≥ 1.

Proof of Lemma  4: This result follows directly from Lemma 1 in [7]. Recall that \(r_{n}^{\lozenge }(j) \rightarrow {\rho }^{\lozenge }(j)\) and \(\sum \limits _{j=1}^{\infty }\vert {\rho }^{\lozenge }(j)\vert < \infty \), so the assertion is not really surprising. □ 

Observe that Lemma 4 implies the conditions (16) and (17).

5.1 Proof of Theorem 1

In the following we will prove Theorems 1 and 2 only for the case \(k = 1\), \(p_{1} = p\). The general case can be obtained in a similar manner by an application of the Hölder inequality.

Note that \({\mathit{MPV }}^{\lozenge }(X,p)_{t}^{n}\) is increasing in t and the limit process of (22) is continuous in t. Thus, it is sufficient to show the pointwise convergence

$${\mathit{MPV }}^{\lozenge }(X,p)_{ t}^{n}{ \mathbb{P} \atop \rightarrow } m_{p} \int \limits _{0}^{t}\vert \sigma _{ s}{\vert }^{p}ds,$$

where \(m_{p} = \mathbb{E}(\vert N(0,1){\vert }^{p})\). We perform the proof of Theorem 1 in two steps.
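The constant m p admits the closed form \(m_{p} = {2}^{p/2}\Gamma ((p + 1)/2)/\sqrt{\pi }\); a one-line sketch (helper name ours):

```python
import math

def m_p(p):
    """Absolute moment m_p = E|N(0,1)|^p = 2^{p/2} Gamma((p+1)/2) / sqrt(pi)."""
    return 2.0 ** (p / 2.0) * math.gamma((p + 1.0) / 2.0) / math.sqrt(math.pi)
```

For instance, m 2 = 1 and m 4 = 3, the values used in the normalizations of Sect. 4.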

  • The crucial approximation: First of all, we prove that we can use the approximation \(\lozenge _{i}^{n}X \approx \sigma _{(i-2)\Delta _{n}}\lozenge _{i}^{n}G\) without changing the limit of Theorem 1, i.e. we show that

    $$\Delta _{n}{(\tau _{n}^{\lozenge })}^{-p} \sum \limits _{i=2}^{[t/\Delta _{n}]}\left (\vert \lozenge _{ i}^{n}\!\!X{\vert }^{p} -\vert \sigma _{ (i-2)\Delta _{n}}\lozenge _{i}^{n}\!G{\vert }^{p}\right ){ \mathbb{P} \atop \rightarrow } 0.$$
    (46)

    An application of the inequality \(\vert \vert x{\vert }^{p} -\vert y{\vert }^{p}\vert \leq p\vert x - y\vert (\vert x{\vert }^{p-1} + \vert y{\vert }^{p-1})\) for p > 1 and \(\vert \vert x{\vert }^{p} -\vert y{\vert }^{p}\vert \leq \vert x - y{\vert }^{p}\) for p ≤ 1, (42) and the Cauchy-Schwarz inequality implies that the above convergence follows from

    $$\Delta _{n}{(\tau _{n}^{\lozenge })}^{-2} \sum \limits _{i=2}^{[t/\Delta _{n}]}\mathbb{E}(\vert \lozenge _{ i}^{n}\!\!X - \sigma _{ (i-2)\Delta _{n}}\lozenge _{i}^{n}\!G{\vert }^{2})\rightarrow 0.$$
    (47)

    Observe the decomposition

    $$\lozenge _{i}^{n}X - \sigma _{ (i-2)\Delta _{n}}\lozenge _{i}^{n}\!G = A_{ i}^{n} + B_{ i}^{n,\epsilon } + C_{ i}^{n,\epsilon }$$

    with

    $$\begin{array}{rcl} A_{i}^{n}& =& \int \limits _{(i-1)\Delta _{n}}^{i\Delta _{n} }g(i\Delta _{n} - s)(\sigma _{s} - \sigma _{(i-2)\Delta _{n}})W(ds) \\ & & \quad + \int \limits _{(i-2)\Delta _{n}}^{(i-1)\Delta _{n} }\left (g(i\Delta _{n} - s) - 2g((i - 1)\Delta _{n} - s)\right )(\sigma _{s} - \sigma _{(i-2)\Delta _{n}})W(ds) \\ B_{i}^{n,\epsilon }& =& \int \limits _{(i-2)\Delta _{n}-\epsilon }^{(i-2)\Delta _{n} }\left (g(i\Delta _{n} - s) - 2g((i - 1)\Delta _{n} - s) + g((i - 2)\Delta _{n} - s)\right )\sigma _{s}W(ds) \\ & & \quad - \sigma _{(i-2)\Delta _{n}} \int \limits _{(i-2)\Delta _{n}-\epsilon }^{(i-2)\Delta _{n} }\left (g(i\Delta _{n} - s) - 2g((i - 1)\Delta _{n} - s) + g((i - 2)\Delta _{n} - s)\right )W(ds) \\ C_{i}^{n,\epsilon }& =& \int \limits _{-\infty }^{(i-2)\Delta _{n}-\epsilon }\left (g(i\Delta _{ n} - s) - 2g((i - 1)\Delta _{n} - s) + g((i - 2)\Delta _{n} - s)\right )\sigma _{s}W(ds) \\ & & \quad - \sigma _{(i-2)\Delta _{n}} \int \limits _{-\infty }^{(i-2)\Delta _{n}-\epsilon }\left (g(i\Delta _{ n} - s) - 2g((i - 1)\Delta _{n} - s) + g((i - 2)\Delta _{n} - s)\right )W(ds) \\ \end{array}$$

    Lemma 3 and the boundedness of σ imply that

    $$\Delta _{n}{(\tau _{n}^{\lozenge })}^{-2} \sum \limits _{i=2}^{[t/\Delta _{n}]}\mathbb{E}(\vert C_{ i}^{n,\epsilon }{\vert }^{2}) \leq C\overline{\pi }_{ n}^{\lozenge }(\epsilon ),$$
    (48)

    and by (11) and Assumption 2(i) we deduce that

    $$\Delta _{n}{(\tau _{n}^{\lozenge })}^{-2} \sum \limits _{i=2}^{[t/\Delta _{n}]}\mathbb{E}(\vert C_{ i}^{n,\epsilon }{\vert }^{2})\rightarrow 0,$$

    as \(n \rightarrow \infty \), for all \(\epsilon > 0\). Next, set \(v(s,\eta ) =\sup \{ \vert \sigma _{s} - \sigma _{r}{\vert }^{2}\vert \ r \in [-t,t],\ \vert r - s\vert \leq \eta \}\) for s ∈ [ − t, t] and denote by Δσ the jump process associated with σ. We obtain the inequality

    $$\begin{array}{rcl} \Delta _{n}{(\tau _{n}^{\lozenge })}^{-2} \sum \limits _{i=2}^{[t/\Delta _{n}]}\mathbb{E}(\vert A_{ i}^{n}{\vert }^{2})& \leq & \Delta _{ n} \sum \limits _{i=2}^{[t/\Delta _{n}]}\mathbb{E}(v((i - 2)\Delta _{ n},2\Delta _{n})) \\ & \leq & \lambda + \Delta _{n}\mathbb{E}\left (\sum \limits _{s\in [-t,t]}\vert \Delta \sigma _{s}{\vert }^{2}1_{\{ \vert \Delta \sigma _{s}\vert \geq \lambda \}}\right ) = \theta (\lambda ,n) \end{array}$$
    (49)

    for any λ > 0. We readily deduce that

    $$\lim _{\lambda \rightarrow 0}\limsup _{n\rightarrow \infty }\theta (\lambda ,n) = 0.$$

    Next, observe the decomposition \(B_{i}^{n,\epsilon } = B_{i}^{n,\epsilon }(1) + B_{i}^{n,\epsilon }(2)\) with

    $$\begin{array}{rcl} B_{i}^{n,\epsilon }(1)& =& \int \limits _{(i-2)\Delta _{n}-\epsilon }^{(i-2)\Delta _{n} }\left (g(i\Delta _{n} - s) - 2g((i - 1)\Delta _{n} - s) + g((i - 2)\Delta _{n} - s)\right ) \\ & & \quad \times (\sigma _{s} - \sigma _{(i-2)\Delta _{n}-\epsilon })W(ds) \\ B_{i}^{n,\epsilon }(2)& =& (\sigma _{ (i-2)\Delta _{n}-\epsilon } - \sigma _{(i-2)\Delta _{n}}) \\ & & \quad \times \int \limits _{(i-2)\Delta _{n}-\epsilon }^{(i-2)\Delta _{n} }\left (g(i\Delta _{n} - s) - 2g((i - 1)\Delta _{n} - s) + g((i - 2)\Delta _{n} - s)\right )W(ds).\end{array}$$

    We deduce that

    $$\begin{array}{rcl} & & \Delta _{n}{(\tau _{n}^{\lozenge })}^{-2} \sum \limits _{i=2}^{[t/\Delta _{n}]}\mathbb{E}(\vert B_{ i}^{n,\epsilon }(1){\vert }^{2}) \leq \Delta _{ n} \sum \limits _{i=2}^{[t/\Delta _{n}]}\mathbb{E}(v((i - 2)\Delta _{ n},\epsilon )), \\ & & \Delta _{n}{(\tau _{n}^{\lozenge })}^{-2} \sum \limits _{i=2}^{[t/\Delta _{n}]}\mathbb{E}(\vert B_{ i}^{n,\epsilon }(2){\vert }^{2}) \leq \Delta _{ n} \sum \limits _{i=2}^{[t/\Delta _{n}]}\mathbb{E}{(v{((i - 2)\Delta _{ n},\epsilon )}^{2})}^{\frac{1} {2} }. \end{array}$$
    (50)

    By using the same arguments as in (49) we conclude that both terms converge to zero and we obtain (47), which completes the first step of the proof. □ 

  • The blocking technique: Having justified the approximation \(\lozenge _{i}^{n}X\,\approx \,\sigma _{(i-2)\Delta _{n}}\lozenge _{i}^{n}G\) in the previous step, we now apply a blocking technique for \(\sigma _{(i-2)\Delta _{n}}\lozenge _{i}^{n}G\): we divide the interval [0, t] into big sub-blocks of length \({l}^{-1}\) and freeze the intermittency process σ at the beginning of each big sub-block. Later we let l tend to infinity.

    For any fixed \(l \in \mathbb{N}\), observe the decomposition

    $${ \mathit{MPV }}^{\lozenge }(X,p)_{ t}^{n}-m_{ p} \int \limits _{0}^{t}\vert \sigma _{ s}{\vert }^{p}ds = \Delta _{ n}{(\tau _{n}^{\lozenge })}^{-p}\ \sum \limits _{i=2}^{[t/\Delta _{n}]}\ \left (\vert \lozenge _{ i}^{n}X{\vert }^{p} -\vert \sigma _{ (i-2)\Delta _{n}}\lozenge _{i}^{n}\!G{\vert }^{p}\right )+R_{ t}^{n,l},$$

    where

    $$\begin{array}{rcl} R_{t}^{n,l}& =& \Delta _{ n}{(\tau _{n}^{\lozenge })}^{-p}\left (\sum \limits _{i=2}^{[t/\Delta _{n}]}\vert \sigma _{ (i-2)\Delta _{n}}\lozenge _{i}^{n}\!G{\vert }^{p} -\sum \limits _{j=1}^{[lt]}\vert \sigma _{\frac{ j-1} {l} }{\vert }^{p} \sum \limits _{i\in I_{l}(j)}\vert \lozenge _{i}^{n}\!G{\vert }^{p}\right ) \\ & +& \left (\Delta _{n}{(\tau _{n}^{\lozenge })}^{-p} \sum \limits _{j=1}^{[lt]}\vert \sigma _{\frac{ j-1} {l} }{\vert }^{p} \sum \limits _{i\in I_{l}(j)}\vert \lozenge _{i}^{n}\!G{\vert }^{p} - m_{ p}{l}^{-1} \sum \limits _{j=1}^{[lt]}\vert \sigma _{\frac{ j-1} {l} }{\vert }^{p}\right ) \\ & +& m_{p}\left ({l}^{-1} \sum \limits _{j=1}^{[lt]}\vert \sigma _{\frac{ j-1} {l} }{\vert }^{p} -\int \limits _{0}^{t}\vert \sigma _{ s}{\vert }^{p}ds\right ), \\ \end{array}$$

    and

    $$I_{l}(j) = \left \{i\vert \ i\Delta _{n} \in \left (\left.\frac{j - 1} {l} , \frac{j} {l}\right ]\right.\right \},\qquad j \geq 1.$$

    Notice that the third summand in the above decomposition converges to 0 in probability due to Riemann integrability of σ. By Theorem 1 in [8] we know that \({\mathit{MPV }}^{\lozenge }(G,p)_{t}^{n}{ \text{ u.c.p.} \atop \rightarrow } m_{p}t\), because the condition (16) is satisfied (see Lemma 4). This implies the negligibility of the second summand in the decomposition when we first let \(n \rightarrow \infty \) and then \(l \rightarrow \infty \). As σ is càdlàg and bounded on compact intervals, we finally deduce that

    $$\lim _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb{P}(\vert R_{t}^{n,l}\vert > \epsilon ) = 0,$$

    for any \(\epsilon > 0\). This completes the proof of the second step and of Theorem 1. □ 

5.2 Proof of Theorem 2

Here we apply the same scheme of proof as for Theorem 1. We start with the justification of the approximation \(\lozenge _{i}^{n}X \approx \sigma _{(i-2)\Delta _{n}}\lozenge _{i}^{n}\!\!G\) and proceed with the blocking technique.

  • The crucial approximation: Here we prove that

    $$\Delta _{n}^{1/2}{(\tau _{ n}^{\lozenge })}^{-p} \sum \limits _{i=2}^{[t/\Delta _{n}]}\left (\vert \lozenge _{ i}^{n}\!\!X{\vert }^{p} -\vert \sigma _{ (i-2)\Delta _{n}}\lozenge _{i}^{n}\!G{\vert }^{p}\right ){ \mathbb{P} \atop \rightarrow } 0.$$
    (51)

    Again we apply the inequality \(\vert \vert x{\vert }^{p} -\vert y{\vert }^{p}\vert \leq p\vert x - y\vert (\vert x{\vert }^{p-1} + \vert y{\vert }^{p-1})\) for p > 1, \(\vert \vert x{\vert }^{p} -\vert y{\vert }^{p}\vert \leq \vert x - y{\vert }^{p}\) for p ≤ 1 and (42) to deduce that

    $$\begin{array}{rcl} & & \Delta _{n}^{1/2}{(\tau _{ n}^{\lozenge })}^{-p} \sum \limits _{i=2}^{[t/\Delta _{n}]}\mathbb{E}\left (\left \vert \vert \lozenge _{ i}^{n}\!\!X{\vert }^{p} -\vert \sigma _{ (i-2)\Delta _{n}}\lozenge _{i}^{n}\!\!G{\vert }^{p}\right \vert \right ) \leq \Delta _{ n}^{1/2}{(\tau _{ n}^{\lozenge })}^{-(p\wedge 1)} \\ & & \quad \times \sum \limits _{i=2}^{[t/\Delta _{n}]}{\left (\mathbb{E}(\vert \lozenge _{ i}^{n}\!\!X - \sigma _{ (i-2)\Delta _{n}}\lozenge _{i}^{n}\!\!G{\vert }^{2})\right )}^{\frac{p\wedge 1} {2} }.\end{array}$$

    Now we use a similar decomposition as in the proof of Theorem 1:

    $$\lozenge _{i}^{n}X - \sigma _{ (i-2)\Delta _{n}}\lozenge _{i}^{n}\!\!G = A_{ i}^{n} + B_{ i}^{n,\epsilon _{n}^{(1)} } + \sum \limits _{j=1}^{l}C_{ i}^{n,\epsilon _{n}^{(j)},\epsilon _{ n}^{(j+1)} },$$

    where \(A_{i}^{n}\), \(B_{i}^{n,\epsilon _{n}^{(1)} }\) are defined as above, \(0 < \epsilon _{n}^{(1)} < \cdots < \epsilon _{n}^{(l)} < \epsilon _{n}^{(l+1)} = \infty \) and

    $$\begin{array}{rl} C_{i}^{n,\epsilon _{n}^{(j)},\epsilon _{ n}^{(j+1)} } & = \int \limits _{(i-2)\Delta _{n}-\epsilon _{n}^{(j+1)}}^{(i-2)\Delta _{n}-\epsilon _{n}^{(j)} }\left (g(i\Delta _{n} - s) - 2g((i - 1)\Delta _{n} - s)\right. \\ &\left.\quad + g((i - 2)\Delta _{n} - s)\right )\sigma _{s}W(ds) \\ & - \sigma _{(i-2)\Delta _{n}} \int \limits _{(i-2)\Delta _{n}-\epsilon _{n}^{(j+1)}}^{(i-2)\Delta _{n}-\epsilon _{n}^{(j)} }\left (g(i\Delta _{n} - s) - 2g((i - 1)\Delta _{n} - s) + g((i - 2)\Delta _{n} - s)\right )W(ds).\end{array}$$
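    The integrand above shows that \(\lozenge _{i}^{n}\) acts as a second-order difference operator on the kernel. A minimal simulation sketch of \(\lozenge _{i}^{n}G\) follows; the specific kernel \(g(x) = {x}^{\beta }{e}^{-\lambda x}\) (a standard example of a function with \(g(x) \simeq {x}^{\beta }\) near 0) and all parameter values are our illustrative choices, not taken from the text:

```python
import math
import random


def second_order_increments(beta=0.2, lam=1.0, delta=0.01, n=300, burn=1500, seed=3):
    """Simulate G at grid points t = i*delta via a truncated discretization of
    G_t = int_{-inf}^t g(t - s) W(ds), then return the second-order differences
    G_i - 2*G_{i-1} + G_{i-2} (a discrete analogue of the diamond increments)."""
    rng = random.Random(seed)
    g = lambda x: x ** beta * math.exp(-lam * x)  # illustrative kernel choice
    # Wiener increments on a grid, with a burn-in tail approximating -infinity
    dW = [rng.gauss(0.0, math.sqrt(delta)) for _ in range(burn + n)]
    G = []
    for i in range(n):
        t_idx = burn + i
        # G at time i*delta: sum of g evaluated at increment midpoints times dW
        G.append(sum(g((t_idx - k - 0.5) * delta) * dW[k] for k in range(t_idx)))
    return [G[i] - 2 * G[i - 1] + G[i - 2] for i in range(2, n)]
```

    The truncation at the burn-in horizon plays the role of the cut-off points \(\epsilon _{n}^{(j)}\) in the decomposition above.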

    An application of Assumptions 1, 2 and 3-γ, for γ ∈ (0, 1] with \(\gamma (p \wedge 1) > \frac{1} {2}\), and Lemma 3 implies that (recall that σ is bounded on compact intervals)

    $$\begin{array}{rl} \Delta _{n}^{1/2}{(\tau _{ n}^{\lozenge })}^{-p} \sum \limits _{i=2}^{[t/\Delta _{n}]}{\left (\mathbb{E}(\vert A_{ i}^{n}{\vert }^{2})\right )}^{\frac{p\wedge 1} {2} } \leq &C\Delta _{n}^{\gamma \left (p\wedge 1\right )-\frac{1} {2} }, \\ \Delta _{n}^{1/2}{(\tau _{ n}^{\lozenge })}^{-p} \sum \limits _{i=2}^{[t/\Delta _{n}]}{\left (\mathbb{E}(\vert B_{ i}^{n,\epsilon _{n}^{(1)} }{\vert }^{2})\right )}^{\frac{p\wedge 1} {2} } \leq &C\Delta _{n}^{-1/2}\vert \epsilon _{n}^{(1)}{\vert }^{\gamma \left (p\wedge 1\right )}, \\ \Delta _{n}^{1/2}{(\tau _{ n}^{\lozenge })}^{-p} \sum \limits _{i=2}^{[t/\Delta _{n}]}{\left (\mathbb{E}(\vert C_{ i}^{n,\epsilon _{n}^{(j)},\epsilon _{ n}^{(j+1)} }{\vert }^{2})\right )}^{\frac{p\wedge 1} {2} } \leq &C\Delta _{n}^{-1/2}\vert \epsilon _{ n}^{(j+1)}{\vert }^{\gamma \left (p\wedge 1\right )}\vert \overline{\pi }_{ n}^{\lozenge }(\epsilon _{ n}^{(j+1)}) -\overline{\pi }_{ n}^{\lozenge }(\epsilon _{ n}^{(j)}){\vert }^{\frac{p\wedge 1} {2} }, \\ \Delta _{n}^{1/2}{(\tau _{ n}^{\lozenge })}^{-p} \sum \limits _{i=2}^{[t/\Delta _{n}]}{\left (\mathbb{E}(\vert C_{ i}^{n,\epsilon _{n}^{(l)},\epsilon _{ n}^{(l+1)} }{\vert }^{2})\right )}^{\frac{p\wedge 1} {2} } \leq &C\Delta _{n}^{-1/2}\overline{\pi }_{n}^{\lozenge }{(\epsilon _{n}^{(l)})}^{\frac{p\wedge 1} {2} },\end{array}$$
    (52)

    for 1 ≤ j ≤ l − 1. In [8] (see Lemma 3 therein) we have proved the following result: if the condition (18) is satisfied then there exist sequences

    \(0 < \epsilon _{n}^{(1)} < \cdots < \epsilon _{n}^{(l)} < \epsilon _{n}^{(l+1)} = \infty \)

    such that all terms on the right-hand side of (52) converge to 0.

    Set λ = (3 − 2β)(1 − δ) for some δ > 0 such that λ > 1 ∕ (p ∧ 1). This is possible, because 3 − 2β ∈ (2, 4) and the assumptions of Theorem 2 imply that p > 1 ∕ 2. We obtain that

    $$\overline{\pi }_{n}^{\lozenge }(\epsilon _{ n}) \leq C\Delta _{n}^{\lambda (1-\kappa )},$$

    for any \(\epsilon _{n} = \Delta _{n}^{\kappa }\), κ ∈ (0, 1), by (11) and Assumption 2(i). Thus, we deduce (18), which implies the convergence in (51). □ 
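    For completeness, the admissible range of δ in the choice above can be made explicit (this computation is ours and only spells out what the text asserts): since \(3 - 2\beta > 2\) and \(p \wedge 1 > \frac{1} {2}\),

    $$\lambda = (3 - 2\beta )(1 - \delta ) > \frac{1} {p \wedge 1}\quad \Longleftrightarrow \quad \delta < 1 - \frac{1} {(3 - 2\beta )(p \wedge 1)},$$

    and the right-hand side is strictly positive because \((3 - 2\beta )(p \wedge 1) > 2 \cdot \frac{1} {2} = 1\); hence such a δ > 0 exists.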

  • The blocking technique: Again we only consider the case d = 1, k = 1 and \(p_{1} = p\). We recall the decomposition from the proof of Theorem 1:

    $$\begin{array}{rcl} & & \Delta _{n}^{-1/2}\left ({\mathit{MPV }}^{\lozenge }(X,p)_{ t}^{n} - m_{ p} \int \limits _{0}^{t}\vert \sigma _{ s}{\vert }^{p}ds\right ) \\ & & \quad = \Delta _{n}^{-1/2}\left (\Delta _{ n}{(\tau _{n}^{\lozenge })}^{-p} \sum \limits _{j=1}^{[lt]}\vert \sigma _{\frac{ j-1} {l} }{\vert }^{p} \sum \limits _{i\in I_{l}(j)}\vert \lozenge _{i}^{n}\!\!G{\vert }^{p} - m_{ p}{l}^{-1} \sum \limits _{j=1}^{[lt]}\vert \sigma _{\frac{ j-1} {l} }{\vert }^{p}\right ) \\ & & \quad + \Delta _{n}^{1/2}{(\tau _{ n}^{\lozenge })}^{-p} \sum \limits _{i=2}^{[t/\Delta _{n}]}\left (\vert \lozenge _{ i}^{n}\!\!X{\vert }^{p} -\vert \sigma _{ (i-2)\Delta _{n}}\lozenge _{i}^{n}\!G{\vert }^{p}\right ) + \overline{R}_{ t}^{n,l}, \end{array}$$
    (53)

    where

    $$\begin{array}{rcl} \overline{R}_{t}^{n,l}& & = \Delta _{ n}^{1/2}{(\tau _{ n}^{\lozenge })}^{-p}\left (\sum \limits _{i=2}^{[t/\Delta _{n}]}\vert \sigma _{ (i-2)\Delta _{n}}\lozenge _{i}^{n}\!\!G{\vert }^{p} -\sum \limits _{j=1}^{[lt]}\vert \sigma _{\frac{ j-1} {l} }{\vert }^{p} \sum \limits _{i\in I_{l}(j)}\vert \lozenge _{i}^{n}\!\!G{\vert }^{p}\right ) \\ & & \quad + m_{p}\Delta _{n}^{-1/2}\left ({l}^{-1} \sum \limits _{j=1}^{[lt]}\vert \sigma _{\frac{ j-1} {l} }{\vert }^{p} -\int \limits _{0}^{t}\vert \sigma _{ s}{\vert }^{p}ds\right ).\end{array}$$

    Note that the negligibility of the second summand in the decomposition (53) has been shown in the previous step. The convergence

    $$\lim _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb{P}(\vert \overline{R}_{t}^{n,l}\vert > \epsilon ) = 0,$$

    for any \(\epsilon > 0\), has been shown in [7] (see the proof of Theorem 7 therein). Finally, we concentrate on the first summand of the decomposition (53). By Remark 11 in [8] we know that \((G_{t},\Delta _{n}^{-1/2}({\mathit{MPV }}^{\lozenge }(G,p)_{t}^{n} - m_{p}t)) \Rightarrow (G_{t},\sqrt{\mu }W{^\prime}_{t})\), where μ is defined by (25), because \(r_{n}^{\lozenge }(j) \rightarrow {\rho }^{\lozenge }(j)\) and condition (17) holds (see again Lemma 4). An application of the condition D′ from Proposition 2 in [1] shows that

    $$\Delta _{n}^{-1/2}({\mathit{MPV }}^{\lozenge }(G,p)_{ t}^{n} - m_{ p}t){ st \atop \rightarrow } \sqrt{\mu }W{^\prime}_{t}.$$

    Now we deduce by the properties of stable convergence:

    $$\begin{array}{rcl} & & \Delta _{n}^{-1/2}\left (\Delta _{ n}{(\tau _{n}^{\lozenge })}^{-p} \sum \limits _{j=1}^{[lt]}\vert \sigma _{\frac{ j-1} {l} }{\vert }^{p} \sum \limits _{i\in I_{l}(j)}\vert \lozenge _{i}^{n}\!\!G{\vert }^{p} - m_{ p}{l}^{-1} \sum \limits _{j=1}^{[lt]}\vert \sigma _{\frac{ j-1} {l} }{\vert }^{p}\right ) \\ & & \quad { st \atop \rightarrow } \sqrt{\mu }\sum \limits _{j=1}^{[lt]}\vert \sigma _{\frac{ j-1} {l} }{\vert }^{p}\Delta _{ j}^{l}W{^\prime}, \\ \end{array}$$

    for any fixed l. On the other hand, we have that

    $$\sqrt{\mu }\sum \limits _{j=1}^{[lt]}\vert \sigma _{\frac{ j-1} {l} }{\vert }^{p}\Delta _{ j}^{l}W{^\prime}{ \mathbb{P} \atop \rightarrow } \sqrt{\mu }\int \limits _{0}^{t}\vert \sigma _{ s}{\vert }^{p}dW{^\prime}_{ s}$$

    as \(l \rightarrow \infty \). This completes the proof of Theorem 2. □
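    The final Riemann-sum approximation of the stochastic integral can also be illustrated numerically. The sketch below (entirely ours; the particular choice \(\sigma _{s} = 1 + 0.5\sin (2\pi s)\), the function name and all parameters are assumptions for illustration) compares the block sum with a fine-grid proxy of the Itô integral on [0, 1]:

```python
import math
import random


def riemann_vs_integral(p=2.0, n_fine=1024, l_values=(4, 16, 64), n_mc=200, seed=7):
    """Root-mean-square distance between sum_j |sigma_{(j-1)/l}|^p * Delta_j^l W'
    and a fine-grid proxy of int_0^1 |sigma_s|^p dW'_s, for several block counts l."""
    rng = random.Random(seed)
    dt = 1.0 / n_fine
    sigma = lambda s: 1.0 + 0.5 * math.sin(2.0 * math.pi * s)  # illustrative sigma
    rmse = {}
    for l in l_values:
        step = n_fine // l  # fine-grid steps per block of length 1/l
        acc = 0.0
        for _ in range(n_mc):
            dW = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n_fine)]
            # Fine-grid proxy of the Ito integral int_0^1 |sigma_s|^p dW'_s
            integral = sum(abs(sigma(i * dt)) ** p * dW[i] for i in range(n_fine))
            # Block sum with sigma frozen at the left endpoint of each block
            blocks = sum(
                abs(sigma(j / l)) ** p * sum(dW[j * step:(j + 1) * step])
                for j in range(l)
            )
            acc += (blocks - integral) ** 2
        rmse[l] = math.sqrt(acc / n_mc)
    return rmse
```

    Since σ is Lipschitz here, the root-mean-square error decays like a constant times 1 ∕ l, so the computed errors shrink as l grows, mirroring the convergence in probability stated above.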