1 Introduction

Professor C.R. Rao has contributed extensively to the theory of estimation, multivariate analysis, characterization problems, design of experiments, matrix algebra and several other branches of statistics. As his student in the Master’s programme at the Indian Statistical Institute during the years 1960–62, and later as his colleague at the Indian Statistical Institute, it gives me great pleasure to contribute this article on statistical inference for stochastic processes to the special issue dedicated to Professor C.R. Rao during his birth centenary year.

Statistical inference for fractional diffusion processes satisfying stochastic differential equations driven by a fractional Brownian motion (fBm) has been studied earlier, and a comprehensive survey of various methods is given in [9, 12]. There has been recent interest in studying similar problems for stochastic processes driven by \(\alpha \)-stable noises and fractional Levy processes.

Prakasa Rao [11] investigated minimum \(L_1\)-norm estimation for fractional Ornstein–Uhlenbeck type process driven by a fractional Brownian motion. Diop and Yode [3] studied minimum distance parameter estimation for Ornstein–Uhlenbeck processes driven by a Levy process. Parametric estimation for Ornstein–Uhlenbeck process driven by fractional Levy process is discussed in [14].

In modeling processes with possible long-range dependence, it is possible that no special functional form is available for modeling the trend a priori and it is necessary to estimate the trend function based on the observed process over an interval. This problem of estimation is known as nonparametric function estimation in classical statistical inference (cf. [10]).

Nonparametric estimation of the trend for stochastic differential equations driven by fractional Brownian motion is investigated in [8]. Following techniques in [8], Zhang [15] studied a similar problem when the driving force is a small \(\alpha \)-stable noise.

Our aim in this paper is to study nonparametric estimation of the trend function when the process is governed by a stochastic differential equation driven by a fractional Levy process, following the ideas of density function estimation and regression function estimation in classical statistical inference. Several methods are available for nonparametric function estimation, as described in [10]. The method of kernels is widely used for the estimation of a density function or a regression function, and it is known that the properties of such an estimator depend, in general, not on the choice of the kernel but on the choice of the bandwidth. Properties of kernel estimators of a density function and of a regression function are described in [10]. Our aim is to propose a kernel-type estimator for the trend function and study its properties. We will show that the kernel-type estimator is uniformly consistent over a class of trend functions, obtain its asymptotic distribution in the presence of small noise, and obtain the optimal rate of convergence of kernel-type estimators of the trend function. The results derived in this paper will be useful when there is no information on the functional form of the trend coefficient and the trend has to be estimated from the observed path of the underlying process.

2 Fractional Levy Process

We will now describe some properties of a fractional Levy process and properties of processes driven by a fractional Levy process. A fractional Levy process is a generalization of the integral representation of fractional Brownian motion.

Definition

(Marquardt [7]) Let \(\{L(t), t \in R\}\) be a zero mean two-sided Levy process with \(E([L(1)]^2)<\infty \) and without a Brownian component. For \(d \in (0, \frac{1}{2}),\) a stochastic process

$$\begin{aligned} L_t^d= \frac{1}{\Gamma (d+1)}\int _{-\infty }^\infty [(t-s)_+^d-(-s)_+^d]L(\mathrm{d}s), t \in R \end{aligned}$$
(2.1)

is called a fractional Levy process (fLp) where \(L(t)= L_1(t), t \ge 0\) and \(L(t)= -L_2(-t_-), t <0\) and \(\{L_1(t), t \ge 0\}\) and \(\{L_2(t), t \ge 0\}\) are two independent copies of a one-sided Levy process.
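For readers who wish to visualize such a process, the moving-average representation (2.1) can be discretized directly. The sketch below is an illustration only and involves several assumptions that are not part of the paper: the driver is taken to be a compound Poisson process with standard normal jumps (so that it has mean zero, finite variance and no Brownian component), the integral over \((-\infty ,t]\) is truncated at \(-M\), and all numerical values are arbitrary.

```python
import numpy as np
from scipy.special import gamma

def flp_path(t_grid, d=0.2, lam=5.0, M=50.0, seed=0):
    """Approximate sample path of L^d on t_grid via the representation (2.1).

    Hypothetical driver: compound Poisson process with rate `lam` and N(0,1)
    jumps, so E[L(1)] = 0 and E[L(1)^2] = lam < infinity; the integral over
    (-infinity, t] is truncated at -M.  All of these are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    t_grid = np.asarray(t_grid, dtype=float)
    T = float(t_grid.max())
    n_jumps = rng.poisson(lam * (T + M))
    jump_times = rng.uniform(-M, T, size=n_jumps)    # jump epochs of the driver L
    jump_sizes = rng.normal(0.0, 1.0, size=n_jumps)  # zero-mean jump sizes
    t = t_grid[:, None]
    s = jump_times[None, :]
    kernel = np.clip(t - s, 0.0, None) ** d - np.clip(-s, 0.0, None) ** d
    return kernel @ jump_sizes / gamma(d + 1.0)

path = flp_path(np.linspace(0.0, 1.0, 501))  # one approximate path of L^d on [0, 1]
```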

The following two results are due to [7].

Theorem 2.1

Let the function \(g\in H\) where H is the completion of \(L^1(R)\cap L^2(R)\) with respect to the norm \(||g||^2_H= E([L(1)]^2)\int _R(I^d_g)^2(u)\mathrm{d}u.\) Then

$$\begin{aligned} \int _Rg(s)\mathrm{d}L_s^d= \int _R(I^d_g)(u)\mathrm{d}L(u) \end{aligned}$$
(2.2)

where the equality holds in the \(L^2\) -sense and \(I^d_g\) denotes the Riemann–Liouville fractional integral defined by

$$\begin{aligned} (I^d_g)(x)=\frac{1}{\Gamma (d)}\int _x^\infty g(t) (t-x)^{d-1}\mathrm{d}t. \end{aligned}$$
(2.3)
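For a compactly supported integrand, the fractional integral (2.3) is straightforward to evaluate numerically; this is used again in the concluding remarks. The sketch below is an illustration under stated assumptions: the weak singularity at \(t=x\) is handled with scipy's algebraic-weight quadrature, and the test function g (supported on \([-1,1]\)) is a hypothetical choice.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def riemann_liouville(g, x, d, A=-1.0, B=1.0):
    """(I^d_g)(x) = (1/Gamma(d)) * int_x^infty g(t) (t - x)^(d-1) dt
    for a function g that vanishes outside [A, B]."""
    if x >= B:
        return 0.0
    if x >= A:
        # weight='alg' supplies the factor (t - x)^(d-1) on [x, B]
        val, _ = quad(g, x, B, weight='alg', wvar=(d - 1.0, 0.0))
    else:
        # no singularity when x < A, since t - x >= A - x > 0 on [A, B]
        val, _ = quad(lambda t: g(t) * (t - x) ** (d - 1.0), A, B)
    return val / gamma(d)

g = lambda u: 0.75 * (1.0 - u ** 2) if abs(u) <= 1.0 else 0.0  # hypothetical test function
print([round(riemann_liouville(g, x, d=0.2), 4) for x in (-2.0, 0.0, 0.9)])
```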

Suppose that \(Y= \int _R (I^d_g)(u)\mathrm{d}L(u).\) Following the results in [7, 13], it follows that the distribution of Y is infinitely divisible with characteristic function

$$\begin{aligned} E[e^{i u Y}]= \exp \left[ \int _R \int _R(e^{iu \;(I^d_g)(s)\;x}-1-iu\;(I^d_g)(s)\;x)\;\nu (\mathrm{d}x)\mathrm{d}s\right] \end{aligned}$$
(2.4)

where \(\nu (.)\) is the Levy measure corresponding to the process L. Furthermore, \(E(Y)=0\) and \(E(Y^2)= E[L(1)^2]\int _R|(I^d_g)(s)|^2\;\mathrm{d}s.\)
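As a quick consistency check (not part of the original text), these moment formulas can be read off from (2.4): since L(1) has mean zero, finite variance and no Gaussian component, \(E[L(1)^2]= \int _R x^2\nu (\mathrm{d}x),\) and expanding the integrand of (2.4) for small u gives

$$\begin{aligned} \log E[e^{iuY}]&= \int _R\int _R\left( e^{iu(I^d_g)(s)x}-1-iu(I^d_g)(s)x\right) \nu (\mathrm{d}x)\mathrm{d}s\\&= -\frac{u^2}{2}\int _R x^2\nu (\mathrm{d}x)\int _R(I^d_g)^2(s)\mathrm{d}s+o(u^2), \end{aligned}$$

so that differentiating at \(u=0\) yields \(E(Y)=0\) and \(E(Y^2)= E[L(1)^2]\int _R(I^d_g)^2(s)\mathrm{d}s.\)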

Theorem 2.1 represents an integral with respect to a fractional Levy process (fLp) as an integral of a transformed function with respect to the underlying Levy process. The next result gives a formula for the product moment of two integrals with respect to a fractional Levy process.

Theorem 2.2

Let \(|f|,|g| \in H.\) Then

$$\begin{aligned} E\left( \int _Rf(s)\mathrm{d}L^d_s\right) =0 \end{aligned}$$
(2.5)

and

$$\begin{aligned} E\left[ \int _R f(s)\mathrm{d}L_s^d \int _Rg(s)\mathrm{d}L_s^d \right] = \frac{\Gamma (1-2d)E([L(1)]^2)}{\Gamma (d)\Gamma (1-d)}\int _R\int _R f(t)g(s)|t-s|^{2d-1}\mathrm{d}s\mathrm{d}t. \end{aligned}$$
(2.6)

Bender et al. [1] presented a maximal inequality for a fractional Levy process.

Theorem 2.3

Let \(\{L^d_t, t \in R\}\) be a fractional Levy process. Then, for every \(p\ge 2\) and \(\delta >0\) such that \(d+\delta <\frac{1}{2},\) there exists a constant \(C_{p,\delta ,d}\) independent of the Levy process L such that for every \(T \ge 1,\)

$$\begin{aligned} E(\sup _{0\le t \le T}|L_t^d|^p) \le C_{p, \delta , d} E(|L(1)|^p)T^{p(d+\frac{1}{2}+\delta )}. \end{aligned}$$
(2.7)

Remarks

It is known that a fractional Levy process (fLp) is not a semimartingale in general for a broad class of Levy processes, and hence the notion of the Ito stochastic integral cannot be extended to stochastic integrals with respect to a fractional Levy process. However, it is possible to define a Wiener-type integral with respect to an fLp when the integrand is a non-random function, using ideas from fractional calculus. The covariance structure of an fLp is almost the same as that of a fractional Brownian motion. In fact,

$$\begin{aligned} \text {Cov}(L^d_t,L^d_s)= \frac{E[L(1)^2]}{2\Gamma (2d+2) \text {sin}(\pi (d+\frac{1}{2}))}[|t|^{2d+1}-|t-s|^{2d+1}+|s|^{2d+1}]. \end{aligned}$$
(2.8)

Furthermore, the increments of a fLp are stationary and exhibit long memory. Its sample paths are Holder continuous of order \(\beta < d\) and the fLp is not self-similar. For details, see [7]. For additional properties of fractional Levy processes, see [2, 4, 5].
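The covariance formula (2.8) is easy to evaluate numerically; the short helper below is purely illustrative, with m2 standing for \(E[L(1)^2].\)

```python
import numpy as np
from scipy.special import gamma

def flp_cov(t, s, d, m2=1.0):
    """Cov(L^d_t, L^d_s) as given in (2.8); m2 stands for E[L(1)^2]."""
    c = m2 / (2.0 * gamma(2.0 * d + 2.0) * np.sin(np.pi * (d + 0.5)))
    return c * (abs(t) ** (2 * d + 1) - abs(t - s) ** (2 * d + 1) + abs(s) ** (2 * d + 1))

# sanity check: as d -> 0 the formula reduces to m2 * min(t, s) for t, s >= 0,
# the covariance of the driving Levy process itself
print(flp_cov(1.0, 0.5, d=1e-9), flp_cov(1.0, 0.5, d=0.2))
```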

3 Preliminaries

Let us consider the stochastic differential equation

$$\begin{aligned} \mathrm{d}X_t= S(X_t)\; \mathrm{d}t + \epsilon \; \mathrm{d}L^d_t, X_0=x_0, 0 \le t \le T \end{aligned}$$
(3.1)

where the function S(.) is unknown and the constant d is known with \(0<d <\frac{1}{2}.\) We assume that \(T\ge 1\) hereafter. We would like to estimate the function S(.) based on the observation \(\{ X_t, 0 \le t \le T\}. \) Suppose \(\{x_t, 0 \le t \le T\}\) is the solution of the differential equation

$$\begin{aligned} \frac{\mathrm{d}x_t}{\mathrm{d}t}=S(x_t),\quad x_t\big |_{t=0}=x_0,\quad 0 \le t \le T. \end{aligned}$$
(3.2)

We assume that the trend coefficient S(x) is bounded and satisfies the following conditions which ensure the existence and uniqueness of the solution of Eq. (3.1): \( (A_1) :\) There exists a constant \(K > 0 \) such that \( \left| S(x)-S(y) \right| \le K |x-y|, x,y \in R\).

It is clear that the condition \((A_1)\) implies that there exists a constant \(M>0\) such that

$$\begin{aligned} |S(x)|\le M(1+|x|),\quad x \in R. \end{aligned}$$

Since the function \(x_t\) satisfies the ordinary differential Eq. (3.2), it follows that

$$\begin{aligned} |S(x_t)-S(x_s)| \le K|x_t-x_s|=K\left| \int _s^t S(x_v)\mathrm{d}v\right| \le K_1|t-s|,\quad t,s \in [0,T] \end{aligned}$$

for some constant \(K_1>0;\) since S is bounded, one may take \(K_1= K\sup _{x \in R}|S(x)|.\)

Lemma 3.1

Let the function S(.) satisfy the condition \((A_1)\). Let \( X_t\) and \(x_t\) be the solutions of Eqs. (3.1) and (3.2), respectively. Let \(\delta >0\) be such that \(d+\delta <\frac{1}{2}\) and let \(T \ge 1\). Then, with probability one,

$$\begin{aligned} {\rm (a)}\;\; |X_t-x_t|\le e^{Kt} \epsilon \sup _{0\le s \le t}|L^d_s| \end{aligned}$$
(3.3)

and, for \(T\ge 1\) and \(p \ge 2,\) there exists a constant \(C_{p,\delta ,d}\) independent of the Levy process such that

$$\begin{aligned} {\rm (b)}\;\;\sup _{0 \le t \le T} E |X_t-x_t|^p \le C_{p,\delta ,d} E(|L(1)|^p) e^{pKT} \epsilon ^p T^{p(d+\frac{1}{2}+\delta )}. \end{aligned}$$
(3.4)

Proof of (a) :

Let \(u_t=|X_t-x_t| \). Then, by \((A_1)\), we have

$$\begin{aligned} u_t\le & \int ^t_0 \left| S(X_v)-S(x_v) \right| \mathrm{d}v + \epsilon \;|L_t^d|\nonumber \\\le & K \int ^t_0 u_v \mathrm{d}v + \epsilon \;|L_t^d|. \end{aligned}$$
(3.5)

Applying Gronwall’s lemma (if \(u_t \le a_t + K\int _0^t u_v \mathrm{d}v\) with \(a_t\) nondecreasing in t, then \(u_t \le a_t e^{Kt}\)) with \(a_t= \epsilon \sup _{0\le s \le t}|L_s^d|,\) it follows that

$$\begin{aligned} u_t \le \epsilon \sup _{0\le s \le t}|L_s^d| e^{Kt}. \end{aligned}$$
(3.6)

\(\square \)

Proof of (b) :

Let \(p \ge 2\) and suppose that \(T\ge 1.\) Applying part (a) and then Theorem 2.3, it follows that

$$\begin{aligned} \sup _{0\le t \le T}E|X_t-x_t|^p\le & e^{pKT} \epsilon ^p E [(\sup _{0\le s \le T}|L_s^d|)^p] \end{aligned}$$
(3.7)
$$\begin{aligned}\le & C_{p,\delta ,d} E(|L(1)|^p) e^{pKT} \epsilon ^p T^{p(d+\frac{1}{2}+\delta )}. \end{aligned}$$
(3.8)

\(\square \)

4 Main Results

Let \(\Theta _0(K)\) denote the class of all functions S(x) satisfying the condition \((A_1)\) with the same bound K. Let \(\Theta _k(K) \) denote the class of all functions S(x) which are uniformly bounded by the same constant C and which are k-times differentiable with respect to x satisfying the condition

$$\begin{aligned} |S^{(k)}(x)-S^{(k)}(y)|\le K|x-y|, x,y \in R \end{aligned}$$

for some constant \(K >0.\) Here, \(g^{(k)}(x)\) denotes the k-th derivative of g(.) at x for \(k \ge 0.\) If \(k=0,\) we interpret \(g^{(0)}\) as g.

Let G(u) be a bounded function with finite support \([A, B]\) with \(A<0<B\) satisfying the conditions

\((A_2)\) \( G(u) =0\;\; \text{ for }\;\; u <A \;\;\text{ and }\;\; u >B, \;\;\text{ and }\;\; \int ^B_A G(u) \mathrm{d}u =1.\)

It is obvious that the following conditions are satisfied by the function G(.) : 

(i) \( \int ^\infty _{-\infty } |G(u)|^2 \mathrm{d}u < \infty ;\)

(ii) \(\int ^\infty _{-\infty } |u^{k+1} G(u)|^2 \mathrm{d}u <\infty .\)

We define a kernel-type estimator of the trend \(S_t=S(x_t)\) as

$$\begin{aligned} {\widehat{S}}_t = \frac{1}{\varphi _\epsilon }\int ^T_0 G \left( \frac{\tau -t}{\varphi _\epsilon } \right) d X_\tau \end{aligned}$$
(4.1)

where the normalizing function \( \varphi _\epsilon \rightarrow 0 \) as \( \epsilon \rightarrow 0. \) Let \(E_S(.)\) denote the expectation when the function S(.) is the trend function.
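To make the definition (4.1) concrete, the sketch below computes \({\widehat{S}}_t\) from a discretely observed path by approximating the stochastic integral with a Riemann sum over the increments of X. Every specific choice here is an assumption made only to obtain a runnable example: the Epanechnikov kernel (which satisfies \((A_2)\)), the uniform observation grid, the hypothetical trend \(S(x)=1-x,\) and in particular the synthetic path, which is the Euler solution of (3.2) perturbed by a small Brownian-type noise rather than a genuine solution of (3.1).

```python
import numpy as np

def epanechnikov(u):
    u = np.asarray(u)
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def kernel_trend_estimator(t, X, tau, phi, G=epanechnikov):
    """Riemann-sum approximation of (4.1): (1/phi) * sum_i G((tau_i - t)/phi) * (X_{i+1} - X_i)."""
    dX = np.diff(X)
    weights = G((tau[:-1] - t) / phi)
    return float(np.sum(weights * dX) / phi)

# --- synthetic placeholder data (hypothetical trend S(x) = 1 - x) -------------
rng = np.random.default_rng(1)
T, n, eps = 1.0, 2000, 0.05
tau = np.linspace(0.0, T, n + 1)
x = np.empty(n + 1); x[0] = 0.0
for i in range(n):                          # Euler scheme for the limiting ODE (3.2)
    x[i + 1] = x[i] + (1.0 - x[i]) * (T / n)
noise = eps * np.cumsum(rng.normal(0.0, np.sqrt(T / n), size=n))
X = x + np.concatenate(([0.0], noise))      # crude stand-in for a solution of (3.1)

t0, phi = 0.5, 0.1
print(kernel_trend_estimator(t0, X, tau, phi), 1.0 - x[np.searchsorted(tau, t0)])
```

The printed pair compares the estimate \({\widehat{S}}_{t_0}\) with the target value \(S(x_{t_0})\) for this toy path.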

Theorem 4.1

Suppose that the trend function \(S(x) \in \Theta _0(K)\) and the conditions \((A_1)\) and \((A_2)\) hold. Further suppose that the function \( \varphi _\epsilon \rightarrow 0\) and \(\epsilon ^2\varphi _\epsilon ^{2d-1}\rightarrow 0\) as \(\epsilon \rightarrow 0.\) Then, for any \( 0< a \le b < T , T \ge 1,\) the estimator \({\widehat{S}}_t\) is uniformly consistent, that is,

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \sup _{S(x) \in \Theta _0(K)} \sup _{a\le t \le b } E_S ( |{\widehat{S}}_t - S (x_t)|^2)= 0. \end{aligned}$$
(4.2)

In addition to the conditions \((A_1)-(A_2),\) assume that

\((A_3)\) \( \int ^\infty _{-\infty } u^j G(u) \mathrm{d}u = 0 \;\;\text{ for }\;\; j=1,2,\ldots ,k.\)
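Kernels satisfying \((A_2)\) and \((A_3)\) for a given k are easy to construct. One standard choice (an illustration; the paper does not prescribe a particular construction) is the Legendre projection kernel on \([A,B]=[-1,1],\) \(G(u)= \sum _{j=0}^{k}\frac{2j+1}{2}P_j(0)P_j(u),\) which integrates every polynomial of degree at most k to its value at 0, so that \(\int G(u)\mathrm{d}u=1\) and \(\int u^jG(u)\mathrm{d}u=0\) for \(j=1,\ldots ,k.\) Such higher-order kernels necessarily take negative values.

```python
import numpy as np
from scipy.special import eval_legendre

def legendre_kernel(k):
    """Kernel of order k on [-1, 1]: reproduces q(0) for polynomials q of degree <= k."""
    def G(u):
        u = np.asarray(u, dtype=float)
        vals = sum(((2 * j + 1) / 2.0) * eval_legendre(j, 0.0) * eval_legendre(j, u)
                   for j in range(k + 1))
        return np.where(np.abs(u) <= 1.0, vals, 0.0)
    return G

# numerical check of (A_2) and (A_3) for, say, k = 3
G = legendre_kernel(3)
u = np.linspace(-1.0, 1.0, 20001)
for j in range(4):
    print(j, np.trapz(u ** j * G(u), u))   # approximately 1, 0, 0, 0
```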

Theorem 4.2

Suppose that the function \( S(x) \in \Theta _{k+1}(K)\) and the conditions \((A_1)-(A_3)\) hold. Further suppose that \( \varphi _\epsilon = \epsilon ^{\frac{2}{2k-2d+3}}.\) Then,

$$\begin{aligned} \limsup _{\epsilon \rightarrow 0} \sup _{S(x) \in \Theta _{k+1}(K)}\sup _{a \le t \le b} E_S (| {\widehat{S}}_t - S(x_t)|^2) \epsilon ^{-\frac{4(k+1)}{2k-2d+3}} \ < \infty . \end{aligned}$$
(4.3)

Theorem 4.3

Suppose that the function \(S(x) \in \Theta _{k+1}(K)\) for some \(k>1\) and the conditions \((A_1)-(A_3)\) hold. Further suppose that \(\varphi _\epsilon = \epsilon ^{\frac{1}{k+2-(d+\frac{1}{2})}}\). Then, as \(\epsilon \rightarrow 0,\) the asymptotic distribution of

$$\begin{aligned} \epsilon ^{\frac{-(k+1)}{k+2-(d+\frac{1}{2})}} ({\widehat{S}}_t-S(x_t)- \varphi _\epsilon ^{k+1}\frac{S_t^{(k+1)}}{(k+1)!} \int ^\infty _{-\infty } G(u) u^{k+1}\ \mathrm{d}u) \end{aligned}$$

has mean zero and variance

$$\begin{aligned} \sigma ^2= \frac{\Gamma (1-2d)E[L(1)^2]}{\Gamma (d)\Gamma (1-d)}\int ^{\infty }_{-\infty } \int ^{\infty }_{-\infty }G(u)G(v) |u-v|^{2d-1}\ \mathrm{d}u\mathrm{d}v \end{aligned}$$

and the asymptotic distribution is that of the family of random variables

$$\begin{aligned} \varphi _\epsilon ^{-(d+\frac{1}{2})}\int _{-\infty }^\infty G\left( \frac{\tau -t}{\varphi _\epsilon }\right) \mathrm{d}L^d_\tau \end{aligned}$$

as \(\epsilon \rightarrow 0\).
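For a concrete kernel, the variance \(\sigma ^2\) can be evaluated numerically. In the sketch below the kernel is the Epanechnikov kernel (which, being symmetric, satisfies \((A_3)\) only for \(k=1\)) and m2 stands for \(E[L(1)^2];\) both are illustrative choices, and the adaptive quadrature may warn about the integrable singularity on the diagonal \(u=v.\) A higher-order kernel, such as the Legendre construction sketched above, can be substituted when \(k>1.\)

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma

def sigma2(G, d, A=-1.0, B=1.0, m2=1.0):
    """sigma^2 = c_d * int int G(u) G(v) |u - v|^(2d-1) du dv,
    with c_d = Gamma(1-2d) * m2 / (Gamma(d) * Gamma(1-d)) as in (2.6)."""
    c_d = gamma(1.0 - 2.0 * d) * m2 / (gamma(d) * gamma(1.0 - d))
    val, _ = dblquad(lambda v, u: G(u) * G(v) * abs(u - v) ** (2.0 * d - 1.0),
                     A, B, lambda u: A, lambda u: B)
    return c_d * val

epanechnikov = lambda u: 0.75 * (1.0 - u ** 2) if abs(u) <= 1.0 else 0.0
print(sigma2(epanechnikov, d=0.2))
```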

5 Proofs of Theorems

Proof of Theorem 4.1

From the inequality

$$\begin{aligned} (a+b+c)^2\le 3(a^2+b^2+c^2), a,b,c\in R, \end{aligned}$$

it follows that

$$\begin{aligned} E_S[|\widehat{S}_t -S(x_t)|^2]&= E_S \left[ \left| \frac{1}{\varphi _\epsilon } \int ^T_0 G \left( \frac{\tau -t}{\varphi _\epsilon } \right) \left( S(X_\tau ) -S(x_\tau ) \right) \mathrm{d}\tau \right. \right. \nonumber \\&\left. \left. + \frac{1}{\varphi _\epsilon } \int ^T_0 G \left( \frac{\tau -t}{\varphi _\epsilon }\right) S(x_\tau ) \mathrm{d}\tau - S(x_t) + \frac{\epsilon }{\varphi _\epsilon } \int ^T_0 G \left( \frac{\tau -t}{\varphi _\epsilon } \right) \mathrm{d}L^d_\tau \right| ^2\right] \nonumber \\\le &\,3 E_S\left[ \left| \frac{1}{\varphi _\epsilon } \int ^T_0 G \left( \frac{\tau -t}{\varphi _\epsilon } \right) (S(X_\tau ) -S(x_\tau )) \mathrm{d}\tau \right| ^2\right] \nonumber \\&+\,3 E_S \left[ \left| \frac{1}{\varphi _\epsilon } \int ^T_0 G \left( \frac{\tau -t}{\varphi _\epsilon } \right) S(x_\tau ) \mathrm{d}\tau -S(x_t) \right| ^2 \right] \nonumber \\&+\,3 \frac{\epsilon ^2}{\varphi _\epsilon ^2} E_S \left[ \left| \int ^T_0 G \left( \frac{\tau -t}{\varphi _\epsilon }\right) \mathrm{d}L_\tau ^d\right| ^2\right] \nonumber \\&= I_1+I_2+I_3 \;\;\text{(say). } \end{aligned}$$
(5.1)

By the Lipschitz condition on the function S(.),  the inequality (3.3) in Lemma 3.1 and the condition \((A_2)\), and applying the Holder inequality, it follows that

$$\begin{aligned} I_1&= 3 E_S \left| \frac{1}{\varphi _\epsilon } \int ^T_0 G \left( \frac{\tau -t}{\varphi _\epsilon } \right) (S(X_\tau ) -S(x_\tau )) \mathrm{d}\tau \right| ^2 \nonumber \\&= 3E_S \left| \int ^\infty _{-\infty } G(u) \left( S(X_{t+\varphi _\epsilon u} ) -S(x_{t+\varphi _\epsilon u} )\right) \mathrm{d}u\right| ^2\nonumber \\\le & 3 (B-A) \int ^\infty _{-\infty } |G(u)|^2 K^2 E \left| X_{t+\varphi _\epsilon u}-x_{t+\varphi _\epsilon u} \right| ^2 \ \mathrm{d}u \;\;\hbox {(by using the condition }(A_1))\nonumber \\\le & 3(B-A)\int ^\infty _{-\infty } |G(u)|^2 \;\;K^2 \sup _{0 \le t + \varphi _\epsilon u \le T}E \left| X_{t+\varphi _\epsilon u} -x_{t+\varphi _\epsilon u}\right| ^2 \ \mathrm{d}u \nonumber \\ \le & 3 (B-A)K^2 C_{2,\delta ,d} E[L(1)^2] e^{2KT} \epsilon ^2 T^{2(d+\frac{1}{2}+\delta )} \int _{-\infty }^\infty |G(u)|^2\mathrm{d}u\;\;\text{(by using (3.4))} \end{aligned}$$
(5.2)

which tends to zero as \(\epsilon \rightarrow 0.\) For the term \(I_2\), by the Lipschitz condition on the function S(.),  the condition \((A_2)\) and the Holder inequality, it follows that

$$\begin{aligned} I_2&= 3E_S \left| \frac{1}{\varphi _\epsilon } \int ^T_0 G\left( \frac{\tau -t}{\varphi _\epsilon }\right) S (x_\tau ) \mathrm{d}\tau - S (x_t) \right| ^2 \nonumber \\ &= 3 \left| \int ^\infty _{-\infty } G(u) \left( S(x_{t+\varphi _\epsilon u})-S(x_t) \right) \ \mathrm{d}u \right| ^2 \nonumber \\ &\le 3(B-A) \int _{-\infty }^\infty |G(u)(S(x_{t+\varphi _\epsilon u})-S(x_t))|^2\mathrm{d}u\nonumber \\ & \le 3 (B-A)K_1^2 \varphi _\epsilon ^2 \int _{-\infty }^\infty |uG(u)|^2 \mathrm{d}u\;\;\text{(by } (A_2)\text{) }. \end{aligned}$$
(5.3)

The last term tends to zero as \(\epsilon \rightarrow 0.\) We will now get an upper bound on the term \(I_3.\) Note that

$$\begin{aligned} I_3&= 3\frac{ \epsilon ^2}{\varphi _\epsilon ^2} E_S \left| \int ^T_0 G \left( \frac{\tau -t}{\varphi _\epsilon }\right) \mathrm{d}L^d_\tau \right| ^2 \nonumber \\ &= 3 \frac{ \epsilon ^2}{\varphi _\epsilon ^2} \frac{\Gamma (1-2d)E[L(1)^2]}{\Gamma (d)\Gamma (1-d)}\int _0^T\int _0^T G \left( \frac{\tau -t}{\varphi _\epsilon }\right) G \left( \frac{\tau -s}{\varphi _\epsilon }\right) |t-s|^{2d-1}\mathrm{d}s \mathrm{d}t\nonumber \\ &\le C_1\frac{\epsilon ^2}{\varphi ^2_\epsilon } \varphi _\epsilon ^{2d+1}\int _R\int _R G(u) G(v) |u-v|^{2d-1}\mathrm{d}u\mathrm{d}v \end{aligned}$$
(5.4)

for some positive constant \(C_1,\) and the last bound tends to zero since \(\epsilon ^2\varphi _\epsilon ^{2d-1}\rightarrow 0\) by assumption. Combining the bounds (5.1)–(5.4), there exists a constant \(C_2>0\) not depending on t or \(\epsilon \) such that \(\sup _{a\le t \le b}E_S(|{\widehat{S}}_t-S(x_t)|^2) \le C_2(\epsilon ^2+\varphi _\epsilon ^2+\epsilon ^2\varphi _\epsilon ^{2d-1}),\) and the right-hand side tends to zero as \(\epsilon \rightarrow 0\) under the hypotheses \(\varphi _\epsilon \rightarrow 0\) and \(\epsilon ^2\varphi _\epsilon ^{2d-1}\rightarrow 0.\) This proves Theorem 4.1. \(\square \)

Proof of Theorem 4.2

By Taylor’s formula, for any \(x \in R,\)

$$\begin{aligned} S(y) = S(x) +\sum ^k_{j=1} S^{(j)} (x) \frac{(y-x)^j}{j !} +[ S^{(k)} (z)-S^{(k)} (x)] \frac{(y-x)^k}{k!} \end{aligned}$$

for some z such that \(|z-x|\le |y-x|.\) Using this expansion, Eq. (3.2) and the condition \((A_3)\) in the expression for \(I_2\) defined in the proof of Theorem 4.1, it follows that

$$\begin{aligned} I_2&= 3 \left[ \int ^\infty _{-\infty } G(u) \left( S(x_{t+\varphi _\epsilon u}) - S(x_t) \right) \ \mathrm{d}u \right] ^2\\ &= 3\left[ \sum ^k_{j=1} S^{(j)} (x_t) \left( \int ^\infty _{-\infty }G(u) u^j \mathrm{d}u \right) \varphi ^j_\epsilon ( j !)^{-1}\right. \\&\left. \quad +\,\left( \int ^\infty _{-\infty }G(u) u^k (S^{(k)}(z_u) -S^{(k)} (x_t))\, \mathrm{d}u \right) \varphi ^k_\epsilon (k !)^{-1}\right] ^2\\ \end{aligned}$$

for some \(z_u\) such that \(|x_t-z_u|\le |x_{t+\varphi _\epsilon u}-x_t| \le C|\varphi _\epsilon u|.\) Hence

$$\begin{aligned} I_2&\le 3 K_1^2 \left[ \int ^\infty _{-\infty } \varphi _\epsilon u |G(u)u^{k}|\varphi ^{k}_\epsilon (k!) ^{-1} \mathrm{d}u \right] ^2 \nonumber \\&\le 3 K_1^2 (B-A)(k!)^{-2} \varphi ^{2(k+1)}_\epsilon \int ^\infty _{-\infty } G^2(u) u^{2 (k+1)}\ \mathrm{d}u \nonumber \\&\le C_2 \varphi _\epsilon ^{2(k+1)} \end{aligned}$$
(5.5)

for some positive constant \(C_2\). Combining Eqs. (5.2)– (5.5), we get that there exists a positive constant \(C_3\) such that

$$\begin{aligned} \sup _{a \le t \le b}E_S|{\widehat{S}}_t-S(x_t)|^2 \le C_3 (\epsilon ^2 + \varphi ^{2(k+1)}_\epsilon +\epsilon ^2 \varphi _\epsilon ^{2d-1}). \end{aligned}$$
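The bandwidth chosen below balances the two dominant terms on the right-hand side; as a short check,

$$\begin{aligned} \varphi _\epsilon ^{2(k+1)}= \epsilon ^2\varphi _\epsilon ^{2d-1} \Longleftrightarrow \varphi _\epsilon ^{2k-2d+3}= \epsilon ^2 \Longleftrightarrow \varphi _\epsilon = \epsilon ^{\frac{2}{2k-2d+3}}, \end{aligned}$$

and with this choice \(\varphi _\epsilon ^{2(k+1)}= \epsilon ^2\varphi _\epsilon ^{2d-1}= \epsilon ^{\frac{4(k+1)}{2k-2d+3}},\) while \(\epsilon ^2= o\left( \epsilon ^{\frac{4(k+1)}{2k-2d+3}}\right) \) since \(\frac{4(k+1)}{2k-2d+3}<2\) for \(0<d<\frac{1}{2}.\)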

Choosing \( \varphi _\epsilon = \epsilon ^{\frac{2}{2k-2d+3}},\) we get that

$$\begin{aligned} \limsup _{\epsilon \rightarrow 0} \sup _{S(x) \in \Theta _{k+1} (K) } \sup _{a \le t \le b} E_S |{\widehat{S}}_t - S (x_t)|^2\epsilon ^ {-\frac{4(k+1)}{2k-2d+3}} < \infty . \end{aligned}$$

This completes the proof of Theorem 4.2. \(\square \)

Proof of Theorem 4.3

Let \(\alpha = \frac{k+1}{k+2-(d+\frac{1}{2})}= \frac{2k+2}{2k-2d+3},\) so that \(\epsilon ^{-\alpha }= \varphi _\epsilon ^{-(k+1)}\) for the bandwidth \(\varphi _\epsilon \) specified in Theorem 4.3. Observe that \(0<\alpha <1\) since \(0<d<\frac{1}{2}.\) From (3.1), we obtain that

$$\begin{aligned}&{\epsilon ^{-\alpha }( {\widehat{S}}_t -S(x_t))}\nonumber \\&= \epsilon ^{-\alpha }\left[\frac{1}{\varphi _\epsilon } \int ^T_0 G \left( \frac{\tau -t}{\varphi _\epsilon } \right) \left( S(X_\tau )-S(x_\tau )\right) \ \mathrm{d}\tau \right.\nonumber \\&\quad \left.+\,\frac{1}{\varphi _\epsilon } \int ^T_0 G \left( \frac{\tau -t}{\varphi _\epsilon }\right) S(x_\tau ) \mathrm{d}\tau -S(x_t)+ \frac{\epsilon }{\varphi _\epsilon } \int ^T_0 G \left( \frac{\tau -t}{\varphi _\epsilon }\right) \mathrm{d}L^d_\tau \right]\nonumber \\&= \epsilon ^{-\alpha } \left[ \int ^\infty _{-\infty } G(u) (S (X_{t+\varphi _\epsilon u}) - S (x_{t+\varphi _\epsilon u})) \ \mathrm{d}u \right. \nonumber \\&\quad +\int ^\infty _{-\infty } G(u) (S(x_{t+\varphi _\epsilon u})- S(x_t)) \ \mathrm{d}u \nonumber \\&\left. \quad + \frac{\epsilon }{\varphi _{\epsilon }}\int ^T_0 G \left( \frac{\tau -t}{\varphi _\epsilon } \right) d L^d_\tau \right] .\nonumber \\&= J_1+J_2+J_3 \;\;\;\text{(say) }. \end{aligned}$$
(5.6)


By the Lipschitz condition on the function S(.) and part (a) of Lemma 3.1, it follows that

$$\begin{aligned} |J_1|&= \epsilon ^{-\alpha }\left| \int _{-\infty }^\infty G(u)(S(X_{t+\varphi _\epsilon u})-S(x_{t+\varphi _\epsilon u})) \mathrm{d}u\right| \nonumber \\&\le \epsilon ^{-\alpha } \, K\int _{-\infty }^\infty |G(u)|\, |X_{t+\varphi _\epsilon u} - x_{t+\varphi _\epsilon u}| \,\mathrm{d}u\nonumber \\&\le Ke^{KT} \epsilon ^{1-\alpha } \int _{-\infty }^\infty |G(u)|\sup _{0\le t+\varphi _\epsilon u \le T}|L^d_{t+\varphi _\epsilon u}|\mathrm{d}u. \end{aligned}$$
(5.7)

Applying the Markov inequality and Theorem 2.3, for any \(\eta >0,\)

$$\begin{aligned} P(|J_1|>\eta )&\le \epsilon ^{1-\alpha } \eta ^{-1} Ke^{KT} \int _{-\infty }^\infty |G(u)|\,E\Big (\sup _{0\le t+\varphi _\epsilon u \le T}|L^d_{t+\varphi _\epsilon u}|\Big )\mathrm{d}u\nonumber \\&\le \epsilon ^{1-\alpha }\eta ^{-1} Ke^{KT} \int _{-\infty }^\infty |G(u)|\Big [E\Big (\sup _{0\le t+\varphi _\epsilon u \le T}|L^d_{t+\varphi _\epsilon u}|^2\Big )\Big ]^{1/2} \mathrm{d}u\nonumber \\&\le \epsilon ^{1-\alpha } \eta ^{-1} Ke^{KT} C_{2,\delta , d}^{1/2}[E(|L(1)|^2)]^{1/2}T^{(d+\frac{1}{2}+\delta )}\int _{-\infty }^\infty |G(u)|\mathrm{d}u. \end{aligned}$$
(5.8)

The last term tends to zero as \(\epsilon \rightarrow 0.\) By Taylor’s formula, for any \(t \in [0,T],\)

$$\begin{aligned} S_t = S_{t_0} + \sum ^{k+1}_{j=1} S_{t_0}^{(j)} \frac{(t-t_0)^j}{j !} + [ S_{t_0+\gamma (t-t_0)}^{(k+1)}-S_{t_0}^{(k+1)}] \frac{(t-t_0)^{k+1}}{(k+1)!} \end{aligned}$$

where \(0<\gamma <1\) and \(t_0 \in (0,T).\) Applying the condition \((A_3)\) and the Taylor expansion above, it follows that

$$\begin{aligned} J_2&= \epsilon ^{-\alpha }\left[ \sum _{j=1}^{k+1}S_t^{(j)}\left( \int _{-\infty }^\infty G(u) u^j \;\mathrm{d}u\right) \varphi _\epsilon ^j(j!)^{-1}\nonumber \right. \\&\left. \quad +\frac{\varphi _\epsilon ^{k+1}}{(k+1)!}\int _{-\infty }^\infty G(u) u^{k+1}(S_{t+\gamma \varphi _\epsilon u}^{(k+1)}-S_t^{(k+1)})\;\mathrm{d}u\right] \nonumber \\&= \epsilon ^{-\alpha }\varphi _\epsilon ^{k+1}\frac{S_t^{(k+1)}}{(k+1)!}\int _{-\infty }^\infty G(u) u^{k+1}\;\mathrm{d}u\nonumber \\&\quad +\,\varphi _\epsilon ^{k+1} \epsilon ^{-\alpha }\frac{1}{(k+1)!}\int _{-\infty }^\infty G(u)u^{k+1}(S_{t+\gamma \varphi _\epsilon u}^{(k+1)}-S_t^{(k+1)})\;\mathrm{d}u. \end{aligned}$$
(5.9)

Observing that \(S_t \in \Theta _{k+1}(K),\) we obtain that

$$\begin{aligned}&{\frac{1}{(k+1)!}\int _{-\infty }^\infty G(u)u^{k+1}(S_{t+\gamma \varphi _\epsilon u}^{(k+1)}-S_t^{(k+1)})\mathrm{d}u}\nonumber \\\le & \frac{1}{(k+1)!}\int _{-\infty }^\infty |G(u)u^{k+1}(S_{t+\gamma \varphi _\epsilon u}^{(k+1)}-S_t^{(k+1)})|\mathrm{d}u\nonumber \\\le & \frac{L\varphi _\epsilon }{(k+1)!}\int _{-\infty }^\infty |G(u)u^{k+2}|\mathrm{d}u. \end{aligned}$$
(5.10)

Combining the equations given above, it follows that

$$\begin{aligned}&{\epsilon ^{\frac{-(k+1)}{k+2-(d+\frac{1}{2})}} ({\widehat{S}}_t-S(x_t)- \varphi _\epsilon ^{k+1}\frac{S_t^{(k+1)}}{(k+1)!} \int ^\infty _{-\infty } G(u) u^{k+1}\ \mathrm{d}u)}\nonumber \\&= O_p(\epsilon ^{1-\alpha })+O_p(\epsilon ^{-\alpha }\varphi _\epsilon ^{k+2})+\epsilon ^{1-\alpha }\varphi _\epsilon ^{-1}\int _0^TG\left( \frac{\tau -t}{\varphi _\epsilon }\right) \mathrm{d}L_\tau ^d. \end{aligned}$$
(5.11)

From the choice of \(\varphi _\epsilon \) and \(\alpha ,\) it follows that

$$\begin{aligned} \epsilon ^{1-\alpha }\varphi _\epsilon ^{-1}= \varphi _\epsilon ^{-(d+\frac{1}{2})} \end{aligned}$$

and, by Theorem 2.2,

$$\begin{aligned}&{Var\left[ \varphi _\epsilon ^{-(d+\frac{1}{2})} \int ^T_0 G \left( \frac{\tau -t}{\varphi _\epsilon } \right) \mathrm{d}L^d_\tau \right] }\nonumber \\&= \frac{\Gamma (1-2d)E[L(1)^2]}{\Gamma (d)\Gamma (1-d)}\varphi _\epsilon ^{-(2d+1)}\int _0^T\int _0^TG \left( \frac{\tau -t}{\varphi _\epsilon } \right) G \left( \frac{\sigma -t}{\varphi _\epsilon } \right) |\tau -\sigma |^{2d-1}\mathrm{d}\sigma \mathrm{d}\tau \nonumber \\&= \frac{\Gamma (1-2d)E[L(1)^2]}{\Gamma (d)\Gamma (1-d)}\int _R\int _R G(u) G(v) |u-v|^{2d-1}\mathrm{d}u\mathrm{d}v= \sigma ^2. \end{aligned}$$
(5.12)

Applying Slutsky’s theorem and the equations derived above, it can be checked that the random variable

$$\begin{aligned} \epsilon ^{\frac{-(k+1)}{k+2-(d+\frac{1}{2})}} ({\widehat{S}}_t - S(x_t) - \varphi _\epsilon ^{k+1}\frac{S_t^{(k+1)} }{(k+1) !} \int ^\infty _{-\infty } G(u) u^{k+1}\ \mathrm{d}u) \end{aligned}$$

has the same limiting distribution, as \(\epsilon \rightarrow 0,\) as the family of random variables

$$\begin{aligned} \varphi _\epsilon ^{-(d+\frac{1}{2})}\int _{-\infty }^{\infty } G\left( \frac{\tau -t}{\varphi _\epsilon } \right) \mathrm{d}L_\tau ^d \end{aligned}$$

as \(\epsilon \rightarrow 0.\) This completes the proof of Theorem 4.3. \(\square \)

Remarks

We now make some remarks on the limiting distribution following the suggestions of a reviewer. Define the rescaled fractional Levy process

$$\begin{aligned} L_\tau ^{d,\epsilon ,t}=\varphi _\epsilon ^{-(d+\frac{1}{2})}(L_{\tau \varphi _\epsilon +t}^d-L_t^d). \end{aligned}$$

Then, it can be shown that the integral

$$\begin{aligned} \varphi _\epsilon ^{-(d+\frac{1}{2})}\int _{-\infty }^{\infty } G\left( \frac{\tau -t}{\varphi _\epsilon } \right) \mathrm{d}L_\tau ^d \end{aligned}$$

is almost surely equal to

$$\begin{aligned} \int _{-\infty }^{\infty }G(\tau ) \mathrm{d}L_\tau ^{d,\epsilon ,t} \end{aligned}$$

which in turn has the same distribution as that of the integral

$$\begin{aligned} \int _{-\infty }^{\infty }G(\tau ) \mathrm{d}L_\tau ^{d,\epsilon ,0} \end{aligned}$$

by the stationarity of the increments. This integral in turn is equal to

$$\begin{aligned} \int _{-\infty }^{\infty } (I^d_G)(\tau )\mathrm{d}L_\tau ^{\epsilon } \end{aligned}$$

for the rescaled Levy process

$$\begin{aligned} L_\tau ^{\epsilon }\equiv L_\tau ^{0, \epsilon ,0}= \varphi _\epsilon ^{-\frac{1}{2}}L_{\tau \varphi _\epsilon }, \tau \in R. \end{aligned}$$

The process on the right-hand side converges to a two-sided Brownian motion W(.) with \(Var(W(1))= E[L(1)^2]\) by [6], Chapter VII. Approximating the function \(I_G^d\) by step functions, it can be shown that the random variable

$$\begin{aligned} \varphi _\epsilon ^{-(d+\frac{1}{2})}\int _{-\infty }^{\infty } G\left( \frac{\tau -t}{\varphi _\epsilon } \right) \mathrm{d}L_\tau ^d \end{aligned}$$

converges to a Gaussian random variable with mean zero and variance

$$\begin{aligned} E[L(1)^2]\int _R(I_G^d(\tau ))^2\mathrm{d}\tau \end{aligned}$$

as \(\epsilon \rightarrow 0.\)
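This limiting variance agrees with \(\sigma ^2\) in Theorem 4.3: combining Theorem 2.1 (together with the isometry defining the norm of H) and Theorem 2.2 applied with \(f=g=G,\)

$$\begin{aligned} E\left[ \left( \int _R G(s)\mathrm{d}L_s^d\right) ^2\right]&= E[L(1)^2]\int _R(I_G^d(\tau ))^2\mathrm{d}\tau \\&= \frac{\Gamma (1-2d)E[L(1)^2]}{\Gamma (d)\Gamma (1-d)}\int _R\int _RG(u)G(v)|u-v|^{2d-1}\mathrm{d}u\mathrm{d}v= \sigma ^2, \end{aligned}$$

so the Gaussian limit obtained above has exactly the variance appearing in Theorem 4.3.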