1 Introduction and Main Result

Intersection local times (and self-intersection local times, when the two processes coincide) are important subjects in probability theory, and their derivatives have received much attention recently; see, e.g., [9, 11, 12, 13]. Jung and Markowsky [6, 7] obtained a Tanaka formula and an occupation time formula for the derivative of the self-intersection local time of fractional Brownian motion. On the other hand, several authors have studied the renormalized self-intersection local time of fractional Brownian motion; see, e.g., Hu et al. [3, 4].

Motivated by Jung and Markowsky [6] and Hu [2], in this paper we study higher-order derivatives of the intersection local time of two independent fractional Brownian motions.

To state our main result we let \(B^{H_1}=\{B^{H_1}_t, t\ge 0\}\) and \(\widetilde{B}^{H_2}=\{\widetilde{B}^{H_2}_t, t \ge 0 \}\) be two independent d-dimensional fractional Brownian motions with Hurst parameters \(H_1, H_2\in (0, 1)\), respectively. This means that \(B^{H_1}\) and \(\widetilde{B}^{H_2}\) are independent centered Gaussian processes whose components are independent and satisfy

$$\begin{aligned} \mathbb {E}\left[ B^{H_{1},i }_s B^{H_{1},i }_t\right] =\frac{1}{2}\left( s^{2H_{1}}+t^{2H_{1}}-{\mid s-t \mid }^{2H_1}\right) , \qquad i=1, \ldots , d \end{aligned}$$

(with the analogous identity for \(\widetilde{B}^{H_2}\), with \(H_2\) in place of \(H_1\)). In this paper we are concerned with the derivatives of the intersection local time of \(B^{H_1}\) and \(\widetilde{B}^{H_2}\), defined by

$$\begin{aligned} \hat{\alpha }^{(k)}(x):= \frac{\partial ^k}{\partial x_1^{k_1}\ldots \partial x_d^{k_d}} \int _0^T\int _0^T\delta \left( B^{H_1}_t-\widetilde{B}^{H_2}_s+x\right) \hbox {d}t\hbox {d}s, \end{aligned}$$

where \(k=(k_1, \ldots , k_d)\) is a multi-index with all \(k_i\) nonnegative integers and \(\delta \) is the Dirac delta function in d variables. In this work we consider exclusively the case \(x=0\); namely, we study

$$\begin{aligned} \hat{\alpha }^{(k)}(0):= \int _0^T\int _0^T\delta ^{(k)} \left( B^{H_1}_t-\widetilde{B}^{H_2}_s \right) \hbox {d}t\hbox {d}s, \end{aligned}$$
(1.1)

where \(\delta ^{(k)}(x)=\frac{\partial ^k}{\partial x_1^{k_1}\ldots \partial x_d^{k_d}}\delta (x)\) is the k-th order partial derivative of the Dirac delta function. Since \(\delta (x)=0\) when \(x\not =0\), the intersection local time \(\hat{\alpha } (0)\) (the case \(k=0\)) measures how often the processes \(B^{H_1}\) and \(\widetilde{B}^{H_2}\) intersect each other.

Since the Dirac delta function \(\delta \) is a generalized function, we need to give a meaning to \(\hat{\alpha }^{(k)}(0)\). To this end, we approximate the Dirac delta function \(\delta \) by

$$\begin{aligned} f_{\varepsilon }(x):=\frac{1}{(2\pi \varepsilon )^{\frac{d}{2}}} \hbox {e}^{-\frac{\mid x\mid ^2}{2\varepsilon }} =\frac{1}{(2\pi )^d}\int _{\mathbb {R}^d}\hbox {e}^{ipx}\hbox {e}^{-\frac{\varepsilon \mid p\mid ^2}{2}}\hbox {d}p, \end{aligned}$$
(1.2)

where, here and throughout this paper, we use \(px=\sum _{j=1}^d p_jx_j\) and \(|p|^2=\sum _{j=1}^d p_j^2\). Thus, we approximate \(\delta ^{(k)}\) by

$$\begin{aligned} \begin{aligned} f_{\varepsilon }^{(k)}(x):=\frac{\partial ^k}{\partial x_1^{k_1}\ldots \partial x_d^{k_d}} f_{\varepsilon }(x) =\frac{i^{|k|}}{(2\pi )^d}\int _{\mathbb {R}^d} p_1^{k_1}\ldots p_d^{k_d} \hbox {e}^{ipx}\hbox {e}^{-\frac{\varepsilon \mid p\mid ^2}{2}}\hbox {d}p. \end{aligned} \end{aligned}$$
(1.3)
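As a quick numerical illustration (not part of the paper's argument), the Fourier representation (1.3) can be checked in the simplest case \(d=1\), \(k=1\): the oscillatory integral should reproduce the derivative of the Gaussian kernel. All numerical parameters below are arbitrary choices.

```python
# Check (1.3) for d = 1, k = 1: the Fourier integral
#   (i / 2pi) * int p * exp(i p x) * exp(-eps p^2 / 2) dp
# should equal f_eps'(x) = -(x / eps) * f_eps(x) for the Gaussian f_eps.
import numpy as np

eps, x = 0.5, 0.7
p = np.linspace(-40.0, 40.0, 200001)
dp = p[1] - p[0]
integrand = p * np.exp(1j * p * x) * np.exp(-eps * p**2 / 2)
fourier_side = float(np.real(1j / (2 * np.pi) * integrand.sum() * dp))

f_eps = np.exp(-x**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)
direct_side = -(x / eps) * f_eps

assert np.isclose(fourier_side, direct_side, rtol=1e-6)
```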

We say that \(\hat{\alpha }^{(k)} (0)\) exists (in \(L^2\)) if

$$\begin{aligned} \hat{\alpha }^{(k)}_{\varepsilon }(0):= \int _0^T\int _0^Tf^{(k)}_{\varepsilon }\left( B^{H_1}_t- \widetilde{B}^{H_2}_s\right) \hbox {d}t\hbox {d}s \end{aligned}$$
(1.4)

converges to a random variable (denoted by \(\hat{\alpha }^{(k)} (0)\)) in \(L^2\) when \(\varepsilon \downarrow 0\).
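As an illustration only, the approximating family (1.4) can be simulated in the simplest case \(d=1\), \(H_1=H_2=\frac{1}{2}\) (ordinary Brownian motions), \(k=0\); the grid size, \(\varepsilon \), and random seed below are arbitrary choices, and the Riemann sum is a crude stand-in for the double integral.

```python
# Monte Carlo sketch of alpha_eps(0) from (1.4) for two independent standard
# Brownian motions (H1 = H2 = 1/2, d = 1, k = 0); illustration only.
import numpy as np

rng = np.random.default_rng(0)
T, n, eps = 1.0, 500, 0.05
dt = T / n
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
Bt = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

diff = B[:, None] - Bt[None, :]                   # B_t - B~_s on the grid
f_eps = np.exp(-diff**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)
alpha_eps = f_eps.sum() * dt * dt                 # Riemann sum over [0, T]^2

assert np.isfinite(alpha_eps) and alpha_eps > 0.0
```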

Here is the main result of this work.

Theorem 1

Let \(B^{H_1}\) and \(\widetilde{B}^{H_2}\) be two independent d-dimensional fractional Brownian motions with Hurst parameters \(H_1\) and \(H_2\), respectively.

  1. (i)

Assume \(k =(k_1, \ldots , k_d)\) is a multi-index of nonnegative integers satisfying

    $$\begin{aligned} \frac{H_1H_2}{H_1+H_2} (|k|+d)< 1, \end{aligned}$$
    (1.5)

where \(|k|=k_1+\cdots +k_d\). Then the k-th order derivative \(\hat{\alpha }^{(k)}(0)\) of the intersection local time exists in \(L^p(\Omega )\) for any \(p\in [1, \infty )\).

  2. (ii)

Assume condition (1.5) is satisfied. Then there is a constant \(C_{d,k,T}\in (0, \infty )\) such that

    $$\begin{aligned} \begin{aligned} \mathbb {E} \left[ \exp \left\{ C_{d,k,T}\left| \widehat{\alpha }^{(k)}(0)\right| ^{\beta }\right\} \right] <\infty , \end{aligned} \end{aligned}$$

    where \(\beta =\frac{H_1+H_2}{2dH_1H_2}\).

  3. (iii)

If \(\hat{\alpha }^{(k)}(0)\in L^1(\Omega )\), where \(k=(0, \ldots , 0,k_i, 0,\ldots ,0)\) with \(k_i\) an even integer, then condition (1.5) must be satisfied.

Remark 1

  1. (i)

    When \(k=0\), we have that \(\widehat{\alpha }^{(0)}(0)\) is in \(L^p\) for any \(p\in [1, \infty )\) if \(\frac{H_1H_2}{H_1+H_2} d<1\). In the special case \(H_1=H_2=H\), this condition becomes \(Hd<2\), which is the condition obtained in Nualart et al. [8].

  2. (ii)

When \(H_1=H_2=\frac{1}{2}\), we have the exponential integrability exponent \(\beta =2/d\), which recovers an earlier result [2, Theorem 9.4].

  3. (iii)

Part (iii) of the theorem states that inequality (1.5) is also a necessary condition for the existence of \(\hat{\alpha }^{(k)}(0)\) for the stated class of multi-indices. To the best of our knowledge, this is the first result of this type.

2 Proof of the Theorem

Proof of Parts (i) and (ii)

This section is devoted to the proof of the theorem. We first establish a bound for \({\mathbb {E}}\left| \widehat{\alpha }^{(k)}(0)\right| ^n\) that proves (i) and (ii) simultaneously. We introduce the following notation.

$$\begin{aligned}&p_j=(p_{1j},\ldots ,p_{dj}),\quad p_j^{k }=\left( p_{1j}^{k_1 },\ldots ,p_{dj}^{k_d}\right) ,\quad j=1, 2, \ldots , n; \\&p=(p_1, \dots , p_n), \qquad dp =\prod _{i=1}^d\prod _{j=1}^n dp_{ij}. \end{aligned}$$

We also denote \(s=(s_1,\ldots ,s_n)\), \(t=(t_1,\ldots ,t_n)\), \(ds=ds_1\ldots ds_n\) and \(dt=dt_1\ldots dt_n\).

Fix an integer \(n\ge 1\) and denote \(T_n=\{(s, t): 0<s_j, t_j <T,\ j=1, \ldots , n\}\). We have

$$\begin{aligned} \begin{aligned} \mathbb {E}\left[ \left| \widehat{\alpha }^{(k)}_{\varepsilon }(0)\right| ^n \right]&\le \frac{1}{(2\pi )^{nd}}\int _{T_n}\int _{\mathbb {R}^{nd}} \bigg | \mathbb {E}\left[ \exp \left\{ ip_1\left( B^{H_1}_{s_1}-\widetilde{B}^{H_2}_{t_1}\right) +\cdots \right. \right. \\&\quad \left. \left. +ip_n\left( B^{H_1}_{s_n}-\widetilde{B}^{H_2}_{t_n}\right) \right\} \right] \bigg | \exp \left\{ -\frac{\varepsilon }{2}\sum _{j=1} ^n\mid p_j\mid ^2\right\} \prod _{j=1}^n \mid p_j ^k\mid \hbox {d}p \hbox {d}t\hbox {d}s\\&=\frac{1}{(2\pi )^{nd}}\int _{T_n}\int _{\mathbb {R}^{nd}} \exp \left\{ -\frac{1}{2}\mathbb {E}\left[ \sum _{j=1}^np_j\left( B^{H_1}_{s_j}- \widetilde{B}^{H_2}_{t_j}\right) \right] ^2\right\} \\&\quad \times \exp \left\{ -\frac{\varepsilon }{2}\sum _{j=1} ^n\mid p_j\mid ^2\right\} \prod _{j=1}^n \mid p_j ^k\mid \hbox {d}p\hbox {d}t\hbox {d}s\\&\le \frac{1}{(2\pi )^{nd}}\int _{T_n}\int _{\mathbb {R}^{nd}} \prod _{i=1}^d \left( \prod _{j=1}^n \mid p_{ij}^{k_i}\mid \right) \exp \left\{ -\frac{1}{2}\mathbb {E}\left[ p_{i1}B^{H_1,i}_{s_1}+\cdots \right. \right. \\&\quad \left. \left. +p_{in}B^{H_1,i}_{s_n}\right] ^2-\frac{1}{2}\mathbb {E}\left[ p_{i1}\widetilde{B}^{H_2,i}_{t_1}+\cdots +p_{in}\widetilde{B}^{H_2,i}_{t_n}\right] ^2\right\} \hbox {d}p\hbox {d}t\hbox {d}s . \end{aligned} \end{aligned}$$

The expectations in the above exponent can be computed by

$$\begin{aligned}&\mathbb {E}\left[ p_{i1}B^{H_1,i}_{s_1}+\cdots +p_{in}B^{H_1,i}_{s_n}\right] ^2= (p_{i1}, \ldots , p_{in}) Q_1(p_{i1}, \ldots , p_{in})^\mathrm{T},\\&\mathbb {E}\left[ p_{i1}\widetilde{B}^{H_2,i}_{t_1}+\cdots +p_{in} \widetilde{B}^{H_2,i}_{t_n}\right] ^2=(p_{i1}, \ldots , p_{in}) Q_2 (p_{i1}, \ldots , p_{in})^\mathrm{T}, \end{aligned}$$

where

$$\begin{aligned} Q_1=\left( {\mathbb {E}}\left[ B^{H_1, i}_{s_j} B^{H_1, i}_{s_k}\right] \right) _{1\le j,k\le n} \quad \mathrm{and}\quad Q_2=\left( {\mathbb {E}}\left[ \widetilde{B}^{H_2, i}_{t_j}\widetilde{B}^{H_2, i}_{t_k}\right] \right) _{1\le j,k\le n} \end{aligned}$$

denote the covariance matrices of the n-dimensional random vectors \((B^{H_1,i}_{s_1},\ldots ,B^{H_1,i}_{s_n})\) and \((\widetilde{B}^{H_2,i}_{t_1},\ldots , \widetilde{B}^{H_2,i}_{t_n})\), respectively; since the components are i.i.d., neither matrix depends on i. Thus, we have

$$\begin{aligned} \mathbb {E}\left[ \left| \widehat{\alpha }^{(k)}_{\varepsilon }(0)\right| ^n \right] \le \frac{1}{(2\pi )^{nd}} \int _{T_n} \prod _{i=1}^d I_i(t,s) \hbox {d}t\hbox {d}s, \end{aligned}$$
(2.1)

where

$$\begin{aligned} I_i(t,s):= \int _{\mathbb {R}^{n}} \mid x^{k_i} \mid \exp \left\{ -\frac{1}{2}x^T(Q_1+Q_2)x\right\} \hbox {d}x . \end{aligned}$$

Here we recall \(x=(x_1, \ldots , x_n)\) and write \(x^{k_i}=x_1^{k_i} \cdots x_n^{k_i}\). For each fixed i, let us first compute the integral \(I_i(t,s)\). Denote \(B=Q_1+Q_2\). Then B is a strictly positive definite matrix, and hence \(\sqrt{B}\) exists. Making the substitution \(\xi =\sqrt{B}x\), we obtain

$$\begin{aligned} I_i(t,s)=\int _{\mathbb {R}^{n }}\prod _{j=1}^n \mid (B^{-\frac{1}{2}}\xi )_j\mid ^{k_i} \exp \left\{ -\frac{1}{2}\mid \xi \mid ^2\right\} \mathrm {det}(B) ^{-\frac{1}{2}}\hbox {d}\xi . \end{aligned}$$

To obtain a nice bound for the above integral, let us first diagonalize B:

$$\begin{aligned} B =Q \varLambda Q^{-1} , \end{aligned}$$

where \(\varLambda =\hbox {diag}\{\lambda _1 ,\ldots ,\lambda _n \}\) is a strictly positive diagonal matrix with \({\lambda }_1\le {\lambda }_2\le \cdots \le {\lambda }_n\) and \(Q =(q_{ij})_{1\le i,j\le n} \) is an orthogonal matrix. Hence, we have \(\det (B)={\lambda }_1\cdots {\lambda }_n\). Denote

$$\begin{aligned} \eta = \begin{pmatrix} \eta _1, \eta _2, \ldots , \eta _n \end{pmatrix}^\mathrm{T}=Q^{-1}\xi . \end{aligned}$$

Hence,

$$\begin{aligned} B^{-\frac{1}{2}}\xi= & {} Q\varLambda ^{-1/2}Q^{-1} \xi = Q\varLambda ^{-1/2}\eta \\= & {} Q\begin{pmatrix} \lambda _1^{-\frac{1}{2}}\eta _1\\ \lambda _2^{-\frac{1}{2}}\eta _2\\ \vdots \\ \lambda _n^{-\frac{1}{2}} \eta _n \end{pmatrix} = \begin{pmatrix} q_{1,1}&{}\quad q_{1,2}&{}\quad \cdots &{}\quad q _{1,n} \\ q_{2,1}&{}\quad q_{2,2}&{}\quad \cdots &{}\quad q_{2,n}\\ \vdots &{}\quad \vdots &{}\quad \cdots &{}\quad \vdots \\ q_{n,1}&{}\quad q _{n,2}&{}\quad \cdots &{}\quad q_{n,n}\\ \end{pmatrix} \begin{pmatrix} \lambda _1^{-\frac{1}{2}}\eta _1\\ \lambda _2^{-\frac{1}{2}}\eta _2\\ \vdots \\ \lambda _n^{-\frac{1}{2}} \eta _n \end{pmatrix}. \end{aligned}$$

Therefore, we have

$$\begin{aligned} \mid (B^{-\frac{1}{2}}\xi )_j\mid= & {} \left| \sum _{k=1}^nq_{jk}\lambda _{k}^{-\frac{1}{2}}\eta _{k}\right| \le \lambda _1 ^{-\frac{1}{2}} \sum _{k=1}^n\mid q_{jk} \eta _{k}\mid \\\le & {} \lambda _1 ^{-\frac{1}{2}} \left( \sum _{k=1}^nq_{jk}^2\right) ^{\frac{1}{2}} \left( \sum _{k=1}^n\eta _{k}^2\right) ^{\frac{1}{2}} \le \lambda _1 ^{-\frac{1}{2}} \mid \eta \mid _2 = \lambda _1 ^{-\frac{1}{2}}\mid \xi \mid _2. \end{aligned}$$
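The coordinatewise estimate just obtained holds for any symmetric strictly positive definite matrix; a numerical spot-check with a randomly generated matrix (an illustration, not part of the proof):

```python
# Check that max_j |(B^{-1/2} xi)_j| <= lambda_1^{-1/2} |xi|_2, where
# lambda_1 is the smallest eigenvalue of a random SPD matrix B.
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.normal(size=(n, n))
B = A @ A.T + n * np.eye(n)              # strictly positive definite
lam, Q = np.linalg.eigh(B)               # eigenvalues in ascending order
B_inv_sqrt = Q @ np.diag(lam**-0.5) @ Q.T

xi = rng.normal(size=n)
lhs = np.max(np.abs(B_inv_sqrt @ xi))    # max_j |(B^{-1/2} xi)_j|
rhs = lam[0]**-0.5 * np.linalg.norm(xi)  # lambda_1^{-1/2} |xi|_2
assert lhs <= rhs + 1e-12
```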

Since both \(Q_1\) and \(Q_2\) are positive definite, we see that

$$\begin{aligned} {\lambda }_1\ge {\lambda }_1(Q_1) ,\qquad \mathrm{and}\qquad {\lambda }_1\ge {\lambda }_1(Q_2), \end{aligned}$$

where \( {\lambda }_1(Q_i)\) is the smallest eigenvalue of \(Q_i\), \(i=1, 2\). This means that

$$\begin{aligned} {\lambda }_1 \ge {\lambda }_1(Q_1)^\rho {\lambda }_1(Q_2)^{1-\rho }\quad \hbox {for any }\ \ \rho \in [0, 1]. \end{aligned}$$

This implies

$$\begin{aligned} \mid (B^{-\frac{1}{2}}\xi )_j\mid \le {\lambda }_1(Q_1)^{-\frac{1}{2}\rho } {\lambda }_1(Q_2)^{-\frac{1}{2}(1-\rho )} \mid \xi \mid _2. \end{aligned}$$

Consequently, we have

$$\begin{aligned} I_i(t,s)\le & {} \mathrm {det}(B) ^{-\frac{1}{2}} {\lambda }_1(Q_1)^{-\frac{1}{2}\rho k_i } {\lambda }_1(Q_2)^{-\frac{1}{2}(1-\rho )k_i } \nonumber \\&\qquad \int _{\mathbb {R}^{n }} \mid \xi \mid _2 ^{k_i} \exp \left\{ -\frac{1}{2}\mid \xi \mid ^2\right\} \hbox {d}\xi , \end{aligned}$$
(2.2)

for any \(\rho \in [0,1]\).

Now we are going to find a lower bound for \({\lambda }_1(Q_1)\) (\({\lambda }_1(Q_2)\) can be dealt with in the same way, replacing s by t). Without loss of generality we can assume \(0\le s_1<s_2<\cdots <s_n\le T\). From the definition of \(Q_1\) we have, for any vector \(u=(u_1, \ldots , u_n)^\mathrm{T}\),

$$\begin{aligned} \begin{aligned}&u^TQ_1u ~=\mathrm {Var}\left( u_{1}B^{H_1}_{s_{1}}+u_{2}B^{H_1}_{s_{2}} +\cdots + u_{n}B^{H_1}_{s_{n}}\right) \\&~=\mathrm {Var}\left( (u_{1}+\cdots +u_{n})B^{H_1}_{s_{1}} +(u_{2}+\cdots +u_{n})\left( B^{H_1}_{s_{2}}-B^{H_1}_{s_{1}}\right) \right. \\&\left. \quad +\cdots + (u_{n-1}+u_{n})\left( B^{H_1}_{s_{n-1}}-B^{H_1}_{s_{n-2}}\right) + u_{n}\left( B^{H_1}_{s_{n}}-B^{H_1}_{s_{n-1}}\right) \right) \\ \end{aligned} \end{aligned}$$

Now we use Proposition 1 in “Appendix” to conclude

$$\begin{aligned}\begin{aligned} u^TQ_1u&\ge c^n \left( (u_{1}+\cdots +u_{n})^2s_{1}^{2H_1} +(u_{2}+\cdots +u_{n})^2(s_{2}-s_{1})^{2H_1}\right. \\&\left. \quad +\cdots + (u_{n-1}+u_{n})^2(s_{n-1}-s_{n-2})^{2H_1}+ u_{n}^2(s_{n}-s_{n-1})^{2H_1}\right) \\&\ge c^n\min \left\{ s_{1}^{2H_1},(s_{2}-s_{1})^{2H_1},\ldots , (s_{n}-s_{n-1})^{2H_1}\right\} \\&\quad \cdot \left[ (u_{1}+\cdots +u_{n})^2+(u_{2}+\cdots +u_{n})^2 +\cdots +(u_{n-1}+u_{n})^2+u_{n}^2\right] . \end{aligned} \end{aligned}$$

Consider the function

$$\begin{aligned} f(u_1, \ldots , u_n)= & {} (u_{1}+\cdots +u_{n})^2+(u_{2}+\cdots +u_{n})^2 +\cdots +(u_{n-1}+u_{n})^2+u_{n}^2 \\= & {} (u_1, \ldots , u_n) G(u_1, \ldots , u_n)^\mathrm{T}, \end{aligned}$$

where

$$\begin{aligned} G=\left( \begin{matrix} 1&{}\quad 1&{}\quad 1&{}\quad \cdots &{}\quad 1\\ 0&{}\quad 1&{}\quad 1&{}\quad \cdots &{}\quad 1\\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \cdots &{}\quad \vdots \\ 0&{}\quad 0&{}\quad 0&{}\quad \cdots &{}\quad 1 \end{matrix}\right) . \end{aligned}$$

Since \(G^{-1}\) is the bidiagonal matrix with diagonal entries 1 and superdiagonal entries \(-1\), its operator norm is at most 2, and hence the smallest eigenvalue of the matrix \(G^\mathrm{T}G\) is at least 1/4 for every n. Thus, on the sphere \(u_1^2+\cdots +u_n^2=1\), the function f attains a minimum value \(f_\mathrm{min}\ge 1/4>0\), bounded below independently of n.
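A numerical spot-check of this uniform-in-n eigenvalue bound (an illustration only; the threshold 1/4 is the lower bound obtained from \(\Vert G^{-1}\Vert \le 2\)):

```python
# For the upper-triangular matrix of ones G, the smallest eigenvalue of
# G^T G stays at or above 1/4 for every size n; illustration only.
import numpy as np

for n in (2, 10, 100, 400):
    G = np.triu(np.ones((n, n)))             # upper-triangular matrix of ones
    lam_min = np.linalg.eigvalsh(G.T @ G)[0] # eigvalsh returns ascending order
    assert lam_min >= 0.25
```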

As a consequence we have

$$\begin{aligned} {\lambda }_1(Q_1)= & {} \inf _{|u|=1} u^TQ_1u\nonumber \\\ge & {} c^n\min \left\{ s_{1}^{2H_1},(s_{2}-s_{1})^{2H_1},\ldots , (s_{n}-s_{n-1})^{2H_1}\right\} \inf _{|u|=1} f(u_1, \ldots , u_n)\nonumber \\\ge & {} c^n f_\mathrm{min} \min \left\{ s_{1}^{2H_1},(s_{2}-s_{1})^{2H_1},\ldots , (s_{n}-s_{n-1})^{2H_1}\right\} \nonumber \\\ge & {} K c^n\min \left\{ s_{1}^{2H_1},(s_{2}-s_{1})^{2H_1},\ldots , (s_{n}-s_{n-1})^{2H_1}\right\} . \end{aligned}$$
(2.3)

In a similar way we have

$$\begin{aligned} {\lambda }_1(Q_2)\ge & {} K c^n \min \left\{ t_{1}^{2H_2},(t_{2}-t_{1})^{2H_2},\ldots , (t_{n}-t_{n-1})^{2H_2}\right\} . \end{aligned}$$
(2.4)

The integral in (2.2) can be bounded as

$$\begin{aligned} I_2:= & {} \int _{\mathbb {R}^{n }} \mid \xi \mid ^{ k_i } \exp \left\{ -\frac{1}{2} \mid \xi \mid ^2 \right\} \hbox {d}\xi \nonumber \\\le & {} n^{\frac{k_i}{2}}\int _{\mathbb {R}^{n}}\max _{1\le j\le n} \mid \xi _j\mid ^{k_i} \exp \left\{ -\frac{1}{2} \mid \xi \mid ^2 \right\} \hbox {d}\xi \nonumber \\\le & {} n^{\frac{k_i}{2}}\int _{\mathbb {R}^{n }}\sum _{j=1}^n \mid \xi _j\mid ^{k_i} \exp \left\{ -\frac{1}{2} \mid \xi \mid ^2 \right\} \hbox {d}\xi \nonumber \\\le & {} n^{\frac{k_i}{2}+1}\int _{\mathbb {R}^{n }} \mid \xi _{1}\mid ^{k_i} \exp \left\{ -\frac{1}{2} \mid \xi \mid ^2 \right\} \hbox {d}\xi \nonumber \\\le & {} n^{\frac{k_i}{2}+1}C^n \le C^n. \end{aligned}$$
(2.5)

Substituting (2.3)–(2.5) into (2.2), we obtain

$$\begin{aligned} I_i(t,s)\le & {} C^n \mathrm {det}(B) ^{-\frac{1}{2}} \min _{j=1,\ldots ,n}(s_{j}-s_{ j-1 })^{- \rho H_1k_i} \nonumber \\&\qquad \min _{j=1,\ldots ,n}(t_{j}-t_{ j-1 })^{- (1-\rho ) H_2k_i} \end{aligned}$$

for a possibly different constant C independent of n, where here and in what follows \(s_0=t_0=0\).

Next we obtain a lower bound for \(\mathrm {det}(B) \). According to [2, Lemma 9.4]

$$\begin{aligned} \mathrm {det}(Q_1+Q_2)\ge \mathrm {det}(Q_1)^{\gamma }\mathrm {det}(Q_2)^{1-\gamma }, \end{aligned}$$

for any two symmetric positive definite matrices \(Q_1\) and \(Q_2\) and for any \(\gamma \in [0, 1]\). It is also well known (see, e.g., [2,3,4]) that

$$\begin{aligned} \begin{aligned} \mathrm {det}(Q_1) \ge&C^ns_{1}^{2H_1}(s_{2}-s_{1})^{2H_1}\cdots (s_{n}-s_{n-1})^{2H_1} \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} \mathrm {det}(Q_2)&\ge C^nt_{1}^{2H_2}(t_{2}-t_{1})^{2H_2}\cdots (t_{n}-t_{ n-1 })^{2H_2}. \end{aligned} \end{aligned}$$
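The determinant interpolation inequality \(\mathrm {det}(Q_1+Q_2)\ge \mathrm {det}(Q_1)^{\gamma }\mathrm {det}(Q_2)^{1-\gamma }\) quoted above can be spot-checked numerically (an illustration only; it follows from Minkowski's determinant inequality together with the weighted AM–GM inequality):

```python
# Randomized check of det(Q1 + Q2) >= det(Q1)^gamma * det(Q2)^(1 - gamma)
# for random symmetric positive definite matrices; illustration only.
import numpy as np

rng = np.random.default_rng(2)
n = 6
for _ in range(100):
    A = rng.normal(size=(n, n))
    C = rng.normal(size=(n, n))
    Q1 = A @ A.T + 0.1 * np.eye(n)       # SPD
    Q2 = C @ C.T + 0.1 * np.eye(n)       # SPD
    gamma = rng.uniform()
    lhs = np.linalg.det(Q1 + Q2)
    rhs = np.linalg.det(Q1)**gamma * np.linalg.det(Q2)**(1 - gamma)
    assert lhs >= rhs * (1 - 1e-10)
```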

As a consequence, we have

$$\begin{aligned} I_i(t,s)\le & {} C^n \min _{j=1,\ldots ,n}(s_{j}-s_{ j-1 })^{- \rho H_1k_i} \min _{j=1,\ldots ,n}(t_{j}-t_{ j-1 })^{- (1-\rho ) H_2k_i} \\&\quad \left[ s_{1} (s_{2}-s_{1}) \ldots (s_{n}-s_{n-1}) \right] ^{-\gamma H_1} \left[ t_{1} (t_{2}-t_{1}) \ldots (t_{n}-t_{n-1}) \right] ^{-(1-\gamma )H_2 } \end{aligned}$$

Thus,

$$\begin{aligned} \mathbb {E} \left[ \left| \widehat{\alpha }^{(k)}_{\varepsilon }(0)\right| ^n \right]\le & {} (n!)^2 C^n \int _{\Delta _n^2}\min _{j=1,\ldots ,n}(s_{j}-s_{ j-1 })^{- \rho H_1|k|}\\&\min _{j=1,\ldots ,n}(t_{j}-t_{ j-1 })^{- (1-\rho ) H_2|k|} \left[ s_{1} (s_{2}-s_{1}) \ldots (s_{n}-s_{n-1}) \right] ^{-\gamma H_1d}\\&\left[ t_{1} (t_{2}-t_{1}) \ldots (t_{n}-t_{n-1}) \right] ^{-(1-\gamma )H_2d }\hbox {d}t\hbox {d}s\\\le & {} (n!)^2 C^n \sum _{i,j=1}^n \int _{\Delta _n^2} (s_{i}-s_{ i-1 })^{- \rho H_1|k|}\\&(t_{j}-t_{ j-1 })^{- (1-\rho ) H_2|k|} \left[ s_{1} (s_{2}-s_{1}) \ldots (s_{n}-s_{n-1}) \right] ^{-\gamma H_1d}\\&\left[ t_{1} (t_{2}-t_{1}) \ldots (t_{n}-t_{n-1}) \right] ^{-(1-\gamma )H_2d }\hbox {d}t\hbox {d}s, \end{aligned}$$

where \(\Delta _n=\left\{ 0<s_1<\cdots <s_n\le T\right\} \) denotes the simplex in \([0, T]^n\). We choose \(\rho =\gamma =\frac{H_2}{H_1+H_2}\), so that \(\rho H_1=(1-\rho )H_2=\gamma H_1=(1-\gamma )H_2=\frac{H_1H_2}{H_1+H_2}\), to obtain

$$\begin{aligned}&\mathbb {E} \left[ \left| \widehat{\alpha }^{(k)}_{\varepsilon }(0)\right| ^n \right] \le (n!)^2 C^n \sum _{i,j=1}^n I_{3,i}I_{3,j}, \end{aligned}$$

where

$$\begin{aligned} I_{3,j}= \int _{\Delta _n } (t_{j}-t_{ j-1 })^{- \frac{H_1H_2}{H_1+H_2} |k|} \left[ t_{1} (t_{2}-t_{1}) \ldots (t_{n}-t_{n-1}) \right] ^{-\frac{H_1H_2}{H_1+H_2} d }\hbox {d}t , \end{aligned}$$

and \(I_{3,i}\) is defined analogously with t replaced by s. By Lemma 4.5 of [5] (restated as Lemma 4 in the Appendix), we see that if

$$\begin{aligned} \frac{H_1H_2}{H_1+H_2} (|k|+d)< 1, \end{aligned}$$

then

$$\begin{aligned} I_{3,j}\le \frac{ C^n T^{ \kappa _1 n-\frac{H_1H_2|k|}{H_1+H_2} }}{\varGamma \left( n\kappa _1- \frac{H_1H_2}{H_1+H_2} |k| +1\right) }, \end{aligned}$$

where

$$\begin{aligned} \kappa _1=1-\frac{dH_1H_2}{H_1+H_2} . \end{aligned}$$

Substituting this bound we obtain

$$\begin{aligned} \mathbb {E} \left[ \left| \widehat{\alpha }^{(k)}_{\varepsilon }(0)\right| ^n \right]\le & {} n^2 (n!)^2 C^n \frac{ T^{ 2\kappa _1 n-\frac{2H_1H_2|k|}{H_1+H_2} }}{\varGamma ^2\left( n\kappa _1- \frac{H_1H_2}{H_1+H_2} |k| +1\right) }\\\le & {} (n!)^2 C^n \frac{ T^{ 2\kappa _1 n-\frac{2H_1H_2|k|}{H_1+H_2} }}{\left( \Gamma ( n\kappa _1- \frac{H_1H_2}{H_1+H_2} |k| +1)\right) ^2 } \\\le & {} C_T (n!)^{2-2\kappa _1} C^n T^{ 2\kappa _1 n } , \end{aligned}$$

where C is a constant independent of T and n, \(C_T\) is a constant independent of n, and the last step uses Stirling's formula for the Gamma function.

For any \(\beta >0\), the above inequality together with Hölder's inequality implies

$$\begin{aligned} \begin{aligned} \mathbb {E}\left[ \left| \widehat{\alpha }^{(k)}(0)\right| ^{n\beta }\right] \le C_T (n!)^{ \beta (2-2\kappa _1)} C^n T^{ 2\beta \kappa _1 n } \end{aligned} \end{aligned}$$

From this bound we conclude that there exists a constant \(C_{d,T,k}>0\) such that

$$\begin{aligned} \mathbb {E}\left[ \exp \left\{ C_{d,T,k}\left| \widehat{\alpha }^{(k)}(0)\right| ^{\beta }\right\} \right]= & {} \sum _{n=0}^{\infty }\frac{C_{d,T,k}^n}{n!}{\mathbb {E}}\left| \widehat{\alpha }^{(k)}(0) \right| ^{n\beta }\\\le & {} C_T \sum _{n=0}^\infty C_{d,T, k}^n (n!)^{ \beta (2-2\kappa _1)-1} C^n T^{ 2\beta \kappa _1 n } <\infty ,\\ \end{aligned}$$

when \(C_{d, T, k}\) is sufficiently small (but strictly positive), where \(\beta =\frac{H_1+H_2}{2dH_1H_2}\), so that \(\beta (2-2\kappa _1)-1=0\) and the series converges. \(\square \)
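The algebra behind this choice of \(\beta \), namely that \(\beta (2-2\kappa _1)=1\) for \(\beta =\frac{H_1+H_2}{2dH_1H_2}\), can be checked mechanically (an illustration only):

```python
# Randomized check that beta * (2 - 2*kappa_1) = 1 for
# beta = (H1 + H2) / (2 d H1 H2) and kappa_1 = 1 - d H1 H2 / (H1 + H2).
import random

random.seed(3)
for _ in range(1000):
    H1, H2 = random.uniform(0.01, 0.99), random.uniform(0.01, 0.99)
    d = random.randint(1, 5)
    kappa1 = 1 - d * H1 * H2 / (H1 + H2)
    beta = (H1 + H2) / (2 * d * H1 * H2)
    assert abs(beta * (2 - 2 * kappa1) - 1) < 1e-12
```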

Proof of part (iii)

Without loss of generality, we consider only the case \(k=(k_1,0, \ldots , 0)\) and we write k for \(k_1\). By the definition of the k-th order derivative of the intersection local time of independent d-dimensional fractional Brownian motions, we have

$$\begin{aligned} \begin{aligned} \mathbb {E}\left[ \hat{\alpha }^{(k)}_{\varepsilon }(0)\right]&=\frac{1}{(2\pi )^d}\int _{0}^T\int _0^T\int _{\mathbb {R}^d}{\mathbb {E}}\left[ \hbox {e}^{i\langle \xi ,B^{H_1}_t-\widetilde{B}^{H_2}_s\rangle }\right] \hbox {e}^{-\frac{\varepsilon \mid \xi \mid ^2}{2}}\mid \xi _1\mid ^k\hbox {d}\xi \hbox {d}t\hbox {d}s\\&=\frac{1}{(2\pi )^d}\int _{0}^T\int _0^T\int _{\mathbb {R}^d}\hbox {e}^{-(\varepsilon +t^{2H_1} +s^{2H_2}) \frac{\mid \xi \mid ^2}{2}} \mid \xi _1\mid ^k\hbox {d}\xi \hbox {d}t\hbox {d}s.\\ \end{aligned} \end{aligned}$$

Thus, letting \(\varepsilon \downarrow 0\) and using monotone convergence, we have

$$\begin{aligned} \mathbb {E} \left[ \hat{\alpha }^{(k)}(0)\right] =\frac{1}{(2\pi )^d}\int _{0}^T\int _0^T\int _{\mathbb {R}^d}\hbox {e}^{-(t^{2H_1}+s^{2H_2}) \frac{\mid \xi \mid ^2}{2}} \mid \xi _1\mid ^k\hbox {d}\xi \hbox {d}t\hbox {d}s. \end{aligned}$$

Integrating with respect to \(\xi \), we find

$$\begin{aligned} \mathbb {E}\left[ \hat{\alpha }^{(k)}(0)\right] =c_{k, d} \int _{0}^T\int _0^T(t^{2H_1}+s^{2H_2}) ^{-\frac{(k+d) }{2}}\hbox {d}t\hbox {d}s \end{aligned}$$

for some constant \(c_{k, d}\in (0,\infty )\).
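The integration with respect to \(\xi \) above rests on the Gaussian scaling identity \(\int _{\mathbb {R}^d}\hbox {e}^{-a\mid \xi \mid ^2/2}\mid \xi _1\mid ^k\hbox {d}\xi =c_{k,d}\,a^{-(k+d)/2}\) with \(a=t^{2H_1}+s^{2H_2}\). A numerical check of this scaling in \(d=1\), \(k=2\) (an illustration only; the grid and the values of a are arbitrary):

```python
# The ratio gauss_moment(a) * a^((k+1)/2) should be independent of a,
# reflecting int exp(-a xi^2 / 2) |xi|^k dxi = c_k * a^(-(k+1)/2) in d = 1.
import numpy as np

def gauss_moment(a, k=2):
    xi = np.linspace(-30.0, 30.0, 200001)
    dxi = xi[1] - xi[0]
    return float(np.sum(np.exp(-a * xi**2 / 2) * np.abs(xi)**k) * dxi)

k = 2
ratios = [gauss_moment(a, k) * a**((k + 1) / 2) for a in (0.5, 1.0, 2.0)]
assert np.allclose(ratios, ratios[0], rtol=1e-8)
```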

We now analyze this integral. Assume first \(0<H_1\le H_2<1\). The substitution \(t=u^{\frac{H_2}{H_1}}\), for which \(\hbox {d}t=\frac{H_2}{H_1}u^{\frac{H_2}{H_1}-1}\hbox {d}u\), yields

$$\begin{aligned} I_4:= & {} \int _{0}^T\int _0^T(t^{2H_1}+s^{2H_2}) ^{-\frac{(k+d) }{2}}\hbox {d}t\hbox {d}s \nonumber \\= & {} \frac{H_2}{H_1}\int _{0}^T\int _0^{T^{\frac{H_1}{H_2}}}(u^{2H_2}+s^{2H_2})^{-\frac{k+d}{2}} u^{\frac{H_2}{H_1}-1}\hbox {d}u\hbox {d}s. \end{aligned}$$
(2.6)

Using polar coordinates \(u=r\cos \theta \) and \(s=r\sin \theta \), where \(0\le \theta \le \frac{\pi }{2}\) and \(0\le r\le T\wedge T^{\frac{H_1}{H_2}}\), and noting that \(\frac{H_2}{H_1}\ge 1\), we have

$$\begin{aligned} I_4\ge & {} \int _0^{\frac{\pi }{2}}(\cos \theta )^{\frac{H_2}{H_1}-1} \left( \cos ^{2H_2}\theta +\sin ^{2H_2}\theta \right) ^{-\frac{(k+d) }{2}} \hbox {d}\theta \int _0^{ T\wedge T ^{\frac{H_1}{H_2}}} r^{-(k+d)H_2 + \frac{H_2}{H_1}}\hbox {d}r \nonumber \\ \end{aligned}$$
(2.7)

since the planar domain \(\left\{ (r, \theta ):\ 0\le r\le T\wedge T^{\frac{H_1}{H_2}},\ 0\le \theta \le \frac{\pi }{2}\right\} \) is contained in the planar domain \(\left\{ (s,u):\ 0\le s\le T,\ 0\le u\le T ^{\frac{H_1}{H_2}} \right\} \). The integral with respect to r appearing in (2.7) is finite only if \(-(k+d)H_2 +\frac{H_2}{H_1}>-1\), that is, only when condition (1.5) is satisfied; hence \(\hat{\alpha }^{(k)}(0)\in L^1(\Omega )\) forces (1.5). The case \(0<H_2\le H_1<1\) can be dealt with similarly. This completes the proof of our main theorem. \(\square \)
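The final equivalence used above, namely that \(-(k+d)H_2+\frac{H_2}{H_1}>-1\) is the same as condition (1.5), amounts to multiplying through by \(H_1>0\); a randomized sanity check (an illustration only, skipping draws too close to the boundary, where floating point could flip a comparison):

```python
# Check that -(k+d)*H2 + H2/H1 > -1 holds exactly when
# H1*H2/(H1+H2) * (k+d) < 1, over random parameters with H1 <= H2.
import random

random.seed(4)
checked = 0
while checked < 500:
    H1 = random.uniform(0.01, 0.99)
    H2 = random.uniform(H1, 0.99)        # the case 0 < H1 <= H2 < 1 treated above
    k, d = random.randint(0, 6), random.randint(1, 4)
    if abs(H1 * H2 * (k + d) - (H1 + H2)) < 1e-9:
        continue                         # skip near-boundary draws
    cond_r = -(k + d) * H2 + H2 / H1 > -1
    cond_15 = H1 * H2 / (H1 + H2) * (k + d) < 1
    assert cond_r == cond_15
    checked += 1
```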

3 Appendix

In this section, we recall some known results that are used in this paper. The following lemma is Lemma 8.1 of [1].

Lemma 1

Let \(X_1\), \(\dots \), \(X_n\) be mean zero, jointly Gaussian random variables, and let \( Y_1=X_1,\ Y_2=X_2-X_1,\ldots ,Y_n=X_n-X_{n-1}\). Then

$$\begin{aligned} \begin{aligned} \mathrm{Var}\left\{ \sum _{j=1}^nv_jY_j\right\} \ge \frac{R}{\prod _{j=1}^n\sigma _{j}^2}\frac{1}{n}\sum _{j=1}^nv_j^2\sigma _j^2, \end{aligned} \end{aligned}$$

where \(\sigma _{j}^2=\mathrm{Var}(Y_j)\) and R is the determinant of the covariance matrix of \(\{X_i,i=1,\ldots ,n\}\), which is also given by the following product of conditional variances

$$\begin{aligned} \begin{aligned} R=\mathrm{Var}(X_1)\mathrm{Var}(X_2\mid X_1)\ldots \mathrm{Var}(X_n\mid X_1,\ldots ,X_{n-1}). \end{aligned} \end{aligned}$$

The following lemma is from [4], Lemma A.1.

Lemma 2

Let \((\Omega , \mathcal {F}, P)\) be a probability space and let F be a square integrable random variable. Suppose that \(\mathcal {G}_1\subset \mathcal {G}_2\) are two \(\sigma \)-fields contained in \(\mathcal {F}\). Then

$$\begin{aligned} \begin{aligned} \mathrm{Var}(F\mid \mathcal {G}_1)\ge \mathrm{Var}(F\mid \mathcal {G}_2). \end{aligned} \end{aligned}$$

The following is Lemma 7.1 of [10] applied to fractional Brownian motion.

Lemma 3

If \((B_t, 0\le t<\infty )\) is a fractional Brownian motion with Hurst parameter H, then

$$\begin{aligned} \mathrm{Var}(B_t\mid B_s,\ |s-t|\ge r)=cr^{2H}. \end{aligned}$$

Combining the above three lemmas we have the following

Proposition 1

Let \((B_t, 0\le t<\infty )\) be the fractional Brownian motion of Hurst H and let \(0\le s_1<\cdots<s_n<\infty \). Then there is a constant c independent of n such that

$$\begin{aligned}&\mathrm {Var}\big (\xi _{1}B_{s_{1}}+\xi _{2}\left( B_{s_{2}} -B_{s_{1}}\right) +\cdots + \xi _{n}\left( B_{s_{n}} -B_{s_{n-1}}\right) \big ) \nonumber \\&\quad \ge c^n\left[ \xi _1^2 \mathrm{Var}( B_{s_{1}})+ \xi _{2}^2\mathrm{Var}\left( B_{s_{2}} -B_{s_{1}}\right) +\cdots + \xi _{n}^2\mathrm{Var}\left( B_{s_{n}} -B_{s_{n-1}}\right) \right] . \nonumber \\ \end{aligned}$$
(3.1)

Proof

Let \(X_i=B_{s_{i}} -B_{s_{i-1}}\) (with the convention \(s_0=0\), so that \(X_1=B_{s_1}\)). From Lemmas 2 and 3 we see

$$\begin{aligned} R_i:= & {} \mathrm{Var}(X_i\mid X_1,\ldots ,X_{i-1})\ge \mathrm{Var}(B_{s_i}|\mathcal {F}_{s_{i-1}})\\\ge & {} c|s_i-s_{i-1}|^{2H} =c\sigma _i^2, \end{aligned}$$

where \(\mathcal {F}_t=\sigma (B_s, s\le t)\). From the definition of R we see \(R\ge c^n \prod _{i=1}^n \sigma _i^2\). The proposition now follows from Lemma 1, absorbing the factor \(\frac{1}{n}\) into \(c^n\). \(\square \)

The following lemma is Lemma 4.5 of [5].

Lemma 4

Let \(\alpha \in (-1+\varepsilon ,1)^m\) with \(\varepsilon >0\) and set \(\mid \alpha \mid =\sum _{i=1}^m\alpha _i\). Denote \(T_m(t)=\{(r_1,r_2,\ldots ,r_m)\in \mathbb {R}^m:0<r_1<\cdots<r_m<t\}\). Then there is a constant \(\kappa \) such that

$$\begin{aligned} \begin{aligned} J_m(t,\alpha ):=\int _{T_m(t)}\prod _{i=1}^m(r_i-r_{i-1})^{\alpha _i}\mathrm{d}r\le \frac{\kappa ^mt^{\mid \alpha \mid +m}}{\Gamma (\mid \alpha \mid +m+1)}, \end{aligned} \end{aligned}$$

where by convention, \(r_0=0\).
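For exponents \(\alpha _i>-1\), the integral \(J_m(t,\alpha )\) can in fact be evaluated exactly by the Dirichlet–Liouville formula \(J_m(t,\alpha )=\prod _{i=1}^m\Gamma (\alpha _i+1)\, t^{\mid \alpha \mid +m}/\Gamma (\mid \alpha \mid +m+1)\), which is the source of the Gamma factor in the stated bound. A numerical spot-check for \(m=2\) (an illustration with arbitrarily chosen exponents):

```python
# Midpoint-rule check of the exact Dirichlet-Liouville value of J_m(t, alpha)
# for m = 2, alpha = (0.5, 0.5), t = 1; illustration only.
import numpy as np
from math import gamma

t, alpha = 1.0, (0.5, 0.5)
h = 1.0 / 1000
r = np.arange(h / 2, t, h)                      # midpoints of a uniform grid
R2, R1 = np.meshgrid(r, r, indexing="ij")       # R1 ~ r_1, R2 ~ r_2
vals = np.where(R1 < R2, R1**alpha[0] * np.abs(R2 - R1)**alpha[1], 0.0)
J_numeric = vals.sum() * h * h                  # integral over {0 < r_1 < r_2 < t}

J_exact = (gamma(alpha[0] + 1) * gamma(alpha[1] + 1)
           * t**(sum(alpha) + 2) / gamma(sum(alpha) + 2 + 1))
assert abs(J_numeric - J_exact) < 1e-3
```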