1 Introduction

Goodness of fit (GoF) tests occupy an important place in statistics because they provide a bridge between mathematical models and real data. Our work is devoted to the problem of the construction of a GoF test from observations of an ergodic diffusion process in the situation when the basic hypothesis is composite parametric. We propose an asymptotically distribution free test, which is based on a linear transformation of the normalized deviation of the empirical density.

Let us first recall the well-known properties of GoF tests in the statistics of i.i.d. observations \(X_1,\ldots ,X_n\). If we have to test the hypothesis \(\mathcal{H}_0\) that their distribution function \(F\left( x\right) =F_0\left( x\right) \), we can use (among others) the Cramér-von Mises test \(\hat{\psi }_n={{1\!\hbox {I}}}_{\left\{ \Delta _n>c_\varepsilon \right\} }\), where

$$\begin{aligned} \Delta _n=n\int _{-\infty }^{\infty }\left[ \hat{F}_n\left( x\right) -F_0\left( x\right) \right] ^2 \mathrm{d}F_0\left( x\right) ,\quad \quad \hat{F}_n\left( x\right) =\frac{1}{n}\sum _{j=1}^{n}{1\!\hbox {I}}_{\left\{ X_j<x \right\} }. \end{aligned}$$
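The integral defining \(\Delta _n\) has the well-known closed form \(\Delta _n=\frac{1}{12n}+\sum _{j=1}^{n}\left[ F_0(x_{(j)})-\frac{2j-1}{2n}\right] ^2\), where \(x_{(j)}\) are the order statistics. A minimal computational sketch (the function name is ours):

```python
import numpy as np

def cramer_von_mises(sample, F0):
    """Delta_n = n * int (hat F_n - F0)^2 dF0, via the exact closed form
    Delta_n = 1/(12 n) + sum_j (F0(x_(j)) - (2j - 1)/(2 n))^2."""
    u = np.sort(F0(np.asarray(sample, dtype=float)))  # probability transforms of order statistics
    n = u.size
    j = np.arange(1, n + 1)
    return 1.0 / (12 * n) + float(np.sum((u - (2 * j - 1) / (2 * n)) ** 2))
```

For a single observation \(x=0.5\) under the uniform \(F_0(x)=x\) the statistic equals \(1/12\), which is a convenient sanity check.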

A remarkable property of this (and some other) tests is the fact that under the hypothesis \(\mathcal{H}_0\) the statistic \(\Delta _n\) converges in distribution:

$$\begin{aligned} \Delta _n\Longrightarrow \Delta \equiv \int _{0}^{1}B\left( t\right) ^2\,\mathrm{d}t, \end{aligned}$$

where \(B\left( t\right) \), \(0\le t\le 1\), is a Brownian bridge. Tests whose limit distribution does not depend on the underlying model (here \(F_0\left( \cdot \right) \)) are called asymptotically distribution free (ADF). If we are interested in the construction of tests with asymptotically fixed first type error \(\varepsilon \in \left( 0,1\right) \), i.e., tests \(\bar{\psi }_n\) satisfying

$$\begin{aligned} \mathop {\text {lim}}\limits _{n\rightarrow \infty }\mathbf{E}_{0 }\, \bar{\psi }_n=\varepsilon , \end{aligned}$$

then for such tests the threshold \(c_\varepsilon \) can be chosen once for all problems with the same limit distribution. Indeed, the threshold \(c_\varepsilon \) for the test \(\hat{\psi }_n\) is the solution of the equation \( \mathbf{P}\left\{ \Delta >c_\varepsilon \right\} =\varepsilon \), which is the same for all possible \(F_0\left( \cdot \right) \).
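The equation \( \mathbf{P}\left\{ \Delta >c_\varepsilon \right\} =\varepsilon \) has no closed-form solution, but since the limit law \(\int _0^1B\left( t\right) ^2\mathrm{d}t\) is model free, the threshold can be tabulated once by Monte Carlo. A sketch (the discretization sizes are arbitrary choices of ours):

```python
import numpy as np

def cvm_threshold(eps, n_paths=20000, n_grid=500, seed=1):
    """Monte Carlo approximation of c_eps solving P(Delta > c_eps) = eps,
    where Delta = int_0^1 B(t)^2 dt and B is a Brownian bridge."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_grid
    t = np.linspace(dt, 1.0, n_grid)
    # simulate Wiener paths and turn them into bridges: B(t) = W(t) - t W(1)
    dw = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_grid))
    w = np.cumsum(dw, axis=1)
    bridge = w - t * w[:, -1:]
    delta = np.sum(bridge ** 2, axis=1) * dt    # Riemann sum of B(t)^2
    return float(np.quantile(delta, 1.0 - eps))
```

For \(\varepsilon =0.05\) the approximation should be close to the classical tabulated value \(c_{0.05}\approx 0.461\).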

If the basic hypothesis \(\mathcal{H}_0\) is parametric: \(F\left( x\right) =F_0\left( \vartheta ,x\right) \), where \(\vartheta \in \Theta \subset {\mathbb {R}}^d\) is an unknown parameter, then the situation changes, and the limit distribution of the similar statistic

$$\begin{aligned} \hat{\Delta }_n=n\int _{-\infty }^{\infty }\left[ \hat{F}_n\left( x\right) -F_0\left( \hat{\vartheta }_n,x\right) \right] ^2 \mathrm{d}F_0\left( \hat{\vartheta }_n,x\right) \Longrightarrow \hat{\Delta } , \end{aligned}$$

(\(\hat{\vartheta }_n\) is the MLE) can be written in the following form

$$\begin{aligned} \hat{\Delta }=\int _{0}^{1 }U\left( t\right) ^2\mathrm{d}t,\quad \quad \quad U\left( t\right) =B\left( t\right) -\left( \zeta ,H\left( t\right) \right) \end{aligned}$$
(1)

where \(\zeta =\zeta \left( \vartheta ,F_0\right) \) is a Gaussian vector and \(H\left( t \right) =H\left( \vartheta ,F_0,t\right) \) is some deterministic vector-function (Darling 1955). If we decide to use the test \(\hat{\psi }_n={1\!\hbox {I}}_{\left\{ \hat{\Delta }_n>c_\varepsilon \right\} }\), then we need to find \(c_\varepsilon =c_\varepsilon \left( \vartheta ,F_0 \right) \) such that \(\mathbf{P}_\vartheta \left( \hat{\Delta } >c_\varepsilon \right) =\varepsilon \), verify that \(c_\varepsilon \left( \vartheta ,F_0 \right) \) is a continuous function of \(\vartheta \), and put \(\bar{c}_\varepsilon =c_\varepsilon \left( \bar{\vartheta }_n ,F_0 \right) \), where \(\bar{\vartheta }_n\) is some consistent estimator of \(\vartheta \) (say, the MLE). Then it can be shown that for the test \(\hat{\psi }_n={1\!\hbox {I}}_{\left\{ \hat{\Delta }_n>\bar{c}_\varepsilon \right\} }\) we have

$$\begin{aligned} \mathop {\text {lim}}\limits _{n\rightarrow \infty }\mathbf{E}_{\vartheta }\, \hat{\psi }_n=\varepsilon \quad \quad \mathrm{for} \quad \mathrm{all}\quad \quad \vartheta \in \Theta . \end{aligned}$$

We denote the class of such tests by \(\mathcal{K}_\varepsilon \). For a given family \(F_0\left( \cdot \right) \), the function \(c_\varepsilon \left( \vartheta ,F_0 \right) \) can be found by numerical simulation. Of course, this problem becomes much more complicated than the first one with the simple basic hypothesis. More about GoF tests can be found, e.g., in Lehmann and Romano (2005), Martynov (1978) or any other book on this subject.

Another possibility is to find a transformation \(L\left[ U_n\right] \) of the statistic \(U_n\left( x\right) =\sqrt{n} \left( \hat{F}_n\left( x\right) -F(\hat{\vartheta }_n,x)\right) \) such that

$$\begin{aligned} \tilde{\Delta }_n=\int _{-\infty }^{\infty }L\left[ U_n\right] \left( x\right) ^2\mathrm{d}F(\hat{\vartheta } _n,x)\Longrightarrow \tilde{\Delta }\equiv \int _{0}^{1}w_s^2\,\mathrm{d}s,\quad \quad \mathbf{P}\left( \tilde{\Delta } >c_\varepsilon \right) =\varepsilon , \end{aligned}$$

where \(w_s\), \(0\le s\le 1\), is some Wiener process. Then we obtain the test \(\tilde{\psi }_n={1\!\hbox {I}}_{\left\{ \tilde{\Delta } _n>c_\varepsilon \right\} }\in \mathcal{K}_\varepsilon \). Such a linear transformation was proposed in Khmaladze (1981).

In our work we consider a similar problem of the construction of ADF GoF tests from observations of ergodic diffusion processes. We are given a stochastic differential equation

$$\begin{aligned} \mathrm{d}X_s=S\left( X_s\right) \,\mathrm{d}s+\sigma \left( X_s\right) \,\mathrm{d}W_s,\quad X_0,\quad 0\le s\le T , \end{aligned}$$
(2)

where \(W_s\), \(0\le s\le T\), is a Wiener process and \(\sigma \left( x\right) ^2>0\) is a known function, and we have to test the composite basic hypothesis \(\mathcal{H}_0\) that

$$\begin{aligned} \mathrm{d}X_s=S\left( \vartheta ,X_s\right) \,\mathrm{d}s+\sigma \left( X_s\right) \,\mathrm{d}W_s,\quad X_0,\quad 0\le s\le T , \end{aligned}$$
(3)

i.e., the trend coefficient is some known function \(S\left( \vartheta ,x\right) \) which depends on the unknown parameter \(\vartheta \in \Theta \subset {\mathbb {R}}^d\). Here and in the sequel we suppose that the initial value \(X_0\) has the distribution function of the invariant law of this ergodic diffusion process. The invariant distribution function and density function are denoted by \(F\left( \vartheta ,x\right) \) and \(f\left( \vartheta ,x\right) \), respectively.

Let us denote by \(\hat{F}_T\left( x\right) \) and \(\hat{f}_T\left( x\right) \) the empirical distribution function of the invariant law and the empirical density (local time estimator of the invariant density) defined by the relations

$$\begin{aligned} \hat{F}_T\left( x\right) =\frac{1}{T} \int _{0}^{T}{1\!\hbox {I}}_{\left\{ X_s<x\right\} }\,\mathrm{d}s,\quad \hat{f}_T\left( x\right) =\frac{\Lambda _T\left( x\right) }{\sigma \left( x\right) ^2T}, \end{aligned}$$

where \(\Lambda _T\left( x\right) \) is the local time of the observed diffusion process (see Revuz and Yor (1991) for the definition and properties). Recall that we call the random function \(\hat{f}_T\left( x\right) \) the empirical density because it is the derivative of the empirical distribution function.
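As a purely numerical illustration (not part of the original argument), \(\hat{F}_T\left( x\right) \) can be approximated from an Euler-discretized path; we take the hypothetical Ornstein-Uhlenbeck model \(\mathrm{d}X_s=-X_s\,\mathrm{d}s+\mathrm{d}W_s\), whose invariant law is \(\mathcal{N}\left( 0,1/2\right) \):

```python
import numpy as np

def euler_ou_path(T=500.0, dt=0.01, seed=2):
    """Euler scheme for dX = -X ds + dW, started from the invariant law
    N(0, 1/2) so that the path is (approximately) stationary."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = rng.normal(scale=np.sqrt(0.5))
    noise = rng.normal(scale=np.sqrt(dt), size=n)
    for k in range(n):
        x[k + 1] = x[k] - x[k] * dt + noise[k]
    return x

def empirical_F(path, x):
    """hat F_T(x) = (1/T) int_0^T 1{X_s < x} ds, as a Riemann sum."""
    return float(np.mean(path < x))
```

By symmetry of the invariant law, \(\hat{F}_T\left( 0\right) \) should be close to \(1/2\) for large \(T\).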

The Cramér-von Mises type statistics are based on \(L_2\) deviations of these estimators. Denoting

$$\begin{aligned} \hat{\eta }_T\left( x\right)&=\sqrt{T}\left( \hat{F}_T\left( x\right) -F\left( \hat{\vartheta }_T,x\right) \right) ,\quad \hat{\zeta }_T\left( x\right) =\sqrt{T}\left( \hat{f}_T\left( x\right) -f\left( \hat{\vartheta }_T,x\right) \right) , \end{aligned}$$

where \(\hat{\vartheta }_T\) is the MLE of the parameter \(\vartheta \), these statistics can be introduced as follows

$$\begin{aligned} \hat{\Delta }_T=\int _{-\infty }^{\infty }\hat{\eta }_T\left( x\right) ^2\mathrm{d}F\left( \hat{\vartheta }_T,x\right) ,\quad \quad \hat{\delta } _T=\int _{-\infty }^{\infty }\hat{\zeta }_T\left( x\right) ^2\mathrm{d}F\left( \hat{\vartheta }_T,x\right) . \end{aligned}$$

Unfortunately, the immediate use of the tests \(\hat{\Psi } _T={1\!\hbox {I}}_{\left\{ \hat{\Delta }_T>c_\varepsilon \right\} }\) and \(\hat{\psi } _T={1\!\hbox {I}}_{\left\{ \hat{\delta }_T>d_\varepsilon \right\} }\) leads to the same problems as in the i.i.d. case, i.e., the limit (\(T\rightarrow \infty \)) distributions of these statistics under the hypothesis \(\mathcal{H}_0\) depend on the model \(S\left( \cdot ,\cdot \right) , \sigma \left( \cdot \right) \) and on the true value \(\vartheta \).

Moreover, in contrast to the i.i.d. case, even if the basic hypothesis is simple, \(\Theta =\left\{ \vartheta _0\right\} \), the limit distributions depend on the model defined by the functions \(S\left( \vartheta _0 ,\cdot \right) , \sigma \left( \cdot \right) \). Therefore, even in this case of a simple basic hypothesis we have no ADF limits for these statistics. This means that for each model we have to find the threshold \(c_\varepsilon \) separately. There are several ADF GoF tests for ergodic and “small noise” diffusion processes proposed, for example, in the works (Dachian and Kutoyants 2007; Kutoyants 2011; Negri and Nishiyama 2009), but the links between these tests and the “traditional” tests like Cramér-von Mises and Kolmogorov-Smirnov (based on the empirical distribution function) were not always clear.

Recently, for this problem (with simple hypothesis) a linear transformation \(L_1\left[ \zeta _T\right] \) of the random function

$$\begin{aligned} \zeta _T\left( x\right) =\sqrt{T} \left( \hat{f}_T\left( x\right) -f\left( \vartheta _0,x\right) \right) \end{aligned}$$

such that

$$\begin{aligned} \delta _T=\int _{-\infty }^{\infty }\left[ L_1\left[ \zeta _T\right] \left( x\right) \right] ^2\mathrm{d}F\left( \vartheta _0,x\right) \Longrightarrow \int _{0}^{1}w_s^2\mathrm{d}s \end{aligned}$$
(4)

(see Kutoyants (2012)). The proposed test statistic (after the linear transformation and some simplifications) is

$$\begin{aligned} \tilde{\delta }_T=\int _{-\infty }^{\infty }\left[ \frac{1}{\sqrt{T}}\int _{0}^{T}\frac{{1\!\hbox {I}}_{\left\{ X_s<x\right\} }}{\sigma \left( X_s\right) } \left[ \mathrm{d}X_s-S\left( \vartheta _0,X_s\right) \mathrm{d}s\right] \right] ^2\mathrm{d}F\left( \vartheta _0,x\right) \end{aligned}$$
(5)

with the same limit (4). See also Negri and Nishiyama (2009), where similar statistics were used in the construction of a Kolmogorov-Smirnov type ADF test.

Hence the test \(\hat{\psi }_T={1\!\hbox {I}}_{\left\{ \tilde{\delta } _T>c_\varepsilon \right\} }\) is ADF (in the case of simple basic hypothesis).

The goal of this work is to present a linear transformation \( L[\hat{\zeta }_T]\) of the random function \(\hat{\zeta } _T\left( x\right) \) such that

$$\begin{aligned} \hat{\delta }_T=\int _{-\infty }^{\infty } L[\hat{\zeta } _T]\left( x\right) ^2\mathrm{d}F(\hat{\vartheta }_T,x) \Longrightarrow \int _{0}^{1}w_s^2\mathrm{d}s. \end{aligned}$$
(6)

Note that the general case of an ergodic diffusion process with a (one-dimensional) shift parameter was studied in Negri and Zhou (2012). They showed that the limit distribution of the Cramér-von Mises statistic does not depend on the unknown (shift) parameter and is therefore asymptotically parameter free.

2 Assumptions and preliminaries

We are given (under hypothesis \(\mathcal{H}_0\)) continuous time observations \(X^T=\left( X_s,0\le s\le T\right) \) of the diffusion process

$$\begin{aligned} \mathrm{d}X_s=S\left( \vartheta ,X_s\right) \,\mathrm{d}s+\sigma \left( X_s\right) \,\mathrm{d}W_s,\quad X_0,\quad 0\le s\le T. \end{aligned}$$
(7)

We are going to study the GoF test based on the normalized difference

$$\begin{aligned}&\sqrt{T}\left( \hat{f}_T\left( x\right) -f\left( \hat{\vartheta } _T,x\right) \right) \\&\quad \quad \quad \quad =\sqrt{T}\left( \hat{f}_T\left( x\right) -f\left( \vartheta ,x\right) \right) -\left( \sqrt{T}\left( \hat{\vartheta }_T-\vartheta \right) ,\,\dot{f}\left( \vartheta ,x\right) \right) + o\left( 1\right) . \end{aligned}$$

We need three types of conditions. The first group (\(\mathcal{ES}\), \(\mathcal{RP}\) and \(\mathcal{A}_0\)) provides the existence of the solution of Eq. (7) and good ergodic properties of the process \(\left( X_s,s\ge 0\right) \), and allows us to describe the asymptotic behavior of the normalized difference \( \zeta _T\left( \vartheta ,x\right) =\sqrt{T}\left( \hat{f}_T\left( x\right) -f\left( \vartheta ,x\right) \right) \).

The regularity conditions \(\mathcal{R}_1\) provide the properties of the MLE \(\hat{\vartheta }_T\) (consistency, asymptotic normality and a stochastic representation). The last condition, \(\mathcal{R}_2\), will help us to construct the linear transformation \(L \left[ \cdot \right] \) of the process \(\hat{\zeta }_T\left( \cdot \right) \) into a Wiener process. Therefore, the test based on this transformation is asymptotically distribution free.

We assume that the trend \(S\left( \vartheta ,x\right) \), \(\vartheta \in \Theta \subset {\mathbb {R}}^d \) and diffusion \(\sigma \left( x\right) ^2\) coefficients satisfy the following conditions.

\(\mathcal{ES}.\) The function \(S\left( \vartheta ,x\right) \), \(\vartheta \in \Theta \), \(x\in {\mathbb {R}}\), is locally bounded, the function \(\sigma \left( x \right) ^2>0 \) is continuous, and for some \( C>0\) the condition

$$\begin{aligned} x\,S\left( \vartheta ,x\right) +\sigma \left( x\right) ^2\le C\left( 1+x^2\right) \end{aligned}$$

holds.

Under this condition the stochastic differential Eq. (7) has a unique weak solution for all \(\vartheta \in \Theta \) (see, e.g., Durrett (1996)).

\(\mathcal{RP}.\) The functions \(S\left( \vartheta ,\cdot \right) \) and \(\sigma \left( x \right) ^2\) are such that for all \(\vartheta \in \Theta \)

$$\begin{aligned} \int _{0}^{x }\exp \left\{ -2\int _{0}^{y}\frac{S\left( \vartheta ,z\right) }{\sigma \left( z\right) ^2}\mathrm{d}z\right\} \mathrm{d}y\longrightarrow \pm \infty \quad \mathrm{as} \quad \quad x\longrightarrow \pm \infty \end{aligned}$$

and

$$\begin{aligned} G\left( \vartheta \right) =\int _{-\infty }^{\infty }\sigma \left( x\right) ^{-2}\exp \left\{ 2\int _{0}^{x}\frac{S\left( \vartheta ,y\right) }{\sigma \left( y\right) ^2}\mathrm{d}y\right\} \mathrm{d}x<\infty . \end{aligned}$$

By condition \(\mathcal{RP}\) the diffusion process (7) is positive recurrent (ergodic) with the density of the invariant law

$$\begin{aligned} f\left( \vartheta ,x\right) =\frac{1}{G\left( \vartheta \right) \;\sigma \left( x\right) ^2}\;\exp \left\{ 2\int _{0}^{x}\frac{S\left( \vartheta ,y\right) }{\sigma \left( y\right) ^2}\; \mathrm{d}y\right\} . \end{aligned}$$

We suppose that the initial value \(X_0\) has this density function; therefore the observed process is stationary.
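This formula is easy to evaluate numerically; a quadrature sketch (all names are ours), checked below on the hypothetical Ornstein-Uhlenbeck case \(S\left( x\right) =-x\), \(\sigma =1\), where \(f\) is the \(\mathcal{N}\left( 0,1/2\right) \) density \(e^{-x^2}/\sqrt{\pi }\):

```python
import numpy as np

def invariant_density(S, sigma, grid):
    """f(x) = exp{2 int_0^x S(y)/sigma(y)^2 dy} / (G sigma(x)^2),
    computed by trapezoidal quadrature on a grid containing 0."""
    r = S(grid) / sigma(grid) ** 2
    # primitive int_0^x r(y) dy, anchored at the grid point nearest to 0
    steps = (r[1:] + r[:-1]) / 2 * np.diff(grid)
    prim = np.concatenate(([0.0], np.cumsum(steps)))
    prim -= prim[np.argmin(np.abs(grid))]
    unnorm = np.exp(2 * prim) / sigma(grid) ** 2
    G = np.sum((unnorm[1:] + unnorm[:-1]) / 2 * np.diff(grid))  # normalizing constant
    return unnorm / G

# hypothetical check: S(x) = -x, sigma = 1 gives the N(0, 1/2) density
grid = np.linspace(-6.0, 6.0, 4001)
f = invariant_density(lambda x: -x, lambda x: np.ones_like(x), grid)
```

The computed density integrates to one and equals \(1/\sqrt{\pi }\approx 0.564\) at \(x=0\), as it should.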

Introduce the class \(\mathcal{P}\) of functions with polynomial majorants

$$\begin{aligned} \mathcal{P}=\left\{ h\left( \cdot \right) :\quad \left| h\left( y\right) \right| \le C\left( 1+\left| y\right| ^p\right) \right\} . \end{aligned}$$
(8)

If the function \(h\left( \cdot \right) \) depends on the parameter \(\vartheta \), then we suppose that the constant \(C\) in (8) does not depend on \(\vartheta \).

The condition \(\mathcal{RP} \) is strengthened in the following way.

\(\mathcal{A}_0.\) The functions \(S\left( \vartheta ,\cdot \right) , \sigma \left( \cdot \right) ^{\pm 1} \in \mathcal{P} \) and for all \(\vartheta \)

$$\begin{aligned} \mathop {{\overline{\text {lim}}}}\limits _{\left| y\right| \rightarrow \infty }\;\mathrm{sgn}\left( y\right) \;\frac{S\left( \vartheta ,y\right) }{\sigma \left( y\right) ^2} <0. \end{aligned}$$

Under condition \(\mathcal{A}_0\) the empirical distribution function \(\hat{F}_T\left( x\right) \) and the empirical density \(\hat{f}_T\left( x\right) \) are unbiased, consistent, asymptotically normal and asymptotically efficient estimators of the functions \(F\left( \vartheta ,x\right) \) and \(f\left( \vartheta ,x\right) \), respectively. The random processes

$$\begin{aligned} \eta _T\left( \vartheta ,x\right) =\sqrt{T}\left( \hat{F}_T\left( x\right) -F\left( \vartheta , x\right) \right) ,\quad \zeta _T\left( \vartheta ,x\right) =\sqrt{T}\left( \hat{f}_T\left( x\right) -f\left( \vartheta , x\right) \right) \end{aligned}$$

converge to the Gaussian processes \(\eta \left( \vartheta ,x \right) \) and \(\zeta \left( \vartheta ,x \right) \), which admit the representations

$$\begin{aligned} \eta \left( \vartheta ,x \right)&=2\int _{-\infty }^{\infty }\frac{F\left( \vartheta ,y\right) F\left( \vartheta ,x\right) -F\left( \vartheta ,y\wedge x\right) }{\sigma \left( y\right) \sqrt{f\left( \vartheta ,y\right) }}\;\mathrm{d}W(y),\end{aligned}$$
(9)
$$\begin{aligned} \zeta \left( \vartheta ,x \right)&=2f\left( \vartheta ,x\right) \int _{-\infty }^{\infty }\frac{F\left( \vartheta ,y\right) -{1\!\hbox {I}}_{\left\{ y>x\right\} }}{\sigma \left( y\right) \sqrt{f\left( \vartheta ,y\right) }}\;\mathrm{d}W(y). \end{aligned}$$
(10)

Here \(W\left( \cdot \right) \) is a two-sided Wiener process. For the proofs see Kutoyants (2004). These proofs are based on the following representations:

$$\begin{aligned} \eta _T\left( \vartheta ,x\right)&=\frac{2}{\sqrt{T}} \int _{0}^{T}\frac{F\left( \vartheta ,x\right) F\left( \vartheta ,X_s\right) -F\left( \vartheta ,x\wedge X_s\right) }{\sigma \left( X_s\right) \,f\left( \vartheta ,X_s\right) }\,\mathrm{d}W_s \nonumber \\&\quad +\frac{2}{\sqrt{T}} \int _{X_0}^{X_T}\frac{F\left( \vartheta ,y\wedge x\right) -F\left( \vartheta ,y\right) F\left( \vartheta ,x\right) }{\sigma \left( y\right) ^2 \,f\left( \vartheta ,y\right) }\,\mathrm{d}y \end{aligned}$$
(11)

and

$$\begin{aligned} \zeta _T\left( \vartheta ,x\right)&=\frac{2f\left( \vartheta ,x\right) }{\sqrt{T}} \int _{0}^{T}\frac{F\left( \vartheta ,X_s\right) -{1\!\hbox {I}}_{\left\{ X_s>x\right\} }}{\sigma \left( X_s\right) \,f\left( \vartheta ,X_s\right) }\,\mathrm{d}W_s\nonumber \\&\quad \quad \quad +\frac{2f\left( \vartheta ,x\right) }{\sqrt{T}} \int _{X_0}^{X_T}\frac{{1\!\hbox {I}}_{\left\{ y>x\right\} } -F\left( \vartheta ,y\right) }{\sigma \left( y\right) ^2 \,f\left( \vartheta ,y\right) }\,\mathrm{d}y. \end{aligned}$$
(12)

It is easy to see that \(\mathcal{A}_0\) implies \(\mathcal{RP}\). Moreover, we can verify that the condition \(\mathcal{A}_0\) provides the equivalence of the measures \(\left\{ \mathbf{P}_\vartheta ^{\left( T\right) },\vartheta \in \Theta \right\} \) induced on the measurable space \(\left( \mathcal{C}\left[ 0,T\right] ,\mathcal{B}\right) \) of functions continuous on \(\left[ 0,T\right] \) by the solutions of this equation with different \(\vartheta \) (see Liptser and Shiryaev (2003)). Hence the likelihood ratio has the following form:

$$\begin{aligned} L\left( \vartheta ,X^T\right) =\exp \left\{ \int _{0}^{T}\frac{S\left( \vartheta ,X_s\right) }{\sigma \left( X_s\right) ^2}\,\mathrm{d}X_s-\int _{0}^{T}\frac{S\left( \vartheta ,X_s\right) ^2}{2\,\sigma \left( X_s\right) ^2}\,\mathrm{d}s \right\} \end{aligned}$$

and the MLE \(\hat{\vartheta }_T\) is defined by the equation

$$\begin{aligned} L\left( \hat{\vartheta }_T,X^T\right) =\sup _{\vartheta \in \Theta }L\left( \vartheta ,X^T\right) . \end{aligned}$$
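Numerically, the MLE can be approximated by discretizing the stochastic integrals in the likelihood ratio above and maximizing over a grid; a sketch under the illustrative Ornstein-Uhlenbeck assumption \(S\left( \vartheta ,x\right) =-\vartheta x\), \(\sigma =1\) (true value \(\vartheta =1\); all names are ours):

```python
import numpy as np

def log_likelihood(theta, x, dt, S, sigma):
    """Discretized ln L = int S/sigma^2 dX - (1/2) int S^2/sigma^2 ds."""
    xs, dX = x[:-1], np.diff(x)
    s, sig2 = S(theta, xs), sigma(xs) ** 2
    return float(np.sum(s / sig2 * dX) - 0.5 * np.sum(s ** 2 / sig2) * dt)

def mle_grid(x, dt, S, sigma, thetas):
    """Maximize the discretized likelihood over a grid of parameter values."""
    vals = [log_likelihood(th, x, dt, S, sigma) for th in thetas]
    return float(thetas[int(np.argmax(vals))])

# illustrative data: Euler scheme for dX = -X ds + dW, so the true theta is 1
rng = np.random.default_rng(3)
dt, n = 0.01, 100_000                      # observation window T = 1000
x = np.empty(n + 1)
x[0] = rng.normal(scale=np.sqrt(0.5))      # invariant N(0, 1/2) start
noise = rng.normal(scale=np.sqrt(dt), size=n)
for k in range(n):
    x[k + 1] = x[k] - x[k] * dt + noise[k]
theta_hat = mle_grid(x, dt, lambda th, y: -th * y,
                     lambda y: np.ones_like(y), np.linspace(0.2, 3.0, 281))
```

For this quadratic likelihood the grid maximizer agrees (up to the grid spacing) with the explicit continuous-time MLE \(-\int X\mathrm{d}X/\int X^2\mathrm{d}s\).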

To study the tests we need to know the properties of the MLE \(\hat{\vartheta }_T\) (in the regular case).

Below and in the sequel the dot means differentiation w.r.t. \(\vartheta \) and the prime means differentiation w.r.t. \(x\); i.e., \(\dot{S}\left( \vartheta ,x\right) \) is a \(d\)-vector and \(\ddot{S} \left( \vartheta ,x\right) \) is a \(d\times d\) matrix. The information matrix is

$$\begin{aligned} \mathrm{I}\left( \vartheta \right) =\mathbf{E}_{\vartheta } \left( \frac{\dot{S}\left( \vartheta ,\xi \right) \;\dot{S}\left( \vartheta ,\xi \right) ^* }{\sigma \left( \xi \right) ^2}\right) , \end{aligned}$$

where \(*\) means transposition and \(\xi \) is a random variable with the invariant density function \(f\left( \vartheta ,x\right) \). The scalar product in \({\mathbb {R}}^d\) is denoted by \(\left\langle \cdot ,\cdot \right\rangle \).
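For a scalar parameter, \(\mathrm{I}\left( \vartheta \right) \) reduces to a one-dimensional integral against the invariant density; a quadrature sketch for the hypothetical Ornstein-Uhlenbeck case \(S\left( \vartheta ,x\right) =-\vartheta x\), \(\sigma =1\), where \(\dot{S}=-x\) and \(\mathrm{I}\left( \vartheta \right) =\mathbf{E}_\vartheta \,\xi ^2=1/(2\vartheta )\):

```python
import numpy as np

def fisher_information(dS, sigma, f, grid):
    """I(theta) = E[ dS(theta, xi)^2 / sigma(xi)^2 ], xi ~ invariant density f,
    by trapezoidal quadrature (scalar-parameter version)."""
    y = dS(grid) ** 2 / sigma(grid) ** 2 * f(grid)
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(grid)))

# theta = 1: invariant law is N(0, 1/2) with density exp(-x^2)/sqrt(pi),
# so I(1) = E[xi^2] = 1/2
grid = np.linspace(-6.0, 6.0, 2001)
f_inv = lambda x: np.exp(-x ** 2) / np.sqrt(np.pi)
I_theta = fisher_information(lambda x: -x, lambda x: np.ones_like(x), f_inv, grid)
```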

We have two types of regularity conditions.

\(\mathcal{R}_1.\)

  • The set \(\Theta \) is an open and bounded subset of \({\mathbb {R}}^d\).

  • The function \(S\left( \vartheta ,x\right) \) has continuous derivatives w.r.t. \(\vartheta \) such that

    $$\begin{aligned} \dot{S}\left( \vartheta ,x\right) ,\; \ddot{S} \left( \vartheta ,x\right) \in \mathcal{P}. \end{aligned}$$
  • The information matrix is uniformly nondegenerate

    $$\begin{aligned} \inf _{\vartheta \in \Theta }\inf _{\left| \lambda \right| =1,\lambda \in {\mathbb {R}}^d} \lambda ^*\mathrm{I}\left( \vartheta \right) \lambda >0 \end{aligned}$$

    and for any compact \({\mathbb {K}}\subset \Theta \) and any \(\nu >0\)

    $$\begin{aligned} \inf _{\vartheta _0\in {\mathbb {K}}} \inf _{\left| \vartheta -\vartheta _0\right| >\nu }\mathbf{E}_{\vartheta _0} \left( \frac{S\left( \vartheta ,\xi \right) -S\left( \vartheta _0,\xi \right) }{\sigma \left( \xi \right) }\right) ^2>0. \end{aligned}$$

Here \(\xi \) is a random variable with the density function \(f\left( \vartheta _0,x\right) \). Under the conditions \(\mathcal{A}_0\) and \(\mathcal{R}_1\) the MLE is consistent and asymptotically normal,

$$\begin{aligned} \sqrt{T}\left( \hat{\vartheta }_T-\vartheta \right) \Longrightarrow \mathcal{N}\left( 0, \mathrm{I}\left( \vartheta \right) ^{-1}\right) , \end{aligned}$$

we have the convergence of all polynomial moments and this estimator is asymptotically efficient (see Kutoyants (2004) for details). Moreover, the MLE admits the representation

$$\begin{aligned} \sqrt{T}\left( \hat{\vartheta }_T-\vartheta \right) =\frac{\mathrm{I}\left( \vartheta \right) ^{-1}}{\sqrt{T}}\int _{0}^{T} \frac{\dot{S}\left( \vartheta ,X_s\right) }{\sigma \left( X_s\right) }\;\mathrm{d}W_s\,\left( 1+o\left( 1\right) \right) . \end{aligned}$$
(13)

Let us introduce the matrix

$$\begin{aligned} N\left( \vartheta ,y\right) =\mathrm{I}\left( \vartheta \right) ^{-1}\int _{y}^{\infty } \frac{\dot{S}\left( \vartheta ,z\right) \; \dot{S}\left( \vartheta ,z\right) ^*}{\sigma \left( z\right) ^2} \,f\left( \vartheta ,z\right) \,\mathrm{d}z. \end{aligned}$$

Note that \(N\left( \vartheta ,-\infty \right) =I_d \), where \(I_d\) is the unit \(d\times d\) matrix.
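In the scalar case the identity \(N\left( \vartheta ,-\infty \right) =I_d\) (here \(=1\)) can be verified numerically; a sketch for the same hypothetical Ornstein-Uhlenbeck example (\(\dot{S}=-x\), \(\sigma =1\), \(\vartheta =1\), \(\mathrm{I}\left( \vartheta \right) =1/2\)):

```python
import numpy as np

def N_scalar(dS, sigma, f, I_theta, y, grid):
    """Scalar version of N(theta, y) = I(theta)^{-1} int_y^inf dS^2/sigma^2 f dz,
    by trapezoidal quadrature on the part of the grid above y."""
    g = grid[grid >= y]
    h = dS(g) ** 2 / sigma(g) ** 2 * f(g)
    return float(np.sum((h[1:] + h[:-1]) / 2 * np.diff(g))) / I_theta

# OU illustration: f is the N(0, 1/2) density exp(-x^2)/sqrt(pi)
grid = np.linspace(-6.0, 6.0, 4001)
f_inv = lambda x: np.exp(-x ** 2) / np.sqrt(np.pi)
```

By symmetry, \(N\left( \vartheta ,0\right) =1/2\) in this example, while \(N\left( \vartheta ,y\right) \rightarrow 1\) as \(y\rightarrow -\infty \).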

The next regularity condition is

\(\mathcal{R}_2.\)

  • The functions \(\dot{S}\left( \vartheta ,x\right) \) and \(\sigma \left( x\right) \) have continuous derivatives w.r.t. \(x\)

    $$\begin{aligned} \dot{S}' \left( \vartheta ,x\right) , \;\sigma '\left( x\right) \quad \in \quad \mathcal{P}. \end{aligned}$$
  • The matrix \(N\left( \vartheta ,y\right) \) is, for any \(y\), uniformly in \(\vartheta \in \Theta \) nondegenerate, and there exists a constant \(C>0\) such that

    $$\begin{aligned} \sup _{\vartheta \in \Theta }\sup _{\left| \lambda \right| =1} \lambda ^*N\left( \vartheta ,y\right) ^{-1}\lambda \le \frac{C}{1-F\left( \vartheta ,y\right) }. \end{aligned}$$

Let us recall what happens in the case of a simple basic hypothesis, say, \(\vartheta =\vartheta _0\). Using the representations (11) and (12) it can be shown that the corresponding statistics have the following limits:

$$\begin{aligned} \Delta _T&=T\int _{-\infty }^{\infty }\left[ \hat{F}_T\left( x\right) -F\left( \vartheta _0,x\right) \right] ^2\mathrm{d}F\left( \vartheta _0,x\right) \Longrightarrow \int _{-\infty }^{\infty }\eta \left( \vartheta _0,x\right) ^2\mathrm{d}F\left( \vartheta _0,x\right) ,\\ \delta _T&=T\int _{-\infty }^{\infty }\left[ \hat{f}_T\left( x\right) -f\left( \vartheta _0,x\right) \right] ^2\mathrm{d}F\left( \vartheta _0,x\right) \Longrightarrow \int _{-\infty }^{\infty }\zeta \left( \vartheta _0,x\right) ^2\mathrm{d}F\left( \vartheta _0,x\right) . \end{aligned}$$

Therefore the tests based on these two statistics are not ADF. To construct an ADF test we put

$$\begin{aligned} \mu _0\left( \vartheta _0,x\right) =\frac{\zeta \left( \vartheta _0,x\right) }{2f\left( \vartheta _0 ,x\right) }=\int _{-\infty }^{\infty }\frac{F\left( \vartheta _0 ,y\right) -{1\!\hbox {I}}_{\left\{ y>x\right\} } }{\sigma \left( y\right) \sqrt{f\left( \vartheta _0 ,y\right) }} \;\mathrm{d}W(y), \end{aligned}$$

and note that by the CLT

$$\begin{aligned} \frac{\zeta _T\left( \vartheta _0 ,x\right) }{2f\left( \vartheta _0 ,x\right) }\Longrightarrow \mu _0\left( \vartheta _0,x\right) . \end{aligned}$$

Further, we have the convergence

$$\begin{aligned} L_1\left[ \zeta _T\left( \vartheta _0 \right) \right] \left( x\right)&=\int _{-\infty }^{x} \sigma \left( y\right) f\left( \vartheta _0 ,y\right) \mathrm{d}\left[ \frac{\zeta _T\left( \vartheta _0 ,y\right) }{2f\left( \vartheta _0 ,y\right) }\right] \nonumber \\&=\frac{1}{\sqrt{T}}\int _{0}^{T}{1\!\hbox {I}}_{\left\{ X_s<x\right\} }\,\mathrm{d}W_s+o\left( 1\right) \Longrightarrow w_{F\left( \vartheta _0 ,x\right) }. \end{aligned}$$
(14)

Hence

$$\begin{aligned} \bar{\delta }_T=&\int _{-\infty }^{\infty }L_1\left[ \zeta _T\left( \vartheta _0 \right) \right] \left( x\right) ^2\mathrm{d}F\left( \vartheta _0 ,x\right) \\&\quad \Longrightarrow \int _{-\infty }^{\infty }w_{F\left( \vartheta _0 ,x\right) }^2\,\mathrm{d}F\left( \vartheta _0 ,x\right) =\int _{0}^{1}w_s^2\,\mathrm{d}s \end{aligned}$$

and the test \(\bar{\psi }_T={1\!\hbox {I}}_{\left\{ \bar{\delta }_T>c_\varepsilon \right\} }\) is ADF (see the details in Kutoyants (2012)).

Moreover, we can define an asymptotically equivalent test \(\tilde{\psi }_T={1\!\hbox {I}}_{\left\{ \tilde{\delta }_T>c_\varepsilon \right\} } \), where

$$\begin{aligned} \tilde{\delta }_T=\int _{-\infty }^{\infty }\left[ \frac{1}{\sqrt{T}}\int _{0}^{T}\frac{{1\!\hbox {I}}_{\left\{ X_s<x\right\} }}{\sigma \left( X_s\right) }\;\left[ \mathrm{d}X_s-S\left( \vartheta _0 ,X_s\right) \,\mathrm{d}s\right] \right] ^2 \mathrm{d}F\left( \vartheta _0 ,x\right) \end{aligned}$$
(15)

and this test is ADF as well.
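A simulation sketch of the statistic (15) for the hypothetical model \(\mathrm{d}X_s=-X_s\,\mathrm{d}s+\mathrm{d}W_s\) (\(\vartheta _0=1\), \(\sigma =1\); all discretization choices are ours). Under \(\mathcal{H}_0\) the computed value should behave like a draw from the limit law \(\int _0^1w_s^2\,\mathrm{d}s\), which has mean \(1/2\):

```python
import numpy as np
from math import erf, sqrt

def tilde_delta(seed, theta0=1.0, dt=0.01, n=50_000):
    """Statistic (15) for the OU model dX = -theta0 X ds + dW (sigma = 1),
    computed from an Euler-discretized path of length T = n * dt."""
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1)
    x[0] = rng.normal(scale=sqrt(0.5 / theta0))   # invariant N(0, 1/(2 theta0)) start
    dW = rng.normal(scale=sqrt(dt), size=n)
    for k in range(n):
        x[k + 1] = x[k] - theta0 * x[k] * dt + dW[k]
    T = n * dt
    resid = np.diff(x) + theta0 * x[:-1] * dt     # dX_s - S(theta0, X_s) ds
    xs = np.linspace(-3.0, 3.0, 121)              # grid for the outer dF integral
    F = np.array([0.5 * (1 + erf(v * sqrt(theta0))) for v in xs])  # invariant cdf
    inner = np.array([resid[x[:-1] < v].sum() for v in xs]) / sqrt(T)
    return float(np.sum(inner[:-1] ** 2 * np.diff(F)))
```

Averaging over independent replications, the sample mean of the statistic should be near \(1/2\).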

3 Main result

Recall that the value of the parameter \(\vartheta \) is unknown; that is why we replace \(\vartheta \) by its MLE \(\hat{\vartheta }_T\), and our goal is to find transformations

$$\begin{aligned} L\left[ \eta _T\left( \hat{\vartheta }_T,\cdot \right) \right] \left( x\right) ,\quad \quad L\left[ \zeta _T\left( \hat{\vartheta }_T,\cdot \right) \right] \left( x\right) \end{aligned}$$

of the statistics \(\eta _T(\hat{\vartheta }_T,x )=\sqrt{T}\left( \hat{F}_T\left( x\right) -F(\hat{\vartheta }_T,x)\right) \) and \(\zeta _T\left( \hat{\vartheta }_T,x \right) =\) \(\sqrt{T}\big (\hat{f}_T\big (x\big )-f\big (\hat{\vartheta } _T,x\big )\big )\) such that the GoF tests constructed on them will be ADF. First note that we have the equality

$$\begin{aligned} \left[ \eta _T(\hat{\vartheta }_T,x)\right] '=\zeta _T(\hat{\vartheta }_T,x ) , \end{aligned}$$

therefore, if we find this transformation for \(\zeta _T(\hat{\vartheta }_T,\cdot ) \), then we obtain it for \(\eta _T(\hat{\vartheta }_T,\cdot ) \) too.

Moreover, we show that the linear transformation (14) of

$$\begin{aligned} \mu _T\left( \hat{\vartheta }_T,x \right) =\frac{\sqrt{T}(\hat{f}_T\left( x\right) -f(\hat{\vartheta }_T,x))}{2f(\hat{\vartheta }_T,x) },\quad \quad x\in {\mathbb {R}}\end{aligned}$$

gives us a statistic which is asymptotically equivalent to the statistic

$$\begin{aligned} \xi _T\left( \hat{\vartheta }_T,x\right) =\frac{1}{\sqrt{T}}\int _{0}^{T} \frac{{1\!\hbox {I}}_{\left\{ X_s<x\right\} }}{\sigma \left( X_s\right) } \;\left[ \mathrm{d}X_s-S(\hat{\vartheta }_T,X_s)\mathrm{d}s\right] . \end{aligned}$$

Therefore our ADF test will be based on the statistic \(\xi _T(\hat{\vartheta }_T,x) \), which is much easier to calculate.

Introduce the random vector

$$\begin{aligned} \Delta \left( \vartheta \right) =\int _{-\infty }^{\infty }\frac{\dot{S}\left( \vartheta ,y\right) }{\sigma \left( y\right) }\sqrt{f\left( \vartheta ,y\right) }\,\mathrm{d}W(y)\quad \sim \quad \mathcal{N}\left( 0, \mathrm{I}\left( \vartheta \right) \right) \end{aligned}$$
(16)

and the Gaussian function

$$\begin{aligned} \mu \left( \vartheta ,x\right) = \mu _0 \left( \vartheta ,x\right) -2^{-1}\left\langle \mathrm{I}\left( \vartheta \right) ^{-1} \Delta \left( \vartheta \right) ,\frac{\partial \ell \left( \vartheta ,x\right) }{\partial \vartheta }\right\rangle ,\quad \quad x\in {\mathbb {R}}, \end{aligned}$$

where \(\ell \left( \vartheta ,x\right) =\ln f\left( \vartheta ,x\right) \) and \(\left\langle \cdot ,\cdot \right\rangle \) is the scalar product in \({\mathbb {R}}^d\). Further, let us put \(s=F\left( \vartheta ,y\right) \), \(t=F\left( \vartheta ,x\right) \), define the vector function

$$\begin{aligned} h\left( \vartheta ,s\right) =\mathrm{I}\left( \vartheta \right) ^{-1/2} \frac{\dot{S} \left( \vartheta ,F^{-1}\left( \vartheta ,s\right) \right) }{\sigma \left( F^{-1}\left( \vartheta ,s\right) \right) },\quad \quad \int _{0}^{1}h\left( \vartheta ,s\right) h\left( \vartheta ,s\right) ^*\,\mathrm{d}s =I_d, \end{aligned}$$

and Gaussian process

$$\begin{aligned} U\left( t\right) =w\left( t\right) -\left\langle \int _{0}^{1}h\left( \vartheta ,s\right) \mathrm{d}w\left( s\right) , \int _{0}^{t}h\left( \vartheta ,s\right) \mathrm{d}s\right\rangle , \end{aligned}$$
(17)

where \(w\left( s\right) \), \(0\le s\le 1\), is some Wiener process. Here \(F^{-1}\left( \vartheta ,s\right) \) is the function inverse to \(F\left( \vartheta ,y\right) \), i.e., the solution \(y\) of the equation \(F\left( \vartheta ,y\right) =s \). Below we write \(u\left( x\right) =U\left( F\left( \vartheta ,x\right) \right) \).
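The structure of (17) can be checked by direct simulation: for scalar \(h\) with \(\int _0^1h^2\,\mathrm{d}s=1\) one has \(\mathbf{E}\,U\left( 1\right) ^2=1-\left( \int _0^1h\,\mathrm{d}s\right) ^2\). A sketch with the illustrative choice \(h\left( s\right) =\sqrt{3}\,s\), for which this value equals \(1/4\):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 10_000, 400
ds = 1.0 / n
s = (np.arange(n) + 0.5) * ds             # midpoints of the time grid
h = np.sqrt(3.0) * s                      # int_0^1 h^2 ds = 1 (up to O(1/n^2))
dw = rng.normal(scale=np.sqrt(ds), size=(m, n))
w = np.cumsum(dw, axis=1)
zeta = dw @ h                             # int_0^1 h dw, one value per path
H = np.cumsum(h) * ds                     # H(t) = int_0^t h ds, with H(1) = sqrt(3)/2
U = w - np.outer(zeta, H)                 # the process (17) on the grid
var_U1 = float(U[:, -1].var())            # theory: 1 - H(1)^2 = 1/4
```

Note that for \(h\equiv 1\) (also admissible) the process \(U\) reduces to a Brownian bridge \(w\left( t\right) -t\,w\left( 1\right) \).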

Theorem 1

Let the conditions \(\mathcal{ES}, \mathcal{A}_0\) and \(\mathcal{R}_1\) be fulfilled. Then

$$\begin{aligned} \mu _T\left( \hat{\vartheta }_T,x \right) \Longrightarrow \mu \left( \vartheta ,x\right) ,\quad \quad \xi _T\left( \hat{\vartheta }_T,x \right) \Longrightarrow u\left( x\right) , \end{aligned}$$
(18)

and

$$\begin{aligned} \int _{-\infty }^{x}\sigma \left( y\right) f\left( \vartheta ,y\right) \mathrm{d}\mu \left( \vartheta ,y\right) =u\left( x\right) . \end{aligned}$$
(19)

Proof

Using the consistency of the MLE we can write

$$\begin{aligned} \zeta _T\left( \hat{\vartheta }_T,x\right)&=\sqrt{T}\left( \hat{f}_T\left( x\right) -f\left( \vartheta ,x\right) \right) +\sqrt{T}\left( f\left( \vartheta ,x\right) -f(\hat{\vartheta }_T ,x) \right) \\&=\zeta _T\left( \vartheta ,x\right) -\left\langle \sqrt{T}(\hat{\vartheta }_T-\vartheta ) ,\frac{\partial f\left( \vartheta ,x\right) }{\partial \vartheta } \right\rangle +o\left( 1\right) . \end{aligned}$$

A slight modification of the proof of Theorem 2.8 in Kutoyants (2004) allows us to verify the joint asymptotic normality of \(\zeta _T\left( \vartheta ,x\right) \) and \(\sqrt{T}\left( \hat{\vartheta } _T-\vartheta \right) \) as follows. Let us denote by \(\Delta _T\left( \vartheta ,X^T\right) \) the vector score function

$$\begin{aligned} \Delta _T\left( \vartheta ,X^T\right) =\frac{1}{\sqrt{T}}\int _{0}^{T}\frac{\dot{S}\left( \vartheta ,X_s\right) }{\sigma \left( X_s\right) }\;\mathrm{d}W_s. \end{aligned}$$

The behavior of the MLE is described in Kutoyants (2004) through the weak convergence of the normalized likelihood ratio

$$\begin{aligned} Z_T\left( u\right) \equiv \frac{L\left( \vartheta +\frac{u}{\sqrt{T}},X^T\right) }{L\left( \vartheta ,X^T\right) }=\exp \left\{ \left\langle u,\Delta _T\left( \vartheta ,X^T\right) \right\rangle -\frac{1}{2} u^*\mathrm{I}\left( \vartheta \right) u+o\left( 1\right) \right\} . \end{aligned}$$

By the central limit theorem for stochastic integrals we have the joint asymptotic normality: for any \(\left( \lambda ,\nu \right) \in {\mathbb {R}}^{1+d}\)

$$\begin{aligned} \lambda \, \zeta _T\left( \vartheta ,x \right) +\left\langle \nu , \Delta _T\left( \vartheta ,X^T\right) \right\rangle \Longrightarrow \lambda \,\zeta \left( \vartheta ,x \right) +\left\langle \nu ,\Delta \left( \vartheta \right) \right\rangle . \end{aligned}$$

Hence, following the proof of the above-mentioned Theorem 2.8, we obtain the joint convergence

$$\begin{aligned} \left( \zeta _T\left( \vartheta ,x \right) , Z_T\left( \cdot \right) \right) \Longrightarrow \left( \zeta \left( \vartheta ,x \right) , Z\left( \cdot \right) \right) , \end{aligned}$$

where

$$\begin{aligned} Z\left( u\right) =\exp \left\{ \left\langle u,\Delta \left( \vartheta \right) \right\rangle -\frac{1}{2} u^*\mathrm{I}\left( \vartheta \right) u\right\} ,\quad \quad u\in {\mathbb {R}}^d. \end{aligned}$$

This joint convergence yields the joint asymptotic normality

$$\begin{aligned} \left( \zeta _T\left( \vartheta ,x \right) , \sqrt{T}(\hat{\vartheta } _T-\vartheta ) \right) \Longrightarrow \left( \zeta \left( \vartheta ,x \right) ,\mathrm{I}\left( \vartheta \right) ^{-1}\Delta \left( \vartheta \right) \right) \end{aligned}$$

with the same Wiener process \(W\left( \cdot \right) \) in (10) and (16).

Now the convergence (18) follows from the consistency of the MLE, since \(f(\hat{\vartheta }_T,x)\rightarrow f\left( \vartheta ,x\right) \).

Therefore the limit \(\mu \left( \vartheta ,x\right) \) of \(\mu _T\left( \vartheta ,x\right) \) can be written as

$$\begin{aligned} \int _{-\infty }^{\infty }\left[ \frac{F\left( \vartheta ,y\right) -{1\!\hbox {I}}_{\left\{ y>x\right\} }-\left\langle \left[ 2\mathrm{I}\left( \vartheta \right) \right] ^{-1} \dot{S} \left( \vartheta ,y \right) , \dot{\ell }\left( \vartheta ,x\right) \right\rangle f\left( \vartheta ,y\right) }{\sigma \left( y\right) \sqrt{f\left( \vartheta ,y\right) }} \right] \mathrm{d}W\left( y\right) . \end{aligned}$$

Let us consider the linear transformation of \(\mu \left( \vartheta ,\cdot \right) \) following (14):

$$\begin{aligned} L_1\left[ \mu \right] \left( x \right) =\int _{-\infty }^{x}\sigma \left( y\right) f\left( \vartheta ,y\right) \,\mathrm{d}\mu \left( \vartheta ,y\right) . \end{aligned}$$

Let us recall the details of this transformation from Kutoyants (2012). Denote

$$\begin{aligned} F\left( \vartheta ,y\right)&=s,\quad a\left( \vartheta ,s\right) =\sigma \left( F^{-1}\left( \vartheta ,s\right) \right) ,\quad b\left( \vartheta ,s\right) = f \left( \vartheta ,F^{-1}\left( \vartheta ,s\right) \right) . \end{aligned}$$

Then we can write

$$\begin{aligned}&\int _{-\infty }^{\infty }\frac{F\left( \vartheta ,y\right) -{1\!\hbox {I}}_{\left\{ y>x\right\} } }{\sigma \left( y\right) \sqrt{f\left( \vartheta ,y\right) }} \;\mathrm{d}W\left( y\right) \\&\quad \quad =\int _{-\infty }^{\infty }\frac{\left[ F\left( \vartheta ,y\right) -{1\!\hbox {I}}_{\left\{ F\left( \vartheta ,y\right) >F\left( \vartheta ,x\right) \right\} }\right] }{\sigma \left( y\right) {f\left( \vartheta ,y\right) }} \sqrt{f\left( \vartheta ,y\right) }\;\mathrm{d}W\left( y\right) \\&\quad \quad =\int _{0 }^{1 }\frac{\left[ s-{1\!\hbox {I}}_{\left\{ s>t\right\} }\right] }{a\left( \vartheta ,s\right) b\left( \vartheta ,s\right) } \;\mathrm{d}w\left( s\right) \\&\quad \quad =\int _{0 }^{t }\frac{s }{a\left( \vartheta ,s\right) b\left( \vartheta ,s\right) } \;\mathrm{d}w\left( s\right) +\int _{t }^{1 }\frac{s-1 }{a\left( \vartheta ,s\right) b\left( \vartheta ,s\right) } \;\mathrm{d}w\left( s\right) \\&\quad \quad =v\left( \vartheta ,t\right) ,\quad \quad 0<t<1, \end{aligned}$$

where \(w\left( s\right) ,0\le s\le 1\) is the following Wiener process

$$\begin{aligned} w\left( s \right) =\int _{-\infty }^{F^{-1}\left( \vartheta ,s\right) }\sqrt{f\left( \vartheta ,y\right) }\,\mathrm{d}W\left( y\right) . \end{aligned}$$

Note that \(v\left( \vartheta ,0\right) =\infty \) (\(x=-\infty \)) and \(v\left( \vartheta ,1\right) =\infty \) (\(x=+\infty \)). Therefore we define this differential and the corresponding integrals below for \(t\in \left( \nu ,1-\nu \right) \) with small \(\nu >0\), and in the sequel let \(\nu \rightarrow 0\) (i.e., \(x\rightarrow \pm \infty \)).
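For intuition, this time change can be illustrated by a small simulation (a hypothetical model, for illustration only): take the invariant density \(f\) to be standard normal, so that \(F^{-1}\) is the normal quantile function. Then the process \(w(s)\) built from white noise on the \(y\)-axis should have variance \(s\), i.e., it is a standard Wiener process in the time \(s=F(x)\).

```python
import numpy as np
from math import erf

# Monte Carlo sketch (illustration only, hypothetical model): with a
# standard normal invariant density f, the process
#   w(s) = int_{-inf}^{F^{-1}(s)} sqrt(f(y)) dW(y)
# should satisfy Var w(s) = F(F^{-1}(s)) = s.
rng = np.random.default_rng(1)
n, reps = 600, 4000
y = np.linspace(-6.0, 6.0, n + 1)
dy = y[1] - y[0]
ym = 0.5 * (y[:-1] + y[1:])                          # cell midpoints
f = np.exp(-ym**2 / 2.0) / np.sqrt(2.0 * np.pi)      # density f(y)
F = np.array([0.5 * (1.0 + erf(v / np.sqrt(2.0))) for v in ym])  # c.d.f. F(y)

dW = rng.standard_normal((reps, n)) * np.sqrt(dy)    # white-noise increments dW(y)
mask = (F <= 0.5).astype(float)                      # cells with F(y) <= s, s = 0.5
w_half = (np.sqrt(f)[None, :] * dW) @ mask           # w(0.5), one value per path
print(round(float(w_half.var()), 2))
```

The empirical variance of \(w(0.5)\) is close to \(0.5\), as the time change predicts.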

Hence

$$\begin{aligned} \mathrm{d}\mu _0 \left( \vartheta ,y\right) =\mathrm{d}v\left( \vartheta ,s\right) =\frac{1 }{a\left( \vartheta ,s\right) b\left( \vartheta ,s\right) } \;\mathrm{d}w\left( s\right) \end{aligned}$$

and

$$\begin{aligned} \int _{-\infty }^{x}\sigma \left( y\right) f\left( \vartheta ,y\right) \,\mathrm{d}\mu _0 \left( \vartheta ,y\right) = \int _{0}^{t}a\left( \vartheta ,s\right) b\left( \vartheta ,s\right) \,\mathrm{d}v\left( \vartheta ,s\right) =w\left( t\right) . \end{aligned}$$

To calculate the second term note that

$$\begin{aligned} \dot{\ell } \left( \vartheta ,x\right) =-\frac{\dot{G}\left( \vartheta \right) }{G\left( \vartheta \right) } +2\int _{0}^{x}\frac{\dot{S}\left( \vartheta ,y\right) }{\sigma \left( y\right) ^2}\mathrm{d}y. \end{aligned}$$

Therefore

$$\begin{aligned} \int _{-\infty }^{x}\sigma \left( y\right) f\left( \vartheta ,y\right) \mathrm{d}\dot{\ell } \left( \vartheta ,y\right) =2\int _{-\infty }^{x} \frac{\dot{S}\left( \vartheta ,y\right) }{\sigma \left( y\right) } f\left( \vartheta ,y\right) \mathrm{d}y \end{aligned}$$

and

$$\begin{aligned}&\int _{-\infty }^{x}\sigma \left( y\right) f\left( \vartheta ,y\right) \mathrm{d}\mu \left( \vartheta ,y\right) =w\left( F\left( \vartheta ,x\right) \right) \\&\quad \ -\biggl \langle \mathrm{I}\left( \vartheta \right) ^{-1/2} \int _{-\infty }^{\infty }\frac{\dot{S}\left( \vartheta ,y\right) }{\sigma \left( y\right) }\mathrm{d}w\left( F\left( \vartheta ,y\right) \right) , \mathrm{I}\left( \vartheta \right) ^{-1/2} \int _{-\infty }^{x }\frac{\dot{S}\left( \vartheta ,y\right) }{\sigma \left( y\right) }\mathrm{d}F\left( \vartheta ,y\right) \biggr \rangle \\&\quad =U\left( F\left( \vartheta ,x\right) \right) =w\left( t\right) -\left\langle \int _{0}^{1}h\left( \vartheta ,s\right) \mathrm{d}w\left( s\right) , \int _{0}^{t}h\left( \vartheta ,s\right) \mathrm{d}s\right\rangle . \end{aligned}$$

Further, we have

$$\begin{aligned}&\xi _T\left( \hat{\vartheta } _T,x\right) =\frac{1}{\sqrt{T}}\int _{0}^{T}\frac{{1\!\hbox {I}}_{\left\{ X_s<x\right\} }}{\sigma \left( X_s\right) } \;\left[ \mathrm{d}X_s-S(\vartheta ,X_s)\mathrm{d}s\right] \nonumber \\&\quad \quad +\frac{1}{\sqrt{T}}\int _{0}^{T}\frac{{1\!\hbox {I}}_{\left\{ X_s<x\right\} }}{\sigma \left( X_s\right) } \;\left[ S(\vartheta ,X_s)-S(\hat{\vartheta } _T,X_s)\right] \mathrm{d}s\nonumber \\&\quad =\frac{1}{\sqrt{T}}\int _{0}^{T}{1\!\hbox {I}}_{\left\{ X_s<x\right\} }\mathrm{d}W_s-\biggl \langle \left( \hat{\vartheta }_T- \vartheta \right) ,\int _{0}^{T}\frac{{1\!\hbox {I}}_{\left\{ X_s<x\right\} }\dot{S}(\vartheta ,X_s)}{\sqrt{T}\sigma \left( X_s\right) } \;\mathrm{d}s\biggr \rangle +o\left( 1\right) \nonumber \\&\quad \Longrightarrow w\left( F\left( \vartheta ,x\right) \right) -\biggl \langle \mathrm{I}\left( \vartheta \right) ^{-1}\Delta \left( \vartheta \right) ,\int _{-\infty }^{x}\frac{\dot{S}\left( \vartheta ,y\right) }{\sigma \left( y\right) }\; \mathrm{d}F\left( \vartheta ,y\right) \biggr \rangle =u\left( x\right) . \end{aligned}$$
(20)

It can be shown that

$$\begin{aligned} L_1\left[ \mu _T\right] \left( x\right) \Longrightarrow L_1\left[ \mu \right] \left( x\right) =u\left( x\right) . \end{aligned}$$

The statistic \( \xi _T(\hat{\vartheta }_T,x)\) has the same limit. Therefore it is sufficient to find a transformation \(L_2\left[ \xi _T(\hat{\vartheta }_T,\cdot )\right] \left( x\right) \) whose limit is a Wiener process, say, \(L_2\left[ U\left( \cdot \right) \right] \left( t\right) =w_t \). Below we omit \(\vartheta \) in \(h\left( \vartheta ,t\right) \) and denote the matrix

$$\begin{aligned} {\mathbb {N}}\left( t\right) =\int _{t}^{1}h\left( \vartheta ,s\right) h^*\left( \vartheta ,s\right) \mathrm{d}s =N\left( \vartheta ,F^{-1}\left( \vartheta ,t\right) \right) . \end{aligned}$$

The transformation \(L_2\left[ \cdot \right] \) of the limit process given below in (21) coincides with the one proposed by Khmaladze (1981); the difference is in the proofs. The transformation \(L\left[ \cdot \right] \) in Khmaladze (1981) is based on two strong results: the first, due to Hitsuda (1968), gives the linear representation of a Gaussian process whose measure is equivalent to that of the Wiener process; the second, due to Shepp (1966), gives the condition of equivalence of the process \(U\left( s \right) , 0\le s \le \tau \) (see (1)) on any interval \(\left[ 0,\tau \right] , \tau <1\), to the Wiener process \(W_s, 0\le s\le \tau \). Then, in Khmaladze (1981), the limit \(\tau \rightarrow 1\) is considered. We do not use these two results and give a direct martingale proof based on the solution of a Fredholm equation of the second kind with degenerate kernel.

Theorem 2

Suppose that \(h\left( s\right) \) is a continuous vector-function and the matrix \({\mathbb {N}}\left( t\right) \), \(t\in [0,1)\), is nondegenerate. Then

$$\begin{aligned} L_2\left[ U\left( \cdot \right) \right] \left( t\right) \equiv U\left( t\right) +\int _{0}^{t} \int _{0}^{s} {h^*\left( v\right) }\;{\mathbb {N}}\left( s\right) ^{-1}\; h\left( s\right) \,\mathrm{d}U\left( v\right) \;\mathrm{d}s= w_t, \end{aligned}$$
(21)

where \(w_t,t\in [0,1)\) is a Wiener process.

Proof

The proof will be done in several steps.

Step 1. We introduce a Gaussian process

$$\begin{aligned} M_t=\int _{0}^{t}q\left( t,s\right) \,\mathrm{d}U\left( s\right) ,\quad \quad 0\le t\le 1, \end{aligned}$$
(22)

where the function \(q\left( t,s\right) \) is chosen as the solution of a special Fredholm equation.

Step 2. Then we show that with such a choice of \(q\left( t,s\right) \) the process \(M_t\) becomes a martingale and admits the representation

$$\begin{aligned} M_t=\int _{0}^{t}q\left( s,s\right) \,\mathrm{d}w_s,\quad \quad 0\le t\le 1, \end{aligned}$$

where \(w_s,0\le s\le 1\) is some Wiener process.

Step 3. This representation allows us to obtain the Wiener process by inverting it:

$$\begin{aligned} w_t=\int _{0}^{t}\frac{1}{q\left( s,s\right) }\,\mathrm{d}M_s= U\left( t\right) +\int _{0}^{t}\frac{1}{q\left( s,s\right) }\int _{0}^{s}q'_s\left( s,v\right) \,\mathrm{d}U\left( v\right) \,\mathrm{d}s,\quad 0\le t\le 1. \end{aligned}$$

This last equality provides us with the linear transformation

$$\begin{aligned} L_2\left[ U\right] \left( t\right) =U\left( t\right) +\int _{0}^{t}\frac{1}{q\left( s,s\right) }\int _{0}^{s}q'_s \left( s,v\right) \,\mathrm{d}U\left( v\right) \,\mathrm{d}s =w_t, \end{aligned}$$

and we show that it is equivalent to (21).

Now we realize this program. Suppose that \(q\left( t,s\right) \) is some continuous function and the process \(M_t\) is defined by the equality (22). Then the correlation function of \(M_t\) is (\(s<t\))

$$\begin{aligned} R\left( t,s\right)&=\mathbf{E}\left[ M_tM_s\right] =\mathbf{E}\left[ \int _{0}^{t}q\left( t,u\right) \,\mathrm{d}w\left( u\right) -\int _{0}^{t} q\left( t,u\right) \,\left\langle \zeta _*, h\left( u\right) \right\rangle \,\mathrm{d} u\right] \\&\quad \left[ \int _{0}^{s}q\left( s,v\right) \,\mathrm{d}w\left( v\right) -\int _{0}^{s}q\left( s,v\right) \,\left\langle \zeta _*, h\left( v\right) \right\rangle \,\mathrm{d} v\right] \\&=\int _{0}^{s}q\left( t,u\right) q\left( s,u\right) \,\mathrm{d}u- \left\langle \int _{0}^{s}q\left( s,v\right) \,h\left( v\right) \,\mathrm{d} v,\int _{0}^{t}{q\left( t,u\right) \,h\left( u\right) }\,\mathrm{d} u\right\rangle \\&=\int _{0}^{s}q\left( s,u\right) \left[ q\left( t,u\right) -{}\; \int _{0}^{t}q\left( t,v\right) \,\left\langle h\left( u\right) ,h\left( v\right) \right\rangle \mathrm{d} v\right] \,\mathrm{d}u. \end{aligned}$$

Therefore, if we take \(q\left( t,s\right) \) such that it solves the Fredholm equation (\(t\) is fixed)

$$\begin{aligned} q\left( t,s\right) -{} \int _{0}^{t} q\left( t,v\right) \,\left\langle h\left( s\right) ,h\left( v\right) \right\rangle \;\mathrm{d} v=1,\quad \quad s\in \left[ 0,t\right] , \end{aligned}$$
(23)

then

$$\begin{aligned} \mathbf{E}\left[ M_tM_s\right] =\mathbf{E}\left[ M_s^2\right] =\int _{0}^{s}q\left( s,u\right) \,\mathrm{d}u. \end{aligned}$$
(24)

The solution \(q\left( t,s\right) \) of Eq. (23) can be found as follows. Let us put

$$\begin{aligned} q\left( t,s\right) =1+\left\langle \int _{0}^{t}q\left( t,v\right) h\left( v\right) \,\mathrm{d}v,{h\left( s\right) }\right\rangle =1+\left\langle A\left( t\right) ,{h\left( s\right) }\right\rangle =1+ {h\left( s\right) ^*}A\left( t\right) , \end{aligned}$$

where the vector-function \(A\left( t\right) \) itself is the solution of the following equation (obtained by multiplying (23) by \({h\left( s\right) }\) and integrating)

$$\begin{aligned} A\left( t\right) -\int _{0}^{t}{ h\left( s\right) }\,h\left( s\right) ^*\;\mathrm{d}s\; A\left( t\right) =\int _{0}^{t}{h\left( s\right) }\;\mathrm{d}s. \end{aligned}$$

We can write

$$\begin{aligned} \left( {I_d-\int _{0}^{t} {h\left( s\right) }h\left( s\right) ^*\;\mathrm{d}s}\right) \; A\left( t\right) = {\mathbb {N}}\left( t\right) A\left( t\right) =\int _{0}^{t}{h\left( s\right) }\;\mathrm{d}s \end{aligned}$$

(\(I_d\) is the \(d\times d\) identity matrix; here we used the normalization \(\int _{0}^{1}h\left( s\right) h\left( s\right) ^*\,\mathrm{d}s=I_d\)). Since \({\mathbb {N}}\left( t\right) \) is nondegenerate, we obtain

$$\begin{aligned} A\left( t\right) = {\mathbb {N}}\left( t\right) ^{-1}{\int _{0}^{t}{h\left( s\right) }\;\mathrm{d}s}. \end{aligned}$$

Therefore, the solution of (23) is the function

$$\begin{aligned} q\left( t,s\right) =1+\left\langle {\mathbb {N}}\left( t\right) ^{-1}{\int _{0}^{t}{h\left( v\right) }\;\mathrm{d}v}, h\left( s\right) \right\rangle . \end{aligned}$$
(25)
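As a sanity check of (25), one can verify numerically that it solves (23) in a simple hypothetical scalar case: the score \(h(s)=\sqrt{3}\,s\) below is an illustration only, normalized so that \(\int_0^1 h(s)^2\,\mathrm{d}s=1\), which gives \({\mathbb N}(t)=1-t^3\).

```python
import numpy as np

# Numerical check of formula (25) for a hypothetical scalar score
# h(s) = sqrt(3)*s, normalized so that int_0^1 h(s)^2 ds = 1.
# Then N(t) = int_t^1 h(v)^2 dv = 1 - t^3 and (25) reads
#   q(t,s) = 1 + h(s) * N(t)^{-1} * int_0^t h(v) dv.
h = lambda s: np.sqrt(3.0) * s
N = lambda t: 1.0 - t**3
H = lambda t: np.sqrt(3.0) * t**2 / 2.0          # int_0^t h(v) dv
q = lambda t, s: 1.0 + h(s) * H(t) / N(t)

# Fredholm equation (23): q(t,s) - int_0^t q(t,v) h(s) h(v) dv = 1.
t, s = 0.7, 0.3
m = 200_000
v = (np.arange(m) + 0.5) * (t / m)               # midpoint rule on [0, t]
lhs = q(t, s) - np.sum(q(t, v) * h(s) * h(v)) * (t / m)
print(round(lhs, 4))                             # prints 1.0
```

The same check passes for other values of \(t\) and \(s\), since in this scalar case the equation can also be verified in closed form.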

The last integral in (24) has the following property.

Lemma 1

$$\begin{aligned} \int _{0}^{t}q\left( t,s\right) \,\mathrm{d}s=\int _{0}^{t}q\left( s,s\right) ^2\mathrm{d}s. \end{aligned}$$
(26)

Proof

We show that

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}\int _{0}^{t}q\left( t,s\right) \mathrm{d}s=\frac{\mathrm{d}}{\mathrm{d}t}\int _{0}^{t}q\left( s,s\right) ^2\mathrm{d}s=q\left( t,t\right) ^2. \end{aligned}$$

We have

$$\begin{aligned}&\frac{\mathrm{d}}{\mathrm{d}t}\int _{0}^{t}q\left( t,s\right) \mathrm{d}s=1+\frac{\mathrm{d}}{\mathrm{d}t} \left[ \int _{0}^{t}h^*\left( s\right) \mathrm{d}s\;{\mathbb {N}}\left( t\right) ^{-1} \int _{0}^{t}h\left( v\right) \mathrm{d}v\right] \\&\quad \quad =1+2h^*\left( t\right) \;{\mathbb {N}}\left( t\right) ^{-1} \int _{0}^{t}h\left( v\right) \mathrm{d}v\\&\quad \quad \quad + \int _{0}^{t}h^*\left( s\right) \mathrm{d}s \; {\mathbb {N}}\left( t\right) ^{-1} h\left( t\right) h^*\left( t\right) {\mathbb {N}}\left( t\right) ^{-1} \int _{0}^{t}h\left( v\right) \mathrm{d}v\\&\quad \quad =\left[ 1+h^*\left( t\right) \;{\mathbb {N}}\left( t\right) ^{-1}\int _{0}^{t}h\left( s\right) \mathrm{d}s\; \right] ^2 =q\left( t,t\right) ^2. \end{aligned}$$
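The identity (26) can also be confirmed numerically in the same hypothetical scalar case \(h(s)=\sqrt{3}\,s\), with \(q(t,s)\) taken from (25).

```python
import numpy as np

# Numerical check of identity (26) for the hypothetical scalar score
# h(s) = sqrt(3)*s (so N(t) = 1 - t^3), with q(t,s) from (25):
#   int_0^t q(t,s) ds  =  int_0^t q(s,s)^2 ds.
q = lambda t, s: 1.0 + np.sqrt(3.0) * s * (np.sqrt(3.0) * t**2 / 2.0) / (1.0 - t**3)

t, m = 0.7, 200_000
s = (np.arange(m) + 0.5) * (t / m)               # midpoint rule on [0, t]
lhs = np.sum(q(t, s)) * (t / m)                  # int_0^t q(t,s) ds
rhs = np.sum(q(s, s) ** 2) * (t / m)             # int_0^t q(s,s)^2 ds
print(round(lhs, 4), round(rhs, 4))              # prints 0.9741 0.9741
```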

The next step is the following Lemma.

Lemma 2

If the Gaussian process \(M_s\) satisfies (24) and we have (26) with some continuous positive function \(q\left( s,s\right) \), then

$$\begin{aligned} z\left( t\right) = \int _{0}^{t} q\left( s,s\right) ^{-1}\mathrm{d}M_s \end{aligned}$$

is a Wiener process.

Proof

Consider the partition \(0=s_0<s_1<\cdots <s_N=1\) and put

$$\begin{aligned} z_N\left( t\right) =\sum _{s_l\le t}^{} q\left( s_{l-1},s_{l-1}\right) ^{-1}\left[ M_{s_{l}}-M_{s_{l-1}}\right] . \end{aligned}$$

Note that by (24) we have \(\mathbf{E}M_{s}M_{t} =\mathbf{E}M_{s}^2\) for \(s<t\). Hence for \(l\not =m\)

$$\begin{aligned} \mathbf{E}\left[ M_{s_{l}}-M_{s_{l-1}}\right] \left[ M_{s_{m}}-M_{s_{m-1}} \right] =0. \end{aligned}$$

This allows us to write

$$\begin{aligned} \mathbf{E}z_N\left( t\right) z_N\left( s\right)&= \sum _{s_l\le s}^{} q\left( s_{l-1},s_{l-1}\right) ^{-2} \mathbf{E}\left[ M_{s_{l}}-M_{s_{l-1}} \right] ^2\\&= \sum _{s_l\le s}^{} q\left( s_{l-1},s_{l-1}\right) ^{-2} \mathbf{E}\left[ M_{s_{l}}^2-M_{s_{l-1}}^2\right] \\&= \sum _{s_l\le s}^{} q\left( s_{l-1},s_{l-1}\right) ^{-2} \int _{s_{l-1}}^{s_l}q\left( v,v\right) ^{2}\mathrm{d}v\longrightarrow s \end{aligned}$$

as \(\max \left| s_l-s_{l-1}\right| \rightarrow 0\). At the same time \(z_N\left( t\right) \rightarrow z\left( t\right) \) in mean-square. Therefore, \(\mathbf{E}z\left( t\right) =0\), \(\mathbf{E}z\left( t\right) z\left( s\right) =t\wedge s\) and \(z\left( t\right) \) is a Wiener process \(w_t\).

Hence

$$\begin{aligned} M_t=\int _{0}^{t}q\left( s,s\right) \,\mathrm{d}w_s,\quad \quad t\in [0,1) \end{aligned}$$

is a Gaussian martingale. This implies the equality

$$\begin{aligned} w_t&=\int _{0}^{t}\frac{1}{q\left( s,s\right) }\;\mathrm{d}M_s=U\left( t\right) +\int _{0}^{t}\frac{1}{q\left( s,s\right) }\;\int _{0}^{s} q'_s\left( s,v\right) \,\mathrm{d}U\left( v\right) \;\mathrm{d}s. \end{aligned}$$

For the derivative \(q'_t\left( t,s\right) \) we can write

$$\begin{aligned}&q'_t\left( t,s\right) = \left\langle A'\left( t\right) ,h\left( s\right) \right\rangle \\&\quad \quad =h^* \left( s\right) {\mathbb {N}}\left( t\right) ^{-1}\;h\left( t\right) h^*\left( t\right) {\mathbb {N}}\left( t\right) ^{-1}\;\int _{0}^{t}h\left( v\right) \mathrm{d}v +h^*\left( s\right) \; {\mathbb {N}}\left( t\right) ^{-1}\; h\left( t\right) \\&\quad \quad =h^*\left( s\right) {\mathbb {N}}\left( t\right) ^{-1}\;h\left( t\right) \left[ h^*\left( t\right) {\mathbb {N}}\left( t\right) ^{-1}\;\int _{0}^{t}h\left( v\right) \mathrm{d}v+1\right] \\&\quad \quad =h^*\left( s\right) {\mathbb {N}}\left( t\right) ^{-1}\;h\left( t\right) \,q\left( t,t\right) . \end{aligned}$$

Hence

$$\begin{aligned} \frac{q'_s\left( s,v\right) }{q\left( s,s\right) }=h^*\left( v\right) {\mathbb {N}}\left( s\right) ^{-1}\;h\left( s\right) \end{aligned}$$

and we obtain the final expression

$$\begin{aligned} w_t= U\left( t\right) +\int _{0}^{t} \int _{0}^{s} {h^*\left( v\right) }\;{\mathbb {N}}\left( s\right) ^{-1}\; h\left( s\right) \,\mathrm{d}U\left( v\right) \;\mathrm{d}s. \end{aligned}$$

This is the explicit linear transformation \(w_t = L_2\left[ U \right] \left( t\right) \) of the process \(U\left( \cdot \right) \) into the Wiener process \(w_t \), which proves Theorem 2.
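To illustrate Theorem 2, one can simulate the limit process \(U\) on a grid and apply a discretized version of (21); by the theorem, the output should again be a Wiener process. The scalar score below is hypothetical and only for illustration.

```python
import numpy as np

# Monte Carlo illustration of Theorem 2 (hypothetical scalar case):
# with h(s) = sqrt(3)*s and N(s) = 1 - s^3, simulate
#   U(t) = w_t - (int_0^1 h dw) * int_0^t h(s) ds
# and apply the discretized transformation (21). The result should be a
# Wiener process, so its variance at t = 0.5 should be close to 0.5.
rng = np.random.default_rng(2)
n, reps = 200, 4000
ds = 1.0 / n
s = (np.arange(n) + 0.5) * ds
h = np.sqrt(3.0) * s
N = 1.0 - s**3

dw = rng.standard_normal((reps, n)) * np.sqrt(ds)   # Wiener increments
zeta = dw @ h                                       # int_0^1 h dw, per path
dU = dw - zeta[:, None] * (h * ds)[None, :]         # dU = dw - zeta * h ds
inner = np.cumsum(dU * h[None, :], axis=1)          # int_0^s h(v) dU(v)
V = np.cumsum(dU, axis=1) + np.cumsum(inner * (h / N * ds)[None, :], axis=1)

print(round(float(V[:, n // 2 - 1].var()), 2))      # variance at t = 0.5
```

Note that \(U\) itself has variance \(t-\bigl(\int_0^t h\,\mathrm{d}s\bigr)^2<t\); the correction term in (21) restores the Wiener variance.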

Let us denote

$$\begin{aligned} g\left( \vartheta ,y\right) = \frac{\dot{S}\left( \vartheta ,y\right) }{\;\;\sigma \left( y\right) }, \quad {\mathbb {N}}\left( \vartheta , x \right) = \int _{x}^{\infty }\frac{\dot{S}\left( \vartheta ,z\right) \dot{S}\left( \vartheta ,z\right) ^*}{\sigma \left( z\right) ^2}\,f\left( \vartheta ,z\right) \,\mathrm{d}z. \end{aligned}$$

Then we can write

$$\begin{aligned}&w_{F\left( \vartheta , x\right) }= U\left( F\left( \vartheta ,x\right) \right) \\&\quad \quad \quad +\int _{-\infty }^{x} \int _{-\infty }^{y} g^*\left( \vartheta ,y\right) {{\mathbb {N}}\left( \vartheta ,y \right) }^{-1}{}\; g\left( \vartheta ,z\right) \,\mathrm{d} U\left( F\left( \vartheta ,z\right) \right) \; f\left( \vartheta ,y\right) \mathrm{d}y, \end{aligned}$$

i.e., this transformation of \(U\left( \cdot \right) \) does not depend on the information matrix \(\mathrm{I}\left( \vartheta \right) \). Of course, \(U\left( \cdot \right) \) itself depends on \(\mathrm{I}\left( \vartheta \right) \).

To construct the test we have to replace \(U\left( F\left( \vartheta ,x\right) \right) ,g\left( \vartheta ,y \right) \) and \({\mathbb {N}}\left( \vartheta ,y \right) \) in (21) by their empirical versions, based only on the observations,

$$\begin{aligned} \xi _T\left( \hat{\vartheta }_T,x\right) ,\quad g\left( \hat{\vartheta } _T,y \right) =\frac{\dot{S}\left( \hat{\vartheta }_T ,y\right) }{\;\;\sigma \left( y\right) },\quad \quad {\mathbb {N}}\left( \hat{\vartheta } _T ,y \right) \end{aligned}$$

respectively and to study

$$\begin{aligned}&v_T\left( \hat{\vartheta }_T, x\right) = \xi _T\left( \hat{\vartheta }_T ,x\right) \\&\quad +\int _{-\infty }^{x} \int _{-\infty }^{y} g^*\left( \hat{\vartheta }_T,y\right) {{\mathbb {N}}\left( \hat{\vartheta }_T ,y \right) }^{-1}{}\; g\left( \hat{\vartheta }_T,z\right) \,\mathrm{d} \xi _T\left( \hat{\vartheta }_T ,z\right) \; \mathrm{d}F\left( \hat{\vartheta }_T ,y\right) . \end{aligned}$$

Then we have to show that

$$\begin{aligned} v_T(\hat{\vartheta }_T, x)-v_T\left( \vartheta , x\right) \rightarrow 0,\quad v_T\left( \vartheta , x\right) \Longrightarrow w_{F\left( \vartheta , x\right) }. \end{aligned}$$

Unfortunately, we cannot do this directly. We have to avoid the calculation of the integral

$$\begin{aligned} K\left( \hat{\vartheta }_T,y\right) =\int _{-\infty }^{y} g\left( \hat{\vartheta }_T,z \right) \,\mathrm{d}\xi _T\left( \hat{\vartheta } _T,z\right) \end{aligned}$$

because this integral is, in a certain sense, equivalent to an Itô stochastic integral, while \(\hat{\vartheta }_T\) depends on the whole trajectory \(\left( X_t,0\le t\le T\right) \). One way is to use the discrete approximation of this integral

$$\begin{aligned} K_n\left( \hat{\vartheta }_T,y\right) =\sum _{z_i<y}^{}g\left( \hat{\vartheta }_T,z_i \right) \,\left[ \xi _T\left( \hat{\vartheta }_T,z_{i+1}\right) -\xi _T\left( \hat{\vartheta }_T,z_{i}\right) \right] \end{aligned}$$

and to show that

$$\begin{aligned} K_n\left( \hat{\vartheta }_T,y\right) -K_n\left( \vartheta ,y\right) \rightarrow 0,\quad \quad K_n\left( \vartheta ,y\right) -K\left( \vartheta ,y\right) \rightarrow 0. \end{aligned}$$

Another possibility is to replace the corresponding stochastic integral by an ordinary one, which is what we do below.

Introduce two functions

$$\begin{aligned} Q\left( \vartheta ,x,y\right)&= \int _{y\wedge x }^{x} \frac{\dot{S}^*\left( \vartheta ,v\right) }{\sigma \left( v\right) } {{\mathbb {N}}\left( \vartheta ,y \right) }^{-1}{}\;\mathrm{d}F\left( \vartheta ,v\right) ,\\ R\left( \vartheta ,x,y\right)&= \frac{\bigl \langle \dot{S}\left( \vartheta ,y\right) ,Q\left( \vartheta , x,y\right) \bigr \rangle }{\sigma \left( y\right) ^2} \end{aligned}$$

and the statistic

$$\begin{aligned} V_T\left( \hat{\vartheta }_T,x\right)&=\xi _T\left( \hat{\vartheta }_T,x\right) \\&\quad -\frac{1}{2\sqrt{T}}\int _{0}^{T} \left[ R_y'\left( \hat{\vartheta }_T ,x,X_s\right) \sigma \left( X_s\right) ^2\right. \\&\quad \left. +\,2{R\left( \hat{\vartheta }_T ,x,X_s\right) }\,S(\hat{\vartheta }_T,X_s)\right] \,\mathrm{d}s. \end{aligned}$$

The main result of this work is the following theorem.

Theorem 3

Let the conditions \(\mathcal{ES}, \mathcal{A}_0\) and \(\mathcal{R}_1,\mathcal{R}_2\) be fulfilled. Then the test \(\hat{\psi } _T={1\!\hbox {I}}_{\left\{ \delta _T>c_\varepsilon \right\} }\), with \(\delta _T\) and \(c_\varepsilon \) defined by the relations

$$\begin{aligned} \delta _T=\int _{-\infty }^{\infty }V_T\left( \hat{\vartheta } _T,x\right) ^2\mathrm{d}F(\hat{\vartheta }_T,x),\quad \quad \mathbf{P}\left( \int _{0}^{1}w_t^2\mathrm{d}t>c_\varepsilon \right) =\varepsilon \end{aligned}$$
(27)

is ADF and belongs to \(\mathcal{K}_\varepsilon \).
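Since the limit distribution in (27) does not depend on the model, the threshold \(c_\varepsilon \) can be tabulated once, e.g., by Monte Carlo simulation. A minimal sketch follows; the printed quantile is a simulation estimate, not a table value.

```python
import numpy as np

# Monte Carlo approximation of the threshold c_eps in (27): simulate
# delta = int_0^1 w_t^2 dt for a standard Wiener process w_t and take
# the empirical (1 - eps)-quantile. The mean E[delta] = int_0^1 t dt = 1/2
# serves as a sanity check of the discretization.
rng = np.random.default_rng(3)
n, reps, eps = 400, 10000, 0.05
dt = 1.0 / n
w = np.cumsum(rng.standard_normal((reps, n)) * np.sqrt(dt), axis=1)
delta = np.sum(w**2, axis=1) * dt                # Riemann sum for int_0^1 w_t^2 dt

c_eps = float(np.quantile(delta, 1.0 - eps))     # estimated threshold
print(round(float(delta.mean()), 2), round(c_eps, 2))
```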

Proof

Let us suppose that \(g\left( \vartheta ,z \right) \) is a piecewise continuous function and consider the calculation of the integral

$$\begin{aligned} \int _{a }^{b} g\left( \vartheta ,z \right) \,\mathrm{d}\xi _T\left( \vartheta ,z\right) . \end{aligned}$$

For any partition \(a=z_1<z_2<\cdots < z_K=b\) with \(\max_k \left| z_{k+1}-z_k\right| \rightarrow 0\) we have

$$\begin{aligned}&\sum _{k=1}^{K-1}g\left( \vartheta ,\tilde{z}_k\right) \left[ \xi _T\left( \vartheta ,z_{k+1}\right) -\xi _T\left( \vartheta ,z_{k}\right) \right] \\&\quad =\frac{1}{\sqrt{T}}\int _{0}^{T} \frac{\sum _{k=1}^{K-1}g\left( \vartheta ,\tilde{z}_k\right) {1\!\hbox {I}}_{\left\{ z_k\le X_s<z_{k+1}\right\} }}{\sigma \left( X_s\right) }\,\mathrm{d}X_s\\&\quad \quad -\frac{1}{\sqrt{T}}\int _{0}^{T} \frac{\sum _{k=1}^{K-1}g\left( \vartheta ,\tilde{z}_k\right) S(\vartheta ,X_s){1\!\hbox {I}}_{\left\{ z_k\le X_s< z_{k+1}\right\} }}{\sigma \left( X_s\right) }\,\mathrm{d}s\\&\quad \quad \longrightarrow \frac{1}{\sqrt{T}}\int _{0}^{T} \frac{g\left( \vartheta ,X_s\right) {1\!\hbox {I}}_{\left\{ a\le X_s< b\right\} }}{\sigma \left( X_s\right) }\,\mathrm{d}X_s\\&\quad \quad -\frac{1}{\sqrt{T}}\int _{0}^{T} \frac{g\left( \vartheta ,X_s\right) S(\vartheta ,X_s){1\!\hbox {I}}_{\left\{ a\le X_s< b\right\} }}{\sigma \left( X_s\right) }\,\mathrm{d}s. \end{aligned}$$

Therefore we have the equality

$$\begin{aligned}&\int _{-\infty }^{y}\frac{\dot{S}\left( \vartheta ,z\right) }{\sigma (z)}\,\mathrm{d}\xi _T\left( \vartheta ,z\right) =\frac{1}{\sqrt{T}}\int _{0}^{T} \frac{\dot{S}\left( \vartheta ,X_s\right) {1\!\hbox {I}}_{\left\{ X_s< y\right\} }}{\sigma \left( X_s\right) ^2}\,\mathrm{d}X_s\nonumber \\&\quad \quad \quad -\frac{1}{\sqrt{T}}\int _{0}^{T} \frac{\dot{S}\left( \vartheta ,X_s\right) S(\vartheta ,X_s){1\!\hbox {I}}_{\left\{ X_s< y\right\} }}{\sigma \left( X_s\right) ^2}\,\mathrm{d}s. \end{aligned}$$
(28)

Further, by the Fubini theorem

$$\begin{aligned}&J_T\left( \vartheta ,x\right) =\int _{-\infty }^{x} g^*\left( \vartheta ,y\right) {{\mathbb {N}}\left( \vartheta ,y \right) }^{-1}{}\; \int _{-\infty }^{y} g\left( \vartheta ,z\right) \,\mathrm{d} \xi _T\left( \vartheta ,z\right) \; \mathrm{d}F\left( \vartheta ,y\right) ,\\&\quad =\frac{1}{\sqrt{T}}\int _{0}^{T} \frac{\dot{S}\left( \vartheta ,X_s\right) ^*}{\sigma \left( X_s\right) ^2} \int _{X_s\wedge x }^{x}{{\mathbb {N}}\left( \vartheta ,y \right) }^{-1} g\left( \vartheta ,y\right) \mathrm{d}F\left( \vartheta ,y\right) \,\mathrm{d}X_s\\&\quad \quad -\frac{1}{\sqrt{T}}\int _{0}^{T}\frac{\dot{S}\left( \vartheta ,X_s\right) ^*S(\vartheta ,X_s)}{\sigma \left( X_s\right) ^2} \int _{X_s\wedge x }^{x}{{\mathbb {N}}\left( \vartheta ,y \right) }^{-1} g\left( \vartheta ,y\right) {}\;\mathrm{d}F\left( \vartheta ,y\right) \;\mathrm{d}s\\&\quad =\frac{1}{\sqrt{T}}\int _{0}^{T} R\left( \vartheta ,x,X_s\right) \,\mathrm{d}X_s-\frac{1}{\sqrt{T}}\int _{0}^{T} {R\left( \vartheta ,x,X_s\right) }\,S(\vartheta ,X_s)\,\mathrm{d}s. \end{aligned}$$

By the Itô formula

$$\begin{aligned} \int _{0}^{T} R\left( \vartheta ,x,X_s\right) \,\mathrm{d}X_s=\int _{X_0}^{X_T} R\left( \vartheta ,x,y\right) \,\mathrm{d}y-\frac{1}{2}\int _{0}^{T} R_y'\left( \vartheta ,x,X_s\right) \sigma \left( X_s\right) ^2\mathrm{d}s. \end{aligned}$$

Hence no stochastic integrals remain, and we can substitute the estimator:

$$\begin{aligned}&\sqrt{T}J_T\left( \hat{\vartheta }_T ,x\right) =\int _{X_0}^{X_T} R\left( \hat{\vartheta }_T ,x,y\right) \,\mathrm{d}y\\&\quad -\int _{0}^{T} \left[ {R\left( \hat{\vartheta }_T ,x,X_s\right) }S(\hat{\vartheta }_T ,X_s)+\frac{1}{2}R_y'\left( \hat{\vartheta }_T ,x,X_s\right) \sigma \left( X_s\right) ^2 \right] \mathrm{d}s\\&\quad =\int _{X_0}^{X_T} R\left( \hat{\vartheta }_T ,x,y\right) \,\mathrm{d}y\\&\quad -\int _{0}^{T} \left[ {R\left( \hat{\vartheta }_T ,x,X_s\right) }S(\vartheta ,X_s)+\frac{1}{2}R_y'\left( \hat{\vartheta }_T ,x,X_s\right) \sigma \left( X_s\right) ^2 \right] \mathrm{d}s\\&\quad +\int _{0}^{T} {R\left( \hat{\vartheta }_T ,x,X_s\right) }\left[ S(\vartheta ,X_s) -S(\hat{\vartheta }_T ,X_s)\right] \mathrm{d}s. \end{aligned}$$

Further (below \(\hat{u}_T=\sqrt{T}(\hat{\vartheta }_T-\vartheta ) \))

$$\begin{aligned}&\left[ J_T\left( \hat{\vartheta }_T ,x\right) -J_T\left( \vartheta ,x\right) \right] =\left\langle \frac{\hat{u}_T}{{T}},\int _{X_0}^{X_T} \dot{R}\left( \vartheta ,x,y\right) \,\mathrm{d}y\right\rangle \\&\quad -\left\langle \frac{\hat{u}_T}{{T}},\int _{0}^{T} \left[ {\dot{R}\left( \vartheta ,x,X_s\right) }S(\vartheta ,X_s)+\frac{1}{2}\dot{R}_y'\left( \vartheta ,x,X_s\right) \sigma \left( X_s\right) ^2 \right] \mathrm{d}s\right\rangle \\&\quad -\left\langle \frac{ \hat{u}_T}{{T}},\int _{0}^{T} {R\left( \vartheta ,x,X_s\right) }\dot{S}(\vartheta ,X_s) \mathrm{d}s\right\rangle +o\left( 1\right) . \end{aligned}$$

Note that by Theorem 2.8 in Kutoyants (2004) for any \(p>0\)

$$\begin{aligned} \sup _\vartheta \mathbf{E}_\vartheta \left| \hat{\vartheta }_T-\vartheta \right| ^p\le C\,T^{-\frac{p}{2}}. \end{aligned}$$
(29)

Using the Itô formula once more, we obtain

$$\begin{aligned}&\int _{X_0}^{X_T} \dot{R}\left( \vartheta ,x,y\right) \,\mathrm{d}y-\int _{0}^{T} \left[ {\dot{R}\left( \vartheta ,x,X_s\right) }S(\vartheta ,X_s)+\frac{1}{2}\dot{R}_y'\left( \vartheta ,x,X_s\right) \sigma \left( X_s\right) ^2 \right] \mathrm{d}s\\&\quad \quad =\int _{0}^{T} {\dot{R}\left( \vartheta ,x,X_s\right) }\mathrm{d}W_s. \end{aligned}$$

Hence

$$\begin{aligned}&\left( \mathbf{E}_\vartheta \left\langle \frac{\hat{u}_T}{T},\int _{0}^{T} {\dot{R}\left( \vartheta ,x,X_s\right) }\;\mathrm{d}W_s\right\rangle \right) ^2\\&\quad \quad \le \mathbf{E}_\vartheta \left| \hat{u}_T\right| ^2\;\left| \frac{1}{T}\int _{0}^{T} {\dot{R}\left( \vartheta ,x,X_s\right) }\;\mathrm{d}W_s \right| ^2\le \frac{C}{{T}} , \end{aligned}$$

and we can write

$$\begin{aligned} J_T\left( \hat{\vartheta }_T ,x\right)&=\frac{1}{\sqrt{T}}\int _{0}^{T} { R\left( \vartheta ,x,X_s\right) }\;\mathrm{d}W_s\\&\quad -\frac{ 1}{{T}}\int _{0}^{T} {R\left( \vartheta ,x,X_s\right) }\left\langle \hat{u}_T, \dot{S}(\vartheta ,X_s)\right\rangle \; \mathrm{d}s+o\left( 1\right) . \end{aligned}$$

Therefore

$$\begin{aligned}&V_T\left( \hat{\vartheta }_T,x\right) = \xi _T\left( \hat{\vartheta } _T,x\right) +\frac{1}{\sqrt{T}}\int _{0}^{T} { R\left( \vartheta ,x,X_s\right) }\;\mathrm{d}W_s\\&\quad -\frac{ 1}{{T}}\int _{0}^{T} {R\left( \vartheta ,x,X_s\right) } \left\langle \hat{u}_T,\dot{S}(\vartheta ,X_s)\right\rangle \; \mathrm{d}s+o\left( 1\right) =\hat{V}_T\left( \hat{\vartheta } _T,x\right) +o\left( 1\right) , \end{aligned}$$

where we put

$$\begin{aligned} \hat{V}_T\left( \hat{\vartheta }_T,x\right)&=\xi _T\left( \hat{\vartheta } _T,x\right) +\frac{1}{\sqrt{T}}\int _{0}^{T} { R\left( \vartheta ,x,X_s\right) }\;\mathrm{d}W_s\\&\quad -\frac{ 1}{{T}}\int _{0}^{T} {R\left( \vartheta ,x,X_s\right) }\left\langle \hat{u}_T ,\dot{S}(\vartheta ,X_s)\right\rangle \; \mathrm{d}s. \end{aligned}$$

To prove the convergence

$$\begin{aligned} \delta _T&=\int _{-\infty }^{\infty }\hat{V}_T\left( \hat{\vartheta } _T,x\right) ^2\mathrm{d}F\left( \hat{\vartheta }_T,x\right) +o\left( 1\right) \\&\quad \quad \Longrightarrow \int _{-\infty }^{\infty }w_{F\left( \vartheta ,x\right) } ^2\mathrm{d}F\left( \vartheta ,x\right) =\int _{0}^{1}w_t^2\mathrm{d}t \end{aligned}$$

we have to verify the following properties:

  1.

    For any \(x_1,\ldots ,x_k\)

    $$\begin{aligned} \left( \hat{V}_T(\hat{\vartheta }_T,x_1),\ldots ,\hat{V}_T(\hat{\vartheta } _T,x_k)\right) \Longrightarrow \left( w_{F\left( \vartheta ,x_1\right) },\ldots ,w_{F\left( \vartheta ,x_k\right) }\right) . \end{aligned}$$
    (30)
  2.

    For any \(\delta >0\) there exists \(L>0\) such that

    $$\begin{aligned} \int _{\left| x\right| >L}^{}\mathbf{E}_\vartheta \hat{V}_T(\hat{\vartheta } _T,x)^2f(\hat{\vartheta }_T,x)\,\mathrm{d}x<\delta . \end{aligned}$$
    (31)
  3.

    For \(\left| x_i\right| <L\), \(i=1,2\),

    $$\begin{aligned} \mathbf{E}_\vartheta \left| \hat{V}_T(\hat{\vartheta }_T,x_2)-\hat{V}_T(\hat{\vartheta }_T,x_1)\right| ^2\le C\;\left| x_2-x_1\right| ^{1/2}. \end{aligned}$$
    (32)

Note that by the conditions (30) and (32) we have the convergence of the integrals

$$\begin{aligned} \int _{-L}^{L}\hat{V}_T(\hat{\vartheta }_T,x)^2\;\mathrm{d}F\left( \hat{\vartheta }_T,x\right) \Longrightarrow \int _{-L}^{L} w_{F\left( \vartheta ,x\right) }^2\;\mathrm{d}F\left( \vartheta ,x\right) =\int _{\nu _1 }^{1-\nu _2 }w_t^2\;\mathrm{d}t, \end{aligned}$$

where \(F\left( \vartheta ,-L\right) =\nu _1\) and \(F\left( \vartheta ,L\right) =1-\nu _2\).

The first convergence (30) follows from (20), the central limit theorem for stochastic integrals, and the law of large numbers

$$\begin{aligned} \frac{1}{T}\int _{0}^{T} R\left( \vartheta ,x_i,X_s\right) \dot{S}(\vartheta ,X_s)\; \mathrm{d}s\longrightarrow \int _{-\infty }^{\infty }R\left( \vartheta ,x_i,y\right) \dot{S}(\vartheta ,y)f\left( \vartheta ,y\right) \mathrm{d}y. \end{aligned}$$

Here \(i=1,\ldots ,k. \) Indeed, we obtain the joint asymptotic normality

$$\begin{aligned} \hat{V}_T(\hat{\vartheta }_T,x_i)&\Longrightarrow u\left( x_i\right) +\int _{-\infty }^{\infty }R\left( \vartheta ,x_i,y\right) \mathrm{d}w_{F\left( \vartheta ,y\right) } \\&\quad \quad -\int _{-\infty }^{\infty }R\left( \vartheta ,x_i,y\right) \left\langle \mathrm{I}\left( \vartheta \right) ^{-1}\Delta \left( \vartheta \right) , \dot{S}(\vartheta ,y)\right\rangle \mathrm{d}F\left( \vartheta ,y\right) . \end{aligned}$$

Note that the limit of (28) is equivalent to

$$\begin{aligned} \int _{-\infty }^{y}\frac{\dot{S}\left( \vartheta ,z\right) }{\sigma \left( z\right) }\mathrm{d}u\left( z\right)&=\int _{-\infty }^{y}\frac{\dot{S}\left( \vartheta ,z\right) }{\sigma \left( z\right) }\mathrm{d}w_{F\left( \vartheta ,z\right) }\\&\quad \quad - \int _{-\infty }^{y}\left\langle \mathrm{I}\left( \vartheta \right) ^{-1}\Delta \left( \vartheta \right) ,\dot{S}\left( \vartheta ,z\right) \right\rangle \frac{\dot{S}\left( \vartheta ,z\right) }{\sigma \left( z\right) ^2} \mathrm{d}F\left( \vartheta ,z\right) . \end{aligned}$$

To check (31) we write

$$\begin{aligned} \mathbf{E}_\vartheta \xi _T\left( x\right) ^2&\le 2 \mathbf{E}_\vartheta \left( \frac{1}{\sqrt{T}}\int _{0}^{T}{{1\!\hbox {I}}_{\left\{ X_s<x\right\} }}\mathrm{d}W_s\right) ^2\\&\quad \quad +2\mathbf{E}_\vartheta \left( \left\langle \hat{u}_T\;,\frac{1}{T}\int _{0}^{T}\frac{{1\!\hbox {I}}_{\left\{ X_s<x\right\} } \dot{S}\left( \tilde{\vartheta }_T,X_s\right) }{\sigma \left( X_s\right) }\mathrm{d}s\right\rangle \right) ^2\\&\le 2F\left( \vartheta ,x\right) +2\mathbf{E}_\vartheta \left| \hat{u}_T\right| ^2 \left| \frac{1}{T}\int _{0}^{T}\frac{{1\!\hbox {I}}_{\left\{ X_s<x\right\} } \dot{S}\left( \tilde{\vartheta }_T,X_s\right) }{\sigma \left( X_s\right) }\mathrm{d}s\right| ^2\le C. \end{aligned}$$

Recall that by conditions \(\mathcal{A}_0,\mathcal{R}_1,\mathcal{R}_2,\) all related functions have polynomial majorants. By condition \(\mathcal{A}_0\), the invariant density \(f\left( \vartheta ,x\right) \) has exponentially decreasing tails: there exist constants \(c_2>0,C_2>0\) such that

$$\begin{aligned} f\left( \vartheta ,x\right) \le C_2\,e^{-c_2\left| x\right| }. \end{aligned}$$

Therefore all mathematical expectations are finite.

Further,

$$\begin{aligned}&\mathbf{E}_\vartheta \left| \hat{V}_T\left( \hat{\vartheta }_T,x_2\right) -\hat{V}_T\left( \hat{\vartheta }_T,x_1\right) \right| ^2 \le 3\mathbf{E}_\vartheta \left| \xi _T\left( x_2\right) -\xi _T\left( x_1\right) \right| ^2 \\&\quad \quad +3\mathbf{E}_\vartheta \left| \frac{1}{\sqrt{T}}\int _{0}^{T}\left[ R\left( \vartheta ,x_2,X_s\right) -R\left( \vartheta ,x_1,X_s\right) \right] \mathrm{d}W_s \right| ^2 \\&\quad \quad +3\mathbf{E}_\vartheta \left| \frac{1}{{T}}\int _{0}^{T}\left[ R\left( \vartheta ,x_2,X_s\right) -R\left( \vartheta ,x_1,X_s\right) \right] \left\langle \hat{u}_T, \dot{S}\left( \tilde{\vartheta }_T,X_s\right) \right\rangle \, \mathrm{d}s \right| ^2\\&\quad \le C\left( L\right) \,\left| x_2-x_1\right| ^{1/2}. \end{aligned}$$

For example (\(x_1<x_2\)),

$$\begin{aligned}&\mathbf{E}_\vartheta \left| \xi _T\left( x_2\right) -\xi _T\left( x_1\right) \right| ^2\le 2 \mathbf{E}_\vartheta \left( \frac{1}{\sqrt{T}}\int _{0}^{T} {{1\!\hbox {I}}_{\left\{ x_1<X_s<x_2\right\} }}{}\mathrm{d}W_s\right) ^2\\&\quad \quad +2\mathbf{E}_\vartheta \left( \left\langle \hat{u}_T\;,\frac{1}{T}\int _{0}^{T}\frac{{1\!\hbox {I}}_{\left\{ x_1<X_s<x_2\right\} } \dot{S}\left( \tilde{\vartheta }_T,X_s\right) }{\sigma \left( X_s\right) }\mathrm{d}s\right\rangle \right) ^2\\&\quad \quad \le 2\int _{x_1}^{x_2}f\left( \vartheta ,y\right) \,\mathrm{d}y +2 \left( \mathbf{E}_\vartheta \left| \hat{u}_T\right| ^4\right) ^{1/2} \left( \int _{x_1}^{x_2}P\left( y\right) f\left( \vartheta ,y\right) \,\mathrm{d}y \right) ^{1/2}\\&\quad \quad \le C\left| x_2-x_1\right| + C\left| x_2-x_1\right| ^{1/2}\le C\left( L\right) \left| x_2-x_1\right| ^{1/2}. \end{aligned}$$

Here \(P\left( y\right) \) is some polynomial.

These properties of \(\hat{V}_T(\hat{\vartheta }_T,x) \) allow us (see Theorem A1.22 in Ibragimov and Has’minskii 1981) to verify the convergence

$$\begin{aligned} \int _{-\infty }^{\infty } \hat{V}_T(\hat{\vartheta }_T,x)^2f(\hat{\vartheta } _T,x)\,\mathrm{d}x\Longrightarrow \int _{-\infty }^{\infty } w_{F\left( \vartheta ,x\right) }^2\;\mathrm{d}F\left( \vartheta ,x\right) =\int _{0}^{1}w_t^2\;\mathrm{d}t. \end{aligned}$$
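Since the limit \(\int _{0}^{1}w_t^2\,\mathrm{d}t\) does not depend on the model, the threshold \(c_\varepsilon \) can be computed once for all problems by Monte Carlo simulation of this integral. A minimal sketch (the grid size, the number of replications and the level \(\varepsilon =0.05\) are illustrative choices, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def limit_statistic(n_steps=1000):
    """One realization of int_0^1 w_t^2 dt for a standard Wiener process w."""
    dt = 1.0 / n_steps
    # Wiener process on the grid dt, 2dt, ..., 1 via cumulative Gaussian increments
    w = np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))
    return np.sum(w**2) * dt            # Riemann approximation of the integral

samples = np.array([limit_statistic() for _ in range(20_000)])
eps = 0.05
c_eps = np.quantile(samples, 1.0 - eps)  # P{ int_0^1 w_t^2 dt > c_eps } ~ eps
```

A quick sanity check of such a simulation is the empirical mean of the samples, which should be close to \(\mathbf{E}\int _{0}^{1}w_t^2\,\mathrm{d}t=\int _{0}^{1}t\,\mathrm{d}t=1/2\).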

Example 1

Linear case. Let us consider the one-dimensional (\(d=1\)) linear case

$$\begin{aligned} \mathrm{d}X_s=\vartheta a\left( X_s\right) \mathrm{d}s + \sigma \left( X_s\right) \mathrm{d}W_s,\quad X_0,\quad 0\le s\le T. \end{aligned}$$

This case yields some simplification: there is no longer any difficulty with the calculation of the stochastic integral, and the statistic can be computed as follows. Let us denote

$$\begin{aligned} B_T\left( \hat{\vartheta }_T,x\right) = \xi _T(\hat{\vartheta }_T,x) +\int _{-\infty }^{x} \frac{ a\left( y\right) A_T(\hat{\vartheta } _T,y)}{{\mathbb {N}}\left( \hat{\vartheta }_T,y\right) \sigma \left( y\right) }\; \mathrm{d}F(\hat{\vartheta }_T,y), \end{aligned}$$

where

$$\begin{aligned} {\mathbb {N}}\left( \vartheta ,y\right) =\int _{y}^{\infty }\frac{a\left( z\right) ^2}{\sigma \left( z\right) ^2}\,f\left( \vartheta ,z\right) \;\mathrm{d}z \end{aligned}$$

and (see (28))

$$\begin{aligned} A_T(\hat{\vartheta }_T,y)&= \frac{1}{\sqrt{T}}\int _{0}^{T} \frac{a\left( X_s\right) {1\!\hbox {I}}_{\left\{ X_s< y\right\} }}{\sigma \left( X_s\right) ^2}\,\left[ \mathrm{d}X_s -\hat{\vartheta } _Ta\left( X_s\right) \,\mathrm{d}s \right] . \end{aligned}$$

Then we obtain the convergence

$$\begin{aligned} \delta _T=\int _{-\infty }^{\infty } B_T(\hat{\vartheta }_T,x)^2\mathrm{d}F(\hat{\vartheta }_T,x)\Longrightarrow \int _{0}^{1}w_t^2\;\mathrm{d}t. \end{aligned}$$

Hence the test \(\hat{\psi }_T={1\!\hbox {I}}_{\left\{ \delta _T>c_\varepsilon \right\} }\) is ADF.
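As a numerical illustration of this linear case, one can take the hypothetical choice \(a\left( x\right) =-x\), \(\sigma \left( x\right) =1\) (an Ornstein-Uhlenbeck type process), simulate a trajectory by the Euler scheme, estimate \(\vartheta \) by the MLE and evaluate a discretized version of \(A_T(\hat{\vartheta }_T,y)\). All tuning constants below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear model dX = theta * a(X) dt + sigma(X) dW with
# a(x) = -x, sigma(x) = 1 (Ornstein-Uhlenbeck); Euler discretization.
theta, T, n = 1.0, 500.0, 500_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
X = np.empty(n + 1)
X[0] = 0.0
for k in range(n):
    X[k + 1] = X[k] - theta * X[k] * dt + dW[k]

Xs, dX = X[:-1], np.diff(X)
a = -Xs                                    # a(X_s) on the grid

# MLE in the linear case: (int a/sigma^2 dX) / (int a^2/sigma^2 ds)
theta_hat = np.sum(a * dX) / (np.sum(a**2) * dt)

def A_T(y):
    """Discretized A_T(theta_hat, y): stochastic integral of the residuals."""
    resid = dX - theta_hat * a * dt        # dX_s - theta_hat a(X_s) ds
    mask = Xs < y
    return np.sum(a[mask] * resid[mask]) / np.sqrt(T)
```

By the definition of the MLE in this linear case, the full stochastic integral of the residuals vanishes, so \(A_T(\hat{\vartheta }_T,y)\) tends to zero as \(y\rightarrow \infty \), which gives a convenient check of the implementation.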

4 Discussion

In Theorem 2 the condition for the existence of a finite solution on the interval \([0,1) \) is the following: the matrix

$$\begin{aligned} {\mathbb {N}}\left( t\right) =\int _{t}^{1} h\left( v\right) h^*\left( v\right) \mathrm{d}v \end{aligned}$$
(33)

is positive definite for any \(t\in \left( 0,1\right) \).

Of course, it suffices to check it for the values of \(t <1\) close to 1. The quantity \({\mathbb {N}}\left( t\right) =\mathrm{I}\left( \vartheta \right) ^{-1} \mathrm{I}_t\left( \vartheta \right) \), where (\( t=F\left( \vartheta ,x\right) \))

$$\begin{aligned} \mathrm{I}_t\left( \vartheta \right) =\mathrm{I}_{F\left( \vartheta ,x\right) }\left( \vartheta \right) =\int _{x}^{\infty } \frac{\dot{S}\left( \vartheta ,y\right) \dot{S}\left( \vartheta ,y\right) ^*}{\sigma \left( y\right) ^2}f\left( \vartheta ,y\right) \mathrm{d}y \end{aligned}$$

is the Fisher information in the case of censored observations

$$\begin{aligned} Y_s=X_s\;{1\!\hbox {I}}_{\left\{ X_s>x\right\} },\quad \quad 0\le s\le T \end{aligned}$$

and the condition (33) means that this Fisher information is positive definite for any \(x<\infty \).

For example, if \(d=1\) and we suppose that

$$\begin{aligned} h\left( 1\right) =\mathop {\text {lim}}\limits _{t\rightarrow 1} \frac{\left| \dot{S}\left( \vartheta ,F^{-1}\left( \vartheta ,t\right) \right) \right| }{\sigma \left( F^{-1}\left( \vartheta ,t\right) \right) \sqrt{{\mathrm{I}\left( \vartheta \right) }}} =\mathop {\text {lim}}\limits _{y\rightarrow \infty } \frac{\left| \dot{S}\left( \vartheta ,y\right) \right| }{\sigma \left( y\right) \sqrt{{\mathrm{I}\left( \vartheta \right) }}} >0, \end{aligned}$$

then the condition (33) is fulfilled.

It is easy to see that for the Ornstein-Uhlenbeck process \(h\left( 1\right) =\infty \), but the integral of \(h\left( \cdot \right) ^2\) over \(\left[ 0,1\right] \) is finite and equal to 1.
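For the Ornstein-Uhlenbeck process \(\mathrm{d}X_s=-\vartheta X_s\,\mathrm{d}s+\mathrm{d}W_s\) this is a direct calculation using only the definitions above: here \(\dot{S}\left( \vartheta ,x\right) =-x\), \(\sigma \left( x\right) =1\), the invariant density is \(f\left( \vartheta ,x\right) =\sqrt{\vartheta /\pi }\,e^{-\vartheta x^2}\) and \(\mathrm{I}\left( \vartheta \right) =\int x^2f\left( \vartheta ,x\right) \mathrm{d}x=\left( 2\vartheta \right) ^{-1}\). Hence

$$\begin{aligned} h\left( t\right) =\sqrt{2\vartheta }\,\left| F^{-1}\left( \vartheta ,t\right) \right| \longrightarrow \infty \quad \quad \mathrm{as}\;\; t\rightarrow 1, \end{aligned}$$

while the change of variables \(t=F\left( \vartheta ,x\right) \) gives

$$\begin{aligned} \int _{0}^{1}h\left( t\right) ^2\,\mathrm{d}t=2\vartheta \int _{-\infty }^{\infty }x^2f\left( \vartheta ,x\right) \,\mathrm{d}x=2\vartheta \cdot \frac{1}{2\vartheta }=1. \end{aligned}$$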

Note that if the function \(\dot{S}\left( \vartheta ,y\right) =0\) for \(y\ge b\) with some \(b\), then the finite solution \(q\left( t,s\right) \), \(s\in \left[ 0,t\right] \), exists for the values \(t\in [0,F\left( \vartheta , b\right) )\) only.