Abstract
Let \((\Omega , \mathcal {F}, (\mathcal {F}_t)_{t\ge 0}, P)\) be a complete stochastic basis, and let X be a semimartingale with predictable characteristics \((B, C, \nu )\). Consider a family of probability measures \(\mathbf {P}=( {P}^{n, \psi }, \psi \in \Psi , n\ge 1)\), where \(\Psi \) is an index set and \( {P}^{n, \psi }{\mathop {\ll }\limits ^\mathrm{loc}}{P}\), and denote the likelihood ratio process by \(Z_t^{n, \psi } =\frac{\mathrm{d}P^{n, \psi }|_{\mathcal {F}_t}}{\mathrm{d} P|_{\mathcal {F}_t}}\). Under some regularity conditions stated in terms of partitioning entropy and Hellinger processes, we prove that \(\log Z_t^{n}\) converges weakly to a Gaussian process in \(\ell ^\infty (\Psi )\) as \(n\rightarrow \infty \) for each fixed \(t>0\).
1 Introduction and Preliminaries
The celebrated Donsker theorem is a functional extension of the central limit theorem in probability theory. A great deal of research on this topic has appeared over the past decades. The reader is referred to classic books and papers such as Dudley [3], Giné and Zinn [4], Ossiander [10], Andersen et al. [1], Liptser and Shiryaev [7], van de Geer [11], Billingsley [2], Jacod and Shiryaev [5] for both the theoretical framework and wide applications. A primary purpose of the present paper is to establish a certain Donsker theorem for log-likelihood processes indexed by an arbitrary set. In this section, we first introduce some basic notions about log-likelihood processes and the martingale representation property.
Throughout this paper, we follow the standard definitions and notation of martingale theory, which can be found in the book by Jacod and Shiryaev [5]. Let \((\Omega , \mathcal {F}, (\mathcal {F}_t)_{t\ge 0}, P)\) be a complete stochastic basis. Fix a semimartingale X on it, and assume that all P-martingales have the representation property relative to X. Denote by the triplet \((B, C, \nu )\) the predictable characteristics of X (associated with some bounded truncation function). More precisely, if \(\Delta X_t=X_t-X_{t-}\) denotes the jump of X at time t, then \(X_t-\sum _{s\le t}(\Delta X_s-h_\tau (\Delta X_s))\), where \(h_{\tau }(x)=x1_{(|x|\le \tau )}\), is a special semimartingale, which can be uniquely decomposed into a predictable process of bounded variation and a local martingale. Then B is the bounded variation part of \(X-\sum _{s\le \cdot }(\Delta X_s-h_\tau (\Delta X_s))\). Let \(X^c\) be the continuous local martingale part of X; then
Let \(\mu \) be the jump measure of X defined by
where \(\varepsilon _{(s, \Delta X_s(\omega ))}\) denotes the Dirac measure at the point \((s, \Delta X_s(\omega ))\). Here \(\nu \) is the unique predictable compensator of \(\mu \) (up to a P-null set). Namely, \(\nu \) is a predictable random measure such that for any predictable function W,Footnote 1 \(W*(\mu -\nu )=W*\mu -W*\nu \) is a local martingale, where \(W*\mu \) is defined by
(see Section 2.1 of Chapter 2 in Jacod and Shiryaev [5] for more details). Note the predictable quadratic variation is given by
where
and
It follows from Corollary 1.19 of Chapter 2 in Jacod and Shiryaev [5] that \(a_t=0\) is equivalent to the fact that X is a quasi-left continuous process.Footnote 2 For a process with independent increments, \(a_t=0\) means precisely that the process has no fixed time of discontinuity.Footnote 3 Thus, we may and do choose a good version of both \(\hat{W}\) and a such that \(\hat{W}\) is the predictable projection of \(W(\omega , t, \Delta X_t)\mathbf {1}_{(\Delta X_t\ne 0)}\) and \(a_t\le 1\). In particular,
Now consider another probability measure \(P'\) such that
which means that for any \(t\ge 0\), \(P'|_{\mathcal {F}_t}\ll P|_{\mathcal {F}_t}\). Define the likelihood ratio process
It follows from Chapter III in Jacod and Shiryaev [5] that \(Z_t\) is a local martingale.
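For a first feel for this object, here is a minimal discrete sketch (our own toy example, not from the paper; the function name `likelihood_ratio` and the parameters `p`, `q` are ours): for i.i.d. Bernoulli observations, the likelihood ratio process is a product of density ratios, and the martingale property gives \(E_P Z_n=1\).

```python
import random

# Under P the X_i are i.i.d. Bernoulli(p); under P' they are Bernoulli(q).
# Z_n = prod_{i<=n} (q/p)^{X_i} * ((1-q)/(1-p))^{1-X_i} is a P-martingale,
# so E_P[Z_n] = 1 for every n; we check this by Monte Carlo.

def likelihood_ratio(xs, p, q):
    z = 1.0
    for x in xs:
        z *= (q / p) if x == 1 else ((1.0 - q) / (1.0 - p))
    return z

random.seed(0)
p, q, n, reps = 0.5, 0.3, 20, 100_000
mean_z = sum(
    likelihood_ratio([1 if random.random() < p else 0 for _ in range(n)], p, q)
    for _ in range(reps)
) / reps
assert abs(mean_z - 1.0) < 0.1   # martingale property: E_P[Z_n] = 1
```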
Since by assumption all P-martingales have the representation property relative to X, according to Theorem 5.19 of Chapter III in Jacod and Shiryaev [5], \(Z_t\) has the following representation: there exist a predictable process \(\beta \) and a nonnegative predictable function Y on \(\tilde{\Omega }\) such that
Here
and \(\triangle \) is a random set defined as follows
Note that \(\beta , Y\) and \(\triangle \) depend on \(P'\). In fact, Y can be explicitly represented as follows. Let \(M_{\mu }^{P}\) be a measure on \((\tilde{\Omega }, \tilde{\mathcal {P}})\) where \(\tilde{\Omega }:=\Omega \times \mathbb {R}_+\times \mathbb {R}\), \(\tilde{\mathcal {P}}:=\mathcal {P}\otimes \mathcal {B}\), such that \(M_{\mu }^{P}(W)=E(W*\mu )_{\infty }\) for all measurable nonnegative functions W. Then Y is the conditional expectation of \(\frac{Z}{Z_{-}}\) with respect to \(\tilde{\mathcal {P}}\) under \(M_{\mu }^{P}\), namely
Define the log-likelihood process \(L_t\) by
This process is a well-studied object in both stochastic process theory and statistical inference. Obviously,
Assume we are given a family of probability measures \(\mathbf {P}^n=\{P^{n, \psi }: \psi \in \Psi \}\) on \((\Omega ,\mathcal {F})\), indexed by an arbitrary non-empty set \(\Psi \), and assume
for every \(n>0\) and \(\psi \in \Psi \). We shall be mainly interested in the sequence of likelihood ratio processes \(Z_t^{n, \psi }\). The main purpose of the paper is to establish a certain Donsker theorem for log-likelihood processes \(\log Z_t^n\) in \(\ell ^{\infty }(\Psi )\) as \(n\rightarrow \infty \), where we denote by \(\ell ^{\infty }(\Psi )\) the space of bounded real-valued functions defined on \(\Psi \).
It seems hard to develop an invariance principle for \(\log Z^n\) directly, due to its complicated structure. To the best of our knowledge, there are only a few works in this area, such as Le Cam [6] and Vostrikova [13]. The reader can find some interesting results in Nishiyama [8] and [9], where \(\log Z^n\) is assumed to be a rather special continuous semimartingale and a discrete-time semimartingale, respectively. It is a challenging problem to extend Nishiyama’s work to a general setting. To attack this problem, we shall combine stochastic calculus techniques and chaining arguments with the Kakutani–Hellinger distance for probability measures. In particular, we shall characterize the regularity of \(\ell ^{\infty }(\Psi )\)-valued log-likelihood processes in terms of the Kakutani–Hellinger distance and the Hellinger processes.
The rest of the paper is organized as follows. We first make some necessary assumptions and then state our main result in Sect. 2. The proof of the main result is given in Sect. 3, which consists of several lemmas and two propositions.
2 Main Result
To state our main result, we need some more notation and technical assumptions. We start with the Kakutani–Hellinger distance between two probability measures P and \(P'\). Assume that Q is a third probability measure on \((\Omega ,\mathcal {F})\) such that
Let
and define the Kakutani–Hellinger distance by
It is easy to check that \(\rho (P, P')\) is a metric on the space of probability measures and does not depend on the choice of Q. Note
For \(0<\alpha <1\), call \(\breve{H}(\alpha ; P, P')= E_Q(Z^\alpha (Z')^{1-\alpha })\) the Hellinger integral of order \(\alpha \). We remark that \( \breve{H}(\alpha ; P, P')\rightarrow 1\) as \(\alpha \rightarrow 0\) if \(P'\ll P\).
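To make these definitions concrete, here is a small discrete sketch (our own illustration, not part of the paper): taking Q to be the uniform law on a finite set, the densities Z, \(Z'\) are proportional to the probability mass functions, and since \(\rho \) does not depend on Q, one may compute directly with the mass functions.

```python
import math

# Kakutani-Hellinger distance and Hellinger integral for discrete laws on a
# finite set (Q = uniform law; the normalization cancels, so pmfs suffice):
#   rho(P, P')^2    = (1/2) * sum_x (sqrt(p(x)) - sqrt(q(x)))**2
#   H(alpha; P, P') = sum_x p(x)**alpha * q(x)**(1 - alpha)

def hellinger_distance(p, q):
    keys = set(p) | set(q)
    s = sum((math.sqrt(p.get(k, 0.0)) - math.sqrt(q.get(k, 0.0))) ** 2
            for k in keys)
    return math.sqrt(0.5 * s)

def hellinger_integral(alpha, p, q):
    keys = set(p) | set(q)
    return sum(p.get(k, 0.0) ** alpha * q.get(k, 0.0) ** (1 - alpha)
               for k in keys)

P  = {0: 0.5, 1: 0.5}
P1 = {0: 0.3, 1: 0.7}
P2 = {0: 0.1, 1: 0.9}

# rho is a metric: symmetry and the triangle inequality
assert hellinger_distance(P, P1) == hellinger_distance(P1, P)
assert hellinger_distance(P, P2) <= hellinger_distance(P, P1) + hellinger_distance(P1, P2)
# rho = 1 exactly when P and P' are mutually singular
assert hellinger_distance({0: 1.0}, {1: 1.0}) == 1.0
# H(alpha; P, P') -> 1 as alpha -> 0 when P' << P
assert abs(hellinger_integral(0.001, P, P1) - 1.0) < 0.01
```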
We proceed to introduce the Hellinger processes. Assume that
and define
Then for each \(0<\alpha <1\), there is a unique predictable increasing process \(h(\alpha ; P, P')\), called the Hellinger process of order \(\alpha \), such that
-
(i)
$$\begin{aligned} h(\alpha ; P, P')_0=0 \end{aligned}$$
-
(ii)
$$\begin{aligned} h(\alpha ; P, P')_t= \mathbf {1}_{\cup [0,\, S_n]}\cdot h(\alpha ; P, P')_t \end{aligned}$$
-
(iii)
$$\begin{aligned} Y(\alpha )_{S_n\wedge t}+\big (Y(\alpha )_{-}\cdot h(\alpha ; P, P')\big )_{S_n\wedge t} \quad \text{ is } \text{ a } \text{ local } \text{ martingale } \end{aligned}$$
where
$$\begin{aligned} S_n=\inf \{t: Z_t>n \, \text{ or } \, Z_t'>n\} \end{aligned}$$and
$$\begin{aligned} Y(\alpha )_t=Z_t^\alpha (Z_t')^{1-\alpha }. \end{aligned}$$
One can extend the above Hellinger process to order zero and even to a general function. Given a function \(\psi :\mathbb {R}\rightarrow \mathbb {R} \) such that
is bounded, with the convention \(\frac{0}{0}=0\), and \(\psi (1)=0\). Denote
then there is a predictable increasing process, denoted by \(\imath (\psi ; P, P')\), such that
-
(i’)
$$\begin{aligned} \imath (\psi ; P, P')_0=0 \end{aligned}$$
-
(ii’)
$$\begin{aligned} {\imath (\psi ; P,P')}_t= \mathbf {1}_{\cup [0,\, T_n]}\cdot \imath (\psi ; P, P')_t \end{aligned}$$
-
(iii’)
$$\begin{aligned} \jmath (\psi ; P, P')_{T_n\wedge t}-\imath (\psi ; P,P')_{T_n\wedge t} \quad \text{ is } \text{ a } \text{ local } \text{ martingale. } \end{aligned}$$
Call \(\imath (\psi ; P,P')\) the Hellinger process of order 0 associated with \(\psi \). In particular, if
$$\begin{aligned} \psi (x)=\left\{ \begin{array}{ll} 1,&{}\quad x=0,\\ 0, &{}\quad x>0, \end{array} \right. \end{aligned}$$then we simply call \(\imath (\psi ; P, P')\) the Hellinger process of order 0.
In general, it is rather complicated to compute \(h(\alpha ; P, P')\). However, we fortunately have the following explicit formula in the special case \(P'\ll P\):
In particular,
Our technical assumptions mainly involve three aspects: the predictable envelope of \(\{Y^{n, \psi }, \psi \in \Psi \}\), the Kakutani–Hellinger distance between the probability measures \(P^{n, \psi }\), and the size of the index set \(\Psi \).
For every \(n>0\), denote the essential supremum \( \overline{Y}^{n}(\Psi )=[\sup _{\psi \in \Psi }Y^{n,\psi }]_{\tilde{\mathcal {P}}, M_{\nu }^P}\). This is the predictable envelope of \(\{Y^{n,\psi }, \psi \in \Psi \}\) used in Definitions 2.1 and 2.3 of Nishiyama [9].
Assumption 1
For any \(n>0\) and \(\psi \in \Psi \), \(\triangle ^{n,\psi }\equiv \Omega \times [0,1]\) and \(0\le a<1\). Moreover, the family \(\{Y^{n,\psi }, \psi \in \Psi \}\) attains its predictable envelope for every \(n>0\); namely, there is a \(\psi _0\in \Psi \) such that \(Y^{n,\psi _0}=[\sup _{\psi \in \Psi }Y^{n,\psi }]_{\tilde{\mathcal {P}}, M_{\nu }^P}\).
Assumption 2
For every \(\varepsilon >0\), as \(n\rightarrow \infty \)
where \(f_{1+\varepsilon }(x)=|x-1|1_{\{1/(1+\varepsilon )<x<1+\varepsilon \}^{c}}\).
There is a nonnegative definite continuous function \(V_t\) on \(\Psi \times \Psi \), such that as \(n\rightarrow \infty \),
and for every \(\psi , \phi \in \Psi \),
Let \(\Psi \) be an arbitrary set and \(\Delta _\Pi \) a positive rational number. \(\Pi =\{\Pi (\varepsilon )\}_{\varepsilon \in (0,\Delta _\Pi ]}\) is called a decreasing series of finite partitions (DFP) of \(\Psi \) if
-
(i)
each \(\Pi (\varepsilon )=\{\Psi (\varepsilon ;k):1\le k\le N_{\Pi }(\varepsilon )\}\) is a finite partition of \(\Psi \), namely
$$\begin{aligned} \Psi =\bigcup _{k=1}^{N_{\Pi }(\varepsilon )}\Psi (\varepsilon ;k); \end{aligned}$$ -
(ii)
\(N_{\Pi }(\Delta _{\Pi })=1\) and \(\lim _{\varepsilon \downarrow 0}N_{\Pi }(\varepsilon )=\infty \);
-
(iii)
\(N_{\Pi }(\varepsilon )\ge N_{\Pi }(\varepsilon ')\) whenever \(\varepsilon \le \varepsilon '\).
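For instance (a toy example of ours, not from the paper), the dyadic partitions of \(\Psi =[0,1)\) form a DFP with \(\Delta _\Pi =1\); the function name `n_partition` is ours.

```python
import math

# Dyadic DFP of Psi = [0, 1): for eps in (0, 1], Pi(eps) consists of the
# N(eps) = 2**p intervals [k/2**p, (k+1)/2**p), where p is the smallest
# integer with 2**(-p) <= eps.

def n_partition(eps):
    p = max(0, math.ceil(-math.log2(eps)))
    return 2 ** p

# (ii): N(Delta_Pi) = 1 and N(eps) -> infinity as eps -> 0
assert n_partition(1.0) == 1
assert n_partition(2.0 ** -10) == 1024
# (iii): N is non-increasing in eps
eps_grid = [2.0 ** -k for k in range(11)]
assert all(n_partition(small) >= n_partition(big)
           for small, big in zip(eps_grid[1:], eps_grid))
```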
Given \(0<\varepsilon \le \Delta _\Pi \), the \(\varepsilon \)-entropy \(H_{\Pi }(\varepsilon )\) is defined by
Assumption 3
There exists a decreasing series of finite partitions, \(\Pi \), of \(\Psi \) such that as \(n\rightarrow \infty \)
and
where
We are now ready to state our main result as follows.
Theorem 2.1
Under Assumptions 1, 2 and 3, we have
where G stands for a Gaussian element in \(\ell ^{\infty }(\Psi )\), each d-dimensional marginal \((G^{\psi _{1}},\ldots ,G^{\psi _{d}})\) is a normal random vector with mean
and covariance structure
The proof is given in Sect. 3. For the sake of comparison, we review an earlier result due to Nishiyama [9] in the discrete-time case. Let \( (\mathcal {F}_{i})_{i\ge 0} \) be a discrete-time stochastic basis, and \(\mathbf {P}^{n}=\{P^{n,\psi }:\psi \in \Psi \}\) a family of probability measures on \((\Omega ,\mathcal {F})\) such that
Define
and
Nishiyama [9] studied weak convergence of the log-likelihood processes \(\log W^n_n\) in \(\ell ^\infty (\Psi )\) and obtained a result similar to (2.12) under some integrability assumptions involving the \(\xi ^n\)’s and entropy conditions. More specifically, assume
-
(i)
for every \(\varepsilon >0\)
$$\begin{aligned} \sum _{i=1}^{n}E\Big ( (\sup _{\psi \in \Psi }\xi ^{n,\psi }_{i})^{2}1_{\{\sup _{\psi \in \Psi }\xi ^{n,\psi }_{i}>\varepsilon \}}\Big |\mathcal {F}_{i-1}\Big ){\mathop {\longrightarrow }\limits ^{P}}0; \end{aligned}$$ -
(ii)
there exists a decreasing series of finite partitions, \(\Pi \), of \(\Psi \) such that
$$\begin{aligned} \sup _{\varepsilon \in (0,\Delta _\Pi ]\cap \mathbb {Q}}\max _{1\le k\le N_{\Pi }(\varepsilon )}\frac{1}{\varepsilon ^2} \sum _{i=1}^{n}\sup _{\psi ,\phi \in \Psi (\varepsilon ,k)}E\big (|\xi ^{n,\psi }_{i}-\xi ^{n,\phi }_{i}|^{2}|\mathcal {F}_{i-1}\big ) =O_{P}(1)\nonumber \\ \end{aligned}$$(2.18)and
$$\begin{aligned} \int _{0}^{\Delta _\Pi }H_{\Pi }(\varepsilon )\mathrm{d}\varepsilon <\infty ; \end{aligned}$$(2.19) -
(iii)
there is a \(V:\Psi \times \Psi \rightarrow \mathbb {R}\) such that
$$\begin{aligned} \sup _{\psi \in \Psi }\left| \sum _{i=1}^{n}4E^{*} \big ((\xi ^{n,\psi }_{i})^{2}|\mathcal {F}_{i-1} \big )-V^{\psi ,\psi }\right| {\mathop {\longrightarrow }\limits ^{P}}0; \end{aligned}$$and for \(\psi ,\phi \in \Psi \)
$$\begin{aligned} \sum _{i=1}^{n}4E\left( \xi ^{n,\psi }_{i}\xi ^{n,\phi }_{i} |\mathcal {F}_{i-1}\right) {\mathop {\longrightarrow }\limits ^{P}} V^{\psi ,\phi }. \end{aligned}$$Then
$$\begin{aligned} \log W_{n}^{n}\Rightarrow G \quad \text{ in } \ell ^{\infty }(\Psi ), \end{aligned}$$(2.20)where G stands for a Gaussian element in \(l^{\infty }(\Psi )\), each d-dimensional marginal \((G^{\psi _{1}},\ldots ,G^{\psi _{d}})\) is a normal random vector with mean
$$\begin{aligned} \vec {\mu }=-\frac{1}{2}\big (V^{\psi _{i},\psi _{i}}, \, 1\le i\le d\big ) \end{aligned}$$and covariance structure
$$\begin{aligned} \Sigma =\left( V^{\psi _{i},\psi _{j}}\right) _{1\le i,j\le d}. \end{aligned}$$To conclude this section, we give two more remarks.
Remark 2.2
Observe that in the discrete-time case the Hellinger process can be computed as follows:
and
Thus, there is to some extent a similarity between the assumptions of Theorem 2.1 and Nishiyama’s assumptions. However, it seems neater to use the Hellinger processes in the continuous-time case.
The integrability condition on the partitioning entropy (Assumption 3) plays an important role in the proof of Theorem 2.1. It would be possible to use a metric entropy condition instead, but then we would need to introduce a suitable pseudo-metric on the index set \(\Psi \); the Hellinger processes would very likely be a good candidate for defining one.
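As a concrete sanity check on the discrete-time limit law (2.20) (our own classical example, not taken from Nishiyama [9]): for the Gaussian shift experiment \(P^{n,\psi }=N(\psi /\sqrt{n},1)^{\otimes n}\) versus \(P=N(0,1)^{\otimes n}\), the log-likelihood ratio equals \(\psi S_n/\sqrt{n}-\psi ^2/2\) with \(S_n=\sum _{i\le n}X_i\), which under P is exactly \(N(-V/2, V)\) with \(V=\psi ^2\), matching the limiting mean \(-V/2\) and variance V.

```python
import math
import random

# Simulate log W_n = psi * S_n / sqrt(n) - psi**2 / 2 under P = N(0,1)^n and
# check its mean and variance against the Gaussian limit N(-V/2, V), V = psi**2.
random.seed(1)
psi, n, reps = 1.5, 100, 20_000
V = psi ** 2
samples = []
for _ in range(reps):
    s_n = sum(random.gauss(0.0, 1.0) for _ in range(n))
    samples.append(psi * s_n / math.sqrt(n) - psi ** 2 / 2)

mean = sum(samples) / reps
var = sum((x - mean) ** 2 for x in samples) / reps
assert abs(mean - (-V / 2)) < 0.06   # mean close to -V/2
assert abs(var - V) < 0.15           # variance close to V
```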
Remark 2.3
It is rather interesting to consider the limiting behavior of the process \(\log Z^{n}\) in \(\ell ^{\infty }([0,1]\times \Psi )\). To this end, we would need to establish a tightness criterion in the space \([0,1]\times \Psi \). This is more complicated and is left to future work.
3 Proofs
Let us start with a decomposition. Observe that
It is easy to see
and so we have
Let \(\mu ^{n,\psi }\) be the jump measure of \(L^{n,\psi }\) defined by
and \(\nu ^{n,\psi }\) the corresponding predictable compensator. Then for any predictable function \(W(\omega , t, x)\),
and so by the fact that \(1-a_s\) is the predictable projection of \(\mathbf {1}_{( \Delta X_s= 0)}\),
Given a positive number \(\tau \), consider the truncation function
and define
Thus, combining these, we easily obtain the canonical decomposition
For simplicity of writing, let
Thus, we have
The proof of Theorem 2.1 will consist of a series of lemmas and propositions.
Lemma 3.1
Under Assumptions 1, 2 and 3, we have for each \(\psi \in \Psi \) and \(\tau >0\), as \(n\rightarrow \infty \)
Proof
Set
Then
Also, for any \(\delta >0\)
By the Lenglart domination property (see page 35 of Jacod and Shiryaev [5]),
Note for \(\delta <1\) there is a positive constant \(c_\delta \) such that for any \(x>0\)
so
On the other hand, for each \(\varepsilon \in (0,1)\)
Again, by the Lenglart domination property, it follows for any \(\eta >0\)
Letting \(n\rightarrow \infty \) and then \(\eta \rightarrow 0\), we have
In combination, we have proved the desired statement. \(\square \)
Lemma 3.2
Under Assumptions 1, 2 and 3, we have for each \(\psi \in \Psi \), as \(n\rightarrow \infty \)
Consequently,
Proof
Obviously, for any \(\varepsilon >0\)
Also, by Assumption 2
The desired (3.8) holds.
Observe an elementary inequality: for any \(0<\varepsilon <1\), there is a positive constant \(c_\varepsilon \) such that
Then it follows
Thus, under Assumption 2, we have by letting \(n\rightarrow \infty \) and then \(\varepsilon \rightarrow 0\)
For (3.10), note
where \(\Upsilon ^{n,\psi }_s\) is as in (3.6). Then it easily follows
Thus, under Assumption 2, we have by first letting \(n\rightarrow \infty \) and then \(\varepsilon \rightarrow 0\)
The proof is now complete. \(\square \)
Lemma 3.3
Under Assumptions 1, 2 and 3, we have for each \(\psi \in \Psi \) and \(\tau >0\), as \(n\rightarrow \infty \)
Proof
First, observe that the predictable quadratic variation of \(h_\tau *(\mu ^{n,\psi }- \nu ^{n,\psi })_t\) is given by
We shall prove below that \(\langle h_\tau *(\mu ^{n,\psi } - \nu ^{n,\psi })\rangle _t\) converges to zero in probability. Note that there is a \( \delta >0 \) such that for any \(\varepsilon <\delta \)
where \(0<c_\varepsilon <\infty \). Thus, it follows for any \(\varepsilon <\delta \)
Hence, letting \(n\rightarrow \infty \) and then \(\varepsilon \rightarrow 0\) immediately yields
A similar argument shows
Combined together, we have the desired statement. \(\square \)
Lemma 3.4
Under Assumptions 1, 2 and 3, we have for each \(\psi \in \Psi \) and \(\tau >0\), as \(n\rightarrow \infty \)
Proof
Note there is a \( \delta >0 \) such that for any \(\varepsilon <\delta \)
where \(0<c_\varepsilon <\infty \). Thus, it follows for any \(\varepsilon <\delta \)
This implies
Since it was proved that \(2(1-\sqrt{Y^{n,\psi }})^2*\nu _t{\mathop {\longrightarrow }\limits ^{P}}0\), we have
Similarly, we have
Combined together, the proof is complete. \(\square \)
Proposition 3.5
Under Assumptions 1, 2 and 3, every finite-dimensional marginal of \(L^{n}\) converges weakly.
Proof
has non-degenerate limiting finite-dimensional marginal laws, and the other part of \(L^{n}\) asymptotically vanishes.
For every \(\psi \), the process
is a continuous semimartingale. Its predictable characteristics \((\hat{B}^{ \psi },\hat{C}^{ \psi }, 0)\) are
By Lemma 3.2 and Assumption 2, there is a non-decreasing continuous function V, such that \(V_{0}=0\),
for every \(\psi ,\phi \in \Psi \).
The proposition is now concluded by Theorem VIII.3.6 of Jacod and Shiryaev [5]. \(\square \)
Next we turn to verifying uniform tightness.
Lemma 3.6
Under Assumptions 1, 2 and 3, we have for each \(\tau >0\), as \(n\rightarrow \infty \)
Proof
Recall
Let us prove
and
For (3.16), note
Thus, we need only to prove
and
Let us first look at (3.18). Set
Then
For any \(\varepsilon , \eta >0\),
Note \(x-1> \log x\) if \(\log x>0\). Then
Recalling the definition of \(\imath (f_{1+\varepsilon },P,P^{n,\psi })\) and Assumptions 1 and 2, we can obtain (3.18). The proofs of (3.19) and (3.17) are similar. \(\square \)
Lemma 3.7
Under Assumptions 1, 2 and 3, for any \(\varepsilon , \eta >0\), there is a \(\delta >0\) and a partition \(\Pi (\delta )=\{\Psi (\delta ;k),\, 1\le k\le N(\delta )\}\) such that
Proof
Let us fix \(\varepsilon , \eta >0\). First, note
Let
and
We shall treat \(J^{n,1, \psi }_t\) and \(J^{n,2, \psi }_t\) separately below. We focus only on \(J^{n,2, \psi }_t\), since the treatment of \(J^{n, 1, \psi }_t\) is similar and simpler.
According to Assumption 3, there is a sufficiently large positive finite constant K such that
Thus, we only need to work on the event \(\{\Vert \mathfrak {H}^n\Vert _{\Pi , t}\le K\}\). In particular, we shall prove
Assuming (3.25), we can take \(\delta \) so small that
from which it follows by the Markov inequality
This in turn together with (3.24) implies
It remains to prove (3.25). For every integer \(p\ge 0\), construct nested refining partitions \(\Pi (2^{-p}\delta )=\{\Psi (2^{-p}\delta ; k), 1\le k\le N_\Pi (2^{-p}\delta )\} \) of \(\Psi \), and then choose an element \(\psi _{p,k}\) from each partitioning set \(\Psi (2^{-p}\delta ;k)\) in such a way that
For every \(\psi \in \Psi \) and each \(p\ge 0\), define \(\pi _{p} \psi =\psi _{p,k}\) and \(\Pi _{p}\psi =\Psi (2^{-p}\delta ;k)\) whenever \(\psi \in \Psi (2^{-p}\delta ;k)\). Obviously, \( \Pi _{p}\psi \subseteq \Pi _{p-1}\psi \). Define
Note \(W(\Pi _{p}\psi )\le W(\Pi _{p-1}\psi )\). Set
and
and for \(p\ge 1\)
It is easy to see
and for each \(p\ge 1\)
Hence, it follows for any \(q\ge 1\)
Note \(\pi _{0}\psi =\pi _{0}\phi \) if \( \psi , \phi \in \Psi (\delta ;k) \), and so we have
It is now enough to show
We have the following identity
Denote
We shall only establish (3.29) for \(M^{n,p,\psi }\), since the other three terms on the RHS of (3.30) can be treated similarly.
Obviously, \(M^{n,p,\psi }\) is a local martingale, and
On the other hand, the predictable quadratic variation of \(M^{n,p,\psi }_t\) satisfies
By an elementary calculation, for any \(\gamma \in (0,1)\) there is a constant \(c_\gamma \) such that
whenever \(x,y\in [1-\gamma ,1+\gamma ]\). Then it follows
By the Bernstein–Freedman inequality (see Lemma 3.2 of Nishiyama [8]) for local martingales with bounded jumps, it follows that for \(\varepsilon >0\),
This, together with Lemma 2.2.10 of van der Vaart and Wellner [12], in turn yields
and
Thus, (3.25) is obtained, and so the proof is complete. \(\square \)
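The nested construction above can be visualized with a toy example (ours, not from the paper): take \(\Psi =[0,1)\), \(\delta =1\), and let \(\pi _p\psi \) be the left endpoint of the dyadic cell of length \(2^{-p}\) containing \(\psi \); the function name `pi_p` is ours.

```python
# Toy chaining: pi_p(psi) is the chosen representative of the partitioning set
# Pi_p(psi) containing psi; the sets are nested and the representatives
# converge to psi geometrically fast, which the telescoping sum over p exploits.

def pi_p(psi, p):
    k = int(psi * 2 ** p)    # index of the dyadic cell of length 2**-p containing psi
    return k / 2 ** p        # representative: left endpoint of that cell

psi = 0.7306
chain = [pi_p(psi, p) for p in range(12)]
# each representative lies within 2**-p of psi
assert all(0 <= psi - c < 2 ** -p for p, c in enumerate(chain))
```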
Lemma 3.8
Under Assumptions 1, 2 and 3, for any \(\varepsilon , \eta >0\), there is a \(\delta >0\) and a partition \(\Pi (\delta )=\{\Psi (\delta ;k),\, 1\le k\le N(\delta )\}\) such that
Proof
Recall
It is enough to prove the following two statements
and
We shall concentrate on proving (3.31) below, since (3.32) is similar; the argument is completely parallel to that of Lemma 3.7, with some minor modifications. For every integer \(p\ge 0\), choose an element \(\psi _{p,k}\) from each partitioning set \(\Psi (2^{-p}\delta ;k)\) in such a way that
and define \(\pi _{p} \psi =\psi _{p,k}\) and \(\Pi _{p}\psi =\Psi (2^{-p}\delta ;k)\) whenever \(\psi \in \Psi (2^{-p}\delta ;k)\). Note
then a main step is to prove
To this end, for \(p \ge 0\), set
Obviously, \(\Gamma (\Pi _{p}\psi )\le \Gamma (\Pi _{p-1}\psi )\). Define
where \(\alpha _{p}\) is as in (3.28).
Note we have the following identity
and
In addition, it is easy to see
by Schwarz’s inequality. Thus,
Thus, (3.31) is proved, which completes the proof of the lemma. \(\square \)
We can obtain the following proposition by Lemmas 3.6–3.8.
Proposition 3.9
Under Assumptions 1, 2 and 3, for any \(\varepsilon , \eta >0\), there is a \(\delta >0\) and a partition \(\Pi (\delta )=\{\Psi (\delta ;k),\, 1\le k\le N(\delta )\}\) such that
Proof of Theorem 2.1
Proposition 3.9 implies the asymptotic equicontinuity of \(L^n\), and the finite-dimensional limit distributions of \(L^n\) are given by Proposition 3.5. Theorem 2.1 then follows from these two propositions and Theorem 1.1 of Nishiyama [9]. \(\square \)
Notes
Let \(\tilde{\Omega }=\Omega \times \mathbb {R}_+\times \mathbb {R}\) and \(\tilde{\mathcal {P}}=\mathcal {P}\otimes \mathcal {B}\), where \(\mathcal {B}\) is the Borel \(\sigma \)-field on \(\mathbb {R}\) and \(\mathcal {P}\) is the \(\sigma \)-field generated by all left-continuous adapted processes on \(\Omega \times \mathbb {R}_+\). A predictable function is a \(\tilde{\mathcal {P}}\)-measurable function on \(\tilde{\Omega }\).
A quasi-left continuous process X is a càdlàg adapted process such that for any increasing sequence of stopping times \(T_n\) with limit T, \(\lim _{n\rightarrow \infty } X_{T_n}=X_T.\)
t is called a fixed time of discontinuity if \(P(\Delta X_t\ne 0)>0\).
References
Andersen, N., Giné, E., Ossiander, M., Zinn, J.: The central limit theorem and the law of the iterated logarithm for empirical processes under local conditions. Probab. Theory Relat. Fields 77, 271–305 (1988)
Billingsley, P.: Convergence of Probability Measures, 2nd edn. Wiley, New York (1999)
Dudley, R.: Central limit theorems for empirical measures. Ann. Probab. 6, 899–929 (1978)
Giné, E., Zinn, J.: Some limit theorems for empirical processes. Ann. Probab. 12, 929–989 (1984)
Jacod, J., Shiryaev, A.: Limit theorems for stochastic processes. Grundlehren der Mathematischen Wissenschaften, vol. 288, 2nd edn. Springer, Berlin (2003)
Le Cam, L.: Locally asymptotically normal families of distributions. Univ. Calif. Publ. Stat. 3, 27–98 (1960)
Liptser, R., Shiryaev, A.: Theory of Martingales. Kluwer, Dordrecht (1989)
Nishiyama, Y.: Some central limit theorems for \(\ell ^{\infty }\)-valued semimartingales and their applications. Probab. Theory Relat. Fields 108, 459–494 (1997)
Nishiyama, Y.: Weak convergence of some classes of martingales with jumps. Ann. Probab. 28, 685–712 (2000)
Ossiander, M.: A central limit theorem under metric entropy with \(L_{2}\) bracketing. Ann. Probab. 15, 897–919 (1987)
van de Geer, S.: Exponential inequalities for martingales, with application to maximum likelihood estimation for counting processes. Ann. Stat. 23, 1779–1801 (1995)
van der Vaart, A., Wellner, J.: Weak Convergence and Empirical Processes. Springer, New York (1996)
Vostrikova, L.: Functional limit theorems for the likelihood ratio processes. Ann. Univ. Sci. Budapest. Sect. Comput. 6, 145–182 (1985)
Acknowledgements
The authors would like to thank the anonymous referees and the Associate Editor for careful reading and constructive comments. This work was supported by the National Natural Science Foundation of China (Nos. 11371317, 11701331, 11731012, 11871425), the Fundamental Research Funds for the Central Universities, the Shandong Provincial Natural Science Foundation (No. ZR2017QA007) and the Young Scholars Program of Shandong University.
Su, Z., Wang, H. A Donsker-Type Theorem for Log-Likelihood Processes. J Theor Probab 33, 1401–1425 (2020). https://doi.org/10.1007/s10959-019-00926-9