Abstract
In this paper, the functional central limit theorem is established for martingale-like random vectors within the framework of sub-linear expectations introduced by Shige Peng. As applications, the Lindeberg central limit theorem for independent random vectors is established, sufficient and necessary conditions for the central limit theorem for independent and identically distributed random vectors are found, and a Lévy characterization of a multi-dimensional G-Brownian motion is obtained.
1 Introduction and Notations
In the classical framework of probability theory, the expectation is a linear functional of random variables defined on a measurable space, and the probability is an additive function of events on this space. The sub-linear expectation is an extension of the expectation obtained by relaxing its linearity to sub-linearity, and its related probability, called a capacity, is no longer additive. Under the sub-linear expectation, Peng [7,8,9, 11] gave the notions of G-normal distributions, G-Brownian motions, G-martingales, independence of random variables, identical distribution of random variables and so on. Peng [9, 11] and Krylov [5] established the central limit theorem for independent and identically distributed (i.i.d.) random variables. Zhang [16] obtained the Lindeberg central limit theorem for independent but not necessarily identically distributed one-dimensional random variables, as well as for martingale-like sequences. In this paper, we consider multi-dimensional martingale-like random vectors. In the classical probability space, since the convergence in distribution of a sequence of random vectors \({\varvec{X}}_n=(X_{n,1},\ldots ,X_{n,d})\) is equivalent, by the Cramér–Wold device, to the convergence in distribution of every linear function \(\sum _k\alpha _kX_{n,k}\) of \({\varvec{X}}_n\), the central limit theorem for random vectors follows directly from the central limit theorem for one-dimensional random variables. Under the sub-linear expectation, due to the nonlinearity, the Cramér–Wold device is no longer valid for showing the convergence of random vectors. In this paper, we derive the functional central limit theorem for martingale-like random vectors under the Lindeberg condition.
As applications, we establish the Lindeberg central limit theorem for independent random vectors, give the sufficient and necessary conditions of the central limit theorem for independent and identically distributed random vectors and obtain a Lévy characterization of a multi-dimensional G-Brownian motion.
We use the framework and notations of Peng [8, 9, 11]. If the reader is familiar with these notations, the remainder of this section can be skipped. Let \((\Omega ,\mathcal F)\) be a given measurable space, and let \(\mathscr {H}\) be a linear space of real functions defined on \((\Omega ,\mathcal F)\) such that if \(X_1,\ldots , X_n \in \mathscr {H}\) then \(\varphi (X_1,\ldots ,X_n)\in \mathscr {H}\) for each \(\varphi \in C_{l,Lip}(\mathbb R^n)\), where \(C_{l,Lip}(\mathbb R^n)\) denotes the linear space of (local Lipschitz) functions \(\varphi \) satisfying
\(\mathscr {H}\) is considered as a space of “random variables,” and for its elements we write \(X\in \mathscr {H}\). We also denote the space of bounded Lipschitz functions and the space of bounded continuous functions on \(\mathbb R^n\) by \(C_{b,Lip}(\mathbb R^n)\) and \(C_b(\mathbb R^n)\), respectively. A sub-linear expectation \(\widehat{\mathbb E}\) on \(\mathscr {H}\) is a function \(\widehat{\mathbb E}: \mathscr {H}\rightarrow \overline{\mathbb R}\) satisfying the following properties: for all \(X, Y \in \mathscr {H}\),
-
(1)
Monotonicity: If \(X \ge Y\) then \(\widehat{\mathbb E}[X]\ge \widehat{\mathbb E}[Y]\);
-
(2)
Constant preserving: \(\widehat{\mathbb E}[c] = c\);
-
(3)
Sub-additivity: \(\widehat{\mathbb E}[X+Y]\le \widehat{\mathbb E}[X] +\widehat{\mathbb E}[Y ]\) whenever \(\widehat{\mathbb E}[X] +\widehat{\mathbb E}[Y ]\) is not of the form \(+\infty -\infty \) or \(-\infty +\infty \);
-
(4)
Positive homogeneity: \(\widehat{\mathbb E}[\lambda X] = \lambda \widehat{\mathbb E}[X]\), \(\lambda \ge 0\).
Here, \(\overline{\mathbb R}=[-\infty , \infty ]\). The triple \((\Omega , \mathscr {H}, \widehat{\mathbb E})\) is called a sub-linear expectation space. Given a sub-linear expectation \(\widehat{\mathbb E}\), let us denote the conjugate expectation \(\widehat{\mathcal E}\) of \(\widehat{\mathbb E}\) by \( \widehat{\mathcal E}[X]:=-\widehat{\mathbb E}[-X]\), \( \forall X\in \mathscr {H}\). If X is not in \(\mathscr {H}\), we define its sub-linear expectation by \(\widehat{\mathbb E}^{*}[X]=\inf \{\widehat{\mathbb E}[Y]: X\le Y\in \mathscr {H}\}\). When there is no ambiguity, we also denote it by \(\widehat{\mathbb E}\).
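A minimal numerical illustration (a toy example, not from the paper): on a finite sample space, taking the supremum of ordinary linear expectations over a family of probability measures produces a sub-linear expectation, and properties (1)–(4) together with the conjugate expectation can be checked directly.

```python
# A sub-linear expectation realized as the supremum of linear
# expectations over a family of probability measures (toy example).

def sublinear_E(X, measures):
    """X: dict omega -> value; measures: list of dicts omega -> prob."""
    return max(sum(P[w] * X[w] for w in X) for P in measures)

omega = ["a", "b", "c"]
P1 = {"a": 0.5, "b": 0.3, "c": 0.2}
P2 = {"a": 0.2, "b": 0.3, "c": 0.5}
measures = [P1, P2]

X = {"a": 1.0, "b": 0.0, "c": -1.0}
Y = {"a": -1.0, "b": 2.0, "c": 0.0}

# Sub-additivity: E[X+Y] <= E[X] + E[Y]
XplusY = {w: X[w] + Y[w] for w in omega}
assert sublinear_E(XplusY, measures) <= sublinear_E(X, measures) + sublinear_E(Y, measures)

# Positive homogeneity: E[2X] = 2 E[X]
X2 = {w: 2 * X[w] for w in omega}
assert abs(sublinear_E(X2, measures) - 2 * sublinear_E(X, measures)) < 1e-12

# Constant preserving, and the conjugate expectation -E[-X] <= E[X]
const = {w: 3.0 for w in omega}
assert sublinear_E(const, measures) == 3.0
negX = {w: -X[w] for w in omega}
conjugate = -sublinear_E(negX, measures)
assert conjugate <= sublinear_E(X, measures)
```

Here the conjugate expectation is the infimum over the same family of measures, which is why it never exceeds the sub-linear (upper) expectation.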
After having the sub-linear expectation, we consider the capacities. Let \(\mathcal G\subset \mathcal F\). A function \(V:\mathcal G\rightarrow [0,1]\) is called a capacity if
It is said to be sub-additive if \(V(A\bigcup B)\le V(A)+V(B)\) for all \(A,B\in \mathcal G\) with \(A\bigcup B\in \mathcal G\). Let \((\Omega , \mathscr {H}, \widehat{\mathbb E})\) be a sub-linear expectation space. In this paper, we let \((\mathbb V,\mathcal V)\) denote a pair of capacities with the properties that
and \(\mathcal V(A):= 1-\mathbb V(A^c)\), \(A\in \mathcal F\). It is obvious that \( \mathcal V(A\bigcup B)\le \mathcal V(A)+\mathbb V(B). \) We call \(\mathbb V\) and \(\mathcal V\) the upper and the lower capacity, respectively. In general, we can choose \(\mathbb V\) as
To distinguish this capacity from others, we denote it by \(\widehat{\mathbb V}\), and set \(\widehat{\mathcal V}(A)=1-\widehat{\mathbb V}(A^c)\). \(\widehat{\mathbb V}\) is the largest capacity satisfying (1.1).
When there exists a family of probability measures \(\mathscr {P}\) on \((\Omega ,\mathscr {F})\) such that
\(\mathbb V\) can be defined as
We denote this capacity by \(\mathbb V^{\mathscr {P}}\), and set \(\mathcal V^{\mathscr {P}}(A)=1-\mathbb V^{\mathscr {P}}(A^c)\).
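A toy example may help here (an assumption on the omitted display: \(\mathbb V^{\mathscr {P}}(A)=\sup _{P\in \mathscr {P}}P(A)\), the usual choice). The upper capacity is sub-additive but generally not additive, and the upper/lower pair is conjugate.

```python
# Upper and lower capacities induced by a two-element family of
# probability measures on a three-point sample space (toy example).

P1 = {"a": 0.7, "b": 0.2, "c": 0.1}
P2 = {"a": 0.1, "b": 0.2, "c": 0.7}
measures = [P1, P2]

def upper_V(A):  # A: set of outcomes
    return max(sum(P[w] for w in A) for P in measures)

def lower_v(A):
    return min(sum(P[w] for w in A) for P in measures)

A = {"a"}
Ac = {"b", "c"}
assert abs(upper_V(A) + lower_v(Ac) - 1.0) < 1e-12   # conjugacy V + v(complement) = 1
assert upper_V(A) + upper_V(Ac) > 1.0                # non-additivity of the upper capacity
B = {"c"}
assert upper_V(A | B) <= upper_V(A) + upper_V(B)     # sub-additivity
```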
If \(\mathbb V_1\) and \(\mathbb V_2\) are two capacities having the property (1.1), then for any random variable \(X\in \mathscr {H}\),
In fact, let \(f, g\in C_{b,Lip}(\mathbb R)\) be such that \(I\{y\ge x+\epsilon \}\le f(y)\le I\{y\ge x\}\le g(y)\le I\{y\ge x-\epsilon \}\). Then,
It follows from (1.5) that
for all but countably many x. In this paper, the events we consider are mostly of the type \(\{X\ge x\}\) or \(\{X> x\}\), and so the results do not depend on the capacity chosen.
Next, we recall the notions of identical distribution and independence.
Definition 1.1
-
(i)
(Identical distribution) Let \({\varvec{X}}_1\) and \({\varvec{X}}_2\) be two n-dimensional random vectors defined, respectively, in sub-linear expectation spaces \((\Omega _1, \mathscr {H}_1, \widehat{\mathbb E}_1)\) and \((\Omega _2, \mathscr {H}_2, \widehat{\mathbb E}_2)\). They are called identically distributed, denoted by \({\varvec{X}}_1\overset{d}{=} {\varvec{X}}_2\), if
$$\begin{aligned} \widehat{\mathbb E}_1[\varphi ({\varvec{X}}_1)]=\widehat{\mathbb E}_2[\varphi ({\varvec{X}}_2)], \;\; \forall \varphi \in C_{l,Lip}(\mathbb R^n), \end{aligned}$$whenever the sub-expectations are finite. A sequence \(\{X_n;n\ge 1\}\) of random variables (or random vectors) is said to be identically distributed if \(X_i\overset{d}{=} X_1\) for each \(i\ge 1\).
-
(ii)
(Independence) In a sub-linear expectation space \((\Omega , \mathscr {H}, \widehat{\mathbb E})\), a random vector \({\varvec{Y}} = (Y_1, \ldots , Y_n)\), \(Y_i \in \mathscr {H}\), is said to be independent of another random vector \({\varvec{X}} = (X_1, \ldots , X_m)\), \(X_i \in \mathscr {H}\), under \(\widehat{\mathbb E}\) if for each test function \(\varphi \in C_{l,Lip}(\mathbb R^m \times \mathbb R^n)\) we have \( \widehat{\mathbb E}[\varphi ({\varvec{X}}, {\varvec{Y}} )] = \widehat{\mathbb E}\big [\widehat{\mathbb E}[\varphi ({\varvec{x}}, {\varvec{Y}} )]\big |_{{\varvec{x}}={\varvec{X}}}\big ],\) whenever \(\overline{\varphi }({\varvec{x}}):=\widehat{\mathbb E}\left[ |\varphi ({\varvec{x}}, {\varvec{Y}} )|\right] <\infty \) for all \({\varvec{x}}\) and \(\widehat{\mathbb E}\left[ |\overline{\varphi }({\varvec{X}})|\right] <\infty \). Random variables (or random vectors) \(X_1,\ldots , X_n\) are said to be independent if for each \(2\le k\le n\), \(X_k\) is independent of \((X_1,\ldots , X_{k-1})\). A sequence of random variables (or random vectors) is said to be independent if for each n, \(X_1,\ldots , X_n\) are independent.
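Unlike in the classical case, this notion of independence is not symmetric: because of the iterated suprema, "Y independent of X" does not imply "X independent of Y". The following sketch (a hypothetical discrete model, not from the paper) makes this concrete: each variable carries two candidate laws with the same mean but different variances, and the two orders of iteration give different values.

```python
# Order-dependence of independence under a sub-linear expectation.
# Each variable has two candidate laws on {-1, 0, 1}:
# law A = uniform on {-1, +1} (variance 1), law B = point mass at 0
# (variance 0).  Both laws have mean zero.

lawA = {-1: 0.5, 1: 0.5}
lawB = {0: 1.0}
laws = [lawA, lawB]

def E_hat(f):
    """Sub-linear expectation of f over one variable: sup over laws."""
    return max(sum(p * f(v) for v, p in law.items()) for law in laws)

phi = lambda x, y: x * x * y

# "Y independent of X": integrate out Y first, then X.
E_Y_then_X = E_hat(lambda x: E_hat(lambda y: phi(x, y)))
# "X independent of Y": integrate out X first, then Y.
E_X_then_Y = E_hat(lambda y: E_hat(lambda x: phi(x, y)))

print(E_Y_then_X, E_X_then_Y)  # prints: 0.0 0.5 -- the orders disagree
```

In the first order the inner expectation of \(x^2 Y\) vanishes for every fixed x, since both laws of Y have mean zero; in the second order the inner expectation is \(y^+\) (nature picks the large variance when it helps), whose upper expectation is 1/2.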
Finally, we recall the notions of the G-normal distribution and G-Brownian motion introduced by Peng [8, 9]. We denote by \(\mathbb S(d)\) the collection of all \(d\times d\) symmetric matrices. A function \(G:\mathbb S(d) \rightarrow \mathbb R\) is called a sub-linear function monotonic in \(A \in \mathbb S(d)\) if for each \(A, \overline{A}\in \mathbb S(d)\),
Here, \(A\ge \overline{A}\) means that \(A- \overline{A}\) is positive semi-definite. G is continuous if \(|G(A)-G(\overline{A})|\rightarrow 0\) when \(\Vert A-\overline{A}\Vert _{\infty }\rightarrow 0\), where \(\Vert A-\overline{A}\Vert _{\infty }=\max _{i,j}|a_{ij}-\overline{a}_{ij}|\) for \(A=(a_{ij};i,j=1,\ldots ,d)\) and \(\overline{A}=(\overline{a}_{ij};i,j=1,\ldots ,d)\).
Definition 1.2
(G-normal random variable) Let \(G:\mathbb S(d) \rightarrow \mathbb R\) be a continuous sub-linear function monotonic in \(A \in \mathbb S(d)\). A d-dimensional random vector \({\varvec{\xi }}=(\xi _1,\ldots ,\xi _d)\) in a sub-linear expectation space \((\widetilde{\Omega }, \widetilde{\mathscr {H}}, \widetilde{\mathbb E})\) is called a G-normally distributed random vector (written as \({\varvec{\xi }}\sim N\big (0, G\big )\) under \(\widetilde{\mathbb E}\)) if, for any \(\varphi \in C_{l,Lip}(\mathbb R^d)\), the function \(u({\varvec{x}},t)=\widetilde{\mathbb E}\left[ \varphi \left( {\varvec{x}}+\sqrt{t} {\varvec{\xi }}\right) \right] \) (\({\varvec{x}}\in \mathbb R^d, t\ge 0\)) is the unique viscosity solution of the following heat equation:
where \(Du=\big (\partial _{x_i} u, i=1,\ldots ,d\big )\) and \(D^2u=D(Du)=\big (\partial _{x_i,x_j} u\big )_{i,j=1}^d\).
That \({\varvec{\xi }}\) is a G-normally distributed random vector is equivalent to the following: if \({\varvec{\xi }}^{\prime }\) is an independent copy of \({\varvec{\xi }}\), then
and \(G(A)=\widetilde{\mathbb E}\left[ \langle {\varvec{\xi }}A,{\varvec{\xi }}\rangle \right] \) (cf. Definition II.1.4 and Example II.1.13 of Peng [10]), where \(\langle {\varvec{x}},{\varvec{y}}\rangle \) is the scalar product of \({\varvec{x}}\) and \({\varvec{y}}\). When \(d=1\), G can be written as \(G(\alpha )=\alpha ^+\overline{\sigma }^2-\alpha ^-\underline{\sigma }^2\), and we write \(\xi \sim N(0,[\underline{\sigma }^2,\overline{\sigma }^2])\) if \(\xi \) is a G-normally distributed random variable (cf. Peng [11]).
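To make the G-heat equation of Definition 1.2 concrete in dimension one, the following sketch solves it by explicit finite differences (assumptions: the normalization \(\partial _t u=\tfrac{1}{2} G(\partial _{xx}u)\) with \(G(\alpha )=\overline{\sigma }^2\alpha ^+-\underline{\sigma }^2\alpha ^-\); conventions for the factor \(\tfrac{1}{2}\) differ across references). For the convex initial condition \(\varphi (x)=x^2\) the exact solution is \(u(x,t)=x^2+\overline{\sigma }^2 t\), which the scheme should reproduce.

```python
# Explicit finite-difference scheme for the 1-d G-heat equation,
# du/dt = (1/2) * (sbar2 * (u_xx)^+ - slo2 * (u_xx)^-),
# started from phi(x) = x^2.  Since phi is convex, u(0, 1) = sbar2.

sbar2, slo2 = 1.0, 0.25          # upper / lower variances
L, nx = 6.0, 121                 # grid on [-L, L]
dx = 2 * L / (nx - 1)
dt, nsteps = 0.005, 200          # dt * nsteps = 1; CFL-stable for sbar2 = 1
xs = [-L + i * dx for i in range(nx)]
u = [x * x for x in xs]          # initial data phi(x) = x^2

def G(a):
    return sbar2 * max(a, 0.0) - slo2 * max(-a, 0.0)

for step in range(nsteps):
    t = (step + 1) * dt
    new = u[:]
    for i in range(1, nx - 1):
        d2 = (u[i + 1] - 2 * u[i] + u[i - 1]) / (dx * dx)
        new[i] = u[i] + dt * 0.5 * G(d2)
    # boundary values: the known exact solution for convex phi
    new[0] = xs[0] ** 2 + sbar2 * t
    new[-1] = xs[-1] ** 2 + sbar2 * t
    u = new

u0 = u[nx // 2]                  # u(0, 1); should be close to sbar2 = 1.0
```

Replacing \(\varphi \) by the concave \(-x^2\) makes the scheme select the lower variance branch, yielding \(-\underline{\sigma }^2 t\), in line with \(\xi \sim N(0,[\underline{\sigma }^2,\overline{\sigma }^2])\).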
Definition 1.3
(G-Brownian motion) A d-dimensional random process \(({\varvec{W}}_t)_{t\ge 0}\) in the sub-linear expectation space \((\widetilde{\Omega }, \widetilde{\mathscr {H}}, \widetilde{\mathbb E})\) is called a G-Brownian motion if
-
(i)
\({\varvec{W}}_0={\varvec{0}}\);
-
(ii)
For each \(0\le t_1\le \ldots \le t_p\le t\le s\),
$$\begin{aligned}&\widetilde{\mathbb E}\left[ \varphi \big ({\varvec{W}}_{t_1},\ldots , {\varvec{W}}_{t_p}, {\varvec{W}}_s-{\varvec{W}}_t\big )\right] \nonumber \\&= \widetilde{\mathbb E}\left[ \widetilde{\mathbb E}\left[ \varphi \big ({\varvec{x}}_1,\ldots , {\varvec{x}}_p, \sqrt{s-t}\,{\varvec{\xi }}\big )\right] \big |_{{\varvec{x}}_1={\varvec{W}}_{t_1},\ldots , {\varvec{x}}_p={\varvec{W}}_{t_p}}\right] \\&\;\; \forall \varphi \in C_{l,Lip}(\mathbb R^{d\times (p+1)}), \nonumber \end{aligned}$$(1.6)where \({\varvec{\xi }}\sim N(0,G)\).
Let \(C_{[0,\infty )}=C_{[0,\infty )}(\mathbb R^d)\) be the space of continuous \(\mathbb R^d\)-valued functions on \([0,\infty )\) equipped with the metric \(\Vert {\varvec{x}}\Vert =\sum \limits _{i=1}^{\infty }\sup \nolimits _{0\le t\le 2^i}(|{\varvec{x}}(t)|\wedge 1)/2^i\), where \(|{\varvec{y}}|\) is the Euclidean norm of \({\varvec{y}}\). Denote by \(C_b\big (C_{[0,\infty )}\big )\) the set of bounded continuous functions \(h:C_{[0,\infty )}\rightarrow \mathbb R\). As shown in Peng [8, 10] and Denis et al. [2], there is a sub-linear expectation space \(\big (\widetilde{\Omega }, \widetilde{\mathscr {H}},\widetilde{\mathbb E}\big )\) with \(\widetilde{\Omega }= C_{[0,\infty )}\) and \(C_b\big (\widetilde{\Omega }\big )\subset \widetilde{\mathscr {H}}\) such that \(\widetilde{\mathbb E}\) is countably sub-additive, \((\widetilde{\mathscr {H}}, \widetilde{\mathbb E}[\Vert \cdot \Vert ])\) is a Banach space, and the canonical process \(W(t)(\omega ) = \omega _t\) (\(\omega \in \widetilde{\Omega }\)) satisfies (i) and (ii). Further, there exists a weakly compact family of probability measures \(\mathscr {P}\) on \((\widetilde{\Omega }, \mathscr {B}_{\widetilde{\Omega }})\) such that
where \(\mathscr {B}_{\widetilde{\Omega }}\) is the Borel \(\sigma \)-algebra on \(\widetilde{\Omega }\) (cf. Theorem 6.2.5 and Proposition 6.3.2 of Peng [10]). In the sequel, the G-normal random vectors and G-Brownian motions are considered in \((\widetilde{\Omega }, \widetilde{\mathscr {H}}, \widetilde{\mathbb E})\).
2 Functional Central Limit Theorem for Martingale Vectors
On the sub-linear expectation space \((\Omega , \mathscr {H}, \widehat{\mathbb E})\), we write \(\eta _n\overset{d}{\rightarrow }\eta \) if \(\widehat{\mathbb E}\left[ \varphi (\eta _n)\right] \rightarrow \widehat{\mathbb E}\left[ \varphi (\eta )\right] \) holds for all bounded continuous functions \(\varphi \); \(\eta _n\overset{\mathbb V}{\rightarrow }\eta \) if \(\mathbb V\left( |\eta _n-\eta |\ge \epsilon \right) \rightarrow 0\) for any \(\epsilon >0\); \( \eta _n\le \eta +o(1)\) in capacity \(\mathbb V\) if \( (\eta _n-\eta )^+\overset{\mathbb V}{\rightarrow }0\); \(\eta _n\rightarrow \eta \) in \(L_p\) if \(\lim _n \widehat{\mathbb E}[|\eta _n-\eta |^p]=0\); and \( \eta _n\le \eta +o(1)\) in \(L_p\) if \( (\eta _n-\eta )^+\rightarrow 0\) in \(L_p\). We also write \(\xi \le \eta \) in \(L_p\) if \(\widehat{\mathbb E}[((\xi -\eta )^+)^p]=0\); \(\xi = \eta \) in \(L_p\) if \(\widehat{\mathbb E}[|\xi -\eta |^p]=0\); \(X\le Y\) in \(\mathbb V\) if \(\mathbb V\left( X-Y\ge \epsilon \right) =0\) for all \(\epsilon >0\); and \(X= Y\) in \(\mathbb V\) if both \(X\le Y\) and \(Y\le X\) hold in \(\mathbb V\).
We recall the definition of the conditional expectation under the sub-linear expectation. Let \((\Omega , \mathscr {H}, \widehat{\mathbb E})\) be a sub-linear expectation space. Let \(\mathscr {H}_{n,0}\subset \ldots \subset \mathscr {H}_{n,k_n}\) be subspaces of \(\mathscr {H}\) such that
-
(i)
any constant \(c\in \mathscr {H}_{n,k}\), and
-
(ii)
if \(X_1,\ldots ,X_d\in \mathscr {H}_{n,k}\), then \(\varphi (X_1,\ldots ,X_d)\in \mathscr {H}_{n,k}\) for any \(\varphi \in C_{l,Lip}(\mathbb R^d)\), \(k=0,\ldots , k_n\).
Denote \(\mathscr {L}(\mathscr {H})=\{X:\widehat{\mathbb E}[|X|]<\infty , X\in \mathscr {H}\}\). We consider a system of operators in \(\mathscr {L}(\mathscr {H})\),
Suppose that the operators \(\widehat{\mathbb E}_{n,k}\) satisfy the following properties: for all \(X, Y \in \mathscr {L}({\mathscr {H}})\),
-
(a)
\( \widehat{\mathbb E}_{n,k} [ X+Y]=X+\widehat{\mathbb E}_{n,k}[Y]\) in \(L_1\) if \(X\in \mathscr {H}_{n,k}\), and \( \widehat{\mathbb E}_{n,k} [ XY]=X^+\widehat{\mathbb E}_{n,k}[Y]+X^-\widehat{\mathbb E}_{n,k}[-Y]\) in \(L_1\) if \(X\in \mathscr {H}_{n,k}\) and \(XY\in \mathscr {L}({\mathscr {H}})\);
-
(b)
\(\widehat{\mathbb E}\left[ \widehat{\mathbb E}_{n,k} [ X]\right] =\widehat{\mathbb E}[X]\).
Denote \(\widehat{\mathbb E}[X|\mathscr {H}_{n,k}]=\widehat{\mathbb E}_{n,k}[X]\), \(\widehat{\mathcal E}[X|\mathscr {H}_{n,k}]=-\widehat{\mathbb E}_{n,k}[-X]\). \(\widehat{\mathbb E}[X|\mathscr {H}_{n,k}]\) is called the conditional sub-linear expectation of X given \(\mathscr {H}_{n,k}\), and \(\widehat{\mathbb E}_{n,k}\) is called the conditional expectation operator.
For a random vector \({\varvec{X}}=(X_1,\ldots , X_d)\), we denote \(\widehat{\mathbb E}[{\varvec{X}}]=(\widehat{\mathbb E}[X_1],\ldots , \widehat{\mathbb E}[X_d])\) and \(\widehat{\mathbb E}[{\varvec{X}}|\mathscr {H}_{n,k}]=(\widehat{\mathbb E}[X_1|\mathscr {H}_{n,k}],\ldots , \widehat{\mathbb E}[X_d|\mathscr {H}_{n,k}])\). Now, we assume that \(\{{\varvec{Z}}_{n,k}; k=1,\ldots , k_n\}\) is an array of d-dimensional random vectors such that \({\varvec{Z}}_{n,k}\in \mathscr {H}_{n,k}\) and \(\widehat{\mathbb E}[|{\varvec{Z}}_{n,k}|^2]<\infty \), \(k=1,\ldots , k_n\). Let \(D_{[0,1]}=D_{[0,1]}(\mathbb R^d)\) be the space of right-continuous d-dimensional functions with finite left limits, endowed with the Skorohod topology (cf. Billingsley [1]), and let \(\tau _n(t)\) be a non-decreasing function in \(D_{[0,1]}(\mathbb R^1)\) taking integer values with \(\tau _n(0)=0\), \(\tau _n(1)=k_n\). Define \({\varvec{S}}_{n,i}=\sum _{k=1}^i {\varvec{Z}}_{n,k}\),
Then, \({\varvec{W}}_n\) is an element in \(D_{[0,1]}(\mathbb R^d)\). The following is the functional central limit theorem.
Theorem 2.1
Suppose that the operators \(\widehat{\mathbb E}_{n,k}\) satisfy (a) and (b). Assume that the following Lindeberg condition is satisfied:
and
Further, assume that there is a continuous non-decreasing non-random function \(\rho (t)\) and a non-random function \(G:\mathbb S(d)\rightarrow \mathbb R\) for which
Then for any \(0=t_0<\ldots < t_d\le 1\),
and for any bounded continuous function \(\varphi :D_{[0,1]}(\mathbb R^d)\rightarrow \mathbb R\),
where \({\varvec{W}}\) is a G-Brownian motion with \({\varvec{W}}(1) \sim N(0,G)\) under \(\widetilde{\mathbb E}\), and \({\varvec{W}}\circ \rho (t)={\varvec{W}}(\rho (t))\).
The proof of this theorem will be stated in the last section.
Remark 2.2
Let \(G_n(A,t)=\sum _{k\le \tau _n(t)} \widehat{\mathbb E}\left[ \langle {\varvec{Z}}_{n,k} A,{\varvec{Z}}_{n,k} \rangle \big |\mathscr {H}_{n,k-1}\right] \). It is easily seen that \(G_n(A,t):\mathbb S(d) \rightarrow \mathbb R\) is a continuous sub-linear function monotonic in \(A \in \mathbb S(d)\). So, G is a continuous sub-linear function monotonic in \(A \in \mathbb S(d)\). Without loss of generality, we assume \(G(I_{d\times d})=1\) for otherwise we can replace \(\rho (t)\) by \(G(I_{d\times d})\rho (t)\). It is obvious that
It follows that \(|G(A)-G(\overline{A})|\le d\Vert A-\overline{A}\Vert _{\infty }\). Then, it can be verified that (2.4) holds uniformly in A on bounded sets, and G(A) is continuous in \(A\in \mathbb S(d)\).
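The Lipschitz estimate in the remark above comes from the following elementary bound (a sketch; \({\varvec{z}}\in \mathbb R^d\)):

```latex
\begin{aligned}
\bigl|\langle {\varvec{z}}A,{\varvec{z}}\rangle-\langle {\varvec{z}}\overline{A},{\varvec{z}}\rangle\bigr|
 &=\Bigl|\sum_{i,j=1}^{d}(a_{ij}-\overline{a}_{ij})z_iz_j\Bigr|
 \le \Vert A-\overline{A}\Vert_{\infty}\Bigl(\sum_{i=1}^{d}|z_i|\Bigr)^{2}\\
 &\le d\,\Vert A-\overline{A}\Vert_{\infty}\,|{\varvec{z}}|^{2}
 = d\,\Vert A-\overline{A}\Vert_{\infty}\,\langle {\varvec{z}}I_{d\times d},{\varvec{z}}\rangle .
\end{aligned}
```

Applying this bound inside \(G_n(A,t)\), using the monotonicity and sub-additivity of the conditional expectations and the normalization \(G(I_{d\times d})=1\), yields \(|G(A)-G(\overline{A})|\le d\Vert A-\overline{A}\Vert _{\infty }\).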
Remark 2.3
When \(d=1\), (2.4) is equivalent to
The condition (2.7) is assumed in Zhang [16], but (2.8) is replaced there by the following more stringent condition:
As shown in Remark 3.2, (2.7) and (2.8) cannot be weakened further.
3 Applications
3.1 Lindeberg’s CLT for Independent Random Vectors
From Theorem 2.1, we have the following functional central limit theorem for independent random vectors.
Theorem 3.1
Let \(\{{\varvec{Z}}_{n,k};k=1,\ldots , k_n\}\) be an array of independent d-dimensional random vectors, \(n=1,2,\ldots \), and \(\tau _n(t)\) be a non-decreasing function in \(D_{[0,1]}(\mathbb R^1)\) which takes integer values with \(\tau _n(0)=0\), \(\tau _n(1)=k_n\). Denote \( {\varvec{W}}_n(t)=\sum _{k\le \tau _n(t)}{\varvec{Z}}_{n,k}. \) Assume that
and
Further, assume that there is a continuous non-decreasing non-random function \(\rho (t)\) and a non-random function \(G:\mathbb S(d)\rightarrow \mathbb R\) for which
Then for any \(0=t_0<\ldots < t_d\le 1\),
and for any continuous function \(\varphi :D_{[0,1]}(\mathbb R^d)\rightarrow \mathbb R\) with \( |\varphi ({\varvec{x}})|\le C \sup _{t\in [0,1]}|{\varvec{x}}(t)|^2\),
where \({\varvec{W}}\) is a G-Brownian motion on \([0,\infty )\) with \({\varvec{W}}(1) \sim N(0,G)\) under \(\widetilde{\mathbb E}\). Further, when \(p>2\), (3.5) holds for any continuous function \(\varphi :D_{[0,1]}(\mathbb R^d)\rightarrow \mathbb R\) with \( |\varphi ({\varvec{x}})|\le C \sup _{t\in [0,1]}|{\varvec{x}}(t)|^p\) if (3.1) is replaced by the condition that
Proof
For a bounded continuous function \(\varphi \), (3.5) follows from Theorem 2.1, the functional central limit theorem for martingale vectors. For a continuous function \(\varphi :D_{[0,1]}(\mathbb R^d)\rightarrow \mathbb R\) with \( |\varphi ({\varvec{x}})|\le C \sup _{t\in [0,1]}|{\varvec{x}}(t)|^p\), we first note that (3.1) is implied by (3.6) for \(p>2\). Since (3.5) holds for bounded continuous functions \(\varphi \) and
it is sufficient to show that \(\{\max \limits _{i\le k_n}|\sum \nolimits _{k\le i}{\varvec{Z}}_{n,k}|^p, n\ge 1\}\) is uniformly integrable, i.e.,
under the conditions (3.2), (3.3), and (3.1) and/or (3.6). To show (3.7), it is sufficient to consider the one-dimensional case. Let \(Y_{n,k}=(- 1)\vee Z_{n,k}\wedge 1\) and \(\widehat{Y}_{n,k}=Z_{n,k}-Y_{n,k}\). Then, the Lindeberg condition (3.1) implies that
It follows that
by (3.2). Also, it is obvious that
By the Rosenthal-type inequality for independent random variables (cf. Theorem 2.1 of Zhang [14]),
by (3.9) and (3.10). It follows that
For \(\widehat{Y}_{n,k}\), by the Rosenthal-type inequality for independent random variables again, we have
by (3.8) and the condition (3.1) (and (3.6) when \(p>2\)). Hence, (3.7) is proved.\(\square \)
Remark 3.2
When \(d=1\), the condition (3.3) is equivalent to
Suppose that \(\{Z_{n,k};k=1,\ldots , k_n\}\) is an array of independent random variables with \(\widehat{\mathbb E}[Z_{n,k}]=\widehat{\mathcal E}[Z_{n,k}]=0\), \(k=1,\ldots , k_n\), and the Lindeberg condition (3.14) is satisfied. If (3.4) or (3.5) holds, then as shown in the proof of Theorem 3.1,
So, the conditions (3.12) and (3.13) cannot be weakened further.
Corollary 3.3
Let \(\{X_{n,k};k=1,\ldots , k_n\}\) be an array of independent random variables, \(n=1,2,\ldots \). Denote \(\overline{\sigma }_{n,k}^2=\widehat{\mathbb E}[X_{n,k}^2]\), \(\underline{\sigma }_{n,k}^2=\widehat{\mathcal E}[X_{n,k}^2]\) and \(B_n^2=\sum _{k=1}^{k_n} \overline{\sigma }_{n,k}^2\). Suppose that the Lindeberg condition is satisfied:
and further, there is a constant \(r\in [0,1]\) such that
Then for any continuous function \(\varphi \) with \(\ |\varphi (x)|\le C x^2\),
where \(\xi \sim N(0,[r, 1])\) under \(\widetilde{\mathbb E}\).
Proof
Let \(Z_{n,k}=X_{n,k}/B_n\), \(k=1,\ldots , k_n\). It is easily seen that the array \(\{Z_{n,k}; k=1,\ldots , k_n\}\) satisfies (3.1) and (3.2). Denote \(B_{n,0}^2=0\), \(B_{n,k}^2=\sum _{i=1}^k \overline{\sigma }_{n,i}^2\). Define the function \(\tau _n(t)\) by
From the Lindeberg condition (3.14), it is easily verified that
It follows that
and \(\tau _n(t)\rightarrow \infty \) if \(t>0\). By the condition (3.21), we have
So, (3.12) and (3.13) are satisfied with \(\rho (t)=t\). Hence, (3.23) follows from (3.5).
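The time change in the proof above can be sketched in code (an assumption, since the display defining \(\tau _n\) is not reproduced here: a standard choice is \(\tau _n(t)=\max \{k: B_{n,k}^2\le t B_n^2\}\)). The resulting \(\tau _n\) is a non-decreasing, integer-valued step function with \(\tau _n(0)=0\) and \(\tau _n(1)=k_n\).

```python
# Hypothetical construction of the time change tau_n from the
# partial sums of variances B_{n,k}^2 (a standard choice; the
# paper's defining display is omitted in this version).

def make_tau(variances):
    """variances: list of sigma_bar_{n,k}^2, k = 1..k_n."""
    B2 = [0.0]
    for v in variances:
        B2.append(B2[-1] + v)          # B_{n,k}^2 = sum of first k variances
    Bn2 = B2[-1]

    def tau(t):
        return max(k for k in range(len(B2)) if B2[k] <= t * Bn2)

    return tau

tau = make_tau([1.0, 2.0, 1.0])        # partial sums: 0, 1, 3, 4
assert tau(0.0) == 0
assert tau(0.25) == 1                  # B_{n,1}^2 = 1 <= 0.25 * 4
assert tau(0.5) == 1                   # B_{n,2}^2 = 3 >  0.5  * 4
assert tau(1.0) == 3
```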
It is easily seen that (3.15) is implied by the following condition of Zhang [16],
Zhang [16] also showed that the condition (3.18) cannot be weakened to
However, the following theorem shows that if we consider a sequence of independent random variables instead of an array of independent random variables, then the condition (3.15) can be weakened to (3.19).
Theorem 3.4
Let \(\{X_k;k=1,2,\ldots \}\) be a sequence of independent random variables. Denote \(\overline{\sigma }_{k}^2=\widehat{\mathbb E}[X_{k}^2]\), \(\underline{\sigma }_{k}^2=\widehat{\mathcal E}[X_{k}^2]\), \(B_n^2=\sum _{k=1}^{n} \overline{\sigma }_{k}^2\) . Suppose that the Lindeberg condition is satisfied:
and further, there is a constant \(r\in [0,1]\) such that
Then for any continuous function \(\varphi \) with \(\ |\varphi (x)|\le C x^2\),
where \(\xi \sim N(0,[r, 1])\) under \(\widetilde{\mathbb E}\).
Proof
The theorem follows immediately from Corollary 3.3, since (3.21) implies (3.15).\(\square \)
3.2 CLT for i.i.d. Random Vectors
Now, we consider a sequence \(\{{\varvec{X}}_k;k=1,2,\ldots \}\) of independent and identically distributed d-dimensional random vectors, and let \({\varvec{S}}_n=\sum _{k=1}^n {\varvec{X}}_k\).
If we let \({\varvec{Z}}_{n,k}={\varvec{X}}_k/\sqrt{n}\), \(k=1,\ldots , n\), then (3.1) is equivalent to \(\widehat{\mathbb E}[(|{\varvec{X}}_1|^2-c)^+]\rightarrow 0\) as \(c\rightarrow \infty \), (3.2) is equivalent to \(\widehat{\mathbb E}[{\varvec{X}}_1]=\widehat{\mathbb E}[-{\varvec{X}}_1]={\varvec{0}}\), and (3.3) is automatically satisfied with \(G(A)=\widehat{\mathbb E}[\langle {\varvec{X}}_1A,{\varvec{X}}_1\rangle ]\), \(\rho (t)\equiv t\) and \(\tau _n(t)=[nt]/n\). From Theorem 3.1, we obtain Peng’s central limit theorem (cf. Theorem 2.4.4 of Peng (2019)).
Corollary 3.5
Suppose \(\widehat{\mathbb E}[(|{\varvec{X}}_1|^2-c)^+]\rightarrow 0\) as \(c\rightarrow \infty \), \(\widehat{\mathbb E}[{\varvec{X}}_1]=\widehat{\mathbb E}[-{\varvec{X}}_1]={\varvec{0}}\). Let \(G(A)=\widehat{\mathbb E}[\langle {\varvec{X}}_1A,{\varvec{X}}_1\rangle ]\). Then,
where \({\varvec{\xi }}\sim N\left( 0,G\right) \).
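Peng's central limit theorem can be illustrated numerically in dimension one (a hypothetical discrete model, not from the paper): compute \(\widehat{\mathbb E}[\varphi (S_n/\sqrt{n})]\) by backward recursion, letting "nature" pick the step variance \(\sigma ^2\in \{\underline{\sigma }^2,\overline{\sigma }^2\}\) adversarially at every stage, which realizes the sup in the sub-linear expectation.

```python
# Backward recursion over a binary tree with adversarial variance:
#   u_k(x) = max_sigma (u_{k+1}(x + sigma*h) + u_{k+1}(x - sigma*h)) / 2,
# h = 1/sqrt(n).  For phi(x) = x^2 the value is sbar^2; for
# phi(x) = -x^2 it is -slo^2, matching xi ~ N(0, [slo^2, sbar^2]).
import math

def value(phi, n, slo, sbar):
    h = 1.0 / math.sqrt(n)

    def u(k, x):
        if k == n:
            return phi(x)
        return max((u(k + 1, x + s * h) + u(k + 1, x - s * h)) / 2
                   for s in (slo, sbar))

    return u(0, 0.0)

n, slo, sbar = 8, 0.5, 1.0
upper = value(lambda x: x * x, n, slo, sbar)    # E_hat[xi^2]   -> sbar^2 = 1.0
lower = -value(lambda x: -x * x, n, slo, sbar)  # -E_hat[-xi^2] -> slo^2  = 0.25
```

The convex payoff makes nature choose the large variance at every step, and the concave one the small variance, reproducing the interval \([\underline{\sigma }^2,\overline{\sigma }^2]\) of the G-normal limit.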
The next theorem gives the sufficient and necessary conditions of the central limit theorem for independent and identically distributed random vectors. For a random vector \({\varvec{X}}=(X_1,\ldots , X_d)\), we write \({\varvec{X}}^{(c)}=(X_1^{(c)},\ldots , X_d^{(c)})\), where \(X_i^{(c)}=(-c)\vee (X_i\wedge c)\), \(i=1,\ldots ,d\).
Theorem 3.6
Suppose that
-
(i)
\(\lim \limits _{c\rightarrow \infty } \widehat{\mathbb E}[|{\varvec{X}}_1|^2\wedge c]\) is finite;
-
(ii)
\(x^2\mathbb V\left( |{\varvec{X}}_1|\ge x\right) \rightarrow 0\) as \(x\rightarrow \infty \);
-
(iii)
\(\lim \limits _{c\rightarrow \infty }\widehat{\mathbb E}\left[ {\varvec{X}}_1^{(c)}\right] =\lim \limits _{c\rightarrow \infty }\widehat{\mathbb E}\left[ -{\varvec{X}}_1^{(c)}\right] ={\varvec{0}}\);
-
(iv)
The limit
$$\begin{aligned} G(A)=\lim _{c\rightarrow \infty }\widehat{\mathbb E}\left[ \langle {\varvec{X}}_1^{(c)} A,{\varvec{X}}_1^{(c)}\rangle \right] \end{aligned}$$(3.25)exists for each \( A\in \mathbb S(d)\).
Then for any bounded continuous function \(\varphi : D_{[0,1]}(\mathbb R^d)\rightarrow \mathbb R\),
where \({\varvec{W}}\) is a G-Brownian motion with \({\varvec{W}}_1\sim N(0,G)\). In particular, (3.24) holds with \({\varvec{\xi }}\sim N\left( 0,G\right) \).
Conversely, if (3.24) holds for any \(\varphi \in C_{b,Lip}(\mathbb R^d)\) and a random vector \({\varvec{\xi }}\) with \(x^2\widetilde{\mathbb V}\left( |{\varvec{\xi }}|\ge x\right) \rightarrow 0\) as \(x\rightarrow \infty \), then (i)-(iv) hold.
Remark 3.7
If \(\widehat{\mathbb E}[(|{\varvec{X}}_1|^2-c)^+]\rightarrow 0\) as \(c\rightarrow \infty \), then (i), (ii) and (iv) are satisfied, \(G(A)=\widehat{\mathbb E}\left[ \langle {\varvec{X}}_1 A,{\varvec{X}}_1\rangle \right] \), and (iii) is equivalent to \(\widehat{\mathbb E}[{\varvec{X}}_1]=\widehat{\mathbb E}[-{\varvec{X}}_1]=0\). Also, if \(C_{\mathbb V}(|{\varvec{X}}_1|^2)<\infty \), then (i), (ii) and (iv) are satisfied.
For the one-dimensional case \(d=1\), (iv) is equivalent to the finiteness of \(\lim \limits _{c\rightarrow \infty }\widehat{\mathbb E}[X_1^2\wedge c]\) and \(\lim \limits _{c\rightarrow \infty }\widehat{\mathcal E}[X_1^2\wedge c]\), which is implied by (i). In general, we do not know whether (iv) can be derived from (i)-(iii).
Proof
When \(d=1\), this theorem was proved by Zhang [15] (cf. Theorem 4.2), where it is shown that \(\lim _{c\rightarrow \infty }\widehat{\mathbb E}\left[ {\varvec{X}}_1^{(c)}\right] \) and \(\lim _{c\rightarrow \infty }\widehat{\mathbb E}\left[ -{\varvec{X}}_1^{(c)}\right] \) exist and are finite under condition (i). Note
It is easily seen that if the limit in (3.25) exists, then it is finite and G(A) is a continuous sub-linear function monotonic in \(A \in \mathbb S(d)\). We first prove the direct part. Let \({\varvec{Y}}_{n,k}= \frac{1}{\sqrt{n}}{\varvec{X}}_k^{(\sqrt{n})}\). By (i)-(iii) we have that
as \(n\rightarrow \infty \) and then \(\epsilon \rightarrow 0\), and
as \(n\rightarrow \infty \) and then \(x\rightarrow \infty \). Further, by (iv),
Denote \({\varvec{W}}_n(t)= \sum _{k=1}^{[nt]} {\varvec{Y}}_{n,k}\). By Theorem 3.1, for any bounded continuous function \(\varphi : D_{[0,1]}(\mathbb R^d)\rightarrow \mathbb R\),
Note
(3.26) is proved.
Now, suppose that (3.24) holds. By (3.24), for each element \(X_{1,i}\) of \({\varvec{X}}_1=(X_{1,1},\ldots , X_{1,d})\), \(i=1,\ldots , d\), we have
By Theorem 4.2 of Zhang [15], \(\lim \limits _{c\rightarrow \infty } \widehat{\mathbb E}[X_{1,i}^2\wedge c]\) is finite, \(x^2\mathbb V\left( |X_{1,i}|\ge x\right) \rightarrow 0\) as \(x\rightarrow \infty \), and \(\lim \limits _{c\rightarrow \infty }\widehat{\mathbb E}\big [X_{1,i}^{(c)}\big ]=\lim \limits _{c\rightarrow \infty }\widehat{\mathbb E}\big [-X_{1,i}^{(c)}\big ]=0\). So, (i)-(iii) are proved.
Finally, we show (iv). Let \({\varvec{Y}}_{n,k}\) be defined as above. Then, (3.27)–(3.29) remain true. Let \({\varvec{T}}_{n,m}=\sum _{k=1}^m {\varvec{Y}}_{n,k}\), \(1\le m\le n\), and \({\varvec{T}}_n={\varvec{T}}_{n,n}\). Then, similarly to (3.11), we have
Hence,
On the other hand, by (3.24) and (3.31),
Choosing \(\varphi ({\varvec{x}})=|{\varvec{x}}|^p\wedge c\) yields
Hence,
Let \(G_{\xi }^{(c)}(A)=\widetilde{\mathbb E}\left[ \langle {\varvec{\xi }}^{(c)}A,{\varvec{\xi }}^{(c)}\rangle \right] \). Note, for \(a>b\),
It follows that
by (3.34). It follows that
Now, choosing \(\varphi ({\varvec{x}})=\langle {\varvec{x}}^{(c)} A,{\varvec{x}}^{(c)}\rangle \) in (3.33) yields
Note that \(|\langle {\varvec{T}}_n A,{\varvec{T}}_n\rangle - \langle {\varvec{T}}_n^{(c)} A,{\varvec{T}}_n^{(c)}\rangle |\le 2|A|\cdot |{\varvec{T}}_n|^2I\{|{\varvec{T}}_n|>c\}\), and \(\{|{\varvec{T}}_n|^2, n\ge 1\}\) is uniformly integrable by (3.32). Letting \(c\rightarrow \infty \) in the above equation yields
On the other hand, note
Since
we have
It follows that
where the inequality is due to the independence of \(\langle {\varvec{Y}}_{n,k} A,{\varvec{Y}}_{n,k}\rangle \), \(k=1,\ldots , n\). We conclude that
Similar to (3.35), for \(\sqrt{n}\le b\le a\le \sqrt{n+1}\) we have
by (i) and (iii). Hence,
(iv) is now proved. \(\square \)
3.3 Lévy’s Characterization of G-Brownian Motion
Finally, we give a Lévy characterization of a multi-dimensional G-Brownian motion as an application of Theorem 2.1. Let \(\{\mathscr {H}_t; t\ge 0\}\) be a non-decreasing family of subspaces of \(\mathscr {H}\) such that (1) any constant \(c\in \mathscr {H}_t\), and (2) \(\varphi (X_1,\ldots ,X_d)\in \mathscr {H}_t\) whenever \(X_1,\ldots ,X_d\in \mathscr {H}_t\) and \(\varphi \in C_{l,Lip}\). We consider a system of operators on \(\mathscr {L}(\mathscr {H})=\{X\in \mathscr {H}; \widehat{\mathbb E}[|X|]<\infty \}\),
and denote \(\widehat{\mathbb E}[X|\mathscr {H}_t]=\widehat{\mathbb E}_t[X]\), \(\widehat{\mathcal E}[X|\mathscr {H}_t]=-\widehat{\mathbb E}_t[-X]\). Suppose that the operators \(\widehat{\mathbb E}_t\) satisfy the following properties: for all \(X, Y \in \mathscr {L}({\mathscr {H}})\),
-
(i)
\( \widehat{\mathbb E}_t [ X+Y]=X+\widehat{\mathbb E}_t[Y]\) in \(L_1\) if \(X\in \mathscr {H}_t\), and \( \widehat{\mathbb E}_t [ XY]=X^+\widehat{\mathbb E}_t[Y]+X^-\widehat{\mathbb E}_t[-Y]\) in \(L_1\) if \(X\in \mathscr {H}_t\) and \(XY\in \mathscr {L}({\mathscr {H}})\);
-
(ii)
\(\widehat{\mathbb E}\left[ \widehat{\mathbb E}_t [ X]\right] =\widehat{\mathbb E}[X]\).
For a random vector \({\varvec{X}}=(X_1,\ldots , X_d)\), we denote \(\widehat{\mathbb E}_t[{\varvec{X}}]=\big (\widehat{\mathbb E}_t[X_1],\ldots , \widehat{\mathbb E}_t[X_d]\big )\).
Definition 3.8
A d-dimensional process \({\varvec{M}}_t\) is called a martingale if \({\varvec{M}}_t\in \mathscr {L}(\mathscr {H}_t)\) and
Denote
The Lévy characterization of a one-dimensional G-Brownian motion under the G-expectation on a Wiener space was established by Xu and Zhang [12, 13], Gao et al. [3], Lin [6] and Hu and Li [4] by methods of stochastic calculus. The following theorem gives a Lévy characterization of a d-dimensional G-Brownian motion.
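In the classical (linear) special case, condition (II) below reduces to the familiar requirement that \(\langle {\varvec{M}}_t A, {\varvec{M}}_t\rangle -\mathrm{tr}(A)t\) be a martingale for standard Brownian motion. As an elementary sanity check at the level of expectations, the following exactly enumerates a two-dimensional symmetric \(\pm 1\) random walk \(S_n\) (a discrete-time martingale) and verifies \(E[\langle S_n A, S_n\rangle ]=\mathrm{tr}(A)\,n\).

```python
# Exact enumeration: for a 2-d walk with independent +-1 coordinate
# steps, E[<S_n A, S_n>] = tr(A) * n for any symmetric A (linear case).
from itertools import product

def quad(A, s):
    """Quadratic form <s A, s> for symmetric 2x2 A."""
    return (A[0][0] * s[0] * s[0] + 2 * A[0][1] * s[0] * s[1]
            + A[1][1] * s[1] * s[1])

A = [[2.0, 0.5], [0.5, -1.0]]     # symmetric test matrix, tr(A) = 1
n = 5
total, count = 0.0, 0
for steps in product([(1, 1), (1, -1), (-1, 1), (-1, -1)], repeat=n):
    s = [sum(st[0] for st in steps), sum(st[1] for st in steps)]
    total += quad(A, s)
    count += 1
mean = total / count              # should equal tr(A) * n = 5.0
```

Under a sub-linear expectation the analogous computation would involve suprema over variance choices, which is exactly where the sub-linear function G(A) of condition (II) enters.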
Theorem 3.9
Let \({\varvec{M}}_t\) be a d-dimensional random process in \((\Omega ,\mathscr {H},\mathscr {H}_t, \widehat{\mathbb E})\) with \({\varvec{M}}_0={\varvec{0}}\),
Suppose that \({\varvec{M}}_t\) satisfies
-
(I)
Both \({\varvec{M}}_t\) and \(-{\varvec{M}}_t\) are martingales;
-
(II)
There is a function \(G:\mathbb S(d) \rightarrow \mathbb R\) such that \(\langle {\varvec{M}}_t A, {\varvec{M}}_t\rangle -G(A)t\) is a real martingale for each \(A \in \mathbb S(d)\);
-
(III)
For any \(T>0\), \(\lim _{\delta \rightarrow 0}W_T({\varvec{M}},\delta )=0\).
Then, G(A) is continuous and monotonic in \(A \in \mathbb S(d)\), and \({\varvec{M}}_t\) satisfies Property (ii) as in Definition 1.3 with \({\varvec{M}}_1\sim N(0,G)\).
Proof
By (II), \(G(A)t =\widehat{\mathbb E}[\langle {\varvec{M}}_t A, {\varvec{M}}_t\rangle ]\). So, G(A) is monotonic in \(A \in \mathbb S(d)\). By the same argument as in Remark 2.2, \(|G(A)-G(\overline{A})|\le d\Vert A-\overline{A}\Vert _{\infty } G(I)\). So G(A) is continuous in \(A \in \mathbb S(d)\). Note that \(\widehat{\mathbb E}[\langle ({\varvec{M}}_t-{\varvec{M}}_s) A, {\varvec{M}}_t-{\varvec{M}}_s\rangle |\mathscr {H}_s]=G(A)(t-s)\) (\(0<s<t\)) by (I) and (II). In particular, \(\widehat{\mathbb E}[(M_{t,k}-M_{s,k})^2|\mathscr {H}_s]=\sigma _{k}^2(t-s)\) (\(0<s<t\)) for some \(\sigma _k\ge 0\). By Lemma 5.7 of Zhang [16], we have for each \(k=1,\ldots , d\),
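For the reader's convenience, the continuity estimate for G can be sketched as follows (a reconstruction, assuming \(\langle \cdot ,\cdot \rangle \) is the Euclidean inner product, so that \(|\langle {\varvec{x}}B,{\varvec{x}}\rangle |\le d\Vert B\Vert _{\infty }|{\varvec{x}}|^2\)):

```latex
\begin{aligned}
|G(A)-G(\overline{A})|
 &=\big|\widehat{\mathbb E}[\langle {\varvec{M}}_1 A,{\varvec{M}}_1\rangle]
       -\widehat{\mathbb E}[\langle {\varvec{M}}_1 \overline{A},{\varvec{M}}_1\rangle]\big|
  \le \widehat{\mathbb E}\big[\big|\langle {\varvec{M}}_1 (A-\overline{A}),{\varvec{M}}_1\rangle\big|\big]\\
 &\le d\Vert A-\overline{A}\Vert_{\infty}\,\widehat{\mathbb E}\big[|{\varvec{M}}_1|^2\big]
  = d\Vert A-\overline{A}\Vert_{\infty}\,G(I),
\end{aligned}
```

where the first inequality uses the sub-additivity of \(\widehat{\mathbb E}\) and the last equality uses \(G(I)=\widehat{\mathbb E}[\langle {\varvec{M}}_1 I,{\varvec{M}}_1\rangle ]\).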
For Property (ii) in Definition 1.3, it is sufficient to show that for any \(0<t_1<\cdots <t_p\) and \(\varphi \in C_{b,Lip}(\mathbb R^{d\times p})\),
Without loss of generality, we assume \(0<t_1<\cdots <t_p\le 1\). Let
and \(\tau _n(t)=[t2^n]\). Then, \(\widehat{\mathbb E}[{\varvec{Z}}_{n,k}|\mathscr {H}_{n,k-1}]=\widehat{\mathbb E}[-{\varvec{Z}}_{n,k}|\mathscr {H}_{n,k-1}]=0\),
Hence, the sequence \(\{{\varvec{Z}}_{n,k}, \mathscr {H}_{n,k}\}\) satisfies the conditions (2.2)–(2.4) with \(\rho (t)=t\). Let \({\varvec{W}}_n(\cdot )\) be defined as in (2.1). By Theorem 2.1, \( ({\varvec{W}}_n(t_1),\ldots ,{\varvec{W}}_n(t_p))\overset{d}{\rightarrow }({\varvec{W}}_{t_1},\ldots , {\varvec{W}}_{t_p}). \) On the other hand,
So, (3.37) holds for all \(\varphi \in C_{b,Lip}(\mathbb R^{d\times p})\). The proof is now complete. \(\square \)
4 Proofs
For the capacity and sub-linear expectation, we have the following lemma.
Lemma 4.1
We have
-
(1)
if \(X\le Y\) in \(L_p\), then \(X\le Y\) in \(\mathbb V\);
-
(2)
if \(X\le Y\) in \(\mathbb V\) and \(\widehat{\mathbb E}[((X-Y)^+)^p]<\infty \), then \(X\le Y\) in \(L_q\) for \(0<q<p\);
-
(3)
if \(X\le Y\) in \(\mathbb V\), \(f(x)\) is a non-decreasing continuous function and \(\mathbb V(|Y|\ge M)\rightarrow 0\) as \(M\rightarrow \infty \), then \(f(X)\le f(Y)\) in \(\mathbb V\);
-
(4)
if \(p\ge 1\), \(X,Y\ge 0\) in \(L_p\), \(X\le Y\) in \(L_p\), then \(\widehat{\mathbb E}[X^p]\le \widehat{\mathbb E}[Y^p]\);
-
(5)
if \(\widehat{\mathbb E}\) is countably additive, then \(X\le Y\) in \(\mathbb V\) is equivalent to \(X\le Y\) in \(L_p\) for any \(p>0\);
-
(6)
if \(X_n\rightarrow 0\) in \(L_p\), then \(X_n\rightarrow 0\) in \(\mathbb V\) and in \(L_q\) for \(0<q<p\);
-
(7)
if \(X_n\rightarrow 0\) in \(\mathbb V\) and \(\widehat{\mathbb E}[|X_n|^p]\le C<\infty \), then \(X_n\rightarrow 0\) in \(L_q\) for \(0<q<p\).
Properties (1)–(5) are proved in Zhang [16]. For (6), note that \( |X_n|^q\le \epsilon ^q+\epsilon ^{q-p}|X_n|^p \) for any \(\epsilon >0\) and \(0<q<p\), and hence \(\widehat{\mathbb E}[|X_n|^q]\le \epsilon ^q+\epsilon ^{q-p}\widehat{\mathbb E}[|X_n|^p]\); (6) follows. For (7), note that
the result follows.
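A sketch of the computation behind (7), assuming the Hölder inequality under sub-linear expectations (as in Zhang [16]): for \(0<q<p\) and any \(\epsilon >0\),

```latex
\widehat{\mathbb E}[|X_n|^q]
 \le \epsilon^q+\widehat{\mathbb E}\big[|X_n|^q I\{|X_n|>\epsilon\}\big]
 \le \epsilon^q+\big(\widehat{\mathbb E}[|X_n|^p]\big)^{q/p}
      \big(\mathbb V(|X_n|>\epsilon)\big)^{(p-q)/p}
 \le \epsilon^q+C^{q/p}\big(\mathbb V(|X_n|>\epsilon)\big)^{(p-q)/p}.
```

Since \(X_n\rightarrow 0\) in \(\mathbb V\), the right-hand side tends to \(\epsilon ^q\) as \(n\rightarrow \infty \), and letting \(\epsilon \rightarrow 0\) yields \(X_n\rightarrow 0\) in \(L_q\).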
The following lemma gives the properties of the conditional expectation operators \(\widehat{\mathbb E}_{n,k}\).
Lemma 4.2
[16] For any \(X,Y\in \mathscr {L}(\mathscr {H})\), we have
-
(a)
\(\widehat{\mathbb E}_{n,k} [c] = c\) in \(L_1\), \(\widehat{\mathbb E}_{n,k} [\lambda X] = \lambda \widehat{\mathbb E}_{n,k} [X]\) in \(L_1\) if \(\lambda \ge 0\);
-
(b)
\(\widehat{\mathbb E}_{n,k}[X]\le \widehat{\mathbb E}_{n,k}[Y]\) in \(L_1\) if \(X\le Y\) in \(L_1\);
-
(c)
\(\widehat{\mathbb E}_{n,k}[X]-\widehat{\mathbb E}_{n,k}[Y]\le \widehat{\mathbb E}_{n,k}[X-Y]\) in \(L_1\);
-
(d)
\(\widehat{\mathbb E}_{n,k}\left[ \widehat{\mathbb E}_{n,l} [ X]\right] =\widehat{\mathbb E}_{n,l\wedge k} [ X]\) in \(L_1\);
-
(e)
if \(|X|\le M\) in \(L_p\) for all \(p\ge 1\), then \( \big |\widehat{\mathbb E}_{n,k}[X]\big | \le M\) in \(L_p\) for all \(p\ge 1\).
To prove functional central limit theorems, we need the following Rosenthal-type inequalities which can be proved by the same argument as in Theorem 4.1 of Zhang [16].
Lemma 4.3
Suppose that \(\{X_{n,i}\}\) is a set of bounded random variables with \(X_{n,k}\in \mathscr {H}_{n,k}\). Set \(S_0=0\), \(S_k=\sum _{i=1}^k X_{n,i}\). Then,
when \(\widehat{\mathbb E}[X_{n,k}|\mathscr {H}_{n,k-1}]\le 0\) in \(L_1\), \(k=1,\ldots , k_n\). In general, for \(p\ge 2\), there is a constant \(C_p\) such that
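The displayed bounds take the Rosenthal shape familiar from classical martingale theory. For orientation, a sketch of a plausible form (the exact constants and terms are as in Zhang [16, Theorem 4.1], which we do not reproduce):

```latex
\widehat{\mathbb E}\Big[\max_{k\le k_n}|S_k|^p\Big]
 \le C_p\bigg\{\widehat{\mathbb E}\Big[\Big(\sum_{k=1}^{k_n}
        \widehat{\mathbb E}\big[X_{n,k}^2\big|\mathscr H_{n,k-1}\big]\Big)^{p/2}\Big]
   +\sum_{k=1}^{k_n}\widehat{\mathbb E}\big[|X_{n,k}|^p\big]
   +\widehat{\mathbb E}\Big[\Big(\sum_{k=1}^{k_n}
        \big|\widehat{\mathbb E}[X_{n,k}|\mathscr H_{n,k-1}]\big|\Big)^{p}\Big]\bigg\},
   \qquad p\ge 2.
```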
The following lemma will be used in the proof of the convergence of finite-dimensional distribution (2.5).
Lemma 4.4
[16] Suppose that the operators \(\widehat{\mathbb E}_{n,k}\) satisfy (a) and (b), \({\varvec{X}}_n\in \mathscr {H}_{n,k_n^{\prime }}\subset \mathscr {H}\) is a \(d_1\)-dimensional random vector, and \({\varvec{Y}}_n\in \mathscr {H}\) is a \(d_2\)-dimensional random vector. Write \(\mathscr {H}_{n}=\mathscr {H}_{n,k_n^{\prime }}\). Assume that \({\varvec{X}}_n\overset{d}{\rightarrow }{\varvec{X}}\), and for any bounded Lipschitz function \(\varphi ({\varvec{x}},{\varvec{y}}):\mathbb R^{d_1}\times \mathbb R^{d_2}\rightarrow \mathbb R\),
where \({\varvec{X}}\), \({\varvec{Y}}\) are two random vectors in a sub-linear expectation space \((\Omega , \mathscr {H}, \widetilde{\mathbb E})\) with \(\widetilde{\mathbb V}(\Vert {\varvec{X}}\Vert >\lambda )\rightarrow 0\) and \(\widetilde{\mathbb V}(\Vert {\varvec{Y}}\Vert >\lambda )\rightarrow 0\) as \(\lambda \rightarrow \infty \). Then
where \(\widetilde{{\varvec{Y}}}\) is independent of \(\widetilde{{\varvec{X}}}\), \(\widetilde{{\varvec{X}}}\overset{d}{=} {\varvec{X}}\) and \(\widetilde{{\varvec{Y}}}\overset{d}{=} {\varvec{Y}}\).
Proof of Theorem 2.1
Without loss of generality, we assume that \(|{\varvec{Z}}_{n,k}|\le \epsilon _n\), \(k=1,\ldots ,k_n\), with a sequence \(0<\epsilon _n\rightarrow 0\), \(\delta _{k_n}=\sum _{k=1}^{k_n}\widehat{\mathbb E}[|{\varvec{Z}}_{n,k}|^2|\mathscr {H}_{n,k-1}]\le 2\rho (1) \) in \(L_1\), and \(\chi _{k_n}:=\sum _{k=1}^{k_n}\left\{ |\widehat{\mathbb E}[ {\varvec{Z}}_{n,k} |\mathscr {H}_{n,k-1}]|+|\widehat{\mathcal E}[{\varvec{Z}}_{n,k} |\mathscr {H}_{n,k-1}]|\right\} <1\) in \(L_1\) (cf. the same arguments at the beginning of the proofs of Theorems 3.1 and 3.2 of Zhang [16]). Under these assumptions, property (g) of the conditional expectation implies that all random variables considered above are bounded in \(L_p\) for all \(p>0\), and then the convergences in (2.3) and (2.4) all hold in \(L_p\) for any \(p>0\), by Lemma 4.1.
We first show that for any \(r\ge 2\), there is a positive constant \(C_r>0\) such that
for any \(0<s<t\) and \(p>0\). Further, (4.7) holds uniformly in \(A \in \mathbb S(d)\) with \(|A|\le c\).
For (4.3)–(4.6), it is sufficient to verify the one-dimensional case. For (4.3), by Lemma 4.3,
Note that the random variable \(\max \limits _{\tau _n(s)\le k \le \tau _n(t)} |S_{n,k}-S_{n,\tau _n(s)}|\) is bounded (by \((\tau _n(t)-\tau _n(s))\epsilon _n\)). By property (g) of \(\widehat{\mathbb E}_{n,k}\), \(\widehat{\mathbb E}\Big [\max \limits _{\tau _n(s)\le k \le \tau _n(t)} |S_{n,k}-S_{n,\tau _n(s)}|^r\big |\mathscr {H}_{n,\tau _n(s)}\Big ]\) is bounded in \(L_p\) for any \(p>0\). Hence, by (1) and (2) of Lemma 4.1, (4.3) is proved. By this inequality and Lemma 4.1, it is sufficient to consider the case \(p=1\) for (4.4)–(4.7).
It is easily shown that
Then,
It follows that
which implies (4.5) and (4.6).
For (4.7), we first note that
for any \(p>0\), by condition (2.4). Without loss of generality, we assume \(s=0\), \(t=1\). Note
And then
similar to (4.9). It follows that
Taking the sub-linear expectation yields
by (4.3) and the fact that \(\chi _{k_n}\rightarrow 0\) in \(L_p\). By noting (4.10), we have
(4.7) is proved. By the same argument as in Remark 2.2, (4.7) holds uniformly in \(A\in \mathbb S(d)\) with \(|A|\le c\).
For (4.4), it is easily seen that the first and the third terms in (4.8) converge to 0 in \(L_1\), and the second term converges to \(\big (\rho (t)-\rho (s)\big )^{r/2}\) by (4.10). Hence, (4.4) is proved. The proof of (4.3)–(4.7) is completed.
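The tightness argument that follows rests on a standard Billingsley-type modulus estimate (cf. [1]). A sketch: with \(\omega _{\delta }({\varvec{x}})=\sup _{|t-s|<\delta ,\,t,s\in [0,1]}|{\varvec{x}}(t)-{\varvec{x}}(s)|\) and a grid \(0=t_0<t_1<\cdots <t_K=1\) of mesh \(\delta \),

```latex
\omega_{\delta}({\varvec{W}}_n)
 \le 3\max_{1\le k\le K}\ \sup_{t_{k-1}\le t\le t_k}
     \big|{\varvec{W}}_n(t)-{\varvec{W}}_n(t_{k-1})\big|,
```

so that the capacity of \(\{\omega _{\delta }({\varvec{W}}_n)\ge 3\epsilon \}\) is controlled by the increments of \({\varvec{W}}_n\) over intervals of length \(\delta \), which in turn are controlled by (4.4).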
Now, let \(\omega _{\delta }({\varvec{x}})=\sup _{|t-s|<\delta ,t,s\in [0,1]}|{\varvec{x}}(t)-{\varvec{x}}(s)|\). Assume \(0<\delta <1/10\). Let \(0=t_0<t_1<\cdots <t_K=1\) be such that \(t_k-t_{k-1}=\delta \), and let \(t_{K+1}=t_{K+2}=1\). For any \(\epsilon >0\), it is easily seen that
by (4.4). It follows that for any \(\epsilon >0\),
Hence, the sequence \(\{{\varvec{W}}_n(\cdot ); n\ge 1\}\) is tight, and so, for (2.6) it is sufficient to show (2.5) (cf. [1, 16]). Note that (2.5) is equivalent to
By Lemma 4.4 and the induction, it is sufficient to show that for any \(0\le s<t\le 1\) and a bounded Lipschitz function \(\varphi ({\varvec{u}}, {\varvec{x}})\),
For showing (4.12), without loss of generality we assume \(s=0\) and \(t=1\), \(|\varphi ({\varvec{u}},{\varvec{x}})-\varphi ({\varvec{u}},{\varvec{y}})|\le |{\varvec{x}}-{\varvec{y}}|\), \(|\varphi ({\varvec{u}},{\varvec{x}})|\le 1\). Let \(V(t, {\varvec{x}})=V^{{\varvec{u}}} (t, {\varvec{x}})\) be the unique viscosity solution of the following equation,
where \(\varrho =\rho (1)-\rho (0)\). Without loss of generality, we assume that there is a constant \(\epsilon >0\) such that
for otherwise we can add a random vector \(\epsilon \cdot \widehat{\mathbb E}[|{\varvec{Z}}_{n,k}|^2\big |\mathscr {H}_{n,k-1}]{\varvec{\xi }}_{n,k}\) to \({\varvec{Z}}_{n,k}\), where \({\varvec{\xi }}_{n,k}\) has a d-dimensional standard normal \(N(0, I_{d\times d})\) distribution and is independent of \({\varvec{Z}}_{n,1},\ldots , {\varvec{Z}}_{n,k}\), \({\varvec{\xi }}_{n,1},\ldots ,{\varvec{\xi }}_{n,k-1}\). Under (4.13), by the interior regularity of \(V^{{\varvec{u}}}\) (cf. Theorem C.4.5 of Peng [10]),
According to the definition of G-normal distribution, we have \(V(t,{\varvec{x}})=V^{{\varvec{u}}}(t,{\varvec{x}})=\widetilde{\mathbb E}\big [\varphi ({\varvec{u}}, {\varvec{x}}+\sqrt{\varrho +h-t}{\varvec{\xi }})\big ]\), where \({\varvec{\xi }}\sim N(0,G)\) under \(\widetilde{\mathbb E}\). In particular,
By (4.15), \(|V(\varrho +h,{\varvec{x}})-V(\varrho ,{\varvec{x}})|\le \sqrt{h}\,\widetilde{\mathbb E}[|{\varvec{\xi }}|]\) and \(|V(h,{\varvec{0}})-V(0,{\varvec{0}})|\le \sqrt{h}\,\widetilde{\mathbb E}[|{\varvec{\xi }}|]\). So, for (4.12) it is sufficient to show that
By (4.15) again and (4.14), for all \((t,{\varvec{x}})\in [0, \varrho +h/2] \times \mathbb R^d\),
For an integer m large enough, we define \(t_i=i/m\), \({\varvec{Y}}_{n,i}={\varvec{S}}_{n,\tau _n(t_i)}-{\varvec{S}}_{n,\tau _n(t_{i-1})}\), \(\widetilde{\delta }_i=\rho (t_i)\), \({\varvec{T}}_i=\sum _{j=1}^i {\varvec{Y}}_{n,j}\), \(i=1,\ldots , m\). Applying Taylor's expansion yields
and
where \(\gamma \) and \(\beta \) are between 0 and 1.
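The expansion referred to above has the standard second-order Taylor shape; a sketch (with \(\gamma ,\beta \in (0,1)\) as stated, and the error terms collected into \(J_{n,1}^i\), \(J_{n,2}^i\), \(J_{n,3}^i\) as in the estimates below):

```latex
\begin{aligned}
V(\widetilde{\delta}_i,{\varvec{T}}_i)-V(\widetilde{\delta}_{i-1},{\varvec{T}}_{i-1})
&=\partial_t V\big(\widetilde{\delta}_{i-1}+\gamma(\widetilde{\delta}_i-\widetilde{\delta}_{i-1}),
     {\varvec{T}}_i\big)\,(\widetilde{\delta}_i-\widetilde{\delta}_{i-1})
 +\big\langle D_x V(\widetilde{\delta}_{i-1},{\varvec{T}}_{i-1}),\,{\varvec{Y}}_{n,i}\big\rangle\\
&\quad+\tfrac{1}{2}\big\langle D_x^2 V\big(\widetilde{\delta}_{i-1},
     {\varvec{T}}_{i-1}+\beta{\varvec{Y}}_{n,i}\big)\,{\varvec{Y}}_{n,i},\,{\varvec{Y}}_{n,i}\big\rangle.
\end{aligned}
```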
By (4.14), it is easily seen that
by (4.4), where C is a positive constant which does not depend on the \(t_i\)'s.
For \(J_{n,1}^i\), note
It follows that
For \(J_{n,2}^i\), we have
by (4.5) and (4.6). Similarly, \(\widehat{\mathbb E}[-J_{n,2}^i|\mathscr {H}_{n,0}]\le o(1)\) in \(L_1\).
For \(J_{n,3}^i\), we have
by (4.3) and (4.7), where C is a positive constant which does not depend on the \(t_i\)'s.
Combining the above arguments yields
The proof of (4.16) is completed by letting \(m\rightarrow \infty \).
References
Billingsley, P.: Convergence of Probability Measures. Wiley, New York (1968)
Denis, L., Hu, M., Peng, S.: Function spaces and capacity related to a sublinear expectation: application to G-Brownian motion paths. Potential Anal. 34, 139–161 (2011)
Guo, X., Pan, C., Peng, S.: Martingale problem under nonlinear expectations. Math. Financ. Econ. 12, 135–164 (2018)
Hu, M.S., Li, X.J., Liu, G.M.: Lévy's martingale characterization and reflection principle of G-Brownian motion (preprint) (2018). arXiv:1805.11370v1
Krylov, N.V.: On Shige Peng’s central limit theorem. Stoch. Process. Appl. 130(3), 1426–1434 (2020)
Lin, Q.: General martingale characterization of G-Brownian motion. Stoch. Anal. Appl. 31, 1024–1048 (2013)
Peng, S.: G-Brownian motion and dynamic risk measure under volatility uncertainty (preprint) (2007). arXiv:0711.2834v1
Peng, S.: Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation. Stoch. Process. Appl. 118, 2223–2253 (2008)
Peng, S.: A new central limit theorem under sublinear expectations (preprint) (2008b). arXiv:0803.2656v1
Peng, S.: Nonlinear expectations and stochastic calculus under uncertainty: with robust CLT and G-Brownian motion. Probability Theory and Stochastic Modelling, vol. 95. Springer (2019). https://doi.org/10.1007/978-3-662-59903-7
Peng, S.: Law of large numbers and central limit theorem under nonlinear expectations. Probab. Uncertain. Quant. Risk 4, 1–8 (2019). https://doi.org/10.1186/s41546-019-0038-2
Xu, J., Zhang, B.: Martingale characterization of G-Brownian motion. Stoch. Process. Appl. 119, 232–248 (2009)
Xu, J., Zhang, B.: Martingale property and capacity under G-Framework. Elect. J. Probab. 15, 2041–2068 (2010)
Zhang, L.-X.: Rosenthal’s inequalities for independent and negatively dependent random variables under sub-linear expectations with applications. Sci. China Math. 59(4), 751–768 (2016)
Zhang, L.-X.: The convergence of the sums of independent random variables under the sub-linear expectations. Acta Mathematica Sinica 36(3), 224–244 (2020)
Zhang, L.-X.: Lindeberg’s central limit theorems for martingale like sequences under sub-linear expectations. Sci. China Math. 64(6), 1263–1290 (2021)
Additional information
This work was supported by grants from the NSF of China (Grant Nos. 11731012 and 12031005), the Ten Thousand Talents Plan of Zhejiang Province (Grant No. 2018R52042), the NSF of Zhejiang Province (Grant No. LZ21A010002) and the Fundamental Research Funds for the Central Universities.
Zhang, LX. Functional Shige Peng’s Central Limit Theorems for Martingale Vectors. Commun. Math. Stat. 12, 357–383 (2024). https://doi.org/10.1007/s40304-022-00294-7
Keywords
- Random vector
- Central limit theorem
- Functional central limit theorem
- Martingale difference
- Sub-linear expectation