Abstract
In this paper, Chover’s law of the iterated logarithm is established for sequences of independent and identically distributed random variables in a sub-linear expectation space. As applications, several results on Chover’s law of the iterated logarithm in the traditional probability setting are generalized to the sub-linear expectation context. Our results extend those of Qi and Cheng (Chinese Ann Math 17(A):195–206, 1996), Wu and Jiang (J Korean Stat Soc 39(2):199–206, 2010), and Wu (Acta Math Appl Sin (English Series) 32(2):385–394, 2016) from the traditional probability space to the general sub-linear expectation space. This form of Chover’s law of the iterated logarithm has not previously been reported under sub-linear expectation, and we provide a method for studying it.
1 Introduction
The classical limit theorems play a fruitful role in the development of probability theory and its applications. These theorems have traditionally been formulated for additive probabilities and additive expectations. However, the additivity hypothesis is unrealistic in many areas of application, and non-additive probabilities and non-additive expectations have long been useful tools for studying uncertainty. As early as 1961, Ellsberg [10] argued, with the help of ‘mind experiments’, against necessarily additive probabilities; Feynman et al. [11] used non-additivity to describe the deviation of elementary particles from mechanical to wave-like behavior; and so on.
The most convincing and well-known axiomatization of additive probability was given by Savage [26]. However compelling Savage’s axioms and results are, they are not immune to attack. Ellsberg [10] gave an example showing that in some cases the additive probability of Savage [26] is not applicable and that a non-additive probability measure is more suitable. In the framework of Anscombe and Aumann [1], Schmeidler [27, 28] also allowed the probability measure to be non-additive. Schmeidler’s model can moreover explain some of the ‘paradoxes’ and counterexamples to the expected utility theory of von Neumann and Morgenstern [19], which has stimulated many generalizations of that theory; see Huber and Strassen [15], Quiggin [25], Yaari [36], Gilboa [12], Wakker [30], El Karoui et al. [9], Artzner et al. [2], Marinacci [17], and Denis and Martini [8], among others. This was the primary motive for developing non-additive probability and non-additive expectation theory. In recent years, the theory and methodology of non-additive expectation have been well developed and have received much attention in applications. For example, the G-expectation (sub-linear expectation) was introduced by Peng [20] in a general function space by relaxing the linearity of the classical expectation to sub-additivity and positive homogeneity. As a further development, Peng [21,22,23] constructed the basic framework and basic properties of, and a new central limit theorem under, sub-linear expectations.
In the framework of Peng [20,21,22,23] and Zhang [37,38,39], Hu and Yang [14] established exponential inequalities, Rosenthal’s inequalities, Kolmogorov’s and Marcinkiewicz’s strong laws of large numbers, and the Hartman–Wintner law of the iterated logarithm; Hu [13] and Chen [6] studied Kolmogorov’s strong law of large numbers; Wu and Jiang [35] established Chover’s law of the iterated logarithm (LIL); Wu et al. [31] obtained an asymptotic approximation of inverse moments; Li et al. [16] obtained reflected solutions of backward stochastic differential equations driven by G-Brownian motion; and so on.
In probability space, Chover [7] was the first to establish the classical Chover LIL for a sequence of independent and identically distributed (i.i.d.) random variables. Results of Chover-LIL type were obtained by Mikosch [18], Vasudeva [29], and Qi and Cheng [24] for sequences of independent random variables with different distributions, and by Chen [5], Cai [4], and Wu and Jiang [32] for dependent sequences. Several papers have been devoted to the study of another form of Chover’s LIL; we refer the reader to Qi and Cheng [24], Wu and Jiang [33], and Wu [34].
Recently, for a sequence of extended i.i.d. random variables under sub-linear expectation, Wu and Jiang [35] established Chover’s LIL under the following condition
where \(c(x)\ge 0\), \(\lim \nolimits _{x\rightarrow \infty }c(x)=c>0\), \(l(x)>0\) is a slowly varying function, and \({\mathbb {V}}\) is the capacity corresponding to the sub-linear expectation (defined in Sect. 2).
The main purpose of this paper is to study another form of Chover’s LIL and to extend the LIL obtained by Qi and Cheng [24] and Wu and Jiang [33], among others, from the traditional probability space to the general sub-linear expectation space. Because sub-linear expectations and capacities are not additive, many powerful tools and common methods for linear expectations and probabilities are no longer valid, so the study of limit theorems under sub-linear expectation becomes much more complex and difficult. We provide a method for studying this subject.
2 Basic settings
The study of this paper uses the framework and notation established by Peng [23] and Wu and Jiang [35]. Let \((\Omega , {\mathcal {F}})\) be a measurable space and let \({\mathcal {H}}\) be a linear space of real functions defined on \((\Omega , {\mathcal {F}})\) such that \(\varphi (X_1,\ldots ,X_n)\in {\mathcal {H}}\) for any \(X_1,\ldots ,X_n\in {\mathcal {H}}\) and \(\varphi \in C_{l,Lip}({\mathbb {R}}_n)\), where \(C_{l,Lip}({\mathbb {R}}_n)\) denotes the linear space of local Lipschitz functions \(\varphi \) satisfying
$$\begin{aligned} |\varphi ({\mathbf {x}})-\varphi ({\mathbf {y}})|\le c\left( 1+|{\mathbf {x}}|^m+|{\mathbf {y}}|^m\right) |{\mathbf {x}}-{\mathbf {y}}|,\quad \forall {\mathbf {x}}, {\mathbf {y}}\in {\mathbb {R}}_n, \end{aligned}$$
for some \(c>0\) and \(m\in {\mathbb {N}}\) depending on \(\varphi \). \({\mathcal {H}}\) is considered a space of “random variables”, and for such a random variable we write \(X\in {\mathcal {H}}\).
Definition 2.1
\(\hat{{\mathbb {E}}}:{\mathcal {H}}\rightarrow [-\infty , +\infty ]\) is called a sub-linear expectation, if \(\hat{{\mathbb {E}}}\) satisfies the following properties: for all \(X, Y\in {\mathcal {H}}\), we have
- (a)
Monotonicity: If \(X\ge Y\), then \(\hat{{\mathbb {E}}}X\ge \hat{{\mathbb {E}}}Y\);
- (b)
Constant preserving: \(\hat{{\mathbb {E}}}c=c\);
- (c)
Sub-additivity: \(\hat{{\mathbb {E}}}(X+Y)\le \hat{{\mathbb {E}}}X+\hat{{\mathbb {E}}}Y\);
- (d)
Positive homogeneity: \(\hat{{\mathbb {E}}}(\lambda X)=\lambda \hat{{\mathbb {E}}}X, \lambda \ge 0\).
The triple \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\) is called a sub-linear expectation space, in analogy with the classical probability space \((\Omega , {\mathcal {F}}, P)\): for \(\hat{{\mathbb {E}}}\), the linearity of the classical expectation is replaced by sub-additivity and positive homogeneity.
Given a sub-linear expectation \(\hat{{\mathbb {E}}}\), the conjugate expectation \({\hat{\varepsilon }}\) of \(\hat{{\mathbb {E}}}\) is defined by
$$\begin{aligned} {\hat{\varepsilon }}(X):=-\hat{{\mathbb {E}}}(-X),\quad \forall X\in {\mathcal {H}}. \end{aligned}$$
In a sub-linear expectation space, we replace the concept of probability with the concept of capacity. Let \({\mathcal {G}}\subset {\mathcal {F}}\). A function \(V:{\mathcal {G}}\rightarrow [0, 1]\) is called a capacity if
$$\begin{aligned} V(\emptyset )=0,\quad V(\Omega )=1,\quad {\mathrm{and}}\quad V(A)\le V(B)\,\,{\mathrm{whenever}}\,\,A\subset B,\, A, B\in {\mathcal {G}}. \end{aligned}$$
It is called sub-additive if \(V(A\bigcup B)\le V(A)+V(B)\) for all \(A,B\in {\mathcal {G}}\) with \(A\bigcup B\in {\mathcal {G}}\). In the sub-linear expectation space \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\), we denote the pair \(({\mathbb {V}}, \nu )\) of capacities by
where \(A^c\) is the complement set of A. By definition of \({\mathbb {V}}\) and \(\nu \), it is obvious that \({\mathbb {V}}\) is sub-additive, and \(\nu (A)\le {\mathbb {V}}(A)\), for all \(A\in {\mathcal {F}}\).
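As a concrete illustration (a toy model of ours, not from the paper), take \({\mathbb {V}}(A)=\sup _P P(A)\) and \(\nu (A)=1-{\mathbb {V}}(A^c)=\inf _P P(A)\) over a finite family of priors; this pair exhibits exactly the stated relations: \({\mathbb {V}}\) is sub-additive and \(\nu \le {\mathbb {V}}\), while \(\nu \) itself need not be sub-additive.

```python
# Upper/lower capacity pair on Omega = {0, 1, 2}. Illustrative names only.
OMEGA = {0, 1, 2}
PRIORS = [(0.5, 0.3, 0.2), (0.2, 0.3, 0.5)]

def upper_V(A):
    """V(A) = sup_P P(A): a sub-additive capacity."""
    return max(sum(P[i] for i in A) for P in PRIORS)

def lower_nu(A):
    """nu(A) = 1 - V(A^c) = inf_P P(A)."""
    return 1 - upper_V(OMEGA - set(A))

A, B = {0}, {2}
assert lower_nu(A) <= upper_V(A)                          # nu <= V
assert upper_V(A | B) <= upper_V(A) + upper_V(B) + 1e-12  # sub-additivity of V
# nu can be strictly super-additive on disjoint sets, hence is not sub-additive:
assert lower_nu(A) + lower_nu(B) < lower_nu(A | B)
```

The last assertion is the reason the paper states sub-additivity only for \({\mathbb {V}}\) and not for \(\nu \).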
Definition 2.2
-
(i)
\(\hat{{\mathbb {E}}}\) is said to be countably sub-additive if it satisfies
$$\begin{aligned} \hat{{\mathbb {E}}}(X)\le \sum \limits _{n=1}^\infty \hat{{\mathbb {E}}}(X_n),\quad {\mathrm{whenever}}\,X\le \sum \limits _{n=1}^\infty X_n,\,X, X_n\in {\mathcal {H}},\,X\ge 0, X_n\ge 0. \end{aligned}$$It is said to be continuous if it satisfies
$$\begin{aligned} \hat{{\mathbb {E}}}(X_n)\uparrow \hat{{\mathbb {E}}}(X),\,{\mathrm{if}}\,0\le X_n\uparrow X, {\mathrm{and}}\, \hat{{\mathbb {E}}}(X_n)\downarrow \hat{{\mathbb {E}}}(X),\,{\mathrm{if}}\,0\le X_n\downarrow X, \,{\mathrm{where}}\,X, X_n\in {\mathcal {H}}. \end{aligned}$$ -
(ii)
A capacity V is said to be countably sub-additive if
$$\begin{aligned} V\left( \bigcup \limits _{n=1}^\infty A_n\right) \le \sum \limits _{n=1}^\infty V(A_n),\quad \forall A_n\in {\mathcal {F}}. \end{aligned}$$It is said to be continuous if it satisfies
$$\begin{aligned} V(A_n)\uparrow V(A),\,{\mathrm{if}}\,A_n\uparrow A, \,{\mathrm{and}}\, V(A_n)\downarrow V(A),\,{\mathrm{if}}\,A_n\downarrow A, \,{\mathrm{where}}\,A, A_n\in {\mathcal {F}}. \end{aligned}$$
Also, we define the Choquet integrals/expectations \((C_{\mathbb {V}},C_\nu )\) by
$$\begin{aligned} C_V(X):=\int _0^\infty V(X\ge t)\,{\mathrm{d}}t+\int _{-\infty }^0\left( V(X\ge t)-1\right) {\mathrm{d}}t, \end{aligned}$$
with V replaced by \({\mathbb {V}}\) and \(\nu \), respectively.
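For a non-negative random variable on a finite space, the Choquet integral reduces to a finite layer-cake sum. The following sketch (our illustration; `upper_V` is a toy upper-probability capacity, restated so the block is self-contained) also spot-checks the direction of Proposition 2.3 (vii) below, \(\hat{{\mathbb {E}}}(X)\le C_{\mathbb {V}}(X)\):

```python
# Choquet integral C_V(X) = integral_0^inf V(X >= t) dt for X >= 0,
# computed as a finite layer-cake sum on Omega = {0, 1, 2}. Illustrative only.
OMEGA = [0, 1, 2]
PRIORS = [(0.5, 0.3, 0.2), (0.2, 0.3, 0.5)]

def upper_V(A):
    return max(sum(P[i] for i in A) for P in PRIORS)

def choquet(X, V):
    """Layer-cake formula: sum over level intervals (lo, hi] of (hi-lo)*V(X >= hi)."""
    levels = sorted(set([0.0] + list(X.values())))
    return sum((hi - lo) * V({w for w in OMEGA if X[w] >= hi})
               for lo, hi in zip(levels, levels[1:]))

X = {0: 3.0, 1: 1.0, 2: 0.0}
c = choquet(X, upper_V)
# Upper expectation E_hat(X) = max_P E_P[X]; it never exceeds the Choquet integral.
E_hat = max(sum(p * X[w] for p, w in zip(P, OMEGA)) for P in PRIORS)
assert E_hat <= c + 1e-12
```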
The following Proposition 2.3 collects some basic properties used in this paper. Proposition 2.3 (i)–(iv) follow easily from Definitions 2.1 and 2.2. Proposition 2.3 (v) follows from \(I(|X|\ge x)\le |X|^p/x^p\in {\mathcal {H}}, p>0\), and Proposition 2.3 (iii); Proposition 2.3 (vi) and (vii) were established by Zhang ([37], Lemma 4.1 and Lemma 4.5 (iii)).
Proposition 2.3
- (i):
-
For all \(X, Y\in {\mathcal {H}}\),
$$\begin{aligned} {\hat{\varepsilon }} X\le \hat{{\mathbb {E}}}X,\, \hat{{\mathbb {E}}}(X+c)=\hat{{\mathbb {E}}}X+c,\, |\hat{{\mathbb {E}}}(X-Y)|\le \hat{{\mathbb {E}}}|X-Y| \,{\mathrm{and}}\, \hat{{\mathbb {E}}}(X-Y)\ge \hat{{\mathbb {E}}}X-\hat{{\mathbb {E}}}Y. \end{aligned}$$ - (ii):
-
If \(\hat{{\mathbb {E}}}Y={\hat{\varepsilon }} Y\), then \(\hat{{\mathbb {E}}}(X+aY)=\hat{{\mathbb {E}}}X+a\hat{{\mathbb {E}}}Y\) for any \(a\in {\mathbb {R}}\).
- (iii):
-
If \(f\le I(A)\le g\), \(f,g\in {\mathcal {H}}\), then
$$\begin{aligned} \hat{{\mathbb {E}}}f\le {\mathbb {V}}(A)\le \hat{{\mathbb {E}}}g, \,\quad {\hat{\varepsilon }} f\le \nu (A)\le {\hat{\varepsilon }} g. \end{aligned}$$(2.1) - (iv):
-
If \({\mathbb {V}}\) (resp. \(\hat{{\mathbb {E}}})\) is continuous, then \({\mathbb {V}}\) (resp. \(\hat{{\mathbb {E}}})\) is countably sub-additive.
- (v):
-
Markov inequality: for any \(X\in {\mathcal {H}}\),
$$\begin{aligned} {\mathbb {V}}(|X|\ge x)\le \hat{{\mathbb {E}}}(|X|^p)/x^p, \,\nu (|X|\ge x)\le \hat{{\varepsilon }}(|X|^p)/x^p\quad {\mathrm{for}}\,{\mathrm{any}}\, x>0, p>0. \end{aligned}$$ - (vi):
-
Hölder inequality: \(\forall X, Y\in {\mathcal {H}}, p, q>1\) satisfying \(p^{-1}+q^{-1}=1\),
$$\begin{aligned} \hat{{\mathbb {E}}}(|XY|)\le \left( \hat{{\mathbb {E}}}(|X|^p)\right) ^{1/p} \left( \hat{{\mathbb {E}}}(|Y|^q)\right) ^{1/q}. \end{aligned}$$In particular, the Jensen inequality: \(\forall X\in {\mathcal {H}}\),
$$\begin{aligned} \left( \hat{{\mathbb {E}}}(|X|^r)\right) ^{1/r}\le \left( \hat{{\mathbb {E}}} (|X|^s)\right) ^{1/s}\quad {\mathrm{for}}\quad 0<r\le s. \end{aligned}$$ - (vii):
-
If \(\hat{{\mathbb {E}}}\) is countably sub-additive, then \(\hat{{\mathbb {E}}}(|X|)\le C_{\mathbb {V}}(|X|)\) for any \(X\in {\mathcal {H}}\).
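The Markov and Jensen inequalities of Proposition 2.3 can be spot-checked in a toy upper-expectation model (our illustration, with illustrative names; in this finite model the capacity of an event can be computed exactly, so no smoothing function is needed):

```python
# Spot-check of Proposition 2.3 (v) (Markov) and (vi) (Jensen) for an
# upper expectation over two priors on Omega = {0, 1}. Illustrative only.
PRIORS = [(0.5, 0.5), (0.9, 0.1)]

def E(X):
    return max(sum(p * x for p, x in zip(P, X)) for P in PRIORS)

def V_abs_geq(X, x):
    """V(|X| >= x) = sup_P P(|X| >= x), exact in this finite model."""
    return max(sum(p for p, v in zip(P, X) if abs(v) >= x) for P in PRIORS)

X = (1.0, 4.0)
x, p = 2.0, 2
assert V_abs_geq(X, x) <= E(tuple(abs(v) ** p for v in X)) / x ** p   # Markov
r, s = 1, 2
lhs = E(tuple(abs(v) ** r for v in X)) ** (1 / r)
rhs = E(tuple(abs(v) ** s for v in X)) ** (1 / s)
assert lhs <= rhs + 1e-12                                             # Jensen
```

Both inequalities hold because each linear expectation \(E_P\) satisfies them and the supremum over P preserves them, which is the essence of the sub-linear case.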
Definition 2.4
- (i)
(Identical distribution) Let \(X_1\) and \(X_2\) be two random variables defined in a sub-linear expectation space \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\). They are called identically distributed, denoted by \(X_1{\mathop {=}\limits ^{d}}X_2\), if
$$\begin{aligned} \hat{{\mathbb {E}}}(\varphi (X_1))=\hat{{\mathbb {E}}}(\varphi (X_2)), \forall \varphi \in C_{l,Lip}({\mathbb {R}}). \end{aligned}$$ - (ii)
(Independence) In a sub-linear expectation space \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\), a random vector \({\mathbf {Y}}=(Y_1,\ldots ,Y_n)\), \(Y_i\in {\mathcal {H}}\) is said to be independent to another random vector \({\mathbf {X}}=(X_1,\ldots ,X_m), X_i\in {\mathcal {H}}\) under \(\hat{{\mathbb {E}}}\) if for each test function \(\varphi \in C_{l,Lip}({\mathbb {R}}_m\times {\mathbb {R}}_n)\) we have \(\hat{{\mathbb {E}}}(\varphi ({\mathbf {X}}, {\mathbf {Y}}))=\hat{{\mathbb {E}}}[\hat{{\mathbb {E}}}(\varphi ({\mathbf {x}}, {\mathbf {Y}}))|_{{\mathbf {x}}={\mathbf {X}}}]\), whenever \({\bar{\varphi }}({\mathbf {x}}):=\hat{{\mathbb {E}}}\left( |\varphi ({\mathbf {x}}, {\mathbf {Y}})|\right) <\infty \) for all \({\mathbf {x}}\) and \(\hat{{\mathbb {E}}}\left( |{\bar{\varphi }}({\mathbf {X}})|\right) <\infty \).
From the definition of independence, it is easily seen that, if Y is independent to X, and \(X, Y\in {\mathcal {H}}, X>0, \hat{{\mathbb {E}}}Y>0\), then
$$\begin{aligned} \hat{{\mathbb {E}}}(XY)=\hat{{\mathbb {E}}}(X)\hat{{\mathbb {E}}}(Y). \end{aligned}$$ - (iii)
(I.I.D. Random Variables) A sequence \(\{X_n; n\ge 1\}\) of random variables is said to be independent and identically distributed (i.i.d.), if \(X_{i+1}\) is independent to \((X_1, \ldots , X_i)\) and \(X_i{\mathop {=}\limits ^{d}}X_1\) for each \(i\ge 1\).
It can be shown that if \(\{X_n; n\ge 1\}\) is a sequence of independent random variables and \(f_1(x), f_2(x),\ldots \in C_{l,Lip}({\mathbb {R}})\), then \(\{f_n(X_n); n\ge 1\}\) is also a sequence of independent random variables.
In the following, let \(\{X_n; n\ge 1\}\) be a sequence of random variables in a sub-linear expectation space \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\), and \(S_n=\sum _{i=1}^nX_i\). The symbol c stands for a generic positive constant which may differ from one place to another. Let \(a_n\ll b_n\) denote that there exists a constant \(c>0\) such that \(a_n\le cb_n\) for sufficiently large n, \(a_x\sim b_x\) denotes \(\lim _{x\rightarrow \infty }a_x/b_x=1\), and \(I(\cdot )\) denotes an indicator function.
To prove our results, we need the following three lemmas.
Lemma 2.5
(Borel–Cantelli Lemma, Zhang 2016a, Lemma 3.9 [37]) Let \(\{A_{n}; n\ge 1\}\) be a sequence of events in \({\mathcal {F}}\). Suppose that V is a countably sub-additive capacity. If \(\sum \nolimits _{n=1}^{\infty }V(A_{n})<\infty \), then \(V(A_{n};\mathrm{i.o.})=0\), where \(\{A_{n};\mathrm{i.o.}\}=\bigcap \nolimits _{n=1}^\infty \bigcup \nolimits _{m=n}^\infty A_m\).
Lemma 2.6
(Zhang (2016b, Theorem 2.1 (b) [38], 2016a, Theorem 3.1 (b) [37])) Suppose that \(X_k\) is independent to \((X_{k+1},\ldots ,X_n)\) for each \(k=1,\ldots ,n-1\), and \(\hat{{\mathbb {E}}}X_{n}\le 0\). Then
Here \(c_p\) is a positive constant depending only on p.
Definition 2.7
\(l(x)>0\) is said to be a slowly varying function at infinity if
$$\begin{aligned} \lim \limits _{x\rightarrow \infty }{l(\lambda x)\over l(x)}=1\quad {\mathrm{for}}\,\,{\mathrm{every}}\,\,\lambda >0. \end{aligned}$$
\(f(x)>0\) is said to be a regularly varying function with index \(\rho \) at infinity, written \(f\in {\mathbf {R}}_\rho \), if
$$\begin{aligned} \lim \limits _{x\rightarrow \infty }{f(\lambda x)\over f(x)}=\lambda ^\rho \quad {\mathrm{for}}\,\,{\mathrm{every}}\,\,\lambda >0. \end{aligned}$$
\({\mathbf {R}}_0\) is the class of slowly varying functions at infinity.
From Bingham et al. [3] ((i) corresponds to Theorem 1.3.1, p. 12; (ii)–(iv) to Proposition 1.3.6, p. 16; (v) to Theorem 1.5.4; and (vi) to Proposition 1.5.7 (ii)), we have
Proposition 2.8
-
(i)
l(x) is a slowly varying function at infinity if and only if
$$\begin{aligned} l(x)=c(x)\exp \left\{ \int _{a}^{x}{b(u)\over u}{\mathrm{d}}u\right\} ,\quad x\ge a, \end{aligned}$$for some \(a>0\), where \(c(x)\ge 0\), \( \lim \nolimits _{x\rightarrow \infty }c(x)=c>0, \) and \( \lim \nolimits _{x\rightarrow \infty }b(x)=0.\) Furthermore, f(x) is a regularly varying function with index \(\rho \) at infinity if and only if
$$\begin{aligned} f(x)=x^\rho l(x), \end{aligned}$$where l(x) is a slowly varying function.
-
(ii)
If l(x) varies slowly, then \((\ln l(x))/\ln x\rightarrow 0\) as \(x\rightarrow \infty \).
-
(iii)
If l(x) varies slowly, so does \((l(x))^\alpha \) for every \(\alpha \in {\mathbb {R}}\). If \(l_1, l_2\) vary slowly, so do \(l_1(x)l_2(x)\), \(l_1(x)+l_2(x)\).
-
(iv)
If l(x) varies slowly and \(\alpha >0\), then
$$\begin{aligned} x^\alpha l(x)\rightarrow \infty , \quad x^{-\alpha } l(x)\rightarrow 0 \quad (x\rightarrow \infty ). \end{aligned}$$ -
(v)
l(x) is a slowly varying function at infinity if and only if, for every \(\alpha >0\), there exists a non-decreasing function \(\phi \) and a non-increasing function \(\psi \) with
$$\begin{aligned} x^\alpha l(x)\sim \phi (x), \quad x^{-\alpha } l(x)\sim \psi (x) \quad (x\rightarrow \infty ). \end{aligned}$$ -
(vi)
If \(f_i\in {\mathbf {R}}_{\rho _i}\)\((i=1, 2)\), \(f_2(x)\rightarrow \infty \) as \(x\rightarrow \infty \), then \(f_1(f_2(x))\in {\mathbf {R}}_{\rho _1\rho _2}\).
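Properties (ii) and (iv) of Proposition 2.8 are easy to check numerically for the prototypical slowly varying function \(l(x)=\ln x\) (a sketch of ours for illustration; the tolerances are loose because the limits are approached slowly):

```python
import math

def l(x):
    return math.log(x)  # the prototypical slowly varying function

x = 1e100
# Defining property: l(lambda * x) / l(x) -> 1 for every fixed lambda > 0.
assert abs(l(3.0 * x) / l(x) - 1.0) < 0.01
# Proposition 2.8 (ii): ln(l(x)) / ln(x) -> 0.
assert math.log(l(x)) / math.log(x) < 0.05
# Proposition 2.8 (iv): x**alpha * l(x) -> infinity and x**(-alpha) * l(x) -> 0.
alpha = 0.1
assert x ** alpha * l(x) > 1e6 and x ** (-alpha) * l(x) < 1e-6
```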
Lemma 2.9
(Qi and Cheng 1996 [24]) Suppose that l(x) is a slowly varying function at infinity and h(x) is a positive function with \(\lim \nolimits _{x\rightarrow \infty }h(x)=\infty \). Then, for any given \(\delta >0\), there exists an \(x_{0}>0\) such that
3 Chover’s law of the iterated logarithm
In the sub-linear expectation space, the almost sure convergence of sequences of random variables differs from that in the traditional probability space. We next give the definition of almost sure convergence of a sequence of random variables in a sub-linear expectation space.
Definition 3.1
A sequence of random variables \(\{X_{n}; n\ge 1\}\) in \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\) is said to converge to X almost surely V, denoted by \(X_{n}\rightarrow X\) a.s. V as \(n\rightarrow \infty \), if \(V(X_{n}\nrightarrow X)=0\).
More generally, for an arbitrary event A, a property is said to hold a.s. V if V(the property fails) \(=0\). Here V may be either \({\mathbb {V}}\) or \(\nu \). Since \(\nu (A)\le {\mathbb {V}}(A)\) and \(\nu (A)+{\mathbb {V}}(A^c)=1\) for any \(A\in {\mathcal {F}}\), it is obvious that \(X_{n}\rightarrow X\) a.s. \({\mathbb {V}}\) implies \(X_{n}\rightarrow X\) a.s. \(\nu \), but the converse fails in general. Therefore, we cannot define \(X_{n}\rightarrow X\) a.s. \({\mathbb {V}}\) by \({\mathbb {V}}(X_{n}\rightarrow X)=1\).
We now give an example satisfying \(X_n\rightarrow X\) a.s. \(\nu \) but not \(X_n\rightarrow X\) a.s. \({\mathbb {V}}\). To this end, we first recall the notion of the G-normal distribution introduced by Peng [23].
Definition 3.2
(G-normal random variable) For \(0\le {\underline{\sigma }}^2\le {\bar{\sigma }}^2<\infty \), a random variable \(\xi \) in a sub-linear expectation space \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\) is called a G-normal \({\mathcal {N}}(0, [{\underline{\sigma }}^2, {\bar{\sigma }}^2])\) distributed random variable, where \({\bar{\sigma }}^2=\hat{{\mathbb {E}}}\xi ^2, {\underline{\sigma }}^2=\hat{{\varepsilon }}\xi ^2\) (write \(\xi \sim {\mathcal {N}}(0, [{\underline{\sigma }}^2, {\bar{\sigma }}^2])\) under \(\hat{{\mathbb {E}}}\)), if for any \(\varphi \in C_{l,Lip}({\mathbb {R}})\), the function \(u(x, t)=\hat{{\mathbb {E}}}\left[ \varphi (x+\sqrt{t}\xi )\right] \) (\(x\in {\mathbb {R}}, t\ge 0\)) is the unique viscosity solution of the following heat equation:
where \(G(\alpha )=({\bar{\sigma }}^2\alpha ^+-{\underline{\sigma }}^2\alpha ^-)/2.\)
In particular, if \({\underline{\sigma }}={\bar{\sigma }}:=\sigma \), then \({\mathcal {N}}(0, [{\underline{\sigma }}^2, {\bar{\sigma }}^2])={\mathcal {N}}(0, \sigma ^2)\) is the usual normal distribution.
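The defining PDE can be sketched numerically. Below is a small explicit finite-difference solver for the G-heat equation \(\partial _t u = G(\partial _{xx} u)\), \(u(x,0)=\varphi (x)\) (our sketch, with illustrative parameters; the grid and step sizes are chosen so that the explicit scheme is stable and boundary effects do not reach the centre). For the convex payoff \(\varphi (x)=x^2\) it recovers \(\hat{{\mathbb {E}}}\xi ^2={\bar{\sigma }}^2\), and for \(\varphi (x)=-x^2\) it recovers \(-{\hat{\varepsilon }}\xi ^2=-{\underline{\sigma }}^2\).

```python
# Explicit finite-difference sketch of u_t = G(u_xx), u(x, 0) = phi(x),
# where G(a) = (s_hi^2 * a_plus - s_lo^2 * a_minus) / 2. Illustrative only.
def g_heat(phi, s_lo, s_hi, t=1.0, dx=0.5, half_width=25.0, n_steps=10):
    n = int(half_width / dx)
    xs = [i * dx for i in range(-n, n + 1)]
    u = [phi(x) for x in xs]
    dt = t / n_steps  # CFL: dt * s_hi**2 / dx**2 = 0.4 <= 0.5 with the defaults
    G = lambda a: 0.5 * (s_hi ** 2 * max(a, 0.0) - s_lo ** 2 * max(-a, 0.0))
    for _ in range(n_steps):
        new = u[:]  # boundary values frozen; the centre is unaffected for n_steps < n
        for i in range(1, len(u) - 1):
            a = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / dx ** 2
            new[i] = u[i] + dt * G(a)
        u = new
    return u[n]  # u(0, t), i.e. E_hat[phi(sqrt(t) * xi)]

s_lo, s_hi = 0.5, 1.0
assert abs(g_heat(lambda x: x * x, s_lo, s_hi) - s_hi ** 2) < 1e-6   # E xi^2 = s_hi^2
assert abs(g_heat(lambda x: -x * x, s_lo, s_hi) + s_lo ** 2) < 1e-6  # eps xi^2 = s_lo^2
```

The two outcomes reflect the fact that the operator G selects the volatility \({\bar{\sigma }}\) where u is convex and \({\underline{\sigma }}\) where it is concave.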
Example 3.3
Let \(X_n\) be independent G-normal random variables with \(X_n\sim {\mathcal {N}}(0, [1/4^{2n}, 1])\) in a sub-linear expectation space \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\), where \(\hat{{\mathbb {E}}}\) and \({\mathbb {V}}\) are continuous. Then \(X_n\rightarrow 0\) a.s. \(\nu \), but not \(X_n\rightarrow 0\) a.s. \({\mathbb {V}}\).
Proof
Take \(\mu =1/2\) and let g be the function defined in (3.6) below. By (2.1), the independence of the \(X_n\), and the continuity of \({\mathbb {V}}\),
Hence, combining this with the Markov inequality \(\nu (|X_m|>1/2^{m+1})\le 2^{2m+2}\hat{{\varepsilon }}X_m^2=2^{2m+2}4^{-2m}=1/4^{m-1}\), we get
That is, \(X_n\rightarrow 0\) a.s. \(\nu \).
On the other hand,
This, combined with the continuity of \({\mathbb {V}}\), implies
That is, \(X_n\nrightarrow 0\) a.s. \({\mathbb {V}}\). \(\square \)
In the traditional probability space, let \(\{X_n; n\ge 1\}\) be a sequence of i.i.d. random variables with a nondegenerate distribution function F satisfying \(1-F(x)={c_{1}(x)l(x)\over x^{\alpha }}\) and \(F(-x)={c_{2}(x)l(x)\over x^{\alpha }}\) as \(x\rightarrow \infty \) for some \(0<\alpha <2\), where, for \(x>0\), \(c_{i}(x)\ge 0\), \(\lim \nolimits _{x\rightarrow \infty }c_{i}(x)=c_{i}\) \((i=1,2)\), \(c_{1}+c_{2}>0\), and l(x) is a slowly varying function at infinity. Chover [7] established the following classical Chover LIL:
Chover-LIL type results were obtained by Mikosch [18], Qi and Cheng [24], Vasudeva [29], Chen [5], and Wu and Jiang [32]. Qi and Cheng [24], Wu and Jiang [33], and Wu [34] also studied another form of Chover’s LIL. They proved, separately, that there exist constants \(A_{n}\in {\mathbb {R}}\) and \(B_{n}>0\) such that
for sequences of independent and dependent random variables.
Recently, under sub-linear expectation, for a sequence of extended i.i.d. random variables satisfying (1.1), Wu and Jiang [35] established the following Chover’s LIL:
where \(c_{n}=0\) for \(0<\alpha <1\), \(c_{n}=n\hat{{\mathbb {E}}}X_{1}\) for \(1\le \alpha <2\), \(\lg _0x:=x\) and \(\lg _j x :=\ln \{\max ({\mathrm{e}}, \lg _{j-1}x)\}\) for \(j\ge 1\), and \(B(x)=\inf \{y; {\mathbb {V}}(|X_{1}|\ge y)\le 1/x\}\) for \(x>0\).
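The norming function \(B(x)=\inf \{y;\, {\mathbb {V}}(|X_{1}|\ge y)\le 1/x\}\) is a generalized inverse of the tail capacity and can be computed by bisection. As a hedged illustration of ours (assuming a classical Pareto-type tail \({\mathbb {V}}(|X_1|\ge y)=\min (1, y^{-\alpha })\), chosen only to make B explicit, in which case \(B(x)=x^{1/\alpha }\)):

```python
# Generalized-inverse computation of B(x) = inf{y : tail(y) <= 1/x} by bisection.
# The Pareto tail below is an assumption for illustration, giving B(x) = x**(1/alpha).
alpha = 1.5

def tail(y):
    return min(1.0, y ** (-alpha))  # stand-in for V(|X_1| >= y)

def B(x, lo=0.0, hi=1e12, iters=200):
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if tail(mid) <= 1.0 / x:
            hi = mid  # mid is feasible; the infimum lies at or below mid
        else:
            lo = mid
    return hi

assert abs(B(100.0) - 100.0 ** (1.0 / alpha)) < 1e-6
```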
In this paper we study and extend another form of Chover’s LIL, different from (3.2): we carry the corresponding results obtained in the traditional probability space by Qi and Cheng [24], Wu and Jiang [33], and Wu [34], among others, over to the sub-linear expectation space. We will prove that a Chover LIL similar to (3.1) still holds in the sub-linear expectation space.
Throughout the rest of this paper, we assume that \(\{X_{n}; n\ge 1\}\) is a sequence of identically distributed random variables in \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\) satisfying the following condition:
It should be noted that identical distribution is defined under \(\hat{{\mathbb {E}}}\), not under \({\mathbb {V}}\) (see Definition 2.4). That the \(X_i\) are identically distributed means that \(\hat{{\mathbb {E}}}(f(X_i))=\hat{{\mathbb {E}}}(f(X_1))\) for \(f(\cdot )\in C_{l,Lip}({\mathbb {R}})\), but this does not imply \({\mathbb {V}}(f(X_i)\in A)={\mathbb {V}}(f(X_1)\in A)\). Therefore, it is necessary to prove that (3.3) is equivalent to the following: for any natural number \(k\ge 1\),
We only need to prove that (3.3) implies (3.4). Firstly, by the definition of \(C_{\mathbb {V}}\),
In order to prove (3.4), we need to convert \({\mathbb {V}}\) to \(\hat{{\mathbb {E}}}\) by using (2.1), so we construct the lower function \(g(x)\in C_{l,Lip}({\mathbb {R}})\).
For \(0<\mu <1\), let \(g(x)\in C_{l,Lip}({\mathbb {R}})\) be a non-increasing function such that
Then
Thus, combining this with (2.1) and \(X_k{\mathop {=}\limits ^{d}}X_1\), we obtain for \(x>0\)
and for any \(0<a<b\),
Hence, combining with (3.5),
Therefore,
from (3.3). That is, (3.4) is established.
Now, we show that (3.3) implies \(C_{{\mathbb {V}}}(|X_1|^p)<\infty \) for all \(0<p<2\).
Set
and define
By (3.5),
On the other hand, by sub-additivity of \({\mathbb {V}}\), we have \({\mathbb {V}}(t<|X_{1}|\le x)\ge {\mathbb {V}}(|X_{1}|>t)-{\mathbb {V}}(|X_{1}|>x)\), so
Hence, we get
Therefore, when (3.3) holds, we get \(C_{\mathbb {V}}(X_{1}^{2}I(|X_{1}|\le x))\sim H(x)\), and
From \(\left( {H(x)\over \exp (\int _{1}^{x}{2L(t)\over t}\mathrm{d}t)}\right) ^{'}\equiv 0,\) we have
Thus, by (3.9) and Proposition 2.8 (i), (iii), H(x) is a slowly varying function at infinity and \(1/K(x)= x^2/H(x)\) is a regularly varying function with index 2 at infinity. Hence, from Bingham et al. ([3], p. 28, Theorem 1.5.12), D(x) is a regularly varying function with index \(1/2\) at infinity and \(1/K(D(x))\sim x\) as \(x\rightarrow \infty \); so, combining Proposition 2.8 (iii), (vi),
where \(l_1(\cdot )\) and \(l_2(\cdot )\) are slowly varying functions, and
By (3.10) and Proposition 2.8 (ii)
By (3.9), there exists \(n_0\) such that \(G(c a_n)\le K(c a_n)\) for \(n\ge n_0\); by this, (3.11), and \(K(c a_n)\sim c^{-2}K(a_n)\) for any \(c>0\),
For any \(0<p<2\), by (3.10) and Proposition 2.8 (iv), we have \(a_n\le n^{1/p}\) for sufficiently large n; thus,
from (3.13).
Further, if \(\hat{{\mathbb {E}}}\) is countably sub-additive, then by Proposition 2.3 (vii), \(\hat{{\mathbb {E}}}(|X_1|^p)\le C_{\mathbb {V}}(|X_1|^p)<\infty \) for any \(0<p<2\). In particular, \(\hat{{\mathbb {E}}}(|X_1|)<\infty \).
With the above preparation, we can describe our theorems as follows.
Theorem 3.4
Assume that \(\{X_{n};n\ge 1\}\) is a sequence of i.i.d. random variables with \(\hat{{\mathbb {E}}}X_k={\hat{\varepsilon }}X_k\), \(\hat{{\mathbb {E}}}\) and \({\mathbb {V}}\) are countably sub-additive, and (3.3) holds. Then
where \({\tilde{S}}_n=\sum \nolimits _{j=1}^{n}(X_j-\hat{{\mathbb {E}}}(X_j))\).
It is natural to ask whether there exists a distribution satisfying (3.3) such that the lower bound in (3.14) is attained. Our answer is positive under \(\nu \). Moreover, under some extra conditions on L(x), we can identify a fixed value for which equality in (3.14) holds with respect to \(\nu \).
Theorem 3.5
Assume that the conditions of Theorem 3.4 hold and that \(\delta \in [0, 1)\) is fixed. If, for any given \(\varepsilon >0\), there exists an \(x_1>0\) such that for all \(x\ge x_1\),
Then
Further, if \({\mathbb {V}}\) is continuous, then
so,
where \({\tilde{S}}_n\) is defined by Theorem 3.4.
Remark 3.6
Theorems 3.4 and 3.5 give Chover’s LIL under sub-linear expectations. They extend the Chover LIL obtained by Qi and Cheng [24], Wu and Jiang [33], and Wu [34], among others, from the traditional probability space to the sub-linear expectation space.
Remark 3.7
Under condition (1.1), Wu and Jiang [35] obtained Chover’s LIL (3.2). In this paper, under condition (3.3), we obtain another form of Chover’s LIL: (3.14), (3.16) and (3.17). The results of Wu and Jiang [35] and of this paper do not overlap, and neither includes the other.
Remark 3.8
It is important to note that the condition that “a sequence \(\{X_{n}; n\ge 1\}\) is independent under \(\hat{{\mathbb {E}}}\)” does not imply that “the sequence \(\{X_{n}; n\ge 1\}\) is independent under \({\mathbb {V}}\)”. Hence the divergence part of the Borel–Cantelli lemma is not available, and we cannot use the standard Borel–Cantelli argument to show (3.17). Proving (3.17) is very difficult, and we do not yet know whether (3.17) holds almost surely \({\mathbb {V}}\).
Proof of Theorem 3.4
Suppose that \(\{X_{n}; n\ge 1\}\) is a sequence of independent random variables. It is important to note that independence under \(\hat{{\mathbb {E}}}\) is defined through functions \(\varphi \) in \(C_{l,Lip}\) (see Definition 2.4), and the indicator function \(I(|x|\le a)\) does not belong to \(C_{l,Lip}\). Therefore, to ensure that the sequence of truncated random variables remains independent, we cannot truncate the \(X_i\) with indicator functions; instead, we replace the indicator by functions \(g(\cdot )\) in \(C_{l,Lip}\). Let \(g(\cdot )\) be defined by (3.6).
For \(a_j=D(j\ln j(\ln \ln j)^2)\), let
and
Obviously, \(\{\widetilde{Y_{j}};j\ge 1\}\) is also a sequence of independent random variables with \(\hat{{\mathbb {E}}}\widetilde{Y_{j}}=0\). By (3.7), (3.8), (3.10), (3.11), the Jensen inequality of Proposition 2.3 (vi), the countable sub-additivity of \(\hat{{\mathbb {E}}}\) together with Proposition 2.3 (vii), \(X_j{\mathop {=}\limits ^{d}}X_1\), and the monotonicity of H(x),
Thus, for any \(\varepsilon >0\), taking \(h(x)=\ln x(\ln \ln x)^2\) and \(\delta =1/2\) in Lemma 2.9, for sufficiently large n we get
On the other hand, taking \(h(x)=\ln x(\ln \ln x)^{2}\) and \(\delta =\varepsilon /4\) in Lemma 2.9, for sufficiently large n we have
For any \(\varepsilon >0\), taking \(p>\max (2, 1+4(1-\varepsilon )/\varepsilon , 16/(5\varepsilon +4))\) in (2.2) of Lemma 2.6, and using Proposition 2.3 (v) (the Markov inequality), \(\max \nolimits _{1\le j\le 2^{n}}|{\tilde{Y}}_{j}|\le 2a_{2^{n}}\), (3.18) and (3.19), for sufficiently large n we obtain
Hence,
since \((p-1)\varepsilon /8+(1+\varepsilon )/2>1\) and \((5\varepsilon +4) p/16>1\). By the Borel–Cantelli lemma (Lemma 2.5),
For any n there exists k such that \(2^{k-1}\le n<2^{k}\); by the above inequality, for sufficiently large n,
In the calculation of \({\mathbb {V}}(f(X_i)\in A)\), we need to convert \({\mathbb {V}}\) to \(\hat{{\mathbb {E}}}\). By (2.1), (3.7), and (3.13),
Thus, combining this with (3.20), we get
Let \(g_j(x)\in C_{l,Lip}({\mathbb {R}}), j\ge 1\) such that \(0\le g_j(x)\le 1\) for all x and \(g_j\left( \frac{x}{a_{2^j}}\right) =1\) if \(a_{2^{j-1}}< x\le a_{2^j}\), \(g_j\left( \frac{x}{a_{2^j}}\right) =0\) if \(x\le \mu a_{2^{j-1}}\) or \(x>(1+\mu )a_{2^j}\). Then,
Similar to (3.10) in Wu and Jiang [35],
Hence, (3.13) implies
Thus, from Proposition 2.8 (v), \(D(n)(\ln n)^{(1+\varepsilon )/2}\) and \(a_n\) being increasing, g(x) being decreasing, (3.19), (3.22), and the countable sub-additivity of \(\hat{{\mathbb {E}}}\), we get
Thus, by the Kronecker lemma,
This and (3.21) imply that
Considering \(\{-X_n; n\ge 1\}\) instead of \(\{X_n; n\ge 1\}\) in (3.24), we can obtain
By \(\hat{{\mathbb {E}}}X_k=\hat{{\varepsilon }}X_k\), we have \(\hat{{\mathbb {E}}}(-X_k)=-\hat{{\varepsilon }}X_k=-\hat{{\mathbb {E}}}X_k\), therefore,
Hence, for any \(\varepsilon >0\),
Letting \(\varepsilon \rightarrow 0\), we have
That is, (3.14) holds. This completes the proof of Theorem 3.4. \(\square \)
Proof of Theorem 3.5
We first prove (3.16). Let \(\delta \in [0,1)\) be fixed, and for any given \(\varepsilon >0\) let
Using the same notation and a method similar to that of Theorem 3.4, we easily get
By (3.11), (3.12) and (3.15), for all sufficiently large n,
Therefore \(\sum \nolimits _{n=1}^{\infty }{\mathbb {V}}(X_{n}\ne Y_{n})\le \sum \nolimits _{n=1}^{\infty }{\mathbb {V}}(|X_{1}|>\mu ^2a'_{n})<\infty \), which, combined with (3.25), yields
Similar to the proof of (3.23), we can obtain
Hence,
Considering \(\{-X_n; n\ge 1\}\) instead of \(\{X_n; n\ge 1\}\) in the above formula and using the fact that \(\hat{{\mathbb {E}}}X_k=\hat{{\varepsilon }}X_k\), we obtain
Therefore, by the arbitrariness of \(\varepsilon \),
Secondly, we prove (3.17). By (3.11), (3.12) and (3.15), for any \(c>0\),
Hence
where \(d_n=D(n(\ln {n})^{1-\delta -\varepsilon })\). For any \(M>0\), let
Since \(1-g(x)\in C_{l,Lip}\), the sequence \(\{\xi _j; j\ge 1\}\) is also independent. Using (2.1), (3.7), and (3.26),
For any \(0<\delta<\epsilon <1\) and \(t>0\), we get
So, using Proposition 2.3 (i): \(\hat{{\mathbb {E}}}(X-Y)\ge \hat{{\mathbb {E}}}X-\hat{{\mathbb {E}}}Y\),
By the independence of \(\{\xi _i; i\ge 1\}\) and the fact that \({\mathrm{e}}^x\ge 1+x\),
On the other hand, noting that \({\mathrm{e}}^x\le 1+|x|{\mathrm{e}}^{|x|}\), \({\mathrm{e}}^x\ge 1+x\), and \(0\le \xi _j\le 1\),
Also, by (2.3) in Lemma 2.6, \(b_n\rightarrow \infty \), and, for \(0\le \xi _j\le 1\), \(\hat{{\mathbb {E}}}(\xi _j-\hat{{\mathbb {E}}}\xi _j)^2=\hat{{\mathbb {E}}} (\xi ^2_j-2\xi _j\hat{{\mathbb {E}}}\xi _j+(\hat{{\mathbb {E}}}\xi _j)^2) \le \hat{{\mathbb {E}}}(\xi _j+\hat{{\mathbb {E}}}\xi _j)=2\hat{{\mathbb {E}}}(\xi _j)\),
Thus, by the Hölder inequality of Proposition 2.3 (vi) and (3.7), it follows that
Hence, substituting (3.28)–(3.30) into (3.27), we have
Letting \(\delta \rightarrow 0\) and then \(t\rightarrow \infty \), we get
Now, choose \(\epsilon =1-\mu >0\). Noting that
and the continuity of \({\mathbb {V}}\), we get
Note that
From the arbitrariness of M, it follows that
Therefore,
That is
Since \(0\le \delta <1\), taking \(h(x)=(\ln x)^{1-\delta -\varepsilon }\) with \(0<\varepsilon <1-\delta \) and \(0<\eta =\frac{\varepsilon }{4(1-\delta -\varepsilon )}\), and applying Lemma 2.9 and (3.10),
Thus combining this with (3.31), for all sufficiently large n,
which implies
since \(\varepsilon \) is arbitrary. That is, (3.17) holds.
By Proposition 2.3, continuity of \({\mathbb {V}}\) implies countable sub-additivity, and (3.16) implies
Therefore, (3.16) and (3.17) imply
This completes the proof of Theorem 3.5.
Finally, we give an example to show that, for each \(\delta \in {[0,1)}\), there exists a distribution satisfying the conditions of Theorem 3.5.
Example
Assume that \(\{X_{n};n\ge 1\}\) is a sequence of i.i.d. random variables with \(\hat{{\mathbb {E}}}X_k={\hat{\varepsilon }}X_k\), \(\hat{{\mathbb {E}}}\) is countably sub-additive, and \({\mathbb {V}}\) is continuous. For \(\delta \in {[0,1)}\), if
Proof
It is easy to check that
which implies that
Thus, combining with (3.8), we have
and
Therefore, (3.3) and (3.15) hold. That is, all conditions of Theorem 3.5 are satisfied. Hence (3.16) and (3.17) hold. \(\square \)
References
Anscombe, F.J., Aumann, R.J.: A definition of subjective probability. Ann. Math. Stat. 34, 199–205 (1963)
Artzner, Ph., Delbaen, F., Eber, J.M., Heath, D.: Thinking coherently. RISK 10, 68–71 (1997)
Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular Variation. Cambridge University Press, Cambridge (1987)
Cai, G.H.: Law of the iterated logarithm for \(\rho \)-mixing sequence. Acta Math. Sin. 49(1), 155–160 (2006)
Chen, P.Y.: Chover’s LIL for \(\varphi \)-mixing sequence of heavy-tailed random vectors. Acta Math. Sin. 48(3), 447–456 (2005)
Chen, Z.J.: Strong laws of large numbers for sub-linear expectations. Sci. China Math. 59(5), 945–954 (2016)
Chover, J.: A law of the iterated logarithm for stable summands. Proc. Am. Math. Soc. 17(2), 441–443 (1966)
Denis, L., Martini, C.: A theoretical framework for the pricing of contingent claims in the presence of model uncertainty. Ann. Appl. Probab. 16(2), 827–852 (2006)
El Karoui, N., Peng, S.G., Quenez, M.C.: Backward stochastic differential equation in finance. Math. Finance 7(1), 1–71 (1997)
Ellsberg, D.: Risk, ambiguity and the Savage axioms. Q. J. Econ. 75, 643–669 (1961)
Feynman, R., Leighton, R., Sands, M.: The Feynman Lectures on Physics. Quantum Mechanics, pp. 1–11. Addison-Wesley, Reading (1963)
Gilboa, I.: Expected utility theory with purely subjective non-additive probabilities. J. Math. Econ. 16, 65–68 (1987)
Hu, C.: A strong law of large numbers for sub-linear expectation under a general moment condition. Stat. Probab. Lett. 119, 248–258 (2016)
Hu, Z.C., Yang, Y.Z.: Some inequalities and limit theorems under sublinear expectations. Acta Math. Appl. Sin. (English Series) 33(2), 451–462 (2017)
Huber, P.J., Strassen, V.: Minimax tests and the Neyman–Pearson lemma for capacities. Ann. Stat. 1(2), 251–263 (1973)
Li, H.W., Peng, S.G., Hima, A.S.: Reflected solutions of backward stochastic differential equations driven by G-Brownian motion. Sci. China Math. 61(1), 1–26 (2018)
Marinacci, M.: Limit laws for non-additive probabilities and their frequentist interpretation. J. Econ. Theory 84, 145–195 (1999)
Mikosch, T.: On the law of the iterated logarithm for independent random variables outside the domain of partial attraction of the normal law (in Russian). Vestnik Leningr. Univ. 13, 35–39 (1984)
von Neumann, J., Morgenstern, O.: Theory of Games and Economic Behavior, 2nd edn. Princeton University Press, Princeton (1947)
Peng, S.: Backward SDE and related g-expectations, in Backward Stochastic Differential Equations. Pitman Research Notes in Math. Series, No.364, El Karoui Mazliak edit, pp. 141–159 (1997)
Peng, S.: Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob–Meyer type. Probab. Theory Relat. Fields 113(4), 473–499 (1999)
Peng, S.: G-expectation, G-Brownian motion and related stochastic calculus of Itô type. In: Benth, F.E., Di Nunno, G., Lindstrøm, T., Øksendal, B., Zhang, T. (eds.) Stochastic Analysis and Applications. Abel Symposia, vol. 2. Springer, Berlin (2007)
Peng, S.: Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation. Stoch. Process. Appl. 118(12), 2223–2253 (2008)
Qi, Y.C., Cheng, P.: On the law of the iterated logarithm for the partial sum in the domain of attraction of stable distribution. Chin. Ann. Math. 17(A), 195–206 (1996)
Quiggin, J.: A theory of anticipated utility. J. Econ. Behav. Organ. 3, 323–343 (1982)
Savage, L.J.: The Foundations of Statistics. Wiley, New York (1954). (2nd ed., Dover, New York, 1972)
Schmeidler, D.: Subjective Probability Without Additivity (temporary title). Foerder Institute for Economic Research, Tel-Aviv University, Tel-Aviv (1982)
Schmeidler, D.: Subjective Probability and Expected Utility Without Additivity. Mimeo, New York (1984)
Vasudeva, R.: Chover’s law of the iterated logarithm and weak convergence. Acta Math. Hung. 44, 215–221 (1984)
Wakker, P.O.: Representations of Choice Situations. Ph.D. Thesis, Tilburg, rewritten as Additive Representation of Preferences (1989), (Ch. VI). Klewer Academic Publishers, Norwell (1986)
Wu, Y., Wang, X.J., Zhang, L.: On the asymptotic approximation of inverse moment under sub-linear expectations. J. Math. Anal. Appl. (2018). https://doi.org/10.1016/j.jmaa.2018.08.010
Wu, Q.Y., Jiang, Y.Y.: Chover-type laws of the k-iterated logarithm for \({\tilde{\rho }}\)-mixing sequences of random variables. J. Math. Anal. Appl. 366(2), 435–443 (2010a)
Wu, Q.Y., Jiang, Y.Y.: A law of the iterated logarithm of partial sums for NA random variables. J. Korean Stat. Soc. 39(2), 199–206 (2010b)
Wu, Q.Y.: Laws of the iterated logarithm for \({\tilde{\rho }}\)-mixing random variables with normal distribution. Acta Math. Appl. Sin. (English Series) 32(2), 385–394 (2016)
Wu, Q.Y., Jiang, Y.Y.: Strong law of large numbers and Chover’s law of the iterated logarithm under sub-linear expectations. J. Math. Anal. Appl. 460(1), 252–270 (2018)
Yaari, M.: Risk Aversion Without Diminishing Marginal Utility (Also revised under the title ‘The Dual Theory of Choice Under Risk’.). Mimeo, New York (1984)
Zhang, L.X.: Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm. Sci. China Math. 59(12), 2503–2526 (2016b)
Zhang, L.X.: Rosenthal’s inequalities for independent and negatively dependent random variables under sub-linear expectations with applications. Sci. China Math. 59(4), 751–768 (2016c)
Zhang, L.X., Lin, J.H.: Marcinkiewicz’s strong law of large numbers for nonlinear expectations. Stat. Probab. Lett. 137, 269–276 (2018)
Contributions
QW conceived of the study and drafted, completed, and approved the final manuscript. JL conceived of the study and completed and read the final manuscript.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The research was supported by the National Natural Science Foundation of China (11661029, 71963008) and the Support Program of the Guangxi China Science Foundation (2018GXNSFAA281011, 2018GXNSFAA294131).
Cite this article
Wu, Q., Lu, J. Another form of Chover’s law of the iterated logarithm under sub-linear expectations. RACSAM 114, 22 (2020). https://doi.org/10.1007/s13398-019-00757-7