1 Introduction

The classical limit theorems play a fundamental role in the development of probability theory and its applications. These theorems have traditionally been considered under additive probabilities and additive expectations. However, the additivity hypothesis is unrealistic in many areas of application. In fact, non-additive probabilities and non-additive expectations have long been useful tools for studying uncertainty: as early as 1961, Ellsberg [10] presented his arguments against necessarily additive probabilities with the help of ‘mind experiments’, Feynman et al. [11] used non-additivity to describe the deviation of elementary particles from mechanical to wave-like behavior, and so on.

The most convincing and well-known axiomatization of additive probability was given by Savage [26]. However compelling Savage’s axioms and results are, they are not immune to attack. Ellsberg [10] gave an example showing that in some cases the additive probability of Savage [26] is not applicable and that a non-additive probability measure is more suitable. In the framework of Anscombe and Aumann [1], Schmeidler [27] and [28] also allowed the probability measure to be non-additive. Schmeidler’s model can, moreover, explain some of the ‘paradoxes’ or counterexamples to the expected utility theory of von Neumann and Morgenstern [19], and these have stimulated many studies of various generalizations of expected utility theory; see Huber and Strassen [15], Quiggin [25], Yaari [36], Gilboa [12], Wakker [30], El Karoui et al. [9], Artzner et al. [2], Marinacci [17], Denis and Martini [8], among others, for results of the non-additive theory. This was the primary motive for developing non-additive probability and non-additive expectation theory. In recent years, the theory and methodology of non-additive expectations have been well developed and have received much attention in application fields. For example, the G-expectation (a sub-linear expectation) was introduced by Peng [20] in a general function space by relaxing the linearity of the classical expectation to sub-additivity and positive homogeneity. As a further development, Peng [21,22,23] constructed the basic framework and basic properties and proved a new central limit theorem under sub-linear expectations. In the framework of Peng [20,21,22,23] and Zhang [37,38,39], Hu and Yang [14] established exponential inequalities, Rosenthal’s inequalities, Kolmogorov’s and Marcinkiewicz’s strong laws of large numbers, and Hartman–Wintner’s law of the iterated logarithm; Hu [13] and Chen [6] studied Kolmogorov’s strong law of large numbers; Wu and Jiang [35] established Chover’s law of the iterated logarithm (LIL); Wu et al. [31] obtained an asymptotic approximation of inverse moments; Li et al. [16] obtained reflected solutions of backward stochastic differential equations driven by G-Brownian motion; and so on.

In the probability space setting, Chover [7] first established the classical Chover LIL for a sequence of independent and identically distributed (i.i.d.) random variables. Further results on Chover’s LIL were obtained by Mikosch [18], Vasudeva [29], and Qi and Cheng [24] for sequences of independent random variables with different distributions, and by Chen [5], Cai [4], and Wu and Jiang [32] for dependent sequences. Several papers have been devoted to the study of another form of Chover’s LIL; we refer the reader to Qi and Cheng [24], Wu and Jiang [33], and Wu [34].

Recently, for a sequence of extended i.i.d. random variables under sub-linear expectation, Wu and Jiang [35] established Chover’s LIL under the following condition:

$$\begin{aligned} {\mathbb {V}}(|X_1|>x)={c(x)l(x)\over x^{\alpha }}\quad \mathrm{for}\, 0<\alpha <2\quad {\mathrm{and}} \quad {\mathrm{any}}\,x>0, \end{aligned}$$
(1.1)

where \(c(x)\ge 0\), \(\lim \nolimits _{x\rightarrow \infty }c(x)=c>0\), \(l(x)>0\) is a slowly varying function, and \({\mathbb {V}}\) is the capacity corresponding to the sub-linear expectation (defined in Sect. 2).

The main purpose of this paper is to obtain another form of Chover’s LIL, extending the LILs obtained by Qi and Cheng [24], Wu and Jiang [33], and others from the traditional probability space to the general sub-linear expectation space. Because sub-linear expectations and capacities are not additive, many powerful tools and common methods for linear expectations and probabilities are no longer valid, so the study of limit theorems under sub-linear expectations becomes much more complex and difficult. We provide a method for studying this subject.

2 Basic settings

The study in this paper uses the framework and notation established by Peng [23] and Wu and Jiang [35]. Let \((\Omega , {\mathcal {F}})\) be a measurable space and let \({\mathcal {H}}\) be a linear space of real functions defined on \((\Omega , {\mathcal {F}})\) such that \(\varphi (X_1,\ldots ,X_n)\in {\mathcal {H}}\) for any \(X_1,\ldots ,X_n\in {\mathcal {H}}\) and \(\varphi \in C_{l,Lip}({\mathbb {R}}_n)\), where \(C_{l,Lip}({\mathbb {R}}_n)\) denotes the linear space of local Lipschitz functions \(\varphi \) satisfying

$$\begin{aligned} |\varphi ({\mathbf {x}})-\varphi ({\mathbf {y}})|\le c(1+|{\mathbf {x}}|^m+|{\mathbf {y}}|^m)|{\mathbf {x}}-{\mathbf {y}}|, \quad \forall {\mathbf {x}}, {\mathbf {y}}\in {\mathbb {R}}_n, \end{aligned}$$

for some \(c>0, m\in {\mathbb {N}}\) depending on \(\varphi \). \({\mathcal {H}}\) is considered as a space of “random variables”; in this case we write \(X\in {\mathcal {H}}\).

Definition 2.1

\(\hat{{\mathbb {E}}}:{\mathcal {H}}\rightarrow [-\infty , +\infty ]\) is called a sub-linear expectation if \(\hat{{\mathbb {E}}}\) satisfies the following properties: for all \(X, Y\in {\mathcal {H}}\), we have

  1. (a)

    Monotonicity: If \(X\ge Y\), then \(\hat{{\mathbb {E}}}X\ge \hat{{\mathbb {E}}}Y\);

  2. (b)

    Constant preserving: \(\hat{{\mathbb {E}}}c=c\);

  3. (c)

    Sub-additivity: \(\hat{{\mathbb {E}}}(X+Y)\le \hat{{\mathbb {E}}}X+\hat{{\mathbb {E}}}Y\);

  4. (d)

    Positive homogeneity: \(\hat{{\mathbb {E}}}(\lambda X)=\lambda \hat{{\mathbb {E}}}X, \lambda \ge 0\).

The triple \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\) is called a sub-linear expectation space, in analogy with the classical probability space \((\Omega , {\mathcal {F}}, P)\); for \(\hat{{\mathbb {E}}}\), the linearity of the classical expectation is replaced by sub-additivity and positive homogeneity.

Given a sub-linear expectation \(\hat{{\mathbb {E}}}\), let us denote the conjugate expectation \(\hat{{\varepsilon }}\) of \(\hat{{\mathbb {E}}}\) by

$$\begin{aligned} \hat{{\varepsilon }} X:=-\hat{{\mathbb {E}}}(-X), \quad \forall X\in {\mathcal {H}}. \end{aligned}$$
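For illustration, a sub-linear expectation can be realized concretely as an upper expectation, i.e. a supremum of linear expectations over a family of probability measures. The following minimal Python sketch (on an assumed three-point sample space, not part of the paper's framework) checks properties (a)–(d) of Definition 2.1 and the conjugate relation numerically.

```python
import numpy as np

# Assumed finite setting: Omega = {0, 1, 2} and a family theta of two measures.
omega = np.array([0.0, 1.0, 2.0])
theta = [np.array([0.5, 0.3, 0.2]),                  # P_1
         np.array([0.2, 0.3, 0.5])]                  # P_2

def E_hat(X):
    """Sub-linear expectation: sup_P E_P[X] over the family theta."""
    return max(float(np.dot(p, X)) for p in theta)

def eps_hat(X):
    """Conjugate expectation: eps(X) = -E_hat(-X) = inf_P E_P[X]."""
    return -E_hat(-X)

X, Y = omega, (omega - 1.0) ** 2

assert E_hat(X + Y) <= E_hat(X) + E_hat(Y) + 1e-12   # (c) sub-additivity
assert abs(E_hat(3.0 * X) - 3.0 * E_hat(X)) < 1e-12  # (d) positive homogeneity
assert abs(E_hat(np.full(3, 7.0)) - 7.0) < 1e-12     # (b) constant preserving
assert eps_hat(X) <= E_hat(X)                        # cf. Proposition 2.3 (i)
print(E_hat(X), eps_hat(X))                          # 1.3 0.7
```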

In a sub-linear expectation space, we replace the concept of probability with the concept of capacity. Let \({\mathcal {G}}\subset {\mathcal {F}}\). A function \(V:{\mathcal {G}}\rightarrow [0, 1]\) is called a capacity if

$$\begin{aligned} V(\emptyset )=0,V(\Omega )=1\quad {\mathrm{and}}\quad V(A)\le V(B) \quad {\mathrm{for}}\,\forall A\subseteq B, A, B\in {\mathcal {G}}. \end{aligned}$$

It is said to be sub-additive if \(V(A\bigcup B)\le V(A)+V(B)\) for all \(A,B\in {\mathcal {G}}\) with \(A\bigcup B\in {\mathcal {G}}\). In the sub-linear expectation space \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\), we define a pair \(({\mathbb {V}}, \nu )\) of capacities by

$$\begin{aligned} {\mathbb {V}}(A):=\inf \{\hat{{\mathbb {E}}}\xi ; I(A)\le \xi , \xi \in {\mathcal {H}}\},\quad \nu (A):=1-{\mathbb {V}}(A^c),\quad \forall A\in {\mathcal {F}}, \end{aligned}$$

where \(A^c\) is the complement set of A. By definition of \({\mathbb {V}}\) and \(\nu \), it is obvious that \({\mathbb {V}}\) is sub-additive, and \(\nu (A)\le {\mathbb {V}}(A)\), for all \(A\in {\mathcal {F}}\).
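Continuing the finite sketch above (an illustrative assumption, not the paper's construction), the capacity pair \(({\mathbb {V}}, \nu )\) reduces to upper and lower probabilities over the same family of measures:

```python
import numpy as np

theta = [np.array([0.5, 0.3, 0.2]), np.array([0.2, 0.3, 0.5])]

def V(ind):
    """Upper probability of the event with indicator vector ind."""
    return max(float(np.dot(p, ind)) for p in theta)

def nu(ind):
    """Lower probability via the conjugacy nu(A) = 1 - V(A^c)."""
    return 1.0 - V(1.0 - ind)

A = np.array([1.0, 1.0, 0.0])     # the event {omega <= 1}
B = np.array([0.0, 1.0, 1.0])     # the event {omega >= 1}
print(V(A), nu(A))                # 0.8 0.5  ->  nu(A) <= V(A)
assert V(np.minimum(A + B, 1.0)) <= V(A) + V(B)      # sub-additivity of V
```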

Definition 2.2

  1. (i)

\(\hat{{\mathbb {E}}}\) is said to be countably sub-additive if it satisfies

    $$\begin{aligned} \hat{{\mathbb {E}}}(X)\le \sum \limits _{n=1}^\infty \hat{{\mathbb {E}}}(X_n),\quad {\mathrm{whenever}}\,X\le \sum \limits _{n=1}^\infty X_n,\,X, X_n\in {\mathcal {H}},\,X\ge 0, X_n\ge 0. \end{aligned}$$

It is said to be continuous if it satisfies

    $$\begin{aligned} \hat{{\mathbb {E}}}(X_n)\uparrow \hat{{\mathbb {E}}}(X),\,{\mathrm{if}}\,0\le X_n\uparrow X, {\mathrm{and}}\, \hat{{\mathbb {E}}}(X_n)\downarrow \hat{{\mathbb {E}}}(X),\,{\mathrm{if}}\,0\le X_n\downarrow X, \,{\mathrm{where}}\,X, X_n\in {\mathcal {H}}. \end{aligned}$$
  2. (ii)

A capacity V is said to be countably sub-additive if

    $$\begin{aligned} V\left( \bigcup \limits _{n=1}^\infty A_n\right) \le \sum \limits _{n=1}^\infty V(A_n),\quad \forall A_n\in {\mathcal {F}}. \end{aligned}$$

It is said to be continuous if it satisfies

    $$\begin{aligned} V(A_n)\uparrow V(A),\,{\mathrm{if}}\,A_n\uparrow A, \,{\mathrm{and}}\, V(A_n)\downarrow V(A),\,{\mathrm{if}}\,A_n\downarrow A, \,{\mathrm{where}}\,A, A_n\in {\mathcal {F}}. \end{aligned}$$

Also, we define the Choquet integrals/expectations \((C_{\mathbb {V}},C_\nu )\) by

$$\begin{aligned} C_V(X):=\int _{0}^\infty V(X>x){\mathrm{d}}x+\int _{-\infty }^0 (V(X>x)-1){\mathrm{d}}x \end{aligned}$$

with V being replaced by \({\mathbb {V}}\) and \(\nu \) respectively.
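For a non-negative random variable on a finite space, \(V(X>x)\) is a step function, so the Choquet integral can be evaluated exactly. A small numerical sketch, reusing the assumed upper probability from above (for \(X\ge 0\) only the first integral in the definition contributes):

```python
import numpy as np

theta = [np.array([0.5, 0.3, 0.2]), np.array([0.2, 0.3, 0.5])]
Xvals = np.array([0.0, 1.0, 2.0])                    # X(omega) = omega

def V_gt(x):
    ind = (Xvals > x).astype(float)
    return max(float(np.dot(p, ind)) for p in theta)

# V(X > x) is a step function with jumps at the values of X, so
# int_0^infty V(X > x) dx is a finite sum of rectangle areas:
jumps = np.sort(np.unique(Xvals))
grid = np.concatenate(([0.0], jumps))
C_V = sum((hi - lo) * V_gt(lo) for lo, hi in zip(grid[:-1], grid[1:]))
print(C_V)                                           # 1*0.8 + 1*0.5 = 1.3
```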

The following Proposition 2.3 collects some basic properties used in this paper. Proposition 2.3 (i)–(iv) follows easily from Definitions 2.1 and 2.2. Proposition 2.3 (v) follows from \(I(|X|\ge x)\le |X|^p/x^p\in {\mathcal {H}}, p>0\), and Proposition 2.3 (iii); Proposition 2.3 (vi) and (vii) were established by Zhang [37] (Lemma 4.1 and Lemma 4.5 (iii)).

Proposition 2.3

(i):

For all \(X, Y\in {\mathcal {H}}\),

$$\begin{aligned} {\hat{\varepsilon }} X\le \hat{{\mathbb {E}}}X,\, \hat{{\mathbb {E}}}(X+c)=\hat{{\mathbb {E}}}X+c,\, |\hat{{\mathbb {E}}}(X-Y)|\le \hat{{\mathbb {E}}}|X-Y| \,{\mathrm{and}}\, \hat{{\mathbb {E}}}(X-Y)\ge \hat{{\mathbb {E}}}X-\hat{{\mathbb {E}}}Y. \end{aligned}$$
(ii):

If \(\hat{{\mathbb {E}}}Y={\hat{\varepsilon }} Y\), then \(\hat{{\mathbb {E}}}(X+aY)=\hat{{\mathbb {E}}}X+a\hat{{\mathbb {E}}}Y\) for any \(a\in {\mathbb {R}}\).

(iii):

If \(f\le I(A)\le g\), \(f,g\in {\mathcal {H}}\), then

$$\begin{aligned} \hat{{\mathbb {E}}}f\le {\mathbb {V}}(A)\le \hat{{\mathbb {E}}}g, \,\quad {\hat{\varepsilon }} f\le \nu (A)\le {\hat{\varepsilon }} g. \end{aligned}$$
(2.1)
(iv):

If \({\mathbb {V}}\) (resp. \(\hat{{\mathbb {E}}})\) is continuous, then \({\mathbb {V}}\) (resp. \(\hat{{\mathbb {E}}})\) is countably sub-additive.

(v):

Markov inequality: for any \(X\in {\mathcal {H}}\),

$$\begin{aligned} {\mathbb {V}}(|X|\ge x)\le \hat{{\mathbb {E}}}(|X|^p)/x^p, \,\nu (|X|\ge x)\le \hat{{\varepsilon }}(|X|^p)/x^p\quad {\mathrm{for}}\,{\mathrm{any}}\, x>0, p>0. \end{aligned}$$
(vi):

Hölder inequality: \(\forall X, Y\in {\mathcal {H}}\), \(p, q>1\) satisfying \(p^{-1}+q^{-1}=1\),

$$\begin{aligned} \hat{{\mathbb {E}}}(|XY|)\le \left( \hat{{\mathbb {E}}}(|X|^p)\right) ^{1/p} \left( \hat{{\mathbb {E}}}(|Y|^q)\right) ^{1/q}. \end{aligned}$$

In particular, the Jensen inequality: \(\forall X\in {\mathcal {H}}\),

$$\begin{aligned} \left( \hat{{\mathbb {E}}}(|X|^r)\right) ^{1/r}\le \left( \hat{{\mathbb {E}}} (|X|^s)\right) ^{1/s}\quad {\mathrm{for}}\quad 0<r\le s. \end{aligned}$$
(vii):

If \(\hat{{\mathbb {E}}}\) is countably sub-additive, then \(\hat{{\mathbb {E}}}(|X|)\le C_{\mathbb {V}}(|X|)\) for any \(X\in {\mathcal {H}}\).

Definition 2.4

(Peng [22, 23], Zhang [37])

  1. (i)

(Identical distribution) Let \(X_1\) and \(X_2\) be two random variables defined on a sub-linear expectation space \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\). They are called identically distributed, denoted by \(X_1{\mathop {=}\limits ^{d}}X_2\), if

    $$\begin{aligned} \hat{{\mathbb {E}}}(\varphi (X_1))=\hat{{\mathbb {E}}}(\varphi (X_2)), \forall \varphi \in C_{l,Lip}({\mathbb {R}}). \end{aligned}$$
  2. (ii)

(Independence) In a sub-linear expectation space \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\), a random vector \({\mathbf {Y}}=(Y_1,\ldots ,Y_n)\), \(Y_i\in {\mathcal {H}}\) is said to be independent of another random vector \({\mathbf {X}}=(X_1,\ldots ,X_m), X_i\in {\mathcal {H}}\) under \(\hat{{\mathbb {E}}}\) if for each test function \(\varphi \in C_{l,Lip}({\mathbb {R}}_m\times {\mathbb {R}}_n)\) we have \(\hat{{\mathbb {E}}}(\varphi ({\mathbf {X}}, {\mathbf {Y}}))=\hat{{\mathbb {E}}}[\hat{{\mathbb {E}}}(\varphi ({\mathbf {x}}, {\mathbf {Y}}))|_{{\mathbf {x}}={\mathbf {X}}}]\), whenever \({\bar{\varphi }}({\mathbf {x}}):=\hat{{\mathbb {E}}}\left( |\varphi ({\mathbf {x}}, {\mathbf {Y}})|\right) <\infty \) for all \({\mathbf {x}}\) and \(\hat{{\mathbb {E}}}\left( |{\bar{\varphi }}({\mathbf {X}})|\right) <\infty \).

From the definition of independence, it is easily seen that if Y is independent of X, with \(X, Y\in {\mathcal {H}}\), \(X>0\) and \(\hat{{\mathbb {E}}}Y>0\), then

    $$\begin{aligned} \hat{{\mathbb {E}}}(XY)=\hat{{\mathbb {E}}}(X)\hat{{\mathbb {E}}}(Y). \end{aligned}$$
  3. (iii)

(I.I.D. Random Variables) A sequence \(\{X_n; n\ge 1\}\) of random variables is said to be independent and identically distributed (i.i.d.) if \(X_{i+1}\) is independent of \((X_1, \ldots , X_i)\) and \(X_i{\mathop {=}\limits ^{d}}X_1\) for each \(i\ge 1\).

It can be shown that if \(\{X_n; n\ge 1\}\) is a sequence of independent random variables and \(f_1(x), f_2(x),\ldots \in C_{l,Lip}({\mathbb {R}})\), then \(\{f_n(X_n); n\ge 1\}\) is also a sequence of independent random variables.

In the following, let \(\{X_n; n\ge 1\}\) be a sequence of random variables in a sub-linear expectation space \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\), and \(S_n=\sum _{i=1}^nX_i\). The symbol c stands for a generic positive constant which may differ from one place to another. Let \(a_n\ll b_n\) denote that there exists a constant \(c>0\) such that \(a_n\le cb_n\) for sufficiently large n, \(a_x\sim b_x\) denotes \(\lim _{x\rightarrow \infty }a_x/b_x=1\), and \(I(\cdot )\) denotes an indicator function.

To prove our results, we need the following three lemmas.

Lemma 2.5

(Borel–Cantelli lemma; Zhang [37], Lemma 3.9)  Let \(\{A_{n}; n\ge 1\}\) be a sequence of events in \({\mathcal {F}}\). Suppose that V is a countably sub-additive capacity. If \(\sum \nolimits _{n=1}^{\infty }V(A_{n})<\infty \), then \(V(A_{n};\mathrm{i.o.})=0\), where \(\{A_{n};\mathrm{i.o.}\}=\bigcap \nolimits _{n=1}^\infty \bigcup \nolimits _{m=n}^\infty A_m\).

Lemma 2.6

(Zhang [38], Theorem 2.1 (b); Zhang [37], Theorem 3.1 (b)) Suppose that \(X_k\) is independent of \((X_{k+1},\ldots ,X_n)\) for each \(k=1,\ldots ,n-1\), and that \(\hat{{\mathbb {E}}}X_{k}\le 0\) for each \(k\le n\). Then

$$\begin{aligned}&{\hat{{\mathbb {E}}}}\left( \left| \max \limits _{k\le n}S_k\right| ^p\right) \le c_p\left\{ \sum \limits _{k=1}^{n}{\hat{{\mathbb {E}}}}|X_k|^p+ \left( \sum \limits _{k=1}^{n}{\hat{{\mathbb {E}}}}X^2_k\right) ^{p/2}\right\} \,{\mathrm{for}}\,p\ge 2. \end{aligned}$$
(2.2)
$$\begin{aligned}&{\mathbb {V}}(S_n\ge x)\le c\frac{\sum \nolimits _{k=1}^{n}{\hat{{\mathbb {E}}}}X_k^2}{x^2}, \quad {\mathrm{for}}\, x>0. \end{aligned}$$
(2.3)

Here \(c_p\) is a positive constant depending only on p.

Definition 2.7

\(l(x)>0\) is said to be a slowly varying function at infinity if

$$\begin{aligned} \lim \limits _{t\rightarrow \infty }{l(tx)\over l(t)}=1 \quad {\mathrm{for}}\, {\mathrm{any}}\,x>0. \end{aligned}$$

\(f(x)>0\) is said to be a regularly varying function with index \(\rho \) at infinity, written \(f\in {\mathbf {R}}_\rho \), if

$$\begin{aligned} \lim \limits _{t\rightarrow \infty }{f(tx)\over f(t)}=x^\rho \quad {\mathrm{for}}\,{\mathrm{any}}\,x>0. \end{aligned}$$

\({\mathbf {R}}_0\) is the class of slowly varying functions at infinity.
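As a quick numerical illustration of Definition 2.7 (with assumed choices of l and f): \(l(x)=\ln x\) is slowly varying, and \(f(x)=x^{3/2}\ln x\) is regularly varying with index \(\rho =3/2\), so \(f(tx)/f(t)\rightarrow x^{3/2}\approx 5.196\) at \(x=3\).

```python
import math

l = math.log
f = lambda x: x ** 1.5 * math.log(x)
x = 3.0
for t in (1e2, 1e4, 1e8, 1e16):
    # l(tx)/l(t) -> 1 and f(tx)/f(t) -> 3**1.5 as t grows (slowly)
    print(f"t={t:g}:  l-ratio={l(t*x)/l(t):.4f}  f-ratio={f(t*x)/f(t):.4f}")
```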

From Bingham et al. [3] ((i) corresponds to Theorem 1.3.1, p. 12; (ii)–(iv) to Proposition 1.3.6, p. 16; (v) to Theorem 1.5.4; and (vi) to Proposition 1.5.7 (ii)), we have

Proposition 2.8

  1. (i)

    l(x) is a slowly varying function at infinity if and only if

    $$\begin{aligned} l(x)=c(x)\exp \left\{ \int _{a}^{x}{b(u)\over u}{\mathrm{d}}u\right\} ,\quad x\ge a, \end{aligned}$$

    for some \(a>0\), where \(c(x)\ge 0\), \( \lim \nolimits _{x\rightarrow \infty }c(x)=c>0, \) and \( \lim \nolimits _{x\rightarrow \infty }b(x)=0.\) Furthermore, f(x) is a regularly varying function with index \(\rho \) at infinity if and only if

    $$\begin{aligned} f(x)=x^\rho l(x), \end{aligned}$$

    where l(x) is a slowly varying function.

  2. (ii)

    If l(x) varies slowly, then \((\ln l(x))/\ln x\rightarrow 0\) as \(x\rightarrow \infty \).

  3. (iii)

    If l(x) varies slowly, so does \((l(x))^\alpha \) for every \(\alpha \in {\mathbb {R}}\). If \(l_1, l_2\) vary slowly, so do \(l_1(x)l_2(x)\), \(l_1(x)+l_2(x)\).

  4. (iv)

    If l(x) varies slowly and \(\alpha >0\), then

    $$\begin{aligned} x^\alpha l(x)\rightarrow \infty , \quad x^{-\alpha } l(x)\rightarrow 0 \quad (x\rightarrow \infty ). \end{aligned}$$
  5. (v)

l(x) is a slowly varying function at infinity if and only if, for every \(\alpha >0\), there exist a non-decreasing function \(\phi \) and a non-increasing function \(\psi \) with

    $$\begin{aligned} x^\alpha l(x)\sim \phi (x), \quad x^{-\alpha } l(x)\sim \psi (x) \quad (x\rightarrow \infty ). \end{aligned}$$
  6. (vi)

If \(f_i\in {\mathbf {R}}_{\rho _i}\) \((i=1, 2)\) and \(f_2(x)\rightarrow \infty \) as \(x\rightarrow \infty \), then \(f_1(f_2(x))\in {\mathbf {R}}_{\rho _1\rho _2}\).

Lemma 2.9

(Qi and Cheng [24]) Suppose that l(x) is a slowly varying function at infinity and h(x) is a positive function with \(\lim \nolimits _{x\rightarrow \infty }h(x)=\infty \). Then, for any given \(\delta >0\), there exists an \(x_{0}>0\) such that

$$\begin{aligned} h^{-\delta }(x)\le \inf \limits _{x\le y \le xl(x)}{l(y)\over l(x)}\le \sup \limits _{x\le y \le xl(x)}{l(y)\over l(x)}\le h^{\delta }(x)\quad {\mathrm{for\, all}}\, x>x_{0}. \end{aligned}$$

3 Chover’s law of the iterated logarithm

In a sub-linear expectation space, the almost sure convergence of sequences of random variables differs from that in the traditional probability space. We now define almost sure convergence of a sequence of random variables in a sub-linear expectation space.

Definition 3.1

A sequence of random variables \(\{X_{n}; n\ge 1\}\) in \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\) is said to converge to X almost surely V, denoted by \(X_{n}\rightarrow X\) a.s. V as \(n\rightarrow \infty \), if \(V(X_{n}\nrightarrow X)=0\).

More generally, a property is said to hold a.s. V if the set on which it fails has V-capacity zero.

Here V can be replaced by \({\mathbb {V}}\) and \(\nu \), respectively. By \(\nu (A)\le {\mathbb {V}}(A)\) and \(\nu (A)+{\mathbb {V}}(A^c)=1\) for any \(A\in {\mathcal {F}}\), it is obvious that \(X_{n}\rightarrow X\) a.s. \({\mathbb {V}}\) implies \(X_{n}\rightarrow X\) a.s. \(\nu \), but \(X_{n}\rightarrow X\) a.s. \(\nu \) does not imply \(X_{n}\rightarrow X\) a.s. \({\mathbb {V}}\). Therefore, we cannot define \(X_{n}\rightarrow X\) a.s. \({\mathbb {V}}\) by requiring \({\mathbb {V}}(X_{n}\rightarrow X)=1\).

We give an example in which \(X_n\rightarrow X\) a.s. \(\nu \) but not a.s. \({\mathbb {V}}\). To this end, we first recall the notion of the G-normal distribution introduced by Peng [23].

Definition 3.2

(G-normal random variable) For \(0\le {\underline{\sigma }}^2\le {\bar{\sigma }}^2<\infty \), a random variable \(\xi \) in a sub-linear expectation space \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\) is called a G-normal \({\mathcal {N}}(0, [{\underline{\sigma }}^2, {\bar{\sigma }}^2])\) distributed random variable, where \({\bar{\sigma }}^2=\hat{{\mathbb {E}}}\xi ^2, {\underline{\sigma }}^2=\hat{{\varepsilon }}\xi ^2\) (write \(\xi \sim {\mathcal {N}}(0, [{\underline{\sigma }}^2, {\bar{\sigma }}^2])\) under \(\hat{{\mathbb {E}}}\)), if for any \(\varphi \in C_{l,Lip}({\mathbb {R}})\), the function \(u(t, x)=\hat{{\mathbb {E}}}\left[ \varphi (x+\sqrt{t}\xi )\right] \) (\(t\ge 0, x\in {\mathbb {R}}\)) is the unique viscosity solution of the following heat equation:

$$\begin{aligned} \partial _tu-G(\partial ^2_{xx}u)=0,\quad u(0, x)=\varphi (x), \end{aligned}$$

where \(G(\alpha )=({\bar{\sigma }}^2\alpha ^+-{\underline{\sigma }}^2\alpha ^-)/2.\)

In particular, if \({\underline{\sigma }}={\bar{\sigma }}:=\sigma \), then \({\mathcal {N}}(0, [{\underline{\sigma }}^2, {\bar{\sigma }}^2])={\mathcal {N}}(0, \sigma ^2)\) is the usual normal distribution.
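Numerically, a known consequence of Peng's construction is convenient: for convex \(\varphi \), \(\hat{{\mathbb {E}}}[\varphi (\xi )]\) coincides with the classical normal expectation with variance \({\bar{\sigma }}^2\), and for concave \(\varphi \) with variance \({\underline{\sigma }}^2\). A Monte Carlo sketch with assumed values of \({\underline{\sigma }}\) and \({\bar{\sigma }}\):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(200_000)       # classical N(0,1) samples
sig_lo, sig_hi = 0.5, 1.0              # assumed underline-sigma, bar-sigma

phi_convex = lambda x: x ** 2          # convex:  E_hat[phi] uses bar-sigma
phi_concave = lambda x: -np.abs(x)     # concave: E_hat[phi] uses underline-sigma

print(np.mean(phi_convex(sig_hi * z)))   # ~ 1.000 = bar-sigma^2 = E_hat[xi^2]
print(np.mean(phi_concave(sig_lo * z)))  # ~ -0.399 = -sig_lo*sqrt(2/pi)
```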

Example 3.3

Let \(X_n\) be independent G-normal random variables with \(X_n\sim {\mathcal {N}}(0, [1/4^{2n}, 1])\) in a sub-linear expectation space \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\), where \(\hat{{\mathbb {E}}}\) and \({\mathbb {V}}\) are continuous. Then \(X_n\rightarrow 0\) a.s. \(\nu \), but not a.s. \({\mathbb {V}}\).

Proof

Take \(\mu =1/2\) and let g be the function defined by (3.6) below. By (2.1), the independence of the \(X_n\), and the continuity of \({\mathbb {V}}\),

$$\begin{aligned} {\mathbb {V}}\left( \bigcap \limits _{m=n}^\infty (|X_m| \le 1/2^m)\right)&\ge \hat{{\mathbb {E}}}\left( \prod \limits _{m=n}^\infty g(2^m|X_m|)\right) =\prod \limits _{m=n}^\infty \hat{{\mathbb {E}}}\left( g(2^m|X_m|)\right) \\&\ge \prod \limits _{m=n}^\infty {\mathbb {V}}(|X_m|\le 1/2^{m+1}). \end{aligned}$$

Hence, combining this with the Markov inequality \(\nu (|X_m|>1/2^{m+1})\le 2^{2m+2}\hat{{\varepsilon }}X_m^2=2^{2m+2}4^{-2m}=1/4^{m-1}\), we get

$$\begin{aligned} \nu (X_n\nrightarrow 0)\le & {} \nu \left( \bigcap \limits _{n=1}^\infty \bigcup \limits _{m=n}^\infty (|X_m|> 1/2^m)\right) \\\le & {} \nu \left( \bigcup \limits _{m=n}^\infty (|X_m|> 1/2^m)\right) =1-{\mathbb {V}}\left( \bigcap \limits _{m=n}^\infty (|X_m|\le 1/2^m)\right) \\\le & {} 1-\prod \limits _{m=n}^\infty {\mathbb {V}}(|X_m|\le 1/2^{m+1})\\= & {} 1-\prod \limits _{m=n}^\infty \left( 1-\nu (|X_m|> 1/2^{m+1})\right) \\\le & {} 1-\prod \limits _{m=n}^\infty (1-1/4^{m-1})\le \sum \limits _{m=n}^\infty \frac{1}{4^{m-1}}\\\rightarrow & {} 0, \,{\mathrm{as}} \,n\rightarrow \infty . \end{aligned}$$

That is, \(X_n\rightarrow 0\) a.s. \(\nu \).

On the other hand,

$$\begin{aligned} {\mathbb {V}}\left( \bigcup \limits _{m=n}^\infty (|X_m|\ge 1/2)\right) \ge {\mathbb {V}}(|X_m|\ge 1/2)\ge {\mathbb {V}}(|{\mathcal {N}}(0, 1)|\ge 1/2)=C_0>0. \end{aligned}$$

Combining this with the continuity of \({\mathbb {V}}\) implies

$$\begin{aligned} {\mathbb {V}}(X_n\nrightarrow 0)\ge {\mathbb {V}}\left( \bigcap \limits _{n=1}^\infty \bigcup \limits _{m=n}^\infty (|X_m|> 1/2)\right) =\lim \limits _{n\rightarrow \infty }{\mathbb {V}}\left( \bigcup \limits _{m=n}^\infty (|X_m|\ge 1/2)\right) \ge C_0>0. \end{aligned}$$

That is, \(X_n\rightarrow 0\) a.s. \({\mathbb {V}}\) does not hold.

In the traditional probability space, let \(\{X_n; n\ge 1\}\) be a sequence of i.i.d. random variables with a nondegenerate distribution function F satisfying \(1-F(x)={c_{1}(x)l(x)\over x^{\alpha }}\) and \(F(-x)={c_{2}(x)l(x)\over x^{\alpha }}\) for \(0<\alpha <2\) as \(x\rightarrow \infty \), where for \(x>0\), \(c_{i}(x)\ge 0\), \(\lim \nolimits _{x\rightarrow \infty }c_{i}(x)=c_{i}\), \(i=1,2\), \(c_{1}+c_{2}>0\), and l(x) is a slowly varying function at infinity. Chover [7] established the following classical Chover LIL:

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\left| {S_{n}\over n^{1/\alpha }}\right| ^{{1\over \ln \ln n}}={\mathrm{e}}^{{1\over \alpha }}\quad {\mathrm{a.s.}} \end{aligned}$$

Some Chover’s LIL type results obtained by Mikosch [18], Qi and Cheng [24], Vasudeva [29], Chen [5], Wu and Jiang [32]. Qi and Cheng [24], Wu and Jiang [33], and Wu [34] also studied and obtained another form of Chover’s LIL. They proved separately that there exist some constants \(A_{n}\in {\mathbb {R}}\), \(B_{n}>0\) such that

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\left| {S_{n}-A_{n}\over B_{n}}\right| ^{{1\over \ln \ln n}}\le {\mathrm{e}}^{{1\over 2}}\quad {\mathrm{a.s.}} \end{aligned}$$
(3.1)

for sequences of independent and dependent random variables.

Recently, under sub-linear expectation, for a sequence of extended i.i.d. random variables satisfying (1.1), Wu and Jiang [35] established the following Chover’s LIL:

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\left| {S_{n}-c_{n}\over B(n)}\right| ^{{1\over \lg _2 n}}={\mathrm{e}}^{{1\over \alpha }}\quad {\mathrm{a.s.}}\,{\mathbb {V}}, \end{aligned}$$
(3.2)

where \(c_{n}=0\) for \(0<\alpha <1\), \(c_{n}=n\hat{{\mathbb {E}}}X_{1}\) for \(1\le \alpha <2\), \(\lg _0x:=x\) and \(\lg _j x :=\ln \{\max ({\mathrm{e}}, \lg _{j-1}x)\}\) for \(j\ge 1\), and \(B(x)=\inf \{y; {\mathbb {V}}(|X_{1}|\ge y)\le 1/x\}\) for \(x>0\).
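A small sketch of this notation (the Pareto-type tail below is an assumed example used only to make B(x) explicit; it is not the paper's distribution):

```python
import math

def lg(j, x):
    """Truncated iterated logarithm: lg_0(x)=x, lg_j(x)=ln(max(e, lg_{j-1}(x)))."""
    for _ in range(j):
        x = math.log(max(math.e, x))
    return x

def B(x, alpha=1.5):
    """B(x) = inf{y : V(|X_1| >= y) <= 1/x} for the assumed tail min(1, y**-alpha)."""
    return x ** (1.0 / alpha)

print(lg(2, 1e6), B(100.0))   # ln(ln(1e6)) ~ 2.63 and 100**(2/3) ~ 21.5
```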

In this paper we study and obtain another form of Chover’s LIL, different from (3.2), extending the corresponding results obtained by Qi and Cheng [24], Wu and Jiang [33], Wu [34], and others from the traditional probability space to the sub-linear expectation space. We will prove that a Chover LIL similar to (3.1) still holds in the sub-linear expectation space.

Throughout the rest of this paper, we assume that \(\{X_{n}; n\ge 1\}\) is a sequence of identically distributed random variables in \((\Omega , {\mathcal {H}}, \hat{{\mathbb {E}}})\) satisfying the following condition:

$$\begin{aligned} \lim \limits _{x\rightarrow \infty }\frac{x^2{\mathbb {V}}(|X_1|>x)}{C_{{\mathbb {V}}}(X_1^2I(|X_1|\le x))}=0. \end{aligned}$$
(3.3)

It should be noted that identical distribution is defined under \(\hat{{\mathbb {E}}}\), not under \({\mathbb {V}}\) (see Definition 2.4). That the \(X_i\) are identically distributed means that \(\hat{{\mathbb {E}}}(f(X_i))=\hat{{\mathbb {E}}}(f(X_1))\) for \(f(\cdot )\in C_{l,Lip}({\mathbb {R}})\), but it does not imply \({\mathbb {V}}(f(X_i)\in A)={\mathbb {V}}(f(X_1)\in A)\). Therefore, it is necessary to prove that (3.3) is equivalent to the following: for any natural number \(k\ge 1\),

$$\begin{aligned} \lim \limits _{x\rightarrow \infty }\frac{x^2{\mathbb {V}}(|X_k|>x)}{C_{{\mathbb {V}}}(X_k^2I(|X_k|\le x))}=0. \end{aligned}$$
(3.4)

We only need to prove that (3.3) implies (3.4). Firstly, by the definition of \(C_{\mathbb {V}}\),

$$\begin{aligned} C_{\mathbb {V}}(X_{k}^{2}I(|X_{k}|\le x))= & {} \int _{0}^{\infty }{\mathbb {V}}(X^2_{k}I(|X_{k}|\le x)> y){\mathrm{d}}y\nonumber \\= & {} \int _{0}^{\infty }2t{\mathbb {V}}(|X_{k}|I(|X_{k}|\le x)> t){\mathrm{d}}t\quad ({\mathrm{let}}\,y=t^2)\nonumber \\= & {} \int _{0}^{x}2t{\mathbb {V}}(t<|X_{k}|\le x){\mathrm{d}}t. \end{aligned}$$
(3.5)

In order to prove (3.4), we need to convert \({\mathbb {V}}\) to \(\hat{{\mathbb {E}}}\) by using (2.1), so we construct an auxiliary function \(g(x)\in C_{l,Lip}({\mathbb {R}})\).

For \(0<\mu <1\), let \(g(x)\in C_{l,Lip}({\mathbb {R}})\) be a non-increasing function such that

$$\begin{aligned} 0\le g(x)\le 1\;\;{\mathrm{for}}\; {\mathrm{all}}\;\;x \;\; {\mathrm{and}}\;\; g(x)=1 \;\; {\mathrm{if}}\;\; x\le \mu ,\;\; g(x)=0\;\; {\mathrm{if}}\;\; x>1. \end{aligned}$$
(3.6)

Then

$$\begin{aligned}&I(|x|\le \mu )\le g(|x|)\le I(|x|\le 1),\,I(|x|>c)\le 1-g\left( \frac{|x|}{c}\right) \le I(|x|>\mu c)\, \nonumber \\&\quad {\mathrm{for}}\,{\mathrm{any}}\,c>0. \end{aligned}$$
(3.7)

Thus, combining this with (2.1) and \(X_k{\mathop {=}\limits ^{d}}X_1\), we obtain for \(x>0\)

$$\begin{aligned} {\mathbb {V}}(|X_k|>x)\le \hat{{\mathbb {E}}}\left( 1-g \left( \frac{|X_k|}{x}\right) \right) =\hat{{\mathbb {E}}}\left( 1-g \left( \frac{|X_1|}{x}\right) \right) \le {\mathbb {V}}(|X_1|>\mu x) \end{aligned}$$

and for any \(0<a<b\),

$$\begin{aligned} {\mathbb {V}}(a<|X_k|\le b)\ge & {} \hat{{\mathbb {E}}}\left( g\left( \frac{|X_k|}{b}\right) -g\left( \frac{\mu |X_k|}{a}\right) \right) \\= & {} \hat{{\mathbb {E}}}\left( g\left( \frac{|X_1|}{b}\right) -g\left( \frac{\mu |X_1|}{a}\right) \right) \\\ge & {} {\mathbb {V}}\left( \frac{a}{\mu }<|X_1|\le \mu b\right) . \end{aligned}$$

Hence, combining this with (3.5),

$$\begin{aligned} C_{\mathbb {V}}(X_{k}^{2}I(|X_{k}|\le x))\ge & {} \int _{0}^{x}2t{\mathbb {V}}(t/\mu<|X_{1}|\le \mu x){\mathrm{d}}t\\= & {} \int _{0}^{x/\mu }2\mu ^2 z{\mathbb {V}}(z<|X_{1}|\le \mu x) {\mathrm{d}}z\quad ({\mathrm{let}}\,z=t/\mu )\\\ge & {} \mu ^2\int _{0}^{\mu x}2 z{\mathbb {V}}(z<|X_{1}|\le \mu x){\mathrm{d}}z\\= & {} \mu ^2C_{\mathbb {V}}(X_{1}^{2}I(|X_{1}|\le \mu x)). \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{x^2{\mathbb {V}}(|X_k|>x)}{C_{{\mathbb {V}}}(X_k^2I(|X_k|\le x))}\le \frac{1}{\mu ^4} \frac{(\mu x)^2{\mathbb {V}}(|X_1|>\mu x)}{C_{{\mathbb {V}}}(X_1^2I(|X_1|\le \mu x))}\rightarrow 0\quad {\mathrm{as}}\,x\rightarrow \infty \end{aligned}$$

from (3.3). That is, (3.4) is established.
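One admissible concrete choice of the truncation function g in (3.6) is a piecewise-linear ramp, which is Lipschitz and hence lies in \(C_{l,Lip}({\mathbb {R}})\); the argument above uses only the properties listed in (3.6) and (3.7). A sketch with a numerical check of the sandwich (3.7):

```python
import numpy as np

def g(x, mu=0.5):
    """Non-increasing, 0 <= g <= 1, g = 1 on (-inf, mu], g = 0 on [1, inf)."""
    return np.clip((1.0 - x) / (1.0 - mu), 0.0, 1.0)

# Check of (3.7) with c = 1: I(|x| <= mu) <= g(|x|) <= I(|x| <= 1)
xs = np.linspace(0.0, 2.0, 2001)
assert np.all((xs <= 0.5).astype(float) <= g(xs) + 1e-12)
assert np.all(g(xs) <= (xs <= 1.0).astype(float) + 1e-12)
```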

Now, we show that (3.3) implies \(C_{{\mathbb {V}}}(|X_1|^p)<\infty \) for all \(0<p<2\).

Set

$$\begin{aligned} G(x):={\mathbb {V}}(|X_{1}|>x), \quad K(x):=2x^{-2}\int ^x_0tG(t){\mathrm{d}}t,\\ H(x):=2\int ^x_0tG(t){\mathrm{d}}t=x^{2}K(x), \quad L(x):=\frac{G(x)}{K(x)}, x>0, \end{aligned}$$

and define

$$\begin{aligned} D(x):=\inf \{y; 1/K(y)\ge x\}\quad {\mathrm{for}}\quad x>0, \,{\mathrm{and}}\,a_{j}=D(j\ln j(\ln \ln j)^{2}). \end{aligned}$$
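For illustration, these quantities can be computed for a concrete tail. The tail \(G(t)=\min (1, t^{-2})\) below is an assumed example satisfying (3.3), not the paper's distribution; for it \(H(x)=1+2\ln x\) for \(x>1\) in closed form, \(L(x)=1/(1+2\ln x)\rightarrow 0\), and D(x) can be found by bisection since 1/K is increasing:

```python
import math

def G(t):  return min(1.0, t ** -2.0)                 # assumed upper tail
def H(x):  return x * x if x <= 1.0 else 1.0 + 2.0 * math.log(x)
def K(x):  return H(x) / (x * x)
def L(x):  return G(x) / K(x)                         # -> 0, so (3.3) holds

def D(x, lo=1.0, hi=1e9):
    """D(x) = inf{y : 1/K(y) >= x}; geometric bisection on the increasing 1/K."""
    for _ in range(80):
        mid = math.sqrt(lo * hi)
        lo, hi = (mid, hi) if 1.0 / K(mid) < x else (lo, mid)
    return hi

for x in (1e2, 1e4, 1e6):
    print(f"D({x:g}) = {D(x):10.2f},   L(D(x)) = {L(D(x)):.4f}")
```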

By (3.5),

$$\begin{aligned} C_{\mathbb {V}}(X_{1}^{2}I(|X_{1}|\le x))=\int _{0}^{x}2t{\mathbb {V}}(t<|X_{1}|\le x){\mathrm{d}}t\le 2\int _{0}^{x}tG(t){\mathrm{d}}t=H(x). \end{aligned}$$

On the other hand, by sub-additivity of \({\mathbb {V}}\), we have \({\mathbb {V}}(t<|X_{1}|\le x)\ge {\mathbb {V}}(|X_{1}|>t)-{\mathbb {V}}(|X_{1}|>x)\), so

$$\begin{aligned} C_{\mathbb {V}}(X_{1}^{2}I(|X_{1}|\le x))\ge \int _{0}^{x}2t\left( {\mathbb {V}}(|X_{1}|>t)-{\mathbb {V}}(|X_{1}|>x)\right) {\mathrm{d}}t= H(x)-x^2{\mathbb {V}}(|X_{1}|>x). \end{aligned}$$

Hence, we get

$$\begin{aligned} H(x)-x^{2}{\mathbb {V}}(|X_{1}|>x)\le C_{\mathbb {V}}(X_{1}^{2}I(|X_{1}|\le x))\le H(x). \end{aligned}$$
(3.8)

Therefore, when (3.3) holds, we get \(C_{\mathbb {V}}(X_{1}^{2}I(|X_{1}|\le x))\sim H(x)\), and

$$\begin{aligned} L(x)=\frac{G(x)}{K(x)}=\frac{x^{2}{\mathbb {V}}(|X_{1}|>x)}{H(x)}\sim \frac{x^{2}{\mathbb {V}}(|X_{1}|>x)}{C_{\mathbb {V}}(X_{1}^{2}I(|X_{1}|\le x))}\rightarrow 0. \end{aligned}$$
(3.9)

Since \(H'(x)=2xG(x)={2L(x)\over x}H(x)\), we have \(\left( {H(x)\over \exp (\int _{1}^{x}{2L(t)\over t}\mathrm{d}t)}\right) ^{'}\equiv 0,\) and hence

$$\begin{aligned} H(x)=H(1)\exp \left( \int _{1}^{x}{2L(t)\over t}\mathrm{d}t\right) . \end{aligned}$$

Thus, from (3.9) and Proposition 2.8 (i), (iii), H(x) is a slowly varying function at infinity and \(1/K(x)= x^2/H(x)\) is a regularly varying function with index 2 at infinity. Hence, from Bingham et al. [3] (Theorem 1.5.12, p. 28), D(x) is a regularly varying function with index 1/2 at infinity and \(1/K(D(x))\sim x\) as \(x\rightarrow \infty \); so, combining Proposition 2.8 (iii), (vi),

$$\begin{aligned} D(x)=x^{1/2}l_1(x),a_n=n^{1/2}l_2(n), \end{aligned}$$
(3.10)

where \(l_1(\cdot )\) and \(l_2(\cdot )\) are slowly varying functions, and

$$\begin{aligned} \lim \limits _{x\rightarrow \infty }xK(D(x))=1. \end{aligned}$$
(3.11)

By (3.10) and Proposition 2.8 (ii)

$$\begin{aligned} \lim \limits _{x\rightarrow \infty }{\ln (D(x))\over \ln x}={1\over 2}. \end{aligned}$$
(3.12)

By (3.9), there is an \(n_0\) such that \(G(c a_n)\le K(c a_n)\) whenever \(n\ge n_0\); hence, using (3.11) and \(K(c a_n)\sim c^{-2}K(a_n)\), we have, for any \(c>0\),

$$\begin{aligned} \sum \limits _{n=1}^{\infty }{\mathbb {V}}(|X_1|>c a_n)&= \sum \limits _{n=1}^{\infty }G(c a_n)\ll \sum \limits _{n=n_0}^{\infty }K(c a_n)\sim c^{-2}\sum \limits _{n=n_0}^{\infty }K(a_n)\nonumber \\&=c^{-2}\sum \limits _{n=n_0}^{\infty }K(D(n\ln n(\ln \ln n)^2))\nonumber \\&\sim c^{-2}\sum \limits _{n=n_0}^{\infty }\frac{1}{n\ln n(\ln \ln n)^2}\nonumber \\&<\infty . \end{aligned}$$
(3.13)

For any \(0<p<2\), by (3.10) and Proposition 2.8 (iv), we have \(a_n\le n^{1/p}\) for sufficiently large n; thus,

$$\begin{aligned} C_{\mathbb {V}}(|X_1|^p)= & {} \int \limits _0^\infty {\mathbb {V}}(|X_1|^p>x){\mathrm{d}}x=\sum \limits _{n=0}^{\infty }\int \limits _n^{n+1}{\mathbb {V}}(|X_1|>x^{1/p}){\mathrm{d}}x\\\le & {} \sum \limits _{n=0}^{\infty }\int \limits _n^{n+1}{\mathbb {V}}(|X_1|>n^{1/p}){\mathrm{d}}x=\sum \limits _{n=0}^{\infty }{\mathbb {V}}(|X_1|>n^{1/p}) \ll \sum \limits _{n=0}^{\infty }{\mathbb {V}}(|X_1|>a_n)<\infty , \end{aligned}$$

from (3.13).

Further, if \(\hat{{\mathbb {E}}}\) is countably sub-additive, then by Proposition 2.3 (vii), \(\hat{{\mathbb {E}}}(|X_1|^p)\le C_{\mathbb {V}}(|X_1|^p)<\infty \) for any \(0<p<2\). In particular, \(\hat{{\mathbb {E}}}(|X_1|)<\infty \).
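Continuing the assumed tail \(G(t)=\min (1,t^{-2})\) from the sketch above, the finiteness of the Choquet moments for \(p<2\) can be seen directly: \(C_{\mathbb {V}}(|X_1|^p)=\int _0^\infty G(x^{1/p}){\mathrm{d}}x=1+p/(2-p)\). A quadrature check:

```python
import math

def choquet_moment(p, s_max=math.log(1e8), n=100_000):
    """C_V(|X_1|^p) for G(t)=min(1,t^-2): the piece over [0,1] equals 1
    exactly; the tail int_1^inf x^{-2/p} dx is computed by midpoint rule
    after the substitution x = e^s (truncated at x = 1e8)."""
    h = s_max / n
    tail = sum(math.exp((1.0 - 2.0 / p) * (i + 0.5) * h) * h for i in range(n))
    return 1.0 + tail

for p in (0.5, 1.0, 1.5):
    print(p, round(choquet_moment(p), 4), 1.0 + p / (2.0 - p))  # numeric vs exact
```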

With the above preparation, we can describe our theorems as follows.

Theorem 3.4

Assume that \(\{X_{n};n\ge 1\}\) is a sequence of i.i.d. random variables with \(\hat{{\mathbb {E}}}X_k={\hat{\varepsilon }}X_k\), that \(\hat{{\mathbb {E}}}\) and \({\mathbb {V}}\) are countably sub-additive, and that (3.3) holds. Then

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\left( {|{\tilde{S}}_{n}|\over D(n)}\right) ^{\displaystyle {1\over \ln \ln n}}\le \mathrm{e}^{{1\over 2}}\quad {\mathrm{a.s.}}\quad {\mathbb {V}}, \end{aligned}$$
(3.14)

where \({\tilde{S}}_n=\sum \nolimits _{j=1}^{n}(X_j-\hat{{\mathbb {E}}}(X_j))\).

It is natural to ask whether there exists a distribution satisfying (3.3) such that a matching lower bound in (3.14) holds. Our answer is positive under \(\nu \): under some extra conditions on L(x), we can identify the exact constant for which equality in (3.14) holds a.s. \(\nu \).

Theorem 3.5

Assume that the conditions of Theorem 3.4 hold and that \(\delta \in [0, 1)\) is fixed. Suppose that for any given \(\varepsilon >0\), there exists an \(x_1>0\) such that for all \(x\ge x_1\),

$$\begin{aligned} (\ln x)^{-\varepsilon -\delta }\le L(x)\le (\ln x)^{\varepsilon -\delta }. \end{aligned}$$
(3.15)

Then

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\left( {|{\tilde{S}}_{n}|\over D(n )}\right) ^{\displaystyle {1\over \ln \ln n}}\le {\mathrm{e}}^{{1-\delta \over 2}}\quad {\mathrm{a.s.}}\,{\mathbb {V}}, \end{aligned}$$
(3.16)

Further, if \({\mathbb {V}}\) is continuous, then

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\left( {|{\tilde{S}}_{n}|\over D(n )}\right) ^{\displaystyle {1\over \ln \ln n}}\ge {\mathrm{e}}^{{1-\delta \over 2}}\quad {\mathrm{a.s.}}\,\nu , \end{aligned}$$
(3.17)

and hence

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\left( {|{\tilde{S}}_{n}|\over D(n )}\right) ^{\displaystyle {1\over \ln \ln n}}={\mathrm{e}}^{{1-\delta \over 2}}\quad {\mathrm{a.s.}}\,\nu \end{aligned}$$

where \({\tilde{S}}_n\) is as in Theorem 3.4.

Remark 3.6

Theorems 3.4 and 3.5 are Chover LILs under sub-linear expectations. They extend the Chover LILs obtained by Qi and Cheng [24], Wu and Jiang [33], Wu [34], and others from the traditional probability space to the sub-linear expectation space.

Remark 3.7

Under condition (1.1), Wu and Jiang [35] obtained Chover’s LIL (3.2). In this paper, under condition (3.3), we obtain another form of Chover’s LIL, namely (3.14), (3.16), and (3.17). The results of Wu and Jiang [35] and of this paper do not overlap, and neither contains the other.

Remark 3.8

It is important to note that independence of a sequence \(\{X_{n}; n\ge 1\}\) under \(\hat{{\mathbb {E}}}\) does not imply independence under \({\mathbb {V}}\). Consequently, there is no “divergence part” of the Borel–Cantelli lemma, and the standard Borel–Cantelli argument cannot be used to show (3.17). Proving (3.17) is therefore quite difficult, and we do not yet know whether (3.17) holds almost surely \({\mathbb {V}}\).

Proof of Theorem 3.4

Suppose that \(\{X_{n}; n\ge 1\}\) is a sequence of independent random variables. It is important to note that independence under \(\hat{{\mathbb {E}}}\) is defined through test functions \(\varphi \in C_{l,Lip}\) (see Definition 2.4), and the indicator function \(I(|x|\le a)\) does not belong to \(C_{l,Lip}\). Therefore, to ensure that the sequence of truncated random variables remains independent, we cannot truncate \(X_i\) with indicator functions; instead, we replace the indicator function by functions \(g(\cdot )\in C_{l,Lip}\). Let \(g(\cdot )\) be defined by (3.6).

For \(a_j=D(j\ln j(\ln \ln j)^2)\), let

$$\begin{aligned} Y_{j}=X_jg\left( \frac{|X_j|}{a_j}\right) ,\quad \widetilde{Y_{j}}=Y_{j}-\hat{{\mathbb {E}}}Y_{j}, \end{aligned}$$

and

$$\begin{aligned} {\widetilde{T}}_{n}=\sum \limits _{j=1}^{n}\widetilde{Y_{j}},\quad B_{n} =\sum \limits _{j=1}^{n}\hat{{\mathbb {E}}}\widetilde{Y_{j}}^{2}. \end{aligned}$$

Obviously, \(\{\widetilde{Y_{j}};j\ge 1\}\) is also a sequence of independent random variables with \(\hat{{\mathbb {E}}}\widetilde{Y_{j}}=0\). By (3.7), (3.8), (3.10), (3.11), the Jensen inequality (Proposition 2.3 (vi)), the countable sub-additivity of \(\hat{{\mathbb {E}}}\) together with Proposition 2.3 (vii), \(X_j{\mathop {=}\limits ^{d}}X_1\), and the fact that H(x) is increasing,

$$\begin{aligned} B_{n}&=\sum \limits _{j=1}^{n}\hat{{\mathbb {E}}}(Y_{j}-\hat{{\mathbb {E}}}Y_{j})^{2} \le 2\sum \limits _{j=1}^{n}\hat{{\mathbb {E}}}\left( Y^{2}_{j} +(\hat{{\mathbb {E}}}Y_{j})^{2}\right) \\&\ll \sum \limits _{j=1}^{n}\hat{{\mathbb {E}}}Y_{j}^{2} =\sum \limits _{j=1}^{n}\hat{{\mathbb {E}}}\left( X_{j}^{2}g \left( \frac{|X_j|}{a_j}\right) \right) =\sum \limits _{j=1}^{n}\hat{{\mathbb {E}}}\left( X_1^{2}g \left( \frac{|X_1|}{a_j}\right) \right) \\&\le \sum \limits _{j=1}^{n}C_{\mathbb {V}}\left( X_{1}^{2}g \left( \frac{|X_{1}|}{a_{j}}\right) \right) \le \sum \limits _{j=1}^{n}C_{\mathbb {V}}\left( X_{1}^{2}I(|X_{1}|\le a_{j})\right) \\&\le \sum \limits _{j=1}^{n}H(a_{j})\le nH(a_{n})\\&=nH(D(n\ln {n}(\ln {\ln {n}})^{2}))\\&=nK(D(n\ln {n}(\ln {\ln {n}})^{2}))(D(n\ln {n}(\ln {\ln {n}})^{2}))^{2}\\&\sim {\sqrt{n}\over \sqrt{\ln {n}}\ln {\ln {n}}}l_1(n\ln {n}(\ln {\ln {n}})^{2})D(n\ln {n}(\ln {\ln {n}})^{2})\\&=D(n)D(n\ln {n}(\ln {\ln {n}})^{2}){l_1(n\ln {n}(\ln {\ln {n}})^{2})\over l_1(n)}{1\over \sqrt{\ln {n}(\ln {\ln {n}})^{2}}}. \end{aligned}$$

Thus, taking \(h(x)=\ln x(\ln \ln x)^2\) and \(\delta =1/2\) in Lemma 2.9, for sufficiently large n we get

$$\begin{aligned} B_{n}\le D(n)D(n\ln {n}(\ln {\ln {n}})^{2})=D(n)a_n. \end{aligned}$$
(3.18)

On the other hand, taking \(h(x)=\ln x(\ln \ln x)^{2}\) and \(\delta =\varepsilon /4\) in Lemma 2.9, for sufficiently large n we have

$$\begin{aligned} {D(n)(\ln {n})^{(1+\varepsilon )/2}\over a_n}&= {D(n)(\ln {n})^{(1+\varepsilon )/2}\over D(n\ln {n}(\ln {\ln {n}})^{2})} ={(\ln {n})^{(1+\varepsilon )/2}\over \sqrt{\ln {n}(\ln {\ln {n}})^{2}}}{l_1(n)\over l_1(n(\ln {n})(\ln \ln n)^{2})}\nonumber \\&\ge {(\ln {n})^{(1+\varepsilon )/2}\over \sqrt{\ln {n} (\ln {\ln {n}})^{2}}} (\ln n(\ln \ln n)^{2})^{-\varepsilon /4}\nonumber \\&={(\ln {n})^{\varepsilon /4}\over (\ln {\ln {n}})^{1+\varepsilon /2}} \ge (\ln {n})^{\varepsilon /8}. \end{aligned}$$
(3.19)

For any \(\varepsilon >0\), take \(p>\max (2, 1+4(1-\varepsilon )/\varepsilon , 16/(5\varepsilon +4))\) in (2.2) of Lemma 2.6. By the Markov inequality (Proposition 2.3 (v)), \(\max \nolimits _{1\le j\le 2^{n}}|{\tilde{Y}}_{j}|\le 2a_{2^{n}}\), (3.18), and (3.19), for sufficiently large n we obtain

$$\begin{aligned} {\mathbb {V}}&\left( |\max \limits _{1\le j\le 2^{n}}\widetilde{T_{j}}|\ge D(2^{n})(\ln {2^{n}})^{(1+\varepsilon )/2} \right) \\&\le \frac{\hat{{\mathbb {E}}}\left| \max \nolimits _{1\le j\le 2^{n}}\widetilde{T_{j}}\right| ^p}{D^p(2^{n})(\ln {2^{n}})^{(1+\varepsilon )p/2}}\\&\ll \frac{\sum \nolimits _{k=1}^{2^{n}}\hat{{\mathbb {E}}}|{\tilde{Y}}_k|^p+ \left( \sum \nolimits _{k=1}^{2^{n}}\hat{{\mathbb {E}}}{\tilde{Y}}^2_k\right) ^{p/2}}{D^p(2^{n})(\ln {2^{n}})^{(1+\varepsilon )p/2}}\\&\ll \frac{\sum \nolimits _{k=1}^{2^{n}}\hat{{\mathbb {E}}} {\tilde{Y}}_k^2a^{p-2}_{2^{n}}+B^{p/2}_{2^n}}{D^p(2^{n})(\ln {2^{n}})^{(1 +\varepsilon )p/2}}\\&\ll \frac{a^{p-1}_{2^{n}}D(2^n)}{D^p(2^{n})(\ln {2^{n}})^{(1+\varepsilon )p/2}} +\frac{a^{p/2}_{2^{n}}D^{p/2}(2^n)}{D^p(2^{n})(\ln {2^{n}})^{(1+\varepsilon )p/2}}\\&\le \frac{1}{(\ln {2^{n}})^{(p-1)\varepsilon /8+(1+\varepsilon )/2}} +\frac{1}{(\ln {2^{n}})^{(5\varepsilon +4) p/16}}\\&\ll \frac{1}{n^{(p-1)\varepsilon /8+(1+\varepsilon )/2}} +\frac{1}{n^{(5\varepsilon +4) p/16}}. \end{aligned}$$

Hence,

$$\begin{aligned} \sum \limits _{n=1}^{\infty }{\mathbb {V}}\left( |\max \limits _{1\le j\le 2^{n}}\widetilde{T_{j}}|\ge D(2^{n})(\ln {2^{n}})^{(1+\varepsilon )/2}\right) <\infty , \end{aligned}$$

since \((p-1)\varepsilon /8+(1+\varepsilon )/2>1\) and \((5\varepsilon +4) p/16>1\). By the Borel–Cantelli lemma (Lemma 2.5),

$$\begin{aligned} \limsup _{n\rightarrow \infty }{|\max \nolimits _{1\le j\le 2^{n}}\widetilde{T_{j}}|\over D(2^{n})(\ln {2^{n}})^{(1+\varepsilon )/2}}\le 1 \quad {\mathrm{a.s.}}\,{\mathbb {V}}. \end{aligned}$$

For any n, there exists a k such that \(2^{k-1}\le n<2^{k}\); by the above inequality, for sufficiently large n,

$$\begin{aligned} {\tilde{T_n}\over D(n)(\ln {n})^{(1+\varepsilon )/2}}&\le {|\max \nolimits _{2^{k-1}\le n\le 2^{k}}{\widetilde{T}}_{n}|\over D(2^{k-1})(\ln {2^{k-1}})^{(1+\varepsilon )/2}}\nonumber \\&\le {|\max \nolimits _{1\le n\le 2^{k}}{\widetilde{T}}_{n}|\over D(2^{k})(\ln {2^{k}})^{(1+\varepsilon )/2}} \sup _{k\ge 1}{D(2^{k})(\ln {2^{k}})^{(1+\varepsilon )/2}\over D(2^{k-1})(\ln {2^{k-1}})^{(1+\varepsilon )/2}}\nonumber \\&={\max \nolimits _{1\le n\le 2^{k}}|{\widetilde{T}}_{n}|\over D(2^{k})(\ln {2^{k}})^{(1+\varepsilon )/2}} \sup _{k\ge 1}{\sqrt{2}l_1(2^{k})k^{(1+\varepsilon )/2}\over l_1(2^{k-1})(k-1)^{(1+\varepsilon )/2}}\nonumber \\&\le c\quad {\mathrm{a.s.}}\,{\mathbb {V}}. \end{aligned}$$
(3.20)

In the calculation of \({\mathbb {V}}(f(X_i)\in A)\), we need to convert \({\mathbb {V}}\) to \(\hat{{\mathbb {E}}}\). By (2.1), (3.7), and (3.13),

$$\begin{aligned} \sum \limits _{n=1}^{\infty }{\mathbb {V}}(X_{n}\ne Y_{n})&\le \sum \limits _{n=1}^{\infty }{\mathbb {V}}(|X_{n}|>\mu a_{n})\le \sum \limits _{n=1}^{\infty }\hat{{\mathbb {E}}}\left( 1-g\left( \frac{|X_n|}{\mu a_{n}}\right) \right) \\&=\sum \limits _{n=1}^{\infty }\hat{{\mathbb {E}}}\left( 1-g\left( \frac{|X_1|}{\mu a_{n}}\right) \right) \le \sum \limits _{n=1}^{\infty }{\mathbb {V}}(|X_{1}| \ge \mu ^2a_{n})\\&<\infty . \end{aligned}$$

Thus, combining this with (3.20), we get

$$\begin{aligned} \frac{\sum \nolimits _{k=1}^{n}(X_k-\hat{{\mathbb {E}}}Y_k)}{D(n)}\le c(\ln n)^{(1+\varepsilon )/2} \quad {\mathrm{a.s.}}\,{\mathbb {V}}\quad {\mathrm{for}}\,\mathrm{any}\,\varepsilon >0. \end{aligned}$$
(3.21)

For \(j\ge 1\), let \(g_j(x)\in C_{l,Lip}({\mathbb {R}})\) be such that \(0\le g_j(x)\le 1\) for all x, \(g_j\left( \frac{x}{a_{2^j}}\right) =1\) if \(a_{2^{j-1}}< x\le a_{2^j}\), and \(g_j\left( \frac{x}{a_{2^j}}\right) =0\) if \(x\le \mu a_{2^{j-1}}\) or \(x>(1+\mu )a_{2^j}\). Then

$$\begin{aligned} g_j\left( \frac{|X_1|}{a_{2^j}}\right) \le I(\mu a_{2^{j-1}}<|X_1|\le (1+\mu )a_{2^j}),\quad 1-g \left( \frac{|X_{1}|}{a_{2^{k-1}}}\right) \le \sum \limits _{j=k-1}^{\infty }g_j\left( \frac{|X_{1}|}{a_{2^j}}\right) . \nonumber \\ \end{aligned}$$
(3.22)

Similar to (3.10) in Wu and Jiang [35],

$$\begin{aligned} \sum \limits _{n=1}^{\infty }{\mathbb {V}}(|X_{1}|>ca_{n})\ge & {} \sum \limits _{k=1}^{\infty }\sum \limits _{2^{k-1}\le n<2^k}{\mathbb {V}}(|X_{1}|>ca_{n})\ge \sum \limits _{k=1}^{\infty }\sum \limits _{2^{k-1}\le n<2^k} {\mathbb {V}}(|X_{1}|>ca_{2^k})\\= & {} 2^{-1}\sum \limits _{k=1}^{\infty }2^k{\mathbb {V}}(|X_{1}|>ca_{2^k}). \end{aligned}$$

Hence, (3.13) implies

$$\begin{aligned} \sum \limits _{k=1}^{\infty }2^{k}{\mathbb {V}}(|X_{1}|>c\cdot a_{2^{k}})<\infty ,\quad \forall c>0. \end{aligned}$$

Thus, taking \(D(n)(\ln n)^{(1+\varepsilon )/2}\) and \(a_n\) increasing (which is possible by Proposition 2.8 (v)), and using the monotonicity of g(x), (3.19), (3.22), and the countable sub-additivity of \(\hat{{\mathbb {E}}}\), we get

$$\begin{aligned} \sum \limits _{n=1}^{\infty } \frac{\hat{{\mathbb {E}}}|X_n-Y_n|}{D(n)(\ln n)^{(1+\varepsilon )/2}}= & {} \sum \limits _{k=1}^{\infty }\sum \limits _{2^{k-1}\le n<2^k} \frac{\hat{{\mathbb {E}}}\left( |X_{1}|\left( 1-g\left( \frac{|X_{1}|}{a_n} \right) \right) \right) }{D(n)(\ln n)^{(1+\varepsilon )/2}}\\\le & {} \sum \limits _{k=1}^{\infty }\sum \limits _{2^{k-1}\le n<2^k} \frac{\hat{{\mathbb {E}}}\left( |X_{1}|\left( 1-g\left( \frac{|X_{1}|}{a_{2^{k-1}}} \right) \right) \right) }{D(2^{k-1})(\ln 2^{k-1})^{(1+\varepsilon )/2}}\\= & {} \sum \limits _{k=1}^{\infty } \frac{2^{k-1}}{D(2^{k-1})(\ln 2^{k-1})^{(1+\varepsilon )/2}}\hat{{\mathbb {E}}} \left( |X_{1}|\left( 1-g\left( \frac{|X_{1}|}{a_{2^{k-1}}}\right) \right) \right) \\\le & {} \sum \limits _{k=1}^{\infty } \frac{2^{k-1}}{D(2^{k-1})(\ln 2^{k-1})^{(1+\varepsilon )/2}} \sum \limits _{j=k-1}^{\infty }\hat{{\mathbb {E}}}\left( |X_{1}|g_j\left( \frac{|X_{1}|}{a_{2^j}}\right) \right) \\= & {} \sum \limits _{j=0}^{\infty }\sum \limits _{k=1}^{j+1} \frac{2^{k-1}}{D(2^{k-1})(\ln 2^{k-1})^{(1+\varepsilon )/2}}\hat{{\mathbb {E}}} \left( |X_{1}|g_j\left( \frac{|X_{1}|}{a_{2^{j}}}\right) \right) \\\ll & {} \sum \limits _{j=0}^{\infty }\frac{2^j}{D(2^j)(\ln 2^j)^{(1+\varepsilon )/2}} \hat{{\mathbb {E}}}\left( |X_{1}|g_j\left( \frac{|X_{1}|}{a_{2^j}}\right) \right) \\\ll & {} \sum \limits _{j=0}^{\infty }a_{2^j} \frac{2^j}{D(2^j)(\ln 2^j)^{(1+\varepsilon )/2}}{\mathbb {V}} \left( |X_{1}|> \mu a_{2^{j-1}}\right) \\\ll & {} \sum \limits _{j=1}^{\infty }\frac{2^ja_{2^j}}{D(2^{j})(\ln 2^{j})^{(1 +\varepsilon )/2}}{\mathbb {V}}\left( |X_{1}|> \mu a_{2^j}\right) \\\ll & {} \sum \limits _{j=1}^{\infty }2^j{\mathbb {V}}\left( |X_{1}|> \mu a_{2^j}\right) \\< & {} \infty . \end{aligned}$$

Thus, by the Kronecker lemma,

$$\begin{aligned} \frac{\left| \sum \nolimits _{i=1}^n(\hat{{\mathbb {E}}}X_i-\hat{{\mathbb {E}}}Y_i) \right| }{D(n)(\ln n)^{(1+\varepsilon )/2}}\le \frac{\sum \nolimits _{i=1}^n\hat{{\mathbb {E}}}|X_i-Y_i|}{D(n)(\ln n)^{(1+\varepsilon )/2}}\rightarrow 0,\quad {\mathrm{as}}\,n\rightarrow \infty . \end{aligned}$$
(3.23)

This and (3.21) imply that

$$\begin{aligned} \frac{{\tilde{S}}_{n}}{D(n)}\le c(\ln {n})^{(1+\varepsilon )/2} \quad {\mathrm{a.s.}}\,{\mathbb {V}}\quad {\mathrm{for}}\,\mathrm{any}\,\varepsilon >0. \end{aligned}$$
(3.24)

Considering \(\{-X_n; n\ge 1\}\) instead of \(\{X_n; n\ge 1\}\) in (3.24), we can obtain

$$\begin{aligned} \frac{\sum \nolimits _{k=1}^{n}(-X_k-\hat{{\mathbb {E}}}(-X_k))}{D(n)}\le c(\ln {n})^{(1+\varepsilon )/2} \quad {\mathrm{a.s.}}\,{\mathbb {V}}\quad {\mathrm{for}}\,\mathrm{any}\,\varepsilon >0. \end{aligned}$$

By \(\hat{{\mathbb {E}}}X_k=\hat{{\varepsilon }}X_k\), we have \(\hat{{\mathbb {E}}}(-X_k)=-\hat{{\varepsilon }}X_k=-\hat{{\mathbb {E}}}X_k\), therefore,

$$\begin{aligned} \frac{\sum \nolimits _{k=1}^{n}(X_k-\hat{{\mathbb {E}}}(X_k))}{D(n)}\ge -c(\ln {n})^{(1+\varepsilon )/2} \quad {\mathrm{a.s.}}\quad {\mathbb {V}}\quad {\mathrm{for}}\,\mathrm{any}\,\varepsilon >0. \end{aligned}$$

Hence, for any \(\varepsilon >0\),

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\left( {|{\tilde{S}}_{n}|\over D(n)}\right) ^{{1\over \ln \ln n}}\le \mathrm{e}^{(1+\varepsilon )/2}\quad {\mathrm{a.s.}}\,{\mathbb {V}}. \end{aligned}$$

Letting \(\varepsilon \rightarrow 0\), we have

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\left( {|{\tilde{S}}_{n}|\over D(n)}\right) ^{{1\over \ln \ln n}}\le {\mathrm{e}}^{1/2} \quad \mathrm{a.s.}\,{\mathbb {V}}. \end{aligned}$$

That is, (3.14) holds. This completes the proof of Theorem 3.4.

Proof of Theorem 3.5

We first prove (3.16). Let \(\delta \in [0,1)\) be fixed, and for any given \(\varepsilon >0\) let

$$\begin{aligned} a'_{j}=D\left( j(\ln j)^{1-\delta +\varepsilon }(\ln \ln j)^{2}\right) . \end{aligned}$$

Using the same notation and a similar method as in the proof of Theorem 3.4, we easily get

$$\begin{aligned} {{\tilde{T}}_{n}\over D(n)(\ln n)^{(1-\delta +\varepsilon )/2}}\le c\quad {\mathrm{a.s.}}\,{\mathbb {V}}. \end{aligned}$$
(3.25)

By (3.11), (3.12) and (3.15), for all sufficiently large n,

$$\begin{aligned} {\mathbb {V}}(|X_{1}|>a'_{n})&=G(a'_{n})=K(a'_{n})L(a'_{n})\le K(a'_{n})(\ln {a'_{n}})^{\varepsilon -\delta }\\&=K(D(n(\ln {n})^{1-\delta +\varepsilon }(\ln {\ln {n}})^{2})) \left( \ln D(n(\ln {n})^{1-\delta +\varepsilon }(\ln {\ln {n}})^{2})\right) ^{\varepsilon -\delta }\\&\sim {1\over n(\ln {n})^{1-\delta +\varepsilon } (\ln {\ln {n}})^{2}}\left( {1\over 2}\ln (n(\ln {n})^{1-\delta +\varepsilon } (\ln {\ln {n}})^{2})\right) ^{\varepsilon -\delta }\\&\ll {1\over n\ln {n}(\ln \ln n)^{2}}. \end{aligned}$$

Therefore \(\sum \nolimits _{n=1}^{\infty }{\mathbb {V}}(X_{n}\ne Y_{n})\le \sum \nolimits _{n=1}^{\infty }{\mathbb {V}}(|X_{1}|>\mu ^2a'_{n})<\infty \); combining this with (3.25), we obtain

$$\begin{aligned} \frac{\sum \nolimits _{k=1}^{n}(X_k-\hat{{\mathbb {E}}}Y_k)}{D(n) (\ln {n})^{(1-\delta +\varepsilon )/2}}\le c \quad {\mathrm{a.s.}}\,{\mathbb {V}}. \end{aligned}$$

Similar to the proof of (3.23), we can obtain

$$\begin{aligned} \frac{\left| \sum \nolimits _{k=1}^{n}(\hat{{\mathbb {E}}}X_k -\hat{{\mathbb {E}}}Y_k)\right| }{D(n)(\ln n)^{(1-\delta +\varepsilon )/2}} \rightarrow 0, \quad {\mathrm{as}}\,n\rightarrow \infty . \end{aligned}$$

Hence,

$$\begin{aligned} \frac{{\tilde{S}}_{n}}{D(n)}\le c(\ln {n})^{(1-\delta +\varepsilon )/2} \quad {\mathrm{a.s.}}\,{\mathbb {V}}. \end{aligned}$$

Considering \(\{-X_n; n\ge 1\}\) instead of \(\{X_n; n\ge 1\}\) in the above formula and using the fact that \(\hat{{\mathbb {E}}}X_k=\hat{{\varepsilon }}X_k\), we can obtain

$$\begin{aligned} \frac{{\tilde{S}}_{n}}{D(n)}\ge -c(\ln {n})^{(1-\delta +\varepsilon )/2} \quad {\mathrm{a.s.}}\,{\mathbb {V}}. \end{aligned}$$

Therefore, by the arbitrariness of \(\varepsilon \),

$$\begin{aligned} \limsup _{n\rightarrow \infty }\left( {|{\tilde{S}}_{n}|\over D(n)}\right) ^{{1\over \ln {\ln {n}}}} \le \mathrm{e}^{(1-\delta )/2} \quad {\mathrm{a.s.}}\,{\mathbb {V}}. \end{aligned}$$

Secondly, we prove (3.17). By (3.11), (3.12) and (3.15), for any \(c>0\),

$$\begin{aligned} {\mathbb {V}}(|X_{1}|>cD(n(\ln {n})^{1-\delta -\varepsilon }))&=G(cD(n(\ln {n})^{1-\delta -\varepsilon }))\\&= K(cD(n(\ln {n})^{1-\delta -\varepsilon }))L(cD(n(\ln {n})^{1-\delta -\varepsilon }))\\&\ge c K(D(n(\ln {n})^{1-\delta -\varepsilon })) (\ln {D(n(\ln {n})^{1-\delta -\varepsilon }}))^{-\varepsilon -\delta }\\&\sim c{(\ln (n(\ln {n})^{1-\delta -\varepsilon }))^{-\varepsilon -\delta }\over n(\ln {n})^{1-\delta -\varepsilon } }\sim {c\over n\ln {n}}. \end{aligned}$$

Hence

$$\begin{aligned} \sum \limits _{n=1}^{\infty }{\mathbb {V}}(|X_{1}|>cd_n)=\infty \quad {\mathrm{for}} \, {\mathrm{any}} \,c>0, \end{aligned}$$
(3.26)

where \(d_n=D(n(\ln {n})^{1-\delta -\varepsilon })\). For any \(M>0\), let

$$\begin{aligned} \xi _j=1-g\left( \frac{|X_j|}{Md_j}\right) , \,\eta _n=\sum _{j=1}^n\xi _j\, {\mathrm{and}} \,b_n=\sum _{j=1}^n\hat{{\mathbb {E}}}\xi _j. \end{aligned}$$

Since \(1-g(x)\in C_{l,Lip}\), \(\{\xi _j; j\ge 1\}\) is also a sequence of independent random variables. Using (2.1), (3.7), and (3.26),

$$\begin{aligned} b_n=\sum \limits _{j=1}^n\hat{{\mathbb {E}}}\left( 1-g\left( \frac{|X_j|}{Md_j}\right) \right) = \sum \limits _{j=1}^n\hat{{\mathbb {E}}}\left( 1-g\left( \frac{|X_1|}{Md_j} \right) \right) \ge \sum \limits _{j=1}^n{\mathbb {V}}(|X_1|> Md_j)\rightarrow \infty . \end{aligned}$$

For any \(0<\delta <\epsilon <1\) (here \(\delta \) is a new local parameter, not the \(\delta \) of Theorem 3.5) and \(t>0\), we get

$$\begin{aligned} I\left( \frac{\eta _n-b_n}{b_n}\ge -\epsilon \right)\ge & {} I\left( -\epsilon \le \frac{\eta _n-b_n}{b_n}<\delta \right) \\\ge & {} {\mathrm{e}}^{-t\delta }\exp \left\{ t\frac{\eta _n-b_n}{b_n}\right\} I \left( -\epsilon \le \frac{\eta _n-b_n}{b_n}<\delta \right) \\= & {} \left\{ \begin{array}{cc} 0\ge {\mathrm{e}}^{-t\delta }\left( \exp \left\{ t\frac{\eta _n-b_n}{b_n}\right\} -{\mathrm{e}}^{-t\epsilon }\right) , \frac{\eta _n-b_n}{b_n}<-\epsilon ,\\ {\mathrm{e}}^{-t\delta }\exp \left\{ t\frac{\eta _n-b_n}{b_n}\right\} \ge {\mathrm{e}}^{-t\delta }\left( \exp \left\{ t\frac{\eta _n-b_n}{b_n}\right\} -{\mathrm{e}}^{-t\epsilon }\right) , -\epsilon \le \frac{\eta _n-b_n}{b_n}<\delta \end{array} \right. \\\ge & {} {\mathrm{e}}^{-t\delta }\left( \exp \left\{ t\frac{\eta _n-b_n}{b_n}\right\} -{\mathrm{e}}^{-t\epsilon }\right) I\left( \frac{\eta _n-b_n}{b_n}<\delta \right) \\\ge & {} {\mathrm{e}}^{-t\delta }\left( \exp \left\{ t\frac{\eta _n-b_n}{b_n}\right\} -{\mathrm{e}}^{-t\epsilon }\right) - {\mathrm{e}}^{-t\delta -t}\exp \left\{ t\frac{\eta _n}{b_n}\right\} I (\frac{\eta _n-b_n}{b_n}\ge \delta )\\\ge & {} {\mathrm{e}}^{-t\delta }\left( \exp \left\{ t\frac{\eta _n-b_n}{b_n}\right\} -{\mathrm{e}}^{-t\epsilon }\right) - {\mathrm{e}}^{-t\delta -t}\exp \left\{ t\frac{\eta _n}{b_n}\right\} \left( 1 -g\left( \frac{\eta _n-b_n}{\delta b_n}\right) \right) . \end{aligned}$$

So, using Proposition 2.3 (i): \(\hat{{\mathbb {E}}}(X-Y)\ge \hat{{\mathbb {E}}}X-\hat{{\mathbb {E}}}Y\),

$$\begin{aligned} {\mathbb {V}}\left( \frac{\eta _n-b_n}{b_n}\ge -\epsilon \right)\ge & {} {\mathrm{e}}^{-t\delta }\left( \hat{{\mathbb {E}}}\exp \left\{ t\frac{\eta _n-b_n}{b_n}\right\} -{\mathrm{e}}^{-t\epsilon }\right) \nonumber \\&-{\mathrm{e}}^{-t\delta -t}\hat{{\mathbb {E}}}\left( \exp \left\{ t\frac{\eta _n}{b_n}\right\} \left( 1-g\left( \frac{\eta _n-b_n}{\delta b_n} \right) \right) \right) . \end{aligned}$$
(3.27)

By the independence of \(\{\xi _i; i\ge 1\}\) and the fact that \(\mathrm{e}^x\ge 1+x\),

$$\begin{aligned} \hat{{\mathbb {E}}}\left( \exp \left\{ t\frac{\eta _n-b_n}{b_n}\right\} \right) =\prod \limits _{j=1}^n\hat{{\mathbb {E}}}\left( \exp \left\{ t\frac{\xi _j -\hat{{\mathbb {E}}}\xi _j}{b_n}\right\} \right) \ge \prod \limits _{j=1}^n\hat{{\mathbb {E}}}\left( t\frac{\xi _j -\hat{{\mathbb {E}}}\xi _j}{b_n}+1\right) =1.\nonumber \\ \end{aligned}$$
(3.28)

On the other hand, by noting \(\mathrm{e}^x\le 1+|x|\mathrm{e}^{|x|}\), \(\mathrm{e}^x\ge 1+x\) and \(0\le \xi _j\le 1\),

$$\begin{aligned} \hat{{\mathbb {E}}}\left( \exp \left\{ 2t\frac{\eta _n}{b_n}\right\} \right) \le \prod \limits _{j=1}^n\hat{{\mathbb {E}}}\left( 1+t\frac{2\xi _j}{b_n}{\mathrm{e}}^{2t/b_n}\right) \le \prod \limits _{j=1}^n\exp \left( 2t\frac{\hat{{\mathbb {E}}}\xi _j}{b_n}{\mathrm{e}}^{2t/b_n}\right) =\exp \left( 2t{\mathrm{e}}^{2t/b_n}\right) .\nonumber \\ \end{aligned}$$
(3.29)

Also, by (2.3) in Lemma 2.6, \(b_n\rightarrow \infty \), and the fact that for \(0\le \xi _j\le 1\), \(\hat{{\mathbb {E}}}(\xi _j-\hat{{\mathbb {E}}}\xi _j)^2=\hat{{\mathbb {E}}} (\xi ^2_j-2\xi _j\hat{{\mathbb {E}}}\xi _j+(\hat{{\mathbb {E}}}\xi _j)^2) \le \hat{{\mathbb {E}}}(\xi _j+\hat{{\mathbb {E}}}\xi _j)=2\hat{{\mathbb {E}}}(\xi _j)\),

$$\begin{aligned} {\mathbb {V}}\left( \frac{\eta _n-b_n}{b_n}\ge \mu \delta \right)= & {} {\mathbb {V}} \left( \sum \limits _{j=1}^n(\xi _j-\hat{{\mathbb {E}}}\xi _j)\ge \mu \delta b_n\right) \ll \frac{\sum _{j=1}^n\hat{{\mathbb {E}}}(\xi _j-\hat{{\mathbb {E}}}\xi _j)^2}{b^2_n}\\\ll & {} \frac{\sum _{j=1}^n\hat{{\mathbb {E}}}\xi _j}{b^2_n} =\frac{1}{b_n}\rightarrow 0\,{\mathrm{as}}\,n\rightarrow \infty . \end{aligned}$$

Thus, by the Hölder inequality (Proposition 2.3 (vi)) and (3.7), it follows that

$$\begin{aligned} \hat{{\mathbb {E}}}\left[ \exp \left\{ t\frac{\eta _n}{b_n}\right\} \left( 1-g\left( \frac{\eta _n-b_n}{\delta b_n}\right) \right) \right]\le & {} \left\{ \hat{{\mathbb {E}}}\left[ \exp \left\{ 2t\frac{\eta _n}{b_n}\right\} \right] \hat{{\mathbb {E}}}\left[ 1-g\left( \frac{\eta _n-b_n}{\delta b_n}\right) \right] ^2\right\} ^{1/2}\nonumber \\\le & {} \exp \left( t{\mathrm{e}}^{2t/b_n}\right) \left( {\mathbb {V}} \left( \frac{\eta _n-b_n}{b_n}\ge \mu \delta \right) \right) ^{1/2}\nonumber \\\le & {} \exp \left( t{\mathrm{e}}^{2t/b_n}\right) \frac{1}{b^{1/2}_n}\rightarrow 0. \end{aligned}$$
(3.30)

Hence, substituting (3.28)–(3.30) into (3.27), we have

$$\begin{aligned} \liminf \limits _{n\rightarrow \infty }{\mathbb {V}}\left( \frac{\eta _n-b_n}{b_n} \ge -\epsilon \right) \ge {\mathrm{e}}^{-t\delta }(1-{\mathrm{e}}^{-t\epsilon }). \end{aligned}$$

Letting \(\delta \rightarrow 0\) and then \(t\rightarrow \infty \), we get

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }{\mathbb {V}}\left( \frac{\eta _n-b_n}{b_n} \ge -\epsilon \right) =1. \end{aligned}$$

Now, choose \(\epsilon =1-\mu >0\). Noting that

$$\begin{aligned} \left( \frac{\eta _n}{b_n}>\mu ; {\mathrm{i.o.}}\right) \subset \left( \sum \limits _{j=1}^\infty \xi _j=\infty \right) \subset \left( \xi _j\ne 0; {\mathrm{i.o.}}\right) \subset \left( \frac{|X_n|}{M d_n}>\mu ; {\mathrm{i.o.}}\right) \end{aligned}$$

and the continuity of \({\mathbb {V}}\), we get

$$\begin{aligned} {\mathbb {V}}\left( \limsup \limits _{n\rightarrow \infty }\frac{|X_n|}{d_n}>\mu M\right)= & {} {\mathbb {V}}\left( \frac{|X_n|}{M d_n}>\mu ; {\mathrm{i.o.}}\right) \ge {\mathbb {V}}\left( \sum \limits _{j=1}^\infty \left( 1-g\left( \frac{|X_j|}{M d_j} \right) \right) =\infty \right) \\\ge & {} {\mathbb {V}}\left( \frac{\eta _n}{b_n}>\mu ; {\mathrm{i.o.}}\right) = {\mathbb {V}}\left( \frac{\eta _n-b_n}{b_n}>-(1-\mu ); {\mathrm{i.o.}}\right) \\\ge & {} \limsup \limits _{n\rightarrow \infty }{\mathbb {V}}\left( \frac{\eta _n-b_n}{b_n}> -(1-\mu )\right) =1. \end{aligned}$$

Note that

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\frac{|X_n|}{d_n}= & {} \limsup \limits _{n\rightarrow \infty } \frac{|{\tilde{S}}_n-{\tilde{S}}_{n-1}+\hat{{\mathbb {E}}}X_n|}{d_n}\\\le & {} 2\limsup \limits _{n\rightarrow \infty } \frac{|{\tilde{S}}_n|}{d_n}+\limsup \limits _{n\rightarrow \infty } \frac{\hat{{\mathbb {E}}}|X_1|}{d_n}\\= & {} 2\limsup \limits _{n\rightarrow \infty } \frac{|{\tilde{S}}_n|}{d_n}. \end{aligned}$$

From the arbitrariness of M, it follows that

$$\begin{aligned} {\mathbb {V}}\left( \limsup \limits _{n\rightarrow \infty } \frac{|{\tilde{S}}_n|}{d_n}>M_1\right) =1,\quad \forall M_1>0. \end{aligned}$$

Therefore,

$$\begin{aligned} {\mathbb {V}}\left( \limsup \limits _{n\rightarrow \infty } \frac{|{\tilde{S}}_n|}{d_n}=\infty \right) = \lim \limits _{M_1\rightarrow \infty }{\mathbb {V}}\left( \limsup \limits _{n\rightarrow \infty } \frac{|{\tilde{S}}_n|}{d_n}>M_1\right) = 1. \end{aligned}$$

That is

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty } \frac{|{\tilde{S}}_n|}{d_n}=\infty \quad {\mathrm{a.s.}}\quad \nu \end{aligned}$$
(3.31)

Since \(0\le \delta <1\), take \(h(x)=(\ln x)^{1-\delta -\varepsilon }\) with \(0<\varepsilon <1-\delta \), and set \(\eta =\frac{\varepsilon }{4(1-\delta -\varepsilon )}>0\); applying Lemma 2.9 and (3.10),

$$\begin{aligned} {D(n(\ln {n})^{1-\delta -\varepsilon })\over D(n)(\ln {n})^{{(1-\delta )/2}-\varepsilon }}&={(\ln {n})^{(1-\delta -\varepsilon )/2}l_1(n(\ln {n})^{1-\delta -\varepsilon })\over l_1(n)(\ln {n})^{(1-\delta )/2-\varepsilon }}\\&\ge (\ln {n})^{\varepsilon /2}(\ln {n})^{-\eta (1-\delta -\varepsilon )}\\&=(\ln {n})^{\varepsilon /4}\rightarrow \infty , n\rightarrow \infty . \end{aligned}$$

Thus, combining this with (3.31), for infinitely many n,

$$\begin{aligned}&{|{\tilde{S}}_{n}|\over D(n)(\ln {n})^{(1-\delta )/2-\varepsilon }} ={|{\tilde{S}}_{n}|\over d_n}{D(n(\ln {n})^{1-\delta -\varepsilon })\over D(n)(\ln {n})^{(1-\delta )/2-\varepsilon }}\ge 1 \quad {\mathrm{a.s.}}\,\nu \end{aligned}$$

which implies

$$\begin{aligned} \limsup _{n\rightarrow \infty }\left( {|{\tilde{S}}_{n}|\over D(n)}\right) ^{{1\over \ln {\ln {n}}}} \ge \mathrm{e}^{(1-\delta )/2} \quad {\mathrm{a.s.}}\,\nu \end{aligned}$$

since \(\varepsilon \) is arbitrary. That is, (3.17) holds.

By Proposition 2.3 (iv), continuity of \({\mathbb {V}}\) implies countable sub-additivity; moreover, since \(\nu (A)\le {\mathbb {V}}(A)\), (3.16) implies

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\left( {|{\tilde{S}}_{n}|\over D(n )}\right) ^{\displaystyle {1\over \ln \ln n}}\le {\mathrm{e}}^{{1-\delta \over 2}}\quad {\mathrm{a.s.}}\,\nu . \end{aligned}$$

Therefore, (3.16) and (3.17) imply

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\left( {|{\tilde{S}}_{n}|\over D(n )}\right) ^{\displaystyle {1\over \ln \ln n}}={\mathrm{e}}^{{1-\delta \over 2}}\quad {\mathrm{a.s.}}\,\nu \end{aligned}$$

This completes the proof of Theorem 3.5.

Finally, we give an example to show that for each \(\delta \in {[0,1)}\), there exists a distribution such that the conditions of Theorem 3.5 are satisfied.

Example

Assume that \(\{X_{n};n\ge 1\}\) is a sequence of i.i.d. random variables with \(\hat{{\mathbb {E}}}X_k={\hat{\varepsilon }}X_k\), \(\hat{{\mathbb {E}}}\) is countably sub-additive, and \({\mathbb {V}}\) is continuous. For \(\delta \in {[0,1)}\), if

$$\begin{aligned} G(x):={\mathbb {V}}(|X_1|>x)=x^{-2}\exp \left\{ \int ^x_{{\mathrm{e}}}{\frac{2}{u(\ln u)^{\delta }\ln \ln u}}{\mathrm{d}}u\right\} , x>{\mathrm{e}}, \end{aligned}$$

then (3.16) and (3.17) hold.

Proof

It is easy to check that

$$\begin{aligned} H(x):= & {} 2\int ^x_0tG(t){\mathrm{d}}t\sim \int ^x_{{\mathrm{e}}}2t^{-1}\exp \left\{ \int ^t_{{\mathrm{e}}}{\frac{2}{u(\ln u)^{\delta }\ln \ln u}}{\mathrm{d}}u\right\} {\mathrm{d}}t\\\sim & {} \int ^x_{{\mathrm{e}}}\left( \frac{2}{t}+\frac{\delta \ln \ln t}{t\ln ^{1-\delta }t}+\frac{1}{t\ln ^{1-\delta }t}\right) \exp \left\{ \int ^t_{{\mathrm{e}}}{\frac{2}{u(\ln u)^{\delta }\ln \ln u}}{\mathrm{d}}u\right\} {\mathrm{d}}t\\= & {} \int ^x_{{\mathrm{e}}}\left( \ln ^\delta t\ln \ln t\exp \left\{ \int ^t_{{\mathrm{e}}}{\frac{2}{u(\ln u)^{\delta }\ln \ln u}}{\mathrm{d}}u\right\} \right) '{\mathrm{d}}t\\= & {} \ln ^{\delta }x\ln \ln x\exp \left\{ \int ^x_{{\mathrm{e}}}{\frac{2}{u(\ln u)^{\delta }\ln \ln u}}{\mathrm{d}}u\right\} \quad {\mathrm{as}}\quad x\rightarrow \infty . \end{aligned}$$

This implies that

$$\begin{aligned} \frac{H(x)}{x^2{\mathbb {V}}(|X_1|>x)}\sim \ln ^{\delta }x\ln \ln x\rightarrow \infty \quad {\mathrm{as}}\quad x\rightarrow \infty . \end{aligned}$$

Thus, combining this with (3.8), we have

$$\begin{aligned} \frac{x^2{\mathbb {V}}(|X_1|>x)}{C_{\mathbb {V}}(X_1^2I(|X_1|\le x))}\le \frac{x^2{\mathbb {V}}(|X_1|>x)}{H(x)-x^2{\mathbb {V}}(|X_1|>x)} =\frac{1}{\frac{H(x)}{x^2{\mathbb {V}}(|X_1|>x)}-1}\rightarrow 0,\quad {\mathrm{as}}\quad x\rightarrow \infty , \end{aligned}$$

and

$$\begin{aligned} L(x)=\frac{x^2{\mathbb {V}}(|X_1|>x)}{H(x)}\sim {\frac{1}{(\ln x)^{\delta }\ln \ln x}}. \end{aligned}$$

Therefore, (3.3) and (3.15) hold. That is, all conditions of Theorem 3.5 are satisfied. Hence (3.16) and (3.17) hold. \(\square \)
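As a closing numerical sanity check of this example (a stdlib-only sketch, not part of the proof): with \(s=\ln t\) the example's tail gives \(t^2G(t)=\exp (J(s))\) with \(J(s)=\int 2/(\sigma ^{\delta }\ln \sigma ){\mathrm{d}}\sigma \) and \(H(t)\sim 2\int \exp (J(s)){\mathrm{d}}s\), so \(L(t)(\ln t)^{\delta }\ln \ln t\) should drift toward 1. Because the integrand defining G is singular at \(u={\mathrm{e}}\) as written, the sketch starts the integrals at \(\sigma ={\mathrm{e}}\) (i.e. \(u={\mathrm{e}}^{\mathrm{e}}\)), which modifies G but not the asymptotics tested here.

```python
import math

delta = 0.5
s_lo, s_hi, n = math.e, math.log(1e13), 400_000   # s = ln t, grid to t = 1e13
h = (s_hi - s_lo) / n
J = H = 0.0
targets, k = [1e3, 1e6, 1e9, 1e12], 0
for i in range(n):
    s = s_lo + (i + 0.5) * h
    J += 2.0 * h / (s ** delta * math.log(s))     # J(s): the inner integral
    H += 2.0 * math.exp(J) * h                    # H(e^s) = 2 * int exp(J) ds
    if k < len(targets) and math.exp(s) >= targets[k]:
        x = targets[k]; k += 1
        # L(x) = x^2 G(x)/H(x) = exp(J)/H; the normalized ratio decreases
        # slowly toward 1 (the convergence is only logarithmic in x):
        ratio = (math.exp(J) / H) * math.log(x) ** delta * math.log(math.log(x))
        print(f"x = {x:g}:  L(x)*(ln x)^delta*lnln x = {ratio:.3f}")
```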