1 Introduction

Let \(X, X_1, X_2, \ldots \) be i.i.d. random variables and set \(S_n=\sum _{j=1}^n X_j, n \ge 1.\) Further set \(Lt =\log _e(t \vee e), LL t = L(Lt)\) and \( LLL t = L( LL t), t \ge 0.\) In 1956, Darling and Erdős proved that under the assumption \(\mathbb {E}|X|^3 < \infty ,\) \(\mathbb {E}X^2=1\) and \(\mathbb {E}X=0,\) the following convergence in distribution result holds,

$$\begin{aligned} a_n \max _{1\le k \le n} |S_k|{/}\sqrt{k} - b_n \mathop {\rightarrow }\limits ^{d} \tilde{Y}, \end{aligned}$$
(1.1)

where \(a_n = \sqrt{2 LL n}, b_n = 2 LL n + LLL n{/}2 - \log (\pi ){/}2\) and \(\tilde{Y}\) is a random variable which has an extreme value distribution with distribution function \(y \mapsto \exp (-\exp (-y)).\)
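As a quick numerical illustration of these normalizing sequences (this sketch is ours, not part of the original paper; the helper name `darling_erdos_stat` is made up), one can simulate the left-hand side of (1.1) for standard normal increments. The convergence in (1.1) is notoriously slow, so the agreement with the Gumbel limit is only approximate for moderate n:

```python
import numpy as np

rng = np.random.default_rng(0)

def darling_erdos_stat(n, n_sims=200):
    """Simulate a_n * max_{1<=k<=n} |S_k|/sqrt(k) - b_n for N(0,1) increments."""
    lln = np.log(np.log(n))            # LL n
    llln = np.log(lln)                 # LLL n
    a_n = np.sqrt(2 * lln)
    b_n = 2 * lln + llln / 2 - np.log(np.pi) / 2
    k = np.arange(1, n + 1)
    out = np.empty(n_sims)
    for i in range(n_sims):
        s = np.cumsum(rng.standard_normal(n))
        out[i] = a_n * np.max(np.abs(s) / np.sqrt(k)) - b_n
    return out

# Compare the empirical CDF at 0 with exp(-exp(-0)) ~ 0.368.
sims = darling_erdos_stat(10**5)
print(np.mean(sims <= 0.0))
```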

The above third moment assumption was later relaxed in [14, 16] to \(\mathbb {E}|X|^{2 + \delta } < \infty \) for some \(\delta >0,\) but the question remained open whether a finite second moment would already be sufficient.

This was finally answered in [6], where it is shown that (1.1) holds if and only if

$$\begin{aligned} \mathbb {E}X^2 I\{|X| \ge t\}= o \left( ( LL t)^{-1}\right) \,\, \hbox {as}\,\,t \rightarrow \infty . \end{aligned}$$

Moreover, it is shown in [6] that the above result holds more generally under the assumption of a finite second moment if one replaces the normalizers \(\sqrt{k}\) by \(\sqrt{B_k}\), where \(B_n=\sum _{j=1}^n \sigma _j^2\) and

$$\begin{aligned} \sigma _n^2:= \mathbb {E}X^2 I\left\{ |X| \le \sqrt{n}{/}( LL n)^p\right\} \end{aligned}$$

for some \(p \ge 2.\) So we have under the classical assumption \(\mathbb {E}X^2 < \infty \) and \(\mathbb {E}X=0,\)

$$\begin{aligned} a_n \max _{1\le k \le n} |S_k|{/}\sqrt{B_k} - b_n \mathop {\rightarrow }\limits ^{d} \tilde{Y}. \end{aligned}$$

For some further related work on the classical Darling–Erdős theorem, the reader is referred to [2, 4, 10] and the references in these articles.

The Darling–Erdős theorem is also related to finding an integral test refining the Hartman–Wintner LIL, a problem which was already addressed by Feller in 1946. Here, one can relatively easily prove that the classical Kolmogorov–Erdős–Petrowski integral test for Brownian motion holds for sums of i.i.d. mean zero random variables if one has \(\mathbb {E}|X|^{2 + \delta } < \infty \) for some \(\delta >0\). In this case, one has for any non-decreasing function \(\phi {:}\,]0,\infty [ \rightarrow ]0,\infty [,\)

$$\begin{aligned} \mathbb {P}\left\{ |S_n| \le \sqrt{n}\phi (n)\hbox { eventually}\right\} = 1 \text{ or } 0, \end{aligned}$$

according as \(\sum _{n=1}^{\infty } n^{-1}\phi (n)\exp (-\phi ^2(n){/}2)\) is finite or infinite.
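As a classical illustration of this test, take \(\phi _{\gamma }^2(t) = 2 LL t + 3 LLL t + 2\gamma \, L( LLL t)\) with \(\gamma > 0;\) then

$$\begin{aligned} n^{-1}\phi _{\gamma }(n)\exp (-\phi _{\gamma }^2(n){/}2) \sim \frac{\sqrt{2}}{n \, Ln \, LL n \,( LLL n)^{\gamma }}, \end{aligned}$$

so that the above probability equals 1 if \(\gamma > 1\) and 0 if \(\gamma \le 1.\)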

Feller proved that this result remains valid under the second moment assumption if one replaces \(\sqrt{n}\) by \(\sqrt{B_n}\) defined as above with \(p \ge 4\). As with the Darling–Erdős theorem, this implies that the Kolmogorov–Erdős–Petrowski integral test holds in its original form if

$$\begin{aligned} \mathbb {E}X^2 I\{|X| \ge t\}= O \left( ( LL t)^{-1}\right) \,\,\hbox { as }t \rightarrow \infty . \end{aligned}$$

The proof in [11] was based on a skillful double truncation argument which only worked for symmetric distributions. Finally, in [6] an extension of this argument to the general non-symmetric case was found so that we now know that most results in [11] are correct (see also [1] for more historical background).

There is still one question in the paper [11] which has not yet been addressed, namely whether it is possible to make the theorem “slightly more elegant” by replacing the sequence \(\sqrt{B_n}\) by \(\sqrt{n}\sigma _n.\) Feller writes that he “was unable to devise a proof simple enough to be justified by the slight improvement of the theorem” (see p. 632 in [11]). We believe that we have found a simple enough proof of Feller’s claim (see Step 3 in the proof of Theorem 2.3).

This leads to the following improved version of the Darling–Erdős theorem under the finite second moment assumption:

$$\begin{aligned} a_n \max _{1\le k \le n} |S_k|{/}\sqrt{k} \sigma _k - b_n \mathop {\rightarrow }\limits ^{d} \tilde{Y}. \end{aligned}$$
(1.2)

At the same time, we can show that there is a much wider choice for the truncation level in the definition of \(\sigma _n^2\). For instance, it is possible to define \(\sigma _n^2\) as \(\mathbb {E}X^2I\{|X| \le \sqrt{n}\}.\)

This improved version of the Darling–Erdős theorem will actually follow from a general result for d-dimensional random vectors which will be given in the following section.

2 Statement of Main Results

We now consider i.i.d. d-dimensional random vectors \(X, X_1, X_2, \ldots \) such that \(\mathbb {E}|X|^2 < \infty \) and \(\mathbb {E}X=0\), where we denote the Euclidean norm by \(|\cdot |\). The corresponding matrix norm will be denoted by \(\Vert \cdot \Vert \), that is, we set

$$\begin{aligned} \Vert A\Vert :=\sup _{|x| \le 1} |Ax| \end{aligned}$$

for any (d,d)-matrix A. It is well known that \(\Vert A\Vert \) is equal to the largest eigenvalue of A if A is symmetric and nonnegative definite.
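A quick numerical check of this fact (our snippet, for illustration only):

```python
import numpy as np

# For a symmetric nonnegative definite matrix, the operator norm
# sup_{|x| <= 1} |Ax| equals the largest eigenvalue.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B @ B.T                       # symmetric nonnegative definite
print(np.linalg.norm(A, 2))       # spectral norm ||A||
print(np.linalg.eigvalsh(A)[-1])  # largest eigenvalue: same value
```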

Let again \(S_n:=\sum _{j=1}^n X_j, n \ge 1.\) Horváth [12] obtained in 1994 the following multidimensional version of the Darling–Erdős theorem assuming that \(\mathbb {E}|X|^{2+\delta } < \infty \) for some \(\delta > 0\) and that Cov(X) (\(=\) the covariance matrix of X) is equal to the d-dimensional identity matrix I,

$$\begin{aligned} a_n \max _{1\le k \le n} |S_k|{/}\sqrt{k} - b_{d,n} \mathop {\rightarrow }\limits ^{d} \tilde{Y}, \end{aligned}$$
(2.1)

where \(\tilde{Y}\) has the same distribution as in dimension 1,

$$\begin{aligned} b_{d,n}:= 2 LL n + d LLL n{/}2 - \log (\varGamma (d{/}2)), \end{aligned}$$

and \(\varGamma (t), t > 0\) is the Gamma function. Recall that \(\varGamma (1{/}2)=\sqrt{\pi }\) so that this extends the one-dimensional Darling–Erdős theorem.

We are ready to formulate our general result. We consider non-decreasing sequences \(c_n\) of positive real numbers satisfying for large n,

$$\begin{aligned} \exp \left( -(\log n)^{\epsilon _n}\right) \le c_n{/}\sqrt{n} \le \exp \left( (\log n)^{\epsilon _n}\right) , \end{aligned}$$
(2.2)

where \(\epsilon _n \rightarrow 0.\)

Further, let for each \(n,\,\varGamma _n\) be the symmetric nonnegative definite matrix such that

$$\begin{aligned} \varGamma _n^2 = \left[ \mathbb {E}X^{(i)}X^{(j)} I\{|X| \le c_n\}\right] _{1 \le i, j \le d},\quad n \ge 1. \end{aligned}$$
(2.3)

If the covariance matrix of \(X=(X^{(1)},\ldots , X^{(d)})\) is positive definite, the matrices \(\varGamma _n\) will be invertible for large enough n. Replacing \(c_n\) by \(c_{n \vee n_0}\) for a suitable \(n_0 \ge 1\) if necessary, we can assume w.l.o.g. that all matrices \(\varGamma _n\) are invertible.
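In practice, \(\varGamma _n\) is the unique symmetric nonnegative definite square root of the truncated second-moment matrix in (2.3). A minimal numpy sketch of this construction (the function name `gamma_n` and the plug-in estimator below are ours, purely for illustration):

```python
import numpy as np

def gamma_n(sample, c_n):
    """Empirical analogue of (2.3): psd square root of the truncated
    second-moment matrix E[X X^T 1{|X| <= c_n}], estimated from
    `sample` of shape (m, d)."""
    keep = np.linalg.norm(sample, axis=1) <= c_n
    m2 = (sample[keep].T @ sample[keep]) / len(sample)  # truncated 2nd moments
    w, v = np.linalg.eigh(m2)                           # m2 is symmetric psd
    return v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.T

rng = np.random.default_rng(2)
x = rng.standard_normal((10**5, 2))
print(gamma_n(x, c_n=np.sqrt(10**4)))  # close to I for this choice of c_n
```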

Theorem 2.1

Let \(X, X_n, n \ge 1\) be i.i.d. mean zero random vectors in \(\mathbb {R}^d\) such that \(\mathbb {E}|X|^2 < \infty \) and \(\mathrm {Cov}(X) = I.\) Then, we have for any sequence \(\{c_n\}\) satisfying condition (2.2),

$$\begin{aligned} a_n \max _{1 \le k \le n} |\varGamma _k^{-1} S_k|{/}\sqrt{k} - b_{d,n} \mathop {\rightarrow }\limits ^{d} \tilde{Y}, \end{aligned}$$
(2.4)

where \(\tilde{Y}{:}\,\varOmega \rightarrow \mathbb {R}\) is a random variable such that

$$\begin{aligned} \mathbb {P}\{\tilde{Y} \le t\}=\exp (-\exp (-t)), t \in \mathbb {R}. \end{aligned}$$

Under the additional assumption

$$\begin{aligned} \mathbb {E}|X|^2 I\{|X| \ge t\}=o\left( ( LL t)^{-1}\right) \hbox { as } t \rightarrow \infty , \end{aligned}$$
(2.5)

we also have

$$\begin{aligned} a_n \max _{1 \le k \le n} |S_k|{/}\sqrt{k} - b_{d,n} \mathop {\rightarrow }\limits ^{d} \tilde{Y}. \end{aligned}$$
(2.6)

It is easy to see that condition (2.5) is satisfied if \(\mathbb {E}|X|^2 LL |X|< \infty .\) This latter condition, however, is more restrictive than (2.5).
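Indeed, on \(\{|X| \ge t\}\) we have \(LL |X| \ge LL t,\) whence

$$\begin{aligned} \mathbb {E}|X|^2 I\{|X| \ge t\} \le ( LL t)^{-1}\, \mathbb {E}|X|^2 LL |X|\, I\{|X| \ge t\} = o\left( ( LL t)^{-1}\right) , \end{aligned}$$

the last expectation tending to zero by dominated convergence.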

It is natural to ask whether this condition is also necessary as in the one-dimensional case (see Theorem 2 in [6]). This question becomes much more involved in the multidimensional case, and we get a slightly weaker result, namely that the following condition

$$\begin{aligned} \mathbb {E}|X|^2I\{|X| \ge t\}= O\left( ( LL t)^{-1}\right) \,\, \hbox { as }\,t \rightarrow \infty \end{aligned}$$
(2.7)

is necessary for (2.6).

To prove this result, we show that if condition (2.7) is not satisfied, then \(a_n \max _{1 \le k \le n} |S_k|{/}\sqrt{k} - b_{d,n}\) cannot converge in distribution to any variable of the form \(\tilde{Y}+c,\) where \(c \in \mathbb {R}.\)

If one allows this larger class of limiting distributions, condition (2.7) is optimal. There are examples where \(\mathbb {E}|X|^2I\{|X| \ge t\} = O(( LL t)^{-1})\) and \(a_n \max _{1 \le k \le n} |S_k|{/}\sqrt{k} - b_{d,n}\) converges in distribution to \(\tilde{Y}+c\) for some \(c \ne 0\) (see Theorem 6.1 below).

Theorem 2.2

Let \(X, X_n, n \ge 1\) be i.i.d. mean zero random vectors in \(\mathbb {R}^d\) such that \(\mathbb {E}|X|^2 < \infty \) and \(\mathrm {Cov}(X) = I,\) and suppose that there exists a \(c \in \mathbb {R}\) such that

$$\begin{aligned} a_n \max _{1 \le k \le n} |S_k|{/}\sqrt{k} - b_{d,n} \mathop {\rightarrow }\limits ^{d} \tilde{Y} + c, \end{aligned}$$
(2.8)

where \(\tilde{Y}\) is as in Theorem 2.1. Then, condition (2.7) holds.

Our basic tool for proving the above results is a new strong invariance principle for sums of i.i.d. random vectors which is valid under a finite second moment assumption. If one has an approximation with an almost sure error term of order \(o(\sqrt{n{/} LL n})\), one can obtain the Darling–Erdős theorem directly from the normally distributed case. The problem is that it is impossible to get such an approximation under the sole assumption of a finite second moment. In [6], it was shown that the “good” approximation of order \(o(\sqrt{n{/} LL n})\) is needed only when the sums \(|S_n|\) are large, and that in dimension 1 one can obtain approximations which are particularly efficient along the random subsequence where this happens. Using recent results on d-dimensional strong approximations (see [9, 15]), we are now able to obtain an analogue of the approximation in [6] in the d-dimensional setting [see Lemma 3.1 and relation (3.6) below]. As an additional new feature, we also show that an approximation by \(\varGamma _n \sum _{j=1}^n Z_j\) is possible, where \(Z_j, j\ge 1\) are i.i.d. \(\mathcal {N}(0,I)\)-distributed random vectors with \(\mathcal {N}(0,\varSigma )\) denoting the d-dimensional normal distribution with mean zero and covariance matrix \(\varSigma .\) This type of approximation leads to the improved versions of the Darling–Erdős theorem and Feller’s integral test as indicated in Sect. 1.

Theorem 2.3

Let \(X, X_n, n \ge 1\) be i.i.d. mean zero random vectors in \(\mathbb {R}^d\) such that \(\mathbb {E}|X|^2 < \infty \) and \(\mathrm {Cov}(X) = \varGamma ^2,\) where \(\varGamma \) is a symmetric nonnegative definite (d,d)-matrix. Let \(c_n\) be a non-decreasing sequence of positive real numbers satisfying condition (2.2) for large n and let \(\varGamma _n\) be defined as in (2.3). If the underlying p-space \((\varOmega , \mathcal {F}, \mathbb {P})\) is rich enough, one can construct independent \(\mathcal {N}(0,I)\)-distributed random vectors \(Z_n, n \ge 1\) such that we have for the partial sums \(T_n:=\sum _{j=1}^n Z_j, n \ge 1,\)

  (a) \(S_n - \varGamma \;T_n = o(\sqrt{n LL n})\) as \(n \rightarrow \infty \) with prob. 1,

  (b) \(\mathbb {P}\{ |S_n - \varGamma _n T_n| \ge 2\sqrt{n}{/} LL n, |S_n| \ge \frac{4}{3}\Vert \varGamma \Vert \sqrt{n LL n} \hbox { infinitely often}\}=0,\) and

  (c) \(\mathbb {P}\{ |S_n - \varGamma _n T_n| \ge 2\sqrt{n}{/} LL n, |\varGamma T_n| \ge \frac{4}{3}\Vert \varGamma \Vert \sqrt{n LL n} \hbox { infinitely often}\}=0.\)

Combining our strong invariance principle with the Kolmogorov–Erdős–Petrowski integral test for d-dimensional Brownian motion, one obtains by the same arguments as in Section 5 of [6] the following result,

Theorem 2.4

Let \(X, X_n, n \ge 1\) be i.i.d. mean zero random vectors in \(\mathbb {R}^d\) such that \(\mathbb {E}|X|^2 < \infty \) and \(\mathrm {Cov}(X) = \varGamma ^2,\) where \(\varGamma \) is a symmetric positive definite (d,d)-matrix. Let \(c_n\) be a non-decreasing sequence of positive real numbers satisfying condition (2.2) for large n and let \(\varGamma _n\) be defined as in (2.3). Then, we have for any non-decreasing function \(\phi {:}\,]0,\infty [ \rightarrow ]0,\infty [,\)

$$\begin{aligned} \mathbb {P}\left\{ |\varGamma _n^{-1}S_n| \le \sqrt{n}\phi (n) \hbox { eventually}\right\} = 1 \hbox { or }=0 \end{aligned}$$

according as

$$\begin{aligned} I_d(\phi ):= \sum _{n=1}^{\infty }n^{-1} \phi ^d(n)\exp (-\phi ^2(n){/}2) < \infty \hbox { or} =\infty . \end{aligned}$$

Note that we can assume w.l.o.g. that all matrices \(\varGamma _n\) are invertible since they converge to \(\varGamma \) which is invertible.

Let \(\lambda _n (\varLambda _n)\) be the smallest (largest) eigenvalue of \(\varGamma _n, n \ge 1.\) Assuming that Cov(X) \(=I\), we can infer from Theorem 2.4,

$$\begin{aligned} I_d(\phi ) < \infty \Rightarrow \mathbb {P}\left\{ |S_n| \le \varLambda _n \sqrt{n}\phi (n) \hbox { eventually}\right\} = 1 \end{aligned}$$
(2.9)

and

$$\begin{aligned} I_d(\phi ) = \infty \Rightarrow \mathbb {P}\left\{ |S_n| >\lambda _n \sqrt{n} \phi (n) \hbox { infinitely often}\right\} = 1, \end{aligned}$$
(2.10)

which is the d-dimensional version of the result conjectured by Feller [11].
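Relations (2.9) and (2.10) follow from Theorem 2.4 and the elementary bounds

$$\begin{aligned} \lambda _n |\varGamma _n^{-1}S_n| \le |S_n| \le \varLambda _n |\varGamma _n^{-1}S_n|, \end{aligned}$$

which hold since the eigenvalues of the symmetric positive definite matrix \(\varGamma _n\) lie in \([\lambda _n, \varLambda _n].\)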

The proof of our strong invariance principle (\(=\) Theorem 2.3) will be given in Sect. 3. In the two subsequent Sects. 4 and 5, we will show how Theorems 2.1 and 2.2 follow from the strong invariance principle. In Sect. 6, we return to the real-valued case and show that if \(\mathbb {E}X^2I\{|X| \ge t\} \sim c( LL t)^{-1}\), then (1.1) still remains valid if we replace \(\tilde{Y}\) by \(\tilde{Y}-c\). Finally, we answer a question which was posed in [13].

3 Proof of the Strong Invariance Principle

Our proof is divided into three steps.

Step 1 We recall a double truncation argument which goes back to Feller [11] for symmetric random variables. This was later extended to non-symmetric random variables in [6] and finally to random elements in Hilbert space in [7]. To formulate the relevant result, we need some extra notation. We set

$$\begin{aligned}&X_n^{'} := X_n I \left\{ | X_n | \le \sqrt{n}{{/}}( LL n)^5 \right\} ,&\overline{X}'_n := X'_n - \mathbb {E}X'_n;\\&X''_n := X_n I \left\{ \sqrt{n}{{/}}( LL n)^5< | X_n | \le \sqrt{n LL n} \right\} ,&\overline{X}''_n := X''_n - \mathbb {E}X''_n;\\&X'''_n := X_n I \left\{ \sqrt{n LL n} < | X_n | \right\} ,&\overline{X}'''_n := X'''_n - \mathbb {E}X'''_n; \end{aligned}$$

and we denote the corresponding sums by \(S'_n, \overline{S}'_n, S''_n, \overline{S}''_n, S'''_n, \overline{S}'''_n.\)
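For concreteness, here is a small sketch of this decomposition for a single vector (the helper names `L` and `truncate_parts` are ours, illustration only):

```python
import numpy as np

def L(t):
    """Lt = log(t v e), as defined in the Introduction."""
    return np.log(max(t, np.e))

def truncate_parts(x, n):
    """Split the vector X_n into the pieces X'_n, X''_n, X'''_n of Step 1."""
    lo = np.sqrt(n) / L(L(n)) ** 5     # sqrt(n)/(LL n)^5
    hi = np.sqrt(n * L(L(n)))          # sqrt(n LL n)
    a = np.linalg.norm(x)
    zero = np.zeros_like(x)
    x1 = x if a <= lo else zero        # X'_n
    x2 = x if lo < a <= hi else zero   # X''_n
    x3 = x if a > hi else zero         # X'''_n
    return x1, x2, x3                  # the three pieces sum to x

print(truncate_parts(np.array([0.3, -0.4]), n=100))
```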

Then, we have (see [7], Lemmas 11 and 12)

$$\begin{aligned} S_n - \overline{S}'_n = o(\sqrt{n LL n}) \hbox { a.s.} \end{aligned}$$
(3.1)

and

$$\begin{aligned} \mathbb {P}\left\{ |S_n - \overline{S}'_n| \ge \sqrt{n}{{/}}( LL n), |S_n| \ge \Vert \varGamma \Vert \sqrt{n LL n} \hbox { i.o.}\right\} =0. \end{aligned}$$
(3.2)

Step 2 Let \(\varSigma _n\) be the sequence of symmetric nonnegative definite matrices such that \(\varSigma _n^2\) is the covariance matrix of \(X'_n\) for \(n \ge 1.\) Furthermore, for \(t \ge 0\) let A(t) be the symmetric nonnegative definite matrix satisfying

$$\begin{aligned} A(t)^2 = \left[ \mathbb {E}X^{(j)}X^{(k)}I\{|X| \le t\}\right] _{1 \le j, k \le d}, t \ge 0. \end{aligned}$$

It is easy to see that \(A(t)^2, t \ge 0\) is monotone, that is, \(A(t)^2 - A(s)^2\) is nonnegative definite if \(0 \le s \le t;\) indeed, \(\langle x, (A(t)^2 - A(s)^2)x\rangle = \mathbb {E}\langle X, x\rangle ^2 I\{s < |X| \le t\} \ge 0\) for all \(x \in \mathbb {R}^d.\)

This implies that \(A(t), t \ge 0\) is monotone as well (see Theorem V.1.9 in [3]). Consequently, \(A(c_n), n \ge 1\) is a monotone sequence of symmetric nonnegative definite matrices whenever \(c_n\) is non-decreasing. Moreover, \(A(c_n)\) converges to \(\varGamma \) if \(c_n \rightarrow \infty .\)

We have the following strong approximation result, where we set

$$\begin{aligned} \tilde{\varGamma }_n:=A\left( \sqrt{n}{/}( LL n)^5\right) , n \ge 1. \end{aligned}$$

Lemma 3.1

If the underlying p-space is rich enough, one can construct independent random vectors \(Z_i \sim \mathcal {N}(0, I)\) such that

$$\begin{aligned} \overline{S}'_n - \sum _{i=1}^n \tilde{\varGamma }_i Z_i = o\left( \sqrt{n}{/} LL n\right) \text { a.s. } \end{aligned}$$
(3.3)

and

$$\begin{aligned} S_n - \sum _{i=1}^n \varGamma Z_i = o\left( \sqrt{n LL n}\right) \hbox { a.s.} \end{aligned}$$
(3.4)

Proof

  (i) We first show that one can construct independent \(\mathcal {N}(0,I)\)-distributed random vectors such that

    $$\begin{aligned} \overline{S}'_n - \sum _{i=1}^n \varSigma _i Z_i = o\left( \sqrt{n}{/} LL n\right) \text { a.s. } \end{aligned}$$

    By Corollary 3.2 from [9] and the fact that \(\mathbb {E} | \overline{X}'_n |^3 \le 8 \mathbb {E} | X'_n |^3,\) it is enough to show

    $$\begin{aligned} \sum _{n =1}^{\infty } \mathbb {E} \left| X'_n \right| ^3 {{\Big /}}\left( \frac{ \sqrt{ n } }{ LL n } \right) ^3 < \infty . \end{aligned}$$

    Using the simple inequality,

    $$\begin{aligned} \mathbb {E} | X'_n |^3 \le \mathbb {E} |X|^{2 + \delta } I\{|X| \le \sqrt{n}\} \sqrt{n}^{1-\delta }( LL n)^{-5(1-\delta )},\quad 0< \delta <1, \end{aligned}$$

    we find (setting \(\delta =2{/}5\)) that the above series is

    $$\begin{aligned} \le \sum _{n=1}^{\infty } \mathbb {E} |X|^{12{/}5} I\{|X| \le \sqrt{n}\} {/}\sqrt{n}^{12{/}5}. \end{aligned}$$

    Using a standard argument (see, for instance, the proof of part (a) of Lemma 3.3 in [9]), one can show that this last series is finite whenever \(\mathbb {E}|X|^2 < \infty :\) interchanging summation and expectation and using \(\sum _{n \ge m} n^{-6{/}5} = O(m^{-1{/}5}),\) the series is bounded by \(C\,\mathbb {E}\big [|X|^{12{/}5}(|X|^2 \vee 1)^{-1{/}5}\big ] \le C\,\mathbb {E}\big [|X|^2 + 1\big ] < \infty .\)

  (ii) To complete the proof of (3.3), it is now sufficient to show that

    $$\begin{aligned} \sum _{i=1}^n (\varSigma _i - \tilde{\varGamma }_i)Z_i = o(\sqrt{n}{/} LL n) \hbox { a.s.} \end{aligned}$$

    By a standard argument, this follows if

    $$\begin{aligned} \sum _{n=1}^{\infty } \frac{ \mathbb {E} \left| \left( \varSigma _n - \tilde{\varGamma }_n \right) Z_n \right| ^2 }{ n{/}( LL n)^2 } < \infty . \end{aligned}$$
    (3.5)

    Since \(Z_n \sim \mathcal {N}(0, I)\), we have

    $$\begin{aligned} \mathbb {E} \left| \left( \varSigma _n - \tilde{\varGamma }_n \right) Z_n \right| ^2 \le d \left\| \varSigma _n - \tilde{\varGamma }_n \right\| ^2 \le d\Vert \varSigma _n^2 -\tilde{ \varGamma }_n^2\Vert , \end{aligned}$$

    where we have used Theorem X.1.1 in [3] for the last inequality. From the definition of \(\varSigma _n\) and \(\tilde{\varGamma }_n\), it is obvious that

    $$\begin{aligned} \langle x, (\tilde{\varGamma }_n ^2-\varSigma _n^2 )x \rangle = \left( \mathbb {E} \langle X, x\rangle I \left\{ | X| \le \sqrt{n}{/}( LL n)^5 \right\} \right) ^2,\quad x \in \mathbb {R}^d. \end{aligned}$$

    The last expression equals \( \left( \mathbb {E} \langle X, x\rangle I \{ | X| > \sqrt{n}{/}( LL n)^5 \} \right) ^2\) since \( \mathbb {E} \langle X, x\rangle =0.\) Hence \(\left\| \varSigma _n^2 - \tilde{\varGamma }_n^2 \right\| \le \left( \mathbb {E} | X| I \{ | X| > \sqrt{n}{/}( LL n)^5 \} \right) ^2 \le (\mathbb {E}|X|^2)^2 n^{-1} ( LL n)^{10} \). It is easy now to see that the series in (3.5) is finite.

  (iii) Finally note that

    $$\begin{aligned} S_n - \sum _{i=1}^n \varGamma Z_i = (S_n - \overline{S}'_n) + \left( \overline{S}'_n -\sum _{i=1}^n \tilde{\varGamma }_i Z_i\right) + \sum _{i=1}^n (\tilde{\varGamma }_i -\varGamma )Z_i, \end{aligned}$$

    where the first two terms are of almost sure order \(o(\sqrt{n LL n})\) by (3.1) and (3.3), respectively. Since \(\tilde{\varGamma }_n \rightarrow \varGamma \) as \( n \rightarrow \infty \), we also have that

    $$\begin{aligned} \sum _{i=1}^n (\tilde{\varGamma }_i -\varGamma )Z_i=o(\sqrt{n LL n}) \hbox { a.s.}, \end{aligned}$$

    and we can conclude that indeed \(S_n - \sum _{i=1}^n \varGamma Z_i = o(\sqrt{n LL n})\) a.s. Lemma 3.1 has been proven. \(\square \)

Step 3 Combining Lemma 3.1 with relations (3.1) and (3.2), we find that

$$\begin{aligned} \mathbb {P}\left\{ \left| S_n - \sum _{j=1}^n \tilde{\varGamma }_j Z_j\right| \ge \ 3\sqrt{n}{/}(2 LL n), |\varGamma T_n| \ge \frac{5}{4}\Vert \varGamma \Vert \sqrt{n LL n} \hbox { i.o.}\right\} =0. \end{aligned}$$
(3.6)

We next show that

$$\begin{aligned} \mathbb {P}\left\{ \left| \sum _{j=1}^n (\varGamma _n - \tilde{\varGamma }_j)Z_j\right| \ge \sqrt{n}{/}(2 LL n), |T_n| \ge \frac{5}{4}\sqrt{n LL n} \hbox { i.o.}\right\} =0, \end{aligned}$$
(3.7)

where

$$\begin{aligned} \varGamma _n:=A(c_n), n \ge 1 \end{aligned}$$

and \(c_n\) is an arbitrary non-decreasing sequence of positive real numbers satisfying condition (2.2) for large n.

Using that \(\{|\varGamma T_n| \ge \Vert \varGamma \Vert x\} \subset \{|T_n| \ge x\}, x >0\), we get from (3.6) and (3.7):

$$\begin{aligned} \mathbb {P}\left\{ \left| S_n - \varGamma _n T_n\right| \ge 2\sqrt{n}{{/}} LL n, |\varGamma T_n| \ge \frac{5}{4}\Vert \varGamma \Vert \sqrt{n LL n} \hbox { i.o.}\right\} =0. \end{aligned}$$
(3.8)

Further recall that \(S_n -\varGamma T_n = o(\sqrt{n LL n})\) a.s. (see Lemma 3.1). Consequently, we can infer from (3.8) that

$$\begin{aligned} \mathbb {P}\left\{ \left| S_n - \varGamma _n T_n\right| \ge 2\sqrt{n}{{/}} LL n, |S_n| \ge \frac{4}{3} \Vert \varGamma \Vert \sqrt{n LL n} \hbox { i.o.}\right\} =0. \end{aligned}$$
(3.9)

Note that part (a) of Theorem 2.3 is relation (3.4), part (b) is (3.9), and part (c) follows from (3.8) since \(4{/}3 > 5{/}4.\) We see that the proof of Theorem 2.3 is complete once we have established (3.7). Toward this end, we need the following inequality, which is valid for normally distributed random vectors \(Y{:}\,\varOmega \rightarrow \mathbb {R}^d\) with mean zero and covariance matrix \(\varSigma \):

$$\begin{aligned} \mathbb {P}\{|Y| \ge x\} \le \exp \left( -x^2{/}(8\sigma ^2)\right) , x \ge 2\mathbb {E}|Y|^2, \end{aligned}$$
(3.10)

where \(\sigma ^2\) is the largest eigenvalue of \(\varSigma \) (see Lemma 4 in [7]).

From (3.10), we trivially get that

$$\begin{aligned} \mathbb {P}\{|Y| \ge x\} \le 2\exp \left( -x^2{/}\left( 8\mathbb {E}|Y|^2\right) \right) , x \ge 0. \end{aligned}$$
(3.11)

Though this last inequality is clearly suboptimal, it will nevertheless be more than sufficient for the proof of (3.7).
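A quick Monte Carlo sanity check of (3.11) (our snippet; the matrix A below is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 0.5]])
Y = rng.standard_normal((10**6, 3)) @ A.T  # centered Gaussian, Cov = A A^T
e_y2 = np.trace(A @ A.T)                   # E|Y|^2
norms = np.linalg.norm(Y, axis=1)
for x in [1.0, 2.0, 4.0, 6.0]:
    lhs = np.mean(norms >= x)              # P{|Y| >= x}
    rhs = 2 * np.exp(-x**2 / (8 * e_y2))   # bound (3.11)
    print(x, lhs, rhs, lhs <= rhs)
```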

Proof of (3.7) To simplify notation, we set \(d_n:=\sqrt{n}{/}( LL n)^5, n \ge 1,\) \(n_k:=2^k\) and \(\ell _k:=[2^{k-1}{/}(Lk)^5], k \ge 0.\) By the Borel–Cantelli lemma, it is enough to show that

$$\begin{aligned} \sum _{k=1}^{\infty }\mathbb {P}\left( \bigcup _{n=n_{k-1}}^{n_k}\left\{ \left| \sum _{j=1}^n (\varGamma _n - \tilde{\varGamma }_j)Z_j\right| \ge \sqrt{n}{/}(2 LL n), |T_n| \ge \frac{5}{4} \sqrt{n LL n} \right\} \right) < \infty . \end{aligned}$$

Set \(\tilde{\mathbb {N}}:= \tilde{\mathbb {N}}_1 \cap \tilde{\mathbb {N}}_2\), where

$$\begin{aligned} \tilde{\mathbb {N}}_1:=\left\{ k{:}\,\Vert \varGamma _{n_k} - \tilde{\varGamma }_{\ell _k}\Vert \le \Vert \varGamma \Vert (Lk)^{-5{/}2}\right\} \end{aligned}$$

and

$$\begin{aligned} \tilde{\mathbb {N}}_2:= \left\{ k{:}\,\Vert \varGamma _{n_k} - \varGamma _{n_{k-1}}\Vert \le (Lk)^{-2}\right\} . \end{aligned}$$

Then, it is easy to see that the above series is finite if

$$\begin{aligned} \sum _{k \in \tilde{\mathbb {N}}} \mathbb {P}\left\{ \max _{n_{k-1} \le n \le n_k} \left| \sum _{j=1}^n (\varGamma _n -\tilde{ \varGamma }_j)Z_j\right| \ge 2^{(k-1){/}2}{/}(2 Lk)\right\} < \infty \end{aligned}$$
(3.12)

and

$$\begin{aligned} \sum _{k \not \in \tilde{\mathbb {N}}} \mathbb {P}\left\{ \max _{n_{k-1} \le n \le n_k}|T_n| \ge 2^{(k-1){/}2} (Lk)^{1{/}2}\right\} < \infty . \end{aligned}$$
(3.13)

To bound the series in (3.12), we first note that employing the Lévy inequality for sums of independent symmetric random vectors, one obtains

$$\begin{aligned}&\mathbb {P}\left\{ \max _{n_{k-1} \le n \le n_k} \left| \sum _{j=1}^n (\varGamma _n - \tilde{\varGamma }_j)Z_j \right| \ge 2^{(k-1){/}2}{/}(2 Lk)\right\} \\&\quad \le \mathbb {P}\left\{ \max _{n_{k-1} \le n \le n_k} \left| \sum _{j=1}^n (\varGamma _{n_{k}} - \tilde{\varGamma }_j)Z_j \right| \ge 2^{(k-1){/}2}{/}(4 Lk)\right\} \\&\qquad +\; \mathbb {P}\left\{ \max _{n_{k-1} \le n \le n_k} \left| (\varGamma _{n_k} - \varGamma _n)\sum _{j=1}^n Z_j \right| \ge 2^{(k-1){/}2}{/}( 4Lk)\right\} \\&\quad \le 2 \mathbb {P}\left\{ \left| \sum _{j=1}^{n_k} (\varGamma _{n_{k}} - \tilde{\varGamma }_j)Z_j \right| \ge 2^{(k-1){/}2}{/}(4Lk)\right\} \\&\qquad +\;2\mathbb {P}\left\{ \Vert \varGamma _{n_k} - \varGamma _{n_{k-1}}\Vert \;\left| \sum _{j=1}^{n_k} Z_j\right| \ge 2^{(k-1){/}2}{/}(4 Lk)\right\} =:2 p_{k,1} + 2 p_{k,2}, \end{aligned}$$

where we have also used the fact that

$$\begin{aligned} \Vert \varGamma _{n_k}-\varGamma _n\Vert \le \Vert \varGamma _{n_k}-\varGamma _{n_{k-1}}\Vert \hbox { if } n_{k-1} \le n \le n_k. \end{aligned}$$

This follows easily from the monotonicity of the sequence \(\varGamma _n.\)

To bound \(p_{k,1}\), we first note that by Theorem X.1.1 in [3] for \(1 \le j \le \ell _k,\)

$$\begin{aligned} \Vert \varGamma _{n_k} - \tilde{\varGamma }_j\Vert ^2\le & {} \Vert \varGamma ^2_{n_k} - \tilde{\varGamma }^2_j\Vert = \sup _{|x| \le 1} \mathbb {E}\langle x, X \rangle ^2 I\left\{ d_j \wedge c_{n_k} < |X| \le d_j \vee c_{n_k}\right\} \nonumber \\\le & {} \sup _{|x| \le 1} \mathbb {E}\langle x, X \rangle ^2=\Vert \varGamma \Vert ^2. \end{aligned}$$
(3.14)

Apply (3.11) with \(Y= \sum _{j=1}^{n_k} (\varGamma _{n_{k}} - \tilde{\varGamma }_j)Z_j.\) Then, clearly \(\mathbb {E}Y = 0\) and, moreover, by independence of the random vectors \(Z_j\), we have for \(k \in \tilde{\mathbb {N}},\)

$$\begin{aligned} \mathbb {E}|Y|^2= & {} \sum _{j=1}^{n_k} \mathbb {E}|(\varGamma _{n_{k}} - \tilde{\varGamma }_j)Z_j|^2 \le d \sum _{j=1}^{n_k} \Vert \varGamma _{n_k} - \tilde{\varGamma }_j\Vert ^2\\ {}\le & {} d\Vert \varGamma \Vert ^2\left( \ell _k + (n_k-\ell _k)(Lk)^{-5}\right) \le 2d\Vert \varGamma \Vert ^2 n_k(Lk)^{-5}. \end{aligned}$$

We conclude that

$$\begin{aligned} p_{k,1} \le 2\exp \left( -(Lk)^3{/}\left( 512d\Vert \varGamma \Vert ^2\right) \right) , k \in \tilde{\mathbb {N}}_1. \end{aligned}$$

Similarly, we obtain

$$\begin{aligned} p_{k,2} \le \mathbb {P}\left\{ \left| \sum _{j=1}^{n_k} Z_j\right| \ge 2^{(k-1){/}2} Lk{/}4\right\} \le 2\exp \left( -(Lk)^2{/}(256d)\right) ,\quad k \in \tilde{\mathbb {N}}_2. \end{aligned}$$

It is now clear that the series in (3.12) is finite.

To show that the series in (3.13) is finite, we note that by (3.11) and the Lévy inequality,

$$\begin{aligned} \mathbb {P}\left\{ \max _{n_{k-1} \le n \le n_k}|T_n| \ge 2^{(k-1){/}2} (Lk)^{1{/}2}\right\} \le 2 \mathbb {P}\left\{ |T_{n_k}| \ge 2^{(k-1){/}2} (Lk)^{1{/}2}\right\} \le 4 k^{-\eta }, \end{aligned}$$

where \(\eta =(16d)^{-1}\) and it is enough to check that

$$\begin{aligned} \sum _{k \not \in \tilde{\mathbb {N}}}k^{-\eta } < \infty . \end{aligned}$$

To verify that this series is finite, observe that by the argument used in (3.14) we have,

$$\begin{aligned} \Vert \varGamma _{n_k}-\varGamma _{n_{k-1}}\Vert ^2\le & {} \sup _{|x| \le 1}\mathbb {E}\langle x, X \rangle ^2 I\left\{ c_{n_{k-1}}< |X| \le \ c_{n_k}\right\} \\\le & {} \mathbb {E}|X|^2 I\left\{ c_{n_{k-1}} < |X| \le \ c_{n_k}\right\} , \end{aligned}$$

which implies

$$\begin{aligned} \sum _{k=1}^{\infty } \Vert \varGamma _{n_k}-\varGamma _{n_{k-1}}\Vert ^2 \le \mathbb {E}|X|^2 < \infty . \end{aligned}$$

We conclude that

$$\begin{aligned} \sum _{k \not \in \tilde{\mathbb {N}}_2}(Lk)^{-4} < \infty , \end{aligned}$$

which in turn implies \(\sum _{k \not \in \tilde{\mathbb {N}}_2}k^{-\eta } < \infty \) since \(k^{-\eta } \le (Lk)^{-4}\) for large k.

So the proof of (3.7) is complete if we show that

$$\begin{aligned} \sum _{k \not \in \tilde{\mathbb {N}}_1}k^{-\eta } < \infty . \end{aligned}$$
(3.15)

We need another lemma.

Lemma 3.2

Consider two sequences \(c_{k,i}, k \ge 1\) of positive real numbers satisfying for large enough k

$$\begin{aligned} 2^{k{/}2}\exp (-k^{\delta }) \le c_{k,i} \le 2^{k{/}2}\exp (k^{\delta }),i=1,2, \end{aligned}$$
(3.16)

where \(0<\delta <1.\) Set \(\varGamma _{k,i} := A(c_{k,i}), i=1,2, k \ge 1.\) Then, we have,

$$\begin{aligned} \sum _{k=1}^{\infty }k^{-\delta } \Vert \varGamma _{k,1} - \varGamma _{k,2}\Vert ^2 < \infty . \end{aligned}$$

Proof

Using the same argument as in (3.14), we have for large k

$$\begin{aligned} \Vert \varGamma _{k,1} - \varGamma _{k,2}\Vert ^2 \le \mathbb {E}|X|^2 I\left\{ 2^{k{/}2}\exp (-k^{\delta }) <|X| \le 2^{k{/}2}\exp (k^{\delta })\right\} \le \sum _{j=[k- 3k^{\delta }]}^{ [k+ 3k^{\delta }]} \beta _j, \end{aligned}$$

where \(\beta _j :=\mathbb {E}|X|^2 I\{2^{j-1} < |X|^2 \le 2^j\}, j \ge 1\).

We can conclude that for some \(k_0 \ge 1\) and a suitable \(j_0 \ge 0,\)

$$\begin{aligned} \sum _{k=k_0}^{\infty }k^{-\delta } \Vert \varGamma _{k,1} - \varGamma _{k,2}\Vert ^2 \le \sum _{j=j_0}^{\infty }\beta _j \sum _{k=m_1(j)}^{m_2(j)} k^{-\delta }, \end{aligned}$$
(3.17)

where

$$\begin{aligned} m_1(j)=\min \left\{ k \ge k_0{:}\,[k+3k^{\delta }] \ge j\right\} \end{aligned}$$

and

$$\begin{aligned} m_2(j)=\max \left\{ k \ge k_0{:}\, [k-3k^{\delta }] \le j\right\} . \end{aligned}$$

It is easy to see that \(m_1(j) \ge j-3j^{\delta } \ge j{/}2\) and \(m_2(j) \le j + 4j^{\delta }\) for large j.

Consequently, we have for large j

$$\begin{aligned} \sum _{k=m_1(j)}^{m_2(j)} k^{-\delta } \le 2^{\delta } (m_2(j)-m_1(j)+ 1)j^{-\delta } \le 2^{3+\delta } < \infty . \end{aligned}$$
(3.18)

We obviously have \(\sum _{j=1}^{\infty } \beta _j < \infty \) (as \(\mathbb {E}|X|^2 < \infty \)). Combining relations (3.17) and (3.18), we obtain the assertion of the lemma. \(\square \)

We apply the above lemma with \(c_{k,1} = c_{n_k}, c_{k,2} = d_{\ell _k}, k \ge 1.\) From condition (2.2), we readily obtain that for large k,

$$\begin{aligned} 2^{k{/}2}\exp (-k^{\epsilon '_k}) \le c_{k,1} \le 2^{k{/}2}\exp (k^{\epsilon '_k}), \end{aligned}$$

where \(\epsilon '_k:=\epsilon _{n_k} \rightarrow 0\) so that condition (3.16) is satisfied for any \(\delta > 0\). This is also the case for the sequence \(c_{k,2}\). So we can choose \(\delta =\eta {/}2\), and it follows that

$$\begin{aligned} \infty >\sum _{k \not \in \tilde{\mathbb {N}}_1}k^{-\eta {/}2}\Vert \varGamma _{n_k} -\tilde{\varGamma }_{\ell _k}\Vert ^2 \ge \Vert \varGamma \Vert ^2\sum _{k \not \in \tilde{\mathbb {N}}_1} k^{-\eta {/}2}(Lk)^{-5}, \end{aligned}$$

which shows that (3.15) holds, since \(k^{-\eta } \le k^{-\eta {/}2}(Lk)^{-5}\) for large k.

4 Proof of Theorem 2.1

We first prove (2.4). Set \(k_n= [\exp ((Ln)^{\alpha })]\), where \(0< \alpha <1\). Then, it follows from the d-dimensional version of the Hartman–Wintner LIL that for any given \(\epsilon >0,\) with prob. 1,

$$\begin{aligned} |\varGamma _k^{-1} S_k|{/}\sqrt{k} \le \lambda _k^{-1}\sqrt{2 LL k}(1+\epsilon ), k \ge k_0(\omega , \epsilon ), \end{aligned}$$

where \(\lambda _k\) is the smallest eigenvalue of \(\varGamma _k\). As \(\lambda _k \nearrow 1,\) we can conclude that for large enough n

$$\begin{aligned} \max _{1 \le k <k_n} |\varGamma _k^{-1} S_k|{/}\sqrt{k} \le \sqrt{2\alpha LL n}(1+2\epsilon ), \end{aligned}$$

which is \(\le \sqrt{2 LL n}\) if we choose \(\epsilon \) small enough. It follows that

$$\begin{aligned} a_n \max _{1 \le k <k_n} |\varGamma _k^{-1} S_k|{/}\sqrt{k} -b_{d,n} \rightarrow -\infty \hbox { a.s.} \end{aligned}$$
(4.1)

So (2.4) holds if and only if

$$\begin{aligned} a_n \max _{k \in K_n} |\varGamma _k^{-1} S_k|{/}\sqrt{k} -b_{d,n} \mathop {\rightarrow }\limits ^{d} \tilde{Y}, \end{aligned}$$

where \(K_n:=\{k_n+1,\ldots , n\}.\)

We split \(K_n\) into two random subsets:

$$\begin{aligned} K_{n,1}(\cdot ):=\left\{ k \in K_n{:}\,|S_k - \varGamma _k T_k| \le 2\sqrt{k}{/}( LL k)\right\} ,\quad K_{n,2}(\cdot ):= K_n{\setminus }K_{n,1}(\cdot ). \end{aligned}$$

In view of Theorem 2.3(b) (where we set \(\varGamma =I\)), there are with prob. 1 only finitely many k’s such that

$$\begin{aligned} |S_k - \varGamma _k T_k| > 2\sqrt{k}{/} LL k \hbox { and }|\varGamma _k^{-1} S_k| \ge 4 \sqrt{k LL k}{/}(3\lambda _k), \end{aligned}$$

where \(\lambda _k\) is again the smallest eigenvalue of \(\varGamma _k\).

As \(\lambda _k \nearrow 1,\) we can conclude that with prob. 1 there are only finitely many k’s such that

$$\begin{aligned} |S_k - \varGamma _k T_k| > 2\sqrt{k}{/} LL k \hbox { and }|\varGamma _k^{-1} S_k| \ge \sqrt{2k LL k}, \end{aligned}$$

and it follows that

$$\begin{aligned} a_n \max _{k \in K_{n,2}(\cdot )} |\varGamma _k^{-1} S_k|{/}\sqrt{k} -b_{d,n} \rightarrow -\infty \hbox { a.s.} \end{aligned}$$

We see that (2.4) is equivalent to

$$\begin{aligned} a_n \max _{k \in K_{n,1}(\cdot )} |\varGamma _k^{-1} S_k|{/}\sqrt{k} -b_{d,n} \mathop {\rightarrow }\limits ^{d} \tilde{Y}. \end{aligned}$$

From the definition of the sets \(K_{n,1}(\cdot )\), we easily get that

$$\begin{aligned} a_n \max _{k \in K_{n,1}(\cdot )} |\varGamma _k^{-1} S_k|{/}\sqrt{k} - a_n \max _{k \in K_{n,1}(\cdot )} | T_k|{/}\sqrt{k} \rightarrow 0 \hbox { a.s.} \end{aligned}$$

By Slutsky’s lemma, (2.4) holds if and only if

$$\begin{aligned} a_n \max _{k \in K_{n,1}(\cdot )} |T_k|{/}\sqrt{k} -b_{d,n} \mathop {\rightarrow }\limits ^{d} \tilde{Y}. \end{aligned}$$

Looking at Theorem 2.3(c), we can also conclude that

$$\begin{aligned} a_n \max _{k \in K_{n,2}(\cdot )} |T_k|{/}\sqrt{k} -b_{d,n} \rightarrow -\infty \hbox { a.s.} \end{aligned}$$

and the proof of (2.4) further reduces to showing

$$\begin{aligned} a_n \max _{k \in K_n} |T_k|{/}\sqrt{k} -b_{d,n} \mathop {\rightarrow }\limits ^{d} \tilde{Y}. \end{aligned}$$

Using the same argument as in (4.1), we also see that

$$\begin{aligned} a_n \max _{1 \le k <k_n} |T_k|{/}\sqrt{k} -b_{d,n} \rightarrow -\infty \hbox { a.s.} \end{aligned}$$

and we have shown that (2.4) holds if

$$\begin{aligned} a_n \max _{1 \le k \le n} |T_k|{/}\sqrt{k} -b_{d,n} \mathop {\rightarrow }\limits ^{d} \tilde{Y}. \end{aligned}$$

This is the Darling–Erdős theorem for normally distributed random vectors which follows from (2.1). Thus, (2.4) has been proven.

We now turn to the proof of (2.6). By Slutsky’s lemma and (4.1), it is enough to show that

$$\begin{aligned} \varDelta _n:=a_n \left| \max _{k_n \le k \le n} |S_k|{/}\sqrt{k} - \max _{k_n \le k \le n} |\varGamma _k^{-1}S_k|{/}\sqrt{k} \right| \mathop {\rightarrow }\limits ^{\mathbb {P}} 0. \end{aligned}$$

Using the triangle inequality, it is easy to see that

$$\begin{aligned} \varDelta _n \le a_n \max _{k_n \le k \le n} \left| (I -\varGamma _k)\varGamma _k^{-1}S_k{/}\sqrt{k}\right| \le a_n\Vert I -\varGamma _{k_n}\Vert \max _{1 \le k \le n} |\varGamma _k^{-1}S_k|{/}\sqrt{k}. \end{aligned}$$

From (2.4), it follows that \( (\max _{1 \le k \le n} |\varGamma _k^{-1}S_k|{/}\sqrt{k}){/}\sqrt{ LL n}\) is stochastically bounded. By assumption (2.5), we also have that \(\Vert I -\varGamma _{k_n}\Vert = o(( LL n)^{-1})\). Recalling that \(a_n=\sqrt{2 LL n},\) we see that \(\varDelta _n \mathop {\rightarrow }\limits ^{\mathbb {P}} 0\) and our proof of Theorem 2.1 is complete.   \(\square \)

5 Proof of Theorem 2.2

Using the same arguments as in the proof of Theorem 2.1, we can infer from (2.8) via relations (3.4) and (3.6) that

$$\begin{aligned} M_n:=a_n \max _{k_n \le k \le n} |T'_k|{/}\sqrt{k} - b_{d,n} \mathop {\rightarrow }\limits ^{d} \tilde{Y}+c, \end{aligned}$$
(5.1)

where \(T'_k =\sum _{j=1}^k \tilde{\varGamma }_j Z_j\) and the random vectors \(Z_j\) are i.i.d. with \(\mathcal {N}(0, I)\)-distribution and \(k_n \le \exp ((Ln)^{\alpha })\) for some \(0< \alpha <1.\)

Our first lemma gives an upper bound of \(\mathbb {P}\{M_n > t\}\) via the corresponding probability for the maximum of a subcollection of the random variables \(\{|T'_k|{/}\sqrt{k}{:}\,k_n \le k \le n\}\) (see Lemma 4.3 in [5] for a related result).

Let \(0< \xi < 1\) be fixed. Set

$$\begin{aligned} m_j = [\exp (j\xi {/} LL n)], j \ge 1 \hbox { and }N=N_n=[Ln LL n {/}\xi ]. \end{aligned}$$

Then, \(m_{N} \le n \le m_{N+1}\). Also note that the sequence \(m_j\) depends on n and \(\xi \).

Next, set

$$\begin{aligned} j_n:= \min \{j{:}\,m_j \ge Ln\} \hbox { and }k_n=m_{j_n} \end{aligned}$$

so that \(j_n \sim \xi ^{-1}( LL n)^2\) and \(k_n \sim Ln\) as \(n \rightarrow \infty .\)
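A small sketch computing these blocking indices (the helper name `blocking` is ours):

```python
import numpy as np

def blocking(n, xi=0.1):
    """Block endpoints m_j = [exp(j*xi/LL n)], N = [Ln LL n/xi], and j_n."""
    ln, lln = np.log(n), np.log(np.log(n))
    N = int(ln * lln / xi)
    m = [int(np.exp(j * xi / lln)) for j in range(1, N + 2)]  # m_1..m_{N+1}
    j_n = next(j for j, mj in enumerate(m, start=1) if mj >= ln)
    return m, N, j_n

m, N, j_n = blocking(10**6)
print(N, j_n, m[N - 1] <= 10**6 <= m[N])  # checks m_N <= n <= m_{N+1}
```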

Finally to simplify notation, we set \(f_n(y)=(b_{d,n} + y){/}a_n, y \in \mathbb {R}\) so that

$$\begin{aligned} \mathbb {P}\{M_n> y\}=\mathbb {P}\left\{ \max _{k_n \le k \le n} |T'_k|{/}\sqrt{k} > f_n(y)\right\} . \end{aligned}$$

Lemma 5.1

Given \( 0< \delta < 1\), we have for \(y \in \mathbb {R}\) and \(n \ge n_0=n_0(\xi ,\delta ,y),\)

$$\begin{aligned} (1-\delta )\; \mathbb {P}\left\{ M_n> y+\delta \right\} \le \mathbb {P}\left\{ \max _{j_n \le j \le N} | T'_{m_j} | {{/}}\sqrt{ m_j } > f_n(y) \right\} + \mathbb {P}\left\{ |Z_1| \ge f_n(y)\right\} , \end{aligned}$$

provided that \(0 < \xi \le \delta ^{3}{/}(36d).\)

Proof

Noting that

$$\begin{aligned}&\mathbb {P}\left\{ M_n> y+\delta \right\} =\mathbb {P}\left\{ \max _{k_n \le k \le n} |T'_k|{{/}}\sqrt{k}> f_n(y+ \delta ) \right\} \\&\quad \le \mathbb {P}\left\{ \max _{k_n \le k \le n }| T'_k |{{/}}\sqrt{k}> f_n(y+ \delta ), \max _{j_n \le j \le N} |T'_{m_j} |{{/}}\sqrt{ m_j } \le f_n(y ) \right\} \\&\quad +\, \mathbb {P}\left\{ \max _{j_n \le j \le N} | T'_{m_j}| {{/}}\sqrt{ m_j } > f_n(y ) \right\} ,\end{aligned}$$

it is enough to show that

$$\begin{aligned}&\mathbb {P}\left\{ \max _{k_n \le k \le n} |T'_k |{{/}}\sqrt{k}> f_n(y+ \delta ), \max _{j_n \le j \le N}| T'_{m_j} | {/}\sqrt{ m_j } \le f_n(y ) \right\} \\ \nonumber&\quad \le \delta \; \mathbb {P}\left\{ \max _{k_n \le k \le n} |T'_k |{{/}}\sqrt{k} > f_n(y+ \delta )\right\} + \mathbb {P}\left\{ |Z_1| \ge f_n(y)\right\} , \end{aligned}$$
(5.2)

if \(\xi \) is sufficiently small.

Consider the following stopping time,

$$\begin{aligned} \tau := \inf \left\{ k\ge k_n{:}\,| T'_k | {{/}}\sqrt{k} > f_n(y+\delta ) \right\} . \end{aligned}$$

Then, it is obvious that the probability in (5.2) is bounded above by

$$\begin{aligned} \sum _{j=j_n+1}^{N-1 } \sum _{k= m_{j-1}+1}^{ m_j-1} \mathbb {P}\left\{ \tau = k, \max _{j_n \le j \le N}| T'_{m_j} | {/}\sqrt{ m_j } \le f_n(y ) \right\} + \; \mathbb {P}\left\{ m_{N-1} < \tau \le n \right\} . \end{aligned}$$
(5.3)

Furthermore, we have for \(j_n +1 \le j \le N-1,\)

$$\begin{aligned}&\sum _{k= m_{j-1}+1}^{ m_j -1} \mathbb {P}\left\{ \tau = k, \max _{j_n\le j \le N} | T'_{m_j} | {/}\sqrt{ m_j } \le f_n(y ) \right\} \\&\quad \le \sum _{k= m_{j-1}+1}^{ m_j -1} \mathbb {P}\{ \tau = k \} \mathbb {P}\left\{ \frac{ |T'_k - T'_{m_{j+1}}|}{\sqrt{m_{j+1} - k} } > \frac{ \sqrt{k} f_n(y+\delta ) -\sqrt{m_{j+1} } f_n(y) }{ \sqrt{m_{j+1} - k} }\right\} . \end{aligned}$$

Next observe that

$$\begin{aligned}&\max _{m_{j-1}< k \le m_j} \mathbb {P}\left\{ \frac{ |T'_k - T'_{m_{j+1}} | }{ \sqrt{m_{j+1} - k} }> \frac{ \sqrt{k} f_n(y+\delta ) -\sqrt{m_{j+1} } f_n(y) }{ \sqrt{m_{j+1} - k} }\right\} \\&\quad \le \mathbb {P}\left\{ | Z_1 | > \frac{ \sqrt{m_{j-1} } f_n(y+\delta ) -\sqrt{m_{j+1} } f_n(y) }{ \sqrt{m_{j+1} - m_{j-1} } }\right\} . \end{aligned}$$

After some calculation, we find that for large enough n

$$\begin{aligned} \frac{ \sqrt{m_{j-1} } f_n(y+\delta ) -\sqrt{m_{j+1} } f_n(y) }{ \sqrt{m_{j+1} - m_{j-1} } } \ge \frac{\delta }{3\sqrt{\xi }}- 4\sqrt{\xi } \ge \frac{\delta }{6\sqrt{\xi }} \end{aligned}$$

where the last inequality holds since \(\xi \le \delta {/}24.\) We trivially have by Markov’s inequality,

$$\begin{aligned} \mathbb {P}\{|Z_1| \ge \delta {/}(6\sqrt{\xi })\} \le 36 \xi \mathbb {E}[|Z_1|^2]{/}\delta ^2= 36\xi d{/}\delta ^2, \end{aligned}$$

which is \(\le \delta \) by our condition on \(\xi .\)

It follows that

$$\begin{aligned} \sum _{j=j_n+1}^{N-1 } \sum _{k= m_{j-1}+1}^{ m_j -1} \mathbb {P}\left\{ \tau = k, \max _{j_n\le j \le N} | T'_{m_j}|{/}\sqrt{ m_j } \le f_n(y ) \right\} \le \delta \; \mathbb {P}\{k_n \le \tau \le m_{N-1}\}. \end{aligned}$$
(5.4)

Concerning the second term in (5.3), simply note that

$$\begin{aligned}&\mathbb {P}\left\{ m_{N-1}< \tau \le n \right\} \\&\quad \le \mathbb {P}\left\{ m_{N-1} < \tau \le n, |T'_{m_{N+1}}|{/}\sqrt{m_{N+1}} \le f_n(y) \right\} + \mathbb {P}\left\{ |T'_{m_{N+1}}|{/}\sqrt{m_{N+1}}> f_n(y) \right\} \\&\quad =\sum _{k=m_{N-1}+1}^n \mathbb {P}\left\{ \tau =k, |T'_{m_{N+1}}|{/}\sqrt{m_{N+1}} \le f_n(y) \right\} + \mathbb {P}\left\{ |Z_1| > f_n(y)\right\} . \end{aligned}$$

Arguing as above, we readily obtain,

$$\begin{aligned} \mathbb {P}\left\{ m_{N-1}< \tau \le n \right\} \le \delta \mathbb {P}\left\{ m_{N-1} < \tau \le n \right\} + \mathbb {P}\left\{ |Z_1| > f_n(y)\right\} . \end{aligned}$$
(5.5)

Combining relations (5.4) and (5.5) and recalling (5.3), we see that

$$\begin{aligned}&\mathbb {P}\left\{ \max _{k_n \le k \le n} |T'_k |{/}\sqrt{k}> f_n(y+ \delta ), \max _{j_n \le j \le N} | T'_{m_j} |{{/}}\sqrt{ m_j } \le f_n(y ) \right\} \\&\quad \le \delta \;\mathbb {P}\{k_n \le \tau \le n\} + \mathbb {P}\{|Z_1| > f_n(y)\}. \end{aligned}$$

This implies (5.2) since

$$\begin{aligned} \mathbb {P}\{k_n \le \tau \le n\}= \mathbb {P}\left\{ \max _{k_n\le k \le n} |T'_k | {{/}}\sqrt{k} > f_n(y+\delta ) \right\} , \end{aligned}$$

and the proof of Lemma 5.1 is complete. \(\square \)

We finally need the following lemma,

Lemma 5.2

Let Y be a d-dimensional random vector with distribution \(\mathcal {N}(0,\varSigma ),\) where \(d \ge 2.\) Assume that the largest eigenvalue of \(\varSigma \) is equal to 1 and has multiplicity \(d-1.\) Denote the remaining (smallest) eigenvalue of \(\varSigma \) by \(\sigma ^2\). Then, we have:

$$\begin{aligned} \mathbb {P}\{|Y| \ge t\} \le \frac{2}{\sqrt{1-\sigma ^2}} \mathbb {P}\{|Z| \ge t\}, t >0, \end{aligned}$$

where \(Z{:}\,\varOmega \rightarrow \mathbb {R}^{d-1}\) has an \(\mathcal {N}(0, I_{d-1})\)-distribution.

Proof

If \(d \ge 3\), Lemma 5.2 follows by integrating the inequality given in Lemma 1(a) of [7].

To prove Lemma 5.2 if \(d=2\), we proceed similarly as in [7]. Choose an orthonormal basis \(e_1, e_2\) of \(\mathbb {R}^2\) consisting of two eigenvectors corresponding to the eigenvalues 1 and \(\sigma ^2 \in ]0,1[\) of \(\varSigma .\) Then,

$$\begin{aligned} Y=\sum _{i=1}^2 \langle Y, e_i\rangle e_i =: \eta _1 e_1 + \sigma \eta _2 e_2 \end{aligned}$$

where \(\eta _i, 1 \le i \le 2 \) are independent standard normal random variables.

It is then obvious that

$$\begin{aligned} |Y|^2 = \eta _1^2 + \sigma ^2 \eta _2^2 =: R_1 +R_2, \end{aligned}$$

where \(R_1\) and \(R_2{/}\sigma ^2\) have Chi-square distributions with 1 degree of freedom. Denote the densities of \(R_1+R_2,\,R_1\) and \(R_2\) by \(h, h_1, h_2\).

Then, \(h_2(y)=h_1(y{/}\sigma ^2){/}\sigma ^2\) and

$$\begin{aligned} h(z)=\sigma ^{-2}\int _0^z h_1(z-y)h_1(y{/}\sigma ^2)\mathrm{d}y, z \ge 0. \end{aligned}$$

Using that \(h_1(y)=(2\pi )^{-1{/}2} y^{-1{/}2} e^{-y{/}2}, y > 0\), we can infer that

$$\begin{aligned}&h(z){/}h_1(z) =\frac{1}{2\pi \sigma } \int _0^z (1-y{/}z)^{-1{/}2} y^{-1{/}2} e^{-(\sigma ^{-2} -1)y{/}2} \mathrm{d}y\\&\quad \le \frac{1}{\sqrt{2}\pi \sigma } \int _0^{z{/}2} y^{-1{/}2} e^{-(\sigma ^{-2} -1)y{/}2} \mathrm{d}y + \frac{e^{-(\sigma ^{-2} -1)z{/}4}}{\sqrt{2}\pi \sigma \sqrt{z}}\int _{z{/}2}^z (1-y{/}z)^{-1{/}2}\mathrm{d}y\\&\quad \le (2\pi \sigma ^2 (\sigma ^{-2}-1))^{-1{/}2} + \sqrt{2z}e^{-(\sigma ^{-2} -1)z{/}4} (\pi \sigma )^{-1}. \end{aligned}$$

Employing the trivial inequality \(e^{-x{/}2}\le x^{-1{/}2}, x > 0,\) it follows that

$$\begin{aligned} h(z){/}h_1(z) \le \left[ (2\pi )^{-1{/}2} + \sqrt{8}{/}\pi \right] (1-\sigma ^2)^{-1{/}2} \le 2 (1-\sigma ^2)^{-1{/}2}, z \ge 0. \end{aligned}$$

We can conclude that for \(t \ge 0,\)

$$\begin{aligned} \mathbb {P}\{|Y| \ge t\} = \int _{t^2}^{\infty } h(z) \mathrm{d}z \le \frac{2}{\sqrt{1-\sigma ^2}}\int _{t^2}^{\infty } h_1(z) \mathrm{d}z = \frac{2}{\sqrt{1-\sigma ^2}}\mathbb {P}\{|Z| \ge t\} \end{aligned}$$

and Lemma 5.2 has been proven. \(\square \)

Recall that \(d_n=\sqrt{n}{/}( LL n)^5\) and \(\tilde{\varGamma }_n = A(d_n), n \ge 1.\) Let \(\{v_1, \ldots , v_d\}\) be an orthonormal basis of \(\mathbb {R}^d.\) Then, it is easy to see that

$$\begin{aligned} \mathbb {E}|X|^2I\{|X|>d_n\}= & {} \sum _{i=1}^d \mathbb {E}\left\langle X, v_i \right\rangle ^2 I\{|X|>d_n\}\\= & {} \sum _{i=1}^d \left\langle v_i, (I-\tilde{\varGamma }^2_n)v_i\right\rangle \le d\Vert I-\tilde{\varGamma }^2_n\Vert . \end{aligned}$$

It is now obvious that condition (2.7) is equivalent to

$$\begin{aligned} \Vert I-\tilde{\varGamma }^2_n \Vert = O (( LL n)^{-1}) \hbox { as }n \rightarrow \infty . \end{aligned}$$

Furthermore, \(\Vert I - \tilde{\varGamma }^2_n \Vert \) is equal to \(1 -\tilde{\lambda }^2_n,\) where \(\tilde{\lambda }_n\) is the smallest eigenvalue of \(\tilde{\varGamma }_n\) since \(\tilde{\varGamma }^2_n\) is symmetric and \(I -\tilde{\varGamma }^2_n \) is nonnegative definite. So it remains to be shown that (5.1) implies

$$\begin{aligned} 1 -\tilde{\lambda }^2_n = O\left( ( LL n)^{-1}\right) \hbox { as }n \rightarrow \infty , \end{aligned}$$
(5.6)

or, equivalently, to show that if (5.6) does not hold, we cannot have (5.1).

To that end, we apply Lemma 5.1 with \(\delta =1{/}2\) and we get for \(y \in \mathbb {R},\)

$$\begin{aligned}&\mathbb {P}\left\{ M_n \ge y\right\} \nonumber \\&\quad \le 2 \mathbb {P}\left\{ \max _{j_n \le j \le N} | T'_{m_j} | {/}\sqrt{ m_j } > f_n(y-1{/}2) \right\} + 2 \mathbb {P}\left\{ | Z_1| \ge f_n(y-1{/}2)\right\} \nonumber \\&\quad \le 2 N \mathbb {P}\{|\tilde{\varGamma }_n Z_1| \ge f_n(y-1{/}2)\} + 2 \mathbb {P}\left\{ | Z_1| \ge f_n(y-1{/}2)\right\} . \end{aligned}$$
(5.7)

Here we have used the monotonicity of the sequence \(\tilde{\varGamma }_k, k \ge 1,\) which implies that \(\tilde{\varGamma }^2_n -\mathrm {Cov}( T'_{m_j}{/}\sqrt{m_j})\) is nonnegative definite for \(j_n \le j \le N.\) This allows us to conclude that for \(j_n \le j \le N,\)

$$\begin{aligned} \mathbb {P}\{ |T'_{m_j} | {/}\sqrt{ m_j }> x\} \le \mathbb {P}\{|\tilde{\varGamma }_n Z_1| > x\}, x \in \mathbb {R}. \end{aligned}$$

Let \(D_n\) be the d-dimensional diagonal matrix with \(D_n(i,i)=1, 1 \le i \le d-1\) and \(D_n(d,d)=\tilde{\lambda }_n\). Then, clearly

$$\begin{aligned} \mathbb {P}\{|\tilde{\varGamma }_{n} Z_1| \ge f_n(y-1{/}2)\} \le \mathbb {P}\{|D_{n} Z_1| \ge f_n(y-1{/}2)\} \end{aligned}$$

and we can infer from Lemma 5.2 that

$$\begin{aligned} \mathbb {P}\{|\tilde{\varGamma }_{n} Z_1| \ge f_n(y-1{/}2)\} \le 2(1-\tilde{\lambda }_n^2)^{-1{/}2} \mathbb {P}\left\{ |Z'| \ge f_n(y-1{/}2)\right\} , \end{aligned}$$
(5.8)

where \(Z'\) is a \((d-1)\)-dimensional normal mean zero random vector with covariance matrix equal to the identity matrix.

Using the fact that the square of the Euclidean norm of a d-dimensional \(\mathcal {N}(0,I)\)-distributed random vector X has a gamma distribution with parameters d / 2 and 2, one can show that there exist positive constants \(C_1(d), C_2(d)\) so that

$$\begin{aligned} C_1(d) t^{d-2}\exp (-t^2{/}2) \le \mathbb {P}\{|X| \ge t\} \le C_2(d) t^{d-2}\exp (-t^2{/}2), t \ge 2d. \end{aligned}$$
(5.9)

(see Lemma 1 and Lemma 3 in [8], where more precise bounds are given if \(d \ge 3\). If \(d=1\), this follows directly from well-known bounds for the tail probabilities of the one-dimensional normal distribution. If \(d=2\), the random variable \(|X|^2\) has an exponential distribution and (5.9) is trivial).
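Numerically, the two-sided bound (5.9) is easy to inspect, since \(|X|^2\) has a \(\chi _d^2\)-distribution (our snippet, using scipy):

```python
import numpy as np
from scipy.stats import chi2

d = 3
for t in [2.0 * d, 8.0, 12.0]:
    tail = chi2.sf(t**2, d)                   # P{|X| >= t} for X ~ N(0, I_d)
    envelope = t**(d - 2) * np.exp(-t**2 / 2)
    print(t, tail / envelope)                 # the ratio stays bounded
```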

We can conclude that for large enough n

$$\begin{aligned} \mathbb {P}\{|Z'| \ge f_n(y-1{/}2)\}\le & {} C_2(d-1)f_n(y-1{/}2)^{d-3} \exp (-f_n(y-1{/}2)^2{/}2)\\\le & {} C_3(d) f_n(y-1{/}2)^{-1} \mathbb {P}\{|Z_1| \ge f_n(y-1{/}2)\}, \end{aligned}$$

where we set \(C_3(d)= C_2(d-1){/}C_1(d).\) Returning to inequality (5.8) and noting that \(f_n(y-1{/}2) \ge \sqrt{\log \log n}\) if n is large, we get in this case,

$$\begin{aligned} \mathbb {P}\{|\tilde{\varGamma }_{n} Z_1| \ge f_n(y-1{/}2)\} \le 2C_3(d)\{(1-\tilde{\lambda }_n^2)\log \log n\}^{-1{/}2}\mathbb {P}\left\{ |Z_1| \ge f_n(y-1{/}2)\right\} . \end{aligned}$$

Applying (5.9) once more, we find that

$$\begin{aligned} \mathbb {P}\{|Z_1| \ge f_n(y-1{/}2)\}=O(N^{-1})=O\left( (\log n \log \log n)^{-1}\right) \hbox { as }n \rightarrow \infty . \end{aligned}$$

Recalling (5.7) we can conclude that if

$$\begin{aligned} \limsup _{n \rightarrow \infty } (1-\tilde{\lambda }_n^2) \log \log n = \infty , \end{aligned}$$

we have for any \(y \in \mathbb {R},\)

$$\begin{aligned} \liminf _{n \rightarrow \infty } \mathbb {P}\left\{ M_n > y\right\} =0. \end{aligned}$$

Consequently, \(M_n\) cannot converge in distribution to any variable of the form \(\tilde{Y}+c\). \(\square \)

Remark

  1. Denote the distribution of \(a_n\max _{1 \le k \le n} |S_k|{/}\sqrt{k} -b_{d,n} \) by \(Q_n\). From (3.4) and (3.6), it follows that this sequence is tight if and only if the distributions of \(M_n\) form a tight sequence. The above argument actually shows that this last sequence cannot be tight if condition (2.7) is not satisfied. Moreover, it is not difficult to prove via Theorem 2.1 that (2.7) implies that the sequence \(\{Q_n{:}\,n \ge 1\}\) is tight. Thus, we have

    $$\begin{aligned} \{Q_n{:}\,n \ge 1\} \hbox { is tight } \Longleftrightarrow (2.7). \end{aligned}$$
  2. Also note that

    $$\begin{aligned} \mathbb {P}\left\{ a_n \max _{1 \le k \le n} |\varGamma _k^{-1}S_k|{/}\sqrt{k} - b_{d,n} \le 0\right\} \le \mathbb {P}\left\{ a_n \varLambda _n^{-1}\max _{1 \le k \le n} |S_k|{/}\sqrt{k} - b_{d,n} \le 0\right\} , \end{aligned}$$

    where \(\varLambda _n\) is the largest eigenvalue of \(\varGamma _n\) which in turn is defined as in (2.3) [here we can choose any sequence \(c_n\) satisfying condition (2.2)]. Using this inequality, one can show by the same argument as on p. 255 in [6] that (2.6) implies

    $$\begin{aligned} 1 - \varLambda ^2_n = o \left( ( LL n)^{-1}\right) \hbox { as }n \rightarrow \infty . \end{aligned}$$

    This is of course weaker than (2.5) if \(d \ge 2.\)

6 Some Further Results

We first prove the following Darling–Erdős type theorem with a shifted limiting distribution.

Theorem 6.1

Let \(X, X_n, n \ge 1\) be i.i.d. real-valued random variables with \(\mathbb {E}X^2 =1\) and \(\mathbb {E}X=0.\) Assume that for some \(c>0,\)

$$\begin{aligned} \mathbb {E}X^2I\{|X| \ge t\} \sim c ( LL t)^{-1} \hbox { as }t \rightarrow \infty . \end{aligned}$$

Then, we have,

$$\begin{aligned} a_n \max _{1 \le k \le n} |S_k|{/}\sqrt{k} - b_{n} \mathop {\rightarrow }\limits ^{d} \tilde{Y}-c, \end{aligned}$$

where \(\tilde{Y}\) and \(b_n\) are defined as in (1.1).

Proof

  (i) Set \(\sigma _n^2 =\mathbb {E}X^2I\{|X| \le \sqrt{n}\}\) and let \(1 \le k_n \le \exp ((Ln)^{\alpha })\) for some \(0< \alpha <1.\) Then, we have by Theorem 2.1 and the argument in (4.1),

    $$\begin{aligned} a_n \max _{k_n \le k \le n} |S_k|{{/}}\sqrt{k}\sigma _k - b_{n} \mathop {\rightarrow }\limits ^{d} \tilde{Y}, \end{aligned}$$

    which trivially implies for any sequence \( \rho _n\) of positive real numbers converging to 1,

    $$\begin{aligned} \rho _n a_n \max _{k_n \le k \le n} |S_k|{/}\sqrt{k}\sigma _k - \rho _n b_{n} \mathop {\rightarrow }\limits ^{d} \tilde{Y}. \end{aligned}$$
    (6.1)

    Set \(k_n = [\exp ((Ln)^{\alpha })]\), where \(0< \alpha <1.\) Then, it is easy to see that

    $$\begin{aligned}&\mathbb {P}\left\{ a_n \max _{1 \le k \le n} \frac{|S_k|}{\sqrt{k}} - b_n \le y\right\} \\&\quad \le \,\mathbb {P}\left\{ \sigma _{k_n} a_n \max _{k_n \le k \le n} \frac{|S_k|}{\sqrt{k}\sigma _k} - b_n \le y\right\} \\&\quad =\mathbb {P}\left\{ \sigma _{k_n} a_n \max _{k_n \le k \le n} \frac{|S_k|}{\sqrt{k}\sigma _k} - \sigma _{k_n} b_n \le y +(1-\sigma _{k_n})b_n\right\} . \end{aligned}$$

    Noticing that \((1-\sigma _{k_n})b_n \sim (1 -\sigma ^2_{k_n})(2 LL n){/}(1+\sigma _{k_n})\sim c( LL k_n)^{-1} LL n\) (since \(\sigma ^2_{k_n} \rightarrow \mathbb {E}X^2=1\)), it is clear that \((1-\sigma _{k_n})b_n \rightarrow c{/}\alpha \) as \(n \rightarrow \infty .\) By (6.1) (with \(\rho _n=\sigma _{k_n}\)), this last sequence of probabilities converges to \(\mathbb {P}\{\tilde{Y} \le y +c{/}\alpha \}.\) Since this holds for any \(0< \alpha <1,\) it follows that

    $$\begin{aligned} \limsup _{n \rightarrow \infty } \mathbb {P}\left\{ a_n \max _{1 \le k \le n} \frac{|S_k|}{\sqrt{k}} - b_n \le y\right\} \le \mathbb {P}\{\tilde{Y} \le y +c\}. \end{aligned}$$
    (6.2)
  (ii) Similarly, we have,

    $$\begin{aligned}&\mathbb {P}\left\{ a_n \max _{1 \le k \le n} \frac{|S_k|}{\sqrt{k}} - b_n \le y\right\} \\&\quad \ge \mathbb {P}\left\{ \sigma _{n} a_n \max _{1 \le k \le n} \frac{|S_k|}{\sqrt{k}\sigma _k} - \sigma _{n} b_n \le y +(1-\sigma _{n})b_n\right\} , \end{aligned}$$

    where \((1-\sigma _n)b_n \rightarrow c\) as \(n \rightarrow \infty .\) Applying (6.1) (with \(k_n=1\) and \(\rho _n=\sigma _n\)), we obtain that

    $$\begin{aligned} \liminf _{n \rightarrow \infty } \mathbb {P}\left\{ a_n \max _{1 \le k \le n} \frac{|S_k|}{\sqrt{k}} - b_n \le y\right\} \ge \mathbb {P}\{\tilde{Y} \le y +c\} \end{aligned}$$

    and Theorem 6.1 has been proven. \(\square \)

We finally mention the following result for real-valued random variables given in [13], where it is shown that if \(\mathbb {E}X=0, \mathbb {E}X^2 =1\) and \(\mathbb {E}X^2 LL|X| < \infty \), then one has

$$\begin{aligned} 2 LL n \left( \sup _{k \ge n} \frac{|S_k|}{\sqrt{2k LL k}}-1\right) -\frac{3}{2} LLL n + LLL Ln + \log (3{/}\sqrt{8}) \mathop {\rightarrow }\limits ^{d} \tilde{Y}. \end{aligned}$$
(6.3)

The authors asked whether this result can hold under the finite second moment assumption.

Using Theorem 2.3 in combination with Theorem 1.1 in [13], we obtain the following general result:

$$\begin{aligned} 2 LL n \left( \sup _{k \ge n} \frac{|S_k|}{\sqrt{2 k LL k}\sigma _k}-1\right) -\frac{3}{2} LLL n + LLL Ln + \log (3{/}\sqrt{8}) \mathop {\rightarrow }\limits ^{d} \tilde{Y}, \end{aligned}$$

where \(\sigma _n^2 = \mathbb {E}X^2 I\{|X| \le c_n\}\) and \(c_n\) is a non-decreasing sequence of positive real numbers satisfying condition (2.2). As in [6], this implies that (6.3) holds if and only if condition (2.5) is satisfied.