1 INTRODUCTION

We suppose {(Xnij), 1 \(\leqslant \) i, j \(\leqslant \) n, n = 2, 3, …} is a sequence of matrices of independent random variables and {\({{\vec {\pi }}_{n}}\) = (\({{\pi }_{n}}(1)\), \({{\pi }_{n}}(2)\), …, \({{\pi }_{n}}(n)\)), n = 2, 3, …} is a sequence of random permutations of the numbers 1, 2, …, n. Let \({{\vec {\pi }}_{n}}\) have a uniform distribution on the set of all permutations of 1, 2, …, n and be independent of (Xnij) for any n. We define the combinatorial sum Sn by the relation

$${{S}_{n}} = \sum\limits_{i = 1}^n {{{X}_{{ni{{\pi }_{n}}(i)}}}} .$$

We note that if the distributions of Xnij coincide for all \(1\;\leqslant \;j\;\leqslant \;n\) and all n, then Sn has the same distribution as a sum of independent random variables. Although this case is well studied, we should take it into account when assessing the optimality of the results obtained.
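Indeed, in this case, conditioning on the permutation gives, for any Borel set A and any fixed permutation π of the numbers 1, 2, …, n,

$${\mathbf{P}}\left( {{S}_{n}} \in A\,|\,{{\vec {\pi }}_{n}} = \pi \right) = {\mathbf{P}}\left( \sum\limits_{i = 1}^n {{X}_{{ni\pi (i)}}} \in A \right) = {\mathbf{P}}\left( \sum\limits_{i = 1}^n {{X}_{{ni1}}} \in A \right),$$

since, for fixed π, the summands \(X_{ni\pi (i)}\), \(1\;\leqslant \;i\;\leqslant \;n\), are distinct entries of the matrix and hence independent, and each \(X_{ni\pi (i)}\) is distributed as \(X_{ni1}\). Averaging over π yields the claim.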

Under certain conditions, the sequence of distributions of normalized combinatorial sums converges weakly to the normal law. Any result of this kind is called a combinatorial central limit theorem (CLT). Research in this direction started long ago. The combinatorial CLT has been studied by Wald and Wolfowitz [1], Noether [2], Hoeffding [3], Motoo [4], and Kolchin and Chistyakov [5]. Later on, nonasymptotic bounds of Berry–Esseen and Esseen type were obtained. Such results are due to Bolthausen [6], von Bahr [7], Ho and Chen [8], Goldstein [9], Neammanee and Suntornchost [10], Neammanee and Rattanawong [11], Chen, Goldstein and Shao [12], Chen and Fang [13], and Frolov [14, 15]. Results for a random number of summands were obtained by Frolov in [16].

Bounds in the CLT make it possible to find the asymptotics of probabilities of large deviations in logarithmic zones. In this case, one usually speaks of moderate deviations. We obtained such results for combinatorial sums in [17].

In our work [18], we obtained the first results on the asymptotic behavior of probabilities of large deviations of combinatorial sums in power zones. In that work, we assumed that the random variables satisfy an analogue of Bernstein’s condition. Apart from particular cases, combinatorial sums do not have independent increments, which makes it difficult to use classical methods of the summation theory for independent random variables. In [18], we obtained bounds for the moment-generating function of the normalized combinatorial sum and for its logarithmic derivatives, rather than for those of its individual summands. This is what led to those results.

Bernstein’s condition is one form of the condition that an exponential moment exists. A natural problem is to obtain new results on the asymptotics of probabilities of large deviations when it is violated. This is what we discuss in this work. We replace Bernstein’s condition with the weaker Linnik condition. To prove the results, we use the truncation method.

2 RESULTS

We suppose {(Xnij), 1 \(\leqslant \) i, j \(\leqslant \) n, n = 2, 3, …} is a sequence of matrices of independent random variables such that

$$\sum\limits_{i = 1}^n {{\mathbf{E}}{{X}_{{nij}}}} = \sum\limits_{j = 1}^n {{\mathbf{E}}{{X}_{{nij}}}} = 0$$
(1)

for any n. We suppose {\({{\vec {\pi }}_{n}}\) = (\({{\pi }_{n}}(1)\), \({{\pi }_{n}}(2)\), …, \({{\pi }_{n}}(n)\)), n = 2, 3, …} is a sequence of random permutations of the numbers 1, 2, …, n. We assume that \({{\vec {\pi }}_{n}}\) has a uniform distribution on the set of permutations Pn and is independent of (Xnij) for any n.

We put

$${{S}_{n}} = \sum\limits_{i = 1}^n {{{X}_{{ni{{\pi }_{n}}(i)}}}.} $$

It is not difficult to check that

$${\mathbf{E}}{{S}_{n}} = 0,\quad {\mathbf{D}}{{S}_{n}} = {\mathbf{E}}S_{n}^{2} - {{({\mathbf{E}}{{S}_{n}})}^{2}} = \frac{1}{{n - 1}}\sum\limits_{i,j = 1}^n {{{{({\mathbf{E}}{{X}_{{nij}}})}}^{2}}} + \frac{1}{n}\sum\limits_{i,j = 1}^n {{\mathbf{D}}{{X}_{{nij}}}.} $$
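
For instance, the first equality follows by conditioning on \({{\pi }_{n}}(i)\), which is uniformly distributed on {1, 2, …, n} and independent of (Xnij): by condition (1),

$${\mathbf{E}}{{X}_{{ni{{\pi }_{n}}(i)}}} = \frac{1}{n}\sum\limits_{j = 1}^n {{\mathbf{E}}{{X}_{{nij}}}} = 0\quad {\text{for every}}\;i,\quad {\text{whence}}\quad {\mathbf{E}}{{S}_{n}} = 0.$$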

Thus, condition (1) ensures that the combinatorial sums are centered. Substituting \({\mathbf{D}}{{X}_{{nij}}}\) = \({\mathbf{E}}X_{{nij}}^{2}\) – \(({\mathbf{E}}{{X}_{{nij}}})^{2}\) into the latter formula, we obtain

$${\mathbf{D}}{{S}_{n}} = \frac{1}{{n(n - 1)}}\sum\limits_{i,j = 1}^n {{{{({\mathbf{E}}{{X}_{{nij}}})}}^{2}}} + \frac{1}{n}\sum\limits_{i,j = 1}^n {{\mathbf{E}}X_{{nij}}^{2}.} $$

If \({\mathbf{D}}{{S}_{n}} \to \infty \), the principal part of the variance is the normalized sum of the second moments

$${{B}_{n}} = \frac{1}{n}\sum\limits_{i,j = 1}^n {{\mathbf{E}}X_{{nij}}^{2}.} $$

Therefore, in what follows, we use {Bn} as the norming sequence for Sn.
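
As an illustrative aside (not part of the argument), the exact formula for \({\mathbf{D}}{{S}_{n}}\) above can be checked numerically. In the following minimal simulation sketch, the matrix of means and the common standard deviation of the entries are hypothetical choices.

```python
import numpy as np

# Monte Carlo check (illustrative only) of the formula
#   D S_n = (1/(n(n-1))) * sum_{i,j} (E X_nij)^2 + (1/n) * sum_{i,j} E X_nij^2
# for a hypothetical matrix of independent Gaussian entries satisfying condition (1).
rng = np.random.default_rng(0)
n = 30
a = rng.normal(size=(n, n))
a -= a.mean(axis=1, keepdims=True)  # make the row sums of the means zero ...
a -= a.mean(axis=0, keepdims=True)  # ... and the column sums, so condition (1) holds
sigma = 0.5                         # common standard deviation of the entries

num_samples = 20_000
rows = np.arange(n)
s = np.empty(num_samples)
for k in range(num_samples):
    x = a + sigma * rng.standard_normal((n, n))  # independent entries with E X_nij = a[i, j]
    perm = rng.permutation(n)                    # uniformly distributed permutation pi_n
    s[k] = x[rows, perm].sum()                   # combinatorial sum S_n

exact = (a**2).sum() / (n * (n - 1)) + (a**2 + sigma**2).sum() / n
print("formula:  ", exact)
print("empirical:", s.var())
```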

Further, we assume that the summands have finite moments of all orders. We put

$${{\gamma }_{n}} = \max \left\{ {\mathop {\max }\limits_{i,j} \frac{{\sqrt n }}{{\sqrt {{{B}_{n}}} }}{\mathbf{E}}\left| {{{X}_{{nij}}}} \right|,\;\mathop {\max }\limits_i \sum\limits_{j = 1}^n {\frac{{{\mathbf{E}}X_{{nij}}^{2}}}{{{{B}_{n}}}}} ,\;\mathop {\max }\limits_j \sum\limits_{i = 1}^n {\frac{{{\mathbf{E}}X_{{nij}}^{2}}}{{{{B}_{n}}}}} ,\;\sum\limits_{i,j = 1}^n {\frac{{{\mathbf{E}}{{{\left| {{{X}_{{nij}}}} \right|}}^{3}}}}{{\sqrt n B_{n}^{{3/2}}}}} } \right\}.$$
(2)

We note that γn \( \geqslant \) 1. This follows from the fact that \(n{{B}_{n}}\) = \(\sum\nolimits_{i,j = 1}^n {{\mathbf{E}}X_{{nij}}^{2}} \) \(\leqslant \) \(n\mathop {\max }\limits_i \sum\nolimits_{j = 1}^n {{\mathbf{E}}X_{{nij}}^{2}} \).

The result below was obtained in our work [18].

Theorem 1. We suppose {Mn} is a nondecreasing sequence of positive constants such that for s = 1, 2, 3 the inequalities

$$\left| {{\mathbf{E}}X_{{nij}}^{k}} \right|\;\leqslant \;Dk!M_{n}^{{k - s}}{\mathbf{E}}{{\left| {{{X}_{{nij}}}} \right|}^{s}}$$
(3)

hold for all k \( \geqslant \) s, all \(1\;\leqslant \;i,j\;\leqslant \;n\), and all \(n\; \geqslant \;2\), where D is an absolute positive constant.

Then, for any sequence of real numbers {un} such that un → ∞, \(u_{n}^{3}\) = \(o(\sqrt n {\text{/}}{{\gamma }_{n}})\), and un = \(o(\sqrt {{{B}_{n}}} {\text{/}}{{M}_{n}})\)  as n → ∞, the relation

$${\mathbf{P}}\left( {{{S}_{n}}\; \geqslant \;{{u}_{n}}\sqrt {{{B}_{n}}} } \right)\sim 1 - \Phi ({{u}_{n}})\quad as\quad n \to \infty $$
(4)

holds, where Φ(x) is the standard normal distribution function.

The condition \(u_{n}^{3} = o(\sqrt n {\text{/}}{{\gamma }_{n}})\) is natural for relation (4), which is the exact (not logarithmic) asymptotic of large deviations. If we assume that all Xnij are identically distributed, this condition turns into the optimal condition \(u_{n} = o({{n}^{1/6}})\).
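
For instance (a direct check, with γn computed from (2)): if all Xnij are distributed as a random variable X with \({\mathbf{E}}X = 0\), \({\mathbf{E}}{{X}^{2}} = {{\sigma }^{2}} > 0\), and \({\mathbf{E}}{{\left| X \right|}^{3}} < \infty \), then \({{B}_{n}} = n{{\sigma }^{2}}\) and

$${{\gamma }_{n}} = \max \left\{ \frac{{\mathbf{E}}\left| X \right|}{\sigma },\;1,\;1,\;\frac{{\mathbf{E}}{{\left| X \right|}^{3}}}{{{\sigma }^{3}}} \right\} = \frac{{\mathbf{E}}{{\left| X \right|}^{3}}}{{{\sigma }^{3}}},$$

since \({\mathbf{E}}\left| X \right|\;\leqslant \;\sigma \;\leqslant \;({\mathbf{E}}{{\left| X \right|}^{3}})^{1/3}\) by Lyapunov's inequality. Hence γn does not depend on n, and the condition \(u_{n}^{3} = o(\sqrt n {\text{/}}{{\gamma }_{n}})\) reduces to \(u_{n} = o({{n}^{1/6}})\).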

Condition (3) is similar to Bernstein’s condition, which is a form of the exponential-moment condition. In [18], we give some variants of this condition and examples of random variables that satisfy the hypotheses of Theorem 1. In particular, these include bounded random variables. In the latter case, Spearman’s rank correlation coefficient is an important example of a combinatorial sum.

The theorem below is our principal result. In it, we replace Bernstein’s condition by the weaker Linnik condition, thus expanding the theorem’s domain of application. For instance, the Weibull distribution with parameter α, which arises, in particular, as a limit distribution in the theory of extreme order statistics, satisfies the Linnik condition but does not satisfy Bernstein’s condition when α < 1.

Theorem 2. We suppose for some β ∈ (0, 1) the inequalities

$${\mathbf{E}}{{e}^{{{{{\left| {{{X}_{{nij}}}} \right|}}^{\beta }}}}}\;\leqslant \;C{\mathbf{E}}{{\left| {{{X}_{{nij}}}} \right|}^{s}}$$
(5)

hold for s = 1, 2, 3, all \(1\;\leqslant \;i,j\;\leqslant \;n\) and all \(n\; \geqslant \;2\), where C is an absolute positive constant.

We suppose {un} is a sequence of real numbers such that \({{u}_{n}} \to \infty \), \(u_{n}^{3}\) = \(o(\sqrt n {\text{/}}{{\gamma }_{n}})\), un = \(o(B_{n}^{{\beta /(2(2 - \beta ))}})\), and \(\ln (n{{c}_{n}} + 1)\) = \(o(u_{n}^{\beta }B_{n}^{{\beta /2}})\) as \(n \to \infty \), where

$${{c}_{n}} = \frac{1}{n}\mathop {\max }\limits_i \sum\limits_{j = 1}^n {{{\varphi }_{{nij}}}} + \frac{1}{n}\mathop {\max }\limits_j \sum\limits_{i = 1}^n {{{\varphi }_{{nij}}}} + \frac{1}{{{{n}^{2}}}}\sum\limits_{i,j = 1}^n {{{\varphi }_{{nij}}}} ,$$
$${{\varphi }_{{nij}}} = {\mathbf{E}}{{e}^{{{{{\left| {{{X}_{{nij}}}} \right|}}^{\beta }}}}}I\left\{ {\left| {{{X}_{{nij}}}} \right|\; \geqslant \;{{u}_{n}}\sqrt {{{B}_{n}}} } \right\}.$$

Then, relation (4) holds.

Condition (5) holds if, for any i, j, and n, the random variable Xnij has one of k given distributions. Moreover, if there exist positive constants A and B such that \({\mathbf{E}}{{e}^{{{{{\left| {{{X}_{{nij}}}} \right|}}^{\beta }}}}}\;\leqslant \;A\) and \({\mathbf{E}}{{\left| {{{X}_{{nij}}}} \right|}^{s}}\; \geqslant \;B\) for all i, j, n, and s, then condition (5) holds.
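Indeed, in the latter case,

$${\mathbf{E}}{{e}^{{{{\left| {{X}_{{nij}}} \right|}}^{\beta }}}}\;\leqslant \;A = \frac{A}{B}\,B\;\leqslant \;\frac{A}{B}\,{\mathbf{E}}{{\left| {{X}_{{nij}}} \right|}^{s}},$$

so condition (5) holds with C = A/B. The case of k given distributions reduces to this one by taking A to be the largest of the k exponential moments \({\mathbf{E}}{{e}^{{{\left| X \right|}^{\beta }}}}\) and B the smallest of the moments \({\mathbf{E}}{{\left| X \right|}^{s}}\), s = 1, 2, 3, over these distributions (assuming these quantities are finite and positive, respectively).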

If the Xnij have the same distribution for all i, j, and n, then by Theorem 2 relation (4) holds for un = \(o({{n}^{\alpha }})\), where α = min{1/6, β/(2(2 – β))} (see the comparison of exponents below). From Linnik’s work [19], we know that in this case, for β ∈ (0, 1/2], the Linnik condition is optimal for relation (4) to hold in the zone un = \(o({{n}^{{\beta /(2(2 - \beta ))}}})\), while for β > 1/2 additional conditions are required. Thus, the hypotheses of Theorem 2 cannot be improved in this case.
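
The minimum defining α can be written out explicitly:

$$\frac{\beta }{{2(2 - \beta )}}\;\leqslant \;\frac{1}{6}\quad \Longleftrightarrow \quad 6\beta \;\leqslant \;2(2 - \beta )\quad \Longleftrightarrow \quad \beta \;\leqslant \;\frac{1}{2},$$

so that α = β/(2(2 – β)) for β \(\leqslant \) 1/2 and α = 1/6 for β \( \geqslant \) 1/2.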

Remark 1. Theorem 2 remains valid for β = 1 if cn is replaced by zero.

3 PROOFS

We now prove our results.

Proof of Theorem 2. We put \({{y}_{n}} = {{u}_{n}}\sqrt {{{B}_{n}}} \), \({{\bar {X}}_{{nij}}}\) = \({{X}_{{nij}}}I\left\{ {{\kern 1pt} \left| {{{X}_{{nij}}}} \right| < {{y}_{n}}} \right\}\), \({{\tilde {X}}_{{nij}}}\) = \({{X}_{{nij}}}I\left\{ {{\kern 1pt} \left| {{{X}_{{nij}}}} \right|\; \geqslant \;{{y}_{n}}} \right\}\), and Tn = \(\sum\nolimits_{i = 1}^n {{{{\bar {X}}}_{{ni{{\pi }_{n}}(i)}}}} \). We suppose B0 = \(\bigcap\nolimits_{i = 1}^n {\left\{ {{{X}_{{ni{{\pi }_{n}}(i)}}} = {{{\bar {X}}}_{{ni{{\pi }_{n}}(i)}}}} \right\}} \) and B1 = \(\bigcup\nolimits_{i = 1}^n {\left\{ {{{X}_{{ni{{\pi }_{n}}(i)}}} \ne {{{\bar {X}}}_{{ni{{\pi }_{n}}(i)}}}} \right\}} \). We have

$$\begin{gathered} {\mathbf{P}}({{S}_{n}}\; \geqslant \;{{y}_{n}}) = {\mathbf{P}}(\{ {{S}_{n}}\; \geqslant \;{{y}_{n}}\} \cap {{B}_{0}}) + {\mathbf{P}}(\{ {{S}_{n}}\; \geqslant \;{{y}_{n}}\} \cap {{B}_{1}})\;\leqslant \;{\mathbf{P}}({{T}_{n}}\; \geqslant \;{{y}_{n}}) + {\mathbf{P}}({{B}_{1}}) \\ \leqslant \;{\mathbf{P}}({{T}_{n}}\; \geqslant \;{{y}_{n}}) + \sum\limits_{i = 1}^n {{\mathbf{P}}\left( {\left| {{{X}_{{ni{{\pi }_{n}}(i)}}}} \right|\; \geqslant \;{{y}_{n}}} \right)} = {\mathbf{P}}({{T}_{n}}\; \geqslant \;{{y}_{n}}) + \frac{1}{n}\sum\limits_{i,j = 1}^n {{\mathbf{P}}\left( {\left| {{{X}_{{nij}}}} \right|\; \geqslant \;{{y}_{n}}} \right).} \\ \end{gathered} $$

Similarly, we have

$${\mathbf{P}}({{T}_{n}}\; \geqslant \;{{y}_{n}})\;\leqslant \;{\mathbf{P}}({{S}_{n}}\; \geqslant \;{{y}_{n}}) + \frac{1}{n}\sum\limits_{i,j = 1}^n {{\mathbf{P}}\left( {\left| {{{X}_{{nij}}}} \right|\; \geqslant \;{{y}_{n}}} \right).} $$

Hence, the inequality

$$\left| {{\mathbf{P}}({{S}_{n}}\; \geqslant \;{{y}_{n}}) - {\mathbf{P}}({{T}_{n}}\; \geqslant \;{{y}_{n}})} \right|\;\leqslant \;\frac{1}{n}\sum\limits_{i,j = 1}^n {{\mathbf{P}}\left( {\left| {{{X}_{{nij}}}} \right|\; \geqslant \;{{y}_{n}}} \right)} $$
(6)

holds.

For all \(1\;\leqslant \;i,j\;\leqslant \;n\), we put \({{\bar {a}}_{{nij}}} = {\mathbf{E}}{{\bar {X}}_{{nij}}}\), \(\bar {\sigma }_{{nij}}^{2} = {\mathbf{D}}{{\bar {X}}_{{nij}}}\), \({{\bar {a}}_{{ni.}}}\) = \(\frac{1}{n}\sum\nolimits_{j = 1}^n {{{{\bar {a}}}_{{nij}}}} \), \({{\bar {a}}_{{n.j}}} = \frac{1}{n}\sum\nolimits_{i = 1}^n {{{{\bar {a}}}_{{nij}}}} \), \({{\bar {a}}_{{n..}}} = \frac{1}{{{{n}^{2}}}}\sum\nolimits_{i,j = 1}^n {{{{\bar {a}}}_{{nij}}}} \), and \({{\mu }_{{nij}}} = {{\bar {a}}_{{ni.}}} + {{\bar {a}}_{{n.j}}} - {{\bar {a}}_{{n..}}}\). We designate \({{Y}_{{nij}}}\) = \({{\bar {X}}_{{nij}}} - {{\mu }_{{nij}}}\), Rn = \(\sum\nolimits_{i = 1}^n {{{Y}_{{ni{{\pi }_{n}}(i)}}}} \), \({{\bar {B}}_{n}}\) = \(\frac{1}{n}\sum\nolimits_{i,j = 1}^n {{\mathbf{E}}Y_{{nij}}^{2}} \).

First, we show that the relation

$${\mathbf{P}}\left( {{{R}_{n}}\; \geqslant \;{{u}_{n}}\sqrt {{{{\bar {B}}}_{n}}} } \right)\sim 1 - \Phi ({{u}_{n}})\quad {\text{as}}\quad n \to \infty .$$
(7)

holds. To do this, we apply Theorem 1 and use the following result to check its hypotheses.

Lemma 1. We suppose y > 0, β ∈ (0, 1), μ ∈ (–1, 1), α > 0, and a ∈ (0, 1). Let X be a random variable and \(\bar {X}\) = \(XI\left\{ {\left| X \right| < y} \right\}\). We assume that \({\mathbf{E}}{{e}^{{{{{\left| X \right|}}^{\beta }}}}}\;\leqslant \;\alpha {\mathbf{E}}{{\left| {\bar {X} - \mu } \right|}^{s}}\) for s = 1, 2, 3 and \(\left| \mu \right|\;\leqslant \;M\ln 2\), where M = \({{y}^{{1 - \beta }}}{\text{/}}a\).

Then,

$$\left| {{\mathbf{E}}{{{(\bar {X} - \mu )}}^{k}}} \right|\;\leqslant \;C(a,\beta )\alpha k!{{M}^{{k - s}}}{\mathbf{E}}{{\left| {\bar {X} - \mu } \right|}^{s}}$$

for all k > s and s = 1, 2, 3, where C(a, β) = \(8(2 + {{3}^{{1/\beta }}}{{((1 - a)\beta )}^{{ - 1/\beta }}})\).

Proof. For all z from the circle \(\left| z \right|\;\leqslant \;a{{y}^{{\beta - 1}}}\) = 1/M and s = 1, 2, 3, we have

$$\left| {{\mathbf{E}}{{{(\bar {X} - \mu )}}^{s}}{{e}^{{z(\bar {X} - \mu )}}}} \right|\;\leqslant \;{\mathbf{E}}{{\left| {\bar {X} - \mu } \right|}^{s}}{{e}^{{\left| z \right|\left( {\left| {\bar {X}} \right| + \left| \mu \right|} \right)}}}\;\leqslant \;{{2}^{{s - 1}}}{{e}^{{\left| z \right|\left| \mu \right|}}}{\mathbf{E}}\left( {{{{\left| {\bar {X}} \right|}}^{s}} + {{{\left| \mu \right|}}^{s}}} \right){{e}^{{\left| z \right|\left| {\bar {X}} \right|}}}$$
$$\leqslant \;8{\mathbf{E}}\left( {{{{\left| X \right|}}^{s}} + 1} \right){{e}^{{\left| z \right|{{{\left| {\bar {X}} \right|}}^{\beta }}{{y}^{{1 - \beta }}}}}}\;\leqslant \;8{\mathbf{E}}{{\left| X \right|}^{s}}{{e}^{{a{{{\left| X \right|}}^{\beta }}}}} + 8{\mathbf{E}}{{e}^{{a{{{\left| {\bar {X}} \right|}}^{\beta }}}}}$$
$$\leqslant \;8{\mathbf{E}}{{e}^{{{{{\left| X \right|}}^{\beta }}}}}\mathop {\sup }\limits_{x\; \geqslant \;0} {{x}^{s}}{{e}^{{(a - 1){{x}^{\beta }}}}} + 8{\mathbf{E}}{{e}^{{{{{\left| X \right|}}^{\beta }}}}}I\left\{ {\left| X \right| < y} \right\} + 8{\mathbf{P}}\left( {\left| X \right|\; \geqslant \;y} \right)$$
$$\leqslant \;8\left( {\frac{{{{s}^{{1/\beta }}}}}{{{{{((1 - a)\beta )}}^{{1/\beta }}}}} + 2} \right){\mathbf{E}}{{e}^{{{{{\left| X \right|}}^{\beta }}}}}\;\leqslant \;C(a,\beta )\alpha {\mathbf{E}}{{\left| {\bar {X} - \mu } \right|}^{s}}.$$

By the Cauchy inequalities for the coefficients of the expansion of the analytic function \({\mathbf{E}}{{(\bar {X} - \mu )}^{s}}{{e}^{{z(\bar {X} - \mu )}}}\), we obtain what is required.
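
In more detail, this step can be written out as follows. The function

$$f(z) = {\mathbf{E}}{{(\bar {X} - \mu )}^{s}}{{e}^{{z(\bar {X} - \mu )}}} = \sum\limits_{m = 0}^\infty {\frac{{{\mathbf{E}}{{{(\bar {X} - \mu )}}^{{s + m}}}}}{{m!}}} \,{{z}^{m}}$$

is analytic, and the bound above shows that \(\left| {f(z)} \right|\;\leqslant \;C(a,\beta )\alpha {\mathbf{E}}{{\left| {\bar {X} - \mu } \right|}^{s}}\) on the circle \(\left| z \right| = 1{\text{/}}M\). By the Cauchy inequalities for the Taylor coefficients of f, for every \(m\; \geqslant \;1\),

$$\left| {{\mathbf{E}}{{{(\bar {X} - \mu )}}^{{s + m}}}} \right|\;\leqslant \;m!\,{{M}^{m}}C(a,\beta )\alpha {\mathbf{E}}{{\left| {\bar {X} - \mu } \right|}^{s}}\;\leqslant \;(s + m)!\,{{M}^{m}}C(a,\beta )\alpha {\mathbf{E}}{{\left| {\bar {X} - \mu } \right|}^{s}},$$

which is the assertion of the lemma with k = s + m.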

\(\square \)

We show that condition (3) is fulfilled for Ynij with \({{M}_{n}} = y_{n}^{{1 - \beta }}{\text{/}}a\). Given condition (5), it is sufficient to show that \(\left| {{{\mu }_{{nij}}}} \right|\;\leqslant \;{{M}_{n}}\ln 2\) and \({\mathbf{E}}{{\left| {{{X}_{{nij}}}} \right|}^{s}}\;\leqslant \;\alpha {\mathbf{E}}{{\left| {{{{\bar {X}}}_{{nij}}} - {{\mu }_{{nij}}}} \right|}^{s}}\) for s = 1, 2, 3.

The functions \(x{{e}^{{ - {{x}^{\beta }}}}}\), \({{x}^{2}}{{e}^{{ - {{x}^{\beta }}}}}\), and \({{x}^{3}}{{e}^{{ - {{x}^{\beta }}}}}\) decrease for \(x\; \geqslant \;{{x}_{0}} > 0\). In what follows, we assume that yn > x0.

Given condition (1), for all i, we have

$$n\left| {{{{\bar {a}}}_{{ni.}}}} \right|\;\leqslant \;\left| {\sum\limits_{j = 1}^n {{\mathbf{E}}{{{\bar {X}}}_{{nij}}}} } \right| = \left| {\sum\limits_{j = 1}^n {{\mathbf{E}}{{{\tilde {X}}}_{{nij}}}} } \right|\;\leqslant \;\sum\limits_{j = 1}^n {{\mathbf{E}}\left| {{{X}_{{nij}}}} \right|I\left\{ {\left| {{{X}_{{nij}}}} \right|\; \geqslant \;{{y}_{n}}} \right\}} \;\leqslant \;{{y}_{n}}{{e}^{{ - y_{n}^{\beta }}}}\sum\limits_{j = 1}^n {{{\varphi }_{{nij}}}.} $$
(8)

Similarly, we have

$$n\left| {{{{\bar {a}}}_{{n.j}}}} \right|\;\leqslant \;{{y}_{n}}{{e}^{{ - y_{n}^{\beta }}}}\sum\limits_{i = 1}^n {{{\varphi }_{{nij}}}} \quad {\text{for}}\;{\text{all}}\;j\;{\text{and}}\;{{n}^{2}}\left| {{{{\bar {a}}}_{{n..}}}} \right|\;\leqslant \;{{y}_{n}}{{e}^{{ - y_{n}^{\beta }}}}\sum\limits_{i,j = 1}^n {{{\varphi }_{{nij}}}} .$$
(9)

Hence,

$$\mathop {\max }\limits_{i,j} \left| {{{\mu }_{{nij}}}} \right|\;\leqslant \;{{c}_{n}}{{y}_{n}}{{e}^{{ - y_{n}^{\beta }}}} = {{\varepsilon }_{n}} = o(1).$$
(10)
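
We justify the final estimate in (10). Since \({{c}_{n}}\;\leqslant \;n{{c}_{n}} + 1\) and \(\ln (n{{c}_{n}} + 1)\) = \(o(u_{n}^{\beta }B_{n}^{{\beta /2}})\) = \(o(y_{n}^{\beta })\), we have

$${{\varepsilon }_{n}} = {{c}_{n}}{{y}_{n}}{{e}^{{ - y_{n}^{\beta }}}}\;\leqslant \;\exp \left\{ { - y_{n}^{\beta } + o(y_{n}^{\beta }) + \ln {{y}_{n}}} \right\} \to 0\quad {\text{as}}\quad n \to \infty ,$$

because \({{y}_{n}} = {{u}_{n}}\sqrt {{{B}_{n}}} \to \infty \) (note that un → ∞ and un = \(o(B_{n}^{{\beta /(2(2 - \beta ))}})\) imply \({{B}_{n}} \to \infty \)).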

Further, for s = 1, 2, 3 and all i and j, the inequalities

$$\begin{gathered} {\mathbf{E}}{{\left| {{{{\tilde {X}}}_{{nij}}}} \right|}^{s}} = {\mathbf{E}}{{\left| {{{X}_{{nij}}}} \right|}^{s}}I\left\{ {\left| {{{X}_{{nij}}}} \right|\; \geqslant \;{{y}_{n}}} \right\}\;\leqslant \;y_{n}^{s}{{e}^{{ - y_{n}^{\beta }}}}{{\varphi }_{{nij}}} \\ \leqslant \;y_{n}^{3}{{e}^{{ - y_{n}^{\beta }}}}C{\mathbf{E}}{{\left| {{{X}_{{nij}}}} \right|}^{s}}\;\leqslant \;0.05{\mathbf{E}}{{\left| {{{X}_{{nij}}}} \right|}^{s}}, \\ {{\left| {{{\mu }_{{nij}}}} \right|}^{s}}\;\leqslant \;0.05{{C}^{{ - 1}}}\;\leqslant \;0.05{{C}^{{ - 1}}}{\mathbf{E}}{{e}^{{{{{\left| {{{X}_{{nij}}}} \right|}}^{\beta }}}}}\;\leqslant \;0.05{\mathbf{E}}{{\left| {{{X}_{{nij}}}} \right|}^{s}} \\ \end{gathered} $$
(11)

hold for all sufficiently large n. Therefore,

$${\mathbf{E}}{{\left| {{{X}_{{nij}}}} \right|}^{s}}\;\leqslant \;{{3}^{{s - 1}}}\left( {{\mathbf{E}}{{{\left| {{{{\bar {X}}}_{{nij}}} - {{\mu }_{{nij}}}} \right|}}^{s}} + {\mathbf{E}}{{{\left| {{{{\tilde {X}}}_{{nij}}}} \right|}}^{s}} + {{{\left| {{{\mu }_{{nij}}}} \right|}}^{s}}} \right)\;\leqslant \;9{\mathbf{E}}{{\left| {{{{\bar {X}}}_{{nij}}} - {{\mu }_{{nij}}}} \right|}^{s}} + 0.9{\mathbf{E}}{{\left| {{{X}_{{nij}}}} \right|}^{s}}.$$

Hence, \({\mathbf{E}}{{\left| {{{X}_{{nij}}}} \right|}^{s}}\;\leqslant \;90{\mathbf{E}}{{\left| {{{{\bar {X}}}_{{nij}}} - {{\mu }_{{nij}}}} \right|}^{s}}\). Moreover, \({\mathbf{E}}{{\left| {{{{\bar {X}}}_{{nij}}} - {{\mu }_{{nij}}}} \right|}^{s}}\) \(\leqslant \) \(4\left( {{\mathbf{E}}{{{\left| {{{{\bar {X}}}_{{nij}}}} \right|}}^{s}} + {{{\left| {{{\mu }_{{nij}}}} \right|}}^{s}}} \right)\) \(\leqslant \) \(5{\mathbf{E}}{{\left| {{{X}_{{nij}}}} \right|}^{s}}\). Thus, for s = 1, 2, 3 and all i and j, the inequalities

$$0.2{\mathbf{E}}{{\left| {{{{\bar {X}}}_{{nij}}} - {{\mu }_{{nij}}}} \right|}^{s}}\;\leqslant \;{\mathbf{E}}{{\left| {{{X}_{{nij}}}} \right|}^{s}}\;\leqslant \;90{\mathbf{E}}{{\left| {{{{\bar {X}}}_{{nij}}} - {{\mu }_{{nij}}}} \right|}^{s}}$$
(12)

hold for all sufficiently large n. Hence, by Lemma 1, condition (3) holds for Ynij with Mn = \(y_{n}^{{1 - \beta }}{\text{/}}a\).

Now, we estimate the difference of the variances Bn and \({{\bar {B}}_{n}}\). We have

$${{B}_{n}} - {{\bar {B}}_{n}} = \frac{1}{n}\sum\limits_{i,j = 1}^n {\left( {{\mathbf{E}}X_{{nij}}^{2} - {\mathbf{E}}Y_{{nij}}^{2}} \right)} = \frac{1}{n}\sum\limits_{i,j = 1}^n {\left( {{\mathbf{E}}X_{{nij}}^{2} - {\mathbf{E}}{{{\left( {{{{\bar {X}}}_{{nij}}} - {{\mu }_{{nij}}}} \right)}}^{2}}} \right)} $$
$$ = \frac{1}{n}\sum\limits_{i,j = 1}^n {\left( {{\mathbf{E}}\tilde {X}_{{nij}}^{2} + 2{{\mu }_{{nij}}}{\mathbf{E}}{{{\bar {X}}}_{{nij}}} - \mu _{{nij}}^{2}} \right)} $$
$$ = \frac{1}{n}\sum\limits_{i,j = 1}^n {{\mathbf{E}}\tilde {X}_{{nij}}^{2}} + \frac{2}{n}\sum\limits_{i,j = 1}^n {{{\mu }_{{nij}}}{{{\bar {a}}}_{{nij}}}} - \frac{1}{n}\sum\limits_{i,j = 1}^n {\mu _{{nij}}^{2}.} $$

Further,

$$\begin{gathered} \frac{1}{n}\sum\limits_{i,j = 1}^n {{{\mu }_{{nij}}}{{{\bar {a}}}_{{nij}}}} = \frac{1}{n}\sum\limits_{i,j = 1}^n {{{{\bar {a}}}_{{ni.}}}{{{\bar {a}}}_{{nij}}}} + \frac{1}{n}\sum\limits_{i,j = 1}^n {{{{\bar {a}}}_{{n.j}}}{{{\bar {a}}}_{{nij}}}} - {{{\bar {a}}}_{{n..}}}\frac{1}{n}\sum\limits_{i,j = 1}^n {{{{\bar {a}}}_{{nij}}}} \\ = \sum\limits_{i = 1}^n {\bar {a}_{{ni.}}^{2}} + \sum\limits_{j = 1}^n {\bar {a}_{{n.j}}^{2}} - n\bar {a}_{{n..}}^{2} \\ \end{gathered} $$

By relations (8)–(11), we have

$$\begin{gathered} \sum\limits_{i = 1}^n {\bar {a}_{{ni.}}^{2}} \;\leqslant \;n\varepsilon _{n}^{2},\quad \sum\limits_{j = 1}^n {\bar {a}_{{n.j}}^{2}} \;\leqslant \;n\varepsilon _{n}^{2},\quad n\bar {a}_{{n..}}^{2}\;\leqslant \;n\varepsilon _{n}^{2},\quad \sum\limits_{i,j = 1}^n {\mu _{{nij}}^{2}} \;\leqslant \;{{n}^{2}}\varepsilon _{n}^{2}, \\ \sum\limits_{i,j = 1}^n {{\mathbf{E}}\tilde {X}_{{nij}}^{2}} \;\leqslant \;{{y}_{n}}{{n}^{2}}{{\varepsilon }_{n}}. \\ \end{gathered} $$
(13)

Hence,

$$\left| {{{B}_{n}} - {{{\bar {B}}}_{n}}} \right|\;\leqslant \;8{{y}_{n}}n{{\varepsilon }_{n}}.$$

Moreover,

$$\begin{gathered} {{y}_{n}}n{{\varepsilon }_{n}}{{(1 - \Phi ({{u}_{n}}))}^{{ - 1}}} = \exp \left\{ { - y_{n}^{\beta } + 2\ln {{y}_{n}} + \ln (n{{c}_{n}}) + \frac{{u_{n}^{2}}}{2} + \ln \left( {\sqrt {2\pi } {{u}_{n}}} \right)} \right\} \\ = \exp \left\{ { - y_{n}^{\beta } + o\left( {u_{n}^{\beta }B_{n}^{{\beta /2}}} \right)} \right\} \to 0\quad {\text{as}}\quad n \to \infty . \\ \end{gathered} $$
(14)

In particular, \({{B}_{n}}\sim {{\bar {B}}_{n}}\). Given inequalities (12), we can conclude that the quantity \({{\bar {\gamma }}_{n}}\) specified by formula (2) with \({{X}_{{nij}}}\) replaced by \({{Y}_{{nij}}}\) satisfies the relations \({{\bar {\gamma }}_{n}} = O({{\gamma }_{n}})\) and \({{\gamma }_{n}} = O({{\bar {\gamma }}_{n}})\) as n → ∞.

By Theorem 1, relation (7) holds for any sequence of real numbers {un} such that un → ∞, \(u_{n}^{3}\) = \(o(\sqrt n {\text{/}}{{\bar {\gamma }}_{n}})\) = \(o(\sqrt n {\text{/}}{{\gamma }_{n}})\), and un = \(o(\sqrt {{{{\bar {B}}}_{n}}} {\text{/}}{{M}_{n}})\) = \(o(B_{n}^{{\beta /(2(2 - \beta ))}})\) as n → ∞.
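
Here the last equality holds because \({{M}_{n}} = y_{n}^{{1 - \beta }}{\text{/}}a\) and \({{\bar {B}}_{n}}\sim {{B}_{n}}\):

$$\frac{{\sqrt {{{{\bar {B}}}_{n}}} }}{{{{M}_{n}}}} = a\sqrt {{{{\bar {B}}}_{n}}} \,{{\left( {{{u}_{n}}\sqrt {{{B}_{n}}} } \right)}^{{\beta - 1}}}\sim a\,u_{n}^{{\beta - 1}}B_{n}^{{\beta /2}},$$

so the condition un = \(o(\sqrt {{{{\bar {B}}}_{n}}} {\text{/}}{{M}_{n}})\) is equivalent to \(u_{n}^{{2 - \beta }} = o(B_{n}^{{\beta /2}})\), that is, to un = \(o(B_{n}^{{\beta /(2(2 - \beta ))}})\).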

Further, we have

$${\mathbf{P}}({{T}_{n}}\; \geqslant \;{{y}_{n}}) = {\mathbf{P}}({{R}_{n}} + n{{\bar {a}}_{{n..}}}\; \geqslant \;{{y}_{n}}) = {\mathbf{P}}\left( {{{R}_{n}}\; \geqslant \;{{{v}}_{n}}\sqrt {{{{\bar {B}}}_{n}}} } \right),$$

where

$${{{v}}_{n}} = \frac{{{{y}_{n}} - n{{{\bar {a}}}_{{n..}}}}}{{\sqrt {{{{\bar {B}}}_{n}}} }} = \frac{{{{u}_{n}}\sqrt {{{{\bar {B}}}_{n}} + O({{y}_{n}}n{{\varepsilon }_{n}})} + O(n{{\varepsilon }_{n}})}}{{\sqrt {{{{\bar {B}}}_{n}}} }}\sim {{u}_{n}}\quad {\text{as}}\quad n \to \infty .$$

Since

$$\frac{{{v}_{n}^{2}}}{2} = \frac{{u_{n}^{2}\left( {{{{\bar {B}}}_{n}} + O({{y}_{n}}n{{\varepsilon }_{n}})} \right) + O\left( {{{u}_{n}}\sqrt {{{{\bar {B}}}_{n}}} n{{\varepsilon }_{n}}} \right)}}{{2{{{\bar {B}}}_{n}}}} = \frac{{u_{n}^{2}}}{2} + o(1)\quad {\text{as}}\quad n \to \infty ,$$

we can conclude that 1 – \(\Phi ({{{v}}_{n}})\) ~ 1 – \(\Phi ({{u}_{n}})\) as n → ∞. Given (7), we obtain

$${\mathbf{P}}({{T}_{n}}\; \geqslant \;{{y}_{n}}) = {\mathbf{P}}\left( {{{R}_{n}}\; \geqslant \;{{{v}}_{n}}\sqrt {{{{\bar {B}}}_{n}}} } \right)\sim 1 - \Phi ({{u}_{n}})\quad {\text{as}}\quad n \to \infty .$$

It follows from inequalities (6) and (13) and relation (14) that

$${\mathbf{P}}({{S}_{n}}\; \geqslant \;{{y}_{n}}) = {\mathbf{P}}({{T}_{n}}\; \geqslant \;{{y}_{n}}) + O({{y}_{n}}n{{\varepsilon }_{n}}) = (1 - \Phi ({{u}_{n}}))(1 + o(1))\quad {\text{as}}\quad n \to \infty .$$

The proof of the theorem is complete.

\(\square \)

Proof of Remark 1. If β = 1, then by Lemma 1 with \(\bar {X} = X\) and μ = 0 (truncation and centering are not required in this case) condition (3) holds with \({{M}_{n}}\) = 1/a. Remark 1 follows from Theorem 1. The conditions on cn are superfluous in this case.

\(\square \)