
1 Introduction

We first present three theorems from the thesis of Yu. V. Prokhorov [10]. Let

$$\xi _{1},\xi _{2},\ldots ,\xi _{n},\ldots $$

be a sequence of independent identically distributed random variables with distribution function \(F(x) = P\{\xi _{1} < x\}\).

Theorem P4.

Let F(x) satisfy one of the following two conditions:

  1.

    F(x) is a discrete distribution function;

  2.

    There exists an integer \(n_{0}\) such that \({F}^{{_\ast}n_{0}}(x)\) has an absolutely continuous component.

Then there exists a sequence \(\{G_{n}(x)\}\) of infinitely divisible distribution functions such that

$$\Vert {F}^{{_\ast}n}(x) - G_{ n}(x)\Vert \rightarrow 0\quad \mbox{ as}\quad n \rightarrow \infty ,$$

where \(\Vert \cdot \Vert\) stands for the total variation.

Theorem P5.

In order that

$$\Vert {F}^{{_\ast}n}(xB_{ n} + A_{n}) - G(x)\Vert \rightarrow 0,\quad n \rightarrow \infty ,$$

for some appropriately chosen constants \(B_{n} > 0\) and \(A_{n}\) and a stable distribution function G(x), the following conditions are necessary and sufficient:

  1.

    \({F}^{{_\ast}n}(xB_{n} + A_{n}) \rightarrow G(x),\quad n \rightarrow \infty ,\quad x \in {R}^{1}\);

  2.

    There exists \(n_{0}\) such that

    $$\int\limits_{-\infty }^{\infty }p_{ n_{0}}(x)\,dx > 0,$$

where \(p_{n_{0}}(x) = \frac{d} {dx}{F}^{{_\ast}n_{0}}(x)\).

Theorem P6.

Suppose that \(\xi _{1}\) takes only the values \(m = 0,\pm 1,\ldots \) and that the stable distribution function G(x) has a density g(x). Then

$$\sum\limits_{m}\bigg{\vert }P\{\xi _{1} + \cdots + \xi _{n} = m\} - \frac{1} {B_{n}}g\Big{(}\frac{m - A_{n}} {B_{n}} \Big{)}\bigg{\vert }\rightarrow 0$$

if and only if the following two conditions are satisfied:

  1.

    \({F}^{{_\ast}n}(xB_{n} + A_{n}) \rightarrow G(x),\quad n \rightarrow \infty ,\quad x \in {R}^{1}\);

  2.

    The maximal step of the distribution of \(\xi _{1}\) equals 1.

In the case where \(G(x) = \Phi (x)\) is the standard Gaussian distribution function, the following statement is proved.

Theorem 1.

Let \(\xi _{1}\) have zero mean and unit variance. In order that

$$\Vert {F}^{{_\ast}n}(x\sqrt{n}) - \Phi (x)\Vert = O({n}^{-\delta /2}),\quad n \rightarrow \infty ,$$

for some \(\delta \in (0,1]\), the following two conditions are necessary and sufficient:

  1.

    \(\sup _{x}\big{\vert }{F}^{{_\ast}n}(x\sqrt{n}) - \Phi (x)\big{\vert } = O({n}^{-\delta /2}),\quad n \rightarrow \infty ;\)

  2.

    There exists \(n_{0}\) such that the distribution function \({F}^{{_\ast}n_{0}}(x)\) has an absolutely continuous component.

The theorem is proved in [2]. In the same paper, a sequence of random variables \(\xi _{1},\ldots ,\xi _{n},\ldots \) with values \(m = 0,\pm 1,\pm 2,\ldots \) is also considered. In this case, the following statement is proved.

Theorem 2.

In order that

$$\sum\limits_{m}\bigg{\vert }P\{\xi _{1} + \cdots + \xi _{n} = m\} - \frac{1} {\sqrt{2\pi n}}\ {\mathit{e}}^{-{m}^{2}/2n }\bigg{\vert } = O({n}^{-\delta /2})$$

for some \(\delta \in (0,1]\), the following two conditions are necessary and sufficient:

  1.

    \(\sup _{x}\big{\vert }{F}^{{_\ast}n}(x\sqrt{n}) - \Phi (x)\big{\vert } = O({n}^{-\delta /2}),\quad n \rightarrow \infty \) ;

  2.

    The maximal step of the distribution of \(\xi _{1}\) is 1.

In the case where P(A) is a probability distribution defined on the k-dimensional space \({R}^{k}\), and \(\Phi (A)\) is the standard k-dimensional normal distribution, the following theorem is proved in [3].

Theorem 3.

In order that

$$\sup _{A\in {\mathfrak{M}}^{k}}\big{\vert }{P}^{{_\ast}n}(A\sqrt{n}) - \Phi (A)\big{\vert } = O({n}^{-\delta /2}),$$

the following two conditions are necessary and sufficient:

  1.

    \(\sup _{\Vert \vec{t}\Vert =1}\sup _{x\in {R}^{1}}\big{\vert }{P}^{{_\ast}n}(\sqrt{n}A_{x}(\vec{t})) - \Phi (A_{x}(\vec{t}))\big{\vert } = O({n}^{-\delta /2})\quad \mbox{ as}\quad n \rightarrow \infty ,\) where \(A_{x}(\vec{t}) =\{\vec{ u} :\, (\vec{t},\vec{u}) < x\}\) , \(\Vert \vec{t}\Vert\) is the length of a vector \(\vec{t} \in {R}^{k}\) , and \((\vec{u},\vec{t})\) denotes the inner product in R k ;

  2.

    There exists \(n_{0}\) such that the distribution function \({F}^{{_\ast}n_{0}}\) has an absolutely continuous component.

The statements of Theorems 1–3 remain valid if one replaces \(\Phi (A)\) by “long” Chebyshev–Cramér asymptotic expansions, with appropriate changes in condition (1) and with no changes in the Prokhorov conditions (conditions (2) in the theorems); see [3].

2 Appell Polynomials

Recall that a sequence of polynomials \(g_{n}(x)\), \(n = 1,2,\ldots \), is called an Appell polynomial set if

$$\frac{d} {dx}g_{n}(x) = ng_{n-1}(x),\ n = 1,2,\ldots ,\ x \in {R}^{1};$$

see [6], p. 242.

Often, by Appell polynomials are meant the polynomials

$$A_{j}(z) = {(-1)}^{j}{z}^{j+1} \sum\limits_{l=0}^{j-1}q_{ jl}{z}^{l}$$
(1)

defined by

$$\Big{(}1 + \frac{z} {\tau }{\Big{)}}^{\tau } ={ \mathit{e}}^{z}\bigg{(}1 + \sum\limits_{j=1}^{\infty }\Big{(}\frac{1} {\tau }{\Big{)}}^{j}A_{ j}(z)\bigg{)}$$
(2)

for \(\vert z\vert < \tau \) (see [5, 8]).

The coefficients \(q_{j\,l}\) satisfy the recursion formula

$$q_{jl} = \frac{(j + l)q_{j-1,l} + q_{j-1,l-1}} {j + l + 1}$$
(3)

for \(j = 1,2,\ldots \), \(l = 1,2,\ldots ,j - 2\) (see [8]). For l < 0, \(q_{jl} = 0\), and

$$q_{j0} = \frac{1} {j + 1},\qquad q_{j,j-1} = \frac{1} {{2}^{j}j!}.$$

It is known [8] that

$$q_{jl} = \sum\limits_{\begin{array}{c} \nu _{1} + 2\nu _{2} + \cdots + j\nu _{j} = j \\ \nu _{1} + \nu _{2} + \cdots + \nu _{j} = l + 1 \end{array} } \prod\limits_{i=1}^{j}\bigg{[} \frac{1} {\nu _{i}!}\Big{(} \frac{1} {i + 1}{\Big{)}}^{\nu _{i} }\bigg{]}$$

for \(j = 1,2,\ldots \), \(l = 0,1,\ldots ,j - 1\).
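These coefficients lend themselves to a mechanical check. The following Python sketch (an illustration added here, not part of the original text) builds \(q_{jl}\) exactly from the recursion (3) with the boundary value \(q_{j0} = 1/(j+1)\), and verifies the partition-sum formula, the closed form \(q_{j,j-1} = 1/(2^{j}j!)\), and the bound of Lemma 1 below.

```python
# Sanity check: build q_{jl} from the recursion (3) with the stated boundary
# values, then cross-check against the partition-sum formula and the closed
# forms q_{j0} = 1/(j+1), q_{j,j-1} = 1/(2^j j!). Exact rational arithmetic
# avoids any rounding issues.
from fractions import Fraction
from math import factorial

JMAX = 8

def q_table(jmax):
    """q[(j, l)] for 1 <= j <= jmax, 0 <= l <= j-1; q_{jl} = 0 outside."""
    q = {}
    for j in range(1, jmax + 1):
        q[(j, 0)] = Fraction(1, j + 1)              # boundary value
        for l in range(1, j):
            q[(j, l)] = ((j + l) * q.get((j - 1, l), Fraction(0))
                         + q.get((j - 1, l - 1), Fraction(0))) / (j + l + 1)
    return q

def q_partition(j, l):
    """Direct evaluation: sum over nu_1,...,nu_j >= 0 with
    sum i*nu_i = j and sum nu_i = l + 1 of prod 1/nu_i! * (1/(i+1))^nu_i."""
    total = Fraction(0)
    def rec(i, weight_left, parts_left, acc):
        nonlocal total
        if i > j:
            if weight_left == 0 and parts_left == 0:
                total += acc
            return
        for nu in range(0, weight_left // i + 1):
            if nu > parts_left:
                break
            rec(i + 1, weight_left - i * nu, parts_left - nu,
                acc * Fraction(1, factorial(nu)) * Fraction(1, i + 1) ** nu)
    rec(1, j, l + 1, Fraction(1))
    return total

q = q_table(JMAX)
for j in range(1, JMAX + 1):
    assert q[(j, j - 1)] == Fraction(1, 2 ** j * factorial(j))
    assert all(q[(j, l)] == q_partition(j, l) for l in range(j))
    assert sum(q[(j, l)] for l in range(j)) <= Fraction(1, 2)   # Lemma 1
print("q_{jl}: recursion, partition formula, and Lemma 1 agree up to j =", JMAX)
```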

Estimating the remainder terms of asymptotic expansions, we will use the following lemma.

Lemma 1.

We have

$$\sum\limits_{l=0}^{j-1}q_{ jl} \leq \frac{1} {2},\quad j = 1,2,\ldots.$$
(4)

The lemma can be proved by induction using (3).

Let us now estimate the remainder term

$$R_{s}(z,\tau ) = \Big{(}1 + \frac{z} {\tau }{\Big{)}}^{\tau } -{\mathit{e}}^{z}\bigg{(}1 + \sum\limits_{j=1}^{s}\Big{(}\frac{1} {\tau }{\Big{)}}^{j}A_{ j}(z)\bigg{)}.$$

Here z may be a complex number (e.g., a difference of characteristic functions of random vectors), \(\tau > 0\), \(\vert z\vert < \tau \), and \(s = 1,2,\ldots \).

Lemma 2.

We have

$$\vert R_{s}(z,\tau )\vert \leq \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{1} {2}{\left (\frac{1} {\tau }\right )}^{s} \frac{1} {\tau -\vert z\vert }\vert {z}^{s+2}{\mathit{e}}^{z}\vert \quad &\text{ if}\ \vert z\vert < 1, \\ \frac{1} {2}{\left (\frac{1} {\tau }\right )}^{s} \frac{1} {\tau - 1}\vert {z}^{s+2}{\mathit{e}}^{z}\vert \quad &\text{ if}\ \vert z\vert = 1\ \text{ and}\ \tau > 1, \\ \frac{1} {2}{\left (\frac{\vert z\vert } {\tau } \right )}^{s+1} \frac{\tau \vert {z}^{s+2}{\mathit{e}}^{z}\vert } {2(\vert z\vert - 1)(\tau -\vert z{\vert }^{2})}\quad &\text{ if}\ 1 < \vert z\vert < \sqrt{\tau }. \\ \quad \end{array} \right.$$

Proof.

From (1) and (2) it follows that

$$R_{s}(z,\tau ) = \sum\limits_{j=s+1}^{\infty }\Big{(}\frac{1} {\tau }{\Big{)}}^{j}A_{ j}(z){\mathit{e}}^{z} = \Big{(}-\frac{1} {\tau }{\Big{)}}^{s+1}{z}^{s+2}{\mathit{e}}^{z} \sum\limits_{r=0}^{\infty }\Big{(}-\frac{z} {\tau }{\Big{)}}^{r} \sum\limits_{l=0}^{r+s}q_{ r+s+1,l}{z}^{l}.$$

Now it remains to apply inequality (4), and the lemma follows after a simple calculation.
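As a numerical illustration (added here, not from the original text), one can compare the truncated expansion (2) with \((1+z/\tau )^{\tau }\) directly and check the first bound of the lemma at a point with \(\vert z\vert < 1\); the values z = 0.5, τ = 40, s = 3 below are arbitrary choices.

```python
# Numerical sanity check of expansion (2) and of the first bound of Lemma 2
# for a real z with |z| < 1. A_j(z) is assembled from the coefficients q_{jl}
# of (1), built by the recursion (3).
import math

def q_table(jmax):
    q = {}
    for j in range(1, jmax + 1):
        q[(j, 0)] = 1.0 / (j + 1)
        for l in range(1, j):
            q[(j, l)] = ((j + l) * q.get((j - 1, l), 0.0)
                         + q.get((j - 1, l - 1), 0.0)) / (j + l + 1)
    return q

def A(j, z, q):
    # Appell polynomial (1): A_j(z) = (-1)^j z^{j+1} sum_l q_{jl} z^l
    return (-1) ** j * z ** (j + 1) * sum(q[(j, l)] * z ** l for l in range(j))

z, tau, s = 0.5, 40.0, 3
q = q_table(s + 1)
approx = math.exp(z) * (1.0 + sum(A(j, z, q) / tau ** j for j in range(1, s + 1)))
R = (1.0 + z / tau) ** tau - approx          # remainder R_s(z, tau)
bound = 0.5 * tau ** (-s) * abs(z ** (s + 2) * math.exp(z)) / (tau - abs(z))
assert abs(R) <= bound
print(f"s={s}: |R_s| = {abs(R):.3e} <= bound = {bound:.3e}")
```

The remainder behaves like \(\tau ^{-(s+1)}\), so the observed \(\vert R_{s}\vert \) sits below the stated bound with room to spare.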

3 Expansion of Convolutions of Measures by Appell Polynomials

Consider the convolutions of generalized finite-variation measures μ(B), \(B \in {\mathfrak{M}}^{k}\):

$$\Big{(}\mu _{0} + \frac{\mu } {n}{\Big{)}}^{{_\ast}n}(B) = \int\limits_{{R}^{k}}\Big{(}\mu _{0} + \frac{\mu } {n}\Big{)}(B -\vec{ x})\Big{(}\mu _{0} + \frac{\mu } {n}{\Big{)}}^{{_\ast}(n-1)}(d\vec{x}),$$

where \(\mu _{0}\) is the Dirac measure concentrated at \(\vec{0} = (0,0,\ldots ,0) \in {R}^{k}\), \(n = 1,2,\ldots \);

$$\Big{(}\mu _{0} + \frac{\mu } {n}{\Big{)}}^{{_\ast}0} = \mu _{ 0};\qquad \mu _{0} {_\ast} \mu = \mu.$$

It is obvious that

$$\Big{\Vert}\Big{(}\mu _{0} + \frac{\mu } {n}{\Big{)}}^{{_\ast}n}\Big{\Vert} \leq \bigg{(}\Big{\Vert}\mu _{ 0} + \frac{\mu } {n}\Big{\Vert}{\bigg{)}}^{n}.$$

Theorem 4.

If \(\Vert \mu \Vert < n\) , we have the asymptotic expansion

$$\Big{(}\mu _{0} + \frac{\mu } {n}{\Big{)}}^{{_\ast}n} ={ \mathit{e}}^{\mu } {_\ast}\bigg\{\mu _{ 0} + \sum\limits_{j=1}^{\infty }\Big{(} \frac{1} {n}{\Big{)}}^{j}A_{ j}(\mu )\bigg\}$$

where \(n = 1,2,\ldots \) , and

$$A_{j}(\mu ) = {(-1)}^{j}{\mu }^{{_\ast}(j+1)} {_\ast}\sum\limits_{l=0}^{j-1}q_{ jl}\,{\mu }^{{_\ast}l}$$

is the Appell polynomial, with the powers of μ understood in the convolution sense.

Proof.

Obviously,

$${\left (\mu _{0} + \frac{\mu } {n}\right )}^{{_\ast}n} = \sum\limits_{\nu =0}^{n}{\left ( \frac{1} {n}\right )}^{\nu }\left({n \atop \nu}\right) {\mu }^{{_\ast}\nu } = \sum\limits_{\nu =0}^{\infty }\frac{{\mu }^{{_\ast}\nu }} {{n}^{\nu }} \frac{n(n - 1)\ldots (n - \nu + 1)} {\nu !} ,$$

where

$$n(n - 1)\ldots (n - \nu + 1) = \sum\limits_{j=0}^{\nu -1}{(-1)}^{j}{n}^{\nu -j}C_{ \nu }^{(j)},$$

and \(C_{\nu }^{(j)}\) is the Stirling number of the first kind.

From the last two equalities it follows that

$$\begin{array}{rcl} \Big{(}\mu _{0} + \frac{\mu } {n}{\Big{)}}^{{_\ast}n}& =& \mu _{ 0} + \sum\limits_{\nu =1}^{\infty } \frac{{\mu }^{{_\ast}\nu }} {\nu !{n}^{\nu }} \sum\limits_{j=0}^{\nu -1}{(-1)}^{j}C_{ \nu }^{(j)}{n}^{\nu -j} \\ & =& \mu _{0} + \sum\limits_{j=0}^{\infty }\Big{(} - \frac{1} {n}{\Big{)}}^{j} \sum\limits_{\nu =j+1}^{\infty }\frac{1} {\nu !}{\mu }^{{_\ast}\nu }C_{ \nu }^{(j)}.\end{array}$$

Since \(C_{\nu }^{(0)} = 1\) and

$$C_{\nu }^{(j)} = \sum\limits_{l=0}^{j-1}q_{ jl}\nu (\nu - 1)\ldots (\nu - j - l),$$

we obtain

$$\begin{array}{rcl} \Big{(}\mu _{0} + \frac{\mu } {n}{\Big{)}}^{{_\ast}n}& =& \mu _{ 0} + \sum\limits_{\nu =1}^{\infty } \frac{1} {\nu !}{\mu }^{{_\ast}\nu }C_{ \nu }^{(0)} + \\ & & +\sum\limits_{j=1}^{\infty }\Big{(} - \frac{1} {n}{\Big{)}}^{j} \sum\limits_{\nu =j+1}^{\infty }\frac{1} {\nu !}{\mu }^{{_\ast}\nu } \sum\limits_{k=0}^{j-1}q_{ jk}\nu (\nu - 1)\ldots (\nu - j - k) = \\ & =& \sum\limits_{\nu =0}^{\infty }\frac{1} {\nu !}{\mu }^{{_\ast}\nu } + \sum\limits_{j=1}^{\infty }\Big{(} - \frac{1} {n}{\Big{)}}^{j} \sum\limits_{k=0}^{j-1}q_{ jk} \sum\limits_{\nu =j+k+1}^{\infty } \frac{1} {(\nu - j - k - 1)!}{\mu }^{{_\ast}\nu }= \\ & =&{ \mathit{e}}^{\mu } + \sum\limits_{j=1}^{\infty }\Big{(} - \frac{1} {n}{\Big{)}}^{j} \sum\limits_{k=0}^{j-1}q_{ jk}{\mu }^{{_\ast}(j+k+1)} {_\ast}\bigg{(}\sum\limits_{l=0}^{\infty }\frac{1} {l!}{\mu }^{{_\ast}l}\bigg{)} = \\ & =&{ \mathit{e}}^{\mu } {_\ast}\bigg\{\mu _{ 0} + \sum\limits_{j=1}^{\infty }\Big{(} - \frac{1} {n}{\Big{)}}^{j}{\mu }^{{_\ast}(j+1)} {_\ast}\bigg{(}\sum\limits_{k=0}^{j-1}q_{ jk}{\mu }^{{_\ast}k}\bigg{)}\bigg\} = \\ & =&{ \mathit{e}}^{\mu } {_\ast}\bigg\{\mu _{ 0} + \sum\limits_{j=1}^{\infty }\Big{(} \frac{1} {n}{\Big{)}}^{j}A_{ j}(\mu )\bigg\}.\end{array}$$

The theorem is proved.
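Theorem 4 can be illustrated numerically for measures on the integers, with convolution powers computed directly. The sketch below (an added illustration; the signed measure μ and the truncation levels are arbitrary choices, not taken from the paper) compares the exact n-fold convolution with the expansion truncated after s terms.

```python
# Illustration of Theorem 4: represent measures on the integers as dicts
# point -> mass, compute (mu_0 + mu/n)^{*n} exactly by repeated convolution,
# and compare it with e^mu * {mu_0 + sum_{j<=s} n^{-j} A_j(mu)}, where e^mu is
# truncated far beyond double precision.
import math

def conv(a, b):
    c = {}
    for x, wa in a.items():
        for y, wb in b.items():
            c[x + y] = c.get(x + y, 0.0) + wa * wb
    return c

def add(a, b, s=1.0):
    c = dict(a)
    for x, w in b.items():
        c[x] = c.get(x, 0.0) + s * w
    return c

def q_table(jmax):
    q = {}
    for j in range(1, jmax + 1):
        q[(j, 0)] = 1.0 / (j + 1)
        for l in range(1, j):
            q[(j, l)] = ((j + l) * q.get((j - 1, l), 0.0)
                         + q.get((j - 1, l - 1), 0.0)) / (j + l + 1)
    return q

delta0 = {0: 1.0}
mu = {0: -0.3, 1: 0.3}          # a signed measure with ||mu|| = 0.6 < 1
n, s = 10, 2

# left-hand side: (mu_0 + mu/n)^{*n}, computed exactly
lhs = dict(delta0)
base = add(delta0, mu, 1.0 / n)
for _ in range(n):
    lhs = conv(lhs, base)

# convolution powers mu^{*m} and truncated exponential e^mu = sum mu^{*m}/m!
pw = [delta0]
for _ in range(44):
    pw.append(conv(pw[-1], mu))
emu = {0: 0.0}
for m in range(40):
    emu = add(emu, pw[m], 1.0 / math.factorial(m))

# right-hand side: e^mu * {mu_0 + sum_{j=1}^s n^{-j} A_j(mu)}
q = q_table(s)
rhs = dict(delta0)
for j in range(1, s + 1):
    Aj = {0: 0.0}
    for l in range(j):
        Aj = add(Aj, pw[j + 1 + l], (-1.0) ** j * q[(j, l)])
    rhs = add(rhs, Aj, n ** (-j))
rhs = conv(emu, rhs)

err = sum(abs(lhs.get(k, 0.0) - rhs.get(k, 0.0)) for k in set(lhs) | set(rhs))
assert err < 1e-3               # the remainder here is O(n^{-(s+1)})
print(f"total variation of the remainder after s={s} terms: {err:.2e}")
```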

Theorem 5.

Let μ and \(\mu _{1}\) be generalized finite-variation measures in R k . Then, for every Borel set \(B \in {\mathfrak{M}}^{k}\) , we have

$$\begin{array}{rcl} & & \bigg{\vert }\Big{(}\mu _{1} {_\ast}\Big{(}\mu _{0} + \frac{\mu } {n}\Big{)}{\Big{)}}^{{_\ast}n}(B) - \mu _{ 1}^{{_\ast}n} {_\ast}{\mathit{e}}^{\mu } {_\ast}\bigg\{\mu _{ 0} + \sum\limits_{j=1}^{s}\Big{(} \frac{1} {n}{\Big{)}}^{j}A_{ j}(\mu )\bigg\}(B)\bigg{\vert } \\ & &\quad \leq \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{1} {2}\Big{(} \frac{1} {n}{\Big{)}}^{s}\Delta (B) \quad &\text{ if }\ \Vert \mu \Vert < 1, \\ \frac{1} {2(n - 1)}\Big{(} \frac{1} {n}{\Big{)}}^{s}\Delta (B) \quad &\text{ if }\ \Vert \mu \Vert = 1, \\ \frac{1} {2}\Big{(}\frac{\Vert \mu \Vert } {n}{\Big{)}}^{s} \frac{\Delta (B)} {(\Vert \mu \Vert - 1)(n -\Vert {\mu \Vert }^{2})}\quad &\text{ if }\ 1 <\Vert \mu \Vert < \sqrt{n}, \end{array} \right. \end{array}$$

where \(n = 1,2,\ldots \) , and

$$\Delta (B) =\sup _{\vec{x}}\big{\vert }\mu _{1}^{{_\ast}n} {_\ast} {\mu }^{{_\ast}(s+2)} {_\ast}{\mathit{e}}^{\mu }(B -\vec{ x})\big{\vert }.$$

Proof.

When \(\Vert \mu \Vert < n\), the remainder term is

$$\begin{array}{rcl} r_{s+1}(B)& =& \sum\limits_{j=s+1}^{\infty }\Big{(} \frac{1} {n}{\Big{)}}^{j}{(-1)}^{j}{\mathit{e}}^{\mu } {_\ast} \mu _{ 1}^{{_\ast}n} {_\ast} {\mu }^{{_\ast}(j+1)} \sum\limits_{l=0}^{j-1}q_{ jl}{\mu }^{{_\ast}l}(B) = \\ & =&{ \mathit{e}}^{\mu } {_\ast} \mu _{ 1}^{{_\ast}n} {_\ast} {\mu }^{{_\ast}(s+2)}\Big{(} - \frac{1} {n}{\Big{)}}^{s+1} {_\ast}\sum\limits_{r=0}^{\infty }\Big{(} -\frac{\mu } {n}{\Big{)}}^{{_\ast}r} \sum\limits_{l=0}^{r+s}q_{ r+s+1,l}{\mu }^{{_\ast}l} = \\ & =& \Big{(} - \frac{1} {n}{\Big{)}}^{s+1} \int\limits_{{R}^{k}}{\mathit{e}}^{\mu } {_\ast} \mu _{ 1}^{{_\ast}n} {_\ast} {\mu }^{{_\ast}(s+2)}(B -\vec{ x}) \\ & & \left (\sum\limits_{r=0}^{\infty }\Big{(} -\frac{\mu } {n}{\Big{)}}^{{_\ast}r} \sum\limits_{l=0}^{r+s}q_{ r+s+1,l}{\mu }^{{_\ast}l}\right )(d\vec{x}).\end{array}$$

From this it follows that

$$\vert r_{s+1}(B)\vert \leq \Big{(} \frac{1} {n}{\Big{)}}^{s+1}\Delta (B)\sum\limits_{r=0}^{\infty }\bigg{(}\frac{\Vert \mu \Vert } {n}{\bigg{)}}^{r} \sum\limits_{l=0}^{r+s}q_{ r+s+1,l}{(\Vert \mu \Vert )}^{l}.$$
(5)

Here,

$$\sum\limits_{r=0}^{\infty }\bigg{(}\frac{\Vert \mu \Vert } {n}{\bigg{)}}^{r} \sum\limits_{l=0}^{r+s}q_{ r+s+1,l}{(\Vert \mu \Vert )}^{l} \leq \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{1} {2} \frac{n} {n -\Vert \mu \Vert } \quad &\text{ if}\ \Vert \mu \Vert < 1, \\ \frac{1} {2} \frac{n} {n - 1} \quad &\text{ if}\ \Vert \mu \Vert = 1, \\ \frac{1} {2} \frac{\Vert {\mu \Vert }^{s+1}} {\Vert \mu \Vert - 1} \frac{n} {n -\Vert {\mu \Vert }^{2}}\quad &\text{ if}\ 1 <\Vert \mu \Vert < \sqrt{n}. \end{array} \right.$$

From this and from (5) the theorem follows.

Suppose that the probability distribution G has an inverse generalized measure \({G}^{-{_\ast}}\), i.e.,

$$G {_\ast} {G}^{-{_\ast}} = {G}^{-{_\ast}}{_\ast} G = E_{ 0},$$

where \(E_{0}\) is the degenerate \(k\)-dimensional measure concentrated at \(\vec{0} \in {R}^{k}\). This property is possessed by the accompanying probability distributions \({\mathit{e}}^{F-E_{0}}\), for which \({G}^{-{_\ast}}\ =\ { \mathit{e}}^{-(F-E_{0})}\).

Theorem 6.

Let F be a k-dimensional probability distribution, let a probability distribution G have an inverse \({G}^{-{_\ast}}\) , and let \(\varrho =\Vert (F - G) {_\ast} {G}^{-{_\ast}}\Vert < 1\) . Then

$${F}^{{_\ast}n} = {G}^{{_\ast}n} {_\ast}{\mathit{e}}^{n(F-G){_\ast}{G}^{-{_\ast}} }{_\ast}\bigg\{E_{0} + \sum\limits_{j=1}^{\infty }\Big{(} \frac{1} {n}{\Big{)}}^{j}A_{ j}(n(F - G) {_\ast} {G}^{-{_\ast}})\bigg\},$$
(6)

where

$$A_{j}(n(F-G){_\ast}{G}^{-{_\ast}}) = {(-1)}^{j}{(n(F-G){_\ast}{G}^{-{_\ast}})}^{{_\ast}(j+1)}{_\ast}\sum\limits_{l=0}^{j-1}q_{ jl}{(n(F-G){_\ast}{G}^{-{_\ast}})}^{{_\ast}l}.$$

To estimate the remainder term

$$r_{s+1}(B) = \sum\limits_{j=s+1}^{\infty }\Big{(} \frac{1} {n}{\Big{)}}^{j}{G}^{{_\ast}n} {_\ast}{\mathit{e}}^{n(F-G){_\ast}{G}^{-{_\ast}} }{_\ast} A_{j}\big{(}n(F - G) {_\ast} {G}^{-{_\ast}}\big{)}(B),$$

we use

$$\Delta (B) =\sup _{\vec{x}}\big{\vert }{G}^{{_\ast}n} {_\ast}{\mathit{e}}^{n(F-G){_\ast}{G}^{-{_\ast}} }{_\ast}\big{(}n(F - G) {_\ast} {G}^{-{_\ast}}{\big{)}}^{{_\ast}(s+2)}(B -\vec{ x})\big{\vert }$$

and

$$L = \Big{(} \frac{1} {n}{\Big{)}}^{s+1} \sum\limits_{r=0}^{\infty }{\varrho }^{r} \sum\limits_{l=0}^{r+s}q_{ r+s+1,l}{(n\varrho )}^{l}.$$

It is obvious that

$$\vert r_{s+1}(B)\vert \leq L\Delta (B),$$
(7)

where

$$L \leq \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{1} {2(1 - \varrho )}\Big{(} \frac{1} {n}{\Big{)}}^{s+1}\quad &\text{ if}\ n\varrho < 1, \\ \frac{1} {2}\Big{(} \frac{1} {n}{\Big{)}}^{s} \frac{1} {n - 1} \quad &\text{ if}\ n\varrho = 1, \\ \frac{1} {2} \frac{{\varrho }^{s+1}} {(1 - n{\varrho }^{2})} \quad &\text{ if}\ \frac{1} {n} < \varrho < \frac{1} {\sqrt{n}}. \end{array} \right.$$
(8)

From (7) and (8) there follows an estimate of the remainder term in the asymptotic expansion (6).

4 Expansion of a Convolution by Accompanying Probability Measures

Every k-dimensional probability measure P satisfies the identity

$$P ={ \mathit{e}}^{P-E_{0} } {_\ast}\big{(}E_{0} - {(P - E_{0})}^{{_\ast}2} {_\ast} E{(E_{ 0} - P)}^{{_\ast}\xi _{1} }\big{)},$$
(9)

where

$$E{(E_{0} - P)}^{{_\ast}\xi _{1} } = \sum\limits_{m=0}^{\infty }P\{\xi _{ 1} = m\}{(E_{0} - P)}^{{_\ast}m}$$
(10)

with \(P\{\xi _{1} = m\} = \frac{m + 1} {(m + 2)!}\), \(m = 0,1,2,\ldots \).
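Identity (9) with the weights (10) can be verified numerically. The following sketch (an added illustration, not from the original text) does so for a Bernoulli law P on {0, 1}; the series for \({\mathit{e}}^{P-E_{0}}\) and for \(E{(E_{0} - P)}^{{_\ast}\xi _{1}}\) are truncated at 40 terms, which is far below double precision here since \(\Vert P - E_{0}\Vert = 2p\).

```python
# Numerical check of identity (9) for P Bernoulli(p). Measures on the
# integers are dicts point -> mass; convolution series are truncated.
import math

def conv(a, b):
    c = {}
    for x, wa in a.items():
        for y, wb in b.items():
            c[x + y] = c.get(x + y, 0.0) + wa * wb
    return c

def add(a, b, s=1.0):
    c = dict(a)
    for x, w in b.items():
        c[x] = c.get(x, 0.0) + s * w
    return c

E0 = {0: 1.0}
p = 0.3
P = {0: 1.0 - p, 1: p}
x = add(P, E0, -1.0)                     # x = P - E_0, ||x|| = 2p

pw = [E0]                                # convolution powers x^{*m}
for _ in range(45):
    pw.append(conv(pw[-1], x))

exp_x = {0: 0.0}                         # e^{P - E_0} = sum x^{*m}/m!
for m in range(40):
    exp_x = add(exp_x, pw[m], 1.0 / math.factorial(m))

mix = {0: 0.0}                           # E (E_0 - P)^{*xi_1}, weights (10)
for m in range(40):
    mix = add(mix, pw[m], (-1.0) ** m * (m + 1) / math.factorial(m + 2))

# right-hand side of (9): e^{P-E_0} * (E_0 - (P-E_0)^{*2} * E(E_0-P)^{*xi_1})
rhs = conv(exp_x, add(E0, conv(pw[2], mix), -1.0))
err = sum(abs(rhs.get(k, 0.0) - P.get(k, 0.0)) for k in set(rhs) | set(P))
assert err < 1e-10
print(f"total variation between P and the RHS of (9): {err:.1e}")
```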

From (9) and (10) it follows that

$$\big{(}(P -{\mathit{e}}^{P-E_{0} }) {_\ast}{\mathit{e}}^{-(P-E_{0})}{\big{)}}^{{_\ast}l} = {(-1)}^{l}{(P - E_{ 0})}^{{_\ast}2l} {_\ast} E{(E_{ 0} - P)}^{{_\ast}z_{l} }$$
(11)

for \(l = 1,2,\ldots \), where \(z_{l} = \xi _{1} + \xi _{2} + \cdots + \xi _{l}\) is the sum of the i.i.d. random variables \(\xi _{1},\xi _{2},\ldots ,\xi _{l}\).

It is obvious that, for all P,

$$\big{\Vert}\big{(}(P -{\mathit{e}}^{P-E_{0} }) {_\ast}{\mathit{e}}^{-(P-E_{0})}{\big{)}}^{{_\ast}l}\big{\Vert} \leq \bigg{(}\frac{1 +{ \mathit{e}}^{2}} {4}{ \bigg{)}}^{l},\qquad l = 1,2,\ldots.$$

If \(\Vert {(P - E_{0})}^{{_\ast}2}\Vert < \frac{4} {1+{\mathit{e}}^{2}}\), then

$$\varrho = \big{\Vert}(P -{\mathit{e}}^{P-E_{0} }) {_\ast}{\mathit{e}}^{-(P-E_{0})}\big{\Vert} < 1,$$

and for the convolution \({P}^{{_\ast}n}\), we can apply Theorem 6:

$${P}^{{_\ast}n} ={ \mathit{e}}^{n(P-E_{0})} {_\ast}{\mathit{e}}^{n\mu } {_\ast}\bigg\{E_{ 0} + \sum\limits_{j=1}^{\infty }\Big{(} \frac{1} {n}{\Big{)}}^{j}A_{ j}(n\mu )\bigg\},$$
(12)

where

$$A_{j}(n\mu ) = {(-1)}^{j}{(n\mu )}^{{_\ast}(j+1)} {_\ast}\sum\limits_{l=0}^{j-1}q_{ jl}\,{(n\mu )}^{{_\ast}l}$$

and

$$\mu = (P -{\mathit{e}}^{P-E_{0} }) {_\ast}{\mathit{e}}^{-(P-E_{0})}.$$

Let us estimate the remainder term

$$r_{s+1}(B) ={ \mathit{e}}^{n(P-E_{0})} {_\ast}{\mathit{e}}^{n\mu } {_\ast}\sum\limits_{j=s+1}^{\infty }\Big{(} \frac{1} {n}{\Big{)}}^{j}A_{ j}(n\mu )(B).$$

From (11) it follows that

$${\mu }^{{_\ast}l} = {(-1)}^{l}{(P - E_{ 0})}^{{_\ast}2l} {_\ast} E{(E_{ 0} - P)}^{{_\ast}z_{l} }$$

and

$$\begin{array}{rcl} r_{s+1}(B)& =& -\Big{(} \frac{1} {n}{\Big{)}}^{s+1}{\big{(}n{(P - E_{0})}^{{_\ast}2}\big{)}}^{{_\ast}(s+2)} {_\ast} E{(E_{ 0} - P)}^{{_\ast}z_{s+2} } {_\ast}{\mathit{e}}^{n(P-E_{0})} {_\ast} \\ & &{_\ast}\exp \big\{ - n{(P - E_{0})}^{{_\ast}2} {_\ast} E{(E_{ 0} - P)}^{{_\ast}\xi _{1} }\big\} {_\ast}\sum\limits_{r=0}^{\infty }{(-\mu )}^{{_\ast}r} {_\ast}\sum\limits_{l=0}^{r+s}q_{ r+s+1,l}{(n\mu )}^{{_\ast}l}(B).\end{array}$$

Theorem 7.

Suppose that \(\Vert {(P - E_{0})}^{{_\ast}2}\Vert < \frac{4} {1+{\mathit{e}}^{2}}\) . Then, for all Borel sets \(B \in {\mathfrak{M}}^{k}\) ,

$$\vert r_{s+1}(B)\vert \leq \Delta (B) \cdot L,$$

where

$$\begin{array}{rcl} \Delta (B)& =& \sup _{\vec{x}}\Big{\vert }\big{(}n{(P - E_{0})}^{{_\ast}2}{\big{)}}^{{_\ast}(s+2)} {_\ast} E{(E_{ 0} - P)}^{{_\ast}z_{s+2} } {_\ast} \\ & &{_\ast}\exp \big\{n(P - E_{0}) {_\ast}\big{(} E_{0} - (P - E_{0}) {_\ast} E{(E_{0} - P)}^{{_\ast}\xi _{1} }\big{)}\big\}(B -\vec{ x})\Big{\vert }, \\ \end{array}$$

\(\varrho =\Vert \mu \Vert\) , and

$$L = \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{1} {2(1 - \varrho )}\Big{(} \frac{1} {n}{\Big{)}}^{s+1}\quad &\text{ if}\ n\varrho < 1, \\ \frac{1} {2}\Big{(} \frac{1} {n}{\Big{)}}^{s} \frac{1} {n - 1} \quad &\text{ if}\ n\varrho = 1, \\ \frac{1} {2} \frac{{\varrho }^{s+1}} {1 - n{\varrho }^{2}} \quad &\text{ if}\ 1 < n\varrho < \sqrt{n}. \end{array} \right.$$

The theorem follows from inequalities (7) and (8).

5 Asymptotic Bergström Expansion

For any \(k\)-dimensional probability distributions P and Q,

$${P}^{{_\ast}n} = \sum\limits_{\nu =0}^{s}C_{ n}^{\nu }{Q}^{{_\ast}(n-\nu )} {_\ast} {(P - Q)}^{{_\ast}\nu } + r_{ n}^{(s+1)}$$

(the Bergström identity). Here, for \(s + 1 < n\),

$$r_{n}^{(s+1)} = \sum\limits_{m=s+1}^{n}C_{ m-1}^{s}{P}^{{_\ast}(n-m)} {_\ast} {(P - Q)}^{{_\ast}(s+1)} {_\ast} {Q}^{{_\ast}(m-s-1)}.$$

Let \(\Theta \) be a negative hypergeometric random variable taking the natural values \(m = s + 1,s + 2,\ldots ,n\) with probabilities

$$P\{\Theta = m\} = \frac{C_{m-1}^{s}} {C_{n}^{s+1}}.$$

Then we can rewrite the remainder term as

$$r_{n}^{(s+1)} = C_{ n}^{s+1}{(P - Q)}^{{_\ast}(s+1)} {_\ast} E({P}^{{_\ast}(n-\Theta )} {_\ast} {Q}^{{_\ast}(\Theta -s-1)}),$$

where

$$E({P}^{{_\ast}(n-\Theta )} {_\ast} {Q}^{{_\ast}(\Theta -s-1)}) = \sum\limits_{m=s+1}^{n}P\{\Theta = m\}{P}^{{_\ast}(n-m)} {_\ast} {Q}^{{_\ast}(m-s-1)}.$$
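Since all the objects are finite here, the Bergström identity and the negative hypergeometric form of the remainder can be checked exactly. The sketch below (an added illustration; P, Q, n, s are arbitrary choices) does so in rational arithmetic for two laws on {0, 1}.

```python
# Exact finite check of the Bergstrom identity and of the negative
# hypergeometric representation of the remainder r_n^{(s+1)}.
from fractions import Fraction
from math import comb

def conv(a, b):
    c = {}
    for x, wa in a.items():
        for y, wb in b.items():
            c[x + y] = c.get(x + y, Fraction(0)) + wa * wb
    return c

def add(a, b, s=Fraction(1)):
    c = dict(a)
    for x, w in b.items():
        c[x] = c.get(x, Fraction(0)) + s * w
    return c

def power(a, m):
    out = {0: Fraction(1)}
    for _ in range(m):
        out = conv(out, a)
    return out

P = {0: Fraction(2, 5), 1: Fraction(3, 5)}
Q = {0: Fraction(1, 2), 1: Fraction(1, 2)}
D = add(P, Q, Fraction(-1))              # P - Q
n, s = 6, 2

# main part: sum_{nu=0}^{s} C_n^nu Q^{*(n-nu)} * (P-Q)^{*nu}
main = {0: Fraction(0)}
for nu in range(s + 1):
    main = add(main, conv(power(Q, n - nu), power(D, nu)), Fraction(comb(n, nu)))

# remainder via the negative hypergeometric weights P{Theta = m}
weights = [Fraction(comb(m - 1, s), comb(n, s + 1)) for m in range(s + 1, n + 1)]
assert sum(weights) == 1                 # the weights form a probability law
mean_part = {0: Fraction(0)}             # E(P^{*(n-Theta)} * Q^{*(Theta-s-1)})
for w, m in zip(weights, range(s + 1, n + 1)):
    mean_part = add(mean_part, conv(power(P, n - m), power(Q, m - s - 1)), w)
rem = conv(power(D, s + 1), mean_part)
rem = {k: Fraction(comb(n, s + 1)) * v for k, v in rem.items()}

total = add(main, rem)
lhs = power(P, n)
assert all(total.get(k, Fraction(0)) == lhs.get(k, Fraction(0))
           for k in set(total) | set(lhs))
print("Bergstrom identity holds exactly for n = 6, s = 2")
```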

Lemma 3.

Suppose that P and Q have finite jth-order absolute moments and that

$$\int\limits_{{R}^{k}}{(\vec{t},\vec{x})}^{r}d(P - Q)(\vec{x}) = 0$$

for \(r = 1,2,\ldots ,j\) and \(\vec{t} \in {R}^{k}\) . Then

$$\int\limits_{{R}^{k}}{(\vec{t},\vec{x})}^{l}d{(P - Q)}^{{_\ast}m}(\vec{x}) = 0$$

for \(l = 0,1,\ldots ,(j + 1)m - 1\) and \(\vec{t} \in {R}^{k}\) .

Remark.

If the first two moments of P and Q coincide, then

$$\int\limits_{{R}^{k}}{(\vec{t},\vec{x})}^{l}d{(P - Q)}^{{_\ast}m}(\vec{x}) = 0$$

for \(l = 0,1,\ldots ,3m - 1\).

The lemma is proved by using characteristic functions and the Faà di Bruno formula, which can be found, e.g., in [8].
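For k = 1 the conclusion of Lemma 3 is easy to test numerically. The following sketch (an added illustration, not from the original text) takes lattice measures P and Q with equal means, so j = 1 and the moments of \({(P - Q)}^{{_\ast}m}\) of orders \(0,1,\ldots ,2m - 1\) must vanish.

```python
# Illustration of Lemma 3 for k = 1 with matching means (j = 1):
# moments of (P - Q)^{*m} vanish up to order 2m - 1, and generally
# not beyond. Exact rational arithmetic on lattice measures.
from fractions import Fraction

def conv(a, b):
    c = {}
    for x, wa in a.items():
        for y, wb in b.items():
            c[x + y] = c.get(x + y, Fraction(0)) + wa * wb
    return c

def moment(a, l):
    return sum(w * Fraction(x) ** l for x, w in a.items())

# equal means, different higher moments
P = {0: Fraction(1, 2), 2: Fraction(1, 2)}                      # mean 1
Q = {0: Fraction(1, 3), 1: Fraction(1, 3), 2: Fraction(1, 3)}   # mean 1
D = {x: P.get(x, Fraction(0)) - Q.get(x, Fraction(0)) for x in set(P) | set(Q)}

Dm = {0: Fraction(1)}
for m in range(1, 4):
    Dm = conv(Dm, D)
    # moments 0, 1, ..., 2m-1 of (P - Q)^{*m} vanish ...
    assert all(moment(Dm, l) == 0 for l in range(2 * m))
    # ... and the moment of order 2m does not (second moments of P, Q differ)
    assert moment(Dm, 2 * m) != 0
print("Lemma 3 verified for m = 1, 2, 3 with j = 1")
```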

Since

$$C_{n}^{\nu } = \frac{{n}^{\nu }} {\nu !} \bigg{(}1 + \sum\limits_{j=1}^{\nu -1}\Big{(} - \frac{1} {n}{\Big{)}}^{j}C_{ \nu }^{(j)}\bigg{)},$$

where \(C_{\nu }^{(j)}\) is the Stirling number of the first kind, \(C_{\nu }^{(0)} = 1\), and

$$C_{\nu }^{(j)} = \nu (\nu - 1)\cdots (\nu - j)\sum\limits_{l=0}^{j-1}q_{ jl}(\nu - j - 1)\cdots (\nu - j - l),$$

we have, for \(1 \leq s < n\),

$$\begin{array}{rcl} A_{n}^{(s)}(B)& =& {Q}^{{_\ast}n}(B) + \sum\limits_{\nu =1}^{s}C_{ n}^{\nu }{Q}^{{_\ast}(n-\nu )} {_\ast} {(P - Q)}^{{_\ast}\nu }(B) = \\ & =& {Q}^{{_\ast}n}(B) + \sum\limits_{j=0}^{s-1}\Big{(} \frac{1} {n}{\Big{)}}^{j} \sum\limits_{\nu =j+1}^{s}{(-1)}^{j} \frac{1} {\nu !}C_{\nu }^{(j)}{Q}^{{_\ast}(n-\nu )} {_\ast} {(n(P - Q))}^{{_\ast}\nu }(B).\end{array}$$

Now, the Bergström identity becomes

$$\begin{array}{rcl}{ P}^{{_\ast}n}(B)& =& {Q}^{{_\ast}n}(B) + \sum\limits_{j=0}^{s-1}\Big{(} \frac{1} {n}{\Big{)}}^{j} \sum\limits_{\nu =j+1}^{s}\frac{{(-1)}^{j}} {\nu !} C_{\nu }^{(j)}{Q}^{{_\ast}(n-\nu )} {_\ast} {(n(P - Q))}^{{_\ast}\nu }(B) + \\ & & +C_{n}^{s+1}{(P - Q)}^{{_\ast}(s+1)} {_\ast} E({P}^{{_\ast}(n-\Theta )} {_\ast} {Q}^{{_\ast}(\Theta -s-1)})(B). \end{array}$$
(13)

Let us now consider the case where Q(B) is the normal k-dimensional distribution \(\Phi (B) = P\{\boldsymbol \xi \in B\}\), \(\boldsymbol \xi \sim N_{k}(\vec{0},\Sigma )\), where Σ is a nondegenerate matrix of second moments.

Suppose that the expectation vectors and second-moment matrices of P(B) and \(\Phi (B)\) coincide.

Theorem 8.

Suppose that the probability distribution \(P(B) = P\{\boldsymbol \eta \in B\}\) has finite absolute moments of order \(2 + \delta \) with \(0 < \delta \leq 1\) . Then there exists a constant C, depending only on k, s, and δ, such that

$$\sup _{B\in {\mathfrak{M}}^{k}}\big{\vert }{\Phi }^{{_\ast}(n-\nu )} {_\ast}\big{(}n(P - \Phi ){\big{)}}^{{_\ast}\nu }(B\sqrt{n})\big{\vert }\leq \bigg{(}\frac{CE\big{[}{(\boldsymbol {\eta }^{T}{\Sigma }^{-1}\boldsymbol \eta )}^{\frac{2+\delta } {2} }\big{]}} {{n}^{\delta /2}}{ \bigg{)}}^{\nu }$$

for \(1 \leq \nu \leq s\) , where

$$E{(\boldsymbol {\eta }^{T}{\Sigma }^{-1}\boldsymbol \eta )}^{\frac{2+\delta } {2} } = \int\limits_{{R}^{k}}{(\vec{{x}}^{T}{\Sigma }^{-1}\vec{x})}^{\frac{2+\delta } {2} }dP(\vec{x}),$$

and \(\boldsymbol {\eta }^{T}\) is the transpose of the vector \(\boldsymbol \eta \) .

Theorem 8 is proved in [4]. H. Bergström proved (see [1]) that

$$\sup _{B\in {\mathfrak{M}}^{k}}\big{\vert }{\Phi }^{{_\ast}(n-\nu )} {_\ast} {(n(P - \Phi ))}^{{_\ast}\nu }(B\sqrt{n})\big{\vert } = O\bigg{(}\frac{{(\ln n)}^{k/2}} {{n}^{\delta /2}}{ \bigg{)}}^{\nu }.$$

We will estimate the remainder term \(r_{n}^{(s+1)}(B)\) for all convex Borel sets \(B \in {\mathfrak{N}}^{k}\).

Theorem 9.

Suppose that the assumptions of Theorem  8 are satisfied and that the characteristic function of the random vector \(\boldsymbol \eta _{1}\) satisfies the Cramér condition  (C):

$$\overline{\lim }_{\Vert \vec{t}\Vert \rightarrow \infty }\vert E{\mathit{e}}^{i(\vec{t},\boldsymbol \eta _{1})}\vert < 1.$$

Then

$$\sup _{B\in {\mathfrak{N}}^{k}}\vert r_{n}^{(s+1)}(B)\vert = o({n}^{-(\delta /2)s}).$$

The theorem is proved in [4].

In the one-dimensional case (k = 1), Bergström [1] proved that the Chebyshev–Cramér asymptotic expansion follows from his asymptotic expansion.

For k > 1 and \(Q(B) = \Phi (B)\), from (13) it follows that

$$\begin{array}{rcl}{ P}^{{_\ast}n}(B\sqrt{n})& =& \Phi (B)\! +\! \sum\limits_{j=0}^{s-1}\Big{(} \frac{1} {n}{\Big{)}}^{j}\!\!\! \sum\limits_{\nu =j+1}^{s}\frac{{(-1)}^{j}} {\nu !} C_{\nu }^{(j)}{\Phi }^{{_\ast}(n-\nu )} {_\ast} {(n(P - \Phi ))}^{{_\ast}\nu }(B\sqrt{n}) \\ & & +C_{n}^{s+1}{(P - \Phi )}^{{_\ast}(s+1)} {_\ast} E\big{(}{P}^{{_\ast}(n-\Theta )} {_\ast} {\Phi }^{{_\ast}(\Theta -s-1)}\big{)}(B\sqrt{n}). \end{array}$$
(14)

The formal asymptotic expansion of the density \(p_{\nu }(\vec{y})\) of the convolution

$${\Phi }^{{_\ast}(n-\nu )} {_\ast} {(n(P - \Phi ))}^{{_\ast}\nu }(B\sqrt{n}) = \int\limits_{B}p_{\nu }(\vec{y})d\vec{y}$$

is

$$\begin{array}{rcl} p_{\nu }(\vec{y})& \approx & \sum\limits_{m=0}^{\infty }\sum\limits_{l=0}^{\infty }\Big{(} \frac{1} {\sqrt{n}}{\Big{)}}^{l+\nu +2m} \frac{{(-\nu )}^{m}} {m!(3\nu + l)!}\, \frac{{\partial }^{m+3\nu +l}} {\partial {\epsilon }^{m}\partial {\varrho }^{3\nu +l}} \times \\ & &\times \Bigg{[}\int\limits_{x\in {R}^{k}}\bigg{(} \frac{1} {\sqrt{(1 + \epsilon )2\pi }}{\bigg{)}}^{k} \frac{1} {\sqrt{\vert \Sigma \vert }}\times \\ & &\times \exp \bigg\{ - \frac{1} {2(1 + \epsilon )}{(\vec{y} -\vec{ x}\varrho )}^{T}{\Sigma }^{-1}(\vec{y} -\vec{ x}\varrho )\bigg\}d{(P - \Phi )}^{{_\ast}\nu }(\vec{x})\Bigg{]}_{\big{ \vert }\begin{array}{c} \epsilon = 0 \\ \varrho = 0 \end{array} }\end{array}$$
(15)

for \(1 \leq \nu \leq s\), where \(\vert \Sigma \vert \) denotes the determinant of the matrix Σ.

Let \(\boldsymbol \xi _{\epsilon } \sim N_{k}(\vec{0},(1 + \epsilon )\Sigma )\) be a k-dimensional normal random vector. If \(\epsilon = 0\), then

$$\boldsymbol \xi _{0} =\boldsymbol \xi \sim N_{k}(\vec{0},\Sigma ).$$

From (14) and (15) we get the following formal expansion of the convolution \({P}^{{_\ast}n}(B\sqrt{n})\):

$$\begin{array}{rcl}{ P}^{{_\ast}n}(B\sqrt{n})& \approx & \Phi (B) + \sum\limits_{j=0}^{s-1}\Big{(} \frac{1} {n}{\Big{)}}^{j} \sum\limits_{\nu =j+1}^{s}\frac{{(-1)}^{j}} {\nu !} C_{\nu }^{(j)} \sum\limits_{m=0}^{\infty }\sum\limits_{l=0}^{\infty }\Big{(} \frac{1} {\sqrt{n}}{\Big{)}}^{l+\nu +2m} \frac{{(-\nu )}^{m}} {m!(3\nu + l)!} \cdot \\ & &\cdot \frac{{\partial }^{m+3\nu +l}} {\partial {\epsilon }^{m}\partial {\varrho }^{3\nu +l}}\Bigg{[}\int\limits_{\vec{x}\in {R}^{k}}P\{\boldsymbol \xi _{\epsilon } +\vec{ x}\varrho \in B\}d{(P - \Phi )}^{{_\ast}\nu }(\vec{x})\Bigg{]}_{\big{ \vert }\begin{array}{c} \epsilon = 0 \\ \varrho = 0 \end{array} } + \cdots = \\ & & = P\{\boldsymbol \xi \in B\} + \sum\limits_{r=1}^{\infty }\Big{(} \frac{1} {\sqrt{n}}{\Big{)}}^{r}{ \sum\limits_{j=0}^{s-1} \sum\limits_{\nu =j+1}^{s} \sum\limits_{m=0}^{\infty }\sum\limits_{l=0}^{\infty } \atop 2j + l + \nu + 2m = r} \frac{{(-1)}^{j+m}{\nu }^{m}} {\nu !m!(3\nu + l)!}C_{\nu }^{(j)} \cdot \\ & &\cdot \frac{{\partial }^{m+3\nu +l}} {\partial {\epsilon }^{m}\partial {\varrho }^{3\nu +l}}\Bigg{[}\int\limits_{\vec{x}\in {R}^{k}}P\{\boldsymbol \xi _{\epsilon } +\vec{ x}\varrho \in B\}d{(P - \Phi )}^{{_\ast}\nu }(\vec{x})\Bigg{]}_{\big{ \vert }\begin{array}{c} \epsilon = 0 \\ \varrho = 0 \end{array} } + \cdots \,, \\ \end{array}$$

where

$$\begin{array}{rcl} P\{\boldsymbol \xi _{\epsilon } +\vec{ x}\varrho <\vec{ z}\}& =& \bigg{(} \frac{1} {\sqrt{2\pi (1 + \epsilon )}}{\bigg{)}}^{k} \frac{1} {\sqrt{\vert \Sigma \vert }} \cdot \\ & &\cdot \int\limits_{\vec{y}<\vec{z}}\exp \bigg\{ - \frac{1} {2(1 + \epsilon )}{(\vec{y} -\vec{ x}\varrho )}^{T}{\Sigma }^{-1}(\vec{y} -\vec{ x}\varrho )\bigg\}d\vec{y}.\end{array}$$

The formal expansions are obtained by means of the characteristic functions.

6 Expansion of a Convolution by χ2-Distributions

Let \(\boldsymbol \xi _{\mu } \sim N_{k}(\boldsymbol \mu ,\Sigma )\) be a normal k-dimensional random vector with nondegenerate covariance matrix Σ. The random variable

$$\chi _{k}^{2} = {(\boldsymbol \xi _{ \mu } -\boldsymbol \mu )}^{T}{\Sigma }^{-1}(\boldsymbol \xi _{ \mu } -\boldsymbol \mu )$$

has the \({\chi }^{2}\)-distribution with k degrees of freedom, and the random variable

$$\chi _{k}^{2}(\delta ) = {(\boldsymbol \xi _{ \mu } -\boldsymbol \nu )}^{T}{\Sigma }^{-1}(\boldsymbol \xi _{ \mu } -\boldsymbol \nu )$$

has the noncentral χ2-distribution with k degrees of freedom and noncentrality parameter

$$\delta = {(\boldsymbol \mu -\boldsymbol \nu )}^{T}{\Sigma }^{-1}(\boldsymbol \mu -\boldsymbol \nu ).$$

The distribution function of \(\chi _{k}^{2}(\delta )\) is

$$P\{\chi _{k}^{2}(\delta ) < x\} = \sum\limits_{j=0}^{\infty }\bigg{[}\frac{{(\delta /2)}^{j}} {j!}{ \mathit{e}}^{-\delta /2}\bigg{]}\,P\{\chi _{ k+2j}^{2} < x\}$$
(16)

(see [7, 9]).
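Representation (16) can be checked against simulation. The sketch below (an added illustration, not from the original text) takes k = 2, where the central \({\chi }^{2}\) distribution function with an even number of degrees of freedom has an elementary closed form, and compares the series with a seeded Monte Carlo estimate; the values of δ and x are arbitrary choices.

```python
# Monte Carlo sanity check of the series (16) for k = 2. For an even number
# 2*nu of degrees of freedom, P{chi^2_{2 nu} < x} = 1 - e^{-x/2} *
# sum_{i < nu} (x/2)^i / i!.
import math, random

def chi2_cdf_even(dof, x):               # dof = 2, 4, 6, ...
    nu = dof // 2
    return 1.0 - math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i)
                                        for i in range(nu))

def noncentral_cdf_series(k, delta, x, terms=60):
    # right-hand side of (16), truncated
    return sum((delta / 2) ** j / math.factorial(j) * math.exp(-delta / 2)
               * chi2_cdf_even(k + 2 * j, x) for j in range(terms))

k, delta, x = 2, 1.5, 3.0
series = noncentral_cdf_series(k, delta, x)

# simulate (xi_mu - nu)^T Sigma^{-1} (xi_mu - nu) with Sigma = I and the
# noncentrality delta placed along the first axis
random.seed(7)
N = 200_000
a = math.sqrt(delta)
hits = sum((random.gauss(0, 1) + a) ** 2 + random.gauss(0, 1) ** 2 < x
           for _ in range(N))
mc = hits / N
assert abs(series - mc) < 0.015
print(f"series (16): {series:.4f}, Monte Carlo: {mc:.4f}")
```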

Let

$$\vec{S}_{n} = \frac{1} {\sqrt{n}}\sum\limits_{j=1}^{n}\boldsymbol \eta _{ j}$$

be the sum of i.i.d. k-dimensional vectors \(\boldsymbol \eta _{1},\ldots ,\boldsymbol \eta _{n},\ldots \) with zero mean vector \(\vec{0}\ \in \ {R}^{k}\) and nondegenerate covariance matrix \(\Sigma \). Let \(\boldsymbol \xi \sim N_{k}(\vec{0},\Sigma )\) and

$$A_{x} =\{\vec{ y} \in {R}^{k} :\ \vec{ {y}}^{T}{\Sigma }^{-1}\vec{y} < x\},\quad x > 0.$$

We are interested in an asymptotic expansion of

$$P\{\vec{S}_{n} \in A_{x}\} = P\{\vec{S}_{n}^{T}{\Sigma }^{-1}\vec{S}_{ n} < x\} = {P}^{{_\ast}n}(\sqrt{n}A_{ x}),$$

where \(P(\sqrt{n}A_{x}) = P\big\{ \frac{\boldsymbol \eta _{1}} {\sqrt{n}} \in A_{x}\big\}\), i.e., the difference

$$P\{\vec{S}_{n}^{T}{\Sigma }^{-1}\vec{S}_{ n} < x\} - P\{\boldsymbol {\xi }^{T}{\Sigma }^{-1}\boldsymbol \xi < x\} = {P}^{{_\ast}n}(\sqrt{n}A_{ x}) - P\{\chi _{k}^{2} < x\}$$
(17)

for x > 0.

Denote by \(\widehat{P}(\vec{t})\) and \(\widehat{\Phi }(\vec{t})\) the characteristic functions of the vectors \(\boldsymbol \eta _{1}\) and \(\boldsymbol \xi \). From the Bergström identity (13) it follows that

$$\begin{array}{rcl} \bigg{(}\widehat{P}\bigg{(} \frac{\vec{t}} {\sqrt{n}}\bigg{)}{\bigg{)}}^{n}& =& \widehat{\Phi }(\vec{t}) + \sum\limits_{j=0}^{s-1}\Big{(} \frac{1} {n}{\Big{)}}^{j} \sum\limits_{\nu =j+1}^{s}\frac{{(-1)}^{j}} {\nu !} C_{\nu }^{(j)} \cdot \\ & &\cdot \bigg{(}\widehat{\Phi }\Big{(} \frac{\vec{t}} {\sqrt{n}}\Big{)}{\bigg{)}}^{n-\nu }\bigg{(}\bigg{(}\widehat{P}\Big{(} \frac{\vec{t}} {\sqrt{n}}\Big{)} -\widehat{\Phi }\Big{(} \frac{\vec{t}} {\sqrt{n}}\Big{)}\bigg{)}n{\bigg{)}}^{\nu } + \\ & & +C_{n}^{s+1}\bigg{(}\widehat{P}\Big{(} \frac{\vec{t}} {\sqrt{n}}\Big{)} -\widehat{ \Phi }\Big{(} \frac{\vec{t}} {\sqrt{n}}\Big{)}{\bigg{)}}^{s+1} \cdot \\ & &\cdot E\bigg{[}\bigg{(}\widehat{P}\Big{(} \frac{\vec{t}} {\sqrt{n}}\Big{)}{\bigg{)}}^{n-\Theta }\bigg{(}\widehat{\Phi }\Big{(} \frac{\vec{t}} {\sqrt{n}}\Big{)}{\bigg{)}}^{\Theta -s-1}\bigg{]}, \end{array}$$
(18)

where

$$\begin{array}{rcl} & & \bigg{(}\widehat{\Phi }\Big{(} \frac{\vec{t}} {\sqrt{n}}\Big{)}{\bigg{)}}^{n-\nu }\bigg{(}n\bigg{(}\widehat{P}\Big{(} \frac{\vec{t}} {\sqrt{n}}\Big{)} -\widehat{ \Phi }\Big{(} \frac{\vec{t}} {\sqrt{n}}\Big{)}\bigg{)}{\bigg{)}}^{\nu } = \\ & & = \int\limits_{\vec{x}\in {R}^{k}}{\mathit{e}}^{i(\vec{t}/\sqrt{n},\vec{x})}\bigg{(}\widehat{\Phi }\Big{(} \frac{\vec{t}} {\sqrt{n}}\Big{)}{\bigg{)}}^{n-\nu }d{(n(P - \Phi ))}^{{_\ast}\nu }(\vec{x}).\end{array}$$

The Fourier transform is

$$\begin{array}{rcl} p_{\nu }(\vec{y})& =& \Big{(} \frac{1} {2\pi }{\Big{)}}^{k} \int\limits_{\vec{t}\in {R}^{k}}{\mathit{e}}^{-i(\vec{t},\vec{y})}\bigg{(}\widehat{\Phi }\Big{(} \frac{\vec{t}} {\sqrt{n}}\Big{)}{\bigg{)}}^{n-\nu }\bigg{(}n\bigg{(}\widehat{P}\Big{(} \frac{\vec{t}} {\sqrt{n}}\Big{)} -\widehat{ \Phi }\Big{(} \frac{\vec{t}} {\sqrt{n}}\Big{)}\bigg{)}{\bigg{)}}^{\nu }d\vec{t} = \\ & =& \int\limits_{\vec{x}\in {R}^{k}}\Big{(} \frac{1} {2\pi }{\Big{)}}^{k} \int\limits_{\vec{t}\in {R}^{k}}{\mathit{e}}^{-i(\vec{t},\vec{y}-\vec{x}/\sqrt{n})}\exp \Big\{ -\frac{1} {2} \frac{n - \nu } {n} \vec{{t}}^{T}\Sigma \vec{t}\Big\}\,d\vec{t}\,d{(n(P - \Phi ))}^{{_\ast}\nu }(\vec{x}).\end{array}$$

By the change of variables \(\vec{v} = \sqrt{\frac{n - \nu } {n}} \vec{t}\) we obtain

$$\begin{array}{rcl} p_{\nu }(\vec{y})& =& \int\limits_{\vec{x}\in {R}^{k}}\bigg{(}\sqrt{ \frac{n} {n - \nu }} \frac{1} {2\pi }{\bigg{)}}^{k} \int\limits_{\vec{v}\in {R}^{k}}\exp \bigg\{ - i\Big{(}\vec{v},\sqrt{ \frac{n} {n - \nu }}\vec{y} - \frac{\vec{x}} {\sqrt{n - \nu }}\Big{)}\bigg\} \cdot \\ & &\qquad \quad \cdot \exp \Big\{ -\frac{1} {2}\vec{{v}}^{T}\Sigma \vec{v}\Big\}d\vec{v}\,d{(n(P - \Phi ))}^{{_\ast}\nu }(\vec{x}), \end{array}$$
(19)

where

$$\begin{array}{rcl} & & \bigg{(}\sqrt{ \frac{n} {n - \nu }} \frac{1} {2\pi }{\bigg{)}}^{k}\!\! \int\limits_{\vec{v}\in {R}^{k}}\!\!\exp \bigg\{ - i\Big{(}\vec{v},\sqrt{ \frac{n} {n - \nu }}\vec{y} - \frac{\vec{x}} {\sqrt{n - \nu }}\Big{)}\bigg\}\exp \Big\{ -\frac{1} {2}\vec{{v}}^{T}\Sigma \vec{v}\Big\}d\vec{v} = \\ & & = \bigg{(}\sqrt{ \frac{n} {n - \nu }}{\bigg{)}}^{k}\bigg{(} \frac{1} {\sqrt{2\pi }}{\bigg{)}}^{k} \frac{1} {\sqrt{\vert \Sigma \vert }} \\ & &\exp \bigg\{ -\frac{1} {2}\bigg{(}\vec{y}\sqrt{ \frac{n} {n - \nu }} - \frac{\vec{x}} {\sqrt{n - \nu }}{\bigg{)}}^{T}{\Sigma }^{-1}\bigg{(}\vec{y}\sqrt{ \frac{n} {n - \nu }} - \frac{\vec{x}} {\sqrt{n - \nu }}\bigg{)}\bigg\}. \end{array}$$
(20)
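The Gaussian inversion formula in (20) can be verified numerically in the one-dimensional case \(k = 1\), where \(\Sigma \) reduces to a scalar variance; the values of the variance and the shifted argument below are illustrative only:

```python
# Numerical check (k = 1) of the inversion formula used in (20):
# (2*pi)^{-1} * Int e^{-i t a} e^{-sigma2 t^2 / 2} dt
#     = (2*pi*sigma2)^{-1/2} * e^{-a^2 / (2*sigma2)}.
import numpy as np

sigma2 = 1.5   # plays the role of Sigma for k = 1 (illustrative value)
a = 0.8        # plays the role of y*sqrt(n/(n-nu)) - x/sqrt(n-nu)

t = np.linspace(-40.0, 40.0, 400001)
dt = t[1] - t[0]
integrand = np.exp(-1j * t * a) * np.exp(-0.5 * sigma2 * t**2)
# The integrand is negligible at the endpoints, so a Riemann sum is accurate.
lhs = float(np.real(np.sum(integrand)) * dt / (2.0 * np.pi))

rhs = float(np.exp(-0.5 * a**2 / sigma2) / np.sqrt(2.0 * np.pi * sigma2))

assert abs(lhs - rhs) < 1e-8
```

In \(k\) dimensions the same computation, after diagonalizing \(\Sigma \), produces the factor \({(2\pi )}^{-k/2}\vert \Sigma {\vert }^{-1/2}\) appearing in (20).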

From (17)–(20), after the change of variables \(\vec{u} =\vec{ y}\sqrt{ \frac{n} {n-\nu }} - \frac{\vec{x}} {\sqrt{n-\nu }}\), it follows that

$$\begin{array}{rcl} & & \int\limits_{\vec{{y}}^{T}{\Sigma }^{-1}\vec{y}<x}p_{\nu }(\vec{y})\,d\vec{y} = \int\limits_{\vec{x}\in {R}^{k}}\bigg{(} \frac{1} {\sqrt{2\pi }}{\bigg{)}}^{k} \frac{1} {\sqrt{\vert \Sigma \vert }} \\ & &\int\limits_{{(\vec{u}+ \frac{\vec{x}} {\sqrt{n-\nu }})}^{T}{\Sigma }^{-1}(\vec{u}+ \frac{\vec{x}} {\sqrt{n-\nu }})<x \frac{n} {n-\nu } }{\mathit{e}}^{-1/2\vec{{u}}^{T}{\Sigma }^{-1}\vec{u} }d\vec{u}\,d{(n(P - \Phi ))}^{{_\ast}\nu }(\vec{x}) = \\ & & = \int\limits_{\vec{x}\in {R}^{k}}\!\!P\bigg\{\bigg{(}\boldsymbol \xi \! +\! \frac{\vec{x}} {\sqrt{n - \nu }}{\bigg{)}}^{T}\!\!{\Sigma }^{-1}\Big{(}\boldsymbol \xi + \frac{\vec{x}} {\sqrt{n - \nu }}\Big{)}\! <\! x \frac{n} {n - \nu }\bigg\}d{(n(P - \Phi ))}^{{_\ast}\nu }(\vec{x}) = \\ & & = \int\limits_{\vec{x}\in {R}^{k}}P\bigg\{\chi _{k}^{2}(\delta (\vec{x})) < x \frac{n} {n - \nu }\bigg\}d{(n(P - \Phi ))}^{{_\ast}\nu }(\vec{x}), \\ \end{array}$$

where

$$\delta (\vec{x}) = \frac{1} {n - \nu }\vec{{x}}^{T}{\Sigma }^{-1}\vec{x}$$

is the noncentrality parameter of the \(\chi _{k}^{2}(\delta (\vec{x}))\)-distribution. From (16) it follows that

$$P\Big\{\chi _{k}^{2}(\delta (\vec{x})) < x \frac{n} {n - \nu }\Big\} = \sum\limits_{j=0}^{\infty }\Big{(} \frac{1} {n - \nu }{\Big{)}}^{j}\frac{{(\frac{1} {2}\vec{{x}}^{T}{\Sigma }^{-1}\vec{x})}^{j}} {j!}{ \mathit{e}}^{-\frac{1} {2} \frac{1} {n-\nu }\vec{{x}}^{T}{\Sigma }^{-1}\vec{x} }P\Big\{\chi _{k+2j}^{2} < x \frac{n} {n - \nu }\Big\}.$$
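This Poisson-mixture representation of the noncentral \(\chi ^{2}\) distribution (a central \(\chi _{k+2j}^{2}\) term weighted by Poisson probabilities with mean \(\delta /2\)) is easy to verify numerically; the values of \(k\), \(\delta \), and the threshold below are illustrative only:

```python
# Check the Poisson-mixture series for the noncentral chi-square CDF:
# P{chi^2_k(delta) < x} = sum_j e^{-delta/2} (delta/2)^j / j! * P{chi^2_{k+2j} < x}.
import math
from scipy.stats import chi2, ncx2

k, delta, x = 3, 2.4, 5.0  # illustrative parameter values

series = sum(
    math.exp(-delta / 2.0) * (delta / 2.0) ** j / math.factorial(j)
    * chi2.cdf(x, k + 2 * j)
    for j in range(60)  # the Poisson tail beyond j = 60 is negligible here
)

assert abs(series - ncx2.cdf(x, k, delta)) < 1e-9
```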

Now the asymptotic Bergström expansion takes the form

$$\begin{array}{rcl} & & P\{\boldsymbol {\xi }^{T}{\Sigma }^{-1}\boldsymbol \xi < x\} + \sum\limits_{j=0}^{s-1}\Big{(} - \frac{1} {n}{\Big{)}}^{j} \sum\limits_{\nu =j+1}^{s} \frac{1} {\nu !}C_{\nu }^{(j)} \\ & & \int\limits_{\vec{x}\in {R}^{k}}P\Big\{\chi _{k}^{2}(\delta (\vec{x})) < x \frac{n} {n - \nu }\Big\}d{(n(P - \Phi ))}^{{_\ast}\nu }(\vec{x}).\end{array}$$
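The key reduction used in this derivation, namely that a shifted Gaussian quadratic form has a noncentral \(\chi ^{2}\) distribution with noncentrality \(\delta =\vec{ m}^{T}{\Sigma }^{-1}\vec{m}\), can be checked by Monte Carlo simulation; the covariance matrix and shift vector below are hypothetical illustration values:

```python
# Monte Carlo check: for xi ~ N(0, Sigma) and a fixed shift m,
# P{(xi + m)^T Sigma^{-1} (xi + m) < c} = P{chi^2_k(m^T Sigma^{-1} m) < c}.
import numpy as np
from scipy.stats import ncx2

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])  # illustrative covariance
m = np.array([0.7, -0.3])                   # illustrative shift
c, k, N = 3.0, 2, 200000

Sinv = np.linalg.inv(Sigma)
xi = rng.multivariate_normal(np.zeros(k), Sigma, size=N)
q = np.einsum('ni,ij,nj->n', xi + m, Sinv, xi + m)  # the quadratic form
mc = np.mean(q < c)

delta = m @ Sinv @ m                # noncentrality parameter
exact = ncx2.cdf(c, k, delta)

assert abs(mc - exact) < 0.01       # within Monte Carlo error
```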

To estimate the remainder term, we apply Theorem 9. Thus, we have proved the following theorem.

Theorem 10.

Suppose that a random vector \(\boldsymbol \eta _{1}\) has a finite absolute moment of order \(2 + \delta \) for some \(0 < \delta \leq 1\) and that the characteristic function \(\widehat{P}(\vec{t})\) satisfies the Cramér condition

$$\limsup _{\Vert \vec{t}\Vert \rightarrow \infty }\vert \widehat{P}(\vec{t})\vert < 1. $$
(C)

Then

$$\begin{array}{rcl} & & P\{\vec{S}_{n}^{T}{\Sigma }^{-1}\vec{S}_{ n} < x\} = P\{\chi _{k}^{2} < x\} + \\ & & +\sum\limits_{j=0}^{s-1}\Big{(} - \frac{1} {n}{\Big{)}}^{j} \sum\limits_{\nu =j+1}^{s} \frac{1} {\nu !}C_{\nu }^{(j)} \sum\limits_{r=0}^{\infty }\Big{(} \frac{1} {n - \nu }{\Big{)}}^{r}P\Big\{\chi _{ k+2r}^{2} < x \frac{n} {n - \nu }\Big\} \frac{1} {r!} \times \\ & &\times \int\limits_{\vec{x}\in {R}^{k}}\bigg{(}\frac{1} {2}\vec{{x}}^{T}{\Sigma }^{-1}\vec{x}{\bigg{)}}^{r}{\mathit{e}}^{-\frac{1} {2} \frac{1} {n-\nu }\vec{{x}}^{T}{\Sigma }^{-1}\vec{x} }d{(n(P - \Phi ))}^{{_\ast}\nu }(\vec{x}) + o({n}^{-(\delta /2)s}) \\ \end{array}$$

for all x > 0 and s = 1,2,….
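The leading term of Theorem 10, i.e. the convergence \(P\{\vec{S}_{n}^{T}{\Sigma }^{-1}\vec{S}_{n} < x\} \rightarrow P\{\chi _{k}^{2} < x\}\), can be illustrated by simulation. The sketch below uses a hypothetical non-Gaussian distribution (centered exponential coordinates, so that \(\Sigma = I\)) chosen purely for illustration:

```python
# Monte Carlo illustration of the leading term of Theorem 10:
# P{S_n^T Sigma^{-1} S_n < x} -> P{chi^2_k < x}, where
# S_n = (eta_1 + ... + eta_n)/sqrt(n) and Sigma = Cov(eta_1).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
k, n, N = 2, 200, 200000   # dimension, number of summands, replications
x = 3.0

# Each coordinate of eta_1 + ... + eta_n is Gamma(n, 1) minus its mean n,
# since the coordinates are i.i.d. centered Exp(1) variables (Sigma = I).
S = (rng.gamma(n, 1.0, size=(N, k)) - n) / np.sqrt(n)
q = np.sum(S**2, axis=1)   # the quadratic form S_n^T Sigma^{-1} S_n
mc = np.mean(q < x)

assert abs(mc - chi2.cdf(x, k)) < 0.015  # Monte Carlo + O(1/n) tolerance
```

The discrepancy here decays as the expansion predicts; the higher-order terms of Theorem 10 describe precisely the corrections of orders \({n}^{-1},{n}^{-2},\ldots \).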

If, instead of considering the \(\chi _{k}^{2}\) random variable, we change t, F, μ, etc., then we also have to change the definition of the set \(A_{x}\) and to replace the Cramér condition (C) by the Prokhorov [11, 12] condition: there exists \(n_{0}\) such that the convolution \({P}^{{_\ast}n_{0}}(\vec{x})\) has an absolutely continuous component.