1 INTRODUCTION

By definition, the \({{\chi }^{2}}\)-distribution is the distribution of the variable

$$\xi = \lambda \sum\limits_{k = 1}^n {x_{k}^{2}} ,$$
(1)

where \({{x}_{k}}\) are independent standard normal random variables with zero mean and unit variance. The coefficient \(\lambda \) is introduced here for ease of comparison with the results obtained below.

The distribution of the variable \(\xi \) is well known:

$$P(\xi ) = \frac{1}{{2\lambda \Gamma \left( {{n \mathord{\left/ {\vphantom {n 2}} \right. \kern-0em} 2}} \right)}}{{\left( {\frac{\xi }{{2\lambda }}} \right)}^{{\frac{n}{2} - 1}}}{{e}^{{ - \frac{\xi }{{2\lambda }}}}}.$$
(2)

For \(n \geqslant 2\) it has a single maximum at the point \(\xi = \lambda (n - 2)\), and its mean and variance are

$$\left\langle \xi \right\rangle = n\lambda ,\,\,\,\,\left\langle {{{\xi }^{2}}} \right\rangle - {{\left\langle \xi \right\rangle }^{2}} = 2n{{\lambda }^{2}},$$
(3)

respectively. The number \(n\) is usually called the number of degrees of freedom of this distribution.
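The relations (1)–(3) are straightforward to verify numerically. The following minimal sketch (assuming Python with NumPy is available; the values of \(n\), \(\lambda \), and the sample size are arbitrary choices) samples the variable (1) and compares the sample moments with Eq. (3).

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, N = 6, 1.5, 200_000            # degrees of freedom, coefficient lambda, sample size

x = rng.standard_normal((N, n))        # independent standard normal variables x_k
xi = lam * np.sum(x**2, axis=1)        # the variable xi of Eq. (1)

print(xi.mean(), n * lam)              # Eq. (3): <xi> = n*lambda
print(xi.var(), 2 * n * lam**2)        # Eq. (3): <xi^2> - <xi>^2 = 2*n*lambda^2
```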

2 BASIC EXPRESSIONS

In the present paper, we discuss a more general distribution that is the distribution of the weighted sum of random variables

$$\xi = \sum\limits_{k = 1}^n {{{\lambda }_{k}}x_{k}^{2}} ,$$
(4)

where the weights \({{\lambda }_{k}}\) can be positive or negative. Such a distribution appears in many applications (see, for example, [1–4]). Although many authors have examined the distribution of the variable (4), no closed form of the distribution function has been obtained. Various approximations have been proposed instead, for example, series in Laguerre polynomials [5, 6], gamma series expansions [7, 8], and so on. The papers [9, 10] present a comprehensive review of the approximation methods.

It is easy to see that the weights in the sum (4) can be absorbed into the scale of the normal variables: \(\xi = \sum\nolimits_{k = 1}^n {\operatorname{sgn} (\lambda_k)\,(x_k/\sigma_k)^2} \) with \(\sigma_k = |\lambda_k|^{-1/2}\). In other words, \(\xi \) is a sum or difference of squares of zero-mean normal random variables \(x_k/\sigma_k\) whose variance \(|\lambda_k|\) is, in general, not equal to one.

When the number of degrees of freedom is not too large, the distribution of the variable (4) can be obtained as a convolution of distributions with fewer degrees of freedom. In what follows, we will obtain a general expression for the case of a large number of degrees of freedom.

2.1 Convolution of Distributions

Suppose we have two random variables \({{\xi }_{1}} \geqslant 0\) and \({{\xi }_{2}} \geqslant 0\) whose distributions are \({{P}_{1}}({{\xi }_{1}})\) and \({{P}_{2}}({{\xi }_{2}})\), respectively. Then the expression

$$P(\xi ) = \int\limits_0^\infty {d{{\xi }_{1}}\int\limits_0^\infty {d{{\xi }_{2}}} } {{P}_{1}}({{\xi }_{1}}){{P}_{2}}({{\xi }_{2}})\delta \left[ {\xi - \left( {{{\xi }_{1}} \pm {{\xi }_{2}}} \right)} \right]$$
(5)

defines the distribution of the variable \(\xi = {{\xi }_{1}} \pm {{\xi }_{2}}\).

(a) Let us examine the distribution of the sum \(\xi = {{\xi }_{1}} + {{\xi }_{2}}\). In this case, from Eq. (5) we obtain

$$P(\xi ) = \left\{ \begin{gathered} \int\limits_0^\xi {{{P}_{1}}(x){{P}_{2}}(\xi - x)dx\,\,{\text{if}}\,\,\xi \geqslant 0} \hfill \\ 0\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\text{if}}\,\,\xi < 0 \hfill \\ \end{gathered} \right.,$$
(6)

We see that the distribution of the sum \(\xi = {{\xi }_{1}} + {{\xi }_{2}}\) is nonzero only on the half-line \(\xi \geqslant 0\).

(b) Let us examine the distribution of the difference \(\xi = {{\xi }_{1}} - {{\xi }_{2}}\). From Eq. (5) it follows that in this case

$$P(\xi ) = \left\{ {\begin{array}{*{20}{c}} {\int\limits_0^\infty {{{P}_{1}}\left( {x + \xi } \right){{P}_{2}}\left( x \right)dx,\,\,\,\,\xi \geqslant 0} } \\ {\int\limits_0^\infty {{{P}_{1}}\left( x \right){{P}_{2}}\left( {x + \left| \xi \right|} \right)dx,\,\,\,\,\xi < 0} } \end{array}} \right..$$
(7)

We see that the distribution of the variable \(\xi = {{\xi }_{1}} - {{\xi }_{2}}\) is nonzero both for \(\xi \geqslant 0\) and \(\xi \leqslant 0\). Moreover, when \({{P}_{1}}(x) = {{P}_{2}}(x)\) the distribution \(P(\xi )\) is symmetrical about the point \(\xi = 0\).
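The convolution formulas (6) and (7) are easy to evaluate by direct quadrature. The sketch below (assuming Python with NumPy/SciPy; the densities \(P_1\), \(P_2\) are taken as \(\chi^2\)-type densities (2) with \(n = 2\) and arbitrary scales) computes both densities and checks that each integrates to one.

```python
import numpy as np
from scipy.integrate import quad

lam1, lam2 = 1.0, 0.5
P1 = lambda x: np.exp(-x / (2 * lam1)) / (2 * lam1)      # Eq. (2) with n = 2, lambda = lam1
P2 = lambda x: np.exp(-x / (2 * lam2)) / (2 * lam2)

def P_sum(xi):                                           # Eq. (6): density of xi1 + xi2
    return quad(lambda x: P1(x) * P2(xi - x), 0.0, xi)[0] if xi >= 0 else 0.0

def P_diff(xi):                                          # Eq. (7): density of xi1 - xi2
    if xi >= 0:
        return quad(lambda x: P1(x + xi) * P2(x), 0.0, np.inf)[0]
    return quad(lambda x: P1(x) * P2(x + abs(xi)), 0.0, np.inf)[0]

# Both densities are normalized to unity.
print(quad(P_sum, 0.0, np.inf)[0], quad(P_diff, -np.inf, np.inf)[0])
```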

2.2 General Case

In the general case, it is difficult to apply the convolution method when the number of degrees of freedom is large. The expression

$$P(\xi ) = \frac{1}{{{{{(2\pi )}}^{{{n \mathord{\left/ {\vphantom {n 2}} \right. \kern-0em} 2}}}}}}\int\limits_{ - \infty }^\infty {d{{x}_{1}}} \int\limits_{ - \infty }^\infty {d{{x}_{2}}} ...\int\limits_{ - \infty }^\infty {d{{x}_{n}}} {{e}^{{ - \frac{1}{2}\sum {x_{k}^{2}} }}}\delta \left( {\xi - \sum\limits_{k = 1}^n {{{\lambda }_{k}}x_{k}^{2}} } \right)$$
(8)

describes the most general form of the distribution of the variable (4). It is not difficult to calculate the moments of this distribution by performing the integration in Eq. (8). As a result, we obtain

$$\left\langle \xi \right\rangle = \sum\limits_{k = 1}^n {{{\lambda }_{k}}} ,\,\,\,\,\left\langle {{{\xi }^{2}}} \right\rangle - {{\left\langle \xi \right\rangle }^{2}} = 2\sum\limits_{k = 1}^n {\lambda _{k}^{2}} .$$
(9)

We see that the expressions (9) are a generalization of the expressions (3).
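A quick Monte Carlo check of the moments (9) for a weighted sum (4) with weights of both signs (a sketch assuming Python with NumPy; the weights below are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = np.array([2.0, 1.0, -0.5, -1.5])          # weights lambda_k of either sign
N = 500_000

x = rng.standard_normal((N, lam.size))
xi = (lam * x**2).sum(axis=1)                   # the variable of Eq. (4)

print(xi.mean(), lam.sum())                     # Eq. (9): <xi> = sum of lambda_k
print(xi.var(), 2 * (lam**2).sum())             # Eq. (9): variance = 2 * sum of lambda_k^2
```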

When we replace the \(\delta \)-function with its integral representation \(\delta (x) = (2\pi )^{-1}\int_{ - \infty }^\infty {{e}^{i\omega x}}d\omega \), the expression (8) takes the form

$$P(\xi ) = \frac{1}{{{{{(2\pi )}}^{{{n \mathord{\left/ {\vphantom {n 2}} \right. \kern-0em} 2} + 1}}}}}\int\limits_{ - \infty }^\infty {d\omega {{e}^{{i\omega \xi }}}} \prod\limits_{k = 1}^n {\left( {\int\limits_{ - \infty }^\infty {d{{x}_{k}}} {{e}^{{ - \frac{1}{2}x_{k}^{2}\left( {1 + i2\omega {{\lambda }_{k}}} \right)}}}} \right)} .$$
(10)

Carrying out the integration over the variables \({{x}_{k}}\) we obtain

$$P(\xi ) = \frac{1}{{2\pi }}\int\limits_{ - \infty }^\infty {\frac{{{{e}^{{i\omega \xi }}}}}{{\prod\limits_{k = 1}^n {\sqrt {1 + i2\omega {{\lambda }_{k}}} } }}d\omega } .$$
(11)

The subsequent analysis of Eq. (11) depends on the degree of degeneracy of the weights in the sum (4). In particular, when \({{\lambda }_{1}} = {{\lambda }_{2}} = ... = {{\lambda }_{n}} = \lambda \) and \(n\) is even, the integrand in (11) has a single pole of order \(n/2\), and we recover the standard \({{\chi }^{2}}\)-distribution (2).

For definiteness of the calculations, let us suppose that \(n\) is even (\(n = 2M\)) and all the weights in Eq. (4) are doubly degenerate. In this case, for each pair \({{\lambda }_{k}} = {{\lambda }_{r}}\) we introduce \({{\Lambda }_{m}} = {{\lambda }_{k}} = {{\lambda }_{r}}\). In other words, we have \(M = n/2\) distinct weights \({{\Lambda }_{m}}\), \(m = 1,2,...,M\). Then we can rewrite the integral (11) as

$$P(\xi ) = \frac{1}{{2\pi }}\int\limits_{ - \infty }^\infty {\frac{{{{e}^{{i\omega \xi }}}}}{{\prod\limits_{m = 1}^{{n \mathord{\left/ {\vphantom {n 2}} \right. \kern-0em} 2}} {(1 + i2\omega {{\Lambda }_{m}})} }}d\omega } .$$
(12)

We see that the integrand in this integral has \(n/2\) first-order poles at the points \(\omega = i/(2{{\Lambda }_{m}})\).
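Before turning to the contour integration, note that Eq. (12) can also be evaluated by straightforward numerical quadrature over a truncated \(\omega \) grid; this provides a useful independent check of the closed-form results obtained below. A minimal sketch (assuming Python with NumPy; three doubly degenerate weights of mixed sign, chosen arbitrarily):

```python
import numpy as np

Lam = np.array([1.0, 0.4, -0.7])                 # weights Lambda_m, M = 3 (n = 6)
w = np.linspace(-200.0, 200.0, 400_001)          # truncated omega grid
dw = w[1] - w[0]

denom = np.ones_like(w, dtype=complex)
for Lm in Lam:
    denom *= 1.0 + 2j * w * Lm                   # the product in Eq. (12)

def P(xi):
    """P(xi) from Eq. (12) by a simple Riemann sum on the omega grid."""
    return (np.exp(1j * w * xi) / denom).sum().real * dw / (2 * np.pi)

xi_grid = np.linspace(-15.0, 25.0, 161)
p = np.array([P(t) for t in xi_grid])
print(p.min(), (p * (xi_grid[1] - xi_grid[0])).sum())   # nonnegative (up to quadrature error) and ~1
```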

When integrating over \(\omega \), we have to consider the cases \(\xi > 0\) and \(\xi < 0\) separately. When \(\xi > 0\), only the poles in the upper half-plane (\(\operatorname{Im} \,\omega > 0\)) of the complex variable \(\omega \) determined by the positive weights contribute to the integral (11). In the case \(\xi < 0\), the integral (11) is the sum of the residues of the poles in the lower half-plane (\(\operatorname{Im} \,\omega < 0\)), which correspond to the negative weights \({{\Lambda }_{m}}\). With this in mind we obtain

$$P(\xi ) = \frac{1}{2}\sum\limits_{{{\Lambda }_{m}} > 0}^{} {\frac{{{{e}^{{{{ - \xi } \mathord{\left/ {\vphantom {{ - \xi } {2{{\Lambda }_{m}}}}} \right. \kern-0em} {2{{\Lambda }_{m}}}}}}}}}{{{{\Lambda }_{m}}\prod\limits_{r \ne m}^{{n \mathord{\left/ {\vphantom {n 2}} \right. \kern-0em} 2}} {(1 - {{{{\Lambda }_{r}}} \mathord{\left/ {\vphantom {{{{\Lambda }_{r}}} {{{\Lambda }_{m}}}}} \right. \kern-0em} {{{\Lambda }_{m}}}})} }}} ,\,\,{\text{when}}\,\,\xi > 0,$$
(13)
$$P(\xi ) = \frac{1}{2}\sum\limits_{{{\Lambda }_{m}} < 0}^{} {\frac{{{{e}^{{ - \left| {{\xi \mathord{\left/ {\vphantom {\xi {2{{\Lambda }_{m}}}}} \right. \kern-0em} {2{{\Lambda }_{m}}}}} \right|}}}}}{{\left| {{{\Lambda }_{m}}} \right|\prod\limits_{r \ne m}^{{n \mathord{\left/ {\vphantom {n 2}} \right. \kern-0em} 2}} {(1 - {{{{\Lambda }_{r}}} \mathord{\left/ {\vphantom {{{{\Lambda }_{r}}} {{{\Lambda }_{m}}}}} \right. \kern-0em} {{{\Lambda }_{m}}}})} }}} ,\,\,{\text{when}}\,\,\xi < 0.$$
(14)

Note that in Eq. (13) (\(\xi > 0\)) we sum up only over \({{\Lambda }_{m}} > 0\) and when \(\xi < 0\) the summation in Eq. (14) is carried out over \({{\Lambda }_{m}} < 0\).

We see that in the general case the distribution \(P(\xi )\) is asymmetric with respect to the point \(\xi = 0\). It becomes symmetric only in the special case of a symmetric spectrum of the weights \({{\Lambda }_{m}}\) when for each positive weight \({{\Lambda }_{k}}\) there is a negative weight \({{\Lambda }_{r}} = - {{\Lambda }_{k}}\).
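A numerical sketch of Eqs. (13) and (14) (assuming Python with NumPy): the residue sums are compared with a Monte Carlo histogram of the variable (4) built with the same doubly degenerate weights; the specific weights are an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)
Lam = np.array([1.0, 0.4, -0.7])                            # Lambda_m; each enters (4) twice

def P_residue(xi):
    """Eqs. (13) and (14): only the poles with sign(Lambda_m) = sign(xi) contribute."""
    total = 0.0
    for m, Lm in enumerate(Lam):
        if (xi >= 0) != (Lm > 0):
            continue
        prod = np.prod(np.delete(1.0 - Lam / Lm, m))        # product over r != m of (1 - Lambda_r/Lambda_m)
        total += 0.5 * np.exp(-abs(xi / (2 * Lm))) / (abs(Lm) * prod)
    return total

weights = np.repeat(Lam, 2)                                 # doubly degenerate weights, n = 6
xi = (weights * rng.standard_normal((400_000, weights.size))**2).sum(axis=1)

hist, edges = np.histogram(xi, bins=65, range=(-10, 16), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - np.array([P_residue(c) for c in centers]))))   # small
```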

Before proceeding to the analysis of special cases, let us discuss the general properties of the weighted distribution \(P(\xi )\). For this purpose, we note that for any set of distinct values \({{\Lambda }_{1}},{{\Lambda }_{2}},...,{{\Lambda }_{M}}\) the well-known Euler-Lagrange equalities are valid [11]:

$$\sum\limits_{m = 1}^M {\frac{{\Lambda _{m}^{k}}}{{\prod\limits_{r \ne m} {({{\Lambda }_{m}} - {{\Lambda }_{r}})} }}} = \left\{ \begin{gathered} 1,\,\,\,\,k = M - 1 \hfill \\ 0,\,\,\,\,0 \leqslant k \leqslant M - 2 \hfill \\ \end{gathered} \right..$$
(15)

These equalities follow from the properties of the Vandermonde determinant.
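The equalities (15) are easy to check numerically for any particular set of distinct values (a sketch assuming Python with NumPy; the values below are arbitrary):

```python
import numpy as np

Lam = np.array([1.3, 0.7, -0.4, -2.1, 0.25])        # arbitrary distinct values, M = 5
M = Lam.size

for k in range(M):
    s = sum(Lam[m]**k / np.prod(np.delete(Lam[m] - Lam, m)) for m in range(M))
    print(k, s)                                      # ~0 for k = 0, ..., M-2 and ~1 for k = M-1
```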

When \(k = M - 1\), Eq. (15) corresponds to a normalization of the distribution \(P(\xi )\). Indeed, performing the integration in (13), (14) we obtain an evident result:

$$\int\limits_{ - \infty }^\infty {P(\xi )} d\xi = \sum\limits_{m = 1}^M {\frac{{\Lambda _{m}^{{M - 1}}}}{{\prod\limits_{r \ne m} {({{\Lambda }_{m}} - {{\Lambda }_{r}})} }}} = 1.$$
(16)

In addition, from Eq. (15) with \(k = M - 2\) we see that the function \(P(\xi )\) is continuous at the point \(\xi = 0\). Indeed, from Eqs. (13) and (14) it follows that

$$P(\xi \to {{0}_{ + }}) - P(\xi \to {{0}_{ - }}) = \sum\limits_{m = 1}^M {\frac{{\Lambda _{m}^{{M - 2}}}}{{\prod\limits_{r \ne m} {({{\Lambda }_{m}} - {{\Lambda }_{r}})} }}} = 0\,\,{\text{and}}\,\,P(\xi \to {{0}_{ + }}) = P(\xi \to {{0}_{ - }}).$$
(17)

Similar relations are also valid for the derivatives: \(\left. d^{k}P/d\xi^{k} \right|_{\xi \to 0_{+}} = \left. d^{k}P/d\xi^{k} \right|_{\xi \to 0_{-}}\). This means that the derivatives of order \(k \in [1,M - 2]\) are continuous at the point \(\xi = 0\). Moreover, in the case of a symmetric spectrum of the weights, from Eqs. (13)–(15) we have \(\left. dP/d\xi \right|_{\xi = 0} = 0\); consequently, the distribution \(P(\xi )\) is symmetric with respect to the point \(\xi = 0\) and reaches its maximum at this point.

Let us discuss the properties of the distribution in the more specific case where all the weights \({{\Lambda }_{m}}\) have the same sign. For example, let them all be positive. Then from Eqs. (13) and (14) it follows that \(P(\xi ) = 0\) when \(\xi < 0\), and on the interval \(\xi \geqslant 0\) we have

$$P(\xi ) = \frac{1}{2}\sum\limits_{m = 1}^M {\frac{{\Lambda _{m}^{{M - 2}}{{e}^{{{{ - \xi } \mathord{\left/ {\vphantom {{ - \xi } {2{{\Lambda }_{m}}}}} \right. \kern-0em} {2{{\Lambda }_{m}}}}}}}}}{{\prod\limits_{r \ne m} {({{\Lambda }_{m}} - {{\Lambda }_{r}})} }}} ,\,\,\,\,\xi \geqslant 0,\,\,\,\,M = {n \mathord{\left/ {\vphantom {n 2}} \right. \kern-0em} 2}.$$
(18)

Setting \(\xi = 0\) and comparing the obtained expression with Eq. (15) for \(k = M - 2\), we see that the density \(P(\xi )\) vanishes at this point. Moreover, for \(k < M - 2\) the equality (15) imposes conditions on the derivatives of the function \(P(\xi )\) at \(\xi = 0\). Indeed, differentiating the expression (18) and comparing the result with Eq. (15), we obtain

$$\left. \frac{d^{r}P}{d\xi^{r}} \right|_{\xi = 0} = \frac{( - 1)^{r}}{2^{r + 1}}\sum\limits_{m = 1}^M \frac{\Lambda_m^{M - 2 - r}}{\prod\limits_{s \ne m} (\Lambda_m - \Lambda_s)} = 0\,\,{\text{when}}\,\,0 \leqslant r \leqslant M - 2.$$
(19)

From Eq. (19) it follows that, just as for the standard \({{\chi }^{2}}\)-distribution (2), the distribution of the weighted sum with positive weights vanishes as \(P(\xi )\sim {{\xi }^{n/2 - 1}}\) when \(\xi \to 0\).
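The small-\(\xi \) behavior following from Eq. (19) can be illustrated directly from Eq. (18) (a sketch assuming Python with NumPy; the positive weights are arbitrary): the log-log slope of \(P(\xi )\) near zero approaches \(M - 1 = n/2 - 1\).

```python
import numpy as np

Lam = np.array([1.0, 0.6, 0.3])                       # positive weights, M = 3 (n = 6)
M = Lam.size

def P(xi):
    """Eq. (18)."""
    return 0.5 * sum(
        Lam[m]**(M - 2) * np.exp(-xi / (2 * Lam[m])) / np.prod(np.delete(Lam[m] - Lam, m))
        for m in range(M)
    )

x1, x2 = 1e-4, 2e-4
print(np.log(P(x2) / P(x1)) / np.log(x2 / x1))        # ~ M - 1 = 2, i.e. P ~ xi^(n/2 - 1)
```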

Concluding this section, let us note that comparing the moments of the distribution \(P(\xi )\) that follow from the general definitions (9) with those that follow from the series representations (13) and (14) allows us to obtain expressions that complement the Euler-Lagrange equalities (see Appendix).

3 EXAMPLES OF SPECIFIC DISTRIBUTIONS

To illustrate the basic properties of the weighted \({{\chi }^{2}}\)-distribution more clearly, let us discuss some simple examples.

Example I. The simplest case of the distribution of the weighted variable corresponds to the number of degrees of freedom \(n = 2\). Then \(\xi = {{\xi }_{1}} \pm {{\xi }_{2}}\), where \({{\xi }_{1}} = {{\lambda }_{1}}x_{1}^{2}\), \({{\xi }_{2}} = {{\lambda }_{2}}x_{2}^{2}\), \({{\lambda }_{{1,2}}} > 0\). Equation (2) with \({{n}_{1}} = {{n}_{2}} = 1\) degrees of freedom describes the distributions of the variables \({{\xi }_{1}}\) and \({{\xi }_{2}}\). Consequently, \(P(\xi_k) = \exp ( - \xi_k/2\lambda_k)/\sqrt {2\pi \lambda_k\xi_k} \), \(k = 1,2\).

(Ia) \(\xi = {{\xi }_{1}} + {{\xi }_{2}}\). The variable \(\xi = {{\xi }_{1}} + {{\xi }_{2}}\) is nonnegative. Therefore \(P(\xi ) = 0\) when \(\xi < 0\), and for \(\xi \geqslant 0\) we obtain from Eq. (6):

$$P(\xi ) = \frac{1}{{2\pi \sqrt {{{\lambda }_{1}}{{\lambda }_{2}}} }}{{e}^{{ - \frac{\xi }{{2{{\lambda }_{2}}}}}}}\int\limits_0^\xi {{{e}^{{ - \frac{1}{2}x\left( {\frac{1}{{{{\lambda }_{1}}}} - \frac{1}{{{{\lambda }_{2}}}}} \right)}}}} \frac{{dx}}{{\sqrt {x(\xi - x)} }},\,\,\,\,\xi \geqslant 0.$$
(20)

When substituting \(x = \xi {{\cos }^{2}}({\varphi \mathord{\left/ {\vphantom {\varphi 2}} \right. \kern-0em} 2})\), after some transformations of the integral we obtain

$$P(\xi ) = \frac{1}{{2\sqrt {{{\lambda }_{1}}{{\lambda }_{2}}} }}{{e}^{{ - \frac{\xi }{4}\left( {\frac{1}{{{{\lambda }_{1}}}} + \frac{1}{{{{\lambda }_{2}}}}} \right)}}}{{{\text{I}}}_{0}}({{\xi }_{ - }}),\,\,\,\,\,{{\xi }_{ - }} = \left| {\frac{\xi }{4}\left( {\frac{1}{{{{\lambda }_{1}}}} - \frac{1}{{{{\lambda }_{2}}}}} \right)} \right|,$$
(21)

where \({{\operatorname{I} }_{0}}({{\xi }_{ - }})\) is the modified Bessel function of the first kind [12].

Since the asymptotics of the modified Bessel function are \({{\operatorname{I} }_{0}}(z) \to 1\) when \(z \to 0\) and \({{\operatorname{I} }_{0}}(z) \to {{{{e}^{z}}} \mathord{\left/ {\vphantom {{{{e}^{z}}} {\sqrt {2\pi z} }}} \right. \kern-0em} {\sqrt {2\pi z} }}\) when \(z \to \infty \) we have

$$P(\xi ) \approx \left\{ \begin{array}{ll} \dfrac{1}{2\sqrt{\lambda_1\lambda_2}}\,e^{ - \frac{\xi}{4}\left( \frac{1}{\lambda_1} + \frac{1}{\lambda_2} \right)}, & \xi_- \ll 1 \\ \dfrac{1}{\sqrt{2\pi \xi \left| \lambda_1 - \lambda_2 \right|}}\,e^{ - \frac{\xi}{2\lambda_{\max}}}, & \xi_- \gg 1 \end{array} \right.\,\,{\text{where}}\,\,\lambda_{\max} = \max(\lambda_1,\lambda_2).$$
(22)

As expected, when \({{\lambda }_{1}} = {{\lambda }_{2}} = \lambda \) the expression (21) reduces to the expression (2).
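The closed form (21) is easy to confirm by a Monte Carlo experiment (a sketch assuming Python with NumPy/SciPy; the weights and sample size are arbitrary):

```python
import numpy as np
from scipy.special import i0

rng = np.random.default_rng(3)
lam1, lam2 = 1.0, 0.4

def P(xi):
    """Eq. (21) for xi >= 0."""
    xi_minus = np.abs(0.25 * xi * (1 / lam1 - 1 / lam2))
    return np.exp(-0.25 * xi * (1 / lam1 + 1 / lam2)) * i0(xi_minus) / (2 * np.sqrt(lam1 * lam2))

xi = lam1 * rng.standard_normal(400_000)**2 + lam2 * rng.standard_normal(400_000)**2
hist, edges = np.histogram(xi, bins=60, range=(0, 12), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - P(centers))))              # small
```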

(Ib) \(\xi = {{\xi }_{1}} - {{\xi }_{2}}\). From Eq. (7) it follows that the distribution of the variable \(\xi = {{\xi }_{1}} - {{\xi }_{2}}\) for the interval \(\xi \geqslant 0\) is

$$P(\xi ) = \frac{1}{{2\pi \sqrt {{{\lambda }_{1}}{{\lambda }_{2}}} }}{{e}^{{ - \frac{\xi }{{2{{\lambda }_{1}}}}}}}\int\limits_0^\infty {{{e}^{{ - \frac{1}{2}x\left( {\frac{1}{{{{\lambda }_{1}}}} + \frac{1}{{{{\lambda }_{2}}}}} \right)}}}} \frac{{dx}}{{\sqrt {x{\kern 1pt} (\xi + x)} }}.$$
(23)

An analogous expression for the interval \(\xi \leqslant 0\) is obtained by making the changes \({{\lambda }_{1}} \leftrightarrow {{\lambda }_{2}}\) and \(\xi \to \left| \xi \right|\) in the integral (23). Then, after some transformations, for \(\xi \in ( - \infty ,\infty )\) we have:

$$P(\xi ) = \frac{1}{2\pi \sqrt{\lambda_1\lambda_2}}\,e^{ - \frac{\xi}{4}\left( \frac{1}{\lambda_1} - \frac{1}{\lambda_2} \right)}\,{\operatorname{K}}_0(\xi_+),\,\,\,\,\xi_+ = \left| \frac{\xi}{4}\left( \frac{1}{\lambda_1} + \frac{1}{\lambda_2} \right) \right|,$$
(24)

and \({{K}_{0}}(z)\) is the modified Bessel function of the second kind [12], whose asymptotics are \({{K}_{0}}(z)\sim \left| {\ln z} \right|\) when \(z \to 0\) and \({{K}_{0}}(z)\sim {{e}^{{ - z}}}\sqrt {{\pi \mathord{\left/ {\vphantom {\pi {2z}}} \right. \kern-0em} {2z}}} \) when \(z \to \infty \). Consequently,

$$P(\xi ) \approx \frac{1}{2\pi \sqrt{\lambda_1\lambda_2}}\,e^{\frac{\xi(\lambda_1 - \lambda_2)}{4\lambda_1\lambda_2}}\ln\!\left( \frac{4\lambda_1\lambda_2}{\left| \xi \right|(\lambda_1 + \lambda_2)} \right)\,\,{\text{when}}\,\,\left| \xi \right| \ll 1$$
(25)

and

$$P(\xi ) = \frac{1}{{\sqrt {2\pi \left| \xi \right|({{\lambda }_{1}} + {{\lambda }_{2}})} }}\left\{ {\begin{array}{*{20}{c}} {{{e}^{{ - {\kern 1pt} {\kern 1pt} \frac{\xi }{{2{{\lambda }_{1}}}}}}},\,\,\,\,\xi \to + \infty } \\ {{{e}^{{ - {\kern 1pt} {\kern 1pt} \frac{{\left| \xi \right|}}{{2{{\lambda }_{2}}}}}}},\,\,\,\,\xi \to - \infty } \end{array}} \right..$$
(26)
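The closed form (24) can be checked in the same way (a sketch assuming Python with NumPy/SciPy; the weights are arbitrary). Since the density diverges logarithmically at \(\xi = 0\), the bins closest to zero are excluded from the comparison.

```python
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(4)
lam1, lam2 = 1.0, 0.4

def P(xi):
    """Eq. (24) for any real xi != 0."""
    xi_plus = np.abs(0.25 * xi * (1 / lam1 + 1 / lam2))
    return (np.exp(-0.25 * xi * (1 / lam1 - 1 / lam2)) * k0(xi_plus)
            / (2 * np.pi * np.sqrt(lam1 * lam2)))

xi = lam1 * rng.standard_normal(400_000)**2 - lam2 * rng.standard_normal(400_000)**2
hist, edges = np.histogram(xi, bins=70, range=(-4, 10), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = np.abs(centers) > 0.3
print(np.max(np.abs(hist[mask] - P(centers[mask]))))   # small away from xi = 0
```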

Example II. Let us examine the distribution of the variable \(\xi = {{\xi }_{1}} \pm {{\xi }_{2}}\) with the number of degrees of freedom \(n = 4\), where \({{\xi }_{1}} = {{\lambda }_{1}}\sum\nolimits_{k = 1}^2 {x_{{1k}}^{2}} \) and \({{\xi }_{2}} = {{\lambda }_{2}}\sum\nolimits_{k = 1}^2 {x_{{2k}}^{2}} \), \({{\lambda }_{{1,2}}} > 0\). The distributions of the variables \({{\xi }_{{1,2}}}\) are defined by Eq. (2) with the numbers of degrees of freedom \({{n}_{{1,2}}} = 2\).

(IIa) \(\xi = {{\xi }_{1}} + {{\xi }_{2}}\). The distribution of the variable in question is nonzero only in the interval \(\xi \geqslant 0\). In this case, from Eq. (6) we obtain:

$$P(\xi ) = \frac{{{{e}^{{{{ - \xi } \mathord{\left/ {\vphantom {{ - \xi } {2{{\lambda }_{1}}}}} \right. \kern-0em} {2{{\lambda }_{1}}}}}}} - {{e}^{{{{ - \xi } \mathord{\left/ {\vphantom {{ - \xi } {2{{\lambda }_{2}}}}} \right. \kern-0em} {2{{\lambda }_{2}}}}}}}}}{{2({{\lambda }_{1}} - {{\lambda }_{2}})}},\,\,\,\,\xi \geqslant 0,\,\,\,\,{{\lambda }_{{1,2}}} > 0$$
(27)

and the moments of this distribution are

$$\left\langle \xi \right\rangle = 2({{\lambda }_{1}} + {{\lambda }_{2}}),\,\,\,\,\left\langle {{{\xi }^{2}}} \right\rangle - {{\left\langle \xi \right\rangle }^{2}} = 4\left( {\lambda _{1}^{2} + \lambda _{2}^{2}} \right).$$
(28)

As we could expect, when \({{\lambda }_{1}} = {{\lambda }_{2}} = \lambda \) the distribution (27) turns into the standard \({{\chi }^{2}}\)-distribution (2) with the number of degrees of freedom \(n = {{n}_{1}} + {{n}_{2}} = 4\); the expressions for the moments of the distribution coincide with Eq. (3).

(IIb) \(\xi = {{\xi }_{1}} - {{\xi }_{2}}\). In this case, we obtain the distribution of the difference \(\xi = {{\xi }_{1}} - {{\xi }_{2}}\) from Eq. (7):

$$P(\xi ) = \frac{1}{{2\left( {{{\lambda }_{1}} + {{\lambda }_{2}}} \right)}}\left\{ {\begin{array}{*{20}{c}} {{{e}^{{ - \frac{\xi }{{2{{\lambda }_{1}}}}}}},\,\,\,\,\xi \geqslant 0} \\ {{{e}^{{ - \frac{{\left| \xi \right|}}{{2{{\lambda }_{2}}}}}}},\,\,\,\,\xi \leqslant 0} \end{array}} \right.,\,\,\,\,{{\lambda }_{{1,2}}} > 0.$$
(28)

The moments of this distribution have the form

$$\left\langle \xi \right\rangle = 2({{\lambda }_{1}} - {{\lambda }_{2}}),\,\,\,\,\left\langle {{{\xi }^{2}}} \right\rangle - {{\left\langle \xi \right\rangle }^{2}} = 4\left( {\lambda _{1}^{2} + \lambda _{2}^{2}} \right).$$
(29)

The expressions (30) coincide with Eq. (28) if we make the change \({{\lambda }_{2}} \to - {{\lambda }_{2}}\). However, this is where the similarity ends. We see that the distribution (29) is asymmetric with respect to the origin \(\xi = 0\). Moreover, the derivatives of \(P(\xi )\) are discontinuous at the point \(\xi = 0\). This example shows that the weighted distribution can be very different from the standard form (2).
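Both closed forms of this example are easily confirmed by a Monte Carlo experiment (a sketch assuming Python with NumPy; the weights are arbitrary): the histograms of \({{\xi }_{1}} + {{\xi }_{2}}\) and \({{\xi }_{1}} - {{\xi }_{2}}\) are compared with Eqs. (27) and (29).

```python
import numpy as np

rng = np.random.default_rng(5)
lam1, lam2 = 1.2, 0.5
N = 400_000

x = rng.standard_normal((N, 4))
xi1 = lam1 * (x[:, 0]**2 + x[:, 1]**2)
xi2 = lam2 * (x[:, 2]**2 + x[:, 3]**2)

def P_sum(t):                                   # Eq. (27)
    return (np.exp(-t / (2 * lam1)) - np.exp(-t / (2 * lam2))) / (2 * (lam1 - lam2))

def P_diff(t):                                  # Eq. (29)
    return np.where(t >= 0, np.exp(-t / (2 * lam1)), np.exp(t / (2 * lam2))) / (2 * (lam1 + lam2))

for data, dens, lo, hi in [(xi1 + xi2, P_sum, 0.0, 16.0), (xi1 - xi2, P_diff, -4.0, 14.0)]:
    hist, edges = np.histogram(data, bins=64, range=(lo, hi), density=True)
    c = 0.5 * (edges[:-1] + edges[1:])
    print(np.max(np.abs(hist - dens(c))))       # small for both cases
```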

Example III. Suppose we have two variables \({{\xi }_{1}}\) and \({{\xi }_{2}}\) that are subject to the \({{\chi }^{2}}\)-distribution with the numbers of the degrees of freedom \({{n}_{1}}\) and \({{n}_{2}}\), respectively. The distribution of the variable \(\xi = {{\xi }_{1}} + {{\xi }_{2}}\) is evident: it is defined by Eq. (2) with the number of degrees of freedom \(n = {{n}_{1}} + {{n}_{2}}\).

Let us examine the distribution of the variable \(\xi = {{\xi }_{1}} - {{\xi }_{2}}\). From Eq. (7) we obtain

$$P(\xi ) = {\text{const}}\int\limits_0^\infty {{{\varepsilon }^{{{{m}_{1}}}}}{{{\left( {\varepsilon + \left| \xi \right|} \right)}}^{{{{m}_{2}}}}}{{e}^{{ - \varepsilon - \frac{1}{2}\left| \xi \right|}}}d\varepsilon } ,\,\,\,\,{{m}_{{1,2}}} = \frac{{{{n}_{{1,2}}}}}{2} - 1.$$
(31)

For the sake of simplicity, we set \({{n}_{1}} = {{n}_{2}}\) and \({{m}_{{1,2}}} = m\). In this case, the distribution is symmetric with respect to the point \(\xi = 0\). To analyze the asymptotics of this distribution, we use the saddle-point method. We suppose that \(m \gg 1\) and rewrite Eq. (31) as

$$P(\xi ) = {{P}_{0}}\int\limits_0^\infty {{{e}^{{ - S(\varepsilon )}}}d\varepsilon } ,$$
(32)

where \({{P}_{0}}\) is a normalization constant whose explicit form is not important here. In Eq. (32)

$$S(\varepsilon ) = x - m\ln \left( {{{x}^{2}} - \frac{1}{4}{{\xi }^{2}}} \right),\,\,\,\,x = \varepsilon + \frac{1}{2}\left| \xi \right|,$$
(33)
$$\frac{dS}{dx} = 1 - \frac{2mx}{x^{2} - \frac{1}{4}\xi^{2}},\,\,\,\,\left. \frac{d^{2}S}{dx^{2}} \right|_{x = x_{0}} = \frac{x_{0} - m}{m x_{0}}.$$

Setting \(dS/dx = 0\) we find the saddle point \({{x}_{0}} = m\left( {1 + \sqrt {1 + {{\xi }^{2}}/4{{m}^{2}}} } \right)\), at which the second derivative above is evaluated, and the saddle-point approximation gives the form of the distribution

$$P(\xi ) = \frac{P_{0}\sqrt{\pi}}{\sqrt{x_{0} - m}}\,(2m x_{0})^{m + \frac{1}{2}}\,e^{ - x_{0}}.$$
(34)

From Eq. (34) we obtain the form of the function \(P(\xi )\) near the center of the distribution (\(\left| \xi \right| \ll m\)) and at the tails (\(\left| \xi \right| \gg m\)):

$$P(\xi )\sim \left\{ \begin{array}{ll} e^{ - \frac{\xi^{2}}{16m}}, & \left| \xi \right| \ll m \\ \left| \xi \right|^{m}\,e^{ - \frac{\left| \xi \right|}{2}}, & \left| \xi \right| \gg m \end{array} \right.,\,\,\,\,m = \frac{n_{1,2}}{2} - 1.$$
(35)

We see that the behavior of \(P(\xi )\) resembles the χ2-distribution only at the far ends of the tails.
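The central (Gaussian) part of Eq. (35) implies a variance close to \(8m\) for \(m \gg 1\), to be compared with the exact value \(4{{n}_{1}} = 8(m + 1)\). A quick Monte Carlo illustration (a sketch assuming Python with NumPy; the value of \(n_1\) is an arbitrary large number):

```python
import numpy as np

rng = np.random.default_rng(6)
n1 = 100                                # n1 = n2, so m = n1/2 - 1 = 49
m = n1 / 2 - 1

xi = rng.chisquare(n1, 500_000) - rng.chisquare(n1, 500_000)

print(xi.var(), 8 * m, 4 * n1)          # sample variance vs. asymptotic 8m vs. exact 4*n1
```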

Example IV. Let us examine the distribution of the variable (4) when the doubly degenerate weights (\(n = 2M\)) have the form

$$\frac{1}{\Lambda_{k}} = \frac{1}{\Lambda_{0}} + (k - 1)\Delta,\,\,\,\,\Lambda_{0} > 0,\,\,\,\,\Delta \geqslant 0.$$
(36)

In this case, Eq. (13) takes the form

$$P(\xi ) = \frac{1}{2}{{e}^{{ - \frac{1}{{2{{\Lambda }_{0}}}}\xi }}}\sum\limits_{k = 1}^M {\frac{1}{{{{\Lambda }_{k}}\prod\limits_{r \ne k}^M {{{\Lambda }_{r}}\left( {\Lambda _{r}^{{ - 1}} - \Lambda _{k}^{{ - 1}}} \right)} }}} {{e}^{{ - \frac{1}{2}(k - 1)\xi \Delta }}}.$$
(37)

When we take into account the relation

$$\frac{1}{\Lambda_{k}\prod\limits_{r \ne k}^M {\left[ \Lambda_{r}\left( \Lambda_{r}^{ - 1} - \Lambda_{k}^{ - 1} \right) \right]} } = \frac{( - 1)^{k - 1}}{(k - 1)!\,(M - k)!\,\Delta^{M - 1}\prod\limits_{r = 1}^M \Lambda_{r}}$$

from Eq. (37) we obtain:

$$P(\xi ) = P_{0}\,e^{ - \frac{\xi}{2\Lambda_{0}}}\left( \frac{1 - e^{ - \frac{1}{2}\xi \Delta}}{\Delta} \right)^{M - 1},\,\,\,\,P_{0} = \frac{1}{2(M - 1)!\,\Lambda_{0}^{M}}\prod\limits_{k = 1}^{M - 1} \left( 1 + k\Lambda_{0}\Delta \right).$$
(38)

As we could expect, when \(\Delta \to 0\) (all the weights become equal) this expression reduces to the standard distribution (2).
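The closed form (38) can be cross-checked against the general residue sum (13) applied directly to the weight spectrum (36) (a sketch assuming Python with NumPy; the values of \(M\), \(\Lambda_0\), and \(\Delta\) are arbitrary):

```python
import numpy as np
from math import factorial

M, Lam0, Delta = 4, 1.0, 0.6
Lam = 1.0 / (1.0 / Lam0 + np.arange(M) * Delta)              # the weights of Eq. (36)

def P_residue(xi):                                           # Eq. (13); all weights are positive
    return 0.5 * sum(
        np.exp(-xi / (2 * Lam[m])) / (Lam[m] * np.prod(np.delete(1 - Lam / Lam[m], m)))
        for m in range(M)
    )

P0 = np.prod(1 + np.arange(1, M) * Lam0 * Delta) / (2 * factorial(M - 1) * Lam0**M)

def P_closed(xi):                                            # Eq. (38)
    return P0 * np.exp(-xi / (2 * Lam0)) * ((1 - np.exp(-0.5 * xi * Delta)) / Delta)**(M - 1)

xi = np.linspace(0.1, 20.0, 5)
print(np.array([P_residue(t) for t in xi]) - P_closed(xi))   # ~0
```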

4 CONCLUSIONS

Our analysis shows that in the general case the distribution of the weighted sum bears little resemblance to the standard (unweighted) χ2-distribution. Although the function \(P(\xi )\) is continuous, its derivatives are generally discontinuous at the point \(\xi = 0\), except in a few special cases. In the general case, we did not succeed in determining the position of the maximum of the function \(P(\xi )\) analytically. The expressions (13) and (14) allow one to calculate the distribution \(P(\xi )\) in the most general case.