
7.1 Mixed Moments of Student’s \(t\)-Distributions

Let \(M_d\) be the Euclidean space of symmetric \(d\times d\) matrices with the scalar product \(\langle A_1,A_2\rangle :=\mathrm tr (A_1 A_2)\), \(A_1, A_2\in M_d\), let \(M_d^+\subset M_d\) be the cone of nonnegative definite matrices, and let \(\fancyscript{P}(M^+_d)\) be the class of probability measures on \(M^+_d\). Here \(\mathrm tr A\) denotes the trace of a matrix \(A\).

The probability distribution of a d-dimensional random vector \(X\) is said to be the mixture of centered Gaussian distributions with the mixing distribution \(U\in \fancyscript{P}(M^+_d)\) (\(U\)-mixture for short) if, for all \(z\in R^d\),

$$\begin{aligned} \mathrm E e^{i\langle z,X\rangle }=\int \limits _{M^+_d}e^{-\frac{1}{2}\langle zA,z\rangle }U(\text{d}A). \end{aligned}$$
(7.1)

The distributional properties of such mixtures are well studied (see, e.g., [1, 2] and references therein).

Let \(c_j=(c_{j_1},\ldots ,c_{j_d})\in R^d\), \(j=1,2,\ldots ,2n\). We shall derive formulas evaluating \(\mathrm E \left(\prod\nolimits ^{2n}_{j=1}\langle c_j,X\rangle \right)\) for \(U\)-mixtures of Gaussian distributions, including Student’s \(t\)-distribution.

Let \(\Pi _{2n}\) be the class of pairings \(\sigma \) on the set \(I_{2n}=\left\{ 1,2,\ldots ,2n\right\} \), i.e. the partitions of \(I_{2n}\) into \(n\) disjoint pairs, implying that

$$\begin{aligned} \mathrm card \Pi _{2n}=\frac{(2n)!}{2^nn!}. \end{aligned}$$

For each \(\sigma \in \Pi _{2n}\), we define uniquely the subset \(I_{2n\backslash \sigma }\) and the integers \(\sigma (j)\), \(j\in I_{2n\backslash \sigma }\), by the equality

$$\begin{aligned} \sigma =\left\{ \left(j,\sigma (j)\right),j\in I_{2n\backslash \sigma }\right\} . \end{aligned}$$

If \(U=\varepsilon _\Sigma \) is a Dirac measure with fixed \(\Sigma \in M^+_d\), i.e. in the Gaussian case, the Isserlis theorem (known in mathematical physics as the Wick theorem) states (see, e.g., [3–5]) that

$$\begin{aligned} \mathrm E \left[\prod ^{2n}_{j=1}\langle c_j,X\rangle \right]=\sum _{\sigma \in \Pi _{2n}}\prod _{j\in I_{2n\backslash \sigma }}\langle c_j \Sigma , c_{\sigma (j)}\rangle :=m_{2n}(c,\Sigma ). \end{aligned}$$
(7.2)
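The pairing formula (7.2) can be checked numerically. The following Python sketch (the matrix and vectors below are illustrative choices, not from the text) enumerates \(\Pi_{2n}\) recursively and evaluates \(m_{2n}(c,\Sigma)\); when all \(c_j\) equal a common \(c\), the sum collapses to \(\mathrm{card}\,\Pi_{2n}\cdot\langle c\Sigma,c\rangle^n\), which gives an exact consistency test.

```python
import numpy as np


def pairings(indices):
    """Yield all partitions of `indices` into disjoint pairs (the class Pi_2n).

    For an odd number of indices nothing is yielded, matching the fact that
    odd centered Gaussian moments vanish."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for k, partner in enumerate(rest):
        for tail in pairings(rest[:k] + rest[k + 1:]):
            yield [(first, partner)] + tail


def isserlis_moment(c, Sigma):
    """m_2n(c, Sigma): sum over pairings sigma of prod <c_j Sigma, c_sigma(j)>, Eq. (7.2)."""
    c = np.asarray(c, dtype=float)
    total = 0.0
    for sigma in pairings(tuple(range(len(c)))):
        prod = 1.0
        for j, sj in sigma:
            prod *= c[j] @ Sigma @ c[sj]
        total += prod
    return total
```

For \(2n=4\) there are \(4!/(2^2\,2!)=3\) pairings, so with all \(c_j=c\) the moment equals \(3\langle c\Sigma,c\rangle^2\).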

Write

$$\begin{aligned} \phi _U(\Theta ):=\int \limits _{M^+_d}e^{-\mathrm tr (A\Theta )}U(\text{d}A), \quad \Theta \in M^+_d. \end{aligned}$$
(7.3)

Theorem 7.1

[6] The following statements hold:

  1. (i)

    The probability distribution of a d-dimensional random vector \(X\) is the \(U\)-mixture of centered Gaussian distributions if and only if

    $$\begin{aligned} \mathrm E e^{i\langle z, X\rangle }=\phi _U\left(\frac{1}{2}z^Tz\right), \end{aligned}$$
    (7.4)

    where \(z^{T}\) is the transposed vector \(z\).

  2. (ii)

    If the probability distribution of \(X\) is the \(U\)-mixture of centered Gaussian distributions and, for \(j=1,2, \ldots ,2n\),

    $$\begin{aligned} \int \limits _{M^+_d}\langle c_jA,c_j\rangle ^nU(\text{d}A)<\infty , \end{aligned}$$
    (7.5)

    then

    $$\begin{aligned} \mathrm E \left[\prod ^{2n}_{j=1}\langle c_j,X\rangle \right]=\sum _{\sigma \in \Pi _{2n}}\int \limits _{M^+_d}m^{\sigma }_{2n}(c,A)U(\text{d}A), \end{aligned}$$
    (7.6)

    where

    $$\begin{aligned} m^{\sigma }_{2n}(c,A)=\prod _{j\in I_{2n\backslash \sigma }}\langle c_jA,c_{\sigma (j)}\rangle . \end{aligned}$$

Proof

  1. (i)

    The statement follows from (7.1) and (7.3), because, obviously,

    $$\begin{aligned} \mathrm tr \left((z^Tz)A\right)=\langle zA,z\rangle . \end{aligned}$$
  2. (ii)

    Observe that card \(I_{2n\backslash \sigma }=n\) and, for all \(\sigma \in \Pi _{2n}\) and \(A\in M^+_d\),

    $$\begin{aligned} \prod _{j\in I_{2n\backslash \sigma }}\left|\langle c_j A,c_{\sigma (j)}\rangle \right|&\le n^{-n}\left(\sum _{j\in I_{2n\backslash \sigma }}\left|\langle c_jA,c_{\sigma (j)}\rangle \right|\right)^n \nonumber \\&\le n^{-1}\sum _{j\in I_{2n\backslash \sigma }}\left|\langle c_jA, c_{\sigma (j)}\rangle \right|^n \nonumber \\&\le \frac{2^{n-1}}{n}\sum _{j\in I_{2n\backslash \sigma }}\left[\langle c_jA,c_j\rangle ^n+\langle c_{\sigma (j)}A,c_{\sigma (j)}\rangle ^n\right] \nonumber \\&= \frac{2^{n-1}}{n}\sum ^{2n}_{j=1}\langle c_jA,c_j\rangle ^n. \end{aligned}$$
    (7.7)

    Using (7.5) and (7.7), we find that

    $$\begin{aligned} \mathrm E \left[\prod ^{2n}_{j=1}\langle c_j, X\rangle \right]&= \int \limits _{M^+_d}m_{2n}(c,A)U(\text{d}A)\\&= \sum _{\sigma \in \Pi _{2n}}\int \limits _{M^+_d}m^{\sigma }_{2n}(c,A)U(\text{d}A). \end{aligned}$$

\(\square \)

Taking (see also [7])

$$\begin{aligned} U=\fancyscript{L}(Y\Sigma ), \end{aligned}$$

where \(\Sigma \in M^+_d\) is fixed and

$$\begin{aligned} \fancyscript{L}(Y)=GIG\left(-\frac{\nu }{2}, \nu , 0\right) \end{aligned}$$

we have that

$$\begin{aligned} \phi _U(\Theta )=\frac{2\left(\frac{\nu }{2}\right)^{\frac{\nu }{4}}\left(\mathrm tr (\Sigma \Theta )\right)^{\frac{\nu }{4}}}{\Gamma \left(\frac{\nu }{2}\right)}K_{\frac{\nu }{2}}\left(\sqrt{2\nu \,\mathrm tr (\Sigma \Theta )}\right), \end{aligned}$$
(7.8)
$$\begin{aligned} \fancyscript{L}(X)=T_d(\nu ,\Sigma ,0) \end{aligned}$$
(7.9)

and, for \(j=1,2,\ldots ,2n\)

$$\begin{aligned} \int \limits _{M^+_d}\langle c_jA,c_j\rangle ^nU(\text{d}A)=\left\{ \begin{array}{l} \left(\dfrac{\nu }{2}\right)^{n}\dfrac{\Gamma \left(\frac{\nu }{2}-n\right)}{\Gamma \left(\frac{\nu }{2}\right)}\langle c_j\Sigma ,c_j\rangle ^n, \quad \text {if}\, 2n<\nu , \\ \infty , \quad \text {if}\, 2n\ge \nu . \end{array} \right. \end{aligned}$$

Thus, for \(2n<\nu \),

$$\begin{aligned} \int \limits _{R^d}\prod ^{2n}_{j=1}\langle c_j,x\rangle T_d\left(\nu ,\Sigma ,0\right)(\text{d}x)=\left(\frac{\nu }{2}\right)^{n}\frac{\Gamma \left(\frac{\nu }{2}-n\right)}{\Gamma \left(\frac{\nu }{2}\right)}m_{2n}(c,\Sigma ), \end{aligned}$$
(7.10)
$$\begin{aligned} \int \limits _{R^d}\prod ^{2n}_{j=1}\langle c_j,x\rangle T_d(\nu ,\Sigma ,\alpha )(\text{d}x)=\int \limits _{R^d}\prod ^{2n}_{j=1}\left[\langle c_j,y\rangle +\langle c_j,\alpha \rangle \right]T_d(\nu ,\Sigma ,0)(\text{d}y) \end{aligned}$$

and, by symmetry of \(T_d(\nu ,\Sigma ,0)\), for \(2k+1<\nu \),

$$\begin{aligned} \int \limits _{R^d}\prod ^{2k+1}_{j=1}\langle c_j,x\rangle T_d(\nu ,\Sigma ,0)(\text{d}x)=0. \end{aligned}$$
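The even-moment formula can be verified by Monte Carlo, using the representation of \(T_d(\nu,\Sigma,0)\) as \(\sqrt{\nu/\chi^2_\nu}\) times a centered Gaussian vector, for which the mixing variable \(Y\) satisfies \(\mathrm E\,Y^n=(\nu/2)^n\Gamma(\nu/2-n)/\Gamma(\nu/2)\). A Python sketch (parameter values are illustrative) for the fourth moment with all \(c_j=c\), where \(m_4(c,\Sigma)=3\langle c\Sigma,c\rangle^2\):

```python
import math

import numpy as np

rng = np.random.default_rng(0)

nu, d = 10.0, 2
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([1.0, -1.0])
n = 2  # fourth mixed moment: 2n = 4 < nu, so the moment is finite

# Sample X ~ T_d(nu, Sigma, 0) as sqrt(nu / chi2_nu) * N(0, Sigma).
N = 1_000_000
Z = rng.multivariate_normal(np.zeros(d), Sigma, size=N)
S = rng.chisquare(nu, size=N)
X = Z * np.sqrt(nu / S)[:, None]

mc = np.mean((X @ c) ** 4)

# Closed form: (nu/2)^n Gamma(nu/2 - n)/Gamma(nu/2) * 3 <c Sigma, c>^2
const = (nu / 2) ** n * math.gamma(nu / 2 - n) / math.gamma(nu / 2)
exact = const * 3 * (c @ Sigma @ c) ** 2
```

With these values \(\langle c\Sigma,c\rangle=2\) and the exact moment is \(25\); the Monte Carlo estimate agrees to within about one percent.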

Remark 7.2

Let \(\nu \ge d\) be an integer, \(Y_1,\ldots ,Y_{\nu }\) be i.i.d. d-dimensional centered Gaussian vectors with a covariance matrix \(\Sigma \), \(|\Sigma |>0\), and \(U=\fancyscript{L}\left(\nu W^{-1}_{\nu }\right)\), where the matrix

$$\begin{aligned} W_{\nu }=\sum ^{\nu }_{j=1}Y^T_jY_j. \end{aligned}$$

If \(\nu \ge d\), the matrix \(W_{\nu }\) is invertible with probability 1, because it is well known that the Wishart distribution

$$\begin{aligned} \fancyscript{L}(W_{\nu }):=W_d(\Sigma ,\nu ) \end{aligned}$$

has a density

$$\begin{aligned} W_d(\Sigma ,\nu ,A)=\left\{ \begin{array}{l} \dfrac{|A|^{\dfrac{\nu -d-1}{2}}\exp \left\{ -\dfrac{1}{2}\mathrm tr \left(\Sigma ^{-1}A\right)\right\} }{\left(2^d|\Sigma |\right)^{\dfrac{\nu }{2}} \pi ^{\dfrac{d(d-1)}{4}}\prod \limits ^{d}_{j=1}\Gamma \left(\dfrac{\nu -j+1}{2}\right)}, \quad \text {if}\, |A|>0, \\ 0, \quad \text {otherwise} . \end{array} \right. \end{aligned}$$

Because (see, e.g., [2, 8, 9])

$$\begin{aligned} \int \limits _{M^+_d}e^{-\frac{1}{2}\langle zA,z\rangle }U(\text{d}A)&=\int \limits _{R^d}e^{i\langle z,x\rangle }T_d(\nu ,\Sigma ,0)(\text{d}x) \nonumber \\&=\mathrm E [e^{-\frac{1}{2}\langle z\Sigma ,z\rangle Y}], \quad z\in R^d, \end{aligned}$$
(7.11)

taking \(z=tc\), \(t\in R^1\), \(c\in R^d\), we find that

$$\begin{aligned} \int \limits _{M^+_d}e^{-\frac{t^2}{2}\langle cA,c\rangle }U(\text{d}A)=\mathrm E \left[e^{-\frac{t^2}{2}\langle c\Sigma ,c\rangle Y}\right]. \end{aligned}$$

Thus, for all \(c\in R^d\),

$$\begin{aligned} \fancyscript{L}\left(\nu \langle cW^{-1}_{\nu },c\rangle \right)=\fancyscript{L}\left(\langle c\Sigma ,c\rangle Y\right), \end{aligned}$$

which contradicts the formula

$$\begin{aligned} \fancyscript{L}\left(\langle cW^{-1}_{\nu },c\rangle \right)=\fancyscript{L}\left(\langle c\Sigma ^{-1}, c\rangle \frac{1}{\chi ^2_{\nu -d+1}}\right) \end{aligned}$$

in [9].

Unfortunately, the last formula was used in [6, Example 3].

From (7.11) we easily find that

$$\begin{aligned} \int \limits _{R^d}e^{i\langle z,x\rangle }T_d(\nu ,\Sigma ,\alpha )(\text{d}x)&= \frac{e^{i\langle z,\alpha \rangle }}{2^{\frac{\nu }{2}-1}\Gamma \left(\frac{\nu }{2}\right)}\left(\nu \langle z\Sigma ,z\rangle \right)^{\frac{\nu }{4}}\\\quad&\times K_{\frac{\nu }{2}}\left(\sqrt{\nu \langle z\Sigma ,z\rangle }\right), \quad z\in R^d, \end{aligned}$$

(see [10, 11]).

7.2 Long-Range Dependent Stationary Student Processes

It is well known (see, e.g., [12]) that a real square integrable and continuous in quadratic mean stochastic process \(X=\left\{ X_t,t\in R^1\right\} \) is second order stationary if and only if it has the following spectral decomposition:

$$\begin{aligned} X_t=\alpha +\int \limits ^{\infty }_{-\infty }\cos {(\lambda t)}v(\text{d}\lambda )+\int \limits ^{\infty }_{-\infty }\sin {(\lambda t)}w(\text{d}\lambda ), \quad t\in R^1, \end{aligned}$$

where \(\alpha =\mathrm E X_0\), \(v(\text{d}\lambda )\) and \(w(\text{d}\lambda )\) are mean 0 and square integrable real random measures such that, for each \(A, A_1, A_2\in \fancyscript{B}(R^1)\),

$$\begin{aligned} \mathrm E \left[v(A_1)v(A_2)\right]=\mathrm E v^2(A_1\cap A_2), \end{aligned}$$
(7.12)
$$\begin{aligned} \mathrm E \left[w(A_1)w(A_2)\right]=\mathrm E w^2(A_1\cap A_2), \end{aligned}$$
(7.13)
$$\begin{aligned} \mathrm E \left[v(A_1)w(A_2)\right]=0, \end{aligned}$$
(7.14)
$$\begin{aligned} \tilde{F}(A):=\mathrm E v^2(A)=\mathrm E w^2(A). \end{aligned}$$
(7.15)

The correlation function \(r\) satisfies

$$\begin{aligned} r(t)=\int \limits ^{\infty }_{-\infty }\cos {(\lambda t)}F(\text{d}\lambda ), \end{aligned}$$

where

$$\begin{aligned} F(A)=\frac{\tilde{F}(A)}{\tilde{F}(R^1)}, \quad A\in \fancyscript{B}(R^1). \end{aligned}$$

Following [13], we shall construct a class of strictly stationary stochastic processes \(X=\left\{ X_t, t\in R^1\right\} \) such that

$$\begin{aligned} \fancyscript{L}(X_t)\equiv T_1\left(\nu ,\sigma ^2,\alpha \right), \quad \nu >2, \end{aligned}$$

called Student stationary processes.

Recall the notion and some properties of independently scattered random measures (i.s.r.m.) (see [13–15]).

Let \(T\in \fancyscript{B}(R^d)\) and let \(\fancyscript{S}\) be a \(\sigma \)-ring of subsets of \(T\) (i.e. countable unions of sets in \(\fancyscript{S}\) belong to \(\fancyscript{S}\) and, if \(A,B\in \fancyscript{S}\), \(A\subset B\), then \(B\backslash A\in \fancyscript{S}\)). The \(\sigma \)-algebra generated by \(\fancyscript{S}\) is denoted by \(\sigma (\fancyscript{S})\).

A collection of random variables \(v=\left\{ v(A),A\in \fancyscript{S}\right\} \) defined on a probability space \(\left(\Omega ,\fancyscript{F},\mathrm P \right)\) is said to be an i.s.r.m. if, for every sequence \(\left\{ A_n,n\ge 1\right\} \) of disjoint sets in \(\fancyscript{S}\), the random variables \(v(A_n)\), \(n=1,2,\ldots \), are independent and

$$\begin{aligned} v\left(\bigcup ^{\infty }_{n=1}A_n\right)=\sum ^{\infty }_{n=1}v(A_n) \quad a.s., \end{aligned}$$

whenever \(\bigcup\nolimits ^{\infty }_{n=1}\,A_n\in \fancyscript{S}\).

Let \(v(A)\), \(A\in \fancyscript{S}\), be infinitely divisible,

$$\begin{aligned} \log \mathrm E \,e^{izv(A)}=izm_0(A)-\frac{1}{2}z^2m_1(A)+\int \limits _{R^1_0}\left(e^{izu}-1-iz\tau (u)\right)\Pi (A,\text{d}u), \end{aligned}$$

where \(m_0\) is a signed measure, \(m_1\) is a nonnegative measure, and \(\Pi (A,\text{d}u)\), for fixed \(A\), is a measure on \(\fancyscript{B}(R^1_0)\) such that

$$\begin{aligned} \int \limits _{R^1_0}\left(1\wedge u^2\right)\Pi (A,\text{d}u)<\infty ; \end{aligned}$$
$$\begin{aligned} \tau (u)=\left\{ \begin{array}{l} u, \quad \text {if}\, |u|\le 1, \\ \dfrac{u}{|u|}, \quad \text {if}\, |u|>1. \end{array} \right. \end{aligned}$$

Assume now that \(m_0=m_1=0\) and

$$\begin{aligned} \Pi (A,\text{d}u)=M(A)\Pi (\text{d}u), \end{aligned}$$

where \(M(A)\) is some measure on \(T\) and \(\Pi (\text{d}u)\) is some Lévy measure on \(R^1_0\).

Integration of functions on \(T\) with respect to \(v\) is defined first for real simple functions \(f=\sum\nolimits ^{n}_{j=1}x_j1_{A_j}\), \(A_j\in \fancyscript{S}\), \(j=1,\ldots ,n\), by

$$\begin{aligned} \int \limits _{A}f(x)v(\text{d}x)=\sum ^{n}_{j=1}x_jv(A\cap A_j), \end{aligned}$$

where A is any subset of \(T\), for which \(A\in \sigma (\fancyscript{S})\) and \(A\cap A_j\in \fancyscript{S}\), \(j=1,\ldots ,n\).

In general, a function \(f{:}\left(T,\sigma (\fancyscript{S})\right)\rightarrow \left(R^1,\fancyscript{B}(R^1)\right)\) is said to be \(v\)-integrable if there exists a sequence \(\left\{ f_n, n=1,2,\ldots \right\} \) of simple functions as above such that \(f_n\rightarrow f\) \(M\)-a.e. and, for every \(A\in \sigma (\fancyscript{S})\), the sequence \(\left\{ \int _{A}f_n(x)v(\text{d}x),\right.\) \(\left.n=1,2,\ldots \right\} \) converges in probability, as \(n\rightarrow \infty \). If \(f\) is \(v\)-integrable, we write

$$\begin{aligned} \int \limits _{A}f(x)v(\text{d}x)=p-\lim _{n\rightarrow \infty }\int \limits _{A}f_n(x)v(\text{d}x). \end{aligned}$$

The integral \(\int _{A}f(x)v(\text{d}x)\) does not depend on the approximating sequence.

A function \(f\) on \(T\) is \(v\)-integrable if and only if

$$\begin{aligned} \int \limits _{T}Z_0\left(f(x)\right)M(\text{d}x)<\infty \end{aligned}$$

and

$$\begin{aligned} \int \limits _{T}\left|Z\left(f(x)\right)\right|M(\text{d}x)<\infty , \end{aligned}$$

where

$$\begin{aligned} Z_0(y)=\int \limits _{R^1_0}\left(1\wedge (uy)^2\right)\Pi (\text{d}u), \end{aligned}$$

and

$$\begin{aligned} Z(y)=\int \limits _{R^1_0}\left(\tau (uy)-y\tau (u)\right)\Pi (\text{d}u). \end{aligned}$$

For such functions \(f\)

$$\begin{aligned} \log \mathrm E \exp \left\{ i\xi \int \limits _{A}f(x)v(\text{d}x)\right\} =\int \limits _{A}\varkappa \left(\xi f(x)\right)M(\text{d}x), \end{aligned}$$

where

$$\begin{aligned} \varkappa (\xi )=\int \limits _{R^1_0}\left(e^{i\xi u}-1-i\xi \tau (u)\right)\Pi (\text{d}u). \end{aligned}$$
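The cumulant formula above can be illustrated with the simplest nontrivial choice of \(\Pi\): a point mass, which makes \(v\) a compensated Poisson i.s.r.m. The Python sketch below (all numerical values are illustrative) evaluates \(\int_A\varkappa(\xi f(x))M(\text{d}x)\) for a simple function \(f\) and checks it against an empirical characteristic function obtained by simulating the measure directly.

```python
import cmath

import numpy as np

rng = np.random.default_rng(1)

lam, u0 = 3.0, 1.0  # Levy measure Pi = lam * delta_{u0}; |u0| <= 1, so tau(u0) = u0
xi = 0.7

# Simple function f = 2 on A_1 = (0, 1/2], f = -1 on A_2 = (1/2, 1]; M = Lebesgue.
parts = [(2.0, 0.5), (-1.0, 0.5)]  # pairs (value x_j, mass M(A_j))


def kappa(s):
    """kappa(s) = int (e^{isu} - 1 - i s tau(u)) Pi(du) for Pi = lam * delta_{u0}."""
    return lam * (cmath.exp(1j * s * u0) - 1 - 1j * s * u0)


# Predicted log-characteristic function of int f dv: sum_j kappa(xi x_j) M(A_j).
log_cf = sum(kappa(xi * x) * m for x, m in parts)

# Direct simulation: v(A_j) = u0 * (Poisson(lam M(A_j)) - lam M(A_j)), independently.
N = 400_000
integral = sum(
    x * u0 * (rng.poisson(lam * m, size=N) - lam * m) for x, m in parts
)
emp_cf = np.mean(np.exp(1j * xi * integral))
```

The empirical characteristic function matches \(\exp\{\log\text{-cf}\}\) up to Monte Carlo error.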

Let now \(Y_t=\left(Y^1_t,Y^2_t\right)\), \(t\ge 0\), be a bivariate Student-Lévy process such that

$$\begin{aligned} \fancyscript{L}(Y_1)=T_2(\nu ,\sigma ^2I_2,0), \quad I_2=\left(\begin{array}{cc} 1&0 \\ 0&1 \end{array}\right), \end{aligned}$$

and \(F\) be an arbitrary probability distribution on \(R^1\).

Let \(T=R^1\) and let \(\fancyscript{S}\) be the \(\sigma \)-ring of subsets \(A=\bigcup\nolimits ^{\infty }_{j=1}\left(a_j,b_j\right]\), where the intervals \(\left(a_j,b_j\right]\), \(j=1,2,\ldots \), are disjoint. Define i.s.r.m. \(v\) and \(w\) by the equalities:

$$\begin{aligned} v(A)=\sum ^{\infty }_{j=1}\left(Y^1_{F(b_j)}-Y^1_{F(a_j)}\right) \end{aligned}$$

and

$$\begin{aligned} w(A)=\sum ^{\infty }_{j=1}\left(Y^2_{F(b_j)}-Y^2_{F(a_j)}\right), \quad A=\bigcup ^{\infty }_{j=1}\left(a_j,b_j\right]\in \fancyscript{S}. \end{aligned}$$

Because, for \(i=1,2,\) \(j=1,2,\ldots \), \(\nu >2\),

$$\begin{aligned} \mathrm E (Y^{i}_{F(b_j)}-Y^i_{F(a_j)})=0, \end{aligned}$$
$$\begin{aligned} \mathrm E (Y^{i}_{F(b_j)}-Y^{i}_{F(a_j)})^2=\frac{\sigma ^2\nu}{\nu -2}\left(F(b_j)-F(a_j)\right) \end{aligned}$$

and

$$\begin{aligned} \sum ^{\infty }_{j=1}\mathrm E (Y^i_{F(b_j)}-Y^i_{F(a_j)})^2\le \frac{\sigma ^2\nu}{\nu -2}<\infty , \end{aligned}$$

the definition of \(v\) and \(w\) is correct.

From (7.10) it follows that \(v\) and \(w\) satisfy (7.12)–(7.15) with

$$\begin{aligned} \tilde{F}(A)=\frac{\sigma ^2\nu}{\nu -2}F(A), \quad A\in \fancyscript{S}. \end{aligned}$$

Thus, the process

$$\begin{aligned} X_t=\alpha +\int \limits ^{\infty }_{-\infty }\cos (ut)v(\text{d}u)+\int \limits ^{\infty }_{-\infty }\sin {(ut)}w(\text{d}u), \quad t\in R^1, \end{aligned}$$

is well defined, strictly stationary,

$$\begin{aligned} \fancyscript{L}(X_t)\equiv T_1(\nu ,\sigma ^2,\alpha ) \end{aligned}$$

and the correlation function \(r\) satisfies

$$\begin{aligned} r(t)=\int \limits ^{\infty }_{-\infty }\cos {(ut)}F(\text{d}u), \quad t\in R^1. \end{aligned}$$

Strict stationarity of \(X\) follows from the formula (see [13]):

$$\begin{aligned} \mathrm E e^{i \sum \limits ^{n}_{j=1}\eta _jX_{t_j}}&= e^{i\alpha \sum \limits ^{n}_{j=1}\eta _j}\\&\quad\times \exp \left\{ \int \limits ^{\infty }_{-\infty }\log {\hat{h}_{\nu ,\sigma }}\left(\frac{1}{2}\sum ^{n}_{j,k=1}\eta _j\eta _k\cos \left(u(t_j-t_k)\right)\right)F(\text{d}u)\right\} , \\&\quad\eta _j, t_j\in R^1, \quad j=1,\ldots ,n, \end{aligned}$$

where

$$\begin{aligned} \hat{h}_{\nu ,\sigma }(\theta )&:= \int \limits ^{\infty }_{0}e^{-\theta u}\frac{1}{\sigma ^2}gig\left(\frac{u}{\sigma ^2};-\frac{\nu }{2},\nu ,0\right)\text{d}u\\&= \frac{2}{\Gamma\left(\frac{\nu}{2}\right)}\left(\frac{\theta \sigma ^2 \nu }{2}\right)^{\frac{\nu }{4}}K_{\frac{\nu }{2}}\left(\sqrt{2\sigma ^2 \theta \nu }\right), \quad \theta >0. \end{aligned}$$

As was checked in [16], if

$$\begin{aligned} F(\text{d}u)=f_{\beta ,\gamma }(u)\text{d}u, \quad 0<\beta \le 1, \quad \gamma \in R^1, \end{aligned}$$

where

$$\begin{aligned} f_{\beta ,\gamma }(u)=\frac{1}{2}\left[f_{\beta ,0}(u+\gamma )+f_{\beta ,0}(u-\gamma )\right], \quad u\in R^1, \end{aligned}$$

with

$$\begin{aligned} f_{\beta ,0}(u)=\frac{2^{\frac{1-\beta }{2}}}{\sqrt{\pi }\Gamma \left(\frac{\beta }{2}\right)}K_{\frac{1-\beta }{2}}\left(|u|\right)|u|^{\frac{\beta -1}{2}}, \end{aligned}$$

then

$$\begin{aligned} r(t)=\frac{\cos {\gamma t}}{(1+t^2)^{\frac{\beta }{2}}}, \quad t\in R^1, \end{aligned}$$

and

$$\begin{aligned} \int \limits ^{\infty }_{-\infty }\left|r(t)\right|\text{d}t=\infty , \end{aligned}$$

implying long-range dependence of \(X\) (see also [17–20]).
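The divergence of \(\int|r(t)|\,\text{d}t\) is easy to see numerically: for \(0<\beta<1\) the cumulative integral of \(|r|\) over \([0,T]\) grows like \(T^{1-\beta}\). A short Python sketch (the values of \(\beta\) and \(\gamma\) are illustrative):

```python
import numpy as np

beta, gamma = 0.5, 1.0

# |r(t)| = |cos(gamma t)| / (1 + t^2)^{beta/2} on a fine grid.
t = np.arange(0.0, 10_000.0, 0.01)
absr = np.abs(np.cos(gamma * t)) / (1.0 + t * t) ** (beta / 2)

# Cumulative integral of |r| over [0, T] (rectangle rule); for beta = 1/2
# it grows like sqrt(T), so it diverges as T -> infinity.
I = np.cumsum(absr) * (t[1] - t[0])
T = np.array([100.0, 1_000.0, 10_000.0])
vals = np.interp(T, t, I)
```

Each tenfold increase of \(T\) multiplies the integral by roughly \(10^{1-\beta}\approx 3.16\), confirming that it does not converge.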

Remark 7.3

Defining the Student-Lamperti process \(X^{\star }\) (see [21]) by

$$\begin{aligned} X^{\star }_t=t^HX_{\log t}, \quad t>0, \quad X^{\star }_0=0, \quad H>0, \end{aligned}$$

we have that \(X^{\star }\) is \(H\)-self-similar, i.e., for each \(c>0\), the processes \(\left\{ X^{\star }_{ct}, t\ge 0\right\} \) and \(\left\{ c^{H}X^{\star }_t,t\ge 0\right\} \) have the same finite-dimensional distributions, and (see [13])

$$\begin{aligned} \mathrm E e^{i\sum \limits ^{n}_{j=1}\eta _jX^{\star }_{t_j}}&= e^{i\alpha \sum \limits ^{n}_{j=1}t^H_j\eta _j}\\&\quad\times \exp \!\left\{ \!\int \limits ^{\infty }_{-\infty }\!\left[\log {\hat{h}_{\nu ,\sigma }}\!\left(\!\frac{1}{2}\sum ^{n}_{j,k=1}\eta _j\eta _k t^H_jt^H_k\cos \!\left(\!u\log \frac{t_j}{t_k}\!\right)\!\right)\!\right]\!F(\text{d}u)\!\right\} \!,\\&\quad t_j>0, \quad \eta _j\in R^1, \quad j=1,\ldots ,n. \end{aligned}$$

In particular,

$$\begin{aligned} \mathrm E e^{i\eta X^{\star }_t}=e^{i\alpha t^{H}\eta }\hat{h}_{\nu ,\sigma }\left(t^{2H}\frac{\eta ^2}{2}\right), \quad t>0, \quad \eta \in R^1, \end{aligned}$$

and

$$\begin{aligned} \mathrm E e^{i\eta \left(X^{\star }_t-X^{\star }_s\right)}&= e^{i\alpha \left(t^H-s^H\right)\eta }\exp \left\{ \int \limits ^{\infty }_{-\infty }\log \hat{h}_{\nu ,\sigma }\left(\frac{1}{2}\eta ^2\left(s^{2H}+t^{2H}\right.\right.\right.\\&\quad \left.\left.\left.-2s^Ht^H\cos \left(u\log \frac{t}{s}\right)\right)\right)F(\text{d}u)\right\} , \quad s,t>0, \quad \eta \in R^1. \end{aligned}$$
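At the level of second moments (for \(\nu>2\) and \(\alpha=0\)) the Lamperti self-similarity is visible directly in the covariance \(K(s,t)=\mathrm{Cov}(X^{\star}_s,X^{\star}_t)=s^Ht^H\,\mathrm{Var}(X_0)\,r(\log t-\log s)\), which satisfies \(K(cs,ct)=c^{2H}K(s,t)\) because \(r\) depends only on \(\log(t/s)\). A deterministic Python check (all parameter values below are illustrative):

```python
import numpy as np

H, beta, gam = 0.3, 0.5, 1.0   # illustrative Hurst-type exponent and r-parameters
var0 = 2.5                     # illustrative value of Var(X_t) = sigma^2 nu/(nu - 2)


def r(t):
    """Correlation function r(t) = cos(gamma t) / (1 + t^2)^{beta/2}."""
    return np.cos(gam * t) / (1.0 + t * t) ** (beta / 2)


def K(s, t):
    """Cov(X*_s, X*_t) = s^H t^H var0 r(log t - log s) for the Lamperti transform."""
    return (s ** H) * (t ** H) * var0 * r(np.log(t) - np.log(s))


s, t, c = 0.7, 3.0, 5.0
lhs = K(c * s, c * t)          # covariance after rescaling time by c
rhs = c ** (2 * H) * K(s, t)   # H-self-similar scaling of the original covariance
```

The two sides agree to machine precision for any \(s,t,c>0\).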

7.3 Lévy Copulas

Considering probability distributions \(F\) on \(R^d\) with one-dimensional Student's \(t\) marginals \(F_j\), \(j=1,\ldots ,d\), and having in mind their relationship with stochastic processes, we have restricted ourselves to the cases when \(F\) is a mixture of d-dimensional Gaussian distributions.

Denoting

$$\begin{aligned} C(u_1,\ldots ,u_d):=F\left(F^{-1}_1(u_1),\ldots ,F^{-1}_d(u_d)\right), \quad u_j\in [0, 1], \quad j=1,\ldots ,d, \end{aligned}$$

it is obvious that this function is a probability distribution function on the d-cube \([0,1]^{d}\) with uniform one-dimensional marginals, called a d-copula (see, e.g., [22]). Trivially,

$$\begin{aligned} F(x_1,\ldots ,x_d)=C\left(F_1 (x_1),\ldots ,F_d(x_d)\right), \quad (x_1,\dots ,x_d)\in R^d. \end{aligned}$$
(7.16)

Formula (7.16) with an arbitrary d-copula uniquely defines a probability distribution on \(R^d\) with the given Student's one-dimensional marginals. These statements are very special cases of the well-known Sklar theorem (see [23, 24]).

Thus, taking concrete d-copulas we shall obtain a wide class of multivariate generalizations of Student’s \(t\)-distributions.

For instance, the Archimedean copulas have the form

$$\begin{aligned} C(u_1,\ldots ,u_d)=\psi \left(\psi ^{-1}(u_1)+\cdots +\psi ^{-1}(u_d)\right), \quad u_j\in [0,1], \quad j=1,\ldots ,d, \end{aligned}$$

where \(\psi \) is a d-monotone function on \([0,\infty )\), i.e., for each \(x\ge 0\) and \(k=0,1,\ldots ,d-2\),

$$\begin{aligned} (-1)^k\frac{\text{d}^k}{\text{d}x^k}\psi (x)\ge 0, \end{aligned}$$

and \((-1)^{d-2}\psi ^{(d-2)}(x)\), \(x\ge 0\), is a nonincreasing and convex function.

In particular, if

$$\begin{aligned} \psi (x)=(1+x)^{-\frac{1}{\theta }}, \quad \theta \in (0,\infty ), \quad x\ge 0, \end{aligned}$$

we obtain the Clayton copula

$$\begin{aligned} C(u_1,\ldots ,u_d)=\left(\sum ^{d}_{j=1}u^{-\theta }_j-d+1\right)^{-\frac{1}{\theta }}, \quad u_j\in [0,1], \quad j=1,\ldots ,d. \end{aligned}$$

If \(\psi (x)=\exp \left\{ -x^{\frac{1}{\theta }}\right\} \), \(\theta \ge 1\), \(x\ge 0\), we obtain the Gumbel copula

$$\begin{aligned} C(u_1,\ldots ,u_d)=\exp \left\{ -\left(\sum ^d_{j=1}\left(-\log u_j\right)^{\theta }\right)^{\frac{1}{\theta }}\right\} , \quad u_j\in [0, 1], \quad j=1,\ldots ,d. \end{aligned}$$
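Both families, together with the construction (7.16), are straightforward to implement. The Python sketch below (degrees of freedom and \(\theta\) are illustrative, and `scipy.stats.t` supplies the one-dimensional Student marginal) builds the Clayton and Gumbel copulas and a bivariate distribution function with \(t\) marginals; setting all but one argument to 1 (respectively \(+\infty\)) recovers the remaining uniform (respectively \(t\)) marginal.

```python
import math

from scipy.stats import t as student_t


def clayton(u, theta):
    """Clayton d-copula: (sum_j u_j^{-theta} - d + 1)^{-1/theta}."""
    return (sum(x ** (-theta) for x in u) - len(u) + 1) ** (-1.0 / theta)


def gumbel(u, theta):
    """Gumbel d-copula: exp(-(sum_j (-log u_j)^theta)^{1/theta})."""
    return math.exp(-sum((-math.log(x)) ** theta for x in u) ** (1.0 / theta))


def t_clayton_cdf(x, nu, theta):
    """F(x_1,...,x_d) = C(F_1(x_1),...,F_d(x_d)) as in (7.16), Student t marginals."""
    return clayton([student_t.cdf(xi, nu) for xi in x], theta)
```

For example, \(C(u,1)=u\) for both copulas, and \(F(x,+\infty)\) reduces to the one-dimensional \(t\) distribution function.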

Unfortunately, it is difficult to determine whether the copula construction preserves properties of the marginal distributions that are important for us, such as infinite divisibility or self-decomposability.

A promising direction for future work is the notion of Lévy copulas and, analogously to the classical copulas, the construction of new Lévy measures on \(R^d\) from marginal ones (see [25–28]). Following [28], we briefly describe an analogue of Sklar's theorem in this context.

Let \(\bar{R}:=\left(-\infty ,\infty \right]\). For \(a,b\in \bar{R}^d\) we write \(a\le b\), if \(a_k\le b_k\), \(k=1,\ldots ,d\) and, in this case, denote

$$\begin{aligned} \left(a,b\right]:=\left(a_1,b_1\right]\times \ldots \times \left(a_d,b_d\right]. \end{aligned}$$

Let \(F:S\rightarrow \bar{R}\) for some subset \(S\subset \bar{R}^d\). For \(a,b\in S\) with \(a\le b\) and \(\overline{(a,b]}\subset S\), the \(F\)-volume of \((a,b]\) is defined by

$$\begin{aligned} V_F\left((a,b]\right):=\sum _{u\in \{a_1,b_1\}\times \cdots \times \{a_d,b_d\}} (-1)^{N(u)}F(u), \end{aligned}$$

where \(N(u):=\sharp \{k:u_k=a_k\}\).

A function \(F:S\rightarrow \bar{R}\) is called d-increasing if \(V_F\left((a,b]\right)\ge 0\) for all \(a,b\in S\) with \(a\le b\) and \(\overline{(a,b]}\subset S\).
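The \(F\)-volume is a signed inclusion-exclusion sum over the \(2^d\) corners of the box. A minimal Python sketch (the function name and the test function are illustrative), assuming \(a<b\) componentwise so that corner membership is unambiguous:

```python
from itertools import product


def f_volume(F, a, b):
    """V_F((a, b]) = sum over corners u of (-1)^{N(u)} F(u), N(u) = #{k: u_k = a_k}.

    Assumes a_k < b_k strictly for every coordinate k."""
    d = len(a)
    total = 0.0
    for corner in product(*zip(a, b)):  # each coordinate u_k is either a_k or b_k
        n_lower = sum(1 for k in range(d) if corner[k] == a[k])
        total += (-1) ** n_lower * F(corner)
    return total
```

For the product function \(F(u)=u_1u_2\) the volume of \((a,b]\) is \((b_1-a_1)(b_2-a_2)\), so \(F\) is 2-increasing, as expected.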

Definition 7.4

Let \(F:\bar{R}^d\rightarrow \bar{R}\) be a d-increasing function such that \(F(u_1,\ldots , u_d)=0\) if \(u_i=0\) for at least one \(i\in \{1,\ldots ,d\}\). For any non-empty index set \(I\subset \{1,\ldots ,d\}\) the \(I\)-marginal of \(F\) is the function \(F^I:\bar{R}^{|I|}\rightarrow \bar{R}\), defined by

$$\begin{aligned} F^I\left((u_i)_{i\in I}\right):=\lim _{a\rightarrow \infty }\sum _{(u_i)_{i\in I^c}\in \{-a,\infty \}^{|I^c|}}F(u_1,\ldots ,u_d)\prod _{i\in I^c}\mathrm sgn u_i, \end{aligned}$$

where \(I^c=\{1,\ldots ,d\}\backslash I\), \(|I|:=\mathrm card I\), and

$$\begin{aligned} \mathrm sgn x=\left\{ \begin{array}{l} 1, \quad \text {if}\, x\ge 0, \\ -1, \quad \text {if}\, x<0. \end{array} \right. \end{aligned}$$

Definition 7.5

A function \(F:\bar{R}^d\rightarrow \bar{R}\) is called a Lévy copula if

  1. 1.

    \(F(u_1,\ldots ,u_d)\ne \infty \) for \((u_1,\ldots ,u_d)\ne (\infty ,\ldots ,\infty )\),

  2. 2.

    \(F(u_1,\ldots ,u_d)=0\) if \(u_i=0\) for at least one \(i\in \{1,\ldots ,d\}\),

  3. 3.

    \(F\) is d-increasing,

  4. 4.

    \(F^{\{i\}}(u)=u\) for any \(i\in \{1,\ldots ,d\}\), \(u\in R^1\).

Write

$$\begin{aligned} \fancyscript{I}(x):=\left\{ \begin{array}{l} (x,\infty ), \quad \text {if}\, x\le 0, \\ (-\infty ,x], \quad \text {if}\, x>0. \end{array} \right. \end{aligned}$$

Definition 7.6

Let \(X=(X^1,\ldots ,X^d)\) be an \(R^d\)-valued Lévy process with the Lévy measure \(\Pi \). The tail integral of \(X\) is the function \(V:\left(R^1\backslash \{0\}\right)^d\rightarrow R^1\) defined by

$$\begin{aligned} V(x_1,\ldots ,x_d):=\prod ^d_{i=1}\mathrm sgn (x_i)\Pi \left(\fancyscript{I}(x_1)\times \cdots \times \fancyscript{I}(x_d)\right) \end{aligned}$$

and, for any non-empty \(I\subset \{1,\ldots ,d\}\) the \(I\)-marginal tail integral \(V^I\) of \(X\) is the tail integral of the process \(X^I:=(X^i)_{i\in I}\).

We denote one-dimensional margins by \(V_i:=V^{\{i\}}\).

Observe that the marginal tail integrals \(\left\{ V^I:I\subset \{1,\ldots ,d\}\ \text{non-empty}\right\} \) are uniquely determined by \(\Pi \). Conversely, \(\Pi \) is uniquely determined by the set of its marginal tail integrals.

The relationship between Lévy copulas and Lévy processes is described by the following analogue of Sklar's theorem.

Theorem 7.7

[28]

  1. 1.

    Let \(X=\left(X^1, \ldots ,X^d\right)\) be an \(R^d\)-valued Lévy process. Then there exists a Lévy copula \(F\) such that the tail integrals of \(X\) satisfy

    $$\begin{aligned} V\left((x_i)_{i\in I}\right)=F^I\left(\left(V_i (x_i)\right)_{i\in I}\right), \end{aligned}$$
    (7.17)

    for any non-empty \(I\subset \{1,\ldots ,d\}\) and any \((x_i)_{i\in I}\in \left(R^1\backslash \{0\}\right)^{|I|}\). The Lévy copula \(F\) is unique on \(\mathrm Ran V_1\times \cdots \times \mathrm Ran V_d\).

  2. 2.

    Let \(F\) be a d-dimensional Lévy copula and \(V_i\), \(i=1,\ldots , d\), be tail integrals of real-valued Lévy processes. Then there exists an \(R^d\)-valued Lévy process \(X\) whose components have tail integrals \(V_1,\ldots ,V_d\) and whose marginal tail integrals satisfy (7.17) for any non-empty \(I\subset \{1,\ldots ,d\}\) and any \((x_i)_{i\in I}\in \left(R^1\backslash \{0\}\right)^{|I|}\). The Lévy measure \(\Pi \) of \(X\) is uniquely determined by \(F\) and \(V_i\), \(i=1, \ldots , d\).

In the above formulation, \(\mathrm Ran V\) denotes the range of \(V\). The reader is referred to [28] for proofs.

An analogue of the Archimedean copulas is as follows (see [28]).

Let \(\varphi :[-1,1]\rightarrow [-\infty ,\infty ]\) be a strictly increasing continuous function with \(\varphi (1)=\infty \), \(\varphi (0)=0\), and \(\varphi (-1)=-\infty \), having derivatives of orders up to d on \((-1,0)\) and \((0,1)\), and, for any \(k=1,\ldots ,d\), satisfying

$$\begin{aligned} \frac{\text{d}^{k}\varphi (u)}{\text{d}u^k}\ge 0, \quad u\in (0,1) \quad \text {and} \quad (-1)^k\frac{\text{d}^k\varphi (u)}{\text{d}u^k}\le 0, \quad u\in (-1,0). \end{aligned}$$

Let

$$\begin{aligned} \tilde{\varphi }(u):=2^{d-2}\left(\varphi (u)-\varphi (-u)\right), \quad u\in [-1,1]. \end{aligned}$$

Then

$$\begin{aligned} F(u_1,\ldots ,u_d):=\varphi \left(\prod ^{d}_{i=1}\tilde{\varphi }^{-1}(u_i)\right) \end{aligned}$$

defines a Lévy copula.

In particular, if

$$\begin{aligned} \varphi (x):=\eta \left(-\log |x|\right)^{-\frac{1}{\vartheta }}1_{\{x>0\}}-(1-\eta )\left(-\log |x|\right)^{-\frac{1}{\vartheta }}1_{\{x<0\}} \end{aligned}$$

with \(\vartheta >0\) and \(\eta \in (0,1)\), then

$$\begin{aligned} \tilde{\varphi }(x)=2^{d-2}\left(-\log |x|\right)^{-\frac{1}{\vartheta }}\mathrm sgn x, \quad x\in [-1,1], \end{aligned}$$

and

$$\begin{aligned} F(u_1,\ldots ,u_d)=2^{2-d}\left(\sum ^d_{i=1}|u_i|^{-\vartheta }\right)^{-\frac{1}{\vartheta }}(\eta 1_{\{u_1\ldots u_d\ge 0\}}-(1-\eta )1_{\{u_1\ldots u_d<0\}}), \end{aligned}$$

resembling the ordinary Clayton copula.
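This Clayton-type Lévy copula is simple to implement and to sanity-check against the defining properties: in particular, its one-dimensional margins in the sense of Definition 7.4 must be the identity. A Python sketch for \(d=2\) (the default parameter values are illustrative; the margin limit \(u_2\to\pm\infty\) is approximated by a large finite value):

```python
def clayton_levy(u, theta=1.0, eta=0.5):
    """Clayton-type Levy copula:
    2^{2-d} (sum_i |u_i|^{-theta})^{-1/theta} * (eta 1{prod u_i >= 0} - (1-eta) 1{prod u_i < 0}).

    Defined for u_i != 0 (the Levy copula vanishes when some u_i = 0)."""
    d = len(u)
    prod = 1.0
    for x in u:
        prod *= x
    body = 2 ** (2 - d) * sum(abs(x) ** (-theta) for x in u) ** (-1.0 / theta)
    return body * (eta if prod >= 0 else -(1.0 - eta))


# Margin F^{1}(u) per Definition 7.4, with infinity replaced by a large cutoff:
# F^{1}(u) ~ F(u, big) * sgn(big) + F(u, -big) * sgn(-big) -> u.
big = 1e12
margin = clayton_levy((0.4, big)) - clayton_levy((0.4, -big))
```

The approximate margin equals \(0.4\) up to the cutoff error, and the sign factor splits mass between same-sign and opposite-sign jumps according to \(\eta\).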