1 INTRODUCTION

After the publication of Mandelbrot’s papers [1, 2], stochastic processes with continuous trajectories and self-similarity properties acquired not only a theoretical interest, but also important practical applications, especially in the fields of finance and telecommunications. The most prominent among self-similar stochastic processes is, certainly, the fractional Brownian motion (fBm), the Gaussian process with a zero initial value, stationary increments, and power-law growth \(t^{2H}\) of the variance, where the time t \( \geqslant \) 0. Here H ∈ (0, 1] is the so-called Hurst exponent. In this paper, we study the fBm for H ∈ (0, 1/2), i.e., when the function of accumulated variance is concave. In this case, near zero, the fBm is locally approximated by the fractional Ornstein–Uhlenbeck process (fOU). In this paper, the fOU is considered in the sense presented by Wolpert and Taqqu in [3], as a fractional derivative of the solution of the classical Langevin stochastic differential equation (SDE) (see, e.g., [4]). The classical Ornstein–Uhlenbeck process (OU), a stationary Gaussian–Markov process, is a stationary solution of the Langevin SDE. After Lamperti’s pioneering paper [5], where a nonrandom change of time, the so-called Lamperti transform, was used to find the direct relation between the Brownian motion and the OU process, it became possible to study Brownian motion via the OU process and vice versa. This relation was later extended to the fBm. However, in order to obtain the fBm by the Lamperti transform, it is necessary to consider another fOU process, the fOU in the sense of Barndorff-Nielsen [6, 7]. Note also one more extension of the OU process to the “fractional” case, when fBm increments are used in the Langevin SDE instead of ordinary Brownian motion increments (see, e.g., [8]). The fOU processes constructed in all the senses mentioned here are stationary and Gaussian.

The PSI-processes studied in this paper are obtained by subordinating the sequences (ξ) = (ξj),   j = 0, 1, …, of independent identically distributed random variables with a zero mean and finite variance to doubly stochastic Poisson processes. A more detailed description of the PSI-processes is presented in the next section of the paper.

We propose in this paper the method of construction of the fOU process in the sense of Wolpert and Taqqu, based on the approximation by the sums of independent identically distributed PSI-processes with a random intensity \({{\varrho }_{\kappa }}\) having a gamma-distribution with a random gamma-distributed scale (see formula (14) from Theorem in Section 3), or, which is equivalent, by the root of a reversed beta-distribution (see formula (27)).

This distribution of the random intensity \({{\varrho }_{\kappa }}\) makes it possible to obtain the required covariance of the PSI-process coinciding with the covariance of the fOU process in the Wolpert–Taqqu sense. The subsequent application of the central limit theorem for vectors establishes the weak convergence of finite-dimensional distributions and allows us to derive some asymptotic relations. As a step toward convergence results in functional spaces for the sums of PSI-processes with a random intensity, a lemma on the local modulus of continuity of PSI-processes is proved in this paper. The functional limit theorem for normalized sums of PSI-processes with a nonrandom intensity is proved in [9].

One of the simplest but already instructive examples of a PSI-process is the so-called telegraph process [10]. We consider the telegraph process as the basic trial process for the further study of the total modulus of continuity of PSI-processes, that is, the supremum of the local modulus of continuity taken over the whole time period of the process.

The calculation of the exact distribution of the local modulus of continuity for an arbitrary distribution of terms of the sequence (ξ) is a rather complicated and demanding problem. We propose a solution of this problem for two cases: the Rademacher distribution describing the telegraph process and the uniform distribution. At the end of the paper, we present the exact and the asymptotic values of the local modulus of continuity for a PSI-process with the intensity \({{\varrho }_{\kappa }}\) over a small fixed time span for these two types of distributions.

Let us recall the necessary definitions and properties of the standard fBm.

Definition 1. The standard fractional Brownian motion WH(t), t ∈ \({{\mathbb{R}}_{ + }}\), with the Hurst parameter (self-similarity index) H ∈ (0, 1] is defined as a Gaussian process starting at zero with zero mean and the following covariance function:

$$\mathbb{E}\{ {{W}_{H}}(s){{W}_{H}}(t)\} \;\mathop = \limits^\Delta \;{{R}_{H}}(s,t) = \frac{1}{2}({{s}^{{2H}}} + {{t}^{{2H}}} - {{\left| {t - s} \right|}^{{2H}}}),\quad s,t \in {{\mathbb{R}}_{ + }}.$$
(1)

The trajectories of the fBm processes are continuous and nowhere differentiable (except the degenerate case H = 1, when the trajectories are almost surely random rays in the right half-plane). The fBm processes are characterized by their stationary increments and, most importantly, by the power self-similarity property

$${{W}_{H}}(at)\;\mathop = \limits^d \;{{a}^{H}}{{W}_{H}}(t),\quad \forall a > 0,\quad t \in {{\mathbb{R}}_{ + }}.$$
(2)

The sign \(\mathop = \limits^d \) denotes the equality of the finite-dimensional distributions.

It is important that if the variance of a centered Gaussian process increases as \(t^{2H}\) and the process has stationary increments, then this process is necessarily the fBm with the Hurst parameter H ∈ (0, 1]. The following classification of cases is adopted: when H ∈ (0, 1/2) and the function of accumulated variance is strictly concave, the fBm increments are negatively correlated; when H = 1/2 and the function of accumulated variance is linear (standard Brownian motion), the increments are independent; when H ∈ (1/2, 1) and the function of accumulated variance is strictly convex, the fBm increments are positively correlated; and when H = 1, the trajectories are linear with probability one and the increment correlation is unity. Note that the Markovian property is fulfilled for the fBm only in the standard Brownian motion case, i.e., when H = 1/2.
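The classification above can be checked directly from covariance (1): by bilinearity, the covariance of two adjacent unit increments equals \(2^{2H-1} - 1\), which changes sign at H = 1/2. A minimal sketch (the function names and the choice of increments are ours, for illustration only):

```python
# Sign of the correlation of adjacent fBm increments, computed from the
# covariance R_H of formula (1); the H values below are illustrative.
def r_h(s: float, t: float, hurst: float) -> float:
    """Covariance R_H(s, t) of the standard fBm, formula (1)."""
    return 0.5 * (s**(2*hurst) + t**(2*hurst) - abs(t - s)**(2*hurst))

def increment_cov(hurst: float) -> float:
    """cov(W_H(1) - W_H(0), W_H(2) - W_H(1)) by bilinearity of the covariance."""
    return r_h(1, 2, hurst) - r_h(1, 1, hurst) - r_h(0, 2, hurst) + r_h(0, 1, hurst)

print(increment_cov(0.3))   # negative for H < 1/2
print(increment_cov(0.5))   # zero for standard Brownian motion
print(increment_cov(0.7))   # positive for H > 1/2
```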

2 PSI-PROCESSES: DEFINITION AND BASIC PROPERTIES

We describe the algorithm of subordination of a random sequence index by a doubly stochastic Poisson process used in this paper. Consider the standard Poisson process with a unit intensity Π1(t), t \( \geqslant \) 0. Let (ξ) = (ξn), n = 0, 1, …, be a certain sequence of random variables, and λ = λ(ω), ω ∈ Ω be a certain nonnegative random variable, where λ, (ξ), and Π1 are jointly independent.

Definition 2. We call the Poisson stochastic index process (PSI-process) a random process ψλ with continuous time obtained by randomization of the sequence (ξ) by a doubly stochastic Poisson process with a random intensity λ,

$$\psi (s) = {{\psi }_{\lambda }}(s)\;\mathop = \limits^\Delta \;{{\xi }_{{{{\Pi }_{1}}(s\lambda )}}},\quad s\; \geqslant \;0.$$
(3)

We call the Poisson process Π1 a leading process, and we call the sequence (ξ) a subordinate or a driven process.
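Definition 2 can be simulated directly: generate the jump times of Π1, draw λ and the sequence (ξ), and read off ψλ(s) = ξ indexed by Π1(sλ). A minimal sketch (the exponential intensity and the normal ξ's are illustrative choices, not prescribed by the paper):

```python
# Simulation sketch of formula (3): psi_lambda(s) = xi_{Pi_1(s * lambda)}.
import random

random.seed(0)

lam = random.expovariate(1.0)                          # random intensity lambda
xi = [random.gauss(0.0, 1.0) for _ in range(10_000)]   # driven sequence (xi)

# Jump times of the standard Poisson process Pi_1 on [0, 1000].
jump_times = []
t = random.expovariate(1.0)
while t < 1_000.0:
    jump_times.append(t)
    t += random.expovariate(1.0)

def psi(s: float) -> float:
    """psi_lambda(s) = xi_{Pi_1(s * lam)}, formula (3)."""
    n = sum(1 for u in jump_times if u <= s * lam)     # Pi_1(s * lam)
    return xi[n]

print(psi(0.0), psi(1.0), psi(2.5))
```

By construction the trajectory is piecewise constant and equals ξ0 until the first jump of the leading process.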

We consider the telegraph process [10] as a simple (but already nontrivial) example of a PSI-process. The telegraph process is canonically defined as:

$$g(s) = {{g}_{\mu }}(s)\;\mathop = \limits^\Delta \;c \cdot {{( - 1)}^{{{{\Pi }_{\mu }}(s)}}},\quad s\; \geqslant \;0,$$
(4)

where c is a random variable with the Rademacher distribution, which takes the values ±1 with the probabilities 1/2, independent of the Poisson process Πμ with the nonrandom intensity μ > 0. The telegraph process with a random positive intensity μ is defined naturally as well: Πμ(s) in (4) should be replaced with Π1(μs), where Π1 and μ are assumed independent.

According to the “coloring theorem” ([11], Ch. 5), the telegraph process has the distribution of a PSI-process, when the terms of the sequence (ξ) are independent and have the identical Rademacher distributions, while the intensity of the corresponding telegraph process is halved, i.e.,

$${{\xi }_{{{{\Pi }_{1}}(\lambda s)}}}\;\mathop = \limits^\mathcal{D} \;{{\xi }_{0}}{{( - 1)}^{{{{\Pi }_{1}}(\lambda s/2)}}},\quad s\; \geqslant \;0,$$
(5)

where the identity of distributions is understood in the Skorokhod space \(\mathcal{D}\) over \({{\mathbb{R}}_{ + }}\).
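Relation (5) can be checked by simulation: attach i.i.d. Rademacher marks to the jumps of a unit-intensity Poisson flow and count only those jumps at which the mark actually changes; by the coloring theorem these form a flow of intensity 1/2. A Monte Carlo sketch (the horizon and the seed are arbitrary):

```python
# Monte Carlo check of the intensity halving in (5): sign changes of
# xi_{Pi_1(s)} with Rademacher xi's form a Poisson flow of intensity 1/2.
import random

random.seed(1)
T = 2_000.0                      # time horizon (illustrative)
n_jumps = 0                      # jumps of the leading unit-intensity process
sign_changes = 0                 # jumps at which the Rademacher mark changes
prev = random.choice((-1, 1))    # xi_0
t = random.expovariate(1.0)
while t < T:
    n_jumps += 1
    cur = random.choice((-1, 1))
    if cur != prev:
        sign_changes += 1
    prev = cur
    t += random.expovariate(1.0)

print(n_jumps / T)        # close to 1   (intensity of the leading process)
print(sign_changes / T)   # close to 1/2 (intensity of the telegraph process)
```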

Definition 2 of the PSI-process implies that the random process ψ is strictly stationary, if the subordinate sequence (ξ) is strictly stationary.

We consider in this paper only the situation when the driven sequence (ξ) consists of independent identically distributed random variables. It was shown in [12] that in this case, when \(\mathbb{E}{{\xi }_{0}}\) = 0, \(\mathbb{D}{{\xi }_{0}}\) = 1, the process ψ has the covariance function

$$\operatorname{cov} ({{\psi }_{\lambda }}(s),{{\psi }_{\lambda }}(t)) = {{L}_{\lambda }}\left( {\left| {t - s} \right|} \right),\quad s,t\; \geqslant \;0,$$
(6)

where LX(t) = \(\mathbb{E}({{e}^{{ - Xt}}})\), t \( \geqslant \) 0, is the Laplace transform of the nonnegative random variable X.

In addition to the PSI-processes, we consider the properly normalized sums of their independent copies. We assume that all terms of the driven sequence have zero mean and unit variance.

Definition 3. The limiting PSI-process Ψλ is the process obtained as the limit, as N → ∞, of the sums of N ∈ \(\mathbb{N}\) independent copies of a PSI-process normalized by \(\sqrt N \):

$$\frac{{\psi _{{{{\lambda }_{1}}}}^{{(1)}}(t) + \ldots + \psi _{{{{\lambda }_{N}}}}^{{(N)}}(t)}}{{\sqrt N }} \Rightarrow {{\Psi }_{\lambda }}(t),\quad t\; \geqslant \;0.$$

Here \((\psi _{{{{\lambda }_{i}}}}^{{(i)}})\) are the independent copies of the process ψλ and the convergence ⇒ is understood in the sense of a weak convergence of finite-dimensional distributions.

Note that the processes of the \((\psi _{{{{\lambda }_{i}}}}^{{(i)}})\) type, i ∈ \(\mathbb{N}\), depend on the random intensities (λi) themselves, while the limiting PSI-process Ψλ depends on λ exclusively via its distribution.

Applying the central limit theorem for vectors and equality (6), one can see [12] that, under the assumptions of Definition 3, the limiting PSI-process exists and is a stationary Gaussian process with the covariance function

$$\operatorname{cov} ({{\Psi }_{\lambda }}(s),{{\Psi }_{\lambda }}(t)) = \operatorname{cov} ({{\psi }_{\lambda }}(s),{{\psi }_{\lambda }}(t)) = {{L}_{\lambda }}\left( {\left| {t - s} \right|} \right),\quad s,t\; \geqslant \;0.$$
(7)
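Covariance formulas (6)-(7) are easy to test numerically. For instance, if λ is standard exponential, then \({{L}_{\lambda }}(t) = 1/(1 + t)\), and the empirical covariance of a PSI-process at lag t should be close to this value. A Monte Carlo sketch with Rademacher ξ's (all parameter choices are illustrative):

```python
# Monte Carlo check of (6)-(7): for lambda ~ Exp(1), L_lambda(t) = 1/(1 + t).
import random

random.seed(2)

def psi_pair(t: float):
    """Return (psi_lambda(0), psi_lambda(t)) for one PSI-process copy."""
    lam = random.expovariate(1.0)
    k = 0                             # k = Pi_1(t * lam), counted jump by jump
    s = random.expovariate(1.0)
    while s <= t * lam:
        k += 1
        s += random.expovariate(1.0)
    xi0 = random.choice((-1, 1))
    # With i.i.d. Rademacher xi's, psi(t) = xi0 if no jump occurred, and an
    # independent fresh sign otherwise.
    xit = xi0 if k == 0 else random.choice((-1, 1))
    return xi0, xit

t = 0.7
n = 200_000
emp_cov = sum(x0 * xt for x0, xt in (psi_pair(t) for _ in range(n))) / n
print(emp_cov, 1 / (1 + t))   # the two values should nearly agree
```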

3 THE FRACTIONAL ORNSTEIN-UHLENBECK PROCESS IN THE WOLPERT–TAQQU SENSE AND THE CONVERGENCE OF PSI-PROCESSES TO IT

We consider a stationary fractional Ornstein–Uhlenbeck process in the Wolpert–Taqqu sense (fOUW–T) \(Z_{t}^{\kappa }\), t \( \geqslant \) 0, with the velocity parameter β > 0 and the scale parameter σ > 0, which is expressed in the form of the stochastic integral

$$Z_{t}^{\kappa } = \sigma \sqrt {2\beta } \int\limits_{ - \infty }^t {\frac{{{{\beta }^{{\kappa - 1}}}{{{(t - s)}}^{{\kappa - 1}}}}}{{\Gamma (\kappa )}}{{e}^{{ - \beta (t - s)}}}W(ds)} ,\quad \kappa > \frac{1}{2}.$$
(8)

Here W is a Gaussian measure with independent values, defined on the Borel subsets of \(\mathbb{R}\), whose structural measure is the Lebesgue measure. Clearly, \(Z_{t}^{\kappa }\) is a centered Gaussian process.

The process Zκ is defined in [3], where the following properties of the stochastic process Zκ are proved.

(1) The process Zκ is obtained by the fractional integration of the classical OU process (a stationary Gaussian–Markov random process). The parameter κ sets the order of the fractional integration. Here the integration is understood in the trajectory-wise sense.

(2) The stationary Gaussian process \(Z_{t}^{\kappa }\), t \( \geqslant \) 0, for 1/2 < κ < 1 locally approximates the fBm near zero in the mean-square sense; more precisely, the variance of the increment \(\mathbb{D}\){Zκ(t) – Zκ(0)} is equivalent to \(b^{2}t^{2H}\) as t → 0+, where the Hurst exponent H ∈ (0, 1/2) is equal to κ – 1/2, i.e., 2H = 2κ – 1; the multiplier b is set by the equality

$${{b}^{2}}\;\mathop = \limits^\Delta \;\frac{{ - 2{{\sigma }^{2}}{{\beta }^{{2\kappa - 1}}}}}{{\Gamma (2\kappa )\cos (\pi \kappa )}}.$$
(9)

(3) The covariance of Zκ can be written as follows:

$${{\rho }^{\kappa }}(t)\;\mathop = \limits^\Delta \;\mathbb{E}(Z_{t}^{\kappa }Z_{0}^{\kappa }) = \frac{{2{{\sigma }^{2}}{{e}^{{ - \beta t}}}}}{{\Gamma {{{(\kappa )}}^{2}}}}\int\limits_0^\infty {{{{(\beta t + x)}}^{{\kappa - 1}}}{{x}^{{\kappa - 1}}}{{e}^{{ - 2x}}}dx} ,\quad t\; \geqslant \;0.$$
(10)

This formula for the covariance is presented in the original paper of Wolpert and Taqqu [3] under number (7) on p. 1525.

(4) As was shown by Wolpert and Taqqu in their original paper (see formula (8) of [3]), the covariance function of \(Z_{t}^{\kappa }\) at nonnegative t can be expressed through the modified Bessel function of the second kind \({{\mathcal{K}}_{{\kappa - \frac{1}{2}}}}\):

$${{\rho }^{\kappa }}(t) = \frac{{{{2}^{{\frac{3}{2} - \kappa }}}{{\sigma }^{2}}{{\beta }^{{\kappa - \frac{1}{2}}}}}}{{\Gamma (\kappa )\sqrt \pi }}{{t}^{{\kappa - \frac{1}{2}}}}{{\mathcal{K}}_{{\kappa - \frac{1}{2}}}}(\beta t).$$
(11)

It is easy to calculate the variance of Zκ and to transform it using the properties of the Bessel function and the Legendre duplication formula for the gamma-function:

$${{V}^{2}}\;\mathop = \limits^\Delta \;{{\rho }^{\kappa }}(0) = \mathbb{E}{{(Z_{0}^{\kappa })}^{2}} = \frac{{{{2}^{{2 - 2\kappa }}}{{\sigma }^{2}}\Gamma (2\kappa - 1)}}{{\Gamma {{{(\kappa )}}^{2}}}} = \frac{{\Gamma (\kappa - 1{\text{/}}2){{\sigma }^{2}}}}{{\Gamma (\kappa )\sqrt \pi }}.$$
(12)

Here the latter expression is presented in [3] on p. 1525.
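The equality of the two expressions in (12) is exactly the Legendre duplication formula in disguise; it is easy to confirm numerically (σ = 1 here, and the κ values are arbitrary points of (1/2, 1)):

```python
# Numerical confirmation that the two expressions for V^2 in (12) agree.
from math import gamma, pi, sqrt, isclose

for kappa in (0.55, 0.75, 0.95):
    lhs = 2**(2 - 2*kappa) * gamma(2*kappa - 1) / gamma(kappa)**2
    rhs = gamma(kappa - 0.5) / (gamma(kappa) * sqrt(pi))
    assert isclose(lhs, rhs, rel_tol=1e-9)
    print(kappa, lhs, rhs)
```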

(5) In [3], the spectral density of the process Zκ is calculated:

$$\int\limits_{ - \infty }^\infty {{{e}^{{i\theta t}}}{{\rho }^{\kappa }}(t)dt} = \frac{{2{{\sigma }^{2}}}}{\beta }{{\left( {1 + \frac{{{{\theta }^{2}}}}{{{{\beta }^{2}}}}} \right)}^{{ - \kappa }}},\quad \theta \in \mathbb{R}.$$
(13)

We present the following theorem as one of the basic results of this paper.

Theorem. For \(\frac{1}{2}\) < κ < 1, the stochastic process \(Z_{t}^{\kappa }\), t \( \geqslant \) 0, up to the multiplier V from (12), is a limiting PSI-process Ψλ in the sense of Definition 3, when all driven sequences (ξ) consist of jointly independent and identically distributed random variables with zero means and unit variances, and the random intensity λ has the following distribution:

$$\lambda \;\mathop = \limits^d \;\beta \left( {1 + \frac{{{{\gamma }_{{1 - \kappa }}}}}{{\eta {\text{/}}2}}} \right)\;\mathop = \limits^\Delta \;{{\varrho }_{\kappa }},$$
(14)

where β > 0 is the nonrandom scale parameter; the random variable η has the gamma distribution \({{\Gamma }_{{2\kappa - 1}}}\) with a unit scale and the shape parameter 2κ – 1; the random variable \({{\gamma }_{{1 - \kappa }}}\) is independent of η and has the gamma-distribution \({{\Gamma }_{{1 - \kappa }}}\) with a unit scale and the shape parameter 1 – κ.

The density pκ of the random intensity \({{\varrho }_{\kappa }}\) from (14) for the PSI-process implementing Zκ can be written as

$${{p}_{\kappa }}(x) = \frac{{\Gamma (\kappa ){{{(2\beta )}}^{{2\kappa - 1}}}}}{{\Gamma (1 - \kappa )\Gamma (2\kappa - 1)}}\frac{1}{{{{{({{x}^{2}} - {{\beta }^{2}})}}^{\kappa }}}}{{\mathbb{I}}_{{(\beta ,\infty )}}}(x),\quad x \in \mathbb{R},$$
(15)

where \(\mathbb{I}\) denotes the indicator function.

The following asymptotics holds as t → 0+:

$$\begin{gathered} \mathbb{E}{{(Z_{t}^{\kappa } - Z_{0}^{\kappa })}^{2}} = \frac{{2{{\sigma }^{2}}\sqrt \pi }}{{\Gamma (\kappa )\sin \left( {\left( {\kappa - \frac{1}{2}} \right)\pi } \right)}} \\ \times \;\left( {\frac{{{{\beta }^{{2\kappa - 1}}}}}{{{{2}^{{2\kappa - 1}}}\Gamma \left( {\kappa + \frac{1}{2}} \right)}}{{t}^{{2\kappa - 1}}} - \frac{{{{\beta }^{2}}}}{{4\Gamma \left( {\frac{5}{2} - \kappa } \right)}}{{t}^{2}} + \frac{{{{\beta }^{{2\kappa + 1}}}}}{{{{2}^{{2\kappa + 1}}}\Gamma \left( {\kappa + \frac{3}{2}} \right)}}{{t}^{{2\kappa + 1}}} + O({{t}^{4}})} \right). \\ \end{gathered} $$
(16)

Note that (14) describes, up to the shift by β, a mixture of gamma-distributions with the random rate parameter η/(2β) which, in turn, has a gamma-distribution.

The proof of the Theorem begins with formula (10) describing the covariance function of the process Zκ. First, we show that this covariance is the value at the point t of the Laplace transform of a certain nonnegative random variable λ, up to the scalar multiplier V2 defined in (12).

For κ < 1, the expression \({{x}^{{1 - \kappa }}}{{(\beta t + x)}^{{\kappa - 1}}}\) for every x > 0, as a function of t \( \geqslant \) 0, is the Laplace transform of a gamma-distribution:

$$\frac{{{{x}^{{1 - \kappa }}}}}{{{{{(\beta t + x)}}^{{1 - \kappa }}}}} = \int\limits_0^\infty {{{e}^{{ - tu}}}\frac{{{{x}^{{1 - \kappa }}}{{u}^{{ - \kappa }}}{{e}^{{ - xu/\beta }}}}}{{\Gamma (1 - \kappa ){{\beta }^{{1 - \kappa }}}}}du} = \int\limits_0^\infty {\frac{{{{{v}}^{{ - \kappa }}}{{e}^{{ - {v}(1 + t\beta /x)}}}}}{{\Gamma (1 - \kappa )}}d{v}.} $$
(17)

Substitute this equality into (10) and change the order of integration:

$$\begin{gathered} \mathbb{E}(Z_{t}^{\kappa }Z_{0}^{\kappa }) = \frac{{2{{\sigma }^{2}}{{e}^{{ - \beta t}}}}}{{\Gamma {{{(\kappa )}}^{2}}}}\int\limits_0^\infty {\int\limits_0^\infty {\frac{{{{{v}}^{{ - \kappa }}}{{e}^{{ - {v}(1 + t\beta /x)}}}}}{{\Gamma (1 - \kappa )}}d{v}{{x}^{{2\kappa - 2}}}{{e}^{{ - 2x}}}dx} } \\ = \frac{{2{{\sigma }^{2}}{{e}^{{ - \beta t}}}}}{{\Gamma {{{(\kappa )}}^{2}}\Gamma (1 - \kappa )}}\int\limits_0^\infty {{{{v}}^{{ - \kappa }}}{{e}^{{ - {v}}}}dv} \int\limits_0^\infty {{{x}^{{2\kappa - 2}}}{{e}^{{ - 2x - {v}t\beta /x}}}dx.} \\ \end{gathered} $$
(18)

Make the substitution \({v}{\text{/}}x\) = y in the inner integral to obtain the equality

$$\int\limits_0^\infty {{{x}^{{2\kappa - 2}}}{{e}^{{ - 2x - {v}t\beta /x}}}dx} = \int\limits_0^\infty {{{{v}}^{{2\kappa - 1}}}{{y}^{{ - 2\kappa }}}{{e}^{{ - 2{v}/y - t\beta y}}}dy.} $$

Substitute it into (18) and change the order of integration once again:

$$\mathbb{E}(Z_{t}^{\kappa }Z_{0}^{\kappa }) = \frac{{2{{\sigma }^{2}}{{e}^{{ - \beta t}}}}}{{\Gamma {{{(\kappa )}}^{2}}\Gamma (1 - \kappa )}}\int\limits_0^\infty {{{{v}}^{{ - \kappa }}}{{e}^{{ - {v}}}}d{v}\int\limits_0^\infty {{{{v}}^{{2\kappa - 1}}}{{y}^{{ - 2\kappa }}}{{e}^{{ - 2{v}/y - t\beta y}}}dy} } $$
$$\begin{gathered} = \frac{{2{{\sigma }^{2}}{{e}^{{ - \beta t}}}}}{{\Gamma {{{(\kappa )}}^{2}}\Gamma (1 - \kappa )}}\int\limits_0^\infty {{{y}^{{ - 2\kappa }}}{{e}^{{ - t\beta y}}}dy} \int\limits_0^\infty {{{{v}}^{{\kappa - 1}}}{{e}^{{ - {v}(1 + 2/y)}}}d{v}} \\ = \frac{{2{{\sigma }^{2}}{{e}^{{ - \beta t}}}}}{{\Gamma {{{(\kappa )}}^{2}}\Gamma (1 - \kappa )}}\int\limits_0^\infty {{{y}^{{ - 2\kappa }}}{{e}^{{ - t\beta y}}}\frac{{\Gamma (\kappa )}}{{{{{(1 + 2{\text{/}}y)}}^{\kappa }}}}dy} \\ \end{gathered} $$
(19)
$$ = \frac{{2{{\sigma }^{2}}}}{{\Gamma (\kappa )\Gamma (1 - \kappa )}}\int\limits_0^\infty {{{e}^{{ - t\beta (y + 1)}}}\frac{1}{{{{y}^{\kappa }}{{{(y + 2)}}^{\kappa }}}}dy.} $$

Equality (19) shows that λ \(\mathop = \limits^d \) β(Y + 1), where Y has the density on (0, ∞) proportional to \({{y}^{{ - \kappa }}}{{(y + 2)}^{{ - \kappa }}}\). Let us find the proportionality factor. Substituting r = 2/(y + 2), for 1/2 < κ < 1 we obtain the chain of equalities:

$$\begin{gathered} \int\limits_0^\infty {\frac{{dy}}{{{{y}^{\kappa }}{{{(y + 2)}}^{\kappa }}}}} = {{2}^{{ - 2\kappa }}}\int\limits_0^\infty {{{{\left( {\frac{y}{{y + 2}}} \right)}}^{{ - \kappa }}}{{{\left( {\frac{2}{{y + 2}}} \right)}}^{{2\kappa }}}dy} = {{2}^{{1 - 2\kappa }}}\int\limits_0^1 {{{{(1 - r)}}^{{ - \kappa }}}{{r}^{{2\kappa - 2}}}dr} \\ = {{2}^{{1 - 2\kappa }}}B(1 - \kappa ,2\kappa - 1) = {{2}^{{1 - 2\kappa }}}\frac{{\Gamma (1 - \kappa )\Gamma (2\kappa - 1)}}{{\Gamma (\kappa )}}, \\ \end{gathered} $$
(20)

where B denotes the beta-function. Thus, the density of Y is:

$${{p}_{Y}}(y) = \frac{{{{2}^{{2\kappa - 1}}}\Gamma (\kappa )}}{{\Gamma (1 - \kappa )\Gamma (2\kappa - 1)}}\frac{1}{{{{y}^{\kappa }}{{{(y + 2)}}^{\kappa }}}},$$
(21)

whence formula (15) for the density of λ follows.

All that remains is to note that

$${{\rho }^{\kappa }}(t) = {{V}^{2}}\frac{{{{e}^{{ - \beta t}}}}}{{\Gamma (2\kappa - 1)}}\int\limits_0^\infty {\frac{{{{{(y{\text{/}}2\beta )}}^{{1 - \kappa }}}}}{{{{{\left( {t + \frac{y}{{2\beta }}} \right)}}^{{1 - \kappa }}}}}{{y}^{{(2\kappa - 1) - 1}}}{{e}^{{ - y}}}dy,} $$
(22)

where the multiplier next to V2 as a function of t is the Laplace transform of the “gamma over gamma” distribution with the shift by β: β + \(2\beta {{\gamma }_{{1 - \kappa }}}{\text{/}}\eta \), where \({{\gamma }_{{1 - \kappa }}}\) has the distribution \({{\Gamma }_{{1 - \kappa }}}\) with a unit scale and is independent of the random variable η, which, in turn, has the distribution \({{\Gamma }_{{2\kappa - 1}}}\) with a unit scale.

We consider N ∈ \(\mathbb{N}\) independent copies of a PSI-process with a random intensity distributed as λ. All terms of all subordinate sequences (ξ) are jointly independent, have identical distributions, zero means, and variances given by equality (12). According to the central limit theorem for vectors, the sum of N independent copies normalized by \(\sqrt N \) converges, as N → ∞, to the process Zκ in the sense of weak convergence of finite-dimensional distributions.

Now we move to the proof of the asymptotic. The following representation of the modified Bessel function of the second kind is known (see, e.g., [14], Ch. 9):

$${{\mathcal{K}}_{s}}(t) = \frac{\pi }{2}\frac{{{{I}_{{ - s}}}(t) - {{I}_{s}}(t)}}{{\sin (s\pi )}},\quad s \in \mathbb{R}\backslash \mathbb{Z},\quad t \in {{\mathbb{R}}_{ + }},$$
(23)

where \({{I}_{s}}(t)\) is the modified Bessel function of the first kind, which can be expanded in the series:

$${{I}_{s}}(t) = \sum\limits_{m = 0}^\infty {\frac{1}{{m!\Gamma (m + s + 1)}}{{{\left( {\frac{t}{2}} \right)}}^{{2m + s}}}.} $$
(24)

As a corollary, for t \( \geqslant \) 0, the covariance function ρκ can be expanded in the following series:

$$\begin{gathered} {{\rho }^{\kappa }}(t) = \frac{{2{{\sigma }^{2}}}}{{\Gamma (\kappa )\sqrt \pi }}{{\left( {\frac{{\beta t}}{2}} \right)}^{{\kappa - \frac{1}{2}}}}{{\mathcal{K}}_{{\kappa - \frac{1}{2}}}}(\beta t) \\ = \frac{{{{\sigma }^{2}}\sqrt \pi }}{{\Gamma (\kappa )\sin \left( {\left( {\kappa - \frac{1}{2}} \right)\pi } \right)}}\left( {\sum\limits_{m = 0}^\infty {\frac{1}{{m!\Gamma \left( {m + \frac{3}{2} - \kappa } \right)}}{{{\left( {\frac{{\beta t}}{2}} \right)}}^{{2m}}}} - \sum\limits_{m = 0}^\infty {\frac{1}{{m!\Gamma \left( {m + \kappa + \frac{1}{2}} \right)}}{{{\left( {\frac{{\beta t}}{2}} \right)}}^{{2m + 2\kappa - 1}}}} } \right). \\ \end{gathered} $$
(25)

From this we can find the expansion of the increment variance in a series up to arbitrarily high accuracy (the sums below have different lower summation limits because the m = 0 term of the even-power series is cancelled by ρκ(0)):

$$\mathbb{E}{{(Z_{t}^{\kappa } - Z_{0}^{\kappa })}^{2}} = 2({{\rho }^{\kappa }}(0) - {{\rho }^{\kappa }}(t))$$
$$ = \frac{{2{{\sigma }^{2}}\sqrt \pi }}{{\Gamma (\kappa )\sin \left( {\left( {\kappa - \frac{1}{2}} \right)\pi } \right)}}\left( {\sum\limits_{m = 0}^\infty {\frac{1}{{m!\Gamma \left( {m + \kappa + \frac{1}{2}} \right)}}{{{\left( {\frac{{\beta t}}{2}} \right)}}^{{2m + 2\kappa - 1}}}} - \sum\limits_{m = 1}^\infty {\frac{1}{{m!\Gamma \left( {m + \frac{3}{2} - \kappa } \right)}}{{{\left( {\frac{{\beta t}}{2}} \right)}}^{{2m}}}} } \right)$$
(26)
$$ = \frac{{2{{\sigma }^{2}}\sqrt \pi }}{{\Gamma (\kappa )\sin \left( {\left( {\kappa - \frac{1}{2}} \right)\pi } \right)}}\left( {\frac{{{{\beta }^{{2\kappa - 1}}}}}{{{{2}^{{2\kappa - 1}}}\Gamma \left( {\kappa + \frac{1}{2}} \right)}}{{t}^{{2\kappa - 1}}} - \frac{{{{\beta }^{2}}}}{{4\Gamma \left( {\frac{5}{2} - \kappa } \right)}}{{t}^{2}} + \frac{{{{\beta }^{{2\kappa + 1}}}}}{{{{2}^{{2\kappa + 1}}}\Gamma \left( {\kappa + \frac{3}{2}} \right)}}{{t}^{{2\kappa + 1}}} + O({{t}^{4}})} \right).\,\,\,\,\,\,\square $$
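Expansion (26) can be checked against direct quadrature of covariance (10). A sketch (the substitution x = u⁴, which removes the integrable singularity \(x^{\kappa-1}\), the Simpson grid, and the parameter values are our illustrative choices):

```python
# Check of expansion (26) against Simpson quadrature of formula (10).
from math import exp, gamma, pi, sin, sqrt

kappa, beta, sigma = 0.75, 1.0, 1.0

def rho(t: float, n: int = 4000, upper: float = 3.0) -> float:
    """rho^kappa(t) of formula (10) via Simpson quadrature after x = u**4."""
    h = upper / n
    def f(u: float) -> float:
        if u == 0.0:
            return 0.0
        x = u**4
        return 4 * u**3 * (beta*t + x)**(kappa - 1) * x**(kappa - 1) * exp(-2*x)
    s = f(0.0) + f(upper)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return 2 * sigma**2 * exp(-beta*t) / gamma(kappa)**2 * (s * h / 3)

def expansion(t: float) -> float:
    """The three displayed terms of expansion (26)."""
    c = 2 * sigma**2 * sqrt(pi) / (gamma(kappa) * sin((kappa - 0.5) * pi))
    return c * (beta**(2*kappa - 1) / (2**(2*kappa - 1) * gamma(kappa + 0.5)) * t**(2*kappa - 1)
                - beta**2 / (4 * gamma(2.5 - kappa)) * t**2
                + beta**(2*kappa + 1) / (2**(2*kappa + 1) * gamma(kappa + 1.5)) * t**(2*kappa + 1))

t = 0.05
exact = 2 * (rho(0.0) - rho(t))        # E(Z_t - Z_0)^2 = 2(rho(0) - rho(t))
print(exact, expansion(t))             # should agree to a few decimal places
```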

Note another representation of the random variable \({{\varrho }_{\kappa }}\) via the beta-distribution. For simplicity, we suppose that the parameter β = 1.

Statement 1. The following equality in distribution holds:

$${{\varrho }_{\kappa }}\;\mathop = \limits^d \;\frac{1}{{\sqrt {{{\beta }_{{\kappa - 1/2,1 - \kappa }}}} }},$$
(27)

where the random variable βa, b has the beta-distribution with the density proportional to \({{z}^{{a - 1}}}{{(1 - z)}^{{b - 1}}}\), a, b > 0, z ∈ (0, 1).

Proof. Indeed, a simple calculation shows that for x > 1

$$\begin{gathered} \frac{d}{{dx}}\mathbb{P}(\beta _{{a,b}}^{{ - 1/2}}\;\leqslant \;x) = \frac{1}{{B(a,b)}}\frac{d}{{dx}}\int\limits_{{{x}^{{ - 2}}}}^1 {{{z}^{{a - 1}}}{{{(1 - z)}}^{{b - 1}}}dz} \\ = \frac{{2{{x}^{{ - 3}}}}}{{B(a,b)}}{{x}^{{2 - 2a}}}{{(1 - {{x}^{{ - 2}}})}^{{b - 1}}} = \frac{{2\Gamma (a + b)}}{{\Gamma (a)\Gamma (b)}}{{x}^{{1 - 2a - 2b}}}{{({{x}^{2}} - 1)}^{{b - 1}}}. \\ \end{gathered} $$
(28)

When a = κ – \(\frac{1}{2}\) and b = 1 – κ, this expression gives \(\frac{{2\Gamma (1{\text{/}}2)}}{{\Gamma (\kappa - 1{\text{/}}2)\Gamma (1 - \kappa )}}{{({{x}^{2}} - 1)}^{{ - \kappa }}}\), which is reduced to the form (15) with β = 1 by using the Legendre duplication formula for the gamma-function.
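Representations (14) and (27) can also be compared by simulation: since the mean of \({{\varrho }_{\kappa }}\) is infinite for κ < 1, we compare empirical quantiles of the two samplers rather than means. A Monte Carlo sketch with β = 1 (the sample size, seed, and κ are illustrative):

```python
# Monte Carlo comparison of the two representations of the random intensity:
# the shifted gamma-over-gamma form (14) and the inverse-root-beta form (27).
import random
from math import sqrt

random.seed(3)
kappa = 0.75
n = 200_000

def sample_gamma_form() -> float:
    """(14) with beta = 1: 1 + gamma_{1-kappa} / (eta / 2)."""
    g = random.gammavariate(1 - kappa, 1.0)        # shape 1 - kappa, unit scale
    eta = random.gammavariate(2 * kappa - 1, 1.0)  # shape 2*kappa - 1, unit scale
    return 1 + g / (eta / 2)

def sample_beta_form() -> float:
    """(27): inverse square root of a beta_{kappa - 1/2, 1 - kappa} variable."""
    return 1 / sqrt(random.betavariate(kappa - 0.5, 1 - kappa))

a = sorted(sample_gamma_form() for _ in range(n))
b = sorted(sample_beta_form() for _ in range(n))
for q in (0.25, 0.5, 0.75):
    i = int(q * n)
    print(q, a[i], b[i])   # the empirical quantiles should nearly agree
```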

4 THE LOCAL MODULUS OF CONTINUITY AT ZERO FOR A PSI-PROCESS WITH A RANDOM INTENSITY

To obtain results on stronger convergence of PSI-processes, in particular, convergence in functional spaces, we need to estimate the probability of large oscillations of PSI-processes with a random intensity. The lemma below provides a basis for such estimates.

Definition 4. We define the random process χδ of the local modulus of continuity of the PSI-process ψ with the parameter δ > 0 as follows:

$${{\chi }_{\delta }}(t)\;\mathop = \limits^\Delta \;\mathop {\sup }\limits_{s \in [t,t + \delta ]} \left| {{{\psi }_{\lambda }}(s) - {{\psi }_{\lambda }}(t)} \right|,\quad t \in {{\mathbb{R}}_{ + }}.$$
(29)

Due to the stationarity of ψλ, it is obvious that χδ is a stationary process. According to the stationary extension theorem [15], we can assume t ∈ \(\mathbb{R}\).

Lemma. Let (ξ) = (ξ0, ξ1, …) be a sequence of independent identically distributed random variables with the distribution function F(x) = \(\mathbb{P}\)(ξ0 \(\leqslant \) x), x ∈ \(\mathbb{R}\); let λ be a nonnegative random variable with the Laplace transform Lλ(t) = \(\mathbb{E}\)(e–λt); and let Π1(s) = Π(s), s \( \geqslant \) 0, be the standard Poisson process. We assume that (ξ), λ, and Π are jointly independent. We consider a PSI-process ψλ, as defined in Definition 2, with the random intensity λ. Then, for an arbitrary fixed δ > 0, the equality

$$\mathbb{P}({{\chi }_{\delta }}(0)\; \geqslant \;r) = \mathbb{P}({{\chi }_{\delta }}(t)\; \geqslant \;r) = \int\limits_{ - \infty }^\infty {[1 - {{L}_{\lambda }}(\delta (1 - F(x + r) + F(x - r)))]dF(x)} $$
(30)

holds for all those r > 0, for which F(x) and F(x + r) have no common points of discontinuity.

The expression on the left-hand side of (30) is a nonrandom function of δ and r; we call it the tail distribution of the local modulus of continuity. Note that it is independent of t.

Proof. First, we assume that λ is fixed. If Π(λs) has no jumps for s ∈ [0, δ], then ψλ(s) = ψλ(0) on this interval. If Π(λs) has k > 0 jumps on [0, δ], then χδ(0) = max{|ξ1 – ξ0|, …, |ξk – ξ0|}. Since the terms of (ξ) are independent and identically distributed, then, by the total probability formula, conditioning on the initial value of the PSI-process, ψ(0) = ξ0 = x, and integrating over x, we obtain

$$\mathbb{P}(\max \{ \left| {{{\xi }_{1}} - {{\xi }_{0}}} \right|, \ldots ,\left| {{{\xi }_{k}} - {{\xi }_{0}}} \right|\} < r) = \int\limits_{ - \infty }^\infty {{{\mathbb{P}}^{k}}(\left| {{{\xi }_{1}} - x} \right| < r)dF(x).} $$

When F(y) and F(y + r) as functions of y have no common discontinuity points, the following equality is true

$$\mathbb{P}(\max \{ \left| {{{\xi }_{1}} - {{\xi }_{0}}} \right|, \ldots ,\left| {{{\xi }_{k}} - {{\xi }_{0}}} \right|\} \; \geqslant \;r) = 1 - \int\limits_{ - \infty }^\infty {{{{(F(x + r) - F(x - r))}}^{k}}dF(x).} $$

For the fixed λ, the Poisson process Π(λs) has k jumps on the interval [0, δ] with the probability \(\frac{{{{{(\lambda \delta )}}^{k}}}}{{k!}}{{e}^{{ - \lambda \delta }}}\); therefore, according to the total probability formula, summing over k and using the series expansion of the exponential, we obtain the chain of equalities

$$\mathbb{P}(\left. {{{\chi }_{\delta }}(0)\; \geqslant \;r} \right|\lambda ) = \sum\limits_{k = 1}^\infty {\left( {1 - \int\limits_{ - \infty }^\infty {{{{(F(x + r) - F(x - r))}}^{k}}dF(x)} } \right)\frac{{{{{(\lambda \delta )}}^{k}}}}{{k!}}{{e}^{{ - \lambda \delta }}}} $$
$$ = 1 - {{e}^{{ - \lambda \delta }}} - {{e}^{{ - \lambda \delta }}}\int\limits_{ - \infty }^\infty {(\exp (\lambda \delta (F(x + r) - F(x - r))) - 1)dF(x)} $$
$$ = \int\limits_{ - \infty }^\infty {(1 - \exp ( - \lambda \delta (1 - F(x + r) + F(x - r))))dF(x),} $$

where the change of the order of summation and integration is justified by the Fubini theorem. We also use the obvious property \(\int_{ - \infty }^\infty {dF(x)} \) = 1.

It remains to note that equality (30) is obtained by averaging over λ. The change of the integration order is also ensured by applying the Fubini theorem.
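Formula (30) admits a direct Monte Carlo check when F and \({{L}_{\lambda }}\) are explicit, e.g., for ξ uniform on [0, 1] (so that F has no discontinuities at all) and λ standard exponential, with \({{L}_{\lambda }}(t) = 1/(1 + t)\). A sketch (all parameter choices are illustrative):

```python
# Monte Carlo check of formula (30) for xi ~ Uniform[0, 1], lambda ~ Exp(1).
import random

random.seed(4)
delta, r = 0.5, 0.3

def F(x: float) -> float:
    """Distribution function of Uniform[0, 1]."""
    return min(1.0, max(0.0, x))

def L(t: float) -> float:
    """Laplace transform of Exp(1)."""
    return 1.0 / (1.0 + t)

# Right-hand side of (30): midpoint rule over dF(x) = dx on (0, 1).
m = 20_000
rhs = 0.0
for i in range(m):
    x = (i + 0.5) / m
    rhs += 1 - L(delta * (1 - F(x + r) + F(x - r)))
rhs /= m

# Left-hand side of (30): simulate chi_delta(0) directly.
n = 200_000
hits = 0
for _ in range(n):
    lam = random.expovariate(1.0)
    xi0 = random.random()
    mx = 0.0
    t = random.expovariate(lam)          # jump times of Pi_1(lam * s)
    while t <= delta:
        mx = max(mx, abs(random.random() - xi0))
        t += random.expovariate(lam)
    if mx >= r:
        hits += 1
print(hits / n, rhs)   # the two values should nearly agree
```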

Note that it is easy to estimate the probability in (30) in terms of the concentration function of the distribution of the random variable ξ0. We recall the definition of the concentration function Q of a random variable X:

$${{Q}_{X}}(r)\;\mathop = \limits^\Delta \;\mathop {\sup }\limits_{x \in \mathbb{R}} \mathbb{P}(x\;\leqslant \;X\;\leqslant \;x + r).$$

A direct calculation shows that representation (30) implies the estimate

$$\mathbb{P}({{\chi }_{\delta }}(0)\; \geqslant \;r)\; \geqslant \;1 - {{L}_{\lambda }}(\delta (1 - {{Q}_{{{{\xi }_{0}}}}}(2r))),\quad r > 0.$$
(31)

5 THE TELEGRAPH PROCESS AS A SPECIAL CASE OF A PSI-PROCESS

We assume that the distribution of the independent identically distributed terms of the driven sequence (ξ) has a discrete component. Then, for each n ∈ \(\mathbb{N}\), the event \({{\xi }_{{n - 1}}}\) = ξn occurs with a positive probability; when it occurs, the PSI-process makes no jump at the nth jump of the leading process Π1(λt). This is a difficulty when working with the discretely distributed (ξn). However, in the example below, this difficulty is easily managed due to the symmetry.

Example. We consider independent identically distributed random variables with the Rademacher distribution, i.e., taking the values ±1 with the probabilities 1/2, as a driven sequence (ξ). Then, as was mentioned above, a PSI-process with the fixed intensity 1 is a telegraph process with the intensity 1/2. To verify this statement, we note that the events {ξ0 = ξ1}, {ξ1 = ξ2}, … form a sequence of independent events, each of probability 1/2. Therefore, if we consider only those points of the Poisson process Π1 at which \({{\xi }_{{{{\Pi }_{1}}( \cdot )}}}\) changes its sign, then, according to the coloring theorem (see [11, Ch. 5]), they form a Poisson process with the intensity \(\mathbb{P}\){ξ1 ≠ ξ0} = 1/2 on the positive half-line, which provides the representation on the right-hand side of (5).

Since (5) holds as an equality of distributions of processes, it is possible to make a random change of time, introducing a positive random multiplier λ: t \( \mapsto \) λt, and to obtain

$$\psi (t)\;\mathop = \limits^\mathcal{D} \;{{\xi }_{0}}{{( - 1)}^{{{{\Pi }_{1}}(\lambda t/2)}}},\quad t\; \geqslant \;0.$$

For this process, it is easy to calculate the distribution of the local modulus of continuity χδ, since, by construction, χδ(t) takes only two values for any t: 0, if Π1s/2) has no jumps on the interval s ∈ [t, t + δ], and 2, if at least one jump occurs on this interval. Therefore, at all t \( \geqslant \) 0, we obtain

$$\mathbb{P}({{\chi }_{\delta }}(t) = 0) = {{L}_{\lambda }}(\delta {\text{/}}2),\quad \mathbb{P}({{\chi }_{\delta }}(t) = 2) = 1 - {{L}_{\lambda }}(\delta {\text{/}}2).$$
(32)
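The probabilities (32) can be checked by simulation. In the sketch below we take, purely for illustration (this choice is ours, not from the text), a random intensity λ with the exponential distribution of mean 1, so that Lλ(s) = 1/(1 + s).

```python
import random

rng = random.Random(2)
delta, n = 0.5, 200_000

def chi_is_zero(rng, delta):
    """One realization: does the telegraph process with random intensity
    lambda ~ Exp(1) stay constant on [0, delta]?  By (32) this happens
    with probability L_lambda(delta/2) = 1/(1 + delta/2)."""
    lam = rng.expovariate(1.0)            # random intensity
    # sign changes form a Poisson process of rate lam/2, so
    # "no change on [0, delta]"  <=>  first change time > delta
    first_change = rng.expovariate(lam / 2) if lam > 0 else float("inf")
    return first_change > delta

est = sum(chi_is_zero(rng, delta) for _ in range(n)) / n
print(est, 1 / (1 + delta / 2))   # Monte Carlo estimate vs L_lambda(delta/2)
```

The two printed values agree to Monte Carlo accuracy, in line with (32).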

Let us find the same probabilities using formula (30), thereby verifying formula (30) on this test example:

$$\mathbb{P}({{\chi }_{\delta }}(0)\; \geqslant \;r) = \int\limits_{ - \infty }^\infty {[1 - {{L}_{\lambda }}(\delta (1 - F(x + r) + F(x - r)))]dF(x),} $$

where the distribution function F(x) of the random variable ξ0 equals 1/2 for x ∈ [‒1, 1), 0 to the left of this interval, and 1 to the right of it. The latter integral reduces to the sum of two summands

$$\frac{1}{2}[1 - {{L}_{\lambda }}(\delta (1 - F( - 1 + r) + F( - 1 - r)))] + \frac{1}{2}[1 - {{L}_{\lambda }}(\delta (1 - F(1 + r) + F(1 - r)))],$$

and each of them equals 1 – Lλ(δ/2) for r ∈ (0, 2) and 0 for r > 2, in perfect agreement with (32).
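This two-atom computation is easy to reproduce mechanically. The sketch below evaluates the sum from the preceding display with the distribution function F of ξ0 and an illustrative Laplace transform Lλ(s) = 1/(1 + s) (our choice, corresponding to λ ~ Exp(1), not fixed by the text), and confirms that it equals 1 − Lλ(δ/2).

```python
# Piecewise-constant distribution function of the Rademacher variable xi_0
def F(x):
    if x < -1:
        return 0.0
    if x < 1:
        return 0.5
    return 1.0

# Any Laplace transform will do for the check; L(s) = 1/(1+s) is illustrative
L = lambda s: 1 / (1 + s)

delta, r = 0.3, 1.2   # any delta > 0 and r in (0, 2)
term = lambda x: 1 - L(delta * (1 - F(x + r) + F(x - r)))
lhs = 0.5 * term(-1) + 0.5 * term(1)   # the two atoms of F, weight 1/2 each
rhs = 1 - L(delta / 2)
print(lhs, rhs)   # the two values coincide
```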

Corollary. For the telegraph process with the random intensity \({{\varrho }_{\kappa }}\), the asymptotics of the probability of a nonzero increment near zero follows immediately:

$$\mathbb{P}({{\chi }_{\delta }}(0) > 0)\sim {{b}^{2}}{{\left( {\frac{\delta }{2}} \right)}^{{2H}}},\quad \delta \to 0 + ,$$
(33)

where the notation b2 is introduced in equality (9).

The covariance function of the process χδ(t), t \( \geqslant \) 0, is given by the expression

$$\begin{gathered} \operatorname{cov} ({{\chi }_{\delta }}(t))\;\mathop = \limits^\Delta \;\mathbb{E}\{ ({{\chi }_{\delta }}(s) - \mathbb{E}{{\chi }_{\delta }}(s))({{\chi }_{\delta }}(t + s) - \mathbb{E}{{\chi }_{\delta }}(t + s))\} \\ = \mathbb{E}({{\chi }_{\delta }}(0){{\chi }_{\delta }}(t)) - {{\mathbb{E}}^{2}}{{\chi }_{\delta }}(0) \\ \end{gathered} $$
(34)

(due to the stationarity, s can be arbitrary).

Statement 2. For the telegraph process with a random intensity λ having the Laplace transform Lλ, the covariance of the local modulus of continuity χδ can be written as

$$\operatorname{cov} ({{\chi }_{\delta }}(t)) = \left\{ \begin{gathered} 4({{L}_{\lambda }}((t + \delta ){\text{/}}2) - {{L}_{\lambda }}{{(\delta {\text{/}}2)}^{2}}),\quad 0 \leqslant t \leqslant \delta ; \hfill \\ 4({{L}_{\lambda }}(\delta ) - {{L}_{\lambda }}{{(\delta {\text{/}}2)}^{2}}),\quad t > \delta . \hfill \\ \end{gathered} \right.$$
(35)

Proof. If 0 ≤ t ≤ δ, then the intervals [0, δ] and [t, t + δ] overlap and split [0, t + δ] into the three intervals [0, t], [t, δ], [δ, t + δ]. The product χδ(0)χδ(t) is equal to zero if there are no jumps either on the first and second, or on the second and third, or on all three of these intervals; otherwise, the product is equal to 4. For fixed intensity λ, we have \(\mathbb{P}\)(χδ(0)χδ(t) = 0 | λ) = \(2{{e}^{{ - \lambda \delta /2}}}(1 - {{e}^{{ - \lambda t/2}}}) + {{e}^{{ - \lambda (t + \delta )/2}}}\). Therefore, taking the mathematical expectation over λ, we obtain \(\mathbb{E}{{\chi }_{\delta }}(0){{\chi }_{\delta }}(t)\) = 4(1 – 2Lλ(δ/2) + Lλ((t + δ)/2)). Subtracting the squared mathematical expectation \(\mathbb{E}{{\chi }_{\delta }}(0)\) = 2(1 – Lλ(δ/2)), which is found from formulas (32), we obtain the first case in formula (35).

If t > δ, then the intervals [0, δ] and [t, t + δ] do not intersect, and the product \({{\chi }_{\delta }}(0){{\chi }_{\delta }}(t)\) is equal to 4 only in the case when there are jumps on both intervals, being zero otherwise. Therefore, \(\mathbb{E}({{\chi }_{\delta }}(0){{\chi }_{\delta }}(t)\) | λ) = \(4{{(1 - {{e}^{{ - \lambda \delta /2}}})}^{2}}\). Averaging over λ and subtracting \({{\mathbb{E}}^{2}}{{\chi }_{\delta }}(0)\), we obtain the second case in (35).

Note that if the intensity λ is degenerate (as in the “real” telegraph process), then for t \( \geqslant \) δ the covariance vanishes, since in this case the values χδ(0) and χδ(t) are independent. However, for a nondegenerate random intensity, the covariance is positive and does not tend to zero as t → ∞.
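Both cases of (35), and the vanishing of the covariance for degenerate λ, can be verified by an exact inclusion–exclusion computation over the sign-change Poisson process (of rate λ/2), exactly as in the proof above; the numerical values of λ and δ in this sketch are arbitrary illustrative choices.

```python
import math

lam, delta = 1.7, 0.4   # degenerate (non-random) intensity; values arbitrary

L = lambda s: math.exp(-lam * s)   # Laplace transform of the point mass at lam

def cov_formula(t):
    """Right-hand side of (35)."""
    if t <= delta:
        return 4 * (L((t + delta) / 2) - L(delta / 2) ** 2)
    return 4 * (L(delta) - L(delta / 2) ** 2)

def cov_direct(t):
    """Inclusion-exclusion over the sign-change Poisson process (rate lam/2):
    E[chi(0)chi(t)] = 4 P(jump in [0,d] and jump in [t,t+d])."""
    def p_no(length):                    # P(no sign change on an interval)
        return math.exp(-lam * length / 2)
    union = t + delta if t <= delta else 2 * delta  # total measure of the union
    e_prod = 4 * (1 - 2 * p_no(delta) + p_no(union))
    e_chi = 2 * (1 - p_no(delta))
    return e_prod - e_chi ** 2

for t in (0.1, 0.3, 0.4, 0.9):
    print(t, cov_formula(t), cov_direct(t))   # identical in both cases
```

For t > δ the direct computation gives 4(1 − e^(−λδ/2))² − 4(1 − e^(−λδ/2))² = 0, illustrating the independence noted above.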

6 MODULUS OF CONTINUITY OF A PSI-PROCESS FOR UNIFORMLY DISTRIBUTED TERMS OF A DRIVEN SEQUENCE

We consider the case of a PSI-process whose driven sequence (ξ) has uniformly distributed terms. In this case, explicit calculations are possible.

Statement 3. Let the random variables ξ0, ξ1, … be independent and have an identical uniform distribution on [–a, a], a > 0, and let the intensity λ be an arbitrary nonnegative random variable with the Laplace transform Lλ.

Then

$$\mathbb{P}({{\chi }_{\delta }}(0)\; \geqslant \;r) = \left\{ \begin{gathered} 0,\quad r\; \geqslant \;2a; \hfill \\ 2 - \frac{r}{a} - \frac{2}{\delta }\int\limits_0^{\frac{{\delta (2a - r)}}{{2a}}} {{{L}_{\lambda }}(y)dy} ,\quad 2a > r\; \geqslant \;a; \hfill \\ 1 - \frac{{a - r}}{a}{{L}_{\lambda }}\left( {\frac{{\delta (a - r)}}{a}} \right) - \frac{2}{\delta }\int\limits_{\frac{{\delta (a - r)}}{a}}^{\frac{{\delta (2a - r)}}{{2a}}} {{{L}_{\lambda }}(y)dy} ,\quad a > r > 0. \hfill \\ \end{gathered} \right.$$
(36)

This statement follows from the Lemma by a direct calculation, which is relatively simple, since the argument of the Laplace transform in formula (30) is a piecewise linear function.
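Formula (36) can be checked against a direct Monte Carlo simulation of χδ(0) = max over the jumps on [0, δ] of |ξj − ξ0|. The sketch below takes a degenerate intensity λ (so that Lλ(s) = e^(−λs) and the integrals in (36) are elementary); all numerical parameters are illustrative choices of ours.

```python
import math
import random

rng = random.Random(3)
a, lam, delta, n = 1.0, 2.0, 0.8, 200_000

L  = lambda s: math.exp(-lam * s)                 # degenerate intensity lam
IL = lambda u: (1 - math.exp(-lam * u)) / lam     # int_0^u L(y) dy

def p_formula(r):
    """Right-hand side of (36), with the integrals in closed form."""
    if r >= 2 * a:
        return 0.0
    hi = delta * (2 * a - r) / (2 * a)            # upper limit of integration
    if r >= a:
        return 2 - r / a - (2 / delta) * IL(hi)
    lo = delta * (a - r) / a                      # lower limit of integration
    return 1 - (a - r) / a * L(lo) - (2 / delta) * (IL(hi) - IL(lo))

def chi(rng):
    """chi_delta(0): largest deviation of the PSI-process from xi_0 on [0, delta]."""
    n_jumps, t = 0, rng.expovariate(lam)          # jumps of Pi_1(lam t)
    while t <= delta:
        n_jumps += 1
        t += rng.expovariate(lam)
    xi0 = rng.uniform(-a, a)
    return max((abs(rng.uniform(-a, a) - xi0) for _ in range(n_jumps)),
               default=0.0)

samples = [chi(rng) for _ in range(n)]
for r in (0.5, 1.0, 1.5):
    est = sum(s >= r for s in samples) / n
    print(r, est, p_formula(r))   # Monte Carlo vs formula (36)
```

The empirical tail probabilities match (36) to Monte Carlo accuracy in both regimes r ∈ (0, a) and r ∈ [a, 2a).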

We now obtain a formula for the local modulus of continuity in the case when the intensity has the Laplace transform proportional to the covariance function (10) of the fOUW–T process (8).

Distribution (36) of the local modulus of continuity χδ(0) has an atom at zero, which corresponds to the situation when the leading Poisson process has no jumps on [0, δ]. Obviously, as δ → 0+, the weight of this atom tends to 1. However, conditionally on χδ(0) > 0, the distribution has the following nontrivial asymptotics.

Statement 4. Let the terms of the driven sequence (ξ) have identical uniform distributions on [–a, a], and let the leading doubly stochastic Poisson process have the random intensity \({{\varrho }_{\kappa }}\). Then the following limiting relation holds for χδ:

$$\mathop {\lim }\limits_{\delta \to 0 + } \mathbb{P}({{\chi }_{\delta }}(0)\; \geqslant \;\left. r \right|{{\chi }_{\delta }}(0) > 0) = \left\{ \begin{gathered} 0,\quad r\; \geqslant \;2a; \hfill \\ \frac{1}{\kappa }{{\left( {1 - \frac{r}{{2a}}} \right)}^{{2\kappa }}},\quad 2a > r\; \geqslant \;a; \hfill \\ \frac{1}{\kappa }\left( {{{{\left( {1 - \frac{r}{{2a}}} \right)}}^{{2\kappa }}} - (1 - \kappa ){{{\left( {1 - \frac{r}{a}} \right)}}^{{2\kappa }}}} \right),\quad a > r > 0. \hfill \\ \end{gathered} \right.$$
(37)

Note the dependence of this limiting conditional distribution on κ. It is explained by the nonzero probability that, for the random intensity \({{\varrho }_{\kappa }}\), more than one jump occurs on the small interval [0, δ], given that jumps occur there at all; this probability depends considerably on κ.

Proof. From relation (25), using the Legendre duplication formula and the Euler reflection formula for the gamma function, we find the asymptotic expansion at zero of the Laplace transform Lλ of the random intensity \({{\varrho }_{\kappa }}\) (and, simultaneously, the expansion of the covariance ρκ):

$${{L}_{\lambda }}(t) = \frac{{{{\rho }_{\kappa }}(t)}}{{{{V}^{2}}}} = 1 - \frac{{\Gamma \left( {\frac{3}{2} - \kappa } \right)}}{{\Gamma \left( {\kappa + \frac{1}{2}} \right)}}{{\left( {\frac{{\beta t}}{2}} \right)}^{{2\kappa - 1}}} + O({{t}^{2}}),\quad t \to 0{\kern 1pt} {\text{ + }}.$$
(38)

Denoting for brevity

$$C\;\mathop = \limits^\Delta \;\frac{{{{\beta }^{{2\kappa - 1}}}\Gamma (3{\text{/}}2 - \kappa )}}{{{{2}^{{2\kappa - 1}}}\Gamma (\kappa + 1{\text{/}}2)}}$$
(39)

and substituting the relation

$$\frac{2}{\delta }\int\limits_{A\delta }^{B\delta } {(1 - C{{y}^{{2\kappa - 1}}} + O({{y}^{2}}))dy} = 2(B - A) - C{{\delta }^{{2\kappa - 1}}}({{B}^{{2\kappa }}} - {{A}^{{2\kappa }}}){\text{/}}\kappa + O({{\delta }^{2}}),\quad \delta \to 0 + ,$$

into formula (36), with A and B equal to the lower and upper limits of integration in (36), respectively, we obtain that, as δ → 0+, the following holds

$$\mathbb{P}({{\chi }_{\delta }}(0)\; \geqslant \;r) = \left\{ \begin{gathered} 0,\quad r\; \geqslant \;2a; \hfill \\ \frac{C}{\kappa }{{\left( {1 - \frac{r}{{2a}}} \right)}^{{2\kappa }}}{{\delta }^{{2\kappa - 1}}} + O({{\delta }^{2}}),\quad 2a > r\; \geqslant \;a; \hfill \\ \frac{C}{\kappa }\left( {{{{\left( {1 - \frac{r}{{2a}}} \right)}}^{{2\kappa }}} - (1 - \kappa ){{{\left( {1 - \frac{r}{a}} \right)}}^{{2\kappa }}}} \right){{\delta }^{{2\kappa - 1}}} + O({{\delta }^{2}}),\quad a > r > 0. \hfill \\ \end{gathered} \right.$$

Dividing this expression by the probability of the condition, \(\mathbb{P}\)(χδ(0) > 0) = 1 – Lλ(δ) ~ Cδ2κ – 1, δ → 0+, we obtain formula (37).
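The integral expansion used in the proof is exact for the polynomial part of the integrand, which is easy to confirm numerically; in the sketch below, the values of κ, C, A, B, and δ are arbitrary illustrative choices (with κ ∈ (1/2, 1), as in the statement).

```python
# Numerical check of the identity
#   (2/delta) * int_{A delta}^{B delta} (1 - C y^(2k-1)) dy
#       = 2(B - A) - C delta^(2k-1) (B^(2k) - A^(2k)) / k,
# which is exact without the O(y^2) remainder.
kappa, C, A, B, delta = 0.7, 1.3, 0.25, 0.75, 0.01   # illustrative values

def lhs(n=200_000):
    """Left-hand side via the midpoint rule on [A*delta, B*delta]."""
    lo, hi = A * delta, B * delta
    h = (hi - lo) / n
    s = sum(1 - C * (lo + (i + 0.5) * h) ** (2 * kappa - 1) for i in range(n))
    return (2 / delta) * s * h

rhs = (2 * (B - A)
       - C * delta ** (2 * kappa - 1) * (B ** (2 * kappa) - A ** (2 * kappa)) / kappa)
print(lhs(), rhs)   # coincide up to quadrature error
```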