1 Introduction

Assume that we have a family \(\{X, X_{j}, j=1,2,\dots \}\) of independent identically distributed (i.i.d.) random variables (r.vs.) whose common distribution has finite mean and finite positive variance:

$$\begin{aligned} \mu =\mathbf {E}X,\qquad \sigma ^{2}=\mathbf {D}X<\infty , \qquad F_{X}(x)= \mathbf {P}(X<x),\quad x\in \mathbb {R}, \end{aligned}$$
(1)

where \(\mathbb {R}\) is the set of real numbers. In addition,

$$\begin{aligned} \mathbf {E}X^{k}=\frac{1}{i^{k}}\dfrac{d^{k}}{du^{k}}f_{X}(u)\Big |_{u=0},\quad {\varGamma }_{k}(X)=\frac{1}{i^{k}}\dfrac{d^{k}}{du^{k}}\ln f_{X}(u)\Big |_{u=0}, \; k=1,2,\dots , \end{aligned}$$
(2)

denote the kth-order moments and cumulants of X, where \(f_{X}(u)\) is the characteristic function (ch.f.)

$$\begin{aligned} f_{X}(u)=\mathbf {E}e^{iuX}=\int _{-\infty }^{\infty }e^{iux}dF_{X}(x), \qquad u \in \mathbb {R}, \end{aligned}$$
(3)

of the random variable (r.v.) X. The existence of the cumulants \({\varGamma }_{k}(X)\) up to order k is implied by the existence of the absolute moments of X up to order k. Here \({\varGamma }_{1}(X)=\mathbf {E}X\) and \({\varGamma }_{2}(X)=\mathbf {D}X\).
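
As a quick illustration of (2) and (3) (a minimal sketch of ours, not part of the original text), moments and cumulants can be recovered symbolically; here we assume sympy and take X to be standard exponential, for which \(f_{X}(u)=1/(1-iu)\), \(\mathbf {E}X^{k}=k!\), and \({\varGamma }_{k}(X)=(k-1)!\):

```python
# Moments and cumulants of Exp(1) via derivatives of the ch.f. and of its
# logarithm, as in (2). Expected output: moments k!, cumulants (k-1)!.
import sympy as sp

u = sp.symbols('u', real=True)
f_X = 1 / (1 - sp.I * u)   # ch.f. of the standard exponential distribution

for k in range(1, 5):
    moment = sp.simplify(sp.diff(f_X, u, k).subs(u, 0) / sp.I**k)
    cumulant = sp.simplify(sp.diff(sp.log(f_X), u, k).subs(u, 0) / sp.I**k)
    print(k, moment, cumulant)
```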

The theory of large deviations deals with the probabilities of rare events that are exponentially small as a function of some parameter. For example, in insurance mathematics, such problems arise in the approximation for small probabilities of large claims that occur rarely. The theory of large deviations was originally created for non-random sums \(S_{n}=\sum _{j=1}^{n}X_{j}\), \(n\in \mathbb {N}\), where \(\mathbb {N}=\{1,2,\dots \}\) is the set of natural numbers, and then extended to a class of random processes [34].

The first fundamental theorem of large deviations for \(S_{n}\) was proved by Cramér [8], who showed that the rate function is the convex conjugate of the logarithm of the moment generating function of the underlying common distribution. The most studied cases (see, e.g., [34]) are the following: the case where the Cramér condition is satisfied, namely, the characteristic functions of the summands are analytic in a neighborhood of zero; the Linnik case, where all the moments of the summands are finite but their growth does not guarantee the analyticity of the ch.f. in a neighborhood of zero; the case of so-called moderate deviations, where the summands have only a finite number of moments; and the case where neither the Cramér nor the Linnik condition is fulfilled, but the distribution tails of the summands behave sufficiently regularly.

Many of the basic ideas and results of large deviation theorems for the sums \(S_{n}\) were presented by Ibragimov and Linnik, Petrov, and S. V. Nagaev [16, 30, 32]. In these studies, large deviation theorems were obtained by the rather complicated analytical saddle-point method [18] and, as a rule, for sums of i.i.d. r.vs.; this is the simplest case that allows one to grasp the general picture of large deviation probabilities. Asymptotic expansions for large deviations were first obtained by Kubilius [24]. Without a detailed exposition of the history of asymptotic expansions and local limit theorems taking large deviations into account in the scheme of summation of r.vs., we cite, e.g., [2, 7, 30, 31, 33]; see also the books [16, 32, 34] and references therein.

The next major step in addressing problems of large deviation theorems was made when Statulevičius [35] proposed the method of cumulants to consider large deviation probabilities for various statistics. The cumulant method was developed by Rudzkis, Saulis, and Statulevičius [34, p. 18]. They proved a general lemma of large deviations for an arbitrary r.v. X with mean \(\mu =0\), variance \(\sigma ^{2}=\mathbf {E}X^{2}=1\), and regular behavior of its cumulants (see condition \((S_{\gamma })\) below): there exist \(\gamma \ge 0\) and \({\varDelta }> 0\) such that

$$\begin{aligned}&|{\varGamma }_{k}(X)| \le \frac{(k!)^{1+\gamma }}{{\varDelta }^{k-2}}, \qquad k=3,4, \ldots . \qquad \qquad \qquad \,\quad \quad \quad \quad (S_{\gamma }) \end{aligned}$$

The method of cumulants provided a way to obtain large deviation theorems for sums of independent and dependent r.vs., polynomial forms, multiple stochastic integrals of random processes, and polynomial statistics in both the Cramér (\(\gamma =0\)) and the power Linnik (\(\gamma >0\)) zones. The monograph [34] addresses these issues. Beyond its use for a more precise asymptotic analysis of distributions via rates of convergence and large deviation probabilities, the cumulant method also served Döring and Eichelsbacher [11], who established moderate deviation principles for a rather general class of r.vs. fulfilling certain bounds on their cumulants.

For asymptotic expansions and local limit theorems that take large deviations into account when the cumulant method is used, we cite only [9, 10] and [34, p. 154–187], as these works reflect the area of our interest. In more detail, Saulis [34, p. 154] presented an asymptotic expansion for the density function of an arbitrary r.v. with zero mean and unit variance. Based on the aforementioned result, Saulis (see Theorem 6.1 in [34, p. 180]) established an asymptotic expansion in the Cramér zone of large deviations for the density function of the sum \(S_{n}\) of independent nonidentically distributed r.vs.; the structure of the remainder term of the asymptotic expansion in the case \(\gamma =0\) was also given. Asymptotic expansions of large deviations in both the Cramér and the power Linnik zones for the density function were generalized in [9, 10] by considering asymptotic expansions in the zones of large deviations for the density function of sums of independent r.vs. in a triangular array scheme. These results improve on the results on sums of r.vs. with weights in [6].

The theory of large deviations offers interesting problems when the number of summands is itself a r.v. Let us consider the most popular counting process: \(N_{t}\), \(t \ge 0\), a homogeneous Poisson process (see [27]) with linear mean value function \(\Lambda (t)=\lambda t\), \(t \ge 0\), for some \(\lambda > 0\), and with the distribution

$$\begin{aligned} q_{m}=\mathbf {P}(N_{t}=m)=e^{-\lambda t}\frac{(\lambda t)^{m}}{m!},\qquad m \in \mathbb {N}_{0}, \end{aligned}$$
(4)

where \(\mathbb {N}_{0}=\{0,1,2, \dots \}\). In addition,

$$\begin{aligned} \mathbf {E}N_{t}=\lambda t,\qquad \mathbf {D}N_{t}=\lambda t. \end{aligned}$$
(5)

Throughout, we assume that \(N_{t}\) is independent of \(\{X,X_{j},j=1,2,\dots \}\). If \(N_{t}\) is a homogeneous Poisson process, then

$$\begin{aligned} S_{N_{t}}=\sum _{j=1}^{N_{t}}X_{j}, \qquad S_{0}=0, \end{aligned}$$
(6)

is a compound Poisson process.

Since the r.vs. \(X_{1}, X_{2},\dots \) and \(N_{t}\) are independent, and \(\{X,X_{j},j=1,2,\dots \}\) are i.i.d., according to (1), (4), and (5),

$$\begin{aligned} \mathbf {E}S_{N_{t}}&= \sum _{m=0}^{\infty }\mathbf {E}\left( \sum _{j=1}^{N_{t}}X_{j}\,\Big \vert \,N_{t}=m\right) q_{m}=\mu \sum _{m=0}^{\infty }m q_{m} =\lambda t\mu , \end{aligned}$$
(7)
$$\begin{aligned} \mathbf {E}S_{N_{t}}^{2}&=\mathbf {E}\left( \sum _{j=1}^{N_{t}}X_{j}\right) ^{2}=\sum _{m=0}^{\infty }\left( \mathbf {E}\sum _{j=1}^{m}X_{j}^{2}+\mathbf {E}\sum _{i,j=1,\ i\ne j}^{m}X_{i}X_{j} \right) q_{m}\nonumber \\&=\sum _{m=0}^{\infty }\left( m\mathbf {E}X^{2}+m(m-1)\mu ^{2}\right) q_{m}\nonumber \\&=\mathbf {E}X^{2}\lambda t+\mu ^{2}\mathbf {E}N_{t}^{2}-\mu ^{2} \lambda t, \nonumber \\ \mathbf {D}S_{N_{t}}&= \mathbf {E}S_{N_{t}}^{2}-(\mathbf {E}S_{N_{t}})^{2} = \lambda t(\mu ^{2}+\sigma ^ {2}). \end{aligned}$$
(8)
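
The identities (7) and (8) are easy to confirm by simulation. The following minimal sketch (ours, not from the paper; it assumes numpy and Exp(1) claims, so that \(\mu =\sigma ^{2}=1\)) checks that the empirical mean and variance of \(S_{N_{t}}\) match \(\lambda t\mu \) and \(\lambda t(\mu ^{2}+\sigma ^{2})\):

```python
# Monte Carlo check of (7)-(8) for a compound Poisson sum with Exp(1) claims.
import numpy as np

rng = np.random.default_rng(0)
lam_t, n_sim = 50.0, 100_000
N = rng.poisson(lam_t, size=n_sim)                        # claim counts
S = np.array([rng.exponential(1.0, m).sum() for m in N])  # compound sums

print(S.mean(), lam_t)        # E S_{N_t} = lambda*t*mu = 50
print(S.var(), 2 * lam_t)     # D S_{N_t} = lambda*t*(mu^2+sigma^2) = 100
```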

For instance, in continuous-time dynamic models of an insurance company [27, p. 152], the surplus \(R_{t}\) at time t can be expressed as \(R_{t}=R_{0}+P_{t}-S_{N_{t}}, \ t \ge 0\). Here, \(R_{0}\) is the initial reserve, and \(P_{t}\) is the total premium received up to time t; that is, the company sells insurance policies and receives a premium according to \(P_{t}\). The sum (6) is the total claim amount process in the time interval [0, t]. In this example, \(X_{j}, \ j=1,2,\ldots \), denotes the jth claim, and \(N_{t}\) is the number of claims up to time t.

Because \(N_{t}\) is independent of \(X_{j}\), \(j=1,2,\dots \), according to (3) and (4), the ch.f.

$$\begin{aligned} f_{S_{N_{t}}}(u)=\mathbf {E}e^{iuS_{N_{t}}} =\overset{\infty }{\underset{m=0}{\sum }} f_{X}^{m}(u)q_{m}=e^{-\lambda t(1-f_{X}(u))}, \quad u\in \mathbb {R}, \end{aligned}$$
(9)

of (6) exists whenever the ch.f. (3) of the r.v. X exists.

Let us consider the standardized compound Poisson process

$$\begin{aligned} \tilde{S}_{N_{t}}=\frac{S_{N_{t}}-\mathbf {E}S_{N_{t}}}{\sqrt{\mathbf {D}S_{N_{t}}}},\qquad \mathbf {D}S_{N_{t}}>0, \quad t>0. \end{aligned}$$
(10)

Hence, it follows from (7)–(9) that

$$\begin{aligned} f_{\tilde{S}_{N_{t}}}(u) = f^{\lambda t}_{S_{N_{1}}-\mu }(\bar{u}), \qquad \bar{u}=u/ \sqrt{\mathbf {D}S_{N_{t}}}, \end{aligned}$$
(11)

where

$$\begin{aligned} f_{S_{N_{1}}-\mu }(\bar{u})=\exp \{f_{X}(\bar{u})-1-i\mu \bar{u}\} \end{aligned}$$

is the ch.f. of the r.v. \(S_{N_{1}}-\mu =\sum ^{N_{1}}_{j=1}X_{j}-\mu \), with \(N_{1}\) a Poisson r.v. with parameter 1. Also, \(\mathbf {E}(S_{N_{1}}-\mu )=0\) and \(\mathbf {D}(S_{N_{1}}-\mu )=\mathbf {E}X^{2}.\) As discussed in [4, 22], the representation (11) shows that the asymptotic behavior of \(f_{\tilde{S}_{N_{t}}}(u)\) as \(\lambda t\rightarrow \infty \) is similar to that of the ch.f. of the r.v. \(\sum _{j=1}^{n}X_{j}/\sqrt{\mathbf {D}S_{N_{t}}}\) as \(n\asymp \lambda t\rightarrow \infty \) (we write \(u(x)\asymp v(x)\) for real functions u(x) and v(x) if \(u(x)=O(v(x))\) and \(v(x)=O(u(x))\)), where the \(X_{j}\) are independent r.vs. The asymptotic properties of Poisson random sums are to a great extent similar to the corresponding properties of sums of the same r.vs. with a non-random number of summands [3]. However, this analogy is not absolutely exact because, for example, the distribution function

$$\begin{aligned} F_{S_{N_{t}}}(x)=e^{-\lambda t}F_{0}(x)+\overset{\infty }{\underset{m=1}{\sum }} q_{m}F_{X}^{*m}(x), \qquad x\in \mathbb {R}, \end{aligned}$$
(12)

of the compound Poisson process is not absolutely continuous for all \(x\in \mathbb {R}\) because of the presence of an atom at zero. Here \(F_{X}^{*m}(x)\) is the m-fold convolution of the distribution function \( F_{X}(x)\) of the r.v. X with itself, and \(F_{0}(x)\) is the distribution function with a single unit jump at zero.

Central limit problems for Poisson random sums have been addressed, for example, in [4, 21]; see also the books [3, 27] and references therein. Local limit theorems for Poisson random sums are available in [22], where the results for non-random sums presented in [23] are extended. For treatments of asymptotic expansions for Poisson random sums, we refer the reader, for example, to [1, 3].

Presently, there are many strong results, for example, [5, 12, 26, 29], on approximations of exponential bounds of tail probabilities for compound Poisson sums under different assumptions and with various applications. More specifically, Embrechts [12] considered saddle-point approximations in the context of the compound Poisson sum. Saddle-point approximations of ruin probabilities in other contexts have also been well studied; see [18] and the references therein. On Edgeworth expansions for compound Poisson processes, we refer to [1]. Cramér-type moderate deviations for a studentized compound Poisson process have been addressed in [17]. For some special classes of heavy-tailed distributions, see, for example, [28, 29].

As we are interested here not only in the convergence to the normal distribution but also in a more accurate asymptotic analysis, we must first find a suitable bound for the kth-order cumulants of the standardized compound Poisson process \(\tilde{S}_{N_{t}}\) defined by (10). To obtain upper bounds for \({\varGamma }_{k}(\tilde{S}_{N_{t}})\), \(k=3,4,\dots \), we must impose conditions on the kth-order moments of the r.v. X whose common distribution has mean and finite positive variance as in (1). Consequently, we say that the r.v. X satisfies condition \((\bar{B}_{0})\) if there exists a constant \(K>0\) such that

$$\begin{aligned}&|\mathbf {E}(X-\mu )^{k}|\le k!K^{k-2}\sigma ^{2},\qquad k=3,4,\dots . \qquad \qquad \qquad \quad (\bar{B}_{0}) \end{aligned}$$

Condition \((\bar{B}_{0})\) is often called the Bernstein condition. Note that if Cramér’s condition holds, that is, there exists \(a>0\) such that \(\mathbf {E}\exp \{a|X|\} <\infty \), then X satisfies condition \((\bar{B}_{0})\). Condition \((\bar{B}_{0})\) ensures the existence of moments of all orders of the r.v. X.
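
For instance (a sketch of ours, not from the paper), the Bernstein condition can be checked symbolically for the standard exponential distribution, where \(\mu =\sigma ^{2}=1\) and the central moments \(\mathbf {E}(X-1)^{k}\) are the derangement numbers, so \(K=1\) already works:

```python
# Checking (B_0) for Exp(1): |E(X-1)^k| <= k! * K^{k-2} * sigma^2 with K = 1.
import sympy as sp

x = sp.symbols('x', positive=True)
pdf, mu, sigma2, K = sp.exp(-x), 1, 1, 1
for k in range(3, 9):
    m_k = sp.integrate((x - mu)**k * pdf, (x, 0, sp.oo))  # central moment
    assert abs(m_k) <= sp.factorial(k) * K**(k - 2) * sigma2
    print(k, m_k, sp.factorial(k))
```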

Taking into account the fact that \({\varGamma }_{k}(X)= {\varGamma }_{k}(X-\mu )\), \(k=3,4,\dots \), and using Lemma 3.1 in [34, p. 42], we obtain the following proposition.

Proposition 1.1

If the r.v. X satisfies condition \((\bar{B}_{0})\), then

$$\begin{aligned} |{\varGamma }_{k}(X)|\le k!M^{k-2}\sigma ^{2},\qquad M=2 \max \{\sigma ,K\}, \qquad k=2,3,\dots . \end{aligned}$$
(13)

We shall also need the following proposition.

Proposition 1.2

If the r.v. X with \(0<\sigma ^{2}<\infty \) satisfies condition \((\bar{B}_{0})\) and \(N_{t}\), \(t> 0\), is a homogeneous Poisson process with the distribution (4), then

$$\begin{aligned} \vert {\varGamma }_{k}(\tilde{S}_{N_{t}})\vert \le \dfrac{ k!}{{\varDelta }_{t}^{k-2}},\qquad {\varDelta }_{t}=\frac{\sqrt{\lambda t(\sigma ^{2}+\mu ^ {2})}}{K}, \quad K>0, \quad k=3,4,\dots \ . \end{aligned}$$
(14)

Proof

Equations (2) and (9) give the kth-order cumulants of the compound Poisson process \(S_{N_{t}}\), which is defined by (6):

$$\begin{aligned} {\varGamma }_{k}(S_{N_{t}})= \dfrac{d^{k}}{i^{k}du^{k}}\ln f_{S_{N_{t}}}(u) \Big \vert _{u=0} =\lambda t\mathbf {E}X^{k},\qquad k=1,2,\dots \ . \end{aligned}$$
(15)

Based on (\(\bar{B}_{0 }\)), for the kth-order moments \(\mathbf {E}X^{k}\) of the r.v. X with \(0<\sigma ^{2}<\infty \), we use the following condition:

$$\begin{aligned} |\mathbf {E}X^{k}|\le k!K^{k-2}\mathbf {E}X^{2},\qquad k=3,4,\dots \ . \end{aligned}$$

Therefore, the relations \({\varGamma }_{k}(\tilde{S}_{N_{t}})={\varGamma }_{k}(S_{N_{t}})/(\mathbf {D}S_{N_{t}})^{k/2}\), \(k=2,3,\ldots \), yield (14). Here \(\mathbf {D}S_{N_{t}}\) is defined by (8). \(\square \)
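
As a numerical illustration (our sketch, not from the paper), the bound (14) can be checked directly for Exp(1) claims, where \(\mathbf {E}X^{k}=k!\), \(K=1\), and, by (15), \({\varGamma }_{k}(\tilde{S}_{N_{t}})=\lambda t\,k!/(\lambda t\,\mathbf {E}X^{2})^{k/2}\):

```python
# Verifying the cumulant bound (14) for Exp(1) claims (E X^2 = 2, K = 1).
import math

lam_t, K = 100.0, 1.0
var_S = 2.0 * lam_t                       # D S_{N_t} = lambda*t*E X^2
Delta_t = math.sqrt(var_S) / K            # as in (14)
for k in range(3, 8):
    gamma_k = lam_t * math.factorial(k) / var_S ** (k / 2)   # by (15)
    bound = math.factorial(k) / Delta_t ** (k - 2)
    assert gamma_k <= bound
    print(k, gamma_k, bound)
```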

Note that, for convergence to the standard normal distribution, it is sufficient that \({\varGamma }_{k}(\tilde{S}_{N_{t}})\rightarrow 0\) as \(\lambda t\rightarrow \infty \) for each \(k\ge 3\) (Leonov, 1964).

For the normal approximation that takes large deviations into consideration in both the Cramér and the power Linnik zones for the distribution function of a compound Poisson process when the cumulant method is used, we refer the reader to [19, 20]. Observe that, by Theorems 1 and 2 in [19, p. 134–135], we deduce the following:

Corollary 1.1

If the r.v. \( X \) with \(0<\sigma ^{2}<\infty \) satisfies condition \((\bar{B}_{0})\), and \(N_{t}\) is a homogeneous Poisson process with the distribution (4), then for \(x\ge 0\), \(x=O((\lambda t)^{1/6})\), \(\lambda t\rightarrow \infty \), we have

$$\begin{aligned} \dfrac{1-F_{\tilde{S}_{N_{t}}}(x)}{1-{\varPhi }(x)} =\exp \left\{ \frac{x^{3}}{\sqrt{\lambda t}} \frac{\mathbf {E}X^{3}}{6(\sigma ^2+\mu ^2)^{3/2}}\right\} \Big (1+O\Big (\frac{x+1}{\sqrt{\lambda t}}\Big )\Big ), \end{aligned}$$

where \({\varPhi }(x)\) is the standard normal distribution function.

Corollary 1.2

If the r.v. \( X \) with \(0<\sigma ^{2}<\infty \) satisfies condition \((\bar{B}_{0})\), and \(N_{t}\) is a homogeneous Poisson process with the distribution (4), then

$$\begin{aligned} \frac{1-F_{\tilde{S}_{N_{t}}}(x)}{1-{\varPhi }(x)}\rightarrow 1,\qquad \frac{F_{\tilde{S}_{N_{t}}}(-x)}{{\varPhi } (-x)}\rightarrow 1 \end{aligned}$$

hold for \(x\ge 0\), \(x=o((\lambda t)^{1/6})\), as \(\lambda t\rightarrow \infty \).
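
Corollaries 1.1 and 1.2 can be illustrated numerically. In the sketch below (ours, not from the paper; it assumes numpy/scipy and Exp(1) claims), the tail of \(S_{N_{t}}\) is computed exactly from (12), since the m-fold convolution of Exp(1) is a Gamma(m, 1) law, and the ratio \((1-F_{\tilde{S}_{N_{t}}}(x))/(1-{\varPhi }(x))\) is compared with the exponential factor of Corollary 1.1:

```python
# Tail ratio of the standardized compound Poisson sum vs. Corollary 1.1,
# for Exp(1) claims: mu = sigma^2 = 1, E X^3 = 6.
import numpy as np
from scipy.stats import norm, poisson
from scipy.special import gammaincc      # regularized upper incomplete gamma

lam_t = 200.0
mu, sigma2, EX3 = 1.0, 1.0, 6.0
mean_S, sd_S = lam_t * mu, np.sqrt(lam_t * (sigma2 + mu ** 2))

m = np.arange(1, 2000)
q = poisson.pmf(m, lam_t)                # q_m from (4)
for x in [0.5, 1.0, 1.5, 2.0]:
    s = mean_S + sd_S * x
    tail = np.sum(q * gammaincc(m, s))   # exact 1 - F_{S_{N_t}}(s) via (12)
    ratio = tail / norm.sf(x)
    predicted = np.exp(x ** 3 * EX3 / (6 * np.sqrt(lam_t) * (sigma2 + mu ** 2) ** 1.5))
    print(x, ratio, predicted)
```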

The main purpose of this paper is an asymptotic expansion that takes large deviations in the Cramér zone into consideration for the distribution density function of the process (10) (see Proposition 3.1, Theorem 3.1, and Corollary 3.1 in Sect. 3). In Sect. 3, the result on the asymptotic expansion (33) extends asymptotic expansions, taking large deviations in the Cramér zone into consideration, for the density function of sums with a non-random number of summands [9, 10]. The solution to the problem of the aforementioned section is obtained by first using a general lemma presented by Saulis (1980) [34, p. 154] on the asymptotic expansion of the density function of an arbitrary r.v. with zero mean and unit variance. Among the existing methods for large deviations (see, e.g., [13–15, 18, 34]), we rely on the cumulant method [34]. Following [9, 10, 34], to estimate the remainder term (35) of the asymptotic expansion (33), along with the aforementioned methods, Statulevičius’ known estimates for characteristic functions are used [34, p. 172–174]. Consequently, Sect. 2 is devoted to presenting the auxiliary Lemmas 2.1–2.3 that lead to an estimate of the remainder term (35).

2 Auxiliary Lemmas

Suppose that for an arbitrary r.v. X with mean \(\mu = 0\), variance \(\sigma ^{2}<\infty \), and distribution function \(F_{X}(x)=\mathbf {P}(X<x)\) for all \(x\in \mathbb {R}\), there exists a density function

$$\begin{aligned} p_{X}(x)=\dfrac{d}{dx}F_{X}(x). \end{aligned}$$

Moreover, let \(X^{^{\prime }}=X-Y\) be the symmetrized r.v., where Y is independent of X and has the same distribution. Clearly, the distribution and characteristic functions of \(X^{^{\prime }}\) are

$$\begin{aligned} F_{X^{^{\prime }}}(x)=\int _{-\infty }^{\infty }F_{X}(x+y)dF_{X}(y),\qquad f_{X^{^{\prime }}}(u)=|f_{X}(u)|^{2}. \end{aligned}$$

The corresponding density will be denoted by \(p_{X^{^{\prime }}}(x)\). Statulevičius proved the following lemmas (see Lemmas 2.1, 2.2, 2.3 in [34, p. 172–174]).

Lemma 2.1

Let X be any r.v. with density \(p_{X}(x)\). Then, for any collection \(\mathfrak {M}=\{{\varDelta }_{i},A_{i},i=1,2,\ldots \}\) of non-overlapping intervals \({\varDelta }_{i}\) and positive constants \(A_{i}<\infty \), and for any \(-\infty<u<\infty \), the estimate

$$\begin{aligned} |f_{X}(u)|\le \exp \Big \{-\frac{u^{2}}{3}\overset{\infty }{\underset{i=1}{ \sum }}\frac{\mathcal {Q}_{i}^{3}}{(|{\varDelta }_{i}||u|+2\pi )^{2}A_{i}^{2}} \Big \} \end{aligned}$$

holds, where

$$\begin{aligned} \mathcal {Q}_{i}=\int _{{\varDelta }_{i}}\min \{A_{i},p_{X^{^{\prime }}}(x)\}dx. \end{aligned}$$

Corollary 2.1

If \(p_{X}(x)\le A<\infty \) and \(\sigma ^{2}=\mathbf {E}X^{2}<\infty \), then

$$\begin{aligned} |f_{X}(u)|\le \exp \Big \{-\frac{u^{2}}{96}\frac{1}{(2\sigma |u|+\pi )^{2}A^{2}}\Big \}, \end{aligned}$$

for all \(-\infty<u<\infty ,\) where \(A>0.\)
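
As a quick numerical check (ours, not from the paper), Corollary 2.1 holds with much room to spare for the standard normal law, where \(|f_{X}(u)|=e^{-u^{2}/2}\) and \(A=1/\sqrt{2\pi }\):

```python
# Corollary 2.1 checked on a grid for X ~ N(0,1).
import numpy as np

A, sigma = 1 / np.sqrt(2 * np.pi), 1.0
u = np.linspace(-20, 20, 4001)
lhs = np.exp(-u ** 2 / 2)                 # |f_X(u)| for the standard normal
rhs = np.exp(-u ** 2 / (96 * (2 * sigma * np.abs(u) + np.pi) ** 2 * A ** 2))
print(np.all(lhs <= rhs))                 # True: the bound holds everywhere
```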

Lemma 2.2

Let a nonnegative function g(u), defined on the interval \([b,\infty )\), satisfy the Lipschitz condition \(|g(u+s)-g(u)|\le K|s|\). Moreover, let \(V:=\int _{b}^{\infty }g(u)du<\infty .\) Then for any \(\varepsilon >0\) and any partition \(b=u_{0}<u_{1}<\dots \) of the interval \([b,\infty )\) with \(\underset{0\le k<\infty }{\max } (u_{k+1}-u_{k})\le \varepsilon \), we have the inequality

$$\begin{aligned} \overset{\infty }{\underset{k=0}{\sum }}\left( \underset{u_{k}\le u\le u_{k+1}}{ \max }g^{2}(u)\right) {\varDelta } u_{k}\le V\left( 2K\varepsilon +4\underset{b\le u<\infty }{\sup }g(u)\right) , \end{aligned}$$

where \({\varDelta } u_{k}=u_{k+1}-u_{k}\).

Let us assume that \(X_{j}\), \(j=1,2,\ldots \), are independent, nonidentically distributed r.vs. with variances \(\sigma _{j}^{2}\), and put \(B_{n}^{2}=\sum _{j=1}^{n}\sigma _{j}^{2}\). Let

$$\begin{aligned} l_{n}(H_{n})=\frac{1}{B_{n}^{2}}\overset{n}{\underset{j=1}{\sum }} \int _{|x|\le H_{n}}x^{2}p_{X_{j}^{^{\prime }}}(x)dx,\quad J_{n}(u)=\sum \limits _{j=1}^{n}\int _{-\infty }^{\infty }\langle ux \rangle ^{2}p_{X_{j}^{^{\prime }}}(x)dx,\quad H_{n}>0, \end{aligned}$$

where \(\langle b \rangle \) denotes the distance from the number b to the nearest integer.

Lemma 2.3

For any \(n\ge 1\) and \(H_{n}>0,\) there exists a partition \(\dots<u_{-1}^{(n)}<u_{0}^{(n)}=0<u_{1}^{(n)}<u_{2}^{(n)}<\dots \) of the interval \((-\infty ,\infty )\) satisfying the condition

$$\begin{aligned} (6H_{n})^{-1}\le {\varDelta } u_{k}^{(n)}\le (4H_{n})^{-1},\qquad {\varDelta } u_{k}^{(n)}=u_{k+1}^{(n)}-u_{k}^{(n)}, \end{aligned}$$

such that

$$\begin{aligned} J_{n}(u)\ge \frac{1}{2}l_{n}(H_{n})\left( u-u_{k0}^{(n)}\right) ^{2}B_{n} ^{2}, \end{aligned}$$

provided \(u\in [u_{k}^{(n)},\) \(u_{k+1}^{(n)}],\) where, for a given n, \(u^{(n)}_{k0}\) equals either \(u_{k}^{(n)}\) or \(u_{k+1}^{(n)}\) depending on u.

3 Asymptotic Expansion in the Large Deviation Cramér Zone for the Density Function of the Compound Poisson Process

Along with condition (\(\bar{B}_{0}\)), we assume that the r.v. X, whose common distribution has mean and finite positive variance as in (1), has a density function \(p_{X}(x)=(d/dx)F_{X}(x)\) such that

$$\begin{aligned}&\sup _{x}p_{X}(x)\le A<\infty ,\qquad A>0. \qquad \qquad \qquad \qquad \qquad \quad (D^{\prime }) \end{aligned}$$

Let us recall that certain difficulties may appear in the formulation of problems related to local limit theorems for the compound Poisson process (6), as the distribution function (12) of \(S_{N_{t}}\) is not continuous for all \(x\in \mathbb {R}\) because of the presence of an atom at zero. We consider the case \(x>0\), where \(F_{0}(x)=1\); thus, if (12) is differentiable, then

$$\begin{aligned} p_{S_{N_{t}}}(x)=\frac{d}{dx}F_{S_{N_{t}}}(x)=\overset{\infty }{\underset{m=0 }{\sum }}p_{X_{1}+\cdots +X_{m}}(x)q_{m},\qquad x>0, \end{aligned}$$
(16)

where \(p_{0}(x)=0\). In addition, we may conclude that the fulfillment of the condition (\(D^{\prime }\)) implies

$$\begin{aligned} \sup _{x}p_{S_{N_{t}}}(x)\le A\overset{\infty }{\underset{m=0}{\sum }}q_{m}=A<\infty , \qquad A>0. \end{aligned}$$
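
For Exp(1) claims, the series (16) is explicit, since the m-fold convolution is a Gamma(m, 1) density; the following sketch (ours, and only an illustration of the bound above, with \(A=\sup _{x}p_{X}(x)=1\)) evaluates it numerically:

```python
# Evaluating (16) for Exp(1) claims and checking sup p_{S_{N_t}} <= A = 1.
import numpy as np
from scipy.stats import poisson, gamma

lam_t = 5.0
m = np.arange(1, 200)
q = poisson.pmf(m, lam_t)                        # q_m from (4)

def p_S(x):
    return np.sum(q * gamma.pdf(x, a=m))         # m-fold convolutions

xs = np.linspace(0.01, 30.0, 500)
vals = np.array([p_S(x) for x in xs])
print(vals.max(), vals.max() <= 1.0)             # bounded by A = 1
```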

Observe that, following the proof of the general Lemma 6.1 [34, p. 154], we must first introduce the r.v. conjugate to an arbitrary r.v. X and the process conjugate to the compound Poisson process \(S_{N_{t}}\); both are needed to achieve the purpose of the paper.

Definition 3.1

X(h), \(h=h(x)>0\), is the r.v. conjugate to an arbitrary r.v. X if the respective density and characteristic functions are

$$\begin{aligned} p_{X(h)}(x)=\varphi ^{-1}_{X}(h)\exp \{hx\}p_{X}(x),\qquad f_{X(h)}(u)=\varphi ^{-1}_{X}(h)\varphi _{X}(h+iu), \end{aligned}$$
(17)

where

$$\begin{aligned} \varphi _{X}(h)=\int _{-\infty }^{\infty }e^ {hx}p_{X}(x)dx, \qquad h\ge 0, \end{aligned}$$
(18)

is the generating function for the r.v. X.
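
For a concrete example of (17) and (18) (a sketch of ours, not from the paper), take X to be standard exponential: then \(\varphi _{X}(h)=1/(1-h)\) for \(0<h<1\), and the conjugate r.v. X(h) is again exponential, with rate \(1-h\); its moments agree with (24):

```python
# The conjugate (exponentially tilted) density (17) for X ~ Exp(1).
import sympy as sp

x = sp.symbols('x', positive=True)
h = sp.Rational(1, 4)                     # any tilt 0 < h < 1
p_X = sp.exp(-x)                          # density of Exp(1)
phi = sp.integrate(sp.exp(h * x) * p_X, (x, 0, sp.oo))  # (18): 1/(1-h)
p_Xh = sp.exp(h * x) * p_X / phi          # (17): density of X(h)

print(sp.simplify(p_Xh))                        # (1-h)*exp(-(1-h)*x)
print(sp.integrate(x * p_Xh, (x, 0, sp.oo)))    # E X(h) = 1/(1-h), cf. (24)
```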

Moreover, according to [5, p. 361], we assume that the conjugate compound Poisson process can be defined by

$$\begin{aligned} S_{N_{t}(h)}(h)=\overset{N_{t}(h)}{\underset{j=1}{\sum }}X_{j}(h), \end{aligned}$$
(19)

where \(N_{t}(h)\) and \(X_{j}(h)\), \(t\ge 0\), \(h>0\), are independent. Additionally, the probability of \(N_{t}(h)\) is

$$\begin{aligned} q_{m}(h)=\mathbf {P(}N_{t}(h)=m\mathbf {)}=\frac{1}{m!}\exp \{-\lambda t\varphi _{X}(h)\}(\lambda t \varphi _{X}(h))^{m}, \end{aligned}$$
(20)

where \(\varphi _{X}(h)\) is the generating function (18) of the r.v. X. The quantity h will be defined later.

The identification of \(N_{t}(h)\) and X(h) can be performed with the help of the Laplace transform of \(S_{N_{t}(h)}(h)\). Note that an arbitrary conjugate r.v. X(h) of an arbitrary r.v. X is defined by the density function (17). Thus, let us define the conjugate process of the compound Poisson process by using (17) with \(X(h):=S_{N_{t}(h)}(h)\) and \(X:=S_{N_{t}}\) [5]:

$$\begin{aligned} p_{S_{N_{t}(h)}(h)}(x)=\varphi ^{-1}_{S_{N_{t}}}(h)e^ { hx } p_{S_{N_{t}}}(x). \end{aligned}$$
(21)

By virtue of (18), together with (4) and (16), we can state that the generating function of \(S_{N_{t}}\) is

$$\begin{aligned} \varphi _{S_{N_{t}}}(h) =e^{ -\lambda t}\overset{\infty }{\underset{m=0}{\sum }}\frac{(\lambda t \varphi _{X}(h))^{m}}{m!}= \exp \{-\lambda t(1-\varphi _{X}(h))\}. \end{aligned}$$
(22)

Hence, by the definition of the ch.f. (3) of the r.v. X, and from (17), (21), and (22), we have

$$\begin{aligned} f_{S_{N_{t}(h)}(h)}(u)&=\varphi ^{-1}_{S_{N_{t}}}(h)\int _{-\infty }^{\infty } e^{(h+iu)x} p_{S_{N_{t}}}(x)dx =\varphi ^{-1} _{S_{N_{t}}}(h)\varphi _{S_{N_{t}}}(h+iu) \nonumber \\&=\exp \{-\lambda t\varphi _{X}(h)(1-f_{X(h)}(u))\}. \end{aligned}$$
(23)

Clearly, the same result (23) follows using (19) and (20). Note that \(q_{m}(0)=q_{m}\) as \(\varphi _{X}(0)=1\), where \(q_{m}\) is defined by (4).

The use of the definition (2) of the moments of X, together with (17) and (18), produces the rth-order moments of X(h)

$$\begin{aligned} \mathbf {E}X^{r}(h)= \varphi ^{-1}_{X}(h)\dfrac{d^{r}}{dh^{r}}\varphi _{X}(h) =\varphi ^{-1}_{X}(h)\sum _{k=r}^{\infty } \dfrac{\mathbf {E} X ^{k}h^{k-r}}{(k-r)!}, \quad r=1,2,\dots \ . \end{aligned}$$
(24)

Additionally, based on Lemma 1 in [32, p. 135], together with the definition (2), we have

$$\begin{aligned} {\varGamma }_{r}(X(h)) =\frac{d^{r}}{dh^{r}}\ln \varphi _{X}(h)=\overset{\infty }{\underset{k=r}{ \sum }}\frac{{\varGamma }_{k}(X)}{(k-r)!}h^{k-r},\qquad r=1,2,\dots \ . \end{aligned}$$
(25)

Hence, it follows from (24) and (25) that

$$\begin{aligned} \mu (h)&=\mathbf {E}X(h)={\varGamma }_{1}(X(h))=\overset{\infty }{ \underset{k=1}{\sum }}\frac{{\varGamma }_{k}(X)h^{k-1}}{(k-1)!},\, \end{aligned}$$
(26)
$$\begin{aligned} \mathbf {E}X^{2}(h)&=\varphi ^{-1}_{X}(h)\overset{\infty }{\underset{ k=2}{\sum }}\frac{\mathbf {E} X ^{k}h^{k-2}}{(k-2)!}, \quad \sigma ^{2}(h) =\mathbf {D}X(h)=\overset{\infty }{\underset{k=2}{\sum }} \frac{{\varGamma }_{k}(X)h^{k-2}}{(k-2)!}. \end{aligned}$$
(27)

According to the definition (2) of cumulants, together with (23),

$$\begin{aligned} {\varGamma }_{r}(S_{N_{t}(h)}(h)) =\lambda t\varphi _{X}(h)\mathbf {E}X^{r}(h)=\overset{\infty }{\underset{k=r}{\sum }} \frac{{\varGamma }_{k}(S_{N_{t}})h^{k-r}}{(k-r)!},\quad r=1,2,\dots , \end{aligned}$$
(28)

where \({\varGamma }_{k}(S_{N_{t}})\) is defined by (15).

For the following, set

$$\begin{aligned} \tilde{S}_{N_{t}(h)}(h)=\frac{S_{N_{t}(h)}(h)-\mathbf {E}S_{N_{t}(h)}(h)}{\sqrt{ \mathbf {D}S_{N_{t}(h)}(h)}}, \qquad \mathbf {D}S_{N_{t}(h)}(h)>0, \; t>0, \end{aligned}$$
(29)

where by (28),

$$\begin{aligned} \mathbf {E}S_{N_{t}(h)}(h)=\lambda t\varphi _{X}(h)\mathbf {E}X(h),\qquad \mathbf { D}S_{N_{t}(h)}(h)=\lambda t\varphi _{X}(h)\mathbf {E}X^{2}(h). \end{aligned}$$
(30)

By virtue of (2) and (23), the ch.f.

$$\begin{aligned} f_{\tilde{S}_{N_{t}(h)}(h)}(u)=\exp \big \{-i \mathbf {E}S_{N_{t}(h)}(h)\tilde{u}-\lambda t\varphi _{X}(h)( 1-f_{X(h)}(\tilde{u}))\big \} \end{aligned}$$
(31)

holds, where \(\tilde{u}=u/\sqrt{\mathbf {D}S_{N_{t}(h)}(h)}\).

Based on [16, p. 213–216], to derive the equation that gives the solution of \(h=h(x)> 0\), we need to perform the following calculations. By (21),

$$\begin{aligned} F_{S_{N_{t}}}(x)=\varphi _{S_{N_{t}}}(h) \int _{-\infty }^{x}e^{ -hy} dF_{S_{N_{t}(h)}(h)}(y). \end{aligned}$$

Thus, according to

$$\begin{aligned} F_{\tilde{S}_{N_{t}}}(y)&=F_{S_{N_{t}}}(\sqrt{\mathbf {D}S_{N_{t}}}y+\mathbf {E}S_{N_{t}}),\\ F_{\tilde{S}_{N_{t}(h)}(h)}(y)&=F_{S_{N_{t}(h)}(h)}\Big (\sqrt{\mathbf {D}S_{N_{t}(h)}(h)}y+ \mathbf {E}S_{N_{t}(h)}(h)\Big ), \end{aligned}$$

we have

$$\begin{aligned} F_{\tilde{S}_{N_{t}}}(x)&=\varphi _{S_{N_{t}}}(h)\overset{\sqrt{\mathbf {D}S_{N_{t}}}x+ \mathbf {E}S_{N_{t}}}{\underset{-\infty }{\int }}e^{ -hy} dF_{S_{N_{t}(h)}(h)}(y)\\&=\varphi _{S_{N_{t}}}(h)\int _{-\infty }^{z}e ^{-h(\sqrt{\mathbf {D}S_{N_{t}(h)}(h)}y+ \mathbf {E}S_{N_{t}(h)}(h))}dF_{\tilde{S}_{N_{t}(h)}(h)}(y) \\&=\varphi _{S_{N_{t}}}(h)\int _{-\infty }^{0}e^{-h(\sqrt{\mathbf {D}S_{N_{t}(h)}(h)}y+\mathbf {E}S_{N_{t}(h)}(h))}dF_{\tilde{S}_{N_{t}(h)}(h)}(y), \end{aligned}$$

with \(z=(\sqrt{\mathbf {D}S_{N_{t}}}x-\mathbf {E}S_{N_{t}(h)}(h)+\mathbf {E}S_{N_{t}})/\sqrt{\mathbf {D} S_{N_{t}(h)}(h)}\), when

$$\begin{aligned} x=\frac{\mathbf {E}S_{N_{t}(h)}(h)}{\sqrt{\mathbf {D}S_{N_{t}}}}-\frac{\mathbf {E} S_{N_{t}}}{\sqrt{\mathbf {D}S_{N_{t}}}}, \end{aligned}$$
(32)

where \(\mathbf {E}S_{N_{t}}\) and \(\mathbf {D}S_{N_{t}}\) are defined by (7), (8) and \(\mathbf {E}S_{N_{t}(h)}(h)\) by (30). Hence, according to [16], the quantity \(h=h(x)>0\) should be defined as the solution of equation (32).
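
For Exp(1) claims, equation (32) can be solved for h numerically: by (30), \(\mathbf {E}S_{N_{t}(h)}(h)=\lambda t/(1-h)^{2}\), while \(\mathbf {E}S_{N_{t}}=\lambda t\) and \(\mathbf {D}S_{N_{t}}=2\lambda t\). A minimal numerical sketch (ours, assuming scipy):

```python
# Solving the conjugate-parameter equation (32) for h, with Exp(1) claims.
import numpy as np
from scipy.optimize import brentq

lam_t = 100.0
mean_S, sd_S = lam_t, np.sqrt(2 * lam_t)

def eq32(h, x):
    mean_Sh = lam_t / (1 - h) ** 2        # E S_{N_t(h)}(h) by (30)
    return (mean_Sh - mean_S) / sd_S - x

for x in [0.5, 1.0, 2.0]:
    h = brentq(eq32, 0.0, 0.9, args=(x,))
    print(x, h)    # h ~ x/sqrt(D S_{N_t}) for small x, cf. Remark 3.2
```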

Recall that for the kth-order cumulants of the standardized compound Poisson process \(\tilde{S}_{N_{t}}\), which is defined by (10), the upper estimate (14) holds. Thus, \(\tilde{S}_{N_{t}}\) satisfies the Statulevičius condition \((S_{\gamma })\) with \(\gamma =0\) and \({\varDelta }:= {\varDelta }_{t}\), where \({\varDelta }_{t}\) is defined by (14). Accordingly, the general Lemma 6.1 [34, p. 154] yields the following Proposition 3.1.

We shall use \(\theta _{i}\), \(i=1,2,\dots \) (with or without an index), to denote a quantity, not always one and the same, that does not exceed 1 in modulus.

Proposition 3.1

If the r.v. \( X \) with \(0<\sigma ^{2}<\infty \) satisfies conditions \((\bar{B}_{0})\) and \((D^{\prime })\), and \(N_{t}\) is a homogeneous Poisson process with the distribution (4), then for every \(l\ge 3\), in the interval \(0 \le x<\sqrt{\lambda t(\sigma ^{2}+\mu ^{2})}/(24K)\), the asymptotic expansion

$$\begin{aligned} \dfrac{p_{\tilde{S}_{N_{t}}}(x)}{\phi (x)} =&\exp \{L_{t}(x)\}\bigg (1+\underset{v=0}{\overset{l-3}{\sum }}M_{t,v}(x) \nonumber \\&\ +\theta _{1}q(l)\Big (\frac{K(x+1)}{\sqrt{\lambda t(\sigma ^{2}+\mu ^{2})}}\Big )^{l-2}+\theta _{2}R_{t}(h)\bigg ) \end{aligned}$$
(33)

is valid, where \(\lambda ,\ t,\ K >0\), and

$$\begin{aligned} \phi (x)&=\frac{e^{-\frac{x^{2}}{2}}}{\sqrt{2\pi }}, \ q(l)=\Big (\frac{3\sqrt{2e}}{2}\Big )^{l}+8(l+2)^{2}4^{3(l+1)} {\varGamma } \Big (\frac{3l+1}{2}\Big ),\ l\ge 1, \end{aligned}$$
(34)
$$\begin{aligned} R_{t}(h)&=\int _{|u|\ge U_{t}}|f_{\tilde{S}_{N_{t}(h)}(h)}(u)|du, \end{aligned}$$
(35)
$$\begin{aligned} U_{t}&=\frac{1}{12}\Big (1-\frac{Kx}{\sqrt{\lambda t(\sigma ^{2}+\mu ^{2})}}\Big ) \frac{\sqrt{\lambda t(\sigma ^{2}+\mu ^{2})}}{K}. \end{aligned}$$
(36)

Here \({\varGamma } (\alpha )=\int _{0}^{\infty }x^{\alpha -1}e^{-x}dx\). If \(\alpha =n\in \mathbb {N}\), then \({\varGamma } (n)=(n-1)!\). Furthermore,

$$\begin{aligned} L_{t}(x)=\sum _{k=3}^{\infty }\tilde{\lambda }_{t,k}x^{k}, \end{aligned}$$

where the coefficients \(\tilde{\lambda }_{t,k}\) (expressed by cumulants of \(\tilde{S}_{N_{t}}\)) coincide with the coefficients of the Cramér series [32] given by the formula \(\tilde{\lambda }_{t,k}=-b_{t,k-1}/k\), where the \(b_{t,k}\) are determined successively from the equations

$$\begin{aligned} \overset{j}{\underset{r=1}{\sum }}\frac{1}{r!}{\varGamma }_{r+1}(\tilde{S} _{N_{t}})\underset{j_{i}\ge 1}{\underset{j_{1}+\ldots +j_{r}=j}{\sum }}\, \overset{r}{\underset{i=1}{\prod }}b_{t,j_{i}}=\bigg \{ \begin{array}{ll} 1,\, &{} j=1, \\ 0,\, &{} j=2,3,\cdots . \end{array} \end{aligned}$$

In particular,

$$\begin{aligned} \tilde{\lambda }_{t,2}= & {} -1/2, \quad \tilde{\lambda }_{t,3}={\varGamma }_{3}(\tilde{S}_{N_{t}})/6,\quad \tilde{\lambda }_{t,4}=({\varGamma }_{4}(\tilde{S}_{N_{t}})-3{\varGamma }_{3}^{2}(\tilde{S}_{N_{t}}))/24 ,\\ \tilde{\lambda }_{t,5}= & {} ( {\varGamma }_{5}(\tilde{ S}_{N_{t}})-10{\varGamma }_{3}(\tilde{S}_{N_{t}}){\varGamma }_{4}(\tilde{S} _{N_{t}})+15{\varGamma }_{3}^{3}(\tilde{S}_{N_{t}}))/120, \dots . \end{aligned}$$

For \(\tilde{\lambda }_{t ,k}\), the following estimate is valid:

$$\begin{aligned} \vert \tilde{\lambda }_{t,k}\vert \le \dfrac{2}{k}\Big (\dfrac{16K}{\sqrt{\lambda t(\sigma ^{2}+\mu ^{2})}}\Big )^{k-2}, \qquad k=2,3,\dots \ . \end{aligned}$$

For the polynomials \(M_{t,v}(x)\), the formulas

$$\begin{aligned} M_{t,v}(x)= & {} \overset{v}{\underset{k=0}{\sum }}K_{t,k}(x)Q_{t,v-k}(x),\qquad \qquad \qquad \; \; M_{t,0}(x)\equiv 0, \\ K_{t,v}(x)= & {} \sum ^{*} \underset{i=1}{\overset{v}{\prod }}\frac{1}{k_{i}!}(-\tilde{\lambda }_{t,i+2}x^{i+2})^{k_{i}},\qquad \qquad \; \, K_{t,0}(x)\equiv 1, \\ Q_{t,v}(x)= & {} \sum ^{*} H_{v+2m}(x)\underset{i=1}{\overset{v}{\prod }}\frac{1}{ k_{i}!}\Big (\frac{{\varGamma }_{i+2}(\tilde{S}_{N_{t}})}{(i+2)!}\Big )^{k_{i}}, \quad Q_{t,0}(x)\equiv 1, \end{aligned}$$

hold, where the summation \(\sum ^{*}\) is taken over all nonnegative integer solutions \((k_{1}, k_{2},\ldots ,k_{v})\), \(0\le k_{1},\ldots ,k_{v}\le v\), \(1\le m\le v\), of the equation \(k_{1}+2k_{2}+\ldots +vk_{v}=v\), \(k_{1}+k_{2}+\ldots +k_{v}=m\). Here \(H_{v}(x)\) denotes the Chebyshev–Hermite polynomials

$$\begin{aligned} H_{v}(x)=(-1) ^{v}e^{ \frac{x^{2}}{2}} \frac{d^{v}}{ dx^{v}}e^{ -\frac{x^{2}}{2}} ,\qquad v=0,1,\dots \ . \end{aligned}$$

In particular,

$$\begin{aligned} M_{t,1}(x)&=-{\varGamma }_{3}(\tilde{S}_{N_{t}})x/2, \\ M_{t,2}(x)&=(5{\varGamma }_{3}^{2}(\tilde{S}_{N_{t}})-2{\varGamma }_{4}(\tilde{S}_{N_{t}}))x^{2}/8+(3{\varGamma }_{4}(\tilde{S}_{N_{t}})-5{\varGamma }_{3}^{2}(\tilde{S}_{N_{t}}))/24, \\ M_{t,3}(x)&=(34{\varGamma }_{3}(\tilde{S}_{N_{t}}){\varGamma }_{4}(\tilde{S}_{N_{t}})-4{\varGamma }_{5}(\tilde{S}_{N_{t}})-45{\varGamma }_{3}^{3}(\tilde{S}_{N_{t}}))x^3/48 \\&\quad + (6{\varGamma }_{5}(\tilde{S}_{N_{t}})-35{\varGamma }_{3}(\tilde{S}_{N_{t}}) {\varGamma }_{4}(\tilde{S}_{N_{t}})+35{\varGamma }_{3}^{3}(\tilde{S}_{N_{t}}))x/48, \dots \ . \end{aligned}$$
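
The recursion for the \(b_{t,j}\) is easy to run symbolically. The sketch below (ours, not from the paper) solves the first three equations with \({\varGamma }_{2}(\tilde{S}_{N_{t}})=1\) and recovers \(\tilde{\lambda }_{t,3}\) and \(\tilde{\lambda }_{t,4}\) as stated above:

```python
# Recovering the first Cramer-series coefficients from the b_{t,j} recursion.
import sympy as sp

G3, G4 = sp.symbols('G3 G4')              # Gamma_3, Gamma_4 of S-tilde
b1, b2, b3 = sp.symbols('b1 b2 b3')

eqs = [
    sp.Eq(b1, 1),                                                    # j = 1
    sp.Eq(b2 + sp.Rational(1, 2) * G3 * b1 ** 2, 0),                 # j = 2
    sp.Eq(b3 + G3 * b1 * b2 + sp.Rational(1, 6) * G4 * b1 ** 3, 0),  # j = 3
]
sol = sp.solve(eqs, [b1, b2, b3], dict=True)[0]
print(sp.expand(-sol[b2] / 3))    # G3/6            = lambda_{t,3}
print(sp.expand(-sol[b3] / 4))    # G4/24 - G3**2/8 = (G4 - 3*G3**2)/24
```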

Theorem 3.1

If the r.v. \( X \) with \(0<\sigma ^{2}<\infty \) satisfies conditions \((\bar{B}_{0})\) and \((D^{\prime })\), and \(N_{t}\) is a homogeneous Poisson process with the distribution (4), then for every \(l\ge 3\), in the interval \(0 \le x<\sqrt{\lambda t(\sigma ^{2}+\mu ^{2})}/(24K),\, K>0\), the asymptotic expansion (33) holds. Moreover, for the remainder term \(R_{t}(h)\), which is defined by (35), the estimate

$$\begin{aligned} R_{t}(h) \le \frac{1}{c_{1}(h)U_{t}}\exp \{- c_{1}(h)U_{t}^{2}\} +c_{2}(h)\exp \{-\lambda t c_{3}(h)\},\qquad \lambda t > 2, \end{aligned}$$
(37)

holds. Here \(h=h(x)>0\) is the solution of equation (32), and \(U_{t}\) is defined by (36). In addition,

$$\begin{aligned} c_{1}(h)&=\sigma ^{2}(h)/(\pi ^{2}\mathbf {E}X^{2}(h)), \end{aligned}$$
(38)
$$\begin{aligned} c_{2}(h)&=12\pi \sqrt{2\pi }e^{2\varphi _{X}(h)}\frac{\sqrt{\mathbf {E}X^{2}(h)}}{\sigma (h)}A( \sqrt{2}\pi \sigma (h)+4H(h)), \end{aligned}$$
(39)
$$\begin{aligned} c_{3}(h)&=\frac{\varphi _{X}(h)(1-e^{-c})^{3}}{16(\tau (h)+H(h))^{2}A^{2}(h)} ,\qquad c>0, \end{aligned}$$
(40)

where \(\varphi _{X}(h)\) and \(A>0\) are defined, respectively, by (18) and \((D^{\prime })\). Furthermore, H(h), \(\tau (h) \), A(h) are defined by (47), (59), (60), respectively.

Remark 3.1

For the constants \(c_{1}(h)\), \(c_{2}(h)\), and \(c_{3}(h)\), the estimates

$$\begin{aligned} c_{1}=\frac{\sigma ^{2}}{1.6 \pi ^{2}(\sigma ^2+\mu ^2)}, \ c_{2}=333e^{4}\sqrt{2\pi }\sqrt{\sigma ^2+\mu ^2}MA/\sigma , \ c_{3}=\frac{c}{(MA)^{2}}, \end{aligned}$$
(41)

hold, where \(0<c<2\cdot 10^{-8}\), \(M=2\max \{\sigma ,K\}\), \(K>0\), and A is defined by \((D^{\prime })\). Consequently,

$$\begin{aligned} R_{t}(h)&\le 1.6 \pi ^{2}\frac{\sigma ^2+\mu ^2}{\sigma ^{2}U_{t}} \exp \Big \{-\frac{\sigma ^{2}U_{t}^{2}}{1.6 \pi ^{2}(\sigma ^2+\mu ^2)}\Big \} \\&\quad +333e^{4}\sqrt{2\pi }\frac{\sqrt{\sigma ^2+\mu ^2}}{\sigma }MA \exp \Big \{-\frac{\lambda t c}{(MA)^{2}}\Big \}. \end{aligned}$$

Observe that, from Proposition 3.1 and Theorem 3.1, we deduce the following corollary.

Corollary 3.1

If the r.v. \( X \) with \(0<\sigma ^{2}<\infty \) satisfies conditions \((\bar{B}_{0})\) and \((D^{\prime })\), and \(N_{t}\) is a homogeneous Poisson process with the distribution (4), then for \(x\ge 0\), \(x=O((\lambda t)^{1/6})\), \(\lambda t\rightarrow \infty \), we have

$$\begin{aligned} \dfrac{p_{\tilde{S}_{N_{t}}}(x)}{\phi (x)} =\exp \Big \{\frac{x^{3}}{\sqrt{\lambda t}}\frac{\mathbf {E}X^{3}}{6(\sigma ^2+\mu ^2)^{3/2}}\Big \} \Big (1+O\Big (\frac{x+1}{\sqrt{\lambda t}}\Big )\Big ). \end{aligned}$$

Proof of Theorem 3.1

First, let us present a sketch of the proof. To prove Theorem 3.1, by Proposition 3.1, the estimate (37) of the remainder term (35) has to be verified. Obviously,

$$\begin{aligned} R_{t}(h)=\int _{|u|\ge U_{t}}|f_{\tilde{S}_{N_{t}(h)}(h)}(u)|du=I_{1}+I_{2}, \end{aligned}$$
(42)

where

$$\begin{aligned} I_{1}=\int _{U_{t}\le |u|\le \tilde{U}_{t}(h)}|f_{\tilde{S}_{N_{t}(h)}(h)}(u)|du,\qquad I_{2}=\int _{\tilde{U}_{t}(h)\le |u|<\infty }|f_{ \tilde{S}_{N_{t}(h)}(h)}(u)|du, \end{aligned}$$

where \(\tilde{U}_{t}(h)\) is defined by (47). Accordingly, the proof of this theorem splits into two main steps. The first step consists in obtaining the estimate (50) of \(I_{1}\). For that, we prove that the upper estimate (49) of \(|f_{\tilde{S}_{N_{t}(h)}(h)}(u)|\) holds for \(|u|\le \tilde{U}_{t}(h)\).

Now that we have the upper estimate of \(I_{1}\), we can discuss the second step in the proof of Theorem 3.1. The second step of the proof is devoted to estimating \(I_{2}\); it mainly consists in finding an upper estimate of \(|f_{\tilde{S}_{N_{t}(h)}(h)}(u)|\) for \(|u|\ge \tilde{U}_{t}(h)\). To achieve the result of the second step, Lemmas 2.1–2.3 and the evaluations presented in Theorem 6.1 in [34, p. 185] are applied. In more detail, first, using (44)–(46), the estimate (51) of \(I_{2}\) is proved. From the expression (51), it follows that the further estimation of \(I_{2}\) essentially reduces to estimating \(\exp \big \{-\lambda t \varphi _{X}(h)I_{h}(u) \big \}\), where \(I_{h}(u)\) is defined by (46). Thus, in Substeps 2.1 and 2.2, Lemmas 2.3 and 2.1 produce the lower estimates (53) and (54) of \(I_{h}(u)\), which lead to the second upper estimate (55) of \(I_{2}\). According to it, for the final estimate of \(I_{2}\), the quantities \(\sum _{k}\underset{u_{k}<u<u_{k+1}}{\sup }\exp \{-\varphi _{X}(h)(1-|f_{X(h)}(2\pi u)|^{2})\}\), Q(h), \(\tau (h)\), and A(h) (see (57)–(60)) have to be estimated. In Substep 2.3, Lemma 2.2 leads to the estimate (57) of this sum, and in Substep 2.4, following the proof of Theorem 6.1 in [34, p. 185], the lower estimate (58) of Q(h) and the expressions (59), (60) for \(\tau (h)\) and A(h) are obtained.

Finally, the estimates (50) and (61) from the first and second steps lead to the estimate (37) of the remainder term (35).

Step 1 (Estimate \(I_{1}\) ). Suppose that \(X^{^{\prime }}(h)=X(h)-Y(h)\) is the symmetrized conjugate r.v., where the conjugate r.v. Y(h) is independent of X(h) and has the same distribution. Clearly, the distribution and characteristic functions of \(X^{^{\prime }}(h)\) are as follows:

$$\begin{aligned} F_{X^{^{\prime }}(h)}(x)=\int _{-\infty }^{\infty }F_{X(h)}(x+y)dF_{X(h)}(y),\qquad f_{X^{^{\prime }}(h)}(u)=|f_{X(h)}(u)|^{2}. \end{aligned}$$

The corresponding density will be denoted by \(p_{X^{^{\prime }}(h)}(x)\). Obviously, \(\mathbf {D}X^{^{\prime }}(h)=2\sigma ^{2}(h)\).

Denote

$$\begin{aligned} l_{h}(H(h))=\frac{1}{\sigma ^{2}(h)}\int _{|x|<H(h)}x^{2}p_{X^{^{\prime }}(h)}(x)dx,\qquad H(h)>0. \end{aligned}$$
(43)

Because

$$\begin{aligned} 1-|f_{X(h)}(2\pi u)|\ge (1-|f_{X(h)}(2\pi u)|^{2})/2:=I_{h}(u), \end{aligned}$$
(44)

by (23), we obtain

$$\begin{aligned} |f_{S_{N_{t}(h)}(h)}(2\pi u)|&\le \exp \{-\lambda t\varphi _{X}(h)(1-|f_{X(h)}(2\pi u)|)\} \nonumber \\&\le \exp \{-\lambda t\varphi _{X}(h)I_{h}(u)\}, \end{aligned}$$
(45)

where

$$\begin{aligned} I_{h}(u)=\int _{-\infty }^{\infty }\sin ^{2}(\pi ux)p_{X^{^{\prime }}(h)}(x)dx&\ge 4u^2\int _{|x|\le 1/(2|u|)}x^2p_{X^{^{\prime }}(h)}(x)dx \nonumber \\&\ge 4u^{2}\sigma ^{2}(h)l_{h}(1/(2|u|)). \end{aligned}$$
(46)

Here \(l_{h}(1/(2|u|))\) is defined by (43).

Let us denote

$$\begin{aligned} \tilde{U}_{t}(h)=\pi \sqrt{\mathbf {D}S_{N_{t}(h)}(h)}/H(h),\quad H(h)=2( \mathbf {E}(X(h)-\mu (h))^{4})^{1/2}/\sigma (h). \end{aligned}$$
(47)

Further,

$$\begin{aligned} l_{h}(H(h))&=\frac{1}{\sigma ^{2}(h)}\int _{-\infty }^{\infty }x^{2}p_{X^{^{\prime }}(h)}(x)dx-\frac{2}{\sigma ^{2}(h)}\int _{H(h)}^{\infty }x^{2}p_{X^{^{\prime }}(h)}(x)dx \nonumber \\&\; \ge 2\Big (1-\frac{2\mathbf {E}|X(h)-\mu (h)|^{4} }{\sigma ^{2}(h)H^{2}(h)}\Big ) \ge 1, \end{aligned}$$
(48)

if \(H(h)=2\big (\mathbf {E(}X(h)-\mu (h))^{4}\big )^{1/2}/\sigma (h)\). The use of (31) and (45)–(48) gives

$$\begin{aligned} |f_{S_{N_{t}(h)}(h)}(2\pi u)|\le \exp \{-\lambda t\varphi _{X}(h)4u^{2}\sigma ^{2}(h)\},\qquad |u|\le 1/(2H(h)), \end{aligned}$$

and

$$\begin{aligned} |f_{\tilde{S}_{N_{t}(h)}(h)}(u)|\le \exp \big \{-u^{2}\sigma ^{2}(h)/( \pi ^{2}\mathbf {E}X^{2}(h))\big \},\qquad |u|\le \tilde{U}_{t}(h). \end{aligned}$$
(49)

Here \(\mu (h),\,\mathbf {E}X^{2}(h),\) and \(\sigma ^{2}(h)\) are defined by (26) and (27). Also, \(\mathbf {D}S_{N_{t}(h)}(h)\) is defined by (30). Consequently,

$$\begin{aligned} I_{1} \le \dfrac{2}{U_{t}} \int _{U_{t}}^{\tilde{U}_{t}(h)}|u|\exp \Big \{-u^{2}\dfrac{\sigma ^{2}(h)}{\pi ^{2}\mathbf {E}X^{2}(h)}\Big \}du \le \dfrac{1}{c_{1}(h)U_{t}}\exp \{-c_{1}(h)U_{t}^{2}\}, \end{aligned}$$
(50)

according to (49). Here \(U_{t}\) and \(c_{1}(h)\) are defined by (36) and (38).

Step 2 (Estimate \(I_{2}\) ). Now let us estimate \(I_{2}\):

$$\begin{aligned} I_{2}&=2\pi \sqrt{\mathbf {D}S_{N_{t}(h)}(h)}\int _{(2H(h))^{-1}\le |u|<\infty }|f_{S_{N_{t}(h)}(h)}(2\pi u)|du \\&\le 2\pi \sqrt{\mathbf {D}S_{N_{t}(h)}(h)}\int _{(2H(h))^{-1}\le |u|<\infty }\exp \{-(\lambda t-2)\varphi _{X}(h)I_{h}(u)\} \\&\quad \cdot \exp \{-2\varphi _{X}(h)(1-|f_{X(h)}(2\pi u)|)\}du, \end{aligned}$$

by (44) and (45) with \(\lambda t>2\), where \(I_{h}(u)\) is defined by (46). Hence, observing from (46) that \(I_{h}(u)\le 1\), so that \(\exp \{2\varphi _{X}(h)I_{h}(u)\}\le \exp \{2\varphi _{X}(h)\}\), we arrive at

$$\begin{aligned} I_{2}&\le 2\pi e^{2\varphi _{X}(h)}\sqrt{\mathbf {D}S_{N_{t}(h)}(h)} \int _{|u|\ge (2H(h))^{-1}}\exp \big \{-\lambda t \varphi _{X}(h)I_{h}(u) \big \} \nonumber \\&\quad \cdot \exp \{-\varphi _{X}(h)(1-|f_{X(h)}(2\pi u)|^{2})\}du . \end{aligned}$$
(51)

Substep 2.1 (The first lower estimate of \(I_{h}(u)\) ): If we set \(n=1\) and use the conjugate r.v. X(h) instead of X in Lemma 2.3, then we find that for any \(H(h)>0\) there exists a partition \(\ldots<u_{-1}<u_{0}=0<u_{1}<u_{2}<\ldots \) of the interval \((-\infty ,\infty )\) satisfying the condition

$$\begin{aligned} (6H(h))^{-1}\le {\varDelta } u_{k}\le (4H(h))^{-1},\qquad {\varDelta } u_{k}=u_{k+1}-u_{k}, \end{aligned}$$
(52)

such that

$$\begin{aligned} I_{h}(u)\ge 2\sigma ^{2}(h)l_{h}(H(h))(u-u_{k0})^{2}, \end{aligned}$$
(53)

provided \(u\in [u_{k}\), \(u_{k+1}]\), where \(u_{k0}\) is \(u_{k}\) or \(u_{k+1}\) depending on u. Here, \(l_{h}(H(h))\) is defined by (43).

Substep 2.2 (The second lower estimate of \(I_{h}(u)\) ): On the other hand, employing Lemma 2.1 gives the following: if X(h) has a density function such that \(p_{X(h)}(x)\le A(h)<\infty \), then for any collection \(\mathfrak {M(}h\mathfrak {)}=\{{\varDelta } (h),A(h)\}\) of an interval \({\varDelta }(h)\) and a positive constant A(h), the estimate

$$\begin{aligned} I_{h}(u)\ge \frac{Q^{3}(h)}{3(|{\varDelta } (h)|+2H(h))^{2}A^{2}(h)} \end{aligned}$$
(54)

holds for all \(|u|\ge 1/(2H(h))\), \(H(h)>0\). Here

$$\begin{aligned} Q(h)=\int _{{\varDelta } (h)}\min \{A(h),p_{X^{^{\prime }}(h)}(x)\}dx. \end{aligned}$$

The next step is to estimate (51), applying (54) to \((3/4)\lambda t\varphi _{X}(h)I_{h}(u)\) and (53) to \((1/4)\lambda t\varphi _{X}(h)I_{h}(u)\). According to (54) and (53),

$$\begin{aligned} I_{2}&\le 2\pi \sqrt{2 \pi }e^{2\varphi _{X}(h)}\sqrt{\mathbf {D}S_{N_{t}(h)}(h)}\exp \Big \{-\frac{\lambda t\varphi _{X}(h)Q^{3}(h)}{4(|{\varDelta } (h)|+2H(h))^{2}A^{2}(h)}\Big \}\nonumber \\&\quad \cdot \underset{k}{\sum } \int _{u_{k}}^{u_{k+1}}\exp \left\{ -\lambda t\varphi _{X}(h) \sigma ^{2}(h)l_{h}(H(h))(u-u_{k0})^{2}/2\right\} \nonumber \\&\quad \cdot \exp \left\{ -\varphi _{X}(h)(1-|f_{X(h)}(2\pi u)|^{2})\right\} du\nonumber \\&\le 2\pi \sqrt{2 \pi }e^{2\varphi _{X}(h)}\frac{\sqrt{\mathbf {E} X^{2}(h)}}{\sigma (h)} \exp \Big \{-\frac{\lambda t\varphi _{X}(h)Q^{3}(h)}{4(|{\varDelta } (h)|+2H(h))^{2}A^{2}(h)}\Big \} \nonumber \\&\quad \cdot \underset{k}{\sum }\underset{u_{k}<u<u_{k+1}}{\sup }\exp \left\{ -\varphi _{X}(h)(1-|f_{X(h)}(2\pi u)|^{2})\right\} . \end{aligned}$$
(55)

Substep 2.3 (Estimate of \(\sum _{k}\underset{u_{k}<u<u_{k+1}}{\sup }\exp \{-\varphi _{X}(h)(1-|f_{X(h)}(2\pi u)|^{2})\}\)).

We remark that Lemma 2.2 holds with

$$\begin{aligned} g(u) =e^{-\varphi _{X}(h)(1-|f_{X(h)}(2\pi u)|^{2})} =e^{-\varphi _{X}(h)}\overset{\infty }{\underset{k=0}{\sum }}\frac{ |f_{X(h)}(2\pi u)|^{2k}\varphi _{X}^{k}(h)}{k!}. \end{aligned}$$

Here (see [34, p. 186]),

$$\begin{aligned} |f_{X^{^{\prime }}(h)}(2\pi (u+s))-f_{X^{^{\prime }}(h)}(2\pi u)| \le 2\pi s\Big (\int \limits _{-\infty }^{\infty }y^{2}p_{X^{^{\prime }}(h)}(y)dy\Big )^{1/2} =2\sqrt{2}\pi \sigma (h)s. \end{aligned}$$

Hence, \(|g(u+s)-g(u)|\le 2\sqrt{2}\pi \sigma (h)s\). Accordingly, Lemma 2.2 holds with

$$\begin{aligned} K:=\tilde{K}(h)=2\sqrt{2}\pi \sigma (h),\qquad V:=V(h)=\int _{-\infty }^{\infty }g(u)du\le A, \end{aligned}$$
(56)

as

$$\begin{aligned} \int _{-\infty }^{\infty }|f_{X(h)}(2\pi u)|^{2}du \le p_{X^{^{\prime }}(h)}(0)\le A. \end{aligned}$$

Therefore, taking Lemma 2.2 into consideration, together with (52) and (56), we can write

$$\begin{aligned} \sum _{k}&\underset{u_{k}<u<u_{k+1}}{\sup }\exp \left\{ -\varphi _{X}(h)(1-|f_{X(h)}(2\pi u)|^{2})\right\} \nonumber \\&\le 6H(h)A\Big (\frac{4\pi \sqrt{2}\sigma (h)}{4H(h)}+4\Big ) = 6 A(\sqrt{2}\pi \sigma (h)+4H(h)). \end{aligned}$$
(57)

Substep 2.4 (\(Q(h),\, \tau (h),\, A(h)\)). Further, let us find \(\tau (h)\) such that

$$\begin{aligned} Q(h)=\underset{|y|\le \tau (h)}{\int }p_{X^{^{\prime }}(h)}(y)dy\ge 1-e^{-c},\qquad c>0. \end{aligned}$$
(58)

It was proved in Theorem 6.1 in [34, p. 185] that

$$\begin{aligned} \underset{|y|\ge \tau (h)}{\int }p_{X^{^{\prime }}(h)}(y)dy\le \exp \{ -(\tilde{A}-h)\tau (h)\} \varphi _{X^{^{\prime }}}(\tilde{A})\varphi _{X^{^{\prime }}}^{-1}(h), \end{aligned}$$

if \(\tilde{A}>h\ge 0\). Hence

$$\begin{aligned} \exp \{ -(\tilde{A}-h)\tau (h)\} \varphi _{X^{^{\prime }}}(\tilde{A})\varphi _{X^{^{\prime }}}^{-1}(h)\le \exp \{ -c\} ,\qquad c>0. \end{aligned}$$

It suffices to take

$$\begin{aligned} \tau (h)= \frac{c+\ln (\varphi _{X^{^{\prime }}}(\tilde{A})/\varphi _{X^{^{\prime }}}(h))}{\tilde{A}-h}>0, \qquad \tilde{A}> h>0,\qquad c>0, \end{aligned}$$
(59)

where \(\varphi _{X^{^{\prime }}}(\tilde{A})\) and \(\varphi _{X^{^{\prime }}}(h)\) are defined by (18). Next, if \({\varDelta }(h)=(-\tau (h),\tau (h))\), then recalling \((D^{\prime })\), we derive

$$\begin{aligned} p_{X^{^{\prime }}(h)}(y)=\varphi _{X^{^{\prime }}}^{-1}(h)\exp \{ hy\} p_{X^{^{\prime }}}(y)\le A(h)<\infty , \end{aligned}$$

where

$$\begin{aligned} A(h)=\varphi _{X^{^{\prime }}}^{-1}(h)\exp \{ h\tau (h) \} A<\infty ,\qquad c>0. \end{aligned}$$
(60)

Substituting (57) and (58)–(60) into (55), we derive

$$\begin{aligned} I_{2}\le c_{2}(h)\exp \{-\lambda tc_{3}(h)\}, \end{aligned}$$
(61)

where \(c_{2}(h)\) and \(c_{3}(h)\) are defined by (39) and (40).

Finally, (42), (50), and (61) lead to (37). \(\square \)

Let us verify estimates (41) that were mentioned in Remark 3.1. The use of (13), (27), and (25) gives

$$\begin{aligned} \sigma ^{2}(h)&=\sigma ^{2}\Big (1+\theta \overset{\infty }{ \underset{k=3}{\sum }}k(k-1)(1/28)^{k-2}\Big )=\sigma ^{2}(1+0.231\,\theta ), \end{aligned}$$
(62)
$$\begin{aligned} {\varGamma }_{4}(X(h))&\le (\sigma M)^{2}\overset{\infty }{\underset{k=4}{\sum }}k(k-1)(k-2)(k-3)(1/28) ^{k-4} =28.79(\sigma M)^{2}, \end{aligned}$$
(63)

if \(0\le h\le 1 /(28M).\) Because

$$\begin{aligned} \mathbf {E(}X(h)-\mu (h))^{4}/\sigma ^{2}(h)={\varGamma }_{4}(X(h))/\sigma ^{2}(h)+3\sigma ^{2}(h) \end{aligned}$$

by (62) and (63), together with \(\sigma \le M/2\), we evaluate

$$\begin{aligned} |\mathbf {E}(X(h)-\mu (h))^{4}|/\sigma ^{2}(h)\le 38.36M^{2}. \end{aligned}$$
(64)

Hence

$$\begin{aligned} H(h)\le 2M(38.36)^{1/2} \end{aligned}$$
(65)

from (47) and (64). Employing (24) and \((\bar{B}_{0})\), together with \(K\le M/2<M\), we derive

$$\begin{aligned} \mathbf {E}X^{2}(h)&=\varphi _{X}^{-1}(h)\Big (\mathbf {E}X^{2}+\theta \mathbf {E}X^{2}\overset{\infty }{\underset{k=3}{\sum }}\frac{k!}{(k-2)!}(Kh)^{k-2}\Big ) \nonumber \\&=\mathbf {E}X^{2}(1+0.231\,\theta )/\varphi _{X}(h), \end{aligned}$$
(66)

if \(0\le h\le 1 /(28M)\).
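
The numerical constants appearing in (62), (63), and (66) are just values of elementary series at \(h=1/(28M)\); a quick check (ours):

```python
# The series constants behind (62), (63), and (66), evaluated at x = 1/28.
import numpy as np

x = 1 / 28
k = np.arange(3, 200)
print(np.sum(k * (k - 1) * x ** (k - 2)))       # ~0.231, as in (62), (66)
k = np.arange(4, 200)
print(np.sum(k * (k - 1) * (k - 2) * (k - 3) * x ** (k - 4)))  # ~28.79 in (63)
```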

Further, we need the estimate

$$\begin{aligned} \varphi _{X-\mu }(z) =\exp \Big \{\overset{\infty }{\underset{k=2}{\sum }} \dfrac{1}{k!}{\varGamma }_{k}(X)z^{k}\Big \} =\exp \Big \{\dfrac{1}{2}\sigma ^{2}z^{2}\Big (1+\theta \frac{1}{ 12}\Big )\Big \}, \end{aligned}$$

from (13), if \(|z|\le \tilde{A}=1/(25M)\). Thus,

$$\begin{aligned} \exp \{ 11 \sigma ^{2}z^{2}/24\}\le \varphi _{X-\mu }(z)\le \exp \{ 13 \sigma ^{2}z^{2}/24\}. \end{aligned}$$
(67)

The application of (62), (66), and (67) gives

$$\begin{aligned} \mathbf {E}X^{2}(h)/\sigma ^{2}(h)\le 1.6\,\mathbf {E}X^{2} /\sigma ^{2}. \end{aligned}$$
(68)

The next step is to estimate \(\tau (h)\) and A(h) defined by (59) and (60), respectively. Recalling (67) and observing that \(\tilde{A}=1/(25M),\,\sigma \le M/2,\,h\le 1 /(28 M)<\tilde{A}\), we assert

$$\begin{aligned} \tau (h)\le \frac{24c+13\sigma ^{2}\tilde{A}^{2} +11\sigma ^{2}h^{2}}{24(\tilde{A}-h)}\le \frac{700}{3}Mc+\frac{17}{200}M, \end{aligned}$$
(69)

and

$$\begin{aligned} A(h)\le \exp \big \{ 1/300+25c/3\big \} A, \end{aligned}$$
(70)

where \(c>0\) and \(A>0\) are defined by (58) and \((D^{\prime }\)). Finally, employing (65)–(70) gives estimates (41).

Remark 3.2

Note that (30) together with (8), (26), and \((\bar{B}_{0})\) leads to the estimate of (32),

$$\begin{aligned} x =\frac{\lambda t}{\sqrt{\mathbf {D}S_{N_{t}}}}\overset{\infty }{\underset{k=2}{\sum }}\frac{\mathbf {E}X^{k}}{(k-1)!}h^{k-1} =\sqrt{\mathbf {D}S_{N_{t}}}h\Big (1+\theta \frac{Kh(3-2Kh)}{(1-Kh)^{2}}\Big ) \end{aligned}$$

if \(h\le 1/M\). Consequently, if \( h\le 1/(28M)\), then \(x \le \sqrt{\lambda t (\sigma ^{2}+\mu ^{2})}/(24K)\), as \(K< M\).