Abstract
This paper obtains an asymptotic expansion, and determines the structure of its remainder term, for the distribution density function of the standardized compound Poisson process, taking large deviations in the Cramér zone into consideration. Following Deltuvienė and Saulis (Acta Appl Math 78:87–97, 2003. doi:10.1023/A:1025783905023; Lith Math J 41:620–625, 2001) and Saulis and Statulevičius [Limit theorems for large deviations. Mathematics and its applications (Soviet Series), vol 73, pp 154–187, Kluwer, Dordrecht, 1991], the problem is solved by first applying a general lemma of Saulis (see Lemma 6.1 in Saulis and Statulevičius 1991, p. 154) on the asymptotic expansion for the density function of an arbitrary random variable with zero mean and unit variance, combined with the methods of cumulants and characteristic functions. The resulting asymptotic expansion, which takes large deviations in the Cramér zone into consideration, extends the asymptotic expansions for the density function of sums of a non-random number of summands (Deltuvienė and Saulis 2003, 2001).
1 Introduction
Assume that we have a family \(\{X, X_{j}, j=1,2,\dots \}\) of independent identically distributed (i.i.d.) random variables (r.vs.) that have a common distribution with mean and finite positive variance:
where \(\mathbb {R}\) is the set of real numbers. In addition,
denotes the kth-order moments and cumulants, where \(f_{X}(u)\) is the characteristic function (ch.f.)
of the random variable (r.v.) X. The existence of the cumulants \({\varGamma }_{k}(X)\) up to order k is implied by the existence of the absolute moments of X up to order k. Here \({\varGamma }_{1}(X)=\mathbf {E}X\) and \({\varGamma }_{2}(X)=\mathbf {D}X\).
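As an aside, the moment–cumulant relations behind \({\varGamma }_{1}(X)=\mathbf {E}X\) and \({\varGamma }_{2}(X)=\mathbf {D}X\) extend to higher orders. A small Python sketch computes the first four cumulants from raw moments; the discrete distribution below is an arbitrary example of ours, not one from the paper:

```python
# Illustrative sketch: first cumulants of a r.v. X from its raw moments,
# via the standard moment-cumulant relations. The pmf is a made-up example.

def raw_moment(pmf, k):
    """k-th raw moment E X^k of a discrete distribution {value: probability}."""
    return sum(p * x**k for x, p in pmf.items())

def cumulants(pmf, up_to=4):
    """Cumulants Gamma_1..Gamma_4 expressed through raw moments."""
    m = [raw_moment(pmf, k) for k in range(up_to + 1)]
    mu, var = m[1], m[2] - m[1]**2
    g3 = m[3] - 3*m[1]*m[2] + 2*m[1]**3                      # = E(X - mu)^3
    g4 = m[4] - 4*m[1]*m[3] + 6*m[1]**2*m[2] - 3*m[1]**4 - 3*var**2
    return [mu, var, g3, g4][:up_to]

pmf = {0: 0.2, 1: 0.5, 3: 0.3}    # arbitrary example distribution
g = cumulants(pmf)                 # [Gamma_1, Gamma_2, Gamma_3, Gamma_4]
```

For k = 3 the cumulant equals the third central moment; from order four onward the two notions differ, as the \(-3(\mathbf {D}X)^{2}\) correction shows.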
The theory of large deviations deals with the probabilities of rare events that are exponentially small as a function of some parameter. For example, in insurance mathematics, such problems arise in the approximation for small probabilities of large claims that occur rarely. The theory of large deviations was originally created for non-random sums \(S_{n}=\sum _{j=1}^{n}X_{j}\), \(n\in \mathbb {N}\), where \(\mathbb {N}=\{1,2,\dots \}\) is the set of natural numbers, and then extended to a class of random processes [34].
The first fundamental theorem of large deviations for \(S_{n}\) was proved by Cramér [8], who showed that the rate function is the convex conjugate of the logarithm of the moment generating function of the underlying common distribution. The most studied cases (see, e.g., [34]) are the following: the case where the Cramér condition is satisfied, that is, the characteristic functions of the summands are analytic in a neighborhood of zero; the Linnik case, where all the moments of the summands are finite but their growth does not guarantee analyticity of the ch.f. in a neighborhood of zero; the case of so-called moderate deviations, where the summands have only a finite number of moments; and the case where neither the Cramér nor the Linnik condition is fulfilled, but the behavior of the distribution tails of the summands is sufficiently regular.
Many of the basic ideas and results of theorems on large deviations for the sums \(S_{n}\) have been presented by Ibragimov and Linnik, Petrov, and S. V. Nagaev [16, 30, 32]. In these studies, large deviation theorems were obtained by the rather complicated analytical saddle-point method [18] and, as a rule, for sums of i.i.d. r.vs., which is the simplest case that allows one to form a general view of large deviation probabilities. Asymptotic expansions for large deviations were first obtained by Kubilius [24]. Without giving a detailed exposition of the history of asymptotic expansions and local limit theorems that take large deviations into account in the scheme of summation of r.vs., we cite, e.g., [2, 7, 30, 31, 33]; see also the books [16, 32, 34] and the references therein.
The next major step in addressing problems of large deviation theorems was made when Statulevičius [35] proposed the method of cumulants to consider large deviation probabilities for various statistics. The cumulant method was further developed by Rudzkis, Saulis, and Statulevičius [34, p. 18]. They proved a general lemma of large deviations for an arbitrary r.v. X with mean \(\mu =0\), variance \(\sigma ^{2}=\mathbf {E}X^{2}=1\), and regular behavior of its cumulants (see condition \((S_{\gamma })\) below): there exist \(\gamma \ge 0\) and \({\varDelta }> 0\) such that
$$|{\varGamma }_{k}(X)|\le \frac{(k!)^{1+\gamma }}{{\varDelta }^{k-2}}, \quad k=3,4,\dots . \qquad (S_{\gamma })$$
The method of cumulants provided a way to obtain large deviation theorems for sums of independent and dependent r.vs., polynomial forms, multiple stochastic integrals of random processes, and polynomial statistics in both the Cramér (\(\gamma =0\)) and the power Linnik (\(\gamma >0\)) zones. The monograph [34] addresses these issues. More recently, Döring and Eichelsbacher [11] used the cumulant method to establish moderate deviation principles for a rather general class of r.vs. satisfying certain bounds on their cumulants.
For asymptotic expansions and local limit theorems that take large deviations into account when the cumulant method is used, we cite only [9, 10] and [34, p. 154–187], as these works reflect the area of our interest. In more detail, Saulis [34, p. 154] presented an asymptotic expansion for the density function of an arbitrary r.v. with zero mean and unit variance. Based on this result, Saulis (see Theorem 6.1 in [34, p. 180]) established an asymptotic expansion in the Cramér zone of large deviations for the density function of the sum \(S_{n}\) of independent nonidentically distributed r.vs.; the structure of the remainder term of the asymptotic expansion in the case \(\gamma =0\) was also derived. Asymptotic expansions of large deviations in both the Cramér and the power Linnik zones for the density function were generalized in [9, 10], which consider asymptotic expansions in the zones of large deviations for the density function of sums of independent r.vs. in a triangular array scheme. These results improve on the results for weighted sums of r.vs. in [6].
The theory of large deviations offers interesting problems when the number of summands is itself a r.v. Let us consider \(N_{t}\), \(t \ge 0\), the most popular Poisson process, namely the homogeneous Poisson process (see [27]), with linear mean value function \(\Lambda (t)=\lambda t\), \(t \ge 0\), for some \(\lambda > 0\), and the distribution
where \(\mathbb {N}_{0}=\{0,1,2, \dots \}\). In addition,
Throughout, we assume that \(N_{t}\) is independent of \(\{X,X_{j},j=1,2,\dots \}\). If \(N_{t}\) is a homogeneous Poisson process, then
is a compound Poisson process.
Since the r.vs. \(X_{1}, X_{2},\dots \) and \(N_{t}\) are independent and \(\{X,X_{j},j=1,2,\dots \}\) are i.i.d., according to (1), (4), (5),
For instance, in the continuous dynamic models of an insurance stock [27, p. 152], \(R_{t}=R_{0}+P_{t}-S_{N_{t}}, \ t \ge 0,\) can express the surplus \(R_{t}\) at time t. Here, \(R_{0}\) is the initial reserve and \(P_{t}\) is the total premium received up to time t. That is, the company sells insurance policies and receives a premium according to \(P_{t}\). The sum (6) is the total claim amount process in the time interval [0, t]. In this example, \(X_{j}, \ j=1,2,\ldots \), denotes the jth claim, and \(N_{t}\) is the number of claims by time t.
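The surplus model above can be illustrated by a short Monte Carlo sketch; all parameter values and names below (`lam`, `mu_claim`, exponential claims) are hypothetical choices of ours for illustration, not taken from the paper:

```python
import random

# Illustrative Monte Carlo sketch of the total claim amount S_{N_t} in the
# surplus model R_t = R_0 + P_t - S_{N_t}. Parameters are hypothetical.

def compound_poisson(lam, t, claim, rng):
    """One sample of S_{N_t} = sum_{j=1}^{N_t} X_j, N_t ~ Poisson(lam*t)."""
    n, arrival = 0, rng.expovariate(lam)    # Poisson process via exponential gaps
    while arrival <= t:
        n += 1
        arrival += rng.expovariate(lam)
    return sum(claim(rng) for _ in range(n))

rng = random.Random(0)
lam, t, mu_claim = 2.0, 50.0, 1.0
samples = [compound_poisson(lam, t, lambda r: r.expovariate(1 / mu_claim), rng)
           for _ in range(2000)]
mean_S = sum(samples) / len(samples)        # ~ lam * t * mu_claim = 100
```

The empirical mean of \(S_{N_{t}}\) is close to \(\lambda t\,\mathbf {E}X\), in line with the mean value \(\mathbf {E}S_{N_{t}}\) in (7).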
Because \(N_{t}\) is independent of \(X_{j}\), \(j=1,2,\dots \), according to (3) and (4), the ch.f.
of (6) exists if the ch.f. (3) of the r.v. X exists.
Let us consider the standardized compound Poisson process
Hence, it follows from (7)–(9) that
where
is the ch.f. of the r.v. \(S_{N_{1}}-\mu =\sum ^{N_{1}}_{j=1}X_{j}-\mu \), with \(N_{1}\) a Poisson r.v. with the parameter 1. Also, \(\mathbf {E}(S_{N_{1}}-\mu )=0\) and \(\mathbf {D}(S_{N_{1}}-\mu )=\mathbf {E}X^{2}.\) As discussed in [4, 22], the representation (11) shows that the asymptotic behavior of (11) as \(\lambda t\rightarrow \infty \) is similar to that of the ch.f. of the r.v. \(\sum _{j=1}^{n}X_{j}/\sqrt{\mathbf {D}S_{N_{t}}}\) as \(n\asymp \lambda t\rightarrow \infty \) (we write \(u(x)\asymp v(x)\) for real functions u(x) and v(x) if \(u(x)=O(v(x))\) and \(v(x)=O(u(x))\)), where the \(X_{j}\) are independent r.vs. The asymptotic properties of Poisson random sums are to a great extent similar to the corresponding properties of sums of the same r.vs. with a non-random number of summands [3]. However, this analogy is not absolutely exact as, for example, the distribution function
of the compound Poisson process is not absolutely continuous for all \(x\in \mathbb {R}\) because of the presence of an atom at zero. Here \(F_{X}^{*m}(x)\) is the m-fold convolution of the distribution function \( F_{X}(x)\) of the r.v. X with itself, and \(F_{0}(x)\) is the distribution function with a single unit jump at zero.
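The atom at zero can be seen numerically: for strictly positive claims, \(S_{N_{t}}=0\) exactly when \(N_{t}=0\), so the jump of the distribution function at zero equals \(q_{0}=e^{-\lambda t}\). A hedged sketch with an illustrative value of \(\lambda t\):

```python
import math, random

# Sketch of the atom at zero: with X_j > 0 a.s., S_{N_t} = 0 iff N_t = 0,
# so P(S_{N_t} = 0) = e^{-lam*t}. The value lam_t = 1.5 is illustrative.

rng = random.Random(1)
lam_t = 1.5
n_samples = 100_000
zeros = 0
for _ in range(n_samples):
    # draw N_t ~ Poisson(lam_t) by CDF inversion
    n = 0
    u = rng.random()
    p = cdf = math.exp(-lam_t)
    while u > cdf:
        n += 1
        p *= lam_t / n
        cdf += p
    if n == 0:
        zeros += 1
atom = zeros / n_samples      # close to exp(-1.5)
```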
Central limit problems for Poisson random sums have been addressed, for example, in [4, 21]; see also the books [3, 27] and references therein. Local limit theorems for Poisson random sums are available in [22], where the results for non-random sums presented in [23] are extended. For treatments of asymptotic expansions for Poisson random sums, we refer the reader, for example, to [1, 3].
Presently, there are many strong results, for example, [5, 12, 26, 29], on approximations of exponential bounds of tail probabilities for compound Poisson sums under different assumptions and with various applications. More specifically, Embrechts et al. [12] considered saddle-point approximations in the context of the compound Poisson sum. Saddle-point approximations of ruin probabilities in other contexts have also been well studied; see [18] and the references therein. For Edgeworth expansions for compound Poisson processes, we refer to [1]. Cramér-type moderate deviations for a studentized compound Poisson process have been addressed in [17]. For some special classes of heavy-tailed distributions, see, for example, [28, 29].
As we are interested here not only in the convergence to the normal distribution but also in a more accurate asymptotic analysis, we must first find a suitable bound for the kth-order cumulants of the standardized compound Poisson process \(\tilde{S}_{N_{t}}\) defined by (10). To obtain upper bounds for \({\varGamma }_{k}(\tilde{S}_{N_{t}})\), \(k=3,4,\dots \), we must impose conditions on the kth-order moments of the r.v. X that has a common distribution with mean and finite positive variance (1). Accordingly, we say that the r.v. X satisfies condition \((\bar{B}_{0})\) if there exists a constant \(K>0\) such that
Condition \((\bar{B}_{0})\) is often called the Bernstein condition. Note that if Cramér's condition holds, that is, if there exists \(a>0\) such that \(\mathbf {E}\exp \{a|X|\} <\infty \), then X satisfies condition \((\bar{B}_{0})\). Condition \((\bar{B}_{0})\) ensures the existence of moments of all orders of the r.v. X.
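Since the displayed form of condition \((\bar{B}_{0})\) is not reproduced above, the following sketch assumes one commonly used Bernstein-type form, \(|\mathbf {E}(X-\mu )^{k}|\le (k!/2)K^{k-2}\sigma ^{2}\), \(k=3,4,\dots \), and checks it for \(X\sim \mathrm {Exp}(1)\), which satisfies Cramér's condition; both this form and the constant \(K=2\) are our assumptions, not the paper's:

```python
import math

# Hedged check of a Bernstein-type moment bound for X ~ Exp(1), whose raw
# moments are E X^k = k! and whose mean and variance are both 1. The bound
# |E(X - mu)^k| <= (k!/2) K^{k-2} sigma^2 with K = 2 is an assumed form.

def central_moment_exp1(k):
    """E(X-1)^k for X ~ Exp(1), via E X^j = j! and the binomial theorem."""
    return sum(math.comb(k, j) * math.factorial(j) * (-1) ** (k - j)
               for j in range(k + 1))

K = 2.0
ok = all(abs(central_moment_exp1(k)) <= math.factorial(k) / 2 * K ** (k - 2)
         for k in range(3, 15))
```

For Exp(1) the central moments \(\mathbf {E}(X-1)^{k}\) grow roughly like \(k!/e\), so a factorial bound with a geometric factor holds comfortably.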
Taking into account the fact that \({\varGamma }_{k}(X)= {\varGamma }_{k}(X-\mu )\), \(k=3,4,\dots \), and using Lemma 3.1 in [34, p. 42], we arrive at the following proposition.
Proposition 1.1
If the r.v. X satisfies condition \((\bar{B}_{0})\), then
We shall also need the following proposition.
Proposition 1.2
If the r.v. X with \(0<\sigma ^{2}<\infty \) satisfies condition \((\bar{B}_{0})\) and \(N_{t}\), \(t> 0\), is the homogeneous Poisson process with the probability (4), then
Proof
Equations (2) and (9) give the kth-order cumulants of the compound Poisson process \(S_{N_{t}}\) defined by (6):
Based on \((\bar{B}_{0})\), for the kth-order moments \(\mathbf {E}X^{k}\) of the r.v. X with \(0<\sigma ^{2}<\infty \), we use the following estimate:
Therefore, \({\varGamma }_{k}(\tilde{S}_{N_{t}})={\varGamma }_{k}(S_{N_{t}})/(\mathbf {D}S_{N_{t}})^{k/2}\), \(k=2,3,\ldots \), yields (14). Here \(\mathbf {D}S_{N_{t}}\) is defined by (8). \(\square \)
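The identity \({\varGamma }_{k}(S_{N_{t}})=\lambda t\,\mathbf {E}X^{k}\) used in the proof, which follows from \(\ln f_{S_{N_{t}}}(u)=\lambda t(f_{X}(u)-1)\), can also be checked numerically; for \(k=3\) the cumulant equals the third central moment. The parameters below (Exp(1) claims, \(\lambda t=4\)) are illustrative choices of ours:

```python
import math, random

# Monte Carlo check of Gamma_3(S_{N_t}) = lam*t * E X^3 for Exp(1) claims
# (E X^3 = 3! = 6) and the illustrative value lam*t = 4, so the target is 24.

rng = random.Random(7)
lam_t = 4.0

def sample_S():
    """One sample of S_{N_t}: Poisson(lam_t) count by inversion, Exp(1) claims."""
    n, u = 0, rng.random()
    p = cdf = math.exp(-lam_t)
    while u > cdf:
        n += 1
        p *= lam_t / n
        cdf += p
    return sum(rng.expovariate(1.0) for _ in range(n))

xs = [sample_S() for _ in range(150_000)]
m = sum(xs) / len(xs)                             # ~ lam_t * E X = 4
g3 = sum((x - m) ** 3 for x in xs) / len(xs)      # ~ lam_t * E X^3 = 24
```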
Note that, for the convergence to the standard normal distribution, it is sufficient that \({\varGamma }_{k}(\tilde{S}_{N_{t}})\rightarrow 0\) as \(\lambda t\rightarrow \infty \) for each \(k\ge 3\) [25].
For the normal approximation that takes into consideration large deviations in both the Cramér and the power Linnik zones for the distribution function of a compound Poisson process when the cumulant method is used, we refer the reader to [19, 20]. Observe that, from Theorems 1 and 2 in [19, pp. 134–135], we deduce the following corollaries:
Corollary 1.1
If the r.v. \( X \) with \(0<\sigma ^{2}<\infty \) satisfies condition \((\bar{B}_{0})\) and \(N_{t}\) is a homogeneous Poisson process with the probability (4), then for \(x\ge 0\), \(x=O((\lambda t)^{1/6})\), \(\lambda t\rightarrow \infty \), we have
where \({\varPhi }(x)\) is the standard normal distribution function.
Corollary 1.2
If the r.v. \( X \) with \(0<\sigma ^{2}<\infty \) satisfies condition \((\bar{B}_{0})\) and \(N_{t}\) is a homogeneous Poisson process with the probability (4), then
hold for \(x\ge 0\), \(x=o((\lambda t)^{1/6})\), if \(\lambda t\rightarrow \infty \).
The main purpose of this paper is an asymptotic expansion that takes into consideration large deviations in the Cramér zone for the distribution density function of the process (10) (see Proposition 3.1, Theorem 3.1, and Corollary 3.1 in Sect. 3). The result on the asymptotic expansion (33) in Sect. 3 extends the asymptotic expansions for the density function of sums of a non-random number of summands that take large deviations in the Cramér zone into account [9, 10]. The solution is obtained by first using a general lemma of Saulis [34, p. 154] on the asymptotic expansion of the density function of an arbitrary r.v. with zero mean and unit variance. Among the existing methods for large deviations (see, e.g., [13–15, 18, 34]), we rely on the cumulant method [34]. Following [9, 10, 34], to estimate the remainder term (35) of the asymptotic expansion (33), we also use Statulevičius' known estimates for characteristic functions [34, p. 172–174]. Consequently, Sect. 2 is devoted to presenting the auxiliary Lemmas 2.1–2.3 that lead to an estimate of the remainder term (35).
2 Auxiliary Lemmas
Suppose that for an arbitrary r.v. X with mean \(\mu = 0\), variance \(\sigma ^{2}<\infty \), and distribution function \(F_{X}(x)=\mathbf {P}(X<x)\) for all \(x\in \mathbb {R}\), there exists a density function
Moreover, let \(X^{^{\prime }}=X-Y\) be the symmetrized r.v., where Y is independent of X and has the same distribution. Clearly, the distribution and characteristic functions of \(X^{^{\prime }}\) are
The corresponding density will be denoted by \(p_{X^{^{\prime }}}(x)\). Statulevičius proved the following lemmas (see Lemmas 2.1, 2.2, 2.3 in [34, p. 172–174]).
Lemma 2.1
Let X be any r.v. with density \(p_{X}(x)\). Then, for any collection \(\mathfrak {M}=\{{\varDelta }_{i},A_{i},i=1,2,\ldots \}\) of non-overlapping intervals \({\varDelta }_{i}\) and positive constants \(A_{i}<\infty \), and for any \(-\infty<u<\infty \), the estimate
holds, where
Corollary 2.1
If \(p_{X}(x)\le A<\infty \) and \(\sigma ^{2}=\mathbf {E}X^{2}<\infty \), then
for all \(-\infty<u<\infty ,\) where \(A>0.\)
Lemma 2.2
Let a nonnegative function g(u), defined on the interval \([b,\infty )\), satisfy the Lipschitz condition \(|g(u+s)-g(u)|\le K|s|\). Moreover, let \(V:=\int _{b}^{\infty }g(u)du<\infty .\) Then for any \(\varepsilon >0\) and any partition \(b=u_{0}<u_{1}<\dots \) of the interval \([b,\infty )\) with \(\underset{0\le k<\infty }{\max } (u_{k+1}-u_{k})\le \varepsilon \), we have the inequality
where \({\varDelta } u_{k}=u_{k+1}-u_{k}\).
Let us assume that \(X_{j}\), \(j=1,2,\ldots \), are independent, nonidentically distributed r.vs., and put \(B_{n}^{2}=\sum _{j=1}^{n}\sigma _{j}^{2}\). Let
where \(\langle b \rangle \) denotes the distance from the number b to the nearest integer.
Lemma 2.3
For any \(n\ge 1\) and \(H_{n}>0,\) there exists a partition \(\dots<u_{-1}^{(n)}<u_{0}^{(n)}=0<u_{1}^{(n)}<u_{2}^{(n)}<\dots \) of the interval \((-\infty ,\infty )\) satisfying the condition
such that
provided \(u\in [u_{k}^{(n)},\) \(u_{k+1}^{(n)}],\) where, for a given n, \(u^{(n)}_{k0}\) equals either \(u_{k}^{(n)}\) or \(u_{k+1}^{(n)}\) depending on u.
3 Asymptotic Expansion in the Large Deviation Cramér Zone for the Density Function of the Compound Poisson Process
Along with the condition (\(\bar{B}_{0}\)), we assume that for the r.v. X that has a common distribution with mean and finite positive variance (1), there exists a density function \(p_{X}(x)=(d/dx)F_{X}(x)\) such that
Let us recall that certain difficulties may appear in the formulation of problems related to local limit theorems for the compound Poisson process (6), as the distribution function (12) of \(S_{N_{t}}\) is not absolutely continuous for all \(x\in \mathbb {R}\) because of the presence of an atom at zero. Let us consider the case where \(F_{0}(x)=1\), \(x>0\); thus, if (12) is differentiable, then
where \(p_{0}(x)=0\). In addition, we may conclude that the fulfillment of the condition (\(D^{\prime }\)) implies
Observe that, following the proof of the general Lemma 6.1 [34, p. 154], we must first introduce the conjugate r.v. of an arbitrary r.v. X and the conjugate process of the compound Poisson process \(S_{N_{t}}\); both are necessary to establish the purpose of the paper.
Definition 3.1
X(h), \(h=h(x)>0\), is the r.v. conjugate to X if its density and characteristic functions are
where
is the generating function for the r.v. X.
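Definition 3.1 is the usual exponential tilting. As a hedged illustration of ours (not from the paper): for \(X\sim \mathrm {Exp}(1)\) and \(0<h<1\), \(\varphi _{X}(h)=1/(1-h)\), and the conjugate density \(e^{hx}p_{X}(x)/\varphi _{X}(h)\) is again exponential, with mean \(1/(1-h)\):

```python
import math

# Hedged sketch of the conjugate (exponentially tilted) r.v. X(h): for
# X ~ Exp(1) and 0 < h < 1, the tilted density e^{hx} e^{-x} / phi(h) with
# phi(h) = 1/(1-h) is the Exp(1-h) density. Parameter values are illustrative.

def tilted_density_exp1(h, x):
    """Conjugate density of Exp(1): e^{h x} e^{-x} / phi_X(h) = (1-h) e^{-(1-h)x}."""
    phi = 1.0 / (1.0 - h)
    return math.exp(h * x) * math.exp(-x) / phi

h = 0.3
dx = 0.001
# numeric mean of the tilted law by a Riemann sum on [0, 60)
mean_h = sum(x * tilted_density_exp1(h, x) * dx
             for x in (i * dx for i in range(60_000)))
# mean_h ~ 1/(1-h), i.e. the tilt shifts the mean upward, as in (24)-(26)
```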
Moreover, according to [5, p. 361], we assume that the conjugate compound Poisson process can be defined by
where \(N_{t}(h)\) and \(X_{j}(h)\), \(t\ge 0\), \(h>0\), are independent. Additionally, the probability of \(N_{t}(h)\) is
where \(\varphi _{X}(h)\) is the generating function (18) of the r.v. X. The quantity h will be defined later.
The identification of \(N_{t}(h)\) and X(h) can be performed with the help of the Laplace transform of \(S_{N_{t}(h)}(h)\). Note that an arbitrary conjugate r.v. X(h) of an arbitrary r.v. X is defined by the density function (17). Thus, let us define the conjugate process of the compound Poisson process by using (17) with \(X(h):=S_{N_{t}(h)}(h)\) and \(X:=S_{N_{t}}\) [5]:
By virtue of (18), together with (4) and (16), we can state that the generating function of \(S_{N_{t}}\) is
Hence, by the definition of the ch.f. (3) of the r.v. X, and from (17), (21), and (22), we have
Clearly, the same result (23) follows using (19) and (20). Note that \(q_{m}(0)=q_{m}\) as \(\varphi _{X}(0)=1\), where \(q_{m}\) is defined by (4).
The use of the definition (2) of the moments of X, together with (17) and (18), produces the rth-order moments of X(h)
Additionally, based on Lemma 1 in [32, p. 135], together with the definition (2), we have
Hence, it follows from (24) and (25) that
According to the definition (2) of cumulants, together with (23),
where \({\varGamma }_{k}(S_{N_{t}})\) is defined by (15).
For the following, set
where by (28),
By virtue of (2) and (23), the ch.f.
holds, where \(\tilde{u}=u/\sqrt{\mathbf {D}S_{N_{t}(h)}(h)}\).
Based on [16, p. 213–216], to derive the equation that gives the solution of \(h=h(x)> 0\), we need to perform the following calculations. By (21),
Thus, according to
we have
with \(z=(\sqrt{\mathbf {D}S_{N_{t}}}x-\mathbf {E}S_{N_{t}(h)}(h)+\mathbf {E}S_{N_{t}})/\sqrt{\mathbf {D} S_{N_{t}(h)}(h)}\), when
where \(\mathbf {E}S_{N_{t}}\) and \(\mathbf {D}S_{N_{t}}\) are defined by (7), (8) and \(\mathbf {E}S_{N_{t}(h)}(h)\) by (30). Hence, according to [16], the quantity \(h=h(x)>0\) should be defined as the solution of equation (32).
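Equation (32) can be solved for \(h=h(x)\) numerically. A hedged sketch, assuming Exp(1) summands so that \(\varphi _{X}(h)=1/(1-h)\) and the tilted mean is \(\mathbf {E}S_{N_{t}(h)}(h)=\lambda t\,\varphi _{X}^{\prime }(h)=\lambda t/(1-h)^{2}\); all parameter values are illustrative:

```python
import math

# Hedged numeric sketch of solving the saddle-point equation (32),
# E S_{N_t(h)}(h) - E S_{N_t} = x * sqrt(D S_{N_t}), for h = h(x),
# with Exp(1) summands (E X = 1, E X^2 = 2) and an illustrative lam*t.

lam_t = 100.0
EX, EX2 = 1.0, 2.0
sd_S = math.sqrt(lam_t * EX2)            # sqrt(D S_{N_t}) = sqrt(lam*t*E X^2)

def tilted_mean(h):
    return lam_t / (1.0 - h) ** 2        # lam*t * phi_X'(h) for Exp(1)

def solve_h(x, lo=0.0, hi=0.9):
    """Bisection on the increasing map h -> tilted_mean(h) - lam_t*EX."""
    target = x * sd_S
    for _ in range(80):
        mid = (lo + hi) / 2
        if tilted_mean(mid) - lam_t * EX < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

h = solve_h(x=1.0)   # tilts the mean upward by one standard deviation
```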
Recall that for the kth-order cumulants of the standardized compound Poisson process \(\tilde{S}_{N_{t}}\), which is defined by (10), the upper estimate (14) holds. Thus, observe that \(\tilde{S}_{N_{t}}\) satisfies Statulevičius condition \((S_{\gamma })\) with \(\gamma =0\) and \({\varDelta }:= {\varDelta }_{t}\), where \({\varDelta }_{t}\) is defined by (14). Accordingly, the general Lemma 6.1 [34, p. 154] yields the following Proposition 3.1.
We shall use \(\theta _{i}\), \(i=1,2,\dots \) (with or without an index), to denote a quantity, not always one and the same, that does not exceed 1 in modulus.
Proposition 3.1
If the r.v. \( X \) with \(0<\sigma ^{2}<\infty \) satisfies conditions \((\bar{B}_{0})\) and \((D^{\prime })\), and \(N_{t}\) is a homogeneous Poisson process with the probability (4), then for every \(l\ge 3\), in the interval \(0 \le x<\sqrt{\lambda t(\sigma ^{2}+\mu ^{2})}/(24K)\), the asymptotic expansion
is valid, where \(\lambda ,\ t,\ K >0\), and
Here \({\varGamma } (\alpha )=\int _{0}^{\infty }x^{\alpha -1}e^{-x}dx\). If \(\alpha =n\in \mathbb {N}\), then \({\varGamma } (n)=(n-1)!\). Furthermore,
where the coefficients \(\tilde{\lambda }_{t,k}\) (expressed by cumulants of \(\tilde{S}_{N_{t}}\)) coincide with the coefficients of the Cramér series [32] given by the formula \(\tilde{\lambda }_{t,k}=-b_{t,k-1}/k\), where \(b_{t,k}\) are identified successively from equations
In particular,
For \(\tilde{\lambda }_{t ,k}\), the following estimate is valid:
For the polynomials \(M_{t,v}(x)\), the formulas
hold, where the summation \(\sum ^{*}\) is taken over all nonnegative integer solutions \((k_{1}, k_{2},\ldots ,k_{v})\), \(0\le k_{1},\ldots ,k_{v}\le v\), \(1\le m\le v\), of the equation \(k_{1}+2k_{2}+\ldots +vk_{v}=v\), \(k_{1}+k_{2}+\ldots +k_{v}=m\). Here \(H_{v}(x)\) denotes the Chebyshev–Hermite polynomials
In particular,
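The Chebyshev–Hermite polynomials \(H_{v}(x)\) appearing in \(M_{t,v}(x)\) are the probabilists' Hermite polynomials, which satisfy the three-term recurrence \(H_{v+1}(x)=xH_{v}(x)-vH_{v-1}(x)\) with \(H_{0}=1\), \(H_{1}=x\); a small evaluator as a sketch:

```python
# Evaluate the Chebyshev-Hermite (probabilists' Hermite) polynomial H_v(x)
# by the standard three-term recurrence H_{v+1} = x H_v - v H_{v-1}.

def hermite(v, x):
    """H_v(x) with H_0 = 1, H_1 = x."""
    h_prev, h = 1.0, x
    if v == 0:
        return h_prev
    for k in range(1, v):
        h_prev, h = h, x * h - k * h_prev
    return h

# e.g. H_3(x) = x^3 - 3x and H_4(x) = x^4 - 6x^2 + 3
```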
Theorem 3.1
If the r.v. \( X \) with \(0<\sigma ^{2}<\infty \) satisfies conditions \((\bar{B}_{0})\) and \((D^{\prime })\), and \(N_{t}\) is a homogeneous Poisson process with the probability (4), then for every \(l\ge 3\), in the interval \(0 \le x<\sqrt{\lambda t(\sigma ^{2}+\mu ^{2})}/(24K)\), \(K>0\), the asymptotic expansion (33) holds. Moreover, for the remainder term \(R_{t}(h)\), which is defined by (35), the estimate
holds. Here \(h=h(x)>0\) is the solution of equation (32), and \(U_{t}\) is defined by (36). In addition,
where \(\varphi _{X}(h)\) and \(A>0\) are defined, respectively, by (18) and \((D^{\prime })\). Furthermore, H(h), \(\tau (h) \), A(h) are defined by (47), (59), (60), respectively.
Remark 3.1
For constants \(c_{1}(h)\), \(c_{2}(h)\), and \(c_{3}(h)\), estimates
hold, where \(0<c<2\cdot 10^{-8}\), \(M=2\max \{\sigma ,K\}\), \(K>0\), and A is defined by (\(D^{'}\)). Consequently,
Observe that, under Proposition 3.1 and Theorem 3.1, we deduce the following corollary.
Corollary 3.1
If the r.v. \( X \) with \(0<\sigma ^{2}<\infty \) satisfies conditions \((\bar{B}_{0})\) and \((D^{\prime })\), and \(N_{t}\) is a homogeneous Poisson process with the probability (4), then for \(x\ge 0\), \(x=O((\lambda t)^{1/6})\), \(\lambda t\rightarrow \infty \), we have
Proof of Theorem 3.1
First, let us present a sketch of the proof. By Proposition 3.1, to prove Theorem 3.1, the estimate (37) of the remainder term (35) has to be verified. Obviously,
where
Here, \(\tilde{U}_{t}(h)\) is defined by (47). Accordingly, the proof of this theorem splits into two main steps. The first step consists in obtaining the estimate (50) of \(I_{1}\); for that, we prove that the upper estimate (49) of \(f_{\tilde{S}_{N_{t}(h)}(h)}(u)\) holds for \(|u|\le \tilde{U}_{t}(h)\).
The second step of the proof is devoted to estimating \(I_{2}\) and mainly consists in finding an upper estimate of \(f_{\tilde{S}_{N_{t}(h)}(h)}(u)\) for \(|u|\ge \tilde{U}_{t}(h)\). To achieve this, Lemmas 2.1–2.3 and the evaluations presented in Theorem 6.1 in [34, p. 185] are applied. In more detail, first, using (44)–(46), the estimate (51) of \(I_{2}\) is proved. From the expression (51), it follows that the further estimation of \(I_{2}\) essentially reduces to estimating \(\exp \big \{-\lambda t \varphi _{X}(h)I_{h}(u) \big \}\), where \(I_{h}(u)\) is defined by (46). In Substeps 2.1 and 2.2, the application of Lemmas 2.3 and 2.1 produces the lower estimates (53) and (54) of \(I_{h}(u)\), which yield the second upper estimate (55) of \(I_{2}\). Based on it, the final estimate of \(I_{2}\) requires estimates of \(\sum _{k}\underset{u_{k}<u<u_{k+1}}{\sup }\exp \{-\varphi _{X}(h)(1-|f_{X(h)}(2\pi u)|^{2})\}\), Q(h), \(\tau (h)\), and A(h) (see (57)–(60)). In Substep 2.3, Lemma 2.2 leads to the estimate (57) of the above sum, and in Substep 2.4, following the proof of Theorem 6.1 in [34, p. 185], the lower estimate and the equalities (58)–(60) for Q(h), \(\tau (h)\), and A(h) are obtained.
Finally, the estimates (50) and (61) from the first and second steps lead to the estimate (37) of the remainder term (35).
Step 1 (Estimate \(I_{1}\)). Suppose that \(X^{^{\prime }}(h)=X(h)-Y(h)\) is the symmetrized conjugate r.v., where the conjugate r.v. Y(h) is independent of X(h) and has the same distribution. Clearly, the distribution and characteristic functions of \(X^{^{\prime }}(h)\) are as follows:
The corresponding density will be denoted by \(p_{X^{^{\prime }}(h)}(x)\). Obviously, \(\mathbf {D}X^{^{\prime }}(h)=2\sigma ^{2}(h)\).
Denote
Because
by (23), we obtain
where
Here \(l_{h}(1/(2|u|))\) is defined by (43).
Let us denote
Further,
where \(H(h)=2\big (\mathbf {E}(X(h)-\mu (h))^{4}\big )^{1/2}/\sigma (h)\). The use of (31) and (45)–(48) gives
and
Here \(\mu (h),\,\mathbf {E}X^{2}(h),\) and \(\sigma ^{2}(h)\) are defined by (26) and (27). Also, \(\mathbf {D}S_{N_{t}(h)}(h)\) is defined by (30). Consequently,
according to (49). Here \(U_{t}\) and \(c_{1}(h)\) are defined by (36) and (38).
Step 2 (Estimate \(I_{2}\) ). Now let us estimate \(I_{2}\):
by (44) and (45) with \(\lambda t>2\), where \(I_{h}(u)\) is defined by (46). Hence, observing that \(\exp \{2\varphi _{X}(h)I_{h}(u)\}\le \exp \{2\varphi _{X}(h)\}\) from (46), we arrive at
Substep 2.1 (The first lower estimate of \(I_{h}(u)\) ): If we set \(n=1\) and use the conjugate r.v. X(h) instead of X in Lemma 2.3, then we find that for any \(H(h)>0\) there exists a partition \(\ldots<u_{-1}<u_{0}=0<u_{1}<u_{2}<\ldots \) of the interval \((-\infty ,\infty )\) satisfying the condition
such that
provided \(u\in [u_{k}\), \(u_{k+1}]\), where \(u_{k0}\) is \(u_{k}\) or \(u_{k+1}\) depending on u. Here, \(l_{h}(H(h))\) is defined by (43).
Substep 2.2 (The second lower estimate of \(I_{h}(u)\)): In contrast, employing Lemma 2.1 gives the following: if X(h) has a density function such that \(p_{X(h)}(x)\le A(h)<\infty \), then for any collection \(\mathfrak {M}(h)=\{{\varDelta } (h),A(h)\}\) of the interval \({\varDelta }(h)\) and the positive constant A(h), the estimate
holds for all \(|u|\ge 1/(2H(h))\), \(H(h)>0\). Here
The next step is to estimate (51) for \((3/4)\lambda t\varphi _{X}(h)I_{h}(u)\) and \((1/4)\lambda t\varphi _{X}(h)I_{h}(u)\) using (54) and (53), respectively. According to (54) and (53),
Substep 2.3 (Estimate of \(\sum _{k}\underset{u_{k}<u<u_{k+1}}{\sup }\exp \{-\varphi _{X}(h)(1-|f_{X(h)}(2\pi u)|^{2})\}\)).
We remark that Lemma 2.2 holds with
Here (see [34, p. 186]),
Hence, \(|g(u+s)-g(u)|\le 2\sqrt{2}\pi \sigma (h)s\). Accordingly, Lemma 2.2 holds with
as
Therefore, taking Lemma 2.2 into consideration, together with (52) and (56), we can write
Substep 2.4 (\(Q(h)\), \(\tau (h)\), \(A(h)\)). Further, let us find \(\tau (h)\) such that
It was proved in Theorem 6.1 in [34, p. 185] that
if \(\tilde{A}>h\ge 0\). Hence
It suffices that
where \(\varphi _{X^{^{\prime }}}(\tilde{A})\) and \(\varphi _{X^{^{\prime }}}(h)\) are defined by (18). Next, if \({\varDelta }(h)=]-\tau (h),\tau (h)[,\) then recalling \((D^{\prime })\), we derive
where
Substituting (57) and (58)–(60) into (55), we derive
where \(c_{2}(h)\) and \(c_{3}(h)\) are defined by (39) and (40).
Finally, (42), (50), and (61) lead to (37). \(\square \)
Let us verify the estimates (41) stated in Remark 3.1. The use of (13), (27), and (25) gives
if \(0\le h\le 1 /(28M).\) Because
by (62) and (63), together with \(\sigma \le M/2\), we evaluate
Hence
from the estimates above and (64). Employing (24) and \((\bar{B}_{0})\), together with \(K\le M/2<M\), we derive
if \(0\le h\le 1 /(28M)\).
Further, we need the estimate
from (13), if \(|z|\le \tilde{A}=1/(25M)\). Thus,
The application of (62), (66), and (67) gives
The next step is to estimate \(\tau (h)\) and A(h) defined by (59) and (60), respectively. Recalling (67) and observing that \(\tilde{A}=1/(25M),\,\sigma \le M/2,\,h\le 1 /(28 M)<\tilde{A}\), we assert
and
where \(c>0\) and \(A>0\) are defined by (58) and \((D^{\prime })\), respectively. Finally, employing (65)–(70) gives the estimates (41).
Remark 3.2
Note that (30) together with (8), (26), and \((\bar{B}_{0})\) leads to the estimate of (32),
if \(h\le 1/M\). Subsequently, if \( h\le 1/(28M)\), then \(x \le \sqrt{\lambda t (\sigma ^{2}+\mu ^{2})}/(24K)\) as \(K< M\).
References
Babu, G.J., Singh, K., Yang, Y.: Edgeworth expansions for compound Poisson processes and the bootstrap. Ann. Inst. Stat. Math. 55(1), 583–594 (2003). doi:10.1007/BF02530486
Bikelis, A., Žemaitis, A.: Asymptotic expansions for the probabilities of large deviations. Normal approximation III. Lith. Math. J. 16, 332–348 (1976)
Bening, V.E., Korolev, V.Yu.: Generalized Poisson Models and Their Applications in Insurance and Finance. VSP, Utrecht (2002)
Bening, V.E., Korolev, V.Y., Shorgin, S.Y.: On approximations to generalized Poisson’s distributions. J. Math. Sci. 83(3), 360–372 (1997). doi:10.1007/BF02400920
Bonin, O.: Large deviation theorems for weighted compound Poisson sums. Probab. Math. Stat. 23(2), 357–368 (2003)
Book, S.A.: A large deviation theorem for weighted sums. Z. Wahrscheinlichkeitstheorie verw. Geb. 26, 43–49 (1973)
Borovkov, A.A., Mogulskii, A.A.: Integro-local limit theorems including large deviations for sums of random vectors. II. Theory Probab. Appl. 45, 5–29 (2000). doi:10.1137/S0040585X97978026
Cramér, H.: Sur un nouveau théorème-limite de la théorie des probabilités. Actualités Scientifiques et Industrielles 736, 5–23. Hermann, Paris (1938)
Deltuvienė, D., Saulis, L.: Asymptotic expansions of the distribution density function for the sum of random variables in the series scheme in large deviations zones. Acta Appl. Math. 78, 87–97 (2003). doi:10.1023/A:1025783905023
Deltuvienė, D., Saulis, L.: Asymptotic expansions for the density function of the series scheme of random variables in large deviation Cramér zone. Lith. Math. J. 41, 620–625 (2001)
Döring, H., Eichelsbacher, P.: Moderate deviations via cumulants. J. Theor. Probab. 26, 360–385 (2013). doi:10.1007/s10959-012-0437-0
Embrechts, P., Jensen, J.L., Maejima, M., Teugels, J.L.: Approximations for compound Poisson and Pólya processes. Adv. Appl. Probab. 17(3), 623–637 (1985)
Fatalov, V.: Exact asymptotics of probabilities of large deviations for Markov chains: the Laplace method. Izv. Math. 75(4), 837–868 (2011)
Fatalov, V.: Large deviations for distributions of sums of random variables: Markov Chain Method. Probl. Inf. Transm. 46(2), 160–183 (2010). doi:10.1134/S0032946010020055
Gao, F., Zhao, X.: Delta method in large deviations and moderate deviations for estimators. Ann. Stat. 39(2), 1211–1240 (2011). doi:10.1214/10-AOS865
Ibragimov, I.A., Linnik, Y.V.: Nezavisimye stationarno svyazannye velichiny. Nauka, Moscow (1965). (in Russian)
Jing, B.-Y., Wang, Q., Zhou, W.J.: Cramér-type moderate deviation for studentized compound Poisson sum. J. Theor. Probab. 28, 1556–1570 (2015). doi:10.1007/s10959-014-0542-3
Jensen, J.L.: Saddlepoint Approximations. Oxford University Press, Oxford (1995)
Kasparavičiūtė, A., Saulis, L.: Large deviations for weighted random sums. Nonlinear Anal. Model. Control 18(2), 129–142 (2013)
Kasparavičiūtė, A.: Theorems of large deviations for the sums of a random number of independent random variables. PhD thesis. Vilnius Gediminas technical university (2013). DS.005.1.01.ETD
Korolev, V.Y., Shevtsova, I.G.: An improvement of the Berry–Esseen inequality with applications to Poisson and mixed Poisson random sums. Scand. Actuar. J. 2012(2), 81–90 (2012). doi:10.1080/03461238.2010.485370
Korolev, V.Yu., Zhukov, Yu.V.: Convergence rate estimates in local limit theorems for Poisson random sums. J. Math. Sci. 99(4), 1439–1444 (2000). doi:10.1007/BF02673718
Korolev, V.Y., Zhukov, Y.V.: On the rate of convergence in the local central limit theorem for densities. J. Math. Sci. 91(3), 2931–2941 (1998). doi:10.1007/BF02432864
Kubilius, J.: Probabilistic methods in the theory of numbers. American Mathematical Society, Providence (1964)
Leonov, V.P.: Some Applications of Higher Cumulants in Random Process Theory. Nauka, Moscow (1964)
Michel, R.: On probabilities of large claims that are compound Poisson distributed. Blätter der DGVFM 21(2), 207–211 (1993)
Mikosch, T.: Non-life Insurance Mathematics. An Introduction with the Poisson process. Springer, Berlin (2009)
Mikosch, T., Nagaev, A.V.: Large deviations of heavy-tailed sums with applications in insurance. Extremes 1, 81–110 (1998). doi:10.1023/A:1009913901219
Mita, H.: Probabilities of large deviations for sums of random number of independent identically distributed random variables and its application to a compound Poisson process. Tokyo J. Math. 20(2), 353–364 (1997)
Nagaev, S.V.: Large deviations of independent random variables. Ann. Probab. 7, 745–789 (1979)
Nagaev, A.V.: Local limit theorems with respect to large deviations. In: Limit Theorems and Random Processes, pp. 71–88. Tashkent (1967)
Petrov, V.V.: Limit theorems of probability theory. Sequences of Independent Random Variables. Oxford Studies in Probability, vol. 4. Clarendon/Oxford University Press, New York (1995)
Rikhter, V.: Local limit theorems for large deviations. Theory Probab. Appl. 2(2), 214–229 (1957)
Saulis, L., Statulevičius, V.: Limit Theorems for Large Deviations. Mathematics and Its Applications (Soviet Series), vol. 73. Kluwer Academic, Dordrecht (1991). Translated and revised from the 1989 Russian original
Statulevičius, V.: On large deviations. Z. Wahrsc. und Verw. Geb. 6, 133–144 (1966)
Acknowledgments
We express our gratitude to Prof. Dr. Habil. Leonas Saulis for his insightful suggestions. We thank the Editor and the referee for their comments, which helped us improve the article.
Cite this article
Kasparavičiūtė, A., Deltuvienė, D. Asymptotic Expansion for the Distribution Density Function of the Compound Poisson Process in Large Deviations. J Theor Probab 30, 1655–1676 (2017). https://doi.org/10.1007/s10959-016-0696-2