Abstract
In this article, Stein’s method and z-functions are used to determine a non-uniform bound for approximating the cumulative distribution function of a nonnegative integer-valued random variable X by the negative binomial cumulative distribution function with parameters \(r\in {\mathbb {R}}^+\) and \(p=1-q\in (0,1)\). This bound is an appropriate criterion for evaluating the accuracy of this approximation. The result obtained in this study is used to approximate cumulative distribution functions including the negative hypergeometric cumulative distribution function, the Pólya cumulative distribution function, and the beta negative binomial cumulative distribution function.
1 Introduction
Probability approximation has been widely studied in probability theory and statistics. It is usually related to the law of large numbers, the law of small numbers, the law of the iterated logarithm, etc. In the past, mathematicians and statisticians have applied a number of methods to derive uniform and non-uniform bounds for measuring the accuracy of the corresponding probability approximation. One of the most useful methods is Stein’s method, a powerful and efficient technique for finding uniform and non-uniform bounds for the approximation of one probability distribution by another. These bounds can be expressed in terms of the distance between the two probability distributions. Stein [5] introduced his technique in order to give an explicit uniform bound for estimating the accuracy of the normal approximation, and the method was later extended to other distributions, including the Poisson, binomial, and negative binomial distributions. The current study focuses on the application of Stein’s method to the negative binomial approximation. Let NB be a negative binomial random variable with parameters \(r\in {\mathbb {R}}^+\) and \(p=1-q\in (0,1)\), whose probability mass function is of the form
$$\begin{aligned} p_{NB}(x)=\frac{\Gamma (r+x)}{x!\Gamma (r)}p^{r}q^{x},\quad x=0,1,2,.... \end{aligned}$$(1.1)
The mean and variance of NB are \(E(NB)=\frac{rq}{p}\) and \(Var(NB)=\frac{rq}{p^{2}}\), respectively. For \(r\in {\mathbb {N}}\), the distribution of NB can be interpreted as the distribution of the number of failures before the \(r{\mathrm{th}}\) success. Additionally, the geometric distribution is a special case of the negative binomial distribution when \(r=1\).
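These moment formulas can be spot-checked numerically. The sketch below is an illustration only: the Gamma-function form of the pmf follows the display above, while the parameter values \(r=2.5\), \(p=0.4\) and the truncation point 500 are arbitrary choices of ours. The final loop also previews Sect. 2 by computing the z-function of NB itself from its definition; numerically it agrees with \(q(r+x)/p\), which is consistent with the error term \(E\left| (r+X)q-pz(X)\right| \) of Sect. 3 vanishing when X is itself negative binomial.

```python
from math import lgamma, exp, isclose

def nb_pmf(x, r, p):
    # Gamma-function form of the negative binomial pmf with parameters r, p
    q = 1.0 - p
    return exp(lgamma(r + x) - lgamma(x + 1) - lgamma(r)) * p**r * q**x

r, p = 2.5, 0.4
q = 1.0 - p
probs = [nb_pmf(x, r, p) for x in range(500)]  # tail beyond 500 is negligible here

mean = sum(x * pr for x, pr in enumerate(probs))
var = sum((x - mean)**2 * pr for x, pr in enumerate(probs))
print(isclose(sum(probs), 1.0, abs_tol=1e-9))   # total mass is 1
print(isclose(mean, r*q/p, rel_tol=1e-9))       # E(NB) = rq/p
print(isclose(var, r*q/p**2, rel_tol=1e-9))     # Var(NB) = rq/p^2

# z-function of NB computed from its definition (see Sect. 2); it matches q(r+x)/p
mu = r*q/p
for x in range(15):
    z = sum((mu - k) * probs[k] for k in range(x + 1)) / probs[x]
    assert isclose(z, q*(r + x)/p, rel_tol=1e-6)
```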
The first study of negative binomial approximation was conducted by Brown and Phillips [1]. They used Stein’s method to obtain a uniform bound for approximating the distribution of a sum of dependent Bernoulli random variables by a negative binomial distribution and used this result to approximate the Pólya distribution. Wang and Xia [10] used a negative binomial distribution to approximate the distribution of the number of k-runs, with a uniform bound obtained via Stein’s method. Vellaisamy and Upadhye [8] used Kerstan’s method to obtain a uniform bound for approximating the distribution of a sum of independent geometric random variables by a negative binomial distribution. Vellaisamy et al. [9] presented a uniform bound for the total variation distance between the distribution of a sum of n independent nonnegative integer-valued random variables and a negative binomial distribution using Stein’s method. Kumar and Upadhye [3] used a probability generating function approach to find the Stein operator for the negative binomial distribution and applied the operator to obtain a uniform bound for the total variation distance between the distribution of a sum of n independent random variables and a negative binomial distribution; however, this result is not easy to compute. The negative binomial approximation of the distribution of a single nonnegative integer-valued random variable can be described as follows. Let X be a nonnegative integer-valued random variable with probability mass function \(p_{X}(x)>0\) for every \(x\in \mathcal {X}\), where \(\mathcal {X}\) is the support of X. The mean and variance of X are \(\mu \) and \(0<\sigma ^{2}<\infty \), respectively. In this case, Teerapabolarn and Boondirek [6] used Stein’s method together with z-functions to obtain a uniform bound that can be expressed in the following form:
and for \(\mu =\frac{rq}{p}\),
for every \(A\subseteq {\mathbb {N}}\cup \{0\}\), where \(d_{A}(X,NB)=\left| P(X\in A)-P(NB\in A)\right| \) is the distance between the distribution of X and a negative binomial distribution and \(z(x)=\frac{1}{p_{X}(x)}\sum _{k=0}^{x}(\mu -k)p_{X}(k)\) is the z-function associated with the random variable X. It can be seen that the bound in the case of \(\mu =\frac{rq}{p}\) is sharper than that in the case of \(\mu \ne \frac{rq}{p}\). Teerapabolarn [7] used the same tools to improve the uniform bound in (1.3) and produce a sharper result:
for every \(A\subseteq {\mathbb {N}}\cup \{0\}\). For \(A=C_{x_0}\) and \(\mu =\frac{rq}{p}\), where \(C_{x_0}=\{0,...,x_0\}\) for \(x_0\in {\mathbb {N}}\cup \{0\}\), Jaioun and Teerapabolarn [2] used Stein’s method and w-functions to obtain a uniform bound for the approximation. Because \(z(x)=\sigma ^2w(x)\) for every \(x\in \mathcal {X} \), the result of Jaioun and Teerapabolarn [2] can be written as
for every \(x_0\in {\mathbb {N}}\cup \{0\}\), where \(d_{C_{x_0}}(X,NB)=\left| P(X\le x_0)-P(NB\le x_0)\right| \) is the distance between the cumulative distribution function of X and the negative binomial cumulative distribution function. For \(A=\{x_0\}\), \(x_0\in {\mathbb {N}}\cup \{0\}\), and \(r\ge 1\), Malingam and Teerapabolarn [4] used the same tools as Jaioun and Teerapabolarn [2] to give a non-uniform bound for the point metric between the distributions of X and NB.
Note that the bound in (1.5) is uniform: it does not depend on \(x_0\in {\mathbb {N}}\cup \{0\}\) and therefore does not change as \(x_0\) changes. Such a bound may not be sharp enough to measure the accuracy of this approximation, and in this situation a new bound is required. The aim of this study is to determine a non-uniform bound that improves on the result in (1.5).
We use Stein’s method and z-functions to obtain the main result of this study, as discussed in Sect. 2. In Sect. 3, we use these tools to give a new result of the approximation. In Sect. 4, some applications are provided to illustrate the desired result. Our conclusions are presented in the last section.
2 Method
Teerapabolarn and Boondirek [6] gave the z-function associated with a nonnegative integer-valued random variable X as follows:
$$\begin{aligned} z(x)=\frac{1}{p_{X}(x)}\sum _{k=0}^{x}(\mu -k)p_{X}(k),\quad x\in \mathcal {X}. \end{aligned}$$(2.1)
A recurrence form of (2.1) can be expressed as
Furthermore, for any function \(f:{\mathbb {N}}\cup \{0\}\rightarrow {\mathbb {R}}\) satisfying \(E\left| z(X)\Delta f(X)\right| <\infty \), this yields
$$\begin{aligned} E\left[ (X-\mu )f(X)\right] =E\left[ z(X)\Delta f(X)\right] , \end{aligned}$$(2.3)
where \(\Delta f(X)=f(X+1)-f(X)\).
Brown and Phillips [1] adapted Stein’s original method and applied it to the negative binomial distribution. They modified Stein’s equation for a negative binomial distribution with parameters \(r\in {\mathbb {R}}^+\) and \(p=1-q\in (0,1)\) as follows:
$$\begin{aligned} (r+x)qf(x+1)-xf(x)=h(x)-N_{r,p}(h), \end{aligned}$$(2.4)
where \(N_{r,p}(h)=E[h(NB)] \) and f and h are bounded real-valued functions defined on \({\mathbb {N}}\cup \{0\}\).
For \(A\subseteq {\mathbb {N}}\cup \{0\}\), let \(h_{A}:{\mathbb {N}}\cup \{0\} \longrightarrow {\mathbb {R}}\) be the indicator function of A, that is,
$$\begin{aligned} h_{A}(x)={\left\{ \begin{array}{ll} 1 &{}\text {if}~x\in A,\\ 0 &{}\text {if}~x\notin A. \end{array}\right. } \end{aligned}$$(2.5)
Following Brown and Phillips [1] and Teerapabolarn and Boondirek [6], the solution \(f_{A}\) of (2.4) is of the form
For \(A=\{x_{0}\}\) with \(x_{0}\in {\mathbb {N}}\cup \{0\}\), setting \(h_{x_{0}}=h_{\{x_{0}\}}\), the solution \(f_{x_{0}}=f_{\{x_{0}\}}\) given by (2.6) can be rewritten as
and Malingam and Teerapabolarn [4] showed that
Similarly, for \(A=C_{x_{0}}\) and \( x_{0}\in {\mathbb {N}}\cup \{0\}\), the solution \(f_{C_{x_{0}}}\) given by (2.6) is of the form
and we also have
Note that \(\Delta f_{C_{x_{0}}}(x)=\sum _{k=0}^{x_0} \Delta f_{k}(x)\) and \( f_{C_{x_{0}}}(x)=\sum _{k=0}^{x_0} f_{k}(x)\).
The following lemma gives two bounds for \(\Delta f_{x_{0}}\) and \(\Delta f_x\), which are directly obtained by combining the bounds in Jaioun and Teerapabolarn [2] and Malingam and Teerapabolarn [4].
Lemma 2.1
For \(x_{0},x\in {\mathbb {N}}\), we have the following:
Lemma 2.2
Let \(x_{0}\in {\mathbb {N}}\cup \{0\}\) and \(x\in {\mathbb {N}}\). The following inequalities then hold:
Proof
For \(x\le x_{0}\), we have
where the last inequality follows from Lemma 2.1 of Teerapabolarn [7]. For \(x>x_{0}\), we obtain
Hence, the inequalities in (2.12) hold. \(\square \)
Lemma 2.3
For \(x, k\in {\mathbb {N}}\), \(n \in \{1,...,x\}\) and \(r\in {\mathbb {R}}^{+}\), we have the following:
Proof
All of the results in (2.13)–(2.15) follow by mathematical induction. \(\square \)
Lemma 2.4
For \(x_{0}\in {\mathbb {N}}\), \(\Delta f_{C_{x_{0}}}\) is an increasing function in \(x\in \{1,...,x_{0}\}\).
Proof
Let \(\Delta ^{2}f_{C_{x_{0}}}(x)=\Delta f_{C_{x_{0}}}(x+1)-\Delta f_{C_{x_{0}}}(x)\). We have to show that \(\Delta ^{2}f_{C_{x_{0}}}(x)>0\) for every \(x\in \{1,...,x_0-1\}\).
Following (2.10), we obtain
Let \(F(q)=x(x+1)\sum _{k=0}^{x+1}\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}-2x(r+x+1)\sum _{k=0}^{x}\frac{\Gamma (r+k)q^{k+1}}{k!\Gamma (r)}+(r+x)(r+x+1)\sum _{k=0}^{x-1}\frac{\Gamma (r+k)q^{k+2}}{k!\Gamma (r)}\). We then have to show that \(F(q)>0\). Observe that F(q) is a polynomial of degree \(x+1\), of the form
where
for \(j=0,1,...,x-1\), \(S_{1}=rx(x+1)-2x(r+x+1)\) and \(S_{0}=x(x+1)\).
By the factor theorem, \(q-\alpha \) is a factor of F(q) if and only if \(F(\alpha )=0\). For \(\alpha =1\), we shall show that \(F(1)=0\). With F(q) as noted above, we have
Thus, by (2.18), \(q-1\) is a factor of F(q), so that \(F(q)=(q-1)G(q)\), where G(q) is the quotient. Because \(q-1<0\) for \(q\in (0,1)\), if \(G(q)<0\), then \(F(q)>0\). It is seen that
where \(U_{x-j}=\sum _{k=0}^jS_{x-(k-1)}\) for \(j=0,1,...,x\). From (2.19), it is not easy to check directly that \(G(q)<0\). However, a standard property of polynomials states that \((q-\alpha )^2\) is a factor of F(q) if and only if \(q-\alpha \) is a factor of both F(q) and \(F'(q)\), where \(F'(q)\) is the derivative of F(q) with respect to q. Accordingly, it remains to show that \(q-1\) is a factor of \(F'(q)\), that is, \(F'(1)=0\). Because
we obtain
Therefore, by (2.18) and (2.21), \((q-1)^2\) is a factor of F(q), so that \(F(q)=(q-1)^2H(q)\), where H(q) is the corresponding quotient.
In the next step we shall show that \(H(q)>0\). Because \(H(q)=\frac{G(q)}{q-1}\), we get
where \(W_{x-j}=U_{x}+U_{x-1}+\cdots +U_{x-(j-1)}\) for every \(j=0,1,...,x \). For \(1\le j\le x\), \(W_{x-j}\) can be expressed as
This implies that \(H(q)>0\), and hence \(F(q)>0\), which gives \(\Delta ^{2}f_{C_{x_{0}}}(x)>0\) for every \(x\in \{1,...,x_0-1\}\). Hence \(\Delta f_{C_{x_{0}}}\) is an increasing function in \(x\in \{1,...,x_{0}\}\). \(\square \)
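The key facts in this proof, namely \(F(1)=0\), \(F'(1)=0\), and \(F(q)>0\) on (0, 1), can be spot-checked numerically. The sketch below evaluates F(q) exactly as defined above; the sample values \(r=1.7\) and \(x=4\) are arbitrary choices of ours, and a numerical check is of course no substitute for the argument in the proof.

```python
from math import lgamma, exp

def coef(r, k):
    # Gamma(r+k)/(k! Gamma(r)), the coefficient appearing in F(q)
    return exp(lgamma(r + k) - lgamma(k + 1) - lgamma(r))

def F(q, r, x):
    # F(q) exactly as defined in the proof of Lemma 2.4
    s1 = sum(coef(r, k) * q**k       for k in range(x + 2))
    s2 = sum(coef(r, k) * q**(k + 1) for k in range(x + 1))
    s3 = sum(coef(r, k) * q**(k + 2) for k in range(x))
    return x*(x + 1)*s1 - 2*x*(r + x + 1)*s2 + (r + x)*(r + x + 1)*s3

r, x = 1.7, 4  # arbitrary sample values
h = 1e-4       # step for a central-difference derivative at q = 1
print(abs(F(1.0, r, x)) < 1e-8)                                # F(1) = 0
print(abs((F(1 + h, r, x) - F(1 - h, r, x)) / (2*h)) < 1e-3)   # F'(1) = 0
print(all(F(i/100, r, x) > 0 for i in range(1, 100)))          # F > 0 on (0, 1)
```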
Lemma 2.5
\(|\Delta f_{C_{0}}|\) is a decreasing function in \(x\in {\mathbb {N}}\).
Proof
Following the proof of the second case in (2.12), we have
In order to prove that \(|\Delta f_{C_{0}}|\) is a decreasing function, it suffices to show that
for every \(x\in {\mathbb {N}}\). It is seen that
which yields (2.23). Therefore \(|\Delta f_{C_{0}}|\) is a decreasing function in \(x\in {\mathbb {N}}\). \(\square \)
The following lemma gives a non-uniform bound for \(|\Delta f_{C_{x_0}}|\).
Lemma 2.6
Let \(x_0\in {\mathbb {N}}\cup \{0\}\) and \(x\in {\mathbb {N}}\). We then have the following:
- (1)
For \(x_0=0\),
$$\begin{aligned} \sup _{x}\left| \Delta f_{C_{x_0}}(x)\right| \le \frac{rq-(1-p^r)p}{r(r+1)q^2}. \end{aligned}$$(2.24)
- (2)
For \(x_0\in {\mathbb {N}}\),
$$\begin{aligned} \sup _{x}\left| \Delta f_{C_{x_0}}(x)\right|&\le \min \left\{ \frac{1-p^{r+1}}{(r+1)q},\frac{1}{x_0}\right\} \end{aligned}$$(2.25)
and
$$\begin{aligned} \sup _{x}\left| \Delta f_{C_{x_0}}(x)\right|&\le \frac{1}{(r+x)q}. \end{aligned}$$(2.26)
Proof
(1) By Lemma 2.5, we have
Thus, inequality (2.24) is obtained.
(2) Because the bound in (2.26) is obtained from Jaioun and Teerapabolarn [2], we show that the bound in (2.25) holds, as follows. For \(x_{0}\ge x\),
For \(x_{0}<x\),
Hence, by (2.27) and (2.28), the bound in (2.25) holds. \(\square \)
3 Main Result
This section uses Stein’s method together with z-functions to determine a non-uniform bound for the distance between the cumulative distribution function of a nonnegative integer-valued random variable X and the negative binomial cumulative distribution function. The following theorem presents the main result.
Theorem 3.1
Given the above definitions, let \(\mu =\frac{rq}{p}\). We then have the following:
- 1.
For \(x_0=0\),
$$\begin{aligned} d_{C_{x_0}}(X,NB)\le \frac{rq-(1-p^r)p}{r(r+1)q^2}E\left| (r+X)q-pz(X)\right| . \end{aligned}$$(3.1)
- 2.
For \(x_0\in \mathcal {X}\setminus \{0\}\),
$$\begin{aligned} d_{C_{x_0}}(X,NB)&\le \min \Bigg \{\min \left\{ \frac{1-p^{r+1}}{(r+1)q},\frac{1}{x_0}\right\} E\left| (r+X)q-pz(X)\right| \Bigg .,\nonumber \\&\quad \Bigg .\sum _{x\in \mathcal {X}\setminus \{0\}}\left| 1-\frac{z(x)p}{(r+x)q}\right| p_X(x)\Bigg \} \end{aligned}$$(3.2)
Proof
By the proof given in Jaioun and Teerapabolarn [2], it follows that
where \(f=f_{C_{x_0}}\) is defined in (2.9). Because \(E|z(X)\Delta f(X)|<\infty \), by (2.3), we obtain
Because \((r+X)q-pz(X)=0\) when \(X=0\), we get
For \(x_0=0\) and by applying Lemma 2.6 (1), we have
This proves (3.1). For \(x_0\in \mathcal {X}\setminus \{0\}\) and by applying Lemma 2.6 (2), we have
and
Combining (3.3) and (3.4), the result of (3.2) is obtained. \(\square \)
Remark 3.1
For \(x_0=0\), the distance \(d_{C_{x_0}}(X,NB)\) can be computed directly as the exact error \(\left| p_X(0)-p^r\right| \).
The following corollary is an immediate consequence that helps us to compute the bounds in Theorem 3.1.
Corollary 3.1
If \((r+x)q-pz(x)\ge 0\) for every \(x\in \mathcal {X}\) or \((r+x)q-pz(x)\le 0\) for every \(x\in \mathcal {X}\), then we have the following:
- 1.
For \(x_0=0\),
$$\begin{aligned} d_{C_{x_0}}(X,NB)\le \frac{rq-(1-p^r)p}{r(r+1)q^2}\left| \mu -\sigma ^2p\right| . \end{aligned}$$(3.5)
- 2.
For \(x_0\in \mathcal {X}\backslash \{0\}\),
$$\begin{aligned} d_{C_{x_0}}(X,NB)&\le \min \Bigg \{\min \left\{ \frac{1-p^{r+1}}{(r+1)q},\frac{1}{x_0}\right\} \left| \mu -\sigma ^2p\right| \Bigg .,\nonumber \\&\quad \Bigg .\sum _{x\in \mathcal {X}\setminus \{0\}}\left| 1-\frac{z(x)p}{(r+x)q}\right| p_X(x)\Bigg \}. \end{aligned}$$(3.6)
Remark 3.2
Comparing the bounds in Theorem 3.1 and (1.5) shows that the non-uniform bound in Theorem 3.1 is sharper than the uniform bound in (1.5).
4 Applications
In this study, we use the result from Theorem 3.1, or Corollary 3.1, to approximate cumulative distribution functions, including the negative hypergeometric cumulative distribution function, the Pólya cumulative distribution function and the beta negative binomial cumulative distribution function. In addition, we give numerical results measuring the accuracy of each approximation.
4.1 Negative Hypergeometric Distribution
Let X be a negative hypergeometric random variable with parameters N, r and \(m (<N)\), whose probability mass function is of the form
$$\begin{aligned} p_{X}(x)=\frac{\left( {\begin{array}{c}r+x-1\\ x\end{array}}\right) \left( {\begin{array}{c}N-r-x\\ m-x\end{array}}\right) }{\left( {\begin{array}{c}N\\ m\end{array}}\right) },\quad x=0,1,...,m. \end{aligned}$$
The mean and variance of X are \(\mu =\frac{mr}{N-m+1}\) and \(\sigma ^2=\frac{mr(N-m-r+1)(N+1)}{(N-m+1)^2(N-m+2)}\), respectively. It follows from (2.2) that \(z(x)=\frac{(r+x)(m-x)}{N-m+1}\). Let \(p=\frac{N-m+1}{N+1}\). We then have \((r+x)q-pz(x)=\frac{x(r+x)}{N+1}\ge 0\) for every \(x\in \{0,1,...,m\}\). Therefore, by Corollary 3.1, we have
This bound indicates that the result gives a good approximation when \(\frac{r}{N-m+1}\) is small and/or \(x_0\) increases. That is, the negative binomial cumulative distribution function with parameters r and \(\frac{N-m+1}{N+1}\) can be used as an approximation of the negative hypergeometric cumulative distribution function with parameters N, r and m when \(\frac{r}{N-m+1}\) is small and/or \(x_0\) increases.
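The closed form of z(x) and the sign of \((r+x)q-pz(x)\) used above can be verified in exact rational arithmetic. The sketch below assumes the standard negative hypergeometric pmf \(p_X(x)=\binom{r+x-1}{x}\binom{N-r-x}{m-x}/\binom{N}{m}\) (consistent with the stated mean \(mr/(N-m+1)\)); the small sample values \(N=20\), \(m=5\), \(r=3\) are our own choices.

```python
from fractions import Fraction
from math import comb

N, m, r = 20, 5, 3  # small sample values chosen for an exact check
pmf = [Fraction(comb(r + x - 1, x) * comb(N - r - x, m - x), comb(N, m))
       for x in range(m + 1)]
print(sum(pmf) == 1)                       # the pmf sums to 1

mu = sum(x * pr for x, pr in enumerate(pmf))
print(mu == Fraction(m * r, N - m + 1))    # mean mr/(N-m+1)

p = Fraction(N - m + 1, N + 1)
q = 1 - p
for x in range(m + 1):
    # z(x) computed from its definition (2.1) ...
    z = sum((mu - k) * pmf[k] for k in range(x + 1)) / pmf[x]
    # ... agrees with (r+x)(m-x)/(N-m+1), and (r+x)q - p z(x) = x(r+x)/(N+1) >= 0
    assert z == Fraction((r + x) * (m - x), N - m + 1)
    assert (r + x) * q - p * z == Fraction(x * (r + x), N + 1)
print("identities verified")
```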
Numerical results:
- 1.
Let \(m=30\), \(r=50\) and \(N=500\). We then have \(p=0.94012\), and the bound for the negative binomial approximation to the negative hypergeometric cumulative distribution function is of the form
$$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} 0.078912 &{}\text {if}~x_0=0,\\ 0.106157 &{}\text {if}~x_0=1,2,3,\\ \frac{0.344111}{x_0} &{}\text {if}~x_0=4,...,30,\\ \end{array}\right. } \end{aligned}$$which improves upon the result of Jaioun and Teerapabolarn [2]:
$$\begin{aligned} d_{C_{x_0}}(X,NB)\le 0.107847 \end{aligned}$$for every \(x_0\in \{0,1,...,30\}\).
- 2.
Let \(m=50\), \(r=50\) and \(N=1000\). We then have \(p=0.95005\), and the bound for the negative binomial approximation to the negative hypergeometric cumulative distribution function is of the form
$$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} 0.035875 &{}\text {if}~x_0=0,\\ 0.051230 &{}\text {if}~x_0=1,2,\\ \frac{0.140829}{x_0} &{}\text {if}~x_0=3,...,50,\\ \end{array}\right. } \end{aligned}$$which also improves upon the result of Jaioun and Teerapabolarn [2]:
$$\begin{aligned} d_{C_{x_0}}(X,NB)\le 0.051230 \end{aligned}$$for every \(x_0\in \{0,1,...,50\}\).
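The piecewise values in the first example can be reproduced with a short script implementing Corollary 3.1. This is a sketch in our own notation: the function name `nb_cdf_bound` and the argument `second_term`, which carries \(\sum _{x\ge 1}\left| 1-\frac{z(x)p}{(r+x)q}\right| p_X(x)\), are not from the paper; for the negative hypergeometric case this sum collapses to \(\mu /m\) because \(z(x)p/((r+x)q)=(m-x)/m\).

```python
def nb_cdf_bound(x0, r, p, mu, sigma2, second_term):
    # Corollary 3.1, assuming (r+x)q - p z(x) keeps one sign on the support of X
    q = 1.0 - p
    err = abs(mu - sigma2 * p)             # |mu - sigma^2 p|
    if x0 == 0:
        return (r*q - (1.0 - p**r) * p) / (r * (r + 1) * q**2) * err
    return min(min((1.0 - p**(r + 1)) / ((r + 1) * q), 1.0 / x0) * err,
               second_term)

# negative hypergeometric, example 1: m = 30, r = 50, N = 500
m, r, N = 30, 50, 500
p = (N - m + 1) / (N + 1)
mu = m * r / (N - m + 1)
sigma2 = m * r * (N - m - r + 1) * (N + 1) / ((N - m + 1)**2 * (N - m + 2))
second = mu / m   # the sum term collapses to mu/m here

print(round(nb_cdf_bound(0, r, p, mu, sigma2, second), 6))   # ≈ 0.078912
print(round(nb_cdf_bound(2, r, p, mu, sigma2, second), 6))   # ≈ 0.106157
print(round(nb_cdf_bound(10, r, p, mu, sigma2, second), 6))  # ≈ 0.344111/10
```

The same function reproduces the tables in Sects. 4.2 and 4.3 once \(\mu \), \(\sigma ^2\), p, r and the collapsed sum term for those distributions are substituted.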
4.2 Pólya Distribution
Let X be a Pólya random variable with parameters N, m, n and c, whose probability mass function is given by
The mean and variance of X are \(\mu =\frac{mn}{N}\) and \(\sigma ^2=\frac{mn(N+cm)(N-n)}{N^2(N+c)}\), respectively. Using the relation in (2.2), we get \(z(x)=\frac{(n+cx)(m-x)}{N}\). Let \(p=\frac{N}{N+cm}\) and \(r=\frac{n}{c}\). We then have \((r+x)q-pz(x)=\frac{(n+cx)x}{N+cm}\ge 0\) for every \(x\in \{0,1,...,m\}\). Thus, by Corollary 3.1, we obtain
This bound is a good alternative criterion for approximating the Pólya cumulative distribution function with parameters N, m, n and c by the negative binomial cumulative distribution function with parameters \(\frac{n}{c}\) and \(\frac{N}{N+cm}\) when \(\frac{n}{N}\) is small and/or \(x_0\) increases.
Numerical results:
- 1.
Let \(m=50\), \(n=30\), \(c=3\) and \(N=500\). We then have \(p=0.769231\), and the bound for the negative binomial approximation to the Pólya cumulative distribution function is as follows:
$$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} 0.053565 &{}\text {if}~x_0=0,\\ 0.06 &{}\text {if}~x_0=1,2,3,\\ \frac{0.196819}{x_0} &{}\text {if}~x_0=4,...,50,\\ \end{array}\right. } \end{aligned}$$which improves upon the result of Jaioun and Teerapabolarn [2]:
$$\begin{aligned} d_{C_{x_0}}(X,NB)\le 0.073208 \end{aligned}$$for every \(x_0\in \{0,1,...,50\}\).
- 2.
Let \(m=80\), \(n=60\), \(c=2\) and \(N=1000\). We then have \(p=0.862069\), and the bound for the negative binomial approximation to the Pólya cumulative distribution function is as follows:
$$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} 0.055159 &{}\text {if}~x_0=0,\\ 0.06 &{}\text {if}~x_0=1,...,4,\\ \frac{0.297006}{x_0} &{}\text {if}~x_0=5,...,80,\\ \end{array}\right. } \end{aligned}$$which also improves upon the result of Jaioun and Teerapabolarn [2]:
$$\begin{aligned} d_{C_{x_0}}(X,NB)\le 0.068764 \end{aligned}$$for every \(x_0\in \{0,1,...,80\}\).
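The constants in the first Pólya example can be recomputed directly from the moment formulas stated above (a sketch in our own variable names; the sum term of Corollary 3.1 collapses to \(\mu /m\) here because \(z(x)p/((r+x)q)=(m-x)/m\)).

```python
N, m, n, c = 500, 50, 30, 3   # Polya example 1
p = N / (N + c*m)             # = 0.769231...
r = n / c                     # = 10
q = 1.0 - p
mu = m * n / N
sigma2 = m*n * (N + c*m) * (N - n) / (N**2 * (N + c))

err = abs(mu - sigma2 * p)    # coefficient of the non-uniform term, ≈ 0.196819
second = mu / m               # sum term of Corollary 3.1 collapses to mu/m = 0.06
b0 = (r*q - (1.0 - p**r) * p) / (r * (r + 1) * q**2) * err   # x0 = 0 case, ≈ 0.053565
print(round(err, 6), round(second, 6), round(b0, 6))
```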
4.3 Beta Negative Binomial Distribution
Let X be a beta negative binomial random variable with parameters \(\alpha ,\beta \) and r, whose probability mass function can be written as
$$\begin{aligned} p_{X}(x)=\frac{\Gamma (r+x)}{x!\Gamma (r)}\frac{B(\alpha +r,\beta +x)}{B(\alpha ,\beta )},\quad x=0,1,2,..., \end{aligned}$$
where \(B(\cdot ,\cdot )\) denotes the beta function.
The mean and variance of X are \(\mu =\frac{r\beta }{\alpha -1}\) and \(\sigma ^2=\frac{r\beta (r+\alpha -1)(\alpha +\beta -1)}{(\alpha -2)(\alpha -1)^2}\), respectively. Following (2.2), we have \(z(x)=\frac{(r+x)(\beta +x)}{(\alpha -1)}\). Let \(p=\frac{\alpha -1}{\alpha +\beta -1}\). We then have \((r+x)q-pz(x)=-\frac{x(r+x)}{\alpha +\beta -1}\le 0\) for every \(x\in {\mathbb {N}}\cup \{0\}\). Thus, by Corollary 3.1, it follows that
The bound in this application tells us that the negative binomial cumulative distribution function with parameters r and \(\frac{\alpha -1}{\alpha +\beta -1}\) is a good estimate of the beta negative binomial cumulative distribution function with parameters \(\alpha ,\beta \) and r when \(\alpha \) is large relative to r and/or \(x_0\) increases.
Numerical results:
- 1.
Let \(\alpha =450,\beta =55\) and \(r=20\). We then have \(p=0.890873\), and the bound for the negative binomial approximation to the beta negative binomial cumulative distribution function can be written as
$$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} 0.031685 &{}\text {if}~x_0=0,\\ 0.044543 &{}\text {if}~x_0=1,2,\\ \frac{0.114839}{x_0} &{}\text {if}~x_0\ge 3,\\ \end{array}\right. } \end{aligned}$$which improves upon the result of Jaioun and Teerapabolarn [2]:
$$\begin{aligned} d_{C_{x_0}}(X,NB)\le 0.045685 \end{aligned}$$for every \(x_0\in {\mathbb {N}}\cup \{0\}\).
- 2.
Let \(\alpha =800,\beta =70\) and \(r=30\). We then have \(p=0.919448\), and the bound for the negative binomial approximation to the beta negative binomial cumulative distribution function can be written as
$$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} 0.026583 &{}\text {if}~x_0=0,\\ 0.037547 &{}\text {if}~x_0=1,2,\\ \frac{0.102101}{x_0} &{}\text {if}~x_0\ge 3,\\ \end{array}\right. } \end{aligned}$$which also improves upon the result of Jaioun and Teerapabolarn [2]:
$$\begin{aligned} d_{C_{x_0}}(X,NB)\le 0.037861 \end{aligned}$$for every \(x_0\in {\mathbb {N}}\cup \{0\}\).
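The constants in the first beta negative binomial example can likewise be recomputed from the stated moments (a sketch in our own variable names, with `a`, `b` standing in for \(\alpha ,\beta \); the sum term of Corollary 3.1 collapses to \(\mu /\beta \) here because \(\left| 1-\frac{z(x)p}{(r+x)q}\right| =x/\beta \)).

```python
a, b, r = 450, 55, 20         # beta negative binomial example 1 (alpha, beta, r)
p = (a - 1) / (a + b - 1)     # = 0.890873...
q = 1.0 - p
mu = r * b / (a - 1)
sigma2 = r*b * (r + a - 1) * (a + b - 1) / ((a - 2) * (a - 1)**2)

err = abs(mu - sigma2 * p)    # ≈ 0.114839
second = mu / b               # sum term collapses to mu/beta ≈ 0.044543
b0 = (r*q - (1.0 - p**r) * p) / (r * (r + 1) * q**2) * err   # x0 = 0 case, ≈ 0.031685
print(round(err, 6), round(second, 6), round(b0, 6))
```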
5 Conclusions
The non-uniform bound presented in this study is a new bound for the distance between the cumulative distribution function of a nonnegative integer-valued random variable and the negative binomial cumulative distribution function. It was obtained by applying Stein’s method and z-functions and setting the mean of the negative binomial random variable equal to the mean of the nonnegative integer-valued random variable. The accuracy of the approximation depends on \(x_0\): the bound decreases as \(x_0\) increases. The bound obtained in this study is better than that given in Jaioun and Teerapabolarn [2], in both theoretical and numerical terms. Therefore, the non-uniform bound obtained in this study is an appropriate criterion for measuring the accuracy of this approximation.
References
Brown, T.C., Phillips, M.J.: Negative binomial approximation with Stein’s method. Methodol. Comput. Appl. Probab. 1, 407–421 (1999)
Jaioun, K., Teerapabolarn, K.: A uniform bound on negative binomial approximation with \(w\)-functions. Appl. Math. Sci. 9, 2831–2841 (2015)
Kumar, A.N., Upadhye, N.S.: On perturbations of Stein operator. Commun. Stat. Theory Methods 46, 9284–9302 (2017)
Malingam, P., Teerapabolarn, K.: A pointwise negative binomial approximation by \(w\)-functions. Int. J. Pure Appl. Math. 69, 453–467 (2011)
Stein, C.: A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. Proc. Sixth Berkeley Symp. Math. Stat. Probab. 2, 583–602 (1972)
Teerapabolarn, K., Boondirek, A.: Negative binomial approximation with Stein’s method and Stein’s identity. Int. Math. Forum 5, 2541–2551 (2010)
Teerapabolarn, K.: An improved bound for negative binomial approximation with \(z\)-functions. AKCE Int. J. Graphs Comb. 14, 287–294 (2017)
Vellaisamy, P., Upadhye, N.S.: Compound negative binomial approximations for sums of random variables. Probab. Math. Stat. 29, 205–226 (2009)
Vellaisamy, P., Upadhye, N.S., Cekanavicius, V.: On negative binomial approximation. Theory Probab. Appl. 57, 97–109 (2013)
Wang, X., Xia, A.: On negative binomial approximation to \(k\)-runs. J. Appl. Probab. 45, 456–471 (2008)
Acknowledgements
The authors would like to thank the referees for helpful comments and suggestions which led to the present paper.
Jaioun, K., Panichkitkosolkul, W. & Teerapabolarn, K. A Non-uniform Bound on Negative Binomial Approximation via Stein’s Method and z-functions. Bull. Malays. Math. Sci. Soc. 43, 519–536 (2020). https://doi.org/10.1007/s40840-018-0697-7