1 Introduction

The probability approximation has been widely studied in probability theory and statistics. It is usually related to the law of large numbers, the law of small numbers, the law of the iterated logarithm, etc. In the past, mathematicians and statisticians have applied a number of methods to derive uniform and non-uniform bounds for measuring the accuracy of the corresponding probability approximation. One of the most useful methods is Stein’s method, a powerful and efficient technique for finding uniform and non-uniform bounds on the approximation of one probability distribution by another; these bounds can be expressed in terms of the distance between the two probability distributions. Stein [5] introduced this technique in order to give an explicit uniform bound for estimating the accuracy of the normal approximation, and the method was later extended to other distributions, including the Poisson, binomial, and negative binomial distributions. The current study focuses on the application of Stein’s method to the negative binomial approximation. Let NB be the negative binomial random variable with parameters \(r\in {\mathbb {R}}^+\) and \(p=1-q\in (0,1)\), whose probability mass function is of the form

$$\begin{aligned} p_{NB}(k)=\frac{\Gamma (r+k)p^{r}q^{k}}{k!\Gamma (r)},~~k\in {\mathbb {N}}\cup \{0\}. \end{aligned}$$
(1.1)

The mean and variance of NB are \(E(NB)=\frac{rq}{p}\) and \(Var(NB)=\frac{rq}{p^{2}}\), respectively. For \(r\in {\mathbb {N}}\), the distribution of NB can be interpreted as the distribution of the number of failures before the \(r{\mathrm{th}}\) success. Additionally, the geometric distribution is a special case of the negative binomial distribution when \(r=1\).
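As a quick sanity check of the pmf (1.1) and the stated moments, one can sum a truncated series numerically; a minimal Python sketch, where the parameter values \(r=2.5\), \(p=0.4\) and the truncation point are illustrative assumptions:

```python
from math import lgamma, log, exp

def nb_pmf(k, r, p):
    """Negative binomial pmf (1.1), computed via log-gamma for stability."""
    q = 1.0 - p
    return exp(lgamma(r + k) - lgamma(k + 1) - lgamma(r)
               + r * log(p) + k * log(q))

r, p = 2.5, 0.4          # illustrative parameters, r real and positive
K = 400                  # truncation point; the geometric tail beyond K is negligible
weights = [nb_pmf(k, r, p) for k in range(K)]
total = sum(weights)
mean = sum(k * w for k, w in enumerate(weights))
var = sum((k - mean) ** 2 * w for k, w in enumerate(weights))
print(round(total, 8), round(mean, 6), round(var, 6))
# ~ 1.0, E(NB) = rq/p = 3.75, Var(NB) = rq/p^2 = 9.375
```

Working through `lgamma` avoids the overflow of `math.gamma` at large arguments, which matters once \(r+k\) exceeds roughly 170.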

The first study of negative binomial approximation was conducted by Brown and Phillips [1]. They used Stein’s method to obtain a uniform bound for approximating the distribution of a sum of dependent Bernoulli random variables by a negative binomial distribution and used this result to approximate the Pólya distribution. Wang and Xia [10] used Stein’s method to obtain a uniform bound for approximating the distribution of the number of k-runs by a negative binomial distribution. Vellaisamy and Upadhye [8] used Kerstan’s method to obtain a uniform bound for approximating the distribution of a sum of independent geometric random variables by a negative binomial distribution. Vellaisamy et al. [9] presented a uniform bound for the total variation distance between the distribution of a sum of n independent nonnegative integer-valued random variables and a negative binomial distribution using Stein’s method. Kumar and Upadhye [3] used a probability generating function approach to find the Stein operator for the negative binomial distribution and applied the operator to obtain a uniform bound for the total variation distance between the distribution of a sum of n independent random variables and a negative binomial distribution; however, this result is not easy to compute. For approximating the distribution of a single nonnegative integer-valued random variable by a negative binomial distribution, the setting is as follows. Let X be a nonnegative integer-valued random variable with probability mass function \(p_{X}(x)>0\) for every \(x\in \mathcal {X}\), where \(\mathcal {X}\) is the support of X, and let the mean and variance of X be \(\mu \) and \(0<\sigma ^{2}<\infty \), respectively. In this setting, Teerapabolarn and Boondirek [6] used Stein’s method together with z-functions to obtain a uniform bound that can be expressed in the following form:

$$\begin{aligned} d_{A}(X,NB)\le \frac{1-p^{r}}{rq}E\left| (r+X)q-pz(X)\right| +\left| \frac{rq}{p}-\mu \right| \end{aligned}$$
(1.2)

and for \(\mu =\frac{rq}{p}\),

$$\begin{aligned} d_{A}(X,NB)\le \frac{1-p^{r}}{rq}E\left| (r+X)q-pz(X)\right| \end{aligned}$$
(1.3)

for every \(A\subseteq {\mathbb {N}}\cup \{0\}\), where \(d_{A}(X,NB)=\left| P(X\in A)-P(NB\in A)\right| \) is the distance between the distribution of X and a negative binomial distribution and \(z(x)=\frac{1}{p_{X}(x)}\sum _{k=0}^{x}(\mu -k)p_{X}(k)\) is the z-function associated with the random variable X. It can be seen that the bound in the case of \(\mu =\frac{rq}{p}\) is sharper than that in the case of \(\mu \ne \frac{rq}{p}\). Teerapabolarn [7] used the same tools to improve the uniform bound in (1.3) and produce a sharper result:

$$\begin{aligned} d_{A}(X,NB)&\le \min \Bigg \{\frac{1-p^{r+1}}{(r+1)q}E\left| (r+X)q-pz(X)\right| ,\nonumber \\&\quad \sum _{x\in \mathcal {X}\backslash \{0\}}\left| 1-\frac{z(x)p}{(r+x)q}\right| p_X(x)\Bigg \} \end{aligned}$$
(1.4)

for every \(A\subseteq {\mathbb {N}}\cup \{0\}\). For \(A=C_{x_0}\) and \(\mu =\frac{rq}{p}\), where \(C_{x_0}=\{0,...,x_0\}\) for \(x_0\in {\mathbb {N}}\cup \{0\}\), Jaioun and Teerapabolarn [2] used Stein’s method and w-functions to obtain a uniform bound for the approximation. Because \(z(x)=\sigma ^2w(x)\) for every \(x\in \mathcal {X} \), the result of Jaioun and Teerapabolarn [2] can be written as

$$\begin{aligned} d_{C_{x_0}}(X,NB)\le \frac{1-p^{r+1}}{(r+1)q}E\left| (r+X)q-pz(X)\right| \end{aligned}$$
(1.5)

for every \(x_0\in {\mathbb {N}}\cup \{0\}\), where \(d_{C_{x_0}}(X,NB)=\left| P(X\le x_0)-P(NB\le x_0)\right| \) is the distance between the cumulative distribution function of X and the negative binomial cumulative distribution function. For \(A=\{x_0\}\), \(x_0\in {\mathbb {N}}\cup \{0\}\), and \(r\ge 1\), Malingam and Teerapabolarn [4] used the same tools as Jaioun and Teerapabolarn [2] to give a non-uniform bound for the point metric between the distributions of X and NB.

Note that the bound in (1.5) does not depend on \(x_0\); it remains the same as \(x_0\in {\mathbb {N}}\cup \{0\}\) varies. This may not be sufficient to measure the accuracy of the approximation, and in this situation a new bound is required. The aim of this study is to determine a non-uniform bound that improves the result in (1.5).

We use Stein’s method and z-functions to obtain the main result of this study; these tools are discussed in Sect. 2. In Sect. 3, we use them to derive the new bound. In Sect. 4, some applications are provided to illustrate the main result. Our conclusions are presented in the last section.

2 Method

Teerapabolarn and Boondirek [6] gave the z-function associated with a nonnegative integer-valued random variable X as follows:

$$\begin{aligned} z(x)p_{X}(x) =\sum _{j=0}^{x}(\mu -j)p_{X}(j),~x\in \mathcal {X}. \end{aligned}$$
(2.1)

A recurrence form of (2.1) can be expressed as

$$\begin{aligned} z(0)=\mu ,~z(x)=\frac{z(x-1)p_{X}(x-1)}{p_{X}(x)}+\mu -x,~x\in \mathcal {X}\backslash \{0\}. \end{aligned}$$
(2.2)

Furthermore, for any function \(f:{\mathbb {N}}\cup \{0\}\rightarrow {\mathbb {R}}\) satisfying \(E\left| z(X)\Delta f(X)\right| <\infty \), we have

$$\begin{aligned} E\left[ (X-\mu )f(X)\right] =E\left[ z(X)\Delta f(X)\right] , \end{aligned}$$
(2.3)

where \(\Delta f(X)=f(X+1)-f(X)\).
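The definition (2.1), the recurrence (2.2), and the identity (2.3) can be illustrated on a small finite-support example; the pmf below is a hypothetical toy distribution chosen only for this purpose (note that z vanishes at the top of a finite support, so (2.3) never requires f beyond it):

```python
# Hypothetical pmf on {0, 1, 2, 3}, used only to illustrate (2.1)-(2.3).
pmf = [0.1, 0.4, 0.3, 0.2]
mu = sum(j * pj for j, pj in enumerate(pmf))          # mean of X

# z via the defining sum (2.1): z(x) p_X(x) = sum_{j<=x} (mu - j) p_X(j)
z = [sum((mu - j) * pmf[j] for j in range(x + 1)) / pmf[x]
     for x in range(len(pmf))]

# The recurrence (2.2) reproduces the same values
zr = [mu]
for x in range(1, len(pmf)):
    zr.append(zr[-1] * pmf[x - 1] / pmf[x] + mu - x)
assert all(abs(a - b) < 1e-12 for a, b in zip(z, zr))

# The covariance identity (2.3) for an arbitrary test function f;
# z(3) = 0 at the top of the support, so the value f(4) never matters.
f = [0.0, 2.0, -1.0, 0.5, 7.0]
lhs = sum((x - mu) * f[x] * pmf[x] for x in range(4))
rhs = sum(z[x] * (f[x + 1] - f[x]) * pmf[x] for x in range(4))
print(abs(lhs - rhs) < 1e-12)  # True
```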

Brown and Phillips [1] adapted Stein’s original method and applied it to the negative binomial distribution. They modified Stein’s equation for a negative binomial distribution with parameters \(r\in {\mathbb {R}}^+\) and \(p=1-q\in (0,1)\) as follows:

$$\begin{aligned} h(x)-N_{r,p}(h)=(1-p)(r+x)f(x+1)-xf(x) \end{aligned}$$
(2.4)

where \(N_{r,p}(h)=E[h(NB)] \), and f and h are bounded real-valued functions defined on \({\mathbb {N}}\cup \{0\}\).

For \(A\subseteq {\mathbb {N}}\cup \{0\}\), let \(h_{A}:{\mathbb {N}}\cup \{0\} \longrightarrow {\mathbb {R}}\) be the indicator function

$$\begin{aligned} h_{A}(x)= {\left\{ \begin{array}{ll} 1 &{}\text {if}~x\in A,\\ 0 &{}\text {if}~x\notin A. \end{array}\right. } \end{aligned}$$
(2.5)

Following Brown and Phillips [1] and Teerapabolarn and Boondirek [6], the solution \(f_{A}\) of (2.4) is of the form

$$\begin{aligned} f_{A}(x)= {\left\{ \begin{array}{ll} \frac{x!\Gamma (r)}{\Gamma (r+x)xp^{r}q^{x}} \left[ N_{r,p}(h_{A\cap C_{x-1}}) - N_{r,p}(h_A) N_{r,p}(h_{C_{x-1}}) \right] &{}\text {if}~x\ge 1,\\ 0&{}\text {if}~x= 0. \end{array}\right. } \end{aligned}$$
(2.6)

For \(A=\{x_{0}\}\) with \(x_{0}\in {\mathbb {N}}\cup \{0\}\), writing \(h_{x_{0}}=h_{\{x_{0}\}}\) and \(f_{x_{0}}=f_{\{x_{0}\}}\), the solution in (2.6) can be rewritten as

$$\begin{aligned} f_{x_{0}}(x)= {\left\{ \begin{array}{ll} -\frac{x!\Gamma (r)}{\Gamma (r+x)xp^{r}q^{x}} N_{r,p}(h_{ x_{0}})N_{r,p}(h_{ C_{x-1}})&{}\text {if}~x\le x_{0},\\ \frac{x!\Gamma (r)}{\Gamma (r+x)xp^{r}q^{x}} N_{r,p}(h_{x_{0}})N_{r,p}(1-h_{ C_{x-1}})&{}\text {if}~x>x_{0},\\ 0&{}\text {if}~x=0, \end{array}\right. } \end{aligned}$$
(2.7)

and Malingam and Teerapabolarn [4] showed that

$$\begin{aligned} \Delta f_{x_{0}}(x)=f_{x_{0}}(x+1)-f_{x_{0}}(x) {\left\{ \begin{array}{ll} <0&{}\text {if}~x\ne x_{0},\\ >0&{}\text {if}~x=x_{0}. \end{array}\right. } \end{aligned}$$
(2.8)

Similarly, for \(A=C_{x_{0}}\) and \( x_{0}\in {\mathbb {N}}\cup \{0\}\), the solution \(f_{C_{x_{0}}}\) given by (2.6) is of the form

$$\begin{aligned} f_{C_{x_{0}}}(x)= {\left\{ \begin{array}{ll} \frac{x!\Gamma (r)}{\Gamma (r+x)xp^{r}q^{x}}N_{r,p}(h_{C_{x-1}})N_{r,p}(1-h_{C_{x_{0}}})&{}\text {if}~x\le x_{0},\\ \frac{x!\Gamma (r)}{\Gamma (r+x)xp^{r}q^{x}}N_{r,p}(h_{C_{x_{0}}})N_{r,p}(1-h_{C_{x-1}})&{}\text {if}~x>x_{0},\\ 0&{}\text {if}~x=0, \end{array}\right. } \end{aligned}$$
(2.9)

and we also have

$$\begin{aligned} \Delta f_{C_{x_{0}}}(x)= {\left\{ \begin{array}{ll} \frac{x!\Gamma (r)N_{r,p}(1-h_{C_{x_{0}}})\left[ \frac{x}{q} N_{r,p}(h_{ C_{x}})-(r+x)N_{r,p}(h_{C_{x-1}})\right] }{\Gamma (r+x+1)xp^{r}q^{x}}&{}\text {if}~x\le x_{0},\\ \frac{x!\Gamma (r)N_{r,p}(h_{C_{x_{0}}})\left[ \frac{x}{q} N_{r,p}(1-h_{ C_{x}})-(r+x)N_{r,p}(1-h_{ C_{x-1}})\right] }{\Gamma (r+x+1)xp^{r}q^{x}}&{}\text {if}~x>x_{0}. \end{array}\right. } \end{aligned}$$
(2.10)

Note that \(\Delta f_{C_{x_{0}}}(x)=\sum _{k=0}^{x_0} \Delta f_{k}(x)\) and \( f_{C_{x_{0}}}(x)=\sum _{k=0}^{x_0} f_{k}(x)\).
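The sign pattern in (2.8) can be checked numerically from the closed form (2.7); a short sketch, where the parameter choices \(r=2.5\), \(p=0.4\), \(x_0=3\) are illustrative assumptions:

```python
from math import lgamma, log, exp

r, p = 2.5, 0.4
q = 1.0 - p

def pmf(k):      # negative binomial pmf (1.1)
    return exp(lgamma(r + k) - lgamma(k + 1) - lgamma(r)
               + r * log(p) + k * log(q))

def cdf(k):      # N_{r,p}(h_{C_k}); cdf(-1) = 0 by convention
    return sum(pmf(j) for j in range(k + 1))

def coef(x):     # x! Gamma(r) / (Gamma(r + x) x p^r q^x)
    return exp(lgamma(x + 1) + lgamma(r) - lgamma(r + x)) / (x * p**r * q**x)

def f_point(x0, x):   # the solution (2.7) for A = {x0}
    if x == 0:
        return 0.0
    if x <= x0:
        return -coef(x) * pmf(x0) * cdf(x - 1)
    return coef(x) * pmf(x0) * (1.0 - cdf(x - 1))

x0 = 3
deltas = [f_point(x0, x + 1) - f_point(x0, x) for x in range(15)]
print([d > 0 for d in deltas])  # True exactly at index x = x0, as in (2.8)
```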

The following lemma gives two bounds for \(\Delta f_{x_{0}}\) and \(\Delta f_x\), which are directly obtained by combining the bounds in Jaioun and Teerapabolarn [2] and Malingam and Teerapabolarn [4].

Lemma 2.1

For \(x_{0},x\in {\mathbb {N}}\), we have the following:

$$\begin{aligned} \Delta f_x(x)\le \min \left\{ \frac{1-p^{r+1}}{(r+1)q},\frac{1}{x}\right\} . \end{aligned}$$
(2.11)

Lemma 2.2

Let \(x_{0}\in {\mathbb {N}}\cup \{0\}\) and \(x\in {\mathbb {N}}\). The following inequalities then hold:

$$\begin{aligned} \Delta f_{C_{x_{0}}}(x) {\left\{ \begin{array}{ll}>0&{}\text {if}~x\le x_{0},\\ <0&{}\text {if}~x>x_{0}. \end{array}\right. } \end{aligned}$$
(2.12)

Proof

For \(x\le x_{0}\), we have

$$\begin{aligned} \Delta f_{C_{x_{0}}}(x)&=\frac{x!\Gamma (r)N_{r,p}(1-h_{C_{x_{0}}})}{\Gamma (r+x+1)xp^{r}q^{x}}\left[ \frac{x}{q} N_{r,p}(h_{C_{x}})-(r+x)N_{r,p}(h_{C_{x-1}})\right] \\&=\frac{x!\Gamma (r)N_{r,p}(1-h_{C_{x_{0}}})}{\Gamma (r+x)xq^{x}}\Bigg [\frac{x}{q(r+x)}\sum _{k=0}^{x}\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}-\sum _{k=0}^{x-1}\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}\Bigg ]\\&>0, \end{aligned}$$

where the last inequality follows from Lemma 2.1 of Teerapabolarn [7]. For \(x>x_{0}\), we obtain

$$\begin{aligned} \Delta f_{C_{x_{0}}}(x)&=\frac{x!\Gamma (r)N_{r,p}(h_{ C_{x_{0}}})}{\Gamma (r+x+1)xq^{x}}\Bigg [\frac{x}{q}\sum _{k=x+1}^{\infty }\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}-(r+x)\sum _{k=x}^{\infty }\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}\Bigg ]\\&=\frac{x!\Gamma (r)N_{r,p}(h_{ C_{x_{0}}})}{\Gamma (r+x+1)xq^{x}}\Bigg [x\sum _{k=x+1}^{\infty }\frac{(r+k-1)\Gamma (r+k-1)q^{k-1}}{k(k-1)!\Gamma (r)}\\&\quad -(r+x)\sum _{k=x}^{\infty }\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}\Bigg ]\\&=\frac{x!\Gamma (r)N_{r,p}(h_{ C_{x_{0}}})}{\Gamma (r+x+1)xq^{x}} \Bigg [x\sum _{k=x}^{\infty }\frac{(r+k)\Gamma (r+k)q^{k}}{(k+1)k!\Gamma (r)}\\&\quad -(r+x)\sum _{k=x}^{\infty }\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}\Bigg ]\\&=\frac{x!\Gamma (r)N_{r,p}(h_{C_{x_{0}}})}{\Gamma (r+x+1)xq^{x}}\sum _{k=x}^{\infty }\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}\Bigg [\frac{r(x-k)-(r+x)}{k+1}\Bigg ]\\&<0. \end{aligned}$$

Hence, the inequalities in (2.12) hold. \(\square \)

Lemma 2.3

For \(x, k\in {\mathbb {N}}\), \(n \in \{1,...,x\}\) and \(r\in \mathbb {R^{+}}\), we have the following:

$$\begin{aligned} {\textit{(1)}}&~\sum _{i=0}^{k}\frac{\Gamma (r+i)}{i!\Gamma (r)}=\frac{\Gamma (r+k+1)}{k!\Gamma (r+1)}. \end{aligned}$$
(2.13)
$$\begin{aligned} {\textit{(2)}}&~\sum _{i=1}^{k}\frac{\Gamma (r+i)}{(i-1)!\Gamma (r)}=\frac{r\Gamma (r+k+1)}{(k-1)!\Gamma (r+2)}. \end{aligned}$$
(2.14)
$$\begin{aligned} {\textit{(3)}}&~\sum _{i=1}^{n}\frac{i\Gamma (r+x-i)}{(x-i)!\Gamma (r)}\left[ \frac{2x-(i-1)(r+1)}{x-i+1}\right] =\frac{n(n+1)\Gamma (r+x-n)}{(x-n)!\Gamma (r)}. \end{aligned}$$
(2.15)

Proof

All results in (2.13)−(2.15) follow by mathematical induction. \(\square \)
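Although the identities are proved by induction, they are easy to spot-check numerically; a small sketch in which the values \(r=2.5\), \(k=7\), \(x=10\) are arbitrary test choices:

```python
from math import lgamma, exp, factorial

def G(r, a):
    """Gamma(r + a) / Gamma(r), computed via log-gamma."""
    return exp(lgamma(r + a) - lgamma(r))

r, k, x = 2.5, 7, 10

# (2.13): sum_{i<=k} Gamma(r+i)/(i! Gamma(r)) = Gamma(r+k+1)/(k! Gamma(r+1))
lhs1 = sum(G(r, i) / factorial(i) for i in range(k + 1))
rhs1 = G(r, k + 1) / (factorial(k) * r)              # Gamma(r+1) = r Gamma(r)

# (2.14): sum_{1<=i<=k} Gamma(r+i)/((i-1)! Gamma(r)) = r Gamma(r+k+1)/((k-1)! Gamma(r+2))
lhs2 = sum(G(r, i) / factorial(i - 1) for i in range(1, k + 1))
rhs2 = G(r, k + 1) / (factorial(k - 1) * (r + 1))    # Gamma(r+2) = r(r+1) Gamma(r)

# (2.15), checked for every n in {1, ..., x}
for n in range(1, x + 1):
    lhs3 = sum(i * G(r, x - i) / factorial(x - i)
               * (2 * x - (i - 1) * (r + 1)) / (x - i + 1)
               for i in range(1, n + 1))
    rhs3 = n * (n + 1) * G(r, x - n) / factorial(x - n)
    assert abs(lhs3 - rhs3) < 1e-9 * abs(rhs3)

print(abs(lhs1 - rhs1) < 1e-9 * rhs1, abs(lhs2 - rhs2) < 1e-9 * rhs2)  # True True
```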

Lemma 2.4

For \(x_{0}\in {\mathbb {N}}\), \(\Delta f_{C_{x_{0}}}\) is an increasing function in \(x\in \{1,...,x_{0}\}\).

Proof

Let \(\Delta ^{2}f_{C_{x_{0}}}(x)=\Delta f_{C_{x_{0}}}(x+1)-\Delta f_{C_{x_{0}}}(x)\). We have to show that \(\Delta ^{2}f_{C_{x_{0}}}(x)>0\) for every \(x\in \{1,...,x_0-1\}\).

Following (2.10), we obtain

$$\begin{aligned}&\Delta ^{2}f_{C_{x_{0}}}(x)\nonumber \\&\quad =\frac{x!\Gamma (r)N_{r,p}(1-h_{ C_{x_{0}}})}{\Gamma (r+x+2)q^{x+1}}\biggr [\frac{x+1}{q}\sum _{k=0}^{x+1}\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}-(r+x+1)\sum _{k=0}^{x}\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}\biggr ]\nonumber \\&\qquad -\frac{(x-1)!\Gamma (r)N_{r,p}(1-h_{ C_{x_{0}}})}{\Gamma (r+x+1)q^{x}}\biggr [\frac{x}{q}\sum _{k=0}^{x}\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}-(r+x)\sum _{k=0}^{x-1}\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}\biggr ]\nonumber \\&\quad =\frac{(x-1)!\Gamma (r)N_{r,p}(1-h_{ C_{x_{0}}})}{\Gamma (r+x+2)q^{x+2}}\biggl [x(x+1)\sum _{k=0}^{x+1}\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}-2x(r+x+1)\nonumber \\&\qquad \times \sum _{k=0}^{x}\frac{\Gamma (r+k)q^{k+1}}{k!\Gamma (r)}+(r+x)(r+x+1)\sum _{k=0}^{x-1}\frac{\Gamma (r+k)q^{k+2}}{k!\Gamma (r)}\biggr ]. \end{aligned}$$
(2.16)

Let \(F(q)=x(x+1)\sum _{k=0}^{x+1}\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}-2x(r+x+1)\sum _{k=0}^{x}\frac{\Gamma (r+k)q^{k+1}}{k!\Gamma (r)}+(r+x)(r+x+1)\sum _{k=0}^{x-1}\frac{\Gamma (r+k)q^{k+2}}{k!\Gamma (r)}\). We then have to show that \(F(q)>0\). Observe that F(q) is a polynomial in q of degree \(x+1\), of the form

$$\begin{aligned} F(q)=S_{x+1}q^{x+1}+\cdots +S_{2}q^2+S_{1}q+S_{0}, \end{aligned}$$
(2.17)

where

$$\begin{aligned} S_{x-(j-1)}&=\frac{x(x+1)\Gamma (r+x-j+1)}{(x-j+1)!\Gamma (r)}-\frac{2x(r+x+1)\Gamma (r+x-j)}{(x-j)!\Gamma (r)}\\&~~~~+\frac{(r+x)(r+x+1)\Gamma (r+x-j-1)}{(x-j-1)!\Gamma (r)} \end{aligned}$$

for \(j=0,1,...,x-1\), \(S_{1}=rx(x+1)-2x(r+x+1)\) and \(S_{0}=x(x+1)\).

By the factor theorem, \(q-\alpha \) is a factor of F(q) if and only if \(F(\alpha )=0\). We shall show that \(F(1)=0\). From F(q) as noted above, we have

$$\begin{aligned} F(1)&=x(x+1)\sum _{k=0}^{x+1}\frac{\Gamma (r+k)}{k!\Gamma (r)}-2x(r+x+1)\sum _{k=0}^{x}\frac{\Gamma (r+k)}{k!\Gamma (r)}\nonumber \\&\quad +(r+x)(r+x+1)\sum _{k=0}^{x-1}\frac{\Gamma (r+k)}{k!\Gamma (r)}\nonumber \\&=x(x+1)\frac{\Gamma (r+x+2)}{(x+1)!\Gamma (r+1)}-2x(r+x+1)\frac{\Gamma (r+x+1)}{x!\Gamma (r+1)}\nonumber \\&~~~~+(r+x)(r+x+1)\frac{\Gamma (r+x)}{(x-1)!\Gamma (r+1)}\text {~~[by } (2.13)]\nonumber \\&=\frac{\Gamma (r+x+2)}{(x-1)!\Gamma (r+1)}-2\frac{\Gamma (r+x+2)}{(x-1)!\Gamma (r+1)}+\frac{\Gamma (r+x+2)}{(x-1)!\Gamma (r+1)}=0.\nonumber \\ \end{aligned}$$
(2.18)

Thus, by (2.18), \(q-1\) is a factor of F(q), so that \(F(q)=(q-1)G(q)\), where G(q) is the quotient. Because \(q-1<0\) for \(q\in (0,1)\), if \(G(q)<0\), then \(F(q)>0\). It is seen that

$$\begin{aligned} G(q)=\frac{F(q)}{q-1}=U_{x}q^{x}+U_{x-1}q^{x-1}+\cdots +U_{x-j}q^{x-j}+\cdots +U_{1}q+U_{0}, \end{aligned}$$
(2.19)

where \(U_{x-j}=\sum _{k=0}^jS_{x-(k-1)}\) for \(j=0,1,...,x\). Using (2.19), it is not easy to check directly that \(G(q)<0\). However, a standard property of polynomials states that, if \(q-\alpha \) is a factor of F(q), then \((q-\alpha )^2\) is a factor of F(q) if and only if \(q-\alpha \) is a factor of both F(q) and \(F'(q)\), where \(F'(q)\) is the derivative of F(q) with respect to q. Accordingly, we have to show that \(q-1\) is a factor of \(F'(q)\), that is, \(F'(1)=0\). Because

$$\begin{aligned} F'(q)&=x(x+1)\sum _{k=1}^{x+1}\frac{\Gamma (r+k)q^{k-1}}{(k-1)!\Gamma (r)}-2x(r+x+1)\left[ \sum _{k=1}^{x}\frac{\Gamma (r+k)q^{k}}{(k-1)!\Gamma (r)}\right. \nonumber \\&\quad +\left. \sum _{k=0}^{x}\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}\right] +(r+x)(r+x+1)\left[ \sum _{k=1}^{x-1}\frac{\Gamma (r+k)q^{k+1}}{(k-1)!\Gamma (r)}\right. \nonumber \\&\quad +\left. \sum _{k=0}^{x-1}\frac{2\Gamma (r+k)q^{k+1}}{k!\Gamma (r)}\right] , \end{aligned}$$
(2.20)

we obtain

$$\begin{aligned} F'(1)&=x(x+1)\sum _{k=1}^{x+1}\frac{\Gamma (r+k)}{(k-1)!\Gamma (r)}-2x(r+x+1)\left[ \sum _{k=1}^{x}\frac{\Gamma (r+k)}{(k-1)!\Gamma (r)}\right. \nonumber \\&\quad +\left. \sum _{k=0}^{x}\frac{\Gamma (r+k)}{k!\Gamma (r)}\right] +(r+x)(r+x+1)\left[ \sum _{k=1}^{x-1}\frac{\Gamma (r+k)}{(k-1)!\Gamma (r)}\right. \nonumber \\&\quad +\left. \sum _{k=0}^{x-1}\frac{2\Gamma (r+k)}{k!\Gamma (r)}\right] \nonumber \\&=x(x+1)\frac{r\Gamma (r+x+2)}{x!\Gamma (r+2)}-2x(r+x+1)\left[ \frac{r\Gamma (r+x+1)}{(x-1)!\Gamma (r+2)}\right. \nonumber \\&\quad +\left. \frac{\Gamma (r+x+1)}{x!\Gamma (r+1)}\right] +(r+x)(r+x+1)\left[ \frac{r\Gamma (r+x)}{(x-2)!\Gamma (r+2)}\right. \nonumber \\&\quad +\left. \frac{2\Gamma (r+x)}{(x-1)!\Gamma (r+1)}\right] \text { [by } (2.13) \hbox { and } (2.14)]\nonumber \\&=\frac{r(x+1)\Gamma (r+x+2)}{(x-1)!\Gamma (r+2)}-\frac{2xr\Gamma (r+x+2)}{(x-1)!\Gamma (r+2)}-\frac{2(r+1)\Gamma (r+x+2)}{(x-1)!\Gamma (r+2)}\nonumber \\&\quad +\frac{r(x-1)\Gamma (r+x+2)}{(x-1)!\Gamma (r+2)}+\frac{2(r+1)\Gamma (r+x+2)}{(x-1)!\Gamma (r+2)}\nonumber \\&=\frac{\Gamma (r+x+2)}{(x-1)!\Gamma (r+2)}\left[ r(x+1)-2rx-2(r+1)+r(x-1)+2(r+1)\right] \nonumber \\&=0. \end{aligned}$$
(2.21)

Therefore, by (2.18) and (2.21), \((q-1)^2\) is a factor of F(q), so that \(F(q)=(q-1)^2H(q)\), where H(q) is another quotient.

In the next step, we show that \(H(q)>0\), which implies \(F(q)=(q-1)^2H(q)>0\). Because \(H(q)=\frac{G(q)}{q-1}\), we get

$$\begin{aligned} H(q)=W_{x-1}q^{x-1}+W_{x-2}q^{x-2}+\cdots +W_{x-j}q^{x-j}+\cdots +W_{1}q + W_{0}, \end{aligned}$$
(2.22)

where \(W_{x-j}=U_{x}+U_{x-1}+\cdots +U_{x-(j-1)}\) for every \(j=0,1,...,x \). For \(1\le j\le x\), \(W_{x-j}\) can be expressed as

$$\begin{aligned} W_{x-j}&=U_{x}+U_{x-1}+\cdots +U_{x-(j-1)}\nonumber \\&=jS_{x+1}+(j-1)S_{x}+\cdots +S_{x-(j-2)}\nonumber \\&=\frac{\Gamma (r+x-1)}{(x-1)!\Gamma (r)}\left( \frac{2x}{x}\right) +\frac{2\Gamma (r+x-2)}{(x-2)!\Gamma (r)}\left[ \frac{2x-(r+1)}{x-1}\right] +\cdots \nonumber \\&\quad +\frac{j\Gamma (r+x-j)}{(x-j)!\Gamma (r)}\left[ \frac{2x-(j-1)(r+1)}{x-j+1}\right] \nonumber \\&=\sum _{i=1}^{j}\frac{i\Gamma (r+x-i)}{(x-i)!\Gamma (r)}\left[ \frac{2x-(i-1)(r+1)}{x-i+1}\right] \nonumber \\&=\frac{j(j+1)\Gamma (r+x-j)}{(x-j)!\Gamma (r)}\nonumber ~~\text {[by } (2.15)]\\&>0. \end{aligned}$$

This implies that \(H(q)>0\) and hence \(F(q)>0\), which gives \(\Delta ^{2}f_{C_{x_{0}}}(x)>0\) for every \(x\in \{1,...,x_0-1\}\). Hence, \(\Delta f_{C_{x_{0}}}\) is an increasing function in \(x\in \{1,...,x_{0}\}\). \(\square \)

Lemma 2.5

\(|\Delta f_{C_{0}}|\) is a decreasing function in \(x\in {\mathbb {N}}\).

Proof

Following the proof of the second case in (2.12), we have

$$\begin{aligned} |\Delta f_{C_{0}}(x)|&=\frac{x!\Gamma (r)N_{r,p}(h_{C_0})}{\Gamma (r+x+1)xq^{x}}\sum _{k=x}^{\infty }\frac{\Gamma (r+k)q^{k}}{k!\Gamma (r)}\Bigg [\frac{r(k+1-x)+x}{k+1}\Bigg ]\nonumber \\&=\frac{(x-1)!\Gamma (r)p^r}{\Gamma (r+x+1)}\sum _{k=x}^{\infty }\frac{\Gamma (r+k)q^{k-x}}{k!\Gamma (r)}\Bigg [\frac{r(k+1-x)+x}{k+1}\Bigg ]. \end{aligned}$$

To prove that \(|\Delta f_{C_{0}}|\) is a decreasing function, it suffices to show that

$$\begin{aligned} \sum _{k=x}^{\infty }\frac{\Gamma (r+k)q^{k-x}}{k!\Gamma (r)} \Bigg [\frac{r(k+1-x)+x}{k+1}\Bigg ]&>\frac{x}{r+x+1}\sum _{k=x+1}^{\infty }\Bigg [\frac{\Gamma (r+k)q^{k-x-1}}{k!\Gamma (r)}\nonumber \\&\quad \times \frac{r(k-x)+(x+1)}{k+1}\Bigg ] \end{aligned}$$
(2.23)

for every \(x\in {\mathbb {N}}\). It is seen that

$$\begin{aligned}&\sum _{k=x}^{\infty }\frac{\Gamma (r+k)q^{k-x}}{k!\Gamma (r)}\Bigg [\frac{r(k+1-x)+x}{k+1}\Bigg ]-\frac{x}{r+x+1}\sum _{k=x+1}^{\infty }\Bigg [\frac{\Gamma (r+k)q^{k-x-1}}{k!\Gamma (r)}\\&\qquad \times \frac{r(k-x)+(x+1)}{k+1}\Bigg ]\\&\quad =\sum _{k=x}^{\infty }\frac{\Gamma (r+k)q^{k-x}}{k!\Gamma (r)}\Bigg [\frac{r(k+1-x)+x}{k+1}\Bigg ]-\frac{x}{r+x+1}\sum _{k=x}^{\infty }\Bigg [\frac{\Gamma (r+k+1)q^{k-x}}{(k+1)!\Gamma (r)}\\&\qquad \times \frac{r(k+1-x)+(x+1)}{k+2}\Bigg ]\\&\quad =\sum _{k=x}^{\infty }\frac{\Gamma (r+k)q^{k-x}}{k!\Gamma (r)}\Bigg [\frac{r(k+1-x)+x}{k+1}-\frac{x(r+k)[r(k+1-x)+(x+1)]}{{(k+1)(k+2)(r+x+1)}}\Bigg ]\\&\quad =\sum _{k=x}^{\infty }\frac{\Gamma (r+k)q^{k-x}}{k!\Gamma (r)}\\&\qquad \times \frac{[r(k+1-x)+x][(k+2)(r+x+1)]-x(r+k)[r(k+1-x)+(x+1)]}{(k+1)(k+2)(r+x+1)}\\&\quad =\sum _{k=x}^{\infty }\frac{\Gamma (r+k)q^{k-x}}{k!\Gamma (r)}\\&\qquad \times \frac{r(k+1-x)[r(k-x)+k+2(r+x+1)]+x[r(k-x-1)+2(r+x+1)]}{(k+1)(k+2)(r+x+1)}\\&\quad >0, \end{aligned}$$

which yields (2.23). Therefore \(|\Delta f_{C_{0}}|\) is a decreasing function in \(x\in {\mathbb {N}}\). \(\square \)

The following lemma gives a non-uniform bound for \(|\Delta f_{C_{x_0}}|\).

Lemma 2.6

Let \(x_0\in {\mathbb {N}}\cup \{0\}\) and \(x\in {\mathbb {N}}\). We then have the following

  1. (1)

    For \(x_0=0\),

    $$\begin{aligned} \sup _{x}\left| \Delta f_{C_{x_0}}(x)\right| \le \frac{rq-(1-p^r)p}{r(r+1)q^2}. \end{aligned}$$
    (2.24)
  2. (2)

    For \(x_0\in {\mathbb {N}}\),

    $$\begin{aligned} \sup _{x}\left| \Delta f_{C_{x_0}}(x)\right|&\le \min \left\{ \frac{1-p^{r+1}}{(r+1)q},\frac{1}{x_0}\right\} \end{aligned}$$
    (2.25)

    and

    $$\begin{aligned} \sup _{x}\left| \Delta f_{C_{x_0}}(x)\right|&\le \frac{1}{(r+x)q}. \end{aligned}$$
    (2.26)

Proof

(1) By Lemma 2.5, we have

$$\begin{aligned} |\Delta f_{C_{0}}(x)|&\le |\Delta f_{C_{0}}(1)|\\&=f_0(1)-f_0(2)~~\text {(by } f_{C_0}=f_0)\\&=\frac{rq-(1-p^r)p}{r(r+1)q^2}. \end{aligned}$$

Thus, inequality (2.24) is obtained.

(2) The bound in (2.26) is obtained from Jaioun and Teerapabolarn [2], so we only need to show that the bound in (2.25) holds, as follows. For \(x_{0}\ge x\),

$$\begin{aligned} 0&\le \Delta f_{C_{x_{0}}}(x)~~\text {(by Lemma}~2.2)\nonumber \\&\le \Delta f_{C_{x_{0}}}(x_{0})~~\text {(by Lemma}~2.4)\nonumber \\&=\sum _{k=0}^{x_0}\Delta f_k(x_0)\nonumber \\&=\Delta f_0(x_0)+\cdots +\Delta f_{x_0}(x_0)\nonumber \\&\le \Delta f_{x_{0}}(x_{0})~~\text {(by } (2.8))\nonumber \\&\le \min \left\{ \frac{1-p^{r+1}}{(r+1)q},\frac{1}{x_{0}}\right\} ~~\text {(by Lemma}~2.1). \end{aligned}$$
(2.27)

For \(x_{0}<x\),

$$\begin{aligned} 0&\le -\Delta f_{C_{x_{0}}}(x)~~\text {(by Lemma}~2.2)\nonumber \\&=-\sum _{k=0}^{x_0}\Delta f_k(x)-\sum _{k=x_0+1}^{x}\Delta f_k(x)+\sum _{k=x_0+1}^{x}\Delta f_k(x)\nonumber \\&=-\sum _{k=0}^{x}\Delta f_k(x)+\Delta f_{x_0+1}(x)+\cdots +\Delta f_{x}(x)\nonumber \\&=-\Delta f_{C_{x}}(x)+\Delta f_{x_0+1}(x)+\cdots +\Delta f_{x}(x)\nonumber \\&\le \Delta f_{x}(x)~~\text {[by Lemma}~2.2 \hbox { and } (2.8)]\nonumber \\&\le \min \left\{ \frac{1-p^{r+1}}{(r+1)q},\frac{1}{x}\right\} ~~\text {(by Lemma}~2.1)\nonumber \\&\le \min \left\{ \frac{1-p^{r+1}}{(r+1)q},\frac{1}{x_{0}}\right\} ~~\text {(by } x_{0}<x). \end{aligned}$$
(2.28)

Hence, by (2.27) and (2.28), the bound in (2.25) holds. \(\square \)
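The bounds of Lemma 2.6 can likewise be verified numerically from the closed form (2.9); a sketch with illustrative parameters \(r=2.5\), \(p=0.4\) (the supremum is over \(x\in {\mathbb {N}}\), as in the lemma, and the search range is truncated since \(\Delta f_{C_{x_0}}(x)\) vanishes in the tail):

```python
from math import lgamma, log, exp

r, p = 2.5, 0.4
q = 1.0 - p

def pmf(k):      # negative binomial pmf (1.1)
    return exp(lgamma(r + k) - lgamma(k + 1) - lgamma(r)
               + r * log(p) + k * log(q))

def cdf(k):      # N_{r,p}(h_{C_k})
    return sum(pmf(j) for j in range(k + 1))

def coef(x):     # x! Gamma(r) / (Gamma(r + x) x p^r q^x)
    return exp(lgamma(x + 1) + lgamma(r) - lgamma(r + x)) / (x * p**r * q**x)

def f_cum(x0, x):     # the solution (2.9) for A = C_{x0}
    if x == 0:
        return 0.0
    if x <= x0:
        return coef(x) * cdf(x - 1) * (1.0 - cdf(x0))
    return coef(x) * cdf(x0) * (1.0 - cdf(x - 1))

def sup_delta(x0, xmax=40):   # sup over x in N of |Delta f_{C_{x0}}(x)|
    return max(abs(f_cum(x0, x + 1) - f_cum(x0, x)) for x in range(1, xmax))

# (2.24): for x0 = 0 the supremum is attained at x = 1
b0 = (r * q - (1 - p**r) * p) / (r * (r + 1) * q**2)
print(sup_delta(0) <= b0 + 1e-12)  # True

# (2.25): the bound for x0 >= 1
ok = all(sup_delta(x0) <= min((1 - p**(r + 1)) / ((r + 1) * q), 1 / x0) + 1e-12
         for x0 in range(1, 8))
print(ok)  # True
```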

3 Main Result

This section uses Stein’s method together with z-functions to determine a non-uniform bound for the distance between the cumulative distribution function of a nonnegative integer-valued random variable X and the negative binomial cumulative distribution function. The following theorem presents the main requirement.

Theorem 3.1

Given the above definitions, let \(\mu =\frac{rq}{p}\). We then have the following:

  1. 1.

    For \(x_0=0\),

    $$\begin{aligned} d_{C_{x_0}}(X,NB)\le \frac{rq-(1-p^r)p}{r(r+1)q^2}E\left| (r+X)q-pz(X)\right| . \end{aligned}$$
    (3.1)
  2. 2.

    For \(x_0\in \mathcal {X}\setminus \{0\}\),

    $$\begin{aligned} d_{C_{x_0}}(X,NB)&\le \min \Bigg \{\min \left\{ \frac{1-p^{r+1}}{(r+1)q},\frac{1}{x_0}\right\} E\left| (r+X)q-pz(X)\right| \Bigg .,\nonumber \\&\quad \Bigg .\sum _{x\in \mathcal {X}\setminus \{0\}}\left| 1-\frac{z(x)p}{(r+x)q}\right| p_X(x)\Bigg \} \end{aligned}$$
    (3.2)

Proof

By the proof given in Jaioun and Teerapabolarn [2], it follows that

$$\begin{aligned} d_{C_{x_0}}(X,NB)&=\left| rqE[\Delta f(X)]+qE[X\Delta f(X)]-pE[(X-\mu )f(X)]\right| , \end{aligned}$$

where \(f=f_{C_{x_0}}\) is defined in (2.9). Because \(E|z(X)\Delta f(X)|<\infty \), by (2.3) we obtain

$$\begin{aligned} d_{C_{x_0}}(X,NB)&=\left| rqE[\Delta f(X)]+qE[X\Delta f(X)]-pE[z(X)\Delta f(X)]\right| \nonumber \\&\le E \left[ |(r+X)q-pz(X)||\Delta f(X)|\right] . \end{aligned}$$

Because \((r+X)q-pz(X)=0\) when \(X=0\), we get

$$\begin{aligned} d_{C_{x_0}}(X,NB)&\le \sum _{x\in \mathcal {X}\setminus \{0\}}\left| (r+x)q-pz(x)\right| \left| \Delta f(x)\right| p_X(x)\nonumber \\&\le \sum _{x\in {\mathcal {X}}\setminus \{0\}}\left| (r+x)q-pz(x)\right| \sup _{x}\left| \Delta f(x)\right| p_X(x). \end{aligned}$$

For \(x_0=0\), by applying Lemma 2.6 (1), we have

$$\begin{aligned} d_{C_{x_0}}(X,NB)&\le \frac{rq-(1-p^r)p}{r(r+1)q^2}\sum _{x\in \mathcal {X}\setminus \{0\}}\left| (r+x)q-pz(x)\right| p_X(x)\nonumber \\&=\frac{rq-(1-p^r)p}{r(r+1)q^2}E\left| (r+X)q-pz(X)\right| . \end{aligned}$$

Thus, (3.1) holds. For \(x_0\in \mathcal {X}\setminus \{0\}\), by applying Lemma 2.6 (2), we have

$$\begin{aligned} d_{C_{x_0}}(X,NB)&\le \min \left\{ \frac{1-p^{r+1}}{(r+1)q},\frac{1}{x_0}\right\} E\left| (r+X)q-pz(X)\right| \end{aligned}$$
(3.3)

and

$$\begin{aligned} d_{C_{x_0}}(X,NB)&\le \sum _{x\in \mathcal {X}\setminus \{0\}}\left| \frac{(r+x)q-pz(x)}{(r+x)q}\right| p_X(x)\nonumber \\&=\sum _{x\in \mathcal {X}\setminus \{0\}}\left| 1-\frac{z(x)p}{(r+x)q}\right| p_X(x). \end{aligned}$$
(3.4)

Combining (3.3) and (3.4), the result of (3.2) is obtained. \(\square \)

Remark 3.1

For \(x_0=0\), the exact value of \(d_{C_{x_0}}(X,NB)\) is \(\left| p_X(0)-p^r\right| \), which can be used directly instead of the bound.

The following corollary is an immediate consequence that helps us to compute the bounds in Theorem 3.1.

Corollary 3.1

If \((r+x)q-pz(x)\ge 0\) for every \(x\in \mathcal {X}\) or \((r+x)q-pz(x)\le 0\) for every \(x\in \mathcal {X}\), then we have the following:

  1. 1.

    For \(x_0=0\),

    $$\begin{aligned} d_{C_{x_0}}(X,NB)\le \frac{rq-(1-p^r)p}{r(r+1)q^2}\left| \mu -\sigma ^2p\right| . \end{aligned}$$
    (3.5)
  2. 2.

    For \(x_0\in \mathcal {X}\backslash \{0\}\),

    $$\begin{aligned} d_{C_{x_0}}(X,NB)&\le \min \Bigg \{\min \left\{ \frac{1-p^{r+1}}{(r+1)q},\frac{1}{x_0}\right\} \left| \mu -\sigma ^2p\right| \Bigg .,\nonumber \\&~~~~\Bigg .\sum _{x\in \mathcal {X}\setminus \{0\}}\left| 1-\frac{z(x)p}{(r+x)q}\right| p_X(x)\Bigg \}. \end{aligned}$$
    (3.6)

Remark 3.2

Comparing the bound in Theorem 3.1 with that in (1.5) shows that the non-uniform bound in Theorem 3.1 is sharper than the uniform bound in (1.5).

4 Applications

In this section, we use the result from Theorem 3.1, or Corollary 3.1, to approximate cumulative distribution functions, including the negative hypergeometric, the Pólya and the beta negative binomial cumulative distribution functions. In addition, we give numerical results that measure the accuracy of each approximation.

4.1 Negative Hypergeometric Distribution

Let X be a negative hypergeometric random variable with parameters N, r and \(m~(<N)\), whose probability mass function is of the form

$$\begin{aligned} p_X(x)=\frac{{r+x-1\atopwithdelims ()x}{N-r-x\atopwithdelims ()m-x}}{{N\atopwithdelims ()m}}~, x\in \{0,1,...,m\}. \end{aligned}$$

The mean and variance of X are \(\mu =\frac{mr}{N-m+1}\) and \(\sigma ^2=\frac{mr(N-m-r+1)(N+1)}{(N-m+1)^2(N-m+2)}\), respectively. It follows from (2.2) that \(z(x)=\frac{(r+x)(m-x)}{N-m+1}\). Let \(p=\frac{N-m+1}{N+1}\). We then have \((r+x)q-pz(x)=\frac{x(r+x)}{N+1}\ge 0\) for every \(x\in \{0,1,...,m\}\). Therefore, by Corollary 3.1, we have

$$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} \frac{rm-(N-m+1)(1-p^r)}{m(N-m+2)p}&{}\text {if}~x_0=0,\\ \frac{r}{N-m+1}\min \Bigg \{\min \left\{ 1-p^{r+1},\frac{(r+1)q}{x_0}\right\} \frac{N+1}{N-m+2},1\Bigg \}&{}\text {if}~x_0>0. \end{array}\right. } \end{aligned}$$

This bound indicates that the approximation is accurate when \(\frac{r}{N-m+1}\) is small and/or \(x_0\) is large. That is, the negative binomial cumulative distribution function with parameters r and \(\frac{N-m+1}{N+1}\) can be used to approximate the negative hypergeometric cumulative distribution function with parameters N, r and m when \(\frac{r}{N-m+1}\) is small and/or \(x_0\) increases.
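The piecewise bound above is straightforward to evaluate; a minimal sketch (the helper name `nhg_bound` is ours) that reproduces the values reported in the numerical results below:

```python
def nhg_bound(N, r, m, x0):
    """Bound of Sect. 4.1 on d_{C_{x0}}(X, NB) for the negative
    hypergeometric distribution with parameters N, r, m."""
    p = (N - m + 1) / (N + 1)
    q = 1.0 - p
    if x0 == 0:
        return (r * m - (N - m + 1) * (1 - p**r)) / (m * (N - m + 2) * p)
    inner = min(1 - p**(r + 1), (r + 1) * q / x0) * (N + 1) / (N - m + 2)
    return r / (N - m + 1) * min(inner, 1.0)

# First numerical example below: m = 30, r = 50, N = 500
print(nhg_bound(500, 50, 30, 0))    # ~ 0.078912
print(nhg_bound(500, 50, 30, 3))    # ~ 0.106157
print(nhg_bound(500, 50, 30, 10))   # ~ 0.344111 / 10
```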

Numerical results:

  1. 1.

    Let \(m=30\), \(r=50\) and \(N=500\). We then have \(p=0.94012\), and the bound for the negative binomial approximation to the negative hypergeometric cumulative distribution function is of the form

    $$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} 0.078912 &{}\text {if}~x_0=0,\\ 0.106157 &{}\text {if}~x_0=1,2,3,\\ \frac{0.344111}{x_0} &{}\text {if}~x_0=4,...,30,\\ \end{array}\right. } \end{aligned}$$

    which improves upon the result of Jaioun and Teerapabolarn [2]:

    $$\begin{aligned} d_{C_{x_0}}(X,NB)\le 0.107847 \end{aligned}$$

    for every \(x_0\in \{0,1,...,30\}\).

  2. 2.

    Let \(m=50\), \(r=50\) and \(N=1000\). We then have \(p=0.95005\), and the bound for the negative binomial approximation to the negative hypergeometric cumulative distribution function is of the form

    $$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} 0.035875 &{}\text {if}~x_0=0,\\ 0.051230 &{}\text {if}~x_0=1,2,\\ \frac{0.140829}{x_0} &{}\text {if}~x_0=3,...,50,\\ \end{array}\right. } \end{aligned}$$

    which also improves upon the result of Jaioun and Teerapabolarn [2]:

    $$\begin{aligned} d_{C_{x_0}}(X,NB)\le 0.051230 \end{aligned}$$

    for every \(x_0\in \{0,1,...,50\}\).

4.2 Pólya Distribution

Let X be a Pólya random variable with parameters N, m, n and c, whose probability mass function is given by

$$\begin{aligned} p_X(x)=\frac{{\frac{n}{c}+x-1\atopwithdelims ()x}{\frac{N-n}{c}+m-x-1\atopwithdelims ()m-x}}{{\frac{N}{c}+m-1\atopwithdelims ()m}}~,x\in \{0,1,...,m\}. \end{aligned}$$

The mean and variance of X are \(\mu =\frac{mn}{N}\) and \(\sigma ^2=\frac{mn(N+cm)(N-n)}{N^2(N+c)}\), respectively. Using the relation in (2.2), we get \(z(x)=\frac{(n+cx)(m-x)}{N}\). Let \(p=\frac{N}{N+cm}\) and \(r=\frac{n}{c}\). We then have \((r+x)q-pz(x)=\frac{(n+cx)x}{N+cm}\ge 0\) for every \(x\in \{0,1,...,m\}\). Thus, by Corollary 3.1, we obtain

$$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} \frac{nm-N(1-p^{\frac{n}{c}})}{m(N+c)p}&{}\text {if}~x_0=0,\\ \frac{n}{N}\min \Bigg \{\min \left\{ 1-p^{\frac{n}{c}+1},\frac{(\frac{n}{c}+1)q}{x_0}\right\} \frac{N+cm}{N+c},1\Bigg \}&{}\text {if}~x_0>0. \end{array}\right. } \end{aligned}$$

This bound is a good alternative criterion for approximating the Pólya cumulative distribution function with parameters N, m, n and c by the negative binomial cumulative distribution function with parameters \(\frac{n}{c}\) and \(\frac{N}{N+cm}\) when \(\frac{n}{N}\) is small and/or \(x_0\) increases.
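As in Sect. 4.1, the bound can be evaluated directly; a minimal sketch (the helper name `polya_bound` is ours) matching the numerical results below:

```python
def polya_bound(N, m, n, c, x0):
    """Bound of Sect. 4.2 on d_{C_{x0}}(X, NB) for the Polya
    distribution with parameters N, m, n, c."""
    p = N / (N + c * m)
    q = 1.0 - p
    r = n / c
    if x0 == 0:
        return (n * m - N * (1 - p**r)) / (m * (N + c) * p)
    inner = min(1 - p**(r + 1), (r + 1) * q / x0) * (N + c * m) / (N + c)
    return n / N * min(inner, 1.0)

# First numerical example below: m = 50, n = 30, c = 3, N = 500
print(polya_bound(500, 50, 30, 3, 0))    # ~ 0.053565
print(polya_bound(500, 50, 30, 3, 1))    # 0.06
print(polya_bound(500, 50, 30, 3, 10))   # ~ 0.196819 / 10
```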

Numerical results:

  1. 1.

    Let \(m=50\), \(n=30\), \(c=3\) and \(N=500\). We then have \(p=0.769231\), and the bound for the negative binomial approximation to the Pólya cumulative distribution function is as follows:

    $$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} 0.053565 &{}\text {if}~x_0=0,\\ 0.06 &{}\text {if}~x_0=1,2,3,\\ \frac{0.196819}{x_0} &{}\text {if}~x_0=4,...,50,\\ \end{array}\right. } \end{aligned}$$

    which improves upon the result of Jaioun and Teerapabolarn [2]:

    $$\begin{aligned} d_{C_{x_0}}(X,NB)\le 0.073208 \end{aligned}$$

    for every \(x_0\in \{0,1,...,50\}\).

  2.

    Let \(m=80\), \(n=60\), \(c=2\) and \(N=1000\). We then have \(p=0.862069\), and the bound for the negative binomial approximation to the Pólya cumulative distribution function is as follows:

    $$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} 0.055159 &{}\text {if}~x_0=0,\\ 0.06 &{}\text {if}~x_0=1,...,4,\\ \frac{0.297006}{x_0} &{}\text {if}~x_0=5,...,80,\\ \end{array}\right. } \end{aligned}$$

    which also improves upon the result of Jaioun and Teerapabolarn [2]:

    $$\begin{aligned} d_{C_{x_0}}(X,NB)\le 0.068764 \end{aligned}$$

    for every \(x_0\in \{0,1,...,80\}\).

4.3 Beta Negative Binomial Distribution

Let X be a beta negative binomial random variable with parameters \(\alpha \), \(\beta \) and \(r\), whose probability mass function can be written as

$$\begin{aligned} p_X(x)=\frac{\Gamma (r+\alpha )\Gamma (x+\beta )\Gamma (r+x)\Gamma (\alpha +\beta )}{\Gamma (r+x+\alpha +\beta )\Gamma (r)\Gamma (x+1)\Gamma (\beta )\Gamma (\alpha )},~~x\in {\mathbb {N}}\cup \{0\}. \end{aligned}$$

The mean and variance of X are \(\mu =\frac{r\beta }{\alpha -1}\) and \(\sigma ^2=\frac{r\beta (r+\alpha -1)(\alpha +\beta -1)}{(\alpha -2)(\alpha -1)^2}\), respectively. Following (2.2), we have \(z(x)=\frac{(r+x)(\beta +x)}{\alpha -1}\). Let \(p=\frac{\alpha -1}{\alpha +\beta -1}\). We then have \((r+x)q-pz(x)=-\frac{x(r+x)}{\alpha +\beta -1}\le 0\) for every \(x\in {\mathbb {N}}\cup \{0\}\). Thus, by Corollary 3.1, it follows that

$$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} \frac{r\beta -(\alpha -1)(1-p^r)}{\beta (\alpha -2)p}&{}\text {if}~x_0=0,\\ \frac{r}{\alpha -1}\min \Bigg \{\min \left\{ 1-p^{r+1},\frac{(r+1)q}{x_0}\right\} \frac{\alpha +\beta -1}{\alpha -2},1\Bigg \}&{}\text {if}~x_0>0. \end{array}\right. } \end{aligned}$$
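As in the Pólya case, the sign condition used above follows by direct substitution, with \(q=\frac{\beta }{\alpha +\beta -1}\) and \(z(x)=\frac{(r+x)(\beta +x)}{\alpha -1}\):

$$\begin{aligned} (r+x)q-pz(x)&=\frac{(r+x)\beta }{\alpha +\beta -1}-\frac{\alpha -1}{\alpha +\beta -1}\cdot \frac{(r+x)(\beta +x)}{\alpha -1}\\ &=\frac{(r+x)\big (\beta -(\beta +x)\big )}{\alpha +\beta -1}=-\frac{x(r+x)}{\alpha +\beta -1}\le 0. \end{aligned}$$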

The bound in this application tells us that the negative binomial cumulative distribution function with parameters \(r\) and \(\frac{\alpha -1}{\alpha +\beta -1}\) is a good estimate of the beta negative binomial cumulative distribution function with parameters \(\alpha \), \(\beta \) and \(r\) when \(\alpha \) is large relative to \(r\) and/or \(x_0\) is large.
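This bound can likewise be evaluated numerically; the following Python sketch (the function name is ours) implements the two-case formula:

```python
def beta_nb_bound(alpha, beta, r, x0):
    """Non-uniform bound for the negative binomial approximation to the
    beta negative binomial CDF, as given by Corollary 3.1 in Section 4.3."""
    p = (alpha - 1) / (alpha + beta - 1)   # negative binomial parameter p
    q = 1 - p
    if x0 == 0:
        return (r * beta - (alpha - 1) * (1 - p ** r)) / (beta * (alpha - 2) * p)
    inner = min(1 - p ** (r + 1), (r + 1) * q / x0)
    return (r / (alpha - 1)) * min(inner * (alpha + beta - 1) / (alpha - 2), 1)
```

With the parameters of the first numerical example below (\(\alpha =450\), \(\beta =55\), \(r=20\)), this function reproduces the quoted values \(0.031685\) at \(x_0=0\) and \(0.044543\) at \(x_0=1,2\).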

Numerical results:

  1.

    Let \(\alpha =450\), \(\beta =55\) and \(r=20\). We then have \(p=0.890873\), and the bound for the negative binomial approximation to the beta negative binomial cumulative distribution function can be written as

    $$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} 0.031685 &{}\text {if}~x_0=0,\\ 0.044543 &{}\text {if}~x_0=1,2,\\ \frac{0.114839}{x_0} &{}\text {if}~x_0\ge 3,\\ \end{array}\right. } \end{aligned}$$

    which improves upon the result of Jaioun and Teerapabolarn [2]:

    $$\begin{aligned} d_{C_{x_0}}(X,NB)\le 0.045685 \end{aligned}$$

    for every \(x_0\in {\mathbb {N}}\cup \{0\}\).

  2.

    Let \(\alpha =800\), \(\beta =70\) and \(r=30\). We then have \(p=0.919448\), and the bound for the negative binomial approximation to the beta negative binomial cumulative distribution function can be written as

    $$\begin{aligned} d_{C_{x_0}}(X,NB)\le {\left\{ \begin{array}{ll} 0.026583 &{}\text {if}~x_0=0,\\ 0.037547 &{}\text {if}~x_0=1,2,\\ \frac{0.102101}{x_0} &{}\text {if}~x_0\ge 3,\\ \end{array}\right. } \end{aligned}$$

    which also improves upon the result of Jaioun and Teerapabolarn [2]:

    $$\begin{aligned} d_{C_{x_0}}(X,NB)\le 0.037861 \end{aligned}$$

    for every \(x_0\in {\mathbb {N}}\cup \{0\}\).

5 Conclusions

The non-uniform bound presented in this study is a new bound on the distance between the cumulative distribution function of a nonnegative integer-valued random variable and the negative binomial cumulative distribution function. It was obtained by applying Stein’s method and z-functions, and by setting the mean of the negative binomial random variable equal to the mean of the nonnegative integer-valued random variable. The accuracy of the approximation depends on \(x_0\): the bound decreases as \(x_0\) increases. The bound obtained in this study is sharper than that given in Jaioun and Teerapabolarn [2], in both the theoretical and the numerical results. Therefore, the non-uniform bound obtained in this study is an appropriate criterion for measuring the accuracy of this approximation.