1 Introduction

Studies of tail asymptotic properties, expressed in terms of simple functions, are of interest in their own right, and they often lead to approximations, error bounds for system performance, and computational algorithms. Such studies become even more important when closed-form or explicit solutions are not expected. On the one hand, except for a very limited number of basic queueing models, one cannot in general expect a simple closed-form or explicit solution for the stationary queue length or waiting time distribution when it exists; on the other hand, transform expressions for the distribution do exist in many cases, say the generating function (GF) of the stationary queue length distribution or the Laplace–Stieltjes transform (LST) of the stationary waiting time distribution. Mathematically, the transform of a distribution contains all information about the distribution, but for application purposes we often need to invert the transform, and once again, simple or closed-form expressions for the inversion do not exist in most cases. Many retrial queues are such examples: we do not expect, in general, closed-form or explicit solutions for the stationary distribution of the queue length process or the waiting time under a stability condition. However, expressions for the transform of the distribution are available, in terms of which a tail asymptotic analysis can still be carried out.

The focus of this paper is to carry out an asymptotic analysis for a type of retrial queue with batch arrivals, referred to as \(M^X/G/1\) retrial queues. Studies on retrial queues have been extensive over the past 30 years or so. Research outcomes and progress have been reported in more than 100 publications, due to the importance of retrial queues in applications, such as call centres and computer and telecommunication networks, among many others. Early surveys or books include Yang and Templeton [38], Falin [12], Kulkarni and Liang [26], Falin and Templeton [13], Artalejo [1, 2], while Artalejo and Gómez-Corral [4], Artalejo [3] and Kim and Kim [21] are among the more recent ones. Studies on tail behaviour can be classified into two categories: light tail and heavy tail. For light-tailed behaviour, references include Kim et al. [25], Liu and Zhao [29], Kim et al. [22], Liu et al. [27], Kim et al. [24], Kim and Kim [20], Artalejo and Phung-Duc [5], Kim [19], while for heavy-tailed behaviour, we refer to Shang et al. [35], Kim et al. [23], Yamamuro [37], Liu et al. [28], and Masuyama [32]. In addition, there are many references in the literature on asymptotic analysis for the corresponding non-retrial queues, e.g. Asmussen et al. [6], and Boxma and Denisov [9]. For more references, we would like to mention two excellent surveys: Borst et al. [8], and Boxma and Zwart [10].

Closely related to the model of interest in this paper are the following three references. In [35], it was proved that if the number of customers in the standard M/G/1 queue has a subexponential distribution, then the number of customers in the corresponding M/G/1 retrial queue has the same tail asymptotic behaviour (referred to as the equivalence theorem). In [37], the same result as in [35] was proved for the batch arrival \(M^X/G/1\) retrial queue under the condition that the batch size has a finite exponential moment, and in [32], the main result in [37] was extended to a BMAP/G/1 retrial queue.

In the literature on retrial queues with batch arrivals and general service times, the impact of the arrival batch on the tail equivalence property has not been sufficiently addressed. For example, in [37] for the \(M^X/G/1\) retrial queue, it is assumed that the arrival batch has a finite exponential moment, while in [32] for the BMAP/G/1 retrial queue, this light-tailed condition was relaxed to possibly moderately heavy-tailed batches (see, for example, [6] for a definition; that is, the batch size has a tail not heavier than \(e^{-\sqrt{x}}\)). The common feature in both situations is that, compared to the batch size, the tail of the service time is heavier. To the best of our knowledge, there is no report in the literature on the tail equivalence between a standard batch arrival queue and its corresponding retrial queue when the arrival batch size has a tail heavier than, or as heavy as, that of the service time.

To prove the equivalence theorem, it is common practice to first establish a stochastic decomposition, which writes the total number of customers in the system for the retrial queue as the sum of the total number of customers in the system for the corresponding (non-retrial) queue and another independent random variable. The equivalence theorem states that the total number of customers in the system for the retrial queue and the total number of customers in the system for the corresponding non-retrial queue have the same type of tail asymptotic behaviour. This has been done in the literature for the M/G/1 case and extended to the \(M^X/G/1\) and BMAP/G/1 cases under the assumption that the batch size has a lighter tail than the service time. In terms of the decomposition, this assumption implies that the additional variable is simply dominated by the total number of customers in the system of the standard (non-retrial) model; therefore, no detailed analysis of the additional variable is needed to establish the equivalence.

In this paper, we consider the \(M^X/G/1\) retrial queue, just as in [37]. The equivalence theorem is now proved for the case in which the batch size has a regularly varying tail, which is heavier than a moderately heavy tail, and without the assumption that the service time has a tail heavier than the batch size; this is a challenging problem that has not been considered in the literature. Another, more interesting, result (our main contribution in this paper, see Theorem 6.1) is an asymptotic characterization of the difference between two tail probabilities: the tail probability of the total number \(L_{\mu }\) of customers in the system for the \(M^X/G/1\) retrial queue and the tail probability of the total number \(L_{\infty }\) of customers in the corresponding standard (non-retrial) queue. The classical equivalence theorem in the literature states that \(P\{L_{\mu }>t\}\sim P\{L_{\infty }>t\}\), which is a first-order asymptotic result (i.e. the difference \(P\{L_{\mu }>t\}-P\{L_{\infty }>t\}\) is a small quantity dominated by \(P\{L_{\infty }>t\}\)). This result provides no information about the magnitude of the difference, which is the target of our main theorem. In this sense, our result can be viewed as a refinement of the first-order asymptotic result.

The exhaustive version of the stochastic decomposition approach was also used in a companion paper, Liu and Zhao [31], for a different model with a different focus. The model studied in [31] is not a batch arrival queueing system, so the \(M^X/G/1\) model considered in this paper is not a special case of [31]. Moreover, the proof of Theorem 1(1) in [31] relies on the important property established in Theorem 5.1 of the present paper. The same exhaustive decomposition approach has also been applied to a model with a Bernoulli schedule in [30].

Since the presentation of the exhaustive decomposition involves many details, and the asymptotic analysis and the refined asymptotic result require non-trivial technical efforts, we organize the analysis and proofs into the following steps for a better presentation.

Step 1: This is the starting point. We assume that an expression for the transform of the target random variable’s distribution exists. Very often, we need to rewrite this expression in a form amenable to decomposition;

Step 2: For each component in the rewritten form of the transform, provide a probabilistic interpretation, or prove that the component is the transform of a probability distribution;

Step 3: Carry out an asymptotic analysis for each probabilistic component in the decomposition to obtain its tail asymptotic expression;

Step 4: Prove a key tail asymptotic property of an involved random variable, which is required in the final step;

Step 5: Prove the main theorem by identifying all components that provide dominant contributions to the tail asymptotic expression of the target quantity.

The rest of the paper is organized as follows: in Sect. 2, we describe the \(M^X/G/1\) retrial queue model and rewrite the GF (a literature result) for \(D^{(0)}\) (indeed, \(L_{\mu }=L_{\infty }+D^{(0)}\) in terms of the stochastic decomposition); in Sect. 3, a further decomposition of each component in the decomposition of Sect. 2, together with its analysis, is provided; in Sect. 4, an asymptotic analysis of the components in the decompositions of Sect. 3 is carried out; we complete the proof of our key property (the tail asymptotic behaviour of \(D^{(0)}\)) in Sect. 5; the refined tail equivalence theorem (our main result) for the total number of customers is proved in Sect. 6; the asymptotic tail behaviour of \(D^{(1)}\) is provided in Sect. 7; a discussion of the key condition of Lemma 6.1, together with two examples, is provided in Sect. 8; and concluding remarks are made in the final Sect. 9. Appendix A contains some literature results, together with preliminary results verified by us, needed for proving our main theorem; some detailed proofs are organized in Appendix B; and Appendix C provides the proofs of the \(\Delta \)-analyticity required for the discussion of the two examples in Sect. 8.

2 Preliminaries

In this section, we provide the details of the first step of making an exhaustive stochastic decomposition. Two transforms, for \(D^{(0)}\) and \(D^{(1)}\), correspond to our target distributions. The key is to decompose \(D^{(0)}\), since \(D^{(1)}\) can be easily expressed in terms of \(D^{(0)}\). An expression for \(D^{(0)}\) was obtained in [13]; for the purpose of decomposition, we need to rewrite it. Before doing so, we first describe the model and introduce the necessary notation.

In this paper, we consider the \(M^X/G/1\) retrial queue (the same model considered in [37]), in which the primary customers arrive in batches, the successive arrival epochs form a Poisson process with rate \(\lambda \), and the generic batch size X has the probability distribution \(P\{X=k\}\), \(k\ge 1\), with a finite mean \(\chi _1\). If the server is free at an arrival epoch, then one of the arriving customers receives service immediately and the others join the orbit, becoming repeated customers, whereas if the server is busy, all arriving customers join the orbit, becoming repeated customers. Each of the repeated customers in the orbit independently retries for service after an exponential time with rate \(\mu \) until it finds the server idle, and then starts its service immediately. The customer in service leaves the system immediately upon the completion of its service. Both primary and repeated customers require service times with the same distribution. Assume that the generic service time B has the probability distribution B(x), with \(B(0)=0\) and a finite mean \(\beta _1\). Let \(\rho =\lambda \beta _1 \chi _1\). It is well known that the system is stable if and only if (iff) \(\rho <1\), which is assumed to hold throughout the paper.
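The basic identity \(P\{C_{ser}=0\}=1-\rho \) (used below) can be sanity-checked by simulation. The sketch below is illustrative only: it assumes exponential service times (so that the pair of server state and orbit size is Markovian) and geometric batch sizes, and the parameter values `lam`, `mu`, `p`, `beta1` are arbitrary choices, not from the paper.

```python
import math
import random

def simulate_idle_prob(lam=1.0, mu=1.0, p=0.5, beta1=0.2,
                       horizon=50_000.0, seed=1):
    """Gillespie-style simulation of a batch-arrival retrial queue.

    Exponential service (rate 1/beta1) keeps the state (busy, orbit)
    Markovian; batch sizes are geometric on {1, 2, ...} with mean 1/p.
    Returns the long-run fraction of time the server is idle.
    """
    rng = random.Random(seed)
    t, busy, orbit, idle_time = 0.0, False, 0, 0.0
    while t < horizon:
        rate_arr = lam                                  # batch arrivals
        rate_srv = (1.0 / beta1) if busy else 0.0       # service completion
        rate_ret = 0.0 if busy else orbit * mu          # successful retrial
        total = rate_arr + rate_srv + rate_ret
        dt = rng.expovariate(total)
        if not busy:
            idle_time += dt
        t += dt
        u = rng.random() * total
        if u < rate_arr:
            # geometric batch: P{X = k} = p (1-p)^(k-1), k >= 1
            batch = 1 + int(math.log(1.0 - rng.random()) / math.log(1.0 - p))
            if busy:
                orbit += batch          # all arrivals join the orbit
            else:
                busy = True             # one customer enters service
                orbit += batch - 1
        elif u < rate_arr + rate_srv:
            busy = False                # server becomes free
        else:
            busy, orbit = True, orbit - 1
    return idle_time / t

rho = 1.0 * 0.2 * 2.0     # lam * beta1 * chi_1 = 0.4
idle_frac = simulate_idle_prob()
print(idle_frac, 1.0 - rho)   # idle fraction should be near 1 - rho
```

With these parameters, \(\rho =0.4\), and the estimated idle fraction settles near \(1-\rho =0.6\) for a long horizon.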

We use \(\beta (s)\) and \(\beta _n\) to represent the LST and the nth moment of B(x), respectively. The generating function (GF) of X is denoted by \(X(z)=E(z^X)=\sum _{k=1}^{\infty }P\{X=k\} z^k\). In addition, we define \(X_0=X-1\), and then, it is clear that \(X_0(z)=E(z^{X_0})=X(z)/z\).

Let \(N_{orb}\) be the number of the repeated customers in the orbit, and \(C_{ser}=1 \text{ or }\ 0\) corresponds to the server being busy or idle, respectively. Let \(D^{(0)}\) (\(D^{(1)}\)) be a random variable (rv) (see Table 1) having the same distribution as the conditional distribution of the number of repeated customers in the orbit given that the server is free (busy). It is clear that \(D^{(0)}\) takes nonnegative integers with the GF \(D^{(0)}(z)=E(z^{D^{(0)}}) {\mathop {=}\limits ^\textrm{def}} E(z^{N_{orb}}|C_{ser}=0)\). Note that \(P\{C_{ser}=0\}=1-\rho \). The following result on \(D^{(0)}(z)\) (p. 174 of [13]) is our starting point:

$$\begin{aligned} D^{(0)}(z)=\exp \left\{ -\frac{\lambda }{\mu }\int _z^1\frac{1-\beta (\lambda -\lambda X(u)) X_0(u)}{\beta (\lambda -\lambda X(u))-u} \hbox {d}u\right\} . \end{aligned}$$
(2.1)

Our particular interest is to analyse the asymptotic behaviour of the tail probability of \(D^{(0)}\), which is the independent increment from \(L_{\infty }\) to \(L_{\mu }\) in the stochastic decomposition (see, for example, [37] and also Sect. 6). From this result, the tail asymptotic behaviour (the refined equivalence theorem) for the total number of customers is proved in Sect. 6, and the tail asymptotic behaviour of \(D^{(1)}\) also follows as a consequence (see Sect. 7). To proceed, we first rewrite (2.1). Let

$$\begin{aligned} K^{*}(u)= & {} \frac{1-\beta (\lambda -\lambda X(u)) X_0(u)}{(\rho +\chi _1 -1)(1-u)}, \end{aligned}$$
(2.2)
$$\begin{aligned} K^{\circ }(u)= & {} \frac{(1-\rho )(1-u)}{\beta (\lambda -\lambda X(u))-u}, \end{aligned}$$
(2.3)
$$\begin{aligned} K(u)= & {} K^{*}(u)\cdot K^{\circ }(u), \end{aligned}$$
(2.4)
$$\begin{aligned} \psi= & {} \frac{\lambda (\rho +\chi _1 -1)}{\mu (1-\rho )}. \end{aligned}$$
(2.5)

It immediately follows from (2.1) that

$$\begin{aligned} D^{(0)}(z)=\exp \left\{ -\psi \int _z^1 K(u) \hbox {d}u\right\} . \end{aligned}$$
(2.6)
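The regrouping of (2.1) into (2.6) is purely algebraic and can be checked numerically. The sketch below compares the integrand \(\psi K(u)\) of (2.6) with the original integrand of (2.1) at several points, under illustrative assumptions (exponential service, geometric batches) not taken from the paper.

```python
import math

# Illustrative parameters (assumptions, not from the paper):
lam, mu, p, beta1 = 1.0, 1.0, 0.5, 0.2
chi1 = 1.0 / p                          # mean batch size
rho = lam * beta1 * chi1                # must be < 1 for stability

def X(z):    return p * z / (1.0 - (1.0 - p) * z)   # geometric batch GF
def X0(z):   return X(z) / z                         # GF of X - 1
def beta(s): return 1.0 / (1.0 + beta1 * s)          # exponential-service LST

def K_star(u):   # (2.2)
    return (1.0 - beta(lam - lam * X(u)) * X0(u)) \
           / ((rho + chi1 - 1.0) * (1.0 - u))

def K_circ(u):   # (2.3)
    return (1.0 - rho) * (1.0 - u) / (beta(lam - lam * X(u)) - u)

psi = lam * (rho + chi1 - 1.0) / (mu * (1.0 - rho))  # (2.5)

for u in [0.1, 0.3, 0.5, 0.7, 0.9]:
    lhs = psi * K_star(u) * K_circ(u)                    # integrand of (2.6)
    rhs = (lam / mu) * (1.0 - beta(lam - lam * X(u)) * X0(u)) \
          / (beta(lam - lam * X(u)) - u)                 # integrand of (2.1)
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
print("integrands of (2.1) and (2.6) agree")
```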

We have now completed the first step of applying the exhaustive decomposition method. The rewritten form of the transform \(D^{(0)}(z)\) suggests decomposing it into more detailed components at different levels: K at the first level, \(K^{*}\) and \(K^{\circ }\) at the second level, and further decompositions within both \(K^{*}\) and \(K^{\circ }\). The analysis of \(D^{(0)}\) will be carried out in the following three sections (representing Steps 2 to 4, respectively): in Sect. 3, we first prove that both \(K^{*}\) and \(K^{\circ }\) are transforms of probability distributions and establish further stochastic decompositions for each of these two components; in Sect. 4, an asymptotic analysis of the components in the decomposition is carried out; and we complete the proof of the key property (the tail asymptotic behaviour of \(D^{(0)}\)) in Sect. 5.

To conclude this section, we provide Table 1 for a list of notations introduced in this section.

Table 1 A list of notations introduced in Sect. 2

3 Stochastic decompositions related to K(z)

In this section, we carry out the second step in applying the exhaustive version of the stochastic decomposition, which is a key step in the whole analysis. We first prove that both \(K^{*}(z)\) and \(K^{\circ }(z)\) are the GFs of the probability distributions for two discrete nonnegative random variables (rvs), denoted by \(K^{*}\) and \(K^{\circ }\), respectively. Assume that \(K^{*}\) and \(K^{\circ }\) are independent. Therefore, according to (2.4), K(z) is the GF of \(K=K^{*} + K^{\circ }\). We then further decompose \(K^{*}\) and \(K^{\circ }\), respectively, into sums of independent rvs, for which we can carry out tail asymptotic analysis (given in the next section).

To see that \(K^{*}(z)\) is the GF of a probability distribution, we need the following two observations: (1) \(\beta (\lambda -\lambda X(z))\) is the GF of a discrete nonnegative random variable (rv), and so is \(\beta (\lambda -\lambda X(z)) X_0(z)\); and (2) for a GF Q(z) of a discrete nonnegative rv, \((1-Q(z))/(1-z)\) is essentially (up to a constant factor) the GF of its equilibrium distribution. Specifically, we have the following facts (Facts A–D).

Fact A: Let \(N_{B}\) and \(N_{B_X}\) be the number of batches and the total number of customers arriving within a service time B, respectively. It is then clear that \(N_{B_X}=X^{(1)}+X^{(2)}+\cdots +X^{(N_{B})}\), where \(X^{(1)}, X^{(2)},\cdots , X^{(N_{B})}\) are independent copies of the batch size X. It is well known that

$$\begin{aligned} E(z^{N_{B}})= & {} \int _0^{\infty }\sum _{k=0}^{\infty }\frac{(\lambda x)^{k}}{k!}e^{-\lambda x}{z^k} \hbox {d}B(x)=\beta (\lambda -\lambda z), \end{aligned}$$
(3.1)
$$\begin{aligned} E(z^{N_{B_X}})= & {} \int _0^{\infty }\sum _{k=0}^{\infty }\frac{(\lambda x)^{k}}{k!}e^{-\lambda x}(X(z))^k \hbox {d}B(x)=\beta (\lambda -\lambda X(z)). \end{aligned}$$
(3.2)

Let \(X_0\) and \(N_{B_X}\) be independent. Then, \(\beta (\lambda -\lambda X(z)) X_0(z)\) is the GF of \(N_{B_XX_0}{\mathop {=}\limits ^\textrm{d}} N_{B_X}+X_0\), where \({\mathop {=}\limits ^\textrm{d}}\) stands for equality in distribution.

Fact B: \(E(N_{B})=\lambda \beta _1\), \(E(N_{B_X})=\lim _{z\uparrow 1}\frac{\hbox {d}}{\hbox {d}z}\beta (\lambda -\lambda X(z))=\lambda \beta _1\chi _1=\rho \), and for \(N_{B_XX_0}\),

$$\begin{aligned} E(N_{B_XX_0})=E(N_{B_X}+X_0)=\rho +\chi _1-1. \end{aligned}$$
(3.3)

Fact C: Let N be an arbitrary discrete nonnegative rv with the GF \(Q(z)=\sum _{n=0}^{\infty }q(n)z^n\), where \(q(n)=P\{N=n\}\). Denote by \(\overline{q}(n)\) the tail probability of N, i.e. \(\overline{q}(n) = P\{N>n\}=\sum _{k=n+1}^{\infty }q(k)\), \(n\ge 0\). Under the assumption that \(E(N)<\infty \), the discrete equilibrium probability distribution \(q^{(de)}(n)\) associated with \(\{q(n)\}_{n=0}^{\infty }\) is defined by

$$\begin{aligned} q^{(de)}(n) = \overline{q}(n)/E(N)=P\{N>n\}/E(N). \end{aligned}$$
(3.4)

Let us use the notation \(N^{(de)}\) to represent a rv having the distribution \(\{q^{(de)}(n)\}_{n=0}^{\infty }\). Then, the GF of \(\{q^{(de)}(n)\}_{n=0}^{\infty }\) is given by

$$\begin{aligned} Q^{(de)}(z)= & {} \displaystyle \frac{1}{E(N)}\cdot \frac{1-Q(z)}{1-z}, \end{aligned}$$
(3.5)

which follows from the fact that

$$\begin{aligned} \sum _{n=0}^{\infty }\left( \sum _{k=n+1}^{\infty }q(k)\right) z^n=\sum _{k=1}^{\infty }\sum _{n=0}^{k-1}q(k) z^n=\sum _{k=0}^{\infty }\frac{q(k) (1-z^k)}{1-z} =\frac{1-Q(z)}{1-z}.\quad \quad \end{aligned}$$
(3.6)
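Fact C can be illustrated numerically: for a concrete (hypothetical) distribution q, the GF built directly from the tail probabilities in (3.4) should coincide with the closed form (3.5). A minimal sketch:

```python
# Probabilities of an arbitrary discrete rv N on {0,...,4} (illustrative):
q = [0.1, 0.2, 0.3, 0.25, 0.15]
EN = sum(n * qn for n, qn in enumerate(q))

def Q(z):
    return sum(qn * z**n for n, qn in enumerate(q))

def Qde_direct(z):
    # GF of the equilibrium distribution, built term by term from (3.4)
    tail = [sum(q[n + 1:]) for n in range(len(q) - 1)]   # P{N > n}, n = 0..3
    return sum(t * z**n for n, t in enumerate(tail)) / EN

def Qde_formula(z):
    # Closed form (3.5)
    return (1.0 - Q(z)) / (EN * (1.0 - z))

for z in [0.0, 0.25, 0.5, 0.75, 0.99]:
    assert abs(Qde_direct(z) - Qde_formula(z)) < 1e-12
print("(3.5) verified against the direct tail sums in (3.4)")
```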

Now, according to (2.2) and the above Facts, we have

$$\begin{aligned} K^{*} {\mathop {=}\limits ^\textrm{d}} N_{B_XX_0}^{(de)}, \end{aligned}$$
(3.7)

where \(K^{*}(z)\) is the GF of the discrete equilibrium distribution of \(N_{B_XX_0}\).

Fact D: It can be shown that \(K^{\circ }(z)\) is also the GF of a discrete probability distribution. Let \(B^{(e)}(x)\) be the equilibrium distribution of B(x), which is defined by \(1-B^{(e)}(x)= \beta _1^{-1}\int _x^{\infty }(1-B(t))\hbox {d}t\). It is well known that the LST of \(B^{(e)}(x)\) is \(\displaystyle \beta ^{(e)}(s)=(1-\beta (s))/(\beta _1 s)\). Moreover, by Fact C, we know that \(X^{(de)}(z){\mathop {=}\limits ^\textrm{def}}(1-X(z))/(\chi _1(1-z))\) is the GF of a discrete nonnegative rv, denoted by \(X^{(de)}\). Therefore, (2.3) can be rewritten as

$$\begin{aligned} K^{\circ }(z)= & {} (1-\rho )\left[ 1-\frac{1-\beta (\lambda -\lambda X(z))}{1-X(z)} \cdot \frac{1-X(z)}{1-z}\right] ^{-1}\nonumber \\= & {} \frac{1-\rho }{1-\rho \beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)}\nonumber \\= & {} \sum _{k=0}^{\infty }(1-\rho )\rho ^{k}\left( \beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)\right) ^k. \end{aligned}$$
(3.8)

Let \(B^{(e)}\) be a rv with probability distribution function \(B^{(e)}(x)\). Denote by \(N_{B^{(e)}}\) and \(N_{B^{(e)}_X}\) the number of batches and the total number of customers arriving within a random time \(B^{(e)}\), respectively. By Fact A, \(\beta ^{(e)}(\lambda -\lambda X(z))\) is the GF of \(N_{B^{(e)}_X}\). Therefore, \(\beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)\) is the GF of \(N_{B^{(e)}_XX^{(de)}} {\mathop {=}\limits ^\textrm{d}} N_{B^{(e)}_X}+X^{(de)}\), where \(N_{B^{(e)}_X}\) and \(X^{(de)}\) are independent. From (3.8), \(K^{\circ }\) can be viewed as a geometric sum of i.i.d. rvs (i.e. the number of summands is geometrically distributed):

$$\begin{aligned} K^{\circ }=N_{B^{(e)}_XX^{(de)}}^{(1)}+N_{B^{(e)}_XX^{(de)}}^{(2)}+\cdots +N_{B^{(e)}_XX^{(de)}}^{(J)}\quad \text{ for } J\ge 1, \text{ and } K^{\circ }=0 \text{ if } J=0,\nonumber \\ \end{aligned}$$
(3.9)

where \(P(J=k)=(1-\rho )\rho ^{k}\) (\(k\ge 0\)), rvs \(N_{B^{(e)}_XX^{(de)}}^{(i)}\) (\(i\ge 1\)) are independent copies of \(N_{B^{(e)}_XX^{(de)}}\), and J and \(N_{B^{(e)}_XX^{(de)}}^{(i)}\) (\(i\ge 1\)) are independent.
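The geometric-sum representation can be checked numerically by truncating the series in (3.8) and comparing it with the closed form (2.3); the parameters below (exponential service, geometric batches) are illustrative assumptions, not from the paper.

```python
# Illustrative parameters (assumptions): exponential service, geometric batches.
lam, p, beta1 = 1.0, 0.5, 0.2
chi1 = 1.0 / p
rho = lam * beta1 * chi1          # 0.4 < 1, so the geometric series converges

def X(z):      return p * z / (1.0 - (1.0 - p) * z)
def beta(s):   return 1.0 / (1.0 + beta1 * s)
def beta_e(s): return (1.0 - beta(s)) / (beta1 * s)      # LST of B^(e)
def X_de(z):   return (1.0 - X(z)) / (chi1 * (1.0 - z))  # GF of X^(de)

def K_circ_closed(z):
    # Closed form (2.3)
    return (1.0 - rho) * (1.0 - z) / (beta(lam - lam * X(z)) - z)

def K_circ_series(z, terms=200):
    # Truncated geometric-sum form (3.8)
    g = beta_e(lam - lam * X(z)) * X_de(z)   # GF of one summand
    return sum((1.0 - rho) * rho**k * g**k for k in range(terms))

for z in [0.1, 0.4, 0.7, 0.9]:
    assert abs(K_circ_closed(z) - K_circ_series(z)) < 1e-10
print("(2.3) matches the geometric-sum form (3.8)")
```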

Finally, it follows from Facts C and D, and the expression in (2.4) that K can be regarded as the sum of independent rvs \(K^{*}\) and \(K^{\circ }\), i.e.

$$\begin{aligned} K {\mathop {=}\limits ^\textrm{d}} K^{*}+K^{\circ } \end{aligned}$$
(3.10)

having the GF given in (2.4).

4 Asymptotic tail probability for the rv K

In this section, we present the third step of the analysis by providing tail asymptotic results for the components in the stochastic decompositions of \(K^{*}\) and \(K^{\circ }\), based on which our key property (Theorem 5.1) on the asymptotic tail behaviour of \(D^{(0)}\) is proved. For the convenience of the reader, a collection of literature results required in this paper is provided in Appendix A.

Throughout the rest of the paper, \(R_{\sigma }\) and \(\mathcal S\) denote the collections of regularly varying (at \(\infty \)) functions with index \(\sigma \) and of subexponential distributions, respectively, and L(x) is a slowly varying (at \(\infty \)) function. Refer to Appendix A for more details. It is also worth mentioning that for a distribution F on \((0,\infty )\), if \(1-F(x)\in R_{-\alpha }\) for \(\alpha \ge 0\), then \(F\in \mathcal S\) (see, for example, Embrechts et al. [11], p. 50; or Appendix A).

Our discussion is based on the assumption that both the service time B and the batch size X have regularly varying tails. Specifically, we make the following assumptions:

A1. \(P\{B>x\}\sim x^{-d_B}L(x)\) as \(x\rightarrow \infty \), where \(d_B>1\); and

A2. \(P\{X>j\}\sim c_X\cdot j^{-d_X}L(j)\) as \(j\rightarrow \infty \), where \(d_X>1\) and \(c_X\ge 0\).

Remark 4.1

Throughout the paper, unless otherwise specified, all L’s stand for the same slowly varying function. It is a convention that in A2, we allow \(c_X=0\), which means that

$$\begin{aligned} \lim _{j\rightarrow \infty }\frac{P\{X>j\}}{j^{-d_X}L(j)}=0. \end{aligned}$$

By Karamata’s theorem (see Lemma A.3) and the Assumption A1, we know that \(\int _x^{\infty }(1-B(t))\hbox {d}t \sim (d_B-1)^{-1} x^{-d_B+1} L(x)\) as \(x\rightarrow \infty \), which implies \(1-B^{(e)}(x)\sim ((d_B-1)\beta _1)^{-1} x^{-d_B+1} L(x)\) as \(x\rightarrow \infty \). By the definitions of \(N_{B}\) and \(N_{B^{(e)}}\) in Fact A and Lemma A.4, we have

$$\begin{aligned} P\{N_{B}>j\}\sim & {} 1-B(j/\lambda )\ \sim \ \lambda ^{d_B} j^{-d_B}L(j), \end{aligned}$$
(4.1)
$$\begin{aligned} P\{N_{B^{(e)}}>j\}\sim & {} 1-B^{(e)}(j/\lambda )\ \sim \ \frac{\lambda ^{d_B-1}}{(d_B-1)\beta _1} j^{-d_B+1}L(j). \end{aligned}$$
(4.2)

Next, let us state a result on tail asymptotics for K, which will be used in later sections.

Theorem 4.1

Under Assumptions A1 and A2,

$$\begin{aligned} P\{K>j\} \sim c_{K}\cdot j^{-a+1}L(j),\quad \text{ as } j\rightarrow \infty , \end{aligned}$$
(4.3)

where

$$\begin{aligned} a=\min (d_B, d_X)>1 \end{aligned}$$
(4.4)

and

$$\begin{aligned} c_{K}=\left\{ \begin{array}{ll} (\lambda \chi _1)^{a}\chi _1/((a-1)(1-\rho )(\rho +\chi _1-1)),&{}\quad \text{ if }\ d_X>d_B>1,\\ c_X/((a-1)(1-\rho )(\rho +\chi _1-1)),&{}\quad \text{ if }\ 1<d_X<d_B \text{ and }\ c_X>0, \\ ((\lambda \chi _1)^{a}\chi _1+c_X)/((a-1)(1-\rho )(\rho +\chi _1-1)),&{}\quad \text{ if }\ d_X=d_B>1 \text{ and }\ c_X>0. \end{array} \right. \end{aligned}$$
(4.5)

Proof

Based on whether or not the batch size X has a tail lighter than that of the service time B, we divide the proof of Theorem 4.1 into three cases, the details of which are organized in Appendix B. \(\square \)

5 Key property—asymptotic tail probability for the rv \(D^{(0)}\)

This section provides the fourth step, in which a key tail asymptotic property is established; this property is required for the final step (the proof of the main theorem). Note that \(D^{(0)}(z)\) is explicitly expressed through K(z) in (2.6), based on which we are able to study the asymptotic behaviour of the tail probability of \(D^{(0)}\) using the result on K in Theorem 4.1. This property is key to the paper, since the refined asymptotic properties in the main theorem (Theorem 6.1) and the asymptotic property of \(D^{(1)}\) in Theorem 7.1 can be readily proved by using Theorem 5.1.

Theorem 5.1

(Key property) Under Assumptions A1 and A2,

$$\begin{aligned} P\{D^{(0)}>j\}\sim (1-1/a) c_K\psi \cdot j^{-a}L(j)=c_{D^{(0)}}\cdot j^{-a}L(j),\quad \text{ as }\ j\rightarrow \infty ,\quad \quad \end{aligned}$$
(5.1)

where \(a=\min (d_B, d_X)>1\),

$$\begin{aligned} c_{D^{(0)}}= & {} \left\{ \begin{array}{ll} (\lambda \chi _1)^{a+1}/(a\mu (1-\rho )^2), &{}\quad \text{ if }\ d_X>d_B>1,\\ \lambda c_X/(a\mu (1-\rho )^2), &{}\quad \text{ if }\ 1<d_X<d_B \text{ and }\ c_X>0, \\ ((\lambda \chi _1)^{a+1}+\lambda c_X)/(a\mu (1-\rho )^2), &{}\quad \text{ if }\ d_X=d_B>1 \text{ and }\ c_X>0, \end{array} \right. \qquad \end{aligned}$$
(5.2)

and \(\psi \) and \(c_K\) are expressed in (2.5) and (4.5), respectively.

Proof

See Appendix B, in which we divide the proof of Theorem 5.1 into two parts, depending on whether a is an integer or not. \(\square \)

6 Refined equivalence theorem

As the final step, we prove our main theorem in this section. Under Assumptions A1 and A2, we first present the asymptotic tail equivalence in (6.11) for the total numbers of customers in an \(M^X/G/1\) retrial queue and in the corresponding standard \(M^X/G/1\) queue without retrials. This is a generalization (under the assumption of regularly varying tails) of the equivalence theorem in the literature, since we allow the batch size to have a tail probability heavier than that of the service time. Then, we focus on the difference between the tail probability of the total number of customers in the system for the retrial queue and that for the corresponding non-retrial queue, and provide a characterization (6.12) of the asymptotic behaviour of this difference, which is our main contribution: a refined result for the tail equivalence between the two systems.

As mentioned in the introduction, in order to establish the equivalence theorem for a retrial queueing system, a stochastic decomposition result is often used (e.g. [35, 37] and [32]). For the \(M^X/G/1\) retrial queue, the total number \(L_{\mu }\) of customers in the system can be written as the sum of two independent random variables, the total number \(L_{\infty }\) of customers in the corresponding \(M^X/G/1\) queueing system (without retrials) and \(D^{(0)}\), i.e.

$$\begin{aligned} L_{\mu } {\mathop {=}\limits ^\textrm{d}} L_{\infty }+D^{(0)}. \end{aligned}$$
(6.1)

It is well known (see, for example, [37]) that

$$\begin{aligned} Ez^{L_{\infty }}= & {} \beta (\lambda -\lambda X(z)) \cdot \frac{(1-\rho )(1-z)}{\beta (\lambda -\lambda X(z))-z}. \end{aligned}$$
(6.2)

The equality (6.1) can be verified easily because

$$\begin{aligned} Ez^{L_{\mu }}= & {} \sum _{n=0}^{\infty }z^nP\{C_{ser}=0,N_{orb}=n\}+\sum _{n=0}^{\infty }z^{n+1}P\{C_{ser}=1,N_{orb}=n\}\nonumber \\= & {} p_0(z)+zp_1(z), \end{aligned}$$
(6.3)

where \(p_i(z){\mathop {=}\limits ^\textrm{def}}\sum _{n=0}^{\infty }z^nP\{C_{ser}=i,N_{orb}=n\}\), \(i=0,1\), are explicitly expressed on p. 174 of [13], with which (6.3) leads to \(Ez^{L_{\mu }}=Ez^{L_{\infty }}\cdot Ez^{D^{(0)}}\) and subsequently to (6.1).

It follows from (6.2) and (2.3) that

$$\begin{aligned} E(z^{L_{\infty }})= & {} \beta (\lambda -\lambda X(z))\cdot K^{\circ }(z), \end{aligned}$$
(6.4)

which implies that \(L_{\infty } {\mathop {=}\limits ^\textrm{d}} N_{B_X}+K^{\circ }\), where \(N_{B_X}\) and \(K^{\circ }\) are assumed to be independent. Note that, from (B.9), (B.16) and (B.21), under Assumptions A1 and A2,

$$\begin{aligned} P\{K^{\circ }>j\}\sim & {} c_{K^{\circ }}\cdot j^{-a+1}L(j),\quad \text{ as } j\rightarrow \infty , \end{aligned}$$
(6.5)

where \(a=\min (d_B, d_X)>1\) and

$$\begin{aligned} c_{K^{\circ }}= & {} \left\{ \begin{array}{ll} (\lambda \chi _1)^{a}/((a-1)(1-\rho )),&{}\quad \text{ if } d_X>d_B>1,\\ \lambda \beta _1c_X/((a-1)(1-\rho )),&{}\quad \text{ if } 1<d_X<d_B \text{ and }\ c_X>0, \\ ((\lambda \chi _1)^{a}+\lambda \beta _1c_X)/((a-1)(1-\rho )),&{}\quad \text{ if } d_X=d_B>1 \text{ and }\ c_X>0. \end{array} \right. \nonumber \\ \end{aligned}$$
(6.6)

It follows from (B.5), (B.12), (B.19) and (6.5) that \(P\{N_{B_X}>j\}=o(P\{K^{\circ }>j\})\). So,

$$\begin{aligned} P\{L_{\infty }>j\}\sim P\{K^{\circ }>j\}\sim c_{K^{\circ }}\cdot j^{-a+1}L(j). \end{aligned}$$
(6.7)

By Theorem 5.1, we have \(P\{D^{(0)}>j\}=o(P\{L_{\infty }>j\})\), and therefore,

$$\begin{aligned} P\{L_{\mu }>j\}\sim P\{L_{\infty }>j\}. \end{aligned}$$
(6.8)

Next, we refine the asymptotic equivalence (6.8). Precisely, we will characterize the asymptotic behaviour of the difference \(P\{L_{\mu }>j\}-P\{L_{\infty }>j\}\) as \(j\rightarrow \infty \). Towards this end, we provide the following lemma, which will be used to confirm our assertion later. We use the notation \(\overline{F}(\cdot )=1-F(\cdot )\).

Lemma 6.1

Let \(X_1\) and \(X_2\) be independent nonnegative rvs with distribution functions \(F_1\) and \(F_2\), respectively. Assume that \(\overline{F}_1(t)\sim c_1 t^{-d+1}L_1(t)\) and \(\overline{F}_2(t)\sim c_2 t^{-d}L_2(t)\) as \(t\rightarrow \infty \), where \(c_1>0\), \(c_2>0\), \(d>1\) and both \(L_1(t)\) and \(L_2(t)\) are slowly varying functions at infinity. Suppose that \(\overline{F}_1(t)=\int _t^{\infty }f_1(x)\hbox {d}x\) and \(f_1(x)\) is ultimately decreasing. Then, as \(t\rightarrow \infty \),

$$\begin{aligned} P\{X_1+X_2>t\}=\overline{F}_1(t)+ (d-1)\mu _{F_2}\cdot t^{-1}\overline{F}_1(t) (1+o(1))+\overline{F}_2(t) (1+o(1)),\nonumber \\ \end{aligned}$$
(6.9)

where \(\mu _{F_2}<\infty \) is the mean value of \(X_2\).

Proof

See Appendix B. \(\square \)

Remark 6.1

In Lemma 6.1, if \(X_1\) is a nonnegative integer-valued rv, then the condition that \(f_1(x)\) is ultimately decreasing should be replaced by the condition that \(P\{X_1=j\}\) is ultimately decreasing, since we can define \(f_1(t) = P\{X_1=j\}\) for \(j\le t<j+1\) and \(j\ge 0\).

Remark 6.2

The ultimately decreasing condition imposed in Lemma 6.1 is, in general, a non-trivial one to justify. We will discuss this condition in Sect. 8.

Recalling (6.1), (6.7) and (5.1), applying Lemma 6.1 with the setting of \(X_1=L_{\infty }\), \(X_2=D^{(0)}\) and \(X_1+X_2=L_{\mu }\), and noting that \(E(X_2)=\frac{\hbox {d}}{\hbox {d}z}D^{(0)}(z)\big |_{z=1}=\psi \) (see (2.6)), we conclude that

$$\begin{aligned} P\{L_{\mu }>j\}-P\{L_{\infty }>j\}\sim (a-1)\psi \cdot j^{-1}P\{L_{\infty }>j\} +P\{D^{(0)}>j\}.\nonumber \\ \end{aligned}$$
(6.10)
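The application above uses \(E(D^{(0)})=\psi \). As a numeric sanity check, one can approximate \(E(D^{(0)})\) by \((1-D^{(0)}(z))/(1-z)\) for z close to 1, evaluating \(D^{(0)}(z)\) from (2.6) by midpoint quadrature (which avoids the removable 0/0 of K(u) at \(u=1\)). The parameters are illustrative assumptions (exponential service, geometric batches), not from the paper.

```python
import math

# Illustrative parameters (assumptions): exponential service, geometric batches.
lam, mu, p, beta1 = 1.0, 1.0, 0.5, 0.2
chi1 = 1.0 / p
rho = lam * beta1 * chi1
psi = lam * (rho + chi1 - 1.0) / (mu * (1.0 - rho))   # (2.5)

def X(z):    return p * z / (1.0 - (1.0 - p) * z)
def beta(s): return 1.0 / (1.0 + beta1 * s)

def K(u):
    # K(u) = K*(u) K°(u) from (2.2)-(2.4); u = 1 is never hit by midpoints
    ks = (1.0 - beta(lam - lam * X(u)) * X(u) / u) \
         / ((rho + chi1 - 1.0) * (1.0 - u))
    kc = (1.0 - rho) * (1.0 - u) / (beta(lam - lam * X(u)) - u)
    return ks * kc

def D0(z, n=4000):
    # D^(0)(z) = exp(-psi * int_z^1 K(u) du), midpoint rule on [z, 1]
    h = (1.0 - z) / n
    integral = sum(K(z + (i + 0.5) * h) for i in range(n)) * h
    return math.exp(-psi * integral)

z = 1.0 - 1e-3
mean_est = (1.0 - D0(z)) / (1.0 - z)   # -> E(D^(0)) = psi as z -> 1
print(mean_est, psi)                    # the two values should be close
```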

The results (6.7), (6.8), (6.10) and (5.1) are summarized in the following theorem.

Theorem 6.1

(Main theorem—a refined equivalence) For the stable \(M^X/G/1\) retrial queue with assumptions A1 and A2, we have the following asymptotic properties. As \(j\rightarrow \infty \),

$$\begin{aligned} P\{L_{\mu }>j\}\sim P\{L_{\infty }>j\}\sim c_{K^{\circ }}\cdot j^{-a+1}L(j). \end{aligned}$$
(6.11)

Furthermore, if \(P\{L_{\infty }=j\}\) is ultimately decreasing in j, then

$$\begin{aligned} \quad P\{L_{\mu }>j\}-P\{L_{\infty }>j\}\sim \left[ (a-1)\psi c_{K^{\circ }}+c_{D^{(0)}}\right] \cdot j^{-a}L(j), \end{aligned}$$
(6.12)

where \(\psi \), \(c_{K^{\circ }}\) and \(c_{D^{(0)}}\) are given in (2.5), (6.6) and (5.2), respectively.

Remark 6.3

It is worth mentioning that in the first part of Theorem 6.1, the asymptotic equivalence \(P\{L_{\mu }>j\}\sim P\{L_{\infty }>j\}\) is proved without assuming that the batch size has a lighter tail than the service time. This equivalence was verified under the assumption of a light-tailed batch size in [37] and of a moderately heavy-tailed batch size in [32], but in both cases the batch size distribution has a tail lighter than that of the service time distribution.

7 Asymptotic property for the tail probability of the rv \(D^{(1)}\)

Recall the definition of the rv \(D^{(1)}\) in Sect. 2: \(D^{(1)}\) is a rv whose distribution equals the conditional distribution of the number of repeated customers in the orbit given that the server is busy. Note that \(P\{C_{ser}=1\}=\rho \). The following result on \(D^{(1)}(z)\) is from [13] (p. 174):

$$\begin{aligned} D^{(1)}(z){\mathop {=}\limits ^\textrm{def}}E(z^{N_{orb}}|C_{ser}=1)= & {} \frac{1-\beta (\lambda -\lambda X(z))}{\beta (\lambda -\lambda X(z))-z}\cdot \frac{1-\rho }{\rho }\cdot D^{(0)}(z), \end{aligned}$$
(7.1)

where \(D^{(0)}(z)\) is given in (2.1). Rewriting (7.1) gives

$$\begin{aligned} D^{(1)}(z)= & {} \frac{1-\beta (\lambda -\lambda X(z))}{(\lambda -\lambda X(z))\beta _1} \cdot \frac{1- X(z)}{(1-z)\chi _1 }\cdot \frac{(1-\rho )(1-z)}{\beta (\lambda -\lambda X(z))-z}\cdot D^{(0)}(z)\nonumber \\= & {} \beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)\cdot K^{\circ }(z)\cdot D^{(0)}(z), \end{aligned}$$
(7.2)

where \(K^{\circ }(\cdot )\) is defined in (2.3), and \(\beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)\) is stated in Fact D.

It follows from (7.2) that

$$\begin{aligned} D^{(1)} {\mathop {=}\limits ^\textrm{d}} N_{B^{(e)}_XX^{(de)}}+K^{\circ }+D^{(0)}, \end{aligned}$$
(7.3)

where \(N_{B^{(e)}_XX^{(de)}}\), \(K^{\circ }\) and \(D^{(0)}\), stated in Sects. 2 and 3, are independent rvs having GFs \(\beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)\), \(K^{\circ }(z)\) and \(D^{(0)}(z)\), respectively. It follows from (5.1) and (6.5) that \(P\{D^{(0)}>j\}=o(P\{K^{\circ }>j\})\), hence

$$\begin{aligned} P\{D^{(1)}>j\}\sim & {} P\{N_{B^{(e)}_XX^{(de)}}+K^{\circ }>j\}. \end{aligned}$$
(7.4)

Similar to the development for \(P\{D^{(0)}>j\}\), our discussion of \(P\{D^{(1)}>j\}\) distinguishes three cases, based on whether the batch size X has a tail lighter than, heavier than, or as heavy as that of the service time B.

Case 1. \(d_X>d_B\) in Assumptions A1 and A2:

In this case, the asymptotic properties of the tail probabilities \(P\{N_{B^{(e)}_XX^{(de)}}>j\}\) and \(P\{K^{\circ }>j\}\) as \(j\rightarrow \infty \) are given in (B.8) and (B.9), respectively. Applying Part (ii) of Lemma A.7, we get

$$\begin{aligned} P\{D^{(1)}>j\}\sim \frac{(\lambda \chi _1)^{d_B}}{(d_B-1)(1-\rho )\rho }\cdot j^{-d_B+1}L(j),\quad j\rightarrow \infty . \end{aligned}$$
(7.5)

Case 2. \(d_X<d_B\) and \(c_X> 0\) in Assumptions A1 and A2:

In this case, the asymptotic properties of the tail probabilities \(P\{N_{B^{(e)}_XX^{(de)}}>j\}\) and \(P\{K^{\circ }>j\}\) as \(j\rightarrow \infty \) are given in (B.15) and (B.16), respectively. Applying Lemma A.7, we get

$$\begin{aligned} P\{D^{(1)}>j\}\sim \frac{\lambda \beta _1c_X}{(d_X-1)(1-\rho )\rho }\cdot j^{-d_X+1}L(j),\quad j\rightarrow \infty . \end{aligned}$$
(7.6)

Case 3. \(d_X=d_B=a\) and \(c_X> 0\) in Assumptions A1 and A2:

In a manner similar to Cases 1 and 2, one can prove

$$\begin{aligned} P\{D^{(1)}>j\}\sim \frac{(\lambda \chi _1)^{a}+\lambda \beta _1c_X}{(a-1)(1-\rho )\rho }\cdot j^{-a+1}L(j),\quad j\rightarrow \infty , \end{aligned}$$
(7.7)

where we have skipped detailed derivations to avoid repetition.

The results in the above three cases are summarized in the following theorem.

Theorem 7.1

Under A1 and A2,

$$\begin{aligned} P\{D^{(1)}>j\}\sim & {} c_{D^{(1)}}\cdot j^{-a+1}L(j),\quad \text{ as } j\rightarrow \infty , \end{aligned}$$
(7.8)

where \(a=\min (d_B, d_X)>1\) and

$$\begin{aligned} c_{D^{(1)}}=\left\{ \begin{array}{ll} (\lambda \chi _1)^{a}/((a-1)(1-\rho )\rho ), &{}\quad \text{ if } d_X>d_B>1,\\ \lambda \beta _1c_X/((a-1)(1-\rho )\rho ), &{}\quad \text{ if } 1<d_X<d_B \text{ and }\ c_X>0, \\ ((\lambda \chi _1)^{a}+\lambda \beta _1c_X)/((a-1)(1-\rho )\rho ), &{}\quad \text{ if } d_X=d_B>1 \text{ and }\ c_X>0. \end{array} \right. \nonumber \\ \end{aligned}$$
(7.9)
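The piecewise constant in (7.9) is straightforward to evaluate numerically. The sketch below (illustrative Python, with hypothetical parameter values) also makes visible a structural feature of (7.9): in the boundary case \(d_X=d_B\) the constant is exactly the sum of the constants from the two one-sided cases, since all three expressions share the denominator \((a-1)(1-\rho )\rho \).

```python
def c_D1(d_B, d_X, lam, chi1, beta1, c_X, rho):
    """Tail constant c_{D^{(1)}} of (7.9), with a = min(d_B, d_X)."""
    a = min(d_B, d_X)
    denom = (a - 1.0) * (1.0 - rho) * rho
    if d_X > d_B:                      # Case 1: batch-size tail lighter
        return (lam * chi1) ** a / denom
    if d_X < d_B:                      # Case 2: batch-size tail heavier (c_X > 0)
        return lam * beta1 * c_X / denom
    return ((lam * chi1) ** a + lam * beta1 * c_X) / denom   # Case 3: equally heavy

# hypothetical parameter values, for illustration only
lam, chi1, beta1, c_X = 0.2, 2.0, 1.5, 0.7
rho = lam * beta1 * chi1               # 0.6 < 1, so the queue is stable
```

For instance, `c_D1(1.5, 1.5, ...)` equals `c_D1(1.5, 2.5, ...) + c_D1(2.5, 1.5, ...)` for any fixed parameter set, reflecting the additive structure of Case 3.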

8 Worked examples

The ultimately decreasing property of \(P\{L_\infty =j\}\) is the condition for applying the refined result (6.12) of Theorem 6.1. In general, verifying this condition is non-trivial, since the probability distribution \(P\{L_\infty =j\}\) is only given implicitly through \(Ez^{L_{\infty }}\) by (6.2). In spite of this, the verification may still be possible for specific cases. In this section, we demonstrate a procedure for verifying the ultimately decreasing condition of \(P\{L_\infty =j\}\) in two special cases: in the first, the tail probability of the service time is heavier than that of the batch size; in the second, the opposite holds. The procedure is based on a higher-order asymptotic expansion of \(Ez^{L_{\infty }}\) as \(z\rightarrow 1\).

8.1 Monotonicity condition

Before proceeding to the examples, we prove the following lemma and present a few results from the literature, which will be used later.

Lemma 8.1

Suppose that \(x_n=c_0n^{-d_0}+\sum _{i=1}^{k} c_i n^{-d_i} +o(n^{-d_0-1})\) as \(n\rightarrow \infty \), where \(c_0>0\), \(d_0>0\), \(k\ge 0\), \(d_0<d_i\le d_0+1\) and \(-\infty<c_i<\infty \) for \(i=1,\cdots ,k\). Then, \(\{x_n\}_{n=0}^{\infty }\) is an ultimately decreasing sequence, i.e. \(x_{n+1}-x_n\) is negative for n large enough.

Proof

Let \(\Delta x_n=x_{n+1}-x_n\) for \(n\ge 0\). Then,

$$\begin{aligned} \Delta x_n= & {} c_0 n^{-d_0}\big [(1+1/n)^{-d_0}-1\big ]+\sum _{i=1}^{k} c_i n^{-d_i}\big [(1+1/n)^{-d_i}-1\big ]+o(n^{-d_0-1})\nonumber \\= & {} -c_0 d_0 n^{-d_0 -1} +o(n^{-d_0-1}), \end{aligned}$$
(8.1)

which is ultimately negative. \(\square \)

It is worth mentioning that identifying \(c_0\) and \(d_0\), which satisfy the condition of Lemma 8.1, is the key to verifying the ultimately decreasing property of \(x_n\); the specific values of \(c_i\) and \(d_i\) for \(i >0\) are of no importance as long as they satisfy the condition of the lemma.
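Lemma 8.1 is easy to illustrate numerically. The following sketch (illustrative Python, with an arbitrarily chosen sequence satisfying the hypotheses: \(c_0=1\), \(d_0=1.5\), \(c_1=-2\), \(d_1=2.2\)) shows a sequence that increases at first but is decreasing from some index onward, as the lemma guarantees.

```python
def x(n):
    # x_n = n^{-1.5} - 2 n^{-2.2}: here c_0 = 1 > 0, d_0 = 1.5,
    # and d_0 < d_1 = 2.2 <= d_0 + 1, so Lemma 8.1 applies
    return n ** -1.5 - 2.0 * n ** -2.2

# the sequence is not decreasing initially (x_1 < x_2),
# yet it is strictly decreasing for all n >= 5 in the tested range
```

Only the leading pair \((c_0,d_0)\) drives the eventual monotonicity; the negative second term merely delays it.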

8.2 Flajolet–Sedgewick’s result on tail behaviour of expansion coefficients

Recalling (6.4) and (3.8), we have

$$\begin{aligned} Ez^{L_{\infty }}= & {} \beta (\lambda -\lambda X(z))\cdot K^{\circ }(z), \end{aligned}$$
(8.2)
$$\begin{aligned} K^{\circ }(z)= & {} \frac{1-\rho }{1-\rho \beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)}. \end{aligned}$$
(8.3)

Recall that \(P\{L_{\infty }=j\}\) is the coefficient of \(z^j\) in the generating function \(Ez^{L_{\infty }}\). To study the tail asymptotic property of the coefficient sequence, it is convenient to establish the correspondence between the asymptotic behaviour of the generating function at its singularity \(z=1\) and the tail asymptotic behaviour of the coefficient sequence. For this purpose, the following facts are useful. Readers may refer to pages 380–392, and to Theorem VI.1 and Theorem VI.3(ii), of Flajolet and Sedgewick [14] for details of parts (ii) and (iii) below, respectively. For the concepts of the \(\Delta \)-domain and \(\Delta \)-analyticity, see Definition C.1 in Appendix C.

Lemma 8.2

(Theorem VI.1 and Theorem VI.3(ii) in [14])

(i) Let \(\mathcal {P}(s)\) be any polynomial of degree \(k\ge 0\), i.e. \(\mathcal {P}(s)=p_0+p_1 s +p_2 s^2 +\cdots +p_{k}s^{k}\). Assume that \(F(z)=\sum _{n=0}^{\infty }f_n z^{n}=\mathcal {P}(1-z)\). Then, there exists some \(n_0\) (\(=k+1\)) such that \(f_n\equiv 0\) for \(n\ge n_0\). A special case is \(F(z)=(1-z)^{m}\), where m is a non-negative integer.

(ii) Assume that \(F(z)=\sum _{n=0}^{\infty }f_n z^{n}=(1-z)^{\sigma }\), where \(\sigma \ne 0,1,2,\cdots \). Then, \(f_n=n^{-\sigma -1}/\Gamma (-\sigma )\Big (1+\frac{\sigma (\sigma +1)}{2}n^{-1} +o(n^{-1})\Big )\) as \(n\rightarrow \infty \).

(iii) Assume that \(F(z)=\sum _{n=0}^{\infty }f_n z^{n}\) is \(\Delta \)-analytic at 1, and \(F(z)=o((1-z)^{\sigma })\) as z goes to 1 within the \(\Delta \)-domain at 1, where \(\sigma \ne 0,1,2,\cdots \). Then, \(f_n=o(n^{-\sigma -1})\) as \(n\rightarrow \infty \).

Part (i) of the above lemma reveals that, for the purpose of asymptotic analysis, there is no need to specify the coefficients \(f_n\) of any polynomial \(\mathcal {P}(1-z)\), because they eventually become zero; we will therefore pay no attention to such polynomial terms in the later asymptotic analysis.
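Part (ii) can also be checked numerically: for \(F(z)=(1-z)^{\sigma }\) the exact coefficient is \(f_n=(-1)^n\binom{\sigma }{n}=\prod _{k=1}^{n}\frac{k-1-\sigma }{k}\). A small Python sketch (with \(\sigma =-1/2\) as an arbitrary non-integer value):

```python
import math

sigma = -0.5          # arbitrary non-integer exponent, sigma != 0, 1, 2, ...

def coeff_exact(n):
    # f_n = (-1)^n binom(sigma, n) = prod_{k=1}^{n} (k - 1 - sigma) / k
    f = 1.0
    for k in range(1, n + 1):
        f *= (k - 1 - sigma) / k
    return f

def coeff_asym(n):
    # two-term expansion from Lemma 8.2(ii)
    return n ** (-sigma - 1) / math.gamma(-sigma) * (1 + sigma * (sigma + 1) / (2 * n))
```

The relative error of `coeff_asym` is \(O(n^{-2})\), consistent with the \(o(n^{-1})\) remainder in the lemma.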

8.3 Example 1: Pareto service time and geometric batch size

In this subsection, we demonstrate the procedure by an example for the case that the tail probability of the service time is heavier than the tail probability of the batch size. Specifically, assume that the service time B has a Pareto distribution with non-integer shape parameter \(a>1\) and scale parameter \(b>0\), i.e. \(B(x)=1-\left( 1+ x/b\right) ^{-a}\) for \(x\ge 0\), and \(\beta _1=E(B)=b/(a-1)\). Suppose that the batch size X has a geometric distribution \(P\{X=n\}=(1-q) q^{n-1}\), \(0< q<1\), \(n=1,2,\ldots \), with mean \(\chi _1=E(X)= 1/(1-q)\) and the GF \(X(z)= (1-q)z/(1-qz)\).

Let \(\mathbb {C}\) denote the complex plane and \(\mathbb {D}=\mathbb {C}\backslash (-\infty ,0]\) denote the complex plane \(\mathbb {C}\) with the negative real axis \((-\infty ,0]\) (the branch cut) removed. By the result of Goovaerts et al. (see (3.15) in [17]), the LSTs \(\beta (s)\) and \(\beta ^{(e)}(s)\) can be defined by analytical continuation for \(s\in \mathbb {D}\) as follows:

$$\begin{aligned} \beta (s)= & {} 1+\sum _{n=1}^{\infty }\frac{(-bs)^n}{(a-1)\cdots (a-n)} -\Gamma (1-a)(bs)^a e^{bs}, \end{aligned}$$
(8.4)
$$\begin{aligned} \beta ^{(e)}(s)=\frac{1-\beta (s)}{\beta _1 s}= & {} 1+\sum _{n=1}^{\infty }\frac{(-bs)^n}{(a-2)\cdots (a-1-n)} -\Gamma (2-a)(bs)^{a-1} e^{bs}.\nonumber \\ \end{aligned}$$
(8.5)

Using the fact that \(e^{bs} =1 + bs +o(s)\) as \(s\rightarrow 0\), we have

$$\begin{aligned} \beta (s)= & {} 1+s\mathcal {R}(s) -\Gamma (1-a) (bs)^a + o(s^{a}), \end{aligned}$$
(8.6)
$$\begin{aligned} \beta ^{(e)}(s)= & {} 1+s\mathcal {Q}(s) -\Gamma (2-a)\big [(bs)^{a-1}+(bs)^{a}\big ] + o(s^{a}), \end{aligned}$$
(8.7)

where \(\mathcal {R}(s)\) and \(\mathcal {Q}(s)\) are two polynomials of degree \(\lfloor a\rfloor -1\) (the symbol \(\lfloor a\rfloor \) denotes the integer part of a).
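The continuation formula (8.4) can be sanity-checked numerically: for real \(s>0\), the truncated series plus the \(\Gamma (1-a)(bs)^a e^{bs}\) correction should agree with a Monte-Carlo estimate of \(E(e^{-sB})\) obtained by inverse-CDF sampling of the Pareto law. An illustrative sketch (the parameter values \(a=1.5\), \(b=1\) are arbitrary, and the truncation level is a numerical choice):

```python
import math
import random

a, b = 1.5, 1.0       # arbitrary non-integer shape a > 1 and scale b

def beta_series(s, terms=60):
    # analytic continuation (8.4), truncated; converges fast for moderate b*s
    total, prod = 1.0, 1.0
    for n in range(1, terms + 1):
        prod *= (a - n)                    # (a-1)(a-2)...(a-n)
        total += (-b * s) ** n / prod
    return total - math.gamma(1 - a) * (b * s) ** a * math.exp(b * s)

def beta_mc(s, n_samples=200_000, seed=1):
    # E(e^{-sB}) by Monte Carlo, with B = b(U^{-1/a} - 1), U uniform on (0, 1]
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        u = 1.0 - rng.random()             # in (0, 1], avoids u = 0
        B = b * (u ** (-1.0 / a) - 1.0)    # inverse CDF of the Pareto law
        acc += math.exp(-s * B)
    return acc / n_samples
```

In particular, `beta_series(s)` tends to 1 as \(s\rightarrow 0\) with slope \(-\beta _1=-b/(a-1)\), matching the first terms of the expansion (8.6).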

It can be proved that \(Ez^{L_{\infty }}\), given in (8.2) and (8.3), can be analytically continued to \(\mathbb {C}\backslash [1,\infty )\) (see Appendix C for details). In addition,

$$\begin{aligned} 1-X(z)= & {} \frac{(1-z)/(1-q)}{1+q(1-z)/(1-q)}\ =\ \frac{1-z}{1-q}\left[ 1 + \sum _{k=1}^{\infty }\left( -q\cdot \frac{1-z}{1-q} \right) ^k\right] ,\nonumber \\ \end{aligned}$$
(8.8)
$$\begin{aligned} \big (1-X(z)\big )^a= & {} \left( \frac{1-z}{1-q}\right) ^a +o((1-z)^{a}),\quad \text{ as } z\uparrow 1,\nonumber \\ \end{aligned}$$
(8.9)
$$\begin{aligned} \big (1-X(z)\big )^{a-1}= & {} \left( \frac{1-z}{1-q}\right) ^{a-1} - (a-1)q \left( \frac{1-z}{1-q}\right) ^{a} +o((1-z)^{a}),\quad \text{ as } z\uparrow 1,\nonumber \\ \end{aligned}$$
(8.10)

where the fact that \((1-x)^{b}=1-bx +o(x)\) as \(x\rightarrow 0\) was used.
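Expansions (8.9) and (8.10) are easy to check numerically at a point close to \(z=1\) (an illustrative Python sketch; the values \(q=0.5\) and \(a=1.5\) are arbitrary):

```python
q = 0.5               # arbitrary geometric parameter, 0 < q < 1
a = 1.5               # arbitrary non-integer shape parameter, a > 1
z = 1.0 - 1e-3        # a point close to the singularity z = 1

one_minus_X = 1.0 - (1.0 - q) * z / (1.0 - q * z)   # 1 - X(z), geometric batch

# (8.9)-(8.10) with w = (1-z)/(1-q):
#   (1-X(z))^a     = w^a + o((1-z)^a)
#   (1-X(z))^{a-1} = w^{a-1} - (a-1) q w^a + o((1-z)^a)
w = (1.0 - z) / (1.0 - q)
approx = w ** (a - 1.0) - (a - 1.0) * q * w ** a
```

At \(1-z=10^{-3}\), the discrepancies are far below the \(o((1-z)^{a})\) remainder.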

By (8.7), (8.8), (8.9), and (8.10), we have

$$\begin{aligned} \beta ^{(e)}(\lambda -\lambda X(z))= & {} 1+(1-z)\mathcal {Q}_1(1-z) -\Gamma (2-a) \left( b\lambda \cdot \frac{1-z}{1-q}\right) ^{a-1} \nonumber \\{} & {} +h_1\cdot (1-z)^{a}+ o((1-z)^{a}),\quad \text{ as } z\uparrow 1, \end{aligned}$$
(8.11)

where \(\mathcal {Q}_1(s)\) is a polynomial of degree \(\lfloor a\rfloor -1\) and \(h_1\) is a real number.

By the definition (see Fact D) of \(X^{(de)}(z)\) and (8.8),

$$\begin{aligned} X^{(de)}(z)=\frac{1-X(z)}{\chi _1(1-z)}= & {} 1 + \sum _{k=1}^{\infty }\left( -q\cdot \frac{1-z}{1-q} \right) ^k. \end{aligned}$$
(8.12)
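For the geometric batch, the series (8.12) sums to the closed form \(X^{(de)}(z)=(1-q)/(1-qz)\); that is, \(X^{(de)}\) is itself geometrically distributed on \(\{0,1,2,\dots \}\) with \(P\{X^{(de)}=j\}=(1-q)q^{j}\). A quick check (illustrative Python):

```python
q = 0.5               # arbitrary geometric parameter, 0 < q < 1

def X_de(z):
    # X^{(de)}(z) = (1 - X(z)) / (chi_1 (1 - z)),
    # with X(z) = (1-q) z / (1 - q z) and chi_1 = 1/(1-q)
    chi1 = 1.0 / (1.0 - q)
    Xz = (1.0 - q) * z / (1.0 - q * z)
    return (1.0 - Xz) / (chi1 * (1.0 - z))

# closed form: X^{(de)}(z) = (1-q)/(1-qz), the GF of a geometric law on {0,1,2,...}
```

This agreement holds for any \(z\ne 1\) in the unit disc.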

It follows from (8.11) and (8.12) that

$$\begin{aligned} \beta ^{(e)}(\lambda -\lambda X(z))X^{(de)}(z)= & {} 1+(1-z)\mathcal {Q}_2(1-z) -\Gamma (2-a) \left( b\lambda \cdot \frac{1-z}{1-q}\right) ^{a-1} \nonumber \\{} & {} +h_2\cdot (1-z)^{a} + o((1-z)^{a}),\quad \text{ as } z\uparrow 1, \end{aligned}$$
(8.13)

where \(\mathcal {Q}_2(s)\) is a polynomial of degree \(\lfloor a\rfloor -1\) and \(h_2\) is a real number.

Substituting (8.13) into (8.3), we get, as \(z\uparrow 1\),

$$\begin{aligned} K^{\circ }(z)= & {} \frac{1}{1+\frac{\rho }{1-\rho }\left[ 1-\beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)\right] }\nonumber \\= & {} \frac{1}{1-(1-z)\mathcal {Q}_3(1-z) +\frac{\rho }{1-\rho }\Gamma (2-a)\left( b\lambda \cdot \frac{1-z}{1-q}\right) ^{a-1} - h_3\cdot (1-z)^{a} + o((1-z)^{a})}, \nonumber \\ \end{aligned}$$

where \(\mathcal {Q}_3(s)\) is a polynomial of degree \(\lfloor a\rfloor -1\) and \(h_3\) is a real number.

Applying \(1/(1-x) =1 + x + x^2 +\cdots \) to (8.14), we obtain

$$\begin{aligned} K^{\circ }(z)= & {} 1+(1-z)\mathcal {Q}_4(1-z) -\frac{\rho }{1-\rho }\Gamma (2-a)\left( b\lambda \cdot \frac{1-z}{1-q}\right) ^{a-1} + h_4\cdot (1-z)^{a} \nonumber \\{} & {} +\sum _{k\in S} g_{k}\cdot (1-z)^{k(a-1)} + o((1-z)^{a}),\quad \text{ as } z\uparrow 1, \end{aligned}$$
(8.15)

where \(\mathcal {Q}_4(s)\) is a polynomial of degree \(\lfloor a\rfloor -1\), \(S=\{k\ |\ k\ge 2,\ k(a-1)\le a\}\), \(h_4\) and \(g_k\)’s are real numbers.

Similarly, by (8.6), (8.8) and (8.9), we have

$$\begin{aligned} \beta (\lambda -\lambda X(z))= & {} 1 + \mathcal {R}_1(1-z) +h_5\cdot (1-z)^{a} + o((1-z)^{a}),\quad \text{ as } z\uparrow 1,\nonumber \\ \end{aligned}$$
(8.16)

where \(\mathcal {R}_1(s)\) is a polynomial of degree \(\lfloor a\rfloor -1\) and \(h_5\) is a real number.

It follows from (8.2), (8.15) and (8.16) that

$$\begin{aligned} Ez^{L_{\infty }}= & {} 1+\mathcal {R}_2(1-z) -\frac{\rho }{1-\rho }\Gamma (2-a)\left( b\lambda \cdot \frac{1-z}{1-q}\right) ^{a-1} + r_2\cdot (1-z)^{a} \nonumber \\{} & {} +\sum _{k\in S} g_{k}\cdot (1-z)^{k(a-1)} + o((1-z)^{a}),\quad \text{ as } z\uparrow 1, \end{aligned}$$
(8.17)

where \(\mathcal {R}_2(s)\) is a polynomial of degree \(\lfloor a\rfloor -1\) and \(r_2\) is a real number.

Since the analyticity of \(Ez^{L_{\infty }}\) in \(\mathbb {C}\backslash [1,\infty )\) implies the \(\Delta \)-analyticity of \(Ez^{L_{\infty }}\) at 1, we immediately obtain the following lemma by applying properties (i)–(iii) of Lemma 8.2 to (8.17), in which the facts \(\rho =\lambda \beta _1 \chi _1=\lambda \frac{b}{a-1} \frac{1}{1-q}\) and \(-\frac{\rho }{1-\rho }\frac{\Gamma (2-a)}{\Gamma (1-a)} \big (\frac{b\lambda }{1-q}\big )^{a-1}= \frac{1}{1-\rho }\big (\frac{b\lambda }{1-q}\big )^{a}\) are used.

Lemma 8.3

For the \(M^X/G/1\) model defined in Example 1, we have, as \(j\rightarrow \infty \),

$$\begin{aligned} P\{L_{\infty }=j\}= & {} \frac{1}{1-\rho }\left( \frac{b\lambda }{1-q}\right) ^{a} j^{-a}+ l_1 j^{-a-1} +\sum _{k\in S} l_{k}j^{-k(a-1)-1} + o(j^{-a-1}),\nonumber \\ \end{aligned}$$
(8.18)

where \(S=\{ k \vert k \ge 2, k(a-1) \le a \}\), and \(l_k\)’s are real numbers.

The following proposition is a direct consequence of applying Lemma 8.1 to (8.18).

Proposition 8.1

For the \(M^X/G/1\) model defined in Example 1, \(P\{L_{\infty }=j\}\) is ultimately decreasing.

Remark 8.1

(i) Let \(q\rightarrow 0\). The batch size X then has the degenerate distribution \(P\{X= 1\}=1\). So, for the M/G/1 queue with Pareto service, \(P\{L_{\infty }=j\}\) is ultimately decreasing. This result can also be verified directly from an exact formula for \(P\{L_{\infty }=j\}\) (see (29) and (39) in Ramsay [33]). (ii) For the M/G/1 queue with Pareto service, \(P\{L_{\infty }=j\}\sim \frac{(b\lambda )^{a} }{1-\rho } j^{-a}\) (by (8.18) with \(q= 0\)); it follows that \(P\{L_{\infty }>j\}\sim \frac{(b\lambda )^{a}}{(1-\rho )(a-1)} j^{-a+1}=\frac{\rho }{1-\rho } (j/b\lambda )^{-a+1}\sim \frac{1}{1-\rho }P\{B^{(e)}>j/\lambda \}\) as \(j\rightarrow \infty \), which is consistent with (1.4) in [6].

Further, by Theorem 6.1, \(P\{L_{\mu }>j\}=P\{L_{\infty }>j\}+ c\cdot j^{-a}(1+o(1))\), which along with (8.18) leads to \(P\{L_{\mu }=j\}=P\{L_{\infty }=j\}+ o(j^{-a})=\frac{1}{1-\rho }\left( \frac{b\lambda }{1-q}\right) ^{a} j^{-a}+ o(j^{-a})\).

8.4 Example 2: Pareto service time and batch size heavier than service time

In this subsection, we apply the same procedure, used in the previous subsection, to another example, for the case in which the tail of the batch size distribution is heavier than that of the service time distribution. Specifically, we assume that the service time B has a Pareto distribution with shape parameter \(1<a<2\) and scale parameter \(b>0\), i.e. \(B(x)=1-\left( 1+ x/b\right) ^{-a}\) for \(x\ge 0\), and \(\beta _1=E(B)=b/(a-1)\). For the batch size \(X=X_0+1\), we assume that \(X_0(z)=E(z^{X_0})=\tau (1-z)\), where \(\tau (s)\) is the LST of the distribution of another Pareto rv T with \(P\{T\le t\}=1-\left( 1+ t/v\right) ^{-u}\) for \(t\ge 0\), \(v>0\) and \(1<u<2\). So, \(\tau _1= E(T)= v/(u-1)\). In addition, we assume that \(u<a\). Clearly, \(X_0\) can be regarded as the number of Poisson arrivals at rate 1 within a random time T. Since \(P\{X> j\}\sim P\{X_0> j\}\sim P\{T> j\}\) as \(j\rightarrow \infty \) (see Lemma A.1), the condition \(u<a\) implies that X has a tail heavier than B. Moreover, \(E(X_0)= E(T)=\tau _1\), so \(\chi _1= E(X)=1+ \tau _1\).

In a way similar to obtaining (8.4), and using \(e^{vs} =1 + vs +o(s)\), we have, for \(s\in \mathbb {D}\), as \(s\rightarrow 0\),

$$\begin{aligned} \tau (s)= & {} 1- \tau _1 s -w_1 s^2-\Gamma (1-u)\big [(vs)^u + (vs)^{u+1}\big ]+ o(s^{u+1}), \end{aligned}$$
(8.19)

where \(w_1\) is a real number.

It can be proved that \(Ez^{L_{\infty }}\) given in (8.2) and (8.3) can be analytically continued to \(\mathbb {C}\backslash [1,\infty )\) (a detailed proof is given in Appendix C).

Note that \(1-X(z)=1-z\tau (1-z)=1-\tau (1-z)+(1-z)\tau (1-z)\). Therefore, as \(z\uparrow 1\),

$$\begin{aligned}{} & {} 1-X(z) = \chi _1(1-z)\big [1+w_2(1-z) +(v^u/\chi _1)\Gamma (1-u) (1-z)^{u-1} \nonumber \\{} & {} \quad +w_3(1-z)^{u}+o((1-z)^{u})\big ], \end{aligned}$$
(8.20)

where \(w_2\) and \(w_3\) are real numbers. It follows that

$$\begin{aligned} \big (1-X(z)\big )^a= & {} O((1-z)^{a})=o((1-z)^{u}), \end{aligned}$$
(8.21)
$$\begin{aligned} \big (1-X(z)\big )^{a-1}= & {} \sum _{k\in S_1} g_{k}\cdot (1-z)^{a-1+k(u-1)} +o((1-z)^{u}), \end{aligned}$$
(8.22)

where \(S_1=\{k\ |\ k\ge 0,\ a-1+k(u-1)\le u\}\), and \(g_k\)’s are real numbers.

Substituting (8.20), (8.21), and (8.22) into (8.7) leads to

$$\begin{aligned}{} & {} \beta ^{(e)}(\lambda -\lambda X(z)) \nonumber \\{} & {} \quad = 1 + w_4(1-z) +\sum _{k\in S_1} e_{k}\cdot (1-z)^{a-1+k(u-1)} +o((1-z)^{u}), \end{aligned}$$
(8.23)

where \(w_4\) and \(e_k\)’s are real numbers. In addition, by (8.20), we have

$$\begin{aligned}{} & {} X^{(de)}(z)=1+w_2(1-z) +(v^u/\chi _1)\Gamma (1-u) (1-z)^{u-1} \nonumber \\{} & {} \quad + w_3(1-z)^{u}+ o((1-z)^{u}). \end{aligned}$$
(8.24)

It follows from (8.23) and (8.24) that

$$\begin{aligned}{} & {} \beta ^{(e)}(\lambda -\lambda X(z))X^{(de)}(z) = 1+w_5(1-z) \nonumber \\{} & {} \quad +(v^u/\chi _1)\Gamma (1-u) (1-z)^{u-1}+ w_3(1-z)^{u}\nonumber \\{} & {} \quad +\sum _{k\in S_1} r_{k}\cdot (1-z)^{a-1+k(u-1)} + o((1-z)^{u}), \end{aligned}$$
(8.25)

where \(w_5\) and \(r_k\)’s are real numbers. In a way similar to obtaining (8.15), we get

$$\begin{aligned} K^{\circ }(z)= & {} 1+w_6(1-z) +\frac{\rho }{1-\rho }(v^u/\chi _1)\Gamma (1-u) (1-z)^{u-1}+ w_7(1-z)^{u}\nonumber \\{} & {} +\sum _{(k_1,k_2)\in S_2}g_{k_1,k_2}\cdot (1-z)^{k_1(a-1)+k_2(u-1)} +\sum _{k\in S_3} h_{k}\cdot (1-z)^{k(u-1)}\nonumber \\{} & {} + o((1-z)^{u}),\quad \text{ as } z\uparrow 1, \end{aligned}$$
(8.26)

where \(S_2=\{(k_1,k_2)\ |\ k_1\ge 1,\ k_2\ge 0,\ k_1(a-1)+k_2(u-1)<u\}\), \(S_3=\{k\ |\ k\ge 2,\ k(u-1)<u\}\) and \(g_{k_1,k_2}\)’s and \(h_k\)’s are real numbers.

Substituting (8.21) into (8.4) gives

$$\begin{aligned} \beta (\lambda -\lambda X(z))= & {} 1 -\beta _1(1-z) + o((1-z)^{u}),\quad \text{ as } z\uparrow 1. \end{aligned}$$
(8.27)

It follows from (8.2), (8.26) and (8.27) that

$$\begin{aligned} Ez^{L_{\infty }}= & {} 1+w_8(1-z) +\frac{\rho }{1-\rho }(v^u/\chi _1)\Gamma (1-u) (1-z)^{u-1}+ w_9(1-z)^{u}\nonumber \\{} & {} +\sum _{(k_1,k_2)\in S_2}g^*_{k_1,k_2}\cdot (1-z)^{k_1(a-1)+k_2(u-1)} +\sum _{k\in S_3} h^*_{k}\cdot (1-z)^{k(u-1)}\nonumber \\{} & {} + o((1-z)^{u}),\quad \text{ as } z\uparrow 1, \end{aligned}$$
(8.28)

where \(w_8\), \(w_9\), \(g^*_{k_1,k_2}\)’s, and \(h^*_k\)’s are real numbers.

Once again, the analyticity of \(Ez^{L_{\infty }}\) in \(\mathbb {C}\backslash [1,\infty )\) implies the \(\Delta \)-analyticity of \(Ez^{L_{\infty }}\) at 1, so we immediately obtain the following lemma by applying Lemma 8.2 to (8.28).

Lemma 8.4

For the \(M^X/G/1\) queue defined in Example 2, we have

$$\begin{aligned} P\{L_{\infty }=j\}= & {} \frac{\lambda \beta _1 v^u}{1-\rho } j^{-u}+ w_{10}j^{-u-1} +\sum _{(k_1,k_2)\in S_2} l_{k_1,k_2}\cdot j^{-k_1(a-1)-k_2(u-1)-1}\nonumber \\{} & {} +\sum _{k\in S_3}l_{k}\cdot j^{-k(u-1)-1} + o(j^{-u-1}),\quad \text{ as } j\rightarrow \infty , \end{aligned}$$
(8.29)

where we have used the fact \(\frac{\rho }{1-\rho }\frac{v^u}{\chi _1} j^{-u}=\frac{\lambda \beta _1 v^u}{1-\rho } j^{-u}\), and \(w_{10}\), \(l_{k_1,k_2}\)’s and \(l_k\)’s are real numbers.

The following proposition is a direct consequence of applying Lemma 8.1 to (8.29).

Proposition 8.2

For the \(M^X/G/1\) model defined in Example 2, \(P\{L_{\infty }=j\}\) is ultimately decreasing.

Further, by Theorem 6.1, \(P\{L_{\mu }>j\}=P\{L_{\infty }>j\}+ c\cdot j^{-u}(1+o(1))\), which along with (8.29) leads to \(P\{L_{\mu }=j\}=P\{L_{\infty }=j\}+ o(j^{-u})=\frac{\lambda \beta _1 v^u}{1-\rho } j^{-u}+ o(j^{-u})\).

9 Concluding remarks

In this paper, using the exhaustive version of stochastic decompositions, we proved, through a step-by-step process, a generalized equivalence theorem for the batch-arrival \(M^X/G/1\) retrial queue ((6.11) in Theorem 6.1) and a refined tail asymptotic result ((6.12) in Theorem 6.1) sharpening the equivalence theorem.

The monotonicity condition imposed in Lemma 6.1 and Theorem 6.1 is a condition commonly required by a “standard” Tauberian theorem, or in Heaviside operational calculus. Such information is in general not available, since the probability sequence of interest is unknown, and one cannot expect to verify this monotonicity in full generality. Instead, the property can be shown to hold for specific cases through non-trivial efforts, as illustrated in the previous section.

One may notice that the asymptotic properties given in Lemmas 8.3 and 8.4 are for the local probability \(P\{L_{\infty }=j\}\) (also \(P\{L_{\mu }=j\}\)), which is stronger than the corresponding result for the tail probability \(P\{L_{\infty }>j\}\) (also \(P\{L_{\mu }>j\}\)) in these two special cases. It is of interest to investigate this local asymptotic behaviour for more general cases in separate work.

We expect that the exhaustive version of stochastic decompositions can be applied to other queueing models that admit a closed-form solution for the transform of a target variable, for the purpose of characterizing its tail asymptotic behaviour: for example, retrial queues with a linear retrial control policy (a generalization of the constant retrial policy), queueing models with a gated control, queues with unreliable servers, and polling systems, among others.