1 Introduction

In this paper, we revisit a single-server retrial queueing system with two orbits and no waiting room, which was studied by Avrachenkov et al. (2014). The analysis in Avrachenkov et al. (2014) is based on the solution of a Riemann–Hilbert boundary value problem, while our focus is on exact tail asymptotics for the joint stationary distribution of the two orbits with a busy or idle server, using the kernel method, a different method that does not require a full determination of the unknown generating function. In this system, there are two independent exogenous Poisson streams (representing two types of customers) flowing into the server, and the server can hold at most one customer at a time. Upon arrival, if a type i customer finds the server busy, it joins orbit i and waits to retry at an exponential rate specific to type i customers. Such a queueing system can serve as a model for two competing job streams in a carrier sensing multiple access system, and it has an application in local area computer networks (LANs), as explained in Avrachenkov et al. (2014).

Retrial queueing systems have been attracting researchers’ attention for many years (e.g., Artalejo and Gómez-Corral 2008; Artalejo 2010; Falin 1990; Yang and Templeton 1987 and references therein). Much of the previous work places the emphasis on performance measures, such as the mean orbit size, the average number of customers in the system, and the average waiting time, among others. We also notice that stationary tail asymptotic analysis has recently become one of the central research topics for retrial queues, due not only to its own importance, but also to its applications in approximation and performance bounds. For example, Shang et al. (2006) proved that the stationary queue length of the M/G/1 retrial queue has a subexponential tail if the queue length of the corresponding M/G/1 queue has a tail of the same type. Kim et al. (2010) extended the study on the M/G/1 retrial queue by Kim et al. (2007) to a MAP/G/1 retrial queue, and obtained tail asymptotics for the queue size distribution. Tail asymptotic properties were obtained for the M/M/m retrial queue by Kim and Kim (2012) and Kim et al. (2012). By adopting matrix-analytic theory and the censoring technique, Liu et al. (2012) studied the M/M/c retrial queue with non-persistent customers and obtained tail asymptotics for the joint stationary distribution of the number of retrial customers in the orbit and the number of busy servers.

Most of the studies on retrial queues assumed a single type of customers flowing into the system, and references on retrial systems with multi-class customers are quite limited. The model studied by Avrachenkov et al. (2014) and again in this paper is such a system. This model is an example of the two-dimensional QBD process (for example, see Ozawa 2013), or the random walk in the quarter plane modulated by a two-state Markov chain (another example of retrial queues having this structure is Li and Zhao 2005). In Avrachenkov et al. (2014), the authors showed how this modulated model can be converted to a scalar fundamental form, which can be solved in terms of a Riemann–Hilbert boundary value problem (BVP) due to the special structure of this system. Motivated by this, we extend their research on this model by considering the tail asymptotic behaviour of the stationary joint probability distribution of the two orbits with either an idle or a busy server, using the kernel method, which does not require a full determination of the unknown generating function. For more details about the kernel method, readers may refer to Fayolle et al. (1999), Li and Zhao (2009, 2011, 2012), and Li et al. (2013). We point out that tail asymptotic properties for Markov modulated or more general block-structured random walks have also been studied by using other methods, for example in Miyazawa and Zhao (2004), Sakuma et al. (2006), Kobayashi et al. (2010), and Miyazawa (2015).

It is worthwhile to mention that many typical queues have a linear retrial rate, while the retrial rate for the model in this paper is constant, independent of the number of customers in the orbit (but dependent on the type i). Such a system can model the situation in which only the customer at the head of the line is allowed to retry. This type of retrial queue is also very important and has many interesting applications. There are numerous studies on this type of retrial queue, including Fayolle (1986), Farahmand (1990), Choi et al. (1993), Gómez-Corral (1999), Artalejo et al. (2001), Breuer et al. (2002), and Kim et al. (2014). Readers may also refer to the survey papers (Gómez-Corral 2006; Avram et al. 2014) for more information.

The main contributions of this paper include: (1) the characterization of the tail asymptotic properties in the joint distribution for a large queue i (\(i=1, 2\)) with either an idle or a busy server; a total of three types of properties are identified (see Theorems 6.2, 6.3, and 6.4 for the case of a busy server, and Theorems 6.5 and 6.6 for the case of an idle server); and (2) an illustration of how to convert a matrix-form fundamental form for the Markov modulated random walk into a (scalar) functional form corresponding to one state of the chain, either through a censored Markov chain or by solving the matrix-form fundamental form (see the remarks in the last section), so that it can then be studied by the kernel method.

The rest of the paper is organized as follows: Sect. 2 provides the model description; Sect. 3 identifies the censored random walk in the quarter plane; dominant singularities of the unknown generating function are located in Sect. 4, while the detailed asymptotic property of the unknown function at its dominant singularity is discussed in Sect. 5; exact tail asymptotic properties for the stationary probabilities of the system, which are our main results, and their detailed proofs are presented in Sect. 6. Concluding remarks are made in the final section.

2 Model description

In this paper, we consider a single-server queueing system with two independent Poisson streams of arrivals and two retrial orbits, the same system studied in Avrachenkov et al. (2014). Following Avrachenkov et al. (2014), the two arrival rates are denoted by \(\lambda _i\), \(i=1,2\), with \(\lambda =\lambda _1+\lambda _2\). The server can hold at most one customer at a time (there is no waiting room). This means that when the server is busy, an arriving type i customer joins orbit i, which has infinite capacity. Retrials from orbit i for service are characterized by a Poisson process with constant rate \(\mu _i\). The service time of each customer is independent of its type and follows an exponential distribution with rate \(\mu \). The retrial mechanism imposed here models the situation in which only the customer at the head of the line (orbit) is allowed to retry.

Fig. 1 Matrix transition diagram

Let I(t) be the state of the server (either idle or busy), that is, the number of customers in the server, and let \(Q_i(t)\) denote the number of customers in orbit i at time t for \(i=1,2\). Then \(X(t)=\{(Q_1(t), Q_2(t), I(t)):t\ge 0\}\) is a continuous time Markov chain on the state space \(\{0,1,\ldots \}\times \{0,1,\ldots \}\times \{0,1\}\). From Avrachenkov et al. (2014), we know that the system is stable if and only if \(\lambda (\lambda _1+\mu _1)<\mu \mu _1\) and \(\lambda (\lambda _2+\mu _2)<\mu \mu _2\). Under the stability condition, the unique stationary probability vector for the system is denoted by \(\Pi _{m,n}=(\pi _{m,n}(0),\pi _{m,n}(1))\) for \(m,n=0,1, \ldots \). For the purpose of finding the stationary distribution, we consider the corresponding discrete time Markov chain obtained through the uniformization technique. Without loss of generality, we assume that \(\lambda +\mu +\mu _1+\mu _2=1\). For this discrete time chain, a transition diagram, partitioned according to the state of the server, is depicted in Fig. 1, where

$$\begin{aligned} A_{1,0}= & {} A_{1,0}^{(1)}=A_{1,0}^{(2)}=A_{1,0}^{(0)}= \left( \begin{array}{cccc} 0 &{} 0 \\ 0 &{} \lambda _1 \end{array}\right) ,\\ A_{0,1}= & {} A_{0,1}^{(1)}=A_{0,1}^{(2)}=A_{0,1}^{(0)}= \left( \begin{array}{cccc} 0 &{} 0 \\ 0 &{} \lambda _2 \end{array}\right) ,\\ A_{-1,0}= & {} A_{-1,0}^{(1)}= \left( \begin{array}{cccc} 0 &{} \mu _1 \\ 0 &{} 0 \end{array}\right) ,\\ A_{0,-1}= & {} A_{0,-1}^{(2)}= \left( \begin{array}{cccc} 0 &{} \mu _2 \\ 0 &{} 0 \end{array}\right) ,\\ {A_{0,0}}= & {} \left( \begin{array}{cccc} \mu &{} \lambda \\ \mu &{} \mu _1+\mu _2 \end{array}\right) ,\\ {A_{0,0}^{(1)}}= & {} \left( \begin{array}{cccc} \mu +\mu _2 &{} \lambda \\ \mu &{} \mu _1+\mu _2 \end{array}\right) ,\\ {A_{0,0}^{(2)}}= & {} \left( \begin{array}{cccc} \mu +\mu _1 &{} \lambda \\ \mu &{} \mu _1+\mu _2 \end{array}\right) ,\\ {A_{0,0}^{(0)}}= & {} \left( \begin{array}{cccc} \mu +\mu _1+\mu _2 &{} \lambda \\ \mu &{} \mu _1+\mu _2 \end{array}\right) . \end{aligned}$$
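For readers who wish to verify these conditions numerically, the following is a minimal Python sketch (it is not part of the original analysis, and the rate values are illustrative assumptions only) that normalizes a set of rates so that \(\lambda +\mu +\mu _1+\mu _2=1\) and checks the stability condition.

```python
# Minimal sketch: normalize assumed rates (uniformization) and test stability.
lam1, lam2, mu, mu1, mu2 = 0.5, 0.5, 5.0, 2.0, 2.0   # illustrative raw rates

total = lam1 + lam2 + mu + mu1 + mu2                  # uniformization constant
lam1, lam2, mu, mu1, mu2 = (r / total for r in (lam1, lam2, mu, mu1, mu2))
lam = lam1 + lam2                                     # now lam + mu + mu1 + mu2 = 1

# stability condition of Avrachenkov et al. (2014)
stable = lam * (lam1 + mu1) < mu * mu1 and lam * (lam2 + mu2) < mu * mu2
print(f"normalized: lambda={lam:.2f}, mu={mu:.2f}, mu1={mu1:.2f}, mu2={mu2:.2f}")
print("stable:", stable)
```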

We define the probability generating function (PGF) \(P^{(k)}(x,y)\) for the stationary probabilities \(\pi _{m,n}(k)\) as

$$\begin{aligned} P^{(k)}(x,y)=\sum _{m=0}^{\infty }\sum _{n=0}^{\infty }\pi _{m,n}(k)x^{m}y^{n},\quad |x|\le 1,|y|\le 1,\quad k=0,1, \end{aligned}$$

and denote

$$\begin{aligned} P(x,y)=\left( P^{(0)}(x,y),P^{(1)}(x,y)\right) . \end{aligned}$$

Following the idea in Fayolle et al. (1999), we can obtain the (matrix-form) fundamental form for the Markov modulated random walk in the quarter plane:

$$\begin{aligned} P(x,y)H(x,y)=P(x,0)H_{1}(x,y)+P(0,y)H_{2}(x,y)+\Pi _{0,0}H_{0}(x,y), \end{aligned}$$
(2.1)

where

$$\begin{aligned} H(x,y)= & {} -\bar{h}(x,y),\\ H_1(x,y)= & {} -\bar{h}(x,y)+\bar{h}_1(x,y)y, \\ H_2(x,y)= & {} -\bar{h}(x,y)+\bar{h}_2(x,y)x,\\ H_0(x,y)= & {} \bar{h}_0(x,y)xy+\bar{h}(x,y)-\bar{h}_1(x,y)y-\bar{h}_2(x,y)x \end{aligned}$$

with

$$\begin{aligned} \bar{h}(x,y)= & {} xy\left( \sum _{i=-1}^1\sum _{j=-1}^1 A_{i,j}x^{i}y^{j}-I\right) ,\\ \bar{h}_1(x,y)= & {} x\left( \sum _{i=-1}^1\sum _{j=0}^1A^{(1)}_{i,j}x^{i}y^{j}-I\right) ,\\ \bar{h}_2(x,y)= & {} y\left( \sum _{i=0}^1\sum _{j=-1}^1A^{(2)}_{i,j}x^{i}y^{j}-I\right) ,\\ \bar{h}_0(x,y)= & {} \sum _{i=0}^1\sum _{j=0}^1A^{(0)}_{i,j}x^{i}y^{j}-I. \end{aligned}$$

For a detailed derivation, see the work in [15].

Remark 2.1

It is worthwhile to mention that the fundamental form (1.3.6) in Fayolle et al. (1999) is for the generating function excluding boundary probabilities, while ours is for the complete joint probability vector. In [15], it is pointed out that these two forms (for Markov modulated random walks) are equivalent. In fact, for \(k=0,1,\) let

$$\begin{aligned} \pi ^{(k)}(x,y)=\sum _{m=1}^{\infty }\sum _{n=1}^{\infty }\pi _{m,n}(k)x^{m-1}y^{n-1}, \end{aligned}$$

and \(\pi (x,y)=(\pi ^{(0)}(x,y), \pi ^{(1)}(x,y))\), then

$$\begin{aligned} -\pi (x,y) \bar{h}(x,y) = \pi (x,0) \bar{h}_1(x,y) + \pi (0,y) \bar{h}_2(x,y) + \Pi _{0,0} \bar{h}_0(x,y) \end{aligned}$$

by noticing that

$$\begin{aligned} \pi _1^{(k)}(x)= & {} \pi ^{(k)}(x,0)=\sum _{m=1}^{\infty }\pi _{m,0}(k)x^{m-1},\\ \pi _2^{(k)}(y)= & {} \pi ^{(k)}(0,y)=\sum _{n=1}^{\infty }\pi _{0,n}(k)y^{n-1}. \end{aligned}$$

For the retrial queueing system studied in this paper, after some calculations, we have

$$\begin{aligned} H(x,y)= & {} \left( \begin{array}{cccc} (\lambda +\mu _1+\mu _2)xy &{} -(\mu _2x+\mu _1y+\lambda xy)\\ -\mu xy &{} -[\lambda _2 xy^2+\lambda _1 x^2 y-(\lambda +\mu )xy] \end{array}\right) ,\\ H_1(x,y)= & {} \left( \begin{array}{cccc} \mu _2 xy &{} -\mu _2 x\\ 0 &{} 0 \end{array}\right) ,\\ H_2(x,y)= & {} \left( \begin{array}{cccc} \mu _1 xy &{} -\mu _1 y\\ 0 &{} 0 \end{array}\right) ,\\ H_0(x,y)= & {} \left( \begin{array}{cccc} 0 &{} 0\\ 0 &{} 0 \end{array}\right) . \end{aligned}$$

Hence, the fundamental form (2.1) can be simplified as

$$\begin{aligned} P(x,y)H(x,y)=P(x,0)H_{1}(x,y)+P(0,y)H_{2}(x,y). \end{aligned}$$

Equivalently,

$$\begin{aligned}&\left( P^{(0)}(x,y),P^{(1)}(x,y)\right) \left( \begin{array}{cccc} (\lambda +\mu _1+\mu _2)xy &{} -(\mu _2x+\mu _1y+\lambda xy)\\ -\mu xy &{} -[\lambda _2 xy^2+\lambda _1 x^2 y-(\lambda +\mu )xy] \end{array}\right) \\&\quad \!=\!\left( P^{(0)}(x,0),P^{(1)}(x,0)\right) \left( \begin{array}{cccc} \mu _2 xy &{} -\mu _2 x\\ 0 &{} 0 \end{array}\right) +\left( P^{(0)}(0,y),P^{(1)}(0,y)\right) \left( \begin{array}{cccc} \mu _1 xy &{} -\mu _1 y\\ 0 &{} 0 \end{array}\right) , \end{aligned}$$

or,

$$\begin{aligned}&(\lambda +\mu _1+\mu _2)P^{(0)}(x,y)-\mu P^{(1)}(x,y)\nonumber \\&\quad =\mu _2 P^{(0)}(x,0)+\mu _1P^{(0)}(0,y),\end{aligned}$$
(2.2)
$$\begin{aligned}&(\lambda xy+\mu _1 y+\mu _2 x)P^{(0)}(x,y)+[\lambda _1 x+\lambda _2 y-(\lambda +\mu )]xy P^{(1)}(x,y)\nonumber \\&\quad =\mu _2 xP^{(0)}(x,0)+\mu _1 y P^{(0)}(0,y). \end{aligned}$$
(2.3)

Equations (2.2) and (2.3) are identical to Eqs. (18) and (19) in Avrachenkov et al. (2014), which were derived there by direct calculations.
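The calculations leading to the matrices H(x, y), \(H_1(x,y)\), \(H_2(x,y)\) and to \(H_0(x,y)=0\) above can also be reproduced symbolically. The following sympy sketch is not part of the original paper; it rebuilds \(\bar{h}\), \(\bar{h}_1\), \(\bar{h}_2\), \(\bar{h}_0\) from the block matrices given after Fig. 1, writing the identity matrix as \((\lambda +\mu +\mu _1+\mu _2)I\) to use the normalization, and compares the result with the expressions stated above.

```python
# A sympy sketch (not from the paper) verifying the matrices H, H_1, H_2 and
# H_0 = 0 of Sect. 2 from the block matrices A_{i,j} given after Fig. 1.
import sympy as sp

x, y, lam1, lam2, mu, mu1, mu2 = sp.symbols('x y lambda_1 lambda_2 mu mu_1 mu_2',
                                            positive=True)
lam = lam1 + lam2
one = lam + mu + mu1 + mu2          # equals 1 under the uniformization
I2 = one * sp.eye(2)                # the identity, written via the normalization

A10 = sp.Matrix([[0, 0], [0, lam1]]);   A01 = sp.Matrix([[0, 0], [0, lam2]])
Am10 = sp.Matrix([[0, mu1], [0, 0]]);   A0m1 = sp.Matrix([[0, mu2], [0, 0]])
A00   = sp.Matrix([[mu, lam], [mu, mu1 + mu2]])
A00_1 = sp.Matrix([[mu + mu2, lam], [mu, mu1 + mu2]])
A00_2 = sp.Matrix([[mu + mu1, lam], [mu, mu1 + mu2]])
A00_0 = sp.Matrix([[mu + mu1 + mu2, lam], [mu, mu1 + mu2]])

hbar  = x*y*(A10*x + A01*y + Am10/x + A0m1/y + A00 - I2)
hbar1 = x  *(A10*x + A01*y + Am10/x          + A00_1 - I2)
hbar2 = y  *(A10*x + A01*y          + A0m1/y + A00_2 - I2)
hbar0 =      A10*x + A01*y                   + A00_0 - I2

H, H1 = -hbar, -hbar + hbar1*y
H2, H0 = -hbar + hbar2*x, hbar0*x*y + hbar - hbar1*y - hbar2*x

H_claim = sp.Matrix([[(lam + mu1 + mu2)*x*y, -(mu2*x + mu1*y + lam*x*y)],
                     [-mu*x*y, -(lam2*x*y**2 + lam1*x**2*y - (lam + mu)*x*y)]])
H1_claim = sp.Matrix([[mu2*x*y, -mu2*x], [0, 0]])
H2_claim = sp.Matrix([[mu1*x*y, -mu1*y], [0, 0]])

for diff in (H - H_claim, H1 - H1_claim, H2 - H2_claim, H0):
    print(diff.applyfunc(sp.expand))     # each prints the zero matrix
```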

3 Censored Markov chain

One may notice that Eqs. (2.2) and (2.3) provide a relationship between the generating functions for an idle server and for a busy server. Therefore, we start our analysis with a busy server since, in this case, the censored Markov chain can be expressed explicitly. This censored Markov chain is a random walk in the quarter plane, which has been extensively studied in the literature. To this end, we first consider the uniformized discrete time Markov chain of the continuous time chain X(t) for the retrial model with uniformization parameter \(\theta = \lambda + \mu +\mu _1 +\mu _2=1\). We partition the transition matrix P of the uniformized chain according to the server state and then consider the chain censored on the set of states with a busy server. Specifically, let \(X(n)=\{(Q_n^{(1)},Q_n^{(2)}, I_n)\}\) be the uniformized chain on the state space \(\{0, 1, \ldots \} \times \{0, 1, \ldots \} \times \{0, 1\}\) and let \(E=\{0,1,\ldots \}\times \{0,1,\ldots \}\times \{1\}\) and \(E^c=\{0,1,\ldots \}\times \{0,1,\ldots \}\times \{0\}\). Partition the transition matrix P according to E and its complement \(E^c\) into:

$$\begin{aligned} P=\left( \begin{array}{cccc} P_{00} &{} P_{01} \\ P_{10} &{} P_{11} \end{array}\right) , \end{aligned}$$

where the first block row and column correspond to \(E^c\) and the second to \(E\). Using the lexicographical order for the states of \((Q_n^{(1)},Q_n^{(2)})\), the blocks \(P_{ij}\) can be expressed as

$$\begin{aligned} P_{00}= & {} \left( \begin{array}{cccccc} &{}A_{0} &{} \\ &{} &{}A_{1} &{} \\ &{} &{} &{}A_{1} &{} \\ &{} &{} &{} &{}\ddots \end{array} \right) ,\qquad P_{01}=\left( \begin{array}{cccccc} &{}B_{0} &{} \\ &{}B_{1} &{}B_{0} &{} \\ &{} &{}B_{1} &{}B_{0} &{} \\ &{} &{} &{}\ddots &{}\ddots \end{array} \right) ,\\ P_{10}= & {} \left( \begin{array}{cccccc} &{}C_0 &{} \\ &{} &{}C_0 &{}\\ &{} &{} &{}\ddots \end{array} \right) ;\quad \quad P_{11}=\left( \begin{array}{cccccc} &{}D_{0} &{}D_{1} \\ &{} &{}D_{0} &{}D_{1} \\ &{} &{} &{}\ddots &{}\ddots \end{array} \right) , \end{aligned}$$

with

$$\begin{aligned} A_{0}= & {} \left( \begin{array}{cccccc} &{}\mu +\mu _1+\mu _2 &{} \\ &{} &{}\mu +\mu _1 &{} \\ &{} &{} &{}\mu +\mu _1 &{} \\ &{} &{} &{} &{}\ddots \end{array} \right) ,\qquad A_{1}=\left( \begin{array}{cccccc} &{}\mu +\mu _2 &{} \\ &{} &{}\mu &{}\\ &{} &{} &{}\mu &{} \\ &{} &{} &{} &{} \ddots \end{array} \right) ,\\ B_{0}= & {} \left( \begin{array}{cccccc} &{}\lambda &{} \\ &{}\mu _2 &{}\lambda &{} \\ &{} &{}\mu _2 &{}\lambda &{} \\ &{} &{} &{}\ddots &{}\ddots \end{array} \right) ,\qquad B_{1}=\left( \begin{array}{cccccc} &{}\mu _1 &{} \\ &{} &{}\mu _1 &{}\\ &{} &{} &{}\ddots \end{array} \right) , \quad \quad C_{0}=\left( \begin{array}{cccccc} &{}\mu &{} \\ &{} &{}\mu &{} \\ &{} &{} &{}\ddots \end{array} \right) ,\\ D_{0}= & {} \left( \begin{array}{cccccc} &{}\mu _1+\mu _2 &{}\lambda _2 \\ &{} &{}\mu _1+\mu _2 &{}\lambda _2 \\ &{} &{} &{}\ddots &{}\ddots \end{array} \right) ,\qquad D_{1}=\left( \begin{array}{cccccc} &{}\lambda _1 &{} \\ &{} &{}\lambda _1 &{} \\ &{} &{} &{} \ddots \end{array} \right) . \end{aligned}$$

Since \(P_{00}\) is diagonal, it is straightforward to obtain the fundamental matrix of \(P_{00}\) as follows:

$$\begin{aligned} \hat{P}_{00}=\sum _{n=0}^{\infty } P_{00}^n = \text {diag} (\hat{A}_0, \hat{A}_1, \hat{A}_1, \ldots ), \end{aligned}$$

where

$$\begin{aligned} \hat{A}_0&= \text {diag} \left( \frac{1}{\lambda }, \frac{1}{\lambda +\mu _2}, \frac{1}{\lambda +\mu _2}, \ldots \right) , \\ \hat{A}_1&= \text {diag} \left( \frac{1}{\lambda +\mu _1}, \frac{1}{\lambda +\mu _1+\mu _2}, \frac{1}{\lambda +\mu _1+\mu _2}, \ldots \right) . \end{aligned}$$

Furthermore, since \(P_{10}\) is also diagonal, the transition matrix of the chain censored on E can be easily computed as

$$\begin{aligned} P^{(E)} = P_{11}+P_{10}\hat{P}_{00}P_{01} = \left( \begin{array}{cccccc} D_0 + \mu \hat{A}_0B_0 &{} D_1 &{} \\ \mu \hat{A}_1B_1 &{} D_0 + \mu \hat{A}_1B_0 &{} D_1 &{} \\ &{} \mu \hat{A}_1B_1 &{} D_0 + \mu \hat{A}_1B_0 &{} D_1 &{} \\ &{} &{} \ddots &{} \ddots &{} \ddots &{} \end{array} \right) . \end{aligned}$$

This censored chain is an example of the random walks in the quarter plane studied in Fayolle et al. (1999), referred to as the simple random walk; its transition diagram is depicted in Fig. 2.

Fig. 2 Transition diagram of the censored random walk

In our case,

$$\begin{aligned} p_{1,0}= & {} p_{1,0}^{(1)}=p_{1,0}^{(2)}=p_{1,0}^{(0)}=\lambda _1, ~~ p_{0,1}=p_{0,1}^{(1)}=p_{0,1}^{(2)}=p_{0,1}^{(0)}=\lambda _2;\\ p_{-1,0}= & {} \frac{\hat{\mu }_1}{\lambda +\mu _1+\mu _2}, ~~ p_{0,-1}=\frac{\hat{\mu }_2}{\lambda +\mu _1+\mu _2}, ~~ p_{0,0}=\mu _1+\mu _2+\frac{\lambda \mu }{\lambda +\mu _1+\mu _2};\\ p_{-1,0}^{(1)}= & {} \frac{\hat{\mu }_1}{\lambda +\mu _1}, ~~ p_{0,0}^{(1)}=\mu _1+\mu _2+\frac{\lambda \mu }{\lambda +\mu _1};\\ p_{0,-1}^{(2)}= & {} \frac{\hat{\mu }_2}{\lambda +\mu _2}, ~~ p_{0,0}^{(2)}=\mu _1+\mu _2+\frac{\lambda \mu }{\lambda +\mu _2};\\ p_{0,0}^{(0)}= & {} \mu _1+\mu _2+\mu , \end{aligned}$$

where \(\hat{\mu }_i=\mu \mu _i\) for \(i=1,2\).
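As a quick sanity check (not in the original paper), the transition probabilities above can be computed from a set of illustrative normalized rates and verified to sum to one on each face of the quarter plane.

```python
# Sketch: censored-walk probabilities for illustrative normalized rates.
lam1, lam2, mu, mu1, mu2 = 0.05, 0.05, 0.5, 0.2, 0.2   # lam+mu+mu1+mu2 = 1 (assumed)
lam = lam1 + lam2
m1h, m2h = mu * mu1, mu * mu2                           # hat-mu_1, hat-mu_2

interior = [lam1, lam2, m1h / (lam + mu1 + mu2), m2h / (lam + mu1 + mu2),
            mu1 + mu2 + lam * mu / (lam + mu1 + mu2)]
face1 = [lam1, lam2, m1h / (lam + mu1),                 # boundary n = 0
         mu1 + mu2 + lam * mu / (lam + mu1)]
face2 = [lam1, lam2, m2h / (lam + mu2),                 # boundary m = 0
         mu1 + mu2 + lam * mu / (lam + mu2)]
face0 = [lam1, lam2, mu1 + mu2 + mu]                    # origin

for name, face in [("interior", interior), ("face 1", face1),
                   ("face 2", face2), ("face 0", face0)]:
    print(name, round(sum(face), 12))                   # each equals 1.0
```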

Let \(\alpha =\lambda +\mu _1+\mu _2=1-\mu \) and \(\hat{\lambda }_i=\alpha \lambda _i\) for \(i=1,2\). It follows from Avrachenkov et al. (2014) that under the system stability condition (for the retrial queue), at least one of \(\hat{\lambda }_1<\hat{\mu }_1\) and \(\hat{\lambda }_2<\hat{\mu }_2\) holds. Without loss of generality, we assume that \(\hat{\lambda }_1<\hat{\mu }_1\) throughout the paper. For this censored random walk, the fundamental form [equation (1.3.6) in Fayolle et al. (1999)] is given by:

$$\begin{aligned} -h(x,y) \pi ^{(1)}(x,y)= h_1(x,y) \pi _1^{(1)}(x) + h_2(x,y) \pi _2^{(1)}(y) + h_0(x,y) \pi _{0,0}(1), \end{aligned}$$
(3.1)

where

$$\begin{aligned} h(x,y)= & {} xy\left( \sum _{i=-1}^1\sum _{j=-1}^1 p_{i,j}x^{i}y^{j}-1\right) = a(x)y^2+b(x)y+c(x), \end{aligned}$$
(3.2)
$$\begin{aligned} h_1(x,y)= & {} x\left( \sum _{i=-1}^1\sum _{j=0}^1 p^{(1)}_{i,j}x^{i}y^{j}-1\right) = a_1(x)y+b_1(x), \end{aligned}$$
(3.3)
$$\begin{aligned} h_2(x,y)= & {} y\left( \sum _{i=0}^1\sum _{j=-1}^1 p^{(2)}_{i,j}x^{i}y^{j}-1\right) =a_2(x)y^2+b_2(x)y+c_2(x), \end{aligned}$$
(3.4)
$$\begin{aligned} h_0(x,y)= & {} \sum _{i=0}^1\sum _{j=0}^1 p^{(0)}_{i,j}x^{i}y^{j}-1 =a_0(x)y+b_0(x), \end{aligned}$$
(3.5)

with

$$\begin{aligned} a(x)= & {} p_{0,1}x, \quad b(x)=p_{-1,0}-(1-p_{0,0})x+p_{1,0}x^2, \quad c(x)=p_{0,-1}x;\\ a_1(x)= & {} p_{0,1}^{(1)}x, \quad b_1(x)=p_{-1,0}^{(1)}-(1-p_{0,0}^{(1)})x+p_{1,0}^{(1)}x^2;\\ a_2(x)= & {} p_{0,1}^{(2)}, \quad b_2(x)=p_{0,0}^{(2)}-1+ p_{1,0}^{(2)}x, \quad c_2(x)=p_{0,-1}^{(2)};\\ a_0(x)= & {} p_{0,1}^{(0)}, \quad b_0(x)=p_{1,0}^{(0)}x-(1-p_{0,0}^{(0)}). \end{aligned}$$

In Li and Zhao (2011, 2012), a kernel method has been promoted for studying exact tail asymptotic properties for random walks in the quarter plane. In the following, we apply this method to the retrial queue model to explicitly (in terms of system parameters) characterize regions on which different tail asymptotic properties hold. First, based on the fundamental form in (3.1), asymptotic properties at the dominant singularity for the generating function \(\pi _1^{(1)}(x)\) or \(\pi _2^{(1)}(y)\) are obtained, based on which regions of different exact tail asymptotic properties for probabilities \(\pi _{m,n}(1)\) with a fixed value of n or m are identified through a Tauberian-like theorem (Theorem 6.1). Then, based on the relationship given in (2.2), the generating functions \(\pi _1^{(0)}(x)\) and \(\pi _2^{(0)}(y)\) are analyzed, and characterization of the exact tail asymptotic properties for \(\pi _{m,n}(0)\) is provided.

4 Dominant singularity of \(\pi _1^{(1)}(x)\)

Since discussions for dominant singularities of the two functions \(\pi _1^{(1)}(x)\) and \(\pi _2^{(1)}(y)\) are repetitive, we only provide details for \(\pi _1^{(1)}(x)\).

According to Li and Zhao (2011, 2012), the dominant singularity of \(\pi _1^{(1)}(x)\) is either a branch point of the Riemann surface defined by the kernel equation \(h(x,y)=0\), or a pole of the function \(\pi _1^{(1)}(x)\). The following two subsections are devoted to these two cases, respectively.

4.1 Branch points for kernel equation \(h(x,y)=0\)

For the censored random walk, we consider the kernel equation \(h(x,y)=0\) defined by the kernel function \(h(x,y)\) in the fundamental form (3.1). Write \(\alpha h(x,y)\) as a quadratic polynomial in y with coefficients that are polynomials in x:

$$\begin{aligned} \alpha h(x,y)=(\hat{\lambda }_2x)y^2+\left[ \hat{\lambda }_1 x^2-(\hat{\lambda }_1+\hat{\lambda }_2+\hat{\mu }_1+\hat{\mu }_2)x+\hat{\mu }_1\right] y+\hat{\mu }_2 x. \end{aligned}$$
(4.1)

For a fixed x, \(h(x,y)=0\) has two solutions

$$\begin{aligned} Y_{\pm }(x)=\frac{-\hat{b}(x)\pm \sqrt{\Delta (x)}}{2\hat{\lambda }_2 x}, \end{aligned}$$

where \(\hat{b}(x)=\alpha b(x)=\hat{\lambda }_1 x^2-(\hat{\lambda }_1+\hat{\lambda }_2+\hat{\mu }_1+\hat{\mu }_2)x+\hat{\mu }_1\) and \(\Delta (x)=b_+(x)b_-(x)\) with

$$\begin{aligned} b_+(x)= & {} \hat{b}(x)+2x\sqrt{\hat{\lambda }_2\hat{\mu }_2}=(x-1)(\hat{\lambda }_1 x-\hat{\mu }_1)-(\sqrt{\hat{\lambda }_2}-\sqrt{\hat{\mu }_2})^2x,\\ b_-(x)= & {} \hat{b}(x)-2x\sqrt{\hat{\lambda }_2\hat{\mu }_2}=(x-1)(\hat{\lambda }_1 x-\hat{\mu }_1)-(\sqrt{\hat{\lambda }_2}+\sqrt{\hat{\mu }_2})^2x. \end{aligned}$$

Denote by \(x_i\), \(i=1,2,3,4\), the branch points, which are the zeros of \(\Delta (x)\). Then we have

$$\begin{aligned} b_+(x)=\hat{\lambda }_1(x-x_2)(x-x_3)\quad \text {and} \quad b_-(x)=\hat{\lambda }_1(x-x_1)(x-x_4), \end{aligned}$$
(4.2)

where

$$\begin{aligned} 0<x_1<x_2<1<\hat{\mu }_1/\hat{\lambda }_1\le x_3<x_4<+\infty \end{aligned}$$

and

$$\begin{aligned} x_3 = \frac{\left( \hat{\lambda }_1 + \hat{\mu }_1 \right) + \left( \sqrt{\hat{\lambda }_2}-\sqrt{\hat{\mu }_2} \right) ^2 + \sqrt{ \left[ \left( \hat{\lambda }_1 + \hat{\mu }_1 \right) + \left( \sqrt{\hat{\lambda }_2}-\sqrt{\hat{\mu }_2} \right) ^2 \right] ^2 - 4 \hat{\lambda }_1 \hat{\mu }_1}}{2 \hat{\lambda }_1}\nonumber \\ \end{aligned}$$
(4.3)

is a candidate for the dominant singularity of \(\pi _1^{(1)}(x)\).
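Numerically (again with the illustrative rates used earlier; this sketch is not part of the paper), the four branch points can be obtained as the zeros of \(b_+(x)\) and \(b_-(x)\), and the larger zero of \(b_+(x)\) agrees with formula (4.3).

```python
# Sketch: branch points x_1 < x_2 < 1 < x_3 < x_4 for illustrative rates.
import math

lam1, lam2, mu, mu1, mu2 = 0.05, 0.05, 0.5, 0.2, 0.2
lam = lam1 + lam2
alpha = lam + mu1 + mu2
l1h, l2h, m1h, m2h = alpha * lam1, alpha * lam2, mu * mu1, mu * mu2

def quad_roots(a, b, c):                 # real roots, smaller one first
    d = math.sqrt(b * b - 4 * a * c)
    return (-b - d) / (2 * a), (-b + d) / (2 * a)

sq_minus = (math.sqrt(l2h) - math.sqrt(m2h)) ** 2
sq_plus = (math.sqrt(l2h) + math.sqrt(m2h)) ** 2
x2, x3 = quad_roots(l1h, -((l1h + m1h) + sq_minus), m1h)   # zeros of b_+
x1, x4 = quad_roots(l1h, -((l1h + m1h) + sq_plus), m1h)    # zeros of b_-

s = (l1h + m1h) + sq_minus
x3_formula = (s + math.sqrt(s * s - 4 * l1h * m1h)) / (2 * l1h)   # formula (4.3)
print(f"x1={x1:.4f} < x2={x2:.4f} < 1 < x3={x3:.4f} < x4={x4:.4f}")
print(f"x3 from (4.3): {x3_formula:.4f}")
```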

Consider the following cut planes:

$$\begin{aligned} \widetilde{\mathbb {C}}_x= & {} \mathbb {C}_x-[x_3,x_4],\\ \widetilde{\mathbb {C}}_y= & {} \mathbb {C}_y-[y_3,y_4],\\ \widetilde{\widetilde{\mathbb {C}}}_x= & {} \mathbb {C}_x-\left( [x_3,x_4]\cup [x_1,x_2]\right) ,\\ \widetilde{\widetilde{\mathbb {C}}}_y= & {} \mathbb {C}_y-\left( [y_3,y_4]\cup [y_1,y_2]\right) , \end{aligned}$$

where \({\mathbb {C}}_x\) and \({\mathbb {C}}_y\) are the complex planes of x and y, respectively. In the cut plane \(\widetilde{\widetilde{\mathbb {C}}}_x\), define the two branches of Y(x) by

$$\begin{aligned} Y_0(x)&=Y_-(x) \quad \text {and} \quad Y_1(x)=Y_+(x) \quad \text {if}\quad |Y_-(x)|\le |Y_+(x)|,\\ Y_0(x)&=Y_+(x) \quad \text {and} \quad Y_1(x)=Y_-(x) \quad \text {if}\quad |Y_-(x)|>|Y_+(x)|. \end{aligned}$$

Symmetrically, when x and y are interchanged, we also have branch points \(y_i,i=1,2,3,4\), satisfying

$$\begin{aligned} 0<y_1<y_2<1<y_3<y_4<+\infty \end{aligned}$$

as well as the two branches \(X_0(y)\) and \(X_1(y)\) defined in a similar fashion.

Detailed properties of the branches \(Y_0(x)\) and \(Y_1(x)\) (\(X_0(y)\) and \(X_1(y)\)) are needed in the asymptotic analysis for functions \(\pi _1^{(1)}(x)\) and \(\pi _1^{(0)}(x)\) (\(\pi _2^{(1)}(y)\) and \(\pi _2^{(0)}(y)\)), which are presented in the following two lemmas.

Lemma 4.1

The functions \(Y_i(x)\), \(i=0,1,\) are meromorphic in the cut plane \(\widetilde{\widetilde{\mathbb {C}}}_x\). In addition,

  1. (i)

    \(Y_0(x)\) has one zero and no poles and \(Y_1(x)\) has two poles and no zeros. Hence, \(Y_0(x)\) is analytic in \(\widetilde{\widetilde{\mathbb {C}}}_x\).

  2. (ii)

    \(|Y_0(x)|<|Y_1(x)|\) in the whole cut complex plane \(\widetilde{\widetilde{\mathbb {C}}}_x\). \(|Y_0(x)|=|Y_1(x)|\) takes place only on the cuts.

  3. (iii)

    \(|Y_0(x)|<1\) if \(|x|=1\), \(x\ne 1\), and \(Y_0(1)=\min \left( 1,\frac{\hat{\mu }_2}{\hat{\lambda }_2}\right) \le 1\).

  4. (iv)

    For all \(x\in \mathbb {C}_x\), \(|Y_0(x)|\le \sqrt{\frac{\hat{\mu }_2}{\hat{\lambda }_2}}\) and \(|Y_1(x)|\ge \sqrt{\frac{\hat{\mu }_2}{\hat{\lambda }_2}}\).

  5. (v)

    If \(x\in [x_1,x_2]\), then \(|Y_0(x)|=\sqrt{\frac{\hat{\mu }_2}{\hat{\lambda }_2}}\) and \(X_0(Y_0(x))=x\).

Moreover,

  1. (vi)

    \(0<Y_0(x)\le 1\) for \(1\le x\le \frac{\hat{\mu }_1}{\hat{\lambda }_1}\) (recall that \(\hat{\lambda }_1<\hat{\mu }_1\)).

Parallel results for \(X_i(y)\), \(i=0,1\) can be stated as well.

Proof

See Fayolle and Iasnogorodski (1979), Li and Zhao (2011) and Li and Zhao (2012) for proofs of (i)–(v). Here we only detail the proof of (vi).

For \(1\le x\le \frac{\hat{\mu }_1}{\hat{\lambda }_1}\), let \(\frac{\alpha h(x,y)}{x}=0\), which leads to

$$\begin{aligned} \tilde{b}(y)+\left( \hat{\lambda }_1 x+\frac{\hat{\mu }_1}{x}\right) y=0, \end{aligned}$$

where \(\tilde{b}(y)=\hat{\lambda }_2y^2-(\hat{\lambda }_1+\hat{\lambda }_2+\hat{\mu }_1+\hat{\mu }_2)y+\hat{\mu }_2\). Since \(\hat{\lambda }_1 x+\frac{\hat{\mu }_1}{x}\) is decreasing on \(\left[ 1,\sqrt{\frac{\hat{\mu }_1}{\hat{\lambda }_1}}\right] \) and increasing on \(\left( \sqrt{\frac{\hat{\mu }_1}{\hat{\lambda }_1}},\frac{\hat{\mu }_1}{\hat{\lambda }_1}\right] \), we have \(2\sqrt{\hat{\lambda }_1\hat{\mu }_1}\le \hat{\lambda }_1 x+\frac{\hat{\mu }_1}{x}\le \hat{\lambda }_1+\hat{\mu }_1\). For \(y<0\), the inequalities

$$\begin{aligned} \left\{ \begin{array}{l} \tilde{b}(y)+2 y\sqrt{\hat{\lambda }_1\hat{\mu }_1}\ge 0 \\ \tilde{b}(y)+(\hat{\lambda }_1+\hat{\mu }_1)y\le 0 \end{array} \right. \end{aligned}$$

have no solutions. For \(y\ge 0\), solve the following inequalities

$$\begin{aligned} \left\{ \begin{array}{l} \tilde{b}(y)+2 y\sqrt{\hat{\lambda }_1\hat{\mu }_1}\le 0\\ \tilde{b}(y)+(\hat{\lambda }_1+\hat{\mu }_1)y\ge 0 \end{array} \right. \end{aligned}$$

to obtain \(y_2\le y\le \min \big (1,\frac{\hat{\mu }_2}{\hat{\lambda }_2}\big )\), or \(\max \big (1,\frac{\hat{\mu }_2}{\hat{\lambda }_2}\big )\le y\le y_3\). This means that for \(1\le x\le \frac{\hat{\mu }_1}{\hat{\lambda }_1}\), \(y_2 \le Y_0(x)\le 1\). \(\square \)
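A numerical spot-check of Lemma 4.1(vi) for the illustrative rates used earlier (not part of the paper): evaluating \(Y_0(x)\) at a few points of \([1,\hat{\mu }_1/\hat{\lambda }_1]\) confirms that it stays in (0, 1].

```python
# Sketch: Y_0(x) on [1, mu1_hat/lambda1_hat] for the illustrative rates.
import math

lam1, lam2, mu, mu1, mu2 = 0.05, 0.05, 0.5, 0.2, 0.2
lam = lam1 + lam2
alpha = lam + mu1 + mu2
l1h, l2h, m1h, m2h = alpha * lam1, alpha * lam2, mu * mu1, mu * mu2

def Y0(x):
    b_hat = l1h * x * x - (l1h + l2h + m1h + m2h) * x + m1h
    delta = b_hat * b_hat - 4 * (l2h * x) * (m2h * x)
    return (-b_hat - math.sqrt(delta)) / (2 * l2h * x)   # smaller root = Y_0 here

for x in (1.0, 2.0, 3.0, 4.0):          # 4.0 = mu1_hat/lambda1_hat for these rates
    print(x, round(Y0(x), 4))           # values lie in (0, 1]
```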

Lemma 4.2

We present more properties about \(Y_0(x)\) and \(X_0(y)\) below:

  1. (i)

    If \(\hat{\mu }_2>\hat{\lambda }_2\), then \(0<Y_0(x)<1\) for \(x\in \big (1,\frac{\hat{\mu }_1}{\hat{\lambda }_1}\big )\), and \(1<Y_0(x)<\sqrt{\frac{\hat{\mu }_2}{\hat{\lambda }_2}}\) for \(x\in \big (\frac{\hat{\mu }_1}{\hat{\lambda }_1},x_3\big )\). In particular, \(Y_0\big (\frac{\hat{\mu }_1}{\hat{\lambda }_1}\big )=1\) and \(Y_0(x_3)=\sqrt{\frac{\hat{\mu }_2}{\hat{\lambda }_2}}>1\).

  2. (ii)

    If \(\hat{\mu }_2<\hat{\lambda }_2\), then \(0<Y_0(x)<1\) for \(x\in (1,x_3)\). Also, \(Y_0\big (\frac{\hat{\mu }_1}{\hat{\lambda }_1}\big )=\frac{\hat{\mu }_2}{\hat{\lambda }_2}<1\) and \(Y_0(x_3)=\sqrt{\frac{\hat{\mu }_2}{\hat{\lambda }_2}}<1\).

  3. (iii)

    If \(\hat{\mu }_2=\hat{\lambda }_2\), then \(x_3=\frac{\hat{\mu }_1}{\hat{\lambda }_1}\), \(0<Y_0(x)<1\) for \(x\in (1,\frac{\hat{\mu }_1}{\hat{\lambda }_1})\), and \(Y_0(1)=Y_0\big (\frac{\hat{\mu }_1}{\hat{\lambda }_1}\big )=1\).

Similarly,

  1. (i’)

    If \(\hat{\mu }_2>\hat{\lambda }_2\), then \(0<X_0(y)<1\) for \(y\in \big (1,\frac{\hat{\mu }_2}{\hat{\lambda }_2}\big )\), and \(1<X_0(y)<\sqrt{\frac{\hat{\mu }_1}{\hat{\lambda }_1}}\) for \(y\in \big (\frac{\hat{\mu }_2}{\hat{\lambda }_2},y_3\big )\). In particular, \(X_0\big (\frac{\hat{\mu }_2}{\hat{\lambda }_2}\big )=1\) and \(X_0(y_3)=\sqrt{\frac{\hat{\mu }_1}{\hat{\lambda }_1}}>1\).

  2. (ii’)

    If \(\hat{\mu }_2\le \hat{\lambda }_2\), then \(1<X_0(y)<\sqrt{\frac{\hat{\mu }_1}{\hat{\lambda }_1}}\) for \(y\in (1,y_3)\). Also, \(X_0(1)=1\) and \(X_0(y_3)=\sqrt{\frac{\hat{\mu }_1}{\hat{\lambda }_1}}>1\).

Proof

Based on Eqs. (4.1) and (4.2), it is easy to see that \(\hat{b}(x)<0\) for \(x\in (1,x_3)\), so the branch \(Y_0(x)\) is given by \(Y_-(x)=\frac{-\hat{b}(x)-\sqrt{\Delta (x)}}{2\hat{\lambda }_2 x}\). Solving the inequalities \(Y_0(x)>1\) and \(Y_0(x)<1\) for \(x\in (1,x_3)\), we obtain, after some simple calculations, the results in (i)–(iii) of the lemma. (i’) and (ii’) can be proved in the same way. \(\square \)

4.2 Poles of \(\pi _1^{(1)}(x)\)

Since the censored random walk is a standard walk in the quarter plane, results from the literature can now be applied to the analysis of the dominant singularity of \(\pi _1^{(1)}(x)\). Therefore, besides the branch point \(x_3\) given in (4.3), the other candidate for the dominant singularity is a pole of the function \(\pi _1^{(1)}(x)\). In the following, we refine the literature results, which leads to an explicit characterization of both the dominant pole and the regions for different exact tail asymptotic properties.

Theorem 4.1

(Theorem 4.4 in Li and Zhao 2012) If \(x_p\) is the pole of \(\pi _1^{(1)}(x)\) with the smallest modulus in \((1, x_3]\), then \(x_p\) is a zero of \(h_1(x,Y_0(x))\) or \(Y_0(x_p)\) is a zero of \(h_2(X_0(y),y)\). In the latter case, \(|Y_0(x_p)|>1\). On the other hand, if \(x_p\) is the zero of \(h_1(x,Y_0(x))\) or \(Y_0(x_p)\) with \(|Y_0(x_p)|>1\) is a zero of \(h_2(X_0(y),y)\) with the smallest modulus in \((1, x_3]\), then \(x_p\) is the pole of \(\pi _1^{(1)}(x)\) in \((1, x_3]\). Moreover, \(x_p\) is real. Parallel results can be easily stated for \(\pi _2^{(1)}(y)\).

For the retrial queue model with two input streams and two orbits studied in this paper, we show in the following that the pole of \(\pi _1^{(1)}(x)\) [respectively \(\pi _2^{(1)}(y)\)] can only be the zero of \(h_1(x,Y_0(x))\) [respectively \(h_2(X_0(y),y)\)].

First, we discuss properties of the pole of \(\pi _1^{(1)}(x)\) in interval \((1,x_3]\). For convenience, let \(x^*\) be the unique zero in \((1,x_3]\) of the function \(h_1(x,Y_0(x))\) if such a zero exists, otherwise let \(x^*=+\infty \) (in this case, obviously \(x^*\) can never be the dominant singularity since \(x_3 < +\infty \)). Instead of directly considering the equation \(h_1(x,Y_0(x))=0\), we consider the product of two functions \(h_1(x,Y_0(x))\) and \(h_1(x,Y_1(x))\), which is a polynomial:

$$\begin{aligned} h_1(x,Y_0(x))h_1(x,Y_1(x))=\frac{\hat{\mu }_2}{\alpha (\lambda +\mu _1)^2}(x-1)g(x), \end{aligned}$$

where

$$\begin{aligned} g(x)=\lambda \lambda _1(\lambda +\mu _1)x^2+\lambda \mu _1(\lambda +\mu _1-\mu )x-\mu \mu _1^2. \end{aligned}$$

Since \(\lambda (\lambda _1+\mu _1)<\mu \mu _1\), it is easy to check that \(g(0)<0\) and \(g(1)<0\). Hence \(g(x)=0\) has one positive zero \(x_+\) and one negative zero \(x_-\). In particular, \(x_+>1\). The expressions of the two zeros are given as

$$\begin{aligned} x_+= & {} \frac{-\lambda \mu _1(\lambda +\mu _1-\mu )+\sqrt{[\lambda \mu _1(\lambda +\mu _1-\mu )]^2+4\lambda \lambda _1(\lambda +\mu _1)\mu \mu _1^2}}{2\lambda \lambda _1(\lambda +\mu _1)},\nonumber \\ x_-= & {} \frac{-\lambda \mu _1(\lambda +\mu _1-\mu )-\sqrt{[\lambda \mu _1(\lambda +\mu _1-\mu )]^2+4\lambda \lambda _1(\lambda +\mu _1)\mu \mu _1^2}}{2\lambda \lambda _1(\lambda +\mu _1)}. \end{aligned}$$
(4.4)
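With the illustrative rates used earlier, the zeros in (4.4) can be computed directly (this sketch is not part of the paper):

```python
# Sketch: zeros x_+ and x_- of g(x) for the illustrative rates.
import math

lam1, lam2, mu, mu1, mu2 = 0.05, 0.05, 0.5, 0.2, 0.2
lam = lam1 + lam2

a = lam * lam1 * (lam + mu1)
b = lam * mu1 * (lam + mu1 - mu)
c = -mu * mu1 ** 2
d = math.sqrt(b * b - 4 * a * c)
x_plus, x_minus = (-b + d) / (2 * a), (-b - d) / (2 * a)

print(f"x_+ = {x_plus:.4f} (> 1), x_- = {x_minus:.4f} (< 0)")
print("g(x_+) =", a * x_plus ** 2 + b * x_plus + c)     # ~ 0 up to rounding
```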

Since at least one of \(p_{i,j}\) and \(p_{i,j}^{(1)}\) in this censored random walk is not X-shaped (refer to Li and Zhao 2012 for details), Theorem 4.5 in Li and Zhao (2012) guarantees that the candidate zero of \(h_1(x,Y_0(x))\) can only be \(x_+\). Solving \(h_1(x_+,y)=0\) by using (3.3), we get

$$\begin{aligned} y=Y(x_+)=1-\frac{[\lambda _1(\lambda +\mu _1)x_+-\mu \mu _1](x_+-1)}{\lambda _2(\lambda +\mu _1)x_+}, \end{aligned}$$
(4.5)

where \(Y(x_+)\) is either \(Y_0(x_+)\) or \(Y_1(x_+)\). On the other hand, \(\frac{\mu \mu _1}{\lambda _1(\lambda +\mu _1)}>1\) and \(g\big (\frac{\mu \mu _1}{\lambda _1(\lambda +\mu _1)}\big )=\frac{\lambda _2\mu \mu _1^{2}}{\lambda _1}>0\); hence, \(1<x_+<\frac{\mu \mu _1}{\lambda _1(\lambda +\mu _1)}\). By (4.5), this means \(Y(x_+)>1\), since the factor \(\lambda _1(\lambda +\mu _1)x_+-\mu \mu _1\) in the numerator is negative while \(x_+-1>0\). Furthermore, to check whether or not \(x_+\) is a pole of \(\pi _1^{(1)}(x)\), we carry out the discussion under the conditions \(\hat{\mu }_2>\hat{\lambda }_2\) and \(\hat{\mu }_2\le \hat{\lambda }_2\) separately in the following lemma.

Lemma 4.3

1. When \(\hat{\mu }_2>\hat{\lambda }_2\), the value of \(x^*\) depends on the value of \(x_+\):

  1. (a)

    For \(x_+\in \big (1,\frac{\hat{\mu }_1}{\hat{\lambda }_1}\big ]\), we have \(x^*=+\infty ;\)

  2. (b)

    For \(x_+\in \left( \frac{\hat{\mu }_1}{\hat{\lambda }_1},\min \left( x_3, \frac{\mu \mu _1}{\lambda _1(\lambda +\mu _1)}\right) \right) \), we have \(x^*=x_+<x_3\) if \(Y(x_+)<\sqrt{\frac{\hat{\mu }_2}{\hat{\lambda }_2}}\), and \(x^*=+\infty \) otherwise;

  3. (c)

    For \(x_+=x_3<\frac{\mu \mu _1}{\lambda _1(\lambda +\mu _1)}\), we have \(Y(x_+)=\sqrt{\frac{\hat{\mu }_2}{\hat{\lambda }_2}}\) and \(x^*=x_+=x_3;\)

  4. (d)

    For \(x_3<x_+<\frac{\mu \mu _1}{\lambda _1(\lambda +\mu _1)}\), we have \(x^*=+\infty \).

2. When \(\hat{\mu }_2\le \hat{\lambda }_2\), we have \(x^*=+\infty \).

Proof

For the case \(\hat{\mu }_2>\hat{\lambda }_2\), if \(x_+\in \left( 1,\frac{\hat{\mu }_1}{\hat{\lambda }_1}\right] \), then \(0<Y_0(x_+)\le 1\) from Lemma 4.2; for the case \(\hat{\mu }_2\le \hat{\lambda }_2\), if \(x_+\in (1,x_3]\), then \(Y_0(x_+)\le \sqrt{\frac{\hat{\mu }_2}{\hat{\lambda }_2}}\le 1\) from Lemma 4.1. Both cases contradict \(Y(x_+)>1\). Hence, we can conclude that \(h_1(x,Y_0(x))\) has no zero on \([1,+\infty )\) (\(x_+\) is a zero of \(h_1(x,Y_1(x))\)). Therefore, \(x^*=+\infty \). The other conclusions are easy to verify. \(\square \)

Remark 4.1

It is worthwhile to notice that: (i) If there does not exist a pole in \((1,x_3]\), then \(x_3\) is the dominant singularity of \(\pi _1^{(1)}(x)\). Therefore, for the purpose of locating the dominant singularity, we do not need to consider case 1(d) in Lemma 4.3. (ii) The right-hand expression in (4.5) can be either \(Y_0(x_+)\) or \(Y_1(x_+)\). (iii) It is possible that both \(x_+\) and \(x_-\) are zeros of \(h_1(x,Y_1(x))\). In this case, \(h_1(x,Y_0(x))=0\) has no solution.

Next, we show that a zero of \(h_2(X_0(y),y)\) cannot produce a pole of \(\pi _1^{(1)}(x)\) in \((1,x_3]\). Based on Theorem 4.1, if the pole of \(\pi _1^{(1)}(x)\) in \((1,x_3]\) is not \(x^*\), then we denote it by \(\tilde{x}_1\). For convenience, define \(y^*\) to be the unique zero of \(h_2(X_0(y),y)\) in \((1,y_3]\) if such a zero exists, otherwise let \(y^*=+\infty \). Following the same idea as above, we have

$$\begin{aligned} h_2(X_0(y),y)h_2(X_1(y),y)=\frac{\hat{\mu }_1}{\alpha (\lambda +\mu _2)^2}(y-1)f(y), \end{aligned}$$

where

$$\begin{aligned} f(y)=\lambda \lambda _2(\lambda +\mu _2)y^2+\lambda \mu _2(\lambda +\mu _2-\mu )y-\mu \mu _2^2 \end{aligned}$$

has two zeros: \(y_-<0\) and \(y_+>1\). Solving \(h_2(x,y_+)=0\), and then from (3.4) we get

$$\begin{aligned} x=X(y_+)=1-\frac{[\lambda _2(\lambda +\mu _2)y_+-\mu \mu _2](y_+-1)}{\lambda _1(\lambda +\mu _2)y_+}, \end{aligned}$$

where \(X(y_+)\) is either \(X_0(y_+)\) or \(X_1(y_+)\).

Using a similar argument, parallel results to Lemma 4.3-1 can be obtained. Since \(\lambda _2(\lambda +\mu _2)<\mu \mu _2\) and \(f\left( \frac{\mu \mu _2}{\lambda _2(\lambda +\mu _2)}\right) =\frac{\lambda _1\mu \mu _2^{2}}{\lambda _2}>0\), we have \(1<y_+<\frac{\mu \mu _2}{\lambda _2(\lambda +\mu _2)}\). This leads to \(X(y_+)>1\). Next, we claim that \(\tilde{x}_1\) cannot exist.

If \(h_2(X_0(y),y)\) has a zero \(y^*\) in \((1,y_3]\), then \(y^*=y_+\) and \(1<X(y_+)=X_0(y^*)\le \sqrt{\frac{\hat{\mu }_1}{\hat{\lambda }_1}}\). From Theorem 4.7 in Li and Zhao (2012) we know \(\tilde{x}_1=X_1(y^*)\). Hence, \(\tilde{x}_1\in \left[ \sqrt{\frac{\hat{\mu }_1}{\hat{\lambda }_1}},\frac{\hat{\mu }_1}{\hat{\lambda }_1}\right) \) and \(0<Y_0(\tilde{x}_1)<1\) from Lemma 4.2. Obviously, this contradicts the fact that \(Y_0(\tilde{x}_1)\) is a pole of \(\pi _2^{(1)}(y)\), which must be greater than 1. Therefore, \(\tilde{x}_1\) cannot exist.

Based on the above discussion, we are ready to summarize the detailed properties on the location of the dominant singularity. For convenience, we introduce the following three conditions:

  • Condition 1. \(\hat{\mu }_2 > \hat{\lambda }_2\), \(x_+\in \left( \frac{\hat{\mu }_1}{\hat{\lambda }_1},\min \left( x_3,\frac{\mu \mu _1}{\lambda _1(\lambda +\mu _1)}\right) \right) \) and \(Y(x_+) < \sqrt{\frac{\hat{\mu }_2}{\hat{\lambda }_2}}\).

  • Condition 2. \(\hat{\mu }_2 > \hat{\lambda }_2\) and \(x_+ =x_3 \in \big (\frac{\hat{\mu }_1}{\hat{\lambda }_1},\frac{\mu \mu _1}{\lambda _1(\lambda +\mu _1)}\big )\).

  • Condition 3. One of the following three: (a) \(\hat{\mu }_2 \le \hat{\lambda }_2\); (b) \(\hat{\mu }_2 > \hat{\lambda }_2\) and \(x_+ \in (1, \frac{\hat{\mu }_1}{\hat{\lambda }_1}]\); and (c) \(\hat{\mu }_2 > \hat{\lambda }_2\), \(x_+\in \left( \frac{\hat{\mu }_1}{\hat{\lambda }_1},\min \big (x_3,\frac{\mu \mu _1}{\lambda _1(\lambda +\mu _1)}\big )\right) \), and \(Y(x_+) \ge \sqrt{\frac{\hat{\mu }_2}{\hat{\lambda }_2}}\).

Lemma 4.4

  • Case 1: Under Condition 1, the dominant singularity \(x_{dom}=x^*=x_+ < x_3\), which is a pole.

  • Case 2: Under Condition 2, the dominant singularity \(x_{dom}=x_3=x^*=x_+\), which is both a branch point and a pole.

  • Case 3: Under Condition 3, the dominant singularity \(x_{dom}=x_3 < x^*=+\infty \), which is a branch point.

Remark 4.2

One should notice that the above lemma is a refinement of the literature result for a general random walk in the quarter plane. It provides explicit conditions (in terms of system parameters) under which the dominant singularity \(x_{dom}\) of \(\pi _1^{(1)}(x)\) (also explicitly expressed) is either \(x_{dom} = x_3\) or \(x_{dom}=x_+\), since the branch point \(x_3\), the pole \(x_+\) and \(Y(x_+)\) are all explicitly expressed in (4.3), (4.4) and (4.5), respectively.
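To make the classification concrete, the following Python sketch (not part of the paper; the rates are the illustrative ones used earlier) computes \(x_3\), \(x_+\) and \(Y(x_+)\) from (4.3)–(4.5) and reports which of Conditions 1–3 holds, together with \(x_{dom}\).

```python
# Sketch: classify Conditions 1-3 and report x_dom for the illustrative rates.
import math

lam1, lam2, mu, mu1, mu2 = 0.05, 0.05, 0.5, 0.2, 0.2
lam = lam1 + lam2
alpha = lam + mu1 + mu2
l1h, l2h, m1h, m2h = alpha * lam1, alpha * lam2, mu * mu1, mu * mu2

s = (l1h + m1h) + (math.sqrt(l2h) - math.sqrt(m2h)) ** 2
x3 = (s + math.sqrt(s * s - 4 * l1h * m1h)) / (2 * l1h)          # branch point (4.3)

a, b, c = lam * lam1 * (lam + mu1), lam * mu1 * (lam + mu1 - mu), -mu * mu1 ** 2
x_plus = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)           # zero of g, (4.4)
Y_xplus = 1 - ((lam1 * (lam + mu1) * x_plus - mu * mu1) * (x_plus - 1)
               / (lam2 * (lam + mu1) * x_plus))                  # formula (4.5)

bound = mu * mu1 / (lam1 * (lam + mu1))
if m2h > l2h and m1h / l1h < x_plus < min(x3, bound) and Y_xplus < math.sqrt(m2h / l2h):
    cond, x_dom = "Condition 1", x_plus                          # pole
elif m2h > l2h and math.isclose(x_plus, x3) and m1h / l1h < x_plus < bound:
    cond, x_dom = "Condition 2", x3                              # branch point and pole
else:
    cond, x_dom = "Condition 3", x3                              # branch point only

print(cond, " x_dom =", round(x_dom, 4))
```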

5 Asymptotic properties of \(\pi _1^{(1)}(x)\) at its dominant singularity

Once again, in this section, we only provide a detailed analysis for the function \(\pi _1^{(1)}(x)\). Due to symmetry, parallel results for \(\pi _2^{(1)}(y)\) can be easily stated and similarly proved. In the previous section, we proved that either \(x_3\) or \(x_+\) is the dominant singularity of \(\pi _1^{(1)}(x)\). In this section, we prove (in Theorem 5.1) that there exist three types of detailed asymptotic properties as x approaches the dominant singularity \(x_{dom}\) of \(\pi _1^{(1)}(x)\), depending on whether \(x_{dom}=x_+<x_3\), \(x_{dom}=x_3<x_+\), or \(x_{dom}=x_+=x_3\).

For simplicity in the following discussion, especially for the case of \(x_{dom}=x_3\), we write

$$\begin{aligned} Y_0(x)= & {} p(x)+q(x)\sqrt{1-\frac{x}{x_{dom}}},\nonumber \\ h_1(x,Y_0(x))= & {} p_1(x)+q_1(x)\sqrt{1-\frac{x}{x_{dom}}},\nonumber \\ Y_0(x_{dom})-Y_0(x)= & {} \bigg (1-\frac{x}{x_{dom}}\bigg )p^*(x)-q(x)\sqrt{1-\frac{x}{x_{dom}}},\nonumber \\ h_1(x,Y_0(x))-h_1(x_{dom},Y_0(x_{dom}))\!= & {} \!\bigg (1-\frac{x}{x_{dom}}\bigg )p_1^*(x)+q_1(x)\sqrt{1-\frac{x}{x_{dom}}},\qquad \end{aligned}$$
(5.1)

where

$$\begin{aligned} p(x)= & {} -\frac{\hat{b}(x)}{2\hat{\lambda }_2x}, ~~ q(x)=-\frac{1}{2\hat{\lambda }_2x}\sqrt{\frac{\Delta (x)}{1-x/x_{dom}}},~~\text {if}\;~~ x_{dom}=x_3,\\ p_1(x)= & {} -\frac{\hat{b}(x)a_1(x)}{2\hat{\lambda }_2x}+b_1(x), ~~ q_1(x)=a_1(x)q(x),\\ p^*(x)= & {} \frac{(p(x_{dom})-p(x))x_{dom}}{x_{dom}-x}~~ \text {and}\;~~p_1^*(x)=\frac{(p_1(x)-p_1(x_{dom}))x_{dom}}{x_{dom}-x}. \end{aligned}$$

Theorem 5.1

The behaviour of \(\pi _1^{(1)}(x)\) at the dominant singularity is given as

  1. (i)

    If \(x_{dom}=x^*=x_+<x_3\), then

    $$\begin{aligned} \displaystyle \lim _{x\rightarrow x_+}\bigg (1-\frac{x}{x_+}\bigg )\pi _1^{(1)}(x)=C_{1,0}, \end{aligned}$$

    where

    $$\begin{aligned} C_{1,0}=\frac{(\lambda +\mu _1)\sqrt{\Delta (x_+)}\left[ h_2(x_+,Y_0(x_+))\pi _2^{(1)}(Y_0(x_+))+h_0(x_+,Y_0(x_+))\pi _{0,0}(1)\right] }{\hat{\mu }_2\lambda \lambda _1x_+(x_+-1)(x_+-x_-)}. \end{aligned}$$
  2. (ii)

    If \(x_{dom}=x_3=x^*=x_+\), then

    $$\begin{aligned} \displaystyle \lim _{x\rightarrow x_{dom}}\sqrt{1-x/x_{dom}}\pi _1^{(1)}(x)=C_{2,0}, \end{aligned}$$

    where

    $$\begin{aligned} C_{2,0}=\frac{2 }{\lambda _1}\times \frac{h_2(x_{dom},Y_0(x_{dom}))\pi _2^{(1)}(Y_0(x_{dom}))+h_0(x_{dom},Y_0(x_{dom}))\pi _{0,0}(1)}{\sqrt{x_{dom}(x_{dom}-x_1)(x_{dom}-x_2)(x_4-x_{dom})}}. \end{aligned}$$
  3. (iii)

    If \(x_{dom}=x_3<x^*=+\infty \), then

    $$\begin{aligned} \displaystyle \lim _{x\rightarrow x_3}\sqrt{1-x/x_3}\pi _1^{'(1)}(x)=C_{3,0}, \end{aligned}$$

    where \(\pi _1^{'(1)}(x)\) is the derivative of \(\pi _1^{(1)}(x)\) and

    $$\begin{aligned} C_{3,0}=-\frac{q(x_3)}{2x_3}\frac{d}{dy}\left[ \frac{h_2(x_3,y)\pi _2^{(1)}(y)+h_0(x_3,y)\pi _{0,0}(1)}{h_1(x_3,y)}\right] {\bigg |_{y=Y_0(x_3)}}. \end{aligned}$$

Proof

(i) If \(x_{dom}=x^*=x_+<x_3\), then \(x_{dom}\) is a simple pole of \(\pi _1^{(1)}(x)\). Based on the analysis in Li and Zhao (2012), we can rewrite

$$\begin{aligned} \pi _1^{(1)}(x)= & {} -\frac{\left[ h_2(x,Y_0(x))\pi _2^{(1)}(Y_0(x))+h_0(x,Y_0(x))\pi _{0,0}(1)\right] h_1(x,Y_1(x))}{h_1(x,Y_0(x))h_1(x,Y_1(x))}\\= & {} -\frac{\left[ h_2(x,Y_0(x))\pi _2^{(1)}(Y_0(x))+h_0(x,Y_0(x))\pi _{0,0}(1)\right] h_1(x,Y_1(x))}{\frac{\hat{\mu }_2}{\alpha (\lambda +\mu _1)^2}(x-1)g(x)}\\= & {} -\frac{\left[ h_2(x,Y_0(x))\pi _2^{(1)}(Y_0(x))+h_0(x,Y_0(x))\pi _{0,0}(1)\right] h_1(x,Y_1(x))}{\frac{\hat{\mu }_2}{\alpha (\lambda +\mu _1)}(x-1)\lambda \lambda _1(x-x_-)(x-x_+)}. \end{aligned}$$

It follows that

$$\begin{aligned} \displaystyle \lim _{x\rightarrow x_+}\bigg (1-\frac{x}{x_+}\bigg ) \pi _1^{(1)}(x)=C_{1,0}. \end{aligned}$$

(ii) If \(x_{dom}=x_3=x^*=x_+\), then \(h_1(x_{dom},Y_0(x_{dom}))=0\). In this case, we can rewrite \(\pi _1^{(1)}(x)\) as

$$\begin{aligned} \pi _1^{(1)}(x)=\frac{-h_2(x,Y_0(x))\pi _2^{(1)}(Y_0(x))-h_0(x,Y_0(x))\pi _{0,0}(1)}{\sqrt{1-x/x_{dom}}\left[ \sqrt{1-x/x_{dom}}p_1^*(x)+q_1(x)\right] }. \end{aligned}$$

It follows that

$$\begin{aligned}&\displaystyle \lim _{x\rightarrow x_{dom}}\sqrt{1-x/x_{dom}}\pi _1^{(1)}(x)\\&\quad \quad =\frac{h_2(x_{dom},Y_0(x_{dom}))\pi _2^{(1)}(Y_0(x_{dom}))+h_0(x_{dom},Y_0(x_{dom}))\pi _{0,0}(1)}{-a_1(x_{dom})q(x_{dom})}=C_{2,0}. \end{aligned}$$

(iii) If \(x_{dom}=x_3<x^*\), let

$$\begin{aligned} T(x,y)=\frac{-h_2(x,Y_0(x))\pi _2^{(1)}(Y_0(x))-h_0(x,Y_0(x))\pi _{0,0}(1)}{h_1(x,Y_0(x))}. \end{aligned}$$

Then the derivative of \(\pi _1^{(1)}(x)\) is given by

$$\begin{aligned} \pi _1^{'(1)}(x)=\frac{\partial T}{\partial x}+\frac{\partial T}{\partial y}\frac{dY_0(x)}{dx} \end{aligned}$$

with

$$\begin{aligned} \frac{dY_0(x)}{dx}=p'(x)+q'(x)\sqrt{1-x/x_{dom}}-\frac{q(x)}{2x_{dom}\sqrt{1-x/x_{dom}}}, \end{aligned}$$

where p(x) and q(x) are defined in Eq. (5.1). It is obvious that \(\displaystyle \lim _{x\rightarrow x_3}\sqrt{1-x/x_3} \frac{dY_0(x)}{dx}=-\frac{q(x_3)}{2x_3}\), that \(\displaystyle \lim _{x\rightarrow x_3}\sqrt{1-x/x_3} \frac{\partial T}{\partial x}=0\), and that \(\frac{\partial T}{\partial y}\) is continuous at \((x_3,Y_0(x_3))\). Hence,

$$\begin{aligned} \displaystyle \lim _{x\rightarrow x_{3}}\sqrt{1-x/x_3}\pi _1^{'(1)}(x)= & {} -\frac{q(x_3)}{2x_3}\frac{\partial T}{\partial y}\bigg |_{(x_3,Y_0(x_3))}=C_{3,0}. \end{aligned}$$

\(\square \)

6 Tail asymptotic properties in stationary probabilities

Exact tail asymptotic properties in stationary probabilities are obtained directly from the corresponding asymptotic properties of the unknown generating function by applying the following Tauberian-like theorem. This theorem originated from Bender (1974), and more complete versions can be found in Flajolet and Sedgewick (2009), which include the following theorem as a special case.

Theorem 6.1

(Tauberian-like theorem for a single singularity) Let \(A(z)=\sum _{n\ge 0}a_n z^n\) be analytic at zero with radius of convergence R. Suppose that R is a singularity of A(z) and that A(z) can be analytically continued to a \(\Delta \)-domain at R. If, for a real number \(\beta \notin \{0,-1,-2, \ldots \}\),

$$\begin{aligned} \lim _{z\rightarrow R}(1-z/R)^{\beta }A(z)=g, \end{aligned}$$

where g is a non-zero constant, then

$$\begin{aligned} a_n \sim \frac{g}{\Gamma (\beta )}n^{\beta -1}R^{-n}, \end{aligned}$$

where \(\Gamma (\beta )\) is the value of the Gamma function at \(\beta \), and \(a_n \sim b_n\) means \(\lim _{n\rightarrow \infty } a_n/b_n =1\).
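As a simple numerical illustration of Theorem 6.1 (not related to the retrial model), take \(A(z)=(1-z)^{-1/2}\), so that \(R=1\), \(\beta =1/2\) and \(g=1\); its coefficients \(a_n=\binom{2n}{n}4^{-n}\) should behave like \(n^{-1/2}/\Gamma (1/2)\).

```python
# Sketch: coefficients of (1 - z)^(-1/2) versus the Tauberian-like prediction.
import math

for n in (10, 100, 1000, 10000):
    a_n = math.comb(2 * n, n) / 4 ** n            # exact coefficient
    predicted = n ** (-0.5) / math.gamma(0.5)     # g / Gamma(beta) * n^(beta-1) * R^(-n)
    print(n, round(a_n, 6), round(predicted, 6), round(a_n / predicted, 4))
```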

The Tauberian-like theorem states that the tail behaviour of the sequence of coefficients in the Taylor expansion of an analytic function corresponds to the asymptotic property of the function at its dominant singularity. In the following subsections, we show how to apply Theorem 6.1 to characterize the tail behaviour of the joint probabilities \(\pi _{m,n}(k)\) for a fixed number n of customers in orbit 2. Specifically, in Sect. 6.1, we provide a characterization of the tail asymptotics, when the server is busy, for the sequences of: (1) boundary probabilities \(\pi _{m,0}(1)\); (2) marginal probabilities \(\pi _m^{(1)}=\sum _{n=1}^{\infty }\pi _{m,n}(1)\); (3) joint probabilities \(\pi _{m,n}(1)\) for a fixed \(n > 0\) (along the direction of queue one). In Sect. 6.2, when the server is idle, we provide a characterization of the tail asymptotics for \(\pi _{m,n}(0)\) for a fixed n and for the marginal distribution \(\pi _m^{(0)}=\sum _{n=1}^{\infty }\pi _{m,n}(0)\).

Remark 6.1

By symmetry, tail behaviour in \(\pi _{m,n}(1)\) and \(\pi _{m,n}(0)\) for a fixed number m of customers in orbit 1 (and also in the marginal distributions for the second queue length when the server is busy and idle, respectively) can be easily stated and similarly proved.

6.1 Exact tail asymptotics when the server is busy

First, we consider the sequence \(\pi _{m,0}(1)\) of the boundary probabilities. When the second queue is empty and the server is busy, the exact tail asymptotic behaviour of the stationary probability sequence \(\pi _{m,0}(1)\) along the increasing direction of the first queue is a direct consequence of the characterization of the asymptotic property for the function \(\pi _1^{(1)}(x)\) in Theorem 5.1 and the Tauberian-like theorem (Theorem 6.1).

Theorem 6.2

For the stable retrial queue with two input streams and two orbits studied in this paper, when m is large, we have three types of tail asymptotic properties for the boundary probabilities \(\pi _{m,0}(1)\):

  • Type 1: (Exact geometric decay) Under Condition 1,

    $$\begin{aligned} \pi _{m,0}(1)\sim C_{1,0}\left( \frac{1}{x_+}\right) ^{m-1}, \quad m\ge 1; \end{aligned}$$
  • Type 2: (Geometric decay with prefactor \(m^{-1/2}\)) Under Condition 2,

    $$\begin{aligned} \pi _{m,0}(1)\sim \frac{C_{2,0}}{\sqrt{\pi }}m^{-\frac{1}{2}}\left( \frac{1}{x_{dom}}\right) ^{m-1}, \quad m\ge 1; \end{aligned}$$
  • Type 3: (Geometric decay with prefactor \(m^{-3/2}\)) Under Condition 3,

    $$\begin{aligned} \pi _{m,0}(1)\sim \frac{C_{3,0}}{\sqrt{\pi }}m^{-\frac{3}{2}}\left( \frac{1}{x_3}\right) ^{m-2}, \quad m\ge 1. \end{aligned}$$

Here, constants \(C_{i,0}\) (\(i=1,2,3\)) are given in Theorem 5.1.

Remark 6.2

One may notice that in Type 3, the power of the decay rate is \(m-2\) instead of \(m-1\) since the Tauberian-like theorem is applied to the derivative of the function.

For characterizing the asymptotic behaviour of the marginal probability \(\pi _m^{(1)}=\sum _{n=1}^{\infty }\pi _{m,n}(1)\), we compute \(\pi ^{(1)}(x,1)\),

$$\begin{aligned} \pi ^{(1)}(x,1)= & {} \frac{h_1(x,1)\pi _1^{(1)}(x)+h_2(x,1)\pi _2^{(1)}(1)+h_0(x,1)\pi _{0,0}(1)}{-h(x,1)}\\= & {} -\frac{\frac{1}{(\lambda +\mu _1)}\left[ \lambda _1(\lambda +\mu _1)x-\hat{\mu }_1\right] \pi _1^{(1)}(x)+\lambda _1\pi _2^{(1)}(1)+\lambda _1\pi _{0,0}(1)}{\lambda _1x-\frac{\hat{\mu }_1}{\alpha }}\\= & {} \frac{\alpha }{\lambda +\mu _1}\frac{\left[ \lambda _1(\lambda +\mu _1)x-\hat{\mu }_1\right] \pi _1^{(1)}(x)+\lambda _1(\lambda +\mu _1)(\pi _2^{(1)}(1)+\pi _{0,0}(1))}{\hat{\mu }_1(1-\frac{\hat{\lambda }_1}{\hat{\mu }_1}x)}. \end{aligned}$$

If \(\hat{\lambda }_2\ne \hat{\mu }_2\), it follows from (4.2) that \(\hat{\mu }_1/\hat{\lambda }_1<x_3\). Therefore, from Lemma 4.4 we can claim that \(1<\hat{\mu }_1/\hat{\lambda }_1<\min (x^*,x_3)\) always holds. Obviously, \(\hat{\mu }_1/\hat{\lambda }_1\) is the dominant singularity of \(\pi ^{(1)}(x,1)\), and it is a simple pole. If \(\hat{\lambda }_2=\hat{\mu }_2\), then from Lemma 4.2-(iii), we have \(x_3=\hat{\mu }_1/\hat{\lambda }_1\). Again, according to Lemma 4.4, the dominant singularity of \(\pi _1^{(1)}(x)\) is \(x_3=\hat{\mu }_1/\hat{\lambda }_1<x^*=+\infty \). Notice that \(\lim _{x\rightarrow x_3} \pi _1^{(1)}(x)\) is finite. Therefore, the Tauberian-like theorem can still be applied.

Theorem 6.3

(i)

$$\begin{aligned} \displaystyle \lim _{x\rightarrow \hat{\mu }_1/\hat{\lambda }_1}\bigg (1-\frac{x}{\hat{\mu }_1/\hat{\lambda }_1}\bigg )\pi ^{(1)}(x,1)=C_m, \end{aligned}$$

where

$$\begin{aligned} C_m= -\frac{\mu _2}{\lambda +\mu _1}\pi _1^{(1)}(\hat{\mu }_1/\hat{\lambda }_1)+\frac{\hat{\lambda }_1}{\hat{\mu }_1}\big (\pi _2^{(1)}(1)+\pi _{0,0}(1)\big ); \end{aligned}$$
(6.1)

and (ii) the marginal probabilities \(\pi _m^{(1)}\) have an exact geometric decay with rate \(\hat{\lambda }_1/\hat{\mu }_1=1/x_{dom}\), where \(x_{dom}=\hat{\mu }_1/\hat{\lambda }_1\):

$$\begin{aligned} \pi _m^{(1)}\sim C_m\left( \frac{\hat{\lambda }_1}{\hat{\mu }_1}\right) ^{m-1}. \end{aligned}$$

Remark 6.3

It should be noticed that one may consider \(\sum _{n=0}^{\infty } (\pi _{m,n}(1)+\pi _{m,n}(0))\) as the usual marginal distribution of the first queue. Its tail asymptotic property can be easily obtained since the properties for \(\pi _m^{(1)}\) and \(\pi _{m,0}(1)\) have been studied, and the properties for \(\pi _m^{(0)}=\sum _{n=1}^{\infty }\pi _{m,n}(0)\) and \(\pi _{m,0}(0)\) can be similarly obtained.

Next, the exact tail asymptotic behaviour for joint probabilities can be obtained from the recursive relationship of the generating functions \(\varphi _n(x)\), defined by

$$\begin{aligned} \varphi _n(x)=\sum _{m=1}^{\infty }\pi _{m,n}(1)x^{m-1}, \quad n\ge 0. \end{aligned}$$

It is clear that \(\varphi _0(x)=\pi _1^{(1)}(x)\). From the balance equations of the censored random walk, we can obtain

$$\begin{aligned} c(x)\varphi _1(x)+b_1(x)\varphi _0(x)= & {} a_0^*(x), \end{aligned}$$
(6.2)
$$\begin{aligned} c(x)\varphi _2(x)+b(x)\varphi _1(x)+a_1(x)\varphi _0(x)= & {} a_1^*(x), \end{aligned}$$
(6.3)
$$\begin{aligned} c(x)\varphi _{n+1}(x)+b(x)\varphi _n(x)+a(x)\varphi _{n-1}(x)= & {} a_n^*(x), \quad n\ge 2, \end{aligned}$$
(6.4)

where

$$\begin{aligned} a_0^*(x)= & {} -c_2(x)\pi _{0,1}-b_0(x)\pi _{0,0}, \\ a_1^*(x)= & {} -c_2(x)\pi _{0,2}-b_2(x)\pi _{0,1}-a_0(x)\pi _{0,0}, \\ a_n^*(x)= & {} -c_2(x)\pi _{0,n+1}-b_2(x)\pi _{0,n}-a_2(x)\pi _{0,n-1}, \quad n\ge 2. \end{aligned}$$

Rewrite (6.4) as

$$\begin{aligned} \varphi _{n+1}(x)=\frac{-b(x)\varphi _n(x)-a(x)\varphi _{n-1}(x)+a_n^*(x)}{c(x)}, \quad n\ge 2, \end{aligned}$$

and note that \(c(x)=p_{0,-1}x\). Hence, we have established that \(\varphi _n(x)\) has the same singularities as \(\varphi _0(x)\), since the zero of c(x) (namely \(x=0\)) is not a pole of \(\varphi _n(x)\) for any \(n\ge 0\).

By adopting Theorem 7.1 and Lemma 7.2 in Li and Zhao (2012) directly, we define

$$\begin{aligned} A_i(x_{dom})=-\frac{b_1(x_{dom})}{c(x_{dom})}C_{i,0}, ~~ i=1,2,3 \quad \text {and} \quad B_3(x_3)=-\frac{p_1(x_3)}{c(x_3)}C_{3,0}, \end{aligned}$$

then we can conclude the results in the following theorem.

Theorem 6.4

Corresponding to the three types in Theorem 5.1, when m is large, we have the following tail asymptotic properties for the joint probabilities \(\pi _{m,n}(1)\) for a fixed n:

  • Type 1: (Exact geometric decay)

    $$\begin{aligned} \pi _{m,n}(1)\sim A_1(x_+)\bigg (\frac{1}{Y_1(x_+)}\bigg )^{n-1}\left( \frac{1}{x_+}\right) ^{m-1}, \quad n\ge 1; \end{aligned}$$
  • Type 2: (Geometric decay with prefactor \(m^{-1/2}\))

    $$\begin{aligned} \pi _{m,n}(1)\sim \frac{A_2(x_{dom})}{\sqrt{\pi }}\bigg (\frac{1}{Y_1(x_{dom})}\bigg )^{n-1}m^{-\frac{1}{2}}\left( \frac{1}{x_{dom}}\right) ^{m-1}, \quad n\ge 1; \end{aligned}$$
  • Type 3: (Geometric decay with prefactor \(m^{-3/2}\))

    $$\begin{aligned} \pi _{m,n}(1)\sim \frac{[A_3(x_3)+(n-1)B_3(x_3)]}{\sqrt{\pi }}\bigg (\frac{1}{Y_1(x_3)}\bigg )^{n-1}m^{-\frac{3}{2}}\left( \frac{1}{x_3}\right) ^{m-2}, \quad n\ge 1. \end{aligned}$$

6.2 Exact tail asymptotics when the server is idle

Having obtained the exact tail asymptotic properties of the boundary, marginal and joint distributions for \(I(t)=1\) (i.e., the server is busy), we can now study the tail asymptotic properties for \(I(t)=0\) (i.e., the server is idle) based on the relationship given in (2.2).

Setting \(y=0\) in (2.2) leads to

$$\begin{aligned} (\lambda +\mu _1)P^{(0)}(x,0)=\mu P^{(1)}(x,0)+\mu _1P^{(0)}(0,0), \end{aligned}$$
(6.5)

which means that \(P^{(0)}(x,0)\) and \(P^{(1)}(x,0)\) have the same asymptotic property.

Similarly, setting \(y=1\) in (2.2) leads to

$$\begin{aligned} \alpha P^{(0)}(x,1)=\mu P^{(1)}(x,1)+\mu _2 P^{(0)}(x,0)+\mu _1 P^{(0)}(0,1). \end{aligned}$$
(6.6)

Substituting (6.5) into (6.6) gives

$$\begin{aligned} \alpha P^{(0)}(x,1)=\mu P^{(1)}(x,1)+\frac{\mu \mu _2}{\lambda +\mu _1} P^{(1)}(x,0)+\frac{\mu _1\mu _2}{\lambda +\mu _1}P^{(0)}(0,0) + \mu _1 P^{(0)}(0,1). \end{aligned}$$

Since the asymptotic property at the dominant singularity of \(P^{(0)}(x,1)\) is dominated by the asymptotic property of the function \(\mu P^{(1)}(x,1)\), \(P^{(0)}(x,1)\) and \(P^{(1)}(x,1)\) have the same asymptotic property. Based on the above, we have the following conclusion:

Theorem 6.5

Assume that the retrial queue with two input streams and two orbits is stable.

  1. (i)

    For large m, corresponding to the three types in Theorem 6.2, we have the following tail asymptotic properties for the boundary probabilities \(\pi _{m,0}(0)\):

    • Type 1: (Exact geometric decay)

      $$\begin{aligned} \pi _{m,0}(0)\sim \frac{\mu }{\lambda +\mu _1}C_{1,0}\left( \frac{1}{x_+}\right) ^{m-1}, \quad m\ge 1; \end{aligned}$$
    • Type 2: (Geometric decay with prefactor \(m^{-1/2}\))

      $$\begin{aligned} \pi _{m,0}(0)\sim \frac{\mu }{\lambda +\mu _1}\frac{C_{2,0}}{\sqrt{\pi }}m^{-\frac{1}{2}}\left( \frac{1}{x_{dom}}\right) ^{m-1}, \quad m\ge 1; \end{aligned}$$
    • Type 3: (Geometric decay with prefactor \(m^{-3/2}\))

      $$\begin{aligned} \pi _{m,0}(0)\sim \frac{\mu }{\lambda +\mu _1}\frac{C_{3,0}}{\sqrt{\pi }}m^{-\frac{3}{2}}\left( \frac{1}{x_3}\right) ^{m-2}, \quad m\ge 1. \end{aligned}$$

    Here, constants \(C_{i,0}\ (i=1,2,3)\) are given in Theorem 5.1.

  2. (ii)

    The tail asymptotic property of the marginal distribution \(\pi _m^{(0)}=\sum _{n=1}^{\infty }\pi _{m,n}(0)\) is determined by

    $$\begin{aligned} \pi _m^{(0)}\sim \frac{\mu }{\alpha }C_m\left( \frac{\hat{\lambda }_1}{\hat{\mu }_1}\right) ^{m-1}, \end{aligned}$$

    where \(C_m\) is provided by (6.1).

We finally characterize the tail asymptotic behaviour for the joint probabilities \(\pi _{m,n}(0)\) for a fixed \(n > 0\). Define the generating function

$$\begin{aligned} G_n^{(k)}(x)=\sum _{m=0}^{\infty }\pi _{m,n}(k)x^{m}, \quad k=0,1, ~~ n\ge 1. \end{aligned}$$

Referring to equation (14) in Avrachenkov et al. (2014), we have

$$\begin{aligned} \alpha G_n^{(0)}(x)-\mu G_n^{(1)}(x)=\mu _1\pi _{0,n}(0), \end{aligned}$$

which obviously leads to the following theorem.

Theorem 6.6

Corresponding to the three types in Theorem 5.1, when m is large, we have the following tail asymptotic properties of the joint probabilities \(\pi _{m,n}(0)\) for a fixed n:

  • Type 1: (Exact geometric decay) Under Condition 1,

    $$\begin{aligned} \pi _{m,n}(0)\sim \frac{\mu }{\alpha }A_1(x_+)\bigg (\frac{1}{Y_1(x_+)}\bigg )^{n-1}\left( \frac{1}{x_+}\right) ^{m-1}, \quad n\ge 1; \end{aligned}$$
  • Type 2: (Geometric decay with prefactor \(m^{-1/2}\)) Under Condition 2,

    $$\begin{aligned} \pi _{m,n}(0)\sim \frac{\mu }{\alpha }\frac{A_2(x_{dom})}{\sqrt{\pi }}\bigg (\frac{1}{Y_1(x_{dom})}\bigg )^{n-1}m^{-\frac{1}{2}}\left( \frac{1}{x_{dom}}\right) ^{m-1}, \quad n\ge 1; \end{aligned}$$
  • Type 3: (Geometric decay with prefactor \(m^{-3/2}\)) Under Condition 3,

    $$\begin{aligned} \pi _{m,n}(0)\sim \frac{\mu }{\alpha }\frac{[A_3(x_3)+(n-1)B_3(x_3)]}{\sqrt{\pi }}\bigg (\frac{1}{Y_1(x_3)}\bigg )^{n-1}m^{-\frac{3}{2}}\left( \frac{1}{x_3}\right) ^{m-2}, \quad n\ge 1. \end{aligned}$$

7 Concluding remarks

In this paper, we considered the exact tail asymptotic behaviour of a retrial queue with two input streams and two orbits. Partitioned according to the two states of the server, this model is formulated as a random walk in the quarter plane whose transition probabilities are modulated by a two-state Markov chain (idle or busy). Our work is a revisit of the same model studied in Avrachenkov et al. (2014). While the study in Avrachenkov et al. (2014) is based on the solution to a BVP, we employed a different method, the kernel method. The main advantage of this method is that a full determination of the unknown generating function is not needed; instead, we only need the location of the dominant singularity of the unknown function and the asymptotic property of the function at this singularity. By this method, tail asymptotic properties in stationary probabilities for the model are obtained when the first queue size is large. Due to symmetry, it is not difficult to state and (similarly) prove parallel exact tail asymptotic properties when the second queue size is large. In addition, exact tail asymptotic results for other probability sequences formed from the joint stationary probabilities can also be considered. For example, we can consider the total number of customers in the system as follows: let

$$\begin{aligned} \pi _T=\sum _{\begin{array}{c} m,n: \\ m+n=T \end{array}}\pi _{m,n} \end{aligned}$$

and we compute \(\pi ^{(1)}(x,x)\), according to (3.1):

$$\begin{aligned} \pi ^{(1)}(x,x)=-\frac{\big (\lambda x-\frac{\hat{\mu }_1}{\lambda +\mu _1}\big )\pi ^{(1)}_1(x)+\big (\lambda x-\frac{\hat{\mu }_2}{\lambda +\mu _2}\big )\pi ^{(1)}_2(x)+\lambda \pi _{0,0}(1)}{x(\lambda x-\frac{\hat{\mu }_1+\hat{\mu }_2}{\alpha })}. \end{aligned}$$

Then, the dominant singularity is determined by comparing \(x=(\hat{\mu }_1+\hat{\mu }_2)/\hat{\lambda }\) to the dominant singularities of \(\pi ^{(1)}_1(x)\) and \(\pi ^{(1)}_2(x)\), and therefore the asymptotic property at its dominant singularity is determined. The exact tail asymptotic property is a consequence of the Tauberian-like theorem.
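For the illustrative rates used in the earlier sketches, this comparison reads as follows (again, not part of the paper; the value of \(x_{dom}\) is taken from the Sect. 4 sketch, and by the symmetry of the chosen rates \(\pi ^{(1)}_2\) has the same dominant singularity).

```python
# Sketch: locate the dominant singularity of pi^{(1)}(x,x) for illustrative rates.
lam1, lam2, mu, mu1, mu2 = 0.05, 0.05, 0.5, 0.2, 0.2
lam = lam1 + lam2
alpha = lam + mu1 + mu2
lam_hat, m1h, m2h = alpha * lam, mu * mu1, mu * mu2

pole_candidate = (m1h + m2h) / lam_hat    # zero of the denominator above
x_dom_pi1 = 5.2361                        # dominant singularity of pi_1^{(1)}(x) (Sect. 4 sketch)
print("candidate pole:", pole_candidate, " x_dom of pi_1^{(1)}:", x_dom_pi1)
print("dominant singularity of pi^{(1)}(x,x):", min(pole_candidate, x_dom_pi1))
```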

This paper used a censored chain to convert the matrix-form fundamental form into a usual (scalar) fundamental form. This conversion is not always feasible, since explicit expressions might not exist for the censored chain. A general method is to solve the matrix-form fundamental form to obtain a relationship between the generating functions for different states of the modulated chain. For example, the chain censored on the idle state does not have an explicit expression for its transition matrix. However, using the relationships (2.2) and (2.3) obtained by solving the matrix-form fundamental form, we have the following functional equation:

$$\begin{aligned} R(x,y)P^{(0)}(x,y)=A(x,y)P^{(0)}(x,0)+B(x,y)P^{(0)}(0,y),\quad |x|\le 1,|y|\le 1, \end{aligned}$$

with

$$\begin{aligned} R(x,y)&=\hat{\lambda }_1(1-x)xy+\hat{\lambda }_2(1-y)xy-\hat{\mu }_1(1-x)y-\hat{\mu }_2(1-y)x,\\ A(x,y)&=\left[ (1-y)(\lambda _2 y-\mu )+\lambda _1(1-x)y\right] \mu _2 x,\\ B(x,y)&=\left[ (1-x)(\lambda _1 x-\mu )+\lambda _2(1-y)x\right] \mu _1 y, \end{aligned}$$

which is equivalent to:

$$\begin{aligned} R(x,y)\pi ^{(0)}(x,y)= & {} \frac{A(x,y)-R(x,y)}{y}\pi _1^{(0)}(x)\\&\,+\,\frac{B(x,y)-R(x,y)}{x}\pi _2^{(0)}(y)+\frac{A(x,y)+B(x,y)-R(x,y)}{xy}\pi _{0,0}(0). \end{aligned}$$

After some calculations, the above equation can also be written as

$$\begin{aligned} -\hat{h}^{(0)}(x,y)\pi ^{(0)}(x,y)=\hat{h}_1^{(0)}(x,y)\pi _1^{(0)}(x)+\hat{h}_2^{(0)}(x,y)\pi _2^{(0)}(y)+\hat{h}_0^{(0)}(x,y)\pi _{0,0}(0), \end{aligned}$$

where

$$\begin{aligned} \hat{h}^{(0)}(x,y)&=[\hat{\lambda }_1x+\hat{\lambda }_2y+\hat{\mu }_1x^{-1}+\hat{\mu }_2y^{-1}-(\hat{\lambda }+\hat{\mu }_1+\hat{\mu }_2)]xy,\\ \hat{h}_1^{(0)}(x,y)&=[\lambda _1(\lambda +\mu _1)x+\lambda _2(\lambda +\mu _1)y+\hat{\mu }_1x^{-1}-\lambda (\lambda +\mu _1)-\hat{\mu }_1]x,\\ \hat{h}_2^{(0)}(x,y)&=[\lambda _1(\lambda +\mu _2)x+\lambda _2(\lambda +\mu _2)y+\hat{\mu }_2y^{-1}-\lambda (\lambda +\mu _2)-\hat{\mu }_2]y,\\ \hat{h}_0^{(0)}(x,y)&=\lambda \lambda _1x+\lambda \lambda _2y-\lambda ^2. \end{aligned}$$

The above functional equation is the fundamental form corresponding to a random walk defined by

$$\begin{aligned} \hat{p}_{1,0}= & {} \hat{\lambda }_1, \quad \hat{p}_{0,1}=\hat{\lambda }_2, \quad \hat{p}_{-1,0}=\hat{\mu }_1, \quad \hat{p}_{0,-1}=\hat{\mu }_2, \quad \hat{p}_{0,0}=1-(\hat{\lambda }+\hat{\mu }_1+\hat{\mu }_2),\\ \hat{p}_{1,0}^{(1)}= & {} \lambda _1(\lambda +\mu _1), \quad \hat{p}_{0,1}^{(1)}=\lambda _2(\lambda +\mu _1), \quad \hat{p}_{-1,0}^{(1)}=\hat{\mu }_1, \quad \hat{p}_{0,0}^{(1)}=1-[\lambda (\lambda +\mu _1)+\hat{\mu }_1],\\ \hat{p}_{1,0}^{(2)}= & {} \lambda _1(\lambda +\mu _2), \quad \hat{p}_{0,1}^{(2)}=\lambda _2(\lambda +\mu _2), \quad \hat{p}_{0,-1}^{(2)}=\hat{\mu }_2, \quad \hat{p}_{0,0}^{(2)}=1-[\lambda (\lambda +\mu _2)+\hat{\mu }_2],\\ \hat{p}_{1,0}^{(0)}= & {} \lambda \lambda _1, \quad \hat{p}_{0,1}^{(0)}=\lambda \lambda _2, \quad \hat{p}_{0,0}^{(0)}=1-\lambda ^2. \end{aligned}$$
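As a brief check (not part of the paper), the transition masses of this derived random walk sum to one on each face; with the illustrative rates used earlier:

```python
# Sketch: face sums of the derived random walk for the idle-server analysis.
lam1, lam2, mu, mu1, mu2 = 0.05, 0.05, 0.5, 0.2, 0.2
lam = lam1 + lam2
alpha = lam + mu1 + mu2
l1h, l2h, lam_hat = alpha * lam1, alpha * lam2, alpha * lam
m1h, m2h = mu * mu1, mu * mu2

interior = [l1h, l2h, m1h, m2h, 1 - (lam_hat + m1h + m2h)]
face1 = [lam1 * (lam + mu1), lam2 * (lam + mu1), m1h, 1 - (lam * (lam + mu1) + m1h)]
face2 = [lam1 * (lam + mu2), lam2 * (lam + mu2), m2h, 1 - (lam * (lam + mu2) + m2h)]
face0 = [lam * lam1, lam * lam2, 1 - lam ** 2]

for name, face in [("interior", interior), ("face 1", face1),
                   ("face 2", face2), ("face 0", face0)]:
    print(name, round(sum(face), 12))     # each equals 1.0
```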

We can now apply the kernel method to the resulting fundamental form to obtain exact tail asymptotic properties for probabilities with an idle server.

Finally, we emphasize that this work serves as an illustration of how the kernel method can be applied to random walks modulated by a finite-state Markov chain that have a structural property similar to the one possessed by the retrial queue model considered here.