
1 Introduction

In queueing theory there exists a special class of systems characterized by the following behavior: if an arriving call finds the server busy, then instead of queueing in front of the server it goes into an orbit, from where, after some random time, it tries to occupy the server again. Such models with orbits are called retrial queueing systems, or RQ-systems [1,2,3,4,5].

On the other hand, tandem queueing systems occupy an intermediate position between single-node queues and queueing networks: such systems can be considered as queueing networks with a linear topology [6]. Furthermore, tandem RQ-systems can be used to model processing in which incoming requests are serviced sequentially at several stages. The need for sequential service arises in processing requests in call centers [7,8,9], in controlling the data flow between elements of a multi-agent robotic system [10], etc. Tandem queueing networks are extensively studied; in such systems, if the buffer is full, the request is lost [11]. In contrast to this, we study tandem systems with an orbit of infinite capacity. Studies in this area have already been carried out by several authors. In [14], a model with a correlated arrival flow is considered, and the operation of the second station is described by a Markov chain.

Retrial tandem queues have been studied by several authors. Avrachenkov and Yechiali [2, 3] study the model with a constant retrial rate and obtain some analytic and approximate results. However, as far as we know, tandem queues with the classical retrial policy, i.e., where the retrial rate is proportional to the number of customers in the orbit, are less studied in the literature. We are aware of only one related work in this line, by Phung-Duc [13], in which the author studies a tandem retrial queue where only blocked customers at the first server join the orbit, while those blocked at the second one are lost. In that model, the explicit joint distribution of the queue length and the state of the servers is obtained. It should be noted that the loss at the second server makes the model simpler and allows the explicit solution.

In contrast to this, the underlying Markov chain of the model in the current paper is non-homogeneous because the retrial rate is proportional to the number of customers in the orbit. As a result, the model can be formulated as a level-dependent quasi-birth-and-death (QBD) process, where the level is the number of customers in the orbit and the phase represents the states of the servers. However, it is well known that a level-dependent QBD does not admit an analytical solution in general, and our model is no exception. Thus, our aim in this paper is to obtain an explicit form for the distribution of the number of customers in the orbit under an asymptotic condition. To this end, we study the model in a special regime, namely the case in which the retrial rate is extremely small. Under this regime, the number of customers in the orbit explodes; however, after an appropriate scaling, the scaled number of customers in the orbit follows a proper distribution. The main tool for deriving our results is the method of asymptotic analysis [12] under the condition of a large delay of calls in the orbit. We also validate the accuracy of the analytical results by comparing them with simulations.

The rest of our paper is organized as follows. In Sect. 2, we present the model in detail. Section 3 presents the Kolmogorov equations for the model, while Sect. 4 shows the asymptotic analysis. In Sect. 5, we utilize the asymptotic results to build an approximation and validate it by comparing with simulation. Section 6 concludes our paper.

2 Mathematical Model and Problem Statement

We consider a retrial queueing system with a Poisson arrival process of incoming calls with rate \(\lambda \) and two sequentially connected servers (see Fig. 1). Upon the arrival of a call, if the first server is free, the call occupies it. The call is served for a random time exponentially distributed with parameter \(\mu _1\) and then tries to move to the second server. If the second server is free, the call occupies it and is served for a random time exponentially distributed with parameter \(\mu _2\). If the first server is busy when a call arrives, the call instantly goes to the orbit, stays there for an exponentially distributed time with parameter \(\sigma \), and then tries to occupy the first server again. If, after completing service at the first server, the call finds that the second server is busy, it instantly goes to the same orbit, where, after an exponentially distributed delay with parameter \(\sigma \), it tries to move to the first server for service again.

Fig. 1. Tandem RQ-system.

Let us denote:

Process \(\textit{N}_{1}(\textit{t})\) - the state of the first server at time \(\textit{t}\): 0, if the server is free; 1, if the server is busy;

Process \(\textit{N}_{2}(\textit{t})\) - the state of the second server at time \(\textit{t}\): 0, if the server is free; 1, if the server is busy;

Process \(\textit{I}(\textit{t})\) - the number of calls in the orbit at time \(\textit{t}\).

The goal of the study is to obtain the stationary probability distribution of the number of calls in the orbit \(\textit{I}(\textit{t})\) and the probability distribution of servers’ states in the considered system.
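Before turning to the analysis, the dynamics described above can be checked with a small stochastic simulation. The sketch below is our own illustration (not part of the original study; all names and parameter defaults are ours): a Gillespie-type simulation of the chain that returns the time-averaged distribution of the orbit size. Retrial events are scheduled only while the first server is free, which is equivalent to the classical retrial dynamics for this chain.

```python
import random

def simulate_orbit(lam, mu1, mu2, sigma, t_max=100000.0, seed=1):
    """Gillespie-type simulation of the tandem RQ-system.

    Returns the time-averaged distribution of the orbit size I(t)."""
    rng = random.Random(seed)
    n1 = n2 = i = 0          # server states and orbit size
    t = 0.0
    weight = {}              # time spent at each orbit size
    while t < t_max:
        rates = [
            ("arrival", lam),
            ("retrial", i * sigma if n1 == 0 else 0.0),
            ("serve1", mu1 if n1 == 1 else 0.0),
            ("serve2", mu2 if n2 == 1 else 0.0),
        ]
        total = sum(r for _, r in rates)
        dt = rng.expovariate(total)
        weight[i] = weight.get(i, 0.0) + dt
        t += dt
        u, acc, event = rng.random() * total, 0.0, "arrival"
        for name, r in rates:      # pick the event proportionally to its rate
            acc += r
            if u <= acc:
                event = name
                break
        if event == "arrival":
            if n1 == 0:
                n1 = 1
            else:
                i += 1             # blocked arrival joins the orbit
        elif event == "retrial":
            n1, i = 1, i - 1       # successful retrial seizes server 1
        elif event == "serve1":
            if n2 == 0:
                n1, n2 = 0, 1      # call moves to the second server
            else:
                n1, i = 0, i + 1   # server 2 busy: call joins the orbit
        else:
            n2 = 0                 # service completion at server 2
    return {k: v / t for k, v in sorted(weight.items())}
```

Simulations of this kind are what we later compare the analytical approximation against.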

3 Derivation of Differential Kolmogorov Equations

We define probabilities

$$\begin{aligned} P_{n_{1}n_{2}}(i,t)=P\{N_{1}(t)=n_{1},N_{2}(t)=n_{2}, I(t)=i\}; n_{1}=0,1; n_{2}=0,1. \end{aligned}$$
(1)

The three-dimensional process \(\{N_{1}(t),N_{2}(t), I(t)\}\) is a Markov chain. For the probability distribution (1) we can write the system of differential Kolmogorov equations:

$$\begin{aligned} \begin{array}{c} \frac{{\partial P_{00}(i,t)}}{{\partial t}} = -(\lambda + i \sigma )P_{00}(i,t) + \mu _{2}P_{01}(i,t), \\ \frac{{\partial P_{10}(i,t)}}{{\partial t}} = \lambda P_{00}(i,t) + (i+1)\sigma P_{00}(i+1,t) - (\lambda +\mu _{1})P_{10}(i,t)\\ +\lambda P_{10}(i-1,t)+\mu _{2} P_{11}(i,t), \\ \frac{{\partial P_{01}(i,t)}}{{\partial t}} = \mu _{1}P_{10}(i,t)-(\lambda +i\sigma + \mu _{2})P_{01}(i,t) + \mu _{1}P_{11}(i-1,t), \\ \frac{{\partial P_{11}(i,t)}}{{\partial t}} = \lambda P_{01}(i,t)+(i+1)\sigma P_{01}(i+1,t)-(\lambda + \mu _{1}+\mu _{2})P_{11}(i,t)\\ + \lambda P_{11}(i-1,t). \end{array} \end{aligned}$$
(2)
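Although the stationary version of (2) has no closed-form solution, it can be solved numerically by truncating the orbit at a finite level N and solving \(\pi Q = 0\), \(\pi \mathbf{e} = 1\) for the resulting finite generator. The sketch below is ours; the truncation level N is an assumption and must be large enough for the chosen \(\sigma \).

```python
import numpy as np

def stationary_distribution(lam, mu1, mu2, sigma, N=200):
    """Stationary solution of (2) with the orbit truncated at N calls.

    States are (i, n1, n2); transitions that would push the orbit
    above N are simply dropped."""
    idx = lambda i, n1, n2: 4 * i + 2 * n1 + n2
    size = 4 * (N + 1)
    Q = np.zeros((size, size))
    for i in range(N + 1):
        for n1 in (0, 1):
            for n2 in (0, 1):
                s = idx(i, n1, n2)
                if n1 == 0:                       # arrival takes server 1
                    Q[s, idx(i, 1, n2)] += lam
                elif i < N:                       # ... or joins the orbit
                    Q[s, idx(i + 1, 1, n2)] += lam
                if n1 == 0 and i > 0:             # retrial from the orbit
                    Q[s, idx(i - 1, 1, n2)] += i * sigma
                if n1 == 1:                       # service end at server 1
                    if n2 == 0:
                        Q[s, idx(i, 0, 1)] += mu1
                    elif i < N:                   # server 2 busy -> orbit
                        Q[s, idx(i + 1, 0, 1)] += mu1
                if n2 == 1:                       # service end at server 2
                    Q[s, idx(i, n1, 0)] += mu2
    Q -= np.diag(Q.sum(axis=1))
    # replace the redundant balance equation by the normalization pi e = 1
    M = np.vstack([Q.T, np.ones(size)])
    b = np.zeros(size + 1)
    b[-1] = 1.0
    pi = np.linalg.lstsq(M, b, rcond=None)[0]
    return pi.reshape(N + 1, 4)   # row i: (P00, P01, P10, P11) at orbit size i
```

The orbit marginal `pi.sum(axis=1)` of this truncated solution can serve as a further reference point alongside simulation.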

We introduce partial characteristic functions, denoting \(\textit{j}=\sqrt{-1}\)

$$\begin{aligned} H_{n_{1}n_{2}}(u,t)=\sum _{i=0}^{\infty }e^{jui}P_{n_{1}n_{2}}(i,t). \end{aligned}$$
(3)

We rewrite system (2) in terms of the partial characteristic functions (3):

$$\begin{aligned} \begin{array}{l} \frac{{\partial H_{00}(u,t)}}{{\partial t}} = -\lambda H_{00}(u,t) + j\sigma \frac{\partial H_{00}(u,t)}{\partial u} + \mu _{2}H_{01}(u,t), \\ \frac{{\partial H_{10}(u,t)}}{{\partial t}} = \lambda H_{00}(u,t) -j\sigma e^{-ju}\frac{\partial H_{00}(u,t)}{\partial u} \\ - (\lambda +\mu _{1} - \lambda e^{ju})H_{10}(u,t)+\mu _{2} H_{11}(u,t), \\ \frac{{\partial H_{01}(u,t)}}{{\partial t}} = \mu _{1}H_{10}(u,t)-(\lambda + \mu _{2})H_{01}(u,t) \\ +j\sigma \frac{\partial H_{01}(u,t)}{\partial u} +\mu _{1}e^{ju}H_{11}(u,t), \\ \frac{{\partial H_{11}(u,t)}}{{\partial t}} = \lambda H_{01}(u,t)-j\sigma e^{-ju}\frac{\partial H_{01}(u,t)}{\partial u}\\ -(\lambda + \mu _{1}+\mu _{2}-\lambda e^{ju})H_{11}(u,t). \end{array} \end{aligned}$$
(4)

Denote matrices

$$\begin{aligned} \begin{aligned}&\boldsymbol{\mathrm {A}}=\begin{bmatrix} -\lambda &{} \lambda &{} 0 &{} 0\\ 0 &{} -(\lambda +\mu _{1}) &{}\mu _{1} &{} 0\\ \mu _{2} &{} 0 &{}-(\lambda +\mu _{2}) &{} \lambda \\ 0 &{} \mu _{2} &{} 0 &{} -(\lambda +\mu _{1}+\mu _{2}) \end{bmatrix}, \\&\boldsymbol{\mathrm {B}}=\begin{bmatrix} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} \lambda &{}0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} \mu _{1} &{} \lambda \end{bmatrix}, \boldsymbol{\mathrm {I}}_{0}=\begin{bmatrix} 1 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 \end{bmatrix}, \boldsymbol{\mathrm {I}}_{1}=\begin{bmatrix} 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 1\\ 0 &{} 0 &{} 0 &{} 0 \end{bmatrix}. \end{aligned} \end{aligned}$$
(5)
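For numerical experiments the matrices of (5) can be coded directly. A useful sanity check is that \(\boldsymbol{\mathrm {A}}+\boldsymbol{\mathrm {B}}\) must have zero row sums, since together they account for all transition rates of the server states; note that this requires the entry \(\mu _{1}\) in the last row of \(\boldsymbol{\mathrm {B}}\), matching the term \(\mu _{1}e^{ju}H_{11}(u,t)\) in (4). A sketch (ours):

```python
import numpy as np

def model_matrices(lam, mu1, mu2):
    """Matrices A, B, I0, I1 of (5); states ordered (n1, n2) =
    (0,0), (1,0), (0,1), (1,1)."""
    A = np.array([
        [-lam,         lam,            0.0,                0.0],
        [0.0, -(lam + mu1),            mu1,                0.0],
        [mu2,          0.0,   -(lam + mu2),                lam],
        [0.0,          mu2,            0.0, -(lam + mu1 + mu2)],
    ])
    B = np.zeros((4, 4))
    B[1, 1] = lam          # arrival while server 1 is busy
    B[3, 2] = mu1          # service 1 ends, server 2 busy -> orbit
    B[3, 3] = lam          # arrival while both servers are busy
    I0 = np.diag([1.0, 0.0, 1.0, 0.0])
    I1 = np.zeros((4, 4))
    I1[0, 1] = I1[2, 3] = 1.0
    return A, B, I0, I1
```

Both \((\boldsymbol{\mathrm {A}}+\boldsymbol{\mathrm {B}})\mathbf{e}=0\) and \((\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})\mathbf{e}=0\) hold, which is what makes the summed equation in (7) collapse to a scalar one.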

Let us write the system (4) in the matrix form

$$\begin{aligned} \frac{{\partial \boldsymbol{\mathrm {H}}(u,t)}}{{\partial t}}=\boldsymbol{\mathrm {H}}(u,t)\{\boldsymbol{\mathrm {A}}+e^{ju}\boldsymbol{\mathrm {B}}\}+j\sigma \frac{{\partial \boldsymbol{\mathrm {H}}(u,t)}}{{\partial u}}\{\boldsymbol{\mathrm {I}}_{0}-e^{-ju}\boldsymbol{\mathrm {I}}_{1}\}. \end{aligned}$$
(6)

Multiplying the system (6) by the unit column vector \(\mathbf{e} \), we obtain a scalar equation and add it to the system (6) in order to have

$$\begin{aligned} \begin{array}{c} \frac{{\partial \boldsymbol{\mathrm {H}}(u,t)}}{{\partial t}}=\boldsymbol{\mathrm {H}}(u,t)\{\boldsymbol{\mathrm {A}}+e^{ju}\boldsymbol{\mathrm {B}}\}+j\sigma \frac{{\partial \boldsymbol{\mathrm {H}}(u,t)}}{{\partial u}}\{\boldsymbol{\mathrm {I}}_{0}-e^{-ju}\boldsymbol{\mathrm {I}}_{1}\}, \\ \frac{{\partial \boldsymbol{\mathrm {H}}(u,t)}}{{\partial t}}\boldsymbol{\mathrm {e}}=(e^{ju}-1)\{\boldsymbol{\mathrm {H}}(u,t)\boldsymbol{\mathrm {B}}+j\sigma e^{-ju}\frac{{\partial \boldsymbol{\mathrm {H}}(u,t)}}{{\partial u}}\boldsymbol{\mathrm {I}}_{1}\}\boldsymbol{\mathrm {e}}. \end{array} \end{aligned}$$
(7)

This system of equations is the basis for further research. We will solve it by an asymptotic method under the asymptotic condition \(\sigma \rightarrow 0\).

4 Research of the Tandem RQ-System by the Method of Asymptotic Analysis

We will solve Eq. (7) by the method of asymptotic analysis under the asymptotic condition of an unlimitedly increasing average delay of calls in the orbit, i.e., \(1/\sigma \rightarrow \infty \). In the steady-state regime, the system of Eq. (7) is written as follows.

$$\begin{aligned} \begin{array}{c} \boldsymbol{\mathrm {H}}(u)\{\boldsymbol{\mathrm {A}}+e^{ju}\boldsymbol{\mathrm {B}}\}+ju\boldsymbol{\mathrm {H}}'(u)\{\boldsymbol{\mathrm {I}}_{0}-e^{-ju}\boldsymbol{\mathrm {I}}_{1}\}=0, \\ \{\boldsymbol{\mathrm {H}}(u)\boldsymbol{\mathrm {B}}+j\sigma e^{-ju} \boldsymbol{\mathrm {H}}'(u)\boldsymbol{\mathrm {I}}_{1}\}\boldsymbol{\mathrm {e}}=0. \end{array} \end{aligned}$$
(8)

4.1 The First Order Asymptotic

Denote \(\sigma = \epsilon \) and perform the following substitution in (8)

$$\begin{aligned} \begin{aligned} u=\epsilon w, \boldsymbol{\mathrm {H}}(u)=\boldsymbol{\mathrm {F}}(w,\epsilon ). \end{aligned} \end{aligned}$$
(9)

We obtain

$$\begin{aligned} \begin{array}{c} \boldsymbol{\mathrm {F}}(w,\epsilon )\{\boldsymbol{\mathrm {A}}+e^{j\epsilon w}\boldsymbol{\mathrm {B}}\}+j\frac{{\partial \boldsymbol{\mathrm {F}}(w,\epsilon )}}{{\partial w}}\{\boldsymbol{\mathrm {I}}_{0}-e^{-j\epsilon w}\boldsymbol{\mathrm {I}}_{1}\}=0, \\ \{\boldsymbol{\mathrm {F}}(w,\epsilon )\boldsymbol{\mathrm {B}}+j e^{-j\epsilon w}\frac{{\partial \boldsymbol{\mathrm {F}}(w,\epsilon )}}{{\partial w}}\boldsymbol{\mathrm {I}}_{1}\}\boldsymbol{\mathrm {e}}=0. \end{array} \end{aligned}$$
(10)

Theorem 1

Under the asymptotic condition \(\sigma \rightarrow 0\), the following equality is true

$$\begin{aligned} \begin{aligned} \lim _{\sigma \rightarrow 0} Ee^{jw\sigma I(t)}=e^{jw\kappa _{1}}, \end{aligned} \end{aligned}$$
(11)

where \(\kappa _{1}\) is a solution of the scalar equation

$$\begin{aligned} \begin{aligned} \boldsymbol{\mathrm {r}}(\kappa _{1})\{ \boldsymbol{\mathrm {B}}-\kappa _{1} \boldsymbol{\mathrm {I}}_{1}\} \boldsymbol{\mathrm {e}}=0, \end{aligned} \end{aligned}$$
(12)

and the vector \(\mathbf{r} (\kappa _{1})\) satisfies the normalization condition

$$\begin{aligned} \begin{aligned} \boldsymbol{\mathrm {r}}(\kappa _{1})\boldsymbol{\mathrm {e}}=1, \end{aligned} \end{aligned}$$
(13)

and is a solution of the matrix equation

$$\begin{aligned} \begin{aligned} \boldsymbol{\mathrm {r}}(\kappa _{1})\{( \boldsymbol{\mathrm {A}}+ \boldsymbol{\mathrm {B}})-\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})\}=0. \end{aligned} \end{aligned}$$
(14)

Proof

Let us take the limit \(\epsilon \rightarrow 0\) in the system (10) and get the system for \(\boldsymbol{\mathrm {F}}(w) = \lim _{\epsilon \rightarrow 0} \boldsymbol{\mathrm {F}}(w, \epsilon )\):

$$\begin{aligned} \begin{array}{c} \boldsymbol{\mathrm {F}}(w)( \boldsymbol{\mathrm {A}}+ \boldsymbol{\mathrm {B}})+j \boldsymbol{\mathrm {F'}}(w)(\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})=0, \\ (\boldsymbol{\mathrm {F}}(w)\boldsymbol{\mathrm {B}}+j \boldsymbol{\mathrm {F'}}(w)\boldsymbol{\mathrm {I}}_{1})\boldsymbol{\mathrm {e}}=0. \end{array} \end{aligned}$$
(15)

We find the solution of this system in the form

$$\begin{aligned} \begin{aligned} \boldsymbol{\mathrm {F}}(w)=\boldsymbol{\mathrm {r}}\varPhi (w), \end{aligned} \end{aligned}$$
(16)

where the row vector \(\boldsymbol{\mathrm {r}}\) defines the two-dimensional probability distribution of the states of the servers \((n_{1}, n_{2})\); by the normalization condition, the sum of its elements is equal to one.

Substituting (16) into the system (15), we obtain

$$\begin{aligned} \begin{array}{c} \boldsymbol{\mathrm {r}}( \boldsymbol{\mathrm {A}}+ \boldsymbol{\mathrm {B}})+j \boldsymbol{\mathrm {r}}\frac{\varPhi '(w)}{\varPhi (w)}(\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})=0, \\ \boldsymbol{\mathrm {r}}\left\{ \boldsymbol{\mathrm {B}}+j \frac{\varPhi '(w)}{\varPhi (w)}\boldsymbol{\mathrm {I}}_{1}\right\} \boldsymbol{\mathrm {e}}=0. \end{array} \end{aligned}$$
(17)

Because the ratio \(\frac{\varPhi '(w)}{\varPhi (w)}\) does not depend on w, the scalar function \(\varPhi (w)\) has the form

$$\begin{aligned} \begin{aligned} \varPhi (w)=e^{jw\kappa _{1}}, \end{aligned} \end{aligned}$$
(18)

then \(j\frac{\varPhi '(w)}{\varPhi (w)}=-\kappa _{1}\). Let us substitute this into the system (17)

$$\begin{aligned} \begin{array}{c} \boldsymbol{\mathrm {r}}( \boldsymbol{\mathrm {A}}+ \boldsymbol{\mathrm {B}})- \boldsymbol{\mathrm {r}}\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})=0, \\ \boldsymbol{\mathrm {r}}(\boldsymbol{\mathrm {B}}-\kappa _{1} \boldsymbol{\mathrm {I}}_{1})\boldsymbol{\mathrm {e}}=0. \end{array} \end{aligned}$$
(19)

Solving this system, we find the probability distribution \(\boldsymbol{\mathrm {r}}\) of the states of the servers and the parameter \(\kappa _{1}\).
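Numerically, system (19) is easy to solve: for a fixed trial value of \(\kappa _{1}\), the vector \(\boldsymbol{\mathrm {r}}\) is the left null vector of \((\boldsymbol{\mathrm {A}}+\boldsymbol{\mathrm {B}})-\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})\) normalized by (13), and \(\kappa _{1}\) itself is the root of the scalar equation (12). The sketch below is our own illustration; the bracketing bound `kmax` is an assumption.

```python
import numpy as np

def first_order(lam, mu1, mu2, kmax=64.0, tol=1e-12):
    """Solve (19): r is the normalized left null vector of
    (A+B) - kappa1*(I0-I1); kappa1 is the root of r(B - kappa1*I1)e = 0."""
    A = np.array([[-lam, lam, 0.0, 0.0],
                  [0.0, -(lam + mu1), mu1, 0.0],
                  [mu2, 0.0, -(lam + mu2), lam],
                  [0.0, mu2, 0.0, -(lam + mu1 + mu2)]])
    B = np.zeros((4, 4))
    B[1, 1] = lam; B[3, 2] = mu1; B[3, 3] = lam
    I0 = np.diag([1.0, 0.0, 1.0, 0.0])
    I1 = np.zeros((4, 4)); I1[0, 1] = I1[2, 3] = 1.0
    e = np.ones(4)

    def r_of(k):
        # left null vector with the normalization r e = 1 appended
        sys = np.vstack([((A + B) - k * (I0 - I1)).T, e])
        rhs = np.zeros(5); rhs[-1] = 1.0
        return np.linalg.lstsq(sys, rhs, rcond=None)[0]

    def phi(k):                        # scalar equation (12)
        return r_of(k) @ (B - k * I1) @ e

    lo, hi = 0.0, 1.0
    while phi(hi) > 0 and hi < kmax:   # expand the bracket
        hi *= 2.0
    while hi - lo > tol:               # bisection for kappa1
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    k1 = 0.5 * (lo + hi)
    return k1, r_of(k1)
```

For a given \(\kappa _{1}\) the matrix \((\boldsymbol{\mathrm {A}}+\boldsymbol{\mathrm {B}})-\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})\) still has zero row sums and nonnegative off-diagonal entries, so \(\boldsymbol{\mathrm {r}}\) is a proper distribution and the bisection is well defined.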

The first order asymptotic only defines the asymptotic mean value \(\kappa _{1}/\sigma \) of the number of calls in the orbit in the prelimit situation of nonzero \(\sigma \). For more detailed information about the number I(t) of calls in the orbit, let us consider the second order asymptotic.

4.2 The Second Order Asymptotic

Substituting the following expression into the system (8)

$$\begin{aligned} \begin{aligned} \boldsymbol{\mathrm {H}}(u)=\exp \left( j\frac{u}{\sigma }\kappa _{1}\right) \boldsymbol{\mathrm {H}}^{(2)}(u), \end{aligned} \end{aligned}$$
(20)

we obtain

$$\begin{aligned} \begin{array}{c} \boldsymbol{\mathrm {H}}^{(2)}(u)\{\boldsymbol{\mathrm {A}}+e^{ju}\boldsymbol{\mathrm {B}}-\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-e^{-ju}\boldsymbol{\mathrm {I}}_{1})\}+j\sigma \frac{{d \boldsymbol{\mathrm {H}}^{(2)}(u)}}{{du}}\{\boldsymbol{\mathrm {I}}_{0}-e^{-ju}\boldsymbol{\mathrm {I}}_{1}\}=0, \\ \boldsymbol{\mathrm {H}}^{(2)}(u)(\boldsymbol{\mathrm {B}}-e^{-ju}\kappa _{1}\boldsymbol{\mathrm {I}}_{1})\boldsymbol{\mathrm {e}}+j\sigma e^{-ju}\frac{{d \boldsymbol{\mathrm {H}}^{(2)}(u)}}{{du}}\boldsymbol{\mathrm {I}}_{1}\boldsymbol{\mathrm {e}}=0. \end{array} \end{aligned}$$
(21)

Denote \(\sigma = \epsilon ^{2}\) and perform the following substitution in (21)

$$\begin{aligned} \begin{aligned} u=\epsilon w, \boldsymbol{\mathrm {H}}^{(2)}(u)=\boldsymbol{\mathrm {F}}^{(2)}(w,\epsilon ), \end{aligned} \end{aligned}$$
(22)

and obtain the system

$$\begin{aligned} \begin{array}{c} \boldsymbol{\mathrm {F}}^{(2)}(w,\epsilon )\{\boldsymbol{\mathrm {A}}+e^{j\epsilon w}\boldsymbol{\mathrm {B}}-\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-e^{-j\epsilon w}\boldsymbol{\mathrm {I}}_{1})\}+j\epsilon \frac{{\partial \boldsymbol{\mathrm {F}}^{(2)}(w,\epsilon )}}{{\partial w}}\{\boldsymbol{\mathrm {I}}_{0}-e^{-j\epsilon w}\boldsymbol{\mathrm {I}}_{1}\}=0, \\ \boldsymbol{\mathrm {F}}^{(2)}(w,\epsilon )(\boldsymbol{\mathrm {B}}-e^{-j\epsilon w}\kappa _{1}\boldsymbol{\mathrm {I}}_{1})\boldsymbol{\mathrm {e}}+j\epsilon e^{-j\epsilon w}\frac{{\partial \boldsymbol{\mathrm {F}}^{(2)}(w,\epsilon )}}{{\partial w}}\boldsymbol{\mathrm {I}}_{1}\boldsymbol{\mathrm {e}}=0. \end{array} \end{aligned}$$
(23)

Theorem 2

In the context of Theorem 1, the following equation holds

$$\begin{aligned} \begin{aligned} \lim _{\sigma \rightarrow 0} Ee^{jw\sqrt{\sigma }\left( I(t)-\frac{\kappa _{1}}{\sigma }\right) }=e^{\frac{(jw)^{2}}{2}\kappa _{2}}, \end{aligned} \end{aligned}$$
(24)

where \(\kappa _{2}\) is a solution of the scalar equation

$$\begin{aligned} \begin{aligned} \boldsymbol{\mathrm {g}}(\kappa _{2})(\boldsymbol{\mathrm {B}}-\kappa _{1}\boldsymbol{\mathrm {I}}_{1})\boldsymbol{\mathrm {e}}=\boldsymbol{\mathrm {r}}\boldsymbol{\mathrm {I}}_{1}(\kappa _{2}-\kappa _{1}) \boldsymbol{\mathrm {e}}, \end{aligned} \end{aligned}$$
(25)

and vector \(\mathbf{g} (\kappa _{2})\) is a solution of the system

$$\begin{aligned} \begin{array}{c} \boldsymbol{\mathrm {g}}(\kappa _{2})\{\boldsymbol{\mathrm {A}}+\boldsymbol{\mathrm {B}}-\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})\}=\boldsymbol{\mathrm {r}}(\kappa _{2}\boldsymbol{\mathrm {I}}_{0}-\kappa _{2}\boldsymbol{\mathrm {I}}_{1}-\boldsymbol{\mathrm {B}}+\kappa _{1}\boldsymbol{\mathrm {I}}_{1}), \\ \boldsymbol{\mathrm {g}}(\kappa _{2})\boldsymbol{\mathrm {e}}=0. \end{array} \end{aligned}$$
(26)

Proof

Let us substitute the following expansion into the system (23)

$$\begin{aligned} \begin{aligned} \boldsymbol{\mathrm {F}}^{(2)}(w,\epsilon )=\varPhi _{2}(w)(\boldsymbol{\mathrm {r}}+j\epsilon w\boldsymbol{\mathrm {f}}) +O(\epsilon ^{2}), \end{aligned} \end{aligned}$$
(27)

where \(\boldsymbol{\mathrm {r}}=\begin{bmatrix} r_{00}&r_{10}&r_{01}&r_{11} \end{bmatrix}\) and \(\boldsymbol{\mathrm {f}}=\begin{bmatrix} f_{00}&f_{10}&f_{01}&f_{11} \end{bmatrix}\). We obtain

$$\begin{aligned} \begin{array}{c} \varPhi _{2}(w)(\boldsymbol{\mathrm {r}}+j\epsilon w\boldsymbol{\mathrm {f}})\left\{ \boldsymbol{\mathrm {A}}+e^{j\epsilon w}\boldsymbol{\mathrm {B}}-\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-e^{-j\epsilon w}\boldsymbol{\mathrm {I}}_{1})\right\} \\ +j\epsilon (\varPhi _{2}'(w)(\boldsymbol{\mathrm {r}}+j\epsilon w\boldsymbol{\mathrm {f}})+\varPhi _{2}(w)j\epsilon \boldsymbol{\mathrm {f}})(\boldsymbol{\mathrm {I}}_{0}-e^{-j\epsilon w}\boldsymbol{\mathrm {I}}_{1})=O(\epsilon ^{2}), \\ \varPhi _{2}(w)(\boldsymbol{\mathrm {r}}+j\epsilon w\boldsymbol{\mathrm {f}})(\boldsymbol{\mathrm {B}}-e^{-j\epsilon w}\kappa _{1}\boldsymbol{\mathrm {I}}_{1})\boldsymbol{\mathrm {e}}\\ +j\epsilon e^{-j\epsilon w}(\varPhi _{2}'(w)(\boldsymbol{\mathrm {r}}+j\epsilon w\boldsymbol{\mathrm {f}})+\varPhi _{2}(w)j\epsilon \boldsymbol{\mathrm {f}})\boldsymbol{\mathrm {I}}_{1}\boldsymbol{\mathrm {e}}=O(\epsilon ^{2}). \end{array} \end{aligned}$$
(28)

Rewrite the system (28) in the following form:

$$\begin{aligned} \begin{array}{c} \varPhi _{2}(w)(\boldsymbol{\mathrm {r}}+j\epsilon w\boldsymbol{\mathrm {f}})\left\{ \boldsymbol{\mathrm {A}}+e^{j\epsilon w}\boldsymbol{\mathrm {B}}-\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-e^{-j\epsilon w}\boldsymbol{\mathrm {I}}_{1})\right\} \\ +j\epsilon \varPhi _{2}'(w)\boldsymbol{\mathrm {r}}(\boldsymbol{\mathrm {I}}_{0}-e^{-j\epsilon w}\boldsymbol{\mathrm {I}}_{1})=O(\epsilon ^{2}), \\ \varPhi _{2}(w)(\boldsymbol{\mathrm {r}}+j\epsilon w\boldsymbol{\mathrm {f}})(\boldsymbol{\mathrm {B}}-e^{-j\epsilon w}\kappa _{1}\boldsymbol{\mathrm {I}}_{1})\boldsymbol{\mathrm {e}}+j\epsilon e^{-j\epsilon w}\varPhi _{2}'(w)\boldsymbol{\mathrm {r}}\boldsymbol{\mathrm {I}}_{1}\boldsymbol{\mathrm {e}}=O(\epsilon ^{2}). \end{array} \end{aligned}$$
(29)

Let us expand the exponent in a series

$$\begin{aligned} \begin{array}{c} \varPhi _{2}(w)(\boldsymbol{\mathrm {r}}+j\epsilon w\boldsymbol{\mathrm {f}})\left\{ \boldsymbol{\mathrm {A}}+(1+j\epsilon w)\boldsymbol{\mathrm {B}}-\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-(1-j\epsilon w)\boldsymbol{\mathrm {I}}_{1})\right\} \\ +j\epsilon \varPhi _{2}'(w)\boldsymbol{\mathrm {r}}(\boldsymbol{\mathrm {I}}_{0}-(1-j\epsilon w)\boldsymbol{\mathrm {I}}_{1})=O(\epsilon ^{2}), \\ \varPhi _{2}(w)(\boldsymbol{\mathrm {r}}+j\epsilon w\boldsymbol{\mathrm {f}})(\boldsymbol{\mathrm {B}}-(1-j\epsilon w)\kappa _{1}\boldsymbol{\mathrm {I}}_{1})\boldsymbol{\mathrm {e}}+j\epsilon (1-j\epsilon w)\varPhi _{2}'(w)\boldsymbol{\mathrm {r}}\boldsymbol{\mathrm {I}}_{1}\boldsymbol{\mathrm {e}}=O(\epsilon ^{2}). \end{array} \end{aligned}$$
(30)

Open the parentheses and group the terms for \(\epsilon ^{0}\) and \(\epsilon ^{1}\)

$$\begin{aligned} \begin{array}{c} \varPhi _{2}(w)\boldsymbol{\mathrm {r}}\left\{ \boldsymbol{\mathrm {A}}+\boldsymbol{\mathrm {B}}-\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})\right\} \\ +\varPhi _{2}(w)j\epsilon w\left\{ \boldsymbol{\mathrm {r}}\boldsymbol{\mathrm {B}}-\boldsymbol{\mathrm {r}}\kappa _{1}\boldsymbol{\mathrm {I}}_{1}+\boldsymbol{\mathrm {f}}\boldsymbol{\mathrm {A}}+\boldsymbol{\mathrm {f}}\boldsymbol{\mathrm {B}}-\boldsymbol{\mathrm {f}}\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})\right\} \\ +j\epsilon \varPhi _{2}'(w)\boldsymbol{\mathrm {r}}(\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})=O(\epsilon ^{2}), \\ \varPhi _{2}(w)\boldsymbol{\mathrm {r}}(\boldsymbol{\mathrm {B}}-\kappa _{1}\boldsymbol{\mathrm {I}}_{1})\boldsymbol{\mathrm {e}}+\varPhi _{2}(w)j\epsilon w(\boldsymbol{\mathrm {r}}\kappa _{1}\boldsymbol{\mathrm {I}}_{1}+\boldsymbol{\mathrm {f}}\boldsymbol{\mathrm {B}}-\boldsymbol{\mathrm {f}}\kappa _{1}\boldsymbol{\mathrm {I}}_{1})\boldsymbol{\mathrm {e}}\\ +j\epsilon \varPhi _{2}'(w)\boldsymbol{\mathrm {r}}\boldsymbol{\mathrm {I}}_{1}\boldsymbol{\mathrm {e}}=O(\epsilon ^{2}). \end{array} \end{aligned}$$
(31)

Taking into account the system (19), rewrite the system (31) in the form

$$\begin{aligned} \begin{array}{c} \boldsymbol{\mathrm {r}}\boldsymbol{\mathrm {B}}-\boldsymbol{\mathrm {r}}\kappa _{1}\boldsymbol{\mathrm {I}}_{1}+\boldsymbol{\mathrm {f}}\boldsymbol{\mathrm {A}}+\boldsymbol{\mathrm {f}}\boldsymbol{\mathrm {B}}-\boldsymbol{\mathrm {f}}\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})+\frac{\varPhi _{2}'(w)}{w\varPhi _{2}(w)}\boldsymbol{\mathrm {r}}(\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})=0, \\ (\boldsymbol{\mathrm {r}}\kappa _{1}\boldsymbol{\mathrm {I}}_{1}+\boldsymbol{\mathrm {f}}\boldsymbol{\mathrm {B}}-\boldsymbol{\mathrm {f}}\kappa _{1}\boldsymbol{\mathrm {I}}_{1})\boldsymbol{\mathrm {e}}+\frac{\varPhi _{2}'(w)}{w\varPhi _{2}(w)}\boldsymbol{\mathrm {r}}\boldsymbol{\mathrm {I}}_{1}\boldsymbol{\mathrm {e}}=0. \end{array} \end{aligned}$$
(32)

Because the ratio \(\frac{\varPhi _{2}'(w)}{w\varPhi _{2}(w)}\) does not depend on w, the scalar function \(\varPhi _{2}(w)\) has the form

$$\begin{aligned} \begin{aligned} \varPhi _{2}(w)=e^{\left\{ \frac{(jw)^{2}}{2}\kappa _{2}\right\} }, \end{aligned} \end{aligned}$$
(33)

then \(\frac{\varPhi _{2}'(w)}{w\varPhi _{2}(w)}=-\kappa _{2}\). Let us substitute this into the system (32)

$$\begin{aligned} \begin{array}{c} \boldsymbol{\mathrm {f}}\left\{ \boldsymbol{\mathrm {A}}+\boldsymbol{\mathrm {B}}-\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})\right\} =\boldsymbol{\mathrm {r}}(\kappa _{2}\boldsymbol{\mathrm {I}}_{0}-\kappa _{2}\boldsymbol{\mathrm {I}}_{1}-\boldsymbol{\mathrm {B}}+\kappa _{1}\boldsymbol{\mathrm {I}}_{1}), \\ \boldsymbol{\mathrm {f}}(\boldsymbol{\mathrm {B}}-\kappa _{1}\boldsymbol{\mathrm {I}}_{1})\boldsymbol{\mathrm {e}}=\boldsymbol{\mathrm {r}}\boldsymbol{\mathrm {I}}_{1}(\kappa _{2}-\kappa _{1})\boldsymbol{\mathrm {e}}. \end{array} \end{aligned}$$
(34)

The system (34) is an inhomogeneous system of linear algebraic equations for \(\boldsymbol{\mathrm {f}}\). Since the determinant of the coefficient matrix is equal to 0 and the rank of the augmented matrix is equal to the rank of the coefficient matrix, the system is consistent and has infinitely many solutions.

Let us compare the inhomogeneous system of Eqs. (34) with the homogeneous system of Eqs. (19): system (19) is the homogeneous counterpart of system (34). We can therefore write the general solution of system (34) in the form

$$\begin{aligned} \begin{aligned} \boldsymbol{\mathrm {f}}=C\boldsymbol{\mathrm {r}}+\boldsymbol{\mathrm {g}}, \end{aligned} \end{aligned}$$
(35)

where C is a constant, \( \boldsymbol{\mathrm {r}}\) is the stationary probability distribution of the states of the servers, and the row vector \(\boldsymbol{\mathrm {g}}\) is a particular solution of the inhomogeneous system (34), on which we impose the additional condition \( \boldsymbol{\mathrm {ge}}=0\).

Substituting the expression (35) in the system (34), we obtain

$$\begin{aligned} \begin{array}{c} \boldsymbol{\mathrm {g}}\left\{ \boldsymbol{\mathrm {A}}+\boldsymbol{\mathrm {B}}-\kappa _{1}(\boldsymbol{\mathrm {I}}_{0}-\boldsymbol{\mathrm {I}}_{1})\right\} =\boldsymbol{\mathrm {r}}(\kappa _{2}\boldsymbol{\mathrm {I}}_{0}-\kappa _{2}\boldsymbol{\mathrm {I}}_{1}-\boldsymbol{\mathrm {B}}+\kappa _{1}\boldsymbol{\mathrm {I}}_{1}), \\ \boldsymbol{\mathrm {g}}(\boldsymbol{\mathrm {B}}-\kappa _{1}\boldsymbol{\mathrm {I}}_{1})\boldsymbol{\mathrm {e}}=\boldsymbol{\mathrm {r}}\boldsymbol{\mathrm {I}}_{1}(\kappa _{2}-\kappa _{1})\boldsymbol{\mathrm {e}}. \end{array} \end{aligned}$$
(36)

Solving this system of inhomogeneous equations allows us to find the parameter \(\kappa _{2}\), which determines the variance \(\kappa _{2}/\sigma \) of the number of calls in the orbit.
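Together with the condition \(\boldsymbol{\mathrm {ge}}=0\), system (36) is linear in the five unknowns \((\boldsymbol{\mathrm {g}},\kappa _{2})\) and can be solved by least squares once \(\kappa _{1}\) and \(\boldsymbol{\mathrm {r}}\) are known. A self-contained sketch (our own illustration, repeating the first-order computation so the block runs on its own):

```python
import numpy as np

def second_order(lam, mu1, mu2, kmax=64.0, tol=1e-12):
    """Solve (36) with g e = 0 as a linear system in (g, kappa2)."""
    A = np.array([[-lam, lam, 0.0, 0.0],
                  [0.0, -(lam + mu1), mu1, 0.0],
                  [mu2, 0.0, -(lam + mu2), lam],
                  [0.0, mu2, 0.0, -(lam + mu1 + mu2)]])
    B = np.zeros((4, 4))
    B[1, 1] = lam; B[3, 2] = mu1; B[3, 3] = lam
    I0 = np.diag([1.0, 0.0, 1.0, 0.0])
    I1 = np.zeros((4, 4)); I1[0, 1] = I1[2, 3] = 1.0
    e = np.ones(4)

    def r_of(k):                     # r from (19) for a trial kappa1
        sys = np.vstack([((A + B) - k * (I0 - I1)).T, e])
        rhs = np.zeros(5); rhs[-1] = 1.0
        return np.linalg.lstsq(sys, rhs, rcond=None)[0]

    phi = lambda k: r_of(k) @ (B - k * I1) @ e
    lo, hi = 0.0, 1.0
    while phi(hi) > 0 and hi < kmax:
        hi *= 2.0
    while hi - lo > tol:             # bisection for kappa1
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    k1 = 0.5 * (lo + hi)
    r = r_of(k1)

    # system (36): 4 matrix equations, g e = 0, and the scalar equation
    M = (A + B) - k1 * (I0 - I1)
    S = np.zeros((6, 5))
    rhs = np.zeros(6)
    S[:4, :4] = M.T                  # coefficients of g in g M = ...
    S[:4, 4] = -(r @ (I0 - I1))      # coefficients of kappa2
    rhs[:4] = -(r @ (B - k1 * I1))
    S[4, :4] = 1.0                   # g e = 0
    S[5, :4] = (B - k1 * I1) @ e     # scalar equation of (36)
    S[5, 4] = -(r @ I1 @ e)
    rhs[5] = -k1 * (r @ I1 @ e)
    x = np.linalg.lstsq(S, rhs, rcond=None)[0]
    return k1, x[4], r, x[:4]        # kappa1, kappa2, r, g
```

The matrix equation alone leaves \((\boldsymbol{\mathrm {g}},\kappa _{2})\) a two-parameter family; the normalization \(\boldsymbol{\mathrm {ge}}=0\) and the scalar equation pin both down, which is why the stacked least-squares system has a unique solution.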

The second order asymptotic shows that the asymptotic probability distribution of the number I(t) of calls in the orbit is Gaussian with mean \(\kappa _{1}/\sigma \) and variance \(\kappa _{2}/\sigma \).

5 Approximation Accuracy and its Application Area

Now we can build the Gaussian approximation

$$\begin{aligned} \begin{aligned} P^{(2)}(i)=(L(i+0.5)-L(i-0.5))(1-L(-0.5))^{-1}, \end{aligned} \end{aligned}$$
(37)

where L(x) is the normal distribution function with mean \(\kappa _{1}/\sigma \) and variance \(\kappa _{2}/\sigma \).
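With \(\kappa _{1}\) and \(\kappa _{2}\) in hand, the discretized and renormalized approximation (37) takes a few lines. The sketch below is ours and uses only the standard library:

```python
import math

def gaussian_approximation(kappa1, kappa2, sigma, n):
    """Discretized Gaussian approximation (37) for orbit sizes 0..n-1."""
    mean = kappa1 / sigma
    std = math.sqrt(kappa2 / sigma)
    # normal c.d.f. L(x) with mean kappa1/sigma and variance kappa2/sigma
    L = lambda x: 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))
    norm = 1.0 - L(-0.5)     # renormalization over the nonnegative range
    return [(L(i + 0.5) - L(i - 0.5)) / norm for i in range(n)]
```

The division by \(1-L(-0.5)\) makes the probabilities sum to one over the nonnegative orbit sizes, as in (37).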

The accuracy of the approximation \(\textit{P}^{(2)}(\textit{i})\) will be measured by the Kolmogorov distance

$$\begin{aligned} \varDelta =\max _{k\ge 0}\left| \sum _{i=0}^{k}\left( P_{i}^{(2)}-P_{i}\right) \right| , \end{aligned}$$
(38)

where \(P_{i}\) is the probability distribution of the number of calls in the orbit obtained by simulation.
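The distance (38) compares the two cumulative distribution functions; a small helper (ours) suffices:

```python
def kolmogorov_distance(p_approx, p_sim):
    """Kolmogorov distance (38) between two discrete distributions
    given as lists of probabilities for i = 0, 1, 2, ..."""
    n = max(len(p_approx), len(p_sim))
    ca = cs = delta = 0.0
    for i in range(n):
        # running cumulative sums of the two distributions
        ca += p_approx[i] if i < len(p_approx) else 0.0
        cs += p_sim[i] if i < len(p_sim) else 0.0
        delta = max(delta, abs(ca - cs))
    return delta
```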

Table 1 contains the values of this distance for various values of \(\sigma \) and of the system load \(\rho \):

$$\begin{aligned} \rho =\frac{\lambda (\mu _{1}+\mu _{2})}{\mu _{1}\mu _{2}}. \end{aligned}$$
(39)

We set \(\mu _{1} = 1\) and \(\mu _{2} = 2\) for all experiments (Table 1).

Table 1. Kolmogorov distance.
Fig. 2. The probability distribution of the number of calls in the orbit (\(\sigma =0.5\), \(\rho =0.5\)).

In Figs. 2, 3 and 4, the solid line shows the approximation \(P^{(2)}_{i}\), and the dashed line shows the probability distribution of the number of calls in the orbit obtained by simulation (\(P_{i}\)).

Fig. 3. The probability distribution of the number of calls in the orbit (\(\sigma =0.1\), \(\rho =0.5\)).

Fig. 4. The probability distribution of the number of calls in the orbit (\(\sigma =0.02\), \(\rho =0.5\)).

It can be seen from the table that the accuracy of the approximation increases with decreasing parameters \(\rho \) and \(\sigma \). The Gaussian approximation is applicable for values \(\sigma < 0.02\), where the error, measured by the Kolmogorov distance, does not exceed 0.05.

6 Conclusion

In this paper, we considered a tandem retrial queueing system with a Poisson arrival process. Using the method of asymptotic analysis under the condition of a long delay of calls in the orbit, we obtained the asymptotic mean \(\kappa _{1}/\sigma \) and variance \(\kappa _{2}/\sigma \) and built a Gaussian approximation for the probability distribution of the number of calls in the orbit in the considered RQ-system. Comparison with simulation results shows that the accuracy of the approximation increases as the parameter \(\sigma \) and the system load decrease.