
1 Introduction

In service systems, the idle time of an operator should be minimized to increase productivity. An operator not only receives calls from outside but also makes outgoing calls during idle time. A familiar example is a cellphone, which is used for both incoming and outgoing calls. In call centers, operators receive arriving calls, but as soon as they have free time and are in standby mode they can make outgoing calls to sell packages and services of the center [1].

Retrial queues with two-way communication have been extensively studied recently [2,3,4,5,6,7]. In these works, the arrival process is a Poisson process. However, it is well known that real traffic has a more complex structure. The Markov modulated Poisson process (MMPP) can represent correlated traffic and is thus more suitable for modelling real traffic.

In this paper, we consider asymptotic analysis for the distribution of the number of customers in the system under two conditions: (i) high outgoing call rate and (ii) low service rate for outgoing calls. In case (i), the server makes an outgoing call as soon as it becomes idle while in case (ii), the duration of an outgoing call is extremely long.

In both cases, the number of incoming calls in the system explodes. However, using suitable scalings, we prove that the scaled number of incoming calls in the system follows simple limiting distributions, namely a Gaussian distribution [8] and a Gamma distribution [9], respectively.

The remainder of the paper is organized as follows. In Sect. 2, we describe the model in detail and give preliminaries for the later asymptotic analysis. In Sects. 3 and 4, we present our main contributions for the model with Markov modulated Poisson input. In Sect. 5 we show the ranges of parameters under which our approximations are usable. Section 6 is devoted to concluding remarks.

2 Model Description and Problem Definition

We consider a single server queueing model with two types of calls: incoming calls and outgoing calls. Incoming calls arrive at the system according to a Markov modulated Poisson process. An incoming call that finds the server idle occupies it for an exponentially distributed service time with rate \(\mu _{1}\). An incoming call that finds the server busy immediately joins the orbit, where it stays for a time exponentially distributed with rate \(\sigma \), after which it retries to occupy the server. When the server is idle, it makes outgoing calls with rate \(\alpha \). An outgoing call that finds the server free occupies it for an exponentially distributed service time with rate \(\mu _{2}\); an outgoing call that finds the server busy is lost and is not considered further. Let i(t) denote the number of calls in the system at time t, k(t) denote the state of the server (0 if the server is free, 1 if it is busy serving an incoming call, 2 if it is busy serving an outgoing call), and n(t) denote the state of the background process of the MMPP at time t. The infinitesimal generator of n(t) is given by the matrix \(\mathbf Q \). When \(n(t) = n\), the arrival rate is \(\lambda _{n}\) \((n=1,2,\dots ,N)\). To formulate the condition for the existence of a stationary regime, we write the matrix of arrival rates in the form \({\mathbf {\Lambda }}=\frac{\rho \mu _{1}{\mathbf {\Lambda }}_{1}}{\mathbf{r}{\mathbf {\Lambda }}_{1}{} \mathbf{e}},\) where \({\mathbf {\Lambda }}_{1}\) is a diagonal matrix with nonnegative elements; the stationary regime exists if and only if \(0<\rho <1\).
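
For illustration, the following minimal Python sketch computes the stationary distribution \(\mathbf r \) of the background process and the traffic intensity \(\rho \) for a hypothetical two-state MMPP; all numerical values are assumptions chosen only for this example and are not taken from the paper.

```python
import numpy as np

# Hypothetical two-state MMPP, used only for illustration.
Q = np.array([[-0.5, 0.5],
              [ 1.0, -1.0]])            # infinitesimal generator of n(t)
Lambda = np.diag([1.0, 3.0])            # conditional arrival rates lambda_n
mu1 = 2.0                               # service rate of incoming calls

# Stationary distribution of the background process: r Q = 0, r e = 1.
r = np.linalg.lstsq(np.vstack([Q.T, np.ones(2)]),
                    np.array([0.0, 0.0, 1.0]), rcond=None)[0]

rho = (r @ Lambda @ np.ones(2)) / mu1   # traffic intensity: rho * mu1 = r Lambda e
print(r, rho)                           # a stationary regime requires 0 < rho < 1
```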

Under the current setting, the three-dimensional process \(\{k(t), n(t), i(t)\}\) is a Markov chain. Under the stability condition, the stationary probability distribution \(P\{k(t) = k, n(t) = n, i(t) = i\}= P_k(n, i)\) is the unique solution of the Kolmogorov system of equations:

$$\begin{aligned} \begin{array}{c} -(\lambda _{n} +i\sigma +\alpha )P_{0} (n,i)+\mu _{1} P_{1} (n,i+1)+\mu _{2} P_{2} (n,i+1)+\sum _{v=1}^{N}P_{0} (v,i)q_{vn} =0,\\[2ex] -(\lambda _{n} +\mu _{1} )P_{1} (n,i)+\lambda _{n} \left[ P_{1} (n,i-1)+P_{0} (n,i-1)\right] +i\sigma P_{0} (n,i) \\ +\, \sum _{v=1}^{N}P_{1} (v,i)q_{vn} =0 , \end{array} \end{aligned}$$
$$\begin{aligned} -(\lambda _{n} +\mu _{2} )P_{2} (n,i)+P_{0} (n,i-1)\alpha +P_{2} (n,i-1)\lambda _{n} +\sum _{v=1}^{N}P_{2} (v,i)q_{vn} =0. \end{aligned}$$
(1)

We introduce partial characteristic functions [10], denoting \( j=\sqrt{-1} \):

$$\begin{aligned} H_{0} (n,u)=\sum _{i=0}^{\infty }e^{jui} P_{0} (n,i) ,\mathrm{\; \; \; \; \; }H_{k} (n,u)=\sum _{i=1}^{\infty }e^{jui} P_{k} (n,i) ,\mathrm{\; \; \; }k=1,2. \end{aligned}$$

For \(k = 1, 2\) there is at least one call in the system, hence the corresponding sums start from \(i=1\). We rewrite system (1) in the following form:

$$\begin{aligned} \begin{array}{c} -(\lambda _{n} +\alpha )H_{0} (n,u)+j\sigma \frac{\partial H_{0} (n,u)}{\partial u} +\mu _{1} e^{-ju} H_{1} (n,u)+\mu _{2} e^{-ju} H_{2} (n,u) \\ + \, \sum _{v=1}^{N}H_{0} (v,u)q_{vn} =0, \end{array} \end{aligned}$$
$$\begin{aligned} \begin{array}{c} -(\lambda _{n} +\mu _{1} )H_{1} (n,u)+\lambda _{n} e^{ju} \left[ H_{1} (n,u)+H_{0} (n,u)\right] -j\sigma \frac{\partial H_{0} (n,u)}{\partial u} \\ +\,\sum _{v=1}^{N}H_{1} (v,u)q_{vn} =0 , \end{array} \end{aligned}$$
$$\begin{aligned} -(\lambda _{n} +\mu _{2} )H_{2} (n,u)+\alpha e^{ju} H_{0} (n,u)+\lambda _{n} e^{ju} H_{2} (n,u)+\sum _{v=1}^{N}H_{2} (v,u)q_{vn} =0. \end{aligned}$$
(2)

We define the identity matrix \(\mathbf I \), the diagonal matrix \({\mathbf {\Lambda }} =\mathrm{diag}[\lambda _{n} ]\), and the row vectors

$$\begin{aligned} \mathbf H _{k} (u) = \{H_{k}(1,u), H_{k}(2,u),{\dots },H_{k}(N,u)\}, \end{aligned}$$
$$\begin{aligned} \mathbf H '_{k} (u)=\left\{ \frac{\partial H_{k} (1,u)}{\partial u} ,\frac{\partial H_{k} (2,u)}{\partial u} ,...,\frac{\partial H_{k} (N,u)}{\partial u} \right\} . \end{aligned}$$

We write system (2) in the matrix form (3):

$$\begin{aligned} \begin{array}{c} \mathbf H _{0} (u)(\mathbf Q -{\mathbf {\Lambda }} -\alpha \mathbf I )+j\sigma \mathbf H '_{0} (u)+\mu _{1} e^{-ju} \mathbf H _{1} (u)+\mu _{2} e^{-ju}{} \mathbf H _{2} (u)=0,\\[2ex] \mathbf H _{1} (u)\left( \mathbf Q +\left( e^{ju} -1\right) {\mathbf {\Lambda }} -\mu _{1}{} \mathbf I \right) +\mathbf H _{0} (u)e^{ju} {\mathbf {\Lambda }} -j\sigma \mathbf H '_{0} (u)=0, \end{array} \end{aligned}$$
$$\begin{aligned} \mathbf H _{2} (u)\left( \mathbf Q +\left( e^{ju} -1\right) {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) +\alpha e^{ju} \mathbf H _{0} (u)=0. \end{aligned}$$
(3)

Summing the equations of system (3), we obtain

$$\begin{aligned} \begin{array}{c} \mathbf H _{0} (u)\left[ \mathbf Q +\left( e^{ju} -1\right) \left( {\mathbf {\Lambda }} +\alpha \mathbf I \right) \right] +\mathbf H _{1} (u)\left[ \mathbf Q +\left( e^{ju} -1\right) \left( {\mathbf {\Lambda }} -\mu _{1} e^{-ju} \mathbf I \right) \right] \\ +\, \mathbf H _{2} (u)\left[ \mathbf Q +\left( e^{ju} -1\right) \left( {\mathbf {\Lambda }} -\mu _{2} e^{-ju} \mathbf I \right) \right] =0. \end{array} \end{aligned}$$

Multiplying the last equation by a unit vector \(\mathbf {e}\) and using \(\mathbf {Qe} = 0\), we obtain

$$\begin{aligned} \mathbf H _{0} (u)\left( {\mathbf {\Lambda }} +\alpha \mathbf I \right) \mathbf e +\mathbf H _{1} (u)\left( {\mathbf {\Lambda }} -\mu _{1} e^{-ju} \mathbf I \right) \mathbf e +\mathbf H _{2} (u)\left( {\mathbf {\Lambda }} -\mu _{2} e^{-ju} \mathbf I \right) \mathbf e =0. \end{aligned}$$

Multiplying the last equation by \(e^{ju}\), we obtain

$$\begin{aligned} \begin{array}{c} \mathbf H _{0} (u)\left( e^{ju} {\mathbf {\Lambda }} +\alpha e^{ju} \mathbf I \right) \mathbf e +\mathbf H _{1} (u)\left( e^{ju} {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) \mathbf e \\ +\,\mathbf H _{2} (u)\left( e^{ju} {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \mathbf e =0. \end{array} \end{aligned}$$
(4)

We consider system (3) together with Eq. (4), i.e. a system of three matrix equations and one scalar equation:

$$\begin{aligned} \mathbf H _{0} (u)(\mathbf Q -{\mathbf {\Lambda }} -\alpha \mathbf I )+j\sigma \mathbf H '_{0} (u)+\mu _{1} e^{-ju} \mathbf H _{1} (u)+\mu _{2} e^{-ju} \mathbf H _{2} (u)=0, \end{aligned}$$
$$\begin{aligned} \mathbf H _{1} (u)\left( \mathbf Q +\left( e^{ju} -1\right) {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\mathbf H _{0} (u)e^{ju} {\mathbf {\Lambda }} -j\sigma \mathbf H '_{0} (u)=0, \end{aligned}$$
$$\begin{aligned} \mathbf H _{2} (u)\left( \mathbf Q +\left( e^{ju} -1\right) {\mathbf {\Lambda }} -\mu _{2}{} \mathbf I \right) +\alpha e^{ju} \mathbf H _{0} (u)=0, \end{aligned}$$
$$\begin{aligned} \begin{array}{c} \mathbf H _{0} (u)\left( e^{ju} {\mathbf {\Lambda }} +\alpha e^{ju} \mathbf I \right) \mathbf e +\mathbf H _{1} (u)\left( e^{ju} {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) \mathbf e \\ +\, \mathbf H _{2} (u)\left( e^{ju}{\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \mathbf e =0. \end{array} \end{aligned}$$
(5)

The characteristic function H(u) of the number of incoming calls in the retrial queue is expressed through partial characteristic functions \(\mathbf H _k(u)\) by the following equation

$$\begin{aligned} H(u)=Ee^{jui(t)}=(\mathbf H _{0} (u)+\mathbf H _{1} (u)+\mathbf H _{2} (u))\mathbf e . \end{aligned}$$

We will find the characteristics of our retrial queue with two-way communication and Markov modulated Poisson input. The main content of this paper is the solution of system (5) by the asymptotic analysis method under two limiting conditions: a high rate of making outgoing calls and a low service rate of outgoing calls.

3 Asymptotic Analysis of MMPP/M/1/1 Retrial Queue with Two-Way Communication Under the High Rate of Making Outgoing Calls (\(\alpha \rightarrow \infty \))

We investigate system (5) by the asymptotic analysis method under the condition of a high rate of making outgoing calls.

3.1 First Order Asymptotic

Theorem 1

Suppose i(t) is the number of calls in the system of the stationary MMPP/M/1/1 retrial queue with outgoing calls; then (6) holds:

$$\begin{aligned} \mathop {\lim }\limits _{\alpha \rightarrow \infty } Ee^{jw\frac{i(t)}{\alpha } } =e^{jw\kappa _{1} }, \end{aligned}$$
(6)

where \(\kappa _{1}\) is the positive root of the equation

$$\begin{aligned} \begin{array}{c} \mathbf r \left\{ \kappa _{1} \sigma \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} +\left( \mu _{2}{} \mathbf I -\mathbf Q \right) ^{-1} \right\} ^{-1} \\ \times \, \left\{ \mathbf I +\kappa _{1} \sigma \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} \left( {\mathbf {\Lambda }} -\mu _{1}{} \mathbf I \right) +\left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} \left( {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \right\} \mathbf e =0. \end{array} \end{aligned}$$
(7)

The row vector \(\mathbf r \) is the stationary probability distribution of the underlying process n(t) which is given as the unique solution of the system \(\mathbf rQ = 0\), \(\mathbf re =1\).
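
As a numerical illustration, \(\kappa _{1}\) can be found as the positive root of (7) with a standard scalar root solver. The sketch below reuses the hypothetical two-state MMPP introduced earlier; the root bracket \([10^{-6}, 100]\) is likewise an assumption made only for the example.

```python
import numpy as np
from scipy.optimize import brentq

def kappa1_from_eq7(Q, Lambda, mu1, mu2, sigma, bracket=(1e-6, 100.0)):
    # Positive root kappa1 of equation (7); the bracket is an assumption.
    n = Q.shape[0]
    I, e = np.eye(n), np.ones(n)
    # Stationary distribution r of n(t): r Q = 0, r e = 1.
    r = np.linalg.lstsq(np.vstack([Q.T, e]), np.append(np.zeros(n), 1.0), rcond=None)[0]
    inv1 = np.linalg.inv(mu1 * I - Q)
    inv2 = np.linalg.inv(mu2 * I - Q)

    def residual(k1):
        # Left-hand side of (7) as a scalar function of kappa1.
        M = np.linalg.inv(k1 * sigma * inv1 + inv2)
        return r @ M @ (I + k1 * sigma * inv1 @ (Lambda - mu1 * I)
                        + inv2 @ (Lambda - mu2 * I)) @ e

    return brentq(residual, *bracket)

# Hypothetical two-state example (all parameter values are assumptions).
Q = np.array([[-0.5, 0.5], [1.0, -1.0]])
Lambda = np.diag([1.0, 3.0])
print(kappa1_from_eq7(Q, Lambda, mu1=2.0, mu2=2.0, sigma=1.0))
```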

Proof

We denote \(\alpha = 1/\varepsilon \) in the system (5), and introduce the following notations

$$u = \varepsilon w, \qquad \mathbf H _{0}(u) = \varepsilon \mathbf F _{0}(w, \varepsilon ), \qquad \mathbf H _{k}(u) =\mathbf F _{k}(w, \varepsilon ), \qquad k=1,2,$$

in order to get the following system

$$\begin{aligned} \mathbf F _{0} (w,\varepsilon )(\varepsilon \mathbf Q -\varepsilon {\mathbf {\Lambda }} -\mathbf I )+j\sigma \frac{\partial \mathbf F _{0} (w,\varepsilon )}{\partial w} +\mu _{1} e^{-j\varepsilon w} \mathbf F _{1} (w,\varepsilon )+\mu _{2} e^{-j\varepsilon w} \mathbf F _{2} (w,\varepsilon )=0, \end{aligned}$$
$$\begin{aligned} \mathbf F _{1} (w,\varepsilon )\left( \mathbf Q +\left( e^{j\varepsilon w} -1\right) {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\varepsilon e^{j\varepsilon w} \mathbf F _{0} (w,\varepsilon ){\mathbf {\Lambda }} -j\sigma \frac{\partial \mathbf F _{0} (w,\varepsilon )}{\partial w} =0, \end{aligned}$$
$$\begin{aligned} \mathbf F _{2} (w,\varepsilon )\left( \mathbf Q +\left( e^{j\varepsilon w} -1\right) {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) +e^{j\varepsilon w} \mathbf F _{0} (w,\varepsilon )=0, \end{aligned}$$
$$\begin{aligned} \begin{array}{c} \mathbf F _{0} (w,\varepsilon )\left( \varepsilon e^{j\varepsilon w} {\mathbf {\Lambda }} +e^{j\varepsilon w} \mathbf I \right) \mathbf e +\mathbf F _{1} (w,\varepsilon )\left( e^{j \varepsilon w} {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) \mathbf e \\ +\, \mathbf F _{2} (w,\varepsilon )\left( e^{j\varepsilon w} {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \mathbf e =0. \end{array} \end{aligned}$$
(8)

Taking the limit as \(\varepsilon \rightarrow 0\) in the system (8), we obtain

$$\begin{aligned} -\mathbf F _{0} (w)+j\sigma \mathbf F '_{0} (w)+\mu _{1} \mathbf F _{1} (w)+\mu _{2} \mathbf F _{2} (w)=0, \end{aligned}$$
$$\begin{aligned} \mathbf F _{1} (w)\left( \mathbf Q -\mu _{1}{} \mathbf I \right) -j\sigma \mathbf F '_{0} (w)=0, \end{aligned}$$
$$\begin{aligned} \mathbf F _{2} (w)\left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf F _{0} (w)=0, \end{aligned}$$
$$\begin{aligned} \mathbf F _{0} (w)\mathbf e +\mathbf F _{1} (w)\left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) \mathbf e +\mathbf F _{2} (w)\left( {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \mathbf e =0. \end{aligned}$$
(9)

We seek the solution of (9) in the form

$$\begin{aligned} \mathbf F _{k} (w)=\varPhi (w)\mathbf r _{k} . \end{aligned}$$
(10)

Here \(\mathbf r _k\), \(k = 1, 2\), are vectors with components \(r_{kn}\), where \(r_{kn}\) is the probability that the server is in state k and the MMPP is in state n; \(\mathbf r _{0}\) is a vector with components \(r_{0n}\) that has no probabilistic meaning, since the probability that the server is in the zero state (idle) tends to zero as \(\alpha \rightarrow \infty \). Substituting (10) into (9) and dividing by \(\varPhi (w)\), we obtain

$$\begin{aligned} -\mathbf r _{0} +j\sigma \frac{ \varPhi ' (w)}{\varPhi (w)} \mathbf r _{0} +\mu _{1}{} \mathbf r _{1} +\mu _{2} \mathbf r _{2} =0, \end{aligned}$$
$$\begin{aligned} \mathbf r _{1} \left( \mathbf Q -\mu _{1} \mathbf I \right) -j\sigma \frac{ \varPhi ' (w)}{\varPhi (w)} \mathbf r _{0} =0, \end{aligned}$$
$$\begin{aligned} \mathbf r _{2} \left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf r _{0} =0, \end{aligned}$$
$$\begin{aligned} \mathbf r _{0} \mathbf e +\mathbf r _{1} \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) \mathbf e +\mathbf r _{2} \left( {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \mathbf e =0. \end{aligned}$$
(11)

Since the ratio \(j\frac{ \varPhi ' (w)}{\varPhi (w)}\) does not depend on w, the function \(\varPhi (w)\) has the form

$$\begin{aligned} \varPhi (w)=\exp \{jw\kappa _{1} \}, \end{aligned}$$

which coincides with (6). The value of the parameter \(\kappa _{1}\) will be determined below. Substituting \(j\varPhi '(w)/\varPhi (w)=-\kappa _{1}\), we rewrite the system (11) in the form

$$\begin{aligned} -\mathbf r _{0} -\kappa _{1} \sigma \mathbf r _{0} +\mu _{1} \mathbf r _{1} +\mu _{2} \mathbf r _{2} =0, \end{aligned}$$
$$\begin{aligned} \mathbf r _{1} \left( \mathbf Q -\mu _{1} \mathbf I \right) +\kappa _{1} \sigma \mathbf r _{0} =0, \end{aligned}$$
$$\begin{aligned} \mathbf r _{2} \left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf r _{0} =0, \end{aligned}$$
$$\begin{aligned} \mathbf r _{0} \mathbf e +\mathbf r _{1} \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) \mathbf e +\mathbf r _{2} \left( {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \mathbf e =0. \end{aligned}$$
(12)

Since the server is almost never idle as \(\alpha \rightarrow \infty \), the vectors \(\mathbf r _{1}\) and \(\mathbf r _{2}\) satisfy the normalization condition

$$\begin{aligned} \mathbf r _{1} +\mathbf r _{2} =\mathbf r . \end{aligned}$$

The row vector \(\mathbf r \) is the stationary probability distribution of the underlying process n(t), defined as the unique solution of the system \(\mathbf rQ = 0\), \(\mathbf re = 1\). Solving the second and third equations of (12), we have

$$\begin{aligned} \mathbf r _{1} =\kappa _{1} \sigma \mathbf r _{0} \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} , \end{aligned}$$
$$\begin{aligned} \mathbf r _{2} =\mathbf r _{0} \left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} , \end{aligned}$$
$$\begin{aligned} \mathbf r _{1} +\mathbf r _{2} =\mathbf r . \end{aligned}$$
(13)

We substitute the values of the vectors \(\mathbf r _{k}\), \(k = 1, 2\) into the last equation of the system (13). We obtain an equation that determines the vector \(\mathbf r _{0}\):

$$\begin{aligned} \mathbf r _{0} =\mathbf r \left\{ \kappa _{1} \sigma \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} +\left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} \right\} ^{-1} . \end{aligned}$$
(14)

Now we substitute the first two equalities of the system (13) into the scalar equation of system (12). We obtain a second equation involving \(\mathbf r _{0}\):

$$\begin{aligned} \mathbf r _{0} \left\{ \mathbf I +\kappa _{1} \sigma \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} \left( {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \right\} \mathbf e =0. \end{aligned}$$

Substituting (14) into this equality, we obtain an equation for \(\kappa _{1}\), which coincides with (7):

$$\begin{aligned} \begin{array}{c} \mathbf r \left\{ \kappa _{1} \sigma \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} +\left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} \right\} ^{-1} \\ \times \, \left\{ \mathbf I +\kappa _{1} \sigma \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} \left( {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \right\} \mathbf e =0. \end{array} \end{aligned}$$
(15)

The first order asymptotic, i.e. Theorem 1, only defines the asymptotic mean value \(\kappa _{1}\alpha \) of the number of calls in the system in the prelimit situation of large \(\alpha \). For a more detailed study of the number i(t) of calls in the system, we consider the second order asymptotic.

3.2 Second Order Asymptotic

Theorem 2

In the context of Theorem 1 the following equation is true

$$\begin{aligned} \mathop {\lim }\limits _{\alpha \rightarrow \infty } E\exp \left\{ jw\frac{\frac{1}{\alpha } i(t)-\kappa _{1} }{\sqrt{\alpha } } \right\} =e^{\frac{\left( jw\right) ^{2} }{2} \kappa _{2} } , \end{aligned}$$
(16)

where parameter \(\kappa _{2}\) is given by

$$\begin{aligned} \kappa _{2} =\frac{1}{\sigma } \cdot \frac{\mathbf{r}_{0} \mathbf{e}+\mathbf{r}_1{\mathbf {\Lambda }}{} \mathbf{e}+\mathbf{r}_2{\mathbf {\Lambda }}{} \mathbf{e} +\left[ \mathbf{y}_{0} +\mathbf{y}_{1} \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf{I}\right) +\mathbf{y}_{2} \left( {\mathbf {\Lambda }} -\mu _{2} \mathbf{I}\right) \right] \mathbf{e}}{\left[ -\mathbf{g}_{0} +\mathbf{g}_{1} \left( \mu _{1} \mathbf{I}-{\mathbf {\Lambda }} \right) +\mathbf{g}_{2} \left( \mu _{2} \mathbf{I}-{\mathbf {\Lambda }} \right) \right] \mathbf{e}} . \end{aligned}$$
(17)

Here the vector \(\mathbf r _0\) and the probability vectors \(\mathbf r _1\), \(\mathbf r _2\) are defined above. The vectors \(\mathbf g _0\), \(\mathbf g _1\), \(\mathbf g _2\), \(\mathbf y _0\), \(\mathbf y _1\), \(\mathbf y _2\) are defined by the following two systems:

$$\begin{aligned} \begin{array}{c} \mathbf g _{0} \left[ -\mathbf I -\sigma \kappa _{1} \mathbf I +\mu _{2} \left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} +\mu _{1} \sigma \kappa _{1} \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} \right] \\ =\, \mathbf r _{0} -\mathbf r _{0} \mu _{1} \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1}, \end{array} \end{aligned}$$
$$\begin{aligned} \mathbf g _{1} =\left( \mathbf g _{0} \sigma \kappa _{1} +\mathbf r _{0} \right) \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} , \end{aligned}$$
$$\begin{aligned} \mathbf g _{2} =\mathbf g _{0} \left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} , \end{aligned}$$
$$\begin{aligned} (\mathbf g _0+\mathbf g _1+\mathbf g _2)\mathbf e =0. \end{aligned}$$
(18)
$$\begin{aligned} \begin{array}{c} \mathbf y _{0} \left[ \left( -\mathbf I -\sigma \kappa _{1} \mathbf I \right) +\mu _{1} \sigma \kappa _{1} \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} +\mu _{2} \left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} \right] \\ =\,\mu _{1} \mathbf r _{1} \left[ \mathbf I -{\mathbf {\Lambda }} \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} \right] +\mu _{2} \left[ \mathbf r _{2} -\left( \mathbf r _{0} +\mathbf r _{2} {\mathbf {\Lambda }} \right) \left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} \right] , \end{array} \end{aligned}$$
$$\begin{aligned} \mathbf y _{1} =\left( \mathbf y _{0} \sigma \kappa _{1} +\mathbf r _{1} {\mathbf {\Lambda }} \right) \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} , \end{aligned}$$
$$\begin{aligned} \mathbf y _{2} =\left( \mathbf y _{0} +\mathbf r _{0} +\mathbf r _{2}{\mathbf {\Lambda }} \right) \left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1},\end{aligned}$$
$$\begin{aligned} (\mathbf y _0+\mathbf y _1+\mathbf y _2)\mathbf e =0. \end{aligned}$$
(19)
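
The constant \(\kappa _{2}\) can also be evaluated numerically: one solves the singular linear systems (18) and (19) together with their normalization conditions and substitutes the result into (17). The Python sketch below follows this recipe directly; it assumes \(\kappa _{1}\) has already been computed (e.g. as in the previous sketch) and uses a least-squares solve as one possible way to impose the normalization conditions.

```python
import numpy as np

def second_order_constants(Q, Lambda, mu1, mu2, sigma, kappa1):
    # A sketch of computing kappa2 from (13)-(19); kappa1 is assumed given.
    n = Q.shape[0]
    I, e = np.eye(n), np.ones(n)
    inv1 = np.linalg.inv(mu1 * I - Q)
    inv2 = np.linalg.inv(mu2 * I - Q)

    # Stationary distribution r of n(t): r Q = 0, r e = 1.
    r = np.linalg.lstsq(np.vstack([Q.T, e]), np.append(np.zeros(n), 1.0), rcond=None)[0]
    r0 = r @ np.linalg.inv(kappa1 * sigma * inv1 + inv2)      # formula (14)
    r1 = kappa1 * sigma * r0 @ inv1                           # formulas (13)
    r2 = r0 @ inv2

    def solve_row_system(B, c, v, s):
        # Solve the singular row-vector system x B = c subject to x v = s.
        return np.linalg.lstsq(np.vstack([B.T, v]), np.append(c, s), rcond=None)[0]

    B = -I - sigma * kappa1 * I + mu2 * inv2 + mu1 * sigma * kappa1 * inv1
    v = (I + sigma * kappa1 * inv1 + inv2) @ e                # from (g0+g1+g2)e = 0 and (y0+y1+y2)e = 0

    g0 = solve_row_system(B, r0 - mu1 * r0 @ inv1, v, -(r0 @ inv1 @ e))   # system (18)
    g1 = (sigma * kappa1 * g0 + r0) @ inv1
    g2 = g0 @ inv2

    c_y = mu1 * r1 @ (I - Lambda @ inv1) + mu2 * (r2 - (r0 + r2 @ Lambda) @ inv2)
    y0 = solve_row_system(B, c_y, v,
                          -((r1 @ Lambda @ inv1 + (r0 + r2 @ Lambda) @ inv2) @ e))  # system (19)
    y1 = (sigma * kappa1 * y0 + r1 @ Lambda) @ inv1
    y2 = (y0 + r0 + r2 @ Lambda) @ inv2

    num = r0 @ e + r1 @ Lambda @ e + r2 @ Lambda @ e \
        + (y0 + y1 @ (Lambda - mu1 * I) + y2 @ (Lambda - mu2 * I)) @ e
    den = (-g0 + g1 @ (mu1 * I - Lambda) + g2 @ (mu2 * I - Lambda)) @ e
    return num / (sigma * den)                                # formula (17)
```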

Proof

In the system (5) we make the substitution

$$\begin{aligned} \mathbf H _{k} (u)=\exp \left( j\alpha u\kappa _{1} \right) \mathbf H _{k}^{(2)} (u), \end{aligned}$$
(20)

and we get

$$\begin{aligned} \begin{array}{c} \mathbf H _{0}^{(2)} (u)(\mathbf Q -{\mathbf {\Lambda }} -\alpha \mathbf I -\sigma \alpha \kappa _{1} \mathbf I )+\mu _{1} e^{-ju} \mathbf H _{1}^{(2)} (u)+\mu _{2} e^{-ju} \mathbf H _{2}^{(2)} (u)\\ + \, j\sigma \frac{d\mathbf H _{0}^{(2)} (u)}{du} =0, \end{array} \end{aligned}$$
$$\begin{aligned} \mathbf H _{1}^{(2)} (u)\left( \mathbf Q +\left( e^{ju} -1\right) {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\mathbf H _{0}^{(2)} (u)\left( e^{ju} {\mathbf {\Lambda }} +\sigma \alpha \kappa _{1}{} \mathbf I \right) -j\sigma \frac{d\mathbf H _{0}^{(2)} (u)}{du} =0, \end{aligned}$$
$$\begin{aligned} \mathbf H _{2}^{(2)} (u)\left( \mathbf Q +\left( e^{ju} -1\right) {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) +\alpha e^{ju} \mathbf H _{0}^{(2)} (u)=0, \end{aligned}$$
$$\begin{aligned} \begin{array}{c} \mathbf H _{0}^{(2)} (u)\left( e^{ju} {\mathbf {\Lambda }} +\alpha e^{ju} \mathbf I \right) \mathbf e +\mathbf H _{1}^{(2)} (u)\left( e^{ju} {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) \mathbf e \\ + \, \mathbf H _{2}^{(2)} (u)\left( e^{ju} {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \mathbf e =0. \end{array} \end{aligned}$$
(21)

Denoting \(\alpha = 1/\varepsilon ^{2}\) and introducing the notations

$$\begin{aligned} u = \varepsilon w, \,\,\, \mathbf H _{0}^{(2)}(u)=\varepsilon ^{2} \mathbf F _{0}^{(2)} (w,\varepsilon ), \, \, \, \, \mathbf H _{k}^{(2)} (u)=\mathbf F _{k}^{(2)} (w,\varepsilon ), \, \, \, \, k=1,2, \end{aligned}$$
(22)

we obtain

$$\begin{aligned} \begin{array}{c} \mathbf F _{0}^{(2)} (w,\varepsilon )(\varepsilon ^{2} \mathbf Q -\varepsilon ^{2} {\mathbf {\Lambda }} -\mathbf I -\sigma \kappa _{1} \mathbf I )+\mu _{1} e^{-j\varepsilon w} \mathbf F _{1}^{(2)} (w,\varepsilon )+\mu _{2} e^{-j\varepsilon w} \mathbf F _{2}^{(2)} (w,\varepsilon )\\ + \, j\sigma \varepsilon \frac{\partial \mathbf F _{0}^{(2)} (w,\varepsilon )}{\partial w} =0, \end{array} \end{aligned}$$
$$\begin{aligned} \begin{array}{c} \mathbf F _{1}^{(2)} (w,\varepsilon )\left( \mathbf Q +\left( e^{j\varepsilon w} -1\right) {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\mathbf F _{0}^{(2)} (w,\varepsilon )\left( \varepsilon ^{2} e^{j\varepsilon w} {\mathbf {\Lambda }} +\sigma \kappa _{1} \mathbf I \right) \\ -\, j\sigma \varepsilon \frac{\partial \mathbf F _{0}^{(2)} (w,\varepsilon )}{\partial w} =0, \end{array} \end{aligned}$$
$$\begin{aligned} \mathbf F _{2}^{(2)} (w,\varepsilon )\left( \mathbf Q +\left( e^{j\varepsilon w} -1\right) {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) +e^{j\varepsilon w} \mathbf F _{0}^{(2)} (w,\varepsilon )=0, \end{aligned}$$
$$\begin{aligned} \begin{array}{c} \mathbf F _{0}^{(2)} (w,\varepsilon )e^{j\varepsilon w} \left( \varepsilon ^{2} {\mathbf {\Lambda }} +\mathbf I \right) \mathbf e +\mathbf F _{1}^{(2)} (w,\varepsilon )\left( e^{j\varepsilon w} {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) \mathbf e \\ + \, \mathbf F _{2}^{(2)} (w,\varepsilon )\left( e^{j\varepsilon w} {\mathbf {\Lambda }}-\mu _{2}{} \mathbf I \right) \mathbf e =0. \end{array} \end{aligned}$$
(23)

Our idea is to seek a solution of the system (23) in the form

$$\begin{aligned} \mathbf F _{k}^{(2)} (w,\varepsilon )=\varPhi _{2} (w)\left\{ \mathbf r _{k} +j\varepsilon w\mathbf f _{k} \right\} +o\left( \varepsilon ^{2} \right) . \end{aligned}$$
(24)

Substituting (24) into (23), we obtain

$$\begin{aligned} \begin{array}{c} \mathbf r _{0} \left( -\mathbf I -\sigma \kappa _{1} \mathbf I \right) +\mu _{1} \mathbf r _{1} +\mu _{2} \mathbf r _{2} +j\varepsilon w\left[ \mathbf f _{0} \left( -\mathbf I -\sigma \kappa _{1} \mathbf I \right) +\mu _{1} \left( \mathbf f _{1} -\mathbf r _{1} \right) +\mu _{2} \left( \mathbf f _{2} -\mathbf r _{2} \right) \right] \\ +\, j\sigma \varepsilon \frac{d\varPhi _{2} (w)/dw}{\varPhi _{2} (w)} \mathbf r _{0} =o\left( \varepsilon ^{2} \right) ,\\ \mathbf r _{1} \left( \mathbf Q -\mu _{1} \mathbf I \right) +\mathbf r _{0} \sigma \kappa _{1} +j\varepsilon w\left[ \mathbf f _{1} \left( \mathbf Q -\mu _{1} \mathbf I \right) +\mathbf f _{0} \sigma \kappa _{1} +\mathbf r _{1} {\mathbf {\Lambda }} \right] \\ -\, j\sigma \varepsilon \frac{d\varPhi _{2} (w)/dw}{\varPhi _{2} (w)} \mathbf r _{0} =o\left( \varepsilon ^{2} \right) ,\\ \mathbf r _{2} \left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf r _{0} +j\varepsilon w\left[ \mathbf f _{2} \left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf r _{0} +\mathbf f _{0} +\mathbf r _{2} {\mathbf {\Lambda }} \right] =o\left( \varepsilon ^{2} \right) ,\\ \mathbf r _{0} \mathbf e +\mathbf r _{1} \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) \mathbf e +\mathbf r _{2} \left( {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \mathbf e \\ +\, j\varepsilon w\left[ \mathbf f _{0} +\mathbf f _{1} \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\mathbf f _{2} \left( {\mathbf {\Lambda }} -\mu _{2}{} \mathbf I \right) +\mathbf r _{0} +\mathbf r _{1} {\mathbf {\Lambda }} +\mathbf r _{2} {\mathbf {\Lambda }} \right] \mathbf e =o\left( \varepsilon ^{2} \right) . \end{array} \end{aligned}$$

Taking into account the previously obtained system (12), we have

$$\begin{aligned} j\varepsilon \left[ \mathbf f _{0} \left( -\mathbf I -\sigma \kappa _{1} \mathbf I \right) +\mu _{1} \left( \mathbf f _{1} -\mathbf r _{1} \right) +\mu _{2} \left( \mathbf f _{2} -\mathbf r _{2} \right) \right] +j\sigma \varepsilon \frac{d\varPhi _{2} (w)/dw}{w\varPhi _{2} (w)}{} \mathbf r _{0} =o\left( \varepsilon ^{2} \right) , \end{aligned}$$
$$\begin{aligned} j\varepsilon \left[ \mathbf f _{1} \left( \mathbf Q -\mu _{1} \mathbf I \right) +\mathbf f _{0} \sigma \kappa _{1} +\mathbf r _{1} {\mathbf {\Lambda }} \right] -j\sigma \varepsilon \frac{d\varPhi _{2} (w)/dw}{w\varPhi _{2} (w)} \mathbf r _{0} =o\left( \varepsilon ^{2} \right) , \end{aligned}$$
$$\begin{aligned} j\varepsilon w\left[ \mathbf f _{2} \left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf r _{0} +\mathbf f _{0} +\mathbf r _{2} {\mathbf {\Lambda }} \right] =o\left( \varepsilon ^{2} \right) , \end{aligned}$$
$$\begin{aligned} j\varepsilon w\left[ \mathbf f _{0} +\mathbf f _{1} \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\mathbf f _{2} \left( {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) +\mathbf r _{0} +\mathbf r _{1} {\mathbf {\Lambda }} +\mathbf r _{2} {\mathbf {\Lambda }} \right] \mathbf e =o\left( \varepsilon ^{2} \right) . \end{aligned}$$

Dividing these equations by \(\varepsilon \) and taking the limit as \(\varepsilon \rightarrow 0\) yields

$$\begin{aligned} \mathbf f _{0} \left( -\mathbf I -\sigma \kappa _{1} \mathbf I \right) +\mu _{1} \left( \mathbf f _{1} -\mathbf r _{1} \right) +\mu _{2} \left( \mathbf f _{2} -\mathbf r _{2} \right) +\sigma \frac{d\varPhi _{2} (w)/dw}{w\varPhi _{2} (w)} \mathbf r _{0} =0, \end{aligned}$$
$$\begin{aligned} \mathbf f _{1} \left( \mathbf Q -\mu _{1} \mathbf I \right) +\mathbf f _{0} \sigma \kappa _{1} +\mathbf r _{1} {\mathbf {\Lambda }} -\sigma \frac{d\varPhi _{2} (w)/dw}{w\varPhi _{2} (w)} \mathbf r _{0} =0, \end{aligned}$$
$$\begin{aligned} \mathbf f _{2} \left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf r _{0} +\mathbf f _{0} +\mathbf r _{2} {\mathbf {\Lambda }} =0, \end{aligned}$$
$$\begin{aligned} \left[ \mathbf f _{0} +\mathbf f _{1} \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\mathbf f _{2} \left( {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) +\mathbf r _{0} +\mathbf r _{1} {\mathbf {\Lambda }} +\mathbf r _{2} {\mathbf {\Lambda }} \right] \mathbf e =0. \end{aligned}$$

These equations imply that \(\frac{\varPhi '_{2} (w)}{w\varPhi _{2} (w)} \) does not depend on w, and thus the scalar function \(\varPhi _{2}(w)\) has the form

$$\begin{aligned} \varPhi _{2} (w)=\exp \left\{ \frac{\left( jw\right) ^{2} }{2} \kappa _{2} \right\} , \end{aligned}$$

which coincides with (16). We have \(\frac{\varPhi '_{2} (w)}{w\varPhi _{2} (w)} =-\kappa _{2} \) and then we obtain the system

$$\begin{aligned} \mathbf f _{0} \left( -\mathbf I -\sigma \kappa _{1} \mathbf I \right) +\mu _{1} \mathbf f _{1} +\mu _{2} \mathbf f _{2} =\sigma \kappa _{2} \mathbf r _{0} +\mu _{1} \mathbf r _{1} +\mu _{2} \mathbf r _{2}, \end{aligned}$$
$$\begin{aligned} \mathbf f _{1} \left( \mathbf Q -\mu _{1} \mathbf I \right) +\mathbf f _{0} \sigma \kappa _{1} =-\mathbf r _{1} {\mathbf {\Lambda }} -\sigma \kappa _{2} \mathbf r _{0} , \end{aligned}$$
$$\begin{aligned} \mathbf f _{2} \left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf f _{0} =-\mathbf r _{0} -\mathbf r _{2} {\mathbf {\Lambda }} , \end{aligned}$$
$$\begin{aligned} \left[ \mathbf f _{0} +\mathbf f _{1} \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\mathbf f _{2} \left( {\mathbf {\Lambda }}-\mu _{2} \mathbf I \right) \right] \mathbf e =-\left( \mathbf r _{0} +\mathbf r _{1} {\mathbf {\Lambda }}+\mathbf r _{2} {\mathbf {\Lambda }}\right) \mathbf e . \end{aligned}$$
(25)

System (25) is an inhomogeneous system of linear equations with respect to the vectors \(\mathbf f _0\), \(\mathbf f _1\), \(\mathbf f _2\). The determinant of the matrix of the system is zero (the row sums are all zero), and the rank of the extended matrix coincides with the rank of the matrix of coefficients. Consider systems (12) and (25): system (12) is homogeneous, while system (25) is inhomogeneous. Consequently, we can write the solution of the inhomogeneous system (25) in the form \(\mathbf f _{k}=C\mathbf r _{k} +\kappa _{2} \sigma \mathbf g _{k} +\mathbf y _{k} ,\) where C is a constant, the vectors \(\mathbf r _k\) are defined above, and the vectors \(\mathbf g _k\) and \(\mathbf y _k\) are particular solutions of the system (25). Substituting this form into (25), we obtain

$$\begin{aligned} \begin{array}{c} C\left[ \mathbf r _{0} \left( -\mathbf I -\sigma \kappa _{1} \mathbf I \right) +\mu _{1} \mathbf r _{1} +\mu _{2} \mathbf r _{2} \right] +\kappa _{2} \sigma \left[ \mathbf g _{0} \left( -\mathbf I -\sigma \kappa _{1} \mathbf I \right) +\mu _{1} \mathbf g _{1} +\mu _{2} \mathbf g _{2} \right] \\ +\, \mu _{1} \mathbf y _{1} +\mu _{2} \mathbf y _{2} +\mathbf y _{0} \left( -\mathbf I -\sigma \kappa _{1} \mathbf I \right) =\sigma \kappa _{2} \mathbf r _{0} +\mu _{1} \mathbf r _{1} +\mu _{2} \mathbf r _{2} , \\ C\left[ \mathbf r _{1} \left( \mathbf Q -\mu _{1} \mathbf I \right) +\mathbf r _{0} \sigma \kappa _{1} \right] +\kappa _{2} \sigma \left[ \mathbf g _{1} \left( \mathbf Q -\mu _{1} \mathbf I \right) +\mathbf g _{0} \sigma \kappa _{1} \right] +\mathbf y _{1} \left( \mathbf Q -\mu _{1} \mathbf I \right) +\mathbf y _{0} \sigma \kappa _{1} \\ =-\mathbf r _{1} {\mathbf {\Lambda }} -\sigma \kappa _{2} \mathbf r _{0}, \\ C\left[ \mathbf r _{2} \left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf r _{0} \right] +\kappa _{2} \sigma \left[ \mathbf g _{2} \left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf g _{0} \right] +\mathbf y _{2} \left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf y _{0} =-\mathbf r _{0} -\mathbf r _{2}{\mathbf {\Lambda }}, \\ C\left[ \mathbf r _{0} +\mathbf r _{1} \left( {\mathbf {\Lambda }} -\mu _{1} I\right) +\mathbf r _{2} \left( {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \right] \mathbf e \\ +\, \kappa _{2} \sigma \left[ \mathbf g _{0} +\mathbf g _{1} \left( {\mathbf {\Lambda }}-\mu _{1} \mathbf I \right) +\mathbf g _{2} \left( {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \right] \mathbf e \\ +\, \left[ \mathbf y _{0} +\mathbf y _{1} \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\mathbf y _{2} \left( {\mathbf {\Lambda }}-\mu _{2} \mathbf I \right) \right] \mathbf e =-\left( \mathbf r _{0} +\mathbf r _{1} {\mathbf {\Lambda }} +\mathbf r _{2} {\mathbf {\Lambda }} \right) \mathbf e . \end{array} \end{aligned}$$

Previously, the system of Eq. (12) was obtained. Taking this into account, the coefficients of C are zeros and we can rewrite the last system in the form

$$\begin{aligned} \begin{array}{c} \kappa _{2} \sigma \left[ \mathbf g _{0} \left( -\mathbf I -\sigma \kappa _{1} \mathbf I \right) +\mu _{1} \mathbf g _{1} +\mu _{2} \mathbf g _{2} \right] +\mu _{1} \mathbf y _{1} +\mu _{2} \mathbf y _{2} +\mathbf y _{0} \left( -\mathbf I -\sigma \kappa _{1} \mathbf I \right) \\ =\,\sigma \kappa _{2} \mathbf r _{0} +\mu _{1} \mathbf r _{1} +\mu _{2} \mathbf r _{2} , \end{array} \end{aligned}$$
$$\begin{aligned} \kappa _{2} \sigma \left[ \mathbf g _{1} \left( \mathbf Q -\mu _{1} \mathbf I \right) +\mathbf g _{0} \sigma \kappa _{1} \right] +\mathbf y _{1} \left( \mathbf Q -\mu _{1} \mathbf I \right) +\mathbf y _{0} \sigma \kappa _{1} =-\mathbf r _{1} {\mathbf {\Lambda }} -\sigma \kappa _{2} \mathbf r _{0} , \end{aligned}$$
$$\begin{aligned} \kappa _{2} \sigma \left[ \mathbf g _{2} \left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf g _{0} \right] +\mathbf y _{2} \left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf y _{0} =-\mathbf r _{0} -\mathbf r _{2} {\mathbf {\Lambda }}, \end{aligned}$$
$$\begin{aligned} \begin{array}{c} \kappa _{2} \sigma \left[ \mathbf g _{0} +\mathbf g _{1} \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\mathbf g _{2} \left( {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \right] \mathbf e \\ +\, \left[ \mathbf y _{0} +\mathbf y _{1} \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\mathbf y _{2} \left( {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \right] \mathbf e =-\left( \mathbf r _{0} +\mathbf r _{1} {\mathbf {\Lambda }} +r_{2} {\mathbf {\Lambda }} \right) \mathbf e . \end{array} \end{aligned}$$
(26)

We consider the first three equations of the system (26). Equating separately the coefficients of \(\kappa _{2}\) and the remaining terms, we obtain

$$\begin{aligned} \mathbf g _{0} \left( -\mathbf I -\sigma \kappa _{1} \mathbf I \right) +\mu _{1} \mathbf g _{1} +\mu _{2} \mathbf g _{2} =\mathbf r _{0} , \end{aligned}$$
$$\begin{aligned} \mathbf g _{1} \left( \mathbf Q -\mu _{1} \mathbf I \right) +\mathbf g _{0} \sigma \kappa _{1} =-\mathbf r _{0} , \end{aligned}$$
$$\begin{aligned} \mathbf g _{2} \left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf g _{0} =0, \end{aligned}$$
(27)

and

$$\begin{aligned} \mu _{1} \mathbf y _{1} +\mu _{2} \mathbf y _{2} +\mathbf y _{0} \left( -\mathbf I -\sigma \kappa _{1} \mathbf I \right) =\mu _{1} \mathbf r _{1} +\mu _{2} \mathbf r _{2} , \end{aligned}$$
$$\begin{aligned} \mathbf y _{1} \left( \mathbf Q -\mu _{1} \mathbf I \right) +\mathbf y _{0} \sigma \kappa _{1} =-\mathbf r _{1} {\mathbf {\Lambda }} , \end{aligned}$$
$$\begin{aligned} \mathbf y _{2} \left( \mathbf Q -\mu _{2} \mathbf I \right) +\mathbf y _{0} =-\mathbf r _{0} -\mathbf r _{2} {\mathbf {\Lambda }} . \end{aligned}$$
(28)

From systems (27) and (28) we obtain the following systems:

$$\begin{aligned} \mathbf g _{0} \left[ -\mathbf I -\sigma \kappa _{1} \mathbf I +\mu _{2} \left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} +\mu _{1} \sigma \kappa _{1} \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} \right] =\mathbf r _{0} -\mathbf r _{0} \mu _{1} \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} , \end{aligned}$$
$$\begin{aligned} \mathbf g _{1} =\left( \mathbf g _{0} \sigma \kappa _{1} +\mathbf r _{0} \right) \left( \mu _{1}{} \mathbf I -\mathbf Q \right) ^{-1} , \end{aligned}$$
$$\begin{aligned} \mathbf g _{2} =\mathbf g _{0} \left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} . \end{aligned}$$
(29)
$$\begin{aligned} \begin{array}{c} \mathbf y _{0} \left[ \left( -\mathbf I -\sigma \kappa _{1}{} \mathbf I \right) +\mu _{1} \sigma \kappa _{1} \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} +\mu _{2} \left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} \right] \\ =\, \mu _{1} \mathbf r _{1} \left[ \mathbf I -{\mathbf {\Lambda }}\left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} \right] +\mu _{2} \left[ \mathbf r _{2} -\left( \mathbf r _{0} +\mathbf r _{2} {\mathbf {\Lambda }} \right) \left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} \right] , \end{array} \end{aligned}$$
$$\begin{aligned} \mathbf y _{1} =\left( \mathbf y _{0} \sigma \kappa _{1} +\mathbf r _{1} {\mathbf {\Lambda }} \right) \left( \mu _{1} \mathbf I -\mathbf Q \right) ^{-1} , \end{aligned}$$
$$\begin{aligned} \mathbf y _{2} =\left( \mathbf y _{0} +\mathbf r _{0} +\mathbf r _{2} {\mathbf {\Lambda }}\right) \left( \mu _{2} \mathbf I -\mathbf Q \right) ^{-1} . \end{aligned}$$
(30)

The determinants of the coefficient matrices of systems (29) and (30) are zero. Therefore, we impose the additional conditions

$$\begin{aligned} (\mathbf g _0+\mathbf g _1+\mathbf g _2)\mathbf e =0, \end{aligned}$$
$$\begin{aligned} (\mathbf y _0+\mathbf y _1+\mathbf y _2)\mathbf e =0. \end{aligned}$$

Thus, the solutions of inhomogeneous systems for \(\mathbf g _{0}\), \(\mathbf g _{1}\), \(\mathbf g _{2}\), \(\mathbf y _{0}\), \(\mathbf y _{1}\), \(\mathbf y _{2}\) are uniquely determined. We obtain systems that coincide with the systems (18) and (19). Substituting values \(\mathbf g _{0}\), \(\mathbf g _{1}\), \(\mathbf g _{2}\), \(\mathbf y _{0}\), \(\mathbf y _{1}\), \(\mathbf y _{2}\) into the scalar equation of the system (26), we obtain

$$\begin{aligned} \kappa _{2} =\frac{1}{\sigma } \cdot \frac{\mathbf{r}_{0} \mathbf e +\mathbf r _1{\mathbf {\Lambda }}{} \mathbf e +\mathbf r _2{\mathbf {\Lambda }}{} \mathbf e +\left[ \mathbf y _{0} +\mathbf y _{1} \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\mathbf y _{2} \left( {\mathbf {\Lambda }} -\mu _{2} \mathbf I \right) \right] \mathbf e }{\left[ -\mathbf g _{0} +\mathbf g _{1} \left( \mu _{1} \mathbf I -{\mathbf {\Lambda }} \right) +\mathbf g _{2} \left( \mu _{2} \mathbf I -{\mathbf {\Lambda }} \right) \right] \mathbf e } . \end{aligned}$$

This equality coincides with (17).

The second order asymptotic, i.e. Theorem 2, shows that the asymptotic probability distribution of the number i(t) of calls in the system is Gaussian with asymptotic mean \(\kappa _{1}\alpha \) and variance \(\kappa _{2}\alpha \).
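
In practice this suggests a Gaussian approximation \(P^{(2)}(i)\) obtained by discretizing the normal law with mean \(\kappa _{1}\alpha \) and variance \(\kappa _{2}\alpha \) over the nonnegative integers. A minimal sketch follows; the particular discretization and the renormalization step are implementation choices rather than prescriptions of the paper.

```python
from math import erf, sqrt

def gaussian_approximation(kappa1, kappa2, alpha, i_max):
    # Discretize N(kappa1*alpha, kappa2*alpha) over i = 0..i_max and renormalize.
    mean, std = kappa1 * alpha, sqrt(kappa2 * alpha)
    cdf = lambda x: 0.5 * (1.0 + erf((x - mean) / (std * sqrt(2.0))))
    p = [cdf(i + 0.5) - cdf(i - 0.5) for i in range(i_max + 1)]
    total = sum(p)
    return [x / total for x in p]
```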

4 Asymptotic Analysis of MMPP/M/1/1 Retrial Queue with Two-Way Communication Under the Low Rate of Service Time of Outgoing Calls (\(\mu _{2} \rightarrow 0\))

We investigate system (5) by the asymptotic analysis method under the condition of a low service rate of outgoing calls.

Theorem 3

Suppose i(t) is the number of calls in the system of the stationary \(MMPP{\slash }M{\slash }1{\slash }1\) retrial queue with two-way communication; then the following equation holds:

$$\begin{aligned} \mathop {\lim }\limits _{\mu _{2} \rightarrow 0} Ee^{jw\mu _{2} i(t)} =\left( 1-jw\rho \mu _{1} \right) ^{-\left( \frac{\alpha }{\sigma \left( 1-\rho \right) } +1\right) } , \end{aligned}$$
(31)

where \(\rho \mu _{1}=\mathbf r {\mathbf {\Lambda }}{} \mathbf e \) and \(\rho \) is the traffic intensity. In the prelimit situation, (31) yields the approximation \(H(u)\approx \left( 1-j\frac{\rho \mu _{1}}{\mu _{2}} u\right) ^{-\left( \frac{\alpha }{\sigma \left( 1-\rho \right) } +1\right) }\) for the characteristic function of the number of calls in the system.

Proof

We denote \(\mu _{2}=\varepsilon \) and make the following substitutions in the system (5):

$$\begin{aligned} u = \varepsilon w, \, \, \, \, \mathbf H _{0}(u)= \varepsilon \mathbf F _{0}(w, \varepsilon ), \, \, \, \, \mathbf H _{k}(u) = \mathbf F _{k}(w, \varepsilon ), \, \, \, \, k=1,2. \end{aligned}$$

We will get the system

$$\begin{aligned} \begin{array}{c} \varepsilon \mathbf F _{0} (w,\varepsilon )(\mathbf Q -{\mathbf {\Lambda }} -\alpha \mathbf I )+j\sigma \frac{\partial \mathbf F _{0} (w,\varepsilon )}{\partial w} +\mu _{1} e^{-j\varepsilon w} \mathbf F _{1} (w,\varepsilon )\\ +\, \varepsilon e^{-j\varepsilon w} \mathbf F _{2} (w,\varepsilon )=0, \end{array} \end{aligned}$$
$$\begin{aligned} \mathbf F _{1} (w,\varepsilon )\left( \mathbf Q +\left( e^{j\varepsilon w} -1\right) {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) +\mathbf F _{0} (w,\varepsilon )\varepsilon e^{j\varepsilon w} {\mathbf {\Lambda }} -j\sigma \frac{\partial \mathbf F _{0} (w,\varepsilon )}{\partial w} =0, \end{aligned}$$
$$\begin{aligned} \mathbf F _{2} (w,\varepsilon )\left( \mathbf Q +\left( e^{j\varepsilon w} -1\right) {\mathbf {\Lambda }} -\varepsilon \mathbf I \right) +\alpha \varepsilon e^{j\varepsilon w} \mathbf F _{0} (w,\varepsilon )=0, \end{aligned}$$
$$\begin{aligned} \begin{array}{c} \mathbf F _{0} (w,\varepsilon )\varepsilon \left( e^{j\varepsilon w} {\mathbf {\Lambda }} +\alpha e^{j\varepsilon w} \mathbf I \right) \mathbf e +\mathbf F _{1} (w,\varepsilon )\left( e^{j\varepsilon w} {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) \mathbf e \\ +\, \mathbf F _{2} (w,\varepsilon )\left( e^{j\varepsilon w} {\mathbf {\Lambda }} -\varepsilon \mathbf I \right) \mathbf e =0. \end{array} \end{aligned}$$
(32)

Taking the limit as \(\varepsilon \rightarrow 0\) in the system (32), we obtain

$$\begin{aligned} j\sigma \mathbf F '_{0} (w)+\mu _{1} \mathbf F _{1} (w)=0, \end{aligned}$$
$$\begin{aligned} \mathbf F _{1} (w)\left( \mathbf Q -\mu _{1} \mathbf I \right) -j\sigma \mathbf F '_{0} (w)=0, \end{aligned}$$
$$\begin{aligned} \mathbf F _{2} (w)\mathbf Q =0, \end{aligned}$$
$$\begin{aligned} \mathbf F _{1} (w)\left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) \mathbf e +\mathbf F _{2} (w){\mathbf {\Lambda }} \mathbf e =0. \end{aligned}$$
(33)

Summing the first and second equations we obtain \( \mathbf F _{1} (w)\mathbf Q =0\), and the third equation gives \(\mathbf F _{2} (w)\mathbf Q =0\). We seek the solution of the system (33) in the form \(\mathbf F _{k} (w)=\varPhi _{k} (w)\mathbf r ,\mathrm{\; \; \; \; }k=1,2.\) Then, taking into account that \(\mathbf r {\mathbf {\Lambda }} \mathbf e =\rho \mu _{1} \), the substitution yields

$$\begin{aligned} j\sigma \mathbf F '_{0} (w)+\mu _{1} \varPhi _{1} (w)\mathbf r =0, \end{aligned}$$
$$\begin{aligned} \varPhi _{1} (w)\mathbf r \left( \mathbf Q -\mu _{1} \mathbf I \right) -j\sigma \mathbf F '_{0} (w)=0, \end{aligned}$$
$$\begin{aligned} \varPhi _{2} (w)\mathbf rQ =0, \end{aligned}$$
$$\begin{aligned} \varPhi _{1} (w)\mathbf r \left( {\mathbf {\Lambda }} -\mu _{1} \mathbf I \right) \mathbf e +\varPhi _{2} (w)\mathbf r {\mathbf {\Lambda }} \mathbf e =0,\end{aligned}$$

we have

$$\begin{aligned} j\sigma \mathbf F '_{0} (w)+\mu _{1} \varPhi _{1} (w)\mathbf r =0, \end{aligned}$$
$$\begin{aligned} \varPhi _{1} (w)\left( \rho -1\right) \mu _{1} +\varPhi _{2} (w)\rho \mu _{1} =0. \end{aligned}$$

We denote \(\varPhi _{1} (w)+\varPhi _{2} (w)=\varPhi (w)\), then \(\varPhi _{1} (w)=\rho \varPhi (w), \, \, \, \, \varPhi _{2} (w)=\left( 1-\rho \right) \varPhi (w).\) Furthermore,

$$\begin{aligned} \mathbf F _{1} (w)=\rho \varPhi (w)\mathbf r , \, \, \, \, \mathbf F _{2} (w)=\left( 1-\rho \right) \varPhi (w)\mathbf r . \end{aligned}$$
(34)

Multiplying the third equation of system (32) by the unit vector \(\mathbf e \), dividing by \(\varepsilon \), and taking the limit as \(\varepsilon \rightarrow 0\), we have

$$\begin{aligned} \left( 1-\rho \right) \varPhi (w)\mathbf r \left( jw{\mathbf {\Lambda }} -\mathbf I \right) \mathbf e +\alpha \mathbf F _{0} (w)\mathbf e =0. \end{aligned}$$

We denote

$$\begin{aligned} \mathbf F _{0} (w)\mathbf e =\varphi (w) . \end{aligned}$$
(35)

Then

$$\begin{aligned} \frac{\alpha }{\left( 1-\rho \right) \left( 1-jw\rho \mu _{1} \right) } \varphi (w)=\varPhi (w). \end{aligned}$$
(36)

We consider the first equation of system (33), multiplying it by a unit vector \(\mathbf e \) and taking into account (34), (35) and (36), we obtain

$$\begin{aligned} j\sigma \varphi '(w)+\frac{\alpha \mu _{1} \rho }{\left( 1-\rho \right) \left( 1-jw\rho \mu _{1} \right) } \varphi (w)=0. \end{aligned}$$

The solution of the differential equation has the form

$$\begin{aligned} \varphi (w)=C\left( 1-jw\rho \mu _{1} \right) ^{-\frac{\alpha }{\sigma \left( 1-\rho \right) } } . \end{aligned}$$

Then, using (36) and the normalization condition \(\varPhi (0)=1\) (so that \(\frac{\alpha C}{1-\rho } =1\)), we obtain

$$\begin{aligned} \varPhi (w)=\left( 1-jw\rho \mu _{1} \right) ^{-\left( \frac{\alpha }{\sigma \left( 1-\rho \right) } +1\right) } . \end{aligned}$$

Recalling that \(H(u)=(\varepsilon \mathbf F _{0} (w,\varepsilon )+\mathbf F _{1} (w,\varepsilon )+\mathbf F _{2} (w,\varepsilon ))\mathbf e \rightarrow \varPhi (w)\) as \(\varepsilon \rightarrow 0\), we obtain the limiting characteristic function (31).

Theorem 3 shows that, under the low service rate of outgoing calls, the asymptotic probability distribution of the number i(t) of calls in the system is a Gamma distribution with shape parameter \(\frac{\alpha }{\sigma \left( 1-\rho \right) } +1\) and scale parameter \(\frac{\rho \mu _{1}}{\mu _{2}}\).
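
A Gamma approximation \(P^{(2)}(i)\) can therefore be built by discretizing this Gamma law over the nonnegative integers. The sketch below uses SciPy's gamma distribution; the discretization and renormalization are implementation choices, not part of the theorem.

```python
from scipy.stats import gamma

def gamma_approximation(rho, mu1, mu2, alpha, sigma, i_max):
    # Gamma law suggested by (31): shape alpha/(sigma*(1-rho)) + 1, scale rho*mu1/mu2,
    # discretized over i = 0..i_max and renormalized.
    dist = gamma(a=alpha / (sigma * (1.0 - rho)) + 1.0, scale=rho * mu1 / mu2)
    p = [dist.cdf(i + 0.5) - dist.cdf(max(i - 0.5, 0.0)) for i in range(i_max + 1)]
    total = sum(p)
    return [x / total for x in p]
```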

5 Approximation Accuracy \( P^{(2)}(i)\)

The accuracy of the approximation \( P^{(2)}(i)\) is measured by the Kolmogorov distance \(\varDelta _2 =\max \limits _{0\le i\le N} \left| {\sum \limits _{v=0}^i {\left( {P(v)-P^{(2)}(v)} \right) } } \right| ,\) which quantifies the difference between the distributions P(i) and \( P^{(2)}(i)\), where P(i) is obtained by a numerical algorithm for the MMPP/M/1/1 retrial queue and the approximation \( P^{(2)}(i)\) is given by the Gaussian or Gamma approximation.
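
Given the numerically computed distribution P(i) and an approximation \(P^{(2)}(i)\) truncated at the same level, \(\varDelta _2\) is a one-line computation. A minimal sketch, assuming the truncation level is chosen large enough that both tails are negligible:

```python
import numpy as np

def kolmogorov_distance(p, p_approx):
    # Delta_2 = max over i of |sum_{v<=i} (P(v) - P^(2)(v))| for arrays of equal length.
    diff = np.cumsum(np.asarray(p, dtype=float) - np.asarray(p_approx, dtype=float))
    return float(np.max(np.abs(diff)))
```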

Table 1. Kolmogorov distance, \(\mu _{1}=1\), \(\mu _{2}=2\), \(\sigma =1\)
Table 2. Kolmogorov distance, \(\mu _{1}=1\), \(\alpha =1\), \(\sigma =1\)

Table 1 contains the values of \(\varDelta _2\) for various values of \(\rho \) and \(\alpha \) for the MMPP/M/1/1 retrial queue with two-way communication; here we fix \(\mu _1 = 1\), \(\mu _2 = 2\) and \(\sigma = 1\). Table 2 contains the values of \(\varDelta _2\) for various values of \(\rho \) and \(\mu _2\); here we fix \(\mu _1 = 1\), \(\alpha = 1\) and \(\sigma = 1\). We observe in Table 1 that the approximation accuracy increases as \(\alpha \) increases, and in Table 2 that the approximation accuracy increases as \(\mu _2\) decreases.

6 Conclusions

In this paper, we have considered a retrial queue with two-way communication and MMPP input. We have found the first and the second order asymptotics of the number of calls in the system under the condition of a high rate of making outgoing calls. Based on the obtained asymptotics, we have built a Gaussian approximation of the probability distribution of the number of calls in the system. Our numerical results show that the accuracy of the Gaussian approximation increases as \(\alpha \) increases. We have also found a Gamma approximation of the number of calls in the system under the condition of a low service rate of outgoing calls; its accuracy increases as \(\mu _2\) decreases. In the future we plan to consider this retrial queueing system under other asymptotic conditions.