1 Introduction

Since the introduction of the first matrix analytic methods [9], a wide range of Markov chain-based analysis approaches has been extended to Markov chains with regular matrix block structures. Popular textbooks summarizing the basic matrix analytic approaches, for example [3, 8, 10], establish the belief that the Markov chain-based analysis of stochastic models with memoryless components can be extended to the same models in a modulating Markov environment. For example, the analysis of a queueing system with a Poisson arrival process can be extended to the analysis of the same system with a Markov arrival process (MAP). For simple queueing systems and straightforward performance measures this extension is more or less standard today. However, for important performance measures of complex queueing systems several standard queueing methodologies are not applicable when the system operates in a Markov environment [5], and finding an adequate matrix analytic methodology to handle them remains a challenge.

In this paper we consider the delay analysis of a rather complex queueing system operating in a Markov environment with customer re-sequencing. Regular customers arrive at the system and occupy one place in a buffer of infinite capacity. Re-sequencing signals also arrive at the system; upon arrival each re-sequencing signal moves one regular customer from the buffer to another queue (the re-sequencing buffer) of infinite capacity and leaves the system. There is one server which serves customers from both queues. Upon service completion one regular customer from the buffer enters the server, or, only if there are no regular customers in the buffer, one customer from the re-sequencing buffer enters service. No service interruption is allowed. This model with memoryless components has already been solved in [12]. The basic idea of the solution method in [12] is rather standard: evaluate the joint stationary distribution of the number of customers in both queues (whose generating function has a closed form) and compute the delay distribution of a tagged customer which arrives at the system in steady state. Due to the geometric dependence of the delay distribution (expressed in terms of its Laplace transform) of an arriving regular customer on the number of customers in both queues, the delay distribution is obtained by an appropriate substitution of Laplace transform functions into the parameters of the generating function describing the joint stationary distribution.

In the first phase of our research we very much shared the general belief in the extendibility of the analysis to Markov environments, but it turned out that the commonly applied approaches, including the one in [12], are not applicable in the case of a Markov environment due to the presence of non-commuting matrices and matrix functions. In order to maintain our general belief we looked for an appropriate variant of the analysis approach which allows evaluation with non-commuting matrices.

One part of the proposed analysis approach is the application of Kronecker algebra [2, 13], which, to some extent, overcomes the limitations imposed by non-commuting matrices. The use of Kronecker algebra has been quite common in matrix analytic methods since their very beginning [8, 10]. In this respect the contribution of the paper is a case study which demonstrates that the repeated application of Kronecker transformations allows the analysis of increasingly complex expressions with non-commuting matrices, which otherwise are not computable in closed form [6, 7]. The other part of the proposed analysis approach is a variant of the queue analysis which allows matrix-based computations. In this respect the major difference is that our matrix-based analysis cannot be decomposed into the stationary analysis of the number of customers and the delay analysis of a tagged customer; we have to compute both of these “ingredients” at once.

The rest of the paper is structured as follows. Section 2 briefly introduces the queueing model with re-sequencing and Sect. 3 summarizes the QBD-type analysis of the Markov chain which describes the joint stationary distribution of the number of customers in each buffer and the states of the Markov environment. The analysis of the waiting time is presented in Sect. 4, with the most complex part deferred to Sect. 5. Section 6 presents results of numerical experiments carried out using the obtained expressions, and Sect. 7 concludes the paper.

2 Model description

We consider a queueing system with two buffers: the regular buffer with high-priority customers and the re-sequencing buffer with low-priority customers. Arriving regular customers are added to the regular buffer of infinite capacity and wait for service. External re-sequencing signals also arrive at the system; each signal moves the customer at the head of the regular buffer (if any) to the re-sequencing buffer of infinite capacity and itself leaves the system. The service policy of customers in the regular and re-sequencing buffers is non-preemptive priority with the first-in-first-out (FIFO) discipline within each buffer, i.e., an arriving customer (further referred to as high priority) occupies one place at the end of the regular buffer, and a re-sequenced customer (further referred to as low priority) occupies one place at the end of the re-sequencing buffer. There is one server which serves customers from both buffers, and the service process is the same for both high- and low-priority customers.

Customers arrive according to a MAP with generator matrices \((\mathbf {A_0},\mathbf {A_1})\), the service process is a MAP with \((\mathbf {S_0},\mathbf {S_1})\) and re-sequencing signals arrive according to a MAP with \((\mathbf {H_0},\mathbf {H_1})\). Let \(\mathbf {A_J}=\mathbf {A_0}+\mathbf {A_1}\), \(\mathbf {S_J}=\mathbf {S_0}+\mathbf {S_1}\) and \(\mathbf {H_J}=\mathbf {H_0}+\mathbf {H_1}\) denote the generators of the phase processes of the associated MAPs (see, for example, [8] for details). The block structure of the Markov chain representing the number of high- and low-priority customers in the system is depicted in Fig. 1. Each block represents the set of states with the same number of high- and low-priority customers and with different phases of the MAPs. The letters in the figures denote

  • arrival of a customer: \({\mathcal {A}}=\mathbf {A_1}\otimes I \otimes I \),

  • service of a customer: \({\mathcal {S}}= I \otimes \mathbf {S_1}\otimes I \),

  • re-sequencing of a customer: \({\mathcal {H}}= I \otimes I \otimes \mathbf {H_1}\),

  • phase change when re-sequencing is possible: \({\mathcal {L}}=\mathbf {A_0}\oplus \mathbf {S_0}\oplus \mathbf {H_0}\),

  • phase change when re-sequencing is not possible: \({\mathcal {L}}^{\prime }=\mathbf {A_0}\oplus \mathbf {S_0}\oplus \mathbf {H_J}\),

  • phase change when re-sequencing is not possible and the service process is stopped: \({\mathcal {L}}_0=\mathbf {A_0}\otimes I \oplus \mathbf {H_J}=\mathbf {A_0}\otimes I \otimes I + I \otimes I \otimes \mathbf {H_J}\),

where \(I\) denotes the identity matrix of appropriate size. Note that the phase of the service process does not change while the system is empty; it remains equal to the phase in which the next service will start. The main goal of the analysis is to evaluate the stationary waiting time distribution of a regular customer arriving at the system.
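These blocks are straightforward to assemble numerically. The following Python sketch (all function and variable names are ours, not from the paper) builds them with NumPy, assuming the six MAP matrices are given as square arrays:

```python
import numpy as np

def build_blocks(A0, A1, S0, S1, H0, H1):
    """Assemble the transition blocks of Sect. 2 via Kronecker products;
    a minimal sketch, mirroring the calligraphic letters in the text."""
    IA, IS, IH = (np.eye(M.shape[0]) for M in (A0, S0, H0))
    kron3 = lambda X, Y, Z: np.kron(np.kron(X, Y), Z)
    A_cal = kron3(A1, IS, IH)            # arrival of a customer
    S_cal = kron3(IA, S1, IH)            # service of a customer
    H_cal = kron3(IA, IS, H1)            # re-sequencing of a customer
    # Kronecker sums A0 (+) S0 (+) H0 etc., lifted to the joint phase space.
    L_cal = kron3(A0, IS, IH) + kron3(IA, S0, IH) + kron3(IA, IS, H0)
    L_p   = kron3(A0, IS, IH) + kron3(IA, S0, IH) + kron3(IA, IS, H0 + H1)
    L_0   = kron3(A0, IS, IH) + kron3(IA, IS, H0 + H1)
    return A_cal, S_cal, H_cal, L_cal, L_p, L_0
```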

3 Joint stationary distribution

Before deriving expressions for the stationary waiting time distribution, one has to obtain the joint stationary distribution of the number of customers in the regular buffer and the re-sequencing buffer, and of the phases of the regular arrival, re-sequencing arrival and service processes.

3.1 Censored process

To simplify the analysis and obtain a Markov chain with a regular structure, we censor the Markov chain in Fig. 1 on the states in which the server is busy. The structure of the censored Markov chain is depicted in Fig. 2. The transitions of the upper left block of the censored chain are obtained as

$$\begin{aligned} {\mathcal {L}}^{\prime \prime }={\mathcal {L}}^{\prime }-{\mathcal {S}}{\mathcal {L}}_0^{-1}{\mathcal {A}}=(\mathbf {A_0}\oplus \mathbf {S_0}\oplus \mathbf {H_J}) - ( I \otimes \mathbf {S_1}\otimes I )(\mathbf {A_0}\otimes I \oplus \mathbf {H_J})^{-1} (\mathbf {A_1}\otimes I \otimes I ). \end{aligned}$$
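In code, the censoring step is a single linear solve (a sketch reusing the blocks built above; solving with \({\mathcal {L}}_0\) avoids forming its inverse explicitly):

```python
import numpy as np

def censored_block(L_p, S_cal, L_0, A_cal):
    """L'' = L' - S L0^{-1} A from censoring out the idle-server block."""
    return L_p - S_cal @ np.linalg.solve(L_0, A_cal)
```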
Fig. 1 Block structure of the Markov chain representing the number of regular (high priority) and re-sequenced (low priority) customers

Fig. 2 Block structure of the censored Markov chain representing the number of regular (high priority) and re-sequenced (low priority) customers

3.2 QBD representation of the censored process

Following, for example, the discussion of Section 13.1 in [8], we can represent the censored Markov chain as a QBD process whose levels are composed of the sets of states with the same number of regular customers (these states form the columns of blocks in Fig. 2). The generator \({\mathbb Q}\) of the censored process can be represented in hyper-block tridiagonal form, where a hyper-block refers to the set of (infinitely many) states on the same level.

$$\begin{aligned} {\mathbb Q}= \begin{pmatrix} \mathbb {L}^{\prime } &{}\quad \mathbb {F} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \cdots \\ \mathbb {B} &{}\quad \mathbb {L} &{}\quad \mathbb {F} &{}\quad 0 &{}\quad 0 &{}\quad \cdots \\ 0 &{}\quad \mathbb {B} &{}\quad \mathbb {L} &{}\quad \mathbb {F} &{}\quad 0 &{}\quad \cdots \\ 0 &{}\quad 0 &{}\quad \mathbb {B} &{}\quad \mathbb {L} &{}\quad \mathbb {F} &{}\quad \cdots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots \\ \end{pmatrix}\!. \end{aligned}$$

Since the number of states within each level is infinite, the matrices \(\mathbb {L}^{\prime }, \mathbb {L}, \mathbb {B}, \mathbb {F}\) have infinitely many block rows and columns, which are associated with the blocks in Fig. 2.

$$\begin{aligned} \mathbb {L}^{\prime }= & {} \begin{pmatrix} {\mathcal {L}}^{\prime \prime } &{}\quad 0 &{}\quad 0 &{}\quad \cdots \\ {\mathcal {S}} &{}\quad {\mathcal {L}}^{\prime } &{}\quad 0 &{}\quad \cdots \\ 0 &{}\quad {\mathcal {S}} &{}\quad {\mathcal {L}}^{\prime } &{}\quad \cdots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots \\ \end{pmatrix}\!, \quad \mathbb {L}= \begin{pmatrix} {\mathcal {L}} &{}\quad 0 &{}\quad 0 &{}\quad \cdots \\ 0 &{}\quad {\mathcal {L}} &{}\quad 0 &{}\quad \cdots \\ 0 &{}\quad 0 &{}\quad {\mathcal {L}} &{}\quad \cdots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots \\ \end{pmatrix}\!, \quad \mathbb {F}= \begin{pmatrix} {\mathcal {A}} &{}\quad 0 &{}\quad 0 &{}\quad \cdots \\ 0 &{}\quad {\mathcal {A}} &{}\quad 0 &{}\quad \cdots \\ 0 &{}\quad 0 &{}\quad {\mathcal {A}} &{}\quad \cdots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots \\ \end{pmatrix}\!,\\ \mathbb {B}= & {} \begin{pmatrix} {\mathcal {S}} &{}\quad {\mathcal {H}} &{}\quad 0 &{}\quad 0 &{}\quad \cdots \\ 0 &{}\quad {\mathcal {S}} &{}\quad {\mathcal {H}} &{}\quad 0 &{}\quad \cdots \\ 0 &{}\quad 0 &{}\quad {\mathcal {S}} &{}\quad {\mathcal {H}} &{}\quad \cdots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots \\ \end{pmatrix}. \end{aligned}$$

3.3 Condition for stability

The number of customers in the system increases due to arrivals, whose average rate is \(\lambda \), and decreases due to service, whose average rate is \(\mu \). Since the considered system is work conserving and a re-sequencing signal does not change the number of customers in the system, the condition for stability is \(\mu >\lambda \). Here \(\lambda \) is computed from \((\mathbf {A_0},\mathbf {A_1})\) as \(\lambda =\pi _A \mathbf {A_1} \mathbf {1}\), where \(\pi _A\) is the solution of \(\pi _A \mathbf {A_J}=0, \pi _A \mathbf {1} = 1\) and \(\mathbf {1}\) is the column vector of ones of appropriate size. The value of \(\mu \) is computed similarly from \((\mathbf {S_0},\mathbf {S_1})\). Throughout the paper it is assumed that the queue is stable.
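Both rates require only the stationary phase distribution of the corresponding MAP; a small sketch (our own helper) appends the normalization condition to the singular stationary system:

```python
import numpy as np

def map_rate(D0, D1):
    """Average event rate of a MAP (D0, D1): pi D1 1, where
    pi (D0 + D1) = 0 and pi 1 = 1."""
    DJ = D0 + D1
    n = DJ.shape[0]
    M = np.vstack([DJ.T, np.ones((1, n))])   # stationary eqs + normalization
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    pi, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return float(pi @ D1 @ np.ones(n))

# Stability check: map_rate(S0, S1) > map_rate(A0, A1).
```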

3.4 QBD analysis of the process

In the censored Markov chain we denote the stationary probability vector of the set of states with \(i\) regular and \(j\) delayed customers by \(\pi _{ij}\) (\(i,j\ge 0\)) and compose the following row vectors

$$\begin{aligned} \mathbf {p}_{i}= & {} (\pi _{i,0}, \pi _{i,1}, \pi _{i,2}, \pi _{i,3},\ldots ), \quad i \ge 0, \\ \mathbf {p}= & {} (\mathbf {p}_{0}, \mathbf {p}_{1},\mathbf {p}_{2},\mathbf {p}_{3},\ldots ). \end{aligned}$$

Considering the hyper-block structure of \({\mathbb Q}\), the infinite linear system of equations \(\mathbf {p}\, {\mathbb Q} = \mathbf {0}, \mathbf {p}\, \mathbf {1} =1\) has the following hyper-block structure

$$\begin{aligned}&\mathbf {p}_{0} \mathbb {L}^{\prime } + \mathbf {p}_{1} \mathbb {B} = \mathbf {0}, \end{aligned}$$
(1)
$$\begin{aligned}&\mathbf {p}_{i-1} \mathbb {F} + \mathbf {p}_{i} \mathbb {L} + \mathbf {p}_{i+1} \mathbb {B} = \mathbf {0}, \quad \forall i \ge 1, \end{aligned}$$
(2)
$$\begin{aligned}&\sum _{k=0}^{\infty } \mathbf {p}_{k} \mathbf {1} = 1. \end{aligned}$$
(3)

The solution of Eqs. (1)–(3) has a matrix geometric structure, \(\mathbf {p}_{i}=\mathbf {p}_{i-1} \mathbb {R}, i \ge 1\), where the matrix \(\mathbb {R}\) is the minimal non-negative solution of the equation

$$\begin{aligned} {\mathbb F} + {\mathbb R} {\mathbb L} + {\mathbb R^2} {\mathbb B}= {\mathbf 0}, \end{aligned}$$
(4)

where \({\mathbf 0}\) denotes the zero matrix [8]. Due to the level-independent behaviour of the matrix \({\mathbb Q}\), \(\mathbb {R}\) has the following upper-triangular block-Toeplitz form

$$\begin{aligned} \mathbb {R}= \begin{pmatrix} {\mathbf {R}}_0 &{}\quad {\mathbf {R}}_1 &{}\quad {\mathbf {R}}_2 &{}\quad {\mathbf {R}}_3 &{}\quad {\mathbf {R}}_4 &{}\quad \ldots \\ 0 &{}\quad {\mathbf {R}}_0 &{}\quad {\mathbf {R}}_1 &{}\quad {\mathbf {R}}_2 &{}\quad {\mathbf {R}}_3 &{}\quad \ldots \\ 0 &{}\quad 0 &{}\quad {\mathbf {R}}_0 &{}\quad {\mathbf {R}}_1 &{}\quad {\mathbf {R}}_2 &{}\quad \ldots \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad {\mathbf {R}}_0 &{}\quad {\mathbf {R}}_1 &{}\quad \ldots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots \\ \end{pmatrix}\!\!. \end{aligned}$$

Based on the block structure of \(\mathbb {R}\), the hyper-block level equation \(\mathbf {p}_{i}=\mathbf {p}_{i-1} \mathbb {R}\) can be written as

$$\begin{aligned} \pi _{ij} = \sum _{k=0}^{j} \pi _{i-1,k} {\mathbf R}_{j-k}, \ i \ge 1, \ j \ge 0. \end{aligned}$$
(5)

The explicit computation of the \({\mathbf R}_{j}\) matrices is possible by utilizing the structure of the matrices \(\mathbb {L}, \mathbb {B}, \mathbb {F}\) and \(\mathbb {R}\): we can write equations for the \((i,j)\)th block of the matrix \({\mathbb F} + {\mathbb R} {\mathbb L} + {\mathbb R^2} {\mathbb B}\). It can be verified by direct calculation that Eq. (4) is equivalent to the following system of equations:

$$\begin{aligned}&{\mathcal {A}} + {\mathbf {R}}_0 {\mathcal {L}} +{\mathbf {R}}_0^2 {\mathcal {S}} = \mathbf {0}, \end{aligned}$$
(6)
$$\begin{aligned}&{\mathbf {R}}_1 {\mathcal {L}} + {\mathbf {R}}_0^2 {\mathcal {H}} + \sum _{i=0}^1 {\mathbf {R}}_i{\mathbf {R}}_{1-i} {\mathcal {S}} = \mathbf {0}, \end{aligned}$$
(7)
$$\begin{aligned}&{\mathbf {R}}_{j} {\mathcal {L}} + \sum _{i=0}^{j-1} {\mathbf {R}}_i{\mathbf {R}}_{j-i-1} {\mathcal {H}} + \sum _{i=0}^j {\mathbf {R}}_i{\mathbf {R}}_{j-i} {\mathcal {S}} = \mathbf {0}, \quad j \ge 2. \end{aligned}$$
(8)

The blocks on the main diagonal of the matrix \({\mathbb F} + {\mathbb R} {\mathbb L} + {\mathbb R^2} {\mathbb B}\) are equal to the left-hand side of (6), and the blocks on the first upper diagonal are equal to the left-hand side of (7). Finally, the \(j\)-th upper diagonal contains blocks equal to the left-hand side of (8) for the corresponding value of \(j\).

Let us introduce the matrix generating function

$$\begin{aligned} \overline{\mathbf R}(z) = \sum _{i=0}^{\infty }z^i {\mathbf R}_i, \ |z|<1. \end{aligned}$$

Multiplying the left-hand and right-hand sides of (6) by \(z^0\), (7) by \(z^1\), and (8) by \(z^j\), and summing up over all values of \(j\ge 0\), we obtain

$$\begin{aligned} {\mathcal {A}} + {\mathbf {R}}_0 {\mathcal {L}}+{\mathbf {R}}_0^2 {\mathcal {S}} + \sum _{j=1}^{\infty } z^j \left( {\mathbf {R}}_{j} {\mathcal {L}}+ \sum _{i=0}^{j-1} {\mathbf {R}}_i{\mathbf {R}}_{j-i-1} {\mathcal {H}} + \sum _{i=0}^j {\mathbf {R}}_i{\mathbf {R}}_{j-i}{\mathcal {S}} \right) = \mathbf {0}. \end{aligned}$$

Therefore \(\overline{\mathbf R}(z)\) is the minimal non-negative solution of the quadratic matrix equation

$$\begin{aligned} {\mathcal {A}} + \overline{\mathbf R}(z){\mathcal {L}}+ \overline{\mathbf R}^2(z) \left( z {\mathcal {H}} + {\mathcal {S}} \right) =\mathbf {0}. \end{aligned}$$
(9)
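For a fixed \(z\), (9) can be solved, for instance, by the classical natural fixed-point iteration for QBD processes. The sketch below is meant for exposition rather than efficiency (logarithmic or cyclic reduction [1, 4] should be preferred in practice); all names are ours:

```python
import numpy as np

def solve_Rbar(z, A_cal, L_cal, S_cal, H_cal, tol=1e-12, max_iter=10**6):
    """Natural iteration for (9): A + R L + R^2 (z H + S) = 0,
    iterating R <- -(A + R^2 (z H + S)) L^{-1} starting from R = 0."""
    B = z * H_cal + S_cal
    R = np.zeros_like(A_cal)
    for _ in range(max_iter):
        R_next = -np.linalg.solve(L_cal.T, (A_cal + R @ R @ B).T).T
        if np.max(np.abs(R_next - R)) < tol:
            return R_next
        R = R_next
    raise RuntimeError("natural iteration did not converge")
```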

Throughout the paper we rely on the assumption that efficient numerical methods are available for the solution of quadratic matrix equations [1, 4] and consequently we consider the matrices defined by quadratic matrix equations to be known. From (5), for \(\hat{\pi }_{i}(z)=\sum _{j=0}^\infty \pi _{ij} z^j, i\ge 1\), we have

$$\begin{aligned} \hat{\pi }_{i}(z) = \sum _{j=0}^\infty \pi _{ij} z^j = \sum _{j=0}^\infty z^j \sum _{k=0}^{j} \pi _{i-1,k} {\mathbf R}_{j-k}=\hat{\pi }_{i-1}(z) \overline{\mathbf R}(z). \end{aligned}$$
(10)

Similarly to the definition of \({\mathbb R}\), we can define the matrix \({\mathbb G}\) of the hyper-block QBD process, which is the minimal non-negative solution of \({\mathbb B} + {\mathbb L} {\mathbb G} + {\mathbb F} {\mathbb G^2}=\mathbf {0}\). The level-independent block structure of the hyper-block QBD process ensures an upper-triangular block-Toeplitz form for the matrix \(\mathbb {G}\) as well.

$$\begin{aligned} \mathbb {G}= \begin{pmatrix} {\mathbf {G}}_0 &{}\quad {\mathbf {G}}_1 &{}\quad {\mathbf {G}}_2 &{}\quad {\mathbf {G}}_3 &{}\quad {\mathbf {G}}_4 &{}\quad \ldots \\ 0 &{}\quad {\mathbf {G}}_0 &{}\quad {\mathbf {G}}_1 &{}\quad {\mathbf {G}}_2 &{}\quad {\mathbf {G}}_3 &{}\quad \ldots \\ 0 &{}\quad 0 &{}\quad {\mathbf {G}}_0 &{}\quad {\mathbf {G}}_1 &{}\quad {\mathbf {G}}_2 &{}\quad \ldots \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad {\mathbf {G}}_0 &{}\quad {\mathbf {G}}_1 &{}\quad \ldots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots \\ \end{pmatrix}. \end{aligned}$$

The equation for the \((i,j)\)th block of the matrix \({\mathbb B} + {\mathbb L} {\mathbb G} + {\mathbb F}{\mathbb G^2}\) gives

$$\begin{aligned}&{\mathcal {S}} + {\mathcal {L}} {\mathbf {G}}_0 + {\mathcal {A}} {\mathbf {G}}_0^2 = \mathbf {0}, \end{aligned}$$
(11)
$$\begin{aligned}&{\mathcal {H}} + {\mathcal {L}} {\mathbf {G}}_1 + {\mathcal {A}} \sum _{i=0}^1 {\mathbf {G}}_i{\mathbf {G}}_{1-i} = \mathbf {0}, \end{aligned}$$
(12)
$$\begin{aligned}&{\mathcal {L}} {\mathbf {G}}_{j} + {\mathcal {A}} \sum _{i=0}^j {\mathbf {G}}_i {\mathbf {G}}_{j-i} = \mathbf {0}, \quad j \ge 2. \end{aligned}$$
(13)

Introducing the matrix generating function

$$\begin{aligned} \overline{\mathbf G}(z) = \sum _{i=0}^{\infty }z^i {\mathbf G}_i, \ |z|<1, \end{aligned}$$

multiplying the \(j\)-th equation of (11)–(13) with \(z^j\) and summing up over all values of \(j\) leads to

$$\begin{aligned} ({\mathcal {S}} + z {\mathcal {H}}) + {\mathcal {L}} \overline{\mathbf G}(z)+ {\mathcal {A}} \overline{\mathbf G}^2(z) = \mathbf {0}. \end{aligned}$$
(14)
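Equation (14) has the same quadratic structure with the coefficients acting from the left, so the analogous iteration applies (again only a sketch):

```python
import numpy as np

def solve_Gbar(z, A_cal, L_cal, S_cal, H_cal, tol=1e-12, max_iter=10**6):
    """Natural iteration for (14): (S + z H) + L G + A G^2 = 0,
    iterating G <- -L^{-1} ((S + z H) + A G^2) starting from G = 0."""
    C = S_cal + z * H_cal
    G = np.zeros_like(A_cal)
    for _ in range(max_iter):
        G_next = -np.linalg.solve(L_cal, C + A_cal @ G @ G)
        if np.max(np.abs(G_next - G)) < tol:
            return G_next
        G = G_next
    raise RuntimeError("natural iteration did not converge")
```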

3.5 Censored process on level 0

From the hyper-block level description we can obtain the generator of the censored process on level 0 as \({\mathbb {L}^{\prime } + \mathbb {F} \mathbb {G}}\), which has the following M/G/1-type block level structure

$$\begin{aligned} \mathbb {L}^{\prime } + \mathbb {F} \mathbb {G}= \begin{pmatrix} {\mathcal {L}}^{\prime \prime } + {\mathcal {A}} {\mathbf {G}}_{0} &{}\quad {\mathcal {A}} {\mathbf {G}}_1 &{}\quad {\mathcal {A}} {\mathbf {G}}_2 &{}\quad {\mathcal {A}} {\mathbf {G}}_3 &{} \quad {\mathcal {A}} {\mathbf {G}}_4 &{}\quad \ldots \\ {\mathcal {S}} &{}\quad {\mathcal {L}}^{\prime } + {\mathcal {A}} {\mathbf {G}}_{0} &{}\quad {\mathcal {A}} {\mathbf {G}}_1 &{}\quad {\mathcal {A}} {\mathbf {G}}_2 &{}\quad {\mathcal {A}} {\mathbf {G}}_3 &{}\quad \ldots \\ 0 &{}\quad {\mathcal {S}} &{}\quad {\mathcal {L}}^{\prime } + {\mathcal {A}} {\mathbf {G}}_{0} &{}\quad {\mathcal {A}} {\mathbf {G}}_1 &{}\quad {\mathcal {A}} {\mathbf {G}}_2 &{}\quad \ldots \\ 0 &{}\quad 0 &{}\quad {\mathcal {S}} &{}\quad {\mathcal {L}}^{\prime } + {\mathcal {A}} {\mathbf {G}}_{0} &{}\quad {\mathcal {A}} {\mathbf {G}}_1 &{}\quad \ldots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots \\ \end{pmatrix}\!. \end{aligned}$$

The fundamental matrix of this M/G/1-type Markov chain, \(\check{\mathbf {G}}\), is the minimal non-negative solution of the matrix equation

$$\begin{aligned} {\mathcal {S}} + {\mathcal {L}}^{\prime } \check{\mathbf {G}} + {\mathcal {A}} \sum _{i=0}^{\infty } {\mathbf {G}}_i \check{\mathbf {G}}^{i+1}=\mathbf {0}, \end{aligned}$$
(15)

for which efficient numerical analysis is available [1] as well.
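A simple, if slow, way to approximate \(\check{\mathbf {G}}\) is to truncate the series in (15) and iterate; the sketch below assumes the coefficient matrices \({\mathbf {G}}_i\) have already been extracted from \(\overline{\mathbf G}(z)\) up to a suitable order:

```python
import numpy as np

def solve_G_check(S_cal, L_p, A_cal, Gs, tol=1e-12, max_iter=10**6):
    """Fixed-point iteration for (15), truncating the sum at len(Gs):
    G_check <- -L'^{-1} (S + A * sum_i Gs[i] @ G_check^(i+1))."""
    n = S_cal.shape[0]
    Gc = np.zeros((n, n))
    for _ in range(max_iter):
        acc, P = np.zeros((n, n)), Gc.copy()   # P runs through G_check^(i+1)
        for Gi in Gs:
            acc += Gi @ P
            P = P @ Gc
        Gc_next = -np.linalg.solve(L_p, S_cal + A_cal @ acc)
        if np.max(np.abs(Gc_next - Gc)) < tol:
            return Gc_next
        Gc = Gc_next
    raise RuntimeError("iteration did not converge")
```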

3.6 Obtaining \(\pi _{00}\)

Based on the fundamental matrix of the censored process on level 0 we can compute the generator matrix of the censored process on level 0 and block 0:

$$\begin{aligned} \mathbf {Q}_{00} ={\mathcal {L}}^{\prime \prime } + {\mathcal {A}} \sum _{i=0}^\infty {\mathbf {G}}_i \check{\mathbf {G}}^i ={\mathcal {L}}^{\prime } \underbrace{- {\mathcal {S}} {\mathcal {L}}_0^{-1} {\mathcal {A}}} + {\mathcal {A}} \sum _{i=0}^\infty {\mathbf {G}}_i \check{\mathbf {G}}^i , \end{aligned}$$
(16)

and \(\pi _{00}\) is the properly normalized solution of \(\pi _{00} \mathbf {Q}_{00}=0\). We note that the underbraced term comes from censoring out the block with an idle server. To obtain the proper normalization we need to go back to the original non-censored Markov chain in Fig. 1.

Let \(\pi _\mathrm{idle}\) be the stationary distribution of the block of states representing an idle server (the leftmost block in Fig. 1). The stationary utilization of the system is \(\rho =\lambda /\mu \), from which \(\pi _\mathrm{idle}\mathbf {1}=1-\rho \).

We compute \(\pi _\mathrm{idle}\) in two steps. First we consider the non-censored Markov chain in Fig. 1 and analyse its behaviour restricted to the block of states representing an idle server (the leftmost block in Fig. 1) and the block of states representing a busy server and empty queues (the block next to the leftmost one in Fig. 1). According to the state transitions between these two blocks in Fig. 1 and (16), the generator of the process restricted to these two blocks is

$$\begin{aligned} \mathbf {Q}_{nq} = \begin{pmatrix} {\mathcal {L}}_0 &{}\quad {\mathcal {A}} \\ {\mathcal {S}} &{}\quad {\mathcal {L}}^{\prime } + {\mathcal {A}} \sum _{i=0}^\infty {\mathbf {G}}_i \check{\mathbf {G}}^i \end{pmatrix} =\begin{pmatrix} {\mathcal {L}}_0 &{}\quad {\mathcal {A}} \\ {\mathcal {S}} &{}\quad {\mathbf {T}}_0 \end{pmatrix}\!, \end{aligned}$$

where we introduced the notation \({\mathbf {T}}_0 ={\mathcal {L}}^{\prime } + {\mathcal {A}} \sum _{i=0}^\infty {\mathbf {G}}_i \check{\mathbf {G}}^i\). Further censoring this process to the idle block we have

$$\begin{aligned} \mathbf {Q}_\mathrm{idle} = {\mathcal {L}}_0 - {\mathcal {A}} {\mathbf {T}}_0^{-1} {\mathcal {S}}, \end{aligned}$$

and \(\pi _\mathrm{idle}\) is obtained as the solution of the linear system of equations

$$\begin{aligned} \pi _\mathrm{idle} \mathbf {Q}_\mathrm{idle}=0 \quad \text {and} \quad \pi _\mathrm{idle}\mathbf {1}=1-\rho . \end{aligned}$$

Finally, \(\pi _{00}\) is obtained from

$$\begin{aligned} \pi _{00} = - \pi _\mathrm{idle} {\mathcal {A}} {\mathbf {T}}_0^{-1}. \end{aligned}$$
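In code, the last two steps amount to one more linear system (a sketch; \(\mathbf {T}_0\) is assumed to be available, with the infinite sum truncated):

```python
import numpy as np

def pi_idle_and_pi00(L_0, A_cal, S_cal, T0, rho):
    """Solve pi_idle Q_idle = 0 with pi_idle 1 = 1 - rho,
    then pi_00 = -pi_idle A T0^{-1}."""
    Q_idle = L_0 - A_cal @ np.linalg.solve(T0, S_cal)
    n = Q_idle.shape[0]
    M = np.vstack([Q_idle.T, np.ones((1, n))])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0 - rho
    pi_idle, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    pi_00 = -np.linalg.solve(T0.T, pi_idle @ A_cal)
    return pi_idle.reshape(1, -1), pi_00.reshape(1, -1)
```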

3.7 Computing \(\hat{\pi }_{0}(z)\)

The components of the vector \(\mathbf {p}_{0} = (\pi _{0,0}, \pi _{0,1}, \pi _{0,2}, \pi _{0,3},\ldots )\) can be computed from Ramaswami’s recursive formula (see, for example, [8]). Specifically,

$$\begin{aligned} \pi _{0,m}= & {} -\biggl ( \sum _{i=0}^{m-1} \pi _{0,i} {\mathbf T}_{m-i} \biggr ) {\mathbf T}_0^{-1}, \quad m \ge 1, \end{aligned}$$
(17)

where \({\mathbf T}_0\) is defined above and

$$\begin{aligned} {\mathbf T}_ m= & {} {\mathcal {A}} \sum _{k=m}^{\infty } {\mathbf {G}}_k \check{\mathbf {G}}^{k-m}, \quad m \ge 1. \end{aligned}$$
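The recursion translates directly into code (a sketch; the list of \(\mathbf {T}\) matrices is assumed to be precomputed from suitably truncated sums):

```python
import numpy as np

def pi0_blocks(pi00, Ts, num_blocks):
    """Recursion (17): pi_{0,m} = -(sum_{i=0}^{m-1} pi_{0,i} T_{m-i}) T_0^{-1}.
    Ts = [T_0, T_1, ...]; pi00 is a row vector of shape (1, n)."""
    pis = [pi00]
    for m in range(1, num_blocks):
        acc = sum(pis[i] @ Ts[m - i] for i in range(m))
        pis.append(-np.linalg.solve(Ts[0].T, acc.T).T)   # -acc @ T0^{-1}
    return pis
```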

Though Ramaswami’s formula gives a way to compute \(\mathbf {p}_{0}\), we are interested in an expression for its components \(\pi _{0,m}, m \ge 0\), in terms of the generating function

$$\begin{aligned} \hat{\pi }_0(z)= \sum _{m=0}^{\infty } \pi _{0,m} z^m, \quad 0<z<1. \end{aligned}$$

In order to obtain an equation for \(\hat{\pi }_0(z)\) one has to write the equation \(\mathbf {p}_{0} (\mathbb {L}^{\prime } + \mathbb {F} \mathbb {G}) = 0\) in block form:

$$\begin{aligned}&\pi _{0,0} ( {\mathcal {L}}^{\prime \prime } + {\mathcal {A}} {\mathbf {G}}_0) + \pi _{0,1} {\mathcal {S}} = 0, \end{aligned}$$
(18)
$$\begin{aligned}&\pi _{0,0} {\mathcal {A}} {\mathbf {G}}_1 + \pi _{0,1} ( {\mathcal {L}}^{\prime } + {\mathcal {A}} {\mathbf {G}}_0) + \pi _{0,2} {\mathcal {S}} = 0, \end{aligned}$$
(19)
$$\begin{aligned}&\sum _{k=0}^j \pi _{0,k} {\mathcal {A}} {\mathbf {G}}_{j-k} + \pi _{0,j} {\mathcal {L}}^{\prime } + \pi _{0,j+1} {\mathcal {S}} = 0, \quad j \ge 2. \end{aligned}$$
(20)

By multiplying the \(j\)-th equation of (18)–(20) with \(z^j\) and summing up over all values of \(j\) one obtains

$$\begin{aligned} 0= & {} \hat{\pi }_{0}(z) \left( {\mathcal {A}} \overline{\mathbf G}(z) + {\mathcal {L}}^{\prime } + \frac{1}{z}{\mathcal {S}}\right) + \pi _{0,0} \left( {\mathcal {L}}^{\prime \prime } - {\mathcal {L}}^{\prime }- \frac{1}{z}{\mathcal {S}}\right) \end{aligned}$$
(21)

and

$$\begin{aligned} \hat{\pi }_{0}(z)= & {} \pi _{0,0} \left( {\mathcal {L}}^{\prime } - {\mathcal {L}}^{\prime \prime } + \frac{1}{z}{\mathcal {S}}\right) \left( {\mathcal {A}} \overline{\mathbf G}(z) + {\mathcal {L}}^{\prime } + \frac{1}{z}{\mathcal {S}}\right) ^{-1}. \end{aligned}$$
(22)

We note that the function \(\hat{\pi }_{0}(z)\) is undefined at \(z=1\), and \(\hat{\pi }_{0}(1)\), whenever used, should be read as \(\lim _{z\rightarrow 1} \hat{\pi }_{0}(z)\).
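Given \(\pi _{0,0}\) and \(\overline{\mathbf G}(z)\), formula (22) is a single linear solve (a sketch; for the limit at \(z=1\) one evaluates at \(z\) slightly below one):

```python
import numpy as np

def pi0_hat(z, pi00, Gbar_z, A_cal, L_p, L_pp, S_cal):
    """Evaluate (22): pi00 (L' - L'' + S/z) (A Gbar(z) + L' + S/z)^{-1}."""
    lhs = pi00 @ (L_p - L_pp + S_cal / z)
    M = A_cal @ Gbar_z + L_p + S_cal / z
    return np.linalg.solve(M.T, lhs.T).T     # computes lhs @ M^{-1}
```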

3.8 Distribution immediately after customer arrival

As MAP arrivals do not see time averages (that is, the PASTA property does not hold), one has to calculate the stationary probabilities \(\tilde{\pi }_{ij}\) that immediately after a customer arrival there are \(i\) (\(i\ge 1\)) customers in the regular buffer and \(j\) (\(j \ge 0\)) in the re-sequencing buffer. Following the same argument as in [11], we can write

$$\begin{aligned} \tilde{\pi }_{ij}= \frac{1}{\lambda } \pi _{i-1,j} {\mathcal {A}}, \ i \ge 1, \ j \ge 0, \quad \text {and} \quad \tilde{\pi }_{00} = \frac{1}{\lambda } \pi _\mathrm{idle} {\mathcal {A}}. \end{aligned}$$

3.9 Analysis with Kronecker expansion

In the sequel we need to evaluate various infinite summations of matrix expressions with non-commuting matrices. The analysis of those expressions relies on the technique which we refer to as Kronecker expansion [2, 13], based on the identity \(vec(ABC)=(C^T \otimes A) vec(B)\). In this identity \(vec\) denotes the column stacking vector operator, which transforms a matrix of size \(n \times m\) into a vector of size \(nm \times 1\). We are going to utilize the property of the \(vec\) operator that \(vec(A)=A\) for a matrix \(A\) of size \(n \times 1\). We note that the above identity has several seemingly different forms, for example \(vec(AB)=(I^T \otimes A) vec(B)=(B^T \otimes A) vec(I)=(B^T \otimes I) vec(A)\).
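The identity is easy to verify numerically; in NumPy the column stacking operator corresponds to flattening in Fortran (column-major) order. A small self-check, not part of the paper's derivations:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.random((2, 3)), rng.random((3, 4)), rng.random((4, 2))

vec = lambda M: M.reshape(-1, 1, order="F")     # column stacking operator

# vec(ABC) = (C^T kron A) vec(B)
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))
```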

Depending on the complexity of the obtained matrix expressions, we need to apply the Kronecker expansion multiple times, which generates larger and larger matrix expressions. This is the price we pay for generalizing the memoryless models to a Markov-modulated environment.

4 Stationary waiting time distribution

We analyse the stationary waiting time \(W\), measured from the instant when a regular customer arrives at the system up to the instant when it enters service. Its distribution will be evaluated in terms of the Laplace transform \(\omega (s)=E(e^{-sW})\). A regular customer may enter service from the regular buffer or from the re-sequencing buffer, and thus the stationary waiting time distribution can be computed as

$$\begin{aligned} \omega (s)=E(e^{-sW})= & {} \omega _\mathrm{H}(s)+ \omega _\mathrm{L}(s) \\= & {} E(e^{-sW} I_{\{\text {served from regular buffer}\}}) \\&+\, E\left( e^{-sW} I_{\{\text {served from re-sequencing buffer}\}}\right) \!, \end{aligned}$$

where \(I_{\{a\}}\) is the indicator of event \(a\).

4.1 Stationary waiting time distribution of a customer that receives service from the regular buffer

When a customer is served from the regular buffer, the corresponding term of the waiting time transform is

$$\begin{aligned} \omega _\mathrm{H}(s)&=E(e^{-sW} I_{\{\text {served from regular buffer}\}}) \\&=\sum _{i=1}^\infty \sum _{j=0}^\infty \tilde{\pi }_{ij} \left( (s I -{\mathcal {L}}_A)^{-1} ({\mathcal {S}}+{\mathcal {H}}) \right) ^{i-1} (s I -{\mathcal {L}}_A)^{-1} {\mathcal {S}} \mathbf {1} \\&=\frac{1}{\lambda } \sum _{i=1}^\infty \underbrace{\sum _{j=0}^\infty \pi _{i-1,j}}_{\hat{\pi }_{i-1}(1)} {\mathcal {A}} (\underbrace{(s I -{\mathcal {L}}_A)^{-1} ({\mathcal {S}}+{\mathcal {H}})}_{\mathbf {U}(s)} )^{i-1} \underbrace{(s I -{\mathcal {L}}_A)^{-1} {\mathcal {S}} \mathbf {1}}_{v(s)} \\&=\frac{1}{\lambda } \sum _{i=0}^\infty \hat{\pi }_{i}(1) {\mathcal {A}}\mathbf {U}(s)^{i} v(s) =\frac{1}{\lambda } ( v(s)^T\otimes 1 ) \underbrace{\sum _{i=0}^\infty \left( {\mathbf {U}(s)^{i}}^T \otimes \hat{\pi }_{i}(1) \right) }_{\mathbf {K}(s)} vec({\mathcal {A}}), \end{aligned}$$

where \({\mathcal {L}}_A=\mathbf {A_J}\oplus \mathbf {S_0}\oplus \mathbf {H_0}\), \(vec\) is the column stacking vector operator, and we used the identity \(vec(ABC)=(C^T \otimes A) vec(B)\). For \(\mathbf {K}(s)\) we have

$$\begin{aligned} \mathbf {K}(s)&=\sum _{i=0}^\infty \left( {\mathbf {U}(s)^{i}}^T \otimes \hat{\pi }_{i}(1) \right) = \left( I \otimes \hat{\pi }_{0}(1) \right) + \sum _{i=1}^\infty \left( {\mathbf {U}(s)^{i}}^T \otimes \hat{\pi }_{i}(1) \right) \\&= \left( I \otimes \hat{\pi }_{0}(1) \right) + \sum _{i=1}^\infty \left( {\mathbf {U}(s)^{i-1}}^T {\mathbf {U}(s)}^T \otimes \hat{\pi }_{i-1}(1) {\overline{\mathbf R}(1)} \right) \\&= \left( I \otimes \hat{\pi }_{0}(1) \right) + \mathbf {K}(s) \left( {\mathbf {U}(s)}^T \otimes {\overline{\mathbf R}(1)} \right) \end{aligned}$$

and

$$\begin{aligned} \mathbf {K}(s)&=\left( I \otimes \hat{\pi }_{0}(1) \right) \left( I - {\mathbf {U}(s)}^T \otimes {\overline{\mathbf R}(1)} \right) ^{-1}. \end{aligned}$$
(23)

In this final expression for \(\mathbf {K}(s)\) the values of \(\overline{\mathbf R}(1)\) and \(\hat{\pi }_{0}(1)\) are obtained from (9) and (22), respectively. Note that without the Kronecker expansion the expression for \(\omega _\mathrm{H}(s)\) could not be obtained in closed form because of the non-commutativity of the matrices \({\mathcal {A}}\) and \({\mathbf {U}(s)}\). In the scalar case these matrices reduce to scalars and \(\omega _\mathrm{H}(s)\) is computed at once in terms of the generating function. In the Markov environment considered one has to search for a representation of the form \(\mathbf {K}(s)=K_1(s) + \mathbf {K}(s) K_2(s)\). Here \(K_1(s)\) is the 0-th element of the infinite sum. The second term \(\mathbf {K}(s) K_2(s)\) is, on the one hand, the sum from 1 to \(\infty \) and, on the other hand, the product of the original infinite sum from 0 to \(\infty \) with some matrix \(K_2(s)\).
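To make the construction concrete, the following sketch evaluates \(\omega _\mathrm{H}(s)\) from the expressions above and (23); it assumes \(\hat{\pi }_{0}(1)\) (as a row vector, in the limiting sense of Sect. 3.7) and \(\overline{\mathbf R}(1)\) have already been computed, and all names are ours:

```python
import numpy as np

def omega_H(s, A_cal, S_cal, H_cal, L_A, pi0_hat_1, Rbar_1, lam):
    """omega_H(s) via the Kronecker expansion of Sect. 4.1 and (23);
    pi0_hat_1 must have shape (1, n)."""
    n = L_A.shape[0]
    M = s * np.eye(n) - L_A
    U = np.linalg.solve(M, S_cal + H_cal)               # U(s)
    v = np.linalg.solve(M, S_cal @ np.ones((n, 1)))     # v(s)
    K = np.kron(np.eye(n), pi0_hat_1) @ np.linalg.inv(
        np.eye(n * n) - np.kron(U.T, Rbar_1))           # eq. (23)
    vecA = A_cal.reshape(-1, 1, order="F")              # vec(A), column stacking
    return float(v.T @ K @ vecA) / lam
```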

4.2 Stationary waiting time distribution of a customer that receives service from the re-sequencing buffer

For \(j\le i\), let \({\mathbb {F}}(t,i,j,k)\) be the matrix (indexed by the initial and final phases of the MAPs \((\mathbf {A_0},\mathbf {A_1}), (\mathbf {S_0},\mathbf {S_1})\) and \((\mathbf {H_0},\mathbf {H_1})\)) of the probabilities that \(k\) customers arrive, \(i-j\) customers are served and \(j\) are moved to the re-sequencing buffer in time \(t\), when the initial number of customers in the regular buffer is larger than \(i\). For the Laplace transform \(\tilde{{\mathbb {F}}}(s,i,j,k)=\int _0^\infty e^{-st} {\mathbb {F}}(t,i,j,k) \,\mathrm{d}t\) we have

$$\begin{aligned} \tilde{{\mathbb {F}}}(s,0,0,0) = (s I -{\mathcal {L}})^{-1} = {\mathcal {L}}(s), \end{aligned}$$
(24)

and otherwise

$$\begin{aligned} \tilde{{\mathbb {F}}}(s,i,j,k)&= I_{\{i>j\}} {\mathcal {L}}(s) {\mathcal {S}} \tilde{{\mathbb {F}}}(s,i-1,j,k) \\&\quad + I_{\{j>0\}} {\mathcal {L}}(s) {\mathcal {H}} \tilde{{\mathbb {F}}}(s,i-1,j-1,k) \nonumber \\&\quad + I_{\{k>0\}} {\mathcal {L}}(s) {\mathcal {A}} \tilde{{\mathbb {F}}}(s,i,j,k-1), \nonumber \end{aligned}$$
(25)

where \({\mathcal {L}}(s)\) is defined in (24). The case that the tagged customer moves to the re-sequencing buffer is described by \(\tilde{{\mathbb {F}}}(s,i,j,k) {\mathcal {H}}\).
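The recursion (24)–(25) can be evaluated directly with memoization; the sketch below mirrors the definitions one-to-one and is meant for small indices only:

```python
import numpy as np
from functools import lru_cache

def make_F_tilde(s, L_cal, A_cal, S_cal, H_cal):
    """Return a memoized evaluator of the transforms F~(s, i, j, k)."""
    n = L_cal.shape[0]
    Ls = np.linalg.inv(s * np.eye(n) - L_cal)    # L(s) from (24)

    @lru_cache(maxsize=None)
    def F(i, j, k):
        if (i, j, k) == (0, 0, 0):
            return Ls
        out = np.zeros((n, n))
        if i > j:                                # a high-priority service
            out = out + Ls @ S_cal @ F(i - 1, j, k)
        if j > 0:                                # a re-sequencing event
            out = out + Ls @ H_cal @ F(i - 1, j - 1, k)
        if k > 0:                                # an arrival
            out = out + Ls @ A_cal @ F(i, j, k - 1)
        return out

    return F
```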

After moving to the re-sequencing buffer, the customer has to wait for the service of the high-priority customers (in the regular buffer) and of the low-priority customers ahead of it in the re-sequencing buffer. The Markov chain representing the waiting time in the re-sequencing buffer is depicted in Fig. 3.

Fig. 3 Block structure of the Markov chain representing the waiting time in the re-sequencing buffer

The matrix Laplace transform of the time to reduce the number of high-priority customers by one, \(\widetilde{\mathbf {G}}(s)\), is the solution of

$$\begin{aligned} s \widetilde{\mathbf {G}}(s)= ({\mathcal {S}} + {\mathcal {H}}) + {\mathcal {L}} \widetilde{\mathbf {G}}(s) + {\mathcal {A}} \widetilde{\mathbf {G}}(s)^2, \end{aligned}$$

and the matrix Laplace transform of the time to reduce the number of low-priority customers (in the re-sequencing buffer) by one, \(\widehat{\mathbf {G}}(s)\), is the solution of

$$\begin{aligned} s \widehat{\mathbf {G}}(s)= {\mathcal {S}} + {\mathcal {L}}^\prime \widehat{\mathbf {G}}(s) + {\mathcal {A}} \widetilde{\mathbf {G}}(s) \widehat{\mathbf {G}}(s). \end{aligned}$$

Hence, if at the arrival of the tagged customer in the re-sequencing buffer the number of customers in the regular buffer is \(j \ge 0\) and in the re-sequencing buffer is \(k \ge 0\), then the matrix Laplace transform of the subsequent waiting time is \(\widetilde{\mathbf {G}}(s)^j \widehat{\mathbf {G}}(s)^{k+1}\).
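Both transforms are computable by standard means: \(\widetilde{\mathbf {G}}(s)\) by the natural iteration, and then \(\widehat{\mathbf {G}}(s)\) in closed form, since its defining equation is linear once \(\widetilde{\mathbf {G}}(s)\) is known. A sketch:

```python
import numpy as np

def G_tilde(s, A_cal, L_cal, S_cal, H_cal, tol=1e-12, max_iter=10**6):
    """Natural iteration for s G~ = (S + H) + L G~ + A G~^2."""
    n = L_cal.shape[0]
    M = s * np.eye(n) - L_cal
    G = np.zeros((n, n))
    for _ in range(max_iter):
        G_next = np.linalg.solve(M, (S_cal + H_cal) + A_cal @ G @ G)
        if np.max(np.abs(G_next - G)) < tol:
            return G_next
        G = G_next
    raise RuntimeError("iteration did not converge")

def G_hat(s, A_cal, L_p, S_cal, Gt):
    """s G^ = S + L' G^ + A G~ G^ rearranges to (sI - L' - A G~) G^ = S."""
    n = L_p.shape[0]
    return np.linalg.solve(s * np.eye(n) - L_p - A_cal @ Gt, S_cal)
```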

Based on the previously computed matrix Laplace transforms, the waiting time of the customer which enters service from the re-sequencing buffer can be computed as

$$\begin{aligned} \omega _\mathrm{L}(s)&=E\left( e^{-sW} I_{\{\text {served from re-sequencing buffer}\}}\right) \nonumber \\&=\sum _{i=1}^\infty \sum _{j=0}^\infty \tilde{\pi }_{ij} \sum _{\ell =0}^{i-1} \sum _{k=0}^\infty \tilde{{\mathbb {F}}}(s,i-1,\ell ,k){\mathcal {H}} \widetilde{\mathbf {G}}(s)^k \widehat{\mathbf {G}}(s)^{j+\ell +1} \mathbf {1} \nonumber \\&=\frac{1}{\lambda } \sum _{i=0}^\infty \sum _{j=0}^\infty \pi _{i,j} {\mathcal {A}} \sum _{\ell =0}^{i} \sum _{k=0}^\infty \tilde{{\mathbb {F}}}(s,i,\ell ,k){\mathcal {H}} \widetilde{\mathbf {G}}(s)^k \widehat{\mathbf {G}}(s)^{j+\ell +1} \mathbf {1}. \end{aligned}$$
(26)

The main part of the analysis of \(\omega _\mathrm{L}(s)\) is deferred to the next section. However, in the course of the subsequent derivations we will make use of several quantities which are more conveniently introduced by first considering the terms of \(\omega _\mathrm{L}(s)\) with \(i=0\). Thus we represent \(\omega _\mathrm{L}(s)\) as

$$\begin{aligned} \omega _\mathrm{L}(s) =&\omega _\mathrm{L}^{i>0}(s) + \omega _\mathrm{L}^{i=0}(s) \nonumber \\ =&\frac{1}{\lambda } \sum _{i=1}^\infty \sum _{j=0}^\infty \pi _{i,j} {\mathcal {A}} \sum _{\ell =0}^{i} \sum _{k=0}^\infty \tilde{{\mathbb {F}}}(s,i,\ell ,k){\mathcal {H}} \widetilde{\mathbf {G}}(s)^k \widehat{\mathbf {G}}(s)^{j+\ell +1} \mathbf {1} \nonumber \\&+ \frac{1}{\lambda } \sum _{j=0}^\infty \pi _{0,j} {\mathcal {A}} \sum _{k=0}^\infty \underbrace{\tilde{{\mathbb {F}}}(s,0,0,k)}_{({\mathcal {L}}(s) {\mathcal {A}})^k {\mathcal {L}}(s) } {\mathcal {H}} \widetilde{\mathbf {G}}(s)^k \widehat{\mathbf {G}}(s)^{j+1} \mathbf {1} \end{aligned}$$
(27)

and in the next subsection derive an expression for \(\omega _\mathrm{L}^{i=0}(s)\).

4.3 Computation of \(\omega _\mathrm{L}^{i=0}(s)\)

Applying the identity \(vec(ABC)=(C^T \otimes A) vec(B)\) to \(\omega _\mathrm{L}^{i=0}(s)\), one obtains

$$\begin{aligned} \omega _\mathrm{L}^{i=0}(s)&= \frac{1}{\lambda } \sum _{j=0}^\infty \sum _{k=0}^\infty \pi _{0,j} {\mathcal {A}} ({\mathcal {L}}(s) {\mathcal {A}})^k {\mathcal {L}}(s) {\mathcal {H}} \widetilde{\mathbf {G}}(s)^k \widehat{\mathbf {G}}(s)^{j+1} \mathbf {1} \\&= \frac{1}{\lambda } \sum _{j=0}^\infty \sum _{k=0}^\infty \left( \mathbf {1}^T {\widehat{\mathbf {G}}(s)^{j+1}}^T {\widetilde{\mathbf {G}}(s)^k}^T \otimes \pi _{0,j} {\mathcal {A}} ({\mathcal {L}}(s) {\mathcal {A}})^k \right) vec({\mathcal {L}}(s) {\mathcal {H}}) \\&= \frac{1}{\lambda } \left( \mathbf {1}^T \widehat{\mathbf {G}}(s)^T \otimes 1 \right) \\&\quad \times \underbrace{\sum _{j=0}^\infty \left( {\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{0,j} \right) }_{{\varvec{\Theta }}{\mathbf {(s)}}} \left( I \otimes {\mathcal {A}} \right) \underbrace{\sum _{k=0}^\infty \left( {\widetilde{\mathbf {G}}(s)^k}^T \otimes ({\mathcal {L}}(s) {\mathcal {A}})^k \right) }_ {(I-\widetilde{\mathbf {G}}(s)^T \otimes {\mathcal {L}}(s) {\mathcal {A}})^{-1}} vec({\mathcal {L}}(s) {\mathcal {H}}) \\&= \frac{1}{\lambda } \left( \mathbf {1}^T \widehat{\mathbf {G}}(s)^T \otimes 1 \right) {\varvec{\Theta }}{\mathbf {(s)}} \left( I \otimes {\mathcal {A}} \right) \left( I-\widetilde{\mathbf {G}}(s)^T \otimes {\mathcal {L}}(s) {\mathcal {A}}\right) ^{-1} vec({\mathcal {L}}(s) {\mathcal {H}}). \end{aligned}$$

The only unknown in this expression is \({\varvec{\Theta }}{\mathbf {(s)}}\). In order to compute it we revisit (18)–(20). Kronecker multiply the \(j\)-th equation with \({\widehat{\mathbf {G}}(s)^{j+1}}^T \) from the left and sum up over all values of \(j\). This gives

$$\begin{aligned} 0= & {} \left( \widehat{\mathbf {G}}(s)^T \otimes \pi _{0,0} ({\mathcal {L}}^{\prime \prime } - {\mathcal {L}}^{\prime }) \right) + \underbrace{\sum _{j=0}^\infty \left( {\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{0,j} \right) }_{{\varvec{\Theta }}{\mathbf {(s)}}} \left( \widehat{\mathbf {G}}(s)^T \otimes {\mathcal {L}}^{\prime } \right) \\&\quad + \underbrace{\sum _{j=0}^\infty \left( {\widehat{\mathbf {G}}(s)^{j+1}}^T \otimes \pi _{0,j+1} \right) }_{{\varvec{\Theta }}{\mathbf {(s)}}-\left( I \otimes \pi _{0,0} \right) } \left( I \otimes {\mathcal {S}} \right) + \sum _{j=0}^\infty \left( {\widehat{\mathbf {G}}(s)^{j+1}}^T \otimes \sum _{k=0}^j \pi _{0,k} {\mathcal {A}} {\mathbf {G}}_{j-k} \right) . \end{aligned}$$

The last term can be rewritten as

$$\begin{aligned}&\sum _{j=0}^\infty \left( {\widehat{\mathbf {G}}(s)^{j+1}}^T \otimes \sum _{k=0}^j \pi _{0,k} {\mathcal {A}} {\mathbf {G}}_{j-k} \right) \\&=\underbrace{\sum _{k=0}^\infty \left( {\widehat{\mathbf {G}}(s)^{k}}^T \otimes \pi _{0,k} \right) }_{{\varvec{\Theta }}{\mathbf {(s)}}} \left( {\widehat{\mathbf {G}}(s)}^T \otimes {\mathcal {A}} \right) \underbrace{\sum _{j=k}^\infty \left( {\widehat{\mathbf {G}}(s)^{j-k}}^T \otimes {\mathbf {G}}_{j-k} \right) }_ {{\varvec{\Phi }}{\mathbf {(s)}}=\sum _{j=0}^\infty \left( {\widehat{\mathbf {G}}(s)^{j}}^T \otimes {\mathbf {G}}_{j} \right) }. \end{aligned}$$

Putting all this together, one obtains the equation for \({\varvec{\Theta }}{\mathbf {(s)}}\) in the form

$$\begin{aligned} \nonumber 0&= \left( \widehat{\mathbf {G}}(s)^T \otimes \pi _{0,0} ({\mathcal {L}}^{\prime \prime } - {\mathcal {L}}^{\prime }) \right) - \left( I \otimes \pi _{0,0} {\mathcal {S}} \right) \\&\quad + {\varvec{\Theta }}{\mathbf {(s)}} \left[ \left( \widehat{\mathbf {G}}(s)^T \otimes {\mathcal {L}}^{\prime } \right) + \left( I \otimes {\mathcal {S}} \right) + \left( {\widehat{\mathbf {G}}(s)}^T \otimes {\mathcal {A}} \right) {\varvec{\Phi }}{\mathbf {(s)}} \right] , \end{aligned}$$
(28)

where the only unknown is \({\varvec{\Phi }}{\mathbf {(s)}}\). The computation of \({\varvec{\Phi }}{\mathbf {(s)}}\) follows the same pattern as that of \({\varvec{\Theta }}{\mathbf {(s)}}\). Revisit (11)–(13) and Kronecker multiply the \(j\)-th equation with \({\widehat{\mathbf {G}}(s)^{j}}^T \) from the left and sum up over all values of \(j\). This leads to the equation

$$\begin{aligned} \mathbf {0}&= \left( I \otimes {\mathcal {S}} \right) + \left( \widehat{\mathbf {G}}(s)^T \otimes {\mathcal {H}} \right) + \left( I \otimes {\mathcal {L}} \right) \sum _{j=0}^\infty \left( {\widehat{\mathbf {G}}(s)^{j}}^T \otimes {\mathbf {G}}_{j} \right) \\&\quad + \left( I \otimes {\mathcal {A}} \right) \sum _{j=0}^\infty \left( {\widehat{\mathbf {G}}(s)^{j}}^T \otimes \sum _{i=0}^j {\mathbf {G}}_{i} {\mathbf {G}}_{j-i} \right) \\&= \left( I \otimes {\mathcal {S}} \right) + \left( \widehat{\mathbf {G}}(s)^T \otimes {\mathcal {H}} \right) + \left( I \otimes {\mathcal {L}} \right) {\varvec{\Phi }}{\mathbf {(s)}} + \left( I \otimes {\mathcal {A}} \right) {\varvec{\Phi }}{\mathbf {(s)}}^2, \end{aligned}$$

which is a quadratic matrix equation for \({\varvec{\Phi }}{\mathbf {(s)}}\).
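Collecting the above: \({\varvec{\Phi }}{\mathbf {(s)}}\) is obtained from its quadratic equation by the same natural iteration as before, after which \({\varvec{\Theta }}{\mathbf {(s)}}\) follows from (28) by a single linear solve. A sketch (\(\pi _{0,0}\) as a row vector of shape (1, n)):

```python
import numpy as np

def theta_and_phi(s, Ghat_s, A_cal, S_cal, H_cal, L_cal, L_p, L_pp,
                  pi00, tol=1e-12, max_iter=10**6):
    """Compute Theta(s) and Phi(s) of Sect. 4.3; a sketch, names are ours."""
    n = L_cal.shape[0]
    I, GT = np.eye(n), Ghat_s.T
    C0 = np.kron(I, S_cal) + np.kron(GT, H_cal)
    KL, KA = np.kron(I, L_cal), np.kron(I, A_cal)
    Phi = np.zeros((n * n, n * n))
    for _ in range(max_iter):                  # quadratic equation for Phi
        Phi_next = -np.linalg.solve(KL, C0 + KA @ Phi @ Phi)
        done = np.max(np.abs(Phi_next - Phi)) < tol
        Phi = Phi_next
        if done:
            break
    bracket = np.kron(GT, L_p) + np.kron(I, S_cal) + np.kron(GT, A_cal) @ Phi
    rhs = np.kron(I, pi00 @ S_cal) - np.kron(GT, pi00 @ (L_pp - L_p))
    Theta = np.linalg.solve(bracket.T, rhs.T).T      # Theta @ bracket = rhs
    return Theta, Phi
```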

5 Computation of \(\omega _\mathrm{L}(s)\)

In the following we split the expression (26) for \(\omega _\mathrm{L}(s)\) into two terms:

$$\begin{aligned}&\omega _\mathrm{L}(s)=\omega _\mathrm{L}^{k=0}(s) + \omega _\mathrm{L}^{k>0}(s), \end{aligned}$$

where \(\omega _\mathrm{L}^{k=0}(s)\) includes only the terms of (26) with \(k=0\) and \(\omega _\mathrm{L}^{k>0}(s)\) all other terms; we obtain expressions for each of them individually.

5.1 Analysis of \(\omega _\mathrm{L}^{k=0}(s)\)

In order to compute \(\omega _\mathrm{L}^{k=0}(s)\) we perform the Kronecker expansion, applying the relation \(vec(ABC)=(C^T \otimes A) vec(B)\) twice. We have

$$\begin{aligned} \omega _\mathrm{L}^{k=0}(s)&= \frac{1}{\lambda } \sum _{i=0}^\infty \sum _{j=0}^\infty \pi _{i,j} {\mathcal {A}} \underbrace{\sum _{\ell =0}^{i} \tilde{{\mathbb {F}}}(s,i,\ell ,0){\mathcal {H}} \widehat{\mathbf {G}}(s)^{\ell }}_{\widehat{{\mathcal {F}}}_{k=0}(s,i)} \widehat{\mathbf {G}}(s)^{j+1}\mathbf {1} \\&= \frac{1}{\lambda } \sum _{i=0}^\infty \sum _{j=0}^\infty \pi _{i,j} {\mathcal {A}} \widehat{{\mathcal {F}}}_{k=0}(s,i) \widehat{\mathbf {G}}(s)^{j+1} \mathbf {1} \\&=\frac{1}{\lambda } \sum _{i=0}^\infty \sum _{j=0}^\infty \bigg (\mathbf {1}^T {\widehat{\mathbf {G}}(s)^{j+1}}^T \otimes \pi _{i,j} {\mathcal {A}} \bigg ) vec(\widehat{{\mathcal {F}}}_{k=0}(s,i)) \\&=\frac{1}{\lambda } \bigg (\mathbf {1}^T {\widehat{\mathbf {G}}(s)}^T \otimes 1 \bigg ) \sum _{i=0}^\infty \sum _{j=0}^\infty \bigg ( {\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{i,j} \bigg ) \bigg ( I \otimes {\mathcal {A}} \bigg ) vec(\widehat{{\mathcal {F}}}_{k=0}(s,i)) \\&=\frac{1}{\lambda } \bigg (\mathbf {1}^T {\widehat{\mathbf {G}}(s)}^T \otimes 1 \bigg ) \underbrace{\sum _{i=0}^\infty \sum _{j=0}^\infty \bigg [ vec(\widehat{{\mathcal {F}}}_{k=0}(s,i))^T \!\otimes \bigg ( {\widehat{\mathbf {G}}(s)^{j}}^T \!\otimes \pi _{i,j} \bigg )\bigg ]}_{\mathbf {M}(s)} vec\bigg ( I \otimes {\mathcal {A}} \bigg ). \end{aligned}$$

Now we focus on the analysis of \(\mathbf {M}(s)\). Just as in the case of \(\mathbf {K}(s)\) in Sect. 4.1, we will show that the unknown matrix \(\mathbf {M}(s)\) can be expressed in the form \(\mathbf {M}(s)=M_1(s)+\mathbf {M}(s) M_2(s)\). Extracting the term with \(i=0\) from the sum one can write

$$\begin{aligned} \mathbf {M}(s)&= \bigg [ vec(\underbrace{\widehat{{\mathcal {F}}}_{k=0}(s,0)}_{{\mathcal {L}}(s) {\mathcal {H}}})^T \otimes \underbrace{\sum _{j=0}^\infty \bigg ( {\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{0,j} \bigg )}_{{\varvec{\Theta }}(s)} \bigg ] \nonumber \\&\quad + \sum _{i=1}^\infty \bigg [ {vec(\widehat{{\mathcal {F}}}_{k=0}(s,i))^T} \otimes {\sum _{j=0}^\infty \bigg ( {\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{i,j} \bigg )}\bigg ]. \end{aligned}$$
(29)

Here the only two unknown quantities are inside the sum over \(i\). Firstly we obtain an expression for \(vec(\widehat{{\mathcal {F}}}_{k=0}(s,i))^T\). Revisiting the definition of \(\widehat{{\mathcal {F}}}_{k=0}(s,i)\) and applying (25) when \(i>0\), we obtain

$$\begin{aligned} \widehat{{\mathcal {F}}}_{k=0}(s,i)= & {} \sum _{\ell =0}^{i} \tilde{{\mathbb {F}}}(s,i,\ell ,0){\mathcal {H}} \widehat{\mathbf {G}}(s)^{\ell } \\= & {} \sum _{\ell =1}^{i-1} \tilde{{\mathbb {F}}}(s,i,\ell ,0){\mathcal {H}} \widehat{\mathbf {G}}(s)^{\ell } + \tilde{{\mathbb {F}}}(s,i,0,0){\mathcal {H}} +\tilde{{\mathbb {F}}}(s,i,i,0){\mathcal {H}} \widehat{\mathbf {G}}(s)^{i} \\= & {} \sum _{\ell =1}^{i-1} {\mathcal {L}}(s) {\mathcal {S}} \tilde{{\mathbb {F}}}(s,i\!-\!1,\ell ,0){\mathcal {H}} \widehat{\mathbf {G}}(s)^{\ell } \\&\quad + \sum _{\ell =1}^{i-1} {\mathcal {L}}(s) {\mathcal {H}} \tilde{{\mathbb {F}}}(s,i\!-\!1,\ell -1,0){\mathcal {H}} \widehat{\mathbf {G}}(s)^{\ell } \\&\quad +\, {\mathcal {L}}(s) {\mathcal {S}} \tilde{{\mathbb {F}}}(s,i\!-\!1,0,0){\mathcal {H}} + {\mathcal {L}}(s) {\mathcal {H}} \tilde{{\mathbb {F}}}(s,i\!-\!1,i\!-\!1,0){\mathcal {H}} \widehat{\mathbf {G}}(s)^{i} \\= & {} {\mathcal {L}}(s) {\mathcal {S}} \sum _{\ell =0}^{i-1} \tilde{{\mathbb {F}}}(s,i\!-\!1,\ell ,0){\mathcal {H}} \widehat{\mathbf {G}}(s)^{\ell } \\&\quad +\, {\mathcal {L}}(s) {\mathcal {H}} \sum _{\ell =0}^{i-1} \tilde{{\mathbb {F}}}(s,i\!-\!1,\ell ,0){\mathcal {H}} \widehat{\mathbf {G}}(s)^{\ell } \widehat{\mathbf {G}}(s), \end{aligned}$$

or, equivalently, in terms of \(\widehat{{\mathcal {F}}}_{k=0}(s,i)\):

$$\begin{aligned}&\widehat{{\mathcal {F}}}_{k=0}(s,i)= {\mathcal {L}}(s) {\mathcal {S}} \widehat{{\mathcal {F}}}_{k=0}(s,i-1) + {\mathcal {L}}(s) {\mathcal {H}} \widehat{{\mathcal {F}}}_{k=0}(s,i-1) \widehat{\mathbf {G}}(s), \quad i \ge 1. \end{aligned}$$
(30)

By applying the \(vec\) operator to (30), one finds the following expression for \(vec(\widehat{{\mathcal {F}}}_{k=0}(s,i))^T, i \ge 1\):

$$\begin{aligned} vec(\widehat{{\mathcal {F}}}_{k=0}(s,i))^T \!=\! vec(\widehat{{\mathcal {F}}}_{k=0}(s,i-1))^T \bigg [\bigg (I \otimes {\mathcal {L}}(s) {\mathcal {S}} \bigg ) \!+\! \bigg ( \widehat{\mathbf {G}}(s)^T \otimes {\mathcal {L}}(s) {\mathcal {H}} \bigg ) \bigg ]^T \!. \end{aligned}$$
(31)

Now we obtain an expression for the second unknown quantity in (29), the right factor of the Kronecker product inside the sum over \(i\). Using (5) it can be rewritten in the form

$$\begin{aligned} \sum _{j=0}^\infty \bigg ( {\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{i,j} \bigg )&=\sum _{j=0}^\infty \sum _{m=0}^j \bigg ( {\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{i-1,m} \mathbf {R}_{j-m} \bigg ) \\&=\!\sum _{j=0}^\infty \bigg ( {\widehat{\mathbf {G}}(s)^{j}}^T \!\otimes \pi _{i-1,j} \bigg ) \underbrace{\sum _{n=0}^\infty \bigg ( {\widehat{\mathbf {G}}(s)^{n}}^T \!\otimes \mathbf {R}_{n} \bigg )}_{{\varvec{\Psi }}{\mathbf {(s)}}} \nonumber \\&=\!\sum _{j=0}^\infty \bigg ( {\widehat{\mathbf {G}}(s)^{j}}^T \!\otimes \pi _{i-1,j} \bigg ) {\varvec{\Psi }}{\mathbf {(s)}}, \nonumber \end{aligned}$$
(32)

where we redefined the running indices in the second step. Substitution of (31) and (32) into (29) yields the sought-for expression for \(\mathbf {M}(s)\):

$$\begin{aligned} \mathbf {M}(s)&= \bigg [ vec({\mathcal {L}}(s) {\mathcal {H}})^T \otimes {\varvec{\Theta }}(s)\bigg ] \\&\quad + \sum _{i=1}^\infty \bigg [ vec(\widehat{{\mathcal {F}}}_{k=0}(s,i-1))^T \otimes \sum _{j=0}^\infty \bigg ( {\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{i-1,j} \bigg )\bigg ] \\&\qquad \times \bigg \{ \bigg [\bigg (I \otimes {\mathcal {L}}(s) {\mathcal {S}} \bigg ) + \bigg ( \widehat{\mathbf {G}}(s)^T \otimes {\mathcal {L}}(s) {\mathcal {H}} \bigg ) \bigg ]^T \otimes {{\varvec{\Psi }}}(s) \bigg \} \\&= \bigg [ vec({\mathcal {L}}(s) {\mathcal {H}})^T \otimes {\varvec{\Theta }}(s)\bigg ] \\&\quad + \mathbf {M}(s) \bigg \{ \bigg [\bigg (I \otimes {\mathcal {L}}(s) {\mathcal {S}} \bigg ) + \bigg ( \widehat{\mathbf {G}}(s)^T \otimes {\mathcal {L}}(s) {\mathcal {H}} \bigg ) \bigg ]^T \otimes {\varvec{\Psi }}(s) \bigg \}. \end{aligned}$$

This relation is again of the form \(\mathbf {M}(s)=M_1(s)+\mathbf {M}(s)M_2(s)\) and thus resolves to \(\mathbf {M}(s)=M_1(s)\left( I-M_2(s)\right) ^{-1}\). The expression for \({\varvec{\Psi }}{\mathbf {(s)}}\) can be obtained from (6)–(8) in exactly the same way as is done for \({\varvec{\Phi }}{\mathbf {(s)}}\), and is thus omitted.

5.2 Analysis of \(\omega _\mathrm{L}^{k>0}(s)\)

In this subsection we tackle the most complex case with the highest number of infinite summations and non-commuting matrices. For \(\omega _\mathrm{L}^{k>0}(s)\) one has to apply Kronecker expansion multiple times. Firstly we recall that the definition of \(\omega _\mathrm{L}^{k>0}(s)\) is

$$\begin{aligned}&\omega _\mathrm{L}^{k>0}(s)= \frac{1}{\lambda } \sum _{i=0}^\infty \sum _{j=0}^\infty \pi _{i,j} {\mathcal {A}} \underbrace{\sum _{\ell =0}^{i} \sum _{k=1}^\infty \tilde{{\mathbb {F}}}(s,i,\ell ,k){\mathcal {H}} \widetilde{\mathbf {G}}(s)^k \widehat{\mathbf {G}}(s)^{\ell }}_{{\mathcal {F}}(s,i)} \widehat{\mathbf {G}}(s)^{j+1} \mathbf {1}~ \end{aligned}$$

and now consider the term \({\mathcal {F}}(s,i)\). Applying the \(vec\) operator to \({\mathcal {F}}(s,i)\) according to the following Kronecker expansion

$$\begin{aligned} vec(ABCD)&=(D^T\otimes A) vec(BC) = (vec(BC)^T\otimes (D^T\otimes A)) vec(I) \\&= (vec(I)^T \otimes I \otimes I) (C \otimes B^T \otimes D^T \otimes A) vec(I), \end{aligned}$$

one gets

$$\begin{aligned}&vec({\mathcal {F}}(s,i)) \\&= (vec(I)^T \otimes I \otimes I) \underbrace{\sum _{\ell =0}^{i} \sum _{k=1}^\infty \bigg ( \widetilde{\mathbf {G}}(s)^k \otimes {\mathcal {H}}^T \otimes {\widehat{\mathbf {G}}(s)^{\ell }}^T \otimes \tilde{{\mathbb {F}}}(s,i,\ell ,k) \bigg )}_{{\mathcal {F}}^{\otimes }(s,i)} vec(I) \\&= (vec(I)^T \otimes I \otimes I) {\mathcal {F}}^{\otimes }(s,i) vec(I). \end{aligned}$$

Considering the expression for \({\mathcal {F}}(s,i)\) and using (25), when \(i>0\) and \(k>0\), we get

$$\begin{aligned} {\mathcal {F}}(s,i)&= \sum _{\ell =0}^{i} \sum _{k=1}^\infty \tilde{{\mathbb {F}}}(s,i,\ell ,k){\mathcal {H}} \widetilde{\mathbf {G}}(s)^k \widehat{\mathbf {G}}(s)^{\ell } \nonumber \\&=\sum _{\ell =0}^{i-1} \sum _{k=1}^\infty {\mathcal {L}}(s) {\mathcal {S}} \tilde{{\mathbb {F}}}(s,i\!-\!1,\ell ,k){\mathcal {H}} \widetilde{\mathbf {G}}(s)^k \widehat{\mathbf {G}}(s)^{\ell } \nonumber \\&\quad + \sum _{\ell =0}^{i-1} \sum _{k=1}^\infty {\mathcal {L}}(s) {\mathcal {H}} \tilde{{\mathbb {F}}}(s,i\!-\!1,\ell ,k){\mathcal {H}} \widetilde{\mathbf {G}}(s)^k \widehat{\mathbf {G}}(s)^{\ell +1} \nonumber \\&\quad + \sum _{\ell =0}^{i} \sum _{k=0}^\infty {\mathcal {L}}(s) {\mathcal {A}} \tilde{{\mathbb {F}}}(s,i,\ell ,k){\mathcal {H}} \widetilde{\mathbf {G}}(s)^{k+1} \widehat{\mathbf {G}}(s)^{\ell }. \end{aligned}$$
(33)

Having such an expression for \({\mathcal {F}}(s,i)\) we can now write out a relation for the term \({\mathcal {F}}^{\otimes }(s,i)\) in the form

$$\begin{aligned} {\mathcal {F}}^{\otimes }(s,i)&=\underbrace{\bigg [ \bigg (I \otimes I \otimes I \otimes {\mathcal {L}}(s) {\mathcal {S}} \bigg ) + \bigg (I \otimes I \otimes {\widehat{\mathbf {G}}(s)}^T \otimes {\mathcal {L}}(s) {\mathcal {H}} \bigg )\bigg ]}_{\mathbf {L}(s)} {\mathcal {F}}^{\otimes }(s,i\!-\!1) \nonumber \\&\quad + \underbrace{\bigg (\widetilde{\mathbf {G}}(s) \otimes I \otimes I \otimes {\mathcal {L}}(s) {\mathcal {A}} \bigg )}_{\mathbf {K}(s)} \bigg ({\mathcal {F}}^{\otimes }(s,i)+\widehat{{\mathcal {F}}}_{k=0}^{\otimes }(s,i)\bigg ) \nonumber \\&= [ I- \mathbf {K}(s)]^{-1} [ \mathbf {L}(s) {\mathcal {F}}^{\otimes }(s,i\!-\!1) + \mathbf {K}(s) \widehat{{\mathcal {F}}}^{\otimes }_{k=0}(s,i) ], \end{aligned}$$
(34)

where we introduced the notation

$$\begin{aligned} \widehat{{\mathcal {F}}}_{k=0}^{\otimes }(s,i) = \sum _{\ell =0}^{i} \bigg ( I \otimes {\mathcal {H}}^T \otimes {\widehat{\mathbf {G}}(s)^{\ell }}^T \otimes \tilde{{\mathbb {F}}}(s,i,\ell ,0) \bigg ), \quad i \ge 0. \end{aligned}$$

From (25) it follows that

$$\begin{aligned} {{\mathcal {F}}}^{\otimes }(s,0)&= \sum _{k=1}^\infty \bigg ( \widetilde{\mathbf {G}}(s)^k \otimes {\mathcal {H}}^T \otimes I \otimes ({\mathcal {L}}(s) {\mathcal {A}})^k {\mathcal {L}}(s) \bigg ) \\&= \bigg [I-\bigg ( \widetilde{\mathbf {G}}(s) \otimes I \otimes I \otimes {\mathcal {L}}(s) {\mathcal {A}} \bigg )\bigg ]^{-1} \bigg ( \widetilde{\mathbf {G}}(s) \otimes {\mathcal {H}}^T \otimes I \otimes {\mathcal {L}}(s) {\mathcal {A}} {\mathcal {L}}(s) \bigg ) \end{aligned}$$

and \(\widehat{{\mathcal {F}}}_{k=0}^{\otimes }(s,0)= I \otimes {\mathcal {H}}^T \otimes I \otimes {\mathcal {L}}(s)\). For \(i\ge 1\), from (30) we have

$$\begin{aligned} \widehat{{\mathcal {F}}}_{k=0}^{\otimes }(s,i)&= \mathbf {L}(s) ~\widehat{{\mathcal {F}}}_{k=0}^{\otimes }(s,i-1),\quad i \ge 1. \end{aligned}$$

Now we go back to \(\omega _\mathrm{L}^{k>0}(s)\) and apply the \(vec\) operator multiple times in the following way:

$$\begin{aligned}&\omega _\mathrm{L}^{k>0}(s)=\frac{1}{\lambda } \sum _{i=0}^\infty \sum _{j=0}^\infty \pi _{i,j} {\mathcal {A}} {\mathcal {F}}(s,i) \widehat{\mathbf {G}}(s)^{j+1} \mathbf {1} \\&= \frac{1}{\lambda } \sum _{i=0}^\infty \sum _{j=0}^\infty \bigg ( \mathbf {1}^T {\widehat{\mathbf {G}}(s)^{j+1}}^T \otimes \pi _{i,j} {\mathcal {A}} \bigg ) vec\bigg ({\mathcal {F}}(s,i)\bigg ) \\&= \frac{1}{\lambda } \bigg (\mathbf {1}^T {\widehat{\mathbf {G}}(s)}^T\otimes 1 \bigg ) \sum _{i=0}^\infty \sum _{j=0}^\infty \bigg ({\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{i,j} \bigg ) \bigg ( I \otimes {\mathcal {A}} \bigg ) vec\bigg ({\mathcal {F}}(s,i)\bigg ) \\&= \frac{1}{\lambda } \bigg (\mathbf {1}^T {\widehat{\mathbf {G}}(s)}^T \!\otimes 1 \bigg ) \sum _{i=0}^\infty \sum _{j=0}^\infty \bigg [ vec\bigg ({\mathcal {F}}(s,i)\bigg )^T \!\otimes \bigg ({\widehat{\mathbf {G}}(s)^{j}}^T \!\otimes \pi _{i,j} \bigg ) \bigg ] vec\bigg ( I \otimes {\mathcal {A}} \bigg ) \\&= \frac{1}{\lambda } \bigg (\mathbf {1}^T {\widehat{\mathbf {G}}(s)}^T\otimes 1 \bigg ) \sum _{i=0}^\infty \sum _{j=0}^\infty \bigg [ vec(I)^T {\mathcal {F}}^{\otimes }(s,i)^T (vec(I) \otimes I \otimes I) \\&\qquad \otimes \bigg ({\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{i,j} \bigg ) \bigg ] vec\bigg ( I \otimes {\mathcal {A}} \bigg ) \\&= \frac{1}{\lambda } \bigg (\mathbf {1}^T {\widehat{\mathbf {G}}(s)}^T\otimes 1 \bigg ) \bigg [vec(I)^T \otimes I \bigg ] \underbrace{\sum _{i=0}^\infty \sum _{j=0}^\infty \bigg [{\mathcal {F}}^{\otimes }(s,i)^T \otimes \bigg ({\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{i,j} \bigg ) \bigg ]}_{\mathbf {N}(s)} \\&\qquad \times \bigg [(vec(I) \otimes I \otimes I) \otimes I \bigg ] vec\bigg ( I \otimes {\mathcal {A}} \bigg ). \end{aligned}$$

The only unknown quantity in the expression for \(\omega _\mathrm{L}^{k>0}(s)\) is \(\mathbf {N}(s)\). Its form can be found from (32) and (34). We have

$$\begin{aligned} \mathbf {N}(s)&= \bigg [{\mathcal {F}}^{\otimes }(s,0)^T \otimes \underbrace{\sum _{j=0}^\infty \bigg ({\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{0,j} \bigg )}_{{\varvec{\Theta }}{\mathbf {(s)}}} \bigg ] \\&\quad + \sum _{i=1}^\infty \sum _{j=0}^\infty \bigg [{\mathcal {F}}^{\otimes }(s,i)^T \otimes \bigg ({\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{i,j} \bigg ) \bigg ] \\&= \bigg [{\mathcal {F}}^{\otimes }(s,0)^T \otimes {\varvec{\Theta }}{\mathbf {(s)}} \bigg ] \\&\quad + \underbrace{\sum _{i=1}^\infty \sum _{j=0}^\infty \bigg [ \widehat{{\mathcal {F}}}^{\otimes }_{k=0}(s,i)^T \otimes \bigg ({\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{i,j} \bigg ) \bigg ]}_{\mathbf {Z}(s)} \bigg (\mathbf {K}(s)^T {[I-\mathbf {K}(s)]^{-1}}^T \otimes I \bigg ) \\&\quad + \underbrace{\sum _{i=1}^\infty \sum _{j=0}^\infty \bigg [ {\mathcal {F}}^{\otimes }(s,i\!-\!1)^T \mathbf {L}(s)^T {[I-\mathbf {K}(s)]^{-1}}^T \otimes \bigg ({\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{i-1,j} \bigg ) {\varvec{\Psi }}{\mathbf {(s)}} \bigg ] }_{\mathbf {N}(s) \left( \mathbf {L}(s)^T {[I-\mathbf {K}(s)]^{-1}}^T \otimes {\varvec{\Psi }}{\mathbf {(s)}}\right) }\!, \end{aligned}$$

and for \(\mathbf {Z}(s)\), using properties of the Kronecker product, one obtains the relation

$$\begin{aligned} \mathbf {Z}(s)&= \sum _{i=1}^\infty \sum _{j=0}^\infty \bigg [ \widehat{{\mathcal {F}}}^{\otimes }_{k=0}(s,i-1)^T \mathbf {L}(s)^T \otimes \bigg ({\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{i-1,j} \bigg ) {\varvec{\Psi }}{\mathbf {(s)}} \bigg ] \\&= \sum _{i=0}^\infty \sum _{j=0}^\infty \bigg [ \widehat{{\mathcal {F}}}^{\otimes }_{k=0}(s,i)^T \otimes \bigg ({\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{i,j} \bigg )\bigg ] \bigg ( \mathbf {L}(s)^T \otimes {\varvec{\Psi }}{\mathbf {(s)}} \bigg ) \\&= \bigg [\bigg ( \widehat{{\mathcal {F}}}^{\otimes }_{k=0}(s,0)^T \otimes \sum _{j=0}^\infty \bigg ({\widehat{\mathbf {G}}(s)^{j}}^T \otimes \pi _{0,j} \bigg ) \bigg ) + \mathbf {Z}(s) \bigg ] \bigg ( \mathbf {L}(s)^T \otimes {\varvec{\Psi }}{\mathbf {(s)}} \bigg ) \\&= \bigg [ \bigg ( \bigg ( I \otimes {\mathcal {H}}^T \otimes I \otimes {\mathcal {L}}(s) \bigg )^T \otimes {\varvec{\Theta }}{\mathbf {(s)}} \bigg ) + \mathbf {Z}(s) \bigg ] \bigg ( \mathbf {L}(s)^T \otimes {\varvec{\Psi }}{\mathbf {(s)}} \bigg ). \end{aligned}$$

The latter relation is again affine in \(\mathbf {Z}(s)\) and thus allows the computation of \(\mathbf {Z}(s)\), and subsequently of \(\mathbf {N}(s)\) and \(\omega _\mathrm{L}^{k>0}(s)\).
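All of \(\mathbf {K}(s)\), \(\mathbf {M}(s)\), \(\mathbf {N}(s)\) and \(\mathbf {Z}(s)\) above satisfy affine relations of the form \(X = X_1 + X X_2\), which resolve to \(X = X_1(I-X_2)^{-1}\). A trivial but reusable helper (the convergence caveat is ours, not stated in the paper):

```python
import numpy as np

def affine_fixed_point(X1, X2):
    """Solve X = X1 + X @ X2, i.e. X = X1 (I - X2)^{-1}; meaningful when
    the spectral radius of X2 is below one, so the defining series converges."""
    n = X2.shape[0]
    return np.linalg.solve((np.eye(n) - X2).T, X1.T).T
```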

6 Numerical example

In this section we present a simple numerical example, in which only the service time depends on the Markov environment. Regular customers and re-sequencing signals arrive according to Poisson processes with rates \(\lambda \) and \(\gamma \), respectively. The service times follow a phase-type distribution with the representation \((\mathbf {\beta }, \mathbf {B})\):

$$\begin{aligned} \mathbf {\beta }= \begin{pmatrix} 0.5 &{} 0.5 \\ \end{pmatrix} , \ \ \mathbf {B}= \begin{pmatrix} -4 &{}\quad 2 \\ 1 &{}\quad -4 \\ \end{pmatrix}\!. \end{aligned}$$

The service rate is \(\mu =-1/(\mathbf {\beta }\mathbf {B}^{-1}\mathbf {1})=28/11\approx 2.55\). Using the notation introduced in Sect. 2, one has

$$\begin{aligned} \mathbf {A_0}= (-\lambda ),~\mathbf {A_1} = (\lambda ), \quad \mathbf {H_0} =(-\gamma ), ~\mathbf {H_1} = (\gamma ),\quad \mathbf {S_0}= \mathbf {B}, ~\mathbf {S_1}= -\mathbf {B}\mathbf {1} \mathbf {\beta }. \end{aligned}$$
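In code, the example instance is set up as follows (a sketch; the concrete \(\lambda \) and \(\gamma \) values are placeholders, since the figure sweeps over them):

```python
import numpy as np

lam, gamma = 1.2, 0.5                    # placeholder rates for one grid point
A0, A1 = np.array([[-lam]]), np.array([[lam]])        # Poisson arrivals
H0, H1 = np.array([[-gamma]]), np.array([[gamma]])    # Poisson re-sequencing
beta = np.array([[0.5, 0.5]])
B = np.array([[-4.0, 2.0], [1.0, -4.0]])
S0, S1 = B, -B @ np.ones((2, 1)) @ beta               # PH service as a MAP
mu = -1.0 / float(beta @ np.linalg.inv(B) @ np.ones((2, 1)))  # 28/11, ~2.55
```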

Substituting this into the expression for \(\omega (s)\), one can compute moments of the waiting time by differentiation. As the mean waiting time of an arbitrary customer can easily be computed from Little's law, one is mainly interested in higher moments, for example the variance. Even for the simple case considered the expression for the variance is very cumbersome, and thus we state the numerical results without providing the expression itself. Figure 4 plots the variance of the waiting time as a function of the re-sequencing rate \(\gamma \) for three values of the load: \(\rho =\lambda / \mu =0.48\), \(\rho =0.72\) and \(\rho =0.88\).

Fig. 4 Behaviour of the variance of the customer's waiting time as a function of the re-sequencing intensity \(\gamma \) for different values of load

It is worth noticing that in each case considered there exists a point of maximal variance inside the plotted range, which is in line with intuitive expectations. If the re-sequencing rate \(\gamma \) is low, then very few customers are re-sequenced and the variance of the waiting time is close to that of the ordinary \(M/PH/1\) queue (with the same arrival and service processes). If the re-sequencing rate is high, then almost every arriving customer is re-sequenced, and the variance of the waiting time again almost coincides with that of the \(M/PH/1\) queue. In between these extreme cases some customers get re-sequenced and some do not, and thus the variance of the waiting time increases.

7 Conclusion

The paper considers a queueing model with a re-sequencing buffer. The delay analysis of this model with memoryless arrival and service processes is provided in [12] with the use of generating functions. We extended the delay analysis to Markov-modulated arrival and service processes. This extension required the introduction of a completely new methodology due to the presence of non-commuting matrices. The main elements of the proposed methodology are the replacement of generating functions by fixed-point relations which exploit the spatial homogeneity of the model (relating infinite summations starting from 0 to the same summations starting from 1), and the use of the Kronecker expansion. The price to pay for the Kronecker expansion is the multiplicative increase of the matrix dimensions, which might result in prohibitive computational complexity for “large” models.