
1 Introduction

Queueing systems subject to external stochastic influences such as Markov modulation and a random environment are of considerable interest in the scientific literature. Such influences are often represented as system breakdowns or arrivals of priority customers (including batch arrivals) that force the system to operate in a different mode. For instance, several cases of Markov-modulated single-server queues are studied in [1]. An \(M/M/\infty \) system operating under batch partial failures, representing a road subject to traffic incidents, is considered in [2]; the random environment there has only two states, one with an ongoing incident and one with no incidents on the road. In [4], the authors consider a more general case of the \(M/M/\infty \) queue in a Markovian random environment with an arbitrary finite number of states and obtain an expression for the steady-state factorial moments. In [5], the \(M/G/\infty \) system in a Markovian random environment is analyzed: the transient mean and the stationary variance of the number of customers in the system are obtained, the exponential service case is studied in more depth, and the asymptotic normality of the number-of-customers distribution is shown, via the central limit theorem, under the conditions of a high arrival rate and frequent environment transitions. In turn, the steady-state mean number of customers in the \(M/M/\infty \) queue in a semi-Markovian random environment is obtained in [7], as well as the steady-state distribution of the number of customers for an environment with two states.

Depending on the application of the model, different policies for serving the customers present at the moment of an environment transition may be considered. For instance, in one of the earliest papers [3], a single-server queue with a general service-time distribution is considered subject to interruptions of generally distributed duration. Two cases of customer behavior after an interruption is cleared are studied: the resume policy, when service continues from the point where it was interrupted, and the repeat policy, when service starts over. In [10, 11], the \(M/G/\infty \) system in a semi-Markovian random environment is studied. These papers cover three cases of the present customers’ reaction to environment state transitions. The first one, considered in the present paper, is when the service-time distribution of a customer stays the same while the customer is in the system; this policy is also assumed in [5]. In the second case, all customers are immediately cleared from the system when an environment state transition happens. In the last case, customers in service move to a secondary queue, which is an infinite-server system with bulk arrivals. This case is analyzed in detail in [10], where the steady-state mean number of customers in the secondary queue is obtained.

Infinite-server queues are often used to approximate the behavior of systems with a sufficiently large number of servers, such as banks, call centers, supermarkets, or digital distribution platforms. In reality, such systems are often subject to extraneous stochastic factors that influence their performance. For instance, a change of the bank rate set by the Central bank affects the conditions under which commercial banks give loans to their clients, and these conditions, in turn, significantly influence the clients’ arrival rate. In this article we model such a situation as an \(M/GI/\infty \) queue operating in a random environment whose underlying process is a semi-Markov process with a finite number of states. The arrival rate and the service-time distribution change according to the environment state. Note that the service-time distribution of a customer that is already being served does not change until its service is finished. Say the bank provided a loan to a client on certain conditions and the bank rate changed during the repayment period; the client continues to repay the debt on the initial conditions stated in the loan agreement.

2 Problem Statement

We consider an \(M/GI/\infty \) queueing system operating in a semi-Markovian random environment. The system under discussion is an infinite-server queue with a stationary Poisson arrival process of rate \(\lambda _sN\) and an unlimited number of servers, each having service-time distribution function \(B_s(x)\), \(s = \overline{1,K}\). The large parameter N represents the condition of a high arrival rate. Here \(s=\overline{1,K}\) is the current state of a semi-Markov process s(t) defined by the matrix product \(\mathbf {P\cdot A}(x)\), where \(\mathbf {P}\) is the transition probability matrix of the states of s(t) and \(\mathbf {A}(x)\) is a diagonal matrix whose main diagonal contains the conditional sojourn-time cdfs for the states \(s = \overline{1,K}\) of s(t). As there is always a free server in the system, there is neither queueing nor loss: each arriving customer is immediately placed at a free server and stays there for a random time with distribution function \(B_s(x)\). Note that we study the case when the service-time distribution of a customer that is currently being served does not change until its service is finished.

For the considered model we define the two-component process \(\left\{ i(t), s(t)\right\} \), where i(t), taking values \(i\ge 0\), is the number of customers in the system at time t. Clearly, this process is non-Markovian. To deal with it, we first apply the method of dynamic screening and the method of supplementary variable.

3 Method of Dynamic Screening

The method of dynamic screening can be used for the analysis of both queueing systems and networks. Further applications may be found in [14–16]. We apply this method to our system in the following way.

Given that at a certain time \(t_0\) the system is empty, we fix a moment T and track the customer arrivals during the time interval \((t_0, T)\). A customer arriving at time \(t < T\) will be referred to as “screened” with probability

$$\begin{aligned} S_s (t) = 1 - B_s (T - t), s = \overline{1, K}, t_0 < t < T, \end{aligned}$$

if its service is not completed by time T. Thus, the screened customers are exactly those occupying the servers at time T.

Let us denote by n(t) the number of customers screened up to time t. The stochastic process n(t) is a screened point process whose points are the screened customers. The following identity always holds:

$$\begin{aligned} i(T) = n(T). \end{aligned}$$
(1)

We need to choose time \(t_{0}\) so that at all times \(t < t_0\) there are no screened customers, i.e.

$$\begin{aligned} S_s(t) = 1 - B_s(T - t) = 0, s = \overline{1, K}, t < t_0. \end{aligned}$$

Since \(B_s(x)\) is a cumulative distribution function, it suffices to put \(t_0 = -\infty \).

We write the possible state transitions of n(t) and their probabilities assuming \(n(t) = n, n \ge 0\) as follows:

$$\begin{aligned} n(t + \varDelta t) = {\left\{ \begin{array}{ll} n + 1, &{}\text {with prob. } \lambda _s\varDelta t S_s(t) + o(\varDelta t),\\ n, &{}\text {with prob. } 1 - \lambda _s\varDelta t S_s(t) + o(\varDelta t), \end{array}\right. } s = \overline{1, K} \end{aligned}$$

Equality (1) allows us to analyze the point process n(t) instead of i(t): the characteristics of the process n(t) at time T coincide with those of the value i(T).
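For illustration, the following Python sketch simulates the described system and counts the screened customers, i.e. the arrivals in \((t_0, T)\) whose service is not completed by time T; by (1) this count equals i(T). All numeric parameters are assumptions chosen for the example, a finite horizon is used in place of \(t_0 = -\infty \), and the function name simulate_iT is ours.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-state semi-Markov environment; every numeric value is an illustrative assumption.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])                        # embedded-chain transition probabilities
sojourn = [lambda: rng.exponential(0.2),          # sojourn-time sampler for state 1
           lambda: rng.gamma(2.0, 0.1)]           # sojourn-time sampler for state 2
lam = [50.0, 20.0]                                # arrival rates lambda_s * N
service = [lambda: rng.exponential(1.0),          # service-time sampler B_1(x)
           lambda: rng.uniform(0.5, 1.5)]         # service-time sampler B_2(x)

def simulate_iT(T=30.0, t0=0.0):
    """One realization of i(T): arrivals in (t0, T) whose service lasts beyond T."""
    t, s, count = t0, 0, 0
    while t < T:
        stay = sojourn[s]()
        end = min(t + stay, T)
        n_arr = rng.poisson(lam[s] * (end - t))   # Poisson arrivals during this sojourn
        arr_times = rng.uniform(t, end, n_arr)
        # the service-time distribution is fixed by the state seen at arrival
        durations = np.array([service[s]() for _ in range(n_arr)])
        count += int(np.sum(arr_times + durations > T))   # "screened" customers
        t += stay
        s = rng.choice(len(lam), p=P[s])          # embedded-chain jump
    return count

samples = [simulate_iT() for _ in range(2000)]
print("sample mean of i(T):", np.mean(samples), "sample variance:", np.var(samples))
```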

4 Kolmogorov Differential Equations

In order to deal with the semi-Markovian process s(t), we first apply the method of supplementary variable. We define z(t) as the residual sojourn time of the process s(t) in its current state, i.e. the interval from t until the next environment transition. The three-dimensional process \(\left\{ s(t), n(t), z(t)\right\} \) is then Markovian. Therefore, we define the probabilities of the system and environment states at time t as follows:

$$\begin{aligned} P(s, n, z, t) = P\left\{ s(t) = s, n(t) = n, z(t) < \frac{z}{N}\right\} , s = \overline{1, K}, n \ge 0. \end{aligned}$$
(2)

Here the large parameter N reflects the condition of frequent environment transitions, which balances the high arrival rate. The matrices that define the process s(t) are the following:

$$\begin{aligned} \mathbf {P} = \begin{pmatrix} p_{11} &{} p_{12} &{} \cdots &{} p_{1K} \\ p_{21} &{} p_{22} &{} \cdots &{} p_{2K} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ p_{K1} &{} p_{K2} &{} \cdots &{} p_{KK} \end{pmatrix}, \mathbf {A}(x)=\begin{pmatrix} A_1(x) &{} 0 &{} \cdots &{} 0 \\ 0 &{} A_2(x) &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \cdots &{} A_K(x) \end{pmatrix}. \end{aligned}$$

Let \(\tau _s\) be the sojourn time of s(t) in state \(s = \overline{1, K}\). Then the functions \(A_s(x)\) are defined in the following way:

$$\begin{aligned} A_s(x) = P\left\{ N\tau _s < x\right\} = P\left\{ \tau _s < \frac{x}{N}\right\} , s = \overline{1, K}, \end{aligned}$$

which means that \(A_s(x)\) is the distribution function of the N-fold sojourn time of s(t) in state \(s = \overline{1, K}\); the actual sojourn times are thus of order 1/N, in agreement with the condition of frequent environment transitions.

The system of Kolmogorov differential equations that defines the probabilities (2) is written as follows:

$$\begin{aligned}&\frac{1}{N}\frac{\partial P(s, n, z, t)}{\partial t} - \frac{\partial P(s, n, z, t)}{\partial z} + \frac{\partial P(s, n, 0, t)}{\partial z} = \nonumber \\&\qquad \lambda _s S_s(t)\left\{ P(s, n - 1, z, t) - P(s, n, z, t)\right\} + \\&\qquad A_s(z)\sum \limits _{k = 1}^{K}p_{ks}\frac{\partial P(k, n, 0, t)}{\partial z}, s = \overline{1, K}, n \ge 0 \nonumber \end{aligned}$$
(3)

Here we use the denotation

$$\begin{aligned} \frac{\partial P(s, n, 0, t)}{\partial z} = \frac{\partial P(s, n, z, t)}{\partial z}\bigg |_{z = 0}. \end{aligned}$$

As \(z \rightarrow \infty \), the initial condition for the solution of this system is defined as follows:

$$\begin{aligned} P(s, n, t_0) = {\left\{ \begin{array}{ll} r(s), &{}\text {if } n = 0, \\ 0, &{}\text {if } n > 0, \end{array}\right. } s = \overline{1, K} \end{aligned}$$
(4)

Here r(s), \(s = \overline{1, K}\), are the stationary probabilities of the states of the embedded Markov chain of s(t). The partial characteristic functions of the process \(\left\{ s(t), n(t), z(t)\right\} \) are defined as follows:

$$\begin{aligned} H(s, u, z, t) = \sum \limits _{n = 0}^{\infty }e^{jun}P(s, n, z, t), s = \overline{1, K} \end{aligned}$$

Here \(j = \sqrt{-1}\) is the imaginary unit. We rewrite the system (3) using the partial characteristic functions in the following way:

$$\begin{aligned}&\frac{1}{N}\frac{\partial H(s, u, z, t)}{\partial t} - \frac{\partial H(s, u, z, t)}{\partial z} + \frac{\partial H(s, u, 0, t)}{\partial z} = \nonumber \\&\quad \quad \,\,\quad \,\,\quad \lambda _s S_s(t)(e^{ju} - 1)H(s, u, z, t) + \\&\qquad \,\,\,\quad A_s(z)\sum \limits _{k = 1}^{K}p_{ks}\frac{\partial H(k, u, 0, t)}{\partial z}, s = \overline{1, K} \nonumber \end{aligned}$$
(5)

We then use the following vector and matrix denotations:

$$\begin{aligned}&\mathbf {H}(u, z, t) = \begin{pmatrix} H(1, u, z, t)&H(2, u, z, t)&\cdots&H(K, u, z, t) \end{pmatrix}, \\&\mathbf {\Lambda } = \begin{pmatrix} \lambda _1 &{} 0 &{} \cdots &{} 0 \\ 0 &{} \lambda _2 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \cdots &{} \lambda _K \end{pmatrix}, \mathbf {S}(t) = \begin{pmatrix} S_1(t) &{} 0 &{} \cdots &{} 0 \\ 0 &{} S_2(t) &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \cdots &{} S_K(t) \end{pmatrix}, \end{aligned}$$

to rewrite the system (5) as follows:

$$\begin{aligned} \begin{aligned}&\frac{1}{N}\frac{\partial \mathbf {H}(u, z, t)}{\partial t} - \frac{\partial \mathbf {H}(u, z, t)}{\partial z} + \\ \frac{\partial \mathbf {H}(u, 0, t)}{\partial z}&\left[ \mathbf {I - PA}(z)\right] = (e^{ju} - 1)\mathbf {H}(u, z, t)\mathbf {\Lambda S}(t). \end{aligned} \end{aligned}$$
(6)

Here \(\mathbf {I}\) is the identity matrix. Our goal is to obtain the solution to system (6) as \(z \rightarrow \infty \) that satisfies the initial condition derived from (4):

$$\begin{aligned} \mathbf {H}(u, t_0) = \mathbf {r}. \end{aligned}$$
(7)

The row vector \(\mathbf {r}\) here is the stationary probability distribution of the embedded Markov chain of the process s(t) and solves the following system of matrix-vector equations:

$$\begin{aligned} \left\{ \begin{aligned} \mathbf {rP}&= \mathbf {r}, \\ \mathbf {re}&= 1. \end{aligned}\right. \end{aligned}$$
(8)
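For completeness, a minimal numerical sketch of solving system (8) in Python: one of the redundant equations of \(\mathbf {rP} = \mathbf {r}\) is replaced by the normalization \(\mathbf {re} = 1\). The matrix \(\mathbf {P}\) below is an assumed example and the helper name embedded_stationary is ours.

```python
import numpy as np

def embedded_stationary(P):
    """Solve rP = r, re = 1 for the stationary row vector r of the embedded chain."""
    K = P.shape[0]
    M = P.T - np.eye(K)        # rows of M encode the equations r(P - I) = 0
    M[-1, :] = 1.0             # replace the last (redundant) equation by r e = 1
    b = np.zeros(K)
    b[-1] = 1.0
    return np.linalg.solve(M, b)

# assumed illustrative transition matrix
P = np.array([[0.2, 0.8],
              [0.6, 0.4]])
r = embedded_stationary(P)
print(r, r @ P)                # r and rP coincide
```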

5 Method of Asymptotic Analysis

The method of asymptotic analysis of queueing systems consists in studying the equations that define the system's characteristics or parameters under certain limit conditions [13]. It allows us to obtain explicit distributions, parameters and moments under such asymptotic conditions.

We obtain the solution of system (6) under the asymptotic conditions of a high arrival rate and frequent environment transitions, that is, as \(N\rightarrow \infty \).

5.1 First-Order Asymptotic Analysis

Let us make the following substitutions in system (6):

$$\begin{aligned} \varepsilon = \frac{1}{N}, u = \varepsilon w, \mathbf {H}(u, z, t) = \mathbf {F}_1(w, z, t, \varepsilon ). \end{aligned}$$

Then (6) can be rewritten as

$$\begin{aligned}&\varepsilon \frac{\partial \mathbf {F}_1(w, z, t, \varepsilon )}{\partial t} - \frac{\partial \mathbf {F}_1(w, z, t, \varepsilon )}{\partial z} \nonumber \\&\,\,\, + \frac{\partial \mathbf {F}_1(w, 0, t, \varepsilon )}{\partial z}\left[ \mathbf {I - PA}(z)\right] \\&= (e^{j\varepsilon w} - 1)\mathbf {F}_1(w, z, t, \varepsilon )\mathbf {\Lambda S}(t).\nonumber \end{aligned}$$
(9)

As \(\varepsilon \rightarrow 0\), the following equality holds:

$$\begin{aligned} \frac{\partial \mathbf {F}_1(w, z, t)}{\partial z} = \frac{\partial \mathbf {F}_1(w, 0, t)}{\partial z}\left[ \mathbf {I - PA}(z)\right] . \end{aligned}$$
(10)

We then represent the function \(\mathbf {F}_1(w, z, t)\) as a product

$$\begin{aligned} \mathbf {F}_1(w, z, t) = \mathbf {r}(z)\varPhi _1(w, t). \end{aligned}$$
(11)

Substitution (11) applied to (10) gives the following equation that defines row-vector \(\mathbf {r}(z)\):

$$\begin{aligned} \mathbf {r}(z) = \int \limits _{0}^{z}\mathbf {r}'(0)\left[ \mathbf {I - PA}(x)\right] dx \end{aligned}$$
(12)

To determine the value \(\mathbf {r}'(0)\), we make the following substitution

$$\begin{aligned} \mathbf {r}'(0) = C\mathbf {r},\text { } C = const. \end{aligned}$$
(13)

Note that according to (8)

$$\begin{aligned}&\lim \limits _{z\rightarrow \infty }\mathbf {r}(z) = C\int \limits _{0}^{\infty }\mathbf {r}\left[ \mathbf {I} - \mathbf {PA}(x)\right] dx \\&\,\,\,\,\,= C\int \limits _{0}^{\infty }\mathbf {r}\left[ \mathbf {I - A}(x)\right] dx = C\mathbf {rA}. \end{aligned}$$

The matrix \(\mathbf {A}\) here is the diagonal matrix whose main diagonal contains the means \(A_s\), \(s = \overline{1, K}\), of the distribution functions from \(\mathbf {A}(x)\). The normalization condition \(\mathbf {r}(z)\mathbf {e}\big |_{z = \infty } = 1\) then yields the constant C:

$$\begin{aligned} C = \frac{1}{\mathbf {rAe}} = \frac{1}{a}. \end{aligned}$$

Here and below \(a = \mathbf {rAe}\). Finally, we write the expression for \(\mathbf {r}(z)\):

$$\begin{aligned} \mathbf {r}(z) = \frac{1}{a}\int \limits _{0}^{z}\mathbf {r}\left[ \mathbf {I - PA}(x)\right] dx. \end{aligned}$$

Note that

$$\begin{aligned} \mathbf {r}(z)\bigg |_{z = \infty } = \frac{\mathbf {rA}}{a}. \end{aligned}$$

Now we set \(z = \infty \) in (9) and make the substitution (11):

$$\begin{aligned}&\varepsilon \frac{1}{a}\mathbf {rA}\frac{\partial \varPhi _1(w, t)}{\partial t} + \varPhi _1(w, t)\mathbf {r}\left[ \mathbf {I - P}\right] \\&\,\,\,\,\,\, = (e^{j\varepsilon w} - 1)\varPhi _1(w, t)\frac{1}{a}\mathbf {rA\Lambda S}(t). \end{aligned}$$

Post-multiplying both sides of the latter equation by \(\mathbf {e}\) and noting that \(\mathbf {r}\left[ \mathbf {I - P}\right] \mathbf {e} = 0\) by (8), we obtain the following first-order ordinary differential equation:

$$\begin{aligned} \frac{\partial \varPhi _1(w, t)}{\partial t} = \frac{1}{a} \frac{e^{j\varepsilon w} - 1}{\varepsilon }\varPhi _1(w, t)\mathbf {rA\Lambda S}(t)\mathbf {e}. \end{aligned}$$
(14)

As \(\varepsilon \rightarrow 0\), the function \(\varPhi _1(w, t)\) that solves the equation above and satisfies the initial condition derived from (7) is as follows:

$$\begin{aligned}&\varPhi _1(w, t) = exp\left\{ jw\kappa _1(t)\right\} , \\&\kappa _1(t) = \frac{1}{a}\int \limits _{-\infty }^{t}\mathbf {rA\Lambda S}(\tau )\mathbf {e}d\tau . \end{aligned}$$

Finally, we can write

$$\begin{aligned} \mathbf {H}(u, t) = \mathbf {F}_1(w, t, \varepsilon ) \approx \mathbf {F}_1(w, t) = \frac{\mathbf {rA}}{a}\varPhi _1(w, t) = \frac{\mathbf {rA}}{a}exp\{jw\kappa _1(t)\}, \end{aligned}$$

where \(w = Nu\). It follows that

$$\begin{aligned} M\{e^{jun(t)}\} = \mathbf {H}(u, t)\mathbf {e}\approx h_1(u, t) = exp\{ju\kappa _{1}(t)N\}. \end{aligned}$$

Since (1) holds, we can finally conclude:

$$\begin{aligned}&M\{e^{jui(T)}\} = M\{e^{jun(T)}\} = \mathbf {H}(u, T)\mathbf {e}\\&\quad \quad \approx h_{1}(u, T) = exp\{ju\kappa _{1}(T)N\}. \end{aligned}$$

Let us calculate the value \(\kappa _{1}(T)\):

$$\begin{aligned}&\kappa _{1}(T) = \frac{1}{a}\int _{-\infty }^{T}\mathbf {rA\Lambda S}(t)\mathbf {e}dt = \frac{1}{a}\int _{-\infty }^{T}\sum \limits _{s = 1}^{K}\mathbf {r}(s)A_s\lambda _{s} S_{s}(t)dt \\&\quad \qquad \,\, = \frac{1}{a}\sum \limits _{s = 1}^{K}\mathbf {r}(s)A_s\lambda _{s} \int _{-\infty }^{T}\{1 - B_{s}(T - t)\}dt \\&\,\,\,= \frac{1}{a}\sum \limits _{s = 1}^{K}\mathbf {r}(s)A_s\lambda _{s} \int _{0}^{\infty }\{1 - B_{s}(\tau )\}d\tau = \frac{1}{a}\sum \limits _{s = 1}^{K}\mathbf {r}(s)A_s\lambda _{s}b_{s}, \end{aligned}$$

where \(b_s\) are the service-time means, \(s=\overline{1, K}\). Thus

$$\begin{aligned} \kappa _{1}(T) = \frac{1}{a}\sum \limits _{s = 1}^{K}\mathbf {r}(s)A_s\lambda _{s}b_s = \frac{1}{a}\mathbf {rA\Lambda Be}, \end{aligned}$$

where \(\mathbf {B}\) is a diagonal matrix containing service-time means \(b_s\).
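As a quick numerical illustration, the first-order result can be evaluated directly from the model primitives. The Python sketch below uses assumed parameter values and recomputes \(\mathbf {r}\) as in the sketch after (8); it evaluates \(\kappa _1(T)\) and the approximate mean number of customers \(\kappa _1(T)N\).

```python
import numpy as np

# Assumed illustrative parameters of a two-state environment.
P   = np.array([[0.2, 0.8],
                [0.6, 0.4]])
A   = np.diag([0.2, 0.3])      # means A_s of the scaled sojourn-time cdfs
Lam = np.diag([5.0, 2.0])      # coefficients lambda_s
B   = np.diag([1.0, 1.5])      # mean service times b_s
N   = 100                      # large parameter
K   = P.shape[0]

# stationary distribution r of the embedded chain (see the sketch after (8))
M = P.T - np.eye(K); M[-1, :] = 1.0
r = np.linalg.solve(M, np.eye(K)[-1])

a = r @ A @ np.ones(K)                          # a = rAe
kappa1 = (r @ A @ Lam @ B @ np.ones(K)) / a     # kappa_1(T) = (1/a) r A Lambda B e
print("approximate mean number of customers kappa_1(T)N:", kappa1 * N)
```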

5.2 Second-Order Asymptotic Analysis

In equation (6) we make the substitution

$$\begin{aligned} \mathbf {H}(u, z, t) = \mathbf {H}_2(u, z, t)exp\left\{ ju\kappa _1(t)N\right\} . \end{aligned}$$
(15)

The function \(\mathbf {H}_2(u, z, t)\) here is a centered characteristic function, since the following relation holds:

$$\begin{aligned}&\mathbf {H}_2(u, z, t)\mathbf {e} = \mathbf {H}(u, z, t)e^{-ju\kappa _1(t)N}\mathbf {e} \\&\quad = M\left\{ exp\left[ ju(n(t) - \kappa _{1}(t)N)\right] \right\} . \end{aligned}$$

The substitution (15) yields an equation which defines \(\mathbf {H}_2(u, z, t)\):

$$\begin{aligned}&\quad \,\, \quad \frac{1}{N}\frac{\partial \mathbf {H}_2(u,z,t)}{\partial t} - \frac{\partial \mathbf {H}_2(u,z,t)}{\partial z} \nonumber \\&\qquad \quad + \frac{\partial \mathbf {H}_2(u,0,t)}{\partial z}\left[ \mathbf {I - PA}(z)\right] \\&= \mathbf {H}_2(u,z,t)\left\{ (e^{ju} - 1)\mathbf {\Lambda S}(t) - ju\kappa '_1(t)\mathbf {I}\right\} \nonumber \end{aligned}$$
(16)

We rewrite the latter system using the substitutions

$$\begin{aligned} \varepsilon ^2 = \frac{1}{N}, u = \varepsilon w, \mathbf {H}_2(u, z, t) = \mathbf {F}_2(w, z, t, \varepsilon ) \end{aligned}$$

in the following way:

$$\begin{aligned} \varepsilon ^2\frac{\partial \mathbf {F}_2(w, z, t, \varepsilon )}{\partial t} -&\frac{\partial \mathbf {F}_2(w, z, t, \varepsilon )}{\partial z} \nonumber \\ + \frac{\partial \mathbf {F}_2(w, 0, t, \varepsilon )}{\partial z}&[\mathbf {I - PA}(z)] \\ = \mathbf {F}_2(w, z, t, \varepsilon )[(e^{j\varepsilon w} -&1)\mathbf {\Lambda S}(t) - j\varepsilon w\kappa '_1(t)\mathbf {I}] \nonumber \end{aligned}$$
(17)

As \(\varepsilon \rightarrow 0\), the following relation takes place:

$$\begin{aligned} \frac{\partial \mathbf {F}_2(w, z, t)}{\partial z} = \frac{\partial \mathbf {F}_2(w, 0, t)}{\partial z}\left[ \mathbf {I - PA}(z)\right] . \end{aligned}$$

It follows that the function \(\mathbf {F}_2(w, z, t)\) may be represented as follows:

$$\begin{aligned} \mathbf {F}_2(w, z, t) = \mathbf {r}(z)\varPhi _2(w, t). \end{aligned}$$
(18)

In turn, the function \(\mathbf {F}_2(w, z, t, \varepsilon )\) may be approximated with the following expression:

$$\begin{aligned} \mathbf {F}_2(w, z, t, \varepsilon ) = \varPhi _2(w, t)\left\{ \mathbf {r}(z) + j\varepsilon w\mathbf {f}_2(z, t)\right\} + O(\varepsilon ^2). \end{aligned}$$
(19)

The row-vector function \(\mathbf {f}_2(z, t)\) is yet to be determined. To do this, we first substitute (19) into system (17). We also make the following approximation in (17):

$$\begin{aligned} e^{j\varepsilon w} - 1 = j\varepsilon w + O(\varepsilon ^2), \end{aligned}$$

and then let \(\varepsilon \rightarrow 0\). These manipulations yield the following equation that defines \(\mathbf {f}_2(z, t)\):

$$\begin{aligned} \frac{\partial \mathbf {f}_2(z, t)}{\partial z} - \frac{\partial \mathbf {f}_2(0, t)}{\partial z}\left[ \mathbf {I - PA}(z)\right] + \mathbf {r}(z)\left[ \mathbf {\Lambda S}(t) - \kappa '_1(t)\mathbf {I}\right] = \mathbf {0}. \end{aligned}$$
(20)

Here \(\mathbf {0}\) is the row-vector filled with zeros. It follows that as \(z\rightarrow \infty \), we have the relation

$$\begin{aligned} \mathbf {f}_2(t) = \int \limits _{0}^{\infty }\left\{ \frac{\partial \mathbf {f}_2(0, t)}{\partial z}\left[ \mathbf {I - PA}(x)\right] - \mathbf {r}(x)\left[ \mathbf {\Lambda S}(t) - \kappa '_1(t)\mathbf {I}\right] \right\} dx. \end{aligned}$$
(21)

The right-hand side of the latter relation is an improper integral. For it to converge, the integrand must tend to 0 as the variable of integration approaches \(\infty \). That is, the following relation holds for \(\partial \mathbf {f}_2(0, t)/\partial z\):

$$\begin{aligned} \frac{\partial \mathbf {f}_2(0, t)}{\partial z}\left[ \mathbf {I - P}\right] = \frac{\mathbf {rA}}{a}\left[ \mathbf {\Lambda S}(t) - \kappa '_1(t)\mathbf {I}\right] . \end{aligned}$$
(22)

The equation above is a non-homogeneous underdetermined system of linear equations. We represent its solution as the sum of the general solution of the homogeneous system and a particular solution of the non-homogeneous one:

$$\begin{aligned} \frac{\partial \mathbf {f}_2(0, t)}{\partial z} = c(t)\mathbf {r} + \mathbf {g}(t), \end{aligned}$$
(23)

where c(t) is an arbitrary scalar function of t. We impose the following additional condition on the function \(\mathbf {g}(t)\):

$$\begin{aligned} \mathbf {g}(t)\mathbf {e} = 0. \end{aligned}$$
(24)

Let us now define the explicit expression for (21):

$$\begin{aligned}&\mathbf {f}_2(t) = \int \limits _{0}^{\infty }\left\{ \frac{\partial \mathbf {f}_2(0, t)}{\partial z}\left[ \mathbf {I - P + P(I - A}(z))\right] - \mathbf {r}(z)\left[ \mathbf {\Lambda S}(t) - \kappa '_1(t)\mathbf {I}\right] \right\} dz \\&\quad \qquad \qquad \qquad = \int \limits _{0}^{\infty }\left[ \frac{1}{a}\mathbf {rA - r}(z)\right] \left[ \mathbf {\Lambda S}(t) - \kappa '_1(t)\mathbf {I}\right] dz \\&\quad \qquad \qquad \qquad \quad \quad + \int \limits _{0}^{\infty }\frac{\partial \mathbf {f}_2(0, t)}{\partial z}\mathbf {P}\left[ \mathbf {I - A}(z)\right] dz \\&= \frac{1}{a}\mathbf {rA}\int \limits _{0}^{\infty }\left\{ \mathbf {I - A}^{-1}\int \limits _{0}^{z}\left[ \mathbf {I - A}(x)\right] dx\right\} dz\left[ \mathbf {\Lambda S}(t) - \kappa '_1(t)\mathbf {I}\right] + \frac{\partial \mathbf {f}_2(0, t)}{\partial z}\mathbf {PA}. \end{aligned}$$

Note that \(\mathbf {A}^{-1}\int \limits _{0}^{z}\left[ \mathbf {I - A}(x)\right] dx\) is a diagonal matrix containing the distribution functions of the elapsed and residual (equilibrium) sojourn times of s(t) in each of its states. We denote by \(\mathbf {\overline{A}}\) the diagonal matrix containing the means of these distributions. Finally, we rewrite the expression for \(\mathbf {f}_2(t)\) as follows:

$$\begin{aligned} \mathbf {f}_2(t) = \frac{1}{a}\mathbf {rA\overline{A}}\left[ \mathbf {\Lambda S}(t) - \kappa '_1(t)\mathbf {I}\right] + \frac{\partial \mathbf {f}_2(0, t)}{\partial z}\mathbf {PA}. \end{aligned}$$
(25)
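As a side remark (ours, not part of the original derivation), the diagonal entries \(\overline{A}_s\) of \(\mathbf {\overline{A}}\) can be expressed through the first two moments of the distributions \(A_s(x)\):

$$\begin{aligned} \overline{A}_s = \int \limits _{0}^{\infty }\left\{ 1 - \frac{1}{A_s}\int \limits _{0}^{z}\left[ 1 - A_s(x)\right] dx\right\} dz = \frac{1}{A_s}\int \limits _{0}^{\infty }x\left[ 1 - A_s(x)\right] dx = \frac{\alpha _s^{(2)}}{2A_s}, \end{aligned}$$

where \(\alpha _s^{(2)}\) (our notation) is the second moment of the distribution \(A_s(x)\); in particular, \(\overline{A}_s = A_s\) when \(A_s(x)\) is exponential.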

Now we show that the quantity \(\mathbf {f}_2(t)\left[ \mathbf {\Lambda S}(t) - \kappa '_1(t)\mathbf {I}\right] \mathbf {e}\), which is the only form in which \(\mathbf {f}_2(t)\) enters the subsequent formulas, does not depend on the arbitrary scalar function c(t) present in (23). To do that, we consider the following term arising from (25):

$$\begin{aligned}&\qquad \frac{\partial \mathbf {f}_2(0, t)}{\partial z}\mathbf {PA}\left[ \mathbf {\Lambda S}(t) - \kappa '_1(t)\mathbf {I}\right] \mathbf {e} \\&= \left[ c(t)\mathbf {r + g}(t)\right] \mathbf {PA}\left[ \mathbf {I} - \frac{1}{a}\mathbf {erA}\right] \mathbf {\Lambda S}(t)\mathbf {e} \\&\qquad = \mathbf {g}(t)\mathbf {PA}\left[ \mathbf {I} - \frac{1}{a}\mathbf {erA}\right] \mathbf {\Lambda S}(t)\mathbf {e} \\&+ c(t)\mathbf {rPA\Lambda S}(t)\mathbf {e} - c(t)\frac{1}{a}\mathbf {rPAerA\Lambda S}(t)\mathbf {e}. \end{aligned}$$

With (8) and \(a = \mathbf {rAe}\) in mind, we conclude that the two latter terms cancel each other. Thus, the function c(t) does not affect the quantity \(\mathbf {f}_2(t)\left[ \mathbf {\Lambda S}(t) - \kappa '_1(t)\mathbf {I}\right] \mathbf {e}\).

Now let us determine the function \(\varPhi _2(w, t)\). For this purpose, we again make substitution (19) in (17) and also the following approximation:

$$\begin{aligned} e^{j\varepsilon w} - 1 = j\varepsilon w + \frac{(j\varepsilon w)^2}{2} + O(\varepsilon ^3). \end{aligned}$$

As \(\varepsilon \rightarrow 0\) and \(z\rightarrow \infty \), this yields the first-order ODE that defines \(\varPhi _2(w, t)\):

$$\begin{aligned} \frac{\partial \varPhi _2(w, t)}{\partial t} = \frac{(jw)^2}{2}\varPhi _2(w, t)\left\{ \kappa '_1(t) + 2\mathbf {f}_2(t)\left[ \mathbf {\Lambda S}(t) - \kappa '_1(t)I\right] \mathbf {e}\right\} . \end{aligned}$$
(26)

Its solution that satisfies the initial condition derived from (7) is of the following form:

$$\begin{aligned} \varPhi _2(w, t) = exp\left\{ \frac{(jw)^2}{2}\kappa _2(t)\right\} , \end{aligned}$$
(27)

where

$$\begin{aligned} \kappa _2(t) = \kappa _1(t) + 2\int \limits _{-\infty }^{t}\mathbf {f}_2(\tau )\left[ \mathbf {\Lambda S}(\tau ) - \kappa '_1(\tau )\mathbf {I}\right] \mathbf {e}d\tau . \end{aligned}$$
(28)

Thus, the expression for the centered characteristic function \(\mathbf {H}_2(u, t)\) is obtained and is written as follows:

$$\begin{aligned}&\mathbf {H}_2(u, t) = \mathbf {F}_2(w, t, \varepsilon ) \approx \mathbf {F}_2(w, t) = \frac{\mathbf {rA}}{a}\varPhi _{2}(w, t) \\&= \frac{\mathbf {rA}}{a}exp\{\frac{(jw)^2}{2}\kappa _2(t)\} = \frac{\mathbf {rA}}{a}exp\{\frac{(ju)^2}{2}\kappa _2(t)N\}. \end{aligned}$$

It follows that

$$\begin{aligned} \mathbf {H}(u, t) = \mathbf {H}_2(u, t)e^{ju\kappa _1(t)N} \approx \frac{\mathbf {rA}}{a}exp\left\{ ju\kappa _1(t)N + \frac{(ju)^2}{2}\kappa _2(t)N\right\} , \end{aligned}$$
(29)
$$\begin{aligned} M\{e^{jun(t)}\} = \mathbf {H}(u, t)\mathbf {e} \approx h_{2}(u, t) = exp\left\{ ju\kappa _{1}(t)N + \frac{(ju)^{2}}{2}\kappa _{2}(t)N\right\} . \end{aligned}$$
(30)

Considering (1), the following identities are true:

$$\begin{aligned} \begin{aligned} M\{e^{jui(T)}\} = M\{e^{jun(T)}\}&= \mathbf {H}(u, T)\mathbf {e} \approx h_2(u, T) \\ = exp\{ju\kappa _1(T)N&+ \frac{(ju)^2}{2}\kappa _2(T)N\}, \end{aligned} \end{aligned}$$
(31)

where \(\kappa _{2}(T)\) is of the following form:

$$\begin{aligned} \kappa _2(T) = \kappa _1(T) + 2\int \limits _{-\infty }^{T}\mathbf {f}_2(t)\left[ \mathbf {\Lambda S}(t) - \kappa '_1(t)\mathbf {I}\right] \mathbf {e}dt \end{aligned}$$
(32)

According to the definition of the functions \(S_s(t) = 1 - B_s(T - t)\), it is clear that \(\lim \limits _{t\rightarrow -\infty }S_s(t) = 0\), \(s = \overline{1, K}\). Therefore, the improper integral (32) converges and can be calculated numerically for given system and environment parameters.
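For instance, such a calculation might be organized as follows. The Python sketch below assumes exponential service-time and sojourn-time distributions (so that \(\mathbf {\overline{A}} = \mathbf {A}\), see the remark after (25)) and purely illustrative numeric values; it evaluates the integrand of (32) through (22), (24) and (25) and integrates it numerically.

```python
import numpy as np

# Assumed illustrative parameters: exponential service and sojourn times.
P    = np.array([[0.2, 0.8],
                 [0.6, 0.4]])          # embedded-chain transition matrix
A    = np.diag([0.2, 0.3])             # means A_s of the scaled sojourn-time cdfs
Abar = A.copy()                        # equilibrium means; equals A in the exponential case
lam  = np.array([5.0, 2.0])            # coefficients lambda_s
mu   = np.array([1.0, 2.0])            # exponential service rates, b_s = 1/mu_s
N    = 100                             # large parameter
K    = P.shape[0]
e, I = np.ones(K), np.eye(K)

# stationary distribution r of the embedded chain and a = rAe
M = P.T - I; M[-1, :] = 1.0
r = np.linalg.solve(M, I[-1])
a = r @ A @ e
Lam = np.diag(lam)

def S(t, T):
    # diagonal matrix S(t) = diag(1 - B_s(T - t)) for exponential service
    return np.diag(np.exp(-mu * (T - t)))

def integrand(t, T):
    St = S(t, T)
    k1p = (r @ A @ Lam @ St @ e) / a               # kappa_1'(t)
    D = Lam @ St - k1p * I
    b = (r @ A / a) @ D                            # right-hand side of (22)
    # solve g[I - P] = b with g e = 0; the arbitrary c(t) r term is dropped,
    # which is harmless by the remark following (25)
    M_aug = np.hstack([I - P, e[:, None]])
    g = np.linalg.lstsq(M_aug.T, np.append(b, 0.0), rcond=None)[0]
    f2 = (r @ A / a) @ Abar @ D + g @ P @ A        # equation (25)
    return f2 @ D @ e                              # integrand of (32)

T = 0.0                                            # the result does not depend on T
ts = np.linspace(T - 40.0, T, 4001)                # truncation of the improper integral
vals = np.array([integrand(t, T) for t in ts])
dt = ts[1] - ts[0]
kappa1 = (r @ A @ Lam @ np.diag(1.0 / mu) @ e) / a
kappa2 = kappa1 + 2.0 * np.sum(0.5 * (vals[:-1] + vals[1:])) * dt   # trapezoid rule for (32)
print("asymptotic mean    M{i(T)} ~", kappa1 * N)
print("asymptotic variance D{i(T)} ~", kappa2 * N)
```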

Thus, the asymptotic steady-state probability distribution of the number of customers in the system defined by (31) is normal with first and second cumulants \(\kappa _{1}(T)N\) and \(\kappa _{2}(T)N\), respectively. In particular,

$$\begin{aligned} M\{i(T)\} \approx \kappa _{1}(T)N, D\{i(T)\} \approx \kappa _{2}(T)N. \end{aligned}$$
(33)

The inverse Fourier transform of (31) gives the probability density function of a normally distributed random variable:

$$\begin{aligned} p(x) = \frac{1}{\sqrt{2\pi \kappa _{2}(T)N}}exp\left\{ -\frac{(x - \kappa _{1}(T)N)^{2}}{2\kappa _{2}(T)N}\right\} . \end{aligned}$$
(34)

It remains to pass from this continuous distribution to a discrete one as follows:

$$\begin{aligned} P(i) = Cp(i), i \ge 0, \end{aligned}$$
(35)

where the constant C is determined by the normalizing condition:

$$\begin{aligned} \sum \limits _{i = 0}^{\infty }P(i) = C\sum \limits _{i = 0}^{\infty }p(i) = 1. \end{aligned}$$
(36)

Due to (36), C is given as follows:

$$\begin{aligned} C = 1/\sum \limits _{i = 0}^{\infty }p(i) \end{aligned}$$
(37)
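A minimal sketch of this discretization in Python (the function name and the cumulant values are ours; the infinite sum in (36) is truncated where the density becomes negligible):

```python
import numpy as np

def discrete_gaussian_approximation(kappa1N, kappa2N, i_max=None):
    """Discrete distribution P(i), i >= 0, built from (34)-(37)."""
    if i_max is None:
        # truncate the sum in (36) where the density is negligible
        i_max = int(kappa1N + 10.0 * np.sqrt(kappa2N))
    i = np.arange(i_max + 1)
    p = np.exp(-(i - kappa1N) ** 2 / (2.0 * kappa2N)) / np.sqrt(2.0 * np.pi * kappa2N)
    return i, p / p.sum()              # division by p.sum() plays the role of C in (37)

# illustrative cumulants kappa_1(T)N and kappa_2(T)N
i, Pi = discrete_gaussian_approximation(kappa1N=350.0, kappa2N=420.0)
print("P{i(T) = 350} approx.", Pi[350])
```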

6 Conclusion

Thus, the Gaussian approximation of the probability distribution of the number of customers in the system \(M(\lambda _s)/G(B_s(x))/\infty \) is obtained by means of asymptotic analysis under the conditions of a high arrival rate and frequent environment transitions. Using the method of dynamic screening, we considered the non-stationary point process n(t) instead of the non-Markovian process i(t), the number of customers in the system. Then, following the method of supplementary variable, we introduced the residual sojourn time z(t) of the environment process s(t) in its present state, which made the resulting process amenable to the tools of Markov process theory. After deriving the system of differential equations for the vector characteristic functions of the number of customers in the system, we carried out the asymptotic analysis of the system in question.

Earlier, we considered the \(M/G/\infty \) queue operating in a Markovian random environment under the same service policy, when the service-time distribution does not change while the customer is in the system, and obtained the steady-state probability distribution of the number of customers in a similar manner. However, the Markovian case narrows the application area significantly. Therefore, in this paper we have considered the more general case of a semi-Markovian random environment.