
1 Introduction

Queueing models with different types of vacation policies have been investigated extensively, and the number of publications on this topic is still increasing. Such systems can be successfully applied in modelling and analyzing phenomena occurring in telecommunication and computer networks. In particular, they are a very useful tool when the service stations (e.g. servers, network nodes) can use idle periods to perform additional tasks. Different variants of the vacation policy can then be used to describe the considered problem more precisely.

One such variant is the working vacation (WV) policy proposed in 2002 by Servi and Finn. In [10] the authors introduce an M / M / 1 queue with the WV policy to model the behaviour of a reconfigurable WDM (wavelength-division multiplexing) optical access network. Each time a server with WV finishes processing the last job waiting in the queue, it switches to the WV mode. During the WV period another (usually slower) service rate is offered instead of suspending the service completely. The idle resources can simultaneously be used to accomplish tasks not related to the processing of the queue, e.g. some maintenance work. Some variants of WV systems can be applied, for example, in modelling the operation of a network node in which the typical processing rate changes periodically, e.g. due to the introduction of priority traffic.

Baba [1] extended the investigation of WV queues to the GI / M / 1 queue and derived the steady-state distribution of the number of customers in the system. In [14] Wu and Takagi obtained the stationary distributions of the queue size and of the time a customer stays in the system for an M / G / 1 WV model. Banik [2] used the method of the embedded Markov chain and a supplementary variable to compute the queue-length distribution, the blocking probability, and the mean waiting time in the GI / M / 1 WV queue with a finite buffer. An M / G / 1 retrial queue was analyzed in [4], where the supplementary variable method was used to obtain the stationary queue-size distribution for a model with vacation interruption. In [12] the authors investigated an M / M / 1 retrial queue with WV and feedback under N-policy. The probability generating function of the number of customers present in the steady state of an M / G / 1 queue with exponential WV and the gated service discipline was obtained in [9].

While the literature concerning WV queues is growing fast, most of the results relate only to the stationary characteristics of the system. However, in some cases the steady-state analysis does not give sufficient insight into the system's behaviour, e.g. the convergence rate to the stationary distribution can be very slow. The transient analysis may also be motivated by a high variability of the system parameters, or by the need to observe the behaviour of the server shortly after it is opened or after a new control mechanism is applied.

Transient results for queueing models with the WV regime can be found e.g. in [11, 13]. In [11] the time-dependent system-size probabilities were obtained for the M / M / 1 queue with heterogeneous service and customers' impatience. The transient distribution of the number of customers for the M / M / c model can be found in [13]. In [6, 7] an M / M / 1 / N queue with the WV policy was investigated and transient distributions of the queue size and of the virtual waiting time of a customer were obtained. In [5] a time-dependent analysis of the queueing delay in a GI / M / 1 / N queue was carried out.

This paper concerns the analysis of an M / M / 1 / N queue with generally distributed working vacations. The incoming stream of packets is described by a Poisson process with rate \(\lambda .\) When, after completing a job in the normal mode (with rate \(\mu \)), the server finds the queue empty, it switches to the WV mode and comes back to the normal mode after a random, generally distributed amount of time with the cumulative distribution function (CDF) \(G(\cdot )\). During the WV period the arriving jobs are served with a different rate \(\mu ^*\ ({<}{\mu })\). After completion of the WV, the service rate is switched back to \(\mu \). If a job enters the system while there are already N jobs present, it is lost due to the buffer overflow. In the paper we deal with the CDF of the time to the first buffer overflow in the considered model. Applying an analytical approach based on identifying Markov moments in the evolution of the system, the continuous version of the total probability formula, and linear algebra, we obtain a representation for the Laplace transform (LT) of the tail CDF of the time to the first buffer overflow, conditioned on the number of packets accumulated in the buffer at the starting moment.

The phenomenon of losses is typical of packet-oriented telecommunication and computer networks (e.g. IP routers) and of automated production systems, where some tasks have to be redirected to another production line or to a magazine when the capacity available for accumulating the incoming traffic is exhausted. A well-known performance measure such as the loss ratio does not, however, give sufficient knowledge about the probabilistic nature of the loss process. An in-depth analysis requires, e.g., information about the probability distribution of the time to buffer saturation.

The rest of the paper is organized as follows. In Sect. 2 the investigated problem is defined and a useful algebraic result is presented. Sections 3 and 4 contain auxiliary results used in Sect. 5 to solve the main problem. In Sect. 5 the LT of the distribution of the time to buffer overflow is derived using an algebraic approach based on the theorem stated in Sect. 2. Section 6 contains numerical examples.

2 Equations for Time-to-Buffer-Overflow Distribution

Introduce the CDF of the time \(\tau \) to the first buffer overflow as follows:

$$\begin{aligned} {}&B_{n}(t){\mathop {=}\limits ^{def}}{\mathbf P}\left\{ \tau>t\,|\,X(0)=n\right\} ,\quad t>0, \end{aligned}$$
(1)

where X(0) stands for the initial buffer state, \(n\in \{0, 1, ..., N-1\}\).

We assume that if the queue is empty at the opening, then a WV period initializes the evolution of the system; otherwise, the system starts its operation in the normal mode. Using the method of the embedded Markov chain and the formula of total probability with respect to the moment at which the WV period ends, we can write the first equation (for \(n=0\)):

$$\begin{aligned} {}&B_{0}(t)=\int _{0}^{t}\sum _{k=0}^{N-1}P_{0}^{slow}(u,k)B_{k}(t-u)\mathrm {d}G(u)+[1-G(t)]B_{0}^{slow}(t), \end{aligned}$$
(2)

where

$$\begin{aligned} {}&P_{0}^{slow}(u,k)={\mathbf P}\left\{ \bigl (X(u)=k)\,\wedge \,\bigl (\sup _{v\in [0,u]}{X(v)}\le N-1\bigr )\,|\,X(0)=0\right\} \end{aligned}$$
(3)

is the probability that, in a time interval of length u, the number of jobs present in the system will change from 0 to k without reaching N (i.e. without a buffer overflow), and

$$\begin{aligned} {}&B_{0}^{slow}(t){\mathop {=}\limits ^{def}}{\mathbf P}\left\{ \tau>t\,|\,X(0)=0\right\} ,\quad t>0 \end{aligned}$$

is the CDF of the time to the first buffer overflow in the M / M / 1 / N system without the WV policy, in which the parameter of the input stream is the same as in the original model and the service parameter is \(\mu ^*\). If the system starts its evolution non-empty and in the normal mode, then, using the formula of total probability with respect to the first Markov event after the opening, we similarly get

$$\begin{aligned} B_{n}(t)=\lambda \int _{0}^{t}\mathrm {e}^{-(\lambda +\mu )x}B_{n+1}(t-x)\mathrm {d}x+ \mu \int _{0}^{t}\mathrm {e}^{-(\lambda +\mu )x}B_{n-1}(t-x)\mathrm {d}x+\mathrm {e}^{-(\lambda +\mu )t}, \end{aligned}$$
(4)

where \(1\le n\le N-2,\) and, finally,

$$\begin{aligned} {}&B_{N-1}(t)=\mu \int _{0}^{t}\mathrm {e}^{-(\lambda +\mu )x}B_{N-2}(t-x)\mathrm {d}x+\mathrm {e}^{-(\lambda +\mu )t}. \end{aligned}$$
(5)

In (4)–(5) the summand with the coefficient \(\lambda \) (\(\mu \)) in front of the integral corresponds to the situation in which the arrival of a new job (the completion of a job's processing) occurs first. The last summand is the probability that no event occurs before t.

To find explicit formulae for the functions \(B_n(t),\, 0\le n\le N-1,\) we need to find the representations of \(P_0^{slow}(t,k)\) and \(B_0^{slow}(t).\) In the analytical approach we use the following algebraic result (see [8]):

Theorem 1

Let \((\tau _k),\ k\ge 0,\, \tau _0\not =0\) and \((\theta _k),\, k\ge 1\) be two given sequences. Each solution of the following system of linear equations:

$$\begin{aligned} {}&\sum _{k=-1}^{n-2}\tau _{k+1}y_{n-k}-y_n=\theta _n,\ n\ge 2, \end{aligned}$$

can be written in the following way:

$$\begin{aligned} {}&y_n=MR_{n-1}+\sum _{k=2}^n R_{n-k}\theta _k,\ n\ge 2, \end{aligned}$$

where \(M\in \mathbb {R},\) and \((R_k),\ k\ge 0\) is a sequence defined recursively as follows:

$$\begin{aligned} {}&R_0=0,\, \, R_1=\tau _0^{-1},\, \, R_{k+1}=R_1(R_k-\sum _{i=0}^k\tau _{i+1}R_{k-i}). \end{aligned}$$
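
For illustration, a minimal Python sketch of the recursion defining \((R_k)\) and of the resulting solution formula is given below. It is only a direct transcription of the statements of Theorem 1; the argument names (tau, theta, M) mirror the symbols of the theorem and are introduced here for the example only.

```python
# A direct transcription of Theorem 1 (illustrative sketch only).
def potential_sequence(tau, n_max):
    """R_0, ..., R_{n_max} built from (tau_k), tau_0 != 0; missing tau_k are treated as 0."""
    R = [0.0, 1.0 / tau[0]]                            # R_0 = 0, R_1 = 1/tau_0
    for k in range(1, n_max):
        s = sum(tau[i + 1] * R[k - i] for i in range(k + 1) if i + 1 < len(tau))
        R.append(R[1] * (R[k] - s))                    # R_{k+1} = R_1 (R_k - sum_i tau_{i+1} R_{k-i})
    return R

def solution(M, tau, theta, n):
    """y_n = M R_{n-1} + sum_{k=2}^{n} R_{n-k} theta_k, where theta[k-1] stores theta_k."""
    R = potential_sequence(tau, n)
    return M * R[n - 1] + sum(R[n - k] * theta[k - 1] for k in range(2, n + 1))
```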

3 Time to Buffer Overflow in a Reliable System

Let us consider a reliable system (without the working vacation policy) that processes the arriving packets at the slower rate \(\mu ^{*}.\) Using the formula of total probability with respect to the first event after the opening, we can write the following system of equations:

$$\begin{aligned} B^{slow}_{0}(t)&=\lambda \int _{0}^{t}\mathrm {e}^{-\lambda x}B_{1}^{slow}(t-x)\mathrm {d}x+\mathrm {e}^{-\lambda t}, \nonumber \\ B^{slow}_{n}(t)&=\lambda \int _{0}^{t}\mathrm {e}^{-(\lambda +\mu ^{*})x}B^{slow}_{n+1}(t-x)\mathrm {d}x+ \mu ^{*}\int _{0}^{t}\mathrm {e}^{-(\lambda +\mu ^{*})x}B^{slow}_{n-1}(t-x)\mathrm {d}x\nonumber \\ {}&+\mathrm {e}^{-(\lambda +\mu ^{*})t}, \end{aligned}$$
(6)

where \(1\le n\le N-2,\) and

$$\begin{aligned} B^{slow}_{N-1}(t)&= \mu ^{*}\int _{0}^{t}\mathrm {e}^{-(\lambda +\mu ^{*})x}B^{slow}_{N-2}(t-x)\mathrm {d}x+\mathrm {e}^{-(\lambda +\mu ^{*})t}. \end{aligned}$$
(7)

The interpretation of these equations is similar to that of the analogous ones in Sect. 2. After introducing the LTs \(b_i^{slow}(s)=\int _0^\infty \mathrm {e}^{-st}B_i^{slow}(t) \mathrm {d}t,\ 0\le i\le N-1\), we can rewrite this system in the form

$$\begin{aligned} {}&b^{slow}_{0}(s)=\frac{\lambda }{\lambda +s}b^{slow}_{1}(s)+\frac{1}{\lambda +s},\end{aligned}$$
(8)
$$\begin{aligned} {}&b^{slow}_{n}(s)=\frac{\lambda }{\lambda +\mu ^{*}+s}b^{slow}_{n+1}(s) +\frac{\mu ^{*}}{\lambda +\mu ^{*}+s}b^{slow}_{n-1}(s)+\frac{1}{\lambda +\mu ^{*}+s}, \end{aligned}$$
(9)

where \(1\le n\le N-2,\) and

$$\begin{aligned} {}&b^{slow}_{N-1}(s)=\frac{\mu ^{*}}{\lambda +\mu ^{*}+s}b^{slow}_{N-2}(s)+\frac{1}{\lambda +\mu ^{*}+s}. \end{aligned}$$
(10)

If we define the following sequence:

$$\begin{aligned} {}&a^{*}_{0}(s)=\frac{\lambda }{\lambda +\mu ^{*}+s},\quad a^{*}_{2}(s)=\frac{\mu ^{*}}{\lambda +\mu ^{*}+s},\quad a^{*}_{k}(s)=0,\,k\ne 0, 2, \end{aligned}$$

and

$$\begin{aligned} {}&\phi ^{*}_{n}(s)=-\frac{1}{\lambda +\mu ^{*}+s}-\delta _{n,1}a^*_2(s) b^{slow}_{0}(s), \end{aligned}$$

where \(\delta _{i,j}\) is the Kronecker delta function (i.e. \(\delta _{i,j}=1\) when \(i=j\) and \(\delta _{i,j}=0\) otherwise), then the system (9) can be rewritten as follows:

$$\begin{aligned} {}&\sum _{k=-1}^{n-1}a_{k+1}^{*}(s)b^{slow}_{n-k}(s)-b^{slow}_{n}(s)=\phi ^{*}_{n}(s),\ \ 1\le n\le N-2. \end{aligned}$$
(11)

In consequence, the solution of (8), (10) and (11) can be written in the following way (see Theorem 1):

$$\begin{aligned} {}&b^{slow}_{n}(s)=M^{*}(s)R^{*}_{n}(s)+\sum _{k=1}^{n}R^{*}_{n-k}(s)\phi ^{*}_{k}(s),\quad 1\le n\le N-1, \end{aligned}$$
(12)

where \(M^*(s)\) is an unknown function, and \(R_k^*(s)\) is a sequence defined recursively as follows:

$$\begin{aligned} {}&R_0^*(s)=0,\, R_1^*(s)=\left( a^*_0(s) \right) ^{-1},\ R_{k+1}^*(s)=R_1^*(s)\left( R_k^*(s)-a^*_2(s) R_{k-1}^*(s) \right) ,\, \end{aligned}$$
(13)

for \(1\le k\le N-2.\)
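
Since only \(a^*_0(s)\) and \(a^*_2(s)\) are non-zero, recursion (13) reduces to a simple three-term scheme that can be evaluated directly. Below is a hedged Python sketch; the parameter names lam, mu_star and s are introduced here only for the example (s may be complex).

```python
# Sketch of recursion (13); only a*_0(s) and a*_2(s) are non-zero.
def R_star(s, lam, mu_star, n_max):
    """Return [R*_0(s), ..., R*_{n_max}(s)]."""
    a0 = lam / (lam + mu_star + s)        # a*_0(s)
    a2 = mu_star / (lam + mu_star + s)    # a*_2(s)
    R = [0.0, 1.0 / a0]                   # R*_0(s) = 0, R*_1(s) = 1/a*_0(s)
    for k in range(1, n_max):
        R.append(R[1] * (R[k] - a2 * R[k - 1]))   # R*_{k+1}(s) = R*_1(s)(R*_k(s) - a*_2(s) R*_{k-1}(s))
    return R
```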

Introducing (12) into the right side of (10), we get:

$$\begin{aligned} b^{slow}_{N-1}(s)=&a^{*}_{2}(s)\Bigl [M^{*}(s)R^{*}_{N-2}(s)+\sum _{k=1}^{N-2}R^{*}_{N-2-k}(s) \nonumber \\ {}&\times \Bigl (-\frac{1}{\lambda +\mu ^{*}+s}-\delta _{1,k}a^*_2(s)b^{slow}_{0}(s)\Bigr )\Bigr ]+\frac{1}{\lambda +\mu ^{*}+s}. \end{aligned}$$
(14)

Similarly, writing (12) for \(n=N-1,\) we obtain:

$$\begin{aligned} {}&b^{slow}_{N-1}(s)=M^{*}(s)R^{*}_{N-1}(s)+\sum _{k=1}^{N-1}R^{*}_{N-1-k}(s)\Bigl (-\frac{1}{\lambda +\mu ^{*}+s}-\delta _{1,k}a^*_2(s)b^{slow}_{0}(s)\Bigr ). \end{aligned}$$
(15)

Comparing the right sides of (14) and (15), we can express \(M^{*}(s)\) as a function of \(b^{slow}_{0}(s):\)

$$\begin{aligned} M^{*}(s)=&\left( {\lambda +\mu ^{*}+s}\right) ^{-1}\left( a^{*}_{2}(s)R^{*}_{N-2}(s)-R^{*}_{N-1}(s)\right) ^{-1}\\ {}&\times \Biggl [\sum _{k=1}^{N-2}\left( a^*_2(s)R_{N-k-2}^*(s)-R_{N-k-1}^*(s) \right) +\mu ^*b_0^{slow}(s)\left( a^*_2(s)R_{N-3}^*(s)-R_{N-2}^*(s)\right) -1 \Biggr ]. \nonumber \end{aligned}$$
(16)

Referring now to (8), (12) and (16), we get

$$\begin{aligned} {} b^{slow}_{0}(s)=&\frac{1}{\lambda +s}\Biggl \{\lambda R^{*}_{1}(s)\Bigl (a^{*}_{2}(s)R^{*}_{N-2}(s)-R^{*}_{N-1}(s)\Bigr )^{-1}\nonumber \\ {}&\times \Biggl [\sum _{k=1}^{N-2}\Bigl (-\frac{1}{\lambda +\mu ^{*}+s}-\delta _{1,k} a_2^*(s)b^{slow}_{0}(s)\Bigr ) \bigl (R^{*}_{N-1-k}(s)-a^{*}_{2}(s)R^{*}_{N-2-k}(s)\bigr ) \nonumber \\ {}&-\frac{1}{\lambda +\mu ^{*}+s}\Biggr ]+1\Biggr \}, \end{aligned}$$

and hence, after simplification,

$$\begin{aligned} {}&b^{slow}_{0}(s)=T^{slow}_{1}(s)T^{slow}_{2}(s), \end{aligned}$$
(17)

where

$$\begin{aligned} {} T^{slow}_{1}(s){\mathop {=}\limits ^{def}}&\Biggl (s+\lambda -\mu ^*\left( a^{*}_{2}(s)R^{*}_{N-2}(s)-R^{*}_{N-1}(s)\right) ^{-1}\left( a^*_2(s)R^*_{N-3}(s)-R^*_{N-2}(s) \right) \Biggr )^{-1} \end{aligned}$$
(18)

and

$$\begin{aligned} {}&T^{slow}_{2}(s){\mathop {=}\limits ^{def}}\frac{\left( \sum _{k=0}^{N-2}\left( a^*_2(s)R^*_{N-k-2}(s)-R^*_{N-k-1}(s) \right) -1 \right) }{\left( a^*_2(s)R^*_{N-2}(s)-R^*_{N-1}(s) \right) }. \end{aligned}$$
(19)
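
Formulae (17)–(19) can now be evaluated mechanically once the sequence \(R^{*}_{k}(s)\) is available. The sketch below is only an illustration of the algebra; it assumes \(N\ge 3\) and that the R_star helper from the sketch after (13) is in scope.

```python
# Sketch of (17)-(19): b0_slow(s) = T1_slow(s) * T2_slow(s).
# Assumes R_star(s, lam, mu_star, n_max) from the sketch after (13).
def b0_slow(s, lam, mu_star, N):
    a2 = mu_star / (lam + mu_star + s)                       # a*_2(s)
    R = R_star(s, lam, mu_star, N - 1)                       # R*_0(s), ..., R*_{N-1}(s)
    D = a2 * R[N - 2] - R[N - 1]                             # denominator common to (18) and (19)
    T1 = 1.0 / (s + lam - mu_star * (a2 * R[N - 3] - R[N - 2]) / D)                # (18)
    T2 = (sum(a2 * R[N - k - 2] - R[N - k - 1] for k in range(N - 1)) - 1.0) / D   # (19)
    return T1 * T2
```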

4 Solution for \(P_{0}^{slow}(t,m)\)

Let us note that if we take (see (3))

$$\begin{aligned} {}&P_{n}^{slow}(t,m)={\mathbf P}\left\{ \bigl (X(t)=m)\,\wedge \,\bigl (\sup _{v\in [0,t]}{X(v)}\le N-1\bigr )\,|\,X(0)=n\right\} , \end{aligned}$$

where \(0\le n\le N-1,\) then the following system is satisfied:

$$\begin{aligned} P^{slow}_{0}(t,m)=&~\lambda \int _{0}^{t}\mathrm {e}^{-\lambda x}P_{1}^{slow}(t-x,m)\mathrm {d}x+\delta _{m,0}\mathrm {e}^{-\lambda t},\\ P^{slow}_{n}(t,m)=&~\lambda \int _{0}^{t}\mathrm {e}^{-(\lambda +\mu ^{*})x}P^{slow}_{n+1}(t-x,m)\mathrm {d}x \nonumber \\ {}+~&\mu ^{*}\int _{0}^{t}\mathrm {e}^{-(\lambda +\mu ^{*})x}P^{slow}_{n-1}(t-x,m)\mathrm {d}x+\delta _{m,n}\mathrm {e}^{-(\lambda +\mu ^{*})t}, \end{aligned}$$

where \(1\le n\le N-2,\) and

$$\begin{aligned} {}&P^{slow}_{N-1}(t,m)= \mu ^{*}\int _{0}^{t}\mathrm {e}^{-(\lambda +\mu ^{*})x}P^{slow}_{N-2}(t-x,m)\mathrm {d}x+\delta _{m,N-1}\mathrm {e}^{-(\lambda +\mu ^{*})t}. \end{aligned}$$

After introducing the LTs \(p_i^{slow}(s,m)=\int _0^\infty \mathrm {e}^{-st}P_i^{slow}(t,m) \mathrm {d}t,\ 0\le i\le N-1\), we get

$$\begin{aligned} {}&p^{slow}_{0}(s,m)=\frac{\lambda }{\lambda +s}p^{slow}_{1}(s,m)+\frac{\delta _{m,0}}{\lambda +s},\end{aligned}$$
(20)
$$\begin{aligned} {}&p^{slow}_{n}(s,m)=\frac{\lambda }{\lambda +\mu ^{*}+s}p^{slow}_{n+1}(s,m) +\frac{\mu ^{*}}{\lambda +\mu ^{*}+s}p^{slow}_{n-1}(s,m)+\frac{\delta _{m,n}}{\lambda +\mu ^{*}+s}, \end{aligned}$$
(21)

where \(1\le n\le N-2,\) and

$$\begin{aligned} {}&p^{slow}_{N-1}(s,m)=\frac{\mu ^{*}}{\lambda +\mu ^{*}+s}p^{slow}_{N-2}(s,m)+\frac{\delta _{m,N-1}}{\lambda +\mu ^{*}+s}. \end{aligned}$$
(22)

Comparing (20)–(22) to (8)–(10), we conclude that (compare (17)–(19))

$$\begin{aligned} {}&p^{slow}_{0}(s,m)=T^{slow}_{1}(s)T^{slow}_{3}(s,m), \end{aligned}$$

where

$$\begin{aligned} {} T^{slow}_{3}(s,m){\mathop {=}\limits ^{def}}&\frac{\left( \sum _{k=0}^{N-2}\delta _{m,k}\left( a^*_2(s)R^*_{N-k-2}(s)-R^*_{N-k-1}(s) \right) -\delta _{m,N-1} \right) }{\left( a^*_2(s)R^*_{N-2}(s)-R^*_{N-1}(s) \right) }. \end{aligned}$$
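
A sketch of the corresponding computation is given below (again an illustration only, reusing the R_star helper and assuming \(N\ge 3\)). Note that for \(0\le m\le N-2\) the Kronecker deltas leave a single summand in the numerator of \(T^{slow}_{3}(s,m)\), while for \(m=N-1\) only the term \(-\delta _{m,N-1}\) remains.

```python
# Sketch of p0_slow(s, m) = T1_slow(s) * T3_slow(s, m); assumes R_star is in scope.
def p0_slow(s, m, lam, mu_star, N):
    a2 = mu_star / (lam + mu_star + s)
    R = R_star(s, lam, mu_star, N - 1)
    D = a2 * R[N - 2] - R[N - 1]
    T1 = 1.0 / (s + lam - mu_star * (a2 * R[N - 3] - R[N - 2]) / D)   # (18)
    num = -1.0 if m == N - 1 else a2 * R[N - m - 2] - R[N - m - 1]    # numerator of T3_slow(s, m)
    return T1 * num / D
```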

5 Main Result

Introducing LTs \(b_i(s)=\int _0^\infty \mathrm {e}^{-st}B_i(t) \mathrm {d}t,\ 0\le i\le N-1\) into Eqs. (2), (4) and (5), we get

$$\begin{aligned} {}&b_{0}(s)=\sum _{k=0}^{N-1}b_{k}(s)\widehat{p}_{k}(s)+d(s),\end{aligned}$$
(23)
$$\begin{aligned} {}&b_{n}(s)=\frac{\lambda }{\lambda +\mu +s}b_{n+1}(s) +\frac{\mu }{\lambda +\mu +s}b_{n-1}(s)+\frac{1}{\lambda +\mu +s}, \end{aligned}$$
(24)

for \(1\le n\le N-2,\) and

$$\begin{aligned} {}&b_{N-1}(s)=\frac{\mu }{\lambda +\mu +s}b_{N-2}(s)+\frac{1}{\lambda +\mu +s}, \end{aligned}$$
(25)

where we denote

$$\begin{aligned} {}&\widehat{p}_{k}(s){\mathop {=}\limits ^{def}}\int _{0}^{\infty }\mathrm {e}^{-st}P^{slow}_{0}(t,k)\mathrm {d}G(t),\\ {}&d(s){\mathop {=}\limits ^{def}} \int _{0}^{\infty }\mathrm {e}^{-st}B^{slow}_{0}(t)[1-G(t)]\mathrm {d}t. \end{aligned}$$
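
The transforms \(\widehat{p}_{k}(s)\) and d(s) involve the time-domain functions \(P^{slow}_{0}(t,k)\) and \(B^{slow}_{0}(t)\), so in general they must be computed numerically. However, when G is a finite mixture of exponentials, as in the example of Sect. 6, both integrals reduce to evaluations of the transforms obtained in Sects. 3 and 4 at shifted arguments, since then \(\mathrm {d}G(t)=\sum _j w_j c_j \mathrm {e}^{-c_j t}\mathrm {d}t\) and \(1-G(t)=\sum _j w_j \mathrm {e}^{-c_j t}\). A hedged sketch of this special case (reusing the b0_slow and p0_slow helpers sketched above; the weights w and rates c are the mixture parameters):

```python
# Sketch for a hyperexponential vacation CDF G(t) = sum_j w_j (1 - exp(-c_j t)):
#   p_hat_k(s) = sum_j w_j c_j p0_slow(s + c_j, k),   d(s) = sum_j w_j b0_slow(s + c_j).
def p_hat(s, k, lam, mu_star, N, w, c):
    return sum(wj * cj * p0_slow(s + cj, k, lam, mu_star, N) for wj, cj in zip(w, c))

def d_transform(s, lam, mu_star, N, w, c):      # corresponds to d(s)
    return sum(wj * b0_slow(s + cj, lam, mu_star, N) for wj, cj in zip(w, c))
```

For the distribution G(t) used in Sect. 6 one would take the weights (1/2, 1/4, 1/4) and the rates (c, 3, 1); for a general G, numerical quadrature combined with an inversion of \(p^{slow}_{0}(\cdot ,k)\) and \(b^{slow}_{0}(\cdot )\) would be needed instead.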

The solution of (24)–(25) is given by (compare (12))

$$\begin{aligned} {}&b_{n}(s)=M(s)R_{n}(s)+\sum _{k=1}^{n}R_{n-k}(s)\phi _{k}(s),\quad 1\le n\le N-1, \end{aligned}$$
(26)

where now the sequence \(\bigl (R_{k}(s)\bigr )\) is recursively defined using

$$\begin{aligned} {}&a_{0}(s)=\frac{\lambda }{\lambda +\mu +s},\quad a_{2}(s)=\frac{\mu }{\lambda +\mu +s},\quad a_{k}(s)=0,\,k\ne 0, 2, \end{aligned}$$

and

$$\begin{aligned} {}&\phi _{n}(s)=-\frac{1}{\lambda +\mu +s}-\delta _{n,1}a_2(s)b_{0}(s). \end{aligned}$$

Substituting (26) into (23), we obtain

$$\begin{aligned} b_{0}(s)&=\sum _{k=1}^{N-1}\widehat{p}_{k}(s)\Bigl [M(s)R_{k}(s)+\sum _{i=1}^{k}R_{k-i}(s)\Bigl (-\frac{1}{\lambda +\mu +s}-\delta _{1,i}a_2(s)b_{0}(s)\Bigr )\Bigr ]\nonumber \\ {}&+\widehat{p}_{0}(s)b_{0}(s)+d(s), \end{aligned}$$
(27)

and hence

$$\begin{aligned} {}&M(s)=A(s)b_{0}(s)+B(s), \end{aligned}$$

where

$$\begin{aligned} {}&A(s){\mathop {=}\limits ^{def}}\frac{1+a_2(s)\sum _{k=1}^{N-1}\widehat{p}_{k}(s)R_{k-1}(s)-\widehat{p}_{0}(s)}{\sum _{k=1}^{N-1}\widehat{p}_{k}(s)R_{k}(s)}, \end{aligned}$$

and

$$\begin{aligned} {}&B(s){\mathop {=}\limits ^{def}}\frac{(\lambda +\mu +s)^{-1}\sum _{k=1}^{N-1}\widehat{p}_{k}(s)\sum _{i=1}^{k}R_{k-i}(s)-d(s)}{\sum _{k=1}^{N-1}\widehat{p}_{k}(s)R_{k}(s)}. \end{aligned}$$

The remaining task is to find a representation for \(b_{0}(s).\) To do this, let us use (26) on the right side of (25) and compare the result with the right side of (26) written for \(n=N-1.\) We obtain:

$$\begin{aligned} {}&b_{0}(s)=\left[ A(s)\left( a_2(s)R_{N-2}(s)-R_{N-1}(s)\right) -a_2(s)\left( a_2(s)R_{N-3}(s)-R_{N-2}(s)\right) \right] ^{-1}\\ {}&\times \left[ \frac{\left( \sum _{k=1}^{N-2}(a_2(s)R_{N-k-2}(s)-R_{N-k-1}(s))-1 \right) }{(s+\lambda +\mu )}-B(s)(a_2(s)R_{N-2}(s)-R_{N-1}(s)) \right] . \nonumber \end{aligned}$$
(28)
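
Collecting the pieces, representation (28) can be evaluated as sketched below. The sequence \(\bigl (R_{k}(s)\bigr )\) is obtained from the same recursion as (13) with \(\mu \) in place of \(\mu ^*\); the sketch reuses the R_star, p_hat and d_transform helpers introduced above and is meant only as an illustration of the algebra, not a definitive implementation.

```python
# Sketch of (28): b_0(s) assembled from A(s), B(s) and the sequence (R_k(s)).
# Assumes R_star, p_hat and d_transform from the sketches above are in scope.
def b0(s, lam, mu, mu_star, N, w, c):
    a2 = mu / (lam + mu + s)                              # a_2(s)
    R = R_star(s, lam, mu, N - 1)                         # R_0(s), ..., R_{N-1}(s) (recursion with mu)
    ph = [p_hat(s, k, lam, mu_star, N, w, c) for k in range(N)]
    den = sum(ph[k] * R[k] for k in range(1, N))          # denominator shared by A(s) and B(s)
    A = (1.0 + a2 * sum(ph[k] * R[k - 1] for k in range(1, N)) - ph[0]) / den
    B = (sum(ph[k] * sum(R[k - i] for i in range(1, k + 1)) for k in range(1, N)) / (lam + mu + s)
         - d_transform(s, lam, mu_star, N, w, c)) / den
    lhs = A * (a2 * R[N - 2] - R[N - 1]) - a2 * (a2 * R[N - 3] - R[N - 2])
    rhs = ((sum(a2 * R[N - k - 2] - R[N - k - 1] for k in range(1, N - 1)) - 1.0) / (s + lam + mu)
           - B * (a2 * R[N - 2] - R[N - 1]))
    return rhs / lhs
```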

6 Numerical Examples

In this section we present numerical examples for the M / M / 1 / 10 system with WV. The length of the WV period follows the distribution \(G(t)={1\over 2}(1-\exp {(-ct)})+{1\over 4}(2-\exp {(-3t)}-\exp {(-t)}),\) and the rates of the arrival process, of the service in the normal mode and of the service in the WV mode are \(\lambda , \mu \) and \(\mu ^*\), respectively. In the following examples we present the values of \(B_0(5)\) as a function of \(\mu \) and of the mean length of the WV period \(\mathbf {E}(T_{WV}).\)

Example 1

Let us consider the M / M / 1 / 10 model with WV with \(\lambda =1,3,5\), \(\mu ^*=1\) and \(c=0.1\) (\(\mathbf {E}(T_{WV})=0.46\)). We compute \(B_0(5)\) for ten different values of \(\mu .\)

In Figs. 1, 2 and 3 the behaviour of \(B_0(5)\) as a function of \(\mu \) is presented for \(\lambda =1,3,5\). Figure 1 shows the numerical results for \(\lambda =1\). We can see that as the service rate rises, the probability that the time to the first buffer overflow is greater than 5 decreases. Note that in this scenario the ratio between \(\lambda \) and \(\mu \) never exceeds 1, so the system copes with the arriving jobs and often takes vacations. In the WV mode the server operates with the rate \(\mu ^*=1\), which is equal to \(\lambda ,\) thus the more time the system spends in the WV mode, the higher the probability of a buffer overflow before time t.

As shown in Figs. 2 and 3, \(B_0(5)\) increases as long as the ratio \(\rho =\lambda /\mu \) is greater than 1. Afterwards, as described above, the server takes vacations more often, and hence the probability of no buffer saturation before \(t=5\) decreases.

Fig. 1. The values of \(B_0(5)\) as a function of \(\mu \) for \(\lambda =1\)

Fig. 2. The values of \(B_0(5)\) as a function of \(\mu \) for \(\lambda =3\)

Fig. 3. The values of \(B_0(5)\) as a function of \(\mu \) for \(\lambda =5\)

Fig. 4. The values of \(B_0(5)\) as a function of \(\mathbf {E}(T_{WV})\) for \(\lambda =2, \mu =5\) and \(\mu ^*=1.\)

Example 2

We consider the M / M / 1 / 10 model with WV with \(\lambda =2,5,7, \mu =5\) and \(\mu ^*=1\). We compute \(B_0(5)\) for different values of \(\mathbf {E}(T_{WV}).\)

Figures 4, 5 and 6 present the values of \(B_0(5)\) as a function of \(\mathbf {E}(T_{WV})\) for three different values of \(\lambda .\)

In the first scenario (Fig. 4), when \(\lambda =2,\) the probability that there will be no buffer overflow before \(t=5\) increases together with the mean length of the vacation period. The ratio \(\rho =0.4\) and \(\rho ^*=\lambda /\mu ^*=1,\) therefore the load of the server never exceeds 1.

As one can note, for \(\lambda =5\) (Fig. 5) the probability \(B_0(5)\) increases for short WV periods and begins to decrease at about \(\mathbf {E}(T_{WV})\approx 1.\) The longer the WV period lasts, the longer the system operates under the load \(\rho ^*=5\) instead of \(\rho =1.\) A similar observation can be made in the case of \(\lambda =7\) (Fig. 6).

Fig. 5. The values of \(B_0(5)\) as a function of \(\mathbf {E}(T_{WV})\) for \(\lambda =5, \mu =5\) and \(\mu ^*=1.\)

Fig. 6. The values of \(B_0(5)\) as a function of \(\mathbf {E}(T_{WV})\) for \(\lambda =10, \mu =5\) and \(\mu ^*=1.\)

7 Conclusions

In the paper, the \(M/M/1/N\) queue with working vacations was investigated using the embedded Markov chain technique, the formula of total probability and linear algebra. An explicit representation of the LT of the transient conditional tail CDF of the time to the first buffer overflow was found. To obtain the CDF itself, one of the many approaches to numerical LT inversion may be applied, e.g. the Dubner-Abate or the Gaver-Stehfest method (see [3]). To give an insight into the considered model, the obtained result was illustrated with numerical examples.
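
For instance, the Gaver-Stehfest method approximates \(f(t)\) from its transform \(F(s)\) by \(f(t)\approx \frac{\ln 2}{t}\sum _{i=1}^{n}V_i F\bigl (\frac{i\ln 2}{t}\bigr )\) with explicitly known coefficients \(V_i\). A minimal, self-contained Python sketch is given below; for verification it is applied here to the known pair \(F(s)=1/(s+1)\), \(f(t)=\mathrm {e}^{-t}\), and in the same way it can be applied to the transform \(b_0(s)\) obtained in Sect. 5 (evaluated, e.g., via the sketches of the previous sections).

```python
from math import factorial, log

def gaver_stehfest(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s); n must be even."""
    half = n // 2
    result = 0.0
    for i in range(1, n + 1):
        V = sum(k ** half * factorial(2 * k)
                / (factorial(half - k) * factorial(k) * factorial(k - 1)
                   * factorial(i - k) * factorial(2 * k - i))
                for k in range((i + 1) // 2, min(i, half) + 1))
        result += (-1) ** (half + i) * V * F(i * log(2.0) / t)
    return result * log(2.0) / t

# Check on a known transform pair: L{exp(-t)}(s) = 1/(s+1); the value below should be close to exp(-5).
print(gaver_stehfest(lambda s: 1.0 / (s + 1.0), 5.0))
```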