1 Introduction

The BMAP/G/1 queue has been a field of intensive research for several years. It has been analyzed by Ramaswami [1], Lucantoni [2], Neuts [3], Takine and Takahashi [4], Dudin et al. [5], and many others. The BMAP, a tractable special class of Markov renewal processes, is a rich class of point processes that includes many well-known processes such as the Poisson process, PH-renewal processes, and the Markov-modulated Poisson process. One of the most significant features of the BMAP is its underlying Markovian structure, which fits ideally in the context of matrix analytic solutions to stochastic models. Matrix analytic methods were first introduced and studied by Neuts [6]. Poisson processes are the simplest and most tractable processes used extensively in stochastic modeling. The idea of the BMAP is to generalize the Poisson process significantly while retaining its tractability for modeling purposes. Furthermore, in many practical applications, notably in communications, production, and manufacturing engineering, the arrivals do not usually form a renewal process. The BMAP is therefore a convenient tool for modeling both renewal and non-renewal arrivals.

Batch service queues have been discussed extensively over the last few decades, as they have proven very useful in various fields such as production, transportation, and traffic processes (see, e.g., Chaudhry and Templeton [7], Chakravarthy [8], Dudin and Chakravarthy [9], Powell and Humblet [10], Banerjee et al. [11], Maity and Gupta [12], and the references therein). Similarly, batch arrival queues have also been investigated extensively in the past, and a huge amount of literature is available on this topic (see, e.g., Choudhury and Madan [13], Jain and Upadhyaya [14], Kumar and Arumuganathan [15], Sikdar et al. [16], and Xu et al. [17]). Several analytical results on batch arrival/service queues can be found in the book by Chaudhry and Templeton [7]. Situations where customers arrive in batches and are also served in batches are referred to as bulk-arrival bulk-service queues, and they find applications in communication systems such as ATM switching systems, circuit-switched TDMA systems, and traffic concentrators. In such systems, messages (batches), which consist of several fixed-length packets (customers), arrive at a switching multiplexer (server) that transmits packets in batches of variable capacity according to some protocol decided at the beginning of the transmission. For example, the protocol may be designed in such a way that, if at the beginning of a transmission its transmission capacity is k (1 ≤ k ≤ B), where B is the maximum transmission capacity of the multiplexer, it will transmit min(k, the whole queue) packets. At present, their utility continues to expand in telecommunication systems where the processor processes packets in batches. Figure 1 illustrates the data transmission in an ATM switching system.

Fig. 1 Framework of ATM switching system

Over the past three decades, queueing systems with server vacations have become a matter of special interest, as they can be used to model the server's unavailability due to various reasons while staying within the framework of traditional models. The modeling and analysis of queueing systems with vacations have been widely studied in many real-life settings such as production/inventory systems, digital communication, and computer networks. Readers can find a thorough review of this literature in Doshi [18] and the monographs of Takagi [19] and Tian and Zhang [20], as well as the references therein. The BMAP/G/1 queue with server vacations has been analyzed by Banik [21, 22], Banik and Samanta [23], Baek et al. [24], Ferrandiz [25], Matendo [26], and Schellhaas [27].

Another aspect frequently encountered in real applications is the limited waiting space available at the server. In this case, one of the main concerns of a system designer is to provide sufficient buffer space so that the loss probability is kept minimal. To this end, it is essential to calculate the loss probability accurately. There has been considerable effort in this direction, and readers are referred to the book by Takagi [28]. The finite buffer BMAP/G/1/N queue with server vacations has been analyzed by Niu et al. [29] and Banik et al. [30]. Niu et al. [29] considered the BMAP/G/1/N queue with single and multiple vacations along with setup and close-down times, whereas Banik et al. [30] studied the BMAP/G/1/N queue with a limited service discipline.

In this paper, we consider a finite buffer BMAP/G^Y/1/N queue where customers are served by a single server in batches of random capacity Y = i (1 ≤ i ≤ B), decided at the beginning of the service, with probability y_i, where B is the maximum serving capacity of the server. The queue has a finite buffer of size N (> B), so at any time at most (N + B) customers can be present in the system. In addition, the server is allowed to take a vacation if it finds an empty queue at a service completion epoch. Using the supplementary variable and embedded Markov chain techniques, we obtain the distributions of the number of customers in the queue at service completion, vacation termination, arbitrary, and arrival epochs. Various performance measures such as the average queue lengths, the average waiting time in the queue, the blocking probability, and the probability that the server is busy are discussed. Finally, some numerical results are presented in the form of tables and graphs for a wide range of model parameters. It may be remarked that our model covers a wide class of queueing models, since we consider correlated arrivals (BMAP), general service times, and batch service with random capacity. In addition, finite buffer queues are especially relevant because most real-life applications have finite spaces. Moreover, the results for finite buffer queues can even be used to analyze infinite buffer queues by taking N sufficiently large.

The model discussed in this paper has potential applications in several of the areas mentioned above. One specific practical application fitting our model is the following: consider a manufacturing system where production orders arrive in batches of random size and form a single queue in order of arrival. Items are manufactured in batches of random size, decided at the beginning of the production process according to the batch service rule discussed above. That is, when the production capacity is i, production begins and takes min(i, the whole queue length) orders at a time. Whenever production ends and no orders are present, the production facility is shut down for a random length of time (vacation), which can be utilized for machine maintenance or other secondary work.

This paper is organized as follows. In Section 2, we give the description of the model. The steady-state queue length distributions at various epochs are analyzed for single- and multiple-vacation policies in Sections 3 and 4, respectively. Some important performance measures have been discussed in Section 5. Section 6 deals with numerical results using the analytical results obtained in previous sections. Section 7 concludes the paper.

2 Model description

We consider a BMAP/G^Y/1/N queueing system with single and multiple vacations wherein customers arrive according to an m-state batch Markovian arrival process (BMAP) with representation {D_k, k ≥ 0} of order m. The BMAP in continuous time is described as follows. Let the underlying Markov chain be irreducible with infinitesimal generator \(\mathbf {D}={\sum }_{k=0}^{\infty }\mathbf {D}_{k}\). At the end of a sojourn time in phase i, which is exponentially distributed with parameter λ_i, a transition occurs to another (or possibly the same) phase, and that transition may or may not correspond to an arrival. With probability p_ij(0), 1 ≤ j ≤ m, j ≠ i, there is a transition to phase j without an arrival. With probability p_ij(k), 1 ≤ j ≤ m, k ≥ 1, there is a transition to phase j with a batch arrival of size k. Therefore, we have

$$\sum\limits_{j=1,j\neq i}^{m}p_{ij}(0)+\sum\limits_{k=1}^{\infty}\sum\limits_{j=1}^{m}p_{ij}(k)=1, \quad 1\leq i\leq m. $$

It is convenient to represent D_k, k ≥ 0, by letting (D_0)_ii = −λ_i, 1 ≤ i ≤ m, (D_0)_ij = λ_i p_ij(0), 1 ≤ i, j ≤ m, j ≠ i, and (D_k)_ij = λ_i p_ij(k), 1 ≤ i, j ≤ m, k ≥ 1. The matrix D_0 has strictly negative diagonal elements, non-negative off-diagonal elements, and row sums less than or equal to zero, and we assume it is nonsingular. Under this assumption, the interarrival times are finite with probability one and the arrival process does not terminate. Thus, D_0 is an m × m matrix governing the phase transitions that correspond to no arrivals, and D_k, k ≥ 1, is an m × m matrix with non-negative elements governing the phase transitions that correspond to a batch arrival of size k. Let K(t) denote the number of arrivals in (0, t] and J(t) the phase of the underlying Markov chain at time t, with state space {i : 1 ≤ i ≤ m}. Then {K(t), J(t)} is the two-dimensional Markov process of the BMAP with state space {(n, i) : n ≥ 0, 1 ≤ i ≤ m}. The infinitesimal generator of the BMAP is given by

$$\mathbf{\mathcal {Q}} = \left( \begin{array}{ccccc} \mathbf{D}_{0} & \mathbf{D}_{1} & \mathbf{D}_{2} & \mathbf{D}_{3} & {\cdots} \\ \mathbf{0} & \mathbf{D}_{0} & \mathbf{D}_{1} & \mathbf{D}_{2} & {\cdots} \\ \mathbf{0} & \mathbf{0} & \mathbf{D}_{0} & \mathbf{D}_{1} & {\cdots} \\ {\vdots} & {\vdots} & {\vdots} & {\vdots} & {\ddots} \end{array} \right). $$

As \(\mathbf {\mathcal {Q}}\) is the infinitesimal generator of the BMAP, we have \({\sum }_{k=0}^{\infty } \mathbf {D}_{k}\mathbf {e}=\mathbf {0}\), where e is a column vector of ones of appropriate dimension. Further, since \(\mathbf { D}={\sum }_{k=0}^{\infty } \mathbf {D}_{k}\) is the infinitesimal generator of the underlying Markov chain {J(t)}, there exists a stationary probability vector \(\overline {\boldsymbol {\pi }}\) such that \(\overline {\boldsymbol {\pi }}\textbf {D} = \textbf {0}, \ \overline {\boldsymbol {\pi }}\textbf {e} = 1\). The average arrival rate λ∗ and the average batch arrival rate λ_g of the stationary BMAP are then given by \(\lambda ^{\ast }=\overline {\boldsymbol {\pi }}{\sum }_{k=1}^{\infty } k \textbf {D}_{k} \textbf {e}\) and \(\lambda _{g}=\overline {\boldsymbol {\pi }}{\sum }_{k=1}^{\infty } \textbf {D}_{k} \textbf {e}\), respectively.
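The stationary vector \(\overline {\boldsymbol {\pi }}\) and the rates λ∗ and λ_g are straightforward to compute numerically. The following is a minimal pure-Python sketch for a small hypothetical 2-phase BMAP with batches of size 1 and 2; all matrix entries are illustrative, not taken from the paper.

```python
# Hedged sketch: stationary phase vector and arrival rates of a small BMAP.
# The matrices D0, D1, D2 below are hypothetical illustrative values.

def solve_linear(A, b):
    """Solve Ax = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Example BMAP with m = 2 phases; batches of size 1 or 2.
D0 = [[-3.0, 1.0], [0.5, -2.0]]
D1 = [[1.0, 0.5], [0.5, 0.5]]
D2 = [[0.25, 0.25], [0.25, 0.25]]
Dk = [D0, D1, D2]
m = 2

# D = sum of D_k must be a generator (zero row sums).
D = [[sum(Dk[k][i][j] for k in range(len(Dk))) for j in range(m)] for i in range(m)]
assert all(abs(sum(row)) < 1e-12 for row in D)

# Stationary vector: pi D = 0, pi e = 1. Solve the transposed balance
# equations with one (redundant) equation replaced by the normalization.
A = [[D[j][i] for j in range(m)] for i in range(m)]  # D transposed
A[-1] = [1.0] * m                                    # normalization row
b = [0.0] * (m - 1) + [1.0]
pi = solve_linear(A, b)

# Average arrival rate lambda* and batch arrival rate lambda_g.
lam_star = sum(pi[i] * k * Dk[k][i][j] for k in (1, 2) for i in range(m) for j in range(m))
lam_g = sum(pi[i] * Dk[k][i][j] for k in (1, 2) for i in range(m) for j in range(m))
print(pi, lam_star, lam_g)
```

Replacing one balance equation with \(\overline {\boldsymbol {\pi }}\textbf {e} = 1\) is the standard device here, since the generator D is singular and the balance equations alone determine \(\overline {\boldsymbol {\pi }}\) only up to a constant.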

Let {P(n, t), n ≥ 0, t ≥ 0} denote the m × m matrix whose (i, j)-th element is the conditional probability defined as

$$\begin{array}{@{}rcl@{}} P_{ij}(n,t)=P\{K(t)=n,J(t)=j|K(0)=0,J(0)=i\},\quad 1\leq i,j \leq m. \end{array} $$

The matrices P(n, t) satisfy the following system of difference-differential equations:

$$\begin{array}{@{}rcl@{}} \frac{d}{dt}\mathbf{P}(0,t)&=&\textbf{P}(0,t)\textbf{D}_{0},\quad t>0, \end{array} $$
(1)
$$\begin{array}{@{}rcl@{}} \frac{d}{dt}\textbf{P}(n,t)&=&\textbf{P}(n,t)\mathbf{ D}_{0}+\sum\limits_{k=0}^{n-1}\textbf{P}(k,t)\textbf{D}_{n-k}, \quad n\geq 1,\quad t>0, \end{array} $$
(2)

with P(0,0) = I_m and P(n,0) = 0, n ≥ 1, where I_m is the identity matrix of order m.
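The system (1)-(2) is lower triangular in n: the derivative of P(n, t) involves only P(k, t) with k ≤ n, so it can be integrated numerically level by level. The sketch below does this with classical RK4 for a hypothetical 2-phase BMAP with batches of size 1 and 2, truncating the count at NMAX arrivals; since each equation involves only lower levels, the truncation omits only the tail probability P(K(t) > NMAX) and does not distort P(n, t) for n ≤ NMAX.

```python
# Hedged sketch: integrating the difference-differential equations (1)-(2)
# for P(n, t) with classical RK4. The BMAP matrices are illustrative.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(A, c):
    return [[c * a for a in row] for row in A]

D0 = [[-3.0, 1.0], [0.5, -2.0]]
Dk = {1: [[1.0, 0.5], [0.5, 0.5]], 2: [[0.25, 0.25], [0.25, 0.25]]}
m, NMAX = 2, 20

def deriv(P):
    # dP(0)/dt = P(0) D0 ;  dP(n)/dt = P(n) D0 + sum_{k=0}^{n-1} P(k) D_{n-k}
    out = []
    for n in range(NMAX + 1):
        d = mat_mul(P[n], D0)
        for k in range(n):
            if n - k in Dk:
                d = mat_add(d, mat_mul(P[k], Dk[n - k]))
        out.append(d)
    return out

# Initial condition: P(0,0) = I_m and P(n,0) = 0 for n >= 1.
P = [[[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]]
P += [[[0.0] * m for _ in range(m)] for _ in range(NMAX)]

dt, steps = 0.001, 500          # integrate up to t = 0.5
for _ in range(steps):
    k1 = deriv(P)
    k2 = deriv([mat_add(P[n], mat_scale(k1[n], dt / 2)) for n in range(NMAX + 1)])
    k3 = deriv([mat_add(P[n], mat_scale(k2[n], dt / 2)) for n in range(NMAX + 1)])
    k4 = deriv([mat_add(P[n], mat_scale(k3[n], dt)) for n in range(NMAX + 1)])
    P = [mat_add(P[n], mat_scale(mat_add(mat_add(k1[n], mat_scale(k2[n], 2)),
                                         mat_add(mat_scale(k3[n], 2), k4[n])), dt / 6))
         for n in range(NMAX + 1)]

# Row sums over all n approximate 1; the truncated tail beyond NMAX is negligible.
total = [sum(P[n][i][j] for n in range(NMAX + 1) for j in range(m)) for i in range(m)]
print(total)
```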

Let N denote the waiting capacity of the system and B the maximum serving capacity of the server, so that no more than N + B customers can be present in the system at any time. We assume that B < N. Since customers arrive in batches of random size and the buffer is finite, we adopt the partial-batch acceptance strategy (PBAS) for arriving batches. Customers accepted under PBAS are served by a single server of random capacity Y with probability mass function P(Y = k) = y_k, k = 1, 2, …, B, probability generating function \(Y(z)={\sum }_{k=1}^{B}y_{k}z^{k}\), and finite mean \(E[Y]={\sum }_{k=1}^{B}ky_{k}\). If the queue length is less than the service capacity Y = k at the beginning of a service, the server does not wait until the number of customers reaches k but takes all customers waiting in the queue into service at that time. That is, at the beginning of a service with capacity k, the server takes min(k, the whole queue length) customers into service. Let S(x) {s(x)} [S∗(𝜃)] be the distribution function (DF) {probability density function (pdf)} [Laplace-Stieltjes transform (LST)] of the service time S of a batch. We assume that the service time distribution of a batch does not depend on the size of the batch; in this connection, see Pradhan et al. [31] and the references therein. Let V(x) {v(x)} [V∗(𝜃)] be the DF {pdf} [LST] of a vacation time V. The mean service and vacation times are E[S] = −S∗(1)(0) and E[V] = −V∗(1)(0), respectively, where f∗(1)(0) denotes the first derivative of f∗(𝜃) at 𝜃 = 0. The traffic intensity is given by ρ = λ∗E[S]/E[Y]. Let us denote by A_n and M_n, n ≥ 0, the m × m matrices defined by

$$\begin{array}{@{}rcl@{}} \textbf{A}_{n}&=&{\int}_{0}^{\infty}\textbf{P}(n,x)dS(x), \quad n\geq 0, \end{array} $$
(3)
$$\begin{array}{@{}rcl@{}} \textbf{M}_{n}&=&{\int}_{0}^{\infty}\textbf{P}(n,x)dV(x), \quad n\geq 0. \end{array} $$
(4)

The (i, j)-th element of A_n (M_n) represents the conditional probability that n customers arrive at the system during the service (vacation) time of a batch and the underlying Markov chain of the BMAP is in phase j at the end of the service (vacation) time, given that it was in phase i at the beginning of the service (vacation). Further, let us denote \(\widehat {\mathbf { A}}_{n}={\sum }_{k=n}^{\infty } {\textbf {A}}_{k}\), 1 ≤ n ≤ N, and \(\widehat {\textbf {M}}_{N}={\sum }_{k=N}^{\infty } \mathbf { M}_{k}\).

Note that the derivations of A_n, \(\widehat {\mathbf { A}}_{n}\), M_n and \(\widehat {\textbf {M}}_{N}\) when S(x) and V(x) follow phase type (PH) and deterministic distributions are given in Appendices A and B.
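As a quick numeric illustration of the service-capacity quantities defined above, the sketch below evaluates Y(z), E[Y], and ρ = λ∗E[S]/E[Y] for a hypothetical capacity law with B = 3; the values of λ∗ and E[S] are assumed, not taken from the paper.

```python
# Hedged numeric sketch of the capacity law Y, its pgf Y(z), mean E[Y], and
# the traffic intensity rho = lambda* E[S] / E[Y]. All numbers hypothetical.

B = 3
y = {1: 0.2, 2: 0.5, 3: 0.3}                    # P(Y = k); must sum to 1

def Y(z):
    """Probability generating function of the serving capacity."""
    return sum(p * z**k for k, p in y.items())

EY = sum(k * p for k, p in y.items())           # E[Y] = Y'(1)
lam_star, ES = 2.2, 0.8                         # assumed lambda* and E[S]
rho = lam_star * ES / EY
print(Y(1.0), EY, round(rho, 4))
```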

3 Single vacation

If the server finds no customers in the queue at the service completion of a batch, it takes a vacation of random length V. At the end of the vacation, if one or more customers are waiting in the queue, the server begins to serve them according to the batch service rule. Otherwise, if the server finds the system empty at the end of the vacation, it stays in the system (a period called the dormant period) until at least one customer arrives. This policy is known as the single vacation (SV) policy.

3.1 Queue length distribution at service completion and vacation termination epochs

Let Ψ⁺(n) [Φ⁺(n)], 0 ≤ n ≤ N, be the 1 × m vector whose i-th component \({\Psi }^{+}_{i}(n)\) \([{\Phi }^{+}_{i}(n)]\) is the probability that there are n customers in the queue at a service completion [vacation termination] epoch and the batch arrival process is in phase i. Consider a Markov chain with state space ▽ = {(n, j, 2) ∪ (n, j, 1) : 0 ≤ n ≤ N; 1 ≤ j ≤ m}, where the triple (n, j, 2) refers to a service period, with 2 representing the busy state, n the number of customers in the queue at a service completion epoch, and j the phase of the batch arrival process; the triple (n, j, 1) refers to a vacation period, with 1 representing the vacation state, n the number of customers in the queue at a vacation termination epoch, and j the phase of the batch arrival process. The corresponding transition probability matrix (TPM) \(\mathcal {P}\), with four block matrices, is given by

$$\begin{array}{@{}rcl@{}} \mathcal{ P}=\left[\begin{array}{cc} \boldsymbol{\Delta}_{(N+1)m\times (N+1)m}&\quad\boldsymbol{\Lambda}_{(N+1)m\times (N+1)m}\\ \boldsymbol{\Xi}_{(N+1)m\times (N+1)m}&\quad \boldsymbol{\Omega}_{(N+1)m\times (N+1)m}\end{array}\right], \end{array} $$

where Δ describes the transitions among service completion epochs, Λ gives the transitions from a service completion epoch to the next vacation termination epoch, Ξ refers to the transitions from a vacation termination epoch to the next service completion epoch, and Ω describes the transitions among vacation termination epochs. Then \(\mathcal { P}\) can be written as

$$\begin{array}{@{}rcl@{}} \mathcal{P}=\left[\begin{array}{ccccccccccccc}\textbf{0} & \textbf{0} & {\cdots} & \textbf{0} & \textbf{0} & {\cdots} & \textbf{0} & \textbf{0} & \textbf{M}_{0} & \textbf{M}_{1} & {\cdots} & \textbf{M}_{N-1} &\widehat{\textbf{M}}_{N}\\[-2pt] {\textbf{L}}_{1,0} & {\textbf{L}}_{1,1} & {\cdots} & {\textbf{L}}_{1,N-B} & {\textbf{L}}_{1,N-B+1} & {\cdots} & {\mathbf{ L}}_{1,N-1} & {\textbf{L}}_{1,N} &\textbf{0} & \textbf{0} & {\cdots} & \textbf{0} &\textbf{0}\\[-2pt] {\mathbf{ L}}_{2,0} & {\textbf{L}}_{2,1} & {\cdots} & {\textbf{L}}_{2,N-B} & {\mathbf{ L}}_{2,N-B+1} & {\cdots} & {\textbf{L}}_{2,N-1} & {\textbf{L}}_{2,N} &\textbf{0} & \textbf{0} & {\cdots} & \textbf{0} &\textbf{0}\\[-2pt] {\vdots} & {\vdots} & & {\vdots} & {\vdots} & & {\vdots} & {\vdots} &{\vdots} & {\vdots} & & {\vdots} &\vdots\\[-2pt] {\mathbf{ L}}_{B,0} & {\textbf{L}}_{B,1} & {\cdots} & {\textbf{L}}_{B,N-B} & {\mathbf{ L}}_{B,N-B+1} & {\cdots} & {\textbf{L}}_{B,N-1} & {\textbf{L}}_{B,N} &\textbf{0} & \textbf{0} & {\cdots} & \textbf{0} &\textbf{0}\\[-2pt] \textbf{0} & {\textbf{L}}_{B+1,1} & {\cdots} & {\mathbf{ L}}_{B+1,N-B} & {\textbf{L}}_{B+1,N-B+1} & {\cdots} & {\mathbf{ L}}_{B+1,N-1} & {\textbf{L}}_{B+1,N} &\textbf{0} & \textbf{0} & {\cdots} & \textbf{0} &\textbf{0}\\[-2pt] \textbf{0} & \textbf{0} & {\cdots} & {\textbf{L}}_{B+2,N-B} & {\textbf{L}}_{B+2,N-B+1} & {\cdots} & {\textbf{L}}_{B+2,N-1} & {\textbf{L}}_{B+2,N} &\textbf{0} & \textbf{0} & \cdots & \textbf{0} &\textbf{0}\\[-2pt] {\vdots} & {\vdots} & & {\vdots} & {\vdots} & & {\vdots} & {\vdots} &{\vdots} & {\vdots} & & {\vdots} &\vdots\\[-2pt] \textbf{0} & \textbf{0} & {\cdots} & {\textbf{L}}_{N,N-B} & {\textbf{L}}_{N,N-B+1} & {\cdots} & {\mathbf{ L}}_{N,N-1} & {\textbf{L}}_{N,N} &\textbf{0} & \textbf{0} & {\cdots} & \textbf{0} &\textbf{0}\\[-2pt] \mathbf{ Q}_{0} & \textbf{Q}_{1} & {\cdots} & \textbf{Q}_{N-B} & \textbf{Q}_{N-B+1} & {\cdots} & \textbf{Q}_{N-1} & \textbf{Q}_{N} &\textbf{0} & \textbf{0} & {\cdots} & 
\textbf{0} &\textbf{0}\\[-2pt] {\textbf{L}}_{1,0} & {\textbf{L}}_{1,1} & {\cdots} & {\mathbf{ L}}_{1,N-B} & {\textbf{L}}_{1,N-B+1} & {\cdots} & {\textbf{L}}_{1,N-1} & {\textbf{L}}_{1,N} &\textbf{0} & \textbf{0} & {\cdots} & \textbf{0} &\textbf{0}\\[-2pt] {\textbf{L}}_{2,0} & {\textbf{L}}_{2,1} & {\cdots} & {\textbf{L}}_{2,N-B} & {\textbf{L}}_{2,N-B+1} & {\cdots} & {\textbf{L}}_{2,N-1} & {\textbf{L}}_{2,N} &\textbf{0} & \textbf{0} & \cdots & \textbf{0} &\textbf{0}\\[-2pt] {\vdots} & {\vdots} & & {\vdots} & {\vdots} & & {\vdots} & {\vdots} &{\vdots} & {\vdots} & & {\vdots} &\vdots\\[-2pt] {\textbf{L}}_{B,0} & {\textbf{L}}_{B,1} & {\cdots} & {\textbf{L}}_{B,N-B} & {\textbf{L}}_{B,N-B+1} & {\cdots} & {\textbf{L}}_{B,N-1} & {\textbf{L}}_{B,N} &\textbf{0} & \textbf{0} & \cdots & \textbf{0} &\textbf{0}\\[-2pt] \textbf{0} & {\textbf{L}}_{B+1,1} & {\cdots} & {\textbf{L}}_{B+1,N-B} & {\textbf{L}}_{B+1,N-B+1} & {\cdots} & {\textbf{L}}_{B+1,N-1} & {\mathbf{ L}}_{B+1,N} &\textbf{0} & \textbf{0} & {\cdots} & \textbf{0} &\textbf{0}\\[-2pt] \textbf{0} & \textbf{0} & {\cdots} & {\mathbf{ L}}_{B+2,N-B} & {\textbf{L}}_{B+2,N-B+1} & {\cdots} & {\mathbf{ L}}_{B+2,N-1} & {\textbf{L}}_{B+2,N} &\textbf{0} & \textbf{0} & {\cdots} & \textbf{0} &\textbf{0}\\[-2pt] {\vdots} & {\vdots} & & {\vdots} & {\vdots} & & {\vdots} & {\vdots} &\vdots & {\vdots} & & {\vdots} &\vdots\\[-2pt] \textbf{0} & \textbf{0} & {\cdots} & {\mathbf{ L}}_{N,N-B} & {\textbf{L}}_{N,N-B+1} & {\cdots} & {\textbf{L}}_{N,N-1} & {\textbf{L}}_{N,N} &\textbf{0} & \textbf{0} & {\cdots} & \textbf{0} &\textbf{0}\end{array}\right], \end{array} $$

where

$$\begin{array}{@{}rcl@{}} {\textbf{L}}_{i,j}&=&\left\{ \begin{array}{ll} \sum\limits_{k=1}^{B}y_{k}\textbf{A}_{j}, & i=1; 0\leq j\leq N-1, \\[-2pt] \sum\limits_{k=1}^{B}y_{k}\widehat{\textbf{A}}_{N}, &i=1; j=N, \\[-2pt] \sum\limits_{k=i}^{B}y_{k}\textbf{A}_{j}+\sum\limits_{k=max(1,i-j)}^{i-1}y_{k}\textbf{A}_{j-i+k}, &2\leq i\leq B; 0\leq j\leq N-1, \\[-2pt] \sum\limits_{k=i}^{B}y_{k}\widehat{\textbf{A}}_{N}+\sum\limits_{k=max(1,i-N)}^{i-1}y_{k}\widehat{\textbf{A}}_{N-i+k}, &2\leq i\leq B; j=N, \\[-2pt] \sum\limits_{k=max(1,i-j)}^{B}y_{k}\textbf{A}_{j-i+k}, &B\,+\,1\leq i\leq N; i\,-\,B\leq j\leq N\,-\,1, \\[-2pt] \sum\limits_{k=1}^{B}y_{k}\widehat{\textbf{A}}_{N-i+k}, &B+1\leq i\leq N; j=N, \end{array} \right. \\ \textbf{Q}_{j}&=&\left\{ \begin{array}{ll} \sum\limits_{i=1}^{B}\overline{\textbf{D}}_{i}\sum\limits_{k=i}^{B}y_{k}\textbf{A}_{0}, & j=0, \\[-2pt] \sum\limits_{i=1}^{B}\overline{\textbf{D}}_{i}\sum\limits_{k=i}^{B}y_{k}\mathbf{ A}_{j} +\sum\limits_{i=2}^{B}\overline{\mathbf{ D}}_{i}\sum\limits_{k=max(1,i-j)}^{i-1}y_{k}\textbf{A}_{j-i+k}\\ +\sum\limits_{i=B+1}^{min(B+j,N)}\overline{\textbf{D}}_{i}\sum\limits_{k=max(1,i-j)}^{B}y_{k}\textbf{A}_{j-i+k}, &1\leq j\leq N-1, \\ \sum\limits_{i=1}^{B}\overline{\mathbf{ D}}_{i}\sum\limits_{k=i}^{B}y_{k}\widehat{\textbf{A}}_{N} +\sum\limits_{i=2}^{B}\overline{\mathbf{ D}}_{i}\sum\limits_{k=max(1,i-N)}^{i-1}y_{k}\widehat{\textbf{A}}_{N-i+k}\\ +\sum\limits_{i=B+1}^{N}\overline{\mathbf{ D}}_{i}\sum\limits_{k=max(1,i-N)}^{B}y_{k}\widehat{\textbf{A}}_{N-i+k}, &j=N, \end{array} \right. \end{array} $$

with \(\overline {\textbf {D}}_{k}=(-\textbf {D}_{0})^{-1}\textbf {D}_{k},~ 1 \leq k \leq N-1\) and \(\overline {\textbf {D}}_{N}=(-\mathbf { D}_{0})^{-1}\widehat {\textbf {D}}_{N}\), where \(\widehat {\mathbf { D}}_{k}={\sum }_{n=k}^{\infty } \textbf {D}_{n}\), k ≥ 1. The (i, j)-th element of the matrix \(\overline {\textbf {D}}_{k}\) is the conditional probability that a dormant period ends with the arrival of a batch of size k and the arrival process in phase j, given that the dormant period began with the arrival process in phase i.
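The matrices \(\overline {\textbf {D}}_{k}\) can be computed directly from their definition. A hedged sketch for a hypothetical 2-phase BMAP with batches of size 1 and 2 only; since a dormant period ends with some batch with probability one, the row sums of \(\overline {\textbf {D}}_{1}+\overline {\textbf {D}}_{2}\) must equal 1, which the sketch verifies.

```python
# Hedged sketch: dormant-period matrices Dbar_k = (-D0)^{-1} D_k for a
# hypothetical 2-phase BMAP. Row sums of Dbar_1 + Dbar_2 equal 1, since a
# dormant period ends with the arrival of some batch with probability one.

D0 = [[-3.0, 1.0], [0.5, -2.0]]
D1 = [[1.0, 0.5], [0.5, 0.5]]
D2 = [[0.25, 0.25], [0.25, 0.25]]

def inv2(A):
    """Inverse of a 2x2 matrix by the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

negD0inv = inv2([[-x for x in row] for row in D0])  # (-D0)^{-1}
Dbar1 = mul(negD0inv, D1)
Dbar2 = mul(negD0inv, D2)
rowsums = [sum(Dbar1[i]) + sum(Dbar2[i]) for i in range(2)]
print(rowsums)
```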

The probability vectors Ψ +(n) and Φ +(n), 0 ≤ nN, of the number of customers in the queue at service completion and vacation termination epochs can be obtained by solving the system of equations \([\boldsymbol {\Psi }^{+}\ \boldsymbol {\Phi }^{+}]\mathcal {P}=[\boldsymbol {\Psi }^{+}\ \boldsymbol {\Phi }^{+}]\), and [Ψ + Φ +]e = 1 using the GTH (Grassmann et al. [32]) algorithm given in Latouche and Ramaswami [33, p. 123], where Ψ + = [Ψ +(0), Ψ +(1),…, Ψ +(N)] and Φ + = [Φ +(0), Φ +(1),…, Φ +(N)].
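The GTH reduction referred to above solves the stationary equations of a finite stochastic matrix without subtractions, which makes it numerically stable. Below is a minimal sketch of the algorithm applied to a small hypothetical 3-state stochastic matrix standing in for the block TPM \(\mathcal {P}\); in the actual model the input would be the 2(N + 1)m × 2(N + 1)m matrix assembled from the blocks L_{i,j}, Q_j, M_n, and \(\widehat {\textbf {M}}_{N}\).

```python
# Hedged sketch of the GTH state-reduction algorithm for the stationary
# vector of a finite stochastic matrix. The 3x3 chain is a hypothetical
# stand-in for the block matrix P of the embedded Markov chain.

def gth(P):
    """Return the stationary distribution of stochastic matrix P (GTH)."""
    P = [row[:] for row in P]
    N = len(P)
    for n in range(N - 1, 0, -1):           # eliminate states N-1, ..., 1
        s = sum(P[n][j] for j in range(n))  # no subtraction: stable pivot
        for i in range(n):
            P[i][n] /= s
        for i in range(n):
            for j in range(n):
                P[i][j] += P[i][n] * P[n][j]
    x = [1.0] + [0.0] * (N - 1)
    for n in range(1, N):                   # back-substitution
        x[n] = P[0][n] + sum(x[k] * P[k][n] for k in range(1, n))
    tot = sum(x)
    return [v / tot for v in x]

P = [[0.2, 0.5, 0.3],
     [0.4, 0.4, 0.2],
     [0.1, 0.6, 0.3]]
pi = gth(P)
print([round(v, 6) for v in pi])
```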

3.2 Queue length distribution at arbitrary epoch

We are now in a position to obtain the queue length distribution at an arbitrary epoch. To this end, we develop relations among the distributions of the number of customers in the queue at service completion, vacation termination, and arbitrary epochs. The state of the system at time t is described by the following random variables:

  • K_q(t) = the number of customers present in the queue, excluding the batch in service,

  • \(\widetilde {S}(t)\) = the remaining service time of the batch in service,

  • \(\widetilde {V}(t)\) = the remaining vacation time of the server,

  • \(\widetilde {J}(t)\) = the state of the underlying Markov chain of BMAP,

  • ξ(t)= the state of the server at time t, that is, \(\xi (t) = \left \{ \begin {array}{l} 2, \ \text {if the server is busy},\\ 1, \ \text {if the server is on vacation}, \\ 0, \ \text {if the server is in dormancy}. \end {array} \right .\)

Let us define their joint probabilities, for 1 ≤ im, as

$$\begin{array}{@{}rcl@{}} {\Psi}_{i}(n,x;t)\ dx & = & P(K_{q}(t)=n, \ J(t)=i,\ x<\widetilde{S}(t) \leq x+dx ,\ \xi(t)=2), ~ 0 \leq n \leq N, ~ x \geq 0, \\ {\Phi}_{i}(n,x;t)\ dx & = & P(K_{q}(t)=n, \ J(t)=i,\ x<\widetilde{V}(t)\leq x+dx ,\ \xi(t)=1), ~ 0 \leq n \leq N, ~ x \geq 0, \\ \nu_{i}(0;t) & =& P(K_{q}(t)=0, \ J(t)=i,\ \xi(t)=0). \end{array} $$

Further, in the steady state, let Ψ(n, x), Φ(n, x), and ν(0) be the row vectors of order m whose i-th components are Ψ_i(n, x), Φ_i(n, x), and ν_i(0), respectively. Relating the states of the system at two consecutive time epochs t and (t + dt) and using probabilistic arguments, we get a set of partial differential equations for each phase i (1 ≤ i ≤ m). Assuming that the steady state exists and using matrix and vector notation, we obtain

$$\begin{array}{@{}rcl@{}} -\frac{d}{dx}\boldsymbol{\Psi}(0,x) & = & \boldsymbol{\Psi}(0,x)\textbf{D}_{0}+ s(x) \sum\limits_{k=1}^{B} \left[\boldsymbol{\Psi}(k,0)+\boldsymbol{\Phi}(k,0)\right]\sum\limits_{l=k}^{B} y_{l} + s(x) \ \boldsymbol{\nu}(0)\sum\limits_{k=1}^{B} \textbf{D}_{k} \sum\limits_{l=k}^{B} y_{l}, \end{array} $$
(5)
$$\begin{array}{@{}rcl@{}} -\frac{d}{dx}\boldsymbol{\Psi}(n,x) & = & \sum\limits_{k=0}^{n} \boldsymbol{\Psi} (k,x)\textbf{D}_{n-k}+ s(x) \sum\limits_{k=1}^{B} \left[\boldsymbol{\Psi}(n+k,0)+\boldsymbol{\Phi}(n+k,0)\right]y_{k}\\ && + s(x) \ \boldsymbol{\nu}(0)\sum\limits_{k=1}^{B} \textbf{D}_{n+k}y_{k},~ 1 \leq n \leq N-B-1, \end{array} $$
(6)
$$\begin{array}{@{}rcl@{}} -\frac{d}{dx}\boldsymbol{\Psi}(n,x) & = & \sum\limits_{k=0}^{n} \boldsymbol{\Psi} (k,x)\textbf{D}_{n-k}+ s(x) \sum\limits_{k=1}^{N-n} \left[\boldsymbol{\Psi}(n+k,0)+\boldsymbol{\Phi}(n+k,0)\right]y_{k}\\ && + s(x) \ \boldsymbol{\nu}(0)\left( \sum\limits_{k=1}^{N-1-n} \textbf{D}_{n+k}y_{k}+\widehat{\textbf{D}}_{N}\ y_{N-n}\right),~ N-B \leq n \leq N-1, \end{array} $$
(7)
$$\begin{array}{@{}rcl@{}} -\frac{d}{dx}\boldsymbol{\Psi}(N,x) & = & \sum\limits_{k=0}^{N-1} \boldsymbol{\Psi} (k,x)\widehat{\textbf{D}}_{N-k}+\boldsymbol{\Psi} (N,x)\textbf{D}, \end{array} $$
(8)
$$\begin{array}{@{}rcl@{}} -\frac{d}{dx}\boldsymbol{\Phi}(0,x) & = & \boldsymbol{\Phi}(0,x)\textbf{D}_{0} +v(x)\boldsymbol{\Psi}(0,0), \end{array} $$
(9)
$$\begin{array}{@{}rcl@{}} -\frac{d}{dx}\boldsymbol{\Phi}(n,x) & = & \sum\limits_{k=0}^{n} \boldsymbol{\Phi}(k,x)\textbf{D}_{n-k},~ 1 \leq n \leq N-1, \end{array} $$
(10)
$$\begin{array}{@{}rcl@{}} -\frac{d}{dx}\boldsymbol{\Phi}(N,x) & = & \sum\limits_{k=0}^{N-1} \boldsymbol{\Phi} (k,x)\widehat{\textbf{D}}_{N-k}+\boldsymbol{\Phi} (N,x)\textbf{D}, \end{array} $$
(11)
$$\begin{array}{@{}rcl@{}} \textbf{0} & = & \boldsymbol{\nu}(0)\textbf{D}_{0}+ \boldsymbol{\Phi} (0,0). \end{array} $$
(12)

Let us define the Laplace-Stieltjes transform of Ψ(n, x) and Φ(n, x) as

$$\begin{array}{@{}rcl@{}} \boldsymbol{\Psi}^{\ast}(n,\theta) = {\int}_{0}^{\infty} e^{-\theta x} \boldsymbol{\Psi} (n,x) \ dx ~ ~ \text{and} ~ ~ \boldsymbol{\Phi}^{\ast}(n,\theta) = {\int}_{0}^{\infty} e^{-\theta x} \boldsymbol{\Phi} (n,x) \ dx, ~ 0 \leq n \leq N, ~ Re\ (\theta) \geq 0, \end{array} $$

so that

$$\begin{array}{@{}rcl@{}} \boldsymbol{\Psi}(n)\equiv\boldsymbol{\Psi}^{\ast}(n,0)= {\int}_{0}^{\infty} \boldsymbol{\Psi} (n,x) \ dx ~ ~ \text{and} ~ ~ \boldsymbol{\Phi}(n)\equiv\boldsymbol{\Phi}^{\ast}(n,0)= {\int}_{0}^{\infty} \boldsymbol{\Phi} (n,x) \ dx, ~0 \leq n \leq N. \end{array} $$

Now, multiplying Eqs. 5 to 11 by e^{−𝜃x} and integrating with respect to x from 0 to ∞, we get

$$\begin{array}{@{}rcl@{}} -\theta \boldsymbol{\Psi}^{\ast}(0,\theta) +\boldsymbol{\Psi}(0,0)& = & \boldsymbol{\Psi}^{\ast}(0,\theta)\textbf{D}_{0}+ S^{\ast}(\theta) \sum\limits_{k=1}^{B} [\boldsymbol{\Psi}(k,0)+\boldsymbol{\Phi}(k,0)]\sum\limits_{l=k}^{B} y_{l} \\ &&+ S^{\ast}(\theta) \ \boldsymbol{\nu}(0)\sum\limits_{k=1}^{B} \textbf{D}_{k} \sum\limits_{l=k}^{B} y_{l}, \end{array} $$
(13)
$$\begin{array}{@{}rcl@{}} -\theta \boldsymbol{\Psi}^{\ast}(n,\theta)+\boldsymbol{\Psi}(n,0) & = & \sum\limits_{k=0}^{n} \boldsymbol{\Psi}^{\ast} (k,\theta)\textbf{D}_{n-k}+ S^{\ast}(\theta) \sum\limits_{k=1}^{B} [\boldsymbol{\Psi}(n+k,0)+\boldsymbol{\Phi}(n+k,0)]y_{k}\\ && + S^{\ast}(\theta) \ \boldsymbol{\nu}(0)\sum\limits_{k=1}^{B} \textbf{D}_{n+k}\ y_{k},\quad 1 \leq n \leq N-B-1, \end{array} $$
(14)
$$\begin{array}{@{}rcl@{}} -\theta\boldsymbol{\Psi}^{\ast}(n,\theta)+\boldsymbol{\Psi}(n,0) & = & \sum\limits_{k=0}^{n} \boldsymbol{\Psi}^{\ast} (k,\theta)\textbf{D}_{n-k} + S^{\ast}(\theta) \ \boldsymbol{\nu}(0)\left( \sum\limits_{k=1}^{N-1-n} \textbf{D}_{n+k}y_{k}+\widehat{\textbf{D}}_{N} y_{N-n}\right)\\ &&+ S^{\ast}(\theta) \sum\limits_{k=1}^{N-n} [\boldsymbol{\Psi}(n\,+\,k,0)+\boldsymbol{\Phi}(n\,+\,k,0)]y_{k},\; N\!\,-\,B \!\leq\! n \leq N\,-\,1, \end{array} $$
(15)
$$\begin{array}{@{}rcl@{}} -\theta \boldsymbol{\Psi}^{\ast}(N,\theta ) +\boldsymbol{\Psi}(N,0)& = & \sum\limits_{k=0}^{N-1} \boldsymbol{\Psi}^{\ast} (k,\theta)\widehat{\textbf{D}}_{N-k}+\boldsymbol{\Psi}^{\ast} (N,\theta)\textbf{D}, \end{array} $$
(16)
$$\begin{array}{@{}rcl@{}} -\theta \boldsymbol{\Phi}^{\ast}(0,\theta)+\boldsymbol{\Phi}(0,0) & = & \boldsymbol{\Phi}^{\ast}(0,\theta)\textbf{D}_{0} +V^{\ast}(\theta)\ \boldsymbol{\Psi}(0,0), \end{array} $$
(17)
$$\begin{array}{@{}rcl@{}} -\theta\boldsymbol{\Phi}^{\ast}(n,\theta)+\boldsymbol{\Phi}(n,0) & = & \sum\limits_{k=0}^{n} \boldsymbol{\Phi}^{\ast}(k,\theta)\textbf{D}_{n-k},~ 1 \leq n \leq N-1, \end{array} $$
(18)
$$\begin{array}{@{}rcl@{}} -\theta \boldsymbol{\Phi}^{\ast}(N,\theta)+\boldsymbol{\Phi}(N,0) & = & \sum\limits_{k=0}^{N-1} \boldsymbol{\Phi}^{\ast}(k,\theta)\widehat{\textbf{D}}_{N-k}+\boldsymbol{\Phi}^{\ast}(N,\theta)\mathbf{ D}. \end{array} $$
(19)

Using the above equations, we now obtain a few results in the form of lemmas, which will be used to derive the queue length distribution at an arbitrary epoch. These results also have interpretations of their own.

Lemma 1

The mean number of entrances to the vacation state per unit of time equals the mean number of departures from the vacation state per unit of time, that is,

$$\begin{array}{@{}rcl@{}} \boldsymbol{\Psi} (0,0)\mathbf{\mathit{e}} & = & \sum\limits_{k=0}^{N} \boldsymbol{\Phi}(k,0)\mathbf{\mathit{e}}. \end{array} $$

Proof

Setting 𝜃 = 0 in Eqs. 13 to 16, post-multiplying by e, adding the resulting equations, and using Eq. 12, a simple algebraic manipulation leads to the result of Lemma 1. □

Lemma 2

The following equalities hold true:

$$\begin{array}{@{}rcl@{}} E[S]\sum\limits_{k=0}^{N}\boldsymbol{\Psi} (k,0)\mathbf{\mathit{e}} & = & \sum\limits_{k=0}^{N} \boldsymbol{\Psi}(k)\mathbf{\mathit{e}}=\rho^{\prime}\quad (say), \end{array} $$
(20)
$$\begin{array}{@{}rcl@{}} E[V]\sum\limits_{k=0}^{N}\boldsymbol{\Phi} (k,0)\mathbf{\mathit{e}}+ \boldsymbol{\nu} (0)\mathbf{\mathit{e}} & = & \sum\limits_{k=0}^{N} \boldsymbol{\Phi}(k)\mathbf{\mathit{e}} + \boldsymbol{\nu} (0)\mathbf{\mathit{e}}=1-\rho^{\prime}. \end{array} $$
(21)

These results have probabilistic interpretations: \({\sum }_{k=0}^{N}\boldsymbol {\Psi } (k,0)\mathbf {\mathit {e}}\) denotes the mean number of service completions per unit of time, and multiplying it by E[S] gives ρ′, the probability that the server is busy. Similarly, \({\sum }_{n=0}^{N} \boldsymbol {\Phi }(n,0)\mathbf {e}\) denotes the rate of vacation terminations, and multiplying it by E[V] yields the probability that the server is on vacation. Therefore, 1 − ρ′ represents the probability that the server is in an unavailable period, which corresponds to a vacation period plus possible dormancy.

Proof

Post-multiplying Eqs. 13 to 16 by e and adding them, then using the relation \({\sum }_{k=0}^{\infty } \textbf {D}_{k}\mathbf {\mathit {e}}={\textbf {0}}\), Eq. 12, and Lemma 1, after some manipulation we obtain

$$\begin{array}{@{}rcl@{}} \sum\limits_{k=0}^{N} \boldsymbol{\Psi}^{\ast}(k,\theta)\mathbf{\mathit{e}} & = & \frac{1-S^{\ast}(\theta)}{\theta}\sum\limits_{k=0}^{N} \boldsymbol{\Psi} (k,0)\mathbf{\mathit{e}}. \end{array} $$

Taking the limit as 𝜃→0, after simplification, we obtain (20).

Similarly, post-multiplying Eqs. 17 to 19 by e and adding them, then using the relation \({\sum }_{k=0}^{\infty } \mathbf { D}_{k}\mathbf {\mathit {e}}={\textbf {0}}\) and Lemma 1, after some manipulation we obtain

$$\begin{array}{@{}rcl@{}} \sum\limits_{k=0}^{N} \boldsymbol{\Phi}^{\ast}(k,\theta)\mathbf{\mathit{e}} & = & \frac{1-V^{\ast}(\theta)}{\theta}\sum\limits_{k=0}^{N} \boldsymbol{\Phi} (k,0)\mathbf{\mathit{e}}. \end{array} $$

Taking the limit as 𝜃→0, after some algebraic manipulation, we get \(E[V]{\sum }_{k=0}^{N}\boldsymbol {\Phi } (k,0)\mathbf {\mathit {e}} = {\sum }_{k=0}^{N} \boldsymbol {\Phi }(k)\mathbf {\mathit {e}}\). Adding ν(0)e to both sides, we have the desired result (21). One may note here that in the case of the SV policy the idle period may consist of a vacation period and the dormant period. □

Lemma 3

The expression for ρ′ is given by

$$\begin{array}{@{}rcl@{}} \rho^{\prime} & = & \frac{E[S] \sum\limits_{k=0}^{N} \boldsymbol{\Psi}^{+}(k)\mathbf{\mathit{e}}}{E[S] \sum\limits_{k=0}^{N} \boldsymbol{\Psi}^{+}(k) \mathbf{\mathit{e}}+E[V]\sum\limits_{k=0}^{N} \boldsymbol{\Phi}^{+}(k)\mathbf{\mathit{e}}+ \boldsymbol{\Phi}^{+}(0) (-\mathbf{ D}_{0})^{-1}\mathbf{\mathit{e}}}. \end{array} $$

Proof

Applying the conditional probability, we get

$$\begin{array}{@{}rcl@{}} {\Psi}^{+}_{i}(n)&=&P\{ n \text{~customers in the queue just prior to service completion epoch and}\\ &&\text{batch arrival process being in phase}~i~|~\text{at most}~N~\text{customers in the queue}\\ &&\text{just prior to either service completion- or vacation termination-epoch}\}\\ &=&\frac{1}{\Upsilon}{\Psi}_{i}(n,0),\quad 0\leq n \leq N, \end{array} $$

where

$$\begin{array}{@{}rcl@{}} {\Upsilon} &=&P\{\text{at most}~N~\text{customers in the queue just prior to either service completion-}\\ &&\text{or vacation termination-epoch}\}\\ &=&\sum\limits_{k=0}^{N}[\boldsymbol{\Psi}(k,0)+\boldsymbol{\Phi}(k,0)]\mathbf{ e}. \end{array} $$

Similarly, we can obtain an expression for \({\Phi }^{+}_{i}(n)\). In matrix and vector notations, we have

$$\begin{array}{@{}rcl@{}} \boldsymbol{\Psi}^{+}(n) & = &\frac{1}{\Upsilon}\boldsymbol{\Psi}(n,0), ~ 0 \leq n \leq N, \end{array} $$
(22)
$$\begin{array}{@{}rcl@{}} \boldsymbol{\Phi}^{+}(n) & = & \frac{1}{\Upsilon}\boldsymbol{\Phi}(n,0), ~ 0 \leq n \leq N. \end{array} $$
(23)

From Eqs. 20 and 21, we can write

$$\begin{array}{@{}rcl@{}} \frac{\rho^{\prime}}{1-\rho^{\prime}}=\frac{E[S]\sum\limits_{k=0}^{N}\boldsymbol{\Psi} (k,0)\mathbf{\mathit{e}}}{E[V]\sum\limits_{k=0}^{N}\boldsymbol{\Phi} (k,0)\mathbf{\mathit{e}}+ \boldsymbol{\nu} (0)\mathbf{\mathit{e}}}. \end{array} $$
(24)

Using Eqs. 12, 22 and 23 in Eq. 24, after simplification, we get the desired result. □
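As a quick numerical illustration, the expression of Lemma 3 can be evaluated directly once the embedded-epoch vectors are known. The following sketch uses made-up inputs (m = 2 phases, N = 2, and an assumed block D₀); none of the numbers come from the paper's examples.

```python
import numpy as np

# Illustrative stand-ins, not the paper's data.
ES, EV = 0.5, 2.0                        # mean service and vacation times
D0 = np.array([[-1.5, 0.5],              # assumed BMAP block D_0
               [0.4, -1.2]])
Psi_plus = [np.array([0.10, 0.05]),      # service-completion-epoch vectors
            np.array([0.15, 0.10]),
            np.array([0.05, 0.05])]
Phi_plus = [np.array([0.12, 0.08]),      # vacation-termination-epoch vectors
            np.array([0.10, 0.10]),
            np.array([0.05, 0.05])]
e = np.ones(2)

num = ES * sum(p @ e for p in Psi_plus)
den = (num + EV * sum(p @ e for p in Phi_plus)
       + Phi_plus[0] @ np.linalg.inv(-D0) @ e)
rho_prime = num / den                    # probability that the server is busy
```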

Lemma 4

The expression for Υ is given by

$$\begin{array}{@{}rcl@{}} {\Upsilon} & = & \left( \frac{\rho^{\prime}}{E[S]}+\frac{1-\rho^{\prime}}{E[V]}\right)\left( 1+\frac{\boldsymbol{\Phi}^{+} (0)(-\textbf{D}_{0})^{-1}\mathbf{\mathit{e}}}{E[V]}\right)^{-1}. \end{array} $$

Proof

Using Eqs. 20 and 21 in \({\Upsilon }={\sum }_{n=0}^{N}[\boldsymbol {\Psi }(n,0)+\boldsymbol {\Phi }(n,0)]\mathbf {\mathit {e}}\), we have

$$\begin{array}{@{}rcl@{}} {\Upsilon}=\frac{\rho^{\prime}}{E[S]}+\frac{1-\rho^{\prime}-\boldsymbol{\nu}(0)\mathbf{\mathit{e}}}{E[V]}. \end{array} $$
(25)

Using Eqs. 12 and 23 in Eq. 25, after simplification, we obtain the desired result. □

Now, we are in a position to determine the arbitrary epoch probabilities in terms of the service completion and vacation termination epoch probabilities. These can be obtained using the following theorem.
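Lemma 4 reduces Υ to a scalar computation once ρ′, the mean times, and the term Φ⁺(0)(−D₀)⁻¹e are available. A minimal sketch with illustrative stand-in values:

```python
# Lemma 4 as a scalar computation; every input below is an illustrative
# stand-in: rho_p for rho', phi0_term for Phi^+(0) (-D0)^{-1} e.
ES, EV = 0.5, 2.0          # mean service and vacation times (assumed)
rho_p = 0.3                # probability that the server is busy (assumed)
phi0_term = 0.2225         # Phi^+(0) (-D0)^{-1} e (assumed)

Upsilon = (rho_p / ES + (1.0 - rho_p) / EV) / (1.0 + phi0_term / EV)
```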

Theorem 3.1

The arbitrary epoch probabilities ν(0), Ψ(n) and Φ(n) are given by

$$\begin{array}{@{}rcl@{}} \boldsymbol{\nu}(0) & = & {\Upsilon} \boldsymbol{\Phi}^{+}(0) (-\mathbf{D}_{0})^{-1}, \end{array} $$
(26)
$$\begin{array}{@{}rcl@{}} \boldsymbol{\Psi}(0) & = & \left[{\Upsilon} \left\{\sum\limits_{k=1}^{B} [\boldsymbol{\Psi}^{+}(k)+\boldsymbol{\Phi}^{+}(k)]\sum\limits_{l=k}^{B} y_{l}-\boldsymbol{\Psi}^{+}(0)\right\}+ \boldsymbol{\nu} (0)\sum\limits_{k=1}^{B} \mathbf{D}_{k}\sum\limits_{l=k}^{B} y_{l}\right](-\mathbf{D}_{0})^{-1}, \end{array} $$
(27)
$$\begin{array}{@{}rcl@{}} \boldsymbol{\Psi}(n)&=&\left[\sum\limits_{k=0}^{n-1}\boldsymbol{\Psi}(k)\mathbf{D}_{n-k}+{\Upsilon} \left\{\sum\limits_{k=1}^{B} [\boldsymbol{\Psi}^{+}(n+k)+\boldsymbol{\Phi}^{+}(n+k)]y_{k}-\boldsymbol{\Psi}^{+}(n)\right\}\right.\\ &&\left. + \boldsymbol{\nu} (0)\sum\limits_{k=1}^{B} \mathbf{D}_{n+k} y_{k}\right](-\mathbf{D}_{0})^{-1},~ 1 \leq n \leq N-B-1, \end{array} $$
(28)
$$\begin{array}{@{}rcl@{}} \boldsymbol{\Psi}(n)&=&\left[\sum\limits_{k=0}^{n-1}\boldsymbol{\Psi}(k)\mathbf{D}_{n-k}+{\Upsilon} \left\{\sum\limits_{k=1}^{N-n} [\boldsymbol{\Psi}^{+}(n+k)+\boldsymbol{\Phi}^{+}(n+k)]y_{k}-\boldsymbol{\Psi}^{+}(n)\right\}\right.\\ &&\left.+ \boldsymbol{\nu} (0)\left( \sum\limits_{k=1}^{N-1-n} \mathbf{D}_{n+k} y_{k}+\widehat{\mathbf{D}}_{N} y_{N-n}\right) \right](-\mathbf{D}_{0})^{-1},~ N-B \leq n \leq N-1, \end{array} $$
(29)
$$\begin{array}{@{}rcl@{}} \boldsymbol{\Phi}(0) & = & \left[{\Upsilon}\left( \boldsymbol{\Psi}^{+}(0)- \boldsymbol{\Phi}^{+}(0)\right)\right](-\mathbf{D}_{0})^{-1}, \end{array} $$
(30)
$$\begin{array}{@{}rcl@{}} \boldsymbol{\Phi}(n) & = & \left[\sum\limits_{k=0}^{n-1}\boldsymbol{\Phi}(k)\mathbf{ D}_{n-k}-{\Upsilon} \boldsymbol{\Phi}^{+}(n)\right](-\mathbf{D}_{0})^{-1},~ 1 \leq n \leq N-1. \end{array} $$
(31)

Proof

Using Eq. 23 in Eq. 12, a little algebra gives Eq. 26. For Eqs. 27 to 31, setting 𝜃 = 0 in Eqs. 13 to 15, 17 and 18, using Eqs. 22 and 23, after simplification, we obtain the desired result. □
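The vacation-state part of the theorem (Eqs. 30 and 31) is a simple forward recursion. The sketch below runs it on illustrative placeholder inputs, so the outputs are not genuine probability vectors; it only demonstrates the mechanics.

```python
import numpy as np

# Forward recursion of Eqs. 30-31 with made-up inputs (m = 2, N = 3).
m, N = 2, 3
D = {0: np.array([[-1.5, 0.5], [0.4, -1.2]]),    # assumed BMAP blocks D_k
     1: np.array([[0.6, 0.1], [0.2, 0.3]]),
     2: np.array([[0.2, 0.1], [0.1, 0.2]])}
Upsilon = 0.85                                   # assumed
Psi_plus = {0: np.array([0.10, 0.05])}           # embedded-epoch vectors
Phi_plus = {0: np.array([0.06, 0.04]),
            1: np.array([0.05, 0.05]),
            2: np.array([0.04, 0.02])}
inv_negD0 = np.linalg.inv(-D[0])

Phi = {0: Upsilon * (Psi_plus[0] - Phi_plus[0]) @ inv_negD0}   # Eq. 30
for n in range(1, N):                                          # Eq. 31
    acc = sum(Phi[k] @ D[n - k] for k in range(n)) - Upsilon * Phi_plus[n]
    Phi[n] = acc @ inv_negD0
```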

Remark 1

It may be remarked here that we do not have explicit componentwise expressions for Ψ(N) and Φ(N) separately. However, one can compute \(\boldsymbol {\Psi }(N)\mathbf {\mathit {e}}= \rho ^{\prime }-{\sum }_{k=0}^{N-1} \boldsymbol {\Psi }(k)\mathbf {\mathit {e}}\) and \(\boldsymbol {\Phi }(N)\mathbf {\mathit {e}}= 1-\rho ^{\prime }-{\sum }_{k=0}^{N-1} \boldsymbol {\Phi }(k)\mathbf {\mathit {e}} - \boldsymbol {\nu }(0)\mathbf {\mathit {e}}\) using Lemma 2. Further, Ψ(N) + Φ(N) can be obtained using the normalization condition as \(\boldsymbol {\Psi }(N)+\boldsymbol {\Phi }(N)=\overline {\boldsymbol {\pi }}-{\sum }_{k=0}^{N-1}[\boldsymbol {\Psi }(k)+\boldsymbol {\Phi }(k)]-\boldsymbol {\nu }(0)\). Although we do not obtain the vectors Ψ(N) and Φ(N) componentwise, the above results are sufficient to determine the key performance measures; see Section 5.

3.3 Queue length distribution at arrival epoch

Let \(\boldsymbol{\nu}^{-}(0)\), \(\boldsymbol{\Psi}^{-}(n)\), \(\boldsymbol{\Phi}^{-}(n)\), 0 ≤ n ≤ N, be the 1×m vectors whose j-th component gives the probability that an arriving batch finds n customers in the queue and the arrival process is in phase j just after the arrival of the batch. They are given by

$$\begin{array}{@{}rcl@{}} \boldsymbol{\nu}^{-}(0)&=&\frac{\boldsymbol{\nu}(0)\widehat{\textbf{D}}_{1}}{\lambda_{g}},\\ \boldsymbol{\Psi}^{-}(n)&=&\frac{\boldsymbol{\Psi}(n)\widehat{\textbf{D}}_{1}}{\lambda_{g}},\quad 0 \leq n \leq N-1,\\ \boldsymbol{\Phi}^{-}(n)&=&\frac{\boldsymbol{\Phi}(n)\widehat{\textbf{D}}_{1}}{\lambda_{g}},\quad 0 \leq n \leq N-1, \end{array} $$

which can be obtained using the “rate-in and rate-out” argument; for more details, see Kim et al. [34].

Remark 2

It may be remarked here that we do not have explicit expressions for \(\boldsymbol{\Psi}^{-}(N)\) and \(\boldsymbol{\Phi}^{-}(N)\). However, one can compute their sum using the normalization condition as \(\boldsymbol {\Psi }^{-}(N)+\boldsymbol {\Phi }^{-}(N)= \frac {1}{\lambda _{g}}[\overline {\boldsymbol {\pi }}-\boldsymbol {\nu }(0)-{\sum }_{k=0}^{N-1}(\boldsymbol {\Phi }(k)+\boldsymbol {\Psi }(k))]\widehat {\mathbf {D}}_{1}\). Although we do not obtain the vectors \(\boldsymbol{\Psi}^{-}(N)\) and \(\boldsymbol{\Phi}^{-}(N)\) componentwise separately, \(\boldsymbol{\Psi}^{-}(N)+\boldsymbol{\Phi}^{-}(N)\) is sufficient to determine the key performance measures; see Section 5.

This completes the analytic analysis of the \(BMAP/G^{Y}/1/N\) queue with single vacation policy. Performance measures and discussion of numerical results are presented in Sections 5 and 6, respectively. In the following section, we consider the \(BMAP/G^{Y}/1/N\) queue with multiple vacation policy.

4 Multiple vacation

We consider here the \(BMAP/G^{Y}/1/N\) queue with the same assumptions and notations described in Section 2, except that the server now follows the multiple vacation (MV) policy. Under this policy, when the server finishes serving a batch and finds the queue empty, he goes for a vacation. On return, if the server finds one or more customers waiting, he serves them as per the batch service rule until the system empties. However, if on return from a vacation the server finds no customers waiting, he immediately proceeds for another vacation and continues in this manner until he finds at least one waiting customer in the queue. Without going into further details, we give a brief account of the model for the sake of completeness.

4.1 Queue length distribution at service completion and vacation termination epochs

Following the procedure described in the case of the SV policy, we have the TPM for the MV policy as

$$\begin{array}{@{}rcl@{}} \mathcal{P}=\left[\begin{array}{lllllllllllll}\mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} & \mathbf{0} & \mathbf{M}_{0} & \mathbf{M}_{1} & {\cdots} & \mathbf{M}_{N-1} &\widehat{\mathbf{ M}}_{N} \\ {\mathbf{L}}_{1,0} & {\mathbf{L}}_{1,1} & {\cdots} & {\mathbf{ L}}_{1,N-B} & {\mathbf{L}}_{1,N-B+1} & {\cdots} & {\mathbf{L}}_{1,N-1} & {\mathbf{L}}_{1,N} &\mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} &\mathbf{0}\\[-1pt] {\mathbf{L}}_{2,0} & {\mathbf{L}}_{2,1} & {\cdots} & {\mathbf{L}}_{2,N-B} & {\mathbf{L}}_{2,N-B+1} & {\cdots} & {\mathbf{L}}_{2,N-1} & {\mathbf{L}}_{2,N} &\mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} &\mathbf{0}\\[-1pt] {\vdots} & {\vdots} & & {\vdots} & {\vdots} & & {\vdots} & {\vdots} &{\vdots} & {\vdots} & & {\vdots} &\vdots\\[-1pt] {\mathbf{L}}_{B,0} & {\mathbf{L}}_{B,1} & {\cdots} & {\mathbf{L}}_{B,N-B} & {\mathbf{L}}_{B,N-B+1} & {\cdots} & {\mathbf{L}}_{B,N-1} & {\mathbf{L}}_{B,N} &\mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} &\mathbf{0}\\[-1pt] \mathbf{0} & {\mathbf{L}}_{B+1,1} & {\cdots} & {\mathbf{L}}_{B+1,N-B} & {\mathbf{ L}}_{B+1,N-B+1} & {\cdots} & {\mathbf{L}}_{B+1,N-1} & {\mathbf{L}}_{B+1,N} &\mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} &\mathbf{0}\\[-1pt] \mathbf{0} & \mathbf{0} & {\cdots} & {\mathbf{L}}_{B+2,N-B} & {\mathbf{L}}_{B+2,N-B+1} & {\cdots} & {\mathbf{L}}_{B+2,N-1} & {\mathbf{L}}_{B+2,N} &\mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} &\mathbf{0}\\[-1pt] {\vdots} & {\vdots} & & {\vdots} & {\vdots} & & {\vdots} & {\vdots} &{\vdots} & {\vdots} & & {\vdots} &\vdots\\[-1pt] \mathbf{0} & \mathbf{0} & {\cdots} & {\mathbf{L}}_{N,N-B} & {\mathbf{L}}_{N,N-B+1} & {\cdots} & {\mathbf{L}}_{N,N-1} & {\mathbf{L}}_{N,N} &\mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} &\mathbf{0}\\[-1pt] \mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} & \mathbf{0} &\mathbf{M}_{0} & \mathbf{ M}_{1} & {\cdots} & \mathbf{M}_{N-1} 
&\widehat{\mathbf{M}}_{N}\\[-1pt] {\mathbf{L}}_{1,0} & {\mathbf{L}}_{1,1} & {\cdots} & {\mathbf{L}}_{1,N-B} & {\mathbf{L}}_{1,N-B+1} & {\cdots} & {\mathbf{L}}_{1,N-1} & {\mathbf{L}}_{1,N} &\mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} &\mathbf{0}\\[-1pt] {\mathbf{L}}_{2,0} & {\mathbf{L}}_{2,1} & {\cdots} & {\mathbf{L}}_{2,N-B} & {\mathbf{L}}_{2,N-B+1} & {\cdots} & {\mathbf{L}}_{2,N-1} & {\mathbf{L}}_{2,N} &\mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} &\mathbf{0}\\[-1pt] {\vdots} & {\vdots} & & {\vdots} & {\vdots} & & {\vdots} & {\vdots} &{\vdots} & {\vdots} & & {\vdots} &\vdots\\[-1pt] {\mathbf{L}}_{B,0} & {\mathbf{L}}_{B,1} & {\cdots} & {\mathbf{L}}_{B,N-B} & {\mathbf{L}}_{B,N-B+1} & {\cdots} & {\mathbf{L}}_{B,N-1} & {\mathbf{L}}_{B,N} &\mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} &\mathbf{0}\\[-1pt] \mathbf{0} & {\mathbf{L}}_{B+1,1} & {\cdots} & {\mathbf{L}}_{B+1,N-B} & {\mathbf{ L}}_{B+1,N-B+1} & {\cdots} & {\mathbf{L}}_{B+1,N-1} & {\mathbf{L}}_{B+1,N} &\mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} &\mathbf{0}\\[-1pt] \mathbf{0} & \mathbf{0} & {\cdots} & {\mathbf{L}}_{B+2,N-B} & {\mathbf{L}}_{B+2,N-B+1} & {\cdots} & {\mathbf{L}}_{B+2,N-1} & {\mathbf{L}}_{B+2,N} &\mathbf{0} & \mathbf{0} & {\cdots} & \mathbf{0} &\mathbf{0}\\[-1pt] {\vdots} & {\vdots} & & {\vdots} & {\vdots} & & {\vdots} & {\vdots} &{\vdots} & {\vdots} & & {\vdots} &\vdots\\[-1pt] \mathbf{0} & \mathbf{0} & {\cdots} & {\mathbf{L}}_{N,N-B} & {\mathbf{ L}}_{N,N-B+1} & {\cdots} & {\mathbf{L}}_{N,N-1} & {\mathbf{L}}_{N,N} &\mathbf{ 0} & \mathbf{0} & {\cdots} & \mathbf{0} &\mathbf{0}\end{array}\right]. \end{array} $$

The probability vectors \(\boldsymbol{\Psi}^{+}(n)\) and \(\boldsymbol{\Phi}^{+}(n)\), 0 ≤ n ≤ N, of the number of customers in the queue at service completion and vacation termination epochs can be obtained by solving the system of equations \([\boldsymbol {\Psi }^{+}\ \boldsymbol {\Phi }^{+}]\mathcal {P}=[\boldsymbol {\Psi }^{+}\ \boldsymbol {\Phi }^{+}]\) and \([\boldsymbol {\Psi }^{+}\ \boldsymbol {\Phi }^{+}]\mathbf{\mathit{e}} = 1\), as described for the single vacation policy.
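Solving \([\boldsymbol{\Psi}^{+}\ \boldsymbol{\Phi}^{+}]\mathcal{P}=[\boldsymbol{\Psi}^{+}\ \boldsymbol{\Phi}^{+}]\) with the normalization is a standard finite linear-algebra task: replace one balance equation with the normalization and solve. A generic sketch, with a toy 3×3 stochastic matrix standing in for the block TPM:

```python
import numpy as np

def embedded_stationary(P):
    """Solve x P = x with x e = 1 for a finite stochastic matrix P by
    replacing one balance equation with the normalization row."""
    n = P.shape[0]
    A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Toy 3-state stochastic matrix standing in for the block TPM.
P = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
x = embedded_stationary(P)
```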

4.2 Queue length distribution at arbitrary epoch

Relating the states of the system at two consecutive time epochs t and t + dt, and using probabilistic arguments, we get a set of partial differential equations for each phase i (1 ≤ i ≤ m). Assuming that the steady state exists and using matrix and vector notations, we obtain

$$\begin{array}{@{}rcl@{}} -\frac{d}{dx}\boldsymbol{\Psi}(0,x) & = & \boldsymbol{\Psi}(0,x)\textbf{D}_{0}+ s(x) \sum\limits_{k=1}^{B} [\boldsymbol{\Psi}(k,0)+\boldsymbol{\Phi}(k,0)]\sum\limits_{l=k}^{B} y_{l}, \end{array} $$
(32)
$$\begin{array}{@{}rcl@{}} -\frac{d}{dx}\boldsymbol{\Psi}(n,x) & = & \sum\limits_{k=0}^{n} \boldsymbol{\Psi} (k,x)\textbf{D}_{n-k}+ s(x) \sum\limits_{k=1}^{B} [\boldsymbol{\Psi}(n+k,0)+\boldsymbol{\Phi}(n+k,0)]y_{k},\quad 1 \leq n \leq N-B, \hspace{0.5cm} \end{array} $$
(33)
$$\begin{array}{@{}rcl@{}} -\frac{d}{dx}\boldsymbol{\Psi}(n,x) & = & \sum\limits_{k=0}^{n} \boldsymbol{\Psi} (k,x)\textbf{D}_{n-k}+ s(x) \sum\limits_{k=1}^{N-n} [\boldsymbol{\Psi}(n+k,0)+\boldsymbol{\Phi}(n+k,0)]y_{k},\\ && \hspace{15pc} N\,-\,B+1 \leq n \leq N\,-\,1, \end{array} $$
(34)
$$\begin{array}{@{}rcl@{}} -\frac{d}{dx}\boldsymbol{\Psi}(N,x) & = & \sum\limits_{k=0}^{N-1} \boldsymbol{\Psi} (k,x)\widehat{\textbf{D}}_{N-k}+\boldsymbol{\Psi} (N,x)\textbf{D}, \end{array} $$
(35)
$$\begin{array}{@{}rcl@{}} -\frac{d}{dx}\boldsymbol{\Phi}(0,x) & = & \boldsymbol{\Phi}(0,x)\textbf{D}_{0} +v(x)[\boldsymbol{\Phi}(0,0)+\boldsymbol{\Psi}(0,0)], \end{array} $$
(36)
$$\begin{array}{@{}rcl@{}} -\frac{d}{dx}\boldsymbol{\Phi}(n,x) & = & \sum\limits_{k=0}^{n} \boldsymbol{\Phi}(k,x)\textbf{D}_{n-k},~ 1 \leq n \leq N-1, \end{array} $$
(37)
$$\begin{array}{@{}rcl@{}} -\frac{d}{dx}\boldsymbol{\Phi}(N,x) & = & \sum\limits_{k=0}^{N-1} \boldsymbol{\Phi} (k,x)\widehat{\textbf{D}}_{N-k}+\boldsymbol{\Phi} (N,x)\textbf{D}. \end{array} $$
(38)

Multiplying Eqs. 32 to 38 by \(e^{-\theta x}\) and integrating with respect to x from 0 to ∞, we get

$$\begin{array}{@{}rcl@{}} -\theta \boldsymbol{\Psi}^{\ast}(0,\theta) +\boldsymbol{\Psi}(0,0)& = & \boldsymbol{\Psi}^{*}(0,\theta)\textbf{D}_{0}+ S^{*}(\theta) \sum\limits_{k=1}^{B} [\boldsymbol{\Psi}(k,0)+\boldsymbol{\Phi}(k,0)]\sum\limits_{l=k}^{B} y_{l}, \end{array} $$
(39)
$$\begin{array}{@{}rcl@{}} -\theta\boldsymbol{\Psi}^{*}(n,\theta)+\boldsymbol{\Psi}(n,0) & = & \sum\limits_{k=0}^{n} \boldsymbol{\Psi}^{*} (k,\theta)\textbf{D}_{n-k}+ S^{*}(\theta) \sum\limits_{k=1}^{B} [\boldsymbol{\Psi}(n+k,0)+\boldsymbol{\Phi}(n+k,0)]y_{k},\\ && \hspace{13pc}1 \leq n \leq N-B, \end{array} $$
(40)
$$\begin{array}{@{}rcl@{}} -\theta\boldsymbol{\Psi}^{*}(n,\theta)+\boldsymbol{\Psi}(n,0) & = & \sum\limits_{k=0}^{n} \boldsymbol{\Psi}^{*} (k,\theta)\textbf{D}_{n-k}+ S^{*}(\theta) \sum\limits_{k=1}^{N-n} [\boldsymbol{\Psi}(n+k,0)+\boldsymbol{\Phi}(n+k,0)]y_{k},\\ && \hspace{11pc}N-B+1 \leq n \leq N-1, \end{array} $$
(41)
$$\begin{array}{@{}rcl@{}} -\theta\boldsymbol{\Psi}^{*}(N,\theta) +\boldsymbol{\Psi}(N,0)& = & \sum\limits_{k=0}^{N-1} \boldsymbol{\Psi}^{*} (k,\theta)\widehat{\textbf{D}}_{N-k}+\boldsymbol{\Psi}^{*}(N,\theta)\textbf{D}, \end{array} $$
(42)
$$\begin{array}{@{}rcl@{}} -\theta\boldsymbol{\Phi}^{*}(0,\theta)+\boldsymbol{\Phi}(0,0) & = & \boldsymbol{\Phi}^{*}(0,\theta)\textbf{D}_{0} +V^{*}(\theta)[\boldsymbol{\Phi}(0,0)+\boldsymbol{\Psi}(0,0)], \end{array} $$
(43)
$$\begin{array}{@{}rcl@{}} -\theta\boldsymbol{\Phi}^{*}(n,\theta)+\boldsymbol{\Phi}(n,0) & = & \sum\limits_{k=0}^{n} \boldsymbol{\Phi}^{*}(k,\theta)\textbf{D}_{n-k},\quad 1 \leq n \leq N-1, \end{array} $$
(44)
$$\begin{array}{@{}rcl@{}} -\theta\boldsymbol{\Phi}^{*}(N,\theta)+\boldsymbol{\Phi}(N,0) & = & \sum\limits_{k=0}^{N-1} \boldsymbol{\Phi}^{*}(k,\theta)\widehat{\textbf{D}}_{N-k}+\boldsymbol{\Phi}^{*}(N,\theta)\mathbf{ D}. \end{array} $$
(45)

Now, using the above equations, we obtain a few results in the form of lemmas which will be used to get the queue length distribution at arbitrary epoch. Also, these results have their own interpretations.

Lemma 5

The following equality holds true:

$$\begin{array}{@{}rcl@{}} \boldsymbol{\Psi} (0,0)\mathbf{\mathit{e}} & = &\sum\limits_{k=1}^{N} \boldsymbol{\Phi}(k,0)\mathbf{\mathit{e}}. \end{array} $$

Proof

Setting 𝜃 = 0, post-multiplying Eqs. 39 to 42 by e and adding them, a simple algebraic manipulation leads to the result of Lemma 5. □

Lemma 6

The following equalities hold true:

$$\begin{array}{@{}rcl@{}} E[S]\sum\limits_{k=0}^{N}\boldsymbol{\Psi} (k,0)\mathbf{\mathit{e}} & = & \sum\limits_{k=0}^{N} \boldsymbol{\Psi}(k)\mathbf{\mathit{e}}=\rho^{\prime}, \end{array} $$
(46)
$$\begin{array}{@{}rcl@{}} E[V]\sum\limits_{k=0}^{N}\boldsymbol{\Phi} (k,0)\mathbf{\mathit{e}} & = & \sum\limits_{k=0}^{N} \boldsymbol{\Phi}(k)\mathbf{\mathit{e}}=1-\rho^{\prime}. \end{array} $$
(47)

Proof

Post-multiplying by e in Eqs. 39 to 42 and adding them, using the relation \({\sum }_{k=0}^{\infty } \textbf {D}_{k}\mathbf {\mathit {e}}={\textbf {0}}\) and Lemma 5, after some manipulation, we obtain

$$\begin{array}{@{}rcl@{}} \sum\limits_{k=0}^{N} \boldsymbol{\Psi}^{\ast}(k,\theta)\mathbf{\mathit{e}} & = & \frac{1-S^{\ast}(\theta)}{\theta}\sum\limits_{k=0}^{N} \boldsymbol{\Psi} (k,0)\mathbf{\mathit{e}}. \end{array} $$

Taking the limit as 𝜃→0, after simplification, we obtain (46).

Similarly, post-multiplying by e in Eq. 43 to Eq. 45 and adding them, using the relation \({\sum }_{k=0}^{\infty } \mathbf { D}_{k}\mathbf {\mathit {e}}={\textbf {0}}\) and Lemma 5, after some manipulation, we obtain

$$\begin{array}{@{}rcl@{}} \sum\limits_{k=0}^{N} \boldsymbol{\Phi}^{*}(k,\theta)\mathbf{\mathit{e}} & = & \frac{1-V^{*}(\theta)}{\theta}\sum\limits_{k=0}^{N} \boldsymbol{\Phi} (k,0)\mathbf{\mathit{e}}. \end{array} $$

Taking the limit as 𝜃→0, after some algebraic manipulation, we obtain (47). □

Lemma 7

The probability that the server is busy is given by

$$\begin{array}{@{}rcl@{}} \rho^{\prime} & = & \frac{E[S]\sum\limits_{k=0}^{N} \boldsymbol{\Psi}^{+}(k)\mathbf{\mathit{e}}}{E[S] \sum\limits_{k=0}^{N} \boldsymbol{\Psi}^{+}(k)\mathbf{\mathit{e}}+E[V]\sum\limits_{k=0}^{N} \boldsymbol{\Phi}^{+}(k)\mathbf{\mathit{e}}}. \end{array} $$

Proof

From Eqs. 46 and 47, we can write

$$\begin{array}{@{}rcl@{}} \frac{\rho^{\prime}}{1-\rho^{\prime}}=\frac{E[S]\sum\limits_{k=0}^{N}\boldsymbol{\Psi} (k,0)\mathbf{\mathit{e}}}{E[V]\sum\limits_{k=0}^{N}\boldsymbol{\Phi} (k,0)\mathbf{\mathit{e}}}. \end{array} $$
(48)

Using Eqs. 22 and 23 in Eq. 48, after simplification, we get the desired result. □

Remark 3

One may note here that in the case of the MV policy, the idle period may consist of several vacations, each with an identical distribution.

Lemma 8

The expression for Υ is given by

$$\begin{array}{@{}rcl@{}} {\Upsilon} & = & \frac{\rho^{\prime}}{E[S]}+\frac{1-\rho^{\prime}}{E[V]}. \end{array} $$

Proof

Using Eqs. 46 and 47 in \({\Upsilon }={\sum }_{n=0}^{N}[\boldsymbol {\Psi }(n,0)+\boldsymbol {\Phi }(n,0)]\mathbf {\mathit {e}}\), we have

$$\begin{array}{@{}rcl@{}} {\Upsilon}=\frac{\rho^{\prime}}{E[S]}+\frac{1-\rho^{\prime}}{E[V]}. \end{array} $$
(49)

□

Theorem 4.1

The arbitrary epoch probabilities are given by

$$\begin{array}{@{}rcl@{}} \boldsymbol{\Psi}(0)&=&{\Upsilon} \left\{\sum\limits_{k=1}^{B} [\boldsymbol{\Psi}^{+}(k)+\boldsymbol{\Phi}^{+}(k)]\sum\limits_{l=k}^{B} y_{l}-\boldsymbol{\Psi}^{+}(0)\right\}(-\mathbf{D}_{0})^{-1},\\[-2pt] \boldsymbol{\Psi}(n)\!\!&=&\!\!\left[\sum\limits_{k=0}^{n-1}\boldsymbol{\Psi}(k)\mathbf{D}_{n-k}\,+\,{\Upsilon} \left\{\sum\limits_{k=1}^{B} [\boldsymbol{\Psi}^{+}(n\,+\,k)\,+\,\boldsymbol{\Phi}^{+}(n\,+\,k)]y_{k}\,-\,\boldsymbol{\Psi}^{+}(n)\right\}\right](-\mathbf{D}_{0})^{-1},\\ &&\hspace{7.5cm}1 \leq n \leq N-B,\\[-2pt] \boldsymbol{\Psi}(n)\!\!&=&\!\!\left[\sum\limits_{k=0}^{n-1}\boldsymbol{\Psi}(k)\mathbf{D}_{n-k}\,+\,{\Upsilon} \left\{\sum\limits_{k=1}^{N-n} [\boldsymbol{\Psi}^{+}(n\,+\,k)\,+\,\boldsymbol{\Phi}^{+}(n\,+\,k)]y_{k}\,-\,\boldsymbol{\Psi}^{+}(n)\right\} \right](\,-\,\mathbf{D}_{0})^{-1},\\[-2pt] &&\hspace{7.4cm} N\,-\,B\,+\,1 \!\leq\! n \!\leq\! N\,-\,1,\\[-2pt] \boldsymbol{\Phi}(0) & = & {\Upsilon}\boldsymbol{\Psi}^{+}(0)\ (-\mathbf{D}_{0})^{-1},\\[-2pt] \boldsymbol{\Phi}(n) & = & \left[\sum\limits_{k=0}^{n-1}\boldsymbol{\Phi}(k)\mathbf{ D}_{n-k}-{\Upsilon} \boldsymbol{\Phi}^{+}(n)\right](-\mathbf{D}_{0})^{-1},~ 1 \leq n \leq N-1. \end{array} $$

Proof

Setting 𝜃 = 0 in Eqs. 39 to 41, 43 and 44, then using Eqs. 22 and 23, after simplification, we obtain the desired result. □

Remark 4

It may be remarked here that we do not have explicit componentwise expressions for Ψ(N) and Φ(N) separately. However, one can compute \(\boldsymbol {\Psi }(N)\mathbf {\mathit {e}}= \rho ^{\prime }-{\sum }_{k=0}^{N-1} \boldsymbol {\Psi }(k)\mathbf {\mathit {e}}\) and \(\boldsymbol {\Phi }(N)\mathbf {\mathit {e}}= 1-\rho ^{\prime }-{\sum }_{k=0}^{N-1} \boldsymbol {\Phi }(k)\mathbf {\mathit {e}}\) using Lemma 6. Further, Ψ(N) + Φ(N) can be obtained using the normalization condition as \(\boldsymbol {\Psi }(N)+\boldsymbol {\Phi }(N)=\overline {\boldsymbol {\pi }}-{\sum }_{k=0}^{N-1}[\boldsymbol {\Psi }(k)+\boldsymbol {\Phi }(k)]\). Although we do not obtain the vectors Ψ(N) and Φ(N) componentwise, the above results are sufficient to determine the key performance measures; see Section 5.

Lemma 9

The vectors \(\boldsymbol{\Psi}^{-}(n)\) and \(\boldsymbol{\Phi}^{-}(n)\), 0 ≤ n ≤ N, are given by

$$\begin{array}{@{}rcl@{}} \boldsymbol{\Psi}^{-}(n)&=&\frac{\boldsymbol{\Psi}(n)\widehat{\mathbf{ D}}_{1}}{\lambda_{g}},\quad 0 \leq n \leq N-1,\\ \boldsymbol{\Phi}^{-}(n)&=&\frac{\boldsymbol{\Phi}(n)\widehat{\textbf{D}}_{1}}{\lambda_{g}},\quad 0 \leq n \leq N-1,\\ \boldsymbol{\Psi}^{-}(N)+\boldsymbol{\Phi}^{-}(N)&=& \frac{1}{\lambda_{g}}[\overline{\boldsymbol{\pi}}-{\sum}_{k=0}^{N-1}(\boldsymbol{\Psi}(k)+\boldsymbol{\Phi}(k))]\widehat{\mathbf{ D}}_{1}. \end{array} $$

5 Performance measures

Performance measures are important features of queueing systems as they reflect the efficiency of the queueing system under consideration. Once the distributions of the number of customers in the queue at different epochs are known, various performance measures of the system can be obtained. We derive below some performance measures such as the average queue lengths and the loss probabilities: the average number of customers in the queue at an arbitrary epoch is \(L_{q}={\sum }_{k=1}^{N} k [\boldsymbol {\Psi }(k)+\boldsymbol {\Phi }(k)]\textbf {e}\); the average number of customers in the queue when the server is busy is \(L_{b}={\sum }_{k=1}^{N} k \boldsymbol {\Psi } (k)\textbf {e}\); and the average number of customers in the queue when the server is on vacation is \(L_{v}={\sum }_{k=1}^{N} k \boldsymbol {\Phi } (k)\textbf {e}\). Next, we compute the blocking probabilities of the first, an arbitrary, and the last customer of an arriving batch.
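Given the arbitrary-epoch vectors, the average queue lengths above are direct weighted sums; a sketch with made-up vectors for N = 2, m = 2 (illustrative numbers only, not from the paper):

```python
import numpy as np

# Hypothetical arbitrary-epoch vectors Psi(n), Phi(n) for N = 2, m = 2.
N = 2
Psi = [np.array([0.10, 0.05]), np.array([0.08, 0.07]), np.array([0.05, 0.05])]
Phi = [np.array([0.15, 0.10]), np.array([0.10, 0.05]), np.array([0.08, 0.02])]
e = np.ones(2)

L_b = sum(k * (Psi[k] @ e) for k in range(1, N + 1))  # server busy
L_v = sum(k * (Phi[k] @ e) for k in range(1, N + 1))  # server on vacation
L_q = L_b + L_v       # same as sum_k k [Psi(k)+Phi(k)] e
```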

  • (i) Blocking probability of the first customer in a batch

Let \(P_{BF}\) be the probability that the first customer in a batch (and therefore the whole batch) is lost upon arrival. The first customer is lost if there is no waiting place, i.e., there are already N customers in the queue. Hence, the blocking probability of the first customer of an arriving batch is given by

$$\begin{array}{@{}rcl@{}} P_{BF} & = & [\boldsymbol{\Psi}^{-}(N)+\boldsymbol{\Phi}^{-}(N)]\textbf{e}. \end{array} $$
  • (ii) Blocking probability of an arbitrary customer in a batch

Let \(P_{BA}\) be the probability that an arbitrary customer in a batch is lost upon arrival. Let \(\textbf{H}_{k}\) be the matrix of order m × m whose (i, j)-th element \([\textbf{H}_{k}]_{ij}\) is the probability that the position of an arbitrary customer in an arriving batch is k with the phase changing from i to j. Then

$$\begin{array}{@{}rcl@{}} \textbf{H}_{k}=\frac{1}{\lambda^{\star}}\sum\limits_{n=k}^{\infty}\mathbf{ D}_{n}, \quad k=1,2,3,\ldots, \end{array} $$

for details, see Gupta et al. [35]. Hence, an arbitrary customer in a batch is lost if he finds n (0 ≤ n ≤ N) customers in the queue upon arrival and his position in his batch is k ≥ N+1−n. Thus, we have

$$\begin{array}{@{}rcl@{}} P_{BA}&=&\boldsymbol{\nu}(0)\sum\limits_{k=N+1}^{\infty} \mathbf{ H}_{k}\mathbf{\mathit{e}}+\sum\limits_{n=0}^{N} [\boldsymbol{\Psi}(n)+\boldsymbol{\Phi}(n)]\sum\limits_{k=N+1-n}^{\infty} \textbf{H}_{k}\mathbf{ e},\quad \text{for single vacation,}\\ &=&\sum\limits_{n=0}^{N} [\boldsymbol{\Psi}(n)+\boldsymbol{\Phi}(n)]\sum\limits_{k=N+1-n}^{\infty} \textbf{H}_{k}\mathbf{ e},\quad \text{for multiple vacation.} \end{array} $$

Let \(W_{q}\) be the average waiting time in the queue of an arbitrary customer of a batch. Then, by Little’s rule, we have \(W_{q} = L_{q}/\bar{\lambda}\), where \(\bar{\lambda} = \lambda^{\star}(1-P_{BA})\) is the effective arrival rate.

  • (iii) Blocking probability of the last customer in a batch

Let \(P_{BL}\) be the probability that the last customer in a batch is lost upon arrival. The last customer in a batch is lost if he finds n (0 ≤ n ≤ N) customers in the queue upon arrival and his batch size is k ≥ N+1−n. Hence, the blocking probability of the last customer of a batch is given by

$$\begin{array}{@{}rcl@{}} P_{BL}&=&\frac{1}{\lambda_{g}}\left[\boldsymbol{\nu}(0)\sum\limits_{k=N+1}^{\infty} \textbf{D}_{k}\mathbf{\mathit{e}}+\sum\limits_{n=0}^{N} [\boldsymbol{\Psi}(n)+\boldsymbol{\Phi}(n)]\sum\limits_{k=N+1-n}^{\infty} \textbf{D}_{k}\textbf{e}\right],\quad \text{for single vacation}\\ &=&\frac{1}{\lambda_{g}}\left[\sum\limits_{n=0}^{N} [\boldsymbol{\Psi}(n)+\boldsymbol{\Phi}(n)]\sum\limits_{k=N+1-n}^{\infty} \textbf{D}_{k}\mathbf{ e}\right],\quad \text{for multiple vacation}. \end{array} $$
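For a BMAP with finite support, the tail sums in the expression for \(P_{BL}\) truncate, so the multiple vacation formula above can be evaluated directly. A sketch with illustrative inputs (none of the numbers come from the paper's examples):

```python
import numpy as np

# Sketch of P_BL (multiple vacation case).  The blocks D_k have finite
# support, so the tail sums truncate; all numbers are made up.
m, N = 2, 2
Dk = {1: np.array([[0.10, 0.02], [0.02, 0.06]]),
      3: np.array([[0.14, 0.03], [0.04, 0.08]])}
e = np.ones(m)
lam_g = 0.5                                      # batch arrival rate (assumed)
Psi = [np.array([0.10, 0.05]), np.array([0.08, 0.07]), np.array([0.05, 0.05])]
Phi = [np.array([0.15, 0.10]), np.array([0.10, 0.05]), np.array([0.08, 0.02])]

def tail(j):
    """sum_{k >= j} D_k e (zero vector when the tail is empty)."""
    out = np.zeros(m)
    for k, Dmat in Dk.items():
        if k >= j:
            out += Dmat @ e
    return out

P_BL = sum((Psi[n] + Phi[n]) @ tail(N + 1 - n) for n in range(N + 1)) / lam_g
```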

6 Numerical results

In this section, we provide a few numerical examples to give some practical insight into the system. They would help engineers and practitioners to see how the various system performance measures behave as the model parameters change. During the computational work, several outputs were generated for testing the procedure, but only a few of them are presented here. All the calculations were performed using Maple 13 on a PC with an Intel(R) Core(TM) i3-3240 CPU @ 3.40 GHz and 4.00 GB RAM. No difficulty arose during the computational work even for large N (= 250), which generates a transition probability matrix \(\mathcal {P}\) of order 1004×1004; hence, we solved a system of 1004 simultaneous linear equations for \(\boldsymbol{\Psi}^{+}(n)\) and \(\boldsymbol{\Phi}^{+}(n)\), 0 ≤ n ≤ N. In real-world applications, the size of an arriving batch is always bounded, so assuming that the arriving batch size has finite support is reasonable in mathematical modeling. Tables 1, 2 and 3 show the results of the \(BMAP/PH^{Y}/1/100\) queue in the case of single vacation policy with the matrices \(\textbf{D}_{n}\) of the BMAP as

$$\begin{array}{@{}rcl@{}} \textbf{D}_{0}&=& \left[ \begin{array}{cc} -1.425 & 0.850 \\ 0.875& -1.275 \end{array} \right], ~ \textbf{D}_{1} = \left[ \begin{array}{cc} 0.095 & 0.020 \\ 0.025 & 0.055 \end{array} \right], ~ \textbf{D}_{3} = \left[ \begin{array}{cc} 0.1425 & 0.030 \\ 0.0375 & 0.0825 \end{array} \right],\\ \textbf{D}_{5}&= &\left[ \begin{array}{cc} 0.1425 & 0.030 \\ 0.0375 & 0.0825 \end{array} \right], ~ \textbf{D}_{7}= \left[ \begin{array}{cc} 0.095 & 0.020 \\ 0.025 & 0.055 \end{array} \right]. \end{array} $$

This leads to \(\overline {\boldsymbol {\pi }}=\left [\begin {array}{cc}0.5128205&0.4871795\end {array} \right ]\) with λ = 1.9589744. We assume that the maximum batch size for service is B = 3 with \(y_{1} = 0.7\), \(y_{2} = 0.2\), \(y_{3} = 0.1\). The phase type representation of the service time is taken as β = [0.4 0.6], \(\textbf {S}=\left [ \begin {array}{cc} -6.683 & 2.453 \\ 1.367 & -8.566 \end {array} \right ]\) with E[S] = 0.1714053. The vacation time is taken as deterministic with mean E[V] = 1/0.25. These lead to ρ = 0.2398418. We evaluated the state probabilities using the procedure discussed for the finite-buffer queue by taking sufficiently large N and found that \(\boldsymbol {\nu }(0)+{\sum }_{n=0}^{\infty }[\boldsymbol {\Psi }(n)+\boldsymbol {\Phi }(n)]=\overline {\boldsymbol {\pi }}\), as it should be. This is due to the fact that a finite-buffer queue behaves as an infinite-buffer queue when ρ < 1 and N is sufficiently large.
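The quoted \(\overline{\boldsymbol{\pi}}\) and λ can be reproduced from the stated \(\textbf{D}_{k}\) blocks: \(\overline{\boldsymbol{\pi}}\) solves \(\overline{\boldsymbol{\pi}}\textbf{D}=\textbf{0}\), \(\overline{\boldsymbol{\pi}}\mathbf{e}=1\) with \(\textbf{D}={\sum}_{k}\textbf{D}_{k}\), and \(\lambda =\overline{\boldsymbol{\pi}}{\sum}_{k} k\textbf{D}_{k}\mathbf{e}\). A short independent check (not the paper's Maple code):

```python
import numpy as np

# The D_k blocks of the BMAP from the single vacation example above.
D0 = np.array([[-1.425, 0.850], [0.875, -1.275]])
D1 = np.array([[0.095, 0.020], [0.025, 0.055]])
D3 = np.array([[0.1425, 0.030], [0.0375, 0.0825]])
D5 = D3.copy()
D7 = D1.copy()

D = D0 + D1 + D3 + D5 + D7              # generator of the phase process
# Stationary phase vector: pi D = 0, pi e = 1.
A = np.vstack([D.T[:-1], np.ones(2)])
pi = np.linalg.solve(A, np.array([0.0, 1.0]))

# Mean customer arrival rate: lambda = pi (sum_k k D_k) e.
lam = pi @ (1 * D1 + 3 * D3 + 5 * D5 + 7 * D7) @ np.ones(2)
```

The result matches the values reported above, π̄ = [0.5128205, 0.4871795] and λ = 1.9589744.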

Table 1 Queue length distribution at service completion and vacation termination epochs in the case of single vacation policy
Table 2 Queue length distribution at arbitrary epoch in case of single vacation policy
Table 3 Queue length distribution at arrival epoch in case of single vacation policy

Tables 4, 5 and 6 show the results of the \(BMAP/D^{Y}/1/250\) queue in the case of multiple vacation policy with the matrices \(\textbf{D}_{n}\) of the BMAP as

$$\begin{array}{@{}rcl@{}} \textbf{D}_{0}= \left[ \begin{array}{cc} -1.530& 0.250 \\ 0.275& -1.225 \end{array} \right], \textbf{D}_{1} = \left[ \begin{array}{cc} 0.0790& 0.0625 \\ 0.6125& 0.0025 \end{array} \right], \textbf{D}_{5} = \left[ \begin{array}{cc} 0.5025& 0.1260\\ 0.0125& 0.1025 \end{array} \right], \\\textbf{D}_{18} = \left[ \begin{array}{cc} 0.41& 0.10 \\ 0.10& 0.12 \end{array} \right]. \end{array} $$

This leads to \(\overline {\boldsymbol {\pi }}=\left [\begin {array}{cc}0.6499838&0.3500162\end {array} \right ]\) with λ = 9.9039812. We assume that the maximum batch size for service is B = 7 with \(y_{1} = 0.45\), \(y_{3} = 0.25\), \(y_{5} = 0.2\), \(y_{7} = 0.1\). The service time is taken as deterministic with mean E[S] = 1/3.5. The phase type representation of the vacation time is taken as α = [0.7 0.3], \(\textbf {T}=\left [ \begin {array}{cc} -1.098 & 0.864 \\ 0.071 & -0.532 \end {array} \right ]\) with E[V] = 2.5400159. These lead to ρ = 0.9757617.

Table 4 Queue length distribution at service completion and vacation termination epochs in the case of multiple vacation policy
Table 5 Queue length distribution at arbitrary epoch in case of multiple vacation policy
Table 6 Queue length distribution at arrival epoch in case of multiple vacation policy

In Fig. 2, we have plotted the probability that the server is busy (ρ′) against the mean vacation time for a \(BMAP/PH^{Y}/1/18\) queue with SV as well as MV policy, with the following model parameters. The BMAP representation is taken as

$$\begin{array}{@{}rcl@{}} \textbf{D}_{0}& =& \left[ \begin{array}{cc} -2.625 & 1.50 \\ 0.875& -1.375 \end{array} \right], ~ \textbf{D}_{1} = \left[ \begin{array}{cc} 0.525 & 0.150 \\ 0.075 & 0.225 \end{array} \right], ~ \textbf{D}_{3} = \left[ \begin{array}{cc} 0.350 & 0.100 \\ 0.050 & 0.150 \end{array} \right], \end{array} $$

with λ = 1.3090909. The PH-type representation of the service time is taken as β = [0.3 0.7], \(\textbf {S}=\left [ \begin {array}{cc} -2.183 & 2.453 \\ 1.367 & -2.986 \end {array} \right ]\) with E[S] = 1.3006183 and ρ = 0.8961198. We assume \(y_{1} = 0.3\), \(y_{2} = 0.5\), \(y_{3} = 0.2\). The vacation time distribution is taken as \(E_{2}\) with α = [1.0 0.0], \(\textbf {T}=\left [ \begin {array}{cc} -\kappa & \kappa \\ 0.0 & -\kappa \end {array} \right ]\) with E[V] = 2.0/κ, where κ is varied suitably to obtain various values of E[V]. It is observed from Fig. 2 that ρ′ decreases as the mean vacation time E[V] increases. Also, one can observe that ρ′ is slightly higher under SV than under MV; finally, they converge to the same value when the mean vacation time is long. This is due to the fact that, in both policies, the server sees at least one customer in the system after returning from a long vacation. In Fig. 3, we have plotted the average waiting time in the queue (\(W_{q}\)) against E[V] with the same model parameters as used for Fig. 2. It is seen from this figure that \(W_{q}\) increases as E[V] increases. Further, \(W_{q}\) is slightly higher under MV than under SV; finally, they converge to the same value when the mean vacation time is long. This is due to the fact that, in both policies, the customers have to wait a long time in the queue when the server returns from a vacation after a long time.
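The \(E_{2}\) vacation representation used here has mean \(\boldsymbol{\alpha}(-\textbf{T})^{-1}\mathbf{e} = 2/\kappa\), which is easy to verify numerically; the value of κ below is arbitrary:

```python
import numpy as np

kappa = 0.8                                    # arbitrary illustrative rate
alpha = np.array([1.0, 0.0])
T = np.array([[-kappa, kappa],
              [0.0, -kappa]])
EV = alpha @ np.linalg.inv(-T) @ np.ones(2)    # PH mean: alpha (-T)^{-1} e
```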

Fig. 2

Effect of mean vacation time on \(\rho ^{\prime }\)

Fig. 3

Effect of mean vacation time on \(W_{q}\)

The effects of the traffic intensity (ρ) on the blocking probability of the last customer (\(P_{BL}\)) and the average queue length (\(L_{q}\)) are shown in Figs. 4 and 5, respectively, for a \(BMAP/{E_{2}^{Y}}/1/20\) queue with SV as well as MV, using the following input parameters. The BMAP representation is taken as

$$\begin{array}{@{}rcl@{}} \textbf{D}_{0} &=& \left[ \begin{array}{ccc} -0.542410 & 0.003728 & 0.000000 \\ 0.004349 & -0.022989& 0.000622\\ 0.000000 & 0.001243 & -2.269670 \end{array} \right], ~ \textbf{D}_{4} =\left[ \begin{array}{ccc} 0.014352 & 0.000000 & 0.362725 \\ 0.000000 & 0.012178 & 0.000435\\ 1.581375 & 0.003479 & 0.003044 \end{array} \right],\\ \textbf{D}_{7} &=& \left[ \begin{array}{ccc} 0.004101 & 0.000000 & 0.103636 \\ 0.000000 & 0.003479& 0.000124\\ 0.451822 & 0.000994 & 0.000870 \end{array} \right], ~ \textbf{D}_{15} =\left[ \begin{array}{ccc} 0.002050 & 0.000000 & 0.051818 \\ 0.000000 & 0.001740 & 0.000062\\ 0.225911 & 0.000497 & 0.000435 \end{array} \right], \end{array} $$

with λ = 2.8500199. The service time distribution is taken as \(E_{2}\) with β = [1.0 0.0], \(\textbf {S}=\left [ \begin {array}{cc} -\kappa & \kappa \\ 0.0 & -\kappa \end {array} \right ]\), so that E[S] = 2/κ, where κ is varied suitably to obtain various values of ρ. We assume \(y_{5} = 0.2\), \(y_{7} = 0.2\), \(y_{10} = 0.6\). The PH-type representation of the vacation time is taken as α = [0.7 0.3], \(\textbf {T}=\left [ \begin {array}{cc} -1.098 & 0.864 \\ 0.071 & -0.532 \end {array} \right ]\) with E[V] = 2.54. From these figures it can be observed that \(P_{BL}\) and \(L_{q}\) initially decrease as ρ increases up to 0.2 and then increase with ρ. Also, both \(P_{BL}\) and \(L_{q}\) are higher under MV than under SV. Finally, the curves converge to the same value; this is because, under both the SV and MV policies, the system becomes full when the traffic load is high.
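As with the first example, the stated arrival rate and mean vacation time can be reproduced from the matrices above. A minimal NumPy sketch follows, where the weights 4, 7, and 15 in the arrival-rate formula follow the batch sizes indicated by the subscripts of \(\textbf{D}_{4}\), \(\textbf{D}_{7}\), and \(\textbf{D}_{15}\):

```python
import numpy as np

# BMAP of the second example: batches of size 4, 7, and 15.
D0 = np.array([[-0.542410, 0.003728, 0.000000],
               [ 0.004349, -0.022989, 0.000622],
               [ 0.000000, 0.001243, -2.269670]])
D4 = np.array([[0.014352, 0.000000, 0.362725],
               [0.000000, 0.012178, 0.000435],
               [1.581375, 0.003479, 0.003044]])
D7 = np.array([[0.004101, 0.000000, 0.103636],
               [0.000000, 0.003479, 0.000124],
               [0.451822, 0.000994, 0.000870]])
D15 = np.array([[0.002050, 0.000000, 0.051818],
                [0.000000, 0.001740, 0.000062],
                [0.225911, 0.000497, 0.000435]])

# Stationary vector of the generator D = D0 + D4 + D7 + D15.
D = D0 + D4 + D7 + D15
A = np.vstack([D.T[:-1], np.ones(3)])   # replace last balance equation by normalization
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

# Customer arrival rate: lambda = pi (4 D4 + 7 D7 + 15 D15) e.
e = np.ones(3)
lam = pi @ (4 * D4 + 7 * D7 + 15 * D15) @ e

# Mean vacation time of the PH representation (alpha, T): E[V] = alpha (-T)^{-1} e.
alpha = np.array([0.7, 0.3])
T = np.array([[-1.098, 0.864], [0.071, -0.532]])
EV = alpha @ np.linalg.solve(-T, np.ones(2))
```

This yields λ ≈ 2.85002 and E[V] ≈ 2.54, matching the reported values.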

Fig. 4

Effect of traffic intensity on \(P_{BL}\)

Fig. 5

Effect of traffic intensity on \(L_{q}\)

7 Conclusion

This paper analyzed a \(BMAP/G^{Y}/1/N\) vacation (single and multiple) queueing system, in which customers are served by a single server in batches of random capacity decided at the beginning of each service. With the help of the supplementary variable and embedded Markov chain techniques, we obtained the queue length distributions at various epochs along with other performance measures. The model presented in this paper may be useful in manufacturing systems where production orders arrive in batches of random size and form a single queue in order of arrival, and items are manufactured in batches of random size, decided at the beginning of each production run according to the batch service rule discussed above.