1 Introduction

Mathematical study of various queueing models has been carried out over the last few decades owing to their applications in numerous queueing scenarios of a rapidly changing world. In particular, queueing models are widely applied in electrical engineering, cellular networks, web browsing, traffic modelling of Internet protocol (IP) networks, hybrid high-speed radio technologies and laser-based communication systems, wireless networks with linear topology, and other related systems. Most communication networks carry non-stationary (bursty) and self-similar traffic in which the inter-service times are highly correlated. A service process with correlated service times, in which customers are served in batches of random size, can be accurately modelled by the batch Markovian service process (BMSP). The BMSP is therefore a natural choice for capturing correlated, bursty, and self-similar traffic in communication networks. The BMSP is a versatile service process that generalizes the Markovian service process (MSP) by allowing batch services; it also generalizes the batch Poisson process, the Markov-modulated Poisson process, and the batch PH-renewal process. The BMSP has the same structure as the batch Markovian arrival process (BMAP), with arrivals replaced by service completions. Hence, the BMSP plays a role for the service process in a queueing system analogous to that of the BMAP for the arrival process, both in analytical results and in application areas. For more detailed information about the BMAP, its special cases, properties, and related research work, see Lucantoni [1] and the survey paper by Chakravarthy [2].

Many authors have investigated queueing models with non-renewal, correlated service over the last three decades, and such results are available in the extant queueing literature. Abate et al. [3] and Alfa et al. [4] obtained the stationary distributions of the MAP/MSP/1 queue based on a perturbation theory approach. Horváth et al. [5] discussed the output process of the MAP/MSP/1 queue by proposing an approximate analysis method. Zhang et al. [6] analysed the departure process of the BMAP/MSP/1 queue using an exact aggregate solution technique (ETAQA). Samanta et al. [7] analysed the BMAP/MSP/1 queueing model based on the zeros of the characteristic function associated with the vector probability generating function (p.g.f.) of the system-length distribution at a random epoch. Bocharov et al. [8] investigated the GI/MSP/1 queue using the embedded Markov chain technique and a semi-Markov process to obtain the stationary performance characteristics of the system. Gupta and Banik [9] analysed the GI/MSP/1 queueing system based on the matrix-geometric method and the supplementary variable technique. Chaudhry et al. [10] investigated the GI/MSP/1 queue using the zeros of the characteristic function associated with the vector p.g.f. of the system-length distribution at a pre-arrival epoch. Samanta and Zhang [11] discussed the discrete-time GI/D-MSP/1 queue with multiple vacations based on the matrix-geometric method. In this connection, see also Samanta et al. [12, 13], Samanta and Nandi [14, 15], Chaudhry et al. [16], Samanta [17], and Wang et al. [18].

However, very little work has been done on the corresponding batch Markovian service process. Krishnamoorthy and Joshua [19] analysed the BMAP/BMSP/1 queueing model with Markov-dependent arrival and service batch sizes, obtaining the state probability vectors of the system and some relevant performance measures. Sandhya et al. [20] investigated an infinite-buffer BMAP/BMSP/1 queue by partitioning the infinitesimal generator into blocks whose sizes correspond to the maximum arrival and service batch sizes. Using the matrix-geometric method pioneered by Neuts [21], they determined the stationary distribution of the number of customers waiting for service and other performance measures. Bank and Samanta [22] compared the roots method and the matrix-geometric method for the BMAP/BMSP/1 queueing system. Using a matrix-analytic approach, Wang et al. [18] analysed the finite-buffer discrete-time DBMAP/DBMSP/1/K queue to evaluate long-term packet loss probabilities over wireless networks. Banik et al. [23, 24] studied the finite-buffer GI/BMSP/1/N queueing model using the embedded Markov chain technique. Banik [25] analysed the GI/BMSP/1 queueing system with state-dependent arrivals based on a combination of the matrix-geometric method and a Markov renewal theory argument. For further details on the BMSP, the reader is referred to Chaplygin [26].

The above literature survey motivates us to investigate an infinite-waiting-space GI/BMSP/1 queueing system in which customers arrive one by one according to a renewal process. A single server serves the customers in batches under a batch Markovian service process with a minimum threshold value ‘a’. If, after a batch service completion, the number of waiting customers in the system is less than ‘a’, the server does not serve until the number of customers in the system reaches at least ‘a’, at which point it starts serving a batch. We analyse the model for both random and fixed batch-size service. Under random batch-size service, if the number of customers in the queue after a batch service completion is at least ‘a’, the server serves a batch of k customers with the service rate given by the BMSP rate matrix for batch size k; otherwise, the server remains idle until the queue length reaches at least ‘a’. Moreover, it is assumed that an arriving customer is not allowed to join an ongoing batch service even if there is unused service capacity. Under fixed batch-size service, if the number of customers in the queue after a batch service completion is at least ‘a’, the server serves a batch of exactly ‘a’ customers with the service rate given by the BMSP rate matrix for batch size a; otherwise, the server remains idle until the queue length reaches at least ‘a’. We first determine the vector p.g.f. of the system-length distribution at a pre-arrival epoch and extract this distribution in terms of the zeros of the associated characteristic polynomial. We then use a Markov renewal theory argument to determine the system-length distribution at a random epoch, and derive the system-length distribution at a post-departure epoch using the ‘rate in = rate out’ argument. Numerical results based on the analytical expressions obtained in this paper illustrate the key performance measures of the system and confirm the correctness of the analysis.

The model discussed in this paper has potential application in the packaging and shipping of vaccines. Vaccine containers arrive one by one according to a renewal process for packaging and shipping. The server processes them (i.e. packages and ships the vaccine containers) in batches of at least ‘a’ containers according to the BMSP. The phases of the BMSP are labelled as insulated packaging, temperature monitoring, storage volume standardization, labelling, and standard shipping. If the number of vaccine containers in the queue after a batch service completion is less than ‘a’, the server remains idle until the number of containers in the queue reaches at least ‘a’. Moreover, it is assumed that an arriving vaccine container is not allowed to join an ongoing batch even if there is unused service capacity. When packaging and shipping is performed for a fixed number of vaccine containers, the fixed batch-size model applies.

The remainder of this paper is organized as follows. The model is described in Sect. 2. Section 3 derives the system-length distributions at various time epochs. Numerical results are presented in Sect. 4. The paper is concluded in Sect. 5.

2 Model Description

We consider an infinite-waiting-space GI/BMSP/1 queueing system, where customers arrive one by one according to a renewal process. The inter-arrival times of successive arrivals are assumed to be independent and identically distributed (i.i.d.) random variables with cumulative distribution function (C.D.F.) A(x), \(x\ge 0\), with \(A(0)=0\). Define the Laplace–Stieltjes transform (L.–S.T.) of A(x) by \(\widetilde{A}(s)=\int _0^{\infty }\mathrm{{e}}^{-sx}\mathrm{{d}}A(x)\), \(\mathrm{{Re}}(s)\ge 0\). The mean inter-arrival time is \(\frac{1}{\lambda }=-\frac{\mathrm{{d}}}{\mathrm{{d}}s}\widetilde{A}(s)|_{s=0}\). The inter-arrival times are independent of the service process. The customers are served by a single server in batches of at least ‘a’, \((1\le a<\infty )\), customers under an m-state batch Markovian service process (BMSP). The m states are the phases of the underlying Markov chain (UMC) corresponding to the BMSP. The BMSP is specified by a sequence of \(m\times m\) rate matrices \(\{\mathbf{L}_{k},~k\ge 0\}\), where \(\mathbf{L}_{k}\) (\(k\ge 1\)) governs phase transitions accompanied by the service of a batch of size k, and \(\mathbf{L}_0\) governs phase transitions that do not generate service completions. The matrix \(\mathbf{L}_{0}\) is a non-singular stable matrix, and the diagonal element \([L_{0}]_{ii}\) of \(\mathbf{L}_{0}\) determines the rate of the exponential sojourn time in state i. Let us define \(\mathbf{L}(z)=\sum _{n=0}^{\infty }{} \mathbf{L}_nz^n\), \(|z|\le 1\), with \(\mathbf{L}=\sum _{k=0}^{\infty }{} \mathbf{L}_{k}\) being the infinitesimal generator of the UMC associated with the BMSP. The fundamental service rate of the BMSP is defined as \(\mu ^{*}=\overline{{\varvec{\pi }}}\sum _{k=1}^{\infty }k\mathbf{L}_{k}{} \mathbf{e}\), where \(\overline{{\varvec{\pi }}}\) is the unique solution of the system of linear equations \(\overline{{\varvec{\pi }}}{} \mathbf{L}=\mathbf{0}\) and \(\overline{{\varvec{\pi }}}{} \mathbf{e}=1\), with \(\mathbf{0}\) a row vector of order m whose elements are all 0 and \(\mathbf{e}\) a column vector of order m whose elements are all 1. In the case of fixed-size service with \(a=1\), we have \(\mathbf{L}_{k}=\mathbf{0}\) for \(k\ge 2\), and the GI/BMSP/1 model reduces to the GI/MSP/1 queueing model. The traffic intensity is \(\rho =\lambda /\mu ^{*}\), and the system is stable when \(\rho < 1\).
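For numerical work, \(\overline{{\varvec{\pi }}}\), \(\mu ^{*}\) and \(\rho \) can be computed directly from the rate matrices. The following is a minimal sketch in Python/NumPy (the function name and the truncation of \(\{\mathbf{L}_k\}\) at the maximum batch size are our own choices, not part of the model); it assumes the generator \(\mathbf{L}\) is irreducible so that \(\overline{{\varvec{\pi }}}\) is unique.

```python
import numpy as np

def bmsp_characteristics(L_list, lam):
    """pi_bar, mu_star and rho for a BMSP given [L_0, L_1, ..., L_N0]
    and the arrival rate lam (illustrative helper, not from the paper)."""
    m = L_list[0].shape[0]
    L = sum(L_list)                                   # generator of the UMC
    # Solve pi_bar L = 0, pi_bar e = 1 by replacing one balance equation
    # with the normalization condition.
    A = np.vstack([L.T[:-1], np.ones(m)])
    b = np.zeros(m); b[-1] = 1.0
    pi_bar = np.linalg.solve(A, b)
    mu_star = pi_bar @ sum(k * Lk for k, Lk in enumerate(L_list)) @ np.ones(m)
    return pi_bar, mu_star, lam / mu_star             # rho = lam / mu_star
```

Feeding in the matrices of the examples in Sect. 4, together with the corresponding \(\lambda \), should reproduce the reported \(\overline{{\varvec{\pi }}}\), \(\mu ^{*}\) and \(\rho \) up to rounding.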

In this paper, we assume that the server commences a busy period under two different service phase initiations. These two possible service phase commencements are referred to as Model I and Model II. They are defined as follows:

  • Model I The service phase does not change during idle periods of the system, i.e. the service phase is frozen during an idle period. Therefore, the service phase at the start of a busy period is the same as the service phase at the end of the preceding busy period.

  • Model II The service phase changes during idle periods of the system, i.e. the service phase is not frozen during an idle period. Therefore, the service phase at the start of a busy period is the service phase reached at the end of the preceding idle period.

3 Analysis of the Model

In this section, we analyse the GI/BMSP/1 queueing system and determine various system characteristics. For this purpose, we define the state of the system at time t by \((\mathcal {N}(t),\mathcal {J}(t))\), where \(\mathcal {N}(t)\) denotes the number of customers served in (0, t] and \(\mathcal {J}(t)\) is the phase of the UMC corresponding to the BMSP at time t. Let \(\mathbf{P}(n,t),n\ge 0,t\ge 0\), be an \(m\times m\) matrix whose (ij)th element

$$\begin{aligned} P_{ij}(n,t)=Pr\{{\mathcal {N}(t)=n,\mathcal {J}(t)=j|\mathcal {N}(0)=0,\mathcal {J}(0)=i}\}, \quad 1\le i,j\le m, \end{aligned}$$

is the conditional probability that n customers are served in (0, t] with the service process being in phase j at time t, given that the service process was in the phase i at time \(t=0\).

Using the property of BMSP, we have

$$\begin{aligned} \frac{\mathrm{{d}}}{\mathrm{{d}}t}{} \mathbf{P}(n,t)= & {} \sum \limits _{i=0}^{n}{} \mathbf{P}(i,t)\mathbf{L}_{n-i} ,\quad n\ge 0, \end{aligned}$$
(1)

with \(\mathbf{P}(0,0)=\mathbf{I}_{m}\) and \(\mathbf{P}(n,0)=\mathbf{0}\), \(n\ge 1\), where \(\mathbf{I}_{m}\) is the identity matrix of order m.

Multiplying (1) by \(z^n\), summing over n from 0 to \(\infty \), and using \(\mathbf{P}^{*}(z,t)=\sum _{n=0}^{\infty }\mathbf{P}(n,t)z^{n}\), \(\vert z\vert \le 1\), we obtain

$$\begin{aligned} \frac{\mathrm{{d}}}{\mathrm{{d}}t}{} \mathbf{P}^{*}(z,t)= & {} \mathbf{P}^{*}(z,t)\mathbf{L}(z), \end{aligned}$$
(2)

with \(\mathbf{P}^{*}(z,0)=\mathbf{I}_{m}.\)

Now, solving (2) with \(\mathbf{P}^{*}(z,0)=\mathbf{I}_{m}\), we obtain

$$\begin{aligned} \mathbf{P}^{*}(z,t)=\mathrm{{e}}^{\mathbf{L}(z)t},\quad \vert z\vert \le 1,~t\ge 0. \end{aligned}$$
(3)

Let \(\mathbf{S}_n\), \(n\ge 0\), denote the square matrix of order m whose (ij)th element specifies the conditional probability that n customers are served during an inter-arrival period of the arrival process and the service process being in phase j at the end of the inter-arrival period, given that the service process was in phase i at the starting point of the inter-arrival period. Then, we have

$$\begin{aligned} \mathbf{S}_{n}=\int _{0}^{\infty }{} \mathbf{P}(n,x)\mathrm{{d}}A(x),\quad n\ge 0. \end{aligned}$$
(4)

To evaluate the matrix \(\mathbf{S}_{n}\) for arbitrary inter-arrival time distribution, we apply the uniformization argument given in Lucantoni [27] as

$$\begin{aligned} \mathbf{P}(n,x)=\sum _{k=0}^{\infty }\mathrm{{e}}^{-\theta x}\dfrac{(\theta x)^{k}}{k!}{} \mathbf{U}_{n}^{(k)},\quad n\ge 0, \end{aligned}$$
(5)

where \(\theta =\text {max}_{i}{[-L_{0}]_{ii}}\), \(1\le i\le m\), and \(\mathbf{U}^{(k)}_{n}\) is given by

$$\begin{aligned} \mathbf{U}^{(0)}_{0}= & {} \mathbf{I}_{m},~~\mathbf{U}^{(0)}_{n}=\mathbf{0},~~\mathbf{U}^{(k+1)}_{0}=\mathbf{U}^{(k)}_{0}(\mathbf{I}_{m}+\theta ^{-1}\mathbf{L}_{0}),\\ \mathbf{U}^{(k+1)}_{n}= & {} \mathbf{U}^{(k)}_{n}(\mathbf{I}_{m}+\theta ^{-1}\mathbf{L}_{0})+\theta ^{-1}\sum _{i=0}^{n-1}{} \mathbf{U}_{i}^{(k)}\mathbf{L}_{n-i},\quad n\ge 1,~~k\ge 0. \end{aligned}$$

Now, using (5) in (4), we obtain

$$\begin{aligned} \mathbf{S}_{n}=\sum _{k=0}^{\infty }\sigma _{k}{} \mathbf{U}_{n}^{(k)},~n\ge 0, \end{aligned}$$
(6)

where

$$\begin{aligned} \sigma _{k}=\int _{0}^{\infty }\mathrm{{e}}^{-\theta x}\frac{(\theta x)^{k}}{k!}\mathrm{{d}}A(x),\quad k\ge 0. \end{aligned}$$
(7)

Multiplying both sides of (7) by \(z^k\) and summing over k from 0 to \(\infty \), we get

$$\begin{aligned} \sum _{k=0}^{\infty }\sigma _{k}z^k=\widetilde{A}(\theta -\theta z). \end{aligned}$$
(8)

Since the L.–S.T. of the inter-arrival time distribution is a rational function, or can be approximated by one, in which the degree of the numerator does not exceed the degree of the denominator, we assume that \(\widetilde{A}(\theta -\theta z)\) has the following form:

$$\begin{aligned} \widetilde{A}(\theta -\theta z)=\frac{\sum _{k=0}^{p}\phi _kz^k}{\sum _{k=0}^{n}\psi _kz^k}, \quad p=0,1,2,\dots , n \quad \hbox {with} \quad \psi _0=1. \end{aligned}$$
(9)

From (8) and (9), we obtain

$$\begin{aligned} \sum _{k=0}^{\infty }\sigma _{k}z^k\sum _{k=0}^{n}\psi _kz^k=\sum _{k=0}^{p}\phi _kz^k,\quad p=0,1,2,\dots , n. \end{aligned}$$
(10)

Equating the coefficients of \(z^r, r=0,1,2,3,\dots \) from (10), we have

$$\begin{aligned} \sigma _0= & {} \phi _0, \\ \sigma _r= & {} \phi _r-\sum _{i=0}^{r-1}\sigma _i \psi _{r-i}, \quad r=1,2,\ldots ,p,\\ \sigma _r= & {} - \sum _{i=1}^{\min (r,n)}\sigma _{r-i} \psi _i, \quad r \ge p+1. \end{aligned}$$

Now, we can determine \(\mathbf{{S}}_n\), \(n\ge 0\), from (6) with the results of \(\sigma _r\), \(r\ge 0\), given above.
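As a computational illustration, the recursions above translate directly into code. The sketch below (Python/NumPy; the function names, the truncation level K of the k-sum in (6), and the zero-padding of \(\{\mathbf{L}_k\}\) beyond the largest batch size are our own choices) computes \(\sigma _0,\ldots ,\sigma _{K-1}\) from the coefficients \(\phi _k\), \(\psi _k\) of (9), written in the unified form \(\sigma _r=\phi _r-\sum _{i=1}^{\min (r,n)}\sigma _{r-i}\psi _i\) with \(\phi _r=0\) for \(r>p\), and then builds \(\mathbf{S}_0,\ldots ,\mathbf{S}_N\) by the uniformization recursion.

```python
import numpy as np

def sigma_coeffs(phi, psi, K):
    """sigma_0, ..., sigma_{K-1} from (9)-(10); psi[0] must equal 1."""
    sigma = np.zeros(K)
    for r in range(K):
        s = phi[r] if r < len(phi) else 0.0
        for i in range(1, min(r, len(psi) - 1) + 1):
            s -= sigma[r - i] * psi[i]
        sigma[r] = s
    return sigma

def service_matrices(L_list, phi, psi, N, K):
    """S_0, ..., S_N from (6), truncating the uniformization sum at K terms."""
    m = L_list[0].shape[0]
    theta = max(-L_list[0][i, i] for i in range(m))
    B = np.eye(m) + L_list[0] / theta                        # I + theta^{-1} L_0
    Lk = lambda k: L_list[k] if k < len(L_list) else np.zeros((m, m))
    sigma = sigma_coeffs(phi, psi, K)
    U = [np.eye(m)] + [np.zeros((m, m)) for _ in range(N)]   # U_n^{(0)}
    S = [sigma[0] * U[n] for n in range(N + 1)]
    for k in range(1, K):
        # U_n^{(k)} = U_n^{(k-1)} B + theta^{-1} sum_{i<n} U_i^{(k-1)} L_{n-i}
        U = [U[n] @ B + sum(U[i] @ Lk(n - i) for i in range(n)) / theta
             for n in range(N + 1)]
        for n in range(N + 1):
            S[n] += sigma[k] * U[n]
    return S
```

Here K should be chosen large enough that \(\sigma _K\) is negligible; with the Padé coefficients of (48) or (51) and the corresponding \(\{\mathbf{L}_k\}\), this produces the matrices used in Sect. 4.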

Moreover, let \({\varvec{\Omega }}_n\) denote the square matrix of order m whose (ij)th element specifies the limiting probability that n customers are served during an elapsed inter-arrival time of the arrival process with the service process being in phase j, given that the service process was in phase i at the starting point of the inter-arrival period. Using the Markov renewal theory argument, we have

$$\begin{aligned} {\varvec{\Omega }}_n=\lambda \int _{0}^{\infty }{} \mathbf{P}(n,t)[1-A(t)]dt, \quad n\ge 0. \end{aligned}$$
(11)

In order to get \({\varvec{\Omega }}_n\), for \(n\ge 0\), we can write (4) as

$$\begin{aligned} \mathbf{S}_n= & {} -\int _{0}^{\infty }{} \mathbf{P}(n,t)\mathrm{{d}}[1-A(t)], \nonumber \\= & {} -\bigg [\mathbf{P}(n,t)[1-A(t)]\bigg ]_{0}^{\infty }+\int _{0}^{\infty }\frac{\mathrm{{d}}}{\mathrm{{d}}t}\mathbf{P}(n,t)[1-A(t)]\mathrm{{d}}t,\nonumber \\= & {} \mathbf{P}(n,0)+\int _{0}^{\infty }\frac{\mathrm{{d}}}{\mathrm{{d}}t}{} \mathbf{P}(n,t)[1-A(t)]\mathrm{{d}}t. \end{aligned}$$
(12)

Using (1) in (12), for \(n=0\), we have

$$\begin{aligned} \mathbf{S}_0= & {} \mathbf{I}_m+\int _{0}^{\infty }{} \mathbf{P}(0,t)\mathbf{L}_0[1-A(t)]\mathrm{{d}}t, \nonumber \\= & {} \mathbf{I}_m+\frac{1}{\lambda }{\varvec{\Omega }}_0\mathbf{L}_0 \quad [\text {using} (11)]. \end{aligned}$$
(13)

Hence, we have

$$\begin{aligned} {\varvec{\Omega }}_0=\lambda (\mathbf{I}_m-\mathbf{S}_0)(-\mathbf{L}_0)^{-1}. \end{aligned}$$
(14)

Using (1) in (12), for \(n\ge 1\), we have

$$\begin{aligned} \mathbf{S}_n= & {} \int _{0}^{\infty }\sum \limits _{i=0}^{n}{} \mathbf{P}(i,t)\mathbf{L}_{n-i}[1-A(t)]\mathrm{{d}}t,\nonumber \\= & {} \frac{1}{\lambda }\sum \limits _{i=0}^{n}{\varvec{\Omega }}_i\mathbf{L}_{n-i}\quad [\text {using} (11)], \end{aligned}$$

which yields

$$\begin{aligned} {\varvec{\Omega }}_n= & {} \bigg (\sum \limits _{i=0}^{n-1}{\varvec{\Omega }}_i\mathbf{L}_{n-i}-\lambda \mathbf{S}_n\bigg )(-\mathbf{L}_0)^{-1},\quad n \ge 1. \end{aligned}$$
(15)
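In numerical work, the recursion (14)–(15) can be evaluated directly once the \(\mathbf{S}_n\) are available; a minimal sketch (Python/NumPy; the helper name and the zero-padding of \(\{\mathbf{L}_k\}\) beyond the largest batch size are ours):

```python
import numpy as np

def omega_matrices(S, L_list, lam):
    """Omega_0, ..., Omega_N from (14)-(15), given S_0, ..., S_N."""
    m = S[0].shape[0]
    negL0_inv = np.linalg.inv(-L_list[0])
    Lk = lambda k: L_list[k] if k < len(L_list) else np.zeros((m, m))
    Omega = [lam * (np.eye(m) - S[0]) @ negL0_inv]        # eq. (14)
    for n in range(1, len(S)):
        acc = sum(Omega[i] @ Lk(n - i) for i in range(n)) - lam * S[n]
        Omega.append(acc @ negL0_inv)                     # eq. (15)
    return Omega
```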

Following Chaudhry et al. [10], let \(\widetilde{P}_{ij}(n,t)\) represent the conditional probability that n customers are served in (0, t] with the service process frozen in phase j at time t, given that the service process was in phase i at \(t=0\). Then, \(\widetilde{P}_{ij}(n,t)\), \(n\ge 1\), \(t\ge 0\), can be expressed as

$$\begin{aligned} \widetilde{P}_{ij}(n,t+\varDelta t)=\widetilde{P}_{ij}(n,t)+\sum \limits _{r=0}^{n-1}\sum \limits _{k=1}^{m} P_{ik}(r,t)[\widehat{L}_{n-r}]_{kj}\varDelta t+O(\varDelta t), \quad 1 \le i,j \le m, \end{aligned}$$

with \(\widetilde{P}_{ij}(n,0)=0\), \(n\ge 1\), and \([\widehat{L}_{r}]_{ij}=\sum \limits _{k=r}^{\infty }[{L}_{k}]_{ij}\), \(r\ge 1\).

Rearranging the terms, dividing by \(\varDelta t\), and letting \(\varDelta t \rightarrow 0\), this reduces to

$$\begin{aligned} \frac{\mathrm{{d}}}{\mathrm{{d}}t}\widetilde{P}_{ij}(n,t)=\sum \limits _{r=0}^{n-1}\sum \limits _{k=1}^{m} P_{ik}(r,t)[\widehat{L}_{n-r}]_{kj}, \quad n\ge 1, \end{aligned}$$

which, in matrix notation, yields

$$\begin{aligned} \frac{\mathrm{{d}}}{\mathrm{{d}}t}\widetilde{\mathbf{P}}(n,t)=\sum \limits _{r=0}^{n-1} \mathbf{P}(r,t)\widehat{\mathbf{L}}_{n-r}, \quad n\ge 1. \end{aligned}$$
(16)

with \(\widetilde{\mathbf{P}}(n,0)=\mathbf{0}\), \(n\ge 1\), and \(\widehat{\mathbf{L}}_r=\sum \limits _{k=r}^{\infty }{} \mathbf{L}_k,r \ge 1\).

Further, let \(\mathbf{\widehat{S}}_n\) denote the square matrix of order m whose (ij)th element specifies the probability that n customers are served during an inter-arrival period of the arrival process, with the service process frozen in phase j from the epoch of the nth service completion, given that the service process was in phase i at the start of the inter-arrival period. Then, we have

$$\begin{aligned} \widehat{\mathbf{S}}_n= {\left\{ \begin{array}{ll} \int _{0}^{\infty }\widetilde{\mathbf{P}}(n,t)\mathrm{{d}}A(t), \quad \text {for Model I},\\ \sum \limits _{k=n}^{\infty }{} \mathbf{S}_k , \quad \text {for Model II}. \end{array}\right. } \end{aligned}$$

For Model I, \(\widehat{\mathbf{S}}_n\) can be expressed as

$$\begin{aligned} \widehat{\mathbf{S}}_n= & {} \int _{0}^{\infty }\widetilde{\mathbf{P}}(n,t)\mathrm{{d}}A(t), \nonumber \\= & {} -\int _{0}^{\infty }\widetilde{\mathbf{P}}(n,t)\mathrm{{d}}[1-A(t)],\nonumber \\= & {} -\bigg [\widetilde{\mathbf{P}}(n,t)[1-A(t)]\bigg ]_{0}^{\infty }+\int _{0}^{\infty }\frac{\mathrm{{d}}}{\mathrm{{d}}t}\widetilde{\mathbf{P}}(n,t)[1-A(t)]\mathrm{{d}}t,\nonumber \\= & {} \sum \limits _{r=0}^{n-1}\int _{0}^{\infty }{\mathbf{P}}(r,t)\widehat{\mathbf{L}}_{n-r}[1-A(t)]\mathrm{{d}}t \quad [\text {using (16)}],\nonumber \\= & {} \frac{1}{\lambda } \sum \limits _{r=0}^{n-1}{\varvec{\Omega }}_r\widehat{\mathbf{L}}_{n-r},\quad n\ge 1 \quad [\text {using (11)}]. \end{aligned}$$
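The matrices \(\widehat{\mathbf{S}}_n\) can be evaluated in the same way. The following is a minimal sketch for both models (assuming the \({\varvec{\Omega }}_r\) and \(\mathbf{S}_k\) have already been computed up to a suitable truncation level; the helper names are ours):

```python
import numpy as np

def s_hat(n, Omega, S, L_list, lam, model="I"):
    """S_hat_n for Model I (frozen phase) or Model II; n >= 1."""
    m = S[0].shape[0]
    if model == "II":
        return sum(S[k] for k in range(n, len(S)))        # sum_{k>=n} S_k (truncated)
    # Model I: L_hat_r = sum_{k>=r} L_k, which vanishes beyond the largest batch size
    def L_hat(r):
        if r >= len(L_list):
            return np.zeros((m, m))
        return sum(L_list[k] for k in range(r, len(L_list)))
    return sum(Omega[r] @ L_hat(n - r) for r in range(n)) / lam
```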

3.1 Random Size Batch Service

In this section, we consider that the server serves customers according to the BMSP with a minimum batch size of ‘a’, \(1\le a<\infty \), i.e. \(\mathbf{L}_k=\mathbf{0}\), \(k=1,2,\ldots ,a-1\), \(\mathbf{L}_a\ne \mathbf{0}\), and at least one of \(\mathbf{L}_k\), \(k=a+1,a+2,\ldots \), is a nonzero matrix.

3.1.1 System-Length Distribution at Pre-arrival Epoch

We now focus on the system-length distribution just before an arrival epoch. Let the kth customer arrive at time instant \(t_k\), \(k=0,1,2, \dots ,\) with \(t_0=0\), and let \(t_{k}^{-}\) denote the pre-arrival epoch, that is, the time epoch just before the arrival instant \(t_k\). Then, the process \(\{Y_{t_{k}^{-}},J_{t_{k}^{-}}\}\), where \(Y_{t_{k}^{-}}\) denotes the number of customers in the system and \(J_{t_{k}^{-}}\) the phase of the service process at the embedded point \(t_{k}^{-}\), is a Markov chain. In the limiting case, let us define \(\pi _{j}^{-}(n)={\lim }_{k\rightarrow \infty }Pr\{Y_{t_{k}^{-}}=n,J_{t_{k}^{-}}=j\}\), \(n\ge 0\), \(1\le j \le m\), where \(\pi _{j}^{-}(n)\) denotes the pre-arrival epoch probability that there are n customers in the system and the service process is in phase j. Let \({\varvec{\pi }}^{-}(n)\) denote the row vector of order m whose jth component is \(\pi _{j}^{-}(n)\). Over two consecutive embedded points we thus have a Markov chain with state space \(\{(n,j), n\ge 0, j=1,2,\dots , m\}\), whose transition probability matrix \(\mathscr {P}\) is given by

figure a

where

$$\begin{aligned} {\varvec{\varTheta }}= & {} {\left\{ \begin{array}{ll} \mathbf{I}_m, \quad \text {for Model I},\\ \sum \limits _{n=0}^{\infty }{} \mathbf{S}_n, \quad \text {for Model II}. \end{array}\right. } \end{aligned}$$

Let \({\varvec{\pi }}^{-}=[{\varvec{\pi }}^{-}(0), {\varvec{\pi }}^{-}(1),\) \({\varvec{\pi }}^{-}(2), {\varvec{\pi }}^{-}(3), \ldots ]\) be the stationary probability vector of \(\mathscr {P}\). Then, \({\varvec{\pi }}^{-}={\varvec{\pi }}^{-} \mathscr {P}\) can be written explicitly as

$$\begin{aligned} {\varvec{\pi }}^{-}(0)= & {} {\varvec{\pi }}^{-}(a-1)\mathbf{\widehat{S}}_{a}+ \sum \limits _{r=2a-1}^{\infty }{\varvec{\pi }}^{-}(r)\mathbf{\widehat{S}}_{r+1}, \end{aligned}$$
(17)
$$\begin{aligned} {\varvec{\pi }}^{-}(1)= & {} {\varvec{\pi }}^{-}(0){\varvec{\varTheta }}+\sum \limits _{r=a}^{2a-2}{\varvec{\pi }}^{-}(r)\mathbf{\widehat{S}}_{r}+\sum \limits _{r=2a-1}^{\infty }{\varvec{\pi }}^{-}(r)\mathbf{S}_{r}, \end{aligned}$$
(18)
$$\begin{aligned} {\varvec{\pi }}^{-}(n)= & {} {\varvec{\pi }}^{-}(n-1){\varvec{\varTheta }}+\sum \limits _{r=a+n-1}^{\infty }{\varvec{\pi }}^{-}(r)\mathbf{S}_{r-n+1},\quad 2\le n\le a-1, \end{aligned}$$
(19)
$$\begin{aligned} {\varvec{\pi }}^{-}(n)= & {} {\varvec{\pi }}^{-}(n-1)\mathbf{S}_0+\sum \limits _{r=a+n-1}^{\infty }{\varvec{\pi }}^{-}(r)\mathbf{S}_{r-n+1}, \quad n\ge a. \end{aligned}$$
(20)

Multiplying (17)–(20) by appropriate powers of z, using \({\varvec{\pi }}^{-*}(z)=\sum \nolimits _{n=0}^{\infty }{\varvec{\pi }}^{-}(n)z^n\) and \(\mathbf{S}(z)=\sum \nolimits _{n=0}^{\infty }{} \mathbf{S}_{n}z^{n}=\int _{0}^{\infty }{} \mathbf{P}^{*}(z,t)dA(t)=\int _{0}^{\infty }\mathrm{{e}}^{\mathbf{L}(z)t}dA(t)=\widetilde{A}(-\mathbf{L}(z))\) [using (3)], we obtain

(21)

where \(\text {adj}[\cdot ]\) and \(\det [\cdot ]\) represent the adjoint matrix and the determinant of a square matrix, respectively.

To determine the vectors \({\varvec{\pi }}^{-}(n)\), \(n\ge 0\), from (21), we first obtain a closed-form analytic expression for each component \(\pi _{j}^{-*}(z)=\sum _{n=0}^{\infty }\pi _{j}^{-}(n)z^n\), \(|z| \le 1\), of the vector p.g.f. \({\varvec{\pi }}^{-*}(z)\) given in (21). Since each component of \({\varvec{\pi }}^{-*}(z)\) is analytic in \(|z| \le 1\), the zeros of \(\det [\mathbf{I}_m-z{\mathbf{S}}(z^{-1})]\) with modulus less than or equal to 1 must also be zeros of the numerator of each component of (21). Consequently, these zeros play no role in the partial-fraction expansion of (21), and what we require are the zeros of \(\det [\mathbf{I}_m-z\mathbf{S}(z^{-1})]\) with modulus greater than 1. According to Chaudhry et al. [10], \(\det [\mathbf{I}_mz-\mathbf{S}(z)]\) has exactly m zeros inside the unit circle \(|z| = 1\) and one zero on the circle \(|z| = 1\). Let these inside zeros be denoted by \(\gamma _{i}\), \(1\le i \le m\). Since \(\det [\mathbf{I}_mz-\mathbf{S}(z)]\) has the m zeros \(\gamma _i\) inside \(|z| = 1\), the function \(\det [\mathbf{I}_m-z\mathbf{S}(z^{-1})]\) has the m zeros \(1/\gamma _i\) outside \(|z| = 1\). We assume that all \(\gamma _i\) \((1\le i \le m)\) are simple zeros. Hence, using the analyticity of \(\pi _{j}^{-*}(z)\) and the partial-fraction method, we have

$$\begin{aligned} \pi _{j}^{-*}(z)=\sum _{n=0}^{a-2}\pi _{j}^{-}(n)z^n+\sum \limits _{i=1}^{m}\frac{k_{ij} (\gamma _i z)^{a-1}}{1-\gamma _i z}, \quad 1\le j \le m, \end{aligned}$$
(22)

where \(k_{ij}\) are constants to be determined.

Substituting \(z=1\) in (22) and using \(\sum \limits _{j=1}^{m}\pi _{j}^{-*}(1)=1\), we have

$$\begin{aligned} \sum _{n=0}^{a-2}\sum _{j=1}^{m}\pi _{j}^{-}(n)+\sum \limits _{j=1}^{m}\sum \limits _{i=1}^{m} \frac{k_{ij}\gamma _i^{a-1}}{1-\gamma _i}=1. \end{aligned}$$
(23)

Now, comparing the coefficients of \(z^{n}\), \(n\ge a-1\), in (22), we have

$$\begin{aligned} \pi _{j}^{-}(n)=\sum \limits _{i=1}^{m}k_{ij}\gamma _i^{n},\quad 1\le j \le m, \end{aligned}$$

and hence

$$\begin{aligned} {\varvec{\pi }}^{-}(n)=\left[ \sum \limits _{i=1}^{m}k_{i1}\gamma _i^{n}, \ldots ,\sum \limits _{i=1}^{m}k_{ij}\gamma _i^{n},\ldots , \sum \limits _{i=1}^{m}k_{im}\gamma _i^{n}\right] ,\quad n\ge a-1. \end{aligned}$$
(24)

Using (24) in (17), we have

$$\begin{aligned}&\bigg [\pi _{1}^{-}(0), \ldots , \pi _{j}^{-}(0),\ldots ,\pi _{m}^{-}(0)\bigg ]=\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{a-1}, \ldots ,\sum \limits _{i=1}^{m}k_{ij}\gamma _i^{a-1},\ldots , \sum \limits _{i=1}^{m}k_{im}\gamma _i^{a-1}\bigg ]\widehat{\mathbf{S}}_{a}\nonumber \\&\quad +\sum \limits _{r=2a-1}^{\infty }\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{r}, \ldots , \sum \limits _{i=1}^{m}k_{ij}\gamma _i^{r},\ldots , \sum \limits _{i=1}^{m}k_{im}\gamma _i^{r}\bigg ]\widehat{\mathbf{S}}_{r+1}. \end{aligned}$$
(25)

Using (24) in (18), we have

$$\begin{aligned}&\bigg [\pi _{1}^{-}(1), \ldots ,\pi _{j}^{-}(1),\ldots , \pi _{m}^{-}(1)\bigg ]= \bigg [\pi _{1}^{-}(0), \dots ,\pi _{j}^{-}(0),\ldots ,\pi _{m}^{-}(0)\bigg ] {\varvec{\varTheta }}\nonumber \\&\quad +\sum \limits _{r=a}^{2a-2}\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{r}, \ldots ,\sum \limits _{i=1}^{m}k_{ij}\gamma _i^{r},\dots , \sum \limits _{i=1}^{m}k_{im}\gamma _i^{r}\bigg ]\widehat{\mathbf{S}}_{r} \nonumber \\&\quad +\sum \limits _{r=2a-1}^{\infty }\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{r}, \ldots ,\sum \limits _{i=1}^{m}k_{ij}\gamma _i^{r},\dots ,\sum \limits _{i=1}^{m}k_{im}\gamma _i^{r}\bigg ]\mathbf{S}_{r}. \end{aligned}$$
(26)

Using (24) in (19), for \(n=2,3,\ldots , a-2\), we have

$$\begin{aligned}&\bigg [\pi _{1}^{-}(n), \ldots ,\pi _{j}^{-}(n),\ldots , \pi _{m}^{-}(n)\bigg ]\nonumber \\&\quad = \bigg [\pi _{1}^{-}(n-1), \dots ,\pi _{j}^{-}(n-1),\dots ,\pi _{m}^{-}(n-1)\bigg ] {\varvec{\varTheta }}\nonumber \\&\qquad +\sum \limits _{r=a+n-1}^{\infty }\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{r}, \ldots ,\sum \limits _{i=1}^{m}k_{ij}\gamma _i^{r},\dots , \sum \limits _{i=1}^{m}k_{im}\gamma _i^{r}\bigg ]\mathbf{S}_{r-n+1}. \end{aligned}$$
(27)

Using (24) in (19), for \(n=a-1\), we have

$$\begin{aligned}&\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{a-1}, \dots ,\sum \limits _{i=1}^{m}k_{ij}\gamma _i^{a-1},\ldots ,\sum \limits _{i=1}^{m}k_{im}\gamma _i^{a-1}\bigg ]\nonumber \\&\quad =\bigg [\pi _{1}^{-}(a-2), \ldots , \pi _{j}^{-}(a-2), \ldots ,\pi _{m}^{-}(a-2)\bigg ] {\varvec{\varTheta }}\nonumber \\&\qquad +\sum \limits _{r=2a-2}^{\infty }\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{r}, \ldots ,\sum \limits _{i=1}^{m}k_{ij}\gamma _i^{r},\dots ,\sum \limits _{i=1}^{m}k_{im}\gamma _i^{r}\bigg ]\mathbf{S}_{r-a+2}. \end{aligned}$$
(28)

Using (24) in (20), for \(n=a,a+1,\ldots , a+m-2\), we have

$$\begin{aligned} \bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{n},\ldots , \sum \limits _{i=1}^{m}k_{ij}\gamma _i^{n} \ldots ,\sum \limits _{i=1}^{m}k_{im}\gamma _i^{n}\bigg ]=&\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{n-1},\ldots , \sum \limits _{i=1}^{m}k_{ij}\gamma _i^{n-1} \ldots ,\sum \limits _{i=1}^{m}k_{im}\gamma _i^{n-1}\bigg ]\mathbf{S}_0 \nonumber \\&+ \sum \limits _{r=a+n-1}^{\infty }\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{r},\ldots , \sum \limits _{i=1}^{m}k_{ij}\gamma _i^{r},\ldots ,\sum \limits _{i=1}^{m}k_{im}\gamma _i^{r}\bigg ]\mathbf{S}_{r-n+1}.\nonumber \\ \end{aligned}$$
(29)

Equations (25)–(29) constitute \(m(a-1)+ m^2\) simultaneous linear equations that are linearly dependent and therefore do not determine a unique solution. Replacing any one of them by the normalization condition (23) yields \(m(a-1)+ m^2\) linearly independent equations in the \(m(a-1)+ m^2\) unknowns \(k_{ij}\) \((1\le i,j \le m)\) and \(\pi _{j}^{-}(n)\) \((0 \le n\le a-2,~1\le j \le m)\). Solving this system determines these unknowns uniquely.
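The zeros \(\gamma _i\) used above have to be located numerically. One convenient (illustrative) route, assuming the sequence \(\{\mathbf{S}_n\}\) has been truncated at some N beyond which its entries are negligible, is to note that \(\det [z\mathbf{I}_m-\sum _{n=0}^{N}\mathbf{S}_nz^n]\) is then a polynomial of degree at most \(m\max (N,1)\); its coefficients can be recovered by sampling on the unit circle, after which a standard polynomial root finder gives the roots strictly inside \(|z|=1\). A sketch (the function name and tolerance are our own choices):

```python
import numpy as np

def gammas_inside_unit_disk(S, tol=1e-8):
    """Roots of det(z I - sum_n S[n] z^n) with |z| < 1, for S = [S_0, ..., S_N]."""
    m = S[0].shape[0]
    N = len(S) - 1
    d = m * max(N, 1)                                  # degree bound of the determinant
    K = d + 1
    zs = np.exp(2j * np.pi * np.arange(K) / K)         # sample points on |z| = 1
    vals = np.array([np.linalg.det(z * np.eye(m) - sum(S[n] * z**n for n in range(N + 1)))
                     for z in zs])
    coeffs = np.fft.fft(vals).real / K                 # coefficients c_j of sum_j c_j z^j
    roots = np.roots(coeffs[::-1])                     # np.roots wants highest degree first
    return roots[np.abs(roots) < 1.0 - tol]
```

Under the stability condition there should be exactly m such roots (one further zero lies on \(|z|=1\)); these, together with the linear system above, determine \(k_{ij}\) and \(\pi _{j}^{-}(n)\), \(0\le n\le a-2\).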

3.1.2 System-Length Distribution at Random Epoch

Here, we obtain the steady-state system-length distribution at a random epoch using Markov renewal theory, which connects the system-length distributions at random and pre-arrival epochs. For this, let \({\varvec{\pi }}(n)=[\pi _1(n),\ldots ,\pi _i(n),\ldots , \pi _m(n)]\), \(n\ge 0\), denote the row vector of order m whose ith component \(\pi _i(n)\) represents the probability that there are n customers in the system and the service process is in phase i at a random epoch.

In order to obtain the steady-state probability vectors \({\varvec{\pi }}(n)\), \(n\ge 0\), let us define the square matrix \({{\varvec{\Phi }}}_n\), \(n\ge 1\), of order m whose (ij)th element describes the limiting probability that n customers are served during an elapsed inter-arrival time of the arrival process, with the service process frozen in phase j from the epoch of the nth service completion, given that the service process was in phase i at the start of the inter-arrival period. Using the Markov renewal theory argument, we have

$$\begin{aligned} {{\varvec{\Phi }}}_n= {\left\{ \begin{array}{ll} \lambda \int _{0}^{\infty }\widetilde{\mathbf{P}}(n,t)[1-A(t)]\mathrm{{d}}t, \quad \text {for Model I},\\ \sum \limits _{k=n}^{\infty }{\varvec{\Omega }}_k, \quad \text {for Model II}. \end{array}\right. } \end{aligned}$$
(30)

Using (16) in (30), for Model I, we have

$$\begin{aligned} {{\varvec{\Phi }}}_n= & {} \lambda \sum \limits _{r=0}^{n-1} \int _{0}^{\infty }\int _{0}^{t}{} \mathbf{P}(r,x)dx\widehat{\mathbf{L}}_{n-r}[1-A(t)]dt\nonumber \\= & {} \sum \limits _{r=0}^{n-1}{} \mathbf{J}(r)\widehat{\mathbf{L}}_{n-r}, \quad n \ge 1, \end{aligned}$$

where

$$\begin{aligned} \mathbf{J}(n)=\lambda \int _{0}^{\infty }\int _{0}^{t}{} \mathbf{P}(n,x)\mathrm{{d}}x[1-A(t)]\mathrm{{d}}t,\quad n\ge 0. \end{aligned}$$
(31)

Using (1) in (31), for \(n=0\), we have

$$\begin{aligned} \mathbf{J}(0)= & {} \lambda \int _{0}^{\infty }\int _{0}^{t}\mathrm{{e}}^{\mathbf{L}_0x}\mathrm{{d}}x[1-A(t)]\mathrm{{d}}t, \nonumber \\= & {} (\mathbf{I}_m-{\varvec{\Omega }}_0)(-\mathbf{L}_0)^{-1}, \end{aligned}$$

using (11) and the fact that \(\mathrm{{e}}^{\mathbf{L}_0t}=\mathbf{P}(0,t)\), \(t\ge 0\).

Using (1) in (31), for \(n\ge 1\), we have

$$\begin{aligned} \mathbf{J}(n)= & {} \lambda \int _{0}^{\infty }\int _{0}^{t}\bigg [\sum \limits _{k=0}^{n-1}\mathbf{P}(k,x)\mathbf{L}_{n-k}-\frac{\mathrm{{d}}}{\mathrm{{d}}x}\mathbf{P}(n,x)\bigg ]\mathrm{{d}}x[1-A(t)]\mathrm{{d}}t(-\mathbf{L}_0)^{-1}, \end{aligned}$$

which, considering (11) and (31), leads to

$$\begin{aligned} \mathbf{J}(n)= & {} \bigg [\sum \limits _{k=0}^{n-1}{} \mathbf{J}(k)\mathbf{L}_{n-k}-{\varvec{\Omega }}_n \bigg ](-\mathbf{L}_0)^{-1}, \quad n\ge 1. \end{aligned}$$

Now, using the Markov renewal theory argument, see, for example, Çinlar [28] or Lucantoni and Neuts [29], we obtain

$$\begin{aligned} {\varvec{\pi }}(0)= & {} {\varvec{\pi }}^{-}(a-1){\varvec{\Phi }}_{a}+ \sum \limits _{r=2a-1}^{\infty }{\varvec{\pi }}^{-}(r){\varvec{\Phi }}_{r+1},\\ {\varvec{\pi }}(1)= & {} {\varvec{\pi }}^{-}(0){\varvec{\Upsilon }}+\sum \limits _{r=a}^{2a-2} {\varvec{\pi }}^{-}(r){\varvec{\Phi }}_{r}+\sum \limits _{r=2a-1}^{\infty }{\varvec{\pi }}^{-}(r){\varvec{\Omega }}_{r},\\ {\varvec{\pi }}(n)= & {} {\varvec{\pi }}^{-}(n-1){\varvec{\Upsilon }}+\sum \limits _{r=a+n-1}^{\infty } {\varvec{\pi }}^{-}(r){\varvec{\Omega }}_{r-n+1},\quad 2\le n\le a-1, \\ {\varvec{\pi }}(n)= & {} {\varvec{\pi }}^{-}(n-1){\varvec{\Omega }}_0+\sum \limits _{r=a+n-1}^{\infty } {\varvec{\pi }}^{-}(r){\varvec{\Omega }}_{r-n+1}, \quad n\ge a, \end{aligned}$$

where

$$\begin{aligned} {\varvec{\Upsilon }}= {\left\{ \begin{array}{ll} \mathbf{I}_m, \quad \text {for Model I}\\ \sum \limits _{n=0}^{\infty }{\varvec{\Omega }}_n , \quad \text {for Model II.} \end{array}\right. } \end{aligned}$$

The mean system length \((L_s)\) can be obtained as \(L_s=\sum \nolimits _{n=1}^{\infty }n{\varvec{\pi }}(n)\mathbf{e}.\) By Little’s law, the mean sojourn time is \(W_s=\dfrac{L_s}{\lambda }\).
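Given a truncated sequence \({\varvec{\pi }}(0),\ldots ,{\varvec{\pi }}(N)\), both measures follow immediately; a minimal sketch (the truncation level N is our choice and should be large enough that the neglected tail is negligible):

```python
import numpy as np

def mean_measures(pi_list, lam):
    """Mean system length L_s and mean sojourn time W_s = L_s / lambda
    from a truncated sequence pi(0), ..., pi(N) of random-epoch vectors."""
    Ls = sum(n * vec.sum() for n, vec in enumerate(pi_list))
    return Ls, Ls / lam
```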

3.1.3 System-Length Distribution at Post-departure Epoch

Here, we derive the post-departure epoch probabilities, i.e. the probabilities observed immediately after the service completion of a batch. Let \({\varvec{\pi }}^{+}(n)=[\pi _{1}^{+}(n),\ldots ,\pi _{i}^{+}(n),\ldots ,\pi _{m}^{+}(n)]\), \(n\ge 0\), be the row vectors of order m, where the ith component \(\pi _{i}^{+}(n)\) denotes the post-departure epoch probability that n customers are in the system immediately after the service completion of a batch and the service process is in phase i. Then, using the ‘rate in = rate out’ argument (for more details, see Kim et al. [30]), we have

$$\begin{aligned} {\varvec{\pi }}^{+}(n)=\frac{\sum \limits _{k=a}^{\infty }{\varvec{\pi }}(n+k)\mathbf{L}_k}{\sum \limits _{n=a}^{\infty }{\varvec{\pi }}(n)\sum \limits _{k=a}^{n}\mathbf{L}_k \mathbf{e}},\quad n\ge 0. \end{aligned}$$

3.2 Fixed Size Batch Service

In this section, we consider that the server serves the customers according to the BMSP with fixed batch size ‘a’, i.e. \(\mathbf{L}_k=\mathbf{0}\), for all \(k \in {\mathbb {N}}- \{a\}\), where \({\mathbb {N}}\) is the set of natural numbers.

3.2.1 System-Length Distribution at Pre-arrival Epoch

For two consecutive embedded points of the system defined in Sect. 3.1.1, the one-step transition probability matrix \(\mathscr {P}\) for fixed size batch service is given by

figure b

Hence, \({\varvec{\pi }}^{-}={\varvec{\pi }}^{-} \mathscr {P}\) yields

$$\begin{aligned} {\varvec{\pi }}^{-}(0)= & {} \sum \limits _{r=1}^{\infty }{\varvec{\pi }}^{-}(ra-1)\mathbf{\widehat{S}}_{ra}, \end{aligned}$$
(32)
$$\begin{aligned} {\varvec{\pi }}^{-}(n)= & {} {\varvec{\pi }}^{-}(n-1){\varvec{\varTheta }}+\sum \limits _{r=1}^{\infty }{\varvec{\pi }}^{-}(ra+n-1)\mathbf{\widehat{S}}_{ra},\quad 1\le n\le a-1, \end{aligned}$$
(33)
$$\begin{aligned} {\varvec{\pi }}^{-}(n)= & {} \sum \limits _{r=0}^{\infty }{\varvec{\pi }}^{-}(ra+n-1)\mathbf{S}_{ra}, \quad n\ge a. \end{aligned}$$
(34)

Multiplying (32)–(34) by relevant powers of z and adding them, we obtain

(35)

Proceeding along the same lines as in Sect. 3.1.1, we have

$$\begin{aligned} \pi _{j}^{-*}(z)=\sum _{n=0}^{a-2}\pi _{j}^{-}(n)z^n+\sum \limits _{i=1}^{m}\frac{k_{ij} (z \gamma _i)^{a-1}}{1-\gamma _i z}, \quad 1\le j \le m, \end{aligned}$$
(36)

subject to

$$\begin{aligned} \sum _{n=0}^{a-2}\sum _{j=1}^{m}\pi _{j}^{-}(n) +\sum \limits _{j=1}^{m}\sum \limits _{i=1}^{m} \frac{k_{ij}\gamma _i^{a-1}}{1-\gamma _i}=1. \end{aligned}$$
(37)

Now, comparing the coefficients of \(z^{n}\), \(n\ge a-1\), on both sides of (36), we have

$$\begin{aligned} \pi _{j}^{-}(n)=\sum \limits _{i=1}^{m}k_{ij}\gamma _i^{n},\quad 1\le j \le m, \end{aligned}$$

and hence

$$\begin{aligned} {\varvec{\pi }}^{-}(n)=\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{n},\ldots , \sum \limits _{i=1}^{m}k_{ij}\gamma _i^{n},\ldots ,\sum \limits _{i=1}^{m}k_{im}\gamma _i^{n}\bigg ],\quad n\ge a-1. \end{aligned}$$
(38)

Using (38) in (32), we have

$$\begin{aligned}&\bigg [\pi _{1}^{-}(0),\ldots ,\pi _{j}^{-}(0),\ldots ,\pi _{m}^{-}(0)\bigg ]\nonumber \\&\quad =\sum \limits _{r=1}^{\infty }\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{ra-1}, \ldots ,\sum \limits _{i=1}^{m}k_{ij}\gamma _i^{ra-1},\ldots ,\sum \limits _{i=1}^{m}k_{im}\gamma _i^{ra-1}\bigg ]\widehat{\mathbf{S}}_{ra}. \end{aligned}$$
(39)

Using (38) in (33), we have

$$\begin{aligned}&\bigg [\pi _{1}^{-}(n),\ldots ,\pi _{j}^{-}(n),\ldots ,\pi _{m}^{-}(n)\bigg ]\nonumber \\&\quad =\bigg [\pi _{1}^{-}(n-1), \ldots ,\pi _{j}^{-}(n-1),\ldots ,\pi _{m}^{-}(n-1)\bigg ] {\varvec{\varTheta }}\nonumber \\&\qquad + \sum \limits _{r=1}^{\infty }\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{ra+n-1},\ldots , \sum \limits _{i=1}^{m}k_{ij}\gamma _i^{ra+n-1}\ldots ,\sum \limits _{i=1}^{m}k_{im}\gamma _i^{ra+n-1}\bigg ]\widehat{\mathbf{S}}_{ra},\nonumber \\&\qquad n=1,2,\ldots , a-2, \end{aligned}$$
(40)

Using (38) in (33), for \(n=a-1\), we have

$$\begin{aligned}&\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{a-1},\ldots , \sum \limits _{i=1}^{m}k_{ij}\gamma _i^{a-1},\ldots ,\sum \limits _{i=1}^{m}k_{im}\gamma _i^{a-1}\bigg ]\nonumber \\&\quad = \bigg [\pi _{1}^{-}(a-2),\ldots , \pi _{j}^{-}(a-2),\ldots ,\pi _{m}^{-}(a-2)\bigg ] {\varvec{\varTheta }}\nonumber \\&\qquad +\sum \limits _{r=1}^{\infty }\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{ra+n-1},\ldots , \sum \limits _{i=1}^{m}k_{ij}\gamma _i^{ra+n-1},\ldots , \sum \limits _{i=1}^{m}k_{im}\gamma _i^{ra+n-1}\bigg ]\widehat{\mathbf{S}}_{ra}. \end{aligned}$$
(41)

Using (38) in (34), for \(n=a,a+1,\ldots , a+m-2\), we have

$$\begin{aligned}&\bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{n},\ldots , \sum \limits _{i=1}^{m}k_{ij}\gamma _i^{n},\ldots , \sum \limits _{i=1}^{m}k_{im}\gamma _i^{n}\bigg ]\nonumber \\&\quad =\sum \limits _{r=0}^{\infty } \bigg [\sum \limits _{i=1}^{m}k_{i1}\gamma _i^{ra+n-1},\ldots , \sum \limits _{i=1}^{m}k_{ij}\gamma _i^{ra+n-1},\ldots ,\sum \limits _{i=1}^{m}k_{im} \gamma _i^{ra+n-1}\bigg ]\mathbf{S}_{ra}. \end{aligned}$$
(42)

Equations (39)–(42) constitute \(m(a-1)+ m^2\) simultaneous linear equations that are linearly dependent and therefore do not determine a unique solution. Replacing any one of them by the normalization condition (37) yields \(m(a-1)+ m^2\) linearly independent equations in the \(m(a-1)+ m^2\) unknowns \(k_{ij}\) \((1\le i,j \le m)\) and \(\pi _{j}^{-}(n)\) \((0 \le n\le a-2,~1\le j \le m)\). Solving this system determines these unknowns uniquely.

3.2.2 System-Length Distribution at Random Epoch

Based on the same Markov renewal theory argument used in Sect. 3.1.2 for random size batch service, the system-length distribution at random epoch for fixed size batch service can be expressed as

$$\begin{aligned} {\varvec{\pi }}(0)= & {} \sum \limits _{r=1}^{\infty }{\varvec{\pi }}^{-}(ra-1){\varvec{\Phi }}_{ra}, \\ {\varvec{\pi }}(n)= & {} {\varvec{\pi }}^{-}(n-1){\varvec{\Upsilon }}+\sum \limits _{r=1}^{\infty }{\varvec{\pi }}^{-}(ra+n-1) {\varvec{\Phi }}_{ra},\quad 1\le n\le a-1, \\ {\varvec{\pi }}(n)= & {} \sum \limits _{r=0}^{\infty }{\varvec{\pi }}^{-}(ra+n-1){\varvec{\Omega }}_{ra}, \quad n\ge a. \end{aligned}$$

The mean system length \((L_s)\) can be obtained as \(L_s=\sum \nolimits _{n=1}^{\infty }n{\varvec{\pi }}(n)\mathbf{e}.\) By Little’s law, the mean sojourn time is \(W_s=\dfrac{L_s}{\lambda }\).

3.2.3 System-Length Distribution at Post-departure Epoch

Based on the same argument used in Sect. 3.1.3 for random size batch service, the system-length distribution at post-departure epoch for fixed size batch service can be expressed as

$$\begin{aligned} {\varvec{\pi }}^{+}(n)=\frac{{\varvec{\pi }}(n+a)\mathbf{L}_a}{\sum \limits _{n=a}^{\infty }{\varvec{\pi }}(n)\mathbf{L}_a \mathbf{e}},\quad n\ge 0. \end{aligned}$$

Remark 1

For fixed-size service with \(a=1\), both sets of Eqs. (17)–(20) and (32)–(34) reduce to

$$\begin{aligned} {\varvec{\pi }}^{-}(0)= & {} \sum \limits _{r=0}^{\infty }{\varvec{\pi }}^{-}(r)\mathbf{\widehat{S}}_{r+1}, \end{aligned}$$
(43)
$$\begin{aligned} {\varvec{\pi }}^{-}(n)= & {} \sum \limits _{r=n-1}^{\infty }{\varvec{\pi }}^{-}(r)\mathbf{S}_{r-n+1}, \quad n\ge 1. \end{aligned}$$
(44)

Using the convention \(\sum _{k=k_1}^{k_2}(\cdot )=0\) for \(k_1 > k_2\), both (21) and (35) reduce to

$$\begin{aligned} {\varvec{\pi }}^{-*}(z)=\frac{\sum \limits _{n=0}^{\infty }{\varvec{\pi }}^{-}(n)\left[ \widehat{\mathbf{S}}_{n+1}- \sum \limits _{r=n+1}^{\infty }\mathbf{S}_{r}z^{n-r+1} \right] \text {adj}[\mathbf{I}_m-z\mathbf{S}(z^{-1})]}{\det [\mathbf{I}_m-z\mathbf{S}(z^{-1})]}. \end{aligned}$$
(45)

The above results match those of Chaudhry et al. [10] for the GI/MSP/1 queue.

4 Numerical Results

This section provides numerical results that validate our analytical findings for different (heavy-tailed) inter-arrival time distributions and BMSP service matrices \((\mathbf{L}_0 ,\mathbf{L}_{a} ,\mathbf{L}_{a+1} ,\ldots )\), presented in the self-explanatory Tables 1, 2, 3, 4, 5, 6, 7 and 8 and in graphs. The bottoms of the tables contain some relevant performance measures. Table 2 shows that the relation \(\overline{{\varvec{\pi }}}=\sum _{n=0}^{a-1}{\varvec{\pi }}(n)+\sum _{n=a}^{\infty }{\varvec{\pi }}(n)\) holds, since the service phase changes during idle periods of the system. Further, Table 5 shows that the relation \(\sum _{n=0}^{a-1}{\varvec{\pi }}(n)\mathbf{e}=1-\rho \) holds, since the service phase does not change during idle periods of the system; this relation holds only for fixed batch-size service. These facts confirm the correctness of our analytical and numerical results. From a practical point of view, let \(N_0<\infty \) denote the maximum service batch size of the service process, i.e. \(\mathbf{L}_k=\mathbf{0}\) for all \(k \ge N_0+1 \).

Table 1 System-length distribution at pre-arrival epoch for Model II
Table 2 System-length distribution at random epoch for Model II
Table 3 System-length distribution at post-departure epoch for Model II
Table 4 System-length distribution at pre-arrival epoch for Model I
Table 5 System-length distribution at random epoch for Model I
Table 6 System-length distribution at post-departure epoch for Model I

Example 1

The goal of this example is to validate the correctness of our analytical results for the WB/BMSP/1 queue, where WB stands for the Weibull distribution. The system-length distributions at various time epochs are presented in Tables 1, 2 and 3. The bottom of Table 2 contains the mean system length and the mean sojourn time of an arriving customer. The probability density function (p.d.f.) and C.D.F. of the Weibull distribution are taken as \(a(x)=\frac{c}{\beta }(\frac{x}{\beta })^{c-1}\mathrm{{e}}^{-(\frac{x}{\beta })^c}\), \(x\ge 0\), and \(A(x)=1-\mathrm{{e}}^{-(\frac{x}{\beta })^c}\), \(x\ge 0\), respectively, with scale parameter \(\beta =0.025\) and shape parameter \(c=1.85\). This leads to \(\lambda =\frac{1}{\beta \Gamma (1+\frac{1}{c})}=45.03424618\). Our main focus is to obtain \(\mathbf{S}(z)\) and \(\mathbf{S}_n\), \(n\ge 0\), numerically, for which we need the L.–S.T. of the Weibull distribution. Since the L.–S.T. of the Weibull distribution has no closed form, we obtain an approximate L.–S.T. \(\widetilde{A}(s)\) using the GTAM described by Shortle et al. [31]. Following Shortle et al. [31], we take \(M=100\) probabilities \(y_i=1-\widehat{r}^i\), where \(y_i=A(x_i)\), \(1\le i\le M\), for some \(\widehat{r}\) in (0, 1). Hence, \(\mathrm{{e}}^{-(\frac{x_i}{\beta })^c}=\widehat{r}^i\), which gives \(x_i=\beta [i~\mathrm{{ln}}(\frac{1}{\widehat{r}})]^{^{\frac{1}{c}}}\). Assign the probability \(p_i\) to each point \(x_i\) as

$$\begin{aligned} p_{_{1}}=\frac{y_1+y_2}{2},\quad p_i=\frac{y_{i+1}-y_{i-1}}{2}, \quad i=2,3,\ldots , M-1, \quad p_{_{M}}=1-\frac{y_{_{M-1}}+y_{_{M}}}{2}. \end{aligned}$$

All the above \(p_i\)’s can be written as functions of \(\widehat{r}\) as

$$\begin{aligned} p{_{_1}}=\frac{2-\widehat{r}-\widehat{r}^2}{2},\quad p_i=\frac{\widehat{r}^{i-1}-\widehat{r}^{i+1}}{2}, \quad i=2,3,\ldots , M-1,\quad p_{_{M}}=\frac{\widehat{r}^{M-1}+\widehat{r}^{M}}{2}. \end{aligned}$$

To determine \(\widehat{r}\), a binary search is performed on \(\sum _{i=1}^{M}p_{i}x_{i}=\frac{1}{\lambda }\), which gives \(\widehat{r}=0.95605390\). Thus, we have an approximate L.–S.T. of the Weibull inter-arrival time distribution as

$$\begin{aligned} \widetilde{A}(s)=\sum \limits _{i=1}^{M}p_i\mathrm{{e}}^{-sx_i}. \end{aligned}$$
(46)
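The discretization and the binary search for \(\widehat{r}\) are easy to reproduce; a minimal sketch follows (Python/NumPy; the bisection bracket \([0.5,1)\) and the helper names are our own choices, and the bisection assumes that \(\sum _i p_ix_i-1/\lambda \) changes sign exactly once on that bracket).

```python
import math
import numpy as np

beta, c, M = 0.025, 1.85, 100                     # Weibull parameters of Example 1
lam = 1.0 / (beta * math.gamma(1.0 + 1.0 / c))    # arrival rate

def discretize(r_hat):
    """Support points x_i and weights p_i of the GTAM discretization."""
    i = np.arange(1, M + 1)
    x = beta * (i * math.log(1.0 / r_hat)) ** (1.0 / c)
    p = 0.5 * (r_hat ** (i - 1) - r_hat ** (i + 1))
    p[0] = 0.5 * (2.0 - r_hat - r_hat ** 2)
    p[-1] = 0.5 * (r_hat ** (M - 1) + r_hat ** M)
    return x, p

def mean_gap(r_hat):                              # sum_i p_i x_i - 1/lambda
    x, p = discretize(r_hat)
    return p @ x - 1.0 / lam

lo, hi = 0.5, 1.0 - 1e-9                          # bracket containing the reported r_hat
for _ in range(100):                              # plain bisection on the sign of mean_gap
    mid = 0.5 * (lo + hi)
    if mean_gap(lo) * mean_gap(mid) > 0.0:
        lo = mid
    else:
        hi = mid
r_hat = 0.5 * (lo + hi)

def A_tilde(s):
    """Approximate L.-S.T. of the Weibull inter-arrival time, as in (46)."""
    x, p = discretize(r_hat)
    return float(np.sum(p * np.exp(-s * x)))
```

Running this should give a value of \(\widehat{r}\) close to the one reported above.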

Now, we convert the transcendental function \(\widetilde{A}(s)\) given in (46) to a rational function using the Padé approximation method (see Akar and Arikan [32]). Applying Padé(3, 4) to (46), we have

$$\begin{aligned} \widetilde{A}(s)\approx \frac{1.00000000+0.00483753s+0.00002094s^2-3.38987909 \times 10^{-8}s^3}{1.00000000+0.02704285s+0.0030138s^2+0.00000165s^3+3.81608327 \times 10^{-9}s^4}. \end{aligned}$$
(47)
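The Padé step can be reproduced, for example, from the Taylor coefficients of \(\widetilde{A}(s)\) at \(s=0\), namely \(a_k=\sum _i p_i(-x_i)^k/k!\). A sketch using SciPy is given below (the helper name is ours, and the resulting coefficients are only expected to be close to (47), since they depend on the computed \(\widehat{r}\) and on floating-point details):

```python
import math
import numpy as np
from scipy.interpolate import pade

def pade_lst(x, p, num_order=3, den_order=4):
    """Pade(num_order, den_order) approximant of A~(s) = sum_i p_i exp(-s x_i)."""
    K = num_order + den_order + 1                      # Taylor coefficients needed
    a = [np.sum(p * (-x) ** k) / math.factorial(k) for k in range(K)]
    num, den = pade(a, den_order)                      # numpy.poly1d pair, num/den ~ A~(s)
    num, den = num / den(0), den / den(0)              # match the psi_0 = 1 normalization of (9)
    return num, den
```

Calling pade_lst with the x_i and p_i produced by the previous sketch yields a rational function of the same orders as (47), which is then evaluated at \(s\mathbf{I}_m=-\mathbf{L}(z)\) or \(s=\theta -\theta z\) exactly as below.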

Substituting \(s\mathbf{I}_m=-\mathbf{L}(z)\) in (47) and using \(\mathbf{S}(z)\approx \widetilde{A}(-\mathbf{L}(z))\), we obtain

$$\begin{aligned} \mathbf{S}(z)\approx & {} \bigg [\mathbf{I}_{m}+0.00483753(-\mathbf{L}(z))+0.00002094(-\mathbf{L}(z))^{2}-3.38987909 \times 10^{-8}(-\mathbf{L}(z))^{3}\bigg ] \\&\times \bigg [\mathbf{I}_{m}+0.02704285(-\mathbf{L}(z))+0.0030138(-\mathbf{L}(z))^{2}+0.00000165(-\mathbf{L}(z))^{3}\\&+3.81608327 \times 10^{-9}(-\mathbf{L}(z))^{4}\bigg ]^{-1}. \end{aligned}$$

Substituting \(s=\theta -\theta z\) in (47), we obtain

$$\begin{aligned} \widetilde{A}(\theta -\theta z)\approx \frac{0.72819146-0.05519272z+0.00295301z^{2}+0.00007734z^{3}}{1.00000000-0.37774905z+0.05794233z^{2}-0.00429479z^{3}+0.00013060z^{4}}.\nonumber \\ \end{aligned}$$
(48)

In order to determine \(\mathbf{{S}}_n\), \(n\ge 0\), from (6), we evaluate \(\sigma _{k}\), \(k\ge 0\), from (10) by using (48) in (8).

Now, we consider the BMSP for Tables 1, 2 and 3 as follows:

$$\begin{aligned} \mathbf{L}_0= & {} \left[ \begin{array}{cccc} -12.0&{} 1.0&{}1.0&{}1.0\\ 2.0&{} -10.0&{} 1.0&{}1.0\\ 3.0&{} 2.0&{} -14.0&{}1.0\\ 1.0&{} 2.0&{} 2.0&{} -15.0\end{array}\right] ,\,\mathbf{L}_4= \left[ \begin{array}{cccc} 0.5&{} 0.5&{} 0.7&{}0.3\\ 0.5&{}0.8&{}0.5&{} 0.2\\ 0.9&{} 0.2&{} 0.1&{}0.8\\ 0.6&{} 0.4&{}0.5&{}0.5\end{array} \right] ,\,\mathbf{L}_6= \left[ \begin{array}{cccc} 0.3&{} 0.2&{} 0.1&{} 0.4\\ 0.1&{}0.6&{}0.2&{}0.1\\ 0.7&{} 0.1&{} 0.1&{} 0.1\\ 0.5&{}0.2&{}0.2&{}0.1\end{array}\right] ,\\ \mathbf{L}_9= & {} \left[ \begin{array}{cccc} 0.9&{} 0.3&{} 0.2&{} 0.1\\ 0.2&{} 0.3&{} 0.1&{}0.4\\ 0.6&{} 0.3&{} 0.4&{} 0.2\\ 0.8&{} 0.7&{}0.3&{} 0.2\end{array}\right] ,\, \mathbf{L}_{10}=\left[ \begin{array}{cccc} 0.7&{} 0.6&{}0.8&{} 0.9\\ 0.1&{} 0.2&{} 0.3&{} 0.1\\ 0.2&{}0.4&{}0.1&{}0.3\\ 0.8&{}0.9&{}0.7&{}0.6\end{array} \right] ,{\mathbf{L}}_{12}=\left[ \begin{array}{cccc} 0.3&{}0.7&{}0.4&{}0.1\\ 0.5&{}0.3&{} 0.1&{}0.4\\ 0.7&{}0.8&{}0.5&{}0.5\\ 0.8&{}0.2&{}0.3&{}0.7\end{array}\right] , \end{aligned}$$

with \( \mathbf{L}_k=\mathbf{0},\; k \in {\mathbb {N}}- \{4,6,9,10,12\}\). Note that the input matrices of this BMSP are chosen so that the minimum batch size is \(a=4\) and the maximum batch size is \(N_0 =12\). This yields

$$\begin{aligned} \overline{{\varvec{\pi }}} =\left[ \begin{array}{ccccc} 0.32221459&0.32234715&0.18821449&0.16722377 \end{array} \right] , \end{aligned}$$

with \(\mu ^{*}=66.11195378 \), and hence the traffic intensity \(\rho =0.68118160\).

Example 2

The goal of this example is to validate the correctness of our analytical results for the LN/BMSP/1 queue, where LN stands for the lognormal distribution. The system-length distributions at various time epochs are presented in Tables 4, 5 and 6. The bottom of Table 5 contains the mean system length and the mean sojourn time of an arriving customer. The p.d.f. and C.D.F. of the lognormal inter-arrival time distribution are taken as \(a(x)=\frac{1}{x\alpha \sqrt{2\pi }}\mathrm{{e}}^{-\frac{(ln(x)-\beta )^2}{2\alpha ^2}}\) and \(A(x)=\frac{1}{2}+\frac{1}{2}\text {erf}\left[ \frac{ln(x)-\beta }{\sqrt{2}\alpha }\right] \), \(x>0\), respectively, with \(\alpha =0.55\) and \(\beta =1.92\). The mean arrival rate is \(\lambda \equiv \dfrac{1}{\mathrm{{e}}^{(\beta +\frac{\alpha ^2}{2})}}=0.12602815\). As in Example 1, our main focus is to obtain \(\mathbf{S}(z)\) and \(\mathbf{S}_n\), \(n\ge 0\), numerically. Since the L.–S.T. of the lognormal distribution has no closed form, we obtain an approximate L.–S.T. \(\widetilde{A}(s)\) using the GTAM described by Shortle et al. [31]. Following Shortle et al. [31], we take \(M=100\) probabilities \(y_i=1-\widehat{r}^i\), where \(y_i=A(x_i)\), \(1\le i\le M\), for some \(\widehat{r}\) in (0, 1). Hence, \(\frac{1}{2}+\frac{1}{2}\text {erf}\left[ \frac{\mathrm{{ln}}(x_i)-\beta }{\sqrt{2}\alpha }\right] =1-\widehat{r}^i\) implies that \(x_i=\mathrm{{e}}^{\beta +\sqrt{2}\alpha \text {erf}^{-1}(1-2\widehat{r}^i)}\), where \(\text {erf}^{-1}[x]\) is the inverse of the error function \(\text {erf}[x]\). To evaluate \(\text {erf}^{-1}(1-2\widehat{r}^i)\), we use the command \( {RootOf}({\mathrm{{erf}}}(_{-}Z)-1+2\widehat{r}^i)\) in Maple. Assign the probability \(p_i\) to each point \(x_i\) as

$$\begin{aligned} p_{_{1}}=\frac{y_1+y_2}{2},\quad p_i=\frac{y_{i+1}-y_{i-1}}{2}, \quad i=2,3,\ldots , M-1,\quad p_{_{M}}=1-\frac{y_{_{M-1}}+y_{_{M}}}{2}. \end{aligned}$$

All the above \(p_i\)’s can be written as functions of \(\widehat{r}\) as

$$\begin{aligned} p{_{_1}}=\frac{2-\widehat{r}-\widehat{r}^2}{2},\quad p_i=\frac{\widehat{r}^{i-1}-\widehat{r}^{i+1}}{2}, \quad i=2,3,\ldots , M-1,\quad p_{_{M}}=\frac{\widehat{r}^{M-1}+\widehat{r}^{M}}{2}. \end{aligned}$$

To determine \(\widehat{r}\), a binary search is performed on \(\sum _{i=1}^{M}p_{i}x_{i}=\frac{1}{\lambda }\), which gives \(\widehat{r}=0.94774845\). Thus, we have an approximate L.–S.T. of the lognormal inter-arrival time distribution as

$$\begin{aligned} \widetilde{A}(s)=\sum \limits _{i=1}^{M}p_i\mathrm{{e}}^{-sx_i}. \end{aligned}$$
(49)

Now, we convert the transcendental function \(\widetilde{A}(s)\) given in (49) to a rational function using the Padé approximation method (see Akar and Arikan [32]). Applying Padé(3, 4) to (49), we have

$$\begin{aligned} \widetilde{A}(s)\approx \frac{1.00000000+0.80371222s+5.29653694s^2-7.04794571s^3}{1.00000000+8.73844756s+32.91188263s^2+78.68204564s^3+110.95800202 s^4}. \end{aligned}$$
(50)

Substituting \(s\mathbf{I}_m=-\mathbf{L}(z)\) in (50) and using \(\mathbf{S}(z)\approx \widetilde{A}(-\mathbf{L}(z))\), we obtain

$$\begin{aligned} \mathbf{S}(z)\approx & {} \bigg [\mathbf{I}_{m}+0.80371222(-\mathbf{L}(z))+5.29653694(-\mathbf{L}(z))^{2}-7.04794571(-\mathbf{L}(z))^{3}\bigg ]\\&\times \bigg [\mathbf{I}_{m}+8.73844756(-\mathbf{L}(z))+32.91188263(-\mathbf{L}(z))^{2}+78.68204564(-\mathbf{L}(z))^{3}\\&+110.95800202(-\mathbf{L}(z))^{4}\bigg ]^{-1}. \end{aligned}$$

Substituting \(s=\theta -\theta z\) in (50), we obtain

$$\begin{aligned} \widetilde{A}(\theta -\theta z)\approx \frac{0.02655234+0.01575050z-0.05885442z^{2}+0.03133293z^{3}}{1.00000000-2.89469203z+3.25076963z^{2}-1.67179655z^{3}+0.33050029z^{4}}. \end{aligned}$$
(51)

In order to determine \(\mathbf{{S}}_n\), \(n\ge 0\), from (6), we evaluate \(\sigma _{k}\), \(k\ge 0\), from (10) by using (51) in (8).

Now, we consider the BMSP for Tables 4, 5, 6 as follows:

$$\begin{aligned} \mathbf{L}_0= & {} \left[ \begin{array}{ccc} -0.46&{}0.10&{}0.30\\ 0.30&{} -0.45&{}0.10\\ 0.20&{}0.40&{} -0.67\end{array}\right] ,\,\mathbf{L}_6= \left[ \begin{array}{cccc} 0.03&{} 0.01&{} 0.02\\ 0.02&{} 0.01&{} 0.02\\ 0.02&{} 0.01&{} 0.04\end{array} \right] ,\;\hbox { with}\; \mathbf{L}_k=\mathbf{0},\; k \in {\mathbb {N}}- \{6\}. \end{aligned}$$

Note that the BMSP is considered with fixed batch size \(a=N_0=6\). This yields

$$\begin{aligned} \overline{{\varvec{\pi }}} =\left[ \begin{array}{cccc} 0.39141631&0.34420601&0.26437768 \end{array} \right] , \end{aligned}$$

with \(\mu ^{*}=0.35521030\) and hence the traffic intensity \(\rho =0.35479869\).

Example 3

The goal of this example is to obtain numerical results for the GI/MSP/1 queue from the GI/BMSP/1 queue when \(a=1\) and to generate some graphs showing different aspects of the model. In this regard, we choose the Pareto distribution as the inter-arrival time distribution. The p.d.f. and C.D.F. of the Pareto inter-arrival time distribution are taken as \(a(x)=\frac{\theta c^\theta }{(c+x)^{\theta +1}}\), \(x\ge 0\), and \(A(x)=1-\frac{c^\theta }{(c+x)^{\theta }}\), \(x\ge 0\), respectively, with shape parameter \(\theta =2.75\) and scale parameter \(c=1.25\). This leads to \(\lambda =(\theta -1)/c= 1.40\). Since the L.–S.T. of the Pareto distribution cannot be expressed in closed form, we obtain an approximate L.–S.T. \(\widetilde{A}(s)\) using the GTAM described by Shortle et al. [31]. Following the same procedure as in Examples 1 and 2, we obtain the approximate L.–S.T. as

$$\begin{aligned} \widetilde{A}(s)\approx \frac{1.00000000+9.18673177s+30.15079569s^{2} +32.86111075s^{3}}{1.00000000+9.90101748s+36.20987447s^{2}+51.77863653s^{3}+17.05266162s^{4}}. \end{aligned}$$
(52)

To generate numerical results for the GI/MSP/1 queue from the GI/BMSP/1 queueing model when \(a=1\), we consider the MSP as

$$\begin{aligned} \mathbf{L}_0= \left[ \begin{array}{ccc}-2.5&{}0.43&{}0.27\\ 0.55&{}-2.75&{}0.25\\ 0.50&{}0.14&{}-2.68 \end{array} \right] ,\quad \mathbf{L}_1= \left[ \begin{array}{cccc} 0.40&{}0.80&{}0.60\\ 0.70&{}0.75&{}0.50\\ 0.90&{}0.44&{}0.70\end{array} \right] . \end{aligned}$$

This yields

$$\begin{aligned} \overline{{\varvec{\pi }}} =\left[ \begin{array}{cccc} 0.38619556&0.32210353&0.29170091 \end{array} \right] , \end{aligned}$$

with \(\mu ^{*}=1.91832375\) and hence the traffic intensity \(\rho =0.72980382\). The numerical results for the GI/MSP/1 queue obtained by the roots method of this paper and by the matrix-geometric method of Samanta [17] are presented in Tables 7 and 8, respectively.

Table 7 System-length distribution at pre-arrival epoch for Model I using roots method
Table 8 System-length distribution at pre-arrival epoch for Model I using matrix-geometric method

Now, in order to show the effect of the traffic intensity (\(\rho \)) on the average system length (\(L_s\)), we choose the rate matrices \(\mathbf{L}_n\), \(n\ge 0\), of order \(m=3\) of the BMSP service process, with maximum and minimum service batch sizes \(N_0=6\) and \(a=3\), respectively, such that each entry of \(\mathbf{L}_n\), \(n\ge 0\), is a function of \(\delta \) \((\delta >0)\), as follows:

$$\begin{aligned} \mathbf{L}_0= & {} \left[ \begin{array}{ccc} -5\delta &{}\dfrac{\delta }{2}&{}\dfrac{\delta }{3}\\ \dfrac{\delta }{3}&{}-7\delta &{}\delta \\ \dfrac{\delta }{2}&{}\dfrac{\delta }{3}&{}-9\delta \\ \end{array} \right] ,~~ \mathbf{L}_3= \left[ \begin{array}{ccc} \dfrac{\delta }{2}&{}\quad \dfrac{\delta }{3}&{}\quad \delta \\ \dfrac{\delta }{2}&{}\quad \dfrac{\delta }{3}&{}\quad \dfrac{\delta }{3}\\ \delta &{}\quad \dfrac{\delta }{2}&{}\quad \dfrac{\delta }{3}\\ \end{array}\right] ,~~ \mathbf{L}_5= \left[ \begin{array}{ccc} \dfrac{\delta }{3} &{}\quad \dfrac{\delta }{2}&{}\quad \dfrac{\delta }{3}\\ \dfrac{\delta }{2}&{}\quad 2 \delta &{}\quad \dfrac{\delta }{3}\\ \dfrac{\delta }{3}&{}\quad \delta &{}\quad \dfrac{\delta }{2}\\ \end{array}\right] ,\\ \mathbf{L}_{6}= & {} \left[ \begin{array}{ccc} \dfrac{\delta }{2}&{}\quad \dfrac{\delta }{3}&{}\quad \dfrac{\delta }{3}\\ \dfrac{\delta }{3}&{}\quad \delta &{}\quad \dfrac{\delta }{3}\\ \dfrac{\delta }{2}&{}\quad \delta &{}\quad 3 \delta \end{array}\right] , \end{aligned}$$

with \( \mathbf{L}_k=\mathbf{0},\; k \in {\mathbb {N}}- \{3,5,6\}\), where \(\delta \) takes the values \(0.40,0.20,0.15,0.11,0.09,0.078,0.068,0.060,0.053\) to generate the different values of \(\rho \) shown in the graph. It is observed from Fig. 1 that the average system length increases as the traffic intensity \(\rho \) increases. Further, the average system length increases faster when the traffic intensity is closer to 1.

Fig. 1 Average system length (\(L_s\)) versus traffic intensity (\(\rho \))

Moreover, to show the effect of fixed batch size (a) on the average system length (\(L_s\)), we choose the following BMSPs as

$$\begin{aligned} \mathbf{L}_0= \dfrac{1}{a} \left[ \begin{array}{ccc} -2.46&{}0.22&{}0.12\\ 0.11&{}-2.61&{}0.32\\ 0.13&{}0.47&{}-2.73 \end{array} \right] ,\quad \mathbf{L}_a= \dfrac{1}{a} \left[ \begin{array}{cccc} 0.06&{}1.01&{}1.05\\ 1.03&{}1.08&{}0.07\\ 1.01&{}0.08&{}1.04 \end{array}\right] , \end{aligned}$$

with \(\rho =0.65260008\), and

$$\begin{aligned} \mathbf{L}_0= \dfrac{1}{a} \left[ \begin{array}{ccc}-5.48 &{}0.2&{}0.1 \\ 0.2&{}-5.73&{}0.1 \\ 0.3&{}0.1&{}-4.57 \end{array} \right] ,\quad \mathbf{L}_a= \dfrac{1}{a} \left[ \begin{array}{cccc} 1.08&{}1.05&{}3.05\\ 1.07&{}3.06&{}1.30\\ 1.05&{}1.08&{}2.04 \end{array} \right] , \end{aligned}$$

with \(\rho =0.29201793\); in both cases \(\mathbf{L}_k=\mathbf{0},\; k \in {\mathbb {N}}- \{a\}\). It is observed from Fig. 2 that, for both low and high traffic intensities, the average system length increases strictly monotonically with the fixed batch size a. We also see that the average system length for the high traffic intensity is always larger than that for the low traffic intensity.

Fig. 2 Average system length (\(L_s\)) versus fixed batch size (a)

5 Conclusion

This paper analysed the GI/BMSP/1 queueing system and obtained analytical expressions for the system-length distributions at three time epochs (pre-arrival, random, and post-departure) as well as other important performance measures of the system. We first determined the system-length distribution at a pre-arrival epoch based on the zeros of the characteristic polynomial associated with the vector p.g.f. of the system-length distribution. We then established the relation between pre-arrival and random epochs using a Markov renewal theory argument to obtain the system-length distribution at a random epoch. In addition, the analytical results obtained in this paper have been verified through a variety of numerical examples that display the behaviour of the system and confirm the correctness of the analysis. Finally, it would be interesting to analyse the sojourn-time distribution of an arriving customer based on the analytical results of this paper; this is left for future investigation.