1 Introduction

In this paper, we consider a single-server queue with batch Poisson arrivals, variable service speed and setup time. Our model is motivated by power-aware servers in data centers [9, 10, 17]. The CPU of a server is able to process at multiple speeds by using either frequency scaling [13] or dynamic voltage and frequency scaling (DVFS) techniques [11, 15]. In recent years, CPUs with variable speed have become popular because they can reduce energy consumption while keeping the response time of jobs acceptable. The server can automatically adjust its speed according to the workload in the system. By doing so, the power consumption is kept small at low workload and becomes large only at high workload.

In this paper, we assume that jobs arrive at the system in batches according to a Poisson process and that the arrival process is independent of the state of the system. The service requirements of the jobs are independent and identically distributed (i.i.d.) and follow an exponential distribution. The service speed of the server is instantaneously adapted according to the number of jobs in the system. In particular, the service rate of the server is proportional to the number of jobs in the system. Furthermore, the server is turned off immediately after becoming empty in order to save energy. At the moment when a batch arrives at an empty system, the OFF server is turned on. However, an exponentially distributed setup time is needed in order to reactivate the OFF server. We call the above queueing model the M\({}^{X}\)/M/1/SET-VARI queue, where SET and VARI stand for setup and variable service rate, respectively.

The contribution of this paper is threefold. First, we obtain the necessary and sufficient condition for the existence of the unique stationary queue length distribution, which we call the stability condition hereafter. We show that the stability condition of our model is that the logarithmic moment of the batch size is finite. Interestingly, the system can be stable even if the mean batch size is infinite. Second, we derive the probability generating function (PGF) of the number of jobs in the system. It should be noted that the number of jobs in the system of our model is identical to that of the M\({}^{X}\)/M/\(\infty \) queue with setup time, which to the best of our knowledge has not been investigated in the literature. Third, we derive the Laplace–Stieltjes transform (LST) of the sojourn time distribution, which is obtained in series form involving infinite-dimensional matrices. The derivation of the sojourn time distribution is challenging because the sojourn time of a tagged job depends not only on the state of the system upon arrival but also on the batches arriving after it. Therefore, the sojourn time distribution cannot be obtained directly from the PGF of the queue length distribution via the distributional Little’s law [8].

Our model extends the one proposed by Lu et al. [9], in which an M/M/1/SET-VARI queue was considered. In [9], a solution in terms of infinite series was presented for the stationary queue length distribution. From the queue length distribution, the mean response time is obtained via Little’s law, and the mean power consumption is also derived. These metrics are used in [9] to study the energy-response time trade-off. However, the sojourn time distribution was not considered in [9]. Baba [2] considered the M\({}^{X}\)/M/1 queue with setup time where the processing speed of the server is fixed. He derived the PGF of the number of jobs in the system and the LST of the sojourn time distribution. Adan and D’Auria [1] considered a single-server queueing system where jobs arrive according to a Poisson stream, the service requirements of jobs follow the exponential distribution with mean 1 and the service rate of the server is controlled by a threshold. They derived the stationary distribution of the number of jobs in the system and the LST of the sojourn time distribution in explicit form. The sojourn time distribution of our model is derived using first-step analysis, which is also adopted by Adan and D’Auria [1]. The difference is that the underlying Markov chain in Adan and D’Auria [1] is homogeneous beyond a threshold, while our underlying Markov chain is spatially nonhomogeneous. As a result, the former allows an explicit expression, while our formulae involve inverse mappings of infinite matrices.

The remainder of this paper is organized as follows: In Sect. 2, we describe the M\({}^{X}\)/M/1/SET-VARI queue in detail. In Sect. 3, we derive the stability condition. In Sect. 4, we derive the PGF of the number of jobs in the system in an integral but computable form. In Sect. 5, we derive the LST of the sojourn time distribution. In Sect. 6, we present numerical experiments showing the energy-performance trade-off and the sojourn time distribution by numerically inverting the Laplace–Stieltjes transform. Finally, in Sect. 7, we present the conclusion of this paper and future work.

2 Model

In this section, we describe our queueing model, the M\({}^X\)/M/1/SET-VARI queue, in detail. The M\({}^X\)/M/1/SET-VARI queue has a single server operating under the FCFS (First Come First Served) service discipline and an infinite buffer space. Batches of jobs arrive at the system according to a Poisson process with rate \(\lambda \). The batch sizes are i.i.d. copies of a random variable X with distribution \(x_i={\mathsf {P}}(X=i)\) for \(i\in {\mathbb {N}}:=\{1,2,\dots \}\), and the PGF of X is denoted by \({\widehat{X}}(z):=\sum _{k\ge 1} x_kz^k\).

The special feature of our model is that the service speed of the server is proportional to the number of jobs in the system. In particular, the service rate is \(n\mu \), provided that the number of jobs in the system is n. This is equivalent to the following assumption: The service requirements of jobs are i.i.d. with an exponential distribution with mean 1. The basic speed of the server when there is one job in the system is given by \(\mu \in (0,\infty )\). When there are n jobs in the system, the speed of the server is scaled up to \(n\mu \). Thus, when there are n jobs in the system, the time until the next service completion follows the exponential distribution with mean \((n\mu )^{-1}\) due to the memoryless property of exponential distributions.

In order to save energy, the server is turned off immediately if the system becomes empty upon a service completion. Furthermore, when a batch arrives at the empty system, the server is turned on. However, the server needs some setup time before it can process jobs. Therefore, if a batch arrives at the empty system, it has to wait until the setup time finishes. We assume that the setup time follows the exponential distribution with mean \(\alpha ^{-1}\). During the setup time, the server cannot serve a job but consumes energy.

Let I(t) and N(t) denote the state of the server and the number of jobs in the system, respectively, at time t. When the server is off or in the setup process, \(I(t)=0\), and when the server is processing a job, \(I(t)=1\). In the current setting, the joint stochastic process \(\{Z(t):=(I(t),N(t)); t\ge 0\}\) is an irreducible continuous-time Markov chain with state space \({\mathscr {S}}=\{(0,j);\,j\in {\mathbb {Z}}_+\}\cup \{(1,j);\,j\in {\mathbb {N}}\}\), where \({\mathbb {Z}}_+:=\{0\}\cup {\mathbb {N}}\). We assume that \(x_1>0\) so that the Markov chain \(\{Z(t)\}\) is irreducible. Figure 1 shows the state transition diagram of this Markov chain for a special case where the maximum batch size is two.
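To make these dynamics concrete, the following minimal discrete-event simulation sketch (our own illustration; the function names, the geometric batch sampler and all parameter values are arbitrary choices, not part of the model specification) reproduces the transition rates described above and estimates the time-average number of jobs in the system. It can be used to sanity-check the analytical results of Sects. 4 and 5.

import random

def simulate(lam, mu, alpha, sample_batch, t_end=1e5, seed=1):
    rng = random.Random(seed)
    t, i, n, area = 0.0, 0, 0, 0.0        # time, I(t), N(t), integral of N(t) dt
    while t < t_end:
        if n == 0:                         # OFF: only a batch arrival can occur
            rate = lam
        elif i == 0:                       # SETUP: arrival or setup completion
            rate = lam + alpha
        else:                              # BUSY: arrival or departure (rate n*mu)
            rate = lam + n * mu
        dt = rng.expovariate(rate)
        area += n * dt
        t += dt
        u = rng.random() * rate
        if n == 0:
            n = sample_batch(rng)          # batch arrives; setup starts (i stays 0)
        elif u < lam:
            n += sample_batch(rng)         # batch arrival during setup or service
        elif i == 0:
            i = 1                          # setup completes, service begins
        else:
            n -= 1                         # service completion
            if n == 0:
                i = 0                      # system empty: the server is switched off
    return area / t                        # time-average number of jobs in the system

def geo_batch(rng, p=0.4):                 # Geo(p) on {1,2,...}, mean 1/p
    k = 1
    while rng.random() > p:
        k += 1
    return k

print(simulate(lam=0.4, mu=1.0, alpha=0.1, sample_batch=geo_batch))

With \(\lambda =0.4\), \(\mu =1\), \(\alpha =0.1\) and \(\text {Geo}(0.4)\) batches, the printed estimate can be compared with \(\mathsf {E}[N]\) from Corollary 1 below.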

Fig. 1 Transition diagram of the case \(x_1+x_2=1\)

Remark 1

As mentioned in Sect. 1, the PGF of the number of jobs in the system of the M\({}^X\)/M/1/SET-VARI queue is identical to that of the M\({}^{X}\)/M/\(\infty \) queue with setup time. However, the sojourn time distributions of these two models may be different because the sojourn time distribution of a tagged job in the latter is determined upon its arrival, while that in the former is affected by future arrivals. Some researchers have studied the M\({}^X\)/M/\(\infty \) queue without setup time. For example, Shanbhag [14] derived moment generating functions of several performance measures, such as the number of jobs in the system and the sojourn time. Cong [3] derived the stability condition.

3 The stability condition

In this section, we derive the stability condition of our model. Because \(\{Z(t);t\ge 0\}\) is an irreducible and regular Markov chain, \(\{Z(t)\}\) is positive recurrent if and only if a unique stationary distribution exists.

It follows from Theorem 1 that the stability condition of the M\({}^X\)/M/1/SET-VARI queue is that the logarithmic moment of the batch size is finite. Cong [3] derived the same stability condition for the special case of the M\({}^X\)/M/\(\infty \) queue without setup time. The addition of the setup time does not change the stability condition. This is intuitively clear because the effect of setup times disappears when the system is under heavy load (i.e., when a large number of jobs are present).

It should be noted that the proof of Cong [3] is based on the transient solution, which is derived using the method of collective marks. Here, we prove the stability condition for a more general model than the one in Cong [3] using alternative methods.

Theorem 1

\(\{Z(t);t\ge 0\}\) has a unique stationary distribution if and only if

$$\begin{aligned} \mathsf {E}[\log (X+1)]<\infty . \end{aligned}$$
(3.1)

Proof

It should be noted that \(\{Z(t)\}\) is positive recurrent if and only if there exists an invariant measure \({\varvec{\xi }}:=(\xi _{i,j})_{(i,j)\in {\mathscr {S}}}\) of \(\{Z(t)\}\) such that \(\xi _{i,j}>0\) for \((i,j)\in {\mathscr {S}}\) and \(\sum _{(i,j)\in {\mathscr {S}}}\xi _{i,j}<\infty \) [12]. We define the generating functions \({\widehat{\xi }}_0(z)\) and \({\widehat{\xi }}_1(z)\) as follows:

$$\begin{aligned} {\widehat{\xi }}_0(z)=\sum _{j=0}^{\infty }\xi _{0,j}z^j,\quad {\widehat{\xi }}_1(z) =\sum _{j=1}^{\infty }\xi _{1,j}z^j. \end{aligned}$$

The invariant measure \({\varvec{\xi }}\) satisfies the following balance equations:

$$\begin{aligned} \lambda \xi _{0,0}= & {} \mu \xi _{1,1}, \end{aligned}$$
(3.2)
$$\begin{aligned} (\lambda +\alpha )\xi _{0,j}= & {} \lambda \sum _{k=1}^jx_k \xi _{0,j-k},\quad j=1,2,\dots , \end{aligned}$$
(3.3)
$$\begin{aligned} (\lambda +\mu )\xi _{1,1}= & {} \alpha \xi _{0,1}+2\mu \xi _{1,2}, \end{aligned}$$
(3.4)
$$\begin{aligned} (\lambda +j\mu )\xi _{1,j}= & {} \alpha \xi _{0,j}+(1+j) \mu \xi _{1,j+1}+\lambda \sum _{k=1}^{j-1}x_k\xi _{1,j-k},\quad j=2,3,\dots . \end{aligned}$$
(3.5)

Multiplying (3.3) by \(z^j\), taking the sum over \(j\in {\mathbb {N}}\), and rearranging the result, we obtain

$$\begin{aligned} {\widehat{\xi }}_0(z) = {(\lambda +\alpha )\xi _{0,0} \over \lambda +\alpha -\lambda {\widehat{X}}(z)}. \end{aligned}$$
(3.6)

Multiplying (3.4) by z and (3.5) by \(z^j\) and taking the sum over \(j\ge 2\) yields

$$\begin{aligned}&\lambda \sum _{j=1}^{\infty }\xi _{1,j}z^j+\mu z\sum _{j=1}^{\infty }\xi _{1,j}\big (z^{j}\big )'\\&\quad =\alpha \sum _{j=0}^\infty \xi _{0,j}z^j-\alpha \xi _{0,0} +\mu \sum _{j=1}^{\infty }\xi _{1,j}\big (z^{j}\big )'-\mu \xi _{1,1} +\lambda \sum _{k=1}^{\infty }x_kz^k\sum _{j=1}^{\infty }\xi _{1,j}z^{j}. \end{aligned}$$

Rearranging the above equation, we find that

$$\begin{aligned} {\mathrm{d}\over \mathrm{d}z}{\widehat{\xi }}_1(z) = {\lambda \over \mu }q(z){\widehat{\xi }}_1(z) + {\lambda \over \mu }q(z){\widehat{\xi }}_0(z), \end{aligned}$$
(3.7)

where \(q(z):=(1-{\widehat{X}}(z))/(1-z)\). We define Q(z) as the primitive function of q(z) such that \(Q(0)=0\). The solution of (3.7) is given by

$$\begin{aligned} {\widehat{\xi }}_1(z)=H(z)\exp \Big ( {\lambda \over \mu }Q(z)\Big ), \end{aligned}$$
(3.8)

where H(z) is some function which will be determined later. Differentiating (3.8) and substituting the result into (3.7), we obtain

$$\begin{aligned} {\mathrm{d}\over \mathrm{d}z}H(z)=\exp \Big (- {\lambda \over \mu }Q(z)\Big ) {\lambda \over {\mu }}q(z){\widehat{\xi }}_0(z). \end{aligned}$$

It follows from \({\widehat{\xi }}_1(0)=0\) and \(Q(0)=0\) that \(H(0)=0\). Therefore, we have

$$\begin{aligned} H(z) =\int _{0}^{z}\exp \Big (- {\lambda \over \mu }Q(u)\Big ) {\lambda \over \mu }q(u){\widehat{\xi }}_0(u)\mathrm{d}u. \end{aligned}$$

Substituting this equation into (3.8), we obtain

$$\begin{aligned} {\widehat{\xi }}_1(z) =\exp \Big ( {\lambda \over \mu }Q(z)\Big ) \bigg \{\int _{0}^{z}\exp \Big (- {\lambda \over \mu }Q(u)\Big ) {\lambda \over \mu }q(u){\widehat{\xi }}_0(u){\mathrm{d}}u\bigg \}. \end{aligned}$$
(3.9)

It should be noted that (3.6) and (3.9) are equivalent to the system of balance equations (3.2)–(3.5).

First, assuming that \(\{Z(t)\}\) is positive recurrent, we prove that (3.1) holds. By positive recurrence, there exists an invariant measure \({\varvec{\xi }}\) such that \(\xi _{i,j}>0\) for \((i,j)\in {\mathscr {S}}\) and \(\sum _{(i,j)\in {\mathscr {S}}}\xi _{i,j}<\infty \). Therefore, we have

$$\begin{aligned} \xi _{0,0}\le & {} {\widehat{\xi }}_0(z),\quad \text {for all }z\in [0,1], \end{aligned}$$
(3.10)
$$\begin{aligned} {\widehat{\xi }}_0(z)+{\widehat{\xi }}_1(z)\le & {} \sum _{(i,j)\in {\mathscr {S}}}\xi _{i,j},\quad \text {for all }z\in [0,1]. \end{aligned}$$
(3.11)

From (3.9) and (3.10), we also have

$$\begin{aligned} {\widehat{\xi }}_0(z)+{\widehat{\xi }}_1(z)\ge & {} \xi _{0,0}+\xi _{0,0}\exp \Big ( {\lambda \over \mu }Q(z)\Big ) \int _{0}^{z}\exp \Big (- {\lambda \over \mu }Q(u)\Big ) {\lambda \over \mu }q(u)\mathrm{d}u\nonumber \\= & {} \xi _{0,0}+\xi _{0,0}\exp \Big ( {\lambda \over \mu }Q(z)\Big ) \int _{0}^{z}{\mathrm{d}\over \mathrm{d}u}\Big \{-\exp \Big (- {\lambda \over \mu }Q(u)\Big ) \Big \}{\mathrm{d}}u\nonumber \\= & {} \xi _{0,0}\exp \Big ( {\lambda \over \mu }Q(z)\Big ),\quad \text {for all } z\in [0,1]. \end{aligned}$$
(3.12)

Because \(\xi _{i,j}>0\) for \((i,j)\in {\mathscr {S}}\) and, by (3.11), \(\sum _{(i,j)\in {\mathscr {S}}}\xi _{i,j}<\infty \), we can take the limit of (3.12) as \(z\uparrow 1\) in order to obtain

$$\begin{aligned} \sum _{(i,j)\in {\mathscr {S}}}\xi _{i,j}\ge & {} \xi _{0,0}\exp \Big ( {\lambda \over \mu }Q(1)\Big ). \end{aligned}$$
(3.13)

Using \(Q(z)=\sum _{j\ge 1}\mathsf {P}[X\ge j]z^j/j\), we can show the following inequality:

$$\begin{aligned} Q(1)= & {} \sum _{k=1}^\infty x_k\bigg \{\sum _{j=1}^{k+1}\frac{1}{j}-\log (k+1)\bigg \}+\mathsf {E}\Big [-{1\over X+1}+\log (X+1)\Big ]\nonumber \\> & {} \gamma -\mathsf {E}\Big [{1\over X+1}\Big ]+\mathsf {E}[\log (X+1)]\nonumber \\\ge & {} \gamma - 1/2 + \mathsf {E}[\log (X+1)], \end{aligned}$$
(3.14)

where \(\gamma \) is Euler’s constant [16]. The first inequality in (3.14) is due to

$$\begin{aligned} \sum _{j=1}^{k+1}\frac{1}{j}-\log (k+1) > \gamma , \end{aligned}$$

while the second inequality in (3.14) is because \(X \ge 1\). Therefore, it follows from (3.13), (3.14) and \(\sum _{(i,j)\in {\mathscr {S}}}\xi _{i,j}<\infty \) that \(\mathsf {E}[\log (X+1)]<\infty \).

Now, assuming that \(\mathsf {E}[\log (X+1)]<\infty \), we will prove the existence of a positive invariant measure \({\varvec{\xi }}\) such that \(\sum _{(i,j)\in {\mathscr {S}}}\xi _{i,j}<\infty \). We select an arbitrary \(\xi _{0,0}>0\). First, we prove that \(\xi _{i,j}>0\) for any \((i,j)\in {\mathscr {S}}\). We can recursively prove that \(\xi _{0,j}>0\) for \(j\in {\mathbb {N}}\) by using (3.3), and it follows from (3.2) that \(\xi _{1,1}>0\). In addition, comparing the coefficients of \(z^j\) on both sides of (3.7), we have, for \(j\in {\mathbb {N}}\),

$$\begin{aligned} (j+1)\xi _{1,j+1}= {\lambda \over \mu }\sum _{k=0}^j\mathsf {P}[X>j-k]\xi _{0,k} + {\lambda \over \mu }\sum _{k=1}^j\mathsf {P}[X>j-k]\xi _{1,k}, \end{aligned}$$
(3.15)

where we have used \(q(z)=\sum _{j\ge 0}\mathsf {P}[X>j]z^j\). Due to \(\xi _{0,0}>0\), we can also prove that \(\xi _{1,j}>0\) for \(j\ge 2\) by using the recursive formula (3.15). Thus, under the assumption that \(\xi _{0,0}>0\), it follows that \(\xi _{i,j}>0\) for any \((i,j)\in \mathscr {S}\).

Next, we prove that \(\sum _{(i,j)\in {\mathscr {S}}}\xi _{i,j}<\infty \). From (3.6), we have

$$\begin{aligned} {\widehat{\xi }}_0(z) \le {\lambda +\alpha \over \lambda +\alpha -\lambda {\widehat{X}}(1)}\xi _{0,0}\, \le \, {\lambda +\alpha \over \alpha }\xi _{0,0},\quad {\text {for all }} z\in [0,1], \end{aligned}$$
(3.16)

where the second inequality is to cover the case \(\mathsf {P}[X< \infty ] <1\). From (3.9) and (3.16), we also have

$$\begin{aligned}&{\widehat{\xi }}_0(z)+{\widehat{\xi }}_1(z) \nonumber \\&\quad \le \xi _{0,0}{\lambda +\alpha \over \alpha }+\xi _{0,0}{\lambda +\alpha \over \alpha }\exp \Big ( {\lambda \over \mu }Q(z)\Big ) \int _{0}^{z}\exp \Big (- {\lambda \over \mu }Q(u)\Big ) {\lambda \over \mu }q(u){\mathrm{d}}u\nonumber \\&\quad = \xi _{0,0}{\lambda +\alpha \over \alpha }+\xi _{0,0}{\lambda +\alpha \over \alpha }\exp \Big ( {\lambda \over \mu }Q(z)\Big ) \int _{0}^{z}{\mathrm{d}\over \mathrm{d}u}\Big \{-\exp \Big (- {\lambda \over \mu }Q(u)\Big )\Big \}{\mathrm{d}}u\nonumber \\&\quad =\xi _{0,0}{\lambda +\alpha \over \alpha }\exp \Big ( {\lambda \over \mu }Q(z)\Big ),\quad {\text {for all }}z\in [0,1]. \end{aligned}$$
(3.17)

Using \(Q(z)=\sum _{j\ge 1}\mathsf {P}[X\ge j]z^j/j\), we can show the following inequality:

$$\begin{aligned} Q(z)\le & {} \sum _{j\ge 1}\mathsf {P}[X\ge j]{1\over j}\nonumber \\= & {} \sum _{k=1}^\infty x_k\bigg \{\sum _{j=1}^k\frac{1}{j}-\log (k+1)\bigg \}+\mathsf {E}[\log (X+1)]\nonumber \\< & {} \gamma +\mathsf {E}[\log (X+1)],\quad \text {for all }z\in [0,1], \end{aligned}$$
(3.18)

where the inequality in (3.18) is due to

$$\begin{aligned} \sum _{j=1}^k\frac{1}{j}-\log (k+1) < \gamma . \end{aligned}$$

Taking the limit of (3.17) as \(z\uparrow 1\), it follows from (3.18) that

$$\begin{aligned} \lim _{z\uparrow 1}\big \{{\widehat{\xi }}_0(z)+{\widehat{\xi }}_1(z)\big \}\le & {} \xi _{0,0}{\lambda +\alpha \over \alpha }\exp \Big ( {\lambda \over \mu }\{\gamma +\mathsf {E}[\log (X+1)]\}\Big ). \end{aligned}$$
(3.19)

Note that

$$\begin{aligned} \lim _{z\uparrow 1}\big \{{\widehat{\xi }}_0(z)+{\widehat{\xi }}_1(z)\big \}= & {} \lim _{z\uparrow 1}\sum _{j=0}^{\infty }\xi _{0,j}z^j +\lim _{z\uparrow 1}\sum _{j=1}^{\infty }\xi _{1,j}z^j\nonumber \\= & {} \sum _{j=0}^{\infty }\lim _{z\uparrow 1}\xi _{0,j}z^j +\sum _{j=1}^{\infty }\lim _{z\uparrow 1}\xi _{1,j}z^j\nonumber \\= & {} \sum _{j=0}^{\infty }\xi _{0,j} +\sum _{j=1}^{\infty }\xi _{1,j}, \end{aligned}$$
(3.20)

where the second equality holds by the monotone convergence theorem, because the terms \(\xi _{i,j}z^j\) are nonnegative and nondecreasing in z. It follows from (3.19), (3.20), \(\xi _{0,0}>0\) and \(\mathsf {E}[\log (X+1)]<\infty \) that there exists an invariant measure \({\varvec{\xi }}\) such that \(\xi _{i,j}>0\) for \((i,j)\in {\mathscr {S}}\) and \(\sum _{(i,j)\in {\mathscr {S}}}\xi _{i,j}<\infty \). Therefore, \(\{Z(t)\}\) is positive recurrent. \(\square \)
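As a concrete illustration of Theorem 1 (our own example, not taken from the references), consider the heavy-tailed batch-size distribution \(x_k=(6/\pi ^2)k^{-2}\), \(k\in {\mathbb {N}}\). Then \(\mathsf {E}[X]=\sum _{k\ge 1}(6/\pi ^2)k^{-1}=\infty \), whereas

$$\begin{aligned} \mathsf {E}[\log (X+1)]=\sum _{k=1}^{\infty }\frac{6}{\pi ^2}\,\frac{\log (k+1)}{k^{2}}<\infty , \end{aligned}$$

so that (3.1) holds and the system is stable even though the mean batch size is infinite, in agreement with the remark in Sect. 1.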

4 The number of jobs in the system

In this section, we consider the number of jobs in the system in steady state, i.e., under the assumption \(\mathsf {E}[\log (X+1)]<\infty \). From Theorem 1, there exists a unique stationary distribution \({\varvec{\pi }}=(\pi _{i,j})_{(i,j)\in {\mathscr {S}}}\), where \(\pi _{i,j}:=\lim _{t\rightarrow \infty }\mathsf {P}[\,I(t)=i, N(t)=j\,]\) for \((i,j)\in {\mathscr {S}}\). Let N denote the number of jobs in the system in steady state, and let \({\widehat{\pi }}(z):=\sum _{j=0}^\infty \pi _{0,j}z^j+\sum _{j=1}^\infty \pi _{1,j}z^j\) denote its PGF. The PGF \({\widehat{\pi }}(z)\) is given by Theorem 2.

Theorem 2

\({\widehat{\pi }}(z)\) is given as follows:

$$\begin{aligned} {\widehat{\pi }}(z)= & {} {\exp \big ({\lambda \over \mu }Q(z)\big ) +\int _{0}^{z}\exp \big ({\lambda \over \mu }\{Q(z)-Q(u)\}\big ) {\lambda (\lambda +\alpha ){\widehat{X}}'(u)\over \{\lambda +\alpha -\lambda {\widehat{X}}(u)\}^2}{\mathrm{d}}u \over \exp \big ({\lambda \over \mu }Q(1)\big ) +\int _{0}^{1}\exp \big ({\lambda \over \mu }\{Q(1)-Q(u)\}\big ) {\lambda (\lambda +\alpha ){\widehat{X}}'(u)\over \{\lambda +\alpha -\lambda {\widehat{X}}(u)\}^2}{\mathrm{d}}u }. \end{aligned}$$
(4.1)

Proof

The stationary distribution \({\varvec{\pi }}\) is the positive invariant measure which satisfies the normalizing condition, i.e., \({\widehat{\pi }}(1)=1\). From (3.6) and (3.9), we have

$$\begin{aligned} {\widehat{\pi }}(z)= & {} {\lambda +\alpha \over \lambda +\alpha -\lambda {\widehat{X}}(z)}\pi _{0,0}\nonumber \\&+\int _{0}^{z}\exp \Big ({\lambda \over \mu }\big \{Q(z)-Q(u)\big \}\Big ) {\lambda \over \mu }q(u){\lambda +\alpha \over \lambda +\alpha -\lambda {\widehat{X}}(u)} \pi _{0,0}{\mathrm{d}}u. \end{aligned}$$
(4.2)

By partial integration, we obtain

$$\begin{aligned} {\widehat{\pi }}(z)= & {} \pi _{0,0}\exp \Big ({\lambda \over \mu }Q(z)\Big )\nonumber \\&+\,\pi _{0,0} \int _{0}^{z}\exp \Big ({\lambda \over \mu }\big \{Q(z)-Q(u)\big \}\Big ) {\lambda (\lambda +\alpha ){\widehat{X}}'(u)\over \{\lambda +\alpha -\lambda {\widehat{X}}(u)\}^2} {\mathrm{d}}u.\qquad \end{aligned}$$
(4.3)

From (4.3) and \({\widehat{\pi }}(1)=1\), we obtain \(\pi _{0,0}\) as follows:

$$\begin{aligned} \pi _{0,0}^{-1}= & {} \exp \Big ({\lambda \over \mu }Q(1)\Big ) + \int _{0}^{1}\exp \Big ({\lambda \over \mu }\big \{Q(1)-Q(u)\big \}\Big ) {\lambda (\lambda +\alpha ){\widehat{X}}'(u)\over \{\lambda +\alpha -\lambda {\widehat{X}}(u)\}^2}{\mathrm{d}}u.\qquad \end{aligned}$$
(4.4)

Substituting (4.4) into (4.3), we obtain Theorem 2. \(\square \)

Remark 2

As mentioned in Sect. 1, the number of jobs in the system in our model is identical to that in the M\({}^{X}\)/M/\(\infty \) queue with setup time. From equation (2.9) in Shanbhag [14] for the system without setup time, the PGF of the number of customers in that system, denoted by \({\widehat{\pi }}^*(z)\), can be obtained as follows:

$$\begin{aligned} {\widehat{\pi }}^*(z)=\exp \Big ({\lambda \over \mu }\{Q(z)-Q(1)\}\Big ). \end{aligned}$$
(4.5)

It is easy to see that (4.1) tends to (4.5) as \(\alpha \rightarrow \infty \).
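Indeed, the following elementary bound (a routine verification of ours) makes the limit transparent: since \(Q(u)\ge 0\) for \(u\in [0,1]\),

$$\begin{aligned} \int _{0}^{z}\exp \Big ({\lambda \over \mu }\{Q(z)-Q(u)\}\Big ) {\lambda (\lambda +\alpha ){\widehat{X}}'(u)\over \{\lambda +\alpha -\lambda {\widehat{X}}(u)\}^2}{\mathrm{d}}u \le \exp \Big ({\lambda \over \mu }Q(z)\Big ) \Big \{{\lambda +\alpha \over \lambda +\alpha -\lambda {\widehat{X}}(z)}-1\Big \}, \end{aligned}$$

and the right-hand side vanishes as \(\alpha \rightarrow \infty \) for every fixed \(z\in [0,1]\). Hence both integrals in (4.1) disappear in the limit, and (4.1) reduces to (4.5).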

We can easily obtain the average number of jobs in the system.

Corollary 1

Assuming that \(\mathsf {E}[X] < \infty \), \({\mathsf {E}}[N]\) is given as follows:

$$\begin{aligned} \mathsf {E}[N]= & {} {\lambda \mathsf {E}[X] \over \mu }\left\{ {\lambda +\alpha \over \alpha }{\mu \over \alpha }\pi _{0,0}+1\right\} , \end{aligned}$$

where \(\pi _{0,0}\) is given by (4.4).

Proof

Differentiating (4.1), we obtain

$$\begin{aligned} {\widehat{\pi }}'(z)= & {} {\lambda (\lambda +\alpha ){\widehat{X}}'(z) \over (\lambda +\alpha -\lambda {\widehat{X}}(z))^2}\pi _{0,0} +{\lambda \over \mu }q(z){\widehat{\pi }}(z). \end{aligned}$$

Taking the limit as \(z\uparrow 1\) in the above equation yields

$$\begin{aligned} {\widehat{\pi }}'(1)= & {} \lim _{z\uparrow 1}{\widehat{\pi }}'(z)\\= & {} {\lambda (\lambda +\alpha )\mathsf {E}[X] \over \alpha ^2}\pi _{0,0} +{\lambda \over \mu }\lim _{z\uparrow 1}{1-{\widehat{X}}(z) \over 1-z}\\= & {} {\lambda (\lambda +\alpha )\mathsf {E}[X] \over \alpha ^2}\pi _{0,0} +{\lambda \over \mu }\mathsf {E}[X], \end{aligned}$$

where the third equality is due to L’Hospital’s rule. From the relation \(\mathsf {E}[N]={\widehat{\pi }}'(1)\), we obtain Corollary 1. \(\square \)
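The quantities in Theorem 2 and Corollary 1 are directly computable. The following sketch (our own illustration; the helper names and parameter values are arbitrary, and a plain trapezoidal rule is used for the integral in (4.4)) evaluates \(\pi _{0,0}\) and \(\mathsf {E}[N]\) for a batch-size distribution with finite support.

import math

def make_pgf(x):                       # x[k] = P[X = k+1], finite support
    ks = range(1, len(x) + 1)
    X  = lambda z: sum(p * z**k for k, p in zip(ks, x))
    dX = lambda z: sum(k * p * z**(k - 1) for k, p in zip(ks, x))
    tail = [sum(x[j - 1:]) for j in ks]          # tail[j-1] = P[X >= j]
    Q  = lambda z: sum(t * z**j / j for j, t in zip(ks, tail))   # Q(z) as a finite sum
    return X, dX, Q

def pi00_and_EN(lam, mu, alpha, x, grid=2000):
    X, dX, Q = make_pgf(x)
    Q1 = Q(1.0)
    f = lambda u: math.exp(lam / mu * (Q1 - Q(u))) \
        * lam * (lam + alpha) * dX(u) / (lam + alpha - lam * X(u)) ** 2
    h = 1.0 / grid
    integral = sum(0.5 * h * (f(i * h) + f((i + 1) * h)) for i in range(grid))
    pi00 = 1.0 / (math.exp(lam / mu * Q1) + integral)            # (4.4)
    EX = sum((k + 1) * p for k, p in enumerate(x))
    EN = lam * EX / mu * ((lam + alpha) / alpha * (mu / alpha) * pi00 + 1.0)  # Corollary 1
    return pi00, EN

# Example: Unif{1,4} batches with lambda*E[X] = 1.
print(pi00_and_EN(lam=0.4, mu=1.0, alpha=0.1, x=[0.25, 0.25, 0.25, 0.25]))

For \(\text {Unif}\{1,4\}\) batches with \(\lambda \mathsf {E}[X]=1\), the output can also be checked against the simulation sketch in Sect. 2.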

5 Sojourn time distribution

In this section, we derive the LST of the sojourn time distribution. Note that the LST of a distribution function F(t) is defined as \(F^*(s):=\int _{t\ge 0} \mathrm{e}^{-st}F({\mathrm{d}}t)\). We assume that \(\mathsf {E}[X]<\infty \) throughout this section; this guarantees that the distribution of the batch containing a tagged job, introduced below, is well defined.

In the M\({}^{X}\)/M/1/SET-VARI queue, the server changes the speed upon arrivals and departures of jobs. Therefore, the sojourn time distribution of a tagged job is affected by the batches that arrive after it. This makes the derivation of the sojourn time distribution complex and challenging. We first derive the conditional LST for the sojourn time distribution. Then, combining with the queue length distribution, we obtain the unconditional LST for the sojourn time distribution.

First, we consider the case where the server is processing a job when the tagged job arrives. Let \(S_1(n,m)\) denote the residual sojourn time of the tagged job, given that the tagged job is in the mth position (i.e., \(m-1\) jobs are ahead of it) and the system state is (1, n). Conditioning on the first-step transitions, we have, for \(m\in {\mathbb {N}}\) and \(n\ge m\),

$$\begin{aligned} S_1(n,m)= \frac{Y}{\lambda +n\mu }+\left\{ \begin{array}{ll} S_1(n-1,m-1),&{}\displaystyle \text {w.p.}{~~}\frac{n\mu }{\lambda +n\mu },\\ S_1(n+k,m),&{}\displaystyle \text {w.p.}{~~}\frac{\lambda x_k}{\lambda +n\mu },\quad k\in {\mathbb {N}}, \end{array}\right. \end{aligned}$$
(5.1)

where Y denotes an exponentially distributed random variable with mean 1, and \(S_1(n,0)=0\) for \(n \in {\mathbb {Z}}_+\). Furthermore, let \(\psi _1(n,m,s)\) denote the LST of \(S_1(n,m)\). Taking the LST of both sides of (5.1), we obtain, for \(m\in {\mathbb {N}}\) and \(n\ge m\),

$$\begin{aligned} \psi _1(n,m,s)= & {} {n\mu \over s+\lambda +n\mu }\psi _1(n-1,m-1,s) \nonumber \\&+\,{\lambda \over s+\lambda +n\mu }{\displaystyle \sum _{k=1}^\infty } x_k\psi _1(n+k,m,s), \end{aligned}$$
(5.2)

where \(\psi _1(n,0,s)=1\). We use the convention that \(\psi _1(n,m,s)=0\) for \(n<m\). Furthermore, we define the infinite column vector \({\varvec{\psi }}_1(m,s)\) as

$$\begin{aligned} {\varvec{\psi }}_1(m,s)=(\psi _1(0,m,s),\dots ,\psi _1(m-1,m,s), \psi _1(m,m,s),\psi _1(m+1,m,s),\dots )^{\top }, \end{aligned}$$

where \({\varvec{a}}^{\top }\) denotes the transpose of the vector \({\varvec{a}}\). Note that the nth element of \({\varvec{\psi }}_1(m,s)\) is \(\psi _1(n-1,m,s)\); in particular, the first m elements are zero by the convention above.

We define infinite matrices \({\varvec{\varLambda }}^{(1)}\) and \({\varvec{M}}\) as

$$\begin{aligned} {\varvec{\varLambda }}^{(1)}= & {} \big (\varLambda ^{(1)}_{ij}\big )_{(i,j)\in {\mathbb {N}} \times {\mathbb {N}}},\\ \varLambda ^{(1)}_{ij}= & {} \left\{ \begin{array}{ll} \displaystyle \frac{\lambda x_{j-i}}{s+\lambda +(i-1)\mu },&{}1<i<j,\\ 0,&{}\text {otherwise}, \end{array}\right. \nonumber \end{aligned}$$
(5.3)
$$\begin{aligned} {\varvec{M}}= & {} \big ({ M}_{ij}\big )_{(i,j)\in {\mathbb {N}}\times {\mathbb {N}}},\\ {M}_{ij}= & {} \left\{ \begin{array}{ll} \displaystyle \frac{(i-1)\mu }{s+\lambda +(i-1)\mu },&{}1<i=j+1,\\ 0,&{}\text {otherwise}. \end{array}\right. \nonumber \end{aligned}$$
(5.4)

Rearranging (5.2) by using these matrices, we obtain

$$\begin{aligned} {\varvec{\psi }}_1(m,s)={\varvec{M}}{\varvec{\psi }}_1(m-1,s)+{\varvec{\varLambda }}^{(1)}{\varvec{\psi }}_1(m,s), \qquad m\in {\mathbb {N}}. \end{aligned}$$
(5.5)

From \(\psi _1(n,0,s)=1\) for any \(n\in {\mathbb {Z}}_+\), we have

$$\begin{aligned} {\varvec{\psi }}_1(0,s)={\varvec{1}}, \end{aligned}$$

where \({\varvec{1}}\) is the infinite column vector whose elements are all equal to 1. Let \(\parallel {\varvec{z}}\parallel _2\) denote the Euclidean norm of the vector \({\varvec{z}}\). We prove that the operator norm of the infinite matrix \({\varvec{\varLambda }}^{(1)}\), \(\parallel {\varvec{\varLambda }}^{(1)}\parallel =\sup _{\parallel {\varvec{z}}\parallel _2=1}\parallel {\varvec{\varLambda }}^{(1)}{\varvec{z}}\parallel _2\), is strictly smaller than 1. Indeed, for all \({\varvec{z}}=(z_1,z_2,\dots )^{\top }\) such that \(\parallel {\varvec{z}}\parallel _2=1\), we have

$$\begin{aligned} \parallel {\varvec{\varLambda }}^{(1)}{\varvec{z}}\parallel _2= & {} \Big (\sum _{i> 1}\Big (\sum _{j>i}\frac{\lambda x_{j-i}}{s+\lambda +(i-1)\mu }z_j\Big )^2\Big )^{1/2} \\= & {} \Big (\sum _{i> 1} \Big (\frac{\lambda }{s+\lambda +(i-1)\mu }\Big )^2 \Big (\sum _{j>i} x_{j-i}z_j\Big )^2\Big )^{1/2} \\< & {} \frac{\lambda }{\lambda + s} \Big (\sum _{i>1}\Big (\sum _{j\ge 1}x_{j}z_{j+i}\Big )^2\Big )^{1/2} \\\le & {} \frac{\lambda }{\lambda + s} \Big (\sum _{i>1}\sum _{j\ge 1}x_{j}\Big (z_{j+i}\Big )^2\Big )^{1/2} \\\le & {} \frac{\lambda }{\lambda + s} \Big (\sum _{j\ge 1}x_{j}||{\varvec{z}}||_2^2\Big )^{1/2} \\\le & {} \frac{\lambda }{\lambda + s}, \end{aligned}$$

where the second inequality holds because of Jensen’s inequality. Thus,

$$\begin{aligned} \parallel {\varvec{\varLambda }}^{(1)}\parallel \le \frac{\lambda }{\lambda + s} < 1. \end{aligned}$$

Because \(\parallel {\varvec{\varLambda }}^{(1)}\parallel \) is strictly smaller than 1, \(({\varvec{I}}-{\varvec{\varLambda }}^{(1)})\) has an inverse mapping, where \({\varvec{I}}\) is the infinite identity matrix [6, Section 29, Theorem 8]. Therefore, from (5.5), we obtain the following recurrence equations, for \(m\in {\mathbb {N}}\):

$$\begin{aligned} {\varvec{\psi }}_1(m,s)=({\varvec{I}}-{\varvec{\varLambda }}^{(1)})^{-1}{\varvec{M}}{\varvec{\psi }}_1(m-1,s). \end{aligned}$$

Solving this equation, we obtain

$$\begin{aligned} {\varvec{\psi }}_1(m,s)=\{({\varvec{I}}-{\varvec{\varLambda }}^{(1)})^{-1}{\varvec{M}}\}^{m}{\varvec{1}}. \end{aligned}$$
(5.6)
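For readers who wish to evaluate (5.6) numerically, the following sketch (our own illustration; the function name, the truncation level K and the example parameters are ours) builds the \(K\times K\) north-west corners of \({\varvec{\varLambda }}^{(1)}\) and \({\varvec{M}}\) and applies the recursion. Because of the truncation, the result is only an approximation; cf. Remark 3 below.

import numpy as np

def psi1(m, s, lam, mu, x, K=200):
    """Approximate the first K elements of psi_1(m, s)."""
    xk = lambda k: x[k - 1] if 1 <= k <= len(x) else 0.0   # P[X = k]
    L1 = np.zeros((K, K))
    M  = np.zeros((K, K))
    for i in range(2, K + 1):                 # matrix indices as in (5.3)-(5.4)
        denom = s + lam + (i - 1) * mu
        for j in range(i + 1, K + 1):
            L1[i - 1, j - 1] = lam * xk(j - i) / denom
        M[i - 1, i - 2] = (i - 1) * mu / denom
    T = np.linalg.solve(np.eye(K) - L1, M)    # (I - Lambda^(1))^{-1} M
    v = np.ones(K)
    for _ in range(m):
        v = T @ v                             # psi_1(m,s) = T^m 1, cf. (5.6)
    return v

# Example: the element with index n+1 approximates psi_1(n, m, s); here psi_1(3, 2, 1).
v = psi1(m=2, s=1.0, lam=0.4, mu=1.0, x=[0.25, 0.25, 0.25, 0.25])
print(v[3])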

Next, we consider the case where the server is not processing a job when the tagged job arrives. Let \(S_0(n,m)\) denote the residual sojourn time of the tagged job, given that the tagged job is in the mth position and the system state is (0, n). Let \(\psi _0(n,m,s)\) denote the LST of \(S_0(n,m)\) for \(n\in {\mathbb {N}}\) and \(m\le n\). In addition, we define the infinite column vector \({\varvec{\psi }}_0(m,s)\) as

$$\begin{aligned} {\varvec{\psi }}_0(m,s)= & {} (\psi _0(0,m,s),\dots ,\psi _0(m-1,m,s), \psi _0(m,m,s),\\&\psi _0(m+1,m,s),\dots )^{\top }, \end{aligned}$$

where \(\psi _0(n,0,s)=1\) for \(n\in {\mathbb {Z}}_+\), and we again use the convention that \(\psi _0(n,m,s)=0\) for \(n<m\). As with the analysis for \({\varvec{\psi }}_1(m,s)\), i.e., (5.1)–(5.6), we obtain

$$\begin{aligned} {\varvec{\psi }}_0(m,s)=({\varvec{I}}-{\varvec{\varLambda }}^{(0)})^{-1}{\varvec{A}}{\varvec{\psi }}_1(m,s), \end{aligned}$$
(5.7)

where the infinite matrices \({\varvec{\varLambda }}^{(0)}\) and \({\varvec{A}}\) are defined as follows:

$$\begin{aligned} \varvec{\varLambda }^{(0)}= & {} \big (\varLambda ^{(0)}_{ij}\big )_{(i,j)\in {\mathbb N}\times {\mathbb N}},\quad \quad \varLambda ^{(0)}_{ij}=\left\{ \begin{array}{ll} \displaystyle \frac{\lambda x_{j-i}}{s+\lambda +\alpha },&{}1<i<j,\\ 0,&{}\text {otherwise}, \end{array}\right. \end{aligned}$$
(5.8)
$$\begin{aligned} \varvec{A}= & {} \big (A_{ij}\big )_{(i,j)\in {\mathbb N}\times {\mathbb N}},\quad \quad A_{ij}=\left\{ \begin{array}{ll} \displaystyle \frac{\alpha }{s+\lambda +\alpha },&{}1<i=j,\\ 0,&{}\text {otherwise}. \end{array}\right. \end{aligned}$$
(5.9)

Note that \(({\varvec{I}}-{\varvec{\varLambda }}^{(0)})\) has an inverse mapping, which can be proved similarly to the analysis for \({\varvec{\varLambda }}^{(1)}\).

Next, we derive the unconditional LST of the sojourn time distribution. To this end, we define \(\tau (i,n,m)\) as the probability that the tagged job is located in the mth position and the state of the system becomes \((i,n)\) immediately after its arrival. Let \(({{\mathscr {I}}},{\mathscr {L}}_p)\) denote the state of the system just before the tagged job arrives at the system. Let \({\mathscr {P}}\) denote the position at which the tagged job is located immediately after it enters the system. Let \({\widetilde{X}}\) denote the number of jobs in the batch to which the tagged job belongs. Under the assumption that \(\mathsf {E}[X]<\infty \), the distribution of \({\widetilde{X}}\) is the size-biased distribution of X (see [4]): for \(k\in {\mathbb {N}}\),

$$\begin{aligned} \mathsf {P}[{\widetilde{X}}=k]=\frac{kx_k}{\mathsf {E}[X]}. \end{aligned}$$

In addition, from PASTA [18], we have, for \((i,n)\in {\mathscr {S}}\),

$$\begin{aligned} \mathsf {P}[{{\mathscr {I}}}=i,\,{\mathscr {L}}_p=n]=\pi _{i,n}. \end{aligned}$$

Let \({\mathscr {X}}(n,m):=\{k\,;\,x_k>0,\,n\le k\le m\}\). We obtain, for \(n\in {\mathbb {N}}\) and \(n\ge m\),

$$\begin{aligned}&\tau (1,n,m)\\&\quad = \sum _{k\in {\mathscr {X}}(n-m+1,n-1)} \mathsf {P}[{{\mathscr {I}}}=1,\,{\mathscr {L}}_p=n-k,\,{\widetilde{X}}=k,\,{\mathscr {P}}=m] \\&\quad = \sum _{k\in {\mathscr {X}}(n-m+1,n-1)} \mathsf {P}[{\mathscr {P}}=m|{{\mathscr {I}}}=1,\,{\mathscr {L}}_p=n-k,\,{\widetilde{X}}=k]\\&\qquad \times \mathsf {P}[{{\mathscr {I}}}=1,\,{\mathscr {L}}_p=n-k,\,\tilde{X}=k] \nonumber \\&\quad = \sum _{k\in {\mathscr {X}}(n-m+1,n-1)} {1\over k}\mathsf {P}[\tilde{X}=k|{{\mathscr {I}}}=1,\,{\mathscr {L}}_p=n-k]\mathsf {P}[{{\mathscr {I}}}=1,\,{\mathscr {L}}_p=n-k]\\&\quad = \sum _{k\in {\mathscr {X}}(n-m+1,n-1)} {\frac{1}{k}\frac{kx_k}{\mathsf {E}[X]}}\pi _{1,n-k}\\&\quad = \sum _{k=n-m+1}^{n-1}\pi _{1,n-k}{\frac{x_k}{\mathsf {E}[X]}}. \end{aligned}$$

Similarly to the above, we obtain, for \(n\in {\mathbb {Z}}_+\) and \(n\ge m\),

$$\begin{aligned} \tau (0,n,m)=\sum _{k=n-m+1}^{n}\pi _{0,n-k}{\frac{x_k}{\mathsf {E}[X]}}. \end{aligned}$$

Since we use the convention that \(\psi _i(n,m,s)=0\) for \(n<m\), the LST of the sojourn time distribution, denoted by \({\psi }(s)\), can be expressed as follows by using \(\psi _0(n,m,s)\) and \(\psi _1(n,m,s)\):

$$\begin{aligned} {\psi }(s)= & {} \sum _{n=1}^\infty \sum _{m=1}^n\tau (0,n,m)\psi _0(n,m,s)+\sum _{n=2}^\infty \sum _{m=2}^{n}\tau (1,n,m)\psi _1(n,m,s)\nonumber \\= & {} \sum _{m=1}^\infty \sum _{n=m}^\infty \sum _{k=n-m+1}^{n}\pi _{0,n-k}{\frac{x_k}{\mathsf {E}[X]}}\psi _0(n,m,s)\nonumber \\&+\sum _{m=2}^\infty \sum _{n=m}^{\infty }\sum _{k=n-m+1}^{n-1}\pi _{1,n-k}{\frac{x_k}{\mathsf {E}[X]}}\psi _1(n,m,s).\qquad \end{aligned}$$
(5.10)

The infinite series in \(\psi (s)\) converges because \(\sum _{n=1}^\infty \sum _{m=1}^n\tau (0,n,m)+\sum _{n=2}^\infty \sum _{m=2}^n\tau (1,n,m)=1\) and \(0\le \psi _i(n,m,s)\le 1\) for \(i=0,1\), \(n\in {\mathbb {N}}\) and \(1\le m\le n\).

For a compact expression of (5.10), we define the infinite matrices \({\varvec{I}}_m\), for \(m\in {\mathbb {N}}\), and \({\varvec{B}}\) as

$$\begin{aligned} \varvec{I}_m= & {} \big (I_{ij}\big )_{(i,j)\in {\mathbb N}\times {\mathbb N}},\quad \quad I_{ij}=\left\{ \begin{array}{ll} 1,&{}1\le i=j\le m,\\ 0,&{}\text {otherwise}, \end{array}\right. \end{aligned}$$
(5.11)
$$\begin{aligned} \varvec{B}= & {} \big (B_{ij}\big )_{(i,j)\in {\mathbb N}\times {\mathbb N}},\quad \quad B_{ij}=\left\{ \begin{array}{ll} x_{j-i},&{}1\le i<j,\\ 0,&{}\text {otherwise}. \end{array}\right. \end{aligned}$$
(5.12)

In addition, we define the infinite row vectors \({\varvec{\pi }}_0\) and \({\varvec{\pi }}_1\) as

$$\begin{aligned} {\varvec{\pi }}_0=(\pi _{0,0},\pi _{0,1},\pi _{0,2},\dots ),\quad {\varvec{\pi }}_1=(0,\pi _{1,1},\pi _{1,2},\dots ). \end{aligned}$$

Rearranging (5.10) by using these matrices and vectors, we obtain

$$\begin{aligned} \psi (s)= & {} \sum _{m=1}^\infty \frac{{\varvec{\pi }}_0}{\mathsf {E}[X]}{\varvec{I}}_m{\varvec{B}}({\varvec{I}}-{\varvec{I}}_m){\varvec{\psi }}_0(m,s)\nonumber \\&{}+\sum _{m=1}^\infty \frac{{\varvec{\pi }}_1}{\mathsf {E}[X]}{\varvec{I}}_m{\varvec{B}}({\varvec{I}}-{\varvec{I}}_m){\varvec{\psi }}_1(m,s). \end{aligned}$$
(5.13)

From (5.6), (5.7) and (5.13), we obtain the LST of the sojourn time distribution as follows:

Theorem 3

The LST of the sojourn time distribution, \(\psi (s)\), is given as follows:

$$\begin{aligned} \psi (s)= & {} \sum _{m=1}^\infty \frac{1}{\mathsf {E}[X]}\Big [{\varvec{\pi }}_0{\varvec{I}}_m{\varvec{B}}({\varvec{I}}-{\varvec{I}}_m)({\varvec{I}}-{\varvec{\varLambda }}^{(0)})^{-1}{\varvec{A}}\Big ]\Big [({\varvec{I}}-{\varvec{\varLambda }}^{(1)})^{-1}{\varvec{M}}\Big ]^{m}{\varvec{1}}\\&{}+\sum _{m=1}^\infty \frac{1}{\mathsf {E}[X]}\Big [{\varvec{\pi }}_1{\varvec{I}}_m{\varvec{B}}({\varvec{I}}-{\varvec{I}}_m)\Big ]\Big [({\varvec{I}}-{\varvec{\varLambda }}^{(1)})^{-1}{\varvec{M}}\Big ]^{m}{\varvec{1}}, \end{aligned}$$

where \({\varvec{\varLambda }}^{(1)}\), \({\varvec{M}}\), \({\varvec{\varLambda }}^{(0)}\), \({\varvec{A}}\), \({\varvec{I}}_m\) and \({\varvec{B}}\) are given by (5.3), (5.4), (5.8), (5.9), (5.11) and (5.12), respectively.
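The formula of Theorem 3 can be evaluated approximately by truncating all matrices and vectors, as discussed in Remark 3 and Sect. 6.2 below. The following sketch (our own illustration; the function name and its arguments are ours) assembles the truncated version of \(\psi (s)\) from (5.3), (5.4), (5.8), (5.9), (5.11) and (5.12); the truncated vectors pi0 and pi1 can be produced, for example, by the procedure of Sect. 6.2.

import numpy as np

def psi_truncated(s, lam, mu, alpha, x, pi0, pi1):
    """pi0 = (pi_{0,0},...,pi_{0,K-1}), pi1 = (0, pi_{1,1},...,pi_{1,K-1})."""
    K  = len(pi0)
    xk = lambda k: x[k - 1] if 1 <= k <= len(x) else 0.0            # P[X = k]
    EX = sum((k + 1) * p for k, p in enumerate(x))
    B  = np.zeros((K, K), dtype=complex)                            # (5.12)
    L0 = np.zeros((K, K), dtype=complex); A = np.zeros((K, K), dtype=complex)
    L1 = np.zeros((K, K), dtype=complex); M = np.zeros((K, K), dtype=complex)
    for i in range(1, K + 1):
        for j in range(i + 1, K + 1):
            B[i - 1, j - 1] = xk(j - i)
            if i > 1:
                L0[i - 1, j - 1] = lam * xk(j - i) / (s + lam + alpha)          # (5.8)
                L1[i - 1, j - 1] = lam * xk(j - i) / (s + lam + (i - 1) * mu)   # (5.3)
        if i > 1:
            A[i - 1, i - 1] = alpha / (s + lam + alpha)                         # (5.9)
            M[i - 1, i - 2] = (i - 1) * mu / (s + lam + (i - 1) * mu)           # (5.4)
    C0 = np.linalg.solve(np.eye(K) - L0, A)         # (I - Lambda^(0))^{-1} A
    T1 = np.linalg.solve(np.eye(K) - L1, M)         # (I - Lambda^(1))^{-1} M
    pi0 = np.asarray(pi0, dtype=complex); pi1 = np.asarray(pi1, dtype=complex)
    total, v = 0.0, np.ones(K, dtype=complex)
    for m in range(1, K):
        v = T1 @ v                                  # [(I - Lambda^(1))^{-1} M]^m 1
        Im = np.diag([1.0] * m + [0.0] * (K - m))   # (5.11)
        w = (pi0 @ Im @ B @ (np.eye(K) - Im) @ C0 + pi1 @ Im @ B @ (np.eye(K) - Im)) @ v
        total += w / EX
    return total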

Remark 3

The LST of the sojourn time distribution given in Theorem 3 is in series form involving infinite-dimensional matrices. Therefore, an approximation is necessary for numerical calculation. In Sect. 6.2, for numerical experiments, we present a method to approximate \(\psi (s)\). However, we have not yet been able to find a bound for the error. It is important future work to find an approximation method with guaranteed accuracy.

6 Numerical results

In this section, we present some numerical results for the M\({}^X\)/M/1/SET-VARI queue. We consider three types of distribution for X: the binomial distribution with parameters \(n\in {\mathbb {N}}\) and \(0<p<1\), denoted by \(\text {Binom}(n,p)\), the discrete uniform distribution with parameters \(a,b\in {\mathbb {N}}\) (\(a\le b\)), denoted by \(\text {Unif}\{a,b\}\), and the geometric distribution with parameter \(0< p<1\), denoted by \(\text {Geo}(p)\).

6.1 Energy consumption and response time trade-off

We consider the trade-off between the average energy consumption and the average sojourn time. In order to compare the variable speed CPU with the fixed speed CPU, we also consider the \(\text {M}^{X}/\text {M}/1/\text {SET-FIX}\) queue, in which the service speed is fixed while the other settings are the same as in the M\({}^X\)/M/1/SET-VARI queue. We use the PGF of the number of jobs in the system derived for the \(\text {M}^{X}/\text {M}/1/\text {SET-FIX}\) queue in [2]. The assumed energy consumption per unit time in each state is given in Table 1.

Table 1 Energy consumption per unit time at each state [7, 9]

Note that the constants \(K_{\text {service}}\) and \(K_{\text {set}}\) from Table 1 depend on the particular system. Let \(\mathsf {E}[W_{\text {v}}]\) and \(\mathsf {E}[W_{\text {f}}]\) denote the average sojourn time of the variable speed queue and that of the fixed speed queue, respectively. In addition, let \(\mathsf {E}[P_{\text {v}}]\) and \(\mathsf {E}[P_{\text {f}}]\) denote the average energy consumption of the variable speed queue and the fixed speed queue, respectively. \(\mathsf {E}[P_{\text {v}}]\) and \(\mathsf {E}[P_{\text {f}}]\) can be expressed as follows:

$$\begin{aligned} \mathsf {E}[P_{\text {v}}]= & {} \mu ^2\{{\widehat{\pi }}_0(1)-\pi _{0,0}+{\widehat{\pi }}_1''(1)+{\widehat{\pi }}_1'(1)\},\\ \mathsf {E}[P_{\text {f}}]= & {} {\mu ^2 (\lambda +\alpha \lambda {\mathsf {E}}[X]/\mu ) \over \lambda +\alpha }, \end{aligned}$$

where \({\widehat{\pi }}_0(z):=\sum _{j\ge 0}\pi _{0,j}z^j\), \({\widehat{\pi }}_1(z):=\sum _{j\ge 1}\pi _{1,j}z^j\), and the symbol \(\mu \) in \(\mathsf {E}[P_{\text {f}}]\) denotes the fixed service rate of the \(\text {M}^{X}/\text {M}/1/\text {SET-FIX}\) queue.

We will explore the relationship between the average sojourn time and the average power consumption. In addition, we will compare \(\mathsf {E}[W_{\text {v}}]\) with \(\mathsf {E}[W_{\text {f}}]\) under the condition that \(\mathsf {E}[P_{\text {v}}]=\mathsf {E}[P_{\text {f}}]\). In what follows, we assume that \(\alpha =1.0\) (as in Figs. 2 and 3) and \(\lambda \mathsf {E}[X]=1\). Note that \(\lambda \mathsf {E}[X]\) is the mean number of jobs arriving per unit time. In the numerical experiments, the value of \(\lambda \) and the distribution of X vary while \(\lambda \mathsf {E}[X] = 1\) is kept fixed.

The procedure of the numerical experiments is as follows. First, fixing the value of \(\mu \in (0,3]\), we compute the average energy consumption per unit time of the M\({}^X\)/M/1/SET-VARI queue, denoted by \(A_{\text {v}}\), and the average sojourn time. Let \(\mu _{\text {f}}(A)\) denote the unique service rate which realizes the average energy consumption A in the fixed-speed queue. Next, we compute \(\mu _{\text {f}}(A_{\text {v}})\) by

$$\begin{aligned} \mu _{\text {f}}(A_{\text {v}})=\big (-\alpha \lambda \mathsf {E}[X]+\sqrt{(\alpha \lambda \mathsf {E}[X])^2+4\lambda A_{\text {v}}(\lambda +\alpha )}\,\big )/(2\lambda ). \end{aligned}$$

As a result, the energy consumption for both models is kept the same. Finally, we compute the average sojourn time of the M\({}^X\)/M/1/SET-FIX queue under the parameter \(\mu _{\text {f}}(A_{\text {v}})\). In this numerical experiment, we compute the average sojourn time from the average number of jobs in the system using Little’s formula [8].
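The matching step above amounts to evaluating the displayed root of a quadratic equation; the following one-function helper (our own illustration; the function name is ours) simply transcribes it.

import math

def mu_fixed(A, lam, alpha, EX):
    """Service rate mu_f(A) of the fixed-speed queue matching energy consumption A."""
    return (-alpha * lam * EX
            + math.sqrt((alpha * lam * EX) ** 2 + 4.0 * lam * A * (lam + alpha))) / (2.0 * lam)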

Fig. 2 \(\lambda {\mathsf {E}}[X]=1.0,\ \alpha =1.0\). X follows the binomial distribution

Fig. 3 \(\lambda {\mathsf {E}}[X]=1.0,\ \alpha =1.0\). X follows the discrete uniform distribution

In Fig. 2, we present the results when X is \(\text {Binom}(9,1/6)\) (and \(\lambda =0.4\)) and when X is \(\text {Binom}(9,1/3)\) (and \(\lambda =0.25\)). Jobs in the case of \(\text {Binom}(9,1/3)\) are more likely to arrive in larger batches than those in the case of \(\text {Binom}(9,1/6)\). In Fig. 3, we present the results when X is \(\text {Unif}\{1,4\}\) (and \(\lambda =0.4\)) and when X is \(\text {Unif}\{1,7\}\) (and \(\lambda =0.25\)). Similarly, jobs in the case of \(\text {Unif}\{1,7\}\) are more likely to arrive in larger batches than those in the case of \(\text {Unif}\{1,4\}\).

In all the cases in Figs. 2 and 3, we observe the trade-off between the average sojourn time and the average power consumption, i.e., the average sojourn time is smaller when the average power consumption is larger. In addition, keeping the average number of arrivals per unit time (\(\lambda {\mathsf {E}}[X]\)) and the average power consumption the same, arriving in larger batches results in a smaller average sojourn time. This observation agrees with intuition; a formal proof is left for future work.

We now compare the variable-speed queue with the fixed-speed queue. Our numerical results show that for high-performance applications, in which delays must be kept small, a variable speed can result in both shorter delays and lower energy consumption than a fixed speed, while the opposite is true for applications where energy usage is more important than delay performance.

These observations can be explained as follows: the energy consumption of the variable-speed queue increases monotonically with the number of jobs in the system, while that of the fixed-speed queue is constant whenever there are jobs in the system.

6.2 The sojourn time distribution

In Figs. 4 and 5, we present the probability densities of the sojourn time distributions computed by numerically inverting the Laplace–Stieltjes transform. In what follows, we assume that \(\alpha =0.1\) and \(\lambda {\mathsf {E}}[X]=1\). The LST of the sojourn time distribution is given in Theorem 3, but it is in series form involving infinite-dimensional matrices. Therefore, as mentioned in Remark 3, approximation is necessary for numerical calculation.

Fig. 4 Comparison of the cases in which X follows different distributions. \(\lambda =0.4,\ \alpha =0.1,\ \mu =0.1,\ \mathsf {E}[X]=2.5\)

Fig. 5 Comparison of the cases in which X follows different distributions. \(\lambda =0.25,\ \alpha =0.1,\ \mu =0.1,\ \mathsf {E}[X]=4.0\)

We present a procedure to compute the LST of the sojourn time distribution, \(\psi (s)\). First, we truncate the infinite vectors \({\varvec{\pi }}_0\) and \({\varvec{\pi }}_1\) to the vectors of their first \((N^*+1)\) elements, where the constant \(N^*\) is determined by

$$\begin{aligned} N^*=\inf \big \{n\in {\mathbb {N}};\,1-{\sum }_{j=0}^n\pi _{0,j}-{\sum }_{j=1}^n\pi _{1,j} <10^{-4}\big \}. \end{aligned}$$

This amounts to disregarding the states with more than \(N^*\) jobs in the system, whose total probability is less than \(10^{-4}\). We compute \(\pi _{0,0}\) by (4.4) and \(\pi _{i,j}\), \(i=0,1\), \(j=1,\dots ,N^*\), by (3.2)–(3.5). In addition, we truncate the infinite matrices appearing in \(\psi (s)\) to their \((N^* + 1) \times (N^* + 1)\) north-west corner matrices. We compute each element of the infinite matrices by (5.3), (5.4), (5.8), (5.9), (5.11) and (5.12). Let \(\psi ^*(s)\) denote the function computed by Theorem 3 using the truncated matrices and vectors. In our numerical experiments, we use the value of \(\psi ^*(s)\) as an approximation to the LST of the sojourn time distribution. It is important future work to estimate the error of this approximation. Our extensive numerical experiments show that the approximation is fairly accurate in the sense that the final results do not change much as \(N^{*}\) is increased.
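The truncation step can be implemented as a straightforward recursion. The following sketch (our own illustration; \(\pi _{0,0}\) is assumed to be computed beforehand, e.g., by (4.4), and the batch-size distribution is assumed to have finite support) generates \(\pi _{0,j}\) and \(\pi _{1,j}\) by (3.3), (3.2) and (3.15) and stops at \(N^*\).

def truncate_pi(pi00, lam, mu, alpha, x, eps=1e-4, nmax=10**5):
    xk   = lambda k: x[k - 1] if 1 <= k <= len(x) else 0.0   # P[X = k]
    tail = lambda k: sum(x[k:]) if k < len(x) else 0.0       # P[X > k]
    pi0, pi1 = [pi00], [0.0, lam * pi00 / mu]                # pi_{1,1} from (3.2)
    n = 1
    while n <= nmax:
        # (3.3): (lam + alpha) pi_{0,n} = lam * sum_{k=1}^{n} x_k pi_{0,n-k}
        pi0.append(lam / (lam + alpha)
                   * sum(xk(k) * pi0[n - k] for k in range(1, n + 1)))
        # (3.15): (n+1) pi_{1,n+1} = (lam/mu) sum_k P[X > n-k] (pi_{0,k} + pi_{1,k})
        s = sum(tail(n - k) * pi0[k] for k in range(0, n + 1)) \
            + sum(tail(n - k) * pi1[k] for k in range(1, n + 1))
        pi1.append(lam / mu * s / (n + 1))
        if 1.0 - sum(pi0) - sum(pi1[1:n + 1]) < eps:         # the rule defining N*
            return pi0, pi1, n                               # n plays the role of N*
        n += 1
    return pi0, pi1, n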

Next, we present the procedure to compute the density of the sojourn time distribution for \(t\in [0,T]\), for fixed \(T > 0\), by numerically inverting the Laplace–Stieltjes transform [5]. The function \(f^{(K)}(t)\) and the constant \(K^*(t)\) are defined as follows:

$$\begin{aligned} f^{(K)}(t)= & {} \frac{h}{\pi }\exp \Big (\frac{6}{T}t\Big )\Big \{\frac{\psi (6/T)}{2}+\textstyle {\sum }_{k=1}^K{\text {Re}}\big [\psi (6/T+ikh)\exp ({i}kht)\big ]\Big \},\\ K^*(t)= & {} \inf \big \{K\in {\mathbb {N}};\,\big |f^{(K)}(t)-f^{(K-1)}(t)\big |<10^{-4}\big \}, \end{aligned}$$

where \(i=\sqrt{-1}\) and \(\text {Re}(a+ib)=a\). In our numerical experiments, we use the value of \(f^{(K^*(t))}(t)\) as the density of the sojourn time distribution at t. The constant h is the step size; we set \(h=1/100\).
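The inversion can be coded directly from the two displays above. The following sketch (our own illustration; the function name and default arguments are ours) evaluates \(f^{(K^*(t))}(t)\) for a user-supplied LST psi that accepts complex arguments, for example a truncated \(\psi ^*(s)\).

import cmath, math

def invert_lst(psi, t, T, h=0.01, tol=1e-4, kmax=10**5):
    """Evaluate f^(K*(t))(t); psi must accept complex arguments s."""
    a = 6.0 / T                                   # the constant 6/T in f^(K)(t)
    acc = psi(a).real / 2.0
    prev = h / math.pi * math.exp(a * t) * acc    # corresponds to K = 0
    for k in range(1, kmax + 1):
        acc += (psi(a + 1j * k * h) * cmath.exp(1j * k * h * t)).real
        val = h / math.pi * math.exp(a * t) * acc
        if abs(val - prev) < tol:                 # the stopping rule defining K*(t)
            return val
        prev = val
    return val                                    # kmax reached

Combined with a truncated \(\psi ^*(s)\) as described above, this routine produces density curves of the type plotted in Figs. 4 and 5.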

In Figs. 4 and 5, we investigate the impact of the batch size distribution on the sojourn time distribution. Figure 4 presents the sojourn time distribution for \(\lambda =0.4\), \(\mathsf {E}[X]=2.5\) and \(\mu =0.1\), while Fig. 5 shows that for \(\lambda =0.25\), \(\mathsf {E}[X]=4\) and \(\mu =0.1\). Note that in both figures \(\lambda {\mathsf {E}}[X]=1\). We observe that the curves of \(\text {Binom}(9,1/6)\) and \(\text {Unif}\{1,4\}\) almost coincide. The second, third and fourth moments are 7.5, 25.8 and 99.2 for \(\text {Binom}(9,1/6)\), and 7.5, 25.0 and 113.5 for \(\text {Unif}\{1,4\}\). On the other hand, the second, third and fourth moments are 10.0, 58.8 and 480.0 for \(\text {Geo}(1/2.5)\). This suggests that high-order moments (roughly the fourth and higher) have less influence on the sojourn time distribution.

Compared with Fig. 4, the curves for the binomial distribution, uniform distribution and geometric distribution differ visibly in Fig. 5. The second moments are 18.0 for \(\text {Binom}(9,1/3)\), 20.0 for \(\text {Unif}\{1,7\}\) and 38.0 for \(\text {Geo}(1/4)\). This suggests that the second moment of the batch size has a significant impact on the sojourn time distribution.

7 Conclusion

In this paper, we have studied the M\({}^X\)/M/1/SET-VARI queue. We have derived the PGF of the number of jobs in the system in an integral form. Furthermore, we have derived the LST of the sojourn time distribution, which is obtained in series form involving infinite-dimensional matrices. Through numerical experiments, we have obtained some insights into the sojourn time distribution and the average energy consumption. We remark that the stationary queue length distribution and the sojourn time distribution of the finite-buffer version can be obtained by almost the same procedure as for the infinite-buffer model, so we have omitted that analysis here. Moreover, the finite-buffer model is easier in the sense that it is always stable and its sojourn time distribution does not involve infinite matrices. As future work, we plan to consider the model where the service rate is an arbitrary function of the number of jobs in the system. Models with general setup time and service time distributions are also left for future investigation.