
1 Introduction

Modern computer networks are characterized by the integration of heterogeneous services (phone calls, text messages, media content, cloud computing) over the same physical infrastructure. The traffic flows generated by the different applications have specific statistical features (in terms of packet size, bit-rate and service requirements), and hence the analysis of queueing systems with several classes of customers is of primary importance [3, 4, 8, 9, 15]. Moreover, due to the heterogeneity of services provided by communication networks [6, 10,11,12,13,14, 16], the features of the required resources should be taken into account.

In traditional multiclass queuing systems the service process is typically characterized in terms of service time distribution. In this paper we assume that customers have different random capacity requirements (depending on their class), so that the proposed model can be useful for analysis and design issues in high-performance computer and communication systems, in which service time and customer volume are independent quantities (see [7] and references therein).

In more detail, the application of the dynamic screening method permits us to analyse a heterogeneous resource queueing system with an unlimited number of servers, non-exponential service times and a renewal arrival process.

The remainder of this paper is structured as follows. Section 2 introduces the mathematical model and the application of the dynamic screening method to the considered multiclass queueing system, while in Sect. 3 the corresponding Kolmogorov equations are presented. Section 4 highlights our main contribution, the derivation of first- and second-order asymptotics under heavy load conditions (i.e., when the mean interarrival time tends to 0), and their applicability is verified in Sect. 5 by means of discrete-event simulation. Finally, Sect. 6 concludes the paper with some final remarks.

2 Mathematical Model

Consider a queuing system with an infinite number of servers and n types of customers, characterized by different service times and queuing resource requirements. Arrivals are described by a renewal process with interarrival time distribution \(A\left( z\right) \) and for each of them class i is selected with probability \(p_i \left( i = 1,\ldots , n\right) \), where \(\sum \limits _{i=1}^{n}p_i=1\). Each arriving customer instantly occupies the first free server, with service time distribution \(B_i\left( \tau \right) \) and required resource distribution \(G_i\left( y\right) \), both depending on the type i of the customer. At the end of the service, the customer leaves the system. Resource amount and service times are mutually independent, and do not depend on the epochs of customer arrivals.

Denote by \(V_i\left( t\right) \) \(\left( i=1,\ldots ,n\right) \) the total resource amount occupied by customers of type i at time t. The aim of this work is to determine the probabilistic characterization of the n-dimensional process \(\left\{ \mathbf V (t)\right\} \). This process is, in general, non-Markovian, but it can be investigated by means of the dynamic screening method.

In Fig. 1, \(n+1\) time axes, labeled from 0 to n, are shown: axis 0 indicates the epochs of customers arrivals, while the remaining axes \(i=1,\ldots ,n\) correspond to the different types of customers.

Fig. 1. Dynamic screening of the arrival process

We define a set of n functions (dynamic probabilities) \(S_i\left( t\right) ,\) that satisfy the conditions

$$\begin{aligned} 0 \le S_i\left( t\right) \le 1, \quad \sum \limits _{i=1}^{n}S_i\left( t\right) \le 1, \end{aligned}$$

and assume that a customer arriving in the system at time t is screened to axis i with probability \(S_i\left( t\right) \), and is not screened anywhere with probability

$$\begin{aligned} S_0\left( t\right) = 1 - \sum \limits _{i=1}^{n}S_i\left( t\right) . \end{aligned}$$

Let the system be empty at time \(t_0\), and let us choose some arbitrary time T with \(T>t_0\). Hence, the probability \(S_i\left( t\right) \) that an i-type customer, arriving at time t with \(t_0 \le t \le T\), is still in service at time T, is given by

$$\begin{aligned} S_i(t) = 1 - B_i(T - t), \left( i=1,\ldots ,n\right) . \end{aligned}$$
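For illustration, if the service times of type-i customers were exponential with some rate \(\nu _i\) (a purely hypothetical choice, used here only as an example), the dynamic probabilities would take the simple form

$$\begin{aligned} S_i\left( t\right) =1-B_i\left( T-t\right) =e^{-\nu _i\left( T-t\right) }, \end{aligned}$$

which grows from values close to 0 (for arrivals far from T) to 1 at \(t=T\).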

Denote by \(W_i\left( t\right) \) the total resource amount screened on axis i during the interval \(\left[ t_0,t\right] \). Then, the processes \(\left\{ \mathbf V \left( t\right) \right\} \) and \(\left\{ \mathbf W \left( t\right) \right\} \) satisfy the following property:

$$\begin{aligned} P\left\{ \mathbf V \left( T\right)<\mathbf x \right\} =P\left\{ \mathbf W \left( T\right) <\mathbf x \right\} \end{aligned}$$
(1)

for all \(\mathbf x =\left\{ x_1,\ldots ,x_n\right\} \), where the inequalities \(\mathbf V \left( T\right) <\mathbf x \) and \(\mathbf W \left( T\right) <\mathbf x \) mean that \(V_1\left( T\right)<x_1,\ldots ,V_n\left( T\right) <x_n\) and \(W_1\left( T\right)<x_1,\ldots ,W_n\left( T\right) <x_n\), respectively. Equality (1) permits us to investigate the process \(\left\{ \mathbf V \left( t\right) \right\} \) via the analysis of the process \(\left\{ \mathbf W \left( t\right) \right\} .\)
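Equality (1) can also be checked numerically. The following minimal Monte Carlo sketch (an illustration of ours, reusing the parameter values of the example in Sect. 5; the horizon T, the number of replications and all names are arbitrary choices) compares, over independent replications, the distribution of \(\mathbf V \left( T\right) \), obtained from the actual service times, with the distribution of \(\mathbf W \left( T\right) \), obtained by screening each arrival independently with probability \(S_i\left( t\right) =1-B_i\left( T-t\right) \).

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p = np.array([0.5, 0.3, 0.2])              # class probabilities p_i
mu = np.array([2.0, 1.0, 0.4])             # exponential resource parameters (G_i)
alpha = beta = np.array([0.5, 1.5, 2.5])   # gamma service-time parameters (B_i)
t0, T, n_rep = 0.0, 20.0, 20000            # illustrative horizon and sample size

def one_replication():
    V, W, t = np.zeros(3), np.zeros(3), t0
    while True:
        t += rng.uniform(0.5, 1.5)                      # renewal arrivals on [t0, T]
        if t > T:
            return V, W
        i = rng.choice(3, p=p)                          # customer type
        y = rng.exponential(1.0 / mu[i])                # required resource ~ G_i
        s = rng.gamma(alpha[i], 1.0 / beta[i])          # service time ~ B_i
        if t + s > T:                                   # still in service at time T
            V[i] += y
        S_it = stats.gamma.sf(T - t, alpha[i], scale=1.0 / beta[i])  # S_i(t)
        if rng.random() < S_it:                         # independent screening
            W[i] += y

samples = np.array([one_replication() for _ in range(n_rep)])
V_s, W_s = samples[:, 0, :], samples[:, 1, :]
for i in range(3):
    d = stats.ks_2samp(V_s[:, i], W_s[:, i]).statistic
    print(f"class {i + 1}: distance between V_i(T) and W_i(T) samples = {d:.3f}")

For a large number of replications, the two-sample Kolmogorov–Smirnov statistics should be close to zero, in agreement with (1).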

3 Kolmogorov Differential Equations

Let \(z\left( t\right) \) be the residual time from t to the next arrival (in the renewal input process) and let us denote by

$$\begin{aligned} P\left( z,\mathbf w ,t\right) =P\left\{ z\left( t\right)<z,\mathbf W (t)<\mathbf w \right\} \end{aligned}$$

the probability distribution of the \(n+1\)-dimensional Markovian process \(\left\{ z\left( t\right) , \mathbf W \left( t\right) \right\} \).

By the law of total probability, we get the following system of Kolmogorov differential equations:

$$ \frac{\partial P\left( z,\mathbf w ,t\right) }{\partial t}=\frac{\partial P\left( z,\mathbf w ,t\right) }{\partial z}+\frac{\partial P\left( 0,\mathbf w ,t\right) }{\partial z}\left( A\left( z\right) -1\right) $$
$$\begin{aligned} + A\left( z\right) \sum \limits _{i=1}^{n}p_iS_i\left( t\right) \left[ \int \limits _{0}^{w_i}\frac{\partial P\left( 0,\mathbf w -\mathbf y _\mathbf i ,t\right) }{\partial z}dG_i\left( y\right) -\frac{\partial P\left( 0,\mathbf w ,t\right) }{\partial z}\right] , \end{aligned}$$

where \(\mathbf w =\left\{ w_1,\ldots ,w_n\right\} \), \(\mathbf y _\mathbf i =\left\{ 0,\ldots ,y,\ldots ,0\right\} \), \(z>0\), \(w_{i}>0 \ \left( i=1,\ldots ,n\right) \), with the initial condition

$$\begin{aligned} P\left( z,\mathbf w ,t_0\right) = \left\{ \begin{array}{cl} R(z), &{} \mathbf w =\mathbf 0 ,\\ 0, &{} \text{ otherwise }, \end{array} \right. \end{aligned}$$

where \(R \left( z\right) \) represents the stationary probability distribution of the values of the random process \( z \left( t\right) \).

By introducing the partial characteristic function:

$$\begin{aligned} h\left( z,\mathbf v ,t\right) =\int \limits _0^\infty e^{jv_1w_1}\ldots \int \limits _0^\infty e^{jv_nw_n}P\left( z,d\mathbf w ,t\right) \quad z>0, w_i>0, \end{aligned}$$

where \(j=\displaystyle \sqrt{-1}\) denotes the imaginary unit, we obtain the following equations:

$$ \frac{\partial h\left( z,\mathbf v ,t\right) }{\partial t}=\frac{\partial h\left( z,\mathbf v ,t\right) }{\partial z} $$
$$\begin{aligned} + \frac{\partial h\left( 0,\mathbf v ,t\right) }{\partial z} \left[ A\left( z\right) -1+A\left( z\right) \sum \limits _{i=1}^{n}p_iS_i\left( t\right) \left( G_i^*\left( v_i\right) -1\right) \right] , \end{aligned}$$
(2)

where

$$\begin{aligned} G_i^*\left( v_i\right) =\int \limits _{0}^{\infty }e^{jv_iy}dG_i\left( y\right) , \end{aligned}$$

with the initial condition

$$\begin{aligned} h\left( z,\mathbf v ,t_0\right) =R\left( z\right) . \end{aligned}$$
(3)
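For example (an illustration of ours), if the resource requirements of type-i customers are exponentially distributed with parameter \(\mu _i\), as in the numerical example of Sect. 5, the transform takes the simple form

$$\begin{aligned} G_i^*\left( v_i\right) =\int \limits _{0}^{\infty }e^{jv_iy}\mu _ie^{-\mu _iy}dy=\frac{\mu _i}{\mu _i-jv_i}. \end{aligned}$$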

4 Asymptotic Analysis

In general, Eq. (2) cannot be solved analytically, but it is possible to find approximate solutions under suitable asymptotic conditions; in this paper we focus on the case of infinitely growing arrival rate.

To this aim, let us write the distribution function of the interarrival times as \(A\left( Nz\right) \), where N is some parameter that tends to infinity in the asymptotic analysis [1, 2].

Then, Eq. (2) becomes

$$ \frac{1}{N}\frac{\partial h\left( z,\mathbf v ,t\right) }{\partial t}=\frac{\partial h\left( z,\mathbf v ,t\right) }{\partial z} $$
$$\begin{aligned} + \frac{\partial h\left( 0,\mathbf v ,t\right) }{\partial z} \left[ A(z)-1+ A\left( z\right) \sum \limits _{i=1}^{n}p_iS_i\left( t\right) \left( G_i^*\left( v_i\right) -1\right) \right] , \end{aligned}$$
(4)

with the initial condition (3).

We solve the problem (4)–(3) under the asymptotic condition \(N \rightarrow \infty \), and obtain approximate solutions with different levels of accuracy, denoted in the following as “first-order asymptotic” \(h\left( z,\mathbf v ,t\right) \approx h^{(1)}\left( z,\mathbf v ,t\right) \) and “second-order asymptotic” \(h\left( z,\mathbf v ,t\right) \approx h^{(2)}\left( z,\mathbf v ,t\right) \).

4.1 The First-Order Asymptotic Analysis

As a preliminary result, in this section we present the first-order asymptotic in the form of the following lemma.

Lemma

The first-order asymptotic characteristic function of the process \(\left\{ z\left( t\right) , \mathbf W \left( t\right) \right\} \) is given by

$$\begin{aligned} h^{(1)}\left( z,\mathbf v ,t\right) =R\left( z\right) \exp \left\{ N\lambda \sum \limits _{i=1}^{n}jv_ia_1^{(i)}p_i\int \limits _{t_{0}}^{t}S_i\left( \tau \right) d\tau \right\} , \end{aligned}$$

where \(\lambda =\left( \int \limits _{0}^{\infty }\left( 1-A\left( x\right) \right) dx\right) ^{-1}\) and \(a_1^{(i)}=\int \limits _{0}^{\infty }ydG_i(y)\) is the mean amount of resources required by i-type customers.

Proof

By introducing the following notations

$$\begin{aligned} \varepsilon =\frac{1}{N}, \mathbf v =\varepsilon \mathbf y ,h\left( z,\mathbf v ,t\right) =f_1\left( z,\mathbf y ,t,\varepsilon \right) , \end{aligned}$$
(5)

in expressions (4) and (3), we get

$$ \varepsilon \frac{\partial f_1\left( z,\mathbf y ,t,\varepsilon \right) }{\partial t}=\frac{\partial f_1\left( z,\mathbf y ,t,\varepsilon \right) }{\partial z} $$
$$\begin{aligned} + \frac{\partial f_1\left( 0,\mathbf y ,t,\varepsilon \right) }{\partial z}\left[ A(z)-1+A\left( z\right) \sum \limits _{i=1}^{n}p_iS_i\left( t\right) \left( G_i^*\left( \varepsilon y_i\right) -1\right) \right] , \end{aligned}$$
(6)

with the initial condition

$$\begin{aligned} f_1\left( z,\mathbf y ,t_0,\varepsilon \right) =R\left( z\right) . \end{aligned}$$
(7)

The asymptotic solution of the problem (6)–(7), i.e. the function \(f_1\left( z,\mathbf y ,t\right) = \lim \limits _{\varepsilon \rightarrow 0}f_1\left( z,\mathbf y ,t,\varepsilon \right) \), can be obtained in two steps.

Step 1. Let \(\varepsilon \rightarrow 0\); then Eq. (6) becomes:

$$\begin{aligned} \frac{\partial f_1\left( z,\mathbf y ,t\right) }{\partial z}+\frac{\partial f_1\left( 0,\mathbf y ,t\right) }{\partial z}\left( A\left( z\right) -1\right) =0. \end{aligned}$$

Hence, \(f_1(z,\mathbf y ,t)\) can be expressed as

$$\begin{aligned} f_1\left( z,\mathbf y ,t\right) =R\left( z\right) \varPhi _1\left( \mathbf y ,t\right) , \end{aligned}$$
(8)

where \(\varPhi _1\left( \mathbf y ,t\right) \) is some scalar function, satisfying the condition \(\varPhi _1\left( \mathbf y ,t_0\right) =1.\)

Step 2. Now let \(z\rightarrow \infty \) in (6):

$$\begin{aligned} \varepsilon \frac{\partial f_1\left( \infty ,\mathbf y ,t,\varepsilon \right) }{\partial t}=\frac{\partial f_1(0,\mathbf y ,t,\varepsilon )}{\partial z}\sum \limits _{i=1}^{n}p_iS_i\left( t\right) \left( G_i^*(\varepsilon y_i)-1\right) . \end{aligned}$$

Then, we substitute here the expression (8), take advantage of the Taylor expansion

$$\begin{aligned} e^{j\varepsilon s}=1+{j\varepsilon s}+O(\varepsilon ^2), \end{aligned}$$
(9)

divide by \( \varepsilon \) and perform the limit as \(\varepsilon \rightarrow 0 \). Since \( R^{\prime }\left( 0\right) = \lambda \), we get the following differential equation:

$$\begin{aligned} \frac{\partial \varPhi _1\left( \mathbf y ,t\right) }{\partial t}= \varPhi _1\left( \mathbf y ,t\right) \lambda \sum \limits _{i=1}^{n}p_iS_i\left( t\right) jy_ia_1^{(i)}, \end{aligned}$$
(10)

where \(a_1^{(i)}=\int \limits _{0}^{\infty }ydG_i(y)\).

Taking into account the initial condition, the solution of (10) is

$$\begin{aligned} \varPhi _1\left( \mathbf y ,t\right) =\exp \left\{ \lambda \sum \limits _{i=1}^{n}jy_ia_1^{(i)}p_i \int \limits _{t_{0}}^{t}S_i\left( \tau \right) d\tau \right\} . \end{aligned}$$

Substituting this expression into (8), we obtain

$$\begin{aligned} f_1\left( z, \mathbf y ,t\right) =R\left( z\right) \exp \left\{ \lambda \sum \limits _{i=1}^{n}jy_ia_1^{(i)}p_i \int \limits _{t_{0}}^{t}S_i\left( \tau \right) d\tau \right\} . \end{aligned}$$

Therefore, reverting the substitutions (5), we can write

$$\begin{aligned} h\left( z,\mathbf v ,t\right) \approx h^{(1)}\left( z,\mathbf v ,t\right) =f_1\left( z,\frac{\mathbf v }{\varepsilon },t\right) =R\left( z\right) \exp \left\{ N\lambda \sum \limits _{i=1}^{n}jv_ia_1^{(i)}p_i\int \limits _{t_{0}}^{t}S_i\left( \tau \right) d\tau \right\} , \end{aligned}$$

which coincides with the statement of the lemma. The proof is complete.
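We remark, as a simple illustration, that \(\int \limits _{0}^{\infty }\left( 1-A\left( x\right) \right) dx\) is the mean interarrival time, so that \(\lambda \) is just the rate of the renewal arrival process; for instance, for interarrival times uniformly distributed on \(\left[ 0.5,1.5\right] \) (the scenario of Sect. 5),

$$\begin{aligned} \lambda =\left( \int \limits _{0}^{\infty }\left( 1-A\left( x\right) \right) dx\right) ^{-1}=1. \end{aligned}$$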

4.2 The Second-Order Asymptotic Analysis

Now we are able to formulate the main contribution of this work, which is summarized by the following theorem.

Theorem

The second-order asymptotic characteristic function of the process \(\left\{ z\left( t\right) ,\mathbf W \left( t\right) \right\} \) is given by

$$ h^{(2)}\left( z,\mathbf v ,t\right) =R\left( z\right) \exp \left\{ N\lambda \sum \limits _{i=1}^{n}jv_ia_1^{(i)}p_i\int \limits _{t_{0}}^{t}S_i\left( \tau \right) d\tau \right. $$
$$ + N\lambda \sum \limits _{i=1}^{n}\frac{(jv_i)^2}{2}a_2^{(i)}p_i\int \limits _{t_0}^{t}S_i\left( \tau \right) d\tau $$
$$\begin{aligned} \left. + \frac{N\kappa }{2}\sum \limits _{i=1}^{n}\sum \limits _{m=1}^{n}jv_ia_1^{(i)}p_ijv_ma_1^{(m)}p_m\int \limits _{t_0}^{t}S_i\left( \tau \right) S_m\left( \tau \right) d\tau \right\} , \end{aligned}$$
(11)

where \(a_2^{(i)}=\int \limits _{0}^{\infty }y^2dG_i(y)\) and \(\kappa =\lambda ^3\left( \sigma ^2-a^2\right) \), a and \(\sigma ^2\) being the mean and the variance of the interarrival time, respectively.

Proof

Let \(h_2\left( z,\mathbf v ,t\right) \) be the function defined by the following equation

$$\begin{aligned} h\left( z,\mathbf v ,t\right) =h_2\left( z,\mathbf v ,t\right) \exp \left\{ N\lambda \sum \limits _{i=1}^{n}jv_ia_1^{(i)}p_i \int \limits _{t_{0}}^{t}S_i\left( \tau \right) d\tau \right\} \end{aligned}$$
(12)

Substituting this expression into (3) and (4), we get the following equivalent problem:

$$ \frac{1}{N}\frac{\partial h_2\left( z,\mathbf v ,t\right) }{\partial t}+\lambda h_2\left( z,\mathbf v ,t\right) \sum \limits _{i=1}^{n}jv_ia_1^{(i)}p_i S_i\left( t\right) = $$
$$\begin{aligned} \frac{\partial h_2\left( z,\mathbf v ,t\right) }{\partial z}+\frac{\partial h_2\left( 0,\mathbf v ,t\right) }{\partial z} \left[ A\left( z\right) -1+A\left( z\right) \sum \limits _{i=1}^{n} p_iS_i\left( t\right) \left( G_i^*(v_i)-1\right) \right] , \end{aligned}$$
(13)

with the initial condition

$$\begin{aligned} h_2\left( z,\mathbf v ,t_0\right) =R\left( z\right) . \end{aligned}$$
(14)

By performing the following change of variables

$$\begin{aligned} \varepsilon ^2=\frac{1}{N}, \mathbf v =\varepsilon \mathbf y , h_2\left( z,\mathbf v ,t\right) =f_2\left( z,\mathbf y ,t,\varepsilon \right) , \end{aligned}$$
(15)

in (13) and (14), we get the following problem:

$$ \varepsilon ^2 \frac{\partial f_2\left( z,\mathbf y ,t,\varepsilon \right) }{\partial t}+f_2\left( z,\mathbf y ,t,\varepsilon \right) \lambda \sum \limits _{i=1}^{n}j\varepsilon y_ia_1^{(i)}p_iS_i\left( t\right) =\frac{\partial f_2\left( z,\mathbf y ,t,\varepsilon \right) }{\partial z} $$
$$\begin{aligned} + \frac{\partial f_2\left( 0,\mathbf y ,t,\varepsilon \right) }{\partial z}\left[ A\left( z\right) -1+A\left( z\right) \sum \limits _{i=1}^{n}p_iS_i\left( t\right) \left( G_i^*(\varepsilon y_i)-1\right) \right] , \end{aligned}$$
(16)

with the initial condition

$$\begin{aligned} f_2\left( z,\mathbf y ,t_0,\varepsilon \right) =R\left( z\right) . \end{aligned}$$
(17)

As a generalization of the approach used in the previous subsection, the asymptotic solution of this problem

$$\begin{aligned} f_2\left( z,\mathbf y ,t\right) =\lim \limits _{\varepsilon \rightarrow 0}f_2\left( z,\mathbf y ,t,\varepsilon \right) \end{aligned}$$

can be derived in three steps.

Step 1. Letting \(\varepsilon \rightarrow 0\) in (16), we get the following equation:

$$\begin{aligned} \frac{\partial f_2\left( z,\mathbf y ,t\right) }{\partial z}+\frac{\partial f_2\left( 0,\mathbf y ,t\right) }{\partial z}\left( A\left( z\right) -1\right) =0. \end{aligned}$$

Hence, we can express \(f_2\left( z,\mathbf y ,t\right) \) as

$$\begin{aligned} f_2\left( z,\mathbf y ,t\right) =R\left( z\right) \varPhi _2\left( \mathbf y ,t\right) , \end{aligned}$$
(18)

where \(\varPhi _2\left( \mathbf y ,t\right) \) is some scalar function that satisfies the condition \(\varPhi _2\left( \mathbf y ,t_0\right) =1.\)

Step 2. The solution \(f_2\left( z,\mathbf y ,t\right) \) can be represented in the expansion form

$$\begin{aligned} f_2\left( z,\mathbf y ,t\right) =\varPhi _2\left( \mathbf y ,t\right) \left[ R\left( z\right) +f\left( z\right) \sum \limits _{i=1}^{n}j\varepsilon y_ia_1^{(i)}p_iS_i\left( t\right) \right] +O\left( \varepsilon ^2\right) , \end{aligned}$$
(19)

where \(f\left( z\right) \) is a suitable function. By substituting the previous expression and the Taylor-Maclaurin expansion (9) in (16), taking into account that \(R^\prime \left( z\right) =\lambda \left( 1-A\left( z\right) \right) \), it is easy to verify that

$$\begin{aligned} f^\prime \left( 0\right) =\lambda f\left( \infty \right) +\frac{\kappa }{2}, \end{aligned}$$

and \(\kappa =\lambda ^3\left( \sigma ^2-a^2\right) \), where a and \(\sigma ^2\) are the mean and the variance of the interarrival time.

Step 3. Letting \( z \rightarrow \infty \) in (16), by the definition of the function \(f_2\left( z,\mathbf y ,t,\varepsilon \right) \), we obtain

$$\begin{aligned} \lim \limits _{z\rightarrow \infty }\frac{\partial f_2\left( z,\mathbf y ,t,\varepsilon \right) }{\partial z}=0, \end{aligned}$$

and, taking into account the expansion

$$\begin{aligned} e^{j\varepsilon s}=1+j\varepsilon s+\frac{\left( j\varepsilon s\right) ^2}{2}+O\left( \varepsilon ^3\right) , \end{aligned}$$

we can write

$$\begin{aligned}&\varepsilon ^2 \frac{\partial f_2\left( \infty ,\mathbf y ,t,\varepsilon \right) }{\partial t}+f_2\left( \infty ,\mathbf y ,t,\varepsilon \right) \lambda \sum \limits _{i=1}^{n}p_iS_i\left( t\right) j\varepsilon y_ia_1^{(i)} \\&= \frac{\partial f_2\left( 0,\mathbf y ,t,\varepsilon \right) }{\partial z}\sum \limits _{i=1}^{n} p_iS_i\left( t\right) \left( j\varepsilon y_ia_1^{(i)}+\frac{(j\varepsilon y_i)^2}{2}a_2^{(i)}\right) +O\left( \varepsilon ^3\right) , \end{aligned}$$

where \(a_2^{(i)}=\int \limits _{0}^{\infty }y^2dG_i(y)\).

By substituting here the expansion (19), taking the limit as \(z\rightarrow \infty \), dividing by \(\varepsilon ^2\) and letting \(\varepsilon \rightarrow 0\), we get

$$\begin{aligned}&\frac{\partial \varPhi _2\left( \mathbf y ,t\right) }{\partial t}+\varPhi _2\left( \mathbf y ,t\right) \lambda f\left( \infty \right) \sum \limits _{i=1}^{n}\sum \limits _{m=1}^{n}jy_ia_1^{(i)}p_iS_i\left( t\right) jy_ma_1^{(m)}p_mS_m\left( t\right) \\&= \varPhi _2\left( \mathbf y ,t\right) \left[ \lambda \sum \limits _{i=1}^{n}\frac{(jy_i)^2}{2}a_2^{(i)}p_iS_i\left( t\right) +f^\prime \left( 0\right) \sum \limits _{i=1}^{n}\sum \limits _{m=1}^{n}jy_ia_1^{(i)}p_iS_i\left( t\right) jy_ma_1^{(m)}p_mS_m\left( t\right) \right] . \end{aligned}$$

After simple manipulations, and taking into account that

$$\begin{aligned} \frac{\kappa }{2}=f^\prime \left( 0\right) -\lambda f\left( \infty \right) , \end{aligned}$$

we get the following differential equation for \(\varPhi _2\left( \mathbf y ,t \right) \):

$$\begin{aligned} \frac{\partial \varPhi _2\left( \mathbf y ,t\right) }{\partial t}=\varPhi _2\left( \mathbf y ,t\right) \left[ \lambda \sum \limits _{i=1}^{n}\frac{(jy_i)^2}{2}a_2^{(i)}p_iS_i\left( t\right) +\frac{\kappa }{2}\sum \limits _{i=1}^{n}\sum \limits _{m=1}^{n}jy_ia_1^{(i)}p_ijy_ma_1^{(m)}p_mS_i\left( t\right) S_m\left( t\right) \right] , \end{aligned}$$

whose solution (with the given initial condition) can be expressed as

$$\begin{aligned} \varPhi _2\left( \mathbf y ,t\right) =\exp \left\{ \lambda \sum \limits _{i=1}^{n}\frac{(jy_i)^2}{2}a_2^{(i)}p_i\int \limits _{t_0}^{t}S_i\left( \tau \right) d\tau +\frac{\kappa }{2}\sum \limits _{i=1}^{n}\sum \limits _{m=1}^{n}jy_ia_1^{(i)}p_ijy_ma_1^{(m)}p_m\int \limits _{t_0}^{t}S_i\left( \tau \right) S_m\left( \tau \right) d\tau \right\} . \end{aligned}$$

Substituting this expression into (18) and performing the inverse substitutions of (15) and (12), we get the expression (11) for the asymptotic characteristic function of the process \(\left\{ z\left( t\right) , \mathbf W \left( t\right) \right\} \).

The proof is complete.
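As a numerical illustration (ours), for interarrival times uniformly distributed on \(\left[ 0.5,1.5\right] \), as in the scenario of Sect. 5,

$$\begin{aligned} a=1,\quad \sigma ^2=\frac{\left( 1.5-0.5\right) ^2}{12}=\frac{1}{12},\quad \kappa =\lambda ^3\left( \sigma ^2-a^2\right) =-\frac{11}{12}, \end{aligned}$$

so that \(\kappa \) is negative whenever the interarrival times are less variable than in the Poisson case (\(\sigma ^2<a^2\)).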

Corollary

For \(z\rightarrow \infty \), \(t=T\) and \(t_0\rightarrow -\infty \), we get the characteristic function of the process \(\left\{ \mathbf V \left( t\right) \right\} \) in the steady-state regime

$$ h\left( \mathbf v \right) =\exp \left\{ N\lambda \sum \limits _{i=1}^{n}jv_ia_1^{(i)}b_i\right. $$
$$\begin{aligned} \left. + N\lambda \sum \limits _{i=1}^{n}\frac{(jv_i)^2}{2}a_2^{(i)}b_i+\frac{N\kappa }{2}\sum \limits _{i=1}^{n}\sum \limits _{m=1}^{n}jv_ia_1^{(i)}jv_ma_1^{(m)}K_{im}\right\} , \end{aligned}$$
(20)

where

$$ b_i=p_i\int \limits _{0}^{\infty }\left( 1-B_i\left( \tau \right) \right) d\tau , $$
$$ K_{im}=p_ip_m\int \limits _{0}^{\infty }\left( 1-B_i\left( \tau \right) \right) \left( 1-B_m\left( \tau \right) \right) d\tau . $$
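The expressions for \(b_i\) and \(K_{im}\) follow from (11) by substituting \(S_i\left( \tau \right) =1-B_i\left( T-\tau \right) \) and letting \(t_0\rightarrow -\infty \); for instance, with the change of variable \(u=T-\tau \),

$$\begin{aligned} p_i\int \limits _{t_0}^{T}S_i\left( \tau \right) d\tau =p_i\int \limits _{0}^{T-t_0}\left( 1-B_i\left( u\right) \right) du\rightarrow p_i\int \limits _{0}^{\infty }\left( 1-B_i\left( u\right) \right) du=b_i, \end{aligned}$$

so that \(b_i\) equals \(p_i\) times the mean service time of type-i customers (whenever the latter is finite).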

The structure of function (20) implies that the n-dimensional process \(\left\{ \mathbf V \left( t\right) \right\} \) is asymptotically Gaussian with mean

$$\begin{aligned} \mathbf a =N\lambda \left[ \begin{array}{cccc} a_1^{(1)}b_1&a_1^{(2)}b_2&\ldots&a_1^{(n)}b_n \end{array} \right] \end{aligned}$$

and covariance matrix

$$\begin{aligned} \mathbf K =N\left[ \lambda \mathbf K ^{(1)}+\kappa \mathbf K ^{(2)}\right] , \end{aligned}$$

where

$$ \mathbf K ^{(1)}=\left[ \begin{array}{cccc} a_2^{(1)}b_1 &{} 0 &{} \ldots &{} 0\\ 0 &{} a_2^{(2)}b_2 &{} \ldots &{} 0\\ \ldots &{} \ldots &{} \ldots &{} \ldots \\ 0 &{} 0 &{} \ldots &{} a_2^{(n)}b_n \end{array} \right] , $$
$$ \mathbf K ^{(2)}=\left[ \begin{array}{cccc} a_1^{(1)}a_1^{(1)}K_{11} &{} a_1^{(1)}a_1^{(2)}K_{12} &{} \ldots &{} a_1^{(1)}a_1^{(n)}K_{1n}\\ a_1^{(2)}a_1^{(1)}K_{21} &{} a_1^{(2)}a_1^{(2)}K_{22} &{} \ldots &{} a_1^{(2)}a_1^{(n)}K_{2n}\\ \ldots &{} \ldots &{} \ldots &{} \ldots \\ a_1^{(n)}a_1^{(1)}K_{n1} &{} a_1^{(n)}a_1^{(2)}K_{n2} &{} \ldots &{} a_1^{(n)}a_1^{(n)}K_{nn} \end{array} \right] . $$
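As an illustration, the following minimal Python sketch (ours; the variable names are arbitrary) evaluates the mean vector and the covariance matrix of the Gaussian approximation using the numerical values of the example discussed in Sect. 5.

import numpy as np
from scipy import stats
from scipy.integrate import quad

N = 50                                        # scale parameter of the arrivals
p = np.array([0.5, 0.3, 0.2])                 # class probabilities p_i
mu = np.array([2.0, 1.0, 0.4])                # exponential resource parameters
a1, a2 = 1.0 / mu, 2.0 / mu**2                # first and second moments of G_i
alpha = beta = np.array([0.5, 1.5, 2.5])      # gamma service-time parameters
lam = 1.0                                     # arrival rate for Uniform[0.5, 1.5]
kappa = lam**3 * (1.0 / 12.0 - 1.0**2)        # kappa = lambda^3 (sigma^2 - a^2)

surv = lambda i, t: stats.gamma.sf(t, alpha[i], scale=1.0 / beta[i])  # 1 - B_i(t)
b = np.array([p[i] * quad(lambda t: surv(i, t), 0, np.inf)[0] for i in range(3)])
K = np.array([[p[i] * p[m] * quad(lambda t: surv(i, t) * surv(m, t), 0, np.inf)[0]
               for m in range(3)] for i in range(3)])

mean = N * lam * a1 * b                                           # mean vector
cov = N * (lam * np.diag(a2 * b) + kappa * np.outer(a1, a1) * K)  # covariance matrix
print("mean vector:", np.round(mean, 3))
print("covariance matrix:\n", np.round(cov, 3))

The same routine can be run for each value of the scale parameter N considered in the simulations.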

5 Simulation Results

The previous results, summarized by (20), are obtained under the asymptotic condition of infinitely growing arrival rate (\(N\rightarrow \infty \)) and, hence, they can provide suitable approximations only for sufficiently large values of N. To investigate their practical applicability, we have considered several simulation scenarios, varying all the system parameters (i.e., the distributions of the interarrival and service times and of the customer capacity, as well as the probabilities \(p_i\)). Since all the different simulation sets led to similar results, for the sake of brevity we present just one of them.

In more detail, we assume that the input renewal process is characterized by a uniform distribution of the interarrival time in the interval \(\left[ 0.5,1.5\right] \), corresponding to a fundamental arrival rate \(\lambda =1\) customers per time unit. Moreover, each arriving customer may belong to one of \(n=3\) types, according to the following probabilities: \(p_1=0.5\), \(p_2=0.3\) and \(p_3=0.2\). We assume that the resource amounts occupied by customers of each type are exponentially distributed, with parameters 2, 1 and 0.4, respectively. Finally, the service times have gamma distribution with shape and inverse scale parameters equal to \(\alpha _1 = \beta _1 = 0.5\), \(\alpha _2 = \beta _2 = 1.5\) and \(\alpha _3 = \beta _3 = 2.5\), respectively.
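A minimal sketch of such a simulation is reported below (our illustrative code, not the simulator actually used for the experiments, which generates single long runs of \(10^{10}\) arrivals): interarrival times are drawn from the scaled distribution \(A\left( Nz\right) \), i.e. uniform on \(\left[ 0.5/N,1.5/N\right] \), and the per-class totals \(V_i\) are sampled at a large time horizon; the horizon, the number of replications and all names are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
N = 50                                       # scale parameter of the arrivals
p = np.array([0.5, 0.3, 0.2])                # class probabilities p_i
mu = np.array([2.0, 1.0, 0.4])               # exponential resource parameters
alpha = beta = np.array([0.5, 1.5, 2.5])     # gamma service-time parameters

def sample_V(T_end=100.0):
    # one realization of the per-class occupied resources at time T_end
    t, V = 0.0, np.zeros(3)
    while True:
        t += rng.uniform(0.5, 1.5) / N       # interarrival time ~ A(Nz)
        if t > T_end:
            return V
        i = rng.choice(3, p=p)               # customer type
        s = rng.gamma(alpha[i], 1.0 / beta[i])
        if t + s > T_end:                    # customer still in service at T_end
            V[i] += rng.exponential(1.0 / mu[i])

samples = np.array([sample_V() for _ in range(2000)])
print("empirical means:", np.round(samples.mean(axis=0), 3))
print("empirical variances:", np.round(samples.var(axis=0), 3))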

Our aim is to show how the accuracy of the Gaussian approximation improves as N grows, thus providing some indication of reasonable lower bounds on N for the applicability of (20). Hence, we carried out different sets of simulation experiments (in each of them \(10^{10}\) arrivals were generated) for increasing values of N and compared the asymptotic distributions with the empirical ones in terms of the Kolmogorov distance [5]

$$ \varDelta =\sup \limits _x\left| F\left( x\right) -G\left( x\right) \right| $$

where F(x) is the cumulative distribution function built on the basis of the simulation results, and G(x) is the Gaussian approximation given by (20); the corresponding parameters for the three classes are summarized in Table 1. For the sake of brevity, we show the results only for the marginal distributions of the total resource amount for each class of customers.
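In our setting the distance can be evaluated, for instance, as in the following sketch (ours), which reuses the arrays samples, mean and cov produced by the previous two snippets (for the same value of N) and compares the empirical distribution of each \(V_i\) with the corresponding Gaussian marginal.

from scipy import stats

def kolmogorov_distance(x, m, var):
    # sup_x |F_emp(x) - Phi((x - m) / sqrt(var))| via the one-sample KS statistic
    return stats.ks_1samp(x, stats.norm(loc=m, scale=var**0.5).cdf).statistic

for i in range(3):
    d = kolmogorov_distance(samples[:, i], mean[i], cov[i, i])
    print(f"class {i + 1}: Kolmogorov distance = {d:.3f}")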

Table 1. Parameters of Gaussian approximations

Tables 2, 3 and 4 report the values of the Kolmogorov distance for the three types of customers, highlighting that the goodness of the approximation depends not only on N, but also on the statistical features of the considered customer class.

Table 2. Kolmogorov distance for the first type of customers
Table 3. Kolmogorov distance for the second type of customers
Table 4. Kolmogorov distance for the third type of customers
Fig. 2. Distributions of the total resource amount for the first type of customers

Fig. 3. Distributions of the total resource amount for the second type of customers

Fig. 4. Distributions of the total resource amount for the third type of customers

As expected, the asymptotic results become more and more accurate when the scale parameter N increases. This conclusion is also confirmed by Figs. 2, 3 and 4, which compare the asymptotic approximations with the empirical histograms for the total resource amount of each type of customers for two different values of N.

6 Conclusions

In this work we considered a GI/GI/\(\infty \) queue with n types of customers under the assumption that arrivals follow a renewal process and each customer occupies a random resource amount, independent of its service time. At first we determined the corresponding Kolmogorov differential equations, which in the general case cannot be solved analytically. Hence, we derived first- and second-order asymptotic approximations in the case of infinitely growing arrival rate, and we pointed out that the n-dimensional probability distribution of the total resource amounts is asymptotically Gaussian. Finally, by means of discrete-event simulation we verified the accuracy of the approximation, and highlighted how the applicability region of the asymptotic results (i.e., lower bounds on the scale parameter N for the different classes of customers) can be determined by considering the Kolmogorov distance as accuracy measure.