
The goals of this chapter are to:

  • Explore the question of stability of a sequence of stochastic models in the context of general queueing systems by means of the Kantorovich functional \(\mathcal{A}_{c}\)

  • Consider the case of queueing systems with independent interarrival and service times

  • Consider the special case of approximating a stochastic queueing system by means of a deterministic model.


1 Introduction

The subject of this chapter is the fundamental problem of the stability of a sequence of stochastic models that can be interpreted as approximations or perturbations of a given initial model. We consider queueing systems and study their stability properties with respect to the Kantorovich functional \(\mathcal{A}_{c}\). We begin with a general one-server queueing system with no assumptions on the interarrival times and then proceed to the special cases of independent interarrival times and independent service times. Finally, we consider deterministic queueing systems as approximations to a stochastic queueing model.


2 Stability of G | G | 1 | ∞-Systems

As a model example of the applicability of Kantorovich’s theorem in the stability problem for queueing systems, we consider the stability of the system \(G\vert G\vert 1\vert \infty \). The notation \(G\vert G\vert 1\vert \infty \) means that we consider a single-server queue with “input flow” \(\{e_{n}\}_{n=0}^{\infty }\) and “service flow” \(\{s_{n}\}_{n=0}^{\infty }\) consisting of, in general, dependent and nonidentically distributed components. Here, \(\{e_{n}\}_{n=0}^{\infty }\) and \(\{s_{n}\}_{n=0}^{\infty }\) are treated as the sequences of the lengths of the time intervals between the nth and (n + 1)th arrivals and of the service times of the nth arrival, respectively.

Define the recursive sequence

$$w_{0} = 0,\qquad w_{n+1} =\max (w_{n} + s_{n} - e_{n},0),\qquad n = 0,1,2,\ldots.$$
(13.2.1)

The quantity \(w_{n}\) may be viewed as the waiting time for the nth arrival to begin to be served. We introduce the following notation: \(\mathbf{e}_{j,k} = (e_{j},\ldots,e_{k})\), \(\mathbf{s}_{j,k} = (s_{j},\ldots,s_{k})\), k > j, \(\mathbf{e} = (e_{0},e_{1},\ldots \,)\), and \(\mathbf{s} = (s_{0},s_{1},\ldots \,)\). Along with the model defined by relations (13.2.1), we consider a sequence of analogous models indexed by the letter r (\(r \geq 1\)). That is, all quantities pertaining to the rth model will be designated in the same way as in model (13.2.1) but with superscript r: \(e_{n}^{(r)}\), \(s_{n}^{(r)}\), \(w_{n}^{(r)}\), and so on. It is convenient to regard the value \(r = \infty \) (which can be omitted) as corresponding to the original model. All of the random variables are assumed to be defined on the same probability space. For brevity, functionals \(\Phi \) depending just on the distributions of the RVs X and Y will be denoted by \(\Phi (X,Y )\).
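
The recursion (13.2.1) is straightforward to simulate. The following Python sketch (ours, not part of the original text; the exponential input laws and the perturbation size are illustrative assumptions) generates the waiting times \(w_{n}\) of a model and \(w_{n}^{(r)}\) of a perturbed version on a common probability space:

```python
import numpy as np

def waiting_times(s, e):
    """Lindley recursion (13.2.1): w_0 = 0, w_{n+1} = max(w_n + s_n - e_n, 0)."""
    w = np.zeros(len(s) + 1)
    for n in range(len(s)):
        w[n + 1] = max(w[n] + s[n] - e[n], 0.0)
    return w

rng = np.random.default_rng(0)
N = 10_000
e = rng.exponential(1.0, N)              # interarrival times e_n (assumed law)
s = rng.exponential(0.8, N)              # service times s_n (assumed law)
s_r = s + rng.uniform(-0.05, 0.05, N)    # perturbed service flow s_n^{(r)}

w, w_r = waiting_times(s, e), waiting_times(s_r, e)
# Pathwise deviation under this particular coupling; it upper-bounds the
# minimal-metric deviations studied below.
print(np.max(np.abs(w - w_r)))
```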

For the system \(G\vert G\vert 1\vert \infty \) in question, define for \(k \geq 1\) nonnegative functions \(\phi _{k}\) on \(({\mathbb{R}}^{k},\|x\|)\), \(\|(x_{1},\ldots,x_{k})\| = \vert x_{1}\vert + \cdots + \vert x_{k}\vert \), as follows:

$$\begin{array}{rcl} \phi _{k}(\xi _{1},\ldots,\xi _{k},\eta _{1},\ldots,\eta _{k})& :=& \max [0,\eta _{k} - \xi _{k},(\eta _{k} - \xi _{k}) \\ & & +\,(\eta _{k-1} - \xi _{k-1}),\ldots,(\eta _{k} - \xi _{k}) + \cdots + (\eta _{1} - \xi _{1})].\end{array}$$

It is not hard to see that \(\phi _{k}(\mathbf{e}_{n-k,n-1},\mathbf{s}_{n-k,n-1})\) is the waiting time for the nth arrival under the condition that \(w_{n-k} = 0\).

Let \(c \in \mathfrak{C} =\{ c(x,y) = H(d(x,y)),H \in \mathcal{H}\}\) [see (12.2.2)]. The system \(G\vert G\vert 1\vert \infty \) is uniformly stable with respect to the functional \(\mathcal{A}_{c}\) on finite time intervals if, for every positive T, the following limit relation holds as \(r \rightarrow \infty \):

$$\begin{array}{rcl} {\delta }^{(r)}(T;\mathcal{A}_{c})& :=& \mathop{\sup}\limits_{n\geq 0}\max _{1\leq k\leq T}\mathcal{A}_{c}(\phi _{k}(\mathbf{e}_{n,n+k-1},\mathbf{s}_{n,n+k-1}), \\ & & \phi _{k}(\mathbf{e}_{n,n+k-1}^{(r)},\mathbf{s}_{ n,n+k-1}^{(r)})) \rightarrow 0,\end{array}$$
(13.2.2)

where \(\mathcal{A}_{c}\) is the Kantorovich functional on \(\mathfrak{X}({\mathbb{R}}^{k})\)

$$\mathcal{A}_{c}(X,Y ) =\inf \{ Ec(\widetilde{X},\widetilde{Y }) :\widetilde{ X}\stackrel{\mathrm{d}}{=}X,\widetilde{Y }\stackrel{\mathrm{d}}{=}Y \}$$
(13.2.3)

[see (12.2.1)].

Similarly, we define \({\delta }^{(r)}(T;\mathcal{l}_{p})\), where \(\mathcal{l}_{p} =\widehat{ \mathcal{L}}_{p}\) (\(0 < p < \infty \)) is the minimal metric w.r.t. the \(\mathcal{L}_{p}\)-distance. Relation (13.2.2) means that the largest deviation between the variables \(w_{n+k}\) and \(w_{n+k}^{(r)}\), \(k = 1,\ldots,T\), converges to zero as \(r \rightarrow \infty \) if at time n both compared systems are free of “customers,” and for any positive T this convergence is uniform in n.

Theorem 13.2.1.

If for each \(r = 1,2,\ldots,\infty \) the sequences \({\mathbf{e}}^{(r)}\) and \({\mathbf{s}}^{(r)}\) are independent, then

$$\begin{array}{rcl} {\delta }^{(r)}(T;\mathcal{A}_{ c})& \leq & K_{H}\mathop{\sup}\limits_{n\geq 0}\mathcal{A}_{c}(\mathbf{e}_{n,n+T-1},\mathbf{e}_{n,n+T-1}^{(r)}) \\ & & +K_{H}\mathop{\sup}\limits_{n\geq 0}\mathcal{A}_{c}(\mathbf{s}_{n,n+T-1},\mathbf{s}_{n,n+T-1}^{(r)}),\end{array}$$
(13.2.4)

where \(K_{H}\) is given by (12.2.2). In particular,

$$\begin{array}{rcl} {\delta }^{(r)}(T;\mathcal{l}_{ p})& \leq & \mathop{\sup}\limits_{n\geq 0}\mathcal{l}_{p}(\mathbf{e}_{n,n+T-1},\mathbf{e}_{n,n+T-1}^{(r)}) \\ & & +\mathop{\sup}\limits_{n\geq 0}\mathcal{l}_{p}(\mathbf{s}_{n,n+T-1},\mathbf{s}_{n,n+T-1}^{(r)}).\end{array}$$
(13.2.5)

Proof.

We will prove (13.2.5) only; the proof of (13.2.4) is carried out in a similar way. For any \(1 \leq k \leq T\) we have the triangle inequality

$$\begin{array}{rcl} & & \mathcal{L}_{p}(\phi _{k}(\mathbf{e}_{n,n+k-1},\mathbf{s}_{n,n+k-1}),\phi _{k}(\mathbf{e}_{n,n+k-1}^{(r)},\mathbf{s}_{ n,n+k-1}^{(r)})) \\ & & \quad \leq \mathcal{L}_{p}(\phi _{k}(\mathbf{e}_{n,n+k-1},\mathbf{s}_{n,n+k-1}),\phi _{k}(\mathbf{e}_{n,n+k-1}^{(r)},\mathbf{s}_{ n,n+k-1})) \\ & & \qquad + \mathcal{L}_{p}(\phi _{k}(\mathbf{e}_{n,n+k-1}^{(r)},\mathbf{s}_{n,n+k-1}),\phi _{k}(\mathbf{e}_{n,n+k-1}^{(r)},\mathbf{s}_{ n,n+k-1}^{(r)})) \\ & & \quad \leq \mathcal{L}_{p}(\mathbf{e}_{n,n+T-1},\mathbf{e}_{n,n+T-1}^{(r)}) + \mathcal{L}_{p}(\mathbf{s}_{n,n+T-1},\mathbf{s}_{n,n+T-1}^{(r)}),\end{array}$$

where the second inequality uses the fact that \(\phi _{k}\) is Lipschitz with constant 1 in each group of arguments and that \(\|\cdot \|\)-distances between k-dimensional subvectors are dominated by those between the corresponding T-dimensional vectors.

Changing over to the minimal metric \(\mathcal{l}_{p}\) and using the assumption that \({\mathbf{e}}^{(r)}\) and \({\mathbf{s}}^{(r)}\) are independent (\(r = 1,\ldots,\infty \)), we have that

$$\begin{array}{rcl} \inf \{\mathcal{L}_{p}(\phi _{k}(\mathbf{e}_{n,n+k-1},\mathbf{s}_{n,n+k-1}),\phi _{k}(\mathbf{e}_{n,n+k-1}^{(r)},\mathbf{s}_{ n,n+k-1}^{(r)}))\}& & \\ \qquad \leq \mathcal{l}_{p}(\mathbf{e}_{n,n+T-1},\mathbf{e}_{n,n+T-1}^{(r)}) + \mathcal{l}_{ p}(\mathbf{s}_{n,n+T-1},\mathbf{s}_{n,n+T-1}^{(r)}).& &\end{array}$$
(13.2.6)

The infimum in the last inequality is taken over all joint distributions

$$\begin{array}{rcl} F(x,y,\xi,\eta )& =& \mathrm{Pr}(\mathbf{e}_{n,n+k-1} < x,\mathbf{e}_{n,n+k-1}^{(r)} < y)\mathrm{Pr}(\mathbf{s}_{ n,n+k-1}<\xi,\mathbf{s}_{n,n+k-1}^{(r)}<\eta ), \\ & & \quad x,y,\xi,\eta \in {\mathbb{R}}^{k}, \\ \end{array}$$

with fixed marginal distributions

$$\begin{array}{rcl} F_{1}(x,\xi )& =& \mathrm{Pr}(\mathbf{e}_{n,n+k-1} < x,\mathbf{s}_{n,n+k-1} < \xi ), \\ F_{2}(y,\eta )& =& \mathrm{Pr}(\mathbf{e}_{n,n+k-1}^{(r)} < y,\mathbf{s}_{ n,n+k-1}^{(r)} < \eta ), \\ \end{array}$$

and thus the left-hand side of (13.2.6) is not smaller than

$$\mathcal{l}_{p}(\phi _{k}(\mathbf{e}_{n,n+k-1},\mathbf{s}_{n,n+k-1}),\phi _{k}(\mathbf{e}_{n,n+k-1}^{(r)},\mathbf{s}_{ n,n+k-1}^{(r)})),$$

which, combined with (13.2.6), proves (13.2.5).

From (13.2.4) and (13.2.5) it is possible to derive an estimate of the stability of the system \(G\vert G\vert 1\vert \infty \) in the sense of (13.2.2). It can be expressed in terms of the deviations of the vectors \(\mathbf{e}_{n,n+T-1}^{(r)}\) and \(\mathbf{s}_{n,n+T-1}^{(r)}\) from \(\mathbf{e}_{n,n+T-1}\) and \(\mathbf{s}_{n,n+T-1}\), respectively. Such deviations are easy to estimate if we impose additional restrictions on \({\mathbf{e}}^{(r)}\) and \({\mathbf{s}}^{(r)}\), \(r = 1,2,\ldots \). For example, when the terms of the sequences are independent, the following estimates hold:

$$\mathcal{A}_{c}(\mathbf{e}_{n,n+T-1},\mathbf{e}_{n,n+T-1}^{(r)}) \leq K_{ H}^{q} \sum\limits_{j=n}^{n+T-1}\mathcal{A}_{ c}(e_{j},e_{j}^{(r)}),\quad q = [\log _{ 2}T] + 1,$$
(13.2.7)
$$\mathcal{l}_{p}(\mathbf{e}_{n,n+T-1},\mathbf{e}_{n,n+T-1}^{(r)}) \leq \sum\limits_{j=n}^{n+T-1}\mathcal{l}_{ p}(e_{j},e_{j}^{(r)})\quad \mbox{ for}\ 0 < p \leq \infty.$$
(13.2.8)

Let us check (13.2.8). One gets (13.2.7) by a similar argument. By the minimality of \(\mathcal{l}_{p}\), for any vectors \(X = (X_{1},\ldots,X_{T})\), \(Y = (Y _{1},\ldots,Y _{T}) \in \mathfrak{X}({\mathbb{R}}^{T})\) with independent components, we have that the Minkowski inequality

$$\mathcal{L}_{p}(X,Y ) = {[E\|X - Y \|^{p}]}^{1/p'} \leq \sum\limits_{i=1}^{T}\mathcal{L}_{ p}(X_{i},Y _{i}),\quad p' =\max (1,p),$$
(13.2.9)

implies

$$\mathcal{l}_{p}(X,Y ) \leq \sum\limits_{i=1}^{T}\mathcal{l}_{ p}(X_{i},Y _{i}),$$
(13.2.10)

i.e., (13.2.8) holds.
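
A quick numerical illustration of (13.2.8): for vectors with independent components the right-hand side is a sum of one-dimensional minimal metrics, each computable from empirical quantile functions [cf. the representation (13.3.7) in the next section]. The sketch below is ours, with made-up exponential laws for \(e_{j}\) and \(e_{j}^{(r)}\):

```python
import numpy as np

def ell_p_1d(x, y, p=1.0):
    """One-dimensional minimal metric l_p computed from empirical quantile
    functions [cf. (13.3.7)]; x and y are samples of the two laws."""
    t = (np.arange(1, 1001) - 0.5) / 1000
    return np.mean(np.abs(np.quantile(x, t) - np.quantile(y, t)) ** p) ** (1 / p)

rng = np.random.default_rng(1)
T = 5   # window length; (13.2.8) sums over j = n, ..., n + T - 1
e = [rng.exponential(1.0, 20_000) for _ in range(T)]     # components e_j
e_r = [rng.exponential(1.05, 20_000) for _ in range(T)]  # perturbed e_j^{(r)}
print(sum(ell_p_1d(e[j], e_r[j]) for j in range(T)))     # RHS of (13.2.8), p = 1
```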

Estimates (13.2.7) and (13.2.8) can be simplified even further when the terms of these sequences are identically distributed. On the basis of (13.2.4) and (13.2.5), it is possible to construct stability estimates for the system that are uniform over the entire time axis.

3 Stability of GI | GI | 1 | ∞-Systems

The system \(GI\vert GI\vert 1\vert \infty \) is a special case of \(G\vert G\vert 1\vert \infty \). For this model the RVs \(\zeta _{n} = s_{n} - e_{n}\) are i.i.d., and we assume that \(E\zeta _{1} < 0\). Then the one-dimensional stationary distribution of the waiting time coincides with the distribution of the following maximum:

$$w =\mathop{\sup}\limits_{k\geq 0}Y _{k},\qquad Y _{k} = \sum\limits_{j=-k}^{-1}\zeta _{ j},\qquad Y _{0} = 0,\qquad \zeta _{-j}\stackrel{\mathrm{d}}{=}\zeta _{j}.$$
(13.3.1)

The perturbed model [i.e., \(e_{k}^{(r)}\), \(s_{k}^{(r)}\), \(Y _{k}^{(r)}\)] is assumed to be also of the type \(GI\vert GI\vert 1\vert \infty \). Borovkov [1984, p. 239] noticed that one of the aims of the stability theorems is to estimate the closeness of \(E{f}^{(r)}({w}^{(r)})\) and \(Ef(w)\) for various kinds of functions f, \({f}^{(r)}\). Borovkov [1984, pp. 239–240] proposed considering the case

$${f}^{(r)}(x) - f(y) \leq A\vert x - y\vert,\quad \forall x,y \in \mathbb{R}.$$
(13.3.2)

Borovkov [1984, p. 270] proved that

$$D =\sup \{ \vert Ef({w}^{(r)}) - Ef(w)\vert : \vert f(x) - f(y)\vert \leq A\vert x - y\vert,x,y \in \mathbb{R}\} < c\epsilon,$$
(13.3.3)

assuming that \(\vert \zeta _{1}^{(r)} - \zeta _{1}\vert \leq \epsilon \) a.s. Here and in what follows, c stands for an absolute constant that may be different in different places.

By (3.3.12), (3.4.18), (5.4.16), and Theorem 6.2.1, we have for the minimal metric \(\mathcal{l}_{1} =\widehat{ \mathcal{L}}_{1}\)

$$A\mathcal{l}_{1}({w}^{(r)},w) =\sup \{ E{f}^{(r)}({w}^{(r)}) - Ef(w) : ({f}^{(r)},f)\ \mbox{ satisfy (13.3.2)}\} = D,$$
(13.3.4)

provided that \(E\vert {w}^{(r)}\vert + E\vert w\vert < \infty \). Thus, the estimate in (13.3.3) essentially says that

$$\mathcal{l}_{1}({w}^{(r)},w) \leq c\mathcal{l}_{ \infty }(\zeta _{1}^{(r)},\zeta _{ 1}),$$
(13.3.5)

where for any \(X,Y \in \mathfrak{X}(\mathbb{R})\)

$$\mathcal{l}_{\infty }(X,Y ) =\widehat{ \mathcal{L}}_{\infty }(X,Y ) =\mathop{\sup}\limits_{0\leq t\leq 1}\vert F_{X}^{-1}(t) - F_{ Y }^{-1}(t)\vert $$
(13.3.6)

[see (2.5.4), (3.3.14), (3.4.18), (7.5.15), and Corollary 7.4.2]. Actually, using (7.4.18) with \(H(t) = {t}^{p}\) we have

$$\mathcal{l}_{p}(X,Y ) =\widehat{ \mathcal{L}}_{p}(X,Y ) ={ \left (\int\nolimits_{0}^{1}\vert F_{ X}^{-1}(t) - F_{ Y }^{-1}(t){\vert }^{p}\mathrm{d}t\right )}^{1/p},$$
(13.3.7)

where \(F_{X}^{-1}\) is the generalized inverse of the DF \(F_{X}\):

$$F_{X}^{-1}(t) :=\sup \{ x : F_{ X}(x) \leq t\}.$$
(13.3.8)

Letting \(p \rightarrow \infty \) we obtain (13.3.6).
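
Formulas (13.3.6) and (13.3.7) make \(\mathcal{l}_{p}\) and \(\mathcal{l}_{\infty }\) directly computable on the line. A minimal sketch (ours; the normal law and the size of the shift are arbitrary test inputs):

```python
import numpy as np

def ell_p(x, y, p):
    """l_p via (13.3.7); p = np.inf gives the sup-distance (13.3.6) between
    the generalized inverse DFs F_X^{-1} and F_Y^{-1}."""
    t = (np.arange(1, 10_001) - 0.5) / 10_000
    q = np.abs(np.quantile(x, t) - np.quantile(y, t))
    return float(np.max(q)) if np.isinf(p) else float(np.mean(q ** p) ** (1 / p))

rng = np.random.default_rng(2)
zeta = rng.normal(-0.5, 1.0, 200_000)   # zeta_1 (assumed law)
zeta_r = zeta + 0.1                     # a pure location shift of size 0.1
# For a shift the quantile curves differ by the constant 0.1, so every l_p,
# including l_infty, equals 0.1 up to sampling error:
print(ell_p(zeta, zeta_r, 1), ell_p(zeta, zeta_r, 2), ell_p(zeta, zeta_r, np.inf))
```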

The estimate in (13.3.5) requires strong assumptions on the disturbances in order to conclude stability. We will refine the bound (13.3.5) by considering bounds on

$$\begin{array}{rcl} A\mathcal{l}_{p}^{p}({w}^{(r)},w)& =& \sup \{E{f}^{(r)}({w}^{(r)}) - Ef(w) : {f}^{(r)}(x) - f(y) \leq A\vert x - y{\vert }^{p}, \\ & & \quad \forall x,y \in {\mathbb{R}}^{1}\},\qquad 0 < p < \infty, \end{array}$$
(13.3.9)

assuming that \(E\vert {w}^{(r)}{\vert }^{p} + E\vert w{\vert }^{p} < \infty \). The next lemma considers the closeness of the prestationary distributions of \(w_{n} =\max (0,w_{n-1} + \zeta _{n-1})\), \(w_{0} = 0\), and of \(w_{n}^{(r)}\) [see (13.2.1)].

Lemma 13.3.1.

For any \(0 < p < \infty \) and \(E\zeta _{1} = E\zeta _{1}^{(r)}\) , the following inequality holds:

$$\mathcal{l}_{p}(w_{n}^{(r)},w_{ n}) \leq A_{p},$$
(13.3.10)
$$A_{p} :=\min \left (\frac{n(n + 1)} {2} \epsilon _{p},c\min _{1/p-1<\delta <2/p-1}{n}^{1/(1+\delta )}\epsilon _{ p(1+\delta )}^{p}\right ),\ \mbox{ for }\ p \in (0,1],$$

where

$$\begin{array}{rcl} A_{p}& :=& c{n}^{1/p}\epsilon _{ p}\quad \mbox{ for}\ 1 < p \leq 2, \\ A_{p}& :=& c{n}^{1/2}\epsilon _{ p}\quad \mbox{ for}\ p > 2, \\ \end{array}$$

and

$$\epsilon _{p} := \mathcal{l}_{p}(\zeta _{1},\zeta _{1}^{(r)}).$$

Remark 13.3.1.

The condition \(E\zeta _{1} = E\zeta _{1}^{(r)}\) means that we know the mean of \(\zeta _{1}^{(r)}\) for the perturbed “real” model and we choose an “ideal” model with \(\zeta _{1}\) having the same mean.

Proof.

The distributions of the waiting times \(w_{n}\) and \(w_{n}^{(r)}\) can be determined as follows:

$$\begin{array}{rcl} w_{n}& =& \max (0,\zeta _{n-1},\zeta _{n-1} + \zeta _{n-2},\ldots,\zeta _{n-1} + \cdots + \zeta _{1})\stackrel{\mathrm{d}}{=}\max _{0\leq j\leq n}Y _{j}, \\ w_{n}^{(r)}& =& \max (0,\zeta _{ n-1}^{(r)},\zeta _{ n-1}^{(r)} + \zeta _{ n-2}^{(r)},\ldots,\zeta _{ n-1}^{(r)} + \cdots + \zeta _{ 1}^{(r)})\stackrel{\mathrm{d}}{=}\max _{ 0\leq j\leq n}Y _{j}^{(r)}\end{array}$$

Further [see (19.4.41), Theorem 19.4.6], we will prove the following estimates of the closeness between \(w_{n}^{(r)}\) and \(w_{n}\):

$$\mathcal{l}_{p}(w_{n}^{(r)},w_{ n}) \leq \frac{n(n + 1)} {2} \mathcal{l}_{p}(\zeta _{1},\zeta _{1}^{(r)})\quad \mbox{ if}\ 0 < p \leq 1$$
(13.3.11)

and

$$\mathcal{l}_{p}(w_{n}^{(r)},w_{ n}) \leq \frac{p} {p - 1}B_{p}{n}^{1/p}\epsilon _{ p}\quad \mbox{ if}\ 1 < p \leq 2,$$
(13.3.12)

where \(B_{1} = 1\) and \(B_{p} = 18{p}^{3/2}/{(p - 1)}^{1/2}\) for \(1 < p \leq 2\). From (13.3.12) and \(\mathcal{l}_{p} \leq \mathcal{l}_{p(1+\delta )}^{p}\), since for any \(0 < p \leq 1\) and \((1/p) - 1 < \delta \leq 2/p - 1\) we have \(1 < p(1 + \delta ) \leq 2\), we obtain

$$\mathcal{l}_{p}(w_{n}^{(r)},w_{ n}) \leq c{n}^{1/(1+\delta )}\epsilon _{ p(1+\delta )}^{p}.$$
(13.3.13)

For \(p \geq 2\) we have

$$\begin{array}{rcl} \mathcal{L}_{p}^{p}(w_{ n},w_{n}^{(r)})& =& E{\left \vert \vee _{k=1}^{n}Y _{ k} -\vee _{k=1}^{n}Y _{ k}^{(r)}\right \vert }^{p} \\ & \leq & \frac{p} {p - 1}E\vert Y _{n} - Y _{n}^{(r)}{\vert }^{p} \leq c{n}^{p/2}\mathcal{L}_{ p}{(\zeta _{1},\zeta _{1}^{(r)})}^{p}.\end{array}$$
(13.3.14)

This last inequality is a consequence of the Marcinkiewicz–Zygmund inequality. Passing to the minimal metrics \(\mathcal{l}_{p} =\widehat{ \mathcal{L}}_{p}\) in (13.3.14) we get (13.3.10).

Remark 13.3.2.

  1. (a)

    The estimates in (13.3.10) are of the right order, as can be seen by examples. If, for example, \(p \geq 2\), consider \(\zeta _{i}\stackrel{\mathrm{d}}{=}N(0,1)\) and \(\zeta _{i}^{(r)} = 0\); then \(\mathcal{l}_{p}(w_{n}^{(r)},w_{n}) = c{n}^{1/2}\) (a Monte Carlo illustration is sketched after this remark).

  2. (b)

    If \(p = \infty \), then \(\mathcal{l}_{\infty }(w_{n}^{(r)},w_{n}) \leq n\epsilon _{\infty }\).
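
A Monte Carlo sketch of part (a) of the preceding remark (ours; the constant is estimated, not derived): with \(\zeta _{i}\stackrel{\mathrm{d}}{=}N(0,1)\) and \(\zeta _{i}^{(r)} = 0\) we have \(w_{n}^{(r)} = 0\), so that \(\mathcal{l}_{p}(w_{n}^{(r)},w_{n}) = {(Ew_{n}^{p})}^{1/p}\), and the ratio to \({n}^{1/2}\) should stabilize:

```python
import numpy as np

rng = np.random.default_rng(3)
p, reps = 2, 4000
for n in (100, 400, 1600):
    # w_n =(d) max_{0<=j<=n} Y_j for the N(0,1) random walk; the perturbed
    # model has zeta^{(r)} = 0 and hence w_n^{(r)} = 0.
    Y = np.cumsum(rng.normal(size=(reps, n)), axis=1)
    w_n = np.maximum(Y.max(axis=1), 0.0)
    lp = np.mean(w_n ** p) ** (1 / p)   # l_p(w_n^{(r)}, w_n) = (E w_n^p)^{1/p}
    print(n, lp / n ** 0.5)             # ratio settles near a constant c
```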

Define the stopping times

$$\begin{array}{rcl} \theta & =\inf \left \{k : w_{k} =\max _{0\leq j\leq k}Y _{j} = w =\mathop{\sup}\limits_{j\geq 0}Y _{j}\right \},& \\ {\theta }^{(r)}& =\inf \{ k : w_{ k}^{(r)} = {w}^{(r)}\}. &\end{array}$$
(13.3.15)

From Lemma 13.3.1 we now obtain estimates for \(\mathcal{l}_{p}({w}^{(r)},w)\) in terms of the distributions of \(\theta \), \({\theta }^{(r)}\). Define \(G(n) :=\mathrm{ Pr}(\max ({\theta }^{(r)},\theta ) = n) \leq \mathrm{ Pr}({\theta }^{(r)} = n) +\mathrm{ Pr}(\theta = n)\).

Theorem 13.3.1.

If \(1 < p \leq 2\), \(\lambda \), \(\mu \geq 1\) with \((1/\lambda ) + (1/\mu ) = 1\) and \(E\zeta _{1} = E\zeta _{1}^{(r)} < 0\) , then

$$\mathcal{l}_{p}^{p}({w}^{(r)},w) \leq c\epsilon _{ p\lambda }^{p} \sum\limits_{n=1}^{\infty }{n}^{1/\lambda }G{(n)}^{1/\mu }.$$
(13.3.16)

Proof.

$$\begin{array}{rcl} \mathcal{L}_{p}^{p}({w}^{(r)},w)& =& E\vert {w}^{(r)} - w{\vert }^{p} = \sum\limits_{n=0}^{\infty }E\vert {w}^{(r)} - w{\vert }^{p}I\{\max ({\theta }^{(r)},\theta ) = n\} \\ & =& \sum\limits_{n=0}^{\infty }E\vert w_{ n} - w_{n}^{(r)}{\vert }^{p}I\{\max ({\theta }^{(r)},\theta ) = n\} \\ & \leq & \sum\limits_{n=0}^{\infty }{(E\vert w_{ n} - w_{n}^{(r)}{\vert }^{p\lambda })}^{1/\lambda }G{(n)}^{1/\mu }, \\ \end{array}$$

and thus, by (13.3.10),

$$\mathcal{l}_{p}^{p}({w}^{(r)},w) \leq \sum\limits_{n=0}^{\infty }A_{ p\lambda }^{p}G{(n)}^{1/\mu } = \sum\limits_{n=0}^{\infty }c{n}^{1/\lambda }\epsilon _{ p\lambda }^{p}G{(n)}^{1/\mu }.$$

Remark 13.3.3.

  1. (a)

    If

    $$G(n) \leq c{n}^{-\mu (1/\lambda +1+\epsilon )}$$
    (13.3.17)

    for some \(\epsilon > 0\), then \(\sum\limits_{n=1}^{\infty }{n}^{1/\lambda }G{(n)}^{1/\mu } \leq c\sum\limits_{n=1}^{\infty }{n}^{-(1+\epsilon )} < \infty \). For conditions on \(\zeta _{1}\), \(\zeta _{1}^{(r)}\) ensuring (13.3.17), compare Borovkov [1984, pp. 229, 230, 240].

  2. (b)

    For \(0 < p \leq 1\) and p > 2, in the same way we get from Lemma 13.3.1 corresponding estimates for \(\mathcal{l}_{p}({w}^{(r)},w)\).

  3. (c)

    Note that \(\mathcal{l}_{1}({w}^{(r)},w) \leq \mathcal{l}_{p}({w}^{(r)},w)\) for \(p \geq 1\); i.e., the \(\mathcal{l}_{p}\)-metric controls a larger class of functions w.r.t. the deviation [see the side conditions in (13.3.9)] than \(\mathcal{l}_{1}\). Moreover, \(\epsilon _{p\lambda } = \mathcal{l}_{p\lambda }(\zeta _{1}^{(r)},\zeta _{1}) \leq \mathcal{l}_{\infty }(\zeta _{1}^{(r)},\zeta _{1})\). Therefore, Theorem 13.3.1 is a refinement of the estimates given by Borovkov [1984, p. 270].

4 Approximation of a Random Queue by Means of Deterministic Queueing Models

The conceptually simplest class of queueing models is that of the deterministic type. Such models are usually explored under the assumption that the underlying (real) queueing system is close (in some sense) to a deterministic system. It is common practice to replace the random variables governing the queueing model with constants in the neighborhood of their mean values. In this section we evaluate the possible error incurred by approximating a random queueing model with a deterministic one. To obtain precise estimates, we explore relationships between distances in the space of random sequences, precise moment inequalities, and the Kemperman [1968, 1987] geometric approach to a certain trigonometric moment problem.

More precisely, as in Sect. 13.2, we consider a single-channel queueing system \(G\vert G\vert 1\vert \infty \) with sequences \(\mathbf{e} = (e_{0},e_{1},\ldots \,)\) and \(\mathbf{s} = (s_{0},s_{1},\ldots \,)\) of interarrival times and service times, respectively, assuming that \(\{e_{j}\}_{j\geq 0}\) and \(\{s_{j}\}_{j\geq 0}\) are, in general, dependent and nonidentically distributed RVs. We denote by \(\boldsymbol \zeta = (\zeta _{0},\zeta _{1},\ldots \,)\) the difference \(\mathbf{s} -\mathbf{e}\) and let \(\mathbf{w} = (w_{0},w_{1},\ldots \,)\) be the sequence of waiting times determined by (13.2.1).

Along with the queueing model \(G\vert G\vert 1\vert \infty \) defined by the input random characteristics \(\mathbf{e}\), \(\mathbf{s}\), \(\boldsymbol \zeta \) and the output characteristic \(\mathbf{w}\), we consider an approximating model with corresponding inputs \({\mathbf{e}}^{{_\ast}}\), \({\mathbf{s}}^{{_\ast}}\), \({\boldsymbol \zeta }^{{_\ast}}\) and output \({\mathbf{w}}^{{_\ast}}\),

$$w_{0}^{{_\ast}} = 0,\qquad w_{ n+1}^{{_\ast}} = (w_{ n}^{{_\ast}} + s_{ n}^{{_\ast}}- e_{ n}^{{_\ast}})_{ +},\qquad n = 0,1,2,\ldots,$$
(13.4.1)

where \((\cdot )_{+} =\max (0,\cdot )\). The latter model has a simpler structure, namely, we assume that \({\mathbf{e}}^{{_\ast}}\) or \({\mathbf{s}}^{{_\ast}}\) is deterministic. We also assume that estimates of the deviations between certain moments of e j and \(e_{j}^{{_\ast}}\) (resp. s j and \(s_{j}^{{_\ast}}\) or \(\zeta _{j}\) and \(\zeta _{j}^{{_\ast}}\)) are given.

We will consider two types of approximating models:

  1. (a)

    \(D\vert G\vert 1\vert \infty \) (i.e., \(e_{j}^{{_\ast}}\) are constants and, in general, \(e_{j}^{{_\ast}}\neq e_{i}^{{_\ast}}\) for \(i\neq j\)) and

  2. (b)

    \(D\vert D\vert 1\vert \infty \) (i.e., \(e_{j}^{{_\ast}}\) and \(s_{j}^{{_\ast}}\) are constants).

The next lemma provides a bound for the deviation between the sequences \(\mathbf{w} = (w_{0},w_{1},\ldots \,)\) and \({\mathbf{w}}^{{_\ast}} = (w_{0}^{{_\ast}},w_{1}^{{_\ast}},\ldots \,)\) in terms of the Prokhorov metric \(\boldsymbol \pi \). We denote by \(U = {\mathbb{R}}^{\infty }\) the space of all sequences with the metric

$$d(\overline{x},\overline{y}) = \sum\limits_{i=0}^{\infty }{2}^{-i}\vert x_{ i} - y_{i}\vert \quad [\overline{x} := (x_{0},x_{1},\ldots \,),\ \overline{y} := (y_{0},y_{1},\ldots \,)],$$

which may take infinite values. Let \({\mathfrak{X}}^{\infty } = \mathfrak{X}({\mathbb{R}}^{\infty })\) be the space of all random sequences defined on a “rich enough” probability space \((\Omega,\mathcal{A},\mathrm{Pr})\); see Remark 2.7.2. Then the Prokhorov metric in \({\mathfrak{X}}^{\infty }\) is given by

$$\begin{array}{rcl} \boldsymbol \pi (X,Y )& :=& \inf \{\epsilon > 0 :\mathrm{ Pr}(X \in A) \leq \mathrm{ Pr}(Y \in {A}^{\epsilon }) + \epsilon, \\ & & \quad \forall \ \mbox{ Borel sets}\ A \subset {\mathbb{R}}^{\infty }\}, \end{array}$$
(13.4.2)

where \({A}^{\epsilon }\) is the open \(\epsilon \)-neighborhood of A. Recall the Strassen–Dudley theorem (see Corollary 7.5.2 of Chap. 7):

$$\boldsymbol \pi (X,Y ) =\widehat{ \mathbf{K}}(X,Y ) :=\inf \{ \mathbf{K}(\overline{X},\overline{Y }) : \overline{X},\overline{Y } \in {\mathfrak{X}}^{\infty },\overline{X}\stackrel{\mathrm{d}}{=}X,\overline{Y }\stackrel{\mathrm{d}}{=}Y \},$$
(13.4.3)

where \(\mathbf{K}\) is the Ky Fan metric

$$\mathbf{K}(X,Y ) :=\inf \{ \epsilon > 0 :\mathrm{ Pr}(d(X,Y ) > \epsilon ) < \epsilon \},\quad X,Y \in {\mathfrak{X}}^{\infty }$$
(13.4.4)

(Example 3.4.2).

In stability problems for characterizations of \(\epsilon \)-independence the following concept is frequently used. Let \(\epsilon > 0\) and \(X = (X_{0},X_{1},\ldots \,) \in {\mathfrak{X}}^{\infty }\). The components of X are said to be \(\epsilon \)-independent if

$$\mathrm{IND}(X) = \boldsymbol \pi (X,\underline{X}) \leq \epsilon,$$

where the components \(\underline{X}_{i}\) of \(\underline{X}\) are independent and \(\underline{X}_{i}\stackrel{\mathrm{d}}{=}X_{i}\) (\(i \geq 0\)). The Strassen–Dudley theorem gives upper bounds for \(\mathrm{IND}(X)\) in terms of the Ky Fan metric \(\mathbf{K}(X,\underline{X})\).
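
Given samples of the distance \(d(X,Y )\) under some fixed coupling, the Ky Fan metric (13.4.4) of that coupling, and therefore an upper bound for \(\boldsymbol \pi \) by (13.4.3), can be read off the empirical distribution of the distances. A small sketch (ours; the additive-noise coupling below is a hypothetical example):

```python
import numpy as np

def ky_fan(dist_samples):
    """Empirical Ky Fan metric (13.4.4): the smallest eps with
    Pr(d(X, Y) > eps) <= eps under the given coupling."""
    d = np.sort(np.asarray(dist_samples))
    m = len(d)
    tail = (m - 1 - np.arange(m)) / m     # empirical Pr(d > d_i)
    return float(min(1.0, d[np.argmax(tail <= d)]))

rng = np.random.default_rng(4)
noise = rng.normal(0.0, 0.05, 100_000)    # Y = X + noise, so d(X, Y) = |noise|
eps = ky_fan(np.abs(noise))
print(eps)   # any such eps also bounds the Prokhorov metric pi(X, Y)
```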

Lemma 13.4.1.

Let the approximating model be of the type \(D\vert G\vert 1\vert \infty \) . Assume that the sequences \(\mathbf{e}\) and \(\mathbf{s}\) of the queueing model \(G\vert G\vert 1\vert \infty \) are independent. Then

$$\boldsymbol \pi (\mathbf{w},{\mathbf{w}}^{{_\ast}}) \leq \mathrm{ IND}(\mathbf{s}) +\mathrm{ IND}({\mathbf{s}}^{{_\ast}}) + \sum\limits_{j=0}^{\infty }(\boldsymbol \pi (e_{ j},e_{j}^{{_\ast}}) + \boldsymbol \pi (s_{ j},s_{j}^{{_\ast}})).$$
(13.4.5)

Proof.

By (13.2.1) and (13.4.1),

$$\begin{array}{rcl} d(\mathbf{w},{\mathbf{w}}^{{_\ast}})& =& \sum\limits_{n=1}^{\infty }{2}^{-n}\vert w_{ n} - w_{n}^{{_\ast}}\vert \\ & =& \sum\limits_{n=1}^{\infty }{2}^{-n}\vert \max (0,s_{ n-1} - e_{n-1},\ldots,(s_{n-1} - e_{n-1}) + \cdots + (s_{0} - e_{0})) \\ & & -\max (0,s_{n-1}^{{_\ast}}- e_{ n-1}^{{_\ast}},\ldots,(s_{ n-1}^{{_\ast}}- e_{ n-1}^{{_\ast}}) + \cdots + (s_{ 0}^{{_\ast}}- e_{ 0}^{{_\ast}}))\vert \\ & \leq & \sum\limits_{n=1}^{\infty }{2}^{-n}\vert \max (0,s_{ n-1} - e_{n-1},\ldots,(s_{n-1} - e_{n-1}) + \cdots + (s_{0} - e_{0})) \\ & & -\max (0,s_{n-1} - e_{n-1}^{{_\ast}},\ldots,(s_{ n-1} - e_{n-1}^{{_\ast}}) + \cdots + (s_{ 0} - e_{0}^{{_\ast}}))\vert \\ & & +\sum\limits_{n=1}^{\infty }{2}^{-n}\vert \max (0,s_{ n-1} - e_{n-1}^{{_\ast}},\ldots,(s_{ n-1} - e_{n-1}^{{_\ast}}) + \cdots + (s_{ 0} - e_{0}^{{_\ast}})) \\ & & -\max (0,s_{n-1}^{{_\ast}}- e_{ n-1}^{{_\ast}},\ldots,(s_{ n-1}^{{_\ast}}- e_{ n-1}^{{_\ast}}) + \cdots + (s_{ 0}^{{_\ast}}- e_{ 0}^{{_\ast}}))\vert \\ & \leq & \sum\limits_{n=1}^{\infty }{2}^{-n}\max (\vert e_{ n-1} - e_{n-1}^{{_\ast}}\vert,\ldots,\vert e_{ n-1} - e_{n-1}^{{_\ast}}\vert + \cdots + \vert e_{ 0} - e_{0}^{{_\ast}}\vert ) \\ & & +\sum\limits_{n=1}^{\infty }{2}^{-n}\max (\vert s_{ n-1} - s_{n-1}^{{_\ast}}\vert,\ldots,\vert s_{ n-1} - s_{n-1}^{{_\ast}}\vert + \cdots + \vert s_{ 0} - s_{0}^{{_\ast}}\vert ) \\ & \leq & \sum\limits_{n=1}^{\infty }{2}^{-n} \sum\limits_{j=0}^{n-1}(\vert e_{ j} - e_{j}^{{_\ast}}\vert + \vert s_{ j} - s_{j}^{{_\ast}}\vert ) \\ & \leq & d(\mathbf{e},{\mathbf{e}}^{{_\ast}}) + d(\mathbf{s},{\mathbf{s}}^{{_\ast}})\end{array}$$

Hence, by the definition of the Ky Fan metric (13.4.4), we obtain \(\mathbf{K}(\mathbf{w},{\mathbf{w}}^{{_\ast}}) \leq \mathbf{K}(\mathbf{e},{\mathbf{e}}^{{_\ast}}) + \mathbf{K}(\mathbf{s},{\mathbf{s}}^{{_\ast}})\). Next, using representation (13.4.3) let us choose independent pairs \((\mathbf{e}_{\epsilon },\mathbf{e}_{\epsilon }^{{_\ast}})\), \((\mathbf{s}_{\epsilon },\mathbf{s}_{\epsilon }^{{_\ast}})\) (\(\epsilon > 0\)) such that \(\boldsymbol \pi (\mathbf{e},{\mathbf{e}}^{{_\ast}}) > \mathbf{K}(\mathbf{e}_{\epsilon },\mathbf{e}_{\epsilon }^{{_\ast}}) - \epsilon \), \(\boldsymbol \pi (\mathbf{s},{\mathbf{s}}^{{_\ast}}) > \mathbf{K}(\mathbf{s}_{\epsilon },\mathbf{s}_{\epsilon }^{{_\ast}}) - \epsilon \), and \(\mathbf{e}\stackrel{\mathrm{d}}{=}\mathbf{e}_{\epsilon }\), \({\mathbf{e}}^{{_\ast}}\stackrel{\mathrm{d}}{=}\mathbf{e}_{\epsilon }^{{_\ast}}\), \(\mathbf{s}\stackrel{\mathrm{d}}{=}\mathbf{s}_{\epsilon }\), \({\mathbf{s}}^{{_\ast}}\stackrel{\mathrm{d}}{=}\mathbf{s}_{\epsilon }^{{_\ast}}\). Then by the independence of \(\mathbf{e}\) and \(\mathbf{s}\) (resp. \({\mathbf{e}}^{{_\ast}}\) and \({\mathbf{s}}^{{_\ast}}\)), we have

$$\begin{array}{rcl} \boldsymbol \pi (\mathbf{w},{\mathbf{w}}^{{_\ast}})& =& \inf \{\mathbf{K}(\mathbf{w}_{ 0},\mathbf{w}_{0}^{{_\ast}}) : \mathbf{w}_{ 0}\stackrel{\mathrm{d}}{=}\mathbf{w},\mathbf{w}_{0}^{{_\ast}}\stackrel{\mathrm{d}}{=}{\mathbf{w}}^{{_\ast}}\} \\ & \leq & \inf \{\mathbf{K}(\mathbf{e}_{0},\mathbf{e}_{0}^{{_\ast}}) + \mathbf{K}(\mathbf{s}_{ 0},\mathbf{s}_{0}^{{_\ast}}) : (\mathbf{e}_{ 0},\mathbf{s}_{0})\stackrel{\mathrm{d}}{=}(\mathbf{e},\mathbf{s}),(\mathbf{e}_{0}^{{_\ast}},\mathbf{s}_{ 0}^{{_\ast}})\stackrel{\mathrm{d}}{=}({\mathbf{e}}^{{_\ast}},{\mathbf{s}}^{{_\ast}}), \\ & & \quad \mathbf{e}_{0}\ \text{ is independent of}\ \mathbf{s}_{0},\mathbf{e}\ \text{ is independent of}\ \mathbf{s}, \\ & & \quad \mathbf{e}_{0}^{{_\ast}}\ \text{ is independent of}\ \mathbf{s}_{ 0}^{{_\ast}},{\mathbf{e}}^{{_\ast}}\ \text{ is independent of}\ {\mathbf{s}}^{{_\ast}}\} \\ & \leq & \mathbf{K}(\mathbf{e}_{\epsilon },\mathbf{e}_{\epsilon }^{{_\ast}}) + \mathbf{K}(\mathbf{s}_{ \epsilon },\mathbf{s}_{\epsilon }^{{_\ast}}) \leq \boldsymbol \pi (\mathbf{e},{\mathbf{e}}^{{_\ast}}) + \boldsymbol \pi (\mathbf{s},{\mathbf{s}}^{{_\ast}}) + 2\epsilon, \\ \end{array}$$

which proves that

$$\boldsymbol \pi (\mathbf{w},{\mathbf{w}}^{{_\ast}}) \leq \boldsymbol \pi (\mathbf{e},{\mathbf{e}}^{{_\ast}}) + \boldsymbol \pi (\mathbf{s},{\mathbf{s}}^{{_\ast}}).$$
(13.4.6)

Next let us estimate \(\boldsymbol \pi (\mathbf{e},{\mathbf{e}}^{{_\ast}})\) in the preceding inequality. Observe that

$$\mathbf{K}(X,Y ) \leq \sum\limits_{i=0}^{\infty }\mathbf{K}(X_{ i},Y _{i})$$
(13.4.7)

for any \(X,Y \in {\mathfrak{X}}^{\infty }\). In fact, if \(\mathbf{K}(X_{i},Y _{i}) \leq \epsilon _{i}\) and \(1 > \epsilon = \sum\limits_{i=0}^{\infty }\epsilon _{i}\), then

$$\begin{array}{rcl} \epsilon & \geq & \sum\limits_{i=0}^{\infty }\mathrm{Pr}(\vert X_{ i} - Y _{i}\vert > \epsilon _{i}) \geq \sum\limits_{i=0}^{\infty }\mathrm{Pr}({2}^{-i}\vert X_{ i} - Y _{i}\vert > \epsilon _{i}) \\ & \geq & \mathrm{Pr}\left (\sum\limits_{i=0}^{\infty }{2}^{-i}\vert X_{ i} - Y _{i}\vert > \epsilon \right ).\end{array}$$

Letting \(\epsilon _{i} \rightarrow \mathbf{K}(X_{i},Y _{i})\) we obtain (13.4.7). By (13.4.7) and \(\boldsymbol \pi (\mathbf{e},{\mathbf{e}}^{{_\ast}}) = \mathbf{K}(\mathbf{e},{\mathbf{e}}^{{_\ast}})\), we have

$$\boldsymbol \pi (\mathbf{e},{\mathbf{e}}^{{_\ast}}) \leq \sum\limits_{i=0}^{\infty }\mathbf{K}(e_{ i},e_{i}^{{_\ast}}) = \sum\limits_{i=0}^{\infty }\boldsymbol \pi (e_{ i},e_{i}^{{_\ast}}).$$
(13.4.8)

Next we will estimate \(\boldsymbol \pi (\mathbf{s},{\mathbf{s}}^{{_\ast}})\) on the right-hand side of (13.4.6). By the triangle inequality for the metric \(\boldsymbol \pi \), we have

$$\boldsymbol \pi (\mathbf{s},{\mathbf{s}}^{{_\ast}}) \leq \mathrm{ IND}(\mathbf{s}) +\mathrm{ IND}({\mathbf{s}}^{{_\ast}}) + \boldsymbol \pi (\underline{\mathbf{s}},{\underline{\mathbf{s}}}^{{_\ast}}),$$
(13.4.9)

where the sequence \(\underline{\mathbf{s}}\) (resp. \({\underline{\mathbf{s}}}^{{_\ast}}\)) in the last inequality consists of independent components such that \(\underline{s}_{j}\stackrel{\mathrm{d}}{=}s_{j}\) (resp. \(\underline{s}_{j}^{{_\ast}}\stackrel{\mathrm{d}}{=}s_{j}^{{_\ast}}\)). We now need the “regularity” property of the Prokhorov metric,

$$\boldsymbol \pi (X + Z,Y + Z) \leq \boldsymbol \pi (X,Y ),$$
(13.4.10)

for any Z independent of X and Y in \({\mathfrak{X}}^{\infty }\). In fact, (13.4.10) follows from the Strassen–Dudley theorem (13.4.3) and the corresponding inequality for the Ky Fan metric

$$\mathbf{K}(X + Z,Y + Z) \leq \mathbf{K}(X,Y )$$
(13.4.11)

for all X, Y, and Z in \({\mathfrak{X}}^{\infty }\). By the triangle inequality and (13.4.10), we have

$$\boldsymbol \pi \left (\sum\limits_{i=0}^{\infty }X_{ i},\sum\limits_{i=0}^{\infty }Y _{ i}\right ) \leq \sum\limits_{i=0}^{\infty }\boldsymbol \pi (X_{ i},Y _{i})$$
(13.4.12)

for all \(X,Y \in {\mathfrak{X}}^{\infty }\), \(X = (X_{0},X_{1},\ldots \,)\) and \(Y = (Y _{0},Y _{1},\ldots \,)\) with independent components. Thus \(\boldsymbol \pi (\underline{\mathbf{s}},{\underline{\mathbf{s}}}^{{_\ast}}) \leq \sum\limits_{j=0}^{\infty }\boldsymbol \pi (s_{j},s_{j}^{{_\ast}})\), which together with (13.4.6), (13.4.8), and (13.4.9) completes the proof of (13.4.5).

In the next theorem we will omit the restriction that \(\mathbf{e}\) and \(\mathbf{s}\) are independent, but we will assume that the approximation model is of a completely deterministic type \(D\vert D\vert 1\vert \infty \). (Note that for this approximation model \(e_{j}^{{_\ast}}\) and \(s_{j}^{{_\ast}}\) can be different constants for different j.)

Lemma 13.4.2.

Under the preceding assumptions, we have the following estimates:

$$\boldsymbol \pi (\mathbf{w},{\mathbf{w}}^{{_\ast}}) = \mathbf{K}(\mathbf{w},{\mathbf{w}}^{{_\ast}}) \leq \boldsymbol \pi (\boldsymbol \zeta,{\boldsymbol \zeta }^{{_\ast}}) \leq \sum\limits_{j=0}^{\infty }\boldsymbol \pi (\zeta _{ j},\zeta _{j}^{{_\ast}}) = \sum\limits_{j=0}^{\infty }\mathbf{K}(\zeta _{ j},\zeta _{j}^{{_\ast}}),$$
(13.4.13)
$$\boldsymbol \pi (\mathbf{w},{\mathbf{w}}^{{_\ast}}) \leq \sum\limits_{j=0}^{\infty }(\boldsymbol \pi (e_{ j},e_{j}^{{_\ast}})+\boldsymbol \pi (s_{ j},s_{j}^{{_\ast}})) = \sum\limits_{j=0}^{\infty }(\mathbf{K}(e_{ j},e_{j}^{{_\ast}})+\mathbf{K}(s_{ j},s_{j}^{{_\ast}})).$$
(13.4.14)

The proof is similar to that of Lemma 13.4.1.

Lemmas 13.4.1 and 13.4.2 transfer our original problem of estimating the deviation between \(\mathbf{w}\) and \({\mathbf{w}}^{{_\ast}}\) to the problem of obtaining sharp or nearly sharp upper bounds for \(\mathbf{K}(e_{j},e_{j}^{{_\ast}}) = \boldsymbol \pi (e_{j},e_{j}^{{_\ast}})\) [resp. \(\mathbf{K}(\zeta _{j},\zeta _{j}^{{_\ast}})\)], assuming that certain moment characteristics of \(e_{j}\) (resp. \(\zeta _{j}\)) are given. The problem of estimating the right-hand side of (13.4.5) reduces to estimating the terms \(\mathrm{IND}(\mathbf{s})\), \(\mathrm{IND}({\mathbf{s}}^{{_\ast}})\), \(\boldsymbol \pi (e_{j},e_{j}^{{_\ast}})\), and \(\boldsymbol \pi (s_{j},s_{j}^{{_\ast}})\); here \(\mathrm{IND}(\mathbf{s})\) and \(\mathrm{IND}({\mathbf{s}}^{{_\ast}})\) can be estimated using the Strassen–Dudley theorem and the Chebyshev inequality. The estimates for \(\boldsymbol \pi (e_{j},e_{j}^{{_\ast}})\) and \(\boldsymbol \pi (\zeta _{j},\zeta _{j}^{{_\ast}})\), with \(e_{j}^{{_\ast}}\) and \(\zeta _{j}^{{_\ast}}\) being constants, are given in Lemmas 13.4.3–13.4.8 below.

Lemma 13.4.3.

Let \(\alpha > 0\), \(\delta \in [0,1]\) , and \(\phi \) be a nondecreasing continuous function on \([0,\infty )\) . Then the Ky Fan radius (with fixed moment \(\phi \) )

$$R = R(\alpha,\delta,\phi ) :=\max \{ \mathbf{K}(X,\alpha ) : E\phi (\vert X - \alpha \vert ) \leq \delta \}$$
(13.4.15)

is equal to \(\min (1,\psi (\delta ))\) , where \(\psi \) is the inverse function of \(t\phi (t)\), \(t \geq 0\).

Proof.

By Chebyshev’s inequality, \(\mathbf{K}(X,\alpha ) \leq \psi (\delta )\) if \(E\phi (\vert X - \alpha \vert ) \leq \delta \), and thus \(R \leq \min (1,\psi (\delta ))\). Moreover, if \(\psi (\delta ) < 1\) (otherwise, we have trivially that R = 1), then by letting \(X = X_{0} + \alpha \), where \(X_{0}\) takes the values \(-\epsilon \), 0, \(\epsilon := \psi (\delta )\) with probabilities \(\epsilon /2\), \(1 - \epsilon \), \(\epsilon /2\), respectively, we obtain \(\mathbf{K}(X,\alpha ) = \psi (\delta )\), as is required.
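
Both halves of the proof are easy to check numerically. A sketch for the concrete choice \(\phi (t) = {t}^{2}\), so that \(t\phi (t) = {t}^{3}\) and \(\psi (\delta ) = {\delta }^{1/3}\) (the values of \(\alpha \) and \(\delta \) are arbitrary test inputs):

```python
import numpy as np

delta, alpha = 0.001, 2.0
eps = delta ** (1 / 3)                 # psi(delta) for phi(t) = t^2

# Extremal law of Lemma 13.4.3: mass eps/2 at alpha - eps and at alpha + eps.
values = np.array([alpha - eps, alpha, alpha + eps])
probs = np.array([eps / 2, 1 - eps, eps / 2])

print(np.sum(probs * (values - alpha) ** 2), delta)  # E phi(|X - alpha|) = delta
# Pr(|X - alpha| > t) = eps for all 0 <= t < eps, hence K(X, alpha) = eps:
print(eps, min(1.0, eps))                            # attains min(1, psi(delta))
```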

Using Lemma 13.4.3 we obtain a sharp estimate of \(\mathbf{K}(\zeta _{j},\zeta _{j}^{{_\ast}})\) (\(\zeta _{j}^{{_\ast}}\) constant) if it is known that \(E\phi (\vert \zeta _{j} - \zeta _{j}^{{_\ast}}\vert ) \leq \delta \). However, the problem becomes more difficult if one assumes that

$$\zeta _{j} \in S_{\zeta _{j}^{{_\ast}}}(\epsilon _{1j},\epsilon _{2j},f_{j},g_{j}),$$
(13.4.16)

where for fixed constants \(\alpha \in \mathbb{R}\), \(\epsilon _{1} \geq 0\), and \(\epsilon _{2} > 0\)

$$S_{\alpha }(\epsilon _{1},\epsilon _{2},f,g) :=\{ X \in \widetilde{ \mathfrak{X}} : \vert Ef(X) - f(\alpha )\vert \leq \epsilon _{1},\vert Eg(X) - g(\alpha )\vert \leq \epsilon _{2}\},$$
(13.4.17)

and \(\widetilde{\mathfrak{X}}\) is the set of real-valued RVs for which Ef(X) and Eg(X) exist.

Suppose that the only information we have on hand concerns estimates of the deviations \(\vert Ef(\zeta _{j}) - f(\zeta _{j}^{{_\ast}})\vert \) and \(\vert Eg(\zeta _{j}) - g(\zeta _{j}^{{_\ast}})\vert \). Here, the main problem is the evaluation of the Ky Fan radius

$$D = D_{\alpha }(\epsilon _{1},\epsilon _{2},f,g) =\mathop{\sup}\limits_{X\in S_{\alpha }(\epsilon _{1},\epsilon _{2},f,g)}\mathbf{K}(X,\alpha ) =\mathop{\sup}\limits_{X\in S_{\alpha }(\epsilon _{1},\epsilon _{2},f,g)}\boldsymbol \pi (X,\alpha ).$$
(13.4.18)

The next theorem deals with an estimate of \(D_{\alpha }(\epsilon _{1},\epsilon _{2},f,g)\) for the “classic” case

$$f(x) = x,\quad g(x) = {x}^{2}.$$
(13.4.19)

Lemma 13.4.4.

If \(f(x) = x\) and \(g(x) = {x}^{2}\), then

$$\epsilon _{2}^{1/3} \leq D_{ \alpha }(\epsilon _{1},\epsilon _{2},f,g) \leq \min (1,\gamma ),$$
(13.4.20)

where \(\gamma = {(\epsilon _{2} + 2\vert \alpha \vert \epsilon _{1})}^{1/3}\).

Proof.

By Chebyshev’s inequality for any \(X \in S_{\alpha }(\epsilon _{1},\epsilon _{2},f,g)\), we have \({\mathbf{K}}^{3}(X,\alpha ) \leq E{X}^{2} - 2\alpha EX + {\alpha }^{2} := I\). We consider two cases:

If \(\alpha > 0\) then \(I \leq {\alpha }^{2} + \epsilon _{2} - 2\alpha (\alpha - \epsilon _{1}) + {\alpha }^{2} = {\gamma }^{3}\).

If \(\alpha \leq 0\) then \(I \leq 2{\alpha }^{2} + \epsilon _{2} - 2\alpha (\alpha + \epsilon _{1}) = {\gamma }^{3}\).

Hence the upper bound for D in (13.4.20) is established.

Consider the RV X, which takes the values \(\alpha - \epsilon \), \(\alpha \), \(\alpha + \epsilon \) with probabilities p, q, p, (2p + q = 1), respectively. Then \(EX = \alpha \), so that \(\vert EX - \alpha \vert = 0 \leq \epsilon _{1}\). Further, \(E{X}^{2} = {\alpha }^{2} + 2{\epsilon }^{2}p = \epsilon _{2} + {\alpha }^{2}\) if we choose \(\epsilon = \epsilon _{2}^{1/3}\), \(p = \epsilon _{2}^{1/3}/2\). Then \(F_{X}(\alpha + \epsilon - 0) - F_{X}(\alpha - \epsilon ) = q = 1 - \epsilon _{2}^{1/3}\), and thus \(\mathbf{K}(X,\alpha ) \geq \epsilon _{2}^{1/3}\), which proves the lower bound of D in (13.4.20).
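
The lower-bound construction of the proof can be verified the same way; a sketch with arbitrary test values of \(\alpha \) and \(\epsilon _{2}\):

```python
import numpy as np

alpha, eps2 = 1.5, 0.008
eps = eps2 ** (1 / 3)       # the choice eps = eps2^{1/3} from the proof
p = eps / 2                 # mass at each of alpha - eps and alpha + eps
values = np.array([alpha - eps, alpha, alpha + eps])
probs = np.array([p, 1 - 2 * p, p])

print(np.sum(probs * values) - alpha)                  # EX - alpha = 0 <= eps1
print(np.sum(probs * values ** 2) - alpha ** 2, eps2)  # EX^2 - alpha^2 = eps2
# Pr(|X - alpha| >= eps) = 2p = eps, so K(X, alpha) >= eps = eps2^{1/3}.
print(2 * p, eps)
```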

Using Lemma 13.4.4 we can easily obtain estimates for \(D_{\alpha }(\epsilon _{1},\epsilon _{2},f,g)\), where

$$f(x) := \lambda + \mu x + \zeta {x}^{2}\qquad x,\lambda,\mu,\zeta \in \mathbb{R}$$

and

$$g(x) := a + bx + c{x}^{2}\qquad x,a,b,c \in \mathbb{R}$$

are polynomials of degree two. That is, assuming \(c\neq 0\), we may represent f as follows: f(x) = A + Bx + Cg(x), where \(A = \lambda - \zeta a/c\), \(B = \mu - \zeta b/c\), \(C = \zeta /c\).

Lemma 13.4.5.

Let f and g be defined as previously. Assume \(c\neq 0\) , and \(B\neq 0\) . Then

$$D_{\alpha }(\epsilon _{1},\epsilon _{2},f,g) \leq D_{\alpha }(\widetilde{\epsilon }_{1},\widetilde{\epsilon }_{2},\widetilde{f},\widetilde{g}),$$

where

$$\widetilde{\epsilon }_{1} := \frac{1} {\vert B\vert }(\vert C\vert \epsilon _{2} + \epsilon _{1}),\qquad \widetilde{\epsilon }_{2} := \frac{1} {\vert c\vert }\left [\left \vert \frac{b} {B}\right \vert (\vert C\vert \epsilon _{2} + \epsilon _{1}) + \epsilon _{2}\right ],$$
$$\widetilde{f}(x) = x,\qquad \widetilde{g}(x) = {x}^{2}.$$

In particular, \(D_{\alpha }(\epsilon _{1},\epsilon _{2},f,g) \leq {(\widetilde{\epsilon }_{2} + 2\vert \alpha \vert \widetilde{\epsilon }_{1})}^{1/3} = {(c_{1}\epsilon _{2} + c_{2}\epsilon _{1})}^{1/3}\) , where

$$c_{1} = \frac{1} {\vert c\vert \vert \mu - \zeta b\vert }(\vert b\zeta \vert + \vert \mu - \zeta b\vert + 2\vert \alpha \vert \vert \zeta c\vert )$$

and

$$c_{2} = \left \vert \frac{b} {\mu - \zeta b}\right \vert + 2\vert \alpha \vert.$$

Proof.

First we consider the special case \(f(x) = x\) and \(g(x) = a + bx + c{x}^{2}\), \(x \in \mathbb{R}\), where a, b, c are real constants with \(c\neq 0\). We prove first that

$$D_{\alpha }(\epsilon _{1},\epsilon _{2},f,g) \leq D_{\alpha }(\epsilon _{1},\widetilde{\epsilon }_{2},f,\widetilde{g}),$$
(13.4.21)

where \(\widetilde{\epsilon }_{2} := (1/\vert c\vert )(\vert b\vert \epsilon _{1} + \epsilon _{2})\) and \(\widetilde{g}(x) = {x}^{2}\). Thus, by (13.4.20), we get

$$D_{\alpha }(\epsilon _{1},\epsilon _{2},f,g) \leq {(\widetilde{\epsilon }_{2} + 2\vert \alpha \vert \epsilon _{1})}^{1/3}.$$
(13.4.22)

Since \(\vert Ef(X) - f(\alpha )\vert = \vert EX - \alpha \vert \leq \epsilon _{1}\) and \(\vert Eg(X) - g(\alpha )\vert = \vert b(EX - \alpha ) + c(E{X}^{2} - {\alpha }^{2})\vert \leq \epsilon _{2}\), we have that \(\vert c\vert \vert E{X}^{2} - {\alpha }^{2}\vert \leq \vert b\vert \vert EX - \alpha \vert + \epsilon _{2} \leq \vert b\vert \epsilon _{1} + \epsilon _{2}\). That is, \(\vert E{X}^{2} - {\alpha }^{2}\vert \leq \widetilde{ \epsilon }_{2}\), which establishes the required estimate (13.4.21).

Now we consider the general case of \(f(x) = \lambda + \mu x + \zeta {x}^{2}\). From f(x) = A + Bx + Cg(x) and the assumptions that \(\vert Ef(X) - f(\alpha )\vert \leq \epsilon _{1}\) and \(\vert Eg(X) - g(\alpha )\vert \leq \epsilon _{2}\), we have \(\vert B\vert \vert EX - \alpha \vert \leq \vert Ef(X) - f(\alpha )\vert + \vert C\vert \vert Eg(X) - g(\alpha )\vert \leq \epsilon _{1} + \vert C\vert \epsilon _{2}\), that is, \(\vert EX - \alpha \vert \leq \widetilde{ \epsilon }_{1}\). Therefore, \(D_{\alpha }(\epsilon _{1},\epsilon _{2},f,g) \leq D_{\alpha }(\widetilde{\epsilon }_{1},\epsilon _{2},\widetilde{f},g)\), where \(\widetilde{f}(x) = x\). Using (13.4.21) we have that \(D_{\alpha }(\widetilde{\epsilon }_{1},\epsilon _{2},\widetilde{f},g) \leq D_{\alpha }(\widetilde{\epsilon }_{1},\widetilde{\epsilon }_{2},\widetilde{f},\widetilde{g})\), where

$$\widetilde{\epsilon }_{2} = \frac{1} {\vert c\vert }(\vert b\vert \widetilde{\epsilon }_{1} + \epsilon _{2}),$$

which by means of Lemma 13.4.4 completes the proof of Lemma 13.4.5.
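
The reduction in Lemma 13.4.5 is purely arithmetic, so the resulting bound is easy to evaluate. The sketch below plugs made-up coefficients for f and g and made-up tolerances \(\epsilon _{1}\), \(\epsilon _{2}\) into the formulas of the lemma:

```python
# All numeric values below are hypothetical test inputs.
lam, mu, zeta = 0.0, 1.0, 0.2        # f(x) = lam + mu*x + zeta*x^2
a, b, c = 0.0, 0.5, 1.0              # g(x) = a + b*x + c*x^2
alpha, eps1, eps2 = 1.0, 0.01, 0.02

B, C = mu - zeta * b / c, zeta / c   # representation f(x) = A + B*x + C*g(x)
eps1_t = (abs(C) * eps2 + eps1) / abs(B)                        # tilde eps_1
eps2_t = (abs(b / B) * (abs(C) * eps2 + eps1) + eps2) / abs(c)  # tilde eps_2
# Bound of Lemma 13.4.5 via Lemma 13.4.4:
print((eps2_t + 2 * abs(alpha) * eps1_t) ** (1 / 3))
```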

The main assumption in Lemmas 13.4.3–13.4.5 was the monotonicity of \(\phi \), f, and g, which allows us to use the Chebyshev inequality. More difficult is the problem of finding \(D_{\alpha }(\epsilon _{1},\epsilon _{2},f,g)\) when f and g are not polynomials of degree two. The case of

$$f(x) =\cos x\quad \text{ and}\quad g(x) =\sin x,$$

where \(x \in [0,2\pi ]\), is particularly difficult.

Remark 13.4.1.

In fact, we will investigate the rate of convergence of \(\mathbf{K}(X_{n},\alpha ) \rightarrow 0\) (\(0 \leq X_{n} \leq 2\pi \)) as \(n \rightarrow \infty \), provided that \(E\cos X_{n} \rightarrow \cos \alpha \) and \(E\sin X_{n} \rightarrow \sin \alpha \). In the next lemma, we show Berry–Esseen-type bounds for the implication

$$E\exp (iX_{n}) \rightarrow \exp (i\alpha ) \Rightarrow \mathbf{K}(X_{n},\alpha ) = \boldsymbol \pi (X_{n},\alpha ) \rightarrow 0.$$

In what follows, we consider probability measures \(\mu \) on \([0,2\pi ]\) and let

$$M(\epsilon ) = \left \{\mu : \left \vert \int \nolimits \cos t\mathrm{d}\mu -\cos \alpha \right \vert \leq \epsilon,\left \vert \int \nolimits \sin t\mathrm{d}\mu -\sin \alpha \right \vert \leq \epsilon \right \}.$$
(13.4.23)

We would like to evaluate the trigonometric Ky Fan (or Prokhorov) radius for \(M(\epsilon )\) defined by

$$D =\sup \{ \boldsymbol \pi (\mu,\delta _{\alpha }) : \mu \in M(\epsilon )\},$$
(13.4.24)

where \(\delta _{\alpha }\) is the point mass at \(\alpha \) and \(\boldsymbol \pi (\mu,\delta _{\alpha })\) is the Ky Fan (or Prokhorov) metric

$$\boldsymbol \pi (\mu,\delta _{\alpha }) =\inf \{ r > 0 : \mu ([\alpha - r,\alpha + r]) \geq 1 - r\}.$$
(13.4.25)

Our main result is as follows.

Lemma 13.4.6.

Let \(\alpha \in [1,2\pi - 1]\) be fixed and let \(\epsilon \in (0,(1/\sqrt{2})(1 -\cos 1))\). Then D is obtained as the unique solution of

$$D - D\cos D = \epsilon (\vert \cos \alpha \vert + \vert \sin \alpha \vert ).$$
(13.4.26)

Here we have that \(D \in (0,1)\).

Remark 13.4.2.

By (13.4.26), one obtains

$$D \leq {[2\epsilon (\vert \cos \alpha \vert + \vert \sin \alpha \vert )]}^{1/3}.$$
(13.4.27)

From (13.4.26) and (13.4.27) [see also (13.4.28)] we have that \(D \rightarrow 0\) as \(\epsilon \rightarrow 0\). The latter implies that \(\boldsymbol \pi (\mu,\delta _{\alpha }) \rightarrow 0\), which in turn gives that \(\mu \stackrel{\mathrm{w}}{\rightarrow }\delta _{\alpha }\). In fact, D converges to zero quantitatively through (13.4.26) and (13.4.27); that is, the knowledge of D gives the rate of the weak convergence of \(\mu \) to \(\delta _{\alpha }\) (see also Lemma 13.4.7).
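
Since the left-hand side of (13.4.26) is strictly increasing in D on (0,1), the trigonometric Ky Fan radius can be computed by bisection. A minimal sketch (ours; \(\alpha \) and \(\epsilon \) are arbitrary admissible inputs):

```python
import math

def trig_radius(alpha, eps, tol=1e-12):
    """Solve D - D*cos(D) = eps*(|cos a| + |sin a|), i.e., find the unique
    root of (13.4.26) in (0, 1), by bisection."""
    rhs = eps * (abs(math.cos(alpha)) + abs(math.sin(alpha)))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - mid * math.cos(mid) < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha, eps = 2.0, 0.01    # alpha in [1, 2*pi - 1], eps below the threshold
D = trig_radius(alpha, eps)
approx = (2 * eps * (abs(math.cos(alpha)) + abs(math.sin(alpha)))) ** (1 / 3)
print(D, approx)   # for small eps, D is close to the cube-root value in (13.4.27)
```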

The proofs of Lemmas 13.4.6 and 13.4.7, while based on the solution of certain moment problems (see Chap. 9), require additional facts on the Kemperman geometric approach to the solution of the general moment problem and will therefore be omitted. For the necessary proofs see Anastassiou and Rachev [1992].

Lemma 13.4.7.

Let \(f(x) =\cos x\), \(g(x) =\sin x\); \(\alpha \in [0,1)\) or \(\alpha \in (2\pi - 1,2\pi )\) . Define

$$\begin{array}{rcl} D& =& D_{\alpha }(\epsilon,f,g) \\ & =& \sup \{\mathbf{K}(X,\alpha ) : \vert E\cos X -\cos \alpha \vert \leq \epsilon,\vert E\sin X -\sin \alpha \vert \leq \epsilon \}.\end{array}$$

Let \(\beta = \alpha + 1\) if \(\alpha \in [0,1)\) , and let \(\beta = \alpha - 1\) if \(\alpha \in (2\pi - 1,2\pi )\) . Then

$$D_{\alpha }(\epsilon,f,g) \leq D_{\beta }(\epsilon (\cos 1 +\sin 1),f,g).$$

In particular, by (13.4.27),

$$D_{\alpha }(\epsilon,f,g) \leq {[2\epsilon (\cos 1 +\sin 1)(\vert \cos \alpha \vert + \vert \sin \alpha \vert )]}^{1/3}$$
(13.4.28)

for any \(0 \leq \alpha < 2\pi \) and \(\epsilon \in (0,(1/\sqrt{2})(1 -\cos 1))\).

Further, we are going to use (13.4.28) to obtain estimates for \(D_{\alpha }(\epsilon,f,g)\), where \(f(x) = \lambda + \mu \cos x + \zeta \sin x\), \(x \in [0,2\pi ]\), \(\lambda \), \(\mu \), \(\zeta \in \mathbb{R}\), and \(g(x) = a + b\cos x + c\sin x\), \(x \in [0,2\pi ]\), a, b, \(c \in \mathbb{R}\). Assuming \(c\neq 0\) we have \(f(x) = A + B\cos x + Cg(x)\), where \(A = \lambda - \zeta a/c\), \(B = \mu - \zeta b/c\), \(C = \zeta /c\).

Lemma 13.4.8.

Let the trigonometric polynomials f and g be defined as previously. Assume \(c\neq 0\) and \(B\neq 0\) . Then \(D_{\alpha }(\epsilon,f,g) \leq D_{\alpha }(\epsilon \tau \eta,\widetilde{f},\widetilde{g})\) for any \(0 \leq \alpha < 2\pi \) , where

$$\tau =\max \left (1, \frac{1} {\vert c\vert }(\vert b\vert + 1)\right )$$

and

$$\eta =\max \left (1, \frac{1} {\vert B\vert }(\vert C\vert + 1)\right )$$

and \(\widetilde{f}(x) =\cos x\), \(\widetilde{g}(x) =\sin x\). If

$$0 \leq \epsilon \leq \frac{1} {\tau \eta \sqrt{2}}(1 -\cos 1),$$

then we obtain

$$D_{\alpha }(\epsilon,f,g) \leq {[2\epsilon \tau \eta (\cos 1 +\sin 1)(\vert \cos \alpha \vert + \vert \sin \alpha \vert )]}^{1/3}$$
(13.4.29)

for any \(0 \leq \alpha < 2\pi \).

The proof is similar to that of Lemma 13.4.5.

Now we can state the main result determining the deviation between the waiting times of a deterministic and a random queueing model.

Theorem 13.4.1.

  1. (i)

    Let the approximating queueing model be of type \(D\vert G\vert 1\vert \infty \). Assume that the sequences \(\mathbf{e}\) and \(\mathbf{s}\) of the “real” queue of type \(G\vert G\vert 1\vert \infty \) are independent. Then the Prokhorov metric between the sequences of waiting times of the \(D\vert G\vert 1\vert \infty \) queue and the \(G\vert G\vert 1\vert \infty \) queue is estimated as follows:

    $$\boldsymbol \pi (\mathbf{w},{\mathbf{w}}^{{_\ast}}) \leq \mathrm{ IND}(\mathbf{s}) +\mathrm{ IND}({\mathbf{s}}^{{_\ast}}) + \sum\limits_{j=0}^{\infty }(\boldsymbol \pi (e_{ j},e_{j}^{{_\ast}}) + \boldsymbol \pi (s_{ j},s_{j}^{{_\ast}})).$$
    (13.4.30)
  2. (ii)

    Assume that the approximating model is of type \(D\vert D\vert 1\vert \infty \) and the “real” queue is of type \(G\vert G\vert 1\vert \infty \) . Then

    $$\boldsymbol \pi (\mathbf{w},{\mathbf{w}}^{{_\ast}}) \leq 2\sum\limits_{j=0}^{\infty }\boldsymbol \pi (\zeta _{ j},\zeta _{j}^{{_\ast}})$$
    (13.4.31)

    and

    $$\boldsymbol \pi (\mathbf{w},{\mathbf{w}}^{{_\ast}}) \leq 2\sum\limits_{j=0}^{\infty }(\boldsymbol \pi (e_{ j},e_{j}^{{_\ast}}) + \boldsymbol \pi (s_{ j},s_{j}^{{_\ast}})).$$
    (13.4.32)
  3. (iii)

    The right-hand sides of (13.4.30)–(13.4.32) can be estimated as follows: let \(\boldsymbol \pi (X,{X}^{{_\ast}})\) denote \(\boldsymbol \pi (e_{j},e_{j}^{{_\ast}})\) in (13.4.30) or \(\boldsymbol \pi (\zeta _{j},\zeta _{j}^{{_\ast}})\) in (13.4.31) or \(\boldsymbol \pi (e_{j},e_{j}^{{_\ast}})\) ( \(\boldsymbol \pi (s_{j},s_{j}^{{_\ast}})\) ) in (13.4.32) (note that \({X}^{{_\ast}}\) is a constant). Then:

    1. (a)

      If the function \(\phi \) is nondecreasing on \([0,\infty )\) and continuous on [0,1] and satisfies

      $$E\phi (\vert X - {X}^{{_\ast}}\vert ) \leq \delta \leq 1,$$
      (13.4.33)

      then

      $$\boldsymbol \pi (X,{X}^{{_\ast}}) \leq \min (1,\psi (\delta )),$$
      (13.4.34)

      where \(\psi \) is the inverse function of \(t\phi (t)\).

    2. (b)

      If \(\vert Ef(X) - f({X}^{{_\ast}})\vert \leq \epsilon _{1}\), \(\vert Eg(X) - g({X}^{{_\ast}})\vert \leq \epsilon _{2}\) , where

      $$\begin{array}{rcl} f(x)& =& \lambda + \mu x + \zeta {x}^{2},\quad x,\lambda,\mu,\zeta \in \mathbb{R}, \\ g(x)& =& a + bx + c{x}^{2},\quad x,a,b,c \in \mathbb{R}, \\ \end{array}$$

      \(c\neq 0\), \(\mu \neq \zeta b/c\) , then for any \(\epsilon _{1} > 0\) and \(\epsilon _{2} > 0\)

      $$\boldsymbol \pi (X,{X}^{{_\ast}}) \leq {(\widetilde{\epsilon }_{ 2} + 2\vert {X}^{{_\ast}}\vert \widetilde{\epsilon }_{ 1})}^{1/3},$$

      where \(\widetilde{\epsilon }_{1}\) and \(\widetilde{\epsilon }_{2}\) are linear combinations of \(\epsilon _{1}\) and \(\epsilon _{2}\) defined as in Lemma 13.4.5.

    3. (c)

      If \(X \in [0,2\pi ]\) a.e. and \(\vert Ef(X) - f({X}^{{_\ast}})\vert \leq \epsilon \), \(\vert Eg(X) - g({X}^{{_\ast}})\vert \leq \epsilon \), where \(f(x) = \lambda + \mu \cos x + \zeta \sin x\) and \(g(x) = a + b\cos x + c\sin x\) for \(x \in [0,2\pi ]\), a, b, c, \(\lambda \), \(\mu \), \(\zeta \in \mathbb{R}\), \(c\neq 0\), \(\mu \neq \zeta b/c\), then

      $$\mathbf{K}(X,{X}^{{_\ast}}) \leq {[2\epsilon \tau \eta (\cos 1 +\sin 1)(\vert \cos {X}^{{_\ast}}\vert + \vert \sin {X}^{{_\ast}}\vert )]}^{1/3},$$

      where the constants \(\tau \) and \(\eta \) are defined as in Lemma 13.4.8.

Open problem 13.4.1.

First, one can easily combine the results of this section with those of Kalashnikov and Rachev [1988, Chap. 5] to obtain estimates between the outputs of general multichannel and multistage models and approximating queueing models of types \(G\vert D\vert 1\vert \infty \) and \(D\vert G\vert 1\vert \infty \). However, it is much more interesting and difficult to obtain sharp estimates for \(\boldsymbol \pi (\mathbf{e},{\mathbf{e}}^{{_\ast}})\), assuming that \(\mathbf{e}\) and \({\mathbf{e}}^{{_\ast}}\) are random sequences satisfying

$$\vert E(e_{j} - e_{j}^{{_\ast}})\vert \leq \epsilon _{ 1j},\qquad \vert Ef_{j}(\vert e_{j}\vert ) - Ef_{j}(\vert e_{j}^{{_\ast}}\vert )\vert \leq \epsilon _{ 2j}.$$

Here, even the case \(f_{j}(x) = {x}^{2}\) is open (Chap. 9).

Open problem 13.4.2.

It is interesting to obtain estimates for \(\mathcal{l}_{p}(\mathbf{w},{\mathbf{w}}^{{_\ast}})\) (\(0 < p \leq \infty \)), where \(\mathcal{l}_{p} =\widehat{ \mathcal{L}}_{p}\) (Sects. 13.2 and 13.3).