1 Introduction

We are concerned with the investigation of a new block-structured discrete-time retrial queue. The motivation to study discrete-time queueing models is that they provide a more appropriate framework than their continuous-time counterparts for modelling many communication and computer systems where the basic units are digital. Systems where time is slotted include asynchronous transfer mode (ATM), ALOHA protocol, time-division multiple access (TDMA) and others. Interested readers are referred to the selected references (Bruneel and Kim 1993; Chaudhry 2000; Rom and Sidi 1990; Takagi 1993) for further motivation and a comprehensive review.

There exists an abundant bibliography on the use of block-structured Markov chains and matrix-analytic methods concerning the stochastic modelling of discrete-time queues with Markovian arrival processes. It is well-known that the real flow in many communication networks is of a bursty and correlated nature. As a result, the class of renewal processes is not flexible enough. To overcome this drawback, many authors assume the discrete-time Markovian arrival process (D-MAP) pioneered by Blondia and Casals (1992); see Lee et al. (2005), Lenin (2006), Van Velthoven et al. (2005), Zhao et al. (2006) and the references therein. The D-MAP is a versatile arrival process with correlated arrivals which includes, as particular cases, the Bernoulli arrival process (i.e., the inter-arrival times follow a geometric law), the Markov modulated Bernoulli process and others. The key point is the consideration of phases and two finite matrices D 0 and D 1, which contain respectively the phase transition probabilities of the underlying discrete-time Markov chain when either an arrival occurs (D 1) or not (D 0). It should be pointed out that the D-MAP (or its generalization with batch arrivals) gives homogeneous representations of the arrival flows. But, in many models, the arrivals may depend on the system state. This happens, for instance, in models with finite population and in retrial queues where each customer generates its own flow of repeated orders. The need for non-homogeneous block arrival patterns has motivated us to introduce in this paper a discrete block state-dependent arrival (D-BSDA) distribution.

On the other hand, many queueing situations have the feature that customers who find all the servers busy upon arrival are obliged to abandon the service area, but they repeat their request after some random time. As a consequence, repeated attempts for service from the pool of unsatisfied customers, called orbit, are superimposed on the ordinary stream of primary arrivals. It should be pointed out that in the majority of applications of retrial queues only the service zone is observable, whereas the orbit is a kind of invisible queue. For example, this is the case of the retrial queues arising in cellular mobile networks and in telephone call centers. A good account of the analytical results on retrial queues can be found in the monograph by Falin and Templeton (1997), whereas the recent book by Artalejo and Gomez-Corral (2008) focuses on approximate techniques and algorithmic methods. The seminal work by Yang and Li (1995) plays a central role in the literature on discrete-time retrial queues. Most of the existing papers (see e.g. Artalejo et al. 2005; Atencia and Moreno 2005; Wang and Zhao 2007 and their references) investigate single server models of Geo/G/1-type and mainly follow the methodology introduced in Yang and Li (1995). As an exception, we mention a few recent papers dealing with the multiserver case (Artalejo and Lopez-Herrero 2007b; Artalejo et al. 2008) and the use of matrix-analytic methods (Alfa 2006).

The above considerations, on both Markovian arrival processes and the discrete-time retrial literature, are our sources of motivation for this study. More concretely, we deal with a block-structured discrete-time retrial queue with finite population, where the primary arrivals and the retrials follow our D-BSDA description and the service times are of discrete phase-type.

In a nutshell, the main contributions of this paper can be summarized in the following two goals:

  (i)

    The introduction of the D-BSDA distribution, which allows us to deal with a flow of non-homogeneous block-structured Markovian arrivals. This distribution provides flexibility to model non-homogeneous patterns in which certain parameters vary randomly over time (see Section 7), as well as to regulate the arrival speed or the admission of incoming customers in controlled stochastic models where control is exerted on the basis of the available information about the current system state (see Appendix A). In sum, the flexibility to design complex arrival flows is important not only in queueing and communication systems, but also in other fields, such as epidemiology and stochastic models of population growth, where the underlying models typically assume Markovian non-homogeneous patterns (Allen 2003).

  (ii)

    A comparison with the current state of the art suggests that the queueing model investigated here is of interest in its own right. In Artalejo and Lopez-Herrero (2007b), a simulation study of a discrete-time queue with finite population is presented. For the study of the waiting time (see Section 6) in discrete-time retrial queues with geometric distributions, one may refer to the papers (Artalejo and Lopez-Herrero 2007b; Artalejo et al. 2008). The investigation of retrial queues evolving in a random environment (see the example in Section 7) was the subject matter of the papers (Artalejo and Lopez-Herrero 2009; Kim et al. 2007; Roszik et al. 2007). Our study goes beyond this scarce literature. More specifically, to highlight the improvement obtained through the use of the D-BSDA distribution, we stress that both the arrivals generated by each source of a finite population and the repeated attempts made by each customer in the orbit lead to non-homogeneous flows. Thus, if we wish to deal with an arrival/retrial process allowing dependence on a certain number of phases, we necessarily need to enhance the homogeneous description provided by the D-MAP. The D-BSDA distribution allows us to reach this goal.

Some directions for future research include the identification of a number of versatile arrival patterns obtained as particular cases of the D-BSDA distribution. The point is to identify particularizations for which the kernel of the D-BSDA distribution remains sufficiently tractable, yet versatile enough, for computational purposes. A procedure for fitting D-BSDA distributions to a dataset is a related problem of undoubted practical relevance.

The organization of the rest of the paper is as follows. In Section 2, the D-BSDA distribution is introduced and related to the D-MAP. The mathematical model and the construction of the underlying block-structured discrete-time Markov chain of M/G/1-type are presented in Sections 3 and 4, respectively. In Section 5, we show how the RG-factorizations provide an elegant approach for the computation of the stationary probability vector. Section 6 deals with the study of the waiting time that a tagged customer spends in orbit. The analysis leads to tractable algorithmic solutions. In Section 7, we give numerical examples that illustrate the system performance. Finally, some particularizations of the D-BSDA distribution and a simple example with small population size and limited phases are given in Appendices A and B, respectively.

2 The D-BSDA distribution

Our interest here is in introducing a new discrete block state-dependent arrival (D-BSDA) distribution, which will be used in Section 3 to govern the joint arrival of primary and retrial customers in our non-homogeneous queueing model. In what follows, we define the D-BSDA distribution in a more general multidimensional framework to demonstrate how one may extend the Bernoulli arrival process and the D-MAP to more general arrival patterns and deal with level dependent discrete-time Markov chains.

To define a D-BSDA distribution we start from a multidimensional irreducible discrete-time Markov chain \((\overrightarrow{X},\overrightarrow{Y })=\{(X_{t}^{1},...,X_{t}^{k},Y_{t}^{1},...,Y_{t}^{l}); t=0,1,...\}\). The sub-vector \((X_{t}^{1},...,X_{t}^{k})\) provides a k-dimensional description of the fundamental aspects of the system state at time t + (e.g., number of units in the system), whereas the sub-vector \((Y_{t}^{1},...,Y_{t}^{l})\) is an l-dimensional phase vector which completes the Markovian description of the system state. The meaning of phase is flexible and refers to phases arising as elements of certain arrival processes (e.g., the D-MAP (Blondia and Casals 1992)) and probability distributions (e.g., PH distributions (Latouche and Ramaswami 1999; Neuts 1981)), as well as to phases associated with environmental changes, etc. Let \(S_{\overrightarrow{X},\overrightarrow{Y}}\) be a discrete set of dimension k + l denoting the state space of the Markov chain \((\overrightarrow{X},\overrightarrow{Y})\).

Given the initial state \((\overrightarrow{x},\overrightarrow{y})\) at t −, we define a p-dimensional random vector \(\overrightarrow{N}_{t}|_{( \overrightarrow{x},\overrightarrow{y})}=\left. (N_{t}^{1},...,N_{t}^{p})\right\vert _{(\overrightarrow{x},\overrightarrow{y})}\) counting the number of events occurring around the slot boundary; that is, within (t − ,t + ).

In the above framework, a D-BSDA distribution is defined by a sequence \( \left\{ \mathbf{D}_{\overrightarrow{x}}^{\overrightarrow{n}};( \overrightarrow{x},\overrightarrow{n})\in S_{\overrightarrow{X}, \overrightarrow{N}}\right\} \), where \(\mathbf{D}_{\overrightarrow{x}}^{ \overrightarrow{n}}\) is a matrix with elements

$$ \mathbf{D}_{\overrightarrow{x}}^{\overrightarrow{n}}(\overrightarrow{y}; \overrightarrow{y}^{\prime })=P\left\{ \left. \overrightarrow{N}_{t}= \overrightarrow{n},\text{ }\overrightarrow{Y}_{t}=\overrightarrow{y}^{\prime }\right\vert \text{ }(\overrightarrow{X}_{t-1},\overrightarrow{Y}_{t-1})=( \overrightarrow{x},\overrightarrow{y})\right\} . $$
(2.1)

In other words, for any given \((\overrightarrow{x},\overrightarrow{ y})\), each element of matrix \(\mathbf{D}_{\overrightarrow{x}}^{ \overrightarrow{n}}\) gives the probability of having \(\overrightarrow{n} =(n_{1},...,n_{p})\) events in (t − ,t + ) and of the phase vector evolving from the initial value \(\overrightarrow{y}\) to the final one \(\overrightarrow{y}^{\prime}\). We notice that the dimension of \(\mathbf{D}_{\overrightarrow{x} }^{\overrightarrow{n}}\) is \(\mathop{\textstyle \prod }\nolimits_{i=1}^{l}\left\vert S_{Y_{i}}\right\vert \), where \(\left\vert S_{Y_{i}}\right\vert\) is the cardinality of the state space \(S_{Y_{i}}\) corresponding to the ith component of the vector \(\overrightarrow{Y}\).

We observe that the kernel (D 0,D 1) of the D-MAP can be obtained as a particular case of the D-BSDA distribution. To this end, we must take l = p = 1, \(S_{\left. N\right\vert _{( \overrightarrow{x},\overrightarrow{y})}}=\{0,1\}\) (that is, either 0 or 1 arrivals can occur), and assume the homogeneous case where the matrices \(\mathbf{D}_{ \overrightarrow{x}}^{0}\) and \(\mathbf{D}_{\overrightarrow{x}}^{1}\) are independent of \(\overrightarrow{x}\). Obviously, the case \(S_{\left. N\right\vert _{(\overrightarrow{x},\overrightarrow{y})}}=\mathbb{N}\) and \(\mathbf{D}_{\overrightarrow{x}}^{k}\equiv \mathbf{D}^{k}\), for k ≥ 0, leads to the D-BMAP, where the matrices D k, for k ≥ 1, govern transitions that correspond to a batch arrival of size k. The matrix D 0 still corresponds to the no-arrival case.
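
As a minimal numerical sketch of this particular case, the following fragment builds a two-phase D-MAP with hypothetical matrices D 0 and D 1, checks that D = D 0 + D 1 is stochastic, and computes the fundamental arrival rate λ = θD 1 e, where θ is the stationary vector of the underlying phase process:

```python
import numpy as np

# A hypothetical 2-phase D-MAP kernel: D0 holds phase transitions with no
# arrival, D1 those accompanied by one arrival; D0 + D1 must be stochastic.
D0 = np.array([[0.6, 0.1],
               [0.2, 0.3]])
D1 = np.array([[0.2, 0.1],
               [0.1, 0.4]])
D = D0 + D1                      # underlying phase-process transition matrix

# Stationary phase distribution: solve theta D = theta, theta e = 1.
eigvals, eigvecs = np.linalg.eig(D.T)
theta = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
theta = theta / theta.sum()

# Fundamental arrival rate of the D-MAP: lambda = theta D1 e.
lam = theta @ D1 @ np.ones(2)
print(np.allclose(D.sum(axis=1), 1.0))   # True: D is stochastic
print(round(lam, 4))                     # 0.38
```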

In the next section we will describe the D-BSDA distribution governing the flow of primary arrivals and retrials in our queueing model. For the description of other selected examples of state-dependent arrival processes obtained from the D-BSDA distribution, we refer the reader to Appendix A.

3 The model description

The description of the mathematical model is as follows. We assume that time is divided into intervals of unit length called slots. We suppose that departures occur in (t − ,t), while primary arrivals and retrials occur in (t,t + ). This policy, called the generalized early arrival scheme (G-EAS), is the most widely used queueing policy in the related literature (Alfa 2006; Artalejo et al. 2005; 2008; Artalejo and Lopez-Herrero 2007b; Artalejo and Gomez-Corral 2008; Atencia and Moreno 2005; Wang and Zhao 2007; Yang and Li 1995); see Fig. 1.

Fig. 1 Various epochs in G-EAS

We deal with a queueing model with finite population K, 2 ≤ K < ∞, where each customer can be classified into three categories:

  i)

    customer (if any) occupying the server,

  ii)

    customers in the orbit; that is, those customers who find the server busy upon arrival and must conduct retrials in order to finally get service,

  iii)

    free customers (or sources); that is, those customers who are not being served and not waiting in the orbit.

Any arriving customer who finds the server free begins immediately to be served. Otherwise, he joins the orbit and reapplies for service later. The arrivals of primary customers and the retrials follow a D-BSDA distribution. On the other hand, service is rendered by a single server which provides discrete PH service times. The concrete specifications of the D-BSDA and the PH distributions are given in the sequel.

The system state at time t + can be described by the Markovian process {(O t ,C t ,M t ,N t ); t = 0,1,...}, where O t denotes the number of customers in orbit, C t represents the phase of the service and M t and N t indicate, respectively, the arrival phase and the retrial phase in progress at time t +. Note that C t  = 0 means that the server is free, while C t  = j, for 1 ≤ j ≤ s, indicates that the server is busy and the service time is at phase j. Then, the system state is given by

$$ S=\{(i,j,m,n);\text{ }0\leq i\leq K-1,\text{ }0\leq j\leq s,\text{ }0\leq m\leq M,\text{ }0\leq n\leq N\}. $$
(3.1)

The service times follow a discrete PH distribution with representation \((\boldsymbol{\alpha}, \mathbf{T})\), where \(\boldsymbol{\alpha}\) is a row vector of dimension s providing the initial service phase and T is a square matrix of size s with entries \(T_{jj^{\prime}}\) giving the probability that the service time does not finish and the phase changes from j to j′ in the next slot. We implicitly assume that there is no mass at j = 0, so the initial probability α 0 = 0. We also denote by \(\mathbf{t}^{0}=(\mathbf{I}_{s}-\mathbf{T})\mathbf{e}_{s}\) the column vector whose jth element is the probability that the service time ends in the current slot given that its phase is j. We note that I s is the identity matrix of order s and e s is a column vector of order s of 1s.
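
The following sketch, based on a hypothetical representation \((\boldsymbol{\alpha}, \mathbf{T})\) of order s = 2, illustrates the basic discrete PH computations used here: the exit vector t 0 = (I s  − T)e s , the mass function P{S = k} = αT k − 1 t 0, for k ≥ 1, and the mean E[S] = α(I s  − T) − 1 e s :

```python
import numpy as np

# Hypothetical discrete PH representation (alpha, T) with s = 2 phases.
alpha = np.array([0.7, 0.3])              # initial phase probabilities
T = np.array([[0.4, 0.3],                 # within-service phase transitions
              [0.1, 0.5]])
t0 = (np.eye(2) - T) @ np.ones(2)         # service-completion vector

# P{S = k} = alpha T^(k-1) t0, k >= 1; the mass function sums to 1.
def ph_pmf(k):
    return float(alpha @ np.linalg.matrix_power(T, k - 1) @ t0)

total = sum(ph_pmf(k) for k in range(1, 200))
mean = float(alpha @ np.linalg.inv(np.eye(2) - T) @ np.ones(2))  # E[S]
print(round(total, 6), round(mean, 4))
```

Since the spectral radius of T is strictly less than one, the truncated sum is numerically indistinguishable from 1.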

The joint distribution of the primary arrivals generated by the free customers and the retrials generated by the customers in orbit is governed by a D-BSDA description with k = l = p = 2, \(\overrightarrow{x} =(i,j)\), \(\overrightarrow{y}=(m,n)\) and \(\overrightarrow{n}=(a,r)\), where a and r are, respectively, the number of primary arrivals and retrials at the slot under consideration. Then, \(S_{\overrightarrow{X},\overrightarrow{Y}}=S\) and \(\overrightarrow{N}|_{(i,j,m,n)}\) takes values in the set

$$ S_{\overrightarrow{N}|_{(i,j,m,n)}}=\{(a,r);\text{ }0\leq a\leq K-i- \widetilde{j},\text{ }0\leq r\leq i\}, $$

where \(\widetilde{j}=1-\delta _{0j}\) (δ ab is Kronecker’s function). Note that \(S_{\overrightarrow{N}|_{(i,j,m,n)}}\) depends only on the fundamental system state \(\overrightarrow{x}=(i,j)\).

Then, the kernel of the D-BSDA distribution consists of the matrices \(\mathbf{D}_{ij}^{a,r}\), for 0 ≤ i ≤ K − 1, 0 ≤ j ≤ s, \(0\leq a\leq K-i-\widetilde{j}\), 0 ≤ r ≤ i, with elements \(\mathbf{D}_{ij}^{a,r}(m,n;m^{\prime},n^{\prime})\) denoting the probability of having a primary arrivals and r retrials in (t,t + ), while the arrival and retrial phases become m′ and n′, respectively, given that the initial system state is (i,j,m,n).
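
For illustration, consider the simplest specification with no extra arrival/retrial phases (M = N = 0): each free source generates a primary arrival with probability p and each orbit customer retries with probability q, all independently, so each \(\mathbf{D}_{ij}^{a,r}\) reduces to a scalar given by a product of two binomial probabilities. The sketch below (hypothetical parameters K = 4, p = 0.3, q = 0.5) checks that the kernel sums to one over the state-dependent support \(S_{\overrightarrow{N}|_{(i,j,m,n)}}\):

```python
from math import comb

K, p, q = 4, 0.3, 0.5   # population size, arrival prob per free source,
                        # retrial prob per orbit customer (all hypothetical)

def binom_pmf(n, k, prob):
    return comb(n, k) * prob**k * (1 - prob)**(n - k)

def d_kernel(i, j, a, r):
    """Scalar D_{ij}^{a,r} when no arrival/retrial phases are tracked:
    a arrivals from the K - i - (j > 0) free sources and r retrials from
    the i orbit customers, all acting independently within one slot."""
    free = K - i - (1 if j > 0 else 0)
    return binom_pmf(free, a, p) * binom_pmf(i, r, q)

# The kernel is a probability distribution over the state-dependent support.
i, j = 2, 1
total = sum(d_kernel(i, j, a, r)
            for a in range(K - i - 1 + 1)
            for r in range(i + 1))
print(round(total, 10))   # 1.0
```

Note that this choice depends on j only through \(\widetilde{j}\), which is precisely the special case exploited in Section 4 to obtain compact Kronecker forms.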

4 Construction of the one-step probability matrix

In this section, we construct the one-step transition probability matrix of the Markov chain {(O t ,C t ,M t ,N t ); t = 0,1,...}. First of all, we notice that the state space S can be partitioned according to the orbit levels l(i) and the sub-levels l(i,j) defined by

$$ \begin{array}{rcl} l(i)&=&\{(i,j,m,n);\text{ }0\leq j\leq s,0\leq m\leq M,0\leq n\leq N\},\text{ } 0\leq i\leq K-1, \\ l(i,j)&=&\{(i,j,m,n);\text{ }0\leq m\leq M,0\leq n\leq N\},\text{ }0\leq i\leq K-1,\text{ }0\leq j\leq s. \end{array} $$

Then, the matrix P has the following finite level dependent M/G/1-type structure:

$$ \mathbf{P=}\left( \begin{array}{ccccc} \mathbf{P}_{00}\,\,\,\, & \mathbf{P}_{01}\,\,\,\, & \cdots\,\,\,\, & \mathbf{P}_{0,K-2}\,\,\,\, & \mathbf{P} _{0,K-1} \\ \mathbf{P}_{10}\,\,\,\, & \mathbf{P}_{11}\,\,\,\, & \cdots\,\,\,\, & \mathbf{P}_{1,K-2}\,\,\,\, & \mathbf{P} _{1,K-1} \\ \mathbf{0}_{g\times g}\,\,\,\, & \mathbf{P}_{21}\,\,\,\, & \cdots\,\,\,\, & \mathbf{P}_{2,K-2}\,\,\,\, & \mathbf{P}_{2,K-1} \\ & & \ddots\,\,\,\, & \vdots\,\,\,\, & \vdots \\ \mathbf{0}_{g\times g}\,\,\,\, & \mathbf{0}_{g\times g}\,\,\,\, & \cdots\,\,\,\, & \mathbf{P} _{K-1,K-2}\,\,\,\, & \mathbf{P}_{K-1,K-1} \end{array} \right) , $$
(4.1)

where the blocks \(\mathbf{P}_{ii^{\prime}}\) are of dimension g = (s + 1)(M + 1)(N + 1).

The blocks P i,i − 1, for 1 ≤ i ≤ K − 1, represent the case in which the orbit size decreases one unit. In terms of the sub-levels l(i,j), the block \(\mathbf{P}_{i,i-1}=\left(\mathbf{P}_{(i,j)(i-1,j^{ \prime})}\right)\) can be partitioned as follows:

$$ \begin{array}{rcl} \displaystyle \mathbf{P}_{(i,0)(i-1,0)}&=&\mathbf{0}_{h\times h}, \\ \displaystyle \mathbf{P}_{(i,0)(i-1,j^{\,\prime})}&=&\left( \sum\limits_{r=1}^{i}\mathbf{D} _{i0}^{0,r}\right) \alpha _{j^{\,\prime}},\text{ }1\leq j^{\,\prime}\leq s, \end{array} $$
(4.2)
$$ \begin{array}{rcl} \displaystyle \mathbf{P}_{(i,j)(i-1,0)}&=&\mathbf{0}_{h\times h},\text{ }1\leq j\leq s, \\ \displaystyle \mathbf{P}_{(i,j)(i-1,j^{\,\prime})}&=&t_{j}^{0}\left( \sum\limits_{r=1}^{i} \mathbf{D}_{ij}^{0,r}\right) \alpha _{j^{\,\prime}},\text{ }1\leq j,j^{\,\prime}\leq s, \end{array} $$
(4.3)

where \(\mathbf{t}^{0}=\left( t_{j}^{0}\right)\), \(\boldsymbol{\alpha} =\left( \alpha _{j}\right)\), and 0 h×h is the null square matrix of dimension h, with h = (M + 1)(N + 1). Eqs. 4.2 and 4.3 correspond, respectively, to the cases (a = 0,r ≥ 1) and (d = 1,a = 0,r ≥ 1). Here, d is 0 or 1 according to the number of departures in (t − ,t).

The blocks P ii , for 0 ≤ i ≤ K − 1, have the following structure:

$$\displaystyle \mathbf{P}_{(i,0)(i,0)}=\mathbf{D}_{i0}^{0,0}, $$
(4.4)
$$\displaystyle \mathbf{P}_{(i,0)(i,j^{\,\prime})}=\left( \sum\limits_{r=0}^{i}\mathbf{D} _{i0}^{1,r}\right) \alpha _{j^{\,\prime}},\text{ }1\leq j^{\,\prime}\leq s, $$
(4.5)
$$\displaystyle \mathbf{P}_{(i,j)(i,0)}=t_{j}^{0}\mathbf{D}_{ij}^{0,0},\text{ }1\leq j\leq s, $$
(4.6)
$$\displaystyle \mathbf{P}_{(i,j)(i,j^{\,\prime})}=(1-\delta _{i,K-1})t_{j}^{0}\left( \sum\limits_{r=0}^{i}\mathbf{D}_{ij}^{1,r}\right) \alpha _{j^{\,\prime}} +\,T_{jj^{\,\prime}}\left( \sum\limits_{r=0}^{i}\mathbf{D}_{ij}^{0,r}\right) , \text{ }1\leq j,j^{\,\prime}\leq s. $$
(4.7)

The above expressions (4.4)–(4.7) are associated, respectively, with the occurrence of the following events in (t − ,t + ): (a = r = 0), (a = 1,r ≥ 0), (d = 1,a = r = 0) and (d = a = 1,r ≥ 0) or (d = a = 0,r ≥ 0).

Finally, the blocks \(\mathbf{P}_{ii^{\prime}}\), for 0 ≤ i ≤ K − 2 and i + 1 ≤ i′ ≤ K − 1, describe the transitions to higher orbit levels. The partition in terms of the sub-levels l(i,j) is given by

$$ \begin{array}{rcl} \displaystyle \mathbf{P}_{(i,0)(i^{\prime },0)}&=&\mathbf{0}_{h\times h}, \\ \displaystyle \mathbf{P}_{(i,0)(i^{\prime },j^{\,\prime})}&=&\left( \sum\limits_{r=0}^{i} \mathbf{D}_{i0}^{i^{\prime }-i+1,r}\right) \alpha _{j^{\,\prime}},\text{ } 1\leq j^{\,\prime}\leq s, \end{array} $$
(4.8)
$$ \begin{array}{rcl} \displaystyle \mathbf{P}_{(i,j)(i^{\prime },0)}&=&\mathbf{0}_{h\times h},\text{ }1\leq j\leq s, \\ \displaystyle \mathbf{P}_{(i,j)(i^{\prime },j^{\,\prime})}&=&(1-\delta _{i^{\prime },K-1})t_{j}^{0}\left( \sum\limits_{r=0}^{i}\mathbf{D}_{ij}^{i^{\prime }-i+1,r}\right) \alpha _{j^{\,\prime}} \\ \displaystyle &&+\,T_{jj^{\,\prime}}\left( \sum\limits_{r=0}^{i}\mathbf{D}_{ij}^{i^{\prime }-i,r}\right) ,\text{ }1\leq j,j^{\,\prime}\leq s, \end{array} $$
(4.9)

where the above formulas (4.8) and (4.9) reflect, respectively, the cases (a = i′ − i + 1,r ≥ 0) and (d = 1,a = i′ − i + 1,r ≥ 0) or (d = 0,a = i′ − i,r ≥ 0).

We next give an alternative expression for the special case where the blocks \(\mathbf{D}_{ij}^{a,r}\) depend only on \(\widetilde{j}\); that is, the probability in Eq. 2.1 does not depend on the service phase. Under this reasonable assumption, the blocks of P can be rewritten in a more compact form. For instance, Eqs. 4.2 and 4.3 can be written as

$$ \begin{array}{rcl} \mathbf{P}_{(i,0)(i-1,b)}&=&\boldsymbol{\alpha }\otimes \left( \sum\limits_{r=1}^{i}\mathbf{D}_{i0}^{0,r}\right) ,\text{ } \\ \mathbf{P}_{(i,b)(i-1,b)}&=&\left( \mathbf{t}^{0}\boldsymbol{\alpha }\right) \otimes \left( \sum\limits_{r=1}^{i}\mathbf{D}_{ib}^{0,r}\right) , \end{array} $$

where ⊗ denotes the Kronecker product and \(l(i,b)=\mathop{\textstyle \bigcup }\nolimits_{j=1}^{s}l(i,j)\), for 0 ≤ i ≤ K − 1. Of course, the rest of the expressions can also be reformulated in terms of Kronecker product notation.
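
The compact Kronecker forms can be checked mechanically. The sketch below assembles the two blocks above from hypothetical summed kernel matrices \(\sum\nolimits_{r\geq 1}\mathbf{D}_{i0}^{0,r}\) and \(\sum\nolimits_{r\geq 1}\mathbf{D}_{ib}^{0,r}\) of size h × h and confirms their dimensions (h → sh and sh → sh, respectively):

```python
import numpy as np

s, h = 2, 3   # service phases; h = (M+1)(N+1) arrival/retrial phase pairs

alpha = np.array([[0.6, 0.4]])           # initial service-phase row vector
rng = np.random.default_rng(7)

# Hypothetical summed kernels sum_{r>=1} D_{i0}^{0,r} and
# sum_{r>=1} D_{ib}^{0,r}, each h x h (scaled to be substochastic).
D_i0 = rng.random((h, h)); D_i0 /= 2 * D_i0.sum(axis=1, keepdims=True)
D_ib = rng.random((h, h)); D_ib /= 2 * D_ib.sum(axis=1, keepdims=True)
t0 = np.array([[0.3], [0.4]])            # service-completion column vector

# Compact Kronecker forms of Eqs. 4.2 and 4.3:
P_free = np.kron(alpha, D_i0)            # P_{(i,0)(i-1,b)} = alpha (x) sum D
P_busy = np.kron(t0 @ alpha, D_ib)       # P_{(i,b)(i-1,b)} = (t0 alpha) (x) sum D

print(P_free.shape, P_busy.shape)        # (3, 6) (6, 6)
```

The ordering matters: placing the service-phase factor first means states are sorted by service phase and then by the (m,n) phase pair, which is the lexicographical ordering of the sub-levels l(i,j).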

In Appendix B we show the blocks corresponding to a simple queue with population size K = 2 and limited phases.

5 Stationary distribution of the system state

Since we deal with a finite and irreducible Markov chain, it must be positive recurrent. As a result, its stationary probability vector \(\boldsymbol{\pi}\), partitioned as \(\boldsymbol{\pi}=(\boldsymbol{\pi}_{0},...,\boldsymbol{\pi}_{K-1})\), always exists and satisfies

$$ \boldsymbol{\pi }=\boldsymbol{\pi {\rm P}},\text{ }\boldsymbol{\pi e}_{f}=1, $$

where f = K(s + 1)(M + 1)(N + 1) represents the cardinality of the state space S.

In what follows, we use the UL-type RG-factorization to compute the vector \(\boldsymbol{\pi}\). For 0 ≤ k ≤ K − 1 and 0 ≤ i,j ≤ k, it follows from the general censoring theory (Li and Cao 2004; Li and Zhao 2004; Li 2009) that

$$ \mathbf{P}_{ij}^{[\leq k]}=\mathbf{P}_{ij}+\sum\limits_{l=k+1}^{K-1}\mathbf{P }_{il}^{[\leq l]}\left( \mathbf{I}_{g}-\mathbf{P}_{ll}^{[\leq l]}\right) ^{-1}\mathbf{P}_{lj}^{[\leq l]}, $$

where \(\mathbf{P}_{ij}^{[\leq k]}\) denotes the blocks of the censored Markov chain with censoring set consisting of all states in and below the level l(k).

Note that

$$ \mathbf{P}_{ij}^{[\leq (K-1)]}=\mathbf{P}_{ij}\text{ and }\mathbf{P} _{ij}^{[\leq 0]}=\mathbf{P}_{ij}^{[0]}. $$

We define

$$ \begin{array}{rcl} \boldsymbol{\psi }_{k}&=&\mathbf{P}_{kk}^{[\leq k]},\text{ }0\leq k\leq K-1, \\ \mathbf{R}_{ij}&=&\mathbf{P}_{ij}^{[\leq j]}\left( \mathbf{I}_{g}-\boldsymbol{\psi }_{j}\right) ^{-1},\text{ }0\leq i<j\leq K-1, \\ \mathbf{G}_{ij}&=&\left( \mathbf{I}_{g}-\boldsymbol{\psi }_{i}\right) ^{-1}\mathbf{ P}_{ij}^{[\leq i]},\text{ }0\leq j<i\leq K-1. \end{array} $$

Lemma 5.1

For 0 ≤ j ≤ K − 1 and j + 2 ≤ i ≤ K − 1, we have

$$ \mathbf{G}_{ij}=\mathbf{0}_{g\times g}. $$

Proof

First of all, as a consequence of the M/G/1-type structure of matrix P in Eq. 4.1, we notice that for the (K − 1)th row and 0 ≤ j ≤ K − 3, we have

$$ \begin{array}{rcl} \mathbf{G}_{K-1,j}&=&\left( \mathbf{I}_{g}-\boldsymbol{\psi }_{K-1}\right) ^{-1} \mathbf{P}_{K-1,j}^{[\leq (K-1)]} \\ &=&\left( \mathbf{I}_{g}-\mathbf{P}_{K-1,K-1}\right) ^{-1}\mathbf{P}_{K-1,j}= \mathbf{0}_{g\times g}.\text{ } \end{array} $$

Then, for the (K − 2)th row and 0 ≤ j ≤ K − 4, we find that

$$ \mathbf{P}_{K-2,j}^{[\leq (K-2)]}=\mathbf{P}_{K-2,j}+\mathbf{P} _{K-2,K-1}\left( \mathbf{I}_{g}-\mathbf{P}_{K-1,K-1}\right) ^{-1}\mathbf{P} _{K-1,j}=\mathbf{0}_{g\times g}\text{.} $$

Thus, it is clear that

$$ \mathbf{G}_{K-2,j}=\left( \mathbf{I}_{g}-\boldsymbol{\psi }_{K-2}\right) ^{-1} \mathbf{P}_{K-2,j}^{[\leq (K-2)]}=\mathbf{0}_{g\times g},\text{ }0\leq j\leq K-4. $$

By induction, for the (K − 1 − k)th row where 2 ≤ k ≤ K − 3, we may obtain that

$$ \mathbf{P}_{K-1-k,j}^{[\leq (K-1-k)]}=\mathbf{0}_{g\times g}\text{, }0\leq j\leq K-k-3, $$

which leads to

$$ \mathbf{G}_{K-1-k,j}=\mathbf{0}_{g\times g},\text{ }2\leq k\leq K-3,\text{ } 0\leq j\leq K-k-3. $$

This completes the proof. □

With the help of Lemma 5.1, the UL-type RG-factorization of the Markov chain {(O t ,C t ,M t ,N t ); t = 0,1,...} takes the following form:

$$ \mathbf{I}_{f}-\mathbf{P}=\left( \mathbf{I}_{f}-\mathbf{R}_{U}\right) \left( \mathbf{I}_{f}-\boldsymbol{\psi }_{D}\right) \left( \mathbf{I}_{f}-\mathbf{G} _{L}\right) , $$
(5.1)

where

$$ \begin{array}{rcl} &\mathbf{R}_{U}=\left( \begin{array}{cccccc} \mathbf{0}_{g\times g}\,\,\,\, & \mathbf{R}_{01}\,\,\,\, & \mathbf{R}_{02}\,\,\,\, & \cdots\,\,\,\, &\mathbf{R}_{0,K-2}\,\,\,\, & \mathbf{R}_{0,K-1} \\ & \mathbf{0}_{g\times g}\,\,\,\, & \mathbf{R}_{12}\,\,\,\, & \cdots\,\,\,\, & \mathbf{R}_{1,K-2}\,\,\,\, &\mathbf{R}_{1,K-1} \\ & & \ddots\,\,\,\, & \ddots\,\,\,\, & \vdots\,\,\,\, & \vdots \\ & & & \mathbf{0}_{g\times g}\,\,\,\, & \mathbf{R}_{K-3,K-2}\,\,\,\, & \mathbf{R}_{K-3,K-1} \\ & & & & \mathbf{0}_{g\times g}\,\,\,\, & \mathbf{R}_{K-2,K-1} \\ & & & & & \mathbf{0}_{g\times g} \end{array} \right) , \\ &\boldsymbol{\psi }_{D}=diag\left( \boldsymbol{\psi }_{0},...,\boldsymbol{\psi } _{K-1}\right) , \end{array} $$
(5.2)

and

$$ \mathbf{G}_{L}=\left( \begin{array}{ccccc} \mathbf{0}_{g\times g} &\,\,\,\, &\,\,\,\, &\,\,\,\, & \\ \mathbf{G}_{10}\,\,\,\, & \mathbf{0}_{g\times g} &\,\,\,\, &\,\,\,\, & \\ \mathbf{0}_{g\times g}\,\,\,\, & \mathbf{G}_{21}\,\,\,\, & \mathbf{0}_{g\times g} &\,\,\,\, & \\ \vdots\,\,\,\, & \,\,\,\, & \ddots\,\,\,\, & \ddots & \\ \mathbf{0}_{g\times g}\,\,\,\, & \cdots\,\,\,\, & \mathbf{0}_{g\times g}\,\,\,\, & \mathbf{G} _{K-1,K-2}\,\,\,\, & \mathbf{0}_{g\times g} \end{array} \right) . $$
(5.3)

The following theorem gives a recursive scheme for computing \(\boldsymbol{\pi}\).

Theorem 5.2

The stationary probability vector \(\boldsymbol{\pi }=(\boldsymbol{\pi }_{0},...,\boldsymbol{\pi}_{K-1})\) of the finite level dependent Markov chain {(O t ,C t ,M t ,N t ); t = 0,1,...} of M/G/1-type can be recursively computed from the following expressions:

$$ \displaystyle \boldsymbol{\pi }_{0}=\tau \boldsymbol{\theta }, $$
(5.4)
$$\displaystyle \boldsymbol{\pi }_{j}=\sum\limits_{i=0}^{j-1}\boldsymbol{\pi }_{i}\mathbf{R}_{ij}, \text{ }1\leq j\leq K-1, $$
(5.5)

where \(\boldsymbol{\theta}\) is the stationary probability vector of the censored Markov chain to level l(0), with one-step transition probability matrix \(\mathbf{P}^{[0]}=\boldsymbol{\psi}_{0}\), and the scalar τ is uniquely determined by the normalization condition \(\sum\nolimits_{j=0}^{K-1}\boldsymbol{\pi}_{j}\mathbf{e}_{g}=1\).

Proof

Obviously, the stationary probability vector \(\boldsymbol{\pi}\) satisfies that \(\boldsymbol{\pi}\left( \mathbf{I}_{f}-\mathbf{P} \right) =\mathbf{0}_{f}\). Thus, by using Eq. 5.1 we have

$$ \boldsymbol{\pi }\left( \mathbf{I}_{f}-\mathbf{R}_{U}\right) \left( \mathbf{I} _{f}-\boldsymbol{\psi }_{D}\right) \left( \mathbf{I}_{f}-\mathbf{G}_{L}\right) = \mathbf{0}_{f}. $$

By defining \(\mathbf{x}=\boldsymbol{\pi }\left( \mathbf{I}_{f}-\mathbf{R} _{U}\right)\), we obtain

$$ \mathbf{x}\left( \mathbf{I}_{f}-\boldsymbol{\psi }_{D}\right) \left( \mathbf{I} _{f}-\mathbf{G}_{L}\right) =\mathbf{0}_{f}. $$

Since the matrix I f  − G L is invertible (see Eq. 5.3), the above expression reduces to

$$ \mathbf{x}\left( \mathbf{I}_{f}-\boldsymbol{\psi }_{D}\right) =\mathbf{0}_{f} . $$

Then, we partition x as x = (x 0,...,x K − 1) and find that x j  = x j ψ j , for 0 ≤ j ≤ K − 1. Censoring theory shows that the positive recurrence of P implies the positive recurrence of ψ 0, while the matrices I g  − ψ j are invertible for 1 ≤ j ≤ K − 1. Thus, we obtain x j  = 0 g , for 1 ≤ j ≤ K − 1. Therefore, we have

$$ \boldsymbol{\pi }\left( \mathbf{I}_{f}-\mathbf{R}_{U}\right) =(\tau \boldsymbol{ \theta }\text{,}\mathbf{0}_{g},...,\mathbf{0}_{g}). $$

Taking into account the structural form of Eq. 5.2, we easily conclude that the stationary vector \(\boldsymbol{\pi}\) leads to formulas (5.4) and (5.5). □
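
Theorem 5.2 translates directly into a numerical procedure. The following sketch builds a small random stochastic matrix with the finite M/G/1-type shape of Eq. 4.1 (toy values K = 3, g = 2, not taken from the queueing model itself), computes the blocks ψ k and R ij by UL-type censoring of the highest level first, and recovers \(\boldsymbol{\pi}\) via Eqs. 5.4 and 5.5; the result is checked against the stationary equation:

```python
import numpy as np

rng = np.random.default_rng(3)
K, g = 3, 2                        # levels and block size (toy values)

# Random irreducible stochastic matrix with the finite M/G/1-type
# shape of Eq. 4.1: blocks below the sub-diagonal are zero.
P = rng.random((K * g, K * g))
P[2 * g:, :g] = 0.0                # P_{20} = 0
P /= P.sum(axis=1, keepdims=True)

blk = lambda A, i, j: A[i*g:(i+1)*g, j*g:(j+1)*g]

# UL-type censoring: eliminate the highest level first.
Pc = P.copy()
psi, R = {}, {}
for k in range(K - 1, -1, -1):
    psi[k] = blk(Pc, k, k).copy()                      # psi_k = P_kk^[<=k]
    for i in range(k):
        R[i, k] = blk(Pc, i, k) @ np.linalg.inv(np.eye(g) - psi[k])
    if k > 0:                                          # censor out level k
        Nk = np.linalg.inv(np.eye(g) - psi[k])
        for i in range(k):
            for j in range(k):
                Pc[i*g:(i+1)*g, j*g:(j+1)*g] += blk(Pc, i, k) @ Nk @ blk(Pc, k, j)

# Theorem 5.2: pi_0 = tau * theta, pi_j = sum_{i<j} pi_i R_{ij}.
w, v = np.linalg.eig(psi[0].T)
theta = np.real(v[:, np.argmin(np.abs(w - 1.0))]); theta /= theta.sum()
pi = [theta]
for j in range(1, K):
    pi.append(sum(pi[i] @ R[i, j] for i in range(j)))
pi = np.concatenate(pi); pi /= pi.sum()

print(np.allclose(pi @ P, pi))     # True: pi solves pi = pi P
```

The censoring loop updates only the blocks with indices below the current level, so after processing level k the matrix Pc holds the censored chain with censoring set l(0) ∪ … ∪ l(k − 1), exactly as required by the definitions of ψ k and R ij .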

6 Waiting time

In Section 3, we assumed the G-EAS policy to investigate the stationary distribution of the system state. To analyze the waiting time distribution, we must be more specific about which customer wins the competition for service at a given slot. If the server is free and the total number of primary arrivals and retrials is greater than one, then we assume that primary arrivals have priority over retrials to occupy the server. In addition, within a single group of customers (primary arrivals or retrials), we assume the random order policy among its members. For example, if we observe (d = 0,a = 3,r = 2) in a given slot, then one of the three primary arrivals will occupy the free server; each of the three customers has the same chance, that is, probability 1/3. In the case (d = a = 0,r = 4), each retrial customer occupies the server with probability 1/4.

Notice that the system state does not depend on the above service discipline, but the order in which customers are served influences the waiting time distribution. For the consideration of other service disciplines used in discrete-time retrial models, we refer to the paper (Artalejo and Lopez-Herrero 2007b).

Let us assume that, at time t +, the system state is (i,j,m,n), for any i ≥ 1. Then, we denote by W ijmn the conditional waiting time for a tagged customer in orbit. This definition excludes the service time. Let w(k) be the column vector of dimension v = (K − 1)g comprising the probabilities \(P\left\{ W_{ijmn}=k\right\}\), for k ≥ 1, in lexicographical order.

The following result provides a simple scheme for the recursive computation of the vectors w(k), for k ≥ 1.

Theorem 6.1

The probability mass function {w(k); k ≥ 1} verifies that

$$ \mathbf{w}(1)=\mathbf{Ue}_{v}, $$
(6.1)
$$\mathbf{w}(k)=\mathbf{Ww}(k-1),\text{ \textit{for }}k\geq 2, $$
(6.2)

where U is the diagonal matrix of dimension v given by

$$ \mathbf{U=}\text{ }diag\left( \widetilde{\mathbf{P}}_{10},...,\widetilde{ \mathbf{P}}_{K-1,K-2}\right), $$

with

$$ \widetilde{\mathbf{P}}_{i,i-1}=\frac{1}{i}\mathbf{P}_{i,i-1},\mathit{\ } 1\leq i\leq K-1, $$

and W is the following square matrix of dimension v:

$$ \mathbf{W=}\left( \begin{array}{ccccc} \mathbf{P}_{11}\,\,\,\, & \mathbf{P}_{12}\,\,\,\, & \cdots\,\,\,\, & \mathbf{P}_{1,K-2}\,\,\,\, & \mathbf{P} _{1,K-1} \\ \widehat{\mathbf{P}}_{21}\,\,\,\, & \mathbf{P}_{22}\,\,\,\, & \cdots\,\,\,\, & \mathbf{P}_{2,K-2}\,\,\,\, &\mathbf{P}_{2,K-1} \\ \mathbf{0}_{g\times g}\,\,\,\, & \widehat{\mathbf{P}}_{31}\,\,\,\, & \cdots\,\,\,\, & \mathbf{P} _{3,K-2}\,\,\,\, & \mathbf{P}_{3,K-1} \\ & & \ddots\,\,\,\, & \vdots\,\,\,\, & \vdots \\ \mathbf{0}_{g\times g}\,\,\,\, & \mathbf{0}_{g\times g}\,\,\,\, & \cdots\,\,\,\, & \widehat{\mathbf{P }}_{K-1,K-2}\,\,\,\, &\mathbf{P}_{K-1,K-1} \end{array} \right) , $$

where the blocks \(\widehat{\mathbf{P}}_{i,i-1}\) are defined by

$$ \widehat{\mathbf{P}}_{i,i-1}=\frac{i-1}{i}\mathbf{P}_{i,i-1},\text{ }2\leq i\leq K-1. $$

Proof

We observe that the event \(\left( W_{ijmn}=1\right)\) takes place if a retrial customer gets service, which occurs with probability \(P_{(i,j,m,n)(i-1,j^{\,\prime},m^{\prime },n^{\prime })}\), and the tagged customer is the selected one with probability 1/i. For 1 ≤ i ≤ K − 1, 0 ≤ j ≤ s, 0 ≤ m ≤ M, 0 ≤ n ≤ N, this yields

$$ P\left\{ W_{ijmn}=1\right\} =\sum\limits_{j^{\,\prime}=1}^{s}\sum\limits_{m^{\prime }=0}^{M}\sum\limits_{n^{\prime }=0}^{N}P_{(i,j,m,n)(i-1,j^{\,\prime},m^{\prime },n^{\prime })}\frac{1}{i}. $$
(6.3)

On the other hand, the next equation describes the motion of the system between two successive slots. The first term is associated with the case where a non-tagged retrial customer occupies the server, whereas the second term corresponds to transitions with \(i^{\prime }\geq i\), which imply that no customer can enter service from the orbit in that slot.

$$ \begin{array}{rcl} P\left\{ W_{ijmn}=k\right\} &=&\sum\limits_{j^{\,\prime}=1}^{s}\sum\limits_{m^{\prime }=0}^{M}\sum\limits_{n^{\prime }=0}^{N}P_{(i,j,m,n)(i-1,j^{\,\prime},m^{\prime },n^{\prime })}\frac{i-1}{i} P\left\{ W_{i-1,j^{\,\prime}m^{\prime }n^{\prime }}=k-1\right\} \\ &&+\sum\limits_{i^{\prime }=i}^{K-1}\sum\limits_{j^{\,\prime}=0}^{s}\sum\limits_{m^{\prime }=0}^{M}\sum\limits_{n^{\prime }=0}^{N}P_{(i,j,m,n)(i^{\prime },j^{\,\prime},m^{\prime },n^{\prime })}P\left\{ W_{i^{\prime }j^{\,\prime}m^{\prime }n^{\prime }}=k-1\right\} , \text{ }k\geq 2. \\ \end{array} $$
(6.4)

Equations 6.3 and 6.4 can be written in matrix form as claimed in Eqs. 6.1 and 6.2. □
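The recursion (6.1)–(6.2) is straightforward to implement once the level-transition blocks are available. The following sketch (Python/NumPy; the dictionary `P_blocks` holding the g × g blocks \(\mathbf{P}_{i,i^{\prime}}\) is our own hypothetical container, not notation from the paper) assembles U and W and iterates the recursion:

```python
import numpy as np

def waiting_time_pmf(P_blocks, K, g, k_max):
    """Compute w(1), ..., w(k_max) via Eqs. 6.1-6.2.

    P_blocks[(i, i2)] is the g x g block P_{i,i2}; keys (i, i-1) for
    1 <= i <= K-1 and (i, i2) for i <= i2 <= K-1 are required.
    """
    v = (K - 1) * g
    U = np.zeros((v, v))                     # U = diag(P_{i,i-1} / i)
    W = np.zeros((v, v))
    for i in range(1, K):
        rows = slice((i - 1) * g, i * g)
        U[rows, rows] = P_blocks[(i, i - 1)] / i
        for i2 in range(1, K):
            cols = slice((i2 - 1) * g, i2 * g)
            if i2 >= i:                      # upper blocks P_{i,i2}
                W[rows, cols] = P_blocks[(i, i2)]
            elif i2 == i - 1:                # subdiagonal (i-1)/i * P_{i,i-1}
                W[rows, cols] = (i - 1) / i * P_blocks[(i, i2)]
    w = [U @ np.ones(v)]                     # w(1) = U e_v
    for _ in range(2, k_max + 1):
        w.append(W @ w[-1])                  # w(k) = W w(k-1)
    return w
```

Each step costs one matrix-vector product of dimension v, so the whole pmf up to a truncation level is obtained in O(k_max · v²) operations.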

We next give recursive relations for the conditional factorial waiting time moments \(W_{ijmn}^{(k)}=E\left[ W_{ijmn}(W_{ijmn}-1)\cdots (W_{ijmn}-k+1) \right]\), for k ≥ 1. The notation \(\mathbf{w}^{(k)}\) stands for a column vector of dimension v containing the moments \(W_{ijmn}^{(k)}\), for 1 ≤ i ≤ K − 1, 0 ≤ j ≤ s, 0 ≤ m ≤ M, 0 ≤ n ≤ N, in lexicographical order.

Corollary 6.2

The factorial moments \(\mathbf{w}^{(k)}\), for k ≥ 0, satisfy

$$ \mathbf{w}^{(0)}=\mathbf{e}_{v}, $$
(6.5)
$$\left( \mathbf{I}_{v}-\mathbf{W}\right) \mathbf{w}^{(k)}=\delta _{1k}\mathbf{ Ue}_{v}+k\mathbf{Ww}^{(k-1)},\text{ }k\geq 1. $$
(6.6)

Proof

By multiplying Eq. 6.3 by z and Eq. 6.4 by \(z^{k}\), and summing over all k, we obtain the following expression for the probability generating function \(W_{ijmn}(z)=\mathop{\textstyle \sum }\nolimits_{k=1}^{\infty }P\left\{ W_{ijmn}=k\right\} z^{k}\), for 1 ≤ i ≤ K − 1, 0 ≤ j ≤ s, 0 ≤ m ≤ M, 0 ≤ n ≤ N,

$$ \begin{array}{rcl} W_{ijmn}(z)&=&\sum\limits_{j^{\,\prime}=1}^{s}\sum\limits_{m^{\prime }=0}^{M}\sum\limits_{n^{\prime }=0}^{N}P_{(i,j,m,n)(i-1,j^{\,\prime},m^{\prime },n^{\prime })}\left( \frac{z}{i}+\frac{(i-1)z}{i} W_{i-1,j^{\,\prime}m^{\prime }n^{\prime }}(z)\right) \\ &&+\sum\limits_{i^{\prime }=i}^{K-1}\sum\limits_{j^{\,\prime}=0}^{s}\sum\limits_{m^{\prime }=0}^{M}\sum\limits_{n^{\prime }=0}^{N}P_{(i,j,m,n)(i^{\prime },j^{\,\prime},m^{\prime },n^{\prime })}zW_{i^{\prime }j^{\,\prime}m^{\prime }n^{\prime }}(z). \end{array} $$

Then, by differentiating k times and evaluating at z = 1, we obtain that

$$ \begin{array}{rcl} W_{ijmn}^{(k)}&=&\sum\limits_{j^{\,\prime}=1}^{s}\sum\limits_{m^{\prime }=0}^{M}\sum\limits_{n^{\prime }=0}^{N}P_{(i,j,m,n)(i-1,j^{\,\prime},m^{\prime },n^{\prime })}\left( \frac{\delta _{1k}}{i}+\frac{i-1}{i}\left( W_{i-1,j^{\,\prime}m^{\prime }n^{\prime }}^{(k)} + kW_{i-1,j^{\,\prime}m^{\prime }n^{\prime }}^{(k-1)}\right) \right) \\ && +\sum\limits_{i^{\prime }=i}^{K-1}\sum\limits_{j^{\,\prime}=0}^{s}\sum\limits_{m^{\prime }=0}^{M}\sum\limits_{n^{\prime }=0}^{N}P_{(i,j,m,n)(i^{\prime },j^{\,\prime},m^{\prime },n^{\prime })}\left( W_{i^{\prime }j^{\,\prime}m^{\prime }n^{\prime }}^{(k)}+kW_{i^{\prime }j^{\,\prime}m^{\prime }n^{\prime }}^{(k-1)}\right) . \end{array} $$
(6.7)

Alternatively, in matrix form, formula (6.7) becomes expression (6.6). Formula (6.5) is trivial, so the result follows. □
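Equations 6.5–6.6 reduce the moment computation to solving one linear system per moment. A minimal sketch (reusing the matrices U and W of Theorem 6.1; the function name is ours):

```python
import numpy as np

def waiting_time_moments(U, W, k_max):
    """Factorial moment vectors w^(0), ..., w^(k_max) via Eqs. 6.5-6.6."""
    v = W.shape[0]
    I = np.eye(v)
    e = np.ones(v)
    moments = [e.copy()]                              # w^(0) = e_v
    for k in range(1, k_max + 1):
        rhs = k * (W @ moments[-1])
        if k == 1:
            rhs = rhs + U @ e                         # delta_{1k} U e_v term
        moments.append(np.linalg.solve(I - W, rhs))   # (I - W) w^(k) = rhs
    return moments
```

The coefficient matrix I − W is the same for every k, so in practice one would factorize it once (e.g. an LU decomposition) and reuse the factors.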

We next give explicit but lengthy multiple-sum expressions for the probability mass function of the unconditional waiting time W. For ease of notation, we only specify the range of a variable when it differs from the range indicated in Eq. 3.1.

Proposition 6.3

The unconditional mass probabilities P{W = k}, for k ≥ 0, are given by

$$ \begin{array}{rcl} P\{W=0\}&=&\sum\limits_{i,m,n}\pi _{i0mn}\sum\limits_{a=1}^{K-i}\sum\limits_{r,m^{\prime },n^{\prime }}{\mathbf D}_{i0}^{a,r}(m,n;m^{\prime },n^{\prime })\frac{1}{a} \\ &&+\sum\limits_{i=0}^{K-2}\sum\limits_{j=1}^{s}\sum\limits_{m,n}\pi _{ijmn}t_{j}^{0}\sum\limits_{a=1}^{K-i-1}\sum\limits_{r,m^{\prime },n^{\prime }}{\mathbf D}_{ij}^{a,r}(m,n;m^{\prime },n^{\prime })\frac{1}{a}, \end{array} $$
(6.8)
$$ \begin{array}{rcl} P\{W=k\}&=&\sum\limits_{i,m,n}\pi _{i0mn}\sum\limits_{a=1}^{K-i}\sum\limits_{r,j^{\,\prime},m^{\prime },n^{\prime }}{\mathbf D}_{i0}^{a,r}(m,n;m^{\prime },n^{\prime })\alpha _{j^{\,\prime}} \frac{a-1}{a}P\left\{ W_{i+a-1,j^{\,\prime}m^{\prime }n^{\prime }}=k\right\} \\ &&+\sum\limits_{i=0}^{K-2}\!\sum\limits_{j=1}^{s}\!\sum\limits_{m,n}\pi _{ijmn}\!\sum\limits_{j^{\,\prime}=1}^{s}T_{jj^{\,\prime}}\!\sum\limits_{a=1}^{K-i-1}\sum\limits_{r,m^{\prime },n^{\prime }}{\mathbf D}_{ij}^{a,r}(m,n;m^{\prime },n^{\prime })P\left\{ W_{i+a,j^{\,\prime}m^{\prime }n^{\prime }}=k\right\} \\ &&+\,(1-\delta _{K2})\sum\limits_{i=0}^{K-3}\sum\limits_{j=1}^{s}\sum\limits_{m,n}\pi _{ijmn}t_{j}^{0}\sum\limits_{a=2}^{K-i-1}\sum\limits_{r,j^{\,\prime},m^{\prime },n^{\prime }}{\mathbf D}_{ij}^{a,r}(m,n;m^{\prime },n^{\prime })\alpha _{j^{\,\prime}} \\ &&\times \frac{a-1}{a}P\left\{ W_{i+a-1,j^{\,\prime}m^{\prime }n^{\prime }}=k\right\} ,\text{ }k\geq 1. \end{array} $$
(6.9)

Proof

We notice that the first term on the right-hand side of Eq. 6.8 corresponds to the case where the server is idle at time t −. Then, at least one primary arrival occurs and the tagged customer wins the competition for service. In contrast, the second term is associated with the case where the server is busy at time t − but a departure occurs in (t − ,t). Among the primary arrivals taking place in (t,t + ), the tagged customer is selected to occupy the free server.

We now explain formula (6.9) for P{W = k}, for k ≥ 1. The first term on the right-hand side of Eq. 6.9 corresponds to the case where the server is idle at time t − and the tagged customer arrives but is not selected to get service. The second term shows the case where the server is busy at time t − and no departure occurs, so the arriving primary customers, including the tagged one, must join the orbit. Finally, the last term corresponds to the case where the server is busy at time t −, a departure occurs, but at least two primary customers arrive in (t,t + ) and the tagged customer loses the competition for service. □

For the sake of completeness, we finally discuss the computation of the number of retrials made by a customer. This descriptor can be viewed as a parallel performance measure of the waiting time. So far it has been studied in the context of the M/M/c and the M/G/1 retrial queues (Artalejo and Lopez-Herrero 2007a).

Given that, at time t +, the system state is (i,j,m,n), for i ≥ 1, we mark one of the customers in orbit. Let R ijmn be the number of retrials made by the tagged customer before entering service. The objective is to calculate the probabilities P{ R ijmn  = k}, for k ≥ 1, 1 ≤ i ≤ K − 1, 0 ≤ j ≤ s, 0 ≤ m ≤ M and 0 ≤ n ≤ N. To this end, we introduce the auxiliary probabilities \(P\{ R_{ijmn}^{l}=k\}\), for 0 ≤ l ≤ k − 1, defined as the probability that the tagged customer makes k retrials in total, given that he has already made l and the current system state is (i,j,m,n). Then, it is clear that \( P\{ R_{ijmn}=k\} =P\{ R_{ijmn}^{0}=k\}\).

For any fixed k ≥ 1, it is possible to obtain a finite system of equations for the probabilities \(P\{R_{ijmn}^{l}=k\}\) which can be solved recursively in descending order for l = k − 1,...,0. The first step is l = k − 1. Then, if the server is idle, we have

$$ \begin{array}{rcl}P\big\{ R_{i0mn}^{k-1}=k\big\} &=&\sum\limits_{m^{\prime },n^{\prime }}\mathbf{D}_{i0}^{0,0}(m,n;m^{\prime},n^{\prime })P\big\{ R_{i0m^{\prime}n^{\prime }}^{k-1}=k\big\} \\ &&+\sum\limits_{a=1}^{K-i}\sum\limits_{j^{\,\prime},m^{\prime },n^{\prime}}\mathbf{D}_{i0}^{a,0}(m,n;m^{\prime},n^{\prime })\alpha _{j^{\,\prime}}P\big\{R_{i+a-1,j^{\,\prime}m^{\prime }n^{\prime }}^{k-1}=k\big\} \\&&+\sum\limits_{r=1}^{i}\sum\limits_{j^{\,\prime},m^{\prime },n^{\prime }}\mathbf{D}_{i0}^{0,r}(m,n;m^{\prime},n^{\prime })\alpha _{j^{\,\prime}}\frac{1}{i} \\&&+\sum\limits_{r=1}^{i}\sum\limits_{a,j^{\,\prime},m^{\prime },n^{\prime}}\mathbf{D}_{i0}^{a,r}(m,n;m^{\prime },n^{\prime })\alpha _{j^{\,\prime}}\frac{i-r}{i} P\big\{R_{i+a-1,j^{\,\prime}m^{\prime }n^{\prime}}^{k-1}=k\big\}. \\ \end{array}$$
(6.10)

The first two terms in Eq. 6.10 correspond to the cases (a = 0,r = 0) and (a ≥ 1,r = 0), where we only need to update the arrival and retrial phases. In the latter case, a primary arrival gets service and the phase j is assigned to the starting service time. The third term reflects that (a = 0,r ≥ 1) and the tagged customer occupies the idle server. The last term is associated with the case (a ≥ 0,r ≥ 1) in which the tagged customer does not retry in this slot. Thus, some other customer occupies the server and the system state is updated. If (a = 0,r ≥ 1) and the tagged customer retries but loses the competition for service, then at least k + 1 retrials are needed in order to get service. Thus, this case does not contribute positive probability to the event \(\left( R_{i0mn}^{k-1}=k\right)\). We arrive at the same conclusion when (a ≥ 1,r ≥ 1) and the tagged customer retries.

Similar arguments lead to equations, first for the case where the server is busy (1 ≤ j ≤ s) and then for \(P\{R_{ijmn}^{l}=k\}\), for 0 ≤ l ≤ k − 2. We omit the resulting formulas here, but we mention that the formulas for \( P\left\{ R_{ijmn}^{l}=k\right\}\) are linked to those for index l + 1. As a result, the desired initial probabilities \(P\left\{ R_{ijmn}=k\right\}\) can be computed by employing a backward recursive scheme.
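Since the intermediate formulas are omitted, the backward scheme can only be sketched generically. Assume the equations for each level l take the linear form suggested by the structure of Eq. 6.10: a matrix A (hypothetical, ours) collects the transitions in which the tagged customer's retrial count stays at l, a matrix B those in which it moves to l + 1, and a vector c the absorbing contribution (the tagged customer enters service at his k-th retrial, which can only happen when l = k − 1). The recursion then reads:

```python
import numpy as np

def retrial_backward(A, B, c, k):
    """Generic backward recursion sketch for the retrial-number probabilities.

    Step  l = k-1:  x = A x + c            =>  solve (I - A) x = c
    Steps l < k-1:  x_l = A x_l + B x_{l+1}  =>  solve (I - A) x_l = B x_{l+1}
    Returns the vector of P{R^0 = k} over the states.
    """
    I = np.eye(len(c))
    x = np.linalg.solve(I - A, c)          # l = k - 1
    for _ in range(k - 1):                 # l = k-2, ..., 0
        x = np.linalg.solve(I - A, B @ x)
    return x
```

This is only a structural sketch under the stated linearity assumption; the actual matrices must be assembled from the model-specific equations such as Eq. 6.10.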

7 Numerical work

In this section, numerical examples are presented to illustrate the system performance. The experiments are done for the following specifications of the kernel of the D-BSDA distribution and the PH service time distribution.

  1. The kernel of the D-BSDA distribution

We assume the case where the blocks \(\mathbf{D}_{ij}^{a,r}=\left( \mathbf{D} _{ij}^{a,r}(m,n;m^{\prime },n^{\prime })\right)\) depend only on \(\widetilde{ j}\) (see Section 4). Then, for 0 ≤ i ≤ K − 1, \(0\leq a\leq K-i- \widetilde{j}\) and 0 ≤ r ≤ i, we have

$$ \begin{array}{rcl} \mathbf{D}_{i0}^{a,r}(m,n;m^{\prime },n^{\prime })&=&\binom{K-i}{a} p_{m}^{a}(1-p_{m})^{K-i-a} \\ &&\times \binom{i}{r}q_{n}^{r}(1-q_{n})^{i-r}\mathbf{P}_{Y}(m,n;m^{\prime },n^{\prime }), \end{array}$$
(7.1)
$$ \begin{array}{rcl} \mathbf{D}_{ib}^{a,r}(m,n;m^{\prime },n^{\prime })&=&\binom{K-i-1}{a} p_{m}^{a}(1-p_{m})^{K-i-1-a} \\ &&\times \binom{i}{r}q_{n}^{r}(1-q_{n})^{i-r}\mathbf{P}_{Y}(m,n;m^{\prime },n^{\prime }). \end{array}$$
(7.2)

We notice that a free customer generates a primary arrival in a given slot with probability p m depending on the arrival phase m. Analogously, a retrial customer reapplies for service, in the slot under consideration, with probability q n depending on the retrial phase n.
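Equations 7.1–7.2 can be transcribed directly; the sketch below (function name and dictionary-style container for P_Y are our own conventions) returns one kernel entry from the two binomial factors and the environment transition:

```python
from math import comb

def D_block(i, j, a, r, m, n, m2, n2, p, q, P_Y, K):
    """Kernel block entry of Eqs. 7.1-7.2.

    p[m]: arrival probability in arrival phase m; q[n]: retrial probability
    in retrial phase n; P_Y[(m, n, m2, n2)]: environment transition.
    """
    jt = 1 if j >= 1 else 0                  # j~ = 1 iff the server is busy
    free = K - i - jt                        # customers able to generate arrivals
    return (comb(free, a) * p[m] ** a * (1 - p[m]) ** (free - a)
            * comb(i, r) * q[n] ** r * (1 - q[n]) ** (i - r)
            * P_Y[(m, n, m2, n2)])
```

Summing over all admissible (a, r, m′, n′) must give 1 for any fixed state, which provides a convenient sanity check on an implementation.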

The above specification (7.1)–(7.2) represents a model with geometric arrivals and retrials where the arrival and retrial phases are governed by an irreducible discrete-time Markov chain with one-step transition probability matrix \(\mathbf{P}_{Y}=\left( \mathbf{P}_{Y}(m,n;m^{\prime },n^{\prime })\right)\). We present numerical experiments with M = N = 1 (two phases). The following three choices for matrix P Y are considered:

$$ \mathbf{P}_{Y}^{1}=\left( \begin{array}{cccc} 2/9{\kern2pt} & 1/9{\kern2pt} & 4/9{\kern2pt} & 2/9 \\ 2/9{\kern2pt} & 1/9{\kern2pt} & 4/9{\kern2pt} & 2/9 \\ 2/9{\kern2pt} & 1/9{\kern2pt} & 4/9{\kern2pt} & 2/9 \\ 2/9{\kern2pt} & 1/9{\kern2pt} & 4/9{\kern2pt} & 2/9 \end{array} \right) ,\text{ }\mathbf{P}_{Y}^{2}=\left( \begin{array}{cccc} 1/2{\kern2pt} & 1/2{\kern2pt} & 0{\kern2pt} & 0 \\ 0{\kern2pt} & 1/2{\kern2pt} & 1/2{\kern2pt} & 0 \\ 0{\kern2pt} & 0{\kern2pt} & 1/2{\kern2pt} & 1/2 \\ 1/2{\kern2pt} & 0{\kern2pt} & 0{\kern2pt} & 1/2 \end{array} \right) ,\text{ } $$

and

$$ \mathbf{P}_{Y}^{3}=\left( \begin{array}{cccc} 0{\kern2pt} & 1/3{\kern2pt} & 1/3{\kern2pt} & 1/3 \\ 1/2{\kern2pt} & 1/2{\kern2pt} & 0{\kern2pt} & 0 \\ 1/2{\kern2pt} & 0{\kern2pt} & 1/2{\kern2pt} & 0 \\ 1/2{\kern2pt} & 0{\kern2pt} & 0{\kern2pt} & 1/2 \end{array} \right) . $$
  2. The PH service time distribution

For the service time distribution, we choose s = 3, \(\mathbf{\alpha }=\left( 1/4,1/2,1/4\right)\), \(\mathbf{t}^{0}=\left( 0,0,1\right) ^{\prime}\) and

$$ \mathbf{T=}\left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array} \right) . $$

This choice represents a discrete law consisting of 4 − k slots with probability α k , for 1 ≤ k ≤ 3.
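This can be checked numerically from the PH representation, whose probability mass function is \(P\{S=n\}=\mathbf{\alpha }\mathbf{T}^{n-1}\mathbf{t}^{0}\):

```python
import numpy as np

alpha = np.array([1/4, 1/2, 1/4])            # initial phase distribution
T = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])                 # phase k -> k+1 deterministically
t0 = np.array([0., 0., 1.])                  # absorption only from phase 3

# P{S = n} = alpha T^{n-1} t0, n >= 1
pmf = [alpha @ np.linalg.matrix_power(T, n - 1) @ t0 for n in range(1, 5)]
```

Starting in phase k, the chain needs exactly 4 − k steps to be absorbed, so the pmf places mass α_k on duration 4 − k and no mass beyond three slots.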

In a first set of experiments, we concentrate on the main stationary performance measures. More concretely, once the stationary vector \(\boldsymbol{\pi}\) is computed, it is easy to obtain the stationary probability that the server is busy, P b , and the mean number of customers in orbit, E[O]. These descriptors are defined as follows:

$$ \begin{array}{rcl} P_{b}&=&\sum\limits_{i=0}^{K-1}\sum\limits_{j=1}^{s}\sum\limits_{m=0}^{M}\sum \limits_{n=0}^{N}\pi _{ijmn}, \\ E[O]&=&\sum\limits_{i=0}^{K-1}\sum\limits_{j=0}^{s}\sum\limits_{m=0}^{M}\sum \limits_{n=0}^{N}i\pi _{ijmn}. \end{array} $$
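With π stored, say, as a four-dimensional array indexed by (i,j,m,n) (a layout assumption of ours), both descriptors are one-line reductions:

```python
import numpy as np

def descriptors(pi, K):
    """P_b and E[O] from a stationary vector stored as pi[i, j, m, n]."""
    P_b = pi[:, 1:, :, :].sum()              # server busy: sum over j >= 1
    levels = np.arange(K).reshape(K, 1, 1, 1)
    E_O = (levels * pi).sum()                # mean orbit size: sum of i * pi
    return P_b, E_O
```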

For the case (p 0,p 1,q 0,q 1) = (1/4,4/5,1/3,3/5) and \(\mathbf{P} _{Y}=\mathbf{P}_{Y}^{1}\), Table 1 gives the values of P b and E[O] as a function of the population size. In agreement with the intuitive expectations, both descriptors increase with increasing values of K. For K = 30, the probability P b is very close to one.

Table 1 The effect of K on the stationary measures

For \(\mathbf{P}_{Y}=\mathbf{P}_{Y}^{1}\), in Table 2 we observe the effect of the vector (p 0,p 1,q 0,q 1) on P b and E[O]. Each cell contains two entries corresponding to K = 5 and K = 50 (in bold). In the case K = 50, the entries show that we deal with a very congested system. Since the number of customers in orbit is high, a retrial customer occupies a free server almost surely, irrespective of the magnitude of (q 0,q 1). This explains why the obtained results are almost insensitive with respect to the retrial pair (q 0,q 1).

Table 2 The effect of (p 0,p 1,q 0,q 1) on the stationary measures

In Table 3, we take (p 0,p 1,q 0,q 1) = (1/4,4/5,1/3,3/5) and turn our attention to the influence of the environmental matrix P Y . We again display two values corresponding to K = 5 and K = 50. In terms of P b , a clear pattern cannot be inferred. However, the expectation E[O] shows that the orbit is more congested when the arrival and retrial phases are governed by matrix \(\mathbf{P}_{Y}^{1}\).

Table 3 The effect of P Y on the stationary measures

In a second set of numerical results, we deal with the expectation of the conditional waiting times W ijmn . For a model with K = 5 and some selected initial states (i,j,m,n), Tables 4 and 5 show, respectively, the effect of (p 0,p 1,q 0,q 1) and P Y on E[W ijmn ]. Once more, we assume that \(\mathbf{P}_{Y}=\mathbf{P}_{Y}^{1}\) in Table 4, whereas (p 0,p 1,q 0,q 1) = (1/4,4/5,1/3,3/5) in Table 5.

Table 4 The effect of (p 0,p 1,q 0,q 1) on E[W ijmn ]
Table 5 The effect of P Y on E[W ijmn ]

The entries for (4,1,1,0) and (4,2,1,1) in Table 4 show that the corresponding expectations differ by just one unit. The explanation is as follows. Since j ∈ {1,2}, the service will remain in progress for at least one more slot; see the choice of the PH service time above. Moreover, i = 4 and \(\widetilde{j}=1\), so there is no chance for new arrivals or successful retrials within the current slot. On the other hand, the pair (m,n) has no influence on forthcoming slots because \(\mathbf{P}_{Y}^{1}\) has identical rows. Under these conditions, comparing the cases j = 1 and j = 2, it is clear that a tagged retrial customer waits one slot less when j = 2.

In Tables 2 and 4, four choices of the vector (p 0,p 1,q 0,q 1) were obtained by permuting the arrival pair (p 0,p 1) and the retrial pair (q 0,q 1). Since for all choices the system remains quite congested, the effect of (p 0,p 1,q 0,q 1) on the performance measures is moderate. In Table 6, we compare the results obtained for (1/4,4/5,1/3,3/5) and (5/100,2/100,98/100,95/100). The latter choice of the vector (p 0,p 1,q 0,q 1) is associated with a model with small arrival probabilities and high retrial probabilities. Thus, as expected, the congestion decreases both for K = 5 and K = 50, showing that the stationary measures, P b and E[O], and the expected waiting time, E[W 1000], are sensitive to changes in the D-BSDA distribution.

Table 6 Sensitivity of the D-BSDA distribution