1 Introduction

This paper generalizes in many directions the result obtained by Gelenbe in [12], where G-networks with trigger signals were introduced and were shown to have a product form steady-state distribution. First, we consider a closed network with customers and three types of stations: single server stations, infinite server stations and stations without server. Second, the signals arrive from the outside and are routed with a state dependent probability to a station in the network. A signal triggers a customer movement from the station where it is received to any other queue in the network. Despite these uncommon features, we prove that such a network has a product form steady-state distribution. We also prove an “Arrivals See Time Averages” (ASTA) property to relate the state seen by an incoming customer (due to routing or trigger) to the steady-state distribution of the network with one customer less. This property allows us to develop an MVA-like algorithm to compute the average queue length and the expected sojourn time.

G-networks of queues with signals have received considerable attention since the seminal paper by Gelenbe [11] in 1991 where he introduced networks with positive and negative customers. A negative customer deletes a positive customer if there is any in the queue at its arrival. Then it disappears. If the queue is empty, it also disappears immediately. A negative customer is never kept in the queue. It is now seen as a signal which deletes a customer. Such networks with positive and negative customers are associated with models of Random Neural Networks [13] and are therefore suitable to model control algorithms. Since then, many papers on networks of queues with signals have been published (see for instance the annotated bibliography [5]). It is worth remarking that most of the results are obtained for open networks of queues (see [6] for one notable exception). Indeed, most of the signals studied so far imply the destruction of customers. In a closed network, such a behavior leads to an empty network after some time. Some numerical algorithms have been designed to solve explicitly the flow equations, which are more general than the ones obtained for Jackson or Gordon-Newell networks of queues [9]. For closed networks of queues, one must also compute the normalization constant. To avoid this computation, we develop an exact Mean Value Analysis approach. We prove, for the first time to the best of our knowledge, an ASTA property for a network with signals. It is well-known that in a closed Gordon-Newell network, an arriving customer sees the steady-state distribution of the network with one customer less. Such a question was not considered so far for closed networks of generalized queues with restart or trigger signals. The answer we provide here allows us to compute the average queue size without computing the steady-state distribution: this is an extension of the well-known MVA algorithm [23].

Recently, G-networks with triggers have been proposed to model data processing and energy consumption [10, 14,15,16, 18]. In this model, denoted as Energy Packet Networks (EPN in the following), we can represent the flow of intermittent sources of energy like batteries and solar or wind based generators and their interactions with IT devices consuming energy like sensors, CPUs, storage systems and networking elements. The main idea of EPNs is to represent energy in terms of discrete units called Energy Packets (or EPs). Each EP represents a certain number of Joules. EPs are produced by intermittent sources of energy (solar, wind). Thus, the flows of EPs are associated with random processes. EPs can be stored in a battery from which they can leak after a random delay. They also interact with devices which need energy to perform some work. Again this interaction is associated with some random processes. Note that an EPN is not only a theoretical concept. A more practical approach, where power packets are really implemented as a pulse of current characterized by an intensity, a voltage and a time duration, has been presented in the electrical engineering literature (see for instance [24]). These packets are associated with a protocol, control information and some hardware for switching and routing.

In the original Energy Packet Network model presented for instance in [17], we model the energy as EPs and the workload as Data Packets (DPs). To transmit a DP between two nodes, one must use one EP. In a G-network, this is modeled with two types of queues: a battery queue and a workload queue (see Fig. 1). EPs are stored in a battery queue while DPs are queued before service in the workload queue. Each node in the network is associated with a queue to store the DPs and a battery (the EP queue) to keep the energy. The EPs are sent to the DP queue and trigger the customer movements between workload queues in the network. When an EP arrives at a DP queue which is not backlogged, the energy is lost.

Fig. 1. The classical EP network model: two types of queues dedicated respectively to EPs and DPs; the migration of a DP is provoked by the movement of an EP.

We hope that the theoretical results we provide in this paper will be useful for that research direction. This paper is mainly theoretical: we prove that the queueing network has a product form steady-state distribution if the associated Markov chain is ergodic. The proof of the product form result is based on the resolution of the global balance equation. One may use other theoretical approaches to establish the result, but it is not clear that the CAT and RCAT theorems proved by Harrison [1, 19, 21] lead to an easier proof here. G-networks have also been modeled as networks of synchronized automata [4, 7], and a proof based on properties of tensors has been proposed for this representation. We think that the proof we present here is simpler. To simplify the presentation, the proof is postponed to an appendix.

The technical part of the paper is organized as follows. In the next section we present the model and we state that the network of queues has a product form steady-state distribution. Many details of the proof are postponed to an appendix for the sake of readability. As the network is closed and the number of customers is constant, not all the states are reachable and the distribution is known up to a normalization constant. To avoid the computation of this constant, we develop in Sect. 3 a mean value analysis algorithm to obtain the mean queue length and the average waiting time. This algorithm requires that we relate the state seen by an arriving customer or a signal to the steady-state distribution. In Sect. 4, we present some possible extensions of these results and an example of a closed network with energy packets and data packets.

2 Description of the Model

We investigate generalized networks with an arbitrary number N of queues, one class of customers and one type of signals (i.e. triggers, as introduced by Gelenbe in [12]). We consider that the network contains three types of stations: stations with one server (in set \({\mathcal{{F}}}\)), stations with an infinite number of servers (in set \({\mathcal{{I}}}\)) and stations without server (in set \({\mathcal{{Z}}}\)). In a station without any server, the customers do not receive service but they react to signals. The other stations receive customers which wait for service, are served before migrating to another queue, and can also react to a signal, as usual in G-networks of queues with signals. We consider here a trigger signal as defined by Gelenbe in [12]. At its arrival at a non empty queue (say i), a trigger moves a customer to queue j according to routing matrix \(\mathbf {T}\) and it disappears immediately. If queue i is empty, the trigger signal vanishes instantaneously. Triggers are never queued. Triggers arrive to the system according to a Poisson process with rate \(\lambda ^t\) and they are routed to station i with a state dependent probability which will be detailed in the following paragraphs. Note that matrix \(\mathbf {T}\) is stochastic but we do not require that it be irreducible.

In most of the papers in the literature, G-networks with signals have an open topology because many signals imply the deletion of customers. Here we assume that the signals are external and only imply customer movements. Thus, the number of customers in the queues is preserved: if a queue is empty, it remains empty after the reception of a trigger, and if there is a backlog, the total number of customers is unchanged after the reception of a signal. Therefore it is possible to consider a closed network where the total number of customers is constant. Let K be this number of customers in the network.

Let us turn back to the routing of triggers to queues. Let \(\varvec{x}=(x_1,\ldots ,x_N)\) be the state of the system where \(x_i\) is the number of customers in station i. Thus \(K = || \varvec{x} ||_1\). We consider the following quantity:

$$ S(\varvec{x}) = \sum _{i \in {\mathcal{{F}}}} 1_{x_i>0} + \sum _{i \in {\mathcal{{I}}}} x_i + \sum _{i \in {\mathcal{{Z}}}} x_i $$

The probability that a trigger entering the network of queues is routed to queue i is:

  • \(\frac{1_{x_i>0}}{K} \) if i is a station with one server,

  • \(\frac{ x_i }{K } \) if i is an infinite server station,

  • \(\frac{x_i }{K} \) if i is a station without server,

and it vanishes before joining a station with probability \(\frac{K-S(\varvec{x})}{K}\). Indeed, we have \(S(\varvec{x}) \le K \), so these probabilities are all well-defined. The rest of the model is more classical. Service times are exponentially distributed with rate \(\mu _i\) for a server at station i (for i in \({\mathcal{{F}}}\) and \({\mathcal{{I}}}\)). At the completion of their service, the customers move between queues according to routing matrix \(\mathbf {R}\). Note that this matrix is initially defined as a rectangular matrix because it models the routing from a queue in \({\mathcal{{F}}}\cup {\mathcal{{I}}}\) to a queue in \({\mathcal{{F}}}\cup {\mathcal{{I}}}\cup {\mathcal{{Z}}}\). We complete this matrix to obtain a square matrix with null rows corresponding to nodes in \({\mathcal{{Z}}}\). Note that \(\mathbf {R}\) is not stochastic as it contains some null rows.
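For concreteness, the state-dependent routing of an incoming trigger can be simulated by the following minimal sketch; the function and variable names are ours and only the probabilities follow the rules above.

```python
import random

def route_trigger(x, F, I, Z, K):
    """Route an incoming trigger given state x (a sketch; names are ours).

    x[i] is the number of customers at station i, F/I/Z are the index sets of
    single-server, infinite-server and no-server stations, K is the population.
    Returns the station the trigger is routed to, or None if it vanishes.
    """
    # Weights: 1_{x_i > 0} for single-server stations, x_i otherwise.
    weights = {i: (1 if x[i] > 0 else 0) if i in F else x[i] for i in F | I | Z}
    u = random.uniform(0, K)          # S(x) <= K, so the rule below is well defined
    for i, w in weights.items():      # station i is chosen with probability w / K
        if u < w:
            return i
        u -= w
    return None                       # vanishes with probability (K - S(x)) / K
```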

Assumption 1

We assume in the following that:

  • \(\lambda ^t >0\).

  • \(\mu _j >0 \) for all j in \({\mathcal{{I}}}\cup {\mathcal{{F}}}\).

  • Consider the directed graph built as follows: the set of nodes is the set of stations and there exists an arc from node i to node j if either \(\mathbf{{R}}[i,j] >0\) or \( \mathbf{{T}}[i,j]>0\). Let DG be this directed graph. We assume that DG is strongly connected.

Clearly \((\varvec{x})_t\) is a continuous time Markov chain. It has a finite number of states. As we assume that the directed graph of the customer movement (due to signals or routing of customers after their service) is strongly connected, it is also irreducible. Therefore it is ergodic and the steady-state distribution exists. The following result characterizes this distribution.

Theorem 1

Let K be the number of customers in the network. Under Assumption 1, the Markov chain \((\varvec{x})_t\) has the following steady-state distribution:

$$\begin{aligned} \pi (K, \varvec{x}) = \frac{1}{G(K)} 1_{(\sum _i x_i =K)} \prod _{i \in {\mathcal{{F}}}} \rho _i^{x_i} \prod _{i \in {\mathcal{{I}}}} \frac{\rho _i^{x_i}}{x_i !} \prod _{i \in {\mathcal{{Z}}}} \frac{\gamma _i^{x_i}}{x_i !}, \end{aligned}$$
(1)

where \(\rho _i\) and \(\gamma _i\) are defined by the flow equations: for every queue i in \({\mathcal{{F}}}\cup {\mathcal{{I}}}\):

$$\begin{aligned} \rho _i = \frac{\sum _{j \in {\mathcal{{F}}}\cup {\mathcal{{I}}}} \mu _j \rho _j \mathbf{{R}}[j,i] + \sum _{j \in {\mathcal{{F}}}\cup {\mathcal{{I}}}} \frac{\lambda ^t}{K} \rho _j \mathbf{{T}}[j,i] + \sum _{j \in {\mathcal{{Z}}}} \frac{\lambda ^t}{K} \gamma _j \mathbf{{T}}[j,i] }{\mu _i + \frac{\lambda ^t}{K}}, \end{aligned}$$
(2)

and finally, for every queue i in \({\mathcal{{Z}}}\):

$$\begin{aligned} \gamma _i = \frac{\sum _{j \in {\mathcal{{F}}}\cup {\mathcal{{I}}}} K \mu _j \rho _j \mathbf{{R}}[j,i] + \sum _{j \in {\mathcal{{F}}}\cup {\mathcal{{I}}}} {\lambda ^t} \rho _j \mathbf{{T}}[j,i] + \sum _{j \in {\mathcal{{Z}}}} {\lambda ^t} \gamma _j \mathbf{{T}}[j,i] }{ \lambda ^t}. \end{aligned}$$
(3)

Proof: The proof of the product form is based on the analysis of the Chapman-Kolmogorov equation at steady-state. For the sake of readability, we now give the equation and some explanations for several of its terms. The analysis is then postponed to an appendix.

Let us first give the global balance equation. In the following, \(\varvec{e_j}\) denotes the vector whose components are all equal to 0 except component j, which is equal to 1.

$$\begin{aligned} \begin{aligned} \pi (K,\varvec{x}) [\sum \nolimits _{i \in {\mathcal{{F}}}}&\mu _i 1_{x_i>0} + \sum \nolimits _{i \in {\mathcal{{I}}}} \mu _i x_i + \lambda ^t (\sum \nolimits _{i \in {\mathcal{{F}}}} \frac{1_{x_i>0}}{K} + \sum \nolimits _{i \in {\mathcal{{I}}}\cup {\mathcal{{Z}}}} \frac{x_i}{K} )]&\\&\\&= \sum \nolimits _{i \in {\mathcal{{F}}}} \sum \nolimits _{j \in {\mathcal{{F}}}\cup {\mathcal{{I}}}\cup {\mathcal{{Z}}}} \mu _i \pi (K,\varvec{x} +\varvec{e_i} -\varvec{e_j}) \mathbf{{R}}[i,j] 1_{x_j>0}&~[1] \\&+ \sum \nolimits _{i \in {\mathcal{{I}}}} \sum \nolimits _{j \in {\mathcal{{F}}}\cup {\mathcal{{I}}}\cup {\mathcal{{Z}}}} x_i \mu _i \pi (K,\varvec{x} +\varvec{e_i} -\varvec{e_j}) \mathbf{{R}}[i,j] 1_{x_j>0}&[2] \\&+ \sum \nolimits _{i \in {\mathcal{{F}}}} \sum \nolimits _{j \in {\mathcal{{F}}}\cup {\mathcal{{I}}}\cup {\mathcal{{Z}}}} \pi (K,\varvec{x} +\varvec{e_i} -\varvec{e_j}) \mathbf{{T}}[i,j] 1_{x_j>0} \lambda ^t \frac{1_{x_i + 1>0}}{K}&[3] \\&+ \sum \nolimits _{i \in {\mathcal{{I}}}} \sum \nolimits _{j \in {\mathcal{{F}}}\cup {\mathcal{{I}}}\cup {\mathcal{{Z}}}} \pi (K,\varvec{x} +\varvec{e_i} -\varvec{e_j}) \mathbf{{T}}[i,j] 1_{x_j>0} \lambda ^t \frac{ {x_i + 1 }}{K}&[4] \\&+ \sum \nolimits _{i \in {\mathcal{{Z}}}} \sum \nolimits _{j \in {\mathcal{{F}}}\cup {\mathcal{{I}}}\cup {\mathcal{{Z}}}} \pi (K,\varvec{x} +\varvec{e_i} -\varvec{e_j}) \mathbf{{T}}[i,j] 1_{x_j>0} \lambda ^t \frac{ {x_i + 1 }}{K}.&[5]\\ \end{aligned} \end{aligned}$$
(4)

The first two terms of the right hand side describe the services in stations of \({\mathcal{{F}}}\) and \({\mathcal{{I}}}\). Remember that no services occur in stations of \({\mathcal{{Z}}}\). The last three terms describe the effect of trigger signals arriving at queue i with a state dependent probability and moving a customer to another queue somewhere else in the network (say j). The left hand side of the equation contains the description of services with state dependent service rates for stations in \({\mathcal{{I}}}\) and in \({\mathcal{{F}}}\). The last two terms of the l.h.s. describe the arrival of a trigger signal. Note that we avoid taking into account null transitions on both sides of the balance equation. Remember that some triggers may vanish without any effect.    \(\square \)

Once the theorem has been established, we still have to prove the existence of the rates \(\rho _i\) (for i in \({\mathcal{{F}}}\) and \({\mathcal{{I}}}\)) and \(\gamma _j\) (for j in \({\mathcal{{Z}}}\)). We begin with a technical lemma.

Lemma 1

(Stochastic). The matrix M defined in Eq. 5, whose rows and columns are indexed by the queues in \({\mathcal{{F}}}\cup {\mathcal{{I}}}\cup {\mathcal{{Z}}}\), is stochastic.

$$\begin{aligned} \mathbf{{M}}[\mathbf{{j}}, \mathbf{{i}}] = \frac{\mu _\mathbf{{j}} \mathbf{{1}}_{\mathbf{{j}} \in {\mathcal{{I}}}\cup {\mathcal{{F}}}} }{\mu _\mathbf{{j}} \mathbf{{1}}_{\mathbf{{j}} \in {\mathcal{{I}}}\cup {\mathcal{{F}}}} + \lambda ^\mathbf{{t}}/\mathbf{{K}}} \mathbf{{R}}[\mathbf{{j}}, \mathbf{{i}}] + \frac{ \lambda ^\mathbf{{t}}/\mathbf{{K}}}{\mu _\mathbf{{j}} \mathbf{{1}}_{\mathbf{{j}} \in {\mathcal{{I}}}\cup {\mathcal{{F}}}} + \lambda ^\mathbf{{t}}/\mathbf{{K}}} \mathbf{{T}}[\mathbf{{j}}, \mathbf{{i}}]. \end{aligned}$$
(5)

Furthermore matrix M is irreducible.

Proof: Consider an arbitrary index j in \({\mathcal{{I}}}\cup {\mathcal{{F}}}\).

$$ \mathbf{{M}}[\mathbf{{j}}, \mathbf{{i}}] = \frac{\mu _\mathbf{{j}} }{\mu _\mathbf{{j}} + \lambda ^\mathbf{{t}}/\mathbf{{K}} } \mathbf{{R}}[\mathbf{{j}}, \mathbf{{i}}] + \frac{ \lambda ^\mathbf{{t}}/\mathbf{{K}} }{\mu _\mathbf{{j}} + \lambda ^\mathbf{{t}}/\mathbf{{K}} } \mathbf{{T}}[\mathbf{{j}}, \mathbf{{i}}]. $$

Rows j of matrices \(\mathbf {R}\) and \(\mathbf {T}\) are probability distributions. Therefore, as a convex combination of probability distributions, row j of \(\mathbf {M}\) is a probability distribution.

Now assume that j is in \({\mathcal{{Z}}}\). We have:

$$ \mathbf{{M}}[\mathbf{{j}}, \mathbf{{i}}] = \mathbf{{T}}[\mathbf{{j}}, \mathbf{{i}}]. $$

As matrix \(\mathbf {T}\) is stochastic by assumption, row j of \(\mathbf {M}\) is also a probability distribution. Finally, all rows of \(\mathbf {M}\) are probability distributions and therefore matrix \(\mathbf {M}\) is stochastic.

Now remember that DG is strongly connected. The adjacency matrix \(\mathbf {A}\) of directed graph DG is defined by

$$ \mathbf{{A}}[\mathbf{{i}}, \mathbf{{j}}] = \mathbf{{1}}_\mathbf{{R[i,j]>0}} ~\mathbf{OR}~\mathbf{{1}}_\mathbf{{T[i,j]>0}} $$

As \(\lambda ^t\) and \(\mu _j\) (for all j in \({\mathcal{{I}}}\cup {\mathcal{{F}}}\) ) are positive, we also have:

$$ \mathbf{{A}} [\mathbf{{i}}, \mathbf{{j}}] = \mathbf{{1}}_{\mathbf{{M}} [\mathbf{{i}},\mathbf{{j}}]>\mathbf{{0}}} $$

Matrix \(\mathbf {M}\) is irreducible because its adjacency matrix is \(\mathbf {A}\), the adjacency matrix of DG, which is strongly connected by the third part of Assumption 1.
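As an illustration, the matrix M of Eq. 5 can be built numerically as follows; this is a sketch, and the helper name build_M and its arguments are ours, not part of the model.

```python
import numpy as np

def build_M(R, T, mu, lam_t, K, servers):
    """Matrix M of Eq. (5) (a sketch; argument names are ours).

    R and T are the N x N routing matrices (R has null rows for stations in Z),
    mu[j] is the service rate of station j (ignored when j is in Z), lam_t is
    the trigger arrival rate, K the population, and servers[j] is True iff
    station j belongs to F or I.
    """
    N = len(mu)
    M = np.zeros((N, N))
    for j in range(N):
        if servers[j]:
            denom = mu[j] + lam_t / K
            M[j] = (mu[j] * R[j] + (lam_t / K) * T[j]) / denom
        else:
            M[j] = T[j]      # for j in Z, the row of M reduces to the row of T
    return M

# Lemma 1: every row of M sums to 1, e.g.
# assert np.allclose(build_M(R, T, mu, lam_t, K, servers).sum(axis=1), 1.0)
```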

Property 1

(Existence). Under Assumption 1, the system of fixed point equations (Eqs. 2 and 3) has a solution.

Proof: Let us denote by v the vector defined by

$$ \left[ \begin{array} {llll} \mathbf{{v}} (\mathbf{{i}}) &{}= &{} \rho _i (\mu _i + \lambda ^t/K) &{}~i \in {\mathcal{{F}}}\cup {\mathcal{{I}}},\\ \mathbf{{v}} (\mathbf{{i}}) &{}= &{} \gamma _i \lambda ^t/K &{}~i \in {\mathcal{{Z}}}.\\ \end{array} \right. $$

After substitution in Eq. 2, we have for all i in \({\mathcal{{I}}}\cup {\mathcal{{F}}}\):

$$\begin{aligned} \mathbf{{v}} [i] = \sum _{j \in {\mathcal{{F}}}\cup {\mathcal{{I}}}} \mathbf{{v}} [j] \mathbf{{M}}[j,i] + \sum _{j \in {\mathcal{{Z}}}} \mathbf{{v}} [j] \mathbf{T}[j,i]. \end{aligned}$$
(6)

Similarly for Eq. 3 we get for all i in \({\mathcal{{Z}}}\):

$$\begin{aligned} \mathbf{{v}} [\mathbf{{i}}] = \sum _{\mathbf{{j}} \in {\mathcal{{F}}}\cup {\mathcal{{I}}}} \mathbf{v[j]} \mathbf{M[j,i]} + \sum _{\mathbf{{j}} \in {\mathcal{{Z}}}} \mathbf{v[j]} \mathbf{T[j,i]}. \end{aligned}$$
(7)

Thus, combining both equations in vector form, taking into account that \(\mathbf{M}[j,i] = \mathbf{T}[j,i] \) for all j in \({\mathcal{{Z}}}\):

$$\begin{aligned} \mathbf {v = v M}. \end{aligned}$$
(8)

The previous lemma states that matrix \(\mathbf {M}\) is stochastic and irreducible. Thus there exists a left eigenvector associated with eigenvalue 1, and \(\mathbf {v}\) is an arbitrary positive multiple of this eigenvector. Remember that, for a closed queueing network, we can consider any positive multiple of the eigenvector, since the unique probability distribution is obtained after normalization.    \(\square \)
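Numerically, v can be obtained as a left eigenvector of M for eigenvalue 1, and \(\rho\) and \(\gamma\) recovered from the definition of v given in the proof. The following sketch, which reuses the hypothetical build_M helper above, illustrates one possible computation.

```python
import numpy as np

def traffic_solution(R, T, mu, lam_t, K, servers):
    """Solve v = v M and recover rho and gamma, up to a positive constant.

    A sketch reusing the hypothetical build_M above.  From the definition of
    v in the proof: rho_i = v[i] / (mu_i + lam_t / K) for i in F u I and
    gamma_i = v[i] * K / lam_t for i in Z.
    """
    M = build_M(R, T, mu, lam_t, K, servers)
    w, vecs = np.linalg.eig(M.T)                     # left eigenvectors of M
    v = np.real(vecs[:, np.argmin(np.abs(w - 1.0))])
    v = v / v.sum()                                  # any positive multiple is fine
    rho = np.where(servers, v / (mu + lam_t / K), 0.0)
    gamma = np.where(servers, 0.0, v * K / lam_t)
    return v, rho, gamma
```

For a small network, Eq. 1 can then be evaluated by enumerating the states with \(\sum _i x_i = K\) and normalizing, which also yields G(K).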

As usual, it remains to compute G(K). A natural idea consists in generalizing the convolution algorithm proposed by Buzen [2] to networks of queues with signals. In the following we develop another idea which allows us to compute the expected queue length and the average waiting time without computing the normalization constant.

3 Mean Value Analysis

We have to prove an arrival theorem to relate the distribution seen by an arriving customer to the steady-state distribution (the so-called ASTA property). We follow the approach presented by Harrison and Patel in [20]. Let us introduce some additional notation. Let \(\pi _{Ai} (K,\varvec{x})\) be the probability that a customer arriving at queue i sees state \(\varvec{x}\). Such an arrival corresponds to a transition from state \(\varvec{x+e_j}\) to state \(\varvec{x+e_i}\) for some queue j. In state \(\varvec{x}\) the total number of customers is \(K-1\). We begin with some technical properties.

Property 2

Due to the product form solution for the steady-state distribution, we have, for all state \(\varvec{x}\):

$$ G(K) \pi (K, \varvec{x} + \varvec{e_j} ) = G(K-1) \pi (K-1, \varvec{x} ) a_j, $$

where:

$$ \left[ \begin{array} {llll} a_j &{} = &{} \rho _j, &{}~if~j \in {\mathcal{{F}}}\\ &{}&{}&{}\\ a_j &{} = &{} \frac{\rho _j}{x_j+1}, &{}~if~j \in {\mathcal{{I}}}\\ &{}&{}&{}\\ a_j &{} = &{} \frac{\gamma _j}{x_j+1}, &{}~if~j \in {\mathcal{{Z}}}\\ \end{array} \right. $$

Proof: Assume first that \( j \in {\mathcal{{F}}}\). Then

$$ G(K) \pi (K, \varvec{x} + \varvec{e_j} ) = 1_{(1+\sum _i x_i =K)} \rho _j^{x_j+1} \prod _{i \in {\mathcal{{F}}}, i\ne j } \rho _i^{x_i} \prod _{i \in {\mathcal{{I}}}} \frac{\rho _i^{x_i}}{x_i !} \prod _{i \in {\mathcal{{Z}}}} \frac{\gamma _i^{x_i}}{x_i !} $$

Thus:

$$ G(K) \pi (K, \varvec{x} + \varvec{e_j} ) = 1_{(\sum _i x_i =K-1)} \rho _j \prod _{i \in {\mathcal{{F}}}} \rho _i^{x_i} \prod _{i \in {\mathcal{{I}}}} \frac{\rho _i^{x_i}}{x_i !} \prod _{i \in {\mathcal{{Z}}}} \frac{\gamma _i^{x_i}}{x_i !} = \rho _j G(K-1) \pi (K-1, \varvec{x}) $$

The proof is similar for \( j \in {\mathcal{{I}}}\) and \( j \in {\mathcal{{Z}}}\). It is omitted for the sake of conciseness.

   \(\square \)

Theorem 2

(Arrivals See Time Average). An arriving customer at queue i sees the steady-state distribution in a network with one customer less:

$$ \pi _{Ai} (K,\varvec{x}) = \pi (K-1,\varvec{x}) $$

Proof: The process is stationary. Therefore, \(\pi _{Ai} (K,\varvec{x})\) can be expressed as the ratio of the expected number of transitions giving an arrival to node i at state \(\varvec{x}\) (i.e. \(A_i (\varvec{x})\)) to the expected number of such transitions over all states \(\varvec{y}\), i.e. \(\sum _y A_i (\varvec{y})\):

$$\begin{aligned} \pi _{Ai} (K,\varvec{x}) = \frac{A_i (\varvec{x})}{\sum _y A_i (\varvec{y})}. \end{aligned}$$
(9)

A customer arriving at queue i sees state \(\varvec{x}\) during a transition from state \(\varvec{x}+\varvec{e_j} \) to state \(\varvec{x}+\varvec{e_i} \). This transition occurs after a service completion or after the reception of a trigger signal at station j. Remember that the service rates and the trigger routing probabilities may be state dependent. Thus:

$$\begin{aligned} A_i (\varvec{x})&= \sum \nolimits _{j \in {\mathcal{{F}}}} \pi (K, \varvec{x}+\varvec{e_j}) \mu _j \mathbf{{R}}[\mathbf{{j}}, \mathbf{{i}}] \\&+ \sum \nolimits _{j \in {\mathcal{{I}}}} \pi (K, \varvec{x}+\varvec{e_j}) \mu _j (x_j+1) \mathbf{{R}}[\mathbf{{j}}, \mathbf{{i}}] \\&+ \sum \nolimits _{j \in {\mathcal{{F}}}} \pi (K, \varvec{x}+\varvec{e_j}) \frac{\lambda ^t }{K} \mathbf{{T}}[\mathbf{{j}}, \mathbf{{i}}] \\&+ \sum \nolimits _{j \in {\mathcal{{I}}}\cup {\mathcal{{Z}}}} \pi (K, \varvec{x}+\varvec{e_j}) \frac{\lambda ^t (x_j+1)}{K} \mathbf{{T}}[\mathbf{{j}}, \mathbf{{i}}]. \end{aligned}$$

Reordering the summations, we get:

$$\begin{aligned} A_i (\varvec{x})&= \sum \nolimits _{j \in {\mathcal{{F}}}} \pi (K, \varvec{x}+\varvec{e_j}) \left[ \mu _j \mathbf{{R}}[\mathbf{{j}}, \mathbf{{i}}] + \frac{\lambda ^\mathbf{{t}} }{\mathbf{{K}}} \mathbf{{T}}[\mathbf{{j}}, \mathbf{{i}}]\right] \\&+ \sum \nolimits _{j \in {\mathcal{{I}}}} \pi (K, \varvec{x}+\varvec{e_j}) (x_j+1) \left[ \mu _j \mathbf{{R}}[\mathbf{{j}}, \mathbf{{i}}] + \frac{\lambda ^\mathbf{{t}} }{\mathbf{{K}}} \mathbf{{T}}[\mathbf{{j}}, \mathbf{{i}}]\right] \\&+ \sum \nolimits _{j \in {\mathcal{{Z}}}} \pi (K, \varvec{x}+\varvec{e_j}) \frac{\lambda ^t (x_j+1)}{K} \mathbf{{T}}[\mathbf{{j}}, \mathbf{{i}}].\\ \end{aligned}$$

Using the definition for matrix \(\mathbf {M}\) we obtain after substitution:

$$\begin{aligned} \begin{aligned} A_i (\varvec{x})&= \sum \nolimits _{j \in {\mathcal{{F}}}} \pi (K, \varvec{x}+\varvec{e_j}) \left[ \mu _j + \frac{\lambda ^t }{K} \right] \mathbf{{M}}[\mathbf{{j}}, \mathbf{{i}}] \\&+ \sum \nolimits _{j \in {\mathcal{{I}}}} \pi (K, \varvec{x}+\varvec{e_j}) (x_j+1) \left[ \mu _j + \frac{\lambda ^t }{K} \right] \mathbf{{M}}[\mathbf{{j}}, \mathbf{{i}}] \\&+ \sum \nolimits _{j \in {\mathcal{{Z}}}} \pi (K, \varvec{x}+\varvec{e_j}) \frac{\lambda ^t (x_j+1)}{K} \mathbf{{M}}[\mathbf{{j}}, \mathbf{{i}}].\\ \end{aligned} \end{aligned}$$
(10)

Let us now turn back to Property 2 from which we easily obtain:

$$ \pi (K, \varvec{x} + \varvec{e_j} ) = \pi (K-1, \varvec{x} ) a_j \frac{G(K-1)}{G(K)}, $$

which is substituted into Eq. 10 to get:

$$\begin{aligned} A_i (\varvec{x})&= \frac{G(K-1)}{G(K)} \sum \nolimits _{j \in {\mathcal{{F}}}} \pi (K-1, \varvec{x}) a_j \left[ \mu _j + \frac{\lambda ^t }{K} \right] \mathbf{{M}}[j,i] \\&+ \frac{G(K-1)}{G(K)} \sum \nolimits _{j \in {\mathcal{{I}}}} \pi (K-1, \varvec{x}) (x_j+1) a_j \left[ \mu _j + \frac{\lambda ^t }{K} \right] \mathbf{{M}}[j,i] \\&+\frac{G(K-1)}{G(K)} \sum \nolimits _{j \in {\mathcal{{Z}}}} \pi (K-1, \varvec{x}) a_j \frac{\lambda ^t (x_j+1)}{K} \mathbf{{M}}[j,i].\\ \end{aligned}$$

Taking into account the definition of \(a_j\) for the various queues and the definition of \(\mathbf{{v}}[i]\), we get after substitution:

$$\begin{aligned} A_i (\varvec{x})&= \frac{G(K-1)}{G(K)} \sum \nolimits _{j \in {\mathcal{{F}}}} \pi (K-1, \varvec{x}) \mathbf{{v}}[j] \mathbf{{M}}[j,i] \\&+ \frac{G(K-1)}{G(K)} \sum \nolimits _{j \in {\mathcal{{I}}}} \pi (K-1, \varvec{x}) \mathbf{{v}}[j] \mathbf{{M}}[j,i] \\&+ \frac{G(K-1)}{G(K)} \sum \nolimits _{j \in {\mathcal{{Z}}}} \pi (K-1, \varvec{x}) \mathbf{{v}}[j] \mathbf{{M}}[j,i]\\ \end{aligned}$$

Thus,

$$ A_i (\varvec{x}) = \frac{G(K-1)}{G(K)} \pi (K-1, \varvec{x}) \sum _{j \in {\mathcal{{F}}}\cup {\mathcal{{I}}}\cup {\mathcal{{Z}}}} \mathbf{{v}}[j] \mathbf{{M}}[j,i]. $$

Remember that \(\mathbf{{v}}\) is a left eigenvector of matrix \(\mathbf{{M}}\) for eigenvalue 1. Thus,

$$\begin{aligned} A_i (\varvec{x}) = \frac{G(K-1)}{G(K)} \pi (K-1, \varvec{x}) \mathbf{{v}}[i]. \end{aligned}$$
(11)

Combining this last equation and Eq. 9, we finally get

$$ \pi _{Ai} (K,\varvec{x}) = \pi (K-1,\varvec{x}). $$

and the proof is complete.    \(\square \)

We now present the algorithm to compute the average queue size and the expected sojourn time at each queue. It is similar to a classical MVA algorithm for a single class closed queueing network, as detailed in [2, 20]. Let us first introduce some notation:

  • \(T_i (K) \) is the sojourn time at queue i when the number of customers in the network is K,

  • \(N_i(K) \) is the average queue size at queue i when the number of customers in the network is K,

  • \(\varLambda _i(K) \) is the arrival rate at queue i when the number of customers in the network is K.

The first step is to define an equivalent service time. Remember that some stations (those in \({\mathcal{{Z}}}\)) do not have a server and that the signals trigger customer movements in the stations. Let \(S_i\) be the average equivalent service time:

$$ S_i = \frac{1}{\mu _i 1_{i \in {\mathcal{{I}}}\cup {\mathcal{{F}}}} + \frac{\lambda ^t}{K}} $$

Little's law gives two sets of equations, as in the classical MVA approach:

$$ N_i (K) = \varLambda _i (K) T_i(K) $$

And:

$$ K = \varLambda _i (K) (\sum _j T_j(K) \frac{\mathbf{v}[j]}{\mathbf{v}[i]}) $$

Finally, the theorem on the state seen by an arriving customer allows us to relate the sojourn time to the average queue size:

$$ T_i (K) = (1+N_i(K-1))S_i $$

These three sets of equations allow us to compute \(T_i (K)\), \(N_i (K)\) and \(\varLambda _i (K)\) recursively for increasing values of K, beginning with \(K=1\). When \(K=1\), the quantities are initialized with:

$$ T_i (1) = S_i,~\varLambda _i (1) = \frac{1}{\sum _j S_j \frac{\mathbf{v}[j]}{\mathbf{v}[i]}},~N_i(1) = \frac{S_i}{\sum _j S_j \frac{\mathbf{v}[j]}{\mathbf{v}[i]}}. $$
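The recursion can be implemented directly. The sketch below follows the three sets of equations above, with the convention \(N_i(0)=0\) so that the first pass reproduces the initialization for \(K=1\); reading the equivalent service times and the visit ratios \(\mathbf{v}[j]/\mathbf{v}[i]\) as computed with the total population K fixed is our assumption, as are all names.

```python
import numpy as np

def mva(v, mu, lam_t, K, servers):
    """MVA recursion of Sect. 3 (a sketch; names and conventions are ours).

    v is any positive solution of v = v M and servers[i] is True iff i is in
    F u I.  We take N_i(0) = 0, so the first pass of the loop reproduces the
    initialization T_i(1) = S_i given above.
    """
    S = 1.0 / (np.where(servers, mu, 0.0) + lam_t / K)   # equivalent service times
    N = np.zeros(len(v))
    for k in range(1, K + 1):
        T = (1.0 + N) * S                                 # T_i(k) = (1 + N_i(k-1)) S_i
        Lam = np.array([k / np.dot(T, v / v[i]) for i in range(len(v))])
        N = Lam * T                                       # Little's law: N_i(k) = Lam_i(k) T_i(k)
    return T, N, Lam
```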


4 An Example and Some Possible Extensions

Let us first present a simple example (depicted in Fig. 2) to illustrate some features of the model. The network is decomposed into two sub-networks which are connected by the movements of customers provoked by signals. The first sub-network is a ring containing the queues labeled 1, 2 and 3. The second sub-network is a tandem with two queues labeled 4 and 5. The signals arriving in the first sub-network move a customer to queue 4, while they provoke a migration to queue 1 when they arrive in the second sub-network.

Fig. 2. An example with a two sub-networks topology. The migrations of customers provoked by signals are represented by hatched lines with a dot to indicate the destination.

Station 5 does not have any server. Therefore the customers accumulate there until the station receives a signal which moves one of them to station 1. Such a model was proposed for the optimization of Energy Packet arrivals. Indeed, a very simple idea is to provide energy to stations where packets are waiting. The trigger signal (as mentioned in the introduction) represents an EP, which is needed to move a customer to another part of the network (between the two sub-networks in the example). The ordinary movements of customers based on matrix \(\mathbf{R}\) are not supposed to need energy (or at least the energy consumed is insignificant).
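For illustration, the example of Fig. 2 could be encoded as follows and fed to the sketches of the previous sections; the orientation of the ring, the rates, the population and the choice of single servers at queues 1 to 4 are assumptions made here for the sake of the example.

```python
import numpy as np

# Fig. 2 (one plausible reading): ring 1 -> 2 -> 3 -> 1, tandem 4 -> 5,
# queue 5 without server.  Triggers move a customer to queue 4 when received
# in the ring and to queue 1 when received in the tandem.  Rates, population,
# ring orientation and single servers at queues 1-4 are our assumptions.
R = np.array([[0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0]], dtype=float)   # null row: queue 5 is in Z
T = np.array([[0, 0, 0, 1, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 1, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0]], dtype=float)
mu = np.array([1.0, 1.0, 1.0, 1.0, 0.0])
servers = np.array([True, True, True, True, False])
lam_t, K = 0.5, 10

v, rho, gamma = traffic_solution(R, T, mu, lam_t, K, servers)
T_sojourn, N_mean, Lam = mva(v, mu, lam_t, K, servers)
```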

We now propose some extensions of the model. First, one may replace the external source of triggers by a sub-network of queues with positive and negative customers, as in Fig. 3. This is the original model we first considered, and the following property explains how to decompose it into two parts, the second one being solved by Theorem 1.

Property 3

As proved in [3], an open network of positive and negative customers is quasi-reversible and the flows of signals (whatever they are) leaving the network follow Poisson processes. Therefore we may add to the model of Theorem 1 an open sub-network sending trigger signals to the closed sub-network, instead of assuming an external Poisson arrival of signals.

One may also consider stations with multiple servers; here we only consider stations with 0, 1 or an infinite number of servers. We have to find the associated routing probabilities which provide a product form. We also have to extend the MVA algorithm to deal with these stations. Concerning the routing probabilities of signals for stations without server, we have obtained preliminary results showing that it is possible to have much more general routing functions for these stations. We hope to provide more general results in the near future.

Fig. 3. Mixed topology. The signals are generated by the first sub-network (on the left) and sent to the closed sub-network (on the right). The first sub-network is open and it contains positive and negative customers. Positive customer (resp. negative customer) movements are represented by solid lines (resp. dotted lines). The emission of trigger signals is represented by hatched lines.

Finally, we have considered here the single class version of the problem. G-networks with multiple classes of customers and signals have already been studied in the literature (for instance [8] for Processor Sharing queues). We think that it is possible to extend both the proof of product form and the arrival theorem to networks with multiple classes of customers and signals, at least for Processor Sharing queues and Infinite Server queues.

5 Concluding Remarks

We obtained one of the first closed form expressions for closed networks of queues with signals. It is also, to the best of our knowledge, the first time that an arrival theorem is proved for a closed network with signals. This allows us to generalize the MVA approach to networks with more complex movements of customers. The result raises many interesting theoretical questions for closed networks with triggers, signals, or more general synchronizations of queues (for instance, the closed version of the networks with load balancing presented in [22]). We want to address some of these problems in the near future.