
1 Introduction

Analytically solvable models of sensor networks often exploit Jackson networks and their generalizations, e.g. BCMP and Kelly networks. This seems natural whenever the sensor nodes are deployed in a predefined area and remain in their positions as static sensors. For an in-depth study of an advanced setting see [MAG06]; a more recent study is [WYH12], which elaborates on a simpler network but incorporates refined details. In these settings each node of the Jackson network represents a sensor: Its message queue is modeled by an exponential queueing system, which constitutes the internal structure of the node.

It seems less obvious that Jackson networks can serve as models for networks of mobile sensor nodes, but there is by now a substantial body of studies where this methodology was successfully applied; a survey is [WDW07]. In general the authors proceed as follows: In a first step a single “referenced node” is investigated in detail, collecting the other nodes and further (external) information into the node’s environment. Thereafter the nodes are combined by some approximating procedure to enforce closed-form solutions of the steady state equations; typical examples are [Li11, LTL05, ZL11, QFX+11].

Although in all these papers the authors report that their two-step modeling procedure yields results in good agreement with simulations, the weak point remains that, formally, the models do not fall into the class of product form networks of the Jacksonian type, for which the exact solution of the global balance equations is at hand and yields a simple equilibrium distribution.

It is the aim of the present paper to take a first step on the path to such a theory: To start with a network model for a high dimensional system, to construct a Markov process for the evolution of the system over time, to write down the global balance equations, to solve these equations explicitly without any intermediate decomposition-aggregation steps, and eventually to arrive at a product form solution. In the language of product form calculus we end up with a proof that the system’s coordinate processes are separable.

These coordinates are not only queue length processes; the situation is similar, e.g., to networks in a random environment, where some of the coordinates represent the external environment of a standard Jackson network. Examples of such mixed coordinate processes are described in the survey [Dad15]. Our model likewise will not fit into the class of Jackson or BCMP networks.

We emphasize that our model is a very stylized picture of the motivating real world systems, and we make simplifying assumptions, as is well established in the Jackson or BCMP setting. We will discuss this in detail below.

A special feature of our work is a two-scale modeling approach: We start with a network of moving servers (mobile nodes) and describe one distinguished server in full detail (on the microlevel), while the other servers are described on a macrolevel providing only rough information, which in our case is the overall number of “other” servers present at each vertex of the network.

A natural continuation of the project is to consider more moving servers on the microlevel. This is part of ongoing research.

Related Work: Besides the work mentioned in the second paragraph of this introduction, it will turn out that our model has close connections to sensor networks with static nodes where, to enhance connectivity, additional mobile nodes move in the network’s area; for an investigation concentrating on end-to-end delay see [AK08] and the references there.

We owe a special feature of our model to Gannon, Pechersky, Suhov, and Yambartsev [GPE+14], who investigated models from statistical physics in an environment which has the structure of a Jackson network. Of special interest to our setting is their simplest model: A random walker on the nodes of a standard Jackson network. The interaction between the Jacksonian queues and the random walker is such that the random walker acts as an attractor or a repeller for standard customers with respect to the node where the random walker resides.

The random walker model of Gannon, Pechersky, Suhov, and Yambartsev is not covered by the BCMP or Kelly network framework [BCMP75, Kel79], but it is closely related.

The Paper’s Structure: In Sect. 2 we describe typical scenarios of mobile sensor networks and extract general principles. We emphasize the underlying mobility schemes, e.g. random waypoint regimes. In Sect. 3 we briefly present standard Jackson networks, and in Sect. 4 we describe how the distinguished moving server is added to the Jackson network and prove our main result on separability of this network under stationarity conditions. In Sect. 5 we summarize our findings and indicate directions of further research.

Notation and Conventions:

  • \(\mathbb {R}_{0}^{+}=[0,\infty )\), \(\mathbb {N}=\{1,2,3,\ldots \}\), \(\mathbb {N}_{0}=\{0\}\cup \mathbb {N}\).

  • Node set of our graphs (networks) is \(\overline{J}:=\{1,\dots ,J\}\). The “extended node set” is \(\overline{J}_0 := \{0,1,\dots ,J\}\), where “0” refers to the source and sink of the network.

  • \(\mathbf {e}_j\) is the standard j-th base vector in \(\mathbb N^{\overline{J}}_0\) if \(1\le j\le J\).

  • \(\mathbf {n}= (n_j:j\in \overline{J})\) is the joint queue length vector of the Jackson network.

  • Indicator function \(1_{A}=1\) if A is true, \(1_{A}=0\) otherwise.

  • Kronecker-Delta \(\delta _{xy}:=1_{[x=y]}\).

  • Distances are denoted by d; if necessary, details will be given in the text.

2 Network Scenarios

The scenarios we have in mind encompass moving interdependent entities which are distributed in space. These entities usually carry a complex internal structure. Because we will end up with a generalized queueing network model, we will refer to the various entities, unless otherwise specified, as moving “customers” in a network; details will be introduced below.

Example 2.1

[WWDL07] Given an area which is partitioned into disjoint (non-overlapping) cells (subareas), collected in the cell set \(\overline{J}=\{1,\dots ,J\}\), the customers are “delay/fault-tolerant mobile sensors”, initially distributed randomly over the cells, and each sensor is associated with a home cell. The probability \(r(m;i,j)\) that a sensor with home cell m, staying in cell i, moves to cell j is inversely proportional to the distance between cells m and j, \(r(m;i,j)\simeq d(m,j)^{-1}\). Each sensor has a data queue (holding at most K messages) which receives and sends messages; the sensor’s internal structure is therefore that of a single server queue. A sensor with home cell m generates data and inserts data messages into its queue with rate \(\lambda _m\). Moreover, it obtains data messages from other sensors and forwards them towards a sink of the network. The message queue decreases with a rate which depends on the queue length and, in general, on the status of the nodes in the neighbourhood. In [WDW07][Sect. 3.4] this model of a cell-partitioned area is used to analyse movements in the ZebraNet.
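
The following sketch illustrates one way to realize such a distance-biased movement kernel numerically. It is not the construction of [WWDL07]: the weight \(1/(1+d(m,j))\) is our own regularization so that the home cell \(j=m\) keeps a finite (and largest) weight, and all names are illustrative.

```python
import numpy as np

def movement_matrix(dist, m):
    """Movement kernel for a sensor with home cell m over cells 0,...,J-1.

    dist[i, j] is the hop distance between cells.  The probability of moving
    from the current cell to cell j is taken proportional to 1/(1 + dist[m, j]);
    the '+1' is our regularization so that the home cell j = m gets the largest
    (but finite) weight, mimicking r(m;i,j) ~ d(m,j)^{-1} from the text.
    """
    J = dist.shape[0]
    w = 1.0 / (1.0 + dist[m, :].astype(float))
    return np.tile(w / w.sum(), (J, 1))   # every row i uses the same home-biased weights

# toy example: 4 cells on a line, home cell m = 0
dist = np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
print(movement_matrix(dist, m=0))         # rows sum to 1, mass concentrated near home
```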

Example 2.2

In [BH06][Sect. 5.1] a mobility model with geographic constraints is described: Customers’ movements are restricted “to the pathways in the map”. Customers in this example are non-stationary sensor nodes. The resulting model for the structure of the feasible movements is a random graph. The vertices of the graph usually represent buildings and/or street intersections of a city, and the edges model the streets and freeways of the city between these buildings resp. intersections. Initially the customers are distributed randomly over the edges of the graph. Thereafter, for each customer a destination vertex is chosen randomly; the customer moves on a shortest path over the edges to this destination, stays there for a random time, then selects a new destination vertex for the next movement, and so on.

Example 2.3

Another mobility model in [BH06][Sect. 5.2] with geographic constraints is an “obstacle mobility model”. The obstacles are buildings in the area under consideration, and the pathways are found by constructing the Voronoi diagram with edges between the vertices defined by the buildings. The mobile nodes (customers) are allowed to move between the buildings on the Voronoi pathways only: Whenever a node leaves a vertex (after staying there for a random time) it selects its next building randomly and moves towards this vertex on a shortest path over the edges.

Summarizing the scenarios: In each case a finite set of vertices is connected by a structured set of edges. While in the second and third scenario the buildings and street intersections can naturally be modeled as vertices (points), in the first scenario we generate a vertex by contracting a cell to a point, which is in line with the analytical investigation in [WWDL07]. On this graph the customers (nodes) move according to some randomized regime. The number of customers may be fixed or varying, possibly without bound.

We will concentrate on the case of an unlimited number of customers which arrive from the exterior of the graph and eventually depart. The set of vertices is \(\overline{J}=\{1,\dots ,J\}\); the edges will be determined by the transition graph of the mobility regime. We will assume that the routing decisions according to the mobility scheme of the customers are determined as follows: Whenever a customer leaves vertex i he selects his subsequent vertex j with probability \(r(i,j)\ge 0\), given i independently of anything else. This procedure transforms Examples 2.2 and 2.3 into a (generalized) Jackson network, to be defined in Sect. 3.

This simple Jackson-network-like outcome for Examples 2.2 and 2.3 is possible because the themes of the survey [BH06] are mobility regimes, e.g. random waypoint models and their generalizations. The central aim of the present paper is to extend the Jackson network model to incorporate Example 2.1.

At present, analytical results for this extension seem to be out of reach. We therefore present a simplified network model which distinguishes different levels of detail. Our procedure is guided by the standard approach of investigating a “referenced node” in a network of mobile nodes: In a complex network of customers, pathways, and vertices only one customer is modeled in detail (= “referenced node”), while the influence of the rest of the network is incorporated into a simplified environment of this customer (= “Jackson network”). The referenced node is not a node of this underlying Jackson network, but will be a moving \(M/M/1/\infty \) queue itself; for more details see e.g. [WDW07][Sect. 3.5], or [KD14].

To be more specific: We take one customer (traveling sensor node = Moving Queue = MQ) with an explicit internal message queue. MQ cycles as a test customer forever in the network, while all the other customers around him on the graph’s vertices are only counted as pure Jackson-type customers without internal structure. The challenging part of the model is the interaction of the test customer MQ with the other parts of the system.

From an abstract point of view this approach is a two-scale model in which the test customer is investigated in great detail on the microlevel, while all the other parts of the system are described only on a macrolevel, determined similarly to a mean-field approximation.

Remark: Neglecting the internal queue of the test customer and some further features (which will be introduced later on), this model resembles the structure of so-called mixed BCMP networks [BCMP75]: The mean-field customers are “external”, coming from and going to an exterior world, while the test customer is “internal” to the network, cycling inside the network forever.

3 Standard Jackson Networks

We consider a Jackson network [Jac57] with node set \(\overline{J}:= \{1,\dots ,J\}\). Customers arrive in independent external Poisson streams, at node j with intensity \(\lambda _j \ge 0\); we set \(\lambda =\lambda _1+\ldots +\lambda _J> 0\). Customers are indistinguishable, follow the same rules, and request exponentially(1)-distributed service at all nodes. All these requests constitute an independent family of random variables which are independent of the arrival streams. Nodes are exponential single servers with state dependent service rates and infinite waiting room under the first-come-first-served (FCFS) regime. If there are \(n_i>0\) customers at node i, either in service or waiting, service is provided there with intensity \(\mu _i(n_i)>0\). Routing is Markovian: a customer departing from node i immediately proceeds to node j with probability \(r{(i,j)}\ge 0\), and departs from the network with probability \(r(i,0).\) Taking \(r(0,j)=\lambda _j/\lambda ,\ r(0,0)=0\), we assume that the extended routing matrix \(r=(r(i,j):{i,j\in \overline{J}_0})\) is irreducible. Then the traffic equations

$$\begin{aligned} \eta _j=\lambda _j + \sum _{i=1}^J \eta _i r(i,j),\quad \quad j\in \overline{J}, \end{aligned}$$
(3.1)

have a unique solution which we denote by \(\eta = (\eta _j:j\in \overline{J}).\) We extend the traffic equations (3.1) to the steady state equations of a routing Markov chain by

$$\begin{aligned} \eta _j=\sum _{i=0}^J \eta _i r(i,j),\quad \quad j=0,1,\dots ,J, \end{aligned}$$
(3.2)

which is solved by \(\eta = (\eta _j:j=0,1,\dots ,J),\) where \(\eta _0 := \lambda \) and the other \(\eta _j\) are taken from (3.1). We use \(\eta \) in both meanings and emphasize the latter by calling it the extended traffic solution \(\eta \). In both cases \(\eta \) is usually not a stochastic vector.
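
The traffic equations (3.1) form a linear system and can be solved directly. The following sketch (with toy data of our own) computes \(\eta \) and checks that the extended traffic solution indeed solves (3.2):

```python
import numpy as np

def traffic_solution(lam_vec, r):
    """Solve the traffic equations (3.1): eta_j = lambda_j + sum_i eta_i r(i,j).

    r is the extended routing matrix on {0,...,J} with r[0, j] = lambda_j/lambda,
    lam_vec = (lambda_1, ..., lambda_J).  Uniqueness follows from irreducibility.
    """
    J = len(lam_vec)
    r_nodes = r[1:, 1:]                                   # routing among nodes 1,...,J
    return np.linalg.solve(np.eye(J) - r_nodes.T, lam_vec)

lam_vec = np.array([1.0, 0.5, 0.0])
lam = lam_vec.sum()
r = np.array([[0.0, 2/3, 1/3, 0.0],       # row/column 0 = source/sink
              [0.2, 0.0, 0.5, 0.3],
              [0.4, 0.3, 0.0, 0.3],
              [1.0, 0.0, 0.0, 0.0]])
eta = traffic_solution(lam_vec, r)
eta_ext = np.concatenate(([lam], eta))     # extended traffic solution with eta_0 = lambda
print(eta, np.allclose(eta_ext @ r, eta_ext))   # the extended solution solves (3.2)
```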

Let \({\mathbf X}=(X(t):t\ge 0)\) denote the vector process recording the queue lengths in the network. \(X(t)=(X_1(t),\dots ,X_J(t))\) reads: at time t there are \(X_j(t)\) customers present at node j, either in service or waiting. The assumptions put on the system imply that \({\mathbf X}\) is a Markov process on state space \(\mathbb N^{\overline{J}}_0\). For an ergodic network process \({\mathbf X}\) Jackson’s theorem [Jac57] states that the unique steady state and limiting distribution \(\xi \) on \( \mathbb N^{\overline{J}}_0\) is with normalization constants C(j) for the marginal (over nodes) distributions

$$\begin{aligned} \xi (\mathbf {n})=\xi (n_1,\dots ,n_J) = \prod _{j=1}^{J} \prod _{m=1}^{n_j} \frac{\eta _j}{\mu _j(m)} C(j)^{-1}\, , \qquad \mathbf {n}\in \mathbb N^{\overline{J}}_0. \end{aligned}$$
(3.3)
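
Once the traffic solution is known, the marginal factors in (3.3) can be evaluated numerically. A minimal sketch of our own, with the normalization constant C(j) obtained by truncating the defining series at a level n_max, which is a purely computational assumption:

```python
import numpy as np

def node_marginal(eta_j, mu_j, n_max=200):
    """Marginal queue length distribution of one node under (3.3).

    mu_j(m) are the state dependent service rates; the infinite sum defining
    C(j) is truncated at n_max (a computational assumption; for an ergodic
    node the tail terms are negligible for n_max large enough).
    """
    terms = np.ones(n_max + 1)
    for n in range(1, n_max + 1):
        terms[n] = terms[n - 1] * eta_j / mu_j(n)
    return terms / terms.sum()             # division by the truncated C(j)

# single server with constant rate 1: geometric marginal with ratio eta_j
marg = node_marginal(eta_j=0.8, mu_j=lambda n: 1.0)
print(marg[:4])                            # approximately 0.2 * 0.8**n
```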

Assumption 3.1

Throughout we set the following assumption in force:

The extended routing matrix \(r=(r(i,j):i,j\in \overline{J}_0)\) is reversible with respect to the measure \(\eta = (\eta _j:j\in \overline{J}_0)\), i.e.

$$\begin{aligned} \eta _j r(j,i)= \eta _i r(i,j),\quad \quad \forall i,j=0,1,\dots ,J. \end{aligned}$$
(3.4)
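
For a concrete routing matrix, (3.4) can be checked directly. A small sketch with toy data of our own: symmetric nearest-neighbour routing on a cycle over \(\overline{J}_0\) is doubly stochastic and symmetric, hence reversible with respect to the constant measure \(\eta _j=\lambda \).

```python
import numpy as np

def is_reversible(eta_ext, r, tol=1e-12):
    """Check the detailed balance condition (3.4): eta_j r(j,i) = eta_i r(i,j)."""
    flow = eta_ext[:, None] * r            # flow[j, i] = eta_j r(j, i)
    return np.allclose(flow, flow.T, atol=tol)

# symmetric nearest-neighbour routing on a cycle over {0,...,J} is doubly
# stochastic and symmetric, hence reversible w.r.t. the constant measure lambda
J, lam = 3, 2.0
r = np.zeros((J + 1, J + 1))
for i in range(J + 1):
    r[i, (i + 1) % (J + 1)] = r[i, (i - 1) % (J + 1)] = 0.5
print(is_reversible(np.full(J + 1, lam), r))   # True
```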

4 Injecting a Moving Queue into the Jackson Network

We take the Jackson network from Sect. 3 and enlarge this network by adding a “distinguished customer” called MQ (= Moving Queue = mobile sensor node), who cycles on the nodes of the network forever, governed by an irreducible stochastic matrix \(p=(p(i,j):i,j\in \overline{J})\). In the language of BCMP models MQ is an “internal customer”, while the other customers are “external” and arrive from the source and eventually depart to the sink. MQ is characterized by its position \(k\in \overline{J}\) in the network and its queue length \(\ell \in \mathbb N_0\). The internal service rates (death rates) \(\delta (\ell )>0\) and arrival rates (birth rates) \(\beta (\ell )>0\) of MQ’s internal queue are strictly positive and in general queue length dependent.

It will turn out that a Markov process description of the system is possible with state space \(\mathbb E:= \mathbb N_0^J\times \overline{J}\times \mathbb N_0\). The process of interest is denoted by

$$\begin{aligned}&{\mathbf Z}= ({\mathbf X},{\mathbf V},{\mathbf Y}) =((X(t),V(t),Y(t)):t\ge 0)\\&\qquad \qquad \qquad \qquad \qquad =((X_1(t),\dots ,X_J(t),V(t),Y(t)):t\ge 0), \end{aligned}$$

where \((X_1(t),\dots ,X_J(t),V(t),Y(t))\) indicates that at time t there are \(X_j(t)\) external customers at node \(j\in \overline{J}\), and that MQ is located at node \(V(t)\in \overline{J}\) and has a queue length of \(Y(t)\in \mathbb N_0\). A typical state of the system will be denoted by \((n_1,\dots ,n_J,k,\ell )\).

The dynamics of MQ is influenced by the joint queue length process \({\mathbf X}\) only locally. If MQ resides at node \(V(t)=k\) at time t, additional capacity is provided there to “serve” MQ in parallel to the \(n_k\) other customers present, which are served in the FCFS regime. The additional capacity to serve MQ results in a departure intensity

$$\begin{aligned} \nu ^{(k)}(n_k,\ell ) = e^{-\varphi n_k} \end{aligned}$$
(4.1)

for MQ, with a fixed constant \(\varphi \in (-\infty ,0]\). Upon service completion at k, MQ immediately jumps to node \(k'\in \overline{J}\) with probability \(p(k,k')\). The matrix p is not required to be reversible.

We further define for any \(k\in \overline{J}\) an influence vector

$$\begin{aligned} \varvec{\gamma }(k)=(\gamma _j(k):j\in \overline{J}_0)\in (0,1]^{\overline{J}_0}, \qquad \text {with }~ \gamma _0(k) :=1, \end{aligned}$$
(4.2)

which is in force whenever MQ resides at node k. These influence vectors describe in a compact way the consequences of MQ’s actual position in the network for the other Jackson customers.

Assume that at time t MQ stays at node \(V(t)=k\in \overline{J}\) and the queue length at j is \(X_j(t)= n_j\ge 1\), \(j\in \overline{J}\). Then the customer at the head of the line of node j (if any) is served with intensity \(\mu _j(n_j,k) := \mu _j(n_j)\cdot \gamma _j(k)\). When this customer’s service expires, he departs immediately, directed by the adjusted routing probability vector \(r^{(k)}=(r^{(k)}(j,i):{j,i\in \overline{J}_0}),\) which is defined for the non-diagonal transition probabilities (\(i\ne j\)) by

$$\begin{aligned} r^{(k)}(j,i):= \left\{ \begin{array}{ll} \text {for } j=0: &{}\\ r(0,i)\cdot \gamma _i(k), &{} \text {if } i \ne k, \\ r(0,k)\cdot \gamma _k(k)\cdot e^{\varphi }, &{} \text {if } i=k,\\ \text {for } j\ne 0, j\ne k:&{}\\ r(j,i)\cdot \gamma _i(k), &{} \text {if } i\in \overline{J}_0 \setminus \{j,k\},\\ r(j,k)\cdot \gamma _k(k)\cdot e^{\varphi }, &{} \text {if } i=k,\\ \text {for } j = k: &{}\\ r(k,i)\cdot \gamma _i(k), &{} \text {if } i\in \overline{J}_0 \setminus \{k\}.\\ \end{array} \right. \end{aligned}$$

The diagonal transition probabilities (\(j=i\)) are

$$\begin{aligned} r^{(k)}(j,j):= \left\{ \begin{array}{ll} \text {for } j\in \overline{J}_0\setminus \{k\}:&{}\\ r(j,j)+\sum _{h\ne j,k}r(j,h)\cdot (1 -\gamma _h(k)) + r(j,k)\cdot (1 - \gamma _k(k)\cdot e^{\varphi }), &{}\\ \text {for } j = k: &{}\\ r(k,k)+\sum _{h\ne k}r(k,h)\cdot (1 -\gamma _h(k)).\\ \end{array} \right. \end{aligned}$$

This definition implies for the effective external arrival rates \(\lambda _i{(k)}=\lambda \cdot r^{(k)}(0,i)\)

$$\begin{aligned} =\left\{ \begin{array}{ll} \lambda r(0,i)\cdot \gamma _i(k), &{} \text {if } i\ne k; \\ \lambda r(0,k)\cdot \gamma _k(k)\cdot e^{\varphi }, &{} \text {if } i=k; \\ \lambda \left( \sum _{h\ne k}r(0,h)\cdot (1 -\gamma _h(k)) + r(0,k)\cdot (1 - \gamma _k(k)\cdot e^{\varphi })\right) , &{} \text {if } i=0. \end{array} \right. \end{aligned}$$
(4.3)

If MQ resides at k, then \(\lambda - \lambda _0{(k)}\), with \(\lambda _0{(k)}\) from the case \(i=0\) in (4.3), is the effective arrival rate to the network, due to MQ’s influence on the network while staying at k.
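
The construction of \(r^{(k)}\) can be coded compactly: since every row of r sums to one, placing the probability mass removed from the off-diagonal entries onto the diagonal reproduces exactly the diagonal formula given above. The sketch below (helper names and toy data are ours) builds \(r^{(k)}\) in this way and reads off the rates (4.3) as \(\lambda \cdot r^{(k)}(0,\cdot )\).

```python
import numpy as np

def adjusted_routing(r, gamma_k, k, phi):
    """Adjusted routing matrix r^(k) on {0,...,J} while MQ resides at node k.

    gamma_k = (gamma_0(k),...,gamma_J(k)) with gamma_0(k) = 1.  Off-diagonal
    entries follow the case distinction in the text; because the rows of r sum
    to one, putting the remaining mass on the diagonal reproduces exactly the
    diagonal formula given there (the "reflected" customers stay where they are).
    """
    Jp1 = r.shape[0]
    rk = np.zeros_like(r, dtype=float)
    for j in range(Jp1):
        for i in range(Jp1):
            if i != j:
                thin = gamma_k[i] * (np.exp(phi) if (i == k and j != k) else 1.0)
                rk[j, i] = r[j, i] * thin
        rk[j, j] = 1.0 - rk[j].sum()          # diagonal absorbs the reflected mass
    return rk

# toy data: J = 2 nodes, MQ at k = 1; effective arrival rates (4.3) are
# lambda_i(k) = lambda * r^(k)(0, i); with r(0,0) = 0, entry i = 0 is the
# externally offered rate lambda_0(k) that is lost while MQ resides at k.
lam, phi, k = 1.5, -0.5, 1
r = np.array([[0.0, 0.5, 0.5],
              [0.6, 0.0, 0.4],
              [0.6, 0.4, 0.0]])
gamma_k = np.array([1.0, 0.9, 0.7])
rk = adjusted_routing(r, gamma_k, k, phi)
print(rk.sum(axis=1))        # every row sums to 1
print(lam * rk[0])           # (lambda_0(1), lambda_1(1), lambda_2(1))
```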

Remarks: (i) Consider the case \(\varphi =0\). If MQ stays at vertex k, with the influence vector \(\varvec{\gamma }(k)\) in force, the rerouting probabilities for the other customers can be considered as randomized reflection as defined in [KDO14][Sect. 2.2]: A customer departing from i who selects (with probability \(r(i,j)\)) to enter j is allowed to settle down at j with probability \(\gamma _j(k)\); with probability \(1 - \gamma _j(k)\) he is reflected at j and stays on at i to obtain another service. (ii) This is a randomized generalization of the well-known blocking resolution scheme blocking-after-service (BAS) in connection with repeated service and random destination (rs-rd), which is applied in transmission networks with finite buffers to protect against buffer overflow, see [Onv90][p. 502]. (iii) Randomized reflection has been used successfully to redirect the routing of customers in Jackson networks in a random environment. Rerouting is interpreted there as a reaction of the network’s (local) controllers when the environment conditions change and therefore the capacities in the network change, see [KDO16]. (iv) The factor \(e^\varphi \) can be replaced here and in (4.11) below by any number \(a\in (0,1]\). Setting a to \(e^\varphi \) gives notational credit to the paper [GPE+14], where it seemingly occurred first in this form.

Because of \(\varphi \le 0\), MQ acts as a repeller for the other Jackson customers who want to enter the vertex where MQ resides. On the other hand, the form of \(\nu ^{(k)}(n_k,\ell ) \) pushes MQ to leave a cell or building where many customers are already present. The more involved case of MQ as an attractor, i.e. \(\varphi \ge 0\), is part of our ongoing research.

Example 4.1

For simplicity of presentation we have fixed the FCFS regime for the Jackson network customers, with state dependent service rates. This framework covers the seemingly most important special service rates needed to include Example 2.1 in our setting. Recall that in this scenario the vertex \(j\in \overline{J}\) is a representative of a cell where \(n_j\) mobile sensors are present. It is tempting to assume that the sensors move (almost) independently of one another. This can be modeled by taking \(\mu _j(n_j) =\mu _j\cdot n_j\) for some (regional) cell specific constant \(\mu _j\), i.e. the vertex j acts as an \(M/M/\infty \) node as long as \(j\ne k\).

If \(j=k\), i.e. MQ resides in cell j, a similar conclusion via the classical BCMP or Kelly framework of queueing networks does not seem possible, but nevertheless it is tempting again to visualize the cell as an infinite server with the special property that the internal customer MQ is served with an additional locally state dependent capacity controlled by the function \(e^{-\varphi n_k}\), similar to Kelly’s \(\phi _k(n_k+1)\) [Kel79][p. 58].

Example 4.2

For Examples 2.2 and 2.3 a reference to infinite server systems is even more natural if we recall that in both scenarios the vertices are buildings or lane intersections, where customers stay for a random amount of time, while the edges are lanes between these vertices. The joint movement of entities on a lane is naturally modeled in transportation networks as service at an infinite server. The “service” at the vertices might be modeled by more specific service disciplines, e.g. FCFS at a road intersection.

Example 4.3

The influence vectors \(\varvec{\gamma }(k)=(\gamma _j(k):j\in \overline{J}_0)\) are versatile devices to determine the influence of MQ. Denote by \(d:\overline{J}\times \overline{J}\rightarrow \mathbb N_0\) the distance between vertices of the network, i.e. \(d(i,j)\) is the minimal number of hops to reach vertex j from i, with \(d(i,i)=0\). If \(\varvec{\gamma }(k)\) fulfils \(\gamma _j(k) =1\) unless \(d(k,j)\le 1\), the influence of MQ on the network is restricted to the 1-hop neighbourhood; if \(\gamma _j(k) =1\) unless \(d(k,j)\le 2\), its influence is restricted to the 2-hop range.
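
Such hop-distance based influence vectors can be generated mechanically from the network graph. A minimal sketch, assuming an adjacency-list representation and an illustrative slow-down factor of our own choice, restricting MQ's influence to the 1-hop neighbourhood:

```python
import numpy as np
from collections import deque

def hop_distance(adj):
    """All-pairs hop distances d(i, j) on an unweighted graph via BFS."""
    J = len(adj)
    d = np.full((J, J), np.inf)
    for s in range(J):
        d[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if d[s, v] == np.inf:
                    d[s, v] = d[s, u] + 1
                    q.append(v)
    return d

def one_hop_influence(adj, k, strength=0.7):
    """Influence vector gamma(k) over {0,1,...,J}: entry 0 is 1 by (4.2),
    vertices within one hop of k are slowed down by the factor `strength`
    (an illustrative value), all other vertices are unaffected."""
    d = hop_distance(adj)
    J = len(adj)
    gamma = np.ones(J + 1)
    for j in range(J):
        if d[k - 1, j] <= 1:                  # vertices are 1,...,J; adj is 0-based
            gamma[j + 1] = strength
    return gamma

adj = [[1], [0, 2], [1, 3], [2]]              # path graph on 4 vertices
print(one_hop_influence(adj, k=2))            # only vertices 1, 2, 3 are affected
```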

The strictly positive transition rates of \({\mathbf Z}\) are

$$\begin{aligned} q(\mathbf {n},k,\ell ;\mathbf {n}+\mathbf {e}_i,k,\ell ) = \lambda _i(k), \end{aligned}$$
(4.4)
$$\begin{aligned} q(\mathbf {n},k,\ell ;\mathbf {n}-\mathbf {e}_j,k,\ell ) = 1_{(n_j>0)}\mu _j(n_j,k) r^{(k)}(j,0), \end{aligned}$$
(4.5)
$$\begin{aligned} q(\mathbf {n},k,\ell ;\mathbf {n}-\mathbf {e}_j+\mathbf {e}_i,k,\ell ) = 1_{(n_j>0)}\mu _j(n_j,k) r^{(k)}(j,i), \end{aligned}$$
(4.6)
$$\begin{aligned} q(\mathbf {n},k,\ell ;\mathbf {n},k',\ell )= \nu ^{(k)}(n_k,\ell ) p(k,k'), \end{aligned}$$
(4.7)
$$\begin{aligned} q(\mathbf {n},k,\ell ;\mathbf {n},k,\ell +1) = \beta (\ell ), \end{aligned}$$
(4.8)
$$\begin{aligned} q(\mathbf {n},k,\ell ;\mathbf {n},k,\ell -1) = 1_{(\ell >0)}\delta (\ell ). \end{aligned}$$
(4.9)
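
For simulating \({\mathbf Z}\) or assembling its generator numerically, the rates (4.4)–(4.9) can be enumerated state by state. The following sketch uses our own calling conventions (the inputs, e.g. \(r^{(k)}\) and the effective arrival rates, are assumed to be built as in the sketches above); transitions whose rate evaluates to zero should simply be discarded by the caller.

```python
import numpy as np

def transitions(n, k, l, lam_eff, mu, gamma_k, rk, nu_k, p, beta, delta):
    """Yield the positive transition rates (4.4)-(4.9) out of state (n, k, l).

    Calling conventions are our own: n[j-1] is the queue length at node j,
    lam_eff[i] = lambda_i(k), rk = r^(k), gamma_k[j] = gamma_j(k),
    nu_k = nu^(k)(n_k, l), and mu, beta, delta are the rate functions.
    """
    J = len(n)
    e = np.eye(J, dtype=int)
    for i in range(1, J + 1):                                  # (4.4) external arrival at i
        yield (n + e[i - 1], k, l), lam_eff[i]
    for j in range(1, J + 1):
        if n[j - 1] > 0:
            rate = mu(j, n[j - 1]) * gamma_k[j]
            yield (n - e[j - 1], k, l), rate * rk[j, 0]        # (4.5) departure from j
            for i in range(1, J + 1):
                if i != j:                                     # (4.6) jump from j to i
                    yield (n - e[j - 1] + e[i - 1], k, l), rate * rk[j, i]
    for k2 in range(1, J + 1):
        if k2 != k:                                            # (4.7) MQ moves to k2
            yield (n, k2, l), nu_k * p[k - 1, k2 - 1]
    yield (n, k, l + 1), beta(l)                               # (4.8) message arrives at MQ
    if l > 0:
        yield (n, k, l - 1), delta(l)                          # (4.9) MQ completes a message
```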

The proof of the next theorem is omitted; it runs along the same lines as the proof of Theorem 4.5 below.

Theorem 4.4

Assume \({\mathbf Z}\) to be ergodic. Then its unique stationary and limiting distribution is with normalization constant C

$$\begin{aligned} \pi (\mathbf {n},k,\ell ) = C^{-1} \prod _{g=1}^{J} \prod _{m=1}^{n_g} \frac{\eta _g}{\mu _g(m)} e^{\varphi n_k} \psi _k \prod _{s=0}^{\ell -1}\frac{\beta (s)}{\delta (s+1)},\quad (\mathbf {n},k,\ell )\in \mathbb E. \end{aligned}$$
(4.10)

Here \((\eta _g:g\in \overline{J})\) is taken from (3.1) and \((\psi _k:k\in \overline{J})\) is the unique stationary distribution of MQ’s routing matrix \(p=(p(k,k'):k,k'\in \overline{J})\).
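
The unnormalized weight in (4.10) factorizes into a Jackson part, a position part, and a part for MQ’s queue, which is exactly the separability referred to above. A minimal evaluation sketch with our own storage conventions and toy numbers:

```python
import numpy as np

def pi_unnormalized(n, k, l, eta, mu, phi, psi, beta, delta):
    """Unnormalized stationary weight (4.10) of the state (n, k, l).

    eta[j-1], psi[k-1] and n[j-1] hold the quantities for node j (our 0-based
    storage); mu(j, m), beta(s), delta(s) are rate functions.  The three factors
    below exhibit the separability: Jackson part, MQ's position, MQ's queue.
    """
    jackson = np.prod([eta[g - 1] / mu(g, m)
                       for g in range(1, len(n) + 1) for m in range(1, n[g - 1] + 1)])
    position = np.exp(phi * n[k - 1]) * psi[k - 1]
    mq_queue = np.prod([beta(s) / delta(s + 1) for s in range(l)])
    return jackson * position * mq_queue

# toy numbers: two nodes with constant service rate 1, MQ at node 1
w = pi_unnormalized(np.array([1, 2]), k=1, l=1, eta=[0.8, 0.6],
                    mu=lambda g, m: 1.0, phi=-0.3, psi=[0.5, 0.5],
                    beta=lambda s: 1.0, delta=lambda s: 2.0)
print(w)   # 0.8 * 0.6**2 * exp(-0.3) * 0.5 * (1/2)
```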

We now allow the internal birth and death rates of MQ to be not only queue length dependent, but also location dependent. So, if MQ resides at k, the service rates are \(\delta ^{(k)}(\ell )>0\) and the arrival rates are \(\beta ^{(k)}(\ell )>0\). Furthermore, MQ’s travel transition rates are now

$$\begin{aligned} \nu ^{(k)}(n_k,\ell ) = e^{-\varphi n_k}\cdot \prod _{s=0}^{\ell -1}\frac{\delta ^{(k)}(s+1)}{\beta ^{(k)}(s)}. \end{aligned}$$
(4.11)
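
A direct evaluation sketch of (4.11), with illustrative rate functions of our own:

```python
import numpy as np

def nu(n_k, l, phi, beta_k, delta_k):
    """MQ's travel rate (4.11) when n_k other customers are present at MQ's node
    and MQ's own queue length is l; beta_k, delta_k are the location dependent
    birth/death rate functions of MQ's message queue at that node."""
    correction = np.prod([delta_k(s + 1) / beta_k(s) for s in range(l)])
    return np.exp(-phi * n_k) * correction

print(nu(n_k=3, l=2, phi=-0.5, beta_k=lambda s: 1.0, delta_k=lambda s: 2.0))
# exp(1.5) * (2/1) * (2/1)
```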

The strictly positive transition rates of the system are again (4.4)–(4.6) (unchanged), together with the adapted rates

$$\begin{aligned} q(\mathbf {n},k,\ell ;\mathbf {n},k',\ell )= \nu ^{(k)}(n_k,\ell ) p(k,k'), \end{aligned}$$
(4.12)
$$\begin{aligned} q(\mathbf {n},k,\ell ;\mathbf {n},k,\ell +1) = \beta ^{(k)}(\ell ), \end{aligned}$$
(4.13)
$$\begin{aligned} q(\mathbf {n},k,\ell ;\mathbf {n},k,\ell -1) = 1_{(\ell >0)}\delta ^{(k)}(\ell ). \end{aligned}$$
(4.14)

The global balance equations for \({\mathbf Z}\) read, for \((\mathbf {n},k,\ell )\in \mathbb E\),

$$\begin{aligned}&x(\mathbf {n},k,\ell )\Big [\sum _{i\in \overline{J}} \lambda _i{(k)} + \sum _{i\in \overline{J}} 1_{(n_i>0)} \mu _i(n_i,k) (1- r^{(k)}(i,i))\\&\qquad \qquad \quad +\, \beta ^{(k)}(\ell ) + 1_{(\ell >0)}\delta ^{(k)}(\ell ) + \nu ^{(k)}(n_k,\ell )(1-p(k,k))\Big ]\\&= \sum _{i\in \overline{J}} 1_{(n_i>0)} x(\mathbf {n}-\mathbf {e}_i,k,\ell )\lambda _i(k) + \sum _{j\in \overline{J}} x(\mathbf {n}+\mathbf {e}_j,k,\ell ) \mu _j(n_j+1,k) r^{(k)}(j,0)\\&\quad + \sum _{i\in \overline{J}} 1_{(n_i>0)} \sum _{j\in \overline{J}\setminus \{i\}} x(\mathbf {n}-\mathbf {e}_i +\mathbf {e}_j,k,\ell ) \mu _j(n_j+1,k) r^{(k)}(j,i)\\&\quad +\, 1_{(\ell >0)} x(\mathbf {n},k,\ell -1) \beta ^{(k)}(\ell -1) + x(\mathbf {n},k,\ell +1) \delta ^{(k)}(\ell +1)\\&\quad + \sum _{k'\in \overline{J}\setminus \{k\}} x(\mathbf {n},k',\ell ) \nu ^{(k')}(n_{k'},\ell ) p(k',k).\\ \end{aligned}$$

Theorem 4.5

Assume \({\mathbf Z}\) to be ergodic. Then its unique stationary and limiting distribution is with normalization constant C

$$\begin{aligned} \pi (\mathbf {n},k,\ell ) = C^{-1} \prod _{g=1}^{J} \prod _{m=1}^{n_g} \frac{\eta _g}{\mu _g(m)} e^{\varphi n_k} \psi _k \prod _{s=0}^{\ell -1}\frac{\beta ^{(k)}(s)}{\delta ^{(k)}(s+1)},\quad (\mathbf {n},k,\ell )\in \mathbb E. \end{aligned}$$
(4.15)

\((\eta _g:g\in \overline{J})\) is from (3.1) and \((\psi _k:k\in \overline{J})\) is the unique stationary distribution of MQ’s stochastic routing matrix \(p=(p(k,k'):k,k'\in \overline{J})\).

Proof

We exploit some detailed balance equations which underlie the structure of the global balance equation. We first consider the terms concerning the queue length process \({\mathbf Y}\) of MQ and equate

$$\begin{aligned}&x(\mathbf {n},k,\ell )\Big [\beta ^{(k)}(\ell ) + 1_{(\ell >0)}\delta ^{(k)}(\ell )\Big ]\\= & {} 1_{(\ell >0)} x(\mathbf {n},k,\ell -1) \beta ^{(k)}(\ell -1) + x(\mathbf {n},k,\ell +1) \delta ^{(k)}(\ell +1), \end{aligned}$$

which, after inserting \(\pi \) and canceling \(C^{-1} \prod _{g=1}^{J} \prod _{m=1}^{n_g} \frac{\eta _g}{\mu _g(m)} e^{\varphi n_k} \psi _k\), yields the global balance equations of an ergodic birth-death process with parameters \(\beta ^{(k)}(\ell )\) and \(\delta ^{(k)}(\ell ),\) respectively, parametrized by \((\mathbf {n},k)\). Next, we equate

$$\begin{aligned} x(\mathbf {n},k,\ell )\Big [\nu ^{(k)}(n_k,\ell )(1-p(k,k))\Big ] = \sum _{k'\in \overline{J}\setminus \{k\}} x(\mathbf {n},k',\ell ) \nu ^{(k')}(n_{k'},\ell ) p(k',k), \end{aligned}$$

which after inserting \(\pi \) and canceling \(C^{-1} \prod _{g=1}^{J} \prod _{m=1}^{n_g} \frac{\eta _g}{\mu _g(m)}\) yields

$$\begin{aligned}&e^{\varphi n_k} \psi _k \prod _{s=0}^{\ell -1}\frac{\beta ^{(k)}(s)}{\delta ^{(k)}(s+1)} \Big [\prod _{s=0}^{\ell -1}\frac{\delta ^{(k)}(s+1)}{\beta ^{(k)}(s)} e^{-\varphi n_k}(1-p(k,k))\Big ]\\= & {} \sum _{k'\in \overline{J}\setminus \{k\}} e^{\varphi n_{k'}} \psi _{k'} \prod _{s=0}^{\ell -1}\frac{\beta ^{(k')}(s)}{\delta ^{(k')}(s+1)} \Big [\prod _{s=0}^{\ell -1}\frac{\delta ^{(k')}(s+1)}{\beta ^{(k')}(s)}e^{-\varphi n_{k'}}p(k',k)\Big ]. \end{aligned}$$

This boils down to the balance equation of MQ’s routing matrix \(p=(p(k,k'):k,k'\in \overline{J})\) which by definition is solved by \((\psi _k:k\in \overline{J})\). The remaining terms are

$$\begin{aligned}&x(\mathbf {n},k,\ell )\Big [\sum _{i\in \overline{J}} \lambda _i(k) + \sum _{i\in \overline{J}} 1_{(n_i>0)} \mu _i(n_i,k) (1- r^{(k)}(i,i))\Big ]\\&= \sum _{i\in \overline{J}} 1_{(n_i>0)} x(\mathbf {n}-\mathbf {e}_i,k,\ell )\lambda _i(k) + \sum _{j\in \overline{J}} x(\mathbf {n}+\mathbf {e}_j,k,\ell ) \mu _j(n_j+1,k) r^{(k)}(j,0)\\&+ \sum _{i\in \overline{J}} 1_{(n_i>0)} \sum _{j\in \overline{J}\setminus \{i\}} x(\mathbf {n}-\mathbf {e}_i +\mathbf {e}_j,k,\ell ) \mu _j(n_j+1,k) r^{(k)}(j,i). \end{aligned}$$

Note that \((k,\ell )\) remains constant throughout. Canceling \(C^{-1} \psi _k \prod _{s=0}^{\ell -1}\frac{\beta ^{(k)}(s)}{\delta ^{(k)}(s+1)}\) and multiplying by \(\left( \prod _{g=1}^{J} \prod _{m=1}^{n_g} \frac{\eta _g}{\mu _g(m)}\right) ^{-1}\) we obtain the equation

$$\begin{aligned}&e^{\varphi n_k}\Big [\sum _{i\in \overline{J}\setminus \{k\}} \lambda r(0,i)\cdot \gamma _i(k) + \lambda r(0,k)\cdot \gamma _k(k)\cdot e^{\varphi }\\&\qquad \quad + \sum _{i\in \overline{J}\setminus \{k\}} 1_{(n_i>0)}\mu _i(n_i) \gamma _i(k) \big (1- r(i,i)\\&\qquad \quad -\sum _{h\in \overline{J}\setminus \{i,k\}}r(i,h)(1 -\gamma _h(k)) - r(i,k) (1 - \gamma _k(k)\cdot e^{\varphi })\big )\\&\qquad \quad + 1_{(n_k>0)}\mu _k(n_k) \cdot \gamma _k(k) \Big (1- r(k,k)-\sum _{h\in \overline{J}\setminus \{k\}}r(k,h) (1 -\gamma _h(k)) \Big )\Big ]\\= & {} e^{\varphi n_k} \sum _{i\in \overline{J}\setminus \{k\}} 1_{(n_i>0)} \frac{\mu _i(n_i)}{\eta _i}\lambda r(0,i)\cdot \gamma _i(k)\\&\,+\, e^{\varphi (n_k-1)} 1_{(n_k>0)} \frac{\mu _k(n_k)}{\eta _k}\lambda r(0,k)\cdot \gamma _k(k)e^{\varphi }\\&\,+\, e^{\varphi n_k} \sum _{j\in \overline{J}\setminus \{k\}} \frac{\eta _j}{\mu _j(n_j+1)} \mu _j(n_j+1) \gamma _j(k) r(j,0)\\&\,+\,e^{\varphi (n_k+1)} \frac{\eta _k}{\mu _k(n_k+1)} \mu _k(n_k+1) \gamma _k(k) r(k,0)\\&\,+\,e^{\varphi n_k} \sum _{i\in \overline{J}\setminus \{k\}} 1_{(n_i>0)} \sum _{j\in \overline{J}\setminus \{k, i\}} \frac{\mu _i(n_i)}{\eta _i}\frac{\eta _j}{\mu _j(n_j+1)} \mu _j(n_j+1) \gamma _j(k) r(j,i) \gamma _i(k)\\&\,+\,e^{\varphi (n_k-1)} 1_{(n_k>0)} \sum _{j\in \overline{J}\setminus \{k\}} \frac{\mu _k(n_k)}{\eta _k}\frac{\eta _j}{\mu _j(n_j+1)} \mu _j(n_j+1) \gamma _j(k) e^{\varphi } r(j,k) \gamma _k(k)\\&\,+\,e^{\varphi (n_k+1)} \sum _{i\in \overline{J}\setminus \{k\}} 1_{(n_i>0)} \frac{\mu _i(n_i)}{\eta _i}\frac{\eta _k}{\mu _k(n_k+1)} \mu _k(n_k+1) \gamma _k(k) r(k,i) \gamma _i(k). \end{aligned}$$

By canceling \(e^{\varphi n_k}\) we obtain after some algebraic manipulations

$$\begin{aligned}&\Big [\sum _{i\in \overline{J}\setminus \{k\}} \overbrace{\lambda r(0,i)\cdot \gamma _i(k)}^{(2)} + \overbrace{\lambda r(0,k)\cdot \gamma _k(k)\cdot e^{\varphi }}^{(1)}\\&\quad + \sum _{i\in \overline{J}\setminus \{k\}} 1_{(n_i>0)}\mu _i(n_i) \gamma _i(k) \big (\sum _{h\in \overline{J}\setminus \{i,k\}}\overbrace{r(i,h)\gamma _h(k)}^{(6)} + \overbrace{r(i,k) \gamma _k(k)\cdot e^{\varphi }}^{(5)} +\overbrace{r(i,0)}^{(3)}\big )\\&\quad +\,1_{(n_k>0)}\mu _k(n_k) \cdot \gamma _k(k) \big (\sum _{h\in \overline{J}\setminus \{k\}}\overbrace{r(k,h)\gamma _h(k)}^{(7)} + \overbrace{r(k,0)}^{(4)}\big )\Big ]\\= & {} \sum _{i\in \overline{J}\setminus \{k\}} \overbrace{1_{(n_i>0)} \frac{\mu _i(n_i)}{\eta _i}\lambda r(0,i)\cdot \gamma _i(k)}^{(3)} + \overbrace{1_{(n_k>0)} \frac{\mu _k(n_k)}{\eta _k}\lambda r(0,k)\cdot \gamma _k(k)}^{(4)}\\&\quad + \sum _{j\in \overline{J}\setminus \{k\}} \overbrace{{\eta _j} \gamma _j(k) r(j,0)}^{(2)} +\overbrace{e^{\varphi } {\eta _k} \gamma _k(k) r(k,0)}^{(1)}\\&\quad + \sum _{i\in \overline{J}\setminus \{k\}} 1_{(n_i>0)} \sum _{j\in \overline{J}\setminus \{k, i\}} \overbrace{\frac{\mu _i(n_i)}{\eta _i} {\eta _j} \gamma _j(k) r(j,i) \gamma _i(k)}^{(6)}\\&\quad + 1_{(n_k>0)} \sum _{j\in \overline{J}\setminus \{k\}} \overbrace{\frac{\mu _k(n_k)}{\eta _k}{\eta _j} \gamma _j(k) r(j,k) \gamma _k(k)}^{(7)}\\&\quad + \sum _{i\in \overline{J}\setminus \{k\}} \overbrace{e^{\varphi } 1_{(n_i>0)} \frac{\mu _i(n_i)}{\eta _i} {\eta _k} \gamma _k(k) r(k,i) \gamma _i(k)}^{(5)}. \end{aligned}$$

At this point we have to invoke Assumption 3.1 and equate terms pairwise with the help of the reversibility of r. We equate the indicated partial sums and obtain, after premultiplication with the associated factors from outside the brackets, the following valid expressions.

Because of \(\lambda =\eta _0\) the next four lines follow:

$$\begin{aligned}&\qquad \qquad \qquad \quad \lambda r(0,k)\cdot \gamma _k(k)\cdot e^{\varphi } \mathop {=}\limits ^{(1)} e^{\varphi } {\eta _k} \gamma _k(k) r(k,0),\\&\forall {i\in \overline{J}\setminus \{k\}}:\quad \lambda r(0,i)\cdot \gamma _i(k) \mathop {=}\limits ^{(2)} {\eta _i} \gamma _i(k) r(i,0),\\&\forall {i\in \overline{J}\setminus \{k\}}: \quad \eta _i 1_{(n_i>0)}\mu _i(n_i) \cdot \gamma _i(k) r(i,0)\mathop {=}\limits ^{(3)} 1_{(n_i>0)} {\mu _i(n_i)}\lambda r(0,i)\cdot \gamma _i(k),\\&\qquad \qquad \qquad \quad {\eta _k} 1_{(n_k>0)}\mu _k(n_k) \cdot \gamma _k(k) r(k,0) \mathop {=}\limits ^{(4)} 1_{(n_k>0)} {\mu _k(n_k)}\lambda r(0,k)\cdot \gamma _k(k),\\ \end{aligned}$$

and the next lines are obvious from reversibility:

$$\begin{aligned}&\forall {i\in \overline{J}\setminus \{k\}}:\quad {\eta _i} 1_{(n_i>0)} {\mu _i(n_i)}\gamma _i(k) r(i,k) \gamma _k(k) e^{\varphi }\\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \mathop {=}\limits ^{(5)} e^{\varphi } 1_{(n_i>0)} {\mu _i(n_i)} {\eta _k} \gamma _k(k) r(k,i) \gamma _i(k),\\&\forall {i,j\in \overline{J}\setminus \{k\}}: {\eta _i} 1_{(n_i>0)}\mu _i(n_i)\gamma _i(k) r(i,j)\gamma _j(k)\\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \mathop {=}\limits ^{(6)} 1_{(n_i>0)} {\mu _i(n_i)} \eta _j \gamma _j(k) r(j,i) \gamma _i(k),\\&\forall {j\in \overline{J}\setminus \{k\}}: {\eta _k} 1_{(n_k>0)}\mu _k(n_k)\gamma _k(k) r(k,j)\gamma _j(k)\\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \mathop {=}\limits ^{(7)} 1_{(n_k>0)} {\mu _k(n_k)} \eta _j \gamma _j(k) r(j,k) \gamma _k(k). \end{aligned}$$

This validates \(\pi \) as a solution of the global balance equations of \({\mathbf Z}\).
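
As a sanity check, the balance can also be verified numerically. The sketch below uses toy parameters of our own choice satisfying Assumption 3.1 (a symmetric routing cycle, so that \(\eta _j=\lambda \), constant influence factors, uniform MQ routing) and confirms that the unnormalized weight (4.15) balances inflow and outflow at an interior state up to floating point error; all helper names are ours.

```python
import numpy as np

# Numerical sanity check of Theorem 4.5 (toy parameters of our own choice):
# plug the unnormalized product form (4.15) into the global balance equation
# at one interior state and compare inflow and outflow.
J, lam, phi = 3, 2.0, -0.5
r = np.zeros((J + 1, J + 1))                 # symmetric cycle on {0,...,J}: doubly
for i in range(J + 1):                       # stochastic, hence reversible w.r.t.
    r[i, (i + 1) % (J + 1)] = r[i, (i - 1) % (J + 1)] = 0.5
eta = np.full(J + 1, lam)                    # ... the extended traffic solution eta_j = lambda

mu = lambda j, m: 1.5 * m                                 # infinite-server nodes (Example 4.1)
gamma = np.full((J + 1, J + 1), 0.8); gamma[:, 0] = 1.0   # gamma[k, j] = gamma_j(k)
p = np.full((J, J), 1.0 / J); psi = np.full(J, 1.0 / J)   # MQ routing and its stationary dist.
beta = lambda k, l: 1.0 + 0.1 * k                         # location dependent birth rates ...
delta = lambda k, l: 2.0 + 0.2 * k                        # ... and death rates of MQ's queue

def nu(k, nk, l):                                    # MQ's travel rate (4.11)
    return np.exp(-phi * nk) * np.prod([delta(k, s + 1) / beta(k, s) for s in range(l)])

def rk(k, j, i):                                     # adjusted routing r^(k)(j, i), i != j
    return r[j, i] * gamma[k, i] * (np.exp(phi) if (i == k and j != k) else 1.0)

def x(n, k, l):                                      # unnormalized product form (4.15)
    jackson = np.prod([eta[g] / mu(g, m)
                       for g in range(1, J + 1) for m in range(1, n[g - 1] + 1)])
    mq_queue = np.prod([beta(k, s) / delta(k, s + 1) for s in range(l)])
    return jackson * np.exp(phi * n[k - 1]) * psi[k - 1] * mq_queue

def outflow(n, k, l):                                # left hand side of global balance
    tot = sum(lam * rk(k, 0, i) for i in range(1, J + 1))
    for j in range(1, J + 1):
        if n[j - 1] > 0:
            tot += mu(j, n[j - 1]) * gamma[k, j] * sum(
                rk(k, j, i) for i in range(J + 1) if i != j)
    tot += beta(k, l) + (delta(k, l) if l > 0 else 0.0)
    tot += nu(k, n[k - 1], l) * sum(p[k - 1, q] for q in range(J) if q != k - 1)
    return x(n, k, l) * tot

def inflow(n, k, l):                                 # right hand side of global balance
    e, tot = np.eye(J, dtype=int), 0.0
    for i in range(1, J + 1):
        if n[i - 1] > 0:
            tot += x(n - e[i - 1], k, l) * lam * rk(k, 0, i)          # arrival at i
            for j in range(1, J + 1):
                if j != i:                                            # jump j -> i
                    tot += (x(n - e[i - 1] + e[j - 1], k, l)
                            * mu(j, n[j - 1] + 1) * gamma[k, j] * rk(k, j, i))
    for j in range(1, J + 1):                                         # departure from j
        tot += x(n + e[j - 1], k, l) * mu(j, n[j - 1] + 1) * gamma[k, j] * rk(k, j, 0)
    for k2 in range(1, J + 1):
        if k2 != k:                                                   # MQ moves k2 -> k
            tot += x(n, k2, l) * nu(k2, n[k2 - 1], l) * p[k2 - 1, k - 1]
    if l > 0:
        tot += x(n, k, l - 1) * beta(k, l - 1)
    tot += x(n, k, l + 1) * delta(k, l + 1)
    return tot

state = (np.array([2, 0, 3]), 2, 1)
print(np.isclose(outflow(*state), inflow(*state)))   # True
```

Replacing r by a routing matrix violating Assumption 3.1 will in general break the equality, which matches the role of reversibility stressed in the discussion below.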

Example 4.6

We proved the theorems with service rates \(\mu _j(n_j,k) = \mu _j(n_j) \gamma _j(k)\) when \(n_j\) Jackson network customers stay at vertex j and MQ resides at k. In Examples 4.1 and 4.2 we demonstrated that this covers in particular the natural infinite server setting for the Jackson customers. This leads to the observation that \(\gamma _j(k)\) reduces the service intensity of the individual customers:

\(\mu _j(n_j,k) = (\mu _j \gamma _j(k)) n_j.\)

Discussion of the Modeling Assumptions: (i) The process \({\mathbf Z}\) is not reversible, although the pure Jackson network process without MQ is reversible with respect to the stationary distribution \(\xi (\mathbf {n})\) from (3.3) by Assumption 3.1. Reversibility of the underlying pure Jackson network process seems indispensable at the present stage of development. This clearly restricts the applicability of the result of Theorem 4.5; for example, it excludes one-way lanes for the traveling nodes. On the other hand, starting with this case is worthwhile for laying the ground for eventually more general settings. (ii) Introducing influence vectors \(\varvec{\gamma }(k)=(\gamma _j(k):j\in \overline{J}_0)\in (0,1]^{\overline{J}_0}\), which are in force whenever MQ resides at node k, together with MQ’s repeller function, goes back to ideas in [GPE+14, KDO14]. There, this device controls interactions between different components of a multidimensional system. The application of this scheme in the context of this paper is still restricted due to \(0<\gamma _j(k)\le 1\), which means that the influenced service rates satisfy \(\mu _j(n_j,k)\le \mu _j(n_j)\) for all \(n_j\). The power of this scheme will become apparent when \(0\le \gamma _j(k)< \infty \) is admitted, as demonstrated in [KDO16]. With \(\varphi >0\), in the context of the models considered here (e.g. in Example 2.1), this means that whenever MQ is present at k, the rate of other customers entering cell k is increased. In the framework of [AK08] this would increase the connectivity of the network. This is part of our future research. (iii) The most critical point is, in our opinion, the choice of the portion of the vertices’ capacity dedicated to MQ. For the situation of Theorem 4.4 we have taken \(\nu ^{(k)}(n_k,\ell ) =: \widetilde{\nu }^{(k)}(n_k)=e^{-\varphi n_k}\) from [GPE+14]. Studying the balance equations there (and in our more complicated framework as well) reveals that this choice is essential for obtaining the product form steady state via reversibility. We mention that in [GPE+14] there is no “moving queue” but only a “random walker” with reversible routing and without any internal structure. MQ’s routing matrix p is not required to be reversible. Moreover, in the framework of Theorem 4.4 we do not need additional special assumptions. These come into play if the development of MQ’s internal message queue is location dependent (i.e. \(\beta ^{(k)}(\cdot ), \delta ^{(k)}(\cdot )\)), which in our opinion is desirable. We pay for this by requiring the complicated service rates \(\nu ^{(k)}(n_k,\ell )\) in (4.11).

Example 4.7

Consider the scenario from [WWDL07] in Example 2.1 and take a distinguished moving node in the cell-partitioned area. If we want to reduce the other nodes to customers in a network, we are faced with the problem that the routing of these customers depends on the position of their home cell, i.e. we need customer types which carry this information.

Our present oversimplified model does not offer this feature.

Within our formalism it is possible to take the distinguished node’s routing as the matrix p and then construct an averaged routing matrix r for the other customers, where the averaging is done according to weights representing the population sizes of the home cells, as sketched below. A similar averaging is necessary for the mean sojourn times of these other customers in the different cells they visit. Obtaining these averaged values requires iterative procedures because we admit arrivals from and departures to the exterior for the other customers.
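
A minimal sketch of such an averaging step; the home-cell kernels and the population weights are purely illustrative inputs of our own:

```python
import numpy as np

def averaged_routing(r_home, weights):
    """Average the home-cell dependent movement kernels r_home[m] into a single
    routing matrix, weighting home cell m by its population share weights[m].
    A rough sketch of the aggregation step discussed above."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return np.tensordot(weights, np.asarray(r_home), axes=1)   # sum_m w_m * r_home[m]

# toy data: two home cells with kernels R0, R1 and population shares 3 : 1
R0 = np.array([[0.7, 0.3], [0.6, 0.4]])
R1 = np.array([[0.2, 0.8], [0.1, 0.9]])
print(averaged_routing([R0, R1], weights=[3, 1]))   # rows still sum to 1
```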

5 Conclusion and Further Research

We have developed a two-scale model for a network of mobile nodes, guided by scenarios from the literature on mobile sensor networks. The main outcome is the stationary distribution of the system, which exhibits its separability in equilibrium.

Further research will include incorporating the case of the mobile customer being an attractor for the other customers, allowing different classes of external customers with individual, class dependent service time distributions, and investigating on the microlevel two or more internal moving queues injected into the Jackson network and their interaction.

A seemingly hard problem will be to remove the assumption of reversibility of the underlying Jackson network.