Abstract
We introduce AutoCAT, a class of algorithms to automatically generate exact and approximate product-form solutions for large Markov processes that cannot be solved by direct numerical methods. Focusing on models that can be described as cooperating Markov processes, which include queueing networks and stochastic Petri nets as special cases, it is shown that finding a global optimum for a nonconvex quadratic program is sufficient for determining a product-form solution. Such problems are notoriously hard to solve due to the inherent difficulty of searching over nonconvex sets. Using a potential theory for Markov processes, convexification techniques, and a class of linear constraints that follow from stochastic characterization of product-form solutions, we obtain a family of linear programming relaxations that can be solved efficiently. A sequence of these linear programs is solved under increasingly tighter constraints to determine the exact product-form solution of a model when one exists. This approach is then extended to obtain approximate solutions for non-product-form models. Finally, our new techniques are validated with examples and increasingly complex case studies that show the effectiveness of the method on both conventional and novel performance models.
Introduction
Performance modeling often involves the abstraction of the various components of a system under study and their mutual interactions as a Markov process. Although there exist several high-level formalisms for specifying particular classes of Markov processes, such as queueing networks or stochastic Petri nets, the state space explosion problem typically limits our ability to compute metrics related to the long-term behavior of the system. A notable exception is the class of product-form models, in which the equilibrium probability of a state is a scaled product of the marginal state probabilities of the Markov processes that represent the individual components of the system. Foremost examples of models enjoying a product form include open and closed queueing networks with single and multiple service classes [6, 29], possibly supporting various forms of blocking [4] and different arrival types [17, 18, 20], stochastic Petri nets [3], Markovian process algebras [24], and stochastic automata networks [19].
We introduce AutoCAT, an optimization-based technique that automatically constructs exact or approximate product forms for a large class of performance models. We consider models that may be described as a cooperation (i.e., synchronization) of Markov processes over a given set of named actions [40]. This class of processes includes as special cases queueing networks, stochastic Petri nets, stochastic automata, and several other model types that are popular in performance evaluation [27]. Although certain Markov processes enjoy a number of useful properties for determining a product-form solution, such as reversibility [31], quasireversibility [30, 37], and local balance [38], cooperating Markov processes additionally benefit from their compositional structure, which is conducive to recursive analysis and the reversed compound agent theorem (RCAT) in particular [22, 23].
As we discuss in the section titled “Preliminaries,” RCAT defines a set of sufficient conditions for cooperating Markov processes to enjoy a product-form solution. To the best of our knowledge, RCAT is the most general formalism available to construct product forms by means of simple conditions that do not require direct solution of the joint probability distribution of the model at equilibrium. We leverage this result to show that RCAT product-form conditions are equivalent to a nonlinear optimization problem with nonconvex quadratic constraints. Nonconvex global optimization is \(\mathcal{N}\mathcal{P}\)-hard in general [11], and thus we derive efficient linear programming (LP) relaxations that are solved sequentially to find the exact product form of a model when one exists. Since the length of such a sequence depends on the tightness of the LP relaxations, we define a hierarchy of increasingly tighter linear programs, based on a potential theory for Markov processes [12], convexification techniques [36], and a set of linear constraints that we derive from the RCAT product-form conditions.
Most importantly, this procedure is extended to the approximate analysis of non-product-form Markov processes, which arise in the vast majority of practical systems. It is applied first in “toy” examples to illustrate the method and then in case studies to validate its main features and to assess its numerical tractability and accuracy. Among these models, it is shown that such approximations may be useful to investigate closed queueing network models with phase-type (PH) distributed service times [8]. Recently, it was shown in [14, 15] that such models may be approximated quite accurately by approximate product-form solutions. Here we provide an example illustrating that the AutoCAT approximation may provide improved accuracy with respect to the methods of Casale and Harrison [15] and Casale et al. [14].
Our method provides one of the first available non-application-specific algorithms for product-form analysis; moreover, it simultaneously constructs workable approximations for the equilibrium probabilities of interacting Markov processes without a product form. A preliminary constructive tool of this type was proposed by Argent-Katwala [2]: a symbolic solver for product forms based on the sufficient conditions of RCAT. This tool constructed product forms for Markov processes composed from others for which a product form was already known, but it was not able to detect whether a given Markov process admits a product-form solution. Moreover, the cost of symbolic linear algebra inevitably makes the technique applicable only to simple processes. Buchholz [9, 10] defines the first general-purpose automatic technique for identifying exact and approximate product-form solutions in stochastic models. The methodology minimizes a residual error norm using an optimization technique based on efficient quadratic programming. Using the stochastic automata network (SAN) formalism, Buchholz’s method uses a Kronecker representation of the cooperations to avoid generating the joint state space. Then, an iterative technique searches for a local optimum that is used to compute an approximate product form. Balsamo et al. [34] and Marin and Bulo [5] propose INAP, a fixed-point method to estimate reversed rates in RCAT product forms. The main benefit of the INAP algorithm is computational efficiency, which enables the analysis of large models.
To summarize, our main contributions are as follows:
-
We present an algorithm to automatically decide whether a given Markov model has a product-form solution and, if so, to compute it without solving the underlying global balance equations. Specifically, in the section “Does a Product Form Exist?,” we introduce a formulation of the problem in the form of a quadratically constrained optimization, and we obtain efficient linear relaxations in the section “Linearization Methodology.”
-
In the section “Exact Product-Form Construction,” we show that this algorithm guarantees, within the boundaries of the numerical tolerance of the optimizer, that a product-form solution will be found if it exists.
-
Next, approximation techniques stemming from our methodology are developed in the section “Automated Approximations.” Such approximations can be applied to a wide class of performance models that are represented as cooperations of Markov processes.
-
Our methodology is validated with small examples and case studies in the section “Examples and Case Studies,” which testify to the effectiveness of the approach on performance models of practical interest. These include stochastic Petri nets and closed queueing networks with PH distributed service times.
Preliminaries
We consider a collection of M Markov processes that cooperate over a set of A actions. Each cooperating process might represent, for example, a queue, a stochastic automaton, a Petri net, or an agent in a stochastic process algebra. Process k is defined on a set of N k ≥ 1 states, so that the joint state space of the Markov process comprising the cooperation has up to N prod = ∏ k N k states. Process indices are k, m = 1, …, M with m ≠ k; action indices are a, b, c = 1, …, A; and (marginal) state indices for process k are n k , n′ k = 1, …, N k . An action a labels a synchronizing transition in a pairwise cooperation between two processes k and m ≠ k, which can only take place in both processes simultaneously.
We follow the convention of defining active and passive roles for each action a in the pair of processes it synchronizes. The set of active (respectively passive) actions for process k is denoted by \({\mathcal{A}}_{k}\) (respectively \({\mathcal{P}}_{k}\)).
Consider an action a such that \(a \in {\mathcal{A}}_{k}\) and \(a \in {\mathcal{P}}_{m}\), i.e., which is active in k and passive in m≠k. Further, assume that when action a is enabled, it triggers with rate μ a state transitions n k → n k ′ and n m → n m ′ in processes k and m, respectively. We summarize this information in rate matrices A a and P a of orders N k and N m , respectively. That is, we set the values A a [n k , n k ′] = μ a and P a [n m , n m ′] = p m for each pair (n k , n k ′) and (n m , n m ′) where a is enabled, where p m is the probability of the transition n m → n m ′ in the passive process when action a takes place, and M[i, j] stands for the element at row i and column j of matrix M. Note that the rate of the passive action is unspecified; it is assigned subsequently according to the equilibrium behavior of the active process (i.e., process k here) [22]. Observe also that the rates of transitions n k → n k lie on the diagonal of A a . Such rates define hidden transitions, which do not alter the local state of the active process but can affect the local state of the passive process m. Finally, we account for local state jumps that are not due to cooperations, which we call local transitions. The rates of all local transitions for process k are stored in the N k ×N k matrix L k .
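To make the notation concrete, here is a small, purely illustrative setup (all names, sizes, and rates are our own toy choices, not taken from the text): two truncated queues, where process 1 is active on action a (a departure) and process 2 is passive (an arrival).

```python
import numpy as np

# Toy cooperation: process 1 (a queue with capacity 2, states 0..2) is
# active on action a; process 2 (another capacity-2 queue) is passive.
N1, N2 = 3, 3
mu_a = 2.0   # rate of the active action a (a departure from process 1)
lam = 1.0    # rate of local transitions of process 1 (external arrivals)

# A_a[n1, n1'] = mu_a wherever action a is enabled in process 1.
A_a = np.zeros((N1, N1))
for n in range(1, N1):
    A_a[n, n - 1] = mu_a

# P_a[n2, n2'] = probability of the passive jump n2 -> n2' when a fires.
P_a = np.zeros((N2, N2))
for n in range(N2 - 1):
    P_a[n, n + 1] = 1.0
P_a[N2 - 1, N2 - 1] = 1.0   # arrivals to a full passive queue are lost

# L_1 stores the rates of local (non-cooperating) transitions of process 1.
L_1 = np.zeros((N1, N1))
for n in range(N1 - 1):
    L_1[n, n + 1] = lam
```

Note that every row of P_a sums to one, so the passive action is enabled in every state of process 2.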
Product-Form Solutions
We assume the joint process underlying the cooperation to be ergodic, and the goal of our analysis is to determine a product-form expression for the model’s joint state probability function at equilibrium. Unless otherwise stated, we always refer to the RCAT product form defined in [25]; note that this is a superset of the product forms that can be obtained by quasireversibility [35]. Our goal is to find marginal probability vectors π k (n k ) for each cooperating process k such that the equilibrium solution of the model enjoys the product-form expression
$$\alpha ({n}_{1},\ldots, {n}_{k},\ldots, {n}_{M}) = {G}^{-1}\,{\pi }_{1}({n}_{1})\cdots {\pi }_{k}({n}_{k})\cdots {\pi }_{M}({n}_{M}),\qquad (4.1)$$
where G is a normalizing constant and α(n 1, …, n k , …, n M ) is the joint state probability function. Under the RCAT methodology, finding a product-form solution such as (4.1) requires one to analyze each process k in “isolation,” i.e., to study its transitions over the marginal state space \({\mathcal{S}}_{k} =\{ {n}_{k}\,\vert \,0 \leq {n}_{k} < {N}_{k}\}\). If process k cooperates passively on one or more actions \(b \in {\mathcal{P}}_{k}\), then the (passive) rates at which these actions occur in isolation are undefined, and process k cannot be solved for the marginal probabilities π k (n k ) before such rates are assigned. This is because the rate of a passive action in process k may depend on the state of the cooperating process m, as we elaborate subsequently. The RCAT theorem introduced in [22, 23] establishes that, if we can define a generator matrix Q k on \({\mathcal{S}}_{k}\) satisfying conditions RC1, RC2, and RC3 stated at the end of this section, then the equilibrium vectors π k satisfying π k Q k = 0 and π k 1 = 1 for 1 ≤ k ≤ M provide a product-form solution (4.1). Specifically, if the three sufficient conditions of RCAT are met, then a certain outgoing rate x b , called a reversed rate, is associated with each passive action \(b \in {\mathcal{P}}_{k}\) in each state n k where b is enabled. We refer to [22] for a probabilistic interpretation of x b as a rate in a time-reversed Markov process. The RCAT conditions together with the reversed rates then allow the generators Q k of each Markov component process to be defined uniquely as follows:
$${\bf Q}_{k}({\bf x}) = {\bf L}_{k} +{ \sum \limits_{a\in {\mathcal{A}}_{k}}}{\bf A}_{a} +{ \sum \limits_{b\in {\mathcal{P}}_{k}}}{x}_{b}{\bf P}_{b} - {\Delta }_{k}({\bf x}),\qquad (4.2)$$where x = (x 1, …, x a , …, x A )T > 0 is the vector of reversed rates and Δ k (x) is the diagonal matrix ensuring that Q k 1 = 0. From Q k , we can compute the product-form solution of the cooperation based on (4.1). This provides a major computational advantage, since the joint process never needs to be solved directly.
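A minimal sketch of this construction (the helper names are ours): assemble the generator of a process in isolation from its local rates, its active matrices, and its passive matrices weighted by the reversed rates, with the diagonal correction enforcing zero row sums, and then solve for the equilibrium vector.

```python
import numpy as np

def build_generator(L_k, active_mats, passive_mats, x_passive):
    """Q_k(x): local rates, plus active-action rates, plus passive matrices
    weighted by the reversed rates x_b, with the diagonal correction
    Delta_k(x) that makes every row sum to zero (hypothetical helper)."""
    Q = L_k.astype(float)
    for A in active_mats:
        Q = Q + A
    for x_b, P in zip(x_passive, passive_mats):
        Q = Q + x_b * P
    np.fill_diagonal(Q, np.diag(Q) - Q.sum(axis=1))  # enforce Q 1 = 0
    return Q

def equilibrium(Q):
    """Solve pi Q = 0 with pi 1 = 1 via a least-squares formulation."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy example: a birth-death process (local arrivals at rate 1, active
# departures at rate 2) has the geometric equilibrium (4/7, 2/7, 1/7).
L = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
A_act = np.array([[0., 0., 0.], [2., 0., 0.], [0., 2., 0.]])
Q = build_generator(L, [A_act], [], [])
pi = equilibrium(Q)
```

The least-squares solve of the augmented system is one common way to handle the singular generator; any rank-revealing linear solver would do.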
RCAT Sufficient Conditions
The original formulation of RCAT was expressed using the stochastic process algebra PEPA [22, 23]. We provide here a reformulation of RCAT’s conditions using matrix expressions that are simpler to integrate in optimization programs.
-
RCAT Condition 1 (RC1). Passive actions are always enabled, i.e., P a 1 ≥ 1 for all \(a \in {\mathcal{P}}_{k},1 \leq k \leq M.\)
-
RCAT Condition 2 (RC2). For \(a \in {\mathcal{A}}_{k}\), each state in process k has an incoming transition due to active action a, i.e., A a T1 > 0 for all \(a \in {\mathcal{A}}_{k},1 \leq k \leq M.\)
-
RCAT Condition 3 (RC3). There exists a vector of reversed rates
$$x = {({x}_{1},\ldots, {x}_{a},\ldots, {x}_{A})}^{T} > {\bf 0}$$ such that the generators Q k have equilibrium vectors π k that satisfy the following rate equations: π k A a = x a π k for all \(a \in {\mathcal{A}}_{k},1 \leq k \leq M.\) (Note that we use the generalized expression introduced in [35] in place of the original condition in [22], although the two forms are equivalent.)
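The three conditions translate directly into matrix tests. A sketch with our own function names, checked on a toy cyclic action (the example data are ours):

```python
import numpy as np

def rc1(P_list, tol=1e-12):
    """RC1: each passive action is always enabled, i.e. P_a 1 >= 1."""
    return all(np.all(P.sum(axis=1) >= 1 - tol) for P in P_list)

def rc2(A_list):
    """RC2: every state has an incoming transition of each active action,
    i.e. A_a^T 1 > 0 (no zero column in A_a)."""
    return all(np.all(A.sum(axis=0) > 0) for A in A_list)

def rc3(pi, A_a, x_a, tol=1e-9):
    """RC3 rate equation for one active action a: pi A_a = x_a pi."""
    return bool(np.allclose(pi @ A_a, x_a * pi, atol=tol))

# Toy example: a cyclic active action satisfies RC2 and, with uniform pi,
# the rate equation holds with reversed rate x_a = mu.
N, mu = 4, 2.0
A = np.zeros((N, N))
for n in range(N):
    A[n, (n + 1) % N] = mu
pi = np.full(N, 1.0 / N)
```

Here rc3 only verifies a candidate pair (π k, x a); finding such a pair is exactly the hard part discussed next.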
If the preceding conditions are met, then the vectors π k immediately define a product-form solution (4.1). We stress, however, that verifying RC3 is much more challenging than RC1 and RC2 as it is necessary in practice to find a reversed rate vector x that satisfies the rate equations.
Does a Product Form Exist?
Let us now turn to the problem of finding an algorithm that automatically constructs an RCAT product form if one exists. We assume initially that every component process has a finite state space (N k < ∞ ∀k), the generalization to countably infinite state spaces being simple, as discussed in the appendix, “Infinite Processes.” According to RC3, to construct an RCAT product form we need to find a solution x > 0 of the exact nonlinear system of equations
$$\mathrm{(ENS)}\qquad {\boldsymbol \pi }_{k}{\bf A}_{a} = {x}_{a}{\boldsymbol \pi }_{k}\ \ \forall a \in {\mathcal{A}}_{k},\qquad {\boldsymbol \pi }_{k}{\bf Q}_{k}({\bf x}) = {\bf 0},\qquad {\boldsymbol \pi }_{k}{\bf 1} = 1,\qquad 1 \leq k \leq M.$$
This defines a nonconvex feasible region due to the bilinear products x a π k and x b π k in the first two sets of constraints. Such a region is illustrated in Fig. 4.1. We now provide the following characterization.
Proposition 4.1.
Consider the vectors x L,0 = (x 1 L,0 ,x 2 L,0 ,…,x A L,0)T and x U,0 = (x 1 U,0 ,x 2 U,0 ,…,x A U,0)T defined by the values
$${x}_{a}^{L,0} {=\min }_{i\in {\mathcal{I}}^{+}}{\sum \nolimits }_{j}{\bf A}_{a}[i,j],\qquad {x}_{a}^{U,0} {=\max }_{i\in {\mathcal{I}}^{+}}{\sum \nolimits }_{j}{\bf A}_{a}[i,j],$$
where \({\mathcal{I}}^{+} =\{ i\mid {\sum \nolimits }_{j}{\bf A}_{a}[i,j] > 0\}\) . Then any feasible solution of ENS satisfies the necessary condition x L,0 ≤ x ≤ x U,0.
Proof.
The statement follows directly from RC3 since \({x}_{a} = {x}_{a}{\boldsymbol \pi }_{k}{\bf 1} = {\boldsymbol \pi }_{k}{\bf A}_{a}\bf 1\). Thus, \({x}_{a} = {\boldsymbol \pi }_{k}{\bf A}_{a}{\bf 1} \geq {\boldsymbol \pi }_{k}({x}_{a}^{L,0}{\bf 1}) = {x}_{a}^{L,0}\) and \({x}_{a} = {\boldsymbol \pi }_{k}{\bf A}_{a}{\bf 1} \leq {\boldsymbol \pi }_{k}({x}_{a}^{U,0}{\bf 1}) = {x}_{a}^{U,0}\).
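A sketch of the resulting bound computation (the helper name and example matrix are ours), mirroring the mixture-of-row-sums argument in the proof:

```python
import numpy as np

def initial_bounds(A_a):
    """x_a^{L,0}, x_a^{U,0} from Proposition 4.1: x_a = pi_k A_a 1 is a
    mixture of the row sums of A_a, so it lies between the smallest and
    largest row sums over the enabled states I+ (positive row sum)."""
    row_sums = A_a.sum(axis=1)
    enabled = row_sums[row_sums > 0]
    return enabled.min(), enabled.max()

# Toy example: row sums (3, 5, 0) give the bounds 3 <= x_a <= 5.
A = np.array([[1., 2., 0.], [0., 2., 3.], [0., 0., 0.]])
lo, hi = initial_bounds(A)
```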
Let us also note that x satisfies ENS if and only if it is a global minimum for the quadratically constrained program
which has O(A + N sum) variables and O(AMN max) constraints, where N sum = ∑ k N k and N max = max k N k . Here s a + and s a − are slack variables that guarantee the feasibility of all constraints in the early stages of the nonlinear optimization, when the solver may be unable to determine a feasible assignment of x in ENS. By construction, f qcp ≥ 0. Furthermore, RC3 holds if and only if f qcp = 0. Since all other quantities are bounded, we can also find upper and lower bounds on s a + and s a − . As such, QCP is a quadratically constrained program with box constraints, a class of problems that is known to be \(\mathcal{N}\mathcal{P}\)-hard [11]. The difficulty of a direct solution of ENS or QCP is clear even in “toy” problems such as identifying product forms in Jackson networks, i.e., queueing networks with exponential servers. For example, when searching for a product form in a Jackson queueing network with two feedback queues, one often finds that MATLAB’s fmincon function fails to identify a search direction in ENS due to the small magnitudes of the gradients. QCP has better numerical properties than ENS, but it can take up to 5–10 min on commonly available hardware to find the reversed rates x needed to construct the product form (4.1); thus QCP quickly becomes intractable on models with several queues. This shows that constructing product-form solutions by numerical optimization methods is, in general, a difficult problem. Moreover, it motivates an investigation of convex relaxations of ENS and QCP to derive efficient techniques for automatically constructing product forms. Indeed, automatic product-form analysis is fundamental to generating approximations for non-product-form models, as we show in the section “Automated Approximations.”
Linearization Methodology
We now seek to obtain efficient linear programming (LP) relaxations of ENS that overcome the difficulties of solving a nonlinear system directly. To obtain an effective linearization, we first apply, in section “Convex Envelopes,” an established convexification technique [36]. A tighter linear relaxation specific to RCAT is then developed in the section “Tightening the Linear Relaxation” and is shown to dramatically improve the quality of the relaxation. Finally, the section “Potential-Theory Constraints” obtains a tighter formulation based on a potential theory for Markov processes.
Convex Envelopes
To obtain a linearization of ENS, we first rewrite the generator matrix of process k as \({\bf Q}_{k}{\bf (x)} =\widetilde{ {\bf T}}_{k} +{ \sum \limits_{b\in {\mathcal{ P}}_{k}}}{x}_{b}\widetilde{{\bf P}}_{b}\), where \(\widetilde{{\bf P}}_{b}\) is the sum of the P b matrices and of the component of Δ k (x) that multiplies x b . Then we condense the nonlinear components of ENS into the variables z c, k = x c π k such that we replace the bilinear terms x c π k in π k Q k (x) by z c, k . Consequently, the only nonlinear constraints left are z c, k = x c π k , which we linearize to obtain an LP relaxation of ENS. We first observe that all variables involved in the bilinear product x c π k are bounded, as a consequence of Proposition 4.1 and the fact that probabilities are bounded. We can thereby always write the bounds x L, 0 ≤ x ≤ x U, 0 and π k L ≤ π k ≤ π k U. Under these assumptions, for all \(c \in {\mathcal{A}}_{k}\) and 1 ≤ k ≤ M, z c, k is always enclosed in the convex envelope proposed by McCormick in [36], which is known to be the tightest linear relaxation for bounded bilinear variables. Adding the constraint z c, k 1 = x c yields a linear programming relaxation of ENS:
for an arbitrary linear objective function f lpr n = f( ⋅), where n is an integer, used in the section “Exact Product-Form Construction” to parameterize a sequence of upper and lower bounding vectors on x and π k . The preceding optimization program is an LP that can be solved in polynomial time using interior-point methods. The number of variables and constraints in ENS grows asymptotically as O(A + N sum) and O(AMN max), respectively. In the foregoing linearized version LPR, the number of variables increases as O(AN sum) and the number of constraints remains at O(AMN max) asymptotically.
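For a single bounded bilinear term z = x·y, McCormick's envelope consists of four linear inequalities. The following membership-test sketch (example values are ours) shows both that every true product is enclosed and that the relaxation also admits points with z ≠ x·y, which is exactly the gap the later sections tighten:

```python
def in_mccormick(z, x, y, xL, xU, yL, yU, tol=1e-9):
    """The four McCormick inequalities for the bilinear term z = x*y
    over the box [xL, xU] x [yL, yU] (tightest linear relaxation)."""
    return (z >= xL * y + yL * x - xL * yL - tol and
            z >= xU * y + yU * x - xU * yU - tol and
            z <= xU * y + yL * x - xU * yL + tol and
            z <= xL * y + yU * x - xL * yU + tol)

# Every true product lies inside the envelope...
assert in_mccormick(0.3 * 0.7, 0.3, 0.7, 0.0, 1.0, 0.0, 1.0)
# ...but the envelope also contains points with z != x*y (relaxation gap),
# e.g. z = 0.5 at x = y = 0.5 on the unit box:
assert in_mccormick(0.5, 0.5, 0.5, 0.0, 1.0, 0.0, 1.0)
```

In LPR the same four inequalities are imposed componentwise on each vector z c,k against x c and π k.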
Rejecting the Existence of Product Forms. When LPR is infeasible, we can conclude that no RCAT product form exists for the model under study. To see this, it is sufficient to observe that LPR is a relaxation of ENS: all solutions of ENS are feasible points of LPR, but there may exist points in LPR that do not solve ENS. Since the feasible region of LPR contains that of ENS, we conclude that if LPR is infeasible, then so is ENS. This provides an interesting innovation over existing techniques for determining product-form solutions since none is currently able to exclude the existence of a product form when one cannot be found. (Note that, although we can then conclude that there is no RCAT product form, we cannot exclude the possibility that there is a non-RCAT product form, were such to exist.)
Tightening the Linear Relaxation
We now define our first method for obtaining tighter linearizations of ENS based on specific properties of the RCAT theorem. This is useful because McCormick’s bounds are known to be wide in many cases [1].
Applying the rate equations in RC3 recursively v times, we may write \({\boldsymbol \pi }_{k}{\bf A}_{a}^{v+1} = {x}_{a}^{v+1}{\boldsymbol \pi }_{k}\), for all \(a \in {\mathcal{A}}_{k}\), 1 ≤ k ≤ M, since RC3 implies that we can exchange scaling by x a with right multiplication by A a . Summing over all v ≥ 0, we obtain \({\boldsymbol \pi }_{k}{\bf A}_{a}{\bf H}_{a} = {x}_{a}{(1 - {x}_{a})}^{-1}{\boldsymbol \pi }_{k}\), where \({\bf H}_{a} = {{\bf (I -{A}}_{a})}^{-1}\), and we have assumed, without loss of generality, that the units of measure of the rates are scaled such that x a ≤ x a U < 1 and ρ(A a ) < 1, where ρ(M) denotes the spectral radius of a matrix M. Rearranging terms and using z a, k = x a π k , we obtain the new linear constraint
$${\boldsymbol \pi }_{k}{\bf A}_{a}{\bf H}_{a} -{\bf z}_{a,k}{\bf A}_{a}{\bf H}_{a} = {\bf z}_{a,k}\qquad (4.3)$$
for all active actions \(a \in {\mathcal{A}}_{k}\) and processes k, 1 ≤ k ≤ M. This provides an extra set of constraints that can be added to the linear relaxation of ENS to refine (reduce) the feasible region. Note that since A a is a constant matrix, (4.3) is a linear equation in π k and z a, k .
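The summed identity used above can be checked numerically on a toy cyclic action with rates scaled below one (the example is ours):

```python
import numpy as np

# If pi A_a = x_a pi with x_a < 1 and spectral radius rho(A_a) < 1, then
# summing pi A_a^{v+1} = x_a^{v+1} pi over v >= 0 gives
# pi A_a H_a = x_a / (1 - x_a) pi, with H_a = (I - A_a)^{-1}.
N, mu = 4, 0.3                       # rates scaled so that x_a = 0.3 < 1
A = np.zeros((N, N))
for n in range(N):
    A[n, (n + 1) % N] = mu           # cyclic active action, rho(A) = 0.3
pi = np.full(N, 1.0 / N)             # uniform pi satisfies pi A = mu pi
H = np.linalg.inv(np.eye(N) - A)
lhs = pi @ A @ H
rhs = mu / (1.0 - mu) * pi
```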
The advantages of the method outlined above become even more apparent when we consider the generator constraint in LPR. For example, if we left-multiply it by x a , we obtain
$${\bf z}_{a,k}\widetilde{{\bf T}}_{k} +{ \sum \limits_{b\in {\mathcal{P}}_{k}}}{\bf z}_{b,k}{\bf A}_{a}\widetilde{{\bf P}}_{b} = {\bf 0}\qquad (4.4)$$
where we use the fact that the exchange rule holds for z b, k , too, since \({x}_{a}{\bf z}_{b,k} = {x}_{a}{x}_{b}{\boldsymbol \pi }_{k} = {x}_{b}{\boldsymbol \pi }_{k}{\bf A}_{a} = {z}_{b,k}{\bf A}_{a}\) for all \(a \in {\mathcal{A}}_{k}\), 1 ≤ k ≤ M. Equation (4.4) creates a direct linear relationship between the terms z a, k and z b, k for active and passive actions that cannot be inferred directly from LPR since it is based on exact knowledge of the bilinear relation z b, k = x b π k . As we show in an illustrative example at the end of this subsection, the additional constraints (4.3) and (4.4) greatly improve the LP approximation of ENS. Furthermore, following a similar argument, we can generate a hierarchy of linear constraints for v = 0, 1, …
together with the condition obtained by summing over v ≥ 0:
In summary, we have refined the linearization into the hierarchy of tight linear programming relaxations TLPR (n, V ), which extends LPR by including constraints (4.5) for v = 1, …, V and (4.6).
We remark that, even though the preceding formulation is much more detailed than LPR, it inevitably requires an increased number of constraints, which now grows as O(VAMN max), while the complexity in terms of the number of variables is the same as for LPR. Thus, increased accuracy is obtained at the cost of additional computational complexity.
Potential-Theory Constraints
Potential theory is often applied in sensitivity and transient analyses of Markov processes and in Markov decision process theory [12]. We use it to derive tighter linearizations of ENS; to the best of our knowledge, this is the first time that potentials have been applied to product-form theory.
Consider a process with generator matrix Q k (x) and equilibrium probability vector π k (x), and define the vector \({\bf f}({n}_{k}) = {({f}_{1},\ldots, {f}_{i},\ldots, {f}_{{N}_{k}})}^{T}\), where f i = 1 if i = n k and f i = 0 otherwise. The linear metric \({\eta }_{k}{\bf (x)} = {\boldsymbol \pi }_{k}{\bf (x)f}({n}_{k}) = {\boldsymbol \pi }_{k}({n}_{k})\) is then the marginal probability of state n k in process k, given x. Potential theory provides compact formulas for studying the changes in the values of linear functions such as η k (x) under arbitrarily large perturbations of the generator matrix Q k (x). Let x 0 > 0 be an arbitrary reference point for x such that Q k (x 0) is a valid generator matrix with equilibrium vector π k 0 and η k (x 0) = π k 0(n k ). Then it is straightforward to show that the difference between η k (x) and η k (x 0) is (as in [12])
$${\eta }_{k}({\bf x}) - {\eta }_{k}({{\bf x}}^{0}) = {\boldsymbol \pi }_{k}({\bf x})\left({\bf Q}_{k}({\bf x}) -{\bf Q}_{k}({{\bf x}}^{0})\right){\bf g}({{\bf x}}^{0},{n}_{k}),\qquad (4.7)$$
where \({\bf g}({\bf x},{n}_{k}) = (-{\bf Q}_{k}{\bf(x)} + {\bf 1}{\boldsymbol \pi }_{k}{\bf (x)})^{-1}{\bf f}({n}_{k})\) is the so-called zero potential of the function η k (x). [Notice that \({\boldsymbol \pi }_{k}{\bf(x)} = {\boldsymbol \pi }_{k}{\bf (x)}(-{\bf Q}_{k}{\bf(x)} + {\bf 1}{\boldsymbol \pi }_{k}{\bf (x)})\).] For the system under study, we can use (4.2) to rewrite (4.7) as
$${\sum \limits_{b\in {\mathcal{P}}_{k}}}\left({\bf z}_{b,k} - {x}_{b}^{0}{\boldsymbol \pi }_{k}\right)\widetilde{{\bf P}}_{b}\,{\bf g}({{\bf x}}^{0},{n}_{k}) = {\boldsymbol \pi }_{k}({n}_{k}) - {\boldsymbol \pi }_{k}^{0}({n}_{k}).\qquad (4.8)$$
Defining the potential matrix G(x 0) = [g(x 0, 1) g(x 0, 2) … g(x 0, N k )], whose columns collect the zero potentials of all marginal states, we obtain a new set of linear constraints
$${\sum \limits_{b\in {\mathcal{P}}_{k}}}\left({\bf z}_{b,k} - {x}_{b}^{0}{\boldsymbol \pi }_{k}\right)\widetilde{{\bf P}}_{b}\,{\bf G}({{\bf x}}^{0}) = {\boldsymbol \pi }_{k} - {\boldsymbol \pi }_{k}^{0}.\qquad (4.9)$$
This provides a further tightening of the linear relaxation of ENS.
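The perturbation formula behind the zero potential can be verified numerically: for two generators Q0 and Q1 with equilibrium vectors π0 and π1, the identity π1 − π0 = π1 (Q1 − Q0) (−Q0 + 1π0)⁻¹ holds exactly (the toy generators below are ours):

```python
import numpy as np

def equilibrium(Q):
    """Solve pi Q = 0, pi 1 = 1 (least squares on the augmented system)."""
    n = Q.shape[0]
    M = np.vstack([Q.T, np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(M, b, rcond=None)[0]

# Two birth-death generators differing in the birth rate.
Q0 = np.array([[-1., 1., 0.], [2., -3., 1.], [0., 2., -2.]])
Q1 = np.array([[-2., 2., 0.], [2., -4., 2.], [0., 2., -2.]])
pi0, pi1 = equilibrium(Q0), equilibrium(Q1)

# B^{-1} with B = -Q0 + 1 pi0 maps the (arbitrarily large) generator
# perturbation to the exact change in the equilibrium probabilities.
B = -Q0 + np.outer(np.ones(3), pi0)
delta = pi1 @ (Q1 - Q0) @ np.linalg.inv(B)
```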
Note that zero potentials can be memory-consuming to evaluate because the matrix inverse \({(-{\bf Q}_{k}{\bf (x)} + {\bf 1}{\boldsymbol \pi }_{k}{\bf(x)})}^{-1}\) does not preserve the sparsity of Q k . Furthermore, the rank 1 update 1π k cannot be performed efficiently since Q k is a singular matrix and updating techniques such as the Sherman–Morrison formula do not apply [39]. We address this computational issue with the algorithm shown in Fig. 4.2, which modifies the classical Jacobi iteration [41] to take advantage of the rank 1 structure of the term 1π k . That is, at each iteration, we isolate a vector h from the residual matrix in such a way that the matrix 1π k is never explicitly computed. Thus, only vectors of the same order as π k are stored in memory, and Q k remains in sparse form. In this way, each potential can always be computed efficiently with respect to storage requirements and in the worst case has asymptotic computational cost O(JN k 2), J being the number of Jacobi iterations. The potential matrix G is therefore computed in O(JN k 3) steps and, for the fixed reference point x 0, needs to be evaluated only once, requiring a computation time that is usually small compared to the time required to solve the linear optimization programs. Moreover, even for the largest models, it is always possible to consider constraints arising from a subset of the columns of G, again posing a tradeoff between computational cost and accuracy. Finally, using the exchange rule discussed in the section “Tightening the Linear Relaxation,” we again obtain the hierarchy of constraints
and the asymptotic condition after simple algebra becomes
where \({\bf A}_{a}^{^{\prime}}\stackrel{def}{=}{\bf A}_{a} -{\bf A}_{a}^{2}\). Summarizing, we can add (4.8)–(4.10) to LPR to generate tighter relaxations. We denote the resulting zero-potential relaxation by ZPR(n, V). Note that ZPR(n, V) has the same asymptotic complexity as TLPR(n, V).
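A simplified sketch of the idea behind the modified Jacobi iteration of Fig. 4.2 (the signature and toy data are ours; the paper's version additionally keeps Q k in sparse form): the rank-1 term 1π k contributes only the scalar π k · g per sweep, so the dense matrix −Q k + 1π k is never formed.

```python
import numpy as np

def zero_potential(Q, pi, f, sweeps=300):
    """Jacobi iteration for (-Q + 1 pi) g = f. The rank-1 product (1 pi) g
    equals the scalar w = pi . g in every component, so it costs O(N_k)
    per sweep and -Q + 1 pi is never built explicitly."""
    d = -np.diag(Q) + pi                 # diagonal of B = -Q + 1 pi
    g = np.zeros_like(f, dtype=float)
    for _ in range(sweeps):
        w = pi @ g                       # isolated rank-1 contribution
        off = (-Q @ g + w) - d * g       # off-diagonal part of B g
        g = (f - off) / d
    return g

# Toy check on a 3-state cyclic generator with uniform equilibrium.
Q = np.array([[-1., 1., 0.], [0., -1., 1.], [1., 0., -1.]])
pi = np.full(3, 1.0 / 3.0)
f = np.array([1.0, 0.0, 0.0])            # indicator of state n_k = 0
g = zero_potential(Q, pi, f)
```

On this example the iteration matrix has infinity norm 3/4, so the sweep converges geometrically; in general convergence depends on the diagonal dominance of −Q + 1π.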
Exact Product-Form Construction
We now consider a technique for finding the solution of ENS that solves a sequence of the linear relaxations defined in the previous section, i.e., LPR(n), TLPR(n), or ZPR(n). Since the approach is identical for all relaxations, we limit the discussion to LPR(n).
The iterative algorithm defines a sequence of progressively tighter bounds x L, n and x U, n on the reversed rates x such that for sufficiently large n, LPR(n) determines a feasible solution x of ENS if one exists. Initial conditions are \({\bf x}^{L,0}\stackrel{\mathrm{def}}{=}{\bf x}^{L},\;{\bf x}^{U,0}\stackrel{\mathrm{def}}{=}{\bf x}^{U}\), where x L and x U are the bounds defined in Proposition 4.1. We have the following result.
Proposition 4.2.
For each n = 1,2,…, consider a sequence of 2A linear relaxations of ENS, the first A with objective function f lpr n,c = max x c and the remaining A with objective function g lpr n,c = min x c , for action c, 1 ≤ c ≤ A and bounds x L,n , x U,n , as previously.
-
If f lpr n,c = x c U,n or g lpr n,c = x c L,n , then the cth component of the linear relaxation solution x satisfies the bilinear constraint z c,k = x c π k that is necessary for a solution of ENS.
-
Otherwise, f lpr n,c < x c U,n and the bounds at iteration n + 1 may be refined to
$${x}_{c}^{U,n+1}\stackrel{def}{=}{f}_{\mathrm{ lpr}}^{n,c},\qquad {x}_{ c}^{L,n+1}\stackrel{def}{=}{g}_{\mathrm{ lpr}}^{n,c}$$for all actions c, 1 ≤ c ≤ A, that define a feasible region for LPR(n + 1) that is strictly tighter than for LPR(n).
Proof.
Consider the case g lpr n,c = x c L,n , so that LPR(n) makes the assignment x c = x c L,n . Then the first two McCormick constraints become
$${\bf z}_{c,k} \geq {x}_{c}^{L,n}{\boldsymbol \pi }_{k},\qquad {\bf z}_{c,k} \leq {x}_{c}^{L,n}{\boldsymbol \pi }_{k},$$
which readily imply the bilinear relation \({\bf z}_{c,k} = {x}_{c}^{L,n}{\boldsymbol \pi }_{k} = {x}_{c}{\boldsymbol \pi }_{k}\). A similar proof holds for f lpr n,c = x c U,n .
Otherwise, if g lpr n,c > x c L,n , then the feasible region of LPR(n + 1) does not include any point outside LPR(n) and excludes the points with x c = x c L,n ; hence it is strictly tighter than the feasible region of LPR(n). The case f lpr n,c < x c U,n is symmetric.
The preceding result guarantees that, if the sequence of linear relaxations yields feasible solutions, then the bounding box for LPR(n) defined by
can only decrease its volume or keep it constant as n increases. The volume must therefore converge as n increases, and for sufficiently large n, x c U, n ≈ x c U, n + 1 and x c L, n ≈ x c L, n + 1 in each dimension c. However, this implies that the outcome of iteration n + 1 must be \({f}_{\mathrm{lpr}}^{n+1,c} = {x}_{c}^{U,n}\) and \({g}_{\mathrm{lpr}}^{n+1,c} = {x}_{c}^{L,n}\), which gives z c,k = x c π k by the first case of Proposition 4.2. Thus, for sufficiently large n, the border of the bounding box intersects points that are feasible for ENS, at least along one dimension c. This yields several possible outcomes for the sequence of linear relaxations:
-
The constraints in the linear relaxations are infeasible. As observed earlier, this allows us to conclude that no feasible RCAT product form exists for the model under study.
-
One or more solutions x of the 2A linear relaxations are also feasible solutions of ENS. This allows us to construct directly a product-form solution by (4.1).
-
No solution x of the 2A linear relaxations is feasible for ENS along all dimensions c = 1, …, A. We have never encountered such a case in product-form detection for stochastic models; however, it can be resolved by a standard branch-and-bound method, reapplying the iteration on each partition of the feasible region.
Summarizing, a sequence of linear relaxations is sufficient to identify a product-form solution if one exists. No guarantee on the maximum number of linear programs to be solved can be given since the problem is \(\mathcal{N}\mathcal{P}\)-hard in general; however, we show in the section “Examples and Case Studies” that this is typically small. In the section “Automated Approximations,” we further illustrate how this sequence of linear programs can be modified to identify an approximate product form for a cooperation of Markov processes.
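The sequence of relaxations can be demonstrated end to end on a tiny one-process instance (the instance is ours, and scipy's linprog stands in for a general LP solver): the Proposition 4.1 box [1, 4] contracts onto the exact reversed rate x = 2.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: A_a = [[0, 1], [4, 0]], one reversed rate x. ENS reduces
# to pi A_a = x pi, pi 1 = 1, with exact solution x = 2, pi = (2/3, 1/3).
# LP variables v = (pi1, pi2, z1, z2, x), where z = x pi is linearized.
A_eq = np.array([
    [1., 1., 0., 0., 0.],    # pi1 + pi2 = 1
    [0., 4., -1., 0., 0.],   # (pi A)_1 = 4 pi2 = z1
    [1., 0., 0., -1., 0.],   # (pi A)_2 = pi1 = z2
    [0., 0., 1., 1., -1.],   # z 1 = x
])
b_eq = np.array([1., 0., 0., 0.])

def mccormick_rows(xL, xU):
    """McCormick rows for z_i = x * pi_i, pi_i in [0, 1], as A_ub v <= b_ub."""
    rows, rhs = [], []
    for i in (0, 1):
        p = np.eye(5)[i]          # selects pi_i
        z = np.eye(5)[2 + i]      # selects z_i
        x = np.eye(5)[4]          # selects x
        rows += [-z + xL * p,              # z_i >= xL pi_i
                 -z + xU * p + x,          # z_i >= xU pi_i + x - xU
                 z - xU * p,               # z_i <= xU pi_i
                 z - xL * p - x]           # z_i <= xL pi_i + x - xL
        rhs += [0., xU, 0., -xL]
    return np.array(rows), np.array(rhs)

xL, xU = 1.0, 4.0                 # Proposition 4.1 bounds (min/max row sums)
for n in range(15):
    A_ub, b_ub = mccormick_rows(xL, xU)
    bnd = [(0, 1), (0, 1), (None, None), (None, None), (xL, xU)]
    hi = linprog([0, 0, 0, 0, -1.], A_ub, b_ub, A_eq, b_eq, bnd)  # max x
    lo = linprog([0, 0, 0, 0, 1.], A_ub, b_ub, A_eq, b_eq, bnd)   # min x
    xU, xL = hi.x[4], lo.x[4]     # tightened bounds (Proposition 4.2)
```

On this instance the gap contracts by roughly a factor of three per iteration; each sweep solves the 2A = 2 linear programs of Proposition 4.2 and feeds the optima back as the next bounding box.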
Practical Implementation
Pure Cooperations. If x = 0 is a valid solution of ENS, then we call the model a pure cooperation. This is because x = 0 implies that π k L k = 0, which in turn requires all entries of L k to be zero for all processes. Hence, the model’s rates are solely those of cooperations. Pure cooperations represent a very large class of models of practical interest, e.g., closed queueing networks with exponential or hyperexponential service, but their product-form analysis is harder due to the existence of infinitely many solutions x.
Suppose a model is a pure cooperation and consider a graph defined by the M × M incidence matrix G such that G[i, j] = 1 if and only if process j is passive in a cooperation with the active process i, and 0 otherwise. Then, if r = rank(G) < M, the model has M − r degrees of freedom in assigning the values of the x vector. Thus, for these models there exists a continuous solution surface in ENS rather than a single feasible solution. As we show in the section “Closed Stochastic Model,” this creates difficulties for existing product-form analysis techniques. However, we show that our method finds the correct solution x > 0 under the condition that only objective functions of the type \(\max {x}_{c}^{U}\) are used in the linear relaxations. This is because, lacking a strong lower bound, the search algorithm would otherwise converge to the degenerate solution x = 0.
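The rank test on G is straightforward to carry out numerically. The matrix below is a hypothetical dependency graph for a tandem cooperation of M = 3 processes, chosen only to illustrate the computation:

```python
import numpy as np

# Hypothetical dependency graph for a pure cooperation of M = 3 processes:
# process 1 is active toward process 2, and process 2 is active toward 3.
G = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

r = np.linalg.matrix_rank(G)
dof = G.shape[0] - r   # degrees of freedom in choosing the reversed rates x
```

Here r = 2 and dof = 1, so one coordinate of x may be fixed freely and the remaining reversed rates follow from the relaxations.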
Numerical properties. As the area of the bounding boxes decreases, the linear relaxations can become increasingly challenging to solve, owing to the presence of many hundreds of constraints on a small area and to the numerical scale of the equilibrium probabilities, which can become very small when \({N}_{k}\) comprises several tens or hundreds of states. In such conditions, and without a careful specification of the linear programs, the solver may erroneously report the program as infeasible even though a feasible solution does exist. However, a number of strategies can prevent such problems. First, it is often beneficial to reformulate equality constraints as “soft” constraints, e.g., for a small \({\epsilon }_{\mathrm{tol}} > 0\)
that differentiates tolerances depending on the value of each individual term in \({\pi }_{k}\). Another useful strategy consists of tuning the numerical tolerances of the LP solver. For instance, in IBM ILOG CPLEX’s primal and dual simplex algorithms, this may be achieved by setting the Markowitz numerical tolerance to a large value such as 0.5. In addition, if the relaxation used is TLPR or ZPR, then it is often beneficial for a numerically challenging model to revert to the less constrained LPR formulation. Finally, for models where the feasible region is sufficiently small, one could solve QCP directly without much effort and with the benefit of removing the extra constraints introduced by the linear relaxations. In our implementation, such corrections are applied at runtime through a set of retrial runs upon detection that an LP is infeasible.
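A generic version of this softening step, with the exact per-term tolerance rule left to the chapter's formulation, might look as follows (the function name and the elementwise eps are illustrative assumptions):

```python
import numpy as np

def soften(A_eq, b_eq, eps):
    """Replace the equalities A_eq x = b_eq with the inequality pair
    A_eq x <= b_eq + eps and -A_eq x <= -b_eq + eps. eps may be a vector,
    so each row can carry a tolerance scaled to the magnitude of its terms."""
    A_eq = np.asarray(A_eq, float)
    b_eq = np.asarray(b_eq, float)
    eps = np.broadcast_to(np.asarray(eps, float), b_eq.shape)
    A_ub = np.vstack([A_eq, -A_eq])
    b_ub = np.concatenate([b_eq + eps, -b_eq + eps])
    return A_ub, b_ub
```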
Automated Approximations
Using the preceding LP-based method, a non-product-form solution may be approximated by a product form. The particular approximation we propose differs depending on which of the conditions RC1, RC2, and RC3 is violated. Two approximations are now elaborated.
Rate Approximation
RC3 becomes infeasible when the solver cannot find a single reversed rate \({x}_{a}\) that satisfies the condition for some actions a. Assuming that a solution x defines a valid RCAT product form, for each process k we can always define a Markov-modulated point process \({\mathcal{M}}_{a,k}\) associated with the activation of the action \(a \in {\mathcal{A}}_{k}\) in the Markov process with generator matrix \({\bf Q}_{k}\). Let the random variable \({X}_{a,k}\) denote the interarrival time between two consecutive activations of action a in \({\mathcal{M}}_{a,k}\) and define the rate \({\lambda }_{a,k}\) to be the reciprocal of the mean interarrival time \(E[{X}_{a,k}]\). Then we approximate
The principle of rate approximation is to assume (inexactly) that (4.11) is a sufficient condition for a product-form solution. Let \(\widetilde{x} = (\widetilde{{x}}_{1},\widetilde{{x}}_{2},\ldots, \widetilde{{x}}_{A})\) be an approximate solution that can be found by the approximate ENS program, which is defined by replacing RC3 with (4.11) in ENS and its relaxations.
We note that \(\widetilde{\bf x}\) includes the exact solution \(\widetilde{\bf x} = \bf x\) when it exists; thus AENS is a relaxation of ENS. Note that using the approach introduced in the section “Tightening the Linear Relaxation,” one may further tighten the relaxation using a quadratic constraint
which provides a more accurate approximation of \({x}_{a}\) by \(\widetilde{{x}}_{a}\) but involves the relaxation of a convex, and thus efficiently solvable, quadratically constrained program. Such an extension is left for future work.
The foregoing approximation can be applied to all programs introduced in the preceding sections, e.g., for LPR we define the rate approximation ALPR.
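The rate \({\lambda }_{a,k}\) used by the rate approximation is the equilibrium throughput of the \({\bf A}_{a}\) transitions, which can be computed directly from the generator of the isolated process. A minimal sketch, assuming a finite irreducible generator (helper names are illustrative):

```python
import numpy as np

def equilibrium(Q):
    """Solve pi Q = 0, pi 1 = 1 for an irreducible finite generator Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])  # stack normalization onto balance equations
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def action_rate(Q, A_a):
    """lambda_{a,k}: mean activation rate of action a, i.e., the reciprocal
    of E[X_{a,k}], computed as the equilibrium flow through A_a transitions."""
    Q = np.asarray(Q, float)
    A_a = np.asarray(A_a, float)
    pi = equilibrium(Q)
    return float(pi @ A_a.sum(axis=1))
```

For a two-state process in which action a fires on the 1 → 2 transition, the rate is simply the equilibrium probability of state 1 times the transition rate.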
Structural Approximation
Example cases where RC1 is violated are models with blocking, where a cooperating process is not allowed to synchronize passively owing to capacity constraints, e.g., a queue with a finite-size buffer. Similarly, violations of RC2 are exemplified by models with priorities, where a low-priority action is disabled until higher-priority tasks are completed. Structural approximation iteratively updates the rate matrices \({\bf A}_{a}\) and \({\bf P}_{b}\) in order to account for the blocked or disabled status of certain transitions. Then, the search algorithm presented in the section “Exact Product-Form Construction” is run normally, if needed using rate approximation to address any violations of RC3 introduced by the updates. The updating process is detailed in the pseudocode reported in the appendix “Structural Approximation Pseudocode.” First, we correct the blocked (respectively disabled) transitions in \({\bf P}_{b}\) (respectively \({\bf A}_{a}\)) by hidden transitions in the synchronizing process that do not change the state of the passive (respectively active) process. For \({\bf A}_{a}\), such hidden transitions need to be set to the reversed rate of action a in order to satisfy RC3. Next, a local iteration is performed to scale the rates of the active (respectively passive) process so as to account approximately for the probability that, prior to the updates, the event could not occur in the passive (respectively active) process. Note that the particular way in which \({\bf A}_{a}\) and \({\bf P}_{b}\) are updated may be customized to reflect how the particular class of models under study handles the specific types of blocking or job priorities. For example, the pseudocode applies to the case of blocking followed by retrials; variants are discussed in the section “Models with Resource Constraints.”
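A single update of this kind can be illustrated as follows. The sketch captures only the self-loop insertion and rate scaling for one blocked passive state, not the full pseudocode of the appendix, and all names are illustrative:

```python
import numpy as np

def structural_step(P_b, blocked_state, A_a, pi_passive):
    """One illustrative structural-approximation update: insert a hidden
    self-loop for the blocked passive state (repairing RC1) and scale the
    active rates by the probability that the passive process can cooperate."""
    P = np.asarray(P_b, float).copy()
    P[blocked_state, blocked_state] = 1.0        # hidden transition
    alpha = float(pi_passive[blocked_state])     # blocking probability estimate
    A = np.asarray(A_a, float) * (1.0 - alpha)   # proportionally reduce active rates
    return P, A
```

Iterating this step, recomputing the passive equilibrium distribution after each update, mirrors the local iteration described above.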
Example
Consider two small processes k and m with N k = 2 and N m = 3 states. Suppose there is a single action a = 1, \(a \in {\mathcal{A}}_{k}\), \(a \in {\mathcal{P}}_{m}\). The rate and local transition matrices are
Process k has a high transition rate between its two states, and the 1 → 2 transition requires synchronization with process m. However, when process m is in state 1, no passive action is enabled (all zeros in the first row of \({\bf P}_{a}\)). Hence, k is prevented from moving from state 1 to state 2.
The structural approximation sets P a (1, 1) = 1 and corrects the rates in process k to account for the blocking effects. In the resulting model, L k and L m are unaffected; instead
where \({\alpha }_{a,1} = {\pi }_{m}^{(0)}[1]\) and \({\pi }_{m}^{(0)}\) is the equilibrium probability distribution for process m in the model for iteration n = 0 having the rate matrices \({\bf A}_{a}^{(0)} = {\bf A}_{a}\) and \({\bf P}_{a}^{(0)} = {\bf P}_{a}^{(1)}\). In this way, we have adjusted the active rates in such a way that, for the fraction of time where process m is in state 1, process k has the rate of action a’s transitions to another state proportionally reduced. For this example, it is found that the fraction of the joint probability mass incorrectly placed by the product-form approximation converges after four iterations to 5.9%, whereas it is 45% if we just add \({\bf P}_{a}^{(1)}(1,1) = 1\) and do not apply corrections to \({\bf A}_{a}^{(1)}\).
Examples and Case Studies
Example: LPR and TLPR
We use a small example to illustrate and compare typical levels of tightness obtained by TLPR and LPR. The results are shown in Fig. 4.3, where the 2-norm for the current optimal solution with respect to the RC3 formula is evaluated for LPR(n) and TLPR(n, 1). The algorithm, described in the section “Exact Product-Form Construction,” increases the lower bound on the reversed rates in each iteration. The model is composed of M = 2 agents that interact over A = 2 actions a and b with \({\mathcal{A}}_{1} =\{ a\}\), \({\mathcal{P}}_{1} =\{ b\}\), \({\mathcal{A}}_{2} =\{ b\}\), and \({\mathcal{P}}_{2} =\{ a\}\). Process 1 has \({N}_{1} = 2\) states, and process 2 is defined by \({N}_{2} = 4\) states. Rates of active actions and local transitions are given in Table 4.1. The passive rate matrices have \({\bf P}_{a}(1,4) = {\bf P}_{a}(2,1) = {\bf P}_{a}(3,2) = {\bf P}_{a}(4,3) = 1\) and \({\bf P}_{b}(2,1) = 1\).
For this example, the LP solver finds a product form in both cases, with reversed rates \({x}_{a} = 0.659039\) and \({x}_{b} = 0.646361\). Linear programs here and in the rest of the paper are generated from MATLAB using YALMIP [33] and solved by IBM ILOG CPLEX’s parallel barrier method with 16 software threads [28]. CPU time is 28 ms for LPR(n) (20 variables, 82 constraints and bounds) and 51 ms for TLPR(n, 1) (20 variables, 106 constraints and bounds).
The case studies in the sections “Closed Stochastic Model” and “A G-Network Model” focus on exact product-form solutions and are used to evaluate the proposed methodology against state-of-the-art techniques, namely, Buchholz’s method [9] and INAP [34]. Conversely, the section “Models with Resource Constraints” illustrates the accuracy of rate and structural approximations on two models with resource constraints.
Example: ZPR
The example in this subsection illustrates certain benefits of the zero-potential relaxation over LPR and TLPR. Consider the toy model studied in [34, Fig. 5] composed of M = 2 processes m = 1 and k = 2 defined over \({N}_{m} = 4\) and \({N}_{k} = 3\) states. The processes cooperate on actions \(a \in {\mathcal{A}}_{k}\) and \(b \in {\mathcal{A}}_{m}\) and are defined by the rate and transition matrices given in the appendix “ZPR Example Model.” On this model, all relaxations find a product-form solution associated with the reversed rates x = (0.70, 1.90). LPR requires 14 linear programs to converge to such a solution with a tolerance \({\epsilon }_{\mathrm{tol}} = 1{0}^{-4}\). Conversely, ZPR obtains the same solution in just six linear programs. Noticeably, ZPR achieves at its first iteration a 2-norm for the residual of RC3 that LPR attains only after five linear programs. This provides a qualitative idea of the benefits of ZPR over LPR. Compared to TLPR, ZPR instead offers similar accuracy, including in this example, where TLPR completes after five linear programs. However, we have found ZPR to be numerically more robust than TLPR on several instances.
Closed Stochastic Model
Next, we consider a challenging model of a closed network comprising three queues, indexed by k = 1, 2, 3, that cooperate with a parallel system modeled by the stochastic Petri net shown in Fig. 4.4, indexed by k = 4. This Petri net abstracts a generic parallel system where some operations are synchronized between two servers, e.g., mirrored disk drives. The special structure of this model has been shown recently to admit a product form for certain values of the transition rates [26]. The places \({P}_{1}\) and \({P}_{2}\) receive tokens, representing disk requests, from transitions \({T}_{1}\), \({T}_{12}\), and \({T}_{2}\). Such transitions are passive, meaning that they are activated by other components. The other transitions are active and fire after exponentially distributed times when all their input places have at least one token. The rates of the underlying exponential distributions are \({\sigma }_{1} = 0.4\) for \({T}_{1}^{\prime}\), \({\sigma }_{2} = 0.1\) for \({T}_{2}^{\prime}\), and \({\sigma }_{12} = 0.33\) for \({T}_{12}^{\prime}\). Place \({P}_{1}\) receives jobs passively from transition \({T}_{1}^{\prime}\) and actively outputs into \({T}_{1}\) at the rate \({\mu }_{1} = 0.5\) (actions 4 and 1, respectively); similarly, place \({P}_{2}\) receives jobs from \({T}_{2}\) and feeds \({T}_{2}^{\prime}\) at the rate \({\mu }_{2} = 0.6\) (actions 3 and 6). Similarly, the queue k = 3 receives from \({T}_{12}\) and outputs to \({T}_{12}^{\prime}\) at the rate \({\mu }_{3} = 0.9\) (actions 2 and 5). Thus, \({N}_{k} = +\infty \), k = 1, …, M, and the model is a cooperation of M = 4 infinite processes on A = 6 actions. In the RCAT methodology, any cooperating process is considered in isolation with all its (possibly infinite) states, even if it is part of a closed model. This is consistent with the fact that the specific population in the model affects the computation of the normalizing constant, but not the structure of the product-form solution for a joint state [38].
In addition, note that the model is a pure cooperation, due to the lack of local transitions, having the dependency graph
Since G has rank r = 2, there are \(M - r = 2\) degrees of freedom in assigning the reversed rate vector \({\bf x} = ({x}_{1},{x}_{2},\ldots, {x}_{6})\). Specifically, it is shown in [26] that the following necessary conditions hold for a product-form solution: \({x}_{1} = {x}_{4},{x}_{2} = {x}_{5},{x}_{3} = {x}_{6},{x}_{5} = {\sigma }_{12}{x}_{4}{x}_{6}{({\sigma }_{1}{\sigma }_{2})}^{-1}\). We apply our method and the INAP algorithm in [34] to determine a product-form solution of type (4.1). INAP is a simple fixed-point iteration that, starting from a random guess of the vector x, progressively refines it until a product-form solution is found. For an action a, the refinement step averages the values of the reversed rates of action a over all states of the active process. Buchholz’s method in [9] cannot be used on the present example because it does not apply to closed models. For both INAP and our method, we truncate each queue's state space to \({N}_{k} = 75\) states and the Petri net to \({N}_{4} = 100\) states. Thus, the product-form solution we obtain is valid for closed models with up to N = 75 circulating jobs.
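The INAP refinement step can be sketched as follows, assuming the reversed rate of a transition i → j with rate \({\bf A}_{a}(i,j)\) is estimated as \(\pi [i]{\bf A}_{a}(i,j)/\pi [j]\) under the current equilibrium guess π. This is an illustrative reading of the averaging rule, not the authors' exact implementation:

```python
import numpy as np

def inap_refinement(pi, A_a):
    """INAP-style refinement (sketch): estimate the reversed rate of action a
    by averaging pi[i] * A_a[i, j] / pi[j] over all transitions (i -> j)
    that activate the action in the active process."""
    i_idx, j_idx = np.nonzero(A_a)
    reversed_rates = pi[i_idx] * A_a[i_idx, j_idx] / pi[j_idx]
    return float(reversed_rates.mean())
```

As noted below, this averaging ignores ergodicity constraints, which is one reason the fixed point can fail to converge on truncated state spaces.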
Numerical Results. The best-performing relaxation on this example is TLPR(n, 1), which returns, after 35.82 s, a solution
that matches the RC3 conditions with a tolerance of \(1{0}^{-3}\). Since the tolerance of the solver is \({\epsilon }_{\mathrm{tol}} = 1{0}^{-4}\), we regard this as an acceptable value considering that TLPR(n, 1) describes a tight feasible region that may require the LP solver to apply numerical perturbations; this is a standard feature of modern state-of-the-art LP solvers. LPR(n) provides a more accurate solution, \({\bf x}_{\mathrm{lpr}} = (0.4008, 0.3305, 0.1001, 0.4001, 0.3300, 0.1001)\), but requires 234.879 s of CPU time and 124 linear programs to converge. INAP instead suffers a significant loss of accuracy with this parameterization and does not converge. The solution returned after 48.64 s and 15,000 iterations is
which is still quite far from the correct solution, especially concerning the necessary condition x 3 = x 6. We have further investigated this problem and observed that, in contrast to our algorithm, INAP ignores ergodicity constraints; hence most of the mass in this example is placed in states near the truncation border. This appears to be the reason for the failed convergence.
A G-Network Model
We next consider a generalized open queueing network in which customers are of positive and negative types, i.e., a G-network [20]. These models enjoy a product-form solution, but it is not generally available in closed form, and numerical techniques are required to determine it. Hence, G-networks provide a useful benchmark to compare different approaches for automated product-form analysis.
The queue parameterization used in this case study is the one given in [5] for a large model with M = 10 queues and A = 24 actions. Model parameters are given in the appendix “G-Network Case Study.” The infinite state space is truncated such that each queue has \({N}_{k} = 75\) states. The size of the joint state space for the truncated model is \(5.63 \cdot 1{0}^{18}\) states, which is infeasible to solve numerically in the joint process.
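The quoted size follows from the fact that the joint state space of M truncated processes with \({N}_{k}\) states each grows as \({N}_{k}^{M}\):

```python
# Joint state space of M = 10 truncated queues with N_k = 75 states each.
M, N_k = 10, 75
joint_states = N_k ** M
print(f"{float(joint_states):.2e}")  # about 5.63e+18 states
```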
Numerical Results. For this model, ENS and QCP fail almost immediately, reporting that the magnitude of the gradient is too small. Conversely, LPR returns the solution in Table 4.2. Quite interestingly, ZPR returns a different set of reversed rates, but these are found to generate the same equilibrium distributions π k for all processes within the numerical tolerance \({\epsilon }_{tol} = 1{0}^{-4}\). Thus, this case study again confirms that our approach also provides valid answers in models with multiple solutions. A comparison of the convergence speed of LPR and ZPR is given in Fig. 4.5; TLPR fails in this case due to numerical issues since the feasible region is very tight. We have investigated the problem further and found that the barrier method is responsible for such instabilities and that switching to the simplex algorithm solves the problem and provides the same product-form solution as LPR.
We now compare our technique against Buchholz’s method, applied to finding product forms of type (4.1). Buchholz’s method involves a quadratic optimization technique that minimizes the residual norm with respect to a product-form solution for the model. This is done without explicitly computing the joint state distribution; hence it is computationally efficient. The comparison with the method proposed here is interesting because Buchholz’s method seeks local optima instead of the global optima searched for by AutoCAT. We have verified that, on small- to medium-scale models, the method is efficient in finding product forms. However, for large models the local optimization approach does not guarantee that a product-form solution will be found when one is known to exist. In particular, using random initial points, we found that, even though the residual errors are similar to those of the optimum solution of LPR, the specific local optimum returned by Buchholz’s method can differ substantially in terms of the global product-form probability distribution. For some local optima, the marginal probability distribution at a queue is not geometric, and the error on performance indices can be very large. This confirms the importance of using global optimization methods, such as the one proposed in this chapter, for product-form analysis, especially in large-scale models. Furthermore, we believe that including RC3 in Buchholz’s method would help to ensure the geometric structure of the marginal distribution.
Models with Resource Constraints
Finally, we consider automated approximation of performance models with resource constraints. We consider an open queueing network composed of M = 5 exponential, first-come first-served queues with finite buffer sizes described by the vector \(({B}_{1},{B}_{2},\ldots, {B}_{M}) = (7,2,+\infty, 3,10)\). Routing probabilities and model parameters are given in the appendix “Loss and BBS Models.” In particular, arrival rates are chosen such that the equilibrium of the network differs dramatically from that of the corresponding infinite-capacity model, in which the first queues would be fully saturated. To explore the accuracy of rate and structural approximations, we consider two opposite blocking types: blocking before service (BBS) [4], where a job is blocked before entering the server if its target queue is full, and the classical loss policy, where a job reaching a full station leaves the network. Such policies apply homogeneously to all queues. In both models, we study as the target performance metric the mean queue-length vector \({\bf n} = ({n}_{1},\ldots, {n}_{M})\), because such values are typically harder to approximate than utilizations, as they depend more strongly on the entire marginal probability distributions of the queues.
The BBS model requires structural approximation to improve the accuracy of the initial ALPR rate approximation. To adapt the \({\bf A}_{a}\) corrections to this specific blocking policy, it is sufficient to delete the term \({\alpha }_{c,n}\Delta ({\bf A}_{c}^{(0)}{\bf 1})\) from the update in the structural approximation pseudocode, implying that jobs are not executed while the target station is busy. For this case study, the queue lengths obtained by simulation are \({\bf n} = (5.946, 1.262, 0.327, 1.1631, 1.653)\). The estimates returned by structural approximation converge after the fifth iteration to \({\bf n}_{\mathrm{sa}}(5) = (5.9580, 1.3117, 0.2871, 1.0631, 1.3559)\), with an error on the bottleneck queue of just 0.20%.
For the loss model, we found that queue lengths are estimated accurately by the ALPR rate approximation alone, after adding hidden transitions to the \({\bf P}_{b}\) matrices to correct RC1. In particular, \({\bf n}_{\mathrm{ra}} = (5.0792, 0.9599, 0.2688, 0.5050, 0.5273)\), whereas the result of the simulation is \({\bf n} = (5.4877, 1.0642, 0.2766, 0.5248, 0.5536)\), giving an average relative gap of 5.72%. This confirms the quality of the rate approximation in the loss case. Note that in both this case and the BBS model, computational costs are below 5 min.
We have also tried to apply Buchholz’s method to these examples, but as with the model of the section “A G-Network Model,” the technique converges to a local optimum that differs from the simulated equilibrium behavior. Conversely, INAP does not apply to approximate analysis.
Closed Phase-Type Queueing Network
We now describe an example of approximate analysis of closed queueing networks. For illustration purposes, we focus on a machine repairman model comprising a single-server first-come, first-served queue in tandem with an infinite-server station. The same methodology can be used for larger models. The infinite-server station has exponentially distributed service times with rate \({\mu }_{2} = 20\) jobs per second. The queue has PH service times (we refer the reader to [8] for an introduction to PH models). The distribution chosen has two states and representation (α, T) with initial vector α = (1, 0) and PH subgenerator
This PH model generates hyperexponential service times with mean 0.9996, squared coefficient of variation 9.9846, and skewness 19.6193. With this parameterization, the model is solved for a population of N = 15 jobs by direct evaluation of the underlying Markov chain, obtaining a throughput \({X}_{\mathrm{ex}} = 0.6303\) jobs per second. This is lower than the throughput \({X}_{\mathrm{pf}} = 0.6701\) jobs per second provided by a corresponding product-form model where the PH service time distribution is replaced by an exponential distribution.
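Such moments can be checked from the standard raw-moment formula for PH distributions, \(E[{X}^{m}] = m!\,\alpha {(-T)}^{-m}{\bf 1}\). Since the subgenerator display is not reproduced here, the sketch below is validated on an exponential special case instead:

```python
import math
import numpy as np

def ph_moments(alpha, T, k=3):
    """First k raw moments of a PH(alpha, T) distribution:
    E[X^m] = m! * alpha (-T)^{-m} 1."""
    alpha = np.asarray(alpha, float)
    T = np.asarray(T, float)
    invT = np.linalg.inv(-T)
    ones = np.ones(T.shape[0])
    return [math.factorial(m) * float(alpha @ np.linalg.matrix_power(invT, m) @ ones)
            for m in range(1, k + 1)]
```

From the first two moments, the squared coefficient of variation is \(E[{X}^{2}]/E{[X]}^{2} - 1\), which equals 1 for an exponential distribution and 9.9846 for the hyperexponential representation used in this example.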
We then approximate the solution of this model by AutoCAT and study its relative accuracy. To cope with the lack of explicit constraints to find feasible reversed rates different from the degenerate ones \({\bf x} = ({x}_{1},{x}_{2}) = {\bf 0}\), we use the following iterative method. Initially, we set \({x}_{1} = {X}_{\mathrm{pf}}\). Based on this educated guess, we run our approximation method based on the LPR formulation to find an approximate value for \({x}_{2}\). This allows the model to be solved after computing numerically the normalizing constant of the equilibrium probabilities and readily provides an estimate \({X}^{(1)}\) for the network throughput. In the following iteration, we assign \({x}_{1} = {X}^{(1)}\) and reoptimize to find a new value of \({x}_{2}\) and the corresponding throughput \({X}^{(2)}\). This iterative scheme is reapplied until convergence is achieved (see note 1).
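The scheme is a plain successive-substitution fixed point, which can be sketched generically (the update function that re-solves the LPR-based approximation is abstracted away):

```python
def fixed_point(update, x0, tol=1e-4, max_iter=50):
    """Generic successive-substitution loop: x_{n+1} = update(x_n),
    stopping when consecutive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = update(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

In the case study, update(x) would set \({x}_{1} = x\), reoptimize for \({x}_{2}\), and return the resulting throughput estimate.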
For the model under study, this approximation provides a sequence of solutions \({X}_{\mathrm{lpr}}^{(1)} = 1.0004\), \({X}_{\mathrm{lpr}}^{(2)} = 0.6291\), \({X}_{\mathrm{lpr}}^{(3)} = 0.6089\), \({X}_{\mathrm{lpr}}^{(4)} = 0.6107\), and \({X}_{\mathrm{lpr}}^{(5)} = 0.6105\) jobs per second, for a total of 40 solver iterations. The last solution has a relative error of −3.14% with respect to the exact value, compared to the 6.31% error of the product-form approximation, thus reducing the approximation error by about 50%.
We have also compared accuracy with a recent iterative approximation technique for closed networks, inspired by RCAT and proposed in [14, 15]. This technique involves replacing each \(-/PH/1\) queueing station by a load-dependent station such that the state probability distribution for a model with M queues is
where n i is the number of jobs in queue i, G is a normalization constant, and
where \({\eta }_{i}\) is the largest eigenvalue of the rate matrix for the quasi-birth-and-death process obtained by studying the ith station as an open \(PH/PH/1\) queue with appropriate input process and utilization \({\rho }_{i}\). We refer to [14] for further details on this construction; here we simply stress that this particular approximation differs from the AutoCAT one by using only the slowest decay rate of the queue-length marginal probabilities for such a \(PH/PH/1\) queue. In this chapter, by contrast, we have developed more general approximations that do not resort to asymptotic arguments to simplify the model and that may also be applied to stochastic systems other than queueing networks.
A comparison with the method proposed in [14, 15] reveals that the throughput returned by that approximation is X = 0.5843 jobs per second, with a relative error of −7.30%. While classes of models exist where it can be shown that this method is far more accurate than the product-form one [14], this example convincingly illustrates a case where the AutoCAT approximation is the most accurate available.
Conclusion
We have introduced an optimization-based approach to product-form analysis of performance models that can be described as a cooperation of Markov processes, e.g., queueing networks and stochastic Petri nets. Our methodology consists of solving a sequence of linear programming relaxations of a nonconvex optimization program that captures a set of sufficient conditions for a product form. The main limitation of our methodology is that we cannot represent cooperations involving actions that synchronize over more than two processes. However, such multiway cooperations are useful only in specialized models, e.g., queueing networks with catastrophes [18]. We believe that such an extension is possible, although it may require a sequence of independent product-form search problems to be solved. Hence, the computational costs of such solutions should be evaluated for models of practical interest.
Finally, we plan to study the effects of integrating new constraints into the linear programs, such as costs or bounds on the variables that may help in determining a particular reversed rate vector among a set of multiple feasible solutions. For instance, for models that enjoy bounds on their steady state that may be expressed as linear programs, e.g., stochastic Petri nets [32], this could enable the generation of exact or approximate product forms that are guaranteed to be within the known theoretical bounds.
Notes
1. All test cases did converge, but no rigorous convergence proof is available.
References
Al-Khayyal, F.A., Falk, J.E.: Jointly constrained biconvex programming. Math. Oper. Res. 8(2), 273–286 (1983)
Argent-Katwala, A.: Automated product-forms with Meercat. In: Proceedings of SMCTOOLS, October 2006
Balbo, G., Bruell, S.C., Sereno, M.: Product form solution for generalized stochastic Petri nets. IEEE TSE 28(10), 915–932 (2002)
Balsamo, S., Onvural, R.O., De Nitto Personé, V.: Analysis of Queueing Networks with Blocking. Kluwer, Norwell, MA (2001)
Balsamo, S., Dei Rossi, G., Marin, A.: A numerical algorithm for the solution of product-form models with infinite state spaces. In: Aldini, A., Bernardo, M., Bononi, L., Cortellessa, V. (eds.) Computer Performance Engineering (Proceedings of EPEW 2010, Bertinoro, Italy). LNCS, vol. 6342. Springer, Berlin (2010)
Baskett, F., Chandy, K.M., Muntz, R.R., Palacios, F.G.: Open, closed, and mixed networks of queues with different classes of customers. J. ACM 22(2), 248–260 (1975)
Bertsimas, D., Tsitsiklis, J.: Introduction to Linear Optimization. Athena Scientific, Nashua, NH (1997)
Bolch, G., Greiner, S., de Meer, H., Trivedi, K.S.: Queueing Networks and Markov Chains. Wiley, New York (1998)
Buchholz, P.: Product form approximations for communicating Markov processes. In: Proceedings of QEST, pp. 135–144. IEEE, New York (2008)
Buchholz, P.: Product form approximations for communicating Markov processes. Perform. Eval. 67(9), 797–815 (2010)
Burer, S., Letchford, A.N.: On nonconvex quadratic programming with box constraints. SIAM J. Optim. 20(2), 1073–1089 (2009)
Cao, X.R.: The relations among potentials, perturbation analysis, and Markov decision processes. Discr. Event Dyn. Sys. 8(1), 71–87 (1998)
Cao, X.R., Ma, D.J.: Performance sensitivity formulae, algorithms and estimates for closed queueing networks with exponential servers. Perform. Eval. 26, 181–199 (1996)
Casale, G., Harrison, P.G.: A class of tractable models for run-time performance evaluation. In: Proceedings of ACM/SPEC ICPE (2012)
Casale, G., Harrison, P.G., Vigliotti, M.G.: Product-form approximation of queueing networks with phase-type service. ACM Perf. Eval. Rev. 39(4) (2012)
de Souza e Silva, E., Ochoa, P.M.: State space exploration in Markov models. In: Proceedings of ACM SIGMETRICS, pp. 152–166 (1992)
Dijk, N.: Queueing Networks and Product Forms: A Systems Approach. Wiley, Chichester (1993)
Fourneau, J.M., Quessette, F.: Computing the steady-state distribution of G-networks with synchronized partial flushing. In: Proceedings of ISCIS, pp. 887–896. Springer, Berlin (2006)
Fourneau, J.M., Plateau, B., Stewart, W.: Product form for stochastic automata networks. In: Proceedings of ValueTools, pp. 1–10 (2007)
Gelenbe, E.: Product-form queueing networks with negative and positive customers. J. App. Probab. 28(3), 656–663 (1991)
GNU GLPK 4.8. http://www.gnu.org/software/glpk/
Harrison, P.G.: Turning back time in Markovian process algebra. Theor. Comput. Sci 290(3), 1947–1986 (2003)
Harrison, P.G.: Reversed processes, product forms and a non-product form. Lin. Algebra Appl. 386, 359–381 (2004)
Harrison, P.G., Hillston, J.: Exploiting quasi-reversible structures in Markovian process algebra models. Comp. J. 38(7), 510–520 (1995)
Harrison, P.G., Lee, T.: Separable equilibrium state probabilities via time reversal in Markovian process algebra. Theor. Comput. Sci 346, 161–182 (2005)
Harrison, P.G., Llado, C.: A PMIF with Petri net building blocks. In: Proceedings of ICPE (2011)
Hillston, J.: A compositional approach to performance modelling. Ph.D. Thesis, University of Edinburgh (1994)
IBM ILOG CPLEX 12.0 User’s Manual, 2010
Jackson, J.R.J.: Jobshop-like queueing systems. Manage. Sci. 10(1), 131–142 (1963)
Kelly, F.P.: Networks of queues with customers of different types. J. Appl. Probab. 12(3), 542–554 (1975)
Kelly, F.P.: Reversibility and Stochastic Networks. Wiley, New York (1979)
Liu, Z.: Performance analysis of stochastic timed Petri nets using linear programming approach. IEEE TSE 11(24), 1014–1030 (1998)
Löfberg, J.: YALMIP: A toolbox for modeling and optimization in MATLAB. In: Proceedings of CACSD (2004)
Marin, A., Bulò, S.R.: A general algorithm to compute the steady-state solution of product-form cooperating Markov chains. In: Proceedings of MASCOTS, pp. 1–10 (2009)
Marin, A., Vigliotti, M.G.: A general result for deriving product-form solutions in Markovian models. In: Proceedings of ICPE, pp. 165–176 (2010)
McCormick, G.P.: Computability of global solutions to factorable nonconvex programs. Math. Prog. 10, 146–175 (1976)
Muntz, R.R.: Poisson departure processes and queueing networks. Tech. Rep. RC 4145, IBM T.J. Watson Research Center, Yorktown Heights, NY (1972)
Nelson, R.D.: The mathematics of product form queuing networks. ACM Comp. Surv. 25(3), 339–369 (1993)
Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, Berlin (1999)
Plateau, B.: On the stochastic structure of parallelism and synchronization models for distributed algorithms. In: Proceedings of ACM SIGMETRICS, pp. 147–154 (1985)
Saad, Y.: Iterative Methods for Sparse Linear Systems. SIAM, Philadelphia (2000)
Appendix
Infinite Processes
Numerical optimization techniques generally require matrices of finite size. In both ENS and its relaxations, we therefore use exact or approximate aggregations to truncate the state spaces of any infinite processes. Let C + 1 be the maximum acceptable matrix order. Then we decompose the generator matrix of an infinite process k and its equilibrium probability vector as
Finally, we comment on the choice of the parameter C for a given process k. Since this determines the number of states \({N}_{k}\) of the truncated process, an optimal choice of this value can provide substantial computational savings. Let us first note that, starting from a small C, it is easy to integrate additional constraints or potential vectors into the linear formulations for a value C ′ > C. Recall that we propose a sequence of linear programs in order to obtain a feasible solution x. If a linear program is infeasible, this can be due either to the lack of a product form or to a truncation with a C that is too small. The latter case can be readily diagnosed by adding slack variables, as in QCP, to the ergodicity condition and verifying whether this is sufficient to restore feasibility. In such a case, C is updated to the smallest value such that feasibility is restored in the main linear program.
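One simple truncation of this kind, keeping the first C + 1 states and repairing the diagonal so that each row of the generator sums to zero, can be sketched as follows (this is an illustrative variant, not the chapter's exact aggregation):

```python
import numpy as np

def truncate_generator(Q, C):
    """Keep the first C+1 states of a (large or infinite) generator and
    repair the diagonal so each row sums to zero; rates leaving the
    truncated set are simply dropped in this simple variant."""
    Qt = np.asarray(Q, float)[:C + 1, :C + 1].copy()
    np.fill_diagonal(Qt, 0.0)
    np.fill_diagonal(Qt, -Qt.sum(axis=1))
    return Qt
```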
ZPR Example Model
Structural Approximation Pseudocode
G-Network Case Study
We report the parameters for the G-network given in [5]. The network consists of M = 10 queues with exponentially distributed service times having rates \({\mu }_{1} = 4.5\) and \({\mu }_{i} = 4.0 + (0.1)i\) for i ∈ [2, 10]. The external arrival rate defines a Poisson process with rate λ = 5.0. The routing matrix for (positive) customers has in row i and column j the probability \({r}_{i,j}^{+}\) of a (positive) customer being routed to queue j, as a positive customer, upon leaving queue i. In this case study, this routing matrix is given by
Loss and BBS Models
The model is composed of M = 5 queues that cooperate on a set of A = 12 actions, one for each possible job movement from and inside the network. The routing probabilities R[k, j] from queue k to queue j are as follows:
© 2013 Springer Science+Business Media New York
Casale, G., Harrison, P.G. (2013). AutoCAT: Automated Product-Form Solution of Stochastic Models. In: Latouche, G., Ramaswami, V., Sethuraman, J., Sigman, K., Squillante, M., Yao, D.D. (eds.) Matrix-Analytic Methods in Stochastic Models. Springer Proceedings in Mathematics & Statistics, vol. 27. Springer, New York. https://doi.org/10.1007/978-1-4614-4909-6_4