1 Introduction

A discrete event system (e.g., [1]) is a dynamical system driven by the instantaneous occurrences of events. In a discrete event system, two basic elements are distinguished: the event set and the rule describing the behavior of the system. By considering events such as parts entering or leaving machines, discrete event systems offer an interesting framework to model manufacturing systems at a high level of abstraction. Many formal approaches such as finite-state automata (e.g., [2]) and Petri nets (e.g., [3]) have been investigated to express the rule describing the behavior of the system. In the following, we focus on discrete event systems where this rule is composed only of synchronizations (i.e., conditions of the form: for all \(k \geq l\), occurrence k of event \(e_{2}\) is at least τ units of time after occurrence \(k - l\) of event \(e_{1}\), with \(\tau \in \mathbb{N}_{0}\) and \(l \in \mathbb{N}_{0}\)). The behavior of manufacturing systems functioning under a predefined schedule can often be adequately modeled by synchronizations (see Example 1).

Discrete event systems where the rule describing the behavior is only composed of synchronizations are called \(\left (\max,+\right )\)-linear systems. This terminology is due to the fact that a specific behavior, namely the behavior under the earliest functioning rule, is described by linear equations in particular algebraic structures such as the \(\left (\max,+\right )\)-algebra. In the literature, only this specific behavior is usually considered. For \(\left (\max,+\right )\)-linear systems, it is possible to partition the set of events into input, internal, and output events and, based on this partition, to derive a \(\left (\max,+\right )\)-linear state-space model of the system. Therefore, much effort has been made during the last decades to adapt key concepts from standard control theory to \(\left (\max,+\right )\)-linear systems. Transfer function matrices have been introduced for \(\left (\max,+\right )\)-linear systems by using formal power series [4]. Furthermore, some standard control approaches such as optimal feedforward control [5], model reference control [6–8], and model predictive control [9] have been extended to \(\left (\max,+\right )\)-linear systems. For manufacturing systems, model reference control is particularly interesting, as it offers techniques to both reduce the size of internal buffers and take into account unexpected disturbances.

We emphasize that the purpose of this contribution is not to compare different modeling and control approaches for manufacturing systems. On the contrary, we concentrate on a specific class of manufacturing systems exclusively governed by synchronization and delay phenomena. As pointed out above, models for this class of systems are linear in certain algebraic structures. For this reason, many methods for designing control can be adapted from standard linear systems theory to be applicable to the discussed class of manufacturing systems. A key advantage of this approach is that the desired control policy, i.e., the way control reacts to external inputs and measured outputs, can be computed analytically and offline. Hence, the required computational online effort is negligible.

The rule describing the behavior of \(\left (\max,+\right )\)-linear systems can also be expressed by specific timed Petri nets called timed event graphs (TEGs). A TEG is a directed bipartite graph, where the set of nodes is partitioned into a set of places and a set of transitions, and arcs are either from places to transitions or from transitions to places. Moreover, in a TEG, each place has precisely one incoming and one outgoing arc. Each place is equipped with a holding time. Places may contain tokens, and transitions are associated with events. A transition can “fire” (i.e., the associated event can occur) if and only if each place from which an arc leads to the transition (“upstream place”) has at least one token residing in the respective place for at least the corresponding holding time. If the transition “fires” (i.e., the associated event occurs), all upstream places lose one token and all downstream places (places to which there is an arc from the considered transition) gain one token. Places and transitions are graphically represented by circles and bars, and the holding times, if nonzero, are indicated by adding numbers to places. In the following, we focus on \(\left (\max,+\right )\)-linear representations to formally manipulate systems, but use timed event graphs to graphically represent systems.

This chapter is structured as follows. In the next section, necessary mathematical tools are recalled. The modeling of the considered class of discrete event systems in the \(\left (\max,+\right )\)-algebra and in the dioid \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\) is presented in Sect. 3. Finally, Sect. 4 focuses on control for \(\left (\max,+\right )\)-linear systems. Throughout this chapter, the simple manufacturing system introduced in Example 1 is used to illustrate and clarify the presented concepts. We emphasize that illustration and clarification is the sole purpose of this example. However, methods based on \(\left (\max,+\right )\)-linear systems are also suitable for industrially relevant systems: for example, in [10], this approach is used to model and control high-throughput screening systems (i.e., systems to rapidly test thousands of biochemical substances) with over one hundred events and dozens of activities and resources.

Example 1.

A simple manufacturing system composed of three machines, denoted M 1, M 2, and M 3, is considered. Machine M 1 consumes workpieces of type 1 and releases workpieces of type 3. Machine M 2 consumes workpieces of type 2 and releases workpieces of type 4. Machine M 3 pairwise assembles workpieces of type 3 and 4 and delivers workpieces of type 5. The production of a new workpiece of type 5 from workpieces of type 1 and 2 starts after the receiving of an order from a customer. Orders and workpieces of type 1 and 2 correspond to the inputs of the manufacturing system (i.e., external influences either from suppliers or from customers) and workpieces of type 5 correspond to the output of the manufacturing system. Each machine has a capacity of one. The processing time associated with machine M 1, denoted τ 1, is four units of time and the processing time associated with machine M 2 (resp. M 3), denoted τ 2 (resp. τ 3), is two units of time. Furthermore, a machine M i with 1 ≤ i ≤ 3 can start processing the next workpiece as soon as it finishes processing the current workpiece. The buffers have an infinite capacity. To formally describe the dynamics of this manufacturing system, we define the following events:

event u i (with i = 1, 2):

a workpiece of type i enters the system

event s i (with 1 ≤ i ≤ 3):

machine M i starts to process a (pair of) workpiece(s)

event f i (with 1 ≤ i ≤ 3):

machine M i delivers a processed workpiece

event o :

an order is received

event y :

a workpiece of type 5 leaves the system

The behavior of the considered manufacturing system is completely expressed by synchronizations of the events defined above. Two synchronizations are needed to express the dynamics of each machine M i with 1 ≤ i ≤ 3. The first synchronization models the process associated with machine M i : for all k ≥ 0, occurrence k of event f i is at least τ i units of time after occurrence k of event s i . The second synchronization models the capacity constraint: for all k ≥ 1, occurrence k of event s i is at least zero units of time after occurrence k − 1 of event f i . Furthermore, to model the flow of workpieces outside the machines, some additional synchronizations are needed. The supply of workpieces of type i with i = 1, 2 is modeled by “for all k ≥ 0, occurrence k of event s i is at least zero units of time after occurrence k of event u i ” with i = 1, 2. The supply for machine M 3 of workpieces processed by machine M i with i = 1, 2 is expressed by “for all k ≥ 0, occurrence k of event s 3 is at least zero units of time after occurrence k of event f i ” with i = 1, 2. The release of workpieces of type 5 is modeled by “for all k ≥ 0, occurrence k of event y is at least zero units of time after occurrence k of event f 3”. Finally, orders are taken into account by “for all k ≥ 0, occurrence k of event s i is at least zero units of time after occurrence k of event o” with i = 1, 2.

The timed event graph associated with this manufacturing system is shown in Fig. 1, where holding times (if nonzero) are indicated by numbers attached to places.

Fig. 1

A simple manufacturing system

2 Mathematical Preliminaries

In this section, necessary elements of dioid theory and residuation theory are recalled. A complete survey on these topics is available in [4] and [11], respectively.

2.1 Dioid Theory

Dioids (or idempotent semirings) are algebraic structures which play a major role in the modeling of \(\left (\max,+\right )\)-linear systems.

Definition 1 (Dioid).

A dioid is a set \(\mathscr{D}\) endowed with two binary operations, denoted ⊕ and ⊗, such that:

  • ⊕ is associative, commutative, idempotent (\(\forall a \in \mathscr{ D},a \oplus a = a\)), and admits a neutral element ɛ.

  • ⊗ is associative and admits a neutral element e.

  • ⊗ is distributive with respect to ⊕ from both sides:

    $$\displaystyle{\forall a,b,c \in \mathscr{ D},\quad \left \{\begin{array}{l} a \otimes \left (b \oplus c\right ) = \left (a \otimes b\right ) \oplus \left (a \otimes c\right )\\ \left (a \oplus b \right ) \otimes c = \left (a \otimes c \right ) \oplus \left (b \otimes c \right ) \end{array} \right.}$$
  • ɛ is absorbing for ⊗, i.e., \(\forall a \in \mathscr{ D},\;\;a\otimes \varepsilon =\varepsilon \otimes a =\varepsilon\).

If \(\mathscr{D}\) is closed for infinite sums and distributivity is extended to infinite sums, then dioid \(\mathscr{D}\) is said to be complete.

Formally, the operations ⊕ and ⊗ are very similar to the standard operations + and ×. Therefore, these operations are respectively called addition and multiplication. Then, ɛ is called the zero element of the dioid \(\mathscr{D}\) and e is its unit element. As in classical algebra, ⊗ is often omitted and the product is simply denoted by juxtaposition (i.e., ab corresponds to \(a \otimes b\)). As ⊕ is associative, commutative, and idempotent, it induces a partial order on \(\mathscr{D}\) defined by \(a\preceq b \Leftrightarrow a \oplus b = b\). Hence, a dioid is a partially ordered set.
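As an illustration (a minimal sketch, not part of the chapter's development; all names are ours), the scalar operations of a \(\left (\max,+\right )\)-style dioid and the order induced by ⊕ can be written in a few lines of Python:

```python
# Illustrative sketch of a scalar (max,+)-type dioid: oplus = max, otimes = +.
EPS = float("-inf")  # zero element (epsilon), absorbing for otimes
E = 0                # unit element (e)

def oplus(a, b):
    """Dioid addition: max (associative, commutative, idempotent)."""
    return max(a, b)

def otimes(a, b):
    """Dioid multiplication: +, with epsilon absorbing."""
    if a == EPS or b == EPS:
        return EPS
    return a + b

def leq(a, b):
    """Order induced by oplus: a precedes b iff a oplus b == b."""
    return oplus(a, b) == b
```

For instance, `oplus(4, 4)` returns `4` (idempotency), and `leq(3, 5)` holds, matching the standard order on numbers.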

By analogy with standard linear algebra, the operations ⊕ and ⊗ are extended to matrices with entries in a dioid \(\mathscr{D}\).

$$\displaystyle\begin{array}{rcl} & & \forall A,B \in \mathscr{ D}^{n\times p},\quad \left (A \oplus B\right )_{ ij} = A_{ij} \oplus B_{ij} {}\\ & & \forall A \in \mathscr{ D}^{n\times p},\forall B \in \mathscr{ D}^{p\times q},\quad \left (A \otimes B\right )_{ ij} =\bigoplus _{ k=1}^{p}A_{ ik}B_{kj} {}\\ \end{array}$$

The operation ⊕ also provides a partial order over \(\mathscr{D}^{n\times p}\). Formally, for \(A,B \in \mathscr{ D}^{n\times p}\), \(A\succeq B \Leftrightarrow A = A \oplus B\). The next proposition gives the algebraic structure of the set of square matrices with entries in a dioid endowed with the operations ⊕ and ⊗ defined above.

Proposition 1 ([4]).

Let \(\mathscr{D}\) be a dioid. The set \(\mathscr{D}^{n\times n}\) endowed with the operationsanddefined above is a dioid. Besides, if \(\mathscr{D}\) is complete, then \(\mathscr{D}^{n\times n}\) is complete.

The next theorem plays an essential role in the following to solve implicit inequalities of the form \(X\succeq AX \oplus B\), where A, X, and B are matrices with entries in a complete dioid.

Theorem 1 (Kleene Star Theorem [4]).

Let \(\mathscr{D}\) be a complete dioid and \(A \in \mathscr{ D}^{n\times n},B \in \mathscr{ D}^{n\times p}\) . Denote the unit element of \(\mathscr{D}^{n\times n}\) by e. Then, the inequality \(X\succeq AX \oplus B\) admits \(A^{{\ast}}B\) as least solution, where the Kleene star of A, denoted \(A^{{\ast}}\), is defined by

$$\displaystyle{A^{{\ast}} =\bigoplus _{ k=0}^{+\infty }A^{k}\mathit{\mbox{ with }}A^{k} = \left \{\begin{array}{l} e\mathit{\mbox{ if }}k = 0 \\ A \otimes A^{k-1}\mathit{\mbox{ otherwise }} \end{array} \right.}$$
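To make Theorem 1 concrete, the following Python sketch (ours, not from the chapter) computes \(A^{\ast }\) by summing powers of A; it assumes A has no circuit of positive weight, so the sum stabilizes after n − 1 powers:

```python
EPS = float("-inf")  # epsilon of the (max,+)-algebra

def mat_oplus(A, B):
    """Entrywise dioid addition (max)."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_otimes(A, B):
    """Dioid matrix product: (A otimes B)_ij = max_k (A_ik + B_kj)."""
    return [[max((A[i][k] + B[k][j]) if EPS not in (A[i][k], B[k][j]) else EPS
                 for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def star(A):
    """Kleene star: e oplus A oplus A^2 oplus ...; with no positive-weight
    circuit the sum stabilizes after n - 1 powers."""
    n = len(A)
    S = [[0 if i == j else EPS for j in range(n)] for i in range(n)]  # e
    P = S
    for _ in range(n - 1):
        P = mat_otimes(P, A)
        S = mat_oplus(S, P)
    return S
```

For example, with A = [[EPS, EPS], [1, EPS]] and B = [[2], [0]], `star(A)` returns [[0, EPS], [1, 0]] and `mat_otimes(star(A), B)` returns [[2], [3]], which is indeed the least X satisfying the inequality of Theorem 1.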

In Sect. 3, modeling of \(\left (\max,+\right )\)-linear systems in the \(\left (\max,+\right )\)-algebra and in the dioid \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\) will be discussed. Next we briefly describe these two dioids.

2.1.1 The \(\left (\max,+\right )\)-Algebra

The \(\left (\max,+\right )\)-algebra, denoted \(\overline{\mathbb{N}}_{\max }\), is defined as the set \(\mathbb{N}_{0} \cup \left \{-\infty,+\infty \right \}\) endowed with the operations max and +. This corresponds to a complete dioid with max as addition ⊕ and + as multiplication ⊗. The zero element ɛ is equal to \(-\infty\) and the unit element e is equal to 0. The order induced by the operation ⊕ corresponds to the standard order, as

$$\displaystyle{a\preceq b \Leftrightarrow a \oplus b = b \Leftrightarrow b =\max \left (a,b\right ) \Leftrightarrow a \leq b}$$

Example 2.

In the following, some simple calculations in \(\overline{\mathbb{N}}_{\max }\) are described. In the scalar case,

$$\displaystyle{5 \oplus 3 =\max \left (5,3\right ) = 5\mbox{ and }5 \otimes 3 = 5 + 3 = 8}$$

In the matrix case,

$$\displaystyle\begin{array}{rcl} & & \left (\begin{array}{ccc} 5&3& + \infty \\ \varepsilon &4 & \varepsilon \\ e & \varepsilon & \varepsilon \end{array} \right ) \oplus \left (\begin{array}{ccc} 2& \varepsilon & 2\\ 3 &e & 4 \\ e& \varepsilon & + \infty \end{array} \right ) = \left (\begin{array}{ccc} 5&3& + \infty \\ 3 &4 & 4 \\ e& \varepsilon & + \infty \end{array} \right ) {}\\ & & \left (\begin{array}{ccc} 5&3& + \infty \\ \varepsilon &4 & \varepsilon \\ e & \varepsilon & \varepsilon \end{array} \right ) \otimes \left (\begin{array}{ccc} 2& \varepsilon & 2\\ 3 &e & 4 \\ e& \varepsilon & + \infty \end{array} \right ) = \left (\begin{array}{ccc} + \infty &3& + \infty \\ 7 &4 & 8 \\ 2 & \varepsilon & 2 \end{array} \right ){}\\ \end{array}$$
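The matrix computations of Example 2 can be reproduced with a short Python sketch (illustrative helper names; ε and +∞ are encoded as floats):

```python
EPS, INF = float("-inf"), float("inf")  # epsilon and +infinity; e = 0

def mat_oplus(A, B):
    """Entrywise max."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_otimes(A, B):
    # epsilon is absorbing, so an EPS factor kills the term even against INF
    return [[max((A[i][k] + B[k][j]) if EPS not in (A[i][k], B[k][j]) else EPS
                 for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

M1 = [[5, 3, INF],
      [EPS, 4, EPS],
      [0, EPS, EPS]]
M2 = [[2, EPS, 2],
      [3, 0, 4],
      [0, EPS, INF]]
```

`mat_oplus(M1, M2)` and `mat_otimes(M1, M2)` return exactly the two result matrices given in Example 2.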

2.1.2 The Dioid \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\)

In the following, a brief introduction to the dioid \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\) is given. This dioid is especially convenient for modeling and control of \(\left (\max,+\right )\)-linear systems. For a formal definition of this dioid, the reader is invited to consult [4]. A C++-library dedicated to computation in the dioid \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\) is described in [12]. First, the concepts of daters and operators are recalled.

Definition 2 (Dater).

A dater is a non-decreasing mapping from \(\mathbb{Z}\) to \(\overline{\mathbb{N}}_{\max }\) equal to ε over \(\left \{n \in \mathbb{Z}\vert n <0\right \}\). The set of daters is denoted D.

In the following sections, daters will be used to describe the occurrence times of events. Then, for a dater d associated with a particular event, d(k), k ≥ 0, will denote the time when the event occurs for the kth time. Note that it is customary to start enumeration of event occurrences by 0 (instead of 1).

Of particular interest are the daters \(\varepsilon _{D}\) and \(e_{D}\) defined by

$$\displaystyle{\forall k \in \mathbb{Z},\quad \varepsilon _{D}\left (k\right ) =\varepsilon \mbox{ and }e_{D}\left (k\right ) = \left \{\begin{array}{l} \varepsilon \mbox{ if }k <0\\ e\mbox{ if } k \geq 0 \end{array} \right.}$$

The set of daters is endowed with an operation, denoted ⊕, derived from the operation ⊕ over \(\overline{\mathbb{N}}_{\max }\). Formally,

$$\displaystyle{\forall d_{1},d_{2} \in D,\forall k \in \mathbb{Z},\quad \left (d_{1} \oplus d_{2}\right )\left (k\right ) = d_{1}\left (k\right ) \oplus d_{2}\left (k\right )}$$

Definition 3 (Operator).

An operator is a mapping from D to D. The set of operators is denoted \(\mathscr{O}\).

Using the operation ⊕ over D, a matrix of operators is defined as a mapping between vectors of daters. A matrix \(O \in \mathscr{ O}^{n\times p}\) corresponds to the mapping from \(D^{p}\) to \(D^{n}\) defined by

$$\displaystyle{\forall d \in D^{p},\quad O\left (d\right )_{ i} =\bigoplus _{ j=1}^{p}O_{ ij}\left (d_{j}\right )}$$

Of particular interest are the operators \(\varepsilon _{\mathscr{O}}\), \(e_{\mathscr{O}}\), γ, and δ defined by

$$\displaystyle\begin{array}{rcl} & & \forall d \in D,\quad \varepsilon _{\mathscr{O}}\left (d\right ) =\varepsilon _{D}\mbox{ and }e_{\mathscr{O}}\left (d\right ) = d {}\\ & & \forall d \in D,\forall k \in \mathbb{Z},\quad \gamma \left (d\right )\left (k\right ) = d\left (k - 1\right )\mbox{ and }\delta \left (d\right )\left (k\right ) = 1d\left (k\right ) {}\\ \end{array}$$

The set of operators is endowed with an operation, denoted ⊕, derived from the operation ⊕ defined over D. Formally,

$$\displaystyle{\forall o_{1},o_{2} \in \mathscr{ O},\forall d \in \ D\quad \left (o_{1} \oplus o_{2}\right )\left (d\right ) = o_{1}\left (d\right ) \oplus o_{2}\left (d\right )}$$

Furthermore, an operation ⊗ over \(\mathscr{O}\) is defined as the composition of mappings: for all \(o_{1},o_{2} \in \mathscr{ O}\), \(o_{1} \otimes o_{2} = o_{1} \circ o_{2}\). Under some conditions, the set of operators \(\mathscr{O}\) endowed with the operations ⊕ and ⊗ defined above is a complete dioid. Then, the dioid \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\) is defined to be the complete dioid spanned by \(\left \{\varepsilon _{\mathscr{O}},e_{\mathscr{O}},\gamma,\delta \right \}\). Let \(\nu \in \mathbb{N}_{0}\) and \(\tau \in \mathbb{N}_{0}\). The operator \(\gamma ^{\nu }\delta ^{\tau }\) belongs to \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\) and corresponds to

$$\displaystyle{\forall d \in D,\forall k \in \mathbb{Z},\quad \left (\gamma ^{\nu }\delta ^{\tau }\right )\left (d\right )\left (k\right ) =\tau d\left (k-\nu \right )}$$

By construction, calculation rules are available to simplify expressions in \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\). Operators γ and δ commute:

$$\displaystyle\begin{array}{rcl} \forall d \in D,\forall k \in \mathbb{Z},\quad \left (\gamma \delta \right )\left (d\right )\left (k\right )& =& \delta \left (d\right )\left (k - 1\right ) {}\\ & =& 1d\left (k - 1\right ) {}\\ & =& 1\gamma \left (d\right )\left (k\right ) {}\\ & =& \left (\delta \gamma \right )\left (d\right )\left (k\right ) {}\\ \end{array}$$

Furthermore, let \(l_{1},l_{2} \in \mathbb{N}_{0}\). For all \(d \in D\) and \(k \in \mathbb{Z}\),

$$\displaystyle\begin{array}{rcl} \left (\delta ^{l_{1} } \oplus \delta ^{l_{2} }\right )\left (d\right )\left (k\right )& =& l_{1}d\left (k\right ) \oplus l_{2}d\left (k\right ) {}\\ & =& \left (l_{1} \oplus l_{2}\right )d\left (k\right ) {}\\ & =& \delta ^{\max \left (l_{1},l_{2}\right )}\left (d\right )\left (k\right ) {}\\ \left (\gamma ^{l_{1} } \oplus \gamma ^{l_{2} }\right )\left (d\right )\left (k\right )& =& d\left (k - l_{1}\right ) \oplus d\left (k - l_{2}\right ) {}\\ & =& d\left (k -\min \left (l_{1},l_{2}\right )\right )\mbox{ as dater }d\ \text{is non-decreasing} {}\\ & =& \gamma ^{\min \left (l_{1},l_{2}\right )}\left (d\right )\left (k\right ) {}\\ \end{array}$$

Hence, \(\delta ^{l_{1}} \oplus \delta ^{l_{2}} =\delta ^{\max \left (l_{1},l_{2}\right )}\) and \(\gamma ^{l_{1}} \oplus \gamma ^{l_{2}} =\gamma ^{\min \left (l_{1},l_{2}\right )}\).
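These two rules yield a simple normal form for finite sums of monomials \(\gamma ^{\nu }\delta ^{\tau }\): a monomial \(\gamma ^{\nu _{1}}\delta ^{\tau _{1}}\) absorbs \(\gamma ^{\nu _{2}}\delta ^{\tau _{2}}\) whenever \(\nu _{1} \leq \nu _{2}\) and \(\tau _{1} \geq \tau _{2}\). The Python sketch below (ours; it encodes \(\gamma ^{\nu }\delta ^{\tau }\) as the pair (ν, τ)) applies this absorption rule:

```python
def simplify(monomials):
    """Drop every pair (n2, t2) absorbed by some (n1, t1) with
    n1 <= n2 and t1 >= t2 (gamma keeps min exponents, delta keeps max)."""
    kept = []
    # visit monomials with small n first; for equal n, large t first,
    # so every kept monomial can only be absorbed by an earlier one
    for n, t in sorted(set(monomials), key=lambda m: (m[0], -m[1])):
        if not any(t_prev >= t for _, t_prev in kept):
            kept.append((n, t))
    return sorted(kept)
```

On the monomials \(\delta ^{3} \oplus \gamma \delta ^{3} \oplus \gamma ^{2}\delta ^{5} \oplus \gamma ^{3}\delta ^{6} \oplus \gamma ^{4}\delta ^{8} \oplus \gamma ^{5}\delta ^{8}\), `simplify` drops \(\gamma \delta ^{3}\) and \(\gamma ^{5}\delta ^{8}\) and returns [(0, 3), (2, 5), (3, 6), (4, 8)].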

2.1.2.1 Representing Daters in the Dioid \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\)

The dioid \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\) offers a method to elegantly manipulate daters: a dater d is associated with the operator \(\bigoplus _{k=0}^{+\infty }\gamma ^{k}\delta ^{d\left (k\right )}\), where \(\delta ^{-\infty }\) (resp. \(\delta ^{+\infty }\)) stands for \(\varepsilon _{\mathscr{O}}\) (resp. \(\bigoplus _{t=0}^{+\infty }\delta ^{t}\)). Then, the operator o in \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\) associated with a dater d is the single operator in \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\) satisfying \(o\left (e_{D}\right ) = d\). Using calculation rules specific to \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\), the expression of the operator associated with a dater is often much simpler than the expression of the dater itself. In the following, we do not distinguish between a dater and the associated operator in \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\).

Example 3.

Let us consider the dater d defined by

$$\displaystyle{d\left (k\right ) = \left \{\begin{array}{l} \varepsilon \mbox{ if }k <0 \\ 3\mbox{ if }k = 0,1 \\ 5\mbox{ if }k = 2\\ 6 + 4j\mbox{ if } k = 3 + 3j\mbox{ with }j \in \mathbb{N}_{ 0} \\ 8 + 4j\mbox{ if }k = 4 + 3j,5 + 3j\mbox{ with }j \in \mathbb{N}_{0} \end{array} \right.}$$

The dater d is pictured in Fig. 2. In \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\),

$$\displaystyle{d =\bigoplus _{ k=0}^{+\infty }\gamma ^{k}\delta ^{d\left (k\right )} =\delta ^{3} \oplus \gamma \delta ^{3} \oplus \gamma ^{2}\delta ^{5} \oplus \left (\gamma ^{3}\delta ^{6} \oplus \gamma ^{4}\delta ^{8} \oplus \gamma ^{5}\delta ^{8}\right )\left (\gamma ^{3}\delta ^{4}\right )^{{\ast}}}$$

Using calculation rules specific to \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\), the expression of dater d is simplified:

$$\displaystyle{d =\delta ^{3} \oplus \gamma ^{2}\delta ^{5} \oplus \left (\gamma ^{3}\delta ^{6} \oplus \gamma ^{4}\delta ^{8}\right )\left (\gamma ^{3}\delta ^{4}\right )^{{\ast}}}$$
Fig. 2

Dater d

2.2 Residuation Theory

Residuation theory gives the theoretical foundation for the control of \(\left (\max,+\right )\)-linear systems.

Definition 4 (Residuated Mapping).

Let \(f: E \rightarrow F\) with E and F ordered sets. Mapping f is said to be residuated if f is non-decreasing and if, for all \(y \in F\), the least upper bound of the subset \(\left \{x \in E\;\vert \;f\left (x\right )\preceq y\right \}\) exists and lies in this subset. This element in E is denoted \(f^{\sharp }\left (y\right )\). The mapping \(f^{\sharp }\) from F to E is called the residual of f.

Let a be an element in a complete dioid \(\mathscr{D}\). The mappings \(L_{a}: x\mapsto a \otimes x\) (left-multiplication by a) and \(R_{a}: x\mapsto x \otimes a\) (right-multiplication by a) over \(\mathscr{D}\) are residuated. The residuals are denoted by \(x\mapsto a\backslash x\) (left-division by a) and \(x\mapsto x/a\) (right-division by a). By definition, \(a\backslash b\) (resp. \(b/a\)) denotes the greatest solution x of the inequality \(a \otimes x\preceq b\) (resp. \(x \otimes a\preceq b\)).

The operations \(\backslash\) and \(/\) are also extended to matrices. Hence, \(A\backslash B\) (resp. \(B/A\)) corresponds to the greatest solution X of the inequality \(A \otimes X\preceq B\) (resp. \(X \otimes A\preceq B\)).

Example 4.

For a, b in \(\overline{\mathbb{N}}_{\max }\), \(a\backslash b\) is the greatest solution x of \(a \otimes x\preceq b\). For instance, \(3\backslash 5 = 2\) (as \(3 \otimes 2 = 5\preceq 5\)), \(5\backslash 3 =\varepsilon\) (as \(\overline{\mathbb{N}}_{\max }\) contains no negative numbers), and \(\varepsilon \backslash 3 = +\infty\) (as \(\varepsilon \otimes x =\varepsilon \preceq 3\) for all x).
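A brute-force Python sketch (ours; it assumes the conventions of \(\overline{\mathbb{N}}_{\max }\) stated above, with ε and +∞ encoded as floats) of left division as the greatest solution of \(a \otimes x\preceq b\):

```python
EPS, TOP = float("-inf"), float("inf")  # epsilon and +infinity

def otimes(a, b):
    """(max,+) multiplication with epsilon absorbing (even against +inf)."""
    return EPS if EPS in (a, b) else a + b

def ldiv(a, b):
    """Greatest x in N_max with a otimes x <= b (left division)."""
    if a == EPS or b == TOP:
        return TOP   # every x satisfies the inequality
    if a == TOP or b == EPS:
        return EPS   # only x = eps keeps a otimes x below b
    return b - a if b >= a else EPS  # no negative numbers in N_max
```

For instance, `ldiv(3, 5)` returns `2`, `ldiv(5, 3)` returns `EPS`, and `ldiv(EPS, 3)` returns `TOP`, matching the scalar examples above.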

3 Modeling

After some preliminary remarks on the modeling assumptions, the modeling of \(\left (\max,+\right )\)-linear systems is presented both in the \(\left (\max,+\right )\)-algebra and in the dioid \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\).

3.1 Preliminaries

3.1.1 Input, Output, and Internal Events

The event set of a \(\left (\max,+\right )\)-linear system is partitioned into

input events::

these events are the source of synchronizations, but not subject to synchronizations. Input events correspond to external events affecting the system (e.g., external supplies of workpieces or orders from customers).

output events::

these events are subject to synchronizations, but not the source of synchronizations. Output events correspond to events in the system which are directly seen by other systems (e.g., deliveries of finished products).

internal events::

these events are both subject to and the source of synchronizations. Internal events model the internal dynamics of the system.

Events which are neither subject to nor the source of synchronizations are neglected, as we focus on interactions between events. In the rest of this chapter, we consider \(\left (\max,+\right )\)-linear systems, where:

  • the sets of input, output, and internal events are not empty

  • there exist no direct synchronizations of output events by input events

In practice, these assumptions either hold or can be made to hold by adding some fictitious internal events. Furthermore, the following convention for notation is used. The numbers of input, output, and internal events are respectively denoted by m, p, and n. Input, output, and internal events are respectively denoted by u, y, and x and integer subscripts are used to distinguish events of the same kind.

Example 5.

In the considered example, the event set is partitioned into

  • input events u 1, u 2, and o

  • internal events s 1, s 2, s 3, f 1, f 2, and f 3

  • output event y

These events are relabeled according to the above notation (see Fig. 3). For this system, m = 3, n = 6, and p = 1.

Fig. 3

A manufacturing system

3.1.2 Earliest Functioning Rule

Synchronizations (i.e., conditions of the form: for all \(k \geq l\), occurrence k of event \(e_{2}\) is at least τ units of time after occurrence \(k - l\) of event \(e_{1}\)) only specify conditions enabling occurrences of events, but never force an event to occur. Therefore, a \(\left (\max,+\right )\)-linear system is not univocally determined: a predefined timing pattern of the input events may lead to different timing patterns for internal and output events. The only requirement is that these patterns are admissible with respect to the synchronizations required by the considered system.

In the following, we only consider a particular behavior for \(\left (\max,+\right )\)-linear systems, namely the behavior under the earliest functioning rule. The earliest functioning rule requires that each internal or output event occurs as soon as possible. Under the earliest functioning rule, a \(\left (\max,+\right )\)-linear system is univocally determined: a predefined timing pattern of the input events leads to a unique timing pattern for internal and output events. This fundamental property is a direct consequence of the model in the \(\left (\max,+\right )\)-algebra presented later.

Example 6.

In the considered example, the earliest functioning rule is suitable, as the aim is to meet the orders as soon as possible.

3.1.3 Modeling with Daters

To capture the timed dynamics of a discrete event system, a dater is associated with each event such that the dater gives the times of occurrences of the considered event. In the following, no distinction in the notation is made between an event and the associated dater. Hence, for an event d, \(d\left (k\right )\) denotes the time of occurrence k of event d. This leads to the following interpretation for daters:

\(d\left (k\right ) =\varepsilon\)::

occurrence k of event d is at \(t = -\infty\). By convention, occurrence k, with k < 0, of an event is always at \(t = -\infty\).

\(d\left (k\right ) \in \mathbb{N}_{0}\)::

occurrence k of event d is at time \(d\left (k\right )\).

\(d\left (k\right ) = +\infty\)::

occurrence k of event d never happens.

The fact that daters are non-decreasing (i.e., for a dater d, \(d\left (k + 1\right )\succeq d\left (k\right )\) for all \(k \in \mathbb{Z}\)) is always satisfied as occurrence k + 1 of event d is never before occurrence k of event d.

3.2 Modeling in the \(\left (\max,+\right )\)-Algebra

Next, we show how to model \(\left (\max,+\right )\)-linear systems by recursive equations in the \(\left (\max,+\right )\)-algebra. Using daters, the synchronization “for all \(k \geq l\), occurrence k of event \(e_{2}\) is at least τ units of time after occurrence \(k - l\) of event \(e_{1}\)” corresponds to

$$\displaystyle{\forall k \in \mathbb{Z},\quad e_{2}\left (k\right ) \geq \tau +e_{1}\left (k - l\right )}$$

in the standard algebra or to

$$\displaystyle{\forall k \in \mathbb{Z},\quad e_{2}\left (k\right )\succeq \tau e_{1}\left (k - l\right )}$$

in the \(\left (\max,+\right )\)-algebra. Furthermore, the effect of several synchronizations on a single event is also expressed by a single inequality. For example, the synchronizations “for all \(k \geq l_{1}\), occurrence k of event \(e_{2}\) is at least \(\tau _{1}\) units of time after occurrence \(k - l_{1}\) of event \(e_{1,1}\)” and “for all \(k \geq l_{2}\), occurrence k of event \(e_{2}\) is at least \(\tau _{2}\) units of time after occurrence \(k - l_{2}\) of event \(e_{1,2}\)” are both expressed by a single inequality either in the standard algebra

$$\displaystyle{\forall k \in \mathbb{Z},\quad e_{2}\left (k\right ) \geq \max \left (\tau _{1} + e_{1,1}\left (k - l_{1}\right ),\tau _{2} + e_{1,2}\left (k - l_{2}\right )\right )}$$

or in the \(\left (\max,+\right )\)-algebra

$$\displaystyle{\forall k \in \mathbb{Z},\quad e_{2}\left (k\right )\succeq \tau _{1}e_{1,1}\left (k - l_{1}\right ) \oplus \tau _{2}e_{1,2}\left (k - l_{2}\right )}$$
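Under the earliest functioning rule this inequality is satisfied with equality, so the occurrence times of \(e_{2}\) can be computed directly from those of \(e_{1,1}\) and \(e_{1,2}\). A small Python sketch with hypothetical parameters and occurrence times (all numbers below are ours, for illustration only):

```python
EPS = float("-inf")  # occurrence times before k = 0

def earliest(e11, e12, tau1, l1, tau2, l2):
    """e2(k) = max(tau1 + e11(k - l1), tau2 + e12(k - l2)),
    with d(k) = eps for k < 0 (daters given as lists indexed by k)."""
    def at(d, k):
        return d[k] if 0 <= k < len(d) else EPS
    def term(tau, d, k):
        t = at(d, k)
        return EPS if t == EPS else tau + t
    K = max(len(e11), len(e12))
    return [max(term(tau1, e11, k - l1), term(tau2, e12, k - l2))
            for k in range(K)]
```

With \(\tau _{1} = 4, l_{1} = 0, \tau _{2} = 2, l_{2} = 1\) and \(e_{1,1}\) occurring at times [0, 1, 2], \(e_{1,2}\) at [0, 5, 9], `earliest(e11, e12, 4, 0, 2, 1)` yields [4, 5, 7].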

Hence, the rule describing the behavior of the system can be expressed by the following matrix inequalities in \(\overline{\mathbb{N}}_{\max }\).

$$\displaystyle{ \left \{\begin{array}{l} x\left (k\right )\succeq \bigoplus _{i=0}^{L}\left (A_{i}x\left (k - i\right ) \oplus B_{i}u\left (k - i\right )\right ) \\ y\left (k\right )\succeq \bigoplus _{i=0}^{L}C_{i}x\left (k - i\right )\end{array} \right. }$$
(1)

where x, u, and y respectively correspond to the vectors of daters associated with internal, input, and output events, and L denotes the greatest parameter l over all synchronizations. Furthermore, matrices A i , B i , and C i belong respectively to \(\overline{\mathbb{N}}_{\max }^{n\times n}\), \(\overline{\mathbb{N}}_{\max }^{n\times m}\), and \(\overline{\mathbb{N}}_{\max }^{p\times n}\). The entries of these matrices are given by the parameters of the synchronizations.

To simplify (1), the event set of the considered \(\left (\max,+\right )\)-linear system is extended by additional internal events. The resulting extended set of internal events is referred to as the set of state events. The daters of all state events are collected in a single vector, which, slightly abusing notation, is again called x. This allows us to convert (1) to a first-order recursion. The resulting inequalities are given in (2). The validity of this step results from the equivalence between the different synchronization relations between events e 1 and e 2 pictured in Fig. 4.

$$\displaystyle{ \left \{\begin{array}{l} x\left (k\right )\succeq A_{0}x\left (k\right ) \oplus A_{1}x\left (k - 1\right ) \oplus B_{0}u\left (k\right ) \\ y\left (k\right )\succeq C_{0}x\left (k\right ) \end{array} \right. }$$
(2)
Fig. 4

Equivalent synchronizations if no other synchronizations affect event e i

By convention, \(x\left (k\right )\) and \(y\left (k\right )\) have all entries equal to ɛ for k < 0. This choice is valid according to (2). As the behavior under the earliest functioning rule is considered, the time of occurrence k ≥ 0 of state and output events is given by the least solution for \(x\left (k\right )\) and \(y\left (k\right )\) in (2). Considering that x is composed of daters (i.e., \(x\left (k\right )\succeq x\left (k - 1\right )\) for all \(k \in \mathbb{Z}\)), we have

$$\displaystyle\begin{array}{rcl} & & x\left (k\right )\succeq A_{0}x\left (k\right ) \oplus A_{1}x\left (k - 1\right ) \oplus B_{0}u\left (k\right ) {}\\ & & \Leftrightarrow x\left (k\right )\succeq A_{0}x\left (k\right ) \oplus \left (A_{1} \oplus e\right )x\left (k - 1\right ) \oplus B_{0}u\left (k\right ) {}\\ \end{array}$$

Hence, using Theorem 1, the following \(\left (\max,+\right )\)-linear state-space model is obtained:

$$\displaystyle{ \left \{\begin{array}{l} x\left (k\right ) = Ax\left (k - 1\right ) \oplus Bu\left (k\right )\\ y\left (k\right ) = Cx\left (k\right ) \end{array} \right. }$$
(3)

where \(A = A_{0}^{{\ast}}\left (A_{1} \oplus e\right )\), \(B = A_{0}^{{\ast}}B_{0}\), and \(C = C_{0}\). Hence, \(\left (\max,+\right )\)-linear systems are deterministic and, as expected, \(\left (\max,+\right )\)-linear (i.e., a \(\left (\max,+\right )\)-linear combination of inputs induces the corresponding \(\left (\max,+\right )\)-linear combination of outputs).

Example 7.

The synchronizations in the considered example are represented by the following matrix inequalities in \(\overline{\mathbb{N}}_{\max }\).

$$\displaystyle{\left \{\begin{array}{l} x\left (k\right )\succeq \left (\begin{array}{cccccc} \varepsilon & \varepsilon & \varepsilon & \varepsilon & \varepsilon &\varepsilon \\ 4 & \varepsilon & \varepsilon & \varepsilon & \varepsilon &\varepsilon \\ \varepsilon & \varepsilon & \varepsilon & \varepsilon & \varepsilon &\varepsilon \\ \varepsilon & \varepsilon &2& \varepsilon & \varepsilon &\varepsilon \\ \varepsilon &e & \varepsilon &e & \varepsilon &\varepsilon \\ \varepsilon & \varepsilon & \varepsilon & \varepsilon &2&\varepsilon \end{array} \right )x\left (k\right ) \oplus \left (\begin{array}{cccccc} \varepsilon &e&\varepsilon & \varepsilon &\varepsilon & \varepsilon \\ \varepsilon & \varepsilon &\varepsilon & \varepsilon &\varepsilon &\varepsilon \\ \varepsilon & \varepsilon &\varepsilon &e &\varepsilon &\varepsilon \\ \varepsilon & \varepsilon &\varepsilon & \varepsilon &\varepsilon &\varepsilon \\ \varepsilon & \varepsilon &\varepsilon & \varepsilon &\varepsilon &e\\ \varepsilon & \varepsilon &\varepsilon & \varepsilon &\varepsilon & \varepsilon \end{array} \right )x\left (k - 1\right ) \oplus \left (\begin{array}{ccc} e& \varepsilon &e\\ \varepsilon & \varepsilon & \varepsilon \\ \varepsilon &e &e\\ \varepsilon & \varepsilon & \varepsilon \\ \varepsilon & \varepsilon & \varepsilon \\ \varepsilon & \varepsilon & \varepsilon \end{array} \right )u\left (k\right ) \\ y\left (k\right )\succeq \left (\begin{array}{cccccc} \varepsilon &\varepsilon &\varepsilon &\varepsilon &\varepsilon &e \end{array} \right )x\left (k\right )\end{array} \right.}$$

This leads to the following \(\left (\max,+\right )\)-linear state-space model:

$$\displaystyle{\left \{\begin{array}{l} x\left (k\right ) = \left (\begin{array}{cccccc} e&e& \varepsilon & \varepsilon & \varepsilon & \varepsilon \\ 4 &4 & \varepsilon & \varepsilon & \varepsilon &\varepsilon \\ \varepsilon & \varepsilon &e&e& \varepsilon & \varepsilon \\ \varepsilon & \varepsilon &2 &2 & \varepsilon &\varepsilon \\ 4&4&2&2&e&e\\ 6 &6 &4 &4 &2 &2 \end{array} \right )x\left (k - 1\right ) \oplus \left (\begin{array}{ccc} e& \varepsilon &e\\ 4 & \varepsilon &4 \\ \varepsilon &e&e\\ \varepsilon &2 &2 \\ 4&2&4\\ 6 &4 &6 \end{array} \right )u\left (k\right ) \\ y\left (k\right ) = \left (\begin{array}{cccccc} \varepsilon &\varepsilon &\varepsilon &\varepsilon &\varepsilon &e \end{array} \right )x\left (k\right ) \end{array} \right.}$$

Let us consider the input corresponding to a supply of five workpieces of type 1 and type 2 at time 0 and an order of five workpieces of type 5 at time 0. Hence, the kth occurrence, 0 ≤ k ≤ 4, of event \(u_{1}\) (“a workpiece of type 1 enters the system”), \(u_{2}\) (“a workpiece of type 2 enters the system”), and \(u_{3} = o\) (“an order is received”) is at time 0. The associated daters are

$$\displaystyle{u_{1}\left (k\right ) = u_{2}\left (k\right ) = u_{3}\left (k\right ) = \left \{\begin{array}{l} \varepsilon \mbox{ if }k <0 \\ e\mbox{ if }0 \leq k <5 \\ + \infty \mbox{ if }k \geq 5 \end{array} \right.}$$

The induced output can be easily calculated from the linear difference equation (3):

$$\displaystyle{y\left (k\right ) = \left \{\begin{array}{l} \varepsilon \mbox{ if }k <0 \\ 6 \otimes 4^{k}\mbox{ if }0 \leq k <5 \\ + \infty \mbox{ if }k \geq 5 \end{array} \right.}$$

Hence, a workpiece of type 5 is delivered at time 6, 10, 14, 18, and 22.
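These delivery times can be reproduced by iterating the state-space recursion numerically, encoding ε as −∞ so that ⊕ becomes max and ⊗ becomes +. A minimal sketch restricted to the five finite occurrences (the \(+\infty \) tail of the daters is omitted):

```python
import numpy as np

EPS = -np.inf  # the (max,+) zero element epsilon

def mp_mv(M, v):
    """(max,+) matrix-vector product: (M v)_i = max_j (M_ij + v_j)."""
    return np.max(M + v[None, :], axis=1)

# state-space matrices of Example 7 (e = 0)
A = np.array([[0, 0, EPS, EPS, EPS, EPS],
              [4, 4, EPS, EPS, EPS, EPS],
              [EPS, EPS, 0, 0, EPS, EPS],
              [EPS, EPS, 2, 2, EPS, EPS],
              [4, 4, 2, 2, 0, 0],
              [6, 6, 4, 4, 2, 2]])
B = np.array([[0, EPS, 0],
              [4, EPS, 4],
              [EPS, 0, 0],
              [EPS, 2, 2],
              [4, 2, 4],
              [6, 4, 6]])

x = np.full(6, EPS)  # x(-1): no internal event has occurred yet
y = []
for k in range(5):   # occurrences 0..4; u1(k) = u2(k) = u3(k) = e = 0
    u = np.zeros(3)
    x = np.maximum(mp_mv(A, x), mp_mv(B, u))  # x(k) = A x(k-1) + B u(k)
    y.append(x[5])   # y(k) = C x(k) picks the last state component
print(y)  # [6.0, 10.0, 14.0, 18.0, 22.0]
```

The printed dates match \(y\left (k\right ) = 6 \otimes 4^{k}\) for 0 ≤ k < 5.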

3.3 Modeling in the Dioid \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\)

Next, we show how to model \(\left (\max,+\right )\)-linear systems in the dioid \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\). Let us consider the synchronization “for all \(k \geq l\), occurrence k of event \(e_{2}\) is at least τ units of time after occurrence \(k - l\) of event \(e_{1}\)”. As mentioned before, this corresponds to the following inequality in \(\overline{\mathbb{N}}_{\max }\):

$$\displaystyle{\forall k \in \mathbb{Z},\quad e_{2}\left (k\right )\succeq \tau e_{1}\left (k - l\right )}$$

Rewriting this relation with the operators γ and δ leads to the following inequality over daters: \(e_{2}\succeq \left (\delta ^{\tau }\gamma ^{l}\right )\left (e_{1}\right )\). Furthermore, the combination of several synchronizations on the same event can be expressed in a single inequality by using the operation ⊕ over daters. For example, the synchronizations “for all \(k \geq l_{1}\), occurrence k of event \(e_{2}\) is at least \(\tau _{1}\) units of time after occurrence \(k - l_{1}\) of event \(e_{1,1}\)” and “for all \(k \geq l_{2}\), occurrence k of event \(e_{2}\) is at least \(\tau _{2}\) units of time after occurrence \(k - l_{2}\) of event \(e_{1,2}\)” are both expressed by a single inequality:

$$\displaystyle{e_{2}\succeq \left (\delta ^{\tau _{1}}\gamma ^{l_{1}}\right )\left (e_{1,1}\right ) \oplus \left (\delta ^{\tau _{2}}\gamma ^{l_{2}}\right )\left (e_{1,2}\right )}$$
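The action of these operators on daters can be sketched directly. The daters and shift parameters below are illustrative choices for demonstration only, not taken from the running example:

```python
EPS = float("-inf")  # dater value before the first occurrence

def shift(tau, l, e):
    """Action of delta^tau gamma^l on a dater: k -> tau + e(k - l)."""
    return lambda k: tau + e(k - l)

def oplus(f, g):
    """Pointwise maximum of two daters (the operation +)."""
    return lambda k: max(f(k), g(k))

# two illustrative daters: occurrences of e_{1,1} at times 0, 3, 6, ...
# and occurrences of e_{1,2} at times 0, 5, 10, ...
e11 = lambda k: 3 * k if k >= 0 else EPS
e12 = lambda k: 5 * k if k >= 0 else EPS

# combined lower bound (delta^2 gamma^1)(e11) + (delta^4 gamma^0)(e12) on e2
bound = oplus(shift(2, 1, e11), shift(4, 0, e12))
print(bound(2))  # max(2 + e11(1), 4 + e12(2)) = max(5, 14) = 14
```

Any dater \(e_{2}\) satisfying both synchronizations must dominate `bound` pointwise.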

Hence, the rule describing the behavior of the system can be expressed by the following matrix inequalities.

$$\displaystyle{ \left \{\begin{array}{l} x\succeq A\left (x\right ) \oplus B\left (u\right )\\ y\succeq C\left (x\right ) \end{array} \right. }$$
(4)

where x, u, and y respectively correspond to the vectors of daters associated with internal, input, and output events and matrices A, B, and C respectively belong to \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]^{n\times n}\), \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]^{n\times m}\), and \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]^{p\times n}\). Furthermore, as daters can be represented by elements in the dioid \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\), the vectors of daters x, u, and y appearing in (4) can be replaced by vectors with entries in \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\). This leads to the following matrix inequalities in \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\).

$$\displaystyle{ \left \{\begin{array}{l} x\succeq Ax \oplus Bu\\ y\succeq Cx\end{array} \right. }$$
(5)

Under the earliest functioning rule, y = Cx and, using Theorem 1, \(x = A^{{\ast}}Bu\). This leads to the transfer function matrix \(H = CA^{{\ast}}B\). Hence, the output y induced by input u is given by y = Hu.

Example 8.

The synchronizations in the considered example are represented by the following matrix inequalities in \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\).

$$\displaystyle{\left \{\begin{array}{l} x\succeq \left (\begin{array}{cccccc} \varepsilon & \gamma & \varepsilon & \varepsilon & \varepsilon &\varepsilon \\ \delta ^{4}& \varepsilon & \varepsilon & \varepsilon & \varepsilon &\varepsilon \\ \varepsilon & \varepsilon & \varepsilon & \gamma & \varepsilon &\varepsilon \\ \varepsilon & \varepsilon &\delta ^{2}& \varepsilon & \varepsilon &\varepsilon \\ \varepsilon &e & \varepsilon &e & \varepsilon &\gamma \\ \varepsilon & \varepsilon & \varepsilon & \varepsilon &\delta ^{2}&\varepsilon \end{array} \right )x \oplus \left (\begin{array}{ccc} e& \varepsilon &e\\ \varepsilon & \varepsilon & \varepsilon \\ \varepsilon &e &e\\ \varepsilon & \varepsilon & \varepsilon \\ \varepsilon & \varepsilon & \varepsilon \\ \varepsilon & \varepsilon & \varepsilon \end{array} \right )u \\ y\succeq \left (\begin{array}{cccccc} \varepsilon &\varepsilon &\varepsilon &\varepsilon &\varepsilon &e \end{array} \right )x\end{array} \right.}$$

Hence, using [12], the transfer function matrix H is given by

$$\displaystyle{H = \left (\begin{array}{ccc} \delta ^{6}\left (\gamma \delta ^{4}\right )^{{\ast}}&\delta ^{4}\left (\gamma \delta ^{2}\right )^{{\ast}}&\delta ^{6}\left (\gamma \delta ^{4}\right )^{{\ast}} \end{array} \right )}$$

As before, let us consider the input corresponding to a supply of five workpieces of type 1 and type 2 at time 0 and to an order of five workpieces at time 0. The associated operators in \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\) are

$$\displaystyle{u_{1} = u_{2} = u_{3} = e \oplus \gamma ^{5}\delta ^{+\infty }}$$

The induced output is given by

$$\displaystyle{y =\delta ^{6} \oplus \gamma \delta ^{10} \oplus \gamma ^{2}\delta ^{14} \oplus \gamma ^{3}\delta ^{18} \oplus \gamma ^{4}\delta ^{22} \oplus \gamma ^{5}\delta ^{+\infty }}$$

This result is of course coherent with the one obtained by modeling in the \(\left (\max,+\right )\)-algebra.
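This coherence can be checked numerically: an ultimately periodic series such as δ⁶(γδ⁴)* corresponds to the impulse-response dater h(k) = 6 + 4k for k ≥ 0, and y = Hu becomes a (max,+)-convolution over the event domain. A sketch under this reading (the finite convolution horizon is an implementation shortcut):

```python
INF, EPS = float("inf"), float("-inf")

# impulse-response daters read off the entries of H
h1 = lambda k: 6 + 4 * k if k >= 0 else EPS   # delta^6 (gamma delta^4)*
h2 = lambda k: 4 + 2 * k if k >= 0 else EPS   # delta^4 (gamma delta^2)*
h3 = h1

# input dater of e + gamma^5 delta^{+inf}: five occurrences at time 0
u = lambda k: EPS if k < 0 else (0 if k < 5 else INF)

def conv(h, u, k, horizon=20):
    """(max,+)-convolution (h * u)(k) = max_j h(j) + u(k - j)."""
    return max(h(j) + u(k - j) for j in range(horizon))

y = [max(conv(h, u, k) for h in (h1, h2, h3)) for k in range(7)]
print(y)  # [6, 10, 14, 18, 22, inf, inf]
```

The finite part reproduces the series \(\delta ^{6} \oplus \gamma \delta ^{10} \oplus \gamma ^{2}\delta ^{14} \oplus \gamma ^{3}\delta ^{18} \oplus \gamma ^{4}\delta ^{22}\).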

4 Control

In this section, we focus on control methods modifying the internal dynamics of the system by adding a \(\left (\max,+\right )\)-linear prefilter P (see Fig. 5a) or a \(\left (\max,+\right )\)-linear output feedback F (see Fig. 5b). As in standard control theory, a prefilter is a dynamical system that processes an external input v (e.g., a reference signal) and provides a suitable input u = Pv to the system to be controlled. The notion of output feedback refers to a scenario where the system output y is fed back via a dynamical system F to generate the input \(u = Fy \oplus v\) to the system to be controlled. Both control structures aim at modifying the given system dynamics to make it react in an appropriate way to any external input. In a manufacturing context, where external inputs are often non-controllable (e.g., orders from customers or parts delivered by suppliers), this is clearly an appropriate strategy. Note that other control methods such as optimal feedforward control [5] and model predictive control [9] are available to directly manipulate the inputs when this is possible.

Fig. 5
figure 5

Different control architectures

The main purpose of the control approach discussed in this section is to reduce the size of internal buffers (and the number of workpieces in the production process at a given time instant) by adequately delaying the occurrences of input events. This effect can be easily quantified using second order theory for \(\left (\max,+\right )\)-linear systems [13] (i.e., least upper bounds for the number of tokens in places are computed). However, the main drawback of this control approach is a possible slowing down of the system. Hence, choosing a prefilter or a feedback amounts to finding a trade-off between rapidity of the system and sizes of the internal buffers. In the following, we review some techniques to address this trade-off. The principle is to reduce as much as possible the internal buffers while satisfying some requirements on the rapidity of the system. Two typical requirements are: preservation of the transfer function matrix or preservation of the throughput.

Example 9.

In the considered example, the internal buffers \(B_{1}\) between machine \(M_{1}\) and machine \(M_{3}\) and \(B_{2}\) between machine \(M_{2}\) and machine \(M_{3}\) are of interest. In the uncontrolled case, u = v. In this case, the sizes of the buffers \(B_{1}\) and \(B_{2}\) are both equal to \(+\infty \), as the number of tokens between the transitions labelled \(x_{2}\) (resp. \(x_{4}\)) and \(x_{5}\) in Fig. 3 is unbounded. On the other hand, not controlling the system lets the system evolve maximally fast, as no synchronizations are added by a prefilter P or an output feedback F. Clearly, in practice, buffers always have restricted size, and it is therefore vital to introduce control.

4.1 Model Reference Control

In model reference control [6–8], the requirement with respect to the rapidity of the system is expressed by a reference model G. The transfer function matrix of the controlled system, denoted \(H_{c}\), must satisfy the condition \(H_{c}\preceq G\). Hence, the reference model G is an upper bound for the transfer function matrix of the controlled system: the dynamics of the controlled system is required to be at least as fast as the one specified by the reference model G. In the following, model reference control is only considered for the case G = H (i.e., the controlled system must be at least as fast as the uncontrolled one or, in other words, control is not allowed to “slow down” the output of the system). However, under some assumptions, the following discussion can be generalized to any reference model G. Next, model reference control by using either a prefilter or an output feedback is investigated.

4.1.1 Prefilter

Applying a prefilter P leads to the transfer function matrix HP for the controlled system. Hence, a prefilter P such that HP⪯H or, equivalently by residuation, such that \(P\preceq H\backslash H\) (where \(H\backslash H\) denotes the greatest solution X of \(HX\preceq H\)) is valid for model reference control. Under this restriction, we want to delay the occurrences of input events as much as possible, i.e., select the optimal (i.e., greatest) prefilter P such that \(P\preceq H\backslash H\). Therefore, \(H\backslash H\) seems to be the optimal prefilter. However, it is not always possible to implement this prefilter, as it may be non-causal (i.e., at time t this prefilter may need information available only at time t + 1 or later). This problem is solved by using a specific mapping called causal projection and denoted \(\mathrm{Pr}_{+}\) (see [10, 14] for a formal discussion of the causal projection). Hence, the optimal prefilter, denoted \(P_{H}\), is given by

$$\displaystyle{P_{H} =\mathrm{ Pr}_{+}\left (H\backslash H\right )}$$

By construction, \(P_{H}\succeq e\) (hence \(HP_{H}\succeq H\)) and \(HP_{H}\preceq H\). Hence, \(HP_{H} = H\). Thus, the prefilter \(P_{H}\) does not modify the transfer function matrix of the system.

Example 10.

The prefilter \(P_{H}\) associated with the considered example is given by

$$\displaystyle{P_{H} = \left (\begin{array}{ccc} \left (\gamma \delta ^{4}\right )^{{\ast}}& \varepsilon &\left (\gamma \delta ^{4}\right )^{{\ast}} \\ \delta ^{2}\left (\gamma \delta ^{4}\right )^{{\ast}}&\left (\gamma \delta ^{2}\right )^{{\ast}}&\delta ^{2}\left (\gamma \delta ^{4}\right )^{{\ast}} \\ \left (\gamma \delta ^{4}\right )^{{\ast}}& \varepsilon &\left (\gamma \delta ^{4}\right )^{{\ast}} \end{array} \right )}$$
As expected, the prefilter P H does not modify the transfer function matrix of the system:

$$\displaystyle{HP_{H} = H = \left (\begin{array}{ccc} \delta ^{6}\left (\gamma \delta ^{4}\right )^{{\ast}}&\delta ^{4}\left (\gamma \delta ^{2}\right )^{{\ast}}&\delta ^{6}\left (\gamma \delta ^{4}\right )^{{\ast}} \end{array} \right )}$$

A state-space system realizing the transfer function matrix P H is:

$$\displaystyle{\left \{\begin{array}{l} x_{P} = \left (\begin{array}{cc} \gamma \delta ^{4}& \varepsilon \\ \varepsilon &\gamma \delta ^{2} \end{array} \right )x_{P} \oplus \left (\begin{array}{ccc} e& \varepsilon &e\\ \varepsilon &e & \varepsilon \end{array} \right )v \\ u = \left (\begin{array}{cc} e& \varepsilon \\ \delta ^{2}&e\\ e & \varepsilon \end{array} \right )x_{P} \end{array} \right.}$$

An implementation of this system in terms of a TEG is shown in Fig. 6. In the controlled system, the size of the internal buffer \(B_{2}\) is equal to 0: as soon as a workpiece of type 4 is produced by machine \(M_{2}\), this workpiece is immediately used by machine \(M_{3}\). However, it can be easily seen that the size of the internal buffer \(B_{1}\) is still equal to \(+\infty \). Hence, in this example, using a prefilter that does not modify the system transfer function matrix does not allow us to upper-bound all internal buffers.

Fig. 6
figure 6

Model reference control with prefilter
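The claim \(HP_{H} = H\) can be checked by simulating the plant of Example 7 behind the prefilter realization above, both written as dater recursions (ε encoded as −∞; the diagonal γδ⁴ and γδ² entries delay the previous prefilter state by 4 and 2 time units):

```python
import numpy as np

EPS = -np.inf

def mp_mv(M, v):
    """(max,+) matrix-vector product."""
    return np.max(M + v[None, :], axis=1)

# plant matrices of Example 7
A = np.array([[0, 0, EPS, EPS, EPS, EPS],
              [4, 4, EPS, EPS, EPS, EPS],
              [EPS, EPS, 0, 0, EPS, EPS],
              [EPS, EPS, 2, 2, EPS, EPS],
              [4, 4, 2, 2, 0, 0],
              [6, 6, 4, 4, 2, 2]])
B = np.array([[0, EPS, 0],
              [4, EPS, 4],
              [EPS, 0, 0],
              [EPS, 2, 2],
              [4, 2, 4],
              [6, 4, 6]])

# input and output matrices of the prefilter realization
BP = np.array([[0, EPS, 0],
               [EPS, 0, EPS]])
CP = np.array([[0, EPS],
               [2, 0],
               [0, EPS]])

x = np.full(6, EPS)   # plant state x(-1)
xp = np.full(2, EPS)  # prefilter state x_P(-1)
y = []
for k in range(5):
    v = np.zeros(3)                                           # v(k) = e
    xp = np.maximum(xp + np.array([4.0, 2.0]), mp_mv(BP, v))  # x_P(k)
    u = mp_mv(CP, xp)                                         # u(k) = C_P x_P(k)
    x = np.maximum(mp_mv(A, x), mp_mv(B, u))
    y.append(x[5])
print(y)  # [6.0, 10.0, 14.0, 18.0, 22.0]
```

The output dates are identical to the uncontrolled case, even though the inputs u are maximally delayed.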

4.1.2 Output Feedback

To understand the need for feedback, we have to consider perturbations in the model. In the following, we only consider additive state perturbations. This leads to a modified version of the model in \(\mathscr{M}_{in}^{ax}[\![\gamma,\delta ]\!]\):

$$\displaystyle{ \left \{\begin{array}{l} x\succeq Ax \oplus Bu \oplus q\\ y\succeq Cx\end{array} \right. }$$
(6)

where vector \(q \in \mathscr{ M}_{in}^{ax}[\![\gamma,\delta ]\!]^{n}\) represents state perturbations. Note that, for manufacturing systems, additive state perturbations are sufficient to model a large class of uncertainties and failures such as machine breakdowns or changes in processing times of machines. Considering perturbations leads to an additional transfer function matrix from q to y. Indeed,

$$\displaystyle{y = Hu \oplus CA^{{\ast}}q}$$

Perturbations also affect the sizes of internal buffers. In many cases, the existence of perturbations strongly reduces the advantages induced by prefilters, as, by construction, prefilters cannot take perturbations into account.

Example 11.

Taking into account perturbations annihilates the gain induced by the optimal prefilter \(P_{H}\) in the considered example. With the optimal prefilter \(P_{H}\), the sizes of internal buffers \(B_{1}\) and \(B_{2}\) remain equal to \(+\infty \) when perturbations are considered. Indeed, a breakdown of machine \(M_{3}\), such that machine \(M_{3}\) is broken from the start (i.e., \(q_{5} =\delta ^{+\infty }\) and \(q_{i} =\varepsilon \) for i ≠ 5), could lead to an infinite accumulation of workpieces in buffers \(B_{1}\) and \(B_{2}\).

The previous discussion illustrates the need for control structures taking perturbations into account. In the following, we focus on output feedback, i.e., \(u = Fy \oplus v\). The transfer function matrix of the controlled system is obtained as follows.

$$\displaystyle\begin{array}{rcl} y& =& Hu \oplus CA^{{\ast}}q {}\\ & =& HFy \oplus Hv \oplus CA^{{\ast}}q {}\\ & =& \left (HF\right )^{{\ast}}Hv \oplus \left (HF\right )^{{\ast}}CA^{{\ast}}q {}\\ \end{array}$$

where the last equality follows from Theorem 1. Hence, if we choose the reference model G = H, i.e., we require the feedback not to slow down the output of the system, we seek a feedback F such that \(\left (HF\right )^{{\ast}}H\preceq H\). To delay the occurrences of input events as much as possible, we select the greatest causal feedback F such that \(\left (HF\right )^{{\ast}}H\preceq H\). This feedback, denoted \(F_{H}\), is given by

$$\displaystyle{F_{H} =\mathrm{ Pr}_{+}\left (H\backslash H/H\right )}$$

where \(H\backslash H/H\) denotes the greatest solution X of \(HXH\preceq H\).
For the proof, the reader is invited to consult [6, 14]. As \(\left (HF_{H}\right )^{{\ast}}\succeq e\), \(\left (HF_{H}\right )^{{\ast}}H\succeq H\). Furthermore, by construction, \(\left (HF_{H}\right )^{{\ast}}H\preceq H\). Hence, \(\left (HF_{H}\right )^{{\ast}}H = H\). Thus, the feedback F H does not modify the transfer function matrix of the system.

Example 12.

The feedback \(F_{H}\) associated with the considered example is given by

$$\displaystyle{F_{H} = \left (\begin{array}{c} \varepsilon \\ \gamma ^{2}\left (\gamma \delta ^{2}\right )^{{\ast}} \\ \varepsilon \end{array} \right )}$$
As expected, the feedback F H does not modify the transfer function matrix of the system:

$$\displaystyle{\left (HF_{H}\right )^{{\ast}}H = H = \left (\begin{array}{ccc} \delta ^{6}\left (\gamma \delta ^{4}\right )^{{\ast}}&\delta ^{4}\left (\gamma \delta ^{2}\right )^{{\ast}}&\delta ^{6}\left (\gamma \delta ^{4}\right )^{{\ast}} \end{array} \right )}$$

A state-space system realizing the transfer function matrix F H is given by:

$$\displaystyle{\left \{\begin{array}{l} x_{F} =\gamma \delta ^{2}x_{F} \oplus y \\ w = \left (\begin{array}{c} \varepsilon \\ \gamma ^{2}\\ \varepsilon \end{array} \right )x_{F} \end{array} \right.}$$

An implementation of this system in terms of a TEG is shown in Fig. 7. The size of the internal buffer \(B_{2}\) is now equal to 2, whereas in the uncontrolled case it was equal to \(+\infty \), i.e., by using an output feedback, we indeed succeed in reducing the size of internal buffer \(B_{2}\). However, the size of the internal buffer \(B_{1}\) is still equal to \(+\infty \). Hence, for this example, using an output feedback that does not modify the system transfer function matrix does not allow us to upper-bound all internal buffers. In other words, the specification of not altering the system transfer function matrix is too strict. For this reason, we will now describe control for a less restrictive control specification.

Fig. 7
figure 7

Model reference control with output feedback
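The closed loop can be simulated in the dater domain. Using the feedback realization above (a state with a γδ² self-loop, fed by y, contributing via γ² to the second plant input), the feedback adds the lower bound max over j of 2j + y(k − 2 − j) to \(u_{2}\left (k\right )\), acting only on past outputs. A sketch (ε encoded as −∞):

```python
import numpy as np

EPS = -np.inf

def mp_mv(M, v):
    """(max,+) matrix-vector product."""
    return np.max(M + v[None, :], axis=1)

# plant matrices of Example 7
A = np.array([[0, 0, EPS, EPS, EPS, EPS],
              [4, 4, EPS, EPS, EPS, EPS],
              [EPS, EPS, 0, 0, EPS, EPS],
              [EPS, EPS, 2, 2, EPS, EPS],
              [4, 4, 2, 2, 0, 0],
              [6, 6, 4, 4, 2, 2]])
B = np.array([[0, EPS, 0],
              [4, EPS, 4],
              [EPS, 0, 0],
              [EPS, 2, 2],
              [4, 2, 4],
              [6, 4, 6]])

x = np.full(6, EPS)
ys = []
for k in range(5):
    v = np.zeros(3)
    # feedback term gamma^2 (gamma delta^2)* on the second input:
    # u2(k) >= max_j (2 j + y(k - 2 - j)), using only already computed outputs
    fb = max((2 * j + ys[k - 2 - j] for j in range(k - 1)), default=EPS)
    u = np.maximum(v, np.array([EPS, fb, EPS]))
    x = np.maximum(mp_mv(A, x), mp_mv(B, u))
    ys.append(x[5])
print(ys)  # [6.0, 10.0, 14.0, 18.0, 22.0]
```

As claimed, the output dates coincide with the uncontrolled ones, while the second input is delayed by the feedback from occurrence 2 on.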

4.2 Preserving the Throughput

The aim of this approach is to preserve the throughput (i.e., the maximal average production rate) of the system. Clearly, preserving the transfer function matrix, as done in model reference control, implies preserving the throughput. Hence, the latter is less restrictive (in terms of requirements on the rapidity of the system) than the former, and we expect smaller internal buffers, if all events are delayed as much as possible subject to the respective requirement. In general, the optimal control preserving the throughput will slow down the system in the sense of providing a greater transfer function matrix. In the literature, this approach has only been investigated for feedback [15, 16]. As shown in [4], the greatest output feedback preserving the throughput leads to internal buffers of finite size.
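The throughput itself can be computed from the (max,+) state matrix A as the inverse of its maximum cycle mean. One standard way to obtain this quantity, not detailed in this chapter, is Karp's maximum-cycle-mean algorithm; a sketch applied to the A matrix of Example 7 (ε encoded as −∞):

```python
import numpy as np

EPS = -np.inf

def max_cycle_mean(A):
    """Karp's algorithm: maximum mean weight of a cycle in the graph of A."""
    n = A.shape[0]
    D = [np.zeros(n)]  # D[k][v]: max weight of a length-k walk ending in v
    for _ in range(n):
        D.append(np.max(A + D[-1][None, :], axis=1))
    best = EPS
    for v in range(n):
        if D[n][v] == EPS:
            continue  # v lies on no walk of length n
        best = max(best, min((D[n][v] - D[k][v]) / (n - k) for k in range(n)))
    return best

# state matrix A of Example 7
A = np.array([[0, 0, EPS, EPS, EPS, EPS],
              [4, 4, EPS, EPS, EPS, EPS],
              [EPS, EPS, 0, 0, EPS, EPS],
              [EPS, EPS, 2, 2, EPS, EPS],
              [4, 4, 2, 2, 0, 0],
              [6, 6, 4, 4, 2, 2]])
lam = max_cycle_mean(A)
print(lam)  # 4.0, i.e., at most one occurrence every 4 time units in steady state
```

The value 4 corresponds to the bottleneck cycle (the self-loop of weight 4 on \(x_{2}\)), hence a throughput of one workpiece every four units of time.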

Example 13.

The throughput associated with the considered example amounts to one workpiece every four units of time. The greatest feedback F σ preserving the throughput is

$$\displaystyle{F_{\sigma } = \left (\begin{array}{c} \gamma ^{2}\delta ^{2}\left (\gamma \delta ^{4}\right )^{{\ast}} \\ \gamma \left (\gamma \delta ^{4}\right )^{{\ast}} \\ \gamma ^{2}\delta ^{2}\left (\gamma \delta ^{4}\right )^{{\ast}} \end{array} \right )}$$

The resulting closed-loop transfer function matrix is

$$\displaystyle{\left (HF_{\sigma }\right )^{{\ast}}H = \left (\begin{array}{ccc} \delta ^{6}\left (\gamma \delta ^{4}\right )^{{\ast}}&\delta ^{4}\left (\gamma \delta ^{4}\right )^{{\ast}}&\delta ^{6}\left (\gamma \delta ^{4}\right )^{{\ast}} \end{array} \right )}$$

while the open-loop transfer function matrix is

$$\displaystyle{H = \left (\begin{array}{ccc} \delta ^{6}\left (\gamma \delta ^{4}\right )^{{\ast}}&\delta ^{4}\left (\gamma \delta ^{2}\right )^{{\ast}}&\delta ^{6}\left (\gamma \delta ^{4}\right )^{{\ast}} \end{array} \right )}$$

Hence, the transfer function matrix of the controlled system is strictly greater than that of the uncontrolled system, i.e., the controlled system is slower than the uncontrolled one. However, as expected, the throughputs of the controlled system and of the uncontrolled system are both equal to one workpiece every four units of time.

A state-space system realizing the transfer function matrix F σ is given by:

$$\displaystyle{\left \{\begin{array}{l} x_{F} =\gamma \delta ^{4}x_{F} \oplus y \\ w = \left (\begin{array}{c} \gamma ^{2}\delta ^{2}\\ \gamma \\ \gamma ^{2}\delta ^{2} \end{array} \right )x_{F} \end{array} \right.}$$

An implementation of this system in terms of a TEG is shown in Fig. 8. The size of internal buffer \(B_{1}\) is equal to two, and the size of the buffer \(B_{2}\) is equal to one. Hence, by appropriately slowing down the system, the suggested feedback has indeed succeeded in strongly reducing internal buffers \(B_{1}\) and \(B_{2}\) (in the uncontrolled case, the sizes of internal buffers \(B_{1}\) and \(B_{2}\) are both equal to \(+\infty \)). A behavior affected by the suggested feedback is provided by the input

$$\displaystyle{v_{1} = v_{3} = e \oplus \gamma ^{5}\delta ^{+\infty }\mbox{ and }v_{ 2} =\delta ^{20} \oplus \gamma ^{5}\delta ^{+\infty }}$$

This corresponds to an order of five workpieces and an arrival of five workpieces of type 1 at time t = 0, and an arrival of five workpieces of type 2 at time t = 20. In the uncontrolled system, workpieces of type 5 are delivered at times 24, 26, 28, 30, and 32. With feedback \(F_{\sigma }\), workpieces of type 5 are delivered at times 24, 28, 32, 36, and 40. Hence, the feedback \(F_{\sigma }\) slowed down the system.

Fig. 8
figure 8

Output feedback preserving the throughput
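The delivery times quoted in Example 13 can be reproduced by simulating both the open loop and the closed loop in the dater domain; the entries of \(F_{\sigma }\) act on past outputs with the indicated event and time shifts. A sketch (ε encoded as −∞):

```python
import numpy as np

EPS = -np.inf

def mp_mv(M, v):
    """(max,+) matrix-vector product."""
    return np.max(M + v[None, :], axis=1)

# plant matrices of Example 7
A = np.array([[0, 0, EPS, EPS, EPS, EPS],
              [4, 4, EPS, EPS, EPS, EPS],
              [EPS, EPS, 0, 0, EPS, EPS],
              [EPS, EPS, 2, 2, EPS, EPS],
              [4, 4, 2, 2, 0, 0],
              [6, 6, 4, 4, 2, 2]])
B = np.array([[0, EPS, 0],
              [4, EPS, 4],
              [EPS, 0, 0],
              [EPS, 2, 2],
              [4, 2, 4],
              [6, 4, 6]])

def simulate(with_feedback):
    x = np.full(6, EPS)
    ys = []
    for k in range(5):
        v = np.array([0.0, 20.0, 0.0])  # type-2 workpieces only arrive at t = 20
        u = v
        if with_feedback:
            # gamma^2 delta^2 (gamma delta^4)*: u1, u3 >= max_j (2 + 4j + y(k-2-j))
            f13 = max((2 + 4 * j + ys[k - 2 - j] for j in range(k - 1)), default=EPS)
            # gamma (gamma delta^4)*: u2 >= max_j (4j + y(k-1-j))
            f2 = max((4 * j + ys[k - 1 - j] for j in range(k)), default=EPS)
            u = np.maximum(v, np.array([f13, f2, f13]))
        x = np.maximum(mp_mv(A, x), mp_mv(B, u))
        ys.append(x[5])
    return ys

print(simulate(False))  # [24.0, 26.0, 28.0, 30.0, 32.0]
print(simulate(True))   # [24.0, 28.0, 32.0, 36.0, 40.0]
```

Both runs confirm the text: the closed loop is slower per occurrence, yet both settle to one delivery every four time units.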

5 Conclusion

In this chapter, we have explained how to use \(\left (\max,+\right )\)-linear systems to model manufacturing problems characterized by synchronizations (i.e., conditions of the form: for all \(k \geq l\), occurrence k of event \(e_{2}\) is at least τ units of time after occurrence \(k - l\) of event \(e_{1}\)). Furthermore, we have also presented some methods to address the trade-off between the rapidity of the system and the sizes of internal buffers. In particular, we have discussed two techniques: preserving the transfer function matrix (i.e., the input-output behavior) and preserving the throughput (i.e., the maximal average production rate). Many other techniques have been investigated, e.g., preserving the response to a specific input [17] or preserving both the input-output behavior and the perturbation-output behavior [18].