
1 Short Introduction

This chapter deals with large-scale systems composed of independent subsystems that share common resources. A typical example is the energy management system of a residential house: the heating system, the refrigeration system, the hot-water tank, and the washing machine are independent pieces of equipment, but the total available power is limited by the subscribed contract. A global objective is to minimize the total consumption of the building, while fulfilling local objectives such as a comfortable ambient air temperature. Other examples include a residential area in which each house has its own behavior, or systems connected by a smart grid.

A global approach to the optimization problem may be too complex. Moreover, each time another subsystem is added, the whole controller synthesis has to be redone. For these reasons, a distributed structure is attractive: it leads to smaller and less complex optimization problems, and it ensures the modularity of the structure.

In the following, the whole optimization problem is linear: the local models and their constraints, as well as the global objective and the global constraints. The special structure of the constraints and the linearity of the problem make it possible to use the Dantzig-Wolfe decomposition. In a few words, the Dantzig-Wolfe decomposition is a well-known algorithm that reduces the complexity of a linear optimization problem when the constraint matrix is block-angular. The initial problem is decomposed into several smaller linear problems, and a coordination mechanism ensures that the solution of the initial problem is found. As will be explained in this chapter, the distributed control structure is based on this decomposition, which is used not only to reduce the complexity of the global problem, but also to ensure modularity.

2 Boundary Conditions

2.1 System Description

We suppose in this chapter that the studied system is composed of a collection of \(q\) independent discrete-time dynamical systems \((S_i)\) that are coupled through input constraints. Their dynamics are described as follows:

$$\begin{aligned} \left\{ \begin{array}{ccl} \displaystyle \mathbf {x}_{i}(k+1) &{}=&{} \displaystyle \mathbf {A}_{i}\mathbf {x}_{i}(k)+\mathbf {B}_{i}\mathbf {u}_i(k),\\ \displaystyle \mathbf {y}_{i}(k) &{}= &{} \mathbf {C}_{i}\mathbf {x}_{i}(k). \end{array} \right. \end{aligned}$$
(6.1)

In this equation, the local state \(\mathbf {x}_{i}\) is in \(\mathbb {R}^{n_{x_i}}\), the local control input \(\mathbf {u}_{i}\) is in \(\mathcal{{U}}_{i}\subset \mathbb {R}^{n_{u_i}}\) and the local output \(\mathbf {y}_{i}\) is in \(\mathbb {R}^{n_{y_i}}\). We suppose that the local input constraint set is defined by \(n_{c_i}\) linear constraints:

$$\begin{aligned} \mathcal{U}_i:=\left\{ \mathbf {u}_{i}\in \mathbb {R}^{n_{u_i}} \quad \backslash \quad \forall n \in \{1,\ldots ,n_{c_i}\}, \quad \mathbf {e}_{{i_n}}^\mathtt{T }\mathbf {u}_{i}\le d_{i_n}\right\} \end{aligned}$$
(6.2)

This set can also be described with the aggregated notation:

$$\begin{aligned} \mathcal{U}_i:=\left\{ \mathbf {u}_{i}\in \mathbb {R}^{n_{u_i}} \quad \backslash \quad \mathbf {E}_{{i}}\mathbf {u}_{i}\le \mathbf {d}_{i}\right\} \end{aligned}$$

Though all these systems are dynamically independent, we suppose that they are connected by \(n_G\) global linear input constraints. The global constraint set is defined as follows:

$$\begin{aligned} \mathcal{U}_G:=\left\{ \left( \mathbf {u}_{i}\right) _{i=1,q}\in \mathbb {R}^{n_{u_1}+\cdots +n_{u_q}} \quad \backslash \quad \forall n \in \{1,\ldots ,n_{G}\}, \quad \sum \limits _{i=1}^{q}\mathbf {h}_{{i_n}}^\mathtt{T }\mathbf {u}_{i}\le g_n\right\} . \end{aligned}$$
(6.3)

For the different inputs, we suppose that they fulfill the conditions given in Assumption 6.1, which are not restrictive for engineering systems. In particular, we suppose that there exists at least one collection of local inputs that satisfies the global constraints:

Assumption 6.1

Boundedness and existence of a global solution. We suppose that:

  • all the sets \(\mathcal{U}_i\) are nonempty bounded convex sets.

  • there exists \(\mathbf {v}=\left( \mathbf {v}_{i}\right) _{i=1,q}\) such that \(\mathbf {v}\in \mathcal{U}_G\) and for all \(i\in \{1,\ldots ,q\}\), \(\quad \mathbf {v}_{i}\in \mathcal{U}_i\).
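As a small numerical illustration, consider \(q=2\) subsystems with scalar inputs; all the numbers below (the bounds, the \(\mathbf {h}_{{i_n}}\), and \(g_n\)) are hypothetical, chosen only to show how membership in \(\mathcal{U}_i\) and \(\mathcal{U}_G\) can be checked:

```python
import numpy as np

# Hypothetical data for q = 2 subsystems with scalar inputs.
# Local sets U_i = {u : E_i u <= d_i}, here 0 <= u_i <= 3       (6.2)
# One global constraint sum_i h_i^T u_i <= g_1, a shared budget (6.3)
E = [np.array([[1.0], [-1.0]]), np.array([[1.0], [-1.0]])]
d = [np.array([3.0, 0.0]), np.array([3.0, 0.0])]
h = [np.array([1.0]), np.array([1.0])]
g = np.array([4.0])

def feasible(u):
    """Check u = (u_1, ..., u_q) against the local and global constraint sets."""
    local_ok = all(np.all(E[i] @ u[i] <= d[i]) for i in range(len(u)))
    global_ok = np.all(sum(h[i] @ u[i] for i in range(len(u))) <= g)
    return bool(local_ok and global_ok)

# The point v = (2, 1.5) plays the role of Assumption 6.1's admissible point:
print(feasible([np.array([2.0]), np.array([1.5])]))   # True
print(feasible([np.array([3.0]), np.array([2.0])]))   # False: 3 + 2 > 4
```

Here each \(\mathcal{U}_i\) is a nonempty bounded convex set (an interval), and the first tested point satisfies both the local and the global constraints, as required by Assumption 6.1.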

The objective in this chapter is to implement a model predictive controller for each system, while taking the global constraints into consideration. This problem is formalized in the next section.

2.2 Control Objective

In predictive control, most results are provided for a quadratic cost, which is attractive for several reasons such as the existence of an explicit solution or the stability analysis. Here, the problem is tackled with the \(l_1\)-norm, which is more representative of real costs (price, power consumption, etc.).

For a given prediction horizon \(N_\mathrm{{p}}\), we denote by \(\mathbf {u}_{i}(k:k+N_\mathrm{{p}}-1)\) the vector of all the control inputs over the prediction horizon. \(\mathbf {y}_{i}(k+1:k+N_\mathrm{{p}})\) is the predicted output, constructed from the state \(\mathbf {x}_{i}(k)\) and according to the input vector \(\mathbf {u}_{i}(k:k+N_\mathrm{{p}}-1)\). \(\mathbf {y}_{i}(k+1:k+N_\mathrm{{p}})\) can be defined recursively by:

$$\begin{aligned} \begin{array}{lcl} \mathbf {y}_{i}(k+l) &{} =&{} \mathbf {C}_{{i}}\mathbf {x}_{i}(k+l)\\ &{}=&{} \mathbf {C}_{{i}}\left( \mathbf {A}^{(l)}_{{i}}\mathbf {x}_{i}(k)+\sum \limits _{j=1}^{l}\mathbf {A}_{{i}}^{(l-j)}\mathbf {B}_{{i}}\mathbf {u}_{{i}}(k+j-1)\right) \end{array} \end{aligned}$$
(6.4)

Then, this equation can be aggregated as follows:

$$\begin{aligned} \begin{array}{lcl} \mathbf {y}_{i}(k+1:k+N_\mathrm{{p}})&= \mathbf {M}_{i}\mathbf {x}_{i}(k)+\mathbf {N}_{i}\mathbf {u}_{i}(k:k+N_\mathrm{{p}}-1) \end{array} \end{aligned}$$
(6.5)
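As an illustration, the matrices \(\mathbf {M}_{i}\) and \(\mathbf {N}_{i}\) of (6.5) can be assembled numerically from \(\mathbf {A}_{i}\), \(\mathbf {B}_{i}\), \(\mathbf {C}_{i}\); the helper below is a sketch, not part of the chapter:

```python
import numpy as np

def prediction_matrices(A, B, C, Np):
    """Build M and N of (6.5) so that y(k+1:k+Np) = M x(k) + N u(k:k+Np-1).
    Per (6.4), block row l of M is C A^l, and block (l, j) of N is
    C A^(l-j) B for j <= l (zero otherwise)."""
    nx, nu = B.shape
    ny = C.shape[0]
    M = np.vstack([C @ np.linalg.matrix_power(A, l) for l in range(1, Np + 1)])
    N = np.zeros((ny * Np, nu * Np))
    for l in range(1, Np + 1):
        for j in range(1, l + 1):
            N[(l - 1) * ny:l * ny, (j - 1) * nu:j * nu] = \
                C @ np.linalg.matrix_power(A, l - j) @ B
    return M, N
```

A quick sanity check is that \(\mathbf {M}_i\mathbf {x}_{i}(k)+\mathbf {N}_i\mathbf {u}_{i}(k:k+N_\mathrm{{p}}-1)\) reproduces the output sequence obtained by simulating (6.1) recursively.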

The expressions of the matrices \(\mathbf {M}_{i}\) and \(\mathbf {N}_{i}\) are directly derived from Eq. (6.4). Let us denote with \(\mathbf {y}_\mathrm{{ref},i}(k+1:k+N_\mathrm{{p}})\) the vector of future references. The objective function for each system \((S_i)\) can be stated as follows:

$$\begin{aligned} J_i(\mathbf {x}_{i}(k),\mathbf {u}_{i}(k:k+N_\mathrm{{p}}-1))&= \left\| \mathbf {y}_{\mathrm{{ref},i}}(k+1:k+N_\mathrm{{p}})-\mathbf {y}_i(k+1:k+N_\mathrm{{p}})\right\| _1\nonumber \\&\quad +\alpha _i\left\| \mathbf {u}_i(k:k+N_\mathrm{{p}}-1)\right\| _1. \end{aligned}$$
(6.6)

\(\alpha _i\) is a positive weight factor. From these local objective functions, the global problem can be derived, integrating the global constraints. This leads to:

problem 6.1

Centralized Problem. Given all the current states and the future references \(\left( \mathbf {x}_{i}(k),\mathbf {y}_\mathrm{{ref},i}(k+1:k+N_\mathrm{{p}})\right) _{i=1,q}\)

$$\begin{aligned} \min \limits _{\left( \mathbf {u}_{i}(k:k+N_\mathrm{{p}}-1)\right) _{i=1,q}}\quad \sum \limits _{i=1}^{q}\quad J_i(\mathbf {x}_{i}(k),\mathbf {u}_{i}(k:k+N_\mathrm{{p}}-1)), \end{aligned}$$
(6.7)

subject to:

  • the local dynamical equations (6.1),

  • the local constraints, \(\forall i\in \{1,\ldots ,q\}\):

    $$\begin{aligned} \forall l\in \{1,\ldots ,N_\mathrm{{p}}\},\quad \mathbf {u}_{i}(k+l-1)\in \mathcal{U}_i, \end{aligned}$$
  • and the global constraints,

    $$\begin{aligned} \forall l\in \{1,\ldots ,N_\mathrm{{p}}\},\quad \left( \mathbf {u}_{i}(k+l-1)\right) _{i=1,q}\in \mathcal{U}_G. \end{aligned}$$

2.3 Control Structure

Because of the very particular structure of the global system (all the local systems are independent), distributing the optimization problem is appealing. The resulting control structure is based on \(q+1\) optimization agents: one dedicated to each system \((S_i)\), and one in charge of the fulfillment of the global constraints, called the coordinator agent. Concerning the communication protocol, the coordinator agent exchanges data with all the other agents, but the other agents do not communicate with each other. The control structure is presented in Fig. 6.1. All the interactions will be formalized in the next section.

Fig. 6.1 Distributed structure

In this chapter, we suppose that all the exchanges between the agents are perfect, which leads to the following assumption:

Assumption 6.2

Communication and data exchange. For the communication protocol, we suppose that:

  • there is no delay induced by the communications.

  • there is no loss of information during the exchanges.

Now that the global problem and the control structure are presented, the different steps of the resolution will be specified in the following section.

3 Description of the Approach

The principle of the decomposition is to use an iterative procedure at each time step \(k\). At each iteration, three actions are performed sequentially:

  • A first one, performed by the coordinator agent, which solves its own optimization problem. Its solution is then sent to every other agent.

  • A second one, in which all the local agents solve their optimization problems in parallel. Their solutions are sent to the coordinator agent.

  • A last one, in which a decision is taken by the coordinator agent. Two cases can occur: either convergence is reached and the procedure stops, or it is not, and a new iteration has to be done.

In the first two steps, different optimization problems have to be solved. The distribution of the initial problem relies on a linear decomposition technique, and all the agents solve their problems using a linear solver. As can be noticed, the local cost function (6.6) is a sum of \(l_1\)-norm terms; it has to be rewritten in a linear form. This point is developed in the next paragraph, before the formalization of all the optimization problems and the interactions between the agents.

3.1 Exact Linearization of the Local Problem

As the function (6.6) is a sum of \(l_1\)-norm functions, it is also a sum of absolute values. To linearize the problem, the idea is to introduce two auxiliary variables for each absolute value.

For a better understanding, let us consider the case where \(n_{u_i}=n_{y_i}=1\) and the prediction horizon \(N_\mathrm{{p}}=1\).

The local minimization problem is the following one:

$$\begin{aligned} \min \limits _{u_i(k)}\ |y_{ref,i}(k+1)-y_i(k+1)|+\alpha _i|u_i(k)|, \end{aligned}$$
(6.8)

subject to

$$\begin{aligned} u_i(k)\in \mathcal{{U}}_i.\end{aligned}$$
(6.9)

Then, by introducing four nonnegative auxiliary variables \(\underline{\mu _i}(k),\overline{\mu _i}(k),\underline{\eta _i}(k),\overline{\eta _i}(k)\), the previous problem is equivalent to:

$$\begin{aligned} \min \limits _{u_i(k),\underline{\mu _i}(k),\overline{\mu _i}(k),\underline{\eta _i}(k),\overline{\eta _i}(k)} \left( \underline{\eta _i}(k)+\overline{\eta _i}(k)\right) +\alpha _i\left( \underline{\mu _i}(k)+\overline{\mu _i}(k)\right) , \end{aligned}$$
(6.10)

subject to

$$\begin{aligned} \mathbf {E}_{i}u_i(k)\le \mathbf {d}_{i}\end{aligned}$$
(6.11)
$$\begin{aligned} y_i(k+1)+\underline{\eta _i}(k)-\overline{\eta _i}(k)=y_{ref,i}(k+1)&\end{aligned}$$
(6.12)
$$\begin{aligned} u_i(k)+\underline{\mu _i}(k)-\overline{\mu _i}(k)=0&\end{aligned}$$
(6.13)
$$\begin{aligned} \underline{\mu _i}(k),\overline{\mu _i}(k),\underline{\eta _i}(k),\overline{\eta _i}(k)\ge 0&\end{aligned}$$
(6.14)

Sketches of the proof can be found in [3], in which another formulation is proposed to linearize an \(l_1\)-norm predictive control problem.
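The scalar example (6.8)-(6.14) can be solved directly with a linear-programming solver; the sketch below uses scipy, with hypothetical values for the model \((a,b,c)\), the state \(x(k)\), the reference, \(\alpha _i\), and the bound \(0\le u\le 3\):

```python
import numpy as np
from scipy.optimize import linprog

a, b, c = 0.9, 0.5, 1.0            # x(k+1) = a x(k) + b u(k), y(k) = c x(k)
x_k, y_ref, alpha = 2.0, 3.0, 0.1
# Decision vector w = [u, mu_lo, mu_hi, eta_lo, eta_hi] (all at time k)
cost = np.array([0.0, alpha, alpha, 1.0, 1.0])            # objective (6.10)
A_ub = np.array([[1.0, 0, 0, 0, 0],                       # E_i u <= d_i (6.11):
                 [-1.0, 0, 0, 0, 0]])                     # here 0 <= u <= 3
b_ub = np.array([3.0, 0.0])
A_eq = np.array([[c * b, 0, 0, 1.0, -1.0],  # (6.12): y(k+1)+eta_lo-eta_hi=y_ref
                 [1.0, 1.0, -1.0, 0, 0]])   # (6.13): u + mu_lo - mu_hi = 0
b_eq = np.array([y_ref - c * a * x_k, 0.0])
bounds = [(None, None)] + [(0, None)] * 4                 # (6.14)
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.x[0], res.fun)   # u* = 2.4 cancels the tracking error, cost 0.24
```

For these numbers the tracking error \(|3-1.8-0.5u|\) vanishes at \(u=2.4\), so the optimal cost reduces to the input penalty \(0.1\times 2.4=0.24\), which the LP recovers.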

3.2 Local Agent Optimization Problem

This transformation is exactly the same in the general case. To give an idea of the number of variables involved in each subsystem optimization: the initial problem has \(n_{u_i}\cdot N_\mathrm{{p}}\) variables and \(n_{c_i}\cdot N_\mathrm{{p}}\) constraints with a nonlinear cost function, whereas the rewritten problem has \(\left( 3n_{u_i}+2n_{y_i}\right) \cdot N_\mathrm{{p}}\) variables and \(\left( n_{c_i}+n_{u_i}+n_{y_i}\right) \cdot N_\mathrm{{p}}\) constraints, but with a linear cost function.

Then, let us denote with \(\mathbf {w}_{i}(k:k+N_\mathrm{{p}}-1)\) the vector of all the variables of the rewritten local optimization problem, with

$$\begin{aligned} {\mathbf w}_{i}(k:k+N_\mathrm{{p}}-1)=\left( \begin{array}{c} {\mathbf u}_{i}(k:k+N_{{ \mathrm p}}-1)\\ \underline{{\mu }_{i}}(k:k+N_\mathrm{{p}}-1)\\ \overline{{\mu }_{i}}(k:k+N_\mathrm{{p}}-1)\\ \underline{{\eta }_{i}}(k:k+N_\mathrm{{p}}-1)\\ \overline{{\eta }_{i}}(k:k+N_\mathrm{{p}}-1) \end{array} \right) \end{aligned}$$

As we will see in the following, from one iteration to the next during the distributed procedure, all the constraints of the local problem remain the same, but the linear cost function can change. At an iteration \(p\), let us denote with \({\mathbf {k}_{{i}}^{(p)}}^\mathtt T \mathbf {w}_{i}(k:k+N_\mathrm{{p}}-1)\) this cost function.

Then, the local problem can be stated as follows:

Problem 6.2

Rewritten Local Optimization Problem. For a time step \(k\), and at an iteration \(p\), the local optimization problem is:

$$\begin{aligned} \min \limits _{{\mathbf {w}}_{i}(k:k+N_\mathrm{{p}}-1)}\ {\mathbf {k}_{{i}}^{(p)}}^\mathtt T \mathbf {w}_{i}(k:k+N_\mathrm{{p}}-1), \end{aligned}$$
(6.15)

subject to

$$\begin{aligned} \overline{\mathbf {E}}_{i}\mathbf {w}_{i}(k:k+N_\mathrm{{p}}-1)\le {\overline{\mathbf {d}}_i}(k). \end{aligned}$$
(6.16)

It is important to notice that, in this local problem, the matrix \({\mathbf {\overline{E}}}_i\) does not depend on time, contrary to the vector \({\mathbf {\overline{d}}}_{i}(k)\), which has to be calculated at each time step and depends on the future target \(\mathbf {y}_\mathrm{{ref},i}(k+1:k+N_\mathrm{{p}})\) and the current state \(\mathbf {x}_{i}(k)\). Their expressions are the following:

$$\begin{aligned} {\overline{\mathbf {E}}}_{i}=\left[ \begin{array}{ccccc} &{}&{}&{}&{}\\ diag ({\mathbf {E}}_{i})&{}{\mathbf {0}}&{}{\mathbf {0}}&{}{\mathbf {0}}&{}{\mathbf {0}}\\ &{}&{}&{}&{}\\ \hline {\mathbf {N}}_{i}&{}{\mathbf {0}}&{}{\mathbf {0}}&{}{\mathbf {I}}&{}-{\mathbf {I}}\\ -{\mathbf {N}}_{i}&{}{\mathbf {0}}&{}{\mathbf {0}}&{}-{\mathbf {I}}&{}{\mathbf {I}}\\ \hline {\mathbf {I}}&{}{\mathbf {I}}&{}-{\mathbf {I}}&{}{\mathbf {0}}&{}{\mathbf {0}}\\ -{\mathbf {I}}&{}-{\mathbf {I}}&{}{\mathbf {I}}&{}{\mathbf {0}}&{}{\mathbf {0}}\\ \hline {\mathbf {0}}&{}-{\mathbf {I}}&{}{\mathbf {0}}&{}{\mathbf {0}}&{}{\mathbf {0}}\\ \hline {\mathbf {0}}&{}{\mathbf {0}}&{}-{\mathbf {I}}&{}{\mathbf {0}}&{}{\mathbf {0}}\\ \hline {\mathbf {0}}&{}{\mathbf {0}}&{}{\mathbf {0}}&{}-{\mathbf {I}}&{}{\mathbf {0}}\\ \hline {\mathbf {0}}&{}{\mathbf {0}}&{}{\mathbf {0}}&{}{\mathbf {0}}&{}-{\mathbf {I}} \end{array} \right] ,\quad {\overline{\mathbf {d}}}_{i}(k)=\left[ \begin{array}{c} {\mathbf {d}}_{i}\\ \vdots \\ {\mathbf {d}}_{i}\\ \hline {\mathbf {y}}_\mathrm{{ref},i}(k+1:k+N_\mathrm{{p}})-\mathbf {M}_{i}\mathbf {x}_{i}(k)\\ -\mathbf {y}_\mathrm{{ref},i}(k+1:k+N_\mathrm{{p}})+\mathbf {M}_{i}\mathbf {x}_{i}(k)\\ \hline \mathbf {0}\\ \mathbf {0}\\ \hline \mathbf {0}\\ \hline \mathbf {0}\\ \hline \mathbf {0}\\ \hline \mathbf {0} \end{array} \right] \end{aligned}$$
(6.17)
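The aggregated data (6.17) can be assembled mechanically. The sketch below (a hypothetical helper, not from the chapter) writes the two equality blocks as pairs of opposite inequalities, as in (6.17), taking the prediction matrices \(\mathbf {M}_{i}\), \(\mathbf {N}_{i}\) of (6.5) as inputs:

```python
import numpy as np

def local_constraint_data(E, d, M_mat, N_mat, y_ref, x_k, Np):
    """Build E_bar and d_bar(k) of (6.16)-(6.17) for the stacked variable
    w = [u; mu_lo; mu_hi; eta_lo; eta_hi], each block over the horizon Np."""
    nc, nu = E.shape
    ny = M_mat.shape[0] // Np
    nup, nyp = nu * Np, ny * Np
    Z = np.zeros
    Iu, Iy = np.eye(nup), np.eye(nyp)
    # Row block 1: diag(E_i) u <= [d_i; ...; d_i]
    top = np.hstack([np.kron(np.eye(Np), E),
                     Z((nc * Np, nup)), Z((nc * Np, nup)),
                     Z((nc * Np, nyp)), Z((nc * Np, nyp))])
    # Row blocks 2-3: N u + eta_lo - eta_hi = y_ref - M x(k), as +/- inequalities
    eq_y = np.hstack([N_mat, Z((nyp, nup)), Z((nyp, nup)), Iy, -Iy])
    # Row blocks 4-5: u + mu_lo - mu_hi = 0, as +/- inequalities
    eq_u = np.hstack([Iu, Iu, -Iu, Z((nup, nyp)), Z((nup, nyp))])
    # Row blocks 6-9: nonnegativity of the four auxiliary blocks
    pos = np.hstack([Z((2 * nup + 2 * nyp, nup)), -np.eye(2 * nup + 2 * nyp)])
    E_bar = np.vstack([top, eq_y, -eq_y, eq_u, -eq_u, pos])
    rhs_y = y_ref - M_mat @ x_k
    d_bar = np.concatenate([np.tile(d, Np), rhs_y, -rhs_y,
                            Z(nup), Z(nup), Z(2 * nup + 2 * nyp)])
    return E_bar, d_bar
```

Only \(\mathbf {\overline{d}}_i(k)\) has to be recomputed at each time step; \(\mathbf {\overline{E}}_i\) can be built once offline.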

During the iterative process, the cost function \({{\mathbf {k}}_{{i}}^{(p)}}^\mathtt T \mathbf {w}_{i}(k:k+N_\mathrm{{p}}-1)\) changes from one iteration to the next. This is how the global constraints are taken into account. If there were no global constraints, the cost function of the rewritten local optimization problem would simply be \({{\mathbf {k}}_{{i,0}}}^\mathtt T \mathbf {w}_{i}(k:k+N_\mathrm{{p}}-1)\), with:

$$\begin{aligned} \mathbf {k}_{{i,0}}^\mathtt T =\left[ \begin{array}{c|c|c|c|c} \mathbf {0}^\mathtt T&\alpha _i\mathbf {1}^\mathtt T&\alpha _i\mathbf {1}^\mathtt T&\mathbf {1}^\mathtt T&\mathbf {1}^\mathtt T \end{array}\right] \end{aligned}$$
(6.18)

As it will be explained in the following, this vector will be used during the exchanges from the local agents to the coordinator agent.

3.3 The Coordinator Agent Problem

The main objective of the coordinator agent is to guarantee that the global constraints are fulfilled. This is ensured by a sequence of optimizations performed iteratively. From one iteration to the next, a new optimization variable is added. The way this new variable enters the cost function and the constraints depends only on the solutions of the local optimizations performed at the previous iteration. Before starting, the initialization requires \(N_0\) different admissible points for the local agents. More details on the initialization are given in a later paragraph.

Then, at iteration \(p\), the coordinator agent has \(\upsilon ^{(p)}\) optimization variables, with \(\upsilon ^{(p)}=N_0+p-1\). For \(s\in \{1,\ldots ,\upsilon ^{(p)}\}\), let us denote with \(\mathbf {w}_{{i}}^{(s)}(k:k+N_\mathrm{{p}}-1)\) the \(s\)-th admissible point of local agent \(i\).Footnote 1

Then, we define the following variables, \(\forall s\in \{1,\ldots ,\upsilon ^{(p)}\}\),

$$\begin{aligned} z^{(s)}(k)&= \sum \limits _{i=1}^{q}\ \mathbf {k}_{{i,0}}^\mathtt{T }\mathbf {w}_{{i}}^{(s)}(k:k+N_\mathrm{{p}}-1)\end{aligned}$$
(6.19)
$$\begin{aligned} r_{n,l}^{(s)}(k)&= \sum \limits _{i=1}^{q}\ \mathbf {h}_{{i_n}}^\mathtt{T }\mathbf {u}_{{i}}^{(s)}(k+l-1), \quad \forall n\in \{1,\ldots ,n_G\}, \forall l\in \{1,\ldots ,N_\mathrm{{p}}\}\nonumber \\ \end{aligned}$$
(6.20)

From this, the coordinator agent has to solve its own problem, which can be stated as follows:

Problem 6.3

Coordinator Optimization Problem. At an iteration \(p\), given the variables \(z^{(s)}(k)\) and \(r_{n,l}^{(s)}(k)\), with \(s\in \{1,\ldots ,\upsilon ^{(p)}\}\), the optimization problem is:

$$\begin{aligned} \min \limits _{(v_s)_{s=1,\upsilon ^{(p)}}}\ \sum \limits _{s=1}^{\upsilon ^{(p)}}\ z^{(s)}(k)v_s \end{aligned}$$
(6.21)

with respect to

$$\begin{aligned} \forall n\in \{1,\ldots ,n_G\}, \forall l\in \{1,\ldots ,N_\mathrm{{p}}\},\ \sum \limits _{s=1}^{\upsilon ^{(p)}} r_{n,l}^{(s)}(k) v_s \le g_n\quad&:&\lambda _{n,l} \end{aligned}$$
(6.22)
$$\begin{aligned} \sum \limits _{s=1}^{\upsilon ^{(p)}} v_s = 1\quad&:&\sigma \end{aligned}$$
(6.23)
$$\begin{aligned} \forall s\in \{1,\ldots ,\upsilon ^{(p)}\},\quad v_s \ge 0 \quad&\end{aligned}$$
(6.24)

The coordinator agent has to find not only the solution \(v_s\) but also the dual variable values \(\lambda _{n,l}\) associated with the linear constraints (6.22),Footnote 2 and the dual variable value \(\sigma \) associated with the linear constraint (6.23).
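The coordinator problem is a small LP whose primal solution and dual values can both be extracted with scipy's HiGHS backend, which exposes the duals as `ineqlin.marginals` and `eqlin.marginals` (for this minimization, the marginals of the \(\le \) constraints are nonpositive). The data \(z^{(s)}\), \(r^{(s)}\) below are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical master data: 3 admissible points, one global constraint, Np = 1.
z = np.array([5.0, 3.0, 4.0])      # z^(s)(k)        (6.19)
r = np.array([[1.0, 6.0, 3.0]])    # r_{1,1}^(s)(k)  (6.20)
g = [4.0]
res = linprog(z,
              A_ub=r, b_ub=g,                     # (6.22)
              A_eq=np.ones((1, 3)), b_eq=[1.0],   # (6.23)
              bounds=[(0, None)] * 3,             # (6.24)
              method="highs")
v = res.x                          # convex-combination weights
lam = res.ineqlin.marginals        # lambda of (6.22), nonpositive here
sigma = res.eqlin.marginals[0]     # sigma of (6.23)
print(v, lam, sigma)               # for these data: v = (0, 1/3, 2/3)
```

For these data the optimum mixes the second and third points (\(v=(0,\tfrac{1}{3},\tfrac{2}{3})\)), saturating the global budget; the duals are \(\lambda _{1,1}=-\tfrac{1}{3}\) and \(\sigma =5\).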

3.4 Iterative Mechanism

Each iteration consists in the following three steps:

  1.

    The coordinator agent optimization. At iteration \(p\), it has to solve its Problem 6.3. Let us denote with \((v_s^{(p)})_{s=1,\upsilon ^{(p)}}\) the solution of this problem, and \(\lambda _{n,l}^{(p)}\) and \(\sigma ^{(p)}\) the associated optimal dual variable values.

    The values of the dual variables \(\lambda _{n,l}^{(p)}\) are then sent to each local agent, and will be used to define a new cost function \({{\mathbf {k}}_{{i}}^{(p)}}^\mathtt{T }\mathbf {w}_{i}(k:k+N_\mathrm{{p}}-1)\), which is calculated using:

    $$\begin{aligned} {\mathbf {k}_{{i}}^{(p)}}&= {\mathbf {k}_{{i,0}}}+ \left[ \begin{array}{c} -\left( \sum \limits _{n=1}^{n_G} \lambda _{n,1}^{(p)}\mathbf {h}_{{i_n}}\right) \\ \vdots \\ -\left( \sum \limits _{n=1}^{n_G} \lambda _{n,N_\mathrm{{p}}}^{(p)}\mathbf {h}_{{i_n}}\right) \\ \hline \mathbf {0}\\ \hline \mathbf {0}\\ \hline \mathbf {0}\\ \hline \mathbf {0}\\ \end{array}\right] \end{aligned}$$
    (6.25)

    The important point is that the second term of this sum can be calculated by the coordinator agent and sent numerically to the local agent. This term is a penalization cost vector whose purpose is to take the global constraints into account in the local optimization.

  2.

    In parallel, each optimization agent has to solve its own problem 6.2 with the new cost function \({{\mathbf {k}}_{{i}}^{(p)}}^\mathtt T \mathbf {w}_{i}(k:k+N_\mathrm{{p}}-1)\) to find its solution \(\mathbf {w}_{{i}}^{(\upsilon ^{(p)}+1)}(k:k+N_\mathrm{{p}}-1)\), which has to be sent to the coordinator agent.

  3.

    The convergence check is done after these two steps by comparing two values:

    • the value of the dual variable \(\sigma ^{(p)}\)

    • the value of the sum of all the local optimization values, calculated as follows:

    $$\begin{aligned} \psi ^{(p)}=\sum \limits _{i=1}^{q}{\mathbf {k}_{{i}}^{(p)}}^\mathtt{T }\mathbf {w}_{i}^{(\upsilon ^{(p)}+1)}(k:k+N_\mathrm{{p}}-1) \end{aligned}$$
    (6.26)

    The value \(\psi ^{(p)}\) obtained by Eq. (6.26) can be seen as the cost resulting from a relaxed optimization problem with fewer constraints than the initial one. Two cases can occur:

  3a.

    If \(\psi ^{(p)}<\sigma ^{(p)}\), a new iteration has to be done. The coordinator agent will use the local solutions \(\mathbf {w}_{{i}}^{(\upsilon ^{(p)}+1)}(k:k+N_\mathrm{{p}}-1)\) to calculate the new coefficients \(z^{(\upsilon ^{(p)}+1)}(k)\) and \(r_{n,l}^{(\upsilon ^{(p)}+1)}(k)\), according to Eqs. (6.19) and (6.20).

  3b.

    In the other case (i.e. \(\psi ^{(p)}\ge \sigma ^{(p)}\)), the optimal solution of the initial problem has been reached. More precisely, the local optimal solution can be computed as follows:

    $$\begin{aligned} \mathbf {w}_{i}^{\star }(k:k+N_\mathrm{{p}}-1)=\sum \limits _{s=1}^{\upsilon ^{(p)}}v_s^{(p)}\mathbf {w}_{i}^{(s)}(k:k+N_\mathrm{{p}}-1), \end{aligned}$$
    (6.27)

    which is a convex combination of all the previous solutions obtained for the different cost functions.

    As usual in model predictive control, only the first input of the control sequence, \(\mathbf {u}_{i}^{\star }(k)\), is applied to the system.
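The whole iterative mechanism can be sketched on a toy instance. All the numbers below are hypothetical (two subsystems with scalar inputs, \(N_\mathrm{{p}}=1\), the local costs abstracted to \(\mathbf {k}_{i,0}^\mathtt{T }\mathbf {w}_i=c_i u_i\)); scipy's marginals carry the same sign as the \(\lambda _{n,l}\) used in (6.25):

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: two subsystems with scalar inputs, local linear costs c_i u_i,
# local sets 0 <= u_i <= 3, one global constraint u_1 + u_2 <= 4.
c = [np.array([-2.0]), np.array([-1.0])]
h = [np.array([1.0]), np.array([1.0])]
g, lo, hi, q = 4.0, 0.0, 3.0, 2

def local_solve(k):
    """Local agent (Problem 6.2): minimize the priced cost over the local set."""
    return linprog(k, bounds=[(lo, hi)], method="highs").x

# Initialization: N0 = 2 collections of admissible local points.
cols = [[np.array([0.0])] * q, [np.array([3.0])] * q]

for p in range(20):
    z = [sum(c[i] @ w[i] for i in range(q)) for w in cols]    # (6.19)
    r = [[sum(h[i] @ w[i] for i in range(q)) for w in cols]]  # (6.20)
    # Step 1: coordinator master problem (Problem 6.3) and its duals.
    m = linprog(z, A_ub=r, b_ub=[g], A_eq=np.ones((1, len(cols))),
                b_eq=[1.0], bounds=[(0, None)] * len(cols), method="highs")
    lam = m.ineqlin.marginals     # lambda of (6.22), nonpositive here
    sigma = m.eqlin.marginals[0]  # sigma of (6.23)
    # Step 2: priced local problems, k_i^(p) = k_i0 - lambda h_i  (6.25).
    k = [c[i] - lam[0] * h[i] for i in range(q)]
    new = [local_solve(k[i]) for i in range(q)]
    psi = sum(k[i] @ new[i] for i in range(q))                # (6.26)
    # Step 3: convergence test (small tolerance for numerical safety).
    if psi >= sigma - 1e-9:
        break
    cols.append(new)              # new column for the next master (step 3a)

# Step 3b: recover the optimum as a convex combination (6.27).
w_star = [sum(m.x[s] * cols[s][i] for s in range(len(m.x))) for i in range(q)]
print([round(float(w[0]), 6) for w in w_star])   # [3.0, 1.0]
```

On this instance the procedure converges in two iterations to \(u^{\star }=(3,1)\), which saturates both the first local set and the global budget, as the centralized LP would.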

3.5 Initialization

This point is maybe the main difficulty and drawback of the method. To initialize the algorithm, it is necessary to provide enough different solutions to the master problem so that it is feasible. From a theoretical point of view, two different solutions are enough, but only if they are well chosen: they cannot be picked randomly, but must be such that the resulting master problem is feasible. From a practical point of view, they can be provided using engineering considerations, or they can result from different local optimizations performed with arbitrary cost functions. In any case, there exist techniques, such as the one proposed in [5], to build an always-feasible coordinator agent problem.

3.6 Overview of the Procedure

From all the previous considerations, the iterative procedure of the distributed MPC based on the Dantzig-Wolfe decomposition can be summarized as Algorithm 6.1, which is called at each time step \(k\).

Algorithm 6.1 (figure)

3.7 A Remark on Modularity

Among the mechanisms used during the distribution procedure, it is important to notice that each local agent interacts only with the coordinator agent, not with the other local agents. Hence, adding a new subsystem means integrating a new local agent into the structure; it communicates only with the coordinator agent, without any modification of the other agents. Moreover, the complexity of the coordinator optimization problem does not increase, because only the way the coefficients \(z^{(s)}(k)\) and \(r_{n,l}^{(s)}(k)\) are calculated changes. All these considerations ensure the modularity of the approach.

4 Theoretical Results Availability

The two major advantages of this method are that:

  • the global solution will be found,

  • it requires a finite number of iterations.

The presented methodology is very close to the Dantzig-Wolfe decomposition procedure, first proposed in [6]. The convergence conditions are linked to the fact that at each step, all the solutions of the optimization problems are finite, and no problem is infeasible.

For all the local agents, since only the cost function changes from one optimization to the next, the problem can always be solved and the solution is finite. This point is ensured by Assumption 6.1. For the coordinator agent problem, as soon as the initialization is well tuned, the problem cannot become infeasible. Thanks to Assumption 6.2, in which all the communications are supposed to be perfect, the iterative process is then guaranteed to converge to the optimal solution.

All the mathematical concepts used to prove the convergence of the decomposition can be found in references [1, 2, 4], in which many different analyses are provided.

5 Application Results Availability

In this chapter, we propose a distributed structure to control different systems connected by global input constraints. This is the simplest case in which the Dantzig-Wolfe decomposition applies. It can be extended to a more general class of systems, with global constraints on the outputs or on the states. In that case, the main difficulty lies in the feasibility of the problem, which is no longer ensured if the constraints are hard ones. Different methods can be used to soften these constraints and then apply the decomposition.

In this chapter, all the systems are dynamically independent. In a more general case, if the systems are coupled through the inputs, states, or outputs, the methodology can be adapted to take these interactions into account. This can be done using nested iterations and a communication-based algorithm, such as the one proposed in [9]. The major drawback of these nested iterations is that the algorithm converges, but not to the optimal solution of the problem. Nevertheless, this method has been applied with success to building temperature regulation, where it has been shown that the suboptimality is not really significant.

In the case where all the systems are connected by common input variables, the problem can also be distributed using the dual of the Dantzig-Wolfe decomposition, namely Benders' decomposition. This has been used to regulate temperature in buildings in the multi-source case [10].

While the first applications of the method dealt with temperature regulation in buildings [8], it has also been extended to the energy management system of a residential house, to develop an adjustable structure that optimizes the energy bill [7].

6 Conclusions

In this chapter, a distributed structure has been proposed to control a collection of linear dynamical systems coupled by linear global constraints. The particular structure of the constraint set allows the use of the Dantzig-Wolfe decomposition to define the control architecture. The two major advantages of this method are, first, the fact that the solution found during the iterative process is the global solution, and second, the modularity induced by the control structure, which allows adding or removing subsystems without any modification of the other local controllers. The methodology is proposed for a collection of independent subsystems; a current direction for future work is how to generalize it to interconnected subsystems.