Introduction

RMPC is an optimization-based approach to the synthesis of robust control laws for constrained control systems subject to bounded uncertainty. RMPC synthesis can be seen as a suitably defined repetitive decision-making process, in which the underlying decision-making step is an appropriately formulated robust optimal control (ROC) problem. The ROC problem is specified so as to ensure that all possible predicted sequences of controlled states and corresponding control actions satisfy the constraints and that the “worst-case” cost is minimized. The decision variable in the ROC problem is a control policy (i.e., a sequence of control laws), which allows different control actions at different predicted states, while the uncertainty takes on the role of an adversary. RMPC utilizes recursively the solution to the associated ROC problem in order to implement the feedback control law, which is, in fact, equal to the first control law of an optimal control policy.

A theoretically rigorous approach to RMPC synthesis can be obtained either by employing, in a repetitive fashion, the dynamic programming solution of the corresponding ROC problem or by solving online, in a recursive manner, an infinite-dimensional optimization problem (Rawlings and Mayne 2009). In either case, the associated computational complexity renders the exact RMPC synthesis hardly ever tractable. This computational impracticability of the theoretically exact RMPC, in conjunction with the convoluted interactions of the uncertainty with the evolution of the controlled system, constraints, and control objectives, has made RMPC an extremely challenging and active research field. It has become evident that a prominent challenge is to develop a form of RMPC synthesis that adequately handles the effects of the uncertainty and yet is computationally practicable. Contemporary research proposals aim to address the inevitable trade-off between the quality of the guaranteed structural properties and the corresponding computational complexity. A categorization of the existing proposals for RMPC synthesis can be based on their treatment of the effects of the uncertainty. In this sense, two alternative approaches to RMPC synthesis appear to be dominant.

The first category of the alternative approaches is represented by the methods that utilize, when possible, inherent robustness of nominal MPC synthesis. These proposals deploy a nominal MPC, albeit designed for a suitably modified control system, constraints, and control objectives. Such approaches are computationally practicable. However, the effects of the uncertainty are taken care of in an indirect way; the robustness properties of the controlled dynamics are frequently addressed via an a posteriori input-to-state stability analysis, which might be unnecessarily conservative and geometrically insensitive. Equally important drawbacks of these approaches to RMPC synthesis arise due to the fact that the nominal MPC synthesis is itself an inherently fragile (nonrobust) process; in particular, the stability property of the conventional MPC might fail to be robust (Grimm et al. 2004) and, furthermore, the optimal control of constrained discrete time systems, employed for the nominal MPC synthesis, can be a fragile process itself (Raković 2009).

The second category of RMPC design methods encapsulates the approaches that take the effects of the uncertainty into account more directly. These proposals are compatible with the emerging consensus: there is a need for the deployment of the simplifying approximations of the underlying control policy and sensible prioritization and modification of control objectives so as to simultaneously enhance computational tractability and ensure a priori guarantees of the desirable topological properties and system-theoretic rigor. The simplifying parameterizations of the control policy are employed primarily to allow for a computationally efficient handling of the interactions of the uncertainty with the evolution of the controlled system and constraints. The control objectives are prioritized and modified when necessary, in order to ensure that the corresponding ROC problem is computationally tractable. The effectiveness of such methods depends crucially on the ability to detect a sufficiently rich parameterization of control policy and to devise a systematic way for meaningful simplification of control objectives.

In stark contrast to the well-matured theory of nominal MPC synthesis, a systematic assessment of, and unified exposition of, the current state of affairs in the RMPC field is a highly demanding task. Nevertheless, it is possible to outline the main aspects of the exact RMPC synthesis and to provide an overview of the dominant simplifying approximations.

Contemporary Setting and Uncertainty Effect

The contemporary approach to the exact RMPC synthesis is now delineated in a step-by-step manner.

The system: The most common setting in RMPC synthesis considers the control systems modelled, in discrete time, by

$$\displaystyle{ x^{+} = f\left (x,u,w\right ), }$$
(1)

where \(x \in \mathbb{R}^{n}\), \(u \in \mathbb{R}^{m}\), \(w \in \mathbb{R}^{p}\), and \(x^{+} \in \mathbb{R}^{n}\) are, respectively, the current state, control and uncertainty, and the successor state, while \(f(\cdot,\cdot,\cdot ) : \mathbb{R}^{n} \times \mathbb{R}^{m} \times \mathbb{R}^{p} \rightarrow \mathbb{R}^{n}\) is the state transition map, assumed to be continuous. Thus, when \(x_{k}\), \(u_{k}\), and \(w_{k}\) are the state, the control, and the uncertainty at the time instance k, then \(x_{k+1} = f(x_{k},u_{k},w_{k})\) is the state at the time instance k + 1.
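For concreteness, the state transition map of (1) can be sketched numerically; the linear dynamics and all numerical values below are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Illustrative instance of (1): a linear map f(x, u, w) = A x + B u + w;
# the matrices A, B are hypothetical choices for this sketch.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.5],
              [1.0]])

def f(x, u, w):
    """State transition map of (1) for this linear example."""
    return A @ x + B @ u + w

x0 = np.array([1.0, 0.0])       # current state x_0
u0 = np.array([-0.5])           # control u_0
w0 = np.array([0.05, -0.02])    # one realization of the uncertainty w_0
x1 = f(x0, u0, w0)              # successor state x_1 = f(x_0, u_0, w_0)
```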

The constraints: The system variables x, u, and w are subject to hard constraints:

$$\displaystyle{ \left (x,u,w\right ) \in \mathbb{X} \times \mathbb{U} \times \mathbb{W}, }$$
(2)

where the constraint sets \(\mathbb{X}\) and \(\mathbb{U}\) represent state and control constraints, while the constraint set \(\mathbb{W}\) specifies geometric bounds on the uncertainty. The constraint sets \(\mathbb{X} \subset \mathbb{R}^{n}, \mathbb{U} \subset \mathbb{R}^{m}\), and \(\mathbb{W} \subset \mathbb{R}^{p}\) are assumed to be compact.

The control policy: It is necessary to specify, in a manner that is compatible with the type and nature of the uncertainty, the information available for the RMPC synthesis. The traditional state feedback setting treats the case in which, at any time instance k, the state \(x_{k}\) is known when the current control \(u_{k}\) is determined, while the values of the current and future uncertainty \(w_{k+i}\), \(i \geq 0\), are not known but are guaranteed to take values within the uncertainty constraint set \(\mathbb{W}\) (i.e., \(w_{k+i} \in \mathbb{W}\)). Within this setting, the use of a control policy,

$$\displaystyle{ \Pi _{N-1} := \left \{\pi _{0}\left (\cdot \right ),\pi _{1}\left (\cdot \right ),\ldots,\pi _{N-1}\left (\cdot \right )\right \}, }$$
(3)

where N is the prediction horizon and each \(\pi _{k}(\cdot )\) : \(\mathbb{R}^{n} \rightarrow \mathbb{R}^{m}\) is a control law, is structurally permissible and desirable.

The generalized state and control predictions: Because of the uncertainty, the ordinary state and control predictions, as employed in the nominal MPC, are not suitable. Clearly, when x and \(\kappa(x)\) are the current state and control, then the successor state \(x^{+}\) can take any value in the possible set of successor states \(\{f(x,\kappa(x),w) : w \in \mathbb{W}\}\). Consequently, it is necessary to consider suitably generalized state and control predictions. The interaction of the uncertainty with the predicted behavior of the system is captured naturally by invoking the maps \(F(\cdot,\cdot)\) and \(G(\cdot,\cdot)\) specified, for any subset X of \(\mathbb{R}^{n}\) and any control function \(\kappa (\cdot ) : \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}\), by

$$\displaystyle\begin{array}{rcl} & & F\left (X,\kappa \right )\ := \left \{f\left (x,\kappa \left (x\right ),w\right ) : x \in X,w \in \mathbb{W}\right \}\mbox{ and} \\ & & G\left (X,\kappa \right )\ := \left \{\kappa \left (x\right ) : x \in X\right \}. {}\end{array}$$
(4)

Within the considered setting, the corresponding state and control predictions are, in fact, set-valued and, for each relevant k, obey the relations

$$\displaystyle\begin{array}{rcl} & & X_{k+1}\ = F\left (X_{k},\pi _{k}\right )\mbox{ and }U_{k} = G\left (X_{k},\pi _{k}\right ),\mbox{ with} \\ & & \quad X_{0}:= \left \{x\right \}. {}\end{array}$$
(5)

The set sequences \(\mathbf{X}_{N} := \{X_{0},X_{1},\ldots,X_{N-1},X_{N}\}\) and \(\mathbf{U}_{N-1} := \{U_{0},U_{1},\ldots,U_{N-1}\}\) represent the possible sets of the predicted states and control actions, which are commonly known as the state and control tubes. Evidently, the state and control tubes are functions of the initial state x and the control policy \(\Pi _{N-1}\). Conversely, for a given initial state x, any structurally permissible control policy \(\Pi _{N-1}\) induces unique state and control tubes.
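A sampled approximation of the set maps (4) and the tube recursion (5) can illustrate how a singleton \(X_{0}\) spreads into a set of possible successor states; the scalar dynamics, feedback gain, and uncertainty interval below are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sampled approximation of the set maps (4) for a scalar linear system
# x^+ = a x + u + w with kappa(x) = k x; all values are hypothetical.
a, k = 1.0, -0.5                 # closed-loop coefficient a + k = 0.5
w_lo, w_hi = -0.1, 0.1           # interval uncertainty set W

def F(X, kappa):
    """Samples of F(X, kappa) = {f(x, kappa(x), w) : x in X, w in W}."""
    ws = rng.uniform(w_lo, w_hi, size=200)
    return np.array([a * x + kappa(x) + w for x in X for w in ws])

def kappa(x):
    return k * x

X0 = np.array([1.0])             # X_0 = {x}, a singleton, per (5)
X1 = F(X0, kappa)                # sampled set of possible successor states
# every sampled successor lies in the exact set [0.4, 0.6]
```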

The robust constraint satisfaction: One of the primary objectives in RMPC synthesis is to ensure that the generalized state and control predictions satisfy the state and control constraints. Because of the repetitive nature of RMPC, it would be ideal to consider the control policy and generalized state and control predictions over the infinite horizon (i.e., for \(N = \infty\)). Unfortunately, this is hardly ever practicable in a direct fashion. When the prediction horizon is finite, the robust constraint satisfaction reduces to the conditions that, for all \(k = 0,1,\ldots,N - 1\), the set inclusions

$$\displaystyle{ X_{k} \subseteq \mathbb{X}\mbox{ and }U_{k} \subseteq \mathbb{U} }$$
(6)

hold true and that the possible set of states \(X_{N}\) at the prediction time instance N satisfies the set inclusion

$$\displaystyle{ X_{N} \subseteq \mathbb{X}_{f}, }$$
(7)

where \(\mathbb{X}_{f} \subseteq \mathbb{X}\) is a suitable terminal constraint set.

The terminal constraint set: In order to account for the utilization of the control policy \(\Pi _{N-1}\) and generalized state and control predictions over the finite horizon N and to ensure that these can be prolonged indirectly over the infinite horizon, a terminal constraint set is employed. This set is obtained by considering the uncertain dynamics

$$\displaystyle{ x^{+} = f\left (x,\kappa _{ f}\left (x\right ),w\right ) }$$
(8)

controlled by a local control function \(\kappa_{f}(\cdot)\). The design of the control law \(\kappa_{f}(\cdot)\) is usually performed offline in an optimal manner by considering the unconstrained version of the system (1), while the terminal constraint set \(\mathbb{X}_{f}\) accounts locally for the state and control constraints. The terminal constraint set \(\mathbb{X}_{f}\) is assumed to be compact and robust positively invariant for the dynamics (8) and constraint sets (2). Thus, the set \(\mathbb{X}_{f}\) and the local control function \(\kappa_{f}(\cdot)\) satisfy

$$\displaystyle\begin{array}{rcl} & & F\left (\mathbb{X}_{f},\kappa _{f}\right ) \subseteq \mathbb{X}_{f} \subseteq \mathbb{X}\mbox{ and }\mathbb{U}_{f} \\ & & \quad := G\left (\mathbb{X}_{f},\kappa _{f}\right ) \subseteq \mathbb{U}, {}\end{array}$$
(9)

or, equivalently, \(\mathbb{X}_{f} \subseteq \mathbb{X}\) and, for all \(x \in \mathbb{X}_{f}\), it holds that \(\kappa _{f}(x) \in \mathbb{U}\) and, for all \(w \in \mathbb{W}\), \(f(x,\kappa _{f}(x),w) \in \mathbb{X}_{f}\). The most appropriate choice for \(\mathbb{X}_{f}\) is the maximal robust positively invariant set for the dynamics (8) and constraint sets (2).
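In one dimension, the robust positive invariance condition (9) reduces to interval arithmetic, which gives a quick way to check a candidate terminal set; the closed-loop coefficient, uncertainty bound, and candidate interval below are illustrative assumptions.

```python
# Interval-arithmetic check of the invariance condition (9) for a scalar
# example x^+ = 0.5 x + w, w in [-0.1, 0.1]; all numbers are illustrative.
a_cl = 0.5                   # closed-loop coefficient of (8)
w_lo, w_hi = -0.1, 0.1       # uncertainty interval W
xf_lo, xf_hi = -0.2, 0.2     # candidate terminal set X_f

# endpoints of the image F(X_f, kappa_f) = a_cl * X_f + W
img_lo = a_cl * xf_lo + w_lo
img_hi = a_cl * xf_hi + w_hi
is_rpi = (img_lo >= xf_lo) and (img_hi <= xf_hi)   # F(X_f, kappa_f) inside X_f
```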

The generalized origin: Due to the presence of the uncertainty, the stabilization of the origin might not be attainable and, thus, it might be necessary to consider the origin in a generalized sense. The most natural candidate for the generalized origin is a minimal robust positively invariant set for the dynamics (8) and constraint sets (2). This set is entirely determined by the associated state set dynamics

$$\displaystyle{ X^{+} = F\left (X,\kappa _{ f}\right ), }$$
(10)

which are completely induced by the local dynamics (8) and the uncertainty constraint set \(\mathbb{W}\). The generalized origin, namely, the minimal robust positively invariant set, is compact and well defined in the case when the local control function \(\kappa_{f}(\cdot)\) ensures that the corresponding map \(F(\cdot,\kappa_{f})\) is a contraction on the space of compact subsets of \(\mathbb{X}_{f}\) (Artstein and Raković 2008), which we assume to be the case. The generalized origin \(\mathbb{X}_{\mathcal{O}}\) is the unique solution to the fixed-point set equation

$$\displaystyle{ X = F\left (X,\kappa _{f}\right ), }$$
(11)

and is an exponentially stable attractor for the state set dynamics (10), with the basin of attraction being the space of compact subsets of \(\mathbb{X}_{f}\). Thus, the conventional \((0,0)\) fixed-point pair ought to be replaced by the fixed-point pair of sets \((\mathbb{X}_{\mathcal{O}}, \mathbb{U}_{\mathcal{O}})\) required to satisfy

$$\displaystyle\begin{array}{rcl} & & \mathbb{X}_{\mathcal{O}}\ = F\left (\mathbb{X}_{\mathcal{O}},\kappa _{f}\right ) \subseteq \mbox{ interior}\left (\mathbb{X}_{f}\right )\mbox{ and} \\ & & \mathbb{U}_{\mathcal{O}}\ := G\left (\mathbb{X}_{\mathcal{O}},\kappa _{f}\right ) \subseteq \mathbb{U}_{f}. {}\end{array}$$
(12)
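For the same kind of scalar example, the fixed-point set equation (11) can be solved by iterating the state set dynamics (10) from the singleton \(\{0\}\); in one dimension the sets are intervals, so tracking the endpoints suffices. All coefficients are illustrative.

```python
# Fixed-point iteration for (10)-(11) on a scalar example x^+ = 0.5 x + w,
# w in [-0.1, 0.1]; the minimal RPI set is the interval [-0.2, 0.2].
# All coefficients are illustrative.
a_cl, w_max = 0.5, 0.1

lo, hi = 0.0, 0.0                 # X_0 = {0}
for _ in range(100):
    # one step of the set dynamics (10): X^+ = a_cl * X (Minkowski sum) W
    lo, hi = a_cl * lo - w_max, a_cl * hi + w_max

# the iterates converge to the unique fixed point X = F(X, kappa_f) of (11)
```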

The generalized cost functions: The performance requirements are, as usual, expressed via a cost function, which is obtained by considering a stage cost function \(\ell(\cdot,\cdot ) : \mathbb{X} \times \mathbb{U} \rightarrow \mathbb{R}_{+}\) and a terminal cost function \(V _{f}(\cdot ) : \mathbb{X}_{f} \rightarrow \mathbb{R}_{+}\). The stage cost function \(\ell(\cdot,\cdot)\) is continuous and, due to the uncertainty, adequately lower bounded w.r.t. the generalized origin \(\mathbb{X}_{\mathcal{O}}\). The latter condition requires that for all \(x \in \mathbb{X}\) and all \(u \in \mathbb{U}\), the function \(\ell(\cdot,\cdot )\) satisfies

$$\displaystyle{ \alpha _{1}\left (\mbox{ dist}\left (\mathbb{X}_{\mathcal{O}},x\right )\right ) \leq \ell\left (x,u\right ), }$$
(13)

where \(\alpha _{1}(\cdot )\) is a \(\mathcal{K}\)-class (Kamke’s) function and \(\mathrm{dist}(\mathbb{X}_{\mathcal{O}},\cdot )\) is the distance function from the set \(\mathbb{X}_{\mathcal{O}}\). The consideration of the generalized origin requires the additional condition that, for all \(x \in \mathbb{X}_{\mathcal{O}}\), the use of the local control function \(\kappa_{f}(\cdot)\) is “free of charge” w.r.t. \(\ell(\cdot,\cdot)\), i.e., that for all \(x \in \mathbb{X}_{\mathcal{O}}\), we have

$$\displaystyle{ \ell\left (x,\kappa _{f}\left (x\right )\right ) = 0. }$$
(14)

As in the case of the terminal constraint set \(\mathbb{X}_{f}\), the terminal cost function \(V_{f}(\cdot)\) is employed to account for the utilization of the finite prediction horizon N, and it should provide locally a theoretically suitable upper bound of the highly desired infinite horizon cost. The terminal cost function \(V _{f}(\cdot )\) is assumed to be continuous and adequately upper bounded w.r.t. the generalized origin \(\mathbb{X}_{\mathcal{O}}\). The latter bound reduces to the requirement that for all \(x \in \mathbb{X}_{f}\), we have

$$\displaystyle{ V _{f}\left (x\right ) \leq \alpha _{2}\left (\mbox{ dist}\left (\mathbb{X}_{\mathcal{O}},x\right )\right ), }$$
(15)

where, as above, \(\alpha_{2}(\cdot)\) is a \(\mathcal{K}\)-class function. In addition, the terminal cost function \(V _{f}(\cdot )\) satisfies locally a usual condition for robust stabilization, which is expressed by the requirement that for all \(x \in \mathbb{X}_{f}\) and all \(w \in \mathbb{W}\), it holds that

$$\displaystyle{ V _{f}\left (f\left (x,\kappa _{f}\left (x\right ),w\right )\right ) - V _{f}\left (x\right ) \leq -\ell\left (x,\kappa _{f}\left (x\right )\right ). }$$
(16)

The cost function \(V _{N}(\cdot,\cdot,\cdot )\) is defined, for all \(x \in \mathbb{X}\), all \(\Pi _{N-1}\), and all \(\mathbf{w}_{N-1} := \{w_{0},w_{1},\ldots,w_{N-1}\}\), by

$$\displaystyle{ V _{N}\left (x,\Pi _{N-1},\mathbf{w}_{N-1}\right ) :=\sum \limits _{ k=0}^{N-1}\ell\left (x_{ k},u_{k}\right ) + V _{f}\left (x_{N}\right ), }$$
(17)

where, for notational simplicity, \(u_{k} :=\pi _{k}(x_{k})\) and \(x_{k} := x_{k}(x,\Pi _{N-1},\mathbf{w}_{N-1})\) denote the solution of (1) when the initial state is x, the control policy is \(\Pi _{N-1}\), and the uncertainty realization is \(\mathbf{w}_{N-1}\).
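The cost (17) can be evaluated directly by simulating (1) under a fixed policy and a fixed uncertainty realization; the scalar dynamics, the stage and terminal costs, and the linear policy below are arbitrary choices for this sketch.

```python
# Direct evaluation of the cost (17) for a scalar system x^+ = x + u + w
# under a fixed linear policy pi_k(x) = -0.5 x; all choices are illustrative.
N = 3
ell = lambda x, u: x**2 + u**2        # stage cost ell(x, u)
V_f = lambda x: 2.0 * x**2            # terminal cost V_f(x)
pi = lambda x: -0.5 * x               # every control law of Pi_{N-1}

def V_N(x, w_seq):
    """Cost (17) along the trajectory generated by x, Pi_{N-1}, w_{N-1}."""
    total = 0.0
    for w in w_seq:                   # w_seq = {w_0, ..., w_{N-1}}
        u = pi(x)
        total += ell(x, u)
        x = x + u + w                 # x_{k+1} = f(x_k, u_k, w_k)
    return total + V_f(x)

val = V_N(1.0, [0.0, 0.0, 0.0])       # nominal (zero-disturbance) realization
```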

The exact ROC: In view of the uncertainty, the corresponding exact ROC problem \(\mathbb{P}_{N}(x)\), for any \(x \in \mathbb{X}\), aims to optimize the “worst-case” performance so that it takes the form of an infinite-dimensional minimaximization:

$$\displaystyle\begin{array}{rcl} & & J_{N}\left (x,\Pi _{N-1}\right )\ :=\mathop{ \max }\limits _{\mathbf{w}_{N-1}\in \mathbb{W}^{N}}V _{N}\left (x,\Pi _{N-1},\mathbf{w}_{N-1}\right ), \\ & & \qquad \quad V _{N}^{0}\left (x\right )\ :=\mathop{ \min }\limits _{ \Pi _{N-1}\in \Pi _{N-1}\left (x\right )}J_{N}\left (x,\Pi _{N-1}\right ), \\ & & \qquad \Pi _{N-1}^{0}\left (x\right )\ \in \arg \mathop{\min }\limits _{ \Pi _{N-1}\in \Pi _{N-1}\left (x\right )}J_{N}\left (x,\Pi _{N-1}\right ), {}\end{array}$$
(18)

where \(\boldsymbol{\Pi }_{N-1}(x)\) denotes the set of constraint-admissible control policies defined, for all \(x \in \mathbb{X}\), by

$$\displaystyle{ \Pi _{N-1}\left (x\right ) := \left \{\Pi _{N-1} : \mathrm{conditions}\ (5)\text{\textendash }(7)\ \mathrm{hold}\right \}. }$$
(19)

The value function \(V _{N}^{0}\left (\cdot \right )\) might not admit a unique optimal control policy, so that \(\Pi _{N-1}^{0}\left (\cdot \right )\) represents a selection from the set of optimal control policies (this selection is usually induced by a numerical solver employed for the online calculations). The effective domain \(\mathcal{X}_{N}\) of the value function \(V _{N}^{0}\left (\cdot \right )\) and associated optimal control policy \(\Pi _{N-1}^{0}\left (\cdot \right )\) is given by

$$\displaystyle{ \mathcal{X}_{N} := \left \{x \in \mathbb{R}^{n} : \Pi _{N-1}\left (x\right )\neq \emptyset \right \} }$$
(20)

and is known in the literature as the N-step min–max controllable set to a target set \(\mathbb{X}_{f}\). Within the considered setting, the set \(\mathcal{X}_{N}\) is a compact subset of \(\mathbb{X}\) such that \(\mathbb{X}_{f} \subseteq \mathcal{X}_{N}\).
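For a one-step horizon, the minimaximization (18) can be carried out by brute force over grids, which makes the min–max structure tangible; the scalar dynamics, cost weights, and grids below are arbitrary, and the inner maximization is restricted to the vertices of \(\mathbb{W}\) (sufficient here because the cost is convex in w).

```python
import numpy as np

# Brute-force illustration of the minimaximization (18) for a one-step
# horizon (N = 1) on a scalar system x^+ = x + u + w; the cost weights,
# grids, and uncertainty bound are arbitrary choices for this sketch.
x = 0.5
us = np.linspace(-1.0, 1.0, 201)    # candidate control actions
ws = np.array([-0.1, 0.1])          # vertices of W (cost is convex in w)

def cost(x, u, w):
    stage = x**2 + u**2              # ell(x, u)
    x_plus = x + u + w
    return stage + 10.0 * x_plus**2  # plus V_f(x^+)

# worst-case cost J_N(x, u) for each candidate u, then its minimizer
J = np.array([max(cost(x, u, w) for w in ws) for u in us])
u_opt = us[int(np.argmin(J))]       # first control action of the policy
```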

The exact RMPC: The exact RMPC synthesis requires online solution of the minimaximization (18) in order to implement numerically the control law \(\pi _{0}^{0}\left (\cdot \right )\). The control law \(\pi _{0}^{0}\left (\cdot \right )\) is well defined for all \(x \in \mathcal{X}_{N}\), and it induces the controlled uncertain dynamics specified, for all \(x \in \mathcal{X}_{N}\), by

$$\displaystyle{ x^{+} \in \mathcal{F}\left (x\right ),\mathcal{F}\left (x\right ) := \left \{f\left (x,\pi _{ 0}^{0}\left (x\right ),w\right ) : w \in \mathbb{W}\right \}. }$$
(21)

Within the considered setting, the exact RMPC law \(\pi _{0}^{0}\left (\cdot \right )\) renders the N-step min–max controllable set \(\mathcal{X}_{N}\) robust positively invariant. Namely, for all \(x \in \mathcal{X}_{N}\), it holds that

$$\displaystyle{ \mathcal{F}\left (x\right ) \subseteq \mathcal{X}_{N} \subseteq \mathbb{X}\mbox{ and }\pi _{0}^{0}\left (x\right ) \in \mathbb{U}. }$$
(22)

Furthermore, the associated value function \(V _{N}^{0}\left (\cdot \right ) : \mathcal{X}_{N} \rightarrow \mathbb{R}_{+}\) is, by construction, a Lyapunov certificate verifying the robust asymptotic stability of the generalized origin \(\mathbb{X}_{\mathcal{O}}\) for the controlled uncertain dynamics (21) with the basin of attraction being equal to the N-step min–max controllable set \(\mathcal{X}_{N}\). More precisely, for all \(x \in \mathcal{X}_{N}\), it holds that

$$\displaystyle{ \alpha _{1}\left (\mbox{ dist}\left (\mathbb{X}_{\mathcal{O}},x\right )\right ) \leq V _{N}^{0}\left (x\right ) \leq \alpha _{ 3}\left (\mbox{ dist}\left (\mathbb{X}_{\mathcal{O}},x\right )\right ), }$$
(23)

where \(\alpha _{3}(\cdot )\) is a suitable \(\mathcal{K}\)-class function, while for all \(x \in \mathcal{X}_{N}\) and all \(x^{+} \in \mathcal{F}(x)\), it holds that

$$\displaystyle{ V _{N}^{0}\left (x^{+}\right ) - V _{ N}^{0}\left (x\right ) \leq -\alpha _{ 1}\left (\mbox{ dist}\left (\mathbb{X}_{\mathcal{O}},x\right )\right ). }$$
(24)

Clearly, under fairly natural conditions, the exact RMPC synthesis induces rather strong structural properties, but the associated computational complexity is overwhelming. Nevertheless, in the above overview, the effects of the uncertainty have been “dissected” and the “basic building blocks” employed for the exact RMPC synthesis have been clearly identified. In turn, this step-by-step overview indirectly suggests meaningful simplifying approximations that enhance computational practicability.

Computational Simplifications

The computational intractability of the exact RMPC synthesis can be tackled by considering suitable parameterizations of the control policy \(\Pi _{N-1}\) and the associated state and control tubes \(\mathbf{X}_{N}\) and \(\mathbf{U}_{N-1}\), and by adopting computationally simpler performance criteria.

The core simplification is the use of finite-dimensional parameterization of control policy. The control policy should be suitably parameterized so as to allow for the utilization of both the least conservative generalized state and control predictions and a range of simpler, but sensible, cost functions.

The explicit form of the exact state and control tubes is usually highly complex, and it is computationally beneficial to employ, when feasible, the implicit representation of the possible sets of predicted state and control actions. An alternative is to utilize outer-bounding approximations of the exact state and control tubes; these are obtained by making use of simpler sets that usually admit finite-dimensional parameterizations. In the latter case, the exact set dynamics of the state and control tubes given by (5) are usually relaxed to set inclusions

$$\displaystyle\begin{array}{rcl} & & \left \{x_{0}\right \} \subseteq X_{0},\mbox{ and, }F\left (X_{k},\pi _{k}\right ) \subseteq X_{k+1} {}\\ & & \quad \mbox{ and }G\left (X_{k},\pi _{k}\right ) \subseteq U_{k}. {}\\ \end{array}$$

The generalized origin, i.e., the minimal robust positively invariant set \(\mathbb{X}_{\mathcal{O}}\), is an integral component of the analysis. Its explicit computation is rather demanding and, hence, its use for the online calculations might not be convenient. A computationally feasible alternative is to deploy the terminal constraint set \(\mathbb{X}_{f}\) as a “relaxed form” of the generalized origin; this is particularly beneficial when the local control function \(\kappa_{f}(\cdot)\) is optimal w.r.t. the infinite horizon cost associated with the unconstrained version of the system (1).

The performance requirements should be carefully prioritized and modified when necessary, in such a way so as to be expressible by the cost functions that do not require intractable minimax optimization but still ensure that the associated value function verifies the robust stability and attractivity of the generalized origin \(\mathbb{X}_{\mathcal{O}}\) or the terminal constraint set \(\mathbb{X}_{f}\).

The outlined guidelines have played a pivotal role in devising a number of theoretically sound and computationally efficient parameterized RMPC syntheses within the setting of linear control systems subject to additive disturbances and polytopic constraints. In this linear–polytopic setting, the state transition map \(f(\cdot,\cdot,\cdot)\) of (1) is linear:

$$\displaystyle{ f\left (x,u,w\right ) = Ax + Bu + w, }$$
(25)

where the matrix pair \((A,B) \in \mathbb{R}^{n\times n} \times \mathbb{R}^{n\times m}\) is assumed to be known and strictly stabilizable. The local control function \(\kappa_{f}(\cdot)\) and the associated local uncertain dynamics are linear:

$$\displaystyle{ u = Kx\mbox{ and }x^{+} = \left (A + BK\right )x + w. }$$
(26)

The matrix \(K \in \mathbb{R}^{m\times n}\) is designed offline and is such that the eigenvalues of the matrix \(A + BK\) are strictly inside the unit circle. The constraint sets \(\mathbb{X}\) and \(\mathbb{U}\) are polytopes (a polytope is a convex and compact set specified by finitely many linear/affine inequalities or, equivalently, as the convex hull of finitely many points) in \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m}\) that contain the origin in their interiors. The uncertainty constraint set \(\mathbb{W}\) is a polytope in \(\mathbb{R}^{n}\) that contains the origin.

The terminal constraint set \(\mathbb{X}_{f}\) is the maximal robust positively invariant set for \(x^{+} = (A + \mathit{BK})x + w\) and the constraint sets (\(\mathbb{X}_{K}, \mathbb{W}\)), where \(\mathbb{X}_{K} :=\{ x \in \mathbb{X} : \mathit{Kx} \in \mathbb{U}\}\). The set \(\mathbb{X}_{f}\) is assumed to be a polytope in \(\mathbb{R}^{n}\) that contains the generalized origin \(\mathbb{X}_{\mathcal{O}}\) (which is the minimal robust positively invariant set for \(x^{+} = (A + \mathit{BK})x + w\) and the constraint sets (\(\mathbb{X}_{K}, \mathbb{W}\))) in its interior.
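In one dimension, the maximal robust positively invariant set inside \(\mathbb{X}_{K}\) can be computed by the standard recursion that intersects the current set with its robust one-step pre-image; the closed-loop coefficient, uncertainty bound, and constraint interval below are illustrative assumptions.

```python
# Computation of the maximal robust positively invariant set for a scalar
# example x^+ = 0.5 x + w, w in [-0.1, 0.1], with X_K = [-1, 1]; all
# numerical values are illustrative.
a_cl, w_max = 0.5, 0.1
lo, hi = -1.0, 1.0               # Omega_0 = X_K

for _ in range(50):
    # robust one-step pre-image: a_cl * x + w in [lo, hi] for all |w| <= w_max
    pre_lo, pre_hi = (lo + w_max) / a_cl, (hi - w_max) / a_cl
    new_lo, new_hi = max(lo, pre_lo), min(hi, pre_hi)
    if (new_lo, new_hi) == (lo, hi):
        break                    # fixed point reached: the maximal RPI set
    lo, hi = new_lo, new_hi
```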

It has recently been demonstrated that the major simplified RMPC syntheses in the linear–polytopic setting employ control policies within the class of separable state feedback (SSF) control policies (Raković 2012). More precisely, the predictions of the overall states \(x_{k}\) and associated control actions \(u_{k}\) are parameterized in terms of the predictions of the partial states \(x_{(j,k)}\), \(j = 0,1,\ldots,k\), and partial control actions \(u_{(j,k)}\), \(j = 0,1,\ldots,k\), via

$$\displaystyle{ x_{k} =\sum \limits _{ j=0}^{k}x_{ (j,k)}\mbox{ and }u_{k} =\sum \limits _{ j=0}^{k}u_{ (j,k)}, }$$
(27)

where, for notational simplicity, \(u_{k} :=\pi _{k}(x_{k})\) and \(u_{(j,k)} :=\pi _{(j,k)}(x_{(j,k)})\). To ensure the dynamical consistency with (25), the predicted partial states \(x_{(j,k)}\) evolve according to

$$\displaystyle{ x_{(j,k+1)} = Ax_{(j,k)} + Bu_{(j,k)}, }$$
(28)

(for \(j = 0,1,\ldots,N - 1\) and \(k = j,j + 1,\ldots,N - 1\)), while the “partial” initial conditions \(x_{(k,k)}\) satisfy

$$\displaystyle\begin{array}{rcl} x_{(0,0)}& = x\mbox{ and}&{}\end{array}$$
(29a)
$$\displaystyle\begin{array}{rcl} x_{(k,k)}& = w_{k-1}\mbox{ for }k = 1,2,\ldots,N.&{}\end{array}$$
(29b)

As elaborated on in Raković (2012) and Raković et al. (2012), the utilization of the SSF control policy allows for:

  • The deployment of the highly desirable implicit representation of the exact state and control tubes induced by the SSF control policy. This implicit representation is parameterized via \(O(N^{2})\) decision variables.

  • The numerically convenient formulation of the robust constraint satisfaction via \(O(N^{2})\) linear/affine inequalities and equalities.

  • The computationally efficient minimization of an upper bound of the “worst-case” cost for which the stage and terminal cost functions are specified in terms of the weighted distances from the terminal constraint set \(\mathbb{X}_{f}\) and the associated control set \(\mathbb{U}_{f} = K\mathbb{X}_{f}\).

As shown in Raković (2012) and Raković et al. (2012), the RMPC control laws based on the use of the SSF control policy can be implemented online by solving a standard convex optimization problem whose complexity (in terms of the numbers of decision variables and affine inequalities and equalities) is \(O(N^{2})\). The corresponding RMPC synthesis ensures directly that the terminal constraint set \(\mathbb{X}_{f}\) is robustly exponentially stable, and it also induces indirectly the robust exponential stability of the generalized origin \(\mathbb{X}_{\mathcal{O}}\).

The previously dominant control policy parameterizations include time-invariant affine state feedback (TIASF), time-varying affine state feedback (TVASF), and affine in the past disturbances feedback (APDF) control policies. All of these parameterizations are subsumed by the SSF control policy, as all of them induce additional structural restrictions on the parameterizations of the predicted states and control actions specified in (27) and on the associated dynamics given by (28) and (29). In particular, the TIASF control policy (Chisci et al. 2001; Gossner et al. 1997) imposes the structural restriction that, for each relevant k,

$$\displaystyle{ u_{(j,k)} = Kx_{(j,k)}\mbox{ for }j = 1,2,\ldots,k, }$$
(30)

where K is the local control matrix of (26). The TVASF control policy (Löfberg 2003) imposes the less restrictive requirement that, for each relevant k,

$$\displaystyle{ u_{(j,k)} = K_{(j,k)}x_{(j,k)}\mbox{ for }j = 1,2,\ldots,k, }$$
(31)

where the matrices \(K_{(j,k)} \in \mathbb{R}^{m\times n}\) are part of the decision variable. The APDF control policy (Goulart et al. 2006; Löfberg 2003) is an algebraic reparameterization of the TVASF control policy, which requires that, for each relevant k,

$$\displaystyle{ u_{(j,k)} = M_{(j,k)}x_{(j,j)}\mbox{ for }j = 1,2,\ldots,k, }$$
(32)

where the matrices \(M_{(j,k)} \in \mathbb{R}^{m\times n}\) are part of the decision variable. A comprehensive trade-off analysis between the quality of guaranteed structural properties and the associated computational complexity and a theoretically meaningful ranking of the existing RMPC syntheses in the linear–polytopic setting is reported in the recent plenary paper (Raković 2012). Therein, it is demonstrated that the dominant approach is the RMPC synthesis utilizing the SSF control policy (Raković 2012) (also known as the parameterized tube MPC (Raković et al. 2012)).

Summary and Future Directions

The exact RMPC synthesis has reached a remarkable degree of theoretical maturity in the general setting. The corresponding theoretical advances are, however, accompanied by prohibitive computational complexity. On the bright side, a number of rather sophisticated RMPC synthesis methods, which are both computationally efficient and theoretically sound, have been developed for the frequently encountered linear–polytopic case.

Further advances in the RMPC field might be driven by the utilization of more structured types and models of the uncertainty. The challenge of devising a computationally efficient and theoretically sound RMPC synthesis might need to be tackled in several phases; the initial steps might focus on adequate RMPC synthesis for particular classes of nonlinear control systems. Finally, it would seem reasonable to expect that the lessons learned in the RMPC field might play an important role in the research developments in the fields of stochastic and adaptive MPC.

Recommended Reading

The recent monograph (Rawlings and Mayne 2009) provides an in-depth systematic exposure to the RMPC field and is also a rich source of relevant references. An invaluable overview of the theory and computations of the maximal and minimal robust positively invariant sets can be found in Artstein and Raković (2008), Kolmanovsky and Gilbert (1998), Raković et al. (2005), and Blanchini and Miani (2008). The important paper (Scokaert and Mayne 1998) points out the theoretical benefits of the use of the control policy, but it also indicates indirectly the computational impracticability of the associated feedback min–max RMPC. The early tube MPC synthesis (Mayne et al. 2005) is both computationally efficient and theoretically sound, and it represents an important step forward in the linear–polytopic setting. The so-called homothetic tube MPC synthesis (Raković et al. 2013) is a recent improvement of the first generation of the tube MPC synthesis (Mayne et al. 2005), and it has a high potential to effectively handle parametric uncertainty in the matrix pair (A, B). The current state of the art in the linear–polytopic setting is reached by the RMPC synthesis using the SSF control policy (Raković 2012; Raković et al. 2012). The output feedback RMPC synthesis in the linear–polytopic setting can be handled with direct extensions of the tube MPC syntheses (Mayne et al. 2009).