1 Introduction

Model predictive control (MPC) has drawn the attention of many researchers and engineers [1, 20]. The MPC problem was initially investigated for linear discrete-time systems [3], and closed-form results have been obtained for simplified discrete-time systems [3]. These techniques have been applied successfully in industry [32, 36]. Commercial variants of MPC include dynamic matrix control (DMC), predictive functional control (PFC), and generalized predictive control (GPC), among other MPC methods [5, 23, 27].

In MPC, the plant behavior is predicted at each sample time using the nominal model. A control sequence is then generated by minimizing a given cost function; the first input of the sequence is applied to the plant and the rest are discarded. This process is repeated at each sample time [24, 25]. In this scheme, a suitable control sequence is computed numerically at each sample time through matrix calculations [3]. The MPC concept has also been applied in various settings, such as chaotic system synchronization [17] and constrained guidance systems [29].

Over the last decade, the linear matrix inequality (LMI) has become an efficient and powerful tool, owing to substantial progress in matrix computation. Hence, LMI-based analysis tools and synthesis techniques offer some advantages over alternative approaches [2, 28].

In many cases, stability analysis and controller design can be cast as an LMI feasibility or minimization problem [10, 28]. In state feedback design, finding a static gain can be expressed as an LMI feasibility problem [28]; such a gain may be recomputed deliberately at each sample time [30]. In this type of MPC, controller parameters such as the gains are updated at each sample time instead of computing entire control sequences.

An LMI-based MPC scheme was first addressed for discrete-time systems [16, 37]. In this method, both the prediction and control horizons tend to infinity in order to obtain an LMI representation. The MPC design is thus translated into the selection of suitable static gains that are updated at each sample time. Similar control problems have since been formulated for linear systems subject to actuator nonlinearity [33, 38], nonlinear systems [21, 22], constrained systems [7, 18], uncertain systems [8], networked control systems [31, 32, 42], parameter-varying systems [19], event-triggered systems [41] and time-delayed systems [11, 40]. Although MPC was originally proposed for systems with a discrete-time representation, the formulation has also been extended to continuous-time systems [4, 9, 26].

A dynamic control system has additional degrees of freedom compared with a static control policy. Consequently, a dynamic controller is expected to compensate the transient response more effectively than a static control law. Static control design reduces to selecting suitable gains in a state or output feedback structure, whereas the problem formulation becomes considerably more involved in the dynamic case. Moreover, an extra computational cost must be paid; hence, a reasonable trade-off between closed-loop performance and computation time must be considered in real-time systems. These issues are substantially mitigated by fast numerical optimization techniques, so the transient performance can be progressively improved via predictive dynamic control. Existing MPC schemes are usually realized with a static control law. In this study, an MPC whose control law contains internal state variables is referred to as model predictive dynamic control (MPDC).

Over the last decade, various MPDC methods have been developed for discrete-time control systems. An MPC scheme has been designed for discrete-time systems through dynamic output feedback [6]. A discrete-time integral sliding-mode predictive control addresses the dynamic lateral motion of autonomous driving vehicles [15]. An event-triggered dynamic output feedback MPC has been proposed for uncertain fuzzy systems [35]. Furthermore, a predictive event-triggered dynamic surface control has been investigated for strict-feedback systems in the presence of network-induced delays [39]. These control methods, however, may not be immediately applicable to a typical uncertain system.

Although the design of static control systems may follow standard procedures, synthesizing dynamic control systems involves some major complexities. Hence, MPDC is of particular interest for uncertain dynamical systems. Recently, an MPDC has been derived for continuous-time uncertain systems [12]. In that method, a minimization problem subject to some LMIs is solved at predefined sample times, and the parameters of the dynamic controller are updated during real-time operation. The continuous-time MPDC results cannot be applied directly to discrete-time systems because of the induced discretization error, which may destabilize the closed-loop system and/or degrade its transient response. This point motivates the author to reformulate the MPDC problem for synthesizing an effective discrete-time control system in the presence of unstructured uncertainties. A matrix transformation is suggested to achieve this control goal, and the presented MPDC can be expressed in terms of LMIs. Thus, the main contribution is a robust MPDC formulation and synthesis for uncertain discrete-time systems: an LMI-based technique is proposed for MPDC design in uncertain control systems, and the dynamic controller parameters can be updated automatically in real-time applications.

To tune the controller parameters at each sample time, an LMI solver such as YALMIP, SeDuMi, SDPT3, MOSEK or LMI Lab may be used to handle the optimization problem numerically. The stability and/or performance of the closed-loop system can thereby be considerably improved via the MPDC compared with existing MPC.

The rest of the paper is organized as follows: In Sect. 2, some mathematical preliminaries and definitions are briefly addressed. The discrete-time MPDC problem is formulated in Sect. 3, and then, the main contribution is presented in Sect. 4. In Sect. 5, the results are used in a numerical simulation. Finally, some concluding remarks are presented in the last section.

2 Definitions and Mathematical Preliminaries

Hereafter, \({I}_{n}\) denotes the \(n\times n\) identity matrix and the operator \(\Vert .\Vert \) denotes the two-norm of a given matrix or vector. The set \({\mathbb{R}}\) denotes the real numbers, \({\mathbb{R}}^{n}\) is the set of all real vectors with \(n\) elements, and \({\mathbb{R}}^{m\times n}\) is the set of all \(m\times n\) real matrices. A symmetric matrix \(\Theta \in {\mathbb{R}}^{n\times n}\) is positive definite if the condition \({\vartheta }^{T}\Theta \vartheta >0\) holds for every non-zero \(\vartheta \in {\mathbb{R}}^{n}\). Additionally, the matrix \(\Theta \) is negative definite if \(-\Theta \) is positive definite. The following mathematical lemmas are borrowed from the literature to make this study self-contained:

Lemma 1

For any symmetric matrices \(Q\in {\mathbb{R}}^{s\times s}\) and \(R\in {\mathbb{R}}^{t\times t}\) and any rectangular matrix \(S\in {\mathbb{R}}^{s\times t}\), the following conditions are equivalent:

  1. \(R>0\) and \(Q-S{R}^{-1}{S}^{T}>0\)

  2. \(Q>0\) and \(R-{S}^{T}{Q}^{-1}S>0\)

  3. \(\left[\begin{array}{cc}Q& S\\ {S}^{T}& R\end{array}\right]>0\)

Lemma 1 can be obtained explicitly from the Schur matrix decomposition; hence, it is referred to as the Schur complement lemma [28].
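A quick numerical check of Lemma 1 on a randomly generated (hypothetical) positive definite block matrix confirms that the three conditions hold together:

```python
import numpy as np

rng = np.random.default_rng(0)
s, t = 3, 2
# Build a random symmetric, strictly positive definite block matrix [Q S; S^T R]
W = rng.standard_normal((s + t, s + t))
T = W @ W.T + (s + t) * np.eye(s + t)
Q, S, R = T[:s, :s], T[:s, s:], T[s:, s:]

def is_pd(M):
    """Positive definiteness via the eigenvalues of a symmetric matrix."""
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

print(is_pd(T))                                            # condition 3
print(is_pd(R) and is_pd(Q - S @ np.linalg.inv(R) @ S.T))  # condition 1
print(is_pd(Q) and is_pd(R - S.T @ np.linalg.inv(Q) @ S))  # condition 2
```

All three printed values agree, as the lemma asserts.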

Lemma 2

(Barbalat’s lemma [14]) Let \(\varphi \left(.\right):{\mathbb{R}}^{+}\to {\mathbb{R}}^{+}\) be a uniformly continuous Lebesgue measurable function on \([0,+\infty )\). If \(\underset{t\to +\infty }{\mathrm{lim}}{\int }_{0}^{t}\varphi \left(\tau \right)\mathrm{d}\tau \) exists, then \(\underset{t\to +\infty }{\mathrm{lim}}\varphi \left(t\right)=0\). Similarly, in discrete time, if \(\underset{k\to +\infty }{\mathrm{lim}}\sum_{i=0}^{k}\varphi \left(i\right)\) exists, then \(\underset{k\to +\infty }{\mathrm{lim}}\varphi \left(k\right)=0\).

These lemmas will be very useful in the proof of the main theorems.

3 Problem Setup

Consider a discrete-time plant described by the following difference equation:

$$\left\{\begin{array}{l}{x}_{p}\left(k+1\right)={A}_{p}{x}_{p}\left(k\right)+{B}_{p}u\left(k\right)+{f}_{p}\left({x}_{p}\left(k\right) \right)\\ y\left(k\right)={C}_{p}{x}_{p}\left(k\right)\end{array}\right.$$
(1)

where \({x}_{p}\left(k\right)\in {\mathbb{R}}^{{n}_{p}}\) is the state vector, \(y\left(k\right)\in {\mathbb{R}}^{q}\) is the output vector and \(u\left(k\right)\in {\mathbb{R}}^{p}\) is the control input of the plant (1). The plant (1) can be viewed as an LTI system subjected to a nonlinear term \({f}_{p}\left(.\right)\). The control effort \(u\left(k\right)\) is not known in advance; it is generated by the control law and computed numerically via the proposed optimization problem. Furthermore, the uncertain system (1) may be stable or unstable in open loop. The following assumptions are considered to formulate the MPDC problem:

Assumption 1

Assume that the time-invariant plant (1) is stabilizable. Thus, there exists a constrained control sequence \(u\left(k\right)\), with \(\Vert u\left(k\right)-\stackrel{-}{u}\Vert \le {u}_{\mathrm{m}\mathrm{a}\mathrm{x}}\) and \(\stackrel{-}{u}=\underset{k\to +\infty }{\mathrm{lim}}u\left(k\right)\), such that the plant states \({x}_{p}\left(k\right)\) converge to the equilibrium point \(\stackrel{-}{x}\) (i.e. \(\underset{k\to +\infty }{\mathrm{lim}}{x}_{p}\left(k\right)=\stackrel{-}{x}\)). Additionally, it is assumed that the states of the uncertain plant (1) are measurable for control purposes.

Assumption 2

The nonlinear function \({f}_{p}(.)\) may be unknown, but it vanishes at the origin (i.e. \({f}_{p}\left(0\right)=0\)). The following inequality also holds:

$$\Vert {f}_{p}\left(\alpha \right)-{f}_{p}\left(\beta \right)\Vert \le {L}_{p}\Vert \alpha -\beta \Vert , \forall \alpha ,\beta \in {\mathbb{R}}^{{n}_{p}}$$
(2)

The inequality (2) is known as the Lipschitz condition, and the constant \({L}_{p}\) is the maximum slope of the nonlinear term \({f}_{p}(.)\). The vector \({f}_{p}\left(.\right)\) may be fully unknown to the designer; hence, it is treated as an uncertain term, and its exact value is not required by the proposed method. Although the uncertain system (1) is partially unknown, the control parameters are determined using only the known data of the plant (1), namely \({A}_{p}\), \({B}_{p}\), \({C}_{p}\) and \({L}_{p}\).
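As an illustration of Assumption 2, the hypothetical nonlinearity \(f_p(x)=0.5\sin(x)\) (applied elementwise) vanishes at the origin and satisfies the Lipschitz condition (2) with \(L_p=0.5\), which can be checked numerically on random pairs:

```python
import numpy as np

# Hypothetical nonlinearity satisfying Assumption 2: elementwise 0.5*sin(x),
# which is zero at zero and whose derivative is bounded by L_p = 0.5.
f_p = lambda x: 0.5 * np.sin(x)
L_p = 0.5

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    a, b = rng.standard_normal(3), rng.standard_normal(3)
    lhs = np.linalg.norm(f_p(a) - f_p(b))
    ok &= lhs <= L_p * np.linalg.norm(a - b) + 1e-12  # inequality (2)
print(ok)
```

Sampling of course does not prove the bound; here the derivative bound \(|0.5\cos(x)|\le 0.5\) guarantees it analytically.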

The following dynamic control system may be used to regulate the plant (1):

$$\left\{\begin{array}{l}{x}_{c}\left(k+1\right)={A}_{c}{x}_{c}\left(k\right)+{B}_{c}e(k)\\ u\left(k\right)={C}_{c}{x}_{c}\left(k\right)+{D}_{c}e\left(k\right)\end{array}\right.$$
(3)

where \({x}_{c}\left(k\right)\in {\mathbb{R}}^{{n}_{c}}\) is the state vector of the control system (3) and \(e\left(k\right)=r\left(k\right)-y(k)\) denotes the error signal. Thus, the set point \(r\left(k\right)\) appears explicitly in the error term \(e\left(k\right)\). In order to regulate the plant output \(y\left(k\right)\), the controller (3) is designed such that the error signal is shaped in a suitable way.

Assumption 3

For the sake of simplicity, the reference \(r\left(k\right)=r\) is assumed to be constant. Furthermore, the number of controller states is equal to the number of plant states (i.e. \({n}_{c}={n}_{p}=n\)).

Define the variable \(x\left(k\right)\stackrel{\scriptscriptstyle\mathrm{def}}{=}{\left[\begin{array}{cc}{x}_{p}^{T}\left(k\right)& {x}_{c}^{T}\left(k\right)\end{array}\right]}^{T}\). Then, the closed-loop system comprising the uncertain plant and the controller can be written as:

$$\left\{\begin{array}{l}x\left(k+1\right)=Ax\left(k\right)+Br+{f}_{N}\left(x\left(k\right)\right)\\ y\left(k\right)=Cx\left(k\right) \end{array}\right.$$
(4)

where

$$A=\left[\begin{array}{cc}{A}_{p}-{B}_{p}{D}_{c}{C}_{p}& {B}_{p}{C}_{c}\\ -{B}_{c}{C}_{p}& {A}_{c}\end{array}\right], B=\left[\begin{array}{c}{B}_{p}{D}_{c}\\ {B}_{c}\end{array}\right], C=\left[\begin{array}{cc}{C}_{p}& 0\end{array}\right], {f}_{N}\left(x\left(k\right)\right)=\left[\begin{array}{c}{f}_{p}\left({x}_{p}\left(k\right) \right)\\ 0\end{array}\right]$$

The system matrix \(A\) can be decomposed as follows:

$$A=\left[\begin{array}{cc}{A}_{p}& 0\\ 0& 0\end{array}\right]+\left[\begin{array}{cc}0& {B}_{p}\\ {I}_{n}& 0\end{array}\right]\left[\begin{array}{cc}{A}_{c}& {B}_{c}\\ {C}_{c}& {D}_{c}\end{array}\right]\left[\begin{array}{cc}0& {I}_{n}\\ {-C}_{p}& 0\end{array}\right]$$
(5)
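The decomposition (5) can be verified numerically. The sketch below builds the closed-loop matrix \(A\) both directly from (4) and via (5), using randomly chosen (hypothetical) plant and controller matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 3, 2, 2   # illustrative dimensions (n_c = n_p = n per Assumption 3)
Ap, Bp, Cp = rng.standard_normal((n, n)), rng.standard_normal((n, p)), rng.standard_normal((q, n))
Ac, Bc = rng.standard_normal((n, n)), rng.standard_normal((n, q))
Cc, Dc = rng.standard_normal((p, n)), rng.standard_normal((p, q))

# Closed-loop matrix A assembled directly from Eq. (4)
A_direct = np.block([[Ap - Bp @ Dc @ Cp, Bp @ Cc],
                     [-Bc @ Cp,          Ac]])

# Same matrix via the decomposition (5): the controller parameters
# enter only through the single block [A_c B_c; C_c D_c]
ctrl = np.block([[Ac, Bc], [Cc, Dc]])
A_decomp = (np.block([[Ap, np.zeros((n, n))],
                      [np.zeros((n, n)), np.zeros((n, n))]])
            + np.block([[np.zeros((n, n)), Bp],
                        [np.eye(n), np.zeros((n, p))]])
            @ ctrl
            @ np.block([[np.zeros((n, n)), np.eye(n)],
                        [-Cp, np.zeros((q, n))]]))
print(np.allclose(A_direct, A_decomp))
```

Isolating the controller block in this way is what later allows the design variables to be collected into a single matrix.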

Since the reference signal is constant, when the closed-loop system (4) is stable the equilibrium point is found from:

$$\left\{\begin{array}{l}\stackrel{-}{x}=A\stackrel{-}{x}+Br+{f}_{N}\left(\stackrel{-}{x}\right)\\ r=C\stackrel{-}{x}\end{array}\right.$$
(6)

where \(\underset{k\to +\infty }{\mathrm{lim}}x\left(k\right)=\underset{k\to +\infty }{\mathrm{lim}}x\left(k+1\right)\,=\,\stackrel{-}{x}\).

Define the deviation variable \(\xi \left(k\right)\stackrel{\scriptscriptstyle\mathrm{def}}{=}{\left[\begin{array}{cc}{\xi }_{p}^{T}\left(k\right)& {\xi }_{c}^{T}\left(k\right)\end{array}\right]}^{T}=x\left(k\right)-\stackrel{-}{x}\). Then, the closed-loop model can be written as the following autonomous system:

$$\xi \left(k+1\right)=A\xi \left(k\right)+f\left(\xi \left(k\right)\right)$$
(7)

where the nonlinear term is defined as \(f\left(\xi \left(k\right)\right)\stackrel{\scriptscriptstyle\mathrm{def}}{=}{f}_{N}\left(x\left(k\right)\right)-{f}_{N}\left(\stackrel{-}{x}\right)\). The proposed controller thus regulates the autonomous system (7).

To design the control system parameters, an infinite horizon cost function may be selected at sample time k as follows:

$$J\left(k\right)=\sum_{i=0}^{+\infty }\left({\left({\widehat{x}}_{p}\left(k+i|k\right)-{\stackrel{-}{x}}_{p}\right)}^{T}Q\left({\widehat{x}}_{p}\left(k+i|k\right)-{\stackrel{-}{x}}_{p}\right)+{\left(\widehat{u}\left(k+i|k\right)-\stackrel{-}{u}\right)}^{T}R\left(\widehat{u}\left(k+i|k\right)-\stackrel{-}{u}\right)\right)$$
(8)

where \(\stackrel{-}{u}=\underset{k\to +\infty }{\mathrm{lim}}u\left(k\right)\), and \(Q\in {\mathbb{R}}^{n\times n}\) and \(R\in {\mathbb{R}}^{p\times p}\) are symmetric weighting matrices with \(Q\ge 0\) and \(R>0\).

In the cost function (8), the term \({\widehat{x}}_{p}\left(k+i|k\right)\) is the predicted value of the plant states at time instant \(k+i\), with \({\widehat{x}}_{p}\left(k|k\right)={x}_{p}\left(k\right)\). The first computed control effort \(\widehat{u}\left(k|k\right)\) is applied to the plant (1), while the remaining inputs are discarded. The objective function (8) enforces convergence of the plant state and control input to their steady-state values \({\stackrel{-}{x}}_{p}\) and \(\stackrel{-}{u}\); hence, the summands tend to zero.

In the MPDC problem, the control signal \(u(k)\) is designed such that the cost function (8) is minimized. In Eq. (8), the prediction horizon tends to infinity. Hence, by Barbalat’s lemma, if the objective function (8) is bounded, then the plant states \({x}_{p}(k)\) converge to the predefined value \({\stackrel{-}{x}}_{p}\) (i.e. \(\underset{k\to +\infty }{\mathrm{lim}}{x}_{p}(k)={\stackrel{-}{x}}_{p}\)), which, together with Eq. (6), implies \(\underset{k\to +\infty }{\mathrm{lim}}e(k)=0\). Although the objective function (8) depends explicitly on the plant states \({x}_{p}(k)\), the tracking error \(e(k)\) is thus implicitly considered in the control problem. From the control system (3), we have:

$$u\left(k\right)={C}_{c}{x}_{c}\left(k\right)+{D}_{c}\left(r-{C}_{p}{x}_{p}\left(k\right)\right)$$
(9)

Then, the cost function (8) may be rewritten as:

$$J\left(k\right)=\sum_{i=0}^{+\infty }{\widehat{\xi }}^{T}\left(k+i|k\right)\Phi \widehat{\xi }\left(k+i|k\right)$$
(10)

where

$$\Phi =\left[\begin{array}{ll}Q+{{C}_{p}^{T}D}_{c}^{T}R{D}_{c}{C}_{p}& {{-C}_{p}^{T}D}_{c}^{T}R{C}_{c}\\ -{C}_{c}^{T}R{D}_{c}{C}_{p}& {C}_{c}^{T}R{C}_{c}\end{array}\right]$$
(11)

The weight matrix \(\Phi \) can be decomposed as follows:

$$\Phi =\left[\begin{array}{cc}Q& 0\\ 0& 0\end{array}\right]+\left[\begin{array}{c}{{C}_{p}^{T}D}_{c}^{T}\\ -{C}_{c}^{T}\end{array}\right]R\left[\begin{array}{cc}{D}_{c}{C}_{p}& -{C}_{c}\end{array}\right]$$
(12)
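The decomposition (12) can be checked in the same spirit; the controller and output matrices below are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, q = 3, 2, 2
Cp = rng.standard_normal((q, n))
Cc, Dc = rng.standard_normal((p, n)), rng.standard_normal((p, q))
Q, R = np.eye(n), np.eye(p)   # example weights with Q >= 0, R > 0

# Weight matrix Phi assembled directly from Eq. (11)
Phi_direct = np.block([
    [Q + Cp.T @ Dc.T @ R @ Dc @ Cp, -Cp.T @ Dc.T @ R @ Cc],
    [-Cc.T @ R @ Dc @ Cp,           Cc.T @ R @ Cc]])

# Same matrix via the rank-structured decomposition (12)
E = np.hstack([Dc @ Cp, -Cc])            # the factor [D_c C_p  -C_c]
Phi_decomp = np.block([[Q, np.zeros((n, n))],
                       [np.zeros((n, n)), np.zeros((n, n))]]) + E.T @ R @ E
print(np.allclose(Phi_direct, Phi_decomp))
```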

The MPDC design for discrete-time systems is presented next.

4 Main Results

An LMI-based approach is developed to find the parameters of the dynamic control system (3). Hence, the controller parameters can be obtained systematically via the solution of an LMI minimization problem.

Theorem 1

Suppose that Assumptions 1–3 hold and that \(A\in {\mathbb{R}}^{2n\times 2n}\) and \(\Phi \in {\mathbb{R}}^{2n\times 2n}\) are two known matrices. At time instant \(k\), if there exist a symmetric positive definite matrix \(P\in {\mathbb{R}}^{2n\times 2n}\) and a positive constant \(\gamma \) such that the following minimization problem is feasible:

$$Min\;\gamma $$

subject to

$${A}^{T}PA+{L}_{p}\left(PA+{A}^{T}P\right)+\left({L}_{p}^{2}-1\right)P+\Phi \le 0$$
(13)
$${\xi }^{T}\left(k\right)P\xi \left(k\right)\le \gamma $$
(14)

then the closed-loop system (4) is asymptotically stabilized with the control law (3), and the minimized value \(\gamma \) is an upper bound for the cost function (8).

Proof

Consider a quadratic Lyapunov function \(V\left(k\right)={\widehat{\xi }}^{T}\left(k\right)P\widehat{\xi }\left(k\right)\) and its difference \(\Delta V(k)=V\left(k+1\right)-V\left(k\right)\). Pre-multiplying the inequality (13) by the vector \({\widehat{\xi }}^{T}\left(k+i|k\right)\) and post-multiplying it by the vector \(\widehat{\xi }\left(k+i|k\right)\) yields:

$$ \hat{\xi }^{T} \left( {k + i|k} \right)\left( {A^{T} PA + L_{p} \left( {PA + A^{T} P} \right) + L_{p}^{2} P} \right)\hat{\xi }\left( {k + i|k} \right) \le \hat{\xi }^{T} \left( {k + i|k} \right)\left( {P - {\Phi }} \right)\hat{\xi }\left( {k + i|k} \right) $$
(15)

The following inequalities can be found by applying the inequality (2):

$$\left\{\begin{array}{l}{f}^{T}\left(\widehat{\xi }\left(k+i|k\right)\right)Pf\left(\widehat{\xi }\left(k+i|k\right)\right)\le {L}_{p}^{2}{\widehat{\xi }}^{T}\left(k+i|k\right)P\widehat{\xi }\left(k+i|k\right)\\ {\widehat{\xi }}^{T}\left(k+i|k\right){A}^{T}Pf\left(\widehat{\xi }\left(k+i|k\right)\right)\le {L}_{p}{\widehat{\xi }}^{T}\left(k+i|k\right){A}^{T}P\widehat{\xi }\left(k+i|k\right)\\ {f}^{T}\left(\widehat{\xi }\left(k+i|k\right)\right)PA\widehat{\xi }\left(k+i|k\right)\le {L}_{p}{\widehat{\xi }}^{T}\left(k+i|k\right)PA\widehat{\xi }\left(k+i|k\right)\end{array}\right.$$
(16)

Using the conditions (16), an upper bound of the following quadratic form can be obtained:

$$ \begin{aligned} & \left( {A\hat{\xi }\left( {k + i|k} \right) + f\left( {\hat{\xi }\left( {k + i|k} \right)} \right)} \right)^{T} P\left( {A\hat{\xi }\left( {k + i|k} \right) + f\left( {\hat{\xi }\left( {k + i|k} \right)} \right)} \right) \\ & \le \hat{\xi }^{T} \left( {k + i|k} \right)\left( {A^{T} PA + L_{p} \left( {PA + A^{T} P} \right) + L_{p}^{2} P} \right)\hat{\xi }\left( {k + i|k} \right) \\ & \le \hat{\xi }^{T} \left( {k + i|k} \right)\left( {{\text{P}} - {\Phi }} \right)\hat{\xi }\left( {k + i|k} \right) \\ \end{aligned} $$
(17)

Therefore, inequality (17) can be rearranged as:

$$ \begin{aligned} & \left( {A\hat{\xi }\left( {k + i|k} \right) + f\left( {\hat{\xi }\left( {k + i|k} \right)} \right)} \right)^{T} P\left( {A\hat{\xi }\left( {k + i|k} \right) + f\left( {\hat{\xi }\left( {k + i|k} \right)} \right)} \right) \\ & - \hat{\xi }^{T} \left( {k + i|k} \right)P\hat{\xi }\left( {k + i|k} \right) + \hat{\xi }^{T} \left( {k + i|k} \right){\Phi }\hat{\xi }\left( {k + i|k} \right) \le 0 \\ \end{aligned} $$
(18)

Then, by using Eq. (7), the inequality (18) can be written as follows:

$$ \begin{aligned} & \hat{\xi }^{T} \left( {k + i + 1|k} \right)P\hat{\xi }\left( {k + i + 1|k} \right) - \hat{\xi }^{T} \left( {k + i|k} \right)P\hat{\xi }\left( {k + i|k} \right) \\ & \quad + \,\hat{\xi }^{T} \left( {k + i|k} \right){\Phi }\hat{\xi }\left( {k + i|k} \right) \le 0 \\ \end{aligned} $$
(19)

Therefore, the inequality (13) implies that the following condition holds at any time instant \(k\):

$$\Delta V\left(k+i|k\right)=V\left(k+i+1|k\right)-V\left(k+i|k\right)\le -{\widehat{\xi }}^{T}\left(k+i|k\right)\Phi \widehat{\xi }\left(k+i|k\right)$$
(20)

Then, at time instant \(k\), summing both sides of the inequality (20) from \(i=0\) to infinity gives:

$$\underset{i\to +\infty }{\mathrm{lim}}V\left(k+i+1|k\right)-V\left(k|k\right)\le -J\left(k\right)$$
(21)

Barbalat’s lemma implies \(\underset{i\to +\infty }{\mathrm{lim}}V\left(k+i+1|k\right)=0\). Then,

$$J\left(k\right)\le {\xi }^{T}\left(k\right)P\xi \left(k\right)$$
(22)

By (22) and the condition (14), the cost function \(J\left(k\right)\) is bounded above by \(\gamma \) at time instant \(k\). The minimum \(\gamma \) is obtained by a suitable selection of the matrix \(P\).
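A scalar sanity check of Theorem 1 may help fix ideas. The values below (\(a=0.5\), \(L_p=0.2\), \(\Phi=0.1\), \(P=1\), with the hypothetical nonlinearity \(f(\xi)=L_p\sin\xi\)) are illustrative and not taken from the paper; they show that a \(P\) satisfying the scalar form of (13) indeed yields the decrease condition (20) along trajectories of (7):

```python
import numpy as np

# Scalar closed-loop map xi+ = a*xi + f(xi), with f Lipschitz of constant L_p
a, L_p, Phi = 0.5, 0.2, 0.1
P = 1.0
# Scalar form of condition (13): a^2 P + 2 L_p a P + (L_p^2 - 1) P + Phi <= 0
lhs = a*a*P + 2*L_p*a*P + (L_p**2 - 1)*P + Phi
print(lhs <= 0)   # condition (13) holds for this P

f = lambda xi: L_p * np.sin(xi)          # Lipschitz with constant L_p, f(0) = 0
xi, ok = 2.0, True
for _ in range(50):
    xi_next = a * xi + f(xi)
    dV = P * xi_next**2 - P * xi**2
    ok &= dV <= -Phi * xi**2 + 1e-12     # decrease condition (20)
    xi = xi_next
print(ok)
```

In the scalar case with \(a>0\) and \(P>0\) the cross-term bounds (16) are immediate, so the decrease of \(V\) follows exactly as in the proof.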

In Theorem 1, the two matrices \(A\) and \(\Phi \) depend explicitly on the controller parameters (\({A}_{c},{B}_{c},{C}_{c},{D}_{c}\)) and are therefore not completely known, so the control design problem involves some complexities. However, a matrix transformation is suggested to translate the MPDC problem into an LMI optimization one. Hence, in the next theorem, the controller parameters are computed directly at each sample time.

Theorem 2

Suppose that Assumptions 1–3 hold simultaneously. At time instant \(k\), if there exist two symmetric positive definite matrices \(X,Y\in {\mathbb{R}}^{n\times n}\), some compatible matrices \(K\in {\mathbb{R}}^{n\times n},L\in {\mathbb{R}}^{n\times q},M\in {\mathbb{R}}^{p\times n},N\in {\mathbb{R}}^{p\times q}\) and a positive constant \(\gamma \) such that the following minimization problem is feasible:

$$ \begin{aligned} & Min~\gamma \\ & {\text{subject}}\,{\text{to}} \\ & \left[ {\begin{array}{*{20}c} Y & {I_{n} } \\ {I_{n} } & X \\ \end{array} } \right] > 0 \\ \end{aligned} $$
(23)
$$\left[\begin{array}{ccc} Y & * & * \\ I_n & X & * \\ M & -NC_p & u_{max}^{2} I_p \end{array}\right] \geq 0$$
(24)
$$\left[\begin{array}{ccc}Y& V& {\xi }_{p}\left(k\right)\\ *& Y& {\xi }_{c}\left(k\right)\\ *& *& 1\end{array}\right]\ge 0$$
(25)
$$\left[\begin{array}{cccccc}{\mathcal{M}}_{11}& {\mathcal{M}}_{12}& {\mathcal{M}}_{13}& {\mathcal{M}}_{14}& Y& -{M}^{T}\\ *& {\mathcal{M}}_{22}& K& {\mathcal{M}}_{24}& {I}_{n}& {C}_{p}^{T}{N}^{T}\\ *& *& Y& {I}_{n}& 0& 0\\ *& *& *& X& 0& 0\\ *& *& *& *& \gamma {Q}^{-1}& 0\\ *& *& *& *& *& \gamma {R}^{-1}\end{array}\right]\ge 0$$
(26)

where the star symbol \(*\) denotes a block inferred from symmetry, and the matrices \({\mathcal{M}}_{11}\), \({\mathcal{M}}_{12}\), \({\mathcal{M}}_{13}\), \({\mathcal{M}}_{14}\), \({\mathcal{M}}_{22}\) and \({\mathcal{M}}_{24}\) are defined as follows:

$$\left\{\begin{array}{l}{\mathcal{M}}_{11}\stackrel{\scriptscriptstyle\mathrm{def}}{=}\left(1-{L}_{p}^{2}\right)Y-{L}_{p}\left({A}_{p}Y+{B}_{p}M+Y{A}_{p}^{T}+{M}^{T}{B}_{p}^{T}\right) \\ {\mathcal{M}}_{12}\stackrel{\scriptscriptstyle\mathrm{def}}{=}{\left(1-{L}_{p}^{2}\right)I}_{n}-{L}_{p}\left({A}_{p}-{B}_{p}N{C}_{p}+{A}_{p}^{T}-{C}_{p}^{T}{{N}^{T}B}_{p}^{T}\right)\\ {\mathcal{M}}_{13}\stackrel{\scriptscriptstyle\mathrm{def}}{=}{A}_{p}Y+{B}_{p}M \\ {\mathcal{M}}_{14}\stackrel{\scriptscriptstyle\mathrm{def}}{=}{A}_{p}-{B}_{p}N{C}_{p} \\ {\mathcal{M}}_{22}\stackrel{\scriptscriptstyle\mathrm{def}}{=}\left(1-{L}_{p}^{2}\right)X-{L}_{p}\left(X{A}_{p}-L{C}_{p}+{A}_{p}^{T}X-{C}_{p}^{T}{L}^{T}\right) \\ {\mathcal{M}}_{24}\stackrel{\scriptscriptstyle\mathrm{def}}{=}X{A}_{p}-L{C}_{p}\end{array}\right.$$
(27)

Then, by means of the dynamic control law (3) with the following parameters:

$${A}_{c}={U}^{-1}\left(K-X{A}_{p}Y+L{C}_{p}Y-X{B}_{p}M-X{B}_{p}N{C}_{p}Y\right){V}^{-1}$$
(28)
$${B}_{c}={U}^{-1}(L-X{B}_{p}N)$$
(29)
$${C}_{c}=(M+N{C}_{p}Y){V}^{-1}$$
(30)
$${D}_{c}=N$$
(31)

where

$$\left\{\begin{array}{c}U=X{\left({I}_{n}-{X}^{-1}{Y}^{-1}\right)}^\frac{1}{2} \\ V=-{\left({I}_{n}-{X}^{-1}{Y}^{-1}\right)}^\frac{1}{2}Y\end{array}\right.$$
(32)

The uncertain plant (1) is asymptotically stabilized, and the minimized value \(\gamma \) is an upper bound for the cost function (8).

Proof

The matrix \(P\) is a symmetric positive definite one. Let partition \(P\) as follows:

$$P=\gamma \left[\begin{array}{cc}X& U\\ U& X\end{array}\right]$$
(33)

The Schur complement lemma implies that the matrix \(X\) must be positive definite. The inverse of the matrix \(P\) is also symmetric positive definite and can be written as:

$${P}^{-1}={\gamma }^{-1}\left[\begin{array}{cc}Y& V\\ V& Y\end{array}\right]$$
(34)

where

$$ \left\{ {\begin{array}{*{20}l} {XY + UV = I_{n} } \hfill \\ {XV + UY = 0~} \hfill \\ {XY = YX} \hfill \\ {UY = YU} \hfill \\ \end{array} } \right. $$
(35)

The Schur complement lemma implies that \(Y\) must be a positive definite matrix. It is not hard to show that the matrices \(U\) and \(V\) in terms of \(X\) and \(Y\) can be computed as:

$$ \left\{ {\begin{array}{*{20}l} {U = X\left( {I_{n} - X^{{ - 1}} Y^{{ - 1}} } \right)^{{\frac{1}{2}}} } \hfill \\ {V = - \left( {I_{n} - X^{{ - 1}} Y^{{ - 1}} } \right)^{{\frac{1}{2}}} Y} \hfill \\ \end{array} } \right. $$
(36)
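The relations (35)–(36) can be verified on a diagonal example, where the matrix square root is elementwise and the commutation relations in (35) hold by construction (for general matrices a matrix square root routine such as `scipy.linalg.sqrtm` would be needed):

```python
import numpy as np

# Diagonal positive definite X, Y chosen so that I - X^{-1} Y^{-1} > 0
X = np.diag([2.0, 3.0])
Y = np.diag([4.0, 5.0])
# Elementwise square root of the diagonal matrix I - X^{-1} Y^{-1}
S = np.diag(np.sqrt(np.diag(np.eye(2) - np.linalg.inv(X) @ np.linalg.inv(Y))))

U = X @ S          # Eq. (36), first relation
V = -S @ Y         # Eq. (36), second relation

print(np.allclose(X @ Y + U @ V, np.eye(2)))   # XY + UV = I_n
print(np.allclose(X @ V + U @ Y, 0))           # XV + UY = 0
```

Both identities print `True`, confirming that (36) solves the coupling equations in (35) whenever the factors commute.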

Consider another invertible symmetric matrix \(\Psi \) as follows:

$$\Psi =\left[\begin{array}{cc}Y& V\\ V& 0\end{array}\right]$$
(37)

The matrix \(\Psi \) can be decomposed as:

$$\Psi =\left[\begin{array}{cc}{I}_{n}& 0\\ 0& V\end{array}\right]\left[\begin{array}{cc}Y& {I}_{n}\\ {I}_{n}& 0\end{array}\right]\left[\begin{array}{cc}{I}_{n}& 0\\ 0& V\end{array}\right]$$
(38)

Let define two matrices \(\Lambda \) and \(\Omega \) as follows:

$$\Lambda \stackrel{\scriptscriptstyle\mathrm{def}}{=}\left[\begin{array}{cc}{I}_{n}& 0\\ 0& V\end{array}\right],\Omega \stackrel{\scriptscriptstyle\mathrm{def}}{=}\left[\begin{array}{cc}Y& {I}_{n}\\ {I}_{n}& 0\end{array}\right]$$
(39)

The inverse of the matrix \(\Psi \) may be computed as:

$${\Psi }^{-1}=\left[\begin{array}{cc}0& {V}^{-1}\\ {V}^{-1}& -{V}^{-1}Y{V}^{-1}\end{array}\right]=\left[\begin{array}{cc}{I}_{n}& 0\\ 0& {V}^{-1}\end{array}\right]\left[\begin{array}{cc}0& {I}_{n}\\ {I}_{n}& -Y\end{array}\right]\left[\begin{array}{cc}{I}_{n}& 0\\ 0& {V}^{-1}\end{array}\right]$$
(40)

Since the matrix \(P\) is symmetric positive definite, the matrix \(\Psi P\Psi \) is also symmetric positive definite. Then, the condition (23) is obtained as follows:

$$\Psi P\Psi =\gamma \left[\begin{array}{cc}Y& V\\ V& VXV\end{array}\right]=\gamma \left[\begin{array}{cc}{I}_{n}& 0\\ 0& V\end{array}\right]\left[\begin{array}{cc}Y& {I}_{n}\\ {I}_{n}& X\end{array}\right]\left[\begin{array}{cc}{I}_{n}& 0\\ 0& V\end{array}\right]>0$$
(41)

Next, recall the condition (13):

$${A}^{T}PA+{L}_{p}\left(PA+{A}^{T}P\right)+({L}_{p}^{2}-1)P+\Phi \le 0$$
(42)

Pre- and post-multiplying both sides of the inequality (42) by the matrix \(\Psi \) gives:

$$\Psi {A}^{T}PA\Psi +{L}_{p}\Psi PA\Psi +{L}_{p}\Psi {A}^{T}P\Psi +({L}_{p}^{2}-1)\Psi P\Psi +\Psi \Phi \Psi \le 0$$
(43)

The inequality (43) may be compactly rewritten as follows:

$$\left(1-{L}_{p}^{2}\right)\Psi P\Psi -{L}_{p}\Psi PA\Psi -{L}_{p}\Psi {A}^{T}P\Psi -{\left(\Psi PA\Psi \right)}^{\mathrm{T}}{\left(\Psi P\Psi \right)}^{-1}\Psi PA\Psi -\Psi \Phi \Psi \ge 0$$
(44)

The controller parameters (\({A}_{c},{B}_{c},{C}_{c},{D}_{c}\)) in terms of the matrices (\(K,L,M,N\)) can be found as:

$$\left[\begin{array}{cc}{A}_{c}& {B}_{c}\\ {C}_{c}& {D}_{c}\end{array}\right]={\left[\begin{array}{cc}U& X{B}_{p}\\ 0& {I}_{p}\end{array}\right]}^{-1}\left[\begin{array}{cc}K-X{A}_{p}Y& L\\ M& N\end{array}\right]{\left[\begin{array}{cc}V& 0\\ -{C}_{p}Y& {I}_{q}\end{array}\right]}^{-1}$$
(45)

The controller parameters could be simplified as follows:

$$\left\{\begin{array}{l}{A}_{c}={U}^{-1}\left(K-X{A}_{p}Y+L{C}_{p}Y-X{B}_{p}M-X{B}_{p}N{C}_{p}Y\right){V}^{-1}\\ {B}_{c}={U}^{-1}\left(L-X{B}_{p}N\right) \\ {C}_{c}=\left(M+N{C}_{p}Y\right){V}^{-1} \\ {D}_{c}=N\end{array}\right.$$
(46)

It is evident that the matrices (\(K,L,M,N\)) in terms of the controller parameters (\({A}_{c},{B}_{c},{C}_{c},{D}_{c}\)) are expressed as:

$$\left[\begin{array}{cc}K& L\\ M& N\end{array}\right]=\left[\begin{array}{cc}U& X{B}_{p}\\ 0& {I}_{p}\end{array}\right]\left[\begin{array}{cc}{A}_{c}& {B}_{c}\\ {C}_{c}& {D}_{c}\end{array}\right]\left[\begin{array}{cc}V& 0\\ -{C}_{p}Y& {I}_{q}\end{array}\right]+\left[\begin{array}{cc}X{A}_{p}Y& 0\\ 0& 0\end{array}\right]$$
(47)
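The transformations (46) and (47) are inverse to each other, which can be confirmed by a numerical round trip with random (hypothetical) controller and plant matrices and diagonal \(X,Y\):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, q = 2, 2, 2
Ap, Bp, Cp = rng.standard_normal((n, n)), rng.standard_normal((n, p)), rng.standard_normal((q, n))
Ac, Bc = rng.standard_normal((n, n)), rng.standard_normal((n, q))
Cc, Dc = rng.standard_normal((p, n)), rng.standard_normal((p, q))
X, Y = np.diag([2.0, 3.0]), np.diag([4.0, 5.0])
S = np.diag(np.sqrt(np.diag(np.eye(n) - np.linalg.inv(X) @ np.linalg.inv(Y))))
U, V = X @ S, -S @ Y                      # Eq. (36)

# Forward map (47): (A_c, B_c, C_c, D_c) -> (K, L, M, N)
KLMN = (np.block([[U, X @ Bp], [np.zeros((p, n)), np.eye(p)]])
        @ np.block([[Ac, Bc], [Cc, Dc]])
        @ np.block([[V, np.zeros((n, q))], [-Cp @ Y, np.eye(q)]])
        + np.block([[X @ Ap @ Y, np.zeros((n, q))],
                    [np.zeros((p, n)), np.zeros((p, q))]]))
K, L = KLMN[:n, :n], KLMN[:n, n:]
M, N = KLMN[n:, :n], KLMN[n:, n:]

# Inverse map (28)-(31): recover the controller parameters
Ui, Vi = np.linalg.inv(U), np.linalg.inv(V)
Ac2 = Ui @ (K - X@Ap@Y + L@Cp@Y - X@Bp@M - X@Bp@N@Cp@Y) @ Vi
Bc2 = Ui @ (L - X @ Bp @ N)
Cc2 = (M + N @ Cp @ Y) @ Vi
print(np.allclose(Ac, Ac2), np.allclose(Bc, Bc2),
      np.allclose(Cc, Cc2), np.allclose(Dc, N))
```

All four comparisons hold, so optimizing over \((K,L,M,N)\) loses no generality: any dynamic controller of the form (3) is reachable through (46).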

In inequality (43), the term \(\Psi \Phi \Psi \) is computed as follows:

$$\Psi \Phi \Psi ={\left[\begin{array}{cc}Y& V\\ -M& N{C}_{p}V\end{array}\right]}^{T}\left[\begin{array}{cc}Q& 0\\ 0& R\end{array}\right]\left[\begin{array}{cc}Y& V\\ -M& N{C}_{p}V\end{array}\right]$$
(48)

Then,

$$\Psi \Phi \Psi =\left[\begin{array}{cc}{I}_{n}& 0\\ 0& V\end{array}\right]{\left[\begin{array}{cc}Y& {I}_{n}\\ -M& N{C}_{p}\end{array}\right]}^{T}\left[\begin{array}{cc}Q& 0\\ 0& R\end{array}\right]\left[\begin{array}{cc}Y& {I}_{n}\\ -M& N{C}_{p}\end{array}\right]\left[\begin{array}{cc}{I}_{n}& 0\\ 0& V\end{array}\right]$$
(49)

and also the term \(\Psi PA\Psi \) can be obtained as:

$$\Psi PA\Psi =\gamma \left[\begin{array}{cc}{A}_{p}Y+{B}_{p}M& {A}_{p}V-{B}_{p}N{C}_{p}V\\ VK& VX{A}_{p}V-VL{C}_{p}V\end{array}\right]$$
(50)

It can be decomposed as follows:

$$\Psi PA\Psi =\gamma \left[\begin{array}{cc}{I}_{n}& 0\\ 0& V\end{array}\right]\left[\begin{array}{cc}{A}_{p}Y+{B}_{p}M& {A}_{p}-{B}_{p}N{C}_{p}\\ K& X{A}_{p}-L{C}_{p}\end{array}\right]\left[\begin{array}{cc}{I}_{n}& 0\\ 0& V\end{array}\right]$$
(51)

Pre- and post-multiplying both sides of the inequality (44) by the matrix \({\Lambda }^{-1}\) gives:

$$ \begin{aligned} & \left( {1 - L_{p}^{2} } \right)\left[ {\begin{array}{*{20}c} Y & {I_{n} } \\ {I_{n} } & X \\ \end{array} } \right] - L_{p} \left[ {\begin{array}{*{20}c} {A_{p} Y + B_{P} M} & {A_{p} - B_{p} NC_{p} } \\ K & {XA_{p} - LC_{p} } \\ \end{array} } \right] \\ & \quad - \,L_{p} \left[ {\begin{array}{*{20}c} {A_{p} Y + B_{P} M} & {A_{p} - B_{p} NC_{p} } \\ K & {XA_{p} - LC_{p} } \\ \end{array} } \right]^{T} - {\upgamma }^{ - 1} \left[ {\begin{array}{*{20}c} Y & {I_{n} } \\ { - M} & {NC_{p} } \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} Q & 0 \\ 0 & R \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} Y & {I_{n} } \\ { - M} & {NC_{p} } \\ \end{array} } \right] \\ & \quad - \,\left[ {\begin{array}{*{20}c} {A_{p} Y + B_{P} M} & {A_{p} - B_{p} NC_{p} } \\ K & {XA_{p} - LC_{p} } \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} Y & {I_{n} } \\ {I_{n} } & X \\ \end{array} } \right]^{ - 1} \left[ {\begin{array}{*{20}c} {A_{p} Y + B_{P} M} & {A_{p} - B_{p} NC_{p} } \\ K & {XA_{p} - LC_{p} } \\ \end{array} } \right] \ge 0 \\ \end{aligned} $$
(52)

Therefore, the condition (26) is obtained by applying the Schur complement lemma. At each sample time \(k\), the Lyapunov stability theorem leads to the following condition:

$$J\left(k\right)<{\xi }^{T}\left(k\right)P\xi \left(k\right)\le \gamma $$
(53)

By (34), this can be rewritten as:

$${\xi }^{T}\left(k\right){\left[\begin{array}{cc}Y& V\\ V& Y\end{array}\right]}^{-1}\xi \left(k\right)\le 1$$
(54)

Then, inequality (25) follows by applying the Schur complement lemma. The control effort \(u\left(k\right)\) is written as:

$$v\left(k\right)=u\left(k\right)-\stackrel{-}{u}=\left[\begin{array}{cc}-{D}_{c}{C}_{p}& {C}_{C}\end{array}\right]\xi \left(k\right)$$
(55)

The two-norm of the deviated control signal \(v\left(k\right)\) is computed as:

$${\Vert v\left(k\right)\Vert }^{2}={\xi }^{T}\left(k\right){\left[\begin{array}{cc}-{D}_{c}{C}_{p}& {C}_{C}\end{array}\right]}^{T}\left[\begin{array}{cc}-{D}_{c}{C}_{p}& {C}_{C}\end{array}\right]\xi \left(k\right)$$
(56)

Equation (56) can be written as follows:

$${\Vert v\left(k\right)\Vert }^{2}={\xi }^{T}\left(k\right){\left[\begin{array}{cc}-N{C}_{p}& \left(M+N{C}_{P}Y\right){V}^{-1}\end{array}\right]}^{T}\left[\begin{array}{cc}-N{C}_{p}& (M+N{C}_{P}Y){V}^{-1}\end{array}\right]\xi \left(k\right)$$
(57)

The following matrix decomposition is used to find an upper bound:

$$\left[\begin{array}{cc}-N{C}_{p}& (M+N{C}_{P}Y){V}^{-1}\end{array}\right]=\left[\begin{array}{cc}M& -N{C}_{p}\end{array}\right]\left[\begin{array}{cc}0& {V}^{-1}\\ {I}_{n}& -Y{V}^{-1}\end{array}\right]$$
(58)

The condition (57) can be rewritten as:

$${\Vert v\left(k\right)\Vert }^{2}={\xi }^{T}\left(k\right){\left[\begin{array}{cc}0& {V}^{-1}\\ {I}_{n}& -Y{V}^{-1}\end{array}\right]}^{T}{\left[\begin{array}{cc}M& -N{C}_{p}\end{array}\right]}^{T}\left[\begin{array}{cc}M& -N{C}_{p}\end{array}\right]\left[\begin{array}{cc}0& {V}^{-1}\\ {I}_{n}& -Y{V}^{-1}\end{array}\right]\xi \left(k\right)$$
(59)

It is easy to check that the following decomposition is valid:

$$\left[\begin{array}{cc}Y& {I}_{n}\\ {I}_{n}& X\end{array}\right]=\left[\begin{array}{cc}Y& V\\ {I}_{n}& 0\end{array}\right]{\left[\begin{array}{cc}Y& V\\ V& Y\end{array}\right]}^{-1}\left[\begin{array}{cc}Y& {I}_{n}\\ V& 0\end{array}\right]$$
(60)

Using the Schur complement lemma, one may write:

$${\left[\begin{array}{cc}M& -N{C}_{p}\end{array}\right]}^{T}\left[\begin{array}{cc}M& -N{C}_{p}\end{array}\right]\le {u}_{\mathrm{m}\mathrm{a}\mathrm{x}}^{2}\left[\begin{array}{cc}Y& {I}_{n}\\ {I}_{n}& X\end{array}\right]={u}_{\mathrm{m}\mathrm{a}\mathrm{x}}^{2}\left[\begin{array}{cc}Y& V\\ {I}_{n}& 0\end{array}\right]{\left[\begin{array}{cc}Y& V\\ V& Y\end{array}\right]}^{-1}\left[\begin{array}{cc}Y& {I}_{n}\\ V& 0\end{array}\right]$$
(61)

Let pre- and post-multiply the inequality (61) by the following matrices, respectively:

$$ \left[ {\begin{array}{*{20}c} Y & V \\ {I_{n} } & 0 \\ \end{array} } \right]^{ - 1} = \left[ {\begin{array}{*{20}c} 0 & {I_{n} } \\ {V^{ - 1} } & { - V^{ - 1} Y} \\ \end{array} } \right]\;{\text{and}}\;\left[ {\begin{array}{*{20}c} Y & {I_{n} } \\ V & 0 \\ \end{array} } \right]^{ - 1} = \left[ {\begin{array}{*{20}c} 0 & {V^{ - 1} } \\ {I_{n} } & { - YV^{ - 1} } \\ \end{array} } \right] $$
(62)

The inequality (61) could be simplified as follows:

$${\left[\begin{array}{cc}-N{C}_{p}& \left(M+N{C}_{P}Y\right){V}^{-1}\end{array}\right]}^{T}\left[\begin{array}{cc}-N{C}_{p}& (M+N{C}_{P}Y){V}^{-1}\end{array}\right]\le {u}_{\mathrm{m}\mathrm{a}\mathrm{x}}^{2}{\left[\begin{array}{cc}Y& V\\ V& Y\end{array}\right]}^{-1}$$
(63)

Then, the deviated control signal \(\Vert v\left(k\right)\Vert \) can be upper-bounded using condition (57):

$${\Vert v\left(k\right)\Vert }^{2}\le {u}_{\mathrm{m}\mathrm{a}\mathrm{x}}^{2}{\xi }^{T}\left(k\right){\left[\begin{array}{cc}Y& V\\ V& Y\end{array}\right]}^{-1}\xi \left(k\right)\le {u}_{\mathrm{m}\mathrm{a}\mathrm{x}}^{2}$$
(64)

This completes the proof.

Remark 1

Inequality (26) requires both weight matrices \(Q\) and \(R\) to be invertible. In some applications, however, the matrix \(Q\) may not be invertible. In that case, the symmetric matrices \(Q\) and \(R\) can be decomposed by means of the Cholesky factorization as follows [34]:

$$\left\{\begin{array}{c}Q={Q}_{\mathrm{c}\mathrm{h}}^{T}\times {Q}_{\mathrm{c}\mathrm{h}}\\ R={R}_{\mathrm{c}\mathrm{h}}^{T}\times {R}_{\mathrm{c}\mathrm{h}}\end{array}\right.$$
(65)

where the terms \({Q}_{\mathrm{c}\mathrm{h}}\) and \({R}_{\mathrm{c}\mathrm{h}}\) are unique triangular matrices. Then, condition (26) can be replaced with the following LMI:

$$\left[\begin{array}{cccccc}{\mathcal{M}}_{11}& {\mathcal{M}}_{12}& {\mathcal{M}}_{13}& {\mathcal{M}}_{14}& Y{Q}_{\mathrm{c}\mathrm{h}}& -{M}^{T}{R}_{\mathrm{c}\mathrm{h}}\\ *& {\mathcal{M}}_{22}& K& {\mathcal{M}}_{24}& {Q}_{\mathrm{c}\mathrm{h}}& {C}_{p}^{T}{N}^{T}{R}_{\mathrm{c}\mathrm{h}}\\ *& *& Y& {I}_{n}& 0& 0\\ *& *& *& X& 0& 0\\ *& *& *& *& \gamma {I}_{n}& 0\\ *& *& *& *& *& \gamma {I}_{n}\end{array}\right]\ge 0$$
(66)

Other matrix factorization techniques, such as the diagonalization procedure, may also be used to tackle this difficulty. The matrices \(Q\) and \(R\) are decomposed via the diagonalization method as follows [34]:

$$\left\{\begin{array}{c}Q={Q}^\frac{1}{2}\times {Q}^\frac{1}{2} \\ R={R}^\frac{1}{2}\times {R}^\frac{1}{2}\end{array}\right.$$
(67)

where the terms \({Q}^\frac{1}{2}\) and \({R}^\frac{1}{2}\) are the square roots of the matrices \(Q\) and \(R\), respectively. Then, the LMI condition (26) is replaced with the following LMI:

$$\left[\begin{array}{cccccc}{\mathcal{M}}_{11}& {\mathcal{M}}_{12}& {\mathcal{M}}_{13}& {\mathcal{M}}_{14}& Y{Q}^\frac{1}{2}& -{M}^{T}{R}^\frac{1}{2}\\ *& {\mathcal{M}}_{22}& K& {\mathcal{M}}_{24}& {Q}^\frac{1}{2}& {C}_{p}^{T}{N}^{T}{R}^\frac{1}{2}\\ *& *& Y& {I}_{n}& 0& 0\\ *& *& *& X& 0& 0\\ *& *& *& *& \gamma {I}_{n}& 0\\ *& *& *& *& *& \gamma {I}_{n}\end{array}\right]\ge 0$$
(68)
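Both factorizations in (65) and (67) are standard numerical operations. The following minimal sketch (using NumPy, with an arbitrary illustrative weight matrix) shows how the Cholesky factor \(Q_{\mathrm{ch}}\) with \(Q = Q_{\mathrm{ch}}^{T}Q_{\mathrm{ch}}\) and the symmetric square root \(Q^{1/2}\) with \(Q = Q^{1/2}Q^{1/2}\) can be computed:

```python
import numpy as np

Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])  # an illustrative symmetric positive-definite weight

# NumPy returns a lower-triangular L with Q = L @ L.T, so the
# upper-triangular Cholesky factor in (65) is Q_ch = L.T
Q_ch = np.linalg.cholesky(Q).T

# Symmetric square root via the eigendecomposition Q = U diag(lam) U.T,
# giving the diagonalization-based factor of (67)
lam, U = np.linalg.eigh(Q)
Q_half = U @ np.diag(np.sqrt(lam)) @ U.T
```

The same two lines applied to \(R\) produce \(R_{\mathrm{ch}}\) and \(R^{1/2}\) for the LMIs (66) and (68).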

Remark 2

The norm of the plant output \(y\left(k\right)\) may admit an upper bound \({y}_{\mathrm{m}\mathrm{a}\mathrm{x}}\) (i.e. \(\Vert y\left(k\right)-r\Vert <{y}_{\mathrm{m}\mathrm{a}\mathrm{x}}\)). Such an inequality can be written as follows:

$$\left[\begin{array}{cc}I& {C}_{p}{\xi }_{p}\left(k\right)\\ *& {y}_{\mathrm{m}\mathrm{a}\mathrm{x}}^{2}\end{array}\right]>0$$
(69)

In Theorem 2, the inequality (69) can be added to the LMI sets (23)–(26) to guarantee that the output constraint is satisfied.
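The equivalence between (69) and the stated output bound follows directly from the Schur complement lemma (here the regulation error is taken as \(y\left(k\right)-r={C}_{p}{\xi }_{p}\left(k\right)\)):

$$\left[\begin{array}{cc}I& {C}_{p}{\xi }_{p}\left(k\right)\\ *& {y}_{\mathrm{m}\mathrm{a}\mathrm{x}}^{2}\end{array}\right]>0\iff {y}_{\mathrm{m}\mathrm{a}\mathrm{x}}^{2}-{\xi }_{p}^{T}\left(k\right){C}_{p}^{T}{C}_{p}{\xi }_{p}\left(k\right)>0\iff {\Vert {C}_{p}{\xi }_{p}\left(k\right)\Vert }^{2}<{y}_{\mathrm{m}\mathrm{a}\mathrm{x}}^{2}$$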

Remark 3

The MPDC may outperform the other MPC schemes in terms of the performance cost. However, the computational burden of solving the MPDC optimization problem is higher than that of the other MPC methods. In a discrete-time implementation, there is enough time between samples to compute the controller parameters by solving the LMI minimization problem; in the proposed method, the optimization problem is typically solved in less than 0.1 s. Hence, the problem can be handled in practical applications.

Remark 4

The minimization of the quadratic performance index (8) subject to the plant constraints is the main objective of the MPDC approach. Thus, only the optimization of the objective function (8) is taken into account in the MPDC design, rather than other control requirements. Therefore, various practical issues, such as the actuator life-cycle problem (control input rate), can also be incorporated into the cost function to achieve a more efficient control scheme.

Remark 5

The optimization problem of Theorem 2 may be solved at each sample time. The results of Theorem 2 can also be used to design a robust optimal controller for the uncertain system (1). Hence, the cost function can be written as follows:

$$J=\sum_{k=0}^{+\infty }\left({\left({x}_{p}\left(k\right)-{\stackrel{-}{x}}_{p}\right)}^{T}Q\left({x}_{p}\left(k\right)-{\stackrel{-}{x}}_{p}\right)+{\left(u\left(k\right)-\stackrel{-}{u}\right)}^{T}R\left(u\left(k\right)-\stackrel{-}{u}\right)\right)$$
(70)

The proposed optimization problem is solved offline when the initial conditions of the uncertain plant (1) are known. Therefore, an offline dynamic controller may be obtained for the uncertain system (1).

Remark 6

In Assumption 1, it is supposed that the states of the uncertain system (1) are measurable to the control designer. However, some of the states may not be available in real-time implementations. In this case, an extra (full- or reduced-order) observer can be incorporated to estimate the unmeasured states of the uncertain system (1). Nevertheless, an additional error is induced by the transient response of the estimator dynamics. Consequently, in the case of unmeasured states, the raised issue could be handled by deriving an output-feedback MPDC for the uncertain system (1).

5 Simulation Results

Consider the following discrete-time system [13]:

$$\left\{\begin{array}{l}{x}_{p}\left(k+1\right)=\left[\begin{array}{ccc}0.9& 0.8 & 0.1\\ -0.1& 0.7& 0.2\\ -0.2& -0.4& -0.2\end{array}\right]{x}_{p}\left(k\right)+\left[\begin{array}{c}0\\ 0\\ 1\end{array}\right]u\left(k\right)+\frac{1}{2+{x}_{3}^{2}\left(k\right)}\left[\begin{array}{c}0\\ {x}_{1}\left(k\right)\\ {x}_{2}\left(k\right)\end{array}\right]\\ {y}_{p}\left(k\right)=\left[\begin{array}{ccc}1& 0& 0\end{array}\right]{x}_{p}\left(k\right)\end{array}\right.$$
(71)
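The nonlinear recursion (71) can be simulated directly. A minimal Python sketch (the function names `f_p` and `step` are illustrative, not from the source) is:

```python
import numpy as np

A_p = np.array([[0.9, 0.8, 0.1],
                [-0.1, 0.7, 0.2],
                [-0.2, -0.4, -0.2]])
B_p = np.array([0.0, 0.0, 1.0])
C_p = np.array([[1.0, 0.0, 0.0]])

def f_p(x):
    # State-dependent nonlinearity of plant (71)
    return np.array([0.0, x[0], x[1]]) / (2.0 + x[2] ** 2)

def step(x, u):
    # One step of the recursion x_p(k+1) = A_p x_p(k) + B_p u(k) + f_p(x_p(k))
    return A_p @ x + B_p * u + f_p(x)

x = np.array([1.0, 2.0, 1.0])  # initial condition used in the example
y = C_p @ x                    # plant output y_p(k) = C_p x_p(k)
```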

The initial conditions of the plant (71) and the controller states are chosen as \({x}_{p}\left(0\right)={\left[\begin{array}{ccc}1& 2& 1\end{array}\right]}^{T}\) and \({x}_{c}\left(0\right)={\left[\begin{array}{ccc}0& 0& 0\end{array}\right]}^{T}\), respectively. Recently, two LMI-based MPC algorithms have been suggested to regulate the uncertain plant (1): the discrete-time MPC [13] and the continuous-time MPDC [12]. In order to implement the continuous-time MPDC, a continuous-time form of the nonlinear system (1) may be approximated via a first-order Euler approximation as follows:

$${\dot{x}}_{p}={A}_{p}^{c}{x}_{p}+{f}_{p}^{c}\left({x}_{p} \right)+{B}_{p}^{c}u$$
(72)

where \({A}_{p}^{c}=\frac{1}{T}\left({A}_{p}-{I}_{n}\right)\), \({B}_{p}^{c}=\frac{1}{T}{B}_{p}\), \({f}_{p}^{c}\left(.\right)=\frac{1}{T}{f}_{p}\left(.\right)\), and \(T\) denotes the sample time. The discrete-time MPC is applied as \(u\left(k\right)=F\left(k\right){x}_{p}\left(k\right)\), where the gain \(F\left(k\right)\) is updated at each sample time by solving a minimization problem.
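The conversion used in (72) amounts to a one-line matrix operation. A minimal sketch, using the plant matrices of (71) and the sample time \(T=1\) s from the simulation setup, is:

```python
import numpy as np

T = 1.0  # sample time used in the simulation section

A_p = np.array([[0.9, 0.8, 0.1],
                [-0.1, 0.7, 0.2],
                [-0.2, -0.4, -0.2]])
B_p = np.array([[0.0], [0.0], [1.0]])

# Continuous-time approximation of (72): A_p^c = (A_p - I_n)/T, B_p^c = B_p/T
A_c = (A_p - np.eye(3)) / T
B_c = B_p / T
```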

Hence, the simulation results are compared with the mentioned MPC methods using the same \(Q\) and \(R\) weights. The weight matrices are selected as \(Q={I}_{3}\) and \(R=1\); that is, the control input is weighted in the performance index in the same way as the plant states. Since the weights \(Q\) and \(R\) are invertible, no factorization is necessary.

The results of Theorem 2 are applied to the MPDC design, while the constraint on the control input is set as \(\left|u\left(k\right)\right|\le 0.5\). In the numerical simulation, the reference signal is assumed to be zero and the sample time is selected as 1 s. The control and prediction horizons tend to infinity. The optimization problem is numerically solved via LMILab, and the MPC parameters are updated at each sample time. The following quadratic cost function is considered as the performance index:

$${J}_{0}=\sum_{k=0}^{+\infty }\left({{x}_{p}\left(k\right)}^{T}Q{x}_{p}\left(k\right)+{u\left(k\right)}^{T}Ru\left(k\right)\right)$$
(73)
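In practice, the index (73) is evaluated over the finite simulation horizon. A minimal sketch (the helper name `cost_J0` is illustrative, not from the source) is:

```python
import numpy as np

def cost_J0(x_traj, u_traj, Q, R):
    # Finite-horizon truncation of the performance index (73):
    # sums x^T Q x + u^T R u over the recorded trajectory
    J = 0.0
    for x, u in zip(x_traj, u_traj):
        J += float(x @ Q @ x + u @ R @ u)
    return J
```

With the weights \(Q={I}_{3}\) and \(R=1\) of the example, each summand is simply the squared state norm plus the squared control input.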

The performance criterion is evaluated by applying Theorem 2, and the comparative results are shown in Table 1. Each controller is designed such that the cost function \({J}_{0}\) is minimized.

Table 1 The comparison of the performance indexes

The generated control signal \(u(k)\) is plotted in Fig. 1. The states of the example are also illustrated in Figs. 2, 3 and 4. The upper bound \(\gamma \) of the cost value at each sample time is depicted in Fig. 5. It is seen that the state deviations of the uncertain plant (71) are considerably smaller under the suggested predictive control than under the other control techniques. The simulation results obtained with the proposed and existing MPCs are summarized in Figs. 1–5 and Table 1.

Fig. 1
figure 1

The applied control signal \(u(k)\)

Fig. 2
figure 2

The first state of the plant \({x}_{1}(k)\)

Fig. 3
figure 3

The second state of the plant \({x}_{2}(k)\)

Fig. 4
figure 4

The third state of the plant \({x}_{3}(k)\)

Fig. 5
figure 5

The upper bound of the cost function \(\gamma \)

As a consequence, the outcomes verify the performance improvement of the closed-loop system compared with the other predictive control methods. Therefore, the control goals are accomplished by the presented MPDC in discrete-time systems under the system uncertainty and the given control constraint.

6 Conclusion

The MPDC design has been investigated for discrete-time uncertain systems. A quadratic objective function is selected as the control design requirement, and a matrix transformation is used to express the results in terms of LMIs. It is shown that the MPDC synthesis can be translated into an LMI minimization problem by means of this transformation. The dynamic controller parameters are updated at each sample time via the solution of the optimization problem. The procedure is applied to a discrete-time example to demonstrate the effectiveness of the proposed approach versus the existing results. The efficiency of the suggested MPDC is numerically shown in terms of the control and transient performances.