6.1 Chapter Overview

This chapter treats the following topics: (a) Adaptive neurofuzzy control of micro-actuators, (b) Nonlinear optimal control of underactuated MEMS.

With reference to (a) the chapter presents an adaptive fuzzy approach to the control of electrostatically actuated MEMS, which is based on differential flatness theory and which uses exclusively output feedback. It is shown that the model of the electrostatically actuated MEMS is differentially flat, which permits its transformation into the so-called linear canonical form. In the new description of the system’s dynamics the transformed control inputs contain unknown terms which depend on the system’s parameters. To identify these terms, adaptive fuzzy approximators are used in the control loop. Thus an adaptive fuzzy control scheme is implemented in which the unknown or unmodeled system dynamics is approximated by neurofuzzy networks, and this information is then used by a feedback controller that makes the electrostatically actuated MEMS converge to the desirable motion setpoints. The adaptive control scheme is implemented exclusively with output feedback, while the state vector elements which are not directly measured are estimated with the use of a state observer that operates in the control loop. The learning rate of the adaptive fuzzy system is suitably computed from Lyapunov analysis, so as to ensure that the learning procedure for the unknown system’s parameters, the dynamics of the observer and the dynamics of the control loop all remain stable. The Lyapunov stability analysis depends on two Riccati equations, one associated with the feedback controller and one associated with the state observer.

With reference to (b) the chapter proposes a nonlinear optimal control method for solving the problem of control of coupled underactuated micro-electromechanical systems (MEMS). The MEMS model consists of a Van der Pol oscillator elastically coupled with a forced Duffing oscillator. The dynamic model of the MEMS is approximately linearized around a temporary operating point with the use of first-order Taylor series expansion and through the computation of the Jacobian matrices of its state-space model. For the approximately linearized model of the MEMS a nonlinear optimal (H-infinity) feedback controller is designed. This controller constitutes the solution of the MEMS optimal control problem under model uncertainty and external perturbations. The computation of the feedback control gain relies on the solution of an algebraic Riccati equation at each time step of the control method. Finally, to achieve state estimation-based control through the measurement of a small number of the MEMS state vector elements, the H-infinity Kalman Filter is used as a robust state estimator. In both cases (a) and (b) the global asymptotic stability properties of the control scheme are proven through Lyapunov analysis.

6.2 Adaptive Neurofuzzy Control of Microactuators

6.2.1 Outline

As micro and nanotechnology develop fast, the use of MEMS, and particularly of microactuators, is rapidly expanding. One can note several systems where the use of microactuators has become indispensable and the solution of the associated control problems has become a prerequisite. In [501, 507, 649, 651] electrostatic microactuators are used in adaptive optics and optical communications. In [56, 327] microactuators are used for micromanipulation and precise positioning of microobjects. Several approaches to the control of microactuators have been proposed. In [263, 276, 550] adaptive control methods have been used. In [142, 607] solution of microactuation control problems through robust control approaches has been attempted. In [482] backstepping control has been used, while in [550] an output feedback control scheme has been implemented. Additional results for the stabilization and control of microactuators have been presented in [192, 389]. In such control systems, convergence of the state vector elements to the associated reference setpoints has to be achieved with accuracy, despite modeling uncertainties, parametric variations or external perturbations. Moreover, the reliable functioning of the control loop has to be assured despite difficulties in measuring the complete state vector of the MEMS. The present section develops a new method for the control of micro-electromechanical systems (MEMS) which is based on differential flatness theory. The considered control problem is a nontrivial one because of the unknown nonlinear dynamical model of the actuator and because of the constraint to implement the control using exclusively output feedback (it is unreliable and technically difficult to use sensor measurements for the monitoring of all state variables of the micro-actuator).
The differential flatness theory control approach is based on an exact linearization of the MEMS dynamics which avoids the numerical errors of the approximate linearization that is performed by other nonlinear control methods [93, 235, 335, 454, 457].

First, the section shows that the dynamic model of the studied microactuator is a differentially flat one. This means that all its state variables and the control input can be written as functions of one single algebraic variable, which is the flat output, and also as functions of the flat output’s derivatives [267, 450, 452, 476, 519]. This change of variables (differential flatness theory-based diffeomorphism) enables one to transform the nonlinear model of the actuator into the linear canonical (Brunovsky) form [145, 334, 546, 572]. In the latter description of the MEMS, the transformed control input contains elements which are associated with the unknown nonlinear dynamics of the system. These are identified on-line with the use of neurofuzzy approximators and the estimated system dynamics is finally used for the computation of the control signal that will make the MEMS state vector track the desirable setpoints. Thus an adaptive fuzzy control scheme is implemented [457, 462]. The learning rate of the neurofuzzy approximators is determined by the requirement to assure that the Lyapunov function of the control loop will always have a negative first-order derivative.

Next, another problem that has to be dealt with is that only output feedback can be used for the implementation of the MEMS control scheme. The nonmeasurable state variables of the microactuator have to be reconstructed with the use of a state estimator (observer), which functions again inside the control loop. Thus, finally, the Lyapunov function for the proposed control scheme comprises three quadratic terms: (i) a term that describes the tracking error of the MEMS state variables from the reference setpoints, (ii) a term that describes the error in the estimation of the non-measurable state vector elements of the microactuator with respect to the reference setpoints, and (iii) a sum of quadratic terms associated with the distance of the weights of the neurofuzzy approximators from the values that give the best approximation of the unknown MEMS dynamics. It is proven that an adaptive (learning) control law can be found assuring that the Lyapunov function will continuously have a negative first-order derivative, thus also confirming that the stability of the control loop will be preserved and that accurate tracking of the setpoints by the system’s state variables will be achieved (H-infinity tracking performance).

Fig. 6.1

Diagram of the 1-DOF parallel-plate electrostatic actuator

6.2.2 Dynamic Model of the Electrostatic Actuator

The considered MEMS (electrostatic microactuator) is depicted in Fig. 6.1. The dynamic model of the MEMS has been analyzed in [172, 199, 648, 650], where model-based control approaches have been mostly developed. It is assumed that Q(t) is the charge of the device, while \(\varepsilon \) is the permittivity in the gap. Then the capacitance of the device is

$$\begin{aligned} C(t)={{{\varepsilon }A} \over {G(t)}} \end{aligned}$$
(6.1)

while the attractive electrostatic force on the moving plate is

$$\begin{aligned} F(t)={{V_a^2} \over 2}{{{\partial }C} \over {{\partial }G}}=-{{{\varepsilon }A{V_a^2}} \over {2G^2(t)}}=-{{Q^2(t)} \over {2{\varepsilon }A}} \end{aligned}$$
(6.2)

Thus, the equation of motion of the actuator is given by

$$\begin{aligned} m\ddot{G}(t)+b\dot{G}(t)+k(G(t)-G_0)=-{{Q^2(t)} \over {2{\varepsilon }A}} \end{aligned}$$
(6.3)

From Eqs. (6.2) and (6.3) it can be concluded that the electrostatic force F increases with the inverse square of the gap, while the restoring mechanical force which is associated with the term \(k(G(t)-G_0)\) increases linearly with the plate deflection. A critical value for the voltage across the device is called pull-in voltage and is given by [651]

$$\begin{aligned} V_{pi}=\sqrt{{8kG_0^2} \over {{27}C_0}} \end{aligned}$$
(6.4)

It is assumed that the MEMS starts operating from an initially uncharged state at \(t=0\). Then the charge of the electrodes at time instant t is given by \(Q(t)={\int _0^t}I_s(\tau )d{\tau }\), or equivalently \(\dot{Q}(t)=I_s(t)\). By applying Kirchhoff’s voltage law one has for the current that goes through the resistor

$$\begin{aligned} \dot{Q}(t)={1 \over R}\left( V_s(t)-{{Q(t)G(t)} \over {{\varepsilon }A}}\right) \end{aligned}$$
(6.5)

Next, the equations of the system’s dynamics given in Eqs. (6.3)–(6.5) undergo a transformation which consists of a change of the time scale \(\tau ={\omega }t\) and of the following normalization

$$\begin{aligned} \begin{array}{ccc} x=1-{G \over G_0} &{} q={Q \over Q_{pi}}&{} \\ u={{V_s} \over V_{pi}} &{} i={{I_s} \over {V_{pi}{\omega _0}{C_0}}} &{} r={\omega _0}{C_0}{R} \end{array} \end{aligned}$$
(6.6)

where \(C_0={{{\varepsilon }A} \over G_0}\), \(Q_{pi}={3 \over 2}{C_0}V_{pi}\) is the pull-in charge corresponding to the pull-in voltage, \(\omega _0=\sqrt{k/m}\) is the undamped natural frequency, and \(\zeta ={b \over {2m\omega _0}}\) is the damping ratio. The normalized voltage across the actuator can be expressed in terms of normalized deflection x of the moveable electrode, that is \({u_o}={3 \over 2}q(1-x)\), while the dynamics of the normalized charge is \(\dot{q}={2 \over 3}i\).
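Under illustrative (hypothetical) physical parameters, the above definitions can be evaluated numerically; the values below are placeholders chosen only to exercise Eqs. (6.1) and (6.4) and the normalization constants, not values taken from the chapter:

```python
import math

# Hypothetical physical parameters of a parallel-plate microactuator
eps = 8.854e-12   # permittivity in the gap [F/m]
A   = 1e-6        # plate area [m^2]
G0  = 2e-6        # zero-voltage gap [m]
k   = 0.5         # spring stiffness [N/m]
m   = 1e-9        # moving-plate mass [kg]
b   = 1e-6        # damping coefficient [N s/m]

C0     = eps * A / G0                          # capacitance at G = G0, Eq. (6.1)
V_pi   = math.sqrt(8 * k * G0**2 / (27 * C0))  # pull-in voltage, Eq. (6.4)
Q_pi   = 1.5 * C0 * V_pi                       # pull-in charge
omega0 = math.sqrt(k / m)                      # undamped natural frequency
zeta   = b / (2 * m * omega0)                  # damping ratio

print(C0, V_pi, Q_pi, omega0, zeta)
```

The same computation delivers the scaling factors of Eq. (6.6), e.g. the normalized resistance \(r={\omega _0}{C_0}R\).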

After the aforementioned normalization and transformation, the dynamic model of the microactuator is written as [651]

$$\begin{aligned}&\qquad \qquad \dot{x}=v \nonumber \\&\dot{v}=-2{\zeta }v-x+{1 \over 3}q^2 \\&\dot{q}=-{1 \over r}q(1-x)+{2 \over {3r}}u\nonumber \end{aligned}$$
(6.7)

In the previous state-space model, v is a variable denoting the speed of deflection of the moving electrode, while q is a variable denoting the ratio between the actual charge of the plates Q and the pull-in charge \(Q_{pi}\). It holds that \(q={Q \over Q_{pi}}\), where \(Q_{pi}={3 \over 2}{C_0}V_{pi}\) and \(V_{pi}\) is the pull-in voltage.
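A minimal numerical sketch of the normalized model of Eq. (6.7) follows; the values of \(\zeta \), r, the constant input u and the integration horizon are illustrative assumptions:

```python
import numpy as np

def mems_rhs(state, u, zeta=0.1, r=1.0):
    """Right-hand side of the normalized microactuator dynamics, Eq. (6.7)."""
    x, v, q = state
    return np.array([v,
                     -2*zeta*v - x + q**2/3.0,
                     -q*(1 - x)/r + 2*u/(3*r)])

# Forward-Euler integration under a constant (hypothetical) input
dt, state = 1e-3, np.array([0.0, 0.0, 0.0])
for _ in range(5000):                      # integrate over 5 normalized time units
    state = state + dt * mems_rhs(state, u=0.5)
print(state)   # normalized deflection x, velocity v, charge q
```

For a moderate constant voltage the normalized deflection settles below the pull-in region, consistent with the qualitative discussion of Eq. (6.2).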

Remark 1

The previously analyzed MEMS dynamics is a highly nonlinear one and nonlinear control methods have to be used for it. One can distinguish three main approaches in the control of nonlinear dynamical systems: (i) control based on global linearization methods, (ii) control based on approximate linearization methods, (iii) Lyapunov methods.

The results of the present section are mostly based on approach (iii), that is, Lyapunov theory-based design of feedback controllers for dynamical systems with unknown model and incompletely measurable state vector. Compared with methods (i) and (ii), approach (iii) is a completely model-free one. Therefore, its major benefit is that it does not depend on prior knowledge of the microactuator’s dynamics. The main difficulty in the application of approach (iii) is that it may require operations between matrices of high dimension. Thus it becomes computationally more demanding than approaches (i) and (ii).

6.2.3 Linearization of the MEMS Model Using Lie Algebra

The MEMS nonlinear dynamics given in Eq. (6.7), with state vector defined as \(x=[x,v, q]\), is also written in the form

$$\begin{aligned} \dot{x}=f(x)+g(x)u \end{aligned}$$
(6.8)

where the vector fields f(x) and g(x) are defined as

$$\begin{aligned} f(x)= \begin{pmatrix} v \\ -2{\zeta }v-x+{1 \over 3}q^2 \\ -{1 \over r}q(1-x) \end{pmatrix} \ \ g(x)=\begin{pmatrix} 0 \\ 0 \\ {2 \over {3r}} \end{pmatrix} \end{aligned}$$
(6.9)

Using the above formulation, one can arrive at a linearized description of the MEMS dynamics using a differential geometric approach and the computation of Lie derivatives. The following state variables are defined: \(z_1=h_1(x)=x\), \(z_2={L_f}{h_1}(x)\) and \(z_3={L_f^2}{h_1}(x)\). It holds that

$$\begin{aligned} \begin{array}{cc} &{}z_2={L_f}{h_1}(x){\Rightarrow }z_2={{{\partial }{h_1}} \over {\partial {x_1}}}{f_1}+{{{\partial }{h_1}} \over {\partial {x_2}}}{f_2}+{{{\partial }{h_1}} \over {\partial {x_3}}}{f_3}{\Rightarrow }\\ &{}z_2=1{f_1}+0{f_2}+0{f_3}{\Rightarrow }z_2=f_1{\Rightarrow }z_2=v{\Rightarrow }z_2=\dot{x} \end{array} \end{aligned}$$
(6.10)

In a similar manner one computes

$$\begin{aligned} \begin{array}{cc} &{} z_3={L_f^2}{h_1}(x){\Rightarrow }z_3={{{\partial }{z_2}} \over {\partial {x_1}}}{f_1}+{{{\partial }{z_2}} \over {\partial {x_2}}}{f_2}+{{{\partial }{z_2}} \over {\partial {x_3}}}{f_3}{\Rightarrow }\\ &{} z_3=0{f_1}+1{f_2}+0{f_3}{\Rightarrow }z_3=\dot{v}{\Rightarrow }z_3=\ddot{x} \end{array} \end{aligned}$$
(6.11)

Moreover, one has that

$$\begin{aligned} \dot{z}_3=x^{(3)}={L_f^3}{h_1}(x)+{L_g}{L_f^2}{h_1}(x){\cdot }u \end{aligned}$$
(6.12)

where

$$\begin{aligned}&{L_f^3}{h_1}(x)={L_f}{z_3}{\Rightarrow }{L_f^3}{h_1}(x)={{{\partial }{z_3}} \over {\partial {x_1}}}{f_1}+{{{\partial }{z_3}} \over {\partial {x_2}}}{f_2}+{{{\partial }{z_3}} \over {\partial {x_3}}}{f_3}{\Rightarrow }\nonumber \\&{L_f^3}{h_1}(x)=-{f_1}-2{\zeta }{f_2}+{2 \over 3}q{f_3}{\Rightarrow }{L_f^3}{h_1}(x)=-v-2{\zeta }\dot{v}+{2 \over 3}q\left( -{1 \over r}q(1-x)\right) {\Rightarrow } \nonumber \\&{L_f^3}{h_1}(x)=-\dot{y}-2{\zeta }\ddot{y}-{1 \over r}(1-y){2 \over 3}q^2{\Rightarrow } \nonumber \\&\qquad \qquad {L_f^3}{h_1}(x)=-2{\zeta }\ddot{y}-\dot{y}-{2 \over r}(1-y)[\ddot{y}+2{\zeta }\dot{y}+y] \end{aligned}$$
(6.13)

Following a similar procedure one finds

$$\begin{aligned} \begin{array}{cc} &{}{L_g}{L_f^2}h_1(x)={L_g}{z_3}{\Rightarrow }{L_g}{L_f^2}h_1(x)={{{\partial }{z_3}} \over {\partial {x_1}}}{g_1}+{{{\partial }{z_3}} \over {\partial {x_2}}}{g_2}+{{{\partial }{z_3}} \over {\partial {x_3}}}{g_3}{\Rightarrow } \\ &{} {L_g}{L_f^2}h_1(x)=-{g_1}-2{\zeta }{g_2}+{2 \over 3}q{g_3}{\Rightarrow }{L_g}{L_f^2}h_1(x)={4 \over {9r}}q{\Rightarrow }\\ &{} {L_g}{L_f^2}h_1(x)={4 \over {9r}}\sqrt{3[\ddot{y}+2{\zeta }\dot{y}+y]} \end{array} \end{aligned}$$
(6.14)
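The Lie derivative computations of Eqs. (6.10)–(6.14) can be verified symbolically; the following sketch uses SymPy with the vector fields of Eq. (6.9):

```python
import sympy as sp

x, v, q, zeta, r = sp.symbols('x v q zeta r')
f = sp.Matrix([v, -2*zeta*v - x + q**2/3, -q*(1 - x)/r])   # drift vector field, Eq. (6.9)
g = sp.Matrix([0, 0, sp.Rational(2, 3)/r])                 # input vector field, Eq. (6.9)
s = sp.Matrix([x, v, q])

def lie(h, vec):
    # Lie derivative L_vec h = (dh/ds) . vec
    return (sp.Matrix([h]).jacobian(s) * vec)[0]

h1 = x
z2 = lie(h1, f)                  # = v, cf. Eq. (6.10)
z3 = lie(z2, f)                  # = -2*zeta*v - x + q**2/3, cf. Eq. (6.11)
Lf3 = sp.simplify(lie(z3, f))    # cf. Eq. (6.13)
LgLf2 = sp.simplify(lie(z3, g))  # should equal 4*q/(9*r), cf. Eq. (6.14)
print(z2, z3, LgLf2)
```

This confirms in particular that \({L_g}{L_f^2}h_1(x)={4 \over {9r}}q\).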

For the linearized description of the MEMS dynamics given in Eq. (6.12), and using that \(v={L_f^3}{h_1}(x)+{L_g}{L_f^2}{h_1}(x)u\) one obtains the state-space description

$$\begin{aligned} \begin{pmatrix} \dot{z}_1 \\ \dot{z}_2 \\ \dot{z}_3 \end{pmatrix}= \begin{pmatrix} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}+ \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}v \end{aligned}$$
(6.15)
$$\begin{aligned} z^{meas}= \begin{pmatrix} 1&0&0 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} \end{aligned}$$
(6.16)

For the linearized description of the system given in Eqs. (6.15) and (6.16) the design of a state feedback controller is carried out as follows:

$$\begin{aligned} v=y_d^{(3)}-{k_1}(\ddot{y}-\ddot{y}_d)-{k_2}(\dot{y}-\dot{y}_d)-k_3(y-y_d) \end{aligned}$$
(6.17)

which results in tracking error dynamics of the form

$$\begin{aligned} e^{(3)}(t)+{k_1}\ddot{e}(t)+{k_2}\dot{e}(t)+{k_3}e(t)=0 \end{aligned}$$
(6.18)

By selecting the feedback gains \(k_i, \ i=1,2,3\) such that the characteristic polynomial of Eq. (6.18) is a Hurwitz one, it is assured that \(\lim _{t{\rightarrow }\infty }e(t)=0\).
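The selection of Hurwitz gains can be illustrated numerically; the pole locations below are hypothetical, and the gains are read off the corresponding characteristic polynomial of Eq. (6.18):

```python
import numpy as np

# Place the roots of e''' + k1*e'' + k2*e' + k3*e = 0 at hypothetical stable locations
poles = [-2.0, -2.5, -3.0]
k1, k2, k3 = np.poly(poles)[1:]   # coefficients of s^3 + k1 s^2 + k2 s + k3
assert k1 > 0 and k2 > 0 and k3 > 0

# Simulate the tracking-error dynamics of Eq. (6.18) by forward Euler
e = np.array([1.0, 0.0, 0.0])     # [e, e', e'']
A = np.array([[0.0, 1, 0], [0, 0, 1], [-k3, -k2, -k1]])
dt = 1e-3
for _ in range(10000):            # integrate over 10 time units
    e = e + dt * A @ e
print(e)                          # decays toward zero for Hurwitz gains
```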

6.2.4 Differential Flatness of the Electrostatic Actuator

6.2.4.1 Differential Flatness Properties of the Electrostatic Microactuator

The dynamic model of the electrostatic microactuator given in Eq. (6.7) is considered. The flat output of the model is taken to be \(y=x\). Therefore, it also holds \(v=\dot{y}\). From the second row of the state-space equations, given in Eq. (6.7), one has

$$\begin{aligned} \begin{array}{cc} &{}\ddot{y}=-2{\zeta }\dot{y}-y+{1 \over 3}{q^2}{\Rightarrow }q^2=3[\ddot{y}+2{\zeta }\dot{y}+y]\\ &{}{\Rightarrow }q=\sqrt{3[\ddot{y}+2{\zeta }\dot{y}+y]}{\Rightarrow }q=f_q(y,\dot{y},\ddot{y}) \end{array} \end{aligned}$$
(6.19)

From the third row of the state space equations, given in Eq. (6.7) one has

$$\begin{aligned} u={{3r} \over 2}\left[ \dot{q}+{1 \over r}q(1-x)\right] {\Rightarrow }u=f_u(y,\dot{y},\ddot{y}, y^{(3)}) \end{aligned}$$
(6.20)

Since all state variables and the control input of the system are expressed as functions of the flat output and its derivatives, it is concluded that the model of the electrostatic actuator is a differentially flat one.
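The flatness property can be checked numerically: given a smooth flat-output trajectory y(t) (the trajectory below is a hypothetical choice), the state q and the input u are recovered from Eqs. (6.19) and (6.20):

```python
import numpy as np

zeta, r = 0.1, 1.0   # illustrative normalized parameters

# Hypothetical smooth flat-output trajectory and its derivatives
def y(t):    return 0.1 * (1 - np.cos(t))
def yd(t):   return 0.1 * np.sin(t)
def ydd(t):  return 0.1 * np.cos(t)
def yddd(t): return -0.1 * np.sin(t)

def flat_to_states(t):
    """Recover x, v, q and u from the flat output, Eqs. (6.19)-(6.20)."""
    x, v = y(t), yd(t)
    q = np.sqrt(3 * (ydd(t) + 2*zeta*yd(t) + y(t)))        # Eq. (6.19)
    qdot = 3 * (yddd(t) + 2*zeta*ydd(t) + yd(t)) / (2*q)   # from d/dt of q^2 = 3[...]
    u = 1.5 * r * (qdot + q*(1 - x)/r)                     # Eq. (6.20)
    return x, v, q, u

print(flat_to_states(1.0))
```

Along this trajectory \(\ddot{y}+2{\zeta }\dot{y}+y>0\), so the square root in Eq. (6.19) is well defined.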

6.2.4.2 Linearization of the MEMS Model Using Differential Flatness Theory

From the second row of the state-space model given in Eq. (6.7) it holds that

$$\begin{aligned} \ddot{y}=-2{\zeta }\dot{y}-y+{1 \over 3}{q^2} \end{aligned}$$
(6.21)

By deriving once more with respect to time one gets

$$\begin{aligned} y^{(3)}=-2{\zeta }\ddot{y}-\dot{y}+{2 \over 3}q\dot{q} \end{aligned}$$
(6.22)

By substituting the third row of the state-space model given in Eq. (6.7) one obtains

$$\begin{aligned} \begin{array}{cc} y^{(3)}=-2{\zeta }\ddot{y}-\dot{y}+{2 \over 3}q\left[ -{1 \over r}q(1-x)+{2 \over {3r}}u\right] {\Rightarrow } \\ y^{(3)}=-2{\zeta }\ddot{y}-\dot{y}-{2 \over {3r}}(1-x){q^2}+{4 \over {9r}}qu \end{array} \end{aligned}$$
(6.23)

Next, using from Eq. (6.19) that \(q^2=3[\ddot{y}+2{\zeta }\dot{y}+y]\), or equivalently that \(q=\sqrt{3[\ddot{y}+2{\zeta }\dot{y}+y]}\), the following relation is obtained

$$\begin{aligned} y^{(3)}=-2{\zeta }\ddot{y}-\dot{y}-{2 \over r}(1-y)[\ddot{y}+2{\zeta }\dot{y}+y]+{4 \over {9r}}\sqrt{3[\ddot{y}+2{\zeta }\dot{y}+y]}u \end{aligned}$$
(6.24)

or equivalently

$$\begin{aligned} y^{(3)}=f(y,\dot{y},\ddot{y})+g(y,\dot{y},\ddot{y})u \end{aligned}$$
(6.25)

where

$$\begin{aligned} f(y,\dot{y},\ddot{y})=-2{\zeta }\ddot{y}-\dot{y}-{2 \over r}(1-y)[\ddot{y}+2{\zeta }\dot{y}+y] \end{aligned}$$
(6.26)
$$\begin{aligned} g(y,\dot{y},\ddot{y})={4 \over {9r}}\sqrt{3[\ddot{y}+2{\zeta }\dot{y}+y]} \end{aligned}$$
(6.27)

For the linearized description of the MEMS dynamics given in Eq. (6.25), and using the notation \(z_1=y\), \(z_2=\dot{y}\) and \(z_3=\ddot{y}\), and \(v=f(y,\dot{y},\ddot{y})+g(y,\dot{y},\ddot{y})u\) one arrives also at the state-space description

$$\begin{aligned} \begin{pmatrix} \dot{z}_1 \\ \dot{z}_2 \\ \dot{z}_3 \end{pmatrix}= \begin{pmatrix} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}+ \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}v \end{aligned}$$
(6.28)
$$\begin{aligned} z^{meas}=\begin{pmatrix} 1&0&0 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} \end{aligned}$$
(6.29)

For the linearized description of the system given in Eq. (6.25) the design of a state feedback controller is carried out as follows:

$$\begin{aligned} v=y_d^{(3)}-{k_1}(\ddot{y}-\ddot{y}_d)-{k_2}(\dot{y}-\dot{y}_d)-k_3(y-y_d) \end{aligned}$$
(6.30)

which results in tracking error dynamics of the form

$$\begin{aligned} e^{(3)}(t)+{k_1}\ddot{e}(t)+{k_2}\dot{e}(t)+{k_3}e(t)=0 \end{aligned}$$
(6.31)

By selecting the feedback gains \(k_i, \ i=1,2,3\) such that the characteristic polynomial of Eq. (6.31) is a Hurwitz one, it is assured that \(\lim _{t{\rightarrow }\infty }e(t)=0\).
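The complete flatness-based control loop of Eqs. (6.25)–(6.30) can be sketched as follows; the parameter values, gains, initial condition and setpoint are hypothetical, and the control input is obtained by inverting Eq. (6.25):

```python
import numpy as np

zeta_c, r = 0.1, 1.0
k1, k2, k3 = 4.5, 6.5, 3.0     # Hurwitz gains, poles at -1, -1.5, -2 (hypothetical)
y_des = 0.05                   # constant deflection setpoint (hypothetical)

state = np.array([0.02, 0.0, 0.3])   # initial [x, v, q]
dt = 1e-3
for _ in range(15000):               # integrate over 15 normalized time units
    x, v, q = state
    y_, yd_, ydd_ = x, v, -2*zeta_c*v - x + q**2/3          # flat output and derivatives
    f_val = -2*zeta_c*ydd_ - yd_ - (2/r)*(1 - y_)*(ydd_ + 2*zeta_c*yd_ + y_)  # Eq. (6.26)
    g_val = (4/(9*r))*np.sqrt(3*(ydd_ + 2*zeta_c*yd_ + y_))                   # Eq. (6.27)
    vctl = -k1*ydd_ - k2*yd_ - k3*(y_ - y_des)              # Eq. (6.30), constant setpoint
    u = (vctl - f_val) / g_val                              # invert Eq. (6.25)
    state = state + dt*np.array([v,
                                 -2*zeta_c*v - x + q**2/3,
                                 -q*(1 - x)/r + 2*u/(3*r)]) # plant dynamics, Eq. (6.7)
print(state)   # the deflection x settles near y_des
```

Note that along the system's trajectories \(\ddot{y}+2{\zeta }\dot{y}+y=q^2/3\), so \(g(y,\dot{y},\ddot{y})\) stays nonzero as long as the charge q does.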

6.2.5 Adaptive Fuzzy Control of the MEMS Model Using Output Feedback

6.2.5.1 Problem Statement

Adaptive fuzzy control aims at solving the microactuator’s control problem in case that its dynamics is unknown and the state vector is not completely measurable. It has been shown that after applying the differential flatness theory-based transformation, the following non-linear SISO system is obtained:

$$\begin{aligned} x^{(n)}=f(x,t)+g(x, t)u+\tilde{d} \end{aligned}$$
(6.32)

where f(x,t) and g(x,t) are unknown nonlinear functions and \(\tilde{d}\) is an unknown additive disturbance. The objective is to force the system’s output \(y=x\) to follow a given bounded reference signal \(x_d\). In the presence of non-Gaussian disturbances w, successful tracking of the reference signal is expressed by the \(H_{\infty }\) criterion [450, 457].

$$\begin{aligned} {\int _0^T}{e^T}Qe{dt} \le {\rho ^2} {\int _0^T}{w^T}w{dt} \end{aligned}$$
(6.33)

where \(\rho \) is the attenuation level and corresponds to the maximum singular value of the transfer function G(s) of the linearized equivalent of Eq. (6.32).
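A discretized numerical reading of the criterion of Eq. (6.33) follows; the scalar signals e(t), w(t), the weight Q and the attenuation level \(\rho \) are all hypothetical choices:

```python
import numpy as np

# Discretized check of the H-infinity tracking criterion, Eq. (6.33)
T, dt = 10.0, 1e-3
t = np.arange(0.0, T, dt)
e = 0.5*np.exp(-t)            # decaying tracking error (illustrative)
w = np.ones_like(t)           # persistent unit disturbance (illustrative)
Q, rho = 1.0, 0.5             # weight and attenuation level (illustrative)

lhs = np.sum(e*Q*e)*dt        # integral of e^T Q e over [0, T]
rhs = rho**2*np.sum(w*w)*dt   # rho^2 times the disturbance energy
print(lhs, rhs, lhs <= rhs)
```

For these signals the tracking-error energy is bounded by \(\rho ^2\) times the disturbance energy, i.e. the attenuation level is attained.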

6.2.5.2 Transformation of Tracking into a Regulation Problem

The flatness-based adaptive fuzzy control approach for nonlinear systems consists of the following steps: (i) a linearizing transformation is applied to the system’s model; (ii) the unknown system dynamics are approximated by neural or fuzzy estimators; (iii) an \(H_{\infty }\) control term is employed to compensate for estimation errors and external disturbances. If the state vector is not measurable, it is reconstructed with the use of an observer.

For measurable state vector x, desirable state vector \(x_m\) and uncertain functions f(xt) and g(xt) an appropriate control law for (6.32) would be

$$\begin{aligned} u={ {1 \over {\hat{g}(x,t)}}[x_m^{(n)}-\hat{f}(x, t)-{K^T}e+{u_c}]} \end{aligned}$$
(6.34)

where \(\hat{f}\) and \(\hat{g}\) are the approximations of the unknown parts of the system dynamics f and g, respectively, which can be given by the outputs of suitably trained neuro-fuzzy networks. The term \(u_c\) denotes a supervisory controller which compensates for the approximation error \(w=[f(x,t)-\hat{f}(x,t)]+[g(x,t)-\hat{g}(x, t)]u\), as well as for the additive disturbance \(\tilde{d}\). Moreover, the feedback control gains \(K^T=[k_n, k_{n-1},\ldots , k_1]\) and the vector of the state vector elements’ tracking error \(e^T=[e,\dot{e},\ddot{e},\ldots , e^{(n-1)}]\) are chosen such that the polynomial \(e^{(n)}+{k_1}e^{(n-1)}+{k_2}e^{(n-2)}+\cdots +{k_n}e\) is Hurwitz. The substitution of the control law of Eq. (6.34) into Eq. (6.32) results in

$$\begin{aligned} \begin{array}{c} x^{(n)}=f(x,t)+g(x, t){1 \over {\hat{g}(x,t)}}[x_m^{(n)}-{\hat{f}(x,t)}-{K^T}e+u_c]+ \tilde{d} \Rightarrow \\ x^{(n)}=f(x,t)+ \{ \hat{g}(x,t)+[g(x,t)-\hat{g}(x, t)] \} {1 \over {\hat{g}(x,t)}}[x_m^{(n)}-{\hat{f}(x,t)}-{K^T}e+u_c]+ \tilde{d} \Rightarrow \\ x^{(n)}=f(x,t)+ \Bigg \{{\hat{g}(x,t) \over {\hat{g}(x,t)}}[x_m^{(n)}-{\hat{f}(x,t)}-{K^T}e+u_c]+ [g(x,t)-\hat{g}(x,t)]u\Bigg \} + \tilde{d} \Rightarrow \\ x^{(n)}=f(x,t)+ x_m^{(n)}-\hat{f}(x,t)-{K^T}e+{u_c}+[g(x,t)-\hat{g}(x,t)]u+\tilde{d} \Rightarrow \\ x^{(n)}-x_m^{(n)}=-{K^T}e+[f(x,t)-\hat{f}(x,t)]+[g(x,t)-\hat{g}(x,t)]u+{u_c}+\tilde{d} \Rightarrow \\ e^{(n)}=-{K^T}e+{u_c}+[f(x,t)-\hat{f}(x,t)]+[g(x,t)-\hat{g}(x, t)]u+\tilde{d} \end{array} \end{aligned}$$
(6.35)

The above relation can be written in state-equation form. The state vector is taken to be \(e^T=[e,\dot{e},\ldots , e^{(n-1)}]\), which yields

$$\begin{aligned} \dot{e}=Ae-B{K^T}e+B{u_c}+B\{[f(x,t)-\hat{f}(x,t)]+[g(x,t)-\hat{g}(x, t)]u+\tilde{d}\} \end{aligned}$$
(6.36)

or equivalently

$$\begin{aligned} \begin{array}{cc} &{}\dot{e}=(A-B{K^T})e+B{u_c}+B \{[f(x,t)-\hat{f}(x,t)]+[g(x,t)-\hat{g}(x, t)]u+\tilde{d} \}\\ &{}{e_1}={C^T}e \end{array} \end{aligned}$$
(6.37)

where

$$\begin{aligned}&A=\begin{pmatrix} 0 &{} 1 &{} 0 &{} \cdots &{} \cdots &{} 0 \\ 0 &{} 0 &{} 1 &{} \cdots &{} \cdots &{} 0 \\ \cdots &{} \cdots &{} \cdots &{} \cdots &{} \cdots &{} \cdots \\ \cdots &{} \cdots &{} \cdots &{} \cdots &{} \cdots &{} \cdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} \cdots &{} 1 \\ 0 &{} 0 &{} 0 &{} \cdots &{} \cdots &{} 0 \end{pmatrix} \\&B^T=\begin{pmatrix} 0,0,\cdots , 0,1 \end{pmatrix}, \ C^T=\begin{pmatrix}1,0,\cdots , 0,0\end{pmatrix} \nonumber \\&K^T=\begin{pmatrix}k_0,k_1,\cdots , k_{n-2}, k_{n-1}\end{pmatrix} \nonumber \end{aligned}$$
(6.38)

where \(e_1\) denotes the output error \(e_1=x-{x_m}\). Eq. (6.37) describes a regulation problem.
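The regulation-form matrices of Eq. (6.38) and the closed-loop matrix \(A-B{K^T}\) can be assembled as follows; the gains are hypothetical and correspond to poles at \(-2, -2.5, -3\):

```python
import numpy as np

# Matrices of the regulation-form description of Eqs. (6.37)-(6.38), for n = 3
n = 3
A = np.diag(np.ones(n - 1), k=1)       # chain-of-integrators matrix
B = np.zeros((n, 1)); B[-1, 0] = 1.0
Ct = np.array([[1.0, 0.0, 0.0]])       # C^T as a row vector
K = np.array([[15.0, 18.5, 7.5]])      # [k_n, ..., k_1], hypothetical Hurwitz gains

Acl = A - B @ K                        # closed-loop matrix A - B K^T
eig = np.linalg.eigvals(Acl)
print(np.sort(eig.real))               # recovers the chosen stable poles
```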

6.2.5.3 Estimation of the State Vector

The control of the microactuator described by Eq. (6.32) becomes more complicated when the state vector x is not directly measurable and has to be reconstructed through a state observer. The following definitions are used

  • error of the state vector \(e=x-x_m\)

  • error of the estimated state vector \(\hat{e}=\hat{x}-x_m\)

  • observation error \(\tilde{e}=e-\hat{e}=(x-x_m)-(\hat{x}-x_m)\)

When an observer is used to reconstruct the state vector, the control law of Eq. (6.34) is written as

$$\begin{aligned} u={ {1 \over {\hat{g}(\hat{x},t)}}[x_m^{(n)}-\hat{f}(\hat{x}, t)-{K^T}\hat{e}+{u_c}]} \end{aligned}$$
(6.39)

Applying Eq. (6.39) to the nonlinear system described by Eq. (6.32), after some operations, results in

$$\begin{aligned} x^{(n)}=x_m^{(n)}&-{K^T\hat{e}}+{u_c}+[f(x,t)-\hat{f}(\hat{x},t)]+\\&[g(x,t)-\hat{g}(\hat{x}, t)]u+\tilde{d} \end{aligned}$$

It holds \(e=x-x_m \Rightarrow x^{(n)}=e^{(n)}+x_m^{(n)}\). Substituting \(x^{(n)}\) in the above equation gives

$$\begin{aligned} \begin{array}{c} e^{(n)}+x_m^{(n)}=x_m^{(n)}-{K^T\hat{e}}+u_c+[f(x,t)-\hat{f}(\hat{x},t)]+\\ +[g(x,t)-\hat{g}(\hat{x}, t)]u+\tilde{d}\Rightarrow \end{array} \end{aligned}$$
(6.40)
$$\begin{aligned} \begin{array}{c} \dot{e}=Ae-B{K^T\hat{e}}+B{u_c}+B\{[f(x,t)-\hat{f}(\hat{x},t)]+\\ +[g(x,t)-\hat{g}(\hat{x}, t)]u+\tilde{d}\} \end{array} \end{aligned}$$
(6.41)
$$\begin{aligned} e_1={C^T}e \end{aligned}$$
(6.42)

where \(e=[e,\dot{e},\ddot{e},\ldots , e^{(n-1)}]^T\), and \(\hat{e}=[\hat{e},\dot{\hat{e}},\ddot{\hat{e}},\ldots ,\hat{e}^{(n-1)}]^T\).

The state observer is designed according to Eqs. (6.41) and (6.42) and is given by [457]:

$$\begin{aligned} \dot{\hat{e}}=A{\hat{e}}-B{K^T}{\hat{e}}+{K_o}[e_1-{C^T}{\hat{e}}] \end{aligned}$$
(6.43)
$$\begin{aligned} \hat{e}_1={C^T}{\hat{e}} \end{aligned}$$
(6.44)

The observation gain \(K_o=[k_{o_0}, k_{o_1},\ldots , k_{o_{n-2}}, k_{o_{n-1}}]^T\) is selected so as to ensure the convergence of the observer.
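Since the pair \((A, C^T)\) of Eq. (6.38) is in observable canonical form, the observer gain can be read directly from the coefficients of a desired characteristic polynomial; the observer poles below are hypothetical, chosen faster than the controller's:

```python
import numpy as np

A = np.array([[0.0, 1, 0], [0, 0, 1], [0, 0, 0]])
C = np.array([[1.0, 0.0, 0.0]])        # C^T as a row vector
obs_poles = [-6.0, -7.0, -8.0]         # desired observer poles (hypothetical)
a = np.poly(obs_poles)                 # s^3 + a1 s^2 + a2 s + a3
Ko = a[1:].reshape(3, 1)               # for this canonical pair, A - Ko C^T has
                                       # exactly the characteristic polynomial above
eig = np.linalg.eigvals(A - Ko @ C)
print(np.sort(eig.real))
```

Placing the observer poles well to the left of the controller poles makes the estimation error \(\tilde{e}\) decay faster than the tracking error.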

6.2.5.4 The Additional Control Term \(u_c\)

The additional term \(u_c\) which appeared in Eq. (6.34) is also introduced in the observer-based control to compensate for:

  • The external disturbances \(\tilde{d}\)

  • The state vector estimation error \(\tilde{e}=e-\hat{e}=x-\hat{x}\)

  • The approximation error of the nonlinear functions f(x,t) and g(x,t), denoted as \(w=[f(x,t)-\hat{f}(\hat{x},t)]+[g(x,t)-\hat{g}(\hat{x}, t)]u\)

The control signal \(u_c\) consists of two terms, namely:

  • the \(H_{\infty }\) control term, \(u_a=-{1 \over r}{B^T}P\tilde{e}\), for the compensation of \(\tilde{d}\) and w

  • the control term \(u_b\) for the compensation of the observation error \(\tilde{e}\)

6.2.5.5 Dynamics of the Observation Error

The observation error is defined as \(\tilde{e}=e-\hat{e}=x-\hat{x}\). Subtracting Eq. (6.43) from (6.41) as well as Eq. (6.44) from (6.42) one gets

$$\begin{aligned}&\dot{e}-\dot{\hat{e}}=A(e-\hat{e})+B{u_c}+B\{[f(x,t)-\hat{f}(\hat{x},t)]+\\&+[g(x,t)-\hat{g}(\hat{x}, t)]u+ \tilde{d}\}-{K_o}{C^T}(e-\hat{e}) \\&{e_1}-{\hat{e}_1}={C^T}(e-\hat{e}) \end{aligned}$$

that is

$$\begin{aligned}&\dot{\tilde{e}}=A\tilde{e}+B{u_c}+B\{[f(x,t)-\hat{f}(\hat{x},t)]+\nonumber \\&+[g(x,t)-\hat{g}(\hat{x}, t)]u+\tilde{d}\}-{K_o}{C^T}\tilde{e} \nonumber \\&\qquad \qquad \tilde{e}_1={C^T}\tilde{e} \end{aligned}$$

which can be written as

$$\begin{aligned} \dot{\tilde{e}}=(A-{K_o}{C^T}){\tilde{e}}+B{u_c}+B\{[f(x,t)-\hat{f}(\hat{x},t)]+[g(x,t)-\hat{g}(\hat{x}, t)]u+\tilde{d}\} \end{aligned}$$
(6.45)
$$\begin{aligned} \tilde{e}_1={C^T}{\tilde{e}} \end{aligned}$$
(6.46)

6.2.5.6 Approximation of the Unknown MEMS Dynamics

Neurofuzzy networks can be trained on-line to approximate parts of the unknown dynamics of the microactuator, or to compensate for external disturbances. The approximation of the functions f(x,t) and g(x,t) of Eq. (6.32) can be carried out with Takagi-Sugeno neuro-fuzzy networks of zero or first order (Fig. 6.2). These consist of rules of the form:

\(R^l:\) IF \(\hat{x}\) is \(A_1^l\) AND \(\dot{\hat{x}}\) is \(A_2^l\) AND \(\cdots \) AND \(\hat{x}^{(n-1)}\) is \(A_n^l\) THEN \(\bar{y}^l={\sum _{i=1}^n}{w_i^l}{\hat{x}_i}+b^l, \ \ l=1,2,\ldots , L\)

The output of the neuro-fuzzy model is calculated by taking the average of the consequent part of the rules

$$\begin{aligned} \hat{y} = \frac{{\sum _{l=1}^L}{\bar{y}^l}{\prod _{i=1}^{n}}{\mu _{A_i^l}(\hat{x}_i)}}{\sum _{l=1}^L{\prod _{i=1}^{n}}{\mu _{A_i^l}(\hat{x}_i)}} \end{aligned}$$
(6.47)

where \(\mu _{A_i^l}\) is the membership function of \(x_i\) in the fuzzy set \(A_i^l\). The training of the neuro-fuzzy networks is carried out with first-order gradient algorithms, in pattern mode, i.e. by processing one data pair \((x_i, y_i)\) at every time step i. The estimation of f(x,t) and g(x,t) can be written as

$$\begin{aligned} \begin{array}{c} {\hat{f}}({\hat{x}}|\theta _{f})={\theta _{f}^{T}}{\phi ({\hat{x}})} \\ {\hat{g}}({\hat{x}}|\theta _{g})={\theta _{g}^{T}}{\phi ({\hat{x}})} \end{array} \end{aligned}$$
(6.48)

where \(\phi (\hat{x})\) are kernel functions with elements \(\phi ^l(\hat{x})={ {{\prod _{i=1}^{n}}{\mu _{A_i^l}(\hat{x}_i)}} \over {\sum _{l=1}^L{\prod _{i=1}^{n}}{\mu _{A_i^l}(\hat{x}_i)}} }, \ \ l=1,2,\cdots , L\). It is assumed that the weights \(\theta _f\) and \(\theta _g\) vary in the bounded areas \(M_{\theta _f}\) and \(M_{\theta _g}\) which are defined as

$$\begin{aligned} \begin{array}{c} M_{\theta _f}=\{\theta _f \in R^h: ||\theta _f||\le {m_{\theta _f}} \} \\ M_{\theta _g}=\{\theta _g \in R^h: ||\theta _g||\le {m_{\theta _g}} \} \end{array} \end{aligned}$$
(6.49)

with \(m_{\theta _f}\) and \(m_{\theta _g}\) positive constants. The values of \(\theta _f\) and \(\theta _g\) for which optimal approximation is achieved are:

$$\begin{aligned}&{\theta _f^*}=arg \ min_{\theta _f \in M_{\theta _f}}[sup_{x \in U_x, \hat{x} \in U_{\hat{x} }} |f(x)-\hat{f}(\hat{x}|\theta _f)|]\\&{\theta _g^*}=arg \ min_{\theta _g \in M_{\theta _g}}[sup_{x \in U_x, \hat{x} \in U_{\hat{x} }} |g(x)-\hat{g}(\hat{x}|\theta _g)|] \end{aligned}$$

The variation ranges of x and \(\hat{x}\) are the compact sets

$$\begin{aligned} \begin{array}{c} U_x=\{x \in R^{n}: ||x|| \le m_x< \infty \}, \\ U_{\hat{x}}=\{\hat{x} \in R^{n}: ||\hat{x}|| \le m_{\hat{x}} < \infty \} \end{array} \end{aligned}$$
(6.50)
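A zero-order Takagi-Sugeno approximator implementing Eqs. (6.47)–(6.48) can be sketched as follows; the rule grid, Gaussian widths and weight values are hypothetical:

```python
import numpy as np

def kernels(x_hat, centers, sigma=0.5):
    """Normalized kernels phi^l(x_hat) of Eqs. (6.47)-(6.48):
    products of Gaussian membership values, normalized over the L rules."""
    mu = np.exp(-np.sum((x_hat - centers)**2, axis=1) / (2*sigma**2))
    return mu / np.sum(mu)

# Hypothetical rule base: L = 27 rules on a 3x3x3 grid over [-1, 1]^3
grid = np.linspace(-1.0, 1.0, 3)
centers = np.array([[a, b, c] for a in grid for b in grid for c in grid])
theta_f = np.random.default_rng(0).normal(size=len(centers))   # rule consequents

x_hat = np.array([0.1, -0.2, 0.3])       # estimated state vector (illustrative)
phi = kernels(x_hat, centers)
f_hat = theta_f @ phi                    # hat f(hat x | theta_f) = theta_f^T phi(hat x)
print(phi.sum(), f_hat)
```

The kernel values sum to one by construction, so \(\hat{f}\) is a convex combination of the rule consequents, as in Eq. (6.47).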
Fig. 6.2

Neuro-fuzzy approximator for the unknown dynamics of the microactuator: \(G_i\): Gaussian basis function, \(N_i\): normalization unit

The approximation error of f(x,t) and g(x,t) is given by

$$\begin{aligned} \begin{array}{c} w=[\hat{f}(\hat{x}|\theta _f^*)-f(x,t)]+[\hat{g}(\hat{x}|\theta _g^*)-g(x,t)]u \Rightarrow \\ w=\{[\hat{f}(\hat{x}|\theta _f^*)-\hat{f}(x|\theta _f^*)]+[\hat{f}(x|\theta _f^*)-f(x,t)]\}+\\ \{[\hat{g}(\hat{x}|\theta _g^*)-\hat{g}(x|\theta _g^*)]+[\hat{g}(x|\theta _g^*)-g(x, t)]\}u \end{array} \end{aligned}$$
(6.51)

where

  • \(\hat{f}(\hat{x}|\theta _f^*)\) is the approximation of f for the best estimation \(\theta _f^*\) of the weights’ vector \(\theta _f\).

  • \(\hat{g}(\hat{x}|\theta _g^*)\) is the approximation of g for the best estimation \(\theta _g^*\) of the weights’ vector \(\theta _g\).

The approximation error w can be decomposed into \(w_a\) and \(w_b\), where

$$\begin{aligned}&w_a=[\hat{f}(\hat{x}|{\theta _f})-\hat{f}(\hat{x}|{\theta _f^*})]+[\hat{g}(\hat{x}|{\theta _g})-\hat{g}(\hat{x}|{\theta _g^*})]u \\&w_b=[\hat{f}(\hat{x}|{\theta _f^*})-f(x,t)]+[\hat{g}(\hat{x}|{\theta _g^*})-g(x, t)]u \end{aligned}$$

Finally, the following two parameters are defined:

$$\begin{aligned} \tilde{\theta }_f=\theta _f-\theta _f^*, \ \ \ \tilde{\theta }_g=\theta _g-\theta _g^* \end{aligned}$$
(6.52)

6.2.6 Lyapunov Stability Analysis

6.2.6.1 Design of the Lyapunov Function

The adaptation law of the neurofuzzy approximators’ weights \(\theta _f\) and \(\theta _g\), as well as the supervisory control term \(u_c\) for the microactuator’s loop, are derived from the requirement for negative definiteness of the derivative of the Lyapunov function

$$\begin{aligned} V={1 \over 2}{{\hat{e}^T}{P_1}{\hat{e}}}+{1 \over 2}{{\tilde{e}^T}{P_2}{\tilde{e}}}+{1 \over {2{\gamma _1}}}{\tilde{\theta }_f^T}{\tilde{\theta }_f}+{1 \over {2{\gamma _2}}}{\tilde{\theta }_g^T}{\tilde{\theta }_g} \end{aligned}$$
(6.53)

The selection of the Lyapunov function relies on the following principle of indirect adaptive control: the tracking error \(\hat{e}\) should satisfy \(\lim _{t \rightarrow \infty }{\hat{x}(t)}={x_d}(t)\) and the observation error \(\tilde{e}\) should satisfy \(\lim _{t \rightarrow \infty }{\hat{x}(t)}=x(t)\). This yields \(\lim _{t \rightarrow \infty }x(t)={x_d}(t)\). Substituting Eqs. (6.41), (6.42) and Eqs. (6.45), (6.46) into Eq. (6.53) and differentiating results in

$$\begin{aligned} \dot{V}={1 \over 2}{\dot{\hat{e}}^T}{P_1}{\hat{e}}+{1\over 2}{\hat{e}^T}{P_1}{\dot{\hat{e}}}+ {1 \over 2}{\dot{\tilde{e}}^T}{P_2}{{\tilde{e}}}+{1 \over 2}{\tilde{e}^T}{P_2}{\dot{\tilde{e}}}+ {1 \over {\gamma _1}}{\tilde{\theta }_f^T}{\dot{\tilde{\theta }}_f}+{1 \over {\gamma _2}}{\tilde{\theta }_g^T}{\dot{\tilde{\theta }}_g} \end{aligned}$$
(6.54)

which in turn gives

$$\begin{aligned} \begin{array}{c} \dot{V}={1 \over 2}\{(A-BK^T)\hat{e}+{K_o}{C^T}\tilde{e}\}^T{P_1}{\hat{e}}+{1 \over 2}{{\hat{e}}^T}{P_1}\{(A-BK^T)\hat{e}+{K_o}{C^T}\tilde{e}\}+\\ +{1 \over 2} \{(A-{K_o}C^T)\tilde{e}+B{u_c}+Bd+Bw \}^T{P_2}{\tilde{e}}+ {1 \over 2} {\tilde{e}^T}{P_2}\{(A-{K_o}C^T)\tilde{e}+Bu_c+Bd+Bw \}+\\ +{{1 \over {\gamma _1}}{\tilde{\theta }_f}^T{\dot{\tilde{\theta }}_f}}+{{1 \over {\gamma _2}}{\tilde{\theta }_g}^T{\dot{\tilde{\theta }}_g}} \end{array} \end{aligned}$$
(6.55)

or, equivalently

$$\begin{aligned} \begin{array}{c} \dot{V}={1 \over 2}\{ {\hat{e}^T}(A-B{K^T})^T+{\tilde{e}^T}C{K_o^T}\}{P_1}{\hat{e}}+{1 \over 2}{\hat{e}^T}{P_1}\{(A-B{K^T}){\hat{e}}+ {K_o}{C^T}{\tilde{e}}\}+\\ +{1 \over 2} \{ {\tilde{e}^T}(A-{K_o}C^T)^T+{B^T}{u_c}+{B^T}w+{B^T}d \}{P_2}{\tilde{e}}+ {1 \over 2}{\tilde{e}^T}{P_2}\{(A-{K_o}{C^T}){\tilde{e}}+B{u_c}+Bw+Bd\}+\\ +{{1 \over {\gamma _1}}{\tilde{\theta }_f^T}}{\dot{\tilde{\theta }}_f}+{{1 \over {\gamma _2}}{\tilde{\theta }_g^T}}{\dot{\tilde{\theta }}_g} \end{array} \end{aligned}$$
(6.56)
$$\begin{aligned} \begin{array}{c} \dot{V}={1 \over 2}{\hat{e}^T}{(A-BK^T)^T}{P_1}{\hat{e}}+ {1 \over 2}{\tilde{e}^T}C{K_o^T}{P_1}{\hat{e}}+{1 \over 2}{\hat{e}^T{P_1}(A-BK^T)\hat{e}}+{1 \over 2}{\hat{e}^T}{P_1}{K_o}{C^T}{\tilde{e}}+\\ +{1 \over 2}{\tilde{e}^T}{(A-{K_oC^T})^T}{P_2}{\tilde{e}}+{1 \over 2}{B^T}{P_2}{\tilde{e}(u_c+w+d)}+{1 \over 2}{\tilde{e}^T}{P_2}(A-{K_o}C^T){\tilde{e}}+\\ +{1 \over 2}{\tilde{e}^T}{P_2}B(u_c+w+d)+{{1 \over {\gamma _1}}{\tilde{\theta }_f^T}{\dot{\tilde{\theta }}}_f}+{{1 \over {\gamma _2}}{\tilde{\theta }_g^T}{\dot{\tilde{\theta }}}_g} \end{array} \end{aligned}$$
(6.57)

Assumption 1: For given positive definite matrices \(Q_1\) and \(Q_2\) there exist positive definite matrices \(P_1\) and \(P_2\), which are the solution of the following Riccati equations [457]

$$\begin{aligned} {(A-BK^T)^T}{P_1}+{P_1}(A-BK^T)+Q_1=0 \end{aligned}$$
(6.58)
$$\begin{aligned} \begin{array}{c} {(A-{K_o}C^T)}^T{P_2}+{P_2}{(A-{K_o}C^T)}-\\ -{P_2}B({2 \over r}-{1 \over {\rho ^2}}){B^T}{P_2}+{Q_2}=0 \end{array} \end{aligned}$$
(6.59)

The conditions given in Eqs. (6.58)–(6.59) are related to the requirement that the systems described by Eqs. (6.43), (6.44) and Eqs. (6.45), (6.46) become asymptotically stable. Substituting Eqs. (6.58)–(6.59) into \(\dot{V}\) yields
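Assumption 1 can be checked numerically: Eq. (6.58) is a Lyapunov equation in \(P_1\), while Eq. (6.59) is an algebraic Riccati equation in \(P_2\). The sketch below, with placeholder third-order canonical-form matrices and hypothetical gains K, \(K_o\), shows how both could be solved with SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Illustrative third-order canonical-form matrices (placeholder values)
A = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
B = np.array([[0.], [0.], [1.]])
C = np.array([[1.], [0.], [0.]])
K = np.array([[6.], [11.], [6.]])     # hypothetical feedback gain
Ko = np.array([[6.], [11.], [6.]])    # hypothetical observer gain
Q1, Q2 = np.eye(3), np.eye(3)
r, rho = 0.1, 0.5

# Eq. (6.58): Lyapunov equation for the controller error dynamics
Ac = A - B @ K.T
P1 = solve_continuous_lyapunov(Ac.T, -Q1)

# Eq. (6.59): Riccati equation for the observer error dynamics;
# (2/r - 1/rho^2) plays the role of R^{-1} in the standard CARE
# A'X + XA - XBR^{-1}B'X + Q = 0, so it must be positive
Ao = A - Ko @ C.T
R = np.array([[1.0 / (2.0/r - 1.0/rho**2)]])
P2 = solve_continuous_are(Ao, B, Q2, R)
```

Both solutions are positive definite as long as \(A-BK^T\) and \(A-{K_o}C^T\) are stable matrices and \({2 \over r}-{1 \over \rho ^2}>0\).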

$$\begin{aligned} \begin{array}{c} \dot{V}={1 \over 2}{\hat{e}^T}\{(A-BK^T)^T{P_1}+{P_1}(A-BK^T)\}{\hat{e}} +{\tilde{e}^T}C{K_o^T}{P_1}{\hat{e}}+\\ +{1 \over 2}{\tilde{e}^T}\{(A-{K_o}C^T)^T{P_2}+{P_2}(A-{K_o}{C^T})\}{\tilde{e}}+{B^T}{P_2}{\tilde{e}}(u_c+w+d)+\\ +{1 \over {\gamma _1}}{\tilde{\theta }_f^T}{\dot{\tilde{\theta }}_f}+ {1 \over {\gamma _2}}{\tilde{\theta }_g^T}{\dot{\tilde{\theta }}_g} \end{array} \end{aligned}$$
(6.60)

which is also written as

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{{\hat{e}^T}{Q_1}{\hat{e}}}+{\tilde{e}^T}C{K_o^T}{P_1}{\hat{e}}-\\ -{1 \over 2} \tilde{e}^T \{{Q_2}-{P_2}B({2 \over r}-{1 \over {\rho ^2}}){B^T}{P_2}\}{\tilde{e}}+{B^T}{P_2}{\tilde{e}}(u_c+w+d)+\\ +{1 \over {\gamma _1}}{\tilde{\theta }_f^T}{\dot{\tilde{\theta }}_f}+ {1 \over {\gamma _2}}{\tilde{\theta }_g^T}{\dot{\tilde{\theta }}_g} \end{array} \end{aligned}$$
(6.61)

Following the concept analyzed in Chapter 3, the supervisory control \(u_c\) is decomposed in two terms, \(u_a\) and \(u_b\)

$$\begin{aligned} u_a=-{1 \over r}p_{1n}\tilde{e}_1=-{1 \over r}{\tilde{e}^T}{P_2}B+{1 \over r}(p_{2n}\tilde{e}_2+\cdots +p_{nn}\tilde{e}_n)=-{1 \over r}{\tilde{e}^T}{P_2}B+{\Delta }{u_a} \end{aligned}$$
(6.62)

where \(p_{1n}\) stands for the last (n-th) element of the first row of matrix \(P_2\), and

$$\begin{aligned} u_b=-[{({P_2}B)^T}({P_2}B)]^{-1}({P_2}B)^TC{K_o^T}{P_1}{\hat{e}} \end{aligned}$$
(6.63)
  • \(u_a\) is an \(H_{\infty }\) control used for the compensation of the approximation error w and the additive disturbance \(\tilde{d}\). Its first component \(-{1 \over r}{\tilde{e}^T}{P_2}B\) has been chosen so as to compensate for the term \({1 \over r}{\tilde{e}^T}{P_2}B{B^T}{P_2}\tilde{e}\), which appears in Eq. (6.61). By subtracting the second component \(-{1 \over r}(p_{2n}\tilde{e}_2+\cdots +p_{nn}\tilde{e}_n)\) one has that \(u_a=-{1 \over r}p_{1n}\tilde{e}_1\), which means that \(u_a\) is computed based on the feedback of the measurable variable \(\tilde{e}_1\). Eq. (6.62) is finally rewritten as \(u_a=-{1 \over r}{\tilde{e}^T}{P_2}B+{\Delta }{u_a}\).

  • \(u_b\) is a control term used for the compensation of the observation error (the control term \(u_b\) has been chosen so as to satisfy the condition \({\tilde{e}^T}{P_2}B{u_b}=-{\tilde{e}^T}C{K_o^T}{P_1}\hat{e}\)).
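A minimal sketch of how the two supervisory terms of Eqs. (6.62) and (6.63) could be computed (the helper function and its argument values are illustrative, not taken from the text):

```python
import numpy as np

def supervisory_control(P1, P2, B, C, Ko, e_hat, e_tilde_1, r):
    """Supervisory control terms u_a, u_b of Eqs. (6.62)-(6.63)."""
    # u_a: H-infinity term using only the measurable error component
    # e_tilde_1; p_1n is the last element of the first row of P2
    p_1n = P2[0, -1]
    u_a = -(1.0 / r) * p_1n * e_tilde_1
    # u_b: least-squares compensation of the observation-error coupling
    # term e_tilde^T C Ko^T P1 e_hat appearing in V-dot
    b = (P2 @ B).ravel()
    u_b = -(b @ (C @ Ko.T @ P1 @ e_hat)) / (b @ b)
    return u_a, u_b
```

For a single-input system \([{({P_2}B)^T}({P_2}B)]^{-1}\) reduces to a scalar division, which is what the sketch exploits.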

Fig. 6.3

The proposed adaptive-fuzzy control scheme

The control scheme is depicted in Fig. 6.3. Substituting Eqs. (6.62) and (6.63) in \(\dot{V}\), one gets

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}+{\tilde{e}^T}C{K_o^T}{P_1}{\hat{e}} -{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}+{1 \over r}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}-\\ -{1 \over {2\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}+{\tilde{e}^T}{P_2}B{u_b}-{1 \over r}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}+ {B^T}{P_2}{\tilde{e}(w+d+{\Delta }{u_a})}+\\ +{1 \over {\gamma _1}}{\tilde{\theta }_f^T}{\dot{\tilde{\theta }}_f}+{1 \over {\gamma _2}}{\tilde{\theta }_g^T}{\dot{\tilde{\theta }}_g} \end{array} \end{aligned}$$
(6.64)

or equivalently,

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}} + {B^T}{P_2}{\tilde{e}(w+d+{\Delta }{u_a})}+\\ +{1 \over {\gamma _1}}{\tilde{\theta }_f^T}{\dot{\tilde{\theta }}_f}+ {1 \over {\gamma _2}}{\tilde{\theta }_g^T}{\dot{\tilde{\theta }}_g} \end{array} \end{aligned}$$
(6.65)

It holds that \(\dot{\tilde{\theta }}_f=\dot{\theta }_f-\dot{\theta _f^*}=\dot{\theta _f}\) and \(\dot{\tilde{\theta }}_g=\dot{\theta }_g-\dot{\theta _g^*}=\dot{\theta _g}\). The following weight adaptation laws are considered:

$$\begin{aligned} \dot{\theta }_f=-{\gamma _1}{\tilde{e}^T}{P_2}B{\phi (\hat{x})} \end{aligned}$$
(6.66)
$$\begin{aligned} \dot{\theta }_g=-{\gamma _2}{\tilde{e}^T}{P_2}B{\phi (\hat{x})}{u_c} \end{aligned}$$
(6.67)

To set \(\dot{\theta }_f\) and \(\dot{\theta }_g\) equal to 0, when \(||\theta _f|| \ge m_{\theta _f}\), and \(||\theta _g|| \ge m_{\theta _g}\) the projection operator is employed [450]:

$$\begin{aligned}&P\{{\gamma _1}\tilde{e}^T{P_2}B{\phi (\hat{x})}\}=-{\gamma _1}{\tilde{e}^T}{P_2}B{\phi (\hat{x})}+\\&\qquad \quad +{\gamma _1}{\tilde{e}^T}{P_2}B{{\theta _f}{\theta _f^T} \over {||\theta _f||^2}}{\phi (\hat{x})}\\&P\{{\gamma _2}\tilde{e}^T{P_2}B{\phi (\hat{x})}{u_c}\}=-{\gamma _2}{\tilde{e}^T}{P_2}B{\phi (\hat{x})}{u_c}+\\&\qquad \quad +{\gamma _2}{\tilde{e}^T}{P_2}B{{\theta _g}{\theta _g^T} \over {||\theta _g||^2}}{\phi (\hat{x})}{u_c} \end{aligned}$$
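A small illustration of this projection step (the helper below is hypothetical and treats the weight vector and its raw gradient update as plain arrays): when the weights reach the boundary of the admissible ball and the update points outward, the radial component \({{\theta }{\theta ^T}} / {||\theta ||^2}\) of the update is removed.

```python
import numpy as np

def project(theta, theta_dot, m_theta):
    """Projection of a raw gradient update theta_dot: on or outside the
    ball ||theta|| <= m_theta, and with an outward-pointing update, the
    radial component (theta theta^T / ||theta||^2) theta_dot is
    subtracted, as in the P{...} operator above."""
    if np.linalg.norm(theta) >= m_theta and theta @ theta_dot > 0:
        theta_dot = theta_dot - (theta @ theta_dot) / (theta @ theta) * theta
    return theta_dot
```

The projected update is tangent to the ball's boundary, so the weights can no longer grow in norm.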

The update of \(\theta _f\) stems from a gradient algorithm on the cost function \({1 \over 2}(f-\hat{f})^2\) [33, 432]. The update of \(\theta _g\) is also of the gradient type, with \(\gamma _2\) as its adaptation gain and with the control input \(u_c\) entering the update law. Substituting Eqs. (6.66) and (6.67) in \(\dot{V}\) gives

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}+ {B^T}{P_2}{\tilde{e}(w+d+{\Delta }{u_a})}+\\ +{1 \over {\gamma _1}}{\tilde{\theta }_f^T}({-\gamma _1}{\tilde{e}^T}{P_2}B{\phi {(\hat{x})}})+{1\over {\gamma _2}}{\tilde{\theta }_g^T}({-\gamma _2}{\tilde{e}^T}{P_2}B{\phi {(\hat{x})}}u) \end{array} \end{aligned}$$
(6.68)

which is also written as

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}+ {\tilde{e}^T}{P_2}B(w+d+{\Delta }{u_a})-\\ -{\tilde{e}^T}{P_2}B{\tilde{\theta }_f^T}\phi (\hat{x})-{\tilde{e}^T}{P_2}B{\tilde{\theta }_g^T}{\phi (\hat{x})}u \end{array} \end{aligned}$$
(6.69)

and using Eqs. (6.48) and (6.52) results into

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}+ {\tilde{e}^T}{P_2}B(w+d+{\Delta }{u_a})-\\ -{\tilde{e}^T}{P_2}B\{[\hat{f}(\hat{x}|\theta _f)+\hat{g}(\hat{x}|\theta _g)u]-[\hat{f}(\hat{x}|\theta _f^*) +\hat{g}(\hat{x}|\theta _g^*)u]\} \end{array} \end{aligned}$$
(6.70)

where \([\hat{f}(\hat{x}|\theta _f)+\hat{g}(\hat{x}|\theta _g)u]-[\hat{f}(\hat{x}|\theta _f^*)+\hat{g}(\hat{x}|\theta _g^*)u]=w_a\). Thus, setting \(w_1=w+w_a+d+{\Delta }{u_a}\) one gets

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}+{B^T}{P_2}{\tilde{e}}{w_1}\Rightarrow \\ \dot{V}=-{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}+{1 \over 2 }{w_1^T}{B^T}{P_2}{\tilde{e}}+{1 \over 2}{\tilde{e}^T}{P_2}B{w_1} \end{array} \end{aligned}$$
(6.71)

Lemma: The following inequality holds

$$\begin{aligned} \begin{array}{c} {1 \over 2}{\tilde{e}^T}{P_2}B{w_1}+{1 \over 2}{w_1^T}{B^T}{P_2}{\tilde{e}}-{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}\,\le {1 \over 2}{\rho ^2}{w_1^T}{w_1} \end{array} \end{aligned}$$
(6.72)

Proof: The binomial \(({\rho }a-{1 \over \rho }b)^2 \ge 0\) is considered. Expanding the left part of the above inequality one gets

$$\begin{aligned} \begin{array}{c} {\rho ^2}{a^2}+{1 \over {\rho ^2}}{b^2}-2ab \ge 0 \Rightarrow {1 \over 2}{\rho ^2}{a^2}+{1 \over {2\rho ^2}}{b^2}-ab \ge 0\\ \Rightarrow ab-{1 \over {2\rho ^2}}{b^2} \le {1 \over 2}{\rho ^2}{a^2} \Rightarrow {1 \over 2}ab+{1 \over 2}ab-{1 \over {2\rho ^2}}{b^2} \le {1 \over 2}{\rho ^2}{a^2} \end{array} \end{aligned}$$
(6.73)

The following substitutions are carried out: \(a=w_1\) and \(b=\tilde{e}^T{P_2}B\) and the previous relation becomes

$$\begin{aligned} \begin{array}{c} {1 \over 2}{w_1^T}{B^T}{P_2}{\tilde{e}}+{1 \over 2}{\tilde{e}^T}{P_2}B{w_1}-{{1 \over {2\rho ^2}} {\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}}\\ \le {1 \over 2} {\rho ^2}{w_1^T}{w_1} \end{array} \end{aligned}$$
(6.74)
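The bound can also be sanity-checked numerically; the function below evaluates both sides of Eq. (6.74) for a scalar \(w_1\) and a positive definite \(P_2\) (an illustrative check, not part of the proof):

```python
import numpy as np

def bound_terms(e, P2, B, w1, rho):
    """Left- and right-hand sides of inequality (6.74); the scalar
    b = e^T P2 B stands for the product e-tilde^T P_2 B."""
    b = e @ P2 @ B
    lhs = 0.5 * w1 * b + 0.5 * b * w1 - b * b / (2.0 * rho**2)
    rhs = 0.5 * rho**2 * w1**2
    return lhs, rhs
```

Since the inequality follows from the binomial \(({\rho }a-{1 \over \rho }b)^2 \ge 0\), it holds for arbitrary arguments.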

Using the above inequality in \(\dot{V}\), an upper bound for the latter is obtained

$$\begin{aligned} \dot{V} {\le } -{1 \over 2}{\hat{e}^T{Q_1}{\hat{e}}}-{1 \over 2}{\tilde{e}^T{Q_2}{\tilde{e}}}+{1 \over 2}{\rho ^2}{w_1^T}{w_1} \end{aligned}$$
(6.75)

Thus, Eq. (6.75) can be written as

$$\begin{aligned} \dot{V} \le -{1 \over 2}{E^T}QE+{1 \over 2}{\rho ^2}{w_1^T}{w_1} \end{aligned}$$
(6.76)

where

$$\begin{aligned} E=\begin{pmatrix} \hat{e} \\ \tilde{e} \end{pmatrix}, \ \ Q=\begin{pmatrix} Q_1 &{} 0 \\ 0 &{} Q_2 \end{pmatrix}=diag[Q_1,Q_2] \end{aligned}$$
(6.77)

Hence, the \(H_{\infty }\) performance criterion is derived. For \(\rho \) sufficiently small Eq. (6.75) will be true and the \(H_{\infty }\) tracking criterion will be satisfied. In that case, the integration of \(\dot{V}\) from 0 to T gives

$$\begin{aligned} \begin{array}{c} {\int _0^T}{\dot{V}(t)}dt \le -{1 \over 2} {\int _0^T}{||E||_Q^2}dt+{1 \over 2}{\rho ^2}{\int _0^T}{||w_1||^2}dt \Rightarrow \\ 2V(T)-2V(0) \le -{\int _0^T}{||E||_Q^2}dt+{\rho ^2}{\int _0^T}{||w_1||^2}dt \Rightarrow \\ 2V(T)+{\int _0^T}{||E||_Q^2}dt \le 2V(0)+ {\rho ^2}{\int _0^T}{||w_1||^2}dt \end{array} \end{aligned}$$

It is assumed that there exists a positive constant \(M_w>0\) such that \(\int _0^{\infty }{||w_1||^2}dt \le M_w\). Therefore for the integral \(\int _0^{\infty }{||E||_Q^2}dt\) one gets

$$\begin{aligned} {\int _0^{\infty }}{||E||_Q^2}dt \le 2V(0)+{\rho ^2}{M_w} \end{aligned}$$
(6.78)

Thus, the integral \({\int _0^{\infty }}{||E||_Q^2}dt\) is bounded and according to Barbalat’s Lemma

$$\begin{aligned}&\lim _{t \rightarrow \infty }{E(t)}=0 \Rightarrow \begin{array}{c} \lim _{t \rightarrow \infty }{\hat{e}(t)}=0 \\ \lim _{t \rightarrow \infty }{\tilde{e}(t)}=0 \end{array} \end{aligned}$$

Therefore \(\lim _{t \rightarrow \infty }{e(t)}=0\).

6.2.6.2 Riccati Equation Coefficients and \(H_{\infty }\) Control Robustness

Following the concept of the flatness-based adaptive fuzzy control which has been developed in previous sections, the linear system of Eqs. (6.45) and (6.46) is considered again

$$\begin{aligned}&\dot{\tilde{e}}=(A-{K_o}C^T){\tilde{e}}+B{u_c}+B\{[f(x,t)-\hat{f}(\hat{x},t)]+[g(x,t)-\hat{g}(\hat{x}, t)]u+\tilde{d}\}\\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad e_1={C^T}{\tilde{e}} \end{aligned}$$

Once again the aim of \(H_{\infty }\) control is to eliminate the impact of the modelling errors \(w=[f(x,t)-\hat{f}(\hat{x},t)]+[g(x,t)-\hat{g}(\hat{x}, t)]u\) and the external disturbances \(\tilde{d}\) which are not white noise signals. This implies the minimization of the following quadratic cost function for the microactuator’s state vector tracking problem [132, 243, 305]:

$$\begin{aligned}&J(t)={1 \over 2} \int _{0}^{T} [{\tilde{e}^T(t)}\tilde{e}(t)+r{u_c^T(t)}u_c(t)-{\rho ^2}{(w+\tilde{d})^T}{(w+\tilde{d})}]dt, \ \ r, \rho >0 \end{aligned}$$
(6.79)

The weight r determines how much the control signal should be penalized and the weight \(\rho \) determines how much the disturbances’ influence should be rewarded in the sense of a min-max differential game. The control input \(u_c\) has been defined as the sum of the terms described in Eqs. (6.62) and (6.63).

The parameter \(\rho \) in Eq. (6.79) is an indication of the closed-loop system’s robustness. If the values of \(\rho >0\) are excessively decreased with respect to r, then the solution of the Riccati equation is no longer a positive definite matrix. Consequently, there is a lower bound \(\rho _{min}\) of \(\rho \) for which the \(H_{\infty }\) control problem has a solution. The acceptable values of \(\rho \) lie in the interval \([\rho _{min},\infty )\). If \(\rho _{min}\) is found and used in the design of the \(H_{\infty }\) controller, then the closed-loop system will have increased robustness. On the contrary, if a value \(\rho > \rho _{min}\) is used, then an admissible stabilizing \(H_{\infty }\) controller will be derived, but it will be a suboptimal one. The Hamiltonian matrix

$$\begin{aligned} H=\begin{pmatrix} A-{K_o}{C^T} &{} -({2 \over r}-{1 \over \rho ^2})B{B^T} \\ -Q &{} -({A-{K_o}{C^T}})^T \end{pmatrix} \end{aligned}$$
(6.80)

provides a criterion for the existence of a solution of the Riccati equation Eq. (6.59). A necessary condition for the solution of the algebraic Riccati equation to be a positive semi-definite symmetric matrix is that H has no imaginary eigenvalues [132, 457].
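This criterion lends itself to a simple numerical test: build the Hamiltonian of Eq. (6.80), check its spectrum, and scan \(\rho \) upwards from a small value for the smallest admissible one. The matrices and parameter values below are placeholders:

```python
import numpy as np

def riccati_has_solution(Ao, B, Q, r, rho):
    """Builds the Hamiltonian of Eq. (6.80) and checks the necessary
    condition: no eigenvalues on the imaginary axis."""
    H = np.block([[Ao, -(2.0/r - 1.0/rho**2) * (B @ B.T)],
                  [-Q, -Ao.T]])
    return bool(np.all(np.abs(np.linalg.eigvals(H).real) > 1e-9))

# crude line search for the lower bound rho_min of admissible rho
Ao = np.array([[-6., 1., 0.], [-11., 0., 1.], [-6., 0., 0.]])  # placeholder
B = np.array([[0.], [0.], [1.]])
Q = np.eye(3)
r = 0.1
rho_min = next(rho for rho in np.arange(0.05, 2.01, 0.01)
               if riccati_has_solution(Ao, B, Q, r, rho))
```

A finer bisection around the value returned by the scan would give a tighter estimate of \(\rho _{min}\).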

6.2.7 Simulation Tests

The performance of the proposed output feedback-based adaptive fuzzy control approach for MEMS (microactuator) was tested in the case of tracking of several reference setpoints. The only measurable variable used in the control loop was the microactuator’s deflection variable x. Indicative variation ranges for the MEMS parameters are \({\zeta }{\in }[0.1,3]\) and \(r{\in }[0.1,3]\), without excluding that these parameters may take values in wider intervals. In the simulation tests, the dynamic model of the MEMS, as well as the numerical values of its parameters, were considered to be completely unknown.

The estimation of the unknown dynamics of the system with the use of neuro-fuzzy approximators has been explained in Sect. 6.2.5.6. Knowing that there are \(m=3\) state variables for the MEMS model and that each such variable comprises \(n=3\) fuzzy sets, the total number of rules in the fuzzy rule base is \(n^m=3^3=27\). The aggregate output of the neuro-fuzzy approximator (rule-base) for function f(x) is given by Eq. (6.47). The centers \(c_i^{(l)}, \ i=1,\ldots , 3\) and the variances \(v^{(l)}\) of each rule are summarized in Table 6.1. The structure of the neuro-fuzzy approximator for function g(x) is similar.
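The structure of such an approximator can be sketched as follows; the centers and the common variance below are placeholders standing in for the values of Table 6.1:

```python
import numpy as np
from itertools import product

# 3 fuzzy sets per state variable -> 3^3 = 27 rules; centers and
# variance are placeholders for the values listed in Table 6.1
centers_1d = [-1.0, 0.0, 1.0]
v = 0.5
rule_centers = np.array(list(product(centers_1d, repeat=3)))   # (27, 3)

def phi(x_hat):
    """Normalized firing strengths of the 27 rules: Gaussian basis
    functions G_i followed by the normalization units N_i of Fig. 6.2."""
    g = np.exp(-np.sum((x_hat - rule_centers)**2, axis=1) / v**2)
    return g / np.sum(g)

def f_hat(x_hat, theta_f):
    """Approximation of the unknown dynamics as the weighted average
    f_hat(x_hat | theta_f) = theta_f^T phi(x_hat)."""
    return theta_f @ phi(x_hat)
```

Because the firing strengths are normalized, the output is a convex combination of the weights \(\theta _f\).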

Table 6.1 Parameters of the fuzzy rule base

The control loop was based on simultaneous estimation of the unknown MEMS dynamics (performed with the use of neuro-fuzzy approximators) and of the nonmeasurable elements of the microactuator’s state vector, that is, of the deflection's rate of change \(\dot{x}\) and of the charge of the plates q (performed with the use of the state observer). The obtained results are presented in Figs. 6.4, 6.5, 6.6, 6.7 and 6.8. The real values of the monitored parameters (state vector variables) are plotted as blue lines, the estimated variables as green lines and the reference setpoints as red lines. It can be noticed that differential flatness theory-based adaptive fuzzy control of the MEMS achieved fast and accurate tracking of the reference setpoints.

The implementation of the proposed control scheme requires that the two algebraic Riccati equations which have been defined in Eqs. (6.58) and (6.59) are solved in each iteration of the control algorithm. These provide the positive definite matrices \(P_1\) and \(P_2\) which are used for the computation of the control signals \(u_a\) and \(u_b\) that have been defined in Eqs. (6.62) and (6.63). The transients of the state vector elements \(x_i, \ i=1,\ldots , 3\), are determined by the values given to the positive definite matrices \(Q_1\) and \(Q_2\), as well as by the value of the parameter r and of the H-infinity coefficient (attenuation level) \(\rho \). It has been confirmed that the variations of both \(x_i, \ i=1,\ldots , 3\) and of the control input u were smooth.

Fig. 6.4

Output feedback based adaptive fuzzy control of MEMS (microactuator) - Test 1: a state variables \(x_i\), \(i=1,\ldots , 3\) of the initial nonlinear system, b transformed state variables \(y_i\), \(i=1,\ldots , 3\) (blue line: real value, red line: setpoint)

Fig. 6.5

Output feedback based adaptive fuzzy control of MEMS (microactuator) - Test 2: a state variables \(x_i\), \(i=1,\ldots , 3\) of the initial nonlinear system, b transformed state variables \(y_i\), \(i=1,\ldots , 3\) (blue line: real value, red line: setpoint)

Fig. 6.6

Output feedback based adaptive fuzzy control of MEMS (microactuator) - Test 3: a state variables \(x_i\), \(i=1,\ldots , 3\) of the initial nonlinear system, b transformed state variables \(y_i\), \(i=1,\ldots , 3\) (blue line: real value, red line: setpoint)

Fig. 6.7

Output feedback based adaptive fuzzy control of MEMS (microactuator) - Test 4: a state variables \(x_i\), \(i=1,\ldots , 3\) of the initial nonlinear system, b transformed state variables \(y_i\), \(i=1,\ldots , 3\) (blue line: real value, red line: setpoint)

Fig. 6.8

Output feedback based adaptive fuzzy control of MEMS (microactuator) - Test 5: a state variables \(x_i\), \(i=1,\ldots , 3\) of the initial nonlinear system, b transformed state variables \(y_i\), \(i=1,\ldots , 3\) (blue line: real value, red line: setpoint)

One can compare the proposed adaptive fuzzy control method for the electromechanically actuated MEMS against model-based control methods relying on approximate linearization of the MEMS. The latter method consists of local linearization of the MEMS model round operating points and of the solution of LMIs, and remains dependent on knowledge of the MEMS dynamics. In [456], it has been shown that although the proposed adaptive control scheme uses no prior knowledge about the system’s dynamics, it performs equally well as the aforementioned model-based control approach. The associated simulation results about the comparison of the two methods can be found in [456].

6.3 Nonlinear Optimal Control of Underactuated MEMS

6.3.1 Outline

Extending the analysis on the dynamics of microactuators that was given in the previous section, one can next consider micro-electromechanical systems (MEMS) which often exhibit the dynamics of nonlinear oscillators, such as the Van-der-Pol oscillator and the Duffing oscillator [214, 215, 407, 409]. In certain cases these oscillator models are coupled and are described, for instance, by a Van-der-Pol oscillator driven by a forced Duffing oscillator [138, 266, 376, 579]. Such micro-electromechanical systems can exhibit complex and chaotic dynamics [29, 265, 339, 383]. Aiming to improve the precision and reliability of MEMS, nonlinear control of MEMS has been the subject of wide research during the last years [184, 336, 368, 384, 496]. However, taking into account the nonlinearities of their dynamic model and possible underactuation, the problem of control of these micro-electromechanical systems is considered a non-trivial one [96, 120, 296, 338].

In this section a nonlinear optimal (H-infinity) control method is developed for the model of a MEMS described in the form of a Van-der-Pol oscillator elastically coupled with a forced Duffing oscillator. This MEMS receives control input only at the side of the Duffing oscillator. The MEMS dynamic model undergoes first approximate linearization around a temporary operating point (equilibrium) which is redefined at each iteration of the control method. This temporary equilibrium comprises the present value of the system’s state vector and the last value of the control inputs vector that was applied on it. The linearization makes use of first-order Taylor series expansion and requires the computation of the system’s Jacobian matrices [33, 431, 463]. The modelling error which is due to the truncation of higher order terms in the Taylor series expansion is considered to be a disturbance term which is finally compensated by the robustness of the control algorithm.

For the approximately linearized model of the MEMS an optimal (H-infinity) feedback controller is designed [461, 466]. As explained in the previous sections, the H-infinity controller represents the solution to the optimal control problem under model uncertainty and external perturbations. Actually, the H-infinity controller stands for the solution to a min-max differential game in which the control inputs try to minimize a cost function comprising a quadratic term of the state vector’s tracking error, whereas the model uncertainty and the disturbance inputs try to maximize it [450, 457, 459]. For the computation of the feedback gain of the H-infinity controller an algebraic Riccati equation is solved repetitively at each time-step of the control method [305, 564].

The stability of the proposed nonlinear optimal control method is proven through Lyapunov analysis. First, it is demonstrated that the control loop satisfies the H-infinity tracking performance criterion, which signifies elevated robustness against model uncertainty and external perturbations. Moreover, under moderate conditions it is shown that the control scheme has also global asymptotic stability properties. Finally, to implement state estimation-based control of the MEMS through the measurement of a small number of its state vector elements, the H-infinity Kalman Filter is proposed as a robust state estimator [169, 511].

6.3.2 Dynamic Model of MEMS

The dynamic model of the coupled MEMS comprises a Van der Pol oscillator driven by a forced Duffing oscillator (Fig. 6.9). The variation in time of the Van der Pol oscillator is given by state variable \(z_1\), while the variation in time of the Duffing oscillator is given by state variable \(z_2\). Moreover, the control input to the MEMS is the sinusoidal voltage \(V=ucos({\omega _d \over \omega _1}\tau )\), and thus one has the following dynamics [266, 376]

$$\begin{aligned} \begin{array}{c} \ddot{z}_1+{\gamma _1}(z_1^2-1)\dot{z}_1+\left( {\omega _2 \over \omega _1}\right) ^2{z_1}=k(z_2-z_1)\\ \ddot{z}_2+{\gamma _2}\dot{z}_2+{\delta }{z_2^3}=k(z_1-z_2)+ucos\left( {\omega _d \over \omega _1}\tau \right) \end{array} \end{aligned}$$
(6.81)
Fig. 6.9

Diagram of an electrostatically actuated MEMS, exhibiting the dynamics of a Duffing oscillator

The following state variables are defined: \(x_1=z_1\), \(x_2=\dot{z}_1\), \(x_3=z_2\) and \(x_4=\dot{z}_2\). Then, the state-space description of the system is given by

$$\begin{aligned} \begin{array}{c} \dot{x}_1=x_2 \\ \dot{x}_2=-{\gamma _1}(x_1^2-1){x_2}-\left( {\omega _2 \over \omega _1}\right) ^2{x_1}+k(x_3-x_1) \\ \dot{x}_3=x_4 \\ \dot{x}_4=-{\gamma _2}{x_4}-{\delta }{x_3^3}+k(x_1-x_3)+ucos\left( {\omega _d \over \omega _1}\tau \right) \end{array} \end{aligned}$$
(6.82)

Next, the system of the coupled Van der Pol and Duffing oscillators can be written in the following matrix form

$$\begin{aligned} \begin{pmatrix} \dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\\ \dot{x}_4 \end{pmatrix} =\begin{pmatrix} x_2 \\ -{\gamma _1}(x_1^2-1){x_2}-({\omega _2 \over \omega _1})^2{x_1}+k(x_3-x_1) \\ x_4 \\ -{\gamma _2}{x_4}-{\delta }{x_3^3}+k(x_1-x_3) \end{pmatrix}+ \begin{pmatrix} 0 \\ 0 \\ 0 \\ cos({\omega _d \over \omega _1}\tau ) \end{pmatrix}u \end{aligned}$$
(6.83)

Thus, by defining the vector fields \(f(x){\in }R^{4{\times }1}\) and \(g(x){\in }R^{4{\times }1}\), where

$$\begin{aligned} f(x)=\begin{pmatrix} x_2 \\ -{\gamma _1}(x_1^2-1){x_2}-({\omega _2 \over \omega _1})^2{x_1}+k(x_3-x_1) \\ x_4 \\ -{\gamma _2}{x_4}-{\delta }{x_3^3}+k(x_1-x_3) \end{pmatrix} \ \ g(x)=\begin{pmatrix} 0 \\ 0 \\ 0 \\ cos({\omega _d \over \omega _1}\tau ) \end{pmatrix} \end{aligned}$$
(6.84)

one arrives at the state-space description

$$\begin{aligned} \dot{x}=f(x)+g(x)u \end{aligned}$$
(6.85)
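The state-space model of Eq. (6.85) can be simulated directly; the parameter values below are illustrative placeholders, with the control amplitude u held constant:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (placeholders, not taken from the text)
g1, g2, om_ratio, k, delta, wd_ratio = 0.1, 0.1, 1.2, 0.5, 0.3, 1.0

def f(x):
    """Drift vector field f(x) of Eq. (6.84)."""
    x1, x2, x3, x4 = x
    return np.array([x2,
                     -g1*(x1**2 - 1.0)*x2 - om_ratio**2*x1 + k*(x3 - x1),
                     x4,
                     -g2*x4 - delta*x3**3 + k*(x1 - x3)])

def g(tau):
    """Input vector field; the control enters only the Duffing part."""
    return np.array([0.0, 0.0, 0.0, np.cos(wd_ratio * tau)])

# open-loop response for a constant amplitude u = 0.5
sol = solve_ivp(lambda t, x: f(x) + g(t) * 0.5,
                (0.0, 20.0), [0.1, 0.0, 0.1, 0.0], max_step=0.05)
```

With these placeholder values the open-loop trajectories remain bounded oscillations, which is the regime the controller of the next sections operates in.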

6.3.3 Approximate Linearization of the MEMS Dynamics

Approximate linearization of the MEMS dynamics given in Eq. (6.85) is performed around the temporary operating point (equilibrium) \((x^{*}, u^{*})\) which is re-defined at each iteration of the control algorithm by the present value of the system’s state vector \(x^{*}\) and the last value of the control inputs vector \(u^{*}\) that was exerted on it. This results in the following linear state-space form of the system:

$$\begin{aligned} \dot{x}=Ax+Bu+\tilde{d} \end{aligned}$$
(6.86)

where matrices A and B are defined as follows:

$$\begin{aligned} A={\nabla _x}[f(x)+g(x)u]\mid _{(x^{*}, u^{*})}{\Rightarrow } A={\nabla _x}f(x)\mid _{(x^{*})} \end{aligned}$$
(6.87)
$$\begin{aligned} B={\nabla _u}[f(x)+g(x)u]\mid _{(x^{*}, u^{*})}{\Rightarrow } B=g(x) \end{aligned}$$
(6.88)

About the Jacobian matrix \({\nabla _x}f(x)\mid _{(x^{*})}\) one has

$$\begin{aligned} {\nabla _x}f(x)=\begin{pmatrix} \displaystyle {{{\partial }{f_1}} \over {{\partial }{x_1}}} &{} \displaystyle {{{\partial }{f_1}} \over {{\partial }{x_2}}} &{} \displaystyle {{{\partial }{f_1}} \over {{\partial }{x_3}}} &{} \displaystyle {{{\partial }{f_1}} \over {{\partial }{x_4}}} \\ \displaystyle {{{\partial }{f_2}} \over {{\partial }{x_1}}} &{} \displaystyle {{{\partial }{f_2}} \over {{\partial }{x_2}}} &{} \displaystyle {{{\partial }{f_2}} \over {{\partial }{x_3}}} &{} \displaystyle {{{\partial }{f_2}} \over {{\partial }{x_4}}} \\ \displaystyle {{{\partial }{f_3}} \over {{\partial }{x_1}}} &{} \displaystyle {{{\partial }{f_3}} \over {{\partial }{x_2}}} &{} \displaystyle {{{\partial }{f_3}} \over {{\partial }{x_3}}} &{} \displaystyle {{{\partial }{f_3}} \over {{\partial }{x_4}}} \\ \displaystyle {{{\partial }{f_4}} \over {{\partial }{x_1}}} &{} \displaystyle {{{\partial }{f_4}} \over {{\partial }{x_2}}} &{} \displaystyle {{{\partial }{f_4}} \over {{\partial }{x_3}}} &{} \displaystyle {{{\partial }{f_4}} \over {{\partial }{x_4}}} \end{pmatrix} \end{aligned}$$
(6.89)

where the elements of the first row of \({\nabla _x}f(x)\) are: \({{{\partial }{f_1}} \over {{\partial }{x_1}}}=0\), \({{{\partial }{f_1}} \over {{\partial }{x_2}}}=1\), \({{{\partial }{f_1}} \over {{\partial }{x_3}}}=0\), and \({{{\partial }{f_1}} \over {{\partial }{x_4}}}=0\).

the elements of the second row of \({\nabla _x}f(x)\) are: \({{{\partial }{f_2}} \over {{\partial }{x_1}}}=-2{\gamma _1}{x_1}{x_2}-({\omega _2 \over \omega _1})^2-k\), \({{{\partial }{f_2}} \over {{\partial }{x_2}}}=-{\gamma _1}(x_1^2-1)\), \({{{\partial }{f_2}} \over {{\partial }{x_3}}}=k\), and \({{{\partial }{f_2}} \over {{\partial }{x_4}}}=0\).

the elements of the third row of \({\nabla _x}f(x)\) are: \({{{\partial }{f_3}} \over {{\partial }{x_1}}}=0\), \({{{\partial }{f_3}} \over {{\partial }{x_2}}}=0\), \({{{\partial }{f_3}} \over {{\partial }{x_3}}}=0\), and \({{{\partial }{f_3}} \over {{\partial }{x_4}}}=1\).

and the elements of the fourth row of \({\nabla _x}f(x)\) are: \({{{\partial }{f_4}} \over {{\partial }{x_1}}}=k\), \({{{\partial }{f_4}} \over {{\partial }{x_2}}}=0\), \({{{\partial }{f_4}} \over {{\partial }{x_3}}}=-3{\delta }{x_3^2}-k\), and \({{{\partial }{f_4}} \over {{\partial }{x_4}}}=-\gamma _2\).
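The Jacobian entries can be verified against a finite-difference approximation of f(x); the parameter values below are placeholders:

```python
import numpy as np

# Illustrative parameter values (placeholders, not taken from the text)
g1, g2, om_ratio, k, delta = 0.1, 0.1, 1.2, 0.5, 0.3

def f(x):
    """Drift vector field of the coupled Van der Pol-Duffing MEMS."""
    x1, x2, x3, x4 = x
    return np.array([x2,
                     -g1*(x1**2 - 1.0)*x2 - om_ratio**2*x1 + k*(x3 - x1),
                     x4,
                     -g2*x4 - delta*x3**3 + k*(x1 - x3)])

def jacobian_f(x):
    """Analytic Jacobian of Eq. (6.89), row by row as listed above."""
    x1, x2, x3, x4 = x
    return np.array([
        [0.0, 1.0, 0.0, 0.0],
        [-2.0*g1*x1*x2 - om_ratio**2 - k, -g1*(x1**2 - 1.0), k, 0.0],
        [0.0, 0.0, 0.0, 1.0],
        [k, 0.0, -3.0*delta*x3**2 - k, -g2]])
```

A central-difference check at a few random points is a cheap safeguard against sign or index slips in such hand-derived Jacobians.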

6.3.4 Design of an H-Infinity Nonlinear Feedback Controller

6.3.4.1 Equivalent Linearized Dynamics of the MEMS

After linearization round its current operating point, the dynamic model of the MEMS is written as

$$\begin{aligned} \dot{x}=Ax+Bu+d_1 \end{aligned}$$
(6.90)

Parameter \(d_1\) stands for the linearization error in the dynamic model of the MEMS appearing in Eq. (6.90). The reference setpoints for the MEMS model state vector are denoted by \(\mathbf{{x_d}}=[x_1^{d},\ldots , x_4^{d}]\). Tracking of this trajectory is achieved after applying the control input \(u^{*}\). At every time instant the control input \(u^{*}\) is assumed to differ from the control input u appearing in Eq. (6.90) by an amount equal to \({\Delta }u\), that is \(u^{*}=u+{\Delta }u\)

$$\begin{aligned} \dot{x}_d=Ax_d+Bu^{*}+d_2 \end{aligned}$$
(6.91)

The dynamics of the controlled system described in Eq. (6.90) can be also written as

$$\begin{aligned} \dot{x}=Ax+Bu+Bu^{*}-Bu^{*}+d_1 \end{aligned}$$
(6.92)

and by denoting \(d_3=-Bu^{*}+d_1\) as an aggregate disturbance term one obtains

$$\begin{aligned} \dot{x}=Ax+Bu+Bu^{*}+d_3 \end{aligned}$$
(6.93)

By subtracting Eq. (6.91) from (6.93) one has

$$\begin{aligned} \dot{x}-\dot{x}_d=A(x-x_d)+Bu+d_3-d_2 \end{aligned}$$
(6.94)

By denoting the tracking error as \(e=x-x_d\) and the aggregate disturbance term as \(\tilde{d}=d_3-d_2\), the tracking error dynamics becomes

$$\begin{aligned} \dot{e}=Ae+Bu+\tilde{d} \end{aligned}$$
(6.95)

The above linearized form of the MEMS model can be efficiently controlled after applying an H-infinity feedback control scheme.
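One iteration of such a scheme can be sketched as follows, assuming a Riccati equation of the same form as Eq. (6.59) and a feedback law \(u=-{1 \over r}{B^T}Pe\) (both are assumptions here for illustration); the matrices correspond to the Jacobian-based linearization at the origin with placeholder parameter values:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def hinf_feedback(A, B, e, r=0.1, rho=1.0):
    """One iteration of an H-infinity tracking law for the error
    dynamics of Eq. (6.95); the Riccati equation is taken in the form
    of Eq. (6.59) and the gain u = -(1/r) B^T P e is assumed."""
    n = A.shape[0]
    R = np.array([[1.0 / (2.0/r - 1.0/rho**2)]])
    P = solve_continuous_are(A, B, np.eye(n), R)
    return (-(1.0/r) * (B.T @ P @ e)).item()

# Jacobian-based matrices at the origin (illustrative parameter values)
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [-1.94, 0.1, 0.5, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.5, 0.0, -0.5, -0.1]])
B = np.array([[0.0], [0.0], [0.0], [1.0]])
u = hinf_feedback(A, B, np.array([0.1, 0.0, -0.05, 0.0]))
```

In the actual method A and B are recomputed from the Jacobians at every sampling instant, so the Riccati equation is re-solved at each time step.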

6.3.5 The Nonlinear H-Infinity Control

The initial nonlinear model of the MEMS is in the form

$$\begin{aligned} \dot{x}=\tilde{f}(x, u) \ \ x{\in }R^n, \ u{\in }R^m \end{aligned}$$
(6.96)

Linearization of the MEMS model that comprises coupled electromechanical oscillators is performed at each iteration of the control algorithm round its present operating point \({(x^{*},u^{*})}=(x(t), u(t-T_s))\), where \(T_s\) is the sampling period. The linearized equivalent model of the system is described by

$$\begin{aligned} \dot{x}=Ax+Bu+L\tilde{d} \ \ x{\in }R^n, \ u{\in }R^m, \ \tilde{d}{\in }R^q \end{aligned}$$
(6.97)

where matrices A and B are obtained from the computation of the Jacobians of the MEMS model, and vector \(\tilde{d}\) denotes disturbance terms due to linearization errors. The problem of disturbance rejection for the MEMS linearized model that is described by

$$\begin{aligned}&\dot{x}=Ax+Bu+L\tilde{d} \nonumber \\&\quad y=Cx \end{aligned}$$
(6.98)

where \(x{\in }R^n\), \(u{\in }R^m\), \(\tilde{d}{\in }R^q\) and \(y{\in }R^p\), cannot be handled efficiently if the classical LQR control scheme is applied. This is because of the existence of the perturbation term \(\tilde{d}\). The disturbance term \(\tilde{d}\), apart from modeling (parametric) uncertainty and external perturbations, can also represent noise terms of any distribution.
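To make the per-iteration linearization concrete, the Jacobian matrices A and B can be approximated by central finite differences at the current operating point \((x^{*},u^{*})\). The sketch below uses a hypothetical Van der Pol-type vector field with illustrative parameter values (gamma1, k), not the chapter's exact MEMS model:

```python
import numpy as np

def linearize(f, x_op, u_op, eps=1e-6):
    """Approximate A = df/dx and B = df/du at (x_op, u_op) by central
    finite differences (a numerical stand-in for the analytic Jacobians)."""
    n, m = len(x_op), len(u_op)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x_op + dx, u_op) - f(x_op - dx, u_op)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x_op, u_op + du) - f(x_op, u_op - du)) / (2 * eps)
    return A, B

# Illustrative dynamics: a Van der Pol-type oscillator; gamma1 and k are
# hypothetical values chosen only for this example.
gamma1, k = 0.5, 1.0
def f(x, u):
    return np.array([x[1],
                     -gamma1 * (x[0]**2 - 1) * x[1] - x[0] + k * u[0]])

A, B = linearize(f, np.array([0.1, 0.0]), np.array([0.0]))
```

In the control method itself the Jacobians would be recomputed at every sampling instant, at the operating point \((x(t), u(t-T_s))\).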

As explained in the application of the control method in previous sections, in the \(H_{\infty }\) control approach, a feedback control scheme is designed for trajectory tracking by the MEMS state vector and simultaneous disturbance rejection, considering that the disturbance affects the system in the worst possible manner. The disturbances’ effects are incorporated in the following quadratic cost function:

$$\begin{aligned} J(t)={1 \over 2}{\int _0^T}[{y^T}(t)y(t)+r{u^T}(t)u(t)-{\rho ^2}{\tilde{d}^T}(t)\tilde{d}(t)]dt, \ \ r,{\rho }>0 \end{aligned}$$
(6.99)

As mentioned in previous chapters, the significance of the negative sign in the cost function term that is associated with the perturbation variable \(\tilde{d}(t)\) is that the disturbance tries to maximize the cost function J(t) while the control signal u(t) tries to minimize it. The physical meaning of the relation given above is that the control signal and the disturbances compete with each other within a min-max differential game. This problem of min-max optimization can be written as

$$\begin{aligned} {min_{u}}{max_{\tilde{d}}}J(u,\tilde{d}) \end{aligned}$$
(6.100)

The objective of the optimization procedure is to compute a control signal u(t) which can compensate for the worst possible disturbance that is externally imposed on the MEMS. As explained in previous sections, the solution to the min-max optimization problem is directly related to the value of the parameter \(\rho \). This means that there is an upper bound to the magnitude of the disturbances that can be annihilated by the control signal.

6.3.5.1 Computation of the Feedback Control Gains

For the linearized model of the MEMS given by Eq. (6.98) the cost function of Eq. (6.99) is defined, where the coefficient r determines the penalization of the control input and the weight coefficient \(\rho \) determines the reward of the disturbances’ effects.

In adherence to the analysis of the control method given in previous sections, it is assumed again that (i) the energy that is transferred from the disturbance signal \(\tilde{d}(t)\) is bounded, that is \({\int _0^{\infty }}{\tilde{d}^T(t)}\tilde{d}(t){dt}<\infty \), (ii) the pairs (A, B) and (A, L) are stabilizable, (iii) the pair (A, C) is detectable. Then, the optimal feedback control law is given by

$$\begin{aligned} u(t)=-Kx(t) \end{aligned}$$
(6.101)

with

$$\begin{aligned} K={1 \over r}{B^T}P \end{aligned}$$
(6.102)

where P is a positive semi-definite symmetric matrix which is obtained from the solution of the Riccati equation

$$\begin{aligned} {A^T}P+PA+Q-P\left( {2 \over r}B{B^T}-{1 \over {\rho ^2}}L{L^T}\right) P=0 \end{aligned}$$
(6.103)

where Q is also a positive definite symmetric matrix. The worst case disturbance is given by

$$\begin{aligned} \tilde{d}(t)={1 \over \rho ^2}{L^T}Px(t) \end{aligned}$$
(6.104)
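To make the gain computation concrete, the sketch below solves the Riccati equation in the form of Eq. (6.111) by extracting the stable invariant subspace of the associated Hamiltonian matrix (a standard solution technique, not one prescribed by the chapter), for a hypothetical double-integrator system with illustrative weights r, \(\rho\), Q:

```python
import numpy as np
from scipy.linalg import schur, inv

def hinf_riccati(A, B, L, Q, r, rho):
    """Stabilizing solution P of A'P + PA + Q - P((2/r)BB' - (1/rho^2)LL')P = 0,
    obtained from the stable invariant subspace of the Hamiltonian matrix.
    Assumes rho is large enough for such a solution to exist."""
    n = A.shape[0]
    S = (2.0 / r) * B @ B.T - (1.0 / rho**2) * L @ L.T
    H = np.block([[A, -S], [-Q, -A.T]])
    _, Z, _ = schur(H, output='real', sort='lhp')   # stable eigenvalues ordered first
    U1, U2 = Z[:n, :n], Z[n:, :n]
    P = U2 @ inv(U1)
    return (P + P.T) / 2.0                          # symmetrize against round-off

# Hypothetical double-integrator example; weights chosen for illustration only
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
L = np.eye(2)
Q = np.eye(2)
r, rho = 1.0, 5.0
P = hinf_riccati(A, B, L, Q, r, rho)
K = (1.0 / r) * B.T @ P                             # feedback gain, Eq. (6.102)
d_worst = lambda x: (1.0 / rho**2) * L.T @ P @ x    # worst-case disturbance, Eq. (6.104)
```

In the control method this solution is recomputed at each time step, since A and B change with the operating point.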

The diagram of the MEMS control loop is depicted in Fig. 6.10.

Fig. 6.10

Diagram of the control scheme for MEMS comprising a Van-der-Pol oscillator coupled with a forced Duffing oscillator

6.3.6 Lyapunov Stability Analysis

Through Lyapunov stability analysis it will be shown that the proposed nonlinear control scheme assures \(H_{\infty }\) tracking performance for the MEMS model, and that in case of bounded disturbance terms asymptotic convergence to the reference setpoints is achieved. The tracking error dynamics for the MEMS model is written in the form

$$\begin{aligned} \dot{e}=Ae+Bu+L\tilde{d} \end{aligned}$$
(6.105)

where in the MEMS case \(L=I{\in }R^{4{\times }4}\) with I being the identity matrix. Variable \(\tilde{d}\) denotes model uncertainties and external disturbances of the micro-electromechanical system’s model. The following Lyapunov function is considered

$$\begin{aligned} V={1 \over 2}{e^T}Pe \end{aligned}$$
(6.106)

where \(e=x-x_d\) is the tracking error. By differentiating with respect to time one obtains

$$\begin{aligned} \begin{array}{c} \dot{V}={1 \over 2}{\dot{e}^T}Pe+{1 \over 2}{e^T}P\dot{e}{\Rightarrow }\\ \dot{V}={1 \over 2}{[Ae+Bu+L\tilde{d}]^T}Pe+{1 \over 2}{e^T}P[Ae+Bu+L\tilde{d}]{\Rightarrow } \\ \end{array} \end{aligned}$$
(6.107)
$$\begin{aligned} \begin{array}{c} \dot{V}={1 \over 2}[{e^T}{A^T}+{u^T}{B^T}+{\tilde{d}^T}{L^T}]Pe+\\ +{1 \over 2}{e^T}P[Ae+Bu+L\tilde{d}]{\Rightarrow } \\ \end{array} \end{aligned}$$
(6.108)
$$\begin{aligned} \begin{array}{c} \dot{V}={1 \over 2}{e^T}{A^T}Pe+{1 \over 2}{u^T}{B^T}Pe+{1 \over 2}{\tilde{d}^T}{L^T}Pe+ \\ {1 \over 2}{e^T}PAe+{1 \over 2}{e^T}PBu+{1 \over 2}{e^T}PL\tilde{d} \end{array} \end{aligned}$$
(6.109)

The previous equation is rewritten as

$$\begin{aligned} \begin{array}{c} \dot{V}={1 \over 2}{e^T}({A^T}P+PA)e+\left( {1 \over 2}{u^T}{B^T}Pe+{1 \over 2}{e^T}PBu\right) +\\ +\left( {1 \over 2}{\tilde{d}^T}{L^T}Pe+{1 \over 2}{e^T}PL\tilde{d}\right) \end{array} \end{aligned}$$
(6.110)

Assumption: For given positive definite matrix Q and coefficients r and \(\rho \) there exists a positive definite matrix P, which is the solution of the following matrix equation

$$\begin{aligned} {A^T}P+PA=-Q+P\left( {2 \over r}B{B^T}-{1 \over \rho ^2}L{L^T}\right) P \end{aligned}$$
(6.111)

Moreover, the following feedback control law is applied to the system

$$\begin{aligned} u=-{1 \over {r}}{B^T}Pe \end{aligned}$$
(6.112)

By substituting Eqs. (6.111) and (6.112) into Eq. (6.110) one obtains

$$\begin{aligned} \begin{array}{c} \dot{V}={1 \over 2}{e^T}\left[ -Q+P\left( {2 \over r}B{B^T}-{1 \over \rho ^2}L{L^T}\right) P\right] e+\\ +{e^T}PB\left( -{1 \over {r}}{B^T}Pe\right) +{e^T}PL\tilde{d}{\Rightarrow } \end{array} \end{aligned}$$
(6.113)
$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{e^T}Qe+{1 \over {r}}{e^T}PB{B^T}Pe-{1 \over {2\rho ^2}}{e^T}PL{L^T}Pe\\ -{1 \over {r}}{e^T}PB{B^T}Pe+{e^T}PL\tilde{d} \end{array} \end{aligned}$$
(6.114)

which after intermediate operations gives

$$\begin{aligned} \dot{V}=-{1 \over 2}{e^T}Qe-{1 \over {2\rho ^2}}{e^T}PL{L^T}Pe+{e^T}PL\tilde{d} \end{aligned}$$
(6.115)

or, equivalently

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{e^T}Qe-{1 \over {2\rho ^2}}{e^T}PL{L^T}Pe+\\ +{1 \over 2}{e^T}PL\tilde{d}+{1 \over 2}{\tilde{d}^T}{L^T}Pe \end{array} \end{aligned}$$
(6.116)

Lemma: The following inequality holds

$$\begin{aligned} {1 \over 2}{e^T}PL\tilde{d}+{1 \over 2}{\tilde{d}^T}{L^T}Pe-{1 \over {2\rho ^2}}{e^T}PL{L^T}Pe\,{\le }\,{1 \over 2}{\rho ^2}{\tilde{d}^T}\tilde{d} \end{aligned}$$
(6.117)

Proof: The binomial \(({\rho }{a}-{1 \over \rho }b)^2\,{\ge }\,0\) is considered. Expanding it one gets

$$\begin{aligned} \begin{array}{c} {\rho ^2}{a^2}+{1 \over {\rho ^2}}{b^2}-2ab \ge 0 \Rightarrow {1 \over 2}{\rho ^2}{a^2}+{1 \over {2\rho ^2}}{b^2}-ab \ge 0 \Rightarrow \\ ab-{1 \over {2\rho ^2}}{b^2} \le {1 \over 2}{\rho ^2}{a^2} \Rightarrow {1 \over 2}ab+{1 \over 2}ab-{1 \over {2\rho ^2}}{b^2} \le {1 \over 2}{\rho ^2}{a^2} \end{array} \end{aligned}$$
(6.118)

The following substitutions are carried out: \(a=\tilde{d}\) and \(b={e^T}{P}L\) and the previous relation becomes

$$\begin{aligned} {1 \over 2}{\tilde{d}^T}{L^T}Pe+{1 \over 2}{e^T}PL\tilde{d}-{1 \over {2\rho ^2}}{e^T}PL{L^T}Pe\,{\le }\,{1 \over 2}{\rho ^2}\tilde{d}^T\tilde{d} \end{aligned}$$
(6.119)

Equation (6.119) is substituted into Eq. (6.116) and the inequality is enforced, thus giving

$$\begin{aligned} \dot{V}{\le }-{1 \over 2}{e^T}Qe+{1 \over 2}{\rho ^2}{\tilde{d}^T}\tilde{d} \end{aligned}$$
(6.120)

Equation (6.120) shows that the \(H_{\infty }\) tracking performance criterion is satisfied. The integration of \(\dot{V}\) from 0 to T gives

$$\begin{aligned} \begin{array}{c} {\int _0^T}\dot{V}(t)dt{\le }-{1 \over 2}{\int _0^T}{||e||_Q^2}{dt}+{1 \over 2}{\rho ^2}{\int _0^T}{||\tilde{d}||^2}{dt}{\Rightarrow }\\ 2V(T)+{\int _0^T}{||e||_Q^2}{dt}\,{\le }\, 2V(0)+{\rho ^2}{\int _0^T}{||\tilde{d}||^2}dt \end{array} \end{aligned}$$
(6.121)

Moreover, if there exists a positive constant \(M_d>0\) such that

$$\begin{aligned} \int _0^{\infty }{||\tilde{d}||^2}dt \le M_d \end{aligned}$$
(6.122)

then one gets

$$\begin{aligned} {\int _0^{\infty }}{||e||_Q^2}dt \le 2V(0)+{\rho ^2}{M_d} \end{aligned}$$
(6.123)

Thus, the integral \({\int _0^{\infty }}{||e||_Q^2}dt\) is bounded. Moreover, V(T) is bounded and from the definition of the Lyapunov function V in Eq. (6.106) it becomes clear that e(t) will also be bounded since \(e(t) \ \in \ \Omega _e=\{e|{e^T}Pe\,{\le }\, 2V(0)+{\rho ^2}{M_d}\}\). According to the above and with the use of Barbalat’s Lemma one obtains \(\lim _{t \rightarrow \infty }{e(t)}=0\).

Elaborating on the above, it can be noted that the proof of global asymptotic stability for the control loop of the MEMS model is based on Eq. (6.120) and on the application of Barbalat’s Lemma. It uses the condition of Eq. (6.122) about the boundedness of the square of the aggregate disturbance and modelling error term \(\tilde{d}\) that affects the model. However, the proof of global asymptotic stability is not restricted by this condition. By selecting the attenuation coefficient \(\rho \) to be sufficiently small, and in particular to satisfy \(\rho ^2<||e||^2_Q / ||\tilde{d}||^2\), one has that the first derivative of the Lyapunov function is upper bounded by 0. Therefore, for each sampling interval it is proven that the Lyapunov function defined in Eq. (6.106) is decreasing, which also assures that its first derivative will always be negative.
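The boundedness result of Eqs. (6.120)–(6.123) can be checked in simulation. The sketch below propagates the tracking error dynamics of Eq. (6.105) for a hypothetical double-integrator example under a bounded sinusoidal disturbance; the stabilizing gain is hand-picked for brevity, standing in for the Riccati-based gain of Eq. (6.112):

```python
import numpy as np

# Error dynamics e_dot = A e + B u + L d~, with L = I, for a hypothetical
# double-integrator example; the gain is illustrative, not the chapter's.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 2.0]])            # places closed-loop poles at -1, -1
dt, T = 1e-3, 10.0
e = np.array([1.0, 0.0])
norms = [np.linalg.norm(e)]
for k in range(int(T / dt)):
    d = 0.01 * np.sin(np.pi * k * dt) * np.ones(2)   # bounded disturbance
    u = -K @ e                                       # state feedback, as in Eq. (6.112)
    e = e + dt * (A @ e + B @ u + d)                 # forward-Euler integration
    norms.append(np.linalg.norm(e))
# With bounded d~, e(t) stays bounded and settles in a small neighborhood of
# zero, mirroring the H-infinity tracking bound of Eqs. (6.120)-(6.123).
```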

6.3.7 Robust State Estimation with the Use of the H-infinity Kalman Filter

The MEMS control loop can be implemented with the use of information provided by a small number of sensors and by processing only a small number of state variables. To reconstruct the missing information about the state vector of the MEMS model it is proposed to use a filtering scheme and based on it to apply state estimation-based control [169, 457, 511]. The recursion of the \(H_{\infty }\) Kalman Filter, for the MEMS model, can be formulated in terms of a measurement update and a time update part:

Measurement update:

$$\begin{aligned}&D(k)=[I-{\theta }W(k)P^{-}(k)+{C^T}(k)R(k)^{-1}C(k)P^{-}(k)]^{-1} \nonumber \\&K(k)=P^{-}(k)D(k){C^T}(k)R(k)^{-1}\\&\hat{x}(k)=\hat{x}^{-}(k)+K(k)[y(k)-C\hat{x}^{-}(k)] \nonumber \end{aligned}$$
(6.124)

Time update:

$$\begin{aligned} \begin{array}{c} \hat{x}^{-}(k+1)=A(k)\hat{x}(k)+B(k)u(k)\\ P^{-}(k+1)=A(k)P^{-}(k)D(k)A^T(k)+Q(k) \end{array} \end{aligned}$$
(6.125)

where it is assumed that parameter \(\theta \) is sufficiently small to assure that the covariance matrix \({P^{-}(k)}^{-1}-{\theta }W(k)+C^T(k)R(k)^{-1}C(k)\) will be positive definite. When \(\theta =0\) the \(H_{\infty }\) Kalman Filter becomes equivalent to the standard Kalman Filter. One can measure only a part of the state vector of the MEMS (e.g. state variables \(x_1\) and \(x_3\)), and can estimate through filtering the rest of the state vector elements.
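A sketch of this recursion is given below, applied to a hypothetical discrete-time double-integrator surrogate in which only the position is measured and the velocity is reconstructed by the filter; the matrices and \(\theta\) are illustrative choices, not values from the chapter:

```python
import numpy as np

def hinf_kf_step(x_minus, P_minus, y, u, A, B, C, Q, R, W, theta):
    """One recursion of the H-infinity Kalman Filter, following
    Eqs. (6.124)-(6.125); theta = 0 recovers the standard Kalman Filter."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    D = np.linalg.inv(np.eye(n) - theta * W @ P_minus
                      + C.T @ Rinv @ C @ P_minus)
    K = P_minus @ D @ C.T @ Rinv
    x_hat = x_minus + K @ (y - C @ x_minus)    # measurement update
    x_minus_next = A @ x_hat + B @ u           # time update (a-priori state)
    P_minus_next = A @ P_minus @ D @ A.T + Q   # time update (a-priori covariance)
    return x_hat, x_minus_next, P_minus_next

# Hypothetical discrete double-integrator surrogate: measure position x1 only
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])
Q, R, W = 1e-6 * np.eye(2), np.array([[1e-4]]), np.eye(2)
theta = 1e-3                      # small enough for positive definiteness
x_true = np.array([1.0, 0.5])     # true velocity 0.5 is unknown to the filter
x_minus, P_minus = np.zeros(2), np.eye(2)
u = np.array([0.0])
for _ in range(500):
    y = C @ x_true
    x_hat, x_minus, P_minus = hinf_kf_step(x_minus, P_minus, y, u,
                                           A, B, C, Q, R, W, theta)
    x_true = A @ x_true + B @ u
```

After a few hundred recursions the unmeasured velocity estimate converges to its true value, which is the mechanism used to reconstruct the unmeasured MEMS state variables.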

6.3.8 Simulation Tests

6.3.8.1 Computation of Setpoints for the Model of the Coupled MEMS

The setpoints for the model of the coupled MEMS, that is of the Van der Pol oscillator driven by the forced Duffing oscillator, are computed by exploiting the model’s differential flatness properties. The system is in triangular form and thus it is differentially flat, with flat output \(y=x_1\). From the first row of the state-space model of Eq. (6.83) it holds that \(x_2=\dot{x}_1\). Moreover, from the second row of the state-space model of Eq. (6.83) one obtains

$$\begin{aligned} x_3={1 \over k}\left[ {\dot{x}_2+{\gamma _1}(x_1^2-1){x_2}+\left( {\omega _2 \over \omega _1}\right) ^2{x_1}+k{x_1}}\right] \end{aligned}$$
(6.126)

From the third row of the state-space model it holds that \(x_4=\dot{x}_3\). Additionally, from the fourth row of the state-space model one has

$$\begin{aligned} u={1 \over {cos\left( {\omega _d \over \omega _1}\tau \right) }}[\dot{x}_4+{\gamma _2}{x_4}+{\delta }{x_3^2}-k(x_1-x_3)] \end{aligned}$$
(6.127)

Therefore, all state variables and the control inputs of the model can be expressed as differential functions of the flat output, and as a consequence the MEMS system consisting of the Van der Pol oscillator, driven by the Duffing oscillator, is a differentially flat one.
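The flatness-based setpoint computation of Eqs. (6.126)–(6.127) can be sketched symbolically: given a chosen flat-output trajectory \(x_1(t)\), all remaining setpoints follow by differentiation. The parameter values below are hypothetical, chosen only to make the sketch concrete:

```python
import sympy as sp

# Hypothetical parameter values, for illustration only
t = sp.symbols('t')
k, gamma1, gamma2, delta = 1.0, 0.5, 0.2, 0.1
w1, w2, wd = 1.0, 1.2, 0.8

x1 = sp.sin(t)                      # chosen flat-output trajectory y = x1
x2 = sp.diff(x1, t)                 # first row of the state-space model
x3 = (sp.diff(x2, t) + gamma1 * (x1**2 - 1) * x2
      + (w2 / w1)**2 * x1 + k * x1) / k          # Eq. (6.126)
x4 = sp.diff(x3, t)                 # third row of the state-space model
u = (sp.diff(x4, t) + gamma2 * x4 + delta * x3**2
     - k * (x1 - x3)) / sp.cos(wd / w1 * t)      # Eq. (6.127)

# Numeric evaluation of the resulting setpoint profiles
x3_fn = sp.lambdify(t, x3)
u_fn = sp.lambdify(t, u)
```

Any sufficiently smooth choice of \(x_1(t)\) yields a consistent set of state and input reference profiles, which is how the setpoints of the simulation tests can be generated.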

Fig. 6.11

Setpoint 1: a Convergence of state variables \(x_1\) and \(x_2\) of the MEMS (blue lines) to the reference setpoints (red lines) and estimates of them provided by the Kalman Filter (green lines), b Convergence of state variables \(x_3\) and \(x_4\) of the MEMS (blue lines) to the reference setpoints (red lines) and estimates of them provided by the Kalman Filter (green lines)

Fig. 6.12

Setpoint 2: a Convergence of state variables \(x_1\) and \(x_2\) of the MEMS (blue lines) to the reference setpoints (red lines) and estimates of them provided by the Kalman Filter (green lines), b Convergence of state variables \(x_3\) and \(x_4\) of the MEMS (blue lines) to the reference setpoints (red lines) and estimates of them provided by the Kalman Filter (green lines)

Fig. 6.13

Setpoint 3: a Convergence of state variables \(x_1\) and \(x_2\) of the MEMS (blue lines) to the reference setpoints (red lines) and estimates of them provided by the Kalman Filter (green lines), b Convergence of state variables \(x_3\) and \(x_4\) of the MEMS (blue lines) to the reference setpoints (red lines) and estimates of them provided by the Kalman Filter (green lines)

Fig. 6.14

Setpoint 4: a Convergence of state variables \(x_1\) and \(x_2\) of the MEMS (blue lines) to the reference setpoints (red lines) and estimates of them provided by the Kalman Filter (green lines), b Convergence of state variables \(x_3\) and \(x_4\) of the MEMS (blue lines) to the reference setpoints (red lines) and estimates of them provided by the Kalman Filter (green lines)

Fig. 6.15

Setpoint 5: a Convergence of state variables \(x_1\) and \(x_2\) of the MEMS (blue lines) to the reference setpoints (red lines) and estimates of them provided by the Kalman Filter (green lines), b Convergence of state variables \(x_3\) and \(x_4\) of the MEMS (blue lines) to the reference setpoints (red lines) and estimates of them provided by the Kalman Filter (green lines)

Fig. 6.16

Setpoint 6: a Convergence of state variables \(x_1\) and \(x_2\) of the MEMS (blue lines) to the reference setpoints (red lines) and estimates of them provided by the Kalman Filter (green lines), b Convergence of state variables \(x_3\) and \(x_4\) of the MEMS (blue lines) to the reference setpoints (red lines) and estimates of them provided by the Kalman Filter (green lines)

Fig. 6.17

Variation of the control input u applied to the MEMS a when tracking Setpoint 1, b when tracking Setpoint 2

Fig. 6.18

Variation of the control input u applied to the MEMS a when tracking Setpoint 3, b when tracking Setpoint 4

Fig. 6.19

Variation of the control input u applied to the MEMS a when tracking Setpoint 5, b when tracking Setpoint 6

6.3.8.2 Simulation Diagrams

Simulation tests have been carried out to assess the tracking accuracy of the proposed nonlinear optimal (H-infinity) control method for the MEMS that comprises the Van-der-Pol oscillator model, elastically coupled to the forced Duffing oscillator model. The obtained simulation results have confirmed that, despite the nonlinearities and the underactuation in the MEMS state-space description, the proposed control scheme achieves fast and accurate tracking of all reference setpoints, while also keeping the variations of the control input moderate. The simulation results are depicted in Figs. 6.11, 6.12, 6.13, 6.14, 6.15, 6.16, 6.17, 6.18 and 6.19, where the real values of the MEMS state variables are plotted in blue, the reference setpoints of the experiments are plotted in red, and the estimates of the state vector elements (provided by the H-infinity Kalman Filter) are plotted in green.

The computation of the feedback gain of the H-infinity controller was based on the solution of the algebraic Riccati equation of Eq. (6.111), taking place at each time step of the control method. The selection of the attenuation coefficient \(\rho \) determines the robustness of the control algorithm as well as the existence of a solution of the aforementioned Riccati equation. As explained in the preceding sections, by selecting \(\rho \) to be sufficiently small the global asymptotic stability of the control method is assured. Moreover, the values of parameters \(\rho \), r and of matrix Q appearing in Eq. (6.111) determine the transients of the control method.

By using the H-infinity Kalman Filter as a robust state estimator it has become possible to implement state estimation-based control through the measurement of selected state vector elements (for instance the position variables of the Van-der-Pol and the Duffing oscillators). The rest of the state vector elements were estimated through the Kalman Filter’s recursion. The use of a state estimator, in place of measurements of the entire state vector of the MEMS is important considering the difficulty of obtaining sensor measurements at the MEMS scale.