
8.1 Introduction

In this chapter, differential flatness theory is used to develop adaptive fuzzy control for electric motors and actuators of unknown model, or of a model characterized by nonmeasurable state variables and parametric uncertainty. First, the problem of adaptive fuzzy control for DC electric motors is studied. The considered electric motors can be written in the Brunovsky (canonical) form after a transformation of their state variables and control input. The resulting control signal is shown to consist of nonlinear elements, which in the case of unknown system parameters can be approximated using neuro-fuzzy networks. An adaptation law for the neuro-fuzzy approximators can be computed using Lyapunov stability analysis. It is shown that the proposed adaptation law assures stability of the closed loop. A nonlinear DC motor model is used to evaluate the performance of the proposed flatness-based adaptive control scheme.

Additionally, in this chapter it is shown that the complete 6th order model of the induction motor satisfies differential flatness properties, since all its state variables and control inputs can be expressed as functions of the flat outputs. The flat outputs are chosen to be the rotor’s turn angle and the orientation angle of the magnetic flux. This type of flatness-based control for the induction motor model is implemented in cascading loops. Moreover, nonlinear Kalman Filtering methods, such as the Extended and the Unscented Kalman Filter, are included in this control scheme to estimate the state vector of the asynchronous motor using a limited number of sensors, such as the ones measuring stator currents. In the latter case, control of the induction motor is implemented through feedback of the estimated state vector. The efficiency of the Kalman Filter-based control scheme is confirmed by simulation experiments.

Finally, the chapter proposes an adaptive fuzzy approach to the problem of control of electrostatically actuated MEMS (microelectromechanical systems), which is based on differential flatness theory and which uses exclusively output feedback. It is shown that the model of the electrostatically actuated MEMS is a differentially flat one, and this makes it possible to transform it to the so-called linear canonical form. For the new description of the system’s dynamics the transformed control inputs contain unknown terms which depend on the system’s parameters. To identify these terms an adaptive fuzzy approximator is used in the control loop. Thus an indirect adaptive control scheme is implemented in which the unknown or unmodeled system dynamics is approximated by neuro-fuzzy networks and next this information is used by a feedback controller that makes the electrostatically actuated MEMS converge to the desirable motion setpoints. This adaptive control scheme is exclusively implemented with the use of output feedback, while the state vector elements which are not directly measured are estimated with the use of a state observer that operates in the control loop. The learning rate of the adaptive fuzzy system is suitably computed from Lyapunov analysis, so as to assure that the learning procedure for the unknown system’s parameters, the dynamics of the observer and the dynamics of the control loop will all remain stable. The Lyapunov stability analysis depends on two Riccati equations, one associated with the feedback controller and one associated with the state observer. Finally, it is proven that for the control scheme that comprises the feedback controller, the state observer and the neuro-fuzzy approximator, H-infinity tracking performance can be achieved. The functioning of the control loop has been evaluated through simulation experiments.

8.2 Flatness-Based Adaptive Control of DC Motors

8.2.1 Overview

This section is particularly concerned with differentially flat single-input dynamical systems which can be written in the Brunovsky (canonical) form. As shown in Chap. 2, according to the Lie-Bäcklund condition for equivalence of differentially flat systems, if a system is differentially flat then it can be transformed to the linear canonical (Brunovsky) form. In particular, transformation into the Brunovsky form can be achieved for systems that admit static feedback linearization (i.e., a change of coordinates for both the system state variables and the system’s control input). Single-input differentially flat systems admit static feedback linearization and therefore can be finally written in the Brunovsky form [340]. Moreover, flatness-based adaptive fuzzy control can be applied to multi-input dynamical systems. For MIMO dynamical systems which are differentially flat and which admit static feedback linearization, transformation to the canonical (Brunovsky) form can be performed. Additionally, even for MIMO dynamical systems which are differentially flat and admit only dynamic feedback linearization, transformation to the canonical (Brunovsky) form can still be achieved. Therefore, there exists a wide class of nonlinear dynamical systems to which the proposed flatness-based adaptive fuzzy control method can be applied [426].

After transformation to the linear canonical form, the resulting control input is shown to contain nonlinear elements which depend on the system’s parameters. If the parameters of the system are unknown, then the nonlinear terms which appear in the control signal can be approximated with the use of neuro-fuzzy networks. In this chapter it is shown that a suitable learning law can be defined for the aforementioned neuro-fuzzy approximators so as to preserve the closed-loop system stability. Lyapunov stability analysis also proves that the proposed flatness-based adaptive fuzzy control scheme results in \(H_{\infty }\) tracking performance, in accordance with the results of [407, 410, 413, 433].

Unlike other adaptive fuzzy control schemes which are based on several assumptions about the structure of the nonlinear system as well as about the uncertainty characterizing the system’s model, the proposed adaptive fuzzy control scheme based on differential flatness theory offers an exact solution to the design of adaptive controllers for unknown dynamical systems. The only assumption needed for the design of the controller and for succeeding \(H_{\infty }\) tracking performance for the control loop is that there exists a solution for a Riccati equation associated to the linearized error dynamics of the differentially flat model. This assumption is quite reasonable for several nonlinear systems (including electric motors and actuators), thus providing a systematic approach to the design of reliable controllers for such systems [426, 433].

8.2.2 Dynamics and Linearization of the DC Motor Model

The control approach to be followed in this section is similar to the one analyzed in Chap. 3. The dynamic model of the nonlinear DC motor has already been presented in Chap. 4. As explained, the dynamical model of the DC motor can be written as an affine-in-the-input system: \(\dot{x}=f(x,t)+g(x,t)u\), with \(\dot{x}\) denoting the derivative of the motor’s state vector, \(x=[x_1,x_2,x_3]^T=[\theta ,\dot{\theta },i_{\alpha }]\) (\(\theta \) is the position of the motor, \(\dot{\theta }\) is the angular velocity of the motor and \(i_{\alpha }\) is the armature current) [202, 539]. Functions f(x) and g(x) are vector field functions defined as:

$$\begin{aligned} f(x)=\begin{pmatrix} x_2 \\ {k_1}{x_2}+{k_2}{x_3}+{k_3}{x_3^2}+{k_4}{T_1} \\ {k_5}{x_2}+{k_6}{x_2}{x_3}+{k_7}{x_3} \end{pmatrix}, g(x)=\begin{pmatrix} 0 \\ 0 \\ k_8 \end{pmatrix} \end{aligned}$$
(8.1)

where \(k_1=-F/J\), \(k_2=A/J\), \(k_3=B/J\), \(k_4=-1/J\), \(k_5=-A/L\), \(k_6=-B/L\), \(k_7=-R/L\), \(k_8=-1/L\), R and L are the armature resistance and inductance respectively, and J is the rotor’s inertia, while F is the friction coefficient. Variable A is a constant defining the torque due to the armature’s current, while variable B is a constant associated with the armature’s reaction. Now, the state-space equation of the DC motor becomes

$$\begin{aligned} \begin{array}{c} \dot{x}_1=x_2 \\ \dot{x}_2={k_1}{x_2}+{k_2}{x_3}+{k_3}{x_3^2}+{k_4}{T_1} \\ \dot{x}_3={k_5}{x_2}+{k_6}{x_2}{x_3}+{k_7}{x_3}+{k_8}u \end{array} \end{aligned}$$
(8.2)

where \(T_1\) is the load torque and u is the terminal voltage. From the second row of Eq. (8.2) one obtains

$$\begin{aligned} \begin{array}{c} \ddot{x}_2={k_1}\dot{x}_2+{k_2}\dot{x}_3+2{k_3}{x_3}\dot{x}_3 \Rightarrow \\ \ddot{x}_2={k_1}\dot{x}_2+{k_2}\dot{x}_3+2{k_3{k_5}{x_2}{x_3}}+2{k_3}{k_6}{x_2}{x_3^2}+2{k_3}{k_7}{x_3^2}+2{k_3}{k_8}{x_3}u \end{array} \end{aligned}$$
(8.3)

Thus, one has \(\ddot{x}_2=\bar{f}(x)+\bar{g}(x)u\) where

$$\begin{aligned} \begin{array}{c} \bar{f}(x)={k_1}\dot{x}_2+{k_2}\dot{x}_3+2{k_3{k_5}{x_2}{x_3}}+2{k_3}{k_6}{x_2}{x_3^2}+2{k_3}{k_7}{x_3^2}, \ \text {and}\\ \bar{g}(x)=2{k_3}{k_8}{x_3} \end{array} \end{aligned}$$
(8.4)

For the considered nonlinear electric motor model described in Eq. (8.4) it is assured that inherently \(\bar{g}(x)\,{\ne }\,0\), therefore singularities are not going to appear in the control law. Moreover, assuming the effects of friction \({k_1}{x}_2\) and of the load torque \({k_4}{T_1}\) to be external disturbances, the nonlinear DC motor model of Eq. (8.2) becomes

$$\begin{aligned} \begin{array}{c} \dot{x}_1=x_2 \\ \dot{x}_2={k_2}{x_3}+{k_3}{x_3^2} \\ \dot{x}_3={k_5}{x_2}+{k_6}{x_2}{x_3}+{k_7}{x_3}+{k_8}u \end{array} \end{aligned}$$
(8.5)

Selecting the flat output to be \(y=x_1\) one can see that all state variables \(x_i, \ i=1,2,3\) and the control input u can be expressed as functions of the flat output and its derivatives. Indeed it holds

$$\begin{aligned} \begin{array}{c} x_1=y\\ x_2=\dot{y}\\ x_3={{-{k_2}+\sqrt{|k_2^2+4{k_3}\ddot{y}|}} \over {2k_3}}\\ \end{array} \end{aligned}$$
(8.6)

with control input

$$\begin{aligned} \begin{array}{c} u={1 \over {\bar{g}(x)}}[y^{(3)}-\bar{f}(y,\dot{y},\ddot{y})]. \end{array} \end{aligned}$$
(8.7)
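To make the computation of Eq. (8.7) concrete, the following Python sketch simulates the simplified model of Eq. (8.5) under a flatness-based control law of this type; the drift and input-gain terms are recomputed for the simplified model by direct differentiation, in the spirit of Eqs. (8.3)–(8.4). The motor coefficients \(k_2,\ldots ,k_8\), the feedback gains and the sinusoidal setpoint are placeholder values chosen only for the example, not the ones used in the simulation experiments of this section.

```python
import numpy as np

# Placeholder motor coefficients and gains (illustrative only, not the chapter's values)
k2, k3, k5, k6, k7, k8 = 1.0, 0.05, -0.6, -0.05, -8.0, 10.0
c1, c2, c3 = 6.0, 12.0, 8.0        # coefficients of the Hurwitz polynomial (s+2)^3

def f_bar(x):
    """Drift part of y^(3) for the simplified model of Eq. (8.5)."""
    dx3_drift = k5*x[1] + k6*x[1]*x[2] + k7*x[2]
    return (k2 + 2.0*k3*x[2])*dx3_drift

def g_bar(x):
    """Input gain of y^(3) for the simplified model of Eq. (8.5); nonzero for small x3."""
    return (k2 + 2.0*k3*x[2])*k8

def control(x, t):
    """Flatness-based control law of the type of Eq. (8.7) for a sinusoidal setpoint."""
    yd, dyd, ddyd, d3yd = np.sin(t), np.cos(t), -np.sin(t), -np.cos(t)
    y, dy = x[0], x[1]
    ddy = k2*x[2] + k3*x[2]**2                  # second derivative of the flat output
    v = d3yd - c1*(ddy - ddyd) - c2*(dy - dyd) - c3*(y - yd)
    return (v - f_bar(x))/g_bar(x)

# Euler simulation of the closed loop
dt, x = 1e-4, np.array([0.0, 0.0, 0.1])
for k in range(int(10.0/dt)):
    t = k*dt
    x = x + dt*np.array([x[1],
                         k2*x[2] + k3*x[2]**2,
                         k5*x[1] + k6*x[1]*x[2] + k7*x[2] + k8*control(x, t)])
print("final position error:", x[0] - np.sin(10.0))
```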

The aforementioned system of Eq. (8.5) can be written in the Brunovsky form:

$$\begin{aligned} \begin{array}{c} \begin{pmatrix} {\dot{y}_1}\\ {\dot{y}_2}\\ {\dot{y}_3}\\ \end{pmatrix}= \begin{pmatrix} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}+ \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \end{array}v \end{aligned}$$
(8.8)

where \(y_1=y\) and \(v=\bar{f}(x,t)+\bar{g}(x,t)u\).

Functions \(\bar{f}(x,t)\) and \(\bar{g}(x,t)\) are given by Eq. (8.4). The stability analysis of the adaptive fuzzy control scheme follows the stages presented in Chap. 3, for the case of single-input dynamical systems. Using a control input as in Eq. (8.7) it was possible to make the electric motor’s angle track any desirable setpoint. Regarding the implementation of the flatness-based adaptive fuzzy controller, the neuro-fuzzy approximators for functions \(\bar{f}(x,t)\) and \(\bar{g}(x,t)\) have now three inputs, i.e., x, \(\dot{x}\) and \(\ddot{x}\). Considering that each fuzzy input variable is partitioned into 3 fuzzy sets, there are 27 fuzzy rules of the form:

$$\begin{aligned} \begin{array}{c} R^l: \text {IF} \ {x} \ \text {is} \ A_1^l \ \text {AND} \ {\dot{x}} \ \text {is} \ A_2^l \ \text {AND} \ {\ddot{x}} \ \text {is} \ A_3^l \,\text {THEN} \ \hat{f}^l \ \text {is} \ \ b^l \end{array} \end{aligned}$$
(8.9)
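To illustrate what the rule base of Eq. (8.9) looks like in practice, the Python sketch below builds a zero-order Takagi–Sugeno (neuro-fuzzy) approximator with three Gaussian fuzzy sets per input, i.e., 27 rules, and center-average defuzzification. The membership-function centers and widths, the stand-in target function and the simple gradient-type weight update are illustrative assumptions; they are not the Lyapunov-derived learning law used in this chapter.

```python
import numpy as np
from itertools import product

centers = [-1.0, 0.0, 1.0]       # assumed centers of the 3 fuzzy sets of each input
sigma_mf = 0.7                   # assumed common width of the Gaussian membership functions
rules = list(product(centers, repeat=3))      # one rule per combination: 3^3 = 27 rules

def regressor(z):
    """Normalized firing strengths of the 27 rules for z = (x, dx, ddx)."""
    w = np.array([np.exp(-sum((z[i] - c[i])**2 for i in range(3))/(2*sigma_mf**2))
                  for c in rules])
    return w/(np.sum(w) + 1e-12)

def f_hat(z, theta):
    """Center-average defuzzification: weighted sum of the rule consequents b^l."""
    return theta @ regressor(z)

# Illustrative adaptation of the consequents (simple gradient rule, not the chapter's law)
theta = np.zeros(len(rules))
lr = 0.05
unknown = lambda z: np.sin(z[0]) + 0.3*z[1]*z[2]   # stand-in for the unknown dynamics
for _ in range(20000):
    z = np.random.uniform(-1.0, 1.0, size=3)
    theta += lr*(unknown(z) - f_hat(z, theta))*regressor(z)
print("approximation error at z=(0.5,0.5,0.5):", unknown([0.5]*3) - f_hat([0.5]*3, theta))
```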

In the simulation experiments, it was assumed that at the beginning of the second half of the simulation time an additive sinusoidal disturbance of amplitude \(A=2.0\) and period \(T=7.5\) s affected the DC motor. The position and velocity variations for a sinusoidal setpoint are depicted in Fig. 8.1a, b, respectively. The performance of the proposed flatness-based adaptive fuzzy control was also tested in the tracking of a seesaw setpoint. The associated position and velocity variations are demonstrated in Fig. 8.2a, b, respectively. The control signal in the case of tracking of a sinusoidal setpoint is shown in Fig. 8.3a, while the control signal when tracking a seesaw setpoint is shown in Fig. 8.3b. Finally, the approximation of function \(\bar{g}(x,t)\) in the case of tracking of a sinusoidal setpoint is shown in Fig. 8.4a (and is marked as a dashed line), while when tracking a seesaw setpoint the approximated function \(\bar{g}(x,t)\) is shown in Fig. 8.4b.

Fig. 8.1

a Tracking of a sinusoidal position setpoint (red line) by the angle of the motor. b Tracking of a sinusoidal velocity setpoint (red line) by the angular velocity of the motor

Fig. 8.2

a Tracking of a seesaw position setpoint (red line) by the angle of the motor. b Tracking of a seesaw velocity setpoint (red line) by the angular velocity of the motor (continuous line)

Fig. 8.3

a Control input of the motor when tracking a sinusoidal setpoint. b Control input of the motor when tracking a seesaw setpoint

Fig. 8.4

a Approximation of function \(\bar{g}(x,t)\) of the motor model when tracking a sinusoidal setpoint. b Approximation of function \(\bar{g}(x,t)\) of the motor model when tracking a seesaw setpoint

As already explained in Chap. 3, and comparing the proposed flatness-based adaptive fuzzy control method to other adaptive fuzzy control approaches and to the analysis on neural adaptive control methods given in the relevant bibliography (e.g., [167, 168, 205]), the following can be noted: (i) the transformation of the initial nonlinear system into the linearized Brunovsky (canonical) form does not require the computation of partial derivatives or Lie derivatives, (ii) there is no need to make restrictive assumptions about the number of truncated higher order terms in the linearization of the system’s nonlinear model or about a bounded error in the linearization of the output of the neural/fuzzy approximators, (iii) the number of adaptable parameters that is involved in the training of the neural/fuzzy approximators remains small and there is no need to use an excessive number of neural/fuzzy approximators to produce the control signal, (iv) the tracking performance of the neuro-fuzzy control loop is evaluated with the use of specific metrics (e.g., \(H_{\infty }\) tracking performance), (v) the proposed flatness-based control method can be also extended to MIMO actuation systems, without constraining assumptions about their dynamics and structure (e.g. triangular, affine-in-the-input, of canonical form, etc.) [426, 433].

The problem of avoidance of singularities in the proposed control scheme has been already discussed in Chap. 3. Flatness-based adaptive fuzzy control assures stability of the control loop and the asymptotic elimination of the tracking error. Therefore, it can be ensured that \(\tilde{\theta }_g=\theta _g-\theta _g^{*}{\rightarrow }0\), which means that \(\hat{g}(x){\rightarrow }g(x)\). Provided that function g(x) does not become zero as long as x remains bounded, \(\hat{g}(x)\) will also be different from zero. For the cart-pole system described in Eq. (3.39) and Eq. (3.40) it holds that \(g(x)=0\) for \(x_1=\theta =0\); however, for \(x_1{\in }(0,\pi )\) this case is not going to occur, even if ideally there is no function approximation error and \(\hat{g}(x)\) coincides with g(x). For the considered nonlinear electric motor model described in Eq. (8.4) it is assured that inherently \(\bar{g}(x) \ne 0\), therefore again singularities are not going to appear in the control law. In the generic case, to assure the avoidance of singularities in the proposed control scheme one has to exclude singularity points from the reference trajectory that the system’s state vector has to track.

Singularities may appear not only in the proposed adaptive fuzzy control scheme but in all control systems which are based on static feedback linearization. For example, the linearization of the system through the use of a new control input of the form \(v=f(x)+g(x)u\) means that \(u=g(x)^{-1}[v-f(x)]\), which does not exclude the appearance of singularities. Therefore, singularities do not concern only the proposed control method but the whole class of static feedback-based linearization schemes. As explained in Chap. 2, some modifications can be introduced in the design of the controller to prohibit the appearance of singularities, for example, a change in coordinates that results in a new state-space representation which does not include any points of singularity [340].

8.3 Flatness-Based Control of Induction Motors in Cascading Loops

8.3.1 Overview

This section analyzes sensorless control of induction motors with the use of control methods which are based on differential flatness theory. Induction motors are currently a main element of several industrial systems, as well as of motion transmission and transportation systems. The possibility of reducing the number of sensors involved in the control of induction motors has been a subject of systematic research in recent years [49, 100, 170, 199, 200, 280, 296, 336, 432, 538]. As a result, state estimation-based control has become an active research area in the field of electric machines and power electronics. Elimination of the speed and magnetic flux sensors has the advantages of lower cost, ruggedness as well as increased reliability. Nonlinear Kalman Filtering can be used to obtain accurate estimates of the induction motor’s state vector through the processing of measurements coming from a small number of sensors, e.g., measurements of the stator currents. A well-established nonlinear Kalman Filtering approach is the Extended Kalman Filter (EKF), which is based on a linearization of the nonlinear dynamics using a first-order Taylor expansion [31, 229, 405, 408]. Alternatively, the Unscented Kalman Filter (UKF) can be considered. The Unscented Kalman Filter is a derivative-free state estimation method of high accuracy. The state distribution in the UKF is approximated by a Gaussian random variable, which is represented using a minimal set of suitably chosen weighted sample points. These sigma points are propagated through the true nonlinear system, thus generating the posterior sigma-point set, and the posterior statistics are calculated. The sample points progressively converge to the true mean and covariance of the Gaussian random variable [418, 419]. The use of the Unscented Kalman Filter for state estimation and control of nonlinear electric motor models is a relatively new and promising topic. Indicative results on the use of the UKF for sensorless control of induction motors and fault diagnosis of electric drives can be found in [4, 5, 6, 233, 256].

In this section, a sensorless control scheme for induction motors is developed consisting of (i) a nonlinear Kalman Filter, such as the Extended or the Unscented Kalman Filter, which provides estimates of the complete 6th order state vector of the induction motor, after sequential processing of measurements from a limited number of sensors (such as the ones measuring the stator currents), (ii) a nonlinear controller that is based on the principles of differential flatness theory, which unlike the conventional field-oriented control approach makes no assumption about decoupling between the rotor’s magnetic flux and the rotor’s angular speed. The performance of the Extended Kalman Filter-based sensorless control scheme is tested through simulation experiments and compared to an Unscented Kalman Filter-based control loop. It is shown that both the EKF- and the UKF-based control result in fast and accurate trajectory tracking.

8.3.2 A Cascading Loops Scheme for Control of Field-Oriented Induction Motors

8.3.2.1 Field-Oriented Induction Motor Model

As in the case of the asynchronous generators analyzed in Chap. 7, to derive the dynamic model of an induction motor the three-phase variables are first transformed to two-phase ones. This two-phase system can be described in the stator-coordinates frame \(\alpha -b\), and the associated voltages are denoted as \(v_{s_\alpha }\) and \(v_{s_b}\), while the currents of the stator are \(i_{s_\alpha }\) and \(i_{s_b}\), and the components of the rotor’s magnetic flux are \(\psi _{r_\alpha }\) and \(\psi _{r_b}\) (Fig. 8.5). Then, the rotation angle of the rotor with respect to the stator is denoted by \(\delta \), and a rotating reference frame \(d-q\) on the rotor is defined [432].

Fig. 8.5

Schematic diagram of the proposed flatness-based control scheme with the use of nonlinear Kalman filtering

The state vector of the motor is \(x=[\theta ,\omega ,\psi _{r_\alpha },\psi _{r_b},i_{s_\alpha },i_{s_b}]\) (where \(\theta \) stands for the turn angle of the rotor and \(\omega \) for the rotation speed), while the dynamic model of the induction motor is written as [62, 412, 434]:

$$\begin{aligned} \begin{array}{c} \dot{x}=f(x)+{g_\alpha }(x)v_{s_\alpha }+{g_b}(x)v_{s_b}+w(t)\\ z=h(x)+v(t) \end{array} \end{aligned}$$
(8.10)

with the first row describing the state equation of the motor and the second row describing the measurement equation of the motor (where h(x) is a nonlinear vector field of x). The elements of the induction motor’s dynamic model are:

$$\begin{aligned} f(x)= \begin{pmatrix} x_2 \\ {\mu _1}({x_3}{x_6}-{x_4}{x_5})-{T_L \over J} \\ -{\alpha _1}{x_3}-{n_p}{x_2}{x_4}+{\alpha _1}M{x_5} \\ {n_p}{x_2}{x_3}-{\alpha _1}{x_4}+{\alpha _1}M{x_6} \\ {\alpha _1}{\beta _1}{x_3}+{n_p}{\beta _1}{x_2}{x_4}-{\gamma _1}{x_5} \\ -{n_p}{\beta _1}{x_2}{x_3}+{\alpha _1}{\beta _1}{x_4}-{\gamma _1}{x_6} \end{pmatrix} \end{aligned}$$
(8.11)
$$\begin{aligned} \begin{array}{cc} g_\alpha =[0,0,0,0, {1 \over {{\sigma }{L_s}}}, 0]^T&g_b=[0,0,0,0,0,{1 \over {{\sigma }{L_s}}}]^T \end{array} \end{aligned}$$
(8.12)

where J is the rotor’s inertia, and \(T_L\) is the external load torque. The rest of the model parameters are \(\sigma =1-M^2/{L_s}{L_r}\), \(\alpha _1={R_r \over L_r}\), \(\beta _1={M \over {{\sigma }{L_s}{L_r}}}\), \(\gamma _1=({{{M^2}{R_r}} \over {{\sigma }{L_s}{L_r^2}}}+{{R_s} \over {{\sigma }{L_s}}})\), \(\mu _1={{{n_p}M} \over {JL_r}}\), where \(L_s\), \(L_r\) are the stator and rotor auto-inductances, M is the mutual inductance and \(n_p\) is the number of poles.

The process noise w(t) given in Eq. (8.10) is due to model inaccuracies associated with random variations of the model’s parameters. For example, the resistances, inductances, and magnetic permeability of the electric motor can exhibit a stochastic variation around a nominal value. The measurement noise v(t) given in Eq. (8.10) is due to stochastic variations of the elements of the measuring devices. If the effects of the noise signals are not compensated by a filtering procedure, the performance of the control loop can be unsatisfactory or the stability of the control loop can even be put at risk. In the sensorless control scheme of the induction motor studied in this section, the measured variables are considered to be the a-b reference frame currents of the stator.
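For reference, a minimal Python sketch of the state equation of Eqs. (8.10)–(8.12) is given below. The numerical values of the machine parameters and of the load torque are placeholders chosen only for the example.

```python
import numpy as np

# Placeholder motor parameters (illustrative, not from the chapter)
Rs, Rr, Ls, Lr, M = 1.5, 1.2, 0.14, 0.14, 0.13
J, n_p, T_L = 0.01, 2, 0.5

sigma = 1.0 - M**2/(Ls*Lr)
alpha1 = Rr/Lr
beta1 = M/(sigma*Ls*Lr)
gamma1 = (M**2*Rr)/(sigma*Ls*Lr**2) + Rs/(sigma*Ls)
mu1 = n_p*M/(J*Lr)

def induction_motor(x, v_sa, v_sb):
    """6th-order model of Eq. (8.10): x = [theta, omega, psi_ra, psi_rb, i_sa, i_sb]."""
    th, w, pra, prb, isa, isb = x
    return np.array([
        w,
        mu1*(pra*isb - prb*isa) - T_L/J,
        -alpha1*pra - n_p*w*prb + alpha1*M*isa,
         n_p*w*pra - alpha1*prb + alpha1*M*isb,
         alpha1*beta1*pra + n_p*beta1*w*prb - gamma1*isa + v_sa/(sigma*Ls),
        -n_p*beta1*w*pra + alpha1*beta1*prb - gamma1*isb + v_sb/(sigma*Ls),
    ])

# One Euler integration step as a usage example
x = np.array([0.0, 0.0, 0.01, 0.0, 0.0, 0.0])
x_next = x + 1e-4*induction_motor(x, v_sa=100.0, v_sb=0.0)
```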

8.3.2.2 Decoupling of Speed-Flux Dynamics

The classical method for induction motors control is based on a transformation of the stator’s currents (\(i_{s_\alpha }\) and \(i_{s_b}\)) and of the magnetic fluxes of the rotor (\(\psi _{r_\alpha }\) and \(\psi _{r_b}\)) to the reference frame \(d-q\) which rotates together with the rotor. In the \(d-q\) frame there will be only one nonzero component of the magnetic flux \(\psi _{r_d}\), while the component of the flux along the q axis equals 0. The new control inputs of the system are considered to be \(v_{s_d}\), \(v_{s_q}\), and are associated to the \(d-q\) frame voltages \(v_d\) and \(v_q\), respectively. The control inputs \(v_{s_d}\), \(v_{s_q}\) are connected to \(v_{s_\alpha }\), \(v_{s_b}\) of Eq. (8.10), according to the relation

$$\begin{aligned} \begin{pmatrix} {v_s}_\alpha \\ {v_s}_b \end{pmatrix}=||\psi ||{\cdot } \begin{pmatrix} {\psi _r}_\alpha &{} {\psi _r}_b \\ {\psi _r}_b &{} {\psi _r}_\alpha \end{pmatrix}^{-1} \begin{pmatrix} {v_s}_d \\ {v_s}_q \end{pmatrix} \end{aligned}$$
(8.13)

where \(\psi =\psi _{r_d}\) and \(||\psi ||=\sqrt{\psi _{r_\alpha }^2+\psi _{r_b}^2}\). Next, the following nonlinear feedback control law is defined

$$\begin{aligned} \begin{pmatrix} {v_s}_d \\ {v_s}_q \end{pmatrix}= {\sigma }{L_s} \begin{pmatrix} -{n_p}{\omega }{{i_s}_q}-{{\alpha }M{{{i_s}_q}^2} \over {{\psi _r}_d}}-{\alpha }{\beta }{{\psi _r}_d}+{v_d} \\ {n_p}{\omega }{{i_s}_d}+{\beta }{n_p}{\omega }{{\psi _r}_d}+{{\alpha }M{{i_s}_q}{{i_s}_d} \over {{\psi _r}_d}}+v_q \end{pmatrix} \end{aligned}$$
(8.14)

The control signal in the coordinates system \(\alpha -b\) is

$$\begin{aligned} \begin{array}{c} \begin{pmatrix} {{v_s}_\alpha } \\ {{v_s}_b} \end{pmatrix}=||\psi ||{\sigma }{L_s} \begin{pmatrix} {\psi _r}_\alpha &{} {\psi _r}_b \\ -{\psi _r}_b &{} {\psi _r}_\alpha \end{pmatrix}^{-1}{\cdot }\\ {\cdot }\begin{pmatrix} -{n_p}{\omega }{i_s}_q-{{\alpha }M{{{i_s}_q}^2} \over {{\psi _r}_d}}-{\alpha }{\beta }{{\psi _r}_d}+v_d \\ {n_p}{\omega }{i_s}_d+{\beta }{n_p}{\omega }{{\psi _r}_d}+{{{\alpha }M{{i_s}_q}{{i_s}_d}} \over {{\psi _r}_d}}+v_q \end{pmatrix} \end{array} \end{aligned}$$
(8.15)

Substituting Eq. (8.15) into Eq. (8.10) one obtains

$$\begin{aligned} {d \over {dt}}\omega ={\mu }{{\psi _r}_d}{{i_s}_q}-{{T_L} \over J} \end{aligned}$$
(8.16)
$$\begin{aligned} {d \over {dt}}{i_s}_q=-{\gamma }{{i_s}_q}+v_q \end{aligned}$$
(8.17)
$$\begin{aligned} {d \over {dt}}{\psi _r}_d=-{\alpha }{{\psi _r}_d}+{\alpha }M{{i_s}_d} \end{aligned}$$
(8.18)
$$\begin{aligned} {d \over {dt}}{i_s}_d=-{\gamma }{{i_s}_d}+{v_d} \end{aligned}$$
(8.19)
$$\begin{aligned} {d \over {dt}}\rho ={n_p}{\omega }+{\alpha }M{{{i_s}_q} \over {{\psi _r}_d}} \end{aligned}$$
(8.20)

The system of Eqs. (8.16)–(8.20) consists of two linear subsystems, where the first one has as output the magnetic flux \({\psi _r}_d\) and the second has as output the rotation speed \(\omega \), i.e.,

$$\begin{aligned} {d \over {dt}}{{\psi _r}_d}=-{\alpha }{{\psi _r}_d}+{\alpha }M{{i_s}_d} \end{aligned}$$
(8.21)
$$\begin{aligned} {d \over {dt}}{{i_s}_d}=-{\gamma }{{i_s}_d}+v_d \end{aligned}$$
(8.22)
$$\begin{aligned} {d \over {dt}}\omega ={\mu }{{\psi _r}_d}{{i_s}_q}-{{T_L} \over J} \end{aligned}$$
(8.23)
$$\begin{aligned} {d \over {dt}}{{i_s}_q}=-{\gamma }{{i_s}_q}+v_q \end{aligned}$$
(8.24)

If \({\psi _r}_d{\rightarrow }{\psi _r}_d^\text {ref}\), i.e., the transient phenomena for \({\psi _r}_d\) have been eliminated and \({\psi _r}_d\) has converged to a steady state value, then Eq. (8.23) is not dependent on \({\psi _r}_d\), and consequently the two subsystems described by Eqs. (8.21)–(8.22) and Eqs. (8.23)–(8.24) are decoupled. The subsystem that is described by Eqs. (8.21) and (8.22) is linear and has as control input \(v_d\), and can be controlled using methods of DC motor control [336, 413, 434, 535].

8.3.3 A Flatness-Based Control Approach for Induction Motors

In [338] the voltage-fed induction machine was shown to be a differentially flat system. It has been proven that the angle of the rotor position (rotation angle \(\theta \)) and the angle \(\rho \) of the magnetic field (the angle defined by the flux components \(\psi _{r_a}\) and \(\psi _{r_b}\)) constitute a flat output for the induction motor model [117, 118, 535]. Since all state variables of the circuits describing the induction motor dynamics can be expressed as functions of \(y=(\theta ,\rho )\) and its derivatives, it can be concluded that the induction motor is a differentially flat system.

The equations of the induction motor in the \(d-q\) reference frame, given by Eqs. (8.21)–(8.24), are now rewritten in the form of Eqs. (8.25)–(8.29):

$$\begin{aligned} \begin{array}{l} {d \over {dt}}\omega ={\mu }{{\psi _r}_d}{i_s}_q-{{T_L} \over J} \end{array} \end{aligned}$$
(8.25)
$$\begin{aligned} \begin{array}{l} {d \over {dt}}{{\psi _r}_d}=-{\alpha }{{\psi _r}_d}+{\alpha }M{{i_s}_d} \end{array} \end{aligned}$$
(8.26)
$$\begin{aligned} \begin{array}{l} {d \over {dt}}{{i_s}_d}=-{\gamma }{{i_s}_d}+{\alpha }{\beta }{{\psi _r}_d}+{n_p}{\omega }{{i_s}_q}+{{\alpha }M{{{i_s}_q}^2} \over {{\psi _r}_d}}+{1 \over {{\sigma }{L_s}}}{v_s}_d \end{array} \end{aligned}$$
(8.27)
$$\begin{aligned} \begin{array}{l} {d \over {dt}}{{i_s}_q}=-{\gamma }{{i_s}_q}-{\beta }{n_p}{\omega }{{\psi _r}_d}-{n_p}{\omega }{{i_s}_d}-{{\alpha }M{{i_s}_q}{{i_s}_d} \over {\psi _{r_d}}}+{1 \over {{\sigma }{L_s}}}{{v_s}_q} \end{array} \end{aligned}$$
(8.28)
$$\begin{aligned} {d \over {dt}}{\rho }={n_p}{\omega }+{{\alpha }M{{i_s}_q} \over {{\psi _r}_d}} \end{aligned}$$
(8.29)

The flat outputs for the voltage-fed induction motor are the angle of the rotor \(\theta \) and variable \(\rho \), where \(\rho \) has been defined as the rotor flux angle. According to [117], if the stator current dynamics are much faster than the speed and flux dynamics a faster inner current control loop can be designed using only Eqs. (8.27) and (8.28) and assuming the speed and flux as constants. For the outer speed and flux control design, the stator currents are treated as new control inputs and the system behavior is described by Eqs. (8.25), (8.26), and (8.29). This system of lower order is also flat with \(\psi _{r_d}\) and \(\theta \) as flat outputs.

It can be shown that all state variables of the induction motor can be written as functions of the flat outputs and their derivatives. Moreover, using Eqs. (8.27) and (8.28) a controller that satisfies the flatness properties (and thus it can be also expressed as a function of the flat outputs and their derivatives) is:

$$\begin{aligned} \begin{array}{l} v_{s_d}={\sigma }{L_s}({di_{s_d}^{*} \over {dt}}+{\gamma }{i_{s_d}^{*}}-{\alpha }{\beta }{\psi _{r_d}}-{n_p}{\omega }{i_{s_q}}-{{\alpha }M{{i_{s_q}}^2} \over {\psi _{r_d}}}+v_d) \end{array} \end{aligned}$$
(8.30)
$$\begin{aligned} \begin{array}{l} v_{s_q}={\sigma }{L_s}({di_{s_q}^{*} \over {dt}}+{\gamma }{i_{s_q}^{*}}+\beta {n_p}{\omega }{\psi _{r_d}}+ {n_p}{\omega }{i_{s_d}}+{{\alpha }M{{i_{s_q}}{i_{s_d}}} \over {\psi _{r_d}}}+v_q) \end{array} \end{aligned}$$
(8.31)

where \(i_{s_q}^{*}\) and \(i_{s_d}^{*}\) denote current setpoints. Substituting Eqs. (8.30) and (8.31) into Eqs. (8.27) and (8.28) one obtains the dynamics of the current tracking errors.

$$\begin{aligned} \begin{array}{c} {{d{\varDelta }{i_{s_d}}} \over {dt}}=-{\gamma }{\varDelta }{i_{s_d}}+{v_d} \end{array} \end{aligned}$$
(8.32)
$$\begin{aligned} \begin{array}{c} {{d{\varDelta }{i_{s_q}}} \over {dt}}=-{\gamma }{\varDelta }{i_{s_q}}+{v_q} \end{array} \end{aligned}$$
(8.33)

where \({\varDelta }{i_{s_d}}=(i_{s_d}-i_{s_d}^{*})\). For the decoupled system of Eqs. (8.32) and (8.33) one can apply state feedback control. For example, a suitable state feedback controller would be

$$\begin{aligned} v_d=-{\gamma _1}{\varDelta }{i_{s_d}} \end{aligned}$$
(8.34)
$$\begin{aligned} v_q=-{\gamma _2}{\varDelta }{i_{s_q}} \end{aligned}$$
(8.35)

Tracking of the reference setpoint can also be achieved for the rotor’s speed and flux through the application of the control law of Eqs. (8.30) and (8.31) to Eqs. (8.25) and (8.29). The control inputs are chosen as

$$\begin{aligned} \begin{array}{c} i_{s_d}={1 \over {{\alpha }M}}({{d{\psi _{r_d}}^{*}} \over {dt}}+{\alpha }\psi _{r_d}^{*}+i_d) \end{array} \end{aligned}$$
(8.36)
$$\begin{aligned} \begin{array}{c} i_{s_q}={1 \over {{\mu }\psi _{r_d}}}({{d\omega ^{*}} \over {dt}}+i_q) \end{array} \end{aligned}$$
(8.37)

Denoting \({\varDelta }{\psi _{r_d}}=\psi _{r_d}-\psi _{r_d}^{*}\) and \({\varDelta }{\omega }=\omega -\omega ^{*}\) the tracking error dynamics are given by

$$\begin{aligned} \begin{array}{c} {{d{\varDelta }{\psi _{r_d}}} \over {dt}}=-{\alpha }{\varDelta }{\psi _{r_d}}+{i_d} \end{array} \end{aligned}$$
(8.38)
$$\begin{aligned} \begin{array}{c} {{d{\varDelta }{\omega }} \over {dt}}=-{{T_L} \over J}+i_q \end{array} \end{aligned}$$
(8.39)

The convergence of the tracking error to zero can be assured through the application of the following feedback control laws:

$$\begin{aligned} \begin{array}{c} i_d=-{\alpha _1}{\varDelta }{\psi _{r_d}} \end{array} \end{aligned}$$
(8.40)
$$\begin{aligned} \begin{array}{c} i_q={{T_L} \over J}-{\alpha _2}{\varDelta }{\omega } \end{array} \end{aligned}$$
(8.41)
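The cascaded structure of the controller can be summarized in the following Python sketch: the outer loop implements Eqs. (8.36)–(8.41) to generate the stator current setpoints, and the inner loop implements Eqs. (8.30)–(8.35) to generate the d-q stator voltages. All numerical constants are illustrative assumptions, and \(\psi _{r_d}\) is assumed to stay away from zero so that no singularity appears.

```python
import numpy as np

# Placeholder model/controller constants (illustrative only, not from the chapter)
alpha, beta, gamma, mu = 8.6, 7.1, 55.0, 37.0
M, n_p, sigma_Ls = 0.13, 2, 0.02
J, T_L = 0.01, 0.5
a1, a2 = 20.0, 20.0      # gains of Eqs. (8.40)-(8.41), outer flux/speed loop
g1, g2 = 200.0, 200.0    # gains of Eqs. (8.34)-(8.35), inner current loop

def outer_loop(psi_rd, omega, psi_ref, dpsi_ref, omega_ref, domega_ref):
    """Flux/speed loop, Eqs. (8.36)-(8.41): returns the stator current setpoints."""
    i_d = -a1*(psi_rd - psi_ref)                           # Eq. (8.40)
    i_q = T_L/J - a2*(omega - omega_ref)                   # Eq. (8.41)
    isd_ref = (dpsi_ref + alpha*psi_ref + i_d)/(alpha*M)   # Eq. (8.36)
    isq_ref = (domega_ref + i_q)/(mu*psi_rd)               # Eq. (8.37), psi_rd != 0
    return isd_ref, isq_ref

def inner_loop(psi_rd, omega, i_sd, i_sq, isd_ref, disd_ref, isq_ref, disq_ref):
    """Current loop, Eqs. (8.30)-(8.35): returns the d-q stator voltages."""
    v_d = -g1*(i_sd - isd_ref)                             # Eq. (8.34)
    v_q = -g2*(i_sq - isq_ref)                             # Eq. (8.35)
    v_sd = sigma_Ls*(disd_ref + gamma*isd_ref - alpha*beta*psi_rd
                     - n_p*omega*i_sq - alpha*M*i_sq**2/psi_rd + v_d)   # Eq. (8.30)
    v_sq = sigma_Ls*(disq_ref + gamma*isq_ref + beta*n_p*omega*psi_rd
                     + n_p*omega*i_sd + alpha*M*i_sq*i_sd/psi_rd + v_q) # Eq. (8.31)
    return v_sd, v_sq

# Usage example with constant references (reference derivatives set to zero)
isd_ref, isq_ref = outer_loop(psi_rd=0.8, omega=50.0,
                              psi_ref=1.0, dpsi_ref=0.0,
                              omega_ref=60.0, domega_ref=0.0)
v_sd, v_sq = inner_loop(0.8, 50.0, 2.0, 3.0, isd_ref, 0.0, isq_ref, 0.0)
```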

8.3.4 Implementation of the EKF for the Nonlinear Induction Motor Model

State estimation for nonlinear systems with the use of the Extended Kalman Filter has been explained in Eqs. (4.13) and (4.14). To implement the Extended Kalman Filter for the induction motor’s model that is expressed in the \(d-q\) reference frame, the Jacobian matrices \(J_{\phi }\) and \(J_{\gamma }\) are calculated. Thus:

$$\begin{aligned} J_{\phi }=[J_{\phi }^1,J_{\phi }^2,J_{\phi }^3,J_{\phi }^4,J_{\phi }^5,J_{\phi }^6]^T \end{aligned}$$
(8.42)

where the rows of the above-defined Jacobian matrix are given by \(J_{\phi }^1=[0,1,0,0,0,0]\), \(J_{\phi }^2=[0,0,{\mu }{x_5},0,{\mu }{x_3},0]\), \(J_{\phi }^3=[0,0,-\alpha ,{\alpha }M,0,0]\), \(J_{\phi }^4=[0,{n_p}{x_5},{\alpha }{\beta }-{{{\alpha }M{x_5^2}} \over {x_3^2}},-\gamma ,{n_p}{x_2}+{{2{\alpha }M{x_5}} \over {x_3}},0]\), \(J_{\phi }^5=[0,-{\beta }{n_p}{x_3}-{n_p}{x_4},-{\beta }{n_p}{x_2}+{{{\alpha }M{x_4}{x_5}} \over {x_3^2}},-{n_p}{x_2}-{{{\alpha }M{x_5}} \over {x_3}},-\gamma -{{{\alpha }M{x_4}} \over {x_3}},0]\) and \(J_{\phi }^6=[0,{n_p},-{{{\alpha }M{x_5}} \over {x_3^2}},0,{{{\alpha }M} \over {x_3}},0]\).

Moreover, considering that the motor’s state vector is \(x=[\theta ,\omega ,\psi _{r_d},i_{s_d},i_{s_q},\rho ]\) and that the measurable state vector elements are the stator currents, initially expressed in the \(a-b\) reference frame as \(i_{s_a}\) and \(i_{s_b}\), and equivalently in the \(d-q\) reference frame as \(i_{s_d}\) and \(i_{s_q}\), one has the measurement equation Jacobian matrix

$$\begin{aligned} J_{\gamma }= \begin{pmatrix} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 \end{pmatrix} \end{aligned}$$
(8.43)

Since the Jacobian matrix \(J_{\phi }\) is associated with the drift term of the system’s dynamics and is computed using the system’s continuous-time description of Eq. (8.10), in the EKF recursion of Eqs. (4.13) and (4.14) it should be substituted by \(I+{T_s}{J_{\phi }}\), where \(T_s\) is the sampling period and \(I{\in }R^{n{\times }n}\) is the identity matrix.
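A compact Python sketch of the resulting EKF recursion is given below. It follows the generic predictor–corrector structure of Eqs. (4.13)–(4.14), with the discretized state-transition matrix \(I+{T_s}{J_{\phi }}\) and the measurement matrix \(J_{\gamma }\) of Eq. (8.43); the callables `f_dq` and `jac_phi` (the d-q frame dynamics and its drift Jacobian) are assumed to be supplied by the model of Sect. 8.3.2.

```python
import numpy as np

def ekf_step(x_est, P, z, u, f_dq, jac_phi, Ts, Q, R):
    """One EKF cycle for the induction motor in the d-q frame.
    x_est   : previous state estimate [theta, omega, psi_rd, i_sd, i_sq, rho]
    z       : measured stator currents [i_sd, i_sq]
    u       : stator voltage input [v_sd, v_sq] held over the sampling interval
    f_dq    : callable returning the continuous-time state derivative
    jac_phi : callable returning the continuous-time Jacobian J_phi of Eq. (8.42)
    """
    n = x_est.size
    C = np.zeros((2, n)); C[0, 3] = 1.0; C[1, 4] = 1.0   # J_gamma of Eq. (8.43)

    # Time update (Euler discretization, Jacobian replaced by I + Ts*J_phi)
    F = np.eye(n) + Ts*jac_phi(x_est)
    x_pred = x_est + Ts*f_dq(x_est, u)
    P_pred = F @ P @ F.T + Q

    # Measurement update
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(n) - K @ C) @ P_pred
    return x_new, P_new
```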

8.3.5 Unscented Kalman Filtering for Induction Motor Control

Apart from Extended Kalman Filtering for sensorless control of the induction motor, the Unscented Kalman Filter can also be used. The stages of Unscented Kalman Filtering for nonlinear dynamical systems, consisting of the time update and the measurement update, have been given in Eqs. (4.18) and (4.19). In Unscented Kalman Filter-based control, a set of suitably chosen weighted sample points (sigma points) is propagated through the nonlinear system and used to approximate the true value of the system’s state vector and of the state vector’s covariance matrix. The UKF algorithm is summarized as follows:

The time update of the UKF is

$$\begin{aligned} \begin{array}{l} x_k^i=\phi (x_{k-1}^i)+L(k-1)U(k-1), \ i=0,1,\ldots ,2n \\ {\hat{x}_k^{-}}={{\sum }_{i=0}^{2n}}{w_i}{x_{k^-}^i}\\ {P_{xx}}_{k^-}={P_{xx}}_{k-1}+Q_k\\ \end{array} \end{aligned}$$

The measurement update of the UKF is

$$\begin{aligned} \begin{array}{l} z_k^i=h(x_{k^-}^i,u_k)+r_k, \ i=0,1,\ldots ,2n \\ \hat{z}_k={{\sum }_{i=0}^{2n}}{w_i}{z_k^i} \\ P_{{zz}_k}={{\sum }_{i=0}^{2n}}{w_i}[{z_k^i}-\hat{z}_k][{z_k^i-\hat{z}_k}]^T+R_k \\ P_{{xz}_k}={{\sum }_{i=0}^{2n}}{w_i}[x_{k^-}^i-\hat{x_{k^-}}][z_k^i-\hat{z}_k]^T \\ K_k={P_{{xz}_k}}{P_{{zz}_k}}^{-1} \\ \hat{x}_k=\hat{x}_{k^-}+{K_k}[z_k-\hat{z}_k] \\ {{P_{xx}}_k}={P_{k^-}}-{K_k}{{P_{zz}}_k}{K_k^T} \end{array} \end{aligned}$$
Fig. 8.6

Approximation of a 2D distribution by the Extended and the Unscented Kalman Filter

It is noted that the Unscented Kalman Filter results in posterior approximations that are accurate to the third order for Gaussian inputs for all nonlinearities. For non-Gaussian inputs, approximations are accurate to at least the second order, with the accuracy of third and higher order moments determined by the specific choice of weights and scaling factors. Furthermore, unlike the EKF, no analytical Jacobians of the system equations need to be calculated. The concept of the UKF for approximating the distribution of a system’s state is given in Fig. 8.6 [533]. It can be observed that, compared to the EKF, the UKF (sigma-point) approach achieves improved estimation of the state vector’s mean value and covariance (only 5 points are needed to approximate sufficiently the 2D distribution).
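The sigma-point construction and the unscented transform that underlie the above recursion can be sketched in Python as follows (a standard symmetric sigma-point set with \(2n+1\) samples; the scaling parameter \(\kappa \) and the weight definitions are an illustrative common choice, not necessarily the ones used in the simulations of this chapter):

```python
import numpy as np

def sigma_points(x_mean, P, kappa=0.0):
    """Symmetric sigma-point set {x^(i)} and weights w_i for an n-dimensional state."""
    n = x_mean.size
    S = np.linalg.cholesky((n + kappa)*P)      # square root of the scaled covariance
    pts = [x_mean] + [x_mean + S[:, i] for i in range(n)] \
                   + [x_mean - S[:, i] for i in range(n)]
    w = np.full(2*n + 1, 1.0/(2.0*(n + kappa)))
    w[0] = kappa/(n + kappa)
    return np.array(pts), w

def unscented_transform(pts, w, func, noise_cov):
    """Propagate sigma points through a nonlinearity and recover mean and covariance."""
    Y = np.array([func(p) for p in pts])
    y_mean = w @ Y
    Pyy = noise_cov + sum(wi*np.outer(yi - y_mean, yi - y_mean)
                          for wi, yi in zip(w, Y))
    return y_mean, Pyy, Y
```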

8.4 Simulation Results

The flatness-based control method for the induction motor that was presented in Sect. 8.3.3 requires knowledge of the motor’s state vector \(x=[\theta ,\omega ,\psi _{r_d},i_{s_d},i_{s_q},\rho ]\). It will be shown that it is possible to implement state estimation for the electric motor using measurements only of the stator currents \(i_{s_a}\) and \(i_{s_b}\). A nonlinear Kalman Filter, such as the Unscented Kalman Filter or the Extended Kalman Filter, can give estimates of the nonmeasured state vector elements, i.e., of the rotor’s angle \(\theta \), of the rotation speed \(\omega \), of the magnetic flux \(\psi _{r_d}\), and of the angle \(\rho \) between the flux vectors \(\psi _{r_a}\) and \(\psi _{r_b}\). Using currents \(i_{s_a}\) and \(i_{s_b}\) and the estimate \(\hat{\rho }\) of angle \(\rho \), the input measurements \(i_{s_d}\) and \(i_{s_q}\) can be provided to the nonlinear Kalman Filters. Thus one has

$$\begin{aligned} \begin{pmatrix} i_{s_d} \\ i_{s_q} \end{pmatrix}= \begin{pmatrix} \cos (\hat{\rho }) &{} \sin (\hat{\rho }) \\ -\sin (\hat{\rho }) &{} \cos (\hat{\rho }) \end{pmatrix}{\cdot } \begin{pmatrix} i_{s_a} \\ i_{s_b} \end{pmatrix} \end{aligned}$$
(8.44)
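A small Python helper implementing the rotation of Eq. (8.44), with the estimate \(\hat{\rho }\) supplied by the nonlinear Kalman Filter, could read:

```python
import numpy as np

def ab_to_dq(i_sa, i_sb, rho_hat):
    """Rotate the measured a-b stator currents to the d-q frame, Eq. (8.44)."""
    c, s = np.cos(rho_hat), np.sin(rho_hat)
    return c*i_sa + s*i_sb, -s*i_sa + c*i_sb   # (i_sd, i_sq)
```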
Fig. 8.7

Angle \(\theta \) of the induction motor (blue line) in sensorless control when tracking a sinusoidal setpoint (red line) and state estimation is performed with a the Extended Kalman Filter, b Unscented Kalman Filter

The performance of the proposed sensorless control scheme, which uses the nonlinear Kalman Filtering for estimation of the nonmeasurable parameters of the motor’s state vector is depicted in Figs. 8.7, 8.8, 8.9, 8.10 (tracking of a sinusoidal setpoint) and in Figs. 8.11, 8.12, 8.13, 8.14 (tracking of a seesaw setpoint). Comparison between the sensorless control loop that is based on the Extended and the Unscented Kalman Filter is provided.

From the simulation experiments it can be observed that the Unscented Kalman Filter-based control results in fast and accurate trajectory tracking. The performance of the UKF-based control loop, when considering as measured variables only the stator currents, was comparable to the one of the EKF-based control loop. Methods to further enhance the robustness of the nonlinear filtering-based control loops have been discussed in [12, 38].

Fig. 8.8

Angular velocity \(\omega \) of the induction motor (blue line) in sensorless control when tracking a sinusoidal setpoint (red line) and state estimation is performed with a the Extended Kalman Filter, b Unscented Kalman Filter

Fig. 8.9

Control input current \(i_{s_d}\) of the induction motor (blue line) in sensorless control when tracking a sinusoidal setpoint (red line) and state estimation is performed with a the Extended Kalman Filter, b Unscented Kalman Filter

Fig. 8.10

Control input current \(i_{s_q}\) of the induction motor (blue line) in sensorless control when tracking a sinusoidal setpoint (red line) and state estimation is performed with a the Extended Kalman Filter, b Unscented Kalman Filter

Fig. 8.11

Angle \(\theta \) of the induction motor in sensorless control (blue line) when tracking a seesaw setpoint (red line) and state estimation is performed with a the Extended Kalman Filter, b Unscented Kalman Filter

Fig. 8.12

Angular velocity \(\omega \) of the induction motor (blue line) in sensorless control when tracking a seesaw setpoint (red line) and state estimation is performed with a the Extended Kalman Filter, b Unscented Kalman Filter

Fig. 8.13

Control input current \(i_{s_d}\) of the induction motor (blue line) in sensorless control when tracking a seesaw setpoint (red line) and state estimation is performed with a the Extended Kalman Filter, b Unscented Kalman Filter

Fig. 8.14

Control input current \(i_{s_q}\) of the induction motor (blue line) in sensorless control when tracking a seesaw setpoint (red line) and state estimation is performed with a the Extended Kalman Filter, b Unscented Kalman Filter

8.5 Flatness-Based Adaptive Control of Electrostatic MEMS Using Output Feedback

8.5.1 Introduction

The previously developed results on adaptive fuzzy control of nonlinear DC motors will be further extended toward control of microactuators. As micro- and nanotechnology develop fast, the use of MEMS and particularly of microactuators is rapidly expanding. One can note several systems where the use of microactuators has become indispensable and the solution of the associated control problems has become a prerequisite. In [472, 487, 613, 614], electrostatic microactuators are used in adaptive optics and optical communications. In [53, 324] microactuators are used for micromanipulation and precise positioning of microobjects. Several approaches to the control of microactuators have been proposed. In [276, 292, 513] adaptive control methods have been used. In [146, 583], solution of microactuation control problems through robust control approaches has been attempted. In [468] backstepping control has been used, while in [513] an output feedback control scheme has been implemented. Additional results for the stabilization and control of microactuators have been presented in [197, 389, 513]. In such control systems, convergence of the state vector elements to the associated reference setpoints has to be performed with accuracy, despite modeling uncertainties, parametric variations, or external perturbations. Moreover, the reliable functioning of the control loop has to be assured despite difficulties in measuring the complete state vector of the MEMS. The present section develops a new method for the control of microelectromechanical systems (MEMS) which is based on differential flatness theory. The considered control problem is a nontrivial one because of the unknown nonlinear dynamical model of the actuator and because of the constraint to implement the control using exclusively output feedback (it is unreliable and technically difficult to use sensor measurements for the monitoring of all state variables of the microactuator). The differential flatness theory control approach is based on an exact linearization of the MEMS dynamics which avoids the numerical errors of the approximate linearization that is performed by other nonlinear control methods [103, 250, 344, 452, 454].

First, the section shows that the dynamic model of the studied microactuator is a differentially flat one. This means that all its state variables and the control input can be written as functions of one single algebraic variable, which is the flat output, and also as functions of the flat output’s derivatives [286, 427, 453, 465, 495]. This change of variables (differential flatness theory-based diffeomorphism) makes it possible to transform the nonlinear model of the actuator into the linear canonical (Brunovsky) form [152, 351, 516, 535]. In the latter description of the MEMS the transformed control input contains elements which are associated with the unknown nonlinear dynamics of the system. These are identified on-line with the use of neuro-fuzzy approximators and the estimated system dynamics is finally used for the computation of the control signal that will make the MEMS state vector track the desirable setpoints. Thus an adaptive fuzzy control scheme is implemented [407, 454]. The learning rate of the neuro-fuzzy approximators is determined by the requirement to assure that the first derivative of the Lyapunov function of the control loop will remain a negative one.

Next, another problem that has to be dealt with is that only output feedback can be used for the implementation of the MEMS control scheme. The nonmeasurable state variables of the microactuator have to be reconstructed with the use of a state estimator (observer), which functions again inside the control loop. Thus, finally, the Lyapunov function for the proposed control scheme comprises three quadratic terms: (i) a term that describes the deviation of the MEMS state variables from the reference setpoints, (ii) a term that describes the error in the estimation of the nonmeasurable state vector elements of the microactuator with respect to the reference setpoints, and (iii) a sum of quadratic terms associated with the distance of the weights of the neuro-fuzzy approximators from the values that give the best estimation of the unknown MEMS dynamics. It is proven that an adaptive (learning) control law can be found assuring that the first derivative of the Lyapunov function will remain a negative one, thus assuring that the stability of the control loop will be preserved and that accurate tracking of the setpoints by the system’s state variables will be achieved (H-infinity tracking performance).

8.5.2 Dynamic Model of the Electrostatic Actuator

The considered MEMS (electrostatic microactuator) is depicted in Fig. 8.15. The dynamic model of the MEMS has been analyzed in [180, 203, 614–616], where model-based control approaches have been mostly developed. It is assumed that Q(t) is the charge of the device, while \(\varepsilon \) is the permittivity in the gap. Then the capacitance of the device is

$$\begin{aligned} \begin{array}{c} C(t)={{{\varepsilon }A} \over {G(t)}} \end{array} \end{aligned}$$
(8.45)

while the attractive electrostatic force on the moving plate is

$$\begin{aligned} \begin{array}{c} F(t)={{V_n^2} \over 2}{{{\partial }C} \over {{\partial }G}}=-{{{\varepsilon }A{V_n^2}} \over {2G^2(t)}}=-{{Q^2(t)} \over {2{\varepsilon }A}} \end{array} \end{aligned}$$
(8.46)

Thus, the equation of motion of the actuator is given by

$$\begin{aligned} \begin{array}{c} m\ddot{G}(t)+b\dot{G}(t)+k(G(t)-G_0)=-{{Q^2(t)} \over {2{\varepsilon }A}} \end{array} \end{aligned}$$
(8.47)

From Eqs. (8.46) and (8.47) it can be concluded that the electrostatic force F increases with the inverse square of the gap, while the restoring mechanical force which is associated with the term \(k(G(t)-G_0)\) increases linearly with the plate deflection. A critical value for the voltage across the device is called pull-in voltage and is given by [613]

$$\begin{aligned} \begin{array}{c} V_{pi}=\sqrt{{8kG_0^2} \over {{27}C_0}} \end{array} \end{aligned}$$
(8.48)

It is assumed that the MEMS starts operating from an initially uncharged state at \(t=0\). Then the charge of the electrodes at time instant t is given by \(Q(t)={\int _0^t}I_s(\tau )d{\tau }\), or equivalently \(\dot{Q}(t)=I_s(t)\). By applying Kirchhoff’s voltage law one has for the current that goes through the resistor

$$\begin{aligned} \begin{array}{c} \dot{Q}(t)={1 \over R}(V_s(t)-{{Q(t)G(t)} \over {{\varepsilon }A}}) \end{array} \end{aligned}$$
(8.49)

Next, the equations of the system’s dynamics given in Eqs. (8.47)–(8.49) undergo a transformation which consists of a change of the time scale \(\tau ={\omega }t\) and of the following normalization

$$\begin{aligned} \begin{array}{ccc} x=1-{G \over G_0} &{} q={Q \over Q_{pi}} &{} \\ u={{V_s} \over V_{pi}} &{} i={{I_s} \over {V_{pi}{\omega _0}{C_0}}} &{} r={\omega _0}{C_0}{R} \end{array} \end{aligned}$$
(8.50)

where \(C_0={{{\varepsilon }A} \over G_0}\), \(Q_{pi}={3 \over 2}{C_0}V_{pi}\) is the pull-in charge corresponding to the pull-in voltage, \(\omega _0=\sqrt{k/m}\) is the undamped natural frequency, and \(\zeta ={b \over {2m\omega _0}}\) is the damping ratio. The normalized voltage across the actuator can be expressed in terms of the normalized deflection x of the movable electrode, that is, \({u_o}={3 \over 2}q(1-x)\), while the dynamics of the normalized charge is \(\dot{q}={2 \over 3}i\).

After the aforementioned normalization and transformation, the dynamic model of the actuator is written as [613]

$$\begin{aligned} \begin{array}{c} \dot{x}=v \\ \dot{v}=-2{\zeta }v-x+{1 \over 3}q^2 \\ \dot{q}=-{1 \over r}q(1-x)+{2 \over {3r}}u \end{array} \end{aligned}$$
(8.51)
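A minimal Python sketch of the normalized actuator model of Eq. (8.51), together with the normalization constants of Eqs. (8.48) and (8.50), is given below; the physical parameter values are placeholders chosen only for illustration.

```python
import numpy as np

# Placeholder physical parameters (illustrative only)
m, k, b = 1e-9, 1.0, 6.3e-6          # mass, stiffness, damping of the moving plate
eps_A, G0, R = 8.85e-18, 2e-6, 1e5   # permittivity*area, initial gap, series resistance

C0 = eps_A/G0
omega0 = np.sqrt(k/m)
zeta = b/(2.0*m*omega0)
V_pi = np.sqrt(8.0*k*G0**2/(27.0*C0))    # pull-in voltage, Eq. (8.48)
r = omega0*C0*R                          # normalized resistance, Eq. (8.50)

def mems_dynamics(state, u):
    """Normalized actuator model of Eq. (8.51); state = [x, v, q], input u = Vs/Vpi."""
    x, v, q = state
    return np.array([v,
                     -2.0*zeta*v - x + q**2/3.0,
                     -q*(1.0 - x)/r + 2.0*u/(3.0*r)])

print("pull-in voltage V_pi =", V_pi, ", normalized resistance r =", r, ", zeta =", zeta)
```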
Fig. 8.15

Diagram of the 1-DOF parallel-plate electrostatic actuator

The model’s state variables are defined as follows: \(v=\dot{x}\) denotes the speed of deflection of the moving electrode, while q denotes the ratio between the actual charge of the plates Q and the pull-in charge \(Q_{pi}\). It holds that \(q={Q \over Q_{p_i}}\), where \(Q_{p_i}={3 \over 2}{C_o}V_{p_i}\) and \(V_{p_i}\) is the pull-in voltage.

8.5.3 Linearization of the MEMS Model Using Lie Algebra

The MEMS nonlinear dynamics given in Eq. (8.51), with state vector defined as \(x=[x,v,q]\), is also written in the form

$$\begin{aligned} \begin{array}{c} \dot{x}=f(x)+g(x)u \end{array} \end{aligned}$$
(8.52)

where the vector fields f(x) and g(x) are defined as

$$\begin{aligned} \begin{array}{cc} f(x)= \begin{pmatrix} v \\ -2{\zeta }v-x+{1 \over 3}q^2 \\ -{1 \over r}q(1-x) \end{pmatrix} &{} g(x)=\begin{pmatrix} 0 \\ 0 \\ {2 \over {3r}} \end{pmatrix} \end{array} \end{aligned}$$
(8.53)

Using the above formulation, one can arrive at a linearized description of the MEMS dynamics using a differential geometric approach and the computation of Lie derivatives. The following state variables are defined: \(z_1=h_1(x)=x\), \(z_2={L_f}{h_1}(x)\) and \(z_3={L_f^2}{h_1}(x)\). It holds that

$$\begin{aligned} \begin{array}{c} z_2={L_f}{h_1}(x){\Rightarrow }z_2={{{\partial }{h_1}} \over {\partial {x_1}}}{f_1}+{{{\partial }{h_1}} \over {\partial {x_2}}}{f_2}+{{{\partial }{h_1}} \over {\partial {x_3}}}{f_3}{\Rightarrow }\\ z_2=1{f_1}+0{f_2}+0{f_3}{\Rightarrow }z_2=f_1{\Rightarrow }z_2=v{\Rightarrow }z_2=\dot{x} \end{array} \end{aligned}$$
(8.54)

In a similar manner one computes

$$\begin{aligned} \begin{array}{c} z_3={L_f^2}{h_1}(x){\Rightarrow }z_3={{{\partial }{z_2}} \over {\partial {x_1}}}{f_1}+{{{\partial }{z_2}} \over {\partial {x_2}}}{f_2}+{{{\partial }{z_2}} \over {\partial {x_3}}}{f_3}{\Rightarrow } \\ z_3=0{f_1}+1{f_2}+0{f_3}{\Rightarrow }z_3=\dot{v}{\Rightarrow }z_3=\ddot{x} \end{array} \end{aligned}$$
(8.55)

Morever, it holds that

$$\begin{aligned} \begin{array}{c} \dot{z}_3=x^{(3)}={L_f^3}{h_1}(x)+{L_g}{L_f^2}{h_1}(x){\cdot }u \end{array} \end{aligned}$$
(8.56)

where

$$\begin{aligned} \begin{array}{c} {L_f^3}{h_1}(x)={L_f}{z_3}{\Rightarrow }{L_f^3}{h_1}(x)={{{\partial }{z_3}} \over {\partial {x_1}}}{f_1}+{{{\partial }{z_3}} \over {\partial {x_2}}}{f_2}+{{{\partial }{z_3}} \over {\partial {x_3}}}{f_3}{\Rightarrow }\\ {L_f^3}{h_1}(x)=-1{f_1}-2{\zeta }{f_2}+{2 \over 3}q{f_3}{\Rightarrow }{L_f^3}{h_1}(x)=-v-2{\zeta }\dot{v}+{2 \over 3}q(-{1 \over r}q(1-x)){\Rightarrow }\\ {L_f^3}{h_1}(x)=-\dot{y}-2{\zeta }\ddot{y}-{1 \over r}(1-y){2 \over 3}q^2{\Rightarrow }\\ {L_f^3}{h_1}(x)=-2{\zeta }\ddot{y}-\dot{y}-{2 \over r}(1-y)[\ddot{y}+2{\zeta }\dot{y}+y] \end{array} \end{aligned}$$
(8.57)

Following a similar procedure one finds

$$\begin{aligned} \begin{array}{c} {L_g}{L_f^2}h_1(x)={L_g}{z_3}{\Rightarrow }{L_g}{L_f^2}h_1(x)={{{\partial }{z_3}} \over {\partial {x_1}}}{g_1}+{{{\partial }{z_3}} \over {\partial {x_2}}}{g_2}+{{{\partial }{z_3}} \over {\partial {x_3}}}{g_3}{\Rightarrow } \\ {L_g}{L_f^2}h_1(x)=-1{g_1}-2{\zeta }{g_2}+{2 \over 3}q{g_3}{\Rightarrow }{L_g}{L_f^2}h_1(x)={4 \over {9r}}q{\Rightarrow }\\ {L_g}{L_f^2}h_1(x)={4 \over {9r}}\sqrt{3[\ddot{y}+2{\zeta }\dot{y}+y]} \end{array} \end{aligned}$$
(8.58)

For the linearized description of the MEMS dynamics given in Eq. (8.56), and using that \(v={L_f^3}{h_1}(x)+{L_g}{L_f^2}{h_1}(x)u\) one arrives also at the state-space description

$$\begin{aligned} \begin{array}{c} \begin{pmatrix} \dot{z}_1 \\ \dot{z}_2 \\ \dot{z}_3 \end{pmatrix}= \begin{pmatrix} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}+ \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \end{array}v \end{aligned}$$
(8.59)
$$\begin{aligned} \begin{array}{c} z^{meas}=\begin{pmatrix} 1 &{} 0 &{} 0 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} \end{array} \end{aligned}$$
(8.60)

For the linearized description of the system given in Eq. (8.59) the design of a state feedback controller is carried out as follows:

$$\begin{aligned} \begin{array}{c} v=y_d^{(3)}-{k_1}(\ddot{y}-\ddot{y}_d)-{k_2}(\dot{y}-\dot{y}_d)-k_3(y-y_d) \end{array} \end{aligned}$$
(8.61)

which results in tracking error dynamics of the form

$$\begin{aligned} \begin{array}{c} e^{(3)}(t)+{k_1}\ddot{e}(t)+{k_2}\dot{e}(t)+{k_3}e(t)=0 \end{array} \end{aligned}$$
(8.62)

By selecting the feedback gains \(k_i, \ i=1,2,3\) such that the characteristic polynomial of Eq. (8.62) is a Hurwitz one, it is assured that \(\lim _{t{\rightarrow }\infty }e(t)=0\).
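As a quick check of the gain selection, the following sketch places the closed-loop poles of Eq. (8.62) at an assumed location \(s=-2\) (triple pole) and evaluates the feedback law of Eq. (8.61); the pole location is an illustrative choice only.

```python
import numpy as np

p = 2.0                                   # assumed pole location for (s + p)^3
k1, k2, k3 = 3*p, 3*p**2, p**3            # e''' + k1*e'' + k2*e' + k3*e = 0 is Hurwitz

def v_control(y, dy, ddy, yd, dyd, ddyd, d3yd):
    """State feedback of Eq. (8.61) on the transformed (Brunovsky) coordinates."""
    return d3yd - k1*(ddy - ddyd) - k2*(dy - dyd) - k3*(y - yd)

print(np.roots([1.0, k1, k2, k3]))        # all roots at -2, i.e., a Hurwitz polynomial
```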

8.5.4 Differential Flatness of the Electrostatic Actuator

8.5.4.1 Differential Flatness Properties of the Electrostatic Microactuator

The dynamic model of the electrostatic actuator given in Eq. (8.51) is considered. The flat output of the model is taken to be \(y=x\). Therefore, it also holds \(v=\dot{y}\). From the second row of the state-space equations, given in Eq. (8.51) one has

$$\begin{aligned} \begin{array}{c} \ddot{y}=-2{\zeta }\dot{y}-y+{1 \over 3}{q^2}{\Rightarrow }q^2=3[\ddot{y}+2{\zeta }\dot{y}+y]\\ {\Rightarrow }q=\sqrt{3[\ddot{y}+2{\zeta }\dot{y}+y]}{\Rightarrow }q=f_q(y,\dot{y},\ddot{y}) \end{array} \end{aligned}$$
(8.63)

From the third row of the state-space equations, given in Eq. (8.51) one has

$$\begin{aligned} \begin{array}{c} u={{3r} \over 2}[\dot{q}+{1 \over r}q(1-x)]{\Rightarrow }u=f_u(y,\dot{y},\ddot{y}) \end{array} \end{aligned}$$
(8.64)

Since all state variables and the control input of the system are expressed as functions of the flat output and its derivatives, it is concluded that the model of the electrostatic actuator is a differentially flat one.

8.5.4.2 Linearization of the MEMS Model Using Differential Flatness Theory

From the second row of the state-space model given in Eq. (8.51) it holds that

$$\begin{aligned} \begin{array}{c} \ddot{y}=-2{\zeta }\dot{y}-y+{1 \over 3}{q^2} \end{array} \end{aligned}$$
(8.65)

By deriving once more with respect to time one gets

$$\begin{aligned} \begin{array}{c} y^{(3)}=-2{\zeta }\ddot{y}-\dot{y}+{2 \over 3}q\dot{q} \end{array} \end{aligned}$$
(8.66)

By substituting the third row of the state-space model given in Eq. (8.51) one obtains

$$\begin{aligned} \begin{array}{c} y^{(3)}=-2{\zeta }\ddot{y}-\dot{y}+{2 \over 3}q[-{1 \over r}q(1-x)+{2 \over {3r}}u]{\Rightarrow }\\ y^{(3)}=-2{\zeta }\ddot{y}-\dot{y}-{2 \over {3r}}(1-x){q^2}+{4 \over {9r}}qu \end{array} \end{aligned}$$
(8.67)

Next, using from Eq. (8.63) that \(q^2=3[\ddot{y}+2{\zeta }\dot{y}+y]\), or equivalently that \(q=\sqrt{3[\ddot{y}+2{\zeta }\dot{y}+y]}\), the following relation is obtained

$$\begin{aligned} \begin{array}{c} y^{(3)}=-2{\zeta }\ddot{y}-\dot{y}-{2 \over r}(1-y)[\ddot{y}+2{\zeta }\dot{y}+y]+{4 \over {9r}}\sqrt{3[\ddot{y}+2{\zeta }\dot{y}+y]}\,u \end{array} \end{aligned}$$
(8.68)

or equivalently

$$\begin{aligned} \begin{array}{c} y^{(3)}=f(y,\dot{y},\ddot{y})+g(y,\dot{y},\ddot{y})u \end{array} \end{aligned}$$
(8.69)

where

$$\begin{aligned} \begin{array}{c} f(y,\dot{y},\ddot{y})=-2{\zeta }\ddot{y}-\dot{y}-{2 \over r}(1-y)[\ddot{y}+2{\zeta }\dot{y}+y] \end{array} \end{aligned}$$
(8.70)
$$\begin{aligned} \begin{array}{c} g(y,\dot{y},\ddot{y})={4 \over {9r}}\sqrt{3[\ddot{y}+2{\zeta }\dot{y}+y]} \end{array} \end{aligned}$$
(8.71)

For the linearized description of the MEMS dynamics given in Eq. (8.69), and using the notation \(z_1=y\), \(z_2=\dot{y}\) and \(z_3=\ddot{y}\), and \(v=f(y,\dot{y},\ddot{y})+g(y,\dot{y},\ddot{y})u\) one arrives also at the state-space description

$$\begin{aligned} \begin{array}{c} \begin{pmatrix} \dot{z}_1 \\ \dot{z}_2 \\ \dot{z}_3 \end{pmatrix}= \begin{pmatrix} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}+ \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}v \end{array} \end{aligned}$$
(8.72)
$$\begin{aligned} \begin{array}{c} z^{meas}=\begin{pmatrix} 1 &{} 0 &{} 0 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} \end{array} \end{aligned}$$
(8.73)

For the linearized description of the system given in Eq. (8.69) the design of a state feedback controller is carried out as follows:

$$\begin{aligned} \begin{array}{c} v=y_d^{(3)}-{k_1}(\ddot{y}-\ddot{y}_d)-{k_2}(\dot{y}-\dot{y}_d)-k_3(y-y_d) \end{array} \end{aligned}$$
(8.74)

which results in tracking error dynamics of the form

$$\begin{aligned} \begin{array}{c} e^{(3)}(t)+{k_1}\ddot{e}(t)+{k_2}\dot{e}(t)+{k_3}e(t)=0 \end{array} \end{aligned}$$
(8.75)

By selecting the feedback gains \(k_i, \ i=1,2,3\) such that the characteristic polynomial of Eq. (8.75) to be a Hurwitz one, it assured that \(lim_{t{\rightarrow }\infty }e(t)=0\).

8.5.5 Adaptive Fuzzy Control of the MEMS Model Using Output Feedback

8.5.5.1 Problem Statement

In subsection 8.5.4 the model of the MEMS actuator was transformed to a form for which it is possible to apply differential flatness theory-based adaptive fuzzy control. The purpose for using adaptive control, is to solve the microactuator’s control problem in case that its dynamics is unknown and the state vector is not completely measurable. It has been shown that after applying the differential flatness theory-based transformation, the following nonlinear SISO system is obtained :

$$\begin{aligned} x^{(n)}=f(x,t)+g(x,t)u+\tilde{d} \end{aligned}$$
(8.76)

where f(xt), g(xt) are unknown nonlinear functions and \(\tilde{d}\) is an unknown additive disturbance. The objective is to force the system’s output \(y=x\) to follow a given bounded reference signal \(x_d\). As explained in Chap. 3, in the presence of non-Gaussian disturbances w, successful tracking of the reference signal is denoted by the \(H_{\infty }\) criterion [454]

$$\begin{aligned} {\int _0^T}{e^T}Qe{dt} \le {\rho ^2} {\int _0^T}{w^T}w{dt} \end{aligned}$$
(8.77)

where \(\rho \) is the attenuation level and corresponds to the maximum singular value of the transfer function G(s) of the linearized equivalent of Eq. (8.76).

8.5.5.2 Transformation of Tracking into a Regulation Problem

The \(H_{\infty }\) approach to nonlinear systems control consists of the following steps: (i) linearization is applied, (ii) the unknown system dynamics are approximated by neural of fuzzy estimators, (iii) an \(H_{\infty }\) control term, is employed to compensate for estimation errors and external disturbances. If the state vector is not measurable, this can be reconstructed with the use of an observer.

For measurable state vector x, desirable state vector \(x_m\) and uncertain functions f(xt) and g(xt) an appropriate control law for (8.76) would be

$$\begin{aligned} u={ {1 \over {\hat{g}(x,t)}}[x_m^{(n)}-\hat{f}(x,t)+{K^T}e+{u_c}]} \end{aligned}$$
(8.78)

where, \(\hat{f}\) and \(\hat{g}\) are the approximations of the unknown parts of the system dynamics f and g respectively, and which can be given by the outputs of suitably trained neuro-fuzzy networks. The term \(u_c\) denotes a supervisory controller which compensates for the approximation error \(w=[f(x,t)-\hat{f}(x,t)]+[g(x,t)-\hat{g}(x,t)]u\), as well as for the additive disturbance \(\tilde{d}\). Moreover the vectors \(K^T=[k_n,k_{n-1},\ldots ,k_1]\), and \(e^T=[e,\dot{e},\ddot{e},\ldots ,e^{(n-1)}]^T\) are chosen such that the polynomial \(e^{(n)}+{k_1}e^{(n-1)}+{k_2}e^{(n-2)}+\cdots +{k_n}e\) is Hurwitz. The control law of Eq. (8.78) in Eq. (8.76) results into

$$\begin{aligned} \begin{array}{c} x^{(n)}=f(x,t)+g(x,t){1 \over {\hat{g}(x,t)}}[x_m^{(n)}-{\hat{f}(x,t)}-{K^T}e+u_c]+ \tilde{d} \Rightarrow \\ x^{(n)}=f(x,t)+ \{ \hat{g}(x,t)+[g(x,t)-\hat{g}(x,t)] \} {1 \over {\hat{g}(x,t)}}[x_m^{(n)}-{\hat{f}(x,t)}-{K^T}e+u_c]+ \tilde{d} \Rightarrow \\ x^{(n)}=f(x,t)+ \{ {\hat{g}(x,t) \over {\hat{g}(x,t)}}[x_m^{(n)}-{\hat{f}(x,t)}-{K^T}e+u_c]+ [g(x,t)-\hat{g}(x,t)]u + \tilde{d} \Rightarrow \\ x^{(n)}=f(x,t)+ x_m^{(n)}-\hat{f}(x,t)-{K^T}e+{u_c}+[g(x,t)-\hat{g}(x,t)]u+{u_c}+\tilde{d} \Rightarrow \\ x^{(n)}-x_m^{(n)}=-{K^T}e+[f(x,t)-\hat{f}(x,t)]+[g(x,t)-\hat{g}(x,t)]u+{u_c}+\tilde{d} \Rightarrow \\ x^{(n)}=-{K^T}e+{u_c}+[f(x,t)-\hat{f}(x,t)]+[g(x,t)-\hat{g}(x,t)]u+\tilde{d} \\ \end{array} \end{aligned}$$
(8.79)

The above relation can be written in a state equations form. The state vector is taken to be \(e^T=[e,\dot{e},\ldots ,e^{(n-1)}]\), which yields

$$\begin{aligned} \begin{array}{c} \dot{e}=Ae-B{K^T}e+B{u_c}+B\{[f(x,t)-\hat{f}(x,t)]+[g(x,t)-\hat{g}(x,t)]u+\tilde{d}\} \end{array} \end{aligned}$$
(8.80)

or equivalently

$$\begin{aligned} \begin{array}{l} \dot{e}=(A-B{K^T})e+B{u_c}+B \{ [f(x,t)-\hat{f}(x,t)]+[g(x,t)-\hat{g}(x,t)]u+\tilde{d} \}\\ \\ {e_1}={C^T}e \end{array} \end{aligned}$$
(8.81)

where

$$\begin{aligned} \begin{array}{l} A=\begin{pmatrix} 0 &{} 1 &{} 0 &{} \cdots &{} \cdots &{} 0 \\ 0 &{} 0 &{} 1 &{} \cdots &{} \cdots &{} 0 \\ \cdots &{} \cdots &{} \cdots &{} \cdots &{} \cdots &{} \cdots \\ \cdots &{} \cdots &{} \cdots &{} \cdots &{} \cdots &{} \cdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} \cdots &{} 1 \\ 0 &{} 0 &{} 0 &{} \cdots &{} \cdots &{} 0 \end{pmatrix} \\ B^T=\begin{pmatrix} 0,0,\ldots ,0,1 \end{pmatrix}, \ C^T=\begin{pmatrix}1,0,\ldots ,0,0\end{pmatrix}\\ K^T=\begin{pmatrix}k_0,k_1,\ldots ,k_{n-2},k_{n-1}\end{pmatrix} \\ \end{array} \end{aligned}$$
(8.82)

where \(e_1\) denotes the output error \(e_1=x-{x_m}\). Equation (8.81) describes a regulation problem.

8.5.5.3 Estimation of the State Vector

As explained in Chap. 5, the control of the system described by Eq. (8.76) becomes more complicated when the state vector x is not directly measurable and has to be reconstructed through a state observer. The following definitions are used:

  • error of the state vector \(e=x-x_m\)

  • error of the estimated state vector \(\hat{e}=\hat{x}-x_m\)

  • observation error \(\tilde{e}=e-\hat{e}=(x-x_m)-(\hat{x}-x_m)\)

When an observer is used to reconstruct the state vector, the control law of Eq. (8.78) is written as

$$\begin{aligned} u={{1 \over {\hat{g}(\hat{x},t)}}[x_m^{(n)}-\hat{f}(\hat{x},t)+{K^T}e+{u_c}]} \end{aligned}$$
(8.83)

Applying Eq. (8.83) to the nonlinear system described by Eq. (8.76), after some operations results into

$$\begin{aligned} \begin{array}{c} x^{(n)}=x_m^{(n)}-{K^{T}\hat{e}}+{u_c}+[f(x,t)-\hat{f}(\hat{x},t)]+\\ {[g(x,t)-\hat{g}(\hat{x},t)]}u+\tilde{d} \end{array} \end{aligned}$$

It holds \(e=x-x_m \Rightarrow x^{(n)}=e^{(n)}+x_m^{(n)}\). Substituting \(x^{(n)}\) in the above equation gives

$$\begin{aligned} \begin{array}{c} e^{(n)}+x_m^{(n)}=x_m^{(n)}-{K^T\hat{e}}+u_c+[f(x,t)-\hat{f}(\hat{x},t)]+\\ +[g(x,t)-\hat{g}(\hat{x},t)]u+\tilde{d}\Rightarrow \end{array} \end{aligned}$$
(8.84)
$$\begin{aligned} \begin{array}{c} \dot{e}=Ae-B{K^T\hat{e}}+B{u_c}+B\{[f(x,t)-\hat{f}(\hat{x},t)]+\\ +[g(x,t)-\hat{g}(\hat{x},t)]u+\tilde{d}\} \end{array} \end{aligned}$$
(8.85)
$$\begin{aligned} e_1={C^T}e \end{aligned}$$
(8.86)

where \(e=[e,\dot{e},\ddot{e},\ldots ,e^{(n-1)}]^T\), and \(\hat{e}=[\hat{e},\dot{\hat{e}},\ddot{\hat{e}},\ldots ,\hat{e}^{(n-1)}]^T\).

The state observer is designed according to Eqs. (8.85) and (8.86) and is given by [454]:

$$\begin{aligned} \dot{\hat{e}}=A{\hat{e}}-B{K^T}{\hat{e}}+{K_o}[e_1-{C^T}{\hat{e}}] \end{aligned}$$
(8.87)
$$\begin{aligned} \hat{e}_1={C^T}{\hat{e}} \end{aligned}$$
(8.88)

The observation gain \(K_o=[k_{o_0},k_{o_1},\ldots ,k_{o_{n-2}},k_{o_{n-1}}]^T\) is selected so as to assure the convergence of the observer .

8.5.5.4 Additional Control Term for Disturbances Compensation

The additional term \(u_c\) which appears in Eq. (8.83) is used in the observer-based control to compensate for:

  • The external disturbances \(\tilde{d}\)

  • The state vector estimation error \(\tilde{e}=e-\hat{e}=x-\hat{x}\)

  • The approximation error of the nonlinear functions f(xt) and g(xt), denoted as \(w=[f(x,t)-\hat{f}(\hat{x},t)]+[g(x,t)-\hat{g}(\hat{x},t)]u\)

The control signal \(u_c\) consists of 2 terms, namely:

  • the \(H_{\infty }\) control term, \(u_a=-{1 \over r}{B^T}P\tilde{e}\) for the compensation of d and w

  • the control term \(u_b\) for the compensation of the observation error \(\tilde{e}\)

8.5.5.5 Dynamics of the Observation Error

The observation error is defined as \(\tilde{e}=e-\hat{e}=x-\hat{x}\). Substructing Eq. (8.87) from Eq. (8.85) as well as Eq. (8.88) from Eq. (8.86) one gets

$$\begin{aligned} \begin{array}{l} \dot{e}-\dot{\hat{e}}=A(e-\hat{e})+B{u_c}+B\{[f(x,t)-\hat{f}(\hat{x},t)]+\\ +[g(x,t)-\hat{g}(\hat{x},t)]u+ \tilde{d}\}-{K_o}{C^T}(e-\hat{e})\\ \\ {e_1}-{\hat{e}_1}={C^T}(e-\hat{e}) \end{array} \end{aligned}$$

i.e.,

$$\begin{aligned} \begin{array}{c} \dot{\tilde{e}}=A\tilde{e}+B{u_c}+B\{[f(x,t)-\hat{f}(\hat{x},t)]+\\ +[g(x,t)-\hat{g}(\hat{x},t)]u+\tilde{d}\}-{K_o}{C^T}\tilde{e} \\ \\ \tilde{e}_1={C^T}\tilde{e} \end{array} \end{aligned}$$

which can be written as

$$\begin{aligned} \begin{array}{c} \dot{\tilde{e}}=(A-{K_o}{C^T}){\tilde{e}}+B{u_c}+B\{[f(x,t)-\hat{f}(\hat{x},t)]+\\ +[g(x,t)-\hat{g}(\hat{x},t)]u+\tilde{d} \end{array} \end{aligned}$$
(8.89)
$$\begin{aligned} \tilde{e}_1=C{\tilde{e}} \end{aligned}$$
(8.90)

8.5.5.6 Approximation of the Unknown MEMS Dynamics

Neuro-fuzzy networks can be trained on-line to approximate parts of the dynamic equation of nonlinear systems, as well as to compensate for external disturbances. The approximation of functions f(xt) and g(xt) of Eq. (8.83) can be carried out with Takagi–Sugeno neuro-fuzzy networks of zero or first order (Fig. 3.1). These consist of rules of the form:

$$\begin{aligned} \begin{array}{c} R^l: \text {IF}\, \hat{x} \,\text {is}\, A_1^l \,\text {AND}\, \dot{\hat{x}}\, \text {is}\, A_2^l \,\text {AND}\, \cdots \, \text {AND}\, \hat{x}^{(n-1)}\,\text {is}\, A_n^l\\ \qquad \text {THEN}\, \bar{y}^l={{\sum }_{i=1}^n}{w_i^l}{\hat{x}_i}+b^l, \ \ l=1,2,\ldots ,L\\ \end{array} \end{aligned}$$

The output of the neuro-fuzzy model is calculated by taking the average of the consequent part of the rules

$$\begin{aligned} \hat{y}={ { {\sum _{l=1}^L}{\bar{y}^l}{\prod _{i=1}^{n}}{\mu _{A_i^l}(\hat{x}_i)}} \over {\sum _{l=1}^L{\prod _{i=1}^{n}}{\mu _{A_i^l}(\hat{x}_i)} } } \end{aligned}$$
(8.91)

where \(\mu _{A_i^l}\) is the membership function of \(x_i\) in the fuzzy set \(A_i^l\). The training of the neuro-fuzzy networks is carried out with 1st order gradient algorithms, in pattern mode, i.e., by processing only one data pair \((x_i,y_i)\) at every time step i. The estimation of f(xt) and g(xt) can be written as

$$\begin{aligned} \begin{array}{c} \hat{f}(\hat{x}|\theta _f)={\theta _f^T}{\phi (\hat{x})} \hat{g}(\hat{x}|\theta _g)={\theta _g^T}{\phi (\hat{x})} \end{array} \end{aligned}$$
(8.92)

where \(\phi (\hat{x})\) are kernel functions with elements \(\phi ^l(\hat{x})={ {{\prod _{i=1}^{n}}{\mu _{A_i^l}(\hat{x}_i)}} \over {\sum _{l=1}^L{\prod _{i=1}^{n}}{\mu _{A_i^l}(\hat{x}_i)}} } \ \ l=1,2,\ldots ,L\). It is assumed that that the weights \(\theta _f\) and \(\theta _g\) vary in the bounded areas \(M_{\theta _f}\) and \(M_{\theta _g}\) which are defined as

$$\begin{aligned} \begin{array}{c} M_{\theta _f}=\{\theta _f \in R^h: ||\theta _f||\le {m_{\theta _f}} \} \\ M_{\theta _g}=\{\theta _g \in R^h: ||\theta _g||\le {m_{\theta _g}} \} \end{array} \end{aligned}$$
(8.93)

with \(m_{\theta _f}\) and \(m_{\theta _g}\) positive constants. The values of \(\theta _f\) and \(\theta _g\) for which optimal approximation is succeeded are:

$$\begin{aligned} \begin{array}{l} {\theta _f^*}=arg \ min_{\theta _f \in M_{\theta _f}}[sup_{x \in U_x, \hat{x} \in U_{\hat{x} }} |f(x)-\hat{f}(\hat{x}|\theta _f)|]\\ {\theta _g^*}=arg \ min_{\theta _g \in M_{\theta _g}}[sup_{x \in U_x, \hat{x} \in U_{\hat{x} }} |g(x)-\hat{g}(\hat{x}|\theta _g)|] \end{array} \end{aligned}$$

The variation ranges of x and \(\hat{x}\) are the compact sets

$$\begin{aligned} \begin{array}{c} U_x=\{x \in R^{n}: ||x|| \le m_x < \infty \}, \\ U_{\hat{x}}=\{\hat{x} \in R^{n}: ||\hat{x}|| \le m_{\hat{x}} < \infty \} \end{array} \end{aligned}$$
(8.94)

The approximation error of f(xt) and g(xt) is defined as

$$\begin{aligned} \begin{array}{l} w=[\hat{f}(\hat{x}|\theta _f^*)-f(x,t)]+[\hat{g}(\hat{x}|\theta _g^*)-g(x,t)]u \Rightarrow \\ w=\{[\hat{f}(\hat{x}|\theta _f^*)-f(x|\theta _f)]+[f(x|\theta _f)-f(x,t)]\}+\\ \qquad \;\,\{[\hat{g}(\hat{x}|\theta _g^*)-g(\hat{x}|\theta _g)]+[g(\hat{x}|\theta _g)-g(x,t)]\}u \end{array} \end{aligned}$$
(8.95)

where

  • \(\hat{f}(\hat{x}|\theta _f^*)\) is the approximation of f for the best estimation \(\theta _f^*\) of the weights’ vector \(\theta _f\).

  • \(\hat{g}(\hat{x}|\theta _g^*)\) is the approximation of g for the best estimation \(\theta _g^*\) of the weights’ vector \(\theta _g\).

The approximation error w can be decomposed into \(w_a\) and \(w_b\), where

$$\begin{aligned}&w_a=[\hat{f}(\hat{x}|{\theta _f})-\hat{f}(\hat{x}|{\theta _f^*})]+[\hat{g}(\hat{x}|{\theta _g})-\hat{g}(\hat{x}|{\theta _g^*})]u \\&w_b=[\hat{f}(\hat{x}|{\theta _f^*})-f(x,t)]+[\hat{g}(\hat{x}|{\theta _g^*})-g(x,t)]u \end{aligned}$$

Finally, the following two parameters are defined:

$$\begin{aligned} \tilde{\theta }_f=\theta _f-\theta _f^*, \ \ \ \tilde{\theta }_g=\theta _g-\theta _g^* \end{aligned}$$
(8.96)

8.5.6 Lyapunov Stability Analysis

The adaptation law of the neuro-fuzzy approximators weights \(\theta _f\) and \(\theta _g\) as well as of the supervisory control term \(u_c\) is derived from the requirement for negative definiteness of the Lyapunov function

$$\begin{aligned} V={1 \over 2}{{\hat{e}^T}{P_1}{\hat{e}}}+{1 \over 2}{{\tilde{e}^T}{P_2}{\tilde{e}}}+{1 \over {2{\gamma _1}}}{\tilde{\theta }_f^T}{\tilde{\theta }_f}+{1 \over {2{\gamma _2}}}{\tilde{\theta }_g^T}{\tilde{\theta }_g} \end{aligned}$$
(8.97)

The selection of the Lyapunov function is based on the following principle of indirect adaptive control \(\hat{e}: \lim _{t \rightarrow \infty }{\hat{x}(t)}={x_d}(t)\) and \(\tilde{e}: \lim _{t \rightarrow \infty }{\hat{x}(t)}=x(t)\). This yields \(\lim _{t \rightarrow \infty }x(t)={x_d}(t)\). Substituting Eqs. (8.85), (8.86) and Eqs. (8.89), (8.90) into Eq. (8.97) and differentiating results into

$$\begin{aligned} \begin{array}{c} \dot{V}={1 \over 2}{\dot{\hat{e}}^T}{P_1}{\hat{e}}+{1\over 2}{\hat{e}^T}{P_1}{\dot{\hat{e}}}+ {1 \over 2}{\dot{\tilde{e}}^T}{P_2}{{\tilde{e}}}+{1 \over 2}{\tilde{e}^T}{P_2}{\dot{\tilde{e}}}+ {1 \over {\gamma _1}}{\tilde{\theta }_f^T}{\dot{\tilde{\theta }}_f}+{1 \over {\gamma _2}}{\tilde{\theta }_g^T}{\dot{\tilde{\theta }}_g} \end{array} \end{aligned}$$
(8.98)

which in turn gives

$$\begin{aligned} \begin{array}{c} \dot{V}={1 \over 2}\{(A-BK^T)\hat{e}+{K_o}{C^T}\tilde{e}\}^T{P_1}{\hat{e}}+{1 \over 2}{{\hat{e}}^T}{P_1}\{(A-BK^T)\hat{e}+{K_o}{C^T}\tilde{e}\}+\\ +{1 \over 2} \{(A-{K_o}C^T)\tilde{e}+B{u_c}+Bd+Bw \}^T{P_2}{\tilde{e}}+ {1 \over 2} {\tilde{e}^T}{P_2}\{(A-{K_o}C^T)\tilde{e}+\}\\ +Bu_c+Bd+Bw+{{1 \over {\gamma _1}}{\tilde{\theta }_f}^T{\dot{\tilde{\theta }}_f}}+{{1 \over {\gamma _2}}{\tilde{\theta }_g}^T{\dot{\tilde{\theta }}_g}} \end{array} \end{aligned}$$
(8.99)

or, equivalently

$$\begin{aligned} \begin{array}{c} \dot{V}={1 \over 2}\{ {\hat{e}^T}(A-B{K^T})^T+{\tilde{e}^T}C{K_o^T}\}{P_1}{\hat{e}}+{1 \over 2}{\hat{e}^T}{P_1}\{(A-B{K^T}){\hat{e}}+ {K_o}{C^T}{\tilde{e}}\}+\\ +{1 \over 2} \{ {\tilde{e}^T}(A-{K_o}C^T)^T+{B^T}{u_c}+{B^T}w+{B^T}d \}{P_2}{\tilde{e}}+ {1 \over 2}{\tilde{e}^T}{P_2}\{(A-{K_o}{C^T}){\tilde{e}}+\\ +B{u_c}+Bw+Bd\}+{{1 \over {\gamma _1}}{\tilde{\theta }_f^T}}{\dot{\tilde{\theta }}_f}+{{1 \over {\gamma _2}}{\tilde{\theta }_g^T}}{\dot{\tilde{\theta }}_g} \end{array} \end{aligned}$$
(8.100)
$$\begin{aligned} \begin{array}{c} \dot{V}={1 \over 2}{\hat{e}^T}{(A-BK^T)^T}{P_1}{\hat{e}}+ {1 \over 2}{\tilde{e}^T}C{K_o^T}{P_1}{\hat{e}}+\\ +{1 \over 2}{\hat{e}^T{P_1}(A-BK^T)\hat{e}}+{1 \over 2}{\hat{e}^T}{P_1}{K_o}{C^T}{\tilde{e}}+\\ +{1 \over 2}{\tilde{e}^T}{(A-{K_oC^T})^T}{P_2}{\tilde{e}}+{1 \over 2}{B^T}{P_2}{\tilde{e}(u_c+w+d)}+\\ +{1 \over 2}{\tilde{e}^T}{P_2}(A-{K_o}C^T){\tilde{e}}+{1 \over 2}{\tilde{e}^T}{P_2}B(u_c+w+d)+\\ +{{1 \over {\gamma _1}}{\tilde{\theta }_f^T}{\dot{\tilde{\theta }}}_f}+{{1 \over {\gamma _2}}{\tilde{\theta }_g^T}{\dot{\tilde{\theta }}}_g} \end{array} \end{aligned}$$
(8.101)

Assumption 1: For given positive definite matrices \(Q_1\) and \(Q_2\) there exist positive definite matrices \(P_1\) and \(P_2\), which are the solution of the following Riccati equations [454]

$$\begin{aligned} {(A-BK^T)^T}{P_1}+{P_1}(A-BK^T)+Q_1=0 \end{aligned}$$
(8.102)
$$\begin{aligned} \begin{array}{c} {(A-{K_o}C^T)}^T{P_2}+{P_2}{(A-{K_o}C^T)}-\\ -{P_2}B({2 \over r}-{1 \over {\rho ^2}}){B^T}{P_2}+{Q_2}=0 \end{array} \end{aligned}$$
(8.103)

The conditions given in Eqs. (8.102)–(8.103) are related to the requirement that the systems described by Eqs. (8.87), (8.88), (8.89), and (8.90) exhibit stable dynamics. Substituting Eqs. (8.102)–(8.103) into \(\dot{V}\) yields

$$\begin{aligned} \begin{array}{c} \dot{V}={1 \over 2}{\hat{e}^T}\{(A-BK^T)^T{P_1}+{P_1}(A-BK^T)\}{\hat{e}} +{\tilde{e}^T}C{K_o^T}{P_1}{\hat{e}}+\\ +{1 \over 2}{\tilde{e}^T}\{(A-{K_o}C^T)^T{P_2}+{P_2}(A-{K_o}{C^T})\}{\tilde{e}}+{B^T}{P_2}{\tilde{e}}(u_c+w+d)+\\ +{1 \over {\gamma _1}}{\tilde{\theta }_f^T}{\dot{\tilde{\theta }}_f}+ {1 \over {\gamma _2}}{\tilde{\theta }_g^T}{\dot{\tilde{\theta }}_g} \end{array} \end{aligned}$$
(8.104)

which is also written as

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{{\hat{e}^T}{Q_1}{\hat{e}}}+{\tilde{e}^T}C{K_o^T}{P_1}{\hat{e}}-{1 \over 2} \tilde{e}^T \{{Q_2}-{P_2}B({2 \over r}-{1 \over {\rho ^2}}){B^T}{P_2}\}{\tilde{e}}+\\ +{B^T}{P_2}{\tilde{e}}(u_c+w+d)+{1 \over {\gamma _1}}{\tilde{\theta }_f^T}{\dot{\tilde{\theta }}_f}+ {1 \over {\gamma _2}}{\tilde{\theta }_g^T}{\dot{\tilde{\theta }}_g}\\ \end{array} \end{aligned}$$
(8.105)

The supervisory control \(u_c\) is decomposed in two terms, \(u_a\) and \(u_b\)

$$\begin{aligned} \begin{array}{c} u_a=-{1 \over r}p_{1n}\tilde{e}_1=-{1 \over r}{\tilde{e}^T}{P_2}B+{1 \over r}(p_{2n}\tilde{e}_2+\cdots +p_{nn}\tilde{e}_n)=\\ =-{1 \over r}{\tilde{e}^T}{P_2}B+{\varDelta }{u_a} \end{array} \end{aligned}$$
(8.106)

where \(p_{1n}\) stands for the last (nth) element of the first row of matrix \(P_2\), and

$$\begin{aligned} u_b=-[{({P_2}B)^T}({P_2}B)]^{-1}({P_2}B)^TC{K_o^T}{P_1}{\hat{e}} \end{aligned}$$
(8.107)
  • \(u_a\) is an \(H_{\infty }\) control used for the compensation of the approximation error w and the additive disturbance \(\tilde{d}\). Its first component \(-{1 \over r}{\tilde{e}^T}{P_2}B\) has been chosen so as to compensate for the term \({1 \over r}{\tilde{e}^T}{P_2}B{B^T}{P_2}\tilde{e}\), which appears in Eq. (8.105). By subtracting the second component \(-{1 \over r}(p_{2n}\tilde{e}_2+\cdots +p_{nn}\tilde{e}_n)\) one has that \(u_a=-{1 \over r}p_{1n}\tilde{e}_1\), which means that \(u_a\) is computed based on the feedback the measurable variable \(\tilde{e}_1\). Equation (8.106) is finally rewritten as \(u_a=-{1 \over r}{\tilde{e}^T}{P_2}B+{\varDelta }{u_a}\).

  • \(u_b\) is a control used for the compensation of the observation error (the control term \(u_b\) has been chosen so as to satisfy the condition \({\tilde{e}^T}{P_2}B{u_b}=-{\tilde{e}^T}C{K_o^T}{P_1}\hat{e}\) (Fig. 8.16).

The control scheme is depicted in Fig. 10.18. Substituting Eqs. (8.106) and (10.212) in \(\dot{V}\), one gets

Fig. 8.16
figure 16

The proposed \(H_{\infty }\) control scheme

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}+{\tilde{e}^T}C{K_o^T}{P_1}{\hat{e}} -{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}+{1 \over r}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}-\\ -{1 \over {2\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}+{\tilde{e}^T}{P_2}B{u_b}-{1 \over r}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}+ {B^T}{P_2}{\tilde{e}(w+d+{\varDelta }{u_a})}+\\ +{1 \over {\gamma _1}}{\tilde{\theta }_f^T}{\dot{\tilde{\theta }}_f}+{1 \over {\gamma _2}}{\tilde{\theta }_g^T}{\dot{\tilde{\theta }}_g} \end{array} \end{aligned}$$
(8.108)

or equivalently,

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}} + {B^T}{P_2}{\tilde{e}(w+d+{\varDelta }{u_a})}+\\ +{1 \over {\gamma _1}}{\tilde{\theta }_f^T}{\dot{\tilde{\theta }}_f}+ {1 \over {\gamma _2}}{\tilde{\theta }_g^T}{\dot{\tilde{\theta }}_g} \end{array} \end{aligned}$$
(8.109)

It holds that \(\dot{\tilde{\theta }}_f=\dot{\theta }_f-\dot{\theta _f^*}=\dot{\theta _f}\) and \(\dot{\tilde{\theta }}_g=\dot{\theta }_g-\dot{\theta _g^*}=\dot{\theta _g}\). The following weight adaptation laws are considered :

$$\begin{aligned} {\dot{\theta }_f}=\left\{ \begin{array}{c} -{\gamma _1}{\tilde{e}^T}{P_2}B{\phi (\hat{x})} \ \ if \ ||\theta _f||<m_{\theta _f} \\ 0 \ \ ||\theta _f|| \ge m_{\theta _f} \end{array}\right. \end{aligned}$$
(8.110)
$$\begin{aligned} {\dot{\theta }_g}=\left\{ \begin{array}{c} -{\gamma _2}{\tilde{e}^T}{P_2}B{\phi (\hat{x})}{u_c} \ \ if \ ||\theta _g||<m_{\theta _g} \\ 0 \ \ ||\theta _g|| \ge m_{\theta _g} \end{array}\right. \end{aligned}$$
(8.111)

To set \(\dot{\theta }_f\) and \(\dot{\theta }_g\) equal to 0, when \(||\theta _f \ge m_{\theta _f}||\), and \(||\theta _g \ge m_{\theta _g}||\) the projection operator is employed [427]:

$$\begin{aligned}&\,\, P\{{\gamma _1}\tilde{e}^T{P_2}B{\phi (\hat{x})}\}=-{\gamma _1}{\tilde{e}^T}{P_2}B{\phi (\hat{x})}+\\&\qquad \,+{\gamma _1}{\tilde{e}^T}{P_2}B{{\theta _f}{\theta _f^T} \over {||\theta _f||^2}}{\phi (\hat{x})}\\&\\&P\{{\gamma _1}\tilde{e}^T{P_2}B{\phi (\hat{x})}{u_c}\}=-{\gamma _1}{\tilde{e}^T}{P_2}B{\phi (\hat{x})}{u_c}+\\&\qquad \,+{\gamma _1}{\tilde{e}^T}{P_2}B{{\theta _f}{\theta _f^T} \over {||\theta _f||^2}}{\phi (\hat{x})}{u_c} \end{aligned}$$

The update of \(\theta _f\) follows a gradient algorithm on the cost function \({1 \over 2}(f-\hat{f})^2\) [31, 405]. The update of \(\theta _g\) is also of the gradient type, while \(u_c\) implicitly tunes the adaptation gain \(\gamma _2\). Substituting Eqs. (8.110) and (8.111) in \(\dot{V}\) gives

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}+ {B^T}{P_2}{\tilde{e}(w+d+{\varDelta }{u_a})}+\\ +{1 \over {\gamma _1}}{\tilde{\theta }_f^T}({-\gamma _1}{\tilde{e}^T}{P_2}B{\phi {(\hat{x})}})+{1\over {\gamma _2}}{\tilde{\theta }_g^T}({-\gamma _2}{\tilde{e}^T}{P_2}B{\phi {(\hat{x})}}u)\\ \end{array} \end{aligned}$$
(8.112)

which is also written as

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}+ {\tilde{e}^T}{P_2}B(w+d+{\varDelta }{u_a})-\\ -{\tilde{e}^T}{P_2}B{\tilde{\theta }_f^T}\phi (\hat{x})-{\tilde{e}^T}{P_2}B{\tilde{\theta }_g^T}{\phi (\hat{x})}u \end{array} \end{aligned}$$
(8.113)

and using Eqs. (8.92) and (8.96) results into

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}+ {\tilde{e}^T}{P_2}B(w+d+{\varDelta }{u_a})-\\ -{\tilde{e}^T}{P_2}B\{[\hat{f}(\hat{x}|\theta _f)+\hat{g}(\hat{x}|\theta _f)u]-[\hat{f}(\hat{x}|\theta _f^*) +\hat{g}(\hat{x}|\theta _g^*)u]\} \end{array} \end{aligned}$$
(8.114)

where \([\hat{f}(\hat{x}|\theta _f)+\hat{g}(\hat{x}|\theta _f)u]-[\hat{f}(\hat{x}|\theta _f^*)+\hat{g}(\hat{x}|\theta _g^*)u]=w_a\). Thus setting \(w_1=w+w_a+d+{\varDelta }{u_a}\) one gets

$$\begin{aligned} \begin{array}{c} \dot{V}=-{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}+{B^T}{P_2}{\tilde{e}}{w_1}\Rightarrow \\ \dot{V}=-{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}+{1 \over 2 }{w_1^T}{B^T}{P_2}{\tilde{e}}+{1 \over 2}{\tilde{e}^T}{P_2}B{w_1} \end{array} \end{aligned}$$
(8.115)

Lemma: The following inequality holds

$$\begin{aligned} \begin{array}{c} {1 \over 2}{\tilde{e}^T}{P_2}B{w_1}+{1 \over 2}{w_1^T}{B^T}{P_2}{\tilde{e}}-{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}\le {1 \over 2}{\rho ^2}{w_1^T}{w_1} \end{array} \end{aligned}$$
(8.116)

Proof: The binomial \(({\rho }a-{1 \over \rho }b)^2 \ge 0\) is considered. Expanding the left part of the above inequality one gets

\({\rho ^2}{a^2}+{1 \over {\rho ^2}}{b^2}-2ab \ge 0 \Rightarrow {1 \over 2}{\rho ^2}{a^2}+{1 \over {2\rho ^2}}{b^2}-ab \ge 0 \Rightarrow ab-{1 \over {2\rho ^2}}{b^2} \le {1 \over 2}{\rho ^2}{a^2} \Rightarrow {1 \over 2}ab+{1 \over 2}ab-{1 \over {2\rho ^2}}{b^2} \le {1 \over 2}{\rho ^2}{a^2}\)

The following substitutions are carried out: \(a=w_1\) and \(b=\tilde{e}^T{P_2}B\) and the previous relation becomes

$$\begin{aligned} \begin{array}{c} {1 \over 2}{w_1^T}{B^T}{P_2}{\tilde{e}}+{1 \over 2}{\tilde{e}^T}{P_2}B{w_1}-{{1 \over {2\rho ^2}} {\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}}\\ \le {1 \over 2} {\rho ^2}{w_1^T}{w_1} \end{array} \end{aligned}$$
(8.117)

The above inequality is used in \(\dot{V}\), and the right part of the associated inequality is enforced

$$\begin{aligned} \dot{V} {\le } -{1 \over 2}{\hat{e}^T{Q_1}{\hat{e}}}-{1 \over 2}{\tilde{e}^T{Q_2}{\tilde{e}}}+{1 \over 2}{\rho ^2}{w_1^T}{w_1} \end{aligned}$$
(8.118)

Thus, Eq. (8.118) can be written as

$$\begin{aligned} \dot{V} \le -{1 \over 2}{E^T}QE+{1 \over 2}{\rho ^2}{w_1^T}{w_1} \end{aligned}$$
(8.119)

where

$$\begin{aligned} E=\begin{pmatrix} \hat{e} \\ \tilde{e} \end{pmatrix}, \ \ Q=\begin{pmatrix} Q_1 &{} 0 \\ 0 &{} Q_2 \end{pmatrix}={ {diag}}[Q_1,Q_2] \end{aligned}$$
(8.120)

Hence, the \(H_{\infty }\) performance criterion is derived. For \(\rho \) sufficiently small Eq. (8.118) will be true and the \(H_{\infty }\) tracking criterion will be satisfied. In that case, the integration of \(\dot{V}\) from 0 to T gives

$$\begin{aligned}&\,\, {\int _0^T}{\dot{V}(t)}dt \le -{1 \over 2} {\int _0^T}{||E||^2}dt+{1 \over 2}{\rho ^2}{\int _0^T}{||w_1||^2}dt \Rightarrow \\&2V(T)-2V(0) \le -{\int _0^T}{||E||_Q^2}dt+{\rho ^2}{\int _0^T}{||w_1||^2}dt \Rightarrow \\&\quad 2V(T)+{\int _0^T}{||E||_Q^2}dt \le 2V(0)+ {\rho ^2}{\int _0^T}{||w_1||^2}dt \end{aligned}$$

It is assumed that there exists a positive constant \(M_w>0\) such that \(\int _0^{\infty }{||w_1||^2}dt \le M_w\). Therefore for the integral \(\int _0^{T}{||E||_Q^2}dt\) one gets

$$\begin{aligned} {\int _0^{\infty }}{||E||_Q^2}dt \le 2V(0)+{\rho ^2}{M_w} \end{aligned}$$
(8.121)

Thus, the integral \({\int _0^{\infty }}{||E||_Q^2}dt\) is bounded and according to Barbalat’s Lemma

$$\begin{aligned}&\lim _{t \rightarrow \infty }{E(t)}=0 \Rightarrow \begin{array}{c} {\lim }_{t \rightarrow \infty }{\hat{e}(t)}=0 \\ {\lim }_{t \rightarrow \infty }{\tilde{e}(t)}=0 \end{array} \end{aligned}$$

Therefore \(\lim _{t \rightarrow \infty }{e(t)}=0\).

Fig. 8.17
figure 17

Output feedback-based adaptive fuzzy control of MEMS (microactuator)—Test 1: a state variables \(x_i\), \(i=1,\ldots ,3\) of the initial nonlinear system, b transformed state variables \(y_i\), \(i=1,\ldots ,3\) (blue line real value, red line setpoint)

Fig. 8.18
figure 18

Output feedback-based adaptive fuzzy control of MEMS (microactuator)—Test 2: a state variables \(x_i\), \(i=1,\ldots ,3\) of the initial nonlinear system, b transformed state variables \(y_i\), \(i=1,\ldots ,3\) (blue line real value, red line setpoint)

Fig. 8.19
figure 19

Output feedback-based adaptive fuzzy control of MEMS (microactuator)—Test 3: a state variables \(x_i\), \(i=1,\ldots ,3\) of the initial nonlinear system, b transformed state variables \(y_i\), \(i=1,\ldots ,3\) (blue line: real value, red line: setpoint)

Fig. 8.20
figure 20

Output feedback-based adaptive fuzzy control of MEMS (microactuator)—Test 4: a state variables \(x_i\), \(i=1,\ldots ,3\) of the initial nonlinear system, b transformed state variables \(y_i\), \(i=1,\ldots ,3\) (blue line: real value, red line: setpoint)

Fig. 8.21
figure 21

Output feedback-based adaptive fuzzy control of MEMS (microactuator)—Test 5: a state variables \(x_i\), \(i=1,\ldots ,3\) of the initial nonlinear system, b transformed state variables \(y_i\), \(i=1,\ldots ,3\) (blue line: real value, red line: setpoint)

8.5.7 Simulation Tests

The performance of the proposed output feedback-based adaptive fuzzy control approach for MEMS (microactuator) was tested in the case of tracking of several reference setpoints. The only measurable variable used in the control loop was the microactuator’s deflection variable x. The dynamic model of the MEMS, as well as the numerical values of its parameters were considered to be completely unknown. The control loop was based on simultaneous estimation of the unknown MEMS dynamics (this was performed with the use of neuro-fuzzy approximators) and of the nonmeasurable elements of the microactuator’s state vector, that is, of the deflections change rate \(\dot{x}\) and of the charge of the plates q (this was performed with the use of the state observer). The obtained results are presented in Figs. 8.17, 8.18, 8.19, 8.20, 8.21. The real values of the monitored parameters (state vector variables) are denoted with blue line, the estimated variables are denoted with green line, and the reference setpoints are plotted as red lines. It can be noticed that differential flatness theory-based adaptive fuzzy control of the MEMS, succeeded fast and accurate tracking of the reference setpoints.

The implementation of the proposed control scheme requires that the two algebraic Riccati equations which have been defined in Eqs. (8.102) and (8.103) are solved in each iteration of the control algorithm. These provide the positive definite matrices \(P_1\) and \(P_2\) which are used for the computation of the control signals \(u_a\) and \(u_b\) which have been defined in Eqs. (8.106) and (8.107). The transients of the state vector elements \(x_i, \ i=1,\ldots ,3\) observed while tracking the reference setpoints, are determined by the values given to the positive definite matrices \(Q_i, \ i=1,2\), as well as by the value of the parameter r and of the H-infinity coefficient (attenuation level) \(\rho \). Moreover, the values of the feedback control gains K and \(K_o\) also affected the convergence characteristics of the controller and of the observer. It has been confirmed that the variations of both \(x_i, \ i=1,\ldots ,3\) and of the control input u were smooth.

From the simulation tests it can be noticed that the proposed adaptive control scheme that was based on differential flatness theory assured the stability of the microactuator’s control loop, as well as good transient performance in the tracking of the reference setpoints.