
Significant research effort has been devoted to exploring the means and properties of active vibration attenuation in the last two to three decades. The complex discipline of active vibration control (AVC) is now leaving the realm of experimental applications and slowly starting to appear in advanced commercial products. In addition to the hardware components in AVC, such as the actuators performing changes in system dynamics and the sensors providing feedback information about vibration levels, an essential unit of this feedback system is the control strategy itself.

The field of engineering concerned with the design of control strategies and algorithms for dynamic systems is called control engineering. Active vibration control heavily relies on the recent theoretical results of control engineering. Generic strategies commonly used for dynamic plants ranging from missile cruise control to washing machines are also applicable to AVC systems. These strategies can be as simple as multiplying a feedback signal with a constant gain and supplying it back to the actuator; or they may be a complex online optimization based on ideas such as the model predictive control (MPC) algorithm. The aim of this chapter is thus to present a few essential control algorithms which are commonly used in active vibration control and to provide examples of their utilization in academic studies. It is by no means our goal to replicate intricate details and results of control theory such as robustness or uncertainty. Nor is it our ambition to list and describe every possible control strategy. The current chapter serves mainly as a taste of strategies utilized for AVC other than MPC. The reader interested in general aspects of control engineering is referred to feedback control textbooks such as the book on the basics of classical control theory by Antsaklis and Michel [10], the book on state-space control by Williams and Lawrence [118] and other works [9, 50, 73, 74].

Fig. 4.1 An excellent example of the possibilities offered by active vibration control: a modified F/A-18A test aircraft with active aeroelastic wings [76] is shown with accelerometers visible on the wing surfaces. In addition to the actuators and sensors, the performance of active systems depends on the choice and design of control algorithms as well (Courtesy of NASA)

The first section is concerned with classical control methods applied to active vibration attenuation. These methods are often referred to as position or velocity feedback, where the system simply uses a fixed gain to multiply the position, velocity or acceleration signal in order to compute an input signal, which is in turn supplied to the actuator. Section 4.2 discusses proportional-integral-derivative (PID) controllers, which are very commonly used in all areas of engineering, including AVC. The next two sections review slightly more advanced control strategies: Sect. 4.3 deals with linear quadratic control, while Sect. 4.4 considers \({\fancyscript{H}}_\infty\) control—both being basic optimization-based methods. Of these two methods, linear quadratic control is particularly interesting, since it is used both extensively and effectively in AVC. Moreover, it can be regarded as a basis for the model predictive control approach. The last major section of this chapter reviews the more exotic control approaches that are exciting and potentially powerful, albeit seldom used in the area of active vibration control due to several practical limitations. These methods are referred to as soft computing approaches, and we will cover the essential basics of neural networks, genetic algorithms and fuzzy control here. The former two approaches rely on ideas borrowed from nature, that is, the working principles of the nervous system and evolutionary processes. The latter, fuzzy control, utilizes the idea of fuzzy sets, where complex dynamics can be controlled using common sense and trivial statements. The reason why we have decided to briefly cover these less common methods is their potential to control and model highly nonlinear and hysteretic dynamics, such as magnetorheological (MR) dampers. Finally, Sect. 4.6 briefly mentions some of the alternative algorithms which can be used in active vibration control.

This book is first and foremost concerned with the application of model predictive control to vibration attenuation, therefore this control strategy will be introduced in comprehensive detail starting from the next part—Part II. As mentioned before, the current chapter is only concerned with algorithms other than MPC; hence we will give only a brief description here in order to familiarize the reader with the idea. Model predictive control is an advanced control algorithm where optimal inputs are calculated based on predictions given by an internal plant model. Model predictive control belongs to the broader family of algorithms based on optimal control. Unlike, for instance, linear quadratic (LQ) control, MPC not only generates optimal inputs but also actively respects various system constraints. As with every conceivable real plant, vibration control systems have inherent limitations—such as actuator saturation and others. These limitations render the system nonlinear, thus precautions must be taken to guarantee the stability of the control system. As will be elaborated later, the application of model predictive control to vibration attenuation can be a non-trivial task, mainly because of the high sampling rates required by the fast system dynamics.

Books on the topic of vibration mechanics and vibration control are starting to adapt to the new trend of AVC and to set aside chapters on control theory. In addition to the material provided by this chapter, an excellent treatment of vibration control concepts for a single degree of freedom vibrating system is given in the recent book by Benaroya and Nagurka [13]. For those with minimal or no background in control theory, parts of this publication may give a fast yet still very thorough discussion of classical transfer function-based control strategy and state-space control related to the problem of vibration. Moreover, there is an abundance of excellent literature discussing control theory from the active vibration control viewpoint, such as the classical book on AVC by Fuller et al. [42] or Inman [55], Preumont [96] and others [49, 56, 97].

Figure 4.1 illustrates a heavily modified F/A-18A test aircraft equipped with actuators and accelerometers to create a complex control system that is capable of altering the aerodynamic properties of the plane. Just as in the case of this aircraft with active aeroelastic wings—or any other control system—the proper choice of hardware in AVC is not the only important aspect of the design. The overall effectiveness and safety of the system also depends on a fitting control architecture.

1 Classical Feedback Methods

When the vibration signal measured by the sensors is simply amplified by a gain and fed back to the actuators, we may classify this type of feedback control system as a classical feedback method. To demonstrate the concept mathematically, let us consider a vibrating system described as a continuous state-space system, or in other words, given by a set of first-order differential equations [32]:

$$ \begin{aligned} \dot{{\bf x}}(t) &={\bf Ax}(t)+{\bf Bu}(t)\\ {\bf y}(t) &= {\bf Cx}(t)+{\bf Du}(t) \end{aligned} $$
(4.1)

where \({\bf A}\) is the state matrix, \({\bf B}\) is the input matrix and \({\bf C}\) is the output matrix. Because the term \({\bf Du}(t)\) represents direct input–output feedthrough, it is omitted from our representation. Although it is more common to represent system models in classical control theory by continuous (Laplace domain) or discrete (Z-domain) transfer functions, the state-space representation will be preferred here, since the model predictive controllers (MPC) introduced in upcoming chapters will also utilize a state-space model.
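As a concrete illustration of this representation, the following short Python sketch assembles the state-space matrices of a hypothetical one degree of freedom mass-spring-damper system; the numerical values of the mass, damping and stiffness are illustrative assumptions only.

```python
import numpy as np

# Hypothetical 1-DOF mass-spring-damper: m*q'' + b*q' + k*q = u
m, b, k = 1.0, 0.2, 100.0          # assumed mass, damping and stiffness

# State x = [q, q_dot]^T; output y = q (position); feedthrough D omitted
A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])   # state matrix
B = np.array([[0.0],
              [1.0 / m]])          # input matrix (force actuator)
C = np.array([[1.0, 0.0]])         # output matrix (position sensor)

# The lightly damped open-loop poles of the vibrating system
print(np.linalg.eigvals(A))
```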

Vibration control literature often refers to the terms position feedback and velocity feedback [55, 97]. These two very common control methods are a part of classical feedback-based vibration control. The idea of direct position or velocity feedback methods is in fact very simple: in direct position feedback, the position signal is amplified by a gain and fed back to the force actuators, while in direct velocity feedback the sensor output is differentiated, amplified and fed back to the force actuators. According to this underlying idea, we may define the input in direct position feedback as [42]:

$$ {\bf u}(t) = -{\bf Ky}(t) $$
(4.2)

where \({\bf K}\) is the feedback gain matrix. In case the signal is based on velocity measurement, we have velocity feedback and the input is given by [96, 97]:

$$ {\bf u}(t) = -{\bf K}\dot{{\bf y}}(t) $$
(4.3)

Furthermore, it is also possible to utilize the acceleration measurement and formulate the control input as

$$ {\bf u}(t) = -{\bf K}\ddot{{\bf y}}(t) $$
(4.4)

but when using direct acceleration feedback, it is also a possible strategy to pass the signal through a second-order filter and generate a force proportional to the output of that filter.

Note that unlike in the case of linear quadratic (LQ) control, here the input is not calculated using the state \({\bf x}(t);\) instead, the output \({\bf y}(t)\) and its derivatives are utilized. Considering the case of direct position feedback given by (4.2) and substituting the output equation, the input becomes

$$ {\bf u}(t) = -{\bf KCx}(t) $$
(4.5)

where the input term can now be substituted back into the original state equation in (4.1) to get

$$ \dot{{\bf x}}(t) = ({\bf A}-{\bf BKC}){\bf x}(t) $$
(4.6)

In case the sensors and actuators are co-located, we may assume that \({\bf C}={\bf B}^T,\) rendering the output equation to \({\bf y}(t)={\bf B}^T{{\bf x}}(t)\).
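As a numerical sketch of (4.6) under the co-located assumption \({\bf C}={\bf B}^T,\) the snippet below computes the closed-loop poles \(({\bf A}-{\bf BKC})\) for the hypothetical one degree of freedom model used above; the gain value is an arbitrary assumption.

```python
import numpy as np

# Hypothetical 1-DOF system from the earlier sketch
m, b, k = 1.0, 0.2, 100.0
A = np.array([[0.0, 1.0], [-k / m, -b / m]])
B = np.array([[0.0], [1.0 / m]])

C = B.T                            # co-located sensors and actuators
K = np.array([[50.0]])             # assumed feedback gain

# Closed-loop state matrix (A - BKC) per (4.6); its eigenvalues are the
# closed-loop poles, which are better damped than those of A alone
A_cl = A - B @ K @ C
print(np.linalg.eigvals(A_cl))
```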

In direct velocity feedback, two very common approaches exist. These are [113]:

  • constant amplitude velocity feedback (CAVF)

  • constant gain velocity feedback (CGVF)

In constant amplitude velocity feedback (CAVF), the driving signal of the \(i\)-th actuator has a constant amplitude, with a sign opposite to that of the velocity measured by the \(i\)-th sensor. This is mathematically denoted as:

$$ {\bf u}(t) = -{\bf K}\,\hbox{sign}(\dot{{\bf y}}(t)) $$
(4.7)

One may easily see that this control law is nonlinear and discontinuous. As the name implies, the feedback voltage amplitude in CAVF is constant. The feedback gain matrix used in this approach is defined by a diagonal matrix of the individual constant amplitudes \(\bar{A}_i\):

$$ {\bf K}=[0\;\hbox{diag}(\bar{A}_1\;\bar{A}_2\;\ldots\;\bar{A}_i\;\ldots \bar{A}_N)] $$
(4.8)

where \(N\) is the number of actuating points and \(i=1,\ldots,N.\) In constant gain velocity feedback the driving voltage of the \(i\)-th actuator is given by the velocity feedback relation introduced earlier in (4.3). The gain matrix here is simply a diagonal matrix of individual actuator gains, creating a linear continuous controller:

$$ {\bf K}=[0\;\hbox{diag}({K}_1\;{K}_2\;\ldots\;{K}_i\;\ldots\;{K}_N) ] $$
(4.9)

where \(N\) is the number of actuating points, \(i=1,\ldots,N,\) and \({K}_i\) are the associated gains.
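The difference between the two laws is easy to express in code: the sketch below contrasts the linear CGVF law with the discontinuous CAVF law for example velocity readings; the gains and amplitudes are assumed values.

```python
import numpy as np

def u_cgvf(y_dot, K):
    """Constant gain velocity feedback: linear and continuous, per (4.3)."""
    return -K * y_dot

def u_cavf(y_dot, A_bar):
    """Constant amplitude velocity feedback: nonlinear and discontinuous,
    switching sign with the measured velocity, per (4.7)."""
    return -A_bar * np.sign(y_dot)

y_dot = np.array([0.03, -0.01])    # example measured velocities
print(u_cgvf(y_dot, K=40.0))       # output proportional to velocity
print(u_cavf(y_dot, A_bar=1.5))    # constant amplitude, sign switching only
```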

Direct position feedback is not to be confused with the different concept of positive position feedback (PPF) [37, 45]. While direct position/velocity/acceleration feedback implies the use of a position, velocity or acceleration sensor in combination with a force actuator, positive position feedback assumes a strain sensor in combination with a strain actuator [96]. The best-known combination of such a sensor and actuator pair is the commonly used piezoceramic transducer. The essential idea behind positive position feedback is to use the position signal in combination with a second-order filter to generate the output for the strain actuator. The second-order filter shall have an increased damping, ultimately attenuating vibrations in the closed-loop system. Preumont suggests the use of such a system in a decentralized manner, with co-located sensor and actuator pairs, thereby establishing stability [96]. An advantage of the PPF approach is that it can be designed based on an experimental transfer function, without a deeper analytical knowledge of the structure [55]. For a SISO system, the PPF control input is based on the strain signal and its transfer function can be given by [55, 96, 97]:

$$ G(s)={{-K}\over{s^2 + 2 \zeta_{\rm fil} \omega_{\rm{fil}} s +\omega_{\rm fil}^2}} $$
(4.10)

which combined with the error signal \(e=r-y=-y\) and the negative gain \(-K\) gives an overall positive feedback. The variables \(\zeta_{\rm fil}\;\hbox{and}\;\omega_{\rm fil}\) denote the damping and the frequency of the filter, tuned to the mode which is to be damped. If the output of this second-order filter is denoted by \({\bf v}_{\rm fil}\) and the input from the sensors is the displacement \(y,\) then for a MDOF system the controller can be described in terms of the filter equation and the output equation [55, 96, 97]:

$$ \begin{aligned} \ddot{{\bf v}}_{\rm fil}(t)+\xi_{\rm fil}\dot{{\bf v}}_{\rm fil}(t)+ \Uplambda_{\rm fil}{{\bf v}}_{\rm fil}(t) &= {\bf y}(t)\\ {\bf u}(t) &= {\bf Kv}_{\rm fil}(t) \end{aligned} $$
(4.11)

where \(\xi_{\rm fil}\) is a diagonal matrix containing the terms \(2\zeta_{{\rm fil}_{\,i}} \omega_{{\rm fil}_{\,i}}\) on its main diagonal; similarly, \(\Uplambda_{\rm fil}\) is a diagonal matrix containing the squares of the filter frequencies \(\omega^2_{{\rm fil}_{\,i}}\) on its main diagonal, for each individual filter \(i.\) The filter in (4.11) may be augmented by a rectangular matrix \({\bf E}_{\rm fil}\) that allows using more filters than actuators, thereby allowing more modes to be damped than the number of available actuators:

$$ \begin{aligned} \ddot{{\bf v}}_{\rm fil}(t)+\xi_{\rm fil}\dot{{\bf v}}_{\rm fil}(t)+ \Uplambda_{\rm fil}{{\bf v}}_{\rm fil}(t) &={\bf E}_{\rm fil}{\bf y}(t)\\ {\bf u}(t) &= {\bf E}_{\rm fil}^T {\bf Kv}_{\rm fil}(t) \end{aligned} $$
(4.12)

Given co-located sensors with the dynamics given by \({\bf y}={\bf B}^{T}{\bf x}\) and system dynamics described by the equation \({\bf M}\ddot{{\bf q}}+{\bf B}_{{\bf d}}{\dot {\bf q}}+{\bf K}_{{\bf s}} {\bf q}={\bf Bu},\) we may couple the PPF controller with the system to obtain [55]:

$$ \left[\begin{array}{ll} {\bf M} & 0\\ 0 & 1 \end{array}\right] \left[\begin{array}{l} \ddot{{\bf q}} \\ \ddot{{\bf v}}_{\rm fil} \end{array}\right] + \left[\begin{array}{ll} {\bf B}_{{\bf d}} & 0\\ 0 & \xi_{\rm fil} \end{array}\right] \left[\begin{array}{l} \dot{{\bf q}}\\ \dot{{\bf v}}_{\rm fil} \end{array}\right] + \left[\begin{array}{cc} {\bf K}_{{\bf s}} & -{\bf KB}\\ -{\bf KB}^{T} & \Lambda_{\rm fil} \end{array}\right] \left[\begin{array}{l} {{\bf q}}\\ {{\bf v}}_{\rm fil} \end{array}\right] = \left[\begin{array}{l} 0\\ 0 \end{array}\right] $$
(4.13)

A review of the stability properties of direct velocity and acceleration feedback, as well as the stability of PPF assuming co-located sensors and actuators, is given in [96], while the necessary and sufficient condition for the asymptotic stability of (4.11) has been established by Fanson and Caughey in [37] based on Lyapunov’s direct method. Since both the augmented mass matrix and the augmented damping matrix are positive definite, the stability of the closed-loop system in (4.13) will only depend on the positive definiteness of the augmented stiffness matrix [55].
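Since the stability of (4.13) hinges on the positive definiteness of the augmented stiffness matrix, this condition is straightforward to check numerically. The sketch below does so for a hypothetical single mode, single filter case; all numerical values are assumptions for illustration.

```python
import numpy as np

# Hypothetical 1-DOF structure with a single PPF filter, per (4.13)
ks = 100.0                          # structural stiffness K_s
Bm = np.array([[1.0]])              # input matrix B of the mechanical model
Kf = np.array([[30.0]])             # assumed PPF gain K
w_fil = 10.0                        # filter frequency tuned to the mode

# Augmented stiffness matrix of the coupled system (4.13)
K_aug = np.block([[np.array([[ks]]), -Kf @ Bm],
                  [-Kf @ Bm.T,       np.array([[w_fil ** 2]])]])

# Positive definiteness of K_aug implies closed-loop stability
print(np.all(np.linalg.eigvalsh(K_aug) > 0.0))
```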

The control action in classical feedback control is realized through the manipulation of the closed-loop system poles by the feedback gain matrix \({\bf K}.\) If a well-chosen \({\bf K}\) is used, the original lightly damped poles of the open-loop system \({\bf A}\) are transformed into the better damped poles of the closed-loop system \(({\bf A}-{\bf BKC})\) [113]. Unlike in the case of optimization-based approaches such as linear quadratic control or \({\fancyscript{H}}_\infty\) control, here an indirect computation of the feedback matrix \({\bf K}\) is employed. We may use several well-established methods for the computation of \({\bf K},\) such as direct experimentation, strategies based on the pole-zero representation of the system (root-locus, pole-placement) or frequency domain methods (such as the Nyquist method) [113]. Although finding a direct fixed static output feedback seems simple and intuitive enough, this fundamental control engineering problem is relatively challenging in terms of computational complexity [15, 16]. For example, the essential technique of pole-placement is NP-hard even for linear time-invariant systems [40].

A comparison of the use of classical feedback control methods with optimal control for vibration attenuation is discussed by Vasques and Rodrigues in [113]. A delayed position feedback is utilized for the vibration control of a flexible link manipulator in [59], while others employ position feedback for similar systems as well [102]. The counter-phased sound signal is employed to attenuate sound in enclosures by Lee et al. in [77], while a similar counter-phase signal is based on a gain scheduled observer in state-space in [17]. Optical tracking on satellites is ensured by AVC using a transfer function representation in [83]. Seismic activity is attenuated on a model using positive acceleration feedback in [99], while rotor vibrations are damped using PPF in [2]. Other works utilizing position feedback-based vibration control systems are [26, 53, 62, 71, 95, 106, 114].

Velocity or strain rate-based classical feedback controllers are very commonly implemented in vibration control applications as well. Aircraft tail vibrations are damped based on velocity feedback strategies in several works [8, 33, 34]. Other examples of such controllers are presented in [18, 106, 107, 119, 132]. A modified acceleration feedback-based method is applied for a cantilever beam in [84].

An even simpler on-off type controller is used in [130] for the control of rotor vibrations. A variation of this is referred to as bang–bang control. Here the controller switches between two extreme states depending on the position, velocity or the combination of the two. Such a controller has been utilized for example by Tzou et al. in [112] for cantilever vibration control.

Stability in systems controlled through classical feedback methods is guaranteed through the usual stability tests known from classical continuous or discrete control theory. Moreover, Preumont states that stability in such systems can be guaranteed through the perfect physical collocation of sensors and actuators [96].

In addition to active systems, semi-active vibration damping has gained some interest because of its simple electronics and hardware realization [81]. In the case of semi-active systems the obvious advantage is that A/D and D/A converters, voltage or charge amplifiers and microcontrollers are not needed, making product integration simpler and more economical. This method takes advantage of the fact that a circuit—using piezoelectric transducers and other simple electronic components like resistors and capacitors—may be tuned analogously to a vibration absorber. The disadvantage of this method is that one absorber may be tuned to damp only one vibration mode. Semi-active state-switched resistive circuits with simple control logic are the next iteration of this concept. For example, if the displacement and velocity at a specific point satisfy Eq. (4.14), the circuit is switched to the open circuit state; in all other cases, it remains in the closed circuit state. Such a system, including the optimal placement and resistance, is discussed in [81]. The state switching law can be expressed by:

$$ q(x,y,t)\dot{q}(x,y,t)\geq0 $$
(4.14)

where \(q\) is a displacement depending on coordinates \(x, y\) and time \(t,\) and its time derivative \(\dot{q}\) is the corresponding velocity with the same parameters.
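The switching logic of (4.14) can be stated in a few lines of code; the following minimal Python sketch is only an illustration, with the open/closed labels chosen to follow the description above.

```python
def switch_state(q, q_dot):
    """State switching law (4.14): open the circuit while q and q_dot
    share the same sign, keep it closed otherwise. Illustrative sketch."""
    return "open" if q * q_dot >= 0.0 else "closed"

print(switch_state(0.01, 0.2))    # displacement and velocity same sign -> open
print(switch_state(0.01, -0.2))   # opposite signs -> closed
```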

2 Proportional-Integral-Derivative Controllers

Proportional-integral-derivative (PID) controllers are widely used in industrial practice. In the absence of knowledge about the exact underlying process, PID is a good controller choice, since its tuning parameters can be translated into physical changes in the closed-loop system dynamics. However, using advanced modeling and control approaches one may develop strategies offering much more than PID does. Among the disadvantages of the PID control strategy is that even a very carefully tuned controller does not guarantee the best possible control course. This situation can be remedied with the use of optimization-based algorithms such as LQ. Moreover, as with every real control system, the inputs and often the outputs are constrained as well. Such constraints are in practice implemented using saturation limits; this however introduces a level of nonlinearity in the control law. The nonlinearity of the law means that the proofs of stability and optimality no longer apply. Constraint handling, even with guaranteed stability, is successfully solved by the use of model predictive control.

The position or velocity-based classical feedback methods, where the measured signal is simply multiplied with a fixed gain, are in fact not so distant from a PID controller or its variants. The similarity is clearer when the controlled plant is a one degree of freedom vibrating system. PID controllers from the active vibration control viewpoint are reviewed in a very intuitive way by Fuller et al. in [42]. Let us now imagine a one DOF vibrating system where the amplitudes are measured and given as \(q(t).\) We may devise a controller for this system, which calculates the control signal from the position coordinate \(q(t),\) the velocity coordinate \(\dot{q}(t)\) and the acceleration value \(\ddot{q}(t){.}\) If we could measure these values independently, an input to the actuators \(u(t)\) could be calculated from these signals multiplied by three independent gains [42]:

$$ u(t)=g_d q(t) + g_v \dot{q}(t) + g_a \ddot{q}(t) $$
(4.15)

where \(g_d\!, g_v\;\hbox{and}\;g_a\) are the gains for the displacement, velocity and acceleration components. Due to physical limitations and practical and economic considerations, not all signals can be measured. Let us therefore imagine that we can only measure the velocity signal \(\dot{q}(t);\) the acceleration is computed using numerical differentiation, while the displacement is computed using a numerical equivalent of integration. In this case, we could formulate our problem as

$$ u(t)=g_d \int_{0}^{t}{\dot{q}(t)}\, dt + g_v \dot{q}(t) + g_a {{d}\over{dt}} \dot{q}(t) $$
(4.16)

This in fact would be nothing else than a simple continuous PID controller. Generally, the displacement gain \(g_d\) is called the integral gain (\(K_i\)) in control engineering, since it is associated with the integral action. Similarly, the gain \(g_v\) associated with the unchanged signal is known as the proportional gain (\(K_p\)) while \(g_a\) as the derivative gain (\(K_d\)).

We will now review the principles of a generic PID controller designed for a single-input and single-output (SISO) system. Instead of the representation used in (4.15) and (4.16) focusing on vibrating systems, we will use notation known from control engineering. As with other feedback controllers, the first step in a PID algorithm is to calculate an error value \(e(t)\) which is a difference between the desired reference setting \(r(t)\) and the actual measured output \(y(t)\):

$$ e(t)=r(t)-y(t) $$
(4.17)

The sum of the error value itself \(e(t),\) its time integral \(\int_{0}^{t}{e(t)} {dt}\) and its derivative \({{d}\over{dt}}e(t)\) multiplied by individual tuning constants creates the input \(u(t)\) to the controlled plant. This in fact defines a PID controller. Mathematically we can express this as

$$ u(t)=K_p{e(t)} + K_{i}\int_{0}^{t}{e(t)} dt + K_{d}{{d}\over{dt}}e(t) $$
(4.18)

which is the so-called ideal form of a PID regulator. As the name implies, the first term is the proportional term where the error is multiplied by the proportional gain \(K_p.\) The second term in (4.18) is the integral term, which is multiplied by the integral gain \(K_i.\) This is followed by the derivative term, multiplying the error derivative by the derivative constant \(K_d.\) A block algebra scheme of this process is featured in Fig. 4.2.

Fig. 4.2 Block algebra scheme of the inner workings of a PID controller

We may try to imagine the meaning of the three components of a PID controller by relating the error to the practical interpretation of the integration and differentiation operations:

  • The proportional term P is related to the current error.

  • The integral term I is related to the history of errors or the past, since the integral expresses the area under the error curve or in discrete terms the sum of all errors.

  • The derivative term D is related to the future of the error, since a derivative expresses the rate of change or slope of the error curve, creating a kind of prediction about its upcoming trend.

It is not always necessary to use every component of the PID controller. By setting the appropriate tuning constants to zero, we can create an array of controllers missing one or two components of the original PID. In practice, however, only the following combinations are used: P, I, PI, PD. Another common notation expressing a PID controller is its so-called standard form:

$$ u(t)=K_p\left({e(t)} + {{1}\over{T_i}}\int_{0}^{t}{e(t)} dt + T_d{{d}\over{dt}}e(t)\right) $$
(4.19)

where \(K_p\) is the proportional constant and \(T_i\;\hbox{and}\;T_d\) are the integral and derivative time constants. We may also express a PID controller as a Laplace transform, more suited to numerical simulations in software prototyping environments such as Matlab/Simulink:

$$ G(s)=K_p + {{K_i}\over{s}} + K_d{s}={{K_d{s^2} + K_p{s} + K_i}\over{s}} $$
(4.20)

where \(G(s)\) is the continuous transfer function of the PID controller and \(s\) is the Laplace operator. An alternative transfer function of a PID controller is given by [75]:

$$ G(s)=K_p \left( 1 + {{1}\over{T_i s}} +{{T_d s}\over{1 + {{T_d}\over{N}}s}} \right) $$
(4.21)

where the additional term \(1 + {{T_d}\over{N}}s\) in the denominator introduces a low-pass filter on the derivative action. As already noted in (4.16), a PID controller implemented on a vibrating mechanical system can be interpreted as an analogy of velocity feedback, where the position and acceleration measurements are estimated by numerical methods.

Let us now briefly return to the three independent proportional gain formulation of (4.15) and investigate how a closed-loop vibrating system will change if we implement such a simple controller. Remember that for this example we will assume that all three values of displacement, velocity and acceleration can be directly measured and the controller is a sum of these three proportional values. The vibrating mechanical system shall be represented by a one degree of freedom system in (2.108) for which the transfer function is defined as:

$$ H(s)={{Q(s)}\over{F_e(s)}}={{1}\over{m s^2 + b s +k}} $$
(4.22)

where \(Q(s)\) is the vibration amplitude and \(F_e\) is the external disturbance in the Laplace domain. Here \(H(s)\) represents the dynamics of the vibrating system in open loop, that is, without a controller. Furthermore, let us perform a Laplace transform on the simple proportional control law given by (4.15) to get:

$$ U(s)=g_d Q(s) + g_v Q(s) s + g_a Q(s) s^2 $$
(4.23)

and finally obtain the transfer function of the control law [42]:

$$ G(s)={{U(s)}\over{Q(s)}}=g_d + g_v s + g_a s^2 $$
(4.24)

where \(Q(s)\) is the measured position and \(U(s)\) is the controller input in the Laplace domain. To calculate the closed-loop response of this system, we must consider the direct path from the disturbance to the displacement (\(H(s)\)) and divide it by the indirect path (\(1+H(s)G(s)\)) which contains the controller as well. After substituting for \(H(s)\;\hbox{and}\;G(s)\) we will get [42]:

$$ F(s)={{Q(s)}\over{F_e(s)}}={{H(s)}\over{1+H(s)G(s)}}={{1}\over{(m + g_a )s^2 + (b + g_v) s +(k + g_d)}} $$
(4.25)

which is a transfer function describing the new, controlled relationship between the excitation \(F_e(s)\) and the vibration \(Q(s).\) One may easily see that there is a direct and physically interpretable connection between the individual gains \(g_d,\,g_v\) and \(g_a\) and the modified stiffness, damping and mass properties of the closed-loop system:

$$ F(s)={{Q(s)}\over{F_e(s)}}={{1}\over{m{'} s^2 + b{'} s +k{'}}} $$
(4.26)

where \(m{'}, b{'}\;\hbox{and}\;k{'}\) are the modified closed-loop mass, damping and stiffness values.
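To see the effect of the three gains numerically, the sketch below evaluates the modified closed-loop parameters of (4.25)-(4.26) and the resulting natural frequency and damping ratio; the plant parameters and gains are assumed example values.

```python
import numpy as np

# Hypothetical 1-DOF plant and the three-gain controller of (4.15)
m, b, k = 1.0, 0.2, 100.0
g_d, g_v, g_a = 50.0, 5.0, 0.5     # assumed displacement/velocity/accel gains

# Modified closed-loop parameters per (4.25)-(4.26)
m_cl, b_cl, k_cl = m + g_a, b + g_v, k + g_d

# Natural frequency and damping ratio before and after feedback
for mm, bb, kk, tag in [(m, b, k, "open"), (m_cl, b_cl, k_cl, "closed")]:
    wn = np.sqrt(kk / mm)                    # natural frequency [rad/s]
    zeta = bb / (2.0 * np.sqrt(kk * mm))     # damping ratio
    print(f"{tag}-loop: wn = {wn:.2f} rad/s, zeta = {zeta:.3f}")
```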

The above discussion is valid for systems with no delays. Unfortunately, delays are always present in control systems and are caused by imperfect sensor or actuator dynamics. The digital sampling process itself may introduce delays into the closed-loop system as well. These delays may cause the damping properties to change dramatically if the excitation frequency is much higher than the resonance frequency of the system. We can model the dynamics of a controller similarly to (4.24), taking delays into account by [42]:

$$ G(s)={{U(s)}\over{Q(s)}}=e^{-\tau_d s}(g_d + g_v s + g_a s^2) $$
(4.27)

where \(\tau_d\) is the delay and \(e^{-\tau_d s}\) models this delay in the Laplace domain. Let us assume that the delay is small; then the delay term in the frequency response can be approximated by:

$$ e^{-j \omega \tau_d} \approx 1-j \omega \tau_d $$
(4.28)

which is valid for \(\omega \tau_d \ll 1.\) Now the closed-loop frequency response of this system can be expressed similarly to (4.26) by equivalent mass, damping and stiffness terms:

$$ F(j\omega)={{Q(j\omega)}\over{F_e(j\omega)}}={{1}\over{-\omega^2 m^{\prime\prime} + j\omega b^{\prime\prime} +k^{\prime\prime}}} $$
(4.29)

where the new equivalent effective mass \(m^{\prime\prime},\) effective damping \(b^{\prime\prime}\) and effective stiffness \(k^{\prime\prime}\) terms can be expressed by [42]:

$$ \begin{aligned} m^{\prime\prime} &= m + g_a-\tau_d g_v \end{aligned} $$
(4.30)
$$ \begin{aligned} b^{\prime\prime} &= b + g_v-\tau_d g_d + \omega^2 \tau_d g_a \end{aligned} $$
(4.31)
$$ \begin{aligned} k^{\prime\prime} = k + g_d \end{aligned} $$
(4.32)

If we compare \(k^{\prime\prime}\;\hbox{and}\;k{'}\) we can see that the delay has no impact on the effective stiffness. For lightly damped systems the term \(\tau_d g_v\) is small when compared to the mass \(m,\) therefore its impact on the effective mass is minimal. The effective damping is however greatly influenced by both the delay \(\tau_d\) and the frequency \(\omega.\) Let us assume that we want to change the effective mass and stiffness to twice their original magnitude using displacement and acceleration feedback [42]. For a lightly damped system, the term \(\tau_d g_d\) is comparable to \(b\) if the delay is small compared to the period of the natural resonant frequency of the system. On the other hand, for frequencies \(\omega\) over the damped natural frequency \(\omega_d\) the term \(\omega^2 \tau_d g_a\) becomes comparable to \(b.\) With a displacement or acceleration-based feedback, even a small delay may dramatically alter the effective damping or even render the system unstable. That is why in classical feedback control (see Sect. 4.1) velocity-based feedback is preferred. Velocity feedback will not alter the effective mass, stiffness or damping properties of the system significantly if unmodeled delay is introduced into the closed-loop system.
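The influence of a small loop delay on the effective parameters (4.30)-(4.32) can be tabulated directly; in the sketch below the system parameters, gains and the 2 ms delay are assumed example values, and the growth of the effective damping with \(\omega^2\) is clearly visible.

```python
import numpy as np

# Effective parameters under a small loop delay, per (4.30)-(4.32)
m, b, k = 1.0, 0.02, 100.0          # lightly damped example system
g_d, g_v, g_a = 50.0, 1.0, 0.5      # assumed feedback gains
tau_d = 2e-3                        # assumed 2 ms loop delay

for w in (5.0, 10.0, 50.0):         # below, near and above resonance [rad/s]
    m_eff = m + g_a - tau_d * g_v
    b_eff = b + g_v - tau_d * g_d + w ** 2 * tau_d * g_a
    k_eff = k + g_d
    print(f"w = {w:5.1f}: m'' = {m_eff:.4f}, b'' = {b_eff:.3f}, k'' = {k_eff:.1f}")
```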

Although it is possible to create a purely continuous-time PID controller, it is more common to implement it in a digital control system. For this it is necessary to replace the integral term with its discrete-time equivalent, a summation:

$$ \int_{0}^{t}{e(t)} {dt} \approx \sum\limits_{i=1}^{k-1} e(i) T_s $$
(4.33)

It is also necessary to compute differences numerically, instead of symbolic differentiation:

$$ {{de(t)}\over{dt}} \approx {{e(k)-e(k-1)}\over{T_s}} $$
(4.34)

for a sampling time \(T_s.\) The resulting controller will be suitable for discrete-time application. A discrete PID controller is sometimes referred to as a PSD controller, exchanging the integral term for summation. A so-called velocity form of a discrete PID (PSD) controller can be expressed as:

$$ \begin{aligned} u_k =\,&u_{(k-1)}\\ &+K_p\left[\left(1+{{T_s}\over{T_i}}+{{T_d}\over{T_s}}\right) e(k)+\left(-1-{{2T_d}\over{T_s}}\right) e(k-1)+{{T_d}\over{T_s}}e(k-2)\right]\\[5pt] \end{aligned} $$
(4.35)

or alternatively we may write [12]:

$$ u_k = K_p\left[e(k)+{{T_s}\over{T_i}}\left({{e(0)+e(k)}\over{2}} + \sum_{i=1}^k e(i) \right) +{{T_d}\over{T_s}}\left(e(k)-e(k-1)\right) \right] $$
(4.36)

where \(T_s\) is the discrete sampling period. The discrete-time PID controller may be expressed after Z-transformation in the Z-domain by [50]:

$$ G(z)={{U(z)}\over{E(z)}}=K_p + K_i{{z}\over{z-1}} + K_d{{z-1}\over{z}} $$
(4.37)
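A minimal discrete-time implementation following (4.17)-(4.18) with the numerical substitutions (4.33)-(4.34) might look as follows; the tuning constants and sampling time are arbitrary assumptions.

```python
class DiscretePID:
    """Minimal discrete PID (PSD) sketch per (4.18), using the rectangular
    summation (4.33) and the backward difference (4.34)."""

    def __init__(self, Kp, Ki, Kd, Ts):
        self.Kp, self.Ki, self.Kd, self.Ts = Kp, Ki, Kd, Ts
        self.integral = 0.0
        self.e_prev = 0.0

    def step(self, r, y):
        e = r - y                                   # error per (4.17)
        self.integral += e * self.Ts                # summation, (4.33)
        derivative = (e - self.e_prev) / self.Ts    # difference, (4.34)
        self.e_prev = e
        return self.Kp * e + self.Ki * self.integral + self.Kd * derivative

# One regulation step toward the zero-vibration reference r = 0
pid = DiscretePID(Kp=2.0, Ki=0.5, Kd=0.1, Ts=0.01)
print(pid.step(r=0.0, y=0.02))
```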

There are different methods to tune a PID controller. One of the most widely used is a simple iterative trial and error process. Other methods include Ziegler-Nichols, Cohen-Coon, iterative response shaping and others. Note that, as has been previously implied, the use of a PID controller neither guarantees the stability of the control loop nor is it optimal in any sense. The available literature on PID controllers is extensive, therefore we will not discuss the control engineering aspects and details of this method; the reader should refer to the relevant publications on the topic.

PID controllers are utilized by Fung et al. in [43] to control the vibrations of a flexible beam actuated through an electromagnet. Semi-active suspensions can also be controlled via PID [35]. The use of the PID strategy for earthquake-induced vibration control in civil engineering structures is suggested by Carotti and Lio and by Guclu in [21, 47]. Yet another possible application of PID in vibration control is the AVC of flexible link mechanisms as described in [62] and other similar vibration control systems [6, 60, 111].

3 Linear Quadratic Control

Linear quadratic control belongs to the broader family of algorithms based on optimal control. In optimal control, a cost function indicating a performance index is chosen which is then minimized to obtain an optimal input \(u(k)\) [55].

Let us consider a continuous, linear time-invariant state-space system as defined by (4.1). The cost function in the continuous linear quadratic optimal control problem can be chosen to be quadratically dependent on the control input and the state or output response:

$$ J={{1}\over{2}} \int_{t_0}^{t_f} \big( {\bf x}^{T}(t){\bf Qx}(t) + {\bf u}^{T}(t){\bf Ru}(t) \big) {\rm d}t+{{1}\over{2}} {\bf x}^{T}(t_f){\bf P}_f{\bf x}(t_f) $$
(4.38)

where \({\bf Q}\) is a state weighting matrix, \({\bf R}\) is an input weighting matrix and \({\bf P}_f\) is a terminal weighting matrix. All these weighting factors or penalty matrices can be chosen by the control engineer to fine-tune the behavior of the controller, according to the particular needs of the plant.

A linear quadratic (LQ) regulator (LQR) is a special case of the generic linear quadratic control problem. Contrary to the general case described above, the weighting matrices in the LQR problem are constant. Moreover, the control horizon \(t_f\) is assumed to approach infinity. The matrix \({\bf Q}\) is positive semidefinite, while matrix \({\bf R}\) is positive definite. The generic LQ optimal control problem is expressed as the minimization of the following cost function [42, 97]:

$$ J={{1}\over{2}} \int_{t_0}^{\infty} \big( {\bf x}^{T}(t){\bf Qx}(t) + {\bf u}^{T}(t){\bf Ru}(t) \big) {\rm d}t $$
(4.39)

We may interpret the above formulation as an attempt to minimize the overall control energy measured in a quadratic form. In fact, the LQR controller is an automated way to find an optimal fixed feedback matrix. The final control law then assumes the form of a constant state feedback gain matrix [10, 97]:

$$ {\bf u}(t) = -{\bf Kx}(t) $$
(4.40)

rendering the continuous-time state-space representation in (4.1) to:

$$ \dot{{\bf x}}(t)=({\bf A}-{\bf BK}){\bf x}(t) $$
(4.41)

which is the closed-loop state equation of the continuous system with an LQ fixed feedback law.

The matrix gain \({\bf K}\) can be expressed as [10]:

$$ {\bf K}={\bf R}^{-1}{\bf B}^{T}{\bf P} $$
(4.42)

where \({\bf P}\) is the solution of the differential Riccati equation given as [42, 118]:

$$ \dot{{\bf P}} = -{\bf PA}-{\bf A}^{{T}}{\bf P}+{\bf PBR}^{-1} {\bf B}^{T}{\bf P}-{\bf Q} $$
(4.43)

which for the infinite horizon LQR problem is replaced by the so-called algebraic Riccati equation (ARE) defined as

$$ {\bf 0} = -{\bf PA}-{\bf A}^{T}{\bf P}+{\bf PB} {\bf R}^{-1}{\bf B}^{T}{\bf P}-{\bf Q} $$
(4.44)

For a discrete time-invariant state-space system, we may define the LQR controller as the fixed matrix feedback gain \({\bf K},\) which minimizes the following infinite horizon cost function [50]:

$$ J = \sum\limits_{k=0}^{\infty} \left( {\bf x}_k^T {\bf Qx}_k + {\bf u}_k^T {\bf Ru}_k \right) $$
(4.45)

The output voltage at the actuators is then:

$$ {\bf u}_k = -{\bf Kx}_k $$
(4.46)

and the discrete linear time-invariant state-space system will be rendered to

$$ {\bf x}_{k+1}=({\bf A}-{\bf BK}){\bf x}_k=\varPhi {\bf x}_k $$
(4.47)

where \(\varPhi\) expresses the state dynamics of the closed-loop system controlled through a fixed LQ gain. The LQR feedback gain may be calculated from

$$ {\bf K} = ({\bf R} + {\bf B}^T {\bf PB})^{-1} {\bf B}^T {\bf PA} $$
(4.48)

where \({\bf P}\) is the solution of the discrete-time algebraic Riccati equation (DARE) defined by

$$ {\bf P} = {\bf Q} + {\bf A}^T \left( {\bf P}-{\bf PB} \left( {\bf R} + {\bf B}^T {\bf PB} \right)^{-1} {\bf B}^T {\bf P} \right) {\bf A} $$
(4.49)
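Numerically, the DARE (4.49) and the gain (4.48) can be obtained with standard library routines; the sketch below uses SciPy on a crudely discretized version of the hypothetical one degree of freedom model, with assumed weighting matrices.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical 1-DOF vibrating system, crudely discretized (forward Euler)
Ts = 0.01
A = np.array([[1.0, Ts], [-100.0 * Ts, 1.0 - 0.2 * Ts]])
B = np.array([[0.0], [Ts]])

Q = np.diag([100.0, 1.0])          # assumed state penalty (position weighted)
R = np.array([[0.1]])              # assumed input penalty

# Solve the DARE (4.49) and form the feedback gain (4.48)
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed-loop dynamics Phi = A - BK per (4.47); spectral radius < 1 -> stable
Phi = A - B @ K
print(K, np.max(np.abs(np.linalg.eigvals(Phi))))
```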

LQ controllers are extensively used both in general industrial applications and in vibration control. To list some of the applications, the LQ strategy has been suggested for the active vibration control of buildings during an earthquake [85, 97], for the semi-active control of vehicle suspensions and mounts [24, 39, 60], in a hybrid feedforward-feedback setup for active noise control in [66], in optical drives [22] and in numerous other academic studies aimed at vibration control [1, 43, 51, 52, 59, 67, 88].

4 \({\fancyscript{H}}_2\;\hbox{and}\;{\fancyscript{H}}_\infty\) Control

Just like the previously introduced linear quadratic control scheme, H-infinity controllers, commonly denoted in the literature as \({\fancyscript{H}}_\infty\) (\(H_\infty\)), are a subclass of optimization-based control methods too. \({\fancyscript{H}}_\infty\) control methods are often utilized with the aim to create a robust and stabilizing control system [72]. Similarly to LQ controllers, one of the biggest disadvantages of \({\fancyscript{H}}_\infty\) controllers is their general inability to handle constraints, such as saturation limits or naturally occurring process constraints. The advantage of \({\fancyscript{H}}_\infty\) controllers is the possibility to control multi-variable systems and to use robust control formulations.

In essence, the \({\fancyscript{H}}_\infty\)-optimization of control systems is based on the minimization of the peak value of closed-loop frequency functions [72]. Let us consider a simple problem involving a SISO plant with a disturbance, which is illustrated in Fig. 4.3. Here we have a plant \({\fancyscript{P}}(s)\) and a controller \({\fancyscript{H}}_\infty (s).\) The output of the system is denoted by \(y(s),\) while the system is also subjected to an outside disturbance \(v(s).\) The reference value is denoted by \(w(s);\) let us keep it at zero for now. We can denote the Laplace transform of the plant output as:

$$ y(s)=v(s)-{\fancyscript{P}}(s) {\fancyscript{H}}_\infty (s) y(s) $$
(4.50)

from this it follows that

$$ y(s)={\fancyscript{S}}(s) v(s) $$
(4.51)

where \({\fancyscript{S}}(s)\) is the so-called sensitivity function according to

$$ {\fancyscript{S}}(s)={{1}\over{1+{\fancyscript{P}}(s) {\fancyscript{H}}_\infty(s)}} $$
(4.52)

or in matrix terms

$$ {\fancyscript{S}}(s)=\left[{\bf I}+{\fancyscript{P}}(s) {\fancyscript{H}}_\infty (s)\right]^{-1} $$
(4.53)

and we may also define the complementary sensitivity function

$$ {\fancyscript{T}}(s)=[{\bf I}+{\fancyscript{P}}(s){\fancyscript{H}}_\infty(s)]^{-1}{\fancyscript{P}} (s){\fancyscript{H}}_\infty (s) $$
(4.54)

The sensitivity function characterizes the sensitivity of the system output to disturbances; in the ideal case its value is \({\fancyscript{S}}(s)=0.\) One may regard the sensitivity function as a performance indicator, similar to the cost function used in LQ control or in MPC. A low sensitivity function value implies a low tracking error, thus ultimately increasing the controller performance.

Fig. 4.3 Formulating the sensitivity function in \({\fancyscript{H}}_\infty\) control: general SISO control loop with a reference and outside disturbance

Alternatively, for no outside disturbance \(v(s)=0\) but for a given tracking reference \(w(s) \neq 0,\) we may write [31]:

$$ \begin{aligned}[b] y(s) &={\fancyscript{P}}(s) {\fancyscript{H}}_\infty(s) [w(s)-y(s)]\\ &=[{\bf I}+{\fancyscript{P}}(s){\fancyscript{H}}_\infty(s)]^{-1}{\fancyscript{P}} (s){\fancyscript{H}}_\infty(s) w(s)\\ &={\fancyscript{T}}(s) w(s) \end{aligned} $$
(4.55)

Similarly to the output of the plant, for the control error we may define

$$ \begin{aligned}[b] e(s) &=[w(s)-y(s)]\\ &=[{\bf I}-{\fancyscript{T}}(s)] w(s)\\ &={\fancyscript{S}}(s) w(s)\\ \end{aligned} $$
(4.56)

Our aim is to make the closed-loop feedback system stable and find a controller \({\fancyscript{H}}_\infty (s)\) which minimizes the peak value of the sensitivity function in the frequency domain, \({\fancyscript{S}}(j \omega)\) [124, 125]. The peak value of the sensitivity function can be defined as the infinity norm of the function \({\fancyscript{S}}(j \omega)\) given by:

$$ ||{\fancyscript{S}}||_\infty=\max_\omega |{\fancyscript{S}}(j \omega)| $$
(4.57)

The maximal or peak value of the sensitivity function in the frequency domain as defined by (4.57) is graphically illustrated in Fig. 4.4.

Fig. 4.4 Peak value \(||{\fancyscript{S}}||_\infty\) of the sensitivity function \({\fancyscript{S}}(j \omega)\)

Although the maximum of the absolute value of the sensitivity function is a very intuitive way to define its peak, it is not always mathematically feasible. This is because for some functions the peak value is not attained at any finite frequency \(\omega.\) Instead, one may replace the maximum with the supremum, its least upper bound, so (4.57) changes to:

$$ ||{\fancyscript{S}}||_\infty=\sup_\omega |{\fancyscript{S}}(j \omega)| $$
(4.58)
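In practice the supremum in (4.58) is often approximated by a dense frequency sweep. The sketch below does this for an assumed example loop: a lightly damped second-order plant with a constant-gain controller, for which the sensitivity function was formed by hand beforehand.

```python
import numpy as np
from scipy import signal

# Assumed example: P(s) = 1/(s^2 + 0.2s + 100) with a constant controller
# H(s) = 50, giving S(s) = (s^2 + 0.2s + 100) / (s^2 + 0.2s + 150)
S = signal.TransferFunction([1.0, 0.2, 100.0], [1.0, 0.2, 150.0])

# Approximate ||S||_inf per (4.58) with a dense logarithmic frequency grid
w = np.logspace(-1, 3, 20000)
_, resp = signal.freqresp(S, w)
print(np.max(np.abs(resp)))        # numerical estimate of the peak of |S(jw)|
```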

We would like to minimize the peak of the sensitivity function, since if this peak is small, then so is the magnitude of \({\fancyscript{S}}\) for all frequencies. This implies that the disturbances are attenuated uniformly well over the whole frequency range [72]. The minimization of \(||{\fancyscript{S}}||_\infty\) is a worst-case optimization procedure, since it minimizes the effect of the worst disturbance on the output. For a physical vibrating system this may be understood as minimizing the effect of a harmonic disturbance on the closed-loop controlled system at its resonance—that is, where the sensitivity function \(|{\fancyscript{S}}|\) has its peak value.

In order to provide a mathematically more detailed interpretation of \({\fancyscript{H}}_\infty\) controllers in general, let us define what the \({\fancyscript{H}}_\infty\) norm means: if \({\fancyscript{H}}_\infty\) is a space of matrix-valued functions bounded in the right half of the complex plane, the value of the \({\fancyscript{H}}_\infty\) norm is the maximal singular value of the function over that space [97]. In other words, the \({\fancyscript{H}}_\infty\) norm is the maximal gain in any direction and at any frequency for a MIMO system, or as it has been previously pointed out, the maximal magnitude of the frequency response for a SISO system.

Let us now define a controlled plant \({\fancyscript{P}}(s)\) that has two inputs: \({\bf w}(s)\) is the reference signal containing disturbances, while \({\bf u}(s)\) is the controlled input to the plant. The plant has two outputs as well, namely \({\bf e}(s),\) which is the error signal we aim to minimize, and \({\bf y}(s),\) which is the measurable plant output. Unlike in the previous case, our system is not SISO anymore but MIMO; therefore the variables \({\bf u}(s), {\bf e}(s), {\bf y}(s)\;\hbox{and}\;{\bf w}(s)\) are vectors, while \({\fancyscript{H}}_\infty\;\hbox{and}\;{\fancyscript{P}}\) are matrices of transfer functions.

For this system we desire to find a matrix \({\fancyscript{H}}_\infty\) (or essentially a feedback matrix \({\bf K}\)), which will generate the optimal input \({\bf u}(s)\) based on the measured signal \({\bf y}(s)\)—see Fig. 4.5 for illustration. This augmented system can be described by [46, 58, 104]:

$$\left[\begin{array}{ll} {\bf e}(s)\\ {\bf y}(s) \end{array}\right] ={\fancyscript{P}}(s) \left[\begin{array}{l} {\bf w}(s)\\ {\bf u}(s) \end{array}\right] = \left[\begin{array}{ll} {\fancyscript{P}}_{11}(s) & {\fancyscript{P}}_{12}(s)\\ {\fancyscript{P}}_{21}(s) & {\fancyscript{P}}_{22}(s) \end{array}\right] \left[\begin{array}{l} {\bf w}(s)\\ {\bf u}(s) \end{array}\right] $$
(4.59)
$$ {\bf u}(s) = {\fancyscript{H}}_\infty (s) {\bf y}(s) $$
(4.60)

We may express the dependence of error \({\bf e}(s)\) on the reference \({\bf w}(s)\) by a term very similar to (4.56) using the sensitivity function to express the error based on the reference. For this, we substitute (4.60) into (4.59) and separate the matrix expression into two equations to get:

$$ {\bf e}(s)={\fancyscript{P}}_{11}(s){\bf w}(s) + {\fancyscript{P}}_{12}(s){\fancyscript{H}}_\infty(s) {\bf y}(s) $$
(4.61)
$$ {\bf y}(s)={\fancyscript{P}}_{21}(s){\bf w}(s) + {\fancyscript{P}}_{22}(s){\fancyscript{H}}_\infty(s) {\bf y}(s) $$
(4.62)

Expressing \({\bf y}(s)\) from the second equation yields

$$ {\bf y}(s)=[{\bf I} - {\fancyscript{P}}_{22}(s){\fancyscript{H}}_\infty(s)]^{-1} {\fancyscript{P}}_{21}(s) {\bf w}(s) $$
(4.63)

which after substituting into the first equation yields

$$ {\bf e}(s) = \left( {\fancyscript{P}}_{11}(s) + {\fancyscript{P}}_{12}(s) {\fancyscript{H}}_\infty(s) [{\bf I}-{\fancyscript{P}}_{22}(s) {\fancyscript{H}}_\infty(s)]^{-1}{\fancyscript{P}}_{21}(s) \right) {\bf w}(s) $$
(4.64)
$$ =F_\ell({\fancyscript{P}},{\fancyscript{H}}_\infty) {\bf w}(s) $$
(4.65)

where the operator \(F_\ell\) is known as the lower linear fractional transformation and it expresses the sensitivity function.

Fig. 4.5 Controlled plant and \({\fancyscript{H}}_\infty\) controller

The objective of \({\fancyscript{H}}_\infty\) control for the system defined above is to find such a feedback matrix \({\fancyscript{H}}_\infty(s)\) which minimizes the lower linear fractional transformation, the \(F_\ell\) part of (4.64), according to the \({\fancyscript{H}}_\infty\) norm. The same definition also applies to \({\fancyscript{H}_2}\) control, with the minimization carried out in the \({\fancyscript{H}}_2\) norm instead. The infinity norm for a general MIMO system can be expressed as the peak value of the largest singular value taken as a function of frequency [46, 97]:

$$ ||F_\ell({\fancyscript{P}},{\fancyscript{H}}_\infty)||_\infty = \sup_\omega \bar{\sigma}_s(F_\ell({\fancyscript{P}}, {\fancyscript{H}}_\infty)(j\omega)) $$
(4.66)

where \(\bar{\sigma}_s\) is the maximal singular value of the matrix \(F_\ell({\fancyscript{P}},{\fancyscript{H}}_\infty)(j\omega)\).

\({\fancyscript{H}}_\infty\) control is utilized in [14] to control the vibration of rotor blades in a helicopter individually. The time-varying but linear nature of forward helicopter flight is handled through gain scheduling of the \({\fancyscript{H}}_\infty\) control laws. Other applications of \({\fancyscript{H}}_\infty\) based vibration control systems are for example active seats for the automotive industry or spacecraft [109], active magnetic suspensions for rotors [58], active noise control [20] and active seismic vibration control in buildings [63, 97].

5 Soft Computing Approaches

The use of control approaches based on genetic algorithms, artificial neural networks and fuzzy control is fairly atypical for active vibration control. The reason for this is that soft computing control systems are rather suited for plants and phenomena which are difficult if not impossible to model using exact mathematical or numerical approaches. However, the dynamic behavior of vibrating mechanical systems can be easily characterized using ordinary or partial differential equations. This process then results in transfer function or state-space based models. With the aid of these models, exact hard control rules can be formulated.

Direct vibration control through genetic algorithms, neural networks or fuzzy control is rare. These somewhat “exotic” methods may however be utilized to tune more traditional controllers or to define the physical size or distribution of sensors and actuators. Other vibration control related applications in which the above-mentioned control methods are useful are the ones with large actuator hysteresis or other significant nonlinearities, such as magnetorheological dampers. The following sections will briefly characterize these soft computing methods.

5.1 Neural Networks

Artificial neural networks (ANN) mimic the behavior of biological neural networks found in nature by using programming constructs that resemble neurons and their interconnections. Just as in nature, the structure of an ANN changes and adapts according to the inflowing information, emulating the learning process.

Biological neurons are replaced by nodes in an artificial neural network and they are represented by the shaded circles in Fig. 4.6. The simplest ANN has three layers, as shown in Fig. 4.6, consisting of an input, a hidden and an output layer. The input nodes or neurons send data via synapses to the second, hidden layer, which in turn sends data to the output layer via other synapses. The synapses are denoted as arrows in the figure and in practice they store weighting parameters used to manipulate the transferred data.

Fig. 4.6 Schematic illustration of a simple feedforward artificial neural network. The nodes denote programming modules mimicking neurons in a biological neural network. Synapses or connections between the individual neurons are denoted by arrows

For practical reasons, real life implementations of artificial neural networks rely on statistical and signal processing ideas more heavily than exact biological principles. However, what ANN and a real biological neural network have in common is their adaptive, distributed, nonlinear and parallel processing nature.

Let us represent the neural network with a function \(f(x),\) which takes \(x\) as its input. The function \(f(x)\) is a composition of other functions \(g_i,\) which in turn may be compositions of yet other sets of functions. This functional dependency is represented in Fig. 4.7. The dependency of functions can be interpreted in a so-called functional view, which is predominantly associated with optimization tasks. If we assume the set of functions \(g_i\) to be a vector \({\bf g}=\left[g_1\;g_2\;\ldots\;g_i\;\ldots\;g_n\!\right],\) then from the functional view the input \(x\) is transformed into a three-dimensional vector \({\bf h},\) which is in turn transformed into the two-dimensional vector \({\bf g}\) and finally to \(f.\) Another equivalent view of the artificial neural network is the so-called probabilistic view, which is commonly used in the context of graphical models.

The neural networks represented in Figs. 4.6 and 4.7 are of the feedforward type, without cycles. It is possible to include cycles in an ANN; in that case we are talking about a recurrent network.
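The functional view lends itself to a very compact numerical sketch: below, an untrained 1-3-2-1 feedforward network is evaluated as the composition \(f(g(h(x)))\); the weights are random assumptions, since no training is implied here.

```python
import numpy as np

def layer(x, W, b):
    """One layer: weighted synapses plus a nonlinear activation."""
    return np.tanh(W @ x + b)

rng = np.random.default_rng(0)
x = np.array([0.5])                                      # scalar input

# Random (untrained) weights for a 1-3-2-1 network, matching the
# functional view f(x) = f(g(h(x))) described above
W_h, b_h = rng.normal(size=(3, 1)), rng.normal(size=3)   # hidden layer h
W_g, b_g = rng.normal(size=(2, 3)), rng.normal(size=2)   # layer g
W_f, b_f = rng.normal(size=(1, 2)), rng.normal(size=1)   # output layer f

print(layer(layer(layer(x, W_h, b_h), W_g, b_g), W_f, b_f))
```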

The use of neural networks in magnetorheological (MR) damper-based semi-active control systems is justified by the large hysteretic and nonlinear behavior of MR dampers. An MR damper actuated semi-active vehicle suspension that is indirectly controlled by artificial neural networks has been proposed by Zapaterio et al. in [126]. A neural network is used as an inverse model of the MR damper: the desired force acts as an input, which is used to calculate the voltage needed to generate that force. The voltage level is then input into a controller acquired via traditional methods [126]. A neural network approach is used for the control of semi-active vehicle suspensions by Eski et al. in [35] as well. Eski et al. combine a PID controller with a novel ANN-based dynamics predictor, and the control system is contrasted with a traditional PID controller in simulation.

Chen et al. combine different artificial neural network methods to attenuate acoustic signals with a voice-coil actuator in [23]. The ANN methods provide means to tune the parameters of traditional transfer function-based controllers adaptively. Adaptive vibration control is implemented similarly using ANN in [68], where the authors suggest that an ANN-based adaptation method can be computationally less intensive than traditional adaptation methods. ANN has been used in [133] as well, to create models for a predictive controller-based flexible link manipulator vibration suppression system.

Fig. 4.7 Schematic illustration of the dependency of an artificial neural network. Input data \(x\) is mapped through a series of functions \(h_i\;\hbox{and}\;g_i\) to the final neural network \(f(x)\)

Neural networks are used to suppress vibrations in a permanent magnet linear motor in [123], in a rotor system in [3] and in other vibration control applications [7, 27, 64, 121, 122, 127, 129].

5.2 Genetic Algorithms

Similar to the artificial neural networks presented previously, genetic algorithms (GA) mimic nature’s behavior. Instead of emulating the working principles of a nervous system, genetic algorithms copy the evolutionary selection process. In fact, genetic algorithms belong to the larger class of evolutionary algorithms and are often utilized in optimization and search problems.

The candidate solutions of a GA problem are represented by individuals, and these individuals carry encoded genetic information represented by chromosomes. The population of such chromosome-carrying individuals is the genetic algorithm itself, which is gradually evolving toward an optimal solution through several generations. Naturally, in GA the genetic information is represented by binary or other types of strings instead of DNA. We may describe the steps of a genetic algorithm in a simplified manner:

  • initialization

  • selection

  • reproduction

  • termination

At the initialization stage, a population of individuals with random genetic information is generated. Typically, a population consists of several hundreds or thousands of individual “creatures”, covering the range of all possible solutions. It is also possible to insert individuals with potentially optimal genetic material, so as to aid the speed and success of the selection process.

Just as in nature, the fitter individual survives. In the next stage of the genetic algorithm, a sub-set of the original population is selected based on fitness to survive and allowed to reproduce. Naturally, the fitness function is a measure of solution quality and is based on the desired type of solution; what counts as better for an individual changes according to the problem type. The selection process also contains a random element, so genetic information from individuals with a smaller fitness level can also enter the next generation. This helps to diversify the population.

The individuals surviving the selection process can reproduce to create the successive generation, again emulating natural selection. The genetic information of the “parents” is combined by genome crossover and a degree of randomness is also introduced by mutation. The process is repeated until a population with the desired size is created, and the process continues with the next iteration of the selection process. With each new generation, a pool of genetic material is created which is different from that of the previous generation.

The genetic algorithm is usually terminated after a pre-set number of generations has evolved, or is terminated based on the fitness of the population.
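The four stages listed above translate into a very small program for a toy problem. The sketch below evolves a population of scalar candidates toward the maximum of an assumed fitness function; the population size, mutation level and generation count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    """Toy fitness function: maximal at x = 3."""
    return -(x - 3.0) ** 2

pop = rng.uniform(-10.0, 10.0, size=50)          # initialization
for generation in range(100):
    # Selection: keep the fitter half of the population
    survivors = pop[np.argsort(fitness(pop))][-25:]
    # Reproduction: crossover (averaging parents) plus random mutation
    pa = rng.choice(survivors, size=25)
    pb = rng.choice(survivors, size=25)
    children = 0.5 * (pa + pb) + rng.normal(0.0, 0.1, size=25)
    pop = np.concatenate([survivors, children])

# Termination: report the best individual of the final generation
print(pop[np.argmax(fitness(pop))])
```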

Genetic algorithms for complex problems require extensive computation time; the evaluation of a complex fitness function for each individual in the population is the main limiting factor of GA. Another drawback is that GA tends to converge toward local optima instead of the global optimum. Certain techniques exist to diversify the population and mitigate this, but convergence to the global optimum cannot be guaranteed.

The schematic representation of a simple genetic algorithm is featured in Fig. 4.8. The shaded circles represent the individuals, while each column of circles is a generation. The genetic information here is the color of the circle, which could of course be represented very easily by a binary string. The fitness function here favors the darkest shade: the darkest individual is the fittest, because it could hide well against a dark background and thus survive to pass on its genes. At the initialization stage (I), a population is generated with random genetic information; after this, selection takes place (S). Selection includes fitness evaluation, where the lighter shades are removed from the population, and some random “deaths” also occur; the unfavorable mutations are selected against. The rest of the population may reproduce (R) and the new generation appears (G). Reproduction happens through the crossing of the genetic information of the parents and possibly random mutations. A favorable mutation is more likely to survive and reproduce. After a satisfactory population fitness or generation count is reached, the algorithm is terminated (T).

It is clear that the nature of GA makes it better suited to supplementary optimization tasks in vibration control, such as the optimal placement of actuators and sensors. It is also possible to use GA as an adaptation feature, augmenting the function of other control systems. The direct utilization of GA in vibration control is not recommended because of the computational burden and the possibility of convergence to local optima. Despite its limitations, GA has been used in active vibration control applications.

Fig. 4.8
figure 8

Simplified schematic representation of the progression of a genetic algorithm. I stands for initialization, S for selection, R for reproduction, G denotes the next generation and T stands for termination

The most popular way to utilize GA in the field of vibration control is to perform a geometric design optimization and thereby passively achieve a better vibration response [65, 89]. A particular application of this principle is the optimal sizing and placement of actuators and sensors for active control [19, 86, 100, 120]. Tuning parameter optimization for traditional control systems can also be carried out with the help of GA [5, 100, 120].

5.3 Fuzzy Control

Fuzzy control allows the creation of intricate nonlinear controllers based on a set of simple heuristic laws. These heuristic control laws may come from the experience of an engineer or from common sense, or may be the result of extensive mathematical simulation and optimization.

Fuzzy controllers are based on fuzzy logic, derived from fuzzy set theory. In contrast to binary logic, where a statement can have either true (1) or false (0) values, in fuzzy logic statements can assume values in between these two extremes [128]. Fuzzy controllers may use so-called linguistic variables to describe the control laws [87]. For example, instead of assigning certain acceleration values to the vibration of a mechanical system, in fuzzy control we can replace these by terms like “in equilibrium”, “medium vibrations” and “heavily vibrating”. Control laws and functions can be associated with these linguistic terms to create a fuzzy controller. Figure 4.9 illustrates a simple set of three rules describing the vibration level of a mechanical structure. Let us take a look at the dashed line, which represents the measure of our current vibration level. The structure is certainly not heavily vibrating and is close to equilibrium: the statement “in equilibrium” is about 60% true, or 0.6. The actual level also reaches a little into the medium range, so the statement that we have medium vibrations is about 30% true, or 0.3. In linguistic terms, we can say that our structure is slightly vibrating.
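
A minimal sketch of how such membership degrees could be computed is given below, using triangular membership functions; the breakpoints are illustrative assumptions and do not reproduce the exact shapes of Fig. 4.9.

```python
def triangle(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Illustrative membership functions for the three linguistic terms
def in_equilibrium(v):   return triangle(v, -0.5, 0.0, 1.0)
def medium_vibration(v): return triangle(v,  0.0, 1.0, 2.0)
def heavy_vibration(v):  return triangle(v,  1.0, 2.0, 3.0)

v = 0.35   # current vibration measure (the dashed line in Fig. 4.9)
print(in_equilibrium(v), medium_vibration(v), heavy_vibration(v))
# roughly 0.65, 0.35 and 0.0: a "slightly vibrating" structure
```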

Fig. 4.9
figure 9

Example fuzzy rules describing the measured vibration levels

Similarly, it is possible to associate the actuator actions with an analogous set of rules and linguistic descriptors [87]. Let us imagine three different linguistic terms for the actuator action: “no action”, “medium action” and “intense action”. Let us now formulate a set of rules based on these vibration levels and actuator actions; for example, in linguistic terms we may logically define (see the sketch after this list):

  • if vibration is in equilibrium then take no action

  • if vibration is medium then take medium action

  • if vibration is heavy then take intense action
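
These three rules can be evaluated and blended into a single actuator command, for instance by a weighted average of singleton output actions (a Sugeno-style defuzzification). In the minimal sketch below, the membership breakpoints and command magnitudes are illustrative assumptions.

```python
def triangle(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# rule base: (membership function breakpoints, singleton actuator command)
RULES = [
    ((-0.5, 0.0, 1.0), 0.0),   # if in equilibrium   -> no action
    (( 0.0, 1.0, 2.0), 0.5),   # if medium vibration -> medium action
    (( 1.0, 2.0, 3.0), 1.0),   # if heavy vibration  -> intense action
]

def fuzzy_controller(v):
    weights = [triangle(v, *p) for p, _ in RULES]   # rule firing strengths
    den = sum(weights)
    num = sum(w * u for w, (_, u) in zip(weights, RULES))
    return num / den if den else 0.0                # weighted-average output

print(fuzzy_controller(0.35))  # small corrective action
print(fuzzy_controller(1.8))   # near-intense action
```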

The shape of the membership functions featured in Fig. 4.9 may be freely altered by the designer, and one may use various logical statements and operators in addition to the if-is-then construct [94]. The example illustrated above is very simple, but it is always possible to add more rules and insert other logical twists and turns into the control law. In contrast to genetic algorithms or neural networks, fuzzy control laws can be interpreted in a way that a human operator or designer can easily understand. For those interested in the theoretical basics of fuzzy sets, fuzzy logic and the design of fuzzy controllers, the book by Michels et al. [87] and other works [36, 94, 128] can be recommended. An interesting connection between multi-parametric programming based MPC (MPMPC) and the control of systems modeled by a set of fuzzy laws is made by Kvasnica et al. in [69, 70], where explicit minimum-time MPC controllers are proposed for Takagi-Sugeno fuzzy systems.

Fuzzy control is better suited to direct vibration control than, for example, artificial neural networks or genetic algorithms. Although it is possible to use fuzzy control to tune the parameters of classical controllers [80], fuzzy control systems may also be used alone for vibration attenuation. Fuzzy control is combined with a (moving) sliding mode controller by Sung et al. in [110], while a fuzzy control-based vehicle suspension is suggested in [108] by Sun and Yang. The performance of a fuzzy controller is contrasted with that of a simple PD controller by Guclu and Yazici in [48] for the active control of earthquake-induced vibrations. Fuzzy control is also suggested for use on spaceborne robotic manipulator arms in [116].

Fuzzy control is often utilized for active and semi-active vehicle suspension systems [82, 110] because of its ability to emulate and control the highly hysteretic and nonlinear behavior of MR dampers. The fuzzy strategy is also employed in civil engineering [48, 91] and in other works [23, 29, 54, 79, 92, 117, 129].

6 Other Approaches

The creativity of the human mind is limitless, and this is also true for designing control strategies which can be used in active vibration control. Minor or major alterations of the algorithms introduced previously are abundant in the academic literature. Furthermore, several works discuss the combination of two methods, for example using soft computing techniques to turn classical methods into more advanced adaptive or robust control systems. Here we briefly list some of the approaches used in AVC that have not been explicitly mentioned before.

Sliding mode control (SMC) applies a state switching strategy to alter the dynamics of the control system. The control law is not a continuous function of time; instead, it is a nonlinear system of alternative structures, switched based on the current state [11]. The main advantage of SMC is its robustness; moreover, if bang-bang control is required, the SMC strategy can even be optimal.
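
A minimal sketch of such a switching law for a single-mass oscillator is given below. The sliding surface \(s = ce + \dot{e}\), the gains and the plant parameters are illustrative assumptions; note the characteristic chattering caused by the discontinuous sign term.

```python
import numpy as np

c, K = 2.0, 5.0          # sliding surface slope and switching gain
m, k, d = 1.0, 4.0, 0.2  # plant: mass, stiffness, damping
dt = 1e-3
x, v = 1.0, 0.0          # initial displacement and velocity

for _ in range(10000):
    s = c * x + v                  # sliding surface (regulating to the origin)
    u = -K * np.sign(s)            # discontinuous switching control law
    a = (u - k * x - d * v) / m    # plant acceleration
    v += a * dt
    x += v * dt

print(x, v)   # state driven to a small neighborhood of the origin
```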

Examples of state-space representation-based control laws other than the ones presented here are dynamic response shaping, eigenvalue placement and minimum energy control [118]. In a different state-space-based approach, Bohn et al. utilize an observer to attenuate engine-induced vibrations in [17]. This observer is used to reconstruct the original disturbance signal, which is then fed back with a negative sign as a control input. Due to the ever-changing speed of the engine, the observer gains are scheduled based on a speed signal.
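
A minimal sketch of the underlying idea, in a much simplified setting, is given below: a slowly varying disturbance is modeled as an extra constant state, estimated by a Luenberger observer and fed back with a negative sign. The plant, the observer gains and the constant-disturbance model are illustrative assumptions and do not reproduce the gain-scheduled scheme of [17].

```python
import numpy as np

dt, m = 1e-3, 1.0
A = np.array([[0.0, 1.0 / m],   # augmented state [v, w]: dv/dt = (u + w)/m
              [0.0, 0.0]])      # disturbance w modeled as constant
B = np.array([1.0 / m, 0.0])
C = np.array([1.0, 0.0])        # only the velocity v is measured
L = np.array([20.0, 50.0])      # observer gains (stable error dynamics)

v, w = 0.0, 2.0                 # true velocity and true (unknown) disturbance
xhat = np.zeros(2)              # observer estimate [v_hat, w_hat]

for _ in range(5000):
    u = -xhat[1]                              # cancel estimated disturbance
    v += (u + w) * dt / m                     # true plant
    xhat = xhat + dt * (A @ xhat + B * u + L * (v - C @ xhat))

print(xhat[1])   # estimated disturbance, approaching w = 2.0
```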

Optimization-based control methods may take different aspects of the vibration engineering task into account, such as the minimization of deflections [98], velocities and accelerations [115], the maximization of resonant frequencies [103], or the minimization of vibration or acoustic energy [41]. Optimization may be used as a tool to find the ideal placement and number of sensors and actuators offline, or in combination with traditional controllers online [19, 44, 90, 103]. Moreover, as previously mentioned, various optimization-based methods can be used offline to tune the parameters of traditional controllers based on a cost function [41, 115]. Examples of direct optimization-based vibration control approaches in addition to the ones presented here can be found in [25, 78, 98] and other works.

Feedback loops are not the only way to control vibrating systems; numerous studies use the feedforward approach to attenuate mechanical disturbances. In fact, feedforward is often preferred over feedback in active noise cancellation systems [42, 105]. If there is no information available about the disturbance acting on the system, a feedback loop must be used. If, however, the type and character of the disturbance is known a priori, feedforward may be a very good choice. Such scenarios include periodic oscillations caused by rotating machines [93], or structures where a sensor may be placed in the transmission path between the disturbance source and the primary point of actuation. In feedforward control, sensors are not used to directly shape the response of the controller; instead, they are employed as a type of adaptive measure to tune the feedforward controller and monitor its performance [42]. One of the common strategies in feedforward control is the so-called filtered-x LMS algorithm, which adaptively tunes an FIR filter. Feedback control based on the \({\fancyscript{H}}_\infty\) method is contrasted with feedforward control in the work of Seba et al. [101] for a car engine vibration attenuation system. Feedforward-based vibration control has been applied to single-link manipulators in [4, 38], while noise attenuation and control applications for windows [57], loudspeakers [131], heating, ventilation and air conditioning (HVAC) systems [30] and other systems [28, 61] are also very common.
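
A minimal sketch of the filtered-x LMS idea is given below: an adaptive FIR filter is driven by a reference signal correlated with the disturbance, and its weights are updated from the residual error. The trivial one-tap secondary-path model, the step size and the filter length are illustrative assumptions.

```python
import numpy as np

N, Lw, mu = 5000, 16, 0.01
n = np.arange(N)
x = np.sin(2 * np.pi * 0.05 * n)              # reference (e.g. engine order)
d = 0.8 * np.sin(2 * np.pi * 0.05 * n + 0.6)  # disturbance at the error sensor
s0 = 1.0                                      # one-tap secondary-path model
w = np.zeros(Lw)                              # adaptive FIR weights
xbuf = np.zeros(Lw)                           # reference signal buffer

for i in range(N):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = x[i]
    y = w @ xbuf               # anti-vibration output of the FIR filter
    e = d[i] - s0 * y          # residual measured at the error sensor
    w += mu * e * (s0 * xbuf)  # LMS update using the filtered reference

print(abs(e))   # residual after adaptation; should be close to zero
```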