
1 Introduction

In this chapter, the focus is on the strong attenuation of multiple sparsely located unknown and time-varying tonal disturbances. It is assumed that the various tonal disturbances are separated in the frequency domain by at least \(10\,\%\) of the disturbance frequency and that the frequencies of these disturbances vary over a wide frequency region.

The problem is to ensure in this context a certain number of performance indices: global attenuation, disturbance attenuation at the frequencies of the disturbances, a tolerated maximum amplification (water bed effect) and a good adaptation transient (see Sect. 12.3). The most difficult problem is to make sure that in all configurations the maximum amplification remains below a specified value. A fundamental problem has to be solved first: one has to be sure that in the known frequency case, for any combination of disturbances, the attenuation and maximum amplification specifications are achieved. The adaptive approach will only try to approach the performance of a linear controller designed for known disturbances. So before discussing the appropriate adaptation schemes, one has to consider the design methods to be used in order to achieve these constraints in the known frequencies case. This will be discussed in Sect. 13.2.

2 The Linear Control Challenge

In this section, the linear control challenge will be presented for the case of rejection of multiple narrow-band disturbances taking also into account the possible presence of low damped complex zeros in the vicinity of the border of the operational zone. Considering that in a linear context all the information is available, the objective is to set up the best achievable performance for the adaptive case.

Assuming that only one tonal vibration has to be cancelled in a frequency region far from the presence of low damped complex zeros and that the models of the plant and of the disturbance are known, the design of a linear regulator is relatively straightforward, using the internal model principle (see Chaps. 7 and 12).

The problem becomes much more difficult if several tonal vibrations (sinusoidal disturbances) have to be attenuated simultaneously, since the water bed effect may become significant without a careful shaping of the sensitivity function when using the internal model principle. Furthermore, if the frequencies of the disturbances can be close to those of some very low damped complex zeros of the plant, the internal model principle should be applied with care even in the case of a single disturbance (see Sect. 12.5).

This section will examine the various aspects of the design of a linear controller in the context of multiple tonal vibrations and the presence of low damped complex zeros. It will review various linear controller strategies.

To be specific, these design aspects will be illustrated in the context of the active vibration control system using an inertial actuator, described in Sect. 2.2, which has already been used for the case of a single tonal disturbance.

In this system, the tonal vibrations are located in the range of frequencies between 50 and 95 Hz. The frequency characteristics of the secondary path are given in Sect. 6.2.

Assume that a tonal vibration (or a narrow-band disturbance) p(t) is introduced into the system affecting the output y(t). The effect of this disturbance is centred at a specific frequency. As mentioned in Sect. 12.2.3, the IMP can be used to asymptotically reject the effects of a narrow-band disturbance at the system’s output if the system has enough gain in this region.

It is important also to take into account the fact that the secondary path (the actuator path) has no gain at very low frequencies and very low gain in high frequencies near \(0.5\,{f_s}\). Therefore, the control system has to be designed such that the gain of the controller be very low (or zero) in these regions (preferably 0 at 0 Hz and \(0.5\,{f_s}\)). Not taking into account these constraints can lead to an undesirable stress on the actuator.

In order to assess how good the controller is, it is necessary to define some control objectives that have to be fulfilled. For the remainder of this section, the narrow-band disturbance is assumed to be known and composed of three sinusoidal signals at 55, 70 and 85 Hz. The control objective is to attenuate each component of the disturbance by a minimum of 40 dB, while limiting the maximum amplification to 9 dB within the frequency region of operation. Furthermore, it will be required that low values of the modulus of the input sensitivity function be achieved outside the operation region.

The use of the IMP, completed with auxiliary real (aperiodic) poles, which has been used in Chap. 11 as a basic design for adaptive attenuation of one unknown disturbance, may not work satisfactorily for the case of multiple unknown disturbances, even if it may provide good performance in some situations [1]. Even in the case of a single tonal disturbance, if low damped complex zeros near the border of the operation region are present, this simple design is not satisfactory. Auxiliary low damped complex poles have to be added. See Chap. 12, Sect. 12.6.

One can say in general that the IMP does too much in terms of attenuation of tonal disturbances, which can generate in certain cases unacceptable water bed effects. In fact, in practice one does not need a full rejection of the disturbance, but just a certain level of attenuation.

Three linear control strategies for the attenuation of multiple narrow-band disturbances will be considered:

  1. Band-stop filters (BSF) centred at the frequencies of the disturbances

  2. IMP combined with tuned notch filters

  3. IMP with additional fixed resonant poles

The controller design will be done in the context of pole placement. The initial desired closed-loop poles for the design of the central controller defined by the characteristic polynomial \(P_0\) include all the stable poles of the secondary path model and the free auxiliary poles are all set at 0.3. The fixed part of the central controller numerator is chosen as \(H_R(z^{-1})=(1-z^{-1})\cdot (1+z^{-1})\) in order to open the loop at 0 Hz and 0.5 \({f_s}\).
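The requirement that the controller gain vanish at 0 Hz and \(0.5\,f_s\) can be verified directly on \(H_R\). The following sketch (the `gain` helper and the sampling frequency value are our illustration, not part of the book's design) evaluates \(H_R(z^{-1})=(1-z^{-1})(1+z^{-1})=1-z^{-2}\) on the unit circle:

```python
import numpy as np

# H_R(z^-1) = (1 - z^-1)(1 + z^-1) = 1 - z^-2
h_r = np.convolve([1.0, -1.0], [1.0, 1.0])

def gain(coeffs, f, fs):
    """Magnitude of a polynomial in z^-1 (ascending powers) at frequency f in Hz."""
    zinv = np.exp(-2j * np.pi * f / fs)
    return abs(sum(c * zinv**k for k, c in enumerate(coeffs)))

fs = 800.0                      # hypothetical sampling frequency, for illustration
g0 = gain(h_r, 0.0, fs)         # gain at 0 Hz
gn = gain(h_r, fs / 2, fs)      # gain at 0.5*fs
g_mid = gain(h_r, 70.0, fs)     # nonzero gain inside the operating band
```

The two zeros placed at \(z=1\) and \(z=-1\) open the loop exactly at 0 Hz and \(0.5\,f_s\) while leaving the gain in the operating band essentially unaffected.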

2.1 Attenuation of Multiple Narrow-Band Disturbances Using Band-Stop Filters

The purpose of this method is to allow the possibility of choosing the desired attenuation and bandwidth of attenuation for each of the narrow-band components of the disturbance. Choosing the level of attenuation and the bandwidth makes it possible to preserve acceptable characteristics of the sensitivity functions outside the attenuation bands, which is very useful in the case of multiple narrow-band disturbances. This is the main advantage with respect to the classical internal model principle which, in the case of several narrow-band disturbances, as a consequence of the complete cancellation of the disturbances, may lead to unacceptable values of the modulus of the output sensitivity function outside the attenuation regions. The controller design technique uses the shaping of the output sensitivity function in order to impose the desired attenuation of narrow-band disturbances. This shaping technique has been presented in Sect. 7.2.

The process output can be written as

$$\begin{aligned} y(t)=G(q^{-1})\cdot u(t)+p(t), \end{aligned}$$
(13.1)

where

$$\begin{aligned} G(q^{-1})=q^{-d}\frac{B(q^{-1})}{A(q^{-1})} \end{aligned}$$
(13.2)

is called the secondary path of the system.

As specified in the introduction, the hypothesis of constant dynamic characteristics of the AVC system is considered (similar to [2, 3]). The denominator of the secondary path model is given by

$$\begin{aligned} A(q^{-1})=1+a_1q^{-1}+\cdots +a_{n_A}q^{-n_A}, \end{aligned}$$
(13.3)

the numerator is given by

$$\begin{aligned} B(q^{-1})=b_1q^{-1}+\cdots +b_{n_B}q^{-n_B}=q^{-1}B^*(q^{-1}) \end{aligned}$$
(13.4)

and d is the integer delay (number of sampling periods).

The control signal is given by

$$\begin{aligned} u(t)=-R(q^{-1})\cdot y(t)-S^*(q^{-1})\cdot u(t-1), \end{aligned}$$
(13.5)

with

$$\begin{aligned} S(q^{-1})&=1+q^{-1}S^*(q^{-1})=1+s_1q^{-1}+\cdots +s_{n_S}q^{-n_S} \nonumber \\&=S'(q^{-1})\cdot H_S(q^{-1}),\end{aligned}$$
(13.6)
$$\begin{aligned} R(q^{-1})&=r_0+r_1q^{-1}+\cdots +r_{n_R}q^{-n_R}=R'(q^{-1})\cdot H_R(q^{-1}), \end{aligned}$$
(13.7)

where \(H_S(q^{-1})\) and \(H_R(q^{-1})\) represent fixed (imposed) parts in the controller and \(S'(q^{-1})\) and \(R'(q^{-1})\) are computed.

The basic tool is a digital filter \(S_{BSF_i}(z^{-1})/P_{BSF_i}(z^{-1})\) with the numerator included in the controller polynomial S and the denominator as a factor of the desired closed-loop characteristic polynomial, which will assure the desired attenuation of a narrow-band disturbance (index \(i\in \{1,\cdots ,n\}\)).

The BSFs have the following structure

$$\begin{aligned} \frac{S_{BSF_i}(z^{-1})}{P_{BSF_i}(z^{-1})}=\frac{1+\beta _1^iz^{-1}+\beta _2^iz^{-2}}{1+\alpha _1^iz^{-1}+\alpha _2^iz^{-2}} \end{aligned}$$
(13.8)

resulting from the discretization of a continuous filter (see also [4, 5])

$$\begin{aligned} F_i(s)=\frac{s^2+2\zeta _{n_i}\omega _{i}s+\omega _{i}^2}{s^2+2\zeta _{d_i}\omega _{i}s+\omega _{i}^2} \end{aligned}$$
(13.9)

using the bilinear transformation. This filter introduces an attenuation of

$$\begin{aligned} M_i=-20\cdot \log _{10}\left( \frac{\zeta _{n_i}}{\zeta _{d_i}}\right) \end{aligned}$$
(13.10)

at the frequency \(\omega _{i}\). Positive values of \(M_i\) denote attenuations (\(\zeta _{n_i}<\zeta _{d_i}\)) and negative values denote amplifications (\(\zeta _{n_i}>\zeta _{d_i}\)). Details on the computation of the corresponding digital BSF have been given in Chap. 7.
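The design of one BSF can be sketched as follows (the `bsf` helper, the numerical values and the use of frequency prewarping in the bilinear transformation are our illustration):

```python
import numpy as np

def bsf(f_i, m_i, zeta_d, fs):
    """Discretize F_i(s) of (13.9) by the bilinear transform with frequency
    prewarping; m_i dB of attenuation at f_i Hz, per (13.10)."""
    zeta_n = zeta_d * 10.0 ** (-m_i / 20.0)       # inverted (13.10)
    ts = 1.0 / fs
    w = (2.0 / ts) * np.tan(np.pi * f_i * ts)     # prewarped omega_i
    c = 2.0 / ts                                  # s = c (1 - z^-1)/(1 + z^-1)
    num = np.array([c*c + 2*zeta_n*w*c + w*w, 2*(w*w - c*c), c*c - 2*zeta_n*w*c + w*w])
    den = np.array([c*c + 2*zeta_d*w*c + w*w, 2*(w*w - c*c), c*c - 2*zeta_d*w*c + w*w])
    return num / den[0], den / den[0]             # beta_i, alpha_i of (13.8)

def mag_db(num, den, f, fs):
    """Magnitude in dB of num/den (ascending powers of z^-1) at f Hz."""
    zinv = np.exp(-2j * np.pi * f / fs)
    h = np.polyval(num[::-1], zinv) / np.polyval(den[::-1], zinv)
    return 20.0 * np.log10(abs(h))

num, den = bsf(70.0, 40.0, 0.1, 800.0)    # 40 dB notch at 70 Hz (example values)
notch_depth = mag_db(num, den, 70.0, 800.0)   # close to -40 dB
far_gain = mag_db(num, den, 200.0, 800.0)     # close to 0 dB away from the notch
```

With prewarping, the attenuation at \(\omega _i\) is exactly \(\zeta _{n_i}/\zeta _{d_i}\), i.e. \(-M_i\) dB, while the response far from the notch remains close to 0 dB.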

Remark

The design parameters for each BSF are the desired attenuation (\(M_i\)), the central frequency of the filter (\(\omega _i\)) and the damping of the denominator (\(\zeta _{d_i}\)). The denominator damping is used to adjust the frequency bandwidth of the BSF. For very small values of the frequency bandwidth the influence of the filters on frequencies other than those defined by \(\omega _i\) is negligible. Therefore, the number of BSFs and subsequently that of the narrow-band disturbances that can be compensated can be large.

For n narrow-band disturbances, n BSFs will be used

$$\begin{aligned} H_{BSF}(z^{-1})=\frac{S_{BSF}(z^{-1})}{P_{BSF}(z^{-1})}=\frac{\prod _{i=1}^{n} S_{BSF_i}(z^{-1})}{\prod _{i=1}^{n} P_{BSF_i}(z^{-1})} \end{aligned}$$
(13.11)

As stated before, the objective is that of shaping the output sensitivity function. \(S(z^{-1})\) and \(R(z^{-1})\) are obtained as solutions of the Bezout equation

$$\begin{aligned} P(z^{-1})=A(z^{-1})S(z^{-1})+z^{-d}B(z^{-1})R(z^{-1}), \end{aligned}$$
(13.12)

where

$$\begin{aligned} S(z^{-1})=H_S(z^{-1})S'(z^{-1}),\quad R(z^{-1})=H_{R_1}(z^{-1})R'(z^{-1}), \end{aligned}$$
(13.13)

and \(P(z^{-1})\) is given by

$$\begin{aligned} P(z^{-1})=P_0(z^{-1})P_{BSF}(z^{-1}). \end{aligned}$$
(13.14)

In the last equation, \(P_{BSF}\) is the product of the denominators of all the BSFs, (13.11), and \(P_0\) defines the initial imposed poles of the closed-loop system in the absence of the disturbances (which also allows robustness constraints to be satisfied). The fixed part of the controller denominator \(H_S\) is in turn factorized into

$$\begin{aligned} H_S(z^{-1})=S_{BSF}(z^{-1})H_{S_1}(z^{-1}), \end{aligned}$$
(13.15)

where \(S_{BSF}\) is the combined numerator of the BSFs, (13.11), and \(H_{S_1}\) can be used if necessary to satisfy other control specifications. \(H_{R_1}\) is similar to \(H_{S_1}\), making it possible to introduce fixed parts in the controller’s numerator if needed (such as opening the loop at certain frequencies). It is easy to see that the output sensitivity function becomes

$$\begin{aligned} S_{yp}(z^{-1})=\frac{A(z^{-1})S'(z^{-1})H_{S_1}(z^{-1})}{P_0(z^{-1})}\frac{S_{BSF}(z^{-1})}{P_{BSF}(z^{-1})} \end{aligned}$$
(13.16)

and the shaping effect of the BSFs upon the sensitivity functions is obvious. The unknowns \(S'\) and \(R'\) are solutions of

$$\begin{aligned} P(z^{-1})&=P_0(z^{-1})P_{BSF}(z^{-1}) =A(z^{-1})S_{BSF}(z^{-1})H_{S_1}(z^{-1})S'(z^{-1})\,+ \nonumber \\&\quad +z^{-d}B(z^{-1})H_{R_1}(z^{-1})R'(z^{-1}). \end{aligned}$$
(13.17)

and can be computed by putting (13.17) into matrix form (see also [5]). The size of the matrix equation that needs to be solved is given by

$$\begin{aligned} n_{Bez}=n_A+n_B+d+n_{H_{S_1}}+n_{H_{R_1}}+2\cdot n-1, \end{aligned}$$
(13.18)

where \(n_A\), \(n_B\) and d are respectively the order of the plant model denominator, the order of its numerator and the delay (given in (13.3) and (13.4)), \(n_{H_{S_1}}\) and \(n_{H_{R_1}}\) are the orders of \(H_{S_1}(z^{-1})\) and \(H_{R_1}(z^{-1})\) respectively and n is the number of narrow-band disturbances. Equation (13.17) has a unique minimal degree solution for \(S'\) and \(R'\) if \(n_P\le n_{Bez}\), where \(n_P\) is the order of the pre-specified characteristic polynomial \(P(q^{-1})\). Also, it can be seen from (13.17) and (13.15) that the minimal orders of \(S'\) and \(R'\) will be:

$$\begin{aligned} n_{S'} = n_B+d+n_{H_{R_1}}-1,\quad n_{R'} = n_A+n_{H_{S_1}}+2\cdot n-1.\nonumber \end{aligned}$$
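Putting the Bezout equation into matrix (Sylvester) form can be sketched as follows for the generic equation (13.12); the `bezout_solve` helper and the first-order numerical plant are our illustration (the fixed parts \(H_{S_1}\), \(H_{R_1}\) are taken here as 1, i.e. absorbed into A and B):

```python
import numpy as np

def bezout_solve(A, B, d, P):
    """Solve A*S + z^-d * B * R = P via the Sylvester matrix.
    Polynomials are coefficient vectors in ascending powers of z^-1;
    minimal degrees: n_S = n_B + d - 1, n_R = n_A - 1."""
    nA, nB = len(A) - 1, len(B) - 1
    Bd = np.concatenate([np.zeros(d), B])    # coefficients of z^-d * B
    nS, nR = nB + d - 1, nA - 1
    size = nA + nB + d                        # number of equations = unknowns
    M = np.zeros((size, size))
    for j in range(nS + 1):                   # columns multiplying s_j
        M[j:j + nA + 1, j] = A
    for j in range(nR + 1):                   # columns multiplying r_j
        M[j:j + len(Bd), nS + 1 + j] = Bd
    p = np.zeros(size)
    p[:len(P)] = P
    x = np.linalg.solve(M, p)
    return x[:nS + 1], x[nS + 1:]             # S, R

# Example plant (our values): A = 1 - 0.7 z^-1, B = 0.5 z^-1.
# The z^-1 inside B, cf. (13.4), is encoded here by passing b1 with d = 1.
A = np.array([1.0, -0.7])
B = np.array([0.5])
d = 1
P = [1.0, -0.2]                               # desired closed-loop polynomial
S, R = bezout_solve(A, B, d, P)               # here S = [1.0], R = [1.0]
```

The matrix has \(n_A+n_B+d\) rows when no fixed parts are imposed, consistent with (13.18) for \(n_{H_{S_1}}=n_{H_{R_1}}=0\) and \(n=0\) extra BSF orders.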

In Fig. 13.1, one can see the improvement obtained using BSF with respect to the case when IMP with real auxiliary poles is used. The dominant poles are the same in both cases. The input sensitivity function is tuned before introducing the BSFs.

Fig. 13.1

Output sensitivity function for various controller designs: using IMP with auxiliary real poles (dotted line), using band-stop filters (dashed line), and using tuned \(\rho \) notch filters (continuous line)

2.2 IMP with Tuned Notch Filters

This approach is based on the idea of considering an optimal attenuation of the disturbance taking into account both the zeros and poles of the disturbance model. It is assumed that the model of the disturbance is a notch filter and the disturbance is represented by

$$\begin{aligned} p(t)=\frac{D_p(\rho q^{-1})}{D_p(q^{-1})}e(t) \end{aligned}$$
(13.19)

where e(t) is a zero mean white Gaussian noise sequence and

$$\begin{aligned} D_p(z^{-1}) = 1 + \alpha z^{-1} + z^{-2}, \end{aligned}$$
(13.20)

is a polynomial with roots on the unit circle.

In (13.20), \(\alpha = -2\cos \left( 2\pi \omega _1 T_s\right) \), \(\omega _1\) is the frequency of the disturbance in Hz, and \(T_s\) is the sampling time. \(D_p(\rho z^{-1})\) is given by:

$$\begin{aligned} D_p(\rho z^{-1})=1+\rho \alpha z^{-1}+\rho ^2 z^{-2}, \end{aligned}$$
(13.21)

with \(0<\rho <1\). The roots of \(D_p(\rho z^{-1})\) lie on the same radial lines as those of \(D_p(z^{-1})\) but inside the unit circle, and are therefore stable [6].
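This property is easy to verify numerically; the frequency, sampling frequency and value of \(\rho \) below are example values of our choosing:

```python
import numpy as np

# Example: single tonal disturbance at f1 = 70 Hz, fs = 800 Hz, rho = 0.95
fs, f1, rho = 800.0, 70.0, 0.95
Ts = 1.0 / fs
alpha = -2.0 * np.cos(2.0 * np.pi * f1 * Ts)      # as in (13.20)
Dp = [1.0, alpha, 1.0]                            # roots on the unit circle
Dp_rho = [1.0, rho * alpha, rho**2]               # (13.21): same angles, radius rho
r, r_rho = np.roots(Dp), np.roots(Dp_rho)
```

The roots of \(D_p\) have modulus 1, those of \(D_p(\rho z^{-1})\) have modulus \(\rho \), and both pairs share the same angles.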

This model is pertinent for representing narrow-band disturbances as shown in Fig. 13.2, where the frequency characteristics of this model for various values of \(\rho \) are shown.

Fig. 13.2

Magnitude of the frequency response of a notch filter for various values of the parameter \(\rho \)

Using the output sensitivity function, the output of the plant in the presence of the disturbance can be expressed as

$$\begin{aligned} y(t)=\frac{AS'}{P_0}\frac{H_S}{P_{aux}}\frac{D_p(\rho q^{-1})}{D_p(q^{-1})}e(t) \end{aligned}$$
(13.22)

or alternatively as

$$\begin{aligned} y(t)=\frac{AS'}{P_0} \beta (t) \end{aligned}$$
(13.23)

where

$$\begin{aligned} \beta (t)=\frac{H_S}{P_{aux}}\frac{D_p(\rho q^{-1})}{D_p(q^{-1})}e(t) \end{aligned}$$
(13.24)

In order to minimize the effect of the disturbance upon y(t), one should minimize the variance of \(\beta (t)\). One has two tuning devices: \(H_S\) and \(P_{aux}\). Minimizing the variance of \(\beta (t)\) is equivalent to searching for \(H_S\) and \(P_{aux}\) such that \(\beta (t)\) becomes a white noise [5, 7]. The obvious choices are \(H_S=D_p\) (which corresponds to the IMP) and \(P_{aux}=D_p(\rho z^{-1})\). Of course, this development can be generalized to the case of multiple narrow-band disturbances. Figure 13.1 illustrates the effect of this choice upon the output sensitivity function. As can be seen, the results are similar to those obtained with BSF.
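The whitening argument can be checked numerically: filtering the disturbance (13.19) through \(H_S/P_{aux}\) with \(H_S=D_p\) and \(P_{aux}=D_p(\rho q^{-1})\) recovers the white input exactly. A sketch (the `iir` helper and the numerical values are our illustration):

```python
import numpy as np

def iir(b, a, x):
    """y(t) = (b(q^-1)/a(q^-1)) x(t) in difference-equation form, a[0] = 1."""
    y = np.zeros_like(x)
    for t in range(len(x)):
        acc = sum(b[k] * x[t - k] for k in range(len(b)) if t - k >= 0)
        acc -= sum(a[k] * y[t - k] for k in range(1, len(a)) if t - k >= 0)
        y[t] = acc
    return y

rng = np.random.default_rng(0)
e = rng.standard_normal(500)                 # white noise e(t)
fs, f1, rho = 800.0, 70.0, 0.95              # example values
alpha = -2.0 * np.cos(2.0 * np.pi * f1 / fs)
Dp = [1.0, alpha, 1.0]
Dp_rho = [1.0, rho * alpha, rho**2]
p = iir(Dp_rho, Dp, e)      # disturbance generated by the model (13.19)
beta = iir(Dp, Dp_rho, p)   # beta(t) of (13.24) with H_S = Dp, P_aux = Dp(rho)
```

Since the cascade \((D_p/D_p(\rho q^{-1}))\cdot (D_p(\rho q^{-1})/D_p)\) is the identity, \(\beta (t)\) reproduces the white input e(t), which is the minimum-variance situation.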

2.3 IMP Design Using Auxiliary Low Damped Complex Poles

The idea is to add a number of fixed auxiliary resonant poles which will act effectively as \(\rho \)-filters at a few frequencies and as an approximation of the \(\rho \)-filters at the other frequencies. This means that a number of the real auxiliary poles used in the basic IMP design will be replaced by resonant complex poles. The basic ad hoc rule is that the number of these resonant poles is equal to the number of low damped complex zeros located near the border of the operation region plus \(n-1\) (where n is the number of tonal disturbances).

For the case of 3 tonal disturbances located in the operation region (50 to 95 Hz), taking also into account the presence of the low damped complex zeros, the locations and the dampings of these auxiliary resonant poles are summarized in Table 13.1. The poles at 50 and 90 Hz are related to the presence in the neighbourhood of low damped complex zeros. The poles at 60 and 80 Hz are related to the 3 tonal disturbances to be attenuated. The effect of this design with respect to the basic design using real auxiliary poles is illustrated in Fig. 13.3.

Table 13.1 Auxiliary low damped complex poles added to the closed-loop characteristic polynomial
Fig. 13.3

Output sensitivity function for IMP design with real auxiliary poles and with resonant auxiliary poles

3 Interlaced Adaptive Regulation Using Youla–Kučera IIR Parametrization

The adaptive algorithm developed in Chap. 12 uses an FIR structure for the Q-filter. In this section, a new algorithm is developed, using an IIR structure for the Q-filter in order to implement the linear control strategy using tuned notch filters (tuned auxiliary resonant poles). This strategy is mainly dedicated to the case of multiple unknown tonal disturbances.

As indicated previously, since \(D_p(\rho z^{-1})\) will define part of the desired closed-loop poles, it is reasonable to consider an IIR Youla–Kučera filter of the form \(B_Q(z^{-1})/A_Q(z^{-1})\) with \(A_Q(z^{-1})=D_p(\rho z^{-1})\) (which will automatically introduce \(D_p(\rho z^{-1})\) as part of the closed-loop poles). \(B_Q\) will introduce the internal model of the disturbance. In this context, the controller polynomials R and S are defined by

$$\begin{aligned} R(z^{-1})&=A_Q(z^{-1})R_0(z^{-1})+H_{R_0}(z^{-1})H_{S_0}(z^{-1})A(z^{-1})B_Q(z^{-1}),\end{aligned}$$
(13.25)
$$\begin{aligned} S(z^{-1})&=A_Q(z^{-1})S_0(z^{-1})-H_{R_0}(z^{-1})H_{S_0}(z^{-1})z^{-d}B(z^{-1})B_Q(z^{-1}), \end{aligned}$$
(13.26)

and the poles of the closed-loop are given by:

$$\begin{aligned} P(z^{-1})= A_Q(z^{-1})P_0(z^{-1}). \end{aligned}$$
(13.27)

\(R_0(z^{-1})\), \(S_0(z^{-1})\) are the numerator and denominator of the central controller

$$\begin{aligned} R_0(z^{-1})&=H_{R_0}(z^{-1})R_0'(z^{-1}),\end{aligned}$$
(13.28)
$$\begin{aligned} S_0(z^{-1})&=H_{S_0}(z^{-1})S'_0(z^{-1}), \end{aligned}$$
(13.29)

and the closed-loop poles defined by the central controller are the roots of

$$\begin{aligned} P_0(z^{-1})=A(z^{-1})S_0(z^{-1})+z^{-d}B(z^{-1})R_0(z^{-1}). \end{aligned}$$
(13.30)

It can be seen from (13.25) and (13.26) that the new controller polynomials conserve the fixed parts of the central controller.

Using the expression of the output sensitivity function (\(AS/P\)), the output of the system can be written as follows:

$$\begin{aligned} y(t)&= \frac{A\left[ A_QS_0-H_{R_0}H_{S_0}q^{-d}BB_Q\right] }{P}p(t),\end{aligned}$$
(13.31)
$$\begin{aligned} y(t)&= \frac{\left[ A_QS_0-H_{R_0}H_{S_0}q^{-d}BB_Q\right] }{P}w(t), \end{aligned}$$
(13.32)

where the closed-loop poles are defined by (13.27) and where w(t) is defined as:

$$\begin{aligned} w(t)&=A(q^{-1})y(t)-q^{-d}B(q^{-1})u(t)\end{aligned}$$
(13.33)
$$\begin{aligned}&=A(q^{-1})p(t) \end{aligned}$$
(13.34)

Comparing (13.32) with (12.20) from Chap. 12, one can see that they are similar except that \(S_0\) is replaced by \(A_QS_0\) and \(P_0\) by \(A_QP_0\). Therefore if \(A_Q\) is known, the algorithm given in Chap. 12 for the estimation of the Q FIR filter can be used for the estimation of \(B_Q\). In fact this will be done using an estimation of \(A_Q\). A block diagram of the interlaced adaptive regulation using the Youla–Kučera parametrization is shown in Fig. 13.4. The estimation of \(A_Q\) is discussed next.

Fig. 13.4

Interlaced adaptive regulation using an IIR YK controller parametrization

3.1 Estimation of \(A_Q\)

Assuming that the plant model is equal to the true plant in the frequency range where the narrow-band disturbances are introduced, it is possible to obtain an estimation of p(t), denoted \(\hat{p}(t)\), using the following expression

$$\begin{aligned} \hat{p}(t) = \frac{1}{A(q^{-1})}w(t) \end{aligned}$$
(13.35)

where w(t) was defined in (13.33). The main idea behind this algorithm is to consider the signal \(\hat{p}(t)\) as

$$\begin{aligned} \hat{p}(t) = \sum _{i=1}^{n}c_{i}\sin \left( \omega _{i}t+\beta _{i}\right) +\eta (t), \end{aligned}$$
(13.36)

where \(c_i\ne 0\), \(\omega _i\) and \(\beta _i\) are the amplitudes, frequencies and phases of the components, n is the number of narrow-band disturbances and \(\eta \) is a noise affecting the measurement. It can be verified that, after a transient of two steps, \(\left( 1-2\cos (2\pi \omega _i T_s)q^{-1}+q^{-2}\right) \cdot c_i\sin \left( \omega _i t+\beta _i\right) =0\) [8]. The objective is then to find the parameters \(\left\{ \alpha _i\right\} _{i=1}^{n}\) that make \(D_p(q^{-1})\hat{p}(t)=0\).
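The annihilation property quoted above can be checked numerically for one sinusoid (the frequency, phase and sampling frequency are example values of our choosing):

```python
import numpy as np

# Check that D_p annihilates a sampled sinusoid (n = 1)
fs, f, phase = 800.0, 70.0, 0.3
Ts = 1.0 / fs
k = np.arange(200)
p = np.sin(2.0 * np.pi * f * k * Ts + phase)
alpha = -2.0 * np.cos(2.0 * np.pi * f * Ts)
# D_p(q^-1) p(t) = p(t) + alpha*p(t-1) + p(t-2), evaluated for t >= 2
x = p[2:] + alpha * p[1:-1] + p[:-2]
residual = np.max(np.abs(x))        # zero up to rounding errors
```

After the two-step transient, \(D_p(q^{-1})\) maps the sinusoid to zero, which is exactly the property exploited by the estimation algorithm.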

The previous product can be equivalently written as \(D_p(q^{-1})\hat{p}(t+1)=0\) and its expression is

$$\begin{aligned} x(t+1)&= D_p(q^{-1})\hat{p}(t+1),\nonumber \\&= \hat{p}(t+1)+\sum _{i=1}^{n-1}\alpha _i\left[ \hat{p}(t+1-i)+\hat{p}(t+1-2n+i)\right] +\cdots \nonumber \\&\cdots +\alpha _n \hat{p}(t+1-n)+\hat{p}(t+1-2n). \end{aligned}$$
(13.37)

where n is the number of narrow-band disturbances.

Defining the parameter vector as

$$\begin{aligned} \theta _{D_p}=\left[ \alpha _1,\alpha _2,\ldots ,\alpha _n\right] ^T, \end{aligned}$$
(13.38)

and the observation vector at time t as:

$$\begin{aligned} \phi _{D_p}(t) = \left[ \phi _{1}^{D_p}(t),\phi _{2}^{D_p}(t),\ldots ,\phi _{n}^{D_p}(t)\right] ^T, \end{aligned}$$
(13.39)

where

$$\begin{aligned} \phi _{j}^{D_p}(t)&= \hat{p}(t+1-j)+\hat{p}(t+1-2n+j),\; j=1,\ldots ,n-1 \end{aligned}$$
(13.40)
$$\begin{aligned} \phi _{n}^{D_p}(t)&= \hat{p}(t+1-n). \end{aligned}$$
(13.41)

Equation (13.37) can then be simply represented by

$$\begin{aligned} {x}(t+1) = \theta _{D_p}^{T}\phi _{D_p}(t)+\left( \hat{p}(t+1)+\hat{p}(t+1-2n)\right) . \end{aligned}$$
(13.42)
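The representation (13.42) can be checked numerically for \(n=2\) (the frequencies and amplitudes are example values of our choosing):

```python
import numpy as np

fs = 800.0
Ts = 1.0 / fs
f1, f2 = 55.0, 85.0
k = np.arange(400)
p = np.sin(2*np.pi*f1*k*Ts) + 0.5*np.sin(2*np.pi*f2*k*Ts + 1.0)

a1 = -2.0 * np.cos(2*np.pi*f1*Ts)
a2 = -2.0 * np.cos(2*np.pi*f2*Ts)
# D_p = (1 + a1 q^-1 + q^-2)(1 + a2 q^-1 + q^-2)
#     = 1 + alpha1 q^-1 + alpha2 q^-2 + alpha1 q^-3 + q^-4
theta_dp = np.array([a1 + a2, 2.0 + a1 * a2])       # [alpha_1, alpha_2], (13.38)

n = 2
xs = []
for t in range(2 * n, len(p) - 1):
    phi = np.array([p[t] + p[t - 2], p[t - 1]])     # (13.40)-(13.41)
    xs.append(theta_dp @ phi + p[t + 1] + p[t + 1 - 2 * n])   # (13.42)
max_x = np.max(np.abs(xs))          # x(t+1) = D_p p(t+1) = 0 for this signal
```

The symmetric structure of \(D_p\) is what allows the \(2n\) coefficients to be folded into the n parameters \(\alpha _1,\ldots ,\alpha _n\) of the regressor form.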

Assuming that an estimation \(\hat{D}_p(q^{-1})\) of \(D_p(q^{-1})\) is available at the instant t, the estimated product is written as follows:

$$\begin{aligned} \hat{x}(t+1)&= \hat{D}_p(q^{-1})\hat{p}(t+1),\nonumber \\&= \hat{p}(t+1)+\sum _{i=1}^{n-1}\hat{\alpha }_i\left[ \hat{p}(t+1-i)+\hat{p}(t+1-2n+i)\right] +\cdots \nonumber \\&\cdots +\hat{\alpha }_n \hat{p}(t+1-n)+\hat{p}(t+1-2n)\end{aligned}$$
(13.43)
$$\begin{aligned}&= \hat{\theta }^{T}_{D_p}(t)\phi _{D_p}(t)+\left( \hat{p}(t+1)+\hat{p}(t+1-2n)\right) \end{aligned}$$
(13.44)

where \(\hat{\theta }_{D_p}(t)\) is the estimated parameter vector at time t. Then the a priori prediction error is given by

$$\begin{aligned} \varepsilon ^{\circ }_{D_p}(t+1)&= x(t+1) - \hat{x}(t+1)=\left[ \theta ^{T}_{D_p}-\hat{\theta }_{D_p}^{T}(t)\right] \cdot \phi _{D_p}(t), \end{aligned}$$
(13.45)

and the a posteriori adaptation error, using the estimation at \(t+1\), is

$$\begin{aligned} \varepsilon _{D_p}(t+1) = \left[ \theta _{D_p}^T-\hat{\theta }_{D_p}^T(t+1)\right] \cdot \phi _{D_p}(t), \end{aligned}$$
(13.46)

Equation (13.46) has the standard form of an a posteriori adaptation error [9], which makes it possible to associate the standard parameter adaptation algorithm (PAA) introduced in Chap. 4 (Eqs. (4.121)–(4.123)):

$$\begin{aligned} \hat{\theta }_{D_p}(t+1)&= \hat{\theta }_{D_p}(t)+\frac{F_2(t)\phi _{D_p}(t)\varepsilon ^{\circ }_{D_p}(t+1)}{1+\phi _{D_p}(t)^T F_2(t)\phi _{D_p}(t)}\end{aligned}$$
(13.47)
$$\begin{aligned} \varepsilon ^{\circ }_{D_p}(t+1)&= x(t+1) - \hat{x}(t+1)\end{aligned}$$
(13.48)
$$\begin{aligned} \hat{x}(t+1)&= \hat{\theta }^{T}_{D_p}(t)\phi _{D_p}(t)+\left( \hat{p}(t+1)+\hat{p}(t+1-2n)\right) \end{aligned}$$
(13.49)
$$\begin{aligned} F_2(t+1)^{-1}&= \lambda _1(t)F_2(t)^{-1}+\lambda _2(t)\phi _{D_p}(t)\phi _{D_p}(t)^{T}\\ 0<\lambda _1(t)&\le 1;\quad 0\le \lambda _2(t)<2;\quad F_2(0)>0\nonumber \end{aligned}$$
(13.50)

The PAA defined in (4.121)–(4.123) is used with \(\phi (t) = \phi _{D_p}(t)\), \(\hat{\theta }(t)=\hat{\theta }_{D_p}(t)\) and \(\varepsilon ^\circ (t+1)=\varepsilon ^\circ _{D_p}(t+1)\). For implementation, since the objective is to make \(x(t+1)\rightarrow 0\), the implementable a priori adaptation error is defined as follows:

$$\begin{aligned} \varepsilon _{D_p}^{\circ }(t+1)&=0-\hat{D}_p(q^{-1},t)\hat{p}(t+1)\nonumber \\&=-\hat{\theta }_{D_p}^T(t)\phi _{D_p}(t)-\left( \hat{p}(t+1)+\hat{p}(t-2n+1)\right) . \end{aligned}$$
(13.51)

Additional filtering can be applied on \(\hat{p}(t)\) to improve the signal-to-noise ratio. Since a frequency range of interest was defined, a bandpass filter can be used on \(\hat{p}(t)\). Once an estimation of \(D_p\) is available, \(A_Q=D_p(\rho q^{-1})\) is immediately generated. Since the estimated \(\hat{A}_Q\) will be used for the estimation of the parameters of \(B_Q\), one needs to show that \(\lim _{t\rightarrow \infty } \hat{A}_Q(z^{-1})=A_Q(z^{-1})\). This is shown in Appendix C.
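For \(n=1\), the adaptation algorithm above reduces to a scalar recursion. The sketch below (a simplified illustration on a noiseless synthetic \(\hat{p}(t)\); the frequency, initial gain and decreasing-gain settings are our assumptions) recovers the disturbance frequency:

```python
import numpy as np

fs, f_true = 800.0, 70.0
Ts = 1.0 / fs
k = np.arange(2000)
p = np.sin(2.0 * np.pi * f_true * k * Ts + 0.5)   # synthetic \hat{p}(t), noiseless

theta = 0.0            # \hat{alpha}_1(0)
F = 1000.0             # initial adaptation gain
lam1, lam2 = 1.0, 1.0  # decreasing adaptation gain
for t in range(1, len(p) - 1):
    phi = p[t]                                    # phi_Dp(t) for n = 1
    eps0 = -theta * phi - (p[t + 1] + p[t - 1])   # implementable a priori error
    theta += F * phi * eps0 / (1.0 + F * phi * phi)   # parameter update
    F = F / (lam1 + lam2 * F * phi * phi)             # scalar gain update

alpha_true = -2.0 * np.cos(2.0 * np.pi * f_true * Ts)
f_est = np.arccos(-theta / 2.0) / (2.0 * np.pi * Ts)  # recovered frequency in Hz
```

Once \(\hat{\alpha }_1\) has converged, \(\hat{D}_p\) and hence \(\hat{A}_Q=\hat{D}_p(\rho q^{-1})\) follow immediately.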

3.2 Estimation of \(B_Q(q^{-1})\)

Taking into account (13.12), (13.15), (13.16), and (13.17), it remains to compute \(B_Q(z^{-1})\) such that

$$\begin{aligned} S(z^{-1})=D_p(z^{-1})H_{S_0}(z^{-1})S'(z^{-1}). \end{aligned}$$
(13.52)

Turning back to (13.26) one obtains

$$\begin{aligned} S_0A_Q=D_pH_{S_0}S'+z^{-d}BH_{R_0}H_{S_0}B_Q. \end{aligned}$$
(13.53)

and taking also (13.29) into consideration, one obtains

$$\begin{aligned} S'_0A_Q=D_pS'+z^{-d}BH_{R_0}B_Q. \end{aligned}$$
(13.54)

Once an estimation algorithm has been developed for the polynomial \(\hat{A}_Q(q^{-1})\), the next step is to develop the estimation algorithm for \(\hat{B}_Q(q^{-1})\). Assuming that the estimation \(\hat{A}_Q(t)\) of \(A_Q(z^{-1})\) is available, one can incorporate this polynomial into the adaptation algorithm defined in Sect. 12.2.2. Using (13.32) and (13.27) and assuming that an estimation \(\hat{B}_Q(q^{-1})\) is available at the instant t, the a priori error is defined as the output of the closed-loop system written as follows

$$\begin{aligned} \varepsilon ^\circ (t+1)&= \frac{S_0\hat{A}_Q(t)-q^{-d}BH_{S_0}H_{R_0}\hat{B}_Q(t)}{P_0\hat{A}_Q(t)} w(t+1)\nonumber \\&= \frac{S_0}{P_0}w(t+1) - \frac{q^{-d}B^{*}H_{S_0}H_{R_0}}{P_0}\frac{\hat{B}_Q(t)}{\hat{A}_Q(t)}w(t)\end{aligned}$$
(13.55)
$$\begin{aligned}&= w_1(t+1) - \frac{\hat{B}_Q(t)}{\hat{A}_Q(t)}w^{f}(t) \end{aligned}$$
(13.56)

where the notations

$$\begin{aligned} w(t+1)&= A\frac{D_p(\rho )}{D_p}\delta (t+1)\end{aligned}$$
(13.57)
$$\begin{aligned} w_1(t+1)&= \frac{S_0}{P_0}w(t+1)\end{aligned}$$
(13.58)
$$\begin{aligned} w^f(t)&= \frac{q^{-d}B^{*}H_{S_0}H_{R_0}}{P_0}w(t) \end{aligned}$$
(13.59)

have been introduced.

Substituting (13.53) in (13.55) one gets:

$$\begin{aligned} \varepsilon ^\circ (t+1) =&\frac{H_{S_0}D_pS'}{P_0A_Q}w(t+1)+\frac{q^{-d}B^{*}H_{S_0}H_{R_0}}{P_0}\frac{B_Q}{A_Q}w(t)\,-\nonumber \\&- \frac{q^{-d}B^{*}H_{S_0}H_{R_0}}{P_0}\frac{\hat{B}_Q(t)}{\hat{A}_Q(t)}w(t)\end{aligned}$$
(13.60)
$$\begin{aligned} =&\upsilon (t+1) + \frac{q^{-d}B^{*}H_{S_0}H_{R_0}}{P_0}\left[ \frac{B_Q}{A_Q}-\frac{\hat{B}_Q(t)}{\hat{A}_Q(t)}\right] w(t) \end{aligned}$$
(13.61)

where

$$\begin{aligned} \upsilon (t+1) = \frac{H_{S_0}D_pS'}{P_0A_Q}\frac{AD_p(\rho )}{D_p}\delta (t+1)=\frac{H_{S_0}S'A}{P_0}\delta (t+1) \end{aligned}$$
(13.62)

tends asymptotically to zero since it is the output of an asymptotically stable filter whose input is a Dirac pulse.

The equation for the a posteriori error takes the form

$$\begin{aligned} \varepsilon (t+1) =&\frac{1}{A_Q}\left[ \theta _1^T-\hat{\theta }_1^T(t+1)\right] \phi _1(t)+\upsilon ^{f}(t+1)+\upsilon _1(t+1), \end{aligned}$$
(13.63)

where

$$\begin{aligned} \upsilon ^{f}(t+1)&= \frac{1}{A_Q}\upsilon (t+1)\rightarrow 0,\; \mathrm {since}\;A_Q\;\mathrm {is\;asymptotically\;stable.} \end{aligned}$$
(13.64)
$$\begin{aligned} \upsilon _1(t+1)&= \frac{1}{A_Q}\left( A^{*}_Q-\hat{A}^{*}_Q(t+1)\right) \left( -\hat{u}_Q^f(t)\right) \rightarrow 0, \end{aligned}$$
(13.65)
$$\begin{aligned} \theta _1&= \left[ b^{Q}_0,\ldots ,b^{Q}_{2n-1}\right] ^T\end{aligned}$$
(13.66)
$$\begin{aligned} \hat{\theta }_1(t+1)&= \left[ \hat{b}^{Q}_0(t+1),\ldots ,\hat{b}^{Q}_{2n-1}(t+1)\right] ^T\end{aligned}$$
(13.67)
$$\begin{aligned} \phi _1(t)&= \left[ w^{f}(t),\ldots ,w^{f}(t+1-2n)\right] ^T \end{aligned}$$
(13.68)
$$\begin{aligned} w^{f}(t)= \frac{q^{-d}B^{*}H_{S_0}H_{R_0}}{P_0}w(t) \end{aligned}$$
(13.69)

and n is the number of narrow-band disturbances. The convergence towards zero for the signal \(\upsilon _1(t+1)\) is assured by the fact that \(\lim _{t\rightarrow \infty } \hat{A}_Q(t,z^{-1})=A_Q(z^{-1 })\) (see Appendix C). Since \(\upsilon ^{f}(t+1)\) and \(\upsilon _1(t+1)\) tend towards zero, (13.63) has the standard form of an adaptation error equation (see Chap. 4 and [9]), and the following PAA is proposed:

$$\begin{aligned} \hat{\theta }_1(t+1)&= \hat{\theta }_1(t)+F_1(t)\varPhi _1(t)\nu (t+1)\end{aligned}$$
(13.70)
$$\begin{aligned} \nu (t+1)&= \frac{\nu ^\circ (t+1)}{1+\varPhi _1^T(t)F_1(t)\varPhi _1(t)}\end{aligned}$$
(13.71)
$$\begin{aligned} F_1(t+1)^{-1}&= \lambda _1(t)F_1(t)^{-1}+\lambda _2(t)\varPhi _1(t)\varPhi _1^T(t)\end{aligned}$$
(13.72)
$$\begin{aligned} 0<\lambda _1(t)\le&1;\;0\le \lambda _2(t)<2;\; F_1(0)>0 \end{aligned}$$
(13.73)

There are several possible choices for the regressor vector \(\varPhi _1(t)\) and the adaptation error \(\nu (t+1)\), because there is a strictly positive real condition for stability related to the presence of the term \(\frac{1}{A_Q}\) in (13.63). For the case where \(\nu (t+1)=\varepsilon (t+1)\), one has \(\nu ^\circ (t+1)=\varepsilon ^\circ (t+1)\), where

$$\begin{aligned} \varepsilon ^\circ (t+1) = w_1(t+1) -\hat{\theta }_1^T(t)\varPhi _1(t). \end{aligned}$$
(13.74)

For the case where \(\nu (t+1)=\hat{A}_Q\varepsilon (t+1)\):

$$\begin{aligned} \nu ^\circ (t+1)=\varepsilon ^\circ (t+1) + \sum _{i=1}^{n_{A_Q}} \hat{a}_i^Q\varepsilon (t+1-i). \end{aligned}$$
(13.75)
Table 13.2 Comparison of algorithms for the adaptation of the numerator parameters \(B_Q(z^{-1})\)

These various choices result from the stability analysis given in Appendix C. They are detailed below and summarized in Table 13.2.

  • \(\varPhi _1(t)=\phi _1(t)\). In this case, the prediction error \(\varepsilon (t+1)\) is chosen as adaptation error \(\nu (t+1)\) and the regressor vector \(\varPhi _1(t) = \phi _1(t)\). Therefore, the stability condition is: \(H'=\frac{1}{A_Q}-\frac{\lambda _2}{2}\) (\(\max _t \lambda _2(t)\le \lambda _2<2\)) should be strictly positive real (SPR).

  • \(\nu (t+1) = \hat{A}_Q\varepsilon (t+1)\). The adaptation error is considered as the filtered prediction error \(\varepsilon (t+1)\) through a filter \(\hat{A}_Q\). The regressor vector is \(\varPhi _1(t) = \phi _1(t) \) and the stability condition is modified to: \(H' = \frac{\hat{A}_Q}{A_Q}-\frac{\lambda _2}{2}\) (\(\max _t \lambda _2(t)\le \lambda _2<2\)) should be SPR where \(\hat{A}_Q\) is a fixed estimation of \(A_Q\).

  • \(\varPhi _1(t) = \phi _1^f(t)\). Instead of filtering the adaptation error, the observations can be filtered to relax the stability condition. By filtering the observation vector \(\phi _1(t)\) through \(\frac{1}{\hat{A}_Q}\), i.e., \(\phi _1^f(t) = \frac{1}{\hat{A}_Q}\phi _1(t)\) (\(\hat{A}_Q\) being a fixed estimation of \(A_Q\)), and using \(\nu (t+1) = \varepsilon (t+1)\), the stability condition is: \(H'=\frac{\hat{A}_Q}{A_Q}-\frac{\lambda _2}{2}\) (\(\max _t \lambda _2(t)\le \lambda _2<2\)) should be SPR.

  • \(\varPhi _1(t) = \phi _1^f(t)= \frac{1}{\hat{A}_Q(t)}\phi _1(t)\), where \(\hat{A}_Q(t)\) is the current estimation of \(A_Q\). When filtering through the current estimation \(\hat{A}_Q(t)\), the condition is similar to the previous case except that it is only valid locally [9].

It is this last option which is used in [10] and in Sect. 13.5.
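The gain update (13.71)–(13.73) can be implemented without explicit matrix inversion by propagating \(F_1(t)\) directly through the matrix inversion lemma. Below is a minimal Python sketch; the function name and the small regression example are hypothetical illustrations (not the book's code), and \(\lambda _2>0\) is assumed so that the lemma form is well defined.

```python
import numpy as np

def paa_step(theta, F, phi, nu0, lam1=1.0, lam2=1.0):
    """One step of the parameter adaptation algorithm (13.70)-(13.73).
    theta : parameter estimate at time t
    F     : adaptation gain matrix F_1(t)
    phi   : regressor Phi_1(t)
    nu0   : a priori adaptation error nu^o(t+1)
    lam1, lam2 : forgetting factors, 0 < lam1 <= 1 and 0 < lam2 < 2
    (lam2 > 0 assumed so the inversion-lemma form below is defined)."""
    s = phi @ F @ phi
    nu = nu0 / (1.0 + s)                      # a posteriori error (13.71)
    theta_new = theta + F @ phi * nu          # parameter update
    # F^{-1}(t+1) = lam1 F^{-1}(t) + lam2 phi phi^T (13.72), propagated
    # directly on F through the matrix inversion lemma:
    Fphi = F @ phi
    F_new = (F - np.outer(Fphi, Fphi) / (lam1 / lam2 + s)) / lam1
    return theta_new, F_new

# Illustration on a hypothetical 2-parameter regression y = theta^T phi:
rng = np.random.default_rng(0)
theta_true = np.array([0.5, -0.3])
theta, F = np.zeros(2), 100.0 * np.eye(2)
for _ in range(200):
    phi = rng.standard_normal(2)
    nu0 = theta_true @ phi - theta @ phi      # a priori error, cf. (13.74)
    theta, F = paa_step(theta, F, phi, nu0)
print(np.round(theta, 4))                     # close to [0.5, -0.3]
```

With \(\lambda _1=\lambda _2=1\) this reduces to standard recursive least squares (decreasing adaptation gain).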

The following procedure is applied at each sampling time for adaptive operation:

  1. Get the measured output \(y(t+1)\) and the applied control u(t) to compute \(w(t+1)\) using (13.33).

  2. Obtain the filtered signal \(\hat{p}(t+1)\) from (13.35).

  3. Compute the implementable a priori adaptation error with (13.48).

  4. Estimate \(\hat{D}_p(q^{-1})\) using the PAA and compute at each step \(\hat{A}_Q(q^{-1})\).

  5. Compute \(w^{f}(t)\) with (13.69).

  6. Compute \(w_1(t+1)\) with (13.58).

  7. Put the filtered signal \(w_2^f(t)\) in the observation vector, as in (13.68).

  8. Compute the a priori adaptation error defined in (13.74).

  9. Estimate the \(B_Q\) polynomial using the parametric adaptation algorithm (13.70)–(13.72).

  10. Compute and apply the control (see Fig. 13.4):

    $$\begin{aligned} S_0u(t) = -R_0y(t+1) - H_{S_0}H_{R_0}\left( \hat{B}_Q(t)w(t+1)-\hat{A}_Q^{*}\hat{u}_Q(t)\right) . \end{aligned}$$
    (13.76)

4 Indirect Adaptive Regulation Using Band-Stop Filters

In this section, an indirect adaptive regulation scheme will be developed for attenuating multiple unknown narrow-band disturbances using band-stop filters centred at the frequencies corresponding to the spikes in the spectrum of the disturbance. The principle of the linear design problem has been discussed in Sect. 13.2.1.

The design of the BSF for narrow-band disturbance attenuation is further simplified by considering a Youla–Kučera parametrization of the controller [2, 11–13]. By doing this, the dimension of the matrix equation that has to be solved is significantly reduced, and therefore the computational load in the adaptive case will be much lower.

In order to implement this approach in the presence of unknown narrow-band disturbances, one needs to estimate in real time the frequencies of the spikes contained in the disturbance. System identification techniques can be used to estimate the ARMA model of the disturbance [3, 14]. Unfortunately, finding the frequencies of the spikes from the estimated model of the disturbance requires the real-time computation of the roots of a polynomial of degree \(2\cdot n\), where n is the number of spikes. Therefore, this approach is applicable in the case of one or, at most, two narrow-band disturbances [1, 2]. What is needed is an algorithm which can directly estimate the frequencies of the various spikes of the disturbance. Several methods have been proposed [15]. The adaptive notch filter (ANF) is particularly interesting and has been reviewed in a number of articles [6, 16–21]. In this book, the estimation approach presented in [22, 23] will be used. Combining the frequency estimation procedure and the control design procedure, an indirect adaptive regulation system for attenuation of multiple unknown and/or time-varying narrow-band disturbances is obtained.

In the present context, the dynamic characteristics of the AVC system are assumed to be constant (as in [3]). Furthermore, the corresponding control model is supposed to be accurately identified from input/output data.

4.1 Basic Scheme for Indirect Adaptive Regulation

The equation describing the system has been given in Sect. 13.2. The basic scheme for indirect adaptive regulation is presented in Fig. 13.5. In the context of unknown and time-varying disturbances, a disturbance observer followed by a disturbance model estimation block has to be used in order to obtain the information on the disturbance characteristics needed to update the controller parameters.

Fig. 13.5
figure 5

Basic scheme for indirect adaptive regulation

With respect to Eq. (13.1), it is supposed that

$$\begin{aligned} p(t)=\frac{D(\rho q^{-1})}{D(q^{-1})}\delta (t), \quad \rho \in \left( 0,1\right) \text { is a fixed constant}, \end{aligned}$$
(13.77)

represents the effect of the disturbance on the measured output.

Under the hypothesis that the plant model parameters are constant and that an accurate identification experiment can be run, a reliable estimate \(\hat{p}(t)\) of the disturbance signal can be obtained using the following disturbance observer

$$\begin{aligned} \hat{p}(t+1)&=y(t+1)-\frac{q^{-d}B^*(q^{-1})}{A(q^{-1})}u(t)\nonumber \\&=\frac{1}{A(q^{-1})}\left( A(q^{-1})y(t+1)-q^{-d}B^*(q^{-1})u(t)\right) \end{aligned}$$
(13.78)
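A minimal Python sketch of the observer (13.78) is given below; the helper filter, the plant polynomials and the sinusoidal disturbance in the round-trip check are hypothetical illustrations, not values from the test bench.

```python
import numpy as np

def filt(num, den, x):
    """Direct-form filtering y(t) = (num(q^-1)/den(q^-1)) x(t), den[0] = 1."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        acc = sum(num[k] * x[t - k] for k in range(len(num)) if t - k >= 0)
        acc -= sum(den[k] * y[t - k] for k in range(1, len(den)) if t - k >= 0)
        y[t] = acc
    return y

def disturbance_observer(y, u, A, Bstar, d):
    """Disturbance observer (13.78):
    p_hat(t) = y(t) - [q^{-d} B*(q^-1)/A(q^-1)] u(t-1),
    with A = [1, a_1, ...], Bstar = [b_1, ...] and integer delay d."""
    num = np.concatenate((np.zeros(d + 1), Bstar))   # q^{-d-1} B*(q^-1)
    return np.asarray(y) - filt(num, A, u)

# Round-trip check on synthetic data (hypothetical plant and disturbance):
rng = np.random.default_rng(1)
A, Bstar, d = [1.0, -0.7], [1.0, 0.5], 2
u = rng.standard_normal(300)
p = np.sin(2.0 * np.pi * 0.05 * np.arange(300))      # tonal disturbance
y = filt(np.concatenate((np.zeros(d + 1), Bstar)), A, u) + p
p_hat = disturbance_observer(y, u, A, Bstar, d)
dev = float(np.max(np.abs(p_hat - p)))
print(dev)                                           # numerically ~0
```

When the model is exact, as in this synthetic check, the observer cancels the plant contribution exactly and returns the disturbance itself.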

A disturbance model estimation block can then be used to identify the frequencies of the sines in the disturbance. With this information, the controller parameters can be updated directly using the procedure described in Sect. 13.2.1. To deal with time-varying disturbances, the Bezout equation (13.17) has to be solved at each sampling instant in order to adjust the output sensitivity function. Nevertheless, given its size (see (13.18)), a significant part of the controller computation time would be consumed by this step. A solution based on the Youla–Kučera parametrization, which reduces this complexity, is described in the following section.

4.2 Reducing the Computational Load of the Design Using the Youla–Kučera Parametrization

The attenuation of narrow-band disturbances using band-stop filters (BSF) has been presented in Sect. 13.2.1 in the context of linear controllers.

In an indirect adaptive regulation scheme, the Diophantine equation (13.17) has to be solved either at each sampling time (adaptive operation) or each time a change in the narrow-band disturbances’ frequencies occurs (self-tuning operation). The computational complexity of (13.17) is significant from the perspective of its use in adaptive regulation. In this section, we show how the computational load of the design procedure can be reduced by using the Youla–Kučera parametrization.

As before, a multiple band-stop filter, (13.11), should be computed based on the frequencies of the multiple narrow-band disturbances (the problem of frequency estimation will be discussed in Sect. 13.4.3).

Suppose that a nominal controller is available, as in (13.28) and (13.29), that assures nominal performances for the closed-loop system in the absence of narrow-band disturbances. This controller satisfies the Bezout equation

$$\begin{aligned} P_0(z^{-1})=A(z^{-1})S_0(z^{-1})+z^{-d}B(z^{-1})R_0(z^{-1}). \end{aligned}$$
(13.79)

Since \(P_{BSF}(z^{-1})\) will define part of the desired closed-loop poles, it is reasonable to consider an IIR Youla–Kučera filter of the form \(\frac{B_Q(z^{-1})}{P_{BSF}(z^{-1})}\) (which will automatically introduce \(P_{BSF}(z^{-1})\) as part of the closed-loop poles). For this purpose, the controller polynomials are factorized as

$$\begin{aligned} R(z^{-1})&=R_0(z^{-1})P_{BSF}(z^{-1})+A(z^{-1})H_{R_0}(z^{-1})H_{S_0}(z^{-1})B_Q(z^{-1}), \end{aligned}$$
(13.80)
$$\begin{aligned} S(z^{-1})&=S_0(z^{-1})P_{BSF}(z^{-1})-z^{-d}B(z^{-1})H_{R_0}(z^{-1})H_{S_0}(z^{-1})B_Q(z^{-1}), \end{aligned}$$
(13.81)

where \(B_Q(z^{-1})\) is an FIR filter that should be computed in order to satisfy

$$\begin{aligned} P(z^{-1})=A(z^{-1})S(z^{-1})+z^{-d}B(z^{-1})R(z^{-1}), \end{aligned}$$
(13.82)

for \(P(z^{-1})=P_0(z^{-1})P_{BSF}(z^{-1})\), and \(R_0(z^{-1})\), \(S_0(z^{-1})\) given by (13.28) and (13.29), respectively. It can be seen from (13.80) and (13.81), using (13.28) and (13.29), that the new controller polynomials conserve the fixed parts of the nominal controller.
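Since (13.80)–(13.81) are pure polynomial products and sums, they can be sketched in a few lines of Python (the polynomial values below are hypothetical). The check at the end verifies numerically that \(AS+z^{-d}BR = P_0P_{BSF}\) holds identically, whatever \(B_Q\) is, which is exactly why the parametrization preserves the nominal poles and adds those of \(P_{BSF}\).

```python
import numpy as np

def polyadd(p, q):
    """Sum of two polynomials in z^{-1} given as ascending coefficient arrays."""
    n = max(len(p), len(q))
    out = np.zeros(n)
    out[:len(p)] += p
    out[:len(q)] += q
    return out

def yk_bsf_controller(R0, S0, A, B, d, HR0, HS0, BQ, PBSF):
    """Controller polynomials (13.80)-(13.81) of the Youla-Kucera
    parametrization with band-stop filters; z^{-d} is a block of d zeros."""
    common = np.polymul(np.polymul(HR0, HS0), BQ)
    R = polyadd(np.polymul(R0, PBSF), np.polymul(A, common))
    S = polyadd(np.polymul(S0, PBSF),
                -np.polymul(np.concatenate((np.zeros(d), B)), common))
    return R, S

# Check A*S + z^{-d} B*R = (A*S0 + z^{-d} B*R0) * P_BSF on hypothetical data:
R0, S0 = [1.0, 0.3], [1.0, -0.4]
A, B, d = [1.0, -0.8, 0.1], [0.5, 0.2], 1
HR0, HS0 = [1.0, 1.0], [1.0, -1.0]
BQ, PBSF = [0.2, -0.1, 0.05], [1.0, -1.3, 0.9]
R, S = yk_bsf_controller(R0, S0, A, B, d, HR0, HS0, BQ, PBSF)
zB = np.concatenate((np.zeros(d), B))
lhs = polyadd(np.polymul(A, S), np.polymul(zB, R))
rhs = np.polymul(polyadd(np.polymul(A, S0), np.polymul(zB, R0)), PBSF)
n = max(len(lhs), len(rhs))
dev = float(np.max(np.abs(polyadd(lhs, np.zeros(n)) - polyadd(rhs, np.zeros(n)))))
print(dev)   # numerically ~0
```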

Equation (13.18) gives the size of the matrix equation to be solved if the Youla–Kučera parametrization is not used. Using the previously introduced YK parametrization, it will be shown that a smaller matrix equation can be found, allowing the computation of the \(B_Q(z^{-1})\) filter such that the same shaping is introduced on the output sensitivity function (13.16). This occurs if the controller denominator \(S(z^{-1})\) in (13.81) is the same as the one given in (13.13), i.e.,

$$\begin{aligned} S(z^{-1})=S_{BSF}(z^{-1})H_{S_0}(z^{-1})S'(z^{-1}), \end{aligned}$$
(13.83)

where \(H_S(z^{-1})\) has been replaced by (13.15).

Replacing \(S(z^{-1})\) in the left term with its formula given in (13.81) and rearranging the terms, one obtains

$$\begin{aligned} S_0P_{BSF}=S_{BSF}H_{S_0}S'+z^{-d}BH_{R_0}H_{S_0}B_Q. \end{aligned}$$
(13.84)

Taking also (13.29) into consideration, one obtains

$$\begin{aligned} S'_0P_{BSF}=S_{BSF}S'+z^{-d}BH_{R_0}B_Q, \end{aligned}$$
(13.85)

which is similar to (13.54) except that band-stop filters are used instead of notch filters.

In the last equation, the left-hand side is known and only \(S'(z^{-1})\) and \(B_Q(z^{-1})\) on the right-hand side are unknown. This is again a Bezout equation, which can be solved by finding the solution of a matrix equation of dimension

$$\begin{aligned} n_{Bez_{YK}}=n_B+d+n_{H_{R_0}}+2\cdot n-1. \end{aligned}$$
(13.86)

As can be observed, the size of the new Bezout equation is reduced in comparison with (13.18) by \(n_A+n_{H_{S_0}}\). For systems of large dimension, this has a significant influence on the computation time. Taking into account that the nominal controller is the unique and minimal-degree solution of the Bezout equation (13.79), we find that the left-hand side of (13.85) is a polynomial of degree

$$\begin{aligned} n_{S'_0}+2\cdot n = 2\cdot n + n_B + d +n_{H_{R_0}}-1, \end{aligned}$$
(13.87)

which is equal to the quantity given in (13.86). Therefore, the solution of the simplified Bezout equation (13.85) is unique and of minimal degree. Furthermore, the order of the \(B_Q\) FIR filter is equal to \(2\cdot n\).
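A Bezout equation of this type reduces to a square linear system whose matrix has a Sylvester structure (shifted copies of the known polynomials). A generic sketch in Python is given below; in the present context \(X\) stands for \(S_{BSF}\), \(Y\) for \(z^{-d}BH_{R_0}\) and \(C\) for \(S'_0P_{BSF}\), and the numerical polynomials in the round-trip check are hypothetical.

```python
import numpy as np

def bezout_solve(X, Y, C, n_q):
    """Solve the Bezout equation X*S + Y*Q = C for the coefficients of the
    unknown polynomials S and Q (ascending powers of z^{-1}), with Q
    restricted to n_q coefficients. The unknowns are stacked into one
    vector and obtained from a square Sylvester-type matrix whose columns
    are shifted copies of X and Y."""
    C = np.asarray(C, dtype=float)
    nC = len(C)
    n_s = nC - n_q                       # number of coefficients of S
    M = np.zeros((nC, nC))
    for j in range(n_s):                 # columns multiplying S coefficients
        M[j:j + len(X), j] = X
    for j in range(n_q):                 # columns multiplying Q coefficients
        M[j:j + len(Y), n_s + j] = Y
    sol = np.linalg.solve(M, C)
    return sol[:n_s], sol[n_s:]

# Round-trip check: build C from known S and Q, then recover them.
X = np.array([1.0, -0.5, 0.06])          # plays the role of S_BSF (known)
Y = np.array([0.0, 1.0, 0.4])            # plays the role of z^{-d} B H_R0 (known)
S_true = np.array([1.0, 0.2, -0.1])
Q_true = np.array([0.7, -0.3])
C = np.convolve(X, S_true) + np.concatenate((np.convolve(Y, Q_true), [0.0]))
S_hat, Q_hat = bezout_solve(X, Y, C, n_q=len(Q_true))
print(np.round(S_hat, 6), np.round(Q_hat, 6))   # recovers S_true and Q_true
```

The system is nonsingular whenever \(X\) and \(Y\) are coprime and the degree bookkeeping matches, which is exactly the uniqueness argument made above.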

Figure 13.6 summarizes the implementation of the Youla–Kučera parametrized indirect adaptive controller.

Fig. 13.6
figure 6

Youla–Kučera schema for indirect adaptive control

4.3 Frequency Estimation Using Adaptive Notch Filters

In order to use the presented control strategy in the presence of unknown and/or time-varying narrow-band disturbances, one needs a real-time estimate of the frequencies of the spikes in the spectrum of the disturbance. Based on this estimate, the band-stop filters are then designed in real time.

In the framework of narrow-band disturbance rejection, it is usually supposed that the disturbances are in fact sinusoidal signals with variable frequencies. It is assumed that the number of narrow-band disturbances is known (similar to [2, 3, 8]). A technique based on ANFs (adaptive notch filters) will be used to estimate the frequencies of the sinusoidal signals in the disturbance (more details can be found in [6, 23]).

The general form of an ANF is

$$\begin{aligned} H_{f}(z^{-1})=\frac{A_{f}(z^{-1})}{A_{f}(\rho z^{-1})}, \end{aligned}$$
(13.88)

where the polynomial \(A_{f}(z^{-1})\) is such that the zeros of the transfer function \(H_{f}(z^{-1})\) lie on the unit circle. A necessary condition for a monic polynomial to satisfy this property is that its coefficients have a mirror symmetric form

$$\begin{aligned} A_{f}(z^{-1})=1+a_1^{f}z^{-1}+\cdots +a_n^{f}z^{-n}+\cdots +a_1^{f}z^{-2n+1}+z^{-2n}. \end{aligned}$$
(13.89)

Another requirement is that the poles of the ANF should lie on the same radial lines as the zeros, but slightly closer to the origin. Using filter denominators of the general form \(A_{f}(\rho z^{-1})\), with \(\rho \) a positive real number smaller than but close to 1, the poles have the desired property and are in fact located on a circle of radius \(\rho \) [6].
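A one-cell illustration of this construction in Python: scaling the k-th coefficient of \(A_f(z^{-1})\) by \(\rho ^k\) yields \(A_f(\rho z^{-1})\), which places the poles on the circle of radius \(\rho \) while the zeros stay on the unit circle (cf. (13.88); the numerical values are hypothetical).

```python
import numpy as np

def anf_cell(freq_hz, fs_hz, rho=0.95):
    """Second-order ANF cell H_f(z^-1) = A_f(z^-1) / A_f(rho z^-1), cf.
    (13.88), with A_f = 1 + a z^-1 + z^-2 and a = -2 cos(2 pi f / fs)."""
    a = -2.0 * np.cos(2.0 * np.pi * freq_hz / fs_hz)
    num = np.array([1.0, a, 1.0])            # zeros exactly on the unit circle
    den = num * rho ** np.arange(3)          # a_k -> a_k rho^k: poles at radius rho
    return num, den

num, den = anf_cell(100.0, 1000.0, rho=0.95)
print(np.abs(np.roots(num)))                 # magnitudes equal to 1 (up to rounding)
print(np.abs(np.roots(den)))                 # magnitudes equal to 0.95
```

The same coefficient scaling \(a_k \rightarrow \rho ^k a_k\) applies to the general mirror-symmetric polynomial (13.89).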

The estimation algorithm will be detailed next. It is assumed that the disturbance signal (or a good estimation) is available.

A cascade of second-order ANF cells is considered, their number being given by the number of narrow-band signals whose frequencies have to be estimated. The main idea behind this algorithm is to consider the signal \(\hat{p}(t)\) as having the form

$$\begin{aligned} \hat{p}(t)=\sum _{i=1}^{n}c_i \sin (\omega _i \cdot t + \beta _i) + \eta (t), \end{aligned}$$
(13.90)

where \(\eta (t)\) is a noise affecting the measurement and n is the number of narrow-band signals with different frequencies.

The ANF cascade form will be given by (this is an equivalent representation of Eqs. (13.88) and (13.89))

$$\begin{aligned} H_f(z^{-1})=\prod _{i=1}^{n}H_f^i(z^{-1})=\prod _{i=1}^{n}\frac{1+a^{f_i}z^{-1}+z^{-2}}{1+\rho a^{f_i}z^{-1}+\rho ^2 z^{-2}}. \end{aligned}$$
(13.91)

Next, the estimation of one spike’s frequency is considered, assuming convergence of the other \(n-1\), which can thus be filtered out of the estimated disturbance signal, \(\hat{p}(t)\), by applying

$$\begin{aligned} \hat{p}^j(t)=\prod _{\begin{array}{c} i=1 \\ i\ne j \end{array}}^{n}\frac{1+a^{f_i}z^{-1}+z^{-2}}{1+\rho a^{f_i}z^{-1}+\rho ^2 z^{-2}} \hat{p}(t). \end{aligned}$$
(13.92)

The prediction error is obtained from

$$\begin{aligned} \varepsilon (t)=H_f(z^{-1}) \hat{p}(t) \end{aligned}$$
(13.93)

and can be computed based on one of the \(\hat{p}^j(t)\) to reduce the computational complexity. Each cell can be adapted independently after prefiltering the signal by the others. Following the Recursive Prediction Error (RPE) technique, the gradient is obtained as

$$\begin{aligned} \psi ^j(t)=-\frac{\partial \varepsilon (t)}{\partial a^{f_j}}=\frac{(1-\rho )(1-\rho z^{-2})}{1+\rho a^{f_j}z^{-1}+\rho ^2 z^{-2}}\hat{p}^j(t). \end{aligned}$$
(13.94)

The parameter adaptation algorithm can be summarized as

$$\begin{aligned} \hat{a}^{f_j}(t)&=\hat{a}^{f_j}(t-1)+\alpha (t-1)\cdot \psi ^j(t)\cdot \varepsilon (t)\end{aligned}$$
(13.95)
$$\begin{aligned} \alpha (t)&=\frac{\alpha (t-1)}{\lambda +\alpha (t-1)\psi ^j(t)^2}. \end{aligned}$$
(13.96)

where \(\hat{a}^{f_j}\) are estimations of the true \(a^{f_j}\), which are connected to the narrow-band signals’ frequencies by \(\omega _{f_j}=f_s\cdot \arccos (-\frac{a^{f_j}}{2})\), where \(f_s\) is the sampling frequency.
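A single-cell (\(n=1\)) sketch of the estimator in Python is given below. It follows the spirit of (13.93)–(13.96), but the pseudo-gradient \(\psi \) is generated here by differentiating the \(\varepsilon \)-recursion directly; sign and normalization conventions for \(\psi \) vary between references, so this is an illustration rather than a literal transcription of (13.94). All numerical values (sampling frequency, tone frequency, initialization) are hypothetical.

```python
import numpy as np

def anf_frequency_estimate(p_hat, fs, rho=0.95, lam=0.99, a0=0.0, alpha0=0.1):
    """Single-cell ANF frequency estimation in the spirit of (13.93)-(13.96).
    The prediction error eps and the pseudo-gradient psi = -d eps/d a are
    generated by time-recursive difference equations; returns the estimated
    frequency trajectory in Hz."""
    a, alpha = a0, alpha0
    e1 = e2 = g1 = g2 = 0.0          # eps(t-1), eps(t-2), psi(t-1), psi(t-2)
    f_traj = []
    for t in range(2, len(p_hat)):
        # eps(t) from A_f(rho q^-1) eps = A_f(q^-1) p_hat   (13.93)
        e = (p_hat[t] + a * p_hat[t - 1] + p_hat[t - 2]
             - rho * a * e1 - rho ** 2 * e2)
        # psi(t) = -d eps(t)/d a, obtained by differentiating the recursion
        g = rho * e1 - p_hat[t - 1] - rho * a * g1 - rho ** 2 * g2
        a = float(np.clip(a + alpha * g * e, -2.0, 2.0))   # cf. (13.95)
        alpha = alpha / (lam + alpha * g * g)              # (13.96)
        e2, e1, g2, g1 = e1, e, g1, g
        f_traj.append(fs * np.arccos(-a / 2.0) / (2.0 * np.pi))
    return np.array(f_traj)

# Hypothetical example: one 100 Hz tone sampled at 1 kHz, initialized at 80 Hz.
fs = 1000.0
t = np.arange(4000)
p_hat = np.sin(2.0 * np.pi * 100.0 * t / fs)
f_traj = anf_frequency_estimate(p_hat, fs, a0=-2.0 * np.cos(2.0 * np.pi * 80.0 / fs))
print(round(float(f_traj[-1]), 2))
```

The clipping of \(\hat{a}^{f}\) to \([-2,2]\) keeps the \(\arccos \) mapping to a frequency well defined during the transient.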

4.3.1 Implementation of the Algorithm

The design parameters that need to be provided to the algorithm are: the number n of narrow-band spikes in the disturbance; the desired attenuations and dampings of the BSFs, either as unique values (\(M_i=M,~\zeta _{d_i}=\zeta _{d},~\forall i \in \{1,\ldots , n\}\)) or as individual values for each spike (\(M_i\) and \(\zeta _{d_i}\)); and the central controller (\(R_0\), \(S_0\)) together with its fixed parts (\(H_{R_0}\), \(H_{S_0}\)), in addition, of course, to the estimates of the spikes’ frequencies. The control signal is computed by applying the following procedure at each sampling time:

  1. 1.

    Get the measured output \(y(t+1)\) and the applied control u(t) to compute the estimated disturbance signal \(\hat{p}(t+1)\) as in (13.78).

  2. 2.

    Estimate the disturbances’ frequencies using adaptive notch filters, Eqs. (13.92)–(13.96).

  3. 3.

    Calculate \(S_{BSF}(z^{-1})\) and \(P_{BSF}(z^{-1})\) as in (13.8)–(13.11).

  4. 4.

    Find \(Q(z^{-1})\) by solving the reduced order Bezout equation (13.85).

  5. 5.

    Compute and apply the control using (13.5) with R and S given respectively by (13.80) and (13.81) (see also Fig. 13.6):

    $$\begin{aligned} S_0u(t) = -R_0y(t+1) - H_{S_0}H_{R_0}\left( B_Q(t)w(t+1)-P_{BSF}^{*}u_Q(t)\right) . \end{aligned}$$
    (13.97)

4.4 Stability Analysis of the Indirect Adaptive Scheme

The stability analysis of this scheme can be found in [24].

5 Experimental Results: Attenuation of Three Tonal Disturbances with Variable Frequencies

Samples of the experimental results obtained with the direct adaptive regulation scheme (see Sect. 13.2.3 and [25]), with the interlaced adaptive regulation scheme (see Sect. 13.3) and with the indirect adaptive regulation scheme (see Sect. 13.4) on the test bench described in Chap. 2, Sect. 2.2 are given in this section. A step change in the frequencies of three tonal disturbances is considered (with return to the initial values of the frequencies). Figures 13.7, 13.8 and 13.9 show the time responses of the residual force. Figures 13.10, 13.11 and 13.12 show the difference between the PSD in open-loop and in closed-loop as well as the estimated output sensitivity function. Figure 13.13 shows the evolution of the parameters of the FIR adaptive Youla–Kučera filter used in the direct adaptive regulation scheme. Figures 13.14 and 13.15 show the evolution of the estimated parameters of \(D_p\) (used to compute \(A_Q\)—the denominator of the IIR Youla–Kučera filter) and of the numerator \(B_Q\) of the IIR Youla–Kučera filter used in the interlaced adaptive regulation scheme. Figure 13.16 shows the evolution of the estimated frequencies of the three tonal disturbances used to compute the band-stop filters in the indirect adaptive regulation scheme.

Fig. 13.7
figure 7

Time response of the direct adaptive regulation scheme using a FIR Youla–Kučera filter for a step change in frequencies (three tonal disturbances)

Fig. 13.8
figure 8

Time response of the interlaced adaptive regulation scheme using an IIR Youla–Kučera filter for a step change in frequencies (three tonal disturbances)

Fig. 13.9
figure 9

Time response of the indirect adaptive regulation scheme using BSF filters for a step change in frequencies (three tonal disturbances)

For this particular experiment, the interlaced adaptive regulation scheme offers the best compromise between disturbance attenuation and maximum amplification. Nevertheless, a global evaluation requires comparing the experimental results over a number of situations, and this is done in the next section.

Fig. 13.10
figure 10

Difference between open-loop and closed-loop PSD of the residual force and the estimated output sensitivity function for the direct adaptive regulation scheme

Fig. 13.11
figure 11

Difference between open-loop and closed-loop PSD of the residual force and the estimated output sensitivity function for the interlaced adaptive regulation scheme

Fig. 13.12
figure 12

Difference between open-loop and closed-loop PSD of the residual force and the estimated output sensitivity function for the indirect adaptive regulation scheme

Fig. 13.13
figure 13

Evolution of the parameters of the FIR Youla–Kučera filter for a step change in frequencies (direct adaptive regulation scheme)

Fig. 13.14
figure 14

Evolution of the estimated parameters of the \(D_P\) polynomial (disturbance model) during a step change of the disturbance frequencies (interlaced adaptive regulation scheme)

Fig. 13.15
figure 15

Evolution of the parameters of the numerator of the IIR Youla–Kučera filter during a step change of the disturbance frequencies (interlaced adaptive regulation scheme)

Fig. 13.16
figure 16

Evolution of the estimated frequencies of the disturbance during a step change of disturbance frequencies (indirect adaptive regulation scheme)

6 Experimental Results: Comparative Evaluation of Adaptive Regulation Schemes for Attenuation of Multiple Narrow-Band Disturbances

6.1 Introduction

Three schemes for adaptive attenuation of single and multiple sparsely located unknown and time-varying narrow-band disturbances have been presented in Chap. 12, Sect. 12.2.2 and in Sects. 13.3 and 13.4 of this chapter. They can be summarized as follows:

  1. Direct adaptive regulation using FIR Youla–Kučera parametrization

  2. Interlaced adaptive regulation using IIR Youla–Kučera parametrization

  3. Indirect adaptive regulation using band-stop filters

The objective is to comparatively evaluate these three approaches in a relevant experimental environment.

An international benchmark on adaptive regulation of sparsely distributed unknown and time-varying narrow-band disturbances was organized in 2012–2013. A summary of the results can be found in [26]. The various contributions can be found in [25, 27–32]. Approaches 1 and 3 have been evaluated in this context. Approach 2, which is posterior to the publication of the benchmark results, has also been evaluated in the same context. Detailed results can be found in [33]. Approaches 1 and 3 provided some of the best results for the fulfilment of the benchmark specifications (see [26]). Therefore, a comparison of the second approach with the first and third is relevant for assessing its potential.

In what follows a comparison of the three approaches will be made in the context of the mentioned benchmark. The objective will be to assess their potential using some of the global indicators used in benchmark evaluation.

In Chap. 12, Sect. 12.3, some of the basic performance indicators have been presented. In the benchmark evaluation process, several protocols allowing the performance to be tested under various environmental conditions have been defined. Based on the results obtained for the various protocols, global performance indicators have been defined; they will be presented in the next section. This will later allow a compact comparison of the real-time performance of the three approaches considered in Chap. 12 and this chapter. Further details can be found in [25, 28, 33].

The basic benchmark specifications are summarized in Table 13.3 for the three levels of difficulty (range of frequency variation: 50–95 Hz):

  • Level 1: Rejection of a single time-varying sinusoidal disturbance.

  • Level 2: Rejection of two time-varying sinusoidal disturbances.

  • Level 3: Rejection of three time-varying sinusoidal disturbances.

Table 13.3 Control specifications in the frequency domain

6.2 Global Evaluation Criteria

Evaluation of the performance will be done for both simulation and real-time results. The simulation results give information about the potential of the design methods under the assumption: design model \(=\) true plant model. The real-time results additionally indicate the robustness of the design with respect to plant model uncertainties and real noise.

Steady-State Performance (Tuning capabilities)

As mentioned earlier, these are the most important performances. Only if good tuning for the attenuation of the disturbance can be achieved does it make sense to examine the transient performance of a given scheme. For the steady-state performance, which is evaluated only for the simple step change in frequencies, the variable k, with \(k=1,\ldots ,3\), will indicate the level of the benchmark. In several criteria a mean of certain variables will be considered. The number of distinct experiments, M, is used to compute the mean. This number depends upon the level of the benchmark (\(M=10\) if \(k=1\), \(M=6\) if \(k=2\), and \(M=4\) if \(k=3\)).

The performance can be evaluated with respect to the benchmark specifications. The benchmark specifications will be in the form XXB, where XX denotes the evaluated variable and B indicates the benchmark specification. \(\varDelta XX\) will represent the error with respect to the benchmark specification.

Global Attenuation—GA

The benchmark specification corresponds to \(GAB_k=30\) dB, for all the levels and frequencies, except for 90 and 95 Hz at \(k=1\), for which \(GAB_1\) is 28 and 24 dB respectively.

Error:

$$\begin{aligned} \varDelta GA_i&=GAB_k-GA_i \quad \text {if} \quad GA_i<GAB_k\\ \varDelta GA_i&=0 \quad \text {if} \quad GA_i \ge GAB_k \end{aligned}$$

with \(i=1,\ldots ,M\).

Global Attenuation Criterion:

$$\begin{aligned} J_{\varDelta GA_{k}}=\frac{1}{M}\sum ^{M}_{i=1}{\varDelta GA_i}. \end{aligned}$$
(13.98)

Disturbance Attenuation—DA

The benchmark specification corresponds to \(DAB=40\) dB, for all the levels and frequencies.

Error:

$$\begin{aligned} \varDelta DA_{ij}&=DAB-DA_{ij} \quad \text {if} \quad DA_{ij}<DAB\\ \varDelta DA_{ij}&=0 \quad \text {if} \quad DA_{ij}\ge DAB \end{aligned}$$

with \(i=1,\ldots ,M\) and \(j=1,\ldots ,j_{max}\), where \(j_{max}=k\).

Disturbance Attenuation Criterion

$$\begin{aligned} J_{\varDelta DA_k}=\frac{1}{kM}\sum ^{M}_{i=1}\sum ^{k}_{j=1}{\varDelta DA_{ij}} \end{aligned}$$
(13.99)

Maximum Amplification—MA

The benchmark specifications depend on the level, and are defined as

$$\begin{aligned} MAB_k&=6 \text { dB}, \quad \text {if} \quad k=1\\ MAB_k&=7 \text { dB}, \quad \text {if} \quad k=2\\ MAB_k&=9 \text { dB}, \quad \text {if} \quad k=3 \end{aligned}$$

Error:

$$\begin{aligned} \varDelta MA_i&=MA_i-MAB_k, \quad \text {if} \quad MA_i>MAB_k\\ \varDelta MA_i&=0, \quad \text {if} \quad MA_i\le MAB_k \end{aligned}$$

with \(i=1,\ldots ,M\).

Maximum Amplification Criterion

$$\begin{aligned} J_{\varDelta MA_k}=\frac{1}{M}\sum ^{M}_{i=1}\varDelta MA_i. \end{aligned}$$
(13.100)

Global Criterion of Steady-State Performance for One Level

$$\begin{aligned} J_{SS_k}=\frac{1}{3}[J_{\varDelta GA_k}+J_{\varDelta DA_k}+J_{\varDelta MA_k}]. \end{aligned}$$
(13.101)

Benchmark Satisfaction Index for Steady-State Performance

The Benchmark Satisfaction Index is a performance index computed from the average criteria \(J_{\varDelta GA_{k}}\), \(J_{\varDelta DA_{k}}\) and \(J_{\varDelta MA_{k}}\). The Benchmark Satisfaction Index is \(100\,\%\) if these quantities are “0” (full satisfaction of the benchmark specifications), and it is \(0\,\%\) if the attenuation errors reach half of the specified values for GA and DA, or if the maximum amplification reaches twice the specified value for MA. The corresponding reference error quantities are summarized below:

$$\begin{aligned} \varDelta GA_{index}= & {} 15,\\ \varDelta DA_{index}= & {} 20,\\ \varDelta MA_{index,1}= & {} 6, \quad \text {if} \quad k=1,\\ \varDelta MA_{index,2}= & {} 7, \quad \text {if} \quad k=2,\\ \varDelta MA_{index,3}= & {} 9, \quad \text {if} \quad k=3. \end{aligned}$$

The computation formulas are

$$\begin{aligned} GA_{index,k}&=\left( \frac{\varDelta GA_{index}-J_{\varDelta GA_k}}{\varDelta GA_{index}}\right) 100\,\%\\ DA_{index,k}&=\left( \frac{\varDelta DA_{index}-J_{\varDelta DA_k}}{\varDelta DA_{index}}\right) 100\,\%\\ MA_{index,k}&=\left( \frac{\varDelta MA_{index,k}-J_{\varDelta MA_k}}{\varDelta MA_{index,k}}\right) 100\,\%. \end{aligned}$$

Then the Benchmark Satisfaction Index (BSI), is defined as

$$\begin{aligned} BSI_k=\frac{GA_{index,k}+DA_{index,k}+MA_{index,k}}{3}. \end{aligned}$$
(13.102)
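The computation of \(BSI_k\) from the per-experiment errors is a direct transcription of the formulas above; a minimal Python sketch follows (the function name and the numerical values in the check are hypothetical).

```python
def bsi_level(dGA, dDA, dMA, MA_index_k, GA_index=15.0, DA_index=20.0):
    """Benchmark Satisfaction Index BSI_k, (13.98)-(13.102), for one level.
    dGA, dDA, dMA : lists of per-experiment errors Delta GA_i, Delta DA_ij
                    (flattened over i and j) and Delta MA_i; an entry is 0
                    when the corresponding specification is met.
    MA_index_k    : reference error for MA at this level (6, 7 or 9 dB)."""
    J_GA = sum(dGA) / len(dGA)                     # (13.98)
    J_DA = sum(dDA) / len(dDA)                     # (13.99)
    J_MA = sum(dMA) / len(dMA)                     # (13.100)
    GA_idx = 100.0 * (GA_index - J_GA) / GA_index
    DA_idx = 100.0 * (DA_index - J_DA) / DA_index
    MA_idx = 100.0 * (MA_index_k - J_MA) / MA_index_k
    return (GA_idx + DA_idx + MA_idx) / 3.0        # (13.102)

# Full satisfaction of the specifications (all errors zero, level 3):
print(bsi_level([0.0] * 4, [0.0] * 12, [0.0] * 4, MA_index_k=9.0))  # 100.0
```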

The results for \(BSI_k\) obtained in simulation and in real time for each approach and all the levels are summarized in Tables 13.4 and 13.5, respectively, and represented graphically in Fig. 13.17. The YK IIR scheme provides the best results in simulation for all the levels, with the indirect approach providing very close results. In real time, it is the YK IIR scheme which gives the best results for level 1 and the YK FIR scheme for levels 2 and 3. Nevertheless, one has to mention that the results of the YK FIR scheme are highly dependent on the design of the central controller.

Table 13.4 Benchmark satisfaction index for steady-state performance (simulation results)
Table 13.5 Benchmark satisfaction index for steady-state performance (real-time results)
Fig. 13.17
figure 17

Benchmark Satisfaction Index (BSI) for all levels for simulation and real-time results

The results obtained in simulation allow the characterization of the performance of the proposed designs under the assumption that design model = true plant model. Therefore, in terms of the capability of a design method to meet the benchmark specifications, the simulation results are fully relevant. It is also important to recall that Level 3 of the benchmark is the most important. The difference between the simulation results and the real-time results allows one to characterize the robustness in performance with respect to uncertainties on the plant and noise models used for design.

To assess the performance loss when passing from simulation to real-time results, the Normalized Performance Loss and its associated global index are used. For each level one defines the Normalized Performance Loss as

$$\begin{aligned} NPL_k=\left( \frac{BSI_{ksim}-BSI_{kRT}}{BSI_{ksim}}\right) 100\,\% \end{aligned}$$
(13.103)

and the global NPL is given by

$$\begin{aligned} NPL&=\frac{1}{N}\sum ^{N}_{k=1}NPL_k \end{aligned}$$
(13.104)

where \(N=3\).
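The NPL computation (13.103)–(13.104) can likewise be sketched in a few lines (the function name and numerical values are hypothetical).

```python
def npl_index(bsi_sim, bsi_rt):
    """Normalized Performance Loss (13.103) per level and its global
    average (13.104) over the N = len(bsi_sim) levels."""
    per_level = [100.0 * (s - r) / s for s, r in zip(bsi_sim, bsi_rt)]
    return per_level, sum(per_level) / len(per_level)

# A uniform 10 % drop from simulation to real time on all three levels:
print(round(npl_index([90.0, 80.0, 70.0], [81.0, 72.0, 63.0])[1], 6))  # 10.0
```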

Table 13.6 gives the normalized performance loss for the three schemes. Figure 13.18 summarizes these results in a bar graph. The YK IIR scheme assures the minimum loss for level 1, while the YK FIR scheme assures the minimum loss for levels 2 and 3.

Table 13.6 Normalized performance loss
Fig. 13.18
figure 18

Normalized Performance Loss (NPL) for all levels (smaller \(=\) better)

Global Evaluation of Transient Performance

For the evaluation of the transient performance, an indicator has been defined by Eq. (12.46). From this indicator, a global criterion can be defined as follows

$$\begin{aligned} J_{\varDelta {Trans_k}}=\frac{1}{M}\sum ^{M}_{i=1}\varDelta {Trans_i}, \end{aligned}$$
(13.105)

where \(M=10\) if \(k=1\), \(M=6\) if \(k=2\), and \(M=4\) if \(k=3\).

Transient performance is summarized in Table 13.7. All the schemes achieve \(100\,\%\) of the satisfaction index for transient performance in most cases, which means that the adaptation transient duration is less than or equal to 2 s (the exception being the indirect scheme for level 2 in simulation).

Table 13.7 Benchmark satisfaction index for transient performance (for simple step test)

Evaluation of the Complexity

For complexity evaluation, the measure of the Task Execution Time (TET) in the xPC Target environment will be used. This is the time required to perform all the computations of a given method on the target PC, a task that has to be completed within each sampling period. The more complex the approach, the larger the TET. One can argue that the TET also depends on how the algorithm is programmed. Nevertheless, this may change the TET by a factor of 2 to 4, but not by an order of magnitude. The xPC Target MATLAB environment delivers an average of the TET (ATET). It is, however, interesting to assess the TET specifically associated with the controller by subtracting the average TET in open-loop operation from the measured TET in closed-loop operation.

The following criteria are defined to compare the complexity of the various approaches:

$$\begin{aligned} \varDelta TET_{Simple,k}&=ATET_{Simple,k}-ATET_{OL_{Simple,k}}\end{aligned}$$
(13.106)
$$\begin{aligned} \varDelta TET_{Step,k}&=ATET_{Step,k}-ATET_{OL_{Step,k}}\end{aligned}$$
(13.107)
$$\begin{aligned} \varDelta TET_{Chirp,k}&=ATET_{Chirp,k}-ATET_{OL_{Chirp,k}} \end{aligned}$$
(13.108)

where \(k=1,\ldots ,3\). The symbols Simple, Step and Chirp are associated, respectively, with the Simple Step Test (application of the disturbance), Step Changes in Frequency and Chirp Changes in Frequency. The global \(\varDelta TET_{k}\) for one level is defined as the average of the above computed quantities:

$$\begin{aligned} \varDelta TET_{k}=\frac{1}{3}\left( \varDelta TET_{Simple,k}+\varDelta TET_{Step,k}+\varDelta TET_{Chirp,k}\right) \end{aligned}$$
(13.109)

where \(k=1,\ldots ,3\). Table 13.8 and Fig. 13.19 summarize the results obtained for the three schemes. All the values are in microseconds. Higher values indicate higher complexity. The lowest values (lower complexity) are highlighted.

As expected, the YK FIR algorithm has the smallest complexity. The YK IIR scheme has a higher complexity than the YK FIR (due to the incorporation of the estimation of \(A_Q(z^{-1})\)) but remains significantly less complex than the indirect approach using BSF.

Table 13.8 Task execution time
Fig. 13.19
figure 19

The controller average task execution time (\(\varDelta TET\))

Tests with a different experimental protocol have also been carried out. The results obtained are consistent with those presented above. Details can be found in [10, 34].

7 Concluding Remarks

It is difficult to decide which is the best scheme for adaptive attenuation of multiple narrow-band disturbances. Several criteria have to be taken into account:

  • If an individual attenuation level has to be fixed for each spike, the indirect adaptive scheme using BSF is the most appropriate, since it allows a specific attenuation to be achieved for each spike.

  • If the objective is to have a very simple design of the central controller, the YK IIR scheme and the indirect adaptive scheme have to be considered.

  • If the objective is to have the simplest scheme requiring the minimum computation time, clearly the YK FIR scheme has to be chosen.

  • If the objective is a compromise between the various requirements mentioned above, the YK IIR adaptive scheme has to be chosen.

8 Notes and References

The reference [34] gives a thorough view of the various solutions for adaptive attenuation of multiple narrow-band disturbances. The specific references are [25, 27–32], to which the reference [10] has to be added.