FormalPara Learning Objectives

After reading this chapter, the reader is expected to

  • Implement and analyse the Wiener filter.

  • Write a python code to implement the LMS algorithm and its variants.

  • Perform system identification using the LMS algorithm.

  • Perform inverse system modelling using the NLMS algorithm.

  • Implement adaptive line enhancer using the LMS algorithm and its variants.

  • Implement the RLS algorithm.

FormalPara Roadmap of the Chapter

The roadmap of this chapter is depicted below. This chapter starts with the Wiener filter, then presents the least mean square (LMS) algorithm and its variants for adaptive signal processing applications such as system identification and signal denoising. Next, the RLS algorithm is discussed with suitable python code.

A tree diagram classifies filters into optimum filters (the Wiener filter) and adaptive filters; the adaptive branch comprises the LMS algorithm, with the NLMS and sign LMS algorithms as variants, and the RLS algorithm.
FormalPara PreLab Questions
  1. List out the valid differences between the optimal filter and the adaptive filter.

  2. What is an adaptive filter? How does it differ from an ordinary filter?

  3. Give examples of adaptive filters.

  4. When are adaptive filters preferred?

  5. List out the performance measures of the adaptive filter.

  6. What is the LMS algorithm?

  7. What do you mean by least square estimation?

  8. List out the variants of the LMS algorithm.

  9. How does the step size impact the LMS algorithm?

  10. What is the RLS algorithm, and how does it differ from LMS?

11.1 Wiener Filter

The Wiener filter is the mean square error (MSE)-optimal stationary linear filter for a signal corrupted by additive noise. Its computation requires the assumption that the signal and noise are stationary random processes. The general block diagram of the Wiener filter is shown in Fig. 11.1. The main objective of the Wiener filter is to obtain the coefficients of an LTI filter whose output y[n] minimizes the MSE between the output and the desired signal or target d[n]. In Fig. 11.1, s[n] denotes the original (clean) signal, which is corrupted by additive noise η[n] to give the observed signal x[n]. The parameters of the filter have to be designed in such a way that the filter output y[n] resembles the desired signal d[n], so that the error e[n] is minimum.

Fig. 11.1 Block diagram of Wiener filter

The expression for the optimal Wiener filter is given by

$$ {h}_{\mathrm{opt}}={R}^{-1}p $$
(11.1)

The above expression is termed the ‘Wiener-Hopf’ equation, named after the American mathematician Norbert Wiener and the Austrian-born mathematician Eberhard Hopf. The expression for the optimal filter depends on the autocorrelation matrix (R) of the observed signal (x[n]) and the cross-correlation vector (p) between the observed signal (x[n]) and the desired signal (d[n]). \( h_{\mathrm{opt}} \) denotes the optimal filter coefficients.

Experiment 11.1 Wiener Filtering

The aim of this experiment is to implement Wiener filtering using python. Here the optimal filter coefficients are obtained using the Wiener-Hopf equation given in Eq. (11.1). The python code for the Wiener filter is shown in Fig. 11.2, and its simulation result is depicted in Fig. 11.3.

Fig. 11.2 Python code for Wiener filtering
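A minimal sketch of this procedure is given below. It follows the steps described above rather than reproducing the listing of Fig. 11.2 exactly; the 1 kHz sampling rate, the 21-tap filter length and the variable names are assumptions.

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)

# Clean 5 Hz sinusoid, assumed sampled at 1 kHz
fs = 1000
t = np.arange(0, 1, 1 / fs)
s = np.sin(2 * np.pi * 5 * t)

# Observed signal x[n]: clean signal plus zero-mean Gaussian noise
x = s + np.random.normal(0, 0.2, len(s))

N = 21  # assumed filter length

# Biased autocorrelation estimate of x for lags 0..N-1
rxx = np.array([np.mean(x[:len(x) - k] * x[k:]) for k in range(N)])
R = np.array([[rxx[abs(i - j)] for j in range(N)] for i in range(N)])

# Cross-correlation p[k] = E{d[n] x[n-k]} with the desired signal d[n] = s[n]
p = np.array([np.mean(s[k:] * x[:len(x) - k]) for k in range(N)])

# Wiener-Hopf solution, Eq. (11.1): h_opt = R^{-1} p
h_opt = np.linalg.solve(R, p)

# Filter the noisy signal with the optimal coefficients
y = np.convolve(x, h_opt, mode='same')

for i, (sig, title) in enumerate(zip((s, x, y),
        ('Clean signal', 'Noisy signal', 'Filtered signal'))):
    plt.subplot(3, 1, i + 1)
    plt.plot(t, sig)
    plt.title(title)
plt.tight_layout()
plt.show()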

Fig. 11.3 Simulation result of Wiener filter (clean, noisy and filtered signals)

The built-in functions used in the python code shown in Fig. 11.2 are summarized in Table 11.1.

Table 11.1 Built-in functions used in the python code given in Fig. 11.2

Inference

From Fig. 11.3, the following observations can be made:

  1. The input or clean signal has a frequency of 5 Hz, and it is a smooth sine waveform.

  2. The noisy signal, which is the input to the Wiener filter, is a distorted version of the clean signal.

  3. The filtered signal is not a perfectly smooth sine waveform. However, this waveform is far better than the noisy signal. Hence, the Wiener filter has the capability to minimize the impact of additive noise in a signal.

Task

  1. Change the value of the standard deviation in the random noise generation command ‘np.random.normal(0,.2,len(s))’ given in Fig. 11.2. Execute and make the appropriate changes in this python code to get a ‘filtered signal’ as similar as possible to the ‘clean signal’.

Experiment 11.2 Wiener Filter Using Built-In Function

This experiment performs Wiener filtering using a built-in function from the ‘scipy’ library. The built-in function ‘wiener’ available in the ‘scipy’ library can be used to filter out the noisy components. In this experiment, a noise-free sinusoidal signal of 5 Hz frequency is generated. The clean signal is corrupted by adding random noise, which follows the normal distribution with zero mean and 0.2 standard deviation. The corrupted signal is then passed through the Wiener filter to minimize the impact of noise. The steps followed, along with the built-in functions used in the program, are given in Table 11.2.

Table 11.2 Steps followed and built-in functions

The python code which performs this task is shown in Fig. 11.4, and the corresponding output is shown in Fig. 11.5.

Fig. 11.4 Wiener filtering using built-in function
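A rough sketch of this experiment is given below; the window size of 29 samples passed to ‘wiener’ is an assumption, while the remaining parameters follow the description above.

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import wiener

np.random.seed(0)

fs = 1000
t = np.arange(0, 1, 1 / fs)
s = np.sin(2 * np.pi * 5 * t)             # clean 5 Hz sinusoid

# Noisy signal: zero-mean Gaussian noise with 0.2 standard deviation
x = s + np.random.normal(0, 0.2, len(s))

# Wiener filtering with an assumed local window of 29 samples
y = wiener(x, mysize=29)

for i, (sig, title) in enumerate(zip((s, x, y),
        ('Clean signal', 'Noisy signal', 'Filtered signal'))):
    plt.subplot(3, 1, i + 1)
    plt.plot(t, sig)
    plt.title(title)
    plt.xlabel('Time')
    plt.ylabel('Amplitude')
plt.tight_layout()
plt.show()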

Fig. 11.5 Result of Wiener filtering (clean, noisy and filtered signals)

Inference

From Fig. 11.5, it is possible to infer that the impact of noise is minimized after passing the noisy signal through the Wiener filter.

11.1.1 Wiener Filter in Frequency Domain

From the Wiener-Hopf equation, the expression for the optimal Wiener filter is given by

$$ {h}_{\mathrm{opt}}={R}^{-1}p $$
(11.2)

Treating the matrix inversion symbolically as division, the above equation can be expressed as

$$ {h}_{\mathrm{opt}}=\frac{p}{R} $$
(11.3)

In the above expression, ‘p’ represents the cross-correlation between the desired signal and the observed signal, and ‘R’ represents the autocorrelation of the observed signal. Taking the Fourier transform on both sides of Eq. (11.3), we get

$$ \mathrm{FT}\left\{{h}_{\mathrm{opt}}\right\}=\frac{\mathrm{FT}\left\{p\right\}}{\mathrm{FT}\left\{R\right\}} $$
(11.4)

According to the Wiener-Khinchin theorem, the Fourier transform of the autocorrelation function gives the power spectral density. Using this theorem, Eq. (11.4) is expressed as

$$ H\left({\mathrm{e}}^{j\omega}\right)=\frac{S_{dx}\left({\mathrm{e}}^{j\omega}\right)}{S_{xx}\left({\mathrm{e}}^{j\omega}\right)} $$
(11.5)

In Eq. (11.5), \( H\left({\mathrm{e}}^{j\omega}\right) \) represents the frequency response of the Wiener filter, \( S_{dx}\left({\mathrm{e}}^{j\omega}\right) \) represents the cross-power spectral density between the desired and observed signals and \( S_{xx}\left({\mathrm{e}}^{j\omega}\right) \) represents the power spectral density of the observed signal.

Experiment 11.3 Wiener Filter in Frequency Domain

The steps followed in the implementation of the Wiener filter in the frequency domain are given in Fig. 11.6. The noisy signal is obtained by adding white noise, which follows a normal distribution, to the clean signal. The observed signal is the clean signal with white noise added to it. The power spectral density of the observed signal is represented by \( S_{xx}\left({\mathrm{e}}^{j\omega}\right) \). The cross-power spectral density between the desired and observed signals is represented by \( S_{dx}\left({\mathrm{e}}^{j\omega}\right) \). The Wiener filter is obtained in the frequency domain using the relation \( H\left({\mathrm{e}}^{j\omega}\right)=\frac{S_{dx}\left({\mathrm{e}}^{j\omega}\right)}{S_{xx}\left({\mathrm{e}}^{j\omega}\right)} \). Here the desired signal is the clean signal s[n]. Upon taking the inverse Fourier transform of \( H\left({\mathrm{e}}^{j\omega}\right) \), the impulse response of the Wiener filter is obtained.

Fig. 11.6 Wiener filter in frequency domain

The python code used to implement the Wiener filter in frequency domain is shown in Fig. 11.7, and the corresponding output is in Fig. 11.8.

Fig. 11.7 Python code to implement Wiener filter in frequency domain
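A possible realization of these steps is sketched below. The Welch-based spectral estimates via scipy.signal.csd and the segment length of 256 samples are assumptions; the original listing in Fig. 11.7 may differ in these details.

import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

np.random.seed(0)

fs = 1000
t = np.arange(0, 1, 1 / fs)
s = np.sin(2 * np.pi * 5 * t)                # clean (desired) signal
x = s + np.random.normal(0, 0.2, len(s))     # observed signal

# Power spectral density Sxx and cross-power spectral density Sdx
f, Sxx = signal.csd(x, x, fs=fs, nperseg=256)
f, Sdx = signal.csd(x, s, fs=fs, nperseg=256)

# Wiener filter in the frequency domain, Eq. (11.5)
H = Sdx / Sxx

# Impulse response via the inverse FFT of the one-sided spectrum
h = np.fft.irfft(H)

# Apply the filter to the noisy signal
y = signal.lfilter(h, 1, x)

plt.subplot(2, 1, 1)
plt.plot(t, x, label='Noisy')
plt.plot(t, y, label='Filtered')
plt.legend()
plt.subplot(2, 1, 2)
plt.stem(h)
plt.title('Impulse response of the Wiener filter')
plt.tight_layout()
plt.show()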

Fig. 11.8 Result and characteristics of Wiener filtering (clean, noisy and filtered signals; impulse response, magnitude and phase responses and pole-zero plot of the filter)

The built-in functions used in the program and its purpose are given in Table 11.3.

Table 11.3 Built-in functions used in this experiment

Inference

From Fig. 11.8, the following observations can be made:

  1. The impact of noise is minimized by applying the Wiener filter.

  2. The impulse response of the Wiener filter is not symmetric; hence, the phase response of the filter is not linear.

  3. From the magnitude response, it is possible to observe that the filter is a lowpass filter, and it performs a smoothing action to minimize the impact of noise.

  4. From the pole-zero plot, it is possible to observe that the poles and zeros lie within the unit circle; hence the filter is stable.

11.2 Adaptive Filter

The adaptive filter is a non-linear filter, which updates the values of its filter coefficients based on a specific criterion. The general block diagram of the adaptive filter is shown in Fig. 11.9. From this figure, it is possible to observe that the filter coefficients are updated based on the error e[n] between the output of the filter y[n] and the reference data d[n]. Examples of adaptive filters are the LMS filter and the RLS filter.

Fig. 11.9 General block diagram of adaptive filtering

11.2.1 LMS Adaptive Filter

The least mean square (LMS) algorithm works on the stochastic gradient descent approach to adapt the estimate based on the current error. The estimate is called the weight or filter coefficient vector. The weight update equation of the LMS algorithm is given by

$$ w\left[n+1\right]=w\left[n\right]+\mu x\left[n\right]e\left[n\right] $$
(11.6)

where w[n + 1] represents the new or updated weight, w[n] denotes the old weight, μ indicates the step size or learning rate, x[n] is the input signal or data, and the error signal is e[n] = d[n] − y[n]. d[n] is the reference or target data, and y[n] is the actual output of the adaptive filter.

Experiment 11.4 Implementation of LMS Algorithm

This experiment discusses the implementation of the LMS algorithm for adaptive filtering using python. The python code to define the LMS algorithm as a function is shown in Fig. 11.10. This code can be called as a function in the different applications of the LMS algorithm, which will be discussed in the subsequent experiments. From Fig. 11.10, it is possible to see that it implements the weight update formula of the LMS algorithm given in Eq. (11.6).

Fig. 11.10 Python code for LMS algorithm
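A minimal sketch of such a function is given below, consistent with the description above; the function name LMS_algorithm, the argument order and the zero initialization of the weights are assumptions.

import numpy as np

def LMS_algorithm(x, mu, N, t, n_iter):
    # x: input data, mu: step size, N: filter length,
    # t: target/reference data, n_iter: number of iterations
    w = np.zeros(N)                       # initial filter coefficients
    e = np.zeros(n_iter)                  # error recorded at each iteration
    for i in range(n_iter):
        n = N - 1 + i                     # current time index
        xn = x[n - N + 1:n + 1][::-1]     # x[n], x[n-1], ..., x[n-N+1]
        y = np.dot(w, xn)                 # filter output y[n]
        e[i] = t[n] - y                   # error e[n] = d[n] - y[n]
        w = w + mu * e[i] * xn            # LMS weight update, Eq. (11.6)
    return w, e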

Inference

  1. From Fig. 11.10, it is possible to observe that the LMS algorithm is written as a function, and it can be called from a signal processing application whenever needed.

  2. The inputs to the LMS function are ‘x’, ‘mu’, ‘N’ and ‘t’. ‘x’ denotes the input data, ‘mu’ represents the step size, ‘t’ denotes the reference or target data and ‘N’ indicates the length of the adaptive filter.

  3. The outputs from this LMS function are ‘w’, which denotes the adaptive filter coefficients, and ‘e’, which is the error between the estimate and the target data.

Experiment 11.5 System Identification Using LMS Algorithm

This experiment deals with unknown system identification using the LMS algorithm. Let us consider the unknown system to be an FIR filter with a length of 25. In this experiment, the filter coefficients are estimated using the LMS algorithm with different numbers of iterations. The block diagram of system identification is shown in Fig. 11.11. The python code to identify the unknown system using the LMS algorithm is given in Fig. 11.12, and its simulation result is shown in Fig. 11.13.

Fig. 11.11 Block diagram of system identification

Fig. 11.12 Python code for unknown system identification
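A sketch along these lines is given below, reusing the LMS_algorithm function sketched after Fig. 11.10; the random filter coefficients, the noise level and the step size of 0.01 are assumptions.

import numpy as np
import matplotlib.pyplot as plt

# Assumes the LMS_algorithm function sketched after Fig. 11.10 is defined

np.random.seed(0)

M = 500                      # length of the input data
N = 25                       # length of the unknown FIR filter
x = np.random.randn(M)       # random input signal

h = np.random.randn(N)       # unknown FIR filter (assumed random coefficients)

# Desired data: input convolved with the unknown filter, plus noise
t = np.convolve(x, h)[:M] + 0.01 * np.random.randn(M)

mu = 0.01
for i, n_iter in enumerate([10, 50, 100, 150]):
    w, e = LMS_algorithm(x, mu, N, t, n_iter)
    plt.subplot(2, 2, i + 1)
    plt.stem(w)
    plt.title('Identified filter, %d iterations' % n_iter)
plt.tight_layout()
plt.show()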

Fig. 11.13 Simulation results of Experiment 11.5 (filter to be identified and identified filters at increasing iteration counts)

Figure 11.12 indicates that the numbers of iterations considered are 10, 50, 100 and 150, and the length of the unknown FIR filter is chosen as 25. The input to the LMS algorithm is a random signal with a length of 500 samples. The targeted (desired or reference) data is obtained by convolving the input random signal with the unknown FIR filter coefficients and adding random noise.

Note that the inputs to the LMS algorithm ([w,e]=LMS_algorithm(x,mu,N,t,n_iter[i])) are the random signal (x), learning rate (mu), length of the filter (N), the reference signal (t) and the number of iterations (n_iter). Also, note that the filter coefficients (h) are not given as input to the LMS algorithm. The outputs of the LMS algorithm are the error signal (e) and the identified filter coefficients (w).

The simulation result of the python code given in Fig. 11.12 is displayed in Fig. 11.13.

Inference

From Fig. 11.13, it is possible to observe that the adaptive filter result approaches the original filter coefficients as the number of iterations increases.

Task

Increase/decrease the length of the FIR filter and fix the number of iterations at 50. Comment on the observed result.

Experiment 11.6 Inverse System Modelling Using LMS Algorithm

This experiment discusses inverse system modelling using the LMS algorithm. The general block diagram of inverse system modelling using an adaptive filter is shown in Fig. 11.14. From this figure, it is possible to understand that the unknown system and the adaptive filter are connected in cascade, and the delayed version of the input signal acts as the reference signal. The aim of adaptive filtering in this experiment is to obtain the inverse of the unknown system so that y[n] and d[n] will be similar. If y[n] and d[n] are similar, then the adaptive filter is equal to the inverse of the unknown system.

Fig. 11.14 Inverse system modelling using adaptive filter

In communication systems, inverse system modelling is used for channel equalization. In such a scenario, the adaptive filter is termed an ‘equalizer’. An adaptive equalizer can combat intersymbol interference, which arises because of the spreading of a transmitted pulse due to the dispersive nature of the channel.

The impulse response of the channel is given by

$$ h\left[n\right]=\left\{\begin{array}{ll}\frac{1}{2}\left[1+\cos \left(\frac{2\pi }{W}\left(n-2\right)\right)\right],& n=1,2,3\\ {}0,& \mathrm{otherwise}\end{array}\right. $$
(11.7)

In the above equation, ‘W’ controls the amount of distortion introduced by the channel. A higher value of ‘W’ implies a more severe (more complex) channel.

The python code to obtain the inverse of the unknown system using the LMS algorithm is given in Fig. 11.15, and its corresponding simulation result is shown in Fig. 11.16.

Fig. 11.15 Python code for inverse system modelling
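A sketch of the inverse modelling experiment is given below, again reusing the LMS_algorithm function sketched earlier; the value W = 2.9, the equalizer length of 11 taps, the reference delay of 7 samples and the noise level are assumptions.

import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft

# Assumes the LMS_algorithm function sketched after Fig. 11.10 is defined

np.random.seed(0)

M = 2000
W = 2.9                                    # channel parameter in Eq. (11.7), assumed
x = np.sign(np.random.randn(M))            # random +/-1 data symbols

# Channel impulse response h1[n], Eq. (11.7)
h1 = np.zeros(4)
for n in (1, 2, 3):
    h1[n] = 0.5 * (1 + np.cos(2 * np.pi / W * (n - 2)))

# Channel output with a small amount of additive noise
r = np.convolve(x, h1)[:M] + 0.001 * np.random.randn(M)

N = 11                                     # equalizer length, assumed
delay = 7                                  # reference delay, assumed
t = np.concatenate((np.zeros(delay), x))[:M]

h2, e = LMS_algorithm(r, 0.01, N, t, M - N)  # equalizer coefficients

cascade = np.convolve(h1, h2)              # should approximate a delayed impulse

plt.subplot(2, 2, 1); plt.stem(h1); plt.title('h1[n] (channel)')
plt.subplot(2, 2, 2); plt.stem(h2); plt.title('h2[n] (inverse filter)')
plt.subplot(2, 2, 3); plt.stem(cascade); plt.title('h1[n] * h2[n]')
plt.subplot(2, 2, 4); plt.plot(np.abs(fft(cascade, 256)))
plt.title('|H(omega)| of cascade')
plt.tight_layout()
plt.show()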

Fig. 11.16 Simulation result of inverse system modelling (impulse responses of the channel, inverse and cascaded filters and magnitude response of the cascaded filter)

Inference

From Fig. 11.16, it is possible to perceive the following facts:

  1. The impulse response of the cascaded system is an impulse. This implies that the cascade of the channel filter and its inverse system results in an identity system.

  2. The Fourier transform of an impulse will result in a flat spectrum. This is evident from the spectrum of the cascaded system.

Task

  1. Increase the order of the adaptive filter and obtain the impulse response of the inverse system.

11.2.2 Normalized LMS Algorithm

The weight updation formula for the normalized LMS algorithm is given by

$$ w\left[n+1\right]=w\left[n\right]+\frac{\beta }{{\left\Vert x\left[n\right]\right\Vert}^2+c}e\left[n\right]x\left[n\right] $$
(11.8)

where ‘β’ is a positive constant which controls the convergence speed of the algorithm, and ‘c’ is a small regularization parameter added to the squared norm of x[n] to avoid a divide-by-zero error.

Experiment 11.7 Normalized LMS (NLMS) Algorithm

The python code for the normalized LMS algorithm is given in Fig. 11.17.

Fig. 11.17 Python code for NLMS algorithm
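A minimal sketch of the NLMS function is given below; the function name and argument order follow the description of Fig. 11.17, while the zero initialization is an assumption.

import numpy as np

def NLMS_algorithm(x, N, t, beta, c, n_iter):
    # beta: convergence constant, c: regularization parameter
    w = np.zeros(N)
    e = np.zeros(n_iter)
    for i in range(n_iter):
        n = N - 1 + i
        xn = x[n - N + 1:n + 1][::-1]
        y = np.dot(w, xn)
        e[i] = t[n] - y
        mu = beta / (np.dot(xn, xn) + c)   # data-dependent step size
        w = w + mu * e[i] * xn             # NLMS update, Eq. (11.8)
    return w, e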

Inference

  1. From Fig. 11.17, it is possible to observe that the NLMS algorithm is in the form of a function, and it can be called for adaptive signal processing applications whenever required.

  2. Also, note that the step size or learning rate is not given as a direct input to the function.

  3. The step size is calculated using the input data, ‘β’ and ‘c’.

Experiment 11.8 Inverse System Modelling Using NLMS Algorithm

This experiment is a repetition of the inverse system modelling experiment discussed earlier. Here, Experiment 11.6 is repeated with the same specifications, but the NLMS algorithm is used for adaptive filtering instead of the LMS algorithm. The python code of this experiment is shown in Fig. 11.18, and its corresponding simulation result is displayed in Fig. 11.19.

Fig. 11.18 Python code for Experiment 11.8

Fig. 11.19 Simulation result of the python code given in Fig. 11.18 (impulse responses of the channel, inverse and cascaded filters and magnitude response of the cascaded filter)

Inference

The following conclusions can be made from this experiment:

  1. From Fig. 11.19, it is possible to conclude that the cascade of the channel and the inverse filter gives a unit impulse sequence as its impulse response.

  2. The magnitude response confirms that the cascaded filter spectrum is flat (dc).

  3. Therefore, the channel filter and the adaptive filter are inverses of each other.

11.2.3 Sign LMS Algorithm

The weight updation formula for the sign LMS algorithm is given by

$$ w\left[n+1\right]=w\left[n\right]+\mu \kern0.5em \operatorname{sign}\left\{e\left[n\right]\right\}x\left[n\right] $$
(11.9)

where ‘sign{⋅}’ denotes the signum function, ‘w[n + 1]’ represents the new weight and ‘e[n]’ denotes the error signal between the target and the estimated signal.

Experiment 11.9 Adaptive Line Enhancer Using Sign LMS Algorithm

This experiment discusses the python implementation of an adaptive line enhancer using the sign LMS algorithm. The block diagram of the adaptive line enhancer is shown in Fig. 11.20. From this figure, it is possible to observe that the input to the FIR filter is a delayed, noisy version of the input signal (x[n]), and the final output (y[n]) is the enhanced, noise-reduced signal. The aim of this experiment is to remove the noisy components present in the input signal using the sign LMS adaptive algorithm. The python code for the ‘sign LMS algorithm’ is given in Fig. 11.21 as a function.

Fig. 11.20 Block diagram of adaptive line enhancer

Fig. 11.21 Python code for sign LMS algorithm
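A minimal sketch of the sign LMS function is given below, implementing the update of Eq. (11.9); the function name and the zero initialization are assumptions.

import numpy as np

def sign_LMS_algorithm(x, mu, N, t, n_iter):
    w = np.zeros(N)
    e = np.zeros(n_iter)
    for i in range(n_iter):
        n = N - 1 + i
        xn = x[n - N + 1:n + 1][::-1]
        y = np.dot(w, xn)
        e[i] = t[n] - y
        w = w + mu * np.sign(e[i]) * xn   # sign LMS update, Eq. (11.9)
    return w, e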

The python code for the adaptive line enhancer using sign LMS is given in Fig. 11.22. In this experiment, the input signal contains 500, 2000 and 3500 Hz frequency components. The sampling frequency is taken as 8000 Hz. The input signal is corrupted with external random noise, and this noisy signal is the input to the adaptive filter. The number of delay samples is chosen as 10, and the length of the adaptive FIR filter is fixed at 25. The main objective of this experiment is to recover or enhance the original signal from the noisy input data using the sign LMS algorithm. The simulation result of the python code given in Fig. 11.22 is shown in Fig. 11.23. From the magnitude spectrum, it is possible to observe that the noise impact is reduced by the sign LMS algorithm.

Fig. 11.22 Python code for adaptive line enhancer using sign LMS
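A rough sketch of the adaptive line enhancer is given below, following the specifications above (500, 2000 and 3500 Hz components, a sampling frequency of 8000 Hz, a delay of 10 samples and a 25-tap filter); the noise level and the step size are assumptions.

import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft

np.random.seed(0)

fs = 8000                                  # sampling frequency
t = np.arange(0, 0.25, 1 / fs)
s = (np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 2000 * t)
     + np.sin(2 * np.pi * 3500 * t))
d = s + 0.5 * np.random.randn(len(s))      # noisy input signal d[n]

delay = 10                                 # decorrelation delay
N = 25                                     # length of the adaptive FIR filter
x = np.concatenate((np.zeros(delay), d))[:len(d)]  # delayed input to the filter

mu = 0.001                                 # step size, assumed
M = len(d)
w = np.zeros(N)
y = np.zeros(M)
for n in range(N - 1, M):
    xn = x[n - N + 1:n + 1][::-1]
    y[n] = np.dot(w, xn)                   # enhanced (denoised) output
    e = d[n] - y[n]
    w = w + mu * np.sign(e) * xn           # sign LMS update

plt.subplot(2, 2, 1); plt.plot(t, d); plt.title('Input noisy signal')
plt.subplot(2, 2, 2); plt.plot(t, y); plt.title('Denoised signal')
plt.subplot(2, 2, 3); plt.plot(np.abs(fft(d))); plt.title('Spectrum of noisy signal')
plt.subplot(2, 2, 4); plt.plot(np.abs(fft(y))); plt.title('Spectrum of denoised signal')
plt.tight_layout()
plt.show()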

Fig. 11.23 Simulation result of the adaptive line enhancer using sign LMS (noisy and denoised signals and their spectra)

Inference

From this experiment, the following observations can be drawn:

  1. From Fig. 11.23, the magnitude spectrum of the noisy signal indicates that the signal has three distinct frequency components along with noisy components.

  2. The magnitude spectrum of the denoised signal has three spikes, and the impact of the noisy components is lower than in the spectrum of the input.

Task

  1. Make suitable adjustments to the parameters used in the python code given in Fig. 11.22 to further reduce the effect of noise in the denoised or enhanced signal.

11.3 RLS Algorithm

Recursive least squares (RLS) is an adaptive algorithm based on the idea of least squares. The block diagram of the adaptive filter based on the RLS algorithm is shown in Fig. 11.24. From the figure, x[n] is the input to the filter, d[n] is the desired signal and the difference between the desired signal and the output of the filter is the error signal e[n]. A forgetting factor is used in the RLS algorithm to remove or minimize the influence of old measurements. A small forgetting factor reduces the influence of old samples and increases the weight of new samples; as a result, better tracking can be realized at the cost of a higher variance of the filter coefficients. A large forgetting factor keeps more information about the old samples and gives a lower variance of the filter coefficients, but it takes a longer time to converge.

Fig. 11.24 Block diagram of adaptive filter based on RLS algorithm

Let us define the a priori error as \( \hat{e}\left[n\right]=d\left[n\right]-{w}^T\left[n-1\right]x\left[n\right] \) and the weight updation formula for the RLS algorithm is given by

$$ w\left[n\right]=w\left[n-1\right]+\frac{P\left[n-1\right]x\left[n\right]\hat{e}\left[n\right]}{\lambda +{x}^T\left[n\right]P\left[n-1\right]x\left[n\right]} $$
(11.10)

If \( k\left[n\right]=\frac{P\left[n-1\right]x\left[n\right]}{\lambda +{x}^T\left[n\right]P\left[n-1\right]x\left[n\right]} \) represents the gain, then the above expression can be written as

$$ w\left[n\right]=w\left[n-1\right]+k\left[n\right]\hat{e}\left[n\right] $$
(11.11)

The flow chart of the sequence of steps followed in RLS algorithm is shown in Fig. 11.25. From the flow chart, it is possible to observe that the algorithm is iterative. Proper initialization of filter coefficients is necessary for convergence.

Fig. 11.25 Flow chart of the sequence of steps in the RLS algorithm: initialization of P[0], computation of the gain k[n], computation of the a priori error ê[n], weight update w[n] and update of the inverse correlation matrix P[n], which feeds back into the gain computation

Experiment 11.10 Implementation of RLS Algorithm

This experiment discusses the implementation of the RLS algorithm using python. The python code for the RLS algorithm is given in Fig. 11.26, and it is in the form of a function so that it can be used for different applications.

Fig. 11.26 Python code for RLS algorithm
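A minimal sketch of the RLS function is given below, implementing the gain, a priori error and weight update of Eqs. (11.10) and (11.11) together with the standard update of the inverse correlation matrix; the function name and the initialization P[0] = I/δ are assumptions.

import numpy as np

def RLS_algorithm(x, lam, delta, N, t, n_iter):
    # lam: forgetting factor, delta: initialization constant for P
    w = np.zeros(N)
    P = np.eye(N) / delta                    # inverse correlation matrix P[0]
    e = np.zeros(n_iter)
    for i in range(n_iter):
        n = N - 1 + i
        xn = x[n - N + 1:n + 1][::-1]
        k = P @ xn / (lam + xn @ P @ xn)     # gain vector k[n]
        e[i] = t[n] - np.dot(w, xn)          # a priori error
        w = w + k * e[i]                     # weight update, Eq. (11.11)
        P = (P - np.outer(k, xn @ P)) / lam  # inverse correlation matrix update
    return w, e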

Experiment 11.11 Adaptive Line Enhancer Using RLS Algorithm

This experiment is a repetition of Experiment 11.9; instead of sign LMS, the RLS algorithm is used to filter out the noisy components present in the input signal. The python code for this experiment is given in Fig. 11.27, and its corresponding simulation result is displayed in Fig. 11.28.

Fig. 11.27 Python code for adaptive line enhancer using RLS

Fig. 11.28 Simulation result of the python code given in Fig. 11.27 (noisy and denoised signals and their spectra)

Inference

From Fig. 11.28, it is possible to confirm that the magnitude spectrum of the filtered or denoised output is better than the magnitude spectrum of the noisy input. Therefore, the RLS algorithm can act as an adaptive line enhancer.

Experiment 11.12 Comparison of System Identification with Different Adaptive Filters

The main objective of this experiment is to compare the simulation results of different adaptive algorithms like LMS, NLMS, sign LMS and RLS for the system identification process. The python code to compare the simulation results of system identification is given in Fig. 11.29, and its simulation results are shown in Fig. 11.30.

Fig. 11.29 Python code for comparison of adaptive algorithms for system identification
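A sketch of the comparison is given below, reusing the four functions sketched in the previous experiments; the step sizes, β, c, forgetting factor and δ values are assumptions chosen for stable convergence.

import numpy as np
import matplotlib.pyplot as plt

# Assumes LMS_algorithm, NLMS_algorithm, sign_LMS_algorithm and
# RLS_algorithm from the previous sketches are defined

np.random.seed(0)

M, N = 500, 25
x = np.random.randn(M)                     # input to the filters
h = np.random.randn(N)                     # FIR filter to be identified
t = np.convolve(x, h)[:M] + 0.01 * np.random.randn(M)

n_iter = M - N
results = {
    'LMS': LMS_algorithm(x, 0.01, N, t, n_iter)[0],
    'NLMS': NLMS_algorithm(x, N, t, 0.5, 1e-3, n_iter)[0],
    'Sign LMS': sign_LMS_algorithm(x, 0.005, N, t, n_iter)[0],
    'RLS': RLS_algorithm(x, 0.99, 0.01, N, t, n_iter)[0],
}

plt.subplot(3, 2, 1); plt.stem(h); plt.title('Filter to be identified')
for i, (name, w) in enumerate(results.items()):
    plt.subplot(3, 2, i + 2)
    plt.stem(w)
    plt.title('Identified by ' + name)
plt.tight_layout()
plt.show()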

Fig. 11.30 Simulation result of the python code given in Fig. 11.29 (filter to be identified and filters identified by LMS, NLMS, sign LMS and RLS)

Inference

From Fig. 11.30, it is possible to observe that proper selection of adaptive filter parameters like the step size or learning rate, the forgetting factor and the regularization plays a major role when using adaptive filtering algorithms for the system identification application in signal processing.

Task

Write a python code to compare the simulation result of different adaptive algorithms like LMS, NLMS, sign LMS and RLS for adaptive line enhancement application in signal processing.

Exercises

  1. Execute the python code given in Fig. 11.12 and compare the estimated filter ‘w’ with the original filter coefficients ‘h’ for different lengths of the filter. Also, execute the same python code and comment on the convergence of the LMS algorithm with different values of the learning rate ‘mu’, including a negative value.

  2. Use the python code for the sign LMS algorithm given in Fig. 11.22 to compute the impulse response of the inverse filter and comment on the role of the learning rate.

  3. Modify the sign LMS algorithm into the sign regressor algorithm, whose update equation is given by w[n + 1] = w[n] + μe[n] sign {x[n]}; compute the impulse response of the inverse filter and comment on the simulation result.

  4. Modify the sign LMS algorithm into the sign-sign LMS algorithm, whose update equation is given by w[n + 1] = w[n] + μ sign {e[n]} sign {x[n]}; compute the impulse response of the inverse filter and comment on the simulation result.

  5. Use the python code for the RLS algorithm given in Fig. 11.26 to obtain the inverse filter coefficients and comment on the simulation result. Also, comment on the selection of the forgetting factor and the regularization parameter.

Objective Questions

  1. The filter which is based on the minimum mean square error criterion is

    A. Wiener filter

    B. Window-based FIR filter

    C. Frequency sampling-based FIR filter

    D. Savitzky-Golay filter

  2. If ‘R’ is the autocorrelation matrix of the observed signal and ‘p’ represents the cross-correlation between the desired signal and the observed signal, then the expression for the Wiener-Hopf equation is

    A. \( w_{\mathrm{opt}} = R\times p \)

    B. \( w_{\mathrm{opt}} = R+p \)

    C. \( w_{\mathrm{opt}} = R-p \)

    D. \( w_{\mathrm{opt}} = p/R \)

  3. The weight update expression of the standard LMS algorithm is

    A. w(n + 1) = w(n) + μx[n]e[n]

    B. w(n + 1) = w(n) − μx[n]e[n]

    C. w(n + 1) = w(n) + μx[n]e²[n]

    D. w(n + 1) = w(n) − μx[n]e²[n]

  4. If μ refers to the step size and λ refers to the eigenvalue of the autocorrelation matrix, then the condition for convergence of the LMS algorithm is given by

    A. \( 0<\mu <\frac{2}{\lambda_{\mathrm{min}}} \)

    B. \( 0<\mu <\frac{2}{\lambda_{\mathrm{max}}} \)

    C. \( 0<\mu <\frac{2}{\lambda_{\mathrm{max}}^2} \)

    D. \( 0<\mu <\frac{2}{\lambda_{\mathrm{min}}^2} \)

  5. Statement 1: The Wiener filter is based on the statistics of the input data.

    Statement 2: The Wiener filter is an optimal filter with respect to the minimum mean absolute error.

    A. Statements 1 and 2 are true

    B. Statement 1 is correct, and Statement 2 is wrong

    C. Statement 1 is wrong, and Statement 2 is correct

    D. Statements 1 and 2 are wrong

  6. The filter which changes its characteristics in accordance with the environment is termed as

    A. Optimal filter

    B. Non-linear filter

    C. Adaptive filter

    D. Linear filter