Abstract
After reading this chapter, the reader is expected to
- Implement and analyse the Wiener filter.
- Write Python code to implement the LMS algorithm and its variants.
- Perform system identification using the LMS algorithm.
- Perform inverse system modelling using the NLMS algorithm.
- Implement an adaptive line enhancer using the LMS algorithm and its variants.
- Implement the RLS algorithm.
The roadmap of this chapter is depicted below. The chapter starts with the Wiener filter, then covers the least mean square (LMS) algorithm and its variants for adaptive signal processing applications such as system identification and signal denoising. Next, the RLS algorithm is discussed with suitable Python code.
PreLab Questions
1. List out the valid differences between the optimal filter and the adaptive filter.
2. What is an adaptive filter? How does it differ from an ordinary filter?
3. Give examples of adaptive filters.
4. When are adaptive filters preferred?
5. List out the performance measures of the adaptive filter.
6. What is the LMS algorithm?
7. What do you mean by least square estimation?
8. List out the variants of the LMS algorithm.
9. How does the step size impact the LMS algorithm?
10. What is the RLS algorithm, and how does it differ from LMS?
11.1 Wiener Filter
The Wiener filter is the mean square error (MSE) optimal stationary linear filter for a signal corrupted by additive noise. Its computation requires the assumption that the signal and noise are stationary random processes. The general block diagram of the Wiener filter is shown in Fig. 11.1. The main objective of the Wiener filter is to obtain the coefficients of the LTI filter that minimize the MSE between the filter output (y[n]) and the desired signal or target (d[n]). In Fig. 11.1, s[n] denotes the original (clean) signal, which is corrupted by additive noise η[n] to give the observed signal x[n]. The parameters of the filter have to be designed such that the filter output y[n] resembles the desired signal d[n], so that the error e[n] is minimum.
The expression for the optimal Wiener filter is given by

\( {\mathbf{h}}_{\mathrm{opt}}={\mathbf{R}}^{-1}\mathbf{p} \)  (11.1)
The above expression is termed the ‘Wiener-Hopf’ equation, named after the American-born Norbert Wiener and the Austrian-born Eberhard Hopf. The optimal filter depends on the autocorrelation matrix (R) of the observed signal (x[n]) and the cross-correlation vector (p) between the observed signal (x[n]) and the desired signal (d[n]); hopt denotes the optimal filter coefficients.
Experiment 11.1 Wiener Filtering
The aim of this experiment is to implement Wiener filtering using Python. Here the optimal filter coefficients are obtained using the Wiener-Hopf equation given in Eq. (11.1). The Python code for the Wiener filter is shown in Fig. 11.2, and its simulation result is depicted in Fig. 11.3.
The built-in functions used in the Python code shown in Fig. 11.2 are summarized in Table 11.1.
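The listing of Fig. 11.2 is not reproduced here, but the idea can be sketched in a few lines. This is a minimal illustration, not the book's code: the sample-correlation estimates, the 21-tap filter length and the helper name `wiener_filter` are assumptions; the 5 Hz sine and the 0.2 noise standard deviation follow the experiment description.

```python
import numpy as np

def wiener_filter(x, d, N):
    """Solve the Wiener-Hopf equation R h = p for an N-tap FIR filter.
    x: observed (noisy) signal, d: desired (clean) signal."""
    r = np.correlate(x, x, mode='full') / len(x)   # autocorrelation estimate
    mid = len(r) // 2                              # zero-lag index
    # Toeplitz autocorrelation matrix R and cross-correlation vector p
    R = np.array([[r[mid + i - j] for j in range(N)] for i in range(N)])
    p = (np.correlate(d, x, mode='full') / len(x))[mid:mid + N]
    return np.linalg.solve(R, p)                   # h_opt = R^{-1} p

np.random.seed(0)
fs = 1000
n = np.arange(fs)
s = np.sin(2 * np.pi * 5 * n / fs)                 # clean 5 Hz signal
x = s + np.random.normal(0, 0.2, len(s))           # noisy observation
h_opt = wiener_filter(x, s, 21)                    # 21 taps (assumed choice)
y = np.convolve(x, h_opt)[:len(x)]                 # causal filtering
```

Solving the system with `np.linalg.solve` rather than explicitly inverting R is the usual numerically safer choice.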
Inference
From Fig. 11.3, the following observations can be made:
1. The input (clean) signal is a smooth sine waveform with a frequency of 5 Hz.
2. The input to the Wiener filter is the signal with additive noise, and it is distorted.
3. The filtered signal is not a perfectly smooth sine waveform; however, it is far better than the noisy signal. Hence, the Wiener filter has the capability to minimize the impact of additive noise on a signal.
Task
1. Change the value of the standard deviation in the random noise generation command ‘np.random.normal(0,.2,len(s))’ given in Fig. 11.2. Execute and make appropriate changes in the Python code to get the ‘filtered signal’ as similar as possible to the ‘clean signal’.
Experiment 11.2 Wiener Filter Using Built-In Function
This experiment performs Wiener filtering using a built-in function from the ‘scipy’ library. The built-in function ‘wiener’, available in the ‘scipy’ library, can be used to filter out noisy components. In this experiment, a noise-free sinusoidal signal of 5 Hz is generated. The clean signal is corrupted by adding random noise, which follows a normal distribution with zero mean and a standard deviation of 0.2. The corrupted signal is then passed through the Wiener filter to minimize the impact of noise. The steps followed, along with the built-in functions used in the program, are given in Table 11.2.
The python code which performs this task is shown in Fig. 11.4, and the corresponding output is shown in Fig. 11.5.
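A minimal sketch of the experiment follows. The `wiener` function from `scipy.signal` takes an optional window size (`mysize`); the window size of 29 samples used here is an assumed choice, not taken from Fig. 11.4.

```python
import numpy as np
from scipy.signal import wiener

np.random.seed(0)
fs = 1000
t = np.arange(fs) / fs
s = np.sin(2 * np.pi * 5 * t)                      # clean 5 Hz signal
noisy = s + np.random.normal(0, 0.2, len(s))       # zero mean, 0.2 std noise
filtered = wiener(noisy, mysize=29)                # local Wiener filter
```

`scipy.signal.wiener` uses local mean and variance estimates over the window; if the `noise` power is not supplied, it is estimated as the average of the local variances.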
Inference
From Fig. 11.5, it is possible to infer that the impact of noise is minimized after passing the noisy signal through the Wiener filter.
11.1.1 Wiener Filter in Frequency Domain
From the Wiener-Hopf equation, the expression for the optimal Wiener filter is given by

\( \mathbf{R}{\mathbf{h}}_{\mathrm{opt}}=\mathbf{p} \)  (11.2)

The above equation can be expressed in convolution form as

\( \sum \limits_{k}{h}_{\mathrm{opt}}\left[k\right]{r}_{xx}\left[n-k\right]={r}_{dx}\left[n\right] \)  (11.3)
In the above expression, ‘p’ represents the cross-correlation between the desired signal and the observed signal, and ‘R’ represents the autocorrelation of the observed signal. Taking the Fourier transform on both sides of Eq. (11.3), we get

\( H\left({\mathrm{e}}^{j\omega}\right)\mathcal{F}\left\{{r}_{xx}\left[n\right]\right\}=\mathcal{F}\left\{{r}_{dx}\left[n\right]\right\} \)  (11.4)
According to the Wiener-Khinchin theorem, the Fourier transform of an autocorrelation function gives the power spectral density. Using this theorem, Eq. (11.4) is expressed as

\( H\left({\mathrm{e}}^{j\omega}\right)=\frac{S_{dx}\left({\mathrm{e}}^{j\omega}\right)}{S_{xx}\left({\mathrm{e}}^{j\omega}\right)} \)  (11.5)
In Eq. (11.5), H(ejω) represents the frequency response of the Wiener filter, Sdx(ejω) represents the cross-power spectral density estimation between desired and observed signal and Sxx(ejω) represents the power spectral density of the observed signal.
Experiment 11.3 Wiener Filter in Frequency Domain
The steps followed in the implementation of the Wiener filter in the frequency domain are given in Fig. 11.6. The observed signal is obtained by adding white noise, which follows a normal distribution, to the clean signal. The power spectral density of the observed signal is represented by Sxx(ejω), and the cross-power spectral density between the desired and observed signal is represented by Sdx(ejω). The Wiener filter is obtained in the frequency domain using the relation \( H\left({\mathrm{e}}^{j\omega}\right)=\frac{S_{dx}\left({\mathrm{e}}^{j\omega}\right)}{S_{xx}\left({\mathrm{e}}^{j\omega}\right)} \). Here the desired signal is the clean signal s[n]. Upon taking the inverse Fourier transform of H(ejω), the impulse response of the Wiener filter is obtained.
The python code used to implement the Wiener filter in frequency domain is shown in Fig. 11.7, and the corresponding output is in Fig. 11.8.
The built-in functions used in the program and their purposes are given in Table 11.3.
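As a rough stand-in for the code of Fig. 11.7, the sketch below estimates Sxx and Sdx with Welch-averaged spectra (`scipy.signal.welch` and `scipy.signal.csd`); the segment length of 256 and the 2-second signal are assumed choices.

```python
import numpy as np
from scipy.signal import welch, csd

np.random.seed(0)
fs = 1000
n = np.arange(2000)
s = np.sin(2 * np.pi * 5 * n / fs)                 # desired (clean) signal
x = s + np.random.normal(0, 0.2, len(s))           # observed signal

# Averaged spectral estimates: Sxx and the cross-spectrum between x and d
f, Sxx = welch(x, fs=fs, nperseg=256)
f, Sdx = csd(x, s, fs=fs, nperseg=256)
H = Sdx / Sxx                                      # H(e^{jw}) = Sdx / Sxx, Eq. (11.5)
h = np.fft.irfft(H)                                # impulse response of the filter
```

Averaging over segments matters here: a single-FFT estimate of Sdx/Sxx applied back to the same record would reproduce the desired spectrum exactly and tell us nothing about the filter.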
Inference
From Fig. 11.8, the following observations can be made:
1. The impact of noise is minimized by applying the Wiener filter.
2. The impulse response of the Wiener filter is not symmetric; hence, the phase response of the filter is not linear.
3. From the magnitude response, it can be observed that the filter is a lowpass filter, which performs smoothing to minimize the impact of noise.
4. From the pole-zero plot, it can be observed that the poles and zeros lie within the unit circle; hence the filter is stable.
11.2 Adaptive Filter
The adaptive filter is a non-linear filter, which updates the values of its filter coefficients based on a specific criterion. The general block diagram of the adaptive filter is shown in Fig. 11.9. From this figure, it is possible to observe that the filter coefficients are updated based on the error e[n] between the output of the filter y[n] and the reference data d[n]. Examples of adaptive filters are the LMS filter and the RLS filter.
11.2.1 LMS Adaptive Filter
The least mean square (LMS) algorithm works based on the stochastic gradient descent approach to adapt the estimate using the current error. The estimate is called the weight or filter coefficient. The weight update equation of the LMS algorithm is given by

\( w\left[n+1\right]=w\left[n\right]+\mu e\left[n\right]x\left[n\right] \)  (11.6)
where w[n + 1] represents the new (updated) weight, w[n] denotes the old weight, μ indicates the step size or learning rate, x[n] is the input signal and the error signal is e[n] = d[n] − y[n]. Here d[n] is the reference or target data, and y[n] is the actual output of the adaptive filter.
Experiment 11.4 Implementation of LMS Algorithm
This experiment discusses the implementation of the LMS algorithm for adaptive filtering using Python. The Python code that defines the LMS algorithm as a function is shown in Fig. 11.10. This function can be called in the different applications of the LMS algorithm discussed in the subsequent experiments. From Fig. 11.10, it is possible to see that it contains the weight update formula of the LMS algorithm given in Eq. (11.6).
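The function of Fig. 11.10 is not reproduced here; a minimal sketch consistent with the described interface (inputs ‘x’, ‘mu’, ‘N’, ‘t’, outputs ‘w’ and ‘e’) might look as follows. The short system identification check at the end, with a hypothetical 3-tap filter, is added purely for illustration.

```python
import numpy as np

def LMS_algorithm(x, mu, N, t, n_iter=None):
    """LMS adaptive filter (sketch).
    x: input data, mu: step size, N: filter length, t: target data,
    n_iter: number of samples to iterate over (defaults to all).
    Returns w (filter coefficients) and e (error signal)."""
    if n_iter is None:
        n_iter = len(x)
    w = np.zeros(N)
    e = np.zeros(n_iter)
    for n in range(N - 1, n_iter):
        x_n = x[n - N + 1:n + 1][::-1]   # current input vector, newest sample first
        y = w @ x_n                      # filter output
        e[n] = t[n] - y                  # error signal
        w = w + mu * e[n] * x_n          # weight update, Eq. (11.6)
    return w, e

# Quick check: identify a hypothetical 3-tap FIR system
np.random.seed(0)
h = np.array([0.5, -0.3, 0.2])
x = np.random.randn(2000)
t = np.convolve(x, h)[:len(x)]
w, e = LMS_algorithm(x, 0.01, 3, t)
```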
Inference
1. From Fig. 11.10, it is possible to observe that the LMS algorithm is written as a function, and it can be called in a signal processing application whenever needed.
2. The inputs to the LMS function are ‘x’, ‘mu’, ‘N’ and ‘t’. ‘x’ denotes the input data, ‘mu’ represents the step size, ‘t’ denotes the reference or target data and ‘N’ indicates the length of the adaptive filter.
3. The outputs from this LMS function are ‘w’, which denotes the adaptive filter coefficients, and ‘e’, which is the error between the estimate and the target data.
Experiment 11.5 System Identification Using LMS Algorithm
This experiment deals with unknown system identification using the LMS algorithm. Let us consider the unknown system to be an FIR filter of length 25. In this experiment, the filter coefficients are obtained by running the LMS algorithm with different numbers of iterations. The block diagram of system identification is shown in Fig. 11.11. The Python code to identify the unknown system using the LMS algorithm is given in Fig. 11.12, and its simulation result is shown in Fig. 11.13.
Figure 11.12 indicates that the numbers of iterations considered are 10, 50, 100 and 150, and the length of the unknown FIR filter is chosen as 25. The input to the LMS algorithm is a random signal with a length of 500 samples. The target (desired or reference) data is obtained by convolving the input random signal with the unknown FIR filter coefficients and adding random noise.
Note that the inputs to the LMS algorithm ([w,e]=LMS_algorithmm(x,mu,N,t,n_iter[i])) are the random signal (x), the learning rate (mu), the length of the filter (N), the reference signal (t) and the number of iterations (n_iter). Also, note that the filter coefficients (h) are not given as input to the LMS algorithm. The outputs of the LMS algorithm are the error signal (e) and the identified filter coefficients (w).
The simulation result of the python code given in Fig. 11.12 is displayed in Fig. 11.13.
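The experiment can be reproduced in outline as follows. This is a self-contained sketch, with a local helper `lms` standing in for the function of Fig. 11.12; the step size of 0.01 and the noise level of 0.01 are assumed values.

```python
import numpy as np

def lms(x, mu, N, t, n_iter):
    # minimal LMS loop (same update rule as Eq. (11.6))
    w = np.zeros(N)
    for n in range(N - 1, n_iter):
        x_n = x[n - N + 1:n + 1][::-1]
        e = t[n] - w @ x_n
        w = w + mu * e * x_n
    return w

np.random.seed(0)
N = 25
h = np.random.randn(N)                       # "unknown" 25-tap FIR system
x = np.random.randn(500)                     # 500-sample random input
t = np.convolve(x, h)[:len(x)] + 0.01 * np.random.randn(len(x))

errors = []
for n_iter in (10, 50, 100, 150):
    w = lms(x, 0.01, N, t, n_iter)
    errors.append(np.linalg.norm(w - h))     # distance to the true system
```

The error norm shrinks as the number of iterations grows, mirroring the behaviour described for Fig. 11.13.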
Inference
From Fig. 11.13, it is possible to observe that the adaptive filter result approaches the original filter coefficients while increasing the number of iterations.
Task
Increase/decrease the length of the FIR filter and fix the number of iterations at 50. Comment on the observed result.
Experiment 11.6 Inverse System Modelling Using LMS Algorithm
This experiment discusses inverse system modelling using the LMS algorithm. The general block diagram of inverse system modelling using an adaptive filter is shown in Fig. 11.14. From this figure, it is possible to understand that the unknown system and the adaptive filter are connected in cascade, and a delayed version of the input signal acts as the reference signal. The aim of adaptive filtering in this experiment is to obtain the inverse of the unknown system so that y[n] and d[n] are similar. If y[n] and d[n] are similar, then the adaptive filter equals the inverse of the unknown system.
In communication systems, inverse system modelling is used for channel equalization. In such a scenario, the adaptive filter is termed an ‘equalizer’. An adaptive equalizer can combat intersymbol interference, which arises because a transmitted pulse spreads due to the dispersive nature of the channel.
The impulse response of the channel is given by

\( h\left[n\right]=\left\{\begin{array}{ll}\frac{1}{2}\left[1+\cos \left(\frac{2\pi }{W}\left(n-2\right)\right)\right], & n=1,2,3\\ 0, & \mathrm{otherwise}\end{array}\right. \)

In the above equation, the parameter ‘W’ controls the amount of amplitude distortion introduced by the channel. A higher value of ‘W’ implies a more severe channel, since it increases the eigenvalue spread of the autocorrelation matrix of the channel output and makes equalization more difficult.
The python code to obtain the inverse of unknown system using LMS algorithm is given in Fig. 11.15, and its corresponding simulation result is shown in Fig. 11.16.
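A self-contained sketch of the equalization experiment is given below; it is not the listing of Fig. 11.15. The raised-cosine channel taps h[n] = 0.5·(1 + cos(2π(n − 2)/W)) for n = 1, 2, 3, the value W = 2.9, the 21-tap equalizer, the delay of 10 samples and μ = 0.01 are all assumed choices.

```python
import numpy as np

np.random.seed(0)
W = 2.9
# Hypothetical 3-tap raised-cosine channel
h = np.array([0.5 * (1 + np.cos(2 * np.pi * (k - 2) / W)) for k in (1, 2, 3)])

N, mu, delay = 21, 0.01, 10
x = np.sign(np.random.randn(5000))         # random +/-1 training symbols
r = np.convolve(x, h)[:len(x)]             # channel output (noise-free for clarity)

w = np.zeros(N)
for n in range(N - 1, len(x)):
    r_n = r[n - N + 1:n + 1][::-1]
    e = x[n - delay] - w @ r_n             # delayed input is the reference d[n]
    w = w + mu * e * r_n                   # LMS update

cascade = np.convolve(h, w)                # channel followed by equalizer
```

Because this channel is not minimum phase, a causal FIR inverse only exists approximately; the decision delay gives the equalizer room to approximate it, so the cascade peaks at the delay rather than at zero.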
Inference
From Fig. 11.16, the following facts can be observed:
1. The impulse response of the cascaded system is an impulse. This implies that the cascade of the channel filter and its inverse system results in an identity system.
2. The Fourier transform of an impulse results in a flat spectrum. This is evident from the spectrum of the cascaded system.
Task
1. Increase the order of the adaptive filter and obtain the impulse response of the inverse system.
11.2.2 Normalized LMS Algorithm
The weight updation formula for the normalized LMS algorithm is given by

\( w\left[n+1\right]=w\left[n\right]+\frac{\beta }{c+{\left\Vert x\left[n\right]\right\Vert}^2}e\left[n\right]x\left[n\right] \)
where ‘β’ is a positive constant, which controls the convergence speed of the algorithm, and ‘c’ is a small regularization parameter added to the squared norm of the input vector x[n] to avoid division by zero.
Experiment 11.7 Normalized LMS (NLMS) Algorithm
The python code for the normalized LMS algorithm is given in Fig. 11.17.
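Figure 11.17 is not reproduced here; a sketch consistent with the update formula above might look as follows. The default regularization value c = 1e-3, the value β = 0.5 and the 3-tap test system in the demonstration are assumptions.

```python
import numpy as np

def NLMS_algorithm(x, beta, N, t, c=1e-3):
    """Normalized LMS sketch: the step size is computed from the data,
    mu_n = beta / (c + ||x_n||^2), so it is not a direct input."""
    w = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        x_n = x[n - N + 1:n + 1][::-1]
        e[n] = t[n] - w @ x_n
        w = w + (beta / (c + x_n @ x_n)) * e[n] * x_n
    return w, e

# Quick check on a hypothetical 3-tap system
np.random.seed(0)
h = np.array([0.5, -0.3, 0.2])
x = np.random.randn(1000)
t = np.convolve(x, h)[:len(x)]
w, e = NLMS_algorithm(x, 0.5, 3, t)
```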
Inference
1. From Fig. 11.17, it is possible to observe that the algorithm is in the form of a function, and it can be called for adaptive signal processing applications whenever required.
2. The step size or learning rate is not given as a direct input to the function.
3. The step size is calculated from the input data, ‘β’ and ‘c’.
Experiment 11.8 Inverse System Modelling Using NLMS Algorithm
This experiment repeats the inverse system modelling experiment discussed earlier. Experiment 11.6 is repeated with the same specifications, but the NLMS algorithm is used for adaptive filtering instead of the LMS algorithm. The Python code of this experiment is shown in Fig. 11.18, and its corresponding simulation result is displayed in Fig. 11.19.
Inference
The following conclusions can be made from this experiment:
1. From Fig. 11.19, it is possible to conclude that the cascade of the channel and the inverse filter gives a unit impulse sequence as the impulse response.
2. The magnitude response confirms that the spectrum of the cascaded filter is flat (dc).
3. Therefore, the channel filter and the adaptive filter are inverses of each other.
11.2.3 Sign LMS Algorithm
The weight updation formula for the sign LMS algorithm is given by

\( w\left[n+1\right]=w\left[n\right]+\mu\ \operatorname{sign}\left\{e\left[n\right]\right\}x\left[n\right] \)
where ‘sign’ indicates the sign of the number, ‘w[n + 1]’ represents new weight and ‘e[n]’ denotes the error signal between target and estimated signal.
Experiment 11.9 Adaptive Line Enhancer Using Sign LMS Algorithm
This experiment discusses the Python implementation of an adaptive line enhancer using the sign LMS algorithm. The block diagram of the adaptive line enhancer is shown in Fig. 11.20. From this figure, it is possible to observe that the input to the FIR filter is a delayed, noisy version of the input signal (x[n]), and the final output (y[n]) is the enhanced, noise-reduced input signal. The aim of this experiment is to remove the noisy components present in the input signal using the sign LMS adaptive algorithm. The Python code for the sign LMS algorithm is given in Fig. 11.21 as a function.
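A minimal sketch of such a function (the listing of Fig. 11.21 is not reproduced here) is given below, together with a small system identification check; the 3-tap test system and the step size are hypothetical.

```python
import numpy as np

def sign_LMS(x, mu, N, t):
    """Sign LMS sketch: the update uses only the sign of the error."""
    w = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        x_n = x[n - N + 1:n + 1][::-1]
        e[n] = t[n] - w @ x_n
        w = w + mu * np.sign(e[n]) * x_n   # sign of the error drives the update
    return w, e

# Quick check on a hypothetical 3-tap system
np.random.seed(0)
h = np.array([0.5, -0.3, 0.2])
x = np.random.randn(5000)
t = np.convolve(x, h)[:len(x)]
w, e = sign_LMS(x, 0.001, 3, t)
```

Replacing the error by its sign makes each update cheaper and more robust to impulsive errors, at the cost of slower convergence and a small residual fluctuation proportional to μ.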
The Python code for the adaptive line enhancer using sign LMS is given in Fig. 11.22. In this experiment, the input signal contains 500, 2000 and 3500 Hz frequency components, and the sampling frequency is taken as 8000 Hz. External random noise is added to the input signal, and the noisy signal is the input to the adaptive filter. The delay is chosen as 10 samples, and the length of the adaptive FIR filter is fixed at 25. The main objective of this experiment is to recover or enhance the original signal from the noisy input data using the sign LMS algorithm. The simulation result of the Python code given in Fig. 11.22 is shown in Fig. 11.23. From the magnitude spectrum, it is possible to observe that the noise impact is reduced by the sign LMS algorithm.
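The listing of Fig. 11.22 is not reproduced; the sketch below follows the stated specification (tones at 500, 2000 and 3500 Hz, fs = 8000 Hz, a delay of 10 samples and a 25-tap filter), while the noise level of 0.5 and the step size of 0.001 are assumed.

```python
import numpy as np

np.random.seed(0)
fs = 8000
n = np.arange(8000)
s = (np.sin(2 * np.pi * 500 * n / fs)
     + np.sin(2 * np.pi * 2000 * n / fs)
     + np.sin(2 * np.pi * 3500 * n / fs))
x = s + np.random.normal(0, 0.5, len(s))            # noisy input

delay, N, mu = 10, 25, 0.001
w = np.zeros(N)
y = np.zeros(len(x))                                # enhanced output
e = np.zeros(len(x))                                # error (noise path)
for k in range(delay + N - 1, len(x)):
    x_d = x[k - delay - N + 1:k - delay + 1][::-1]  # delayed input vector
    y[k] = w @ x_d                                  # prediction from the past
    e[k] = x[k] - y[k]
    w = w + mu * np.sign(e[k]) * x_d                # sign LMS update
```

The delay decorrelates the white noise between the filter input and the reference, so only the periodic (predictable) components survive in y.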
Inference
From this experiment, the following observations can be drawn:
1. From Fig. 11.23, the magnitude response of the noisy signal indicates that the signal has three distinct frequency components together with noisy components.
2. The magnitude response of the denoised signal has three spikes, and the impact of the noisy components is smaller than in the magnitude response of the input.
Task
1. Make suitable adjustments to the parameters used in the Python code given in Fig. 11.22 to further reduce the effect of noise in the denoised or enhanced signal.
11.3 RLS Algorithm
Recursive least squares (RLS) is an adaptive algorithm based on the idea of least squares. The block diagram of the adaptive filter based on the RLS algorithm is shown in Fig. 11.24. From the figure, x[n] is the input to the filter, d[n] is the desired signal, and the difference between the desired signal and the output of the filter is the error signal e[n]. A forgetting factor is used in the RLS algorithm to remove or minimize the influence of old measurements. A small forgetting factor reduces the influence of old samples and increases the weight of new samples; as a result, better tracking can be realized at the cost of a higher variance of the filter coefficients. A large forgetting factor keeps more information about the old samples and yields a lower variance of the filter coefficients, but it takes longer to converge.
Let us define the a priori error as \( \hat{e}\left[n\right]=d\left[n\right]-{w}^T\left[n-1\right]x\left[n\right] \). The weight updation formula for the RLS algorithm is given by

\( w\left[n\right]=w\left[n-1\right]+\frac{P\left[n-1\right]x\left[n\right]}{\lambda +{x}^T\left[n\right]P\left[n-1\right]x\left[n\right]}\hat{e}\left[n\right] \)

If \( k\left[n\right]=\frac{P\left[n-1\right]x\left[n\right]}{\lambda +{x}^T\left[n\right]P\left[n-1\right]x\left[n\right]} \) represents the gain, then the above expression can be written as

\( w\left[n\right]=w\left[n-1\right]+k\left[n\right]\hat{e}\left[n\right] \)

where λ is the forgetting factor, and the inverse correlation matrix is updated as \( P\left[n\right]={\lambda}^{-1}\left(P\left[n-1\right]-k\left[n\right]{x}^T\left[n\right]P\left[n-1\right]\right) \).
The flow chart of the sequence of steps followed in RLS algorithm is shown in Fig. 11.25. From the flow chart, it is possible to observe that the algorithm is iterative. Proper initialization of filter coefficients is necessary for convergence.
Experiment 11.10 Implementation of RLS Algorithm
This experiment discusses the implementation of the RLS algorithm using Python. The Python code for the RLS algorithm is given in Fig. 11.26, and it is written as a function so that it can be used for different applications.
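The function of Fig. 11.26 is not reproduced here; a sketch of the RLS recursion described above might look as follows. The forgetting factor of 0.99 and the initialization P = 100·I are assumed defaults, and the 3-tap test system is hypothetical.

```python
import numpy as np

def RLS_algorithm(x, t, N, lam=0.99, delta=100.0):
    """RLS sketch: lam is the forgetting factor, P is initialized as delta*I."""
    w = np.zeros(N)
    P = delta * np.eye(N)                       # inverse correlation matrix
    e = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        x_n = x[n - N + 1:n + 1][::-1]
        e[n] = t[n] - w @ x_n                   # a priori error
        k = P @ x_n / (lam + x_n @ P @ x_n)     # gain vector k[n]
        w = w + k * e[n]                        # weight update
        P = (P - np.outer(k, x_n @ P)) / lam    # update of P[n]
    return w, e

# Quick check: RLS converges in far fewer samples than LMS
np.random.seed(0)
h = np.array([0.5, -0.3, 0.2])
x = np.random.randn(300)
t = np.convolve(x, h)[:len(x)]
w, e = RLS_algorithm(x, t, 3)
```

The rank-one update of P is an application of the matrix inversion lemma; it is what makes the least squares solution recursive instead of requiring a matrix inversion at every step.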
Experiment 11.11 Adaptive Line Enhancer Using RLS Algorithm
This experiment is a repetition of Experiment 11.9; instead of the sign LMS algorithm, the RLS algorithm is used to filter out the noisy components present in the input signal. The Python code for this experiment is given in Fig. 11.27, and its corresponding simulation result is displayed in Fig. 11.28.
Inference
From Fig. 11.28, it is possible to confirm that the magnitude response of the filtered or denoised output is better than the magnitude response of the noisy input. Therefore, RLS algorithm can act as an adaptive line enhancer.
Experiment 11.12 Comparison of System Identification with Different Adaptive Filters
The main objective of this experiment is to compare the simulation result of different adaptive algorithms like LMS, NLMS, Sign LMS and RLS for the system identification process. The python code to compare the simulation results of system identification is given in Fig. 11.29, and its simulation results are shown in Fig. 11.30.
Inference
From Fig. 11.30, it is possible to observe that proper selection of the adaptive filter parameters, such as the step size (learning rate), forgetting factor and regularization, plays a major role when using adaptive filtering algorithms for the system identification application in signal processing.
Task
Write Python code to compare the simulation results of different adaptive algorithms (LMS, NLMS, sign LMS and RLS) for the adaptive line enhancement application in signal processing.
Exercises
1. Execute the Python code given in Fig. 11.12 and compare the estimated filter ‘w’ with the original filter coefficients ‘h’ for different lengths of the filter. Also, execute the same code and comment on the convergence of the LMS algorithm for different values of the learning rate ‘mu’, including a negative value.
2. Use the Python code for the sign LMS algorithm given in Fig. 11.22 to compute the impulse response of the inverse filter and comment on the role of the learning rate.
3. Modify the sign LMS algorithm based on the sign regressor algorithm, whose update equation is w[n + 1] = w[n] + μe[n] sign {x[n]}. Compute the impulse response of the inverse filter and comment on the simulation result.
4. Modify the sign LMS algorithm based on the sign-sign LMS algorithm, whose update equation is w[n + 1] = w[n] + μ sign {e[n]} sign {x[n]}. Compute the impulse response of the inverse filter and comment on the simulation result.
5. Use the Python code for the RLS algorithm given in Fig. 11.26 to obtain the inverse filter coefficients and comment on the simulation result. Also, comment on the selection of the forgetting factor and the regularization parameter.
Objective Questions
1. The filter which is based on the minimum mean square error criterion is
   A. Wiener filter
   B. Window-based FIR filter
   C. Frequency sampling-based FIR filter
   D. Savitzky-Golay filter
2. If ‘R’ is the autocorrelation matrix of the observed signal and ‘p’ represents the cross-correlation between the desired signal and the observed signal, then the expression for the Wiener-Hopf equation is
   A. wopt = R × p
   B. wopt = R + p
   C. wopt = R − p
   D. wopt = p/R
3. The weight update expression of the standard LMS algorithm is
   A. w(n + 1) = w(n) + μx[n]e[n]
   B. w(n + 1) = w(n) − μx[n]e[n]
   C. w(n + 1) = w(n) + μx[n]e²[n]
   D. w(n + 1) = w(n) − μx[n]e²[n]
4. If μ refers to the step size and λ refers to the eigenvalue of the autocorrelation matrix, then the condition for convergence of the LMS algorithm is given by
   A. \( 0<\mu <\frac{2}{\lambda_{\mathrm{min}}} \)
   B. \( 0<\mu <\frac{2}{\lambda_{\mathrm{max}}} \)
   C. \( 0<\mu <\frac{2}{\lambda_{\mathrm{max}}^2} \)
   D. \( 0<\mu <\frac{2}{\lambda_{\mathrm{min}}^2} \)
5. Statement 1: The Wiener filter is based on the statistics of the input data.
   Statement 2: The Wiener filter is an optimal filter with respect to the minimum mean absolute error.
   A. Statements 1 and 2 are true
   B. Statement 1 is correct, and Statement 2 is wrong
   C. Statement 1 is wrong, and Statement 2 is correct
   D. Statements 1 and 2 are wrong
6. The filter which changes its characteristics in accordance with the environment is termed as
   A. Optimal filter
   B. Non-linear filter
   C. Adaptive filter
   D. Linear filter
Bibliography
Simon Haykin, “Adaptive Filter Theory”, Pearson, 2008.
Bernard Widrow and Samuel D. Stearns, “Adaptive Signal Processing”, Pearson, 2002.
Dimitris G. Manolakis, Vinay K. Ingle and Stephen M. Kogon, “Statistical and Adaptive Signal Processing: Spectral Estimation, Signal Modeling, Adaptive Filtering and Array Processing”, Artech House, 2005.
Behrouz Farhang-Boroujeny, “Adaptive Filters: Theory and Applications”, Wiley-Blackwell, 2013.
Alexander D. Poularikas, “Adaptive Filtering”, CRC Press, 2015.
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Esakkirajan, S., Veerakumar, T., Subudhi, B. N. (2024). Adaptive Signal Processing. In: Digital Signal Processing. Springer, Singapore. https://doi.org/10.1007/978-981-99-6752-0_11
Print ISBN: 978-981-99-6751-3. Online ISBN: 978-981-99-6752-0.