Abstract
Some stochastic gradient (SG) algorithms for Hammerstein systems with piecewise linearity are developed in this paper. Because of the complexity of the nonlinear structure, the key term separation technique is used to transform the nonlinear model into a regression model, and some SG algorithms are then proposed for this model. Since the SG algorithm has a slow convergence rate, a forgetting factor SG algorithm and an Aitken SG algorithm are provided. Compared with the forgetting factor SG algorithm, the Aitken SG algorithm has a smaller estimation error variance, which means that it is more effective. Two simulation examples are provided to show the effectiveness of the proposed algorithms.
1 Introduction
Parameter estimation plays an important role in controller design [13, 14] because the controller design of a dynamic system is usually established on the premise that the parameters of the system are known [19, 28]. Compared with linear systems, nonlinear systems arise more widely in engineering practice [6, 10], and they can be roughly divided into four categories: Hammerstein systems [36], Wiener systems [18], Hammerstein–Wiener systems and Wiener–Hammerstein systems [2, 35]. Recently, many identification algorithms have been developed for these nonlinear systems, such as the stochastic gradient (SG) algorithms [39], the expectation maximization algorithms and the iterative algorithms [4]. The SG algorithm updates the parameter estimates using only the latest input–output data at each sampling instant and does not need to compute an inverse matrix; thus, it has a low computational load. Its variants include the multi-innovation stochastic gradient algorithms [16, 42] and the gradient-based iterative algorithms [12].
The idea of gradient-based identification algorithms is first to determine the search direction and then to calculate the step size at each sampling instant. Although the computational effort of the SG algorithm is small, its convergence rate is slow because of its zigzag search directions. In general, there are two methods to improve the convergence rate. One is to obtain the optimal direction at each sampling instant. For example, for control problems with undetermined final time, Hussu provided a conjugate-gradient method [20]. The other is to compute a suitable step size at each sampling instant. For instance, Chen and Ding introduced a convergence index into a modified stochastic gradient (M-SG) algorithm to improve the convergence rate [7]. Ma et al. [30] studied a forgetting factor stochastic gradient (FF-SG) algorithm for Hammerstein systems with saturation and preload nonlinearities. Although the M-SG and FF-SG algorithms can improve the convergence rates, they also bring some issues, such as severe oscillation when the parameter estimates approach the true values [24].
One may ask whether it is feasible to develop a modified SG algorithm which can not only estimate the parameters quickly, but also decrease the variances of the estimation errors. To this end, the Aitken method is introduced in this paper. The Aitken method is a sequence acceleration technique that is particularly efficient for sequences converging linearly. For example, Pavaloiu and Catinas [34] studied an Aitken–Newton iterative method for nonlinear equations, which is more competitive than some optimization methods of the same convergence order. Bumbariu [5] developed an improved Aitken acceleration method that computes the solutions of nonlinear equations with fast convergence rates. The proposed approaches of this paper have the following features.
1. Using the key term separation method, the complex Hammerstein system with piecewise linearity is transformed into a simplified regression model.
2. An FF-SG algorithm is studied for this nonlinear system, which improves the convergence rate.
3. An Aitken-based SG algorithm is developed, which has a fast convergence rate and small estimation error variances.
4. The proposed methods are extended to identify Hammerstein systems with colored noise.
The rest of this paper is organized as follows. Section 2 introduces the Hammerstein model. Section 3 presents some SG algorithms. Section 4 studies the Aitken-based SG algorithm for the piecewise linear system with colored noise. In Sect. 5, two illustrative examples are provided. Section 6 gives the conclusions of this paper and directions for future research.
2 The Hammerstein System with Piecewise Linearity
The piecewise linear system is a special kind of switching system, which exists widely in engineering practice [27, 37]. Such a system can be used to model or approximately describe processes with different gains in different input intervals, e.g., flight control systems, circuits and biological systems [26, 31].
Consider the Hammerstein system with piecewise linearity as follows:
where \(q(\tau )\) is the input which is taken as a persistent excitation signal sequence with zero mean and unit variance, \(y(\tau )\) is the output, \(v(\tau )\) is a white noise with zero mean and variance \(\sigma ^2\), and a piecewise linearity \(f(q(\tau ))\) is shown in Fig. 1, which can be written as
where the corresponding segment slopes are \(m_1\) and \(m_2\).
The polynomials \(A(\zeta )\) and \(B(\zeta )\) are expressed as
Since the piecewise linearity is expressed by two equations, the Hammerstein system may be illustrated by two models [3]. The considered Hammerstein model is then equivalent to a switching model [1]. It is well known that the identification of switching models is more challenging. In order to simplify the identification process, the key term separation method is introduced [8, 9].
Define a switching function,
Then, the nonlinear part \(f(q(\tau ))\) of input is written as
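As a concrete illustration of the switching-function form of \(f(q(\tau ))\), consider the following minimal sketch. The breakpoint at zero and the slope values are assumptions for illustration; the exact segment form is given by Fig. 1 and the displayed equations.

```python
def h(q):
    """Switching function: 1 when q >= 0, else 0 (breakpoint at zero assumed)."""
    return 1.0 if q >= 0 else 0.0

def f(q, m1=1.0, m2=0.5):
    """Two-segment piecewise-linear gain written with the switching function:
    f(q) = m1*q*h(q) + m2*q*(1 - h(q)), with segment slopes m1 and m2."""
    return m1 * q * h(q) + m2 * q * (1.0 - h(q))
```

Writing the nonlinearity this way makes the key term separation possible: the product terms \(m_k q(\tau )\) enter the regression model linearly in the unknown slopes.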
The nonlinear model can be written as
Define the information vector \(\varvec{{\chi }}(\tau )\) and the parameter vector \(\varvec{{\xi }}\) as
Then, the nonlinear model can be simplified as a regression model:
The proposed algorithms in this paper are based on this identification model. Many identification methods are derived from the identification models of dynamical systems [29, 32, 33]; they can be used to estimate the parameters of bilinear systems [23, 38, 47, 48] and can be applied to fields such as chemical process control. From Eq. (7), it can be seen that the parameters can be estimated by all the traditional identification algorithms, at the cost of heavy computational demands [17].
Remark 1
In this paper, \(b_0\) is assumed to be equal to 1; otherwise, \(b_i\) cannot be separated from \(b_im_k\). Assume the parameter estimates are
Once the parameter estimates have been obtained, we first get \(\hat{m}_1\) and \(\hat{m}_2\), and then, based on \(\hat{m}_1\) and \(\hat{m}_2\), recover \(\hat{b}_i=\frac{\hat{b}_i\hat{m}_k}{\hat{m}_k}, i=1,\ldots ,n-1, k=1,2\).
3 Some Stochastic Gradient Algorithms
The SG algorithm can be implemented online, updating the parameters with the latest input–output data [11]. Therefore, it has a low computational cost. However, the algorithm has a slow convergence rate. In this section, some modified SG algorithms will be investigated.
3.1 The Traditional Stochastic Gradient Algorithm
Define the cost function
Assume that the parameter estimate at time \(\tau -1\) is \(\hat{\varvec{{\xi }}}(\tau -1)\); the key of the SG algorithm is to obtain a better estimate \(\hat{\varvec{{\xi }}}(\tau )\) satisfying
\(\hat{\varvec{{\xi }}}(\tau )\) is obtained based on \(\hat{\varvec{{\xi }}}(\tau -1)\) and is written by
To ensure that (8) holds, substituting (9) into (8) gives
Keeping \(\hat{\varvec{{\xi }}}(\tau -1)\) fixed, define \(J(\lambda (\tau ))\) as
Let
Setting the above derivative equal to zero yields
Then, we can get the steepest descent algorithm
However, when \(\varvec{{\chi }}^{\mathrm{T}}(\tau )\varvec{{\chi }}(\tau )\) is small, the correction term \(\frac{{\varvec{{\chi }}}(\tau )}{\varvec{{\chi }}^{\mathrm{T}}(\tau ) \varvec{{\chi }}(\tau )}\big (y(\tau )-{\varvec{{\chi }}}^{\mathrm{T}}(\tau )\hat{\varvec{{\xi }}}(\tau -1)\big )\) would be large, which causes the steepest descent algorithm to diverge. With this in mind, we define
Then, we get the projection algorithm
Since \(\rho \) is a constant, the unchanged step size will make the estimates of the algorithm oscillate seriously when they are close to the true values. In order to solve this problem, we replace \(\rho \) by \(\lambda (\tau -1)\). The SG algorithm to estimate the parameter \(\varvec{{\xi }}\) is then listed as follows,
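One SG update of this kind can be sketched as follows, assuming the standard step-size recursion \(\lambda (\tau )=\lambda (\tau -1)+\varvec{{\chi }}^{\mathrm{T}}(\tau )\varvec{{\chi }}(\tau )\); the exact displayed form of the algorithm is in the equations above.

```python
import numpy as np

def sg_step(xi_hat, lam_prev, chi, y):
    """One SG update with the accumulated step-size denominator
    lambda(tau) = lambda(tau-1) + chi'(tau) chi(tau) (standard form assumed)."""
    lam = lam_prev + chi @ chi        # accumulate the denominator
    e = y - chi @ xi_hat              # innovation at time tau
    return xi_hat + (chi / lam) * e, lam
```

Because \(\lambda (\tau )\) grows monotonically, the effective step size shrinks over time, which avoids the oscillation of a fixed \(\rho \) but also explains the slow convergence noted in Remark 2.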
Remark 2
Although the traditional SG algorithm has a low computational cost, it also brings some challenging issues, e.g., a slow convergence rate, especially for systems with a large number of unknown parameters.
3.2 Two Modified Stochastic Gradient Algorithms
In order to increase the convergence rate, two modified SG algorithms for the Hammerstein system are developed in this subsection. A forgetting factor SG (FF-SG) algorithm is first introduced,
Remark 3
The FF-SG algorithm introduces a forgetting factor r into the step size [15, 43, 46], which makes the step size larger at each sampling instant. Therefore, the FF-SG algorithm has a faster convergence rate than the traditional SG algorithm.
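The effect of the forgetting factor on the step size can be sketched as a one-line change to the SG update: the denominator is discounted by \(r\) before accumulating, so it stays bounded and the step size does not shrink to zero. The recursion form and the value \(r=0.97\) are assumptions for illustration.

```python
import numpy as np

def ffsg_step(xi_hat, lam_prev, chi, y, r=0.97):
    """One FF-SG update: the forgetting factor 0 < r < 1 discounts the
    accumulated denominator so the step size stays bounded away from zero
    (r = 0.97 is an illustrative value, not one from the paper)."""
    lam = r * lam_prev + chi @ chi
    e = y - chi @ xi_hat
    return xi_hat + (chi / lam) * e, lam
```

The larger steady-state step size is what speeds up convergence, and it is also what causes the oscillation near the true values discussed in Remark 4.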
Remark 4
Although the FF-SG algorithm can increase the convergence rate, it brings some challenges, such as large estimation error variances.
To make the variance of the estimation error smaller, another modified SG algorithm, termed the Aitken-based SG (A-SG) algorithm, will be studied in the following. Assume that the parameter estimate \(\hat{\varvec{{\xi }}}(\tau )\) converges to the true value \(\varvec{{\xi }}\), which means that
It is equivalent to
where \({\xi }_{\varrho }\) is the \(\varrho \)th element in the parameter vector \({\varvec{{\xi }}}\), \(\varrho =1,2,\ldots ,3n\). When \(\tau \) is large enough, the equivalent expression of (19) can be written as
From (19) and (20), it follows that
Then, the Aitken accelerated iteration formula for \({\xi }_{\varrho }\) can be written as
However, the parameter \(\varvec{{\xi }}\) cannot be computed by Eq. (18) because it is not a scalar but a vector. In order to obtain the vector form, the parameter \(\varvec{{\xi }}\) is rewritten as
Then, Eq. (21) is equivalent to the 3n equations as follows,
Then, we have
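Applied componentwise, the Aitken \(\Delta ^2\) extrapolation above can be sketched as follows; the guard against a vanishing denominator is an implementation detail added here.

```python
def aitken(x2, x1, x0):
    """Componentwise Aitken Delta^2: given three successive estimates
    x0 = x(tau-2), x1 = x(tau-1), x2 = x(tau), return
    x2 - (x2 - x1)^2 / (x2 - 2*x1 + x0), guarding a vanishing denominator."""
    out = []
    for a2, a1, a0 in zip(x2, x1, x0):
        d = a2 - 2.0 * a1 + a0
        out.append(a2 if abs(d) < 1e-12 else a2 - (a2 - a1) ** 2 / d)
    return out
```

For a linearly converging sequence such as \(x_k = 1 + 0.5^k\), the three values 2, 1.5, 1.25 extrapolate exactly to the limit 1, which is why the method accelerates linear convergence.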
Define
The Aitken-based SG algorithm is obtained as follows,
The A-SG algorithm starts the iterations as follows.
1. To initialize: let \(\tau =1\), \(\hat{\varvec{{\xi }}}(0)=\mathbf{1}_{3n}/p_0\), \(p_0=10^6\) and \(\lambda (0)=1\).
2. Let \(y(\tau )=0, q(\tau )=0\) for \(\tau \leqslant 0\), and give an error tolerance \(\varepsilon \).
3. Collect the input–output data \(\{q(\tau ), y(\tau )\}\).
4. Form \({\varvec{{\chi }}}(\tau )\) by (28).
5. Compute \(e(\tau )\) and \(\lambda (\tau )\) by (27) and (29), respectively.
6. Update the estimation vector \(\hat{\varvec{{\xi }}}(\tau )\) by (26).
7. Compute each estimate \(\bar{a}_{i}(\tau ), i=1,\ldots , n\), \(\bar{m}_{k}(\tau ), k=1,2\) and \(\bar{b}_j(\tau )\bar{m}_k(\tau ), j=1, \ldots , n-1\) by (23)–(25), and then form \(\bar{\varvec{{\xi }}}(\tau )\).
8. Compare \(\bar{\varvec{{\xi }}}(\tau )\) and \(\bar{\varvec{{\xi }}}(\tau -1)\): if \(\Vert \bar{\varvec{{\xi }}}(\tau )-\bar{\varvec{{\xi }}}(\tau -1)\Vert \leqslant \varepsilon \), take \(\bar{\varvec{{\xi }}}(\tau )\) as the estimate and go to the next step; otherwise, increase \(\tau \) by 1 and go to step 3.
9. Compute \(\bar{m}_k(\tau )\) first, and then calculate \(\bar{b}_i(\tau )=\frac{\bar{b}_i(\tau )\bar{m}_k(\tau )}{\bar{m}_k(\tau )}\).
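The steps above can be sketched as a single loop. The SG step-size recursion and the componentwise Aitken formula are assumed standard here; the recovery of individual parameters (steps 7 and 9) is omitted for brevity.

```python
import numpy as np

def a_sg(data, n_par, eps=1e-6):
    """Sketch of the A-SG loop: run SG updates and, at each instant,
    Aitken-accelerate the last three raw estimates; stop when the
    accelerated estimate changes by less than eps."""
    xi = np.ones(n_par) / 1e6          # step 1: xi_hat(0) = 1/p0
    lam = 1.0
    hist = [xi.copy()]
    bar_prev = None
    for chi, y in data:                            # steps 3-4
        lam = lam + chi @ chi                      # step 5: denominator
        xi = xi + (chi / lam) * (y - chi @ xi)     # step 6: SG update
        hist.append(xi.copy())
        if len(hist) >= 3:                         # Aitken on last three estimates
            x2, x1, x0 = hist[-1], hist[-2], hist[-3]
            d = x2 - 2.0 * x1 + x0
            with np.errstate(divide="ignore", invalid="ignore"):
                corr = (x2 - x1) ** 2 / d
            bar = np.where(np.abs(d) > 1e-12, x2 - corr, x2)
            # step 8: stop once the accelerated estimate has settled
            if bar_prev is not None and np.linalg.norm(bar - bar_prev) <= eps:
                return bar
            bar_prev = bar
    return bar_prev if bar_prev is not None else xi
```

Note that the acceleration is applied to the estimate sequence, not to the step size, which is why the variance does not inflate the way it does for FF-SG.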
Remark 5
The A-SG algorithm utilizes three consecutive parameter estimates to obtain an optimal parameter estimate, and it does not rely on a large step size to speed up the convergence. Therefore, the A-SG algorithm has a faster convergence rate together with a smaller estimation error variance.
4 Identification of the Hammerstein System with Piecewise Linearity and Colored Noise
In this section, the SG algorithms are developed to identify the Hammerstein system with colored noise, whose information vector contains unmeasurable noise variables.
4.1 Problem Description and Identification Model
Consider the Hammerstein piecewise linearity system with colored noise as follows,
where \(D(\zeta ):=1+d_1\zeta ^{-1}+d_2\zeta ^{-2}+\cdots +d_{n_d}\zeta ^{-n_d}\); the definitions of \(A(\zeta )\), \(B(\zeta )\) and the piecewise linearity part are the same as those in Sect. 2.
By utilizing the key term separation technique, the system can be transformed into
Then, the system can be written as
Define the information vector \(\varvec{{\psi }}(\tau )\) and the parameter vector \(\varvec{{\vartheta }}\) as
Then, the nonlinear system can be expressed as a simple form,
4.2 The Aitken Stochastic Gradient Algorithm
Since the information vector in the Hammerstein piecewise linearity system with colored noise contains the unmeasurable noise variables \(v(\tau -i)\), we denote by \(\hat{v}(\tau )\) and \(\hat{\varvec{{\psi }}}(\tau )\) the estimates of \(v(\tau )\) and \(\varvec{{\psi }}(\tau )\) at time \(\tau \), respectively. Let \(\hat{\varvec{{\vartheta }}}(\tau )\) be the estimate of \(\varvec{{\vartheta }}\) at time \(\tau \) and define the innovation \(e(\tau )\) at time \(\tau \) as follows,
where
Remark 6
Since the information vector \(\varvec{{\psi }}(\tau )\) contains the unmeasurable variables \(v(\tau -i)\), the innovations \(e(\tau -i)\) can be used in place of these unknown noise variables in the information vector.
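The replacement in Remark 6 can be sketched as one extended-SG step: the tail of the information vector is filled with the most recent innovations. The fixed-length deque `e_hist` and the ordering of the vector are implementation assumptions.

```python
import numpy as np
from collections import deque

def esg_step(theta_hat, lam_prev, chi_part, y, e_hist):
    """One extended-SG step for the colored-noise model: the unmeasurable
    v(tau-1), ..., v(tau-n_d) in psi(tau) are replaced by the past
    innovations stored in e_hist (a deque of fixed length n_d)."""
    psi_hat = np.concatenate([chi_part, list(e_hist)])  # psi_hat(tau)
    lam = lam_prev + psi_hat @ psi_hat
    e = y - psi_hat @ theta_hat                         # innovation e(tau)
    theta_new = theta_hat + (psi_hat / lam) * e
    e_hist.appendleft(e)   # newest first: it serves as e(tau-1) next step
    return theta_new, lam
```

The history is initialized with zeros, e.g. `e_hist = deque([0.0] * n_d, maxlen=n_d)`, matching the convention \(y(\tau )=0, q(\tau )=0\) for \(\tau \leqslant 0\).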
By using the Aitken accelerated iteration technique, the Aitken SG (A-SG) algorithm for the Hammerstein system with colored noise is developed as follows,
The flowchart of the A-SG algorithm is presented in Fig. 2. The methods proposed in this paper can be combined with other identification methods [40, 41] to study the parameter estimation problems of different systems with colored noise, such as nonlinear systems [21, 22], and can be applied to other areas such as signal modeling and networked communication systems.
5 Numerical Examples
Example 1
Consider the following Hammerstein model,
where \(\{v(\tau )\}\) is taken as a white noise sequence with zero mean and variance \(\sigma ^2=0.10^2\), and \(\{q(\tau )\}\) is an input sequence with zero mean and unit variance.
The SG, the FF-SG and the A-SG algorithms are applied to estimate the parameters of the piecewise linear system. The estimation errors \(\delta :=\Vert \hat{\varvec{{\xi }}}-\varvec{{\xi }}\Vert /\Vert \varvec{{\xi }}\Vert \) or \(\delta :=\Vert \bar{\varvec{{\xi }}}-\varvec{{\xi }}\Vert /\Vert \varvec{{\xi }}\Vert \) versus \(\tau \) are shown in Fig. 3 and Tables 1, 2, 3. The means and variances of these three algorithms are given in Table 4.
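The relative estimation error \(\delta \) used in the tables and figures is a simple normalized norm; a minimal helper for reproducing it might look like this.

```python
import numpy as np

def rel_error(est, true):
    """Relative estimation error delta = ||est - true|| / ||true||."""
    est, true = np.asarray(est, float), np.asarray(true, float)
    return np.linalg.norm(est - true) / np.linalg.norm(true)
```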
Example 2
Consider the following Hammerstein model with colored noise,
where \(\{v(\tau )\}\) is taken as a white noise sequence with zero mean and variance \(\sigma ^2=0.10^2\), and \(\{q(\tau )\}\) is an input sequence with zero mean and unit variance.
The SG, the FF-SG and the A-SG algorithms are applied to estimate the parameters of the piecewise linear system with colored noise, and the estimation errors \(\delta :=\Vert \hat{\varvec{{\vartheta }}}-\varvec{{\vartheta }}\Vert /\Vert \varvec{{\vartheta }}\Vert \) or \(\delta :=\Vert \bar{\varvec{{\vartheta }}}-\varvec{{\vartheta }}\Vert /\Vert \varvec{{\vartheta }}\Vert \) versus \(\tau \) are shown in Fig. 4.
From these two examples, the following findings can be obtained.
1. Tables 1, 2 and 3 show that the FF-SG algorithm and the A-SG algorithm outperform the SG algorithm.
2. Figures 3 and 4 show that the estimation error curve of the FF-SG algorithm oscillates severely as the errors converge toward zero, while the estimation error curve of the A-SG algorithm is relatively smooth.
3. Table 4 shows that the A-SG algorithm is the most effective of the three algorithms.
4. The algorithms proposed in this paper can identify not only the Hammerstein system with white noise but also the Hammerstein system with colored noise.
6 Conclusions
In this paper, some SG algorithms are proposed for Hammerstein systems with piecewise linearity. The key term separation method is used to transform the nonlinear model into a regression model. In order to accelerate the convergence rate of the SG algorithm, an FF-SG algorithm and an A-SG algorithm are studied. Compared with the FF-SG algorithm, the A-SG algorithm has almost the same estimation error mean but a smaller estimation error variance. Therefore, the A-SG algorithm has broader application prospects in system identification.
The purpose of this paper is to develop two accelerated SG algorithms for nonlinear systems. These methods can be combined with other identification algorithms, e.g., the recursive least squares algorithm and the expectation–maximization algorithm, to study the parameter estimation of time-delay systems, switching systems and neural network learning systems [25, 44, 45].
Data Availability Statement
All data generated or analyzed during this study are included in this article.
References
M. Ahmadi, H. Mojallali, Identification of multiple-input single-output Hammerstein models using Bezier curves and Bernstein polynomials. Appl. Math. Modell. 35(4), 1969–1982 (2011)
E.W. Bai, An optimal two-stage identification algorithm for Hammerstein-Wiener nonlinear systems. Automatica 34(3), 333–338 (1998)
E.W. Bai, Identification of linear systems with hard input nonlinearities of known structure. Automatica 38(5), 853–860 (2002)
G. Bottegal, A.Y. Aravkin, H. Hjalmarsson, G. Pillonetto, Robust EM kernel-based methods for linear system identification. Automatica 67, 114–126 (2016)
O. Bumbariu, A new Aitken type method for accelerating iterative sequences. Appl. Math. Comput. 219(1), 78–82 (2012)
G.Y. Chen, M. Gan, G.L. Chen, Generalized exponential autoregressive models for nonlinear time series: Stationarity, estimation and applications. Inf. Sci. 438, 46–57 (2018)
J. Chen, Modified stochastic gradient algorithms with fast convergence rates. J. Vib. Control 17(9), 1281–1286 (2011)
J. Chen, Y.J. Liu, Q.M. Zhu, Multi-step-length gradient iterative algorithm for equation-error type models. Syst. Control Lett. 115, 15–21 (2018)
J. Chen, X.P. Wang, R. Ding, Gradient based estimation algorithm for Hammerstein systems with saturation and dead-zone nonlinearities. Appl. Math. Modell. 36, 238–243 (2012)
J. Chen, Q.M. Zhu, J. Li, Biased compensation recursive least squares algorithm for rational models. Nonlinear Dyn. 91(2), 797–807 (2018)
F. Ding, X.P. Liu, G. Liu, Identification methods for Hammerstein nonlinear systems. Digit. Signal Process 21(2), 215–238 (2011)
F. Ding, Y.J. Liu, B. Bao, Gradient based and least squares based iterative estimation algorithms for multi-input multi-output systems. Proc. Inst. Mech. Eng. Part I: J. Syst. Control Eng. 226(1), 43–55 (2012)
F. Ding, L. Lv, J. Pan, X.K. Wan, X.B. Jin, Two-stage gradient-based iterative estimation methods for controlled autoregressive systems using the measurement data. Int. J. Control Autom. Syst. 18(4), 886–896 (2020)
F. Ding, L. Xu, D.D. Meng et al., Gradient estimation algorithms for the parameter identification of bilinear systems using the auxiliary model. J. Comput. Appl. Math. 369, 112575 (2020)
F. Ding, L. Xu, Q.M. Zhu, Performance analysis of the generalised projection identification for time-varying systems. IET Control Theory Appl. 10(18), 2506–2514 (2016)
F. Ding, X. Zhang, L. Xu, The innovation algorithms for multivariable state-space models. Int. J. Adapt. Control Signal Process. 33(11), 1601–1608 (2019)
M. Gan, C.L.P. Chen, G.Y. Chen, L. Chen, On some separated algorithms for separable nonlinear squares problems. IEEE Trans. Cybern. 48(10), 2866–2874 (2018)
A. Hagenblad, L. Ljung, A. Wills, Maximum likelihood identification of Wiener models. Automatica 44(11), 2697–2705 (2008)
J.T. Hu, G.X. Sui, X.X. Lv, X.D. Li, Fixed-time control of delayed neural networks with impulsive perturbations. Nonlinear Anal.: Modell. Control 23(6), 904–920 (2018)
A. Hussu, The conjugate-gradient method for optimal control problems with undetermined final time. Int. J. Control 15(1), 79–82 (1972)
Y. Ji, X.K. Jiang, L.J. Wan, Hierarchical least squares parameter estimation algorithm for two-input Hammerstein finite impulse response systems. J. Frankl. Inst. 357(8), 5019–5032 (2020)
Y. Ji, C. Zhang, Z. Kang, T. Yu, Parameter estimation for block-oriented nonlinear systems using the key term separation. Int. J. Robust Nonlinear Control 30(9), 3727–3752 (2020)
M.H. Li, X.M. Liu, Maximum likelihood least squares based iterative estimation for a class of bilinear systems using the data filtering technique. Int. J. Control Autom. Syst. 18(6), 1581–1592 (2020)
J.S. Li, Y.Y. Zheng, Z.P. Lin, Recursive identification of time-varying systems: Self-tuning and matrix RLS algorithms. Syst. Control Lett. 66, 104–110 (2014)
X. Li, D. O’Regan, H. Akca, Global exponential stabilization of impulsive neural networks with unbounded continuously distributed delays. IMA J. Appl. Math. 80(1), 85–99 (2015)
X. Liu, J. Cao, W. Yu, Q. Song, Nonsmooth finite-time synchronization of switched coupled neural networks. IEEE Trans. Cybern. 46(10), 2360–2371 (2016)
X. Liu, J. Lam, W. Yu, G. Chen, Finite-time consensus of multiagent systems with a switching protocol. IEEE Trans. Neural Netw. Learn. Syst. 27(4), 853–862 (2016)
X.Y. Liu, H.S. Su, M.Z.Q. Chen, A switching approach to designing finite-time synchronization controllers of coupled neural networks. IEEE Trans. Neural Netw. Learn. Syst. 27(2), 471–482 (2016)
H. Ma, J. Pan et al., Partially-coupled least squares based iterative parameter estimation for multi-variable output-error-like autoregressive moving average systems. IET Control Theory Appl. 13(18), 3040–3051 (2019)
J.X. Ma, W.L. Xiong et al., Data filtering based forgetting factor stochastic gradient algorithm for Hammerstein systems with saturation and preload nonlinearities. J. Frankl. Inst. 353(16), 4280–4299 (2016)
H. Oktem, A survey on piecewise-linear models of regulatory dynamical systems. Nonlinear Anal.: Theory, Method Appl. 63(3), 336–349 (2005)
J. Pan, X. Jiang, X.K. Wan, W. Ding, A filtering based multi-innovation extended stochastic gradient algorithm for multivariable control systems. Int. J. Control Autom. Syst. 15(3), 1189–1197 (2017)
J. Pan, H. Ma, X. Zhang et al., Recursive coupled projection algorithms for multivariable output-error-like systems with coloured noises. IET Signal Process. 14(7), 455–466 (2020)
I. Pavaloiu, E. Catinas, On a robust Aitken–Newton method based on the Hermite polynomial. Appl. Math. Comput. 287, 224–231 (2016)
C. Philippe, S.C. Johan, Hammerstein–Wiener system estimator initialization. Automatica 40(9), 1543–1550 (2004)
H. Salhi, S. Kamoun, A recursive parametric estimation algorithm of multivariable nonlinear systems described by Hammerstein mathematical models. Appl. Math. Modell. 39(16), 4951–4962 (2015)
J. Vörös, Parameter identification of Wiener systems with multisegment piecewise-linear nonlinearities. Syst. Control Lett. 56(2), 99–105 (2007)
L.J. Wang, Y. Ji, L.J. Wan, N. Bu, Hierarchical recursive generalized extended least squares estimation algorithms for a class of nonlinear stochastic systems with colored noise. J. Frankl. Inst. 356(16), 10102–10122 (2019)
X.H. Wang, T. Hayat, A. Alsaedi, Combined state and multi-innovation parameter estimation for an input nonlinear state space system using the key term separation. IET Control Theory Appl. 10(13), 1503–1512 (2016)
L. Xu, The damping iterative parameter identification method for dynamical systems based on the sine signal measurement. Signal Process. 120, 660–667 (2016)
L. Xu, F. Ding, Iterative parameter estimation for signal models based on measured data. Circuits Syst. Signal Process. 37(7), 3046–3069 (2018)
L. Xu, F. Ding, Recursive least squares and multi-innovation stochastic gradient parameter estimation methods for signal modeling. Circuits Syst. Signal Process. 36(4), 1735–1753 (2017)
L. Xu, W.L. Xiong, A. Alsaedi, T. Hayat, Hierarchical parameter estimation for the frequency response based on the dynamical window data. Int. J. Control Autom. Syst. 16(4), 1756–1764 (2018)
X. Yang, X. Li, Q. Xi, P. Duan, Review of stability and stabilization for impulsive delayed systems. Math. Biosci. Eng. 15(6), 1495–1515 (2018)
X. Zhang, F. Ding, Adaptive parameter estimation for a general dynamical system with unknown states. Int. J. Robust Nonlinear Control 30(4), 1351–1372 (2020)
X. Zhang, F. Ding, L. Xu, Recursive parameter estimation methods and convergence analysis for a special class of nonlinear systems. Int. J. Robust Nonlinear Control 30(4), 1373–1393 (2020)
X. Zhang, F. Ding, L. Xu, E.F. Yang, Highly computationally efficient state filter based on the delta operator. Int. J. Adapt. Control Signal Process. 33(6), 875–889 (2019)
X. Zhang, Q.Y. Liu et al., Recursive identification of bilinear time-delay systems through the redundant rule. J. Frankl. Inst. 357(1), 726–747 (2020)
Acknowledgements
This work is supported by the National Natural Science Foundation of China (No. 61973137), the Funds of the Science and Technology on Near-Surface Detection Laboratory (No. TCGZ2019A001) and the Fundamental Research Funds for the Central Universities (No. JUSRP22016).
Author information
Authors and Affiliations
Corresponding author
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
About this article
Cite this article
Pu, Y., Yang, Y. & Chen, J. Some Stochastic Gradient Algorithms for Hammerstein Systems with Piecewise Linearity. Circuits Syst Signal Process 40, 1635–1651 (2021). https://doi.org/10.1007/s00034-020-01554-z