Abstract
By extending the least squares-based iterative (LSI) method, this paper presents a decomposition-based LSI (D-LSI) algorithm for identifying linear-in-parameters systems and an interval-varying D-LSI algorithm for handling the identification problems of missing-data systems. The basic idea is to apply the hierarchical identification principle to decompose the original system into two fictitious sub-systems and then to derive new iterative algorithms to estimate the parameters of each sub-system. Compared with the LSI algorithm and the interval-varying LSI algorithm, the decomposition-based iterative algorithms have less computational load. The numerical simulation results demonstrate that the proposed algorithms work quite well.
1 Introduction
Parameter estimation and mathematical models are essential for system identification [13, 31, 33], system optimization [16, 24] and state and data filtering [14, 19, 32]. Exploring new parameter estimation methods is an eternal theme of system identification [5, 6], and many identification methods have been developed for linear and nonlinear systems [1, 25, 38, 40], dual-rate sampled systems [9, 11, 36] and state-delay systems [28]. Iterative methods can be used for estimating parameters and solving matrix equations [4]. The iterative identification algorithms make full use of the measured data at each iteration and thus can produce more accurate parameter estimates than the existing recursive identification algorithms [29]. For decades, many iterative methods have been applied in the parameter estimation, such as the Newton iterative method [7, 26, 41, 42], the gradient-based iterative methods [39] and the least squares-based iterative (LSI) method [17]. Jin et al. [23] studied the LSI identification methods for multivariable integrating and unstable processes in closed loop; Wang et al. [37] derived several gradient-based iterative estimation algorithms for a class of nonlinear systems with colored noises using the filtering technique.
The least squares identification method involves matrix inversion, and its computational complexity depends on the dimensions of the covariance matrices [18]. In order to reduce the computational complexity, the decomposition technique is usually taken to transform a large-scale system into several sub-systems with small sizes, which can be easier to identify. Chen et al. [2] developed a decomposition-based least squares identification algorithm for input nonlinear systems by adopting the key term separation technique; Zhang [43] proposed a decomposition-based LSI identification algorithm for output error moving average systems based on the hierarchical identification principle.
In the field of system identification, missing-data systems have received much attention. Dual-rate sampled systems and multirate (non-uniformly) sampled systems can be regarded as a class of the systems with missing data [10]. In recent years, different identification methods for missing-data systems have been reported in the literature, e.g., the interval-varying auxiliary model-based recursive least squares method [8], the filtering-based multiple-model method [27] and the interval-varying auxiliary model-based multi-innovation stochastic gradient (V-AM-MISG) identification method [8, 12]. Recently, Jin et al. [22] extended the V-AM-MISG method to multivariable output error systems with scarce measurements by means of the interval-varying and multi-innovation methods in [8, 12]; Raghavan et al. [30] studied the expectation maximization-based state-space model identification problems with irregular output sampling.
This paper applies the decomposition technique to study the parameter identification problems of linear-in-parameters systems for improving computational efficiency. The key is to decompose the information vector into two sub-information vectors and the parameter vector into two sub-parameter vectors with smaller dimensions and fewer variables and then to estimate the parameters of each sub-system, respectively. The main contributions are as follows.
- A decomposition-based LSI (D-LSI) algorithm is developed for linear-in-parameters systems by employing the hierarchical identification principle.
- An interval-varying D-LSI algorithm is derived for estimating the parameters of the systems with missing data.
- The proposed algorithms have higher computational efficiency than the LSI algorithm and the interval-varying LSI algorithm.
This paper is organized as follows: Section 2 introduces the identification model of the linear-in-parameters systems. Section 3 gives an LSI algorithm for comparisons. A D-LSI algorithm for the linear-in-parameters systems is developed in Sect. 4. Section 5 describes the parameter estimation problem with missing data and proposes an interval-varying LSI algorithm. Section 6 derives an interval-varying D-LSI algorithm to reduce computational load. The effectiveness of the proposed algorithms is illustrated by two simulation examples in Sect. 7. Finally, Sect. 8 gives some conclusions.
2 System Description and Identification Model
Let us introduce some notation. “\(A=:X\)” or “\(X:=A\)” stands for “A is defined as X”; \(\hat{{\varvec{\vartheta }}}(t)\) denotes the estimate of \({\varvec{\vartheta }}\) at time t; the norm of a matrix (or a column vector) \(\varvec{X}\) is defined by \(\Vert \varvec{X}\Vert ^2:=\mathrm{tr}[\varvec{X}\varvec{X}^{\mathrm{T}}]\); \(\mathbf{1}_n\) stands for an n-dimensional column vector whose elements are all 1; the superscript T denotes the matrix transpose.
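As a quick sanity check of this norm convention, the following sketch (using NumPy, with a hypothetical matrix) verifies that \(\mathrm{tr}[\varvec{X}\varvec{X}^{\mathrm{T}}]\) coincides with the sum of squared entries, i.e., the squared Frobenius norm:

```python
import numpy as np

# ||X||^2 := tr[X X^T] equals the sum of squared entries of X
# (the squared Frobenius norm); the matrix below is hypothetical.
def sq_norm(X: np.ndarray) -> float:
    X = np.atleast_2d(X)             # so column vectors are handled too
    return float(np.trace(X @ X.T))

X = np.array([[1.0, 2.0], [3.0, 4.0]])
# tr[X X^T] = 1 + 4 + 9 + 16 = 30
```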
Consider the linear-in-parameters system which can be expressed as
where \(y(t)\in {\mathbb {R}}\) is the measured output, \({\varvec{\phi }}(t)\in {\mathbb {R}}^m\) is the information vector consisting of the system input–output data, \({\varvec{\theta }}\in {\mathbb {R}}^m\) is the parameter vector to be estimated, \(v(t)\in {\mathbb {R}}\) is the random white noise with zero mean and variance \(\sigma ^2\), A(z) and F(z) with known orders \(n_a\) and \(n_f\) are polynomials in the unit backward shift operator \(z^{-1}\) with the property \(z^{-1}y(t)=y(t-1)\), and defined by
The objective of this paper is to use the decomposition technique to derive iterative methods that estimate the parameters \({\varvec{\theta }}\), \(a_i\) and \(f_i\) in (1) from observation data while reducing the computational load. Without loss of generality, assume that \({\varvec{\phi }}(t)=\mathbf{0}\), \(y(t)=0\) and \(v(t)=0\) for \(t\leqslant 0\).
Define the parameter vectors and the information vectors,
Define the intermediate variable
Then, System (1) can be rewritten as
Equation (4) is the identification model of System (1), and its parameter vector \({\varvec{\vartheta }}\) contains all the parameters \({\varvec{\theta }}\), \(a_i\) and \(f_i\) of the system.
3 The Least Squares-Based Iterative Algorithm
In this section, we give a least squares-based iterative algorithm for comparisons.
Consider the newest p data from \(j=t-p+1\) to \(j=t\) (p represents the data length). According to the identification model in (4), define a quadratic function:
Assume that the information vector \({\varvec{\varphi }}(t)\) is persistently exciting for large p. Minimizing the function \(J({\varvec{\vartheta }})\), we can obtain the least squares estimate of the parameter vector \({\varvec{\vartheta }}\):
Notice that the estimate \(\hat{{\varvec{\vartheta }}}(t)\) in (5) is impossible to obtain directly because the information vector \({\varvec{\varphi }}(t-j)\) contains the unknown term \(x(t-i)\). Here, the approach is based on the hierarchical identification principle: let \(k=1,2,3, \ldots \) be an iterative variable, \(\hat{{\varvec{\vartheta }}}_k(t):=\left[ \begin{array}{c} \hat{\varvec{a}}_k(t) \\ \hat{\varvec{f}}_k(t) \\ \hat{{\varvec{\theta }}}_k(t) \end{array}\right] \in {\mathbb {R}}^n\) be the iterative estimate of \({\varvec{\vartheta }}\) at iteration k, use the estimate \(\hat{x}_{k-1}(t-i)\) of \(x(t-i)\) to construct the estimate \(\hat{{\varvec{\varphi }}}_{x,k}(t)\) of \({\varvec{\varphi }}_x(t)\) at iteration k:
and define the estimate of \({\varvec{\varphi }}(t)\):
Replacing \({\varvec{\varphi }}_x(t)\), \({\varvec{\theta }}\) and \(\varvec{f}\) in (2) with \(\hat{{\varvec{\varphi }}}_{x,k}(t)\), \(\hat{{\varvec{\theta }}}_k(t)\) and \(\hat{\varvec{f}}_k(t)\), respectively, the estimate \(\hat{x}_k(t)\) of x(t) can be computed by
Replacing \({\varvec{\varphi }}(t-j)\) in (5) with \(\hat{{\varvec{\varphi }}}_k(t-j)\), we can obtain the following least squares-based iterative (LSI) algorithm for estimating \({\varvec{\vartheta }}\):
The LSI parameter estimation algorithm is able to make full use of all the input–output data at each iteration, and thus, the parameter estimation accuracy can be greatly improved.
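Since the paper's numbered model equations are not reproduced in this excerpt, the following sketch illustrates the LSI fixed-point structure on a simplified first-order output-error special case, \(x(t)=bu(t-1)-fx(t-1)\), \(y(t)=x(t)+v(t)\), with all signal lengths and parameter values hypothetical: at each iteration, the unknown internal signal is replaced by the previous iterate's reconstruction, one batch least squares problem is solved over all the data, and the internal signal is refreshed with the new estimates.

```python
import numpy as np

# Simplified special case (hypothetical values): an output-error model
#   x(t) = b*u(t-1) - f*x(t-1),   y(t) = x(t) + v(t),
# where x(t) is unmeasured, mimicking the unknown x(t-i) terms above.
rng = np.random.default_rng(0)
b_true, f_true = 0.8, 0.4
N = 2000
u = rng.standard_normal(N)
x = np.zeros(N)
y = np.zeros(N)
for t in range(1, N):
    x[t] = b_true * u[t - 1] - f_true * x[t - 1]
    y[t] = x[t] + 0.05 * rng.standard_normal()

theta = np.zeros(2)        # iterative estimate of [b, f]
x_hat = np.zeros(N)        # reconstruction of the internal signal
for k in range(20):
    # Regressor built with the previous iteration's x-hat (hierarchical idea).
    Phi = np.column_stack([u[:-1], -x_hat[:-1]])
    theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]  # batch LS solve
    b, f = theta
    for t in range(1, N):  # refresh x-hat with the new parameter estimates
        x_hat[t] = b * u[t - 1] - f * x_hat[t - 1]
```

Note how each iteration re-uses the whole data batch, which is why the iterative estimate is typically more accurate than a single recursive pass.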
4 The Decomposition-Based LSI Algorithm
The LSI algorithm can improve the parameter estimation accuracy, but the disadvantage is that it needs heavy computational load for large-scale systems. By means of the hierarchical identification principle, the following derives a D-LSI algorithm to improve the computational efficiency.
The identification model in (3) includes the known information vectors \({\varvec{\varphi }}_y(t)\) and \({\varvec{\phi }}(t)\), and the unknown information vector \({\varvec{\varphi }}_x(t)\). Define a new information vector
and the corresponding parameter vector
Based on the hierarchical identification principle [3], by defining two intermediate variables
we can decompose the identification model in (3) into the following two fictitious sub-models:
The parameter vectors \({\varvec{\theta }}_1=\left[ \begin{array}{c} \varvec{a} \\ {\varvec{\theta }} \end{array} \right] \) and \(\varvec{f}\) to be identified are included in the two sub-models, respectively.
According to Eqs. (16) and (17), minimizing the quadratic functions
we can obtain the following least squares estimates of the parameter vectors \({\varvec{\theta }}\) and \(\varvec{f}\):
Here, we have used the assumption that the information vectors \({\varvec{\varphi }}_1(t)\) and \({\varvec{\varphi }}_x(t)\) are persistently exciting for large p. Substituting (14) and (15) into (18) and (19), respectively, we have
However, the information vector \({\varvec{\varphi }}_x(t)\) contains the unknown term \(x(t-i)\), so the algorithm in (20) and (21) cannot be implemented. Similarly, we use the hierarchical identification principle to solve this problem: let \(\hat{{\varvec{\theta }}}_{1,k}(t):=[\hat{\varvec{a}}_k^{\mathrm{T}}(t),\hat{{\varvec{\theta }}}_k^{\mathrm{T}}(t)]^{\mathrm{T}}\in {\mathbb {R}}^{n_a+m}\) be the iterative estimate of \({\varvec{\theta }}_1\) at iteration k, and let \(\hat{{\varvec{\varphi }}}_{x,k}(t)\) be the estimate of \({\varvec{\varphi }}_x(t)\) obtained by replacing \(x(t-i)\) with its estimate \(\hat{x}_{k-1}(t-i)\) from iteration \(k-1\).
Replacing \({\varvec{\varphi }}_x(t)\), \(\varvec{f}\) and \({\varvec{\theta }}_1\) in (20) and (21) with their corresponding estimates \(\hat{{\varvec{\varphi }}}_{x,k}(t)\), \(\hat{\varvec{f}}_{k-1}(t)\) and \(\hat{{\varvec{\theta }}}_{1,k-1}(t)\), respectively, we can summarize the decomposition-based LSI (D-LSI) algorithm of the linear-in-parameters systems as
In the D-LSI algorithm, the dimensions of the covariance matrices \(\varvec{S}_1^{-1}(t)\) and \(\hat{\varvec{S}}_{2,k}^{-1}(t)\) in (22) and (24) are \((n_a+m) \times (n_a+m)\) and \(n_f\times n_f\). In the LSI algorithm, the dimension of the covariance matrix \(\hat{\varvec{S}}_k^{-1}(t)\) in (6) is \((n_a+m+n_f) \times (n_a+m+n_f)\). Thus, the D-LSI algorithm requires less computational cost than the LSI algorithm.
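To make this cost comparison concrete, a back-of-the-envelope sketch (with hypothetical orders \(n_a\), \(m\), \(n_f\)) uses the \(\mathcal{O}(d^3)\) cost of inverting a \(d\times d\) covariance matrix: one \((n_a+m+n_f)\)-dimensional inversion for the LSI algorithm versus one \((n_a+m)\)-dimensional and one \(n_f\)-dimensional inversion for the D-LSI algorithm.

```python
# O(d^3) proxy for inverting a d x d covariance matrix; orders are hypothetical.
na, m, nf = 3, 4, 3

lsi_cost = (na + m + nf) ** 3            # one (na+m+nf)-dim inversion
dlsi_cost = (na + m) ** 3 + nf ** 3      # two smaller inversions

# (na+m+nf)^3 = 1000 versus (na+m)^3 + nf^3 = 343 + 27 = 370
```

Because \((a+b)^3 > a^3 + b^3\) for positive a and b, the decomposition always wins on this proxy, and the gap grows with the dimensions.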
The steps involved in the D-LSI algorithm to compute the parameter estimation vectors \(\hat{{\varvec{\theta }}}_{1,k}(t)\) and \(\hat{\varvec{f}}_k(t)\) are listed in the following.
1. Set the data length p, let \(t=p\), collect the observation data {y(i), \({\varvec{\phi }}(i)\): \(i=0, 1, \ldots , p-1\)}, and set a small positive number \(\varepsilon \).
2. Collect the observation data y(t) and \({\varvec{\phi }}(t)\) and form \({\varvec{\varphi }}_y(t)\) using (27) and \({\varvec{\varphi }}_1(t)\) using (26).
3. Let \(k=1\), set the initial values \(\hat{{\varvec{\theta }}}_{1,0}(0)=\mathbf{1}_{n_a+m}/p_0\), \(\hat{\varvec{f}}_0(0)=\mathbf{1}_{n_f}/p_0\), \(\hat{x}_0(t-i)=1/p_0\) \((i=1,2,\ldots ,n_f)\), \(p_0=10^6\).
4. Form \(\hat{{\varvec{\varphi }}}_{x,k}(t)\) using (28), compute \(\varvec{S}_1(t)\) and \(\hat{\varvec{S}}_{2,k}(t)\) using (23) and (25).
5. Update the parameter estimation vectors \(\hat{{\varvec{\theta }}}_{1,k}(t)\) and \(\hat{\varvec{f}}_k(t)\) using (22) and (24), respectively.
6. Read \(\hat{{\varvec{\theta }}}_k(t)\) from \(\hat{{\varvec{\theta }}}_{1,k}(t)\) using (30), and compute \(\hat{x}_k(t)\) using (29).
7. Compare \(\hat{{\varvec{\theta }}}_{1,k}(t)\) with \(\hat{{\varvec{\theta }}}_{1,k-1}(t)\) and \(\hat{\varvec{f}}_k(t)\) with \(\hat{\varvec{f}}_{k-1}(t)\): if
$$\begin{aligned} \Vert \hat{{\varvec{\theta }}}_{1,k}(t)-\hat{{\varvec{\theta }}}_{1,k-1}(t)\Vert +\Vert \hat{\varvec{f}}_k(t)-\hat{\varvec{f}}_{k-1}(t)\Vert \leqslant \varepsilon , \end{aligned}$$
obtain k, \(\hat{{\varvec{\theta }}}_{1,k}(t)\) and \(\hat{\varvec{f}}_k(t)\), increase t by 1, and go to Step 2; otherwise, increase k by 1, and go to Step 4.
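The decomposition can be sketched on a simplified first-order special case, \(x(t)=bu(t-1)-fx(t-1)\), \(y(t)=x(t)+v(t)\) (all values hypothetical, since the paper's numbered equations are not reproduced here): the single least squares solve is split into two small sub-problems, one for the \({\varvec{\theta }}_1\)-part and one for the \(\varvec{f}\)-part, each holding the other's latest iterate fixed. For a well-conditioned start, the internal signal is initialized with the measured output rather than the paper's \(1/p_0\) values.

```python
import numpy as np

# Simplified special case (hypothetical values):
#   x(t) = b*u(t-1) - f*x(t-1),   y(t) = x(t) + v(t).
rng = np.random.default_rng(1)
b_true, f_true = 0.8, 0.4
N = 2000
u = rng.standard_normal(N)
x = np.zeros(N)
y = np.zeros(N)
for t in range(1, N):
    x[t] = b_true * u[t - 1] - f_true * x[t - 1]
    y[t] = x[t] + 0.05 * rng.standard_normal()

b, f = 0.0, 0.0
x_hat = y.copy()           # crude start for the internal signal (x ~ y here)
for k in range(30):
    phi1 = u[:-1]          # sub-regressor for the theta_1-part (here just b)
    phi2 = -x_hat[:-1]     # sub-regressor for the f-part, built from x-hat
    Y = y[1:]
    # Sub-problem 1: fix f at its current iterate, solve a small LS for b.
    b = float(phi1 @ (Y - f * phi2) / (phi1 @ phi1))
    # Sub-problem 2: fix b at its new value, solve a small LS for f.
    f = float(phi2 @ (Y - b * phi1) / (phi2 @ phi2))
    for t in range(1, N):  # refresh x-hat with the updated estimates
        x_hat[t] = b * u[t - 1] - f * x_hat[t - 1]
```

Each sub-problem here is scalar, so the two "inversions" are divisions; in the general algorithm they are the smaller \((n_a+m)\)- and \(n_f\)-dimensional solves of (22) and (24).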
5 The Interval-Varying LSI Algorithm
This section derives an interval-varying LSI algorithm to solve the identification problems of systems with missing data.
In many applications, sampled data may be missing for a variety of reasons. In general, a missing-data system implies that most data are available and few data are missing over a period of time. The following considers a missing-data system in which the inputs are available at every instant t (because the input signals are usually generated by digital computers in practice) and only a small number of output data are missing, as shown in Fig. 1 [8, 12], where “\(+\)” stands for missing data or bad data (outliers or unbelievable data); e.g., the outputs y(3), y(8), y(9), y(23), \(\ldots \) are missing samples and y(15), \(\ldots \) are unbelievable samples.
For convenience, we define an integer sequence \(\{t_s, s=0,1,2,\ldots \}\) satisfying
with \(t^*_s:=t_s-t_{s-1}\geqslant 1\), such that y(t) and \({\varvec{\varphi }}_y(t)\) are available only when \(t=t_s\) \((s=0, 1, 2, \ldots )\), or equivalently, the data set \(\{y(t_s), {\varvec{\varphi }}_y(t_s): s=0,1,2,\ldots \}\) contains all available outputs. For instance, for the missing-data pattern in Fig. 1, when the order \(n_a=3\), define the integer sequence \(\{t_0\), \(t_1\), \(t_2\), \(\ldots \), \(t_9\), \(\ldots \}\), for \(t_0=0\), \(t_1=7\), \(t_2=13\), \(\ldots \), \(t_9=28\), \(\ldots \), i.e., {\(y(t_0), {\varvec{\varphi }}_y(t_0)\)}, {\(y(t_1), {\varvec{\varphi }}_y(t_1)\)}, {\(y(t_2), {\varvec{\varphi }}_y(t_2)\)}, \(\ldots \), {\(y(t_9), {\varvec{\varphi }}_y(t_9)\)}, \(\ldots \) are available.
Replacing t in (4) with \(t_s\) gives
with
Consider p data from \(i=t_{s-p+1}\) to \(i=t_s\). Define the stacked output vector \(\varvec{Y}(t_s)\) and the stacked information matrix \({\varvec{\varPsi }}(t_s)\) as
Assume that the information vector \({\varvec{\varphi }}(t_s)\) is persistently exciting for large p, that is, \([{\varvec{\varPsi }}^{\mathrm{T}}(t_s){\varvec{\varPsi }}(t_s)]\) is non-singular. The difficulty is that the information vector \({\varvec{\varphi }}_x(t_s)\) in \({\varvec{\varPsi }}(t_s)\) contains the unknown variable \(x(t_s-i)\). Replacing \(x(t_s-i)\) in (34) with their estimates \(\hat{x}_{k-1}(t_s-i)\) at iteration \(k-1\), and minimizing the quadratic function
we can obtain the following interval-varying least squares-based iterative (V-LSI) algorithm for estimating the parameter vector \({\varvec{\vartheta }}\):
The parameter estimate \(\hat{{\varvec{\vartheta }}}_k(t)\) is simply held unchanged over the interval \(\left[ t_s,t_{s+1}-1\right] \).
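A minimal sketch of this interval-varying handling, again on a simplified first-order special case \(x(t)=bu(t-1)-fx(t-1)\), \(y(t)=x(t)+v(t)\) with hypothetical values: only the instants \(t_s\) at which the output exists enter the stacked least squares problem, while the internal-signal estimate is propagated at every instant because the inputs are always available.

```python
import numpy as np

# Simplified special case with missing outputs (hypothetical values):
#   x(t) = b*u(t-1) - f*x(t-1),   y(t) = x(t) + v(t),
# with y observed only at t_s = 3, 6, 9, ... (t_s* = 3); u is always observed.
rng = np.random.default_rng(2)
b_true, f_true = 0.8, 0.4
N = 3000
u = rng.standard_normal(N)
x = np.zeros(N)
y = np.zeros(N)
for t in range(1, N):
    x[t] = b_true * u[t - 1] - f_true * x[t - 1]
    y[t] = x[t] + 0.05 * rng.standard_normal()

ts = np.arange(3, N, 3)    # available output instants t_s
b, f = 0.0, 0.0
x_hat = np.zeros(N)        # internal-signal estimate, refined per iteration
for k in range(30):
    # Stack only the rows where the output actually exists.
    Phi = np.column_stack([u[ts - 1], -x_hat[ts - 1]])
    b, f = np.linalg.lstsq(Phi, y[ts], rcond=None)[0]
    for t in range(1, N):  # inputs exist at every t, so x-hat is propagated
        x_hat[t] = b * u[t - 1] - f * x_hat[t - 1]
```

Between two available instants the estimate is simply held constant, matching the hold rule stated above.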
6 The Interval-Varying D-LSI Algorithm
In the following, we study an interval-varying D-LSI algorithm based on the decomposition technique to reduce computational cost.
Replacing t in (14)–(17) with \(t_s\) gives
Define the stacked output vectors \(\varvec{Y}(t_s)\), \(\varvec{Y}_1(t_s)\) and \(\varvec{Y}_2(t_s)\) and the stacked information matrices \({\varvec{\varPsi }}_1(t_s)\) and \({\varvec{\varPsi }}_x(t_s)\) as
Define two quadratic functions:
Assume that the information vectors \({\varvec{\varphi }}_1(t_s)\) and \({\varvec{\varphi }}_x(t_s)\) are persistently exciting for large p, that is, \([{\varvec{\varPsi }}_1^{\mathrm{T}}(t_s){\varvec{\varPsi }}_1(t_s)]\) and \([{\varvec{\varPsi }}_x^{\mathrm{T}}(t_s){\varvec{\varPsi }}_x(t_s)]\) are non-singular. Letting the partial derivatives of \(J_1({\varvec{\theta }}_1)\) and \(J_2(\varvec{f})\) with respect to \({\varvec{\theta }}_1\) and \(\varvec{f}\) be zero leads to the following least squares estimates of the parameter vectors \({\varvec{\theta }}_1\) and \(\varvec{f}\):
However, we can see that the right-hand sides of Eqs. (44) and (45) contain the unknown parameters \({\varvec{\theta }}_1\) and \(\varvec{f}\), and the information vector \({\varvec{\varphi }}_x(t_s)\) in \({\varvec{\varPsi }}_x(t_s)\) contains the unknown term \(x(t_s-i)\), so the estimates \(\hat{{\varvec{\theta }}}_1(t_s)\) and \(\hat{\varvec{f}}(t_s)\) are impossible to compute directly. Here, we use the hierarchical identification principle to solve this problem: let
\(\hat{{\varvec{\theta }}}_{1,k}(t_s):=\left[ \begin{array}{c} \hat{\varvec{a}}_k(t_s) \\ \hat{{\varvec{\theta }}}_k(t_s) \end{array} \right] \) and \(\hat{\varvec{f}}_k(t_s)\) be the iterative estimates of \({\varvec{\theta }}_1=\left[ \begin{array}{c} \varvec{a} \\ {\varvec{\theta }} \end{array} \right] \) and \(\varvec{f}\) at iteration k, respectively, and \(\hat{x}_k(t_s-i)\) be the estimate of \(x(t_s-i)\). Define
Replacing \({\varvec{\theta }}\), \(\varvec{f}\) and \({\varvec{\varphi }}_x(t_s)\) in (2) with \(\hat{{\varvec{\theta }}}_k(t_s)\), \(\hat{\varvec{f}}_k(t_s)\) and \(\hat{{\varvec{\varphi }}}_{x,k}(j)\), the estimate \(\hat{x}_k(j)\) of x(j) can be computed by
Replacing \({\varvec{\varPsi }}_x(t_s)\), \({\varvec{\theta }}_1\) and \(\varvec{f}\) in (44) and (45) with their corresponding estimates \(\hat{{\varvec{\varPsi }}}_{x,k}(t_s)\), \(\hat{{\varvec{\theta }}}_{1,k-1}(t_s)\) and \(\hat{\varvec{f}}_{k-1}(t_s)\), respectively, we can summarize the interval-varying D-LSI algorithm of computing \(\hat{{\varvec{\theta }}}_{1,k}(t_s)\) and \(\hat{\varvec{f}}_k(t_s)\) as
To initialize the algorithm, we take \(\hat{{\varvec{\theta }}}_{1,0}(t_0)\) and \(\hat{\varvec{f}}_0(t_0)\) as real vectors with small positive entries, e.g., \(\hat{{\varvec{\theta }}}_{1,0}(t_0)=\mathbf{1}_{n_a+m}/p_0\), \(\hat{\varvec{f}}_0(t_0)=\mathbf{1}_{n_f}/p_0\), \(\hat{x}_0(i)=1/p_0\) \((i\leqslant t_1-1)\), \(p_0=10^6\). The parameter estimates \(\hat{{\varvec{\theta }}}_{1,k}(t)\) and \(\hat{\varvec{f}}_k(t)\) in (47) and (49) remain unchanged over the interval \(\left[ t_s,t_{s+1}-1\right] \).
The interval-varying D-LSI algorithm (abbreviated as the V-D-LSI algorithm) uses the data over a finite data window of length p; thus, the V-D-LSI algorithm can track time-varying parameters and can be used for online identification. The interval-varying identification algorithms are proposed for missing-data systems but can be extended to systems with scarce measurements.
7 Examples
Example 1
Consider the following linear-in-parameters system:
In the simulation, \(\{{\varvec{\phi }}(t)\}\) is taken as an uncorrelated persistent excitation vector sequence with zero mean and unit variance, and \(\{v(t)\}\) as a white noise sequence with zero mean and variances \(\sigma ^2=0.10^2\) and \(\sigma ^2= 0.50^2\), respectively. Take the data length \(t=p=L_e=3000\) and apply the LSI algorithm and the D-LSI algorithm to identify this example system. The parameter estimates and their errors \(\delta \) versus iteration k are given in Tables 1 and 2 and Figs. 2 and 3, where the parameter estimation error is defined as \(\delta :=\Vert \hat{{\varvec{\vartheta }}}_k-{\varvec{\vartheta }}\Vert /\Vert {\varvec{\vartheta }}\Vert \times 100\,\%\).
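The error measure \(\delta \) used in both examples can be computed as in the following sketch (the vectors passed in are hypothetical):

```python
import numpy as np

def rel_err_percent(theta_hat: np.ndarray, theta: np.ndarray) -> float:
    """delta := ||theta_hat - theta|| / ||theta|| * 100 %."""
    return float(np.linalg.norm(theta_hat - theta)
                 / np.linalg.norm(theta) * 100.0)

# e.g. an estimate off by 0.1 in one component of a unit-norm vector
# gives delta = 10 %.
```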
From Tables 1 and 2 and Figs. 2 and 3, we can draw the following conclusions.
- The estimation errors given by the LSI algorithm and the D-LSI algorithm become smaller (in general) as iteration k increases or the noise variance \(\sigma ^2\) decreases—see Tables 1 and 2 and Figs. 2 and 3.
- The parameter estimates given by the LSI algorithm and the D-LSI algorithm are very close to the true parameters for large k—see Tables 1 and 2.
Example 2
Consider the following linear-in-parameters system with missing data:
The simulation conditions are similar to those of Example 1; here the noise variances are \(\sigma ^2=0.50^2\) and \(\sigma ^2=1.00^2\), respectively. Take \(s=p=L_e=3000\) and \(t^*_s=3\), and collect the input–output data \(\{{\varvec{\phi }}(t), y(t_s)\}\). Applying the V-LSI algorithm and the V-D-LSI algorithm to identify this example system, the parameter estimates and their estimation errors \(\delta :=\Vert \hat{{\varvec{\vartheta }}}_k(t_s)-{\varvec{\vartheta }}\Vert /\Vert {\varvec{\vartheta }}\Vert \times 100\,\%\) versus k are given in Tables 3 and 4 and Figs. 4 and 5.
From Tables 3 and 4 and Figs. 4 and 5, it is clear that as the iteration k increases, the parameter estimates given by the V-LSI algorithm and the V-D-LSI algorithm converge to their true values, and the estimation errors become smaller (generally); under the same data length and noise variance, the estimation accuracies of the V-LSI algorithm and the V-D-LSI algorithm are close.
8 Conclusions
A D-LSI algorithm and a V-D-LSI algorithm have been derived for identifying linear-in-parameters systems by means of the least squares search and the decomposition technique. The analysis shows that, under the same noise level and iteration number, the D-LSI algorithm and the V-D-LSI algorithm give almost the same parameter estimation accuracy. Compared with the LSI algorithm and the V-LSI algorithm, the decomposition-based iterative algorithms require less computational cost. The simulation results indicate that the proposed algorithms can generate highly accurate parameter estimates. The identification idea can be extended to the parameter estimation problems of other linear and nonlinear systems with colored noises, missing-data systems and scarce-measurement systems [34, 35], hybrid networks and uncertain chaotic delayed systems [20, 21], and can be applied to other fields [15].
References
M. Amairi, Recursive set-membership parameter estimation using fractional model. Circuits Syst. Signal Process. 34(12), 3757–3788 (2015)
H.B. Chen, F. Ding, Y.S. Xiao, Decomposition-based least squares parameter estimation algorithm for input nonlinear systems using the key term separation technique. Nonlinear Dyn. 79(3), 2027–2035 (2015)
H.B. Chen, Y.S. Xiao et al., Hierarchical gradient parameter estimation algorithm for Hammerstein nonlinear systems using the key term separation principle. Appl. Math. Comput. 247, 1202–1210 (2014)
M. Dehghan, M. Hajarian, Iterative algorithms for the generalized centro-symmetric and central anti-symmetric solutions of general coupled matrix equations. Eng. Comput. 29(5), 528–560 (2012)
F. Ding, System Identification—New Theory and Methods (Science Press, Beijing, 2013)
F. Ding, System Identification—Performances Analysis for Identification Methods (Science Press, Beijing, 2014)
F. Ding, K.P. Deng, X.M. Liu, Decomposition based Newton iterative identification method for a Hammerstein nonlinear FIR system with ARMA noise. Circuits Syst. Signal Process. 33(9), 2881–2893 (2014)
F. Ding, J. Ding, Least squares parameter estimation with irregularly missing data. Int. J. Adapt. Control Signal Process. 24(7), 540–553 (2010)
J. Ding, C.X. Fan, J.X. Lin, Auxiliary model based parameter estimation for dual-rate output error systems with colored noise. Appl. Math. Model. 37(6), 4051–4058 (2013)
J. Ding, J.X. Lin, Modified subspace identification for periodically non-uniformly sampled systems by using the lifting technique. Circuits Syst. Signal Process. 33(5), 1439–1449 (2014)
F. Ding, X.M. Liu, Y. Gu, An auxiliary model based least squares algorithm for a dual-rate state space system with time-delay using the data filtering. J. Frankl. Inst. Eng. Appl. Math. 353 (2015). doi:10.1016/j.jfranklin.2015.10.025
F. Ding, G. Liu, X.P. Liu, Parameter estimation with scarce measurements. Automatica 47(8), 1646–1655 (2011)
F. Ding, X.H. Wang, Q.J. Chen, Y.S. Xiao, Recursive least squares parameter estimation for a class of output nonlinear systems based on the model decomposition. Circuits Syst. Signal Process. 35 (2016). doi:10.1007/s00034-015-0190-6
F. Ding, Y.J. Wang, J. Ding, Recursive least squares parameter estimation algorithms for systems with colored noise using the filtering technique and the auxiliary model. Digit. Signal Process. 37, 100–108 (2015)
C.L. Fan, H.J. Li, X. Ren, The order recurrence quantification analysis of the characteristics of two-phase flow pattern based on multi-scale decomposition. Trans. Inst. Meas. Control 37(6), 793–804 (2015)
Y.Q. Hei, W.T. Li, W.H. Fu, X.H. Li, Efficient parallel artificial bee colony algorithm for cooperative spectrum sensing optimization. Circuits Syst. Signal Process. 34(11), 3611–3629 (2015)
Y.B. Hu, Iterative and recursive least squares estimation algorithms for moving average systems. Simul. Model. Pract. Theory 34, 12–19 (2013)
Y.B. Hu, B.L. Liu, Q. Zhou, C. Yang, Recursive extended least squares parameter estimation for Wiener nonlinear systems with moving average noises. Circuits Syst. Signal Process. 33(2), 655–664 (2014)
J. Huang, Y. Shi, H.N. Huang, Z. Li, l-2-l-infinity filtering for multirate nonlinear sampled-data systems using T–S fuzzy models. Digit. Signal Process. 23(1), 418–426 (2013)
Y. Ji, X.M. Liu, Unified synchronization criteria for hybrid switching-impulsive dynamical networks. Circuits Syst. Signal Process. 34(5), 1499–1517 (2015)
Y. Ji, X.M. Liu et al., New criteria for the robust impulsive synchronization of uncertain chaotic delayed nonlinear systems. Nonlinear Dyn. 79(1), 1–9 (2015)
Q.B. Jin, Z. Wang, X.P. Liu, Auxiliary model-based interval-varying multi-innovation least squares identification for multivariable OE-like systems with scarce measurements. J. Process Control 35, 154–168 (2015)
Q.B. Jin, Z. Wang, J. Wang, Least squares based iterative identification for multivariable integrating and unstable processes in closed loop. Appl. Math. Comput. 242, 10–19 (2014)
Q.B. Jin, Z. Wang, Q. Wang, R.G. Yang, Optimal input design for identifying parameters and orders of MIMO systems with initial values. Appl. Math. Comput. 224, 735–742 (2013)
Q.B. Jin, Z. Wang, R.G. Yang, J. Wang, An effective direct closed loop identification method for linear multivariable systems with colored noise. J. Process Control 24(5), 485–492 (2014)
J.H. Li, Parameter estimation for Hammerstein CARARMA systems based on the Newton iteration. Appl. Math. Lett. 26(1), 91–96 (2013)
W.L. Li, Y.M. Jia, J.P. Du, J. Zhang, Robust state estimation for jump Markov linear systems with missing measurements. J. Frankl. Inst. 350(6), 1476–1487 (2013)
J. Li, J.D. Zhu, Z.H. Feng, Y.J. Zhao, D.H. Li, Passive multipath time delay estimation using MCMC methods. Circuits Syst. Signal Process. 34(12), 3897–3913 (2015)
X.G. Liu, J. Lu, Least squares based iterative identification for a class of multirate systems. Automatica 46(3), 549–554 (2010)
H. Raghavan, A.K. Tangirala, R.B. Gopaluni, S.L. Shah, Identification of chemical processes with irregular output sampling. Control Eng. Pract. 14(4), 467–480 (2006)
C.F. So, S.H. Leung, Maximum likelihood whitening pre-filtered total least squares for resolving closely spaced signals. Circuits Syst. Signal Process. 34(8), 2739–2747 (2015)
Y.J. Wang, F. Ding, Iterative estimation for a nonlinear IIR filter with moving average noise by means of the data filtering technique. IMA J. Math. Control Inf. (2016). doi:10.1093/imamci/dnv067
X.H. Wang, F. Ding, Convergence of the recursive identification algorithms for multivariate pseudo-linear regressive systems. Int. J. Adapt. Control Signal Process. 30 (2015). doi:10.1002/acs.2642
X.H. Wang, F. Ding, Recursive parameter and state estimation for an input nonlinear state space system using the hierarchical identification principle. Signal Process. 117, 208–218 (2015)
D.Q. Wang, Y.P. Gao, Recursive maximum likelihood identification method for a multivariable controlled autoregressive moving average system. IMA J. Math. Control Inf. (2015). doi:10.1093/imamci/dnv021
D.Q. Wang, H.B. Liu et al., Highly efficient identification methods for dual-rate Hammerstein systems. IEEE Trans. Control Syst. Technol. 23(5), 1952–1960 (2015)
C. Wang, T. Tang, Several gradient-based iterative estimation algorithms for a class of nonlinear systems using the filtering technique. Nonlinear Dyn. 77(3), 769–780 (2014)
D.Q. Wang, W. Zhang, Improved least squares identification algorithm for multivariable Hammerstein systems. J. Frankl. Inst. Eng. Appl. Math. 352(11), 5292–5370 (2015)
L. Xu, The damping iterative parameter identification method for dynamical systems based on the sine signal measurement. Signal Process. 120, 660–667 (2016)
L. Xu, A proportional differential control method for a time-delay system using the Taylor expansion approximation. Appl. Math. Comput. 236, 391–399 (2014)
L. Xu, Application of the Newton iteration algorithm to the parameter estimation for dynamical systems. J. Comput. Appl. Math. 288, 33–43 (2015)
L. Xu, L. Chen, W.L. Xiong, Parameter estimation and controller design for dynamic systems from the step responses based on the Newton iteration. Nonlinear Dyn. 79(3), 2155–2163 (2015)
W.G. Zhang, Decomposition based least squares iterative estimation for output error moving average systems. Eng. Comput. 31(4), 709–725 (2014)
Acknowledgments
This work was supported by the National Natural Science Foundation of China (No. 61304138), the Natural Science Foundation of Jiangsu Province (China, BK20130163), the 111 Project (B12018) and the University Graduate Practice Innovation Program of Jiangsu Province (SJZZ15_0151). The first author is grateful to her supervisor, Professor Feng Ding; the main idea of this paper comes from him and his books “System Identification—Multi-Innovation Identification Theory and Methods” (Beijing: Science Press, 2016) and Ding [5, 6].
Wang, F., Liu, Y. & Yang, E. Least Squares-Based Iterative Identification Methods for Linear-in-Parameters Systems Using the Decomposition Technique. Circuits Syst Signal Process 35, 3863–3881 (2016). https://doi.org/10.1007/s00034-015-0232-0