Abstract
The present study focuses on non-intrusive Model Order Reduction (MOR) methods that can be viewed as system identification techniques: the system to analyze is treated as a black box, and the aim is to model accurately the relationship between its inputs and outputs. In this framework, the paper presents two methodologies for the system identification of thermal problems. The first identifies a linear thermal system by means of an Extended Kalman Filter (EKF). The approach starts from an a priori guessed analytical model whose expression is assumed to describe appropriately the response of the system to be identified; the EKF is then used to estimate the model's transient states and parameters. However, this methodology does not extend readily to nonlinear systems because of the difficulty of the analytical model construction step. A second approach, based on an Unscented Kalman Filter (UKF), is therefore presented. Finally, a Finite Element (FE) model is used as a reference, and the good agreement between the FE results and the responses produced by the EKF and UKF methods in the linear case illustrates their interest.
Keywords
- Model Order Reduction (MOR)
- System identification
- Extended Kalman Filter (EKF)
- Unscented Kalman Filter (UKF)
- Finite Element (FE)
55.1 Introduction
The increasing complexity of mathematical models used to predict real-world systems has led to a need for model reduction, that is, systematic algorithms for replacing large-scale models with far simpler ones that still accurately capture the most important aspects of the phenomena being modeled. Model reduction techniques can be divided into two main categories. Intrusive methods belong to the first. Their principle is based on projection techniques that map a large number of degrees of freedom (DOFs) to a small number of generalized coordinates using an appropriate reduced-order basis (ROB). They may be called "internal methods" as they require access to the governing equations in order to project them onto the subspace spanned by the ROB vectors. Some of the earliest methods in this category are the Guyan (static condensation) reduction [6] and the Craig-Bampton reduction [4], the latter combining Guyan reduction with modal truncation. These classical methods have been commonly used in mechanical engineering for many years and can easily be applied to the thermal domain as well; however, they are mainly suitable for linear systems. More recently, techniques such as the Singular Value Decomposition (SVD) and the Proper Orthogonal Decomposition (POD) were introduced. The POD method is a powerful a posteriori technique for model reduction of large-scale nonlinear systems and has been successfully applied to the simulation and control of complex systems [2, 11]. The second category of reduction methods is of non-intrusive nature. It can be viewed as belonging to the family of system identification techniques, which aim at developing models that describe mathematically the dynamic behavior of the real system. System identification techniques derive a model of the system, considered as a black box, by operating directly on input data (such as the command law) and output results.
In this framework, this paper focuses on two methods based on Kalman filters, namely the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF), proposed by Sorenson [12] and by Julier and Uhlmann [7, 8], respectively. These methods deal with both linear and nonlinear systems. The EKF applies the standard Kalman filter to nonlinear systems by simply linearizing all the nonlinear models. In practice, however, the use of the EKF has two well-known drawbacks [8, 9]. First, linearization can produce highly unstable filters if the assumption of local linearity is violated. Second, the derivation of the Jacobian matrices is nontrivial in most applications and often leads to significant implementation difficulties, especially when the model construction step starts from a continuous state-space form. By contrast, the UKF is founded on the intuition that it is easier to approximate a Gaussian distribution than it is to approximate an arbitrary nonlinear function [7].
In this study, both the EKF and UKF approaches are used to estimate a linear thermal transient model. To illustrate these methods, a Finite Element (FE) model is considered as a reference. The accuracy of the identified system model is evaluated by comparing its response with the numerical results produced by the FE reference model.
The rest of this paper is organized as follows. Section 55.2 presents the problem statement. The detailed ROM formulation used in the system identification technique is given in Sect. 55.3, where we also briefly present the principles of the EKF and UKF. Section 55.4 contains results for the case study; a sensitivity analysis is conducted to evaluate the EKF and UKF performance. Conclusions and future work end this paper.
55.2 Problem Statement
In this study, a transient thermal problem is investigated, described by a Finite Element (FE) model of dimension n (Eq. 55.1). A transient heat flux \(\phi_{imp}(t)\) and a convective condition are considered as boundary conditions. Thermal properties (conductivity k and heat transfer coefficient h) are assumed temperature- and time-independent, and radiative effects are neglected. Initially the system is at a uniform temperature \(T_0\) and the surrounding temperature is \(T_{out}\). Hence, the linear system governing the FE model is:

$$[C]\{\dot{T}\} + [K]\{T\} = \{F\} \tag{55.1}$$

where [C] and [K] are the heat capacity and conductivity matrices, \(\{T\} = [T_{1}(t)\;\,T_{2}(t)\;\ldots\;T_{n}(t)]^{T}\) is the nodal temperature vector, \(\{\dot{T}\} = [\dot{T}_{1}(t)\;\,\dot{T}_{2}(t)\;\ldots\;\dot{T}_{n}(t)]^{T}\) is the time derivative of this vector, and \(\{F\} = [\phi_{imp}\;\,0\;\,0\;\ldots\;hT_{out}]^{T}\) designates the heat flux vector of dimension n.
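As an illustration of how such operators arise, the following NumPy sketch assembles [C], [K] and {F} for a 1D rod discretized with linear elements, with an imposed flux at node 1 and convection at node n. All material values, the geometry, and the element type are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative 1D rod discretized into n-1 two-node linear elements (all values assumed)
n = 10            # number of nodes (DOFs)
L = 0.1           # rod length [m]
k = 50.0          # conductivity [W/(m.K)]
rho_c = 4e6       # volumetric heat capacity rho*c [J/(m^3.K)]
h = 25.0          # convection coefficient [W/(m^2.K)]
T_out = 20.0      # surrounding temperature
dx = L / (n - 1)

C = np.zeros((n, n))
K = np.zeros((n, n))
for e in range(n - 1):
    # Consistent capacity and conduction matrices of a 2-node linear element
    Ce = rho_c * dx / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    Ke = k / dx * np.array([[1.0, -1.0], [-1.0, 1.0]])
    idx = np.ix_([e, e + 1], [e, e + 1])
    C[idx] += Ce
    K[idx] += Ke

K[-1, -1] += h            # convective exchange added at the last node

def F(t, phi_imp):
    # Heat flux vector {F} = [phi_imp 0 ... h*T_out]^T
    f = np.zeros(n)
    f[0] = phi_imp(t)     # imposed flux at node 1
    f[-1] = h * T_out     # convective load at node n
    return f
```

The assembled [C] and [K] are symmetric, and each row of the pure-conduction part of [K] sums to zero, as expected for a conservative discretization.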
55.3 System Identification
55.3.1 Generalities
For an identification problem, where the model is considered as a black box, temperature measurements (or part of them) are known, as well as the forcing term \(\phi_{imp}\), whereas the model operators are unknown. To deal with this category of problems, a system identification technique based on Kalman filter variants is herein investigated. Both the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF) are applied in order to identify a reduced-order model (ROM) for the direct linear problem given in Eq. 55.1. The system identification procedure illustrated in Fig. 55.1 is performed in two steps:
(1) The measurements

(2) The model construction
55.3.2 Step 1: Measurements
In the standard (direct) heat transfer problem (55.1), the operators [C], [K] and {F} are assumed known, and {T} is determined through a numerical integration method implemented in Matlab (e.g. a Runge-Kutta method). This FE model is considered as the reference, and its numerical results are compared with those of the identified model to evaluate its accuracy.
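This measurement step can be sketched as follows with SciPy, using the RK23 method (a low-order Runge-Kutta pair comparable to Matlab's ode23). The system matrices and the square-signal flux below are illustrative stand-ins for the assembled FE operators, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative system matrices (stand-ins for the assembled FE operators)
n = 4
rng = np.random.default_rng(0)
C = np.diag(rng.uniform(1.0, 2.0, n))                    # heat capacity (diagonal here)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # conduction-like SPD matrix

def phi_imp(t):
    return 100.0 if (t % 20.0) < 10.0 else 0.0           # square heat-flux signal

def F(t):
    f = np.zeros(n)
    f[0] = phi_imp(t)                                    # flux imposed at DOF 1
    return f

Cinv = np.linalg.inv(C)

def rhs(t, T):
    # [C]{Tdot} + [K]{T} = {F}  =>  {Tdot} = [C]^{-1}({F} - [K]{T})
    return Cinv @ (F(t) - K @ T)

T0 = np.zeros(n)                                         # uniform initial temperature
sol = solve_ivp(rhs, (0.0, 40.0), T0, method="RK23", max_step=0.1)
```

The solution array `sol.y` plays the role of the "measurements" fed to the identification step.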
55.3.3 Step 2: Model Construction
The choice of the model, which consists in selecting a mathematical model to describe the input-output behaviour of the system of interest, is a fundamental step. To begin, the FE reference linear model presented in Eq. (55.1) is considered. For the purpose of identification, the reference model is transformed into a time-invariant state-space form:

$$\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t) \tag{55.2}$$

where x(t) is the n × 1 state vector, A the n × n state transition matrix, B the n × p input-state matrix, C the m × n state-output matrix, and D the m × p direct transmission matrix. For physical systems, D is usually the zero matrix. The vector u(t) groups the applied loads in (55.1), and B maps the physical locations of the inputs (p-input vector) to the internal variables of the realization. Similarly, y(t) contains the physical sensor measurements (or numerical observations at the DOFs of the reference FE model) yielded by temperature probes, and C constructs these physical quantities (m-output vector) from the internal variables x(t).
In addition:
- The system is controllable if and only if the matrix \(\Gamma_{cont} = [B\;\;AB\;\;A^{2}B\;\;\ldots\;\;A^{n-1}B] \in \mathbb{R}^{n\times np}\) is of rank n.
- The state x(t) is observable if and only if the matrix \(\Gamma_{obs} = [C;\;CA;\;CA^{2};\;\ldots\;;CA^{n-1}] \in \mathbb{R}^{mn\times n}\) is of rank n.
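These two rank tests can be sketched directly from the definitions above. The 2-state system used below is a toy example with assumed values, chosen only to exercise the functions.

```python
import numpy as np

def ctrb(A, B):
    # Controllability matrix Gamma_cont = [B, AB, ..., A^{n-1}B]
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def obsv(A, C):
    # Observability matrix Gamma_obs = [C; CA; ...; CA^{n-1}]
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Toy example (assumed values): 2-state system, single input, single output
A = np.array([[-1.0, 0.0], [0.0, -3.0]])
B = np.array([[1.0], [2.0]])
Cmat = np.array([[1.0, 1.0]])
controllable = np.linalg.matrix_rank(ctrb(A, B)) == A.shape[0]
observable = np.linalg.matrix_rank(obsv(A, Cmat)) == A.shape[0]
```

For this diagonal A with distinct modes, both input and output touch every mode, so the system is both controllable and observable.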
55.3.3.1 The Reduced-Order Model (ROM)
Model order reduction is closely related to system identification. It is therefore interesting to keep only the internal variables that capture the essential dynamics of the system. To this purpose, the reference problem (55.1) is represented by a linear reduced-order model (ROM) of dimension n_r ≤ n, in which \(\{x_{r}\} = [T_{r_{1}}\;T_{r_{2}}\;\cdots\;T_{r_{n_{r}}}]^{T}\) is the new reduced-order temperature state vector, whose internal variables describe the dynamic relationships. If an orthonormal change of basis U such that \(U^{-1}AU\) is diagonal is applied to the system (55.2), this system becomes:

$$\dot{x}_{r}(t) = \tilde{A}x_{r}(t) + \tilde{B}u(t), \qquad y(t) = \tilde{C}x_{r}(t) \tag{55.3}$$

with \(\tilde{A} = U^{-1}AU\), \(\tilde{B} = U^{-1}B\) and \(\tilde{C} = CU\). If only one input \(u(t) = \phi_{imp}(t)\) is applied, the constitutive matrices of the reduced-order model (ROM) further simplify into:

$$\tilde{A} = \begin{bmatrix} a_{1} & & \\ & \ddots & \\ & & a_{n_{r}} \end{bmatrix}; \qquad \tilde{B} = \begin{bmatrix} b_{1} \\ \vdots \\ b_{n_{r}} \end{bmatrix}; \qquad \tilde{C} = \begin{bmatrix} c_{11} & \cdots & c_{1n_{r}} \\ \vdots & \ddots & \vdots \\ c_{n_{r}1} & \cdots & c_{n_{r}n_{r}} \end{bmatrix}$$

The coefficients \(a_{i}\), i = 1, …, n_r, depend on the time constants \(\tau_{i}\) of the reference problem: \(a_{i} = -\frac{1}{\tau_{i}}\).
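The change of basis above can be sketched with an eigendecomposition. The symmetric state matrix below is an assumed toy example; for symmetric A, `numpy.linalg.eigh` returns an orthonormal U that diagonalizes it exactly as required.

```python
import numpy as np

# Illustrative symmetric state matrix (assumed values)
A = np.array([[-2.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, -2.0]])
B = np.array([[1.0], [0.0], [0.0]])
Cmat = np.eye(3)

eigvals, U = np.linalg.eigh(A)      # U orthonormal, so U^{-1} = U^T
A_t = U.T @ A @ U                   # diagonal: entries a_i = -1/tau_i
B_t = U.T @ B
C_t = Cmat @ U
tau = -1.0 / np.diag(A_t)           # time constants of the modes
```

All eigenvalues of this conduction-like matrix are negative, so every time constant \(\tau_i\) comes out positive, consistent with a stable thermal system.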
55.3.3.2 Setting of Extended Kalman Filter (EKF)
In this section, the conversion of the continuous ROM (Eq. 55.3) into the discrete representation required by the EKF is performed by means of an exponential discretization technique [3] (see A.1). The resulting discrete ROM is given by Eq. 55.4:

$$x_{r_{k}} = \tilde{f}_{d_{k}}(x_{r_{k-1}}, u_{k-1}, \theta_{k}), \qquad y_{k} = \tilde{h}_{d}(x_{r_{k}}, \theta_{k}) \tag{55.4}$$

where \(\tilde{f}_{d_{k}}\) and \(\tilde{h}_{d}\) are the nonlinear evolution and observation functions at time k, and \(\theta_{k}\) is an \(n_p\)-dimensional stationary parameter vector at time k:

$$\theta_{k} = \left[e^{a_{1}T}\,\cdots\,e^{a_{n_{r}}T}\;\; \frac{b_{1}}{a_{1}}\,\cdots\,\frac{b_{n_{r}}}{a_{n_{r}}}\;\; c_{11}\,\cdots\,c_{1n_{r}}\;\cdots\; c_{n_{r}1}\,\cdots\,c_{n_{r}n_{r}}\right]_{k}^{T}$$
55.3.3.3 Setting of Unscented Kalman Filter (UKF)
The continuous ROM in Eq. (55.3) can also be written as:

$$\dot{x}_{r}(t) = \tilde{f}(x_{r}(t), u(t)), \qquad y(t) = \tilde{h}(x_{r}(t)) \tag{55.5}$$

where \(x_{r} = \left[T_{r_{1}}\,\cdots\,T_{r_{n_{r}}}\;a_{1}\,\cdots\,a_{n_{r}}\;b_{1}\,\cdots\,b_{n_{r}}\;c_{11}\,\cdots\,c_{1n_{r}}\;\cdots\;c_{n_{r}1}\,\cdots\,c_{n_{r}n_{r}}\right]^{T}\) is the extended reduced state vector, and \(\tilde{f}\) and \(\tilde{h}\) are the nonlinear evolution and observation functions. An embedded Runge-Kutta integration method (the Dormand-Prince method) [5, 10] is used to discretize \(\tilde{f}\) and thereby obtain a (discrete-time) recursive ROM. In the case of a nonlinear system, the \(a_{i}\), i = 1, …, n_r, become time-dependent; to process it, the same ROM as in (55.5) is used and the UKF algorithm is unchanged. Here, the implementation simplicity of the UKF with respect to the EKF is highlighted: the EKF methodology does not extend readily to nonlinear systems because of the difficulty of the analytical model construction step.
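One discrete transition of this extended ROM can be sketched with SciPy's RK45 integrator, which implements the Dormand-Prince pair. The two-mode state, the parameter values, and the sampling period below are illustrative assumptions (the c coefficients are omitted for brevity).

```python
import numpy as np
from scipy.integrate import solve_ivp

n_r = 2

def f_tilde(t, x, u):
    # Extended reduced state x = [Tr_1, Tr_2, a_1, a_2, b_1, b_2] (assumed layout):
    # the modal states follow dTr_i = a_i*Tr_i + b_i*u, the parameters have zero dynamics.
    Tr, a, b = x[:n_r], x[n_r:2 * n_r], x[2 * n_r:3 * n_r]
    dTr = a * Tr + b * u
    return np.concatenate([dTr, np.zeros(2 * n_r)])

def step(x_k, u_k, dt):
    # One discrete-time transition obtained with the Dormand-Prince (RK45) pair
    sol = solve_ivp(f_tilde, (0.0, dt), x_k, args=(u_k,), method="RK45")
    return sol.y[:, -1]

x0 = np.array([0.0, 0.0, -0.5, -2.0, 1.0, 0.3])   # assumed initial extended state
x1 = step(x0, u_k=100.0, dt=0.1)
```

For constant u over the step, the exact modal response from zero is \(T_{r_i}(\Delta t) = \frac{b_i u}{a_i}(e^{a_i \Delta t} - 1)\), which the integrator reproduces to tolerance.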
55.3.4 Basic Formulation of Kalman Filters
55.3.4.1 Extended Kalman Filter (EKF)
In this section, a nonlinear state-space model equivalent to the model in Eq. (55.4) is considered:

$$x_{k} = f_{k}(x_{k-1}, u_{k-1}) + w_{k-1}, \qquad y_{k} = h_{k}(x_{k}) + v_{k} \tag{55.6}$$

where \(x_{k}\) is the extended state vector, whose distribution is assumed to be Gaussian, \(y_{k}\) the noisy measurement vector, \(u_{k-1}\) the known input at time k − 1, \(f_{k}\) and \(h_{k}\) the nonlinear process and measurement functions, and w and v the process and measurement noises, respectively. The latter are assumed to be uncorrelated zero-mean Gaussian white noises with time-invariant covariance matrices Q and R. The idea of the EKF is to linearize the nonlinear process function \(f_{k}\) and measurement function \(h_{k}\) by a first-order Taylor series (Jacobian) at each time step around the most recent estimate of the state vector x. The resulting EKF algorithm is summarized in B.1.1 [12].
55.3.4.2 Unscented Kalman Filter (UKF)
The UKF represents an alternative to the EKF. It is based on the fact that it is easier to approximate a Gaussian distribution than it is to approximate an arbitrary nonlinear function [7]. Instead of linearizing via Jacobian matrices as in the EKF approach, the UKF uses a deterministic sampling technique known as the Unscented Transform (UT), proposed by Julier and Uhlmann [7, 8]. The idea of the UT is to form 2n + 1 samples (or sigma points) that capture exactly the mean and covariance of the original distribution of x. These sigma points are then propagated through the nonlinearity, and the mean and covariance of the transformed variable are estimated from them. The UT scheme is illustrated in B.1.2. Consider now the model in (55.6) used in Sect. 55.3.4.1, where the distribution of \(x_{k}\) is assumed Gaussian. The UKF algorithm is presented in B.1.3.
55.4 Numerical Results
A 10-DOF Finite Element (FE) model simulation is carried out to evaluate the performance of the EKF and UKF methods for system identification. The problem is numerically simulated using ode23, a Matlab routine implementing a low-order Runge-Kutta method with adaptive step-size control. The initial condition is a uniform temperature. The forcing term u = \(\phi_{imp}(t)\) is a square signal applied at DOF 1 (cf. Fig. 55.2 (top)), and Fig. 55.2 (bottom) shows the simulation results. As a first step, the Singular Value Decomposition (SVD) is used to determine the minimum number of modes required for the ROM to capture the essential dynamics of the reference model (cf. Fig. 55.3). Figure 55.3 (top left and top right) shows that the first two singular values are much greater than the rest (the numerical values are 14,040, 3,314, 885.9, …), so the modal contribution is dominated by the first two modes. Hence, the reference model can be represented by a reduced model of order two.
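This mode-selection step can be sketched as follows. The snapshot matrix below is synthetic, built so that exactly two spatial modes dominate; the spatial vectors, time signals, and the 99.9% energy threshold are all illustrative assumptions standing in for the FE temperature history.

```python
import numpy as np

# Illustrative snapshot matrix (rows: DOFs, columns: time samples), built so
# that exactly two spatial modes dominate, mimicking the FE temperature history.
t = np.linspace(0.0, 10.0, 200)
v1 = np.ones(10)                       # first spatial mode (assumed)
v2 = np.tile([1.0, -1.0], 5)           # second spatial mode (assumed)
snapshots = 100.0 * np.outer(v1, 1.0 - np.exp(-t / 2.0)) \
          + 20.0 * np.outer(v2, np.exp(-t / 5.0))

s = np.linalg.svd(snapshots, compute_uv=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n_r = int(np.searchsorted(energy, 0.999) + 1)   # smallest order capturing 99.9% energy
```

A sharp drop between the second and third singular values signals, as in Fig. 55.3, that a second-order ROM suffices.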
55.4.1 Filtering Step
Now we apply the EKF and UKF to the FE reference model. Our goal is to identify the discrete ROM using the known input forcing term u = \(\phi_{imp}(t)\), a square signal (see Fig. 55.2 (top)), and temperature data collected at DOFs 1 and 10 of the FE reference model (see Fig. 55.2 (bottom)). For simplicity, the state and observation noise covariance matrices are set as \(Q = \sigma_{w}^{2} I_{n_{e}\times n_{e}}\) and \(R = \sigma_{v}^{2} I_{m\times m}\), where \(\sigma_{w}^{2}\) and \(\sigma_{v}^{2}\) are the state and observation noise variances, \(I_{n_{e}\times n_{e}}\) and \(I_{m\times m}\) denote identity matrices, and \(n_{e} = 10\) and \(m = 2\) are the dimensions of the extended state vector and of the observation vector. The initial state estimate covariance is set as \(P_{0} = p_{0}\,\mathrm{Diag}(vect)\), where \(p_{0}\) is the initial state error variance and vect = [10, 10, 1, 1, 0.01, 0.01, 1, 1, 1, 1] a vector of dimension \(n_{e}\).
55.4.2 Sensitivity Analysis
In this study, we show how (1) the initial state estimate covariance \(P_{0}\), representing the confidence in the initial state estimate, (2) the state model covariance Q, representing the confidence in the Kalman model, and (3) the observation covariance R, representing the confidence in the measurements, affect the performance of both the EKF and the UKF. This performance can be measured by: (1) comparison of the true observed and estimated temperatures and the corresponding terms in P (not shown here); (2) the evolution of the identified parameters and the corresponding terms in P; and finally (3) the evolution of the residual term, i.e. the difference between the predicted and true observed temperatures at the observation DOFs (1 and 10 in the reference FE model).
55.4.2.1 Sensitivity to the State Model Covariance
Figures 55.4–55.7 illustrate the sensitivity of the EKF and UKF to the state covariance by comparing values from \(\sigma_{w} = 2\times10^{-5}\) to \(\sigma_{w} = 2\times10^{-9}\) and from \(\sigma_{w} = 10^{-2}\) to \(\sigma_{w} = 10^{-11}\), respectively. Increasing the state model covariance increases the convergence speed (Fig. 55.4) and the sensitivity to measurements (a significant decrease of the residual term when \(\sigma_{w}\) goes from \(10^{-8}\) (dash-dotted curve; error up to ∼6%) to \(10^{-5}\) (dashed curve; error up to ∼1.6%) in Fig. 55.7 (left)). Increasing the state model covariance too far results in parameter identification failure (dashed and solid curves in Fig. 55.6 (left)) and solution divergence (blue solid curve in Figs. 55.4 and 55.5). In other words, when we are less confident in the Kalman model, the gain K at update time in both the EKF and UKF algorithms is sufficiently large that observations play a significant role in estimating the state (temperatures), but not in parameter identification.
55.4.2.2 Sensitivity to the Observation Covariance
Figures 55.8 and 55.9 illustrate the sensitivity of the EKF and UKF to the observation covariance by comparing values from \(\sigma_{v} = 10^{-1}\) to \(\sigma_{v} = 10^{-4}\) and from \(\sigma_{v} = 1\) to \(\sigma_{v} = 10^{-4}\), respectively. Decreasing the measurement covariance magnitude increases the convergence speed. Decreasing the magnitude too far results in erratic parameter values (dotted curve; \(e^{a_{2}T} > 1\) in Fig. 55.8 (left) and \(a_{2} > 1\) in Fig. 55.9 (left)) or a solution that fails to converge (dotted curve; \(e^{a_{1}T}\) in Fig. 55.8 (left), \(a_{1}\) in Fig. 55.9 (left)). Conversely, increasing the measurement covariance magnitude too far causes the identified parameters to remain fairly constant at erratic values (solid curve; \(e^{a_{1}T} < 0\) and \(e^{a_{2}T} > 1\) in Fig. 55.8, and solid curve; \(a_{1}, a_{2} > 0\) in Fig. 55.9 (left)) and the solution to fail to converge (solid curve in Fig. 55.9 (right)).
55.4.2.3 Sensitivity to the Initial State Estimate Covariance
When the initial state covariance is very small, the Kalman filters diverge (dotted curve in Fig. 55.10 (left)) or converge slowly (dotted curve in Fig. 55.11 (left)). Conversely, if \(p_{0}\) is very large, the filter converges to an erratic value (solid curve, \(e^{a_{1}T}, e^{a_{2}T} > 1\) in Fig. 55.10 (left), and \(a_{2} > 0\) in Fig. 55.11 (bottom left)) or fails to converge (solid curve in Fig. 55.11 (top left), parameter \(a_{1}\)).
To conclude, the best performance of the EKF and UKF is obtained for \((\sigma_{w} = 10^{-6}, \sigma_{v} = 10^{-3}, p_{0} = 10^{-6})\) and \((\sigma_{w} = 10^{-8}, \sigma_{v} = 10^{-2}, p_{0} = 10^{-6})\), respectively. Since the ratio \(\frac{\sigma_{w}}{\sigma_{v}}\) is very small, meaning that more confidence is placed in the Kalman model, the model adopted for reduction is validated.
55.5 Conclusions and Future Work
This paper presents the EKF and UKF methods for identifying a reduced-order model of a linear thermal system from data produced by an FE numerical model. The sensitivity of these two methods with respect to the state model covariance, the observation noise covariance, and the initial state estimate covariance is analyzed. This analysis shows that these three Kalman inputs significantly affect the filter results and that judicious choices have to be made to guarantee convergence and obtain the best performance. This behavior is illustrated through the comparison between the FE results and the responses produced by the EKF and UKF. A crucial advantage of the UKF over the EKF is that the method can be implemented for a nonlinear system without difficulty, using the same ROM construction approach as the one detailed in this paper.
This study has validated both the EKF and UKF methods on a 10-DOF linear FE model. Future studies will deal with larger FE models, first in the linear and then in the nonlinear domain, using the UKF.
References
Andrews HC, Patterson CL (1976) Singular value decompositions and digital image processing. IEEE Trans Acoust Speech Signal Process 24(1):26–53
Berkooz G, Holmes P, Lumley JL (1993) The proper orthogonal decomposition in the analysis of turbulent flows. Annu Rev Fluid Mech 25(1):539–575
Boyce WE, DiPrima RC (1977) Elementary differential equations and boundary value problems. John Wiley & Sons, New York
Craig RR, Bampton MCC (1968) Coupling of substructures for dynamic analyses. AIAA J 6(7):1313–1319
Dormand JR, Prince PJ (1980) A family of embedded Runge-Kutta formulae. J Comput Appl Math 6(1):19–26
Guyan RJ (1965) Reduction of stiffness and mass matrices. AIAA J 3(2):380
Julier SJ, Uhlmann JK (1996) A general method for approximating nonlinear transformations of probability distributions. Technical report, University of Oxford, Department of Engineering Science
Julier SJ, Uhlmann JK (1997) A new extension of the Kalman filter to nonlinear systems. In: Proceedings of AeroSense: the 11th international symposium on aerospace/defence sensing, simulation and controls, Orlando, FL, pp 182–193
LaViola JJ Jr (2003) A comparison of unscented and extended Kalman filtering for estimating quaternion motion. In: Proceedings of the 2003 American control conference, Denver, CO, pp 2435–2440
Mathews JH, Fink KD (2004) Numerical methods using MATLAB. Prentice Hall, Upper Saddle River, NJ
Moore BC (1981) Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans Autom Control 26(1):17–32
Sorenson HW (1970) Least-squares estimation: from Gauss to Kalman. IEEE Spectr 7(7):63–68
Appendices
Model Construction Step Using EKF
The solution of Eq. (55.3) on the time interval \([t_{i}, t_{f}]\) is given by [1]:

$$x_{r}(t_{f}) = e^{\tilde{A}(t_{f}-t_{i})}x_{r}(t_{i}) + \int_{t_{i}}^{t_{f}} e^{\tilde{A}(t_{f}-\tau)}\tilde{B}u(\tau)\,d\tau \tag{55.7}$$

where the matrix exponential is defined as \(e^{\tilde{A}t} = \sum_{k=0}^{\infty}\frac{1}{k!}(\tilde{A}t)^{k}\). With \(t_{i} = t_{k}\) and \(t_{f} = t_{k+1}\), (55.7) becomes:

$$x_{r}(t_{k+1}) = e^{\tilde{A}T}x_{r}(t_{k}) + \int_{t_{k}}^{t_{k+1}} e^{\tilde{A}(t_{k+1}-\tau)}\tilde{B}u(\tau)\,d\tau$$

where \(T = t_{k+1} - t_{k}\) is the sampling period.
Simplifying the notation by writing k instead of \(t_{k}\) and assuming u(t) constant over the sampling interval \([t_{k}, t_{k+1}]\), the discrete state-space model is written as follows:

$$x_{r_{k}} = \tilde{A}_{d}x_{r_{k-1}} + \tilde{B}_{d}u_{k-1}, \qquad y_{k} = \tilde{C}_{d}x_{r_{k}}$$

where \(x_{r_{k}}\) is the state vector of internal variables at time k, \(y_{k}\) the observation vector at time k, \(u_{k-1}\) the input data at time k − 1, and \((\tilde{A}_{d}, \tilde{B}_{d}, \tilde{C}_{d})\) the constitutive matrices of the discrete reduced-order model:

$$\tilde{A}_{d} = \begin{bmatrix} e^{a_{1}T} & & \\ & \ddots & \\ & & e^{a_{n_{r}}T} \end{bmatrix}; \qquad \tilde{B}_{d} = \tilde{A}^{-1}(e^{\tilde{A}T}-I)\tilde{B} = \begin{bmatrix} \frac{b_{1}}{a_{1}}(e^{a_{1}T}-1) \\ \vdots \\ \frac{b_{n_{r}}}{a_{n_{r}}}(e^{a_{n_{r}}T}-1) \end{bmatrix}; \qquad \tilde{C}_{d} = \tilde{C} = \begin{bmatrix} c_{11} & \cdots & c_{1n_{r}} \\ \vdots & \ddots & \vdots \\ c_{n_{r}1} & \cdots & c_{n_{r}n_{r}} \end{bmatrix}$$
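The closed-form entries of \(\tilde{A}_d\) and \(\tilde{B}_d\) for the diagonal case can be checked against the general matrix-exponential discretization. The modal values and sampling period below are assumed for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Diagonal continuous ROM (illustrative values) and its exact discretization
a = np.array([-0.5, -2.0])        # modal coefficients a_i = -1/tau_i (assumed)
b = np.array([1.0, 0.3])          # input coefficients (assumed)
T = 0.1                           # sampling period (assumed)

A = np.diag(a)
B = b[:, None]
A_d = expm(A * T)                                   # e^{A T}
B_d = np.linalg.inv(A) @ (A_d - np.eye(2)) @ B      # A^{-1}(e^{AT} - I)B

# Closed-form entries used in the parameter vector theta_k
A_d_closed = np.diag(np.exp(a * T))
B_d_closed = (b / a * (np.exp(a * T) - 1.0))[:, None]
```

Both routes agree to machine precision, confirming that the entries \(e^{a_i T}\) and \(\frac{b_i}{a_i}(e^{a_i T}-1)\) are the correct discrete parameters.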
Since the objective of the procedure is the identification of parameters, these have to be included in the state vector. The mappings built from \(\tilde{A}_{d}\) and \(\tilde{C}_{d}\) thereby become nonlinear and are denoted \(\tilde{f}_{d}\) and \(\tilde{h}_{d}\), respectively. The discrete model is then given by:

$$x_{r_{k}} = \tilde{f}_{d}(x_{r_{k-1}}, u_{k-1}, \theta_{k}), \qquad y_{k} = \tilde{h}_{d}(x_{r_{k}}, \theta_{k})$$
EKF and UKF Algorithms
55.2.1 Extended Kalman Filter (EKF)
Extended Kalman Filter algorithm

1. Initialization: state mean and covariance at k = 0:
   \(\hat{x}_{0} = E[x_{0}]\) and \(P_{0} = E[(x_{0} - \hat{x}_{0})(x_{0} - \hat{x}_{0})^{T}]\)
2. Prediction phase:
   (a) Process model Jacobian: \(F_{k} = \left.\dfrac{\partial f_{k}}{\partial x}\right|_{x=\hat{x}_{k-1}}\)
   (b) Predicted state mean and covariance: \(\hat{x}_{k}^{-} = f_{k}(\hat{x}_{k-1}, u_{k-1})\) and \(P_{k}^{-} = F_{k}P_{k-1}F_{k}^{T} + Q\)
3. Correction phase:
   (a) Measurement model Jacobian: \(H_{k} = \left.\dfrac{\partial h_{k}}{\partial x}\right|_{x=\hat{x}_{k}^{-}}\)
   (b) Measurement update:
       - Measurement prediction: \(\hat{y}_{k} = h_{k}(\hat{x}_{k}^{-})\)
       - Innovation (residual term): \(\tilde{y}_{k} = y_{k} - \hat{y}_{k}\)
       - Innovation covariance matrix: \(M_{k} = \mathrm{cov}(\tilde{y}_{k}) = H_{k}P_{k}^{-}H_{k}^{T} + R\)
   (c) Updated state mean and covariance:
       - Kalman gain matrix: \(K_{k} = P_{k}^{-}H_{k}^{T}M_{k}^{-1}\)
       - State update: \(\hat{x}_{k} = \hat{x}_{k}^{-} + K_{k}\tilde{y}_{k}\)
       - Covariance update: \(P_{k} = (I - K_{k}H_{k})P_{k}^{-}\)
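The EKF recursion above can be sketched in a few lines of NumPy. The toy joint state/parameter model, noise levels, input signal, and random seed below are illustrative assumptions, not the thermal ROM of the paper; the point is only to exercise the prediction/correction steps.

```python
import numpy as np

def ekf_step(x, P, u, y, f, h, F_jac, H_jac, Q, R):
    # Prediction phase
    F = F_jac(x, u)
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Correction phase
    H = H_jac(x_pred)
    innov = y - h(x_pred)                     # residual term
    M = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(M)       # Kalman gain
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy joint state/parameter model (assumed): x = [T, ad], T_k = ad*T_{k-1} + u
f = lambda x, u: np.array([x[1] * x[0] + u, x[1]])
h = lambda x: np.array([x[0]])
F_jac = lambda x, u: np.array([[x[1], x[0]], [0.0, 1.0]])
H_jac = lambda x: np.array([[1.0, 0.0]])
Q, R = 1e-6 * np.eye(2), 1e-2 * np.eye(1)

true_ad, T_true = 0.9, 0.0
x_hat, P = np.array([0.0, 0.5]), np.eye(2)
rng = np.random.default_rng(2)
for k in range(300):
    u = 1.0 if (k // 25) % 2 == 0 else 0.0    # square input signal
    T_true = true_ad * T_true + u
    y = np.array([T_true]) + 0.01 * rng.standard_normal(1)
    x_hat, P = ekf_step(x_hat, P, u, y, f, h, F_jac, H_jac, Q, R)
```

With a persistently exciting square input, the parameter estimate \(\hat{a}_d\) settles near its true value, mirroring the parameter identification behavior discussed in Sect. 55.4.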
55.2.2 Unscented Transform (UT)
Unscented Transform

Let x ∈ \(\mathbb{R}^{n}\) be a Gaussian random vector and g : \(\mathbb{R}^{n} \to \mathbb{R}^{m}\) a general nonlinear function, with \(y = g(x)\), \(E[x] = \bar{x}\), and \(E[(x - \bar{x})(x - \bar{x})^{T}] = P_{xx}\).

1. Decompose the distribution into 2n + 1 sigma points \(\{\chi_{i}, \omega_{i}\}_{i=0\ldots2n} = UT(\bar{x}, P_{xx})\), where:

$$\chi_{0} = \bar{x}, \qquad \omega_{0} = \frac{\kappa}{n+\kappa}$$
$$\chi_{i} = \bar{x} + \left[\sqrt{(n+\kappa)P_{xx}}\right]_{i}, \qquad \omega_{i} = \frac{1}{2(n+\kappa)}, \qquad i = 1,\ldots,n$$
$$\chi_{i+n} = \bar{x} - \left[\sqrt{(n+\kappa)P_{xx}}\right]_{i}, \qquad \omega_{i+n} = \frac{1}{2(n+\kappa)}, \qquad i = 1,\ldots,n$$

N.B. The term \(\left[\sqrt{(n+\kappa)P_{xx}}\right]_{i}\) denotes the ith column of the matrix square root of \((n+\kappa)P_{xx}\), obtained via Cholesky factorization. The parameter κ is a scaling parameter and \(\omega_{i}\) the weight associated with each sigma point.
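The sigma-point construction can be sketched and verified numerically: by design, the weighted sigma points reproduce the original mean and covariance exactly. The test mean, covariance, and κ below are assumed values.

```python
import numpy as np

def unscented_transform(x_mean, P_xx, kappa=1.0):
    # Generate 2n+1 sigma points and weights capturing mean and covariance exactly
    n = len(x_mean)
    S = np.linalg.cholesky((n + kappa) * P_xx)   # matrix square root (Cholesky)
    chi = [x_mean]
    w = [kappa / (n + kappa)]
    for i in range(n):
        chi.append(x_mean + S[:, i])             # chi_i = xbar + i-th column
        w.append(1.0 / (2.0 * (n + kappa)))
    for i in range(n):
        chi.append(x_mean - S[:, i])             # chi_{i+n} = xbar - i-th column
        w.append(1.0 / (2.0 * (n + kappa)))
    return np.array(chi), np.array(w)

x_mean = np.array([1.0, -2.0])                   # assumed test values
P_xx = np.array([[2.0, 0.5], [0.5, 1.0]])
chi, w = unscented_transform(x_mean, P_xx)

mean_rec = w @ chi
cov_rec = sum(wi * np.outer(ci - mean_rec, ci - mean_rec) for wi, ci in zip(w, chi))
```

The recovered `mean_rec` and `cov_rec` match `x_mean` and `P_xx` to machine precision, which is the defining property of the UT.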
55.2.3 Unscented Kalman Filter (UKF)
Unscented Kalman Filter algorithm

1. Initialization: state mean and covariance at k = 0:
   \(\hat{x}_{0} = E[x_{0}]\) and \(P_{0} = E[(x_{0} - \hat{x}_{0})(x_{0} - \hat{x}_{0})^{T}]\)
2. Prediction phase:
   (a) Generation of 2n + 1 sigma points: \(\{\chi_{i,k-1}, \omega_{i}\}_{i=0\ldots2n} = UT(\hat{x}_{k-1}, P_{x_{k-1}})\)
   (b) Predicted state: \(\chi_{i,k}^{-} = f_{k}(\chi_{i,k-1}, u_{k-1})\) and \(\hat{x}_{k}^{-} = \sum_{i=0}^{2n}\omega_{i}\chi_{i,k}^{-}\)
   (c) Predicted covariance: \(P_{x_{k}}^{-} = \sum_{i=0}^{2n}\omega_{i}(\chi_{i,k}^{-} - \hat{x}_{k}^{-})(\chi_{i,k}^{-} - \hat{x}_{k}^{-})^{T} + Q\)
3. Correction phase:
   (a) Measurement update: \(Y_{i,k} = h_{k}(\chi_{i,k}^{-})\)
   (b) Measurement prediction: \(\hat{y}_{k} = \sum_{i=0}^{2n}\omega_{i}Y_{i,k}\)
   (c) Innovation (residual term): \(\tilde{y}_{k} = y_{k} - \hat{y}_{k}\)
   (d) Innovation covariance: \(P_{y_{k}} = \sum_{i=0}^{2n}\omega_{i}(Y_{i,k} - \hat{y}_{k})(Y_{i,k} - \hat{y}_{k})^{T} + R\)
   (e) Cross covariance: \(P_{x_{k}y_{k}} = \sum_{i=0}^{2n}\omega_{i}(\chi_{i,k}^{-} - \hat{x}_{k}^{-})(Y_{i,k} - \hat{y}_{k})^{T}\)
   (f) Updated state mean and covariance:
       - Kalman gain matrix: \(K_{k} = P_{x_{k}y_{k}}P_{y_{k}}^{-1}\)
       - State update: \(\hat{x}_{k} = \hat{x}_{k}^{-} + K_{k}\tilde{y}_{k}\)
       - Covariance update: \(P_{x_{k}} = P_{x_{k}}^{-} - K_{k}P_{y_{k}}K_{k}^{T}\)
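The UKF recursion above can be sketched as follows. The toy joint state/parameter model, noise levels, and seed are the same kind of illustrative assumptions used for the EKF sketch, not the paper's thermal ROM; note that, unlike the EKF, no Jacobians are needed.

```python
import numpy as np

def ukf_step(x, P, u, y, f, h, Q, R, kappa=1.0):
    n = len(x)
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Prediction: propagate sigma points through the process model
    S = np.linalg.cholesky((n + kappa) * P)
    chi = np.vstack([x, x + S.T, x - S.T])          # rows are sigma points
    chi_pred = np.array([f(c, u) for c in chi])
    x_pred = w @ chi_pred
    P_pred = Q + sum(wi * np.outer(c - x_pred, c - x_pred)
                     for wi, c in zip(w, chi_pred))
    # Correction: propagate through the measurement model
    Y = np.array([h(c) for c in chi_pred])
    y_pred = w @ Y
    P_yy = R + sum(wi * np.outer(Yi - y_pred, Yi - y_pred) for wi, Yi in zip(w, Y))
    P_xy = sum(wi * np.outer(c - x_pred, Yi - y_pred)
               for wi, c, Yi in zip(w, chi_pred, Y))
    K = P_xy @ np.linalg.inv(P_yy)                   # Kalman gain
    x_new = x_pred + K @ (y - y_pred)
    P_new = P_pred - K @ P_yy @ K.T
    return x_new, P_new

# Toy joint state/parameter model (assumed): x = [T, ad], T_k = ad*T_{k-1} + u
f = lambda x, u: np.array([x[1] * x[0] + u, x[1]])
h = lambda x: np.array([x[0]])
Q, R = 1e-6 * np.eye(2), 1e-2 * np.eye(1)

true_ad, T_true = 0.9, 0.0
x_hat, P = np.array([0.0, 0.5]), np.eye(2)
rng = np.random.default_rng(3)
for k in range(300):
    u = 1.0 if (k // 25) % 2 == 0 else 0.0           # square input signal
    T_true = true_ad * T_true + u
    y = np.array([T_true]) + 0.01 * rng.standard_normal(1)
    x_hat, P = ukf_step(x_hat, P, u, y, f, h, Q, R)
```

On this mildly nonlinear joint estimation problem the UKF behaves like the EKF sketch, but the process and measurement models are used only through function evaluations, which is the implementation advantage emphasized in the paper.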
© 2014 The Society for Experimental Mechanics
Abid, F., Chevallier, G., Blanchard, J.L., Dion, J.L., Dauchez, N. (2014). System Identification Using Kalman Filters. In: Allemang, R., De Clerck, J., Niezrecki, C., Wicks, A. (eds) Topics in Modal Analysis, Volume 7. Conference Proceedings of the Society for Experimental Mechanics Series. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-6585-0_55
Print ISBN: 978-1-4614-6584-3
Online ISBN: 978-1-4614-6585-0