
1 Introduction

The problem of nonlinear filtering has its origins in tracking and signal processing. Nevertheless, the underlying setting is extremely general and arises in many applied areas, such as integrated navigation systems, geodetic positioning, and automatic control, where random processes are used to model complex dynamical phenomena. In essence, nonlinear filtering estimates the state of a nonlinear and non-Gaussian stochastic system from noisy measurement data. However, this process inevitably suffers a gross error when the theoretical model deviates from the practical one, leading to a biased or even divergent filtering solution (Einicke 2012). The unscented Kalman filter (UKF), a typical nonlinear filter, is such a case: its filtering performance is very sensitive to system model error.

This paper presents a new UKF algorithm for nonlinear filtering. The algorithm incorporates model prediction into the UKF process and augments the system state with the system driving noise to increase the state information input to the filter. The proposed algorithm overcomes the limitation of the traditional UKF that its performance is sensitive to system model error; it effectively resists the disturbance of model error on the state estimate and enhances the robustness of the filtering process. Experiments and comparison analysis with existing methods have been conducted to comprehensively evaluate the performance of the proposed filtering algorithm.

2 Related Work

The traditional Kalman filter uses the statistical characteristics of the system model to determine estimates recursively. However, its optimality heavily depends on linearity, and a significant amount of research effort has therefore been dedicated to nonlinear filtering. The EKF is an approach to recursive nonlinear estimation obtained by first-order linearization of the nonlinear models so that the traditional KF can be applied. However, the EKF has well-known drawbacks, such as a large linearization error, the need to derive Jacobian matrices, and poor robustness against system model uncertainty (Boutayeb and Aubry 1999; Julier and Uhlmann 1997; Julier et al. 1995; Doucet et al. 1998).

The particle filter is a recursive Bayesian filtering method based on Monte Carlo simulation, aiming to produce a sample of independent random variables distributed according to the conditional probability distribution of the state (Doucet et al. 2001; Van der Merwe et al. 2000; Oppenheim et al. 2008). This method provides a complete representation of the posterior distribution of the state, from which any statistical estimate such as the mean, modes, kurtosis, and variance can be easily calculated. Consequently, it can deal with nonlinear system models and non-Gaussian noise. However, particle degeneracy may occur in the approximation process, and the accuracy largely depends on the choice of the importance sampling density and the resampling scheme. The approximation may diverge if the dynamic system has very small noise or the measurement noise has very small variance (Oppenheim et al. 2008; Wang 2006). The computational complexity also grows with the number of samples used in the state estimation.

The UKF is a derivative-free filter that combines the unscented transform with the linear update structure of Kalman filtering. It is based on statistical approximations of the system equations and does not require calculation of the Jacobian matrix (Julier and Uhlmann 2004). It captures the posterior mean and covariance accurately to the second order of the Taylor series expansion for any nonlinearity, leading to superior performance at an equivalent computational complexity (Pan et al. 2005). Further, the unscented transformation avoids the random sampling error introduced by sampling methods such as Monte Carlo simulation, dramatically reducing the number of points without sacrificing filtering accuracy (Julier 1998). However, similar to the EKF, the UKF is sensitive to system model uncertainty. In the presence of model uncertainties, such that the input data cannot reflect the real model, the UKF solution deteriorates or even diverges (Yang and Li 2011; Jwo and Lai 2009; Gao et al. 2015).

In recent years, the robust adaptive filter has been used in integrated navigation systems to control the influences of both system state model errors and measurement singularities. Yang et al. reported a robust adaptive filter that combines robust maximum-likelihood estimation with an adaptive filtering process to adjust the weight matrix of the predicted parameters according to the discrepancy between the measurements and the model information (Yang and Gao 2006; Yang et al. 2001). This filter can be converted into the classical Kalman filter, the adaptive Kalman filter, or the Sage filter by modifying the weight matrix and the adaptive factor. However, it has difficulty estimating the state parameters at epochs with insufficient measurement information. They also developed a robust adaptive filter with multiple adaptive factors (Yang and Cui 2008). Although the multiple adaptive factors improve robustness, they cause an extra computational load, since they require the number of measurements at every calculation epoch to be larger than the number of state components. Ding et al. reported a process noise scaling method that monitors the filtering process via covariance matching to improve the robustness of adaptive filtering (Ding et al. 2007). However, this method cannot optimally distribute the noise to each individual source. Gao et al. also reported a robust adaptive filter for the SINS/SAR (Strap-down Inertial Navigation System/Synthetic Aperture Radar) integrated navigation system (Gao et al. 2011). However, this method cannot guarantee that the robust adaptive factor constructed from predicted residuals is optimal for achieving the best filtering result. In general, the iterative process involved in robust adaptive filtering requires a reliable state estimate to calculate the covariance matrix of the measurement noise. If the state estimate is disturbed by kinematic model error and measurement errors, it is difficult to obtain a reliable equivalent covariance matrix describing the measurement noise characteristics (Sayed and Rupp 2010).

Model prediction is a method to determine the minimum model error during the estimation process, where the model error is not limited to Gaussian noise characteristics (Goldenstein 2004). Crassidis et al. reported a nonlinear model predictive filter (MPF) (Crassidis and Markley 1997). This algorithm estimates the model error by comparing predictions with measurements and then corrects the filter accordingly to obtain the system state estimate. Owing to the real-time estimation and correction of the model error, the MPF can handle large and dynamic errors in a nonlinear system model, leading to a continuous model that avoids discrete jumps in the state estimate. However, it requires the calculation of partial derivatives, which increases the computational load. Fang et al. reported a method to estimate the model error of an INS/GPS integrated navigation system by combining the EKF with model prediction (Fang and Gong 2010). Huang et al. studied the closed-loop robust stability of a nonlinear model predictive filter coupled with an extended Kalman filter, showing that the estimation error dynamics of the EKF are input-to-state stable in the presence of nonvanishing perturbations (Huang et al. 2013). However, due to the use of the EKF, these two methods inherit the problems of the EKF in handling nonlinear systems, such as the need to calculate the Jacobian matrix and the low accuracy of the linear approximation.

3 Model Prediction Based UKF

3.1 Prediction of System Model Error

Consider the state and measurement equations of a nonlinear system

$$ \dot{\widehat{\mathbf{x}}}(t)=f\left(\widehat{\mathbf{x}}(t)\right)+G\left(\widehat{\mathbf{x}}(t)\right)\underline {\mathbf{d}}(t) $$
(12.1)
$$ \widehat{\mathbf{y}}(t)=\underline{h}\left(\widehat{\mathbf{x}}(t)\right) $$
(12.2)

where both \( f\left(\cdot \right) \) and \( h\left(\cdot \right) \) are continuous and differentiable nonlinear functions, and \( \underline {\mathbf{d}}(t)\in {R}^r \) represents the model-error vector. \( \mathbf{x}(t)\in {R}^n \) is the state vector, \( \widehat{\mathbf{x}}(t) \) the state estimate vector of x(t), \( G\left(\widehat{\mathbf{x}}(t)\right)\in {R}^{n\times r} \) the model-error distribution matrix, \( \mathbf{y}(t)\in {R}^m \) the measurement vector, and ŷ(t) the estimated output vector.

Equation (12.2) can be discretized as

$$ \tilde{\mathbf{z}}\left({t}_k\right)=h\left(\widehat{\mathbf{x}}\left({t}_k\right)\right)+{\mathbf{v}}_k $$
(12.3)

where \( \tilde{\mathbf{z}}\left({t}_k\right) \) is the discrete-time measurement corresponding to ŷ(t), and \( {\mathbf{v}}_k\in {R}^{m\times 1} \) is the measurement noise vector of the system.

The cost function is constructed by summing the weighted sum of squares of the measurement-minus-estimate residuals and the weighted sum of squares of the model correction term

$$ \begin{aligned}[b]J\left(\underline {\mathbf{d}}(t)\right)&=\frac{1}{2}{\left\{\tilde{\mathbf{z}}\left(t+\Delta t\right)-\widehat{\mathbf{y}}\left(t+\Delta t\right)\right\}}^{\mathrm{T}}{\mathbf{R}}^{-1}\left\{\tilde{\mathbf{z}}\left(t+\Delta t\right)-\widehat{\mathbf{y}}\left(t+\Delta t\right)\right\}\\&\quad+\frac{1}{2}{\underline {\mathbf{d}}}^{\mathrm{T}}(t)\mathbf{W}\underline {\mathbf{d}}(t)\end{aligned} $$
(12.4)

where **W** is the weight matrix of the model error and **R** is the covariance matrix of the measurement noise \( {\mathbf{v}}_k \).

Minimizing (12.4) yields the system model error

$$ \begin{aligned}[b]\underline {\mathbf{d}}(t)&=-{\left\{{\left[\boldsymbol{\Lambda} \left(\Delta t\right)\mathbf{S}\left(\widehat{\mathbf{x}}\right)\right]}^{\mathrm{T}}{\mathbf{R}}^{-1}\left[\boldsymbol{\Lambda} \left(\Delta t\right)\mathbf{S}\left(\widehat{\mathbf{x}}\right)\right]+\mathbf{W}\right\}}^{-1}\\ &\quad{\left[\boldsymbol{\Lambda} \left(\Delta t\right)\mathbf{S}\left(\widehat{\mathbf{x}}\right)\right]}^{\mathrm{T}}{\mathbf{R}}^{-1}\left[\mathbf{l}\left(\widehat{\mathbf{x}},\Delta t\right)-\tilde{\mathbf{z}}\left(t+\Delta t\right)+\widehat{\mathbf{y}}(t)\right]\end{aligned} $$
(12.5)

where the ith element of \( \mathbf{l}\left(\widehat{\mathbf{x}}(t),\Delta t\right) \) is defined as

$$ {l}_i\left(\widehat{\mathbf{x}}(t),\Delta t\right)={\displaystyle \sum_{k=1}^{p_i}\frac{\Delta {t}^k}{k!}}{L}_f^k\left({h}_i\right) $$
(12.6)

where \( {L}_f^k\left({h}_i\right) \) is the kth-order Lie derivative of \( {h}_i \) with respect to \( f \).

\( \boldsymbol{\Lambda} \left(\Delta t\right)\in {\mathbf{R}}^{m\times m} \) is a diagonal matrix with elements defined by

$$ {\lambda}_{ii}=\frac{\Delta {t}^{p_i}}{p_i!},\kern1em i=1,2,\cdots, m $$
(12.7)

where \( {p}_i\kern0.5em \left(i=1,2,\cdots, m\right) \) is the lowest order of the time derivative of \( {h}_i\left(\widehat{\mathbf{x}}(t)\right) \) in which any component of \( \underline {\mathbf{d}}(t) \) first appears explicitly.

\( \mathbf{S}\left(\widehat{\mathbf{x}}\right)\in {\mathbf{R}}^{m\times r} \) is a matrix with each row defined by

$$ {\mathbf{s}}_i=\left[{L}_{g_1}\left({L}_f^{p_i-1}\left({h}_i\right)\right),\cdots, {L}_{g_r}\left({L}_f^{p_i-1}\left({h}_i\right)\right)\right],\kern1em i=1,2,\cdots, m $$
(12.8)

Letting \( \mathbf{A}=\boldsymbol{\Lambda} \left(\Delta t\right)\mathbf{S}\left(\widehat{\mathbf{x}}\right) \), (12.5) can be simplified as

$$ \underline {\mathbf{d}}(t)=-{\left\{{\mathbf{A}}^{\mathrm{T}}{\mathbf{R}}^{-1}\mathbf{A}+\mathbf{W}\right\}}^{-1}{\mathbf{A}}^{\mathrm{T}}{\mathbf{R}}^{-1}\left[\mathbf{l}\left(\widehat{\mathbf{x}},\Delta t\right)-\tilde{\mathbf{z}}\left(t+\Delta t\right)+\widehat{\mathbf{y}}(t)\right] $$
(12.9)
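As a concrete illustration, the following Python sketch evaluates the closed-form model-error solution (12.9). It assumes that the matrices \( \mathbf{A}=\boldsymbol{\Lambda}\left(\Delta t\right)\mathbf{S}\left(\widehat{\mathbf{x}}\right) \), **W**, and **R**, the Lie-derivative vector \( \mathbf{l}\left(\widehat{\mathbf{x}},\Delta t\right) \), the measurement \( \tilde{\mathbf{z}}\left(t+\Delta t\right) \), and the estimated output \( \widehat{\mathbf{y}}(t) \) are already available for the current epoch; the function and variable names are illustrative only, not part of the original algorithm description.

```python
import numpy as np

def predict_model_error(A, W, R, l_vec, z_next, y_hat):
    """Closed-form minimizer of the cost (12.4), i.e. the model error (12.9).

    A      : (m, r) product Lambda(dt) @ S(x_hat)
    W      : (r, r) model-error weight matrix
    R      : (m, m) measurement noise covariance
    l_vec  : (m,)   Lie-derivative vector l(x_hat, dt) from (12.6)
    z_next : (m,)   measurement z~(t + dt)
    y_hat  : (m,)   estimated output y_hat(t)
    """
    R_inv = np.linalg.inv(R)
    lhs = A.T @ R_inv @ A + W                     # {A^T R^-1 A + W}
    rhs = A.T @ R_inv @ (l_vec - z_next + y_hat)  # A^T R^-1 [l - z~ + y_hat]
    return -np.linalg.solve(lhs, rhs)             # d(t) in (12.9)

# Toy call with random data (m = 3 measurements, r = 2 model-error terms)
rng = np.random.default_rng(0)
d = predict_model_error(A=rng.normal(size=(3, 2)), W=np.eye(2), R=np.eye(3),
                        l_vec=rng.normal(size=3), z_next=rng.normal(size=3),
                        y_hat=rng.normal(size=3))
```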

3.2 Expanded System State

Consider the discrete state equation and measurement equation of a navigation system

$$ {\mathbf{X}}_k=f\left({\mathbf{X}}_{k-1}\right)+{\boldsymbol{\Gamma}}_{k-1}{\mathbf{w}}_{k-1} $$
(12.10)
$$ {\mathbf{Z}}_k=h\left({\mathbf{X}}_k\right)+{\mathbf{v}}_k $$
(12.11)

where \( {\mathbf{X}}_k \) is the state vector, \( f\left(\cdot \right) \) and \( h\left(\cdot \right) \) are nonlinear functions of the system state, \( {\boldsymbol{\Gamma}}_{k-1} \) is the noise driving matrix, \( {\mathbf{w}}_{k-1} \) is the noise vector of the system state model, \( {\mathbf{Z}}_k \) is the measurement vector, and \( {\mathbf{v}}_k \) is the measurement noise vector.

Let us first consider expanding the dimension of the system state vector. To account for the effect of the system driving noise on the state model, the noise is augmented into the system state. The expanded state vector and covariance matrix are

$$ {\mathbf{X}}_k^a={\left[{\mathbf{X}}_k^{\mathrm{T}}\kern0.5em {\underline {\mathbf{d}}}_k^{\mathrm{T}}\kern0.5em {\mathbf{v}}_k^{\mathrm{T}}\right]}^{\mathrm{T}} $$
(12.12)
$$ {\mathbf{P}}_k^a=\left[\begin{array}{ccc}\hfill {\mathbf{P}}_k^x\hfill & \hfill \hfill & \hfill \hfill \\ {}\hfill \hfill & \hfill {\mathbf{Q}}_k^d\hfill & \hfill \hfill \\ {}\hfill \hfill & \hfill \hfill & \hfill {\mathbf{R}}_k^v\hfill \end{array}\right] $$
(12.13)

where the dimension of \( {\mathbf{X}}_k^a \) is \( L=n+q+p \), with q the dimension of **Q** and p the dimension of **R**.
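A minimal sketch of this augmentation, assuming the predicted model error, the measurement noise statistics, and the state covariance are available as arrays, is given below (all names are illustrative):

```python
import numpy as np
from scipy.linalg import block_diag

def augment(x_k, d_k, v_k, P_x, Q_d, R_v):
    """Build the expanded state (12.12) and block-diagonal covariance (12.13)."""
    x_aug = np.concatenate([x_k, d_k, v_k])  # dimension L = n + q + p
    P_aug = block_diag(P_x, Q_d, R_v)
    return x_aug, P_aug

# Example with n = 2 states, q = 1 model-error term, p = 1 measurement noise
x_aug, P_aug = augment(np.zeros(2), np.zeros(1), np.zeros(1),
                       np.eye(2), 10.0 * np.eye(1), 1.0 * np.eye(1))
# x_aug.shape == (4,), P_aug.shape == (4, 4), so L = 4
```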

As mentioned previously, the UKF is very sensitive to system model error and thus cannot handle the error estimation problem on its own. To overcome this limitation, this paper adopts model prediction to determine a reliable model error \( \underline {\mathbf{d}}(t) \) and improve the robustness of the UKF. Replacing \( {\mathbf{w}}_{k-1} \) in (12.10) with \( {\underline {\mathbf{d}}}_{k-1} \) and substituting the expanded state estimate into the state and measurement equations gives

$$ {\mathbf{X}}_k^a=f\left({\mathbf{X}}_{k-1}^a\right)+{\boldsymbol{\Gamma}}_{k-1}{\underline {\mathbf{d}}}_{k-1} $$
(12.14)
$$ {\mathbf{Z}}_k^a=h\left({\mathbf{X}}_k^a\right)+{\mathbf{v}}_k $$
(12.15)

where \( {\underline {\mathbf{d}}}_k \) and \( {\mathbf{v}}_k \) can be assumed to be white noise processes; if a random error is not white, it can be transformed into a white noise process by decreasing the sampling rate.

Therefore, the following conditions hold for \( {\underline {\mathbf{d}}}_k \) and \( {\mathbf{v}}_k \)

$$ \left\{\begin{array}{cl}\hfill \mathrm{E}\left[{\underline{d}}_k\right]=0,\hfill & \hfill \kern1.25em \mathrm{C}\mathrm{O}\mathrm{V}\left[{\underline{d}}_k,{\underline{d}}_j^{\mathrm{T}}\right]={\mathbf{Q}}_k{\delta}_{kj}\hfill \\ {}\hfill \mathrm{E}\left[{v}_k\right]=0,\hfill & \hfill \kern1.25em \mathrm{C}\mathrm{O}\mathrm{V}\left[{v}_k,{v}_j^{\mathrm{T}}\right]={\mathbf{R}}_k{\delta}_{kj}\hfill \\ {}\hfill \hfill & \hfill \mathrm{C}\mathrm{O}\mathrm{V}\left[{\underline{d}}_k,{v}_j^{\mathrm{T}}\right]=0\hfill \end{array}\right. $$
(12.16)

3.3 Filtering Algorithm

The proposed model prediction based UKF includes the following steps:

  1. Calculate the sigma point set \( {\boldsymbol{\upchi}}_k \) of the state vector

    $$ {\boldsymbol{\upchi}}_k=\left[\begin{array}{c}{\overline{\mathbf{x}}}_{k-1}^a\\ {}{\overline{\mathbf{x}}}_{k-1}^a+\sqrt{\left(L+I\right)\cdot {\overline{\mathbf{P}}}_{x,k-1}^a}\\ {}{\overline{\mathbf{x}}}_{k-1}^a-\sqrt{\left(L+I\right)\cdot {\overline{\mathbf{P}}}_{x,k-1}^a}\end{array}\right] $$
    (12.17)
  2. Calculate the system model error \( {\underline {\mathbf{d}}}_{k-1} \) from (12.9)

    $$ \underline {\mathbf{d}}(t)=-{\left\{{\mathbf{A}}^{\mathrm{T}}{\mathbf{R}}^{-1}\mathbf{A}+\mathbf{W}\right\}}^{-1}{\mathbf{A}}^{\mathrm{T}}{\mathbf{R}}^{-1}\left[\mathbf{l}\left(\widehat{\mathbf{x}},\Delta t\right)-\tilde{\mathbf{z}}\left(t+\Delta t\right)+\widehat{\mathbf{y}}(t)\right] $$
    (12.18)
  3. The time update equations are

    $$ {\boldsymbol{\upchi}}_{k,k-1}^a=f\left({\boldsymbol{\upchi}}_{k-1}^x\right)+{\boldsymbol{\upchi}}_{k-1}^d $$
    (12.19)
    $$ {\widehat{\mathbf{x}}}_{k,k-1}^a={\displaystyle \sum_{i=0}^{2L}{W}_i^{(m)}\cdot {\boldsymbol{\upchi}}_{i,k,k-1}^a} $$
    (12.20)
    $$ {\mathbf{P}}_{k,k-1}^a={\displaystyle \sum_{i=0}^{2L}{W}_i^{(c)}\cdot \left({\boldsymbol{\upchi}}_{i,k,k-1}^a-{\widehat{\mathbf{x}}}_{k,k-1}\right)\cdot {\left({\boldsymbol{\upchi}}_{i,k,k-1}^a-{\widehat{\mathbf{x}}}_{k,k-1}\right)}^{\mathrm{T}}} $$
    (12.21)
  4. The measurement update equations are

    $$ {\mathbf{Z}}_k^a=h\left({\boldsymbol{\upchi}}_{k,k-1}^{(z)}\right)+{\boldsymbol{\upchi}}_{k,k-1}^{(v)} $$
    (12.22)
    $$ {\widehat{\mathbf{Z}}}_k={\displaystyle \sum_{i=0}^{2L}{W}_i^{(m)}\cdot {\mathbf{Z}}_{i,k}^a} $$
    (12.23)
    $$ {\mathbf{P}}_{x_k{z}_k}={\displaystyle \sum_{i=0}^{2L}{W}_i^{(c)}\cdot \left({\boldsymbol{\upchi}}_{i,k,k-1}^a-{\widehat{\mathbf{x}}}_{k,k-1}\right)\cdot {\left({\mathbf{Z}}_{i,k}^a-{\widehat{\mathbf{Z}}}_k\right)}^{\mathrm{T}}} $$
    (12.24)
    $$ {\mathbf{P}}_{z_k}={\displaystyle \sum_{i=0}^{2L}{W}_i^{(c)}\cdot \left({\mathbf{Z}}_{i,k}^a-{\widehat{\mathbf{Z}}}_k\right)\cdot {\left({\mathbf{Z}}_{i,k}^a-{\widehat{\mathbf{Z}}}_k\right)}^{\mathrm{T}}} $$
    (12.25)
    $$ {\overline{\mathbf{P}}}_{z_k}={\gamma}_{ij}{\mathbf{P}}_{z_k} $$
    (12.26)
    $$ {\mathbf{K}}_k={\mathbf{P}}_{x_k{z}_k}{\overline{\mathbf{P}}}_{z_k}^{-1} $$
    (12.27)
    $$ {\widehat{\mathbf{x}}}_k^a={\widehat{\mathbf{x}}}_{k,k-1}^a+{\mathbf{K}}_k\left({\mathbf{Z}}_k^a-{\widehat{\mathbf{Z}}}_k\right) $$
    (12.28)
    $$ {\mathbf{P}}_k^a={\mathbf{P}}_{k,k-1}^a-{\mathbf{K}}_k{\overline{\mathbf{P}}}_{z_k}{\mathbf{K}}_k^{\mathrm{T}} $$
    (12.29)

In (12.20), (12.21), and (12.23)–(12.25), \( {W}_i^{(m)} \) and \( {W}_i^{(c)} \) are the weights, represented as

$$ \left\{\begin{array}{l}{W}_0^{(m)}=\frac{I}{\left(L+I\right)}\\[4pt] {}{W}_0^{(c)}=\frac{I}{\left(L+I\right)}+\left(1-{\alpha}^2+\beta \right)\\ [4pt] {}{W}_i^{(m)}={W}_i^{(c)}=\frac{0.5}{\left(L+I\right)}\kern1.5em i=1,2,\cdots, 2L\end{array}\right. $$
(12.30)

where \( L+I=3 \). When \( I<0 \), the positive semi-definiteness of the covariance computed with the weights in (12.30) cannot be guaranteed. In this case, a scale correction (Gao et al. 2011) is applied to the obtained sigma point set to restore it. In most cases, \( {10}^{-4}\le \alpha \le 1 \), and \( \beta =2 \) is optimal for a Gaussian distribution (Julier et al. 1995).
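The sigma-point construction (12.17) together with the weights (12.30) can be sketched as follows, with \( I \) chosen so that \( L+I=3 \) as stated above; all names and default parameter values are illustrative.

```python
import numpy as np

def sigma_points_and_weights(x_aug, P_aug, alpha=1e-3, beta=2.0):
    """Sigma points per (12.17) and weights per (12.30), with L + I = 3."""
    L = x_aug.size
    I = 3.0 - L                                  # enforce L + I = 3
    S = np.linalg.cholesky((L + I) * P_aug)      # matrix square root of (L+I)P
    chi = np.vstack([x_aug, x_aug + S.T, x_aug - S.T])   # (2L + 1, L) points
    Wm = np.full(2 * L + 1, 0.5 / (L + I))
    Wc = Wm.copy()
    Wm[0] = I / (L + I)
    Wc[0] = I / (L + I) + (1.0 - alpha**2 + beta)
    return chi, Wm, Wc

# Example for a 3-dimensional augmented state (L = 3, so I = 0)
chi, Wm, Wc = sigma_points_and_weights(np.zeros(3), np.diag([2.0, 10.0, 1.0]))
```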

In (12.26), \( {\gamma}_{ii} \) represents a main diagonal element of the prior weight matrix \( \boldsymbol{\gamma} \). If the measurement information contains an abnormality, the corresponding weight is adjusted by the following IGG III weight function to improve the filtering performance

$$ {\gamma}_{ii}=\left\{\begin{array}{l}1\kern14em \left|{\tilde{V}}_i\right|\le {k}_0\\ {}\frac{k_0}{\left|{\tilde{V}}_i\right|}{\left(\frac{k_1-{\tilde{V}}_i}{k_1-{k}_0}\right)}^2\kern10em {k}_0<\left|{\tilde{V}}_i\right|\le {k}_1\\ {}0\kern14em \left|{\tilde{V}}_i\right|>{k}_1\end{array}\right. $$
(12.31)

where \( \tilde{V}_{i} \) is the standardized residual, and \( {k}_0 \) and \( {k}_1 \) are constants, typically \( {k}_0=1.0\sim 1.5 \) and \( {k}_1=3.5\sim 8.0 \). The off-diagonal elements are formed from the diagonal elements as

$$ {\gamma}_{ij}=\sqrt{\gamma_{ii}}\sqrt{\gamma_{jj}} $$
(12.32)
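To make the robust weighting concrete, the sketch below applies the IGG III function (12.31)-(12.32) and the reweighted measurement update (12.26)-(12.29); the threshold values \( k_0 \), \( k_1 \) and all variable names are illustrative, and the standardized residuals \( \tilde{V}_i \) are assumed to be given.

```python
import numpy as np

def igg3_weight(v_std, k0=1.5, k1=4.5):
    """IGG III weight gamma_ii in (12.31) for one standardized residual."""
    a = abs(v_std)                               # |V~_i| used in both factors
    if a <= k0:
        return 1.0
    if a <= k1:
        return (k0 / a) * ((k1 - a) / (k1 - k0)) ** 2
    return 0.0                                   # measurement rejected

def robust_update(x_pred, P_pred, z, z_hat, P_xz, P_z, v_std):
    """Measurement update (12.26)-(12.29) with the IGG III reweighted P_z."""
    g = np.array([igg3_weight(v) for v in v_std])
    Gamma = np.sqrt(np.outer(g, g))              # gamma_ij in (12.32)
    P_z_bar = Gamma * P_z                        # element-wise product, (12.26)
    K = P_xz @ np.linalg.inv(P_z_bar)            # gain (12.27); in practice a
                                                 # small floor on gamma avoids
                                                 # a singular P_z_bar
    x = x_pred + K @ (z - z_hat)                 # state update (12.28)
    P = P_pred - K @ P_z_bar @ K.T               # covariance update (12.29)
    return x, P
```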

4 Performance Evaluation and Discussions

Experiments have been conducted to comprehensively evaluate and analyze the performance of the proposed filtering algorithm (denoted the MP-UKF). Comparison analysis with existing filtering algorithms, namely the EKF and the UKF, is also presented in this section.

4.1 Simulations and Analysis

The univariate non-stationary growth model (UNGM) was adopted to evaluate the performance of the proposed algorithm. The state and measurement equations of the UNGM are described as

$$ x(t)=0.5x\left(t-1\right)+\frac{2.5x\left(t-1\right)}{1+{\left[x\left(t-1\right)\right]}^2}+8 \cos \left[1.2\left(t-1\right)\right]+w(t) $$
(12.33)
$$ Z(t)=x(t)/20+v(t) $$
(12.34)

where both w(t) and v(t) are zero-mean Gaussian noises whose variances Q and R are

$$ Q=\operatorname{cov}\left[w(t),w{\left(\tau \right)}^{\mathrm{T}}\right]=10 $$
(12.35)
$$ R=\operatorname{cov}\left[v(t),v{\left(\tau \right)}^{\mathrm{T}}\right]=1 $$
(12.36)

The initial state and its variance are assumed as

$$ x(0)=0.1\;\mathrm{and}\;P(0)=2 $$
(12.37)

By expanding the state space dimension, we get

$$ {x}^a(0)={\left[\begin{array}{ccc}\hfill 0.1\hfill & \hfill d\hfill & \hfill 0\hfill \end{array}\right]}^{\mathrm{T}}\;\mathrm{and}\;{\mathbf{P}}^a(0)=\left[\begin{array}{ccc}\hfill 2\hfill & \hfill \hfill & \hfill \hfill \\ {}\hfill \hfill & \hfill 10\hfill & \hfill \hfill \\ {}\hfill \hfill & \hfill \hfill & \hfill 1\hfill \end{array}\right] $$
(12.38)

where d is calculated by (12.9).
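For reference, the UNGM trajectory and measurements used in the simulation can be generated with the short sketch below, following (12.33)-(12.34) with Q = 10 and R = 1; the trajectory length and random seed are illustrative choices.

```python
import numpy as np

def simulate_ungm(T=100, x0=0.1, Q=10.0, R=1.0, seed=0):
    """Generate one UNGM state trajectory (12.33) and measurements (12.34)."""
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    z = np.empty(T)
    x_prev = x0
    for t in range(1, T + 1):
        x_t = (0.5 * x_prev + 2.5 * x_prev / (1.0 + x_prev ** 2)
               + 8.0 * np.cos(1.2 * (t - 1))
               + rng.normal(scale=np.sqrt(Q)))               # state (12.33)
        z[t - 1] = x_t / 20.0 + rng.normal(scale=np.sqrt(R))  # measurement (12.34)
        x[t - 1] = x_t
        x_prev = x_t
    return x, z

x_true, z_meas = simulate_ungm()
```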

It can be seen from Fig. 12.1 that the estimated values of the EKF are within (−20, 10), leading to large deviations from the real values. The estimated values of the UKF are within (−15, 5); although the UKF improves on the EKF, there are still pronounced deviations between the estimated and real values. In contrast, the proposed MP-UKF accurately captures the true values, keeping the estimation error within (−5, 5). Figure 12.2 shows the root mean square errors (RMSE) of the three filtering algorithms. The RMSE is within (1, 8) for the EKF, (0.5, 2.5) for the UKF, and (0, 0.5) for the MP-UKF. Therefore, it is evident that the MP-UKF outperforms the other two.

Fig. 12.1

Estimation of the three filtering methods

Fig. 12.2

Root mean square errors of the three algorithms for the estimation

4.2 Experiments and Analysis

Practical experiments were conducted to evaluate the performance of the proposed filtering algorithm for a SINS/CNS/SAR (Strap-down Inertial Navigation System/Celestial Navigation System/Synthetic Aperture Radar) integrated navigation system. The navigation frame is the E-N-U (East-North-Up) geographic coordinate system. The SINS/CNS/SAR integrated navigation system is described by

$$ {\mathbf{X}}_k=f\left({\mathbf{X}}_{k-1}\right)+{\boldsymbol{\Gamma}}_{k-1}{\underline {\mathbf{d}}}_{k-1} $$
(12.39)
$$ {\mathbf{Z}}_k={\mathbf{H}}_k{\mathbf{X}}_k+{\mathbf{v}}_k $$
(12.40)

where \( f\left(\cdot \right) \) is the nonlinear function of the state vector X and is described as

$$ \begin{aligned}[b] f\left(\mathbf{X}(t)\right)&=\left[\begin{array}{l}{\boldsymbol{C}}_{nb}^{-1}\left[\left(I-{\boldsymbol{C}}_n^{n^{\prime }}\right){\widehat{\omega}}_{in}^n+{\boldsymbol{C}}_n^{n^{\prime }}\delta {\omega}_{in}^n-{\boldsymbol{C}}_b^{n^{\prime }}\delta {\omega}_{ib}^b\right]\\[8pt] {}\left[I-{\left({\boldsymbol{C}}_n^{n^{\prime }}\right)}^{\mathrm{T}}\right]{\boldsymbol{C}}_b^{n^{\prime }}{\widehat{\boldsymbol{f}}}_{sf}^b+{\left({\boldsymbol{C}}_n^{n^{\prime }}\right)}^{\mathrm{T}}{\boldsymbol{C}}_b^{n^{\prime }}\delta {\boldsymbol{f}}_{sf}^b\\[8pt] \quad-\left(2\delta {\omega}_{ie}^n+\delta {\omega}_{en}^n\right)\times \widehat{\boldsymbol{V}}-\left(2{\widehat{\omega}}_{ie}^n+{\widehat{\omega}}_{en}^n\right)\\[8pt] \quad\times \delta \boldsymbol{V}+\left(2{\omega}_{ie}^n+{\omega}_{en}^n\right)\times \delta \boldsymbol{V}+\delta \boldsymbol{g}\\[8pt] {}\frac{{\widehat{V}}_N}{{\widehat{R}}_M+h}-\frac{\left({\widehat{V}}_N-\delta {V}_N\right)}{\left({\widehat{R}}_M-\delta {R}_M\right)+\left(\widehat{h}-\delta h\right)}\\[8pt] {}\frac{{\widehat{V}}_E \sec L}{{\widehat{R}}_N+h}-\frac{\left({\widehat{V}}_E-\delta {V}_E\right) \sec \left(\widehat{L}-\delta L\right)}{\left({\widehat{R}}_N-\delta {R}_N\right)+\left(\widehat{h}-\delta h\right)}\\[8pt] {}\delta {V}_U\end{array}\right]\end{aligned} $$
(12.41)

The state variables are

$$ \mathbf{X}={\left[{\phi}_E\kern0.5em {\phi}_N\kern0.5em {\phi}_U\kern0.5em \delta {v}_E\kern0.5em \delta {v}_N\kern0.5em \delta {v}_U\kern0.5em \delta L\kern0.5em \delta \lambda \kern0.5em \delta h\kern0.5em {\varepsilon}_{bx}\kern0.5em {\varepsilon}_{by}\kern0.5em {\varepsilon}_{bz}\kern0.5em {\nabla}_x\kern0.5em {\nabla}_y\kern0.5em {\nabla}_z\right]}^{\mathrm{T}} $$
(12.42)

where \( \left({\phi}_E,{\phi}_N,{\phi}_U\right) \) is the attitude error, \( \left(\delta {v}_E,\delta {v}_N,\delta {v}_U\right) \) the velocity error, \( \left(\delta L,\delta \lambda, \delta h\right) \) the position error, \( \left({\varepsilon}_{bx},{\varepsilon}_{by},{\varepsilon}_{bz}\right) \) the gyro constant drift, and \( \left({\nabla}_x,{\nabla}_y,{\nabla}_z\right) \) the accelerometer zero-bias.

The SINS and SAR are combined as sub-filter 1, and its measurement is

$$ {\mathbf{Z}}_1=\left[\begin{array}{c}{\phi}_E-{\phi}_{\mathrm{SE}}\\ {}\left(L-{L}_S\right){R}_M\\ {}\left(\lambda -{\lambda}_S\right){R}_N \cos L\end{array}\right] $$
(12.43)

The SINS and CNS are combined as sub-filter 2, and its measurement is

$$ {\mathbf{Z}}_2=\left[\begin{array}{c}{\phi}_E-{\phi}_{\mathrm{CE}}\\ {}{\phi}_N-{\phi}_{\mathrm{CN}}\\ {}{\phi}_U-{\phi}_{\mathrm{CU}}\end{array}\right] $$
(12.44)

The measurement of the SINS/CNS/SAR integrated navigation system is described as

$$ \mathbf{Z}=\left[\begin{array}{c}{\mathbf{Z}}_1\\ {}{\mathbf{Z}}_2\end{array}\right] $$
(12.45)

The measurement matrix of the SINS/CNS/SAR integrated navigation system is

$$ \mathbf{H}={\left[\begin{array}{c}{\mathbf{H}}_1\\ {}{\mathbf{H}}_2\end{array}\right]}_{6\times 15} $$
(12.46)

where

$$ {\mathbf{H}}_1=\left[\begin{array}{cccccc}\hfill {0}_{1\times 2}\hfill & \hfill 1\hfill & \hfill {0}_{1\times 3}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill {0}_{1\times 7}\hfill \\ {}\hfill {0}_{1\times 2}\hfill & \hfill 0\hfill & \hfill {0}_{1\times 3}\hfill & \hfill {R}_M\hfill & \hfill 0\hfill & \hfill {0}_{1\times 7}\hfill \\ {}\hfill {0}_{1\times 2}\hfill & \hfill 0\hfill & \hfill {0}_{1\times 3}\hfill & \hfill 0\hfill & \hfill {R}_N \cos L\hfill & \hfill {0}_{1\times 7}\hfill \end{array}\right] $$
(12.47)
$$ {\mathbf{H}}_2=\left[\begin{array}{cccc}\hfill - \cos {\psi}_I\hfill & \hfill \sin {\psi}_I\hfill & \hfill 0\hfill & \hfill {0}_{1\times 12}\hfill \\ {}\hfill \frac{ \sin {\psi}_I}{ \cos {\theta}_I}\hfill & \hfill -\frac{\left( \cos {\psi}_I-2 \sin {\theta}_I \sin {\gamma}_I \cos {\gamma}_I \sin {\psi}_I\right)}{ \cos {\theta}_I}\hfill & \hfill 0\hfill & \hfill {0}_{1\times 12}\hfill \\ {}\hfill - \sin {\psi}_I \tan {\theta}_I\hfill & \hfill \cos {\psi}_I \tan {\theta}_I\hfill & \hfill -1\hfill & \hfill {0}_{1\times 12}\hfill \end{array}\right] $$
(12.48)
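As an illustration, the measurement matrix in (12.46)-(12.48) can be assembled as below for the 15-dimensional state in (12.42); the SINS-indicated angles \( {\psi}_I \), \( {\theta}_I \), \( {\gamma}_I \), the radii \( {R}_M \), \( {R}_N \), and the latitude \( L \) are assumed to be available, and all names and test values are illustrative.

```python
import numpy as np

def measurement_matrix(psi, theta, gamma, R_M, R_N, lat):
    """Assemble H = [H1; H2] (6 x 15) from (12.47) and (12.48)."""
    H1 = np.zeros((3, 15))
    H1[0, 2] = 1.0                    # third attitude component, row 1 of (12.47)
    H1[1, 6] = R_M                    # latitude error scaled by R_M
    H1[2, 7] = R_N * np.cos(lat)      # longitude error scaled by R_N cos L
    H2 = np.zeros((3, 15))
    H2[0, :3] = [-np.cos(psi), np.sin(psi), 0.0]
    H2[1, :3] = [np.sin(psi) / np.cos(theta),
                 -(np.cos(psi) - 2.0 * np.sin(theta) * np.sin(gamma)
                   * np.cos(gamma) * np.sin(psi)) / np.cos(theta),
                 0.0]
    H2[2, :3] = [-np.sin(psi) * np.tan(theta),
                 np.cos(psi) * np.tan(theta), -1.0]
    return np.vstack([H1, H2])

H = measurement_matrix(psi=0.1, theta=0.02, gamma=0.01,
                       R_M=6378e3, R_N=6378e3, lat=np.deg2rad(34.0))
```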

Suppose the Earth’s rotation angular velocity is \( {\omega}_{\mathrm{ie}}=15^{\circ}/\mathrm{h} \), the radius of the Earth is \( {R}_e=6378\kern0.5em \mathrm{km} \), and the acceleration of gravity is \( \mathrm{g}=9.780\kern0.5em \mathrm{m}/{\mathrm{s}}^2 \). The initial position of the aircraft was at East longitude 109°, North latitude 34°, and altitude 1000 m. The initial attitude angle error was \( {\phi}_E={\phi}_N={\phi}_U=0.{5}^{\prime } \), the velocity error \( \delta {V}_E=\delta {V}_N=\delta {V}_U=0.1\kern0.5em \mathrm{m}/\mathrm{s} \), and the position error \( \delta \lambda ={1}^{{\prime\prime} },\kern0.5em \delta L={1}^{{\prime\prime} },\kern0.5em \delta h=10\kern0.5em \mathrm{m} \). The gyro constant drift and angular random walk were 0.01°/h and \( 0.001{}^{\circ}/\sqrt{\mathrm{h}} \), and the accelerometer zero-bias and random walk were \( {10}^{-4}\mathrm{g} \) and \( {10}^{-5}\mathrm{g}/\sqrt{\mathrm{s}} \). The horizontal positioning accuracy of the SAR was 25 m, the mean square error of the SAR heading angle was 0.1°, and the measurement accuracy of the CCD (Charge-Coupled Device) star sensor was 20″. The sampling periods of the SINS, SAR, and CNS were 0.01, 1, and 0.5 s, respectively, and the filtering period was 1 s. The measurement noise was \( \boldsymbol{v}={\left[0.5\kern0.5em 0.2\kern0.5em 0.2\kern0.5em 0.5\kern0.5em 0.5\kern0.5em 0.5\right]}^{\mathrm{T}} \).

As shown in Figs. 12.3, 12.4, and 12.5, the EKF has the largest attitude angle error, with the heading error within (−1.5′, 1.25′), the roll angle error within (−1′, 1.5′), and the pitch angle error within (−1.25′, 1.5′). The portions of the curves that go beyond (−1′, 1′) also exhibit significant changes in amplitude. The attitude angle error is improved by the UKF, with the heading, roll, and pitch angle errors all within (−1′, 1′). In contrast, the attitude angle error of the proposed filtering algorithm is reduced to (−0.3′, 0.4′).

Fig. 12.3

Heading angle error of the SINS/CNS/SAR integrated navigation system

Fig. 12.4

Roll angle error of the SINS/CNS/SAR integrated navigation system

Fig. 12.5

Pitch angle error of the SINS/CNS/SAR integrated navigation system

Figures 12.6 and 12.7 show a similar trend to Figs. 12.3, 12.4, and 12.5. With the EKF, the velocity error is the largest, within (−1.5 m/s, 1.5 m/s). With the UKF, both the east and north velocity errors are within (−1 m/s, 0.9 m/s). The proposed filtering algorithm reduces the velocity error to (−0.2 m/s, 0.3 m/s). As shown in Figs. 12.8 and 12.9, the latitude and longitude position errors of the EKF are gradually reduced by the UKF and the proposed MP-UKF; the estimation errors of the three filters are within (−0.9″, 0.8″), (−0.5″, 0.5″), and (−0.2″, 0.2″), respectively. Table 12.1 shows the detailed comparison of the position errors of the three filters.

Fig. 12.6

E-velocity error of the SINS/CNS/SAR integrated navigation system

Fig. 12.7

N-velocity error of the SINS/CNS/SAR integrated navigation system

Fig. 12.8

Latitude error of the SINS/CNS/SAR integrated navigation system

Fig. 12.9

Longitude error of the SINS/CNS/SAR integrated navigation system

Table 12.1 Position error statistics of the SINS/CNS/SAR integrated navigation system

From the above results and analysis, it is evident that the proposed algorithm can significantly reduce navigation error. The achieved navigation accuracy is much higher than those of the EKF and UKF.

5 Conclusions

This paper presents a nonlinear filtering algorithm for state estimation in nonlinear systems. By taking into account the effect of the system driving noise on the system state, the algorithm augments the state with the driving noise to expand the state information input to the filter. It then incorporates model prediction into the UKF process to overcome the UKF's sensitivity to system model error. Simulation and experimental results demonstrate that the proposed filtering algorithm can significantly reduce the navigation error, achieving an accuracy much higher than those of the EKF and UKF.

Future work will focus on improving the computational performance of the proposed algorithm. Despite the improved filtering accuracy, the expanded state dimension adds an extra computational load to the filtering process. We expect to establish a strategy to simplify the computation by studying the relationship between accuracy and computation time as a function of the degree of nonlinearity.