
1 Introduction

For linear systems, the Kalman filter provides recursive estimates with good performance. For nonlinear systems, however, its performance degrades significantly and the filter may even diverge [1]. The extended Kalman filter (EKF) extends the algorithm to nonlinear systems with some robustness, but it requires computing the Jacobian matrix, which greatly increases the complexity of the filter [2].

Later, it was found that approximating the Gaussian distribution is simpler than approximating the nonlinear function. Motivated by this observation, the unscented Kalman filter (UKF) was proposed. Based on the unscented transform, the UKF combines deterministic sampling with the linear Kalman filtering framework. It has the following characteristics [3, 4]: (1) compared to the EKF, its accuracy reaches third order for Gaussian data and second order for nonlinear non-Gaussian data; (2) no Jacobian matrix computation is required; (3) discrete systems and additive noise can be handled; (4) its computational complexity is of the same order as that of the EKF; and (5) the deterministic sampling strategy avoids the particle degeneracy and depletion problems.

For a practical system, however, the UKF performance is greatly reduced by model uncertainty and unknown signal statistics. In addition, because of the finite bit precision of the hardware and filtering errors, every step yields uncompensated errors. Since the UKF is computed iteratively, its accuracy is significantly reduced by error accumulation over hundreds of iterations. Hence, several improved filtering methods have been proposed.

The particle filter is presented in [5]; it requires a large number of particles, so its computation is intensive. An MIT-rule-based adaptive UKF robust to interference is proposed in [6], but it also requires many partial-derivative calculations. In [7], a confidence interval is used to counteract the degradation of UKF prediction accuracy. The numerical stability of the UKF is considered in [8]. The minimum-entropy criterion [9], confidence intervals [10], and singular value decomposition [11] have also been applied to the UKF to improve its accuracy.

Motivated by the covariance intersection algorithm (CIA) [12], we propose an improved UKF. An improved estimate can be derived from the actual value and the filter estimate even though the correlation between these two values is unknown. In practice, the true value at the present moment and the estimate at the next moment are correlated, but the numerical value of this correlation is unavailable. Through the CIA, the improved estimate is obtained without knowing the correlation.

2 The Covariance Intersection Algorithm

Let A and B be two pieces of correlated information that need to be fused. Knowledge of the correlation between A and B is very helpful for the fusion, but in most instances the correlation information is unknown.

The CIA solves this problem. Let {a, P aa} and {b, P bb} denote information A and information B together with their covariances. Define \( \tilde{a}=a-\overline{a} \), \( \tilde{b}=b-\overline{b} \), and \( \tilde{c}=c-\overline{c} \), where \( \tilde{a} \), \( \tilde{b} \), and \( \tilde{c} \) are the errors; a, b, and c are the actual values; and \( \overline{a} \), \( \overline{b} \), and \( \overline{c} \) are the mean values.

The mean squared error \( {\overline{\mathbf{P}}}_{aa} \) and \( {\overline{\mathbf{P}}}_{bb} \) and the covariance \( {\overline{\mathbf{P}}}_{ab} \) are computed as follows:

$$ {\overline{\mathbf{P}}}_{aa}=\mathrm{E}\left[\tilde{a}{\tilde{a}}^T\right],\quad {\overline{\mathbf{P}}}_{bb}=\mathrm{E}\left[\tilde{b}{\tilde{b}}^T\right],\quad {\overline{\mathbf{P}}}_{ab}=\mathrm{E}\left[\tilde{a}{\tilde{b}}^T\right] $$

\( \overline{a} \) and \( \overline{b} \) are actually not known. Hence, \( {\overline{\mathbf{P}}}_{aa} \) and \( {\overline{\mathbf{P}}}_{bb} \) are also unknown.

In the CIA, \( {\overline{\mathbf{P}}}_{aa} \) and \( {\overline{\mathbf{P}}}_{bb} \) are approximated by the values P aa and P bb. Based on {a, P aa} and {b, P bb}, the improved estimated value of {c, P cc} is obtained by the CIA without correlation P ab.

In Fig. 1, the solid-line ellipses are P aa and P bb, and the dotted-line ellipses are P cc. Based on P aa and P bb, a different P cc is derived for each P ab, the correlation between information A and information B.

Fig. 1

Improved covariance elliptical shape

As shown in Fig. 1, P cc always lies in the intersection of P aa and P bb for any value of P ab. Hence, according to the CIA, P cc can be obtained even if P ab is unknown; the better P ab is known, the more information P cc recovers. The CIA is expressed by:

$$ {\mathbf{P}}_{cc}^{-1}=w{\mathbf{P}}_{aa}^{-1}+\left(1-w\right){\mathbf{P}}_{bb}^{-1}\vspace*{-1pc} $$
(1)
$$ {\mathbf{P}}_{cc}^{-1}\mathbf{c}=w{\mathbf{P}}_{aa}^{-1}\mathbf{a}+\left(1-w\right){\mathbf{P}}_{bb}^{-1}\mathbf{b} $$
(2)

where w ∈ [0, 1] is the weighting factor assigned to a and b. Its value is chosen by an optimization method, for example, the Newton-Raphson method, a positive semi-definite formulation, or convex optimization. A better optimization method yields a more accurate {c, P cc}. The optimal {c, P cc} is therefore unique with respect to the chosen optimization method.

In the CIA, {c, P cc} is computed from {a, P aa} and {b, P bb}. The only constraint is consistency, i.e., \( {\mathbf{P}}_{aa}-{\overline{\mathbf{P}}}_{aa}\ge 0 \) and \( {\mathbf{P}}_{bb}-{\overline{\mathbf{P}}}_{bb}\ge 0 \), which ensures the consistency of the result, \( {\mathbf{P}}_{cc}-{\overline{\mathbf{P}}}_{cc}\ge 0 \) [12]. Here \( {\overline{\mathbf{P}}}_{cc}=\mathrm{E}\left[\tilde{c}{\tilde{c}}^T\right] \) is the error covariance.
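The fusion rule of Eqs. (1) and (2) can be sketched as follows. A simple grid search over w that minimizes the trace of P cc is used here as one choice among the optimization methods mentioned above; all function and variable names are illustrative, not from the paper.

```python
# Covariance intersection sketch (Eqs. (1)-(2)); w chosen by grid search
# minimizing trace(P_cc). Names and the grid-search choice are assumptions.
import numpy as np

def covariance_intersection(a, Paa, b, Pbb, n_grid=101):
    """Fuse {a, Paa} and {b, Pbb} without knowing the cross-covariance P_ab."""
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        info = w * np.linalg.inv(Paa) + (1.0 - w) * np.linalg.inv(Pbb)  # Eq. (1)
        Pcc = np.linalg.inv(info)
        if best is None or np.trace(Pcc) < best[0]:
            best = (np.trace(Pcc), w, Pcc)
    _, w, Pcc = best
    # Eq. (2): P_cc^{-1} c = w P_aa^{-1} a + (1 - w) P_bb^{-1} b
    c = Pcc @ (w * np.linalg.inv(Paa) @ a + (1.0 - w) * np.linalg.inv(Pbb) @ b)
    return c, Pcc, w

# two estimates that are precise along different axes
a = np.array([1.0, 0.0]); Paa = np.diag([1.0, 4.0])
b = np.array([0.0, 1.0]); Pbb = np.diag([4.0, 1.0])
c, Pcc, w = covariance_intersection(a, Paa, b, Pbb)
```

By symmetry the search settles on w = 0.5 here, and the fused covariance is tighter (trace 3.2) than either input (trace 5).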

3 The Improved UKF

3.1 UKF

It is assumed that the nonlinear system is:

$$ \mathbf{X}(k)=f\ \left(\mathbf{X}\left(k-1\right)\right)+\mathbf{W}\left(k-1\right)\vspace*{-1pc} $$
(3)
$$ \mathbf{Y}(k)=h\left(\mathbf{X}(k)\right)+\mathbf{V}(k) $$
(4)

where f(·) and h(·) are nonlinear functions, k is the time index, X(k) is the system state vector, Y(k) is the system measurement vector, and W(k) and V(k) are the process noise and measurement noise, respectively. Their statistical properties are:

$$ \left\{\begin{array}{l}E\left[\mathbf{W}(k)\right]=\mathbf{0},\kern0.5em E\left[\mathbf{V}(k)\right]=\mathbf{0}\\ {}E\left[\mathbf{W}(i)\mathbf{W}{(j)}^T\right]=\mathbf{Q}{\delta}_{ij},\kern0.5em \forall i,j\\ {}E\left[\mathbf{V}(i)\mathbf{V}{(j)}^T\right]=\mathbf{R}{\delta}_{ij},\kern0.5em \forall i,j\\ {}E\left[\mathbf{W}(i)\mathbf{V}{(j)}^T\right]=\mathbf{0}\end{array}\right. $$
(5)

Q(k) and R(k) are the process and measurement noise covariance matrices, respectively.

1. Initialization

$$ {\widehat{\mathbf{X}}}^a\left(0\left|0\right.\right)={\left[\widehat{\mathbf{X}}{\left(0\left|0\right.\right)}^T\begin{array}{c}\end{array}0\begin{array}{c}\end{array}0\right]}^T\vspace*{-1pc} $$
(6)
$$ {\mathbf{P}}_{\mathbf{XX}}^a\left(0\left|0\right.\right)=\left[\begin{array}{ccc}{\mathbf{P}}_{\mathbf{XX}}\left(0\left|0\right.\right)& 0& 0\\ {}0& \mathbf{Q}\left(0\left|0\right.\right)& 0\\ {}0& 0& \mathbf{R}\left(0\left|0\right.\right)\end{array}\right] $$
(7)
2. Proportional symmetric sampling

$$ \boldsymbol{\upchi} \left(k-1\left|k-1\right.\right)={\left[\begin{array}{c}\widehat{\mathbf{X}}\left(k-1\left|k-1\right.\right)\\ {}\widehat{\mathbf{X}}\left(k-1\left|k-1\right.\right)+\sqrt{\left(n+\lambda \right){\mathbf{P}}_{\mathbf{X}\mathbf{X}i}\left(k-1\left|k-1\right.\right)}\\ {}\widehat{\mathbf{X}}\left(k-1\left|k-1\right.\right)-\sqrt{\left(n+\lambda \right){\mathbf{P}}_{\mathbf{X}\mathbf{X}i}\left(k-1\left|k-1\right.\right)}\end{array}\right]}^T $$
(8)
$$ {\mathbf{P}}_{\mathbf{XX}}^a\left(k-1\left|k-1\right.\right)=\left[\begin{array}{ccc}{\mathbf{P}}_{\mathbf{XX}}\left(k-1\left|k-1\right.\right)& 0& 0\\ {}0& \mathbf{Q}\left(k-1\right)& 0\\ {}0& 0& \mathbf{R}\left(k-1\right)\end{array}\right] $$
(9)

\( \widehat{\mathbf{X}}\left(k-1\left|k-1\right.\right) \) is the filter value at time k − 1, and P XX(k − 1|k − 1) is its mean squared error. P XXi(k − 1|k − 1) is the ith column of P XX(k − 1|k − 1), i = 1, 2, ⋯, n. \( \lambda ={\alpha}^2\left(n+\kappa \right)-n \), where α and κ are scaling factors that generally take small values.

3. Time update equations

$$ \boldsymbol{\upchi} \left(k\left|k-1\right.\right)=f\left(\boldsymbol{\upchi} \left(k-1\left|k-1\right.\right),k-1\right)\vspace*{-2pc} $$
(10)
$$ \widehat{\mathbf{X}}\left(k\left|k-1\right.\right)=\sum \limits_{i=0}^{2n}{\mathbf{W}}_m^{(i)}{\boldsymbol{\upchi}}_i\left(k\left|k-1\right.\right)\vspace*{-2pc} $$
(11)
$$ \boldsymbol{\upmu} \left(k\left|k-1\right.\right)=h\left(\boldsymbol{\upchi} \left(k\left|k-1\right.\right),k-1\right)\vspace*{-2pc} $$
(12)
$$ \widehat{\mathbf{Y}}\left(k\left|k-1\right.\right)=\sum \limits_{i=0}^{2n}{\mathbf{W}}_m^{(i)}{\boldsymbol{\upmu}}_i\left(k\left|k-1\right.\right) $$
(13)

In Eq. (13), μ i(k|k − 1) is the ith column of μ(k|k − 1), i = 0, 1, ⋯, 2n, and \( {\mathbf{W}}_m^{(i)} \) is the weighting value, with \( {\mathbf{W}}_m^{(0)}=\lambda /\left(n+\lambda \right) \) and \( {\mathbf{W}}_m^{(i)}=1/\left[2\left(n+\lambda \right)\right] \) for i = 1, 2, ⋯, 2n:

$$ \begin{array}{l}{\mathbf{P}}_{\mathbf{X}\mathbf{X}}\left(k\left|k-1\right.\right)=\sum \limits_{i=0}^{2n}{\mathbf{W}}_c^{(i)}\left[\left({\boldsymbol{\upchi}}_i\left(k\left|k-1\right.\right)-\widehat{\mathbf{X}}\left(k\left|k-1\right.\right)\right)\right.\\ {\quad }\left.\qquad\qquad\qquad\times {\left({\boldsymbol{\upchi}}_i\left(k\left|k-1\right.\right)-\widehat{\mathbf{X}}\left(k\left|k-1\right.\right)\right)}^T\right]\end{array} $$
(14)

χ i(k| k − 1) is the ith column of χ(k| k − 1), i = 0, 1, ⋯, 2n, and \( {\mathbf{W}}_c^{(i)} \) is the covariance weighting value.

\( {\mathbf{W}}_c^{(0)}=\lambda /\left(n+\lambda \right)+\left(1-{\alpha}^2+\beta \right) \) and \( {\mathbf{W}}_c^{(i)}=1/\left[2\left(n+\lambda \right)\right],i=1,2,\cdots, 2n \). β incorporates prior knowledge of the distribution (it is usually set to 2 for a Gaussian distribution).

4. Measurement update equations

$$\begin{array}{l} {\mathbf{P}}_{\mathbf{X}\mathbf{Y}}\left(k\left|k-1\right.\right)=\sum \limits_{i=0}^{2n}{\mathbf{W}}_c^{(i)}\left[\left({\boldsymbol{\upchi}}_i\left(k\left|k-1\right.\right)-\widehat{\mathbf{X}}\left(k\left|k-1\right.\right)\right)\right.\\ {}\quad \left.\qquad\qquad\qquad\times {\left({\boldsymbol{\upmu}}_i\left(k\left|k-1\right.\right)-\widehat{\mathbf{Y}}\left(k\left|k-1\right.\right)\right)}^T\right]\end{array}\vspace*{-1pc} $$
(15)
$$ \begin{array}{l}{\mathbf{P}}_{\mathbf{Y}\mathbf{Y}}\left(k\left|k-1\right.\right)=\sum \limits_{i=0}^{2n}{\mathbf{W}}_c^{(i)}\left[\left({\boldsymbol{\upmu}}_i\left(k\left|k-1\right.\right)-\widehat{\mathbf{Y}}\left(k\left|k-1\right.\right)\right)\right.\\ {}\qquad\qquad\qquad\quad\times \left.{\left({\boldsymbol{\upmu}}_i\left(k\left|k-1\right.\right)-\widehat{\mathbf{Y}}\left(k\left|k-1\right.\right)\right)}^T\right]\end{array}\vspace*{-1pc} $$
(16)
$$ \mathbf{K}(k)={\mathbf{P}}_{\mathbf{XY}}\left(k\left|k-1\right.\right){\mathbf{P}}_{\mathbf{YY}}^{-1}\left(k\left|k-1\right.\right)\vspace*{-1pc} $$
(17)
$$ \widehat{\mathbf{X}}\left(k\left|k\right.\right)=\widehat{\mathbf{X}}\left(k\left|k-1\right.\right)+\mathbf{K}(k)\left(\mathbf{Y}(k)-\widehat{\mathbf{Y}}\left(k\left|k-1\right.\right)\right)\vspace*{-1pc} $$
(18)
$$ {\mathbf{P}}_{\mathbf{XX}}\left(k\left|k\right.\right)={\mathbf{P}}_{\mathbf{XX}}\left(k\left|k-1\right.\right)-\mathbf{K}(k){\mathbf{P}}_{\mathbf{YY}}\left(k\left|k-1\right.\right){\mathbf{K}}^T(k) $$
(19)

where P XX, P XY, and P YY are the covariance matrices of X with X, X with Y, and Y with Y, respectively; K is the filter gain; and \( \widehat{\mathbf{X}}\left(k\left|k\right.\right) \) is the filter value at time k.
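The UKF cycle above can be sketched compactly as follows. This is a minimal additive-noise (non-augmented) variant of Eqs. (8)-(19) rather than the augmented-state form of Eqs. (6)-(7): Q and R are added explicitly, and sigma points are redrawn around the prediction before the measurement update. All names are illustrative.

```python
# One UKF cycle (sketch): proportional symmetric sampling, time update,
# measurement update. Additive-noise form; names are assumptions.
import numpy as np

def sigma_points(x, P, lam):
    n = x.size
    S = np.linalg.cholesky((n + lam) * P)     # square root of (n+lam)P
    return np.vstack([x, x + S.T, x - S.T])   # 2n+1 points, Eq. (8)

def ukf_step(x, P, y, f, h, Q, R, alpha=0.1, kappa=0.0, beta=2.0):
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    # weights of Sec. 3.1
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    # time update, Eqs. (10)-(14)
    chi = np.array([f(s) for s in sigma_points(x, P, lam)])
    x_pred = Wm @ chi
    P_pred = Q + sum(Wc[i] * np.outer(chi[i] - x_pred, chi[i] - x_pred)
                     for i in range(2 * n + 1))
    # redraw sigma points around the prediction, then Eqs. (15)-(19)
    chi2 = sigma_points(x_pred, P_pred, lam)
    mu = np.array([h(s) for s in chi2])
    y_pred = Wm @ mu
    Pyy = R + sum(Wc[i] * np.outer(mu[i] - y_pred, mu[i] - y_pred)
                  for i in range(2 * n + 1))
    Pxy = sum(Wc[i] * np.outer(chi2[i] - x_pred, mu[i] - y_pred)
              for i in range(2 * n + 1))
    K = Pxy @ np.linalg.inv(Pyy)                        # Eq. (17)
    x_new = x_pred + K @ (np.asarray(y, float) - y_pred)  # Eq. (18)
    P_new = P_pred - K @ Pyy @ K.T                      # Eq. (19)
    return x_new, P_new
```

For linear f and h, the sigma-point sums reproduce the exact means and covariances, so this step coincides with the linear Kalman filter.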

3.2 The Improved UKF

By incorporating the CIA, the UKF can obtain a better estimate \( {\widehat{\mathbf{X}}}_{\mathrm{improved}}\left(k\left|k\right.\right) \) without the covariance information between the real value X(k − 1) and the estimate \( \widehat{\mathbf{X}}\left(k\left|k\right.\right) \), thereby improving the UKF accuracy. The equations are as follows:

$$ {\mathbf{P}}^{-1}=w{\mathbf{P}}_{\mathbf{XX}}^{-1}\left(k\left|k\right.\right)+\left(1-w\right){\mathbf{P}}_{\mathbf{XX}}^{-1}\left(k-1\left|k-1\right.\right) $$
(20)
$$ {\mathbf{P}}^{-1}{\widehat{\mathbf{X}}}_{\mathrm{improved}}\left(k\left|k\right.\right)\!=w{\mathbf{P}}_{\mathbf{X}\mathbf{X}}^{-1}\left(k\left|k\right.\right)\widehat{\mathbf{X}}\left(k\left|k\right.\right){+}\left(1-w\right){\mathbf{P}}_{\mathbf{X}\mathbf{X}}^{-1}\left(k-1\!\left|k\right.-1\right)\mathbf{X}\left(k{-}1\right) $$
(21)

\( {\widehat{\mathbf{X}}}_{\mathrm{improved}}\left(k\left|k\right.\right) \) is the improved filter value, and its covariance matrix is P. w is the weighting factor assigned to \( \widehat{\mathbf{X}}\left(k\left|k\right.\right) \) and X(k − 1).

At time k, the real value X(k − 1) is known, and the filter value \( \widehat{\mathbf{X}}\left(k\left|k\right.\right) \) is computed by the UKF. Owing to model mismatch, noise, and interference in the UKF process, the accuracy of \( \widehat{\mathbf{X}}\left(k\left|k\right.\right) \) is reduced.

The real value X(k − 1) and the filter value \( \widehat{\mathbf{X}}\left(k\left|k\right.\right) \) are correlated in practice, but the correlation is unknown in the actual situation. The proposed method avoids computing the cross-covariance between \( \widehat{\mathbf{X}}\left(k\left|k\right.\right) \) and X(k − 1) while still improving the accuracy of \( \widehat{\mathbf{X}}\left(k\left|k\right.\right) \): \( {\widehat{\mathbf{X}}}_{\mathrm{improved}}\left(k\left|k\right.\right) \) is obtained from (20) and (21) without the cross-covariance. The procedure of the improved UKF is shown in Fig. 2.

Fig. 2

Algorithm flow chart

Figure 2 shows the algorithm flow chart: \( \widehat{\mathbf{X}}\left(k\left|k\right.\right) \) and P XX(k| k) are first computed by the UKF, and then \( {\widehat{\mathbf{X}}}_{\mathrm{improved}}\left(k\left|k\right.\right) \) is obtained by the CIA.
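The CIA correction step of Eqs. (20)-(21) can be sketched as a small standalone routine. The weight w and the covariance paired with the previous true state are illustrative assumptions; names are not from the paper.

```python
# CIA correction step (Eqs. (20)-(21)): fuse the UKF output with the
# previous true state without their cross-covariance. Names are assumptions.
import numpy as np

def cia_improve(x_filt, P_filt, x_prev_true, P_prev, w=0.5):
    # Eq. (20): fused information matrix
    info = w * np.linalg.inv(P_filt) + (1.0 - w) * np.linalg.inv(P_prev)
    P = np.linalg.inv(info)
    # Eq. (21): fused state
    x_improved = P @ (w * np.linalg.inv(P_filt) @ x_filt
                      + (1.0 - w) * np.linalg.inv(P_prev) @ x_prev_true)
    return x_improved, P
```

With equal covariances and w = 0.5, the fusion reduces to a plain average of the two states, which is a useful sanity check.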

In fact, several methods [9,10,11] are adopted to keep the filter results stable in UKF algorithms, for example, the U-D decomposition filter and the singular value decomposition filter. In the U-D decomposition filter, the covariance matrix P is decomposed as UDU^T, where U is an upper triangular matrix and D is a diagonal matrix; hence UD^{1/2} is equivalent to \( {\mathbf{P}}^{1/2} \). In the singular value decomposition filter, V is the eigenvector matrix of P and D is a diagonal matrix whose diagonal elements are the singular values of P; therefore VD^{1/2} is also equivalent to \( {\mathbf{P}}^{1/2} \). These two algorithms keep P positive definite and improve the robustness of the UKF algorithms.

These two algorithms require only minor changes to the UKF: \( {\mathbf{P}}_{\mathbf{XX}}^a\left(k-1\left|k-1\right.\right) \) is decomposed. For the U-D decomposition filter, Eqs. (8) and (9) become Eqs. (22) and (23).

$$ {\mathbf{P}}_{\mathbf{XX}}\left(k-1\left|k-1\right.\right)=\mathbf{U}\left(k-1\left|k-1\right.\right)\mathbf{D}\left(k-1\left|k-1\right.\right)\mathbf{U}{\left(k-1\left|k-1\right.\right)}^T $$
(22)
$$ \begin{array}{l}\boldsymbol{\upchi} \left(k-1\left|k-1\right.\right)\\ {}={\left[\begin{array}{c}\widehat{\mathbf{X}}\left(k-1\left|k-1\right.\right)\\ {}\widehat{\mathbf{X}}\left(k-1\left|k-1\right.\right)+{\mathbf{U}}_i\left(k-1\left|k-1\right.\right)\sqrt{\left(n+\lambda \right){\mathbf{D}}_i\left(k-1\left|k-1\right.\right)}\\ {}\widehat{\mathbf{X}}\left(k-1\left|k-1\right.\right)-{\mathbf{U}}_i\left(k-1\left|k-1\right.\right)\sqrt{\left(n+\lambda \right){\mathbf{D}}_i\left(k-1\left|k-1\right.\right)}\end{array}\right]}^T\end{array} $$
(23)

In the singular value decomposition filter, change equations are:

$$ {\mathbf{P}}_{\mathbf{XX}}\left(k-1\left|k-1\right.\right)=\mathbf{V}\left(k-1\left|k-1\right.\right)\mathbf{D}\left(k-1\left|k-1\right.\right)\mathbf{V}{\left(k-1\left|k-1\right.\right)}^T $$
(24)
$$ \begin{array}{l}\boldsymbol{\upchi} \left(k-1\left|k-1\right.\right)\\ {}={\left[\begin{array}{c}\widehat{\mathbf{X}}\left(k-1\left|k-1\right.\right)\\ {}\widehat{\mathbf{X}}\left(k-1\left|k-1\right.\right)+{\mathbf{V}}_i\left(k-1\left|k-1\right.\right)\sqrt{\left(n+\lambda \right){\mathbf{D}}_i\left(k-1\left|k-1\right.\right)}\\ {}\widehat{\mathbf{X}}\left(k-1\left|k-1\right.\right)-{\mathbf{V}}_i\left(k-1\left|k-1\right.\right)\sqrt{\left(n+\lambda \right){\mathbf{D}}_i\left(k-1\left|k-1\right.\right)}\end{array}\right]}^T\end{array} $$
(25)

V i(k − 1|k − 1), U i(k − 1|k − 1), and D i(k − 1|k − 1) are the ith columns of V(k − 1|k − 1), U(k − 1|k − 1), and D(k − 1|k − 1), i = 1, 2, ⋯, n. Hence, \( \widehat{\mathbf{X}}\left(k\left|k\right.\right) \), P XX(k|k), and X(k − 1) can be obtained, and the CIA is again used to improve the accuracy.
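The SVD-based square root of Eqs. (24)-(25) can be sketched as follows: for a symmetric positive semi-definite P, the SVD gives P = V D V^T, so the columns of V D^{1/2} replace the Cholesky factor when generating sigma points. The function name is illustrative.

```python
# SVD-based sigma-point generation (Eqs. (24)-(25)); a sketch with
# illustrative names, assuming P is symmetric positive semi-definite.
import numpy as np

def svd_sigma_points(x, P, lam):
    n = x.size
    # P = V diag(d) V^T for symmetric PSD P
    V, d, _ = np.linalg.svd(P)
    S = V @ np.diag(np.sqrt((n + lam) * d))   # columns: V_i * sqrt((n+lam) D_i)
    return np.vstack([x, x + S.T, x - S.T])   # 2n+1 sigma points
```

A quick check is that the weighted outer products of the non-central points reconstruct P, which holds by construction since S S^T = (n + λ)P.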

4 The Improved Kalman Filter (KF)

The linear system model is:

$$ \left\{\begin{array}{c}\mathbf{X}\left(k+1\right)=\boldsymbol{\Phi}(k)\mathbf{X}(k)+\boldsymbol{\Gamma}(k)\mathbf{W}(k)\\ {}\mathbf{Y}(k)=\mathbf{H}(k)\mathbf{X}(k)+\mathbf{V}(k)\end{array}\right. $$
(26)

X(k) is the system state vector, and Y(k) is the system measurement vector. W(k) and V(k) are the process noise and measurement noise, with statistical properties as in (5). Φ(k), Γ(k), and H(k) are the model matrices determined by the object.

1. Time update equations

$$ \left\{\begin{array}{l}\widehat{\mathbf{X}}\left(k\left|k-1\right.\right)=\boldsymbol{\Phi}(k)\widehat{\mathbf{X}}\left(k-1\left|k-1\right.\right)\\ {}\mathbf{P}\left(k\left|k-1\right.\right)=\boldsymbol{\Phi}(k)\mathbf{P}\left(k-1\left|k-1\right.\right)\boldsymbol{\Phi}{(k)}^T+\boldsymbol{\Gamma}(k)\mathbf{Q}(k)\boldsymbol{\Gamma}{(k)}^T\end{array}\right. $$
(27)

P is the covariance matrix of \( \widehat{\mathbf{X}} \).

2. Measurement update equations

$$ \left\{\begin{array}{l}\mathbf{K}(k)=\mathbf{P}\left(k\left|k-1\right.\right)\mathbf{H}{(k)}^T{\left(\mathbf{H}(k)\mathbf{P}\left(k\left|k-1\right.\right)\mathbf{H}{(k)}^T+\mathbf{R}(k)\right)}^{-1}\\ {}\widehat{\mathbf{X}}\left(k\left|k\right.\right)=\widehat{\mathbf{X}}\left(k\left|k-1\right.\right)+\mathbf{K}(k)\left(\mathbf{Y}(k)-\mathbf{H}(k)\widehat{\mathbf{X}}\left(k\left|k-1\right.\right)\right)\\ {}\mathbf{P}\left(k\left|k\right.\right)=\left(\mathbf{I}-\mathbf{K}(k)\mathbf{H}(k)\right)\mathbf{P}\left(k\left|k-1\right.\right)\end{array}\right. $$
(28)

At time k, K(k) is the filter gain and \( \widehat{\mathbf{X}}\left(k\left|k\right.\right) \) is the filter value.

From the model (26) and the filter equations (27) and (28), noise and interference also exist in the KF process. Hence, the accuracy of the KF can also be improved by the presented method.

According to the KF, \( \widehat{\mathbf{X}}\left(k\left|k\right.\right) \), P(k|k), and X(k − 1) are obtained. The presented method is also used in the KF.

$$ {\mathbf{P}}_{KF}^{-1}=w{\mathbf{P}}^{-1}\left(k\left|k\right.\right)+\left(1-w\right){\mathbf{P}}^{-1}\left(k-1\left|k-1\right.\right) $$
(29)
$$ {\mathbf{P}}_{KF}^{-1}{\widehat{\mathbf{X}}}_{\mathrm{improved}}\left(k\left|k\right.\right)=w{\mathbf{P}}^{-1}\!\left(k\left|k\right.\right)\widehat{\mathbf{X}}\left(k\left|k\right.\right){+}\left(1-w\right){\mathbf{P}}^{-1}\!\left(k-1\left|k\right.-1\right)\mathbf{X}\left(k-1\right) $$
(30)

\( {\mathbf{P}}_{KF}^{-1} \) is the fused covariance matrix; \( {\widehat{\mathbf{X}}}_{\mathrm{improved}}\left(k\left|k\right.\right) \) is the improved filter value.
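The improved KF of Eqs. (27)-(30) can be sketched as one standard KF cycle followed by the CIA fusion with the previous true state. The weight w and the use of P(k − 1|k − 1) as the covariance paired with X(k − 1) are illustrative assumptions.

```python
# Improved KF sketch: KF cycle (Eqs. (27)-(28)) plus CIA fusion
# (Eqs. (29)-(30)). Names and the fixed w are assumptions.
import numpy as np

def kf_step(x, P, y, Phi, Gamma, H, Q, R):
    # time update, Eq. (27)
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Gamma @ Q @ Gamma.T
    # measurement update, Eq. (28)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

def improve(x_filt, P_filt, x_prev_true, P_prev, w=0.5):
    # Eqs. (29)-(30): fuse the KF output with the previous true state
    info = w * np.linalg.inv(P_filt) + (1.0 - w) * np.linalg.inv(P_prev)
    P_kf = np.linalg.inv(info)
    x_improved = P_kf @ (w * np.linalg.inv(P_filt) @ x_filt
                         + (1.0 - w) * np.linalg.inv(P_prev) @ x_prev_true)
    return x_improved, P_kf
```

With identity system matrices, the KF step reduces to a scalar-like update along each axis, which makes the behavior easy to verify by hand.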

5 Simulation

In this simulation, a position-velocity motion model is adopted, and two sensors are used to track one target. The model is given in (31) and (32).

$$ \mathbf{X}(k)=\left(\begin{array}{cccc}1& 0& \Delta t& 0\\ {}0& 1& 0& \Delta t\\ {}0& 0& 1& 0\\ {}0& 0& 0& 1\end{array}\right)\mathbf{X}\left(k-1\right)+\mathbf{W}\left(k-1\right) $$
(31)
$$ \theta {(k)}^i=\arctan \left(\frac{y_k-{s}_y^i}{x_k-{s}_x^i}\right)+V{(k)}^i $$
(32)

X(k) is the state vector, \( \mathbf{X}(k)={\left(\begin{array}{cccc}{x}_k& {y}_k& {\dot{x}}_k& {\dot{y}}_k\end{array}\right)}^T \), with initial value \( {\left(\begin{array}{cccc}0& 0& 1& 0\end{array}\right)}^T \). x k and y k are the horizontal and vertical positions, and \( {\dot{x}}_k,{\dot{y}}_k \) are the corresponding velocities. W(k − 1) and V(k)i are Gaussian noises; W(k − 1) has zero mean and covariance \( \left[\begin{array}{cccc}\frac{1}{30}\Delta {t}^3& 0& \frac{1}{20}\Delta {t}^2& 0\\ {}0& \frac{1}{30}\Delta {t}^3& 0& \frac{1}{20}\Delta {t}^2\\ {}\frac{1}{20}\Delta {t}^2& 0& \frac{1}{10}\Delta t& 0\\ {}0& \frac{1}{20}\Delta {t}^2& 0& \frac{1}{10}\Delta t\end{array}\right] \), and \( {V}_k^i\sim N\left(0,{0.05}^2\right) \). \( {s}_x^i \) and \( {s}_y^i \) are the coordinates of the ith sensor, located at \( \left({s}_x^1,{s}_y^1\right)=\left(1,1\right) \) and \( \left({s}_x^2,{s}_y^2\right)=\left(-1,-2\right) \), respectively. The time interval Δt is 0.01 s, and the simulation time is 5 s. The method of [11] is used for comparison.
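The simulation setup of Eqs. (31)-(32) can be sketched as below: a constant-velocity state propagated at Δt = 0.01 s and bearing measurements from the two sensors. For simplicity this sketch omits the process noise in the propagation (a deterministic reference trajectory) and uses `arctan2` for quadrant safety; the seed and names are assumptions.

```python
# Simulation sketch for Eqs. (31)-(32): CV dynamics plus bearing-only
# measurements from two fixed sensors. Process noise omitted here.
import numpy as np

rng = np.random.default_rng(0)
dt, steps = 0.01, 500
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
sensors = [(1.0, 1.0), (-1.0, -2.0)]          # sensor coordinates from the text

x = np.array([0.0, 0.0, 1.0, 0.0])            # (x, y, x_dot, y_dot)
states, bearings = [], []
for _ in range(steps):
    x = F @ x                                  # Eq. (31), noise-free propagation
    states.append(x.copy())
    # Eq. (32): bearing from each sensor plus N(0, 0.05^2) noise
    z = [np.arctan2(x[1] - sy, x[0] - sx) + 0.05 * rng.standard_normal()
         for sx, sy in sensors]
    bearings.append(z)
states = np.array(states)
bearings = np.array(bearings)
```

With unit horizontal velocity, the target ends near x = 5 after 5 s, consistent with the stated simulation length.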

In Fig. 3, the two black triangles mark the sensor locations. The black line is the real trajectory, the red line is the trajectory estimated by the compared method, and the blue line is the trajectory obtained by the presented method. As shown in Fig. 3, the presented method stays closer to the real trajectory.

Fig. 3

Tracking trajectory between the UKF and the presented method

In Fig. 4, the vertical axis is the MSE on the x-axis and the horizontal axis is time; in Fig. 5, the vertical axis is the MSE on the y-axis. In both figures, the red line is the MSE of the presented method and the blue line is the MSE of the UKF. From Figs. 4 and 5, the precision is improved by more than half at some times, for example, around times 120, 230, and 400 on the x-axis and times 100, 190, and 380 on the y-axis. Overall, the accuracy of the proposed method is higher than that of the compared method.

Fig. 4

MSE on the x-axis

Fig. 5

MSE on the y-axis

In Fig. 6, the MSE over Monte Carlo runs is used to compare the performance of the two methods; the number of Monte Carlo simulations is 500. The expression is:

$$ \mathrm{MSE}(k)=\sqrt{\frac{1}{M}\sum \limits_{n=1}^M\left({\left({x}_n\left(k\left|k\right.\right)-x(k)\right)}^2+{\left({y}_n\left(k\left|k\right.\right)-y(k)\right)}^2\right)} $$
(33)

where M = 500, \( {x}_n\left(k\left|k\right.\right) \) and \( {y}_n\left(k\left|k\right.\right) \) are the estimates in the nth Monte Carlo run, and k runs from 1 to 500.

Fig. 6

MSE

In Fig. 6, the MSE of the presented method is lower than that of the compared method, which shows that the proposed method is more accurate.

5.1 The Second Model

$$ \mathbf{X}\left(k+1\right)=\left[\begin{array}{cc}1& 1\\ {}-1& -1\end{array}\right]\mathbf{X}(k)+\left[\begin{array}{c}1\\ {}1\end{array}\right]W(k) $$
(34)
$$ \mathbf{Y}(k)=\left[\begin{array}{cc}1& 1\end{array}\right]\mathbf{X}(k)+V(k) $$
(35)

\( \mathbf{X}(k)={\left[\begin{array}{cc}{X}_1(k)& {X}_2(k)\end{array}\right]}^T \), with initial value \( {\left[\begin{array}{cc}0& 0\end{array}\right]}^T \). W(k) and V(k) are zero-mean independent white noises with variances Q = 0.7 and R = 0.9. The other matrices are:

$$ \boldsymbol{\Phi} (k)=\left[\begin{array}{cc}1& 1\\ {}-1& -1\end{array}\right],\kern0.5em \boldsymbol{\Gamma} (k)=\left[\begin{array}{c}1\\ {}1\end{array}\right],\kern0.5em \mathbf{H}(k)=\left[\begin{array}{cc}1& 1\end{array}\right] $$

Because the second model is a linear system, KF is used.
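A standard KF run on the second model (34)-(35) can be sketched as follows, with the given Φ, Γ, H and Q = 0.7, R = 0.9; the driving noises are sampled here, and the seed, step count, and variable names are assumptions.

```python
# KF on the second (linear) model, Eqs. (34)-(35); a sketch with assumed
# seed and horizon. Q = 0.7 (process), R = 0.9 (measurement).
import numpy as np

rng = np.random.default_rng(1)
Phi = np.array([[1.0, 1.0], [-1.0, -1.0]])
Gamma = np.array([[1.0], [1.0]])
H = np.array([[1.0, 1.0]])
Q, R = 0.7, 0.9

x_true = np.zeros(2)
x_hat, P = np.zeros(2), np.eye(2)
for _ in range(100):
    # simulate the true system, Eqs. (34)-(35)
    x_true = Phi @ x_true + Gamma.ravel() * np.sqrt(Q) * rng.standard_normal()
    y = H @ x_true + np.sqrt(R) * rng.standard_normal()
    # time update, Eq. (27)
    x_pred = Phi @ x_hat
    P_pred = Phi @ P @ Phi.T + Q * (Gamma @ Gamma.T)
    # measurement update, Eq. (28)
    S = H @ P_pred @ H.T + R
    K = (P_pred @ H.T) / S
    x_hat = x_pred + (K * (y - H @ x_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P_pred
```

Since Φ is nilpotent (Φ² = 0), the covariance recursion settles quickly and P stays bounded and positive semi-definite throughout the run.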

In Figs. 7 and 8, the red line is the KF estimate and the blue line is the value obtained by the presented method. As shown in Figs. 7 and 8, the accuracy of the presented method is clearly higher than that of the KF, which again indicates that the proposed method is effective.

Fig. 7

X1(k)

Fig. 8

X2(k)

6 Conclusion

In this paper, an improved UKF based on the CIA is proposed. Using the real value at time k − 1 and the filter value at time k, the method effectively improves the estimation accuracy. More importantly, the correlation information between the real value and the filter value is not required.