
1 Introduction

The Global Navigation Satellite System (GNSS) is developing rapidly, and its applications have reached nearly every aspect of daily life. As navigation terminals are deployed in more and more fields, the receiving environments they must cope with become increasingly complex, such as the heavily obstructed environments of cities, mountains, and canyons, and the high-dynamic environments of airborne and projectile-borne platforms [1]. The Kalman filter is the typical filtering algorithm in satellite navigation information processing; it combines a motion model with an observation model to estimate the state of the carrier [2]. However, it requires linearization of the positioning equations, which may introduce errors. Moreover, the Kalman filter needs accurate prior information about the state model and the observation model to achieve an optimal estimate, whereas the empirical models used in practice can never match reality exactly [3]. In a complex environment, abnormal observation errors can therefore cause unstable filtering results and poor precision. To adapt the Kalman filter to practical applications, algorithms such as the extended Kalman filter and the unscented Kalman filter have been proposed, but they still may not cope well with navigation and positioning in difficult environments [4]. Neural networks offer strong capabilities for denoising, learning, adaptation, and approximating complex mappings [5]. To improve filtering precision, this paper proposes an algorithm that compensates the dynamic model error with a neural network: in the estimation stage of the filter, the dynamic model error is corrected by an RBF neural network, which effectively suppresses the contribution of abnormal dynamic model disturbances to the navigation solution.

2 Standard Kalman Filter Algorithm

Assume that the state equation and the observation equation of the system are known and given by

$$ \varvec{X}_{k} =\varvec{\varPhi}_{k,k - 1} \varvec{X}_{k - 1} + \varvec{W}_{k} $$
(1.1)
$$ \varvec{L}_{k} = \varvec{A}_{k} \varvec{X}_{k} + \varvec{e}_{k} $$
(1.2)

Here the subscript k denotes the epoch. \( \varvec{X}_{k} \) is the state vector, \( \varvec{\varPhi}_{k,k - 1} \) is the state transition (coefficient) matrix, \( \varvec{W}_{k} \) is the Gaussian white process noise vector, \( \varvec{L}_{k} \) is the observation vector, and \( \varvec{e}_{k} \) is the observation noise vector.

The standard Kalman filter algorithm is as follows.

The prediction state vector \( \bar{\varvec{X}}_{k} \) and the prediction state covariance matrix \( \varvec{\varSigma}_{{\bar{\varvec{X}}_{k} }} \) are obtained by using the state estimation vector \( \hat{\varvec{X}}_{k - 1} \) at time k − 1, the state estimation vector covariance matrix \( \varvec{\varSigma}_{{\hat{\varvec{X}}_{k - 1} }} \), and the state transition matrix \( \varvec{\varPhi}_{k,k - 1} \).

$$ \bar{\varvec{X}}_{k} =\varvec{\varPhi}_{k,k - 1} \hat{\varvec{X}}_{k - 1} $$
(1.3)
$$ \varvec{\varSigma}_{{\bar{\varvec{X}}_{k} }} =\varvec{\varPhi}_{k,k - 1}\varvec{\varSigma}_{{\hat{\varvec{X}}_{k - 1} }}\varvec{\varPhi}_{k,k - 1}^{\text{T}} +\varvec{\varSigma}_{{\varvec{W}_{k} }} $$
(1.4)

The innovation vector \( \bar{\varvec{V}}_{k} \) and the innovation vector covariance matrix \( \varvec{\varSigma}_{{\bar{\varvec{V}}_{k} }} \) are calculated using the prediction state vector \( \bar{\varvec{X}}_{k} \), the current observation vector \( \varvec{L}_{k} \), and the observation design matrix \( \varvec{A}_{k} \).

$$ \bar{\varvec{V}}_{k} = \varvec{A}_{k} \bar{\varvec{X}}_{k} - \varvec{L}_{k} $$
(1.5)
$$ \varvec{\varSigma}_{{\bar{\varvec{V}}_{k} }} = \varvec{A}_{k}\varvec{\varSigma}_{{\bar{\varvec{X}}_{k} }} \varvec{A}_{k}^{\text{T}} +\varvec{\varSigma}_{k} $$
(1.6)

Then, the gain matrix is calculated.

$$ \varvec{K}_{k} =\varvec{\varSigma}_{{\bar{\varvec{X}}_{k} }} \varvec{A}_{k}^{\text{T}}\varvec{\varSigma}_{{\bar{\varvec{V}}_{k} }}^{ - 1} $$
(1.7)

Finally, the current state estimate is obtained and a new state estimation vector covariance matrix is calculated.

$$ \hat{\varvec{X}}_{k} = \bar{\varvec{X}}_{k} - \varvec{K}_{k} \bar{\varvec{V}}_{k} $$
(1.8)
$$ \varvec{\varSigma}_{{\hat{\varvec{X}}_{k} }} = \left( {\varvec{I} - \varvec{K}_{k} \varvec{A}_{k} } \right)\varvec{\varSigma}_{{\bar{\varvec{X}}_{k} }} \left( {\varvec{I} - \varvec{A}_{k}^{\text{T}} \varvec{K}_{k}^{\text{T}} } \right) + \varvec{K}_{k}\varvec{\varSigma}_{k} \varvec{K}_{k}^{\text{T}} $$
(1.9)
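As a concrete illustration, the following minimal sketch (Python/NumPy; all variable names are illustrative and not part of the original derivation) implements one filter cycle corresponding to Eqs. (1.3)–(1.9):

```python
import numpy as np

def kalman_step(x_est, P_est, L, Phi, A, Q, R):
    """One standard Kalman filter cycle, Eqs. (1.3)-(1.9).
    x_est, P_est: state estimate and covariance at epoch k-1
    L: observation vector at epoch k
    Phi: state transition matrix; A: observation design matrix
    Q, R: process and observation noise covariance matrices."""
    # Prediction, Eqs. (1.3)-(1.4)
    x_pred = Phi @ x_est
    P_pred = Phi @ P_est @ Phi.T + Q
    # Innovation and its covariance, Eqs. (1.5)-(1.6)
    v = A @ x_pred - L
    S = A @ P_pred @ A.T + R
    # Gain, Eq. (1.7)
    K = P_pred @ A.T @ np.linalg.inv(S)
    # Update, Eqs. (1.8)-(1.9), Joseph form
    I = np.eye(P_pred.shape[0])
    x_new = x_pred - K @ v
    P_new = (I - K @ A) @ P_pred @ (I - K @ A).T + K @ R @ K.T
    return x_new, P_new, K, v
```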

3 Error Compensation Algorithms for Dynamic Models Based on Neural Networks

3.1 Algorithm Principle

According to Eqs. (1.5) and (1.8), the standard Kalman filter solution can be rewritten as

$$ \hat{\varvec{X}}_{k} = \bar{\varvec{X}}_{k} - \varvec{K}_{k} \left( {\varvec{A}_{k} \bar{\varvec{X}}_{k} - \varvec{L}_{k} } \right) $$
(1.10)

It is assumed that, after error compensation, the state prediction vector \( \bar{\varvec{X}}_{k} \) of the dynamic model yields the true state vector \( \varvec{X}_{k} \) of the maneuvering carrier at time k. Define the error compensation vector as \( {\varvec{\Delta}}\varvec{Er} \); then

$$ \begin{aligned} \varvec{X}_{k} = & \left( {\bar{\varvec{X}}_{k} + {\varvec{\Delta}}\varvec{Er}} \right) - \varvec{K}_{k} \left( {\varvec{A}_{k} \left( {\bar{\varvec{X}}_{k} + {\varvec{\Delta}}\varvec{Er}} \right) - \varvec{L}_{k} } \right) \\ = & \left( {\varvec{I} - \varvec{K}_{k} \varvec{A}_{k} } \right){\varvec{\Delta}}\varvec{Er} - \varvec{K}_{k} \bar{\varvec{V}}_{k} + \bar{\varvec{X}}_{k} \\ \end{aligned} $$
(1.11)
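The algebraic rearrangement in Eq. (1.11) can be verified numerically; the short sketch below (Python/NumPy, arbitrary random matrices) confirms that the expanded and factored forms coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 6                                   # state and observation dimensions (arbitrary)
A = rng.standard_normal((m, n))               # observation design matrix
K = rng.standard_normal((n, m))               # filter gain (arbitrary values for the check)
x_bar = rng.standard_normal(n)                # predicted state
dEr = rng.standard_normal(n)                  # compensation vector
L = rng.standard_normal(m)                    # observation vector
v_bar = A @ x_bar - L                         # innovation, Eq. (1.5)

lhs = (x_bar + dEr) - K @ (A @ (x_bar + dEr) - L)           # compensated Eq. (1.10)
rhs = (np.eye(n) - K @ A) @ dEr - K @ v_bar + x_bar         # Eq. (1.11)
assert np.allclose(lhs, rhs)
```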

From Eq. (1.11), the relationship between the error compensation vector \( {\varvec{\Delta}}\varvec{Er} \) and the product \( \varvec{K}_{k} \bar{\varvec{V}}_{k} \) of the Kalman filter gain and the innovation vector can be regarded as a nonlinear mapping with multidimensional input and output:

$$ {\varvec{\Delta}}\varvec{Er} = F\left( {\varvec{K}_{k} \bar{\varvec{V}}_{k} } \right) $$
(1.12)

Therefore, a neural network can be trained to approximate the nonlinear mapping F, learning the relationship between \( \varvec{K}_{k} \bar{\varvec{V}}_{k} \) and \( {\varvec{\Delta}}\varvec{Er} \).
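A minimal sketch of such an approximation is shown below (Python/NumPy). It uses Gaussian radial basis functions with the training inputs as centers and solves for the output weights by linear least squares; this illustrates the idea only and is not the exact network configuration used in the experiments.

```python
import numpy as np

def rbf_design(X, centers, spread=1.0):
    """Gaussian RBF activations for each (sample, center) pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * spread ** 2))

def rbf_fit(X_train, Y_train, spread=1.0):
    """X_train: N x n inputs (samples of K_k @ V_bar_k);
    Y_train: N x n targets (samples of Delta_Er).
    Returns the centers and the output weight matrix."""
    Phi = rbf_design(X_train, X_train, spread)
    W, *_ = np.linalg.lstsq(Phi, Y_train, rcond=None)
    return X_train, W

def rbf_predict(X, centers, W, spread=1.0):
    """Predict Delta_Er for new inputs K_k @ V_bar_k."""
    return rbf_design(X, centers, spread) @ W
```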

In the prediction phase of the network, the error compensation vector is predicted and the dynamic model is compensated, as shown in Fig. 1.

Fig. 1. Compensation flow of the neural network dynamic model.

The innovation vector \( \bar{\varvec{V}}_{k} \), also called the prediction residual vector, mainly reflects the dynamic model error. The Kalman filter gain \( \varvec{K}_{k} \) is determined by the state estimation covariance matrix, the observation design matrix, and the observation weight matrix. When the observations are reliable but the system dynamic model is abnormal, the state estimation covariance matrix also carries the dynamic model error. The Kalman filter gain and the innovation vector are therefore both carriers of and factors influenced by the dynamic model error, which gives the algorithm a sound physical interpretation.

The algorithm block diagram is shown in Fig. 2.

Fig. 2. Compensation algorithm of the neural network dynamic model.

In practical applications, the Kalman filter gain \( \varvec{K}_{k} \), the innovation vector \( \bar{\varvec{V}}_{k} \), and the error compensation vector \( {\varvec{\Delta}}\varvec{Er} \) can be collected over an initial period of the carrier’s motion and used for offline training. After training, the neural network can be used to correct the dynamic model prediction \( \bar{\varvec{X}}_{k} \).
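A sketch of this offline-training / online-correction workflow is given below (Python/NumPy). It assumes a per-epoch filter log and a fitted regressor such as the RBF sketch above; obtaining the reference output \( {\varvec{\Delta}}\varvec{Er} \) by solving Eq. (1.11) with the true state is one possible realization of the training target described here, not necessarily the exact procedure used by the authors.

```python
import numpy as np

def training_pairs(filter_log, true_states):
    """Build (K_k @ V_bar_k, Delta_Er) pairs from an initial motion segment.
    filter_log: list of (x_pred, K, v, A) tuples per epoch
    true_states: reference states (simulation truth or RTK solution)."""
    X, Y = [], []
    for (x_pred, K, v, A), x_true in zip(filter_log, true_states):
        n = x_pred.size
        # Eq. (1.11) solved for Delta_Er given the true state X_k
        dEr = np.linalg.solve(np.eye(n) - K @ A, x_true - x_pred + K @ v)
        X.append(K @ v)
        Y.append(dEr)
    return np.asarray(X), np.asarray(Y)

def compensated_solution(x_pred, K, A, L, predict_dEr):
    """Online phase: predict Delta_Er from K_k @ V_bar_k, compensate the
    prediction, then apply Eq. (1.10).
    predict_dEr: callable mapping K_k @ V_bar_k to the predicted Delta_Er,
    e.g. a wrapper around the trained network's predict function."""
    v = A @ x_pred - L
    dEr = predict_dEr(K @ v)
    x_comp = x_pred + dEr
    return x_comp - K @ (A @ x_comp - L)
```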

3.2 Simulation Analysis

The simulation lasted 2000 s. The carrier is assumed to move in a uniform circular motion in the X–Y plane and to remain stationary in the Z direction (vz = 0, az = 0). The carrier reference trajectory is known; the X-direction component is shown in Fig. 3.

Fig. 3. Reference trajectory of the X-direction.

The receiver clock error is modeled as a discrete-time first-order Markov process. The GPS precise ephemeris of April 1, 2007 was interpolated to obtain the satellite positions, from which the distances between the carrier and the satellites were generated. The simulated clock error and pseudorange observation noise were then added. The standard deviation of the pseudorange observation noise was set to \( \sigma_{{\text{UERE}}} = 5.3\,\text{m} \), the typical user equivalent range error of the GPS Standard Positioning Service.
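A minimal sketch of how such simulated pseudoranges could be generated is shown below (Python/NumPy). The clock model parameters tau and sigma are illustrative placeholders; only the noise level of 5.3 m is taken from the text.

```python
import numpy as np

C = 299792458.0          # speed of light, m/s
SIGMA_UERE = 5.3         # pseudorange noise standard deviation, m (GPS SPS typical value)

def markov_clock_bias(n_epochs, dt=1.0, tau=3600.0, sigma=1e-8, rng=None):
    """Discrete-time first-order Markov (Gauss-Markov) receiver clock bias in seconds.
    tau and sigma are illustrative values, not taken from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    alpha = np.exp(-dt / tau)
    b = np.zeros(n_epochs)
    for k in range(1, n_epochs):
        b[k] = alpha * b[k - 1] + sigma * np.sqrt(1 - alpha**2) * rng.standard_normal()
    return b

def simulate_pseudoranges(rcv_pos, sat_pos, clock_bias_s, rng=None):
    """Geometric range + receiver clock term + UERE noise for one epoch.
    rcv_pos: (3,) carrier position; sat_pos: (n_sat, 3) satellite positions;
    clock_bias_s: receiver clock bias in seconds."""
    rng = np.random.default_rng() if rng is None else rng
    rho = np.linalg.norm(sat_pos - rcv_pos, axis=1)
    return rho + C * clock_bias_s + SIGMA_UERE * rng.standard_normal(sat_pos.shape[0])
```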

The network input during training is the product \( \varvec{K}_{k} \bar{\varvec{V}}_{k} \) of each epoch; the reference output is the dynamic model error \( {\varvec{\Delta}}\varvec{Er} \) calculated from the true motion state of the carrier. The filter convergence time is 10 s, so the data from 11 s to 1100 s is used for training. The subsequent 990 s of data is then processed for positioning: for each epoch, \( \varvec{K}_{k} \bar{\varvec{V}}_{k} \) is fed into the trained network to obtain the predicted dynamic model error \( {\varvec{\Delta}}\varvec{Er} \), which is substituted into Eq. (1.11) to obtain the compensated motion state prediction of the carrier. This compensated prediction is used as the positioning result.

The RBF neural network is adopted here because of its simple structure, straightforward training, fast learning convergence, and unique best-approximation property. The maximum number of neurons is set to 1000, the mean square error goal to 0.1, and the spread of the radial basis functions to 1. The positioning results are compared with those of the standard Kalman filter (Figs. 4 and 5).

Fig. 4. Standard Kalman filter.

Fig. 5. Error compensation algorithm for the dynamic model based on the neural network.

Only the X-axis coordinate differences are plotted in the figures of this paper; the Y-axis and Z-axis components behave similarly. The root mean square errors of each filter output are listed in Table 1.

Table 1. Statistical results of root mean square error.

The comparison shows that the neural network-based dynamic model compensation algorithm effectively suppresses the influence of the dynamic model anomaly; the root mean square errors in the horizontal and vertical directions are reduced by about 70%.

4 Measured Data Analysis

4.1 Measured Data Acquisition from a Dynamic Sports Car Experiment

The vehicle-mounted GPS observation data collected at Songya Lake Park in Changsha on September 19, 2018 is used to verify the filtering algorithm proposed in this paper. The positioning results obtained by an RTK system serve as the reference. During data acquisition, the receiver antenna was placed on the roof of the car, as shown in Fig. 6. The car was driven around Songya Lake Park; its route is shown in Fig. 7.

Fig. 6. Antenna placement for the sports car experiment.

Fig. 7. Trajectory of the sports car experiment.

The epoch interval is 1 s. The number of visible satellites (above the elevation cutoff angle) and the GDOP values during the acquisition period are shown in Figs. 8 and 9, respectively.

Fig. 8. Number of visible satellites.

Fig. 9. GDOP values.

4.2 Experimental Results and Analysis

The sports car data set contains 1787 epochs in total. To allow the network to fully adapt to the environment and obtain initial values for the connection weights, the samples of the first 1200 epochs are used as training samples, from which the initial network weights are assigned. The data of the last 587 epochs is used for prediction. The results of the standard Kalman filter and the neural network-based dynamic model error compensation algorithm are shown in Figs. 10 and 11.

Fig. 10. Standard Kalman filter.

Fig. 11. Error compensation algorithm for the dynamic model based on the neural network.

The statistical results of each filter output are listed in Table 2.

Table 2. Statistical results of root mean square error.

These results show that uncertain factors exist in both the observation information and the dynamic model information, and the standard Kalman filter solution is not ideal. The neural network-based dynamic model compensation algorithm effectively suppresses the model error, and its filtering performance is greatly improved compared with the standard Kalman filter: the root mean square errors in the X, Y, and Z directions are reduced by about 85%, 95%, and 90%, respectively. Through training and learning, the positioning bias in all directions is removed in the prediction phase (see Table 3). In terms of the error standard deviation, the algorithm reduces the X, Y, and Z components by about 70%, 60%, and 60%, respectively, compared with the standard Kalman filter. However, a spike appears between 300 s and 400 s of the prediction phase, caused by the large GDOP values and the abrupt GDOP changes during this period; the training samples contain no similar conditions, so these abnormal errors are not effectively eliminated in the prediction phase.

Table 3. Mean and standard deviation of errors.

5 Conclusion

Based on the standard Kalman filter and the RBF neural network, a neural network-based dynamic model compensation algorithm is proposed in this paper. Exploiting the nonlinear mapping approximation ability of the RBF neural network, the relationship between the dynamic model error and the product of the Kalman filter gain and the innovation vector is learned, and the dynamic model error is compensated in the prediction phase. Simulation and experimental results show that the proposed algorithm effectively improves the positioning accuracy relative to the standard Kalman filter.