Introduction

Global positioning system (GPS) measurements are corrupted by several error sources, which can be categorized as biases and random errors: ionospheric and tropospheric delays, satellite and receiver clock offsets, receiver noise, and multipath. Differential GPS (DGPS) provides users with corrections that remove the bias terms correlated between receivers. Any interruption of the DGPS service therefore causes a loss of navigation guidance.

This paper focuses on the continuity performance of the DGPS corrections. Discontinuity of the DGPS navigation service can result from a short-term loss of lock on the GPS signals at the DGPS reference station or from unintentional interruptions of the DGPS correction transfer caused by hardware or software failures. Such service discontinuities can be overcome by applying predictions of the DGPS corrections. Since the DGPS correction is a function of time, it can be modeled and predicted using a neural network (NN).

For a given input vector X, a neural network produces an output vector Y = g(X). A neural network can approximate any function g of the input vector X to any desired degree of accuracy, provided that sufficient hidden units are available.

The application of neural networks to GPS and DGPS has been studied by several researchers. Sang et al. (1997) applied a neural network to predict DGPS corrections; the experiment was performed before selective availability (SA) was turned off, so the main component of the correction was the SA error. Jwo et al. (2004) investigated neural network prediction of the DGPS pseudorange correction (PRC) with SA off using an improved mapping design; in that case, the main contributions to the DGPS correction were the ionospheric and tropospheric delays. The DGPS correction data were used as the input to an autoregressive moving average (ARMA) neural network.

In this study, the residual of the DGPS correction, defined as the difference between the current and previous corrections, is used as an additional input to the ARMA neural network along with the DGPS correction data. An autoregressive (AR) neural network is investigated as well, and the accuracy and effective prediction time of the ARMA and AR neural networks are compared. In addition to the PRC, the carrier phase correction is predicted, and the positioning results obtained with the ARMA and AR models are compared.

Two categories of neural network architectures are commonly used in dynamic system modeling: feedforward artificial neural networks (FANN), including the multilayer perceptron and cascaded network architectures, and radial basis function (RBF) networks. A FANN was used in this study.

In the following sections, the artificial neural networks are discussed. After a brief review of neural networks, the implemented neural network models are explained. The experimental setup and the prediction results of the ARMA and AR neural networks are then presented, and finally the DGPS positioning results are presented and discussed.

Methodology

This study combines a backpropagation neural network (BPNN) with ARMA and AR models. A brief description of each method follows.

Backpropagation neural networks

The BPNN has been the most popular of all neural network methods: it is a feedforward, multilayer perceptron (MLP), supervised learning network. The procedure for finding the gradient vector within the network structure is referred to as backpropagation (BP) because the gradient is calculated in the direction opposite to the flow of the output of each node (Specht 1991). Network training is based on minimizing the error between the current output and the target vector.

For MLP, the output of one layer becomes the input of the following layer. The neurons in the first layer receive external inputs, and the outputs of the neurons in the last layer are considered the network output. Figure 1 shows an example of a three-layer neural network. The following equation describes this operation (Hagan et al. 1996):

$$ a^{m+1} = f^{m+1}\left( W^{m+1} a^{m} + b^{m+1} \right) \quad \text{for } m = 1, 2, \ldots, M - 1 $$
(1)

where a^m is the output of layer m (the input to layer m + 1), b is the bias vector, W is the weight matrix of the layer, M is the number of layers in the network, and f is the activation function of the neurons. The activation functions of the hidden layers and the output layer are typically sigmoid functions of the form (Hagan et al. 1996):

$$ f(u) = \frac{1}{1 + \mathrm{e}^{-u}} $$
(2)

where u ∈ (−∞, ∞) and f(u) ∈ (0, 1).
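
As a minimal illustration of Eqs. (1) and (2), the following Python/NumPy sketch propagates an input vector through a network of the kind shown in Fig. 1. The layer sizes and the random weights are arbitrary placeholders, not the trained values used in this study.

```python
import numpy as np

def sigmoid(u):
    """Log-sigmoid activation of Eq. (2): maps (-inf, inf) to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-u))

def mlp_forward(x, weights, biases):
    """Forward pass of Eq. (1): a^{m+1} = f^{m+1}(W^{m+1} a^m + b^{m+1})."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

# Illustrative three-layer sizes: 3 inputs -> 5 hidden units -> 1 output (random weights).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((5, 3)), rng.standard_normal((1, 5))]
biases = [rng.standard_normal(5), rng.standard_normal(1)]
print(mlp_forward(np.array([0.1, -0.2, 0.3]), weights, biases))
```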

Fig. 1 Three-layer network

Autoregressive (AR) model

The identification problem is to deduce the relationship between past input data and future data of an unknown time series or dynamic system (Ljung 1999). The general AR(R) formulation of a system is given by the linear equation (Cholewo and Zurada 1997):

$$ y(t) + \sum_{k=1}^{R} \alpha_{k}\, y(t - kT) = \epsilon_{t} $$
(3)

where T is the sampling time and R is the order of the AR model. y(t) is a linear combination of its previous values, and ε_t is white noise with zero mean and variance σ_ε².
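
For illustration, the sketch below fits the coefficients of a linear AR(R) model by ordinary least squares and forms a one-step-ahead prediction. This is a generic AR fit on a toy series, not the neural network tuning described later; the series, the order, and the variable names are illustrative only.

```python
import numpy as np

def fit_ar(y, R):
    """Least-squares fit of an AR(R) model: estimates the coefficients that
    predict y(t) from y(t-1), ..., y(t-R) (the negated alpha_k of Eq. 3)."""
    X = np.array([y[t - np.arange(1, R + 1)] for t in range(R, len(y))])
    target = y[R:]
    coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coeffs

def predict_next(y, coeffs):
    """One-step-ahead prediction from the most recent R samples."""
    R = len(coeffs)
    return float(coeffs @ y[-1:-R - 1:-1])   # newest sample first

# Toy series standing in for a correction time history.
y = np.sin(0.1 * np.arange(200)) + 0.01 * np.random.default_rng(1).standard_normal(200)
coeffs = fit_ar(y, R=4)
print(predict_next(y, coeffs), y[-1])
```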

Autoregressive moving average (ARMA) model

The general ARMA (R, P) formulation for a system is given by the linear equation (Ljung 1999):

$$ y(t) + \sum_{k=1}^{R} \alpha_{k}\, y(t - kT) = \sum_{k=0}^{P} \beta_{k}\, u(t - kT) $$
(4)

where T is the sampling time and R and P are the orders of the ARMA model. y(t) is a linear combination of the previous values of the input u and the output y.

Construction of an artificial neural network predictor

This section describes the mathematical models of neural network predictors combined with ARMA and AR models.

Autoregressive moving average (ARMA) neural networks

Based on the ARMA model, a neural network predictor can be constructed. To predict the DGPS correction at time t + 1, denoted y(t + 1), a series of DGPS corrections y(t), y(t − 1), …, y(t − R) and the residuals of the DGPS correction u(t), u(t − 1), …, u(t − P) can be used.

The desired output is computed based on the calculation used by Kimoto et al. (1990):

$$ y(t + 1) = y(t) + r(t) + e(t) $$
(5)

where y(t) represents the DGPS correction at time t, e(t) is a white noise sequence, and r(t) is the sum of the desired output rate of change until time t (Kimoto et al. 1990):

$$ r_{n}(t) = \sum_{i=1}^{n} r(t + i) \quad i = 0, 1, 2, \ldots, n $$
(6)

where r(t) is the logarithm of the rate of change of the desired output between times t − 1 and t:

$$ r(t) = \ln (y(t)/y(t - 1)) $$
(7)
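
A small sketch of how a training target in the sense of Eqs. (5) and (7) could be formed from a correction series is given below. It is a literal reading of Eq. (5) with the noise term e(t) omitted; note that Eq. (7) requires a positive-valued series, so real corrections may need an offset. Names and values are illustrative.

```python
import numpy as np

def log_rate_of_change(y):
    """r(t) = ln(y(t) / y(t-1)) as in Eq. (7); requires a positive-valued series."""
    y = np.asarray(y, dtype=float)
    return np.log(y[1:] / y[:-1])

def training_target(y):
    """Desired output y(t+1) = y(t) + r(t) in the sense of Eq. (5),
    with the noise term e(t) omitted to obtain a clean target."""
    y = np.asarray(y, dtype=float)
    r = log_rate_of_change(y)   # r(t) = ln(y(t)/y(t-1)) for t = 1, ..., N-1
    return y[1:] + r            # targets for y(t+1), t = 1, ..., N-1

# Toy positive-valued correction series.
y = 5.0 + 0.1 * np.arange(10)
print(training_target(y))
```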

Using neural networks, the coefficients of the ARMA model can be tuned. If the basic sigmoid function is expanded as a polynomial (Chon and Cohen 1997):

$$ p_{i}(x) = a_{0i} + a_{1i} x + a_{2i} x^{2} + \cdots + a_{ni} x^{n} $$
(8)

and

$$ x_{i} = \sum_{j=0}^{P} w_{ji}\, u(t - j) + \sum_{j=1}^{R} v_{ji}\, y(t - j) $$
(9)

Then, the coefficients of the n-th order ARMA model are defined as (Chon and Cohen 1997):

$$ \begin{aligned} \alpha_{i} &= \sum_{s=1}^{M} c_{s} a_{1s} w_{is} \\ \beta_{i} &= \sum_{s=1}^{M} c_{s} a_{1s} v_{is} \end{aligned} $$
(10)

where α_i and β_i are the coefficients of the ARMA model, w_{is} and v_{is} are the weights connecting the i-th input to the s-th perceptron, a_{1s} is the first-order coefficient of the polynomial expansion in Eq. (8) for the s-th perceptron, and c_s is its output-layer weight. Figure 2 shows the topology of the implemented ARMA neural network. The DGPS correction y and the residual u are the inputs of the neural network predictor.
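
To connect Fig. 2 with the forward pass of Eqs. (1) and (2), the fragment below assembles the ARMA-style input vector from a correction series y and its residual series u and passes it through a single-hidden-layer sigmoid network. The weights are random placeholders standing in for trained values, the output unit is taken as linear for simplicity, and the sizes are illustrative only.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def arma_nn_input(y, u, t, R, P):
    """Input vector of the ARMA neural network predictor at time t:
    corrections y(t), ..., y(t-R) and residuals u(t), ..., u(t-P)."""
    return np.concatenate([y[t - np.arange(0, R + 1)], u[t - np.arange(0, P + 1)]])

def arma_nn_predict(x, W1, b1, W2, b2):
    """Single hidden sigmoid layer with a linear output unit (for simplicity)."""
    return float((W2 @ sigmoid(W1 @ x + b1) + b2)[0])

# Illustrative ARMA(9, 8)-sized input, 10 hidden units, 1 output, random weights.
R, P, H = 9, 8, 10
rng = np.random.default_rng(5)
y = np.cumsum(rng.standard_normal(100))   # toy correction series
u = np.diff(y, prepend=y[0])              # residual: difference of successive corrections
W1, b1 = rng.standard_normal((H, R + P + 2)), rng.standard_normal(H)
W2, b2 = rng.standard_normal((1, H)), rng.standard_normal(1)
x = arma_nn_input(y, u, t=50, R=R, P=P)
print(arma_nn_predict(x, W1, b1, W2, b2))
```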

Fig. 2 A three-layer ARMA neural network topology for DGPS correction

Autoregressive (AR) neural networks

The AR neural network predictor is constructed to predict the DGPS correction at time t + 1, denoted y(t + 1), using a series of DGPS corrections y(t), y(t − 1), …, y(t − R). R is the order of the AR model. The desired output of the AR neural network is computed using the same technique as for the ARMA network.

Using neural networks, the coefficients of the AR model can be tuned. Expanding the basic sigmoid function as a polynomial gives (Chon and Cohen 1997):

$$ \begin{aligned} p_{i}(x) &= a_{0i} + a_{1i} x + a_{2i} x^{2} + \cdots + a_{ni} x^{n} \\ x_{i} &= \sum_{j=1}^{R} w_{ji}\, y(t - j) \end{aligned} $$
(11)

Then, the coefficients of the n-th order AR model are defined as:

$$ \alpha_{i} = \sum_{s=1}^{M} c_{s} a_{1s} w_{is} $$
(12)

where α_i is the i-th coefficient of the AR model; the coefficients change with every new correction sample. w_{is} is the weight connecting the i-th input to the s-th perceptron. Using the AR model, the DGPS correction can then be predicted. The number of neural network inputs depends on the chosen AR order. Figure 3 shows the topology of the implemented AR neural network; the DGPS correction y is the input of the predictor.

Fig. 3 A three-layer AR neural network topology for DGPS correction

Akaike’s information criterion

Akaike’s information criterion (AIC) is a measure of the goodness of fit of an estimated statistical model and can be used to select an optimal model. Minimizing the criterion over candidate model orders trades off the fit of the model (which lowers the sum of squared residuals) against the model’s complexity, which is measured by its order (Bierens 2005).

AIC for the AR(R) model can be given by

$$ \mathrm{AIC} = \ln\!\left(s_{R}^{2}\right) + \frac{2R}{T} $$
(13)

where R is the number of parameters in the AR model and s_R² is the estimated residual variance of the AR(R) model, i.e., the sum of squared residuals divided by the number of samples T; in other words, it is the average squared residual of the AR(R) model. An AR(R) model can be compared with an AR(R + 1) model using this criterion to determine an optimal model order.

AIC for ARMA (R, P) model (Voss and Feng 2002) can be given by

$$ \mathrm{AIC} = \ln\!\left(s_{R,P}^{2}\right) + \frac{2(R + P)}{T} $$
(14)

where R + P is the number of parameters in the ARMA model and s_{R,P}² is the estimated residual variance, i.e., the sum of squared residuals of the ARMA (R, P) model divided by T. An ARMA (R, P) model can be compared with an ARMA (R + 1, P + 1) model using this criterion.
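
A minimal sketch of how Eq. (13) could drive model-order selection is shown below. The residual variance here comes from a plain least-squares AR fit on a toy series and merely stands in for the neural network prediction residuals used in the experiments; the series and order range are illustrative.

```python
import numpy as np

def ar_residual_variance(y, R):
    """Average squared one-step residual of a least-squares AR(R) fit."""
    X = np.array([y[t - np.arange(1, R + 1)] for t in range(R, len(y))])
    target = y[R:]
    coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coeffs
    return float(np.mean(resid ** 2))

def aic_ar(y, R):
    """Eq. (13): AIC = ln(s_R^2) + 2R / T."""
    T = len(y)
    return np.log(ar_residual_variance(y, R)) + 2.0 * R / T

rng = np.random.default_rng(3)
y = np.cumsum(rng.standard_normal(500))   # toy correction-like series
orders = range(1, 16)
best = min(orders, key=lambda R: aic_ar(y, R))
print("AIC-selected AR order:", best)
```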

Experiments and results

The experiments on the artificial neural networks were performed using real data for pseudorange and carrier phase correction predictions. The reference base station was located on the roof of the engineering building at Konkuk University, Seoul, Korea. Two NovAtel OEM 4 GPS receivers were used in the DGPS experimental setup. The measurement noise standard deviations for each of the pseudorange observables were assumed to be identical: σ_ρ,i = 1 m for i = 1, 2, ..., n.

The measurement update interval was 1 s, and six GPS satellites were used in the experiment. The experiment was conducted in three parts. First, experiments were carried out to obtain the optimal models of the ARMA and AR neural network predictors. Second, the effective prediction times of the optimal ARMA and AR neural network predictors were determined. The final experiment applied the ARMA and AR neural networks to the DGPS positioning solution.

ARMA neural network experiments

First, the artificial neural network was constructed using ARMA (2, 1) for the pseudorange correction. The network was trained using the BP algorithm, and training took 3 min before the ARMA neural network was ready to provide DGPS corrections. Once the neural network was trained, the constructed ARMA model was used to predict a time series with a step length of 6 s; that is, the neural network was trained to predict DGPS corrections 6 s ahead of the current correction. The experiment was performed for satellites PRN 1, 4, 7, 13, 20, and 24. For satellite PRN 7, the mean error of the neural network was 0.8056 m. Figure 4 shows the predicted PRC value of the neural network and the actual PRC of the DGPS.

Fig. 4 Comparison of the DGPS and neural network C/A code pseudorange correction using ARMA (3, 2)

Figure 4 shows that the neural network prediction is close to the real values. The prediction accuracy of the neural network is better than 1 m; however, there are several jumps in the error values, as shown in Fig. 5. These jumps occur when the geometric dilution of precision changes with time due to the relative motion of the satellites, as shown in Fig. 6.

Fig. 5 Neural network DGPS C/A code correction prediction error using ARMA (3, 2)

Fig. 6 Geometric dilution of precision time series

Figure 7 shows the prediction error time series using the ARMA (9, 8) model. The root mean square error (RMSE) prediction accuracy of the ARMA (9, 8) model was 0.2472 m. The prediction accuracy of the neural network with this model was better than in the first experiment with the ARMA (3, 2) model. Comparing Figs. 5 and 7, it can be seen that a higher ARMA order reduces the error level.

Fig. 7 Neural network DGPS C/A code correction prediction error using ARMA (9, 8)

Figure 8 shows the RMSE of the pseudorange correction prediction obtained by varying the ARMA order for each satellite. Eleven ARMA models were used to generate the pseudorange correction predictions for the six satellites. The RMSE of the prediction decreases with increasing ARMA order. This improvement results from the increased number of residual inputs to the model; with more residual inputs, the correction accuracy improves. However, the accuracy improvement becomes less significant near ARMA (9, 8). Therefore, ARMA (9, 8) can be chosen as a compromise between computational load and accuracy. The prediction accuracy could be increased further by increasing the number of perceptrons and layers of the neural network, but this would increase the computational burden and the learning time.

Fig. 8 Root mean square error of ARMA model C/A code prediction for different ARMA orders

Another experiment was conducted to predict the DGPS carrier phase correction. The same model used to predict the C/A PRC was applied to predict the carrier phase correction. Figure 9 shows the results of the carrier phase prediction by the neural network with the ARMA (2, 1) model for satellite PRN 7. The neural network predictions gave good accuracy except for some peaks.

Fig. 9 Comparison of the DGPS and neural network carrier phase correction using ARMA (2, 1)

Figure 10 shows the error time series of the neural network prediction using the ARMA (2, 1) model. The RMSE of the phase prediction was 0.2988 m.

Fig. 10 Neural network DGPS carrier phase correction prediction error using ARMA (2, 1)

The next experiment was conducted using the neural network with the ARMA (3, 2) model. Figure 11 shows the error time series of the neural network prediction. This experiment gave better prediction accuracy: the RMSE of the prediction was 0.1830 m. The experiments performed for other satellites gave similar results.

Fig. 11 Neural network DGPS carrier phase correction prediction error using ARMA (3, 2)

Figure 12 shows the carrier phase prediction RMSE for the six satellites as the ARMA model order is varied. The experiment was conducted by increasing the ARMA model order to identify the relationship between the prediction accuracy and the order. As in the pseudorange correction experiments, the prediction error decreased with increasing ARMA order; however, the error reduction leveled off when the model reached ARMA (9, 8).

Fig. 12 Root mean square error of ARMA model carrier phase prediction for different ARMA orders

The AIC defined in the previous section was calculated to find an optimal ARMA model order. The AIC shows a trend similar to that of the prediction error: as the ARMA model order is increased, the AIC value decreases and then converges.

Figure 13 shows the AIC value of the six satellites with an increasing ARMA model order. The AIC value begins to converge when the ARMA model order reaches ARMA (9, 8). Increasing the ARMA model order above ARMA (9, 8) does not yield a significant reduction of the AIC value. From these results, ARMA (9, 8) can be chosen as the optimal order.

Fig. 13 AIC value of the ARMA model for different ARMA orders

Another experiment was conducted to determine the effective prediction time. The time step of the prediction was 6 s. First, the learning process was carried out, and then the DGPS correction signal was cut after the 80th time step. ARMA (9, 8) can predict as far as 20 steps ahead while maintaining good prediction accuracy; this limit corresponds to 2 min, since the step time is 6 s. Figure 14 compares the prediction with the real data after the DGPS signal was cut. After the 20th step, the error begins to grow. Experiments for other satellites gave similar results.
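
One plausible way to keep predicting after the correction stream is cut is to feed each prediction back as an input for the next step. The sketch below does this with a linear AR predictor for clarity; the predictor used in this study is the neural network, and the coefficients and series shown are illustrative only.

```python
import numpy as np

def recursive_forecast(history, coeffs, steps):
    """Multi-step-ahead forecast: after the correction stream is cut,
    each prediction is appended and reused as an input for the next step."""
    buf = list(history[-len(coeffs):])
    out = []
    for _ in range(steps):
        yhat = float(np.dot(coeffs, buf[::-1]))   # newest sample first
        out.append(yhat)
        buf = buf[1:] + [yhat]                    # drop oldest, append prediction
    return np.array(out)

# Example: forecast 20 steps (= 2 min at a 6 s step) with illustrative AR(3) coefficients.
coeffs = np.array([1.6, -0.8, 0.1])
history = np.sin(0.2 * np.arange(50))
print(recursive_forecast(history, coeffs, steps=20))
```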

Fig. 14 Prediction effective time for ARMA (9, 8)

Autoregressive (AR) neural network experiment

Experiments with the AR models were conducted for the six satellites, as with the ARMA models. The AR neural network requires 4 min of training before the system can provide good predictions. Once trained, the network is ready to provide correction predictions.

Experiments on the DGPS PRC prediction gave results similar to the ARMA experiments: the prediction error decreased as the AR order was increased. Figure 15 compares the PRC prediction with the real data for the AR (8) model. However, the prediction error remains too large, as shown in Fig. 16; the maximum prediction error reached almost 30 m. When the AR model order was increased to 15, the prediction error was reduced, as shown in Fig. 17; the maximum prediction error fell below 3 m.

Fig. 15 Comparison of the DGPS C/A code correction prediction using AR (8)

Fig. 16 Neural network DGPS C/A code correction prediction error using AR (8)

Fig. 17 Neural network DGPS C/A code correction prediction error using AR (15)

The experiment for the carrier phase correction gave results similar to those for the PRC: increasing the AR model order reduces the prediction error. Figure 18 compares the carrier phase prediction with the observation data for the AR (8) neural network. The prediction error remains too large, as shown in Fig. 19.

Fig. 18 Comparison of DGPS and neural network carrier phase correction using AR (8)

Fig. 19 Neural network DGPS carrier phase correction prediction error using AR (8)

The next experiment was conducted to obtain the optimal order for the AR model. The AIC was computed for AR model orders from 8 through 21. Figure 20 shows the AIC value of each satellite for each AR order. The AIC value decreases as the autoregressive order increases; however, beyond a certain point, it begins to increase again. The order that gives the smallest AIC value is 15 or 16, indicating that the prediction error is smallest there. Thus, the AR neural network provides the best prediction result when the AR order is 15 or 16.

Fig. 20 AIC values for different AR orders

An experiment was also conducted to analyze how the prediction error level changes with the AR order. Figure 21 shows the RMSE of the PRC prediction for each satellite as the AR model order increases. The RMSE decreases with increasing AR order and then increases again when the order exceeds 16, clearly showing that the best order is 15 or 16.

Fig. 21 Root mean square error of AR model PRC prediction for different AR orders

From these data, order 15 or 16 can be chosen as the optimal order for the AR neural network. Order 15 is preferred since it requires less computational load than order 16.

An experiment to determine the effective prediction time was also conducted for the AR neural network. The time step of the prediction was 6 s. After the learning period, the DGPS correction signal was cut at the 80th time step. Figure 22 compares the prediction with the real data after the DGPS signal was cut. AR (15) maintains prediction accuracy for 18 steps after the cut, which corresponds to 1 min 48 s. The experiments for other satellites gave similar results.

Fig. 22 Prediction error effective time for AR (15)

DGPS positioning experiment

In the previous sections, the DGPS correction prediction errors of the ARMA and AR neural networks were analyzed. Here, the DGPS position errors with and without the neural networks are compared. For this experiment, two DGPS positioning systems were developed: the first was augmented with the ARMA (9, 8) neural network predictor and the second had no predictor. After continuous transmission of the DGPS correction signal to the system, the signal was cut for approximately 1 min and then reconnected; this cycle was repeated several times. When the DGPS correction signal was disconnected, the system without the neural network prediction reverted to stand-alone positioning. A sketch of the resulting fallback logic is given below.
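
The switching behavior described above amounts to a simple fallback rule: use the received correction while the link is up, and substitute the neural network prediction while it is down. A hypothetical sketch of this logic follows; the class, the predictor interface, and the trivial last-value predictor are illustrative, not the actual system.

```python
from typing import Optional

class CorrectionSource:
    """Chooses between a received DGPS correction and a predicted one."""

    def __init__(self, predictor):
        self.predictor = predictor      # any object with update() and predict()

    def get_correction(self, received: Optional[float]) -> float:
        if received is not None:
            # Correction link is up: use it and keep the predictor's data current.
            self.predictor.update(received)
            return received
        # Link is down: fall back to the predicted correction.
        return self.predictor.predict()

class LastValuePredictor:
    """Trivial stand-in predictor: repeats the last received correction."""
    def __init__(self):
        self.last = 0.0
    def update(self, value: float) -> None:
        self.last = value
    def predict(self) -> float:
        return self.last

source = CorrectionSource(LastValuePredictor())
print(source.get_correction(1.8), source.get_correction(None))
```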

Figure 23 compares the two systems. The 95% (2DRMS) accuracy of the DGPS system without the neural network prediction was 12.108 m, whereas that of the system with the prediction was 2.553 m. The positioning accuracy was thus significantly improved by the ARMA prediction. The ARMA neural network succeeded in providing continuity of service for the DGPS system while maintaining accuracy.
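
For reference, the 2DRMS figure quoted above is commonly computed as twice the RMS of the horizontal radial error, which corresponds to roughly 95% containment for typical error distributions. A small sketch with synthetic errors (not the experimental data) is given below.

```python
import numpy as np

def twice_drms(east_err, north_err):
    """2DRMS horizontal accuracy: 2 * sqrt(mean(dE^2 + dN^2))."""
    east_err = np.asarray(east_err)
    north_err = np.asarray(north_err)
    return 2.0 * np.sqrt(np.mean(east_err ** 2 + north_err ** 2))

rng = np.random.default_rng(4)
print(twice_drms(rng.normal(0, 1.0, 1000), rng.normal(0, 1.0, 1000)))
```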

Fig. 23 DGPS positioning error with and without the ARMA neural network

Applying the AR neural network gives a similar result: Fig. 24 compares the system with and without the AR neural network. When the loss of lock occurred, the positioning accuracy of the unaugmented system degraded, whereas the system with the AR prediction maintained its accuracy. Using the AR (15) neural network, the same results were obtained as with the ARMA (9, 8) neural network: the 95% (2DRMS) positioning accuracy was 2.553 m.

Fig. 24 DGPS positioning error with and without the AR neural network

From the experiments, it was found that the ARMA neural network needs less training time and has a longer effective prediction time than the AR neural network: the ARMA neural network requires 3 min of training, while the AR neural network requires 4 min, and the ARMA neural network can predict 20 steps ahead, while the AR neural network can only predict 18 steps. However, the positioning accuracies of the ARMA and AR neural network models are almost the same, approximately 2.553 m at the 95% level.

Conclusions

The ARMA and AR neural networks have been incorporated to predict the DGPS carrier phase and pseudorange corrections. When the correction signal is temporarily lost for a limited time, the neural network with the ARMA or AR model predicts the carrier phase or pseudorange correction with good accuracy. The benefits of using the ARMA model include easy implementation and good accuracy. The ARMA model includes the residuals of the present and previous corrections as an input to the neural network predictor. Neural network experiments with different ARMA model orders showed that increasing the ARMA order improved the prediction accuracy.

For the AR neural network, increasing the order did not always improve the prediction accuracy: the accuracy improved up to a certain order and degraded beyond it. Akaike’s information criterion was computed to obtain the best orders of the ARMA and AR neural network predictors. A comparison between the two shows that the ARMA neural network requires less training time and gives a longer effective prediction time than the AR neural network; however, in terms of positioning accuracy, the ARMA neural network gives the same accuracy as the AR neural network.

In summary, incorporating ARMA and AR neural network predictors into the DGPS system has been demonstrated to provide significant accuracy improvements when the DGPS correction signal is temporarily unavailable. The neural network prediction mechanism improves the continuity performance of the DGPS system.