In this chapter, we begin with a discussion of the general mathematical model of GPS observation and its linearisation. All partial derivatives of the observation function are given in detail; these are necessary for forming GPS observation equations. We then outline linear transformation and covariance propagation. In the section on data combinations, we discuss all meaningful and useful data combinations, such as ionosphere-free, geometry-free, and code–phase combinations and ionospheric residuals, as well as differential Doppler and Doppler integration. In the data differentiation section, we discuss single, double, and triple differences and their related observation equations and weight propagation. The number of parameters in the equations is greatly reduced through difference forming; however, the covariance derivations are tedious. In the last two sections, we discuss the equivalence properties between the uncombined and combined algorithms and between the undifferenced and differenced algorithms. We propose a unified GPS data processing method and describe it in detail. The method is selectively equivalent to the zero-, single-, double-, triple-, and user-defined differential methods.

6.1 General Mathematical Models of GPS Observations

Recalling the discussions in Chap. 4, the GPS code pseudorange, carrier phase, and Doppler observables are formulated as (cf. Eqs. 4.7, 4.18 and 4.23)

$$ R_{i}^{k} (t_{\text{r}} ,t_{\text{e}} ) = \rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} ) - (\delta t_{\text{r}} - \delta t_{k} )c + \delta_{\text{ion}} + \delta_{\text{trop}} + \delta_{\text{tide}} + \delta_{\text{rel}} + \varepsilon_{\text{c}}, $$
(6.1)
$$ \lambda \varPhi_{i}^{k} (t_{\text{r}} ,t_{\text{e}} ) = \rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} ) - (\delta t_{\text{r}} - \delta t_{k} )c + \lambda N_{i}^{k} - \delta_{\text{ion}} + \delta_{\text{trop}} + \delta_{\text{tide}} + \delta_{\text{rel}} + \varepsilon_{\text{p}} ,\;{\text{and}} $$
(6.2)
$$ D = \frac{{{\text{d}}\rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} )}}{{\lambda {\text{d}}t}} - f\frac{{{\text{d}}(\delta t_{\text{r}} - \delta t_{k} )}}{{{\text{d}}t}} + \delta_{{{\text{rel}}\_f}} + \varepsilon_{\text{d}}, $$
(6.3)

where the ionospheric effect can be approximated as (cf. Sect. 5.1.2, Eq. 5.26)

$$ \delta_{\text{ion}} = \frac{{A_{1} }}{{f^{2} }} + \frac{{A_{2} }}{{f^{3} }}, $$

and R is the observed pseudorange, Φ is the observed phase, D is the Doppler measurement, t e denotes the GPS signal emission time of the satellite k, t r denotes the GPS signal reception time of the receiver i, c denotes the speed of light, subscript i and superscript k denote the receiver and satellite, and δt r and δt k denote the clock errors of the receiver and satellite at the times t r and t e, respectively. The terms δ ion, δ trop, δ tide, and δ rel denote the ionospheric, tropospheric, tidal, and relativistic effects, respectively. Tidal effects include earth tides and ocean tidal loading. The multipath effect was discussed in Sect. 5.6 and is omitted here. ε c, ε p, and ε d are the remaining errors of the code, phase, and Doppler observables, respectively. f is the frequency, λ is the wavelength, A 1 and A 2 are ionospheric parameters, \( N_{i}^{k} \) is the ambiguity related to receiver i and satellite k, δ rel_f is the frequency correction of the relativistic effects, \( \rho_{i}^{k} \) is the geometric distance, and (cf. Eq. 4.6)

$$ \rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} ) = \rho_{i}^{k} (t_{\text{r}} ) + \frac{{{\text{d}}\rho_{i}^{k} (t_{\text{r}} )}}{{{\text{d}}t}}{\Delta}t, $$
(6.4)

where Δt denotes the signal transmission (travel) time, \( \Delta t = t_{\text{r}} - t_{\text{e}} \), and \( {\text{d}}\rho_{i}^{k} (t_{\text{r}} )/{\text{d}}t \) denotes the time derivative of the radial distance between the satellite and the receiver at time t r. All terms in Eqs. 6.1 and 6.2 have units of length (metres).

Considering Eq. 6.4 in the ECEF coordinate system, the geometric distance is a function of station state vector \( (x_{i} ,y_{i} ,z_{i} ,\dot{x}_{i} ,\dot{y}_{i} ,\dot{z}_{i} ) \) (denoted by X i ) and satellite state vector \( (x_{k} ,y_{k} ,z_{k} ,\dot{x}_{k} ,\dot{y}_{k} ,\dot{z}_{k} ) \) (denoted by X k ). GPS observation Eqs. 6.1, 6.2, and 6.3 can then be generally presented as

$$ O = F(X_{i} ,\;X_{k} ,\;\delta t_{i} ,\;\delta t_{k} ,\;\delta_{\text{ion}} ,\;\delta_{\text{trop}} ,\;\delta_{\text{tide}} ,\;\delta_{\text{rel}} ,\;N_{i}^{k} ,\;\delta_{{\text{rel}}\_f} ), $$
(6.5)

where O denotes the observation and F denotes an implicit function. In other words, the GPS observable is a function of the state vectors of the station and satellite, a number of physical effects, and the ambiguity parameters. In principle, GPS observations can be used to solve for the desired parameters of the function in Eq. 6.5. This is why GPS is now widely used for positioning and navigation (to determine the state vector of the station), orbit determination (to determine the state vector of the satellite), timing (to synchronise clocks), meteorological applications (i.e. tropospheric profiling), and ionospheric occultation (i.e. ionospheric sounding). In turn, the satellite orbit is a function of the earth’s gravitational field and a number of disturbing effects such as solar radiation pressure and atmospheric drag. GPS is now also used for gravity field mapping and for solar and earth system studies.

It is obvious that Eq. 6.5 is non-linear. The straightforward mathematical method for solving problem 6.5 is to search for the optimal solution using various effective search algorithms. The so-called ambiguity function (AF, see Sects. 8.5 and 12.2) method is an example. Generally speaking, solving a non-linear problem is much more complicated than first linearising the problem and then solving the linearised problem.

It is worth noting that the satellite and station state vectors must be represented in the same coordinate system; otherwise, a coordinate transformation, as discussed in Chap. 2, must be carried out. Because the rotations are “distance-keeping” transformations, the distances computed in two different coordinate systems must be the same. However, because of the earth’s rotation, the velocities expressed in the ECI and ECEF coordinate systems are not the same. Generally, the station coordinates and both the ionospheric and tropospheric effects are given and presented in the ECEF system. A satellite state vector may be given in either the ECSF or the ECEF system, depending on the needs of the specific application.

6.2 Linearisation of the Observation Model

The non-linear multivariable function F in Eq. 6.5 can be further generalised as

$$ O = F(Y) = F(y_{1} ,y_{2} , \ldots ,y_{n} ), $$
(6.6)

where variable vector Y has n elements. The linearisation is accomplished by expanding the function in a Taylor series to the first order (linear term) as

$$ O = F(Y^{0} ) + \left. {\frac{\partial F(Y)}{\partial Y}} \right|_{{Y^{0} }} \cdot {\text{d}}Y + \varepsilon ({\text{d}}Y), $$
(6.7)

where

$$ \frac{\partial F(Y)}{\partial Y} = \left( {\begin{array}{*{20}c} {\frac{\partial F}{{\partial y_{1} }}} & {\frac{\partial F}{{\partial y_{2} }}} & \ldots & {\frac{\partial F}{{\partial y_{n} }}} \\ \end{array} } \right),\quad {\text{and}}\quad {\text{d}}Y = \left( {Y - Y^{0} } \right) = \left( {\begin{array}{*{20}c} {{\text{d}}y_{1} } \\ {{\text{d}}y_{2} } \\ \vdots \\ {{\text{d}}y_{n} } \\ \end{array} } \right); $$

the symbol \( |_{{Y^{0} }} \) means that the partial derivative ∂F(Y)/∂Y is evaluated at Y = Y 0, and ε is the truncation error, which is a function of the second-order partial derivatives and dY. Y 0 is called the initial value vector. Equation 6.7 then becomes

$$ O{-}C = \left( {\begin{array}{*{20}c} {\frac{\partial F}{{\partial y_{1} }}} & {\frac{\partial F}{{\partial y_{2} }}} & \cdots & {\frac{\partial F}{{\partial y_{n} }}} \\ \end{array} } \right)_{{Y^{0} }} \cdot \left( {\begin{array}{*{20}c} {{\text{d}}y_{1} } \\ {{\text{d}}y_{2} } \\ \vdots \\ {{\text{d}}y_{n} } \\ \end{array} } \right) + \varepsilon , $$
(6.8)

where F(Y 0) is denoted by C (i.e. the computed value). Thus, GPS observation Eq. 6.6 is linearised as the linear equation Eq. 6.8. Denoting the observation and truncation errors as v, O − C as l, and the partial derivative (∂F/∂y j )\( |_{{Y^{0} }} \) as a j , Eq. 6.8 can be written as

$$ l_{i} = \left( {\begin{array}{*{20}c} {a_{i1} } & {a_{i2} } & \ldots & {a_{in} } \\ \end{array} } \right) \cdot \left( {\begin{array}{*{20}c} {{\text{d}}y_{1} } \\ {{\text{d}}y_{2} } \\ \vdots \\ {{\text{d}}y_{n} } \\ \end{array} } \right) + v_{i} ,\quad (i = 1,2, \ldots ,m), $$
(6.9)

where l is often called the "observable" in adjustment, or O − C (observed minus computed), and j and i are the indices of the unknowns and observations, respectively. Equation 6.9 is a linear error equation. A set of GPS observables then forms a linear error equation system:

$$ \left( {\begin{array}{*{20}c} {l_{1} } \\ {l_{2} } \\ \vdots \\ {l_{m} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {a_{11} } & {a_{12} } & \ldots & {a_{1n} } \\ {a_{21} } & {a_{22} } & \ldots & {a_{2n} } \\ \vdots & \vdots & \vdots & \vdots \\ {a_{m1} } & {a_{m2} } & \ldots & {a_{mn} } \\ \end{array} } \right) \cdot \left( {\begin{array}{*{20}c} {{\text{d}}y_{1} } \\ {{\text{d}}y_{2} } \\ \vdots \\ {{\text{d}}y_{n} } \\ \end{array} } \right) + \left( {\begin{array}{*{20}c} {v_{1} } \\ {v_{2} } \\ \vdots \\ {v_{m} } \\ \end{array} } \right), $$

or in matrix form (dY is denoted by X)

$$ L = AX + V, $$
(6.10)

where m is the number of observables. A number of adjustment and filtering methods (cf. Chap. 7) can be applied to solve the GPS problem of Eq. 6.10. The solved parameter vector is X (or dY). The original unknown vector Y can be obtained by adding dY to Y 0. V is the residual vector. Statistically, V is assumed to be a random vector that is normally distributed with zero expectation and variance var(V). To characterise the different qualities and correlations of the observables, a so-called weight matrix P is introduced into Eq. 6.10. Supposing all observations are linearly independent or uncorrelated, the covariance matrix of the observable vector L is

$$ Q_{LL} = {\text{cov}}(L) = \sigma^{2} E $$
(6.11)

or

$$ P = Q_{LL}^{ - 1} = \frac{1}{{\sigma^{2} }}E, $$
(6.12)

where E is an identity matrix of dimension m × m, superscript −1 is an inversion operator, and cov(L) is the covariance of L.

Generally, the linearisation process is considered to have been done well only when the solved unknown vector dY is sufficiently small. Therefore, the initial vector Y 0 must be carefully chosen. In cases in which the initial vector is not well known or not well given, the linearisation process must be repeated: the initial vector is corrected by the solved vector dY, and the linearisation and solution are repeated until dY converges. If X = 0, then L = V; therefore, the "observable" vector L is sometimes also called a residual vector. If the initial vector Y 0 is well known or well given, then the residual vector V can also be used as a criterion to judge the "goodness or badness" of the original observable vector. This property is used in robust Kalman filtering to adjust the weight of the observable (cf. Chap. 7).
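
To make the iteration concrete, the following Python sketch (a minimal illustration with assumed function and variable names, not code from this text) repeats the linearisation of Eqs. 6.7–6.10: it forms l = O − F(Y 0), builds the design matrix A from the partial derivatives, solves the weighted least-squares problem, and updates Y 0 until dY converges.

```python
import numpy as np

def iterate_linearisation(F, jacobian, O, Y0, P=None, tol=1e-8, max_iter=10):
    """Solve O = F(Y) by repeated linearisation (Eqs. 6.7-6.10).

    F(Y) returns the computed observations, jacobian(Y) the design matrix
    A = dF/dY evaluated at Y, O is the observation vector, Y0 the initial
    value vector and P an optional weight matrix (Eq. 6.12)."""
    Y = np.asarray(Y0, dtype=float)
    for _ in range(max_iter):
        l = np.asarray(O, dtype=float) - F(Y)   # "observed minus computed"
        A = jacobian(Y)
        W = np.eye(len(l)) if P is None else P
        dY = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)  # least-squares correction
        Y = Y + dY
        if np.linalg.norm(dY) < tol:            # dY sufficiently small: done
            break
    V = l - A @ dY                              # residuals (L = AX + V)
    return Y, V
```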

6.3 Partial Derivatives of Observation Function

  • Partial Derivatives of Geometric Path Distance with Respect to the State Vector \( (x_{i} ,y_{i} ,z_{i} ,\dot{x}_{i} ,\dot{y}_{i} ,\dot{z}_{i} ) \) of the GPS Receiver

The signal transmitting path is described by (cf. Eqs. 4.3 and 4.6 in Chap. 4)

$$ \rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} ) = \sqrt {(x_{k} (t_{\text{e}} ) - x_{i} )^{2} + (y_{k} (t_{\text{e}} ) - y_{i} )^{2} + (z_{k} (t_{\text{e}} ) - z_{i} )^{2} } ,\quad {\text{and}} $$
(6.13)
$$ \rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} ) \approx \rho_{i}^{k} (t_{\text{r}} ,t_{\text{r}} ) + \frac{{{\text{d}}\rho_{i}^{k} (t_{\text{r}} ,t_{\text{r}} )}}{{{\text{d}}t}}\Delta t, $$
(6.14)

where index k denotes the satellite, and the satellite coordinates are related to the signal emission time t e, i denotes the station, and station coordinates are related to the signal reception time t r, Δt = t e − t r. Then one has

$$ \frac{{{\text{d}}\rho_{i}^{k} (t_{\text{r}} ,t_{\text{r}} )}}{{{\text{d}}t}} = \frac{1}{{\rho_{i}^{k} (t_{\text{r}} ,t_{\text{r}} )}}\left( {(x_{k} - x_{i} )(\dot{x}_{k} - \dot{x}_{i} ) + (y_{k} - y_{i} )(\dot{y}_{k} - \dot{y}_{i} ) + (z_{k} - z_{i} )(\dot{z}_{k} - \dot{z}_{i} )} \right), $$
(6.15)

where the satellite state vector is related to the time t r, and

$$ \frac{{\partial \rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} )}}{{\partial (x_{i} ,y_{i} ,z_{i} )}} = \frac{ - 1}{{\rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} )}}\left( {\begin{array}{*{20}c} {x_{k} - x_{i} } & {y_{k} - y_{i} } & {z_{k} - z_{i} } \\ \end{array} } \right), $$
(6.16)
$$ \frac{{\partial \rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} )}}{{\partial (\dot{x}_{i} ,\dot{y}_{i} ,\dot{z}_{i} )}} = \frac{{ -\Delta t}}{{\rho_{i}^{k} (t_{\text{r}} ,t_{\text{r}} )}}\left( {\begin{array}{*{20}c} {x_{k} - x_{i} } & {y_{k} - y_{i} } & {z_{k} - z_{i} } \\ \end{array} } \right). $$
(6.17)
  • Partial Derivatives of Geometric Path Distance with Respect to the State Vector \( (x_{k} ,y_{k} ,z_{k} ,\dot{x}_{k} ,\dot{y}_{k} ,\dot{z}_{k} ) \) of the GPS Satellite

Similar to above, one has

$$ \frac{{\partial \rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} )}}{{\partial (x_{k} ,y_{k} ,z_{k} )}} = \frac{1}{{\rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} )}}\left( {\begin{array}{*{20}c} {x_{k} - x_{i} } & {y_{k} - y_{i} } & {z_{k} - z_{i} } \\ \end{array} } \right), $$
(6.18)
$$ \frac{{\partial \rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} )}}{{\partial (\dot{x}_{k} ,\dot{y}_{k} ,\dot{z}_{k} )}} = \frac{{\Delta t}}{{\rho_{i}^{k} (t_{\text{r}} ,t_{\text{r}} )}}\left( {\begin{array}{*{20}c} {x_{k} - x_{i} } & {y_{k} - y_{i} } & {z_{k} - z_{i} } \\ \end{array} } \right). $$
(6.19)
  • Partial Derivatives of the Doppler Observable with Respect to the Velocity Vector of the Station

The time differentiation of the geometric signal path distance can be derived as

$$ \begin{aligned} \frac{{{\text{d}}\rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} )}}{{{\text{d}}t}} & = \frac{1}{{\rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} )}} \\ & \quad \left( {(x_{k} (t_{\text{e}} ){\kern 1pt} - {\kern 1pt} x_{i} )(\dot{x}_{k} (t_{\text{e}} ){\kern 1pt} - {\kern 1pt} \dot{x}_{i} ) + (y_{k} (t_{\text{e}} ){\kern 1pt} - {\kern 1pt} y_{i} )(\dot{y}_{k} (t_{\text{e}} ){\kern 1pt} - {\kern 1pt} \dot{y}_{i} ) + (z_{k} (t_{\text{e}} ){\kern 1pt} - {\kern 1pt} z_{i} )(\dot{z}_{k} (t_{\text{e}} ){\kern 1pt} - {\kern 1pt} \dot{z}_{i} )} \right); \\ \end{aligned} $$
(6.20)

then one has

$$ \frac{{\partial ({\text{d}}\rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} )/{\text{d}}t)}}{{\partial (\dot{x}_{i} ,\dot{y}_{i} ,\dot{z}_{i} )}} = \frac{ - 1}{{\rho_{i}^{k} (t_{\text{r}} ,t_{\text{e}} )}}\left( {\begin{array}{*{20}c} {x_{k} (t_{\text{e}} ) - x_{i} } & {y_{k} (t_{\text{e}} ) - y_{i} } & {z_{k} (t_{\text{e}} ) - z_{i} } \\ \end{array} } \right). $$
(6.21)
  • Partial Derivatives of Clock Errors with Respect to the Clock Parameters

If the clock errors are modelled by Eq. 5.163 (cf. Sect. 5.5)

$$ \delta t_{i} = b_{i} + d_{i} t + e_{i} t^{2} ,\quad \delta t_{k} = b_{k} + d_{k} t + e_{k} t^{2} , $$
(6.22)

where i and k are the indices of the clock error parameters of the receiver and satellite, then one has

$$ \begin{aligned} \frac{{\partial \delta t_{i} }}{{\partial \left( {b_{i} ,d_{i} ,e_{i} } \right)}} & = \left( {\begin{array}{*{20}c} 1 & t & {t^{2} } \\ \end{array} } \right)\quad {\text{and}} \\ \frac{{\partial \delta t_{k} }}{{\partial \left( {b_{k} ,d_{k} ,e_{k} } \right)}} & = \left( {\begin{array}{*{20}c} 1 & t & {t^{2} } \\ \end{array} } \right). \\ \end{aligned} $$
(6.23)

If the clock errors are modelled by Eq. 5.164 (cf. Sect. 5.5)

$$ \delta t_{i} = b_{i} ,\quad \delta t_{k} = b_{k} , $$
(6.24)

then

$$ \frac{{\partial \delta t_{i} }}{{\partial b_{i} }} = 1,\quad \frac{{\partial \delta t_{k} }}{{\partial b_{k} }} = 1. $$
(6.25)

The above derivatives are valid for both the code and phase observable equations. For the Doppler observable, denote (cf. Eq. 6.3)

$$ \delta_{\text{clock}} = f\frac{{{\text{d}}(\delta t_{i} - \delta t_{k} )}}{{{\text{d}}t}}, $$
(6.26)

then for the clock error model of Eq. 6.22 one has

$$ \frac{{\partial \delta_{\text{clock}} }}{{\partial (d_{i} ,e_{i} )}} = \left( {\begin{array}{*{20}c} 1 & {2t} \\ \end{array} } \right)f\quad {\text{and}}\quad \frac{{\partial \delta_{\text{clock}} }}{{\partial (d_{k} ,e_{k} )}} = \left( {\begin{array}{*{20}c} 1 & {2t} \\ \end{array} } \right)f. $$
(6.27)
  • Partial Derivatives of Tropospheric Effects with Respect to the Tropospheric Parameters

If the tropospheric effects can be modelled by (cf. Sect. 5.2)

$$ \begin{aligned} {\text{I}}: & \quad \delta_{\text{trop}} = f_{\text{p}} {\text{d}}\rho \quad {\text{and}} \\ {\text{II}}: & \quad \delta_{\text{trop}} = \frac{{f_{\text{z}} {\text{d}}\rho }}{F} + \frac{{f_{\text{a}} {\text{d}}\rho }}{{F_{\text{c}} }}, \\ \end{aligned} $$
(6.28)

where dρ is the tropospheric effect computed by using the standard tropospheric model, f p, f z, and f a are parameters of the tropospheric delay in the path, zenith, and azimuth directions, and F and F c are the mapping and co-mapping functions discussed in Sect. 5.2. The derivatives with respect to the parameters f p, f z, and f a are then

$$ \begin{aligned} {\text{I}}: & \quad \frac{{\partial \delta_{\text{trop}} }}{{\partial f_{\text{p}} }} = {\text{d}}\rho \quad {\text{and}} \\ {\text{II}}: & \quad \frac{{\partial \delta_{\text{trop}} }}{{\partial (f_{\text{z}} ,f_{\text{a}} )}} = \left( {\begin{array}{*{20}c} {\frac{{{\text{d}}\rho }}{F}} & {\frac{{{\text{d}}\rho }}{{F_{\text{c}} }}} \\ \end{array} } \right) . \\ \end{aligned} $$
(6.29)

Furthermore, if the tropospheric parameters are defined as a step function or first-order polynomial (cf. Sect. 5.2) by

$$ \begin{aligned} {\text{I}}: & \quad f_{\text{p}} = f_{\text{z}} = f_{j} \quad {\text{if}}\quad t_{j - 1} < t \le t_{j} ,\quad j = 1, 2 ,\ldots , n\quad {\text{and}} \\ {\text{II}}: & \quad f_{\text{p}} = f_{\text{z}} = f_{j - 1} + (f_{j} - f_{j - 1} )\frac{{t - t_{j - 1} }}{{\Delta t}}\quad {\text{if}}\quad t_{j - 1} < t \le t_{j} ,\quad j = 1,2, \ldots ,n + 1, \\ \end{aligned} $$
(6.30)

where Δt = (t n  − t 0)/n, t 0 and t n are the beginning and ending times of the GPS survey, and Δt is usually selected as 2–4 h. Then one has

$$ \begin{aligned} {\text{I}}: & \quad \frac{{\partial f_{\text{p}} }}{{\partial f_{j} }} = \frac{{\partial f_{\text{z}} }}{{\partial f_{j} }} = 1\quad {\text{and}} \\ {\text{II}}: & \quad \frac{{\partial f_{\text{p}} }}{{\partial (f_{j - 1} ,f_{j} )}} = \frac{{\partial f_{\text{z}} }}{{\partial (f_{j - 1} ,f_{j} )}} = \left( {\begin{array}{*{20}c} {1 + \frac{{ - t + t_{j - 1} }}{{{\Delta} t}}} & {\frac{{t - t_{j - 1} }}{{{\Delta} t}}} \\ \end{array} } \right). \\ \end{aligned} $$
(6.31)

The azimuth dependence may be assumed to be (cf. Eq. 5.121)

$$ f_{\text{a}} = g_{1} \,\cos a + g_{2} \,\sin a, $$
(6.32)

where a is the azimuth, and g 1 and g 2 are called azimuth-dependent parameters. Then one gets

$$ \frac{{\partial f_{\text{a}} }}{{\partial (g_{1} ,g_{2} )}} = \left( {\begin{array}{*{20}c} {\cos a} & {\sin a} \\ \end{array} } \right). $$
(6.33)

If parameters g 1 and g 2 are also defined as step functions or first-order polynomials like Eq. 6.30, the partial derivatives can be obtained in a similar manner to Eq. 6.31.

  • Partial Derivatives of the Phase Observable with Respect to the Ambiguity Parameters

Depending on the scale that one prefers, there is

$$ \frac{\partial \lambda N}{\partial \lambda N} = 1\quad {\text{or}}\quad \frac{\partial \lambda N}{\partial N} = \lambda . $$
(6.34)
  • Partial Derivatives of Tidal Effects with Respect to the Tidal Parameters

If the earth tide model in Eqs. 5.147 and 5.149 is used, then the tidal effects can generally be written as

$$ \delta_{{{\text{earth}} {-} {\text{tide}}}} = s_{1} h_{2} + s_{2} l_{2} + s_{3} h_{3} , $$
(6.35)

where s 1, s 2, and s 3 are the coefficient functions, which are given in detail in Sect. 5.4.2, h 2 and h 3 are the Love numbers, and l 2 is the Shida number. Then one has

$$ \frac{{\partial \delta_{{{\text{earth}} {-} {\text{tide}}}} }}{{\partial (h_{2} ,l_{2} ,h_{3} )}} = \left( {\begin{array}{*{20}c} {s_{1} } & {s_{2} } & {s_{3} } \\ \end{array} } \right). $$
(6.36)

Ocean loading tide effects can be modelled as

$$ \delta_{{{\text{loading}} {-} {\text{tide}}}} = f_{\text{load}} \left( {\begin{array}{*{20}c} {{\text{d}}x_{\text{load}} } & {{\text{d}}y_{\text{load}} } & {{\text{d}}z_{\text{load}} } \\ \end{array} } \right), $$
(6.37)

where f load is the factor of the computed ocean loading effect vector (dx load dy load dz load). Then one has

$$ \frac{{\partial \delta_{{{\text{loading}} {-} {\text{tide}}}} }}{{\partial f_{\text{load}} }} = \left( {\begin{array}{*{20}c} {dx_{\text{load}} } & {dy_{\text{load}} } & {dz_{\text{load}} } \\ \end{array} } \right). $$
(6.38)
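
As a small illustration of how these partial derivatives enter the linearised observation equation, the following Python sketch assembles one row of the design matrix A of Eq. 6.10 for a code observation. The chosen unknown set (receiver position corrections, receiver clock offset in metres, and the tropospheric scale factor of model I in Eq. 6.28) and all names are illustrative assumptions, not a prescription from the text.

```python
import numpy as np

def code_design_row(sat_pos, rcv_pos, trop_model_delay):
    """One row of the design matrix A (Eq. 6.10) for a code observation with
    unknowns (dx_i, dy_i, dz_i, c*dt_r, f_p).

    The position partials are the negative line-of-sight unit vector
    (Eq. 6.16), the receiver clock offset c*dt_r enters with coefficient -1
    (cf. Eq. 6.1 with the clock model of Eq. 6.24), and the partial with
    respect to the tropospheric scale factor f_p is the modelled delay
    d_rho (Eq. 6.29, model I)."""
    diff = np.asarray(sat_pos, float) - np.asarray(rcv_pos, float)
    e = diff / np.linalg.norm(diff)          # unit vector receiver -> satellite
    return np.concatenate([-e, [-1.0, trop_model_delay]])

# example: satellite and receiver ECEF positions in metres (values made up)
row = code_design_row([15600e3, 7540e3, 20140e3], [3875e3, 332e3, 5028e3], 2.4)
```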

6.4 Linear Transformation and Covariance Propagation

For any linear equation system

$$ L = AX $$
(6.39)

or

$$ \left( {\begin{array}{*{20}c} {l_{1} } \\ {l_{2} } \\ \vdots \\ {l_{m} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {a_{11} } & {a_{12} } & \ldots & {a_{1n} } \\ {a_{21} } & {a_{22} } & \ldots & {a_{2n} } \\ \vdots & \vdots & \vdots & \vdots \\ {a_{m1} } & {a_{m2} } & \ldots & {a_{mn} } \\ \end{array} } \right)\,\left( {\begin{array}{*{20}c} {x_{1} } \\ {x_{2} } \\ \vdots \\ {x_{n} } \\ \end{array} } \right), $$

a linear transformation can be defined as the multiplication of Eq. 6.39 by a matrix T, i.e.

$$ TL = TAX $$
(6.40)

or

$$ \left( {\begin{array}{*{20}c} {t_{11} } & {t_{12} } & \ldots & {t_{1m} } \\ {t_{21} } & {t_{22} } & \ldots & {t_{2m} } \\ \vdots & \vdots & \vdots & \vdots \\ {t_{k1} } & {t_{k2} } & \ldots & {t_{km} } \\ \end{array} } \right){\kern 1pt} \left( {\begin{array}{*{20}c} {l_{1} } \\ {l_{2} } \\ \vdots \\ {l_{m} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {t_{11} } & {t_{12} } & \ldots & {t_{1m} } \\ {t_{21} } & {t_{22} } & \ldots & {t_{2m} } \\ \vdots & \vdots & \vdots & \vdots \\ {t_{k1} } & {t_{k2} } & \ldots & {t_{km} } \\ \end{array} } \right){\kern 1pt} \left( {\begin{array}{*{20}c} {a_{11} } & {a_{12} } & \ldots & {a_{1n} } \\ {a_{21} } & {a_{22} } & \ldots & {a_{2n} } \\ \vdots & \vdots & \vdots & \vdots \\ {a_{m1} } & {a_{m2} } & \ldots & {a_{mn} } \\ \end{array} } \right){\kern 1pt} \left( {\begin{array}{*{20}c} {x_{1} } \\ {x_{2} } \\ \vdots \\ {x_{n} } \\ \end{array} } \right), $$

where T is called the linear transformation matrix and has the dimension k × m. The inverse transformation of T is denoted by T −1. An invertible linear transformation does not change the properties (and solutions) of the original linear equations. This may be verified by multiplying Eq. 6.40 by T −1. A non-invertible linear transformation is called a rank-deficient (or not full rank) transformation.

The covariance matrix of L is denoted by cov(L) or Q LL (cf. Sect. 6.2); the covariance of the transformed L (i.e. TL) can then be obtained from the covariance propagation theorem as (cf., e.g., Koch 1988)

$$ {\text{cov}}(TL) = T\,{\text{cov}}(L)T^{\text{T}} = TQ_{LL} T^{\text{T}} , $$
(6.41)

where superscript T denotes the transpose of the transformation matrix.

If transformation matrix T is a vector (i.e. k = 1) and L is an inhomogeneous and independent observable vector (i.e. covariance matrix Q LL is a diagonal matrix with elements of \( \sigma_{j}^{2} \), where \( \sigma_{j}^{2} \) is the variance (σ j is called standard deviation) of the observable l j ), then Eqs. 6.40 and 6.41 can be written as

$$ \begin{aligned} & \left( {\begin{array}{*{20}c} {t_{1} } & {t_{2} } & \ldots & {t_{m} } \\ \end{array} } \right){\kern 1pt} \left( {\begin{array}{*{20}c} {l_{1} } \\ {l_{2} } \\ \vdots \\ {l_{m} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {t_{1} } & {t_{2} } & \ldots & {t_{m} } \\ \end{array} } \right){\kern 1pt} \left( {\begin{array}{*{20}c} {a_{11} } & {a_{12} } & \ldots & {a_{1n} } \\ {a_{21} } & {a_{22} } & \ldots & {a_{2n} } \\ \vdots & \vdots & \vdots & \vdots \\ {a_{m1} } & {a_{m2} } & \ldots & {a_{mn} } \\ \end{array} } \right){\kern 1pt} \left( {\begin{array}{*{20}c} {x_{1} } \\ {x_{2} } \\ \vdots \\ {x_{n} } \\ \end{array} } \right)\quad {\text{and}} \\ & {\text{cov}}(TL) = \left( {\begin{array}{*{20}c} {t_{1} } & {t_{2} } & \ldots & {t_{m} } \\ \end{array} } \right){\kern 1pt} \left( {\begin{array}{*{20}c} {\sigma_{1}^{2} } & 0 & \ldots & 0 \\ 0 & {\sigma_{2}^{2} } & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \ldots & {\sigma_{m}^{2} } \\ \end{array} } \right){\kern 1pt} \left( {\begin{array}{*{20}c} {t_{1} } \\ {t_{2} } \\ \vdots \\ {t_{m} } \\ \end{array} } \right). \\ \end{aligned} $$
(6.42)

Denoting cov(TL) as \( \sigma_{TL}^{2} \), one gets

$$ \sigma_{TL}^{2} = t_{1}^{2} \sigma_{1}^{2} + t_{2}^{2} \sigma_{2}^{2} + \ldots + t_{m}^{2} \sigma_{m}^{2} = \sum\limits_{j = 1}^{m} {t_{j}^{2} \sigma_{j}^{2} } . $$
(6.43)

Equation 6.43 is called the error propagation theorem.
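
As a quick numerical illustration of Eqs. 6.41 and 6.43 (with made-up standard deviations, purely for demonstration), the variance of a linear combination of uncorrelated observables can be computed as follows.

```python
import numpy as np

# Covariance propagation (Eq. 6.41) for one linear combination of three
# uncorrelated observables; the standard deviations are illustrative only.
T = np.array([[0.5, 0.5, -1.0]])                 # transformation (row) vector
Q_LL = np.diag([0.3**2, 0.3**2, 0.003**2])       # e.g. two codes and one phase
cov_TL = T @ Q_LL @ T.T                          # Eq. 6.41
sigma_TL = np.sqrt(cov_TL[0, 0])                 # identical to Eq. 6.43
print(sigma_TL)                                  # about 0.21 m
```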

6.5 Data Combinations

Data combinations are methods of combining GPS data measured with the same receiver at the same station. Generally, the observables are the code pseudoranges, carrier phases and Doppler at working frequencies such as C/A code, P1 and P2 codes, L1 phase Φ 1 and L2 phase Φ 2, and Doppler D 1 and D 2. In the future, there will also be P5 code, L5 phase Φ 5 and Doppler D 5. According to the observation equations of the observables, a suitable combination can be advantageous for understanding and solving GPS problems.

For convenience, the code, phase, and Doppler observables are simplified and rewritten as (cf. Eqs. 6.1–6.3)

$$ R_{j} = \rho - (\delta t_{\text{r}} - \delta t_{k} )c + \delta_{\text{ion}} (j) + \delta_{\text{trop}} + \delta_{\text{tide}} + \delta_{\text{rel}} + \varepsilon_{\text{c}} , $$
(6.44)
$$ \lambda_{j} \varPhi_{j} = \rho - (\delta t_{\text{r}} - \delta t_{k} )c + \lambda_{j} N_{j} - \delta_{\text{ion}} (j) + \delta_{\text{trop}} + \delta_{\text{tide}} + \delta_{\text{rel}} + \varepsilon_{\text{p}} , $$
(6.45)
$$ D_{j} = \frac{{{\text{d}}\rho }}{{\lambda_{j} {\text{d}}t}} - f_{j} \frac{{{\text{d}}(\delta t_{\text{r}} - \delta t_{k} )}}{{{\text{d}}t}} + \varepsilon_{\text{d}} ,\quad {\text{and}} $$
(6.46)
$$ \delta_{\text{ion}} (j) = \frac{{A_{1} }}{{f_{j}^{2} }} + \frac{{A_{2} }}{{f_{j}^{3} }}. $$
(6.47)

where j is the index of the frequency f, and the meanings of the other symbols are the same as in Eqs. 6.1–6.3. Equation 6.47 is an approximation for the code.

A general code–code combination can be formed by n 1 R 1 + n 2 R 2 + n 5 R 5, where n 1, n 2, and n 5 are arbitrary constants. However, in order for such a combination to retain the sense of a code survey, a standardised combination has to be formed as

$$ R = \frac{{n_{1} R_{1} + n_{2} R_{2} + n_{5} R_{5} }}{{n_{1} + n_{2} + n_{5} }}. $$
(6.48)

The newly formed code R can then be interpreted as a weight-averaged code survey of R 1, R 2, and R 5. The mathematical model of the observable Eq. 6.44 is generally still valid for R. Denoting the standard deviation of code observable R i as σ ci (i = 1, 2, 5), the newly-formed code observation R has the variance of

$$ \sigma_{\text{c}}^{2} = \frac{1}{{(n_{1} + n_{2} + n_{5} )^{2} }}\left( {n_{1}^{2} \sigma_{{{\text{c}}1}}^{2} + n_{2}^{2} \sigma_{{{\text{c}}2}}^{2} + n_{5}^{2} \sigma_{\text{c5}}^{2} } \right). $$

Because

$$ \left| {\frac{{n_{1} + n_{2} + \ldots + n_{m} }}{m}} \right| \le \sqrt {\frac{{n_{1}^{2} + n_{2}^{2} + \ldots + n_{m}^{2} }}{m}} , $$

(cf., e.g., Wang et al. 1979; Bronstein and Semendjajew 1987), one has the property of

$$ (n_{1} + n_{2} + \ldots + n_{m} )^{2} \le m(n_{1}^{2} + n_{2}^{2} + \ldots + n_{m}^{2} ), $$

where m is the maximum index. Therefore, in our case, one has

$$ \sigma_{c}^{2} \ge m \cdot { \hbox{min} }\left\{ {\sigma_{{{\text{c}}1}}^{2} ,\sigma_{{{\text{c}}2}}^{2} ,\sigma_{{{\text{c}}5}}^{2} } \right\},\quad m = 2\;{\text{or}}\; 3 $$

for combinations of two or three code observables.

A general phase–phase linear combination can be formed by

$$ \varPhi = n_{1} \varPhi_{1} + n_{2} \varPhi_{2} + n_{5} \varPhi_{5} , $$
(6.49)

where the combined signal has the frequency and wavelength

$$ f = n_{1} f_{1} + n_{2} f_{2} + n_{5} f_{5} \quad {\text{and}}\quad \lambda = \frac{c}{f}. $$
(6.50)

λΦ means the measured distance (with ambiguity!) and can be presented alternatively as

$$ \lambda \varPhi = \frac{1}{f}\left( {n_{1} f_{1} \lambda_{1} \varPhi_{1} + n_{2} f_{2} \lambda_{2} \varPhi_{2} + n_{5} f_{5} \lambda_{5} \varPhi_{5} } \right). $$
(6.51)

The mathematical model of Eq. 6.45 is generally still valid for the newly formed λΦ. Denoting the standard deviation of the phase observable λ i Φ i as σ i (i = 1, 2, 5), the newly formed observation has a variance of

$$ \sigma^{2} = \frac{1}{{f^{2} }}\left( {n_{1}^{2} f_{1}^{2} \sigma_{1}^{2} + n_{2}^{2} f_{2}^{2} \sigma_{2}^{2} + n_{5}^{2} f_{5}^{2} \sigma_{5}^{2} } \right) $$
(6.52)

and

$$ \sigma^{2} \ge m \cdot { \hbox{min} }\left\{ {\sigma_{1}^{2} ,\sigma_{2}^{2} ,\sigma_{5}^{2} } \right\}, $$

with m = 2 or 3 for combinations of two or three phases.

That is, the data combination will degrade the quality of the original data.

Linear combinations Φ W = Φ 1 − Φ 2 and Φ X = 2Φ 1 − Φ 2 are called wide-lane and x-lane combinations with wavelengths of about 86.2 and 15.5 cm. They reduce the first-order ionospheric effects on frequency f 2 to 40 % and 20 %, respectively. Φ N = Φ 1 + Φ 2 is called a narrow-lane combination.
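
These wavelengths follow directly from Eq. 6.50; the short Python check below (with the standard GPS L1/L2 frequency values) computes the corresponding wavelengths.

```python
# Wavelengths of the wide-lane, x-lane and narrow-lane combinations (Eq. 6.50)
c = 299792458.0                      # speed of light in m/s
f1, f2 = 1575.42e6, 1227.60e6        # GPS L1 and L2 frequencies in Hz

combinations = {
    "wide-lane (f1 - f2)": f1 - f2,
    "x-lane (2*f1 - f2)": 2 * f1 - f2,
    "narrow-lane (f1 + f2)": f1 + f2,
}
for name, f in combinations.items():
    print(f"{name}: {100 * c / f:.1f} cm")   # approx. 86.2, 15.6 and 10.7 cm
```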

6.5.1 Ionosphere-Free Combinations

Based on Eqs. 6.44–6.47, phase–phase and code–code ionosphere-free combinations can be formed by (cf. Sect. 5.1)

$$ \lambda \varPhi = \frac{{f_{1}^{2} \lambda_{1} \varPhi_{1} - f_{2}^{2} \lambda_{2} \varPhi_{2} }}{{f_{1}^{2} - f_{2}^{2} }} = \lambda (f_{1} \varPhi_{1} - f_{2} \varPhi_{2} )\quad {\text{and}} $$
(6.53)
$$ R = \frac{{f_{1}^{2} R_{1} - f_{2}^{2} R_{2} }}{{f_{1}^{2} - f_{2}^{2} }}. $$
(6.54)

The related observation equations can be formed from Eqs. 6.44 and 6.45 as

$$ R = \rho - (\delta t_{\text{r}} - \delta t_{k} )c + \delta_{\text{trop}} + \delta_{\text{tide}} + \delta_{\text{rel}} + \varepsilon_{\text{cc}} \quad {\text{and}} $$
(6.55)
$$ \lambda \varPhi = \rho - (\delta t_{\text{r}} - \delta t_{k} )c + \lambda N + \delta_{\text{trop}} + \delta_{\text{tide}} + \delta_{\text{rel}} + \varepsilon_{\text{pc}} , $$
(6.56)

where

$$ N = f_{1} N_{1} - f_{2} N_{2} ,\quad \lambda = \frac{c}{{f_{1}^{2} - f_{2}^{2} }}, $$
(6.57)

ε cc and ε pc denote the residuals after the combination of code and phase, respectively.

The advantages of such ionosphere-free combinations are that the ionospheric effects have disappeared from the observation Eqs. 6.55 and 6.56 and the other terms of the equations have remained the same. However, the combined ambiguity is not an integer anymore, and the combined observables have higher standard deviations. Equations 6.55 and 6.56 are indeed first-order ionosphere-free combinations.
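
A minimal sketch of the first-order combinations of Eqs. 6.53 and 6.54 (function and variable names are our own; the phase inputs are assumed to be already scaled to metres):

```python
def ionosphere_free(R1, R2, lam1_phi1, lam2_phi2, f1=1575.42e6, f2=1227.60e6):
    """First-order ionosphere-free code and phase combinations
    (Eqs. 6.54 and 6.53); all inputs and outputs are in metres."""
    a = f1**2 / (f1**2 - f2**2)              # about 2.546 for GPS L1/L2
    b = f2**2 / (f1**2 - f2**2)              # about 1.546 for GPS L1/L2
    R_if = a * R1 - b * R2                   # Eq. 6.54
    phi_if = a * lam1_phi1 - b * lam2_phi2   # Eq. 6.53 in metres
    return R_if, phi_if
```

With the GPS L1/L2 frequencies the coefficients are about 2.546 and 1.546, which makes the higher standard deviation of the combined observable (roughly a factor of three for equally noisy phases) easy to see.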

Second-order ionosphere-free combinations can be formed by (see Sect. 5.1.2 for details)

$$ \lambda \varPhi = C_{1} \lambda_{1} \varPhi_{1} + C_{2} \lambda_{2} \varPhi_{2} + C_{5} \lambda_{5} \varPhi_{5} \quad {\text{and}} $$
(6.58)
$$ R = C_{1} R_{1} + C_{2} R_{2} + C_{5} R_{5} , $$
(6.59)

where

$$ \begin{aligned} C_{1} & = \frac{{f_{1}^{3} (f_{5} - f_{2} )}}{{C_{4} }},\quad C_{2} = \frac{{ - f_{2}^{3} (f_{5} - f_{1} )}}{{C_{4} }}, \\ C_{5} & = \frac{{f_{5}^{3} (f_{2} - f_{1} )}}{{C_{4} }},\quad C_{4} = f_{1}^{3} (f_{5} - f_{2} ) - f_{2}^{3} (f_{5} - f_{1} ) + f_{5}^{3} (f_{2} - f_{1} ), \\ \lambda & = \frac{c}{{C_{4} }},\quad N = C_{4} (C_{1} N_{1} + C_{2} N_{2} + C_{5} N_{5} ). \\ \end{aligned} $$

The related observation equations are the same as Eqs. 6.55 and 6.56, with λ and N given above.

6.5.2 Geometry-Free Combinations

Given Eqs. 6.44–6.46, code–code, phase–phase, and phase–code geometry-free combinations can be formed by

$$ R_{1} - R_{2} = \delta_{\text{ion}} (1) - \delta_{\text{ion}} (2) +\Delta \varepsilon_{\text{c}} = \frac{{A_{1} }}{{f_{1}^{2} }} - \frac{{A_{1} }}{{f_{2}^{2} }} +\Delta \varepsilon_{\text{c}} , $$
(6.60)
$$ \lambda_{1} \varPhi_{1} - \lambda_{2} \varPhi_{2} = \lambda_{1} N_{1} - \lambda_{2} N_{2} - \frac{{A_{1} }}{{f_{1}^{2} }} + \frac{{A_{1} }}{{f_{2}^{2} }} +\Delta \varepsilon_{\text{p}} , $$
(6.61)
$$ \lambda_{1} D_{1} - \lambda_{2} D_{2} =\Delta \varepsilon_{\text{d}} , $$
(6.62)
$$ \lambda_{j} \varPhi_{j} - R_{j} = \lambda_{j} N_{j} - 2\delta_{\text{ion}} (j) +\Delta \varepsilon_{\text{pc}} , \quad {\text{and}}\quad j = 1, 2, 5, $$
(6.63)

where

$$ \Delta \delta_{\text{ion}} = \delta_{\text{ion}} (1) - \delta_{\text{ion}} (2) = \frac{{A_{1} }}{{f_{1}^{2} }} - \frac{{A_{1} }}{{f_{2}^{2} }}. $$
(6.64)

For an ionospheric model of the second order, one has approximately

$$ \Delta \delta_{\text{ion}} = \delta_{\text{ion}} (1) - \delta_{\text{ion}} (2) = \frac{{A_{1} }}{{f_{1}^{2} }} - \frac{{A_{1} }}{{f_{2}^{2} }} + \frac{{A_{2} }}{{f_{1}^{3} }} - \frac{{A_{2} }}{{f_{2}^{3} }}. $$

The geometry-free code–code and phase–phase combinations cancel out all other terms in the observation equations except the ionospheric term and the ambiguity parameters. Recalling the discussions of Sect. 5.1, δ ion is the ionospheric path delay and can be considered a mapping of the zenith delay \( \delta_{\text{ion}}^{z} \) or δ ion = \( \delta_{\text{ion}}^{z} \) F, where F is the mapping function (cf. Sect. 5.1). So one has

$$ \delta_{\text{ion}} (1) = \frac{{A_{ 1}^{\text{z}} }}{{f_{1}^{2} }}F = \frac{{A_{ 1}^{{}} }}{{f_{1}^{2} }}, $$
(6.65)

where A 1 and \( A_{1}^{z} \) have the physical meaning of the total electron content in the signal path direction and the zenith direction, respectively. \( A_{1}^{z} \) is then independent of the zenith angle of the satellite. If the electron content in the zenith direction is stable enough, \( A_{1}^{z} \) can be modelled by a step function or a first-order polynomial with a reasonably short time interval Δt by

$$ A_{1}^{z} = g_{j} \quad {\text{if}}\quad t_{{j{-} 1}} < t \le t_{j} ,\quad j = 1, 2, \ldots ,n + 1 $$
(6.66)

or

$$ A_{1}^{\text{z}} = g_{j - 1} + \left( {g_{j} - g_{j - 1} } \right)\frac{{t - t_{j - 1} }}{{\Delta t}}\quad {\text{if}}\quad t_{j - 1} < t \le t_{j} ,j = 1,2, \ldots ,n + 1, $$
(6.67)

where Δt = (t n  − t 0)/n, and t 0 and t n are the beginning and ending time of the GPS survey. Δt can be selected, for example, as 30 min. g j is the coefficient of the polynomial.

Geometry-free combinations of Eqs. 6.60, 6.61, and 6.63 (only for j = 1) can be considered a linear transformation of the original observable vector L = (R 1 R 2 λ 1 Φ 1 λ 2 Φ 2)T by

$$ \left( {\begin{array}{*{20}c} 1 & { - 1} & 0 & 0 \\ 0 & 0 & 1 & { - 1} \\ { - 1} & 0 & 1 & 0 \\ \end{array} } \right) \cdot \left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 & 0 & g \\ {\lambda_{1} } & { - \lambda_{2} } & { - g} \\ {\lambda_{1} } & 0 & d \\ \end{array} } \right) \cdot \left( {\begin{array}{*{20}c} {N_{1} } \\ {N_{2} } \\ {A_{1} } \\ \end{array} } \right) + \left( {\begin{array}{*{20}c} {\Delta \varepsilon_{\text{c}} } \\ {\Delta \varepsilon_{\text{p}} } \\ {\Delta \varepsilon_{\text{pc}} } \\ \end{array} } \right), $$
(6.68)

where Eq. 6.65 is used and

$$ g = \left( {\frac{1}{{f_{1}^{2} }} - \frac{1}{{f_{2}^{2} }}} \right),\quad d = - \frac{2}{{f_{1}^{2} }}\quad {\text{and}}\quad T = \left( {\begin{array}{*{20}c} 1 & { - 1} & 0 & 0 \\ 0 & 0 & 1 & { - 1} \\ { - 1} & 0 & 1 & 0 \\ \end{array} } \right). $$

Equation 6.68 is called an ambiguity-ionospheric equation. For any viewed GPS satellite, Eq. 6.68 is solvable. If the variance vector of the observable vector is

$$ \left( {\begin{array}{*{20}c} {\sigma_{\text{c}}^{2} } & {\sigma_{\text{c}}^{2} } & {\sigma_{\text{p}}^{2} } & {\sigma_{\text{p}}^{2} } \\ \end{array} } \right)^{\text{T}} , $$

then the covariance matrix of the original observable vector is (cf. Sect. 6.2)

$$ Q_{LL} = \left( {\begin{array}{*{20}c} {\sigma_{\text{c}}^{2} } & 0 & 0 & 0 \\ 0 & {\sigma_{\text{c}}^{2} } & 0 & 0 \\ 0 & 0 & {\sigma_{\text{p}}^{2} } & 0 \\ 0 & 0 & 0 & {\sigma_{\text{p}}^{2} } \\ \end{array} } \right), $$

and the covariance matrix of the transformed observable vector (left side of Eq. 6.68) is (cf. Sect. 6.4)

$$ \text{cov} (TL) = TQ_{LL} T^{\text{T}} = \left( {\begin{array}{*{20}c} {2\sigma_{\text{c}}^{2} } & 0 & { - \sigma_{\text{c}}^{2} } \\ 0 & {2\sigma_{\text{p}}^{2} } & {\sigma_{\text{p}}^{2} } \\ { - \sigma_{\text{c}}^{2} } & {\sigma_{\text{p}}^{2} } & {\sigma_{\text{c}}^{2} + \sigma_{\text{p}}^{2} } \\ \end{array} } \right), $$

and

$$ P = (\text{cov} (TL))^{ - 1} = \frac{1}{2}\left( {\begin{array}{*{20}c} {h + \sigma_{\text{c}}^{ - 2} } & { - h} & {2h} \\ { - h} & {h + \sigma_{\text{p}}^{ - 2} } & { - 2h} \\ {2h} & { - 2h} & {4h} \\ \end{array} } \right),\quad h = \frac{1}{{\sigma_{\text{c}}^{2} + \sigma_{\text{p}}^{2} }}. $$
(6.69)

Taking all measured data at a station into account, the ambiguity and ionospheric parameters (modelled as a step function or polynomial, cf. Eqs. 6.66 and 6.67) can be solved for by using Eq. 6.68 with the weight matrix of Eq. 6.69. Processing the data station by station, all ambiguity and ionospheric parameters can be determined. The different weights of the code and phase measurements are considered exactly here. Because of the physical property of the ionosphere, all solved ionospheric parameters will have the same sign. Even though observation Eq. 6.68 is already a linear equation system, an initialisation is still helpful to avoid excessively large ambiguity values. The broadcast ionospheric model can be used to initialise the related ionospheric parameters.

A geometry-free combination of Eq. 6.62 can be used as a quality check of the Doppler data.
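
The weight matrix of Eq. 6.69 follows directly from the covariance propagation of Sect. 6.4; the short Python check below (with assumed code and phase standard deviations, chosen only for illustration) reproduces it numerically.

```python
import numpy as np

sigma_c, sigma_p = 0.3, 0.003          # assumed code/phase std. dev. in metres
T = np.array([[ 1, -1, 0,  0],         # transformation of Eq. 6.68
              [ 0,  0, 1, -1],
              [-1,  0, 1,  0]], dtype=float)
Q_LL = np.diag([sigma_c**2, sigma_c**2, sigma_p**2, sigma_p**2])
cov_TL = T @ Q_LL @ T.T                # covariance of the combined observables
P = np.linalg.inv(cov_TL)              # weight matrix, cf. Eq. 6.69
```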

6.5.3 Standard Phase–Code Combination

Traditionally, phase and code combinations are used to compute the wide-lane ambiguity (cf. Sjoeberg 1999; Hofmann-Wellenhof et al. 1997). The formulas can be derived as follows. Dividing Eq. 6.63 by λ j and forming the difference for j = 1 and j = 2, one gets

$$ \varPhi_{\text{w}} - \frac{{R_{1} }}{{\lambda_{1} }} + \frac{{R_{2} }}{{\lambda_{2} }} = N_{\text{w}} - \frac{{2A_{1} }}{c}\left( {\frac{1}{{f_{1} }} - \frac{1}{{f_{2} }}} \right), $$
(6.70)

where Φ W = Φ 1 − Φ 2 and N W = N 1 − N 2 are called the wide-lane observable and the wide-lane ambiguity, respectively; c is the velocity of light and A 1 is the ionospheric parameter. The error term is omitted here. Equation 6.60 can be rewritten (by omitting the error term) as

$$ A_{1} = (R_{1} - R_{2} )\frac{{f_{1}^{2} f_{2}^{2} }}{{f_{2}^{2} - f_{1}^{2} }}, $$
(6.71)

and then one gets

$$ \frac{{A_{1} }}{c}\left( {\frac{1}{{f_{1} }} - \frac{1}{{f_{2} }}} \right) = \left( {\frac{{R_{1} }}{{\lambda_{1} f_{1} }} - \frac{{R_{2} }}{{\lambda_{2} f_{2} }}} \right)\frac{{f_{1} f_{2} }}{{f_{2} + f_{1} }} = \frac{{R_{1} }}{{\lambda_{1} }}\frac{{f_{2} }}{{(f_{1} + f_{2} )}} - \frac{{R_{2} }}{{\lambda_{2} }}\frac{{f_{1} }}{{(f_{1} + f_{2} )}}. $$
(6.72)

Substituting Eq. 6.72 into 6.70 yields

$$ N_{\text{w}} = \varPhi_{\text{w}} - \frac{{f_{1} - f_{2} }}{{f_{1} + f_{2} }}\left( {\frac{{R_{1} }}{{\lambda_{1} }} + \frac{{R_{2} }}{{\lambda_{2} }}} \right). $$
(6.73)

Equation 6.73 is the most popular formula for computing wide-lane ambiguities using phase and code observables. The undifferenced ambiguity N 1 can be derived as follows. Substituting Φ 2 = Φ 1 − Φ W and N 2 = N 1 − N W into Eq. 6.61 and omitting the error term, one has

$$ \begin{aligned} \lambda_{1} N_{1} - \lambda_{2} \left( {N_{1} - N_{\text{w}} } \right) & = \frac{{A_{1} }}{{f_{1}^{2} }} - \frac{{A_{1} }}{{f_{2}^{2} }} + \lambda_{1} \varPhi_{1} - \lambda_{2} \left( {\varPhi_{1} - \varPhi_{\text{w}} } \right), \\ N_{1} & = \varPhi_{1} - (\varPhi_{\text{w}} - N_{\text{w}} )\frac{{f_{1} }}{{f_{\text{w}} }} + \frac{{A_{1} }}{c}\frac{{f_{1} + f_{2} }}{{f_{1} f_{2} }} \\ \end{aligned} $$

or

$$ N_{1} = \varPhi_{1} - (\varPhi_{\text{w}} - N_{\text{w}} )\frac{{f_{1} }}{{f_{\text{w}} }} - \frac{{R_{1} }}{{\lambda_{1} }}\frac{{f_{2} }}{{f_{\text{w}} }} + \frac{{R_{2} }}{{\lambda_{2} }}\frac{{f_{1} }}{{f_{\text{w}} }}, $$
(6.74)

where f w = f 1 − f 2 is the wide-lane frequency.

Compared with the adjustment method derived in Sect. 6.5.2, it is obvious that the quality differences of the phase and code data are not considered using Eqs. 6.73 and 6.74 for determining the ambiguity parameters. Therefore, we suggest that the method proposed in Sect. 6.5.2 be used.
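
For reference, a direct sketch of Eq. 6.73 in Python (assumed variable names; phases in cycles, codes in metres); in practice the resulting float value would be averaged over many epochs before rounding:

```python
def widelane_ambiguity(phi1, phi2, R1, R2, f1=1575.42e6, f2=1227.60e6):
    """Wide-lane ambiguity N_w from phase (cycles) and code (metres), Eq. 6.73."""
    c = 299792458.0
    lam1, lam2 = c / f1, c / f2
    phi_w = phi1 - phi2                     # wide-lane phase (cycles)
    return phi_w - (f1 - f2) / (f1 + f2) * (R1 / lam1 + R2 / lam2)
```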

6.5.4 Ionospheric Residuals

Considering the GPS observables as a time series, the geometry-free combinations of Eqs. 6.60–6.64 can be rewritten as

$$ R_{1} (t_{j} ) - R_{2} (t_{j} ) =\Delta \delta_{\text{ion}} (t_{j} ) +\Delta \varepsilon_{\text{c}} , $$
(6.75)
$$ \lambda_{1} \varPhi_{1} (t_{j} ) - \lambda_{2} \varPhi_{2} (t_{j} ) = \lambda_{1} N_{1} - \lambda_{2} N_{2} -\Delta \delta_{\text{ion}} (t_{j} ) +\Delta \varepsilon \quad {\text{and}} $$
(6.76)
$$ \lambda_{i} \varPhi_{i} (t_{j} ) - R_{i} (t_{j} ) = \lambda_{i} N_{i} - 2\delta_{\text{ion}} (i,t_{j} ) +\Delta \varepsilon_{\text{pc}} , \quad i = 1, 2, 5, $$
(6.77)

where

$$ \Delta \delta_{\text{ion}} (t_{j} ) = \delta_{\text{ion}} (1,t_{j} ) - \delta_{\text{ion}} (2,t_{j} ) = \frac{{A_{1} (t_{j} )}}{{f_{1}^{2} }} - \frac{{A_{1} (t_{j} )}}{{f_{2}^{2} }},\quad j = 1, 2, \ldots ,m. $$
(6.78)

The differences of the above observable combinations at the two consecutive epochs t j and t j–1 can be formed as

$$ \Delta _{t} R_{1} (t_{j} ) -\Delta _{t} R_{2} (t_{j} ) =\Delta _{t}\Delta \delta_{\text{ion}} (t_{j} ) +\Delta _{t}\Delta \varepsilon_{\text{c}} , $$
(6.79)
$$ \lambda_{1}\Delta _{t} \varPhi_{1} (t_{j} ) - \lambda_{2}\Delta _{t} \varPhi_{2} (t_{j} ) = \lambda_{1}\Delta _{t} N_{1} - \lambda_{2}\Delta _{t} N_{2} -\Delta _{t}\Delta \delta_{\text{ion}} (t_{j} ) +\Delta _{t}\Delta \varepsilon_{\text{p}} , \quad {\text{and}} $$
(6.80)
$$ \lambda_{i}\Delta _{t} \varPhi_{i} (t_{j} ) -\Delta _{t} R_{i} (t_{j} ) = \lambda_{i}\Delta _{t} N_{i} - 2\Delta _{t} \delta_{\text{ion}} (i,t_{j} ) +\Delta _{t}\Delta \varepsilon_{\text{pc}} , \quad i = 1, 2, 5, $$
(6.81)

where Δ t is a time difference operator, and for any time function G(t), Δ t G(t j ) = G(t j )–G(t j−1) is valid.

Because the time differences of the ionospheric effects Δ t δ ion and Δ t Δδ ion are generally very small, they are called ionospheric residuals. In the case of no cycle slips, i.e. when the ambiguities N 1 and N 2 are constant, Δ t N 1 and Δ t N 2 equal zero. Equations 6.79–6.81 are called ionospheric residual combinations. The first combination, Eq. 6.79, can be used for a consistency check of the two code measurements. Equations 6.80 and 6.81 can be used for a cycle slip check. Equation 6.81 is a phase–code combination; due to the lower accuracy of the code measurements, it can be used only to check for large cycle slips. Equation 6.80 is a phase–phase combination and therefore has a higher sensitivity to cycle slips. However, two special cycle slips Δ t N 1 and Δ t N 2 can lead to a very small combination λ 1Δ t N 1 − λ 2Δ t N 2. Examples of such combinations can be found in Hofmann-Wellenhof et al. (1997). Thus, even if the ionospheric residual of Eq. 6.80 is very small, it does not guarantee that there are no cycle slips.
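
A minimal sketch of the phase–phase check of Eq. 6.80 (the function name and the jump threshold are assumptions for illustration, not values from the text): epochs where the epoch-to-epoch change of the geometry-free phase exceeds the threshold are flagged as possible cycle slips.

```python
import numpy as np

def flag_possible_cycle_slips(phi1, phi2, f1=1575.42e6, f2=1227.60e6,
                              threshold=0.05):
    """Flag epochs whose ionospheric residual (Eq. 6.80) jumps by more than
    'threshold' metres; phi1 and phi2 are phase series in cycles."""
    c = 299792458.0
    gf = (c / f1) * np.asarray(phi1, float) - (c / f2) * np.asarray(phi2, float)
    residual = np.diff(gf)                      # time-differenced combination
    return np.where(np.abs(residual) > threshold)[0] + 1
```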

6.5.5 Differential Doppler and Doppler Integration

  • Differential Doppler

The numerical differentiation of the original observables given in Eqs. 6.44 and 6.45 at the two consecutive epochs t j and t j−1 can be formed as

$$ \frac{{\Delta_{t} R_{j} }}{{\lambda_{j} \Delta t}} = \frac{{\Delta_{t} \rho }}{{\lambda_{j} \Delta t}} - f_{j} \frac{{\Delta_{t} (\delta t_{\text{r}} - \delta t_{k} )}}{\Delta t} + \frac{{\Delta_{t} \varepsilon_{c} }}{{\lambda_{j} \Delta t}},\quad j = 1, 2,\quad {\text{and}} $$
(6.82)
$$ \frac{{\Delta_{t} \varPhi_{j} }}{\Delta t} = \frac{{\Delta_{t} \rho }}{{\lambda_{j} \Delta t}} - f_{j} \frac{{\Delta_{t} (\delta t_{\text{r}} - \delta t_{k} )}}{\Delta t} + \frac{{\Delta_{t} \varepsilon_{p} }}{{\lambda_{j} \Delta t}}, \quad j = 1, 2, $$
(6.83)

where Δ t is the time difference operator defined in Sect. 6.5.4 and Δt = t j  − t j−1.

The left-hand side of Eq. 6.83 is called differential Doppler. Ionospheric residuals are negligible and are omitted here. The third terms of Eqs. 6.82 and 6.83 on the right-hand side are small residual errors. For convenience of comparison, the Doppler observable model of Eq. 6.46 is copied below:

$$ D_{j} = \frac{{{\text{d}}\rho }}{{\lambda_{j} {\text{d}}t}} - f_{j} \frac{{{\text{d}}(\delta t_{\text{r}} - \delta t_{k} )}}{{{\text{d}}t}} + \varepsilon_{\text{d}} . $$
(6.84)

It is clear that Eqs. 6.83 and 6.84 are nearly the same. The only difference is that in the Doppler Eq. 6.84, the observed Doppler is an instantaneous one and its model is presented by a theoretical differentiation, whereas the term on the left-hand side of Eq. 6.83 is the numerically differenced Doppler (formed from phases) and its model is presented by a numerical differentiation. The Doppler measurement reflects the instantaneous motion of the GPS antenna, whereas the differential Doppler describes a kind of average velocity of the antenna over the two consecutive epochs. The velocity solution of Eq. 6.83 (denoted by \( \left( {\begin{array}{*{20}c} {\dot{x}} & {\dot{y}} & {\dot{z}} \\ \end{array} } \right)^{\text{T}} \)) can be used to predict the future kinematic position by

$$ \left( {\begin{array}{*{20}c} {x_{j + 1} } \\ {y_{j + 1} } \\ {z_{j + 1} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {x_{j} } \\ {y_{j} } \\ {z_{j} } \\ \end{array} } \right) + \left( {\begin{array}{*{20}c} {\dot{x}_{j} } \\ {\dot{y}_{j} } \\ {\dot{z}_{j} } \\ \end{array} } \right) \cdot\Delta t. $$
(6.85)

In other words, differential Doppler can be used as the system equation of a Kalman filter for kinematic positioning. The Kalman filter will be discussed in the next chapter. A Kalman filter using differential Doppler will be discussed in Sect. 9.8.
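
A trivial sketch of the prediction step of Eq. 6.85 (all numbers below are made-up ECEF values, purely for illustration):

```python
import numpy as np

def predict_position(pos, vel, dt):
    """Predict the next kinematic position from the current position and the
    velocity solved from differential Doppler (Eq. 6.85)."""
    return np.asarray(pos, float) + np.asarray(vel, float) * dt

# predict 1 s ahead from an assumed ECEF position (m) and velocity (m/s)
next_pos = predict_position([3875000.0, 332000.0, 5028000.0], [1.2, -0.4, 0.8], 1.0)
```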

  • Doppler Integration

Integrating the instantaneous Doppler Eq. 6.84, one has

$$ \lambda_{j} \int\limits_{{t_{j - 1} }}^{{t_{j} }} {D_{j} {\text{d}}t =\Delta _{t} \rho -\Delta _{t} (\delta t_{\text{r}} - \delta t_{k} )c + \varepsilon_{\text{d}} } . $$

Applying the operator Δ t to the undifferenced phase Eq. 6.45 and code Eq. 6.44, one gets

$$ \begin{aligned} \lambda_{j}\Delta _{t} \varPhi_{j} & =\Delta _{t} \rho -\Delta _{t} (\delta t_{\text{r}} - \delta t_{k} )c + \lambda_{j}\Delta _{t} N_{j} + \varepsilon_{\text{p}} \quad {\text{and}} \\\Delta _{t} R_{j} & =\Delta _{t} \rho -\Delta _{t} (\delta t_{\text{r}} - \delta t_{k} )c + \varepsilon_{\text{c}} , \\ \end{aligned} $$
(6.86)

where the same symbols are used for the error terms (here and below). Differencing the first equation of Eq. 6.86 with the integrated Doppler leads to

$$ \lambda_{j}\Delta _{t} N_{j} = \lambda_{j}\Delta _{t} \varPhi_{j} - \lambda_{j} \int\limits_{{t_{j - 1} }}^{{t_{j} }} {D_{j} {\text{d}}t + \varepsilon_{1} } $$

or

$$ \Delta _{t} N_{j} =\Delta _{t} \varPhi_{j} - \int\limits_{{t_{j - 1} }}^{{t_{j} }} {D_{j} {\text{d}}t + \varepsilon_{1} ,\quad j = 1, 2, 5.} $$
(6.87)

Thus, integrated Doppler can be used for cycle slip detection. This detection method is quite reasonable: the phase is measured by keeping track of the partial phase and accumulating the integer count; if any loss of lock of the signal happens during this time, the integer accumulation will be wrong, i.e. a cycle slip occurs. Therefore, an external instantaneous Doppler integration can be used as an alternative method of cycle slip detection. The integration can be achieved by first fitting the Doppler with a polynomial of suitable order and then integrating it within the time interval.
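
A minimal sketch of this polynomial-fit-and-integrate approach (the polynomial order is an assumed choice):

```python
import numpy as np

def integrate_doppler(t, D, order=3):
    """Integrate a Doppler series D (cycles/s) over [t[0], t[-1]] by fitting a
    polynomial of the given order and integrating it analytically."""
    coeff = np.polyfit(t, D, order)          # fit the Doppler time series
    antideriv = np.polyint(coeff)            # coefficients of the antiderivative
    return np.polyval(antideriv, t[-1]) - np.polyval(antideriv, t[0])  # cycles
```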

  • Code Smoothing

Comparing the two formulas of Eq. 6.86, one has

$$ \Delta _{t} R_{j} = \lambda_{j} \Delta_{t} \varPhi_{j} - \lambda_{j}\Delta _{t} N_{j} + \varepsilon_{2} $$

or

$$ \Delta _{t} R_{j} = \lambda_{j}\Delta _{t} \varPhi_{j} + \varepsilon_{3} . $$
(6.88)

Equation 6.88 can be used for smoothing the code survey by phase if there are no cycle slips.
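
A sketch of how Eq. 6.88 can be used for smoothing (the blending weight and the Hatch-filter-like recursion are our own assumed choices, not prescribed by the text):

```python
import numpy as np

def phase_smoothed_code(R, lam_phi, weight=0.02):
    """Smooth a code series R with the epoch-to-epoch phase change (Eq. 6.88),
    assuming no cycle slips; lam_phi is the phase already scaled to metres."""
    R = np.asarray(R, float)
    lam_phi = np.asarray(lam_phi, float)
    smoothed = np.empty_like(R)
    smoothed[0] = R[0]
    for j in range(1, len(R)):
        propagated = smoothed[j - 1] + (lam_phi[j] - lam_phi[j - 1])  # Eq. 6.88
        smoothed[j] = weight * R[j] + (1.0 - weight) * propagated
    return smoothed
```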

  • Differential Phases

The first formula of Eq. 6.86 is the numerical difference of the phases at the two consecutive epochs t j and t j−1

$$ \lambda_{j}\Delta _{t} \varPhi_{j} =\Delta _{t} \rho -\Delta _{t} (\delta t_{\text{r}} - \delta t_{k} )c + \lambda_{j}\Delta _{t} N_{j} + \varepsilon_{\text{p}} , \quad j = 1, 2. $$

All terms on the right-hand side except the ambiguity term vary slowly. Any cycle slip will lead to a sudden jump in the time difference of the phases. Therefore, the time-differenced phase can be used as an alternative means of cycle slip detection.

6.6 Data Differentiations

Data differentiations are methods of combining GPS data (of the same type) measured at different stations. For the convenience of the later discussions, tidal and relativistic effects are considered to have been corrected before the differences are formed. The original code, phase, and Doppler observables as well as their standardised combinations can be rewritten as (cf. Eqs. 6.44–6.47)

$$ R_{i}^{k} (j) = \rho_{i}^{k} - c\delta t_{i} + c\delta t_{k} + \delta_{\text{ion}} (j) + \delta_{\text{trop}} + \varepsilon_{\text{c}} , $$
(6.89)
$$ \lambda_{j} \varPhi_{i}^{k} (j) = \rho_{i}^{k} - c\delta t_{i} + c\delta t_{k} + \lambda_{j} N_{i}^{k} (j) - \delta_{\text{ion}} (j) + \delta_{\text{trop}} + \varepsilon_{\text{p}} , $$
(6.90)
$$ \delta_{\text{ion}} (j) = \frac{{A_{1} }}{{f_{j}^{2} }} + \frac{{A_{2} }}{{f_{j}^{3} }},\quad {\text{and}} $$
(6.91)
$$ D_{i}^{k} (j) = \frac{{{\text{d}}\rho_{i}^{k} }}{{\lambda_{j} {\text{d}}t}} - f_{j} \frac{{{\text{d}}(\delta t_{i} - \delta t_{k} )}}{{{\text{d}}t}} + \varepsilon_{\text{d}} , $$
(6.92)

where j (j = 1, 2, 5) is the index of the frequency f, subscript i is the station number, and superscript k is the ID number of the satellite.

6.6.1 Single Differences

A single difference (SD) is formed between the data observed at two stations of the same satellite as

$$ {\text{SD}}_{{i{1,}i{2}}}^{k} (O) = O_{i2}^{k} - O_{i1}^{k} , $$
(6.93)

where O is the original observable, and i1 and i2 are the ID numbers of the two stations. Supposing the original observables have the same variance σ 2, the single-difference observable has a variance of 2σ 2. Considering Eqs. 6.89–6.92, one has

$$ {\text{SD}}_{i1, i2}^{k} (R(j)) = \rho_{i2}^{k} - \rho_{i1}^{k} - c\delta t_{i2} + c\delta t_{i1} + {\text{d}}\delta_{\text{ion}} (j) + {\text{d}}\delta_{\text{trop}} + {\text{d}}\varepsilon_{\text{c}} , $$
(6.94)
$$ \begin{aligned} {\text{SD}}_{i1, i2}^{k} (\lambda_{j} \varPhi (j)) & = \rho_{i2}^{k} - \rho_{i1}^{k} - c\delta t_{i2} + c\delta t_{i1} + \lambda_{j} N_{i2}^{k} (j) - \lambda_{j} N_{i1}^{k} (j) \\ & \quad - {\text{d}}\delta_{\text{ion}} (j) + {\text{d}}\delta_{\text{trop}} + {\text{d}}\varepsilon_{\text{p}} ,\quad {\text{and}} \\ \end{aligned} $$
(6.95)
$$ {\text{SD}}_{i1, i2}^{k} (D(j)) = \frac{{\dot{\rho }_{i2}^{k} - \dot{\rho }_{i1}^{k} }}{{\lambda_{j} }} - f_{j} \frac{{{\text{d}}(\delta t_{i2} - \delta t_{i1} )}}{{{\text{d}}t}} + {\text{d}}\varepsilon_{\text{d}} , $$
(6.96)

where \( \dot{\rho } \) is the time derivative of ρ, and dδ ion(j) and dδ trop are the differenced ionospheric and tropospheric effects at the two stations related to the satellite k, respectively.

The most important property of single differences is that the satellite clock error terms in the model are eliminated. However, it should be emphasised that the satellite clock error, which implicitly affects the computation of the satellite position, must still be carefully considered. Ionospheric and tropospheric effects are reduced through difference forming, especially for stations that are not very far apart. Because of the identical mathematical models of the station clock errors and ambiguities, not all clock and ambiguity parameters can be resolved in the single-difference equations of Eqs. 6.94–6.96.

For the original observable vector of station i1 and i2,

$$ O = \left( {\begin{array}{*{20}c} {O_{i1}^{k1} } & {O_{i1}^{k2} } & {O_{i1}^{k3} } & {O_{i2}^{k1} } & {O_{i2}^{k2} } & {O_{i2}^{k3} } \\ \end{array} } \right)^{T} ,\quad \text{cov} (O) = \sigma^{2} E, $$

the single differences

$$ {\text{SD}}(O) = \left( {\begin{array}{*{20}c} {O_{{i1, i{2}}}^{k1} } & {O_{i1, i2}^{k2} } & {O_{i1, i2}^{k3} } \\ \end{array} } \right)^{\text{T}} , $$

can be formed by a linear transformation

$$ \begin{aligned} {\text{SD}}(O) & = C \cdot O\quad {\text{and}} \\ C & = \left( {\begin{array}{*{20}c} { - 1} & 0 & 0 & 1 & 0 & 0 \\ 0 & { - 1} & 0 & 0 & 1 & 0 \\ 0 & 0 & { - 1} & 0 & 0 & 1 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} { - E} & E \\ \end{array} } \right). \\ \end{aligned} $$
(6.97)

where the common satellites k1, k2, and k3 are observed, and E is an identity matrix whose dimension equals the number of observed satellites; in the above example the size is 3 × 3.

The covariance matrix of the single differences is then

$$ \text{cov} ({\text{SD}}(O)) = C \cdot \text{cov} (O) \cdot C^{\text{T}} = \sigma^{2} C \cdot C^{\text{T}} = 2\sigma^{2} E, $$
(6.98)

i.e. the weight matrix is

$$ P = \frac{1}{{2\sigma^{2} }}E. $$

In other words, the single differences are uncorrelated observables in the case of a single baseline. C in Eq. 6.97 has a general form and is therefore denoted by C s = (−E n×n  E n×n ), where n is the number of commonly viewed satellites.

Single differences can be formed for any baselines as long as the two stations have common satellites in sight. However, these should be a set of “independent” baselines. The most widely used methods involve the formation of radial or transverse baselines. Supposing the stations’ ID vector is (i1, i2, i3,…, i(m − 1), im), and the baseline between station i1 and i2 is denoted by (i1, i2), then the radial baselines can be formed, for example, by (i1, i2),(i1, i3),…,(i1, im), and the transverse baselines by (i1, i2), (i2, i3),…,(i(m − 1), im). Station i1 is called a reference station and is freely selectable. In some cases, mixed radial and transverse baselines must be formed—for example, by (i1, i2), (i1, i3), (i3, i4),…,(i3, i(m − 1)), (i3, im). Sometimes the baselines must be formed by several groups, and thus several references must be selected. A method of forming independent and optimal baseline networks will be discussed in Sects. 9.1 and 9.2.

In the case in which three stations are used to measure the GPS data, the original observable vector of stations i1, i2 and i3 is

$$ O_{i} = \left( {\begin{array}{*{20}c} {O_{i}^{k1} } & \ldots & {O_{i}^{kn} } \\ \end{array} } \right)^{\text{T}} ,\quad \text{cov} (O_{i} ) = \sigma^{2} E_{n \times n} , \quad i = i{1},i{2},i{3}, $$

where n is the number of commonly observed satellites. The single differences of the baseline (i, j) are

$$ {\text{SD}}_{i,j} (O) = \left( {\begin{array}{*{20}c} {O_{i,j}^{k1} } & { \ldots .} & {O_{i,j}^{kn} } \\ \end{array} } \right)^{\text{T}} \quad i, j = i{1},i{2},i{3}, \quad i \ne j. $$

If the baselines are formed in a radial way, i.e. as (i1, i2) and (i1, i3), then one has

$$ \left( {\begin{array}{*{20}c} {{\text{SD}}_{i1,i2} (O)} \\ {{\text{SD}}_{i1,i3} (O)} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} { - E} & E & 0 \\ { - E} & 0 & E \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {O_{i1} } \\ {O_{i2} } \\ {O_{i3} } \\ \end{array} } \right), $$

and

$$ \begin{aligned} \text{cov} ({\text{SD}}) & = \sigma^{2} \left( {\begin{array}{*{20}c} { - E} & E & 0 \\ { - E} & 0 & E \\ \end{array} } \right)\left( {\begin{array}{*{20}c} { - E} & { - E} \\ E & 0 \\ 0 & E \\ \end{array} } \right) = \sigma^{2} \left( {\begin{array}{*{20}c} {2E} & E \\ E & {2E} \\ \end{array} } \right)\quad {\text{and}} \\ P_{s} & = [\text{cov} ({\text{SD}})]^{ - 1} = \frac{1}{{3\sigma^{2} }}\left( {\begin{array}{*{20}c} {2E} & { - E} \\ { - E} & {2E} \\ \end{array} } \right). \\ \end{aligned} $$
(6.99)

If the baselines are formed in a transverse way, i.e. as (i1, i2) and (i2, i3), then one has

$$ \begin{aligned} \left( {\begin{array}{*{20}c} {{\text{SD}}_{i1,i2} (O)} \\ {{\text{SD}}_{i2,i3} (O)} \\ \end{array} } \right) & = \left( {\begin{array}{*{20}c} { - E} & E & 0 \\ 0 & { - E} & E \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {O_{i1} } \\ {O_{i2} } \\ {O_{i3} } \\ \end{array} } \right), \\ \text{cov} ({\text{SD}}) & = \sigma^{2} \left( {\begin{array}{*{20}c} { - E} & E & 0 \\ 0 & { - E} & E \\ \end{array} } \right)\left( {\begin{array}{*{20}c} { - E} & 0 \\ E & { - E} \\ 0 & E \\ \end{array} } \right) = \sigma^{2} \left( {\begin{array}{*{20}c} {2E} & { - E} \\ { - E} & {2E} \\ \end{array} } \right)\quad {\text{and}} \\ P_{s} & = [\text{cov} ({\text{SD}})]^{ - 1} = \frac{1}{{3\sigma^{2} }}\left( {\begin{array}{*{20}c} {2E} & E \\ E & {2E} \\ \end{array} } \right). \\ \end{aligned} $$

It is obvious that the single differences are correlated if the number of stations is greater than two, and the correlation depends on the ways the baselines are formed. Therefore, it is not possible to derive a general covariance formula for the single differences of a network. Furthermore, the commonly viewed satellite number n could be different from baseline to baseline, further complicating the formulation of the covariance matrix .

A baseline-wise processing of the GPS data of a network using single differences is equivalent to an omission of the correlation between the baselines .
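
To make the covariance propagation above concrete, the following minimal numerical sketch (assuming Python with NumPy; the satellite number n and the variance σ² are arbitrary illustrative values) builds the radial and transverse transformation matrices for the three-station case and reproduces the covariance and weight matrices given above.

```python
import numpy as np

n = 4                    # number of commonly viewed satellites (illustrative value)
sigma2 = 1.0             # variance of the original observables (illustrative value)
E = np.eye(n)
Z = np.zeros((n, n))

# single-difference transformations for three stations i1, i2, i3
C_radial = np.block([[-E, E, Z], [-E, Z, E]])        # baselines (i1,i2), (i1,i3)
C_transverse = np.block([[-E, E, Z], [Z, -E, E]])    # baselines (i1,i2), (i2,i3)

for name, C in (("radial", C_radial), ("transverse", C_transverse)):
    cov_sd = sigma2 * C @ C.T          # covariance propagation
    P_s = np.linalg.inv(cov_sd)        # weight matrix of the single differences
    # the off-diagonal n x n block is +E or -E (times sigma^2): the baselines are correlated
    print(name, cov_sd[0, n], 3 * sigma2 * P_s[0, n])
```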

6.6.2 Double Differences

Double differences are formed between two single differences related to two observed satellites as

$$ {\text{DD}}_{i1,i2}^{k1,k2} (O) = {\text{SD}}_{i1,i2}^{k2} (O) - {\text{SD}}_{i1,i2}^{k1} (O) $$
(6.100)

or

$$ {\text{DD}}_{i1,i2}^{k1,k2} (O) = (O_{i2}^{k2} - O_{i1}^{k2} ) - (O_{i2}^{k1} - O_{i1}^{k1} ), $$
(6.101)

where k1 and k2 are the ID numbers of the two satellites. Supposing the original observables have the same variance σ 2, the double-differenced observables then have a variance of 4σ 2. Considering Eqs. 6.89–6.92, one has

$$ {\text{DD}}_{i1,i2}^{k1,k2} (R(j)) = \rho_{i2}^{k2} - \rho_{i1}^{k2} - \rho_{i2}^{k1} + \rho_{i1}^{k1} + {\text{dd}}\delta_{\text{ion}} (j) + {\text{dd}}\delta_{\text{trop}} + {\text{dd}}\varepsilon_{\text{c}} , $$
(6.102)
$$ \begin{aligned} {\text{DD}}_{i1,i2}^{k1,k2} (\lambda_{j} \varPhi (j)) & = \rho_{i2}^{k2} - \rho_{i1}^{k2} - \rho_{i2}^{k1} + \rho_{i1}^{k1} + \lambda_{j} (N_{i2}^{k2} (j) - N_{i1}^{k2} (j) \\ & \quad - N_{i2}^{k1} (j) + N_{i1}^{k1} (j)) - {\text{dd}}\delta_{\text{ion}} (j) + {\text{dd}}\delta_{\text{trop}} + {\text{dd}}\varepsilon_{\text{p}} ,\quad {\text{and}} \\ \end{aligned} $$
(6.103)
$$ {\text{DD}}_{i1,i2}^{k1,k2} (D(j)) = \frac{{\dot{\rho }_{i2}^{k2} - \dot{\rho }_{i1}^{k2} - \dot{\rho }_{i2}^{k1} + \dot{\rho }_{i1}^{k1} }}{{\lambda_{j} }} + {\text{dd}}\varepsilon_{\text{d}} , $$
(6.104)

where ddδ ion(j) and ddδ trop are the differenced ionospheric and tropospheric effects at the two stations related to the two satellites, respectively. For the ionosphere-free combined observables (denoted by j = 4 to distinguish them), the ionospheric error terms vanish from the above equations.

The most important property of double differences is that the clock error terms in the equation (model) are completely eliminated. It should be emphasised that the clock error, which implicitly affects the computation of the position of the satellite, must still be carefully considered. Ionospheric and tropospheric effects are reduced greatly through difference forming, especially for those stations that are not far apart. Double-differenced Doppler directly describes the geometry change. Double-differenced ambiguities can be denoted by

$$ N_{i1,i2}^{k1,k2} (j) = N_{i2}^{k2} (j) - N_{i1}^{k2} (j) - N_{i2}^{k1} (j) + N_{i1}^{k1} (j). $$
(6.105)

For convenience in the case of a change of the reference satellite, the original (undifferenced) ambiguities are used in Eq. 6.103.

For the single-difference observable vector

$$ {\text{SD}}(O) = \left( {\begin{array}{*{20}c} {O_{i1,i2}^{k1} } & {O_{i1,i2}^{k2} } & {O_{i1,i2}^{k3} } \\ \end{array} } \right)^{\text{T}} \quad {\text{and}}\quad \text{cov} ({\text{SD}}(O)) = 2\sigma^{2} E, $$
(6.106)

the double differences

$$ {\text{DD}}(O) = \left( {\begin{array}{*{20}c} {O_{i1,i2}^{k1,k2} } & {O_{i1,i2}^{k1,k3} } \\ \end{array} } \right)^{\text{T}} $$
(6.107)

can be formed by a linear transformation

$$ {\text{DD}}(O) = C_{\text{d}} \cdot {\text{SD}}(O), $$
(6.108)
$$ C_{\text{d}} = \left( {\begin{array}{*{20}c} { - 1} & 1 & 0 \\ { - 1} & 0 & 1 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} { - I_{m} } & E \\ \end{array}_{m \times m} } \right)\quad \left( {{\text{here}}\;m = {2}} \right), $$
(6.109)

where E is an identity matrix of size m × m, I is a 1 vector of size m (all elements of the vector are 1), m is the number of formed double differences , and m = n − 1. The covariance matrix of the double differences is then

$$ \text{cov} ({\text{DD}}(O)) = C_{\text{d}} \cdot \text{cov} ({\text{SD}}(O)) \cdot C_{\text{d}}^{\text{T}} = 2\sigma^{2} C_{\text{d}} \cdot C_{\text{d}}^{\text{T}} = 2\sigma^{2} \left( {\begin{array}{*{20}c} 2 & 1 \\ 1 & 2 \\ \end{array} } \right). $$
(6.110)

For single and double differences

$$ {\text{SD}}(O) = \left( {\begin{array}{*{20}c} {O_{i1,i2}^{k1} } & {O_{i1,i2}^{k2} } & {O_{i1,i2}^{k3} } & {O_{i1,i2}^{k4} } \\ \end{array} } \right)^{\text{T}} ,\quad \text{cov} ({\text{SD}}(O)) = 2\sigma^{2} E, \quad {\text{and}} $$
(6.111)
$$ {\text{DD}}(O) = \left( {\begin{array}{*{20}c} {O_{i1,i2}^{k1,k2} } & {O_{i1,i2}^{k1,k3} } & {O_{i1,i2}^{k1,k4} } \\ \end{array} } \right)^{\text{T}} , $$
(6.112)

the linear transformation matrix C d and the covariance matrix can be obtained by

$$ C_{\text{d}} = \left( {\begin{array}{*{20}c} { - 1} & 1 & 0 & 0 \\ { - 1} & 0 & 1 & 0 \\ { - 1} & 0 & 0 & 1 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} { - I} & E \\ \end{array} } \right)\quad {\text{and}} $$
(6.113)
$$ \text{cov} ({\text{DD}}(O)) = C_{\text{d}} \cdot \text{cov} ({\text{SD}}(O)) \cdot C_{\text{d}}^{\text{T}} = 2\sigma^{2} C_{\text{d}} \cdot C_{\text{d}}^{\text{T}} = 2\sigma^{2} \left( {\begin{array}{*{20}c} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \\ \end{array} } \right). $$
(6.114)

For the general case of

$$ \begin{aligned} {\text{SD}}(O) & = \left( {\begin{array}{*{20}c} {O_{i1,i2}^{k1} } & {O_{i1,i2}^{k2} } & {O_{i1,i2}^{k3} } & \ldots & {O_{i1,i2}^{kn} } \\ \end{array} } \right)^{\text{T}} ,\quad \text{cov} ({\text{SD}}(O)) = 2\sigma^{2} E, \quad {\text{and}} \\ {\text{DD}}(O) & = \left( {\begin{array}{*{20}c} {O_{i1,i2}^{k1,k2} } & {O_{i1,i2}^{k1,k3} } & \ldots & {O_{i1,i2}^{k1,km} } \\ \end{array} } \right)^{\text{T}} , \\ \end{aligned} $$
(6.115)

it is obvious that the general transformation matrix C d and the related covariance matrix can be represented as

$$ C_{\text{d}} = \left( {\begin{array}{*{20}c} { - I_{m} } & E \\ \end{array}_{m \times m} } \right)\quad {\text{and}} $$
(6.116)
$$ \text{cov} ({\text{DD}}(O)) = C_{\text{d}} \text{cov} ({\text{SD}}(O))C_{\text{d}}^{\text{T}} = 2\sigma^{2} C_{\text{d}} C_{\text{d}}^{\text{T}} = 2\sigma^{2} \left( {I_{m \times m} + E_{m \times m} } \right) $$
(6.117)

where I m×m is an m × m matrix whose elements are all 1, and the weight matrix has the form of

$$ P = [\text{cov} ({\text{DD}}(O))]^{ - 1} = \frac{1}{{2\sigma^{2} n}}\left( {nE_{m \times m} - I_{m \times m} } \right), $$
(6.118)

where n = m + 1. Equation 6.118 can be verified by an identity matrix test (i.e. P · cov(DD(O)) = E).
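
The identity matrix test mentioned above is easy to carry out numerically. The following sketch (Python/NumPy assumed; m is an arbitrary illustrative value) forms C d of Eq. 6.116, propagates the covariance according to Eq. 6.117, and confirms that the weight matrix of Eq. 6.118 is indeed its inverse.

```python
import numpy as np

m = 5                                        # number of double differences (illustrative)
n = m + 1                                    # number of commonly viewed satellites
sigma2 = 1.0                                 # variance of the original observables

C_d = np.hstack([-np.ones((m, 1)), np.eye(m)])            # Eq. 6.116: (-I_m  E)
cov_dd = 2 * sigma2 * (C_d @ C_d.T)                       # Eq. 6.117: 2*sigma^2*(I + E)
P = (n * np.eye(m) - np.ones((m, m))) / (2 * sigma2 * n)  # Eq. 6.118

print(np.allclose(P @ cov_dd, np.eye(m)))                 # identity matrix test: True
```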

In the case of three stations, supposing n common satellites (k1, k2,…, kn) are viewed, then the single and double differences can be written as

$$ \begin{aligned} {\text{SD}}_{i,j} (O) & = \left( {\begin{array}{*{20}c} {O_{i,j}^{k1} } & {O_{i,j}^{k2} } & {O_{i,j}^{k3} } & \ldots & {O_{i,j}^{kn} } \\ \end{array} } \right)^{\text{T}} \quad {\text{and}} \\ {\text{DD}}_{i,j} (O) & = \left( {\begin{array}{*{20}c} {O_{i,j}^{k1,k2} } & {O_{i,j}^{k1,k3} } & \ldots & {O_{i,j}^{k1,km} } \\ \end{array} } \right)^{\text{T}} \quad i,j = i{1},i{2},i{3},4\quad i \ne j. \\ \end{aligned} $$
(6.119)

Then one has the transformation and covariance

$$ \begin{aligned} \left( {\begin{array}{*{20}c} {{\text{DD}}_{i1,i2} (O)} \\ {{\text{DD}}_{i1,i3} (O)} \\ \end{array} } \right) & = \left( {\begin{array}{*{20}c} {C_{\text{d}} } & 0 \\ 0 & {C_{\text{d}} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {{\text{SD}}_{i1,i2} (O)} \\ {{\text{SD}}_{i1,i3} (O)} \\ \end{array} } \right)\quad {\text{and}} \\ \text{cov} ({\text{DD}}) & = \left( {\begin{array}{*{20}c} {C_{\text{d}} } & 0 \\ 0 & {C_{\text{d}} } \\ \end{array} } \right)\text{cov} ({\text{SD}})\left( {\begin{array}{*{20}c} {C_{\text{d}} } & 0 \\ 0 & {C_{\text{d}} } \\ \end{array} } \right)^{\text{T}} = \sigma^{2} \left( {\begin{array}{*{20}c} {2C_{\text{d}} C_{\text{d}}^{\text{T}} } & {C_{\text{d}} C_{\text{d}}^{\text{T}} } \\ {C_{\text{d}} C_{\text{d}}^{\text{T}} } & {2C_{\text{d}} C_{\text{d}}^{\text{T}} } \\ \end{array} } \right). \\ \end{aligned} $$

Because cov(SD) depends on how the baselines are formed, cov(DD) also depends on the baseline forming. A baseline-wise processing of the GPS data of a network using double differences is equivalent to omitting the correlation between the baselines.
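
As an illustration of this dependence, the following sketch (Python/NumPy assumed; n and σ² are illustrative values) propagates the radial single-difference covariance of Eq. 6.99 through the block-diagonal double-difference transformation and shows that the two baselines remain correlated.

```python
import numpy as np

n, sigma2 = 4, 1.0                     # satellites and variance (illustrative values)
m = n - 1
E = np.eye(n)

C_d = np.hstack([-np.ones((m, 1)), np.eye(m)])                      # single -> double differences
cov_sd = sigma2 * np.kron(np.array([[2.0, 1.0], [1.0, 2.0]]), E)    # Eq. 6.99 (radial forming)
C_d2 = np.kron(np.eye(2), C_d)                                      # block-diagonal (C_d, C_d)

cov_dd = C_d2 @ cov_sd @ C_d2.T
# identical to sigma^2 * kron([[2,1],[1,2]], C_d C_d^T); the off-diagonal blocks are non-zero
print(np.allclose(cov_dd, sigma2 * np.kron([[2.0, 1.0], [1.0, 2.0]], C_d @ C_d.T)))
```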

6.6.3 Triple Differences

Triple differences are formed between two double differences related to the same stations and satellites at the two adjacent epochs as

$$ {\text{TD}}_{i1,i2}^{k1,k2} (O(t1,t2)) = {\text{DD}}_{i1,i2}^{k1,k2} (O(t2)) - {\text{DD}}_{i1,i2}^{k1,k2} (O(t1)) $$

or

$$ \begin{aligned} {\text{TD}}_{i1,i2}^{k1,k2} (O(t1,t2)) & = O_{i2}^{k2} (t2) - O_{i1}^{k2} (t2) - O_{i2}^{k1} (t2) + O_{i1}^{k1} (t2) \\ & \quad - O_{i2}^{k2} (t1) + O_{i1}^{k2} (t1) + O_{i2}^{k1} (t1) - O_{i1}^{k1} (t1), \\ \end{aligned} $$
(6.120)

where t1 and t2 are two adjacent epochs. Supposing the original observables have the same variance σ 2, the triple-differenced observables then have a variance of 8σ 2. Considering Eqs. 6.102–6.104, one has

$$ \begin{aligned} {\text{TD}}_{i1,i2}^{k1,k2} (R(j,t1,t2)) & = \rho_{i2}^{k2} (t2) - \rho_{i1}^{k2} (t2) - \rho_{i2}^{k1} (t2) + \rho_{i1}^{k1} (t2) - \rho_{i2}^{k2} (t1) \\ & \quad + \rho_{i1}^{k2} (t1) + \rho_{i2}^{k1} (t1) - \rho_{i1}^{k1} (t1) + td\varepsilon_{\text{c}} , \\ \end{aligned} $$
(6.121)
$$ \begin{aligned} {\text{TD}}_{i1,i2}^{k1,k2} (\lambda_{j} \varPhi (j,t1,t2)) & = \rho_{i2}^{k2} (t2) - \rho_{i1}^{k2} (t2) - \rho_{i2}^{k1} (t2) + \rho_{i1}^{k1} (t2) - \rho_{i2}^{k2} (t1) \\ & \quad + \rho_{i1}^{k2} (t1) + \rho_{i2}^{k1} (t1) - \rho_{i1}^{k1} (t1) + \delta N + t{\text{d}}\varepsilon_{\text{p}} , \quad {\text{and}} \\ \end{aligned} $$
(6.122)
$$ \begin{aligned} {\text{TD}}_{i1,i2}^{k1,k2} (D(j,t1,t2)) & = \frac{{\dot{\rho }_{i2}^{k2} (t2) - \dot{\rho }_{i1}^{k2} (t2) - \dot{\rho }_{i2}^{k1} (t2) + \dot{\rho }_{i1}^{k1} (t2)}}{{\lambda_{j} }} \\ & \quad - \frac{{\dot{\rho }_{i2}^{k2} (t1) - \dot{\rho }_{i1}^{k2} (t1) - \dot{\rho }_{i2}^{k1} (t1) + \dot{\rho }_{i1}^{k1} (t1)}}{{\lambda_{j} }} + t{\text{d}}\varepsilon_{\text{d}} , \\ \end{aligned} $$
(6.123)

where

$$ \delta N = \lambda_{j} (N_{i1,i2}^{k1,k2} (j,t2) - N_{i1,i2}^{k1,k2} (j,t1)). $$
(6.124)

Ionospheric and tropospheric effects are eliminated. If there are no cycle slips during this time span, the term in Eq. 6.124 is zero. Therefore, the triple differences of Eq. 6.122 can also be used to check for cycle slips. Through triple-difference forming, a systematic cycle slip appears as an outlier-like effect.

The most important property of triple differences is that only the geometric change is left in the models. Triple differences of Doppler describe the acceleration of the position.

For double differences

$$ {\text{DD}}(O(t)) = \left( {\begin{array}{*{20}c} {O_{i1,i2}^{k1,k2} (t)} & {O_{i1,i2}^{k1,k3} (t)} & \ldots & {O_{i1,i2}^{k1,km} (t)} \\ \end{array} } \right)^{\text{T}} , $$
(6.125)

one has

$$ {\text{TD}}(O(t1,t2)) = C_{T} \cdot \left( {\begin{array}{*{20}c} {{\text{DD}}(O(t1))} \\ {{\text{DD}}(O(t2))} \\ \end{array} } \right), $$
(6.126)

where

$$ C_{T} = \left( {\begin{array}{*{20}c} { - E_{m \times m} } & E \\ \end{array}_{m \times m} } \right). $$
(6.127)

Then the related covariance matrix can be represented as

$$ \begin{aligned} \text{cov} ({\text{TD(}}O(t1,t2))) & = C_{T} \cdot \text{cov} ({\text{DD(}}O)) \cdot C_{T}^{\text{T}} \\ & = C_{T} \cdot C_{d2} \,\text{cov} ({\text{SD(}}O)) \cdot C_{{{\text{d}}2}}^{\text{T}} C_{T}^{\text{T}} = 2\sigma^{2} C_{T} C_{{{\text{d}}2}} C_{{{\text{d}}2}}^{\text{T}} C_{T}^{\text{T}} , \\ \end{aligned} $$
(6.128)

where C d2 is the double-difference transformation matrix of the two epochs. Because double differences are formed epoch by epoch and are independent between epochs, C d2 is a block-diagonal matrix with C d as its diagonal blocks, i.e.

$$ C_{{{\text{d}}2}} = \left( {\begin{array}{*{20}c} {C_{\text{d}} } & 0 \\ 0 & {C_{\text{d}} } \\ \end{array} } \right). $$
(6.129)
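
The following short numerical sketch (Python/NumPy assumed; m and σ² are illustrative values) of Eqs. 6.126–6.129 propagates the single-baseline covariance through C d2 and C T and confirms the variance of 8σ² per triple difference stated above.

```python
import numpy as np

m, sigma2 = 5, 1.0                                  # double differences per epoch, variance (illustrative)
C_d = np.hstack([-np.ones((m, 1)), np.eye(m)])      # double-difference transformation
C_T = np.hstack([-np.eye(m), np.eye(m)])            # Eq. 6.127
C_d2 = np.kron(np.eye(2), C_d)                      # Eq. 6.129

cov_td = 2 * sigma2 * C_T @ C_d2 @ C_d2.T @ C_T.T   # Eq. 6.128
print(np.allclose(np.diag(cov_td), 8 * sigma2))     # each triple difference has variance 8*sigma^2
```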

It is worth noting that the triple differences formed from epochs (t1, t2) are correlated with those formed from epochs (t0, t1) and (t2, t3). Such correlation makes a sequential processing of the triple-difference data very complicated. Using the above covariance formula sequentially implies neglecting the correlation with the preceding and the following triple differences.

Taking the correlation between the baselines into account, an exact correlation description of the triple differences of a GPS network becomes very difficult.

6.7 Equivalence of the Uncombined and Combining Algorithms

Uncombined and combining algorithms are standard GPS data processing methods, which can often be found in the literature (cf., e.g., Leick 2004; Hofmann-Wellenhof et al. 2001). Different combinations have different properties and are advantageous for dealing with the data and solving particular problems (Hugentobler et al. 2001; Kouba and Heroux 2001; Zumberge et al. 1997). The equivalence between the undifferenced and differencing algorithms was proved and a unified equivalent data processing method proposed by Xu (2002, cf. Sect. 6.8). Whether the uncombined and combining algorithms are also equivalent is an interesting question and will be addressed here in detail (cf. Xu et al. 2006a).

6.7.1 Uncombined GPS Data Processing Algorithms

  • Original GPS Observation Equations

The original GPS code pseudorange and carrier phase measurements represented in Eqs. 6.44 and 6.45 (cf. Sect. 6.5) can be simplified as

$$ R_{j} = C_{\rho } + \delta_{\text{ion}} (j), $$
(6.130)
$$ \lambda_{j} \varPhi_{j} = C_{\rho } + \lambda_{j} N_{j} - \delta_{\text{ion}} (j), \quad j = {1},{2} $$
(6.131)

where

$$ C_{\rho } = \rho - (\delta t_{\text{r}} - \delta t_{k} )c + \delta_{\text{trop}} + \delta_{\text{tide}} + \delta_{\text{rel}} + \varepsilon_{i} ,\quad i = c,p $$
(6.132)
$$ \delta_{\text{ion}} (j) = \frac{{A_{1} }}{{f_{j}^{2} }} = \frac{{A_{1}^{z} }}{{f_{j}^{2} }}F = \frac{{f_{s}^{2} B_{1} }}{{f_{j}^{2} }} = \frac{{f_{s}^{2} B_{1}^{z} }}{{f_{j}^{2} }}F. $$
(6.133)

where the symbols have the same meanings as in Eqs. 6.44–6.47. j is the index of the frequency f and wavelength λ. A 1 and \( A_{1}^{z} \) are the ionospheric parameters in the signal path and zenith directions; B 1 and \( B_{1}^{z} \) are A 1 and \( A_{1}^{z} \) scaled with \( f_{s}^{2} \) for numerical reasons. c denotes the speed of light, and the index c denotes code. C ρ is called the geometry, and N j is the ambiguity. For simplicity, the residuals of the codes (and phases) are denoted with the same symbol ε c (and ε p) and have the same standard deviations σ c (and σ p). Equations 6.130 and 6.131 can be written in matrix form with weight matrix P as (Blewitt 1998)

$$ \left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 & 0 & {f_{s}^{2} /f_{1}^{2} } & 1 \\ 0 & 0 & {f_{s}^{2} /f_{2}^{2} } & 1 \\ 1 & 0 & { - f_{s}^{2} /f_{1}^{2} } & 1 \\ 0 & 1 & { - f_{s}^{2} /f_{2}^{2} } & 1 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right),\quad P = \left( {\begin{array}{*{20}c} {\sigma_{c}^{2} } & 0 & 0 & 0 \\ 0 & {\sigma_{c}^{2} } & 0 & 0 \\ 0 & 0 & {\sigma_{p}^{2} } & 0 \\ 0 & 0 & 0 & {\sigma_{p}^{2} } \\ \end{array} } \right)^{ - 1} . $$
(6.134)
  • Solutions of Uncombined Observation Equations

Equation 6.134 includes the observations of one satellite viewed by one receiver at one epoch. Alternatively, Eq. 6.134 can be considered a transformation between the observations and unknowns, and the transformation is a linear and invertible one. Denoting

$$ a = \frac{{f_{1}^{2} }}{{f_{1}^{2} - f_{2}^{2} }},\quad b = \frac{{ - f_{2}^{2} }}{{f_{1}^{2} - f_{2}^{2} }},\quad g = \frac{1}{{f_{1}^{2} }} - \frac{1}{{f_{2}^{2} }},\quad q = gf_{s}^{2} , $$
(6.135)

then one has relations of

$$ 1 - a = b,\quad \frac{1}{{f_{1}^{2} g}} = b,\quad \frac{1}{{f_{2}^{2} g}} = - a $$
(6.136)

and

$$ \left( {\begin{array}{*{20}c} 0 & 0 & {f_{s}^{2} /f_{1}^{2} } & 1 \\ 0 & 0 & {f_{s}^{2} /f_{2}^{2} } & 1 \\ 1 & 0 & { - f_{s}^{2} /f_{1}^{2} } & 1 \\ 0 & 1 & { - f_{s}^{2} /f_{2}^{2} } & 1 \\ \end{array} } \right)^{ - 1} = \left( {\begin{array}{*{20}c} {1 - 2a} & { - 2b} & 1 & 0 \\ { - 2a} & {2a - 1} & 0 & 1 \\ {1/q} & { - 1/q} & 0 & 0 \\ a & b & 0 & 0 \\ \end{array} } \right) = T. $$
(6.137)

where a and b are the coefficients of the ionosphere-free combinations of the L1 and L2 observables. The solution of Eq. 6.134 is obtained by multiplying Eq. 6.134 by the transformation matrix T:

$$ \left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {1 - 2a} & { - 2b} & 1 & 0 \\ { - 2a} & {2a - 1} & 0 & 1 \\ {1/q} & { - 1/q} & 0 & 0 \\ a & b & 0 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right). $$
(6.138)

The related covariance matrix of the above solution vector is then

$$ \begin{aligned} Q & = {\text{cov}}\left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right) = T\left( {\begin{array}{*{20}c} {\sigma_{c}^{2} } & 0 & 0 & 0 \\ 0 & {\sigma_{c}^{2} } & 0 & 0 \\ 0 & 0 & {\sigma_{p}^{2} } & 0 \\ 0 & 0 & 0 & {\sigma_{p}^{2} } \\ \end{array} } \right)T^{\text{T}} \\ & = \left( {\begin{array}{*{20}c} {(1 - 2a)^{2} + 4b^{2} + \frac{{\sigma_{p}^{2} }}{{\sigma_{c}^{2} }}} & {4a^{2} - 4ab - 2a + 2b} & {\frac{1 - 2a + 2b}{q}} & {a - 2a^{2} - 2b^{2} } \\ {4a^{2} - 4ab - 2a + 2b} & {8a^{2} - 4a + 1 + \frac{{\sigma_{p}^{2} }}{{\sigma_{\text{c}}^{2} }}} & {\frac{1 - 4a}{q}} & { - 2a^{2} + 2ab - b} \\ {\frac{1 - 2a + 2b}{q}} & {\frac{1 - 4a}{q}} & {\frac{2}{{q^{2} }}} & {\frac{a - b}{q}} \\ {a - 2a^{2} - 2b^{2} } & { - 2a^{2} + 2ab - b} & {\frac{a - b}{q}} & {a^{2} + b^{2} } \\ \end{array} } \right)\sigma_{\text{c}}^{2} . \\ \end{aligned} $$
(6.139)

Equation 6.139 can be simplified by using the relation 1 − a = b, neglecting the terms in (σ p/σ c)2 (because σ p/σ c is less than 0.01), and letting f s = f 1 (so that q = 1/b). Taking the ratio of the frequencies into account (f 1 = 154f 0 and f 2 = 120f 0, where f 0 is the fundamental frequency), one has approximately

$$ {\text{cov}}\left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {26.2971} & {33.4800} & {11.1028} & { - 15.1943} \\ {33.4800} & {42.6629} & {14.1943} & { - 19.2857} \\ {11.1028} & {14.1943} & {4.7786} & { - 6.3243} \\ { - 15.1943} & { - 19.2857} & { - 6.3243} & {8.8700} \\ \end{array} } \right)\sigma_{c}^{2} $$
(6.140)

The precision of the solutions will be further discussed in Sect. 6.7.3. The parameterisation of the GPS observation models is an important issue; it is discussed in Chap. 9 and in Blewitt (1998) and Xu (2004).
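
The algebra of Eqs. 6.135–6.140 can be checked numerically. The sketch below (Python/NumPy assumed; σ c is set to 1 and σ p is neglected, as in the simplification above) verifies that T of Eq. 6.137 inverts the coefficient matrix of Eq. 6.134 and reproduces the numbers of Eq. 6.140.

```python
import numpy as np

f1, f2 = 154.0, 120.0                       # frequencies in units of f0
fs = f1                                     # as assumed above, so that q = 1/b
a = f1**2 / (f1**2 - f2**2)
b = -f2**2 / (f1**2 - f2**2)
q = (1.0 / f1**2 - 1.0 / f2**2) * fs**2     # Eq. 6.135

A = np.array([[0, 0, fs**2 / f1**2, 1],     # coefficient matrix of Eq. 6.134
              [0, 0, fs**2 / f2**2, 1],
              [1, 0, -fs**2 / f1**2, 1],
              [0, 1, -fs**2 / f2**2, 1]])
T = np.array([[1 - 2*a, -2*b, 1, 0],        # Eq. 6.137
              [-2*a, 2*a - 1, 0, 1],
              [1/q, -1/q, 0, 0],
              [a, b, 0, 0]])

print(np.allclose(T @ A, np.eye(4)))        # T is the inverse of the coefficient matrix
Q = T @ np.diag([1.0, 1.0, 0.0, 0.0]) @ T.T # Eq. 6.139 with sigma_c = 1 and sigma_p -> 0
print(np.round(Q, 4))                       # reproduces the matrix of Eq. 6.140
```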

6.7.2 Combining Algorithms of GPS Data Processing

  • Ionosphere-Free Combinations

Letting transformation matrix

$$ T_{1} = \left( {\begin{array}{*{20}c} 1 & { - 1} & 0 & 0 \\ a & b & 0 & 0 \\ 0 & 0 & a & b \\ {1/2} & 0 & {1/2} & 0 \\ \end{array} } \right), $$
(6.141)

and applying the transform to the Eq. 6.134, one has

$$ T_{1} \left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 & 0 & q & 0 \\ 0 & 0 & 0 & 1 \\ a & b & 0 & 1 \\ {1/2} & 0 & 0 & 1 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right). $$
(6.142)

The last three equations of Eq. 6.142 are free of the ionospheric parameter B 1 and are traditionally called ionosphere-free combinations. Solving only the ionosphere-free equations or the whole of Eq. 6.142 leads to the same results. Equation 6.142 has the unique solution vector

$$ \left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 & { - 2} & 0 & 2 \\ 0 & {(2a - 1)/b} & {1/b} & { - 2a/b} \\ {1/q} & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{array} } \right)T_{1} \left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right), $$
(6.143)

or (noticing (1 − a) = b, cf. Eq. 6.136)

$$ \left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {1 - 2a} & { - 2b} & 1 & 0 \\ { - 2a} & {2a - 1} & 0 & 1 \\ {1/q} & { - 1/q} & 0 & 0 \\ a & b & 0 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right). $$
(6.144)

Equations 6.144 and 6.138 are identical. Therefore, the covariance matrix of the solution vector on the left side of Eq. 6.144 is the same as that given in Eq. 6.139. This shows that the uncombined algorithms and the ionosphere-free combinations are equivalent in this case.

  • Geometry-Free Combinations

Letting transformation matrix

$$ T_{2} = \left( {\begin{array}{*{20}c} a & b & 0 & 0 \\ 1 & { - 1} & 0 & 0 \\ 0 & 0 & 1 & { - 1} \\ { - 1} & 0 & 1 & 0 \\ \end{array} } \right), $$
(6.145)

and applying the transformation to Eq. 6.134, one has

$$ T_{2} \left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 & 0 & 0 & 1 \\ 0 & 0 & q & 0 \\ 1 & { - 1} & { - q} & 0 \\ 1 & 0 & { - 2f_{s}^{2} /f_{1}^{2} } & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right). $$
(6.146)

The last three equations of Eq. 6.146 are free of the geometric term C ρ and are traditionally called geometry-free combinations. Solving only the geometry-free equations or the whole of Eq. 6.146 leads to the same results. Equation 6.146 has the unique solution vector

$$ \left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 & {2/(f_{1}^{2} g)} & 0 & 1 \\ 0 & {2/(f_{1}^{2} g) - 1} & { - 1} & 1 \\ 0 & {1/q} & 0 & 0 \\ 1 & 0 & 0 & 0 \\ \end{array} } \right)T_{2} \left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right), $$
(6.147)

or (noticing \( 1/(f_{1}^{2} g) = b \), cf. Eq. 6.136)

$$ \left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {2b - 1} & { - 2b} & 1 & 0 \\ {2b - 2} & {1 - 2b} & 0 & 1 \\ {1/q} & { - 1/q} & 0 & 0 \\ a & b & 0 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right). $$
(6.148)

Taking the relations of Eq. 6.136 (i.e. b = 1 − a) into account, Eqs. 6.148 and 6.138 are identical. Therefore, the covariance matrix of the solution vector on the left side of Eq. 6.148 is identical with that of Eq. 6.139. This shows that the uncombined algorithms and the geometry-free combinations are equivalent in this case.

  • Ionosphere-Free and Geometry-Free Combinations

Letting transformation matrix

$$ T_{3} = \left( {\begin{array}{*{20}c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & { - 1} & 1 & 0 \\ 0 & { - 1} & 0 & 1 \\ \end{array} } \right), $$
(6.149)

one then has

$$ T_{3} T_{1} = \left( {\begin{array}{*{20}c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & { - 1} & 1 & 0 \\ 0 & { - 1} & 0 & 1 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 & { - 1} & 0 & 0 \\ a & b & 0 & 0 \\ 0 & 0 & a & b \\ {1/2} & 0 & {1/2} & 0 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 1 & { - 1} & 0 & 0 \\ a & b & 0 & 0 \\ { - a} & { - b} & a & b \\ {1/2 - a} & { - b} & {1/2} & 0 \\ \end{array} } \right). $$
(6.150)

Applying the transformation 6.150 to Eq. 6.134 or applying the transformation 6.149 to Eq. 6.142 leads to the same results, and one has

$$ T_{3} T_{1} \left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 & 0 & q & 0 \\ 0 & 0 & 0 & 1 \\ a & b & 0 & 0 \\ {1/2} & 0 & 0 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right) $$
(6.151)

or

$$ \left( {\begin{array}{*{20}c} {R_{1} - R_{2} } \\ {aR_{1} + bR_{2} } \\ {a\lambda_{1} \varPhi_{1} + b\lambda_{2} \varPhi_{2} - aR_{1} - bR_{2} } \\ {(\lambda_{1} \varPhi_{1} + R_{1} )/2 - aR_{1} - bR_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 & 0 & q & 0 \\ 0 & 0 & 0 & 1 \\ a & b & 0 & 0 \\ {1/2} & 0 & 0 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right). $$
(6.152)

The last two equations of Eq. 6.152 are free of both the ionospheric and the geometric terms; they are called ionosphere-geometry-free combinations. Solving the ionosphere-free and geometry-free equations or directly solving Eq. 6.152 leads to the same results. Equation 6.152 has the unique solution vector

$$ \left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 & 0 & 0 & 2 \\ 0 & 0 & {1/b} & { - 2a/b} \\ {1/q} & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{array} } \right)T_{3} T_{1} \left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right), $$
(6.153)

or (noticing (1 − a)/b = 1, cf. Eq. 6.136)

$$ \left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {1 - 2a} & { - 2b} & 1 & 0 \\ { - 2a} & {2a - 1} & 0 & 1 \\ {1/q} & { - 1/q} & 0 & 0 \\ a & b & 0 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right). $$
(6.154)

Equations 6.154 and 6.138 are identical. This shows that the uncombined algorithms and the ionosphere-geometry-free combinations are equivalent in this discussed case.

  • Diagonal Combinations

Letting transformation matrix

$$ T_{4} = \left( {\begin{array}{*{20}c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & { - 2a} \\ 0 & 0 & 0 & 1 \\ \end{array} } \right), $$
(6.155)

one has

$$ T_{4} T_{3} T_{1} = \left( {\begin{array}{*{20}c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & { - 2a} \\ 0 & 0 & 0 & 1 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 & { - 1} & 0 & 0 \\ a & b & 0 & 0 \\ { - a} & { - b} & a & b \\ {1/2 - a} & { - b} & {1/2} & 0 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 1 & { - 1} & 0 & 0 \\ a & b & 0 & 0 \\ { - 2ab} & {b(2a - 1)} & 0 & b \\ {1/2 - a} & { - b} & {1/2} & 0 \\ \end{array} } \right). $$
(6.156)

Applying the transformation 6.156 to Eq. 6.134, or the transformation 6.155 to Eq. 6.151, yields the same result:

$$ T_{4} T_{3} T_{1} \left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 & 0 & q & 0 \\ 0 & 0 & 0 & 1 \\ 0 & b & 0 & 0 \\ {1/2} & 0 & 0 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right). $$
(6.157)

In the above equation, the ionosphere, the geometry, and the ambiguities are decoupled from each other; each combined observable on the left-hand side depends on only one unknown. Such combinations are called diagonal combinations. The solution vector of Eq. 6.157 can then be easily derived as

$$ \left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 & 0 & 0 & 2 \\ 0 & 0 & {1/b} & 0 \\ {1/q} & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{array} } \right)T_{4} T_{3} T_{1} \left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right) $$
(6.158)

or

$$ \left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} } \\ {\lambda_{2} N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {1 - 2a} & { - 2b} & 1 & 0 \\ { - 2a} & {2a - 1} & 0 & 1 \\ {1/q} & { - 1/q} & 0 & 0 \\ a & b & 0 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right). $$
(6.159)

Equations 6.159 and 6.138 are identical, which shows that the uncombined algorithms and diagonal combinations are equivalent in the case discussed.

  • General Combinations

For arbitrary combinations, as long as the transformation matrix is invertible, the transformed equations are equivalent to the original equations by elementary algebra. The solution vector and the variance–covariance matrix are identical. In other words, regardless of the combination used, neither the solutions nor the precision of the solutions will differ. Different combinations merely make specific related problems easier to deal with.

  • Wide- and Narrow-Lane Combinations

Denoting

$$ T_{5} = \left( {\begin{array}{*{20}c} 0 & 0 & 0 & 2 \\ 0 & 0 & {1/b} & 0 \\ {1/q} & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{array} } \right) $$
(6.160)

and letting transformation matrix

$$ T_{6} = \left( {\begin{array}{*{20}c} {\frac{1}{{\lambda_{1} }}} & {\frac{ - 1}{{\lambda_{2} }}} & 0 & 0 \\ {\frac{1}{{\lambda_{1} }}} & {\frac{1}{{\lambda_{2} }}} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} } \right), $$
(6.161)

one may form the wide and narrow lanes (Petovello 2006) directly by multiplying Eq. 6.158 by the matrix T 6 of Eq. 6.161, obtaining the related wide- and narrow-lane ambiguities

$$ \left( {\begin{array}{*{20}c} {N_{1} - N_{2} } \\ {N_{1} + N_{2} } \\ {B_{1} } \\ {C_{\rho } } \\ \end{array} } \right) = T_{6} T_{5} T_{4} T_{3} T_{1} \left( {\begin{array}{*{20}c} {R_{1} } \\ {R_{2} } \\ {\lambda_{1} \varPhi_{1} } \\ {\lambda_{2} \varPhi_{2} } \\ \end{array} } \right). $$
(6.162)

Indeed, T 5 T 4 T 3 T 1 = T. Because the solutions of the different combinations are unique and identical, any direct combination of the solutions must also be equivalent. No combination will lead to a better solution or greater precision than any other. From this rigorous theoretical point of view, the traditional wide-lane ambiguity fixing technique may make the ambiguity search more efficient, but it does not lead to a better solution or better precision of the ambiguity.
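
The chain of transformations can also be verified numerically. The following sketch (Python/NumPy assumed; a, b, and q as in Eq. 6.135 with f s = f 1) confirms that T 5 T 4 T 3 T 1 equals T of Eq. 6.137.

```python
import numpy as np

f1, f2 = 154.0, 120.0
a = f1**2 / (f1**2 - f2**2)
b = -f2**2 / (f1**2 - f2**2)
q = (1.0 / f1**2 - 1.0 / f2**2) * f1**2     # fs = f1

T1 = np.array([[1, -1, 0, 0], [a, b, 0, 0], [0, 0, a, b], [0.5, 0, 0.5, 0]])   # Eq. 6.141
T3 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, -1, 1, 0], [0, -1, 0, 1]])      # Eq. 6.149
T4 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, -2*a], [0, 0, 0, 1]])     # Eq. 6.155
T5 = np.array([[0, 0, 0, 2], [0, 0, 1/b, 0], [1/q, 0, 0, 0], [0, 1, 0, 0]])    # Eq. 6.160
T  = np.array([[1 - 2*a, -2*b, 1, 0], [-2*a, 2*a - 1, 0, 1],
               [1/q, -1/q, 0, 0], [a, b, 0, 0]])                               # Eq. 6.137

print(np.allclose(T5 @ T4 @ T3 @ T1, T))    # True: all combination paths give Eq. 6.138
```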

6.7.3 Secondary GPS Data Processing Algorithms

  • In the Case of More Satellites in View

Up to now, the discussions have been limited to the observations of one satellite viewed by one receiver at one epoch. The original observation equation is given in Eq. 6.134. The solution vector and its covariance matrix are given in Eqs. 6.138 and 6.139, respectively. The elements of the covariance matrix depend on the coefficients of Eq. 6.134, and the coefficients of the observation equation depend on the method of parameterisation. For example, if \( B_{1}^{z} \) is used instead of B 1, then Eq. 6.134 becomes

$$ \left( {\begin{array}{*{20}c} {R_{1} (k)} \\ {R_{2} (k)} \\ {\lambda_{1} \varPhi_{1} (k)} \\ {\lambda_{2} \varPhi_{2} (k)} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 & 0 & {F_{k} f_{s}^{2} /f_{1}^{2} } & 1 \\ 0 & 0 & {F_{k} f_{s}^{2} /f_{2}^{2} } & 1 \\ 1 & 0 & { - F_{k} f_{s}^{2} /f_{1}^{2} } & 1 \\ 0 & 1 & { - F_{k} f_{s}^{2} /f_{2}^{2} } & 1 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} (k)} \\ {\lambda_{2} N_{2} (k)} \\ {B_{1}^{z} } \\ {C_{\rho } (k)} \\ \end{array} } \right), $$
(6.163)

where k is the index of the satellite. The ionospheric mapping function F k depends on the zenith distance of satellite k. The solution vector of Eq. 6.163 is then similar to that of Eq. 6.138:

$$ \left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} (k)} \\ {\lambda_{2} N_{2} (k)} \\ {B_{1}^{z} } \\ {C_{\rho } (k)} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {1 - 2a} & { - 2b} & 1 & 0 \\ { - 2a} & {2a - 1} & 0 & 1 \\ {1/q_{k} } & { - 1/q_{k} } & 0 & 0 \\ a & b & 0 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {R_{1} (k)} \\ {R_{2} (k)} \\ {\lambda_{1} \varPhi_{1} (k)} \\ {\lambda_{2} \varPhi_{2} (k)} \\ \end{array} } \right),\quad Q\left( k \right), $$
(6.164)

where q k  = qF k and Q(k) is the covariance matrix, which can be similarly derived and given by adding the index k to q in Q of Eq. 6.139. The terms on the right-hand side can be considered secondary “observations” of the unknowns on the left-hand side. If K satellites are viewed, one has the observation equations of one receiver

$$ \left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} (1)} \\ {\lambda_{2} N_{2} (1)} \\ {B_{1}^{z} } \\ {C_{\rho } (1)} \\ \vdots \\ {\lambda_{1} N_{1} (K)} \\ {\lambda_{2} N_{2} (K)} \\ {B_{1}^{z} } \\ {C_{\rho } (K)} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {1 - 2a} & { - 2b} & 1 & 0 & { \ldots \ldots .} & 0 & 0 & 0 & 0 \\ { - 2a} & {2a - 1} & 0 & 1 & { \ldots \ldots .} & 0 & 0 & 0 & 0 \\ {1/q_{1} } & { - 1/q_{1} } & 0 & 0 & { \ldots \ldots .} & 0 & 0 & 0 & 0 \\ a & b & 0 & 0 & { \ldots \ldots .} & 0 & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & {\ldots \ldots } & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & { \ldots \ldots .} & {1 - 2a} & { - 2b} & 1 & 0 \\ 0 & 0 & 0 & 0 & { \ldots \ldots .} & { - 2a} & {2a - 1} & 0 & 1 \\ 0 & 0 & 0 & 0 & { \ldots \ldots .} & {1/q_{K} } & { - 1/q_{K} } & 0 & 0 \\ 0 & 0 & 0 & 0 & { \ldots \ldots .} & a & b & 0 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {R_{1} (1)} \\ {R_{2} (1)} \\ {\lambda_{1} \varPhi_{1} (1)} \\ {\lambda_{2} \varPhi_{2} (1)} \\ \vdots \\ {R_{1} (K)} \\ {R_{2} (K)} \\ {\lambda_{1} \varPhi_{1} (K)} \\ {\lambda_{2} \varPhi_{2} (K)} \\ \end{array} } \right), $$
(6.165)

and variance matrix

$$ Q_{K} = \left( {\begin{array}{*{20}c} {Q(1)} & { \ldots \ldots .} & 0 \\ \vdots & {\ldots \ldots } & \vdots \\ 0 & { \ldots \ldots .} & {Q(K)} \\ \end{array} } \right). $$
(6.166)

Multiplying a transformation matrix

$$ T(K) = \left( {\begin{array}{*{20}c} 1 & 0 & 0 & 0 & { \ldots \ldots .} & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & { \ldots \ldots .} & 0 & 0 & 0 & 0 \\ 0 & 0 & {1/K} & 0 & { \ldots \ldots .} & 0 & 0 & {1/K} & 0 \\ 0 & 0 & 0 & 1 & { \ldots \ldots .} & 0 & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & {\ldots \ldots } & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & { \ldots \ldots .} & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & { \ldots \ldots .} & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & { \ldots \ldots .} & 0 & 0 & 0 & 1 \\ \end{array} } \right) $$
(6.167)

to Eq. 6.165, one has the solutions of GPS observation equations of one station

$$ \left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} (1)} \\ {\lambda_{2} N_{2} (1)} \\ {B_{1}^{z} } \\ {C_{\rho } (1)} \\ \vdots \\ {\lambda_{1} N_{1} (K)} \\ {\lambda_{2} N_{2} (K)} \\ {C_{\rho } (K)} \\ \end{array} } \right) = T(K)\left( {\begin{array}{*{20}c} {1 - 2a} & { - 2b} & 1 & 0 & { \ldots \ldots .} & 0 & 0 & 0 & 0 \\ { - 2a} & {2a - 1} & 0 & 1 & { \ldots \ldots .} & 0 & 0 & 0 & 0 \\ {1/q_{1} } & { - 1/q_{1} } & 0 & 0 & { \ldots \ldots .} & 0 & 0 & 0 & 0 \\ a & b & 0 & 0 & { \ldots \ldots .} & 0 & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & {\ldots \ldots } & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & { \ldots \ldots .} & {1 - 2a} & { - 2b} & 1 & 0 \\ 0 & 0 & 0 & 0 & { \ldots \ldots .} & { - 2a} & {2a - 1} & 0 & 1 \\ 0 & 0 & 0 & 0 & { \ldots \ldots .} & {1/q_{K} } & { - 1/q_{K} } & 0 & 0 \\ 0 & 0 & 0 & 0 & { \ldots \ldots .} & a & b & 0 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {R_{1} (1)} \\ {R_{2} (1)} \\ {\lambda_{1} \varPhi_{1} (1)} \\ {\lambda_{2} \varPhi_{2} (1)} \\ \vdots \\ {R_{1} (K)} \\ {R_{2} (K)} \\ {\lambda_{1}\Phi _{1} (K)} \\ {\lambda_{2} \varPhi_{2} (K)} \\ \end{array} } \right), $$
(6.168)

and the related

$$ Q = T(K)Q_{K} (T(K))^{\text{T}} $$
(6.169)

where the mapping function is used to combine the K ionospheric parameters into one. Similar discussions can be made for the case of using more receivers. The original observation vector and the so-called secondary “observation” vector are

$$ \left( {\begin{array}{*{20}c} {R_{1} (k)} \\ {R_{2} (k)} \\ {\lambda_{1} \varPhi_{1} (k)} \\ {\lambda_{2} \varPhi_{2} (k)} \\ \end{array} } \right),\quad \left( {\begin{array}{*{20}c} {\lambda_{1} N_{1} (k)} \\ {\lambda_{2} N_{2} (k)} \\ {B_{1} (k)} \\ {C_{\rho } (k)} \\ \end{array} } \right). $$
(6.170)

The two vectors are equivalent, as proved in Sect. 6.7.2, and they can be uniquely transformed from one to the other. Any further data processing can be considered processing based on the secondary “observations”. The secondary “observations” possess the equivalence property regardless of whether uncombined or combining algorithms are used; therefore, the equivalence property also holds for any further data processing based on the secondary “observations”.

  • GPS Data Processing Using Secondary “Observations”

A by-product of the above equivalence discussion is that GPS data processing can be performed directly using the so-called secondary observations. In addition to the two ambiguity parameters (scaled with the wavelengths), the other two secondary observations are the electron density along the observation path (scaled by the square of f 1) and the geometry. The geometry includes the whole observation model with the exception of the ionospheric and ambiguity terms. For a time series of the secondary “observations”, the electron density (or, for simplicity, the “ionosphere”) and the “geometry” are real-time observations, whereas the “ambiguities” are constants as long as no cycle slip occurs (Langley 1998a, b). Sequential adjustment or filtering methods can be used to deal with the observation time series. It is worth noting that the secondary “observations” are correlated with one another (see the covariance matrix in Eq. 6.139). However, the “ambiguities” are direct observations of the ambiguity parameters, and the “geometry” and “ionosphere” are modelled by Eqs. 6.132 and 6.133, respectively. The “ambiguity” observables are ionosphere-geometry-free, the “ionosphere” observable is geometry-free and ambiguity-free, and the “geometry” observable is ionosphere-free. Although some algorithms may be more efficient, the results and the precision of the solutions are equivalent regardless of the algorithm used. It should be emphasised that all the above discussions are based on the observation model of Eq. 6.134. The parameterisation of the GPS observation model does not affect the conclusions of these discussions and will be further explored in Chap. 9.

  • Precision Analysis

If the sequential time series of the original observations is considered time-independent, as is traditionally assumed, then the secondary “observations” and their precision also form independent time series. From Eq. 6.140, the standard deviations of the L1 and L2 ambiguities are approximately 5.1281σ c and 6.5317σ c, respectively, and the standard deviations of the ionosphere and geometry “observations” are about 2.1860σ c and 2.9783σ c, respectively. Thus the precision of the “observed” ambiguities is the lowest at a single epoch. If the standard deviation of the (phase-smoothed) P code is approximately 1 dm, then the standard deviation of the ambiguities determined from one epoch is about 0.5 m or worse. However, averaging over m epochs improves the precision by a factor of \( \sqrt{m} \). After 100 or 10,000 epochs, the ambiguities can be determined with a precision of about 5 cm or 5 mm, respectively. “Ionospheric” effects are observed with better precision; however, owing to the high dynamics of the electron content, ionospheric effects cannot easily be smoothed to improve the precision. The “geometry” model is the most complicated, and discussions of static, kinematic, and dynamic applications can be found in numerous publications (cf., e.g., ION proceedings, Chap. 10).
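
The figures quoted above follow directly from Eq. 6.140; the short sketch below (Python/NumPy assumed; a P-code standard deviation of 0.1 m is taken as an illustrative value) computes the single-epoch standard deviations and the √m improvement of an m-epoch average.

```python
import numpy as np

Q = np.array([[ 26.2971,  33.4800,  11.1028, -15.1943],   # Eq. 6.140, in units of sigma_c^2
              [ 33.4800,  42.6629,  14.1943, -19.2857],
              [ 11.1028,  14.1943,   4.7786,  -6.3243],
              [-15.1943, -19.2857,  -6.3243,   8.8700]])

sigma_c = 0.1                                  # P-code standard deviation in metres (assumed)
std_single = np.sqrt(np.diag(Q)) * sigma_c     # ~0.51, 0.65, 0.22, 0.30 m for N1, N2, B1, C_rho
for m in (1, 100, 10000):
    print(m, np.round(std_single / np.sqrt(m), 4))   # ambiguities: ~0.5 m, ~5 cm, ~5 mm
```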

6.7.4 Summary

Here, the equivalence properties between the uncombined and combining algorithms have been proved theoretically by algebraic linear transformations. The solution vector and the related covariance matrix are identical regardless of the algorithms used. Different combinations can make dealing with the data more effective and easier. The so-called ionosphere-geometry-free and diagonal combinations have been derived, which have better properties than the traditional combinations. A data processing algorithm using the uniquely transformed secondary “observations” has been outlined and used to prove the equivalence. Because the solutions of the different combinations are unique, any direct combination of the solutions must also be equivalent. No combination will yield a better solution or greater precision than any other. In this respect, the traditional wide-lane ambiguity fixing technique may lead to a more efficient ambiguity search, but it will not lead to a better solution or better precision of the ambiguity. The equivalence of the uncombined and combining algorithms can be called Xu’s equivalence theory of GNSS data combinations (Xu 2003, 2007).

6.8 Equivalence of Undifferenced and Differencing Algorithms

In Sect. 6.6, the single, double, and triple differences and their related observation equations were discussed. The number of unknown parameters in the equations was greatly reduced through difference forming; however, the covariance derivations are tedious, especially for a GPS network.

In this section, a unified GPS data processing method based on equivalently eliminated equations is proposed, and the equivalence between the undifferenced and differencing algorithms is proved. The theoretical background of the method is also given. By selecting the eliminated unknown vector as a vector of zero, a vector of satellite clock errors, a vector of all clock errors, a vector of clock and ambiguity parameters, or a vector of user-defined unknowns, the respective selectively eliminated equivalent observation equations can be formed. These equations are equivalent to the zero-, single-, double-, triple-, or user-defined differencing equations, respectively. The advantage of such a technique is that the different GPS data processing methods are unified into one unique method, while the original observation vector is retained and the weight matrix keeps its uncorrelated diagonal form. In other words, this equivalent method allows one to selectively reduce the number of unknowns without having to deal with the complicated correlation problem. Several special cases of single, double, and triple differences are discussed in detail to illustrate the theory. The reference-related parameters are dealt with using the a priori datum method.

6.8.1 Introduction

In practice, the common methods for GPS data processing are the so-called zero-difference (non-differential), single-difference, double-difference, and triple-difference methods (Bauer 1994; Hofmann-Wellenhof et al. 1997; King et al. 1987; Leick 1995; Remondi 1984; Seeber 1993; Strang and Borre 1997; Wang et al. 1988). It is well known that the observation equations of the differencing methods can be obtained by carrying out a related linear transformation to the original equations. When the weight matrix is similarly transformed according to the law of covariance propagation, all methods are equivalent theoretically. A theoretical proof of the equivalence between the non-differential and differential methods was described by Schaffrin and Grafarend (1986). A comparison of the advantages and disadvantages of the non-differential and differential methods can be found, for example, in de Jong (1998). The advantage of the differential methods is that there are fewer unknown parameters, and the whole problem to be solved thus becomes smaller. The disadvantage of the differential methods is that there is a correlation problem that appears in cases of multiple baselines of single difference and in all double as well as triple differences . The correlation problem is often complicated and difficult to deal with exactly (compared with the uncorrelated problem). The advantages and disadvantages reach a balance. If one wants to deal with a reduced problem (cancellation of many unknowns), then one has to deal with the correlation problem. As an alternative, we use the equivalent observation equation approach to unify the non-differential and differential methods while retaining all the advantages of both methods.

In the following sections, the theoretical basis of the equivalently eliminated equations is presented, based on the derivation described by Zhou (1985). Several cases are then discussed in detail to illustrate the theory. The reference-related parameters are dealt with using the a priori datum method. A summary of the selectively eliminated equivalent GPS data processing method is outlined at the end.

6.8.2 Formation of Equivalent Observation Equations

For convenience of later discussion, the method for forming an equivalently eliminated equation system is outlined here. The theory is given in Sect. 7.6 in detail. In practice, there may be only one group of unknowns of interest, and it is better to eliminate the other group of unknowns (called nuisance parameters), for example, because of their size. In this case, the use of the so-called equivalently eliminated observation equation system can be very beneficial (Wang et al. 1988; Xu and Qian 1986; Zhou 1985). The nuisance parameters can be eliminated directly from the observation equations instead of from the normal equations .

The linearised observation equation system can be represented using the matrix

$$ V = L - \left( {A\quad B} \right)\,\left( {\begin{array}{*{20}c} {{\text{X}}_{1} } \\ {{\text{X}}_{2} } \\ \end{array} } \right)\quad {\text{and}}\quad P, $$
(6.171)

where L is an observation vector of dimension n, A and B are coefficient matrices of dimensions n × (s − r) and n × r, X 1 and X 2 are unknown vectors of dimensions s − r and r, V is the residual vector, s is the total number of unknowns, and P is the weight matrix of dimension n × n.

The related least squares normal equation can then be formed as

$$ \left( {\begin{array}{*{20}c} A & B \\ \end{array} } \right)^{\text{T}} P\left( {\begin{array}{*{20}c} A & B \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {X_{1} } \\ {X_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} A & B \\ \end{array} } \right)^{\text{T}} PL $$
(6.172)

or

$$ M_{11} X_{1} + M_{12} X_{2} = B_{1} \quad {\text{and}} $$
(6.173)
$$ M_{21} X_{1} + M_{22} X_{2} = B_{2} , $$
(6.174)

where

$$ \begin{aligned} B_{1} = A^{\text{T}} PL,\quad B_{2} & = B^{\text{T}} PL\quad {\text{and}} \\ \left( {\begin{array}{*{20}c} {A^{\text{T}} PA} & {A^{\text{T}} PB} \\ {B^{\text{T}} PA} & {B^{\text{T}} PB} \\ \end{array} } \right) & = \left( {\begin{array}{*{20}c} {M_{11} } & {M_{12} } \\ {M_{21} } & {M_{22} } \\ \end{array} } \right). \\ \end{aligned} $$
(6.175)

After eliminating the unknown vector X 1, the eliminated equivalent normal equation system is then

$$ M_{2} X_{2} = R_{2} , $$
(6.176)

where

$$ M_{2} = - M_{21} M_{11}^{ - 1} M_{12} + M_{22} = B^{\text{T}} PB - B^{\text{T}} PA M_{11}^{ - 1} A^{\text{T}} PB \quad {\text{and}} $$
(6.177)
$$ R_{2} = B_{2} - M_{21} M_{11}^{ - 1} B_{1} $$
(6.178)

The related equivalent observation equation of Eq. 6.176 is then (cf. Sect. 7.6; Xu and Qian 1986; Zhou 1985)

$$ U = L - (E - J)BX_{2} , \quad P, $$
(6.179)

where

$$ J = AM_{11}^{ - 1} A^{\text{T}} P. $$
(6.180)

E is an identity matrix of size n, L and P are the original observation vector and weight matrix, and U is the residual vector, which has the same property as V in Eq. 6.171. The advantage of using Eq. 6.179 is that the unknown vector X 1 has been eliminated, while the observation vector L and the weight matrix P remain the same as the originals.

Similarly, the X 2-eliminated equivalent equation system is

$$ U_{1} = L - (E - K)AX_{1} \quad {\text{and}}\quad P, $$
(6.181)

where

$$ K = BM_{22}^{ - 1} B^{\text{T}} P,\quad M_{22} = B^{\text{T}} PB, $$

and U 1 is the residual vector (which has the same property as V).

We have separated the observation Eq. 6.171 into two equations, Eqs. 6.179 and 6.181; each equation contains only one of the unknown vectors. Each unknown vector can be solved independently and separately. Equations 6.179 and 6.181 are called equivalent observation equations of Eq. 6.171.

The equivalence property of Eqs. 6.171 and 6.179 is valid under three implicit assumptions: first, that an identical observation vector is used; second, that the parameterisation of X 2 is identical; and third, that X 1 can be eliminated. Otherwise, the equivalence does not hold.
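
The elimination described above can be illustrated with a small numerical sketch (Python/NumPy assumed; the design matrices and observations are random test values with no GPS meaning): it forms J of Eq. 6.180 and checks that the normal equation of the equivalent observation Eq. 6.179 reproduces M 2 and R 2 of Eqs. 6.177 and 6.178.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r1, r2 = 12, 3, 4                         # observations, dim(X1), dim(X2) (illustrative)
A = rng.normal(size=(n, r1))                 # coefficients of the nuisance parameters X1
B = rng.normal(size=(n, r2))                 # coefficients of the parameters of interest X2
L = rng.normal(size=n)
P = np.eye(n)                                # uncorrelated unit weights

M11 = A.T @ P @ A
M11_inv = np.linalg.inv(M11)
J = A @ M11_inv @ A.T @ P                    # Eq. 6.180
E = np.eye(n)

# normal equation of the equivalent observation equation U = L - (E - J) B X2 (Eq. 6.179)
D = (E - J) @ B
M2_equiv, R2_equiv = D.T @ P @ D, D.T @ P @ L

# reduced normal equation obtained by eliminating X1 (Eqs. 6.176-6.178)
M2 = B.T @ P @ B - B.T @ P @ A @ M11_inv @ A.T @ P @ B
R2 = B.T @ P @ L - B.T @ P @ A @ M11_inv @ (A.T @ P @ L)

print(np.allclose(M2_equiv, M2), np.allclose(R2_equiv, R2))   # True True
```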

6.8.3 Equivalent Equations of Single Differences

In this section, equivalent equations are first formed to eliminate the satellite clock errors from the original zero-difference equations , and then the equivalence of the single differences (in two cases) related to the original zero-difference equations is proved.

Single differences cancel all satellite clock errors out of the observation equations. This can also be achieved by forming equivalent equations in which the satellite clock errors are eliminated. Considering Eq. 6.171 as the original observation equation and X 1 as the vector of satellite clock errors, the equivalent equations of single differences can be formed as outlined in Sect. 6.8.2.

Suppose n common satellites (k1, k2,…, kn) are observed at stations i1 and i2. The original observation equation can then be written as

$$ \left( {\begin{array}{*{20}c} {V_{i1} } \\ {V_{i2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {L_{i1} } \\ {L_{i2} } \\ \end{array} } \right) - \left( {\begin{array}{*{20}c} E & {B_{i1} } \\ E & {B_{i2} } \\ \end{array} } \right) \cdot \left( {\begin{array}{*{20}c} {X_{1} } \\ {X_{2} } \\ \end{array} } \right)\quad {\text{and}}\quad P = \frac{1}{{\sigma^{2} }}\left( {\begin{array}{*{20}c} E & 0 \\ 0 & E \\ \end{array} } \right), $$
(6.182)

where X 1 is the vector of satellite clock errors and X 2 is the vector of other unknowns. For simplicity, clock errors are scaled by the speed of light c and directly used as unknowns; the X 1-related coefficient matrix is then an identity matrix, E.

Comparing Eq. 6.182 with Eq. 6.171, one has (cf. Sect. 6.8.2)

$$ A = \left( {\begin{array}{*{20}c} E \\ E \\ \end{array} } \right),\quad B = \left( {\begin{array}{*{20}c} {B_{i1} } \\ {B_{i2} } \\ \end{array} } \right),\quad L = \left( {\begin{array}{*{20}c} {L_{i1} } \\ {L_{i2} } \\ \end{array} } \right)\quad {\text{and}}\quad V = \left( {\begin{array}{*{20}c} {V_{i1} } \\ {V_{i2} } \\ \end{array} } \right), $$

and

$$ \begin{aligned} M_{11} & = \left( {\begin{array}{*{20}c} E & E \\ \end{array} } \right)\frac{1}{{\sigma^{2} }}\left( {\begin{array}{*{20}c} E & 0 \\ 0 & E \\ \end{array} } \right)\left( {\begin{array}{*{20}c} E \\ E \\ \end{array} } \right) = \frac{2}{{\sigma^{2} }}E, \\ J & = \left( {\begin{array}{*{20}c} E \\ E \\ \end{array} } \right)\frac{{\sigma^{2} }}{2}E\left( {\begin{array}{*{20}c} E & E \\ \end{array} } \right)P = \frac{1}{2}\left( {\begin{array}{*{20}c} E & E \\ E & E \\ \end{array} } \right), \\ E_{2n \times 2n} - J & = \frac{1}{2}\left( {\begin{array}{*{20}c} E & { - E} \\ { - E} & E \\ \end{array} } \right)\quad {\text{and}} \\ \left( {E_{2n \times 2n} - J} \right)B & = \frac{1}{2}\left( {\begin{array}{*{20}c} {B_{i1} - B_{i2} } \\ {B_{i2} - B_{i1} } \\ \end{array} } \right). \\ \end{aligned} $$

So the equivalently eliminated equation system of Eq. 6.182 is

$$ \left( {\begin{array}{*{20}c} {U_{i1} } \\ {U_{i2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {L_{i1} } \\ {L_{i2} } \\ \end{array} } \right) - \frac{1}{2}\left( {\begin{array}{*{20}c} {B_{i1} - B_{i2} } \\ {B_{i2} - B_{i1} } \\ \end{array} } \right) \cdot X_{2} ,\quad P = \frac{1}{{\sigma^{2} }}\left( {\begin{array}{*{20}c} E & 0 \\ 0 & E \\ \end{array} } \right), $$
(6.183)

where the satellite clock error vector X 1 is eliminated, and the observable vector and weight matrix are unchanged.

Denoting B s = B i2 − B i1, the least squares normal equation of Eq. 6.183 can then be formed as (cf. Chap. 7) (suppose Eq. 6.183 is solvable)

$$ \frac{1}{2}\left( {\begin{array}{*{20}c} { - B_{\text{s}}^{\text{T}} } & {B_{\text{s}}^{\text{T}} } \\ \end{array} } \right) \cdot P \cdot \left( {\begin{array}{*{20}c} { - B_{\text{s}} } \\ {B_{\text{s}} } \\ \end{array} } \right) \cdot X_{2} = \left( {\begin{array}{*{20}c} { - B_{\text{s}}^{\text{T}} } & {B_{\text{s}}^{\text{T}} } \\ \end{array} } \right) \cdot P \cdot \left( {\begin{array}{*{20}c} {L_{i1} } \\ {L_{i2} } \\ \end{array} } \right) $$

or

$$ B_{\text{s}}^{\text{T}} B_{\text{s}} \cdot X_{2} = B_{\text{s}}^{\text{T}} (L_{i2} - L_{i1} ). $$
(6.184)

Alternatively, a single-difference equation can be obtained by multiplying Eq. 6.182 with a transformation matrix C s

$$ C_{\text{s}} = \left( {\begin{array}{*{20}c} { - E} & E \\ \end{array} } \right), $$

giving

$$ C_{\text{s}} \cdot \left( {\begin{array}{*{20}c} {V_{i1} } \\ {V_{i2} } \\ \end{array} } \right) = C_{\text{s}} \cdot \left( {\begin{array}{*{20}c} {L_{i1} } \\ {L_{i2} } \\ \end{array} } \right) - C_{\text{s}} \cdot \left( {\begin{array}{*{20}c} E & {B_{i1} } \\ E & {B_{i2} } \\ \end{array} } \right) \cdot \left( {\begin{array}{*{20}c} {X_{1} } \\ {X_{2} } \\ \end{array} } \right) $$

or

$$ V_{i2} - V_{i1} = (L_{i2} - L_{i1} ) - (B_{i2} - B_{i1} )X_{2} $$
(6.185)

and

$$ \text{cov} ({\text{SD}}(O)) = C_{\text{s}} \sigma^{2} \left( {\begin{array}{*{20}c} E & 0 \\ 0 & E \\ \end{array} } \right)C_{\text{s}}^{\text{T}} = 2\sigma^{2} E\quad {\text{and}}\quad P_{\text{s}} = \frac{1}{{2\sigma^{2} }}E, $$
(6.186)

where P s is the weight matrix of single differences, and cov(SD(O)) is the covariance of the single-difference (SD) observation vector (O). Supposing Eq. 6.185 is solvable, the least squares normal equation system of Eq. 6.185 is then

$$ (B_{i2} - B_{i1} )^{\text{T}} (B_{i2} - B_{i1} )X_{2} = (B_{i2} - B_{i1} )^{\text{T}} (L_{i2} - L_{i1} ). $$
(6.187)

It is clear that Eqs. 6.187 and 6.184 are identical. Therefore, in the case of two stations, the single-difference Eq. 6.185 is equivalent to the equivalently eliminated Eq. 6.183, and is consequently equivalent to the original zero-difference equation.

Suppose n common satellites (k1, k2,…, kn) are observed at stations i1, i2 and i3. The original observation equation can then be written as

$$ \left( {\begin{array}{*{20}c} {V_{i1} } \\ {V_{i2} } \\ {V_{i3} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {L_{i1} } \\ {L_{i2} } \\ {L_{i3} } \\ \end{array} } \right) - \left( {\begin{array}{*{20}c} E & {B_{i1} } \\ E & {B_{i2} } \\ E & {B_{i3} } \\ \end{array} } \right) \cdot \left( {\begin{array}{*{20}c} {X_{1} } \\ {X_{2} } \\ \end{array} } \right)\quad {\text{and}}\quad P = \frac{1}{{\sigma^{2} }}\left( {\begin{array}{*{20}c} E & 0 & 0 \\ 0 & E & 0 \\ 0 & 0 & E \\ \end{array} } \right). $$
(6.188)

Comparing Eq. 6.188 with Eq. 6.171, one has (cf. Section 6.8.2)

$$ A = \left( {\begin{array}{*{20}c} E \\ E \\ E \\ \end{array} } \right), \quad B = \left( {\begin{array}{*{20}c} {B_{i1} } \\ {B_{i2} } \\ {B_{i3} } \\ \end{array} } \right), \quad L = \left( {\begin{array}{*{20}c} {L_{i1} } \\ {L_{i2} } \\ {L_{i3} } \\ \end{array} } \right)\quad {\text{and}}\quad V = \left( {\begin{array}{*{20}c} {V_{i1} } \\ {V_{i2} } \\ {V_{i3} } \\ \end{array} } \right), $$

and

$$ \begin{aligned} M_{11} & = A^{\text{T}} PA = \frac{3}{{\sigma^{2} }}E, \\ J & = A\frac{{\sigma^{2} }}{3}EA^{\text{T}} P = \frac{1}{3}\left( {\begin{array}{*{20}c} E & E & E \\ E & E & E \\ E & E & E \\ \end{array} } \right), \\ E_{3n \times 3n} - J & = \frac{1}{3}\left( {\begin{array}{*{20}c} {2E} & { - E} & { - E} \\ { - E} & {2E} & { - E} \\ { - E} & { - E} & {2E} \\ \end{array} } \right),\quad {\text{and}} \\ \left( {E_{3n \times 3n} - J} \right)B & = \frac{1}{3}\left( {\begin{array}{*{20}c} {2B_{i1} - B_{i2} - B_{i3} } \\ { - B_{i1} + 2B_{i2} - B_{i3} } \\ { - B_{i1} - B_{i2} + 2B_{i3} } \\ \end{array} } \right). \\ \end{aligned} $$

So the equivalently eliminated equation system of Eq. 6.188 is

$$ \left( {\begin{array}{*{20}c} {U_{i1} } \\ {U_{i2} } \\ {U_{i3} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {L_{i1} } \\ {L_{i2} } \\ {L_{i3} } \\ \end{array} } \right) - \frac{1}{3}\left( {\begin{array}{*{20}c} {2B_{i1} - B_{i2} - B_{i3} } \\ { - B_{i1} + 2B_{i2} - B_{i3} } \\ { - B_{i1} - B_{i2} + 2B_{i3} } \\ \end{array} } \right) \cdot X_{2} ,\quad P = \frac{1}{{\sigma^{2} }}\left( {\begin{array}{*{20}c} E & 0 & 0 \\ 0 & E & 0 \\ 0 & 0 & E \\ \end{array} } \right), $$
(6.189)

and the related least squares normal equation can be formed as

$$ \frac{1}{3}\left( {\begin{array}{*{20}c} {2B_{i1} - B_{i2} - B_{i3} } \\ { - B_{i1} + 2B_{i2} - B_{i3} } \\ { - B_{i1} - B_{i2} + 2B_{i3} } \\ \end{array} } \right)^{\text{T}} \left( {\begin{array}{*{20}c} {2B_{i1} - B_{i2} - B_{i3} } \\ { - B_{i1} + 2B_{i2} - B_{i3} } \\ { - B_{i1} - B_{i2} + 2B_{i3} } \\ \end{array} } \right)X_{2} = \left( {\begin{array}{*{20}c} {2B_{i1} - B_{i2} - B_{i3} } \\ { - B_{i1} + 2B_{i2} - B_{i3} } \\ { - B_{i1} - B_{i2} + 2B_{i3} } \\ \end{array} } \right)^{\text{T}} \left( {\begin{array}{*{20}c} {L_{i1} } \\ {L_{i2} } \\ {L_{i3} } \\ \end{array} } \right). $$
(6.190)

Alternatively, for Eq. system 6.188, single differences can be formed using the transformation matrix (cf. Sect. 6.6.1)

$$ C_{\text{s}} = \left( {\begin{array}{*{20}c} { - E} & E & 0 \\ 0 & { - E} & E \\ \end{array} } \right) $$

and

$$ P_{\text{s}} = [\text{cov} ({\text{SD}})]^{ - 1} = \frac{1}{{3\sigma^{2} }}\left( {\begin{array}{*{20}c} {2E} & E \\ E & {2E} \\ \end{array} } \right). $$

The correlation problem appears in the case of single differences of multiple baselines . The related observation equations and the least squares normal equation can be written as

$$ \left( {\begin{array}{*{20}c} {V_{i2} - V_{i1} } \\ {V_{i3} - V_{i2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {L_{i2} - L_{i1} } \\ {L_{i3} - L_{i2} } \\ \end{array} } \right) - \left( {\begin{array}{*{20}c} {B_{i2} - B_{i1} } \\ {B_{i3} - B_{i2} } \\ \end{array} } \right)X_{2} , \quad P_{\text{s}} \quad {\text{and}} $$
(6.191)
$$ \left( {\begin{array}{*{20}c} {B_{i2} - B_{i1} } \\ {B_{i3} - B_{i2} } \\ \end{array} } \right)^{\text{T}} \left( {\begin{array}{*{20}c} {2E} & E \\ E & {2E} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {B_{i2} - B_{i1} } \\ {B_{i3} - B_{i2} } \\ \end{array} } \right)X_{2} = \left( {\begin{array}{*{20}c} {B_{i2} - B_{i1} } \\ {B_{i3} - B_{i2} } \\ \end{array} } \right)^{\text{T}} \left( {\begin{array}{*{20}c} {2E} & E \\ E & {2E} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {L_{i2} - L_{i1} } \\ {L_{i3} - L_{i2} } \\ \end{array} } \right). $$
(6.192)

Equations 6.190 and 6.192 are identical; this can be verified by expanding both equations and comparing the results. Again, this shows that the equivalently eliminated equations are equivalent to the single-difference equations, but without the need to deal with the correlation problem.
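
For readers who prefer a numerical check over expanding the equations by hand, the following minimal sketch (using NumPy, with randomly generated matrices B i1, B i2, B i3 and observation vectors as stand-ins for real GPS design matrices and observables; the dimensions are arbitrary illustrative choices) forms both normal equation systems exactly as printed in Eqs. 6.190 and 6.192 and confirms that they coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
n, u = 5, 4                      # n common satellites, u remaining parameters (illustrative)
E = np.eye(n)

# Random stand-ins for the zero-difference design blocks and observation vectors
B1, B2, B3 = (rng.standard_normal((n, u)) for _ in range(3))
L1, L2, L3 = (rng.standard_normal(n) for _ in range(3))

# Equivalently eliminated normal equation, Eq. 6.190 (the common factor 1/sigma^2 cancels)
M = np.vstack([2*B1 - B2 - B3, -B1 + 2*B2 - B3, -B1 - B2 + 2*B3])
N_eq = M.T @ M / 3.0
b_eq = M.T @ np.concatenate([L1, L2, L3])

# Single-difference normal equation, Eq. 6.192, with the correlated weight of multiple baselines
B_sd = np.vstack([B2 - B1, B3 - B2])
L_sd = np.concatenate([L2 - L1, L3 - L2])
W = np.block([[2*E, E], [E, 2*E]])       # equals 3*sigma^2 * P_s
N_sd = B_sd.T @ W @ B_sd
b_sd = B_sd.T @ W @ L_sd

assert np.allclose(N_eq, N_sd) and np.allclose(b_eq, b_sd)
print("Eqs. 6.190 and 6.192 yield the same normal equation system.")
```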

6.8.4 Equivalent Equations of Double Differences

Double differences cancel all clock errors out of the observation equations . This can also be achieved by forming equivalent equations in which all clock errors are eliminated. Considering Eq. 6.171 as the original observation equation, with X 1 the vector of all clock errors, the equivalent equation of double differences can be formed as outlined in Sect. 6.8.2.

In the case of two stations, supposing n common satellites (k1, k2,…, kn) are observed at stations i1 and i2, the equivalent single-difference observation equation is then Eq. 6.183. Denoting B s1 = B i2 − B i1 and the station clock error parameter as δt i1 − δt i2 (cf. Eqs. 6.89–6.92), and assigning the coefficients of the first column of B s1 to the station clock errors, i.e. B s1 = (I n×1 B s), Eq. 6.183 becomes

$$ \left( {\begin{array}{*{20}c} {U_{i1} } \\ {U_{i2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {L_{i1} } \\ {L_{i2} } \\ \end{array} } \right) - \frac{1}{2}\left( {\begin{array}{*{20}c} { - I_{n \times 1} } & { - B_{s} } \\ {I_{n \times 1} } & {B_{s} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {X_{c} } \\ {X_{3} } \\ \end{array} } \right)\quad {\text{and}}\quad P = \frac{1}{{\sigma^{2} }}\left( {\begin{array}{*{20}c} E & 0 \\ 0 & E \\ \end{array} } \right), $$
(6.193)

where X c is the station clock error vector, X 3 is the vector of the remaining unknowns, B s is the X 3-related coefficient matrix, \( I_{n \times 1} \) is an n × 1 matrix whose elements are all 1, and the clock errors are scaled by the speed of light.

Comparing Eq. 6.193 with Eq. 6.171, one has (cf. Sect. 6.8.2)

$$ A = \frac{1}{2}\left( {\begin{array}{*{20}c} { - I_{n \times 1} } \\ {I_{n \times 1} } \\ \end{array} } \right),\quad B = \frac{1}{2}\left( {\begin{array}{*{20}c} { - B_{\text{s}} } \\ {B_{\text{s}} } \\ \end{array} } \right),\quad L = \left( {\begin{array}{*{20}c} {L_{i1} } \\ {L_{i2} } \\ \end{array} } \right)\quad {\text{and}}\quad V = \left( {\begin{array}{*{20}c} {U_{i1} } \\ {U_{i2} } \\ \end{array} } \right), $$

and

$$ \begin{aligned} M_{11} & = \frac{1}{4}\left( {\begin{array}{*{20}c} { - I_{n \times 1}^{\text{T}} } & {I_{n \times 1}^{\text{T}} } \\ \end{array} } \right)\frac{1}{{\sigma^{2} }}\left( {\begin{array}{*{20}c} E & 0 \\ 0 & E \\ \end{array} } \right)\left( {\begin{array}{*{20}c} { - I_{n \times 1} } \\ {I_{n \times 1} } \\ \end{array} } \right) = \frac{n}{{2\sigma^{2} }}, \\ J & = \left( {\begin{array}{*{20}c} { - I_{n \times 1} } \\ {I_{n \times 1} } \\ \end{array} } \right)\frac{{\sigma^{2} }}{2n}\left( {\begin{array}{*{20}c} { - I_{n \times 1}^{\text{T}} } & {I_{n \times 1}^{\text{T}} } \\ \end{array} } \right) \cdot P = \frac{1}{2n}\left( {\begin{array}{*{20}c} {I_{n \times n} } & { - I_{n \times n} } \\ { - I_{n \times n} } & {I_{n \times n} } \\ \end{array} } \right),\quad {\text{and}} \\ \left( {E_{2n \times 2n} - J} \right)\frac{1}{2}\left( {\begin{array}{*{20}c} { - B_{\text{s}} } \\ {B_{\text{s}} } \\ \end{array} } \right) & = \frac{1}{2}\left( {\begin{array}{*{20}c} { - E_{n \times n} + \frac{1}{n}I_{n \times n} } \\ {E_{n \times n} - \frac{1}{n}I_{n \times n} } \\ \end{array} } \right)B_{\text{s}} . \\ \end{aligned} $$

So the equivalently eliminated equation system of Eq. 6.193 is

$$ \left( {\begin{array}{*{20}c} {U_{i1} } \\ {U_{i2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {L_{i1} } \\ {L_{i2} } \\ \end{array} } \right) - \frac{1}{2}\left( {\begin{array}{*{20}c} { - E_{n \times n} + \frac{1}{n}I_{n \times n} } \\ {E_{n \times n} - \frac{1}{n}I_{n \times n} } \\ \end{array} } \right)B_{\text{s}} X_{3} \quad {\text{and}}\quad P = \frac{1}{{\sigma^{2} }}\left( {\begin{array}{*{20}c} E & 0 \\ 0 & E \\ \end{array} } \right), $$
(6.194)

where the receiver clock error vector X c is eliminated, while the observable vector and weight matrix remain unchanged. The normal equation takes the simple form

$$ B_{\text{s}}^{\text{T}} \left( {E_{n \times n} - \frac{1}{n}I_{n \times n} } \right)B_{\text{s}} X_{3} = B_{\text{s}}^{\text{T}} \left( {E_{n \times n} - \frac{1}{n}I_{n \times n} } \right)\left( {L_{i2} - L_{i1} } \right). $$
(6.195)
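
As a numerical cross-check of this elimination step, the sketch below (NumPy, with random stand-ins for B s and the observation vectors; the dimensions and the value of σ² are arbitrary assumptions) builds J as in Sect. 6.8.2, forms the normal equation of the eliminated system 6.194 directly, and confirms that it agrees with Eq. 6.195 up to the common factor 1/(2σ²).

```python
import numpy as np

rng = np.random.default_rng(1)
n, u = 6, 4                          # n common satellites, u non-clock parameters (illustrative)
sigma2 = 2.0
ones = np.ones((n, 1))
I_nn = np.ones((n, n))               # the all-ones matrix I_{n x n} of the text
E2n = np.eye(2 * n)

Bs = rng.standard_normal((n, u))     # stand-in for the X3-related coefficients
L1, L2 = rng.standard_normal(n), rng.standard_normal(n)
L = np.concatenate([L1, L2])
P = np.eye(2 * n) / sigma2

# Eq. 6.193: A carries the (scaled) station clock error, B the remaining parameters
A = 0.5 * np.vstack([-ones, ones])
B = 0.5 * np.vstack([-Bs, Bs])

# Equivalent elimination of the clock error (Sect. 6.8.2): J = A M11^{-1} A^T P
M11 = A.T @ P @ A
J = A @ np.linalg.inv(M11) @ A.T @ P
D = (E2n - J) @ B                    # reduced design matrix of Eq. 6.194

# Normal equation of Eq. 6.194 ...
N_eq = D.T @ P @ D
b_eq = D.T @ P @ L

# ... agrees with Eq. 6.195 up to the common factor 1/(2 sigma^2)
N_195 = Bs.T @ (np.eye(n) - I_nn / n) @ Bs
b_195 = Bs.T @ (np.eye(n) - I_nn / n) @ (L2 - L1)
assert np.allclose(N_eq, N_195 / (2 * sigma2))
assert np.allclose(b_eq, b_195 / (2 * sigma2))
print("Eq. 6.195 reproduces the normal equation of the eliminated system 6.194.")
```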

Alternatively, the traditional single-difference observation Eqs. 6.185 and 6.186 can be rewritten as

$$ V_{i2} - V_{i1} = (L_{i2} - L_{i1} ) - \left( {\begin{array}{*{20}c} {I_{n \times 1} } & {B_{\text{s}} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {X_{\text{c}} } \\ {X_{3} } \\ \end{array} } \right) $$

or

$$ \left( {\begin{array}{*{20}c} {V_{i2}^{1} - V_{i1}^{1} } \\ {V_{i2}^{k} - V_{i1}^{k} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {L_{i2}^{1} - L_{i1}^{1} } \\ {L_{i2}^{k} - L_{i1}^{k} } \\ \end{array} } \right) - \left( {\begin{array}{*{20}c} 1 & {B_{\text{s}}^{1} } \\ {I_{m \times 1} } & {B_{\text{s}}^{k} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {X_{\text{c}} } \\ {X_{3} } \\ \end{array} } \right) $$
(6.196)

and

$$ \text{cov} ({\text{SD}}(O)) = C_{\text{s}} \sigma^{2} \left( {\begin{array}{*{20}c} E & 0 \\ 0 & E \\ \end{array} } \right)C_{\text{s}}^{\text{T}} = 2\sigma^{2} E\quad {\text{and}}\quad P_{\text{s}} = \frac{1}{{2\sigma^{2} }}E, $$

where m = n − 1, and the superscripts 1 and k denote the first row and the remaining rows of the matrices (or the first element and the remaining elements in the case of vectors), respectively. The double-difference transformation matrix and covariance are (cf. Sect. 6.6.2, Eqs. 6.116–6.118)

$$ \begin{aligned} C_{\text{d}} & = \left( {\begin{array}{*{20}c} { - I_{m \times 1} } & {E_{m \times m} } \\ \end{array} } \right), \\ \text{cov} ({\text{DD}}(O)) & = C_{\text{d}} \,\text{cov} ({\text{SD}}(O))C_{\text{d}}^{\text{T}} = 2\sigma^{2} C_{\text{d}} C_{\text{d}}^{\text{T}} = 2\sigma^{2} \left( {I_{m \times m} + E_{m \times m} } \right)\quad {\text{and}} \\ P_{\text{d}} & = [\text{cov} ({\text{DD}}(O))]^{ - 1} = \frac{1}{{2\sigma^{2} n}}\left( {nE_{m \times m} - I_{m \times m} } \right). \\ \end{aligned} $$

The double-difference observation equation and related normal equation are

$$ C_{\text{d}} \left( {\begin{array}{*{20}c} {V_{i2}^{1} - V_{i1}^{1} } \\ {V_{i2}^{k} - V_{i1}^{k} } \\ \end{array} } \right) = C_{\text{d}} \left( {\begin{array}{*{20}c} {L_{i2}^{1} - L_{i1}^{1} } \\ {L_{i2}^{k} - L_{i1}^{k} } \\ \end{array} } \right) - C_{\text{d}} \left( {\begin{array}{*{20}c} 1 & {B_{\text{s}}^{1} } \\ {I_{m \times 1} } & {B_{\text{s}}^{k} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {X_{\text{c}} } \\ {X_{3} } \\ \end{array} } \right) $$

or

$$ C_{\text{d}} \left( {\begin{array}{*{20}c} {V_{i2}^{1} - V_{i1}^{1} } \\ {V_{i2}^{k} - V_{i1}^{k} } \\ \end{array} } \right) = C_{\text{d}} \left( {\begin{array}{*{20}c} {L_{i2}^{1} - L_{i1}^{1} } \\ {L_{i2}^{k} - L_{i1}^{k} } \\ \end{array} } \right) - C_{\text{d}} \left( {\begin{array}{*{20}c} {B_{\text{s}}^{1} } \\ {B_{\text{s}}^{k} } \\ \end{array} } \right)X_{3} , $$

i.e.

$$ C_{\text{d}} (V_{i2} - V_{i1} ) = C_{\text{d}} (L_{i2} - L_{i1} ) - C_{\text{d}} B_{\text{s}} X_{3} $$
(6.197)

and

$$ B_{\text{s}}^{\text{T}} C_{\text{d}}^{\text{T}} P_{\text{d}} C_{\text{d}} B_{\text{s}} X_{3} = B_{\text{s}}^{\text{T}} C_{\text{d}}^{\text{T}} P_{\text{d}} C_{\text{d}} (L_{i2} - L_{i1} ), $$
(6.198)

where

$$ C_{\text{d}}^{\text{T}} P_{\text{d}} C_{\text{d}} = \frac{1}{{2\sigma^{2} n}}\left( {\begin{array}{*{20}c} { - I_{m \times 1} } & {E_{m \times m} } \\ \end{array} } \right)^{\text{T}} \left( {nE_{m \times m} - I_{m \times m} } \right)\left( {\begin{array}{*{20}c} { - I_{m \times 1} } & {E_{m \times m} } \\ \end{array} } \right), $$
(6.199)
$$ \left( {\begin{array}{*{20}c} { - I_{m \times 1} } & {E_{m \times m} } \\ \end{array} } \right)^{\text{T}} \left( {nE_{m \times m} - I_{m \times m} } \right) = \left( {\begin{array}{*{20}c} { - I_{m \times 1} } & {nE_{m \times m} - I_{m \times m} } \\ \end{array} } \right)^{\text{T}} \quad {\text{and}} $$
(6.200)
$$ \left( {\begin{array}{*{20}c} { - I_{m \times 1} } & {nE_{m \times m} - I_{m \times m} } \\ \end{array} } \right)^{\text{T}} \left( {\begin{array}{*{20}c} { - I_{m \times 1} } & {E_{m \times m} } \\ \end{array} } \right) = nE_{n \times n} - I_{n \times n} . $$
(6.201)

The above three equations can be readily proved. Substituting Eqs. 6.199–6.201 into Eq. 6.198, the latter becomes identical to Eq. 6.195, thus proving the equivalence between the double-difference equation and the directly formed equivalent Eq. 6.193.
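
The matrix identities in Eqs. 6.199–6.201 can also be verified numerically. The short sketch below (NumPy, with an arbitrary illustrative number of satellites n and an arbitrary σ²) checks that P d is indeed the inverse of cov(DD(O)) and that C dT P d C d equals (nE n×n − I n×n)/(2σ²n), which is the matrix of Eq. 6.195 up to the common scale factor.

```python
import numpy as np

n = 7                                    # number of common satellites (illustrative)
m = n - 1
sigma2 = 1.5

# Double-difference transformation and weight (cf. Eqs. 6.116-6.118)
C_d = np.hstack([-np.ones((m, 1)), np.eye(m)])
P_d = (n * np.eye(m) - np.ones((m, m))) / (2 * sigma2 * n)

# P_d is the inverse of cov(DD(O)) = 2*sigma^2*(I_{mxm} + E_{mxm})
cov_dd = 2 * sigma2 * (np.ones((m, m)) + np.eye(m))
assert np.allclose(P_d @ cov_dd, np.eye(m))

# Eqs. 6.199-6.201: C_d^T P_d C_d = (n E_{nxn} - I_{nxn}) / (2 sigma^2 n)
lhs = C_d.T @ P_d @ C_d
rhs = (n * np.eye(n) - np.ones((n, n))) / (2 * sigma2 * n)
assert np.allclose(lhs, rhs)
print("C_d^T P_d C_d = (nE - I)/(2 sigma^2 n); Eq. 6.198 thus reduces to Eq. 6.195.")
```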

6.8.5 Equivalent Equations of Triple Differences

Triple differences cancel all clock errors and ambiguities out of the observation equations . This can also be achieved by forming equivalent equations in which all clock errors and ambiguities are eliminated. Considering Eq. 6.171 as the original observation equation, with X 1 the parameter vector of all clock errors and ambiguities, the equivalent equations of triple differences can be formed as outlined in Sect. 6.8.2.

It is well known that traditional triple differences are correlated between adjacent epochs and between baselines . In the case of sequential (epoch-by-epoch) data processing of triple differences, the correlation problem is difficult to deal with. However, with the use of the equivalently eliminated equations, the weight matrix remains diagonal, and the original GPS observables are retained.

An alternative method for proving the equivalence between the triple-difference and zero-difference equations is proposed and derived in Xu (2016). Considering the definition of triple differences and Eq. 6.120 given in Sect. 6.6.3, the triple-difference equation can be rearranged as

$$ \begin{aligned} {\text{TD}}_{i1,i2}^{k1,k2} (O(t1,t2)) & = \left\{ {\left[ {O_{i2}^{k2} (t2) - O_{i2}^{k2} (t1)} \right] - \left[ {O_{i1}^{k2} (t2) - O_{i1}^{k2} (t1)} \right]} \right\} \\ & \quad {\kern 1pt} - \left\{ {\left[ {O_{i2}^{k1} (t2) - O_{i2}^{k1} (t1)} \right] - \left[ {O_{i1}^{k1} (t2) - O_{i1}^{k1} (t1)} \right]} \right\} \\ & \quad = (D^{t} \cdot O_{i2}^{k2} - D^{t} \cdot O_{i1}^{k2} ) - (D^{t} \cdot O_{i2}^{k1} - D^{t} \cdot O_{i1}^{k1} ), \\ \end{aligned} $$
(6.202)

where \( D^{t} \) represents the time difference observables between time t1 and t2.

From Eq. 6.202, a triple difference can be regarded as being formed in two steps: first, the time difference of the observable of the same satellite between two adjacent epochs is formed at each station; then, the double difference of these time-differenced observables is formed between the two stations and the two observed satellites. The time-difference equation is proved to be equivalent to the zero-difference equation in Xu (2016), and the time-differenced observable between two adjacent epochs has the same property as the original one, i.e. it is still uncorrelated. Moreover, considering the equivalence between the double-difference and zero-difference equations (cf. Sect. 6.8.4), we can conclude that the triple-difference equation is equivalent to the zero-difference equation.
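
The two-step reading of Eq. 6.202 is easy to illustrate. In the sketch below (NumPy, with synthetic observable values indexed as O[station, satellite, epoch] as a stand-in for real data), the time differences are formed first and then double differenced, and the result is compared with the bracketed form of Eq. 6.202 evaluated directly.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic zero-difference observables O[station, satellite, epoch]
O = rng.standard_normal((2, 2, 2))       # stations i1, i2; satellites k1, k2; epochs t1, t2

# Step 1: time differences D^t per station and satellite (same satellite, adjacent epochs)
Dt = O[:, :, 1] - O[:, :, 0]             # shape (station, satellite)

# Step 2: double difference of the time-differenced observables (Eq. 6.202)
TD = (Dt[1, 1] - Dt[0, 1]) - (Dt[1, 0] - Dt[0, 0])

# Direct evaluation of the bracketed form of Eq. 6.202 for comparison
TD_direct = ((O[1, 1, 1] - O[1, 1, 0]) - (O[0, 1, 1] - O[0, 1, 0])) \
          - ((O[1, 0, 1] - O[1, 0, 0]) - (O[0, 0, 1] - O[0, 0, 0]))
assert np.isclose(TD, TD_direct)
```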

6.8.6 Method of Dealing with the Reference Parameters

In differential GPS data processing , the reference-related parameters are usually considered to be known and are fixed (or not adjusted). This may be realised by the a priori datum method (for details cf. Sect. 7.8.2). Here we outline only the basic principle.

The equivalent observation Eq. system 6.179 can be rewritten as

$$ U = L - \left( {\begin{array}{*{20}c} {D_{1} } & {D_{2} } \\ \end{array} } \right)\,\left( {\begin{array}{*{20}c} {X_{21} } \\ {X_{22} } \\ \end{array} } \right)\quad {\text{and}}\quad P, $$
(6.203)

where

$$ D = \left( {\begin{array}{*{20}c} {D_{1} } & {D_{2} } \\ \end{array} } \right)\quad {\text{and}}\quad X_{2} = \left( {\begin{array}{*{20}c} {X_{21} } \\ {X_{22} } \\ \end{array} } \right). $$

Suppose there are a priori constraints of (cf. e.g. Zhou et al. 1997)

$$ W = \overline{X}_{22} - X_{22} \quad {\text{and}}\quad P_{ 2} , $$
(6.204)

where \( \overline{X}_{22} \) is the “directly observed” parameter sub-vector, P 2 is the weight matrix with respect to the parameter sub-vector X 22, and W is a residual vector, which has the same property as U. Typically, \( \overline{X}_{22} \) is “observed” independently, so P 2 is a diagonal matrix. If X 22 is a sub-vector of station coordinates, the constraint of Eq. 6.204 is referred to as a datum constraint (which is also the reason the term a priori datum is used). Here we consider X 22 to be a vector of reference-related parameters (such as the clock errors and ambiguities of the reference satellite and reference station). Generally, the a priori weight matrix P 2 is given by the covariance matrix Q W as

$$ P_{2} = Q_{\text{W}}^{ - 1} . $$
(6.205)

In practice, the sub-vector \( \overline{X}_{22} \) is usually a zero vector; this can be achieved through careful initialisation when forming observation Eq. 6.171.

A least squares normal equation of the a priori datum problem of Eqs. 6.203 and 6.204 can be formed (cf. Sect. 7.8.2). Compared with the normal equation of Eq. 6.203 alone, the only difference is that the a priori weight matrix P 2 is added to the X 22-related block of the normal matrix. This indicates that the a priori datum problem can be dealt with simply by adding P 2 to the normal equation of observation Eq. 6.203.

If some diagonal components of the weight matrix P 2 are set to zero, the related parameters (in X 22) are free parameters (a free datum) of the adjustment problem, i.e. they carry no a priori constraints. Parameters with a priori constraints are called the a priori datum . Large weights impose strong constraints and small weights impose soft constraints; the strongest constraint keeps the datum fixed. The reference-related datum (coordinates, clock errors, and ambiguities) can therefore be fixed by applying the strongest constraints to the related parameters, i.e. by adding the strongest constraints to the datum-related diagonal elements of the normal matrix, as sketched below.
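
The following minimal sketch illustrates the principle (NumPy; the design blocks, observations, and the numerical value used for the “strongest” constraint are illustrative assumptions, and \( \overline{X}_{22} \) is taken as a zero vector as discussed above, so only the normal matrix changes).

```python
import numpy as np

rng = np.random.default_rng(4)
m_obs, u1, u2 = 20, 3, 2                 # observations, free parameters, reference parameters

# Stand-in equivalent observation equation (Eq. 6.203): U = L - (D1 D2)(X21; X22)
D1, D2 = rng.standard_normal((m_obs, u1)), rng.standard_normal((m_obs, u2))
L = rng.standard_normal(m_obs)
P = np.eye(m_obs)                        # original (diagonal) weight matrix
D = np.hstack([D1, D2])

# Normal equation of Eq. 6.203
N = D.T @ P @ D
b = D.T @ P @ L

# A priori datum (Eq. 6.204) with X22_bar = 0: add P2 to the X22-related block only;
# a very large weight acts as the "strongest" constraint and fixes the reference datum.
strong = 1e12
P2 = strong * np.eye(u2)
N_constrained = N.copy()
N_constrained[u1:, u1:] += P2

X = np.linalg.solve(N_constrained, b)
print("Reference-related parameters are driven to their a priori (zero) values:", X[u1:])
```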

6.8.7 Summary of the Unified Equivalent Algorithm

For any linearised zero-difference GPS observation Eq. system 6.171

$$ V = L - \left( {\begin{array}{*{20}c} A & B \\ \end{array} } \right)\,\left( {\begin{array}{*{20}c} {X_{ 1} } \\ {X_{ 2} } \\ \end{array} } \right)\quad {\text{and}}\quad P, $$
(6.206)

the X 1-eliminated equivalent GPS observation equation system is then Eq. 6.179:

$$ U = L - (E - J)BX_{2} \quad {\text{and}}\quad P, $$
(6.207)

where

$$ J = AM_{11}^{ - 1} A^{\text{T}} P,\quad M_{11} = A^{\text{T}} PA, $$

E is the identity matrix, L is the original observation vector, P is the original weight matrix , and U is the residual vector, which has the same property as V.

Similarly, the X 2-eliminated equivalent equation system is Eq. 6.181:

$$ U_{1} = L - (E - K)AX_{1} \quad {\text{and}}\quad P, $$
(6.208)

where

$$ K = BM_{22}^{ - 1} B^{\text{T}} P,\quad M_{22} = B^{\text{T}} PB, $$

and U 1 is the residual vector (which has the same property as V).

Fixing the values of sub-vector X 22 (of X 2) can be realised by adding the strongest constraints to the X 22-related diagonal elements of the normal matrix formed by Eq. 6.207. Alternatively, we may first apply the strongest constraints directly to the normal equation formed by Eq. 6.206. In this way, the reference-related parameters (clock errors, ambiguities, coordinates, etc.) are fixed. We may then form the equivalently eliminated observation Eq. 6.207. Thus, relative and differential GPS data processing can be realised by using Eq. 6.207 after selecting the X 1 to be eliminated.

The GPS data processing algorithm using Eq. 6.207 is thus a selectively eliminated equivalent method. Selecting X 1 in Eq. 6.206 as a zero vector (i.e. eliminating nothing), the algorithm is identical to the zero-difference method. Selecting X 1 in Eq. 6.206 as the satellite clock error vector, the vector of all clock errors, the vector of clock errors and ambiguities, or any user-defined vector, the algorithm is equivalent to the single-difference, double-difference, triple-difference, or user-defined elimination method, respectively. The eliminated unknowns X 1 can be solved for separately if desired.
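
To make the selection mechanism concrete, here is a minimal sketch of the selectively eliminated equivalent method of Eq. 6.207 (NumPy; the matrices are random stand-ins, and the partition of the parameters into the eliminated part X 1 with coefficients A and the retained part X 2 with coefficients B is the user's switch).

```python
import numpy as np

def eliminate(A, B, P):
    """Return the reduced design matrix (E - J)B of Eq. 6.207, with J = A M11^{-1} A^T P.

    The observation vector L and the (diagonal) weight matrix P remain unchanged,
    which is the key property of the equivalently eliminated system.
    """
    M11 = A.T @ P @ A
    J = A @ np.linalg.solve(M11, A.T @ P)
    return (np.eye(A.shape[0]) - J) @ B

# The choice of A acts as the switch: satellite clock columns only (single-difference
# equivalent), all clock columns (double-difference equivalent), clock and ambiguity
# columns (triple-difference equivalent), or any user-defined selection.
rng = np.random.default_rng(5)
m_obs, u1, u2 = 12, 3, 4                     # illustrative dimensions
A = rng.standard_normal((m_obs, u1))         # columns of the parameters to be eliminated
B = rng.standard_normal((m_obs, u2))         # columns of the parameters to be kept
L = rng.standard_normal(m_obs)
P = np.eye(m_obs)                            # original diagonal weight matrix

D = eliminate(A, B, P)
X2 = np.linalg.solve(D.T @ P @ D, D.T @ P @ L)   # least squares solution of Eq. 6.207
```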

The advantages of this method (compared with non-differential and differential methods) are as follows:

  • Non-differential and differential GPS data processing can be dealt with in an equivalent and unified way. The processing scenario can be selected by a simple switch, and different scenarios can be combined as needed.

  • The eliminated parameters can also be solved separately with the same algorithm.

  • The weight matrix remains the original diagonal one.

  • The original observations are used; no differencing is required.

It is clear that the described algorithm has all the advantages of both non-differential and differential GPS data processing methods. The equivalence theory of the undifferenced and differencing GPS data processing algorithms may be described as Xu’s equivalence theory.