1 Introductory Remarks

In this chapter, the data processing or post-processing necessary after field measurements is presented. First, the general procedures undertaken to process baseline data are considered, followed by the adjustment of network observations. Network observations, as illustrated in Fig. 5.8 on p. 70, play an important role in monitoring the spatial motion of the land surface, and as such, an understanding of their adjustment is essential. Besides showing how the observed GPS data are processed, the chapter also presents the basics of least squares solutions, which facilitate the adjustment of the observations, and looks at the quality assessment factors that need to be considered after an adjustment. In general, most commercial processing software will generate solutions once approximate positions of the occupied points and the observational data are available.

2 Processing of Observations

2.1 Data

Satellite observations are of little use unless they can be processed into meaningful solutions relevant to environmental monitoring. Data processing generally proceeds in three steps (Fig. 6.1, left). The first step involves transferring the data from the GNSS receiver or data collection device to the computer for processing and archiving. Most commercial software packages are automated and offer interactive options for transferring the data. As we have already seen, several types of GNSS receivers can be used for data collection. With the full operational capability of additional GNSS satellites (see Chap. 2), there will certainly be more receivers on the market for civilian use. These receivers normally come with their own vendor processing software; for example, Trimble receivers come with the Trimble Business Center (TBC) for processing the data.

Fig. 6.1 Left: GNSS data handling steps. Right: data processing steps

Where multiple receivers of different types are employed in a GNSS campaign, data from all these receivers should first be converted into a format that can be understood independently of the source receiver. This format is RINEX (Receiver INdependent EXchange format), and the conversion can be performed automatically by most vendor software. For example, Trimble receivers save GNSS data with the extension '.dat', while Sokkia receivers save theirs with the extension '.PDC'; both are binary formats that must be converted into RINEX, an ASCII (American Standard Code for Information Interchange) format, before processing. Once the data are in RINEX format, they can be processed using any software.
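As an illustration of what the RINEX format looks like in practice, the short Python sketch below pulls a few header fields (marker name, antenna type and height, approximate position) from a RINEX observation file. It is a minimal sketch assuming a RINEX observation header in which each record carries its label in columns 61–80; the file name in the usage line is hypothetical.

```python
def read_rinex_header(path):
    """Extract a few common header fields from a RINEX observation file."""
    fields = {}
    with open(path) as f:
        for line in f:
            label = line[60:].strip()              # header label, columns 61-80
            if label == "MARKER NAME":
                fields["marker"] = line[:60].strip()
            elif label == "ANT # / TYPE":
                fields["antenna_type"] = line[20:40].strip()
            elif label == "ANTENNA: DELTA H/E/N":
                fields["antenna_height_m"] = float(line[:14])
            elif label == "APPROX POSITION XYZ":
                fields["approx_xyz_m"] = [float(line[i:i + 14]) for i in (0, 14, 28)]
            elif label == "END OF HEADER":
                break
    return fields

# Hypothetical file name; any observation file converted from the receiver's
# native binary format to RINEX can be inspected this way.
print(read_rinex_header("site0010.25o"))
```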

Once the data have been transferred to a computer, the next step is preprocessing, which depends on the type of data collected (e.g., static) and the type of initialization (see Sect. 5.4.5). Preprocessing consists of editing the data to ensure data quality and determining the ephemeris, where one has to choose between the broadcast and precise ephemeris (see Sect. 3.4.1) when post-processing baseline carrier-phase observations. Autonomous hand-held receivers that use code measurements require no post-processing, since their positions are computed and recorded automatically during field operations. Editing activities include the identification and elimination of cycle slips, editing gaps in the data, and checking station names and antenna heights. In addition, elevation mask angles should be set during this phase, along with the options for tropospheric and ionospheric models [1].
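As an example of one such preprocessing setting, the sketch below computes a satellite's elevation angle at the receiver and discards observations below a chosen mask angle. It is a minimal sketch assuming the receiver's approximate geodetic latitude and longitude and the satellite ECEF positions are already available; the 15° default mask is an illustrative choice, not a recommendation.

```python
import numpy as np

def elevation_angle(recv_ecef, sat_ecef, lat_rad, lon_rad):
    """Elevation of a satellite above the local horizon at the receiver (radians)."""
    los = np.asarray(sat_ecef, dtype=float) - np.asarray(recv_ecef, dtype=float)
    los /= np.linalg.norm(los)                       # unit line-of-sight vector
    # Local "up" direction at the receiver, from geodetic latitude/longitude
    up = np.array([np.cos(lat_rad) * np.cos(lon_rad),
                   np.cos(lat_rad) * np.sin(lon_rad),
                   np.sin(lat_rad)])
    return np.arcsin(np.clip(np.dot(los, up), -1.0, 1.0))

def above_mask(recv_ecef, sat_ecef, lat_rad, lon_rad, mask_deg=15.0):
    """True if the satellite is above the elevation mask angle."""
    return np.degrees(elevation_angle(recv_ecef, sat_ecef, lat_rad, lon_rad)) >= mask_deg
```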

2.2 Baseline Processing

2.2.1 Cycle Slips: Detection and Repair

When collecting data by GNSS methods, a cycle slip is said to occur when a receiver loses its 'grip' or 'lock' on a satellite. Hoffman–Wellenhof [2] define a cycle slip as a discontinuity or jump in the GNSS carrier-phase measurement by an integer number of cycles, caused by a temporary loss of signal. Signal loss can occur as a result of one of the following factors:

  • Obstruction from trees, buildings, etc.

  • Low signal to noise ratio due to ionospheric effects, multipath or low GNSS satellite elevation.

  • Software failure in the receiver.

  • Severe ionospheric disturbances, radio interference, and high receiver dynamics.

Once a cycle slip occurs during a GNSS survey, the integer count has to be re-initialized, since all observations thereafter are shifted by the same integer number of cycles (the "jump"). Cycle slips occur independently on the L1 and L2 carriers.

During data processing, editing and correcting cycle slip errors is one of the major tasks that has to be undertaken to achieve quality output. In general, the detection and correction of cycle slips is easier when dual-frequency, differenced data are used in static mode. This is because dual frequencies allow linear combinations whose residuals can be analyzed to diagnose cycle slip errors. Short baselines are preferable, since atmospheric (ionospheric) errors largely cancel, thus isolating the cycle slip errors. Detection is also much easier in post-processing than in real-time positioning, since cycle slips are indicated by gaps in the data; such gaps have to be removed before the data are fully processed.

Cycle slips can also be detected and corrected using a Kalman filtering approach [2]. Correcting cycle slips ensures that the observations to be used in baseline and network adjustments are free from signal gaps. Automated procedures for correcting cycle slips exist in commercial software.
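To illustrate how a dual-frequency linear combination exposes cycle slips, the sketch below forms the geometry-free combination (L1 minus L2 carrier-phase, both expressed in metres) for a single, continuously tracked satellite. The combination removes the geometric range and clock terms and varies only slowly with the ionosphere, so an abrupt epoch-to-epoch jump flags a possible slip. This is a minimal sketch; the threshold value and array names are illustrative assumptions, and production software applies considerably more elaborate tests.

```python
import numpy as np

def flag_cycle_slips(phase_l1_m, phase_l2_m, threshold_m=0.05):
    """Return epoch indices where the geometry-free combination jumps abruptly.

    phase_l1_m, phase_l2_m : carrier-phase observations in metres, one value per
    epoch for one satellite tracked without interruption.
    """
    gf = np.asarray(phase_l1_m, dtype=float) - np.asarray(phase_l2_m, dtype=float)
    jumps = np.abs(np.diff(gf))                  # change between consecutive epochs
    return np.where(jumps > threshold_m)[0] + 1  # epochs at which a slip is suspected
```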

2.2.2 Ambiguity Resolution

In Sect. 4.2, the concept of integer ambiguity was introduced (e.g., Fig. 4.1, p. 47). In this section, it is considered in more detail. When measuring pseudoranges using carrier-phases, what is measured when the receiver is first switched on is the carrier 'beat' phase, i.e., the difference between the satellite-transmitted phase and the receiver-generated phase. The initial integer number of cycles between the satellite and the receiver's antenna, i.e., the integer cycle ambiguity N, is not known. For each satellite-receiver pair, as long as the receiver maintains phase lock on the satellite during the observations, there exists one integer ambiguity value. Its determination is essential to ensure high quality in the estimated parameters (e.g., positions, temperatures and pressures), which are required for high accuracy environmental tasks (e.g., monitoring sea level changes and global warming).

Generally, ambiguity resolution proceeds in three steps. First, from the float solution discussed in Sect. 6.2.3, potential integer values of N are generated. This can be achieved if the coordinates of one station (i.e., the reference station) are known, so as to give an approximate baseline length, or by differencing code- and phase-pseudorange equations. For static positioning, float solutions are used, whereas for the kinematic approach, code-pseudorange solutions can be adopted. Once the integer candidates of N have been generated in the first step, the correct integer combination is selected in the second step such that the sum of the squares of the residuals is a minimum. This is done by inserting the selected integers into the initial equations and assessing, in the third step, whether the resulting residuals are the smallest. Approaches for ambiguity determination can generally be grouped into four types: geometrical approaches, code-phase combinations, search approaches, and combinations of these. The most commonly used search method is the LAMBDA method developed by Teunissen [3]. For a detailed discussion of these approaches, we refer to [4].
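The three steps above can be mimicked by a brute-force search: integer candidates are generated around the float ambiguity estimates, each candidate is substituted back into the observation equations, and the set yielding the smallest sum of squared residuals is retained. The sketch below is only a conceptual illustration under an assumed linear model \(\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{n}+\epsilon\) (real-valued parameters x, integer ambiguities n); the LAMBDA method achieves the same goal far more efficiently by first decorrelating the ambiguities.

```python
import itertools
import numpy as np

def search_integer_ambiguities(A, B, y, n_float, width=2):
    """Brute-force integer ambiguity search around a float solution.

    A, B    : design matrices of the real parameters x and the ambiguities n
    y       : observation vector
    n_float : float (real-valued) ambiguity estimates
    width   : half-width of the integer search window around each float value
    Returns (sum of squared residuals, integer ambiguities, parameters) of the best fit.
    """
    windows = [range(int(round(nf)) - width, int(round(nf)) + width + 1) for nf in n_float]
    best = None
    for cand in itertools.product(*windows):
        n = np.array(cand, dtype=float)
        x, *_ = np.linalg.lstsq(A, y - B @ n, rcond=None)   # re-estimate x with n fixed
        r = y - A @ x - B @ n                               # residuals for this candidate
        ssr = float(r @ r)
        if best is None or ssr < best[0]:
            best = (ssr, cand, x)
    return best
```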

2.3 Solution Types

2.3.1 Float and Fixed Solutions

Resolution of the integer ambiguity N determines whether a float or a fixed solution is achieved during data processing (Fig. 6.2). A float solution is one in which the ambiguity is estimated together with the other unknowns \(\{X,Y,Z,c\delta t\}\) (Eq. 6.17, p. 111) and is normally a real number; for this reason, the term ambiguity-free solution is sometimes used. The estimated parameters will, however, be of lower accuracy than those of a fixed solution, although still better than those from triple differencing (e.g., Fig. 4.9 on p. 57). Ambiguity-free solutions are, nonetheless, useful for obtaining fixed solutions. The solution (baseline vector) produced when the differenced carrier-phase observations resolve the cycle ambiguity is called a "fixed" solution, since the ambiguity no longer needs to be carried as an unknown [1]. In fixed solutions, also known as ambiguity-fixed solutions, the actual integer values are first determined, fixed, and then used in the adjustment (Eq. 6.17, p. 111), leaving only the position parameters \(\{X,Y,Z\}\) and the receiver clock bias term \(c\delta t\) to be determined. Fixed solutions normally lead to more accurate positions. However, when the cycle ambiguities cannot be resolved, which sometimes occurs when a baseline is longer than 75 km, a float solution may actually be the best option [1].

Fig. 6.2 Float and fixed GNSS solutions

Fig. 6.3 Left: Single baseline. Right: Multi-baselines. The coordinates of station A are known and fixed, while those of B, C, and D are unknown

2.3.2 Baseline Solutions

For single baselines (e.g., Fig. 6.3, left), the processing (see Fig. 6.1, left) either treats each baseline individually or processes all baselines in a joint adjustment. The final results depend on how well the ambiguities and other errors are handled. Commercial baseline reduction software offers a variety of options that are set automatically (or manually) to determine the most "optimum" solution: after an initial code solution, a triple-difference solution is computed, followed by a double-difference solution (e.g., Figs. 4.8 and 4.9 on p. 57), leading to a fixed solution in the event that the integer ambiguities are successfully resolved [1]. Correlations between the baselines are not necessarily taken into consideration except in a network adjustment, where they provide weight information (see Sect. 6.2.5).

If n GNSS receivers observe simultaneously, \(n(n-1)/2\) baselines can be formed and adjusted, with double differencing offering the best solutions because the integer nature of the ambiguities is preserved. Most commercial software offers baseline processing capabilities and normally provides different types of solutions, e.g., L1 fixed (only the L1 signal is used to derive the solution), ionospheric-free fixed (both the L1 and L2 signals are used to remove ionospheric errors (e.g., Sect. 3.4.3)), and float (see Sect. 6.2.3.1). In addition, the packages attempt to produce the most accurate fixed solution for short lines (e.g., less than 15 km for single-frequency and less than 30 km for dual-frequency receivers) [1]. As discussed in Sect. 3.4, positioning accuracy will depend on how well the errors are managed. In general, triple-difference accuracies are lower than those of float and fixed solutions.
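The baseline count is easy to verify: among n simultaneously observing stations, the possible pairings can be enumerated directly (station names below are arbitrary), of which only n − 1 are independent in any one session.

```python
from itertools import combinations

stations = ["A", "B", "C", "D"]                # n = 4 receivers observing simultaneously
baselines = list(combinations(stations, 2))    # all n(n-1)/2 = 6 possible baselines
print(len(baselines), baselines)
# 6 [('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D')]
```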

For baselines longer than 30–50 km, if the fixed solution is not deemed reliable (based on the various quality indicators discussed below), then the default float solution may be used. Although it is not as accurate as a fixed solution, if the session time is long enough (e.g., 1 to 2 h), it will still be fairly accurate, e.g., 20–50 mm for lines shorter than 75 km [1]. After the baseline solutions, users can assess the reliability of the obtained solution from the numerous statistical and graphical displays provided by the commercial software.

Fig. 6.4 The accuracy of the derived values is dependent upon the geometry of the satellites, the accuracy of the positions of the reference stations, the quality of the observations, and how well the errors have been managed. In this figure, for example, satellite geometry no. 2 is better than no. 1

2.4 Quality Assessment

The output of data processing from most commercial software will often consist of positions (whose accuracy is a function of the items in Fig. 6.4), covariances, and residuals. Covariances are provided in the dispersion matrix (Eq. 6.18), which enables the quality of the estimated positions to be analyzed. The square roots of the diagonal elements of the dispersion matrix give the standard deviations (discussed below). The dispersion matrix can also be used to construct error ellipses useful for analyzing the estimates (e.g., Fig. 6.5), and to generate the dilution of precision. Commercial software has set criteria upon which it bases any decision to reject bad observations or output, and it compares triple-difference, float, fixed, single-baseline and multi-baseline solutions to obtain the optimal result. The following quality assessment factors are what various software packages base their acceptance criteria upon [1]:

  • Variance ratio: A fixed solution indicates that the integer ambiguities have been successfully resolved. Most software computes the variance of each candidate integer ambiguity solution and compares the solution with the lowest variance to that with the next higher variance. The software then imposes a minimum value of this ratio that must be exceeded; otherwise, the processor reverts to the float solution.

  • Reference variance: Also known as the variance of unit weight, this value indicates how well the computed errors in the solution compare with the a priori values for a typical baseline. A value of 1.0 indicates a good solution; values over 1.0 indicate that the observed data were worse than expected. Baselines with higher reference variances and lower variance ratios need to be checked for possible problems (e.g., the cycle slips discussed in Sect. 6.2.2.1). A small computation sketch for this quantity is given after this list.

  • Root-mean-square (RMS): This is a quality factor that helps the user determine which vector solution (triple-difference, float or fixed) to use in the adjustment, and is usually stated at the 95% confidence level. It depends on the baseline length, the time over which the baseline was observed, and ionospheric, tropospheric and multipath errors. A low RMS does not always indicate a good result, but it provides a measure of the quality of the data used in the post-processed baseline vector.

  • Repeatability: Redundant lines should agree to the level of accuracy that GNSS is capable of measuring. Residual plots depict the data quality of individual satellite signals and typically vary ±5 mm about the mean, with those exceeding ±15 mm being suspect. If the quality criteria above are not met, one may consider removing some or all of the baselines of a session, changing the elevation mask, removing one or more satellite solutions and/or, if necessary, re-observing the baseline.

  • Accuracy: Indicates how close a measurement or group of measurements is to the "true" value.

  • Precision: This is how close a group or sample of measurements are to each other or to their mean. A low standard deviation indicates a high precision. Measurements can have high precision but low accuracy.

  • Standard deviation: This is a measure of how close the measured values are to the arithmetic mean. It is obtained by taking the square root of the variance and is sometimes called the "standard error", though the two are slightly different. A low standard deviation indicates that the measurements are close together.
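The reference variance and the standard deviations listed above follow directly from the least squares quantities introduced in Sect. 6.3. The minimal sketch below computes the a posteriori variance of unit weight \(\epsilon^{T}\mathbf{W}\epsilon/(n-u)\) and the parameter standard deviations from a design matrix A, weight matrix W, observation vector y and the estimated parameters; the function and variable names are assumptions for illustration.

```python
import numpy as np

def adjustment_quality(A, W, y, xi_hat):
    """A posteriori reference variance and parameter standard deviations."""
    e = y - A @ xi_hat                         # residuals after the adjustment
    dof = A.shape[0] - A.shape[1]              # degrees of freedom (observations - unknowns)
    ref_var = float(e @ W @ e) / dof           # reference variance (variance of unit weight)
    Q = np.linalg.inv(A.T @ W @ A)             # cofactor matrix of the estimated parameters
    std = np.sqrt(ref_var * np.diag(Q))        # standard deviations of the parameters
    return ref_var, std
```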

Fig. 6.5 Accuracy control through error ellipses, where the smaller the ellipse, the more accurate the solution. Stations near the control points have smaller ellipses. Session 1 entails observing network A, B, C and D, while session 2 entails network C, D, E, and F

2.5 Adjustment of GNSS Network Surveys

Network adjustment often follows baseline processing, which provides the covariance matrices used as weights in the adjustment. Where the correlations between baselines are considered, processing is performed in a primary and then a secondary adjustment step. The primary step consists of baseline processing, while the secondary step uses the raw baseline distances and the variance-covariance information obtained from the primary step to improve the results. Processing can be done for a single session comprising a single baseline, where one station is fixed (i.e., its coordinates are known) while the coordinates of the other stations are unknown (Fig. 6.3, left), or for multi-baselines, where the baselines are interconnected (Fig. 6.3, right). Figure 6.6 provides a summary of single-session processing.

Fig. 6.6 Left: Single-session solution. Right: Multi-session solution. See Fig. 6.5 for the definition of a session

In practice, it may happen for one reason or another that a session survey (e.g., A, B, C, and D in Fig. 6.5) is not completed, necessitating continuing the survey to C, D, E and F at another time. In this case, two sessions are involved and multi-session processing is adopted. Both adjustment procedures are treated, e.g., in [5, 6]. As in the single-session adjustment, the results of the primary adjustment provide the weights used in the secondary adjustment; the only difference is that the double-differencing functional model requires at least one station common to the sessions.

Two types of adjustment are presented in [1]: free and constrained (fixed) network adjustment. A free network adjustment fixes only one point and, for this reason, is also known as a minimally constrained adjustment. It is useful for assessing the internal accuracy of the observed network. If the fixed point is given arbitrary values and a GNSS loop survey is carried out with respect to it (i.e., starting from point A, through B, C, D, and back to A in Fig. 6.5), the sum of the baseline vectors should be zero. Any misclosure of the loop (i.e., a non-zero sum of vectors) indicates the internal reliability; a free network adjustment is therefore vital for removing poor quality observations (a misclosure check is sketched below). A constrained (fixed) adjustment, on its part, fixes two or more points and assesses the external reliability with respect to these fixed (reference) control points (e.g., point A in Fig. 6.5). Care must be taken, since these external control points also come with their own accuracies (i.e., they are not error free), which may be lower than those of GNSS. For adjustment, it is recommended first to process the baselines, then perform the free network adjustment, and finally the constrained adjustment. The baseline data provide the input observations plus the weights (from the standard deviations).
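The loop-closure test used to judge internal reliability is straightforward to compute: the observed baseline vectors around a closed figure are summed, and the misclosure is compared with the noise level expected of GNSS baselines. The vectors below are made-up numbers purely for illustration.

```python
import numpy as np

# Baseline vectors (dX, dY, dZ) in metres observed around the loop A -> B -> C -> D -> A
loop = [
    np.array([1200.015, -350.002, 410.007]),   # A -> B
    np.array([-800.010, 920.001, -130.003]),   # B -> C
    np.array([-650.002, -400.004, 220.001]),   # C -> D
    np.array([249.994, -169.992, -500.009]),   # D -> A
]

misclosure = np.sum(loop, axis=0)              # should be close to zero for a consistent loop
print("misclosure (m):", misclosure, "norm (m):", np.linalg.norm(misclosure))
```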

In undertaking GNSS surveys, it is advisable that they be adjusted and analyzed both for their internal consistency and for their external fit with existing control points. The internal consistency adjustment (i.e., the free or minimally constrained adjustment) is important from a contract compliance standpoint, while the final, or constrained, adjustment fits the GNSS survey to the existing network. This is not always easily accomplished, since existing networks often have lower relative accuracies than the GNSS observations being fitted to them. An evaluation of a survey's adequacy should therefore not be based solely on the results of a constrained adjustment [1].

3 Least Squares Solution

In Chap. 4, various ways of modelling GNSS observations with the aim of eliminating or minimizing errors were discussed. Although systematic biases can be eliminated or corrected, random errors associated with observations normally remain and have to be taken care of through an adjustment procedure. Through such adjustment, coordinates and receiver clock parameters can be estimated. In this section, we present the basics of the “least squares” estimation method used in most commercial GNSS processing software.

In this chapter, the term estimation has been used repeatedly. But what exactly is estimation? In environmental monitoring, observations are normally collected with the aim of finding or measuring changes in some desired environmental parameters in order to assess specific tasks, e.g., compliance with given legislation or policy, spatial or temporal changes, or the predicted environmental impact of a proposed project. If we take the case of surface displacement due to earthquakes, for example, GNSS measurements would be undertaken with the aim of determining the extent of the surface movement. As was pointed out in Sect. 5.4.2, the desired environmental parameters for monitoring spatial changes are the relative motions of positions (\(\varDelta X, \varDelta Y, \varDelta Z\)) with respect to some fixed network of control (reference) points established before the event of concern, using the BACI (Before-After-Control-Impact) monitoring model (1-1 on p. 1). In general, these relative changes in position are the "unknown parameters", and the process of obtaining them through an adjustment criterion is known as the "estimation of parameters".

With improvement in instrumentation, more observations are often collected than the unknowns. For deformable surfaces being monitored, such as is the case in mining areas, or structures (e.g., bridges), several observation points will normally be marked on the surface of the body being monitored. These points would then be observed from a network of control points set up on a non-deformable stable surface (e.g. Fig. 5.8 on p. 70). Measurements taken between the control points and the points being monitored (see Sect. 5.4.2) will generally lead to an overdetermined system, i.e., more observations than unknowns, see e.g., [7,8,9].

The procedures used to estimate the unknown parameters from the measured values depend on the nature of the equations relating the observations to the unknowns. These equations are normally referred to as the "functional model". If these equations are linear, the task is much simpler; in such cases, any procedure that can invert the normal equation matrix, such as least squares, will suffice. Procedures for estimating parameters in linear models are documented, e.g., in [9,10,11,12]. If the equations relating the observations to the unknowns are nonlinear, they are first linearized and the unknown parameters estimated iteratively using the least squares method. These numerical methods require approximate starting values: at each iteration step, the previously estimated values of the unknowns are improved, and the iteration is repeated until the difference between two consecutive estimates satisfies a specified threshold. Awange and Grafarend [7, 8] present algebraic procedures that avoid linearization and iteration in estimating the unknown parameters from nonlinear models. Linear and nonlinear models are treated in more detail, e.g., in Grafarend and Awange [8, 9, 14].

Method of Least Squares

The least squares approach traces its roots to the work of C.F. Gauss (1777–1855). Since GNSS operates by measuring the distances between the receiver and the satellites (as discussed in Sect. 3.3.2), let us consider a simple example where two distances \(\{S_{1},S_{2}\}\) are measured from an unknown station \(P_{0}\) to two known stations \(P_{1}\) and \(P_{2}\), as shown in Fig. 6.7 (left). From these measured distances \(\{S_{1},S_{2}\}\) and the known positions \(\{X_{1},Y_{1}\}_{P_{1}}\) of station \(P_{1}\) and \(\{X_{2},Y_{2}\}_{P_{2}}\) of station \(P_{2}\), the position \(\{X_{0},Y_{0}\}_{P_{0}}\) of the unknown station \(P_{0}\) can be obtained.

Fig. 6.7 Left: Distance measurements (\(S_1,S_2\)) to two known stations (\(P_1,P_2\)) from an unknown point \(P_0\) whose position is to be determined. Right: distance measurements to three known stations (\(P_1,P_2,P_3\))

The nonlinear distance equations relating the measured distances to the coordinates of the unknown station are expressed as

$$\begin{aligned} \left[ \begin{array}{c} S_{1}^{2}=(X_{1}-X_{0})^{2}+(Y_{1}-Y_{0})^{2}\\ S_{2}^{2}=(X_{2}-X_{0})^{2}+(Y_{2}-Y_{0})^{2}, \end{array}\right. \end{aligned}$$
(6.1)

which leads to the two possible solutions presented in Fig. 6.8. Now, let us consider the case where a third station \(P_3\) is also measured, as indicated in Fig. 6.7 (right). This gives rise to an overdetermined system of three equations in two unknowns, expressed as

$$\begin{aligned} \left[ \begin{array}{c} S_{1}^{2}=(X_{1}-X_{0})^{2}+(Y_{1}-Y_{0})^{2}\\ S_{2}^{2}=(X_{2}-X_{0})^{2}+(Y_{2}-Y_{0})^{2}\\ S_{3}^{2}=(X_{3}-X_{0})^{2}+(Y_{3}-Y_{0})^{2}, \end{array}\right. \end{aligned}$$
(6.2)

which must be used to solve for the unknown coordinates \(X_{0},Y_{0}\) of station \(P_{0}\). In (6.2), we have more equations than unknowns, thus necessitating least squares techniques. The equations first have to be linearized; otherwise, one must use nonlinear methods such as those presented in [7, 8, 14]. Linear models commonly used for parameter estimation are presented in detail in [9, 11]. We will limit our discussion to the simple least squares model and refer interested readers who desire a more thorough coverage of parameter estimation methods to the works of [7,8,9, 11].
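To make the linearize-and-iterate idea concrete before the formal development below, the following sketch solves the overdetermined distance problem (6.2) by repeated linearization: starting from approximate coordinates of \(P_0\), the distance equations are linearized, the corrections are estimated by least squares, and the process is repeated until the corrections become negligible. The station coordinates and distances are made-up numbers, not values taken from the figures.

```python
import numpy as np

# Known stations P1, P2, P3 (X, Y) and the distances S1, S2, S3 measured from P0
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
distances = np.array([71.07, 77.77, 63.65])    # slightly noisy measurements

x0 = np.array([40.0, 40.0])                    # approximate coordinates of P0
for _ in range(10):
    diff = x0 - stations                       # vectors from each known station to P0
    computed = np.linalg.norm(diff, axis=1)    # distances implied by the approximate P0
    A = diff / computed[:, None]               # partial derivatives of Si w.r.t. X0, Y0
    y = distances - computed                   # observed minus computed distances
    dx, *_ = np.linalg.lstsq(A, y, rcond=None) # least squares corrections
    x0 = x0 + dx
    if np.linalg.norm(dx) < 1e-6:              # stop when the corrections are negligible
        break

print("estimated position of P0:", x0)         # close to (45, 55) for these numbers
```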

Fig. 6.8 Exact solution of the distance problem described in Fig. 6.7. P indicates the two possible solution points based on \(S_1,S_2\) from the two known stations \(P_1,P_2\)

Least squares consists of a functional and a stochastic model. The functional model, also known as the observation equations, can be viewed as the equations relating what has been measured (the known quantities) to what is to be estimated (the unknown parameters). In the stochastic model, the weight matrix \(\mathbf W \) is related to the variance-covariance matrix \(\mathbf Q \) of the observations (Eq. 6.3). The variance-covariance matrix describes the precision of the observations and the correlations among them, and arises from the fact that no observation can be error free; the weight matrix is thus a measure of the relative quality of the observations. It is related to the variance-covariance matrix through

$$\begin{aligned} \mathbf {W}={\mathbf {Q}}^{-1}. \end{aligned}$$
(6.3)

The diagonal elements of the matrix \(\mathbf {Q}\) are termed variances while the off diagonal elements are known as covariances.

In least squares terms, the equation

$$\begin{aligned} \mathbf y =\mathbf A \xi +\epsilon , \end{aligned}$$
(6.4)

is a functional model relating the observation vector \(\mathbf y \) to the vector of unknown parameters \(\xi \), with \(\epsilon =\mathbf y -\mathbf A \xi \) being the vector of discrepancies (or error vector). The vector \(\mathbf y \) comprises the observations (measured quantities) or the differences between the measured values and those computed from the functional model part \(\mathbf A \xi \). \(\mathbf A \) is the design matrix, which normally consists of the coefficients of the unknown terms; for a linear model, the elements of \(\mathbf A \) are these coefficients directly.


Consider two simultaneous equations given as

$$\begin{aligned} \begin{array}{c} 2x+y=4 \\ 3x-2y=6. \end{array} \end{aligned}$$
(6.5)

In (6.5), the design matrix \(\mathbf A \), the vector \(\mathbf y \) of observations, and the vector \(\xi \) of unknowns are therefore

$$\begin{aligned} \mathbf A =\left[ \begin{array}{cc} 2 &{} 1 \\ 3 &{} -2 \\ \end{array} \right] ,\,\, \mathbf y =\left[ \begin{array}{c} 4 \\ 6 \\ \end{array} \right] ,\,\, \xi =\left[ \begin{array}{c} x \\ y \\ \end{array} \right] . \end{aligned}$$
(6.6)

In GNSS satellite observations, there exist two groups of parameters, namely:

  1. parameters related to the geometrical range \(\varrho \), and

  2. parameters related to biases, e.g., clock biases.

These are related by a functional model (the pseudorange equation) described by Eq. (4.15) on p. 49. Let us re-write the pseudorange equation (4.15) for satellite 1 as

$$\begin{aligned} \boxed {F:=\varrho ^1=\sqrt{(X^S-X_R)^2+(Y^S-Y_R)^2+(Z^S-Z_R)^2}+c\triangle t}, \end{aligned}$$
(6.7)

where \(X^S,Y^S,Z^S\) is the satellite’s position and \(X_R,Y_R,Z_R\) is the receiver’s position. The receiver clock error term is designated \(\varDelta t\) and c is the speed of light in a vacuum. All the other biases and errors that were discussed in Chap. 4 are assumed to have been modelled. In comparison to (6.5), (6.7) is nonlinear and cannot be expressed directly in the form (6.4) and therefore has to be linearized. This is achieved through the Taylor series expansion about approximate values of the unknown parameters. If the unknown receiver coordinates \(X_R,Y_R,Z_R\) are approximated by \(X_0,Y_0,Z_0\), such that

$$\begin{aligned} \left[ \begin{array}{c} X_R=X_0+\varDelta X \\ Y_R=Y_0+\varDelta Y \\ Z_R=Z_0+\varDelta Z, \end{array}\right. \end{aligned}$$
(6.8)

the Taylor series expansion of (6.7) about these approximate coordinates become

$$\begin{aligned} \begin{array}{c} F:=F(X_0,Y_0,Z_0) + \displaystyle {\frac{\partial F(X_0,Y_0,Z_0)}{\partial X_0}} \varDelta X+ \displaystyle {\frac{\partial F(X_0,Y_0,Z_0)}{\partial Y_0}}\varDelta Y+ \\ \displaystyle {\frac{\partial F(X_0,Y_0,Z_0)}{\partial Z_0}}\varDelta Z, \end{array} \end{aligned}$$
(6.9)

where higher order terms have been neglected. This leads to the linearized pseudorange equation (6.7) being written as

$$\begin{aligned} \varrho ^1=\varrho ^0 + \displaystyle {\frac{X_{0}-X^S}{\varrho ^0}} \varDelta X+\displaystyle {\frac{Y_{0}-Y^S}{\varrho ^0}}\varDelta Y+\displaystyle {\frac{Z_{0}-Z^S}{\varrho ^0}}\varDelta Z+c\varDelta t, \end{aligned}$$
(6.10)

with the partial derivatives being

$$\begin{aligned} \left[ \begin{array}{l} \displaystyle {\frac{\partial F_1}{\partial X_0}}=\displaystyle {\frac{X_{0}-X^S}{\sqrt{(X_{0}-X^S)^2+(Y_{0}-Y^S)^2+(Z_{0}-Z^S)^2}}}=\displaystyle {\frac{X_{0}-X^S}{\varrho ^0}} \\ \\ \displaystyle {\frac{\partial F_1}{\partial Y_0}}=\displaystyle {\frac{Y_{0}-Y^S}{\varrho ^0}} \\ \\ \displaystyle {\frac{\partial F_1}{\partial Z_0}}=\displaystyle {\frac{Z_{0}-Z^S}{\varrho ^0}} \\ \\ \displaystyle {\frac{\partial F_1}{\partial c\triangle t}}=1. \end{array}\right. \end{aligned}$$
(6.11)

In order to express this equation in the form of the functional model (6.4), the design matrix \(\mathbf A \), the vector \(\mathbf y \) of observations for n satellites observed by a receiver, and the vector \(\xi \) of unknowns are expressed as:

$$\begin{aligned} \mathbf A =\left[ \begin{array}{cccc} \displaystyle {\frac{\partial F_1}{\partial X_R}} &{} \displaystyle {\frac{\partial F_1}{\partial Y_R}} &{} \displaystyle {\frac{\partial F_1}{\partial Z_R}} &{} 1 \\ {} &{} &{} &{}\\ \displaystyle {\frac{\partial F_2}{\partial X_R}} &{} \displaystyle {\frac{\partial F_2}{\partial Y_R}} &{} \displaystyle {\frac{\partial F_2}{\partial Z_R}} &{} 1\\ . &{} . &{} . &{} . \\ \displaystyle {\frac{\partial F_n}{\partial X_R}} &{} \displaystyle {\frac{\partial F_n}{\partial Y_R}} &{} \displaystyle {\frac{\partial F_n}{\partial Z_R}} &{} 1 \end{array} \right] ,\,\, \mathbf y =\left[ \begin{array}{c} \varrho ^1-\varrho ^{01}\\ \varrho ^2-\varrho ^{02}\\ . \\ \varrho ^n-\varrho ^{0n} \end{array} \right] ,\,\, \xi =\left[ \begin{array}{c} \varDelta X_R\\ \varDelta Y_R\\ \varDelta Z_R \\ c\varDelta t \end{array} \right] , \end{aligned}$$
(6.12)

where the values of \(\mathbf y \) are pseudorange differences (measured minus computed, using the approximate coordinates). Exact solutions of (6.7) are presented, e.g., in Awange and Grafarend [7, 8, 14, 15].
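The linearized model (6.10)–(6.12) translates almost directly into code. The minimal sketch below builds the design matrix \(\mathbf{A}\) and the misclosure vector \(\mathbf{y}\) of (6.12) for a single epoch from the satellite positions, the measured pseudoranges, and approximate receiver coordinates; the function and variable names are assumptions for illustration.

```python
import numpy as np

def design_matrix(sat_pos, pseudoranges, x0):
    """Design matrix A and misclosure vector y of (6.12).

    sat_pos      : (n, 3) array of satellite ECEF positions (m)
    pseudoranges : (n,) measured pseudoranges (m)
    x0           : (4,) approximate parameters [X0, Y0, Z0, c*dt]
    """
    diff = x0[:3] - sat_pos                      # (X0 - X^S, Y0 - Y^S, Z0 - Z^S) per satellite
    rho0 = np.linalg.norm(diff, axis=1)          # geometric ranges computed from x0
    A = np.hstack([diff / rho0[:, None],         # partials w.r.t. X, Y, Z as in (6.11)
                   np.ones((len(rho0), 1))])     # partial w.r.t. c*dt
    y = pseudoranges - (rho0 + x0[3])            # measured minus computed pseudoranges
    return A, y
```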

The requirement of the least squares solution is simply that the sum of the squares of the errors \(\epsilon =\mathbf y -\mathbf A \xi \) be minimized through

$$\begin{aligned} \epsilon ^{T}\epsilon \rightarrow min. \end{aligned}$$
(6.13)

If we now incorporate the weights \(\mathbf W \) of the observations from the stochastic model, (6.13) becomes

$$\begin{aligned} \epsilon ^{T}{} \mathbf W \epsilon \rightarrow min. \end{aligned}$$
(6.14)

The minimum requirement in (6.14) is subject to the functional model (6.4).

Rewriting (6.4) as

$$\begin{aligned} \epsilon =\mathbf {y} -\mathbf {A}\xi , \end{aligned}$$
(6.15)

and inserting it in (6.14) leads to

$$\begin{aligned} f:=\epsilon ^{T}{} \mathbf W \epsilon =(\mathbf {y} -\mathbf {A}\xi )^{T}{} \mathbf W (\mathbf {y} -\mathbf {A}\xi )\rightarrow min. \end{aligned}$$
(6.16)

In the expansion of (6.16), setting the condition \(\frac{\partial f}{\partial \xi }=0\) leads to the solution for the unknown vector \(\xi \) in (6.12) as

$$\begin{aligned} \hat{\xi }=(\mathbf {A}^T \mathbf {W} \mathbf {A})^{-1}(\mathbf {A}^{T} \mathbf {W} \mathbf {y}), \end{aligned}$$
(6.17)

with a variance-covariance matrix of the estimated parameters (receiver coordinates and clock parameter) given by

$$\begin{aligned} \mathbf {Q}_{\hat{\mathbf {x}}}=(\mathbf {A}^T \mathbf {W} \mathbf {A})^{-1}=\left[ \begin{array}{cccc} \sigma _x^2 &{} \sigma _{xy} &{} \sigma _{xz} &{} \sigma _{xct} \\ \sigma _{yx} &{} \sigma _y^2 &{} \sigma _{yz} &{} \sigma _{yct} \\ \sigma _{zx} &{} \sigma _{zy} &{} \sigma _z^2 &{} \sigma _{zct} \\ \sigma _{ctx} &{} \sigma _{cty} &{} \sigma _{ctz} &{} \sigma _{ct}^2 \\ \end{array} \right] . \end{aligned}$$
(6.18)

The square roots of the diagonal elements of (6.18) give the standard deviations of the estimated parameters in (6.17). Equations (6.17) and (6.18) are the ones mainly used in GNSS processing software to generate the final products. For more details, the reader is referred to [4, 16].
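In code, (6.17) and (6.18) amount to a few lines of linear algebra. The sketch below evaluates them for a given design matrix, weight matrix and observation vector; in the pseudorange case it would be applied to the \(\mathbf{A}\) and \(\mathbf{y}\) of (6.12), the estimated corrections added to the approximate coordinates, and the linearization repeated until convergence. The function name is an assumption for illustration.

```python
import numpy as np

def weighted_least_squares(A, y, W=None):
    """Weighted least squares estimate (6.17) and its covariance matrix (6.18)."""
    if W is None:
        W = np.eye(len(y))                       # equal weights if none are supplied
    N = A.T @ W @ A                              # normal equation matrix
    xi_hat = np.linalg.solve(N, A.T @ W @ y)     # estimated parameters, Eq. (6.17)
    Q = np.linalg.inv(N)                         # variance-covariance matrix, Eq. (6.18)
    return xi_hat, Q, np.sqrt(np.diag(Q))        # parameters, covariances, standard deviations
```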

4 Online Processing

Several internet-based GNSS processing services are freely available to users for processing their baselines online. In Australia, for example, AUSPOS (the Australian online GPS processing service) enables users to send their data via the Internet to a central processing facility at Geoscience Australia [17]. The processing software then chooses three or more CORS stations near the user's observing station and uses them to compute the user's position. The results are then sent back to the user via email. In the US, OPUS (the Online Positioning User Service) has performed similar functions to AUSPOS since March 2001 [18].

The Australian Surveying and Land Information Group (AUSLIG), which is now part of Geoscience Australia, is Australia’s national mapping agency, providing fundamental geographic information to support the mining, agricultural, transport, tourism, and communications industries, as well as defence, education, surveillance and emergency services activities [19]. OPUS is a US-based service that provides baseline reduction and position adjustment relative to three nearby national CORS reference stations. It is ideal for establishing accurate horizontal control relative to the National Geodetic Reference System (NGRS), and can also be used as a quality control check on previously established control points [1].

To use such services with a single GNSS receiver, an AUSPOS user, for example, needs to upload the dual-frequency static data in RINEX format (see the discussion in Sect. 6.2.1), together with the antenna type and height information, to a web site that processes the data using the service provider's software (e.g., Fig. 6.9). The antenna type should be as defined by the International GNSS Service (IGS), and the input antenna height should be with respect to the IGS-defined Antenna Reference Point (ARP).

Once the data are received by the AUSPOS system, the format is checked, an approximate user position is computed from the submitted RINEX file, and data files from the three or more nearest IGS stations are acquired. The best available IGS ephemerides and earth rotation parameters (ERPs) are then obtained, depending on the observation latency. International terrestrial reference frame (ITRF) coordinates are then computed for the selected IGS stations at the observation epoch. The user station plus the three or more selected IGS stations thus form a network of at least four stations that is adjusted through a network adjustment procedure. Cycle slips in the user data are removed by double differencing the carrier-phase data for each baseline. In the network adjustment, a constrained framework (see Sect. 6.2.5) is adopted, in which the three or more IGS stations are held fixed to their ITRF coordinates. When the processing is completed, a pdf report is generated and emailed to the user.

Similarly to AUSPOS, OPUS computes an average solution from the three baselines, and the output positions are provided with an overall RMS (95% confidence level), along with the maximum coordinate spreads between the three CORS stations for both the ITRF and North American Datum (NAD) 83 positions [1]. OPUS users, however, need to submit at least two hours of static, dual-frequency GNSS observations.

Fig. 6.9 AUSPOS online GPS processing service. Source: Geoscience Australia

5 Concluding Remarks

This chapter has presented only some aspects of GNSS data processing. For a detailed exposition, we refer the reader to [1, 2, 7,8,9, 13, 14, 16], as well as the various user manuals for the assorted instruments.