Abstract
Integer ambiguity resolution (IAR) is one of the key techniques in GNSS high-precision positioning. However, an overlooked incorrect integer ambiguity solution may cause severe biases in the positioning results. The optimal integer aperture estimator (IAE) has the largest possible success rate for a given failure rate. An alternative approach, which takes advantage of the integer nature of the ambiguities to minimize the solution’s mean squared error (MSE), is known as the best integer equivariant (BIE) estimator. Both are associated with the posterior probability of the GNSS integer ambiguity. It is therefore of great significance to calculate the posterior probability precisely and efficiently. Because the posterior probability involves an infinite sum, practical calculation approaches approximate the exact value by neglecting sufficiently small terms of the sum. As a result, they can only produce a posterior probability value; no information about its accuracy can be provided. In this contribution, the value of the posterior probability is bounded from below and from above by dividing the infinite sum into two parts: a major finite part and a minor infinite part, calculated by enumeration and by algebraic bounding, respectively. The obtained upper and lower bounds are rigorous and in closed form, so they can be used conveniently. Based on both bounds, a method of posterior probability calculation with controllable accuracy is proposed. It produces not only the posterior probability but also the calculation error, which is always smaller than a user-defined acceptable error. Numerical experiments verify that the proposed approach offers both controllable calculation accuracy and adjustable computational workload.
1 Introduction
Global Navigation Satellite System (GNSS) positioning via carrier-phase observations can achieve centimeter-level precision (Samama 2008; Leick et al. 2015). One of the preconditions of GNSS high-precision positioning is that the unrecorded integer cycles of the carrier phase must be correctly estimated (Dermanis and Rummel 2008). This procedure is referred to as GNSS integer ambiguity resolution (IAR). The linearized GNSS models on which IAR relies can be cast in the frame of the following mixed-integer model (Teunissen 1995; Xu et al. 1995)
where \(\mathrm{E}(\cdot )\) and \(\mathrm{D}(\cdot )\) are the expectation and dispersion operators, respectively, \({\varvec{y}}\in {\mathbb{R}}^{l}\) is a vector of carrier-phase and code observables, \({\varvec{a}}\in {\mathbb{Z}}^{n}\) is an unknown integer ambiguity vector, \({\varvec{b}}\in {\mathbb{R}}^{m}\) is a vector of unknown real-valued parameters such as baseline components and atmospheric errors, \({\varvec{A}}\) and \({\varvec{B}}\) are the design matrices associated with \({\varvec{a}}\) and \({\varvec{b}}\), respectively, and \({{\varvec{Q}}}_{{\varvec{y}}{\varvec{y}}}\) is the variance–covariance (VC) matrix of the observations in \({\varvec{y}}\).
The GNSS model can be solved by first discarding the integer nature of the ambiguities (Teunissen 1993; Xu et al. 1995). The user performs a standard least-squares (LS) adjustment, and the real-valued estimates of all the unknown parameters and their VC matrix are obtained as follows
Then the real-valued \(\widehat{{\varvec{a}}}\) is adjusted to an integer-valued vector by a certain estimator \( \Psi :\mathbb{R}^{n} \to \mathbb{Z}^{n}\), \(\varvec {\mathop{a}\limits^{ \vee }} = \Psi (\widehat{\varvec{a}}) \). This is usually realized by combining a proper estimation method with an efficient decorrelation (reduction) technique. Examples include the minimum pivoting method (Xu et al. 1995), multi-frequency spatial overlay resolution (Wu et al. 2015), lattice reduction resolution (Grafarend 2000; Xu 2012; Wu et al. 2017), and the most famous and widely used least-squares ambiguity decorrelation adjustment (LAMBDA) (Teunissen 1993, 1995; Chang et al. 2005; Verhagen et al. 2013). Teunissen (1999) introduces a class of ‘admissible’ integer estimators (IE), and places the commonly used estimators, namely rounding (Taha 1975), bootstrapping (Blewitt 1989; Dong and Bock 1989) and integer least squares (ILS) (Teunissen 1993), within the framework of IE theory.
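To make the estimator map \(\Psi\) concrete, the following sketch implements component-wise rounding and a brute-force ILS search. This is illustrative only and is not any of the cited algorithms: LAMBDA replaces the exhaustive search below with decorrelation plus a bounded ellipsoidal search, which is essential in realistic dimensions.

```python
import itertools
import numpy as np

def integer_rounding(a_float):
    """Simplest admissible integer estimator: component-wise rounding."""
    return np.rint(a_float).astype(int)

def ils_bruteforce(a_float, Q, radius=3):
    """Integer least squares min_a ||a_float - a||^2_Q by exhaustive search
    over a small cube around the rounded solution.  Feasible only in low
    dimensions; shown purely to illustrate the estimator map Psi."""
    Qinv = np.linalg.inv(Q)
    center = np.rint(a_float).astype(int)
    best, best_val = None, np.inf
    for off in itertools.product(range(-radius, radius + 1), repeat=len(a_float)):
        cand = center + np.array(off)
        r = a_float - cand
        val = float(r @ Qinv @ r)
        if val < best_val:
            best, best_val = cand, val
    return best
```

With an uncorrelated VC matrix, ILS coincides with rounding; with strongly correlated ambiguities the two can differ, which is precisely why decorrelation matters.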
The integer ambiguity solution is subject to uncertainty caused by the randomness of the data. An overlooked incorrect integer ambiguity solution may cause severe biases in the positioning results (Teunissen 2003a; Xu 2006). It is therefore important to validate the quality of the integer ambiguity estimation. Early validation approaches compare the minimum and the second minimum quadratic form of the residuals by different criteria, such as the R-ratio test (Euler and Schaffrin 1991), F-ratio test (Frei and Beutler 1990), difference test (Tiberius and de Jonge 1995), projector test (Han 1997) and monitoring validation measure (Xu 1998). Motivated by keeping the probability of accepting a wrong ambiguity solution under control, Teunissen (2003a) introduces a class of integer aperture estimators (IAE). Verhagen and Teunissen (2006) subsequently re-express the R-ratio test, F-ratio test, difference test and projector test in the framework of IAE. Moreover, Teunissen and Verhagen (2009) and Verhagen and Teunissen (2013) suggest that the validation tests are better used in the fixed-failure-rate way. Different IAEs perform differently; the optimal IAE, in the sense of maximizing the estimation success rate subject to a given failure rate, is first introduced by Teunissen (2005a), and later by Wu and Bian (2015) in an alternative framework of Bayesian statistics. From the Bayesian point of view, this optimal IAE is based on the ambiguity posterior probability given the observed GNSS data (Wu and Bian 2015). Wu and Bian (2015) further propose to validate the fixed ambiguity solution by directly checking its posterior probability. Since the posterior probability provides a very intuitive and mathematically solid indicator, it is also reasonable to apply a validation test based on a fixed posterior probability.
An alternative approach to integer ambiguity resolution is that the user never fixes the real-valued ambiguity solution to some integer, but adjusts it according to the criterion of minimizing the solution’s mean squared error (MSE). This estimator is referred to as the best integer equivariant (BIE) estimator, proposed by Teunissen (2003b). It takes advantage of the integer nature of the ambiguities, yielding a baseline estimator that is always superior to the integer equivariant (IE) and LS estimators (Teunissen 2020). The BIE estimator is the sum of all integer vectors weighted by their posterior probabilities.
It can be seen that the posterior probability plays an important role both in the optimal IAE and in the calculation of the BIE estimator. It is therefore of great significance to calculate the posterior probability precisely and efficiently. However, the exact value of the posterior probability cannot be obtained because of the infinite sums involved. Practical calculation approaches approximate the exact value by neglecting sufficiently small terms of the sum. Teunissen (2005b) replaces the sum over the whole integer space with the sum over a finite set, according to the distribution of the quadratic form of the ambiguity residuals. Wu and Bian (2015) present another method to enumerate the integer vectors based on a lower limit defined by an exponential function. Obtaining the enumerated integer vectors of Teunissen (2005b) and Wu and Bian (2015) relies on a search in the ambiguity space. Although these approaches achieve good approximations, their results can only serve as upper bounds of the posterior probability. Yu et al. (2017) introduce an approach to calculate upper and lower bounds of the posterior probability. These bounds can be useful in practical calculation, but they are not theoretically rigorous, because the derivations rest on approximate equation transformations whose influence on the accuracy evaluation is unclear. In brief, these approaches can only produce a calculated value of the posterior probability; they cannot produce accuracy information about the result. As a consequence, the user does not know (i) whether the calculation accuracy is acceptable, or (ii) whether the calculation accuracy is much higher than the required level, at the expense of unnecessary computational burden.
In this contribution, a pair of theoretically rigorous upper and lower bounds of the posterior probability is developed, based on which a method of posterior probability calculation with known and controllable accuracy is proposed. In Sect. 2, the posterior probability of the GNSS integer ambiguity is introduced. Some existing approaches to posterior probability calculation are then reviewed in Sect. 3. In Sect. 4, the derivation and calculation of the upper and lower bounds are discussed in detail. Based on this pair of bounds, a posterior probability calculation approach with controllable accuracy is introduced in Sect. 5. In Sect. 6, numerical experiments verify the proposed approach and compare it with existing approaches. Finally, we conclude the article and stress the advantages of the proposed method in Sect. 7.
2 Posterior probability of GNSS integer ambiguity
Posterior probability is a concept of Bayesian statistics, in which not only the vector of observables but also the vector of unknown parameters is assumed to be random. The sampling probability distribution of the observation vector \({\varvec{y}}\) in GNSS model (1) is denoted as \(\mathrm{p}({\varvec{y}}|{\varvec{a}},{\varvec{b}})\); \({\varvec{a}}\) and \({\varvec{b}}\) are also supposed to be vectors of random quantities and have a joint prior distribution \(\mathrm{p}({\varvec{a}},{\varvec{b}})\). The posterior probability of \({\varvec{a}}\) and \({\varvec{b}}\) can thus be expressed as (Koch 1990, p. 4)
with \({\sum }_{{\varvec{a}}\in {\mathbb{Z}}^{n}}\) the sum over all integer vectors in \({\mathbb{Z}}^{n}\) and \({\int }_{{\varvec{b}}\in {\mathbb{R}}^{m}}\) the integral of \({\varvec{b}}\) over all of \({\mathbb{R}}^{m}\). Bayesians treat \({\varvec{a}}\) and \({\varvec{b}}\) as variables independent of each other, so their joint prior distribution equals the product of their prior distributions (Lacy et al. 2002; Wu and Bian 2015)
where \(\mathrm{p}({\varvec{a}})\) and \(\mathrm{p}({\varvec{b}})\) are the prior distributions of \({\varvec{a}}\) and \({\varvec{b}}\), respectively. Although \({\varvec{a}}\) must be an integer vector, we do not know a priori which integer vector \({\varvec{a}}\) is more likely to be; that is, \({\varvec{a}}\) can take any value in the integer space with identical chance. Similarly, there is no prior information on the parameter \({\varvec{b}}\). Therefore, researchers usually treat the non-informative parameters \({\varvec{a}}\) and \({\varvec{b}}\) as independent and uniformly distributed (Teunissen 2001; Zhu et al. 2001; Lacy et al. 2002; Verhagen 2005; Wu and Bian 2015)
where \(\mathrm{p}(\cdot )\) is the probability density function (pdf) of the variable in the brackets, and “\(\propto \)” means “is proportional to.” On the other hand, the sampling distribution of the GNSS observation model is regarded as Gaussian (Koch 1990, p. 4)
where “\({\Vert \cdot \Vert }_{{{\varvec{Q}}}_{{\varvec{y}}{\varvec{y}}}}^{2}\)” stands for \((\cdot {)}^{T}{{\varvec{Q}}}_{{\varvec{y}}{\varvec{y}}}^{-1}(\cdot )\), “\(\left|\cdot \right|\)” is the determinant of a matrix. The orthogonal decomposition of the least squares principle for GNSS model is (Teunissen 1993)
where \(\widehat{{\varvec{e}}}\) is the LS residual vector, which is independent of \({\varvec{a}}\) and \({\varvec{b}}\), and \(\widehat{{\varvec{b}}}({\varvec{a}})\) and \({{\varvec{Q}}}_{\widehat{{\varvec{b}}}({\varvec{a}})\widehat{{\varvec{b}}}({\varvec{a}})}\) are
Substituting (7) into (6), the posterior probability of \({\varvec{a}}\) is obtained as the marginal posterior distribution
3 Existing approaches of posterior probability calculation
Equation (9) shows that the posterior probability of the integer ambiguity is the ratio of the likelihood of \({\varvec{a}}\) to the sum of the likelihoods of all integer vectors. The sum in the denominator of (9) is infinite, so its exact value is incomputable. Fortunately, the likelihood of an integer vector decays very fast as the “distance” between the integer vector and the float solution \(\widehat{{\varvec{a}}}\) increases, so most of the likelihood mass concentrates in a small subset of the ambiguity space. Teunissen (2005b) limits the subset by
where \({\upchi }^{2}(n)\) is a Chi-square distribution with \(n\) degrees of freedom and \(\alpha \) is the significance level. Verhagen (2005) chooses \(\alpha =1{0}^{-16}\), while Odolinski and Teunissen (2020) suggest \(\alpha =1{0}^{-9}\) to avoid a heavy computational burden. This approximation is based on the assumption that \(\widehat{{\varvec{a}}}\) is normally distributed. Similar approximations for other elliptically contoured distributions can be found in Teunissen (2020). In this contribution, we focus on the normal distribution assumption only.
Alternatively, Wu and Bian (2015) simply select the subset empirically as
where \({{\varvec{a}}}_{(p)}\) is the \(p\)-th successive minimum of \(\underset{{\varvec{a}}\in {\mathbb{Z}}^{n}}{\text{min}}{\Vert \widehat{{\varvec{a}}}-{\varvec{a}}\Vert }_{{{\varvec{Q}}}_{\widehat{{\varvec{a}}}\widehat{{\varvec{a}}}}}^{2}\) and \(\delta \) is a fading factor, for which Wu and Bian (2015) suggest \(\delta =1{0}^{-8}\). To obtain all the integer vectors in \({\mathbb{S}}_{T}({\varvec{a}})\) or \({\mathbb{S}}_{W}({\varvec{a}})\), one must carry out a search in the ambiguity space. The methods of Teunissen (2005b) and Wu and Bian (2015) produce useful approximations, which can serve as upper bounds of the posterior probability. However, they cannot tell how tight these upper bounds are. Since the posterior probability is a fundamental quantity in the calculation of the optimal IAE and the BIE estimator, the user has good reason to want to know how close the approximated value is to the true one. In other words, the calculation accuracy is information that the user is interested in, but these approaches cannot produce it.
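The enumeration-based approximation common to these approaches can be sketched as follows. This is a minimal illustration, not the cited implementations: the chi-square threshold of (10) is taken as a plain input, and a brute-force box search stands in for a proper ellipsoidal search.

```python
import itertools
import numpy as np

def posterior_by_enumeration(a_float, Q, chi2, box=5):
    """Approximate the posterior of each candidate by restricting the infinite
    sum in the denominator of (9) to the subset {a : ||a_float - a||^2_Q <= chi2}
    (cf. S_T in (10)).  Dropping the tail shrinks the denominator, so every
    returned probability is an upper bound on the exact posterior."""
    Qinv = np.linalg.inv(Q)
    center = np.rint(a_float).astype(int)
    cands, weights = [], []
    for off in itertools.product(range(-box, box + 1), repeat=len(a_float)):
        z = center + np.array(off)
        r = a_float - z
        d2 = float(r @ Qinv @ r)
        if d2 <= chi2:                      # keep only vectors inside the ellipsoid
            cands.append(z)
            weights.append(np.exp(-0.5 * d2))
    total = sum(weights)
    return {tuple(z): w / total for z, w in zip(cands, weights)}
```

The returned values are normalized over the enumerated set only, which is exactly why the approach cannot report how far each value is from the exact posterior.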
Another approach is developed by Yu et al. (2017), in which the infinite sum in the denominator of (9) is divided into two parts as
with
where \({{\varvec{a}}}_{i=1\dots s}\) are integer vectors within a user-defined subset. This subset is designed as a cube centrosymmetric about the nearest integer to \(\widehat{{\varvec{a}}}\), and the length of its \(j\)-th dimension is (Yu et al. 2017)
where \({{\varvec{a}}}_{i}\left({j}\right)\) is the \(j\)-th entry of vector \({{\varvec{a}}}_{i}\), \({\sigma }_{\widehat{{\varvec{a}}}(j)}\) is the standard deviation of \(\widehat{{\varvec{a}}}\left(j\right)\), “\( \left[\kern-0.15em\left[ \bullet \right]\kern-0.15em\right] \)” means rounding to the nearest integer, \(\beta \) is a user-defined significance level, and
After the integer subset is defined, \({K}_{S}\) can be calculated precisely, and \({K}_{\overline{S}}\) is evaluated by transforming its expression from one based on \({{\varvec{a}}}_{i=s+1\dots +\infty }\) to one based on \({{\varvec{a}}}_{i=1\dots s}\). Yu et al. (2017) suggest evaluating it by regional integration. The result is further used to develop a pair of upper and lower bounds of the posterior probability. However, the integration result in the derivation is approximate rather than strict, which makes the upper and lower bounds less than theoretically rigorous. The error caused by the approximate integration is generally negligible when \(\beta \) is smaller than 0.3% (Yu et al. 2017).
4 A pair of bounds of the posterior probability
In this section, we bound the value of the posterior probability from below and from above. Obviously, an upper bound of the posterior probability can be obtained from either Teunissen (2005b) or Wu and Bian (2015) by directly dropping the likelihoods outside the enumeration region (10) or (11). Compared with the upper bound, a proper lower bound of the posterior probability is more important in application, because an appropriate lower bound prevents the user from validating the ambiguity resolution result with an over-optimistic criterion.
To some degree, the basic idea behind our upper and lower bounds is similar to that of Yu et al. (2017): the infinite sum of likelihoods in (9) is divided into two parts, a major finite part and a minor infinite part. The finite part is calculated by enumerating its elements; the infinite part is bounded algebraically. The derivation of the upper and lower bounds is based on the following space partitioning.
4.1 Space partitioning
All the integer points in the \(n\)-dimensional space can be partitioned into the following two parts
where \({\mathbb{S}}({\varvec{x}})\) is a cuboid whose half-width in the \(i\)-th dimension equals \({N}_{i}-1\), as
and \(\overline{\mathbb{S} }({\varvec{x}})\) is the remaining space as
Obviously, the space \(\overline{\mathbb{S} }({\varvec{x}})\) satisfies the following inequality
where
A sketch map of the partitioning of \({\mathbb{S}}({\varvec{x}})\) and \({\overline{\mathbb{S}} }_{i}\left({\varvec{x}}\right)\) in space \({\mathbb{R}}^{2}\) is shown in Fig. 1.
As a consequence, the likelihoods over \({\mathbb{Z}}^{n}\) can be divided among these regions. Based on this partitioning, upper and lower bounds of the posterior probability are discussed below.
4.2 Upper and lower bounds
4.2.1 Upper bound
As explained in the previous subsection, the infinite sum in the denominator of (9) can be divided into two parts, \({B}_{S}\) and \({B}_{\overline{S}}\), which are the sums of the likelihoods within the regions \({\mathbb{S}}({\varvec{x}})\) and \(\overline{\mathbb{S} }({\varvec{x}})\), respectively, as
Note that \({\mathbb{S}}({\varvec{x}})\) defined in (17) is a cuboid containing finitely many integer vectors. Thus, \({B}_{\mathrm{S}}\) can easily be obtained by enumerating all the likelihoods within \({\mathbb{S}}({\varvec{x}})\). As a result, an upper bound of the posterior probability is directly obtained by dropping the likelihoods outside \({\mathbb{S}}({\varvec{x}})\), as
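A minimal sketch of this upper bound, assuming (consistent with the numerical example of Sect. 6) that the cube is centered at the rounded float solution and has half-width \(N-1\):

```python
import itertools
import numpy as np

def upper_bound_posterior(a_float, Q, a, N):
    """Upper bound on p(a | a_float) in the spirit of (22): enumerate every
    integer vector in the cuboid of half-width N-1 around round(a_float) to
    get B_S, and drop the tail B_Sbar from the denominator entirely."""
    Qinv = np.linalg.inv(Q)
    center = np.rint(a_float).astype(int)
    n = len(a_float)
    B_S = 0.0
    for off in itertools.product(range(-(N - 1), N), repeat=n):
        r = a_float - (center + np.array(off))
        B_S += np.exp(-0.5 * float(r @ Qinv @ r))
    r = a_float - np.asarray(a)
    return np.exp(-0.5 * float(r @ Qinv @ r)) / B_S
```

Enlarging the cube can only grow the denominator, so the bound decreases monotonically toward the exact posterior as \(N\) increases.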
4.2.2 Lower bound
Unlike the upper bound, the lower bound of the posterior probability is not as straightforward to obtain. According to (19), the sum of likelihoods in \(\overline{\mathbb{S} }({\varvec{x}})\) is smaller than the sum over \({\sum }_{i=1}^{n}{\overline{\mathbb{S}} }_{i}\left({\varvec{x}}\right)\). Therefore, a lower bound of the posterior probability can be expressed as
where \({B}_{{\sum }_{i=1}^{n}{\overline{\mathbb{S}} }_{i}\left({\varvec{x}}\right)}\) is the sum of likelihoods in the space \({\sum }_{i=1}^{n}{\overline{\mathbb{S}} }_{i}\left({\varvec{x}}\right)\). Note that \({\sum }_{i=1}^{n}{\overline{\mathbb{S}} }_{i}\left({\varvec{x}}\right)\) contains infinitely many integer vectors, so \({B}_{{\sum }_{i=1}^{n}{\overline{\mathbb{S}} }_{i}\left({\varvec{x}}\right)}\) cannot be calculated by enumeration like \({B}_{S}\). In this contribution, \({B}_{{\sum }_{i=1}^{n}{\overline{\mathbb{S}} }_{i}\left({\varvec{x}}\right)}\) is bounded from above algebraically, as follows.
Without loss of generality, we define that
in the following discussion, where “\( \left[\kern-0.15em\left[ \bullet \right]\kern-0.15em\right] \)” means rounding to the nearest integer. The eigenvalue decomposition of \({{\varvec{Q}}}_{\widehat{{\varvec{a}}}\widehat{{\varvec{a}}}}^{-1}\) is introduced as (Shores 2007)
where \({\varvec{V}}\) is an orthogonal matrix and \({\boldsymbol{\Lambda }}_{\sigma }\) is a diagonal matrix whose elements \({\sigma }_{i}^{2}\) are the eigenvalues of \({{\varvec{Q}}}_{\widehat{{\varvec{a}}}\widehat{{\varvec{a}}}}^{-1}\), with \({\sigma }_{1}^{2}\ge {\sigma }_{2}^{2}\cdots \ge {\sigma }_{n}^{2}\). Therefore,
and the following inequalities can be obtained from (26)
where \({\widehat{a}}_{i}\) and \({a}_{i}\) are the \(i\)-th entry of \(\widehat{{\varvec{a}}}\) and \({\varvec{a}}\), respectively. The sum in the region \({\overline{\mathbb{S}} }_{k}\left({\varvec{x}}\right)\) can be unfolded as a continued product of the sum of the ambiguity entries as
Note that \(\left|{\widehat{a}}_{i}\right|\le 1/2\) by (24); thus
Therefore,
where the factor 2 in front of the summation sign arises because the sum of \({a}_{k}\) from \(-\infty \) to \({-N}_{k}\) equals the sum from \({N}_{k}\) to \(+\infty \), and
with
Inserting (30) and (31) into (28), and repeating this operation for the remaining entries of \({\varvec{a}}\), yields
where \(\Delta =\underset{i=1:n}{\text{max}}{\Delta }_{i}\) is the maximum of the \({\Delta }_{i}\), \(i=1\dots n\). Thus, the sum of likelihoods in \({\sum }_{i=1}^{n}{\overline{\mathbb{S}} }_{i}\left({\varvec{x}}\right)\) can be bounded from above by
Finally, we can obtain a lower bound of the posterior probability as
where \({B}_{\overline{S} }^{\mathrm{^{\prime}}}\) is the upper bound of \({B}_{{\sum }_{i=1}^{n}{\overline{\mathbb{S}} }_{i}\left({\varvec{x}}\right)}\) as
Note that the size of the cuboid \({\mathbb{S}}({\varvec{x}})\) in (17) is defined by the user. If the \({N}_{i}\), \(i=1\dots n\), are all set equal to a common value \(N\), then \({\mathbb{S}}({\varvec{x}})\) becomes a cube, and a simplified lower bound of \(\mathrm{p}\left({\varvec{a}}|\widehat{{\varvec{a}}}\right)\) can be calculated by
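The one-dimensional mechanism behind the algebraic bounding above is a geometric-series bound on a Gaussian tail sum. Since the closed-form equations are not reproduced in this excerpt, the following sketch only demonstrates the generic principle: consecutive terms \(t_k=\exp(-\sigma^2(k-1/2)^2/2)\) shrink by a factor \(\exp(-\sigma^2 k)\le \exp(-\sigma^2 N)\) for \(k\ge N\), so the tail is dominated by a geometric series.

```python
import math

def gaussian_tail_sum(sigma2, N, terms=500):
    """Brute-force reference value of sum_{k >= N} exp(-0.5*sigma2*(k-0.5)^2)."""
    return sum(math.exp(-0.5 * sigma2 * (k - 0.5) ** 2) for k in range(N, N + terms))

def geometric_tail_bound(sigma2, N):
    """Closed-form upper bound: the ratio of consecutive terms is
    exp(-sigma2*k) <= exp(-sigma2*N) for k >= N, so the tail is at most the
    first term divided by (1 - ratio)."""
    first = math.exp(-0.5 * sigma2 * (N - 0.5) ** 2)
    q = math.exp(-sigma2 * N)
    return first / (1.0 - q)
```

The bound is tight because the Gaussian terms decay much faster than the geometric majorant, which is what makes a closed-form expression like (35) practical.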
The upper bound (22) and the lower bound (35) are theoretically rigorous and in closed form, so they can be easily calculated and conveniently used. Moreover, a posterior probability calculation method with guaranteed accuracy can be developed from the two bounds together.
5 Posterior probability calculation with controllable accuracy
Existing approaches, such as Teunissen (2005b) and Wu and Bian (2015), calculate the posterior probability by directly dropping the likelihoods outside the enumeration region (10) or (11). How large the enumeration region should be is therefore of great significance. However, the choices of the “significance level” in (10) and the “fading factor” in (11) are mostly empirical. For example, Verhagen (2005) chooses \(\alpha =1{0}^{-16}\), Odolinski and Teunissen (2020) suggest \(\alpha =1{0}^{-9}\) to avoid a heavy computational burden, and Wu and Bian (2015) suggest \(\delta =1{0}^{-8}\).
More importantly, there is no clear relationship between the chosen “significance level” or “fading factor” and the calculation accuracy of the posterior probability. In other words, we do not know whether these parameters are set too optimistically or too conservatively. As a result, the following two aspects remain unknown: (i) how precise the calculated posterior probability is; and (ii) how large an enumeration region suffices for the predefined calculation accuracy, since an unnecessarily large region leads to an unnecessarily heavy computational burden for enumerating the integer points in it.
Fortunately, given rigorous upper and lower bounds, a method of posterior probability calculation with known and controllable accuracy can be developed. Theoretically, the upper and lower bounds can be made arbitrarily close to the true value of the posterior probability, according to the expression
as \({B}_{\overline{S} }^{\mathrm{^{\prime}}}\) fades to 0 with the growth of the enumeration region
Fortunately, \({B}_{\overline{S} }^{\mathrm{^{\prime}}}\) fades to 0 rather quickly as \(N\) expands, because
where “\(O(x)\)” means “of the order of \(x\).”
However, the major issue hampering the practical use of (38) is the complexity of enumerating all the integer vectors in \({\mathbb{S}}({\varvec{x}})\) in order to obtain \({B}_{S}\). There are \({\left(2N-1\right)}^{n}\) integer vectors in an \(n\)-dimensional cube \({\mathbb{S}}({\varvec{x}})\) with side length \(2N-1\), and enumerating all of them becomes a heavy workload as \(N\) or \(n\) grows large. An alternative is to enumerate only part of the integer vectors within \({\mathbb{S}}({\varvec{x}})\). Without loss of generality, assume that the \(m\) most likely integer vector candidates within \({\mathbb{S}}({\varvec{x}})\), denoted \({{\varvec{a}}}_{(1)}\), \({{\varvec{a}}}_{(2)}\), …, \({{\varvec{a}}}_{(m)}\), have been found; they can be conveniently obtained by LAMBDA or other reduction-and-search software (Verhagen et al. 2013; Xu 2012), and are listed in decreasing order of likelihood. We use the following inequality
where the squared distance \({\xi }^{2}\) is chosen as
to ensure that (41) is always satisfied while \({B}_{S}\) is not enlarged too much. In this approach, only the \(m\) most likely integer vectors need to be enumerated. As a consequence, the upper and lower bounds in (38) become
with
Figure 2 is a sketch of this calculation approach in the two-dimensional case. The enumerated \(m\) most likely integer vectors \({{\varvec{a}}}_{(1)}\), \({{\varvec{a}}}_{(2)}\), …, \({{\varvec{a}}}_{(m)}\) belong to the ellipse \({\mathbb{E}}({\varvec{x}})\) within \({\mathbb{S}}({\varvec{x}})\). The likelihoods of the other integer vectors in \({\mathbb{S}}({\varvec{x}})\) but outside \({\mathbb{E}}({\varvec{x}})\) are bounded by \({B}_{{S}_{\overline{m}} }\) rather than enumerated. The likelihoods of integer vectors outside \({\mathbb{S}}({\varvec{x}})\), namely in \({\overline{\mathbb{S}} }_{1}({\varvec{x}})\) or \({\overline{\mathbb{S}} }_{2}({\varvec{x}})\), are bounded by \({B}_{\overline{S} }^{\mathrm{^{\prime}}}\).
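The bounds obtained from partial enumeration can be sketched as follows. The closed forms of (41)–(44) are not reproduced in this excerpt, so the expression used here for \({B}_{{S}_{\overline{m}}}\) (each of the \((2N-1)^n - m\) non-enumerated vectors in the cube bounded by \(\exp(-\xi^2/2)\)) is an assumption consistent with the surrounding text, and the tail term outside the cube is left to the caller.

```python
import numpy as np

def partial_enumeration_bounds(a_float, Q, a, candidates, N, xi2, B_tail=0.0):
    """Bounds on p(a | a_float) from the m most likely candidates only.

    Assumptions: every integer vector with squared distance below xi2 is among
    `candidates`, and all candidates lie inside the cube of half-width N-1.
    `B_tail` is a valid upper bound on the sum of likelihoods outside the cube
    (the paper's B'_Sbar); with its default of 0 the 'lower' value neglects
    that (small) tail and is a lower bound only up to it."""
    Qinv = np.linalg.inv(Q)
    lik = lambda z: np.exp(
        -0.5 * float((a_float - np.asarray(z)) @ Qinv @ (a_float - np.asarray(z))))
    n, m = len(a_float), len(candidates)
    B_m = sum(lik(c) for c in candidates)               # enumerated part
    B_mbar = ((2 * N - 1) ** n - m) * np.exp(-0.5 * xi2)  # bound the rest of the cube
    La = lik(a)
    return La / (B_m + B_mbar + B_tail), La / B_m       # (lower, upper)
```

Only \(m\) likelihood evaluations are needed, at the cost of a looser lower bound when \(\xi^2\) is small relative to the cube size.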
Based on (43), we now discuss the posterior probability calculation method with controlled accuracy. If an acceptable calculation error \(\upgamma \) is given, the upper and lower bounds must satisfy
to guarantee the calculation accuracy. Note that \({B}_{{S}_{\overline{m}} }\) and \({B}_{\overline{S} }^{\mathrm{^{\prime}}}\) can be viewed as functions of \(N\); it follows from (45) that
Although there is no straightforward way to solve (46), it is easy to see that (46) is satisfied whenever the following equation holds
where \({B}_{{S}_{1}}\) means \({B}_{{S}_{m}}\) with parameter \(m=1\). So, \({\xi }^{2}\) can be obtained when \(N\) is given, as
The problem now is that \(N\) is not actually given. Nevertheless, the argument of the “log” operator must be positive; thus
Inserting (44) into (49) yields
Therefore,
where “\( \left\lceil \bullet \right\rceil \)” means rounding up to the nearest integer. Keeping in mind that \({B}_{\overline{S} }^{\mathrm{^{\prime}}}\) fades rapidly as \(N\) increases, we can try the smallest value of \(N\) given by (51), test whether the corresponding \(m\) integer vectors \({{\varvec{a}}}_{(1)}\), \({{\varvec{a}}}_{(2)}\), …, \({{\varvec{a}}}_{(m)}\) lie within \({\mathbb{S}}({\varvec{x}})\), and adjust \(N\) according to the result. Specifically, the distance \({\xi }^{2}\) is obtained directly from (48) with \(N\) set to the smallest value of (51). Then we search for all integer vectors whose squared distances are smaller than \({\xi }^{2}\); it is easy to check whether these integer vectors belong to a cube. If the searched vectors are all in the cube \({\mathbb{S}}({\varvec{x}})\), (46) and (47) are both satisfied; otherwise, we enlarge \(N\) until the vectors are within \({\mathbb{S}}({\varvec{x}})\).
Another important aspect is the reparameterization of the ambiguities. The eigenvalues of \({{\varvec{Q}}}_{\widehat{{\varvec{a}}}\widehat{{\varvec{a}}}}\) change under a reparameterization of the ambiguities, and so do the upper and lower bounds of the posterior probability. This observation allows one to use reparameterization techniques to obtain tighter bounds. In this contribution, we suggest applying LAMBDA decorrelation (Teunissen 1995) or lattice reduction techniques (Grafarend 2000; Xu 2012; Wu et al. 2017; Wu and Bian 2022) before bounding the posterior probability.
Finally, the process of the posterior probability calculation with controllable accuracy can be summarized as follows
Algorithm: Posterior probability calculation with controllable accuracy
Input: Float solution \(\widehat{{\varvec{a}}}\), VC matrix \({{\varvec{Q}}}_{\widehat{{\varvec{a}}}\widehat{{\varvec{a}}}}\), integer vector \({\varvec{a}}\) and acceptable error \(\upgamma \)
Output: Upper and lower bounds of the posterior probability \(\mathrm{p}\left({\varvec{a}}|\widehat{{\varvec{a}}}\right)\), calculation error
Step 1: Decorrelate (reduce) the ambiguity solution \(\widehat{{\varvec{a}}}\) and VC matrix \({{\varvec{Q}}}_{\widehat{{\varvec{a}}}\widehat{{\varvec{a}}}}\);
Step 2: Calculate \(N\) by (51), then calculate \({\xi }^{2}\) by (48);
Step 3: Enumerate all the integer vectors whose squared distances are smaller than \({\xi }^{2}\); if not all of the vectors are in the cube \({\mathbb{S}}({\varvec{x}})\), set \(N=N+1\) and go back to Step 2;
Step 4: Calculate \({B}_{{S}_{m}}\) by (44), and the upper bound of \(\mathrm{p}\left({\varvec{a}}|\widehat{{\varvec{a}}}\right)\) by (43);
Step 5: Calculate \({B}_{{S}_{\overline{m}} }\) and \({B}_{\overline{S} }^{\mathrm{^{\prime}}}\) by (43), the lower bound by (44), and the calculation error from the upper and lower bounds.
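The workflow above can be sketched end to end as follows. The closed-form choices of \(N\) and \({\xi }^{2}\) in (48) and (51) are not reproduced in this excerpt, so this sketch replaces them with a simple grow-until-tolerance loop and enumerates the whole cube; the tail bound combines the smallest eigenvalue of \({{\varvec{Q}}}_{\widehat{{\varvec{a}}}\widehat{{\varvec{a}}}}^{-1}\) with product and geometric-series arguments in the spirit of Sect. 4.2, and is rigorous but generally looser than the paper's.

```python
import itertools
import math
import numpy as np

def tail_bound(lam, n, N, kmax=200):
    """Upper bound on the sum of exp(-0.5*lam*||r||^2) over integer offsets
    with some |k_i| >= N, given |a_hat_i - round(a_hat_i)| <= 1/2 so that
    |r_i| >= |k_i| - 1/2.  One axis tail is bounded geometrically; a full
    axis contributes at most 1 + 2*sum_{k>=1} t(k); a union bound over the
    n axes gives the total."""
    t = lambda k: math.exp(-0.5 * lam * (k - 0.5) ** 2)
    axis_tail = 2.0 * t(N) / (1.0 - math.exp(-lam * N))
    axis_full = 1.0 + 2.0 * (sum(t(k) for k in range(1, kmax))
                             + t(kmax) / (1.0 - math.exp(-lam * kmax)))
    return n * axis_tail * axis_full ** (n - 1)

def posterior_with_tolerance(a_float, Q, a, gamma, max_N=15):
    """Accuracy-controlled posterior calculation, sketched: grow the cube
    until the gap between the upper bound L(a)/B_S and the lower bound
    L(a)/(B_S + B'_Sbar) falls below the acceptable error gamma."""
    Qinv = np.linalg.inv(Q)
    lam = float(np.linalg.eigvalsh(Qinv)[0])   # smallest eigenvalue of Q^{-1}
    center = np.rint(a_float).astype(int)
    n = len(a_float)
    r = a_float - np.asarray(a)
    L_a = math.exp(-0.5 * float(r @ Qinv @ r))
    for N in range(2, max_N + 1):
        B_S = 0.0                               # enumerate the whole cube
        for off in itertools.product(range(-(N - 1), N), repeat=n):
            d = a_float - (center + np.array(off))
            B_S += math.exp(-0.5 * float(d @ Qinv @ d))
        B_tail = tail_bound(lam, n, N)
        lower, upper = L_a / (B_S + B_tail), L_a / B_S
        if upper - lower <= gamma:
            return lower, upper
    raise RuntimeError("tolerance not reached; decorrelate first or raise max_N")
```

As in Step 1 of the algorithm, decorrelation matters here too: a strongly correlated \({{\varvec{Q}}}_{\widehat{{\varvec{a}}}\widehat{{\varvec{a}}}}\) makes the smallest eigenvalue tiny and this tail bound very loose.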
6 Experimental validation
6.1 A numerical example
This numerical example is selected from single-frequency, single-epoch double-differenced GPS positioning data based on carrier-phase and code observables. After a standard least-squares resolution is performed, the float solution \(\widehat{{\varvec{a}}}\) and its VC matrix \({{\varvec{Q}}}_{\widehat{{\varvec{a}}}\widehat{{\varvec{a}}}}\) are obtained as
and
6.1.1 Calculation process
In the following, the posterior probability of the ILS estimator \( \mathop a\limits^{ \vee } \) is calculated. The predefined calculation accuracy is set to \(\upgamma =1{0}^{-6}\). To illustrate our method clearly, the calculation process is shown step by step.
Step 1 Decorrelating the ambiguity solution and VC matrix by reduction techniques. The decorrelation matrix \({\varvec{Z}}\) is
the decorrelated float solution \(\widehat{{\varvec{a}}}\) and its VC matrix \({{\varvec{Q}}}_{\widehat{{\varvec{a}}}\widehat{{\varvec{a}}}}\) are obtained as
and
The ILS estimator \( \mathop a\limits^{ \vee } \) is
Step 2: From (51), the value of \(N\) is obtained as
Setting \(N=3\), then \({\xi }^{2}\) is obtained from (48) as
Step 3: Enumerating all the integer vectors whose squared distances are smaller than \({\xi }^{2}\): 305 integer nodes are searched and 127 integer vectors are found. All of the vectors are in the cube \({\mathbb{S}}({\varvec{x}})=\bigcap_{i=1}^{n}\{{\varvec{x}}\in {\mathbb{R}}^{n}|\left|{\varvec{x}}\left(i\right)\right|\le 2\}\), so the algorithm proceeds.
Step 4: Calculating \({B}_{{S}_{m}}\) by (44) as
and then the upper bound of \(\mathrm{p}\left(\stackrel{ \vee}{{\varvec{a}}}|\widehat{{\varvec{a}}}\right)\) is obtained by (43) as
Step 5: Calculating \({B}_{{S}_{\overline{m}} }\) and \({B}_{\overline{S} }^{\mathrm{^{\prime}}}\) as
\({B}_{{S}_{\overline{m}} }=1.19\times 1{0}^{-8}\) and \({B}_{\overline{S} }^{\mathrm{^{\prime}}}=7.61\times 1{0}^{-9}\)
and the lower bound is obtained by (44) as
From the upper and lower bounds above, the calculation error is \(6.5\times 1{0}^{-7}\), which is smaller than the predefined acceptable error \(\upgamma =1{0}^{-6}\).
6.1.2 Calculation performances under different acceptable errors
In the previous subsection, the posterior probability of the ILS estimator \(\stackrel{ \vee}{{\varvec{a}}}\) was calculated with the acceptable error fixed at \(\upgamma =1{0}^{-6}\). To demonstrate the accuracy-controllable property of the proposed approach, the calculation accuracy and complexity under different acceptable errors are investigated next. Integer search is the heaviest workload in the calculation, and a canonical measure of its complexity is the number of integer points visited during the search (Chang et al. 2013; Wu and Bian 2022). Therefore, the number of searched integer nodes is used as the measure of calculation complexity in this contribution.
The calculation accuracy and complexity under acceptable errors from \(\upgamma =1{0}^{-4}\) to \(\upgamma =1{0}^{-8}\) are shown in Table 1. The table reveals that the calculation accuracy is enhanced as the required accuracy increases, and all of the calculated errors are smaller than their required counterparts. For example, the calculation errors decrease from \(6.9\times 1{0}^{-5}\) to \(6.6\times 1{0}^{-9}\) as the acceptable error goes from \(1{0}^{-4}\) to \(1{0}^{-8}\). At the same time, the number of searched points only increases from 173 to 491, which means this approach can produce calculation results with satisfactory accuracy at the expense of a very low computational burden.
6.1.3 Comparisons with existing approaches
In this part, we calculate the posterior probabilities of the top five most likely integer vectors using the new method. The results are then compared with the corresponding results of Teunissen (2005b) with \(\alpha =1{0}^{-16}\) (Verhagen 2005) and \(\alpha =1{0}^{-9}\) (Odolinski and Teunissen 2020), and of Wu and Bian (2015) with \(\delta =1{0}^{-8}\). The result of Yu et al. (2017) is not theoretically rigorous, so it is not included in the comparison.
Using the search algorithm in the LAMBDA software, the top five most likely integer vector candidates can be easily found (Verhagen et al. 2013). They are listed in Table 2 in descending order of likelihood.
Their posterior probabilities, calculated by the new approach and by Verhagen (2005), Odolinski and Teunissen (2020), and Wu and Bian (2015), respectively, are shown in Table 3. All of the results produced by these approaches achieve high accuracy. However, only the new approach can produce the lower bound of the posterior probability and thereby evaluate the accuracy of the calculation result. In this example, the required accuracy is \(\upgamma =1{0}^{-6}\), and all of the calculation accuracies obtained by the new approach are higher than \(1{0}^{-6}\). The results of the other approaches are actually the lower bound of the posterior probability, so the accuracy of these approaches cannot be evaluated by themselves: although the obtained result may be very accurate, the user still does not know how accurate it is, or whether it meets the required accuracy. By contrast, the calculation accuracy of the new method is produced explicitly and always meets the requirement.
In terms of computational complexity, Verhagen (2005) enumerates the most integer points, followed by Odolinski and Teunissen (2020) and then by Wu and Bian (2015). The new approach searches a number of integers comparable to Odolinski and Teunissen (2020) for orders 1 and 2, and comparable to Wu and Bian (2015) for orders 3 to 5. In the existing approaches, the number of enumerated integer points cannot be adjusted according to the required accuracy, because the “significance level” \(\alpha \) and the “fading factor” \(\delta \) are fixed at empirical values. This means they may enumerate too many integers if the acceptable accuracy is not set that high. By contrast, the enumerating region of the new method can be adjusted flexibly according to the predefined accuracy. For example, if \(\upgamma =1{0}^{-6}\), methods ①, ②, ③ and the new method search 986, 270, 165 and 305 points, respectively. If \(\upgamma \) changes to \(1{0}^{-4}\), methods ①, ② and ③ still search 986, 270 and 165 points, respectively, while the number searched by the new method reduces to 173 (see Table 1). This shows that Verhagen (2005) and Odolinski and Teunissen (2020) search an unnecessarily large region, leading to an unnecessarily heavy computational burden for the required accuracy. Another point needing clarification is that the new method searches different numbers of integer points for different integer vectors. For example, it searches 305, 248, 203, 165 and 163 points for the integer vectors in orders 1 to 5, respectively. The reason is that the search region \({\xi }^{2}\) is a function of the integer vector \({\varvec{a}}\), as shown in (48).
6.2 Experimental verification by real collected data
In this subsection, experimental verification is performed using GPS data collected from two Hong Kong continuously operating reference station (CORS) sites, which form a 7.8-km-long short baseline. The performance of the posterior probability calculation method is verified using the integer ambiguities and functions produced in precise baseline positioning.
The GNSS single-epoch double-differenced RTK functional model is used in data processing. The standard deviations of the code and phase observations are set to 0.3 and 0.003 m, respectively, and a simple equal-weighted model is used. We apply single-frequency ambiguity resolution, in which all the ambiguities in each epoch are fixed anew. As an essential procedure in the posterior probability calculation of Teunissen (2005b), Wu and Bian (2015) and the new method, the integer search strategy used herein is the determinate-region search in the LAMBDA software (Verhagen et al. 2013). Specifications of the processing strategy are summarized in Table 4.
During the experiment, the changes of the visible satellites and the geometric dilution of precision (GDOP) are presented in Fig. 3. The number of visible satellites is between 6 and 12, and the GDOP is below 8. The ambiguity dilution of precision (ADOP) and the dimension of the ambiguities are shown in Fig. 4. The ADOP is defined by the following equation (Teunissen 1997)
The dimensions of the ambiguities are between 4 and 10, while most of the ADOP values are smaller than 0.02. Figures 3 and 4 reveal that the GNSS positioning model is strong. As a result, 2846 out of 2880 epochs are resolvable; posterior probability calculation is carried out in these epochs, and then comparisons are made. The comparisons are mainly twofold: (i) the performance of the new approach under different acceptable accuracies, and (ii) the performance of the new approach versus the existing ones, i.e., Verhagen (2005), Odolinski and Teunissen (2020) and Wu and Bian (2015).
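For reference, the ADOP of Teunissen (1997) is the determinant-based scalar \(\mathrm{ADOP}={\left|{{\varvec{Q}}}_{\widehat{{\varvec{a}}}\widehat{{\varvec{a}}}}\right|}^{1/(2n)}\) (in cycles). A minimal sketch of its computation, using a log-determinant for numerical stability:

```python
import numpy as np

def adop(Q_a):
    """ADOP = det(Q)^(1/(2n)) for the n-dimensional ambiguity VC matrix Q
    (units: cycles). slogdet avoids under/overflow of the raw determinant."""
    n = Q_a.shape[0]
    sign, logdet = np.linalg.slogdet(Q_a)
    if sign <= 0:
        raise ValueError("VC matrix must be positive definite")
    return float(np.exp(logdet / (2 * n)))

# example: a diagonal VC matrix with variance 0.01 cycles^2 in each of
# 4 dimensions gives ADOP = (0.01^4)^(1/8) = 0.1 cycles
```

Because it is a geometric mean of the ambiguity precision, the ADOP is invariant under the decorrelating Z-transformation, which makes it a convenient scalar indicator of model strength.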
6.2.1 Performance under different acceptable accuracies
In the experiment, the posterior probability of the ILS estimator \(\stackrel{ \vee}{{\varvec{a}}}\) in each epoch is calculated. We first set the predefined calculation accuracy as \(\upgamma =1{0}^{-4}\). The calculated upper and lower bounds of the posterior probability in all of the 2846 epochs are shown in Fig. 5, which reveals that the posterior probability fluctuates strongly, even between adjacent epochs. This is because the posterior probability depends not only on the model strength, such as the GDOP and ADOP, which changes mildly with time, but also on the observation noise, which changes randomly from epoch to epoch. The dotted red line and the aqua line in Fig. 5 almost overlap, which means the calculated upper and lower bounds are close. The calculation error, obtained as the upper bound minus the lower bound, is plotted in Fig. 6(a). None of the calculation errors surpasses the predefined acceptable error \(\upgamma =1{0}^{-4}\), which demonstrates that the calculation accuracy meets the requirement.
The predefined accuracy is now changed to \(\upgamma =1{0}^{-6}\) and \(\upgamma =1{0}^{-8}\), respectively, in order to investigate the calculation accuracy and complexity under different acceptable accuracies. As explained in the previous subsection, the number of searched integer nodes is used as the measure of calculation complexity. The calculation errors under the required errors \(\upgamma =1{0}^{-6}\) and \(\upgamma =1{0}^{-8}\) are plotted in Fig. 6(b) and (c), respectively. Similar to Fig. 6(a), none of the calculation errors exceeds the predefined \(\upgamma \), which demonstrates that the proposed approach, as shown in Algorithm 1, can adjust the calculation according to the user-defined accuracy.
This adjustment results in different search regions and therefore different computational workloads. The number of searched integer points under acceptable errors \(\upgamma =1{0}^{-4}\), \(\upgamma =1{0}^{-6}\) and \(\upgamma =1{0}^{-8}\) is plotted in Fig. 7, in which the green line lies above the red line and below the blue line. The percentage distributions of searched nodes under \(\upgamma =1{0}^{-4}\), \(\upgamma =1{0}^{-6}\) and \(\upgamma =1{0}^{-8}\) are shown in Fig. 8(a–c), respectively. They show that the numbers of searched nodes concentrate in three clusters: 0–1000, 1000–2000 and 6000–7000 under \(\upgamma =1{0}^{-4}\); 0–2000, 2000–4000 and 8000–10,000 under \(\upgamma =1{0}^{-6}\); and 0–2000, 3000–4000 and 12,000–14,000 under \(\upgamma =1{0}^{-8}\). These results show that the number of searched integer nodes increases with the required accuracy, which ultimately increases the computational burden.
6.2.2 Computational burden adjustment comparison with existing approaches
Compared with the existing approaches, i.e., Verhagen (2005), Odolinski and Teunissen (2020) and Wu and Bian (2015), a major advantage of the proposed approach is that it produces the calculation result and the calculation accuracy simultaneously, while the other approaches can only produce the calculation result, without knowing its accuracy. This advantage is explicit and has been shown in Sect. 6.1.3, so it is not compared again here.
Another advantage is that, by adjusting the search region, the proposed method always produces a calculation result satisfying the user-required accuracy. By contrast, the search regions of the other approaches cannot be adjusted according to the required accuracy. As a result, they may enumerate too many or too few integers for the acceptable accuracy. We now focus on this comparison.
Assuming acceptable errors \(\upgamma =1{0}^{-4}\) and \(\upgamma =1{0}^{-8}\), the numbers of searched integer nodes of these approaches are plotted in Fig. 9(a) and (b). Figure 9(a) shows that the black line lies on top, followed by the blue, red and green lines, respectively, which means the computational burden is ordered as Verhagen (2005) > Odolinski and Teunissen (2020) > new method > Wu and Bian (2015). Note that the results of Verhagen (2005), Odolinski and Teunissen (2020) and Wu and Bian (2015) are obtained by adding only the likelihoods of the enumerated integer points rather than the likelihoods of all the integer points in the space. The approaches of Verhagen (2005) and Odolinski and Teunissen (2020) enumerate more integer points than the user-required accuracy (\(\upgamma =1{0}^{-4}\)) needs, which wastes computational resources and degrades computational efficiency. By contrast, with the proposed approach this part of the computational resources is saved, and time efficiency can therefore be guaranteed.
Figure 9(b) shows that the black line lies on top, followed by the red, blue and green lines, respectively, which means the computational burden is ordered as Verhagen (2005) > new method > Odolinski and Teunissen (2020) > Wu and Bian (2015). Note that the numbers of searched nodes of Verhagen (2005), Odolinski and Teunissen (2020) and Wu and Bian (2015) remain the same as in the experiment under \(\upgamma =1{0}^{-4}\) (see Fig. 9(a)), because the “significance level” \(\alpha \) and the “fading factor” \(\delta \) are unchanged. As a consequence, the approaches of Odolinski and Teunissen (2020) and Wu and Bian (2015) may enumerate too few integer points, which risks a calculation accuracy lower than required. By contrast, with the proposed approach the calculation accuracy can always be guaranteed.
7 Conclusions
GNSS high precise positioning requires correct estimation of the integer ambiguity. However, the integer ambiguity solution is subject to uncertainty caused by the randomness of the data, so it is important to validate the quality of the integer ambiguity estimation. The optimal validation criterion, in the sense of maximizing the estimation success rate subject to a given failure rate, is based on the ambiguity posterior probability. An alternative approach gives up fixing the real-valued ambiguity solution to some integer and instead adjusts it according to the criterion of minimizing the solution’s mean squared error (MSE). This estimator comes out as the sum of all the integer vectors weighted by their posterior probabilities.
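The last statement can be sketched in a few lines. The snippet below is an illustrative truncation of the BIE estimator only; the real computation must cope with the infinite sum, which is exactly the subject of this paper, and the ±`box` neighborhood is an assumption made for illustration.

```python
import itertools
import numpy as np

def bie_estimate(a_hat, Q, box=3):
    """Best integer equivariant estimate: the sum of integer vectors weighted
    by their (normalized) posterior probabilities, here truncated to a
    +/- box neighborhood of the float solution -- an approximation only."""
    Qinv = np.linalg.inv(Q)
    grids = [range(int(round(c)) - box, int(round(c)) + box + 1) for c in a_hat]
    num = np.zeros(len(a_hat))
    den = 0.0
    for z in itertools.product(*grids):
        z = np.asarray(z, dtype=float)
        d = a_hat - z
        w = np.exp(-0.5 * d @ Qinv @ d)   # unnormalized posterior weight
        num += w * z
        den += w
    return num / den
```

For a strong model, one weight dominates and the BIE estimate is close to the ILS solution; for a weak model, the weights spread out and the estimate stays between integers.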
It is therefore of great significance to calculate the posterior probability precisely and efficiently. However, the exact value of the posterior probability cannot be obtained due to the occurrence of infinite sums. Practical calculation approaches approximate the exact value by neglecting sufficiently small terms in the sum. These calculations, although achieving good approximations, can only serve as upper bounds of the posterior probability. Compared with an upper bound, a proper lower bound of the posterior probability is more important in application: an appropriate lower bound prevents the user from validating the ambiguity resolution result with an over-optimistic criterion. However, the existing approaches can only produce an upper bound as the posterior probability calculation result, without any information about the result’s accuracy.
In this contribution, a pair of upper and lower bounds of the posterior probability is derived by dividing the infinite sum into two parts: the major finite part and the minor infinite part. The finite part is calculated by enumerating all of its elements. The infinite part is bounded by algebraic derivation. As a result, theoretically rigorous upper and lower bounds of the posterior probability are obtained. Based on these bounds, a posterior probability calculation approach with controllable accuracy is proposed, whose computational burden can be adjusted flexibly according to the required accuracy. The algorithm is summarized so that it can be conveniently used. Numerical experiments have verified the calculation accuracy and the computational workload adjustment of the proposed approach under different acceptable error levels, which are superior to the existing approaches.
The significance of this research is mainly twofold. Theoretically, the posterior probability is bounded from below and from above simultaneously; both bounds are in closed form and can achieve very high precision. To the best of our knowledge, this is the first theoretically rigorous lower bound of the posterior probability ever reported. Practically, compared with the existing approaches, the calculation accuracy of the proposed approach is known and under control, and at the same time the computational burden can be adjusted flexibly according to the required accuracy.
Data availability
All data generated or analyzed during this study are included in this published article.
References
Blewitt G (1989) Carrier-phase ambiguity resolution for the global positioning system applied to geodetic baselines up to 2000 km. J Geophys Res 94(B8):10187–10302
Chang XW, Yang X, Zhou T (2005) MLAMBDA: a modified LAMBDA algorithm for integer least-squares estimation. J Geod 79(9):552–565
Chang XW, Wen J, Xie X (2013) Effects of the LLL reduction on the success probability of the Babai point and on the complexity of sphere decoding. IEEE Trans Inf Theory 59(8):4915–4926
Dermanis A, Rummel R (2008) Data analysis methods in geodesy. Geomatic method for the analysis of data in the earth sciences. Springer, Heidelberg, pp 17–92
Dong D, Bock Y (1989) Global positioning system network analysis with phase ambiguity resolution applied to crustal deformation studies in California. J Geophys Res 94:3949–3966
Euler HJ, Schaffrin B (1991) On a measure for the discernibility between different ambiguity solutions in the static-kinematic GPS-mode. In: Kinematic Systems in Geodesy Surveying and Remote Sensing. Springer, New York
Frei E, Beulter G (1990) Rapid static positioning based on the fast ambiguity resolution approach ‘FARA’: theory and first results. Manuscr Geod 15(6):326–356
Grafarend EW (2000) Mixed integer-real valued adjustment (IRA) problems: GPS initial cycle ambiguity resolution by means of the LLL algorithm. GPS Solut 4:31–44
Han S (1997) Quality control issues relating to instantaneous ambiguity resolution for real-time GPS kinematic positioning. J Geod 71(6):351–361
Koch KR (1990) Bayesian inference with geodetic applications. Springer, Berlin, p 31
Lacy MCD, Sansò F, Rodriguez-Caderot G, Gil AJ (2002) The Bayesian approach applied to GPS ambiguity resolution a mixture model for the discrete–real ambiguities alternative. J Geod 76(2):82–94
Leick A, Rapoport L, Tatarnikov D (2015) GPS satellite surveying, 4th edn. Wiley, New Jersey
Odolinski R, Teunissen PJG (2020) Best integer equivariant estimation: performance analysis using real data collected by low-cost, single- and dual-frequency, multi-GNSS receivers for short- to long-baseline RTK positioning. J Geod 94(9):1–7
Samama N (2008) Global positioning: technologies and performance. Wiley, New Jersey
Shores TS (2007) Applied linear algebra and matrix analysis. Springer, New York
Taha H (1975) Integer programming–theory, applications, and computations. Academic Press, New York
Teunissen PJG (1993) Least squares estimation of integer GPS ambiguities. Sect IV theory and methodology, IAG General Meeting, Beijing
Teunissen PJG (1995) The least-squares ambiguity decorrelation adjustment: a method for fast GPS integer ambiguity estimation. J Geod 70(1–2):65–82
Teunissen PJG (1997) A canonical theory for short GPS baselines. Part IV: precision versus reliability. J Geod 71(9):513–525
Teunissen PJG (1999) An optimality property of the integer least-squares estimator. J Geod 73(11):587–593
Teunissen PJG (2001) Statistical GNSS carrier phase ambiguity resolution: a review. Proc. of 2001 IEEE Workshop on Statistical Signal Processing, August 6–8, Singapore: 4–12
Teunissen PJG (2003a) Integer aperture GNSS ambiguity resolution. Artif Satell 38(3):79–88
Teunissen PJG (2003b) Theory of integer equivariant estimation with application to GNSS. J Geod 77:402–410
Teunissen PJG (2005a) GNSS ambiguity resolution with optimally controlled failure-rate. Artif Satell 40(4):219–227
Teunissen PJG (2005b) On the computation of the best integer equivariant estimator. Artif Satell 40:161–171
Teunissen PJG (2020) Best integer equivariant estimation for elliptically contoured distributions. J Geod 94:82
Teunissen PJG, Verhagen S (2009) The GNSS ambiguity ratio-test revisited: a better way of using it. Survey Rev 41(312):138–151
Tiberius C, Jonge P (1995) Fast positioning using the LAMBDA method. Proceedings DSNS-95, paper, 30(8)
Verhagen S, Teunissen PJG (2006) New global navigation satellite system ambiguity resolution method compared to existing approaches. J Guid Cont Dyn 29(4):981–991
Verhagen S, Teunissen PJG (2013) The ratio test for future GNSS ambiguity resolution. GPS Solut 17(4):535–548
Verhagen S, Li BF, Teunissen PJG (2013) Ps-LAMBDA: ambiguity success rate evaluation software for interferometric applications. Comput Geosci 54:361–376
Verhagen S (2005) The GNSS integer ambiguities: estimation and validation. PhD dissertation, Netherlands Geodetic Commission, Publications on Geodesy, 58
Wu ZM, Bian SF (2015) GNSS integer ambiguity validation based on posterior probability. J Geod 89(10):961–977
Wu ZM, Bian SF (2022) Regularized integer least-squares estimation: Tikhonov’s regularization in a weak GNSS model. J Geod. https://doi.org/10.1007/s00190-021-01585-7
Wu ZM, Bian SF, Ji B, Xiang CB, Jiang DF (2015) Short baseline GPS multi-frequency single-epoch precise positioning: utilizing a new carrier phase combination method. GPS Solut 20(3):373–384
Wu ZM, Li HP, Bian SF (2017) Cycled efficient V-Blast GNSS ambiguity decorrelation and search complexity estimation. GPS Solut 21:1829–1840
Xu PL (1998) Mixed integer geodetic observation models and integer programming with applications to GPS ambiguity resolution. J Geod Soc Japan 44:169–187
Xu PL (2006) Voronoi cells, probabilistic bounds and hypothesis testing in mixed integer linear models. IEEE Trans Inf Theory 52(7):3122–3138
Xu PL (2012) Parallel Cholesky-based reduction for the weighted integer least squares problem. J Geod 86(1):35–52
Xu PL, Cannon E, Lachapelle G (1995) Mixed Integer Programming for the Resolution of GPS Carrier Phase Ambiguities. Presented at IUGG95 Assembly, 2–14 July, Boulder, CO, USA.
Yu XW, Wang JL, Gao W (2017) An alternative approach to calculate the posterior probability of GNSS integer ambiguity resolution. J Geod 91:295–305
Zhu J, Ding X, Chen Y (2001) Maximum-likelihood ambiguity resolution based on Bayesian principle. J Geod 75(4):175–187
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Nos. 41504029 and 41631072), Natural Science Foundation for Distinguished Young Scholars of Hubei Province of China (No. 2019CFA086).
Contributions
Conceptualization, methodology, data collection, software, and writing were performed by ZW. ZW has read and approved the final manuscript.
Cite this article
Wu, Z. GNSS integer ambiguity posterior probability calculation with controllable accuracy. J Geod 96, 53 (2022). https://doi.org/10.1007/s00190-022-01633-w