
1 Introduction

Unmanned aerial vehicles (UAVs) have been widely used in petroleum geophysical exploration, environmental monitoring, urban planning and border patrol owing to their flexibility, strong maneuverability, low dependence on airspace and weather conditions and high cost-effectiveness [1,2,3,4]. In particular, UAVs play an irreplaceable role in emergency response to natural disasters such as earthquakes, forest fires, tsunamis and debris flows [5,6,7]. However, traditional UAV aerial photography suffers from poor attitude stability, large image distortion and small image footprint, and payload limits make it difficult to carry a high-accuracy position and orientation system (POS). At present, only the navigation equipment is used to obtain approximate image position and attitude. Therefore, to ensure the accuracy of the bundle block adjustment (BBA) method, the camera must be strictly calibrated in advance, and a large number of ground control points (GCPs) are needed to recover the image pose parameters by aerial triangulation [8, 9]. This not only increases the field workload; in difficult areas such as mountains, water bodies and forests, surveyors can hardly enter and control points are difficult to set up, so accuracy cannot be guaranteed [10, 11]. For this reason, obtaining high-precision image pose data at the moment of exposure with few or no field GCPs has become a research focus.

Compared with manned aircraft, UAVs are more cost-effective and responsive, but the observations acquired by their POS are usually less accurate than those in manned aerial photogrammetry [7]. Global positioning system (GPS) techniques for UAV applications include single point positioning (SPP) and differential positioning. SPP is mainly used where relatively low precision suffices, such as navigation. To provide higher accuracy, single-frequency or dual-frequency geodetic-grade GPS receivers with carrier phase measurement must be used, but their price ranges from $5000 to $15000, and their weight and volume are unsuitable for UAV payloads [4]. Differential positioning includes real-time kinematic (RTK) and post-processed kinematic (PPK) technology. RTK-GPS is a typical differential positioning method: by measuring the GPS carrier phase, the user obtains a high-precision real-time position that meets the requirements of various surveying and geographic information system applications [5, 12, 13]. Compared with RTK, PPK performs post-mission differencing of the carrier phase observations, which avoids interruptions of the data link while guaranteeing the same accuracy as RTK. Differential positioning can therefore be regarded as a powerful, economical alternative to geodetic-grade GPS receivers. However, geodetic-grade receivers and antennas remain expensive compared with consumer-grade ones. Tests of low-cost differential systems have found that the performance gap between consumer-grade and geodetic-grade receivers is small, whereas low-cost antennas suffer a large performance degradation [4, 14, 15].

With the miniaturization of global navigation satellite system (GNSS) equipment, RTK and PPK have been increasingly applied to micro/mini unmanned aerial vehicles (MUAVs) [16, 17]. Representative examples are the eBee Plus UAV from senseFly, Switzerland, the UX5 HP UAV from Trimble, USA, and the CW10 UAV from Chengdu Vertical Automation, China [18]. A differential GNSS UAV only needs a base station set up on the ground, acquiring satellite carrier phase information together with the rover station on the UAV; through the PPK algorithm, the accurate image positions at exposure can be obtained [12, 19]. On this basis, GNSS-assisted BBA can be used for integrated positioning, which avoids the complex process of laying a large number of GCPs in traditional UAV photography and improves the accuracy and efficiency of data processing [5, 10, 13]. Commercial UAVs with differential systems are mostly designed for navigation and are highly integrated, without considering GNSS signal reception under dynamic conditions, camera exposure delay, or the eccentricity between the receiver antenna phase center and the camera center [19, 20]. To satisfy the accuracy requirements of topographic mapping and three-dimensional modeling, the camera distortion parameters must be measured in advance and a certain number of GCPs laid in the later data processing [21,22,23].

This paper takes the BD930 positioning board produced by Trimble as the research object, completes the UAV system modification, and studies a multi-system PPK algorithm to improve the static positioning accuracy and dynamic stability of consumer-grade receivers and antennas. A flight model is built based on spline curves, and the systematic errors caused by camera exposure delay and eccentricity are discussed. A flight test was carried out in a hilly area of Henan Province, China. By reasonably setting the lens distortion parameter model and taking the corrected GNSS camera station coordinates and the calculated image attitude angles as initial values, the UAV image position and attitude parameters were obtained precisely through the self-BBA method. The advantages of this method were compared and analyzed based on the statistical residuals of check points (CPs). Finally, the results are discussed and recommendations for further investigation are given.

2 Measurement Method of UAV Pose Parameters

2.1 Double Difference Model Fused with BDS/GPS/GLONASS

The BD930 positioning board supports GPS, GLONASS, Galileo and BDS signal reception. It features fast RTK initialization and a built-in Kalman-filter PVT engine, and provides observation data with low multipath, high dynamics and low noise. The low-cost differential GNSS system designed in this paper consists of the BD930 board, a power supply system, a storage system, a measurement antenna and post-processing software (see Fig. 1).

Fig. 1 Low-cost differential GNSS system

Figure 2 depicts the working principle of the PPK UAV. The base station at a known position receives the satellite carrier and pseudo-range information together with the antenna on the UAV. When signals from more than four satellites are received synchronously, errors such as satellite orbit error, atmospheric delay, satellite clock error and multipath effects can be largely eliminated in the GNSS positioning process according to the principle of relative positioning.

Fig. 2 Working principle of PPK UAV

Compared with RTK, PPK performs post-mission differential processing of the carrier phase observations. Through a kinematic relative positioning algorithm, the receiver positions can be computed by linearly combining the satellite carrier observations recorded by the base station and the rover station over the same period [24]. The BD930 receives BDS, GPS and GLONASS satellites simultaneously. Figure 3 shows the visibility and reception of the satellite signals plotted by RTKPLOT. Effective satellites of all three systems can be received at each epoch, which makes up for the shortcomings of a single system in spatial distribution and integrity.

Fig. 3 BD930 satellite reception plotted by RTKPLOT

The pseudo-range and carrier phase observation equations of GPS, BDS and GLONASS can be expressed as follows:

$$\left\{ {\begin{array}{*{20}l} {\rho^{{G^{\prime }}} = \rho^{G} + c(t_{\text{r}} - t_{\text{s}}^{G} ) + L^{G} + D^{G} + M^{G} + T^{G} + v^{G} .} \hfill \\ {\rho^{{C^{\prime }}} = \rho^{C} + c(t_{\text{r}} + t^{\text{CG}} - t_{\text{s}}^{C} ) + L^{C} + D^{C} + M^{C} + T^{C} + v^{C} .} \hfill \\ {\rho^{{R^{\prime }}} = \rho^{R} + c(t_{\text{r}} + t^{\text{RG}} - t_{\text{s}}^{R} ) + L^{R} + D^{R} + M^{R} + T^{R} + v^{R} .} \hfill \\ \end{array} } \right.$$
(1)
$$\left\{ {\begin{array}{*{20}l} {\lambda^{G} \varphi^{G} = \rho^{G} + \lambda^{G} N^{G} + c(t_{\text{r}} - t_{\text{s}}^{G} ) + L^{G} - D^{G} + M^{G} + T^{G} + v^{G} .} \hfill \\ {\lambda^{C} \varphi^{C} = \rho^{C} + \lambda^{C} N^{C} + c(t_{\text{r}} + t^{\text{CG}} - t_{\text{s}}^{C} ) + L^{C} - D^{C} + M^{C} + T^{C} + v^{C} .} \hfill \\ {\lambda^{R} \varphi^{R} = \rho^{R} + \lambda^{R} N^{R} + c(t_{\text{r}} + t^{\text{RG}} - t_{\text{s}}^{R} ) + L^{R} - D^{R} + M^{R} + T^{R} + v^{R} .} \hfill \\ \end{array} } \right.$$
(2)

where C, G and R denote BDS, GPS and GLONASS, respectively; \(\rho\) is the geometric distance between the receiver and the satellite, \(\rho^{{\prime }}\) is the pseudo-range, \(\lambda\) is the carrier wavelength, c is the electromagnetic wave propagation speed, N is the integer ambiguity, \(t_{\text{r}}\) is the receiver clock error, \(t_{\text{s}}\) is the satellite clock error, L, D, M, T and v are, respectively, the errors caused by the troposphere, the ionosphere, multipath, the antenna phase center and the satellite ephemeris, \(t^{\text{CG}}\) is the time reference deviation between GPS and BDS, and \(t^{\text{RG}}\) is the time reference deviation between GPS and GLONASS.

The three satellite systems use different time and space references. GPST, BDT and GLONASST are their respective time references, with the following conversion relationships:

$$\begin{aligned} {\text{GPST}} & = {\text{BDT}} + 14\,{\text{s}} + [{\text{UTC}}({\text{USNO}}) - {\text{UTC}}({\text{NTSC}})], \\ {\text{GPST}} & = {\text{GLONASST}} - 3\,{\text{h}} + n \times 1\,{\text{s}} + \tau_{\text{r}} + [{\text{UTC}}({\text{USNO}}) - {\text{UTC}}({\text{SU}})]. \\ \end{aligned}$$
(3)

where \({\text{UTC}}({\text{USNO}})\), \({\text{UTC}}({\text{NTSC}})\) and \({\text{UTC}}({\text{SU}})\) are the coordinated universal time (UTC) realizations maintained for GPS, BDS and GLONASS, respectively, n is the integer leap-second count between GPST and UTC, and \(\tau_{\text{r}}\) is the systematic deviation, within 1 ms, between GLONASST and \({\text{UTC}}({\text{SU}})\).
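In multi-system processing, these offsets reduce to constants once the leap-second count is known. A minimal sketch (the sub-microsecond steering terms \(\tau_{\text{r}}\) and the UTC(USNO) − UTC(·) differences in Eq. (3) are neglected; function names are illustrative):

```python
def bdt_to_gpst(t_bdt_s):
    """BDT epoch (seconds) -> GPST: BDT started 14 leap seconds behind GPST."""
    return t_bdt_s + 14.0


def glonasst_to_gpst(t_glo_s, n_leap):
    """GLONASST epoch (seconds) -> GPST: remove the 3 h offset, add n leap seconds.

    The small steering terms tau_r and UTC(USNO) - UTC(SU) are neglected here.
    """
    return t_glo_s - 3 * 3600 + n_leap
```

The leap-second count n is read from the broadcast navigation message in practice.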

The coordinate datums of the three systems are, respectively, \(WGS{-}84\) (GPS), \(CGCS2000\) (BDS) and \(PZ{-}90\) (GLONASS).

According to their definitions, the reference ellipsoids of the \(WGS{-}84\) and \(CGCS2000\) coordinate systems differ only slightly in two defining parameters, which can be ignored over the short flight range of a UAV. The transformation between \(WGS{-}84\) and \(PZ{-}90\) is given by the Bursa model as follows:

$$\begin{aligned} \left[ {\begin{array}{*{20}c} X \\ Y \\ Z \\ \end{array} } \right]_{WGS - 84} & = \left[ {\begin{array}{*{20}c} {X_{0} } \\ {Y_{0} } \\ {Z_{0} } \\ \end{array} } \right] + (1 + m)\left[ {\begin{array}{*{20}c} 1 & {\beta_{Z} } & { - \beta_{Y} } \\ { - \beta_{Z} } & 1 & {\beta_{X} } \\ {\beta_{Y} } & { - \beta_{X} } & 1 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {X^{\prime}} \\ {Y^{\prime}} \\ {Z^{\prime}} \\ \end{array} } \right]_{PZ - 90} \\ & = \left[ {\begin{array}{*{20}c} { - 0.47} \\ { - 0.51} \\ { - 1.56} \\ \end{array} } \right] + (1 + 22 \times 10^{ - 9} ) \\ & \quad \left[ {\begin{array}{*{20}c} 1 & { - 1.728 \times 10^{ - 6} } & { - 0.017 \times 10^{ - 6} } \\ {1.728 \times 10^{ - 6} } & 1 & {0.076 \times 10^{ - 6} } \\ {0.017 \times 10^{ - 6} } & { - 0.076 \times 10^{ - 6} } & 1 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {X^{\prime}} \\ {Y^{\prime}} \\ {Z^{\prime}} \\ \end{array} } \right]_{PZ - 90} . \\ \end{aligned}$$
(4)
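Eq. (4) is a small-angle seven-parameter similarity transformation and can be applied directly. A minimal sketch with the numeric parameters of Eq. (4) (the function name is illustrative):

```python
import numpy as np


def pz90_to_wgs84(p_pz90,
                  T=(-0.47, -0.51, -1.56),   # translation, m (Eq. (4))
                  m=22e-9,                    # scale factor
                  bX=0.076e-6,                # rotations, rad (Eq. (4))
                  bY=0.017e-6,
                  bZ=-1.728e-6):
    """Seven-parameter Bursa model of Eq. (4), small-angle approximation."""
    R = np.array([[1.0,  bZ, -bY],
                  [-bZ, 1.0,  bX],
                  [ bY, -bX, 1.0]])
    return np.asarray(T) + (1.0 + m) * R @ np.asarray(p_pz90, dtype=float)
```

For a zero input vector the result is simply the translation term, which gives a quick sanity check of the parameter signs.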

The applicability and accuracy of carrier phase differencing are better than those of position and pseudo-range differencing [25]. Since UAVs fly fast and the stability of miniature GNSS receivers is poor, in order to eliminate the orbit, satellite clock and atmospheric errors common to the base and rover stations, and considering the frequency division multiple access (FDMA) technique adopted by GLONASS, a double-difference model integrating GPS, BDS and GLONASS is proposed:

$$\left\{ {\begin{array}{*{20}l} \begin{aligned} \lambda ^{{iG}} \nabla \Delta \varphi _{{br}}^{{ijG}} & = \nabla \Delta \rho _{{br}}^{{ij}} + \lambda ^{{iG}} \nabla \Delta N_{{br}}^{{ijG}} + \nabla \Delta L_{{br}}^{{ijG}} - \nabla \Delta D_{{br}}^{{ijG}} + \nabla \Delta M_{{br}}^{{ijG}} \\ & \quad + \nabla \Delta T_{{br}}^{{ijG}} + \nabla \Delta v_{{br}}^{{ijG}} . \\ \end{aligned} \hfill \\ \begin{aligned} \lambda ^{{iC}} \nabla \Delta \varphi _{{br}}^{{ijC}} & = \nabla \Delta \rho _{{br}}^{{ij}} + \lambda ^{{iC}} \nabla \Delta N_{{br}}^{{ijC}} + \nabla \Delta L_{{br}}^{{ijC}} - \nabla \Delta D_{{br}}^{{ijC}} + \nabla \Delta M_{{br}}^{{ijC}} \\ & \quad + \nabla \Delta T_{{br}}^{{ijC}} + \nabla \Delta v_{{br}}^{{ijC}} . \\ \end{aligned} \hfill \\ \begin{aligned} \lambda ^{{iR}} \nabla \Delta \varphi _{{br}}^{{ijR}} & = \nabla \Delta \rho _{{br}}^{{ij}} + \lambda ^{{iR}} \nabla \Delta N_{{br}}^{{ijR}} + (\lambda ^{i} - \lambda ^{j} )\nabla N_{{br}}^{{jR}} + \nabla \Delta L_{{br}}^{{ijR}} \\ & \quad - \nabla \Delta D_{{br}}^{{ijR}} + \nabla \Delta M_{{br}}^{{ijR}} + \nabla \Delta T_{{br}}^{{ijR}} + \nabla \Delta v_{{br}}^{{ijR}} . \\ \end{aligned} \hfill \\ \end{array} } \right.$$
(5)

Limited by the flight control system and endurance, MUAVs usually fly within 20 km, so the double differences of systematic errors can be neglected. In a given epoch, assuming the differential system simultaneously receives m GPS, n BDS and q GLONASS satellites, and taking the satellite with the highest elevation angle in each system as reference, the combined double-difference observation model is:

$$\begin{aligned} \left[ {\begin{array}{*{20}c} {\lambda^{1G} \nabla \Delta \varphi_{br}^{12G} } \\ \vdots \\ {\lambda^{1G} \nabla \Delta \varphi_{br}^{1mG} } \\ {\lambda^{1C} \nabla \Delta \varphi_{br}^{12C} } \\ \vdots \\ {\lambda^{1C} \nabla \Delta \varphi_{br}^{1nC} } \\ {\lambda^{1R} \nabla \Delta \varphi_{br}^{12R} } \\ \vdots \\ {\lambda^{1R} \nabla \Delta \varphi_{br}^{1qR} } \\ \end{array} } \right] & = \left[ {\begin{array}{*{20}c} {a^{G12} } & {b^{G12} } & {c^{G12} } \\ \vdots & \vdots & \vdots \\ {a^{G1m} } & {b^{G1m} } & {c^{G1m} } \\ {a^{C12} } & {b^{C12} } & {c^{C12} } \\ \vdots & \vdots & \vdots \\ {a^{C1n} } & {b^{C1n} } & {c^{C1n} } \\ {a^{R12} } & {b^{R12} } & {c^{R12} } \\ \vdots & \vdots & \vdots \\ {a^{R1q} } & {b^{R1q} } & {c^{R1q} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {B_{x} } \\ {B_{y} } \\ {B_{z} } \\ \end{array} } \right] \\ & \quad + {\text{diag}}(\underbrace {\lambda^{1G} , \ldots ,\lambda^{1G} }_{m - 1},\underbrace {\lambda^{1C} , \ldots ,\lambda^{1C} }_{n - 1},\underbrace {\lambda^{1R} , \ldots ,\lambda^{1R} }_{q - 1})\left[ {\begin{array}{*{20}c} {\nabla \Delta N_{br}^{12G} } \\ \vdots \\ {\nabla \Delta N_{br}^{1mG} } \\ {\nabla \Delta N_{br}^{12C} } \\ \vdots \\ {\nabla \Delta N_{br}^{1nC} } \\ {\nabla \Delta N_{br}^{12R} } \\ \vdots \\ {\nabla \Delta N_{br}^{1qR} } \\ \end{array} } \right] \\ & \quad + {\text{diag}}(\underbrace {0, \ldots ,0}_{m + n - 2},\lambda^{1R} - \lambda^{2R} , \ldots ,\lambda^{1R} - \lambda^{qR} )\left[ {\begin{array}{*{20}c} 0 \\ \vdots \\ 0 \\ {\nabla N_{br}^{2R} } \\ \vdots \\ {\nabla N_{br}^{qR} } \\ \end{array} } \right]. \\ \end{aligned}$$
(6)

where \(\nabla \Delta\) is the double-difference operator and B is the baseline vector. The integration of BDS, GPS and GLONASS ensures effective acquisition of satellite signals. Neglecting the correlation among the three systems, the variance–covariance matrix can be established by the error propagation law, and the MLAMBDA method can effectively improve the ambiguity fixing ratio [26]. Therefore, the least squares method is used to process the redundant carrier observations and solve the three-dimensional coordinates of the UAV measurement antenna.
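The structure of Eq. (6) and the float-then-fix workflow can be illustrated with a small synthetic, single-system (GPS-only), noise-free example; naive rounding stands in for the MLAMBDA search, and the geometry rows are simulated, so this is a schematic rather than the full multi-system estimator:

```python
import numpy as np

rng = np.random.default_rng(7)
lam = 0.1903                                   # GPS L1 wavelength, m
B_true = np.array([120.0, -35.0, 8.5])         # baseline vector, m
N_true = np.array([13.0, -4.0, 27.0, -18.0])   # double-difference ambiguities, cycles

rows, obs = [], []
for _ in range(3):                             # three epochs of varying geometry
    for j in range(4):                         # four double differences per epoch
        e = rng.normal(size=3)
        e /= np.linalg.norm(e)                 # synthetic (a, b, c) direction row
        rows.append(np.concatenate([e, lam * np.eye(4)[j]]))
        obs.append(e @ B_true + lam * N_true[j])   # noise-free DD carrier phase

A, L = np.array(rows), np.array(obs)
x_float, *_ = np.linalg.lstsq(A, L, rcond=None)    # float solution [B; N]
N_fix = np.round(x_float[3:])                      # integer fixing (MLAMBDA in practice)
B_fix, *_ = np.linalg.lstsq(A[:, :3], L - A[:, 3:] @ N_fix, rcond=None)
```

Fixing the ambiguities to integers and re-solving only for the baseline is what turns the decimeter-level float solution into a centimeter-level fixed solution in real data.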

2.2 Attitude Parameters Calculation of Three-Antenna Configuration

The inertial measurement unit (IMU) carried by a MUAV has low measurement accuracy and is currently used only for navigation; in data processing, its measurements generally serve as initial values for iterative computation. In this paper, a carrier attitude measurement method based on the differential technique is studied.

In order to measure the three-dimensional attitude of the UAV, three non-collinear antennas are deployed on the fuselage, with the antenna plane parallel to the main plane of the UAV and the main baseline parallel to the fuselage axis. Through a translation, the origins of the global coordinate system and the body coordinate system can both be placed at the phase center of the main antenna, and the coordinates of the main and auxiliary antennas in the two coordinate systems can be obtained by static calibration and PPK technology, respectively. The distribution of the antennas in the body coordinate system is shown in Fig. 4.

Fig. 4 Distribution of antennas in the body coordinate system

As the figure shows, in the body coordinate system the Y-axis lies along the baseline from main antenna 1 to auxiliary antenna 2, the Z-axis is perpendicular to the antenna plane, and the X-axis completes the right-handed coordinate system.

Suppose \(b_{i}\) and \(l_{i}\) are, respectively, the coordinates of the i-th antenna in the body coordinate system and the global coordinate system, and \(\alpha ,\beta\) and \(\gamma\) are the yaw, roll and pitch angles, which represent the rotation of the body coordinate system relative to the global coordinate system. Figure 4 gives the coordinates of the main and auxiliary antennas in the body coordinate system: \(b_{1} = \left[ {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right]^{\text{T}}\), \(b_{2} = \left[ {\begin{array}{*{20}c} 0 & {b_{12} } & 0 \\ \end{array} } \right]^{\text{T}}\), \(b_{3} = \left[ {\begin{array}{*{20}c} {x_{3,b} } & {y_{3,b} } & 0 \\ \end{array} } \right]^{\text{T}}\). The conversion relationship between the two coordinate systems can be expressed as:

$$\begin{aligned} b_{i} & = R_{l}^{b} (l_{i} - l_{1} ) = R_{Y} (\beta )R_{X} (\gamma )R_{Z} (\alpha )(l_{i} - l_{1} ) \\ & = \left[ {\begin{array}{*{20}c} {\cos \beta \cos \alpha - \sin \beta \sin \gamma \sin \alpha } & {\cos \beta \sin \alpha + \sin \beta \sin \gamma \cos \alpha } & { - \sin \beta \cos \gamma } \\ { - \cos \gamma \sin \alpha } & {\cos \gamma \cos \alpha } & {\sin \gamma } \\ {\sin \beta \cos \alpha + \cos \beta \sin \gamma \sin \alpha } & {\sin \beta \sin \alpha - \cos \beta \sin \gamma \cos \alpha } & {\cos \beta \cos \gamma } \\ \end{array} } \right](l_{i} - l_{1} ). \\ \end{aligned}$$
(7)

According to the orthogonality of the rotation matrix, the conversion relationship of the main baseline between the two coordinate systems is obtained as:

$$\begin{array}{*{20}c} {l_{2} - l_{1} } \\ \end{array} = \left[ {\begin{array}{*{20}c} {x_{2,l} - x_{1,l} } \\ {y_{2,l} - y_{1,l} } \\ {z_{2,l} - z_{1,l} } \\ \end{array} } \right] = b_{12} \left[ {\begin{array}{*{20}c} { - \cos \gamma \sin \alpha } \\ {\cos \gamma \cos \alpha } \\ {\sin \gamma } \\ \end{array} } \right].$$
(8)

From Eq. (8), the yaw angle \(\alpha\) and the pitch angle \(\gamma\) are calculated through the main baseline \(b_{12}\):

$$\alpha = - \arctan \left( {\frac{{x_{2,l} - x_{1,l} }}{{y_{2,l} - y_{1,l} }}} \right).$$
(9)
$$\gamma = \arcsin \left( {\frac{{z_{2,l} - z_{1,l} }}{{b_{12} }}} \right) = \arctan \left( {\frac{{z_{2,l} - z_{1,l} }}{{\sqrt {\left( {x_{2,l} - x_{1,l} } \right)^{2} + \left( {y_{2,l} - y_{1,l} } \right)^{2} } }}} \right).$$
(10)

Next, the roll angle \(\beta\) is calculated according to the relationship of the baseline formed by the auxiliary antenna 3 in the two coordinate systems:

$$b_{3} = \left[ {\begin{array}{*{20}c} {x_{3,b} } \\ {y_{3,b} } \\ 0 \\ \end{array} } \right] = R_{Y} (\beta )R_{X} (\gamma )R_{Z} (\alpha )\left[ {\begin{array}{*{20}c} {x_{3,l} - x_{1,l} } \\ {y_{3,l} - y_{1,l} } \\ {z_{3,l} - z_{1,l} } \\ \end{array} } \right].$$
(11)

The yaw angle \(\alpha\) and the pitch angle \(\gamma\) are calculated according to Eqs. (9) and (10). The baseline is first rotated from the global coordinate system by \(\alpha\) around the Z-axis, and then by \(\gamma\) around the X-axis, to obtain a new baseline vector:

$$\left[ {\begin{array}{*{20}c} {x^{\prime}_{3,l} - x_{1,l} } \\ {y^{\prime}_{3,l} - y_{1,l} } \\ {z^{\prime}_{3,l} - z_{1,l} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} 1 & 0 & 0 \\ 0 & {\cos \gamma } & {\sin \gamma } \\ 0 & { - \sin \gamma } & {\cos \gamma } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\cos \alpha } & {\sin \alpha } & 0 \\ { - \sin \alpha } & {\cos \alpha } & 0 \\ 0 & 0 & 1 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {x_{3,l} - x_{1,l} } \\ {y_{3,l} - y_{1,l} } \\ {z_{3,l} - z_{1,l} } \\ \end{array} } \right].$$
(12)

Similarly, the new relationship is obtained by the orthogonality of the rotation matrix \(R_{Y} (\beta )\):

$$\left[ {\begin{array}{*{20}c} {x_{3,b} \cos \beta } \\ {y_{3,b} } \\ { - x_{3,b} \sin \beta } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {x^{\prime}_{3,l} - x_{1,l} } \\ {y^{\prime}_{3,l} - y_{1,l} } \\ {z^{\prime}_{3,l} - z_{1,l} } \\ \end{array} } \right].$$
(13)

Thus, the roll angle \(\beta\) is:

$$\beta = - \arctan \left( {\frac{{z^{\prime}_{3,l} - z_{1,l} }}{{x^{\prime}_{3,l} - x_{1,l} }}} \right).$$
(14)
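The attitude recovery of Eqs. (9), (10), (12) and (14) can be sketched as follows (it assumes \(|\gamma| < 90^{\circ}\) and \(x_{3,b} > 0\) so the arctangent branches are unambiguous; the function name is illustrative):

```python
import numpy as np


def attitude_from_baselines(l1, l2, l3, b12):
    """Recover (yaw alpha, roll beta, pitch gamma) from the two antenna baselines,
    following Eqs. (9), (10), (12) and (14).  Assumes |gamma| < 90 deg, x_{3,b} > 0."""
    d12 = np.asarray(l2) - np.asarray(l1)
    alpha = -np.arctan2(d12[0], d12[1])                  # Eq. (9)
    gamma = np.arcsin(d12[2] / b12)                      # Eq. (10)
    Rz = np.array([[np.cos(alpha),  np.sin(alpha), 0.0],
                   [-np.sin(alpha), np.cos(alpha), 0.0],
                   [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(gamma),  np.sin(gamma)],
                   [0.0, -np.sin(gamma), np.cos(gamma)]])
    d3 = Rx @ Rz @ (np.asarray(l3) - np.asarray(l1))     # Eq. (12)
    beta = -np.arctan2(d3[2], d3[0])                     # Eq. (14)
    return alpha, beta, gamma
```

Feeding in PPK-derived antenna positions for one epoch yields the attitude of that epoch; repeating per exposure gives the attitude series used in Sect. 3.2.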

3 Correction of Image Position Parameters

3.1 Cubic Spline Interpolation for Solving Exposure Delay

The sampling rate of the BD930 differential GNSS system carried by the UAV is 20 Hz, i.e., one sample every 0.05 s during flight. Therefore, to obtain the position at each shooting instant, the camera flash pulse must be recorded and the exposure-time coordinates computed by an interpolation algorithm. Curve-fitting methods for the space trajectory include Lagrange interpolation, Chebyshev polynomial interpolation and spline interpolation [27]. Studies have found that the order of the interpolating polynomial should be neither too high nor too low: Lagrange polynomials are prone to the "Runge phenomenon," and Chebyshev polynomial interpolation requires order 11 to reach millimeter accuracy, which increases the computation and reduces efficiency. UAV flight is easily affected by wind speed, wind direction, navigation error and air temperature, so the speed differs at each exposure instant. Since the spline function is an important approximation tool for curve fitting, interpolation and numerical differentiation [28], we propose a GNSS three-coordinate interpolation algorithm based on the cubic spline function (see Fig. 5).

Fig. 5 Exposure delay correction model

Let the sampling instants of the UAV on a single flight strip be \(P = \left\{ {a = t_{0} < t_{1} < \cdots < t_{n} = b} \right\}\), a uniform partition of the interval \([a,b]\) with equally spaced grid points \(t_{i} = a + ih,\;i = 0,1, \ldots ,n\), where \(h = (b - a)/n\).

The coordinate obtained by the differential GNSS system at the sampling point is:

$$f(t_{i} ) = y_{i} ,\quad i = 0,1, \ldots ,n.$$
(15)

We construct a cubic spline function \(s(t){ \in }S_{3} (t_{1} ,t_{2} , \ldots ,t_{n - 1} )\), where \(s(t)\) satisfies the following interpolating conditions:

$$s(t_{i} ) = y_{i} ,\quad i = 0,1, \ldots ,n.$$
(16)

We denote \(s^{{\prime }} (t_{k} ) = m_{k} (k = 0,1, \ldots ,n)\), \(h_{k} = t_{k + 1} - t_{k} = 0.05(k = 0,1, \ldots ,n - 1)\), \(s(t)\) in the subinterval \([t_{k} ,t_{k + 1} ]\) can be expressed as:

$$\begin{aligned} s(t) & = \frac{{h_{k} + 2(t - t_{k} )}}{{h_{k}^{3} }}(t - t_{k + 1} )^{2} y_{k} + \frac{{h_{k} - 2(t - t_{k + 1} )}}{{h_{k}^{3} }}(t - t_{k} )^{2} y_{k + 1} \\ & \quad + \frac{{(t - t_{k} )(t - t_{k + 1} )^{2} }}{{h_{k}^{2} }}m_{k} + \frac{{(t - t_{k + 1} )(t - t_{k} )^{2} }}{{h_{k}^{2} }}m_{k + 1} . \\ \end{aligned}$$
(17)

In order to get constant coefficients \(m_{0} ,m_{1} , \ldots ,m_{n}\), we need to calculate the second derivative of \(s(t)\):

$$\begin{aligned} s^{{\prime \prime }} (t) & = \frac{{6t - 2t_{k} - 4t_{k + 1} }}{{h_{k}^{2} }}m_{k} + \frac{{6t - 4t_{k} - 2t_{k + 1} }}{{h_{k}^{2} }}m_{k + 1} \\ & \quad + \frac{{6(t_{k} + t_{k + 1} - 2t)}}{{h_{k}^{3} }}(y_{k + 1} - y_{k} ),\quad t{ \in }[t_{k} ,t_{k + 1} ]. \\ \end{aligned}$$
(18)

and

$$\mathop {\lim }\limits_{{t \to t_{k}^{ + } }} s^{\prime \prime} (t) = - \frac{4}{{h_{k} }}m_{k} - \frac{2}{{h_{k} }}m_{k + 1} + \frac{6}{{h_{k}^{2} }}(y_{k + 1} - y_{k} ).$$
(19)

Similarly, we can obtain the expression of \(s(t)\) on the interval \([t_{k - 1} ,t_{k} ]\); the continuity condition \(\mathop {\lim }\limits_{{t \to t_{k}^{ + } }} s^{{\prime \prime }} (t) = \mathop {\lim }\limits_{{t \to t_{k}^{ - } }} s^{{\prime \prime }} (t)\;(k = 1,2, \ldots ,n - 1)\) then yields:

$$\lambda_{k} m_{k - 1} + 2m_{k} + \mu_{k} m_{k + 1} = g_{k} (k = 1,2, \ldots ,n - 1).$$
(20)

where

$$\lambda_{k} = \frac{{h_{k} }}{{h_{k} + h_{k - 1} }},\quad \mu_{k} = \frac{{h_{k - 1} }}{{h_{k} + h_{k - 1} }},\;\;g_{k} = 3\left( {\mu_{k} \frac{{y_{k + 1} - y_{k} }}{{h_{k} }} + \lambda_{k} \frac{{y_{k} - y_{k - 1} }}{{h_{k - 1} }}} \right).$$
(21)

By making use of Eq. (20), we obtain a system of n − 1 equations in n + 1 unknowns, so two additional boundary conditions are imposed, of either the first-derivative or the second-derivative type:

$$\begin{aligned} & s^{{\prime }} (t_{0} ) = f^{{\prime }}_{0} ,s^{{\prime }} (t_{n} ) = f^{{\prime }}_{n} . \\ & s^{{\prime \prime }} (t_{0} ) = f^{{\prime \prime }}_{0} ,s^{{\prime \prime }} (t_{n} ) = f^{{\prime \prime }}_{n} . \\ \end{aligned}$$
(22)

Therefore, the coefficients \(m_{0} ,m_{1} , \ldots ,m_{n}\) can be determined by imposing (22), and the GNSS antenna position at each exposure time can then be calculated.
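The slope system of Eqs. (20)–(21) with the first-derivative boundary conditions of Eq. (22), followed by evaluation with Eq. (17), can be sketched as follows (a dense solve is used for clarity; a tridiagonal solver would be preferred in practice):

```python
import numpy as np


def complete_cubic_spline_slopes(t, y, d0, dn):
    """Solve the system of Eqs. (20)-(21) for the nodal slopes m_k, with the
    first-derivative boundary conditions of Eq. (22): s'(t_0)=d0, s'(t_n)=dn."""
    n = len(t) - 1
    h = np.diff(t)
    A = np.zeros((n + 1, n + 1))
    g = np.zeros(n + 1)
    A[0, 0], g[0] = 1.0, d0                    # s'(t_0) = f'_0
    A[n, n], g[n] = 1.0, dn                    # s'(t_n) = f'_n
    for k in range(1, n):
        lam = h[k] / (h[k] + h[k - 1])         # lambda_k of Eq. (21)
        mu = h[k - 1] / (h[k] + h[k - 1])      # mu_k of Eq. (21)
        A[k, k - 1], A[k, k], A[k, k + 1] = lam, 2.0, mu
        g[k] = 3.0 * (mu * (y[k + 1] - y[k]) / h[k]
                      + lam * (y[k] - y[k - 1]) / h[k - 1])
    return np.linalg.solve(A, g)               # dense solve for clarity


def spline_eval(t, y, m, tq):
    """Evaluate the Hermite form of Eq. (17) on the subinterval containing tq."""
    k = int(np.clip(np.searchsorted(t, tq) - 1, 0, len(t) - 2))
    h = t[k + 1] - t[k]
    return ((h + 2 * (tq - t[k])) * (tq - t[k + 1]) ** 2 / h ** 3 * y[k]
            + (h - 2 * (tq - t[k + 1])) * (tq - t[k]) ** 2 / h ** 3 * y[k + 1]
            + (tq - t[k]) * (tq - t[k + 1]) ** 2 / h ** 2 * m[k]
            + (tq - t[k + 1]) * (tq - t[k]) ** 2 / h ** 2 * m[k + 1])
```

Each of the three antenna coordinates is interpolated independently over the 0.05 s grid, with the recorded flash-pulse timestamp passed as tq.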

3.2 Real-Time Measurement of Eccentricity Based on Space Resection

To ensure flight and signal-reception performance, the GNSS antenna must be installed on top of the UAV, and owing to the requirement on the system's center of gravity, the phase center of the antenna cannot coincide with the lens center of the aerial camera, giving the three eccentric components shown in Fig. 6. Owing to the flight control system and airflow changes during photogrammetry, the UAV tilts and deflects by varying angles; these changes make the eccentricity affect the accuracy of geopositioning and subsequent mapping [20]. Thus, the eccentric components must be initially calibrated and then corrected in real time with the flight attitude to acquire accurate image positions.

Fig. 6 Eccentric components between antenna and aerial camera

Traditional eccentricity measurement methods include close-range photogrammetry, manual measurement, theodolite measurement and direct projection measurement, which neither account for the system integration error nor support real-time correction. In calibration, these methods mostly rely on other measurement systems and impose demanding site requirements.

In this paper, a real-time eccentricity measurement method based on space resection is proposed. Firstly, a global coordinate system \(O \text{-} XYZ\) is established by setting a certain number of GCPs on undulating ground. Then, the UAV hovers level above these points at a height that keeps all points within the imaging range, with the vertical-axis level and the yaw angle adjusted to zero, while the differential GNSS base station operates on the ground. Finally, the conversion relationship between the image space coordinate system and the global coordinate system is established by a single-image space resection algorithm. The initial values of the eccentric components are obtained by comparing the origin of the image space coordinate system with the differential GNSS position. The measurement scheme is shown in Fig. 7.

Fig. 7 Eccentric components measurement scheme

We denote the coordinates of the GCPs in the global coordinate system by \(P_{i} (X_{i} ,Y_{i} ,Z_{i} )\) and in the image coordinate system by \(p_{i} (x_{i} ,y_{i} )\), where \(i = 1,2, \ldots\) and at least three points are required. Then, the collinearity condition equations can be established as follows:

$$\left\{ {\begin{array}{*{20}l} {x_{i} - x_{0} - \vartriangle x = - f\frac{{a_{1} (X_{i} - X_{S} ) + b_{1} (Y_{i} - Y_{S} ) + c_{1} (Z_{i} - Z_{S} )}}{{a_{3} (X_{i} - X_{S} ) + b_{3} (Y_{i} - Y_{S} ) + c_{3} (Z_{i} - Z_{S} )}}.} \hfill \\ {y_{i} - y_{0} - \vartriangle y = - f\frac{{a_{2} (X_{i} - X_{S} ) + b_{2} (Y_{i} - Y_{S} ) + c_{2} (Z_{i} - Z_{S} )}}{{a_{3} (X_{i} - X_{S} ) + b_{3} (Y_{i} - Y_{S} ) + c_{3} (Z_{i} - Z_{S} )}}.} \hfill \\ \end{array} } \right.$$
(23)

and

$$\left\{ {\begin{array}{*{20}l} {\vartriangle x = (x - x_{0} )(k_{1} r^{2} + k_{2} r^{4} + k_{3} r^{6} ) - p_{1} [r^{2} + 2(x - x_{0} )^{2} ] - 2p_{2} (x - x_{0} )(y - y_{0} ).} \hfill \\ {\vartriangle y = (y - y_{0} )(k_{1} r^{2} + k_{2} r^{4} + k_{3} r^{6} ) - p_{2} [r^{2} + 2(y - y_{0} )^{2} ] - 2p_{1} (x - x_{0} )(y - y_{0} ).} \hfill \\ \end{array} } \right.$$
(24)

In Formulas (23) and (24), \((x_{0} ,y_{0} ,f)\) are the interior orientation elements, \((\vartriangle x,\vartriangle y)\) are the deviations caused by the camera distortion parameters, and r is the radial distance of the image point from the principal point. \(k_{1}\), \(k_{2}\) and \(k_{3}\) are radial distortion parameters, and \(p_{1}\) and \(p_{2}\) are decentering (tangential) distortion parameters. \((X_{S} ,Y_{S} ,Z_{S} )\) is the coordinate of the camera station S in the global coordinate system. The camera coordinate system is set parallel to the UAV body coordinate system, so the UAV attitude \((\alpha ,\beta ,\gamma )\) is used to represent the image angular elements. The rotation matrix can also be expressed as:

$$R = \left[ {\begin{array}{*{20}c} {\cos \beta \cos \alpha - \sin \beta \sin \gamma \sin \alpha } & {\cos \beta \sin \alpha + \sin \beta \sin \gamma \cos \alpha } & { - \sin \beta \cos \gamma } \\ { - \cos \gamma \sin \alpha } & {\cos \gamma \cos \alpha } & {\sin \gamma } \\ {\sin \beta \cos \alpha + \cos \beta \sin \gamma \sin \alpha } & {\sin \beta \sin \alpha - \cos \beta \sin \gamma \cos \alpha } & {\cos \beta \cos \gamma } \\ \end{array} } \right].$$
(25)
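As a minimal sketch, the rotation matrix of Formula (25) can be assembled from the attitude angles as follows (Python with NumPy; the entries and signs follow Formula (25) term by term):

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Assemble the rotation matrix of Formula (25) from the UAV
    attitude angles (alpha, beta, gamma), given in radians."""
    sa, ca = np.sin(alpha), np.cos(alpha)
    sb, cb = np.sin(beta), np.cos(beta)
    sg, cg = np.sin(gamma), np.cos(gamma)
    return np.array([
        [cb * ca - sb * sg * sa,  cb * sa + sb * sg * ca, -sb * cg],
        [-cg * sa,                cg * ca,                 sg     ],
        [sb * ca + cb * sg * sa,  sb * sa - cb * sg * ca,  cb * cg],
    ])
```

Since \(R\) is orthonormal, its inverse equals its transpose, which is the property used in Formula (27).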

In theory, the six exterior orientation elements of an image can be obtained from three independent GCPs and their image point coordinates. We linearize the redundant observation equations and apply the least squares adjustment to obtain the pose parameters of a single image accurately. Given the phase center coordinates \((X_{A} ,Y_{A} ,Z_{A} )\) of the UAV GNSS antenna, we calculate the coordinate components of the eccentricity in the image space coordinate system as follows:

$$\left[ {\begin{array}{*{20}c} u \\ v \\ w \\ \end{array} } \right] = R\left[ {\begin{array}{*{20}c} {X_{A} - X} \\ {Y_{A} - Y} \\ {Z_{A} - Z} \\ \end{array} } \right].$$
(26)

During the flight of the UAV, the attitude changes in real time, so the eccentricity has a different effect on image positioning at each moment. Combined with the three-antenna configuration method proposed in Sect. 2.2, the differential technique is used to calculate the attitude data \((\alpha_{t} ,\beta_{t} ,\gamma_{t} )\) at each exposure time. According to Formula (25), the rotation matrix \(R_{t}\) is formed, and the real-time position of the image is corrected as follows:

$$\left[ {\begin{array}{*{20}c} {X_{t} } \\ {Y_{t} } \\ {Z_{t} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {X_{At} } \\ {Y_{At} } \\ {Z_{At} } \\ \end{array} } \right] - R_{t}^{\text{T}} \left[ {\begin{array}{*{20}c} u \\ v \\ w \\ \end{array} } \right].$$
(27)
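A minimal sketch of the eccentricity correction in Formulas (26) and (27), assuming the rotation matrix of Formula (25) is available as a 3×3 NumPy array and all positions are in the same global frame:

```python
import numpy as np

def eccentricity_in_image_frame(R, antenna_xyz, camera_xyz):
    """Formula (26): rotate the antenna-to-camera-station offset into
    the image space coordinate system, giving the lever arm (u, v, w)."""
    return R @ (np.asarray(antenna_xyz) - np.asarray(camera_xyz))

def corrected_image_position(R_t, antenna_xyz_t, uvw):
    """Formula (27): at exposure time t, rotate the fixed lever arm back
    with the current attitude R_t and subtract it from the antenna position."""
    return np.asarray(antenna_xyz_t) - R_t.T @ uvw
```

The lever arm \((u,v,w)\) is constant in the image space frame, so it is computed once and re-projected with each exposure's attitude \(R_{t}\).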

4 GNSS Assisted Self-BBA

Bundle block adjustment (BBA) is the most theoretically rigorous adjustment method based on the collinearity condition equations. It calculates the image exterior orientation elements and the global coordinates of the image points by the least squares principle. In recent years, with the application of POS-assisted aerial triangulation and of structure from motion (SFM) from computer vision to aerial photogrammetry, image processing efficiency has greatly improved. Researchers have invested considerable effort in improving the automation and robustness of SFM [29], but production cost and the accuracy of the results have received less attention.

Most MUAVs carry non-metric cameras. Before image processing, in order to obtain an accurate correspondence between three-dimensional space and the two-dimensional image, the camera parameters need to be strictly calibrated. With the development of self-calibration technology, we take the pose parameters from PPK and construct a new BBA model with lens distortion parameters, GCPs and image exterior orientation elements as unknowns. When the inner orientation elements of the camera are known, the observation equations can be expanded to the first-order term of the Taylor series about the initial values of the unknowns. The error equations are as follows:

$$\begin{aligned} V_{X} &= Bx + A_{X} t + Ss - L_{X} ,\quad &{\text{WM}}:E. \\ V_{G} &= A_{G} t - L_{G} ,\quad &{\text{WM}}:P_{G} . \\ V_{I} &= A_{I} t - L_{I} ,\quad &{\text{WM}}:P_{I} . \end{aligned}$$
(28)

where

  • \(V_{X} ,V_{G} ,V_{I}\) are, respectively, the observation correction vector of image points coordinates, differential GNSS camera station coordinates and attitude angles;

  • \(x = [\Delta X \Delta Y \Delta Z]^{\text{T}}\) is the ground point coordinate increment vector;

  • \(t = [\Delta X_{s} \Delta Y_{s} \Delta Z_{s} \Delta \alpha_{x} \Delta \omega \Delta \kappa ]^{\text{T}}\) is the image exterior orientation elements increment vector;

  • \(s = [k_{1} k_{2} k_{3} p_{1} p_{2} ]^{\text{T}}\) is the camera distortion parameter vector;

  • \(B,A_{X} ,S,A_{G} ,A_{I}\) are the coefficient matrices of the observation equations, composed of the first-order partial derivatives with respect to the unknown parameters;

  • \(L_{X} = \left[ {\begin{array}{*{20}c} {x - x_{0} } \\ {y - y_{0} } \\ \end{array} } \right]\) is the image point coordinate observation residual vector;

  • \(L_{G} = \left[ {\begin{array}{*{20}c} {X_{A} - X_{A}^{0} } \\ {Y_{A} - Y_{A}^{0} } \\ {Z_{A} - Z_{A}^{0} } \\ \end{array} } \right]\) is the differential GNSS camera coordinate residual vector;

  • \(L_{I} = \left[ {\begin{array}{*{20}c} {\alpha_{x} - \alpha_{x0} } \\ {\omega - \omega_{0} } \\ {\kappa - \kappa_{0} } \\ \end{array} } \right]\) is the attitude angle calculation residual vector;

  • \(P_{G} = \frac{{\sigma_{0}^{2} }}{{\sigma_{G}^{2} }}E\) is the GNSS camera coordinate weight matrix and \(P_{I} = \frac{{\sigma_{0}^{2} }}{{\sigma_{I}^{2} }}E\) is the attitude angle weight matrix, where \(\sigma_{0}\) is the image coordinate measurement accuracy, \(\sigma_{G}\) is the GNSS positioning accuracy and \(\sigma_{I}\) is the attitude angle calculation accuracy.
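As a small illustration, the weight matrices above are scaled identity matrices; the accuracy figures below are assumed values for the sketch, not taken from the paper's experiments:

```python
import numpy as np

def weight_matrix(sigma_0, sigma_obs, dim=3):
    """P = (sigma_0^2 / sigma_obs^2) * E, the form used for P_G and P_I."""
    return (sigma_0 ** 2 / sigma_obs ** 2) * np.eye(dim)

# Illustrative (assumed) accuracies expressed in consistent units:
# 0.05 m equivalent image measurement accuracy, 0.10 m GNSS accuracy.
P_G = weight_matrix(0.05, 0.10)  # down-weights the GNSS observations by 1/4
```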

Let:

$$\begin{aligned} &V = \left[ {\begin{array}{*{20}c} {V_{X} } \\ {V_{G} } \\ {V_{I} } \\ \end{array} } \right],\quad X = \left[ {\begin{array}{*{20}c} x \\ t \\ s \\ \end{array} } \right],\;\;L = \left[ {\begin{array}{*{20}c} {L_{X} } \\ {L_{G} } \\ {L_{I} } \\ \end{array} } \right],\\&A = \left[ {\begin{array}{*{20}c} B & {A_{X} } & S \\ 0 & {A_{G} } & 0 \\ 0 & {A_{I} } & 0 \\ \end{array} } \right],\;\;P = \left[ {\begin{array}{*{20}c} E & {} & {} \\ {} & {P_{G} } & {} \\ {} & {} & {P_{I} } \\ \end{array} } \right]. \end{aligned}$$
(29)

Then, Formula (28) can be written compactly as:

$$V = AX - L,\;\;WM:P.$$
(30)

The normal equation is:

$$(A^{\text{T}} PA)X = A^{\text{T}} PL.$$
(31)

The expansion is as follows:

$$\begin{aligned} & \left[ {\begin{array}{*{20}c} {B^{{\text{T}}} B} & {B^{{\text{T}}} A_{X} } & {B^{{\text{T}}} S} \\ {A_{X}^{{\text{T}}} B} & {A_{X}^{{\text{T}}} A_{X} + A_{G}^{{\text{T}}} P_{G} A_{G} + A_{I}^{{\text{T}}} P_{I} A_{I} } & {A_{X}^{{\text{T}}} S} \\ {S^{{\text{T}}} B} & {S^{{\text{T}}} A_{X} } & {S^{{\text{T}}} S} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} x \\ t \\ s \\ \end{array} } \right] \\ & \quad - \left[ {\begin{array}{*{20}c} {B^{{\text{T}}} L_{X} } \\ {A_{X}^{{\text{T}}} L_{X} + A_{G}^{{\text{T}}} P_{G} L_{G} + A_{I}^{{\text{T}}} P_{I} L_{I} } \\ {S^{{\text{T}}} L_{X} } \\ \end{array} } \right] = 0. \\ \end{aligned}$$
(32)
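Solving the normal Equation (31) amounts to one Gauss-Newton step; a generic sketch in Python, where A, P and L stand in for the already linearized block matrices of Formula (29):

```python
import numpy as np

def normal_equation_step(A, P, L):
    """Solve (A^T P A) X = A^T P L  (Formula (31)) for the correction
    vector X = [x, t, s]^T of one adjustment iteration."""
    N = A.T @ P @ A  # normal matrix, expanded blockwise in Formula (32)
    b = A.T @ P @ L
    return np.linalg.solve(N, b)
```

In practice X updates the unknowns, the observation equations are re-linearized, and the step is repeated until convergence.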

Figure 8 depicts the GNSS-assisted self-BBA processing workflow. In practice, setting the weights alone is not sufficient to guarantee accuracy: gross error detection against a standardized residual threshold, gross error elimination and iterative recomputation are also needed.

Fig. 8
figure 8

GNSS-assisted self-BBA processing workflow

5 Experiments

5.1 Precision Testing of Differential GNSS System

To explore the accuracy of the PPK algorithm fusing BDS/GPS/GLONASS and to test the timeliness and feasibility of the image position correction method, we carried out static and dynamic experiments on the differential BD930 GNSS system.

In Fig. 9, the base station is located at a GCP with a known position. We obtain the plane coordinates and altitudes of five feature points within a range of 1 km. The rover station is then placed in turn on \(P_{1} ,P_{2} ,P_{3} ,P_{4} ,P_{5}\) and observes satellite data for 5 min simultaneously with the base station.

Fig. 9
figure 9

Static accuracy test

In Fig. 10, we obtain the precision of one point by PPK fusing BDS/GPS/GLONASS under good satellite reception conditions.

Fig. 10
figure 10

Data accuracy of 1 km baseline length

From Fig. 10, the calculated GNSS data fluctuate within a small range. We take the average value at each point as the final static positioning result and compare it with the coordinates obtained by geodetic survey. Figure 11 shows the three-direction coordinate residuals of the observations at each point.

Fig. 11
figure 11

Residual statistics under static condition

According to the formula \(m = \pm \sqrt {\frac{{V^{\text{T}} V}}{n}}\), the median errors of the coordinate components of the close-range observations are calculated as \(m_{x} = \pm 0.0036\,{\text{m}}\), \(m_{y} = \pm 0.0041\,{\text{m}}\) and \(m_{h} = \pm 0.0048\,{\text{m}}\).
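The median error above is simply the root mean square of the residual vector; a short sketch:

```python
import numpy as np

def median_error(residuals):
    """m = sqrt(V^T V / n) over the n coordinate residuals V."""
    v = np.asarray(residuals, dtype=float)
    return float(np.sqrt(v @ v / v.size))
```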

The flying range of MUAVs is generally within 20 km of the ground control station. In Fig. 12, we test the static accuracy for baseline lengths of 5 km, 10 km and 20 km, respectively. The results for the different baseline lengths are shown in Table 1.

Fig. 12
figure 12

Data accuracy of different baseline length

Table 1 Data results of different baseline length

We can see that the positioning accuracy of the differential GNSS system is high. The accuracy and stability in the plane direction are better than those in the elevation direction. As the baseline length increases, the positioning accuracy and stability of the system decrease gradually.

To explore the stability of the differential GNSS system for dynamic positioning on a UAV, and the advantages of the PPK algorithm proposed in this paper, we flew a fixed-wing FW100 UAV carrying the BD930 module along nine north–south routes within the A-B-C-D area in Henan Province, China. The layout of the routes is shown in Fig. 13.

Fig. 13
figure 13

PPK UAV flight test

After the flight, we process the downloaded GNSS data with the traditional and the new PPK algorithm, respectively, as shown in Fig. 14.

Fig. 14
figure 14

Comparison of different methods

5.2 Precision Testing of Exterior Orientation Elements

To verify the accuracy of the corrected exterior orientation elements and their advantages in map production, we used the FW100 UAV with a Sony RX1R II camera to take aerial photographs over a hilly area of 2.3 km² in Henan Province, China. The flight height is 700 m, the forward overlap is 80%, the side overlap is 60%, and the ground resolution is 5 cm; the block consists of 5 north–south routes, 2 east–west routes and 213 images. In Fig. 15, 34 GCPs (mark size: 60 cm × 60 cm) are laid out on the ground.

Fig. 15
figure 15

Aerial range and GCPs layout

After differential post-processing, the GNSS data are adjusted by the interpolation algorithm and the eccentricity correction. In Tables 2 and 3, we obtain the image pose data and camera distortion parameters by GNSS self-BBA without any GCPs. To keep the geographical location confidential without affecting the data analysis, the same number of digits is masked in the coordinate values of Table 2.

Table 2 Pose parameters (parts)
Table 3 Camera distortion parameters

In Figs. 16 and 17, we adopt three methods: BBA based on a large number of GCPs (GCP BBA), self-BBA based on POS data (POS self-BBA), and self-BBA based on four corner GCPs and POS data (GCP&POS self-BBA). By setting 12 check points (CPs), we compare the plane and elevation median errors of the three methods; the calculation results are shown in Table 4.

Fig. 16
figure 16

CPs plane residuals

Fig. 17
figure 17

CPs elevation residuals

Table 4 Error statistics in three methods

6 Conclusions

This paper presents a study using the BD930 to acquire carrier phase observations of BDS, GPS and GLONASS satellites in a double-difference model, in order to build a PPK UAV system and improve the accuracy of the system pose parameters. To eliminate systematic errors, a cubic spline GNSS interpolation algorithm and a real-time eccentricity correction scheme based on space resection are proposed. Through the three-antenna configuration scheme, the attitude parameters at each exposure time are calculated. Combined with the GNSS self-BBA model, we obtain accurate image pose parameters. Through the experiments and analysis, we conclude that the static positioning accuracy of the differential GNSS system over a small range (r < 1 km) can reach the millimeter level. When the UAV performs tasks over a wide range (r < 20 km), the plane and elevation positioning accuracies remain within 10 cm, and the fixed-solution ratio under dynamic conditions increases by 58.7% compared with the traditional processing method. In the GNSS-assisted self-BBA, we reasonably set the weights of the corrected GNSS data and compare the GCP BBA, POS self-BBA and GCP&POS self-BBA image processing methods to confirm the advantages of the method in this paper. Without using any GCPs, the median error at the CPs is less than 0.3 m; with 4 corner GCPs, it is less than 0.2 m, which proves that the scheme obtains accurate image pose and camera distortion parameters for the PPK UAV. This accuracy can meet the requirements of large-scale mapping. Therefore, it is feasible to apply PPK technology to UAV systems. The system can not only meet the accuracy requirements of general geographic information products, but also eliminate the complicated processes of laying a large number of GCPs and of camera calibration, so that the production efficiency of surveying and mapping can be improved.