1 Introduction

Research in the UAS field is attracting increasing attention due to the wide range of applications, both military and civilian [1]. UASs can perform missions that pose high risks to human operators, such as search and rescue, reconnaissance and strike, and surveillance and monitoring in danger-prone or inaccessible sites [2].

Knowledge of the environment, mapping and localization are important tasks for remote pilots and/or autonomous flight, particularly during the different flight phases in unknown areas. In this paper we describe the preliminary steps towards the development of a Detect-And-Avoid (DAA) system based on low-cost sensors, in particular LIDAR and ToF. The platform will be installed on a UAV to send position information, detect obstacles and map the field [3,4,5,6]. Both sensors have typical “low-cost” characteristics, i.e. miniaturization, fast response time and a sensing range adequate for obstacle detection. They are managed through an I2C serial connection by an Arduino Mega 2560 microcontroller [7].

2 Theoretical Framework

2.1 Sensor Fusion

Multisensor data fusion is essential for an improved estimation of system states and parameters [2, 8]. The data fusion module implemented in this work aims to reduce the uncertainty of the distance measurements from a fixed or moving object that could become an obstacle during flight. Additional distance information is obtained from the LIDAR (LIDAR Lite v3) and ToF (VL53L0X) sensors, added to the standard instrumentation (Fig. 1), in order to perform enhanced automatic obstacle detection and distance-from-obstacle estimation around the flight area.

Fig. 1 Electrical scheme of the system

The LIDAR Lite v3 measures distance by timing the delay between the transmission of a near-infrared laser signal and the reception of its reflection from a target; the delay is converted into distance (meters or feet) using the speed of light [9]. The VL53L0X is a new-generation ToF laser-ranging module housed in a very compact package that, unlike conventional technologies, provides accurate distance measurements largely independent of the target reflectance, and can measure absolute distances up to 2 m [10].
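The time-of-flight principle common to both sensors converts the measured round-trip delay \(\Delta t\) into a distance (a standard relation, stated here for clarity):

$$ d = \frac{c\,\Delta t}{2} $$

where \(c\) is the speed of light and the factor 2 accounts for the signal travelling to the target and back.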

2.2 Kalman Filtering and Gelb’s Method

The algorithms used for the estimation and removal of systematic errors and noise are Kalman filtering and Gelb’s method [11] for sensor data fusion. The Kalman filter (KF) is a widely used recursive state estimator for discrete linear dynamic systems perturbed by white noise (\(w_{k}\)), which uses measurements linearly related to the state and corrupted by white Gaussian noise [11, 12]:

$$ x_{k} = {\varvec{A}}x_{k - 1} + {\varvec{B}}u_{k - 1} + w_{k - 1} $$
(1)
$$ z_{k} = {\varvec{H}}x_{k} + v_{k} $$
(2)

where \(x_{k}\) is the state vector evaluated at time \(t_{k}\), \({\varvec{A}}\) and \({\varvec{B}}\) are the state and input matrices, \(w_{k}\) is the process noise vector, with covariance matrix \({\varvec{Q}}\), \(z_{k}\) is the kth measurement vector, \({\varvec{H}}\) is the observation matrix, and \(v_{k}\) is the measurement noise, with covariance matrix \({\varvec{R}}\). Starting from an initial state estimate \(\hat{x}_{0}\) and state error covariance matrix \({\varvec{P}}_{0}\), the KF follows a prediction-correction strategy, projecting the current state forward to the a-priori estimate \(\hat{x}_{k}^{ - }\) [Eq. (1)] and then computing the a-posteriori state estimate \(\hat{x}_{k}\) from the current measurement, weighted by a gain matrix \({\varvec{K}}_{k}\) (the Kalman gain):

$$ \hat{x}_{k} = \hat{x}_{k}^{ - } + {\varvec{K}}_{k} \left( {z_{k} - {\varvec{H}}\hat{x}_{k}^{ - } } \right) $$
(3)
$$ {\varvec{K}}_{k} = {\varvec{P}}_{k}^{ - } {\varvec{H}}^{T} \left( {{\varvec{HP}}_{k}^{ - } {\varvec{H}}^{T} + {\varvec{R}}} \right)^{ - 1} ;\quad {\varvec{P}}_{k} = \left( {{\varvec{I}} - {\varvec{K}}_{k} {\varvec{H}}} \right){\varvec{P}}_{k}^{ - } $$
(4)

The estimation errors are provided by the diagonal elements of the matrix \({\varvec{P}}_{k}\).
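Equations (1)–(4) reduce to a few lines of code in the scalar case relevant here (a single distance, so \(A = H = 1\) and \(B = 0\)). The following is a minimal sketch; the obstacle distance, noise level and the values of \(q\), \(r\) are illustrative, not those of the experiments:

```python
import random

def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a constant distance: A = H = 1, B = 0."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Prediction [Eq. (1)]: static model, the state is simply carried forward
        x_prior, p_prior = x, p + q
        # Kalman gain [Eq. (4)] and correction [Eq. (3)]
        k = p_prior / (p_prior + r)
        x = x_prior + k * (z - x_prior)
        p = (1.0 - k) * p_prior
        estimates.append(x)
    return estimates, p

# Illustrative static test: obstacle at 50 cm, measurement noise sigma = 2 cm
random.seed(0)
zs = [50.0 + random.gauss(0.0, 2.0) for _ in range(480)]
est, p_final = kalman_1d(zs, q=1e-4, r=4.0, x0=zs[0], p0=4.0)
print(round(est[-1], 1), p_final)
```

The filtered estimate settles near the true distance while the error covariance \(p\) shrinks well below the raw measurement variance \(r\).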

Gelb’s method is a simple data fusion algorithm that processes the measurements to deduce a linear estimate \(\hat{x}\) of the unknown quantity (here, the distance) which, assuming random, independent and unbiased measurement errors, minimizes the mean square value of the estimation error [11]:

$$ \hat{x} = \left( {\frac{{\sigma_{{{\text{LDR}}}}^{2} }}{{\sigma_{{{\text{ToF}}}}^{2} + \sigma_{{{\text{LDR}}}}^{2} }}} \right)z_{{{\text{ToF}}}} + \left( {\frac{{\sigma_{{{\text{ToF}}}}^{2} }}{{\sigma_{{{\text{ToF}}}}^{2} + \sigma_{{{\text{LDR}}}}^{2} }}} \right)z_{{{\text{LDR}}}} $$
(5)

where the variances of the measurements of the LIDAR, \(z_{{{\text{LDR}}}}\), and ToF, \(z_{{{\text{ToF}}}}\), are \(\sigma_{{{\text{LDR}}}}^{2}\) and \(\sigma_{{{\text{ToF}}}}^{2}\) respectively. It can be shown that the minimum mean square estimation error is \(\left( {1/\sigma_{{{\text{LDR}}}}^{2} + 1/\sigma_{{{\text{ToF}}}}^{2} } \right)^{ - 1}\).
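Equation (5) is a variance-weighted average: each measurement is weighted by the variance of the *other* sensor, so the less noisy sensor dominates. A minimal sketch, with illustrative readings and variances (not the experimental values):

```python
def gelb_fuse(z_tof, z_ldr, var_tof, var_ldr):
    """Variance-weighted fusion of two measurements [Eq. (5)]."""
    w_tof = var_ldr / (var_tof + var_ldr)  # more weight to the less noisy sensor
    w_ldr = var_tof / (var_tof + var_ldr)
    x_hat = w_tof * z_tof + w_ldr * z_ldr
    # Minimum mean square estimation error: (1/var_tof + 1/var_ldr)^-1
    var_hat = 1.0 / (1.0 / var_tof + 1.0 / var_ldr)
    return x_hat, var_hat

# Illustrative case: ToF more accurate than LIDAR at short range
x_hat, var_hat = gelb_fuse(z_tof=50.2, z_ldr=51.0, var_tof=1.0, var_ldr=4.0)
print(x_hat, var_hat)  # estimate lies closer to the ToF reading
```

Note that the fused variance is always smaller than that of either sensor alone.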

3 Simulations and Results

To assess the operative characteristics of the sensors, acquisition tests were performed over the obstacle detection range, chosen to be 30–180 cm. The obstacle was moved in 5-cm steps between data acquisition sessions. Measurements were acquired for 120 s at 4 Hz (one measurement every 250 ms, 480 samples per acquisition) (see Fig. 2).

Fig. 2 Data collection of LIDAR (left) and ToF (right) during a static test, with the obstacle at 50 cm

In post-processing, the mean and variance were evaluated for each data collection, for both raw and filtered data (see Fig. 3).

Fig. 3 Mean and variance of the LIDAR and ToF measurements, calculated for raw data (left) and filtered data (right)

A comparison between the mean and variance of the measurements (Table 1) shows a bias in the LIDAR acquisitions, which was corrected by applying Kalman filtering to the raw data. ToF measurements show good accuracy at short distances (less than 120 cm) and large errors at longer distances.

Table 1 Comparison of the mean and variance of some raw (R) and Kalman-filtered (K) distances

As an alternative, after estimating the sensor variances \(\sigma_{{{\text{LDR}}}}^{2}\) and \(\sigma_{{{\text{ToF}}}}^{2}\) from static measurements, Eq. (5) was applied to obtain the optimal estimated distance. Results are shown in Fig. 4; in Fig. 5 and Table 2 the method is compared with the Kalman filtering approach.
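The procedure just described — estimate each sensor's variance from a static acquisition, then fuse sample by sample with Eq. (5) — can be sketched as follows. The data are simulated here with assumed noise levels, not the experimental records:

```python
import random
import statistics

random.seed(1)
true_d = 80.0  # cm, illustrative obstacle distance
# Simulated static acquisitions (480 samples at 4 Hz); noise sigmas are assumptions
z_tof = [true_d + random.gauss(0.0, 1.5) for _ in range(480)]
z_ldr = [true_d + random.gauss(0.0, 3.0) for _ in range(480)]

# Sensor variances estimated from the static data
var_tof = statistics.variance(z_tof)
var_ldr = statistics.variance(z_ldr)

# Eq. (5), applied sample by sample with the estimated variances
fused = [(var_ldr * t + var_tof * l) / (var_tof + var_ldr)
         for t, l in zip(z_tof, z_ldr)]

var_fused = statistics.variance(fused)
print(var_fused, var_tof, var_ldr)
```

The variance of the fused sequence falls below that of either individual sensor, consistent with the minimum-error expression given after Eq. (5).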

Fig. 4 Mean and variance of the values estimated by Gelb’s method

Fig. 5 Comparison of Gelb’s method with the Kalman filter

Table 2 Comparison between raw data, Gelb’s data fusion and Kalman-filtered data

4 Conclusion

This paper has briefly described the preliminary steps towards the implementation of a Detect-And-Avoid (DAA) subsystem onboard a UAV, exploiting low-cost, commercial off-the-shelf distance measuring sensors (LIDAR and ToF) handled by an easily programmable microcontroller (Arduino Mega 2560). Laboratory simulations and experimental results on a prototype multisensor board developed by the authors show that simple data fusion techniques (linear estimation and Kalman filtering) provide improved observability, reducing the error region, broadening the baseline of the observable (distance from an obstacle in the range 30–180 cm) and helping in the development of effective DAA approaches for commercial UAVs. In particular, the LIDAR proved more accurate at larger distances from the platform (100–180 cm and up to 4 m), whereas the ToF sensor performed well at shorter ranges (0–120 cm). As shown above, the KF-based data fusion algorithm gave better results than Gelb’s approach, at the cost of increased complexity. Nonetheless, Kalman-based data fusion allows for easy real-time data processing and propagates the current state of knowledge of the dynamic measurements, updating the estimation error during the measurement process, a property extremely useful for statistical analysis and performance monitoring. The good performance of the preliminary system confirms the feasibility and robustness of this approach for an autonomous DAA system.