Abstract
The paper considers an approach to an autolanding system for multirotor unmanned aerial vehicles based on computer vision and visual markers instead of global positioning and radio navigation systems. Different architectures of autolanding infrastructure are considered, and requirements for the key components of autolanding systems are formulated.
1 Introduction
The rapid development of autonomous robotic systems and the demand for unmanned aircraft in various fields, including military applications, have led to an accelerated pace of development and deployment of small unmanned aerial vehicles (UAVs) and remotely piloted aircraft. The most actively developed aircraft are multirotor flying platforms (multicopters) [1, 2]. The common features of all vehicles of this class are their design and flight principle. The central part of a multicopter houses the control unit, battery and cargo holder. Micro motors with rotors are mounted on beams radiating from the centre, forming the star-shaped layout of the copter. To compensate for the twisting moment, the rotors rotate in opposite directions. Nevertheless, such a symmetrical layout assumes front and rear parts with respect to the direction of flight. During flight the multicopter maintains a horizontal position relative to the ground, moves sideways, changes altitude and is able to hover. With additional equipment, automatic and semi-automatic flights are possible. To perform a movement, the multicopter must be taken out of balance by throttling combinations of rotors. As a result, the multicopter tilts and starts to fly in the required direction. To rotate the multicopter clockwise, the front and rear rotors are spun up while the left and right rotors are slowed down; the counterclockwise turn is performed analogously [3].
Multirotor UAVs have several advantages over other unmanned and manned aircraft. Unlike a helicopter, a multirotor UAV is less expensive to maintain, more stable in the air and easier to control, which results in a higher ability to survey small ground objects with high spatial resolution [4].
In general, the multirotor UAV is a universal, efficient and simple type of unmanned vehicle that can gain an advantage over the traditional helicopter design on the market and become a useful instrument for mass media, the photo and video industry, etc. The greatest interest in UAVs is shown by government agencies and services whose functions are related to the protection, control and monitoring of facilities [5].
2 Computer Vision Aided Autolanding System
The key element of an autolanding system is accurate spatial positioning of the aircraft. A number of subsystems can be used for this task: internal inertial navigation systems, global positioning systems, and external guidance (including computer vision (CV) systems) [6, 7].
To succeed in the autolanding task, the following requirements must be met:
- unambiguous spatial positioning of the aircraft, including orientation, heading angle and altitude above the landing site;
- functioning in a wide range of weather and environmental conditions;
- robustness of marker localization methods during spatial maneuvers of the UAV;
- minimal time to obtain the aircraft position parameters.
This paper proposes an approach to an autolanding system for multirotor or helicopter UAVs based on computer vision, without the use of global positioning or radio navigation systems.
Global positioning systems such as GPS/GLONASS and inertial navigation systems are efficient during the flight phase but do not provide the accuracy needed in the landing phase, making the problem unsolvable at small heights and over small areas [8, 9].
The typical accuracy of modern GPS receivers in the horizontal plane reaches 6–8 m, given good satellite visibility and correction algorithms. Over the territory of the USA, Canada and a limited number of other countries, accuracy can be increased to 1–2 m using the differential mode. Thus the use of GPS over other territories appears to be insufficient for the landing task.
The CV-based automated landing system [1, 10, 11] consists of the following components (Fig. 1):
- a camera or camera array for locating the guidance markers;
- a system of markers suitable for efficient capture by a digital camera;
- an image processing unit (CV-core) capable of obtaining, in real time, the UAV's coordinates relative to the landing site by processing digital camera image sequences;
- a command unit computing the landing command sequence for the UAV's control core according to information from the CV-core;
- a data transmission channel.
The following options for deploying the system components are available:
- the cameras are located above the landing site, which makes it possible to observe both the landing zone and the UAV; the CV-core and command unit are located on the ground (Fig. 2). As an application that utilizes both markers and computer vision, it is worth mentioning the multicopter collective control project developed by the Institute for Dynamic Systems and Control (IDSC) at ETH Zurich [12]. A computer vision system with two or four cameras is used to automatically maintain the given direction of flight. This system achieves high accuracy but needs a preliminarily prepared workspace, because the markers are located on the UAV and the computer vision system is stationary outside the UAV [12, 13];
- the cameras are located on the UAV and observe the markers in the landing zone (Fig. 3);
- the cameras are located on the landing site, pointing upward and observing the markers on the UAV (Fig. 4).
Placing the cameras on the UAV requires either an onboard CV processing module or a low-latency broadband communication channel to transmit images to a ground CV processing unit. Besides, it requires additional measures to decouple the camera from the vibration of the UAV's rotors.
Thus, for further experiments on CV marker detection and UAV altitude and position estimation, the configuration with a camera on the landing site and markers on the UAV was chosen.
From geometric optics it is known that

\( H = \frac{hf}{d}, \)

where h – linear object size, H – size of the object's projection on the focal plane, f – focal length, d – distance from the object to the lens.
In the case of a digital camera, the size of the object's projection on the camera sensor is determined as follows:

\( p = H\frac{S}{L}, \)

where p – object projection size in pixels, L – linear size of the camera sensor, S – photosensor size in pixels.
Thus, given the camera focal length and the linear object size, the distance from the object to the camera sensor can be expressed as

\( d = \frac{hfR}{p}, \)

where \( R = S/L \) (pixels/mm) – pixel density of the camera sensor.
An experimental evaluation of CV altitude measurement for the case of a single camera on the landing site and 4 LED markers on the UAV is shown in Fig. 5.
The minimal operational altitude is determined by the requirement that the UAV's projection not exceed the size of the sensor, which results in

\( d_{min} = \frac{hfR}{\min (S_{H}, S_{W})}, \)

where \( S_{H}, S_{W} \) – camera sensor horizontal and vertical resolution in pixels, respectively.
The maximal operational altitude is determined by the minimal discernible distance between UAV markers \( p_{min} \):

\( d_{max} = \frac{hfR}{p_{min}}. \)
In practice it is difficult to obtain steady results with \( p_{min} < 5 \) pixels.
For example, given a camera with 7 mm focal length, 640 × 480 pixel resolution and 100 pixel/mm sensor density, and a UAV size of 55 mm, the following operational parameters are obtained: \( d_{min} = 87 \) mm, \( d_{max} \approx 7.7 \) m.
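The distance expression above is easy to check numerically. The following sketch (the function name is illustrative) reproduces the worked example; the simple min-dimension bound gives a \( d_{min} \) of about 80 mm, slightly below the 87 mm reported, which presumably includes a margin:

```python
def distance_mm(h_mm, f_mm, r_px_per_mm, p_px):
    """Distance from object to lens, d = h*f*R/p (pinhole model, d >> f)."""
    return h_mm * f_mm * r_px_per_mm / p_px

# Parameters from the example: UAV size h = 55 mm, f = 7 mm, R = 100 px/mm.
h, f, R = 55.0, 7.0, 100.0

# Maximal altitude: markers still separated by p_min = 5 px in the image.
d_max = distance_mm(h, f, R, 5)    # 7700 mm = 7.7 m

# Minimal altitude: the UAV's projection must fit the 640x480 sensor,
# so the limiting dimension is 480 px.
d_min = distance_mm(h, f, R, 480)  # ~80 mm with this simple bound
```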
3 Marker Usage
All camera positioning options demand the usage of special markers to increase the efficiency of the CV-core algorithms [1, 14].
Landing site markers can be implemented as follows:
- a directional pattern of high-contrast images suitable for recognition by the CV-core;
- a remotely controlled pattern of color or IR dynamic lights.
UAV markers can be implemented as follows:
- a pattern of color or IR dynamic lights controlled by the UAV;
- a remotely controlled pattern of color or IR dynamic lights;
- a pattern of corner reflectors;
- a high-contrast image (pattern), including the fuselage contour itself.
Experiments with color LED localization (Fig. 6) revealed applicability issues with this marker type for the following reasons: the limited dynamic range of the camera sensor poses the problem of choosing an optimal pair of exposure and LED brightness settings to obtain a correct color representation of the marker in the digital image. The other reason is uncontrolled environmental lighting, which changes the color of the markers, interfering with correct spatial localization and introducing false marker detections. This leads to a more complex software approach [15] which adapts to environmental conditions by tuning the CV algorithms, marker colors and camera settings.
In the course of the experiments, the following features were revealed.
The usage of IR markers and cameras is one of the traditional methods of spatial object location [16, 17]. Its advantage is resistance to varying lighting conditions and background changes, but problems arise when the camera faces the sun, which gives strong IR illumination. Besides, it requires additional means to identify individual markers for UAV heading detection.
Color markers (including RGB LEDs) are mainly applicable in controlled (ideal) lighting conditions; uncontrolled environmental effects (fog, dust, colored lighting) make marker recognition unreliable and automatic landing impossible. The usage of controlled RGB lights neutralizes environmental effects to some extent but requires a wireless communication channel with the UAV.
High-contrast geometric marker patterns demand good visibility and/or lighting conditions.
The conducted studies showed that the most universal and efficient markers are dynamic light markers controlled by the landing system computer via a wireless communication channel. In this case the computer turns on the UAV's markers one at a time and waits for their appearance in the images obtained from the cameras. This simplifies the procedure of marker detection and localization, which in turn makes it possible to use the timing factor in conjunction with prior knowledge of the marker's state, allowing simple CV detection methods such as the difference image. Besides, it does not require markers with different properties to identify their position on the UAV, because it is sufficient to light only one specific marker at a time to identify it.
Nevertheless, the controlled light marker mode requires more images to obtain the UAV's spatial location, due to the sequential switching of all markers during detection. Each marker demands a minimum of one image to detect its ON/OFF state. To overcome this problem, high frame rate cameras are required.
Controlled light markers are preferable whenever it is possible to organize a communication channel between the UAV and the automatic landing system.
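The difference-image detection described above can be sketched as follows (a minimal illustration, assuming grayscale frames and a single commanded marker; the function name and threshold value are illustrative):

```python
import numpy as np

def locate_marker(frame_off, frame_on, threshold=40):
    """Locate one controlled LED marker via a difference image.

    frame_off/frame_on: grayscale frames (2-D uint8 arrays) captured with
    the marker commanded OFF and ON. Returns the (row, col) centroid of
    the changed pixels, or None if the marker did not appear.
    """
    # Signed arithmetic avoids uint8 wrap-around in the subtraction.
    diff = np.abs(frame_on.astype(np.int16) - frame_off.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())
```

Scanning the markers one at a time, the landing computer calls such a routine once per marker, so each marker is identified purely by the time slot in which it lights up.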
4 Conclusion
The paper proposes an approach to the development of an automatic landing system for multirotor UAVs based on computer vision instead of global positioning systems; a functional diagram of the system is developed.
It is shown that, regardless of camera position, the usage of markers simplifies the location of the key points of the UAV for the CV system and thus facilitates obtaining the spatial position of the aircraft.
The calculation of operational altitude limits is given for effective capture of the UAV markers by the CV system, resulting in a successful autolanding procedure.
The various options for marker placement and types were analyzed, and the following recommendations were developed:
Remote-controlled light markers are always preferable when it is possible to organize a communication channel between the UAV and the automatic landing system. They are the most universal and efficient option: sequential switching simplifies marker detection and localization, allows simple CV detection methods such as the difference image, and does not require markers with different properties for identification.
Color markers (including RGB LEDs) are mainly applicable in controlled (ideal) lighting conditions.
High-contrast geometric marker patterns demand good visibility and/or lighting conditions.
References
Aksenov, A.Y., Kuleshov, S.V., Zaytseva, A.A.: An application of computer vision systems to solve the problem of unmanned aerial vehicle control. Transp. Telecommun. 15(3), 209–214 (2014)
Altug, E., Ostrowski, J.P., Taylor, C.J.: Control of a quadrotor helicopter using dual camera visual feedback. Int. J. Rob. Res. 24(5), 329–341 (2005)
Schmid, K.: View planning for multi-view stereo 3D reconstruction using an autonomous multicopter. J. Intell. Rob. Syst. 65(1–4), 309–323 (2012)
Barbasov, V.K.: Multirotor unmanned aerial vehicles and their capabilities for using in the field of earth remote sensing. Ingenernye izyskaniya 10, 38–42 (2012). (in Russian)
Zinchenko, O.N.: Unmanned aerial vehicles: the use of aerial photography in order to map. P.1. Racurs, Moscow, 12 p. (2012) (in Russian)
Saripalli, S., Montgomery, J.F., Sukhatme, G.S.: Visually guided landing of an unmanned aerial vehicle. IEEE Trans. Rob. Autom. 19(3), 371–380 (2003)
Garcia Carrillo, L.R., Dzul Lopez, A.E., Lozano, R.: Combining stereo vision and inertial navigation system for a quad-rotor UAV. J. Intell. Rob. Syst. 65, 373 (2012). doi:10.1007/s10846-011-9571-7
Cesetti, A., Frontoni, E., Mancini, A., Zingaretti, P., Longhi, S.: A vision-based guidance system for UAV navigation and safe landing using natural landmarks. In: 2nd International Symposium on UAVs, Reno, Nevada, USA, pp. 233–257, 8–10 June 2009
Corke, P.: An inertial and visual sensing system for a small autonomous helicopter. J. Rob. Syst. 21(2), 43–51 (2004)
Cesetti, A., Frontoni, E., Mancini, A.: A visual global positioning system for unmanned aerial vehicles used in photogrammetric applications. J. Intell. Rob. Syst. 61, 157 (2011). doi:10.1007/s10846-010-9489-5
Levin, A., Szeliski, R.: Visual odometry and map correlation. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, D.C., USA (2004)
ETH IDSC. Flying Machine Arena, Zurich (2014). http://www.idsc.ethz.ch
Ritz, R., Müller, M.W., Hehn, M., D’Andrea, R.: Cooperative quadrocopter ball throwing and catching. In: Proceedings of Intelligent Robots and Systems (IROS), IEEE/RSJ International Conference, Vilamoura, October 2012, pp. 4972–4978. IEEE (2012)
Open Computer Vision. http://sourceforge.net/projects/opencvlibrary/. Accessed May 2017
Kuleshov, S.V., Yusupov, R.M.: Is softwarization the way to import substitution? SPIIRAS Proc. 46(3), 5–13 (2016). doi:10.15622/sp.46.1. (in Russian)
Kuleshov, S.V., Zaytseva, A.A.: The selection and localization of semantic frames. Inf. J. Inf.-Measur. Control Syst. 10(6), 88–90 (2008). (in Russian)
Kuleshov, S.V., Zaytseva, A.A.: Object localization of semantic blocks on bitmap images. SPIIRAS Proc. 7, 41–47 (2008). (in Russian)
© 2017 Springer International Publishing AG
Aksenov, A.Y., Kuleshov, S.V., Zaytseva, A.A. (2017). An Application of Computer Vision Systems to Unmanned Aerial Vehicle Autolanding. In: Ronzhin, A., Rigoll, G., Meshcheryakov, R. (eds.) Interactive Collaborative Robotics. ICR 2017. Lecture Notes in Computer Science, vol. 10459. Springer, Cham. https://doi.org/10.1007/978-3-319-66471-2_12