1 Introduction

Humans, whether alone or in a crowd, as well as infrastructure, are potentially exposed to threats from aerial attacks. In a military scenario, this could obviously be an attack on a camp or on patrolling troops and vehicle convoys with ballistic missiles. In the civil world, which is the focus of this article, the potential threat derives from UAVs (unmanned aerial vehicles) or UASs (unmanned aerial systems). In particular, multicopters such as quadcopters, as shown in Fig. 1, are technically mature platforms that can be purchased and flown by virtually anyone. They have the capability to carry small payloads, the most popular payload at the moment being an optical camera. This payload-carrying ability has led to many legitimate applications for UASs. But what happens if it is not a camera mounted on the flying platform, but a bomb, a gun or hazardous liquids in a sprayer? Less dramatic motivations for undesirable UAS usage include the transport of, e.g. narcotics across borders or into prisons, the illegal recording of sports or musical events and, in general, the violation of privacy.

Fig. 1
figure 1

(source: DJI, 2016)

Popular quadcopter from DJI

An air surveillance system with the capability to detect unwanted UASs would clearly be valuable in many scenarios. The present article gives an overview of a feasibility study on such an air surveillance system and describes several aspects of the scenarios, targets, sensors, their data handling, and the user interface. A modular system concept will be shown that potentially allows for the detection of any UAS, independent of its classification, e.g. according to ICAO, FAA or DoD.

The urgent need for a UAS detection system and counter-UAS techniques is underlined by near-daily press reports of critical incidents. There is a growing number of reports and videos on the Internet documenting questionable usage of multicopters, and the step towards illegal usage or even terrorist attacks with drones is very small. The use of an air surveillance system is therefore not only imaginable but definitely advisable for the police, military, security units, event organisers and even individuals within the context of

  • Security at public appearances of individuals (e.g. politicians, government officials, VIPs, etc.)

  • Security of crowds (e.g. pilgrims, visitors of events, etc.)

  • Security at political meetings (e.g. summit conference, etc.)

  • Protection of critical objects and infrastructure (e.g. nuclear power plant, military camps, airports, etc.)

  • Maritime security, harbour protection, area access/area denial

  • Countering smuggling (e.g. across borders, into prisons, etc.)

  • Protection of exclusive rights (e.g. sport events, concerts, etc.) or privacy

and this list is by no means complete.

2 Feasibility Study

The Department of Reconnaissance and Security at the German Aerospace Center (DLR, Deutsches Zentrum für Luft- und Raumfahrt e.V.) conducted a research study, known by its acronym LOCASS, on how to set up a local air surveillance system for security purposes. It was a declared goal of the study to analyse and present the manifold ways to evaluate and combine different sensors, to handle their data and to transfer information to a human user or, e.g. to counter-UAS equipment. Furthermore, scenarios and targets needed to be categorized, and possible circumstances of a surveillance task needed to be addressed. Although the department is part of a radar institute, the approach to the study was chosen to be flexible and problem-oriented instead of solely technology-driven and radar-focused. The most suitable sensors should be used, and different sensors should be fused whenever complementary information can be gathered.

The required general characteristics of the potential surveillance system are real-time capability; high configurability in terms of sensors, data handling and user interface; and mobility, to operate at different locations from day to day.

The study was conducted from the viewpoint of a service-providing entity. When called upon by a customer, the service provider first needs to receive the task, define a strategy for the mission, understand the scenario, limit the scope of possible threats and set up sensors under several constraints in terms of, e.g. electromagnetic interference with police communication, weather conditions or day and night use. Finally, the acquired information needs to be provided to a user interface for human interpretation or for triggering downstream equipment. The described analyses present the first steps towards a detailed definition of an air surveillance system and the design of a demonstrator setup.

The following sections give a short overview of some topics of our study. First, the scenario is addressed, considering an object and its surrounding airspace to be observed and protected. Circumstances and constraints for a surveillance mission, such as weather conditions or the presence of humans in the area, will be discussed. Next, some potential threat targets will be named, and a first insight is given into how to distinguish between real and false targets. Afterwards, we present which sensors were analysed during our study and how we defined their characteristics for inter-comparison and cross-evaluation. The last section describes the so-called central system, which allows connecting several sensors and multiple signal processing modules. It also provides the interface to the human user and to other technical equipment.

2.1 Scenarios and Mission Constraints

As stated above, the potential user of an air surveillance system can be the police, military or other security task forces with the mission to provide security for humans and objects. In the following, the term object under protection shall serve as an umbrella expression covering all possible examples, from individual humans to complex building structures.

The scenario covers the position of the object under protection, its neighbourhood and the time span of the surveillance task. The scenario is therefore considered to be nearly constant for a mission. To set up a group of sensors with their specific fields of view and observation ranges, the location of the object under protection has to be considered. It could lie on a plain or in hilly terrain, on a steep slope, on a mountain top, in a valley or close to large bodies of water. The object itself can be freestanding or surrounded by, e.g. buildings or trees. The time span is of fundamental importance, since not every sensor can be operated at night or in all weather conditions.

The next step is to define the airspace to be observed. The required range will be correlated with customer-defined security levels, no-fly zones and the ability to interact. Our approach is to categorize ranges from 100 metres up to 10 kilometres and to evaluate sensors with respect to these ranges. In addition, the geometrical shape of the aerial zone needs to be defined. Most probably, the horizontal extent needs to be larger than the vertical one, leading to a roughly cylindrical form. Another aspect is whether the whole volume or only a thin boundary layer needs to be observed. The latter could be the case when an internal zone is defined to be clean or safe, so that any threat can only approach from outside. In an urban environment, observation of the whole aerial volume is surely required, since most UASs can depart from any window, balcony or rooftop. Figure 2 illustrates a possible choice of airspace to be observed, subdivided into three different zones. With each zone, different alarms or interactions can be associated.

Fig. 2
figure 2

(© of map: OpenStreetMap)

Example of a defined airspace to be observed, divided into zones for, e.g. different alarms or interactions
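The zoned, roughly cylindrical observation volume described above can be sketched in a few lines. All numbers here are illustrative assumptions: the study only states that observation ranges from 100 m to 10 km are categorized, that the volume tends towards a cylindrical shape, and that the volume should start above head height.

```python
import math

# Hypothetical zone radii and height limits (metres); purely illustrative.
ZONES = [("inner", 100.0), ("middle", 1_000.0), ("outer", 10_000.0)]
MIN_HEIGHT = 10.0   # ignores pedestrian and car movement below this height
MAX_HEIGHT = 500.0  # vertical extent chosen smaller than the horizontal one

def classify_zone(x, y, z):
    """Return the zone name for a detection at (x, y, z), in metres,
    relative to the object under protection, or None if outside."""
    if not (MIN_HEIGHT <= z <= MAX_HEIGHT):
        return None
    horizontal = math.hypot(x, y)
    for name, radius in ZONES:
        if horizontal <= radius:
            return name
    return None

print(classify_zone(50.0, 50.0, 80.0))  # → inner
```

A detection's zone can then be mapped to the different alarms or interactions associated with each zone.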

Once the scenario is well defined, an analysis of circumstances interfering with it is indispensable. We addressed several categories, a selection of which is presented in the following. Weather and time of day have to be considered, since most sensors perform differently in, e.g. sunny and rainy conditions or during day and night. There will usually be moving objects in the observation area: birds in flight, moving cars or pedestrians in the streets. Another category is technical interference between sensors and other equipment. Microphones will struggle in a noisy environment, active radar might disturb wireless communications, and optical cameras might be blinded in regions of floodlight in a stadium. Technical interference therefore needs to be understood both ways: on the one hand, the air surveillance system can be the source of disturbance; on the other hand, it can itself be disturbed or intentionally jammed. The aspect of jamming will be addressed in more detail below.

The number of humans in the surveillance zone can range from only a few within a restricted area to many, e.g. in shopping areas, up to dense crowds such as the audience in front of a stage. Since these humans are, to a certain degree, constantly moving targets, raising an alarm whenever an object moves would become cumbersome. We therefore propose to define the surveillance volume starting at some distance above head height, such that pedestrian movement or car traffic is not detected. This obviously leads to further requirements on the different sensors and on the signal processing.

Last but not least, religious and ethical constraints must be considered. Technically, the best position for an observation sensor might, for example, be on top of a church tower, but a permission to actually place it there might be denied.

2.2 Targets

A target is a flying platform, together with its potential threat, inside the defined aerial observation zone. A distinction must be made between true and false targets, since not every flying object is dangerous.

In classical remote sensing, there are four levels of information: detection, localization and tracking, classification, and identification. Detection is the simple awareness of an object being in the observation area, whereas for localization and tracking, a more or less exact position of the intruder must be determined. The next step is classification, which in terms of flying platforms could mean recognizing a multicopter rather than a glider. Identification is the most challenging level, meaning that the exact model (e.g. hexacopter model A from manufacturer B) is determined by the system.

It would be very ambitious to try to identify each target, i.e. to reliably distinguish between true and false targets. In practice, the possible targets are manifold. There are multicopters, model airplanes, small zeppelins, manned gliders and motorized airplanes, parachutes and paragliders, which might all be classified as real targets, since they have the ability to carry a dangerous payload. The group of false targets comprises birds and other flying animals, singly or in swarms, balloons, Chinese lanterns, leaves in the wind and many more. To gather maximum information and reach at least the state of target classification, several sensors with complementary information should be combined. Each sensor will deliver a specific detected signature to the system, and data fusion will integrate all information for maximum situational awareness. To help distinguish between false and true targets, we consider the flight behaviour of the as yet unknown object. A ballistic flight, i.e. a passive flight with no steering, will be easy to recognize, since the flight track is well described mathematically and is therefore predictable. If the flight is not ballistic, the flight dynamics and the speed need to be considered: Are there any abrupt changes in flight direction? Are forward and backward movements detectable? More information might be gathered by analysing the movement trend, i.e. whether it is downwards, upwards or at constant height. In addition to the different sensors that will be introduced in the next section, we propose to include a wind sensor in the surveillance system. Analysing the correlation between the wind and the movement of a target will strongly help to identify passive false targets.
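The proposed wind-correlation check can be sketched as follows. A passive (false) target such as a drifting balloon keeps its velocity aligned with the wind; a steered platform generally does not. The alignment threshold and the track/wind sample format are illustrative assumptions, not values from the study.

```python
import math

def mean_alignment(velocities, winds):
    """Mean cosine similarity between target velocity and wind vectors,
    sample by sample, over a track window."""
    total, count = 0.0, 0
    for (vx, vy), (wx, wy) in zip(velocities, winds):
        nv, nw = math.hypot(vx, vy), math.hypot(wx, wy)
        if nv == 0.0 or nw == 0.0:
            continue  # no meaningful direction in this sample
        total += (vx * wx + vy * wy) / (nv * nw)
        count += 1
    return total / count if count else 0.0

def is_probably_passive(velocities, winds, threshold=0.95):
    """Flag a track whose motion is almost perfectly wind-aligned."""
    return mean_alignment(velocities, winds) >= threshold

# A balloon drifting with a steady easterly wind:
drift = [(2.0, 0.1)] * 10
wind = [(2.1, 0.0)] * 10
print(is_probably_passive(drift, wind))  # → True
```

A multicopter flying against or across the wind would yield a low alignment and remain a candidate true target for further classification.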

2.3 Sensors

In our study, we analysed several sensors covering a wide range of the electromagnetic and acoustic spectra. Acoustic waves are mechanical waves in air, grouped with respect to their frequency into infrasound, audible sound, ultrasound and hypersound. Infrasound, with frequencies below 16 Hz, originates from ground movements, e.g. heavy traffic or earthquakes. The frequency range of audible sound, at least for humans, extends from 16 Hz to 20 kHz. Above that, it is called ultrasound, and beyond 1 GHz, hypersound. We decided that only the audible and the lower ultrasound spectrum are of interest to an air surveillance system, since these may be emitted by technical flying platforms. Acoustic reflective measurements, as used, e.g. by bats, were deemed too short-ranged to be considered for the present remote sensing task.

The other big sensor class covers sensors operating in parts of the electromagnetic spectrum. This ranges from low-frequency waves at some 10 kHz up to several hundred GHz and also covers infrared and visible light. UAS detection with electromagnetic waves below 10 GHz will have only negligible application, which is why we concentrate on systems from around 10 GHz (radar X band) up to the spectrum of visible light. Further analysis of the electromagnetic spectrum will of course show that, in practice, the usable spectrum is not continuous but fragmented.

As already mentioned above, a distinction must be made between emissions and reflections of signals. Emissions are the signals that flying objects generate by themselves, e.g. the operational noise of motors or the data link between a UAV and its ground station. Reflections, on the other hand, are signals transmitted by a sensor and reflected back by the UAV.

In our study, we analysed the following sensors:

  • optical camera in the visible light spectrum

  • near-infrared camera

  • mid-infrared camera

  • spectrum analyser

  • radar

  • radiometer

  • acoustic sensors, microphones

  • lidar, laser scanner.

The optical and infrared cameras obviously provide two-dimensional images of a scene. The spectrum analyser is used to scan the electromagnetic spectrum to detect any remote control signals between an operator and the UAV. A radar (radio detection and ranging) sensor consists of a transmitter sending electromagnetic pulses and a receiver collecting the reflected pulses in the microwave domain. In its simplest setup, it can provide the position of an object, whereas in an advanced setup with subsequent data processing, radar can also provide a two-dimensional image of a scene. A radiometer is a kind of passive radar, detecting electromagnetic waves naturally emitted or reflected by any object in the scene. Lidar (light detection and ranging) is the equivalent of radar, but transmits narrow laser light pulses.
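In their simplest form, both radar and lidar derive the distance to a reflecting object from the round-trip time of a transmitted pulse, R = c·Δt/2. The following minimal sketch illustrates this; the echo delay in the example is an assumed value.

```python
# Range from a pulse echo: the pulse travels to the target and back,
# so the one-way distance is half of speed-of-light times delay.
C = 299_792_458.0  # speed of light in m/s

def range_from_echo(dt_seconds):
    """Distance in metres for a pulse echo received after dt_seconds."""
    return C * dt_seconds / 2.0

print(round(range_from_echo(6.67e-6)))  # → 1000 (metres)
```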

For each sensor, the characteristics were collected and grouped into the categories description and mode of operation, type and degree of information, detection range, dependence on weather and daytime, vulnerability to perturbation, technology maturity (TM) and availability, as well as a possible usage in an air surveillance system. Tables 1 and 2 provide the concise characteristics of an optical camera and a radar sensor, respectively.

Table 1 Characteristics of an optical camera
Table 2 Characteristics of a radar system

Any single sensor by itself will contribute information to the system. Some data can be fused, e.g. simply to confirm a vague detection or to extract additional information leading to a classification. Since the different sensors have their specific characteristics, there will be certain rules and hierarchical levels governing their use. One sensor could operate in an overview mode that, in case of an event, triggers other, more focused or zoomed sensors, starting data fusion autonomously and providing maximum information to the user. One goal in the development of a high-performance surveillance system would be a drone, e.g. carrying an optical camera, that is automatically piloted towards an intruder with a steady flow of information on the intruder's current position, delivering close-up images to the user interface.
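A minimal sketch of such a sensor hierarchy: an overview sensor scans permanently, and on a detection event the supervising logic cues the focused sensors towards the reported position and records the event for fusion. All class, method and sensor names here are illustrative assumptions, not the study's actual architecture.

```python
class FocusedSensor:
    """A zoomable sensor that can be steered towards a reported position."""
    def __init__(self, name):
        self.name = name
        self.cued_to = None  # position this sensor is currently steered at

    def cue(self, position):
        self.cued_to = position

class OverviewSupervisor:
    """Reacts to overview-mode detections by cueing all focused sensors."""
    def __init__(self, focused_sensors):
        self.focused_sensors = focused_sensors
        self.events = []

    def on_overview_detection(self, position):
        for sensor in self.focused_sensors:
            sensor.cue(position)
        self.events.append(position)  # collected for subsequent data fusion

cameras = [FocusedSensor("optical"), FocusedSensor("mid-infrared")]
supervisor = OverviewSupervisor(cameras)
supervisor.on_overview_detection((120.0, -40.0, 85.0))
print(cameras[0].cued_to)  # → (120.0, -40.0, 85.0)
```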

In addition to the sensors listed above, we identified the need for environmental sensors to monitor temperature, wind and brightness. These additional sensors are used to calibrate the detection sensors and to improve the recognition of disturbances. In particular, the wind sensor, as already stated above, will help to distinguish between true and false targets, since objects flying perfectly aligned with the wind are most likely false targets.

2.4 Central System

The main tasks of the central system are the gathering of data and information from the different sensors, the provision of information to human users and to downstream equipment, data storage, and archiving. It is designed as a kind of networking core that requires no detailed knowledge about sensors or data processing algorithms. A defined networking protocol and interfaces allow the attachment of any kind of sensor or data processing module, as well as different kinds of user terminals running their specific software. The choice of sensors and data processing involved in a mission provides the flexibility and capability to detect any UAS, as stated above.

Figure 3 illustrates a potential operation scheme of the central system, surrounded by several attached subsystems. In terms of subsystem supervision, the central system handles the remote control of the sensors and their specific signal processing. Since it shall be capable of having almost any kind of sensor connected to it, a sensor interface must be established, defining the kind and format of data to be transferred. A first version of an interface definition was derived within our study. Since each sensor requires very specific signal processing to obtain information on detection, localisation or even classification, it is foreseen to outsource this signal processing from the central system. Any post-processing steps and data fusion attempts are also foreseen to be connected as modules.

Fig. 3
figure 3

Scheme of the central system with data and control connections of sensors, signal processing modules, user interface, technical interface, and archive
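To give a feel for what a sensor-to-central-system interface could look like, the following sketch serializes a single detection report. The study only states that the interface defines the kind and format of transferred data; every field name below is an assumption for illustration, not the interface definition derived in the study.

```python
import json
import time

def detection_message(sensor_id, position, confidence, target_class=None):
    """Serialize one detection report as a JSON string (hypothetical format)."""
    return json.dumps({
        "sensor_id": sensor_id,
        "timestamp": time.time(),      # UTC epoch seconds
        "position": {"x": position[0], "y": position[1], "z": position[2]},
        "confidence": confidence,      # 0.0 .. 1.0
        "target_class": target_class,  # None until a classification exists
    })

msg = detection_message("radar-01", (120.0, -40.0, 85.0), 0.8)
print(json.loads(msg)["sensor_id"])  # → radar-01
```

A common, sensor-agnostic message format of this kind is what allows the central system to remain a pure networking core without sensor-specific knowledge.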

The central system will also provide all necessary connections to users and downstream equipment. It will be the overall interface between humans and machines, both in terms of data transfer from the sensors to the user and in terms of control and steering of the sensors and algorithms by the user. A connected archive is foreseen to store all data for later recall. This could be necessary during the surveillance task itself as well as afterwards, to provide evidence to the police or other forces. Figure 4 shows an example of what could be displayed on a user's screen: target tracking paths and predictions displayed on top of a map, as well as security zones. Any kind of additional information, such as target details, archived imagery, sensors available near a target, or the positions of, e.g. police officers, can be included in the user's display.

Fig. 4
figure 4

(© of map: OpenStreetMap)

Example of a map with security zones and detected targets, displayed on a user terminal

3 Next Steps

Our study has addressed many topics of an air surveillance system for security purposes. Several mission requirements were discussed, real and false targets were categorized, and numerous different sensors were introduced and reviewed. The central system and its interfaces have already been designed in much more detail than described here. Our study showed the manifold ways in which different sensors can contribute to solving the problem of detecting, tracking and classifying airborne intruders. We also addressed many aspects that concern an air surveillance system provided as a service to a customer.

Our next steps will be the development of a first demonstrator for the central system. It will allow connecting different sensors and signal processing modules for testing and demonstration purposes. The sensors analysed above will be evaluated in detail to set up the most robust system for various missions. To estimate the quality of the system in advance of its real application, we will derive a performance analysis concept and tool, including test cases and reference scenarios. Since we are microwave and radar specialists, we will face the technical challenges especially of radar. It is foreseen to develop a new radar demonstrator with adapted data handling. Several institutes at DLR have committed to participating in further studies and in the installation of demonstrators on the topic of air surveillance and counter-UAS, such that trendsetting results can be expected soon.