
1 Introduction

Historically, optimization methods combined with frontier technology have been used to save lives in rescue situations. Today, new technology and search algorithms exist that can optimize rescue operations utilizing the Rapid Alert Sensor for Enhanced Night vision (RASEN). We introduce biologically inspired algorithms combined with fusion night vision technology that can rapidly converge on optimal paths for discovering disaster survivors and rapidly identifying signs of life in survivors trapped under rubble. Networked data visualization is provided to rescue teams based upon swarm intelligence sensing results so that appropriate relief supplies can be optimally deployed by convoys to survivors within critical time and resource constraints (e.g. people, cost, effort, power).

Many countries have rescue strategies in development for disasters such as fires, earthquakes, tornadoes, flooding, hurricanes, and other catastrophes. In 2016, the world suffered the highest natural disaster losses in 4 years, with losses caused by disasters worldwide reaching $175 billion [1]. Search and rescue (SAR) missions are the first response for locating and providing relief to people who are in serious and/or imminent danger. Search and rescue teams and related support organizations take action to search for and rescue victims across varying incident environments and locations. During search and rescue, lack of visibility, especially at night, has been considered one of the major factors affecting rescue time and, therefore, rescue mission success. Poor night visibility and diverse weather conditions also make searching, detecting, and rescuing more difficult and sometimes even impossible if survivors are hidden behind obstacles. Furthermore, poor visibility is also a common cause of roadway accidents, given that vision provides over 90% of the information input used to drive [2]. In fact, it has been reported that the risk of an accident at night is almost four times greater than during the day [3]. When driving at night, our eyes can see in limited light with the help of headlights and road lighting; however, our vision is weaker and blurrier at night, making it harder to avoid moving objects that suddenly appear.

Recent years have seen significant advancement in the fields of mobile, sensing, communications, and embedded technologies, together with a reduction in the cost of hardware and electronic equipment. This has afforded new opportunities for extending the range of intelligent night vision capabilities and for searching for and detecting pedestrians, vehicles, obstacles, and victims at night and under low-light conditions.

Herein, intelligent physical systems are defined as machines and systems for night vision that are capable of performing a series of intelligent operations based upon sensory information from cameras, LIDAR, radar, and infrared sensors in complex and diverse Big Data analytic environments. These intelligent machines can be used for various applications, including power line inspection, automotive, construction, precision agriculture, and search and rescue, which is the focus of this chapter. Each application requires a different level of visibility. Unlike traditional systems, which have a single purpose or limited capabilities and require human intervention during missions, intelligent physical systems that include night vision combine computing, sensing, communication, and actuation in order to tailor operational behavior to the particular operational context collected. Figure 1 depicts six sample intelligent physical systems for potential use during night or low-light vision conditions.

Fig. 1 Sample intelligent physical systems. (a) ENVG II [4]. (b) Traffic control (FLIR) [5]. (c) Automotive [6]. (d) Precision agriculture (SAGA) [7]. (e) Firefighting (C-Thru) [8]. (f) Security (ULIS) [9]

Advantages of night vision based intelligent physical systems are their ability to sense, adapt and act upon changes in their environments. Becoming more aware of the detailed operational context is one important requirement of night vision based intelligent physical systems. As every domain application is different, it is difficult to provide a single system or technique which provides a solution for all specialized needs and applications. Therefore, our motivation is to provide an overview of night vision based intelligent machine systems, and related challenges to key technologies (e.g. Big Data, Swarm, and Autonomy) in order to help guide readers interested in intelligent physical systems for search and rescue.

2 Literature Survey

Efficient communication and processing methods are of paramount importance in the context of search and rescue due to the massive volume of collected data. Consequently, in order to enable search-and-rescue applications, we have to implement efficient technologies including wireless networks, communication methodologies, and data processing methods. Among them, Big Data (also referred to as “big data”), artificial intelligence, and swarm intelligence offer important advantages for real-time sensing and large-volume data gathering through search-and-rescue sensors and environments. Before elaborating further on the specific technologies fitting the search and rescue scenario, we outline the unique features of rescue drones, review challenges, and discuss potential benefits of rescue drones in supporting search-and-rescue applications.

2.1 Rescue Drones

Drones, commonly known as Unmanned Aerial Vehicles (UAVs), are small aircraft which operate automatically without human pilots. They can act as human eyes, easily reach areas that are too difficult or dangerous for human beings to reach, and collect images through aerial photography [12]. Compared to skilled human rescuers (e.g., police helicopters, CareFlite) and ground-based rescue robots, the use of UAVs in emergency response and rescue has been emerging as a cost-effective and portable complement for conducting remote sensing, surveying accident scenes, and enabling fast rescue response and operations, as depicted in Fig. 2. A drone is typically equipped with a photographic measurement system, including, but not limited to, video cameras, thermal or infrared cameras, airborne LiDAR (Light Detection and Ranging) [13], GPS, and other sensors (Fig. 3). The thermal or infrared cameras can be particularly useful for detecting biological organisms such as animals and human victims and for inspecting inaccessible buildings, areas (e.g. Fukushima), and electric power lines. Airborne LiDAR can operate day and night and is generally used to create fast and accurate environmental information and models.

Fig. 2 Rescue scenario with drones [10]

Fig. 3 Flying unit: Arducopter [11]

Drones are ideal for searching over vast areas that require Big Data analytics; however, drones are often limited by factors such as flight time and payload capacity. Many popular drones on the market must follow preprogrammed routes over a region and can only stay airborne for a limited period of time. These limitations have spurred research on drone-aided rescue, including path planning [14, 15], aerial image fusion [12, 16,17,18,19], and drone swarms [20, 21].

Early research focused on route path planning problems in SAR, motivated by minimizing the time from initial search to rescue, which can range from hours and days to even months after the disaster. Search efficiency affects the overall outcome of SAR, so the time immediately following the event requires a fast response in order to locate survivors in time. Path planning is generally used to find a collision-free flight path and to cover the maximum area in adverse environments, in the presence of static and dynamic obstacles and under various weather conditions, with minimal user intervention. The problem is not simply an extension or variation of UAV path planning aiming to find a feasible path between two points [22, 23]. For example, Lin and Goodrich [14] developed a complete-coverage method, a local hill-climbing scheme, and evolutionary algorithms, defining the problem as a discretized combinatorial optimization problem with respect to accumulated probability in the airspace. To reduce the complexity of the path planning problem, the study [15] divided the terrain of the search area into small search areas, each of which was assigned to an individual drone. Each drone initializes its static path planning using a Dijkstra algorithm and uses a Virtual Potential Function algorithm for dynamic path planning with a decentralized control mechanism.
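
To make the static planning step concrete, the following minimal sketch runs Dijkstra over a unit-cost grid of search cells with marked obstacles. It is an illustration under our own simplifying assumptions, not the implementation of [15]:

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """grid[r][c] == 1 marks an obstacle; returns the list of cells on a shortest path."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    parent = {}
    heap = [(0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:                      # reconstruct the path back to start
            path = [cell]
            while cell != start:
                cell = parent[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue                          # stale heap entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1                    # unit cost per grid move
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    parent[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None

# A 3x3 search area with an obstacle wall; the planned route goes around it.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(dijkstra_grid(grid, (0, 0), (2, 0)))
```

In a deployment like the one described in [15], such a static plan would only seed each drone's route; a dynamic scheme such as the Virtual Potential Function step would then handle moving obstacles.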

Aerial images, infrared images, and sensing data captured by drones give rescue officers and teams more detailed situational awareness and a more comprehensive damage assessment. Dong et al. [17] presented a fast stereo aerial image construction method with a synchronized camera-GPS imaging system. High-precision GPS is used to pre-align and stitch serial images, and the stereo images are then synthesized from pair-wise stitched images. Morse et al. [18] created coverage quality maps by combining drone-captured video and telemetry with terrain models. Facial recognition is another task of great interest. Hsu and Chen [12] compared the use of aerial images in face recognition so as to identify specific individuals within a crowd. The focus of the study [19] lies on real-time vision-based attitude and altitude estimation in low-light or dark environments by means of a combination of a camera and a laser projector.
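
As a rough illustration of how serial aerial frames can be combined into a single mosaic, the sketch below feeds a GPS-ordered image sequence to OpenCV's high-level Stitcher. The file names are hypothetical, and this is only an approximation, not the synchronized camera-GPS stereo pipeline of Dong et al. [17]:

```python
import cv2

# Frames assumed already sorted by GPS timestamp (hypothetical file names).
paths = ["frame_000.jpg", "frame_001.jpg", "frame_002.jpg"]
images = [cv2.imread(p) for p in paths]

stitcher = cv2.Stitcher_create()       # default panorama mode
status, mosaic = stitcher.stitch(images)
if status == 0:                        # 0 corresponds to Stitcher_OK
    cv2.imwrite("mosaic.jpg", mosaic)
else:
    print("stitching failed, status", status)
```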

Swarm behavior of drones is characterized by coordinated functions of multiple drones, such as collective decision making, adaptive formation flying, and self-healing. Drones need to communicate with each other to achieve coordination. Burkle et al. [20] refined the infrastructure of drone systems by introducing a ground central control station as a data integration hub. Drones can not only communicate with each other but also exchange information with the ground station to improve the optimization of autonomous navigation. Gharibi et al. [21] investigated layered network control architectures for coordinating the efficient use of controlled airspace and providing collision-free navigation for drones. Rescue drones also need to consider networking, described next.

2.2 Drone Networking

In typical scenarios, drones fly over an area, perform sensory operations, and transmit the gathered information back to a ground control station or operations center via networks (Figs. 4 and 5). However, public Internet communication networks are often unavailable or broken in remote or disaster areas. The question that arises is how to find a rapid, feasible way of re-establishing communications in disaster areas while remaining connected to the outside world. The rise of rescue drones and extensive advancements in communication and sensing technologies drive new opportunities for designing feasible solutions to the communication problem. Besides data gathering, rescue drones can act as temporary network access points for survivors and work cooperatively to forward and request data back to the ground control station [10, 11, 25,26,27].

Fig. 4 MQ-9 Reaper taxiing [24]

Fig. 5 Airnamics R5

In the literature, there are two types of rescue drone network systems: single-drone and multiple-drone. The single-drone network system generally has a star topology, in which drones work independently and are linked to a ground control station. In [11], drones are equipped with a WiFi (802.11n) module and are responsible for listening for survivor “HELP” requests within communication range. The drone then forwards the “HELP” request to the ground control station through an air-to-ground communication link, a reliable IEEE 802.15.4-based remote control link with low bandwidth (up to 250 kbps) but long communication range (up to 6 km), as included in Table 1. The study [28] also used a single drone and developed a contour-map-based location strategy for locating targets. However, the outcome and efficiency of search and rescue are greatly restricted in single-drone systems, where a single drone [24] can provide only a limited increase in coverage.

Table 1 Existing wireless technologies for drone communication

Instead of having only one (large or heavy-lift) drone in the system, multiple-drone systems deploy several drones working interactively to sense and transmit data [25, 26, 29,30,31], as shown in Figs. 6 and 7. Generally, the system is composed of multiple drones and a ground control center. The drones are small or medium-sized unmanned aerial vehicles equipped with wireless transceivers, GPS, power supply systems, and/or on-board computers. The wireless transceivers are modules that provide wireless end-point connectivity to drones. The module can use XBee, ZigBee, WiFi, Bluetooth, WiMAX, and LTE protocols for fast or long-distance networking. Table 1 shows available wireless communication technologies for drone systems. Each technology has its own characteristics and limitations with respect to fulfilling the requirements of drone networks. Bluetooth and WiFi are the main short-range communication technologies and are generally used to build small wireless ad-hoc networks of drones. The communication links allow drones to exchange status information with each other during networked flight.

Fig. 6 Air shield [25]

Fig. 7 Multi-drone control system [29]

Daniel et al. [25] used this idea and built a multi-hop drone-to-drone (mesh) and single-hop drone-to-ground network. Since not all drones have a connection to the ground control station, the inter-drone links guide data routing towards the station. This process repeats until the data reaches a drone with a drone-to-ground link, realized with the wireless communication technologies WiMAX and LTE. Cimino et al. [32] claimed that WiMAX can also be used for inter-drone communication. SAR Drones [26] studied squadron and independent exploration schemes for drones. Drones can also be linked to satellites in multi-drone systems [21, 33].
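
As a generic illustration of this kind of mesh forwarding (not the actual protocol of [25]), the sketch below computes a drone's next hop toward the ground control station by breadth-first search over the drone-to-drone links; the node name "gcs" is a hypothetical stand-in for a drone that has a ground link:

```python
from collections import deque

links = {                     # drone-to-drone links; "gcs" has the drone-to-ground link
    "d1": ["d2"],
    "d2": ["d1", "d3"],
    "d3": ["d2", "gcs"],
    "gcs": ["d3"],
}

def next_hop(links, src, dst="gcs"):
    """Return the neighbor of src on a shortest path to dst (None if unreachable)."""
    if src == dst:
        return src
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            while parent[node] != src:    # walk back to the hop right after src
                node = parent[node]
            return node
        for nbr in links[node]:
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    return None

print(next_hop(links, "d1"))   # d1 forwards its data via d2
```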

It is possible that drones might fly outside of the communication range of the ground communication system, as shown in Fig. 7. PhantomPilots: Airnamics [29] proposed a multi-drone real-time control scheme based on multi-hop ad-hoc networking. Each drone acts as a node of the ad-hoc network and uses the ad-hoc network to transmit data to the ground control station via drones within the station's communication range.

Besides single-layer networks, there are also dedicated multi-layer networks designed for multi-drone systems. Asadpour et al. [34] proposed a 2-layer multi-drone network, as shown in Fig. 8. Layer I consists of airplanes (e.g. the Swinglet in Fig. 8a), which are employed to form a stable, high-throughput wireless network for copters (e.g. the Arducopter in Fig. 8b). Copters form layer II and provide a single-hop air-to-ground connection for victims and rescue teams. For efficient search and rescue, controlled mobility can be applied to airplanes and copters to maximize network coverage and link bandwidth. In [35], three categories of drones, blimps, fixed-wing, and vertical-axis drones, were considered to constitute a multi-layer organization of the drone fleet with instantaneous communication links. Big Data in rescue operations introduces another factor of complexity.

Fig. 8 A 2-layer drone network [34]. (a) Swinglet. (b) Arducopter. (c) Aerial network

2.3 Regional Disasters

Night vision systems for search and rescue are undergoing a revolution driven by the rise of drones and night vision sensors to gather data in complex and diverse environments and by the use of data analytics to guide decision-making. Big Data collected from satellites, drones, automotive systems, sensors, cameras, and weather monitoring all contains useful information about real environments. The complexity of this data involves its volume, velocity, variety, variability, veracity, visualization, and value. The ability to process and analyze this data to extract insight and knowledge that enables timely rescue, intelligent services, and new ways to assess disaster damage is a critical capability. Big Data analytics is not a new concept or paradigm; beyond its established roles in cloud computing, distributed systems, sensor networks, and health care, the principles and utility of Big Data analytics in night vision systems hold much promise for search and rescue.

On January 12, 2010, a 7.0-magnitude earthquake rocked Haiti with an epicenter about 25 km west of Haiti's capital, Port-au-Prince [37]. By 24 January, another 52 aftershocks of magnitude 4.5 or greater had been reported. According to incomplete statistics, more than 220,000 people were killed, 300,000 people were injured, and 1.5 million people were displaced in the disaster. Population movement can contribute to increased morbidity and mortality and precipitate epidemics of communicable diseases in both displaced and host communities. To track and estimate population movements, Camara [36] conducted a prompt geospatial study using mobile data, as shown in Fig. 9. The mobile data consisted of the position data of active mobile users with valid subscriber identity modules (SIMs). For each SIM, a list of locations on each day during the study periods was recorded and managed in a database. The mobile and mobility data were then used to estimate population movements and identify areas outside the city at risk of cholera outbreaks as a result of those movements. One drawback of the use of mobile data for disaster and relief operations is the availability and fidelity of mobile data: if, for example, the mobile cellular network is down in the disaster-affected areas, no mobile data can be collected. In some scenarios, survivors who could be rescued may be hidden under, stuck beneath, or trapped by objects or obstacles and unable to use mobile devices. This problem can be further complicated by the presence of several population groups including the elderly, children, the sick, disabled people, and pregnant women, which the RASEN night vision system could triage in advance.

Fig. 9 Population distribution [36]. (a) Jan 31, 2010. (b) Oct 23, 2010
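
The core of such an analysis can be approximated in a few lines. The sketch below is our own illustration with hypothetical record fields, not the method used in [36]: it counts the distinct SIMs observed in each area on two dates and reports the net change as a crude proxy for population movement:

```python
from collections import Counter, defaultdict

# Each record: (sim_id, day, area) -- hypothetical anonymized SIM sightings.
records = [
    ("sim1", "2010-01-11", "Port-au-Prince"), ("sim1", "2010-01-31", "Gonaives"),
    ("sim2", "2010-01-11", "Port-au-Prince"), ("sim2", "2010-01-31", "Port-au-Prince"),
    ("sim3", "2010-01-11", "Port-au-Prince"), ("sim3", "2010-01-31", "Gonaives"),
]

def population_by_area(records, day):
    """Count distinct SIMs observed in each area on a given day."""
    seen = defaultdict(set)
    for sim, d, area in records:
        if d == day:
            seen[area].add(sim)
    return Counter({area: len(sims) for area, sims in seen.items()})

before = population_by_area(records, "2010-01-11")
after = population_by_area(records, "2010-01-31")
for area in sorted(set(before) | set(after)):
    print(f"{area}: net change {after[area] - before[area]:+d} SIMs")
```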

The study [39] provided a review of the use of big data to aid the identification, mapping, and impact assessment of climate change hotspots for risk communication and decision making. de Sherbinin [40] argued for data fusion for predicting the location of surface water cover and exploited the idea of bagged decision trees to derive inundation maps by combining coarse-scale remotely sensed data and fine-scale topography data. It is widely recognized that imagery is key to providing rapid, reliable damage assessments and enabling quick response and rescue [38, 41]. The study [38] employed a spatial video system to collect data by following the path of the Tuscaloosa tornado of April 27, 2011. Example segments of spatial video data are shown in Fig. 10. The spatial video data were then loaded into the GIS system ArcMap and processed offline to support post-disaster recovery. Fluet-Chouinard et al. [41] conducted a spatial video data collection four days after the tornado of April 3, 2012 in the Dallas Fort-Worth (DFW) area. An online survey was then performed with the collected data to refine the damage classification, which can be referenced by further studies.

Fig. 10 Segments of spatial video data [38]. (a) Slight damage. (b) Severe damage

ADDSEN [42] was proposed for adaptive real-time data processing and dissemination in drone swarms executing urban sensing tasks, as shown in Fig. 11a. Two swarms of drones were dispatched to perform a distributed sensing mission. Each drone was responsible for sensing a partial area of the roadway along its flight path. Instead of immediately transmitting the sensed data back to the ground control center, ADDSEN allows each drone to engage in partially ordered knowledge sharing via inter-drone communication, as described in Fig. 11b. Considering that drones are limited in flight time and data payload capacity, ADDSEN includes a load balancing method: in each swarm, the drone with the most residual energy is selected as a balancer. The balancer reallocates the workload of overloaded or heavily loaded drones and can coordinate with drones in the same or a different swarm to achieve cooperative, balanced data dissemination. To enable rapid processing of big aerial data in a time-sensitive manner, Ofli et al. [43] proposed a hybrid solution combining human computing and machine intelligence. Human annotation was used to produce training data and minimize error. On the basis of the trained data, image-based machine learning classifiers could then be developed to automate the disaster damage assessment process.

Fig. 11 Drone swarms for urban sensing [42]. (a) Drone swarms. (b) Distributed knowledge management
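
A minimal sketch of the balancer idea as we read it follows; it is not ADDSEN's actual algorithm, and the drone names, energy values, and cell limit are hypothetical. The drone with the most residual energy acts as balancer and moves sensing cells from overloaded drones to the least-loaded ones:

```python
drones = {
    "d1": {"energy": 0.9, "cells": ["c1", "c2"]},
    "d2": {"energy": 0.4, "cells": ["c3", "c4", "c5", "c6", "c7"]},
    "d3": {"energy": 0.7, "cells": ["c8"]},
}

def rebalance(drones, max_cells=3):
    """Select the balancer and reassign cells away from overloaded drones."""
    balancer = max(drones, key=lambda d: drones[d]["energy"])   # most residual energy
    for name, state in drones.items():
        while len(state["cells"]) > max_cells:
            cell = state["cells"].pop()
            # Reassign the cell to the least-loaded other drone.
            target = min((d for d in drones if d != name),
                         key=lambda d: len(drones[d]["cells"]))
            drones[target]["cells"].append(cell)
    return balancer

print("balancer:", rebalance(drones))
print({d: s["cells"] for d, s in drones.items()})
```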

2.4 Swarm Intelligence

For efficient and effective search and rescue, night vision systems are required to coordinate with each other and optimize searching and sensing strategies. However, night vision systems for search and rescue exhibit complex behaviors that can be simplified, and made less costly, when a swarm search algorithm is executed to determine a recommended path for rescue drones to traverse as first responders. As circumstances change, the swarm algorithm can adjust and recalculate with new data, providing an alternate path for rescue drones to follow for initial monitoring and injury assessment using special night vision equipment, such as RASEN, that can provide scouting details for future convoys to the disaster area. Because the locations of objects and circumstances in target areas are often unpredictable, it is very difficult to model and analyze the behavior of night vision systems and the interactions between them. In fact, it is desirable that night vision systems be networked and able to self-organize and self-configure, accommodating new circumstances in terms of terrain, weather, tasks, network connectivity, visibility, and so on. Our approach simplifies the adaptive response of rescue drones to such Big Data analytic environments.

Inspired by autonomous insect societies that exhibit exactly these desired characteristics, a considerable body of work on swarm intelligence (SI) for supporting rapid search and rescue has been conducted [19, 20, 43,44,45,46]. Swarm intelligence is an artificial intelligence discipline that focuses on the emergent collective behaviors of a group of self-organized individuals leveraging local knowledge. It has been observed that, as an innovative distributed intelligent paradigm, SI has exhibited remarkable efficiency in solving complex optimization problems.

The most notable examples of swarm intelligence based algorithms [47,48,49,50] are ant colony optimization (ACO), ant colony cluster optimization (ACC), boids colony optimization (BCO), particle swarm optimization (PSO), artificial bee colony (ABC), stochastic diffusion search (SDS), firefly algorithm (FA), bacteria foraging (BF), grey wolf optimizer (GWO), genetic algorithms (GA), and multi-swarm optimization (MSO).

The Advanced Multi-Function Interface System (AMFIS) is an integrated system with a ground control station and a set of flight platforms developed by Fraunhofer IOSB [20]. The ground control station was deployed for controlling flight platforms and managing sensor data collection in real time, and the flight platforms were dispatched for flight maneuvers such as object tracking and data collection. Via uplink and downlink channels, the ground control station communicated with the flight platforms: the uplink channel was used for control while the downlink was used for data transmission. It is claimed that the intelligence of the flight platforms is supported by the ground control station on the basis of sensor data fusion, although the details of this fusion are missing in [20]. Figure 12 illustrates the blueprint of AMFIS working with various sensors and mobile platforms.

Fig. 12 AMFIS [20]

RAVEN [44] is an early system for investigating multi-task missions with ground vehicles and drones, as shown in Fig. 13. The ground vehicles and drones were used to emulate cooperative air and ground mission scenarios. Similar to AMFIS, the coordination of ground vehicles and drones and the swarm logic are provided by a central ground control station.

Fig. 13 RAVEN [44]

Unlike the aforementioned systems, a layered dual-swarm system [46] was proposed with more detailed insight into swarm intelligence. The core of this project focused on intra-swarm and inter-swarm intelligence in a network of wireless sensors and mobile objects. As shown in Fig. 14, two swarm collectives coexist in the system. The upper layer consists of autonomous mobile objects and uses a boids model to guide object movements and actions.

Fig. 14 A layered dual-swarm system [46]. (a) Layered structure. (b) System diagram

The lower layer is a self-organized wireless sensor network, in which an ant colony swarm algorithm is applied for environmental sensing. Via a communication channel, the two swarm collectives exchange the information necessary to foster cooperation between them and thereby form new swarm intelligence. The study [45] aimed at providing autonomous control of multiple drones. To achieve this, a function named the kinematic field was introduced, which enables the drones to calculate kinematic fields on the basis of local information and autonomously plan their routes while the field is asymmetrically modified by co-existing drones.
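
For readers unfamiliar with the boids model mentioned above, the following is a generic sketch of one boids-style update step (separation, alignment, cohesion), not the implementation of [46]; the neighborhood radius and weights are assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, size=(10, 2))   # positions of 10 mobile objects
vel = rng.uniform(-1, 1, size=(10, 2))    # their velocities

def boids_step(pos, vel, radius=20.0, w_sep=0.05, w_ali=0.05, w_coh=0.01, dt=1.0):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d > 0) & (d < radius)               # neighbors within the radius
        if not nbr.any():
            continue
        sep = (pos[i] - pos[nbr]).sum(axis=0)      # separation: steer away from neighbors
        ali = vel[nbr].mean(axis=0) - vel[i]       # alignment: match neighbors' velocity
        coh = pos[nbr].mean(axis=0) - pos[i]       # cohesion: move toward neighbors' center
        new_vel[i] += w_sep * sep + w_ali * ali + w_coh * coh
    return pos + new_vel * dt, new_vel

for _ in range(100):                               # run a short simulation
    pos, vel = boids_step(pos, vel)
print(np.round(pos[:3], 1))
```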

2.5 Night Vision Systems

Night vision is the ability to see in darkness, low illumination, or night conditions. Humans have poor night vision, since only tiny amounts of visible light are present and the sensitivity of the human eye is quite low under such conditions. To improve visibility at night, a number of night vision devices, systems, and projects have been designed, developed, and conducted across many areas. Night vision devices (NVD), also known as night optical/observation devices (NOD), are electronically enhanced optical devices such as night vision goggles, monoculars, binoculars, scopes, and clip-on systems from night vision manufacturers such as Armasight Night Vision, ATN Night Vision, Yukon Night Vision, Bushnell Night Vision, and others. NVDs were first used in World War II and are now available to the military, police, law enforcement agencies, and civilian users.

Based on the technology used, NVDs primarily operate in three modes: image enhancement, thermal imaging, and active illumination.

  • Image enhancement, also called low-light imaging or light amplification, collects and magnifies the available light reflected from objects to the point that we can easily observe the image. Most consumer night vision products are image intensification devices [51]. Light amplification is less expensive than thermal imaging.

  • Thermal imaging (infrared) operates by collecting the heat radiated by warm objects, such as humans and animals, which emit infrared energy as a function of their temperature. In general, the hotter an object is, the more radiation it emits. Thermal imaging night vision devices are widely used to detect potential security threats from great distances in low-light conditions.

  • Active illumination works by coupling image enhancement with an active infrared illumination source for better vision. At the lowest cost, active illumination night vision devices typically produce higher-resolution images than other night vision technologies and are able to perform high-speed video capture (e.g. reading license plates on moving vehicles). However, because active illumination can easily be detected by other devices such as night vision goggles, these devices are generally not used in tactical military operations.

Night vision devices and sensors (such as cameras, GPS, Lidar, and Radar) are integrated into night vision systems [52,53,54,55,56] to sense and detect objects that are difficult to see in the absence of sufficient visible light or that are located in blind spots. Based on the relative behavior of the night vision devices and sensors, night vision systems are commonly classified into two main categories: active and passive.

Active night vision systems are equipped with infrared light sources and actively illuminate objects at a significant distance ahead on the road, where the headlights cannot reach. The light reflected by objects is then captured by cameras. Active systems are low-cost solutions that perform well at detecting inanimate objects. In the market, automotive companies such as Mercedes-Benz [56], Audi, BMW [54, 56], Rolls-Royce, GM, and Honda [57] have offered night vision systems with infrared cameras.

In the case of Audi [56, 58], BMW [56, 59], and Rolls-Royce [60, 61], the Autoliv [55] systems were passive solutions. Passive systems detect thermal radiation emitted by humans, animals, and other objects in the road, which is processed using different filters. The object detection range can be up to 328 yards (300 m), which is twice the range of an active system and three times the range of headlights. Honda deploys dual infrared cameras on the vehicle to provide depth information for night vision. Drones may be equipped accordingly.

2.6 Artificial Cognitive Architectures

Rescue drones can also be adapted to aquatic environments, considering the vast uncharted depths of an Earth that is more water than land. Autonomous systems are required for navigating austere environments, such as harsh landscapes on other planets and deep oceans, where human analysts cannot function and directly guide drones. Hence, cognitive architectures are required, and as these systems evolve and become more self-reliant and cognitive through machine learning, they become increasingly valuable to search and rescue teams such as the Coast Guard and NASA. Carbone [62, 63] defines cognitive formalism within systems as a biologically inspired knowledge development workflow, developed from decades of cognitive psychology research, combined with neuron-like knowledge relativity threads to capture context so that systems can self-learn. Microsoft can now store magnitudes of high-volume data in DNA, and IBM continues to develop more powerful neuromorphic chipsets. Artificial cognitive architecture research [63] is also making great strides and will provide needed improvements in levels of self-learning, context, and trust in order for autonomous systems to expand usage across difficult search and rescue environments. Therefore, it is essential to have a system that can move and think on its own, with machine learning capability, while satisfying human-driven objectives and rules optimal for SAR missions.

3 Rapid Alert System for Enhanced Night Vision (RASEN)

We developed a proposal for Sony that fused their night vision technology with emerging MMW radar that can discern details of life behind walls from a distance, which could potentially be applied to discerning survivor status under rubble and at night as first-responder rescue drones detect movement. Until very recently, the efficacy of proposed approaches and systems, in terms of visual quality and detection accuracy weighed against hardware cost, was mostly ignored. To help prevent accidents and increase driver awareness in dark environments, low-cost, high-accuracy, real-time night vision is needed that integrates seamlessly with other smart sensors. We argue that it is essential to redesign the current architecture of night vision systems with networked vehicles and drones.

Contrary to existing architectures which rely only on drones, infrared cameras, LiDAR, or other on-board units, we develop the concept of providing capability of network-wide sensor data fusion for dynamically changing environments, particularly coupled with real-time map, weather, and traffic updates. Meanwhile, an important property of the system architecture is that it is evolvable, in the sense that it can allow far more devices or sensors mounted on vehicles, new protocols, features, and capabilities to be added on on-board platforms or the system infrastructure. The system architecture is a modularization of on-board platforms, networks, servers and technologies in which certain components (e.g., platforms and networks) remain stable, while others (the devices, sensors, links, and technologies) are encouraged to vary over time.

Figure 15 illustrates the generic framework architecture of a RASEN deployment. It consists of three main components: a data center and its servers; the available high-speed networks, including vehicular networks, LTE, and 5G networks; and on-board embedded platforms with the radar-camera module, sensors, and single or multiple drones. The data center and its servers are set up to archive data and provide services to vehicles equipped with the on-board platform and its modules. We aim to yield network-side insights on environment changes, traffic status, map updates, and weather conditions. The servers inform vehicles with real-time, lightweight, location-based information, which lets vehicles know what is ahead on the road so that drivers can be confident and react proactively to different situations. Leveraging links among servers, vehicles, and drones (i.e., remote sensing enablers) to provide network-wide machine learning capability, users (i.e., drivers) can gain the most personalized and accurate route guidance experience. Users can customize their interests in data from vehicles, sensors, the environment, people, and so on.

Fig. 15 Generic framework architecture of RASEN deployment

Our on-board platform, as shown in Fig. 16, has an embedded PC with a TFLOP/s-class, 256-core NVIDIA Maxwell architecture graphics processing unit (GPU), connected to a 360° MMW radar, a camera system of four cameras with IMX224MQV CMOS image sensors, a Raspberry Pi 3, a GPS navigation system, a DSRC module, and a remote sensing system of multiple drones.

Fig. 16 On-board platform configuration

However, due to inherent shortcomings associated with its wide signal beam, MMW radar is insensitive to the contour, features, size, and color of detected targets. To address this, we use a high-sensitivity CMOS image sensor (IMX224MQV), which is capable of capturing high-resolution color images under 0.005 lux light conditions, i.e., nearly complete darkness. On dark nights, traditional cameras typically suffer from low sensitivity and difficulty in discerning one color from another [51,52,53,54,55,56,57,58]. The remote sensing system is a self-organized, distributed multiple-drone system that improves information fusion and situational awareness. Drones extend the limited sensing range of the cameras, Lidar, MMW radar, and DSRC and provide multiple views that describe distinct perspectives of the same area circumscribing the vehicle. In some extreme circumstances, drones can serve as temporary network access points for emergency communications.

By collecting and analyzing sensing data, we construct a 3-layer RASEN system architecture aimed at serving various applications that demand highly accurate environmental perception, such as night vision, as shown in Fig. 17. The data layer collects, archives, and unifies data representations; the fusion layer consumes the data provided by the data layer, abstracts features, and detects, classifies, and tracks objects; and the control layer mainly focuses on modeling the situation and the driving task, sending alerts and taking timely vehicle control actions with respect to the information abstracted and discerned from the sensing data.

Fig. 17 RASEN system architecture

In RASEN, with a geometrical model of the four systems, the calibrated 360° MMW radar system, camera system, GPS navigation system, and multi-drone system work together with the Raspberry Pi to generate a sequence of sensing data containing environment objects through iterations of two phases. One phase exploits the long-range detection ability of the multi-drone system and the 360° MMW radar system: while the vehicle is moving, those systems find possible targets. Based on the remote sensing data and network-wide insights, when any target enters the vision range of the night camera system, the Raspberry Pi/Wolfram, a cyber-physical system platform capable of millisecond processing, issues a notification message and triggers the other phase. The night camera then starts capturing a series of low-light images. It can provide lateral resolution to analyze the data and ascertain further actionable intelligence for automated vehicle systems when combined with data provided in advance by the recently developed MMW radar.
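
A minimal sketch of this two-phase loop is given below (illustrative only; the camera range, function names, and stubbed hardware interfaces are our own assumptions). Phase 1 consumes long-range radar/drone detections; when a target enters the assumed camera range, phase 2 captures a burst of low-light images:

```python
CAMERA_RANGE_M = 120.0   # assumed vision range of the night camera system

def sensing_loop(radar_detections, capture_burst, notify):
    """radar_detections: iterable of (target_id, range_m) from the radar/drone phase."""
    for target_id, range_m in radar_detections:
        if range_m <= CAMERA_RANGE_M:
            notify(f"target {target_id} entered camera range ({range_m:.0f} m)")
            frames = capture_burst(n_frames=8)   # phase 2: series of low-light images
            yield target_id, frames

# Example run with stubbed-out hardware interfaces.
detections = [("obj-1", 300.0), ("obj-1", 110.0)]
for tid, frames in sensing_loop(detections,
                                capture_burst=lambda n_frames: ["img"] * n_frames,
                                notify=print):
    print(tid, len(frames), "frames captured")
```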

The fusion layer deals with how to fuse sensor measurements to accurately detect and consistently track neighboring objects. Each time the fusion layer receives new raw data, it reads the information encoded in the data format and generates a prediction of the current set of object hypotheses [64, 65]. Features are extracted from the measured raw data with the goal of finding all objects around the vehicle. Artifacts caused by ground detections or vegetation are suppressed by exploiting their features: both ground detections and vegetation are static and have no speed, so using the radar data we can easily identify and mark them out. To reduce the misidentification rate, 3D map information can also be used by checking against the road geometry. The result is a list of validated features that potentially originate from objects around the vehicle.
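
The suppression step described above can be sketched as a simple filter (illustrative only; the field names and the speed threshold are assumptions, not the RASEN implementation): features with near-zero radial speed are treated as ground clutter or vegetation, and an optional road-geometry check from the 3D map removes off-road returns:

```python
SPEED_EPS = 0.3   # m/s; assumed threshold below which a return is treated as static

def validate_features(features, on_road=lambda x, y: True):
    """features: dicts with 'x', 'y', 'speed' produced by the feature-extraction step."""
    validated = []
    for f in features:
        if abs(f["speed"]) < SPEED_EPS:
            continue                      # likely ground detection or vegetation
        if not on_road(f["x"], f["y"]):
            continue                      # fails the 3D-map road-geometry check
        validated.append(f)
    return validated

feats = [{"x": 5.0, "y": 0.0, "speed": 0.0},    # static clutter -> suppressed
         {"x": 8.0, "y": 1.0, "speed": 4.2}]    # moving object  -> kept
print(validate_features(feats))
```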

Sensing data are processed by the fusion layer and then delivered to support tasks and services in the control layer. In RASEN, network-wide information techniques are employed to assess, evaluate, and combine the information yielded by the on-board platform, in conjunction with the host-vehicle states, into reliable features which are used to improve the performance of object detection, tracking, and night vision. Besides night vision, our system is also suitable for, or can be extended to, other applications such as smart cruise control, lane-departure warning, headlight control, active night vision, rain sensing, and road sign recognition.

4 Swarm Intelligence Utilizing Networked RFID

Modern developments in wireless technology have increased the reliability and throughput of this type of communication. The factors of portability, mobility, and accessibility have all improved. We apply Ant-Optimization metaheuristics, implemented in the Wolfram language, to previous work on antenna networks [62, 63, 66]. Below we review three widely used systems: Radio Frequency Identification, Wireless Sensor Networks, and Multiple-Input Multiple-Output communication.

4.1 Radio Frequency Identification (RFID) for Wireless Drone Networking

Radio Frequency Identification (RFID) is a technology that uses a radio frequency electromagnetic field to identify objects through communication with tags that are attached to them. This technology was originally introduced during World War II [67]. Figure 18 depicts RFID components in the context of a communication network. The RFID system consists of two components: readers and tags, also called interrogators and transponders. Each reader and tag has an antenna to communicate wirelessly through electromagnetic waves (see Fig. 19). There are two types of RFID tags: active and passive. Active RFID tags contain an internal power source, and passive RFID tags usually harvest energy from readers' signals.

Fig. 18 A generic IoT platform consisting of intelligent RFID tag and reader with a hierarchical two-layer network [68]

Fig. 19 A typical RFID system and wireless sensor network [67]

Wireless Sensor Networks (WSNs) consist of spatially distributed autonomous devices using sensors to monitor physical or environmental conditions as shown in Fig. 19. A WSN is used in many industrial, military, and consumer applications. The WSN consists of nodes where each node is connected to a single or multiple sensors. Typically, each sensor network node has multiple components [69]:

  • Transceiver with an internal antenna or connection for external antenna.

  • Microcontroller, an electronic circuit, with sensor interface as well as energy source interface which is usually a battery or an embedded form of energy harvesting.

A sensor is a device that receives a signal or stimulus from its surroundings and responds to it in a distinctive manner. It converts a mechanical, chemical, magnetic, thermal, electrical, or radiation quantity into a measurable output signal. The basic features and properties of sensors are:

  • Sensitivity: This represents the detection capability of the sensor with respect to the sample concentration.

  • Selectivity: This represents the ability to detect the desired quantity among other, non-desired quantities.

  • Response time: This describes the speed with which the sensor can react to changes.

  • Operating life: This is the lifetime of the sensor.

A WSN topology can vary from a simple star network to a complex multi-hop wireless mesh network, and propagation between the hops of a WSN can be by routing or flooding [70] (Fig. 20).

Fig. 20 A typical wireless sensor network via multiple-input multiple-output (MIMO) communication
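
As a generic illustration of the flooding option (not tied to any specific WSN protocol; the node names are hypothetical), the sketch below rebroadcasts a message once per node and ignores duplicates:

```python
from collections import deque

topology = {                 # adjacency list of a small mesh
    "sink": ["n1", "n2"],
    "n1": ["sink", "n2", "n3"],
    "n2": ["sink", "n1", "n4"],
    "n3": ["n1", "n4"],
    "n4": ["n2", "n3"],
}

def flood(topology, source):
    """Return the order in which nodes first receive the flooded message."""
    received, order = {source}, [source]
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in topology[node]:
            if nbr not in received:      # each node forwards only the first copy
                received.add(nbr)
                order.append(nbr)
                queue.append(nbr)
    return order

print(flood(topology, "n3"))   # e.g. a sensor reading originating at n3
```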

The concept of multiple-input multiple-output (MIMO) in wireless communication is based on the use of multiple antennas at both the source (transmitter) and the destination (receiver) to exploit multipath propagation [71]. MIMO is a developed form of antenna array communication that provides advantages such as gain and spatial diversity. Although multiple receive antennas have been known and used for some time, the use of transmit diversity has only been studied more recently [65]. Figure 21 illustrates the concept of a MIMO system with source transmit antennas and destination receive antennas.

Fig. 21 A multiple-input multiple-output (MIMO) channel
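
The textbook narrowband MIMO model behind Fig. 21 can be written as y = Hx + n, where H is the channel matrix between the transmit and receive antennas. The sketch below simulates one such channel use and recovers the symbols with a zero-forcing estimate; it is a generic illustration, not a model of any specific system in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr = 2, 2                                     # transmit / receive antennas
H = (rng.standard_normal((Nr, Nt)) +              # Rayleigh-like complex channel matrix
     1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
x = np.array([1 + 0j, -1 + 0j])                   # transmitted symbols (e.g. BPSK)
n = 0.1 * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))

y = H @ x + n                                     # received vector: y = Hx + n
x_hat = np.linalg.pinv(H) @ y                     # zero-forcing estimate of x
print(np.round(x_hat, 2))
```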

In modeling and analyzing antenna systems in the settings of RFID systems, sensor networks, and MIMO situations, we implemented a simulation using Ant-Optimization and Particle Optimization metaheuristics with the Wolfram language. Although the examples are limited to a small number of nodes, due to the nature of the approach and its scalability, this model represents a step towards scaling to Big Data related problems in these settings.

4.2 Ant-Colony Meta-Heuristics for Night Rescue Operations

In the early 1990s, ant colony optimization (ACO) was introduced by M. Dorigo and associates as a bio-inspired metaheuristic for the solution of combinatorial optimization (CO) problems [72]. It has been stated that “ACO belongs to the class of metaheuristics which are approximate algorithms used to obtain good enough solutions to hard CO problems in a reasonable amount of computation time”. We adapt this approach to rescue drones that need to determine, in the CO problems posed by disaster areas, an optimal path to survivors in a reasonable amount of time. The inspiration for ACO comes from the behavior of real ants. When searching for food, ants initially explore the environment surrounding their nest at random. If an ant finds a food source, it carries a sample back to the nest, and during this trip it leaves a chemical (pheromone) trail. The quantity of pheromone deposited guides other ants to the food source. This indirect communication among ants through pheromones provides a mechanism for finding near-optimal paths between their nest and food sources, and it naturally results in the swarm converging toward the shortest trail to a food source. This colony metaheuristic, modeling ant behavior in nature, has been used successfully to find near-optimal solutions to relatively large unstructured network problems, and we apply it to rescue drones determining optimal paths to survivors. Instead of deploying expensive drones to survey an expanse of disaster area to determine the best way to deploy relief convoys, our approach simplifies the first-responder search by deploying many inexpensive scout drones that can immediately execute a swarm algorithm providing a recommended path to follow between survivors. This approach saves rescue time and fuel cost compared to surveying vast areas with drones to determine an optimal path, because the swarm algorithm plots a course of action for the drones in advance and can recalculate and adjust the network as conditions change. Wireless sensor bandwidth can then be made available directly for the survivor search with less costly drones, with increased payload availability for RASEN and other night vision systems, rather than deploying more sophisticated drones to survey a vast expanse of area for pinpoint accuracy.

Adapting the Ant-Colony metaheuristic as implemented in Mathematica by Rasmus Kamper, we started with a random set of antenna nodes (these antennas, depending on the problem, can belong to actual drones). As explained in the wireless networks section above, we demonstrated that the Ant-Colony algorithm finds a solution in a reasonable number of iterations and amount of time. Progressive iterations of the algorithm applied to the antenna network problem are shown in Fig. 22. The advantages of running the ant-colony algorithm before actual deployment of a swarm of drones are many. Most importantly, actual communication among drones is either avoided or minimized, since near-optimal visit patterns for the drones are already identified through the ant-colony optimization process. Therefore, cheaper and less capable drones can be used as well.

Fig. 22 Path convergence of swarm intelligence to guide rescue drones

Figure 22 illustrates the operation of the algorithm. The first step is the identification of the disaster sites (DS) (notionally, survivor locations). The DS are represented by random points in a two-dimensional space, and the distance between each pair of DS (represented by the edge weight) is taken as the Euclidean distance. After initialization, the algorithm proceeds to construct tours visiting the DSs until convergence. The algorithm also simulates evaporation of the deposited chemicals called “pheromones”, which provide a basis for AI weighting. Note that as the ants explore the options, all ants complete Hamiltonian cycles, each starting from a randomly selected DS. At initialization, all edges are assigned equal weights (pheromone), which can be controlled by the “initial level” slider in the visualization screen. After each construction step, the weight on each edge is multiplied by a fraction to simulate evaporation. The user can use the “min/max ratio” slider to set a minimum weight that prevents early convergence toward a sub-optimal trail.

The update simulates pheromone deposit by a weight increase. Shorter trail edges are favored: the weight increase (pheromone deposited) is inversely proportional to trail length, and an edge with a higher weight (shorter length) is assigned a higher probability of being selected by an ant. As this process continues, the edges of the graph with less traffic fade, as observed by their color intensity, and the ants' preferred (near-optimal) trail emerges. In large networks, the notion of elite ants is implemented by allowing only the most efficient ants to increase weights (deposit pheromone); the “elite ants” menu on the screen controls the corresponding percentage. In addition, a “candidate list” allows the search for the next DS to be restricted to the nearest DSs; for larger graphs, this strategy increases speed considerably. The process is repeated until all ants converge toward a particular trail. There is also a simple tour improvement algorithm (TIA) which can be activated on each tour; this capability can be turned off with the “TIA” checkbox on the screen. The “MMAS” checkbox enables the MAX-MIN Ant System algorithm; if enabled, only the best-performing ant can increase the weights (deposit pheromone). After the ants converge to a selected trail, shown in red on the screen, the result (red) is compared to the usually optimal trail (dashed) obtained with the FindShortestTour function. Various runs can be performed by saving the outcome of a particular run; one can change parameters and run again to see performance on the same graph with the new settings.
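
The demonstration described above is in Mathematica; as a language-neutral illustration of the same mechanics (pheromone-biased tour construction, evaporation with a minimum level, and best-ant-only deposit in the MMAS spirit), the following Python sketch tours a set of random disaster sites. It mirrors the ideas, not Kamper's code, and the parameter values are assumptions:

```python
import math
import random

random.seed(0)
sites = [(random.random(), random.random()) for _ in range(8)]   # random DS locations
n = len(sites)
dist = [[math.dist(sites[i], sites[j]) for j in range(n)] for i in range(n)]
tau = [[1.0] * n for _ in range(n)]                               # initial pheromone level

def build_tour():
    """Construct one Hamiltonian cycle, favoring high-pheromone, short edges."""
    start = random.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        i = tour[-1]
        cands = list(unvisited)
        weights = [tau[i][j] / dist[i][j] for j in cands]
        j = random.choices(cands, weights=weights)[0]
        tour.append(j)
        unvisited.remove(j)
    return tour

def tour_length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

best = None
for _ in range(200):
    iteration_best = min((build_tour() for _ in range(10)), key=tour_length)
    if best is None or tour_length(iteration_best) < tour_length(best):
        best = iteration_best
    for i in range(n):                                     # evaporation with a floor
        for j in range(n):
            tau[i][j] = max(0.1, 0.9 * tau[i][j])
    deposit = 1.0 / tour_length(best)                      # only the best ant deposits
    for k in range(n):
        i, j = best[k], best[(k + 1) % n]
        tau[i][j] += deposit
        tau[j][i] += deposit

print("best tour:", best, "length:", round(tour_length(best), 3))
```

In a Wolfram notebook, the corresponding check against the near-optimal reference tour would use the FindShortestTour function, as noted above.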

As can be seen in Fig. 22, the algorithm converged to find a reasonable path. Rescue drones will be able to follow this path to determine the condition of survivors day and night with RASEN, relaying critical data to relief convoys so that they can prepare according to the status of each DS node. This example, and recent examples in the literature, indicate that ACO research is a practical approach for scaling to unstructured Big Data problems with visual analytics. In the near future, we will compare these results with our Least Action Algorithm [62].

5 Conclusion

Swarm intelligence algorithms combined with the Rapid Alert Sensor for Enhanced Night vision (RASEN) can provide first responders with continuous night search capability and identification of the condition of survivors hidden under rubble during rescue operations. The Wolfram framework provides an environment for research students to expand their capability to develop smart rescue drones with decision support functions such as swarm intelligence. We have introduced biologically inspired algorithms combined with fusion night vision technology that can rapidly converge, in reasonable time, on a near-optimal path between survivors and identify signs of life in those trapped in rubble. Wireless networking with the dynamic programming ability to determine a near-optimal path, along with Big Data analytic visualization, is provided to rescue teams utilizing drones as first responders based on the results of swarm intelligence algorithms. This automated multi-drone scout approach enables appropriate relief supplies to be deployed intelligently by networked convoys to survivors continuously throughout the night, within critical constraints calculated in advance by rescue drones, such as projected time, cost, and energy per mission.