
1 Introduction

In recent years, the occurrence of disasters has caused significant losses across the countries of the world. Disaster management agencies worldwide categorize disasters as either natural or man-made [1, 2]. A natural disaster is a naturally occurring physical phenomenon, of either rapid or slow onset, that has a significant impact on human health, sometimes causing death and severe suffering. Natural disasters can be geophysical, hydrological, climatological, meteorological, and biological [3]. On the other hand, a man-made disaster is an event that results from environmental or technological emergencies near human settlements and is caused by the daily activities of human beings. Man-made disasters can be related to environmental degradation, pollution, and accidents in industry, technological development, transportation, etc. [4]. Moreover, there are certain complex combinations of natural and man-made disasters, which include, but are not limited to, food insecurity, epidemics, armed conflicts, and displaced populations. According to the International Committee of the Red Cross (ICRC), such complex emergencies can be characterized in terms of loss of life, extensive violence, widespread damage to the societies and economies of a country, and evolving security risks for humanitarian relief workers [5, 6]. In the last three decades, the world has observed different epidemics and pandemics such as Ebola, Zika, Avian flu, Cholera, Dengue fever, Malaria, Yellow fever, and, most recently, Coronavirus Disease (COVID-19) [6]. These outbreaks have imposed heavy economic and social costs and caused unexpected losses of human life and workforce. According to the International Federation of Red Cross and Red Crescent Societies, a disaster results from a hazardous impact on vulnerable people worldwide [7]. Thus, there is a need to strengthen disaster management practices to prevent and mitigate such disasters in real time.

Disaster management focuses on prevention, preparedness, response, relief, and recovery operations [8]. There is a cyclic link between environmental management and disaster mitigation and adaptation: practices carried out for disaster mitigation and adaptation contribute to environmental management, whereas sound environmental management is a pillar of disaster prevention. The former is a part of post-disaster management [9], and the latter forms the basis of pre-disaster management [10]. The disaster management cycle consists of four phases: response, recovery, mitigation, and preparedness [1]. Pre-disaster management is largely based on educating people and fostering a disciplined environment so that unexpected vulnerabilities are not created unnecessarily. For instance, excessive logging results in deforestation, which in turn causes frequent floods that negatively affect human lives. To manage a disaster effectively once it has occurred, one must understand and follow the practices of post-disaster management.

In the case of post-disaster management, six tools are used, namely, Environmental Risk Assessment (ERA), Environmental Management Systems (EMS), Strategic Environmental Assessment (SEA), environmental vulnerability and hazard mapping, Rapid Environmental Assessment (REA), and Environmental Impact Assessment (EIA) [11]. Using these tools, a disaster can be assessed effectively, and relevant safety measures for its prevention and mitigation can be devised. In the past few years, disaster management authorities have developed and applied these tools to either prevent disasters through early warning systems or prepare the concerned agencies for mitigation, response, and recovery operations once a disaster has occurred. However, the technologies and techniques utilized across the world to handle disaster situations, as and when they happen, need to be integrated in a more profound manner. With the emergence of Information and Communication Technology (ICT) and ambient intelligence [12], there is a need to use these technologies to automate post-disaster management activities around the world. The use of technology will help governments, businesses, and civil society to plan for and reduce the impact of disasters by appropriately shaping public policies and plans through integrated communication among the conscious entities participating in post-disaster management operations.

Effective communication is one of the crucial aspects of performing the operations associated with post-disaster management. The invention of the Internet and its evolution in terms of connectivity, communication capability, and speed have helped build a concrete backbone infrastructure to support ICT applications. Also, the emergence of the Internet of Things (IoT) [13] and the widespread use of cyber-physical systems [14] provide a platform wherein different types of devices and human beings, irrespective of ethnicity and language, can be connected across the world through the Internet. This has helped emphasize five significant and continuing trends in the world of computing, i.e., ubiquity, interconnection, intelligence, delegation, and human orientation [15]. These five trends are the basic building blocks of ambient intelligence and help perform intelligent distributed processing, even with devices deployed in remote or inaccessible regions. It must be noted that the issue of disaster management needs to be addressed on a global scale and in real time, wherein autonomous operation plays a significant role. Since post-disaster management operations require a significant amount of communication and computation to be performed in real time, the five trends mentioned above, together with the platforms that realize them, are highly useful in devising strategies for effective post-disaster management operations. However, to address challenges such as device heterogeneity, diverse communication protocols and mechanisms, demands for heavy computation and processing power, and reliability, there is a need to understand and incorporate collaborative processing among the devices in the network, as described in the following section.

2 Collaborative Data Processing

In current real-world situations, the task of post-disaster management requires efficient connectivity and communication among the entities involved in the operations. In [16], J. S. Kumar et al. present a generic model for communication among the different entities involved in post-disaster management operations. The authors emphasize the use of an Intelligent Information Network (IIN) for the IoT environment, wherein an information processing system acquires information from the disaster site through the cyber-physical network; this information is then used by the disaster management team and legislative authorities to understand the nature of the disaster and communicate it to the media, citizens, and non-governmental organizations. The information processing system performs its activities using sensing and actuation operations in the region of interest. Deploying and managing such an information system is a critical challenge and requires a collaborative strategy for processing the data at different levels.

Collaborative data processing revolves around five key terminologies: communication, cooperation, coordination, collaboration, and task [17]. Communication is the exchange of ideas and information among the participating entities. Cooperation is working with others in an enabling sense, i.e., providing the needed resources and information; it is achieved through the conscious and deliberate efforts of the participating entities. Similarly, coordination is identified by the actions of users directed by a coordinator to accomplish a common goal. On the other hand, collaboration is defined as the willing, collective effort of participating entities to achieve a desired goal. Finally, a task is a schedulable function or feature executed within a temporal scope; by its characteristics, a task may be periodic, aperiodic, or sporadic and may carry deadline or precedence constraints. Collaborative Data Processing (CDP) is the collection and management of data from one or more sources and the distribution of information to the destination, with the goal of controlling, processing, evaluating, and reporting data and information activities [18]. To perform efficient collaborative data processing among the devices deployed in the network, the critical challenges associated with it must be addressed.

Four critical issues must be addressed for successful collaborative processing among networked devices: dynamic determination of the level of sensing, the sensing entities, the sensing frequency, and the entities involved in the computation [17]. Moreover, there are two technical issues in collaborative data processing. The first is the degree of information sharing among the devices in the network. The second is formulating the methods employed to fuse the information among the devices sensing it. Thus, collaborative data processing revolves around the concept of distributed information fusion.

Figure 1 depicts a simple scenario to illustrate the concept of collaboration and its significance using a tree-based network infrastructure [19]. As an event occurs in the region of interest, it is detected and sensed by the devices in whose communication range the event has occurred. As shown in Fig. 1a, two devices sense the event. The sensed event is forwarded to the root node, which serves as the sink node in the network. Since both devices send the same data, the root node receives redundant data. Forwarding the same data also imposes a significant load on power-constrained devices, which must be avoided to preserve the lifetime of the nodes and of the network. This is an undesirable case and, thus, must be handled at the device level itself. In this context, the devices sensing the event must collaborate among themselves to devise a strategy for forwarding the sensed information in a way that reduces power consumption and redundancy in the network (a minimal sketch of such device-level suppression follows Fig. 1). This kind of scenario may become even more complicated in a network wherein autonomous operation occurs among heterogeneous devices. Thus, the different challenges associated with collaboration need to be outlined by understanding the different research areas in CDP.

Fig. 1 (a) Two nodes detect an event; (b) event information is propagated to the sink node [19]
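As a rough illustration of the scenario in Fig. 1, the following sketch shows how a node-level collaboration rule could suppress redundant forwarding toward the sink. The event identifiers, suppression window, and neighbor announcements are illustrative assumptions, not a protocol taken from [19].

    import time

    class SensorNode:
        """Sketch of node-level duplicate suppression in a tree network."""

        def __init__(self, node_id, parent=None, suppress_window=5.0):
            self.node_id = node_id
            self.parent = parent                # next hop toward the sink
            self.suppress_window = suppress_window
            self.seen = {}                      # event_id -> time last handled

        def on_event(self, event_id, payload):
            """Forward an event only if no collaborator handled it recently."""
            now = time.monotonic()
            last = self.seen.get(event_id)
            if last is not None and now - last < self.suppress_window:
                return                          # a neighbor already forwarded it
            self.seen[event_id] = now
            self.announce_to_neighbors(event_id)
            if self.parent is not None:
                self.parent.on_event(event_id, payload)
            else:
                print(f"sink {self.node_id} received event {event_id}: {payload}")

        def announce_to_neighbors(self, event_id):
            """Placeholder: broadcast a short suppression notice locally."""
            pass

        def on_neighbor_report(self, event_id):
            """A neighbor announced the event, so this node stays silent."""
            self.seen[event_id] = time.monotonic()

If the two sensing nodes in Fig. 1a exchange such announcements, only one copy of the event travels up the tree, saving energy on the power-constrained devices.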

2.1 Research Areas

The research areas in CDP are based on the different characteristics affecting the operation of devices and the key evaluation metrics used to measure their performance. The characteristics of these devices are outlined in terms of node mobility, the distributed processing demands of the applications, the ability to cope with node failures (which adversely affect reliability), the power consumption constraints of devices running on batteries or other forms of energy harvesting, communication failures affecting the real-time response of the system, and high-scale data processing at both the local and global levels to process the tasks in the applications [20, 21]. Based on these characteristics, the key evaluation metrics may include lifetime, connectivity and coverage, deployment cost and ease of operation, response time, effective sampling rate, and security [21]. The different research areas in CDP are shown in Fig. 2.

Fig. 2 Classification of research areas in collaborative data processing [20, 21]

The research areas in CDP can be categorized into network-based and Quality of Service (QoS)-based [14, 18]. In the network-based category, the problems related to the creation and maintenance of the network are discussed, such as the deployment of devices in the network, coverage of the network, localization of devices and events in the region of interest, and the clustering of deployed devices to support the operations related to the different applications [18]. Similarly, the QoS-based category discusses the problems related to maintenance of the performance of the network, such as energy conservation of the devices, lifetime, connectivity, reliability, and response time of the network to different services [14].

Furthermore, the research areas in CDP can also be classified into collaborative sensing, collaborative communication, and collaborative computing. Collaborative sensing is governed by three factors, i.e., anomaly detection rate, quality of signal processing, and false alarm rate [19]. When an anomaly is detected, the signal is sampled and filtered, threshold comparison is performed, and information and features are finally fused with neighbors to improve accuracy and signal quality (a compact sketch of this chain follows Fig. 3). The primary role of collaborative sensing is to gather information from the environment robustly. Collaborative sensing takes care of the assignment of tasks to sensors, the execution of the sequence of tasks on sensors, and the creation of a communication schedule among the sensors. Collaborative signal processing is affected by attributes such as sensor size, the deployment methodology and mobility model used, the extent of sensing required, the operating environment, the underlying processing architecture, and the energy available to the devices. Among these attributes, the processing architecture plays a significant role in efficiently sensing the events in the region of interest. A layered architecture for cooperative signal processing is discussed in [22], as shown in Fig. 3. It can be observed from the figure that the lower three layers execute operations autonomously, whereas the upper three layers perform operations cooperatively. The lower layers focus more on hardware-based processing, and the upper layers perform application-oriented functions. The quality, detection rate, and power consumption increase when moving from the lower to the upper layers, whereas the false alarm rate decreases considerably.

Fig. 3 Layered architecture for cooperative signal processing [22]
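The sensing chain described above (sample, filter, compare against a threshold, then fuse with neighbors) can be summarized in a few lines. The window size, threshold, and quorum below are illustrative assumptions only.

    from statistics import mean

    def moving_average(samples, window=5):
        """Simple low-pass filtering of raw sensor samples."""
        return [mean(samples[max(0, i - window + 1): i + 1])
                for i in range(len(samples))]

    def local_detection(samples, threshold):
        """Threshold comparison on the filtered signal."""
        return any(v > threshold for v in moving_average(samples))

    def fuse_with_neighbors(local_decision, neighbor_decisions, quorum=2):
        """Decision fusion: report an event only if enough nodes agree.

        Requiring agreement raises detection quality and lowers the
        false alarm rate, at the cost of extra communication.
        """
        votes = [local_decision] + list(neighbor_decisions)
        return sum(votes) >= quorum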

The success of collaborative communication depends on attributes such as optimized response time, maximum throughput, maximum average lifetime, minimum energy consumption, and maximum network coverage. The strength of the backbone ICT network plays a significant role in performing efficient communication. On the other hand, a computing model deals with dynamic network heuristics and distributed computing, wherein the computation needs to be performed at the node, local, and global levels. Computing models can be categorized into centralized and distributed schemes [23]. The centralized computing model uses the client-server model of computation, whereas the distributed model uses mobile agent-based computation. The centralized model suffers from higher energy consumption and storage requirements, longer processing times, and network disconnection when the server goes down; it is therefore suitable only for small, sparse networks. The different challenges and issues related to collaborative data processing and their respective characteristics are summarized in Table 1.

Table 1 Challenges and characteristics of research in collaborative data processing

In the distributed computing model, on the other hand, the energy consumption and storage requirements are comparatively lower, and processing is not disrupted as the network scales, because a mobile agent can bypass dead nodes. However, it incurs longer network latency in less dense networks and is therefore best suited to networks with densely deployed nodes. The authors in [24] discuss a graph-based communication algorithm that utilizes two computing schemes to address collaboration among the devices. The first scheme assumes that the number of nodes within a cluster is large and the number of cluster heads is small; in this case, computing within a cluster is performed using the distributed model, and computing among different clusters is performed using the centralized model. The second scheme assumes that the number of nodes within a cluster is small and the number of cluster heads is large; in this case, the process adopted is the reverse of the former.

In addition to collaborative processing, the post-disaster management activities require the exploration of ambient intelligence in the region of interest. It helps to achieve ubiquity, interconnection, and intelligence, which results in efficient communication among the entities involved in the operation. The advent of IoT serves as a strong pillar of ambient intelligence with profound communication capability among the heterogeneous devices deployed in the region of interest. In conjunction with collaborative data processing, the IoT environment provides a platform to use the cyber-physical systems to perform sensing and actuation on the environment and communicate the sensed information in real time. Thus, there is a need to explore the IoT technology to understand its utilization in post-disaster management activities, as discussed in the following section.

3 Internet of Things

Currently, over 4.5 billion people across the world have access to the Internet. Eight out of ten Internet users have smartphones, which has increased real-time access to information and data, and this demand is ever increasing. The evolution of ICT has drawn the world's attention toward the notion of a smart world [25]. The idea of an intelligent world includes a key component, i.e., device-to-device communication. It emphasizes the role of unified communication, which can be realized with integrated telecommunications, computers, enterprise software, robust middleware, storage, and audiovisual systems. This enables users to access, store, transmit, and evaluate information in real time. However, there are certain critical issues related to the usage of a unified communication platform, such as the gap between developed and developing nations, the penetration of remote areas, the availability of cellular coverage, electronic transmission speed, and efficient use of the backbone network. In this context, the Internet of Things environment plays a major role in realizing the vision of unified communication using the Internet.

The IoT environment offers the scope of connecting and communicating among devices around the world using Internet protocols. The use of the Internet is crucial and profitable for energy-constrained devices with limited computing power. The IoT environment assumes the distribution of heterogeneous devices of various types, such as sensors, actuators, servers, mobile stations, etc. Heterogeneity here means that the devices are manufactured differently, come in different sizes and shapes, use different communication protocols, adopt different computation techniques with different processing power, may exist in different types of networks, are short- or long-ranged, and offer different QoS performance. In such a scenario, several issues affect the performance of IoT networks. First, efficient connectivity among the physical, Information Technology (IT), social, and business infrastructures must be addressed by virtue of collective intelligence and the QoS parameters affecting communication in real time. Second, the issues of mobility, routing, and the requirement of on-the-move configuration of devices pose a big challenge. Third, since the IoT environment assumes communication among a vast number of machines worldwide, the algorithms must support scaling at the global level, robust communication, and self-organization capabilities. Fourth, the IoT network integrates different platforms, protocols, and technologies, and thus interoperability must be addressed on a large scale. Finally, the underlying network architecture and system design must be prepared to address the issues discussed above while efficiently processing the applications operating in the IoT environment. Thus, it is important to conceptualize the IoT on a global level.

The Internet of Things is a global network interconnecting smart objects by means of comprehensive Internet technologies, with the intent of developing applications and services on top of them in the global business world. There are three conceptual pillars of IoT: objects must be identifiable, have communication capability, and collaborate with one another [26]. These objects or things can be anything seen around the world, ranging from spectacles to chairs, tables, beds, sofas, walls, wearables, and more complex computing machinery. It can be observed that the IoT tries to connect all the objects in the world using the Internet. However, these objects are heterogeneous in their behaviors and characteristics. Thus, the devices in the IoT environment are categorized into two basic types: the sensor device and the IoT device.

A sensor device detects or measures a physical property and records, indicates, or responds to it in real time. Creating an IoT device, on the other hand, is more involved, yet quite feasible in today's world. To create one, take any thing other than a computer and add computational intelligence to it; then add a network connection to turn the computationally capable thing into an IoT device. An IoT device can communicate using the Internet protocol stack. An example of an IoT device is shown in Fig. 4, wherein a simple cellular phone is augmented with sensors, a camera, a high-end display, Wi-Fi, and GPS and is then connected over high-speed mobile broadband to become a smartphone, which can be utilized as an IoT device in the network. With an understanding of the different devices in the IoT environment, the way forward is to get acquainted with the different underlying architectures of IoT and their significance, as discussed in the following section.

Fig. 4 Example of an IoT device

3.1 Issues and Challenges

The development of applications using the IoT environment must address five crucial points: deployment of devices in the network, use of smart agents, development of intelligent spaces, real-time pervasive computing, and system architecture. These five topics are also known as the building blocks of IoT. The deployment of devices must be performed in such a way that the mechanism provides ease of access to the broadband infrastructure. This helps improve the collective embedded intelligence of the region, which further provides a cohesive and integrated infrastructure of smart devices. However, the most crucial aspect of the IoT environment is to address the system architecture. This is because the different devices use various communication protocols, for which there is a need to develop an underlying architecture that integrates the functioning of these protocols at different layers of the operation.

Gubbi et al. [27] discuss a conceptual IoT framework with cloud computing serving as the middleware between the network of things and the applications. The critical aspects of the network of things include, but are not limited to, security, reconfigurability, QoS, communication protocols, location awareness, and compressive sensing. The middleware serves as a cloud platform supporting visualization, computation, analytics, and storage services. This kind of framework is beneficial in a real-time processing environment; however, it does not address the distinction among heterogeneous devices at the physical layer. Similar application-oriented architectural models are discussed for smart cities [28], healthcare [27], the smart grid [29], and intelligent transport systems [30]. When these architectures are analyzed at a micro scale, it can be observed that an IoT architecture requires a physical layer, a data accumulation layer, an abstraction layer, and an application layer, with network layer functionality implemented at the edge for routing and for supporting analytics and business processes.

Although different applications exist in the IoT environment, the devices and the edge can be considered the heart of the complete architecture, on top of which any business application can run. In this context, a layered IoT framework is proposed in [31]. It consists of two types of devices, sensor devices and IoT devices, forming two different layers, namely, the sensing layer and the IoT layer, respectively. Both the sensor and IoT devices are responsible for performing data acquisition and data distribution operations. However, since the IoT devices surpass the sensor devices in communication and computation capabilities and are connected using IP-based protocols, they are assigned to perform data distribution on a major scale and data acquisition on a minor scale. The responsibility of the devices at the sensing layer is just the opposite. The sensor devices are ID-enabled and short-ranged and have comparatively less energy, whereas the IoT devices can be either IP-enabled or ID-enabled based on the requirements of the scenario in which they operate. The devices at the IoT layer support IEEE 802.15.4, IEEE 802.15.4e, 6LoWPAN, and CoAP in the physical, MAC, network, and application layers, respectively [26, 27] (a minimal client sketch for the CoAP layer follows Fig. 5). The layered IoT framework is shown in Fig. 5. It can be observed that devices in the same layer are assumed to communicate in the local neighborhood. Similarly, the IoT devices can acquire data from the sensing layer and are thus assumed to perform this operation in the global neighborhood. Even with an architectural framework for the IoT environment, the different issues and challenges associated with such a network must be addressed.

Fig. 5 Layered IoT framework [31]
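To make the application layer of the stack above concrete, the following is a minimal CoAP client sketch using the open-source aiocoap Python library; the library choice, the node address, and the /sensors/temp resource are assumptions for illustration, not details of the framework in [31].

    import asyncio
    from aiocoap import Context, Message, GET

    async def read_sensor():
        # Create a CoAP client context and issue a GET request to a
        # hypothetical temperature resource on a 6LoWPAN node.
        protocol = await Context.create_client_context()
        request = Message(code=GET, uri="coap://[2001:db8::1]/sensors/temp")
        response = await protocol.request(request).response
        print(f"{response.code}: {response.payload.decode()}")

    asyncio.run(read_sensor())

CoAP's request/response model mirrors HTTP but runs over UDP with compact headers, which is why it suits the constrained devices at the IoT layer.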

Collaboration in the IoT environment embeds numerous heterogeneous devices and processes their information to improve sensing and actuation capabilities. Some issues and challenges must be addressed to perform efficient communication among these devices, as shown in Fig. 6. These issues can be addressed for real-time service delivery by incorporating effective algorithms in the fields of deployment, localization, and clustering. The significance of these fields follows from the fact that real-time service delivery requires the localization of devices in real time. Better location estimation results from better coverage of the terrain, which motivates the discussion of deployment in the IoT environment. Additionally, sensing and acquisition by the deployed devices can be performed efficiently if these devices are logically clustered in terms of their operating context. These topics are discussed in the next few sections.

Fig. 6 Issues and challenges of operation in the IoT environment [27, 28, 29]

4 Deployment

The operations for post-disaster management using autonomous devices require that the devices provide services throughout the terrain. Specifically, the requirement is to cover the region of interest with an optimal number of devices, because the devices are costly and their management and maintenance incur high costs. In a broader sense, the deployment of devices provides efficient connectivity and coverage throughout the area. This enables more proficient communication among the devices to sense and forward information in real time. An efficient deployment strategy helps to achieve robustness in case specific devices fail. Robust processing of the data strengthens the data acquisition capability of the devices in the terrain. This ensures a better distribution of devices with higher granularity, which further provides a better communication range with energy efficiency and reliability. Thus, deployment becomes one of the key issues in building a concrete infrastructure for post-disaster management.

The deployment strategy can be categorized into pre-deployment, post-deployment, and re-deployment phases [25]. In the pre-deployment phase, algorithms are developed to deploy the devices in the terrain for the first time. This initial phase demands considerable effort to provide good coverage of the area, because the deployment process itself incurs a high cost, whether performed manually or autonomously. During the pre-deployment phase, efforts are made to understand the coverage area of the terrain, which may be either known or unknown. With a known coverage area, the placement of devices becomes comparatively easier than when it is unknown. For instance, the coverage area of a house is known and can be measured in square feet; in such a case, for sensing earthquake tremors, it is easier to judge the minimum number of devices required (a rough estimate of this minimum is sketched below). On the other hand, with an unknown coverage area, it is comparatively difficult to decide the minimum number of devices to deploy in the region, resulting in additional implementation cost. This becomes even more challenging in the presence of obstructions and non-line of sight conditions. Moreover, manual deployment is feasible in regions with a known coverage area but quite difficult to adopt in regions with unknown coverage areas; in such scenarios, the deployment can be performed using unmanned aerial vehicles.
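For the known-area case, a crude lower bound on the device count follows from the sensing radius. The sketch below assumes ideal disk-shaped sensing and a hexagonal layout (in which each disk effectively covers its inscribed hexagon) and ignores obstructions, so it is an optimistic estimate rather than a deployment rule from [25].

    import math

    def min_devices(area_m2, sensing_radius_m):
        """Optimistic lower bound on devices needed for full coverage.

        In an ideal hexagonal layout, each sensing disk of radius r
        effectively covers its inscribed hexagon of area
        (3 * sqrt(3) / 2) * r**2.
        """
        effective_area = 1.5 * math.sqrt(3) * sensing_radius_m ** 2
        return math.ceil(area_m2 / effective_area)

    # e.g., a 200 m^2 house with a 5 m sensing range needs about 4 devices
    print(min_devices(200, 5))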

The network topology may change due to insufficient power, changing task dynamics, and unreachable services. The post-deployment phase is carried out to perform maintenance operations under such conditions. This phase is crucial as it improves the lifetime of individual devices and of the overall network. Further, the re-deployment phase is carried out whenever the deployed devices are found to be malfunctioning or to have stopped functioning due to unforeseen circumstances, which may vary with the environmental conditions. Also, since the devices are battery-powered, they may get disconnected due to power or network connectivity issues. In such cases, additional devices are deployed, or devices are re-deployed at the same location, to take over the role of the dead device. Based on the requirements of the applications, there are different deployment strategies, as discussed in the following section.

4.1 Types of Deployment Strategies

A disaster may occur at any place. The affected area may have regular or irregular terrain and line of sight (LoS), non-line of sight (NLoS), or obstructed line of sight (OLoS) conditions [32]; the geographical region may be small or very large and may require few or a huge number of devices to be deployed. Since the algorithms and strategies for deployment must be chosen while adhering to the respective cost constraints, the type of deployment strategy needs to be decided by considering the terrain parameters discussed above.

In the literature, two basic types of deployment strategies are used, namely, deterministic and random deployment. These strategies have been studied widely in the field of Wireless Sensor Networks (WSN) [33, 34]. Deterministic deployment is concerned with placing the devices in a well-organized manner in the terrain, usually following a geometric distribution of the devices, which may take the form of a square, circle, hexagon, ellipse, linear placement, etc. It gives better performance when the coverage area is known. Deterministic deployment has the advantage of providing a certain level of control over the optimal number of devices used. However, in a real-time IoT environment with a huge number of devices and changing task dynamics for the cyber-physical systems, the deterministic scheme has limitations.

On the other hand, the random deployment scheme addresses the limitations of the deterministic scheme using two approaches: random distribution and planned placement [35] (a minimal sketch of grid and random placement follows this paragraph). The former approach employs algorithms based on mathematical computations to calculate the locations of the devices to be placed in the region of interest. For the planned placement approach, three categories of methods are used. The first, the incremental approach, deploys the devices in successive increments, providing a well-planned strategy to deploy the optimal number of devices based on the application's requirements. The second uses the concept of airdropping; this must be carried out cautiously, as the hardware is costly, and a mechanism must be devised to manage such deployments with cost in mind. The third is known as the virtual force-based approach. The random deployment strategy is better than the deterministic one in that it offers better coverage even when the coverage area is unknown. However, three major limitations are associated with a random deployment scheme. First, it relies on multi-hop communication using relay nodes, as the information must be carried to the farther ends of the region; if one of the relay nodes dies, the algorithm is affected severely. Second, a random deployment scheme produces coverage holes in the region, which results in the isolation of some devices. Finally, pure randomness is an idealization: the random algorithms utilized in deployment are pseudo-random and retain a degree of determinism in their implementation. Moreover, the coverage holes may cause a greater number of signals to be exchanged among the devices, so the random deployment scheme may suffer from inefficient energy utilization and reduced network lifetime.
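The two baseline schemes can be sketched in a few lines; the node counts and terrain size below are arbitrary.

    import math
    import random

    def grid_deploy(n, width, height):
        """Deterministic placement: evenly spaced rows and columns."""
        cols = math.ceil(math.sqrt(n))
        rows = math.ceil(n / cols)
        points = [((c + 0.5) * width / cols, (r + 0.5) * height / rows)
                  for r in range(rows) for c in range(cols)]
        return points[:n]

    def random_deploy(n, width, height, seed=0):
        """Random uniform placement: prone to clumping and coverage holes."""
        rng = random.Random(seed)   # pseudo-random, hence partly deterministic
        return [(rng.uniform(0, width), rng.uniform(0, height))
                for _ in range(n)]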

It is well known that the IoT environment assumes heterogeneity at various levels of operation. For example, different areas may have different requirements for coverage, connectivity, and reliability based on the type of applications and services. In this regard, the authors in [25, 36] discuss a quasi-random deployment strategy that amalgamates the random and deterministic schemes. It builds on the quasi-Monte Carlo method of numerical integration and uses discrepancy theory to generate the locations of the devices to be deployed in the terrain. Discrepancy theory, in mathematics, measures how much a distribution of points in s-dimensional space deviates, over geometrically defined subsets, from an ideal uniform distribution; generating points of low discrepancy therefore yields evenly spread points in the space. The quasi-random deployment scheme is implemented using low discrepancy sequences. There are four low discrepancy sequences, as shown in Fig. 7, namely, Van der Corput, Halton, Faure, and Sobol [36] (a sketch of the Halton construction follows Fig. 7). The Van der Corput sequence works in one-dimensional terrain, Halton is best suited for two-dimensional terrain, and Faure and Sobol are meant for multidimensional terrain.

Fig. 7 Categories of low discrepancy sequences [25, 36]
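The following is a minimal sketch of the two-dimensional Halton construction from Fig. 7, built on the Van der Corput radical inverse; the coprime bases 2 and 3 are the standard choice for two dimensions, and the terrain scaling is an illustrative assumption.

    def radical_inverse(index, base):
        """Van der Corput radical inverse of `index` in the given base."""
        result, f = 0.0, 1.0 / base
        while index > 0:
            result += f * (index % base)
            index //= base
            f /= base
        return result

    def halton_deploy(n, width, height):
        """Quasi-random 2-D placement from the Halton sequence:
        low-discrepancy points that spread evenly without a rigid grid."""
        return [(width * radical_inverse(i, 2), height * radical_inverse(i, 3))
                for i in range(1, n + 1)]

Because successive Halton points progressively fill the gaps left by earlier ones, the resulting layout avoids the coverage holes of purely random placement while retaining the flexibility that a rigid grid lacks.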

4.2 Comparison of Deployment Strategies

In the IoT environment, the advantages, limitations, and applications of all the deployment schemes must be understood so that they can be utilized according to different requirements. The authors in [25] compared the random, deterministic, and quasi-random deployment strategies, with grid-based deployment taken as the representative deterministic scheme. The comparative evaluation is shown in Fig. 8. The comparison is performed by deploying 100 nodes on a 1000 m × 1000 m terrain. It is observed that the grid scheme provides better coverage of the terrain, as depicted in Fig. 8a. However, as discussed earlier, grid-based deployment has limited applicability in a real-time environment where the coverage area is unknown. As shown in Fig. 8b, random deployment produces significant coverage holes in the region. This limitation is eliminated with quasi-random deployment, as shown in Fig. 8c, wherein the distribution is more even than with random deployment.

Fig. 8 Deployment of devices using grid-based, random, and quasi-random schemes [25]

It is well established that quasi-random deployment provides better coverage. It also supports all three phases of deployment, i.e., pre-deployment, post-deployment, and re-deployment, which have limited scope in the case of random deployment schemes. Once deployed, the next crucial task of the devices is to provide real-time service delivery, which can only be achieved if the real-time locations of the devices are known. In this regard, the fundamentals of localization and its applications in the IoT environment need to be understood, as discussed in the following section.

5 Localization

The different relief operations for handling disaster situations require data and information about the situation in real time. The communication of these data may be periodic or sporadic based on the type of application and its requisites. One of the crucial aspects of real-time service delivery is knowing the location of the device from which the data need to be communicated. However, this is challenging for several reasons. First, installing GPS is costly and hence cannot be done on every device; a mechanism must therefore be devised to find the location of devices in the absence of GPS technology. Second, the deployed devices have heterogeneous characteristics, for which the mechanism must support a certain level of efficient mapping to find the locations of the devices in real time. Finally, the capability of the deployed devices must be explored so as to map the signals transmitted and received among them. Thus, the challenges associated with the localization process need to be formulated and understood to develop better algorithms for the IoT environment.

The goal of localization is to estimate the positions of the nodes placed in the region of interest. The localization algorithm must satisfy real-time delivery constraints for scalable applications that require data from a huge number of devices. High-accuracy positioning of devices depends on data processing in the spatial context. In this regard, the different underlying assumptions for localization are shown in Fig. 9. To devise an efficient localization algorithm, an effective deployment scheme with good coverage and connectivity in the region of interest is needed. The processing architecture and network topology contribute significantly to the performance of the localization algorithm, because topology changes are caused either by the sudden death of some nodes or by heavy mobility in the network; with mobile nodes, device locations change frequently. Estimating locations and communicating them in real time require accurate signal processing, which in turn depends on the different parameters of the environment. Thus, the technologies used for building cyber-physical systems must be efficient enough to work under such complex environmental conditions to support the different applications and services in real time.

Fig. 9 Underlying assumptions for localization [31, 37]

5.1 Node Localization

In the case of disasters, data and information about the situation can be accessed if the locations of the deployed devices are known in real time. The localization problem can be categorized into proximity-based, range-based, and angle-based approaches [25, 31], as depicted in Fig. 10. In proximity-based localization, the location is estimated using the signals received from the devices within the communication range of the device estimating its location. Range-based localization uses distance information, and angle-based localization uses the angles at which signals are received from multiple devices in the communication range. The localization algorithm is implemented in three phases: the coordination phase, wherein information in the form of signals is received from the other devices in the vicinity; the measurement phase, in which measurements such as the distance or phase of the received signals are taken; and the position estimation phase, in which the location is estimated using the information derived in the measurement phase [31] (a least-squares sketch of the range-based case follows Fig. 10).

Fig. 10 Categories of the localization problem [31]
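For the range-based branch, once the measurement phase has produced distances to at least three anchors of known position, the position estimation phase reduces to linear least squares. The sketch below linearizes the range equations by subtracting the last anchor's equation; the anchor positions and ranges in the example are made up.

    import numpy as np

    def trilaterate(anchors, dists):
        """Least-squares position estimate from anchor positions and ranges.

        Linearizes |p - a_i|^2 = d_i^2 by subtracting the last anchor's
        equation, leaving a linear system A p = b.
        """
        (xn, yn), dn = anchors[-1], dists[-1]
        A, b = [], []
        for (xi, yi), di in zip(anchors[:-1], dists[:-1]):
            A.append([2 * (xi - xn), 2 * (yi - yn)])
            b.append(xi**2 - xn**2 + yi**2 - yn**2 + dn**2 - di**2)
        pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return pos

    # A device at (3, 4) ranged by three anchors at known positions:
    print(trilaterate([(0, 0), (10, 0), (0, 10)],
                      [5.0, np.sqrt(65), np.sqrt(45)]))   # ~ [3. 4.]

With more than three anchors, the same code averages out range noise, which is why redundant measurements are preferred in practice.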

Localization is a well-researched topic in the field of wireless sensor networks. Localization algorithms are categorized as range measurement-based, infrastructure-based, or distributed/centralized, based on the mechanism of implementation used in the process [37]. The categories of localization algorithms are shown in Fig. 11. The range measurement technique includes algorithms based on either range-based or range-free mechanisms. The infrastructure-based technique uses either anchor-based or anchor-free localization methods. In the distributed case, location estimation is performed at the nodes themselves, whereas in the centralized case, the measurements are gathered at a central entity that estimates the locations for the network.

Fig. 11 Categories of localization algorithms [37]

When considering the IoT environment, location information is crucial for performing different activities that satisfy real-time service delivery constraints. A significant amount of research has been done in the field of localization in the IoT environment. The localization algorithms in IoT can be categorized as addressing four critical issues: mobility of objects in groups, distribution of smart devices, environmental conditions, and the framework for the different services. Similarly, there are four crucial challenges associated with developing efficient localization algorithms for devices operating in the IoT environment: the amount of signaling overhead, localization accuracy, communication accuracy, and computational complexity.

The authors in [25] discuss the localization of both the sensor and IoT devices operating in the IoT environment by implementing a cross-layer localization algorithm. It localizes the sensors using the known locations of IoT devices and vice versa. The authors emphasize that although GPS is costly, it can be installed on some of the IoT devices to localize them. In [38], the authors discuss the use of Unmanned Aerial Vehicles (UAVs) as IoT devices. The network is created in such a way that all the sensor devices are placed on the ground, and the UAVs serve as the IoT devices. Multiple UAVs communicate over the Internet using the IP protocol. The sensor devices on the ground are grouped using a hierarchical clustering technique [39], and one UAV is assigned as the head of each cluster (a clustering sketch follows this paragraph). The sensors send their sensed data to the UAV, which further communicates the data to the other UAVs in real time. Localization is performed in such a way that the location of one cluster is known to the other clusters in the network and vice versa. Such real-time exchange of data and information based on the location of the devices in the network is beneficial in handling post-disaster situations.
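The following sketch shows one plausible realization of the ground-sensor clustering step using SciPy's hierarchical clustering; the Ward linkage, the sensor coordinates, and the use of cluster centroids as UAV hover points are illustrative assumptions rather than details taken from [38, 39].

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def assign_uavs(sensor_xy, n_uavs):
        """Cluster ground sensors and place one UAV per cluster centroid."""
        Z = linkage(sensor_xy, method="ward")        # hierarchical clustering
        labels = fcluster(Z, t=n_uavs, criterion="maxclust")
        hover_points = [sensor_xy[labels == k].mean(axis=0)
                        for k in range(1, n_uavs + 1)]
        return labels, hover_points

    sensors = np.random.rand(60, 2) * 1000           # 60 sensors in a 1 km square
    labels, uav_positions = assign_uavs(sensors, n_uavs=3)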

5.2 Event Localization

As and when a disaster occurs, the situation is identified on the basis of different types of events. An event may be defined as any abnormality observed in the usual process. For instance, a fire may be considered a disaster whose occurrence can be identified from events such as abnormal rises in temperature, pressure, and humidity compared to normal conditions. Similarly, one can check the concentrations of different gases to identify the occurrence of fire in the region, and cameras can be used to identify a fire situation in a building. Thus, by analyzing the different parameters mentioned above, one can conclude that a fire has occurred in a particular area of the city (a toy sketch of such multi-parameter event detection follows this paragraph). Since the current scenario demands autonomous operation for post-disaster management, such events need to be localized in real time to provide effective response and relief support as and when the disaster happens. The localization of events has prominent applications in industry, where human lives are at stake during shop floor activities, and thus real-time data about any abnormality is required to resolve issues as early as possible.
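A toy sketch of the fire example above: an event is declared only when several abnormal readings coincide, which suppresses single-sensor false alarms. All threshold values are illustrative, not calibrated figures.

    def is_fire_event(reading, baseline):
        """Declare a fire event when enough parameters deviate abnormally.

        `reading` and `baseline` are dicts of sensed values; the
        deviation thresholds below are illustrative only.
        """
        checks = [
            reading["temp_c"] - baseline["temp_c"] > 20,       # sharp temperature rise
            reading["co_ppm"] > 50,                            # elevated CO concentration
            reading["humidity"] < 0.5 * baseline["humidity"],  # abnormally dry air
        ]
        return sum(checks) >= 2    # require at least two agreeing indicators

    baseline = {"temp_c": 24, "co_ppm": 2, "humidity": 45}
    reading = {"temp_c": 58, "co_ppm": 120, "humidity": 12}
    print(is_fire_event(reading, baseline))    # True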

Many algorithms have been proposed in the literature to solve the event localization problem. The emphasis is on understanding the intensity of events; deciding whether events occur once or multiple times, at the same spatial location or at different locations in the region of interest; and accounting for the limitations of the terrain in which the events occur. In the IoT environment, a huge number of devices is assumed to sense the events in the region, so the gap between IP-enabled devices and the short-range wireless devices deployed for the purpose must be bridged. However, the major challenge is to address the LoS and NLoS situations in the terrain. In this context, the angle of arrival of signals can be explored for better location estimation of the events.

Since the deployed devices can send and receive signals in their vicinity, the Direction of Arrival (DoA) of signals from the events can be calculated to find the locations of these events. DoA algorithms compute the angle of arrival and use different parameters of the signals to calculate the locations of the events [40]. DoA estimation algorithms can be classified into two categories: non-parametric estimation and parametric high-resolution estimation [41]. Non-parametric estimation is further divided into low- and high-resolution methods. Well-known low-resolution algorithms include the periodogram, the correlogram, and the modified periodogram-correlogram; well-known high-resolution algorithms include the methods proposed by Capon and Borgiotti-Lagunas and methods based on maximum entropy. The parametric high-resolution estimation algorithms are classified as AR/ARMA-based, model fitting-based, or subspace-based. The AR/ARMA algorithms use the concept of maximum entropy to estimate the parameters of the signals. Well-known model fitting-based algorithms use deterministic and stochastic maximum likelihood and least squares methods. On the other hand, MUSIC and min-norm are classic subspace-based parameter estimation algorithms (a compact MUSIC sketch follows Fig. 12). The classification of DoA estimation approaches is summarized in Fig. 12.

Fig. 12 Approaches for direction of arrival estimation [41]
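The following is a compact sketch of the subspace-based MUSIC estimator named in Fig. 12, written for a uniform linear array for brevity (the work in [32, 41, 42] uses a concentric circular array instead); the element spacing and angle grid are illustrative.

    import numpy as np

    def music_spectrum(X, n_sources, spacing=0.5,
                       angles_deg=np.linspace(-90, 90, 361)):
        """MUSIC pseudospectrum for a uniform linear array.

        X         : (n_elements, n_snapshots) complex snapshot matrix
        n_sources : assumed number of impinging signals
        spacing   : element spacing in wavelengths
        """
        n_elements = X.shape[0]
        R = X @ X.conj().T / X.shape[1]              # sample covariance
        _, eigvecs = np.linalg.eigh(R)               # eigenvalues ascending
        En = eigvecs[:, : n_elements - n_sources]    # noise subspace
        spectrum = []
        for theta in np.deg2rad(angles_deg):
            a = np.exp(-2j * np.pi * spacing *
                       np.arange(n_elements) * np.sin(theta))
            # Peaks occur where the steering vector is (nearly)
            # orthogonal to the noise subspace, i.e., at the DoAs.
            spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
        return angles_deg, np.array(spectrum)

The arrival angles are read off as the largest peaks of the returned pseudospectrum.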

The IoT environment may have regular or irregular terrain, making it extremely difficult to map the signals exchanged among the devices operating in the region of interest. The authors in [32, 41, 42] discuss utilizing the DoA estimation technique for event localization with a quasi-random deployment of devices in the region of interest. The proposed algorithm utilizes a concentric circular array for mapping the signals from the devices in the terrain. The concentric circular array provides better coverage of the terrain than linear or uniform circular arrays; thus, the signals are received with higher accuracy than with the other two geometries. The event localization algorithm is further explored and implemented on a real testbed using the cloud to acquire, store, and fetch data in real time over UAV-based communication in the IoT environment [43]. Such a communication environment is beneficial for performing real-life operations autonomously.

Localization is a crucial activity to be initiated for response and relief operations when disasters occur. For real-time service delivery in the IoT environment, where a massive number of devices operate in the terrain, location estimation needs to be performed both for the nodes and for the events occurring in the region. This helps in processing the event information and understanding the places of origin of these events, and it further helps in providing services for disaster mitigation as and when required.

6 Conclusion

The two crucial operations during disasters are response and recovery. Effective implementation of these operations requires data and information to be accessed from the places of catastrophe in real time. One of the essential tasks during disaster situations is to save lives, and employing human beings for relief operations is dangerous; losses of human life during relief operations have been widely reported throughout the world. Saving lives while creating a robust infrastructure for efficient post-disaster management is therefore challenging. In this regard, the evolution of IoT has served a great purpose by providing a network in which all devices and things can connect to one another using the Internet. However, using an IoT environment for post-disaster management comes with its own challenges. This chapter has discussed three critical challenges, namely, deployment, localization, and collaborative processing, in exploring the benefits of IoT. For successful post-disaster management operations, the relevant activities must be integrated on a platform for efficient resource management and effective communication for real-time service delivery. This chapter has focused on a basic understanding of these topics, their significance, and their real-world applications.