1 Introduction

Due to its climatic, geographical, and geological characteristics, India is one of the countries worst affected by natural disasters in South Asia. To build resilience to natural disasters, the Disaster Management Act was passed in India in 2005 [1]. Under this act, the National Disaster Management Authority (NDMA) and the National Disaster Response Force (NDRF) were established. NDMA, headed by the Prime Minister of India, is responsible for making policies and plans for disaster management to ensure efficient disaster response. NDRF, on the other hand, is a specialized force working under the supervision of NDMA to handle disaster situations. A practical understanding of disaster management in India is provided in [2]. As a country of numerous rivers with tributaries and a prolonged monsoon (June–September) with extreme rainfall in different geographic regions, flood is India’s most common natural disaster. After Bangladesh, it is one of the most flood-prone countries in the world, with about 40 million hectares of land, around one-eighth of the country’s overall geographical area, prone to floods [3]. The annual average flood-affected area is 7.6 million hectares [4]. Roughly 30 million people in India are affected by floods, and more than 1500 lives are lost each year, which is about one-fifth of the global death count caused by floods [5, 6]. According to official statistics from the Government of India, about 92,000 people lost their lives and economic losses amounted to approximately $200 billion due to flooding between 1953 and 2009 [4, 7]. Several other papers like [8, 9] present flood statistics for two important river basins of India. Considering all these facts, an efficient flood management system is essential in India.

In terms of casualties and damages, flood is one of the worst natural disasters. To mitigate the effect of floods in flood-prone geographical locations, disaster management authorities need to implement actions toward flood management. Flood management involves various actions like risk and vulnerability assessment, early warning systems, loss and damage assessment, and risk mitigation planning. In [10], flood risk management at different levels of planning and action has been discussed. In [11], a framework for less developed countries has been proposed to evaluate policies and strategies adopted to mitigate the effects of floods. The paper by [12] covers different phases involved in flood management, focusing on the flood-prone river basins of India. Traditional flood management using ground-based data like rainfall and river discharge is neither time- nor cost-effective. Kourgialas and Karatzas [13] present an approach to determine flood-hazard regions using various relevant geographical data obtained from Geographic Information System (GIS) modelling. In [14], the usage of spaceborne remote sensing for different aspects of flood management has been investigated. Two-stage stochastic programming to account for the randomness in flood risk management is discussed in [15]. The paper [16] presents a survey of the application of computational intelligence-based systems in flood management. Iqbal et al. [17] review articles on computer vision applied to different phases of management and planning after a flood. A few papers like [18,19,20] present early warning systems devised for flood management.

The casualties and economic losses caused by a natural disaster necessitate the immediate initiation of response and recovery activities. In a post-disaster scenario, the localities affected by the disaster require various critical resources and services to mitigate the local impact of the disaster. The disaster management authority allocates available resources among emergency sites based on the requirement of resources, subject to resource availability. Decision-making during or after a natural disaster can be very complex, considering its dynamics and severity. To automate the decision-making process, a computer-based decision support system (DSS) that draws on various ground-truth parameters and relevant data can be highly effective for a quick and unbiased response to emergency sites. A DSS can support a disaster management authority with limited experience in making quick decisions based on a model developed from previous experiences. Decision support systems for disaster management are reported in [21,22,23,24,25,26]. Wallace and De Balogh [26] conceptualize the operational, tactical, and strategic decision-making for different natural and manmade disasters. In [21], the concepts of decision-making in various stages of disaster management have been discussed. Zhou et al. [27] provide an overview of emergency decision-making theory and methods for natural disasters from a methodological perspective. Sati [25] presents a scalable computing environment for an emergency response DSS to sustain response activities during power and network interruptions. Newman et al. [28] present a detailed review of decision support systems for natural hazard risk reduction. Aifadopoulou et al. [29] develop a web-based, GIS-enabled intelligent DSS to implement protection and management measures that optimally address transport networks and infrastructure.
Zamanifar and Hartmann [30] present a systematic case study of a structured framework to suggest decision attributes for disaster recovery planning of transportation networks. A DSS for enhancing performance in wildfire suppression is presented in [31] using multi-sensor technologies and GIS functionalities. An integrated DSS for risk assessment associated with natural disasters is provided in [32]. Yaşa et al. [33] deal with the clearance of debris in a post-disaster scenario to maximize road network accessibility using a stochastic mathematical model; meta-heuristic solutions are used to solve this problem for large-scale networks. The paper [34] addresses evacuation modelling in disaster scenarios using betweenness centrality to minimize the number of people waiting for rescue. Korkou et al. [35] focus on developing a humanitarian logistics framework to minimize the losses due to supply shortage, using meta-heuristic optimization algorithms to solve the logistics problem. A smartphone-based application for simulating flood disaster evacuation is presented in [36]. In general, decision tools are developed for the different phases of disaster management: prevention, preparedness, response, and recovery. This paper mainly focuses on developing a DSS for resource allocation in the response stage of disaster management.

Decision support systems for resource allocation during a disaster are reported in works such as [37, 38]. Kondaveti and Ganz [37] develop a decision support system for resource allocation in three phases: clustering the victims, resource allocation, and resource deployment. Hashemipour et al. [39] describe a framework based on a multi-agent coordination, simulation-based decision-support system. The system helps response managers in a community-based response operation test and evaluate all possible team design configurations and select the highest-performing team. Li et al. [40] and Wang and Zhang [41] use agent-based frameworks for the distribution of resources during a disaster scenario. Othman et al. [42] propose a multi-agent architecture for the management of emergency supply chains (ESC), in which a DSS states and solves the scheduling problem for the delivery of resources from the supply zones to the crisis-affected areas. Sepúlveda et al. [43] and Sepúlveda and Bull [44] report model-driven DSSs for vehicle routing to distribute relief supplies in natural disaster situations.

Resource allocation in emergency scenarios using different optimization methodologies is reported in various works like [45,46,47,48]. Wang et al. [47] present a multi-objective cellular genetic algorithm (MOCGA) for resource allocation in a post-disaster scenario. In [49], a project management and scheduling problem to assign personnel to different disaster locations is formulated and solved by a hybrid meta-heuristic algorithm. A vehicle scheduling and routing problem during an emergency is modelled with integer linear programming in [50] to minimize the total transportation cost of conducting the necessary activities. A post-disaster allocation of limited repair crews for the recovery of infrastructure networks is proposed by combining agent-based modelling and reinforcement learning [51]. Agent-based modelling is also used for resource allocation to incorporate the relief urgency and the behaviour of different aid carriers and providers [52]. In that work, a relief urgency index is computed from factors such as time-varying demand, population density, the ratio of frail population, damage condition, and the time elapsed since the last delivery, and this index is used to improve relief distribution after a large-scale disaster. However, this framework does not consider the weights of the different factors, and the scalability of the approach is not reported. Resource allocation during simultaneous disasters is addressed using stochastic optimization techniques in [53], where insufficient national resources are to be allocated among the disaster regions.
In [54], a resource allocation problem in a limited-resource scenario is formulated as a non-cooperative game with the disaster locations as the players, where the Nash equilibrium gives the solution of the game; a mathematical analysis shows that a pure-strategy Nash equilibrium always exists for the game. Nagurney et al. [55] present a supply chain for disaster relief operations by multiple non-governmental organizations (NGOs) using a generalized Nash equilibrium-based game-theoretic framework. In this work, the NGOs compete with each other for financial funds from donors and then supply relief materials to the disaster victims. Wang et al. [56] propose a multi-period allocation of scarce resources in a post-disaster scenario while maintaining equity. Wang et al. [57] present a hierarchical refugee evacuation scenario using multi-objective optimization, where refugees are classified into different hierarchical levels. In [58], a systematic review of articles focused on disaster relief logistics is provided. Nappi and Souza [59] propose the selection of temporary shelters based on a hierarchy of criteria and sub-criteria.

As most of the administrative structure of government systems is hierarchical in nature, the resource allocation framework should reflect the hierarchical structure of resource allocation and distribution. In general, resources are propagated from the top-level authority to the emergency sites through various intermediate levels. Hierarchical approaches to resource distribution are reported in [60,61,62]. Ghaffari et al. [60] propose resource distribution over a supply chain network with multiple customers using particle swarm optimization and mixed-integer programming. Widener and Horner [62] propose a hierarchical capacitated-median model for deciding the size and placement of relief locations under varying demand during hurricane relief distribution. Özdamar and Demir [61] propose a multi-level clustering algorithm-based hierarchical cluster and router procedure (HOGCR) for efficient vehicle routing that aims to minimize the travel time. Here, demand nodes are grouped into hierarchical clusters of parent and child nodes, and the routing problem is solved as a capacitated network flow problem. A priority-based hierarchical model is discussed in [63] for the distribution of emergency goods in disaster logistics.

In this paper, we propose a hierarchical resource and task allocation architecture for allocating resources during floods among the different disaster control centres. In general, resources are allocated from top-level to bottom-level control centres based on the requirement of resources at various emergency sites. The different resources can be disaster management teams, UAVs, transport vehicles, relief materials, medical teams, etc. Resource allocation at the upper level is performed based on the total available resources, the resource requirements, and the priority of the next-level crisis locations. The crisis locations need various resources to conduct disaster response tasks. The priorities of these crisis locations are decided based on their population density, disaster-affected area, level of disaster, the static and dynamic conditions of the road network, etc. At the bottom level, apart from allocating relief supplies to different emergency sites, many other tasks, such as search and exploration and evacuation operations, are also executed. We develop a task allocation architecture in which all tasks, including resource allocation, are handled at the bottom level. The task allocation is performed based on the resource requirement of the task, the priority of the task, the task location, and the time required to reach the task location. We consider the administrative structure of India for the development of the hierarchical allocation architecture. The different administrative layers of the Indian government are Centre, State, District, Block, and Village government [64, 65]. In the Indian context, in the case of a flood, resources are allocated from the state level to the block level through the district level, and rescue operations are performed directly at the block level as per the needs of the emergency sites (ES).

The main contributions and significance of this work may be summarized as follows:

  1. Development of a general framework for a scalable hierarchical resource allocation architecture.

  2. Integration of the resource and task allocation architecture for developing a decision support system (DSS) for disaster management.

The remainder of this paper is organized as follows. Section 2 describes the general formulation of the resource and task allocation architecture. The proposed architecture in the context of floods is described in Sect. 3. Sections 4 and 5 describe the resource and task allocation algorithm framework. A detailed example of resource and task allocation is presented in Sect. 6. The software framework of the proposed architecture is described in Sect. 7. Concluding remarks are given in Sect. 8.

2 Resource and Task Allocation Architecture

In this section, a detailed resource and task allocation framework for managing the needs during a disaster is discussed. In general, disaster management authorities allocate various resources among the affected regions/units based on the availability of the resources at an upper administrative level and the demand at the lowest administrative level. In the proposed framework, the affected units are the crisis locations, and the different affected units are assumed to be under the jurisdiction of the same administration, that is, under the same state, district, or block. Emergency response is usually a hierarchical process involving interaction between various agencies, as mentioned in [66] among many other similar works. In our work, we identify a three-layer hierarchical framework, as depicted in Fig. 1. The three hierarchical layers considered are the top-level control centre (TCC), the middle-level control centre (MCC), and the low-level control centre (LCC). Higher authorities like the TCC allocate resources to lower levels based on disaster information such as flood maps and road network maps, the demand for resources, and resource availability. Resources are distributed to affected people at the lowest level, and information about the status of the roads, the water level, and many other parameters is passed to the higher authority. At the LCC level, apart from resource allocation, rescue operations such as surveillance and survivor detection are performed.

Fig. 1
figure 1

Hierarchical architecture for a disaster management scenario

A typical resource allocation architecture in a disaster management scenario should be scalable to large-scale operations and flexible enough to accommodate varying demand and supply conditions. The resource allocation framework should also be able to accommodate a multi-layer organizational structure to handle a large-scale disaster scenario. The proposed framework considers the allocation of resources and other tasks related to rescue operations, such as search and rescue, in an integrated hierarchical manner. The hierarchical layers can differ based on the administrative structure of the country. The overall architecture, in a nutshell, takes inputs from the user about the resources, demands, and specifications of the affected units and generates the final resource allocation for each affected unit.

A typical three-tier hierarchical resource allocation scenario proposed in this paper is shown in Figs. 2 and 3. The resource pool is generally possessed by the top-level authority. Resources are distributed from the TCC to the next hierarchical levels (MCC, LCC, emergency sites), forming the supply chain networks. When a natural disaster occurs over a vast area in a state, the authorities of the different affected units/emergency locations (ES) communicate their respective demands for resources and the level of crisis to the immediate higher level, i.e. the LCC. Such resource requests from the different LCCs are aggregated and conveyed to the MCC. Similarly, the resources required by the different MCCs are considered at the TCC. The TCC allocates resources to the MCCs based on factors such as the availability and requirements of resources and the priority of the affected units. The priority of each affected unit is represented through its weightage, which is decided based on the population density, disaster-affected area, level of disaster, the static and dynamic conditions of the road network, etc. The same principle applies to the allocation from the MCC to the LCC level.

Fig. 2
figure 2

Block diagram for resource and task allocation

Fig. 3
figure 3

Disaster relief network: supply and demand points

At the LCC, the overall tasks involve relief supply, evacuation, search and rescue, and survivor tracking, among many other tasks. These tasks can be dynamic or static. Tasks such as exploring an area are considered static since the object of interest and its location do not change with time. On the other hand, tasks such as survivor tracking are considered dynamic tasks since the survivor may move and change their location, or the number of survivors can change with time. At LCC, the overall work related to disaster is performed in a systematic task allocation framework to handle critical tasks in a resource-constrained scenario. The task allocation is performed based on requirement, priority, and task location. In Fig. 2, LCC-\(i\hat{i}\) denotes the \(\hat{i}^{\text {th}}\) LCC under \(i^{\text {th}}\) MCC. Similarly, ES-\(i \hat{i} \tilde{i}\) means the \(\tilde{i}^{\text {th}}\) emergency site which comes under the \(\hat{i}^{\text {th}}\) LCC of \(i^{\text {th}}\) MCC.

3 Flood Management

The proposed hierarchical framework is developed for relief and rescue operations during a flood scenario. The detailed architecture is shown in Fig. 4. In a flood scenario, the different resources are disaster management teams, unmanned aerial vehicles (UAVs), transport vehicles, relief materials, and medical teams. Tasks such as search and rescue operations, evacuation, and survivor tracking are performed at the local levels. The requirements for the different resources from the different units are aggregated to obtain the total requirement. The authority considers both prior and current information about the immediate lower-level locations to decide the allocation matrix at each level. Predicted information about the road networks and the water level is also considered for decision-making. As shown in Fig. 4, in the proposed hierarchical architecture, information about the crisis locations flows from the bottom level to the top level, whereas the allocation of resources/tasks is performed from the top level to the bottom level. In Fig. 4, the arrows marked in red, green, black, blue, and teal indicate current information, resources, prior information, tasks, and forecasted information, respectively.

In the case of the TCC, the prior information about an MCC includes population, disaster handling capability, and existing resources, whereas the current information includes the level of devastation, the affected flooded area, and the requirement of resources. Disaster handling capability is the capability of a crisis location to handle a disaster, and it depends on the existing infrastructure and demography of an area. As shown in Fig. 4, the resources at the TCC are allocated to two different MCC locations, MCC-1 and MCC-2. At the lower level, resource allocation is performed considering more detailed information about the affected units. At the MCC level, the prior information about the LCC locations includes population density, economic level, disaster handling capability, demography, existing resources, and the existing supply chain. At this stage, the level of devastation, the affected area, the status of the road network, and the water level of the different LCCs are considered for arriving at the allocation matrix. In Fig. 4, the resources allocated to MCC-1 are distributed to LCC-11 and LCC-12; similarly, the resources allocated to MCC-2 are distributed to LCC-21 and LCC-22. At the lowest level, the different tasks are performed based on the priority of the task, and the proportion of relief materials is decided based on the population, demography, economic level, and road network of the emergency sites. In Fig. 4, emergency site ES-111 belongs to LCC-11, ES-121 belongs to LCC-12, ES-211 and ES-212 belong to LCC-21, ES-221, ES-222, and ES-223 belong to LCC-22, and TB 11-1 refers to task 1 of LCC-11.

Fig. 4
figure 4

Resource and task allocation architecture in case of flood scenario

4 Resource Allocation

The resource and task allocation architecture for a three-tier system is discussed in this section. The proposed scheme is scalable to a multi-layer architecture. In the three-tier architecture, the TCC allocates resources to different MCC units, and each MCC unit allocates to different LCC units.

4.1 Resource Allocation Framework

Let \(R_{1}^{T},\ldots , R_{n }^{T}\) be the different resources available at the TCC level for allocation to m different MCC units, say, \(M_{1}\),..., \(M_{m}\). The requirement of the \(i^{\text {th}}\) MCC \((M_i)\) for the \(j^{\text {th}}\) resource \((R_j^T)\) is denoted as \(R_{ij}^{M}\). The significance of the allocation of resources to an MCC often depends on various factors such as population density \((\rho )\), disaster-affected area (a), and level of disaster \((\gamma )\), among other factors. To account for such factors, it is assumed that there are \(\eta\) such factors at each \(i^{\text {th}}\) MCC level, denoted by \(f_{ik}\), where \(k=1, \ldots , \eta\).

To account for the importance of these different factors, each of them is assigned a normalized weight \(w_{k}\) such that the weights sum up to 1. The weight \(w_k\) is considered to be the same for each MCC. Then, the weight \(p_i^M\) of the \(i^{\text {th}}\) MCC unit is,

$$\begin{aligned} p_{i}^{M}= \sum _{k=1}^{\eta } \frac{ w_{k} f_{ik} }{\sum _{i=1}^{m} f_{ik}}~. \end{aligned}$$
(1)

A typical example of weight calculation is provided in Sect. 6.1. The resource allocation vector of \(j^{\text {th}}\) resource for the \(i^{\text {th}}\) MCC is as follows:

$$\begin{aligned} W_{ij}^{M}= \frac{p_{i}^{M} R_{ij}^{M}}{ \sum _{i=1}^{m}p_{i}^{M} R_{ij}^{M}}~. \end{aligned}$$
(2)

The resource allocated at the TCC level is the minimum of the available quantity and the total requirement from the different MCC units. The allocation of the \(j^{\text {th}}\) resource at the TCC level is calculated as follows:

$$\begin{aligned} A_{j}^{T}= \text {min}\left( R_{j}^{T}, \sum _{i=1}^{m} R_{ij}^{M} \right) ~. \end{aligned}$$
(3)

Then, the maximum possible allocation of the \(j^{\text {th}}\) resource for the \(i^{\text {th}}\) MCC is calculated as follows.

$$\begin{aligned} F_{ij}^{M}= A_{j}^{T} W_{ij}^{M}~. \end{aligned}$$
(4)

Let us consider that \(i^{\text {th}}\) MCC has \(l_{i}\) LCC units indexed by \(\hat{i} = 1, \ldots , l_{i}\), and the requirement of \(j^{\text {th}}\) resource by \(\hat{i}^{\text {th}}\) LCC (\(L_{\hat{i}}\)) is \(R_{\hat{i}j}^{M_{i}L}\), where \(\sum _{\hat{i}=1}^{l_{i}} R_{\hat{i}j}^{M_{i}L} =R_{ij}^{M}\). Then, the allocation of the \(j^{\text {th}}\) resource to \(i^{\text {th}}\) MCC is calculated as follows:

$$\begin{aligned} A_{ij}^{M}= \text {min} \left( F_{ij}^{M}, R_{ij}^{M} \right) =\text {min}( \text {min}( R_{j}^{T}, \sum _{i=1}^{m} R_{ij}^{M} ) W_{ij}^{M} , R_{ij}^{M} )~. \end{aligned}$$
(5)
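The TCC-level allocation rule of Eqs. (1)–(5) can be sketched in code as follows. This is a minimal illustration, not the paper's implementation; the factor values, factor weights, demands, and supply below are hypothetical.

```python
from typing import List

def unit_weights(factors: List[List[float]], w: List[float]) -> List[float]:
    """Eq. (1): weight p_i^M of each MCC from eta factors with normalized weights w."""
    m, eta = len(factors), len(w)
    col_sums = [sum(factors[i][k] for i in range(m)) for k in range(eta)]
    return [sum(w[k] * factors[i][k] / col_sums[k] for k in range(eta))
            for i in range(m)]

def allocate_tcc(supply_j: float, demand_j: List[float], p: List[float]) -> List[float]:
    """Eqs. (2)-(5): allocation of one resource j among the MCC units."""
    denom = sum(pi * dj for pi, dj in zip(p, demand_j))
    W = [pi * dj / denom for pi, dj in zip(p, demand_j)]       # Eq. (2)
    A_T = min(supply_j, sum(demand_j))                         # Eq. (3)
    return [min(A_T * Wi, dj) for Wi, dj in zip(W, demand_j)]  # Eqs. (4)-(5)

# Two MCCs, two factors (e.g. population density and affected area)
p = unit_weights([[300.0, 50.0], [100.0, 150.0]], [0.6, 0.4])  # p = [0.55, 0.45]
alloc = allocate_tcc(120.0, [100.0, 60.0], p)
```

By construction, each unit's allocation never exceeds its stated requirement, and the unit weights sum to 1, consistent with Eq. (11) below.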

The resources allocated at the MCC level are further distributed to the LCC level. Consider the allocation of resources at the \(i^{\text {th}}\) MCC. Let the weight of the \(\hat{i}^{\text {th}}\) LCC of the \(i^{\text {th}}\) MCC be \(p_{i\hat{i}}^{L}\), and let there be \(\hat{\eta }\) different factors which affect the allocation at the MCC level. The value of the \(\hat{k}^\text {th}\) factor of the \(\hat{i}^{\text {th}}\) LCC of the \(i^{\text {th}}\) MCC unit is \(f_{i\hat{i}\hat{k}}\). The weight given to the \(\hat{k}^\text {th}\) factor is \(w_{\hat{k}}\), with \(\sum _{\hat{k}=1}^{\hat{\eta }} w_{\hat{k}} =1\). Then, the weight of the \(\hat{i}^{\text {th}}\) LCC unit is,

$$\begin{aligned} p_{i\hat{i}}^{L}= \sum _{\hat{k}=1}^{\hat{\eta }} \frac{ w_{\hat{k}} f_{i\hat{i}\hat{k}} }{\sum _{\hat{i}=1}^{l_{i}} f_{i\hat{i}\hat{k}}}~. \end{aligned}$$
(6)

The resource allocation vector of the \(j^{\text {th}}\) resource for the \(\hat{i}^{\text {th}}\) LCC is

$$\begin{aligned} W_{\hat{i}j}^{M_{i}L}= \frac{p_{i\hat{i}}^{L} R_{\hat{i}j}^{M_{i}L}}{ \sum _{\hat{i}=1}^{l_{i}}p_{i\hat{i}}^{L} R_{\hat{i}j}^{M_{i}L}}~. \end{aligned}$$
(7)

Then, the allocation of the \(j^{\text {th}}\) resource for the \(\hat{i}^{\text {th}}\) LCC unit of the \(i^{\text {th}}\) MCC is

$$\begin{aligned} A_{\hat{i}j}^{M_{i}L}= \text {min}\left( \text {min}(A_{ij}^{M}, \sum _{\hat{i}=1}^{l_{i}} R_{\hat{i}j}^{M_{i}L} ) W_{\hat{i}j}^{M_{i}L}, R_{\hat{i}j}^{M_{i}L} \right) ~. \end{aligned}$$
(8)
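The MCC-to-LCC cascade of Eqs. (6)–(8) follows the same pattern, with the MCC's received allocation playing the role of the supply. A minimal sketch with hypothetical numbers:

```python
def lcc_allocation(A_M: float, demand: list, p_L: list) -> list:
    """Eqs. (7)-(8): split an MCC's received allocation A_M among its LCC units."""
    denom = sum(p * d for p, d in zip(p_L, demand))
    W = [p * d / denom for p, d in zip(p_L, demand)]     # Eq. (7)
    cap = min(A_M, sum(demand))                          # inner min in Eq. (8)
    return [min(cap * w, d) for w, d in zip(W, demand)]  # Eq. (8)

# An MCC received 80 units; its two LCCs demand 50 and 40 with weights 0.7 and 0.3
alloc_L = lcc_allocation(80.0, [50.0, 40.0], [0.7, 0.3])
# LCC-1's share is capped at its demand of 50; LCC-2 receives 80 * (12/47)
```

Note that when one unit's demand caps its share (LCC-1 here), some capacity remains undistributed, which is presumably why the allocation in Algorithm 1 is performed iteratively.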

4.2 Allocation Weightage

The weightages (\(w_{k}\) and \(w_{\hat{k}}\)) of the different factors used at the different levels (MCC, LCC) can be decided by the disaster management authority; this decision can benefit from experience with earlier similar disasters. From (1), it can be shown that the sum of the weights of all MCCs is 1. The total weight of the m MCC units is:

$$\begin{aligned} \sum _{i=1}^{m} p_{i}^{M} = \sum _{i=1}^{m} \sum _{k=1}^{\eta } \frac{ w_{k} f_{ik} }{\sum _{i=1}^{m} f_{ik}} = \sum _{k=1}^{\eta } \sum _{i=1}^{m} \frac{ w_{k} f_{ik} }{\sum _{i=1}^{m} f_{ik}} = \sum _{k=1}^{\eta } w_{k} \sum _{i=1}^{m} \frac{ f_{ik} }{\sum _{i=1}^{m} f_{ik}}~. \end{aligned}$$
(9)

Since \(\sum _{i=1}^{m} \frac{ f_{ik} }{\sum _{i=1}^{m} f_{ik}} = 1\), we can write

$$\begin{aligned} \sum _{i=1}^{m} p_{i}^{M} = \sum _{k=1}^{\eta } w_{k}~. \end{aligned}$$
(10)

Because the normalized weights \(w_k,~k=1,\ldots ,\eta\) sum up to 1 by definition, Eq. (10) becomes

$$\begin{aligned} \sum _{i=1}^{m} p_{i}^{M} = 1 ~. \end{aligned}$$
(11)

Similarly, the sum of the weights of all LCC units can also be shown as

$$\begin{aligned} \sum _{\hat{i}=1}^{l_{i}} p_{i\hat{i}}^{L} = 1~. \end{aligned}$$
(12)

Considering Eqs. (11) and (12), the weight of each crisis location is less than one, and the sum of the weights of all crisis locations is one. The proposed resource allocation framework does not pose any restrictions on the value of each factor (\(f_{ik}\), \(f_{i\hat{i}\hat{k}}\)), the number of factors (\(\eta\), \(\hat{\eta }\)), or the number of crisis locations. The calculated weight of each crisis location is normalized and non-dimensional, and since the weights sum to one, the total resources will be distributed among the crisis locations. Therefore, the overall architecture is scalable in terms of the number of locations, the number of factors considered in the allocation process, and the values of these factors.
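The normalization property of Eq. (11) can be checked numerically for arbitrary positive factor values; the sizes and values below are arbitrary test data, not from the paper.

```python
# Numeric check of Eq. (11): for any positive factors f_ik and normalized
# factor weights w_k, the unit weights p_i^M sum to 1.
import random

random.seed(0)
m, eta = 5, 4  # 5 MCC units, 4 factors (arbitrary)
f = [[random.uniform(1, 100) for _ in range(eta)] for _ in range(m)]
w = [random.uniform(0, 1) for _ in range(eta)]
s = sum(w)
w = [x / s for x in w]  # normalize so sum(w) = 1

col = [sum(f[i][k] for i in range(m)) for k in range(eta)]  # column sums over units
p = [sum(w[k] * f[i][k] / col[k] for k in range(eta)) for i in range(m)]  # Eq. (1)
total = sum(p)  # equals 1 by Eq. (11), independent of the factor values
```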

4.3 Computation of Demand Based on Forecast

The basic demand of the \(j^{\text {th}}\) resource (\(D_{j}^{b}\)) from any unit is the demand placed by that unit based on its resource requirements; it is then adjusted based on the forecast of the disaster level and the road network status. The disaster level is a measure of the severity of a disaster-affected area. The overall demand of the \(j^{\text {th}}\) resource (\(D_{j}^{o}\)) used for the allocation is calculated from the basic demand and the excess demand derived from the forecast information.

Let there be r forecasted parameters which affect the allocation. The value of the \(n^{\text {th}}\) forecasted parameter of the \(i^{\text {th}}\) MCC is \(E_{in}^{M}\). The relative criticality of the \(n^{\text {th}}\) parameter of the \(i^{\text {th}}\) MCC \((C_{in}^{M})\) is calculated as,

$$\begin{aligned} C_{in}^{M} =\frac{E_{in}^{M}}{ E_{n}^{\text {max} }} \end{aligned}$$
(13)

where \(E_{n}^{\text {max} }\) is the maximum value of the \(n^{\text {th}}\) parameter. Let the basic demand of the \(j^{\text {th}}\) resource of the \(i^{\text {th}}\) MCC be \(D_{ij}^{b}\). Then, the excess demand (\(D_{ij}^{e}\)) of a resource is calculated as follows,

$$\begin{aligned} D_{ij}^{e}= \bigg (\sum _{n=1}^{r} \zeta _{n} C_{in}^{M}\bigg ) D_{ij}^{b} \end{aligned}$$
(14)

where \(\zeta _{n}\) is the relative weight assigned to the \(n^{\text {th}}\) of the r parameters. If X percent of the basic demand is considered to accommodate the forecasted values, then,

$$\begin{aligned} \sum _{n=1}^{r} \zeta _{n} = \frac{X}{100}~. \end{aligned}$$
(15)

The overall demand of \(j^{\text {th}}\) resource of \(i^{\text {th}}\) MCC \((D_{ij}^{o})\) is calculated as,

$$\begin{aligned} D_{ij}^{o} = D_{ij}^{b} + D_{ij}^{e}~. \end{aligned}$$
(16)

Clearly, the maximum value of the overall demand is \((1+\frac{X}{100})D_{ij}^{b}\) and the minimum value is \(D_{ij}^{b}\). Similarly, the demand modification based on the forecast can be extended to the other layers.

Let us consider an example where allocation at the TCC level is performed for two MCC units. Let the ratio of the predicted to the current disaster level and the ratio of the predicted to the current road network status be used to calculate the excess demand. It is assumed that higher values of these parameters indicate higher severity, and the minimum value of each ratio is assumed to be 1. The basic demands of the two MCC units and the parameters are shown in Table 1.

Table 1 Excess demand parameters

In this case, we have two forecast parameters: the disaster level \((E_{1})\) and the road network status \((E_{2})\). The maximum values of \(E_{1}\) and \(E_{2}\) are \(E_{1}^{\text {max}}\) and \(E_{2}^{\text {max}}\), respectively; here, both \(E_{1}^{\text {max}}\) and \(E_{2}^{\text {max}}\) are 3. Therefore, the relative criticality of the disaster level for MCC-1 is \(\frac{2}{3}\). Similarly, the relative criticality of all forecast parameters for each MCC unit is derived. Let the value of X be 25, that is, a maximum of 25% of the basic demand is considered to accommodate the future scenario. The relative weights (\(\zeta _{n}\)) of the parameters are taken as 0.15 and 0.10. Using Eq. (14), the excess demand \((D_{1j}^{e})\) of the \(j^{\text {th}}\) resource for MCC-1 is calculated as follows.

$$\begin{aligned} D_{1j}^{e} = \left( 0.15 \times 0.66+ 0.1 \times 1 \right) \times 100 =19.9 \end{aligned}$$
(17)

So, the overall demand of \(j^{\text {th}}\) resource for MCC-1 is

$$\begin{aligned} D_{1j}^{o}= D_{1j}^{b} +D_{1j}^{e} =100+19.9= 119.9 \end{aligned}$$
(18)

Similarly, the overall demand of MCC-2 is calculated. The calculations of overall demand are presented in Table 2.

Table 2 Overall demand calculation
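The excess- and overall-demand computation of Eqs. (14)–(16) can be sketched in a few lines of Python, using the MCC-1 values from the worked example (the function name and data layout are ours, for illustration only):

```python
def overall_demand(basic, criticalities, weights):
    """Eqs. (14) and (16): excess demand and overall demand for one unit.

    basic         -- basic demand D_ij^b
    criticalities -- relative criticality C_in^M of each forecast parameter
    weights       -- relative weights zeta_n (summing to X/100, per Eq. (15))
    """
    excess = sum(z * c for z, c in zip(weights, criticalities)) * basic
    return excess, basic + excess

# MCC-1 example: basic demand 100, criticalities [0.66, 1], weights [0.15, 0.10]
excess, overall = overall_demand(100, [0.66, 1.0], [0.15, 0.10])
# excess is 19.9 and overall is 119.9, matching Eqs. (17)-(18)
```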

4.4 Algorithm

The overall algorithm for resource allocation at the TCC level is shown in Algorithm 1 and is performed iteratively. The algorithm for resource allocation at the MCC level follows analogous steps.


4.5 Allocation Sensitivity

In this section, the sensitivity of the resource allocation at the TCC level with respect to the requirements of resources at the MCC level and the availability of resources at the TCC level is calculated. The sensitivity of the allocation of the \(j^{\text {th}}\) resource for the \(i^{\text {th}}\) MCC with respect to the requirement of the \(j^{\text {th}}\) resource for the \(i^{\text {th}}\) MCC is \(\frac{\partial A_{ij}^{M} }{ \partial R_{ij}^{M}}\). Similarly, the sensitivity of the allocation with respect to the availability of the \(j^{\text {th}}\) resource at the TCC is \(\frac{\partial A_{ij}^{M} }{ \partial R_{j}^{T}}.\)

Case 1

The available resources at the TCC are lower than the sum of the individual demands of the different MCCs; i.e. \((R_{j}^{T} \le \sum _{i=1}^{m} R_{ij}^{M} )\) and \((R_{j}^{T} W_{ij}^{M} \le R_{ij}^{M} )\). Then, from (5),

$$\begin{aligned} A_{ij}^{M}= R_{j}^{T} W_{ij}^{M} \end{aligned}$$
(19)

Therefore,

$$\begin{aligned} \frac{\partial A_{ij}^{M} }{ \partial R_{ij}^{M}} = R_{j}^{T} \frac{p_{i}^{M} \sum _{i=1}^{m}p_{i}^{M} R_{ij}^{M} - (p_{i}^{M})^2 R_{ij}^{M} }{(\sum _{i=1}^{m}p_{i}^{M} R_{ij}^{M})^2} \end{aligned}$$
(20)
$$\begin{aligned} \frac{\partial A_{ij}^{M} }{ \partial R_{j}^{T} }= W_{ij}^{M} \end{aligned}$$
(21)

Case 2

The available resources at the TCC are higher than the sum of the individual demands of the different MCCs; i.e. \((R_{j}^{T} \ge \sum _{i=1}^{m} R_{ij}^{M} )\) and \((\sum _{i=1}^{m} R_{ij}^{M} W_{ij}^{M} \le R_{ij}^{M} )\). Then, using (5),

$$\begin{aligned} A_{ij}^{M}= \sum _{i=1}^{m} R_{ij}^{M} \frac{p_{i}^{M} R_{ij}^{M}}{ \sum _{i=1}^{m}p_{i}^{M} R_{ij}^{M}}. \end{aligned}$$
(22)
$$\begin{aligned} \frac{\partial A_{ij}^{M} }{ \partial R_{ij}^{M}} = \frac{p_{i}^{M} (\sum _{i=1}^{m} R_{ij}^{M} + R_{ij}^{M}) \sum _{i=1}^{m}p_{i}^{M} R_{ij}^{M} - (p_{i}^{M})^2 R_{ij}^{M} \sum _{i=1}^{m} R_{ij}^{M} }{(\sum _{i=1}^{m}p_{i}^{M} R_{ij}^{M})^2} \end{aligned}$$
(23)
$$\begin{aligned} \frac{\partial A_{ij}^{M} }{ \partial R_{j}^{T} }= 0 \end{aligned}$$
(24)

Case 3

\(R_{ij}^M \le \text {min}( R_{j}^{T}, \sum _{i=1}^{m} R_{ij}^{M} ) W_{ij}^{M}\). Then,

$$\begin{aligned} \frac{\partial A_{ij}^{M} }{ \partial R_{ij}^{M}} = 1 ; \quad \frac{\partial A_{ij}^{M} }{ \partial R_{j}^{T} }= 0 \end{aligned}$$
(25)

In all three cases, the allocation sensitivity with respect to the requirements is nonzero at the higher level; therefore, the allocation of resources at the TCC level depends on the resources demanded by the individual MCCs. In Case 3, the allocation sensitivity with respect to the requirements is 1, as this represents the situation where the proportional demand of an MCC is comparatively lower than that of other locations and resources are available at the TCC level. Also, the allocation sensitivity with respect to the availability is nonzero only if the resources available at the TCC are not sufficient to cater to all the demands of the individual MCCs. The same results can be extended to allocation at other layers.
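The closed-form sensitivity of Case 1 can be checked numerically against a finite-difference derivative. The sketch below does this in Python with illustrative numbers (the variable names and sample values are ours, not from the paper's tables):

```python
def allocation(RT, R, p, i):
    """Case 1 allocation of Eq. (19): A_ij = R_j^T * W_ij,
    with W_ij = p_i R_ij / sum_k p_k R_kj."""
    S = sum(pk * rk for pk, rk in zip(p, R))
    return RT * p[i] * R[i] / S

def sensitivity(RT, R, p, i):
    """Closed form of Eq. (20): RT * (p_i S - p_i^2 R_i) / S^2."""
    S = sum(pk * rk for pk, rk in zip(p, R))
    return RT * (p[i] * S - p[i] ** 2 * R[i]) / S ** 2

# Illustrative data: two MCCs, TCC availability 50 units of one resource.
RT, R, p, i = 50.0, [100.0, 150.0], [0.4, 0.6], 0

# Central finite difference in R_ij approximates dA/dR_ij.
h = 1e-5
fd = (allocation(RT, [R[0] + h, R[1]], p, i)
      - allocation(RT, [R[0] - h, R[1]], p, i)) / (2 * h)
# fd agrees with sensitivity(RT, R, p, i) to numerical precision
```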

5 Task Allocation at LCC Level

Every task is defined in terms of the resources of different types required to accomplish it. For a flood, it is assumed that each task will require ground vehicles (GV), unmanned aerial vehicles (UAV), and boats (B) to perform operations such as relief supply, surveillance, and survivor tracking. So, a task \(T_{x}\) which requires \(T_{x_{GV}}\) GVs, \(T_{x_{UAV}}\) UAVs, and \(T_{x_{B}}\) boats is defined as,

$$\begin{aligned} T_{x}= [T_{x_{GV}}, T_{x_{UAV}}, T_{x_{B}}] \end{aligned}$$
(26)

Similarly, the priority of each task is defined based on the criticality of the sub-tasks involved. In this case, each task is composed of different sub-tasks: normal supply (NS), relief supply (RS), surveillance (SL), survivor tracking (ST), and critical supply (CS). The priority of a task \(T_{x}\) is defined as a vector consisting of the priorities of its sub-tasks,

$$\begin{aligned} P_{T_{x}}= [NS^{p}, RS^{p}, SL^{p}, ST^{p}, CS^{p}] \end{aligned}$$
(27)

where \(NS^{p}\), \(RS^{p}\), \(SL^{p}\), \(ST^{p}\), \(CS^{p}\) are the priorities associated with the sub-tasks NS, RS, SL, ST, CS. Let the priority of the \(r^{\text {th}}\) sub-task be \(P_{r}^{s}\). Then, for example, a task consisting of the sub-tasks surveillance and survivor tracking will have the following priority vector,

$$\begin{aligned} P_{ T_{x}}= [0, 0, P_{3}^{s}, P_{4}^{s}, 0] \end{aligned}$$
(28)

Let us consider that the priorities of the sub-tasks normal supply, relief supply, surveillance, survivor tracking, and critical supply are 1, 10, 50, 200, and 100, respectively. Let task \(T_{x_{1}}\) consist of surveillance and survivor tracking and require 10 UAVs, whereas task \(T_{x_{2}}\) consists of relief supply and requires 5 GVs and 3 boats. Then, tasks \(T_{x_{1}}\) and \(T_{x_{2}}\) are defined as \(T_{x_{1}}=[0,10,0]\) and \(T_{x_{2}}=[5,0,3]\), respectively. The priorities of \(T_{x_{1}}\) and \(T_{x_{2}}\) are \(P_{ T_{x_{1}}}= [0,0, 50, 200, 0]\) and \(P_{ T_{x_{2}}}= [0, 10,0,0,0]\). Resources are allocated from the available pool to the tasks with high priority, and the norm of the priority vector is used to sort the tasks. As the norm of \(P_{ T_{x_{1}}}\) is higher than that of \(P_{ T_{x_{2}}}\), task \(T_{x_{1}}\) has higher priority than \(T_{x_{2}}\). Therefore, if all the resources required to execute \(T_{x_{1}}\) are available, \(T_{x_{1}}\) will be executed prior to \(T_{x_{2}}\). If the priorities are equal, the distance from the resource location to the task location is considered for the resource allocation. This distance is calculated from the currently available road network; it can be derived more accurately from the dynamic condition of the road network, considering the predicted water level and the extent of the damage. Algorithm 2 gives the details of the task allocation algorithm.
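The priority-based sorting described above can be sketched directly from the example's priority vectors (a minimal Python illustration; the dict layout and function name are ours):

```python
import math

# Priority vectors from the running example, sub-task order [NS, RS, SL, ST, CS].
P = {"T_x1": [0, 0, 50, 200, 0],   # surveillance + survivor tracking
     "T_x2": [0, 10, 0, 0, 0]}     # relief supply

def priority_norm(vec):
    """Euclidean norm of a task's priority vector."""
    return math.sqrt(sum(v * v for v in vec))

# Higher norm = higher priority, so sort in descending order of norm.
order = sorted(P, key=lambda t: priority_norm(P[t]), reverse=True)
# order places T_x1 first, since its norm (about 206.2) exceeds that of T_x2 (10)
```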


6 Resource and Task Allocation Case Study

We consider an example of allocating resources during a disaster scenario caused by a flood in India. An overview of the administrative structure in India can be found in [64, 67]. The highest administrative layer of India is the central government (national level), headed by the Prime Minister of India. Different state governments, led by their respective chief ministers, fall under the central government. The subsequent layers in the hierarchy are districts, blocks, and villages. In our case study, we assume that the flood occurs in a particular state of India, and the state, district, and block levels of the administrative hierarchy are represented as TCC, MCC, and LCC, respectively. We consider the allocation of five distinct resources at the state (TCC) level to two districts, denoted \(D_{1}\) and \(D_{2}\) (MCCs), with different priorities. The resources allocated at each district level are further distributed to three individual blocks (LCCs) following the principles proposed in Sect. 4. District \(D_{1}\) has three blocks (\(D_{1}\_B_{1}\), \(D_{1}\_B_{2}\), \(D_{1}\_B_{3}\)), and District \(D_{2}\) also has three blocks (\(D_{2}\_B_{1}\), \(D_{2}\_B_{2}\), \(D_{2}\_B_{3}\)).

6.1 Weightage Calculation

For simplicity, let us consider the population density (per sq. km), disaster-affected area (\(\text {Km}^2\)), and disaster level of the MCC units for the weightage calculation at the TCC level. The level of disaster over an area can be classified into different levels; the typical classification adopted here runs from 1 to 5, with Level 1 being the least severe and Level 5 the most severe. Based on the current water level, road network status, and rainfall information gathered from the lower-level units, the authorities decide the level of disaster at the respective levels. The values of population density, disaster-affected area, and disaster level for each district and block are shown in Table 3.

Table 3 Allocation parameters for resource allocation

The weights \((w_{k})\) associated with the different factors, namely population density (\(f_{i1}\)), affected area (\(f_{i2}\)), and level of disaster (\(f_{i3}\)), are fixed at 0.4, 0.4, and 0.2, respectively. Higher weights are given to population density and disaster-affected area, as these two factors directly drive the relief requirement of a given area. These weights are typically decided by the allocation authority based on suitable judgement. Using (1) and the values given in Table 3, the weights \(p_1^M\) and \(p_2^M\) for districts \(D_{1}\) and \(D_{2}\), respectively, are calculated as

$$\begin{aligned} { p_1^M = 0.4 \times \frac{100}{100 + 130} + 0.4 \times \frac{1000}{1000 + 3000} + 0.2 \times \frac{5}{5 + 3} = 0.3989 \approx 0.4}~, \end{aligned}$$
(29)
$$\begin{aligned} { p_2^M = 0.4 \times \frac{130}{100 + 130} + 0.4 \times \frac{3000}{1000 +3000} + 0.2 \times \frac{3}{5 + 3} = 0.6010 \approx 0.6}~. \end{aligned}$$
(30)

It can be seen that the sum of the weights \(p_1^M\) and \(p_2^M\) is unity, as argued in Subsect. 4.2. Further, like the level of disaster, other factors, such as the disaster-handling capability of an area, can also be considered in the proposed framework, provided they are mapped to an equivalent quantitative factor satisfying the conditions in Subsects. 4.1 and 4.2. Similarly, based on the different parameters associated with the blocks given in Table 3, the weightage of each block under districts \(D_{1}\) and \(D_{2}\), respectively \(p_{1\hat{i}}^L\) and \(p_{2\hat{i}}^L\) for \(\hat{i} = 1,\ldots ,3\), is calculated: \(p_{11}^L = 0.4\), \(p_{12}^L = 0.25\), \(p_{13}^L = 0.35\), \(p_{21}^L = 0.26\), \(p_{22}^L = 0.48\), and \(p_{23}^L = 0.26\).
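The weight computation of Eqs. (29)–(30) can be reproduced with a short script over the Table 3 district values (a sketch; the dict layout and variable names are ours):

```python
# Factor weights for population density, affected area, and disaster level.
w = [0.4, 0.4, 0.2]

# Per-district factor values from Table 3.
factors = {"D1": [100, 1000, 5], "D2": [130, 3000, 3]}

# Each factor is normalised by its column total before weighting.
totals = [sum(f[k] for f in factors.values()) for k in range(len(w))]
p = {d: sum(wk * fk / tk for wk, fk, tk in zip(w, f, totals))
     for d, f in factors.items()}
# p["D1"] is about 0.3989 and p["D2"] about 0.6011; the two weights sum to unity
```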

Table 4 Availability of resources at State control centre (TCC)

6.2 Availability and Requirement of Resources

The available quantities of the five distinct resources \((R_{j}^{T})\) at the state control centre (TCC) are given in Table 4. The requirements for each of the five resources at the different blocks (LCCs) of districts \(D_{1}\) and \(D_{2}\) are provided in Tables 5 and 6, respectively. The resources claimed by the individual units are modified using the excess demand, considering the forecast scenario as detailed in Subsect. 4.3.

Table 5 Requirement of resources at different blocks (LCC) within the District \(D_{1}\)
Table 6 Requirement of resources at different blocks (LCC) within the District \(D_{2}\)

The requirements of resources at the block level (LCC) are gathered at the district level (MCC), and the resource requirements from each of the districts are in turn accumulated at the state level (TCC). The individual resource requirement for each district is the sum of the resources required by its individual blocks (shown in Tables 7 and 8).

Table 7 Requirement of resources at District \(D_{1}\) (MCC)
Table 8 Requirement of resources at District \(D_{2}\) (MCC)

6.3 Resource Allocation

The output of the allocation algorithm given in Algorithm 1 after two iterations is presented here. The allocation of resources to districts from the state at different stages of allocation is shown in Tables 9 and 10.

Table 9 Resources allocated to District \(D_{1}\)
Table 10 Resources allocated to District \(D_{2}\)

The allocation of resources to blocks from districts at different stages of allocation is shown in Tables 11 and 12.

Table 11 Resources allocated to different blocks of District \(D_{1}\)
Table 12 Resources allocated to different blocks of District \(D_{2}\)

The allocations to the different districts and blocks at different allocation levels are presented through bar diagrams in Figs. 5, 6, 7, and 8. Here, the length of a bar represents the total demand, the blue fraction represents the allocated amount, and the red fraction represents the deficiency in allocation.

Fig. 5: Demand vs Allocation for District \(D_{1}\)

Fig. 6: Demand vs Allocation for District \(D_{2}\)

Fig. 7: Demand vs Allocation of Block \(B_{1}\), Block \(B_{2}\) and Block \(B_{3}\) of District \(D_{1}\)

Fig. 8: Demand vs Allocation of Block \(B_{1}\), Block \(B_{2}\) and Block \(B_{3}\) of District \(D_{2}\)

6.4 Task Allocation

After the allocation of resources at the TCC and MCC levels, each LCC unit needs to distribute these resources to emergency sites. A task allocation scenario consisting of five tasks at the LCC level is presented in this section. Let us consider the case of block \(B_{1}\) of district \(D_{1}\) \((D_{1}\_B_{1})\), where the resources \({A_{\hat{i}1}^{M_{1}L}}\), \({A_{\hat{i}2}^{M_{1}L}}\), and \({A_{\hat{i}3}^{M_{1}L}}\) correspond to the numbers of GVs, UAVs, and boats, respectively. Each task related to rescue operations is first divided into different sub-tasks, and the resources (GVs, UAVs, and boats) required to perform each task are listed in Table 13. For example, task \(T_{1}\) is defined as \(T_{1}=[0,2,0]\). The five sub-tasks are survivor tracking, relief supply, critical supply, surveillance, and normal supply; the priority associated with each sub-task is given in Table 14.

Table 13 Resources requirement of different tasks
Table 14 Priority of different sub-tasks

The priority vector of each task is decided based on its composition of sub-tasks and the priorities of the individual sub-tasks (shown in Table 15). For example, the priority vector of task \(T_{1}\) is \(P_{T_{1}} = [0, 0, 0, 200, 0]\).

Table 15 Priority of different tasks

The tasks are sorted using the norm of the priority vector, so in this case the sequence of tasks is \(P_{T_{1}}\), \(P_{T_{3}}\), \(P_{T_{4}}\), \(P_{T_{2}}\), \(P_{T_{5}}\). The actual allocation available at block \(B_{1}\) of district \(D_{1}\) from the resource allocation is shown in Table 16.

Table 16 Actual allocation available at block \(B_{1}\) of District \(D_{1}\)

After the first level of allocation, the resources required to perform \(T_{1}\), \(T_{2}\), and \(T_{3}\) are available, so these tasks are executed. Although \(T_{4}\) has a higher priority than \(T_{2}\), sufficient resources are not available to execute \(T_{4}\): the resource corresponding to \({A_{\hat{i}2}^{M_{1}L}}\), that is, a sufficient number of UAVs, is not available to perform \(T_{4}\) along with \(T_{1}\) and \(T_{2}\). After the second level of allocation, sufficient resources are still not available for the next high-priority task \(T_{4}\). So the next high-priority task \(T_{5}\), for which the desired resources can be allocated, is performed. In summary, the missions related to \(T_{1}\), \(T_{2}\), and \(T_{3}\) are executed initially, and after the second level of allocation, task \(T_{5}\) is also executed. Task \(T_{4}\) can be executed only after sufficient resources are received during a future allocation. If the tasks are broken into smaller tasks, the utilization of the allocated resources can be increased.
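The selection logic above amounts to a greedy pass over the priority-sorted tasks, skipping any task whose requirement exceeds the remaining stock. The single-pass Python sketch below illustrates this (the paper performs it over two allocation rounds); \(T_{1}\)'s requirement [0, 2, 0] is from Table 13, but the other requirement vectors and the availability vector are hypothetical stand-ins, not the actual Table 13 and 16 values:

```python
# Tasks already sorted by priority norm; vectors are [GV, UAV, boats].
# Only T1's requirement is from Table 13; the rest are hypothetical.
tasks = [("T1", [0, 2, 0]), ("T3", [1, 1, 0]), ("T4", [0, 4, 2]),
         ("T2", [2, 0, 1]), ("T5", [3, 0, 0])]
avail = [6, 3, 2]  # hypothetical stock at the LCC

executed, pending = [], []
for name, req in tasks:
    if all(r <= a for r, a in zip(req, avail)):
        avail = [a - r for a, r in zip(avail, req)]  # consume the resources
        executed.append(name)
    else:
        pending.append(name)  # deferred, e.g. T4 when UAVs run short
# With these numbers: executed == ["T1", "T3", "T2", "T5"], pending == ["T4"]
```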

6.5 Needs of Future Development of the Proposed DSS

A few issues in the proposed DSS framework are described below.

  1. The task allocation is performed based on the current status of the available resources and task requirements. Resources are utilized to the maximum extent after each allocation cycle and are not stacked for future use. Tasks requiring a larger amount of resources might not get executed under this approach.

  2. Allocation is performed based on the weights of the individual crisis locations, with equal weight given to each resource item. A critical demand for a particular resource item at a location with a low weight might not receive priority in the allocation process.

These points will be addressed in our future work.

7 Software Implementation

The proposed resource allocation architecture is implemented in a software framework. The inputs to the framework are the existing resources, the demands of the different units, and their specifications; it outputs the resources to be allocated to each unit. The resources can be selected from a pre-defined database, and the user can also add a new resource. A typical software framework for resource allocation is shown in Fig. 9. The framework is integrated with a Graphical User Interface (GUI) developed in MATLAB for easy implementation at the ground level. Currently, the GUI supports five crisis locations and six factors; however, it is easily scalable to higher numbers of crisis locations and factors. In the “Home” tab of the GUI (shown in Fig. 10(a)), the backend program is run based on the user-selected crisis locations and number of factors. Currently, a virtual allocation scenario is shown in the GUI for two districts of the Indian state of Kerala, namely Ernakulam and Alapuzha, and the factors considered are affected area, population density, disaster level, and disaster handling capacity (as shown in Fig. 10(a)). In the “Specification” tab (shown in Fig. 10(b)), the factors associated with the crisis locations are provided; the overall weight of each district is calculated through this tab, with a provision for the user to enter custom weights. The quantities of the different resources available to the resource allocation authority are entered through the “Resources” tab (Fig. 10(c)), and the demands of the individual crisis locations through the “Demand” tab (shown in Fig. 10(d)). The allocation of individual resources among the crisis locations can be obtained as given in Fig. 10(e), and the overall allocation for the individual crisis locations through the “Area wise Allocation” tab. The allocation for Ernakulam and Alapuzha in the current scenario is shown in Fig. 10(f) and (g), respectively. An allocation summary in bar-diagram form can be obtained through the “Graphs” tab (as shown in Fig. 10(h)).

Fig. 9: Software framework

Fig. 10: Snapshots of GUI

8 Conclusions

This paper presents a hierarchical architecture for resource and task allocation during a flood-like disaster scenario. The proposed architecture is developed considering the flow of resources through the hierarchical administration of the government in a flood scenario. The allocation is performed based on the priority of the units, derived from different factors such as population density and disaster level. The proposed framework is scalable to accommodate different factors affecting allocation and different levels of multiple hierarchical units. An example scenario of resource and task allocation for flood management, considering the administrative structure of India, is studied. The proposed framework can be applied to resource allocation during other similar large-scale natural disasters; however, the factors for priority determination will differ. Future work includes the development of a software system deployable for resource and task allocation in a large-scale natural disaster scenario. It will also address the limitations of the current DSS discussed in the paper.