1 Introduction

India, the second-most populous country in the world, is also among its most disaster-prone regions, largely owing to its physiographic and climatic conditions. Nearly 59% of the landmass is earthquake-prone, 12% is vulnerable to floods, 76% of the coastline to cyclones and tsunamis, and almost 68% of the cultivable area to drought, with large tracts in hilly regions at risk of landslides (NDMA 2016). The National Disaster Management Authority (NDMA) is the apex body for Disaster Management (DM) in India, set up to create an enabling environment for institutional mechanisms at the state and district levels. Of the country's districts, 80% have created District Disaster Management Plans (DDMP) aligned with the Sendai Framework for Disaster Risk Reduction (Bahadur et al. 2016).

Despite the collective efforts of government agencies at the state and district levels to plan and prepare for disasters, there is a lack of clarity in roles and responsibilities for disaster response, and the inherent tangles in layers of bureaucracy hinder the process further. Even though the erstwhile relief-centric approach to DM has been replaced by a preparedness-driven approach, there are no well-accepted methods to measure the country's capacity to respond to disasters. The literature indicates that empirical studies on the determinants of Disaster Preparedness (DP) in developing countries are very few (Muttarak and Pothisiri 2013; Hoffmann and Muttarak 2017). Capacity building for DP encompasses all aspects of creating and sustaining capacity, such as stockpiling of equipment and supplies; development of information and coordination systems; associated training and field exercises; standard operating procedures (SOP); and institutional and budgetary arrangements (UNISDR 2009, 2015; Hemond and Robert 2012). Capacity building is a linchpin of preparedness, and a metric to evaluate "capacity" provides comparable and meaningful information on DP. A simpler and more comprehensible method is to disaggregate capacity building into multiple factors measurable through indicators, as indicator-based assessments capture multi-dimensional aspects (Cardona and Carreno 2011).

A Composite Index (CI) composed of several (weighted) individual indicators (OECD 2008) synthesizes "a vast amount of diverse information into a simple, easily usable form" (Davidson and Shah 1997). Each factor, when modelled as an index and evaluated at regular intervals, gives a clear picture of the status of response capacity related to its constituent attributes, across different units and through time. An index to quantify the response capacity of a region would also enable (i) comparisons among the regions considered; (ii) identification of strengths and weaknesses in the system; and (iii) efficient allocation of scarce resources. Preparedness indices formulated by way of indicators would measure how effectively the government, civil society, and the bodies responsible for DM anticipate, prepare for, manage, respond to, and mitigate the impact of disasters. State governments hold the primary responsibility for DM in India, and the highest level in the three-tiered decentralized local body administrative system is the district. This paper evolves a theoretical framework to assess capacity building for DP of a district; presents the multiple factors that define it; develops a set of indicators under each factor that can be aggregated to model an index measuring the corresponding factor; and constructs a composite index aggregated over the factors. The novelty of the work is that it puts forth a set of critical indicators for disaster response capacity assessment derived through probabilistic methods. Moreover, the CI is developed through multi-level aggregation grounded on solid engagement with primary and secondary data, methodically analysed and systematically validated.

2 Research methodology and factor identification

The selection of indicators for composite indices must be grounded on sound theoretical backing to obtain meaningful, reliable results (Freudenberg 2003; Simpson 2006; Eurostat 2014). A conceptual framework was evolved through a comprehensive literature review, wherein capacity development for DP was defined in terms of four factors–Resources, Communication and Coordination, Budget, and Community Engagement. Easy-to-comprehend and contextually relevant indicators for each factor were explored. They were further fine-tuned and disaggregated into measurable attribute variables through key informant interviews. Variable selection was predominantly based on relevance to the aspect measured, availability, comparability, and reliability of data (Cutter et al. 2010; Khazai et al. 2015), and such that most of them, if not all, belonged to a standard set of data routinely compiled and maintained by the district administration, District Disaster Management Authority, or similar authorities. Each factor was then modelled as a linear function of these indicators. Further, a Questionnaire Survey of Experts (QSE) was administered to elicit the relative weightings of these factors on capacity building related to DP. A purposive sampling approach was adopted to select respondents from different categories of expertise, as detailed in Sect. 3.3. An extension of the MCDM tool Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS) was implemented to balance out the variability in perceptions across expert respondent categories.

For a CI, the most common and transparent method of aggregation is arithmetic averaging, which entails summing the product of each variable and its weight (Salzman 2003; OECD 2008; Greco et al. 2018). Capacity building for DP was thus modelled as a composite index: a weighted aggregation of four factors, with each factor being a linear function of its corresponding indicators. To arrive at a smaller set of measurable, manageable, and actionable critical indicators sensitive to the Indian context, model reduction was performed through l2 norm-based sensitivity analysis and the coefficient-of-variation method on pertinent data. For developing indices for the factors, an equal weighting method and a data-driven weighting method applying Principal Component Analysis were used. For constructing the CI, subjective weightings elicited through a QSE were assigned using the Relative Importance Index method and an extension of the TOPSIS method. Further, the CI was checked for robustness with respect to sensitivity to different weighting methods and to model reduction. The general methodology adopted for the study is illustrated in Fig. 1.

Fig. 1 Methodology

The analysis presented in this paper is the outcome of this multi-level process.

3 Conceptual framework and literature review

The theoretical framework adopted in this paper is based on an "inductive approach", whereby one "establishes a set of factors judged to be relevant to response capacity, and then attempts to develop indicators for them" (Brooks et al. 2005). The factors capture the physical, economic, social, political, and institutional dimensions of capacity development. The authors chose the inductive approach as it can easily be adapted to different geographic settings, cultures, and environments (Winderl 2014). The rationale adopted is that DP is attributable to the physical (critical infrastructure, communication systems), economic (budgetary allocations), social (community engagement), political (DRR plans, implementation of training programmes), and institutional (SOP, response systems) dimensions of disaster response capacities. These dimensions were further disaggregated into easily measurable indicators "to yield information with a reasonable level of veracity" (Simpson 2008; Patrizii et al. 2017). Capacity building is the process "whereby people, organizations and society unleash, strengthen, create, adapt and maintain capacity over time" (UNDRR 2019; Aitsi-Selmi 2015; Hagelsteen and Becker 2013; Hagelsteen and Burke 2016), and an indicator system is a 'powerful tool to obtain situational analysis' even if the level of accuracy may be questionable (Liew et al. 2019). Though circular reasoning is a known fallacy of the inductive approach, it may be overcome by integrating the subjective assessments of expert stakeholders (Brooks et al. 2005; Simpson 2006; Collymore 2011).

In our search for suitable indicators of capacity building for DP, various UN reports, case studies, manuals, and guidelines on the construction of composite indicators were consulted to identify the parameters that matter. Measurement systems devised to assess DP systems and programmes led by various agencies were also reviewed, specifically in South Asian, African, and Caribbean countries (Cardona 2005, 2007, 2008; Gall 2007; Cutter et al. 2008; Bandura 2008; Cardona and Carreno 2011; Collymore 2011; Oven et al. 2017). A few relevant cases from the literature are outlined here. The Tsunami-resilient Preparedness Index considers 3 dimensions and 35 aspects, with 21 disaster experts judging its content relevance (Adiyoso and Kanegae 2018). A similar instance, the Tsunami Recovery Impact Assessment and Monitoring System (TRIAMS), combines 51 indicators to track recovery after the 2004 tsunami (Winderl 2014). The World Risk Index (WRI) and the Global Focus Model (GFM) are weighted composite indicators based mostly on secondary data, used to analyse hazards, vulnerabilities, and response capacity at the country level. The WRI uses 28 indicators on 4 factors related to hazard exposure, susceptibility, coping capacity, and adaptive capacity. The HFA (Hyogo Framework for Action) Monitor of UNISDR tracks goals and priority areas using a self-assessment methodology with 31 capacity indicators, of which 29 are qualitative indicators graded using a five-point assessment tool. The Community-Based Resilience Analysis (CoBRA) of UNDP (UNDP 2013) employs surveys and key informant interviews with numeric as well as qualitative indicators to measure the resilience of physical, human, financial, natural, and social aspects. Patrisina et al. (2018) designed key performance indicators to measure individual DP levels using the Delphi method involving expert respondents, with 14 indicators under three critical factors.
Based on all these, the authors zeroed in on four broad categories of indicators (henceforth mentioned as “factors”) as depicted in Fig. 2.

Fig. 2 Conceptual framework of capacity building for disaster preparedness

The conceptual framework proposed is that each factor is a function of measurable attribute variables which positively contribute to the DP of a district.

3.1 Key informant interviews

Inclusion of perceptions along with quantitative secondary data adds context-specific elements to disaster resilience measurements (Winderl 2014). Hence, semi-structured interviews were conducted with 39 key informants such as country/regional heads of DM organisations, NGOs, practitioners, policy advisers with extensive experience in DRR, and Emergency Operations personnel at the regional and state levels. Indicators were thus categorised into (i) "input" indicators, which measure the financial, human, administrative, and regulatory resources; (ii) "output" indicators, which measure the consequences of the resources used; (iii) "outcome" indicators, which measure the results at beneficiary levels; and (iv) "impact" indicators, which measure the cumulative effect of capacity building. The four factors and 33 indicators mentioned in Sect. 3.3 were corroborated as to their typology and are presented in Table 2.

3.2 Questionnaire survey of experts

Assigning weightings to indicators is critical in the development of composite indicators, weightings being essentially value judgements. Participatory methods incorporating various stakeholders–experts, citizens, and politicians–are extensively used to assign weights that better reflect policy priorities or theoretical factors (OECD 2008; Munda 2003). Moreover, the opinions of practitioners and policy advisers with experience in DRR are decisive in disaster mitigation initiatives (Keur et al. 2016). Barrios et al. (2020) solicited expert opinion for indicator selection and weighting in their evaluation of hospital DP by a multi-criteria decision-making approach. Expert opinion methods for the assessment of disaster risk and preparedness indicators have been shown to yield excellent results (Davidson and Shah 1997, 2001; Freudenberg 2003; Cardona 2011). Expert judgement was elicited not only for the selection of factors and the corresponding indicators but also for assigning their relative weightings, through a QSE among 151 respondents from 7 identified categories of experts drawn from academic and research institutions of repute in India, NGOs, Development Authorities, the general public, affected stakeholders, and related Central and State Government establishments across the states of Kerala, Karnataka, Tamil Nadu, and Gujarat. The survey sample was selected applying a purposive sampling approach, and its composition is presented in Table 1.

Table 1 Questionnaire Survey respondents and corresponding affiliations

The factor on which each respondent's expertise bears, based on a subjective assessment by the authors, is depicted in Fig. 3.

Fig. 3 QSE respondents and field of expertise

3.3 Identification of factors and indicators

As the meaningfulness of an indicator depends on its ability to represent the ideas of the conceptual framework (Davidson 1997), the definition of each factor in alignment with the proposed theoretical frame, together with the corresponding indicators, is discussed in the sections below. The multiple processes and the large number of stakeholders involved in DM and DRR complicate the selection of apt indicators. The indicators are to provide "the means to monitor and evaluate implemented measures" (Feldmeyer et al. 2020) for DP. Though an attempt was made to identify tangible, easy-to-measure indicators, this was not always possible; therefore, qualitative indicators, for which data had to be generated through a subjective assessment by the authors (anchored on a solid rationale), were also included. To develop meaningful indicators at the district level, the processes implemented for DRR and DM at district levels were considered, so that the authorities can adapt the methodology of DP assessment to their domain of operation. For replicability of the methodology, the public availability, as well as the prospects for public availability, of indicator data is important. Even though certain selected indicators did not have associated datasets at the time of the study, their compilation was anticipated to be fairly easy. Wherever possible, the authors therefore adopted tangible, easy-to-measure indicators aligned with the practical aspects of the extant DM mechanism in India. Care was also taken, to the extent possible, to align the framework with the legal-institutional-policy framework on disaster management prevalent in India. To be relevant and replicable in the contexts of other developing countries, an attempt was also made to align it with global frameworks.

3.3.1 Resource factor

Indicators X1 to X15, as shown in Table 2, were explored to represent C1. Indicators for infrastructure and skilled manpower in health, relief, and rescue systems were identified. "No. of hospitals"–X1 was included due to the availability of comparable data across states, even though hospitals are useful resources only if they remain functional to treat the injured in the aftermath of a disaster; otherwise they turn out to be a liability. The availability of potential resources (for instance, the possibility of converting existing facilities like hostels and hotels into hospitals to deal with emergencies during disasters) was not explored due to limited data coverage. "No. of nurses"–X4 was considered and then discarded due to the unavailability of comparable data across districts. X10–No. of policemen/1000 population was replaced by "No. of police stations/1000 population", as reliable data on the number of policemen across districts was too fragmented to compile. X11 and X12–the numbers of boats and vehicles available, respectively–were combined as X12* as they represented the same resource. The coping capacity of a region related to food security was also explored by including X15–No. of Fair Price Shops (FPS). In India, the central government is responsible for the procurement, storage, transportation, and bulk allocation of food grains, and state governments distribute them through an established network of around 462,000 FPS–one of the biggest such systems in the world. Eventually, 13 indicators were selected; the discarded ones are marked with ** in Table 2.

Table 2 Factors, indicators, typology, data sources, and weightings for indicators and factors

3.3.2 Communication and coordination system factor

Factor C2 purports to represent modes of communication; temporal information and dissemination of data on disasters, for awareness generation as well as for warning; resources for handling media relations; and coordination among different agencies. Indicators X16 to X29, as shown in Table 2, including road density, the proliferation of public transportation nodes (bus stations, railway stations, boat jetties), and communication infrastructure (telephone exchanges, police stations, post offices), were explored. As capacity assessment involves reviewing the capacity of a group against desired goals to identify capacity gaps (UNISDR 2009), an indicator X18–Existence of SOP and pertinent manuals/codes–was included to assess the efficacy of communication and coordination systems. This was done through a subjective assessment by the authors of the existence of an Incident Command System in the district (as mandated by NDMA), with binary scores assigned: '1' if it existed and '0' otherwise. Data on X20–No. of cellular phone subscribers and X21–No. of HAM radio operators across districts were too fragmented to be included.

In the present era, Information and Communication Technology (ICT) initiatives are also crucial in enhancing the capacities of regions for disaster preparedness. People seek up-to-date, reliable, and detailed information in disaster scenarios, as it contributes to social inclusion. The network of Common Service Centres (CSC), which deliver ICT to all segments of people through access to information and knowledge, was therefore included as an indicator X22–No. of CSC/1000 population. Media, both print and electronic, plays a vital role in generating public awareness of DP and in communication, through the assimilation and dissemination of information about affected areas among government authorities, NGOs, and the public, as well as hazard warnings. The efficacy of mass media campaigns could best be assessed by quantifying the proliferation of print, visual, and social media. As comparable data across districts were not directly available, the indicator X25–the "percentage of literate people"–was chosen to suit the context: from a disaster preparedness perspective, or even otherwise, the percentage of literate people can serve as a proxy for the reach and efficacy of mass media and awareness campaigns in communication systems.

The efficacy of the operational plans, standards, protocols, and procedures involved may be considered as captured solely by X26, which was evaluated through a subjective assessment by the authors on an ordinal scale: a value of "1" was assigned when the DM plan of the district satisfied at least 4 of the following conditions, and a value of "2" when it satisfied at least 6: (1) identified risks and vulnerabilities of the district; (2) defined and assigned tasks and responsibilities to all line departments and stakeholders for pre-disaster and post-disaster phases; (3) developed a standardized mechanism to respond to and manage the disaster efficiently; (4) included a response plan for prompt relief, rescue, and search support during disasters; (5) included a revision within the past five years; (6) included an HVCRA (Hazard, Vulnerability, Capacity, and Risk Assessment); and (7) was well-integrated across agencies, local authorities, and line departments. Though X18 and X26 represent more or less the same attributes, they are included separately for conceptual clarity, capturing two different perspectives with different scores: X18 refers to the existence of SOPs, while X26 refers to the effectiveness of coordination systems.
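The ordinal scoring rule for X26 can be sketched as below. The score of 0 for plans meeting fewer than four conditions is our assumption, as the text defines only the "1" and "2" levels; the function name is illustrative.

```python
def score_dm_plan(conditions_met: int) -> int:
    """Ordinal score for indicator X26 (effectiveness of coordination systems).

    Returns 2 if the district DM plan satisfies at least 6 of the 7 listed
    conditions, 1 if it satisfies at least 4, and 0 otherwise (assumed).
    """
    if conditions_met >= 6:
        return 2
    if conditions_met >= 4:
        return 1
    return 0
```

For example, a plan that identifies risks, assigns responsibilities, includes a response plan, and was recently revised (four conditions) would score 1 under this reading.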

The capacity of a region to perform a set of critical tasks under simulated conditions for different hazards is validated by periodic mock drills, which involve mobilization of resources, communications, response activities, management initiatives, and post-incident activities of all concerned departments and task forces. Indicators X27–No. of mock drills/simulation exercises per year/1000 population and X28–No. of participants in mock drills per year/1000 population were considered. Documented data on X27 and X28 being unavailable across districts, they had to be discarded. Hence an indicator X29–Inclusion of procedures for training programmes in the DDMP–was included. Of the 14 indicators identified for C2, 10 were selected and are listed in Table 2; the discarded ones are marked with **.

3.3.3 Budget factor

Budget Factor C3 relates to consistent, timely budgetary allocations for institutional capacity building and technical training. It is best represented by the budgetary allocation for DM from the Centre, State, and District administrations. In the institutional setup prevalent in India, the State Disaster Response Fund (SDRF) and the National Disaster Response Fund (NDRF), constituted under sections 48(1)(a) and 46(2) of the DM Act (2005), respectively, are available to all states to facilitate immediate relief in case of severe calamities, though this does not indicate the effectiveness of fund utilisation. Therefore, the percentage budget utilisation averaged over 3 consecutive years was identified as an indicator X30. As of now, DM funding has prioritised disaster response over DP, and though mandated, DM funds at state and district levels are yet to materialise. Flexi-funds under Centrally Sponsored Schemes (following the broad objective of the corresponding Central Sector Scheme), Corporate Social Responsibility (CSR) funds, and similar public-private sector funds are potential sources of funding for increasing disaster resilience. District Planning Funds are also raised in some states from the Members of Parliament Local Area Development Scheme (MPLADS) or the Members of Legislative Assembly Local Area Development Scheme (MLADS), received from the central government for developmental projects and utilised for preparedness, mitigation, and capacity building initiatives. Hence, an indicator X31–Budget allocation/financing options for DM per year in INR/1000 population–was considered. Again, this too did not indicate the effectiveness of fund utilisation, nor was comparable data available for X30 or X31. Therefore, X31 was modified as "Presence of extra budget allocation/financing options for DM" in the district under scrutiny and was assigned binary scores: '1' if present and '0' otherwise.

3.3.4 Community engagement and technology transfer factor

The involvement of specialised technical agencies and academia in capacity-building initiatives for DM is proposed to be captured by factor C4. Capacity for DP being associated with the knowledge and capacities of local people, community engagement is instrumental in formulating local coping and adaptation strategies, particularly technology-driven initiatives (Allen 2006). The involvement of community organisations in translating technology into real-time benefits for the public was considered instead, as documented and comparable data on the involvement of specialised technical agencies and academia were not available. NGOs usually have direct and sustained contact with many communities, and an indicator X32–"Number of NGOs active in the region"–was identified to suit the context. The most critical component of effective communication on disasters is the appropriate response by communities, which demands reliable formal and informal communication channels, both among people and between people and government (Mukhtar 2018). The presence of Self-Help Groups (SHGs) has been seen to contribute to capacity building for emergency response, the flow of information, and regional and national coordination mechanisms for DP (Collymore 2011). Indicator X33–the number of SHGs per 1000 population–was therefore also considered.

3.4 Theory of modelling

As a basic premise to arrive at a metric to gauge DP, each factor of capacity was modelled as an index using its identified indicators (Briguglio 2003; Birkmann 2006). To convey information on capacity building for DP of districts of India, separate composite indices were calculated for the four factors that contribute to DP–C1, C2, C3 and C4. An index being a unitless number, the index measures are to be standardised, scaled and normalised such that the type, scope, depth and appropriateness of the indicators and their measurements are deemed comparable (Munda 2003). All indicators (attribute variables) within a factor were scaled with respect to the mean minus two standard deviations. Each factor Ci for a particular district was evaluated as:

$${\mathrm{C}}_{i}=\frac{1}{{\text{Card}}\left({\mathrm{A}}_{i}\right)}{\sum }_{{\mathrm{X}}_{j}\in {\mathrm{A}}_{i}}{\mathrm{X}}_{j},$$
(1)

where Ai denotes the set of indicators (attribute variables) for the factor Ci and \(\mathrm{Card }\left({\mathrm{A}}_{i}\right)\) denotes the cardinality of the set Ai. One of the most commonly used compensatory aggregation approaches in composite indicators being the linear method (Greco et al. 2018); the factors were combined to derive a Composite Index using a common, simple, and transparent method—weighted arithmetic aggregation of normalised individual indicators (Cardona 2008; Fritzsche 2014; Freudenberg 2003; OECD 2008).
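A minimal numerical sketch of the scaling step and Eq. (1) follows. Interpreting "scaled with respect to the mean minus two standard deviations" as division by (mean − 2·SD) is our assumption, and the function names are illustrative.

```python
import numpy as np

def scale_indicators(X):
    """Scale each indicator (column of X) by its mean minus two
    standard deviations (assumed interpretation of the scaling rule)."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return X / (mu - 2.0 * sigma)

def factor_index(X_scaled):
    """Eq. (1): C_i = (1 / Card(A_i)) * sum of scaled indicators,
    i.e. the row-wise mean over the indicator set A_i."""
    return X_scaled.mean(axis=1)

# Districts as rows, indicators of one factor as columns (toy values).
X = np.array([[10.0, 20.0],
              [12.0, 22.0],
              [14.0, 24.0]])
C = factor_index(scale_indicators(X))   # one factor index per district
```

Districts with uniformly larger indicator values receive a larger factor index under this scheme, which matches the framework's premise that all attribute variables contribute positively to DP.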

The proposed Disaster Preparedness Index on Capacity building (\({\mathrm{DPI}}_{\mathrm{C}}\)) for a district was thus evaluated as

$${\mathrm{DPI}}_{C}={\sum }_{i=1}^{n}{\mathrm{a}}_{i}{\mathrm{C}}_{i},$$
(2)

where ai denotes the weighting of factor Ci for that particular district elicited through expert opinion, and n indicates the number of factors.
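Eq. (2) is then a plain weighted sum of the factor indices. A minimal sketch, with the check that the weights sum to 1 reflecting the normalisation of the expert-elicited weightings described in Sect. 3.5:

```python
def composite_index(factors, weights):
    """Eq. (2): DPI_C = sum_i a_i * C_i for one district, where the
    factor weights a_i are expert-elicited and sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(a * c for a, c in zip(weights, factors))

# Four factor indices aggregated with equal weights, for illustration.
dpi = composite_index([0.5, 0.6, 0.7, 0.8], [0.25, 0.25, 0.25, 0.25])
```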

A total of 26 out of the 33 indicators under the four factors presented in Table 2 were selected for modelling. India has a total of 718 districts spread across 28 States and 8 Union Territories (UTs), from which 123 districts across six states were chosen to populate the data set for modelling the factor indices and the DPI. The rationale behind this selection is that these states have a range of geo-climatic conditions and hence are exposed to varied natural disaster scenarios. UNDRR (2019) classifies natural disasters into five major categories–Geophysical, Hydrological, Meteorological, Climatological, and Biological. The sample states were vulnerable in varying degrees to the first four, and have proven to be highly susceptible to the last, as evidenced by the COVID-19 pandemic (George and Anilkumar 2021). Statistical data from censuses and district websites were mainly explored to compute the factor indices. Multiple data sources were used to gather data related to the factors within the constraints of data availability. Major sources were the authorities dealing with Disaster Management and Fire and Rescue; the Directorate of Health; the Directorate of Education; and the websites of Highway departments or State Road Development Corporations of the respective states. For districts where such datasets were unavailable, data from the DDMP were compiled. All listed indicators were evaluated and analysed for long-term data availability. The critical issue of missing or erratic values within datasets was dealt with by adopting OECD guidelines on data outliers and data imputation (OECD 2008). Adequate temporal coverage of datasets was ensured by setting 2011 as the base year (the last census year in India) and considering the average of 3 consecutive years. Wherever base-year data was missing, data for the immediately preceding available year was used.

3.5 Weighting methods

For developing indices for the factors, the weighting methods discussed in Sects. 3.5.1 and 3.5.2 were applied; for constructing the \({\mathrm{DPI}}_{\mathrm{C}}\), the methods detailed in Sects. 3.5.3 and 3.5.4 were implemented.

3.5.1 Equal weighting (EW)

All indicators were assumed to contribute equally to the corresponding factor, as per the conceptual framework developed in the study. Despite EW lacking a full theoretical justification (Greco et al. 2018), it is commonly used for CI development (Bandura 2008; OECD 2008) where the theoretical framework provides the rationale.

3.5.2 Weightings from principal component analysis (PCA)

PCA is a 'data-driven technique' which may be used to derive weightings in index construction when the indicators are correlated (Ray 2008; Decancq and Lugo 2013; Greco et al. 2018). The weights derived using data-driven techniques such as PCA emerge from the data themselves under a specific mathematical function (Decancq and Lugo 2013). The Kaiser criterion was used to retain the principal components with eigenvalues greater than 1, which account for the maximum variance (OECD 2008). With p such components retained, the weight of each indicator was computed as:

$${\mathrm{w}}_{j}=\sum_{q=1}^{p}{\mathrm{a}}_{jq }^{2}/\sum_{q=1}^{p}{\beta }_{q },$$
(3)

where \({\mathrm{w}}_{j}\) is the weight of the jth indicator, \({\beta }_{q}\) is the eigenvalue of the qth component, and ajq is the loading of the jth indicator on the qth component.
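A sketch of the PCA weighting of Eq. (3), assuming the loadings a_jq are eigenvector entries scaled by the square root of the corresponding eigenvalue; under this convention the weights sum to 1 by construction, since the squared loadings of each component sum to its eigenvalue.

```python
import numpy as np

def pca_weights(X):
    """Eq. (3): w_j = sum_q a_jq^2 / sum_q beta_q, summed over the
    components retained by the Kaiser criterion (eigenvalue > 1)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise indicators
    R = np.corrcoef(Z, rowvar=False)           # correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)       # symmetric eigendecomposition
    keep = eigvals > 1.0                       # Kaiser criterion
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])  # a_jq
    return (loadings ** 2).sum(axis=1) / eigvals[keep].sum()
```

This sketch assumes non-degenerate indicator data (no zero-variance column and at least one eigenvalue above 1), which holds for any correlated indicator set.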

3.5.3 Subjective weighting using QSE—Relative Importance Index (RII)

Responses to the QSE were obtained on a Likert scale of 1 to 7, which cannot be assessed using parametric methods (Siegel and Castellan 1988). Analysis of structured questionnaire responses involving ordinal measurement scales is commonly done using a non-parametric technique (Chakrabartty 2019), the Relative Importance Index (RII). The RII for each factor was determined using Eq. (4):

$$\mathrm{RII}=\frac{\sum W}{\mathrm{A}N}=\frac{7{n}_{7}+6{n}_{6}+5{n}_{5}+4{n}_{4}+3{n}_{3}+2{n}_{2}+1{n}_{1}}{7N},$$
(4)

where W is the Likert rank assigned to each factor by a respondent, A is the highest rank (here, 7), and N the total number of respondents. RII values were normalised to obtain the weighting coefficients of each factor for constructing \(\mathrm{DPIc}\), such that they summed to 1 and each lay in the interval (0, 1] (Waris et al. 2014).
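Eq. (4) and the subsequent normalisation can be sketched as below; the function names are illustrative.

```python
def rii(ratings, scale_max=7):
    """Eq. (4): RII = sum(W) / (A * N), where W are the Likert ranks
    given by respondents, A the highest rank and N the sample size."""
    return sum(ratings) / (scale_max * len(ratings))

def normalise(values):
    """Scale the factor RIIs into weighting coefficients summing to 1."""
    total = sum(values)
    return [v / total for v in values]

# Four respondents rating one factor on the 1-7 Likert scale:
r = rii([7, 6, 5, 4])   # (7 + 6 + 5 + 4) / (7 * 4) = 22/28
```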

3.5.4 Technique for order preference by similarity to the ideal solution (TOPSIS)

Assigning weightings to different dimensions of a phenomenon by comparing and evaluating their relative importance may be treated as a multi-criteria decision-making problem. Different MCDA tools may provide the same results (Linkov et al. 2020). Moreover, the authoritativeness of the evaluations of alternatives (indicators, in our case) given by different decision-makers may vary due to differences in their levels of expertise and familiarity with the problem. Therefore, a method was adopted in which the variation in perceptions of decision-makers is also accounted for: an extension of the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS). TOPSIS is considered a relatively easy method with good computational efficiency, offering a clear representation of the logic of human choice (Roszkowska 2013), and is widely used in environmental MCDA (Linkov et al. 2020). It is a multi-criteria decision-making tool that chooses the alternative closest to the ideal solution and farthest from the negative ideal alternative, based on information on attributes from the decision-maker and numerical data. Among the numerous extensions of TOPSIS for group decision making (Yang and Chou 2005; Milani et al. 2005; Jahanshahloo et al. 2006; Wang and Lee 2007; Chen 2000), that proposed by Li et al. (2008) analyses the ordinal preferences of group decision-makers, incorporating the weights of the decision-makers. To integrate the expertise of the respondent categories as deliberated in Sect. 3.5.3 and illustrated in Fig. 3, this extension of TOPSIS was implemented, and the ranking index \({\mathrm{d}}_{n}\) of alternative \({\mathrm{A}}_{n}\left(n=\mathrm{1,2},\dots ,\mathrm{N}\right)\) was determined as:

$${\mathrm{d}}_{n}=\frac{{\mathrm{d}}_{n}^{+}}{{\mathrm{d}}_{n}^{+}+{\mathrm{d}}_{n}^{-}},$$
(5)

where,

$${\mathrm{d}}_{n}^{+}= \sqrt{\sum_{l=1}^{L}{\uplambda }_{l}^{2}\,{\left({\mathrm{r}}_{ln}-\mathrm{N}\right)}^{2}}\quad \text{and}\quad {\mathrm{d}}_{n}^{-}= \sqrt{\sum_{l=1}^{L}{\uplambda }_{l}^{2}\,{\left({\mathrm{r}}_{ln}-1\right)}^{2}},$$
(6)

where L is the number of groups of decision-makers, \({\mathrm{r}}_{ln}\left(\in \left\{\mathrm{1,2},\dots ,\mathrm{N}\right\}\right)\) is the comprehensive ranking location of alternative \({\mathrm{A}}_{n}\left(n=\mathrm{1,2},\dots ,\mathrm{N}\right)\), and \({\uplambda }_{l}\) is the weight of the lth group of decision-makers. The ranks obtained were then normalised to obtain weighting coefficients summing to 1.
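A minimal sketch of Eqs. (5) and (6), assuming the comprehensive ranking locations and group weights are already available (the array values and function name below are illustrative):

```python
import numpy as np

def group_topsis_index(rank_locations, group_weights):
    """Ranking index d_n (Eqs. 5-6) from the group-TOPSIS extension.
    `rank_locations` is an (L groups x N alternatives) array of
    comprehensive ranking locations r_ln in {1, ..., N};
    `group_weights` holds the decision-maker weights lambda_l."""
    r = np.asarray(rank_locations, dtype=float)
    lam = np.asarray(group_weights, dtype=float).reshape(-1, 1)
    n_alternatives = r.shape[1]
    # Distances from the ideal (rank N) and negative-ideal (rank 1) locations
    d_plus = np.sqrt((lam**2 * (r - n_alternatives) ** 2).sum(axis=0))
    d_minus = np.sqrt((lam**2 * (r - 1) ** 2).sum(axis=0))
    return d_plus / (d_plus + d_minus)

# Two groups of decision-makers ranking three alternatives
r_ln = [[1, 2, 3],
        [2, 1, 3]]
d = group_topsis_index(r_ln, [0.6, 0.4])
```

With this formulation, an alternative ranked first by every group obtains \(d_n = 1\) and one ranked last obtains \(d_n = 0\), so alternatives are ordered by descending \(d_n\).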

The weightings obtained for the indicators and factors applying the methods discussed above are also tabulated in Table 2.

3.6 Model reduction techniques

A Composite Index becomes needlessly complex when a large number of variables is attributed to its indicators (Freudenberg 2003; Davidson and Shah 1997; Simpson 2006; OECD 2008), as redundant variables add little information (Lind 2010; Otoiu 2014). As disasters demand quick decisions, Composite Indices serve their purpose only if computed as a function of fewer variables. Therefore, to develop a ready-reckoner index with a reduced set of variables, model reduction was performed using a probabilistic method, model distance-based sensitivity analysis (Sobol 1993; Greegar and Manohar 2016), and another method based on the coefficient of variation.

3.6.1 Model distance-based sensitivity analysis using the \({\mathrm{l}}_{2}\) norm

Indicators selected in the study (attribute variables) represent different aspects and relate to different regional contexts; they are therefore uncertain in nature and can be modelled probabilistically as random variables. A factor is a function of indicators, and its uncertainty arises not only from the uncertainties in the individual indicators but also from their combined interactions. To quantify these uncertainties, Sobol's analysis based on analysis of variance (ANOVA), a Global Response Sensitivity Analysis (Sobol 1993; Saltelli et al. 2008), may be used. When the associated random variables are independently distributed, it is equivalent to \({\mathrm{l}}_{2}\) norm-based sensitivity analysis, as shown by Greegar and Manohar (2015, 2016). For performing \({\mathrm{l}}_{2}\) norm-based sensitivity analysis, a pair of models may be considered: one in which the uncertainty in all the variables is included, and another in which a selected uncertain variable is treated as deterministic. The evaluated proximity between the two explains the effect of the uncertainty in the selected variable on the specified response variable and is a measure of sensitivity with respect to that variable. Consider a model given as:

$$\mathrm{Y}=\mathrm{f}\left(\mathrm{X}\right)=\mathrm{f}\left({\mathrm{X}}_{1},{\mathrm{X}}_{2},\cdots ,{\mathrm{X}}_{n}\right),$$
(7)

where the uncertainties in all the elements of \(\mathrm{X}\) are included, and two altered models: one in which all elements of \(\mathrm{X}\) are uncertain except the ith element (treated as deterministic), and the other in which all elements of \(\mathrm{X}\) are deterministic except the ith element (treated as uncertain), given as

$$\begin{aligned} & Y_{i} = f\left( {X_{1} ,X_{2} , \cdots ,X_{i - 1} ,X_{i} = \mu_{i} ,X_{i + 1} , \cdots ,X_{n} } \right) \\ & Y_{\sim i} = f\left( {X_{1} = \mu_{1} ,X_{2} = \mu_{2} , \cdots ,X_{i - 1} = \mu_{i - 1} ,X_{i} ,X_{i + 1} = \mu_{i + 1} , \cdots ,X_{n} = \mu_{n} } \right). \\ \end{aligned}$$
(8)

Consider a measure of distances between \(\mathrm{Y}\) and the altered models, \({\mathrm{Y}}_{\mathrm{i}}\) and \({\mathrm{Y}}_{\sim \mathrm{i}}\), denoted as \({\mathrm{D}}_{\mathrm{i}}={\text{dist}}\left(\mathrm{Y},{\mathrm{ Y}}_{\mathrm{i}}\right)\text{ and }{\mathrm{D}}_{\sim \mathrm{i}}={\text{dist}}\left(\mathrm{Y},{\mathrm{ Y}}_{\sim \mathrm{i}}\right)\). According to Greegar and Manohar (2015, 2016):

  i. \({\mathrm{D}}_{i}\) represents the total effect corresponding to the variable \({\mathrm{X}}_{i}\). Higher values of \({\mathrm{D}}_{i}\) imply that the uncertainty in the ith variable reflects strongly on the uncertainty of \(\mathrm{Y}\).

  ii. \({\mathrm{D}}_{\sim i}\) represents the main effect corresponding to the variable \({\mathrm{X}}_{i}\). Lower values of \({\mathrm{D}}_{\sim i}\) imply that the uncertainty in the ith variable reflects strongly on the uncertainty of \(\mathrm{Y}\).

  iii. Variables corresponding to higher sensitivity may be retained as random, and the least sensitive ones may be treated as deterministic.
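The distance measures above can be estimated by Monte Carlo simulation. The sketch below assumes independent normal variables and reuses a common sample so that only the conditioning on the ith variable changes; the model, distributions, and sample size are illustrative assumptions, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_sensitivity(f, means, stds, n_samples=20000):
    """Monte Carlo estimate of the l2-norm distances D_i and D_~i.
    `f` maps an (n_samples x n_vars) sample array to response values;
    the inputs are modelled as independent normals (an assumption)."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    X = rng.normal(means, stds, size=(n_samples, means.size))
    Y = f(X)
    D_total, D_main = [], []
    for i in range(means.size):
        Xi = X.copy()
        Xi[:, i] = means[i]                      # ith variable deterministic
        Xni = np.tile(means, (n_samples, 1))
        Xni[:, i] = X[:, i]                      # only the ith variable random
        D_total.append(np.sqrt(np.mean((Y - f(Xi)) ** 2)))   # D_i: total effect
        D_main.append(np.sqrt(np.mean((Y - f(Xni)) ** 2)))   # D_~i: main effect
    return np.array(D_total), np.array(D_main)

# Toy linear model Y = 3*X1 + X2: X1 should dominate the uncertainty in Y
f = lambda X: 3 * X[:, 0] + X[:, 1]
D_i, D_ni = l2_sensitivity(f, means=[0, 0], stds=[1, 1])
```

For this linear model the first variable has the larger total effect (high \({\mathrm{D}}_{i}\)) and the smaller \({\mathrm{D}}_{\sim i}\), consistent with points (i) and (ii) above.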

3.6.2 Based on coefficient of variation

The coefficient of variation of a variable, denoted by \(\updelta\), is the ratio of its standard deviation to its mean. It is a useful statistic for comparing the degree of variation between different data series, even when their means differ considerably. Let \(\eta \text{ and }\sigma\) represent the mean and standard deviation of the reference model \(\mathrm{Y}\), and \({\upeta }_{\sim i}\text{ and }{\upsigma }_{\sim i}\) the mean and standard deviation of the altered model \({\mathrm{Y}}_{\sim \mathrm{i}}\). The quantities \(\updelta\) and \({\updelta }_{\sim i}\) are evaluated as:

$$\updelta =\frac{\sigma }{\eta }\text{ and }{\updelta }_{\sim i}=\frac{{\sigma }_{\sim i}}{{\eta }_{\sim i}},$$
(9)

where \(\updelta\) and \({\updelta }_{\sim i}\) denote the coefficients of variation of the reference model \(\mathrm{Y}\) and the altered model \({\mathrm{Y}}_{\sim \mathrm{i}}\), respectively. The normalised coefficient of variation of the ith variable may be considered as a measure of its sensitivity:

$${\updelta }_{\sim i}^{*}=100\,\frac{{\updelta }_{\sim i}}{\updelta }\text{ \% ; }\quad i=\mathrm{1,2},\cdots ,n,$$
(10)

Higher values of \({\updelta }_{\sim i}^{*}\) imply that the uncertainty in the ith variable highly reflects on the uncertainty of the response variable, \(\mathrm{Y}\).
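A small sketch of Eqs. (9) and (10) on a toy linear model (the distributions, sample size, and function name are assumptions for illustration):

```python
import numpy as np

def normalised_cov(Y, Y_altered):
    """delta*_~i = 100 * (delta_~i / delta) percent (Eqs. 9-10): the
    coefficient of variation of each altered model Y_~i relative to
    that of the full model Y."""
    delta = np.std(Y) / np.mean(Y)
    return np.array([100 * (np.std(y) / np.mean(y)) / delta for y in Y_altered])

# Toy model Y = X1 + X2, with X1 the more uncertain input
rng = np.random.default_rng(1)
X1 = rng.normal(10, 3, 50000)
X2 = rng.normal(10, 1, 50000)
Y = X1 + X2                       # reference model
Y_alt = [X1 + 10.0, 10.0 + X2]    # Y_~1 (only X1 random), Y_~2 (only X2 random)
d_star = normalised_cov(Y, Y_alt)
```

As expected, the altered model retaining the more uncertain input X1 yields the higher \({\updelta }_{\sim i}^{*}\), flagging X1 as the more sensitive variable.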

3.7 Sensitivity and reliability analysis

CI development involves subjective assessments related to the choice and weighting of indicators and hence requires assessment of the associated uncertainties. A Sensitivity Analysis (SA) would capture (i) the variation in the output due to different sources of variation in the assumptions, and (ii) how the given CI depends upon the information fed into it. SA quantifies the overall uncertainty in district rankings (based on \(\mathrm{DPIc}\)) as a result of uncertainties in the model input. Robustness assessments of composite indicators (Saltelli et al. 2008, 2019), as in the case of the Environmental Sustainability Index, are made by a synergetic application of uncertainty and sensitivity analysis to increase transparency and to validate the assumptions made in the conceptual frame (OECD 2008). The methods used for SA in this study are discussed in the following sections.

3.7.1 Average rank shift

The stability of the computed \(\mathrm{DPIc}\) using different methods and the resulting rank of a given district, \(\mathrm{Rank}({\mathrm{DPI}}_{\mathrm{C}})\), indicates the robustness of the estimation (Nardo et al. 2005; Cutter et al. 2003). The average rank shift, \({\mathrm{R}}_{\mathrm{s}}\), is a measure of the uncertainty of each input factor and is computed as:

$${\mathrm{R}}_{s}= \frac{1}{m}\sum_{i=1}^{m}\left|{\mathrm{Rank}}_{\mathrm{ref}}{\left({\mathrm{DPI}}_{\mathrm{C}}\right)}_{i}-\mathrm{Rank}{\left({\mathrm{DPI}}_{\mathrm{C}}\right)}_{i}\right|,$$
(11)

where \({\mathrm{Rank}}_{\mathrm{ref }}({\mathrm{DPI}}_{\mathrm{C}})\) is the median rank of a district across the different methods of computation, and m is the total number of districts. Lower values of \({\mathrm{R}}_{s}\) imply that the computed ranks are close to the median rank.
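Equation (11) can be sketched as follows, taking the median rank across methods as the reference ranking (the rank matrix below is illustrative):

```python
import numpy as np

def average_rank_shift(ranks_by_method):
    """R_s (Eq. 11) for each computation method: the mean absolute
    deviation of a method's district ranks from the district-wise
    median rank taken across all methods.
    `ranks_by_method` is an (n_methods x m districts) array."""
    ranks = np.asarray(ranks_by_method, dtype=float)
    ref = np.median(ranks, axis=0)            # median rank per district
    return np.abs(ranks - ref).mean(axis=1)   # R_s per method

# Three methods ranking four districts
ranks = [[1, 2, 3, 4],
         [1, 2, 3, 4],
         [2, 1, 3, 4]]
rs = average_rank_shift(ranks)
```

The first two methods coincide with the median ranking (\({\mathrm{R}}_{s}=0\)), while the third, which swaps two districts, incurs a nonzero average shift.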

3.7.2 Cronbach’s alpha and Spearman’s rank-order correlation

Cronbach’s alpha measures the reliability or consistency of the rankings as it is a function of the number of ranking methods and the average inter-correlation among them (Cronbach 1951). An alpha value greater than 0.9 implies excellent consistency whereas a value below 0.7 may not be acceptable. Spearman’s rank-order correlation was used to test the reliability of the rankings of districts based on different weighting methods. Spearman’s correlation value of + 1 signifies a perfect positive correlation and − 1 signifies a perfect negative relationship between ranks, while 0 indicates no correlation between ranks (Gibbons and Chakraborty 2003).
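Both statistics are standard; the sketch below uses SciPy for Spearman's correlation and a direct implementation of Cronbach's alpha, treating each ranking method as an "item" and each district as a "case" (the rankings shown are illustrative):

```python
import numpy as np
from scipy import stats

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_districts x k_methods) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Two methods ranking five districts, differing only in the last two positions
ranks_a = [1, 2, 3, 4, 5]
ranks_b = [1, 2, 3, 5, 4]
rho, p = stats.spearmanr(ranks_a, ranks_b)
alpha = cronbach_alpha(np.column_stack([ranks_a, ranks_b]))
```

A single swap among five districts still yields a high Spearman correlation (0.9) and an alpha above the 0.9 threshold described above.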

4 Results and discussion

The weightings estimated for factors and indicators are tabulated in Table 2. The factor indices and final composite indices developed by the study are tabulated in Table 3. The results of applying model reduction as discussed in Sect. 2.7 are presented in Sect. 3.3, and the robustness of the derived indices is discussed in Sect. 3.4. To present a sample analysis, a set of 10 districts from 6 different states of India was considered. The districts represent different geo-climatic scenarios and disaster vulnerabilities, with moderate to very high proneness to hydrometeorological disasters as per studies conducted by the India Meteorological Department (Mohapatra 2015). Among the 10, Alappuzha, Dakshina Kannada, Nagapattinam, Krishna, Junagadh, and Puri are coastal districts; the remaining 4, namely Kottayam, Sivagangai, Chittoor, and Kheda, are non-coastal. The districts of Kerala were greatly affected by the 2018 deluge, and they represent different regional contexts: Alappuzha is a coastal district whereas Kottayam is in the midlands. Nagapattinam district of Tamil Nadu was severely hit by tropical cyclone Gaja in 2018. Krishna district of Andhra Pradesh was hit by heavy rainfall and floods in 2020. Junagadh was affected by floods and a cyclone in 2020 and Kheda suffered from floods in 2019 (both districts belong to Gujarat). Puri district of Odisha was severely struck by cyclone Fani in 2019.

Table 3 Factor index scores, composite index scores, and ranking among districts considered

4.1 Factor indices

Table 3 presents the factor index scores and rankings computed by applying the different weighting methods for the 10 districts chosen as a representative sample. It is observed that the ranks are consistent for 8 districts for C1, and differ by one position (between 4th and 5th) for Kottayam and Nagapattinam. For C2, the rank varies within one position for Kottayam and Sivagangai (between 1st and 2nd), and for Dakshina Kannada and Junagadh (between 4th and 5th). For the Budget factor C3, all the districts considered have the same score and therefore the same rank.

The calculated Cronbach’s alpha values based on the two weighting methods for C1 and C2 were 0.994 and 0.988 respectively, which indicate high reliability. For Community Engagement factor C4, all the districts considered have consistent ranks for both the weighting methods applied.

4.2 Composite index \(\mathbf{D}\mathbf{P}\mathbf{I}\mathbf{c}\)

Normalised weighted aggregation was used to aggregate \(\mathrm{C}1\), \(\mathrm{C}2\), \(\mathrm{C}3\), and \(\mathrm{C}4\) to compute \(\mathrm{DPIc}\). Table 3 presents the computed \(\mathrm{DPIc}\) scores based on the different weighting methods. The rank-ordering for 4 districts is consistent, whereas it shifts by one position for 6 districts.
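The aggregation step can be sketched as a normalised weighted sum of the factor indices (the scores and weights below are illustrative, not the study's values):

```python
import numpy as np

def composite_index(factor_scores, factor_weights):
    """DPIc as a normalised weighted aggregation of the factor indices
    C1..C4: the weights are rescaled to sum to 1, then combined linearly
    with each district's factor index scores."""
    scores = np.asarray(factor_scores, dtype=float)   # (districts x factors)
    w = np.asarray(factor_weights, dtype=float)
    w = w / w.sum()                                   # normalised weights
    return scores @ w

C = [[0.8, 0.6, 1.0, 0.4],   # district A: C1..C4 factor indices
     [0.5, 0.7, 1.0, 0.8]]   # district B
dpi = composite_index(C, [2, 1, 1, 1])
```

Because the weights are renormalised inside the function, the resulting \(\mathrm{DPIc}\) stays on the same 0 to 1 scale as the factor indices.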

The Cronbach's alpha value of ranking of districts based on the four computation methods for DPIc was 0.994 which indicates high consistency.

4.3 Parameter reduction

Comparable data sets for the 26 indicators presented in our study may not always be readily available for all districts of India, and their compilation is likely to be time-consuming. A DPI would be a handy tool for practitioners only if it renders a quick assessment. Hence, model reduction was performed on pertinent secondary data sets of 123 districts to reduce the total number of variables to a manageable one to two underlying variables per factor. For performing model reduction, the altered models specified in Eq. (8) were obtained by fixing the corresponding random variables at their mean values. The results of model reduction applied to the factors C1 and C2 (which otherwise captured 13 and 10 indicators respectively) are presented in Table 4.

Table 4 Results of model reduction for C1 and C2

Factor C3 had only one indicator and C4 had only two, and they were retained as such.

The variables to which the model showed higher sensitivity were those which scored higher ranks, and may be deemed critical. The Resource factor, originally attributed with 13 variables, could thus be estimated with two critical variables: X8, the total number of rescue and relief personnel per 10 sq. km, and X5, the number of health service personnel per 1000 population. The two variables, out of the 10 considered for C2, to which the model was most sensitive are X26, efficacy of existing SOPs, manuals/codes, and X25, efficacy of mass media campaigns (% literacy).

This does not mean that the remaining 11 variables for C1 and 8 variables for C2 are discarded; it means only that the retained indicators are treated as random while the least sensitive ones are treated as deterministic, by keeping them as constants that may be contextually selected, for example, “national average values”.

4.4 Robustness of the developed composite index

Results of the analyses conducted to check the robustness of DPIc developed using (i) all 26 variables and (ii) 7 critical variables (2 each for C1, C2, and C4; and 1 for C3) are discussed next.

4.4.1 Sensitivity and reliability of \(\mathbf{D}\mathbf{P}\mathbf{I}\mathbf{c}\)

The rank ordering of 10 districts selected for sample analysis was considered and tabulated in Table 5 to assess the sensitivity of the CI to different weighting schemes discussed in Sect. 2.6 and to model reduction techniques discussed in Sect. 2.7.

Table 5 Ranking for districts as per computed \(\mathrm{DPIc}\) scores

Table 6 shows the average shift in rankings from the median rank. The statistic summarises the relative shift in the positions of all districts in a single number. Lower values of \({\mathrm{R}}_{s}\) indicate greater similarity of the rankings to the median ranking. The use of the RII method for factor weighting, considering all variables, shows the lowest deviation from the median rank. The \({\mathrm{R}}_{s}\) value using the TOPSIS method for factor weighting is higher, as it reflects the variability in the perceptions of the expert respondents.

Table 6 Average Rank shift of districts with different computation methods

The average shift in rank using only the critical indicators (with both the RII and TOPSIS methods) is the highest, probably because the remaining indicators are treated as deterministic by keeping them as constants, which prevents a high value of one indicator from compensating for very low values of the others.

Table 7 presents the Spearman rank-order correlation between the different methods used for computing the factor indices and the Composite Index. The correlation coefficient ranges between 0.92 and 1, indicating a very high positive correlation among the methods used. There is significant agreement among the rankings of the districts on the DPIc, since all coefficients are significant at p < 0.01.

Table 7 Spearman rank-order correlation for various methods of deriving \(\mathrm{DPIc}\)

Further, Cronbach’s alpha shows that the \(\mathrm{DPIc}\) rankings for the 10 districts considered have excellent reliability (0.99). Figure 4 illustrates a comparison of the rankings of the sample districts with respect to their computed Composite Indices. The ranks derived using the RII with Equal Weight method are plotted along with the median ranks and the 95% confidence interval of the mean of the rankings evaluated using the four methods of computation.

Fig. 4

Shift in \(\mathrm{DPIc}\) rankings of districts with different methods of computation at the 95% confidence interval

There is statistically significant agreement among the ranks despite different weighting techniques for indicators and factors being employed, as shown in Fig. 4.

A major outcome of the study is the set of seven critical attribute variables (derived from the 26 indicators) belonging to datasets regularly maintained by District Authorities, which would be a handy tool for practitioners for disaster preparedness assessments. The methodology proposed through this study is based on a conceptual framework that is adaptable to different regional contexts. As an index must evolve to remain relevant, newer, valuable datasets may become available; the developed framework may be extended to integrate them. Further, with minimal modifications, the methodology is replicable in the context of other developing countries. Moreover, the method adopted for model reduction, based on global response sensitivity measures, retains the randomness of the critical attribute variables while treating the least sensitive ones as deterministic. This implies that the seven critical variables may be used to compute the Composite Index instead of the 26 attribute variables, which simplifies the computation.

5 Conclusion

The development of indicators and an assessment framework to gauge "capacity development for disaster preparedness" for a vast and complex country like India, with its layers of hazards, vulnerabilities, and risks, is not a simple process. The intricacies of the institutional and operational mechanisms of disaster management and the inherent bureaucratic tangles add complexity. The authors attempt to develop a framework and a metric to appraise the coping capacity and levels of preparedness for a regional context in India. Guided by a literature review and key-informant interviews, a theoretical framework was developed and four factors contributing to capacity building for DP were identified, namely Resources; Communication and Coordination; Budget; and Community Engagement and Technology Transfer. Corresponding indicators were identified and further refined based on the availability, comparability, and reliability of data. Each factor was modelled as a linear weighted aggregation of the normalised indicators, applying weightings derived by different techniques from a QSE in which 7 expert categories of respondents participated. It is concluded that a Composite Index, an ensemble of four factors contributing differentially to it and represented by 26 indicators altogether, can be estimated for any regional context, the district being the unit considered in this study. This would serve as a metric to assess DP related to capacity building. Since comparable data sets for all indicators presented in our study are not always readily available for all districts of India, and their compilation is time-intensive, model reduction techniques were applied to onsite data for 123 districts spread across six states in India, and a reduced set of critical variables was developed to render the index a handy tool for practitioners, as disasters demand quick decisions.
The study reports that the Resource factor, originally attributed with 13 variables, could be estimated with two critical variables: the total number of rescue and relief personnel per 10 sq. km and the number of health service personnel per 1000 population. The Communication and Coordination factor, originally attributed with 10 variables, could be estimated with two variables: efficacy of SOPs and percentage literacy. This does not mean that the remaining variables are discarded; it means only that the critical variables are treated as random and the least sensitive ones as deterministic, by keeping them as constants that may be contextually selected, for example, "national average values". The Budget factor is attributed with one variable, the presence of an extra budget allocation for disaster management annually. The Community Engagement and Technology Transfer factor is attributed with two variables, the number of NGOs active in the region and the number of SHGs in the district.


Evaluation of the Resource factor clearly established the availability of relief and rescue personnel in a district as a crucial attribute of efficient disaster response. The evaluation of the Factor Indices demonstrated the crucial role of Self-Help Groups in supporting community engagement in disaster preparedness, response, and recovery. It was inferred that disaster preparedness measures are effective only to the extent that they tackle the unique attributes of the community and engage with the community as a whole. The evaluation also signalled an urgent need to set aside budgetary allocations specifically for disaster preparedness, over and above what is provided for disaster response.

Though the 26 indicators were selected predominantly based on the literature, key-informant interviews, and relevance to the theoretical framework developed, data availability was a primary concern, and datasets maintained by District Authorities were selected to a great extent. Hence, the indicators may not be comprehensive, which leaves room for refinement in future research. A statistical internal validation of the developed indices using sensitivity and reliability analyses (essentially a robustness analysis) is implemented in the study to examine how changes in index construction methods affect index results. Researchers consider robustness analysis to enhance overall transparency, though it is not an assurance of the sensibility of a modelled composite index (Saltelli et al. 2019; Douglas-Smith et al. 2020; Zhang et al. 2020). This remains a limitation of the results of this study. Furthermore, empirical validation of a Composite Index is fundamentally important for its proper application for intended purposes (Bakkensen et al. 2017). For example, empirical validation against the quantified losses incurred in real disaster scenarios would lend more credibility to the index as an aid to decision-making. However, the conceptual framework and methodology developed provide a baseline for further disaster preparedness assessments at regional levels.

A few of the strategies for applying the results of this study to aid in Disaster Risk Reduction of districts are:

  i. Assessing the coping capacity of a district and thereby identifying the need for intervention at higher levels of administration (state/national level).

  ii. Identifying areas where specialised training is needed for first responders.

  iii. Benchmarking districts to identify, implement, and support strategies to enhance preparedness.

  iv. Supporting policies and resources to improve the disaster preparedness of districts.

However, there are inherent weaknesses associated with Composite Indices. Composite indicators can be misleading, particularly when used to measure aspects of DP that involve a plethora of complex attributes. As Cardona (2004) postulates, owing to the seemingly ad hoc nature of the computation and the sensitivity of the results to different weighting and aggregation techniques, composite indicators may sometimes yield distorted findings. Despite these purported shortfalls, the comprehensive methodology adopted in this paper to construct the index is robust enough to fairly represent the DP levels of districts in terms of capacity, as shown by the sensitivity analysis with respect to four different weighting schemes and model reduction. The proposed conceptual framework, together with the factors, their associated indicators, and the identified critical variables, offers a premise for future researchers to build on; its applicability and usefulness need to be probed further. The methodology and findings presented in the report are envisaged to assist experts, stakeholders, and decision-makers in arriving at rational decisions regarding the identification of areas for action and the anticipation of future developments in disaster preparedness and response in similar contexts. Indicator-based measurement frameworks are very useful, as indicators act as a metric to appraise levels of preparedness and thereby help identify areas requiring augmentation. They render focused inputs for efficient allocation of resources and constructive comparisons among regions.