
1 The Multidimensional Notion of Territorial Vulnerability

Vulnerability refers to the degree to which communities and individuals are susceptible to the effects of hazardous processes, encompassing the physical, social and organizational components of social systems (Golobič and Breskvar Žaucery 2010; Tavares et al. 2015). At the same time, vulnerability refers to the ability to prepare for, respond to, and recover from external events (Cutter et al. 2009; Toro et al. 2011) that have the potential to worsen (Bradley and Smith 2004).

Vulnerability is an “integral part of the causal chain of risk”, where risk can be defined as the product of the level of damage under the conditions of use and the frequency of adverse events (Cutter et al. 2000).

The definition conforms to the following formula:

$$R = D \times F$$

where R denotes the risk, D the damage and F the frequency of harmful events.

Territorial vulnerability could affect both the frequency and the level of harm. Thus, the risk (R) should be commensurate with the degree of territorial vulnerability (V):

$$R\sim V$$

According to UNISDR (2012), understanding vulnerability is one of the foundations that support the achievement of the 10 essentials of safe and resilient cities, and is crucial to the development of local plans and policies.

Although the many definitions (Tran et al. 2010) of the notion of vulnerability highlight different facets of the same concept, they all converge on the following points:

(i) vulnerability is an intrinsic feature of a system that can be described by the use of a specific set of criteria;

(ii) the notion of vulnerability is multidimensional, as it includes health, education, social assistance, the economy, spatial planning and transportation, and their mutual relationships (Millennium Ecosystem Assessment 2005; Cutter et al. 2003; Menoni et al. 2012; Tavares et al. 2015).

The development of infrastructure in a vulnerable context amplifies the risk, as it increases the frequency and the significance of harmful events. The vulnerability of the territory with respect to the realization of infrastructures can also be associated with land consumption and with the impact on the agricultural system as well as on the environment (Mazzocchi et al. 2013; Torre et al. 2014; Oppio et al. 2016).

Finally, reducing vulnerability is a cost-effective risk management strategy (Kasperson et al. 2001) and a key element in any risk governance process. Moreover, understanding vulnerability is crucial to the development of disaster mitigation plans and policies.

Many scholars attempting to evaluate the sustainability of development policies and programs have underlined the necessity of adopting a multidimensional approach in order to assess their impacts in a comprehensive way. In this kind of decision-making process, environmental, social and economic vulnerability assessment plays a crucial role. Providing an aggregate vulnerability measure able to tackle all these issues together is important in policy making and regional planning, both from a conceptual and from an operational perspective (Granger and Hayne 2001; Munda 2010).

2 Using Composite Indicators for Assessing Territorial Vulnerability

Composite indicators play a very important role in policymaking and benchmarking (Freudenberg 2008; Saltelli 2006) as a tool to measure the complexity of environmental and societal phenomena. Although composite indicators are increasingly used because of their capacity to process a large amount of data and to communicate the outputs of the analysis in a simple way, they can be misleading and poorly reliable if not supported by a robust and clearly stated methodology. Indeed, building a composite indicator involves both theoretical and methodological assumptions, which need to be assessed carefully to avoid producing results of dubious analytic rigor (Saisana et al. 2005). To overcome this problematic issue, Nardo et al. (2005) proposed a Handbook of Composite Indicators (Ci) offering guidelines for composite indicator development. These guidelines were later recalled in the OECD Guidelines (2008), which summarized the pros and cons of using Ci and proposed a comprehensive and robust methodology. According to the OECD Guidelines, “A composite indicator is formed when individual indicators are compiled into a single index on the basis of an underlying model. The composite indicator should ideally measure multidimensional concepts which cannot be captured by a single indicator…”. Many different Ci have been developed in various fields, such as the economy, environment, society and globalization; a complete list is reported on the EU site (http://composite-indicators.jrc.ec.europa.eu/) developed by the Composite Indicator Research Group, where a 10-step guide, already tested in several different cases, is reported for the formulation of a robust Ci. The proposed guide addresses the main issues related to the development of a Ci (Saltelli et al. 2008), namely:

(i) the definition of the set of indicators in the index;

(ii) the mechanism for including and excluding indicators in the index;

(iii) the model choice for estimating the measurement error in the data;

(iv) the preliminary treatment of the indicators;

(v) the choice of the weights attached to the indicators;

(vi) the choice of the aggregation method;

(vii) the type of normalization scheme applied to the indicators to remove scale effects.

The checklist for building a composite indicator (EU checklist) and the methodology proposed are reported in Table 1.

Table 1 Checklist for the definition of composite indicators (Source OECD 2008)

In general, an index is a function of the underlying indicators. Weights are assigned to each indicator to express its relevance in the context of the phenomena to be measured. As shown in Fig. 1, the first phase of the analysis regards the definition of a theoretical framework, the data selection and the imputation of missing data. These phases concern the selection of the indicators on the basis of the literature and expert opinion, and the quantification of the selected variables.

Fig. 1 Steps of the evaluation framework (adapted from OECD 2008)

The third phase concerns a multivariate analysis, which is helpful in assessing the suitability of the data set and provides an understanding of the implications of the methodological choices, such as weighting and aggregation.

There are many analytical approaches to multivariate analysis, among others Principal Component Analysis (PCA), Factor Analysis (FA) and Cluster Analysis (CA). In this work we performed multivariate analysis using the PCA method, with the twofold aim of finding the most relevant criteria (variables) and of attaching weights to the criteria before running the aggregation phase, as sketched below. The application of PCA is described in detail later on.
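As an illustration of this step, the following sketch derives criteria weights from PCA loadings. It is a minimal sketch, not the authors' implementation: the weighting rule (squared loadings of the retained components, scaled by their explained variance and normalized to sum to one) is one common option in the OECD (2008) handbook, and the data here are synthetic placeholders.

```python
# Minimal sketch: PCA-based weighting of criteria (assumed rule: variance-
# weighted squared loadings of the retained components, normalized to 1).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_weights(X, var_threshold=0.8):
    """X: (n_municipalities, n_criteria) matrix of raw indicator values."""
    Z = StandardScaler().fit_transform(X)          # PCA needs comparable scales
    pca = PCA().fit(Z)
    # retain the smallest number of components explaining >= var_threshold
    k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_),
                            var_threshold)) + 1
    loadings = pca.components_[:k]                 # shape (k, n_criteria)
    evr = pca.explained_variance_ratio_[:k]
    w = (loadings**2 * evr[:, None]).sum(axis=0)   # squared loadings x variance
    return w / w.sum()                             # weights sum to one

# toy example standing in for the municipal data set
rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 9))                     # 1500 municipalities, 9 criteria
print(pca_weights(X).round(3))
```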

The normalization phase concerns the conversion of the indicators, expressed in different units of measurement, to a common scale. Three methods will be applied. The first method is standardization, or z-scores. This method converts indicators to a common scale with a mean of zero and a standard deviation of one. In this way, values close to the two extremes acquire higher importance than those close to the average of 0.

For each individual indicator \(x_{qc}^{t}\), the average across municipalities \(x_{qc=c}^{t}\) and the standard deviation across municipalities \(\sigma_{qc=c}^{t}\) are calculated. The normalization formula is

$$I_{qc}^{t} = \frac{x_{qc}^{t} - x_{qc=c}^{t}}{\sigma_{qc=c}^{t}}$$
(1)

so that all \(I_{qc}^{t}\) have similar dispersion across municipalities.

The second method is Min-Max normalization. This method rescales the values into the interval [0, 1].

Each indicator \(x_{qc}^{t}\) for a generic municipality c and time t is transformed into

$$I_{qc}^{t} = \frac{x_{qc}^{t} - min_{c}(x_{q}^{t})}{max_{c}(x_{q}^{t}) - min_{c}(x_{q}^{t})}$$
(2)

where \(min_{c}(x_{q}^{t})\) and \(max_{c}(x_{q}^{t})\) are the minimum and the maximum values of \(x_{qc}^{t}\) across all municipalities c at time t.

The last method is called “distance to a reference”. This normalization method makes use of a benchmark in order to evaluate the performance of the municipalities.

This method takes the ratio of the indicator \(x_{qc}^{t}\) for a generic municipality c and time t to the individual indicator \(x_{qc=c}^{t_{0}}\) for the reference municipality at the initial time \(t_{0}\).

$$I_{qc}^{t} = \frac{x_{qc}^{t}}{x_{qc=c}^{t_{0}}}$$
(3)
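The three normalization schemes of Eqs. (1)-(3) are simple to express in code. The sketch below is illustrative and assumes the indicators are the columns of a 2-D array; for the distance-to-reference method, the benchmark vector is supplied by the caller (here the column means), since the chapter does not state which reference is used.

```python
import numpy as np

def z_score(X):
    """Eq. (1): zero mean and unit standard deviation per indicator (column)."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def min_max(X):
    """Eq. (2): rescale each indicator into the interval [0, 1]."""
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / (mx - mn)

def distance_to_reference(X, ref):
    """Eq. (3): ratio of each indicator to a benchmark value.
    `ref` is assumed to be a length-n_criteria vector of reference values."""
    return X / ref

# toy data: 100 municipalities, 4 indicators
rng = np.random.default_rng(1)
X = rng.lognormal(size=(100, 4))
I1, I2 = z_score(X), min_max(X)
I3 = distance_to_reference(X, X.mean(axis=0))   # benchmark: average values
```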

Once the data have been normalized, there are still two fundamental steps in the construction of the composite indicator: weighting and aggregation. The first step aims to give different importance to the criteria, and several methods are available for making this choice. Some of these methods allow some degree of individual judgment by experts in the specific field when attaching weights to the criteria. Other methods are instead based on statistical properties, giving more weight to the criteria that are statistically more relevant. For this research, we decided to implement the PCA method to find weights that are valid from a statistical point of view, and then to compare the outcome with a linear additive aggregation with no weights applied. In practical terms, we compared a statistics-based weighting system with an “equal weights” system, and then two aggregation methods, the linear and the geometric one, as illustrated below. The results are described afterwards.
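The two aggregation rules compared here can be sketched as follows. The helper names are ours, and the geometric version assumes non-negative inputs (e.g. min-max normalized values) with a small offset to avoid taking the logarithm of zero, an implementation detail not discussed in the chapter.

```python
import numpy as np

def linear_aggregate(I, w=None):
    """Weighted arithmetic mean; w=None reproduces the 'equal weights' case."""
    m = I.shape[1]
    w = np.full(m, 1.0 / m) if w is None else np.asarray(w) / np.sum(w)
    return I @ w

def geometric_aggregate(I, w=None, eps=1e-6):
    """Weighted geometric mean; assumes I >= 0 (e.g. min-max normalized).
    eps guards against log(0) for municipalities at the minimum."""
    m = I.shape[1]
    w = np.full(m, 1.0 / m) if w is None else np.asarray(w) / np.sum(w)
    return np.exp(np.log(I + eps) @ w)

# e.g., with I the normalized criteria and w_pca the PCA-derived weights:
# ctvi_ew  = linear_aggregate(I)          # equal weights, linear
# ctvi_pca = linear_aggregate(I, w_pca)   # PCA weights, linear
# ctvi_geo = geometric_aggregate(I, w_pca)
```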

A particularly interesting phase is the uncertainty and sensitivity analysis, because when constructing a composite indicator there is always some degree of uncertainty. A combination of uncertainty and sensitivity analysis is used to assess the robustness of composite indicators (Del Giudice et al. 2014).

“Uncertainty analysis focuses on how uncertainty in the input factors propagates through the structure of the composite indicator and affects the composite indicator values. Sensitivity analysis assesses the contribution of the individual source of uncertainty to the output variance. While uncertainty analysis is used more often than sensitivity analysis and is almost always treated separately, the iterative use of uncertainty and sensitivity analysis during the development of a composite indicator could improve its structure” (Saisana et al. 2005; Tarantola et al. 2000; Gall 2007).

The results of the robustness analysis are generally reported as country rankings (in our case, municipality rankings) with their related uncertainty bounds, which are due to the uncertainties at play. This makes it possible to communicate to the user the plausible range of the composite indicator values for each municipality. The sensitivity analysis results are generally shown in terms of a sensitivity measure for each input source of uncertainty. These sensitivity measures represent how much the uncertainty in the composite indicator for a given municipality would be reduced if that particular input source of uncertainty were removed. The aim of uncertainty analysis is thus to create a statistically reliable sample against which the output variance generated by using the real values can be compared.

One way of performing this simulation is through Monte Carlo analysis, in which we look at the distribution functions of the input parameters, as derived from the estimation. For example, we may have the following scheme:

We start from a factor \(\alpha \sim N\left( {\overline{\alpha } ,\sigma_{\alpha } } \right)\), which reads: after estimation, α is known to be normally distributed with mean \(\overline{\alpha }\) and standard deviation \(\sigma_{\alpha }\).

Likewise for factors β, γ, and so on. For each of these factors we draw a sample from the respective distribution, thus producing a set of row vectors \(\left( {\alpha^{j} ,\beta^{j} , \ldots } \right)\), with j = 1, 2, …, N, in such a way that \(\left( {\alpha^{1} ,\alpha^{2} , \ldots ,\alpha^{N} } \right)\) is a sample from \(N( {\overline{\alpha}, \sigma_{\alpha } } )\), and likewise for the distribution functions of the other factors:

$$\left[ {\begin{array}{*{20}c} {\alpha^{1} } & {\beta^{1} } & {\gamma^{1} } & \ldots \\ {\alpha^{2} } & {\beta^{2} } & {\gamma^{2} } & \ldots \\ \ldots & \ldots & \ldots & \ldots \\ {\alpha^{N} } & {\beta^{N} } & {\gamma^{N} } & \ldots \\ \end{array} } \right]$$

We can then compute the model for all vectors \(\left( {\alpha^{j} ,\beta^{j} , \ldots } \right)\), thereby producing a set of N values of the model output \(y^{j}\):

$$\left[ {\begin{array}{*{20}c} {y^{1} } \\ {y^{2} } \\ \ldots \\ {y^{N} } \\ \end{array} } \right]$$

These steps constitute our uncertainty analysis. Having performed this uncertainty analysis, we can then move on to a sensitivity analysis, in order to determine which of the input parameters are most important in influencing the uncertainty in the model output.
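The sampling scheme above translates directly into code. In the sketch below the means and standard deviations of the factors are placeholders, since the estimated values are not reported in the chapter, and the model is a stand-in weighted sum rather than the actual CTVI aggregation.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000                                  # number of Monte Carlo draws

# placeholder (mean, std) pairs for the input factors alpha, beta, gamma
factors = {"alpha": (0.40, 0.05), "beta": (0.35, 0.04), "gamma": (0.25, 0.03)}

# rows are the vectors (alpha^j, beta^j, gamma^j), j = 1..N
S = np.column_stack([rng.normal(m, s, N) for m, s in factors.values()])

def model(row):
    """Stand-in for the composite-indicator model: a weighted sum of fixed
    criteria scores x; in the real case this is the CTVI aggregation."""
    x = np.array([0.6, 0.5, 0.7])           # illustrative criteria values
    return row @ x

Y = np.apply_along_axis(model, 1, S)        # model outputs y^1 ... y^N
print(f"mean={Y.mean():.3f}, "
      f"90% interval=({np.quantile(Y, 0.05):.3f}, {np.quantile(Y, 0.95):.3f})")
```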

There are several methods of sensitivity analysis based on linear regression or correlation. The most important are: the PEAR analysis, based on the simple correlation between the criteria composing the indicator and the output; the PCC analysis, based on correlation and partial correlation; and the SRC method, based on regression analysis.
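As an illustration of the first and third measures (PEAR and SRC, which are also used in the case study below), the sketch computes both from the Monte Carlo sample of the previous example; it is our own minimal implementation, not the SimLab routines used later.

```python
import numpy as np

def pear(S, Y):
    """PEAR: Pearson correlation of each input column with the output."""
    return np.array([np.corrcoef(S[:, j], Y)[0, 1] for j in range(S.shape[1])])

def src(S, Y):
    """SRC: ordinary least-squares coefficients after standardizing
    both the inputs and the output to zero mean and unit variance."""
    Zs = (S - S.mean(axis=0)) / S.std(axis=0)
    Zy = (Y - Y.mean()) / Y.std()
    coef, *_ = np.linalg.lstsq(Zs, Zy, rcond=None)
    return coef

# with S and Y from the Monte Carlo sketch above:
# print(pear(S, Y).round(3))
# print(src(S, Y).round(3))
```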

Once the composite indicator is built, a number of analyses can be carried out to better understand the performance of its components. The “back to the detail” step suggests that the intrinsic nature of the composite indicator can provide a great amount of information beyond the final outcome alone.

In this view, one can investigate which municipalities are the leaders and which are the laggards, draw spider diagrams to show the performance of one municipality with respect to the criteria, and perform many other types of analysis. The proposed methodology is then applied to the case study of the Lombardy Region. The main results are reported in Sect. 3.2.

3 Case Study

3.1 The Evaluation Framework

According to the methodological approach described in the previous section, an evaluation framework divided into eight steps (Fig. 1) has been defined and applied to a pilot case study.

The analysis has focused on the Lombardy Region, as it is one of the Italian regions with the highest level of infrastructural development (Corsi 2009).

Starting from a literature review, a theoretical framework has been developed with the aim of modelling the multidimensional concept of territorial vulnerability and of defining a composite indicator. The variables have been selected from previous studies on the analysis and evaluation of territorial vulnerability (Oppio et al. 2015; Oppio and Corsi 2017), with respect to their relevance, analytical soundness and accessibility. Three main vulnerability dimensions, each divided into criteria, have been considered: the environmental, the social and the economic one. The criteria have been measured at the municipal scale. Table 2 reports the criteria and indicators used and the way they have been calculated.

Table 2 Vulnerability assessment framework

Differently from the first efforts to measure territorial vulnerability, this paper focuses more on the methodological process than on the outputs, in order to improve the robustness and effectiveness of the Composite Territorial Vulnerability Index (CTVI). Thus, a multivariate analysis based on PCA has been performed to study the overall structure of the dataset, assess its suitability and guide the subsequent methodological choices. Furthermore, weights have again been assigned by the use of PCA, and alternative aggregation methods (linear and geometric) have been tested with reference to the theoretical framework. Finally, robustness and sensitivity analyses of the results have been carried out.

In order to support decision makers in the field of infrastructure development, the CTVI has been mapped by means of GIS.
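A choropleth of the CTVI can be produced with standard open-source tooling as well. The sketch below uses geopandas; the file names, the ISTAT code column and the color map are hypothetical, as the chapter does not describe the GIS workflow.

```python
# Minimal sketch: choropleth of the CTVI at municipal scale (geopandas).
# File names and the join key 'istat_code' are hypothetical placeholders.
import geopandas as gpd
import pandas as pd

municipalities = gpd.read_file("lombardy_municipalities.shp")  # boundaries
ctvi = pd.read_csv("ctvi_pca.csv")           # columns: istat_code, ctvi

gdf = municipalities.merge(ctvi, on="istat_code", how="left")
ax = gdf.plot(column="ctvi", cmap="OrRd", legend=True, figsize=(8, 10))
ax.set_title("Composite Territorial Vulnerability Index (CTVI_PCA)")
ax.set_axis_off()
ax.figure.savefig("ctvi_map.png", dpi=200)
```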

3.2 Results

In order to perform the robustness, uncertainty and sensitivity analyses, the Monte Carlo method has been applied by generating new samples with a high number of observations and by simulating probability density functions similar to those of the existing criteria. Thus, a statistical base comparable with the case under study has been obtained. The Equal Weights and the Weighted Sum aggregation models have been applied to these new samples in order to check whether the simulation yields similar results.

More in depth, the uncertainty analysis consists in examining the probability density functions of the two CTVIs with respect to the width of the interval of their values.

A first consideration is that the probability density function curves of the dependent variables as calculated on the simulated criteria are very similar to those calculated on the real values. This is true both in terms of the shape of the probability density functions, as they all have a normal distribution, and in terms of interval values. A second consideration is that some degree of uncertainty exists, as the probability density function interval is quite large (Figs. 2 and 3). To understand which criteria determine the distribution of values in the probability density function more than others, a sensitivity analysis has been carried out.

Fig. 2 Probability density functions of CTVI_EW and CTVI_PCA for the generated sample (SimLab)

Fig. 3 Probability density functions of CTVI_EW and CTVI_PCA generated with real values from our dataset (Stata 13)

As the CTVI has been defined by assuming a linear relation between the dependent variable and the criteria, and therefore using linear aggregation models, the sensitivity analysis has been based on the Pearson Product Moment Correlation Coefficient (PEAR) and the Standardized Regression Coefficients (SRC). These sensitivity analysis methods have been applied to the CTVI Equal Weights (CTVI_EW) and the CTVI Principal Component Analysis (CTVI_PCA). According to the results shown in Figs. 4 and 5, the criteria that give the lowest contribution to the territorial vulnerability evaluation are the economic ones, whilst the environmental criteria explain most of the variance of the dependent variable.

Fig. 4 Sensitivity analysis on CTVI_PCA by PEAR (left) and SRC (right)

Fig. 5 Sensitivity analysis on CTVI_EW by PEAR (left) and SRC (right)

The results of these analyses show that the two composite indicators tested, CTVI_EW and CTVI_PCA, generate very similar outcomes, thus suggesting almost the same insights in terms of territorial vulnerability. Although CTVI_PCA includes a weighting system, this does not influence the final results compared with those obtained by CTVI_EW.

Given these premises, the linear additive model, combined with the PCA weighting system, seems to be a promising measure of the territorial vulnerability of the Lombardy Region and a sound basis for investigating the potential use of territorial vulnerability maps in the field of infrastructure development (Fig. 6).

Fig. 6 Spatial overlay of CTVI_PCA on the infrastructural development program of the Lombardy Region

4 Conclusions

The development of a composite indicator for assessing territorial vulnerability can be integrated into the regional planning system, in particular to support the definition of the strategic socioeconomic objectives of the regional government.

The uncertainty analysis carried out makes it possible to evaluate the range of variability of each indicator and thus to estimate differences among municipalities in terms of territorial vulnerability. Moreover, the sensitivity analysis highlights the criteria that are significant in the definition of the composite indicator. These analyses contribute, on the one hand, to verifying the robustness of the results obtained and, on the other hand, to explaining the topic under investigation in greater depth.

The overlay of the Composite Territorial Vulnerability Index with the infrastructure plan effectively supports feasibility studies of infrastructural development by highlighting territorial weaknesses and strengths. Furthermore, the identification of alarming situations is helpful for programming mitigation interventions.