1 Introduction

A fully probabilistic approach to the risk problem, from an actuarial point of view, was first proposed by Filip Lundberg in his famous doctoral thesis of 1903 (Lundberg 1903). Around 1930, Harald Cramér formalized Lundberg’s theory into what today is known as ruin theory (Cramér 1930). Lundberg defined an income-outcome model in which an insurance company starts its operation with a certain capital amount, which increases over time as premiums are collected. Moreover, losses (that the company must cover) occur randomly in time. If, due to the payment of claims, the capital falls below zero, the company faces bankruptcy.

Naturally, ruin theory considers that the occurrence of claims is not deterministic. Lundberg proved that the occurrence of losses in time can be modeled as a Poisson process. In fact, any renewal processFootnote 1 is valid within ruin theory (Sparre Andersen 1957). A Poisson process is a stochastic point process, widely used in multiple applications in science and engineering, which models the occurrence of events as completely random in time. The events, within this context, do not refer to hazardous events but to the occurrence of losses, independently of their origin. This is why ruin theory is suitable for any phenomenon, natural or not.

The Poisson process is defined in terms of a single parameter, its intensity or rate. In risk assessment, this parameter is the loss exceedance rate: the inverse of the average time between occurrences of events that exceed a loss amount p. Therefore, when calculating risk on a portfolio of exposed elements (i.e., the probability that a certain loss p is exceeded within a time window), its exceedance rate v(p) must be calculated as a function of the probability of occurrence of any of the possible hazardous events that can cause the exceedance of p. This defines a Poisson process that enables the estimation of the probability of exceedance of loss p in any time frame.

As expected, the assessment of the exceedance rates v(p) is not limited to a unique value of p. Therefore, the loss exceedance curve (LEC) is calculated (i.e., v(p) is calculated for any p). The LEC provides an exhaustive quantification of the risk problem in terms of probability. It will never be possible to know the exact magnitude of a future disaster (in terms of the losses and consequences it will cause), but with the LEC it is possible to know the probability that any loss amount will be exceeded within any time frame and to use this information to support the decision-making process for risk reduction. The LEC is recognized to be the most robust tool for representing catastrophe risk (Cardona 1986; Ordaz 2000).

The LEC exhibits well-known limitations, such as implicit stationarity, lack of flexibility to incorporate non-probabilistic uncertainty models, description of physical and economic impact only, and increased difficulty in communicating risk to nontechnical stakeholders. This chapter provides insights on ways to overcome some of these limitations, except for the communication issue. Nowadays there is still a strong bias toward calculating and communicating disaster risk in deterministic terms, through the definition of one or a few large events, because the simulated impacts are easier to understand. This, however, contradicts the very definition of risk, in which uncertainty plays a major role. Therefore, deterministic analysis of disasters is not treated in this chapter.

2 Assessment of the Loss Exceedance Curve

There is a well-known differentiation between extensive and intensive risk. Extensive risk refers to high-frequency, low-severity disasters, usually distributed over a wide portion of the territory, which account for an important segment of the LEC. This segment is best estimated by retrospective analysis, given the impossibility of analytically modeling many complex phenomena when large territories must be covered. On the other hand, the limited amount of data available on previous disasters leads to important underestimations of the impact of high-severity, low-frequency (intensive risk) events. Prospective risk assessment complements the historical information by simulating future disasters which, based on scientific evidence, are likely to occur but have not occurred yet. A hybrid model, formed by both retrospective and prospective analyses, accounts for both extensive and intensive risk (Velásquez et al. 2014). As an example, Fig. 14.1 shows a hybrid loss exceedance curve for Nepal.

Fig. 14.1

Hybrid, intensive, and extensive loss curves for Nepal. (From Velásquez 2015)

2.1 Retrospective Assessment

The purpose is to obtain the best estimate of λ, the parameter of the Poisson process associated with a loss amount. The value of λ is equal to the loss exceedance rate for a reference loss amount. For an overview of the Poisson point process, the reader is referred to Kingman (1992) or Soong (2004).

When performing retrospective probabilistic risk assessment, all historical valuated disasters (n in total) are usually arranged in a time-loss plot as shown in Fig. 14.2. By setting an arbitrary loss amount p, all the events that exceed p can be identified from the plot (see Fig. 14.2), and the inter-event times for p become evident. The set of observed values (T1, T2, …, Tn) is used to estimate the parameter λ of the exponential distribution that describes the inter-event times, taking advantage of the fact that it is the same λ of the Poisson process of interest.

Fig. 14.2

Hypothetical history of valuated disasters

A good estimator for λ is:

$$ \Lambda =\frac{n-1}{\sum \limits_{i=1}^n{T}_i} $$
(14.1)

It is unbiased (E{Λ} = λ), consistent (Λ → λ as n → ∞), and sufficient. Though it is not of minimum variance,Footnote 2 it is a good-enough estimator. Different estimators may nevertheless be proposed by bringing the variance closer to the Cramér-Rao lower bound. As expected, the coefficient of variation (CoV) of Λ decreases with increasing n. This means that high and infrequently exceeded loss amounts will be related to more uncertain estimations of λ, while for losses exceeded many times in the historical period, the assessment of λ will be more precise:

$$ \mathrm{CoV}\left\{\Lambda \right\}=\frac{1}{\sqrt{n-2}} $$
(14.2)

With the appropriate estimator for the loss exceedance rates, the assessment of the LEC is straightforward. The process consists of setting different amounts for the reference loss p and computing Λ as an estimator for λ using Eq. 14.1 and the observed inter-event times. Figure 14.3 shows a diagram that summarizes this process.
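As a minimal illustration of this process, the following Python sketch estimates λ and its CoV (Eqs. 14.1 and 14.2) for several reference losses from a purely hypothetical disaster history; the event times, losses, and the convention of measuring the first inter-event time from the start of observation are assumptions made only for the example.

```python
# Sketch of the retrospective estimation (Eqs. 14.1 and 14.2); data are hypothetical.
import numpy as np

event_times = np.array([1.2, 3.5, 4.1, 7.8, 9.0, 12.4, 15.3, 18.9, 22.0, 25.6])   # years
event_losses = np.array([5, 60, 12, 150, 8, 30, 500, 20, 75, 10], dtype=float)    # million USD

def exceedance_rate(p):
    """Estimate lambda and its CoV for the reference loss p from inter-event times."""
    t_exceed = np.sort(event_times[event_losses > p])
    n = len(t_exceed)
    if n < 3:                                      # too few exceedances for Eqs. 14.1-14.2
        return np.nan, np.nan
    inter_event = np.diff(t_exceed, prepend=0.0)   # T1, T2, ..., Tn
    lam = (n - 1) / inter_event.sum()              # Eq. 14.1
    cov = 1.0 / np.sqrt(n - 2)                     # Eq. 14.2
    return lam, cov

# Retrospective LEC: repeat the estimation for several reference losses p
for p in (5.0, 25.0, 100.0):
    lam, cov = exceedance_rate(p)
    print(f"p = {p:6.1f}  lambda = {lam:.3f} 1/yr  CoV = {cov:.2f}")
```

Note how the largest reference loss yields no estimate at all, which anticipates the limitation discussed below for infrequently exceeded losses.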

Fig. 14.3

Graphical representation of the calculation of the loss exceedance curve

As mentioned, the uncertainty of the estimation of the loss exceedance rates depends on the number of observed inter-event times in the historical information. This translates into a sort of “uncertainty band” that gives information on the quality of the assessment. As can be seen in Fig. 14.4, for small losses the assessment is good enough and it is logical to rely on the retrospective approach. However, for higher losses, the quality of the assessment rapidly decreases. In addition, the historical information is limited by the maximum observed loss. This is the reason why, for large losses, the exceedance rates cannot be estimated statistically.

Fig. 14.4

Retrospective loss exceedance curve. The blue line is the mean estimation of λ and the gray dashed lines are +/− one standard deviation

2.2 Prospective Assessment

When undertaking a probabilistic catastrophe risk analysis, the relevant components of risk, which include the exposed assets, their physical vulnerability, and the hazard intensities, must be represented in such a way that they can be consistently estimated through a rigorous and robust procedure, in both analytical and conceptual terms. The probabilistic risk model comprises three components:

  • Hazard assessment: For each of the natural phenomena considered, a set of events is defined along with their respective frequencies of occurrence, forming an exhaustive representation of hazard. Each scenario contains the spatial distribution of the probability parameters to model the intensities as random variables.

  • Exposure assessment: An inventory of the exposed assets must be constructed, specifying the geographical location of the asset, its replacement value or fiscal liability cost, and its building class.

  • Vulnerability assessment: For each building class, a vulnerability function is defined for each type of hazard. This function characterizes the structural behavior of the asset during the occurrence of the hazard event. Vulnerability functions provide the probability distribution of the loss as a function of increasing hazard intensity.

Because the occurrence of hazardous events cannot be predicted, it is common practice to use sets of scenarios, obtained as an output of the hazard model. The set of scenarios contains all the possible ways in which the hazard phenomenon may manifest in terms of both frequency and severity. Event-based probabilistic risk assessments have been extensively applied in the past for different hazards at different scales (see, for example, Bernal et al. 2017a, b, Salgado-Gálvez et al. 2017; Salgado-Gálvez et al. 2015; Cardona et al. 2014; Salgado-Gálvez et al. 2014; Wong 2014; Niño et al. 2015; Quijano et al. 2014; Torres et al. 2013; Jenkins et al. 2012).

2.2.1 Hazard Representation

Consider a loss event, A, defined within the universe of all possible losses (or sampling space) S (Fig. 14.5). Event A is a subset of S, and it is defined in a completely arbitrary way (its definition depends exclusively on what question needs to be answered). Once defined, what is required to know about event A is its probability of occurrence, denoted P(A).

Fig. 14.5

Arbitrary event A within the sampling space S of the loss events

Consider now a subdivision of the sampling space S into a finite number of mutually exclusive and collectively exhaustive base events, denoted Bi, as shown in Fig. 14.6.

Fig. 14.6

Subdivision of the sampling space S into base events Bi

Given that event A can be defined as the union of its intersections with the base events Bi, and making use of the third axiom of probability theory, P(A) can be calculated as:

$$ P(A)=\sum \limits_{i=1}^nP\left(A|{B}_i\right)\cdot P\left({B}_i\right) $$
(14.3)

which is one of the simplest expressions of the total probability theorem. In summary, the definition of the events of interest A is completely arbitrary, so P(A) is calculated as a function of the probability of loss base events B. This implies that the base events B cannot be defined arbitrarily.

The collection of base events B is constructed from the definition of hazard scenarios. Each base event B in the loss domain corresponds to the loss caused by one hazard scenario. These scenarios must be mutually exclusive (i.e., they cannot occur simultaneously) and collectively exhaustive (i.e., they are all the ways in which the hazard may manifest). In addition, each scenario has an annual frequency of occurrence (analogous to its probability of occurrence P(B)) and a spatial distribution of the random intensity (i.e., the spatial distribution of its probability moments). The intensity corresponds to the physical variable representing the local severity of the phenomenon. For example, in the case of earthquakes, spectral accelerations are commonly used. As will be explained later, a spatial correlation coefficient for the intensity is not needed.

Failing to represent mutually exclusive hazard scenarios leads to an incongruence when adopting the Poisson point process to model the occurrence of loss in time.Footnote 3 In the case of hazards that can generate simultaneous intensities associated with different effects (e.g., tropical cyclones), a totalizing approach such as the one proposed by Ordaz (2014) and Jaimes et al. (2015) is applied to each scenario. On the other hand, the requirement that the hazard scenarios be collectively exhaustive has nothing to do with their total number. For example, two scenarios may be enough to represent some hazard in an exhaustive way, if they are known to be the only ones that can occur. Similarly, a million scenarios, although seeming like an exceptionally large representative number, do not necessarily guarantee the fulfillment of this requirement.

Hazard is commonly represented as maps of uniform return period (i.e., uniform hazard maps). These maps are obtained by calculating the intensity exceedance rates, which are analogous to the loss exceedance rates but provide information only about the intensity at a location. The exceedance rate of intensity a, denoted v(a), is calculated as follows:

$$ v(a)=\sum \limits_{i=1}^NP\left(A>a|{\mathrm{Event}}_i\right)\cdot {F}_A\left({\mathrm{Event}}_i\right) $$
(14.4)

where N is the total number of hazard scenarios, FA(Eventi) is the annual frequency of occurrence of event i, and P(A > a | Eventi) is the probability of exceeding a, given that event i occurred. Note that Eq. 14.4 is another form of the total probability theorem. By performing this calculation for different intensity levels, it is then possible to obtain the intensity exceedance curve (also known as the hazard curve) that relates different hazard intensities with their associated exceedance rates at each calculation site (see Fig. 14.7). If this curve is calculated for several nodes on a grid arrangement, then, by selecting any fixed exceedance rate (or its inverse value, the return period), it is possible to obtain hazard maps. Note that, even though hazard maps can be obtained from the set of scenarios, the process is not reversible: it is impossible to define the scenarios from uniform hazard maps.
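A sketch of Eq. 14.4 at a single site is given below; the three scenarios, their annual frequencies, and the assumption that each produces a lognormally distributed intensity with the stated median and log-standard deviation at the site are all hypothetical.

```python
# Sketch of the intensity exceedance rate at one site (Eq. 14.4); inputs are hypothetical.
from scipy import stats

# Per-scenario: (annual frequency, median intensity in g, log-standard deviation)
scenarios = [(0.05, 0.10, 0.6),
             (0.01, 0.25, 0.6),
             (0.002, 0.55, 0.7)]

def intensity_exceedance_rate(a):
    """v(a) = sum_i P(A > a | Event_i) * F_A(Event_i)."""
    return sum((1.0 - stats.lognorm.cdf(a, s=beta, scale=median)) * freq
               for freq, median, beta in scenarios)

for a in (0.1, 0.3, 0.6):
    print(f"a = {a:.2f} g   v(a) = {intensity_exceedance_rate(a):.5f} 1/yr")
```

Evaluating this function over a range of intensities yields the hazard curve of Fig. 14.7.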

Fig. 14.7

Example of a hazard curve. Both axes are in logarithmic scale

2.2.2 Exposed Elements

An exposed element is any object susceptible to damage or loss because of the occurrence of hazard events. Furthermore, exposed elements have an implicit component associated with loss liability. If, for example, the exposed elements are dwellings of low socioeconomic income, then the losses add to the fiscal responsibility of the State, given the inability of the homeowners to cope with the situation. It is important to incorporate the liability for losses directly into the definition of the exposed elements. For this reason, the exposed elements are grouped in portfolios.

The exposure model is the collection of portfolios of assets (buildings and infrastructure) that can be affected by the occurrence of natural hazards. It is an essential component of risk analysis and determines the resolution (or scale) of the results. Highly detailed exposure data is always desirable; however, when detailed information is not available, or an estimation over a wide region is intended (e.g., a full country), it is necessary to carry out approximate estimations that account for the inventory of exposed elements. This is usually referred to as a proxy exposure model (see Fig. 14.8).

Fig. 14.8

Example of a low-resolution exposure model for Bogotá, Colombia

The description, characterization, and appraisal of the physical inventory of exposed elements for a probabilistic risk assessment always present serious challenges for modeling regardless of scale. Several assumptions are usually made which naturally increase the epistemic uncertainty in the risk modeling, even in those cases where detailed information is available (e.g., a building-by-building inventory; see Fig. 14.9). Fortunately, when quantifying losses for hazard scenarios, the aggregation of the losses at individual elements (e.g., buildings) results in a progressive reduction of the uncertainty in the total loss (law of large numbers). In any case, there are always doubts regarding the accuracy and reliability of exposure data. This highlights the importance of modeling the loss as a random variable.

Fig. 14.9

Example of a high-resolution exposure model for Bogotá, Colombia

2.2.3 Vulnerability

The vulnerability of exposed elements is defined using mathematical functions that relate the intensity to the direct physical impact. Such functions are called vulnerability functions, and they must be estimated and assigned for each one of the construction classes identified in the exposure database. Vulnerability functions provide the variation of the probability moments of the relative loss with increasing intensity (see Fig. 14.10).

Fig. 14.10

Example of a vulnerability function

Vulnerability functions are useful to describe the expected behavior of the different construction classes. Aspects such as construction quality and the degree to which builders complied with local or regional building codes must be considered for the different classes of buildings. Figure 14.11 presents an example of flood vulnerability functions, showing how the expected damage increases as a function of the water depth for each building class.

Fig. 14.11

Example of flood vulnerability functions for different building classes

It is worth mentioning that this type of vulnerability modeling aims to capture the general characteristics compatible with the level of resolution used in the exposure database; no specific considerations should be made for any structural system. Every single asset identified and included in the database must be associated with a vulnerability function.

2.2.4 Risk Assessment (Calculation of the LEC)

The calculation of the LEC follows this sequence of steps:

  • Step 1: Loss in a single exposed element

The intensity occurring at the location of an exposed element and the loss it causes are both random variables. The relationship between hazard (intensity a) and vulnerability (loss p given intensity a), for a single exposed element, is modeled by applying the total probability theorem, integrating over the complete domain of the intensity:

$$ {f}_P(p)=\underset{0}{\overset{\infty }{\int }}{f}_A(a)\cdot {f}_P\left(p|a\right)\cdot da $$
(14.5)

where fP(p) is the probability density function (pdf) of the loss, fA(a) is the pdf of the intensity at the location of the exposed element, and fP(p|a) is the intensity-dependent pdf of the loss at the exposed element. Note that the integral covers the full domain of the intensity, so there is no need to perform simulations of the intensity field for each scenario.
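The sketch below evaluates Eq. 14.5 numerically for one exposed element under stated assumptions: the scenario intensity at the site is lognormal, the vulnerability function returns the mean and variance of the relative loss, and the conditional loss given the intensity is beta-distributed. All functional forms and numbers are hypothetical.

```python
# Numerical evaluation of Eq. 14.5 for one exposed element (all inputs hypothetical).
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

def beta_params(mean, var):
    """Moment-match a Beta(a, b) distribution to the relative loss."""
    k = mean * (1.0 - mean) / var - 1.0
    return mean * k, (1.0 - mean) * k

def vulnerability(a):
    """Hypothetical vulnerability function: moments of the relative loss given a."""
    mean = 1.0 - np.exp(-2.0 * a)          # expected relative loss
    var = 0.5 * mean * (1.0 - mean)        # simple assumed variance model
    return mean, var

def loss_pdf(p, median_a=0.3, beta_lnA=0.6, n_quad=400):
    """f_P(p) = integral of f_A(a) * f_P(p|a) da, by trapezoidal quadrature."""
    a = np.linspace(1e-3, 3.0, n_quad)
    f_a = stats.lognorm.pdf(a, s=beta_lnA, scale=median_a)   # pdf of the intensity
    mean, var = vulnerability(a)
    alpha, beta = beta_params(mean, var)
    f_p_given_a = stats.beta.pdf(p, alpha, beta)             # conditional loss pdf
    return trapezoid(f_a * f_p_given_a, a)

print(loss_pdf(0.2))   # density of a 20 % relative loss under this scenario
```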

  • Step 2: Scenario loss

Step 1 is repeated for all the elements in the portfolio. If the individual losses of the exposed elements were independent, then the pdf of the total loss would simply be the successive convolution of the individual loss pdfs (rendering a normal distribution according to the central limit theorem). However, it is recognized that there is a certain amount of correlation between the losses for the same scenario. Under this condition, the total loss is modeled by adding the probability moments of the individual losses:

$$ {m}_P=\sum \limits_{j=1}^{NE}{m}_{Pj} $$
(14.6)

and,

$$ {\sigma}_P^2=\sum \limits_{j=1}^{NE}{\sigma}_{Pj}^2+2\sum \limits_{j=2}^{NE}\sum \limits_{k=1}^{j-1}{\rho}_{k,j}\,{\sigma}_{Pk}\,{\sigma}_{Pj} $$
(14.7)

where mPj and \( {\sigma}_{Pj}^2 \) are the mean and variance of the loss in the jth exposed element, ρk, j is the correlation coefficient of the loss in elements k and j, NE is the total number of exposed elements, and mP and \( {\sigma}_P^2 \) are the mean and variance of the total scenario loss. There is no general methodology to determine the value of ρ. In practice, each modeler chooses a value by checking the coherence of the results; a commonly used blanket value is 0.3. From the probability moments of the total scenario loss, a beta distribution is parametrized (see, e.g., ATC-13 1985); a minimal numerical sketch of this aggregation and beta fit is given after the list below. The choice of a beta distribution to describe the loss, however arbitrary, is based on three properties that make it very convenient for this purpose:

  • Its domain is the interval [0,1], i.e., it directly fits the description of a relative loss.

  • It accommodates multiple shapes, showing different mode locations (left-sided, symmetrical, right-sided) and even adopting an exponential-like form (both increasing and decreasing).

  • It is characterized by only two parameters.
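As referenced above, the following sketch aggregates per-element loss moments with the blanket correlation of 0.3 (Eqs. 14.6 and 14.7) and fits a beta distribution to the total scenario loss by moment matching; the element moments and the exposed value are hypothetical.

```python
# Sketch of Step 2: moment aggregation (Eqs. 14.6-14.7) and beta fit (inputs hypothetical).
import numpy as np
from scipy import stats

m = np.array([1.5, 0.8, 2.3, 0.4])     # mean loss of each exposed element (million USD)
s = np.array([0.9, 0.5, 1.4, 0.3])     # standard deviation of each element
rho = 0.3                               # blanket loss correlation coefficient
exposed_value = 20.0                    # total exposed value (million USD)

m_total = m.sum()                                       # Eq. 14.6
corr = np.full((len(s), len(s)), rho)
np.fill_diagonal(corr, 1.0)
var_total = s @ corr @ s                                # Eq. 14.7 as a quadratic form

# Moment-match a beta distribution to the relative total loss
mu, var = m_total / exposed_value, var_total / exposed_value**2
k = mu * (1.0 - mu) / var - 1.0
a, b = mu * k, (1.0 - mu) * k

print(f"E[P] = {m_total:.2f}, sd[P] = {np.sqrt(var_total):.2f}, Beta(a={a:.2f}, b={b:.2f})")
print("P(P > 5 | scenario) =", round(1.0 - stats.beta.cdf(5.0 / exposed_value, a, b), 3))
```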

  • Step 3: Totalize the loss

Step 2 is repeated for all hazard scenarios so that a set of loss pdfs is obtained, each corresponding to the total loss for a single scenario. To totalize the effect of all scenarios, the total probability theorem is used in the same way as Eq. 14.4:

$$ \nu (p)=\sum \limits_{i=1}^NP\left(P>p|{\mathrm{Event}}_i\right)\cdot {F}_A\left({\mathrm{Event}}_i\right) $$
(14.8)

where v(p) is the rate of exceedance of loss p, N is the total number of hazard scenarios, FA(Eventi) is the annual frequency of occurrence of the ith hazard event, and P(P > p | Eventi) is the probability of exceeding p, given that event i occurred. The sum is made over all hazard scenarios. Figure 14.12 summarizes the calculation process.
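A sketch of Step 3 follows, assuming (hypothetically) that the total loss of each scenario has already been fitted to a beta distribution as in Step 2 and that the annual frequencies are known.

```python
# Sketch of the loss exceedance curve (Eq. 14.8); scenario parameters are hypothetical.
from scipy import stats

# Per-scenario: (annual frequency, beta a, beta b, exposed value in million USD)
scenario_losses = [(0.05, 2.0, 50.0, 100.0),
                   (0.01, 3.0, 20.0, 100.0),
                   (0.002, 4.0, 8.0, 100.0)]

def loss_exceedance_rate(p):
    """v(p) = sum_i P(P > p | Event_i) * F_A(Event_i)."""
    return sum((1.0 - stats.beta.cdf(p / value, a, b)) * freq
               for freq, a, b, value in scenario_losses)

for p in (1.0, 5.0, 20.0, 50.0):
    print(f"p = {p:5.1f}  v(p) = {loss_exceedance_rate(p):.5f} 1/yr")
```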

Fig. 14.12

Flowchart of the risk assessment process

3 Risk Metrics

As indicated above, the LEC contains all the information required to characterize the process of occurrence of losses. However, it is sometimes impractical to use the complete curve. Instead, it is convenient to use specific metrics that allow the risk to be expressed by a single number. The most used metrics are described next.

3.1 Probable Maximum Loss (PML)

This is a loss that does not occur frequently, that is, a loss usually associated with long return periods (or, alternatively, low exceedance rates). The return period is the inverse of the exceedance rate (i.e., it is the expected value of the inter-event time):

$$ Tr(p)=\frac{1}{\nu (p)} $$
(14.9)

There is not a single PML value, but a complete curve which is analogous to the LEC. However, it is common practice to define a PML value by fixing a return period. There are no universally accepted standards to define what is meant by “not very frequently.” In the insurance industry, for example, the return periods used to define the PML range from 200 up to 2500 years (Fig. 14.13).
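Reading a PML off the LEC reduces to an interpolation at the chosen return period; the short sketch below does so on a hypothetical curve, interpolating in log-log space.

```python
# Sketch: PML for a fixed return period, interpolated from a hypothetical LEC.
import numpy as np

loss = np.array([1.0, 5.0, 20.0, 50.0, 100.0])        # million USD
v = np.array([0.2, 0.05, 0.01, 0.002, 0.0004])        # exceedance rates v(p), 1/yr

def pml(return_period):
    """Loss whose exceedance rate is 1/Tr, interpolated on log(v) vs log(loss)."""
    target = 1.0 / return_period
    # np.interp requires increasing x, hence the reversed (increasing-rate) arrays
    return float(np.exp(np.interp(np.log(target), np.log(v[::-1]), np.log(loss[::-1]))))

for tr in (200, 500, 1500, 2500):
    print(f"Tr = {tr:5d} yr   PML = {pml(tr):7.1f} million USD")
```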

Fig. 14.13

Risk curves. Left: in terms of the exceedance rate (loss exceedance curve). Right: in terms of the return period (PML curve). Note that the value of the PML requires an arbitrary selection of the return period

3.2 Average Annual Loss (AAL)

The average annual loss (AAL) is an important indicator because it integrates into a single value the effect, in terms of loss, of the occurrence of hazard scenarios over vulnerable exposed elements. It is considered the most robust risk indicator, not only for its ability to summarize the loss-time process in a single number but also for its low sensitivity to uncertainty.

The AAL corresponds to the expected value of the annual loss and indicates the annual amount that would have to be paid to compensate, in the long term, for all future losses. In a simple insurance scheme, the AAL would be the annual pure premium. It is calculated as the integral of the loss exceedance curve:

$$ \mathrm{AAL}=\underset{0}{\overset{\infty }{\int }}\nu (p) dp $$
(14.10)

From the set of loss events, AAL can be calculated as:

$$ \mathrm{AAL}=\sum \limits_{i=1}^NE\left(p|{\mathrm{Event}}_i\right)\cdot {F}_A\left({\mathrm{Event}}_i\right) $$
(14.11)

where E(p|Eventi) is the expected value of the loss given the occurrence of event i. Furthermore, in those cases in which the hazard is not expressed as a set of scenarios but as a collection of uniform hazard maps, despite the impossibility of fully assessing risk, it is possible to calculate the AAL as:

$$ \mathrm{AAL}=\sum \limits_{i=1}^{NE}\underset{0}{\overset{\infty }{\int }}-\frac{1}{\nu (0)}\frac{d\nu (a)}{d a}E\left(p|a\right) da $$
(14.12)

where the quantity E(p|a) is obtained from the vulnerability functions of the exposed elements. The AAL is the only mappable risk metric. Risk maps are a remarkably effective communication tool. High-resolution AAL maps, both absolute and relative (to the exposed value of each asset), are highly desirable outcomes to orient risk management. Figure 14.14 presents an example of AAL maps for Bogotá, Colombia.
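The sketch below computes the AAL in the two ways just described: Eq. 14.10 by numerical integration of a hypothetical LEC, and Eq. 14.11 as a frequency-weighted sum over hypothetical scenario losses.

```python
# Sketch of the AAL (Eqs. 14.10 and 14.11); the LEC and event losses are hypothetical.
import numpy as np
from scipy.integrate import trapezoid

loss = np.array([0.0, 1.0, 5.0, 20.0, 50.0, 100.0])    # million USD
v = np.array([0.5, 0.2, 0.05, 0.01, 0.002, 0.0004])    # v(p), 1/yr

aal_from_lec = trapezoid(v, loss)                      # Eq. 14.10

# Eq. 14.11: expected loss given each event, weighted by its annual frequency
events = [(0.05, 3.8), (0.01, 13.0), (0.002, 33.3)]    # (F_A, E[p | Event_i])
aal_from_events = sum(freq * ep for freq, ep in events)

print(f"AAL (integral of the LEC) = {aal_from_lec:.2f} million USD/yr")
print(f"AAL (sum over events)     = {aal_from_events:.2f} million USD/yr")
```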

Fig. 14.14

Maps of multi-hazard (earthquake and landslide) AAL for Bogotá, Colombia. Left, absolute; right, relative. The exposure database has more than 1 million buildings. (From Cardona et al. 2016)

3.3 Other Metrics

In addition to the abovementioned metrics, many results may be obtained from the LEC by the direct application of the Poisson point process that describes the loss occurrence in time.

3.3.1 Probability of Ruin

A commonly used metric in insurance is the probability of ruin. It is defined as the probability of exceeding a reference PML in an operational period. In general, the probability of exceeding a loss amount p at least once in T years is:

$$ P\left(P>p\right)=1-{e}^{-v(p)\cdot T} $$
(14.13)

Equation 14.13 has the advantage of being a simple, standard formula: knowing only the return period of the loss and the operational time window (or exposure time), it is possible to calculate the exceedance probability of the loss.

3.3.2 Inter-Event Times

In many risk applications, it is necessary to make inferences on the time between loss events. The pdf of the inter-event times is:

$$ {f}_T(t)=\nu (p){e}^{-\nu (p)t} $$
(14.14)

This is particularly useful when testing the effectiveness of land use or risk management plans which are usually executed gradually in the short and medium term.

3.3.3 Number of Events

In many risk applications, it is necessary to make inferences on the number of loss events expected to occur in a fixed time window. The probability mass function of the number of events in time window T is:

$$ {p}_N=\frac{{\left(\nu (p)\cdot T\right)}^N{e}^{-\nu (p)\cdot T}}{N!} $$
(14.15)

This is particularly useful when designing risk management instruments that require reinstallations. For example, some financial protection instruments, as well as some structural protection devices, are commonly designed considering reinstallations.

3.3.4 Next Event

It is possible to estimate the probability of exceeding loss p in the next event (or any randomly selected event):

$$ \Pr \left(P>p\right)=\frac{\nu (p)}{v(0)} $$
(14.16)

This result is quite useful for emergency preparedness activities, as well as for quantifying the cost of financial instruments and mitigation strategies.
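The sketch below evaluates Eqs. 14.13, 14.14, 14.15, and 14.16 for a hypothetical loss level with v(p) = 0.01 per year and v(0) = 0.5 per year.

```python
# Sketch of the Poisson-derived metrics (Eqs. 14.13-14.16); rates are hypothetical.
from math import exp, factorial

v_p, v_0 = 0.01, 0.5      # exceedance rates of loss p and of any loss (1/yr)
T = 50.0                  # exposure time (years)

prob_ruin = 1.0 - exp(-v_p * T)                        # Eq. 14.13
t = 25.0
f_T = v_p * exp(-v_p * t)                              # Eq. 14.14, pdf at t years
N = 2
p_N = (v_p * T) ** N * exp(-v_p * T) / factorial(N)    # Eq. 14.15
p_next = v_p / v_0                                     # Eq. 14.16

print(f"P(exceed p at least once in {T:.0f} yr) = {prob_ruin:.3f}")
print(f"pdf of the inter-event time at t = {t:.0f} yr = {f_T:.4f}")
print(f"P(exactly {N} exceedances in {T:.0f} yr) = {p_N:.4f}")
print(f"P(exceed p in the next event) = {p_next:.3f}")
```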

4 Incorporating Background Trends

The stationarity of the Poisson point process implies that its mean rate is constant in time. Although this is hardly ever the case, it is widely accepted as the best approximation, given the difficulty of incorporating time-dependent hazard, exposure, and vulnerability models and the large uncertainty that doing so introduces. Nevertheless, in cases in which, to the best of current knowledge, the stationarity condition is far too unrealistic, and the future dynamics of the risk components are known or can be approximated reasonably, it is possible to extend the model to a nonstationary process.

Consider a LEC resulting from a probabilistic risk assessment. This result expresses the possibility of loss given the hazard, exposure, and vulnerability as modeled for a specific moment in time. If there is a reasonable way to model the future changes of these risk components, it is possible to calculate new LECs for different, future dates. The loss exceedance rates then exhibit a time dependency, transforming the LEC into a loss exceedance surface (LES, see Fig. 14.15). The LES, constructed from the LECs of future conditions, contains all the v(p,t) functions required to define the occurrence in time of losses greater than p as a nonhomogeneous Poisson process.

Fig. 14.15

Time dependency added to the loss exceedance rates. Left, loss exceedance curve; right, loss exceedance surface

A nonhomogeneous Poisson process satisfies the same basic properties of a homogeneous one, i.e., independent and Poisson-distributed increments. The main difference is that the rate of the process is a function of time, λ(t). For an overview of the nonhomogeneous Poisson process, the reader is referred to Kingman (1992). Note that when assessing disaster risk as an LES, the following properties hold:

  • The loss occurrence process is still stochastic.

  • The mean rate of the process changes in time.

  • All risk metrics (AAL, PML, etc.) are functions of time.

The latter means that single-valued metrics, such as the AAL, are no longer single-valued. This implies losing some of the desirable characteristics of condensed, comprehensive metrics. To obtain single-valued metrics, a simple time average is required, with an arbitrary choice of its limits.
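Under a nonhomogeneous Poisson process, the product v(p)·T in Eq. 14.13 is replaced by the integral of v(p, t) over the time window. The sketch below assumes a hypothetical linear background trend in the exceedance rate.

```python
# Exceedance probability under a nonstationary (nonhomogeneous Poisson) loss process.
import numpy as np
from scipy.integrate import quad

def v_pt(t, v0=0.01, growth=0.0002):
    """Hypothetical time-dependent exceedance rate v(p, t) for a fixed loss p (1/yr)."""
    return v0 + growth * t

T = 50.0                                         # time window in years
integrated_rate, _ = quad(v_pt, 0.0, T)          # replaces v(p) * T of Eq. 14.13
prob_exceed = 1.0 - np.exp(-integrated_rate)
print(f"P(exceed p at least once in {T:.0f} yr, nonstationary) = {prob_exceed:.3f}")
```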

4.1 Incorporating Climate Change

This approach works very well when incorporating climate change into risk calculations. For example, Figs. 14.16 and 14.17 show time-dependent risk metrics, calculated from probabilistic risk assessments for Puerto Barrios, Guatemala (due to tropical cyclones), and for the wheat stock of Kazakhstan (due to droughts), including the effect of climate change (up to 2050).Footnote 4 In both cases, the time-dependent loss exceedance rates v(p,t) were calculated for different moments in time, allowing for an estimation of the time dependency of λ and therefore increasing the applicability of the risk assessment methodology. Furthermore, the inclusion of climate change as a background trend of the risk process provides a unique mathematical framework for both risk management and climate change adaptation.

Fig. 14.16

Time-dependent PML curves for Puerto Barrios, Guatemala, due to tropical cyclones and the effect of climate change. (From Cardona et al. 2013)

Fig. 14.17

Time-dependent risk metrics for wheat in Kazakhstan due to droughts and the effect of climate change. (From Maskrey et al. 2019)

Introducing background trends into probabilistic risk assessment requires complex models of the future dynamics of hazard, exposure, and vulnerability. Even though such models may exist, they should be introduced with care, keeping in mind the additional uncertainty they bring in. Such uncertainty is extremely difficult to model from the probabilistic point of view and is commonly referred to as deep uncertainty. An overview of the treatment of deep uncertainty is presented in the next section.

5 Dealing with Deep Uncertainty

The future characteristics of the built environment, the dynamics of the socio-technical systems, or the exact conditions of the future climate are desirable inputs for risk modeling, useful for designing the actions and policies to anticipate the materialization of risk. However, knowing with arbitrary precision how nonstationary natural phenomena, exposed elements, and their vulnerability will change in the far, or even the near, future is practically impossible. Furthermore, assigning any kind of probability model to such dynamic and complex behavior is extremely difficult without arbitrariness.

Most of the variables involved in probabilistic risk assessment fit well into probability models. It is even possible to insert background trends into the calculation, keeping the problem within the reach of probability theory. Nevertheless, it must be recognized that in the process of building a risk model, many assumptions are made, based on expert criteria and common sense, but inevitably rendering a model that is not truly “fully probabilistic.” However, within good modeling practice, all the assumptions made are sufficiently trustworthy, so, again, the problem fits into a fully probabilistic approach.

But what happens when a new variable is incorporated for which no reasonable point assumptions can be made, there is no observed data (or not enough), it is not possible to truly predict its behavior from physical models, and there is no bounded consensus on how it will perform? This constitutes a problem of deep uncertainty.

In a broad sense, uncertainty is inherent in any approach to modeling complex dynamical systems. It can be understood as the gap between the outcome of the model and the real behavior of the system. This gap is composed of the uncertainty in the available observations, the estimation of model parameters, the functional form of the model itself (which typically simplifies the phenomena), the value of model inputs, the transformations of scale (commensurability), and the natural randomness. The latter is usually referred to as aleatory uncertainty; all the other sources mentioned compose the epistemic uncertainty. It is widely recognized that epistemic uncertainty is reducible as more data or knowledge is added to the problem. However, deep uncertainty, which involves both aleatory and epistemic uncertainty, is exceedingly difficult to reduce. In practice, reducing it would require, for example, waiting until the future conditions of the assets at risk are known, which invalidates the purpose of risk assessment and overthrows any planning attempt.

Dealing with deep uncertainty in risk assessment requires an expansion of the methodological approach. Recently, several authors have proposed innovative approaches to deal with problems involving deep uncertainty and to orient decision-making, grouped under the name Decision Making Under Deep Uncertainty (DMDU). For further details, the reader is referred to Marchau et al. (2019). All DMDU approaches share key methodological steps: (1) framing the analysis, (2) simulation, (3) exploration of results, (4) analysis of the trade-offs among strategies, and (5) iteration and reexamination. In short, DMDU methods recognize that it is not possible to achieve robust decision-making without considering the multiple ramifications that define the domain of future possibilities. But how can those ramifications or paths of the risk problem be reasonably defined?

Step 2 of DMDU approaches (simulation) is strongly related to risk assessment. Its purpose is to explore possible unforeseen or uncertain futures. In other words, robust decision-making must be based on the universe of all possible outcomes into which a problem with deep uncertainty can evolve, so as to consider them all when deciding. Probabilistic risk theory follows a similar approach, seeking to quantify the consequences of all possible future catastrophic events (without the need to know which will be next) to consider those consequences in the decision-making process. Therefore, as far as disaster risk management is concerned, probabilistic risk assessment is the most appropriate way to approach step 2, although some expansion of its analytical potential is required.

The main limitation of probabilistic risk assessment is precisely that of being probabilistic. Nonetheless, regardless of that limitation, it is a robust approach, good enough for most risk assessment applications. As models evolve and gain complexity, more variables are added that do not necessarily fit into a probabilistic representation. In the past decades, many mathematical theories have arisen as attempts to formally conceptualize the treatment of non-probabilistic uncertainty problems. In the 1960s, Zadeh proposed fuzzy set theory (Zadeh 1965) as an approach to deal with epistemic uncertainty, allowing the representation of concepts expressed by linguistic terms. A few years later, Dempster (1967) developed what today is known as Dempster-Shafer evidence theory (formalized by Shafer 1976), seeking the representation of epistemic knowledge on probability distributions and, in the process, relaxing some of the strong rules of probability theory. In parallel, and within the context of stochastic geometry, Kendall (1974) and Matheron (1975) developed the foundations of what is nowadays known as random sets theory. In short, random sets theory deals with the properties of set-valued random variables (in contrast to the point-valued random variables of probability theory). In 1991, Peter Walley introduced the theory of imprecise probabilities, in which sets of probability measures are explored as a more general case of the classical probabilistic approach to random variables. Other theories giving formal treatment to non-probabilistic uncertainty have appeared more recently. It is worth mentioning the theory of hints (Kohlas and Monney 1995), the info-gap theory (Ben-Haim 2006), and the theory of fuzzy randomness (Möller and Beer 2004).

From the abovementioned approaches, random sets theory excels as the most general approach to uncertainty to date, allowing many different types of uncertainty structures (e.g., Dempster-Shafer bodies of evidence, info-gap structures, probability boxes, raw intervals, fuzzy sets, and probability distribution functions, among others) to be represented as random sets. Alvarez (2008) proved that infinite random sets of the indexable type can accommodate all these uncertainty structures. Furthermore, he developed a general method to sample values from all these types of uncertainty structures indistinctly. Therefore, the theory of random sets, and particularly the methods developed by Alvarez (2008) for infinite random sets, provides a mathematically sound framework for the simulation of the ramifications or unforeseen futures in problems with deep uncertainty.

5.1 Random Sets

Consider the probability space (Ω, σΩ, PΩ) and a universal non-empty set X with a power set ℘(X). Let \( \left(\mathrm{\mathcal{F}},{\sigma}_{\mathrm{\mathcal{F}}}\right) \) be a measurable space such that \( \mathrm{\mathcal{F}}\subseteq \wp (X) \). A random set Γ is a \( \left({\sigma}_{\Omega}-{\sigma}_{\mathrm{\mathcal{F}}}\right) \) – measurable mapping such that \( \Omega \to \mathrm{\mathcal{F}},\alpha \to \Gamma \left(\alpha \right) \). Every Γ(α) is a focal element in the focal set \( \mathrm{\mathcal{F}} \).

If all elements in \( \mathrm{\mathcal{F}} \) are singletons (points), then Γ is a random variable, and therefore the probability of any event F in \( \mathrm{\mathcal{F}} \) is calculable via classic probability theory. Such a focal set \( \mathrm{\mathcal{F}} \) is called specific. However, when \( \mathrm{\mathcal{F}} \) is nonspecific, the probability of event F cannot be precisely calculated, but only its upper and lower bounds, resulting in an imprecise probability measure. The upper (UP) and lower (LP) probabilities of event F are given by:

$$ \mathrm{LP}(F)={P}_{\Omega}\left\{\alpha :\Gamma \left(\alpha \right)\subseteq F,\Gamma \left(\alpha \right)\ne \varnothing \right\} $$
(14.17)
$$ \mathrm{UP}(F)={P}_{\Omega}\left\{\alpha :\Gamma \left(\alpha \right)\cap F\ne \varnothing \right\} $$
(14.18)

which means that the lower probability LP is the sum of the probability (mass) assignments of all the focal elements Γ(α) contained in F, i.e., those that imply the occurrence of F. The upper probability UP is the total probability of the focal elements Γ(α) that share at least one element with F, i.e., those that may or may not imply the occurrence of F. A complete overview of random sets can be found in Molchanov (2005).
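For a finite random set on the real line, Eqs. 14.17 and 14.18 reduce to sums over focal intervals; the sketch below uses hypothetical intervals and mass assignments.

```python
# Lower and upper probabilities (Eqs. 14.17-14.18) for a finite random set on the
# real line; focal intervals and mass assignments are hypothetical.
focal_elements = [((0.0, 2.0), 0.3),
                  ((1.0, 4.0), 0.4),
                  ((3.0, 6.0), 0.3)]

def lower_upper(f_low, f_high):
    """LP and UP of the event F = [f_low, f_high] for interval focal elements."""
    lp = sum(m for (lo, hi), m in focal_elements
             if f_low <= lo and hi <= f_high)     # Gamma(alpha) contained in F
    up = sum(m for (lo, hi), m in focal_elements
             if hi >= f_low and lo <= f_high)     # Gamma(alpha) intersects F
    return lp, up

print(lower_upper(0.0, 3.5))   # (0.3, 1.0): only the first interval is contained in F
```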

5.2 Simulation

The simulation is performed by means of a Monte Carlo approach. Every variable is represented as a random set on the real line. Each may have a different treatment of the uncertainty, enabling the combination of random variables, fuzzy sets, bodies of evidence, intervals, etc. This may be the most general simulation approach to date.

Sampling from a random set means randomly obtaining focal elements from it, regardless of the type of uncertainty. To enable this process, an indexing procedure must be applied first, so that the diversity of mathematical structures can be treated equally (α-indexation; see Alvarez 2008). Once indexed, it is possible to sample focal elements. Figure 14.18 illustrates this process.

Fig. 14.18

Sampling from α-indexed random sets. Upper left: probability box. Upper right: possibility distribution. Lower left: Dempster-Shafer body of evidence made of raw intervals. Lower right: cumulative distribution function. (Reproduced from Alvarez 2008)

Note that Γ(α) is an interval in the domain of X. If there are many variables involved in the problem, the same process can be applied to a coupled combination of the variables. Let X be the vector of all variables; then α becomes a space (the α-space). In such a space, the dependence between variables is modeled by a copula, and the quasi-inverse sampling methods for copulas can be used to sample the multidimensional focal elements (see Nelsen 1999 for a comprehensive guide to copula-based simulation techniques). Figure 14.19 shows an illustration of sampled focal elements in both the X-space and the α-space for the two-dimensional case (two variables). Note that in the X-space the focal elements are multidimensional boxes, while in the α-space they are always points, regardless of the number of variables (dimensions).

Fig. 14.19

Focal elements for the two-dimensional case in the X-space and in the α-space. (Reproduced from Alvarez 2008)

5.3 Functional Propagation

Once the full set of focal elements is sampled, the response of the system must be evaluated. This means calculating the image of the focal elements by applying to them a function describing the system response, i.e., propagating the random focal set. This is achieved by applying the extension principle,Footnote 5 which states that given a function g : X → Y (the system response) and a random set \( \left(\mathrm{\mathcal{F}},m\right) \), the image of \( \left(\mathrm{\mathcal{F}},m\right) \) through g, denoted here as \( \left(\Re, \rho \right) \), is:

$$ \Re =\left\{{R}_j=g\left({A}_i\right):{A}_i\in \mathrm{\mathcal{F}}\right\} $$
(14.19)
$$ \rho \left({R}_j\right)=\sum \limits_{i=1}^nI\left[{R}_j=g\left({A}_i\right)\right]m\left({A}_i\right) $$
(14.20)

where I[∙] is the indicator function.Footnote 6 Ai is a d-dimensional box in ℝ^d with 2^d vertices, obtained as the Cartesian product of the finite intervals sampled for each variable in the X-space. For those systems in which there is no explicit functional form for g (e.g., risk assessment), the extension principle can be applied sequentially as the calculation process moves forward.
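The sketch below propagates two hypothetical two-dimensional box focal elements through an assumed response function g by evaluating g at the box vertices; this vertex shortcut is valid only when g is monotone in each variable, an additional assumption made here for brevity.

```python
# Sketch of the extension principle (Eqs. 14.19-14.20) for box focal elements,
# assuming a response function that is monotone in each variable.
import itertools

def g(x):
    """Hypothetical system response."""
    return x[0] ** 2 + 2.0 * x[1]

def propagate(box, func):
    """Image interval of a box focal element A_i, evaluated at its 2^d vertices."""
    values = [func(v) for v in itertools.product(*box)]
    return min(values), max(values)

# Focal elements in the X-space (d = 2 boxes) with their mass assignments m(A_i)
focal = [(((0.5, 1.0), (0.0, 0.5)), 0.6),
         (((0.8, 1.5), (0.4, 1.0)), 0.4)]

images = [(propagate(box, g), m) for box, m in focal]
print(images)   # [((0.25, 2.0), 0.6), ((1.44, 4.25), 0.4)]
```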

5.4 Calculation of the Loss

Each focal element contains a sample of the input variables at a stage of the calculation. Assessing the loss requires the series of steps presented below. Note that these steps are the most general sequence for risk assessment. The particularities of, for example, hazard modeling are not included here because they differ for each hazard. Nonetheless, each hazard model (and, in general, each component of the risk problem) would require similar approaches for the whole process to be consistent.

  1. Sample the occurrence of the event.

  2. Sample the intensity field. In some cases (e.g., earthquake hazard), this would require the definition of correlation parameters.

    2.1. At the location of the exposed elements, sample the focal elements of the local intensity.

    2.2. From the vulnerability function of each element, sample the loss caused by each focal element of the intensity.

      • This requires copula-based sampling of the intensity and the loss (steps 2.1 and 2.2). An independence copula should suffice in most cases.

    2.3. Repeat for all exposed elements and add their individual losses using the extension principle.

  3. Repeat for all the intensity field simulations.

  4. Repeat for all events.

Note that this approach requires many simulations, making it costly in terms of computational resources. It is recommended to apply any of the many sampling optimization techniques usually implemented when performing Monte Carlo simulations.

Let \( \left(\mathrm{\mathcal{L}},\ell \right) \) be the random set containing all the images of the loss calculations. Then \( \mathrm{\mathcal{L}} \) is the collection of loss focal elements (intervals), and ℓ(Li) for Li \( \in \mathrm{\mathcal{L}} \) is the mass assignment, i.e., the probability of occurrence of the event that generated the loss focal element. This representation requires a relaxation of the rules applied to mass assignments (rules established in evidence theory and usually transferred to random sets). Given that ℓ(Li) represents the annual occurrence frequency of the event that generated the focal element Li, the sum of the mass assignments is not necessarily 1. In practice, this condition can be forced onto ℓ(Li) if required and for the sake of coherence; however, it seems unnecessary and may have no practical effect on the final outcomes. In any case, this is an issue that requires further analysis.

Let F be the event in which the loss P exceeds the amount p (i.e., F = {P : P ≥ p}); then, by rewriting Eq. 14.8, upper and lower loss exceedance rates are obtained, i.e., the LEC is transformed into two complementary curves LECL (Eq. 14.21) and LECU (Eq. 14.22) that may be interpreted as an imprecise loss exceedance curve:

$$ \nu {(p)}_L=\sum \limits_{j=1}^nI\left[{L}_j\subseteq F\right]\cdotp \ell \left({L}_j\right) $$
(14.21)
$$ \nu {(p)}_U=\sum \limits_{j=1}^nI\left[{L}_j\cap F\ne \varnothing \right]\cdotp \ell \left({L}_j\right) $$
(14.22)

where n is the total number of focal elements in \( \mathrm{\mathcal{L}} \). Figure 14.20 shows an illustration of an imprecise loss exceedance curve. Note that all risk metrics now become imprecise. Background trends can still be incorporated by obtaining for every moment in time both LECU and LECL curves, i.e., rendering an imprecise loss exceedance surface.
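The sketch below evaluates Eqs. 14.21 and 14.22 for the event F = {P ≥ p} on a handful of hypothetical loss focal intervals, each carrying the annual frequency of the event that generated it.

```python
# Sketch of the imprecise LEC (Eqs. 14.21-14.22); focal intervals are hypothetical.
loss_focal = [((0.5, 2.0), 0.05),     # (loss interval in million USD, annual frequency)
              ((1.0, 8.0), 0.01),
              ((6.0, 30.0), 0.002)]

def imprecise_lec(p):
    """Return (v_L(p), v_U(p)) for the event F = {P >= p}."""
    v_low = sum(f for (lo, hi), f in loss_focal if lo >= p)   # L_j contained in F
    v_up = sum(f for (lo, hi), f in loss_focal if hi >= p)    # L_j intersects F
    return v_low, v_up

for p in (1.0, 5.0, 10.0):
    v_l, v_u = imprecise_lec(p)
    print(f"p = {p:5.1f}  v_L = {v_l:.4f}  v_U = {v_u:.4f}")
```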

Fig. 14.20

Illustration of an imprecise loss exceedance curve composed by curves LECL and LECU

6 Incorporating Second-Order Effects

Risk addressed from a physical point of view is the starting point for analyzing the subsequent impacts of a disaster on society. Disasters resulting from natural events damage the built environment, affecting people and their activities in different ways. It is widely recognized that the development level of a society usually determines the severity of the consequences derived from disasters. Cardona (2001) developed the Holistic Risk Assessment methodology as an attempt to incorporate context variables into risk assessment, with the objective of accounting for social, economic, institutional, environmental, governance, and cultural issues, among others, that set the foundations for underdevelopment, unsafety, poverty, social injustice, and many other problems widely recognized as risk drivers.

Holistic evaluations have been performed in recent years for different cities worldwide (Birkmann et al. 2013; Carreño et al. 2007; Jaramillo et al. 2016; Marulanda et al. 2013; Salgado-Gálvez et al. 2016) as well as at the country level (Burton and Silva 2014) and have proven to be a useful way to evaluate, compare, and communicate risk while promoting effective actions toward the intervention of vulnerability conditions measured in their different dimensions. The approach has also been integrated into toolkits, guidebooks, and databases for earthquake risk assessment (Khazai et al. 2014, 2015; Burton et al. 2014).

The holistic evaluation approach states that reducing existing risk, or preventing the generation of new risk, requires a comprehensive risk management system, based on an institutional structure accompanied by the implementation of policies and strategies to intervene vulnerable elements and also the diverse factors of society that create or increase risk. In the same way, should a hazard event materialize and result in a disaster, emergency response and recovery actions should be conducted as part of the risk management framework. Figure 14.21 shows the conceptual framework of the holistic risk approach.

Fig. 14.21

Conceptual framework of the holistic approach to disaster risk

The physical component of the evaluation is provided by the probabilistic risk assessment presented in the previous sections. The aggravating factors are chosen considering their capability to capture important dimensions of society, as well as the coverage and availability of the data. Furthermore, these variables are chosen to cover a wide spectrum of issues that underlie the notion of risk in terms of prevailing vulnerability conditions beyond physical susceptibility, that is, factors related to social fragility and lack of resilience that favor indirect and intangible impacts and affect the capacity of society to cope with disasters: the incapability to absorb the consequences, to respond efficiently, and to recover from the impact. Figure 14.22 shows, as an example, the aggravating factors used in the global holistic evaluation of disaster risk for the UNISDR GAR ATLAS (2017).

Fig. 14.22

Structure of indicators used in the holistic evaluation of risk at a global scale

6.1 Holistic Risk Assessment Methodology

According to Cardona (2001) and Carreño et al. (2007), the holistic risk assessment index or total risk (RT) is calculated as:

$$ \mathrm{RT}=\mathrm{RF}\left(1+F\right) $$
(14.23)

This expression, known in the literature as Moncho’s equation, is defined as a combination of a physical risk index, RF, and an aggravating coefficient, F, both being composite indicators (Carreño et al. 2007). RF is obtained as a nonlinear normalization of a probabilistic risk metric (commonly a robust metric such as the AAL, whether precise, imprecise, or time-dependent), while F, which accounts for the socioeconomic fragility and lack of resilience of the area under analysis, is obtained from available data regarding political, institutional, and community organization.

It is assumed that total risk (RT) can be, at most, two times the physical risk of the affected area. It means that, in a hypothetical case where socioeconomic characteristics are optimal and there is neither fragility nor lack of resilience, the aggravating factors would be zero and then the total risk would be the same as the physical risk . However, if the societal characteristics render the maximum value of the aggravating coefficient (1.0), the total risk would be twice the physical risk value. This assumption, though arbitrary, is made to reflect how socioeconomic characteristics can influence the impact of a disaster. The aggravating coefficient, F, is calculated as follows:

$$ F=\sum \limits_{i=1}^m{F}_{S{F}_i}\cdot {W}_{S{F}_i}+\sum \limits_{j=1}^n{F}_{L{R}_j}\cdot {W}_{L{R}_j} $$
(14.24)

where FSFi and FLRj are the aggravating factors, WSFi and WLRj are the associated weights, and m and n are the total numbers of factors for social fragility and lack of resilience, respectively. The weights WSFi and WLRj set the importance of each factor in the index calculation, i.e., the contribution of each indicator to the characterization of the dynamics of the society.

The factors used in the calculation of the total risk (RT) capture different aspects of society, usually quantified and reported in different units. For this reason, normalizing procedures are needed to standardize the values of each descriptor and ensure commensurability. A common practice is to standardize them by using transformation functions (see Fig. 14.23). The shape and characteristics of the functions vary depending on the nature of the descriptor. Functions related to descriptors of social fragility have an increasing shape, while those related to resilience have a decreasing one. Thus, in the first case, a high value of an indicator means a greater contribution to aggravation (e.g., corruption: if high, it further aggravates the conditions for coping with an adverse situation). In the second case, a high value of the indicator means a lower negative influence on the aggravation (e.g., access to education: a high value is a positive characteristic of more resilient societies and therefore contributes less to aggravating risk). The transformation functions can be understood as fuzzy membership functions of the linguistic benchmarking (“high,” “low”) of aggravation.

Fig. 14.23

Examples of transformation functions

In all cases, the transformed variables have the interval [0,1] as their domain. Given that transformation functions are fuzzy membership functions, a value of 0.0 means no contribution, while a value of 1.0 means a full contribution to the aggravating coefficient.
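A minimal sketch of the holistic index is given below; it uses simple clipped linear transformation functions in place of the sigmoid-shaped membership functions described above, and all descriptor values, limits, weights, and the physical risk index RF are hypothetical.

```python
# Sketch of the aggravating coefficient (Eq. 14.24) and Moncho's equation (Eq. 14.23);
# descriptors, limits, weights, and RF are hypothetical.
import numpy as np

def transform(x, x_min, x_max, increasing=True):
    """Normalize a descriptor to [0, 1]; decreasing shape for resilience descriptors."""
    z = np.clip((x - x_min) / (x_max - x_min), 0.0, 1.0)
    return z if increasing else 1.0 - z

# Social fragility (increasing) and lack-of-resilience (decreasing) descriptors,
# each given as (value, min, max, weight); weights sum to 1.0
fragility = [(35.0, 0.0, 60.0, 0.3),       # e.g. % population below the poverty line
             (120.0, 0.0, 300.0, 0.2)]     # e.g. homicides per 100,000 inhabitants
resilience = [(70.0, 0.0, 100.0, 0.3),     # e.g. hospital beds index
              (55.0, 0.0, 100.0, 0.2)]     # e.g. access-to-education index

F = (sum(w * transform(x, lo, hi, True) for x, lo, hi, w in fragility) +
     sum(w * transform(x, lo, hi, False) for x, lo, hi, w in resilience))

RF = 0.42                                   # normalized physical risk index (e.g. from the AAL)
RT = RF * (1.0 + F)                         # Moncho's equation, Eq. 14.23
print(f"F = {F:.2f}  RT = {RT:.2f}")
```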

7 Incorporating Risk Management Strategies

There is a wide range of possibilities to reduce risk. For example, in flood risk problems, the use of flood defenses is common practice. Nevertheless, it is not the only available possibility (see, e.g., Yousefi et al. 2015). The best combination of risk reduction alternatives is, in general, quite difficult to obtain without arbitrariness.

The LEC, among many interesting properties, can be stratified to define a set of interventions to reduce risk (Fig. 14.24). Each intervention affects the curve in a different way, building up a risk management strategy. The risk landscape is modified when a mitigation strategy is applied. The best way to determine whether a strategy is good enough to reduce risk is to perform the risk assessment including its effect. The objective is to identify a set of risk management alternatives that are highly efficient in reducing risk. This is achieved by applying risk control engineering (RCE).

Fig. 14.24

Loss exceedance curve stratified to define actions to reduce risk

RCE is a methodological framework specifically designed to help governments, institutions, and private-sector stakeholders meet resilience targets by identifying the set of risk management alternatives that is most efficient in reducing risk to a predefined expected level. The RCE process is summarized in Fig. 14.25. Note that RCE can be applied indistinctly to LECs, LESs, or imprecise LECs, even incorporating second-order effects. The RCE process is illustrated here with a single LEC for simplicity. In addition, it is worth mentioning that RCE matches the purpose of steps 3, 4, and 5 of DMDU.

Fig. 14.25

RCE process

The resilience target is defined as the expected reduction of risk after the application of several risk management alternatives. This means that a resilience target is a new LEC, reduced from the real risk result to an acceptable risk level (see Fig. 14.26).

Fig. 14.26

Definition of a resilience target

A resilience target may be achieved by a combination of many risk management alternatives. The available alternatives include, among others, definition of construction standards for new buildings and infrastructure, implementation of hazard-control or vulnerability-reduction mitigation works, risk-based land-use planning, financial protection, emergency response plans, and early warning systems. The implementation of any of these alternatives will modify the risk, in a way that can only be known by incorporating it into the hazard and risk models and obtaining the results again. For example, Fig. 14.27 shows risk maps (in terms of building-by-building AAL) for Santa Fe, Argentina, with and without retrofitting of the city perimeter flood defenses. The black lines in the figure show the location of the defense dikes.

Fig. 14.27

Illustration of the effect of retrofitted flood dikes in central Santa Fe, Argentina. Left: AAL map with current-state dikes. Right: AAL map with retrofitted dikes

A combination of alternatives is defined in terms of (i) the alternatives considered, (ii) the risk reduction capabilities of each alternative, (iii) the cost of implementation of the combination, and (iv) the impact it has on reducing risk. Figure 14.28 exemplifies a combination of alternatives over a LEC.

Fig. 14.28

Example of a combination of alternatives. Note that each alternative has a risk reduction domain in terms of the loss range it targets

Many combinations are created to identify possible strategies that meet the same target at different costs. The best alternatives are selected through an optimization process in which the combinations that best meet the resilience target at the lowest cost (i.e., the best cost/benefit ratios) are selected as champions. The optimization process implemented within the RCE framework, which is based on evolutionary programming (genetic algorithms), is summarized next.

  1. Combinations of alternatives are randomly created to populate the first generation. Each combination is considered an individual. The genotype of an individual is the set of alternatives (see Fig. 14.29). Note that each individual has a different capacity to meet the resilience target. The one that best meets the target is considered the champion.

  2. The evolutionary process then starts: new combinations of alternatives are created randomly as a result of crossing and mutating the individuals of the previous generation (Fig. 14.30).

Fig. 14.29

Illustration of the creation of the first generation of combinations. The colors in the boxes are to illustrate different characteristics within each alternative (box)

Fig. 14.30

Illustration of the evolutionary process for the optimization of the combination of alternatives

The champion of the last generation holds the combination of risk reduction alternatives that best fits the resilience target. This combination is a strong candidate to become the risk reduction strategy to be undertaken.
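A minimal sketch of such an evolutionary search is given below, under simplifying assumptions: each individual is a binary vector selecting alternatives, the fitness penalizes both the shortfall with respect to the resilience target and the implementation cost, and standard one-point crossover and bit-flip mutation are used. Costs, risk reduction capacities, and the target are hypothetical.

```python
# Sketch of the evolutionary optimization of combinations of risk reduction
# alternatives; all inputs and the fitness weighting are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_alt = 8
cost = rng.uniform(1.0, 10.0, n_alt)            # cost of each alternative
reduction = rng.uniform(0.02, 0.15, n_alt)      # fraction of the AAL each removes
target_reduction = 0.40                         # resilience target: reduce the AAL by 40 %

def fitness(ind):
    """Reward meeting the target; penalize the remaining shortfall and the cost."""
    achieved = 1.0 - np.prod(1.0 - reduction[ind == 1])
    shortfall = max(0.0, target_reduction - achieved)
    return -(100.0 * shortfall + cost[ind == 1].sum())

pop = rng.integers(0, 2, size=(30, n_alt))      # first generation of combinations
for _ in range(50):                             # evolutionary loop
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]     # keep the fittest individuals
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, n_alt)            # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_alt) < 0.05         # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

champion = pop[np.argmax([fitness(ind) for ind in pop])]
print("Selected alternatives:", np.flatnonzero(champion))
```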

8 Summary and Conclusions

A first step toward building a solid political and economic imperative to manage and reduce disaster risk is to estimate probable future disaster losses. Unless governments can measure their levels of risk, they are unlikely to find incentives to manage disaster risk. Risk estimations can provide those incentives and, in addition, allow governments to identify the most effective strategies to manage and reduce risk. Effective public policies in disaster risk reduction and sustainable development, such as financial protection, risk-informed public investment, resilient infrastructure, territorial planning, and impact-based early warning, can all benefit from appropriate estimations and layering of risk.

Probabilistic risk assessment provides a robust mathematical framework for estimating the consequences of future disasters, considering the random nature of both hazard and vulnerability and rationally incorporating that uncertainty into the result. It provides a set of metrics that fully represent the loss occurrence process and allows the incorporation of risk management strategies, second-order effects, background trends, and modeling under deep uncertainty into a sound mathematical framework, making it a versatile tool for decision-making.