1 Introduction

Earthquake scenarios aim at estimating the monetary and human losses caused by well-defined earthquakes on a portfolio of assets. They are an important tool for decision makers to design appropriate response measures, such as the number of rescue teams and temporary shelters. They are also necessary to quantify the human and financial consequences of earthquakes and to evaluate the impact of safety measures such as the retrofitting of buildings.

Most of the earthquake scenarios produced worldwide at the scale of a city are based on empirical methods (i.e. based on macroseismic intensity), which feature a limited level of detail. To overcome the drawbacks of these methods, mechanics-based loss assessment was developed within the HAZUS software in the late 1990s (Kircher et al. 2006). It has been used, for instance, for loss assessment of the city of Montreal (Yu et al. 2016) or the Kocaeli region in Turkey (Spence et al. 2003). The Risk-UE project adapted these methods to the European context (Mouroux and Le Brun 2006), and they have been used, for instance, for the loss assessment of the city of Barcelona (Barbat et al. 2010) or of the Azores islands (Veludo et al. 2013). However, these methods use a generic design spectrum for the hazard and do not finely account for source and site effects. More recently, Silva et al. (2013) extended the mechanical displacement-based earthquake loss assessment (DBELA) method to compute fragility curves using actual ground motions. The curves derived following this method have been used in the OpenQuake engine to compute loss scenarios for mainland Portugal (Silva et al. 2015), the district of Florence (Weatherill et al. 2015), Istanbul (Bal et al. 2008) or Kathmandu (Chaulagain et al. 2016). Moreover, most of the existing studies consider building types with generic definitions encompassing very different structures, represented by fragility curves that assume a lognormal distribution, with poor control on the uncertainties related to the model assumptions. Contrary to other existing software, OpenQuake (Silva et al. 2014), developed by the Global Earthquake Model, is not tied to a single vulnerability assessment method since it accepts any user-developed fragility curve as input. It is also flexible regarding the hazard data and allows the use of the most up-to-date Ground Motion Prediction Equations (GMPEs). The drawback is that the user has to develop a comprehensive hazard and vulnerability model. A critical point is that these models have to be consistent with respect to the parameters used and their uncertainties: the levels of detail of the ground motion and vulnerability models have to match. Elms (1985) formulated the “principle of consistent crudeness”, which states that the quality of a simulation is governed by the quality of its most uncertain element.

The goal of our study is to produce earthquake loss scenarios using up-to-date models and data within a coherent framework. Such state-of-the-art scenario modelling has to be mechanics-based and probabilistic.

Probabilistic means that the inputs of the scenarios are considered as uncertain and that the whole distribution of expected losses is computed by combining all the uncertainties. This ensures robust results, identifies the elements with the largest uncertainties and quantifies the final uncertainty on the results. Although this is commonly achieved in hazard assessment, it is a remarkable step forward for loss computations. Mechanics-based means that the intensity measures used are physical quantities that can be measured by instruments, such as peak ground acceleration (PGA) or spectral acceleration, and not macroseismic intensity. They ensure better control on the physical phenomena, allowing models to be objectively extrapolated to events that have never been observed in the area of interest.

The present study focuses on the school buildings of the canton Basel-City (Switzerland). It aims at computing the monetary and human losses for different scenario earthquakes, selected based on historical events and disaggregation of the 2015 Swiss Seismic Hazard Model (Wiemer et al. 2016). Another goal is the estimation of the benefit of the retrofitting measures undertaken by the authorities as part of a long-term reorganization project for schools.

Fäh et al. (2001) first proposed earthquake scenarios for the city of Basel, based on macroseismic intensity, ground amplification and vulnerability classes from the EMS98 macroseismic scale (Grünthal et al. 1998). They computed the distribution of damage for different scenario earthquakes and compared the influence of the ground-motion amplification with that of the vulnerability of the buildings, both of which they found to be critical. Wyss and Kästli (2007) proposed a loss assessment for a repeat of the 1356 event based on macroseismic intensity. They expected between 6000 and 22,000 fatalities in Switzerland, including 1000–8000 in Basel (0.2–2 % of the total population). Mignan et al. (2015) proposed a risk model for the Basel geothermal project based on empirical methods. They showed that the epistemic uncertainties play a major role in the risk assessment. They also proposed a calibration of the empirical method of Lagomarsino and Giovinazzi (2006) to match the observed losses during the 2006 geothermal event. Lang and Bachmann (2004) introduced mechanical models for the study of the vulnerability in Basel and computed the damage expected for a ground motion corresponding to the design code. They found that 45 % of the unreinforced masonry structures would experience at least partial collapse and concluded that Basel is highly at risk. At that time, these results seemed very pessimistic, but no improvement was proposed until the present project. This last example shows that moving from empirical to mechanics-based assessment is not straightforward, since the currently used simplified vulnerability assessment methods are based on design methods and are therefore conservative, i.e. they implicitly include safety factors.

Three major scientific targets have been addressed in our study to improve the loss assessment results at the city scale: comprehensive consideration of uncertainties, accurate ground motion amplification for the whole city and realistic vulnerability models. This paper presents the general framework of the scenario computation using the OpenQuake engine, the ground motion computations and the seismic vulnerability of the school buildings, then details the results obtained for selected scenarios before and after retrofitting and draws conclusions for earthquake safety.

2 General framework

2.1 OpenQuake engine

In this study, the OpenQuake engine (Silva et al. 2014), which provides state-of-the-art tools for seismic risk computation, is used. Moreover, the recent Swiss Hazard Model 2015 (Wiemer et al. 2016) uses OpenQuake and has therefore made specific tools and data for Switzerland available in the software. Though empirical methods could be employed, the software is designed for mechanics-based methods, built on ground motion prediction equations and fragility/vulnerability curves that are functions of the predicted Intensity Measure (IM). This framework imposes extensive data selection and pre-processing. However, within this framework, the uncertainties can be considered comprehensively: the tool is designed to be fully probabilistic.

In OpenQuake, hazard and vulnerability computations are decoupled. For scenario and risk computations, hazard is represented by a set of Ground Motion Fields (GMFs). A GMF is a set of sites (latitude/longitude) for which an IM value is provided. A large number of GMFs is used to sample the uncertainty (aleatory and epistemic) in the ground motion. The losses are computed by applying the fragility and vulnerability curves to each GMF (Monte Carlo sampling) to retrieve the probabilistic losses. The uncertainty in the vulnerability should therefore be fully included in the fragility and vulnerability curves.
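To illustrate this Monte Carlo principle, the following minimal Python sketch samples aggregated losses over a set of GMFs. It is not the OpenQuake implementation: the vulnerability function, portfolio values and GMF generation below are assumptions made for the example only.

```python
import numpy as np

rng = np.random.default_rng(42)

def vulnerability(sa):
    """Hypothetical vulnerability model: mean loss ratio and coefficient
    of variation as functions of spectral acceleration [g]. Illustrative
    only, not the curves derived in this study."""
    mean_lr = np.clip(0.8 * sa**1.5, 1e-6, 1.0)
    return mean_lr, 0.7

n_gmfs, n_sites = 1000, 121              # one IM value per site and GMF
values = np.full(n_sites, 5.0e6)         # assumed replacement costs [CHF]
gmfs = rng.lognormal(np.log(0.15), 0.6, size=(n_gmfs, n_sites))

agg = np.empty(n_gmfs)
for i, gmf in enumerate(gmfs):
    mean_lr, cov = vulnerability(gmf)
    # lognormal loss-ratio sampling with the given mean and CoV
    sigma = np.sqrt(np.log1p(cov**2))
    mu = np.log(mean_lr) - 0.5 * sigma**2
    loss_ratio = np.minimum(rng.lognormal(mu, sigma), 1.0)
    agg[i] = np.sum(loss_ratio * values)

print(f"aggregated loss: mean {agg.mean():.3e} CHF, std {agg.std():.3e} CHF")
```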

Within our study, we therefore developed procedures outside of OpenQuake to derive fragility and vulnerability curves tailored to the studied building stock and to include site amplification. The general workflow of the computation is displayed in Fig. 1 and detailed in the following.

Fig. 1
figure 1

Workflow of the scenario computation. Blue arrows denote computations performed using the OpenQuake engine. For a given magnitude earthquake at a given location, the ground motion is computed with OpenQuake for each building location, including spatial correlation. Then the amplification is applied outside of OpenQuake. In parallel, outside of OpenQuake, fragility curves are produced based on response spectra and capacity curves, and combined with loss ratios to compute vulnerability curves. In OpenQuake, ground motion, fragility curves and vulnerability curves are finally combined to compute damage and losses

2.2 Ground motion

For scenarios, where the source is pre-determined, the OpenQuake engine offers a calculator to derive Ground Motion Fields (GMFs). The GMFs are computed at the sites considered in the scenario and, alternatively, on a regular grid for display purposes. GMFs may include several Intensity Measures (e.g. PGA, SA(1 s), etc.), but all the IMs are given for all the sites, which quickly increases the size of the computation.

The source parameters are the geometry of the fault, which can be complex, the position of the hypocentre, the rake (direction of slip on the fault) and the magnitude. It should be noted that the geometry parameters (fault plane, hypocentre) are used only for the computation of the distance for the Ground Motion Prediction Equation (GMPE). Different GMPEs use different types of distance and therefore different input parameters. Clearly, complex properties of the fault (slip distribution, directivity) cannot generally be accounted for with an approach involving GMPEs.

The second set of needed parameters relates to the selection of the GMPE. To account for site effects, most of the existing GMPEs are based on the time-averaged shear wave velocity over the first 30 m (Vs30). However, Vs30 is considered here as too simplistic, and therefore the ground motion is computed at the Swiss Reference Rock Model (Poggi et al. 2011) in order to apply site amplification afterwards. The IM chosen to represent the ground motion is the spectral acceleration at the periods of the building types as defined in the vulnerability model.

Although the epistemic uncertainty is large for ground motion prediction and could be tackled using a logic tree, this approach has not been followed here. For scenario modelling where the source is set, it would correspond to a slight variation in the magnitude, which is anyway unknown and arbitrarily fixed. However, for real-time applications, this uncertainty has to be accounted for.

A spatial correlation model can be used for the GMF computation. Weatherill et al. (2015) showed that the spatial correlation in the variability of the GMPEs is important for the results of loss scenarios. This spatial correlation is due to phenomena that are not modelled by the GMPE, especially source effects (radiation pattern, directivity, distribution and properties of asperities, etc.). It does not affect the mean value of the ground motion and damage at each site, but it does affect the uncertainties when losses at different sites are aggregated.
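As a hedged sketch of how such spatially correlated fields can be sampled (OpenQuake does this internally), the exponential correlation form of Jayaram and Baker (2009) can be combined with a Cholesky factorization of the site-to-site correlation matrix. The correlation range b, the site layout and the ground motion parameters below are illustrative assumptions, not the values used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative site layout [km], not the actual Basel portfolio
coords = rng.uniform(0.0, 10.0, size=(121, 2))

# exponential correlation model rho(h) = exp(-3 h / b); range b [km] assumed
b = 10.0
h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
corr = np.exp(-3.0 * h / b)

# correlated intra-event residuals via Cholesky factorization
L = np.linalg.cholesky(corr + 1e-10 * np.eye(len(coords)))
eps = L @ rng.standard_normal(len(coords))

# one correlated GMF realization around an assumed median [g]
ln_median, sigma_intra = np.log(0.15), 0.35
gmf = np.exp(ln_median + sigma_intra * eps)
```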

The critical issue is the implementation of the amplification due to the surface geology, which cannot be performed with the OpenQuake engine if a more advanced approach than Vs30 is needed. Therefore, this computation is performed outside of OpenQuake after exporting the GMFs, which are subsequently reimported into the software. Several amplification models (two were used here), possibly with different weights, are used in order to map the uncertainties.
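In practice, this step reduces to multiplying each exported rock spectral value by the site- and model-dependent amplification factor before reimporting. A minimal sketch follows; the file names and array layouts are hypothetical placeholders.

```python
import numpy as np

# rock GMFs exported from OpenQuake: shape (n_gmfs, n_sites); hypothetical file
rock_gmfs = np.load("gmfs_reference_rock.npy")

# one amplification factor per site for each of the two models used here
# (microzonation-based and interpolation-based); placeholder files
amp_a = np.load("amp_microzonation.npy")   # shape (n_sites,)
amp_b = np.load("amp_interpolation.npy")   # shape (n_sites,)

# 500 GMFs per amplification model, 1000 in total (as in Sect. 3.3)
half = rock_gmfs.shape[0] // 2
soil_gmfs = np.concatenate([rock_gmfs[:half] * amp_a,
                            rock_gmfs[half:] * amp_b])
np.save("gmfs_soil.npy", soil_gmfs)        # reimported into OpenQuake
```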

The number of GMFs needed to reach convergence of the resulting losses has to be tested and depends on the required precision (1000 are used here). The required precision is relatively low for loss scenarios (of the order of one victim and one MCHF) but increases by one order of magnitude when cost-benefit analyses are performed, because the difference between two scenarios has to be computed.
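Such a convergence test can be as simple as comparing the statistics obtained from half of the GMFs with those from the full set. A minimal sketch, assuming the aggregated losses per GMF are available as an array:

```python
import numpy as np

def converged(agg_losses, tol=0.02):
    """True if doubling the number of GMFs changes the mean and standard
    deviation of the aggregated loss by less than `tol` (relative)."""
    half, full = agg_losses[:len(agg_losses) // 2], agg_losses
    return (abs(full.mean() - half.mean()) / full.mean() < tol
            and abs(full.std() - half.std()) / full.std() < tol)
```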

2.3 Loss assessment

Loss assessment is performed for a set of elements at risk, defined by their location, value, number of occupants and type. Different occupancy models (e.g. night and day) can be accounted for.

For the scenario loss computations, the OpenQuake engine offers two calculators: the scenario damage and scenario risk calculators. The first calculator takes the GMFs and fragility curves as input and computes the distribution of the damage (mean and standard deviation). Fragility curves are defined here as the probability of exceeding each defined damage grade for each possible level of ground motion (IM value). The standard deviation gives an idea of the uncertainty but is not enough to reproduce the multivariate distribution of the damage grades (the covariance between the grades is missing). Therefore, only the mean value can be used further. We used this calculator to estimate the distribution of unusable buildings, at least partially collapsed buildings and completely collapsed buildings. The first category is needed to compute the number of homeless people (in our case, pupils with an unusable school building) and is therefore used by the crisis authorities to plan the number of temporary shelters or school buildings. The at least partially collapsed buildings are the buildings where victims are primarily expected. Finally, the completely collapsed buildings require search and rescue actions.

The second calculator uses GMFs and vulnerability curves as input and outputs the distribution of losses. Vulnerability curves describe the distribution of loss ratios for each possible level of ground motion. The calculator can handle occupants’ vulnerability and financial losses, but in order to compute both injured and fatalities, it has to be run twice. However, as OpenQuake runs from the command line, multiple calls to the software can easily be embedded in a shell script.

The asset correlation is an additional input parameter that accounts for the correlation of the distributions of the vulnerability among the buildings. Crowley (2014) suggests the existence of such a correlation, although observations are lacking. This parameter has not been used except in the sensitivity analysis (Sect. 5.3).

It is important to note that the computation of the losses for each single building is straightforward using vulnerability curves. However, since lognormal distributions are generally assumed, the expected aggregated loss is not the sum of the medians of the individual losses, as would be the case in a deterministic computation, and the standard deviation cannot be computed easily. The simulation of random scenarios (Monte Carlo analysis) therefore has to be performed from the source all the way to the aggregated losses, not only up to the damage distribution. This is the added value of using the scenario risk calculator of the OpenQuake engine.
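A toy computation illustrates why: for lognormally distributed individual losses, the mean of each loss exceeds its median by the factor exp(σ²/2), so summing medians, as a deterministic computation would, underestimates the expected aggregated loss. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

n_buildings, n_sims = 121, 100_000
mu, sigma = np.log(1.0e5), 1.0          # median loss 100 kCHF per building
losses = rng.lognormal(mu, sigma, size=(n_sims, n_buildings))
total = losses.sum(axis=1)

print(f"sum of medians : {n_buildings * np.exp(mu):.3e} CHF")
print(f"mean aggregate : {total.mean():.3e} CHF")  # larger by ~exp(sigma^2/2)
print(f"std aggregate  : {total.std():.3e} CHF")   # needs the simulation
```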

3 Ground motion prediction for Basel

3.1 Scenarios

The scenario earthquakes have been selected based on the historical damaging events in Basel and the disaggregation of the Swiss Hazard Model 2015 (Wiemer et al. 2016). We briefly describe here the historical seismicity in the area and the events selected for the scenarios (magnitude, fault location, etc.). All events are considered to be related to normal faults (graben structure).

The first known event that struck the area of Basel may have occurred in 250 AD and damaged the Roman city of Augusta Raurica, located 10 km east of Basel, with an estimated magnitude Mw = 6.0 (Fäh et al. 2006). However, Basel’s strongest historical earthquake is the Mw = 6.6 event that occurred on October 18th 1356 (Fäh et al. 2009). It destroyed the city and caused damage in many surrounding villages, though the number of fatalities probably remained limited thanks to numerous foreshocks that presumably made people stay out of their houses. Two fatalities are known by name; the real number is unknown (Fäh et al. 2009). This event is the largest one known north of the Alps. Other historical events of magnitude 5 and above occurred in 1650 and 1721, causing slight damage in Basel, such as cracks in buildings and falls of chimneys and roof tiles (Schwarz-Zanetti and Fäh 2011; Gisler and Fäh 2011). More recently, the 2006 geothermal-induced event (Mw = 3.2) caused widespread minor non-structural damage (e.g. Ripperger et al. 2009).

Some information on the 1356 event is available, and it therefore constitutes the first investigated scenario. Ferry et al. (2005) reconstructed sections of the Reinach fault that may have ruptured in 1356 (in blue in Fig. 2). The length of the investigated sections of the fault is too short to correspond to a Mw = 6.6 event, but the extent to the south is not important for our scenarios since the distance between the rupture and Basel depends only on the northern end. This end, however, is debatable as well. The Reinach fault is located very close to the city, and the distance to the rupture is mostly related to the assumed depth. Though the rupture reached the surface (Ferry et al. 2005), it is not realistic to assume that the fault released a significant part of its energy at the surface, because the asperities generating the largest amount of energy are generally located at depth. A shallow depth of 2 km is adopted, though it is difficult to justify. Cauzzi et al. (2015) showed that the ground motion generated with the Edwards and Fäh (2013) model with a stress drop of 60 bars fits the reconstructed macroseismic field well, and it was therefore used here.

Fig. 2
figure 2

Surface trace of the faults used for the different scenarios (the trace of the Mw = 6.6 event continues south, see text). The considered school buildings are displayed. Red lines delineate the municipality boundaries of the canton Basel-Stadt. The canton is located in Switzerland (red) at the border with France (blue) and Germany (yellow)

The Swiss Hazard Model 2015 (Wiemer et al. 2016) used all the available seismicity information and up-to-date ground motion prediction models to estimate the probability of occurrence of each ground motion intensity measure (PGA and spectral acceleration) for Switzerland, at the theoretical Swiss reference rock (Poggi et al. 2011). For instance, the model provides the probability of exceedance of each SA(0.2 s) value in the city centre of Basel. It also provides the disaggregation of the results for a given probability value. We consider here the probability of occurrence of 10 % in 50 years, corresponding to a return period of 475 years. This corresponds to the probability of occurrence used as reference in the current Swiss design code SIA261 for normal buildings. For this particular value, SA(0.2 s) at the Swiss reference rock is 0.20 g in Basel, and the distribution of the events contributing to this estimated SA(0.2 s), the so-called disaggregation, is represented in Fig. 3.
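Under the usual assumption of Poissonian occurrence, the correspondence between these two numbers follows from T = −t/ln(1 − P), with exceedance probability P = 0.1 over the exposure time t = 50 years: T = −50/ln(0.9) ≈ 475 years.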

Fig. 3
figure 3

Disaggregation of the Swiss hazard model 2015 for SA(0.2 s) at a return period of 475 years in Basel city-centre (Wiemer et al. 2016). The grey scale represents the probability density of earthquake magnitude and distance

The most likely value corresponds to an event of magnitude Mw of 5.6–5.8 located at 5–10 km distance. Therefore, the second scenario corresponds to a Mw = 5.7 event located on a fault parallel to the Reinach fault at a distance of 7.5 km to the city centre and with a length compatible with the magnitude of the event (orange line in Fig. 2).

In the disaggregation results, other magnitude–distance pairs have a high probability. Among these, an event of Mw = 5.0 at 5 km distance has been used as a third possible scenario (green line in Fig. 2).

3.2 Ground motion prediction

The Swiss Stochastic Model developed by Edwards and Fäh (2013) and parameterized by Cauzzi et al. (2015) is the most appropriate Ground Motion Prediction model available for the considered area. It is based on the analysis of the recorded events in Switzerland and a finite-fault source model. The ground motion is predicted at the Swiss Reference Rock model (Poggi et al. 2011) with Vs30 = 1100 m/s, which corresponds approximately to the weathered molasse rock of the Swiss Foreland.

This GMPE provides single-station sigma (site variability has been removed), which is necessary for a coherent uncertainty estimation at the reference rock. Several stress drop values are available and the 60 bars value has been chosen as recommended by Cauzzi et al. (2015) who validated this value against the Swiss historical catalogue.

Moreover, sensitivity tests were performed in Sect. 5.3 using the Akkar and Bommer (2010) GMPE, adjusted to the Swiss reference model (Edwards et al. 2016), in order to quantify the epistemic uncertainty.

Although spatial correlation is a property of the GMPE, the existing spatial correlation models are not significantly different (Esposito and Iervolino 2012). Therefore, the model of Jayaram and Baker (2009), available in OpenQuake, has been used to generate spatial correlation in the GMFs.

3.3 Site effects

The largest part of the city of Basel is located in the Rhine Graben, where deep Tertiary and Quaternary sediments were deposited (up to 1000 m thick). These sediments are partly consolidated, with Vs30 values around 400–500 m/s, and lead to a significant amplification of the ground motion, of about a factor of 3 with respect to the Swiss reference rock. Michel et al. (2016a) reviewed and improved the mapping of the amplification in the Basel area in terms of spectral acceleration. They proposed two alternative amplification maps that have been used for the scenario computation: a map based on the microzonation proposed in 2006 (Fäh and Huggenberger 2006), corrected to the Swiss Reference Rock model of Poggi et al. (2011), and a new map based on the interpolation of the amplification observed at the Swiss Strong Motion Network, computed following Edwards et al. (2013).

Michel et al. (2016a) treated the Rhine Graben with its deep sediments differently from the area outside the Graben, where Quaternary sediments of variable thickness (<35 m) and shear wave velocity lie directly on Mesozoic rock. The considered portfolio of school buildings is mainly located in the Rhine Graben, where the amplification is rather homogeneous. Two school buildings are located outside the Graben and are both equipped with a modern strong motion station, for which Edwards et al. (2013) directly provide the amplification function. Therefore, the map of Michel et al. (2016a) is used only for the Rhine Graben area.

Using several amplification models is necessary to map the uncertainty in site amplification. Since single-station sigma, i.e. without site uncertainty, is used in the prediction of the ground motion at the reference rock, uncertainties are in this way not double-counted.

We computed 500 GMFs for each amplification model yielding a total of 1000 GMFs.

4 Vulnerability of Basel school buildings

4.1 Taxonomy of school buildings in Basel

The present study focuses on cantonal schools in Basel-City, which are attended during compulsory education. About 60 different schools, whose building stock comprises more than 250 structures, were taken into consideration. Only the 121 buildings with classrooms, in which the pupils spend most of their time, were selected for the study. Information on each structure was collected from the archive of the buildings department, recent engineering reports and site visits.

The construction dates of the buildings cover a large time span, with the oldest school built in the fifteenth century and the majority in the twentieth century. Modern seismic codes were first introduced in Switzerland in 1989 and updated in 2003. In Basel, additional specific information became available in 2009 with the introduction of a spectral microzonation. However, only 3 % of the schools were built after 2003, and more than three quarters were built before 1989. Hence, the majority was not designed to withstand earthquakes. Due to a change in the educational system, a significant number of school buildings had to be renovated in recent years. Within this renovation process, the cantonal building department decided to systematically assess the earthquake safety of all schools built prior to 1989 and to retrofit where necessary. Because of the changes in the characteristics of the building stock due to the retrofitting campaign, two states are considered here: before and after retrofitting.

About one-third of the considered buildings are masonry structures and about another third reinforced concrete (RC), as detailed in Table 1. A bit more than half of the existing masonry buildings are older buildings with flexible floors, and about half of the RC buildings are wall structures. The rest of the buildings are mixed RC-masonry or lightweight structures. Based on the available information, buildings were grouped according to their characteristics, primarily their construction material and type of horizontal load resisting system, as usually done (e.g. Lagomarsino and Giovinazzi 2006). Masonry structures were subdivided into two groups: L light masonry (modern bricks with voids) and H heavy masonry (stone masonry and solid bricks). A further refinement of the masonry building types has been made according to the floor type: RF rigid floors (concrete and composite slabs) and FF flexible floors (wooden floors). Reinforced concrete buildings were first categorized according to the type of lateral load resisting system: W wall structures, F frame structures, FW frames in one direction and walls in the other, and P particularities, such as very irregular wall buildings and post-tensioned structures. Furthermore, as the behaviour of squat and slender wall structures differs, the category W was refined into S slender walls, with an aspect ratio >1.5, and NS non-slender walls, with an aspect ratio equal to or smaller than 1.5. All mixed structures are RC frames with masonry infill (MI). Two types of potentially vulnerable elements were identified for these: masonry walls that do not span the entire height of the frame and frames that are infilled in all but a single storey. The first type of element is vulnerable to out-of-plane (OOP) and short-column failure; out-of-plane failure was considered to be the critical failure mode for the buildings present in this study. The second type is prone to developing a soft storey mechanism (SS). The type Others (OTH) comprises all buildings that could not be attributed to any other class; many of them are lightweight structures, presumably made of wood or sandwich panels. No further refinement was carried out, as the established 12 types appeared sufficiently detailed for a building stock of 121 buildings within the scope of this study, and the variations within one type with regard to the number of storeys and the regularity in plan and elevation were limited.

Table 1 Taxonomy and distribution of buildings before (BR) and after (AR) retrofitting

In the course of the assessment campaign, structures were assessed by computing the ratio of the capacity of the building to the demand given by the current code. This ratio can be determined force- or displacement-based and is named the compliance factor, for which a threshold value of 0.25 was required at the time the schools were assessed (SIA 2004). If the compliance factor of the structure lies below this value, retrofitting to reach at least the threshold value is mandatory. Retrofitting to higher compliance factors is necessary only if the cost of the retrofitting is considered proportionate with regard to the occupancy of the building. Relations to calculate this proportionality were established using probabilistic models taking into account the cost of saving a human life after a disaster and are provided in the code (SIA 2004). Common major retrofitting measures include stiffening flexible floors and constructing or strengthening RC walls. Furthermore, expansion joints were closed, and walls with openings and roofs were strengthened locally. To account for the retrofitting, buildings were typically attributed to another type after retrofitting. For instance, a masonry building whose wooden slabs were stiffened would move from type M_H/L_FF to M_H/L_RF. The type of 33 of the buildings changed due to retrofitting, while a few retrofitted buildings remained in the same type because the lateral load resisting system did not change. In these cases, the regularity, resistance or ductility of the buildings was improved, e.g. by closing expansion joints and strengthening RC walls, but not to the extent that a new, retrofitted type needed to be created. Table 1 shows an overview of all types and the distribution of buildings before and after retrofitting.

The elements at risk considered in this study are therefore the 121 individual school buildings. Each of them is associated with a building type, has a certain value, i.e. replacement cost, and is used by a certain number of pupils. For the project, we assumed full occupancy, corresponding to an event occurring during school time.

4.2 Fragility curves

We associated each type with a structural model following a non-linear static procedure assuming a single-degree-of-freedom system. The structural computations are detailed in Résonance (2016) and summarized hereafter. Each type is represented by a bilinear capacity curve defined by three parameters: the initial fundamental period T, the yield displacement Δy and the ultimate displacement Δu (Fig. 4).

Fig. 4
figure 4

Definition of the damage grades on the capacity curve

For each building type, one to three typical buildings were modelled using adequate 2D models in order to generate capacity curves. Material properties were chosen based on experimental data from the literature and on the values provided in Swiss guidelines.

Besides purely mechanical considerations, the fundamental period has been further validated in the project by ambient vibration measurements in buildings, as proposed by Michel et al. (2010, 2011, 2012). Finally, stochastic sets of capacity curves were computed based on intervals in the values of the input parameters for each model (1000 samples chosen), as depicted in Fig. 5a.

Fig. 5
figure 5

Example of computation of capacity curves (a), fragility curves (b), and vulnerability curves for fatalities and injured (c) for building type M_H_RF

The EMS98 damage scale is used in this project (Grünthal et al. 1998). It has 5 grades, defined from the description of building damage: slight (DG1), moderate (DG2), severe (DG3), partial collapse (DG4) and complete collapse (DG5). The corresponding limit states of the model have been defined in terms of displacement capacity from the bilinear capacity curves of all building types (Fig. 4). DG1 corresponds to a displacement exceeding 0.7*Δy, DG2 to 1.5*Δy, DG3 to ½*(1.5*Δy + Δu) and DG4 to Δu. If the ductility is zero, which may be the case for a few capacity curves in each set, Δy and Δu and thus DG2 through DG4 coincide. These limit states implicitly define a sixth damage state: DG0 (no damage). This corresponds to the limits defined by Lagomarsino and Giovinazzi (2006), except for DG3, which is set midway between DG2 and DG4 instead of midway between Δy and DG4. The fifth damage grade DG5 is estimated based on the assumption that the damage grades follow a binomial distribution, as proposed by Lagomarsino and Giovinazzi (2006) and improved by Michel et al. (2016b), to keep the method coherent.
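A minimal sketch of these displacement thresholds, assuming the bilinear capacity parameters Δy and Δu are given:

```python
def damage_grade_thresholds(d_y, d_u):
    """Displacement thresholds for DG1-DG4 on a bilinear capacity curve,
    following the limits used in this study (after Lagomarsino and
    Giovinazzi 2006, with DG3 set midway between DG2 and DG4)."""
    dg1 = 0.7 * d_y
    dg2 = min(1.5 * d_y, d_u)  # with zero ductility, DG2-DG4 coincide
    dg4 = d_u
    dg3 = 0.5 * (dg2 + dg4)
    return dg1, dg2, dg3, dg4
```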

In order to derive fragility curves (Fig. 5b), the set of displacement thresholds corresponding to the set of damage grades should be related to the chosen intensity measure (IM) of the ground motion. The method is detailed in Michel et al. (2016b) and summarized in the following.

The forward problem here is to compute the response of a structure with a given capacity curve to a given ground motion. The inverse problem to solve is to determine for which levels of ground motion each damage grade is exceeded. Several methods exist to solve the forward problem. Non-linear time history analysis could be used if a hysteretic model providing the dynamic force/displacement relationship were available. In order to avoid time-consuming time-history analyses, most of the existing vulnerability methods follow a non-linear static procedure. In such a procedure, the non-linear displacement demand is obtained from the response of an equivalent linear system (see Michel et al. 2014a for a review). Methods based on the elastic period (e.g. the N2 method, Fajfar 1999) or the secant period (e.g. the MADRS method, FEMA 2005) exist. The method of Lin and Miranda (2008) assumes a period elongation and a damping increase with increasing deformation of the structure. It has been selected here because it is more accurate than other methods (Lin and Miranda 2009), it does not require iterations, and it is based on an intermediate period, between the elastic and the secant period, which seems physically realistic.

The pseudo-spectral acceleration (or displacement) at the elastic structural period, SA(T) at 5 % damping, is an input parameter of these non-linear static procedures and was therefore chosen as the IM in this project. Moreover, it is a better explanatory parameter for the structural response than PGA (Crowley et al. 2004), and therefore decreases the aleatory uncertainty in the fragility curves, as also shown by Michel et al. (2012), and is provided by the chosen GMPE.

Weatherill et al. (2015) showed that taking the inter-period correlation into account is important for scenario computations. If it is not available, they suggest using a single intensity measure. The OpenQuake engine does not yet provide the possibility of using inter-period correlation. Therefore, as part of the sensitivity analysis (Sect. 5.3), a computation has also been performed converting all the curves to the same intensity measure, in this case SA(0.5 s).

For each level of ground motion, the response of the set of capacity curves to a set of spectra derived from the Akkar et al. (2014a) ground motion prediction equation, conditioned on the selected ground motion value, is computed using the Lin and Miranda (2008) method, as detailed in Michel et al. (2016b). The Akkar et al. (2014a) GMPE has been chosen because its inter-period correlation is also provided (Akkar et al. 2014b). The distribution of the damage grades obtained from these computations constitutes the fragility curves. They are defined discretely, without fitting a lognormal distribution. The set of spectra used depends on the scenario chosen in the GMPE, which differs with regard to magnitude, distance and style of faulting, so that the fragility curves are scenario-dependent as well.
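The sketch below outlines this Monte Carlo derivation. The displacement-demand function is a deliberately simplified placeholder (equal-displacement rule) standing in for the Lin and Miranda (2008) procedure, and it reuses the damage_grade_thresholds helper sketched above; the variability parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def displacement_demand(sa, period):
    """Placeholder for the non-linear static procedure: elastic spectral
    displacement [m] from spectral acceleration [g] at the given period,
    used here in place of Lin and Miranda (2008)."""
    return sa * 9.81 * (period / (2.0 * np.pi))**2

def fragility(im_levels, capacities, n_spectra=100):
    """Discrete fragility curves P(DG >= k | IM), k = 1..4, averaged over
    a stochastic set of capacity-curve samples (dicts with T, dy, du)."""
    probs = np.zeros((len(im_levels), 4))
    for i, im in enumerate(im_levels):
        for cap in capacities:
            thresholds = damage_grade_thresholds(cap["dy"], cap["du"])
            # spectrum-to-spectrum variability conditioned on the IM value
            sa = im * rng.lognormal(0.0, 0.3, n_spectra)
            demands = displacement_demand(sa, cap["T"])
            probs[i] += [(demands >= t).mean() for t in thresholds]
        probs[i] /= len(capacities)
    return probs
```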

The fragility curves obtained in this way include: (1) aleatory uncertainties due to the different characteristics of the buildings constituting a type and the variability in the response to a ground motion level defined by a single parameter, and (2) epistemic uncertainties due to the assumptions made in the modelling. As a result, the uncertainties increase with the level of ground motion, deviating from the lognormal assumption. However, the epistemic uncertainties may still be underestimated, and their evaluation would require the use of more advanced models.

Furthermore, the obtained fragility curves were compared to empirical curves, after converting the IM from spectral acceleration to macroseismic intensity, to ensure realistic results (Michel and Fäh 2016). Initially, this comparison showed that, according to the mechanically obtained curves, the damage grades were predicted to occur at significantly lower intensities than according to the empirical ones. Consequently, some assumptions of the structural models were revised. For instance, similarly to Lang and Bachmann (2004) and Borzi et al. (2008), a storey mechanism was initially assumed to determine the displacement capacity of the masonry buildings with rigid floors (RF). This mechanism was eventually used to calculate the lower-bound displacement capacity, the upper bound being a uniform development of plastic deformation over the entire height of the building, similarly to Lagomarsino et al. (2010). For the RC_W_S buildings, the modifications mainly concerned the plastic hinge length and the ultimate curvature (Résonance 2016).

4.3 Vulnerability curves

In order to derive vulnerability curves (Fig. 5c), i.e. the probabilistic distribution of loss ratios (fatalities, injured people and financial losses) as a function of the chosen IM, fragility curves are combined with loss ratios. Each DG corresponds to a probabilistic distribution of loss ratios. In this project, loss ratios are assumed to follow a uniform distribution between two plausible bounds obtained from the literature. The combination of fragility curves and loss ratios is performed by Monte Carlo analysis: a large number of random samples from the damage distribution defined by the fragility curves are multiplied by independent random values from the loss ratio distribution in order to determine the distribution of the losses. This distribution is defined by its mean and coefficient of variation, as required by OpenQuake v1.6 and previous versions, which assume a lognormal distribution. This assumption has drawbacks (e.g. the value 0 cannot be reached, although zero fatalities are common in our computations), so other distributions should be considered in the future. In the following, the literature review supporting the choice of the loss ratios is detailed.
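A minimal sketch of this combination step, assuming discrete fragility curves are available as exceedance probabilities per damage grade and uniform loss-ratio bounds per grade; the numbers below are placeholders in the spirit of Table 3, not the values adopted in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def vulnerability_point(p_exceed, lr_bounds, n_samples=10_000):
    """Mean and coefficient of variation of the loss ratio at one IM level.

    p_exceed  : P(DG >= k) for k = 1..5, monotonically decreasing
    lr_bounds : (min, max) of the uniform loss-ratio distribution per DG 1..5
    """
    # probability of being exactly in each damage grade DG0..DG5
    p = np.concatenate([[1.0], p_exceed, [0.0]])
    p_dg = p[:-1] - p[1:]
    # sample damage grades, then loss ratios given the grade
    dgs = rng.choice(6, size=n_samples, p=p_dg)
    lo = np.array([0.0] + [b[0] for b in lr_bounds])
    hi = np.array([0.0] + [b[1] for b in lr_bounds])
    lr = lo[dgs] + (hi[dgs] - lo[dgs]) * rng.random(n_samples)
    return lr.mean(), lr.std() / lr.mean()

# placeholder repair-cost ratio bounds per DG1..DG5
bounds = [(0.0, 0.02), (0.1, 0.4), (0.4, 0.8), (1.0, 1.0), (1.0, 1.0)]
mean_lr, cov = vulnerability_point(np.array([0.9, 0.5, 0.2, 0.05, 0.01]), bounds)
```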

Coburn and Spence (2002) showed that, excluding secondary effects (landslides, tsunami, etc.), 90 % of the deaths during earthquakes are due to building collapse. Other causes of death, such as heart attacks, can hardly be predicted. These observations justify the use of so-called lethality rates, first introduced by Coburn et al. (1992) and defined “as the ratio of the number of people killed to the number of occupants present in collapsed buildings of that class.” They can be extended to injured people and, besides the building class, depend on its function, occupancy, collapse mechanism and extent, ground motion characteristics, occupant behaviour, effectiveness of rescue teams, etc.

Based on this concept, casualty models have been proposed in the literature for different countries and calibrated with observed data. Although more detailed models exist, only two levels of casualty are considered here: injury needing medical aid (including slight injuries) and death. Casualty rates for collapsed buildings are given by building type in the first place, but can also be modulated by damage grade, number of storeys, etc.

Since only structural collapse is considered, only DG4 and DG5 according to EMS98 are generally associated with casualty ratios, though the collapse of non-structural elements, occurring at DG1-3, can also be associated with injuries or even deaths. For the current project, outdoor casualties are not considered since full occupancy is assumed. Injuries due to non-structural elements are assumed to be included in the casualty ratios.

We selected the casualty ratios for DG1-3 given by FEMA (2012), since this is the only study accounting for casualties at these damage grades. Therefore, no uncertainty could be estimated for them. However, DG4 in that study does not correspond to partial collapse as in EMS98 and can therefore not be used. At DG4, the values of the studies of Spence (2007), complemented by So and Spence (2013), Zuccaro and Cacace (2011) and Jaiswal et al. (2011), are comparable, with 12–20 % injuries and 1.5–8 % fatalities. For DG5, these three studies and the FEMA (2012) study provide comparable results for injuries (50–80 %) but more variable results for fatalities (6–30 %). The discrepancy comes only from the RC buildings, for which the value of 30 % is reached in the Spence (2007) and Zuccaro and Cacace (2011) studies, whereas FEMA (2012) and Jaiswal et al. (2011) propose 10 %. The larger value most probably corresponds to badly designed RC frames subject to pancake collapse, which were responsible for numerous fatalities in Turkey but are rare in Switzerland. Therefore, these high values have been discarded here and a maximum value of 15 % is kept. Table 2 summarizes the bounds of the casualty ratios selected for this study.

Table 2 Values (%) for the injury I and lethality D rates for each building type in Basel and each damage grade with uncertainty (minimum and maximum of the uniform distribution)

In order to estimate the property loss, one uses the repair cost ratio, defined as the ratio between the cost of repair and the replacement cost. This ratio depends on the damage grade, but also on the economic situation of the considered country, since countries in a difficult economic situation may prefer not to repair slight damage. In most recent studies (e.g. FEMA 2012), the repair cost is divided into structural, non-structural and content damage. In our study, a single value including these three damage types is given for each damage grade. These values can therefore not be directly compared to the FEMA (2012) values for structural damage, which account for only a share of the repair cost.

The studies of Kircher et al. (1997), D’Ayala et al. (1997), Tyagunov et al. (2004) and Silva et al. (2015) have been compared to retrieve financial loss ratios. Moreover, in Switzerland, the working group SIA269 proposed the values 1–40–80–100–100 % for DG1–5 of EMS98 (Jamali and Kölz 2015). For this project, DG1 is chosen between 0 and 2 %, which matches the range observed in the literature and the average of the SIA group. For DG2 and DG3, the SIA choice is kept as a maximum bound. DG4 and DG5 are set to 100 % loss, as suggested by most of the authors (see Table 3).

Table 3 Values (%) for repair cost ratios taken as minimum and maximum bounds of the uniform distribution

The studies reviewed here are based on expert judgement only. Computational analyses exist in the literature but can only be used for the buildings they have been derived for (e.g. Kappos et al. 2006).

5 Results and discussion

5.1 Ground motion fields

The amplitude of the retrieved median ground motion field logically increases strongly with magnitude. SA(1 s), corresponding to flexible buildings, is more amplified in the Rhine Graben (by about a factor of 3), while outside the Rhine Graben, larger amplifications are obtained at shorter periods such as T = 0.3 s, with values up to a factor of 5 (Michel et al. 2016a). Such interaction between source, site and building can only be reproduced when mechanics-based scenarios are used, not empirical ones. The median ground motion at each site does not, however, correspond to a realistic spatial distribution of ground motion: Fig. 6 shows an example ground motion field (1000 are used for one scenario) including spatial correlation, as actually used for the computations. The variations in the ground motion are so large that one can hardly recognize the input amplification map and source radiation pattern from a single GMF. More work is needed to assess how realistic this picture is.

Fig. 6
figure 6

Effect of spatial correlation on ground motion: example ground motion fields SA(1 s) with (left) and without (right) spatial correlation for the Mw = 5.7 scenario event. The right GMF corresponds to the median GMF

5.2 Losses

Table 4 summarizes the results of the three investigated scenarios after the retrofitting project. They reflect the expected losses for the state of the building stock that will be reached soon. It should be noted that the uncertainty values concerning the number of unusable and at least partly collapsed buildings, and consequently the number of pupils without a usable school, are rough estimates, since the uncertainty on the cumulative damage distributions is not given by OpenQuake. Nevertheless, an event similar to the 1356 earthquake would be a catastrophe: nearly all school buildings would be unusable and a large number of fatalities (several hundreds) and injured would result. In the two scenarios from the disaggregation, ten (Mw = 5.0) or several tens (Mw = 5.7) of fatalities may still occur. The fatality rates, i.e. the number of fatalities divided by the number of pupils, would be 2, 0.2 and 0.06 % for the scenarios Mw = 6.6, 5.7 and 5.0, respectively. However, building collapse, even partial, is not certain for the two latter scenarios. The uncertainties are nearly as large as the mean values, so that only the order of magnitude should be considered. A large part of the building stock would, however, not be usable right after the event. We also considered the 250 AD Augusta Raurica earthquake, but the losses were comparable to those of the Mw = 5.7 earthquake and it is therefore not studied further here.

Table 4 Results from the scenarios after retrofitting, mean values and standard deviations

As a comparison, during the 2009 L’Aquila event (Mw = 6.3 at about 4 km), about 4 % of the 1665 buildings surveyed by Tertulliani et al. (2010) in the city completely collapsed (DG5), 20 % suffered at least partial collapse (DG ≥ 4) and 90 % were not usable (DG ≥ 2). The fatality rate in the city was 0.32 % (Alexander and Magni 2013). Given the magnitude and distance to the city, which lie between our 1356 and Mw = 5.7 scenarios, our results are consistent with these observations.

Lang and Bachmann (2004) first used mechanical methods to estimate damage in the city of Basel. They computed the damage distribution for a ground motion corresponding to the design code at that time, i.e. a 475-year return period event. They found that 45 % of the unreinforced masonry buildings would at least partially collapse. Assuming EMS vulnerability class B, this damage distribution corresponds to intensity IX to X, although the PGA considered in their study is only 1.3 m/s2, corresponding to intensity VII according to Faenza and Michelini (2010) and approximately to our Mw = 5.0 scenario. The present study avoided such an inconsistency by carefully assessing the assumptions in the structural model, ensuring the robustness of the results.

The contribution of each building type to the losses has been investigated. Figure 7 shows the damage distribution of the building types for the three scenarios. The catastrophic nature of the 1356 scenario, in which many buildings suffer at least partial collapse, is clearly illustrated in this figure. The other scenarios show only a small number of buildings with DG3 or higher. Large numbers of victims logically occur in the most vulnerable building types with large numbers of pupils. The large majority of fatalities occur in masonry buildings (about 70 % for the 1356 and Mw = 5.7 scenarios). The share of the financial losses in masonry structures is lower but still dominant (about 60 %).

Fig. 7
figure 7

Average damage distribution of the buildings per type for the Mw = 5 (top), Mw = 5.7 (centre) and 1356 (bottom) scenario events after retrofitting

The fatality rates per type range from 0.3 to 6 % for the repeat of the 1356 event and from 0 to 3.5 % for the Mw = 5.7 scenario. This parameter depends in the first place on the scenario (larger rates for stronger earthquakes) and on the vulnerability. The largest fatality rate, corresponding to the most dangerous type, occurs for the MRC_MI_SS building type. The masonry building types all have similar fatality rates (about 3 % for the 1356 scenario and 0.5 % for the Mw = 5.7 scenario), significantly larger than those of the reinforced concrete structures. The RC_P building type is an exception among the RC structures: it shows fatality rates comparable to or even larger than those of the masonry types, for the 1356 event (4.4 %) as well as for the Mw = 5.7 scenario (0.4 %).

For a given scenario, the ground motion is relatively homogeneous over the considered area (most of the schools are in the Rhine Graben), so that site effects do not explain much of the differences in the fatality rates. However, the two school buildings outside the Rhine Graben show among the lowest fatality rates. This results from the combination of a lower ground motion with a less vulnerable building type. Site effects would play a more important role when extending the scenarios to the whole city.

It is also important to note that the mean damage grades are low for all the buildings. For the Mw = 5.7 scenario, the mean damage grade does not exceed 2.6; that is to say, on average, the buildings are not expected to collapse and induce fatalities. Deterministic approaches would only consider this parameter and would therefore fail to predict fatalities.

5.3 Sensitivity analysis

Table 5 shows the results of several computations of the Mw = 5.7 scenario in which the uncertainty of one component has been ignored, in order to quantify the sources of uncertainty in the results. The ground motion is by far the most uncertain component: a scenario using the median ground motion field only, i.e. including only the uncertainty in the vulnerability, is ten times less uncertain regarding fatalities, six times less regarding injured and three times less regarding financial losses. The uncertainty in the vulnerability therefore has a several times smaller impact than that in the ground motion, though this depends on the scenario and the investigated parameter. It is also not clear whether all the epistemic uncertainties in the vulnerability have been captured. Moreover, the mean losses differ significantly depending on whether the uncertainties are considered or not. This can be explained by the combination of hazard and vulnerability with non-trivial probability distributions. Depending on the scenario, this can lead to larger or smaller mean losses: in the case of the Mw = 5.7 event, fatalities are lower but financial losses larger if no uncertainty in the ground motion is considered. The effect of spatial correlation on the uncertainty of the results is also clear: the uncertainty increases by 20 % if spatial correlation is considered in this scenario (with, logically, no effect on the mean value). The effect of the uncertainty in the site amplification is limited: both strategies used (microzonation and interpolation) lead to results with insignificant differences. The effect of the uncertainty in the loss ratios is of the same order as that of the spatial correlation.

Table 5 Sensitivity study on scenario Mw = 5.7 after retrofitting

We also tested the convergence of the computations by using 2000 GMFs instead of 1000 for the Mw = 5.7 scenario. The largest difference reaches 2 %, which can be considered acceptable. The chosen number of GMFs is therefore adequate.

Table 6 presents the results for the same scenario using alternative modelling hypotheses (e.g. a different GMPE, no site amplification), in order to understand their effects. Changing the GMPE has a large impact on the results. Epistemic uncertainties in ground motion models are known to play a major role (Edwards et al. 2016). Using the Akkar and Bommer (2010) GMPE, adjusted to the Swiss reference rock (Vs/kappa adjustment), leads to an increase of 50 % in the financial losses, and the fatalities more than double. However, the importance of this uncertainty in scenario modelling, for which the magnitude and location are arbitrary, can be put into perspective, as explained before.

Table 6 Alternative modelling hypotheses on scenario Mw = 5.7 after retrofitting

The absolute effect of site amplification is also critical: the fatalities would be 1/8 of what they are if the city were built on the theoretical Swiss reference rock (corresponding approximately to the weathered molasse rock of the Swiss Plateau), and the financial losses about 1/3.

We also tested the sensitivity of the results to the intensity measure used. We converted all the fragility and vulnerability curves to the common intensity measure SA(0.5 s), as suggested by Weatherill et al. (2015), which induces an increase in the uncertainties due to the conversion (Michel et al. 2016b). The results are different but remain within the uncertainty bounds (50 % more fatalities, 15 % more financial losses); more importantly, the uncertainties are noticeably larger, multiplied by 1.4–4 depending on the considered parameter. The increase in the mean values is related to the increase in the uncertainties in the vulnerability. Such a conversion should therefore be avoided as much as possible.

Finally, an asset correlation coefficient of 0.9 has been tested. It means that the individual buildings are not sampled independently from the vulnerability curves but are highly correlated with one another. The idea behind this is that buildings should behave more homogeneously for a given event than the full uncertainty suggests, because part of the epistemic uncertainty accounted for in the vulnerability curves does not map the variability for a given event. Losses larger than the mean value for one building will then, for instance, lead to large losses for all the buildings, and vice versa. This parameter strongly increases the uncertainty in the losses (by nearly a factor of 2). However, such a high correlation lacks data and models to justify it.

5.4 Losses before and after retrofitting

The differences between the results before and after retrofitting for the whole building stock are limited with respect to the uncertainties, but their absolute value remains considerable.

The difference before/after retrofitting is computed in Table 7 for the 33 buildings out of 121 that have been, or are planned to be, retrofitted significantly enough for their type to change. The number of victims is approximately halved for all the events. The fatality rates of the retrofitted buildings remain below 0.6 % for the Mw = 5.7 event, which is significantly lower than the largest values for non-retrofitted buildings, at about 3 %. It means that these buildings, for instance the former RC_P structures, are no longer among the most problematic ones.

Table 7 Decrease in losses for the 33 retrofitted buildings for the 3 scenarios

The financial gain is smaller, around 25 %. It corresponds to direct losses of 13 MCHF for the 475-year return period event. This number does not take into account the cost of fatalities, injured and homeless, but it can be compared to the investment of the city in the seismic retrofitting of the 33 buildings.

The retrofitting measures are in general limited and target the loss of life, which is indeed reduced. It should, however, be noted that these gains are mostly due to the change of two RC_P buildings, particularly deadly in our scenarios, into other building types: these two buildings alone explain 1/3 of the gain in terms of fatalities.

6 Conclusions and recommendations for future work

In this study, loss scenarios have been computed for the city of Basel based on mechanics-based approaches. A comprehensive estimation of the uncertainties has been performed in order to better understand their sources. The scenarios performed are more precise and robust than previous studies through the coherent combination of up-to-date hazard and vulnerability data and models, including their uncertainties. We showed that the latter are large, of the same order of magnitude as the mean values. In the future, the uncertainty in each individual part of the computation has to be decreased to improve the results.

The ground motion prediction equations remain the most uncertain part of the analysis, though large improvements have been made in the last 10 years, especially thanks to the recordings of the Swiss strong motion network (Edwards et al. 2016). There is still room for improvement through the on-going densification of the network and detailed site characterization (Michel et al. 2014b; Hobiger et al. 2017). The epistemic uncertainty in the ground motion modelling, not accounted for here, has to be put into perspective for scenario modelling, since neither the location nor the magnitude of future events is known and both are therefore arbitrarily fixed in our computations. The uncertainty in the choice of the ground motion model is therefore relative and could be balanced, for instance, by a small magnitude increment. However, this issue will become crucial for real-time computations and probabilistic risk calculations. The Swiss stochastic model (Edwards and Fäh 2013), calibrated with Swiss data, is currently the optimal choice for scenario modelling in Switzerland.

Spatial correlation of ground motion plays a role in the uncertainty estimation and should be tailored to the study region. Inter-period correlation should be introduced as well in the future.

Site amplification has a critical impact on the loss scenarios (a factor of 8 on the fatalities for the 475-year return period event), but the amplification in the Rhine Graben in Basel is relatively well known since the 2006 microzonation and has recently been validated against accelerometric recordings (Michel et al. 2016a). Site amplification shows higher variability outside the Rhine Graben, where new models need to be developed in future studies.

We propagated all the uncertainties in the structural modelling to the fragility and vulnerability curves. During the development of the fragility curves, frequently used standard seismic analysis methods were found to be too conservative, i.e. biased towards higher vulnerability, to be directly used for risk scenarios, and this conservativeness first needed to be removed. In this study, the assumptions of the structural model have been critically assessed to avoid inconsistencies between the computations and past observations, such as those in the study of Lang and Bachmann (2004). For the 475-year return period event in Basel, complete collapse is unlikely.

In any case, the estimation of epistemic uncertainties for vulnerability models remains a challenge and requires larger modelling efforts. Studies on the effect of the modelling strategy on the non-linear dynamic response of buildings, such as Silva et al. (2013), or Goulet et al. (2014) for the static response of bridges, are lacking. We also recognized that data on casualty rates in the literature are limited, especially data applicable to the Central European context.

We performed a number of selected damage scenarios for the school buildings in the city of Basel and evaluated, for each building type, the distribution of damage, the number of fatalities, injured and homeless pupils, and the financial losses. Although the total collapse of a school building is not certain for the 475-year return period event (Mw = 5.7), tens of fatalities should be expected, as well as large financial losses. A large share of the buildings would be moderately damaged, meaning that they would need rapid analysis and repair, which can be challenging considering the large number of affected structures and the limited number of engineers trained for such tasks.

Despite the large uncertainties, the retrofitting measures have a noticeable impact on the results (50 % fewer fatalities for the group of retrofitted buildings). The strategy of retrofitting the weakest structures with the largest number of occupants first has proven to be efficient. However, retrofitting does not reduce the losses to zero, even for the 475-year return period scenario. A more detailed analysis of the retrofitted buildings is, however, necessary to refine this statement.

The results obviously depend on the scenario itself (magnitude, distance, depth, etc.), but this information could be obtained shortly after an earthquake and the scenarios computed in near real-time. The model developed in this study is therefore a solid basis for extending the loss scenarios to the whole city and to real-time applications.