1 Introduction

Environmental risk management consists of making decisions on human activities or construction designs as they are affected by the environment and/or have consequences or impacts on it. There are many different scenarios that can be described in these terms. Herein, attention is focused on cases where the relevant decision to be made is a level of safety design: deciding which level of safety has to be achieved in a civil work in order to minimize the risk associated with external actions of a given kind, possibly, but not exclusively, of natural origin. This kind of decision making can be treated using a prior decision tree (Benjamin and Cornell 1981). Prior decision trees consist of three elements: (a) the possible levels of safety design; (b) the random states of the system that may cause effects, and the description of their occurrence probabilities; and (c) the generalized costs (or utilities) generated by the realization of the random state and the design decision made. Random states are related to the response of the system to (random) external events.

The following examples are illustrative of the generality of this scheme.

A. Floods caused by precipitation Heavy precipitation in the catchment of a river may cause a flood at the lower end of the catchment, or drainage point. A system of small dams and channels is to be designed in these cases in order to regulate the draining of the catchment, hence avoiding severe flooding. One solution may be to take advantage of the water storage and dispersion capacity of dams and channels in order to avoid excessively high water levels in the riverbed. The random state is related to the water level attained at the drainage point. For this scenario, the costs due to the safety design are mainly related to the construction of the system of dams and channels, and to the damage caused by floods during the lifetime of the system. On the one hand, investment in a high safety level may be costly and may produce a considerable impact on the environment; nonetheless, floods might be controlled and their cost might become negligible. On the other hand, a construction with low environmental impact, and thus lower cost, may result in high expenses associated with flooding.

B. Breakwater protecting a harbor Port activities in a harbor have to be protected from ocean waves associated with wave storms. Therefore, the characteristics of the breakwater (geometry, resistance, protection effectiveness) have to be decided. Random states are the damages of the breakwater under storms occurring during its lifetime and the effects of these events on the port activities. Costs derive from the construction of the breakwater and its visual and environmental impacts, as well as from damages and interruption of the port services. Certainly, a high level of protection may lead to large construction costs and severe environmental impacts; however, it may protect port activity from disasters. The reverse might attenuate construction costs and environmental impact at the price of accidents, damages, and interruption of port services.

C. Safety of a nuclear power plant containment building A nuclear power plant might be involved in accidents which cause overpressures in the interior of the containment building. A containment failure may cause the release of radioactive materials, which would heavily affect the environment, human beings, and facilities in the surrounding areas of the plant. The containment capability of the building therefore needs to be decided. The random state to be taken into account is the release of radioactive materials under overpressure events. Indeed, the construction of a very safe containment building, in order to avoid any consequences to the environment and population, may require a very significant investment. In contrast, a weak containment building may be relatively cheap; nevertheless, in case of core meltdown or other kinds of accident, the release of radioactive material may be a human and environmental disaster.

These three scenarios have some features in common. In all of them, there are events that occur at a precise instant of time and that can potentially cause costly damages or impacts. In scenario A, these events are heavy precipitation episodes; in scenario B, this role is played by wave storms; whereas in case C the events are accidental overpressure sequences in the containment building, which are artificial rather than of environmental origin. From now on, these are called external events, as they are not intrinsic to the system to be designed (the system of dams and channels in A; the breakwater in B; the containment building in C). The random states are then a combination of the (random) external actions with the (random) response of the system: the response of the system to the external actions determines the random state, which is the final output. In the sketched cases, the system outputs, as responses to external events, are: the level of water at the drainage point in case A; the level of damage in the breakwater and the interruptions of port activity in B; the released radioactive materials and their consequences in C.

In these three cases, external events are considered as occurring at a precise instant of time. Frequently, the interest is not in the effect of a single event, but in a sequence of them in time, possibly during the lifetime of the system. In these cases, the random state of the system should be modelled as the number of system responses of different kinds or magnitudes during the lifetime. For instance, in case A, for a lifetime of 100 years, the random state of the system can be defined by three counts: the number of precipitation events causing no flood at the drainage point, the number causing a moderate flood, and the number causing a severe flood.

Decision making on the safety design level commonly consists of selecting the decision whose associated risk is the least. Nonetheless, a variety of definitions of risk have been used over the years. A common one is that the risk associated with a given safety design D is the expectation of the cost associated with D and the random state of the system, denoted N herein. As N is considered random, the cost C(N,D) is also random, and the expectation \(R(D)= {\mathrm {E}} \big [C(N,D)|D\big ]\), conditional on D, is taken as the definition of the risk associated with D. As this expectation is based on the probability distribution of the random states of the system, the modelling of N and its distribution becomes a primary goal in most risk analyses. However, this task may be demanding if no further simplifications are adopted. A very effective assumption is to consider that, conditional on D, pairs of external events and system responses are independent, so that they can be modelled separately. On the one hand, the process governing the external events can be studied from the available knowledge and observations; on the other hand, the random responses of the system to a given external action can be examined by simulation and the available structural knowledge. The probabilistic modelling of external events is typically the goal of a hazard analysis, which provides occurrence probabilities of the external events. The study of responses under given external actions can be termed a vulnerability assessment, whose goal is to provide the probabilities of random system responses under a given external action. Therefore, not only does vulnerability take its place in the risk analysis, but it also has a role of its own: vulnerability, as a function of a nominal design parameter, provides detailed information about the meaning of this parameter.

Regarding this whole process, Sect. 2 gives motivating details of vulnerability modelling in a simple risk scenario. Section 3 is divided into three subsections: Sect. 3.1 provides a definition and examples to understand vulnerability, Sect. 3.2 proposes a procedure to estimate vulnerability from the simulations of a deterministic model of the system, and Sect. 3.3 describes statistical techniques to properly treat the compositional nature of the data. Section 4 presents a case study: the vulnerability of a nuclear power plant containment building. This section is also divided into several subsections, presenting the structural characterization and numerical modelling (Sects. 4.1 and 4.2), the random parameters involved (Sect. 4.3), the response definitions and simulation methods (Sects. 4.4 and 4.5), and the simplicial regression setup (Sect. 4.6). Results of the vulnerability model corresponding to the case study are presented in Sect. 4.7. Overall, the present work combines several pieces: marked Poisson processes for modelling events in time, decision making, simplicial regression, importance resampling, the finite element method, and a vulnerability model. All these pieces are well known but, apparently, they have not been combined before.

2 Vulnerability in a simple scenario of risk analysis

Assume that a system, characterized by its design D, is under the effects of punctual external actions, generically called events, at times \(t_j\). Each event has a random magnitude \(X_j\). As a result, the system produces random responses according to such external actions. A simple but frequent model for such events is a homogeneous Poisson process, with the parameter \(\lambda\) accounting for the mean number of events per unit time. These Poisson processes, with magnitudes associated with events, are often called marked Poisson processes. This simple scenario is completed with assumptions of independence of the magnitudes from occurrence times, and of equal distribution of the magnitude from event to event, i.e. \(f_X\) is the common probability density function for all the magnitudes \(X_j\) associated with the events at times \(t_j\) (Embrechts et al. 1997). However, interest is focused on the system responses to such events. For practical applications, one assumes that these possible responses are classified into categories of response, \(\theta _i, i=1,2,\dots , I\), possibly including a case in which the system is not altered by the external action. Assuming again that the random responses \(\theta _i\) are independent from event to event, the process of event times and responses also constitutes a marked Poisson process (also known as a compound Poisson process). The occurrence of each kind of response \(\theta _i\) can be proven to be a Poisson process with parameter \(\lambda {\mathrm {P}} [\Theta =\theta _i|D]\), where \(\Theta\) denotes a categorical random variable whose outputs are the \(\theta _i\)’s. Moreover, for different kinds of responses, these Poisson processes are independent. Denoting by \(N_i\) the random number of responses \(\theta _i\) in a lifetime L, the probabilities can be computed as

$${\mathrm {P}} \big [N_i = n_i|L, D\big ] = \frac{\big (\lambda {\mathrm {P}} [\Theta =\theta _i|D]\ L\big )^{n_i} \exp \big (-\lambda {\mathrm {P}} [\Theta =\theta _i|D]\ L\big )}{n_i!},$$
(1)

for \(n_i=0,1,2,\dots\) and \(i=1,2,\dots ,I\). These characterize the process of the responses and allow one to compute the risk of a given design D once the costs of each response are known. Assume that \(C_0(D)\) is the cost of constructing the system, including its associated impacts, and \(C_i(D)\) the cost associated with each response \(\theta _i\) of the system. In a first approach, the total cost of a given design during its lifetime L is

$$C(N,D) = C_0(D) + \sum _{i=1}^I N_i C_i(D),$$

where \(N=(N_1,N_2,\dots ,N_I)\) contains the random number of times each response \(\theta _i\) occurs during the lifetime L. This expression is linear in the \(N_i\), but could be specified in other ways, for instance by adding quadratic terms in \(N_i\). The risk is computed by taking the expectation of C(N,D) according to the distribution of the \(N_i\)’s in Eq. (1), which yields

$$R(D)= {\mathrm {E}} \big [C(N,D)|D\big ]= C_0(D) + \sum _{i=1}^I C_i(D) L\ \lambda {\mathrm {P}} [\Theta =\theta _i|D].$$
(2)

This expression reveals that \({\mathrm {P}} [\Theta =\theta _i|D]\) is a fundamental piece of the risk R(D). The randomness of \(\Theta\) comes both from the occurrences of external events and from the random response of the system. This feature can be made explicit using the total probability theorem

$${\mathrm {P}} [\Theta =\theta _i|D] = \int {\mathrm {P}} [\Theta =\theta _i|x,D]\cdot f_X(x)\ dx\ , \ f_X(x|D) =f_X(x),$$
(3)

where the latter expression corresponds to the assumption of independence between the system design and the magnitude of the event. Substituting Eq. (3) in (2), the risk is

$$R(D)= C_0(D) + \sum _{i=1}^I C_i(D) \int \underbrace{ {\mathrm {P}} [\Theta =\theta _i|x,D]}_{vulnerability}\ \underbrace{\lambda L f_X(x)}_{hazard} \ \mathrm{d}x.$$
(4)

Here, the three elements of a prior decision tree (safety design, random states, and costs) appear together. The random states are the different values of the vector \(N=(N_1,N_2,\dots ,N_I)\), denoting the number of times that each response \(\theta _i\) appears during the lifetime L. However, the main feature of Eq. (4) is that the characteristics of the process of external actions, represented by \(\lambda\) and \(f_X\), and the vulnerability model, \({\mathrm {P}} [\Theta =\theta _i|x,D]\), appear separately. This fact suggests the separate modelling of the two functions. On the one hand, a hazard assessment procedure can account for \(\lambda\) and \(f_X\); and, on the other hand, a vulnerability analysis can be used to construct a vulnerability model from the structural characteristics of the system implied by D.
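To make Eq. (2) concrete, the following minimal sketch (in Python) evaluates the risk of two candidate designs. The event rate, lifetime, costs, and vulnerability probabilities are illustrative assumptions, not values taken from any of the scenarios above.

```python
# A minimal sketch of Eq. (2): risk of two hypothetical designs under a
# homogeneous Poisson process of external events. All numbers are illustrative.

LIFETIME = 100.0   # lifetime L in years
RATE = 0.5         # lambda: mean number of external events per year

# For each hypothetical design D: construction cost C0(D), per-response costs
# C_i(D), and vulnerability probabilities P[Theta = theta_i | D] (summing to 1).
designs = {
    "low_safety":  {"c0": 10.0, "c": [0.0, 5.0, 200.0], "p": [0.80, 0.15, 0.05]},
    "high_safety": {"c0": 40.0, "c": [0.0, 5.0, 200.0], "p": [0.97, 0.025, 0.005]},
}

def risk(d):
    # R(D) = C0(D) + sum_i C_i(D) * L * lambda * P[Theta = theta_i | D]
    return d["c0"] + sum(ci * LIFETIME * RATE * pi for ci, pi in zip(d["c"], d["p"]))

for name, d in designs.items():
    print(name, "risk =", risk(d))
```

In a prior decision tree, the design with the smaller value of R(D) would be preferred.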

In more complex scenarios, where independence of external events and their magnitudes, or the homogeneous Poisson process, cannot be assumed, expression (4) turns out to be more involved, but still the model of vulnerability, represented by \({\mathrm {P}} [\Theta =\theta _i|x,D]\), is of primary interest.

As a summary of the definitions described in this section, the flowchart in Fig. 1 contains a schematic representation of the relation between hazard, vulnerability, and costs.

Fig. 1

Summary flowchart with the relation between hazard, vulnerability, and costs. For each possible design or decision D, a random state N represents the number of responses of the system in each (random) category \(\Theta =\theta _i\). Hazard consists of a probabilistic description of N, and vulnerability describes the probability of response \(\theta _i\) given an external action. Costs correspond to each design and the actual number of responses along the lifetime

3 Vulnerability model

3.1 Definition of vulnerability

As motivated in Sect. 2, the vulnerability of a system under external actions of magnitude x can be described by the probabilities

$${\mathrm {P}} [\Theta =\theta _i| x, D], \quad i\,=\,1,2,\dots, I,$$
(5)

for any magnitude x of the external events. This set of conditional probabilities is called the vulnerability model, or vulnerability for short.

In order to statistically model such probabilities as a function of the design D and the magnitude of the event x, a probabilistic model of the random state \(\Theta\) is needed. Very often a deterministic model of responses is available, but it does not account for the uncertainty of \(\Theta\). The randomness of \(\Theta\) comes from the fact that the design D and the magnitude of external actions x together do not define a complete set of conditions in which the deterministic model of response is applied. This means that both D and x are nominal descriptions of the design and the external action, and the whole set of parameters needed to feed the deterministic model of responses is distributed conditionally on D and x. This joint distribution is the key to proceeding by Monte Carlo methods: once D and x are fixed, a sample of the whole set of required parameters is simulated and, subsequently, the corresponding response is computed. This provides a sample of responses from which the probabilities of the vulnerability model (5) can be estimated.
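The following sketch illustrates this Monte Carlo idea for a single nominal pair (D, x); the deterministic response model and the parameter distributions are hypothetical stand-ins for a real structural model.

```python
import numpy as np

rng = np.random.default_rng(0)

def deterministic_response(x, params):
    """Hypothetical stand-in for the deterministic model of the system:
    maps an action magnitude x and a full parameter set to a response category."""
    capacity = params["strength"] * params["geometry_factor"]
    if x < 0.6 * capacity:
        return 0          # theta_1: no damage
    elif x < capacity:
        return 1          # theta_2: moderate damage
    return 2              # theta_3: severe damage

def estimate_vulnerability(x, n_sim=10_000):
    """Monte Carlo estimate of P[Theta = theta_i | x, D] for a fixed nominal
    design D and action magnitude x; the parameter distributions conditional
    on (D, x) are illustrative assumptions."""
    counts = np.zeros(3)
    for _ in range(n_sim):
        params = {"strength": rng.lognormal(mean=0.0, sigma=0.15),
                  "geometry_factor": rng.normal(loc=1.0, scale=0.05)}
        counts[deterministic_response(x, params)] += 1
    return counts / n_sim

print(estimate_vulnerability(x=0.8))   # three estimated probabilities for this x
```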

3.2 Estimation of parameters

Given the nominal parameters D and x, repeated simulations of the action characteristics and of the full parametric description of the system are required. For each set of simulated parameters of the action and the system, the deterministic model predicts a response \(\theta _i\). A typical procedure is to select values of the nominal design D. Let \(D_k\), \(k=1,2,\dots ,n\), be these sampling points. Then, values of the magnitude of the external actions \(x_j\), \(j=1,2,\dots ,n_x\), are also selected. For each \(D_k\) and \(x_j\), one simulates all the necessary parameters to feed the available model of the system, and hence computes the predicted response \(\theta ^{(k,j,s)}\), which takes values among the \(\theta _i\)’s, with s indexing the simulation. The selection of the sampling points can be designed in several ways, but in many situations repeated simulations for the same \(D_k\) and \(x_j\) are convenient: changing the characteristics of the system \(D_k\) or the action \(x_j\) is frequently time consuming and of high computational cost.

Applying this simulation scheme, the data obtained, for instance for \(D_1\), are

$$\begin{array}{llll}
(D_1,\ x_1,\ \theta^{(1,1,1)}) & (D_1,\ x_1,\ \theta^{(1,1,2)}) & \dots & (D_1,\ x_1,\ \theta^{(1,1,n_{1,1})}) \\
(D_1,\ x_2,\ \theta^{(1,2,1)}) & (D_1,\ x_2,\ \theta^{(1,2,2)}) & \dots & (D_1,\ x_2,\ \theta^{(1,2,n_{1,2})}) \\
\vdots & \vdots & & \vdots \\
(D_1,\ x_{n_x},\ \theta^{(1,n_x,1)}) & (D_1,\ x_{n_x},\ \theta^{(1,n_x,2)}) & \dots & (D_1,\ x_{n_x},\ \theta^{(1,n_x,n_{1,n_x})}) \\
\end{array}$$

This block is repeated for \(D_2\), \(D_3\),..., \(D_n\). Note that a common value of \(n_{k,j}\) is on the order of \(10^4\), \(10^5\), or larger. The number of studied designs, n, and of levels of the action, \(n_x\), is typically somewhere between 2 and 50 in simplified scenarios. The analyst is then confronted with a data set of about a million or more triplets \((D_k,x_j,\theta )\). This data set should be used to fit a vulnerability model which predicts \({\mathrm {P}} [\Theta =\theta _i|D,x]\) for any design D and magnitude of the action x. Multinomial logistic regression provides this type of model. However, multinomial logistic regression relies on maximum likelihood estimation of the parameters, which requires some sort of iterative procedure. Given the large size of the data set, the convergence of the iterative procedures can be problematic, especially if the chosen sampling design results in an unbalanced data set.

It seems preferable to summarize the data set into proportions of responses obtained for each sampling point \((D_k,x_j)\). This is the approach adopted here. Let these proportions be \(\mathbf {p}^{(k,j)}=(p_1^{(k,j)},p_2^{(k,j)},\dots ,p_I^{(k,j)})\). The data set is then reduced to \(n\times n_x\) data points corresponding to the design \(D_k\), the magnitude of the action \(x_j\), and the I proportions in \(\mathbf {p}^{(k,j)}\), whose components add to one. This kind of data corresponds to a regression in the simplex: the variables to be predicted are the probabilities \(p_i= {\mathrm {P}} [\Theta =\theta _i|D,x]\), \(i=1,2,\dots ,I\), observed as \(\mathbf {p}^{(k,j)}\), which are I-part compositions (Egozcue et al. 2012; Pawlowsky-Glahn et al. 2015). The explanatory variables in the regression are the design level \(D_k\) and the value of the nominal external action \(x_j\), or transformations of them.
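A minimal sketch of this summarizing step, with made-up simulation triplets, could look as follows; note how a response that never occurs for a given \((D_k,x_j)\) yields a zero proportion, an issue addressed later by importance re-sampling (Sect. 4.5).

```python
from collections import Counter

# Made-up raw simulation triplets (D_k, x_j, response index) for illustration.
raw = [(1, 0.4, 0), (1, 0.4, 0), (1, 0.4, 1),
       (1, 0.8, 1), (1, 0.8, 2), (1, 0.8, 2)]

I = 3  # number of response categories
counts = Counter()
totals = Counter()
for D_k, x_j, theta in raw:
    counts[(D_k, x_j, theta)] += 1
    totals[(D_k, x_j)] += 1

# Proportions p^(k,j): one I-part composition per sampling point (D_k, x_j).
proportions = {key: [counts[key + (i,)] / n for i in range(I)]
               for key, n in totals.items()}
print(proportions)   # e.g. {(1, 0.4): [0.67, 0.33, 0.0], (1, 0.8): [0.0, 0.33, 0.67]} (rounded)
```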

3.3 Treatment of proportions as compositional data

Assuming that \(D_k\) and \(x_j\) are given in an absolute scale, the simplest predictor of the probabilities in the vulnerability model (5) is a linear combination of \(D_k\) and \(x_j\). This predictor can be enriched with other functions of \(D_k\), \(x_j\). Such a predictor would provide real linear predictions; however, \(\mathbf {p}^{(k,j)}\) is a vector located in the I-part simplex, i.e. its components add to one, and its scale can hardly be considered absolute. In fact, the vector of probabilities should be scale invariant: if multiplied by 100 (percentage probabilities) or any other positive constant, the information provided must remain invariant [e.g. (Aitchison 1986; Egozcue 2009; Pawlowsky-Glahn et al. 2015)]. The I components of the vectors of probabilities can be represented in the I-part simplex \({\mathcal {S}}^I\), which contains all vectors of I positive components adding to 1. Indeed, \({\mathcal {S}}^I\) can be structured as a Euclidean space (Pawlowsky-Glahn and Egozcue 2001). Consequently, Cartesian coordinates are available in this space. Isometric logratio transformations (\({\mathrm {ilr}}\)) provide such coordinates (Egozcue et al. 2003). Some of the \({\mathrm {ilr}}\)’s assign interpretable and easy-to-use coordinates, called balances. These are constructed using sequential binary partitions (SBP) of the composition components (Egozcue and Pawlowsky-Glahn 2005). An example of this kind of balance-coordinates is shown in the case study in Sect. 4.6. The \({\mathrm {ilr}}\) transformation represents the composition \({\mathbf {p}}\) by means of a real coordinate vector of \(I-1\) components, \({\mathrm {ilr}} ( {\mathbf {p}})\). This vector of coordinates can be predicted by using linear combinations of \(D_k\) and \(x_j\) with no further problem. The multivariate regression model can be expressed as

$${\mathrm {ilr}} \big ( {\mathbf {p}}^{(k,j)}\big ) = {\varvec{\beta }}_0^* + D_k {\varvec{\beta }}_1^* + x_j {\varvec{\beta }}_2^* + {\varvec{\epsilon }}^*_{k,j}, \quad k\,=\,1,2,\dots ,n, \ j\,=\,1,2,\dots , n_x,$$
(6)

where \({\varvec{\beta}}_0^*\), \({\varvec{\beta}}_1^*\), \({\varvec{\beta}}_2^*\) are real vectors of \(I-1\) components containing the regression coefficients, and \({\varvec{\epsilon} }^*_{k,j}\) are the residuals for the observation (k,j). Once the regression model (6) is fitted, the predicted probabilities can be obtained using the inverse \({\mathrm {ilr}}\)-transformation. These models have been studied in the last decade [e.g. (Egozcue et al. 2012; Tolosana-Delgado and van den Boogaart 2011; Tolosana-Delgado and von Eynatten 2009)], although regression in the simplex was first introduced by Aitchison and Shen (1980).

The regression model in Eq. (6) is proposed herein as one of the simplest models able to capture the main features of vulnerability. This model has the advantage that it can be fitted using least squares techniques. Additionally, modifications of these least squares techniques can be easily adopted.
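As a sketch of how Eq. (6) can be fitted by ordinary least squares, the following code builds one possible \({\mathrm {ilr}}\) basis, transforms illustrative proportions into coordinates, fits the linear model, and back-transforms a prediction; the data and the particular basis are assumptions for illustration only.

```python
import numpy as np

def ilr_basis(I):
    """One orthonormal (Helmert-type) ilr contrast matrix for I-part compositions."""
    V = np.zeros((I - 1, I))
    for k in range(1, I):
        V[k - 1, :k] = 1.0 / k
        V[k - 1, k] = -1.0
        V[k - 1] *= np.sqrt(k / (k + 1.0))
    return V

def ilr(P, V):
    return np.log(P) @ V.T          # rows: observations; columns: I-1 coordinates

def ilr_inv(B, V):
    E = np.exp(B @ V)
    return E / E.sum(axis=1, keepdims=True)

# Illustrative data: proportions p^(k,j) with design level D_k and action x_j.
# These numbers are placeholders, not results from the paper.
D = np.array([1.0, 1.0, 2.0, 2.0])
x = np.array([0.4, 0.8, 0.4, 0.8])
P = np.array([[0.90, 0.08, 0.02],
              [0.40, 0.45, 0.15],
              [0.97, 0.02, 0.01],
              [0.70, 0.25, 0.05]])

V = ilr_basis(P.shape[1])
Y = ilr(P, V)                                  # ilr coordinates of the proportions
X = np.column_stack([np.ones_like(D), D, x])   # design matrix of Eq. (6)
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # least squares fit, one column per coordinate

# Predicted probabilities for a new design/action pair (back-transformed).
print(ilr_inv(np.array([[1.0, 1.5, 0.6]]) @ beta, V))
```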

4 Case study: a nuclear power plant containment building

Nuclear power plant safety is of general concern due to the consequences of nuclear accidents. Consequently, the risk assessment of these facilities, old and new, is of utmost importance for both technical and social reasons (International Atomic Energy Agency 2001). Nuclear power plants are very complex systems and their detailed study requires a decomposition into subsystems (U.S. Nuclear Regulatory Commission 2007). Each subsystem is then modelled by considering the interactions with other parts of the plant. The present study concerns a particular subsystem: the containment building of the nuclear power plant subjected to a possible overpressure event in its interior (U.S. Nuclear Regulatory Commission 2010). The containment is normally regarded as the fourth and final barrier to the release of radioactive materials, after the fuel pellets, the fuel cladding, and the reactor pressure boundary.

The goal of this study is to estimate a vulnerability model of the containment building, whose design D is given, under the external action of an overpressure, denoted x, as in previous sections. A first step is to build up a numerical model that, given an overpressure x, the design D, and its derived parameters, is able to predict the response of the containment building (Sects. 4.1 and 4.2). Section 4.3 then describes the treatment of the parameters of the containment building to set up the complete input of the numerical model. Section 4.4 defines the responses \(\theta _i\) of the containment building for this case study. Section 4.5 contains a detailed explanation of the simulation techniques used. Section 4.6 makes explicit the simplicial regression technique and the particular system of \({\mathrm {ilr}}\) coordinates used. Results are shown in Sect. 4.7.

Significant experimental research on this matter was performed at Sandia National Laboratories (USA) in the early 2000s. This research can be consulted in U.S. Nuclear Regulatory Commission (2006).

4.1 Containment building characteristics

The containment building is designed to confine the radioactive materials in case of an emergency, up to a maximum gauge pressure in the range of 0.4–1.4 MPa (Crusells-Girona 2011; U.S. Nuclear Regulatory Commission 2015). From now on, the pressure will be given as relative (gauge) pressure: the absolute pressure minus the atmospheric one (0.1 MPa). Containment systems for nuclear power reactors are distinguished by size, shape, material, and reactor coolant state. In this analysis, a three-loop pressurized water reactor (PWR) is considered. For this type of reactor, the containment also encloses the steam generators, the pressurizer, and the entire reactor coolant system. Early designs by Siemens, Westinghouse, and Combustion Engineering (Crusells-Girona 2011) have a can-like shape built with reinforced or prestressed concrete. One of the most used designs is a can-like shell covered with a hemispherical dome, whose main advantage is that it reduces joint stresses by continuity of the shell. In this case, the structure considered is a prestressed cylindrical building with a hemispherical dome. Examples of this particular model can be found at the North Anna Nuclear Power Plant (VA, USA), Wolf Creek Nuclear Power Plant (KS, USA), Diablo Canyon Nuclear Power Plant (CA, USA), and Vandellós II Nuclear Power Plant (Spain). The most important dimensions of the building are shown in Table 1 and in Fig. 2.

Table 1 Most important dimensions of the building
Fig. 2

Drawing of the building (Cervera et al. 1995)

4.2 Numerical model

As mentioned in Sect. 3.1, the estimation of a vulnerability model is based on the availability of a deterministic model of the system, once the design D and derived parameters are given and the external action is fixed.

According to the drawings (Fig. 2), it is possible to create a geometrical model (Fig. 3) in finite element software such as Abaqus\(^\circledR\).

Fig. 3

Geometrical finite element model developed by the authors. Top and Bottom views

Additionally, parameters regarding the material properties are required by the model. The most important parameters of the building materials are shown in Table 2.

Table 2 Most important material properties

These properties feed the constitutive model considered. Herein, the concrete damage plasticity (CDP) model is used, taking advantage of the fact that it is already available in Abaqus (Hibbitt and Sorensen 2002). The CDP model is a continuum, plasticity-based damage model for concrete. It defines two failure mechanisms: tensile cracking and compressive crushing of the concrete material. The evolution of the yield surface is controlled by two hardening variables, linked to the failure criteria under tension and compression loading, respectively. The CDP model uses a constitutive equation with a scalar isotropic damage parameter in the following form

$${\varvec{\sigma} }=(1-d){\varvec{D}}^{el}_{0}:({\varvec{\epsilon} }-{\varvec{\epsilon} }^{pl})={\varvec{D}}^{el}:({\varvec{\epsilon} }-{\varvec{\epsilon} }^{pl}); {\varvec{\sigma} }=(1-d){\varvec{\overline{\sigma }}},$$
(7)

where \({\varvec{\sigma} }\) corresponds to the stress tensor, d is the scalar stiffness degradation parameter (or damage parameter), \({\varvec{\epsilon} }\) stands for the strain tensor, \({\varvec{\epsilon} }^{pl}\) is the plastic strain tensor and \({\varvec{D}}^{el}_{0}\) corresponds to the undamaged elastic stiffness.

Moreover, the damage parameter d is a function of the effective stress and the plastic strain tensors,

$$d=d({\varvec{\overline{\sigma }}},{\varvec{\overline{\epsilon} }}^{pl}),$$
(8)

where \({\varvec{\overline{\epsilon} }}^{pl}\) represents the undamaged plastic strain tensor corresponding to the hardening and/or softening behavior.

It is worth mentioning that, when materials exhibit strain-softening behavior, the classical stress–strain formulation of continuum mechanics results in a strong mesh dependency (the dissipated energy depends spuriously on mesh refinement). In order to counteract this undesired effect, the authors make use of the intrinsic capabilities of Abaqus, which are intended to alleviate this problem, without any further implementation. The strategy followed by the code is to introduce a characteristic length into the formulation, which is related to the element size and is treated as an internal parameter, and to express the softening part of the behavior in a stress–displacement manner. Therefore, the energy dissipated per unit area (the area under the curve) can be easily adjusted and is used to determine the displacement at which full damage occurs. It is interesting to note that this is essentially the technique used in fracture mechanics. As a result, this procedure ensures that the correct energy is dissipated, alleviating the mesh dependency and making the model more robust.

Besides the uniaxial tensile and compressive behavior, the inputs required to completely define the CDP model are five parameters (Table 3). These parameters define the shape of the yield surface and the flow rule (in terms of a flow potential G). The yield condition is described in detail in Lubliner et al. (1989). For the flow potential, the model uses the Drucker-Prager hyperbolic function. Taking \(f_c\) and \(f_t\) as the uniaxial compressive and tensile strengths of concrete, \(\beta\) as the dilation angle, and m as the eccentricity of the plastic potential surface, the flow potential G in the \(\overline{p}\)–\(\overline{q}\) plane is

$$G=\sqrt{(f_c-mf_t\tan {\beta })^2+\overline{q}^2}-\overline{p}\tan {\beta }-\sigma .$$
(9)

In this equation, \(\overline{p}\) and \(\overline{q}\) are the hydrostatic and deviatoric components, respectively.

The parameters of the model are taken from Jankowiak and Lodygowski (2013) and are shown in Table 3.

Table 3 Summary of CDP parameters

Finally, the last step is to define the softening and hardening rules. The former is related to the concrete tensile behavior, whereas the latter concerns its compressive behavior.

Jankowiak and Lodygowski (2013) consider the material as massive concrete. Nonetheless, the containment building under study has, besides the prestressing cables, a high density of rebars. Even though the rebar density varies throughout the structure, it has been considered constant and assigned a steel/concrete volume ratio of 0.035. If the rebars are properly oriented in the direction of the expected stress, a 0.035 steel/concrete ratio is to be expected on average in any transverse cut of the building according to the drawings. Taking this into account, the behavior of the reinforced concrete can be approximated as the sum of the contributions of the concrete and the rebars. Figure 4 illustrates the composition used to obtain the tensile constitutive model of the reinforced concrete.

Fig. 4

Tensile constitutive behavior of the reinforced concrete

The compression constitutive model has been obtained by taking into account only the concrete contribution; indeed, the rebars do not add any significant resistance at the pressures considered. Furthermore, it has to be noticed that, in this study, the compressive behavior is not of significant relevance: the external load is an inner pressure, so it produces an expansion of the cylinder and, therefore, failure is governed by the concrete tensile behavior.

Once all the above has been introduced, the appropriate boundary conditions must be applied; they consist of fixing the foundation. Finally, the last input is the external load, which is an inner pressure. In order to illustrate the behavior of the model, Fig. 5 shows the (magnified) deformed shape when the building is subjected to an overpressure of 0.7 MPa: the building becomes shorter and wider.

Fig. 5

Mesh of the finite element model of the building (left). Deformation caused by a 0.7 MPa inner pressure; only elements near the penetrations of the building have reached plasticity (marked in red) (right)

The mesh of the FEM is built using tetrahedral elements. In particular, there are 9292 elements in the foundation, 25,307 elements in the cylinder, and 4392 elements in the dome.

4.3 Random parameters

One of the most common problems in civil engineering is that material properties are usually uncertain: it is fairly difficult to ascertain their values even with an extensive test campaign. In order to capture this uncertainty in the material properties, five variables have been taken as random, and four other variables have been considered to depend directly on the former. Through this direct dependence, it is possible to account for some relations between variables. The material, nonetheless, has been considered uniform in the whole building; as a consequence, the value of each material parameter is the same everywhere within a given computation.

The random parameters and their distributions are described below:

1. Concrete elastic modulus \((E_m)\): LogNormal distribution, \({\mathrm {E}} [E_m]=41,100\,\)MPa, \({\mathrm {Var}}[E_m]=16,892,100\,\)MPa\(^2\).

2. Steel yield strength \((f_y)\): LogNormal distribution, \({\mathrm {E}} [f_y]=430\,\)MPa, \({\mathrm {Var}}[f_y]=576\,\)MPa\(^2\), i.e. logarithmic mean and variance are 6.0638 and 0.05743, respectively.

3. Prestressing force of vertical cables: Logistic-Normal distribution on the interval (0, 6100) kN. The logistic parameters have mean 1.30 and standard deviation 0.43. The median of this distribution is \(6100\frac{\exp (1.30)}{1+\exp (1.30)}=4790\,\)kN.

4. Prestressing force of horizontal cables: Logistic-Normal distribution on the interval (0, 6100) kN. The logistic parameters have mean 2.22 and standard deviation 0.80. The median of this distribution is \(6100\frac{\exp (2.22)}{1+\exp (2.22)}=5500\,\)kN.

5. Prestressing force of dome cables: Logistic-Normal distribution on the interval (0, 6100) kN. The logistic parameters have mean 2.36 and standard deviation 0.71. The median of this distribution is \(6100\frac{\exp (2.36)}{1+\exp (2.36)}=5570\,\)kN.

The parameters of the above-mentioned distributions have been based on a data set from Aguado et al. (1991) and Barbat et al. (1995).

The variables that depend directly on the random ones described above are:

1. Concrete ultimate compressive strength: \(f_c=\frac{E_m-1550}{697}\).

2. Concrete ultimate tensile strength: \(f_t=0.30(f_c-8)^{\frac{2}{3}}\).

3. Steel elastic modulus: \(E_s=\frac{f_y}{0.00214}\).

4. Steel ultimate strength: \(f_u=\frac{5500}{4200}f_y\).

These relations have been taken from Aguado et al. (1991). Other parameters such as the density of the concrete or the Poisson ratio have been considered as fixed values, since their variance can be neglected.
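A sketch of how the random and dependent parameters of this section could be simulated is given below; the moment-matching convention used for the lognormal parameters and the (0, 6100) kN bound of the logistic-normal variables are assumptions consistent with the values stated above, and the five random variables are drawn independently as in Sect. 4.5.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_material_parameters():
    """One draw of the random material parameters of Sect. 4.3 and the
    variables derived from them (all values hypothetical realizations)."""
    def lognormal(mean, var):
        # Standard moment-matching parameterization; the authors' exact
        # convention for the logarithmic parameters is assumed here.
        sigma2 = np.log(1.0 + var / mean**2)
        mu = np.log(mean) - 0.5 * sigma2
        return rng.lognormal(mu, np.sqrt(sigma2))

    def logistic_normal(mu, sigma, upper):
        z = rng.normal(mu, sigma)
        return upper * np.exp(z) / (1.0 + np.exp(z))

    Em = lognormal(41100.0, 16892100.0)                  # concrete elastic modulus [MPa]
    fy = lognormal(430.0, 576.0)                         # steel yield strength [MPa]
    q = {
        "Em": Em,
        "fy": fy,
        "P_vert": logistic_normal(1.30, 0.43, 6100.0),   # vertical cables [kN]
        "P_horiz": logistic_normal(2.22, 0.80, 6100.0),  # horizontal cables [kN]
        "P_dome": logistic_normal(2.36, 0.71, 6100.0),   # dome cables [kN]
        # dependent variables (Aguado et al. 1991)
        "fc": (Em - 1550.0) / 697.0,
        "Es": fy / 0.00214,
        "fu": (5500.0 / 4200.0) * fy,
    }
    q["ft"] = 0.30 * (q["fc"] - 8.0) ** (2.0 / 3.0)
    return q

print(sample_material_parameters())
```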

Following the notation in Sect. 2, the parameters defined in Sects. 4.1, 4.2, and 4.3 completely characterize the design of the system D. Notice that vulnerability is studied for only one particular design of the system; therefore, in the notation of Sect. 3, n is 1, i.e. only the case \(k=1\) is studied.

4.4 Responses of containment building

In the present vulnerability study, the external action x is identified as an overpressure inside the containment building. Once a given overpressure is registered, three possible responses are considered: serviceability, breakdown, and collapse (\(\theta _i\), \(i=1,2,3\), respectively). As defined in Sect. 2, these outputs or responses allow one to compute the risk of a given design. In this particular case, these responses are strongly related to the tensile constitutive behavior obtained above. In fact, in order to determine in which final state the structure ends up, a failure criterion needs to be defined. For simplicity, the largest attained value of the maximum principal tensile strain (\(S_{11}\)) is extracted in each computation. Two thresholds in \(S_{11}\) define the three possible outputs. For \(S_{11}\) less than 0.3 mm/m, serviceability is assumed (\(\theta _1\)). For \(S_{11}\) between 0.3 mm/m and 2 mm/m, the output is considered as breakdown (\(\theta _2\)). Finally, for \(S_{11}\) greater than 2 mm/m, the output is considered as collapse (\(\theta _3\)). The 0.3 mm/m threshold typically indicates that the concrete has already cracked and the resistance is then controlled by the rebars. The second threshold, 2 mm/m, indicates that the rebars have reached plasticity. These thresholds clearly split the three possible final scenarios. Serviceability corresponds to no damage in the building because the concrete has not significantly cracked; as a result, the nuclear power plant can continue its operations without any concern about structural integrity. In the case of breakdown, the building has been slightly damaged because in some parts the concrete has started to crack significantly; the nuclear power plant therefore has to stop its activity and seal the cracks before restarting its operations. Finally, in the case of collapse, the building is massively damaged because the rebars have reached their plastic capacity; the building therefore cannot be repaired.

1. Serviceability (\(\theta _1\)): \(S_{11}\) less than 0.3 mm/m. Normal operation. No damage at all in the building.

2. Breakdown (\(\theta _2\)): \(S_{11}\) between 0.3 mm/m and 2 mm/m. Operations temporarily suspended, maintenance required. Concrete cracked, resistance controlled by the rebars.

3. Collapse (\(\theta _3\)): \(S_{11}\) greater than 2 mm/m. Structural failure. Rebars have reached their plastic capacity.
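The classification of a single FEM computation into one of these three responses by the \(S_{11}\) thresholds above can be sketched as follows.

```python
def classify_response(s11_max):
    """Classify one FEM computation by the largest maximum principal tensile
    strain S11 (in mm/m), using the thresholds defined above."""
    if s11_max < 0.3:
        return "serviceability"   # theta_1
    elif s11_max <= 2.0:
        return "breakdown"        # theta_2
    return "collapse"             # theta_3

print(classify_response(0.1), classify_response(1.2), classify_response(3.5))
```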

4.5 Simulation of random responses

Only one design \(D=D_1\) is studied; however, the subscript is maintained to be consistent with the notation in Sect. 3. Taking all the distributions of the random variables into account, the procedure to simulate the parameters is as follows. First, the overpressure is fixed to a certain value \(x_j\). Second, the five random parameters described in Sect. 4.3 are simulated as independent random variables. Once the random parameters are obtained, the directly dependent parameters, \(f_c\), \(f_t\), \(E_s\), \(f_u\), are computed using the corresponding expressions (Sect. 4.3). Then, for the overpressure value \(x_j\) and for each set of simulated parameters \({\mathbf {q}}_c\), \(c=1,2,\dots ,C\), the strains are computed using the FEM. From the computed strains, the output (serviceability, breakdown, or collapse) is determined in each simulation. To this end, a multinomial random trial \(\Theta\) is defined; its parameters are the probabilities in the vulnerability model \({\mathrm {P}} [\Theta =\theta _i|x_j,D_1]\), \(i=1,2,3\). Any realization of \(\Theta\) is a vector containing a single 1 in one of the three components, the indexes referring to serviceability (\(\theta _1\)), breakdown (\(\theta _2\)), and collapse (\(\theta _3\)), respectively. For instance, if the vector \({\mathbf {q}}_c\), which contains the inputs of the computation, gives an output of collapse, the realization of \(\Theta\) is (0, 0, 1), denoted by \(\theta _3\). Notice that \(\Theta ( {\mathbf {q}}_c|x_j,D)\) denotes a realization of \(\Theta\), as the randomness of \(\Theta\) only derives from \({\mathbf {q}}_c\), obtained in simulation c. Thus, simulation of \({\mathbf {q}}_c\) produces a sample from \(\Theta\).

The goal of simulation is to obtain (simulated) sample values \(\Theta ( {\mathbf {q}}_c|x_j,D)\), \(c=1,2,\dots ,C\), and, then, use Monte Carlo methods to estimate \({{\mathrm {P}}} [\Theta =\theta _i|x_j,D_1]\), \(i=1,2,3\).

These probabilities are the expectation of \(\Theta\) given the overpressure \(x_j\) and the design \(D_1\). They can be computed using the Monte Carlo integration method

$$E[\Theta |x_j,D_1]=\int \Theta ( {\mathbf {q}}|x_j,D_1)f_{\mathbf Q} ( {\mathbf {q}}){\mathrm{d}}{\mathbf q} \; \approx \; \frac{1}{C}\sum _{c=1}^C \Theta ( {\mathbf {q}}_c|x_j,D_1)\,=\, {\mathbf {p}}^{(1,j)}$$

where \({\mathbf {p}}^{(1,j)}=(p_1^{(1,j)},\,p_2^{(1,j)},\,p_3^{(1,j)})\) and \(f_{\mathbf Q} ( {\mathbf {q}})\) denotes the joint probability density function of the parameters \({\mathbf {q}}\) as defined in Sect. 4.3. Accordingly, \(p_1^{(1,j)}\), \(p_2^{(1,j)}\), and \(p_3^{(1,j)}\) are obtained as the mere proportion of times in which the simulation for \(x_j\) results in serviceability, breakdown, and collapse, respectively.

The main problem with this approach is that, for a given overpressure \(x_j\), some responses are very improbable, producing zeros in the corresponding proportions. This is the case, for instance, of \(p_3^{(1,j)}\) with \(x_j=0.4\) MPa: the overpressure is so low that it is practically impossible to obtain a simulation that ends up in collapse. In order to reduce the presence of zero proportions, importance re-sampling (Hammersley and Handscomb 1964) is applied in the simulation.

The procedure is the following: the sampling is done in terms of a distribution, called the sampling distribution, with density function \(g_{\mathbf Q}\), which makes the parameter values of interest likely to appear in the simulation. Then, the importance \(f_ {\mathbf {Q}}/g_ {\mathbf {Q}}\) is the ratio of the model probability density function to the sampling probability density function. This importance ratio is stored in each simulation and the Monte Carlo procedure is modified as follows

$$\begin{aligned} E[\Theta |x_{j} ,D_{1} ] & = \int \Theta ({\mathbf{q}}|x_{j} ,D_{1} )g_{{\mathbf{Q}}} ({\mathbf{q}})\frac{{f_{{\mathbf{Q}}} ({\mathbf{q}})}}{{g_{{\mathbf{Q}}} ({\mathbf{q}})}}d{\mathbf{q}} \\ & \approx \frac{1}{C}\sum\limits_{{c = 1}}^{C} \Theta ({\mathbf{q}}_{c} |x_{j} ,D_{1} )\frac{{f_{{\mathbf{Q}}} ({\mathbf{q}}_{c} )}}{{g_{{\mathbf{Q}}} ({\mathbf{q}}_{c} )}} = {\mathbf{p}}^{{(1,j)}} . \\ \end{aligned}$$

Table 4 illustrates the methodology to obtain the vector \({\mathbf {p}}^{(1,j)}\).

Table 4 Scheme of importance re-sampling for \(C=2000\)

As mentioned previously, \({\mathbf {q}}_c\) includes all the inputs required by the software in the c-th simulation. The output is the final state of the building: serviceability, breakdown, or collapse; and the importance is the ratio of the model probability density function to the sampling probability density function, \(f_ {\mathbf {Q}}( {\mathbf {q}}_c)/g_ {\mathbf {Q}}( {\mathbf {q}}_c)\). After \(C=2000\) simulations, the three proportions in \({\mathbf p} ^{(1,j)}\) are obtained as relative frequencies.

Since the re-sampling has been applied in the simulation of all the combinations of inputs, the number of occurrences of each possible response has to be weighted by its importance before performing the regression. Therefore, the proportions of each component are computed in terms of the sums of importances, as derived from the importance-weighted Monte Carlo estimate above. The lower part of Table 4 illustrates the procedure to obtain these proportions.
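A minimal sketch of this importance-weighted (self-normalized) estimate of \({\mathbf {p}}^{(1,j)}\) is given below; the response indices and importance ratios are made-up values standing in for the outputs summarized in Table 4.

```python
import numpy as np

def weighted_proportions(responses, weights, n_categories=3):
    """Importance-weighted (self-normalized) estimate of p^(1,j): responses are
    category indices 0..n_categories-1 obtained with parameters drawn from the
    sampling density g_Q, and weights are the importances f_Q(q_c)/g_Q(q_c)."""
    responses = np.asarray(responses)
    weights = np.asarray(weights, dtype=float)
    p = np.array([weights[responses == i].sum() for i in range(n_categories)])
    return p / weights.sum()

# Illustrative call with made-up simulation results for one overpressure x_j.
print(weighted_proportions(responses=[0, 0, 1, 2, 1, 0],
                           weights=[1.2, 0.9, 0.4, 0.05, 0.5, 1.1]))
```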

4.6 Simplicial regression

Until now, the probabilities of each possible output (serviceability, breakdown, and collapse) have been obtained for given overpressures \(x_j\). However, the vulnerability model must provide these probabilities for any overpressure. The overpressure is characterized by its value in megapascals (MPa) and, for the sake of simplicity, it is discretized into 13 values from 0.40 to 1.00 MPa, with a step of 0.05 MPa; \(j=1,2,\dots, 13\), respectively. Therefore, the data are the probabilities of the three final states conditioned on each of the 13 values of the internal pressure. The problem can then be seen as a least squares fitting. The data to be fitted are the compositional data points \({\mathbf p} ^{(1,j)}\), \(j=1,2,\dots, 13\), each of them containing the probabilities of the three possible final states conditioned on the corresponding value of the overpressure. The explanatory variable is the overpressure \(x_j\), from 0.4 to 1.0 MPa, and, after the least squares fitting, the fitted vulnerability model provides predictions for any overpressure.

As discussed in Sect. 3, two problems are involved in this approach: the consistency of the model and the relative scale of the probability values. Since the vulnerability model is described in terms of probabilities that must add up to one, even small deviations in the estimation of a probability value can result in an inconsistency of the model. Moreover, probabilities are to be measured relatively, as they do not have an absolute scale on the real line. These difficulties suggest approaching the problem by means of the geometry of the simplex of D parts (Aitchison 1986; Pawlowsky-Glahn and Egozcue 2001; Egozcue et al. 2003), where here D denotes the number of parts, not the design; this geometry allows one to interpolate probability vectors in a consistent way and in an appropriate scale. The elements in the simplex \(S^D\) have D probability components (in this case \(D=3\)); since all the components must add up to one, the dimension is \(D-1\). The simplex \(S^D\) endowed with two operations (\(\oplus\), called perturbation, and \(\odot\), called powering) is proven to be a vector space; perturbation plays the role of an addition and powering is a multiplication by real scalars (Aitchison et al. 2002). Moreover, \(S^D\) is a \(D-1\) dimensional Euclidean space (Pawlowsky-Glahn and Egozcue 2001) once endowed with its own distance, called the Aitchison distance, as it was first introduced by Aitchison (1986). In this particular problem, the data to be fitted are \({\mathbf p} ^{(1,1)},{\mathbf p} ^{(1,2)},\dots, {\mathbf p} ^{(1,13)}\) and, when treating the data as compositional, the linear predictor in the simplex becomes

$${\hat{\mathbf {p}}}(x)= {\varvec{\beta} }_0\oplus (x\odot {\varvec{\beta} }_1),$$
(10)

where \(\oplus\) and \(\odot\) are perturbation and powering in the simplex. The values of \({\varvec{\beta} }_0\) and \({\varvec{\beta} }_1\) are compositional parameters to be fitted in the regression.

This linear fitting can be reduced to \(D-1\) standard linear regression models, which can be solved by using least squares techniques. The procedure consists of expressing the composition \({\hat{\mathbf {p}}}(x)\) in orthonormal coordinates using an \({\mathrm {ilr}}\) transformation (Egozcue et al. 2003). In these coordinates, the operations \(\oplus\) and \(\odot\) reduce to the ordinary \(+\) and \(\cdot\), respectively. Then, \(S^D\) is considered equivalent to \({\mathbb {R}}^{D-1}\), where compositions are represented by orthonormal coordinates. These facts allow transforming the proportions \(p_1^{(1,j)}\), \(p_2^{(1,j)}\), and \(p_3^{(1,j)}\), \(j=1,2,\dots, 13\), into balance-coordinates \(b_1^{(j)},b_2^{(j)},\, j=1,2,\dots, 13\). A regression of these \(b_1^{(j)}\) and \(b_2^{(j)}\) on the overpressure provides a linear vulnerability model in \({\mathbb {R}}^{D-1}\) (Egozcue et al. 2012; Pawlowsky-Glahn et al. 2015).

An easy way to obtain orthonormal coordinates is by producing a sequential binary partition (SBP) (Egozcue and Pawlowsky-Glahn 2005). Table 5 shows the code of such an SBP. In the first step, breakdown and collapse are separated from serviceability as shown by the signs 1 and \(-\)1. The second and last step consists of separating breakdown from collapse.

Table 5 SBP code used to build up \({\mathrm {ilr}}\) balance-coordinates

The balance-coordinates corresponding to the SBP (Table 5) are

$$b_1^{(j)}=\sqrt{\frac{2}{3}}\log {\left( \frac{p_1^{(1,j)}}{\sqrt{p_2^{(1,j)}p_3^{(1,j)}}}\right) },\quad b_2^{(j)}=\sqrt{\frac{1}{2}}\log {\left( \frac{p_2^{(1,j)}}{p_3^{(1,j)}}\right) }, \quad j=1,2,\dots, 13.$$
(11)

Then, the data to be fitted with the regression model are \({\mathbf b ^{(1)}},{\mathbf b ^{(2)}},\dots, {\mathbf b ^{(13)}}\), where \({\mathbf b} ^{(j)}=(b_1^{(j)},b_2^{(j)})\), \(j=1,2,\dots, 13\), is a vector in \({\mathbb {R}}^{D-1}\) (\({\mathbb {R}}^{2}\) in this case). The linear model in coordinates is

$$\hat{{\mathbf{b}}} (x)= {\varvec{\beta} }_0^*+(x\cdot {\varvec{\beta} }_1^*),$$
(12)

where \({\varvec{\beta} }_0^*\) and \({\varvec{\beta} }_1^*\) are the coordinates of the previous \({\varvec{\beta} }_0\) and \({\varvec{\beta} }_1\) in Eq. (10). Therefore, the vector expression in Eq. (12) can be split into two pieces, one for each component,

$$\hat{b}_{1}(x)=\beta _{0,1}^*+x\,\beta _{1,1}^*,\quad \hat{b}_2(x)=\beta _{0,2}^*+x\,\beta _{1,2}^*.$$
(13)

For each linear least squares fitting, the function that has to be minimized is

$$SSE_1=\sum _{j=1}^{13}|\hat{b}_1(x_j)-b_1^{(j)}|^2, \quad SSE_2=\sum _{j=1}^{13}|\hat{b}_2(x_j)-b_2^{(j)}|^2.$$

Finally, all of the above coordinates can be back-transformed into the original space \(S^D\) by using \({\mathrm {ilr}} ^{-1}\), which allows an easy interpretation. As a summary of the steps, the flowchart in Fig. 6 contains a schematic representation of the steps to obtain a vulnerability model.

Fig. 6

Summary flowchart with the steps to obtain a vulnerability model

4.7 Results

As mentioned above, the probabilities of each response for each overpressure event have been obtained via the importance-weighted proportions of their occurrences in the simulation. Table 6 shows the results obtained and the number of simulations carried out.

Table 6 Probabilities of each response for each overpressure obtained from the simulation

Once the data above are computed, it is possible to obtain the balances through the isometric logratio transformation (Eq. 11). However, the balance-coordinates in Eq. (11) cannot be computed when some frequencies are zero; the importance re-sampling has helped in eliminating those zero entries. Moreover, some extreme data, such as the value of \(b_1\) at the overpressure of 0.70 MPa and the value of \(b_2\) at the overpressure of 0.95 MPa, are not used in the linear regression. The reason is that both come from data that are difficult to obtain in the simulation, even applying the importance re-sampling technique, and thus they carry a considerable error because their sample is not large enough. Should both points be used, they would distort the linearity of the regression. Table 7 and Fig. 7 show the results.

Table 7 Balance coordinates used in regression. Refer to Fig. 7 for representation of the data

These \({\mathrm {ilr}}\)-transformed data and their linear regression model (Eq. 13) can be plotted in a graph to see how linear the data are (Fig. 7). The \(r^2\) coefficients for \(b_1\) and \(b_2\) are 0.99 and 0.97, respectively. Finally, by back-transforming these logratio values using \({\mathrm {ilr}} ^{-1}\), it is possible to obtain the probabilities of each possible output as a function of the overpressure.

Fig. 7

Linear regression of the \({\mathrm {ilr}}\) transformed data. White points are discarded and not used in the regression (Table 7)

Fig. 8

Final result: regression showing the probability of each possible state after an overpressure event

The failure criterion is normally established as the pressure that causes collapse in 5 % of the cases. It is important to note that this criterion does not mean that the containment building has a 5 % probability of collapse; rather, it identifies the pressure for which the structure would collapse in 5 % of the cases. Accordingly, the inner failure pressure is 0.7335 MPa. Similar nuclear power plant containment buildings, for instance Vandellós II (Spain), have been analysed in detail in stress tests (Consejo de Seguridad Nuclear 2011), and the failure pressure has been determined to be 0.8667 MPa. From the results of the regression computed here (Fig. 8), this pressure would have a probability of serviceability of 0.0012, of breakdown of 0.5874, and of collapse of 0.4114. Moreover, the design pressure of the Vandellós II containment building is 0.3796 MPa (Consejo de Seguridad Nuclear 2011). Dividing the failure pressure obtained herein (0.7335 MPa) by the design pressure, one obtains a safety factor of 1.93. This is the most important result of the analysis. It is interesting to point out that the large-break loss of coolant accident (LOCA) has traditionally been considered in nuclear engineering as the worst-case scenario in which the containment building can be involved (Crusells-Girona 2011). The effect of such a severe accident is typically taken as the design basis accident (DBA). A large-break LOCA can produce an inner pressure of 0.4 MPa. From the results of the regression (Fig. 8), this pressure would have a probability of serviceability of 0.9972, of breakdown of 0.0028, and a negligible \(2.7047\,\times \,10^{-7}\) of collapse, confirming the proper design of the building.
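As an illustration of how the 5 % collapse criterion can be evaluated from a fitted model, the following sketch back-transforms hypothetical balance regressions \(\hat{b}_1(x)\), \(\hat{b}_2(x)\) through \({\mathrm {ilr}}^{-1}\) and locates the corresponding pressure by bisection; the coefficients are placeholders, not the fitted values behind Fig. 8.

```python
import numpy as np

# ilr contrast matrix for the SBP of Table 5:
# balance 1: serviceability vs {breakdown, collapse}; balance 2: breakdown vs collapse.
V = np.array([[np.sqrt(2/3), -1/np.sqrt(6), -1/np.sqrt(6)],
              [0.0,           1/np.sqrt(2), -1/np.sqrt(2)]])

def ilr_inverse(b):
    """Balance coordinates (b1, b2) -> probabilities (p1, p2, p3)."""
    clr = V.T @ np.asarray(b)
    p = np.exp(clr)
    return p / p.sum()

# Hypothetical straight lines b1(x), b2(x); the actual coefficients are those
# fitted in Eq. (13) and are not reproduced here.
coef = {"b1": (9.0, -11.0), "b2": (12.0, -14.0)}   # (intercept, slope) placeholders

def probabilities(x):
    b = [coef["b1"][0] + coef["b1"][1] * x,
         coef["b2"][0] + coef["b2"][1] * x]
    return ilr_inverse(b)

# Bisection for the overpressure (MPa) at which the collapse probability reaches 5 %.
lo, hi = 0.4, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if probabilities(mid)[2] < 0.05:   # index 2 = collapse
        lo = mid
    else:
        hi = mid
print("5 % collapse pressure (MPa):", round(0.5 * (lo + hi), 4))
```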

5 Conclusions

The ability to create a meaningful vulnerability model is a fundamental piece of environmental risk assessment. Vulnerability is described as a set of probabilities of random system responses under a given external action, whose frequency is usually characterized as a hazard. By defining the risk associated with a given safety design as the expectation of the cost conditional on that design, the authors draw the following conclusions.

For a given random state of the system and a given safety design, the cost is well-defined and is taken as deterministic. First, one can derive an explicit form that relates the risk associated with a safety design to vulnerability, cost, and hazard. Then, one can focus on the vulnerability part, which consists of predicting, from the sampling points, the probabilities of the possible responses of the system conditioned on a given external action and a given design. One then concludes that the inference of these probabilities can be carried out from the available data by a regression in the simplex, where the raw explanatory variables are the design level and the value of the nominal external actions. It is interesting to note that the framework described herein is not applicable only to environmental risk assessment; it can also be applied to risks of any other nature, or to a combination of them.

In Sect. 4, the authors turn their attention to the application of this vulnerability framework. In particular, a finite element model (FEM) is used to draw samples for a given design of a nuclear power plant containment building. The authors then conclude that the combination of finite elements, importance resampling, and regression in the simplex provides the design process with robust vulnerability models for environmental risk assessment of civil works. Table 6 shows the results obtained and the number of simulations computed. The vulnerability model is obtained (Fig. 8) for the geometrical and constitutive design described in Sect. 4. Remarkably, the design used in this example is analogous to the Vandellós II containment building in Tarragona, Spain. In conclusion, by applying standard measures in nuclear engineering, an inner failure pressure of 0.7335 MPa and a safety factor of 1.93 are obtained. This result is in accordance with the results obtained for a similar plant by the Monte Carlo approach of Cervera et al. (1995): a failure pressure of 1.0301 MPa and a safety factor of 2.78. The main contribution is a methodological description of risk assessment together with a robust framework to build vulnerability models. In particular, other studies (Cervera et al. 1995), as well as regulatory procedures (U.S. Nuclear Regulatory Commission 1984), only characterize a binary state (collapse or service); here, instead, three states are studied in a way that can be easily generalized.