Definition

Seismic hazard. Any physical phenomena associated with an earthquake (e.g., ground motion, ground failure, liquefaction, and tsunami) and their effects on land, man-made structures, and socioeconomic systems that have the potential to produce a loss. The term is also used, without regard to a loss, to indicate the probable level of ground shaking occurring at a given point within a certain period of time.

Seismic hazard analysis. Quantification of the ground motion expected at a particular site.

Deterministic seismic hazard analysis. Quantification of the ground motion expected at a site from a single earthquake scenario or from a relatively small number of individual earthquake scenarios.

Probabilistic seismic hazard analysis. Quantification of the probability that a specified level of ground motion will be exceeded at least once at a site or in a region during a specified exposure time.

Ground motion prediction equation. A mathematical equation that estimates the level of a ground motion parameter as a function of earthquake magnitude and source-to-site distance, describing, in particular, the decline of the ground motion parameter as the distance from the earthquake increases.

Introduction

The estimation of the expected ground motion which can occur at a particular site is vital to the design of important structures such as nuclear power plants, bridges, and dams. The process of evaluating the design parameters of earthquake ground motion is called seismic hazard assessment or seismic hazard analysis. Seismologists and earthquake engineers distinguish between seismic hazard and seismic risk assessments, even though in everyday usage these two phrases are often used interchangeably. Seismic hazard is used to characterize the severity of ground motion at a site regardless of the consequences, while risk refers exclusively to the consequences for human life and property resulting from that hazard. Thus, even a strong earthquake can have little risk potential if it is far from human development and infrastructure, while a small seismic event in an unfortunate location may cause extensive damage and losses.

Seismic hazard analysis can be performed deterministically, when a particular earthquake scenario is considered, or probabilistically, when likelihood or frequency of specified earthquake size and location are evaluated.

The process of deterministic seismic hazard analysis (DSHA) involves the initial assessment of the maximum possible earthquake magnitude for each of the various seismic sources such as active faults or seismic source zones (SSHAC, 1997). An area of up to 450 km radius around the site of interest can be investigated. Assuming that each of these earthquakes will occur at the minimum possible distance from the site, the ground motion is calculated using appropriate attenuation equations. Unfortunately, this straightforward and intuitive procedure is overshadowed by the complexity and uncertainty in selecting the appropriate earthquake scenario, creating the need for an alternative, probabilistic methodology, which is free from discrete selection of scenario earthquakes. Probabilistic seismic hazard analysis (PSHA) quantifies as a probability whatever hazard may result from all earthquakes of all possible magnitudes and at all significant distances from the site of interest. It does this by taking into account their frequency of occurrence. Deterministic earthquake scenarios, therefore, are a special case of the probabilistic approach. Depending on the scope of the project, DSHA and PSHA can complement one another to provide additional insights to the seismic hazard (McGuire, 2004). This study will concentrate on a discussion of PSHA.
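
As a minimal numerical illustration of the deterministic approach, the sketch below evaluates each scenario, defined by a source's maximum magnitude placed at its minimum distance from the site, with a hypothetical ground motion prediction equation; the coefficients, the scenario list, and the standard deviation are placeholders, not published values.

```python
import numpy as np

def gmpe_ln_pga(m, r_km):
    """Hypothetical ground motion prediction equation (placeholder coefficients,
    not a published model): returns the mean ln(PGA, in g) and its standard
    deviation for magnitude m and source-to-site distance r_km."""
    return -3.5 + 0.9 * m - 1.2 * np.log(r_km + 10.0), 0.6

# Each scenario: the maximum magnitude of a source placed at its minimum distance to the site.
scenarios = [
    {"name": "Fault A", "m_max": 7.2, "r_min_km": 15.0},
    {"name": "Zone B", "m_max": 6.0, "r_min_km": 40.0},
]

# DSHA: evaluate the ground motion for every scenario; the design value is often
# taken as the largest median (or median-plus-one-sigma) estimate among them.
for s in scenarios:
    mean_ln, sigma = gmpe_ln_pga(s["m_max"], s["r_min_km"])
    print(f'{s["name"]}: median PGA = {np.exp(mean_ln):.3f} g, '
          f'84th percentile = {np.exp(mean_ln + sigma):.3f} g')
```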

In principle, any natural hazard caused by seismic activity can be described and quantified by the formalism of the PSHA. Since the damages caused by ground shaking very often result in the largest economic losses, our presentation of the basic concepts of PSHA is illustrated by the quantification of the likelihood of ground shaking generated by earthquakes. Modification of the presented formalism to quantify any other natural hazard is straightforward.

The classic PSHA procedure includes four steps (Reiter, 1990; Figure 1).

  1. The first step consists of the identification and parameterization of the seismic sources (known also as source zones, earthquake sources, or seismic zones) that may affect the site of interest. These may be represented as area, fault, or point sources. Area sources are often used when one cannot identify a specific fault. In classic PSHA, a uniform distribution of seismicity is assigned to each earthquake source, implying that earthquakes are equally likely to occur at any point within the source zone. The combination of earthquake occurrence distributions with the source geometry results in space, time, and magnitude distributions of earthquake occurrences. Seismic source models can be interpreted as a list of potential scenarios, each with an associated magnitude, location, and seismic activity rate (Field, 1995).

  2. The next step consists of the specification of the temporal and magnitude distributions of seismicity for each source. The classic Cornell–McGuire approach assumes that earthquake occurrence in time is random and follows the Poisson process. This implies that earthquake occurrences in time are statistically independent and that they occur at a constant rate. Statistical independence means that the occurrence of future earthquakes does not depend on the occurrence of past earthquakes. The most often used model of earthquake magnitude recurrence is the frequency-magnitude Gutenberg–Richter relationship (Gutenberg and Richter, 1944).

    $$ \log (n) = a - bm,$$
    (1)

    where n is the number of earthquakes with a magnitude of m and a and b are parameters. It is assumed that earthquake magnitude, m, belongs to the domain \( \langle {m_{\min }},{m_{\max }}\rangle \), where \( {m_{\min }}\) is the level of completeness of the earthquake catalogue and \( {m_{\max }}\) is the upper limit of earthquake magnitude for a given seismic source. The parameter a is a measure of the level of seismicity, while b describes the ratio between the number of small and large events. The Gutenberg–Richter relationship may be interpreted either as a cumulative relationship, if n is the number of events with magnitude equal to or larger than m, or as a density law, stating that n is the number of earthquakes in a specific, small magnitude interval around m. Under the above assumptions, the seismicity of each seismic source is described by four parameters: the (annual) rate of seismicity \( \lambda\), which is equal to the parameter of the Poisson distribution, the lower and upper limits of earthquake magnitude, \( {m_{\min }}\) and \( {m_{\max }}\), and the b-value of the Gutenberg–Richter relationship.

  3. The third step is the calculation of ground motion prediction equations and their uncertainty. Ground motion prediction equations are used to predict the ground motion at the site itself. The parameters of interest include peak ground acceleration, peak ground velocity, peak ground displacement, spectral acceleration, intensity, strong ground motion duration, etc. Most ground motion prediction equations available today are empirical and depend on the earthquake magnitude, source-to-site distance, type of faulting, and local site conditions. The choice of an appropriate ground motion prediction equation is crucial since, very often, it is a major contributor to uncertainty in the estimated PSHA.

  4. The final step is the integration of uncertainties in earthquake location, earthquake magnitude, and the ground motion prediction equation into the probability that the ground motion parameter of interest will be exceeded at the specified site during the specified time interval. The ultimate result of a PSHA is a seismic hazard curve: the annual probability of exceeding a specified ground motion parameter at least once. An alternative definition of the hazard curve is the frequency of exceedance versus ground motion amplitude (McGuire, 2004).

Seismic Hazard, Figure 1: Four steps of a PSHA (modified from Reiter, 1990).

The following section provides the mathematical framework of the classic PSHA procedure, including its deaggregation. The most common modifications of the procedure will be discussed in the section (Some modifications of Cornell–McGuire PSHA procedure and alternative models).

The Cornell–McGuire PSHA methodology

Conceptually, the computation of a seismic hazard curve is fairly simple. Let us assume that seismic hazard is characterized by ground motion parameter Y. The probability of exceeding a specified value y, \( P[Y \geq y]\), is calculated for an earthquake of particular magnitude located at a possible source, and then multiplied by the probability that that particular earthquake will occur. The computations are repeated and summed for the whole range of possible magnitudes and earthquake locations. The resulting probability \( P[Y \geq y]\) is calculated by utilizing the Total Probability Theorem which is:

$$ P[Y \geq y] = \sum\nolimits_i {P[Y \geq y|{E_i}] \cdot P[{E_i}]},$$
(2)

where

$$ \eqalign{ P[Y \geq y|{E_i}] &= \int { \cdot \cdot \cdot\int {P[Y \geq y|{x_1},{x_2},{x_3}...]} \cdot } \cr &\quad{f_i}({x_1})\cdot {f_i}({x_2}|{x_1}) \cdot{f_i}({x_3}|{x_1},{x_2})\,\,...\,\,d{x_3}\,d{x_2}\,d{x_1}.}$$
(3)

\( P[Y \geq y|{E_i}]\) denotes the probability of ground motion parameter \( Y \geq y,\) at the site of interest, when an earthquake occurs within the seismic source i. Variables \( {x_i}\,(i = 1,\,2,\,...\,)\) are uncertainty parameters that influence Y. In the classic approach, as developed by Cornell (1968), and later extended to accommodate ground motion uncertainty (Cornell, 1971), the parameters of ground motion are earthquake magnitude, M, and earthquake distance, R. Functions \( f( \cdot )\) are probability density functions (PDF) of parameters \( {x_i}.\) Assuming that indeed \( {x_1} \equiv M\)and \( {x_2} \equiv R\), the probability of exceedance (Equation 3) takes the form:

$$ \eqalign{P[Y \geq y|E] = \int\limits_{{m_{\min }}}^{{m_{\max }}} {} \int\limits_{R|M} {P[Y \geq y|m,r]}\cr{f_M}(m){f_{R|M}}(r|m)\,dr\,dm,}$$
(4)

where \( P[Y \geq y|m,r]\) denotes the conditional probability that the chosen ground motion level y is exceeded for a given magnitude and distance; \( {f_M}(m)\) is the PDF of earthquake magnitude, and \( {f_{R|M}}(r|m)\) is the conditional PDF of the distance from the earthquake for a given magnitude. The \( {f_{R|M}}(r|m)\) arises in specific instances, such as those where a seismic source is represented by a fault rupture. Since the earthquake magnitude depends on the length of fault rupture, the distance to the rupture and resulting magnitude are correlated.

If, in the vicinity of the site of interest, one can distinguish n S seismic sources, each with average annual rate of earthquake occurrence \( {\lambda_i}\), then the total average annual rate of events with a site ground motion level y or more, takes the form:

$$ \eqalign{\lambda (y) = \sum\limits_{i = 1}^{{n_S}} {{\lambda_i}\int\limits_{{m_{\min }}}^{{m_{\max }}} {} \int\limits_{R|M} {P[Y \geq y|m,r]}}\cr{f_M}(m){f_{R|M}}(r|m)\,dr\,dm.}$$
(5)

In Equation 5, the subscripts denoting the seismic source number are omitted for simplicity; \( P[Y \geq y|m,r]\) denotes the conditional probability that the chosen ground motion level y is exceeded for a given magnitude m and distance r. The standard choice for the probability \( P[Y \geq y|m,r]\) is the normal complementary cumulative distribution function (CDF), which is based on the assumption that the ground motion parameter y is a log-normal random variable, \( \ln (y) = g(m,r) + \epsilon\), where \( \epsilon\) is a random error. The mean value of \( \ln (y)\) and its standard deviation are known and are denoted \( \overline {\ln (y)}\) and \( {\sigma_{\ln (y)}}\), respectively. The function \( {f_M}(m)\) denotes the PDF of earthquake magnitude. In most engineering applications of PSHA, it is assumed that earthquake magnitudes follow the Gutenberg–Richter relation (1), which implies that \( {f_M}(m)\) is a negative exponential distribution, shifted from zero to \( {m_{\min }}\) and truncated from the top by \( {m_{\max }}\):

$$ {f_M}(m) = \frac{{\beta \exp [ - \beta (m - {m_{\min }})]}}{{1 - \exp [ - \beta ({m_{\max }} - {m_{\min }})]}}.$$
(6)

In Equation 6, β = b ln10, where b is the parameter of the frequency-magnitude Gutenberg–Richter relation (1).
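
The two quantities that appear repeatedly in Equation 5, the truncated exponential magnitude PDF of Equation 6 and the log-normal exceedance probability \( P[Y \geq y|m,r]\), can be sketched in a few lines; the GMPE median and standard deviation used in the example below are assumed values, not a published model.

```python
import numpy as np
from scipy.stats import norm

def magnitude_pdf(m, m_min, m_max, b_value):
    """Doubly truncated exponential magnitude PDF of Equation 6, with beta = b ln 10."""
    beta = b_value * np.log(10.0)
    pdf = beta * np.exp(-beta * (m - m_min)) / (1.0 - np.exp(-beta * (m_max - m_min)))
    return np.where((m >= m_min) & (m <= m_max), pdf, 0.0)

def prob_exceedance(y, mean_ln_y, sigma_ln_y):
    """P[Y >= y | m, r] under the log-normal assumption: the normal complementary
    CDF evaluated at the standardized residual of ln(y)."""
    return norm.sf((np.log(y) - mean_ln_y) / sigma_ln_y)

# Density at m = 5.5 for m_min = 4.0, m_max = 7.5, b = 1.0, and the probability of
# exceeding 0.2 g when an assumed GMPE predicts a median of 0.1 g with sigma = 0.6.
print(magnitude_pdf(5.5, 4.0, 7.5, 1.0))
print(prob_exceedance(0.2, np.log(0.1), 0.6))
```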

After assuming that in every seismic source, earthquake occurrences in time follow a Poissonian distribution, the probability that y, a specified level of ground motion at a given site, will be exceeded at least once within any time interval t is

$$ \,P[Y > y;\,\,t] = 1 - \exp [ - \lambda (y) \cdot t].$$
(7)

Equation 7 is fundamental to PSHA. For t = 1 year, its plot versus the ground motion parameter y is the hazard curve – the ultimate product of the PSHA (Figure 2). For small probabilities,

$$ \begin{array}{ll}P[Y > y;t = 1] &= 1 - \exp ( - \lambda )\cr & \cong 1 - (1 - \lambda + \frac{1}{2}{\lambda^2} -...) \cong \lambda,\end{array}$$
(8)

which means that the probability (Equation 7) is approximately equal to \( \,\lambda (y)\). This proves that PSHA can be characterized interchangeably by the annual probability (Equation 7) or by the rate of seismicity (Equation 5).
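
As a quick numerical check of this approximation, consider an assumed rate of \( \lambda (y) = 0.01\) per year:

$$ P[Y > y;\,t = 1] = 1 - \exp ( - 0.01) \approx 0.00995 \approx \lambda (y),$$

whereas for \( \lambda (y) = 0.5\) the exact value, \( 1 - \exp ( - 0.5) \approx 0.39\), differs noticeably from the rate itself, so the interchange is appropriate only when \( \lambda (y)\,t\) is small.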

Seismic Hazard, Figure 2: Example of a Peak Ground Acceleration (PGA) seismic hazard curve and its confidence intervals.

In the classic Cornell–McGuire procedure for PSHA, it is assumed that the earthquakes in the catalogue are independent events. The presence of clusters of seismicity, of multiple events occurring in a short period of time, or of foreshocks and aftershocks violates this assumption. Therefore, before computation of PSHA, these dependent events must be removed from the catalogue.

Estimation of seismic source parameters

Following the classic Cornell–McGuire PSHA procedure, each seismic source is characterized by four parameters:

  • Level of completeness of the seismic data, m min

  • Annual rate of seismic activity \( \lambda\), corresponding to magnitude m min

  • b-value of the frequency-magnitude Gutenberg–Richter relation (1)

  • Upper limit of earthquake magnitude m max

Estimation of m min. The level of completeness of the seismic event catalogue, m min, can be estimated in at least two different ways.

The first approach is based on information provided by the seismic event catalogue itself, where m min is defined as the point of deviation from an empirical or assumed earthquake magnitude distribution model. Despite the fact that the evaluation of m min based on information provided entirely by the seismic event catalogue is widely used, it has several weak points. By definition, the estimated levels of m min represent only average values over space and time. In addition, most procedures in this category require assumptions on a model of earthquake occurrence, such as a Poissonian distribution in time and the frequency-magnitude Gutenberg–Richter relation.

The second approach used for the estimation of m min utilizes information on the detection capabilities of the seismic stations. This approach releases users from the assumptions of stationarity and statistical independence of event occurrence. The choice of the most appropriate procedure for m min estimation depends on several factors, such as knowledge of the history of the development of the seismic network, data collection, and processing.

Estimation of the rate of seismic activity λ and the b-value of the Gutenberg–Richter relation. The accepted approach to estimating the seismic source recurrence parameters λ and b is the maximum likelihood procedure. If successive earthquakes are independent in time, the number of earthquakes with magnitude equal to or exceeding the level of completeness, m min, follows the Poisson distribution with a parameter equal to the annual rate of seismic activity λ. The maximum likelihood estimator of λ is then equal to n/t, where n is the number of events that occurred within the time interval t.

For given m max, the maximum likelihood estimator of the b-value of the Gutenberg–Richter equation can be obtained from the recursive solution of the following:

$$ \begin{array}{ll} 1/\beta & = \bar{m} - {m_{\min }} \\ & \quad + \frac{{({m_{\max }} - {m_{\min }}) \cdot \exp [ - \beta ({m_{\max }} - {m_{\min }})]}}{{1 - \exp [ - \beta ({m_{\max }} - {m_{\min }})]}}. \end{array}$$
(9)

where β = b ln10, and \( \bar{m}\) is the sample mean of earthquake magnitude. If the range of earthquake magnitudes \( \langle {m_{\min }},{m_{\max }}\rangle \) exceeds 2 magnitude units, the solution of Equation 9 can be approximated by the well-known Aki-Utsu estimator (Aki, 1965; Utsu, 1965)

$$ \beta = 1\,\,/\,(\bar{m} - {m_{\min }}).$$
(10)

In most real cases, estimation of the parameters λ and the b-value by the above simple formulas cannot be performed due to the incompleteness of seismic event catalogues. Alternative procedures, capable of utilizing incomplete data, have been developed by Weichert (1980) and Kijko and Sellevoll (1992).
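
For the complete part of a catalogue, the maximum likelihood estimates described above reduce to a few lines of code. The sketch below implements the Aki-Utsu estimator (Equation 10) and a simple fixed-point iteration for Equation 9; the catalogue, completeness level, and magnitude limits are made-up values used only for illustration.

```python
import numpy as np

def aki_utsu_beta(magnitudes, m_min):
    """Aki-Utsu estimator of Equation 10: beta = 1 / (mean(m) - m_min)."""
    return 1.0 / (np.mean(magnitudes) - m_min)

def beta_truncated_gr(magnitudes, m_min, m_max, tol=1e-6, max_iter=200):
    """Solve Equation 9 for beta by fixed-point iteration, starting from the
    Aki-Utsu value; assumes a doubly truncated Gutenberg-Richter distribution."""
    m_bar = np.mean(magnitudes)
    dm = m_max - m_min
    beta = aki_utsu_beta(magnitudes, m_min)
    for _ in range(max_iter):
        rhs = m_bar - m_min + dm * np.exp(-beta * dm) / (1.0 - np.exp(-beta * dm))
        beta_new = 1.0 / rhs
        if abs(beta_new - beta) < tol:
            return beta_new
        beta = beta_new
    return beta

# Made-up complete catalogue: magnitudes above m_min observed during t_years.
mags = np.array([4.1, 4.3, 4.0, 5.2, 4.7, 4.4, 6.1, 4.2, 4.9, 4.5])
m_min, m_max, t_years = 4.0, 7.0, 25.0

lam = len(mags) / t_years                      # maximum likelihood rate, lambda = n / t
b_value = beta_truncated_gr(mags, m_min, m_max) / np.log(10.0)
print(f"lambda = {lam:.2f} events/yr (m >= {m_min}), b = {b_value:.2f}")
```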

Estimation of m max. The maximum magnitude, m max, is defined as the upper limit of magnitude for a given seismic source.

This terminology assumes a sharp cutoff of magnitude at a maximum magnitude m max. Cognizance should be taken of the fact that an alternative, “soft” cutoff maximum earthquake magnitude is also in use (Main and Burton, 1984). The latter formalism is based on the assumption that the seismic moments of seismic events follow the Gamma distribution. One of the distribution parameters is called the maximum seismic moment, and the corresponding value of earthquake magnitude is called the “soft” maximum magnitude. Beyond the value of this maximum magnitude, the distribution decays much faster than the classical Gutenberg–Richter relation. Although this model has occasionally been used, the classic PSHA considers only models having a sharp cutoff of earthquake magnitude.

As a rule, m max plays an important role in PSHA, especially in assessment of long return periods. At present, there is no generally accepted method for estimating m max. It is estimated by the combination of several factors, which are based on two kinds of information: seismicity of the area, and geological, geophysical, and structural information of the seismic source. The utilization of the seismological information focuses on the maximum observed earthquake magnitude within a seismic source and statistical analysis of the available seismic event catalogue. The geological information is used to identify distinctive tectonic features, which control the value of m max.

The current evaluations of m max are divided between deterministic and probabilistic procedures, based on the nature of the tools applied.

Deterministic procedures. The deterministic procedure most often applied is based on the empirical relationships between magnitude and various tectonic and fault parameters, such as fault length or rupture dimension. The relationships are different for different seismic areas and different types of faults (Wells and Coppersmith, 1994 and references therein). Despite the fact that such empirical relationships are extensively used in PSHA (especially for the assessment of maximum possible magnitude generated by the fault-type seismic sources), the weak point of the approach is its requirement to specify the highly uncertain length of the future rupture. An alternative approach to the determination of earthquake recurrence on singular faults with a segment specific slip rate is provided by the so-called cascade model, where segment rupture is defined by the individual cascade-characteristic rupture dimension (Cramer et al., 2000).

Another deterministic procedure which has a strong, intuitive appeal is based on records of the largest historic or paleo-earthquakes (McCalpin, 1996). This approach is especially applicable in the areas of low seismicity, where large events have long return periods. In the absence of any additional tectono-geological indications, it is assumed that the maximum possible earthquake magnitude is equal to the largest magnitude observed, \( m_{\max }^{obs}\), or the largest observed plus an increment. Typically, the increment varies from ¼ to 1 magnitude unit. The procedure is often used for the areas with several, small seismic sources, each having its own \( m_{\max }^{obs}\) (Wheeler, 2009).

Another commonly used deterministic procedure for m max evaluation, especially for area-type seismic sources, is based on the extrapolation of the frequency-magnitude Gutenberg–Richter relation. The best known extrapolation procedures are probably those by Frohlich (1998) and the “probabilistic” extrapolation procedure applied by Nuttli (1981), in which the frequency-magnitude curve is truncated at the specified value of annual probability of exceedance (e.g., 0.001).

An alternative procedure for the estimation of m max was developed by Jin and Aki (1988), where a remarkably linear relationship was established between the logarithm of coda \( Q_0\) and the largest observed magnitude for earthquakes in China. The authors postulate that if the largest magnitude observed during the last 400 years is the maximum possible magnitude m max, the established relation will give a spatial mapping of m max.

Ward (1997) developed a procedure for the estimation of m max by simulation of the earthquake rupture process. Ward’s computer simulations are impressive; nevertheless, one must realize that all the quantitative assessments are based on the particular rupture model, postulated parameters of the strength and assumed configuration of the faults.

The value of m max can also be estimated from the tectono-geological features like strain rate or the rate of seismic-moment release (WGCEP, 1995). Similar approaches have also been applied in evaluating the maximum possible magnitude of seismic events induced by mining (e.g., McGarr, 1984). However, in most cases, the uncertainty of m max as determined by any deterministic procedure is large, often reaching a value of the order of one unit on the Richter scale.

Probabilistic procedures. The first probabilistic procedure for maximum regional earthquake magnitude was developed in the late 1960s, and is based on the formalism of the extreme values of random variables. A major breakthrough in the seismological applications of extreme-value statistics was made by Epstein and Lomnitz (1966), who proved that the Gumbel I distribution of extremes can be derived directly from the assumptions that seismic events are generated by a Poisson process and that they follow the frequency-magnitude Gutenberg–Richter relation. Statistical tools required for the estimation of the end-point of distribution functions (as, e.g., Cooke, 1979) have only recently been used in the estimation of maximum earthquake magnitude (Pisarenko et al., 1996; Kijko, 2004 and references therein).

The statistical tools available for the estimation of m max vary significantly. The selection of the most suitable procedure depends on the assumptions of the statistical distribution model and/or the information available on past seismicity. Some of the procedures can be applied in the extreme cases when no information about the nature of the earthquake magnitude distribution is available. Some of the procedures can also be used when the earthquake catalogue is incomplete, i.e., when only a limited number of the largest magnitudes are known. Two estimators are presented here. Broadly speaking, the first estimator is straightforward and simple in application, while the second one requires more computational effort but provides more accurate results. It is assumed that both the analytical form and the parameters of the distribution functions of earthquake magnitude are known. This knowledge can be very approximate, but must be available.

Based on the distribution of the largest among n observations and on the condition that the largest observed magnitude \( m_{\max }^{obs}\) is equal to the largest magnitude to be expected, the “simple” estimate of m max is of the form

$$ {\hat{m}_{\max }} = m_{\max }^{obs} + \frac{1}{{n\,{f_M}(m_{\max }^{obs})}},$$
(11)

where \( {f_M}(m_{\max }^{obs})\) is the PDF of the earthquake magnitude distribution, evaluated at \( m_{\max }^{obs}\). If applied to the Gutenberg–Richter recurrence relation with the PDF of Equation 6, it takes the simple form

$$ {\hat{m}_{\max }} = m_{\max }^{obs} + \frac{{1 - \exp [ - \beta (m_{\max }^{obs} - {m_{\min }})]}}{{n\beta \,\exp [ - \beta (m_{\max }^{obs} - {m_{\min }})]}}.$$
(12)

The approximate variance of the estimator (Equation 12) is of the form

$$ \eqalign{VAR({\hat{m}_{\max }}) &= \sigma_M^2 + \frac{1}{{{n^2}}}\cr & {\left[ {\frac{{1 - \exp [ - \beta (m_{\max }^{obs} - {m_{\min }})]}}{{\beta \,\exp [ - \beta (m_{\max }^{obs} - {m_{\min }})]}}} \right]^2},}$$
(13)

where \( \sigma_M\) stands for epistemic uncertainty and denotes the standard error in the determination of the largest observed magnitude \( m_{\max }^{obs}\). The second part of the variance represents the aleatory uncertainty of m max.

The second (“advanced”) procedure often used for assessment of m max is based on the formalism derived by Cooke (1979)

$$ {\hat{m}_{\max }} = m_{\max }^{obs} + \int\limits_{{m_{\min }}}^{m_{\max }^{obs}} {\left[ {{F_M}(m)} \right]{^n}{\text{d}}m},$$
(14)

where \( {F_M}(m)\) denotes the CDF of the random variable m. If applied to the frequency-magnitude Gutenberg–Richter relation (1), the respective CDF is

$$ {F_M}(m) = \left\{\begin{matrix} 0, & {\text{for}}\ m < {m_{\min }}, \\ \frac{{1 - \exp [ - \beta (m - {m_{\min }})]}}{{1 - \exp [ - \beta ({m_{\max }} - {m_{\min }})]}}, & {\text{for}}\ {m_{\min }} \leq m \leq {m_{\max }}, \\ 1, & {\text{for}}\ m > {m_{\max }}, \end{matrix}\right.$$
(15)

and the m max estimator (Equation 14) takes the form

$$ \eqalign{{\hat{m}_{\max }} = m_{\max }^{obs} + \frac{{{E_1}({n_2}) - {E_1}({n_1})}}{{\beta \exp ( - {n_2})}} + {m_{\min }}\exp ( - n),}$$
(16)

where \( {n_1} = n/\{ 1 - \exp [ - \beta (m_{\max }^{obs} - {m_{\min }})]\},\) \( {n_2} = {n_1}\exp [ - \beta (m_{\max }^{obs} - {m_{\min }})],\) and \( {E_1}( \cdot )\) denotes an exponential integral function. The variance of estimator (Equation 16) has two components, epistemic and aleatory, and is of the form

$$ \eqalign{VAR({\hat{m}_{\max }}) & = \sigma_M^2 \cr & \quad+ {\left[ {\frac{{{E_1}({n_2}) - {E_1}({n_1})}}{{\beta \exp ( - {n_2})}} + {m_{\min }}\exp ( - n)} \right]^2},}$$
(17)

where \( \sigma_M\) denotes standard error in the determination of the largest observed magnitude \( m_{\max }^{obs}\).

Both above estimators of m max, by their nature, are very general and have several attractive properties. They are applicable for a very broad range of magnitude distributions. They may also be used when the exact number of earthquakes, n, is not known. In this case, the number of earthquakes can be replaced by λt. Such a replacement is equivalent to the assumption that the number of earthquakes occurring in unit time conforms to a Poisson distribution with parameter λ, where t is the span of the seismic event catalogue. It is also important to note that both estimators provide a value of \( {\hat{m}_{\max }}\), which is never less than the largest magnitude already observed.
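
Both estimators are easy to evaluate numerically once β, n, \( {m_{\min }}\), and \( m_{\max }^{obs}\) are available; the exponential integral \( {E_1}( \cdot )\) in Equation 16 is provided by scipy.special.exp1. The sketch below implements Equations 12 and 16 for the Gutenberg–Richter case, with made-up input values.

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1

def m_max_simple(m_obs, m_min, beta, n):
    """'Simple' estimator of Equation 12 for the truncated Gutenberg-Richter
    magnitude distribution of Equation 6."""
    d = np.exp(-beta * (m_obs - m_min))
    return m_obs + (1.0 - d) / (n * beta * d)

def m_max_cooke(m_obs, m_min, beta, n):
    """Estimator of Equation 16, based on Cooke's formalism."""
    d = np.exp(-beta * (m_obs - m_min))
    n1 = n / (1.0 - d)
    n2 = n1 * d
    return m_obs + (exp1(n2) - exp1(n1)) / (beta * np.exp(-n2)) + m_min * np.exp(-n)

# Made-up input: 1,000 events above m_min = 4.0, largest observed magnitude 6.8, b = 1.0.
beta = 1.0 * np.log(10.0)
print(m_max_simple(6.8, 4.0, beta, 1000))   # roughly 7.1 for these numbers
print(m_max_cooke(6.8, 4.0, beta, 1000))    # roughly 7.0 for these numbers
```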

Kijko (2004) discusses alternative procedures that are appropriate when the empirical magnitude distribution deviates from the Gutenberg–Richter relation, when no specific form of the magnitude distribution can be assumed, or when only a few of the largest magnitudes are known.

Despite the fact that statistical procedures based on the mathematical formalism of extreme values provide powerful tools for the evaluation of m max, they have one weak point: the available seismic event catalogues are often too short to provide reliable estimates of m max. Therefore the Bayesian extension of the statistical procedures (Cornell, 1994), which allows the inclusion of alternative and independent information such as local geological conditions, tectonic environment, geophysical data, paleo-seismicity, similarity with another seismic area, etc., is able to provide more reliable assessments of m max.

Numerical computation of PSHA

With the exception of a few special cases (Bender, 1984), the hazard curve (Equation 7) cannot be computed analytically. For most realistic distributions, the integrations can only be evaluated numerically. The common practice is to divide the possible ranges of magnitude and distance into n M and n R intervals, respectively. The average annual rate (Equation 5) is then estimated as

$$ \eqalign{\lambda (Y > y) \cong \sum\limits_{i = 1}^{{n_S}} {\sum\limits_{j = 1}^{{n_M}} {\sum\limits_{k = 1}^{{n_R}} {{\lambda_i}} P[Y > y|{m_j},{r_k}]}}\cr{f_{{M_j}}}({m_j}){f_{{R_k}}}({r_k})\Delta m \Delta r,}$$
(18)

where \( {m_j} = {m_{\min }} + (j - 0.5) \cdot ({m_{\max }} - {m_{\min }})/{n_M}, {r_k} = {r_{\min }} + (k - 0.5) \cdot ({r_{\max }} - {r_{\min }})/{n_R},\) \( \Delta m = ({m_{\max }} - {m_{\min }})/{n_M},\) and \( \Delta r = ({r_{\max }} - {r_{\min }})/{n_R}\).
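A minimal sketch of this discretized computation for one or more area sources is given below; the GMPE, the assumed ring-shaped distance distribution, and all numerical values are placeholders chosen for illustration, not recommended inputs.

```python
import numpy as np
from scipy.stats import norm

def gmpe(m, r_km):
    """Hypothetical GMPE: mean ln(PGA, in g) and sigma (placeholder coefficients)."""
    return -3.5 + 0.9 * m - 1.2 * np.log(r_km + 10.0), 0.6

def hazard_curve(y, sources, n_m=50, n_r=50):
    """Discretized rate of exceedance (Equation 18) for area sources, followed by
    the conversion to annual probability of exceedance with Equation 7 (t = 1 yr)."""
    y = np.asarray(y, dtype=float)
    lam = np.zeros_like(y)
    for s in sources:
        beta = s["b"] * np.log(10.0)
        dm = (s["m_max"] - s["m_min"]) / n_m
        dr = (s["r_max"] - s["r_min"]) / n_r
        m = s["m_min"] + (np.arange(n_m) + 0.5) * dm
        r = s["r_min"] + (np.arange(n_r) + 0.5) * dr
        # Truncated exponential magnitude PDF (Equation 6).
        f_m = beta * np.exp(-beta * (m - s["m_min"])) / (1.0 - np.exp(-beta * (s["m_max"] - s["m_min"])))
        # Assumed epicentral-distance PDF for a ring-shaped area source around the site.
        f_r = 2.0 * r / (s["r_max"] ** 2 - s["r_min"] ** 2)
        for m_j, fm in zip(m, f_m):
            for r_k, fr in zip(r, f_r):
                mean_ln, sigma = gmpe(m_j, r_k)
                p_exceed = norm.sf((np.log(y) - mean_ln) / sigma)
                lam += s["lambda"] * p_exceed * fm * fr * dm * dr
    return lam, 1.0 - np.exp(-lam)          # rate of exceedance and annual probability

# Single made-up area source and a range of PGA levels from 0.01 g to 1 g.
source = {"lambda": 0.5, "b": 1.0, "m_min": 4.5, "m_max": 7.5, "r_min": 5.0, "r_max": 150.0}
pga = np.logspace(-2, 0, 20)
rates, annual_probs = hazard_curve(pga, [source])
```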

If the procedure is applied to a grid of points, the result is a seismic hazard map, on which the contours of the expected ground motion parameter during the specified time interval can be drawn (Figure 3).

Seismic Hazard, Figure 3: Example of a product of PSHA: a map of the seismic hazard of the world, showing the peak ground acceleration expected at a 10% probability of exceedance at least once in 50 years (from Giardini, 1999, http://www.gfz-potsdam.de/pb5/pb53/projects/gshap).

Deaggregation of seismic hazard

By definition, the PSHA aggregates ground motion contributions from earthquake magnitudes and distances of significance to a site of engineering interest. One has to note that the PSHA results are not representative of a single earthquake. However, an integral part of the design procedure of any critical structure is the analysis of the most relevant earthquake acceleration time series, which are generated by earthquakes at specific magnitudes and distances. Such earthquakes are called “controlling earthquakes,” and they are used to determine the shapes of the response spectral acceleration or PGA at the site.

Controlling earthquakes are characterized by mean magnitudes and distances derived from so-called deaggregation analysis. During the deaggregation procedure, the results of PSHA are separated to determine the dominant magnitudes and distances that contribute to the hazard curve at a specified (reference) probability. Controlling earthquakes are calculated for different structural vibration frequencies, typically for the fundamental frequency of a structure. In the process of deaggregation, the hazard for a reference probability of exceedance of the specified ground motion is partitioned into magnitude and distance bins. The relative contribution to the hazard of each bin is calculated. The bins with the largest relative contributions identify those earthquakes that contribute the most to the total seismic hazard.
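
A minimal sketch of such a deaggregation for a single area source is given below, using the same discretization as Equation 18; the GMPE coefficients, the distance distribution, and the reference ground motion level are assumed values for illustration only.

```python
import numpy as np
from scipy.stats import norm

def deaggregate(y_ref, lam, b, m_min, m_max, r_min, r_max, n_m=30, n_r=30):
    """Relative contribution of each magnitude-distance bin to the rate of
    exceeding y_ref, for a single area source (the GMPE coefficients and the
    distance distribution below are placeholders)."""
    beta = b * np.log(10.0)
    dm, dr = (m_max - m_min) / n_m, (r_max - r_min) / n_r
    m = m_min + (np.arange(n_m) + 0.5) * dm
    r = r_min + (np.arange(n_r) + 0.5) * dr
    mm, rr = np.meshgrid(m, r, indexing="ij")
    mean_ln = -3.5 + 0.9 * mm - 1.2 * np.log(rr + 10.0)      # hypothetical GMPE, sigma = 0.6
    p_exceed = norm.sf((np.log(y_ref) - mean_ln) / 0.6)
    f_m = beta * np.exp(-beta * (mm - m_min)) / (1.0 - np.exp(-beta * (m_max - m_min)))
    f_r = 2.0 * rr / (r_max ** 2 - r_min ** 2)               # ring-shaped area source
    contrib = lam * p_exceed * f_m * f_r * dm * dr
    contrib /= contrib.sum()                                 # normalize to relative contributions
    j, k = np.unravel_index(np.argmax(contrib), contrib.shape)
    return contrib, (m[j], r[k])                             # bin matrix and modal (m, r) pair

contrib, (m_mode, r_mode) = deaggregate(y_ref=0.2, lam=0.5, b=1.0,
                                        m_min=4.5, m_max=7.5, r_min=5.0, r_max=150.0)
print(f"Modal scenario: m = {m_mode:.1f} at r = {r_mode:.0f} km")
```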

Some modifications of Cornell–McGuire PSHA procedure and alternative models

Source-free PSHA procedures

The concept of seismic sources is the core element of the Cornell–McGuire PSHA procedure. Unfortunately, seismic sources or specific faults often cannot be identified and mapped, and the causes of seismicity are not understood. In these cases, the delineation of seismic sources is highly subjective and is a matter of expert opinion. In addition, seismicity within the seismic sources is often not distributed uniformly, as required by the classic Cornell–McGuire procedure. The difficulties experienced in dealing with seismic sources have stimulated the development of alternative PSHA techniques that are free from the delineation of seismic sources.

One of the first attempts to develop an alternative to the Cornell–McGuire procedure was made by Veneziano et al. (1984). Indeed, the procedure does not require the specification of seismic sources, is non-parametric, and, as input, requires only information about past seismicity. The empirical distribution of the specified seismic hazard parameter is calculated by using the observed earthquake magnitudes, epicentral distances, and assumed ground motion prediction equation. By normalizing this distribution for the duration of the seismic event catalogue, one obtains an annual rate of the exceedance for the required hazard parameter.

Another non-parametric PSHA procedure has been developed by Woo (1996). The procedure is also source-free, where seismicity distributions are approximated by data-based kernel functions. Comparison of the classic Cornell–McGuire-based and kernel-based procedures shows that the former yields a lower hazard.

By their nature, the non-parametric procedures work well in areas with a frequent occurrence of strong seismic events and where the record of past seismicity is reasonably complete. At the same time, the non-parametric approach has significant weak points. Its primary disadvantage is poor reliability in estimating small probabilities for areas of low seismicity. The procedure is not recommended for areas where the seismic event catalogues are highly incomplete. In addition, in its present form, the procedure is not capable of making use of any additional geophysical or geological information to supplement the pure seismological data. Therefore, a technique that accommodates the incompleteness of seismic event catalogues and, at the same time, does not require the specification of seismic sources, would be an ideal tool for analyzing and assessing seismic hazard.

Such a technique, which can be classified as a parametric-historic procedure for PSHA, has been successfully used in several parts of the world. Kijko (2008) used it for mapping the seismic hazard of South Africa and sub-Saharan Africa. The procedure has been applied in selected parts of the world by the Global Seismic Hazard Assessment Program (GSHAP, Giardini, 1999), while Petersen et al. (2008) applied it for mapping the seismic hazard in the USA. In a series of papers, Frankel and his colleagues modified and substantially extended the original procedure. Their final approach is parametric and based on the assumption that earthquakes within a specified grid size are Poissonian in time, and that the earthquake magnitudes follow the Gutenberg–Richter relation truncated from the top by the maximum possible earthquake magnitude m max.

In some cases, the frequency-magnitude Gutenberg–Richter relation is extended by characteristic events. The procedure accepts the contribution of seismicity from active faults and compensates for incompleteness of seismic event catalogues. Frankel’s conceptually simple and intuitive parametric-historic approach combines the best of the deductive and non-parametric-historic procedures and, in many cases, is free from the disadvantages characteristic of each of the procedures. The rigorous mathematical foundations of the parametric-historic PSHA procedure have been given by Kijko and Graham (1999).

Alternative earthquake recurrence models

Time-dependent models. In addition to the classic assumption that earthquake occurrence in time follows a Poisson process, alternative approaches are occasionally used. These procedures attempt to assess the temporal, or temporal and spatial, dependence of seismicity. Time-dependent earthquake occurrence models specify a distribution of the time to the next earthquake, where this distribution depends on the magnitude of the most recent earthquake. In order to incorporate the memory of past events, non-Poissonian distributions or Markov chains are applied. In this approach, seismogenic zones that recently produced strong earthquakes become less hazardous than those that did not rupture in recent history.

Clearly such models may result in a more realistic PSHA, but most of them are still only research tools and have not yet reached the level of development required by routine engineering applications.

Time-dependent occurrence of large earthquakes on segments of active faults is extensively discussed by Rhoades et al. (1994) and Ogata (1999). A comprehensive review of all aspects of non-Poissonian models is provided by Kramer (1996). There are several time-dependent models which play an important role in PSHA. The best known models, which have both firm physical and empirical bases, are probably those by Shimazaki and Nakata (1980). Based on the correlation of seismic activity with earthquake-related coastal uplift in Japan, Shimazaki and Nakata (1980) proposed two models of earthquake occurrence: a time-predictable and a slip-predictable model.

The time-predictable model states that earthquakes occur when accumulated stress on a fault reaches a critical level; however, the stress drop and magnitudes of the subsequent earthquakes vary among seismic cycles. Thus, assuming a constant fault-slip rate, the time to the next earthquake can be estimated from the slip of the previous earthquake. The second, the slip-predictable model, is based on the assumption that, irrespective of the initial stress on the fault, an earthquake occurrence always causes a reduction in stress to the same level. Thus, the fault-slip in the next earthquake can be estimated from the time since the previous earthquake.

The second group of time-dependent models is less tightly based on the physical considerations of earthquake occurrence and attempts to describe the intervals between consecutive events by specified statistical distributions. Ogata (1999) considers five models: log-normal, gamma, Weibull, doubly exponential, and exponential, the last of which results in the stationary Poisson process. After applying these models to several paleo-earthquake data sets, he concluded that none of the distributions is consistently the best fit; the quality of the fit strongly depends on the data. From several attempts to describe the time intervals between consecutive earthquakes using statistical distributions, at least two play a significant role in the current practice of PSHA: the log-normal and the Brownian passage time (BPT) renewal models.

The use of a log-normal model is justified by the discovery that normalized intervals between the consecutive large earthquakes in the circum-Pacific region follow a log-normal distribution with an almost constant standard deviation (Nishenko and Buland, 1987). The finite value for the intrinsic standard deviation is important because it controls the degree of aperiodicity in the occurrence of characteristic earthquakes, making accurate earthquake prediction impossible. Since this discovery, the log-normal model has become a key component of most time-dependent PSHA procedures and is routinely used by the Working Group on California Earthquake Probabilities (WGCEP, 1995).

A time-dependent earthquake occurrence model that is applied increasingly often is the Brownian passage time (BPT) distribution, also known as the inverse Gaussian distribution (Matthews et al., 2002). The model is described by two parameters: \( \mu\) and \( \sigma\), which, respectively, represent the mean time interval between consecutive earthquakes and its standard deviation. The aperiodicity of earthquake occurrence is controlled by the coefficient of variation \( \alpha = \sigma /\mu\). For a small \( \alpha\), the aperiodicity of earthquake occurrence is small and the shape of the distribution is almost symmetrical. For a large \( \alpha\), the shape of the distribution is similar to the log-normal model, i.e., skewed to the right and peaked at a smaller value than the mean. The straightforward control of the aperiodicity of earthquake occurrence by the parameter \( \alpha\) makes the BPT model very attractive. It has been used to model earthquake occurrence in many parts of the world and has been applied by the Working Group on California Earthquake Probabilities (WGCEP, 1995).
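
As an illustration of how such a renewal model is used, the sketch below evaluates the BPT density directly and computes the conditional probability of an event in an exposure window, given the time elapsed since the last event; the recurrence parameters are assumed values.

```python
import numpy as np
from scipy.integrate import quad

def bpt_pdf(t, mu, alpha):
    """Brownian passage time (inverse Gaussian) density with mean recurrence
    interval mu and aperiodicity alpha."""
    return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * \
           np.exp(-(t - mu) ** 2 / (2.0 * mu * alpha**2 * t))

def conditional_probability(t_elapsed, dt, mu, alpha):
    """Probability of an event within the next dt years, given that t_elapsed
    years have already passed without one: [F(t+dt) - F(t)] / [1 - F(t)]."""
    num, _ = quad(bpt_pdf, t_elapsed, t_elapsed + dt, args=(mu, alpha))
    den, _ = quad(bpt_pdf, t_elapsed, np.inf, args=(mu, alpha))
    return num / den

# Assumed values: mean recurrence 200 years, aperiodicity 0.5, 150 years since
# the last event, and a 30-year exposure window.
print(conditional_probability(150.0, 30.0, mu=200.0, alpha=0.5))
```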

Comparisons of time-dependent with time-independent earthquake occurrence models have shown that the time-independent (Poissonian) model can be used for most engineering computations of PSHA. The exception to this rule is when the seismic hazard is dominated by a single seismic source with a significant component of characteristic occurrence, and the time interval since the last earthquake exceeds the mean time interval between consecutive events. Note that, in most cases, the information on strong seismic events provided by current databases is insufficient to distinguish between the different models. The use of non-Poissonian models will therefore only be justified when more data become available.

Alternative frequency-magnitude models. In the classic Cornell–McGuire procedure for PSHA assessment, it is assumed that earthquake magnitudes follow the Gutenberg–Richter relation truncated from the top by a seismic source characteristic, the maximum possible earthquake magnitude m max. The PDF of this distribution is given by Equation 6.

Despite the fact that in many cases the Gutenberg–Richter relation describes the magnitude distribution within seismic source zones sufficiently well, there are some instances where it does not apply. In many places, especially in areas of seismic belts and large faults, the Gutenberg–Richter relation underestimates the occurrence of large magnitudes, and the continuity of the distribution (Equation 6) breaks down. The distribution is adequate only for small events up to magnitude 6.0–7.0. Larger events tend to occur within a relatively narrow range of magnitudes (7.5–8.0), but with a frequency higher than that predicted by the Gutenberg–Richter relation. These events are known as characteristic earthquakes (Youngs and Coppersmith, 1985; Figure 4). Often it is assumed that characteristic events follow a truncated Gaussian magnitude distribution (WGCEP, 1995).

Seismic Hazard, Figure 4: Gutenberg–Richter characteristic earthquake magnitude distribution. The model combines the frequency-magnitude Gutenberg–Richter relation with a uniform distribution of characteristic earthquakes, and predicts higher rates of exceedance at magnitudes near the characteristic earthquake magnitude (after Youngs and Coppersmith, 1985).

There are several alternative frequency-magnitude relations that are used in PSHA. The best known is probably the relation by Merz and Cornell (1973), which accounts for a possible curvature in the log-frequency-magnitude relation (1) by the inclusion of a quadratic term of magnitude. Departure from linearity of the distribution (Equation 1) is built into the model by Lomnitz-Adler and Lomnitz (1979). The model is based on simple physical considerations of strain accumulation and release at plate boundaries. Despite the fact that m max is not present in the model, it provides estimates of the occurrence of large events which are more realistic than those predicted by the Gutenberg–Richter relation (1). When seismic hazard is caused by induced seismicity, an alternative distribution to the Gutenberg–Richter model (1) is always required. For example, the magnitude distributions of tremors generated by mining activity are multimodal and change their shape in time (Gibowicz and Kijko, 1994). Often, the only possible method that can lead to a successful PSHA for mining areas is the replacement of the analytical, parametric frequency-magnitude distribution by its model-free, non-parametric counterpart (Kijko et al., 2001).

Two more modifications of the recurrence models are regularly introduced: one when earthquake magnitudes are uncertain and the other when the seismic occurrence process is composed of temporal trends, cycles, short-term oscillations, and pure random fluctuations. The effect of error in earthquake magnitude determination (especially significant for historic events) can be minimized by the simple procedure of correction of the earthquake magnitudes in a catalogue (e.g., Rhoades, 1996). The modelling of random fluctuations in earthquake occurrence is often done by introducing compound distributions in which parameters of earthquake recurrence models are treated as random variables (Campbell, 1982).

Ground motion prediction equations

The assessment of seismic hazard at a site requires knowledge of the prediction equation for the particular strong motion parameter, as a function of distance, earthquake magnitude, faulting mechanism, and often the local site conditions below the site. The simplest and most commonly used form of a prediction equation is

$$ \eqalign{\ln (y) = {c_1} + {c_2}m - {c_3}\ln (r) - {c_4}r + {c_5}F + {c_6}S + \varepsilon,}$$
(19)

where y is the amplitude of the ground motion parameter (PGA, MM intensity, seismic record duration, spectral acceleration, etc.); m is the earthquake magnitude; r is the shortest distance from the site to the earthquake source; F is a term accounting for the faulting mechanism; S is a term describing the site effect; and \( \varepsilon\) is a random error with zero mean and standard deviation \( {\sigma_{\ln (y)}}\), which has two components: epistemic and aleatory.

The coefficients \( {c_1},...,{c_6}\) are estimated by the least squares or maximum likelihood procedure, using strong motion data. It has been found that the coefficients depend on the tectonic setting of the site; they are different for sites within stable continental regions, active tectonic regions, or subduction zone environments. Assuming that ln(y) has a normal distribution, regression of Equation 19 provides the mean value of ln(y), the exponent of which corresponds to the median value of y, \( \breve{y}\). Since the log-normal distribution is positively skewed, the mean value of y, \( \bar{y}\), exceeds the median value \( \breve{y}\) by a factor of \( \exp (0.5\sigma_{\ln (y)}^2).\) This indicates that the seismic hazard for a particular site is higher when expressed in terms of \( \bar{y}\) than the hazard for the same site expressed in terms of \( \breve{y}\). It has been shown that the ground motion prediction equation remains a particularly important component of PSHA since its uncertainty is a major contributor to the uncertainty of the PSHA results (SSHAC, 1997).
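
A small sketch of how the generic form of Equation 19 is evaluated, and of the median-versus-mean relation for the log-normal ground motion variable, is given below; the coefficients and the standard deviation are placeholders, not a published ground motion prediction equation.

```python
import numpy as np

def ln_ground_motion(m, r, c, F=0.0, S=0.0):
    """Generic form of Equation 19 with placeholder coefficients c = (c1, ..., c6);
    returns the predicted mean of ln(y)."""
    c1, c2, c3, c4, c5, c6 = c
    return c1 + c2 * m - c3 * np.log(r) - c4 * r + c5 * F + c6 * S

# Hypothetical coefficients and scatter, purely for illustration.
c = (-3.5, 0.9, 1.2, 0.003, 0.1, 0.2)
sigma_ln_y = 0.6

mean_ln_y = ln_ground_motion(m=6.5, r=20.0, c=c)
median_y = np.exp(mean_ln_y)                        # median of the log-normal variable
mean_y = median_y * np.exp(0.5 * sigma_ln_y ** 2)   # mean exceeds the median by exp(0.5*sigma^2)
print(f"median = {median_y:.3f} g, mean = {mean_y:.3f} g")
```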

Uncertainties in PSHA

Contemporary PSHA distinguishes between two types of uncertainties: aleatory and epistemic.

The aleatory uncertainty is due to randomness in nature; it is the probabilistic uncertainty inherent in any random phenomenon. It represents the unique details of any earthquake, such as its source, path, and site, which cannot be quantified before the earthquake occurs and cannot be reduced by current theories or by acquiring additional data or information. It is sometimes referred to as “randomness,” “stochastic uncertainty,” or “inherent variability” (SSHAC, 1997) and is denoted as U R (McGuire, 2004). Typical examples of aleatory uncertainties are: the number of future earthquakes in a specified area; the parameters of future earthquakes, such as origin times, epicenter coordinates, depths, and magnitudes; the size of the fault rupture; the associated stress drop; and ground motion parameters like PGA, displacement, or seismic record duration at a given site. Aleatory uncertainties are characteristic of the current model and cannot be reduced by the incorporation of additional data; they can only be reduced by the conceptualization of a better model.

The epistemic uncertainty, denoted as U K, is the uncertainty due to insufficient knowledge about the model or its parameters. The model (in the broad sense of its meaning; as, e.g., a particular statistical distribution) may be approximate and inexact, and therefore predicts values that differ from the observed values by a fixed, but unknown, amount. If uncertainties are associated with the numerical values of the parameters, they are also epistemic by nature. Epistemic uncertainty can be reduced by incorporating additional information or data. Epistemic distributions of a model’s parameters can be updated using Bayes’ theorem. When new information about the parameters is significant and accurate, these epistemic distributions of the parameters become delta functions about the exact numerical values of the parameters. In such a case, no epistemic uncertainty about the numerical values of the parameters exists, and the only remaining uncertainty in the problem is aleatory.

In the past, epistemic uncertainty has been known as statistical or professional uncertainty. Examples of epistemic uncertainties are: the boundaries of seismic sources, the distributions of seismic source parameters (e.g., the annual rate of seismic activity \( \lambda\), the b-value, and m max), or the median value of the ground motion parameter given the source properties.

Aleatory uncertainties are included in the PSHA by means of integration (Equation 5) and are represented by the hazard curve. In contrast, epistemic uncertainties are included through the use of alternative hypotheses – different sets of parameters with different numerical values or different models – or through a logic tree. Therefore, by default, if the logic tree formalism is applied in the process of PSHA, the resulting uncertainties of the hazard curve are of an epistemic nature.

The major benefit of the separation of uncertainties into aleatory and epistemic is potential guidance in the preparation of input for PSHA and the interpretation of the results. Unfortunately, the division of uncertainties into aleatory and epistemic is model dependent and to a large extent arbitrary, indefinite, and confusing (Panel of Seismic Hazard Evaluation, 1997).

Logic tree

The mathematical formalism of PSHA computation (Equations 5 and 7) integrates over all random (aleatory) uncertainties of a particular seismic hazard model. In many cases, however, because of our lack of understanding of the mechanisms that control earthquake generation and wave propagation processes, the best choices for the elements of the seismic hazard model are not clear. The uncertainty may originate from the choice of alternative seismic sources, competing earthquake recurrence models and their parameters, as well as from the choice of the most appropriate ground motion prediction equation. The standard approach for the explicit treatment of alternative hypotheses, models, and parameters is the use of a logic tree. The logic tree formalism provides a convenient tool for the quantitative treatment of any alternatives. Each node of the logic tree (Figure 5) represents an uncertain assumption, model, or parameter, and the branches extending from each node are the discrete uncertainty alternatives.

Seismic Hazard, Figure 5: An example of a simple logic tree. The alternative hypotheses account for uncertainty in the ground motion attenuation relation, the magnitude distribution model, and the assigned maximum magnitude m max.

In the logic tree analysis, each branch is weighted according to its probability of being correct. As a result, each end branch represents a hazard curve with an assigned weight, where the sum of the weights of all the hazard curves is equal to 1. The derived hazard curves are then used to compute the final (e.g., mean) hazard curve and its confidence intervals. An example of a logic tree is shown in Figure 5. The alternative hypotheses account for uncertainty in the ground motion attenuation model, the magnitude distribution model, and the assigned maximum magnitude m max.
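
A minimal sketch of this weighting scheme is given below: each end-branch hazard curve is combined into a weighted mean curve, and a crude weighted percentile is used to indicate the epistemic spread; all branch curves and weights are invented for illustration.

```python
import numpy as np

# Annual probabilities of exceedance at common PGA levels, one hazard curve per
# end branch of a small logic tree; all numbers are illustrative only.
pga = np.array([0.05, 0.1, 0.2, 0.4])                    # in g
branch_curves = np.array([
    [2.0e-2, 8.0e-3, 2.0e-3, 4.0e-4],                    # e.g., GMPE A, m_max = 7.0
    [3.0e-2, 1.2e-2, 3.5e-3, 8.0e-4],                    # e.g., GMPE A, m_max = 7.5
    [1.5e-2, 6.0e-3, 1.5e-3, 3.0e-4],                    # e.g., GMPE B, m_max = 7.0
    [2.5e-2, 1.0e-2, 2.8e-3, 6.0e-4],                    # e.g., GMPE B, m_max = 7.5
])
weights = np.array([0.3, 0.2, 0.3, 0.2])                 # branch weights, summing to 1

mean_curve = weights @ branch_curves                     # weighted mean hazard curve

def weighted_percentile(values, w, q):
    """Crude weighted percentile of the branch values at one ground motion level."""
    order = np.argsort(values)
    return np.interp(q, np.cumsum(w[order]), values[order])

# Epistemic spread of the hazard at each PGA level, e.g., a 15th-85th percentile band.
band = np.array([[weighted_percentile(branch_curves[:, i], weights, q) for q in (0.15, 0.85)]
                 for i in range(pga.size)])
print(mean_curve)
print(band)
```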

Controversy

Despite the fact that the PSHA procedure, as we know it in its current form, was formulated almost half a century ago, it is not without controversy. The controversy surrounds questions such as: (1) the absence of an upper limit of the ground motion parameters, (2) the division of uncertainties between aleatory and epistemic, and (3) the methodology itself, especially the application of the logic tree formalism.

In most currently used Cornell–McGuire-based PSHA procedures, the ground motion parameter used to describe the seismic hazard is distributed log-normally. Since the log-normal distribution is unbounded from above, it results in a nonzero probability of unrealistically high values of the ground motion parameter, e.g., PGA \( \approx\) 20 g, obtained originally from a PSHA for a nuclear-waste repository at Yucca Mountain in the USA (Corradini, 2003). The lack of an upper bound on earthquake-generated ground motion in current hazard assessment procedures has been identified as the “missing piece” of the PSHA procedure (Bommer et al., 2004).

Another criticism of the current PSHA procedure concerns the partitioning of uncertainties into aleatory and epistemic. As noted in the section Uncertainties in PSHA above, the division between aleatory and epistemic uncertainty remains an open issue.

A different criticism stems from the ergodic assumption which underlies the formalism of the PSHA procedure. An ergodic process is a random process in which the distribution of a random variable in space is the same as the distribution of that variable at a single point when sampled as a function of time (Anderson and Brune, 1999). It has been shown that the major contribution to PSHA uncertainty comes from the uncertainty of the ground motion prediction equation. The uncertainty of the ground motion parameter y is characterized by its standard deviation, \( {\sigma_{\ln (y)}}\), which is calculated as the misfit between the observed and predicted ground motions at several seismic stations for a small number of recorded earthquakes.

Thus, \( {\sigma_{\ln (y)}}\) mainly characterizes the spatial and not the temporal uncertainty of ground motion at a single point. This violates the ergodic assumption of the PSHA procedure. According to Anderson and Brune (1999), such violation leads to overestimation of seismic hazard, especially when exposure times are longer than earthquake return times. In addition, Anderson et al. (2000) shows that high-frequency PGAs observed at short distances do not increase as fast as predicted by most ground motion relations. Therefore, the use of the current ground motion prediction equations, especially relating to seismicity recorded at short distances, results in overestimation of the seismic hazard.

A similar view has been expressed by Wang and Zhou (2007) and Wang (2009). Inter alia, they argue that in the Cornell–McGuire-based PSHA procedure, the ground motion variability is not treated correctly. By definition, the ground motion variability is implicitly or explicitly dependent on earthquake magnitude and distance; however, the current PSHA procedure treats it as an independent random variable. The incorrect treatment of ground motion variability results in variability in earthquake magnitudes and distance being counted twice. They conclude that the current PSHA is not consistent with modern earthquake science, is mathematically invalid, can lead to unrealistic hazard estimates, and causes confusion. Similar reservations have been expressed in a series of papers by Klügel (Klügel, 2007 and references therein).

Equally strong criticism of the current PSHA procedure has been expressed by Castanos and Lomnitz (2002). The main target of their criticism is the logic tree, a key component of the PSHA. They describe the application of the logic tree formalism as a misunderstanding in probability and statistics, since it is fundamentally wrong to admit “expert opinion as evidence on the same level as hard earthquake data.”

The science of seismic hazard assessment is thus subject to much debate, especially in the realms where instrumental records of strong earthquakes are missing. At this time, PSHA represents a best-effort approach to quantifying a problem about which not enough is known to provide definitive results, and a great deal more time and measurement will be needed before these issues can be resolved.

Further reading: There are several excellent studies that describe all aspects of modern PSHA. McGuire (2008) traces the intriguing historical development of PSHA. Hanks and Cornell (1999) and Field (1995) present an entertaining and unconventional summary of the issues related to PSHA, including its misinterpretation. Reiter (1990) comprehensively describes both the deterministic and the probabilistic seismic hazard procedures from several points of view, including a regulatory perspective. Seismic hazard from the geologist’s perspective is described in the book by Yeats et al. (1997). Kramer (1996) provides an elegant, coherent, and understandable description of the mathematical aspects of both DSHA and PSHA. Anderson et al. (2000), Gupta (2002), and Thenhaus and Campbell (2003) present excellent overviews covering theoretical, methodological, as well as procedural issues of modern PSHA. Finally, the most comprehensive treatment to date of all aspects of PSHA, including the treatment of aleatory and epistemic uncertainties, is provided by the Senior Seismic Hazard Analysis Committee (SSHAC, 1997) report and in book form by McGuire (2004). The presentation here has benefited from all of the sources quoted above, especially the excellent book by Kramer (1996).

Summary

Seismic hazard is a term referring to any physical phenomena associated with an earthquake (e.g., ground motion, ground failure, liquefaction, and tsunami) and their effects on land, man-made structures, and socioeconomic systems that have the potential to produce a loss. The term is also used, without regard to a loss, to indicate the probable level of ground shaking occurring at a given point within a certain period of time. Seismic hazard analysis is an expression referring to quantification of the expected ground motion at the particular site. Seismic hazard analysis can be performed deterministically, when a particular earthquake scenario is considered, or probabilistically, when the likelihood or frequency of a specified level of ground motion at a site during a specified exposure time is evaluated. In principle, any natural hazard caused by seismic activity can be described and quantified in terms of the probabilistic methodology. Classic probabilistic seismic hazard analysis (PSHA) includes four steps: (1) identification and parameterization of the seismic sources, (2) specification of temporal and magnitude distributions of earthquake occurrence, (3) calculation of ground motion prediction equations and their uncertainty, and (4) integration of uncertainties in earthquake location, earthquake magnitude, and ground motion prediction equations into the hazard curve.

An integral part of PSHA is the assessment of uncertainties. Contemporary PSHA distinguishes between two types of uncertainties: aleatory and epistemic. The aleatory uncertainty is due to randomness in nature; it is the probabilistic uncertainty inherent in any random phenomenon. The aleatory uncertainties are characteristic of the current model and cannot be reduced by the incorporation of additional data. The epistemic uncertainty is the uncertainty due to insufficient knowledge about the model or its parameters. Epistemic uncertainty can be reduced by incorporating additional information or data. Aleatory uncertainties are included in the probabilistic seismic hazard analysis through the integration over these uncertainties, and they are represented by the hazard curve. In contrast, epistemic uncertainties are included through the use of alternative models, different sets of parameters with different numerical values, or through a logic tree.

Unfortunately, the PSHA procedure, as we know it in its current form, is not without controversy. The controversy arises from questions such as: (1) the absence of the upper limit of ground motion parameter, (2) division of uncertainties between aleatory and epistemic, and (3) methodology itself, especially the application of the logic tree formalism.

Cross-references

Characteristic Earthquakes and Seismic Gaps

Earthquake, Magnitude

Earthquakes, Early and Strong Motion Warning

Earthquakes, Intensity

Earthquakes, Shake Map

Earthquakes, Strong-Ground Motion

Seismic Zonation

Statistical Seismology