1 Motivation

Reliability analysis makes it possible to account for the unavoidable effects of uncertainty on the performance of a structure. In this context, the level of safety of a structure can be measured in terms of its reliability, which is a measure of the plausibility that the structure fulfills certain performance requirements during its lifetime. The complement of the reliability is the probability of failure, that is, the probability that the structure violates prescribed performance criteria. Thus, reliability can be incorporated as one of the performance criteria in the analysis and design of structures to explicitly address the effects of uncertainty [24, 25, 29, 33, 42, 49, 58]. In this framework, it is assumed that the external force vector \(\mathbf f (t)\) (see Eq. (1.1)) is modeled as a non-stationary stochastic process characterized by a random variable vector \(\mathbf z \in \varOmega _\mathbf{z } \subset R^{n_{z}}\). This vector is defined in terms of a probability density function \(p(\mathbf z )\). Furthermore, consider a vector \({\varvec{\theta }} \in \varOmega _{\varvec{\theta }} \subset R^{n_{\theta }}\) of uncertain model parameters. These parameters are characterized in a probabilistic manner by means of a joint probability density function \(q({\varvec{\theta }})\). It is noted that alternative approaches for modeling uncertainties also exist. For example, methodologies based on non-traditional uncertainty models can be very useful in a number of cases [8, 12, 27, 47, 48]. However, the focus here is on probabilistic approaches. The performance of the structural system due to the excitation is characterized by means of \(n_r\) responses of interest

$$\begin{aligned} r_i(t,\mathbf z ,{\varvec{\theta }}) \; \; ,\; \; i=1,\ldots ,n_r,~t\in [0,T] \end{aligned}$$
(4.1)

where T is the duration of the excitation. Clearly, the aforementioned responses \(r_i\) are functions of time t (due to the dynamic nature of the loading), and functions of the system parameter vector \({\varvec{\theta }}\) and random variable vector \(\mathbf z \). The response functions \(r_i(t,\mathbf z ,{\varvec{\theta }}),~i=1,\ldots ,n_r\), are obtained from the solution of the equation of motion that characterizes the structural model, i.e., Eq. (1.1).

2 Reliability Problem Formulation

For structural systems under stochastic excitation, the probability that design conditions are violated within a particular reference period provides a useful reliability measure. Such a measure is referred to as the first excursion probability and quantifies the plausibility of the occurrence of unacceptable behavior (failure) of the structural system [63, 68]. First excursion probabilities are therefore used to characterize the level of safety of a structure. Specifically, this probability measures the chances that the uncertain responses exceed prescribed thresholds in magnitude within a specified time interval. On this basis, a failure event \(F(\mathbf z ,{\varvec{\theta }}) \) can be defined in terms of the so-called normalized demand function \(d(\mathbf z ,{\varvec{\theta }})\) as [5]

$$\begin{aligned} F(\mathbf z ,{\varvec{\theta }}) = \left\{ d( \mathbf z ,{\varvec{\theta }}) > 1 \right\} \end{aligned}$$
(4.2)

where this function is defined as the maximum of the quotient between the structural responses of interest and their corresponding threshold levels, that is,

$$\begin{aligned} d(\mathbf z ,{\varvec{\theta }})=\underset{i=1,\ldots ,n_r}{\max }\left( \underset{t\in [0,T]}{\max }\left( \left| \frac{ r_i(t,\mathbf z ,{\varvec{\theta }}) }{r_i^*}\right| \right) \right) \end{aligned}$$
(4.3)

where \(r_i^*,~i=1,\ldots ,n_r\), are the acceptable threshold levels of the corresponding responses of interest \(r_i,~i=1,\ldots ,n_r\). Note that the quotient \(r_i(t,\mathbf z ,{\varvec{\theta }}) / {r_i^*}\) can be interpreted as a demand-to-capacity ratio, as it compares the value of the response \(r_i(t,\mathbf z ,{\varvec{\theta }})\) with its maximum allowable value \({r_i^*}\). It is noted that the concept of a failure event does not necessarily imply collapse. In fact, the failure event may refer to, for example, partial damage states or unacceptable system performance.
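For concreteness, the evaluation of the normalized demand function in Eq. (4.3) from sampled response histories can be sketched as follows (Python/NumPy; the arrays `responses` and `thresholds` are hypothetical placeholders for the output of the dynamic analysis and the prescribed levels \(r_i^*\)).

```python
import numpy as np

def demand(responses, thresholds):
    """Normalized demand d(z, theta) of Eq. (4.3).

    responses : array of shape (n_r, n_t), response histories r_i(t)
    thresholds: array of shape (n_r,), acceptable threshold levels r_i^*
    """
    ratios = np.abs(responses) / thresholds[:, None]   # demand-to-capacity ratios
    return ratios.max()                                # max over responses and time
```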

The probability of occurrence of the failure event F, \(P_F\), can be expressed in terms of the probability integral in the form

$$\begin{aligned} P_F=\int _{d(\mathbf z ,{\varvec{\theta }}) > 1} p(\mathbf z ) \; q({\varvec{\theta }}) \; d\mathbf z \; d{\varvec{\theta }} \end{aligned}$$
(4.4)

or in terms of the indicator function \(I_F(\mathbf z ,{\varvec{\theta }})\) as

$$\begin{aligned} P_F =\int _\mathbf{z \in \varOmega _\mathbf{z },{\varvec{\theta }}\in \varOmega _{\varvec{\theta }} } I_F(\mathbf z ,{\varvec{\theta }}) \; p(\mathbf z ) \; q({\varvec{\theta }}) \; d\mathbf z \; d{\varvec{\theta }} \end{aligned}$$
(4.5)

where the indicator function is equal to 1 if the normalized demand function is greater than or equal to 1, and 0 otherwise. In general, the probability integral involves a large number of random variables (hundreds or thousands) in the context of dynamical systems under stochastic excitation [5, 38, 40, 56] (see Sect. 4.5). Therefore, this integral represents a high-dimensional reliability problem whose numerical evaluation is extremely demanding [17, 21, 50].

3 Reliability Estimation

3.1 General Remarks

As previously pointed out, the probability integral represents a high-dimensional reliability problem. In addition, the normalized demand function that characterizes the failure event F is usually not explicitly known but must be computed point-wise by applying suitable deterministic numerical techniques, such as finite element analyses. It is therefore essential to minimize the number of such function evaluations. Finally, the probability of failure of a properly designed system is, in general, very small \((P_F \sim 10^{-6}{-}10^{-2})\). In other words, failure is a rare event. It is also apparent that methods based on numerical integration as well as standard reliability methods are not suitable for estimating the high-dimensional probability integral. This difficulty favors the application of simulation techniques to estimate the probability of failure. In this regard, it is well known that direct Monte Carlo is theoretically applicable for evaluating \(P_F\), but it is inefficient in estimating small probabilities because it requires a very large number of samples (dynamic analyses) to achieve an acceptable level of accuracy [28, 59]. Based on the above conditions, it is clear that the reliability problem is computationally very challenging. Therefore, the estimation of the system reliability has to rely on advanced simulation techniques to limit, to the greatest extent possible, the number of dynamic analyses. Several advanced stochastic simulation methods have been recently developed to cope with this type of problem. Examples of these algorithms include subset simulation [5, 6, 70], line sampling [40], the auxiliary domain method [39], horseracing simulation [69], and subset simulation based on hidden variables [7]. Among these algorithms, subset simulation is used in the present implementation due to its generality and flexibility. The generality of the method stems from the fact that it is not based on any geometrical assumption about the topology of the failure domain. Moreover, validation calculations have shown that subset simulation can be applied efficiently to a wide range of complex reliability problems [6, 18, 19, 34, 37, 62]. Even though this is a well-known technique in the reliability engineering research community, some of the key aspects of subset simulation are reviewed in this section for completeness.
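As a point of reference, a direct Monte Carlo estimator of Eq. (4.5) is sketched below; the callables `sample_z`, `sample_theta`, and `demand` are hypothetical wrappers around the excitation model, the parameter model, and the dynamic analysis. Since the coefficient of variation of this estimator is approximately \(\sqrt{(1-P_F)/(N P_F)}\), a failure probability of the order of \(10^{-4}\) requires on the order of \(10^6\) dynamic analyses for a 10\(\%\) coefficient of variation, which illustrates why direct Monte Carlo is impractical here.

```python
import numpy as np

def direct_mc_failure_probability(sample_z, sample_theta, demand, n_samples, rng):
    """Direct Monte Carlo estimate of P_F (Eq. (4.5)); impractical for rare events."""
    failures = 0
    for _ in range(n_samples):
        z = sample_z(rng)          # draw excitation variables from p(z)
        theta = sample_theta(rng)  # draw model parameters from q(theta)
        if demand(z, theta) > 1.0: # one full dynamic analysis per sample
            failures += 1
    return failures / n_samples
```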


3.2 Basic Ideas

The conceptual idea of subset simulation is to decompose the failure event F into a sequence of nested failure events

$$\begin{aligned} F = F_m \subset F_{m-1} \subset \cdots \subset F_1 \end{aligned}$$
(4.6)

so that

$$\begin{aligned} F = \cap _{k=1}^m F_k \end{aligned}$$
(4.7)

By definition of conditional probability, the probability of failure can be written as

$$\begin{aligned} P(F) = P(F_m) = P(\cap _{k=1}^m F_k) = P(F_1) \prod ^{m-1}_{k=1} P(F_{k+1} / F_k) \end{aligned}$$
(4.8)

In other words, the probability of failure is expressed as a product of \(P(F_1)\) and the conditional probabilities \(\{P(F_{k+1} / F_k), k=1,\ldots ,m-1\}\). It is seen that, even if P(F) is small, by choosing m and \(F_k, k=1,\ldots ,m-1\), appropriately, the conditional probabilities can still be made sufficiently large, and, therefore, can be efficiently evaluated by direct simulation because the failure events are more frequent. The subsets \(F_1,F_2,\ldots ,F_{m-1}\) are called intermediate failure events. For actual implementation, the intermediate failure events are adaptively chosen using information from simulated samples in order to correspond to some specific values of conditional failure probabilities. To be more specific, the sequence of intermediate failure events is defined as

$$\begin{aligned} F_k = \left\{ d (\mathbf z ,{\varvec{\theta }}) > \delta _k \right\} \; \; , \; \; k=1,\ldots ,m \end{aligned}$$
(4.9)

where \(0< \delta _1< \cdots< \delta _{m-1} < 1 = \delta _m\) is a sequence of intermediate threshold values. Note that the failure event \(F_m = F\) is defined as \( F_m = \{ d (\mathbf z ,{\varvec{\theta }}) > \delta _m = 1 \}\). During subset simulation, the threshold values \(\delta _1, \ldots ,\delta _{m-1}\) are adaptively selected so that the conditional failure probabilities are set equal to a pre-established value \(p_0\), called the conditional failure probability. Validation calculations have shown that choosing any value of \(p_0\) between 0.1 and 0.3 will lead to similar efficiency as long as subset simulation is properly implemented [70]. Thus, the demand function values \(\delta _1, \ldots ,\delta _{m-1}\) at the specified probability levels are estimated during the simulation. In this manner, subset simulation generates samples whose demand function values correspond to specific (pre-established) probability levels. Therefore, the unconditional failure probability \(P(F_1)\) as well as all conditional failure probabilities are automatically equal to \(p_0\), except for the conditional failure probability in the last step of subset simulation, that is, \(P(F_{m} / F_{m-1})\).

3.3 Failure Probability Estimator

The previous result implies that the probability of failure can be expressed in the form

$$\begin{aligned} P_{F}&= p_0^{m-1} \int _\mathbf{z \in \varOmega _\mathbf{z },{\varvec{\theta }}\in \varOmega _{\varvec{\theta }} } I_F(\mathbf z ,{\varvec{\theta }}) p(\mathbf z |F_{m-1} ) \; q({\varvec{\theta }} |F_{m-1}) \; d\mathbf z \; d{\varvec{\theta }} \end{aligned}$$
(4.10)

where \(p(\mathbf z |F_{m-1})\) and \(q({\varvec{\theta }} |F_{m-1})\) are the conditional distributions of the random variable vector \(\mathbf z \) and uncertain system parameters \({\varvec{\theta }}\) conditional to the failure event \(F_{m-1}\), respectively. Note that the integral in the above equation corresponds to the expected value of the indicator function with respect to the conditional distributions \(p(\mathbf z |F_{m-1})\) and \(q({\varvec{\theta }} |F_{m-1})\). Thus, the probability of failure can also be written as

$$\begin{aligned} P_{F} = p_0^{m-1} E_ {p(\mathbf z |F_{m-1}), q({\varvec{\theta }}|F_{m-1} )} \left[ I_F(\mathbf z ,{\varvec{\theta }}) \right] \end{aligned}$$
(4.11)

where \( E_ {p(\mathbf z |F_{m-1}), q({\varvec{\theta }}|F_{m-1} )} [\; \cdot \;] \) is the expectation operator. The probability of failure is then estimated as

$$\begin{aligned} P_{F} \approx p_0^{m-1} { 1 \over {N_m}} \sum _{i=1}^{N_m} I_F(\mathbf z _{m-1,i},{\varvec{\theta }}_{m-1,i}) \end{aligned}$$
(4.12)

where \(\{ ( \mathbf z _{m-1,i} , {\varvec{\theta }}_{m-1,i}), i=1,\ldots ,N_m\}\) is the set of samples generated at the last stage of subset simulation (conditional level \(m-1\)).

For actual implementation of subset simulation, it is assumed without much loss of generality that the components of \(\mathbf z \) are independent, that is,

$$\begin{aligned} p(\mathbf z ) = \varPi _{j=1}^{n_z} p_j(z_j) \end{aligned}$$
(4.13)

where for every j, \(p_j ( \cdot )\) is a one-dimensional probability density function for \(z_j\). Similarly, the uncertain system parameters \({\varvec{\theta }}\) are also assumed to be independent and, therefore, the joint probability density function \(q({\varvec{\theta }})\) takes the form

$$\begin{aligned} q({\varvec{\theta }}) = \varPi _{j=1}^{n_{\theta }} q_j(\theta _j) \end{aligned}$$
(4.14)

where \(q_j(\theta _j)\) represents the probability density function of the basic system parameter \(\theta _j\). It is noted that this assumption is not a limitation for a number of cases of interest. However, the estimation of posterior robust failure probability integrals is not covered by the assumption of independence (see Sect. 7.3.4 to address this situation).

4 Numerical Implementation

4.1 Basic Implementation

Based on the previous conceptual ideas, the basic implementation of subset simulation is as follows.

1. Generate \(N_1\) samples \(\{(\mathbf z _{0,i},{\varvec{\theta }}_{0,i}), i=1,\ldots ,N_1\}\) by direct Monte Carlo according to the probability density functions \(p(\mathbf z )\) and \(q({\varvec{\theta }})\), respectively (the subscript 0 denotes that the samples correspond to the unconditional level (level 0)). Set \(k=1\).

2. Evaluate the normalized demand function to obtain \(\{d(\mathbf z _{k-1,i},{\varvec{\theta }}_{k-1,i}), i=1,\ldots ,N_k\}\). Arrange these values in increasing order.

3. Identify the \([(1-p_0) N_k+1]\)th value of the ordered set \(\{d(\mathbf z _{k-1,i},{\varvec{\theta }}_{k-1,i}), i=1,\ldots ,N_k\}\). If this value is greater than or equal to 1, set \(m=k\), \(\delta _m=1\) and go to step 7. Otherwise, set the intermediate threshold value \(\delta _k\) equal to this value.

4. The kth intermediate failure event is defined as \(F_k = \left\{ d(\mathbf z ,{\varvec{\theta }}) \ge \delta _k \right\} \).

5. The sampling estimate for \(P(F_k)\) if \(k=1\), or \(P(F_{k} / F_{k-1})\) if \(k >1\), is equal to \(p_0\) by construction, where \(p_0\) and \(N_k\) are chosen such that \(p_0 N_k\) is an integer.

6. By construction, there are \(p_0 N_k\) samples among \(\{(\mathbf z _{k-1,i},{\varvec{\theta }}_{k-1,i}), i=1,\ldots ,N_k\}\) whose demand function value is greater than or equal to \(\delta _k\). Starting from each of these conditional samples, Markov chain Monte Carlo simulation is used to generate an additional \((N_{k+1} - p_0 N_k)\) conditional samples that lie in \(F_k\), making a total of \(N_{k+1}\) conditional samples \(\{(\mathbf z _{k,i},{\varvec{\theta }}_{k,i}), i=1,\ldots ,N_{k+1}\}\) at level k. The Markov chain samples are drawn by using the modified Metropolis algorithm [5, 45]. Return to step 2 with \(k=k+1\).

7. The conditional failure probability \(P(F_{m} / F_{m-1})\) is estimated directly by \(P(F_{m} / F_{m-1}) = N_F/N_m\), where \(N_F\) is the number of samples that lie in the target failure event \(F_m\). The failure probability is estimated as

    $$\begin{aligned} P_F \approx p_0^{m-1} { 1 \over {N_m}} \sum _{i=1}^{N_m} I_{F_m} ( \mathbf z _{m-1,i},{\varvec{\theta }}_{m-1,i} ) \end{aligned}$$
    (4.15)

    where \(\{(\mathbf z _{m-1,i},{\varvec{\theta }}_{m-1,i}), i=1,\ldots ,N_m\}\) is the set of samples generated at the last stage of subset simulation (conditional level \(m-1\)).

For a more detailed implementation of the approach, the reader is referred to [5, 70].
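The steps above can be condensed into the following sketch of the subset simulation loop (Python/NumPy). It is a simplified illustration, not the implementation of [5, 70]: the conditional sampler `mcmc_step` (e.g., one transition of the modified Metropolis algorithm) and the unconditional sampler `sample_prior` are assumed to be provided, and the excitation and parameter vectors are stacked into a single array.

```python
import numpy as np

def subset_simulation(sample_prior, demand, mcmc_step, n_per_level=500, p0=0.1,
                      max_levels=10, rng=None):
    """Sketch of the subset simulation loop described in the steps above.

    sample_prior(rng) -> x  : unconditional sample (z and theta stacked)
    demand(x) -> float      : normalized demand function d(z, theta)
    mcmc_step(x, d, delta, rng) -> (x_new, d_new) : one Markov chain transition that
        leaves the distribution conditional on {d > delta} invariant (e.g., the
        modified Metropolis algorithm); assumed to be provided.
    """
    rng = rng or np.random.default_rng()
    n_seeds = int(p0 * n_per_level)            # p0 * N is assumed to be an integer

    x = [sample_prior(rng) for _ in range(n_per_level)]   # level 0: direct Monte Carlo
    d = np.array([demand(xi) for xi in x])
    p_f = 1.0
    for _ in range(max_levels):
        order = np.argsort(d)                              # increasing demand values
        delta = d[order[n_per_level - n_seeds]]            # intermediate threshold
        if delta >= 1.0:                                   # target failure event reached
            break
        p_f *= p0                                          # P(F_k | F_{k-1}) = p0 by construction
        x = [x[i] for i in order[n_per_level - n_seeds:]]  # seeds lying in F_k
        d = [d[i] for i in order[n_per_level - n_seeds:]]
        i = 0
        while len(x) < n_per_level:                        # grow chains from the seeds
            x_new, d_new = mcmc_step(x[i], d[i], delta, rng)
            x.append(x_new); d.append(d_new); i += 1
        d = np.array(d)
    return p_f * np.mean(np.asarray(d) > 1.0)              # Eq. (4.12)
```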

4.2 Implementation Issues

The numerical implementation of subset simulation can be improved by parallelizing some independent parts of the algorithm. The highest computational effort is associated with the dynamic analysis of the structural system. Accordingly, parallelization strategies that exploit the parallelism of those parts of the code where the dynamic analysis is performed can be implemented [2, 55]. The unconditional level of subset simulation (level 0) can be scheduled completely in parallel, since the samples are independent. At higher conditional levels, Markov chains need to be generated. Samples forming a Markov chain depend on the previous samples; this inherent dependence precludes parallelization within a chain. However, the chains themselves are independent of each other, which means that different chains can be generated concurrently. Thus, a number of chains can be run simultaneously, taking advantage of available parallelization techniques, as sketched below. Additionally, low-level parallelism can also be considered to accelerate the individual model runs (dynamic analysis), improving the numerical implementation even more [13, 66].
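A minimal sketch of this chain-level parallelism is given below, using Python's standard `concurrent.futures` module. The function `mcmc_step` is again a hypothetical conditional sampler; in practice the dominant cost inside it is the dynamic analysis, and the functions passed to the process pool must be picklable.

```python
from concurrent.futures import ProcessPoolExecutor

def grow_chain(seed, chain_length, mcmc_step, delta):
    """Grow one Markov chain of conditional samples starting from a seed sample."""
    chain = [seed]
    for _ in range(chain_length - 1):
        chain.append(mcmc_step(chain[-1], delta))
    return chain

def grow_chains_in_parallel(seeds, chain_length, mcmc_step, delta, n_workers=4):
    """Chains are mutually independent, so each one can run on its own worker."""
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(grow_chain, seed, chain_length, mcmc_step, delta)
                   for seed in seeds]
        return [f.result() for f in futures]
```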

5 Stochastic Model for Excitation

5.1 General Description

Depending on the particular application and the available information, different stochastic excitation models can be used. For example, in the area of seismic engineering, filtered Gaussian white noise-based processes, models based on power spectra, record-based models, point source-based models, multiple point source-based models, and models based on large or small sub-events are commonly used [4, 14, 20, 22, 46, 51, 52, 53, 57, 60, 67]. In particular, a stochastic point source-based model is used in the present formulation to simulate ground motions. The model is characterized by a series of seismicity parameters, such as the moment magnitude M and the epicentral distance r [4, 14]. The methodology, which was initially developed for generating synthetic ground motions, has been reinterpreted to form a stochastic model for earthquake excitation [36, 65]. According to this approach, high-frequency and low-frequency (pulse) components of the ground motion are independently generated and then combined to form an acceleration time history. The stochastic model represents a practical tool for the description of far-field and near-field ground motions. It establishes a direct link between the knowledge about the characteristics of the seismic hazard at the structural site and future ground motions. For completeness, some of the basic aspects of the model are presented in this section.

5.2 High-Frequency Components

The time history of the high-frequency components of the ground motion for a specific event of magnitude M and epicentral distance r is obtained in several steps. First, a discrete white noise sequence is generated as \( \mathbf w ^T =\, < \sqrt{1/\varDelta t} \; w_j > \; , j=1,\ldots ,n_T \), where \(w_j, j=1,\ldots ,n_T\), are independent, identically distributed standard Gaussian random variables, \(\varDelta t\) is the sampling interval, and \(n_T\) is the number of time instants, equal to the duration of the excitation T divided by the sampling interval. The white noise sequence is then modulated by an envelope function e(t, M, r), such as the one suggested in [61], at the discrete time instants (see Sect. 4.6.5). The discrete Fourier transform is applied to the modulated white noise sequence. The resulting spectrum is multiplied by a ground motion spectrum (or radiation spectrum) A(f, M, r), after which the discrete inverse Fourier transform is applied to bring the sequence back to the time domain and yield the desired ground acceleration time history. The envelope function is the major factor affecting the duration of the simulated ground motions for a given moment magnitude M and epicentral distance r. Furthermore, the ground motion spectrum contains information on the physics of the earthquake process as well as other geophysical parameters, such as the radiation pattern, density, shear wave velocity in the vicinity of the source, corner frequencies, local site conditions, etc. Details of the procedure as well as the characterization of the envelope function and the ground acceleration spectrum can be found in [1, 4, 14, 15, 61, 65].
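A sketch of these steps is given below (Python/NumPy). The callables `envelope` and `radiation_spectrum` stand for e(t, M, r) and A(f, M, r) already particularized for the event of interest; normalization details of the actual procedure [14, 65] are omitted.

```python
import numpy as np

def high_frequency_component(w, dt, envelope, radiation_spectrum):
    """Synthesize the high-frequency ground acceleration from a white noise sequence.

    w                    : standard Gaussian white noise sequence of length n_T
    dt                   : sampling interval (s)
    envelope(t)          : envelope function e(t, M, r) for the given event
    radiation_spectrum(f): ground motion (radiation) spectrum A(f, M, r) for the event
    """
    n = len(w)
    t = np.arange(n) * dt
    modulated = envelope(t) * w / np.sqrt(dt)     # modulated white noise sequence
    spec = np.fft.rfft(modulated)                 # discrete Fourier transform
    freq = np.fft.rfftfreq(n, d=dt)
    spec *= radiation_spectrum(freq)              # impose the target spectral shape
    return np.fft.irfft(spec, n=n)                # back to the time domain
```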

5.3 Pulse Components

The description of the time history with low-frequency components is based on a simple analytical model developed in [44]. According to the model, the pulse component related to near-field motions is described through a velocity pulse v(t) as

$$\begin{aligned} v(t) ={ A_p \over { 2}} [ 1 + \text {cos} ( {2 \pi f_p \over {\gamma _p}} (t-t_p) )] \; \text {cos} (2 \pi f_p (t - t_p) + \nu _p) \; , \; t \in (t_p - {\gamma _p \over { 2 f_p}}, t_p + {\gamma _p \over { 2 f_p}}) \end{aligned}$$
(4.16)

where \(A_p\), \(f_p\), \(\nu _p\), \(\gamma _p\), and \(t_p\) describe the amplitude, prevailing frequency, phase angle, number of half cycles, and time shift, respectively. Outside the time interval, the velocity pulse is equal to zero. Some of the pulse parameters, such as the amplitude and frequency, can be linked to the moment magnitude M and epicentral distance r of the seismic event [16]. The rest of the pulse parameters are considered as independent model parameters, and they have been calibrated by tuning the analytical expression of the velocity pulse to a wide range of recorded near-field ground motions [44].
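A direct transcription of Eq. (4.16) reads as follows (Python/NumPy); the corresponding pulse acceleration is obtained by differentiating v(t) with respect to time.

```python
import numpy as np

def velocity_pulse(t, A_p, f_p, nu_p, gamma_p, t_p):
    """Velocity pulse v(t) of Eq. (4.16); zero outside the pulse duration."""
    tau = t - t_p
    window = np.abs(tau) < gamma_p / (2.0 * f_p)            # pulse duration
    v = 0.5 * A_p * (1.0 + np.cos(2.0 * np.pi * f_p * tau / gamma_p)) \
        * np.cos(2.0 * np.pi * f_p * tau + nu_p)
    return np.where(window, v, 0.0)
```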

5.4 Synthesis of Near-Field Ground Motions

The synthesis of near-field ground motions is obtained by combining the high- and low-frequency components through the following steps. First, an acceleration time history with high-frequency components and a pulse ground acceleration are generated. The Fourier transforms of these synthetic acceleration time histories are then calculated. Next, the Fourier amplitude spectrum of the synthetic time history with low-frequency components is subtracted from the Fourier amplitude spectrum of the synthetic time history with high-frequency components. A synthetic acceleration time history is constructed so that its Fourier amplitude spectrum is equal to the difference of the Fourier amplitude spectra calculated before, and its phase coincides with the phase of the Fourier transform of the synthetic time history with high-frequency components. Finally, the time history generated in the previous steps is superimposed on the acceleration time history corresponding to the velocity pulse [44]. For illustration purposes, Fig. 4.1 shows a synthetic near-field ground motion sample corresponding to the envelope function and radiation spectrum presented in Fig. 4.2 and with near-field pulse parameters \(A_p = 27.11\) (cm/s), \(f_p = 0.53\) (Hz), \(\nu _p = 0.0\) (rad), and \(\gamma _p=1.8\). The existence of the near-field pulse is evident when looking at the velocity time history of the ground motion. It is noted that, for a sampling interval equal to \(\varDelta t = 0.01\) s, the discrete white noise sequence has more than 1,500 components. In other words, the vector of uncertain parameters \(\mathbf w \) has more than 1,500 elements in this case.
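The combination of the two components can be sketched as follows (Python/NumPy), assuming that the high-frequency acceleration `a_high` and the pulse acceleration `a_pulse` have already been generated on the same time grid; clipping the spectral difference at zero is an assumption made here for robustness.

```python
import numpy as np

def combine_components(a_high, a_pulse):
    """Combine high- and low-frequency components as described above (a sketch)."""
    n = len(a_high)
    spec_high = np.fft.rfft(a_high)
    spec_pulse = np.fft.rfft(a_pulse)
    # difference of the Fourier amplitude spectra, with the phase of the
    # high-frequency component (negative differences are clipped to zero)
    amplitude = np.maximum(np.abs(spec_high) - np.abs(spec_pulse), 0.0)
    phase = np.angle(spec_high)
    residual = np.fft.irfft(amplitude * np.exp(1j * phase), n=n)
    return residual + a_pulse                      # superimpose the pulse acceleration
```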

Fig. 4.1

Acceleration time history sample. a High-frequency components. b Near-field pulse acceleration. c Final ground motion (acceleration time history). d Final ground motion (velocity time history)

Fig. 4.2

Envelope function e(t, M, r) and radiation spectrum A(f, M, r) for \(M=7.0\) and \(r=20\) km

5.5 Seismicity Model

The probabilistic model for the seismic hazard at the structural site is finally complemented by assigning a probability density function to some of the model parameters. In the context of this formulation, the epicentral distance r for the earthquake events is assumed to follow a log-normal distribution. With respect to the moment magnitude M, several deterministic and probabilistic characterizations have been suggested [41]. For the near-field pulse model, the parameters are defined according to the probability models suggested in [44]. For example, the prevailing frequency \(f_p\) and the peak ground velocity \(A_p\) are characterized by log-normal distributions. Furthermore, the probability models for the number of half cycles \(\gamma _p\) and the phase angle \(\nu _p\) are chosen as normal and uniform, respectively.

Fig. 4.3

Uncertain seismological and near-fault pulse parameters

In summary, the input to the stochastic model for ground motions is the white noise sequence \(\mathbf w \), the seismological parameters M and r, and the parameters for the near-field pulse \(f_p\), \(A_p\), \(\nu _p\), and \(\gamma _p\). Thus, in connection with Sects. 4.1 and 4.2, the random variable vector \(\mathbf z \) is defined as \(\mathbf z = < \mathbf w ^T, M, r, f_p, A_p, \nu _p, \gamma _p >^T\). Note that the dimension of \(\mathbf z \) is of the order of thousands for the excitation stochastic model under consideration. For illustration purposes, the schematic representation of the uncertain parameters of the excitation model is presented in Fig. 4.3. Finally, it is emphasized that the reliability analysis presented in this chapter is not restricted to this particular stochastic excitation model. In this regard, other excitation models can be used as well [20, 22, 51, 52, 57].

6 Application Problem No. 1

The objective of this application problem is to evaluate the performance and effectiveness of the proposed model reduction technique for the reliability analysis of a two-dimensional frame structure. Different reduced-order models are considered, including models based on fixed-interface normal modes with and without interface reduction.

6.1 Model Description and Substructures Characterization

The model, shown in Fig. 4.4, consists of a three-span two-dimensional eight-story frame structure, and it can be considered as one of the moment-resisting frames of a building model.

Fig. 4.4

Three-span two-dimensional eight-story frame structure. Application problem No. 1

The structural model has a total length of 30 m and a constant floor height of 5 m, leading to a total height of 40 m. The finite element model comprises 160 two-dimensional beam elements of square cross section with 140 nodes and a total of 408 degrees of freedom. The dimension of the square cross section of the beam elements is equal to 0.4 m. The axial deformation of these elements is neglected with respect to their bending deformation. The basic material properties of the beam and column elements are given by the Young’s modulus \(E = 2.0 \times 10^{10}\) N/m\(^2\) and mass density \(\rho = 2,500\) kg/m\(^3\). The structural model is subdivided into 16 substructures as shown in Fig. 4.5. Substructures \(S_i, i=1,\ldots ,8,\) are composed of the column elements of the different floors, while substructures \(S_i, i=9,\ldots ,16,\) correspond to the beam elements of the different floors. With this subdivision, there are eight interfaces in the model. The total number of internal degrees of freedom is equal to 312, while 96 degrees of freedom are present at the interfaces.

Fig. 4.5

Substructures of the finite element model. Application problem No. 1

6.2 Reduced-Order Model Based on Dominant Fixed-Interface Normal Modes

Two models with a reduced number of fixed-interface normal modes are considered to evaluate the effect of dominant normal modes on the accuracy of the reduced-order model spectral properties. The first model (Model-1) considers the minimum number of fixed-interface normal modes at each substructure, while the second model (Model-2) includes all fixed-interface normal modes with frequencies inside a target frequency bandwidth. More specifically, Model-1 is characterized by the first fixed-interface normal mode of each substructure (with the lowest frequency). For each substructure of Model-2, all fixed-interface normal modes that have frequency \(\omega \) such that \(\omega \le \alpha \omega _{c}\) are retained, with \(\alpha \) being a multiplication factor and \(\omega _{c}\) being a cut-off frequency that is taken equal to 87.66  rad/s (10th modal frequency of the unreduced reference model). The multiplication factor is selected to be 5 for substructures \(S_i, i=1,\ldots ,8\), and 2 for substructures \(S_i, i=9,\ldots ,16\). The difference between the multiplication factors is due to the fact that spectral properties of substructures \(S_i, i=1,\ldots ,8\) are quite different from substructures \(S_i, i=9,\ldots ,16\), as the lowest frequencies corresponding to substructures 1 to 8 are substantially higher than the lowest frequencies of substructures 9 to 16. The selected multiplication factors define a frequency bandwidth that contains the most important frequencies of each substructure.

With this selection of multiplication factors, four fixed-interface normal modes are kept for each substructure \(S_i, i=1,\ldots ,8\), and three fixed-interface normal modes for each substructure \(S_i, i=9,\ldots ,16\). Table 4.1 characterizes the two models in terms of the number of fixed-interface normal modes of each substructure, total number of interface degrees of freedom, and total number of degrees of freedom. In summary, only 16 generalized coordinates corresponding to the dominant fixed-interface normal modes are retained for all substructures in Model-1, while 56 generalized coordinates are considered in Model-2. The dimension of the corresponding reduced-order models represents a 72\(\%\) and 62\(\%\) reduction with respect to the unreduced model, respectively.

Table 4.1 Characterization of models with reduced number of fixed-interface normal modes

Table 4.2 shows the errors between the modal frequencies using the unreduced reference finite element model and the modal frequencies computed using the reduced-order models generated from Model-1 and Model-2. The reduced-order models are based on dominant fixed-interface normal modes. It is seen that the errors are quite small for the reduced-order model generated from Model-2. The errors for the lowest 10 modes fall below 0.05\(\%\). For Model-1, an increase in the errors is observed for modes 7–10, with a range of relative errors between 3\(\%\) and 10\(\%\).

Table 4.2 Modal frequency error: unreduced reference model and reduced-order models generated from Model-1 and Model-2. Models based on dominant fixed-interface normal modes
Fig. 4.6

MAC-values between the mode shapes computed from the unreduced finite element model and from the reduced-order model based on dominant normal modes. Reduced-order model generated from Model-1

Fig. 4.7

MAC-values between the mode shapes computed from the unreduced finite element model and from the reduced-order model based on dominant normal modes. Reduced-order model generated from Model-2

The corresponding matrices of MAC-values between the first 10 modal vectors computed from the unreduced finite element model and from the reduced-order models are shown in terms of a 3-D representation in Figs. 4.6 and 4.7, respectively. It is seen that, for Model-2, the values at the diagonal terms are practically one and zero at the off-diagonal terms. Thus, the modal vectors of this reduced-order model are consistent with those of the unreduced model. In contrast, for Model-1, some of the diagonal terms are less than one, while some of the off-diagonal terms exhibit values greater than zero. Thus, the reduced-order model generated from Model-1 is not able to accurately characterize the higher order modes of the unreduced model. Note that this model is an extreme case, since it includes the minimum number of fixed-interface normal modes at each substructure.
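For reference, the MAC-values mentioned above follow from the standard modal assurance criterion between corresponding mode shapes; a minimal sketch:

```python
import numpy as np

def mac_matrix(phi_a, phi_b):
    """Modal assurance criterion between two sets of (real) mode shapes.

    phi_a, phi_b : arrays with one mode shape per column
    """
    numerator = np.abs(phi_a.T @ phi_b) ** 2
    denominator = np.outer(np.sum(phi_a**2, axis=0), np.sum(phi_b**2, axis=0))
    return numerator / denominator   # entries close to one indicate consistent modes
```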

6.3 Reduced-Order Model Based on Dominant and Residual Fixed-Interface Normal Modes

The objective of this section is to evaluate the effect of residual normal modes on the accuracy of the spectral properties of the reduced-order models considered in the previous section. Table 4.3 shows the relative errors between the modal frequencies of the unreduced model and the modal frequencies of the reduced-order models related to Model-1 and Model-2.

Table 4.3 Modal frequency error: unreduced reference model and reduced-order models generated from Model-1 and Model-2. Models based on dominant and residual normal modes

Comparing Tables 4.2 and 4.3, it is first observed that the consideration of residual normal modes gives much better solution accuracy than the formulation based on dominant modes only. In fact, for the first modal frequencies, the difference in the errors is about three orders of magnitude for both models. It is also observed that the errors for modes 7–10 related to Model-1 decrease by about two orders of magnitude when the effect of residual modes is considered. The errors for these higher order modes are less than 0.06\(\%\).

Fig. 4.8

MAC-values between the mode shapes computed from the unreduced finite element model and from the reduced-order model based on dominant and residual normal modes. Reduced-order model generated from Model-1

Fig. 4.9

MAC-values between the mode shapes computed from the unreduced finite element model and from the reduced-order model based on dominant and residual normal modes. Reduced-order model generated from Model-2

The related matrices of MAC-values between the first 10 modal vectors computed from the unreduced finite element model and from the reduced-order models are shown in Figs. 4.8 and 4.9. It is seen that the MAC-values are practically one at the diagonal terms and zero at the off-diagonal terms for both models. Thus, the reduced-order model generated from Model-1 is consistent with the unreduced model if the residual normal modes are considered in the formulation. Recall that Model-1 is an extreme case where the minimum number of fixed-interface modes is considered. As previously pointed out, this reduced-order model is not consistent when only the dominant modes are taken into account. Also, note that the reduced-order model generated from Model-2 is already consistent with the unreduced model by considering only the dominant normal modes. The effect of the residual normal modes on this reduced-order model is to reduce the errors of the spectral properties even further. In conclusion, the formulation based on residual normal modes greatly outperforms the formulation based on dominant modes in terms of accuracy.

6.4 Reduced-Order Model Based on Interface Reduction

The effect of interface reduction is analyzed in this section. To this end, 20 interface modes out of the 96 interface degrees of freedom are retained in the analysis. Note that the interface region corresponds to the nodes where the beam and column elements are connected at each floor. As a result the reduced-order model corresponding to Model-1 includes a total of 36 modal coordinates, while 76 modal coordinates characterize Model-2. The dimension of these reduced-order models represents a 91\(\%\) and 81\(\%\) reduction with respect to the unreduced model, respectively. The predicted natural frequencies resulting from both reduced-order models are presented in Table 4.4, and they are compared with the frequencies computed from the unreduced model as a reference. The reduced-order models are based on dominant normal modes and interface reduction.

Table 4.4 Modal frequency error: unreduced reference model and reduced-order models generated from Model-1 and Model-2. Models based on dominant normal modes and interface reduction

It is seen that the errors reported in this table are similar to the ones shown in Table 4.2. In fact, the errors are very small for the reduced-order model generated from Model-2, while relative errors between 3\(\%\) and 10\(\%\) are observed for the higher-order modes corresponding to the reduced-order model generated from Model-1. Similar conclusions are obtained for the mode shapes. In other words, the contribution of the first 20 interface modes seems to be adequate in the sense that the accuracy of the reduced-order models remains invariant with this number of interface modes, as the selected interface modes are able to capture the relevant deformation at the interfaces. Validation calculations show that lower interface modes (lower than the 20th interface mode) cannot be neglected for this model. Note that a small number of interface degrees of freedom are present at the interfaces, and, therefore, the number of retained interface modes cannot be too small.

The effect of residual normal modes on the reduced-order models that consider interface reduction is similar to the one observed in the previous section. That is, the errors of the spectral properties are significantly reduced. This effect can be seen in Table 4.5. Note that the errors are virtually the same as the ones reported in Table 4.3.

Table 4.5 Modal frequency error: unreduced reference model and reduced-order models generated from Model-1 and Model-2. Models based on dominant and residual modes, and interface reduction

The matrices of MAC-values between the first 10 modal vectors computed from the unreduced finite element model and from the reduced-order models based on dominant and residual normal modes and interface reduction are shown in Figs. 4.10 and 4.11. Clearly, the reduced-order models are consistent with the unreduced model.

Fig. 4.10

MAC-values between the mode shapes computed from the unreduced finite element model and from the reduced-order model based on dominant and residual normal modes and interface reduction. Reduced-order model generated from Model-1

Fig. 4.11

MAC-values between the mode shapes computed from the unreduced finite element model and from the reduced-order model based on dominant and residual normal modes and interface reduction. Reduced-order model generated from Model-2

To get more insight into the interface modes, the first two characteristic constraint modes are shown in Figs. 4.12 and 4.13. Recall that these modes are obtained by transforming the interface modes \({\varvec{\varUpsilon }}_{I}\) into finite element coordinates as indicated in Sect. 1.6.2. The characteristic constraint modes \({\varvec{\varUpsilon }}_{CC}\) provide the principal modes of deformation for the interface, since they capture some characteristic physical motion in the interface region. It is seen that the first characteristic constraint mode captures much of the interface-induced motion seen in the first global mode, whereas the second characteristic constraint mode resembles the second global mode. Thus, the importance of considering an adequate number of interface modes in constructing the reduced-order model is evident.

Fig. 4.12

First characteristic constraint mode

Fig. 4.13

Second characteristic constraint mode

6.5 Reliability Problem

To control serviceability, the performance of the structure is characterized in terms of the probability of occurrence of a failure event related to the maximum relative displacement between the top of the model and the ground, denoted by \(\delta (t,\mathbf z ,\varvec{\theta })\). Mathematically, the failure event \(F(\mathbf z ,{\varvec{\theta }})\) is defined as \( F(\mathbf z ,{\varvec{\theta }}) = \{ d( \mathbf z ,{\varvec{\theta }}) \ge 1 \} \) where the demand function is given by

$$\begin{aligned} d(\mathbf z ,{\varvec{\theta }})= \underset{t\in [0,T]}{\max }\left( \left| \frac{ \delta (t,\mathbf z ,{\varvec{\theta }}) }{\delta ^*}\right| \right) \end{aligned}$$
(4.17)

where \(\delta ^*\) is the acceptable threshold of the maximum relative displacement of the eighth floor with respect to the ground. Of course, additional responses can be considered in the definition of the failure event. Recall that in the previous expressions, \({\varvec{\theta }}\) represents the vector of uncertain system parameters. In this regard, it is assumed that the stiffness properties of the column elements, represented by the modulus of elasticity, are uncertain. Specifically, the modulus of elasticity of the column elements of the different floors is modeled as a discrete homogeneous isotropic log-normal random field \(\mathbf r _E\) with components \({E}_i, i=1,\ldots ,8\), mean value \( \mu _E \mathbf 1 \) where \(\mathbf 1 = <1,\ldots ,1>^T\), standard deviation \(\sigma _E\), and correlation function

$$\begin{aligned} R(\varDelta ) = \exp (-\alpha \varDelta ^2) \end{aligned}$$
(4.18)

where the variable \(\varDelta \) represents a distance and the parameter \(\alpha \) is related to the correlation length of the random field. The corresponding covariance matrix of the random field is given by

$$\begin{aligned} {\varvec{\varSigma }}_E = \sigma _E^2 \mathbf R \end{aligned}$$
(4.19)

in which \(\mathbf R \) is the correlation matrix with coefficients \({R}_{ij} = R(\varDelta _{ij}), i,j=1,\ldots ,8\), where \(\varDelta _{ij}\) is the distance between the centroids of floors i and j. Then, the log-normal random field can be expressed as [23, 30, 32, 64]

$$\begin{aligned} \mathbf r _E = \exp (\mu _N \mathbf 1 + {\varvec{\varPhi }}_N {\varvec{\varLambda }}_{N}^{1/2} \mathbf y ) \end{aligned}$$
(4.20)

where \(\mu _N \mathbf 1 \) represents the mean value of the underlying Gaussian random field with

$$\begin{aligned} \mu _N = \ln (\mu _E) - { 1 \over { 2}} \ln \left( 1 + {\sigma _E^2 \over { \mu _E^2}}\right) , \end{aligned}$$
(4.21)

while \({\varvec{\varPhi }}_N\) and \({\varvec{\varLambda }}_{N}^{1/2}\) are obtained from the spectral decomposition of the covariance matrix of the underlying Gaussian random field \({\varvec{\varSigma }}_N\), with coefficients

$$\begin{aligned} {\varvec{\varSigma }}_{Nij} = \ln \left( 1 + {\sigma _E^2 \mathbf R _{ij} \over { \mu _E^2}}\right) \; \; , \; \; i,j=1,\ldots ,8 \; , \end{aligned}$$
(4.22)

and \(\mathbf y \) is a vector of independent standard normal random variables. The mean value and standard deviation of the log-normal random field are set equal to \(\mu _E = 2.0 \times 10^{10}\) N/m\(^2\) and \(\sigma _E = 3.0 \times 10^{9}\) N/m\(^2\), respectively. Thus, the corresponding coefficient of variation of the random field is equal to 15\(\%\). A mildly correlated random field is considered by selecting an appropriate value of \(\alpha \). The corresponding correlation function is shown in Fig. 4.14.
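A realization of the random field defined by Eqs. (4.18)–(4.22) can be generated as sketched below (Python/NumPy); the matrix of centroid distances and the value of \(\alpha\) are inputs of the actual model.

```python
import numpy as np

def sample_lognormal_field(mu_E, sigma_E, alpha, distances, rng):
    """Draw one realization of the discrete log-normal field, Eqs. (4.18)-(4.22).

    mu_E, sigma_E : mean and standard deviation of the field
    alpha         : parameter of the correlation function R(Delta) = exp(-alpha Delta^2)
    distances     : matrix of distances Delta_ij between floor centroids
    """
    R = np.exp(-alpha * distances**2)                                # Eq. (4.18)
    mu_N = np.log(mu_E) - 0.5 * np.log(1.0 + sigma_E**2 / mu_E**2)   # Eq. (4.21)
    Sigma_N = np.log(1.0 + (sigma_E**2 / mu_E**2) * R)               # Eq. (4.22)
    eigval, Phi_N = np.linalg.eigh(Sigma_N)                          # spectral decomposition
    y = rng.standard_normal(len(eigval))                             # independent standard normals
    return np.exp(mu_N + Phi_N @ (np.sqrt(np.maximum(eigval, 0.0)) * y))  # Eq. (4.20)
```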

Fig. 4.14

Correlation function of the random field. First application problem

The model is excited horizontally by a ground acceleration modeled as indicated in Sect. 4.5. The moment magnitude and epicentral distance are taken as \(M = 7.0\) and \(r = 25\) km, respectively. The near-field pulse parameters are fixed at their nominal values as suggested in [44], i.e., \(A_p = 27.11\) (cm/s), \(f_p = 0.53\) (Hz), \(\nu _p = 0.0\) (rad), and \(\gamma _p=1.8\). The envelope function to be used is given by [61]

$$\begin{aligned} e(t,M,r) = a_1 \left( \frac{t}{2 T} \right) ^{a_2} \cdot \exp \left( -a_3 \cdot \frac{t}{2 T} \right) \end{aligned}$$
(4.23)

where T corresponds to the duration of the ground motion and the parameters \(a_1\), \(a_2\) and \(a_3\) are defined as

$$\begin{aligned} a_1 = \left( \frac{e}{\lambda } \right) ^{a_2} \; , \; a_2 = \frac{- \lambda \ln (\eta )}{1 + \lambda \cdot (\ln (\lambda )-1)} \; , \; a_3 = \frac{ a_2}{ \lambda } \end{aligned}$$
(4.24)

with parameter values equal to \(\lambda = 0.2\) and \(\eta = 0.05\). The sampling interval and the duration of the excitation are taken equal to \(\varDelta t =0.01\) s and \(T=30\) s, respectively. Thus, the characterization of the stochastic excitation involves more than 3,000 uncertain parameters in this case (white noise sequence). Clearly, the corresponding reliability problem is a high-dimensional problem.
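A direct transcription of Eqs. (4.23) and (4.24) is given below (Python/NumPy) for the parameter values quoted above.

```python
import numpy as np

def envelope(t, T, lam=0.2, eta=0.05):
    """Envelope function e(t, M, r) of Eqs. (4.23)-(4.24) for given lambda and eta."""
    a2 = -lam * np.log(eta) / (1.0 + lam * (np.log(lam) - 1.0))   # Eq. (4.24)
    a1 = (np.e / lam) ** a2
    a3 = a2 / lam
    s = t / (2.0 * T)
    return a1 * s**a2 * np.exp(-a3 * s)                            # Eq. (4.23)
```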

6.6 Remarks on the Use of Reduced-Order Models

It is noted that even though subset simulation is an effective advanced simulation technique, the reliability analysis can be computationally very demanding due to the large number of dynamic analyses required during the simulation process (evaluation of the indicator function). Thus, the repetitive generation of reduced-order models for different values of the uncertain model parameters \({\varvec{\theta }}\) can be computationally expensive due to the substantial computational overhead that arises at the substructure level. To cope with this difficulty, reduced-order models together with the parametrization schemes introduced in Chaps. 2 and 3 are used to estimate the system reliability. With respect to Chap. 2 and based on the previous characterization of the uncertain parameter, it is clear that substructures \(S_j, j=1,\ldots ,8,\) depend on the model parameters related to the modulus of elasticity, while substructures \(S_j, j=9,\ldots ,16,\) are independent of the model parameters. For implementation purposes, the model parameters associated with substructures \(S_j, j=1,\ldots ,8,\) are defined as \(\theta _j = {E}_j/\mu _E\). The corresponding parametrization functions are given by \(h^j (\theta _j) = \theta _j\) and \(g^j(\theta _j) = 1\). The different values that the model parameters may take during the simulation process, i.e., subset simulation, correspond to different realizations of the discrete log-normal random field.

6.7 Support Points

When reduced-order models based on interface reduction are considered, interface modes need to be evaluated. The approximation of these modes involves a set of support points in the model parameter space. These points can be generated by a number of sampling methods as indicated in Sect. 3.1.5. In this section, an adaptive scheme where the nominal and support points are updated during the different stages of subset simulation is introduced. The basic idea is to use support points lying in the vicinity of the intermediate failure domains in order to increase the accuracy of the approximate interface modes.

The selected support points at a given stage of subset simulation are Latin Hypercube samples from a normal distribution whose definition is based on samples from the previous stage. Specifically, at stage k of subset simulation, \(N_s= p_0 N\) conditional samples that lie in \(F_k\) (\(\{\varvec{\theta }_{k-1,i}, i=1,\ldots ,N_s\}\)) are obtained. Based on these samples, the sample mean \(\bar{\varvec{\theta }}_k\) and the sample covariance matrix \(\varSigma _k\) are computed as

$$\begin{aligned} \bar{\varvec{\theta }}_k = {1 \over { N_s}} \sum _{i=1}^{N_s} {\varvec{\theta }}_{k-1,i} \end{aligned}$$
(4.25)

and

$$\begin{aligned} \varSigma _k = {1 \over { N_s}} \sum _{i=1}^{N_s} [({\varvec{\theta }}_{k-1,i} - \bar{\varvec{\theta }}_k) ({\varvec{\theta }}_{k-1,i} - \bar{\varvec{\theta }}_k)^T ] \end{aligned}$$
(4.26)

Then, the support points to be used during stage k of subset simulation are generated from the normal distribution \(N(\bar{\varvec{\theta }}_k, \beta _k \varSigma _k)\), where \(\beta _k\) is a user-selected parameter scaling the covariance matrix \(\varSigma _k\). Such a parameter is problem-dependent. Additional conditional samples can also be used for the purpose of defining the sample mean and covariance matrix. In this case, conditional samples can be simulated from the available \(N_s\) samples by the modified Metropolis algorithm [5]. The complete set of conditional samples is then used to characterize the normal distribution from which the support points are generated. The support points generated by the proposed adaptive scheme spread over the important region of failure in the uncertain parameter space for the examples that are considered in this section. To control the accuracy of the global surrogate model, the support points correspond to direct evaluation of the interface modes. In this manner, the propagation of errors from previous stages is avoided. In addition, to consider only interpolations, the point at which the reduced-order model needs to be recomputed, \({\varvec{\theta }}^*\), should belong to the \(n_{\theta }\)-dimensional convex hull of the support points [3, 11]. If this condition is not satisfied, a direct evaluation of the interface modes is required for updating the reduced-order model. In the case of complex failure domains, i.e., when the failure samples are distributed in disjoint sets, a cluster analysis can be performed in each stage of subset simulation to group the samples into clusters [26]. In this way, support points can be generated for each cluster. The choice of which set of support points is used for a given sample is based on its distance to the center of each cluster [43]. The use of cluster analyses is not necessary for the numerical examples considered in this chapter.
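A sketch of this support-point generation step is given below, using Latin hypercube sampling from SciPy; the conditional samples `theta_cond` and the scaling parameter `beta` come from the current stage of subset simulation, and the sample covariance matrix is assumed to be positive definite.

```python
import numpy as np
from scipy.stats import norm, qmc

def support_points(theta_cond, n_points, beta, seed=0):
    """Latin hypercube samples of N(theta_bar_k, beta_k * Sigma_k), Eqs. (4.25)-(4.26).

    theta_cond : array of conditional samples, one sample per row
    """
    theta_bar = theta_cond.mean(axis=0)                     # Eq. (4.25)
    Sigma = np.cov(theta_cond, rowvar=False, bias=True)     # Eq. (4.26)
    L = np.linalg.cholesky(beta * Sigma)                    # factor of the scaled covariance
    u = qmc.LatinHypercube(d=theta_cond.shape[1], seed=seed).random(n_points)
    z = norm.ppf(u)                                         # independent standard normal scores
    return theta_bar + z @ L.T                              # correlated normal support points
```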

Alternatively, the support points can be defined in terms of the Markov chains generated from the conditional samples at each stage of subset simulation. As previously pointed out, at each stage of subset simulation, a number of conditional samples that lie in \(F_k\) are already available. Starting from these samples, additional samples are simulated through Markov chain Monte Carlo simulation using an adaptive conditional sampling algorithm [31]. In each adaptation step, a number of seeds are chosen at random from the available conditional samples. After running the algorithm for a number of adaptation steps, a set of support points, to be used during the current stage of subset simulation, can be obtained.

Fig. 4.15

Probability of failure in terms of the threshold level. 1: unreduced model. 2: reduced-order model based on dominant fixed-interface normal modes. 3: reduced-order model based on dominant and residual fixed-interface normal modes. 4: reduced-order model based on dominant and residual fixed-interface normal modes and interface reduction

6.8 Reliability Results

Figure 4.15 shows the probability of failure in terms of the threshold by using the unreduced model and several reduced-order models generated from Model-2. Three reduced-order models are considered in the figure, namely: the reduced-order model based on dominant fixed-interface normal modes; the model based on dominant and residual fixed-interface normal modes; and the model based on dominant and residual fixed-interface normal modes and interface reduction. In the case of interface reduction, no approximations are considered for the interface modes. In other words, they are directly evaluated during the simulation process. However, partial invariant conditions are assumed for the transformation matrix, which accounts for the contribution of the residual fixed-interface normal modes (see Sects. 2.3.6 and 3.3.5). The curves in the figure correspond to an average of five independent runs of subset simulation. The figure illustrates the whole trend of the probability of failure in terms of different thresholds, not only for one target value. It is observed that the system reliability obtained from the unreduced model coincides with the one obtained from the reduced-order models over the whole range of thresholds, even for low failure probabilities, i.e., of the order of \(10^{-4}\). Note that in this case, the reduced-order model based on dominant fixed-interface normal modes is adequate in the context of the reliability problem under consideration.

The effect of approximate interface modes on the accuracy of the reliability results is shown in Fig. 4.16. This figure depicts the probability of failure in terms of the threshold by using different reduced-order models. The reduced-order models, which are generated from Model-2, are the following: reduced-order model based on dominant normal modes and interface reduction with approximate interface modes; and reduced-order model based on dominant and residual normal modes and interface reduction with approximate interface modes. For comparison purposes, the results corresponding to the unreduced model are also included in the figure. An average of five independent runs of subset simulation is considered. The number of support points considered in the adaptive scheme for approximating the interface modes is 36, where a linear interpolation scheme is used (see Sect. 3.1). The comparison of the reliability estimates obtained by the unreduced model and reduced-order models shows an excellent correspondence. Thus, the approximate interface modes are able to accurately predict the response of the system and, consequently, its reliability.

Fig. 4.16

Probability of failure in terms of the threshold level. 1: unreduced model. 2: reduced-order model based on dominant fixed-interface normal modes and approximate interface modes. 3: reduced-order model based on dominant and residual fixed-interface normal modes and approximate interface modes

6.9 Computational Cost

The computational effort involved in the reliability analysis is shown in Table 4.6. Specifically, this table shows the speedup (rounded to the nearest integer) achieved by different reduced-order models, which are described in Table 4.7. In this context, the speedup is the ratio of the execution time using the unreduced model to the execution time using a reduced-order model.

Table 4.6 Speedup attained for different models. First application problem

The speedups reported in the table are based on the implementation of the reliability analysis on a four-core computer (Intel Core i7 processor). The analyses are carried out using an in-house code based on a combined Matlab-C++ platform. First, it is noted that a speedup equal to 4 is obtained by using the reduced-order model based on dominant normal modes. This value reduces to 2 when interface and residual normal modes are considered. This is mainly due to the update process of the interface modes and the consideration of the residual normal modes during the simulation process. However, when approximate interface modes are considered, the corresponding speedups increase to 3. Thus, the effect of considering approximate interface modes is also positive in terms of the numerical implementation of the reliability analysis.

Table 4.7 Description of reduced-order models. First application problem

Based on the previous results, it is seen that the use of reduced-order models for estimating the reliability of the system is rather effective. In fact, a reduction in computational effort by a factor between 2 and 4 is achieved without compromising the accuracy of the reliability estimates. It is expected that a more significant effect will be obtained for more involved finite element models (see next Application Problem).

7 Application Problem No. 2

The objective of this example is to explore the effectiveness of reduced-order models based on interface reduction. In particular, an involved nonlinear finite element building model is considered.

7.1 Structural Model

The three-dimensional finite element building model shown in Fig. 4.17 is considered as the second application problem. The application involves a 55-story building model with a total height of 190 m. The plan view and the dimensions of a typical floor are shown in Fig. 4.18. The building has a reinforced concrete core of shear walls and a reinforced concrete perimeter moment-resisting frame as shown in Fig. 4.18. The columns of the perimeter have a circular cross section. The floors and walls are modeled by shell elements of different thicknesses. Additionally, beam and column elements are used in the finite element model, which has 89,000 degrees of freedom. Material properties are given by the Young’s modulus \(E = 2.45 \times 10^{10}\) N/m\(^2\), mass density \(\rho = 2,500\) kg/m\(^3\), and Poisson’s ratio \(\mu =0.3\). Finally, 5\(\%\) of critical damping is added to the model.

Fig. 4.17

Three-dimensional finite element building model. Example No. 2

Fig. 4.18

Typical floor plan of the 55-story building model

For an improved performance, the structural system is reinforced with a total of 45 nonlinear vibration control devices placed in two different configurations, i.e., in the longitudinal (x) and transverse (y) directions. A typical configuration of the vibration control devices, at the floors where they are located, is shown in Fig. 4.19. Each longitudinal device consists of brace and plate elements with a series of metallic U-shaped flexural plates (UFPs) located between the plates, as shown in Fig. 4.20 [35]. On the other hand, each transverse device consists of concrete walls with the UFPs located between them, as illustrated in Fig. 4.20.

Fig. 4.19

Typical configuration of vibration control devices

Fig. 4.20

Upper figure: Model of vibration control device in the longitudinal direction. Lower figure: Model of vibration control device in the transverse direction

Each UFP exhibits a one-dimensional hysteretic type of nonlinearity modeled by the restoring force law

$$\begin{aligned} f_{NL}(t) = \alpha \; k_e \; \delta (t) + (1 - \alpha ) \; k_e U^y \; z(t) \end{aligned}$$
(4.27)

where \(k_e\) is the pre-yield stiffness, \(U^y\) is the yield displacement, \(\alpha \) is the factor that defines the extent to which the restoring force is linear, z(t) is a dimensionless hysteretic variable, and \(\delta (t)\) is the relative displacement between the upper and lower surfaces of the flexural plates. The hysteretic variable z(t) satisfies the first-order nonlinear differential equation

$$\begin{aligned} \dot{z}(t) = {\dot{\delta }(t)} \left[ \beta _1 - z(t)^2 [ \beta _2 + \beta _3 \text {sgn}( z(t) \dot{\delta }(t))] \right] / U^y \end{aligned}$$
(4.28)

where \(\beta _1\), \(\beta _2\) and \(\beta _3\) are dimensionless quantities that characterize the properties of the hysteretic behavior, \(\text {sgn}( \cdot )\) is the sign function, and all other terms have been previously defined. The quantities \(\beta _1\), \(\beta _2\), and \(\beta _3\) correspond to scale, loop fatness, and loop pinching parameters, respectively. The above characterization of the hysteretic behavior corresponds to the Bouc–Wen type model [9, 10, 54]. The following values of the dissipation model parameters are used in this case: \(k_e = 2.5 \times 10^6\) N/m; \(U^y = 5 \times 10^{-3}\) m; \(\alpha =0.1\); \(\beta _1 =1.0\); \(\beta _2 =0.5\); and \(\beta _3 =0.5\). A typical displacement-restoring force curve of one of the U-shaped flexural plates under seismic load is shown in Fig. 4.21. The nonlinear restoring force of each device acts between the floors where it is placed, along the same orientation of the device.
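For illustration, a minimal sketch of the device behavior described by Eqs. (4.27) and (4.28) is given below. It uses the parameter values listed above, a simple forward-Euler integration of the hysteretic variable, and an imposed harmonic displacement history whose amplitude, period, and time step are assumed for illustration only.

```python
import numpy as np

# Dissipation model parameters (Sect. 4.7.1)
k_e, U_y, alpha = 2.5e6, 5.0e-3, 0.1     # pre-yield stiffness [N/m], yield displ. [m], linearity factor
beta1, beta2, beta3 = 1.0, 0.5, 0.5      # scale, loop fatness, loop pinching (dimensionless)

def restoring_force(delta, z):
    """Restoring force of one UFP, Eq. (4.27)."""
    return alpha * k_e * delta + (1.0 - alpha) * k_e * U_y * z

def z_rate(z, delta_dot):
    """Rate of the hysteretic variable, Eq. (4.28)."""
    return delta_dot * (beta1 - z**2 * (beta2 + beta3 * np.sign(z * delta_dot))) / U_y

# Assumed relative displacement history between the plates (illustrative only)
dt, T = 1.0e-4, 4.0                      # time step [s], duration [s]
t = np.arange(0.0, T, dt)
delta = 0.02 * np.sin(2.0 * np.pi * t)   # relative displacement [m]
delta_dot = np.gradient(delta, dt)

# Forward-Euler integration of the hysteretic variable
z = np.zeros_like(t)
for k in range(len(t) - 1):
    z[k + 1] = z[k] + dt * z_rate(z[k], delta_dot[k])

f_nl = restoring_force(delta, z)         # traces a loop similar to Fig. 4.21
```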

Fig. 4.21
figure 21

Typical displacement-restoring force curve of one of the U-shaped flexural plates

7.2 Definition of Substructures

The model is subdivided into 81 linear substructures \(S_i,i=1,\ldots ,81,\) as shown in Fig. 4.22. They comprise three types of substructures, namely: the core of shear walls located between two floors (\(S_i, i=1,\ldots ,27\)); the slabs of the different floors (\(S_i, i=28,\ldots ,54\)); and the circular columns of the perimeter frame located between two floors together with the corresponding slab of the intermediate floor (\(S_i, i=55,\ldots ,81\)). Figure 4.23 depicts a typical substructure of each type. In addition, there are 45 nonlinear substructures composed of the nonlinear vibration control devices defined in the previous section. With this subdivision, the total number of internal degrees of freedom is equal to 65,300, while 23,700 degrees of freedom are present at the interfaces. A small number of fixed-interface normal modes is selected for the model; in particular, only 252 fixed-interface normal modes are retained. In addition, 100 interface modes, which represent about 0.5\(\%\) of the total number of interface degrees of freedom, are used in the model. Thus, the total number of generalized coordinates of the reduced-order model represents a reduction of more than 99\(\%\) with respect to the unreduced finite element model.
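The bookkeeping behind these reduction figures can be checked directly from the numbers quoted above; the following lines are a simple verification.

```python
n_full = 89_000            # degrees of freedom of the unreduced finite element model
n_interface_dofs = 23_700  # interface degrees of freedom
n_fixed_modes = 252        # retained dominant fixed-interface normal modes
n_interface_modes = 100    # retained interface modes

n_generalized = n_fixed_modes + n_interface_modes   # 352 generalized coordinates
print(n_interface_modes / n_interface_dofs)         # ~0.004, i.e., about 0.5% of the interface DOFs
print(1.0 - n_generalized / n_full)                 # ~0.996, i.e., more than 99% reduction
```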

Fig. 4.22
figure 22

Substructures of the finite element model. Application problem No. 2

Fig. 4.23
figure 23

Typical substructure of each type (shear wall, slab, perimeter moment frame and slab)

Fig. 4.24
figure 24

Relative frequency errors between the modal frequencies of the full finite element model and of the reduced-order model based on dominant normal modes and interface reduction

Figure 4.24 shows the relative errors between the modal frequencies of the unreduced finite element model and those of the reduced-order model based on dominant modes and interface reduction. The first 10 modes are considered for reference purposes. The corresponding MAC-values between the first 10 modal vectors computed from the unreduced finite element model and from the reduced-order model are shown in Fig. 4.25. It is seen that the errors in the modal frequencies are quite small. Similar accuracy is observed for the modal vectors: the diagonal terms of the matrix of MAC-values are virtually one, while the off-diagonal terms are virtually zero. Consequently, the mode shapes of the reduced-order model are consistent with the mode shapes of the unreduced model. Thus, the reduced-order model is able to accurately characterize the important modes of the unreduced finite element model.
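The MAC comparisons of Figs. 4.25 and 4.27 follow the standard definition of the modal assurance criterion. A minimal sketch of its evaluation is given below; the mode-shape matrices used in the example are random placeholders, not data from the actual models.

```python
import numpy as np

def mac_matrix(Phi_a, Phi_b):
    """Modal assurance criterion between two sets of mode shapes.

    Phi_a, Phi_b: (n_dof, n_modes) arrays whose columns are mode shapes, e.g.
    the first 10 modes of the unreduced and the reduced-order models.
    """
    num = np.abs(Phi_a.T @ Phi_b) ** 2
    den = np.outer(np.sum(Phi_a**2, axis=0), np.sum(Phi_b**2, axis=0))
    return num / den

# Placeholder mode shapes: two nearly identical sets give a MAC matrix close to identity
rng = np.random.default_rng(0)
Phi_1 = rng.standard_normal((89_000, 10))
Phi_2 = Phi_1 + 1e-6 * rng.standard_normal((89_000, 10))
print(np.round(mac_matrix(Phi_1, Phi_2), 3))
```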

Fig. 4.25
figure 25

MAC-values between the mode shapes computed from the unreduced finite element model and from the reduced-order model based on dominant normal modes and interface reduction

The effect of considering the contribution of the residual normal modes in the generation of the reduced-order model is shown in the following figures. The relative errors between the modal frequencies of the unreduced finite element model and those of the reduced-order model are shown in Fig. 4.26. The corresponding matrix of MAC-values between the first 10 modal vectors computed from the unreduced finite element model and from the reduced-order model is shown in Fig. 4.27. The effect of considering the residual normal modes in the analysis is evident: the frequency errors are reduced by more than four orders of magnitude with respect to those obtained from the reduced-order model based on dominant modes only. Thus, the contribution of the residual normal modes significantly enhances the accuracy of the reduced-order model. In addition, the matrix of MAC-values indicates that both models are consistent. It is important to stress that the construction of the reduced-order model is carried out offline, that is, before the reliability analysis takes place. Thus, this process is independent of the reliability analysis, which can be computationally quite demanding.

Fig. 4.26
figure 26

Relative frequency errors between the modal frequencies of the full finite element model and of the reduced-order model. Reduced-order model based on dominant and residual normal modes and interface reduction

Fig. 4.27
figure 27

MAC-values between the mode shapes computed from the unreduced finite element model and from the reduced-order model. Reduced-order model based on dominant and residual normal modes and interface reduction

7.3 System Reliability

The failure event is formulated as a first excursion problem during the time of analysis as indicated in Sect. 4.2. For illustration purposes, the structural response to be controlled is the displacement at the top of the building. Thus, the corresponding demand function is characterized as

$$\begin{aligned} d(\mathbf z ,{\varvec{\theta }})= \underset{t\in [0,T]}{\max }\left( \left| \frac{ \delta (t,\mathbf z ,{\varvec{\theta }}) }{\delta ^*}\right| \right) \end{aligned}$$
(4.29)

where \(\delta ^*\) is the acceptable threshold of the maximum relative displacement of the top of the building with respect to the ground. It is expected that for the model under consideration, the stiffness of the core of shear walls may have an important effect on the system response. Thus, the variability of such stiffness may affect the reliability of the model. Consequently, for reliability considerations, the modulus of elasticity of the shell elements that model the core of shear walls is treated as uncertain. The corresponding stiffness of the core of shear walls, represented by the modulus of elasticity, is modeled as a discrete homogeneous isotropic log-normal random field along the height of the building. The discretization of the random field is carried out every two floors, resulting in a discrete field of 27 components, i.e., \(\mathbf{r}_E = (E_i,\ i=1,\ldots ,27)\). The mean value and standard deviation of the log-normal random field are set equal to \(\mu _E = 2.0 \times 10^{10}\) N/m\(^2\) and \(\sigma _E = 3.0 \times 10^{9}\) N/m\(^2\), respectively. The corresponding correlation function, which models a mildly correlated random field, is shown in Fig. 4.28.
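A minimal sketch of how the demand function of Eq. (4.29) and the corresponding first excursion failure indicator can be evaluated from a roof-displacement time history is shown below; the response history and the threshold value are placeholders, not results of the actual model.

```python
import numpy as np

def demand(delta_history, delta_star):
    """Normalized demand of Eq. (4.29): maximum over time of |delta(t)| / delta*."""
    return np.max(np.abs(delta_history)) / delta_star

def failed(delta_history, delta_star):
    """First excursion failure event, cf. Eq. (4.2): demand larger than one."""
    return demand(delta_history, delta_star) > 1.0

# Placeholder roof-displacement history and threshold (illustrative values only)
t = np.linspace(0.0, 30.0, 3001)
delta_t = 0.35 * np.sin(0.8 * t) * np.exp(-0.05 * t)   # relative displacement [m]
print(demand(delta_t, delta_star=0.40), failed(delta_t, delta_star=0.40))
```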

Fig. 4.28
figure 28

Correlation function of the random field. Second application problem

The characterization of the log-normal random field is similar to the one considered in Sect. 4.6.5. Based on the previous definition of the substructures and the characterization of the uncertainty, it is clear that the substructures related to the core of shear walls (\(S_j, j=1,\ldots ,27\)) depend on the model parameters associated with the modulus of elasticity. For implementation purposes, the model parameters related to substructures \(S_j, j=1,\ldots ,27,\) are defined as \(\theta _j ={E}_j/ \mu _E\). The related parametrization functions are given by \(h^j (\theta _j) = \theta _j\) and \(g^j(\theta _j) = 1\). The other substructures are independent of the model parameters. The different values that the model parameters assume during the simulation process correspond to different realizations of the discrete log-normal random field. The same excitation used in the previous example is considered in the present application. Note that the characterization of the stochastic excitation involves more than 3,000 random variables. This number of random variables, plus the 27 model parameters, indicates that the reliability estimation constitutes a high-dimensional reliability problem. Due to the dimension and complexity of the finite element model at hand, it is expected that the use of model reduction techniques and parametrization schemes will have an important effect on the computational cost of the reliability analysis. Such an effect is illustrated in Sect. 4.7.5.
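For illustration, one realization of the uncertain core stiffness and the associated model parameters can be generated as sketched below. The marginal statistics are those given above; since the correlation function of Fig. 4.28 is not reproduced here, an exponential correlation with an assumed correlation length is used as a stand-in for the mildly correlated field.

```python
import numpy as np

mu_E, sigma_E = 2.0e10, 3.0e9   # mean and standard deviation of the field [N/m^2]
n = 27                          # one field component every two floors

# Parameters of the underlying Gaussian field matching the log-normal marginal
sigma_ln = np.sqrt(np.log(1.0 + (sigma_E / mu_E) ** 2))
mu_ln = np.log(mu_E) - 0.5 * sigma_ln**2

# Assumed exponential correlation between components (stand-in for Fig. 4.28)
corr_length = 4.0               # in units of field components; illustrative
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
R = np.exp(-lags / corr_length)

# One realization of the field and of the model parameters theta_j = E_j / mu_E
rng = np.random.default_rng(1)
g = rng.multivariate_normal(np.full(n, mu_ln), sigma_ln**2 * R)
E = np.exp(g)                   # E_i, i = 1, ..., 27
theta = E / mu_E                # parameters of substructures S_1, ..., S_27
```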

Fig. 4.29
figure 29

Probability of failure in terms of the threshold level. 1: reduced-order model based on dominant fixed-interface normal modes and exact interface modes. 2: reduced-order model based on dominant fixed-interface normal modes and approximate interface modes (linear interpolation scheme). 3: reduced-order model based on dominant fixed-interface normal modes and approximate interface modes (quadratic interpolation scheme)

7.4 Results

The probability of failure in terms of the threshold obtained with two types of reduced-order models is shown in Fig. 4.29: models based on dominant fixed-interface normal modes and exact interface modes, and models based on dominant fixed-interface normal modes and approximate interface modes. When approximate interface modes are considered, two approaches are used: linear and quadratic interpolation schemes (see Sect. 3.1). In the case of linear interpolation, 81 support points are used in the adaptive scheme proposed in Sect. 4.6.7, while 162 are employed in the quadratic case. An average of five independent runs is considered in the figure. First, it is observed that the results of the models based on exact and approximate interface modes are practically coincident. Thus, the schemes used to approximate the interface modes are adequate. Based on the results of the previous section regarding the accuracy of the reduced-order models, it is expected that the reduced-order model based on dominant fixed-interface normal modes and exact interface modes produces reliability estimates with sufficient accuracy. Therefore, this case can be considered as the reference (exact) one for comparison purposes. From Fig. 4.29, it is also seen that the reliability estimates of both models that use approximate interface modes agree very well. Thus, the use of a linear interpolation scheme is sufficient in the context of this application.
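The idea behind the approximate interface modes can be illustrated with a simplified one-parameter sketch: interface-mode matrices precomputed at a few support points are fitted entry by entry with a linear or quadratic polynomial and then evaluated at the current parameter value. The actual scheme of Sect. 3.1 operates over the full parameter vector with the adaptive support points mentioned above; the data used here are placeholders.

```python
import numpy as np

def approx_interface_modes(theta, theta_support, Psi_support, order=1):
    """Fit each entry of the interface-mode matrices with a polynomial in theta.

    theta_support : (n_s,) support values of a scalar model parameter
    Psi_support   : (n_s, n_dof, n_modes) interface modes computed at the support points
    order         : 1 (linear) or 2 (quadratic)
    """
    V = np.vander(theta_support, order + 1)                       # (n_s, order+1)
    coeffs, *_ = np.linalg.lstsq(V, Psi_support.reshape(len(theta_support), -1), rcond=None)
    Psi_flat = np.vander(np.atleast_1d(theta), order + 1) @ coeffs
    return Psi_flat.reshape(Psi_support.shape[1:])

# Placeholder data: three support points and a linear fit
rng = np.random.default_rng(2)
theta_s = np.array([0.8, 1.0, 1.2])
Psi_s = rng.standard_normal((3, 200, 5))
Psi_approx = approx_interface_modes(1.05, theta_s, Psi_s, order=1)
```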

The effect of considering the contribution of the residual normal modes on the reliability estimates is shown in Fig. 4.30. This figure presents the probability of failure in terms of the threshold for the following reduced-order models: a model based on dominant and residual fixed-interface normal modes and exact interface modes; a model based on dominant and residual fixed-interface normal modes and approximate interface modes with a linear interpolation scheme; and a model based on dominant and residual fixed-interface normal modes and approximate interface modes with a quadratic interpolation scheme. Conclusions similar to those of the previous case are obtained regarding the effectiveness of the reduced-order models in estimating the probability of failure. In the previous analyses, global invariant conditions were assumed for the transformation matrix that accounts for the contribution of the residual fixed-interface normal modes (see Sect. 3.3.5). By comparing Figs. 4.29 and 4.30, it is noticed that all reduced-order models give similar reliability estimates for the thresholds considered in the analysis. Validation calculations indicate that the effect of the residual normal modes is to further enhance the accuracy of the reliability estimates obtained with the reduced-order model based on dominant normal modes only; however, the difference in this case is almost negligible.

Fig. 4.30
figure 30

Probability of failure in terms of the threshold level. 1: reduced-order model based on dominant and residual fixed-interface normal modes and exact interface modes. 2: reduced-order model based on dominant and residual fixed-interface normal modes and approximate interface modes (linear interpolation scheme). 3: reduced-order model based on dominant and residual fixed-interface normal modes and approximate interface modes (quadratic interpolation scheme)

7.5 Computational Effort

Table 4.8 shows the speedup (rounded to the nearest integer) achieved by the different implementations considered in the previous figures. The corresponding characterization of the reduced-order models is indicated in Table 4.9. Recall that the speedup is the ratio of the execution time using the full finite element model to the execution time using a reduced-order model. In this regard, the execution time of the reliability analysis with the full finite element model is approximated as follows. The total number of dynamic analyses involved in the results shown in Figs. 4.29 and 4.30 is approximately 3,700 (four stages of subset simulation). The time for performing one dynamic analysis of the full model is about 4.3 min. Multiplying this time by the total number of dynamic analyses required by the simulation process, the computational effort is expected to be of the order of 265 h (more than 11 days). As in the previous example, the procedure is carried out with an in-house code implemented on a combined Matlab–C++ platform.
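The approximation of the full-model effort and the resulting speedup can be reproduced with the figures quoted above; the reduced-model time used in the example is the one implied by the reported speedup of six, not a measured value.

```python
n_analyses = 3_700          # dynamic analyses over four subset simulation stages
t_full_one_min = 4.3        # minutes per dynamic analysis of the full model

t_full_h = n_analyses * t_full_one_min / 60.0   # about 265 h (more than 11 days)

def speedup(t_full_hours, t_reduced_hours):
    """Ratio of full-model to reduced-model execution time."""
    return t_full_hours / t_reduced_hours

print(round(t_full_h), round(speedup(t_full_h, 44.0)))   # 265 h and a speedup of about 6
```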

Table 4.8 Speedup attained for different models. Second application problem

It is seen that a speedup of six is obtained by the reduced-order model based on dominant fixed-interface normal modes and exact interface modes. This value increases to more than 20 when approximate interface modes are considered. Thus, the effect of using approximate interface modes is significant in terms of the computational effort, and this reduction in computational time does not compromise the accuracy of the reliability estimates. Furthermore, a speedup of the order of 10 is achieved when the residual normal modes are explicitly considered in the analysis. For the same number of fixed-interface normal modes per substructure, the computational burden of using residual normal modes increases by a factor of two in this example. This increase is compensated by the significantly higher accuracy provided by the reduced-order model with residual normal modes (see Figs. 4.24 and 4.26). Based on the previous results, it is noted that, for practical purposes, the results obtained from the reduced-order model based on dominant fixed-interface normal modes and approximate interface modes can be used to compute the reliability estimates. Thus, an important reduction in computational effort is obtained by using the reduced-order model instead of the full finite element model. The gain in computational savings for this structural model is significant considering the complexity associated with the distributed nonlinearities along the height of the building arising from the installation of the vibration control devices.

Finally, it is noted that once a reduced-order model has been defined, several scenarios in terms of different failure events and system responses can be explored and considered for reliability purposes in an efficient manner. Therefore, even higher speedup values can be obtained for the reliability analysis process as a whole.

Table 4.9 Description of reduced-order models. Second application problem