Abstract
The true behaviour of an arbitrary structural system is dynamic and nonlinear. To analyze this behaviour in many real cases, e.g. structures in regions under high seismic risk, a versatile approach is to discretize the mathematical model in space, and use direct time integration to solve the resulting initial value problem. Besides versatility in application, simplicity of implementation is an advantage of direct time integration, while inexactness of the response and high computational cost are its weak points. Considering the sizes of the integration steps as the main parameters of time integration, and concentrating on transient analysis against ground acceleration, this chapter presents discussions on:
(1) the role of the integration step size in time integration analysis, specifically from the points of view of accuracy and computational cost,
(2) conventionally accepted comments, codes/standards’ regulations, and some modern methods for assigning adequate values to the integration step sizes in constant or adaptive time integration,
and concludes with some challenges in time integration analysis and integration step size selection in structural dynamics and earthquake engineering.
Keywords
- Transient analysis
- Time integration
- Integration step size
- Accuracy
- Computational cost
- Artificial damping
- Numerical stability
- Algorithmic parameter
- Digitized excitation
- Nonlinearity
- Seismic codes/standards
- National code of New Zealand
- Adaptive time stepping
- Convergence
1 From Structural Analysis to Time Integration
Our lives and civilization rely on the many constructions we pass by daily. To have sustainable buildings, bridges, tunnels, railways, infrastructure, etc., the structures should be designed considering their lifetime behaviour. Even setting aside the randomness and stochastic nature of severe conditions, and the complexity of predicting these conditions, the study of structural behaviour under severe conditions is not simple. Theoretical and experimental approaches can be addressed as means to study structural behaviour. Nevertheless, mainly in view of their simplicity, versatility, and inexpensiveness, numerical computations are the widely accepted and generally superior tool.
With an initiation point in the seventeenth century (i.e. Hooke’s law), the analysis of structural systems, as implemented in software packages, started less than a century ago, by relating behaviour and excitation via the structural properties, i.e.
In Eq. (1), X stands for the vector of unknowns, F represents the known external information (in general, the excitation), and the structural geometry, topology, and material are reflected in K. (In time-dependent problems, the initial status affects both K and F, and the solution implies the behaviour at a specific time instant.) In computerized structural analysis, the behaviour is represented by the unknown general displacement, i.e. the vector X in Eq. (1). For structural behaviour not representable by a finite number of unknowns, methods (mainly discretization methods) were developed in the middle of the past century to systematically replace continuous systems with systems with a finite number of unknowns or DOFs (degrees of freedom). The finite difference, finite volume, finite element, and boundary element methods are some of the major discretization methods. The development of these methods, specifically finite elements, and the invention of computers in the 1950s, led to a significant rise in the size and complexity of analyzable structural systems. For instance, the dependence of K and F in Eq. (1) on X, causing nonlinearity, could be simply tackled by incremental consideration of F and, when needed, implementation of different definitions for strain and stress. Not much later, attention to time-dependent phenomena, including structural behaviour against earthquakes, aerodynamics, etc., increased significantly. In the study and analysis of arbitrary structural dynamic behaviour, the inertial forces, and in many cases the damping effects, are of high importance. The linear dynamic behaviour of semi-discretized structural systems can be expressed as [1–3]:
where M, C, and K respectively denote the mass, damping, and stiffness matrices, f stands for the dynamic excitation, each over-dot implies one differentiation with respect to time, u is the unknown general displacement, t represents the time, \({\mathbf{u}}_{0}\) and \({\dot{\mathbf{u}}}_{0}\) introduce the initial conditions, and \(t_{end}\) is the length of the time interval under consideration. In 1950, the first broadly accepted method to solve Eq. (2) in a step-by-step manner, also addressed as a time integration method, was proposed by J. C. Houbolt [4–6] (see Fig. 1, where \({\mathbf{f}}_{\text{int}}\) implies the internal force, essential in the presence of material nonlinearity and discussed later in this chapter). The history of step-by-step integration of initial value problems goes back to the eighteenth century and the Euler method [7, 8] (see also [9], for the implementation of different Runge–Kutta methods in structural dynamics); Houbolt suggested the first method specifically dedicated to the solution of the equation of motion in structural dynamics [4]. For the Houbolt method, as well as many other time integration methods, the basic idea is to approximate the solution of Eq. (2), with sufficient accuracy, while avoiding complex mathematical functions, e.g. exp, sinh, cosh. The general approach is to carry out the analysis in a step-by-step manner, using simple relations instead of the exact, computationally complex relations [10, 11]. The new relations are generally linear algebraic equations, set so as to maintain the adequacy of some mainly accuracy-related features. On this basis, many time integration methods have been proposed in the past decades [4–6, 12–17]; still, investigations for better approximate methods are in progress in different disciplines and branches of engineering and science; see [18–26].
The most practically important and broadly accepted time integration methods are the Newmark family [1, 9–11, 27–32], central difference [1, 9–11, 14, 28–32], Wilson-Theta [9, 10, 15–17, 32], Houbolt [4–6], HHT [12, 30–32] and C-H [13] methods. All these methods convert Eq. (2) to (see also Eq. (1)):
or equivalently to the equation below:
(considering the dynamic effect in \({\mathbf{K}}_{eff}\), \({\mathbf{f}}_{eff}\), \({\bar{\mathbf{K}}}_{eff}\), and \(\Delta {\mathbf{f}}_{eff}\)), to be solved for the status \(\left( {{\mathbf{u}},{\dot{\mathbf{u}}},{\ddot{\mathbf{u}}}, \ldots\,{\text{and}}\,{\mathbf{f}}_{\text{int}} } \right)\) at sequential integration stations, starting from the initial conditions; see Fig. 1. (\(\Delta {\mathbf{u}}\) and \(\Delta {\mathbf{f}}_{eff}\) respectively represent the increments, or better, the increases of u and \({\mathbf{f}}_{eff}\) from the previous integration station to the current one.) Equation (4) can be treated in a way conceptually identical to Eq. (1). However, because of the approximations in the time integration formulation, the results are inexact, with the inexactness depending on the formulation. This, plus the versatility of time integration analysis, has motivated broad and continuous studies on time integration, e.g. see [19–26, 33–38]. In the next section, after a brief review of time integration analysis, the parameters, and specifically the most important parameter, i.e. the integration step size, are discussed. In Sect. 3, the influence of the integration step size on different features of time integration is studied. In Sects. 4 and 5, conventional and modern comments and approaches to assign adequate values to the integration step size are addressed. Later, in Sects. 6 and 7, techniques for more efficient analysis against digitized excitations are reviewed, and the seemingly most successful technique is introduced in detail. Finally, the chapter is concluded in Sect. 8 with a brief look at the key points and some of the challenges existing in time integration and step size selection.
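The reduction of Eq. (2) to an effective static system, as in Eqs. (3) and (4), can be sketched for a linear SDOF counterpart of Eq. (2). The following minimal Python sketch uses the average-acceleration member of the Newmark family (per standard texts); the function name and interface are illustrative assumptions, not this chapter's notation:

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, u0=0.0, v0=0.0, beta=0.25, gamma=0.5):
    """Linear SDOF time integration with the Newmark method.

    f is the sampled excitation (one value per integration station);
    returns the displacement history. Illustrative sketch only.
    """
    n = len(f)
    u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    u[0], v[0] = u0, v0
    a[0] = (f[0] - c * v0 - k * u0) / m            # initial acceleration from Eq. (2)
    # Effective stiffness: constant for a linear system with constant dt
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    for i in range(n - 1):
        # Effective load collects inertia/damping contributions of the known status
        f_eff = (f[i + 1]
                 + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                        + (0.5 / beta - 1.0) * a[i])
                 + c * (gamma / (beta * dt) * u[i] + (gamma / beta - 1.0) * v[i]
                        + dt * (0.5 * gamma / beta - 1.0) * a[i]))
        u[i + 1] = f_eff / k_eff                   # counterpart of solving Eq. (3)
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
        v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
    return u

# Free vibration of an undamped SDOF with natural period T = 1 s
u = newmark_sdof(m=1.0, c=0.0, k=(2.0 * np.pi) ** 2,
                 f=np.zeros(101), dt=0.01, u0=1.0)
```

At each station a constant effective stiffness multiplies the new displacement and an effective load collects the known status, which is the essence of Eqs. (3) and (4) for linear problems with constant \(\Delta t\).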
2 Time Integration and Its Parameters
As implied in Sect. 1, time integration is the most versatile tool to analyze Eq. (2) and its nonlinear counterpart, stated below [31]:
In Eqs. (5), the new parameter, Q, implies additional restrictions representing the nonlinear behaviour (see [39, 40]), and the essentiality of \({\mathbf{f}}_{\text{int}}\), as an initial condition, is explained in [41, 42]. In the analysis process, the status, or indeed the responses, i.e. \({\mathbf{u}},{\dot{\mathbf{u}}},{\ddot{\mathbf{u}}}, \ldots \,{\text{and}}\,{\mathbf{f}}_{\text{int}}\), are computed for distinct sequential time instants, starting from the station just after the initial conditions, using simple algebraic formulations (see Fig. 1). No eigen-solution is essential in ordinary direct time integration analysis. The determination of the status proceeds in a step-by-step manner, and at each time instant, the status of the structural system is determined approximately [10, 28, 31, 32]. In nonlinear analyses, after the computation associated with each step, the occurrence of nonlinearity is checked. When a nonlinearity is detected, some iterations are generally implemented (see Fig. 1), by methods such as Newton–Raphson, to decrease the error in modelling the nonlinearity (see [1, 2, 43–46]). The step-by-step procedure continues until the end of the integration interval, \(t_{end}\). Considering these points, time integration is a simple step-by-step computational tool. The simplicity of the step-by-step computation and the algebraic formulation bring about positive and negative consequences; some are listed below [31, 32]:
1. Approximation in the obtained responses.
2. Versatility of application to different equations of motion, and even to initial value problems originating in other branches of engineering and science, e.g. see [13, 19–23, 26, 34–37].
3. Existence of many different time integration methods, e.g. see [1, 11, 12, 14–17, 24–34, 47, 48].
4. High computational cost.
The negative features are the approximation and the high computational cost (respectively, the first and fourth points above). The first is especially important; without sufficient accuracy, the implementation of time integration analysis loses its justification. Provided acceptable accuracy, we are interested in: (1) low computational cost, where the computational cost implies the runtime and the in-core storage of the hardware involved in the computation [50, 51], and (2) the capability to increase the accuracy and/or decrease the computational cost. Other features of time integration analysis are also directly or indirectly associated with the inexactness of the response; numerical stability, order of accuracy, numerical damping, and overshoot are the most important [31–33, 52]. These features are under the control of some parameters, to which the remainder of this section is dedicated.
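The nonlinearity iterations mentioned in connection with Fig. 1 can be illustrated, for a single DOF, with a minimal Newton–Raphson sketch; the function, the hardening spring law, and the numbers below are illustrative assumptions, not taken from this chapter:

```python
def newton_raphson(f_ext, f_int, k_tan, u0=0.0, tol=1e-10, max_iter=20):
    """Solve f_int(u) = f_ext for one DOF by Newton-Raphson iterations."""
    u = u0
    for _ in range(max_iter):                 # cap on the number of iterations
        r = f_ext - f_int(u)                  # residual (unbalanced force)
        if abs(r) <= tol:                     # tolerance on the residual
            return u
        u += r / k_tan(u)                     # correction with the tangent stiffness
    raise RuntimeError("no convergence within max_iter")

# Hypothetical hardening spring: f_int(u) = k*u + k3*u**3
k, k3 = 100.0, 40.0
u = newton_raphson(50.0,
                   lambda u: k * u + k3 * u ** 3,
                   lambda u: k + 3.0 * k3 * u ** 2)
```

The tolerance and the iteration cap in this sketch correspond to the nonlinearity analysis parameters discussed in the next section.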
In exact computations, the parameters affecting the results correspond to real notions. For instance, in the computation of the integral below:
a and b are parameters with real meaning (see Fig. 2), appearing in the definition of the problem, as well as, in its exact solution; see the ending part of Eq. (6). Similarly, when using the analytical relations below [53, 54]:
for determining the end moments and end shears of the beam in Fig. 3, l, a, and W, are real parameters, defining the problem and its exact solution. Returning to time integration and Eqs. (2) and (5), the initial conditions, the mass, the damping, the stiffness, the excitations, the parameters defining the nonlinear behaviour (implied specifically in Q in Eqs. (5)), and finally the \(t_{end}\), are the real parameters.
There are many problems for which analytical solutions are either not derived yet (unavailable) or are mathematically/numerically complicated, e.g. problems with analytical solutions in terms of special infinite series. To solve these problems, numerical approximate computation is the broadly accepted tool [7, 8, 55]. In approximate computations, besides the real parameters, there exist parameters, essential (and even crucial) for the computation, that have no real meaning and no role in the definition of the problem and its exact solution; see [55, 56]. For instance, when using Simpson or trapezoidal integration for determining the value of I in Eq. (6), as apparent in the relations below [57, 58]:
the total number of integration steps, N, or equivalently the integration step size, h, is an additional parameter. Although the division of the integration interval into integration steps can be displayed in Fig. 2, the number of divisions, N, and the integration step size, h, are not real parameters of the problem, and do not affect the exact solution. These parameters, playing a main role in the computation but no role in the definition of the problems and their exact solutions, are generally addressed as analysis or algorithmic parameters [55, 56]. It is meanwhile worth noting that it is conventional to define/redefine the algorithmic parameters such that the convergence [59, 60] of the approximate solutions to the exact solutions can be studied in the neighbourhood of zero values of the algorithmic parameter [49, 55, 56, 61]. As implied above, in the numerical integrations addressed in Eq. (8), h (or N) is the algorithmic parameter. In static analysis of the beam in Fig. 3 by specific finite elements, the size or number of the elements is a parameter defining the computation and its accuracy [1, 30]. Accordingly, considering the basics of finite elements [1, 2], we can address the element size as the algorithmic parameter.
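The role of h (or N) as a purely algorithmic parameter can be sketched with the composite trapezoidal rule of Eq. (8); in the minimal example below (the integrand, \(\int_0^1 e^x \, dx\), is an illustrative assumption), refining h changes only the approximation, never the exact value:

```python
import math

def trapezoid(g, a, b, n):
    """Composite trapezoidal rule with n steps of size h = (b - a)/n."""
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n))
    return h * s

# a and b are real parameters of the problem; n (or h) is purely algorithmic:
exact = math.e - 1.0                       # exact value of the sample integral
errors = [abs(trapezoid(math.exp, 0.0, 1.0, n) - exact) for n in (10, 20, 40)]
# halving h divides the error by about four (second-order accuracy)
```

Refining n from 10 to 40 leaves the exact integral untouched and only shrinks the error, which is the defining property of an algorithmic parameter.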
In time integration analysis, the main analysis (algorithmic) parameters are the sizes of the integration steps throughout the integration interval, considered either constant (as one parameter) or adaptive (according to a specific criterion). It is a broad convention to consider the integration time step \(\Delta t\) as a parameter linearly controlling the sizes of the integration steps throughout the integration interval [49]. In view of Eqs. (2) or (5), and Fig. 1, \(\Delta t\) is a parameter with no effect on the exact solution, but a main role in the time integration analysis. For nonlinear problems, there exist algorithmic parameters, in addition to \(\Delta t\), affecting the features of the analysis, again independent of the problem and its exact response. These parameters are:
1. The nonlinearity continuation method and the specific parameters to be set for implementation of the nonlinearity iterations [1, 2, 44–46, 62–64].
2. The nonlinearity tolerance [1, 44–46, 63, 64], \(\bar{\delta }\), as an indicator of the accuracy of the nonlinearity iterations.
3. The maximum number of iterations (e.g. see [63–65]), as a representative of the available computational facilities, essential in view of the ever-present round-off.
We would rather address the above parameters as nonlinearity analysis or nonlinearity algorithmic parameters. (Practically, the second parameter, i.e. \(\bar{\delta }\), is the most important.) Besides \(\Delta t\) and the nonlinearity analysis parameters, the parameters precisely defining the integration method are among the algorithmic parameters of time integration, and can be addressed as method algorithmic parameters. These parameters do not exist for all time integration methods. For instance, the equations governing the one-step Houbolt method [4–6], stated below:
are not parametric (for \(\dddot {\mathbf{u}}_{0}\), see [6]), while, the relations defining the HHT method [12, 30–32], stated below:
depend on \(\alpha\), \(\beta\), and \(\gamma\); hence, \(\alpha\), \(\beta\), and \(\gamma\) are the method parameters of the HHT time integration method, precisely defining the integration method and highly affecting the approximate response, with no effect on Eqs. (2) or (5) and the exact response. (Inequality restrictions on these parameters, for issues such as numerical stability (e.g. see [30, 31]), do not reduce the number of these parameters.) Consequently, a time integration analysis, according to a specific time integration method, while implementing a specific nonlinearity continuation method [44], potentially depends on three groups of analysis parameters:
1. The main analysis parameter: \(\Delta t\) (or the parameters defining the sizes of the integration steps in adaptive time integration; see [66]).
2. Nonlinearity analysis parameters: \(\bar{\delta }\) and some less important parameters, e.g. the maximum number of nonlinearity iterations.
3. Method analysis parameters: parameters completing the definition of the integration method.
Of the above parameters, the main ones, controlling different features of the analyses (specifically the computational cost and the accuracy-related features, e.g. numerical stability), are \(\Delta t\) and \(\bar{\delta }\) (momentarily disregarding the method parameters).
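As an illustration of method algorithmic parameters, the HHT parameters mentioned above are, per standard references on the HHT-α method, often reduced to a one-parameter family. The following sketch (a common convention, not necessarily this chapter's notation) derives \(\beta\) and \(\gamma\) from \(\alpha\) so that second-order accuracy and unconditional linear stability are retained:

```python
def hht_parameters(alpha):
    """Common one-parameter HHT-alpha family: choose alpha in [-1/3, 0],
    then derive beta and gamma so that the method remains second-order
    accurate and unconditionally stable for linear analyses."""
    if not -1.0 / 3.0 <= alpha <= 0.0:
        raise ValueError("alpha must lie in [-1/3, 0]")
    gamma = (1.0 - 2.0 * alpha) / 2.0
    beta = (1.0 - alpha) ** 2 / 4.0
    return beta, gamma

beta, gamma = hht_parameters(-0.1)   # mild artificial damping
# alpha = 0 recovers the trapezoidal (average-acceleration) Newmark method
```

Decreasing \(\alpha\) towards \(-1/3\) increases the artificial damping of the higher modes, which connects the method parameters to the artificial damping discussed in Sect. 3.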
With regard to accuracy, since the implementation of an approximate method implies that the exact solution is at least not simply available, the accuracy, instead of being studied in view of the definition of error, E [67], i.e.
(a as a right superscript implies that the argument is an approximation and \(\left\| { } \right\|\) denotes an arbitrary norm [68]) needs to be evaluated indirectly, in terms of convergence [1, 7, 30–32, 56, 59–61, 67], i.e.
For an arbitrary time integration analysis, Eq. (12) is theoretically equivalent to Fig. 4, as well as Fig. 5. \(E_{i}\) stands for the error of the response \({\mathbf{U}}_{i}^{a}\), obtained from time integration analysis with step sizes equal to \(\Delta t_{i}\); L and \(L^{{\prime }}\) denote the lengths of the regions in the two plots implying decrease of error with positive integer slopes (not precisely defined and determinable yet); and \(D_{i}\) is defined below:
and addressed as the pseudo-error [69, 70]. For many problems, the convergence of \(D_{i}\) to zero is equivalent to Eq. (12). Similarly, Figs. 4 and 5 are equivalent in the sense that either both \(E_{i}\) and \(D_{i}\) imply convergence to zero or neither does so. Furthermore, either the convergence trends in both the convergence and pseudo-convergence plots (displayed in Figs. 4 and 5, respectively) are straight lines (with positive integer slopes towards zero errors/pseudo-errors at zero \(\Delta t\)) or neither displays such a convergence trend. Moreover, when both trends are lines with positive integer slopes, the slopes in the two plots are identical and equal to the order of accuracy (in cases, less than the order of accuracy [32, 49, 71]); see [55, 69, 70]. In Eq. (13), q is a positive integer introducing the rate of convergence (see Figs. 4 and 5), and \({\mathbf{U}}_{i - 1}^{a}\) and \({\mathbf{U}}_{i}^{a}\) are named such that \(\Delta t_{i - 1} > \Delta t_{i} > 0\); see [70]. In nonlinear analyses, for maintaining Fig. 4 (with \(L > 0\)), and specifically to ensure the equivalence between Figs. 4 and 5 and Eq. (12), or even the validity of
we can assign very small values (depending on the problem), or values consistent with \(\Delta {\kern 1pt} t\), to \(\bar{\delta }\) [2, 42, 63, 72].
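The pseudo-errors of Eq. (13) and the slope q of Fig. 5 can be estimated numerically. The sketch below (an illustrative SDOF free-vibration setting, analyzed with the central difference method) halves \(\Delta t\) repeatedly and recovers the expected slope of about two:

```python
import math

def central_difference(omega, dt, t_end, u0=1.0, v0=0.0):
    """Undamped SDOF free vibration by the central difference method."""
    n = round(t_end / dt)
    # Taylor start for the first station: u_1 = u_0 + dt*v_0 + dt^2*a_0/2
    u_prev, u = u0, u0 + dt * v0 - 0.5 * (omega * dt) ** 2 * u0
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * u - u_prev - (omega * dt) ** 2 * u
    return u                                    # displacement at t_end

# Pseudo-errors D_i = |U_i^a - U_{i-1}^a| for successively halved step sizes
omega, t_end = 2.0 * math.pi, 0.75              # T = 1 s, evaluated off-peak
dts = [0.01, 0.005, 0.0025]
us = [central_difference(omega, dt, t_end) for dt in dts]
D = [abs(us[i] - us[i - 1]) for i in (1, 2)]
q = math.log(D[0] / D[1]) / math.log(dts[0] / dts[1])  # observed slope
```

With each halving of \(\Delta t\) the pseudo-error drops by about a factor of four, so the estimated q is close to two, the order of accuracy of the central difference method.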
Regarding computational cost, in the analysis of a specific linear problem, by a specific time integration method, and on a specific computer, the computational cost depends on the total number of integration steps; and since \(t_{end}\) is a constant value known in advance, the computational cost increases when \(\Delta t\) decreases. This conflicts with accuracy, for which it is beneficial to assign smaller values to \(\Delta t\) (while larger values of \(\Delta t\) are beneficial for computational cost). Considering this, and the fact that, different from \(\bar{\delta }\), \(\Delta t\) affects both linear and nonlinear analyses, the remainder of this chapter is dedicated to \(\Delta t\) and the approaches to select \(\Delta t\) (see also [72, 73]).
3 Integration Step Size and Its Influence on Analysis Quality
As the major algorithmic parameter of time integration, the integration step size or \(\Delta \,t\) affects almost all features of arbitrary linear or nonlinear time integration analysis. The features under study in this section are accuracy, convergence, order of accuracy, stability, artificial damping, overshoot, and computational cost.
In general, the accuracy can be increased by reducing the integration step size. “Smaller integration steps lead to more accuracy” is a practically general rule (see Fig. 6 and Tables 1 and 2), though it may fail in the analysis of nonlinear or complex behaviours (see Fig. 7 and Table 3; all units are SI, and each dash-dot-dash centreline, marked with CL, is associated with one mass). The nonlinear structural model is set such that the exact solution can be simply derived; see Fig. 8 [63]. Disagreement with the above-mentioned general rule corresponds to \(L = 0\) in Figs. 4 and 5. Meanwhile, Tables 2 and 3 show that the amount of error also depends on the response under consideration. In the analysis of nonlinear structural dynamic systems, the details of the iterative nonlinearity analysis can significantly affect the accuracy, as well as the changes of accuracy with respect to the integration step size; see [63, 72–78]. No general rule seems to exist for changes of \(\Delta t\) and \(\bar{\delta }\), or other analysis parameters (addressed in Sect. 2), guaranteeing more accuracy for arbitrary nonlinear or complex analyses; some comments exist for piecewise-linear systems, e.g. see [42, 63]. (Accordingly, the responses computed for nonlinear problems or problems with complex behaviour are in general unreliable [42, 63, 72–80].)
Regarding convergence and order of accuracy, in view of the Lax–Richtmyer equivalence theorem [59, 60, 81], for well-posed problems [31, 32, 81] (including almost all real engineering problems), convergence is equivalent to consistency plus numerical stability. Consistency implies that the order of accuracy is not less than one [31, 32], and the order of accuracy of a time integration method is the highest rate by which the responses computed by the integration method converge to the exact response (see also [7, 31, 32, 49, 56]), equivalently definable in terms of local truncation errors [31, 32]. Numerical stability can be defined as the capability of time integration methods to lead to responses (for physically stable problems [82]) that do not diverge, even after an arbitrarily large number of integration steps [1, 30–33, 52, 59, 60, 81, 83]. Therefore, it is reasonable to expect convergence to be influenced by \(\Delta t\) via numerical stability and order of accuracy.
Taking into account terms like “conditionally stable”, the effect of \(\Delta t\) on numerical stability might be crucial [1, 10, 11, 28–33, 52]. Concentrating on one-step time integration methods (recommended in the literature [30–32]), for SDOF (Single-Degree-Of-Freedom) systems, the free vibration time integration computation can be expressed as stated below (multi-step methods can in many cases be rewritten as one-step methods, e.g. the central difference and Houbolt methods [4–6, 32]):
where \(\alpha\) implies the highest order of time differentiation in the one-step integration, e.g. \(\alpha = 3\) for the one-step Houbolt method [4–6], and \(\alpha = 2\) for the HHT method [12, 30–32, 84]; the right subscripts represent the same subscript for u, the temporal derivatives of u, and \(\Delta t\), inside the brackets; i denotes the number of the integration step under study; and A is the amplification matrix, with members depending on the natural angular frequency, \(\omega\), the integration step size, \(\Delta t_{i}\), and the viscous damping ratio, \(\xi\) [31, 32], i.e.
Numerical stability is provided, when the spectral radius [68] of A, is not more than one throughout the analysis, i.e.
In Eq. (17), \(\rho\) stands for the spectral radius, and \(\lambda_{i = 1,2, \ldots \alpha }\) implies the i-th eigen-value of A (a real or complex number [57]). In more detail, for numerical stability, the absolute values of the eigen-values of A are to be less than one when of multiplicity more than one, and less than or equal to one when not repeated [29–33, 52]. The discussion is valid for forced vibrations and MDOF (Multi-Degree-Of-Freedom) systems, when considering all natural modes separately [1, 29–32]. In practice, it is conventional to study numerical stability based on the changes of the spectral radius with respect to \(\omega \Delta t\) (where \(\omega\) and \(\Delta t\) respectively imply an arbitrary natural frequency of the structural system and the constant step size in the analysis), preferably for different values of \(\xi\); see [29–33, 52, 57] and Fig. 9. In Fig. 9, T stands for an arbitrary natural period of the MDOF system, \(T = 2\pi \omega^{ - 1}\) [31–33, 52], and the step sizes are considered constant throughout the integration interval. (For non-proportionally damped MDOF systems, Eq. (15) can be considered directly for the whole MDOF system, based on which the remainder of the discussion remains unchanged; see [84].) Consequently, \(\rho < 1\) or \(\rho \le 1\) is necessary and sufficient for the stability of linear analyses, and is necessary for the stability of nonlinear time integration analyses; see [33, 63, 72]. The outcome restricts \(\omega \Delta t\), generally leading to:
where \(\Delta t_{cr}\) stands for the integration step size corresponding to \(\rho = 1\); see Fig. 9b, d, h. In view of the Lax–Richtmyer equivalence theorem [32, 59, 60, 81, 86, 87], Eq. (18) needs to be satisfied in order to maintain response stability and convergence. Accordingly, unconditional stability (for linear analyses) is at times recommended [30–33, 52], and many conventional time integration methods are unconditionally stable, i.e.
Consequently, and as displayed in Eq. (18) and Fig. 9, the effect of the integration step size on numerical stability can be described as follows: the smaller the integration step size, the better the chance that the analysis is numerically stable. In view of the Lax–Richtmyer equivalence theorem [32, 59, 60, 81, 86, 87], and since the order of accuracy is a constant value, computable theoretically and independent of the integration step size [32, 60], a similar claim sounds reasonable for the influence of the integration step size on convergence. Nevertheless, with attention to the inequality sign in Eq. (18) and the definition of convergence in Eqs. (12) and (14), convergence is independent of \(\Delta t\), unless \(\Delta t_{cr} = 0\), i.e. the unconditionally unstable case. The trend of convergence, however, depends on \(\Delta t\); see Figs. 4 and 5 and the Taylor series [57] correspondence between convergence and these figures [30–32, 55]. From the standpoint of the Lax–Richtmyer equivalence theorem, the above-mentioned different effects of \(\Delta t\) on convergence and numerical stability, while the order of accuracy is not under the effect of \(\Delta t\), may raise questions. To avoid ambiguities, it is worth noting that in the numerical determination of the order of accuracy [84], we need to carry out the time integration analyses with steps smaller than \(\Delta t_{cr}\). (Sufficient smallness of \(\Delta t\) is also implied in the theoretical computation of the order of accuracy [32, 60].) Meanwhile, though the order of accuracy is conceptually independent of numerical stability, restrictions exist that relate order of accuracy and numerical stability, depending on the number of steps involved in the computation for each new station, e.g. the Dahlquist barriers [32].
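Equations (15)–(18) can be checked numerically. The sketch below (an illustrative assumption: the undamped central difference method written in one-step form on the state \((u_{n}, u_{n-1})\)) computes the spectral radius of A and displays \(\rho \le 1\) up to \(\omega \Delta t = 2\), i.e. \(\Delta t_{cr} = T/\pi\):

```python
import numpy as np

def spectral_radius_cd(omega_dt):
    """Spectral radius of the (undamped) central-difference amplification
    matrix, written in one-step form on the state (u_n, u_{n-1})."""
    A = np.array([[2.0 - omega_dt ** 2, -1.0],
                  [1.0,                  0.0]])
    return max(abs(np.linalg.eigvals(A)))

# rho <= 1 up to omega*dt = 2; beyond that, the analysis diverges.
# (At omega*dt = 2 exactly, the eigenvalue -1 is repeated, so per the
#  multiplicity rule above the analysis is not stable despite rho = 1.)
for s in (0.5, 1.0, 1.9, 2.1, 3.0):
    print(f"omega*dt = {s:3.1f}  ->  rho = {spectral_radius_cd(s):.4f}")
```

The jump of \(\rho\) above one just past \(\omega \Delta t = 2\) is the numerical counterpart of Eq. (18) for this conditionally stable method.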
Models resulting from discretization in space, by methods such as finite elements, are different from the original continuous models, specifically in the higher modes of oscillation [30–32, 88]. More precisely, though piece-wise exact analyses [10, 11] can lead to exact responses for Eq. (2), these so-called exact responses differ from the exact responses of the PDE (Partial Differential Equation) models prior to the semi-discretization. The difference can be considerable in the higher modes of oscillation. A way to omit or reduce the errors is to somehow eliminate the higher modes with trivial contribution to the response, and time integrate the lower modes with sufficient accuracy. An assumption in this approach is the existence of higher modes with trivial contribution to the response; this is a valid assumption at least in many real structural systems [10, 31, 32, 89–91]. Numerical or artificial damping is the capability of some time integration methods to control the errors of semi-discretization [1, 11, 28–33, 52]. Artificial damping provides the capability to eliminate the higher modes and analyze the lower modes with sufficient accuracy (in direct time integration of the total structural system). The time step size affects artificial damping. If, in view of the modal description of linear proportionally damped MDOF systems, we concentrate on an arbitrary SDOF system, the amplitude of the plot of the spectral radius of A, i.e. \(\rho\), with respect to \(\omega \Delta t\) (or equivalently \(\frac{\Delta t}{T}\)), inversely represents the capability to damp out the higher modes (see Fig. 9) [30–32]. In more detail, when \(\rho < 1\), smaller values of \(\rho\) imply more elimination of the higher-frequency oscillations [29–31] (\(\rho = 0\) implies complete elimination). The general trend displayed in Fig. 9, and the presented explanation regarding the greater elimination at larger values of \(\omega \Delta t\), are valid for MDOF and SDOF linear systems, considering the modes under consideration (and the corresponding values of \(\omega\) or T) separately. However, the numerical details are different for different time integration methods and different values of viscous damping, and meanwhile, are differently desired for different behaviours with different contributions of the higher modes of oscillation [30–33, 52]. With these considerations, and specifically from Fig. 9, provided proper artificial damping, values of \(\Delta t\) larger than essential to damp out the higher modes may lead to the elimination of lower modes (see Fig. 9a, e, g, i; the last when slightly physically damped). (Proper artificial damping implies guaranteed more numerical damping for higher values of \(\frac{\Delta t}{T}\), addressed here as proper artificial damping for the first time.) In other words, by assigning larger/smaller values to the integration step size, more/fewer oscillatory modes (starting from the highest modes) will be affected and eliminated. This can entail undesired inaccuracy. The presented discussion is valid only for linear analysis of proportionally damped MDOF systems. Nevertheless, for many practical applications (e.g. seismic analysis), the nonlinear behaviour is of piecewise-linear type (e.g. linear-elastic/perfectly-plastic and pounding [10, 28, 78, 91–94]) and meanwhile proportional damping is a broadly accepted assumption [28, 52]. Accordingly, expressions such as \(\omega \Delta t\), the spectral radius \(\rho\), and artificial damping can be defined/considered in a piece-wise manner.
Therefore, in many practical cases, depending on the selection of the parameters of the nonlinearity analysis and the severity of the nonlinear behaviour, we can use the linear theory of artificial damping to form an idea about artificial damping in the analysis of nonlinear structural systems. Still, in nonlinear as well as linear analyses, special attention should be paid to the selection of the parameters of the integration method controlling artificial damping. Alternatively, and even preferably, the results are to be checked for accuracy [7, 28, 55, 95–97], also to prevent elimination of important lower modes from the final response.
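As a minimal numerical sketch of this linear theory (not taken from the chapter; the Newmark family and the parameter values below are my own illustrative choices), the spectral radius of the one-step amplification matrix of an SDOF mode can be computed by stepping the unit basis states:

```python
import numpy as np

def newmark_amplification(omega, dt, beta, gamma, xi=0.0):
    """One-step amplification matrix A for a linear SDOF mode
    (u'' + 2*xi*omega*u' + omega**2*u = 0) integrated by Newmark's method,
    built column by column by applying one step to the unit basis states."""
    A = np.zeros((3, 3))
    for j, state in enumerate(np.eye(3)):
        u, v, a = state
        u_star = u + dt * v + dt**2 * (0.5 - beta) * a         # predictor terms
        v_star = v + dt * (1.0 - gamma) * a
        a_new = -(2*xi*omega*v_star + omega**2*u_star) / (
            1.0 + 2*xi*omega*gamma*dt + omega**2*beta*dt**2)   # equilibrium at t+dt
        A[:, j] = [u_star + beta*dt**2*a_new, v_star + gamma*dt*a_new, a_new]
    return A

def spectral_radius(omega, dt, beta, gamma, xi=0.0):
    """rho = max |eigenvalue| of A; rho < 1 implies artificial damping."""
    return max(abs(np.linalg.eigvals(newmark_amplification(omega, dt, beta, gamma, xi))))
```

With the trapezoidal rule (\(\beta = 1/4\), \(\gamma = 1/2\)) the computed \(\rho\) equals 1 for any \(\omega \,\Delta {\kern 1pt} t\) (no artificial damping), while a dissipative choice such as \(\gamma = 0.6\), \(\beta = 0.3025\) yields \(\rho < 1\), the more so the larger \(\omega \,\Delta {\kern 1pt} t\), i.e. stronger elimination of the higher modes.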
Overshoot is the tendency of integration methods to cause significant errors in the first few steps after the start of the oscillations, or after abrupt changes of the status or the excitation. Accordingly, smaller integration step sizes would likely cause less overshoot-originated error; see [30–33, 52].
Regarding computational cost, assigning smaller values to the integration step size, without changing the computer (computational facility), increases the number of integration steps, while the computational cost per integration step remains unchanged. Accordingly, the runtime, the total usage of the in-core memory, and hence the computational cost, \(C_{C}\), will increase, i.e., considering \(\propto\) as a sign for “dependence”,

$$C_{C} \propto \frac{1}{{\Delta {\kern 1pt} t}}$$(20)
In more detail, in linear time integration analysis of an arbitrary system, with equally sized integration steps,

$$\frac{{C_{{C_{1} }} }}{{C_{{C_{2} }} }} = \frac{{\Delta {\kern 1pt} t_{2} }}{{\Delta {\kern 1pt} t_{1} }}$$(21)
where \(C_{{C_{1} }}\) and \(C_{{C_{2} }}\) denote the computational costs of two arbitrary analyses (on one computer, and disregarding the pre- and post-processing), with steps equal to \(\Delta {\kern 1pt} t_{1}\) and \(\Delta {\kern 1pt} t_{2}\) \(\left( {\Delta \,t_{1} \ne \Delta \,t_{2} } \right)\), respectively. To say better,

$$C_{{C_{i} }} = \frac{C}{{\Delta {\kern 1pt} t_{i} }},\quad i = 1,2$$(22)
where C is a positive-definite constant, representing a scale of the computational cost per integration step \(C_{C}^{*}\) (C can also be defined as the computational cost of an analysis carried out with integration steps equal to one), i.e.

$$C = C_{C}^{*} {\kern 1pt} t_{end} ,\quad C_{C} = C_{C}^{*} {\kern 1pt} N,\quad N = \frac{{t_{end} }}{{\Delta {\kern 1pt} t}}$$(23)
N stands for the total number of integration steps, \(C_{C}\) is the associated computational cost of the analysis, and \(t_{end}\) is defined in Eq. (2). The computational cost associated with an integration step, \(C_{C}^{*}\), depends on the semi-discretized model, the computational facility (i.e. how powerful the computer is), and the integration method. In contrast with accuracy (including stability) and overshoot, for whose sake we prefer to assign smaller values to the integration step size, reducing the computational cost calls for time integration with larger steps; the case of artificial damping lies in between.
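The linear cost relations above can be sketched as follows (a toy model; `cost_per_step` plays the role of the scale \(C_{C}^{*}\), in hypothetical cost units):

```python
def linear_cost(dt, t_end, cost_per_step):
    """Computational cost of a linear analysis with equally sized steps:
    C_C = C_C^* * N, with N = t_end / dt, so halving dt doubles the cost."""
    n_steps = t_end / dt
    return cost_per_step * n_steps
```

For two analyses of the same model on the same computer, the cost ratio equals the inverse ratio of the step sizes, as stated above.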
In an arbitrary nonlinear analysis, it is essential to check for the occurrence of nonlinearity after determination of the status at each integration station. When nonlinearity is detected, appropriate changes should be implemented in the characteristics of the system [see Q in Eq. (5)]. Furthermore, before these changes, it is conventional to localize the nonlinearity by implementing some nonlinearity iterations [2, 42–46]. Accordingly, Eqs. (21) and (22) are not valid in time integration analysis of nonlinear systems. Even without nonlinearity iterations, because of the essential status checks and characteristics changes, it is reasonable to consider

$$C_{{C_{i} }} \approx \tilde{C}_{{C_{i} }} + n_{{NL_{{{\kern 1pt} i}} }} {\kern 1pt} C_{Q} ,\quad i = 1,2$$(24)
In Eq. (24), \(\tilde{C}_{{C_{1} }}\) and \(\tilde{C}_{{C_{2} }}\) imply the computational costs not including nonlinearity iterations and status changes, i.e.

$$\tilde{C}_{{C_{i} }} = \frac{C}{{\Delta {\kern 1pt} t_{i} }},\quad i = 1,2$$(25)
\(n_{{NL_{{{\kern 1pt} 1}} }}\) and \(n_{{NL_{{{\kern 1pt} 2}} }}\) stand for the numbers of nonlinearity detections in analyses with steps sized \(\Delta {\kern 1pt} t_{1}\) and \(\Delta {\kern 1pt} t_{2}\), respectively, and \(C_{Q}\) is an indicator of the computational cost at a nonlinearity detection, averaged over all nonlinearities detected in the analysis, including the costs of nonlinearity iterations and change of status. It is worth noting that the independence of \(C_{Q}\) from \(\Delta {\kern 1pt} t\) is a reasonable assumption; it is implemented in Eq. (24) and leads to the approximation signs therein.
A special case happens when the nonlinear behaviour is piecewise linear (e.g. linear-elastic/perfect-plastic behaviour, impact, simple friction) [63], and no nonlinearity iteration is implemented. In this case,

$$n_{{NL_{{{\kern 1pt} 1}} }} \approx n_{{NL_{{{\kern 1pt} 2}} }} = n_{NL}$$(26)
and, in view of Eqs. (24) and (26), and provided analysis with equally sized steps,

$$\frac{{C_{{C_{1} }} }}{{C_{{C_{2} }} }} \approx \frac{{\tilde{C}_{{C_{1} }} + n_{NL} {\kern 1pt} C_{Q} }}{{\tilde{C}_{{C_{2} }} + n_{NL} {\kern 1pt} C_{Q} }}$$(27)
Taking into account that \(\tilde{C}_{{C_{i = 1,2} }} > 0\), and the fact that, in view of Eq. (25), similarly to Eq. (21),

$$\frac{{\tilde{C}_{{C_{1} }} }}{{\tilde{C}_{{C_{2} }} }} = \frac{{\Delta {\kern 1pt} t_{2} }}{{\Delta {\kern 1pt} t_{1} }}$$(28)
Eq. (27) implies that, in the analysis of piecewise linear systems on a specific computer, when we do not implement nonlinearity iterations and \(C_{Q}\) is sufficiently smaller than \(\tilde{C}_{{C_{i = 1,2} }}\), the computational cost resists changes caused by \(\Delta {\kern 1pt} t\). To say better, in the special case addressed above,
(the above-mentioned smallness of \(C_{Q}\) is generally valid for implicit analyses [1, 30, 31], recommended for many real nonlinear dynamic analyses; see [1, 43]). Another special case occurs when nonlinearities are detected at almost all integration steps, regardless of the integration step size, and no nonlinearity iteration is implemented. In this case, provided analysis with equally sized steps,

$$n_{{NL_{{{\kern 1pt} i}} }} \approx N_{i} = \frac{{t_{end} }}{{\Delta {\kern 1pt} t_{i} }},\quad i = 1,2$$(30)
and, in view of Eq. (28),

$$\frac{{C_{{C_{1} }} }}{{C_{{C_{2} }} }} \approx \frac{{\Delta {\kern 1pt} t_{2} }}{{\Delta {\kern 1pt} t_{1} }}$$(31)
comparable with Eq. (21). Equation (31) implies that, in a nonlinear analysis with equally sized steps and nonlinearities detected at all integration steps, if we do not implement nonlinearity iterations, the change of the computational cost with respect to the integration step size is very similar to that of linear analyses. The discussion above seems new, presented for the first time in this chapter; accordingly, further study is surely essential, not followed here for the sake of brevity. Extension of the discussion to general nonlinear behaviour/analysis is recommended for further research.
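The cost decomposition discussed above can be sketched as follows (hypothetical cost units; the two functions correspond to the two special cases of a fixed number of nonlinearity detections and of detections at every step):

```python
def cost_fixed_events(dt, t_end, cost_per_step, n_events, c_q):
    """Piecewise-linear case without nonlinearity iterations: the number of
    nonlinearity detections is set by the physical events, not by dt."""
    return cost_per_step * t_end / dt + n_events * c_q

def cost_events_every_step(dt, t_end, cost_per_step, c_q):
    """Nonlinearity detected at (almost) every step: n_NL ~ N = t_end/dt, so
    the total cost again scales like 1/dt, as in linear analysis."""
    n_steps = t_end / dt
    return cost_per_step * n_steps + n_steps * c_q
```

In the every-step case the cost ratio of two analyses reduces to the inverse step-size ratio, whereas in the fixed-event case the event-related term is insensitive to the step size.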
4 Practical Comments for Integration Step Size Selection
The most conventional and broadly accepted comment for the selection of integration step sizes, specifically when the steps are equally sized, is as stated in Eq. (32) [1, 10, 28, 49, 66, 97–99]:

$$\Delta {\kern 1pt} t \le { \hbox{min} }\left( {\Delta {\kern 1pt} t_{cr} ,{}_{f}\Delta {\kern 1pt} t,\Delta {\kern 1pt} t_{d} ,\frac{{T_{r} }}{\chi }} \right)$$(32)
The new parameters are defined below:
- \({}_{f}\Delta {\kern 1pt} t\): Step size by which the excitation is digitized (\({}_{f}\Delta {\kern 1pt} t = \infty\), when the excitation is continuous)
- \(\Delta {\kern 1pt} t_{d}\): Largest step size according to which we accept to obtain the history of the response (generally unimportant)
- \(T_{r}\): Smallest period of oscillations with considerable contribution in the response
- \(\chi\): A factor, changing from 10 (or even 5) in simple linear analyses to 1000 in analyses involving impact, severe nonlinearity, or complex or mathematically stiff behaviour, such that \(\frac{{T_{r} }}{\chi }\) turns out to be an integration step size sufficient for accuracy
The definitions of \(\Delta {\kern 1pt} t_{cr}\) [see Eq. (18)], \({}_{f}\Delta {\kern 1pt} t\), and \(\Delta {\kern 1pt} t_{d}\) are clear. However, the definitions of \(T_{r}\) and \(\chi\) are somewhat imprecise and vague, and, furthermore, serious arguments can be made about the typology of Eq. (32) and the computation of \(\Delta {\kern 1pt} t_{cr}\). This section is dedicated to these ambiguities.
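The conventional comment can be transcribed directly as a min-type rule (the parameter values used below are hypothetical):

```python
import math

def step_size_conventional(dt_cr, dt_f, dt_d, t_r, chi):
    """Conventional step size comment: take the integration step as the
    smallest of the stability limit dt_cr, the excitation digitization step
    dt_f (math.inf for continuous excitations), the largest acceptable
    output step dt_d, and T_r/chi."""
    return min(dt_cr, dt_f, dt_d, t_r / chi)
```

For an unconditionally stable method, `dt_cr = math.inf` and the stability term drops out of the selection.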
The typology of Eq. (32) has five major deficiencies. First, with the exception of period elongation and amplitude decay [1, 10, 29, 33, 52], no theoretical relevance seems to exist between \(\frac{{T_{r} }}{\chi }\) and accuracy, and period elongation and amplitude decay cannot well explain the role of \(\frac{{T_{r} }}{\chi }\) in Eq. (32); see [1, 29–32]. Second, the integration step size should be set such that, with negligible additional inaccuracy for the lower modes, the higher, erroneous modes of oscillation can be eliminated (when Eq. (5) is obtained from discretization in space). Accordingly, in view of Fig. 9, artificial damping and the origin of Eq. (5) would rather be included in Eq. (32). Alternatively, since the details of Fig. 9 differ for different integration methods, the integration method needs to be taken into account in Eq. (32); as a third alternative, the obtained responses are to be controlled, also for adequate selection of the parameters. None of these approaches seems to be properly addressed in Eq. (32) (or its implementation). In addition, the existing ambiguities about the notion of small and large modes highlight the ambiguities about the role of artificial damping in Eq. (32). Third, a deficiency in the typology of Eq. (32) is the fact that, when the excitation is available as a digitized record (i.e. \({}_{f}\Delta {\kern 1pt} t\) is finite), and the consequence of Eq. (32) is such that

$$\Delta {\kern 1pt} t < {}_{f}\Delta {\kern 1pt} t$$(33)
it is not simple to carry out the time integration analysis with values of \(\Delta {\kern 1pt} t\) satisfying Eqs. (32) and (33). A supplementary practical equation, to be satisfied while taking into account Eq. (33), is as stated below:
An approach to considering Eqs. (32) and (34) simultaneously, seemingly addressed for the first time in this chapter, is to replace Eq. (32) with the following (\(\Delta {\kern 1pt} t^{\prime}\) is used merely for the computation of \(\Delta {\kern 1pt} t\)):
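One plausible reading of such a replacement (an assumption of mine, not necessarily the chapter's own formulation) is to round the tentative step \(\Delta {\kern 1pt} t^{\prime}\) down so that an integer number of integration steps fits each digitization step, making every digitization instant an integration station:

```python
import math

def align_step_with_digitization(dt_prime, dt_f):
    """Choose dt <= dt_prime such that dt_f is an integer multiple of dt;
    dt_prime is the step demanded by accuracy/stability alone, dt_f the
    excitation digitization step (math.inf for continuous excitations)."""
    if math.isinf(dt_f):
        return dt_prime
    n = math.ceil(dt_f / dt_prime)   # integration sub-steps per excitation step
    return dt_f / n
```

For example, a tentative step of 0.007 s against a record digitized at 0.02 s leads to three integration steps of 0.02/3 s per excitation step.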
The fourth deficiency in the typology of Eq. (32) originates in \(\chi\). In fact, besides the nonlinearity and its type, it is essential to take into account the severity of the nonlinear behaviour. As a simple example, the impact between two undamped single-degree-of-freedom systems can be neglected when the velocities are about zero at all instants of impact; this leads to negligibility of the nonlinear behaviour. The case is completely different when the impacts occur at considerable velocities. The difference between these two cases (and, in general, the difference between cases with different severities of a special type of nonlinear behaviour) seems not to be taken into account in Eq. (32). Towards a replacement for Eq. (32), attention can be paid to the discussions reported in the literature on nonlinearity quantification and measurement, e.g. see [100].
Finally, the fifth deficiency in the typology of Eq. (32) is that, though the nonlinear behaviour and its complexity affect Eq. (32) via \(\chi\), the effect of the nonlinearity analysis parameters (e.g. the tolerance, \(\bar{\delta }\)) on the accuracy is disregarded. Two approaches to overcome this deficiency are to take into account the value of \(\Delta {\kern 1pt} t\) while assigning values to the nonlinearity parameters (see [42, 63, 64, 79]), or, alternatively, to control the errors after the time integration analysis [28, 55, 95–97]. The latter might be unexpectedly costly.
In practical implementation of Eqs. (32) or (35), there is no ambiguity about the values to be assigned to \({}_{f}\Delta {\kern 1pt} t\) and \(\Delta {\kern 1pt} t_{d}\). However, \(\Delta {\kern 1pt} t_{cr}\) is under the effect of damping, and we cannot guarantee that disregarding viscous damping is on the safe side of numerical stability, i.e. that it results in larger values of \(\Delta {\kern 1pt} t_{cr}\) [30–33, 52]. Without a safe-side assumption, \(\Delta {\kern 1pt} t_{cr}\) needs to be computed considering the amount of viscous damping in the different natural modes; the smallest \(\Delta {\kern 1pt} t_{cr}\), not necessarily associated with a special mode, would then control the numerical stability. The computation is not only complicated and computationally expensive (for several reasons, including the determination of the natural frequencies and the corresponding viscous dampings), but the eigen-solution is also in conceptual contradiction with the nature of direct time integration. The deficiency becomes pronounced in the presence of nonlinearity, where the natural frequencies change throughout the integration interval. With the safe-side assumption, independent of the amount of viscous damping, the natural mode causing the smallest \(\Delta {\kern 1pt} t_{cr}\), mostly the last natural mode, would control the numerical stability in Eqs. (32) and (35) (see Eq. (18) and the existing conditionally stable methods [1, 10, 11, 15, 24, 28–33, 52]). Furthermore, if the help of viscous damping to numerical stability is guaranteed, the definition of unconditional stability in Eq. (19) can be changed to:
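To illustrate numerically that disregarding viscous damping is not necessarily on the safe side, one can use a damped critical-step expression reported in some explicit-dynamics references for the central difference method (this formula is an assumption borrowed from that literature, not an equation of this chapter):

```python
import math

def dt_cr_central_difference(omega, xi=0.0):
    """Critical step of the central difference method for a mode with circular
    frequency omega and viscous damping ratio xi, per the damped limit
    dt_cr = (2/omega)*(sqrt(1+xi**2) - xi) quoted in explicit-dynamics texts;
    for xi > 0 the limit is *smaller* than the undamped 2/omega."""
    return (2.0 / omega) * (math.sqrt(1.0 + xi**2) - xi)

def dt_cr_system(omegas, xis):
    """The smallest modal critical step controls the stability of the analysis."""
    return min(dt_cr_central_difference(w, x) for w, x in zip(omegas, xis))
```

Here the damped limit is below the undamped one, so assuming zero damping would overestimate \(\Delta {\kern 1pt} t_{cr}\), exactly the unsafe situation described above.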
The above discussion, and assigning an adequate value to \(\Delta {\kern 1pt} t_{cr}\) in Eqs. (32) and (35), are more complex in the presence of non-proportional damping; in view of the versatility of time integration in the analysis of non-proportionally damped systems [28–33], this complexity is indeed a practical drawback. Considering these, it is essential to emphasize once again the existing comment not to use integration methods with finite \(\Delta {\kern 1pt} t_{cr}\) [30–32] (when possible, regardless of the type of damping), causing the simplification below in Eq. (35):
A last and most crucial deficiency in Eqs. (32), (35), and (37) lies in the notion of \(T_{r}\). Theoretically, \(T_{r}\) implies the smallest period with considerable contribution in the response [49, 66, 101]. The expression considerable contribution is vague; besides, since the response is not at hand before the analysis, how can we determine the periods of oscillations? Furthermore, even if the response could somehow be predicted, no specific comment seems to exist regarding determination of the value of \(T_{r}\), and the computation of the oscillatory modes is computationally expensive. To partly overcome these shortcomings, we can compute \(T_{r}\) by using the comments on the natural modes with considerable contribution in the response, where existing (e.g. see [90, 91, 99]). This approach, though it leads to determination of \(T_{r}\) independently of the response, lacks sufficient theoretical explanation.
A practical way (in some cases costly) to lessen the accuracy-related shortcomings, including those originating in \(T_{r}\) and \(\chi\), is as follows: upper-estimate \(T_{r}\) (in view of the low cost of the computation, no special approach is essential for the upper-estimation); assign the value obtained from Eqs. (32), (35), or (37) to \(\Delta {\kern 1pt} t\); carry out a first analysis; repeat the analysis with halved steps; and compare the two responses. If the difference is not sufficiently small (the error of the response is of the size of the difference), once again repeat the analysis with halved steps, and eventually stop the repetitions when the difference is negligible. Considering that such repetitions are recommended in the literature on the numerical solution of differential equations and on practical engineering applications, specifically structural dynamics [7, 28, 55, 99, 102, 103], and that considerable theoretical explanations exist for repetition-based accuracy controls, e.g. see [42, 96, 97], it is reasonable to rely on these repetitions to compensate for the ambiguities and arrive at sufficient accuracy. Meanwhile, it is worth noting that implementation of the repetitions might be insufficient, because of probable improper convergence, in problems with complex oscillatory behaviour, specifically those involving nonlinearity [42, 74–76, 78, 104, 105]. Implementation of more advanced error control methods can provide more reliability, e.g. see [55].
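The repetition-based control described above can be sketched as follows (`run_analysis` is a placeholder for a full transient analysis returning a representative scalar response, e.g. a peak value; the convergence tolerance is a hypothetical choice):

```python
def refine_until_converged(run_analysis, dt0, rel_tol=0.01, max_halvings=12):
    """Repeat a transient analysis with halved integration steps until two
    successive runs agree within rel_tol; the difference between the last two
    runs is taken as an estimate of the error of the coarser run."""
    dt, prev = dt0, run_analysis(dt0)
    for _ in range(max_halvings):
        dt = dt / 2.0
        cur = run_analysis(dt)
        if abs(cur - prev) <= rel_tol * max(abs(cur), 1e-30):
            return cur, dt, abs(cur - prev)   # response, step, error estimate
        prev = cur
    raise RuntimeError("no convergence; improper convergence should be suspected")
```

As a stand-in for a real analysis, `run_analysis = lambda dt: (1.0 - dt) ** round(1.0 / dt)` (explicit Euler for u' = −u over [0, 1]) converges towards e⁻¹ ≈ 0.3679 as the step is halved.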
5 Time Integration and Step Size Selection in Seismic Codes
The material presented in the previous sections was mostly theoretical, discussed in different branches of science and engineering; see [13, 17, 19–22, 26, 35, 37, 106–108]. In this section, attention is paid to seismic analysis and design, as a very important and crucial research area, with direct and indirect effects on human lives and civilization. Accordingly, with attention to seismic activities and locations on the world map, to issues such as how developed the codes/standards are, how developed and how populated the countries/regions are, and, finally, to the availability of the codes/standards to the author, the following seismic codes/standards are reviewed for issues on time integration and step size selection:
- 1. National code/standard of India [109, 110].
- 2. European code/standard [111].
- 3. National code/standard of Turkey [112].
- 4. National code/standard of Greece [113].
- 5. National code/standard of China [114].
- 6. National code/standard of New Zealand [99, 115].
- 7. National code/standard of Iran [90].
- 8. National code/standard of United States [116].
- 9. National code/standard of Japan [117].
- 10.
- 11.
- 12. National code/standard of Romania [122].
- 13.
The footprints of time integration in seismic codes/standards are investigated directly, by looking for integration, time integration, time domain analysis, and time history analysis, or indirectly, by looking for nonlinear analysis, non-proportional damping, non-classical damping, and provisions regarding analyses out of the scope of mode superposition analysis, e.g. the analysis of systems equipped with modern control devices providing non-proportional damping.
Time integration analysis against several ground motion records, and putting the results together (according to a seismic code/standard) in order to arrive at a time history record for each response (or to arrive at responses) to be used in seismic design, is called time history analysis. All of the seismic codes/standards, with the exception of the code/standard of Chile [120, 121], consider time history analysis (and time integration) as an analysis alternative. Some of the important considerations in the seismic codes/standards are briefly addressed in Table 4; the numbers in the last column stand for the seismic codes/standards, as listed at the start of this section. (Table 4 does not present all the related regulations; it attempts to present a brief overview.) Meanwhile, the codes/standards that, in some cases, consider time history analysis as the superior analysis tool are listed below:
- National code/standard of India [109, 110]: in stack-like industrial structures,
- European code/standard [111]: when an isolation system may not be modelled with an equivalent linear method,
- National code/standard of China [114]: for buildings taller than specific heights,
- National code/standard of New Zealand [99, 115]: for long period structures and when the directivity effects (e.g. see [125]) can be significant,
- National code/standard of Japan [117]: for high-rise buildings,
- National code/standard of Romania [122]: similar to the European code/standard,
where, for each code/standard, the cases in which time history analysis is recommended, or is the only seismic analysis tool, are addressed after the code/standard. Finally, the code/standard having comments on the integration step size is the national code/standard of New Zealand [99, 115], while the code/standard explicitly addressing integration as the means for time history analysis is that of the United States [116]. The information above, and that in Table 4, is clear evidence of the advancement of the code/standard of New Zealand [99, 115] (the code/standard introduced with “6” in Table 4), from the point of view of time history analysis and, specifically, integration step size selection.
The comment of the national code/standard of New Zealand on integration step size (see Sect. 6.4.5 in [99]) is summarized in the equation below:
In Eq. (38), \(T_{1}\) is the largest translational period of the first mode, judged by the largest mass contribution, in the direction of the principal component of the earthquake, and \(T_{n}\) denotes the period of the highest mode in the same direction required to achieve 90 % of the mass, as described in the modal response spectrum method [99]. The guidance of the commentary [115] regarding implementation of Eq. (38) is as stated below:
The time step should generally be not greater than \(\frac{{T_{1} }}{100}\), where \(T_{1}\) is the period associated with the first mode of vibration. For analyses involving impact (building pounding, rocking walls, or uplifting foundations), the time step will need to be significantly lower and a starting value of \(\frac{{T_{1} }}{1000}\) is recommended. If convergence is not obtained with a particular time step, reduce it by a factor of 2 and re-run. Once convergence is obtained, make a further reduction and compare the peak results for the target response parameter. If they are within 5 %, the longer time step (which requires less computing running time) is satisfactory [115].
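The quoted workflow can be sketched as follows (`run_peak` is a placeholder returning the peak of the target response parameter for a given step; the \(\frac{{T_{1} }}{100}\), \(\frac{{T_{1} }}{1000}\), factor-of-2, and 5 % figures are those of the commentary):

```python
def nz_time_step_workflow(run_peak, T1, impact=False, tol=0.05, max_runs=8):
    """Start from T1/100 (T1/1000 when impact is involved), halve the step
    until two successive runs agree within tol on the peak of the target
    response parameter, and accept the longer of the two steps, as the
    commentary recommends."""
    dt = T1 / (1000.0 if impact else 100.0)
    peak = run_peak(dt)
    for _ in range(max_runs):
        peak_half = run_peak(dt / 2.0)
        if abs(peak_half - peak) <= tol * abs(peak_half):
            return dt, peak            # the longer step is deemed satisfactory
        dt, peak = dt / 2.0, peak_half
    raise RuntimeError("convergence not obtained")
```

Note that, as discussed below, returning the longer-step result is the commentary's choice on cost grounds; the half-step result is already available and more accurate.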
The above considerations, and specifically the attention to the selection of integration step sizes, are a significant initiative in seismic regulations (the author has not encountered similar details in past seismic codes/standards; also see [126]); hence, the consideration in the code/standard of New Zealand [99, 115] is to be deeply appreciated and acknowledged. Still, some drawbacks and ambiguities seem to exist, to which the remainder of this section is dedicated.
As the first ambiguity, since each time history analysis is composed of several time integration analyses, the cost of time history analysis is generally considerable, and the selection of step sizes in the first analysis (before any repetition) is of high importance. It is unclear why the selection of the step size, and the way the step size decreases in the repetitions of the first analysis, disregard many features of the ground motion record (\({}_{f}\Delta {\kern 1pt} t\) is an exception), as well as the nonlinearity analysis parameters (the nonlinear behaviour is briefly taken into account via \(\chi\)). To explain better: theoretically, depending on the excitation and the linear or nonlinear behaviour, a system may oscillate at frequencies different from its first natural frequencies, leading to different integration step size requirements. This seems not well taken into account in Eq. (38); compare Eqs. (32) and (38). Furthermore, depending on the values assigned to the nonlinearity tolerances, proper convergence [105] of the analyses and reliable estimation of the errors can be considerably affected; e.g. see [42, 63, 78, 80]. Disregarding these issues in Eq. (38) may lead to additional repetitions, and accordingly considerable additional computational cost, and even, in some cases, failure of the repetitions because of round-off.
The second ambiguity is that the theory backing the validity of the recommended accuracy control is not addressed in the code/standard or the supporting material [99, 115]. Specifically, it is worth noting that its partial theoretical backing, namely proper convergence (see Fig. 4 and [42, 63, 105]), may not be fulfilled in nonlinear time integration analysis, even though the purpose of the control in the code/standard of New Zealand is nonlinear analysis; see [42, 63, 72, 74, 75, 80, 104].
Finally, after repeating an analysis and comparing the two responses, until they are within 5 % difference at the peak, the seismic code/standard of New Zealand does not explicitly address the resulting response, and merely mentions that the response obtained using the larger step is satisfactory [115]. As clearly stated in [115], the reason for calling the response obtained from the analysis with the larger integration step size satisfactory is that the other analysis is more costly. This implies that the comment is indeed to consider the response obtained from the analysis with larger steps as final. Since the two analyses are both carried out prior to checking whether the responses are within 5 % difference, it seems reasonable to pay attention to accuracy rather than computational cost, and to consider the response obtained from the analysis with the smaller step size as the final response. (The higher accuracy of the response obtained with smaller step sizes can be explained by the theory behind the error-controlling approach; see [97].) Practically, ambiguities also exist regarding the 5 % difference, the notion of 5 %, and the error control on the peaks, not discussed here for the sake of brevity.
6 Efficient Step Size Selection
As implied in the previous sections, \(\Delta {\kern 1pt} t\) affects the accuracy and the computational cost in opposite manners; also see [30–33, 49, 52]. Therefore, the more we eliminate the restrictions on \(\Delta {\kern 1pt} t\), the better the computational cost and accuracy can be balanced.
Methods have been developed to relax the equality of the sizes of the integration steps throughout the integration interval, e.g. see [66, 127]. The resulting time integration analysis is in general addressed as adaptive time stepping or adaptive time integration analysis. Adaptive time integration analysis starts with the selection of step sizes for the first or first few steps. Carrying out ordinary time integration for the starting steps, the analysis continues in a step-by-step manner. After each step, or each several steps, a pre-assigned criterion is examined, to determine whether the sizes of the next steps need to be changed and, if a change is needed, to determine the amount of the change; see [66]. This process continues till the end of the integration interval, i.e. \(t_{end}\) (see Eqs. (2) and (5)). Adaptive time integration analysis started around the 1970s with the studies of Hibbit and Karlson, Oughourlian and Powell, Flippa and Park, Park and Underwood, and Underwood and Park (see the brief review reported in [66]); it continued in the past decades [66, 127–129], and is in progress, specifically for nonlinear analyses, e.g. see [22, 130–132]. Returning to the process of adaptive time integration, as explained above, implementation of a pre-assigned criterion is essential in an arbitrary adaptive analysis. Some main bases for the criteria are as noted below [66, 127]:
- 1. Errors at the integration stations caused by the integration approximation in the last integration step; or, to say better, the amount of error at the end of the integration step, originating in the approximate integration formulation, assuming zero errors at the start of the integration step and linear behaviour throughout the step. This error is broadly known as the local truncation error [31, 32].
- 2. Periods (or, equivalently, frequencies) of important oscillations in the response, at the integration steps, or, for large MDOF systems, the ‘current characteristic frequency’, defined based on expressions similar to the Rayleigh ratio [66].
- 3. Complexity of the transient behaviour, defined based on a measure named the curvature of the response [66].
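As a toy illustration of the first basis (local-truncation-error-driven adaptivity), the sketch below assumes a first-order model problem and step-doubling error estimation; none of these choices is specific to the methods cited above:

```python
def adaptive_euler(f, u0, t_end, dt0, tol, dt_min=1e-8, dt_max=None, safety=0.9):
    """Adaptive time stepping for u' = f(t, u) with explicit Euler: each step
    is also taken as two half steps; the difference estimates the local
    truncation error, and the next step size is scaled accordingly."""
    t, u, dt = 0.0, u0, dt0
    dt_max = dt_max or (t_end / 4.0)
    while t < t_end:
        dt = min(dt, t_end - t, dt_max)
        full = u + dt * f(t, u)                          # one full step
        mid = u + 0.5 * dt * f(t, u)                     # two half steps
        half2 = mid + 0.5 * dt * f(t + 0.5 * dt, mid)
        err = abs(half2 - full)                          # local error estimate
        if err <= tol or dt <= dt_min:
            t, u = t + dt, half2                         # accept finer result
            dt = min(dt_max, max(dt_min,
                                 safety * dt * (tol / max(err, 1e-30)) ** 0.5))
        else:
            dt = max(dt_min, 0.5 * dt)                   # reject, retry smaller
    return u
```

The `(tol/err)**0.5` scaling reflects the second-order local error of a first-order method; production criteria, e.g. those of [66], are naturally more elaborate.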
The computational costs associated with implementation of the criterion and with the step size changes (factorization) negatively affect the efficiency of adaptive time integration. The significance of these effects depends on the size of the structural system, the complexity of the dynamic behaviour, the length of the integration interval \(\left[ {0,\;t_{end} } \right]\), the adaptive time stepping criterion, and the time integration method. Consequently, from the standpoint of computational cost, analysis considering adaptive time stepping is not necessarily superior to analysis with constant time steps.
For implementation of the criteria and adaptive time stepping, the sizes of the starting steps, and some analysis parameters, should be set adequately and in advance; parameters preventing very slight or very frequent changes of the integration step size are examples [66]. These selections complicate the implementation of adaptive time integration, compared to constant time stepping. It is also worth noting that, in implementation of adaptive time stepping, we cannot predict the computational cost of the analyses; in constant time stepping, the prediction is simple for linear analyses. Considering these, constant time stepping and adaptive time stepping are both broadly accepted in practice, and research in either direction is in progress [21, 34, 115, 132–134], revealing likely balanced needs in the future. In seismic analyses, where the excitations are digitized at equal steps and the digitization step size complicates the selection of the integration step size even further, constant time stepping is popular, e.g. see [10, 28, 29, 99, 135]. Accordingly, the discussion in the remainder of this section concentrates on efficient step size selection in analysis against digitized excitations using constant time steps (i.e. \({}_{f}\Delta {\kern 1pt} t < \infty\), and, in Fig. 1, \(\forall \;\;i:\;t_{i + 1} - t_{i} = {\text{Const}}.\)).
In view of Eqs. (35) and (37), conventional time integration analysis using constant time steps is more efficient when \(\Delta {\kern 1pt} t_{d}\), \({}_{f}\Delta {\kern 1pt} t\), \(\frac{{T_{r} }}{\chi }\), and \(\Delta {\kern 1pt} t_{cr}\) (in Eq. 35) are closer to each other and the largest possible. In other words, unless \(\Delta {\kern 1pt} t_{cr} \to \infty\) (unconditional stability, where \(\Delta {\kern 1pt} t_{cr}\) disappears from the relations) or \(\Delta {\kern 1pt} t_{cr} = 0\) (unconditional instability; an impractical case), it would be ideal to guarantee
once again implying the advantages of unconditional stability. Based on this idea, and towards more efficient step size selection, approaches have been developed to enlarge \(\Delta {\kern 1pt} t_{cr}\) and \({}_{f}\Delta {\kern 1pt} t\) and close the gap between the terms in Eq. (35), e.g. see [49, 135–138]. Considering this, the title of this chapter, and the existing comments on using unconditionally stable methods (see Eqs. (19) and (36)), the discussion is continued concentrating on techniques that enlarge \({}_{f}\Delta {\kern 1pt} t\), while the integration methods are unconditionally stable, in the analysis of linear systems.
Since the discussion is narrowed to transient analysis against digitized excitations (cases with finite \({}_{f}\Delta {\kern 1pt} t\)), as a practical application, it is reasonable to simplify Eq. (5) to seismic analysis against ground accelerations, by considering

$${\mathbf{f}}\left( t \right) = {\mathbf{0}}$$(40)
and

$${\mathbf{u}} = {\mathbf{u}}_{g} + {\mathbf{u}}_{r}$$(41)
In Eq. (41), \({\mathbf{u}}_{g}\) stands for the static displacements of the un-supported degrees of freedom, caused by the ground (support) displacement, and \({\mathbf{u}}_{r}\) denotes the displacements of the un-supported degrees of freedom, additional to the static displacements. In view of Eqs. (40) and (41), Eqs. (5) can be rewritten as stated below [10, 28, 91–93, 139]:

$${\mathbf{M}}{\kern 1pt} {\ddot{\mathbf{u}}}_{r} + {\mathbf{C}}{\kern 1pt} {\dot{\mathbf{u}}}_{r} + {\mathbf{Q}} = - {\mathbf{M}}{\kern 1pt} \Gamma {\kern 1pt} {\ddot u}_{g} \left( t \right)$$(42)
In Eq. (42), \({\ddot u}_{g} \left( t \right)\) represents the ground acceleration, digitized at steps sized \({}_{f}\Delta {\kern 1pt} t\), and \(\Gamma\) is a vector implying the static effect of the ground (support) displacement on the displacements of un-supported degrees of freedom [10].
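A minimal sketch of direct time integration of this seismic form, assuming linear behaviour (\({\mathbf{Q}} = {\mathbf{K}}\,{\mathbf{u}}_{r}\)) and the average-acceleration Newmark method (the method choice is mine, for illustration):

```python
import numpy as np

def newmark_ground_motion(M, C, K, Gamma, ug_ddot, dt, beta=0.25, gamma=0.5):
    """Linear time integration of M*u_r'' + C*u_r' + K*u_r = -M*Gamma*ug''(t)
    by Newmark's method; ug_ddot is the ground acceleration digitized at the
    step dt, and the returned array holds the displacement history u_r."""
    M, C, K = map(np.asarray, (M, C, K))
    n = M.shape[0]
    u, v = np.zeros(n), np.zeros(n)
    a = np.linalg.solve(M, -M @ Gamma * ug_ddot[0])       # consistent initial accel.
    K_eff = M / (beta * dt**2) + C * (gamma / (beta * dt)) + K
    history = [u.copy()]
    for g in ug_ddot[1:]:
        p = (-M @ Gamma * g
             + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
             + C @ (u * gamma / (beta * dt) + v * (gamma / beta - 1.0)
                    + a * dt * (gamma / (2 * beta) - 1.0)))
        u_new = np.linalg.solve(K_eff, p)
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        history.append(u.copy())
    return np.array(history)
```

For an undamped SDOF system with \(T = 1\,\text{s}\) under a constant ground acceleration \(-\omega^{2}\), the exact relative response is \(1 - \cos \omega t\), which the sketch reproduces closely.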
Four techniques to materialize time integration analysis with steps larger than the excitation steps, without disregarding the excitations, are briefly reviewed below:
- 1. Time integration of integrated problems.
This technique was proposed by S-Y. Chang in 2002 [137], though not as a technique to enlarge the integration steps. Ordinary time integration is implemented in the analysis of the original problem, modified slightly; the modified problem consists of the integral of the equation of motion and the corresponding initial conditions. Accordingly, the original digitized \({\mathbf{f}}(t)\) is integrated, and \({}_{f}\Delta {\kern 1pt} t\) loses its meaning and can be eliminated from Eqs. (32), (35), and (37). This is a considerable achievement, obtained at the price of the additional computational cost, essential mainly for the numerical integration of \({\mathbf{f}}(t)\). A few examples have been studied; in all of them, the loss of accuracy is small and the saving of computational cost is considerable.
- 2. Convergence-based replacement of excitations.
This technique was proposed by the author in 2008 [49], specifically in order to replace digitized excitations with excitations digitized at larger steps, i.e.
$${}_{f}\Delta {\kern 1pt} t{\kern 1pt}_{new} = n{\kern 1pt} {}_{f}\Delta {\kern 1pt} t,\quad n \in \left\{ {2,{\kern 1pt} 3,{\kern 1pt} 4, \ldots } \right\}$$(43)

and later extended to non-integer enlargements [138], i.e.
$${}_{f}\Delta {\kern 1pt} t{\kern 1pt}_{new} = r{\kern 1pt} {}_{f}\Delta {\kern 1pt} t,\quad r = \frac{{n_{1} }}{{n_{2} }},\quad n_{1} > n_{2} ,\quad n_{1} \in \left\{ {2,3,4, \ldots } \right\},\quad n_{2} \in \left\{ {1,2,3, \ldots } \right\}$$(44)Both versions of the technique have been successfully implemented in the analysis of many real problems, including frames, short, mid-rise, and tall buildings, different bridges, space structures, silos, water tanks, a cooling tower, etc. [49, 135, 139–154], and have undergone theoretical studies [138, 155–163]; see more details in Sect. 7.
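The rational-number enlargement of Eq. (44) can be sketched as below. This is only a naive resampling by linear interpolation onto steps sized \((n_{1}/n_{2})\,{}_{f}\Delta t\), not the actual replacement of Eq. (45), which is defined so as to preserve the second order of accuracy; the record and step values are hypothetical:

```python
def resample_excitation(f, dt, n1, n2):
    """Resample a record f, digitized at steps dt, onto steps sized
    (n1/n2)*dt by linear interpolation: a simplified stand-in for
    the replacement excitation of Eq. (45)."""
    dt_new = dt * n1 / n2
    t_end = (len(f) - 1) * dt
    out, t = [], 0.0
    while t <= t_end + 1e-12:
        i = min(int(t / dt), len(f) - 2)   # interval containing t
        w = t / dt - i                     # interpolation weight in [0, 1]
        out.append((1.0 - w) * f[i] + w * f[i + 1])
        t += dt_new
    return out, dt_new

# Hypothetical record at 0.01 s, enlarged by r = 3/2 to 0.015 s
f_new, dt_new = resample_excitation([0.0, 1.0, 2.0, 3.0], 0.01, 3, 2)
```

The actual technique additionally enriches the excitation (Eqs. (45)–(50)) so that the convergence rate of the analysis is not degraded.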
-
3.
Impact-based replacement of excitations.
This technique, proposed by M. Hosseini and I. Mirzaei in 2012 [164], replaces each section of the excitation record located totally above or totally below the \({\ddot u}_{g} = 0\) axis with a single datum \(\left( {{\ddot u}_{g} } \right)\), equal to the area of the section above or below the \({\ddot u}_{g} = 0\) axis, applied at the centroid of the section (see Fig. 10). Implementation of the technique in the analysis of several problems has been successful.
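The following sketch illustrates the idea (not the exact formulation of [164]): each run of same-sign samples of a hypothetical digitized record is replaced by a single impulse, with its area computed by the rectangle rule and applied at the centroid time of the run:

```python
def impact_replacement(a, dt):
    """Replace each run of same-sign samples of a record a (digitized
    at steps dt) by one impulse: (centroid time, area of the run).
    Areas use the rectangle rule for simplicity."""
    impulses, start = [], 0
    for i in range(1, len(a) + 1):
        # a run ends at the record's end or where the sign changes
        if i == len(a) or (a[i] >= 0.0) != (a[start] >= 0.0):
            area = sum(a[start:i]) * dt
            if area != 0.0:
                # first moment of the run about t = 0
                moment = dt * sum(a[j] * (j * dt) for j in range(start, i))
                impulses.append((moment / area, area))
            start = i
    return impulses

# Hypothetical record: one positive run, then one negative run
pulses = impact_replacement([1.0, 1.0, -2.0, -2.0], dt=0.5)
```

Each tuple gives the time and magnitude of one equivalent impulse, so the analysis can proceed with far fewer excitation data.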
-
4.
Integration after combining several sequential integration steps analytically. This technique, first suggested by the author in 2009 [165], combines the ordinary time integration computations of \(p^{{\prime }}\) \(\left( {p^{{\prime }} \in Z^{ + } - \{ 1\} } \right)\) sequential steps analytically, in order to arrive at \(\left({{\mathbf{u}}_{p} ,{\dot{\mathbf{u}}}_{p} ,{\ddot{\mathbf{u}}}_{p} } \right)\) directly from \(\left({{\mathbf{u}}_{{p - p^{\prime}}}, {\dot{\mathbf{u}}}_{{p - p^{\prime}}},{\ddot{\mathbf{u}}}_{{p - p^{\prime}}}} \right)\), and hence provides the capability of time integration with integration steps \(p^{\prime}\) times larger than the excitation steps, with no sacrifice of accuracy \(\left( {p \in Z^{ + } - \{ 1\} ,p \ge p^{\prime}} \right)\). However, the additional computational cost is not necessarily negligible [160, 166]. The technique was first proposed for SDOF linear systems [165], later enhanced towards further reduction of computational cost [166], and afterwards extended, in one attempt, to the analysis of MDOF systems [160] and, in another attempt, to nonlinear analyses [167]. Though the loss of accuracy is zero, because of the additional computational cost and for the sake of efficiency, the enlargement is limited to specific values of n in Eq. (43) (four seems an appropriate upper bound for n; Eqs. (43) and (44) are common between the second and fourth techniques).
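For a linear SDOF system, the idea of combining steps can be sketched as follows: the single-step recursion \(\mathbf{x}_{i+1} = \mathbf{A}\,\mathbf{x}_{i} + \mathbf{L}\,f_{i+1}\) is recovered numerically, and two steps are combined into \(\mathbf{x}_{i+2} = \mathbf{A}^{2}\mathbf{x}_{i} + (\mathbf{A}\mathbf{L})f_{i+1} + \mathbf{L}f_{i+2}\). The system properties here are hypothetical, and the average-acceleration Newmark method is merely an example; [165] derives such combinations analytically:

```python
import numpy as np

def newmark_step(x, f_next, m, c, k, dt):
    """One average-acceleration Newmark step for a linear SDOF system.
    State x = (u, v, a); returns the state at the next time station."""
    u, v, a = x
    S = m + 0.5 * c * dt + 0.25 * k * dt * dt
    u_pred = u + dt * v + 0.25 * dt * dt * a
    v_pred = v + 0.5 * dt * a
    a_new = (f_next - c * v_pred - k * u_pred) / S
    return np.array([u_pred + 0.25 * dt * dt * a_new,
                     v_pred + 0.5 * dt * a_new,
                     a_new])

def step_operators(m, c, k, dt):
    """Recover A and L of the linear recursion x_{i+1} = A x_i + L f_{i+1}
    by probing the step with unit states and a unit load."""
    A = np.column_stack([newmark_step(e, 0.0, m, c, k, dt)
                         for e in np.eye(3)])
    L = newmark_step(np.zeros(3), 1.0, m, c, k, dt)
    return A, L

m, c, k, dt = 1.0, 0.1, 4.0, 0.01        # hypothetical SDOF properties
A, L = step_operators(m, c, k, dt)
A2, L2 = A @ A, A @ L                    # two steps combined into one
```

With A2 and L2 precomputed once, each pair of ordinary steps is replaced by a single update, which is the source of the cost saving; since the combination is exact, no accuracy is lost.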
A brief comparison between these four techniques is presented in Table 5, where the numbers in the first row correspond to the numbering of the techniques introduced above. In view of Table 5 and the fact that there exists a rational number in any arbitrary neighbourhood of a real number [57] (see the last row in Table 5), the second technique can be considered the superior one. The next section is dedicated to a review of the second technique and its most recent advancements.
7 Recent Advancements of a Step Size Enlargement Technique
As stated in the previous section, towards more efficient seismic analysis by time integration with constant steps, a technique was proposed in 2008 [49] and later extended in 2013 [138]. The efficiency is provided by enlarging \({}_{f}\Delta {\kern 1pt} t\), such that it is prevented from dominating Eqs. (35) and (37), while also bounding the induced inaccuracy. Special attention is paid to: (1) convergence, as the main essential requirement of approximate computations [55, 59, 60], (2) the recommended second order of accuracy [30–32], and (3) the effect of approximations in the initial conditions, excitations, etc., on the rate of convergence [32, 49, 71]. These considerations lead to the replacement of the excitation f with a new excitation, \({\tilde{\mathbf{f}}}\), defined below:
and digitized at steps sized \({}_{f}\Delta {\kern 1pt} t_{new}\), introduced in Eq. (44). Regarding the new symbols in Eqs. (44) and (45), when the excitation step size, \({}_{f}\Delta {\kern 1pt} t\), governs Eq. (35) or (37), the replacement addressed in Eq. (45) changes the case by assigning to \(n_{{{\kern 1pt} 1}}\) and \(n_{{{\kern 1pt} 2}}\) the smallest positive integers satisfying
The value of \(n^{\prime}\) in Eq. (45) can be obtained from
\(t^{\prime}_{end}\) is the only number satisfying
and \({\mathbf{g}}(t)\) is available from
where \({\bar{\mathbf{O}}}\) is the zero vector and \({\bar{\mathbf{g}}}\) is a linear enrichment of f, defined below:
Although the technique was proposed in 2008 [49] and then extended in 2013 [138], this is the first time that the formulation has been presented in the detail stated above, considering rational-number enlargements. The technique has been implemented in many time integration analyses, resulting in considerable reductions of computational cost at the price of negligible losses of accuracy (see Table 6). It is even worth noting that, in two cases, the computational cost was reduced while the accuracy was increased [152, 154].
A seemingly weak point in the implementation of the technique is the vagueness in the notion and determination of \(T_{r}\) in Eq. (46), potentially entailing ambiguities in the selection of n (or, better, \(\frac{{n_{\,1} }}{{n_{\,2} }}\)). Nevertheless, as implied in Sects. 4 and 5, these ambiguities also exist in ordinary time integration analyses using constant integration steps, as well as in some adaptive time stepping methods. Therefore, the ambiguities in defining and computing \(T_{r}\) are not deficiencies of the technique proposed in [49], but deficiencies of ordinary time integration that affect the performance of the technique proposed in 2008. The ambiguities can be lessened by comparing the computed response with the response obtained from an analysis with smaller steps [7, 28, 55, 95–97, 102], as discussed in the ending parts of Sect. 4. However, questions persist: How should the integration step size and the excitation be set for the analysis with smaller steps? Should the technique also contribute to the decrease of the step size (by assigning smaller values to n or \(\frac{{n_{1} }}{{n_{2} }}\)), or does it suffice to reduce the excitation step size and determine the excitation by linear interpolation? What is the role of the errors originating in the technique in the total accuracy? As a brief comment, repetition of the first analysis can be considered a means to control the additional errors, including those caused by the technique. The repetitions can first be carried out with respect to the technique and then, after ensuring that the additional errors associated with the technique are sufficiently small, with respect to \(\Delta {\kern 1pt} t\); the details, explained in [135, 159], imply no considerable additional cost compared to ordinary repetition-based accuracy controls.
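Repetition-based accuracy control can be sketched as below: a hypothetical linear SDOF system is analysed against a hypothetical digitized ground acceleration with step \(\Delta t\) and again with \(\Delta t/2\), and the result is accepted when the relative difference of the peak displacements falls below a user-set tolerance (the integrator, tolerance, and data are illustrative only):

```python
def peak_displacement(m, c, k, accel, dt_rec, dt):
    """Peak |u| of a linear SDOF system under ground acceleration
    `accel` (digitized at steps dt_rec), integrated by the
    average-acceleration Newmark method with step dt."""
    t_end = (len(accel) - 1) * dt_rec
    S = m + 0.5 * c * dt + 0.25 * k * dt * dt
    u = v = a = 0.0
    peak, t = 0.0, 0.0
    while t < t_end - 1e-12:
        t += dt
        i = min(int(t / dt_rec), len(accel) - 2)
        w = t / dt_rec - i
        f = -m * ((1.0 - w) * accel[i] + w * accel[i + 1])  # linear interp.
        u_pred = u + dt * v + 0.25 * dt * dt * a
        v_pred = v + 0.5 * dt * a
        a = (f - c * v_pred - k * u_pred) / S
        u = u_pred + 0.25 * dt * dt * a
        v = v_pred + 0.5 * dt * a
        peak = max(peak, abs(u))
    return peak

# Repeat the analysis with dt and dt/2 and compare peak responses
rec, dt_rec = [0.0, 1.0, -1.0, 0.5, 0.0], 0.02    # hypothetical record
p1 = peak_displacement(1.0, 0.05, 40.0, rec, dt_rec, 0.01)
p2 = peak_displacement(1.0, 0.05, 40.0, rec, dt_rec, 0.005)
accepted = abs(p1 - p2) / p2 < 0.05               # illustrative tolerance
```

In practice, as explained in [135, 159], the repetition would first address the technique itself (smaller n or \(\frac{{n_{1} }}{{n_{2} }}\)) and only then the step size \(\Delta t\).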
Furthermore, the computational cost associated with Eqs. (45)–(50) is negligible compared to the cost of time integration (except for systems with one or two degrees of freedom [49, 147, 157]). Accordingly, the amount of the computational cost reduction in linear analyses [135] can be stated as
and the changes of the cost reduction with respect to the enlargement can be expressed as:
With attention to Eqs. (51) and (52), recently,
is suggested as a reasonable practical restriction on the selection of n (and \(\frac{{n_{{{\kern 1pt} 1}} }}{{n_{{{\kern 1pt} 2}} }}\)) [139, 159], changing Eq. (46) to
Equation (54) should be considered together with Eqs. (44), (45), and (47)–(50), when the original excitation step size is the governing term in Eq. (35) or (37) (that is, when the technique can be implemented). The consequence is an upper bound on the computational cost reduction of linear analyses, given by
Another important challenge for the technique [49] is its performance when implemented in nonlinear time integration analysis. According to the numerical studies carried out (see Table 6), the technique performs better in the analysis of linear behaviour (from the standpoints of both accuracy and computational cost reduction) [135, 146, 150, 152]. Two main reasons are: (1) while convergence and second order of accuracy are the main concepts of the technique, accuracy, numerical stability, consistency, and convergence are still unresolved issues in nonlinear analyses [2, 42, 63, 72–80, 168]; (2) with larger integration steps, the number of iterations in the nonlinearity solutions may increase, and the computational cost associated with these iterations can offset the cost reductions originating in the technique and, accordingly, diminish its efficiency.
Considering issues like those stated above, further study to clarify the persisting ambiguities is essential. Some main directions towards more efficient step size enlargement by the technique proposed in [49] are listed below:
-
1.
Further clarification regarding the values to be assigned to \(T_{r}\) in Eqs. (35) and (37), and more reliable selection of the enlargement scaling factor (n or \(\frac{{n_{\,1} }}{{n_{\,2} }}\)) to be implemented in Eq. (44),
-
2.
Better performance when the technique is implemented in the analysis of complex (including nonlinear) structural systems,
-
3.
Better control of accuracy.
-
4.
Implementation in adaptive time integration analysis.
Meanwhile, the first point above can be considered highly important for the improvement of integration step size enlargement techniques other than that proposed in [49].
8 Closure
Time integration is a versatile tool to analyze semi-discretized equations of motion, and many other initial value problems from different origins. The integration step size, \(\Delta {\kern 1pt} t\), is the main analysis parameter of time integration, which, together with nonlinearity and the methods' parameters, affects the analysis features, as stated below:
-
(a)
Smaller values of \(\Delta {\kern 1pt} t\) generally lead to more accurate responses. This is not necessarily true in the analysis of nonlinear systems or systems with complex (e.g. highly oscillatory) behaviour. For linear analyses, with sufficiently small integration steps not affected by round-off, more accuracy can be guaranteed when the analysis is repeated with smaller steps (the required smallness depends on the problem, the integration method, and the computational facilities).
-
(b)
Except for unconditionally stable and unconditionally unstable analyses, a smaller \(\Delta {\kern 1pt} t\) can be beneficial for numerical stability.
-
(c)
\(\Delta {\kern 1pt} t\) has no effect on the order of accuracy.
-
(d)
\(\Delta {\kern 1pt} t\) has no effect on convergence, though it can affect the convergence trend.
-
(e)
Smaller values of \(\Delta {\kern 1pt} t\) imply more computational cost for linear analyses. The case might be different for nonlinear analyses, depending on the type and severity of the nonlinear behaviour, the nonlinearity parameters, and the time integration method. Some special cases are discussed.
-
(f)
Smaller values of \(\Delta {\kern 1pt} t\) in general imply less artificial damping. This is generally valid for both undamped and damped analyses. Values to be assigned to the parameters of artificial damping should be set carefully.
In the selection of the integration step size, especially for the analysis of MDOF structural systems with constantly sized steps, the emphasis is on using unconditionally stable time integration methods (the case is different for wave propagation problems, as addressed in the literature at times). The requirements of numerical stability of linear analyses, obtained from spectral analysis of the amplification matrix, i.e. spectral stability, are necessary and sufficient for linear analyses, but merely necessary for nonlinear analyses. Even in time integration analyses with unconditionally stable methods, ambiguities exist in conventional step size selection, as well as in the comment of the national seismic code/standard of New Zealand (a code/standard with comments on integration step size selection). The ambiguities are greater in nonlinear analyses. Some comments are discussed. Specifically, in view of the ambiguities in integration step size selection, control of the accuracy, for instance by repetition of the analyses with smaller steps, is necessary. Some additional requirements must be satisfied in the presence of nonlinearity.
Of the thirteen seismic codes/standards reviewed in this chapter, all except the code/standard of Chile consider time history analysis and time integration as tools for seismic analysis. Nevertheless, only in a few codes/standards is time history analysis recommended as the only or the superior tool to analyze the semi-discretized equations of motion. Also, only a few codes/standards set specific regulations on the details of time history analysis, and there is one code/standard, that of New Zealand, with comments on the details of time integration analysis and the selection of integration step sizes. Although the initiative of the seismic code/standard of New Zealand is worthy of sincere and deep appreciation and acknowledgement, ambiguities and flaws exist; some are discussed in this chapter.
In time integration analysis against digitized excitations, the excitation step size should be taken into account in the selection of the integration step size. In order to increase the efficiency, digitized excitations can be replaced with excitations digitized at larger steps. The existing techniques have been briefly reviewed; for the seemingly superior one, detailed explanations have been presented and the challenges addressed.
Besides time integration analysis using constant integration steps, time integration can be carried out using adaptively sized steps, an approach still not seriously implemented in seismic analysis. Adaptive time integration, although directed towards more efficiency, is not necessarily more efficient than analysis with constantly sized steps. The efficiency depends on several parameters, including the complexity of the behaviour, the probable nonlinearity, the method of adaptive time integration and the adequate selection of its details, and even the time integration method.
Some areas for further research on time integration and the step size selection are stated below:
-
1.
Effects of viscous damping on the numerical stability of time integration methods are to be studied further.
-
2.
The theory of numerical stability and its practical consideration are not yet well established when the damping is non-proportional.
-
3.
The existing comments on integration step size selection need to be improved for more reliability, a more rigorous supporting theory, and consideration of more issues, while preserving simplicity.
-
4.
Further investigation on step size selection, for implementation in nonlinear analyses, is essential.
-
5.
More reliable yet simple practical error-controlling methods are to be developed and considered in seismic codes/standards. In this regard, the initiative of the code/standard of New Zealand is sincerely acknowledged.
-
6.
For integration methods with numerical (artificial) damping, practical methods for selecting the methods' parameters need to be developed.
-
7.
The existing approaches to adaptive time stepping involve the selection of parameters that are partly problem-dependent. Further research towards simplification is essential.
-
8.
A priori estimations do not exist for the computational cost of adaptive, as well as nonlinear, time integration analyses; accordingly, unlike for linear constant-time-stepping analyses, the efficiencies are unclear at the start of nonlinear and/or adaptive analyses. Further research is essential.
-
9.
More adequate methods, techniques, and approaches are to be developed for the time integration analysis of nonlinear systems, specifically to guarantee simplicity, continuation, and sufficiency of accuracy without high computational cost.
-
10.
In view of the stochastic nature of earthquakes, the necessity in many codes/standards to study the seismic behaviour of structural systems, and the ever-increasing complexity and size of structural systems, efforts towards more efficient time history analysis and more reasonable selection of \(\Delta {\kern 1pt} t\) are essential.
References
Bathe KJ (1996) Finite element procedures. Prentice-Hall, USA
Belytschko T, Liu WK, Moran B (2000) Non-linear finite elements for continua and structures. Wiley, USA
Henrych J (1990) Finite models and methods of dynamics in structures. Elsevier, The Netherlands
Houbolt JC (1950) A recurrence matrix solution for the dynamic response of elastic aircraft. J Aeronaut Sci 17:540–550
Katona MG, Zienkiewicz OC (1985) A unified set of single step algorithms part 3: the beta-m method, a generalization of the Newmark scheme. Int J Numer Meth Eng 21:1345–1359
Soroushian A, Farjoodi J (2008) A unified starting procedure for the Houbolt method. Commun Numer Meth Eng 24:1–13
Hairer E, Wanner G (1996) Solving ordinary differential equations II: stiff and differential-algebraic problems. Springer, USA
Golub GH, Ortega JM (1992) Scientific computing and differential equations. Academic Press, USA
D’Souza AF, Garg VK (1984) Advanced dynamics: modeling and analysis. Prentice Hall, USA
Chopra AK (1995) Dynamics of structures: theory and application to earthquake engineering. Prentice-Hall, USA
Craig RR (1981) Structural dynamics: an introduction to computer methods. Wiley, USA
Hilber HM, Hughes TJR, Taylor RL (1977) Improved numerical dissipation for time integration algorithms in structural dynamics. Earthq Eng Struc 5:283–292
Chung J, Hulbert GM (1993) A time integration algorithm for structural dynamics with improved numerical dissipation: the generalized-α method. J Appl Mech-T ASME 60:371–375
Clough RW (1973) Numerical integration of equations of motion. Lectures on finite element methods in continuum mechanics, University of Alabama, USA
Wilson EL (1968) A computer program for the dynamic stress analysis of underground structures. Report No. 68-1, University of California, Berkeley, USA
Wilson EL, Farhoomand I, Bathe KJ (1973) Non-linear dynamic analysis of complex structures. Earthq Eng Struc 1:241–252
Bathe KJ, Wilson EL (1973) Stability and accuracy analysis of direct integration methods. Earthq Eng Struc 1:283–291
Zlatev Z, Berkowicz R, Prahm LP (1984) Implementation of a variable step size variable formula method in the time integration part of a code for treatment of long-range transport of air pollutants. J Comput Phys 55:278–301
Brüls O, Golinval JC (2006) The generalized-α method in mechatronic applications. Z Angew Math Phys 86:748–758
Erickson BA, Nordström J (2014) Stable, high order accurate adaptive schemes for long time, highly intermittent geophysics problems. J Comput Appl Math 271:328–338
Tang XW, Zhang XW, Uzuoka R (2015) Novel adaptive time stepping method and its application to soil seismic liquefaction analysis. Soil Dyn Earthq Eng 71:100–1013
Liu T, Li Q, Zhao C (2013) An efficient time-integration method for nonlinear dynamic analysis of solids and structures. Sci China: Phys, Mechan Astron 56:798–804
Eggl S, Dvorak R (2010) An introduction to common numerical integration codes used in dynamical astronomy. In: Dynamics of small solar system bodies and exoplanets. Springer, Germany, pp 431–480
Soroushian A, Saaed A, Arghavani M, Rajabi M, Sharifpour MM (2011) Two versions of the Wilson-θ time integration method. In: Proceedings of 10th biennial conference on vibration problems (ICoVP-2011), Prague, Czech Republic
Katsikadelis JT (2013) A new direct time integration method for the equations of motion in structural dynamics. Z Angew Math Phys 94:757–774
Bursi OS, He L, Bonelli A, Pegon P (2010) Novel generalized-α methods for interfield parallel integration of heterogeneous structural dynamic systems. J Comput Appl Math 234:2250–2258
Newmark NM (1959) A method of computation for structural dynamics. J Eng Mech-ASCE 85:67–94
Clough RW, Penzien J (1993) Dynamics of structures. McGraw-Hill, Singapore
Humar JL (1990) Dynamics of structures. Prentice-Hall, USA
Hughes TJR (1987) The finite element method: linear static and dynamic finite element analysis. Prentice-Hall, USA
Belytschko T, Hughes TJR (1983) Computational methods for transient analysis. Elsevier, The Netherlands
Wood WL (1990) Practical time stepping schemes. Oxford, USA
Geradin M, Rixen DJ (2015) Mechanical vibrations theory and applications to structural dynamics. Wiley, USA
Shimada M, Masuri S, Tamma KK (2015) A novel design of an isochronous integration [i Integration] framework for first/second order multidisciplinary transient systems. Int J Numer Meth Eng 102:867–891
Akbarzadeh-Sharbaf A, Giannacopoulos DD (2013) Finite-element time-domain solution of the vector wave equation in doubly dispersive media using Möbius transformation technique. IEEE T Antenn Propag 61:4158–4166
Krenk S (2007) Time integration in solid mechanics. In: Proceedings of 20th Nordic seminar on computational mechanics, Chalmers Tkniska Hoskola, Sweden
Artuzi WA (2005) Improving the Newmark time integration scheme in finite element time domain methods. IEEE Microw Wirel Co 15:898–900
Erlicher S, Bonaventura L, Bursi OS (2001) The analysis of the Generalized-α method for nonlinear dynamic problems. Comput Mech 28:83–104
Wriggers P (2002) Computational contact mechanics. Wiley, USA
Hughes TJR, Pister KS, Taylor RL (1979) Implicit-explicit finite elements in nonlinear transient analysis. Comput Meth Appl Mech Eng 17(18):159–182
Gavin H (2001) Structural dynamics. Class Notes CE 283, Duke University, USA
Soroushian A (2003) New methods to maintain responses ‘convergence and control responses’ errors in the analysis of nonlinear dynamic models of structural systems. PhD dissertation, University of Tehran, Iran (in Persian)
Soroushian A (2011) With proper convergence towards error evaluation in nonlinear analyses. In: Proceedings of 11th US national congress on computational mechanics (11th USNCCM), Minneapolis, USA
Allgower EL, Georg K (1980) Numerical continuation methods, an introduction. Springer, USA
Cook RD, Malkus DS, Plesha ME, Witt RJ (2002) Concepts and applications of finite element analysis. Wiley, USA
Crisfield MA, Jelenic G, Mi Y, Zhong HG, Fan Z (1997) Some aspects of the non-linear finite element method. Finite Elem Anal Des 27:19–40
KiaPing Y (2008) A new family of generalized-alpha time integration algorithms without overshoot for structural dynamics. Earthq Eng Struc 37:1389–1409
Krenk S (2006) State-space time integration with energy control and fourth-order accuracy for linear dynamic systems. Int J Numer Meth Eng 65:595–619
Soroushian A (2008) A technique for time integration with steps larger than the excitation steps. Commun Numer Meth Eng 24:2087–2111
Zhou KK, Tamma KK (2004) A new unified theory underlying time dependent first-order systems: a prelude to algorithms by design. Int J Numer Meth Eng 60:1699–1740
Monro DM (1987) Fortran 77. Edward Arnold, UK
Paultre P (2010) Dynamics of structures. Wiley, USA
Young WC (1989) Roark’s formulas for stress and strain, 6th edn. McGraw-Hill, USA
Gaylord EH, Gaylord CN (1979) Structural engineering handbook, 2nd edn. McGraw-Hill, USA
Soroushian A, Wriggers P, Farjoodi J (2009) Asymptotic upper-bounds for the errors of Richardson extrapolation with practical application in approximate computations. Int J Numer Meth Eng 80:565–595
Gear CW (1971) Numerical initial value problems in ordinary differential equations. Prentice-Hall, USA
Apostol TM (1967) Calculus, vol I. Wiley, USA
Spiegel MR (1971) Advanced mathematics for engineers and scientists. McGraw-Hill, USA
Henrici P (1962) Discrete variable methods in ordinary differential equations. Prentice-Hall, USA
Strikwerda JC (1989) Finite difference schemes and partial differential equations. Wadsworth & Brooks/Cole, USA
Lambert JD (1973) Computational methods in ordinary differential equations. Wiley, UK
Nau JM (1983) Computation of inelastic spectra. J Eng Mech-ASCE 109:279–288
Soroushian A, Wriggers P, Farjoodi J (2013) Practical integration of semi-discretized nonlinear equations of motion: proper convergence for systems with piecewise linear behaviour. J Eng Mech-ASCE 139:114–145
Soroushian A, Farjoodi J (2002) An improvement in nonlinear analysis. In: Proceedings of 15th engineering mechanics division conference, ASCE (EM2002), New York, USA
Mahin SA, Lin J (1983) Construction of inelastic response spectra for single degree-of-freedom systems. Report UCB/EERC-83/17, Earthquake Engineering Research Center (EERC), University of California, Berkeley, USA
Rossi DF, Ferreira WG, Mansur WJ, Calenzani AFG (2014) A review of automatic time-stepping strategies on numerical time integration for structural dynamic analysis. Eng Struct 80:118–136
Ralston A, Rabinowitz P (1978) First course in numerical analysis. McGraw-Hill, USA
Noble B, Daniel JW (1977) Applied linear algebra. Prentice Hall, USA
Soroushian A (2010) Pseudo convergence and its implementation in engineering approximate computations. In: Proceedings of 4th international conference from scientific computing to computational engineering (IC-SCCE 2010), Athens, Greece
Soroushian A (2014) Equivalence between convergence and pseudo convergence when algorithmic parameters do not change geometrically. In: Proceedings of 6th international conference from scientific computing to computational engineering (IC-SCCE 2014), Athens, Greece
Penry SN, Wood WL (1985) Comparison of some single-step methods for the numerical solution of the structural dynamic equation. Int J Numer Meth Eng 21:1941–1955
Soroushian A, Wriggers P, Farjoodi J (2005) On practical integration of semi-discretized nonlinear equations of motion. Part 1: reasons for probable instability and improper convergence. J Sound Vib 284:705–731
Soroushian A, Wriggers P, Farjoodi J (2015) From the notions of nonlinearity tolerances towards a deficiency in commercial Transient Analysis softwares and its solution. In: Proceedings of 5th ECCOMAS thematic conference on computational methods in structural dynamics and earthquake engineering (COMPDYN 2015), Corfu, Greece
Xie YM, Steven GP (1994) Instability, chaos, and growth and decay of energy of time-stepping schemes for nonlinear dynamic equations. Commun Numer Meth Eng 10:393–401
Low KH (1991) Convergence of the numerical methods for problems of structural dynamics. J Sound Vib 150:342–349
Rashidi S, Saadeghvaziri MA (1997) Seismic modeling of multispan simply supported bridges using Adina. Int J Comput Struct 64:1025–1039
Soroushian A, Eshghi S (2006) The importance of appropriate selection of step size and nonlinearity tolerance in time history analyses. Report No. mlz-16/143, International Institute of Earthquake Engineering and Seismology (IIEES), Iran (in Persian)
Farjoodi J, Soroushian A (2003) Shortcomings in numerical dynamic analysis of nonlinear systems. Nashriyeh Fanni 37:269–281 (in Persian)
Soroushian A, Kermani AM, Chavan K, Ivanian A (2004) Responses’ convergence for time integration analyses involved in linearly-elastic/perfectly-plastic behaviour and impact. In: Proceedings of 6th world conference on computational mechanics (WCCM VI in conjunction with APCOM’04), Beijing, China
Amiri S (2015) On the performance of the existing recommendations for time step size selection in nonlinear analysis of conventional buildings. M.Sc. dissertation, International Institute of Earthquake Engineering and Seismology (IIEES), Iran (in Persian)
Richtmyer RD, Morton KW (1967) Difference methods for initial value problems. Wiley, USA
Chen WE (2000) Structural stability: from theory to practice. Elsevier, USA
Poorlatifi S (2008) A method to recognize numerical instability from physical instability. M.Sc. dissertation, International Institute of Earthquake Engineering and Seismology (IIEES), Iran (in Persian)
Soroushian A (2015) Development of an algorithm and computer program to evaluate the numerical stability and consistency of new time integration methods. Report 7517, International Institute of Earthquake Engineering and Seismology (IIEES), Iran (in Persian)
Fox L, Goodwin ET (1949) Some new methods for the numerical integration of ordinary differential equations. Math Proc Cambridge 45:373–388
Li Y, Wu B, Ou J (2006) Stability of average acceleration method for structures with nonlinear damping. Earthq Eng Eng Vib 5:87–92
Giese G (2005) Nonlinear stability analysis for the method of transport for the elastic-plastic wave equation. Siam J Numer Anal 42:2569–2589
Kardestuncer H (1987) Finite element handbook. McGraw Hill, USA
Paz M (1991) Structural dynamics: theory and application, 3rd edn. Van Nostrand Reinhold, USA
BHRC (2015) Iranian code of practice for seismic resistant design: standard no. 2800–. Building and Housing Research Center, Iran (in Persian)
Zhongzhi S, Naeim F (2001) The seismic design handbook. Kluwer, USA
Rezaee-Pazhand M, Moayeddian M, Khajeh-Karomoddin A (2000) Seismic analysis of structures. Emam Reza University, Iran (in Persian)
Bargi K (1994) Principles of earthquake engineering. Majed, Iran (in Persian)
Basoz NI, Kiremidjian AS, King SA, Law KH (1999) Statistical analysis of bridge damage data from the 1994 Northridge, CA, earthquake. Earthq Spectra 15:25–53
Ruge P (1999) A priori error estimation with adaptive time-stepping. Commun Numer Meth Eng 15:479–491
Soroushian A, Farjoodi J (2002) Error control for step-by-step solution of linear semi-discrete equations of motion. In: Proceedings of 5th European conference on structural dynamics, Eurodyn2002, Munich, Germany
Soroushian A, Farjoodi J (2003) More reliable responses for time integration analyses. Struct Eng Mech 16:219–240
McNamara JF (1974) Solution schemes for problems of nonlinear structural dynamics. J Press Vess-T ASME 96:149–155
NZS (2004) Structural design actions, Part 5: earthquake actions-New Zealand. New Zealand Standard, New Zealand
Soroushian A, Farjoodi J, Mehrazin H (2015) A new measure for the nonlinear behaviour of piece-wisely linear structural dynamic models. In: Proceedings of the 13th International Congress on Sound and Vibration (ICSV13), Vienna, Austria
Chen CC, Robinson AR (1993) Improved time-history analysis for structural dynamics. I: treatment of rapid variation of excitation and material nonlinearity. J Eng Mech-ASCE 119:2496–2513
Collatz L (1960) The numerical treatment of differential equations. Springer, Germany
Fish J, Belytschko Y (2009) A first course in finite elements. Wiley, UK
Soroushian A (2012) On the adequacy of integration step sizes recommended for nonlinear time integration. In: Proceedings of 5th international conference from scientific computing to computational engineering (5th IC-SCCE), Athens, Greece
Soroushian A (2010) Proper convergence a concept new in science and important in engineering. In: Proceedings of 4th international conference from scientific computing to computational engineering (IC-SCCE 2010), Athens, Greece
da Costa Villar G, Menezes JC, Donadon MV (2012) Accuracy and numerical stability of direct time integration schemes applied to rotordynamics. In: Proceedings of 12th Pan-American congress of applied mechanics (PACAM XII), Port of Spain, Trinidad
Kpodzo K, Fourment L, Lasne P, Montmitonnet P (2016) An accurate time integration scheme for arbitrary rotation motion: application to metal forming formulation. Int J Mater Form 9:71–84
Bonelli A, Bursi OS (2004) Generalized-α methods for seismic structural testing. Earthq Eng Struct D 33:1067–1102
BIS (2002) Criteria for earthquake resistant design of structures, Part 1. Bureau of Indian Standards, India
BIS Criteria for earthquake resistant design of structures (Part 4): industrial stack like structures. Bureau of Indian Standards, India
British Standard (2004) Eurocode 8: design of structures for earthquake resistance, Part 1: general rules, seismic actions and rules for buildings, UK
Turkish Earthquake Code (2007) Specification for buildings to be built in seismic zones. Ministry of Public Works and Settlement, Government of Republic of Turkey, Turkey
Greek Seismic Code (2000) Earthquake resistant design of structures. Earthquake Planning and Protection Organisation, Greece
Chinese Standard (2001) Code for seismic design of buildings [GB 50011-2001]. China Architectural and Building Press, China
Structural design actions part 5: earthquake actions–New Zealand commentary (Supplement to NZS 1170.5:2004) (2004) NZS 1170.5 Supp 1: 2004. Standards New Zealand, Wellington, New Zealand
American Society of Civil Engineers (2010) Minimum design loads for buildings and other structures. Amer Society of Civil Engineers, USA
The Building Standard Law of Japan (2004)
MOC-2008 (2008) Manual de diseño de obras civiles. Diseño por sismo. Recomendaciones y Comentarios. Instituto de Investigaciones Eléctricas, Comisión Federal de Electricidad, Mexico (in Spanish)
Tena-Colunga A, Mena-Hernández U, Pérez-Rocha L, Avilés J, Ordaz M, Vilar J (2009) Updated seismic design guidelines for model building code of Mexico. Earthq Spectra 25:869–898
Official Chilean Standard Nch 433 of 96 (1996) Earthquake resistant design of buildings. Instituto Nacional de Normalización INN, Chile (in English)
Rojas F, Lew M, Naeim F (2010) An overview of building codes and standards in Chile at the time of the 27 February 2010 Offshore Maule, Chile earthquake. Struct Des Tall Spec 19:853–865
Seismic Design Code, Part I (2006) Design rules for buildings. P100-1/2006, MTCT, Bucharest, Romania
Architecture Buildings Research Institute (ABRI) (2005) Recommended provisions for building seismic regulations. ABRI, Taiwan (in Chinese)
R.O.C. Construction and Planning Agency (2005) Seismic Design Code for Buildings in Taiwan. Ministry of the Interior, R.O.C, Taiwan
Mavroeidis GP, Papageorgiou AS (2003) A mathematical representation of near-fault ground motions. B Seismol Soc Am 93:1099–1131
Paz M (1994) International handbook of earthquake engineering: codes, programs, and examples. Chapman & Hall, USA
Ilie S, Söderlind G, Corless RM (2008) Adaptivity and computational complexity in the numerical solution of ODEs. J Complexity 341–361
Wanxie Z, Zhuang X, Zhu J (1998) A self-adaptive time integration algorithm for solving partial differential equations. Appl Math Comput 89:295–312
Lee SH, Hsieh SS (1990) Expedient implicit integration with adaptive time stepping algorithm for nonlinear transient analysis. Comput Meth Appl Mech Eng 81:151–172
Ilie S, Jackson KR, Enright WH (2015) Adaptive time-stepping for the strong numerical solution of stochastic differential equations. Numer Algorithms 68:791–812
Yaghoubi V, Khorsand Vakilzadeh M, Abrahamsson T (2014) Efficient simulation method for nonlinear structures: methodology and stability consideration. In: Proceedings of 9th international conference on structural dynamics & EURODYN 2014, Porto, Portugal
Liu Y, Yang H, Jiang C, Yang C (2016) A scalable fully implicit method with adaptive time stepping for unsteady compressible inviscid flows. Comput Struct 176:1–12
Ammanagi S, Manohar CS (2016) Adaptive time stepping in pseudo-dynamic testing of earthquake driven structures. B Earthq Eng 1–28
Papazafeiropoulos G, Plevris V, Papadrakakis M (2016) A generalized algorithm framework for non-linear structural dynamics. B Earthq Eng 1–31
Soroushian A (2011) Direct time integration with steps larger than the steps by which the excitations are digitized. Report 7510, International Institute of Earthquake Engineering and Seismology (IIEES), Iran (in Persian)
Askes H, Nguyen DC, Tyas A (2011) Increasing the critical time step: micro-inertia, inertia penalties and mass scaling. Comput Mech 47:657–667
Chang SY (2002) Integrated equations of motion for direct integration methods. Struct Eng Mech 13:569–589
Sabzei A, Reziakolaei AY, Soroushian A (2013) On more versatility for an integration step size enlargement technique in time integration analysis. In: Proceedings of 4th ECCOMAS thematic conference on computational methods in structural dynamics and earthquake engineering (COMPDYN 2013), Kos Island, Greece
Azad S (2015) A study on accelerating time history analysis of bridges. M.Sc. dissertation, International Institute of Earthquake Engineering and Seismology (IIEES), Iran (in Persian)
Soroushian A (2010) On the performance of a conventional accuracy controlling method applied to linear and nonlinear structural dynamics. In: Proceedings of 17th international congress on sound and vibration (ICSV17), Cairo, Egypt
Soroushian A, Aziminejad A (2011) A more efficient seismic analysis of tall buildings by implementing a recently proposed technique. In: Proceedings of 6th international conference seismology and earthquake engineering (SEE6), Tehran, Iran
Bahar O, Ramezani S (2011) Faster time integration analysis for building structures subjected to 3-component earthquakes. In: Proceedings of 3rd III ECCOMAS thematic conference on computational methods in structural dynamics and earthquake engineering (COMDYN 2011), Corfu, Greece
Nateghi F, Yakhchalian M (2011) On less computational costs for analysis of silos seismic behaviors by time integration. In: Proceedings of 3rd ECCOMAS thematic conference on computational methods in structural dynamics and earthquake engineering (COMPDYN 2011), Corfu, Greece
Nateghi F, Yakhchalian M (2011) An investigation into the effectiveness of a technique proposed for reducing computational cost of time integration in the analysis of silos seismic behaviors. In: Proceedings of 11th US national congress on computational mechanics (USNCCM 2011), Minneapolis, USA
Soroushian A, Saaed A, Arghavani M, Rajabi M, Sharifpour MM (2011) Less computational costs in the analysis of reservoirs seismic behaviours by time integration. In: Proceedings of 10th biennial conference on vibration problems (ICoVP-2011), Prague, Czech Republic
Soroushian A (2011) On the performance of a recent technique for more efficient time integration in severe seismic conditions. In: Proceedings of 1st international conference on advances in structural engineering and mechanics (ASEM’11), Seoul, South Korea
Papadrakakis M, Fragiadakis M, Plevris V (2013) Computational methods in earthquake engineering. Springer, USA
Bastami M (2014) A technique for more efficient time integration applied to seismic analysis of power substation equipment. In: Proceedings of 11th world congress on computational mechanics (WCCM XI), Barcelona, Spain
Sabzei A (2013) On the performance of a recent technique for seismic analyses computational cost reduction when applied to buildings structural systems. M.Sc. dissertation, International Institute of Earthquake Engineering and Seismology (IIEES), Iran (in Persian)
Reziakolaei AY, Sabzei A, Soroushian A (2013) On the performance of a structural analysis cost reduction technique when applied to residential buildings. In: Proceedings of 4th ECCOMAS conference on computational methods in structural dynamics and earthquake engineering (COMPDYN 2013), Kos Island, Greece
Garakaninezhad A, Moghadas RK (2015) On the performance of a technique to accelerate time integration when applied to space structures analyses. In: Proceedings of 5th ECCOMAS thematic conference on computational methods in structural dynamics and earthquake engineering (COMPDYN 2013), Kos, Greece, June
Hadad AA (2015) Reducing computational costs in time integration analyses of buildings with irregularities in height because of mass. M.Sc. dissertation, International Institute of Earthquake Engineering and Seismology (IIEES), Iran (in Persian)
Soroushian A, Jahani Mehrnoosh A, Zarabimanesh Y, Ghondaghsaz MH, Baiani A, Zakizade A (2016) On the performance of a computational cost reduction technique when applied to cooling towers transient analysis. In: Proceedings of 7th European congress on computational methods in applied sciences and engineering (ECCOMAS VII), Crete Island, Greece
Zarabimanesh Y. On a recent technique for reducing the computational costs in time integration analyses when applied to the analysis of Milad Tower. M.Sc. dissertation, International Institute of Earthquake Engineering and Seismology (IIEES), Iran (in Persian) (to be defended in 2017)
Soroushian A (2012) On the accuracy of accelerations in general implementation of a recently proposed seismic analysis computational cost reduction technique. In: Proceedings 5th international conference from scientific computing to computational engineering (5th IC-SCCE), Athens, Greece
Soroushian A, Garakaninezhad A, Yahyapour A, Asgarihadad A (2014) Performance of a computational cost reduction technique in lengthy time interval analyses. In: Proceedings of 11th world conference on computational mechanics (WCCM XI), Barcelona, Spain
Soroushian A, Farshadmanesh P, Azad S (2015) On the essentiality of techniques to enlarge analysis steps in time integration against digitized excitations. Int J Earthq Eng Seismolog (JSEE) 17:43–60
Majidi H (2015) Simplification of structural dynamic analyses by combining some existing approaches simplifying the records, the systems, and the analysis methods. M.Sc. dissertation, Islamic Azad University, South Tehran branch, Iran (in Persian)
Soroushian A (2015) A general practical procedure for a recently proposed seismic analysis computational cost reduction technique. In: Proceedings of 7th international conference on seismology and earthquake engineering (SEE7), Tehran, Iran
Saaed A (2012) A technique for faster seismic analysis of MDOF structural systems. M.Sc. dissertation, International Institute of Earthquake Engineering and Seismology (IIEES), Tehran, Iran (in Persian)
Soroushian A (2015) A new excitation step size enlargement technique for seismic analyses. In: Proceedings of 5th ECCOMAS thematic conference on computational methods in structural dynamics and earthquake engineering (COMPDYN 2015), Crete, Greece
Soroushian A, Hosseini M, Khalkhali SMH (2016) On the frequency content of errors originated in a time integration computational cost reduction technique. In: Proceedings of 7th European congress on computational methods in applied sciences and engineering (ECCOMAS VII), Crete Island, Greece
Zakizadeh A. Investigation on the role of the parameter representing the influence of the eliminated excitation stations in the performance of a transient analysis computational cost reduction technique. M.Sc. dissertation, International Institute of Earthquake Engineering and Seismology (IIEES), Iran (in Persian) (to be defended in 2016 or 2017)
Hosseini M, Mirzaei I (2012) Simplification of earthquake accelerograms for rapid time history analysis based on the impulsive load concept. In: Proceedings of the 10th international congress on computational mechanics, (WCCM X), Sao Paulo, Brazil
Soroushian A (2009) Time integration with step sizes less depending on the steps of excitation: I. SDOF systems. Report 7509-P89-1, International Institute of Earthquake Engineering and Seismology (IIEES), Iran (in Persian)
Rajabi M (2011) A new method for less expensive seismic analysis of SDOF structural systems. M.Sc. dissertation, International Institute of Earthquake Engineering and Seismology (IIEES), Iran (in Persian)
Sharifpour MM (2013) A method for faster time integration analysis of systems subjected to ground strong motions. M.Sc. dissertation, Islamic Azad University, West Azerbayjan Science and Research Branch, Iran (in Persian)
Zienkiewicz OC (2000) Achievements and some unsolved problems of the finite element method. Int J Numer Meth Eng 47:9–28
Acknowledgments
The detailed comments of the reviewers, which directly and indirectly led to many improvements in the chapter, are sincerely appreciated. The author is also grateful to Prof. Fereydoon Arbabi for his kind guidance regarding the English of the chapter, and to Dr. Hamid Zafarani for introducing a reference on seismology and directivity to the author. The feedback of Mr. George Papazafeiropoulos on different issues in the chapter is also sincerely acknowledged. Finally, the efforts of the type-setting team, specifically Mrs. Hema Suresh and Mr. Mohammad Ali, and the kind attention and guidance of the editors during the different stages of this chapter's preparation are sincerely acknowledged and deeply appreciated.
Copyright information
© 2017 Springer International Publishing AG
Cite this chapter
Soroushian, A. (2017). Integration Step Size and Its Adequate Selection in Analysis of Structural Systems Against Earthquakes. In: Papadrakakis, M., Plevris, V., Lagaros, N. (eds) Computational Methods in Earthquake Engineering. Computational Methods in Applied Sciences, vol 44. Springer, Cham. https://doi.org/10.1007/978-3-319-47798-5_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-47796-1
Online ISBN: 978-3-319-47798-5