
1 From Structural Analysis to Time Integration

Our lives and civilization rely on the many constructions we pass by daily. For buildings, bridges, tunnels, railways, and other infrastructure to be sustainable, the structures should be designed considering their lifetime behaviour. Even setting aside the randomness and stochastic nature of severe loading conditions and the complexity of predicting them, the study of structural behaviour under severe conditions is not simple. Theoretical and experimental approaches can both be addressed as means to study structural behaviour. Nevertheless, mainly owing to their simplicity, versatility, and low cost, numerical computations are the widely accepted and generally superior tool.

Although its roots reach back to the seventeenth century (i.e. Hooke's law), analysis of structural systems, as implemented in today's software packages, started less than a century ago, by relating behaviour and excitation via the structural properties, i.e.

$${\mathbf{KX}} = {\mathbf{F}}$$
(1)

In Eq. (1), X stands for the vector of unknowns, F represents the known external information (in general, the excitation), and the structural geometry, topology, and material are reflected in K. (In time-dependent problems, the initial status affects both K and F, and the solution implies the behaviour at a specific time instant.) In computerized structural analysis, the behaviour is represented by the unknown general displacement, the vector X in Eq. (1). For structural behaviour not representable by a finite number of unknowns, methods (mainly discretization methods) were developed in the middle of the past century to systematically replace continuous systems with systems having a finite number of unknowns or DOFs (degrees of freedom). The finite difference, finite volume, finite element, and boundary element methods are some of the major discretization methods. Development of these methods, specifically finite elements, and the invention of computers in the 1950s, led to a significant rise in the size and complexity of the structural systems being analyzed. For instance, the dependence of K and F in Eq. (1) on X, which causes nonlinearity, could be tackled simply by considering F incrementally and, when needed, implementing different definitions of strain and stress. Not much later, attention to time-dependent phenomena, including structural behaviour under earthquakes, aerodynamics, etc., increased significantly. In the study and analysis of arbitrary structural dynamic behaviour, the inertial forces, and in many cases the damping effects, are of high importance. The linear dynamic behaviour of semi-discretized structural systems can be expressed as [1–3]:

$$\begin{aligned} & {\mathbf{M}}{\ddot{\mathbf{u}}} + {\mathbf{C}}{\dot{\mathbf{u}}} + {\mathbf{Ku}} = {\mathbf{f}}(t)\quad 0 \le t < t_{end} \\ & {\mathbf{u}}(t = 0) = {\mathbf{u}}_{0} \\ & {\dot{\mathbf{u}}}(t = 0) = {\dot{\mathbf{u}}}_{0} \\ \end{aligned}$$
(2)

where M, C, and K respectively denote the mass, damping, and stiffness matrices, f stands for the dynamic excitation, each over-dot implies one differentiation with respect to time, u is the unknown general displacement, t represents the time, \({\mathbf{u}}_{0}\) and \({\dot{\mathbf{u}}}_{0}\) introduce the initial conditions, and \(t_{end}\) is the length of the time interval under consideration. In 1950, the first broadly accepted method to solve Eq. (2) in a step-by-step manner, also addressed as a time integration method, was proposed by J.C. Houbolt [4–6] (see Fig. 1, where \({\mathbf{f}}_{\text{int}}\) implies the internal force, essential in the presence of material nonlinearity and discussed later in this chapter). The history of step-by-step integration of initial value problems goes back to the eighteenth century and the Euler method [7, 8] (see also [9], for the implementation of different Runge–Kutta methods in structural dynamics); Houbolt suggested the first method specifically dedicated to the solution of the equation of motion in structural dynamics [4]. For the Houbolt method, as well as many other time integration methods, the basic idea is to approximate the solution of Eq. (2), with sufficient accuracy, while avoiding complex mathematical functions, e.g. exp, sinh, cosh. The general approach is to carry out the analysis in a step-by-step manner, using simple relations instead of the exact, computationally complex ones [10, 11]. The new relations are generally linear algebraic equations, set so as to maintain the adequacy of some, mainly accuracy-related, features. On this basis, many time integration methods have been proposed in the past decades [4–6, 12–17]; still, investigations into better approximate methods are in progress, in different disciplines and branches of engineering and science; see [18–26]. The most practically important and broadly accepted time integration methods are the Newmark family [1, 9–11, 27–32], central difference [1, 9–11, 14, 28–32], Wilson-θ [9, 10, 15–17, 32], Houbolt [4–6], HHT [12, 30–32], and C-H [13] methods. All these methods convert Eq. (2) to (see also Eq. (1)):

$${\mathbf{K}}_{eff} {\mathbf{ u}} = {\mathbf{f}}_{eff}$$
(3)

or equivalently to the equation below:

$${\bar{\mathbf{K}}}_{eff} \Delta {\mathbf{u}} = \Delta {\mathbf{f}}_{eff}$$
(4)

(considering the dynamic effect in \({\mathbf{K}}_{eff}\), \({\mathbf{f}}_{eff}\), \({\bar{\mathbf{K}}}_{eff}\), and \(\Delta {\mathbf{f}}_{eff}\)), to be solved for the status \(\left( {{\mathbf{u}},{\dot{\mathbf{u}}},{\ddot{\mathbf{u}}}, \ldots\,{\text{and}}\,{\mathbf{f}}_{\text{int}} } \right)\) at sequential integration stations, starting from the initial conditions; see Fig. 1. (\(\Delta {\mathbf{u}}\) and \(\Delta {\mathbf{f}}_{eff}\) respectively represent the increment, or better the increase, of u and \({\mathbf{f}}_{eff}\) from the previous integration station to the current one.) Equation (4) can be treated in a way conceptually identical to Eq. (1). However, because of the approximations in the time integration formulation, the results are inexact, with errors depending on the integration formulation. This, plus the versatility of time integration analysis, has prompted broad and continuous studies on time integration, e.g. see [19–26, 33–38]. In the next section, after a brief review of time integration analysis, the parameters, and specifically the most important parameter, i.e. the integration step size, are discussed. In Sect. 3, the influence of the integration step size on different features of time integration is studied. In Sects. 4 and 5, conventional and modern comments and approaches for assigning adequate values to the integration step size are addressed. Later, in Sects. 6 and 7, techniques for more efficient analysis against digitized excitations are reviewed, and the seemingly most successful technique is introduced in detail. Finally, the chapter is concluded in Sect. 8 with a brief look at the key points and at some of the remaining challenges in time integration and step size selection.
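To make the step-by-step reduction of Eq. (2) to the algebraic system of Eq. (3) concrete, the following minimal Python sketch implements one common member of the cited families, the Newmark method (average acceleration for β = 1/4, γ = 1/2), for a linear system with a constant step size; the function name and interface are illustrative assumptions, not taken from any specific software package.

```python
import numpy as np

def newmark_linear(M, C, K, f, dt, u0, v0, beta=0.25, gamma=0.5):
    """Linear time integration with the Newmark family (average
    acceleration for beta=1/4, gamma=1/2): at every station, Eq. (2)
    is reduced to the algebraic system K_eff u = f_eff of Eq. (3).
    f has shape (n_steps + 1, n_dof): one row per integration station."""
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, f[0] - C @ v - K @ u)      # initial acceleration
    # effective stiffness: constant for a linear system with constant dt
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    history = [u.copy()]
    for i in range(1, f.shape[0]):
        f_eff = (f[i]
                 + M @ (u / (beta * dt**2) + v / (beta * dt)
                        + (0.5 / beta - 1.0) * a)
                 + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                        + dt * (0.5 * gamma / beta - 1.0) * a))
        u_new = np.linalg.solve(K_eff, f_eff)         # Eq. (3) at station i
        a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
                 - (0.5 / beta - 1.0) * a)
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        history.append(u.copy())
    return np.array(history)
```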

Fig. 1

Typical process of time integration analysis

2 Time Integration and Its Parameters

As implied in Sect. 1, time integration is the most versatile tool to analyze Eq. (2) and its nonlinear counterpart, stated below [31]:

$$\begin{aligned} & {\mathbf{M}}{\ddot{\mathbf{u}}} + {\mathbf{f}}_{\text{int}} = {\mathbf{f}}(t)\quad 0 \le t < t_{end} \\ & {\mathbf{u}}(t = 0) = {\mathbf{u}}_{0} \\ & {\dot{\mathbf{u}}}(t = 0) = {\dot{\mathbf{u}}}_{0} \\ & {\mathbf{f}}_{\text{int}} (t = 0) = {\mathbf{f}}_{{\text{int}_{0} }} \\ & {\mathbf{Q}} \\ \end{aligned}$$
(5)

In Eqs. (5), the new parameter, Q, implies additional restrictions representing the nonlinear behaviour (see [39, 40]), and the essentiality of \({\mathbf{f}}_{\text{int}}\) as an initial condition is explained in [41, 42]. In the analysis process, the status, or indeed the responses, i.e. \({\mathbf{u}},{\dot{\mathbf{u}}},{\ddot{\mathbf{u}}}, \ldots \,{\text{and}}\,{\mathbf{f}}_{\text{int}}\), are computed for distinct sequential time instants, starting from the station just after the initial conditions, using simple algebraic formulations (see Fig. 1). No eigen-solution is essential in ordinary direct time integration analysis. The determination of the status proceeds in a step-by-step manner, and at each time instant, the status of the structural system is determined approximately [10, 28, 31, 32]. In nonlinear analyses, after the computation associated with each step, the occurrence of nonlinearity is checked. When a nonlinearity is detected, some iterations are generally implemented (see Fig. 1), by methods such as Newton–Raphson, to decrease the error in modelling the nonlinearity (see [1, 2, 43–46]). The step-by-step procedure continues until the end of the integration interval, \(t_{end}\). Considering these, time integration is a simple step-by-step computational tool. The simplicity of the step-by-step computation and the algebraic formulation bring about positive and negative consequences; some are listed below [31, 32]:

  1. Approximation in the obtained responses.

  2. Versatility of application to different equations of motion, and even to initial value problems originating in other branches of engineering and science, e.g. see [1–3, 19–23, 26, 34–37].

  3. Existence of many different time integration methods, e.g. see [1, 11, 12, 14–17, 24–34, 47, 48].

  4. Considerably high computational cost [31, 32, 49].

The negative features are the approximation and the high computational cost (the first and fourth points above, respectively). The first is especially important; without sufficient accuracy, implementation of time integration analysis loses its justification. Provided acceptable accuracy, we are interested in: (1) low computational cost, where computational cost implies the runtime and the in-core storage of the hardware involved in the computation [50, 51], and (2) the capability to increase the accuracy and/or decrease the computational cost. Other features of time integration analysis are also directly or indirectly associated with the inexactness of the response; numerical stability, order of accuracy, numerical damping, and overshoot are the most important [31–33, 52]. These features are under the control of some parameters, to which the remainder of this section is dedicated.

In exact computations, the parameters affecting the results imply real notions. For instance, in the computation of the integral below:

$$I = \int\limits_{a}^{b} {\frac{{d{\kern 1pt} x}}{{1 + x^{2} }}} = \arctan \,b - \arctan \,a$$
(6)

a and b are parameters with real meaning (see Fig. 2), appearing in the definition of the problem as well as in its exact solution; see the last part of Eq. (6). Similarly, when using the analytical relations below [53, 54]:

$$\begin{array}{*{20}l} {R_{A} = \frac{W}{{l^{3} }}\left( {l - a} \right)^{2} \left( {l + 2a} \right),} \hfill & {R_{B} = \frac{{Wa^{2} }}{{l^{3} }}\left( {3l - 2a} \right)} \hfill \\ {M_{A} = \frac{ - Wa}{{l^{2} }}\left( {l - a} \right)^{2} ,} \hfill & {M_{B} = \frac{{ - Wa^{2} }}{{l^{2} }}\left( {l - a} \right)} \hfill \\ \end{array}$$
(7)

for determining the end moments and end shears of the beam in Fig. 3, l, a, and W are real parameters, defining the problem and its exact solution. Returning to time integration and Eqs. (2) and (5), the initial conditions, the mass, the damping, the stiffness, the excitations, the parameters defining the nonlinear behaviour (implied specifically in Q in Eqs. (5)), and finally \(t_{end}\), are the real parameters.
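As a small illustration that l, a, and W alone define the exact solution, the relations of Eq. (7) can be evaluated directly; the numeric values below are hypothetical, chosen only so the result can be checked against vertical equilibrium and the symmetric case a = l/2.

```python
def fixed_fixed_beam(W, l, a):
    """End reactions and moments of the fixed-fixed beam of Fig. 3
    under a transverse load W at distance a from end A, per Eq. (7)."""
    RA = W / l**3 * (l - a)**2 * (l + 2.0 * a)
    RB = W * a**2 / l**3 * (3.0 * l - 2.0 * a)
    MA = -W * a / l**2 * (l - a)**2
    MB = -W * a**2 / l**2 * (l - a)
    return RA, RB, MA, MB

# sanity checks for W = 10, l = 4, a = l/2 (hypothetical values)
RA, RB, MA, MB = fixed_fixed_beam(10.0, 4.0, 2.0)
assert abs(RA + RB - 10.0) < 1e-12            # R_A + R_B = W
assert abs(MA + 10.0 * 4.0 / 8.0) < 1e-12     # M_A = -W l / 8 at midspan
```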

Fig. 2

An illustration of Eq. (6)

Fig. 3

A fixed-fixed two dimensional beam

There are many problems for which analytical solutions either have not been derived (are as yet unavailable) or are mathematically/numerically complicated, e.g. problems with analytical solutions in terms of special infinite series. To solve these problems, numerical approximate computation is the broadly accepted tool [7, 8, 55]. In approximate computations, besides the real parameters, there exist parameters, essential (and even crucial) for the computation, that have no real meaning and no role in the definition of the problem and its exact solution; see [55, 56]. For instance, when using Simpson or trapezoidal integration for determining the value of I in Eq. (6), as apparent in the relations below [57, 58]:

$$\begin{aligned} {\text{Simpson:}}\quad I & = \frac{h}{6}\left\{ {y_{0} - y_{N} + 2\sum\limits_{i = 1}^{N} {\left( {2\,y\left( {\frac{{x_{i} + x_{i - 1} }}{2}} \right) + y_{i} } \right)} } \right\} \\ {\text{Trapezoidal:}}\quad I & = \frac{h}{2}\left\{ {y_{0} - y_{N} + 2\sum\limits_{i = 1}^{N} {y_{i} } } \right\} \\ h & = \frac{b - a}{N},\quad y_{i} = y\left( {x_{i} } \right) = \frac{1}{{1 + x_{i}^{2} }} \\ \end{aligned}$$
(8)

the total number of integration steps, N, or equivalently the integration step size, h, is an additional parameter. Although the division of the integration interval into integration steps can be displayed in Fig. 2, the number of divisions, N, and the integration step size, h, are not real parameters of the problem, and do not affect the exact solution. These parameters, which play a main role in the computation and no role in the definition of the problems and their exact solutions, are generally addressed as analysis or algorithmic parameters [55, 56]. It is meanwhile worth noting that it is conventional to define/redefine the algorithmic parameters such that convergence [59, 60] of the approximate solutions to the exact solutions can be studied in the neighbourhood of zero values of the algorithmic parameter [49, 55, 56, 61]. As implied above, in the numerical integrations addressed in Eq. (8), h (or N) is the algorithmic parameter. In static analysis of the beam in Fig. 3 by specific finite elements, the size or number of the elements is a parameter defining the computation and its accuracy [1, 30]. Accordingly, considering the basics of finite elements [1, 2], we can address the element size as the algorithmic parameter.
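The status of h (or N) as an algorithmic parameter can be made tangible with a short sketch of Eq. (8); the routines below are illustrative implementations written for this chapter, not library calls, and halving h reduces the error of I without h ever appearing in the definition of the problem or in its exact solution.

```python
import numpy as np

def trapezoidal(y, a, b, N):
    """Composite trapezoidal rule of Eq. (8); h = (b - a) / N."""
    x = np.linspace(a, b, N + 1)
    h = (b - a) / N
    return h / 2 * (y(x[0]) + y(x[-1]) + 2.0 * y(x[1:-1]).sum())

def simpson(y, a, b, N):
    """Composite Simpson rule with midpoints, as written in Eq. (8)."""
    x = np.linspace(a, b, N + 1)
    mid = 0.5 * (x[:-1] + x[1:])
    h = (b - a) / N
    return h / 6 * (y(x[0]) + y(x[-1]) + 2.0 * y(x[1:-1]).sum()
                    + 4.0 * y(mid).sum())

y = lambda x: 1.0 / (1.0 + x**2)
a, b = 0.0, 1.0
exact = np.arctan(b) - np.arctan(a)               # Eq. (6)
for N in (4, 8, 16):   # halving h cuts the error ~4x (trap.) and ~16x (Simpson)
    print(N, abs(trapezoidal(y, a, b, N) - exact),
          abs(simpson(y, a, b, N) - exact))
```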

In time integration analysis, the main analysis (algorithmic) parameters are the sizes of the integration steps throughout the integration interval, considered either constant (as one parameter) or adaptive (according to a specific criterion). It is a broad convention to consider the integration time step \(\Delta {\kern 1pt} t\) as a parameter linearly controlling the sizes of integration steps throughout the integration interval [49]. In view of Eqs. (2) or (5) and Fig. 1, \(\Delta {\kern 1pt} t\) is a parameter with no effect on the exact solution, but a main role in the time integration analysis. For nonlinear problems, there exist algorithmic parameters in addition to \(\Delta {\kern 1pt} t\), affecting the features of the analysis, again independent of the problem and its exact response. These parameters are:

  1. The nonlinearity continuation method and the special parameters to be set for implementation of the nonlinearity iterations [1, 2, 44–46, 62–64].

  2. The nonlinearity tolerance [1, 44–46, 63, 64], \(\bar{\delta }\), as an indicator of the accuracy of the nonlinearity iterations.

  3. The maximum number of iterations (e.g. see [63–65]), as a representative of the available computational facilities, essential in view of the ever-present round-off.

We would rather address the above parameters as nonlinearity analysis or nonlinearity algorithmic parameters. (Practically, the second, i.e. \(\bar{\delta }\), is the most important.) Besides \(\Delta {\kern 1pt} t\) and the nonlinearity analysis parameters, the parameters precisely defining the integration method are among the algorithmic parameters of time integration and can be addressed as method algorithmic parameters. These parameters do not exist for all time integration methods. For instance, the equations governing the one-step Houbolt method [4–6], stated below:

$$\begin{aligned} & {\mathbf{M}}{\ddot{\mathbf{u}}}_{i} + {\mathbf{f}}_{{\text{int}_{i} }} = {\mathbf{f}}_{i} \\ & \quad {\mathbf{u}}_{i} = {\mathbf{q}}_{0} + b_{0} \,\Delta {\dddot {\mathbf{u}}}_{i - 1} \\ & \quad {\dot{\mathbf{u}}}_{i} = {\mathbf{q}}_{1} + b_{1} \,\Delta {\dddot {\mathbf{u}}}_{i - 1} \\ & \quad {\ddot{\mathbf{u}}}_{i} = {\mathbf{q}}_{2} + b_{2} \,\Delta {\dddot {\mathbf{u}}}_{i - 1} \\ & \quad {\dddot {\mathbf{u}}}_{i} = {\mathbf{q}}_{3} + b_{3} \,\Delta {\dddot {\mathbf{u}}}_{i - 1} \\ & {\mathbf{q}}_{0} = {\mathbf{u}}_{i - 1} + {\dot{\mathbf{u}}}_{i - 1} \Delta t + \frac{1}{2}{\ddot{\mathbf{u}}}_{i - 1} \Delta t^{2} + \frac{1}{6}{\dddot {\mathbf{u}}}_{i - 1} \Delta t^{3} \\ & {\mathbf{q}}_{1} = {\dot{\mathbf{u}}}_{i - 1} + {\ddot{\mathbf{u}}}_{i - 1} \Delta t + {\dddot {\mathbf{u}}}_{i - 1} \frac{{\Delta t^{2} }}{2} \\ & {\mathbf{q}}_{2} = {\ddot{\mathbf{u}}}_{i - 1} + {\dddot {\mathbf{u}}}_{i - 1} \Delta t \\ & {\mathbf{q}}_{3} = {\dddot {\mathbf{u}}}_{i - 1} \\ & b_{0} = \Delta t^{3} ,\quad b_{1} = \frac{11}{6}\Delta t^{2} ,\quad b_{2} = 2\Delta t,\quad b_{3} = 1,\quad \quad i = 1, 2, 3, \ldots \\ \end{aligned}$$
(9)

are not parametric (for \(\dddot {\mathbf{u}}_{0}\), see [6]), while the relations defining the HHT method [12, 30–32], stated below:

$$\begin{aligned} & {\mathbf{M}}{\ddot{\mathbf{u}}}_{i} + \left( {(1 + \alpha ){\mathbf{f}}_{{\text{int}_{i} }} - \alpha \,{\mathbf{f}}_{{\text{int}_{i - 1} }} } \right) = \left( {(1 + \alpha ){\mathbf{f}}_{i} - \alpha \,{\mathbf{f}}_{i - 1} } \right) \\ & {\dot{\mathbf{u}}}_{i} = {\dot{\mathbf{u}}}_{i - 1} + \Delta t\left( {(1 - \gamma ){\ddot{\mathbf{u}}}_{i - 1} + \gamma {\ddot{\mathbf{u}}}_{i} } \right) \\ & {\mathbf{u}}_{i} = {\mathbf{u}}_{i - 1} + \Delta t\,{\dot{\mathbf{u}}}_{i - 1} + \Delta t^{2} \left( {\left( {\frac{1}{2} - \beta } \right){\ddot{\mathbf{u}}}_{i - 1} + \beta \,{\ddot{\mathbf{u}}}_{i} } \right),\quad \quad i = 1, 2, 3, \ldots \\ \end{aligned}$$
(10)

depend on \(\alpha\), \(\beta\), and \(\gamma\), and hence, \(\alpha\), \(\beta\), and \(\gamma\) are the method parameters of the HHT time integration method, precisely defining the integration method, and highly affecting the approximate response, with no effect on Eqs. (2) or (5) and the exact response. (Inequality restrictions on these parameters, for issues such as numerical stability (e.g. see [30, 31]), do not reduce the number of these parameters.) Consequently, time integration analysis, according to a specific time integration method, while implementing a specific nonlinearity continuation method [44], potentially depends on three groups of analysis parameters:

  1. 1.

    The main analysis parameter: \(\Delta {\kern 1pt} t\) (or parameters defining the sizes of integration steps in adaptive time integration; see [66]).

  2. 2.

    Nonlinearity analysis parameter: \(\bar{\delta }\) and some less important parameters, e.g. maximum number of nonlinearity iterations.

  3. 3.

    Method analysis parameters: parameters completing the definition of the integration methods.

Of the above parameters, the main ones, controlling different features of the analyses (specifically the computational cost and the accuracy-related features, e.g. numerical stability), are \(\Delta {\kern 1pt} t\) and \(\bar{\delta }\) (setting aside the method parameters for the moment).
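To illustrate the method algorithmic parameters discussed above, the sketch below implements the HHT relations of Eq. (10) for an undamped linear system. Tying β and γ to α as β = (1 − α)²/4 and γ = 1/2 − α is the usual choice for second-order accuracy and is an assumption of this sketch, as are the function name and interface.

```python
import numpy as np

def hht_linear(M, K, f, dt, u0, v0, alpha=-0.1):
    """HHT time integration of Eq. (10) for an undamped linear system
    (f_int = K u); alpha is the method parameter, here with the usual
    companions beta = (1 - alpha)^2 / 4 and gamma = 1/2 - alpha."""
    beta = 0.25 * (1.0 - alpha)**2
    gamma = 0.5 - alpha
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, f[0] - K @ u)
    A_eff = M + (1.0 + alpha) * beta * dt**2 * K   # constant effective matrix
    history = [u.copy()]
    for i in range(1, f.shape[0]):
        q = u + dt * v + dt**2 * (0.5 - beta) * a  # displacement predictor
        rhs = ((1.0 + alpha) * f[i] - alpha * f[i - 1]
               - (1.0 + alpha) * K @ q + alpha * K @ u)
        a_new = np.linalg.solve(A_eff, rhs)
        u = q + beta * dt**2 * a_new
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        a = a_new
        history.append(u.copy())
    return np.array(history)
```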

With regard to accuracy, since implementation of an approximate method implies that the exact solution is at least not simply available, the accuracy, instead of being studied in view of the definition of error, E [67], i.e.

$$E = \left\| {{\mathbf{U}}^{a} - {\mathbf{U}}} \right\|$$
(11)

(a as a right superscript implies that the argument is an approximation and \(\left\| { } \right\|\) denotes an arbitrary norm [68]) needs to be evaluated indirectly, in terms of convergence [1, 7, 30–32, 56, 59–61, 67], i.e.

$$\begin{array}{*{20}c} {\mathop {\lim }\limits_{{\Delta {\kern 1pt} t \to 0}} E = 0} & \equiv & {\mathop {\lim }\limits_{{\Delta {\kern 1pt} t \to 0}} {\mathbf{U}}^{a} = {\mathbf{U}}} \\ \end{array}$$
(12)

For an arbitrary time integration analysis, Eq. (12) is theoretically equivalent to Fig. 4, as well as Fig. 5. \(E_{i}\) stands for the error of the response \({{\mathbf{U}}_{i}^{a} }\), obtained from time integration analysis with step sizes equal to \(\Delta {\kern 1pt} {\kern 1pt} t_{\,i}\); L and \(L^{{\prime }}\) denote the lengths of the regions in the two plots implying decrease of error with positive integer slopes (not precisely defined and determinable yet); and \(D_{i}\) is defined below:

$$D_{i} = \frac{{\left\| {{\mathbf{U}}_{i}^{a} - {\mathbf{U}}_{i - 1}^{a} } \right\|}}{{\left( {\frac{{\Delta t_{i - 1} }}{{\Delta t_{i} }}} \right)^{q} - 1}}$$
(13)

and addressed as the pseudo-error [69, 70]. For many problems, the convergence of \(D_{i}\) to zero is equivalent to Eq. (12). Similarly, Figs. 4 and 5 are equivalent in the sense that either both \(E_{i}\) and \(D_{i}\) imply convergence to zero or neither does. Furthermore, either the convergence trends in both the convergence and pseudo-convergence plots (displayed in Figs. 4 and 5, respectively) are straight lines (with positive integer slopes towards zero errors/pseudo-errors at zero \(\Delta {\kern 1pt} t\)) or neither displays such a trend. Moreover, when both trends are lines with positive integer slopes, the slopes in the two plots are identical and equal to the order of accuracy (in cases, less than the order of accuracy [32, 49, 71]); see [55, 69, 70]. In Eq. (13), q is a positive integer introducing the rate of convergence (see Figs. 4 and 5), and \({\mathbf{U}}_{i - 1}^{a}\) and \({\mathbf{U}}_{i}^{a}\) are named such that \(\Delta {\kern 1pt} {\kern 1pt} t_{i - 1} > \Delta {\kern 1pt} {\kern 1pt} t_{i} > 0\); see [70]. In nonlinear analyses, for maintaining Fig. 4 (with \(L > 0\)) and specifically to ensure the equivalence between Figs. 4 and 5 and Eq. (12), or even the validity of

$$\begin{array}{*{20}c} {\mathop {\lim }\limits_{\begin{subarray}{l} \Delta {\kern 1pt} t \to 0 \\ {\kern 1pt} \bar{\delta } \to 0 \end{subarray} } E = 0} & \equiv & {\mathop {\lim }\limits_{\begin{subarray}{l} \Delta {\kern 1pt} t \to 0 \\ {\kern 1pt} \bar{\delta } \to 0 \end{subarray} } {\mathbf{U}}^{a} = {\mathbf{U}}} \\ \end{array}$$
(14)

we can assign very small values (depending on the problem), or values consistent with \(\Delta {\kern 1pt} t\), to \(\bar{\delta }\) [2, 42, 63, 72].
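A minimal sketch of the pseudo-error of Eq. (13) is given below; it assumes the two responses are sampled at comparable stations and that q is the expected rate of convergence.

```python
import numpy as np

def pseudo_error(U_coarse, U_fine, dt_coarse, dt_fine, q):
    """Pseudo-error D_i of Eq. (13): a convergence indicator that needs
    no exact solution; requires dt_coarse > dt_fine > 0."""
    return (np.linalg.norm(U_fine - U_coarse)
            / ((dt_coarse / dt_fine) ** q - 1.0))

# Typical use: repeat the analysis with dt, dt/2, dt/4, ... and plot D_i
# against dt on logarithmic axes; a straight line of slope q towards zero
# pseudo-error at zero dt indicates convergence at rate q (Figs. 4 and 5).
```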

Fig. 4

Typical changes of errors for converging approximate solutions

Fig. 5

A substitute for the study of convergence via Fig. 4

Regarding computational cost, in analysis of a specific linear problem, by a specific time integration method, and on a specific computer, the computational cost depends on the total number of integration steps; and since \(t_{end}\) is a constant value known in advance, the computational cost increases when \(\Delta {\kern 1pt} t\) decreases. This conflicts with accuracy, for which it is beneficial to assign smaller values to \(\Delta {\kern 1pt} t\) (while larger values of \(\Delta {\kern 1pt} t\) are beneficial for computational cost). Considering this, and the fact that, unlike \(\bar{\delta }\), \(\Delta {\kern 1pt} t\) affects both linear and nonlinear analyses, the remainder of this chapter is dedicated to \(\Delta {\kern 1pt} t\) and the approaches for selecting it (see also [72, 73]).

3 Integration Step Size and Its Influence on Analysis Quality

As the major algorithmic parameter of time integration, the integration step size or \(\Delta \,t\) affects almost all features of arbitrary linear or nonlinear time integration analysis. The features under study in this section are accuracy, convergence, order of accuracy, stability, artificial damping, overshoot, and computational cost.

The accuracy can generally be increased by reducing the integration step size. “Smaller integration steps lead to more accuracy” is a practically general rule (see Fig. 6 and Tables 1 and 2), with the potential to be violated in analysis of nonlinear or complex behaviours (see Fig. 7 and Table 3; the units are all S.I., and each dash-dot-dash centreline, marked with CL, is associated with one mass). The nonlinear structural model is set such that the exact solution can be simply derived; see Fig. 8 [63]. Disagreement with the above-mentioned general rule corresponds to \(L = 0\) in Figs. 4 and 5. Meanwhile, Tables 2 and 3 show that the amount of error also depends on the response under consideration. In analysis of nonlinear structural dynamic systems, the details of the iterative nonlinearity analysis can significantly affect the accuracy, as well as the changes of accuracy with respect to the integration step size; see [63, 72–78]. No general rule seems to exist for the changes of \(\Delta {\kern 1pt} t\) and \(\bar{\delta }\), or other analysis parameters (addressed in Sect. 2), guaranteeing more accuracy for arbitrary nonlinear or complex analysis; some comments exist for piecewise-linear systems, e.g. see [42, 63]. (Accordingly, the responses computed for nonlinear problems or problems with complex behaviour are in general unreliable [42, 63, 72–80].)

Fig. 6

An example to display the general rule of less error in analysis with smaller integration step: a structural system, b ground acceleration

Table 1 Characteristics of the undamped structural system introduced in Fig. 6
Table 2 Changes of error with respect to step size in average acceleration time integration analysis [27] of the system introduced in Fig. 6 and Table 1
Fig. 7

An example to display the possibility of more error in time integration analysis with smaller integration step [63]

Table 3 Changes of error with respect to the step size in average acceleration analysis [27] of the system introduced in Fig. 7, where the continuation method is fractional time stepping [62, 65], \(\bar{\delta } = 10^{ - 2}\), and 0.1 and 5 are the scaling factor and the maximum number of iterations, respectively
Fig. 8

Mid 50s of the exact response for the system introduced in Fig. 7

Regarding convergence and order of accuracy, in view of the Lax–Richtmyer equivalence theorem [59, 60, 81], for well-posed problems [31, 32, 81] (including almost all real engineering problems), convergence is equivalent to consistency plus numerical stability. Consistency implies that the order of accuracy is not less than one [31, 32], and the order of accuracy of a time integration method is the highest rate by which the responses computed by the integration method converge to the exact response; see also [7, 31, 32, 49, 56]; it is equivalently definable in terms of local truncation errors [31, 32]. Numerical stability can be defined as the capability of time integration methods to lead to responses (for physically stable problems [82]) that do not diverge, even after an arbitrarily large number of integration steps [1, 30–33, 52, 59, 60, 81, 83]. Therefore, it is reasonable to expect convergence to be influenced by \(\Delta {\kern 1pt} t\) via numerical stability and order of accuracy.

Taking into account terms like “conditionally stable”, the effect of \(\Delta {\kern 1pt} t\) on numerical stability might be crucial [1, 10, 11, 28–33, 52]. Concentrating on one-step time integration methods (recommended in the literature [30–32]), for SDOF (single-degree-of-freedom) systems, the free vibration time integration computation can be expressed as stated below (multi-step methods can in many cases be rewritten as one-step methods, e.g. the central difference and Houbolt methods [4–6, 32]):

$$\left[ {\begin{array}{*{20}c} u \\ {\dot{u}\,\Delta \, t} \\ {{\ddot u}\,\Delta \, t^{2} } \\ \vdots \\ {u^{(\alpha )} \,\Delta \, t^{\alpha } } \\ \end{array} } \right]_{i} = {\mathbf{A}}\left[ {\begin{array}{*{20}c} u \\ {\dot{u}\,\Delta \, t} \\ {{\ddot u}\,\Delta \, t^{2} } \\ \vdots \\ {u^{(\alpha )} \,\Delta \, t^{\alpha } } \\ \end{array} } \right]_{i - 1} ,\quad i = 1, 2, 3 \ldots$$
(15)

where \(\it \alpha\) implies the highest order of time differentiation in the one-step integration, e.g. \(\alpha = 3\) for the one-step Houbolt method [4–6], and \(\alpha = 2\) for the HHT method [12, 30–32, 84]; the right subscripts outside the brackets apply to u, the temporal derivatives of u, and \(\Delta {\kern 1pt} t\) inside the brackets; i denotes the number of the integration step under study; and A is the amplification matrix, with members depending on the natural angular frequency, \(\omega\), the integration step size, \(\Delta {\kern 1pt} t_{i}\), and the viscous damping ratio, \(\xi\) [31, 32], i.e.

$${\mathbf{A}} = \left[ {A_{j,k} \left( {\omega ,\xi ,\Delta {\kern 1pt} t_{i} } \right)} \right]$$
(16)

Numerical stability is provided when the spectral radius [68] of A is not more than one throughout the analysis, i.e.

$$\rho = {\text{Max}}\left( {{\kern 1pt} \left| {\lambda_{{{\kern 1pt} 1}} } \right|,{\kern 1pt} \left| {\lambda_{{{\kern 1pt} 2}} } \right|, \ldots \left| {\lambda_{{{\kern 1pt} \alpha }} } \right|} \right) \not > 1$$
(17)

In Eq. (17), \(\rho\) stands for the spectral radius, and \(\lambda_{i = 1,2, \ldots \alpha }\) implies the i-th eigenvalue of A (a real or complex number [57]). In more detail, for numerical stability, the absolute values of the eigenvalues of A are to be less than one when their multiplicity is more than one, and less than or equal to one when not repeated [29–33, 52]. The discussion is valid for forced vibrations and MDOF (multi-degree-of-freedom) systems, when considering all natural modes separately [1, 29–32]. In practice, it is conventional to study numerical stability based on the changes of the spectral radius with respect to \(\omega \,\Delta {\kern 1pt} t\) (where \(\omega\) and \(\Delta {\kern 1pt} t\) respectively imply an arbitrary natural frequency of the structural system and the constant step size in the analysis), preferably for different values of \(\xi\); see [29–33, 52, 57] and Fig. 9. In Fig. 9, T stands for an arbitrary natural period of the MDOF system, \(T = 2\pi \omega^{ - 1}\) [31–33, 52], and the step sizes are considered constant throughout the integration interval. (For non-proportionally damped MDOF systems, Eq. (15) can be considered directly for the whole MDOF system, based on which the remainder of the discussion remains unchanged; see [84].) Consequently, \(\rho < 1\) or \(\rho \le 1\) is necessary and sufficient for the stability of linear analyses, and is necessary for the stability of nonlinear time integration analyses; see [33, 63, 72]. The outcome restricts \(\omega {\kern 1pt} {\kern 1pt} \Delta {\kern 1pt} t\), generally leading to:

Fig. 9

Changes of spectral radius with respect to \(\frac{{\Delta {\kern 1pt} t}}{T}\) for several time integration methods: a Houbolt [4–6], b Central difference [14], c Average acceleration [27], d Linear acceleration [10, 27], e Wilson-\(\theta\) [15–17] \((\theta = 1.42)\), f Quasi-Wilson-\(\theta\) [24] \((\theta = 1.5)\), g HHT [12] \((\alpha = - 0.3)\), h Fox–Goodwin [85], i C–H [13] \((\rho_{\infty } = 0.8)\)

$$\begin{array}{*{20}c} {\Delta {\kern 1pt} t < \Delta {\kern 1pt} t_{cr} (\omega ,\xi )} & {\text{or}} & {\Delta {\kern 1pt} t \le \Delta {\kern 1pt} t_{cr} (\omega ,\xi )} \\ \end{array}$$
(18)

where \(\Delta \,t_{cr}\) stands for the integration step size corresponding to \(\rho = 1\); see Fig. 9b, d, h. In view of the Lax–Richtmyer equivalence theorem [32, 59, 60, 81, 86, 87], Eq. (18) needs to be satisfied in order to maintain response stability and convergence. Accordingly, unconditional stability (for linear analyses) is at times recommended [30–33, 52], and many conventional time integration methods are unconditionally stable, i.e.

$$\forall \omega ,\;\forall \xi :\quad \Delta {\kern 1pt} t_{cr} \to \infty$$
(19)

Consequently, and as displayed in Eq. (18) and Fig. 9, the effect of the integration step size on numerical stability can be summarized as: the smaller the integration step size, the better the chance that the analysis is numerically stable. In view of the Lax–Richtmyer equivalence theorem [32, 59, 60, 81, 86, 87], and since the order of accuracy is a constant value, computable theoretically and independent of the integration step size [32, 60], a similar claim sounds reasonable for the influence of the integration step size on convergence. Nevertheless, with attention to the inequality sign in Eq. (18) and the definition of convergence in Eqs. (12) and (14), convergence is independent of \(\Delta {\kern 1pt} t\), except when \(\Delta {\kern 1pt} t_{cr} = 0\), i.e. the unconditionally unstable case. The trend of convergence, however, depends on \(\Delta {\kern 1pt} t\); see Figs. 4 and 5 and the Taylor series [57] correspondence between convergence and these figures [30–32, 55]. From the standpoint of the Lax–Richtmyer equivalence theorem, these different effects of \(\Delta {\kern 1pt} t\) on convergence and numerical stability, while the order of accuracy is unaffected by \(\Delta {\kern 1pt} t\), may raise questions. To avoid ambiguity, it is worth noting that in numerical determination of the order of accuracy [84], we need to carry out the time integration analyses with steps smaller than \(\Delta \,t_{cr}\). (Sufficient smallness of \(\Delta {\kern 1pt} t\) is also implied in the theoretical computation of the order of accuracy [32, 60].) Meanwhile, though the order of accuracy is conceptually independent of numerical stability, restrictions exist that relate the order of accuracy to numerical stability, depending on the number of steps involved in the computation for each new station, e.g. the Dahlquist barriers [32].
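The spectral-radius test of Eqs. (15)–(17) can be sketched numerically. Below, for an SDOF system and the Newmark family (chosen as an assumed example), the amplification matrix A is assembled column by column by propagating the unit vectors of the scaled state through one step, and ρ follows from its eigenvalues.

```python
import numpy as np

def spectral_radius(omega, xi, dt, beta=0.25, gamma=0.5):
    """Spectral radius of the Newmark amplification matrix A of
    Eqs. (15)-(16) for a free SDOF system with frequency omega and
    damping ratio xi; A is built by stepping the unit state vectors
    of the scaled state (u, v*dt, a*dt^2)."""
    A = np.zeros((3, 3))
    for k, state in enumerate(np.eye(3)):
        u, v, a = state[0], state[1] / dt, state[2] / dt**2
        q = u + dt * v + dt**2 * (0.5 - beta) * a
        r = v + dt * (1.0 - gamma) * a
        # free vibration equilibrium at the new station:
        # a' + 2 xi w (r + gamma dt a') + w^2 (q + beta dt^2 a') = 0
        a_new = -(2.0 * xi * omega * r + omega**2 * q) / (
            1.0 + 2.0 * xi * omega * gamma * dt + omega**2 * beta * dt**2)
        u_new = q + beta * dt**2 * a_new
        v_new = r + gamma * dt * a_new
        A[:, k] = (u_new, v_new * dt, a_new * dt**2)
    return max(abs(np.linalg.eigvals(A)))

# For average acceleration (beta=1/4, gamma=1/2) and xi=0, rho stays 1 for
# every dt/T: unconditional stability with no artificial damping (Fig. 9c).
T = 1.0
for ratio in (0.01, 0.1, 1.0, 10.0):
    print(ratio, spectral_radius(2.0 * np.pi / T, 0.0, ratio * T))
```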

Models resulting from discretization in space, by methods such as finite elements, differ from the original continuous models, specifically in the higher modes of oscillation [30–32, 88]. More precisely, though piecewise-exact analyses [10, 11] can lead to exact responses of Eq. (2), these so-called exact responses differ from the exact responses of the PDE (partial differential equation) models prior to the semi-discretization. The difference can be considerable in the higher modes of oscillation. A way to omit or reduce these errors is to somehow eliminate the higher modes with trivial contributions to the response, and to time integrate the lower modes with sufficient accuracy. An assumption in this approach is the existence of higher modes with trivial contributions to the response; this is a valid assumption at least in many real structural systems [10, 31, 32, 89–91]. Numerical or artificial damping is the capability of some time integration methods to control the errors of semi-discretization [1, 11, 28–33, 52]. Artificial damping provides the capability to eliminate the higher modes and analyze the lower modes with sufficient accuracy (in direct time integration of the total structural system). The time step size affects artificial damping. If, in view of the modal description of linear proportionally damped MDOF systems, we concentrate on an arbitrary SDOF system, the amplitude of the plot of the spectral radius of A, i.e. \(\rho\), with respect to \(\omega \,\Delta {\kern 1pt} t\) (or equivalently \(\frac{{\Delta {\kern 1pt} t}}{T}\)), inversely represents the capability to damp out the higher modes (see Fig. 9) [30–32]. In more detail, when \(\rho < 1\), smaller values of \(\rho\) imply more elimination of the higher-frequency oscillations [29–31] (\(\rho = 0\) implies complete elimination). The general trend displayed in Fig. 9, and the presented explanation regarding greater elimination at larger values of \(\omega \,\Delta {\kern 1pt} t\), are valid for MDOF and SDOF linear systems, considering the modes under consideration (and the corresponding values of \(\omega\) or T) separately. However, the numerical details differ between time integration methods and values of viscous damping, and meanwhile, are differently desired for different behaviours with different contributions of the higher modes of oscillation [30–33, 52]. With these considerations, and specifically from Fig. 9, provided proper artificial damping, values of \(\Delta {\kern 1pt} t\) larger than essential for damping out the higher modes may lead to the elimination of lower modes (see Fig. 9a, e, g, i; the last when slightly damped physically). (Proper artificial damping implies guaranteed greater numerical damping for higher values of \(\frac{{\Delta {\kern 1pt} t}}{T}\), addressed here as proper artificial damping for the first time.) In other words, by assigning larger/smaller values to the integration step size, more/fewer oscillatory modes (starting from the highest) will be affected and eliminated. This can entail undesired inaccuracy. The presented discussion is strictly valid only for linear analysis of proportionally damped MDOF systems. Nevertheless, for many practical applications (e.g. seismic analysis), the nonlinear behaviour is of piecewise-linear type (e.g. linear-elastic/perfectly-plastic and pounding [10, 28, 78, 91–94]), and meanwhile, proportional damping is a broadly accepted assumption [28, 52].
Accordingly, expressions such as \(\omega \,\Delta {\kern 1pt} t\), the spectral radius \(\rho\), and artificial damping can be defined/considered in a piecewise manner. Therefore, in many practical cases, depending on the selection of the parameters of the nonlinearity analysis and the severity of the nonlinear behaviour, we can use the linear theory of artificial damping to build up an idea about the artificial damping in analysis of nonlinear structural systems. Still, in nonlinear as well as linear analyses, special attention should be paid to the selection of the parameters of the integration method controlling artificial damping. Alternatively, and even preferably, the results are to be checked for accuracy [7, 28, 55, 95–97], also to prevent elimination of the important lower modes from the final response.

Overshoot is the tendency of integration methods to cause significant errors in the few steps after the start of the oscillations or after abrupt changes of the status or the excitation. Accordingly, smaller integration step sizes would likely cause less error originating in overshoot; see [30–33, 52].

Regarding computational cost, assigning smaller values to the integration step size, without changing the computer (computational facility), increases the number of integration steps, while the computational cost per integration step remains unchanged. Accordingly, the runtime, the total usage of the in-core memory, and hence the computational cost, \(C_{C}\), will increase, i.e. considering \(\propto\) as a sign for “dependence”,

$$C_{C} \propto \Delta {\kern 1pt} t^{ - 1}$$
(20)

In more detail, in linear time integration analysis of an arbitrary system, with equally sized integration steps, 

$$\frac{{C_{{C_{1} }} }}{{C_{{C_{2} }} }} = \frac{{\Delta {\kern 1pt} t_{2} }}{{\Delta {\kern 1pt} t_{1} }}$$
(21)

where \(C_{{C_{1} }}\) and \(C_{{C_{2} }}\) denote the computational costs of two arbitrary analyses (on one computer and disregarding the pre- and post-processing), with steps equal to \(\Delta {\kern 1pt} t_{1}\) and \(\Delta {\kern 1pt} t_{2}\) \(\left( {\Delta \,t_{1} \ne \Delta \,t_{2} } \right)\), respectively. In other words,

$$C_{{C_{1} }} \Delta t_{1} = C_{{C_{2} }} \Delta t_{2} = \cdots = C = {\text{Const}}.$$
(22)

where C is a positive constant, representing a scale of the computational cost per integration step, \(C_{C}^{*}\) (C can also be defined as the computational cost of the analysis carried out with integration steps equal to one), i.e.

$$C_{C}^{*} = \frac{{C_{C} }}{N} = \frac{{\Delta {\kern 1pt} t\,C_{C} }}{{t_{end} }} = \frac{C}{{t_{end} }}$$
(23)

N stands for the total number of integration steps, \(C_{C}\) is the associated computational cost of the analysis, and \(t_{end}\) is defined in Eq. (2). The computational cost associated with an integration step, \(C_{C}^{*}\), depends on the semi-discretized model, the computational facility (i.e. how powerful the computer is), and the integration method. Unlike accuracy (including stability) and overshoot, for whose sake we prefer to assign smaller values to the integration step size, for reducing the computational cost it is beneficial to time integrate with larger steps; the case is in between when talking about artificial damping.

In an arbitrary nonlinear analysis, it is essential to check for the occurrence of nonlinearity after determination of the status at each integration station. When nonlinearity is detected, appropriate changes should be implemented in the characteristics of the system [see Q in Eq. (5)]. Furthermore, and before the changes, it is conventional to localize the nonlinearity by implementing some nonlinearity iterations [2, 42–46]. Accordingly, Eqs. (21) and (22) are not valid in time integration analysis of nonlinear systems. Even without nonlinearity iterations, because of the essentiality of the status checks and characteristic changes, it is reasonable to consider

$$\begin{aligned} C_{{C_{1} }} & \cong \tilde{C}_{{C_{1} }} + n_{{NL_{\,1} }} C_{Q} \\ C_{{C_{2} }} & \cong \tilde{C}_{{C_{2} }} + n_{{NL_{\,2} }} C_{Q} \\ \end{aligned}$$
(24)

In Eq. (24), \(\tilde{C}_{{C_{1} }}\) and \(\tilde{C}_{{C_{2} }}\) imply computational costs, not including nonlinearity iterations and status change, i.e.

$$\tilde{C}_{{C_{{i = 1,{\kern 1pt} 2}} }} = {\left (C_{{C_{{i = 1,{\kern 1pt} 2}} }} \right )_\text{Linear}} + C_\text{Status Check} $$
(25)

\(n_{{NL_{{{\kern 1pt} 1}} }}\) and \(n_{{NL_{{{\kern 1pt} 2}} }}\) stand for the numbers of nonlinearity detections in analyses with steps sized \(\Delta {\kern 1pt} t_{1}\) and \(\Delta {\kern 1pt} t_{2}\), respectively, and \(C_{Q}\) is an indicator of the computational cost at a nonlinearity detection, averaged over all nonlinearities detected in the analysis, including the costs of the nonlinearity iterations and the change of status. It is worth noting that the independence of \(C_{Q}\) from \(\Delta {\kern 1pt} t\) is a reasonable assumption, implemented in Eqs. (24) and leading to the approximation signs there.

A special case occurs when the nonlinear behaviour is piecewise linear (e.g. linear-elastic/perfectly-plastic behaviour, impact, simple friction) [63], and no nonlinearity iteration is implemented. In this case,

$$n_{{NL_{1} }} \cong n_{{NL_{2} }} \cong n_{NL}$$
(26)

and in view of Eqs. (24) and (26) and provided analysis with equally sized steps,

$$\frac{{C_{{C_{1} }} }}{{C_{{C_{2} }} }} \cong \frac{{\tilde{C}_{{C_{1} }} + x}}{{\tilde{C}_{{C_{2} }} + x}},\quad x = n_{NL} C_{Q} > 0$$
(27)

Taking into account that \(\tilde{C}_{{C_{i = 1,2} }} > 0\), and the fact that in view of Eq. (25), similar to Eq. (21),

$$\frac{{\tilde{C}_{{C_{1} }} }}{{\tilde{C}_{{C_{2} }} }} = \frac{{{\Delta {\kern 1pt} t_{2} }}} {{{\Delta {\kern 1pt} t_{1} }}}$$
(28)

Eq. (27) implies that, in analysis of piecewise-linear systems on a specific computer, when we do not implement nonlinearity iterations and \(C_{Q}\) is sufficiently smaller than \(\tilde{C}_{{C_{i = 1,2} }}\), the computational cost resists changes due to \(\Delta {\kern 1pt} t\). In other words, in the special case addressed above,

$$\left| {\frac{{C_{{C_{2} }} - C_{{C_{1} }} }}{{C_{{C_{1} }} }}} \right| < \left| {\frac{{\Delta {\kern 1pt} t_{2} - \Delta {\kern 1pt} t_{1} }}{{\Delta {\kern 1pt} t_{2} }}} \right|$$
(29)

(the above-mentioned smallness of \(C_{Q}\) is generally valid for implicit analyses [1, 30, 31], recommended for many real nonlinear dynamic analyses; see [1, 43]). Another special case occurs when nonlinearities are detected at almost all integration steps, regardless of the integration step size, and no nonlinearity iteration is implemented. In this case, provided analysis with equally sized steps,

$$\begin{aligned} C_{{C_{1} }} & = \tilde{C}_{{C_{1} }} + \frac{{t_{end} }}{{\Delta {\kern 1pt} t_{1} }}C_{Q} \\ C_{{C_{2} }} & = \tilde{C}_{{C_{2} }} + \frac{{t_{end} }}{{\Delta {\kern 1pt} t_{2} }}C_{Q} \\ \end{aligned}$$
(30)

and, in view of Eq. (28),

$$\frac{{C_{{C_{1} }} }}{{C_{{C_{2} }} }} = \frac{{\Delta {\kern 1pt} t_{2} }}{{\Delta {\kern 1pt} t_{1} }}$$
(31)

comparable with Eq. (21). Equation (31) implies that, in a nonlinear analysis with equally sized steps and nonlinearities detected at all integration steps, if we do not implement nonlinearity iterations, the changes of computational cost with respect to the integration step size are very similar to those of linear analyses. The discussion above seems new, presented here for the first time. Accordingly, further study is surely essential, though not pursued here for the sake of brevity. Extension of the discussion to general nonlinear behaviour/analysis is recommended for future research.

4 Practical Comments for Integration Step Size Selection

The most conventional and broadly accepted comment for the selection of integration step sizes, specifically when the steps are equally sized, is as stated in Eq. (32) [1, 10, 28, 49, 66, 97–99]:

$$\Delta {\kern 1pt} t = {\text{Min}}\left( {\Delta {\kern 1pt} t_{cr} ,{}_{f}\Delta {\kern 1pt} t,\Delta {\kern 1pt} t_{d} ,\frac{{T_{r} }}{\chi }} \right)$$
(32)

The new parameters are defined below:

\({}_{f}\Delta {\kern 1pt} t\): Step size by which the excitation is digitized (\({}_{f}\Delta {\kern 1pt} t = \infty\) when the excitation is continuous)

\(\Delta {\kern 1pt} t_{d}\): Largest step size at which we accept to obtain the history of the response (generally unimportant)

\(T_{r}\): Smallest period of oscillations with considerable contribution to the response

\(\chi\): A factor, ranging from 10 (or even 5) in simple linear analyses to 1000 in analyses involving impact, severe nonlinearity, complex or mathematically stiff behaviour, etc., such that \(\frac{{T_{r} }}{\chi }\) turns out to be an integration step size sufficient for accuracy

The definitions of \(\Delta {\kern 1pt} t_{cr}\) [see Eq. (18)], \({}_{f}\Delta {\kern 1pt} t\), and \(\Delta {\kern 1pt} t_{d}\) are clear. However, the definitions of \(T_{r}\) and \(\chi\) are somewhat imprecise and vague; furthermore, serious arguments can be made about the typology of Eq. (32) and the computation of \(\Delta {\kern 1pt} t_{cr}\). This section is dedicated to these ambiguities.

The typology of Eq. (32) has five major deficiencies. First, with the exception of period elongation and amplitude decay [1, 10, 29, 33, 52], no theoretical link seems to exist between \(\frac{{T_{r} }}{\chi }\) and accuracy, and period elongation and amplitude decay cannot well explain the role of \(\frac{{T_{r} }}{\chi }\) in Eq. (32); see [1, 29–32]. Secondly, the integration step size should be set such that the higher, erroneous modes of oscillation can be eliminated with negligible additional inaccuracy in the lower modes (when Eq. (5) is obtained from discretization in space). Accordingly, in view of Fig. 9, artificial damping and the origin of Eq. (5) would better be included in Eq. (32). Alternatively, since the details of Fig. 9 differ between integration methods, the integration method needs to be taken into account in Eq. (32); as a third alternative, the obtained responses are to be controlled, also for adequate selection of the parameters. None of these approaches seems to be properly addressed in Eq. (32) (or its implementation). In addition, the existing ambiguities about the notion of small and large modes highlight the ambiguities about the role of artificial damping in Eq. (32). Thirdly, a deficiency in the typology of Eq. (32) is that, when the excitation is available as a digitized record (i.e. \({}_{f}\Delta {\kern 1pt} t\) is finite) and the consequence of Eq. (32) is such that:

$$\Delta {\kern 1pt} t\, < {}_{f}\Delta {\kern 1pt} t$$
(33)

it is not simple to carry out the time integration analysis with values of \(\Delta {\kern 1pt} t\) satisfying Eqs. (32) and (33). A supplementary practical equation, to be satisfied while taking Eq. (33) into account, is stated below:

$$\Delta {\kern 1pt} t = \frac{{{}_{f}\Delta {\kern 1pt} t}}{m},\quad m \in Z^{ + } - \{ 1\}$$
(34)

An approach to considering Eqs. (32) and (34) simultaneously, seemingly addressed for the first time in this chapter, is to replace Eq. (32) with the following (\(\Delta {\kern 1pt} t^{\prime}\) is used merely for the computation of \(\Delta {\kern 1pt} t\)):

$$\begin{aligned} \Delta {\kern 1pt} t^{\prime} & = {\text{Min}}\left( {\Delta {\kern 1pt} t_{cr} ,\Delta {\kern 1pt} t_{d} ,\frac{{T_{r} }}{\chi }} \right) \\ \Delta {\kern 1pt} t & = \left\{ {\begin{array}{*{20}l} {{}_{f}\Delta {\kern 1pt} t} \hfill & {\text{when}} \hfill & {\Delta {\kern 1pt} t^{\prime}\; \ge {}_{f}\Delta {\kern 1pt} t} \hfill \\ {\frac{{{}_{f}\Delta {\kern 1pt} t}}{m}} \hfill & {\text{when}} \hfill & {\exists m \in Z^{ + } - \{ 1\} ,} \frac{{{}_{f}\Delta {\kern 1pt} t}}{m} \le \Delta {\kern 1pt} t^{\prime} < \frac{{{}_{f}\Delta {\kern 1pt} t}}{m - 1} \end{array}} \right. \\ \end{aligned}$$
(35)
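A minimal sketch of Eq. (35) is given below; the function name and the numeric example are illustrative assumptions.

```python
import math

def step_size_eq35(dt_cr, dt_f, dt_d, T_r, chi):
    """Integration step size per Eq. (35): take the minimum candidate
    dt' of the stability/accuracy terms, then snap it down to an
    integer fraction of the digitization step dt_f; pass
    dt_f = math.inf for a continuous excitation."""
    dt_prime = min(dt_cr, dt_d, T_r / chi)
    if math.isinf(dt_f) or dt_prime >= dt_f:
        return min(dt_prime, dt_f)
    m = math.ceil(dt_f / dt_prime)   # smallest m with dt_f / m <= dt'
    return dt_f / m

# e.g. a record digitized at 0.02 s with T_r = 0.3 s: for chi = 10,
# dt' = 0.03 >= 0.02 gives dt = 0.02; for chi = 100, dt' = 0.003 and
# dt = 0.02 / 7 (since 0.02/7 <= 0.003 < 0.02/6).
print(step_size_eq35(math.inf, 0.02, math.inf, 0.3, 100))
```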

The fourth deficiency in the typology of Eq. (32) originates in \(\chi\). In fact, besides the nonlinearity and its type, it is essential to take into account the severity of the nonlinear behaviour. As a simple example, the impact between two undamped single-degree-of-freedom systems can be neglected when the velocities are about zero at all instants of impact. This leads to the negligibility of the nonlinear behaviour. The case is completely different when the impacts occur at considerable velocities. The difference between these two cases (and in general between cases with different severities of a specific type of nonlinear behaviour) seems not to be taken into account in Eq. (32). Towards a replacement for Eq. (32), attention can be paid to the discussions reported in the literature on nonlinearity quantification and measurement, e.g. see [100].

Finally, the fifth deficiency in the typology of Eq. (32) is that, though the nonlinear behaviour and its complexity affect Eq. (32) via \(\chi\), the effect of the nonlinearity analysis parameters (e.g. the tolerance, \(\bar{\delta }\)) on the accuracy is disregarded. Two approaches to overcome this deficiency are to take the value of \(\Delta {\kern 1pt} t\) into account while assigning values to the nonlinearity parameters (see [42, 63, 64, 79]), or alternatively, to control the errors after the time integration analysis [28, 55, 95–97]. The latter might be unexpectedly costly.

In practical implementation of Eqs. (32) or (35), there is no ambiguity about the values to be assigned to \({}_{f}\Delta {\kern 1pt} t\) and \(\Delta {\kern 1pt} t_{d}\). However, \(\Delta {\kern 1pt} t_{cr}\) is affected by damping, and we cannot guarantee that disregarding viscous damping is necessarily on the safe side of numerical stability, i.e. results in larger values of \(\Delta {\kern 1pt} t_{cr}\) [30–33, 52]. Without a safe-side assumption, \(\Delta {\kern 1pt} t_{cr}\) needs to be computed considering the amount of viscous damping in the different natural modes. The smallest \(\Delta {\kern 1pt} t_{cr}\), not necessarily associated with a specific mode, would then control the numerical stability. The computation is not only complicated and computationally expensive (for several reasons, including determination of the natural frequencies and the corresponding viscous dampings), but the eigen-solution is also in conceptual contradiction with the nature of direct time integration. The deficiency is highlighted in the presence of nonlinearity, where the natural frequencies change throughout the integration interval. With the safe-side assumption, independent of the amount of viscous damping, the natural mode causing the smallest \(\Delta {\kern 1pt} t_{cr}\), mostly the last natural mode, would control the numerical stability in Eqs. (32) and (35) (see Eq. (18) and the existing conditionally stable methods [1, 10, 11, 15, 24, 28–33, 52]). Furthermore, if the help of viscous damping to numerical stability is guaranteed, the definition of unconditional stability in Eq. (19) can be changed to:

$$\forall \omega \quad :\quad \Delta {\kern 1pt} t_{cr} \to \infty$$
(36)

The above discussion, and assigning an adequate value to \(\Delta {\kern 1pt} t_{cr}\) in Eqs. (32) and (35), are more complex in the presence of non-proportional damping; in view of the versatility of time integration in analysis of non-proportionally damped systems [28–33], this complexity is indeed a practical drawback. Considering these points, it is essential to emphasize once again the existing recommendation not to use integration methods with finite \(\Delta {\kern 1pt} t_{cr}\) [30–32] (when possible, regardless of the type of damping), which simplifies Eq. (35) to:

$$\begin{aligned} \Delta {\kern 1pt} t^{\prime} & = {\text{Min}}\left( {\Delta {\kern 1pt} t_{d} ,\frac{{T_{r} }}{\chi }} \right) \\ \Delta {\kern 1pt} t & = \left\{ {\begin{array}{*{20}l} {{}_{f}\Delta {\kern 1pt} t} \hfill & {\text{when}} \hfill & {\Delta {\kern 1pt} t^{\prime}\; \ge {}_{f}\Delta {\kern 1pt} t} \hfill \\ {\frac{{{}_{f}\Delta {\kern 1pt} t}}{m}} \hfill & {\text{when}} \hfill & {\exists m \in Z^{ + } - \{ 1\} ,} \frac{{{}_{f}\Delta {\kern 1pt} t}}{m} \le \Delta {\kern 1pt} t^{\prime} < \frac{{{}_{f}\Delta {\kern 1pt} t}}{m - 1} \end{array}} \right. \\ \end{aligned}$$
(37)

A seemingly last and most crucial deficiency in Eqs. (32), (35), and (37) is in the notion of \(T_{r}\). Theoretically, \(T_{r}\) implies the smallest period with considerable contribution to the response [49, 66, 101]. The expression considerable contribution is vague, and besides, while the response is not at hand before the analysis, how can we determine the periods of the oscillations? Furthermore, even if the response could somehow be predicted, no specific comment seems to exist regarding determination of the value of \(T_{r}\), and the computation of the oscillatory modes is computationally expensive. To partly overcome these shortcomings, we can compute \(T_{r}\) by using the comments on the natural modes with considerable contribution to the response, if existing (e.g. see [90, 91, 99]). This approach, though it determines \(T_{r}\) independently of the response, lacks sufficient theoretical justification.

A practical way (in cases costly) to lessen the accuracy-related shortcomings, including those originating in \(T_{r}\) and \(\chi\), is as follows: over-estimate \(T_{r}\) (in view of the low cost of the computation, no special approach is essential for the over-estimation), assign the value obtained from Eqs. (32), (35), or (37) to \(\Delta {\kern 1pt} t\), carry out a first analysis, repeat the analysis with halved steps, and compare the two responses; if the difference is not sufficiently small (the error of the response is of the size of the difference), once again repeat the analysis with halved steps, and eventually stop the repetitions when the difference is negligible. Considering that such repetitions are recommended in the literature on numerical solution of differential equations and on practical engineering applications, specifically structural dynamics [7, 28, 55, 99, 102, 103], and that considerable theoretical explanations exist for repetition-based accuracy control, e.g. see [42, 96, 97], it is reasonable to rely on these repetitions to compensate for the ambiguities and arrive at sufficient accuracy. Meanwhile, it is worth noting that implementation of the repetitions might be insufficient, because of probable improper convergence, in problems with complex oscillatory behaviour, specifically those involving nonlinearity [42, 74–76, 78, 104, 105]. Implementation of more advanced error-control methods can provide more reliability, e.g. see [55].
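The repetition-based accuracy control described above can be sketched as follows; analyze(dt) stands for any user-supplied routine (an assumption of this sketch) returning the response history computed with step dt, sampled so that halving dt doubles the number of stations.

```python
import numpy as np

def converge_by_halving(analyze, dt0, tol, max_halvings=10):
    """Run the analysis, halve the step, and stop once two successive
    responses agree within tol (the error is of the size of the
    difference); raises if the halvings run out, which may indicate
    the improper convergence mentioned in the text."""
    dt = dt0
    prev = analyze(dt)
    for _ in range(max_halvings):
        dt /= 2.0
        curr = analyze(dt)
        # compare at the stations of the coarser analysis
        diff = (np.linalg.norm(curr[::2] - prev)
                / max(np.linalg.norm(curr), 1e-30))
        if diff < tol:
            return curr, dt, diff
        prev = curr
    raise RuntimeError("no convergence within the allowed halvings")
```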

5 Time Integration and Step Size Selection in Seismic Codes

The material presented in the previous sections is mostly theoretical, discussed in different branches of science and engineering; see [1–3, 17, 19–22, 26, 35, 37, 106–108]. In this section, attention is paid to seismic analysis and design, a very important and crucial research area, with direct and indirect effects on human lives and civilization. Accordingly, with attention to seismic activity, location on the world map, and issues such as how developed the codes/standards are, how developed the countries/regions are, how populated the countries/regions are, and finally the availability of the codes/standards to the author, the following seismic codes/standards are reviewed for issues on time integration and step size selection:

  1. National code/standard of India [109, 110].

  2. European code/standard [111].

  3. National code/standard of Turkey [112].

  4. National code/standard of Greece [113].

  5. National code/standard of China [114].

  6. National code/standard of New Zealand [109, 115].

  7. National code/standard of Iran [90].

  8. National code/standard of United States [116].

  9. National code/standard of Japan [117].

  10. National code/standard of Mexico [118, 119].

  11. National code/standard of Chile [120, 121].

  12. National code/standard of Romania [122].

  13. National code/standard of Taiwan [123, 124].

The footprints of time integration in the seismic codes/standards are investigated by directly searching for integration, time integration, time domain analysis, and time history analysis, or indirectly by searching for nonlinear analysis, non-proportional damping, non-classical damping, and provisions regarding analyses out of the scope of mode superposition, e.g. analysis of systems equipped with modern control devices providing non-proportional damping.

Time history analysis is time integration analysis against several ground motion records, with the results put together (according to a seismic code/standard) in order to arrive at a time history record for each response (or at the responses) to be used in seismic design. All of the seismic codes/standards, with the exception of the code/standard of Chile [120, 121], consider time history analysis (and time integration) as an analysis alternative. Some of the important considerations in the seismic codes/standards are briefly addressed in Table 4; the numbers in the last column stand for the seismic codes/standards, as listed at the start of this section. (Table 4 does not present all the related regulations; it attempts a brief overview.) Meanwhile, the codes/standards that in some cases consider time history analysis as the superior analysis tool are listed below:

Table 4 A brief look at some regulations regarding time history analysis in seismic codes
  • National code/standard of India [109, 110]: in stack-like industrial structures,

  • European code/standard [111]: when an isolation system may not be modelled with an equivalent linear method,

  • National code/standard of China [114]: for buildings taller than specific heights,

  • National code/standard of New Zealand [99, 115]: for long period structures and when the directivity effects (e.g. see [125]) can be significant,

  • National code/standard of Japan [117]: for high-rise buildings,

  • National code/standard of Romania [122]: similar to European code/standard,

where, for each code/standard, the cases in which time history analysis is recommended, or is the only seismic analysis tool, are stated after the code/standard. Finally, the code/standard commenting on the integration step size is the national code/standard of New Zealand [99, 115], while the code/standard explicitly addressing integration as the means for time history analysis is that of the United States [116]. The information above, and that in Table 4, is clear evidence of the advancement of the code/standard of New Zealand [109, 115] (the code/standard introduced with "6" in Table 4), from the point of view of time history analysis and, specifically, integration step size selection.

The comment of the national code/standard of New Zealand on integration step size (see Sect. 6.4.5 in [99]) is summarized in the equation below:

$$\Delta t \le {\text{Min}}\left( \Delta t_{cr} ,\; {}_{f}\Delta t ,\; \frac{T_{1}}{100} ,\; T_{n} ,\; 0.01 \right)$$
(38)
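Computationally, Eq. (38) is simply a governing minimum; a one-line sketch, with all inputs assumed known (symbols are explained next):

```python
def nz_step_bound(dt_cr, dt_f, T1, Tn):
    """Upper bound on the integration step per Eq. (38): the minimum of
    the stability limit, the excitation step, T1/100, Tn, and 0.01 s."""
    return min(dt_cr, dt_f, T1 / 100.0, Tn, 0.01)
```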

In Eq. (38), \(T_{1}\) is the largest translational period, judged by the largest mass contribution, in the direction of the principal component of the earthquake, and \(T_{n}\) denotes the period of the highest mode, in the same direction, required to achieve 90 % mass participation, as described in the modal response spectrum method [99]. The guidance of the commentary [115] regarding the implementation of Eq. (38) is as stated below:

The time step should generally be not greater than \(\frac{{T_{1} }}{100}\), where \(T_{1}\) is the period associated with the first mode of vibration. For analyses involving impact (building pounding, rocking walls, or uplifting foundations), the time step will need to be significantly lower and a starting value of \(\frac{{T_{1} }}{1000}\) is recommended. If convergence is not obtained with a particular time step, reduce it by a factor of 2 and re-run. Once convergence is obtained, make a further reduction and compare the peak results for the target response parameter. If they are within 5 %, the longer time step (which requires less computing running time) is satisfactory [115].

The above considerations, and specifically the guidance on the selection of integration step sizes, constitute a significant initiative in seismic regulations (the author has not encountered similar details in other seismic codes/standards; also see [126]); hence, the consideration in the code/standard of New Zealand [99, 115] is to be deeply appreciated and acknowledged. Still, some drawbacks and ambiguities seem to exist, and the remainder of this section is dedicated to them.

As the first ambiguity, since each time history analysis is composed of several time integration analyses, the cost of time history analysis is generally considerable, and the selection of the step size in the first analysis (before any repetition) is of high importance. It is unclear why the selection of the step size, and the way the step size decreases in the repetitions of the first analysis, disregard many features of the ground motion record (\({}_{f}\Delta t\) is an exception), as well as the nonlinearity analysis parameters (the nonlinear behaviour is only briefly taken into account, via \(\chi\)). To explain better: theoretically, depending on the excitation and on the linear or nonlinear behaviour, a system may oscillate at frequencies different from its first natural frequencies, leading to different requirements on the integration step size. This seems not well taken into account in Eq. (38); compare Eqs. (32) and (38). Furthermore, depending on the values assigned to the nonlinearity tolerances, proper convergence [105] of the analyses and reliable estimation of the errors can be considerably affected; e.g. see [42, 63, 78, 80]. Disregarding these issues in Eq. (38) may lead to additional repetitions, and accordingly considerable additional computational cost, and even, in cases, failure of the repetitions because of round-off.

The second ambiguity is that the theory backing the validity of the recommended accuracy control is not addressed in the code/standard or the supporting material [99, 115]. Specifically, it is worth noting that the partial theoretical backing, i.e. proper convergence (see Fig. 4 and [42, 63, 105]), may not be fulfilled in nonlinear time integration analysis; this is while the very purpose of the control in the code/standard of New Zealand is nonlinear analysis; see [42, 63, 72, 74, 75, 80, 104].

Finally, after repeating an analysis and comparing the two responses until their peaks are within 5 % difference, the seismic code/standard of New Zealand does not explicitly address the resulting response; it merely mentions that the response obtained using the larger step is satisfactory [115]. As clearly stated in [115], the reason for calling the response obtained with the larger integration step size satisfactory is that the other analysis is more costly. This implies that the comment is indeed to consider the response obtained from the analysis with larger steps as final. Since both analyses must be carried out before checking whether the responses are within 5 % difference, it seems reasonable to give priority to accuracy rather than computational cost, and to consider the response obtained from the analysis with the smaller step size as the final response. (The higher accuracy of the response obtained with smaller step sizes can be explained by the theory behind the error controlling approach; see [97].) Practically, ambiguities also exist regarding the 5 % difference, the notion of the 5 %, and the error control on the peaks, not discussed here for the sake of brevity.

6 Efficient Step Size Selection

As implied in the previous sections, \(\Delta t\) affects the accuracy and the computational cost in opposite ways; also see [30–33, 49, 52]. Therefore, the more the restrictions on \(\Delta t\) are relaxed, the better the computational cost and the accuracy can be balanced.

Methods have been developed that remove the requirement of equally sized integration steps throughout the integration interval, e.g. see [66, 127]. The resulting analysis is in general addressed as adaptive time stepping or adaptive time integration analysis. Adaptive time integration analysis starts with the selection of step sizes for the first or first few steps. After ordinary time integration of the starting steps, the analysis continues in a step-by-step manner; after each step (or each several steps), a pre-assigned criterion is examined to determine whether the sizes of the next steps need to be changed and, if a change is needed, the amount of the change; see [66]. This process continues till the end of the integration interval, i.e. \(t_{end}\) (see Eqs. (2) and (5)). Adaptive time integration analysis started in about the 1970s with the studies of Hibbitt and Karlsson, Oughourlian and Powell, Felippa and Park, Park and Underwood, and Underwood and Park (see the brief review reported in [66]), continued in the past decades [66, 127–129], and is in progress, specifically for nonlinear analyses, e.g. see [22, 130–132]. Returning to the process of adaptive time integration, as explained above, the implementation of a pre-assigned criterion is essential in arbitrary adaptive analysis. Some main bases for the criteria are as noted below [66, 127]:

  1. Errors at the integration stations, due to the integration approximation, associated with the last integration step; or, to say better, the amount of error at the end of the integration step, originated in the approximate integration formulation, assuming zero errors at the start of the step and linear behaviour throughout the step. This error is broadly known as the local truncation error [31, 32].

  2. Periods (or, equivalently, frequencies) of important oscillations in the response, at the integration steps, or, for large MDOF systems, the 'current characteristic frequency', defined based on expressions similar to the Rayleigh ratio [66].

  3. Complexity of the transient behaviour, defined based on a measure named the curvature of the response [66].

The computational costs associated with the implementation of the criterion and with the changes of the step size (e.g. new factorizations) negatively affect the efficiency of adaptive time integration. The significance of these effects depends on the size of the structural system, the complexity of the dynamic behaviour, the length of the integration interval \(\left[ 0,\; t_{end} \right]\), the adaptive time stepping criterion, and the time integration method. Consequently, from the standpoint of computational cost, analysis with adaptive time stepping is not necessarily superior to analysis with constant time steps.
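As a sketch only, the following is a standard error-based controller in the spirit of the first criterion above (local truncation error), with guards against drastic or overly frequent step changes; the tolerance, the bounds, and the order p are assumptions, not values from any of the cited methods:

```python
def next_step_size(dt, err_est, tol, p=2, safety=0.9, fac_min=0.5, fac_max=2.0):
    """Scale the next integration step from an estimate of the local
    truncation error of the last step, for a method of accuracy order p;
    fac_min/fac_max prevent very slight or very frequent changes."""
    if err_est == 0.0:
        return dt * fac_max              # error negligible: enlarge the step
    factor = safety * (tol / err_est) ** (1.0 / (p + 1))
    return dt * min(fac_max, max(fac_min, factor))
```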

For the implementation of the criteria and of adaptive time stepping, the sizes of the starting steps, and some analysis parameters, should be set adequately and in advance; parameters preventing very slight or very frequent changes of the integration step size are examples [66]. These selections complicate the implementation of adaptive time integration, compared to constant time stepping. It is also worth noting that, with adaptive time stepping, the computational cost of the analyses cannot be predicted, whereas, with constant time stepping, the prediction is simple in linear analyses. Considering these points, constant and adaptive time stepping are both broadly accepted in practice, and research in either direction is in progress [21, 34, 115, 132–134], revealing likely balanced needs in the future. In seismic analyses, where the excitations are digitized at equal steps and the digitization step size complicates the selection of the integration step size even further, constant time stepping is popular, e.g. see [10, 28, 29, 99, 135]. Accordingly, the discussion in the remainder of this section concentrates on efficient step size selection in analyses against digitized excitations using constant time steps (i.e. \({}_{f}\Delta t < \infty\), and, in Fig. 1, \(\forall \; i:\; t_{i + 1} - t_{i} = {\text{Const}}.\)).

In view of Eqs. (35) and (37), conventional time integration analysis using constant time steps is more efficient when \(\Delta t_{d}\), \({}_{f}\Delta t\), \(\frac{T}{\chi }\), and \(\Delta t_{cr}\) (in Eq. (35)) are closer to each other and as large as possible. In other words, except when \(\Delta t_{cr} \to \infty\) (unconditional stability, where \(\Delta t_{cr}\) disappears from the relations) or \(\Delta t_{cr} = 0\) (unconditional instability, an impractical case), it would be ideal to guarantee

$$\begin{aligned} \Delta t = \Delta t_{d} & = {}_{f}\Delta t = \Delta t_{cr} = \frac{T}{\chi }, \\ & \forall \, \varepsilon > 0 :\; \Delta t = \frac{T}{\chi } + \varepsilon \Rightarrow {\text{Practically}}\,{\text{Unacceptable}}\,{\text{Accuracy}} \\ \end{aligned}$$
(39)

once again implying the advantages of unconditional stability. Based on this idea, and towards more efficient step size selection, approaches have been developed to enlarge \(\Delta t_{cr}\) and \({}_{f}\Delta t\) and to close the gaps between the terms in Eq. (35), e.g. see [49, 135–138]. Considering this, the title of this chapter, and the existing comments on using unconditionally stable methods (see Eqs. (19) and (36)), the discussion continues concentrated on techniques that enlarge \({}_{f}\Delta t\), while the integration methods are unconditionally stable, in the analysis of linear systems.

Since the discussion is narrowed to transient analysis against digitized excitations (cases with finite \({}_{f}\Delta t\)), as a practical application, it is reasonable to specialize Eq. (5) to seismic analysis against ground accelerations, by considering

$${\mathbf{f}}(t) = {\bar{\mathbf{O}}}$$
(40)

and

$${\mathbf{u}} = {\mathbf{u}}_{g} + {\mathbf{u}}_{r}$$
(41)

In Eq. (41), \({\mathbf{u}}_{g}\) stands for the static displacements of the unsupported degrees of freedom due to the ground (support) displacement, and \({\mathbf{u}}_{r}\) denotes the displacements of the unsupported degrees of freedom additional to the static displacements. In view of Eqs. (40) and (41), Eq. (5) can be rewritten as stated below [10, 28, 91–93, 139]:

$$\begin{aligned} & {\mathbf{M}}\,{\ddot{\mathbf{u}}}_{r} + {\mathbf{f}}_{\text{int}} = - {\mathbf{M}}\,{\Gamma}\,{\ddot u}_{g} (t)\quad 0 \le t < t_{end} \\ & {\mathbf{u}}_{r} (t = 0) = {\bar{\mathbf{O}}} \\ & {\dot{\mathbf{u}}}_{r} (t = 0) = {\bar{\mathbf{O}}} \\ & {\mathbf{f}}_{\text{int}} (t = 0) = {\mathbf{f}}_{{\text{int}_{0}}} \\ \end{aligned}$$
(42)

In Eq. (42), \({\ddot u}_{g} \left( t \right)\) represents the ground acceleration, digitized at steps sized \({}_{f}\Delta t\), and \(\Gamma\) is a vector implying the static effect of the ground (support) displacement on the displacements of the unsupported degrees of freedom [10].
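For illustration, here is a minimal sketch of the right-hand side of Eq. (42) for a record digitized at \({}_{f}\Delta t\); the linear interpolation between the sampling instants is an assumption of this sketch:

```python
import numpy as np

def effective_load(M, Gamma, ug_ddot, dt_f, t):
    """-M @ Gamma * ug_ddot(t), with the digitized ground acceleration
    interpolated linearly between its sampling instants."""
    t_samples = np.arange(len(ug_ddot)) * dt_f
    return -(M @ Gamma) * np.interp(t, t_samples, ug_ddot)
```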

Four techniques to materialize time integration analysis with steps larger than the excitation steps, without disregarding the excitations, are briefly reviewed below:

  1. Time integration of integrated problems.

    This technique was proposed by S-Y. Chang in 2002 [137], though not as a technique to enlarge the integration steps. Ordinary time integration is implemented in the analysis of a slightly modified version of the original problem, consisting of the integral of the equation of motion and the corresponding initial conditions. Accordingly, the originally digitized \({\mathbf{f}}(t)\) is integrated, and \({}_{f}\Delta t\) loses its meaning and can be eliminated from Eqs. (32), (35), and (37). This is a considerable achievement, obtained at the price of the additional computational cost, needed mainly for the numerical integration of \({\mathbf{f}}(t)\). Few examples have been studied; in all, the loss of accuracy is small and the saving in computational cost is considerable.

  2. Convergence-based replacement of excitations.

    This technique was proposed by the author in 2008 [49], specifically in order to replace digitized excitations with excitations digitized at larger steps, i.e.

    $${}_{f}\Delta t_{new} = n\,{}_{f}\Delta t,\quad n \in \left\{ 2, 3, 4, \ldots \right\}$$
    (43)

    and later extended to non-integer enlargements [138], i.e.

    $${}_{f}\Delta t_{new} = r\,{}_{f}\Delta t,\quad r = \frac{n_{1}}{n_{2}},\quad n_{1} > n_{2},\quad n_{1} \in \left\{ 2,3,4, \ldots \right\},\quad n_{2} \in \left\{ 1,2,3, \ldots \right\}$$
    (44)

    Both versions of the technique have been successfully implemented in the analysis of many real problems, including frames, short, mid-rise, and tall buildings, different bridges, space structures, silos, water tanks, a cooling tower, etc. [49, 135, 139–154], and have undergone theoretical studies [138, 155–163]; see more details in Sect. 7.

  3. Impact-based replacement of excitations.

    This technique, proposed by M. Hosseini and I. Mirzaei in 2012 [164], replaces each section of the excitation record located totally above or totally below the \({\ddot u}_{g} = 0\) axis with a single impulse \(\left( {\ddot u}_{g} \right)\) equal to the area of the section, applied at the centroid of the section (see Fig. 10; a rough sketch of the idea is given after this list). Implementation of the technique in the analysis of several problems has been successful.

    Fig. 10. Impact-based replacement [164] of two typical sequential sections of a typical digitized record, respectively above and below the horizontal axis: a before the replacement, b after the replacement

  4. Integration after combining several sequential integration steps analytically.

    This technique, first suggested by the author in 2009 [165], combines the ordinary time integration computations of \(p^{\prime}\) \(\left( p^{\prime} \in Z^{+} - \{ 1\} \right)\) sequential steps analytically, in order to arrive at \(\left( {\mathbf{u}}_{p}, {\dot{\mathbf{u}}}_{p}, {\ddot{\mathbf{u}}}_{p} \right)\) directly from \(\left( {\mathbf{u}}_{p - p^{\prime}}, {\dot{\mathbf{u}}}_{p - p^{\prime}}, {\ddot{\mathbf{u}}}_{p - p^{\prime}} \right)\), and hence provides the capability of time integration with steps \(p^{\prime}\) times larger than the excitation steps, with no sacrifice of accuracy \(\left( p \in Z^{+} - \{ 1\}, p \ge p^{\prime} \right)\). However, the additional computational cost is not necessarily negligible [160, 166]. The technique was first proposed for SDOF linear systems [165], later enhanced towards further reduction of computational cost [166], and afterwards extended, in one attempt, to the analysis of MDOF systems [160] and, in another attempt, to nonlinear analyses [167]. Though the loss of accuracy is zero, because of the additional computational cost and for the sake of efficiency, the enlargement is limited to specific values of n in Eq. (43) (four seems an appropriate upper-bound for n; Eqs. (43) and (44) are common between the second and fourth techniques).
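As promised in the third item above, here is a rough sketch of the impact-based idea; the implementation details in [164] may differ, and the trapezoidal areas and centroids are assumptions of this sketch:

```python
import numpy as np

def impact_based_replacement(ug_ddot, dt_f):
    """Replace each run of record ordinates lying entirely above (or below)
    the zero axis by a single impulse equal to the signed lobe area,
    applied at the lobe centroid (cf. Fig. 10)."""
    ug_ddot = np.asarray(ug_ddot, dtype=float)
    t = np.arange(len(ug_ddot)) * dt_f
    impulses = []                        # (centroid time, signed area) pairs
    i = 0
    while i < len(ug_ddot):
        s = np.sign(ug_ddot[i])
        if s == 0.0:
            i += 1
            continue
        j = i
        while j < len(ug_ddot) and np.sign(ug_ddot[j]) == s:
            j += 1
        lobe, t_lobe = ug_ddot[i:j], t[i:j]
        area = np.trapz(lobe, t_lobe)    # signed area of the lobe
        if area != 0.0:                  # single-sample lobes carry no area
            impulses.append((np.trapz(lobe * t_lobe, t_lobe) / area, area))
        i = j
    return impulses
```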

A brief comparison of these four techniques is presented in Table 5, where the numbers in the first row accord with the numbers introducing the techniques above. In view of Table 5 and the fact that there exists a rational number in any arbitrary neighbourhood of a real number [57] (see the last row in Table 5), the second technique can be considered the superior one. The next section is dedicated to a review of the second technique and its most recent advancements.

Table 5 A comparison between four techniques to enlarge the digitized excitations step sizes

7 Recent Advancements of a Step Size Enlargement Technique

As stated in the previous section, towards more efficient seismic analysis by constant-step time integration, a technique was proposed in 2008 [49] and later extended in 2013 [138]. The efficiency is provided by enlarging \({}_{f}\Delta t\), such that it is prevented from dominating Eqs. (35) and (37), while the induced inaccuracy is also bounded. Special attention is paid to: (1) convergence, as the main essential of approximate computations [55, 59, 60]; (2) the recommended second order of accuracy [30–32]; and (3) the effect of approximations in the initial conditions, excitations, etc., on the rate of convergence [32, 49, 71]. These considerations lead to the replacement of the excitation f with a new excitation, \({\tilde{\mathbf{f}}}\), defined below:

$$\begin{array}{ll} t_{i} = 0: & {\tilde{\mathbf{f}}}_{i} = {\tilde{\mathbf{f}}}\left( t_{i} \right) = {\mathbf{g}}\left( t_{i} \right), \\ 0 < t_{i} < t^{\prime}_{end}: & {\tilde{\mathbf{f}}}_{i} = {\tilde{\mathbf{f}}}\left( t_{i} \right) = \frac{1}{2}{\mathbf{g}}\left( t_{i} \right) + \frac{1}{4n^{\prime}}\sum\limits_{k = 1}^{n^{\prime}} \left[ {\mathbf{g}}\left( t_{i + k/n_{1}} \right) + {\mathbf{g}}\left( t_{i - k/n_{1}} \right) \right], \\ t_{i} = t^{\prime}_{end}: & {\tilde{\mathbf{f}}}_{i} = {\tilde{\mathbf{f}}}\left( t_{i} \right) = {\mathbf{g}}\left( t_{i} \right), \end{array} \qquad t_{i} = i\,\frac{n_{1}}{n_{2}}\,{}_{f}\Delta t,\quad i = 0,1,2, \ldots$$
(45)

and digitized at steps sized \({}_{f}\Delta t_{new}\), introduced in Eq. (44). Regarding the new symbols in Eqs. (44) and (45), when the excitation step size, \({}_{f}\Delta t\), governs Eq. (35) or (37), the replacement addressed in Eq. (45) changes the situation by assigning to \(n_{1}\) and \(n_{2}\) the smallest positive integers satisfying

$$\frac{n_{1}}{n_{2}}\,{}_{f}\Delta t \le {\text{Min}}\left( \Delta t_{cr} ,\frac{T_{r}}{\chi} ,\Delta t_{d} \right) < \frac{n_{1} + 1}{n_{2}}\,{}_{f}\Delta t$$
(46)

The value of \(n^{\prime}\) in Eq. (45) can be obtained from

$$\begin{array}{lll} n^{\prime} = n_{1} - 1 & {\text{when}} & t = \frac{\Delta t}{n_{2}} \\ n^{\prime} = \left\{ \begin{array}{ll} \frac{n_{1}}{2} & n_{1} = 2j,\quad j \in Z^{+} \\ \frac{n_{1} - 1}{2} & n_{1} = 2j + 1,\quad j \in Z^{+} \end{array} \right. & {\text{when}} & \frac{\Delta t}{n_{2}} < t < t^{\prime}_{end} - \frac{\Delta t}{n_{2}} \\ n^{\prime} = n_{1} - 1 & {\text{when}} & t = t^{\prime}_{end} - \frac{\Delta t}{n_{2}} \end{array}$$
(47)

\(t^{\prime}_{end}\) is the unique number satisfying

$$t_{end} \le t^{\prime}_{end} < t_{end} + \frac{n_{1}}{n_{2}}\,{}_{f}\Delta t,\quad \frac{t^{\prime}_{end}}{\frac{n_{1}}{n_{2}}\,{}_{f}\Delta t} \in Z^{+}$$
(48)

and \({\mathbf{g}}(t)\) is available from

$${\mathbf{g}}\left( t_{i} \right) = \left\{ \begin{array}{ll} {\bar{\mathbf{g}}}\left( t_{i} \right) & {\text{when}}\quad 0 \le t \le t_{end} \\ {\bar{\mathbf{O}}} & {\text{when}}\quad t_{end} < t < t^{\prime}_{end} \end{array} \right.$$
(49)

where \({\bar{\mathbf{O}}}\) is the zero vector and \({\bar{\mathbf{g}}}\) is a linear enrichment of f, defined below:

$$\begin{array}{l} \forall \; i = 0,1,2, \ldots \frac{n_{2}\, t^{\prime}_{end}}{{}_{f}\Delta t}:\quad t_{i} = i\frac{{}_{f}\Delta t}{n_{2}}, \\ \left\{ \begin{array}{lll} {\bar{\mathbf{g}}}\left( t_{i} \right) = {\mathbf{f}}\left( t_{i} \right), & {\text{when}} & \frac{i}{n_{2}} \in Z^{+} + \{ 0\} \\ \begin{aligned} {\bar{\mathbf{g}}}\left( t_{i} \right) & = {\mathbf{f}}\left( k_{1}\,{}_{f}\Delta t \right) + \left( \frac{i - k_{1}}{n_{2}} \right)\left( {\mathbf{f}}\left( k_{2}\,{}_{f}\Delta t \right) - {\mathbf{f}}\left( k_{1}\,{}_{f}\Delta t \right) \right), \\ & \quad k_{1} = k n_{2},\quad k_{2} = k_{1} + n_{2},\quad k \le \frac{i}{n_{2}} < k + 1,\; k \in Z^{+} + \{ 0\} \end{aligned} & {\text{when}} & \frac{i}{n_{2}} \notin Z^{+} + \{ 0\} \end{array} \right. \end{array}$$
(50)

Although the technique was proposed in 2008 [49] and extended in 2013 [138], this is the first time the formulation is presented in the detail stated above, considering rational-number enlargements. The technique has been implemented in many time integration analyses, resulting in considerable reduction of computational cost at the price of negligible loss of accuracy (see Table 6). Even more, it is worth noting that, in two cases, the computational cost was reduced while the accuracy increased [152, 154].

Table 6 Experiences on implementation of the technique proposed in [49] in time integration analysis against digitized ground motions
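For concreteness, here is a minimal sketch of the replacement in Eqs. (44), (45), and (48)–(50), for a scalar record, assuming \(n_{1} \ge 2\) and simplifying the boundary values of \(n^{\prime}\) in Eq. (47) to the interior rule:

```python
import numpy as np

def enlarge_excitation(f, dt_f, n1, n2):
    """Replace a scalar record f, digitized at dt_f, with a record digitized
    at (n1/n2)*dt_f: linear enrichment onto the fine grid dt_f/n2 (Eq. 50),
    zero padding up to t'_end (Eqs. 48-49), then the weighted averaging of
    Eq. (45) with the interior n' = floor(n1/2)."""
    f = np.asarray(f, dtype=float)
    t_end = (len(f) - 1) * dt_f
    # Eq. (50): interpolate onto the fine grid with n2 subdivisions per step
    t_fine = np.linspace(0.0, t_end, (len(f) - 1) * n2 + 1)
    g = np.interp(t_fine, np.arange(len(f)) * dt_f, f)
    # Eqs. (48)-(49): zero padding so that t'_end is a multiple of the new step
    dt_new = (n1 / n2) * dt_f
    n_new = int(np.ceil(t_end / dt_new - 1e-12))   # steps of the new record
    g = np.concatenate([g, np.zeros(n_new * n1 + 1 - len(g))])
    # Eq. (45): endpoint values kept, interior values smoothed
    n_prime = n1 // 2                              # assumes n1 >= 2
    k = np.arange(1, n_prime + 1)
    f_new = np.empty(n_new + 1)
    f_new[0], f_new[-1] = g[0], g[-1]
    for i in range(1, n_new):
        c = i * n1                                 # centre index on fine grid
        f_new[i] = 0.5 * g[c] + np.sum(g[c + k] + g[c - k]) / (4.0 * n_prime)
    return f_new, dt_new
```

Because the weights sum to unity, the replacement preserves constant records exactly, in line with the convergence-oriented construction of Eq. (45).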

A seemingly weak point in the implementation of the technique is the vagueness in the notion and determination of \(T_{r}\) in Eq. (46), potentially entailing ambiguities in the selection of n (or, to say better, \(\frac{n_{1}}{n_{2}}\)). Nevertheless, as implied in Sects. 4 and 5, these ambiguities also exist in ordinary time integration analyses using constant integration steps, as well as in some adaptive time stepping methods. Therefore, the ambiguities in defining and computing \(T_{r}\) are not deficiencies of the technique proposed in [49], but deficiencies of ordinary time integration, affecting the performance of the technique. The ambiguities can be lessened by comparing the computed response with the response obtained from an analysis with smaller steps [7, 28, 55, 95–97, 102], as discussed in the ending parts of Sect. 4. However, questions persist. How should the integration step size and the excitation be set for the analysis with smaller steps? Should the technique also contribute to the decrease of the step size (by assigning smaller values to n or \(\frac{n_{1}}{n_{2}}\)), or does it suffice to reduce the size of the excitation step and determine the excitation by linear interpolation? What is the role of the errors originating in the technique in the total accuracy? As a brief response, or, to say better, comment: the repetition of the first analysis can be considered a means to control the additional errors, including those due to the technique. The repetitions can first be carried out with respect to the technique, and then, after ensuring that the additional errors associated with the technique are sufficiently small, with respect to \(\Delta t\); the details, explained in [135, 159], imply no considerable additional cost compared to ordinary repetition-based accuracy controls.

Furthermore, the computational cost associated with Eqs. (45)–(50) is negligible compared to the cost of time integration (except for systems with one or two degrees of freedom [49, 147, 157]). Accordingly, the computational cost reduction in linear analyses [135] can be stated as

$$A_{C} = 100\,\frac{n_{1} - n_{2}}{n_{1}}\quad (\% )$$
(51)

and the changes of the cost reduction with respect to the enlargement, can be expressed as:

$$\frac{\partial A_{C}}{\partial \left( \frac{n_{1}}{n_{2}} \right)} = 100\,\frac{n_{2}^{2}}{n_{1}^{2}}$$
(52)

With attention to Eqs. (51) and (52), recently,

$$\frac{n_{1}}{n_{2}} \le 5$$
(53)

is suggested as a reasonable practical restriction on the selection of n (and \(\frac{n_{1}}{n_{2}}\)) [139, 159], changing Eq. (46) to

$$\begin{aligned} & \exists n^{\prime}_{1} \in \left\{ 2,3,4, \ldots \right\},\quad \exists n^{\prime}_{2} \in \left\{ 1,2,3, \ldots \right\}:\quad \frac{n^{\prime}_{1}}{n^{\prime}_{2}}\,{}_{f}\Delta t \le {\text{Min}}\left( \Delta t_{cr} ,\frac{T_{r}}{\chi} ,\Delta t_{d} \right) < \frac{n^{\prime}_{1} + 1}{n^{\prime}_{2}}\,{}_{f}\Delta t \\ & n_{2} = n^{\prime}_{2},\quad n_{1} = \left\{ \begin{array}{lll} n^{\prime}_{1} & {\text{when}} & \frac{n^{\prime}_{1}}{n^{\prime}_{2}} \le 5 \\ 5 n^{\prime}_{2} & {\text{when}} & \frac{n^{\prime}_{1}}{n^{\prime}_{2}} > 5 \end{array} \right. \\ \end{aligned}$$
(54)
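A sketch of one way to read Eqs. (46) and (54) follows; the search over the smallest denominators is an interpretation made here, not a prescription from [49, 139, 159]:

```python
def select_n1_n2(bound, dt_f, n2_max=10):
    """Pick n1, n2 per Eqs. (46) and (54): for increasing denominators n2,
    n1 = floor(n2 * bound / dt_f) satisfies Eq. (46) by construction; the
    first pair with a real enlargement (n1/n2 > 1) is accepted, and the
    ratio is capped at 5 per Eq. (54)."""
    for n2 in range(1, n2_max + 1):
        n1 = int(n2 * bound / dt_f)      # floor for positive arguments
        if n1 > n2:                      # genuine enlargement, n1 >= 2
            return (min(n1, 5 * n2), n2)
    return None                          # the excitation step does not govern
```

For instance, with bound = 0.11 s and dt_f = 0.02 s, the sketch returns (5, 1), i.e. an integer enlargement n = 5.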

Equation (54) should be considered together with Eqs. (44), (45), and (47)–(50), when the original excitation step size is the governing term in Eq. (35) or (37) (to say better, when the technique can be implemented). The consequence is an upper bound on the computational cost reduction of linear analyses:

$$A_{C} \le 80\quad (\% )$$
(55)
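As a quick arithmetic check, the cap of Eq. (53) substituted into Eq. (51) reproduces the bound of Eq. (55):

$$\frac{n_{1}}{n_{2}} = 5\; \Rightarrow\; A_{C} = 100\,\frac{n_{1} - n_{2}}{n_{1}} = 100\left( 1 - \frac{1}{5} \right) = 80\;(\% )$$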

Another important challenge for the technique [49] is its performance when implemented in nonlinear time integration analysis. According to the numerical studies carried out (see Table 6), the performance of the technique is better in the analysis of linear behaviour, both from the standpoint of accuracy and from the point of view of computational cost reduction [135, 146, 150, 152]. Two main reasons are: (1) while convergence and second order of accuracy are the main concepts of the technique, accuracy, numerical stability, consistency, and convergence are still unresolved issues in nonlinear analyses [2, 42, 63, 72–80, 168]; (2) with larger integration steps, the number of iterations in the nonlinearity solutions may increase, and the computational cost associated with these iterations can compensate the reductions originated in the technique and, accordingly, diminish its efficiency.

Considering issues like those stated above, further study to clarify the persisting ambiguities is essential. Some main directions towards more efficient step size enlargement by the technique proposed in [49] are listed below:

  1. Further clarification regarding the values to be assigned to \(T_{r}\) in Eqs. (35) and (37), and more reliable selection of the enlargement scaling factor (n or \(\frac{n_{1}}{n_{2}}\)) to be implemented in Eq. (44).

  2. Better performance in the analysis of complex (including nonlinear) structural systems.

  3. Better control of accuracy.

  4. Implementation in adaptive time integration analysis.

Meanwhile, the first point above can be considered of high importance in the improvement of integration step size enlargement techniques other than that proposed in [49].

8 Closure

Time integration is a versatile tool for the analysis of semi-discretized equations of motion, and of many other initial value problems of different origins. The integration step size, or, to say better, \(\Delta t\), is the main analysis parameter of time integration analysis, which, together with the nonlinearity and the methods' parameters, affects the analysis features, as stated below:

  (a) Smaller values of \(\Delta t\) generally lead to more accurate responses. This is not necessarily true in the analysis of nonlinear systems or systems with complex (e.g. highly oscillatory) behaviour. For linear analyses, with integration steps sufficiently small and not under the effect of round-off, more accuracy can be guaranteed when repeating the analysis with smaller steps (the required smallness depends on the problem, the integration method, and the computational facilities).

  (b) Except in unconditionally stable and unconditionally unstable analyses, smaller \(\Delta t\) can be beneficial for numerical stability.

  (c) \(\Delta t\) has no effect on the order of accuracy.

  (d) \(\Delta t\) has no effect on convergence, though it can affect the convergence trend.

  (e) Smaller values of \(\Delta t\) imply more computational cost in linear analyses. The case might be different in nonlinear analyses, depending on the type and severity of the nonlinear behaviour, the nonlinearity parameters, and the time integration method. Some special cases are discussed.

  (f) Smaller values of \(\Delta t\) in general imply less artificial damping. This is rather valid for both undamped and damped analyses. Values to be assigned to the parameters of artificial damping should be set carefully.

In the selection of the integration step size, especially for the analysis of MDOF structural systems with constantly sized steps, the emphasis is on using unconditionally stable time integration methods (the case is different for wave propagation problems, addressed in the literature at times). The requirements of numerical stability obtained from spectral analysis of the amplification matrix, i.e. spectral stability, are necessary and sufficient for linear analyses, but merely necessary for nonlinear analyses. Even in analyses with unconditionally stable time integration methods, ambiguities exist in conventional step size selection, as well as in the comment of the national seismic code/standard of New Zealand (a code/standard with comments on integration step size selection); the ambiguities are more numerous in nonlinear analyses. Some comments are discussed. Specifically, with attention to the ambiguities existing in integration step size selection, control of the accuracy, for instance by repetition of the analyses with smaller steps, is necessary; some additional details are to be satisfied in the presence of nonlinearity.

Of the thirteen seismic codes/standards reviewed in this chapter, all except the code/standard of Chile consider time history analysis and time integration as tools for seismic analysis. Nevertheless, only in a few codes/standards is time history analysis recommended as the only or the superior tool to analyze the semi-discretized equations of motion. Few codes/standards give specific regulations on the details of time history analysis; specifically, there is one code/standard, that of New Zealand, with comments on the details of time integration analysis and the selection of integration step sizes. Although the initiative of the seismic code/standard of New Zealand is worthy of sincere and deep appreciation and acknowledgement, ambiguities and flaws exist, some of which are discussed in this chapter.

In time integration analysis against digitized excitations, the excitation step size should be taken into account in the selection of the integration step size. In order to increase the efficiency, digitized excitations can be replaced with excitations digitized at larger steps. The existing techniques are briefly reviewed; for the seemingly superior one, detailed explanations are presented and the challenges are addressed.

Besides time integration analysis using constant integration steps, time integration can be carried out using adaptively sized steps, still not seriously implemented in seismic analysis. Adaptive time integration, although directed towards more efficiency, is not necessarily more efficient than analysis with constantly sized steps; the efficiency depends on several parameters, including the complexity of the behaviour, the probable nonlinearity, the method of adaptive time integration and the adequate selection of its details, and even the time integration method.

Some areas for further research on time integration and the step size selection are stated below:

  1. The effects of viscous damping on the numerical stability of time integration methods are to be studied further.

  2. The theory of numerical stability, and its practical consideration, are not yet well established when the damping is non-proportional.

  3. The existing comments on integration step size selection need to be improved, towards more reliability, more rigorous supporting theory, and consideration of more issues, while preserving simplicity.

  4. Further investigation of step size selection for nonlinear analyses is essential.

  5. More reliable, yet simple, practical error controlling methods are to be developed and considered in seismic codes/standards. In this regard, the initiative of the code/standard of New Zealand is sincerely acknowledged.

  6. Even for integration methods with numerical (artificial) damping, practical methods for selecting the parameters of the integration methods need to be developed.

  7. The existing approaches to adaptive time stepping involve the selection of parameters that are partly problem dependent; further research towards simplification sounds essential.

  8. A priori estimates do not exist for the computational cost of adaptive, as well as nonlinear, time integration analyses; accordingly, different from linear constant-time-stepping analysis, the efficiencies are unclear at the start of nonlinear and/or adaptive analyses. Further research is essential.

  9. More adequate methods, techniques, and approaches are to be developed for time integration analysis of nonlinear systems, specifically to guarantee simplicity, continuation, and sufficiency of accuracy, without high computational cost.

  10. In view of the stochastic nature of earthquakes, the necessity in many codes/standards to study the seismic behaviour of structural systems, and the ever-increasing complexity and size of structural systems, efforts towards more efficient time history analysis and more reasonable selection of \(\Delta t\) are essential.