
In many cases, simulation of a physical system means calculating the time evolution of a model system. We consider a large class of models which can be described by a first order initial value problem

$$ \frac{dY}{dt}=f\bigl(Y(t),t\bigr)\quad Y(t=0)=Y_{0} $$
(12.1)

where Y is the state vector (possibly of very high dimension) which contains all information about the system. Our goal is to calculate the time evolution of the state vector Y(t) numerically. For obvious reasons this can be done only for a finite number of values of t and we have to introduce a grid of discrete times \(t_{n}\) which for simplicity are assumed to be equally spaced:

$$ t_{n+1}=t_{n}+\Delta t. $$
(12.2)

Advancing time by one step involves the calculation of the integral

$$ Y(t_{n+1})-Y(t_{n})=\int_{t_{n}}^{t_{n+1}}f \bigl(Y\bigl(t'\bigr),t'\bigr)\,dt' $$
(12.3)

which can be a formidable task since f(Y(t),t) depends on time via the time dependence of all the elements of Y(t). In this chapter we discuss several strategies for the time integration. The explicit Euler forward difference has low error order but is useful as a predictor step for implicit methods. A symmetric difference quotient is much more accurate. It can be used as the corrector step in combination with an explicit Euler predictor step and is often used for the time integration of partial differential equations. Methods with higher error order can be obtained from a Taylor series expansion, like the Nordsieck and Gear predictor-corrector methods, which have often been applied in molecular dynamics calculations. Runge-Kutta methods are very important for ordinary differential equations. They are robust and allow an adaptive control of the step size. Very accurate results can be obtained for ordinary differential equations with extrapolation methods like the famous Gragg-Bulirsch-Stoer method. If the solution is smooth enough, multistep methods are applicable, which use information from several points. Best known are Adams-Bashforth-Moulton methods and Gear methods (also known as backward differentiation methods), which are especially useful for stiff problems. The class of Verlet methods has been developed for molecular dynamics calculations. They are symplectic and time reversible and conserve energy over long trajectories.

1 The State Vector

The state of a classical N-particle system is given by the position in phase space, or equivalently by specifying position and velocity for all the N particles

$$ Y= (\mathbf{r}_{1},\mathbf{v}_{1},\ldots, \mathbf{r}_{N},\mathbf{v}_{N} ). $$
(12.4)

The concept of a state vector is not restricted to a finite number of degrees of freedom. For instance a diffusive system can be described by the particle concentrations as a function of the coordinate, i.e. the elements of the state vector are now indexed by the continuous variable x

$$ Y= \bigl(c_{1}(\mathbf{x})\cdots c_{M}(\mathbf{x}) \bigr). $$
(12.5)

Similarly, a quantum particle moving in an external potential can be described by the amplitude of the wave function

$$ Y=\bigl(\varPsi(\mathbf{x})\bigr). $$
(12.6)

A direct numerical treatment of continuous systems is not feasible, since even the most powerful computer can handle only a finite amount of data in finite time. Therefore discretization is necessary (Chap. 11), either by introducing a spatial mesh (Sects. 11.2, 11.3, 11.6), which in the simplest case means a grid of equally spaced points

$$\begin{aligned} &\mathbf{x}_{ijk}= (ih,jh,kh )\quad i=1\cdots i_{\max},\ j=1\cdots j_{\max},\ k=1\cdots k_{\max} \end{aligned}$$
(12.7)
$$\begin{aligned} &Y= \bigl(c_{1}(\mathbf{x}_{ijk})\dots c_{M}( \mathbf{x}_{ijk}) \bigr) \end{aligned}$$
(12.8)
$$\begin{aligned} &Y= \bigl(\varPsi(\mathbf{x}_{ijk}) \bigr) \end{aligned}$$
(12.9)

or by expanding the continuous function with respect to a finite set of basis functions (Sect. 11.5). The elements of the state vector then are the expansion coefficients

$$\begin{aligned} &\vert \varPsi \rangle =\sum_{s=1}^{N}C_{s} \vert \varPsi_{s}\rangle \end{aligned}$$
(12.10)
$$\begin{aligned} &Y= (C_{1},\ldots,C_{N} ). \end{aligned}$$
(12.11)

If the density matrix formalism is used to take the average over a thermodynamic ensemble or to trace out the degrees of freedom of a heat bath, the state vector instead is composed of the elements of the density matrix

$$\begin{aligned} &\rho=\sum_{s=1}^{N}\sum _{s'=1}^{N}\rho_{ss'}\vert \varPsi_{s}\rangle\langle\varPsi_{s'}\vert =\sum _{s=1}^{N}\sum_{s'=1}^{N} \overline{C_{s'}^{*}C_{s}}\vert \varPsi_{s}\rangle\langle\varPsi_{s'}\vert \end{aligned}$$
(12.12)
$$\begin{aligned} &Y= (\rho_{11}\cdots\rho_{1N},\rho_{21}\cdots \rho_{2N},\ldots,\rho_{N1}\cdots\rho_{NN} ). \end{aligned}$$
(12.13)

2 Time Evolution of the State Vector

We assume that all information about the system is included in the state vector. Then the simplest equation to describe the time evolution of the system gives the change of the state vector

$$ \frac{\mathrm {d}Y}{\mathrm {d}t}=f(Y,t) $$
(12.14)

as a function of the state vector (or more generally a functional in the case of a continuous system). An explicit time dependence has to be considered, for instance, to describe the influence of an external time-dependent field.

Some examples will show the universality of this equation of motion:

  • N-particle system

The motion of N interacting particles is described by

$$ \frac{\mathrm {d}Y}{\mathrm {d}t}= (\dot{\mathbf{r}}_{1},\dot{\mathbf{v}}_{1} \cdots )= (\mathbf{v}_{1},\mathbf{a}_{1}\cdots ) $$
(12.15)

where the acceleration of a particle is given by the total force acting upon this particle and thus depends on all the coordinates and possibly on time (velocity dependent forces could also be considered but are outside the scope of this book)

$$ \mathbf{a}_{i}=\frac{\mathbf{F}_{i}(\mathbf{r}_{1}\cdots\mathbf{r}_{N},t)}{m_{i}}. $$
(12.16)
  • Diffusion

Heat transport and other diffusive processes are described by the diffusion equation

$$ \frac{\partial f}{\partial t}=D\Delta f+S(\mathbf{x},t) $$
(12.17)

which in its simplest spatially discretized version for 1-dimensional diffusion reads

$$ \frac{\partial f(\mathbf{x}_{i})}{\partial t}=\frac{D}{\Delta x^{2}} \bigl(f(\mathbf{x}_{i+1})+f( \mathbf{x}_{i-1})-2f(\mathbf{x}_{i}) \bigr)+S( \mathbf{x}_{i},t). $$
(12.18)
  • Waves

Consider the simple 1-dimensional wave equation

$$ \frac{\partial^{2}f}{\partial t^{2}}=c^{2}\frac{\partial^{2}f}{\partial x^{2}} $$
(12.19)

which by introducing the velocity \(g(\mathbf{x})=\frac{\partial}{\partial t}f(\mathbf{x})\) as an independent variable can be rewritten as

$$ \frac{\partial}{\partial t} \bigl(f(\mathbf{x}),g(\mathbf{x)} \bigr)= \biggl(g( \mathbf{x}),c^{2}\frac{\partial^{2}}{\partial x^{2}}f(\mathbf{x}) \biggr). $$
(12.20)

Discretization of space gives

$$ \frac{\partial}{\partial t} \bigl(f(\mathbf{x}_{i}),g(\mathbf{x}_{i}) \bigr)= \biggl(g(\mathbf{x}_{i}),\frac{c^{2}}{\Delta x^{2}} \bigl(f( \mathbf{x}_{i+1})+f(\mathbf{x}_{i-1})-2f(\mathbf{x}_{i}) \bigr) \biggr). $$
(12.21)
  • Two-state quantum system

The Schrödinger equation for a two-level system (for instance a spin-\(\frac{1}{2}\) particle in a magnetic field) reads

$$ \mathrm {i}\hbar\frac{\mathrm {d}}{\mathrm {d}t}\left (\begin{array}{c} C_{1}\\ C_{2} \end{array} \right )=\left (\begin{array}{c@{\quad}c} H_{11}(t) & H_{12}(t)\\ H_{21}(t) & H_{22}(t) \end{array} \right ) \left (\begin{array}{c} C_{1}\\ C_{2} \end{array} \right ). $$
(12.22)
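All of these examples fit the generic form (12.14). As a small illustration, the spatially discretized diffusion equation (12.18) can be encoded as a right-hand side function f(Y,t); the following sketch assumes NumPy, a uniform grid and zero values outside the grid (choices made here, not prescribed by the text):

```python
import numpy as np

def f_diffusion(Y, t, D=1.0, dx=0.1, S=None):
    """Right-hand side of (12.18): spatially discretized 1D diffusion.

    Y contains the function values f(x_i) on the grid; values outside
    the grid are taken to be zero (an arbitrary boundary condition
    chosen for this sketch)."""
    dYdt = np.zeros_like(Y)
    dYdt[1:-1] = D / dx**2 * (Y[2:] + Y[:-2] - 2.0 * Y[1:-1])
    if S is not None:
        dYdt += S(t)          # optional source term S(x_i, t)
    return dYdt
```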

3 Explicit Forward Euler Method

The simplest method which is often discussed in elementary physics textbooks approximates the integrand by its value at the lower bound (Fig. 12.1):

$$ Y(t_{n+1})-Y(t_{n})\approx f\bigl(Y(t_{n}),t_{n} \bigr)\Delta t. $$
(12.23)

The truncation error can be estimated from a Taylor series expansion

$$\begin{aligned} Y(t_{n+1})-Y(t_{n}) =&\Delta t\frac{\mathrm {d}Y}{\mathrm {d}t}(t_{n})+ \frac{\Delta t^{2}}{2}\frac{\mathrm {d}^{2}Y}{\mathrm {d}t^{2}}(t_{n})+\cdots \\ =&\Delta tf\bigl(Y(t_{n}),t_{n}\bigr)+O\bigl(\Delta t^{2}\bigr). \end{aligned}$$
(12.24)

The explicit Euler method has several serious drawbacks

  • Low error order

Suppose you want to integrate from the initial time \(t_{0}\) to the final time \(t_{0}+T\). For a time step of \(\Delta t\) you have to perform \(N=T/\Delta t\) steps. Assuming comparable error contributions from all steps, the global error scales as \(N\Delta t^{2}=O(\Delta t)\). The error gets smaller as the time step is reduced but it may be necessary to use very small \(\Delta t\) to obtain meaningful results.

  • Loss of orthogonality and normalization

The simple Euler method can produce systematic errors which are very inconvenient if you want, for instance, to calculate the orbits of a planetary system. This can be most easily seen from a very simple example. Try to integrate the following equation of motion (see Example 1.5 on page 12):

$$ \frac{\mathrm {d}z}{\mathrm {d}t}=\mathrm {i}\omega z. $$
(12.25)

The exact solution is obviously given by a circular orbit in the complex plane:

$$\begin{aligned} &z=z_{0}e^{\mathrm {i}\omega t} \end{aligned}$$
(12.26)
$$\begin{aligned} &|z|=|z_{0}|=\mbox{const}. \end{aligned}$$
(12.27)

Application of the Euler method gives

$$ z(t_{n+1})=z(t_{n})+\mathrm {i}\omega\Delta t z(t_{n})=(1+ \mathrm {i}\omega\Delta t)z(t_{n}) $$
(12.28)

and you find immediately

$$ \bigl \vert z(t_{n})\bigr \vert =\sqrt{1+\omega^{2}\Delta t^{2}}\bigl \vert z(t_{n-1})\bigr \vert = \bigl(1+ \omega^{2}\Delta t^{2} \bigr)^{n/2}\bigl \vert z(t_{0})\bigr \vert $$
(12.29)

which shows that the radius increases continually even for the smallest time step possible (Fig. 12.2).

Fig. 12.1 Explicit Euler method

Fig. 12.2 Systematic errors of the Euler method

The same kind of error appears if you solve the Schrödinger equation for a particle in an external potential or if you calculate the rotational motion of a rigid body. For the N-body system it leads to a violation of the conservation of phase space volume. This can introduce an additional sensitivity of the calculated results to the initial conditions. Consider a harmonic oscillator with the equation of motion

$$ \frac{\mathrm {d}}{\mathrm {d}t}\left (\begin{array}{c} x(t)\\ v(t) \end{array} \right )=\left (\begin{array}{c} v(t)\\ -\omega^{2}x(t) \end{array} \right ). $$
(12.30)

Application of the explicit Euler method gives

$$ \left (\begin{array}{c} x(t+\Delta t)\\ v(t+\Delta t) \end{array} \right )=\left (\begin{array}{c} x(t)\\ v(t) \end{array} \right )+\left (\begin{array}{c} v(t)\\ -\omega^{2}x(t) \end{array} \right )\Delta t. $$
(12.31)

The change of the phase space volume (Fig. 12.3) is given by the Jacobi determinant

$$ J=\biggl \vert \frac{\partial(x(t+\Delta t),v(t+\Delta t))}{\partial(x(t),v(t))}\biggr \vert =\left \vert \begin{array}{c@{\quad}c} 1 & \Delta t\\ -\omega^{2}\Delta t & 1 \end{array}\right \vert =1+( \omega\Delta t)^{2}. $$
(12.32)
Fig. 12.3 Time evolution of the phase space volume

In this case the phase space volume increases continuously.
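These systematic errors are easy to reproduce numerically. A short sketch (plain Python) for the circular orbit (12.25), comparing the measured radius with the prediction (12.29):

```python
# Explicit Euler for dz/dt = i*omega*z, cf. (12.28); the radius
# grows by sqrt(1 + (omega*dt)**2) in every step, cf. (12.29)
omega, dt, n_steps = 1.0, 0.01, 1000
z = 1.0 + 0.0j
for n in range(n_steps):
    z = (1.0 + 1j * omega * dt) * z

print(abs(z))                                      # measured radius
print((1.0 + (omega * dt) ** 2) ** (n_steps / 2))  # prediction (12.29)
```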

4 Implicit Backward Euler Method

Alternatively let us make a step backwards in time

$$ Y(t_{n})-Y(t_{n+1})\approx-f\bigl(Y(t_{n+1}),t_{n+1} \bigr)\Delta t $$
(12.33)

which can be written as (Fig. 12.4)

$$ Y(t_{n+1})\approx Y(t_{n})+f\bigl(Y(t_{n+1}),t_{n+1} \bigr)\Delta t. $$
(12.34)

Taylor series expansion gives

$$ Y(t_{n})=Y(t_{n+1})-\frac{\mathrm {d}}{\mathrm {d}t}Y(t_{n+1}) \Delta t+\frac{\mathrm {d}^{2}}{\mathrm {d}t^{2}}Y(t_{n+1})\frac{\Delta t^{2}}{2}+\cdots $$
(12.35)

which shows that the error order again is \(O(\Delta t^{2})\). The implicit method is sometimes used to avoid the inherent instability of the explicit method. For the examples in Sect. 12.3 it shows the opposite behavior: the radius of the circular orbit as well as the phase space volume decrease in time. The gradient at a future time has to be estimated before an implicit step can be performed.
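For the linear case \(f=FY\) no such estimate is needed, since the implicit step (12.34) reduces to a linear system per time step. A minimal sketch with NumPy, using the harmonic oscillator (12.30) as test case (here the energy decreases, in line with the shrinking phase space volume):

```python
import numpy as np

def implicit_euler_step(F, Y, dt):
    """One backward Euler step (12.34) for linear f = F*Y:
    solve (1 - dt*F) Y_new = Y instead of inverting the matrix."""
    return np.linalg.solve(np.eye(F.shape[0]) - dt * F, Y)

omega, dt = 1.0, 0.01
F = np.array([[0.0, 1.0], [-omega**2, 0.0]])   # harmonic oscillator (12.30)
Y = np.array([1.0, 0.0])                       # state (x, v)
for n in range(1000):
    Y = implicit_euler_step(F, Y, dt)          # radius/energy now shrinks
```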

Fig. 12.4 Implicit backward Euler method

5 Improved Euler Methods

The quality of the approximation can be improved significantly by employing the midpoint rule (Fig. 12.5)

$$ Y(t_{n+1})-Y(t_{n})\approx f\biggl(Y\biggl(t_{n}+ \frac{\Delta t}{2}\biggr),t_{n}+\frac{\Delta t}{2}\biggr)\Delta t. $$
(12.36)

The error is smaller by one order of Δt:

$$\begin{aligned} &Y(t_{n})+f\biggl(Y\biggl(t_{n}+\frac{\Delta t}{2}\biggr),t_{n}+ \frac{\Delta t}{2}\biggr)\Delta t \\ &\quad =Y(t_{n})+ \biggl(\frac{\mathrm {d}Y}{\mathrm {d}t}(t_{n})+ \frac{\Delta t}{2}\frac{\mathrm {d}^{2}Y}{\mathrm {d}t^{2}}(t_{n})+\cdots \biggr)\Delta t \\ &\quad =Y(t_{n}+\Delta t)+O\bigl(\Delta t^{3}\bigr). \end{aligned}$$
(12.37)

The future value \(Y(t_{n}+\frac{\Delta t}{2})\) can be obtained by two different approaches:

  • Predictor-corrector method

Since \(f(Y(t_{n}+\frac{\Delta t}{2}),t_{n}+\frac{\Delta t}{2})\) is multiplied by Δt, it is sufficient to use an approximation of lower error order; even an explicit Euler step is sufficient. Together, the following algorithm results (see also the code sketch at the end of this section):

$$ \begin{aligned}[c] &\mbox{predictor step:} \quad Y^{(p)}=Y(t_{n})+\frac{\Delta t}{2}f\bigl(Y(t_{n}),t_{n}\bigr)\\ &\mbox{corrector step:} \quad Y(t_{n}+\Delta t)=Y(t_{n})+\Delta t f\biggl(Y^{(p)},t_{n}+\frac{\Delta t}{2}\biggr). \end{aligned} $$
(12.38)
  • Averaging (Heun method)

The average of \(f(Y(t_{n}),t_{n})\) and \(f(Y(t_{n}+\Delta t),t_{n}+\Delta t)\) is another approximation to the midpoint value of comparable quality (Fig. 12.6).

Fig. 12.5 Improved Euler method

Fig. 12.6 Improved polygon (or Heun) method

Expansion around t n t/2 gives

$$\begin{aligned} &\frac{1}{2} \bigl(f\bigl(Y(t_{n}),t_{n}\bigr)+f \bigl(Y(t_{n}+\Delta t),t_{n}+\Delta t\bigr) \bigr) \\ &\quad =f\biggl(Y\biggl(t_{n}+\frac{\Delta t}{2}\biggr),t_{n}+ \frac{\Delta t}{2}\biggr)+O\bigl(\Delta t^{2}\bigr). \end{aligned}$$
(12.39)

Inserting the average in (12.36) gives the following algorithm, which is also known as the improved polygon method and corresponds to the trapezoidal rule for the integral (4.13), or to a combination of explicit and implicit Euler steps:

$$ Y(t_{n}+\Delta t)=Y(t_{n})+\frac{\Delta t}{2} \bigl(f \bigl(Y(t_{n}),t_{n}\bigr)+f\bigl(Y(t_{n}+\Delta t),t_{n}+\Delta t\bigr) \bigr) . $$
(12.40)

In the special case of a linear function f(Y(t),t)=FY(t) (for instance rotational motion or diffusion) this can be solved formally by

$$ Y(t_{n}+\Delta t)= \biggl(1-\frac{\Delta t}{2}F \biggr)^{-1} \biggl(1+\frac{\Delta t}{2}F \biggr)Y(t_{n}). $$
(12.41)

Numerically it is not necessary to perform the matrix inversion. Instead a linear system of equations is solved:

$$ \biggl(1-\frac{\Delta t}{2}F \biggr)Y(t_{n}+\Delta t)= \biggl(1+ \frac{\Delta t}{2}F \biggr)Y(t_{n}). $$
(12.42)

In certain cases the Heun method conserves the norm of the state vector, for instance if F has only imaginary eigenvalues (as for the 1-dimensional Schrödinger equation, see page 393).

In the general case a predictor step has to be made to estimate the state vector at \(t_{n}+\Delta t\) before the Heun expression (12.40) can be evaluated:

$$ Y^{(p)}=Y(t_{n})+\Delta t f\bigl(Y(t_{n}),t_{n} \bigr). $$
(12.43)
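Both variants are easy to express in code. The following sketch (plain Python/NumPy; function names chosen here) implements the predictor-corrector step (12.38) and, for the linear case \(f=FY\), the Heun step in the form (12.42):

```python
import numpy as np

def improved_euler_step(f, Y, t, dt):
    """Midpoint rule with explicit Euler predictor, Eq. (12.38)."""
    Y_p = Y + 0.5 * dt * f(Y, t)           # predictor: half Euler step
    return Y + dt * f(Y_p, t + 0.5 * dt)   # corrector: full midpoint step

def heun_linear_step(F, Y, dt):
    """Heun (trapezoidal) step for linear f = F*Y, Eq. (12.42):
    solve (1 - dt/2*F) Y_new = (1 + dt/2*F) Y as a linear system."""
    n = F.shape[0]
    A = np.eye(n) - 0.5 * dt * F
    B = np.eye(n) + 0.5 * dt * F
    return np.linalg.solve(A, B @ Y)
```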

6 Taylor Series Methods

Higher order methods can be obtained from a Taylor series expansion

$$ Y(t_{n}+\Delta t)=Y(t_{n})+\Delta t f \bigl(Y(t_{n}),t_{n}\bigr)+\frac{\Delta t^{2}}{2} \frac{\mathrm {d}f(Y(t_{n}),t_{n})}{\mathrm {d}t}+\cdots . $$
(12.44)

The total time derivative can be expressed as

$$ \frac{\mathrm {d}f}{\mathrm {d}t}=\frac{\partial f}{\partial Y}\frac{\mathrm {d}Y}{\mathrm {d}t}+\frac{\partial f}{\partial t}=f'f+ \dot{f} $$
(12.45)

where the partial derivatives have been abbreviated in the usual way by \(\frac{\partial f}{\partial t}=\dot{f}\) and \(\frac{\partial f}{\partial Y}=f'\). Higher derivatives are given by

$$\begin{aligned} &\frac{\mathrm {d}^{2}f}{\mathrm {d}t^{2}}=f''f^{2}+f^{\prime\,2}f+2 \dot{f}'f+f'\dot{f}+\ddot{f} \end{aligned}$$
(12.46)
$$\begin{aligned} &\frac{\mathrm {d}^{3}f}{\mathrm {d}t^{3}}=\frac{\partial^{3}f}{\partial t^{3}}+f'''f^{3}+3 \dot{f}''f^{2}+\ddot{f}f'+3f''\dot{f}f \\ &\hphantom{\frac{\mathrm {d}^{3}f}{\mathrm {d}t^{3}}={}} +3\ddot{f}'f+3\dot{f}'\dot{f}+4f''f'f^{2}+5 \dot{f}'f'f+f^{\prime\,3}f+f^{\prime\,2} \dot{f}. \end{aligned}$$
(12.47)

6.1 Nordsieck Predictor-Corrector Method

Nordsieck [184] determines an interpolating polynomial of degree m. As variables he uses the 0th to mth derivatives evaluated at the current time t; for instance, for m=5 he uses the variables

$$\begin{aligned} &Y(t) \end{aligned}$$
(12.48)
$$\begin{aligned} &g(t)=\frac{\mathrm {d}}{\mathrm {d}t}Y(t) \end{aligned}$$
(12.49)
$$\begin{aligned} &a(t)=\frac{\Delta t}{2}\frac{\mathrm {d}^{2}}{\mathrm {d}t^{2}}Y(t) \end{aligned}$$
(12.50)
$$\begin{aligned} &b(t)=\frac{\Delta t^{2}}{6}\frac{\mathrm {d}^{3}}{\mathrm {d}t^{3}}Y(t) \end{aligned}$$
(12.51)
$$\begin{aligned} &c(t)=\frac{\Delta t^{3}}{24}\frac{\mathrm {d}^{4}}{\mathrm {d}t^{4}}Y(t) \end{aligned}$$
(12.52)
$$\begin{aligned} &d(t)=\frac{\Delta t^{4}}{120}\frac{\mathrm {d}^{5}}{\mathrm {d}t^{5}}Y(t). \end{aligned}$$
(12.53)

Taylor expansion gives approximate values at tt

$$\begin{aligned} &Y(t+\Delta t)=Y(t)+\Delta t \bigl[g(t)+a(t)+b(t)+c(t)+d(t)+e(t) \bigr] \\ &\hphantom{Y(t+\Delta t)} =Y^{p}(t+\Delta t)+e(t)\Delta t \end{aligned}$$
(12.54)
$$\begin{aligned} &g(t+\Delta t)=g(t)+2a(t)+3b(t)+4c(t)+5d(t)+6e(t) \\ &\hphantom{g(t+\Delta t)}=g^{p}(t+\Delta t)+6e(t) \end{aligned}$$
(12.55)
$$\begin{aligned} &a(t+\Delta t)=a(t)+3b(t)+6c(t)+10d(t)+15e(t) \\ &\hphantom{a(t+\Delta t)}=a^{p}(t+\Delta t)+15e(t) \end{aligned}$$
(12.56)
$$\begin{aligned} &b(t+\Delta t)=b(t)+4c(t)+10d(t)+20e(t) \\ &\hphantom{b(t+\Delta t)}=b^{p}(t+\Delta t)+20e(t) \end{aligned}$$
(12.57)
$$\begin{aligned} &c(t+\Delta t)=c(t)+5d(t)+15e(t)=c^{p}(t+\Delta t)+15e(t) \end{aligned}$$
(12.58)
$$\begin{aligned} &d(t+\Delta t)=d(t)+6e(t)=d^{p}(t+\Delta t)+6e(t) \end{aligned}$$
(12.59)

where the next term of the Taylor series \(e(t)=\frac{\Delta t^{5}}{6!}\frac{\mathrm {d}^{6}}{\mathrm {d}t^{6}}Y(t)\) has been introduced as an approximation to the truncation error of the predicted values \(Y^{p},g^{p}\), etc. It can be estimated from the second equation

$$ e=\frac{1}{6} \bigl[f\bigl(Y^{p}(t+\Delta t),t+\Delta t \bigr)-g^{p}(t+\Delta t) \bigr]=\frac{1}{6}\delta f. $$
(12.60)

This predictor-corrector method turns out to be rather unstable. However, stability can be achieved by slightly modifying the coefficients of the corrector step. Nordsieck suggested using

$$\begin{aligned} &Y(t+\Delta t)=Y^{p}(t+\Delta t)+\frac{95}{288}\delta f \end{aligned}$$
(12.61)
$$\begin{aligned} &a(t+\Delta t)=a^{p}(t+\Delta t)+\frac{25}{24}\delta f \end{aligned}$$
(12.62)
$$\begin{aligned} &b(t+\Delta t)=b^{p}(t+\Delta t)+\frac{35}{72}\delta f \end{aligned}$$
(12.63)
$$\begin{aligned} &c(t+\Delta t)=c^{p}(t+\Delta t)+\frac{5}{48}\delta f \end{aligned}$$
(12.64)
$$\begin{aligned} &d(t+\Delta t)=d^{p}(t+\Delta t)+\frac{1}{120}\delta f. \end{aligned}$$
(12.65)

6.2 Gear Predictor-Corrector Methods

Gear [101] designed special methods for molecular dynamics simulations (Chap. 14), where Newton's law (12.15) has to be solved numerically. He again uses a truncated Taylor expansion for the predictor step

$$\begin{aligned} &\mathbf{r}(t+\Delta t)=\mathbf{r}(t)+\mathbf{v}(t)\Delta t+\mathbf{a}(t)\frac{\Delta t^{2}}{2}+ \dot{\mathbf{a}}(t)\frac{\Delta t^{3}}{6}+\ddot{\mathbf{a}}(t)\frac{\Delta t^{4}}{24}+\cdots \end{aligned}$$
(12.66)
$$\begin{aligned} &\mathbf{v}(t+\Delta t)=\mathbf{v}(t)+\mathbf{a}(t)\Delta t+\dot{\mathbf{a}}(t) \frac{\Delta t^{2}}{2}+\ddot{\mathbf{a}}(t)\frac{\Delta t^{3}}{6}+\cdots \end{aligned}$$
(12.67)
$$\begin{aligned} &\mathbf{a}(t+\Delta t)=\mathbf{a}(t)+\dot{\mathbf{a}}(t)\Delta t+\ddot{\mathbf{a}}(t) \frac{\Delta t^{2}}{2}+\cdots \end{aligned}$$
(12.68)
$$\begin{aligned} &\dot{\mathbf{a}}(t+\Delta t)=\dot{\mathbf{a}}(t)+\ddot{\mathbf{a}}(t)\Delta t+\cdots\\ &\vdots \end{aligned}$$
(12.69)

to calculate the predicted values \(\mathbf{r}_{n+1}^{p},\mathbf{v}_{n+1}^{p},\mathbf{a}_{n+1}^{p},\dots\) (Fig. 12.7). The difference between the predicted acceleration and the acceleration calculated using the predicted coordinates

$$ \delta\mathbf{a}_{n+1}=\mathbf{a}\bigl(\mathbf{r}_{n+1}^{p},t+ \Delta t\bigr)-\mathbf{a}_{n+1}^{p} $$
(12.70)

is then used as a measure of the error to correct the predicted values according to

$$\begin{aligned} &\mathbf{r}_{n+1}=\mathbf{r}_{n+1}^{p}+c_{1} \delta\mathbf{a}_{n+1} \end{aligned}$$
(12.71)
$$\begin{aligned} &\mathbf{v}_{n+1}=\mathbf{v}_{n+1}^{p}+c_{2} \delta\mathbf{a}_{n+1}\\ &\vdots \end{aligned}$$
(12.72)

The coefficients \(c_{i}\) were determined to optimize stability and accuracy. For instance, the fourth order Gear corrector reads

$$\begin{aligned} &\mathbf{r}_{n+1}=\mathbf{r}_{n+1}^{p}+\frac{\Delta t^{2}}{12} \delta\mathbf{a}_{n+1} \end{aligned}$$
(12.73)
$$\begin{aligned} &\mathbf{v}_{n+1}=\mathbf{v}_{n+1}^{p}+\frac{5\Delta t}{12} \delta\mathbf{a}_{n+1} \end{aligned}$$
(12.74)
$$\begin{aligned} &\dot{\mathbf{a}}_{n+1}=\dot{\mathbf{a}}_{n}+\frac{1}{\Delta t} \delta\mathbf{a}_{n+1}. \end{aligned}$$
(12.75)
Fig. 12.7 (Gear predictor-corrector method)  The difference between the predicted acceleration \(\mathbf{a}^{p}\) and the acceleration calculated for the predicted coordinates \(\mathbf{a}(\mathbf{r}^{p})\) is used as a measure of the error to estimate the correction \(\delta\mathbf{r}\)

Gear methods are generally not time reversible and show systematic energy drifts. A reversible symplectic predictor-corrector method has been presented recently by Martyna and Tuckerman [169].

7 Runge-Kutta Methods

If higher derivatives are not so easily available, they can be approximated by numerical differences. f is evaluated at several trial points and the results are combined to reproduce the Taylor series as closely as possible [48].

7.1 Second Order Runge-Kutta Method

Let us begin with two function values. As is common in the literature, we denote the function values by \(K_{1},K_{2},\ldots\) From the gradient at time \(t_{n}\)

$$ K_{1}=f_{n}=f\bigl(Y(t_{n}),t_{n} \bigr) $$
(12.76)

we estimate the state vector at time t n t as

$$ Y(t_{n}+\Delta t)\approx Y(t_{n})+\Delta t\, K_{1}. $$
(12.77)

The gradient at time t n t is approximately

$$ K_{2}=f\bigl(Y(t_{n})+\Delta t\, K_{1},t_{n}+ \Delta t\bigr) $$
(12.78)

which has the Taylor series expansion

$$ K_{2}=f_{n}+\bigl(\dot{f}_{n}+f'_{n}f_{n} \bigr)\Delta t+\cdots $$
(12.79)

and application of the trapezoidal rule (4.13) gives the 2nd order Runge-Kutta method

$$ Y_{n+1}=Y_{n}+\frac{\Delta t}{2}(K_{1}+K_{2}) $$
(12.80)

which in fact coincides with the improved Euler or Heun method. Taylor series expansion shows how the combination of \(K_{1}\) and \(K_{2}\) leads to an expression of higher error order:

$$\begin{aligned} Y_{n+1} =&Y_{n}+\frac{\Delta t}{2}\bigl(f_{n}+f_{n}+ \bigl(\dot{f}_{n}+f'_{n}f_{n}\bigr) \Delta t+\cdots\bigr) \\ =&Y_{n}+f_{n}\Delta t+\frac{\mathrm {d}f_{n}}{\mathrm {d}t}\frac{\Delta t^{2}}{2}+ \cdots. \end{aligned}$$
(12.81)

7.2 Third Order Runge-Kutta Method

The accuracy can be further improved by calculating one additional function value at mid-time. From (12.76) we estimate the gradient at mid-time by

$$\begin{aligned} K_{2} =&f\biggl(Y(t)+\frac{\Delta t}{2}K_{1},t+ \frac{\Delta t}{2}\biggr) \\ =&f_{n}+\bigl(\dot{f}_{n}+f'_{n}f_{n} \bigr)\frac{\Delta t}{2}+\bigl(\ddot{f}_{n}+f''_{n}f_{n}^{2}+2 \dot{f}'_{n}f_{n}\bigr)\frac{\Delta t^{2}}{8}+ \cdots . \end{aligned}$$
(12.82)

The gradient at time t n t is then estimated as

$$\begin{aligned} K_{3} =&f\bigl(Y(t_{n})+\Delta t(2K_{2}-K_{1}),t_{n}+ \Delta t\bigr) \\ =&f_{n}+\dot{f}_{n}\Delta t+f'_{n}(2K_{2}-K_{1}) \Delta t+\ddot{f}_{n}\frac{\Delta t^{2}}{2} \\ &{}+f''_{n}\frac{(2K_{2}-K_{1})^{2}\Delta t^{2}}{2}+2 \dot{f}'_{n}\frac{(2K_{2}-K_{1})\Delta t^{2}}{2}+\cdots. \end{aligned}$$
(12.83)

Inserting the expansion (12.82) gives the leading terms

$$ K_{3}=f_{n}+\bigl(\dot{f}_{n}+f'_{n}f_{n} \bigr)\Delta t+\bigl(2f_{n}^{\prime\,2}f_{n}+f''_{n}f_{n}^{2}+ \ddot{f}_{n}+2f'_{n}\dot{f}_{n}+2 \dot{f}'_{n}f_{n}\bigr)\frac{\Delta t^{2}}{2}+\cdots. $$
(12.84)

Applying Simpson’s rule (4.14) we combine the three gradients to get the 3rd order Runge-Kutta method

$$ Y_{n+1}=Y(t_{n})+\frac{\Delta t}{6}(K_{1}+4K_{2}+K_{3}) $$
(12.85)

where the Taylor series

$$\begin{aligned} Y_{n+1} =&Y(t_{n})+\frac{\Delta t}{6} \bigl(6f_{n}+3 \bigl(\dot{f}_{n}+f_{n}f'_{n}\bigr) \Delta t \\ &{}+\bigl(f_{n}^{\prime\,2}f_{n}+f''_{n}f_{n}^{2}+2 \dot{f}'_{n}f_{n}+f'_{n}\dot{f}_{n}+ \ddot{f}_{n}\bigr)\Delta t^{2}+ \cdots \bigr) \\ =&Y(t_{n}+\Delta t)+O\bigl(\Delta t^{4}\bigr) \end{aligned}$$
(12.86)

recovers the exact Taylor series (12.44) including all terms up to order \(\Delta t^{3}\).

7.3 Fourth Order Runge-Kutta Method

The 4th order Runge-Kutta method (RK4) is often used because of its robustness and accuracy. It uses two different approximations for the midpoint

$$\begin{aligned} K_{1} = & f\bigl(Y(t_{n}),t_{n}\bigr) \\ K_{2} = & f\biggl(Y(t_{n})+\frac{K_{1}}{2}\Delta t,t_{n}+\frac{\Delta t}{2}\biggr) \\ K_{3} = & f\biggl(Y(t_{n})+\frac{K_{2}}{2}\Delta t,t_{n}+\frac{\Delta t}{2}\biggr) \\ K_{4} = & f\bigl(Y(t_{n})+K_{3}\Delta t,t_{n}+\Delta t\bigr) \end{aligned}$$

and Simpson’s rule (4.14) to obtain

$$Y_{n+1}=Y(t_{n})+\frac{\Delta t}{6} (K_{1}+2K_{2}+2K_{3}+K_{4} )=Y(t_{n}+\Delta t)+O\bigl(\Delta t^{5}\bigr). $$

Expansion of the Taylor series is cumbersome but with the help of an algebra program one can easily check that the error is of order \(\Delta t^{5}\).
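In code, one RK4 step is only a few lines. A minimal sketch (plain Python; it works equally for NumPy state vectors):

```python
def rk4_step(f, Y, t, dt):
    """One step of the classical 4th order Runge-Kutta method."""
    K1 = f(Y, t)
    K2 = f(Y + 0.5 * dt * K1, t + 0.5 * dt)
    K3 = f(Y + 0.5 * dt * K2, t + 0.5 * dt)
    K4 = f(Y + dt * K3, t + dt)
    return Y + dt / 6.0 * (K1 + 2.0 * K2 + 2.0 * K3 + K4)
```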

8 Quality Control and Adaptive Step Size Control

For practical applications it is necessary to have an estimate for the local error and to adjust the step size properly. With the Runge-Kutta method this can be achieved by a step doubling procedure: we calculate \(Y_{n+2}\) first by two steps \(\Delta t\) and then by one step \(2\Delta t\). This needs 11 function evaluations as compared to 8 for the two smaller steps alone (Fig. 12.8). For the 4th order method we estimate the following errors:

$$\begin{aligned} \Delta \bigl(Y_{n+2}^{(\Delta t)} \bigr) =&2a\Delta t^{5} \end{aligned}$$
(12.87)
$$\begin{aligned} \Delta \bigl(Y_{n+2}^{(2\Delta t)} \bigr) =&a(2\Delta t)^{5}. \end{aligned}$$
(12.88)

The local error can be estimated from

$$\begin{aligned} &\bigl \vert Y_{n+2}^{(\Delta t)}-Y_{n+2}^{(2\Delta t)} \bigr \vert =30|a|\Delta t^{5}\\ &\Delta \bigl(Y_{n+1}^{(\Delta t)} \bigr)=a\Delta t^{5}= \frac{|Y_{n+2}^{(\Delta t)}-Y_{n+2}^{(2\Delta t)}|}{30}. \end{aligned}$$

The step size Δt can now be adjusted to keep the local error within the desired limits.

Fig. 12.8 Step doubling with the fourth order Runge-Kutta method
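A sketch of the step doubling estimate, using the rk4_step sketch from Sect. 12.7.3 (the tolerance handling is an assumption of this sketch, not prescribed by the text):

```python
import numpy as np

def rk4_double_step(f, Y, t, dt, tol=1e-8):
    """Advance from t to t + 2*dt and estimate the local error by
    comparing two steps dt with one step 2*dt, cf. (12.87)-(12.88)."""
    Y_two = rk4_step(f, rk4_step(f, Y, t, dt), t + dt, dt)  # two steps dt
    Y_one = rk4_step(f, Y, t, 2.0 * dt)                     # one step 2*dt
    error = np.max(np.abs(Y_two - Y_one)) / 30.0            # local error
    # the local error scales as dt**5, so rescale the step accordingly
    dt_new = dt * (tol / error) ** 0.2 if error > 0.0 else 2.0 * dt
    return Y_two, dt_new
```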

9 Extrapolation Methods

Application of the extrapolation method to calculate the integral \(\int_{t_{n}}^{t_{n+1}}f(t)\,\mathrm{d}t\) produces very accurate results but can also be time consuming. The famous Gragg-Bulirsch-Stoer method [244] starts from an explicit midpoint rule with a special starting procedure. The interval Δt is divided into a sequence of N sub-steps

$$ h=\frac{\Delta t}{N}. $$
(12.89)

First a simple Euler step is performed

$$ \begin{aligned}[c] & u_{0}=Y(t_{n})\\ &u_{1}=u_{0}+h\, f(u_{0},t_{n}) \end{aligned} $$
(12.90)

and then the midpoint rule is applied repeatedly to obtain

$$ u_{j+1}=u_{j-1}+2h\, f(u_{j},t_{n}+jh) \quad j=1,2,\dots, N-1. $$
(12.91)

Gragg [111] introduced a smoothing procedure to remove oscillations of the leading error term by defining

$$ v_{j}=\frac{1}{4}u_{j-1}+\frac{1}{2}u_{j}+ \frac{1}{4}u_{j+1}. $$
(12.92)

He showed that both approximations (12.91, 12.92) have an asymptotic expansion in powers of h 2 and are therefore well suited for an extrapolation method. The modified midpoint method can be summarized as follows:

$$\begin{aligned} \begin{aligned}[c] &u_{0}=Y(t_{n})\\ &u_{1}=u_{0}+h\, f(u_{0},t_{n})\\ &u_{j+1}=u_{j-1}+2h f(u_{j},t_{n}+jh) \quad j=1,2,\ldots, N-1\\ &Y(t_{n}+\Delta t)\approx\frac{1}{2} \bigl(u_{N}+u_{N-1}+h f(u_{N},t_{n}+\Delta t) \bigr). \end{aligned} \end{aligned}$$
(12.93)

The number of sub-steps N is increased according to a sequence like

$$ N=2,4,6,8,12,16,24,32,48,64\cdots\quad N_{j}=2N_{j-2}\quad \mbox{Bulirsch-Stoer sequence} $$
(12.94)

or

$$N=2,4,6,8,10,12\cdots\quad N_{j}=2j\quad\mbox{Deuflhard sequence}. $$

After each successive N is tried, a polynomial extrapolation is attempted. This extrapolation returns both the extrapolated values and an error estimate. If the error is still too large then N has to be increased further. A more detailed discussion can be found in [233, 234].
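The modified midpoint method (12.93) is the work horse which such an extrapolation driver calls with increasing N. A sketch:

```python
def modified_midpoint(f, Y, t, dt, N):
    """Gragg's modified midpoint method (12.93) with N sub-steps."""
    h = dt / N
    u_prev = Y                                    # u_0
    u = u_prev + h * f(u_prev, t)                 # u_1: starting Euler step
    for j in range(1, N):                         # midpoint rule: u_2 ... u_N
        u_prev, u = u, u_prev + 2.0 * h * f(u, t + j * h)
    return 0.5 * (u + u_prev + h * f(u, t + dt))  # smoothing end step
```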

10 Linear Multistep Methods

All methods discussed so far evaluate one or more values of the gradient f(Y(t),t) only within the interval \(t_{n}\dots t_{n}+\Delta t\). If the state vector changes sufficiently smoothly, multistep methods can be applied. Linear multistep methods use a combination of function values \(Y_{n}\) and gradients \(f_{n}\) from several steps

$$ Y_{n+1}=\sum_{j=1}^{k} ( \alpha_{j}Y_{n-j+1}+\beta_{j}f_{n-j+1}\Delta t )+\beta_{0}f_{n+1}\Delta t $$
(12.95)

where the coefficients α, β are determined such that polynomials up to a certain order r are integrated exactly. The method is explicit if \(\beta_{0}=0\) and implicit otherwise. Multistep methods have a small local error and need fewer function evaluations. On the other hand, they have to be combined with other methods (like Runge-Kutta) to start and end properly, and it can be rather complicated to change the step size during the calculation. Three families of linear multistep methods are commonly used: explicit Adams-Bashforth methods, implicit Adams-Moulton methods and backward differentiation formulas (also known as Gear formulas [102]).

10.1 Adams-Bashforth Methods

The explicit Adams-Bashforth method of order r uses the gradients at the current and the r−1 previous time steps (Fig. 12.9) to obtain the interpolating polynomial

$$ p(t_{n})=f(Y_{n},t_{n})\cdots p(t_{n-r+1})=f(Y_{n-r+1},t_{n-r+1}) $$
(12.96)

and to calculate the approximation

$$Y_{n+1}-Y_{n}\approx\int_{t_{n}}^{t_{n+1}}p(t)\,\mathrm{d}t $$

which is generally a linear combination of \(f_{n}\cdots f_{n-r+1}\). For example, the Adams-Bashforth formulas of order 2, 3, 4 are:

$$\begin{aligned} \begin{aligned}[c] Y_{n+1}-Y_{n}&=\frac{\Delta t}{2}(3f_{n}-f_{n-1})+O \bigl(\Delta t^{3}\bigr)\\ Y_{n+1}-Y_{n}&=\frac{\Delta t}{12}(23f_{n}-16f_{n-1}+5f_{n-2})+O \bigl(\Delta t^{4}\bigr)\\ Y_{n+1}-Y_{n}&=\frac{\Delta t}{24}(55f_{n}-59f_{n-1}+37f_{n-2}-9f_{n-3})+O \bigl(\Delta t^{5}\bigr). \end{aligned} \end{aligned}$$
(12.97)
Fig. 12.9 Adams-Bashforth method
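For example, the fourth order formula from (12.97) in code, with RK4 start-up steps while the gradient history is still too short (the rk4_step sketch from Sect. 12.7.3 is assumed to be available):

```python
def adams_bashforth4(f, Y0, t0, dt, n_steps):
    """4th order Adams-Bashforth (12.97), started with RK4 steps."""
    Y, t = Y0, t0
    hist = [f(Y, t)]                 # stored gradients f_n, f_{n-1}, ...
    for n in range(n_steps):
        if len(hist) < 4:            # start-up: not enough history yet
            Y = rk4_step(f, Y, t, dt)
        else:
            fn, fn1, fn2, fn3 = hist[-1], hist[-2], hist[-3], hist[-4]
            Y = Y + dt / 24.0 * (55*fn - 59*fn1 + 37*fn2 - 9*fn3)
        t += dt
        hist.append(f(Y, t))
    return Y
```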

10.2 Adams-Moulton Methods

The implicit Adams-Moulton method also uses the as yet unknown value \(Y_{n+1}\) (Fig. 12.10) to obtain the polynomial

$$ p(t_{n+1})=f_{n+1}\cdots p(t_{n-r+2})=f_{n-r+2}. $$
(12.98)

The corresponding Adams-Moulton formulas of order 2 to 4 are:

$$\begin{aligned} Y_{n+1}-Y_{n} =&\frac{\Delta t}{2}(f_{n+1}+f_{n})+O \bigl(\Delta t^{3}\bigr) \\ Y_{n+1}-Y_{n} =&\frac{\Delta t}{12}(5f_{n+1}+8f_{n}-f_{n-1})+O \bigl(\Delta t^{4}\bigr) \end{aligned}$$
(12.99)
$$\begin{aligned} Y_{n+1}-Y_{n} =&\frac{\Delta t}{24}(9f_{n+1}+19f_{n}-5f_{n-1}+f_{n-2})+O \bigl(\Delta t^{5}\bigr). \end{aligned}$$
(12.100)
Fig. 12.10 Adams-Moulton method

10.3 Backward Differentiation (Gear) Methods

Gear methods [102] are implicit and usually combined with a modified Newton method. They make use of previous function values \(Y_{n},Y_{n-1},\dots\) and the gradient \(f_{n+1}\) at time \(t_{n}+\Delta t\). Only methods of order \(r\le6\) are stable and useful. The general formula (12.95) is

$$ Y_{n+1}=\sum_{j=1}^{r} \alpha_{j}Y_{n-j+1}+\beta_{0}f_{n+1}\Delta t. $$
(12.101)

For r=1 this becomes

$$ Y_{n+1}=\alpha_{1}Y_{n}+\beta_{0}f_{n+1} \Delta t $$
(12.102)

and all linear polynomials

$$ p=p_{0}+p_{1}(t-t_{n}),\quad \frac{\mathrm {d}p}{\mathrm {d}t}=p_{1} $$
(12.103)

are integrated exactly if

$$ p_{0}+p_{1}\Delta t=\alpha_{1}p_{0}+ \beta_{0}p_{1}\Delta t $$
(12.104)

which is the case for

$$ \alpha_{1}=1,\quad\beta_{0}=1. $$
(12.105)

Hence the first order Gear method is

$$ Y_{n+1}=Y_{n}+f_{n+1}\Delta t+O\bigl(\Delta t^{2}\bigr) $$
(12.106)

which coincides with the implicit Euler method. The higher order stable Gear methods are given by

$$\begin{aligned} &r=2\colon\quad Y_{n+1}=\frac{4}{3}Y_{n}- \frac{1}{3}Y_{n-1}+\frac{2}{3}f_{n+1}\Delta t+O \bigl(\Delta t^{3}\bigr) \end{aligned}$$
(12.107)
$$\begin{aligned} &r=3\colon\quad Y_{n+1}=\frac{18}{11}Y_{n}- \frac{9}{11}Y_{n-1}+\frac{2}{11}Y_{n-2}+ \frac{6}{11}f_{n+1}\Delta t \\ &\hphantom{r=3\colon\quad Y_{n+1}={}}+O\bigl(\Delta t^{4}\bigr) \end{aligned}$$
(12.108)
$$\begin{aligned} &r=4\colon\quad Y_{n+1}=\frac{48}{25}Y_{n}- \frac{36}{25}Y_{n-1}+\frac{16}{25}Y_{n-2} \\ &\hphantom{r=4\colon\quad Y_{n+1}={}}- \frac{3}{25}Y_{n-3}+\frac{12}{25}f_{n+1}\Delta t+O \bigl(\Delta t^{5}\bigr) \end{aligned}$$
(12.109)
$$\begin{aligned} &r=5\colon\quad Y_{n+1}=\frac{300}{137}Y_{n}- \frac{300}{137}Y_{n-1}+\frac{200}{137}Y_{n-2}- \frac{75}{137}Y_{n-3} \\ &\hphantom{r=5\colon\quad Y_{n+1}={}} +\frac{12}{137}Y_{n-4}+\frac{60}{137}f_{n+1}\Delta t+O \bigl(\Delta t^{6}\bigr) \end{aligned}$$
(12.110)
$$\begin{aligned} &r=6\colon\quad Y_{n+1}=\frac{120}{49}Y_{n}- \frac{150}{49}Y_{n-1}+\frac{400}{147}Y_{n-2}- \frac{75}{49}Y_{n-3} +\frac{24}{49}Y_{n-4} \\ &\hphantom{r=6\colon\quad Y_{n+1}={}} -\frac{10}{147}Y_{n-5}+ \frac{20}{49}f_{n+1}\Delta t+O\bigl(\Delta t^{7}\bigr). \end{aligned}$$
(12.111)

This class of algorithms is also useful for stiff problems (differential equations whose eigenvalues differ strongly in magnitude).
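As an illustration, the second order formula (12.107) can be solved for \(Y_{n+1}\) with Newton's method. This scalar sketch (names and iteration count chosen here) shows why the implicit formulas remain stable for a stiff test equation:

```python
def bdf2_step(f, dfdY, Y_n, Y_nm1, t_np1, dt, newton_iters=5):
    """One step of the second order Gear (BDF2) formula (12.107),
    solved for Y_{n+1} with Newton's method (scalar sketch)."""
    Y = Y_n                                        # initial guess
    for _ in range(newton_iters):
        g = Y - 4/3*Y_n + 1/3*Y_nm1 - 2/3*dt*f(Y, t_np1)
        dg = 1.0 - 2/3*dt*dfdY(Y, t_np1)
        Y -= g / dg                                # Newton update
    return Y

# stiff test equation dY/dt = -1000*Y stays stable even for large dt
f = lambda Y, t: -1000.0 * Y
dfdY = lambda Y, t: -1000.0
```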

10.4 Predictor-Corrector Methods

The Adams-Bashforth-Moulton method combines the explicit method as a predictor step to calculate an estimate \(Y_{n+1}^{p}\) with a corrector step using the implicit method of the same order. The general class of linear multistep predictor-corrector methods [100] uses a predictor step

$$ Y_{n+1}^{(0)}=\sum_{j=1}^{k} \bigl(\alpha_{j}^{(p)}Y_{n-j+1}+\beta_{j}^{(p)}f_{n-j+1} \Delta t \bigr) $$
(12.112)

which is corrected using the formula

$$ Y_{n+1}^{(1)}=\sum_{j=1}^{k} \bigl(\alpha_{j}^{(c)}Y_{n-j+1}+\beta_{j}^{(c)}f_{n-j+1} \Delta t \bigr)+\beta_{0}f\bigl(Y_{n+1}^{(0)},t_{n+1} \bigr)\Delta t $$
(12.113)

and further iterations

$$\begin{aligned} &Y_{n+1}^{(m+1)}=Y_{n+1}^{(m)}- \beta_{0} \bigl[f\bigl(Y_{n+1}^{(m-1)},t_{n+1} \bigr)-f\bigl(Y_{n+1}^{(m)},t_{n+1}\bigr) \bigr]\Delta t \\ &\quad m=1\dots M-1 \end{aligned}$$
(12.114)
$$\begin{aligned} &Y_{n+1}=Y_{n+1}^{(M)},\qquad\dot{Y}_{n+1}=f \bigl(Y_{n+1}^{(M-1)},t_{n+1}\bigr). \end{aligned}$$
(12.115)

The coefficients α,β have to be determined to optimize accuracy and stability.
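A minimal instance of this scheme is the pairing of the second order Adams-Bashforth predictor (12.97) with the second order Adams-Moulton (trapezoidal) corrector (12.99), applied once per step (a sketch with names chosen here):

```python
def abm2_step(f, Y_n, f_n, f_nm1, t_n, dt):
    """Adams-Bashforth-Moulton pair of order 2: AB2 predictor,
    AM2 (trapezoidal) corrector, one correction per step."""
    Y_p = Y_n + dt / 2.0 * (3.0 * f_n - f_nm1)   # predictor (12.97)
    f_p = f(Y_p, t_n + dt)
    Y_c = Y_n + dt / 2.0 * (f_p + f_n)           # corrector (12.99)
    return Y_c, f(Y_c, t_n + dt)                 # new value and gradient
```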

11 Verlet Methods

For classical molecular dynamics simulations it is necessary to calculate very long trajectories. Here a family of symplectic methods is often used which conserve the phase space volume [3, 116, 193, 255, 257, 265]. The equations of motion of a classical interacting N-body system are

$$ m_{i}\ddot{\mathbf{x}}_{i}=F_{i} $$
(12.116)

where the force acting on atom i can be calculated once a specific force field is chosen. Let us write these equations as a system of first order differential equations

$$ \left (\begin{array}{c} \dot{\mathbf{x}}_{i}\\ \dot{\mathbf{v}}_{i} \end{array} \right )=\left (\begin{array}{c} \mathbf{v}_{i}\\ \mathbf{a}_{i} \end{array} \right ) $$
(12.117)

where x(t) and v(t) are functions of time and the forces \(m\,\mathbf{a}(\mathbf{x}(t))\) are functions of the time dependent coordinates.

11.1 Liouville Equation

We rewrite (12.117) as

$$ \left (\begin{array}{c} \dot{\mathbf{x}}\\ \dot{\mathbf{v}} \end{array} \right )=\mathcal{L}\left (\begin{array}{c} \mathbf{x}\\ \mathbf{v} \end{array} \right ) $$
(12.118)

where the Liouville operator \(\mathcal{L}\) acts on the vector containing all coordinates and velocities:

$$ \mathcal{L}\left (\begin{array}{c} \mathbf{x}\\ \mathbf{v} \end{array} \right )= \biggl(\mathbf{v}\frac{\partial}{\partial\mathbf{x}}+ \mathbf{a}\frac{\partial}{\partial\mathbf{v}} \biggr)\left (\begin{array}{c} \mathbf{x}\\ \mathbf{v} \end{array} \right ). $$
(12.119)

The Liouville equation (12.118) can be formally solved by

$$ \left (\begin{array}{c} \mathbf{x}(t)\\ \mathbf{v}(t) \end{array} \right )=e^{\mathcal{L}t}\left (\begin{array}{c} \mathbf{x}(0)\\ \mathbf{v}(0) \end{array} \right ). $$
(12.120)

For a better understanding let us evaluate the first few terms of the Taylor series of the exponential:

$$\begin{aligned} &\mathcal{L}\left (\begin{array}{c} \mathbf{x}\\ \mathbf{v} \end{array} \right )= \biggl(\mathbf{v}\frac{\partial}{\partial\mathbf{x}}+ \mathbf{a}\frac{\partial}{\partial\mathbf{v}} \biggr)\left (\begin{array}{c} \mathbf{x}\\ \mathbf{v} \end{array} \right )= \left (\begin{array}{c} \mathbf{v}\\ \mathbf{a} \end{array} \right ) \end{aligned}$$
(12.121)
$$\begin{aligned} &\mathcal{L}^{2}\left (\begin{array}{c} \mathbf{x}\\ \mathbf{v} \end{array} \right )= \biggl(\mathbf{v} \frac{\partial}{\partial\mathbf{x}}+\mathbf{a}\frac{\partial}{\partial\mathbf{v}} \biggr)\left (\begin{array}{c} \mathbf{v}\\ \mathbf{a}(\mathbf{x}) \end{array} \right )=\left (\begin{array}{c} \mathbf{a}\\ \mathbf{v}\frac{\partial}{\partial\mathbf{x}}\mathbf{a} \end{array} \right ) \end{aligned}$$
(12.122)
$$\begin{aligned} &\mathcal{L}^{3}\left (\begin{array}{c} \mathbf{x}\\ \mathbf{v} \end{array} \right )= \biggl(\mathbf{v} \frac{\partial}{\partial\mathbf{x}}+\mathbf{a}\frac{\partial}{\partial\mathbf{v}} \biggr)\left (\begin{array}{c} \mathbf{a}\\ \mathbf{v}\frac{\partial}{\partial\mathbf{x}}\mathbf{a} \end{array} \right )=\left (\begin{array}{c} \mathbf{v}\frac{\partial}{\partial\mathbf{x}}\mathbf{a}\\ \mathbf{a}\frac{\partial}{\partial\mathbf{x}}\mathbf{a}+\mathbf{v}\mathbf{v}\frac{\partial}{\partial\mathbf{x}}\frac{\partial}{\partial\mathbf{x}}\mathbf{a} \end{array} \right ). \end{aligned}$$
(12.123)

But since

$$\begin{aligned} &\frac{\mathrm {d}}{\mathrm {d}t}\mathbf{a}\bigl(\mathbf{x}(t)\bigr)=\mathbf{v} \frac{\partial}{\partial\mathbf{x}}\mathbf{a} \end{aligned}$$
(12.124)
$$\begin{aligned} &\frac{\mathrm {d}^{2}}{\mathrm {d}t^{2}}\mathbf{a}\bigl(\mathbf{x}(t)\bigr)=\frac{\mathrm {d}}{\mathrm {d}t}\biggl( \mathbf{v}\frac{\partial}{\partial\mathbf{x}}\mathbf{a}\biggr)=\mathbf{a}\frac{\partial}{\partial\mathbf{x}} \mathbf{a}+\mathbf{v}\mathbf{v}\frac{\partial}{\partial\mathbf{x}}\frac{\partial}{\partial\mathbf{x}}\mathbf{a} \end{aligned}$$
(12.125)

we recover

$$\begin{aligned} \biggl(1+t\mathcal{L}+\frac{1}{2}t^{2}\mathcal{L}^{2}+ \frac{1}{6}t^{3}\mathcal{L}^{3}+\cdots\biggr) \left (\begin{array}{c} \mathbf{x}\\ \mathbf{v} \end{array} \right )=\left (\begin{array}{c} \mathbf{x}+\mathbf{v}t+\frac{1}{2}t^{2}\mathbf{a}+\frac{1}{6}t^{3}\dot{\mathbf{a}}+\cdots\\ \mathbf{v}+\mathbf{a}t+\frac{1}{2}t^{2}\dot{\mathbf{a}}+\frac{1}{6}t^{3}\ddot{\mathbf{a}}+\cdots \end{array} \right ). \end{aligned}$$
(12.126)

11.2 Split-Operator Approximation

We introduce a small time step Δt=t/N and write

$$ e^{\mathcal{L}t}= \bigl(e^{\mathcal{L}\Delta t} \bigr)^{N}. $$
(12.127)

For the small time step Δt the split-operator approximation can be used which approximately factorizes the exponential operator. For example, write the Liouville operator as the sum of two terms

$$\mathcal{L}_{A}=\mathbf{v}\frac{\partial}{\partial\mathbf{x}}\qquad \mathcal{L}_{B}=\mathbf{a}\frac{\partial}{\partial\mathbf{v}} $$

and make the approximation

$$ e^{\mathcal{L}\Delta t}=e^{\mathcal{L}_{A}\Delta t}e^{\mathcal{L}_{B}\Delta t}+\cdots. $$
(12.128)

Each of the two factors simply shifts positions or velocities

$$ e^{\mathcal{L}_{A}\Delta t}\left (\begin{array}{c} \mathbf{x}\\ \mathbf{v} \end{array} \right )=\left (\begin{array}{c} \mathbf{x}+\mathbf{v}\Delta t\\ \mathbf{v} \end{array} \right )\qquad e^{\mathcal{L}_{B}\Delta t}\left (\begin{array}{c} \mathbf{x}\\ \mathbf{v} \end{array} \right )=\left (\begin{array}{c} \mathbf{x}\\ \mathbf{v}+\mathbf{a}\Delta t \end{array} \right ) $$
(12.129)

since these two steps correspond to either motion with constant velocities or constant coordinates and forces.

11.3 Position Verlet Method

Often the following approximation is used which is symmetrical in time

$$\begin{aligned} e^{\mathcal{L}\Delta t}=e^{\mathcal{L}_{A}\Delta t/2}e^{\mathcal{L}_{B}\Delta t}e^{\mathcal{L}_{A}\Delta t/2}+\cdots. \end{aligned}$$
(12.130)

The corresponding algorithm is the so called position Verlet method (Fig. 12.11):

$$\begin{aligned} &\mathbf{x}_{n+\frac{1}{2}}=\mathbf{x}_{n}+\mathbf{v}_{n} \frac{\Delta t}{2} \end{aligned}$$
(12.131)
$$\begin{aligned} &\mathbf{v}_{n+1}=\mathbf{v}_{n}+\mathbf{a}_{n+\frac{1}{2}} \Delta t=\mathbf{v}(t_{n}+\Delta t)+O\bigl(\Delta t^{3}\bigr) \end{aligned}$$
(12.132)
$$\begin{aligned} &\mathbf{x}_{n+1}=\mathbf{x}_{n+\frac{1}{2}}+\mathbf{v}_{n+1} \frac{\Delta t}{2} \\ &\hphantom{\mathbf{x}_{n+1}}=\mathbf{x}_{n}+\frac{\mathbf{v}_{n}+\mathbf{v}_{n+1}}{2}\Delta t= \mathbf{x}(t_{n}+\Delta t)+O\bigl(\Delta t^{3}\bigr). \end{aligned}$$
(12.133)
Fig. 12.11 (Position Verlet method)  The exact integration path is approximated by two half-steps with constant velocities and one step with constant coordinates

11.4 Velocity Verlet Method

If we exchange operators A and B we have

$$ e^{\mathcal{L}\Delta t}=e^{\mathcal{L}_{B}\Delta t/2}e^{\mathcal{L}_{A}\Delta t}e^{\mathcal{L}_{B}\Delta t/2}+\cdots $$
(12.134)

which produces the velocity Verlet algorithm (Fig. 12.12):

$$\begin{aligned} &\mathbf{v}_{n+\frac{1}{2}}=\mathbf{v}_{n}+\mathbf{a}_{n} \frac{\Delta t}{2} \end{aligned}$$
(12.135)
$$\begin{aligned} &\mathbf{x}_{n+1}=\mathbf{x}_{n}+\mathbf{v}_{n+\frac{1}{2}} \Delta t=\mathbf{x}_{n}+\mathbf{v}_{n}\Delta t+ \mathbf{a}_{n}\frac{\Delta t^{2}}{2} \\ &\hphantom{\mathbf{x}_{n+1}} =\mathbf{x}(t_{n}+\Delta t)+O\bigl(\Delta t^{3}\bigr) \end{aligned}$$
(12.136)
$$\begin{aligned} &\mathbf{v}_{n+1}=\mathbf{v}_{n+\frac{1}{2}}+\mathbf{a}_{n+1} \frac{\Delta t}{2}=\mathbf{v}_{n}+\frac{\mathbf{a}_{n} +\mathbf{a}_{n+1}}{2}\Delta t \\ &\hphantom{\mathbf{v}_{n+1}}= \mathbf{v}(t_{n}+\Delta t)+O\bigl(\Delta t^{3}\bigr). \end{aligned}$$
(12.137)
Fig. 12.12 (Velocity Verlet method)  The exact integration path is approximated by two half-steps with constant coordinates and one step with constant velocities
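In code, one velocity Verlet step needs only a single force evaluation per time step, since \(\mathbf{a}_{n+1}\) can be reused in the next step. A sketch, with acc(x) standing for a user supplied acceleration function:

```python
def velocity_verlet_step(acc, x, v, a, dt):
    """One velocity Verlet step (12.135)-(12.137)."""
    v_half = v + 0.5 * dt * a          # (12.135): half velocity step
    x_new = x + dt * v_half            # (12.136): full coordinate step
    a_new = acc(x_new)                 # single force evaluation
    v_new = v_half + 0.5 * dt * a_new  # (12.137): second half step
    return x_new, v_new, a_new
```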

11.5 Störmer-Verlet Method

The velocity Verlet method is equivalent to Störmer’s version [208] of the Verlet method which is a two step method given by

$$\begin{aligned} &\mathbf{x}_{n+1}=2\mathbf{x}_{n}-\mathbf{x}_{n-1}+ \mathbf{a}_{n}\Delta t^{2} \end{aligned}$$
(12.138)
$$\begin{aligned} &\mathbf{v}_{n}=\frac{\mathbf{x}_{n+1}-\mathbf{x}_{n-1}}{2\Delta t}. \end{aligned}$$
(12.139)

To show the equivalence we add two consecutive position vectors

$$ \mathbf{x}_{n+2}+\mathbf{x}_{n+1}=2\mathbf{x}_{n+1}+2 \mathbf{x}_{n}-\mathbf{x}_{n}-\mathbf{x}_{n-1}+( \mathbf{a}_{n+1}+\mathbf{a}_{n})\Delta t^{2} $$
(12.140)

which simplifies to

$$ \mathbf{x}_{n+2}-\mathbf{x}_{n}-( \mathbf{x}_{n+1}-\mathbf{x}_{n-1})=(\mathbf{a}_{n+1}+\mathbf{a}_{n}) \Delta t^{2}. $$
(12.141)

This can be expressed as the difference of two consecutive velocities:

$$ 2(\mathbf{v}_{n+1}-\mathbf{v}_{n})=(\mathbf{a}_{n+1}+ \mathbf{a}_{n})\Delta t. $$
(12.142)

Now we substitute

$$ \mathbf{x}_{n-1}=\mathbf{x}_{n+1}-2\mathbf{v}_{n} \Delta t $$
(12.143)

to get

$$ \mathbf{x}_{n+1}=2\mathbf{x}_{n}-\mathbf{x}_{n+1}+2 \mathbf{v}_{n}\Delta t+\mathbf{a}_{n}\Delta t^{2} $$
(12.144)

which simplifies to

$$ \mathbf{x}_{n+1}=\mathbf{x}_{n}+\mathbf{v}_{n} \Delta t+\frac{\mathbf{a}_{n}}{2}\Delta t^{2}. $$
(12.145)

Thus the equations of the velocity Verlet algorithm have been recovered. However, since the Verlet method is a 2-step method, the choice of initial values is important. The Störmer-Verlet method starts from two coordinate sets \(\mathbf{x}_{0},\mathbf{x}_{1}\). The first step is

$$\begin{aligned} &\mathbf{x}_{2}=2\mathbf{x}_{1}-\mathbf{x}_{0}+\mathbf{a}_{1} \Delta t^{2} \end{aligned}$$
(12.146)
$$\begin{aligned} &\mathbf{v}_{1}=\frac{\mathbf{x}_{2}-\mathbf{x}_{0}}{2\Delta t}=\frac{\mathbf{x}_{1}-\mathbf{x}_{0}}{\Delta t}+ \frac{\mathbf{a}_{1}}{2}\Delta t. \end{aligned}$$
(12.147)

The velocity Verlet method, on the other hand, starts from one set of coordinates and velocities \(\mathbf{x}_{1},\mathbf{v}_{1}\). Here the first step is

$$\begin{aligned} &\mathbf{x}_{2}=\mathbf{x}_{1}+\mathbf{v}_{1} \Delta t+\mathbf{a}_{1}\frac{\Delta t^{2}}{2} \end{aligned}$$
(12.148)
$$\begin{aligned} &\mathbf{v}_{2}=\mathbf{v}_{1}+\frac{\mathbf{a}_{1}+\mathbf{a}_{2}}{2}\Delta t. \end{aligned}$$
(12.149)

The two methods give the same resulting trajectory if we choose

$$ \mathbf{x}_{0}=\mathbf{x}_{1}-\mathbf{v}_{1} \Delta t+\frac{\mathbf{a}_{1}}{2}\Delta t^{2}. $$
(12.150)

If, on the other hand, \(\mathbf{x}_{0}\) is known with higher precision, the local error order of Störmer's algorithm changes, as can be seen from the addition of the two Taylor series

$$\begin{aligned} &\mathbf{x}(t_{n}+\Delta t)=\mathbf{x}_{n}+ \mathbf{v}_{n}\Delta t+\frac{\mathbf{a}_{n}}{2}\Delta t^{2}+ \frac{\dot{\mathbf{a}}_{n}}{6}\Delta t^{3}+\cdots \end{aligned}$$
(12.151)
$$\begin{aligned} &\mathbf{x}(t_{n}-\Delta t)=\mathbf{x}_{n}- \mathbf{v}_{n}\Delta t+\frac{\mathbf{a}_{n}}{2}\Delta t^{2}- \frac{\dot{\mathbf{a}}_{n}}{6}\Delta t^{3}+\cdots \end{aligned}$$
(12.152)

which gives

$$\begin{aligned} &\mathbf{x}(t_{n}+\Delta t)=2\mathbf{x}(t_{n})- \mathbf{x}(t_{n}-\Delta t)+\mathbf{a}_{n}\Delta t^{2}+O\bigl(\Delta t^{4}\bigr) \end{aligned}$$
(12.153)
$$\begin{aligned} &\frac{\mathbf{x}(t_{n}+\Delta t)-\mathbf{x}(t_{n}-\Delta t)}{2\Delta t}=\mathbf{v}_{n}+O\bigl(\Delta t^{2} \bigr). \end{aligned}$$
(12.154)

11.6 Error Accumulation for the Störmer-Verlet Method

Equation (12.153) gives only the local error of one single step. Assume the start values \(\mathbf{x}_{0}\) and \(\mathbf{x}_{1}\) are exact. The next value \(\mathbf{x}_{2}\) has an error with the leading term \(\Delta x_{2}=\alpha\Delta t^{4}\). If the trajectory is sufficiently smooth and the time step not too large, the coefficient α will vary only slowly and the error of the next few iterations is given by

$$ \begin{aligned}[c] & \Delta x_{3}=2\Delta x_{2}-\Delta x_{1}=2\alpha \Delta t^{4}\\ &\Delta x_{4}=2\Delta x_{3}-\Delta x_{2}=3\alpha \Delta t^{4}\\ &\vdots\\ &\Delta x_{n+1}=n\alpha\Delta t^{4}. \end{aligned} $$
(12.155)

This shows that the effective error order of the Störmer-Verlet method is only \(O(\Delta t^{3})\), similar to the velocity Verlet method.

11.7 Beeman’s Method

Beeman and Schofield [17, 229] introduced a method which is very similar to the Störmer-Verlet method but calculates the velocities with higher precision. This is important if, for instance, the kinetic energy has to be calculated. Starting from the Taylor series

$$ \mathbf{x}_{n+1}=\mathbf{x}_{n}+\mathbf{v}_{n}\Delta t+ \mathbf{a}_{n}\frac{\Delta t^{2}}{2}+\dot{\mathbf{a}}_{n} \frac{\Delta t^{3}}{6}+\ddot{\mathbf{a}}_{n}\frac{\Delta t^{4}}{24}+\cdots $$
(12.156)

the derivative of the acceleration is approximated by a backward difference

$$\begin{aligned} \mathbf{x}_{n+1} =&\mathbf{x}_{n}+\mathbf{v}_{n}\Delta t+ \mathbf{a}_{n}\frac{\Delta t^{2}}{2}+\frac{\mathbf{a}_{n}-\mathbf{a}_{n-1}}{\Delta t}\frac{\Delta t^{3}}{6}+O \bigl(\Delta t^{4}\bigr) \\ =&\mathbf{x}_{n}+\mathbf{v}_{n}\Delta t+\frac{4\mathbf{a}_{n}-\mathbf{a}_{n-1}}{6}\Delta t^{2}+O\bigl(\Delta t^{4}\bigr). \end{aligned}$$
(12.157)

This equation can be used as an explicit step to update the coordinates or as a predictor step in combination with the implicit corrector step

$$\begin{aligned} \mathbf{x}_{n+1} =&\mathbf{x}_{n}+\mathbf{v}_{n}\Delta t+ \mathbf{a}_{n}\frac{\Delta t^{2}}{2}+\frac{\mathbf{a}_{n+1}-\mathbf{a}_{n}}{\Delta t}\frac{\Delta t^{3}}{6}+O \bigl(\Delta t^{4}\bigr) \\ =&\mathbf{x}_{n}+\mathbf{v}_{n}\Delta t+\frac{\mathbf{a}_{n+1}+2\mathbf{a}_{n}}{6}\Delta t^{2}+O\bigl(\Delta t^{4}\bigr) \end{aligned}$$
(12.158)

which can be applied repeatedly (usually two iterations are sufficient). Similarly, the Taylor series of the velocity is approximated by

$$\begin{aligned} \mathbf{v}_{n+1} =&\mathbf{v}_{n}+\mathbf{a}_{n}\Delta t+ \dot{\mathbf{a}}_{n}\frac{\Delta t^{2}}{2}+\ddot{\mathbf{a}}_{n} \frac{\Delta t^{3}}{6}+\cdots \\ =&\mathbf{v}_{n}+\mathbf{a}_{n}\Delta t+ \biggl( \frac{\mathbf{a}_{n+1}-\mathbf{a}_{n}}{\Delta t}+O(\Delta t) \biggr)\frac{\Delta t^{2}}{2}+\cdots \\ =&\mathbf{v}_{n}+\frac{\mathbf{a}_{n+1}+\mathbf{a}_{n}}{2}\Delta t+O\bigl(\Delta t^{3} \bigr). \end{aligned}$$
(12.159)

Inserting the velocity from (12.158) we obtain the corrector step for the velocity

$$\begin{aligned} \mathbf{v}_{n+1} =&\frac{\mathbf{x}_{n+1}-\mathbf{x}_{n}}{\Delta t}-\frac{\mathbf{a}_{n+1}+2\mathbf{a}_{n}}{6}\Delta t+ \frac{\mathbf{a}_{n+1}+\mathbf{a}_{n}}{2}\Delta t+O\bigl(\Delta t^{3}\bigr) \\ =&\frac{\mathbf{x}_{n+1}-\mathbf{x}_{n}}{\Delta t}+\frac{2\mathbf{a}_{n+1}+\mathbf{a}_{n}}{6}\Delta t+O\bigl(\Delta t^{3} \bigr). \end{aligned}$$
(12.160)

In combination with (12.157) this can be replaced by

$$\begin{aligned} \mathbf{v}_{n+1} =&\mathbf{v}_{n}+\frac{4\mathbf{a}_{n}-\mathbf{a}_{n-1}}{6}\Delta t+ \frac{2\mathbf{a}_{n+1}+\mathbf{a}_{n}}{6}\Delta t+O\bigl(\Delta t^{3}\bigr) \\ =&\mathbf{v}_{n}+\frac{2\mathbf{a}_{n+1}+5\mathbf{a}_{n}-\mathbf{a}_{n-1}}{6}\Delta t+O\bigl(\Delta t^{3}\bigr). \end{aligned}$$
(12.161)

Together, (12.157) and (12.161) provide an explicit method which is usually understood as Beeman’s method. Inserting the velocity (12.160) from the previous step

$$ \mathbf{v}_{n}=\frac{\mathbf{x}_{n}-\mathbf{x}_{n-1}}{\Delta t}+\frac{2\mathbf{a}_{n}+\mathbf{a}_{n-1}}{6}\Delta t+O\bigl( \Delta t^{3}\bigr) $$
(12.162)

into (12.157) gives

$$ \mathbf{x}_{n+1}=2\mathbf{x}_{n}-\mathbf{x}_{n-1}+ \mathbf{a}_{n}\Delta t^{2}+O\bigl(\Delta t^{4}\bigr) $$
(12.163)

which coincides with the Störmer-Verlet method (12.138). We conclude that Beeman's method should produce the same trajectory as the Störmer-Verlet method if numerical errors can be neglected and comparable initial values are used. In fact, the Störmer-Verlet method may suffer from numerical cancellation and Beeman's method provides a numerically more favorable alternative.
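The explicit Beeman update, Eqs. (12.157) and (12.161), reads in code (a sketch; acc(x) again stands for the acceleration function):

```python
def beeman_step(acc, x, v, a_n, a_nm1, dt):
    """Explicit Beeman step, Eqs. (12.157) and (12.161)."""
    x_new = x + v * dt + (4.0 * a_n - a_nm1) / 6.0 * dt**2    # (12.157)
    a_np1 = acc(x_new)
    v_new = v + (2.0 * a_np1 + 5.0 * a_n - a_nm1) / 6.0 * dt  # (12.161)
    return x_new, v_new, a_np1, a_n    # a_n becomes a_{n-1} in the next step
```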

11.8 The Leapfrog Method

Closely related to the Verlet methods is the so called leapfrog method [116]. It uses the simple decomposition

$$ e^{\mathcal{L}\Delta t}\approx e^{\mathcal{L}_{A}\Delta t}e^{\mathcal{L}_{B}\Delta t} $$
(12.164)

but introduces two different time grids for coordinates and velocities which are shifted by Δt/2 (Fig. 12.13).

Fig. 12.13 (Leapfrog method)  The exact integration path is approximated by one step with constant coordinates and one step with constant velocities. Two different grids are used for coordinates and velocities which are shifted by Δt/2

The leapfrog algorithm is given by

$$\begin{aligned} &\mathbf{v}_{n+\frac{1}{2}}=\mathbf{v}_{n-\frac{1}{2}}+\mathbf{a}_{n} \Delta t \end{aligned}$$
(12.165)
$$\begin{aligned} &\mathbf{x}_{n+1}=\mathbf{x}_{n}+\mathbf{v}_{n+\frac{1}{2}} \Delta t. \end{aligned}$$
(12.166)

Due to the shifted arguments the order of the method is increased as can be seen from the Taylor series:

$$\begin{aligned} &\mathbf{x}(t_{n})+\biggl(\mathbf{v}(t_{n})+ \frac{\Delta t}{2}\mathbf{a}(t_{n})+\cdots\biggr)\Delta t= \mathbf{x}(t_{n}+\Delta t)+O\bigl(\Delta t^{3}\bigr) \end{aligned}$$
(12.167)
$$\begin{aligned} &\mathbf{v}\biggl(t_{n}+\frac{\Delta t}{2}\biggr)-\mathbf{v} \biggl(t_{n}-\frac{\Delta t}{2}\biggr)=\mathbf{a}(t_{n}) \Delta t+O\bigl(\Delta t^{3}\bigr) . \end{aligned}$$
(12.168)

One disadvantage of the leapfrog method is that some additional effort is necessary if the velocities are needed at the same times as the coordinates. The simple expression

$$ \mathbf{v}(t_{n})=\frac{1}{2}\biggl(\mathbf{v} \biggl(t_{n}-\frac{\Delta t}{2}\biggr)+\mathbf{v}\biggl(t_{n}+ \frac{\Delta t}{2}\biggr)\biggr)+O\bigl(\Delta t^{2}\bigr) $$
(12.169)

is of lower error order than (12.168).
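A sketch of one leapfrog step, where v_half holds the velocity on the shifted grid:

```python
def leapfrog_step(acc, x, v_half, dt):
    """One leapfrog step: (12.165) updates the mid-grid velocity,
    (12.166) then updates the coordinates."""
    v_half_new = v_half + acc(x) * dt   # v_{n+1/2} = v_{n-1/2} + a_n*dt
    x_new = x + v_half_new * dt         # x_{n+1} = x_n + v_{n+1/2}*dt
    return x_new, v_half_new
```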

12 Problems

Problem 12.1

(Circular orbits)

In this computer experiment we consider a mass point moving in a central field. The equation of motion can be written as the following system of first order equations:

$$ \left (\begin{array}{c} \dot{x}\\ \dot{y}\\ \dot{v}_{x}\\ \dot{v}_{y} \end{array} \right )=\left (\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -\frac{1}{(x^{2}+y^{2})^{\frac{3}{2}}} & 0 & 0 & 0\\ 0 & -\frac{1}{(x^{2}+y^{2})^{\frac{3}{2}}} & 0 & 0 \end{array} \right )\left (\begin{array}{c} x\\ y\\ v_{x}\\ v_{y} \end{array} \right ). $$
(12.170)

For initial values

$$ \left (\begin{array}{c} x\\ y\\ v_{x}\\ v_{y} \end{array} \right )=\left (\begin{array}{c} 1\\ 0\\ 0\\ 1 \end{array} \right ) $$
(12.171)

the exact solution is given by

$$ x=\cos t\quad y=\sin t. $$
(12.172)

The following methods are used to calculate the position x(t),y(t) and the energy

$$ E_{tot}=E_{kin}+E_{pot}=\frac{1}{2} \bigl(v_{x}^{2}+v_{y}^{2}\bigr)- \frac{1}{\sqrt{x^{2}+y^{2}}}. $$
(12.173)
  • The explicit Euler method (12.3)

$$ \begin{aligned}[c] & x(t_{n+1})=x(t_{n})+v_{x}(t_{n})\Delta t\\ &y(t_{n+1})=y(t_{n})+v_{y}(t_{n})\Delta t\\ &v_{x}(t_{n+1})=v_{x}(t_{n})-\frac{x(t_{n})}{R(t_{n})^{3}}\Delta t\\ &v_{y}(t_{n+1})=v_{y}(t_{n})-\frac{y(t_{n})}{R(t_{n})^{3}}\Delta t. \end{aligned} $$
(12.174)
  • The 2nd order Runge-Kutta method (12.7.1)

which consists of the predictor step

$$\begin{aligned} &x(t_{n}+\Delta t/2)=x(t_{n})+\frac{\Delta t}{2}v_{x}(t_{n}) \end{aligned}$$
(12.175)
$$\begin{aligned} &y(t_{n}+\Delta t/2)=y(t_{n})+\frac{\Delta t}{2}v_{y}(t_{n}) \end{aligned}$$
(12.176)
$$\begin{aligned} &v_{x}(t_{n}+\Delta t/2)=v_{x}(t_{n})- \frac{\Delta t}{2}\frac{x(t_{n})}{R(t_{n})^{3}} \end{aligned}$$
(12.177)
$$\begin{aligned} &v_{y}(t_{n}+\Delta t/2)=v_{y}(t_{n})- \frac{\Delta t}{2}\frac{y(t_{n})}{R(t_{n})^{3}} \end{aligned}$$
(12.178)

and the corrector step

$$\begin{aligned} &x(t_{n+1})=x(t_{n})+\Delta t\, v_{x}(t_{n}+ \Delta t/2) \end{aligned}$$
(12.179)
$$\begin{aligned} &y(t_{n+1})=y(t_{n})+\Delta t\, v_{y}(t_{n}+ \Delta t/2) \end{aligned}$$
(12.180)
$$\begin{aligned} &v_{x}(t_{n+1})=v_{x}(t_{n})-\Delta t \frac{x(t_{n}+\Delta t/2)}{R^{3}(t_{n}+\Delta t/2)} \end{aligned}$$
(12.181)
$$\begin{aligned} &v_{y}(t_{n+1})=v_{y}(t_{n})-\Delta t \frac{y(t_{n}+\Delta t/2)}{R^{3}(t_{n}+\Delta t/2)}. \end{aligned}$$
(12.182)
  • The fourth order Runge-Kutta method (12.7.3)

  • The Verlet method (12.11.5)

$$\begin{aligned} &x(t_{n+1})=x(t_{n})+ \bigl(x(t_{n})-x(t_{n-1}) \bigr)-\Delta t^{2}\frac{x(t_{n})}{R^{3}(t_{n})} \end{aligned}$$
(12.183)
$$\begin{aligned} &y(t_{n+1})=y(t_{n})+ \bigl(y(t_{n})-y(t_{n-1}) \bigr)-\Delta t^{2}\frac{y(t_{n})}{R^{3}(t_{n})} \end{aligned}$$
(12.184)
$$\begin{aligned} &v_{x}(t_{n})=\frac{x(t_{n+1})-x(t_{n-1})}{2\Delta t}=\frac{x(t_{n})-x(t_{n-1})}{\Delta t}- \frac{\Delta t}{2}\frac{x(t_{n})}{R^{3}(t_{n})} \end{aligned}$$
(12.185)
$$\begin{aligned} &v_{y}(t_{n})=\frac{y(t_{n+1})-y(t_{n-1})}{2\Delta t}=\frac{y(t_{n})-y(t_{n-1})}{\Delta t}- \frac{\Delta t}{2}\frac{y(t_{n})}{R^{3}(t_{n})}. \end{aligned}$$
(12.186)

To start the Verlet method we need additional coordinates at time −Δt which can be chosen from the exact solution or from the approximation

$$\begin{aligned} &x(t_{-1})=x(t_{0})-\Delta t\, v_{x}(t_{0})- \frac{\Delta t^{2}}{2}\frac{x(t_{0})}{R^{3}(t_{0})} \end{aligned}$$
(12.187)
$$\begin{aligned} &y(t_{-1})=y(t_{0})-\Delta t\, v_{y}(t_{0})- \frac{\Delta t^{2}}{2}\frac{y(t_{0})}{R^{3}(t_{0})}. \end{aligned}$$
(12.188)
  • The leapfrog method (12.11.8)

    $$\begin{aligned} &x(t_{n+1})=x(t_{n})+v_{x}(t_{n+\frac{1}{2}})\Delta t \end{aligned}$$
    (12.189)
    $$\begin{aligned} &y(t_{n+1})=y(t_{n})+v_{y}(t_{n+\frac{1}{2}})\Delta t \end{aligned}$$
    (12.190)
    $$\begin{aligned} &v_{x}(t_{n+\frac{1}{2}})=v_{x}(t_{n-\frac{1}{2}})- \frac{x(t_{n})}{R(t_{n})^{3}}\Delta t \end{aligned}$$
    (12.191)
    $$\begin{aligned} &v_{y}(t_{n+\frac{1}{2}})=v_{y}(t_{n-\frac{1}{2}})- \frac{y(t_{n})}{R(t_{n})^{3}}\Delta t \end{aligned}$$
    (12.192)

where the velocity at time \(t_{n+1}\) is calculated from

$$\begin{aligned} &v_{x}(t_{n+1})=v_{x}(t_{n+\frac{1}{2}})- \frac{\Delta t}{2}\frac{x(t_{n+1})}{R^{3}(t_{n+1})} \end{aligned}$$
(12.193)
$$\begin{aligned} &v_{y}(t_{n+1})=v_{y}(t_{n+\frac{1}{2}})- \frac{\Delta t}{2}\frac{y(t_{n+1})}{R^{3}(t_{n+1})}. \end{aligned}$$
(12.194)

To start the leapfrog method we need the velocity at time \(t_{-\frac{1}{2}}\) which can be taken from the exact solution or from

$$\begin{aligned} &v_{x}(t_{-\frac{1}{2}})=v_{x}(t_{0})- \frac{\Delta t}{2}\frac{x(t_{0})}{R^{3}(t_{0})} \end{aligned}$$
(12.195)
$$\begin{aligned} &v_{y}(t_{-\frac{1}{2}})=v_{y}(t_{0})- \frac{\Delta t}{2}\frac{y(t_{0})}{R^{3}(t_{0})}. \end{aligned}$$
(12.196)

Compare the conservation of energy for the different methods as a function of the time step Δt. Study the influence of the initial values for leapfrog and Verlet methods.

Problem 12.2

(N-body system)

In this computer experiment we simulate the motion of three mass points under the influence of gravity. Initial coordinates and velocities as well as the masses can be varied. The equations of motion are solved with the 4th order Runge-Kutta method with quality control for different step sizes. The local integration error is estimated using the step doubling method. Try to simulate a planet with a moon moving round a sun!

Problem 12.3

(Adams-Bashforth method)

In this computer experiment we simulate a circular orbit with the Adams-Bashforth method of order 2⋯7. The absolute error at time T

$$\begin{aligned} \Delta(T) =&\bigl \vert x(T)-\cos(T)\bigr \vert +\bigl \vert y(T)-\sin(T)\bigr \vert +\bigl \vert v_{x}(T)+\sin(T)\bigr \vert \\ &{}+\bigl \vert v_{y}(T)-\cos(T)\bigr \vert \end{aligned}$$
(12.197)

is shown as a function of the time step Δt in a log-log plot. From the slope

$$ s=\frac{d(\log_{10}(\Delta))}{d(\log_{10}(\Delta t))} $$
(12.198)

the leading error order s can be determined. For very small step sizes rounding errors become dominant, which leads to an increase \(\Delta\sim(\Delta t)^{-1}\).

Determine maximum precision and optimal step size for different orders of the method. Compare with the explicit Euler method.