This chapter presents a class of trigonometric collocation methods based on Lagrange basis polynomials for solving multi-frequency and multidimensional oscillatory systems \(q^{\prime \prime }(t)+Mq(t)=f\big (q(t)\big )\). The properties of the collocation methods are investigated in detail. It is shown that the convergence condition of these methods is independent of \(\left\| M\right\| \), which is crucial for solving multi-frequency oscillatory systems.

7.1 Introduction

The numerical treatment of multi-frequency oscillatory systems is a computational problem of overarching importance in a wide range of applications, such as quantum physics, circuit simulations, flexible body dynamics and mechanics (see, e.g. [3, 5, 6, 8, 9, 32, 33] and the references therein). The main purpose of this chapter is to construct and analyse a class of efficient collocation methods for solving multi-frequency and multidimensional oscillatory second-order differential equations of the form

$$\begin{aligned} q^{\prime \prime }(t)+Mq(t)=f\big (q(t)\big ), \qquad q(0)=q_0,\ \ q'(0)=q'_0,\qquad t\in [0,t_{\mathrm {end}}], \end{aligned}$$
(7.1)

where M is a \(d\times d\) positive semi-definite matrix implicitly containing the dominant frequencies of the oscillatory problem and \(f: \mathbb {R}^{d}\rightarrow \mathbb {R}^{d}\) is an analytic function. The solution of this system is a multi-frequency nonlinear oscillator because of the presence of the linear term Mq. The system (7.1) is a highly oscillatory problem when \(\left\| M\right\| \gg 1\). In recent years, various numerical methods for approximating solutions of oscillatory systems have been developed by many researchers. Readers are referred to [12,13,14, 21,22,23,24,25, 31] and the references therein. Once it is further assumed that M is symmetric and f is the negative gradient of a real-valued function U(q), the system (7.1) is identical to the following initial value Hamiltonian system

$$\begin{aligned} \left\{ \begin{aligned}&\dot{q}(t)=\nabla _p H(q(t),p(t)),\qquad \ \ q(0)=q_{0},\\&\dot{p}(t)=-\nabla _q H(q(t),p(t)),\qquad p(0)=p_{0}\equiv q'_0, \end{aligned}\right. \end{aligned}$$
(7.2)

with the Hamiltonian

$$\begin{aligned} H(q,p)=\frac{1}{2}p^{\intercal }p+\frac{1}{2}q^{\intercal }Mq+U(q). \end{aligned}$$
(7.3)

This is an important Hamiltonian problem which has been studied by many authors (see, e.g. [3,4,5, 8, 9]).

In [26], the authors took advantage of shifted Legendre polynomials to obtain a local Fourier expansion of the system (7.1) and derived the so-called trigonometric Fourier collocation methods. Theoretical analysis and numerical experiments in [26] showed that the trigonometric Fourier collocation methods are more efficient than some earlier codes. Motivated by the work in [26], this chapter is devoted to the formulation and analysis of another class of trigonometric collocation methods for solving multi-frequency and multidimensional oscillatory second-order systems (7.1). We consider a classical approach and use Lagrange polynomials to derive the methods. Because of this different approach, compared with the methods in [26], the resulting collocation methods have a simpler scheme and can be implemented at a lower cost in practical computations. These trigonometric collocation methods are designed by interpolating the function f of (7.1) by Lagrange basis polynomials, and incorporating the variation-of-constants formula and the idea of collocation. These integrators form a class of collocation methods and share the important features of collocation methods. We analyse the properties of the trigonometric collocation methods and study the convergence of the fixed-point iteration for these methods. It is important to emphasize that for the trigonometric collocation methods, the convergence condition is independent of \(\left\| M\right\| \), which is a crucial property for solving highly oscillatory systems.

This chapter is organized as follows. In Sect. 7.2, we formulate the scheme of trigonometric collocation methods based on Lagrange basis polynomials. The properties of the obtained methods are analysed in Sect. 7.3. In Sect. 7.4, a fourth-order scheme of the collocation methods is presented and numerical results confirm that the method proposed in this chapter yields a dramatic improvement. Conclusions are included in the last section.

7.2 Formulation of the Methods

We first restrict the multi-frequency oscillatory system (7.1) to the interval [0, h] with any \(h>0\):

$$\begin{aligned} q''(t)+Mq(t)=f\big (q(t)\big ), \qquad q(0)=q_0,\ \ q'(0)=q'_0,\qquad t\in [0,h]. \end{aligned}$$
(7.4)

With regard to the variation-of-constants formula for (7.1) given in [29], we have the following result on the exact solution q(t) of the system (7.1) and its derivative \(q'(t)=p(t)\):

$$\begin{aligned} \left\{ \begin{aligned}&q(t)=\phi _0(t^2M)q_0+t\phi _1(t^2M)p_0+ t^2\int _{0}^1(1-z)\phi _1\big ((1 -z)^2t^2M\big )f\big (q(tz)\big )dz,\\&p(t)=-tM\phi _1( t^2M)q_0+\phi _0(t^2M)p_0 +t\int _{0}^{1}\phi _0\big ((1 -z)^2t^2M\big )f\big (q(tz)\big )dz, \end{aligned}\right. \end{aligned}$$
(7.5)

where \(t\in [0,h]\) and

$$\begin{aligned} \phi _{i}(M):=\sum \limits _{l=0}^{\infty }\frac{(-1)^{l}M^{l}}{(2l+i)!},\qquad \ i=0,1. \end{aligned}$$
(7.6)
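For readers who wish to experiment numerically, the matrix functions \(\phi _0\) and \(\phi _1\) can be evaluated either from the truncated series (7.6) or, for symmetric positive semi-definite arguments, via an eigendecomposition. The following Python sketch is an illustration added here, not part of the original presentation.

```python
import math
import numpy as np

def phi_series(V, i, n_terms=30):
    """Truncated series (7.6): phi_i(V) = sum_{l>=0} (-1)^l V^l / (2l+i)!."""
    d = V.shape[0]
    result = np.zeros((d, d))
    power = np.eye(d)                       # V^l, starting from l = 0
    for l in range(n_terms):
        result += ((-1) ** l / math.factorial(2 * l + i)) * power
        power = power @ V
    return result

def phi_spd(V, i):
    """For symmetric positive semi-definite V: phi_0(V) = cos(V^{1/2}) and
    phi_1(V) = sin(V^{1/2}) V^{-1/2}, with value 1 at zero eigenvalues."""
    lam, P = np.linalg.eigh(V)              # V = P diag(lam) P^T
    w = np.sqrt(np.maximum(lam, 0.0))
    f = np.cos(w) if i == 0 else np.sinc(w / np.pi)   # np.sinc(x) = sin(pi*x)/(pi*x)
    return (P * f) @ P.T
```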

It follows from (7.5) that

$$\begin{aligned} \left\{ \begin{aligned}&q(h)=\phi _0(V)q_0+h\phi _1(V)p_0+ h^2\int _{0}^1(1-z)\phi _1\big ((1 -z)^2V\big )f\big (q(hz)\big )dz,\\&p(h)=-hM\phi _1( V)q_0+\phi _0(V)p_0 +h\int _{0}^{1}\phi _0\big ((1 -z)^2V\big )f\big (q(hz)\big )dz, \end{aligned}\right. \end{aligned}$$
(7.7)

where \(V=h^2M.\)

The main idea in designing practical schemes for solving (7.1) is to approximate f(q) in (7.7) in such a way that the integrals can be evaluated. In this chapter, we interpolate f(q) as

$$\begin{aligned} f\big (q(\xi h)\big )\sim \sum \limits _{j=1}^ {s}l_j(\xi )f\big (q(c_j h)\big ),\qquad \xi \in [0,1], \end{aligned}$$
(7.8)

where

$$\begin{aligned} l_j(x)=\prod \limits _{k=1,k\ne j}^ {s}\frac{x-c_k}{c_j-c_k}, \end{aligned}$$
(7.9)

for \(j=1,\ldots ,s\), are the Lagrange basis polynomials, and \(c_1, \ldots , c_s\) are distinct real numbers (\(s\ge 1,\ 0 \le c_i \le 1\)). Then replacing \(f(q(\xi h))\) in (7.7) by the interpolant (7.8) yields an approximation of q(h), p(h) as follows:

$$\begin{aligned} \left\{ \begin{aligned}&\tilde{q}(h)=\phi _0(V)q_0+h\phi _1(V)p_0+ h^2\sum \limits _{j=1}^ {s}I_{1,j}f\big (\tilde{q}(c_j h)\big ),\\&\tilde{p}(h)=-hM\phi _1( V)q_0+\phi _0(V)p_0 +h\sum \limits _{j=1}^ {s}I_{2,j}f\big (\tilde{q}(c_j h)\big ), \end{aligned}\right. \end{aligned}$$
(7.10)

where

$$\begin{aligned} \begin{aligned}&I_{1,j}:=\int _{0}^1l_j(z)(1-z)\phi _1\big ((1 -z)^2V\big )dz,\ \ I_{2,j}:=\int _{0}^{1}l_j(z)\phi _0\big ((1 -z)^2V\big )dz. \end{aligned} \end{aligned}$$
(7.11)
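The Lagrange basis (7.9) and the interpolation (7.8) that enter these coefficients are straightforward to code; the following minimal Python sketch (added for illustration only) makes the notation concrete.

```python
import numpy as np

def lagrange_basis(c, j, x):
    """l_j(x) = prod_{k != j} (x - c_k)/(c_j - c_k), Eq. (7.9); j is 0-based here."""
    c = np.asarray(c, dtype=float)
    mask = np.arange(len(c)) != j
    return float(np.prod((x - c[mask]) / (c[j] - c[mask])))

def interpolate_f(c, f_at_nodes, xi):
    """Approximation (7.8): f(q(xi*h)) ~ sum_j l_j(xi) * f(q(c_j*h)),
    where f_at_nodes[j] stores f(q(c_j*h))."""
    return sum(lagrange_basis(c, j, xi) * f_at_nodes[j] for j in range(len(c)))
```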

From the variation-of-constants formula (7.5) for (7.4), the approximation (7.10) satisfies the following system

$$\begin{aligned} \tilde{q}''(\xi h)+M\tilde{q}(\xi h)=\sum \limits _{j=1}^ {s}l_j(\xi )f\big (\tilde{q}(c_j h)\big ),\qquad \tilde{q}(0)=q_{0},\ \ \tilde{q}'(0)=p_{0},\qquad \xi \in [0,1]. \end{aligned}$$
(7.12)

In what follows we first show how to compute \(f\big (\tilde{q}(c_j h)\big ),\ I_{1,j},\ I_{2,j}\) in (7.10), and then formulate a class of trigonometric collocation methods.

7.2.1 The Computation of \(f(\tilde{q}(c_j h))\)

It follows from (7.12) that \(\tilde{q}(c_i h)\) for \( i=1,2,\ldots ,s,\) can be obtained by solving the following discrete problems:

$$\begin{aligned} \begin{aligned}&\tilde{q}''(c_i h)+M\tilde{q}(c_i h)=\sum \limits _{j=1}^ {s}l_j(c_i )f\big (\tilde{q}(c_j h)\big ),\ \ \ \tilde{q}(0)=q_{0},\ \tilde{q}'(0)=p_{0}.\\ \end{aligned} \end{aligned}$$
(7.13)

Set \(\tilde{q}_i=\tilde{q}(c_i h)\) for \(i=1,2,\ldots ,s\). Then (7.13) can be solved by the variation-of-constants formula (7.5) in the form:

$$\begin{aligned} \begin{aligned} \tilde{q}_i =\,\,&\phi _0(c_i^2V)q_0+c_ih\phi _1(c_i^2V)p_0+ (c_ih)^2\sum \limits _{j=1}^ {s}\tilde{I}_{c_i,j}f(\tilde{q}_j ),\quad i=1,2,\ldots ,s, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned}&\tilde{I}_{c_i,j}:=\int _{0}^1l_j(c_iz)(1-z)\phi _1\big ((1 -z)^2c_i^2V\big )dz,\qquad i, j=1,\ldots ,s. \end{aligned} \end{aligned}$$
(7.14)

7.2.2 The Computation of \(I_{1,j},\ I_{2,j},\ \tilde{I}_{c_i,j}\)

With the definition (7.9), the integrals \(I_{1,j},\ I_{2,j},\ \tilde{I}_{c_i,j}\) appearing in (7.11) and (7.14) can be computed as follows:

$$\begin{aligned} I_{1,j}=\,\,&\int _{0}^1l_j(z)(1-z)\phi _1\big ((1 -z)^2V\big )dz\\ =\,\,&\prod \limits _{k=1,k\ne j}^ {s}\sum \limits _{l=0}^{\infty }\int _{0}^1\frac{z-c_k}{c_j-c_k}(1-z)^{2l+1}dz\frac{(-1)^{l}V^{l}}{(2l+1)!}\\ =\,\,&\sum \limits _{l=0}^{\infty }\Big (\prod \limits _{k=1,k\ne j}^ {s}\frac{\frac{1}{2l+3}-c_k}{c_j-c_k}\Big )\frac{(-1)^{l}V^{l}}{(2l+2)!} =\sum \limits _{l=0}^{\infty }l_j\Big (\frac{1}{2l+3}\Big )\frac{(-1)^{l}V^{l}}{(2l+2)!},\\ I_{2,j}=\,\,&\int _{0}^1l_j(z)\phi _0\big ((1 -z)^2V\big )dz =\prod \limits _{k=1,k\ne j}^ {s}\sum \limits _{l=0}^{\infty }\int _{0}^1\frac{z-c_k}{c_j-c_k}(1-z)^{2l}dz\frac{(-1)^{l}V^{l}}{(2l)!}\\ =\,\,&\sum \limits _{l=0}^{\infty }\Big (\prod \limits _{k=1,k\ne j}^ {s}\frac{\frac{1}{2l+2}-c_k}{c_j-c_k}\Big )\frac{(-1)^{l}V^{l}}{(2l+1)!} =\sum \limits _{l=0}^{\infty }l_j\Big (\frac{1}{2l+2}\Big )\frac{(-1)^{l}V^{l}}{(2l+1)!},\\ \tilde{I}_{c_i,j}=\,\,&\int _{0}^1l_j(c_iz)(1-z)\phi _1\big ((1 -z)^2c_i^2V\big )dz\\ =\,\,&\prod \limits _{k=1,k\ne j}^ {s}\sum \limits _{l=0}^{\infty }\int _{0}^1\frac{c_iz-c_k}{c_j-c_k}(1-z)^{2l+1}dz\frac{(-1)^{l}(c_i^2V)^{l}}{(2l+1)!}\\ =\,\,&\sum \limits _{l=0}^{\infty }\Big (\prod \limits _{k=1,k\ne j}^ {s}\frac{\frac{c_i}{2l+3}-c_k}{c_j-c_k}\Big )\frac{(-1)^{l}(c_i^2V)^{l}}{(2l+2)!} =\sum \limits _{l=0}^{\infty }l_j\Big (\frac{c_i}{2l+3}\Big )\frac{(-1)^{l}(c_i^2V)^{l}}{(2l+2)!},\\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad i,j=1,\ldots ,s. \end{aligned}$$

If M is symmetric and positive semi-definite, then so is \(V=h^2M\), and we have the decomposition

$$ V=P^{\intercal }W^{2}P=\varOmega _{0}^{2}\ \ \text{ with }\ \varOmega _{0}=P^{\intercal }W P, $$

where P is an orthogonal matrix and \(W=diag (\lambda _k)\) with nonnegative diagonal entries which are the square roots of the eigenvalues of V. Hence the above integrals become

$$\begin{aligned} \begin{aligned} I_{1,j}=\,\,&P^{\intercal }\int _{0}^1 l_j(z) W ^{-1}\sin \big ((1 -z)W\big )dzP\\=\,\,&P^{\intercal }diag \Big (\int _{0}^1 l_j(z) \lambda _k ^{-1}\sin \big ((1 -z)\lambda _k\big )dz\Big )P,\\ I_{2,j}=\,\,&P^{\intercal }\int _{0}^1l_j(z)\cos \big ((1 -z)W\big )dzP=P^{\intercal }diag \Big (\int _{0}^1l_j(z)\cos \big ((1 -z)\lambda _k\big )dz\Big )P,\\ \tilde{I}_{c_i,j}=\,\,&P^{\intercal }\int _{0}^1 l_j(c_iz) (c_iW )^{-1}\sin \big ((1 -z)c_iW\big )dzP\\=\,\,&P^{\intercal }diag \Big (\int _{0}^1 l_j(c_iz) (c_i\lambda _k )^{-1}\sin \big ((1 -z)c_i\lambda _k\big )dz\Big )P, \\&i,j=1,\ldots ,s. \end{aligned} \end{aligned}$$

Here, it is noted that \(W^{-1}\sin \big ((1 -z)W\big )\) and \((c_iW )^{-1}\sin \big ((1 -z)c_iW\big )\) are well defined even when W is singular. The case \(\lambda _k=0\) gives:

$$\begin{aligned} \begin{aligned} \int _{0}^1 l_j(z) \lambda _k ^{-1}\sin \big ((1 -z)\lambda _k\big )dz=\,\,&\int _{0}^1 l_j(z) (1 -z)dz,\\ \int _{0}^1l_j(z)\cos \big ((1 -z)\lambda _k\big )dz=\,\,&\int _{0}^1l_j(z)dz,\\ \int _{0}^1 l_j(c_iz) (c_i\lambda _k )^{-1}\sin \big ((1 -z)c_i\lambda _k\big )dz=\,\,&\int _{0}^1 l_j(c_iz) (1 -z)dz, \end{aligned} \end{aligned}$$

which can be evaluated easily since \(l_j(z)\) is a polynomial function. If \(\lambda _k\ne 0\), they can be evaluated as follows:

$$\begin{aligned}&\int _{0}^1 l_j(z) \lambda _k ^{-1}\sin \big ((1 -z)\lambda _k\big )dz \\ =\,\,&1/\lambda _k\int _{0}^1 l_{j}(z)\sin \big ((1 -z)\lambda _k\big )dz =1/\lambda _k^2\int _{0}^1 l_{j}(z)d\cos \big ((1 -z)\lambda _k\big )\\ =\,\,&1/\lambda _k^2 l_{j}(1)-1/\lambda _k^2 l_{j}(0)\cos (\lambda _k)- 1/\lambda _k^2\int _{0}^1 l'_{j}(z)\cos \big ((1 -z)\lambda _k\big )dz\\ =\,\,&1/\lambda _k^2 l_{j}(1)-1/\lambda _k^2 l_{j}(0)\cos (\lambda _k)+ 1/\lambda _k^3\int _{0}^1 l'_{j}(z)d\sin \big ((1 -z)\lambda _k\big )\\ =\,\,&1/\lambda _k^2 l_{j}(1)-1/\lambda _k^2 l_{j}(0)\cos (\lambda _k)- 1/\lambda _k^3l'_{j}(0)\sin (\lambda _k)-1/\lambda _k^3\int _{0}^1 l''_{j}(z)\sin \big ((1 -z)\lambda _k\big )dz\\ =\,\,&1/\lambda _k^2 l_{j}(1)-1/\lambda _k^2 l_{j}(0)\cos (\lambda _k)- 1/\lambda _k^3l'_{j}(0)\sin (\lambda _k) -1/\lambda _k^4 l''_{j}(1)+1/\lambda _k^4 l''_{j}(0)\cos (\lambda _k)\\ {}&+ 1/\lambda _k^5l^{(3)}_{j}(0)\sin (\lambda _k)+1/\lambda _k^5\int _{0}^1 l^{(4)}_{j}(z)\sin \big ((1 -z)\lambda _k\big )dz\\ =\,\,&\cdots \\ =\,\,&\sum \limits _{m=0}^{\lfloor \deg (l_{j})/2\rfloor }(-1)^{m}/\lambda _k^{2m+2}\Big ( l_{j}^{(2m)}(1)- l_{j}^{(2m)}(0)\cos (\lambda _k)-1/\lambda _k l_{j}^{(2m+1)}(0)\sin (\lambda _k)\Big ),\\ \end{aligned}$$

for \(j=1,2,\ldots ,s\) and each eigenvalue \(\lambda _k\ne 0\), where \(\deg (l_{j})\) is the degree of \(l_{j}\) and \(\lfloor \deg (l_{j})/2\rfloor \) denotes the integer part of \(\deg (l_{j})/2\).

Likewise, we can obtain

$$\begin{aligned} \begin{aligned}&\int _{0}^1l_j(z)\cos \big ((1 -z)\lambda _k\big )dz\\=&\sum \limits _{m=0}^{\lfloor \deg (l_{j})/2\rfloor }(-1)^{m}/\lambda _k^{2m+1}\Big ( l_{j}^{(2m)}(0)\sin (\lambda _k)+1/\lambda _k l_{j}^{(2m+1)}(1)-1/\lambda _k l_{j}^{(2m+1)}(0)\cos (\lambda _k)\Big ),\\&\int _{0}^1 l_j(c_iz) (c_i\lambda _k )^{-1}\sin \big ((1 -z)c_i\lambda _k\big )dz\\=&\sum \limits _{m=0}^{\lfloor \deg (l_{j})/2\rfloor }(-1)^{m}c_i^{2m}/(c_i\lambda _k)^{2m+2}\Big ( l_{j}^{(2m)}(c_i)- l_{j}^{(2m)}(0)\cos (c_i\lambda _k)-1/\lambda _k l_{j}^{(2m+1)}(0)\sin (c_i\lambda _k)\Big ),\\ \end{aligned} \end{aligned}$$
(7.15)

for \(i,\ j=1,2,\ldots ,s\).
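These scalar integrals can also be evaluated, or the closed forms above checked, by ordinary quadrature. The following Python sketch is an illustration only (assuming \(0<c_i\le 1\)); it is not part of the original derivation.

```python
import numpy as np
from scipy.integrate import quad

def l_basis(c, j, x):
    c = np.asarray(c, dtype=float)
    m = np.arange(len(c)) != j
    return float(np.prod((x - c[m]) / (c[j] - c[m])))

def diag_entries(c, i, j, lam):
    """Diagonal entries of I_{1,j}, I_{2,j} and I~_{c_i,j} for one eigenvalue lam."""
    if abs(lam) < 1e-12:                                 # the lambda_k = 0 limits
        e1 = quad(lambda z: l_basis(c, j, z) * (1 - z), 0, 1)[0]
        e2 = quad(lambda z: l_basis(c, j, z), 0, 1)[0]
        e3 = quad(lambda z: l_basis(c, j, c[i] * z) * (1 - z), 0, 1)[0]
    else:
        e1 = quad(lambda z: l_basis(c, j, z) * np.sin((1 - z) * lam) / lam, 0, 1)[0]
        e2 = quad(lambda z: l_basis(c, j, z) * np.cos((1 - z) * lam), 0, 1)[0]
        e3 = quad(lambda z: l_basis(c, j, c[i] * z) * np.sin((1 - z) * c[i] * lam) / (c[i] * lam), 0, 1)[0]
    return e1, e2, e3
```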

7.2.3 The Scheme of Trigonometric Collocation Methods

We are now in a position to present a class of trigonometric collocation methods for the multi-frequency oscillatory second-order system (7.1).

Definition 7.1

A trigonometric collocation method for integrating the multi-frequency oscillatory system (7.1) is defined as

$$\begin{aligned} \left\{ \begin{aligned}&\tilde{q}_i=\phi _0(c_i^2V)q_0+c_ih\phi _1(c_i^2V)p_0+ (c_ih)^2\sum \limits _{j=1}^ {s}\tilde{I}_{c_i,j}f(\tilde{q}_j ),\quad i=1,2,\ldots ,s,\\&\tilde{q}(h)=\phi _0(V)q_0+h\phi _1(V)p_0+ h^2\sum \limits _{j=1}^ {s}I_{1,j}f(\tilde{q}_j),\\&\tilde{p}(h)=-hM\phi _1( V)q_0+\phi _0(V)p_0 +h\sum \limits _{j=1}^ {s}I_{2,j}f(\tilde{q}_j), \end{aligned}\right. \end{aligned}$$
(7.16)

where h is the stepsize and \(I_{1,j},\ I_{2,j},\ \tilde{I}_{c_i,j}\) can be computed as stated in Sect. 7.2.2.
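To make the scheme concrete, the following Python sketch (an illustration, not a reference implementation) carries out one step of (7.16) for a scalar problem \(q''+\omega ^2q=f(q)\), with the coefficients (7.11) and (7.14) obtained by quadrature and the internal stages computed by fixed-point iteration as analysed in Sect. 7.3.4.

```python
import numpy as np
from scipy.integrate import quad

def l_basis(c, j, x):
    c = np.asarray(c, dtype=float)
    m = np.arange(len(c)) != j
    return float(np.prod((x - c[m]) / (c[j] - c[m])))

def phi0(x):                        # phi_0(x) = cos(sqrt(x)) for scalar x >= 0
    return np.cos(np.sqrt(x))

def phi1(x):                        # phi_1(x) = sin(sqrt(x))/sqrt(x), value 1 at x = 0
    return np.sinc(np.sqrt(x) / np.pi)

def ltc_step(f, omega2, q0, p0, h, c, n_iter=5):
    """One step of the trigonometric collocation method (7.16) for q'' + omega2*q = f(q)."""
    s, V = len(c), h * h * omega2
    # coefficient integrals (7.11) and (7.14), here simply by quadrature
    I1 = [quad(lambda z, j=j: l_basis(c, j, z) * (1 - z) * phi1((1 - z) ** 2 * V), 0, 1)[0]
          for j in range(s)]
    I2 = [quad(lambda z, j=j: l_basis(c, j, z) * phi0((1 - z) ** 2 * V), 0, 1)[0]
          for j in range(s)]
    It = [[quad(lambda z, i=i, j=j: l_basis(c, j, c[i] * z) * (1 - z)
                * phi1((1 - z) ** 2 * c[i] ** 2 * V), 0, 1)[0]
           for j in range(s)] for i in range(s)]
    # fixed-point iteration for the internal stages \tilde q_i
    Q = np.full(s, float(q0))
    for _ in range(n_iter):
        fQ = np.array([f(qj) for qj in Q])
        Q = np.array([phi0(c[i] ** 2 * V) * q0 + c[i] * h * phi1(c[i] ** 2 * V) * p0
                      + (c[i] * h) ** 2 * sum(It[i][j] * fQ[j] for j in range(s))
                      for i in range(s)])
    fQ = np.array([f(qj) for qj in Q])
    q1 = phi0(V) * q0 + h * phi1(V) * p0 + h * h * sum(I1[j] * fQ[j] for j in range(s))
    p1 = -h * omega2 * phi1(V) * q0 + phi0(V) * p0 + h * sum(I2[j] * fQ[j] for j in range(s))
    return q1, p1

# e.g. with the two-point Gauss nodes (7.30):
# q1, p1 = ltc_step(lambda q: -q ** 3, 100.0, 1.0, 0.0, 0.05,
#                   [(3 - 3 ** 0.5) / 6, (3 + 3 ** 0.5) / 6])
```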

Remark 7.1

In [26], the authors took advantage of shifted Legendre polynomials to obtain a local Fourier expansion of the system (7.1) and derived trigonometric Fourier collocation methods (TFCMs). TFCMs are a subclass of s-stage ERKN methods presented in [29] with the following Butcher tableau:

(7.17)

where

$$\begin{aligned} \begin{aligned}&II_{1,j}(V):=\int _{0}^1\widehat{P}_j(z)(1-z)\phi _1\big ((1 -z)^2V\big )dz,\\&II_{2,j}(V):=\int _{0}^{1}\widehat{P}_j(z)\phi _0\big ((1 -z)^2V\big )dz,\\&II_{1,j,c_i}(V):=\int _{0}^1\widehat{P}_j(c_iz)(1-z)\phi _1\big ((1 -z)^2c_i^2V\big )dz, \end{aligned} \end{aligned}$$

r is an integer with the requirement: \(2\le r\le s,\) all \(\widehat{P}_j\) are shifted Legendre polynomials over the interval [0, 1], and \(c_l,\ b_l\) for \(l=1,2,\ldots ,s\) are the node points and the quadrature weights of a quadrature formula, respectively.

It is noted that the method (7.16) is also a subclass of s-stage ERKN methods with the following Butcher tableau:

(7.18)

where

$$\begin{aligned} \begin{aligned}&I_{1,j}:=\int _{0}^1l_j(z)(1-z)\phi _1\big ((1 -z)^2V\big )dz,\\&I_{2,j}:=\int _{0}^{1}l_j(z)\phi _0\big ((1 -z)^2V\big )dz,\\&\tilde{I}_{c_i,j}=\int _{0}^1l_j(c_iz)(1-z)\phi _1\big ((1 -z)^2c_i^2V\big )dz. \end{aligned} \end{aligned}$$

From (7.17) and (7.18), it follows clearly that the coefficients of (7.18) are simpler than those of (7.17). Therefore, the scheme of the methods derived in this chapter is much simpler than that given in [26]. The obtained methods can be implemented at a lower cost in practical computations, which will be shown by the numerical experiments in Sect. 7.4. The reason for this better efficiency is that we use a classical approach and choose Lagrange polynomials to interpolate the nonlinearity f in (7.1).

Remark 7.2

We also note that in the recent monograph [2], it has been shown that the approach of constructing energy-preserving methods for Hamiltonian systems which are based upon the use of shifted Legendre polynomials (such as in [1]) and Lagrange polynomials constructed on Gauss–Legendre nodes (such as in [10]) leads to precisely the same methods. Therefore, by choosing special real numbers \(c_1,\ldots ,c_s\) for (7.18) and special quadrature formulae for (7.17), the methods given in this chapter may have some connections with those in [26], which need to be investigated.

Remark 7.3

It is noted that the method (7.16) can be applied to the system (7.1) with an arbitrary matrix M since trigonometric collocation methods do not need the symmetry of M. Moreover, the method (7.16) exactly integrates the linear system \(q''+Mq=0\) and it has an additional advantage of energy preservation for linear systems while respecting structural invariants and geometry of the underlying problem. The method approximates the solution in the interval [0, h]. We then repeat this procedure with equal ease over the next interval. Namely, we can consider the obtained result as the initial condition for a new initial value problem in the interval [h, 2h]. In this way, the method (7.16) can approximate the solution in an arbitrary interval \([0,t_{\mathrm {end}}]\) with \(t_{\mathrm {end}}=Nh\).

When \(M= 0\), (7.1) reduces to a special and important class of systems of second-order ODEs expressed in the traditional form

$$\begin{aligned} q^{\prime \prime }(t)=f\big (q(t)\big ), \qquad q(0)=q_0,\ \ q'(0)=q_0',\qquad t\in [0,t_{\mathrm {end}}]. \end{aligned}$$
(7.19)

For this case, with the definition (7.6) and the results of \(I_{1,j},\ I_{2,j},\ \tilde{I}_{c_i,j}\) in Sect. 7.2.2, the trigonometric collocation method (7.16) reduces to the following RKN-type method.

Definition 7.2

An RKN-type collocation method for integrating the traditional second-order ODEs (7.19) is defined as

$$\begin{aligned} \left\{ \begin{aligned}&\tilde{q}_i=q_0+c_ihp_0+ (c_ih)^2\sum \limits _{j=1}^ {s}\frac{1}{2}l_j\Big (\frac{c_i}{3}\Big )f(\tilde{q}_j),\quad i=1,2,\ldots ,s,\\&\tilde{q}(h)=q_0+hp_0+ h^2\sum \limits _{j=1}^ {s}\frac{1}{2}l_j\Big (\frac{1}{3}\Big )f(\tilde{q}_j),\\&\tilde{p}(h)=p_0 +h\sum \limits _{j=1}^ {s}l_j\Big (\frac{1}{2}\Big )f(\tilde{q}_j), \end{aligned}\right. \end{aligned}$$
(7.20)

where h is the stepsize.

Remark 7.4

The method (7.20) is a subclass of s-stage RKN methods with the following Butcher tableau:

(7.21)

Thus, by letting \(M=0\), the trigonometric collocation methods yield a subclass of RKN methods for solving traditional second-order ODEs, which demonstrates wide applications of the methods.
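For illustration, the entries of the tableau (7.21) can be generated for any set of nodes by reading them off from (7.20); the short Python sketch below is an added example, not quoted from the original text.

```python
import numpy as np

def rkn_coefficients(c):
    """RKN coefficients of (7.20): a_ij = c_i^2 l_j(c_i/3)/2, bbar_j = l_j(1/3)/2, b_j = l_j(1/2)."""
    c = np.asarray(c, dtype=float)
    s = len(c)
    def l(j, x):
        m = np.arange(s) != j
        return np.prod((x - c[m]) / (c[j] - c[m]))
    A = np.array([[c[i] ** 2 * l(j, c[i] / 3) / 2 for j in range(s)] for i in range(s)])
    b_bar = np.array([l(j, 1 / 3) / 2 for j in range(s)])
    b = np.array([l(j, 1 / 2) for j in range(s)])
    return A, b_bar, b

# e.g. the fourth-order case of Sect. 7.4 with the two Gauss nodes:
# A, b_bar, b = rkn_coefficients([(3 - 3 ** 0.5) / 6, (3 + 3 ** 0.5) / 6])
```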

7.3 Properties of the Methods

For the exact solution of (7.2) at \(t=h\), let \(\mathbf {y}(h)=\Big ( q^{\intercal }(h),p^{\intercal }(h)\Big )^{\intercal }.\) Then the oscillatory Hamiltonian system (7.2) can be rewritten in the form

$$\begin{aligned} \mathbf {y}'(\xi h)=F(\mathbf {y}(\xi h)):=\left( \begin{array}{c} p(\xi h) \\ -Mq(\xi h)+f\big (q(\xi h)\big ) \end{array} \right) ,\quad \mathbf {y}_0=\left( \begin{array}{c} q_0 \\ p_0 \\ \end{array} \right) , \end{aligned}$$
(7.22)

for \(0\le \xi \le 1.\) The Hamiltonian is

$$\begin{aligned} H(\mathbf {y})=\frac{1}{2}p^{\intercal }p+\frac{1}{2}q^{\intercal }Mq+U(q). \end{aligned}$$
(7.23)

On the other hand, if we denote the updates of (7.16) by

$$\mathbf {\omega }(h)=\Big ( \tilde{q}^{\intercal }(h), \tilde{p}^{\intercal }(h)\Big )^{\intercal },$$

then we have

$$\begin{aligned} \mathbf {\omega }'(\xi h)=\left( \begin{array}{c} \tilde{p}(\xi h) \\ -M\tilde{q}(\xi h)+\sum \limits _{j=1}^ {s}l_j(\xi )f\big (\tilde{q}(c_j h)\big ) \end{array} \right) ,\quad \mathbf {\omega }_0=\left( \begin{array}{c} q_0 \\ p_0 \\ \end{array} \right) . \end{aligned}$$
(7.24)

The next lemma is useful for the subsequent analysis.

Lemma 7.1

Let \(g:[0,h]\rightarrow \mathbb {R}^{d}\) have j continuous derivatives. Then

$$\int _{0}^1P_j(\tau )g(\tau h)d\tau =\mathscr {O}(h^{j}),$$

where \(P_j(\tau )\) is an orthogonal polynomial of degree j on the interval [0, 1].

Proof

We assume that \(g(\tau h)\) can be expanded in a Taylor series at the origin for the sake of simplicity. Then, for all \(j\ge 0\), since \(P_j(\tau )\) is orthogonal to all polynomials of degree \(n< j\), we have

$$\int _{0}^1P_j(\tau )g(\tau h)d\tau =\sum \limits _{n=0}^ {\infty }\frac{g^{(n)}(0)}{n!}h^n\int _{0}^1P_j(\tau )\tau ^nd\tau =\mathscr {O}(h^{j}).$$

\(\square \)
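A quick numerical illustration of Lemma 7.1 (added here as a sketch) uses shifted Legendre polynomials on [0, 1]: halving h should scale the integral by roughly \(2^{-j}\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_sh_legendre     # shifted Legendre P_j(2*tau - 1) on [0, 1]

g = np.exp                                     # any smooth g works here
for j in (1, 2, 3):
    I = [quad(lambda tau, h=h: eval_sh_legendre(j, tau) * g(tau * h), 0, 1)[0]
         for h in (0.1, 0.05)]
    print(j, I[0] / I[1])                      # ratio tends to 2**j as h -> 0
```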

7.3.1 The Order of Energy Preservation

In this subsection we analyse the order of preservation of the Hamiltonian energy.

Theorem 7.1

Assume that \(c_l\) for \( l=1,2,\ldots ,s\) are chosen as the node points of an s-point Gauss–Legendre quadrature on the interval [0, 1]. Then we have

$$H(\omega (h))-H(\mathbf {y}_0)=\mathscr {O}(h^{2s+1}),$$

where the constant symbolized by \(\mathscr {O}\) is independent of h.

Proof

It follows from Lemma 7.1, (7.23) and (7.24) that

$$\begin{aligned} \begin{aligned}&H(\omega (h))-H(\mathbf {y}_0) =h \int _{0}^{1} \nabla H(\omega (\xi h))^{\intercal }\omega '(\xi h)d\xi \\&=h \int _{0}^{1} \Big ( \big (M\tilde{q}(\xi h)-f(\tilde{q}(\xi h))\big )^{\intercal },\ \tilde{p}(\xi h)^{\intercal }\Big ) \cdot \left( \begin{array}{c} \tilde{p}(\xi h) \\ -M\tilde{q}(\xi h)+\sum \limits _{j=1}^ {s}l_j(\xi )f\big (\tilde{q}(c_j h)\big ) \end{array} \right) d\xi \\&=h \int _{0}^{1} \tilde{p}(\xi h)^{\intercal } \Big ( \sum \limits _{j=1}^ {s}l_j(\xi )f\big (\tilde{q}(c_j h)\big )-f\big (\tilde{q}(\xi h)\big ) \Big )d\xi . \end{aligned} \end{aligned}$$

Moreover, we have

$$f\big (\tilde{q}(\xi h)\big )-\sum \limits _{j=1}^ {s}l_j(\xi )f\big (\tilde{q}(c_j h)\big )=\frac{f^{(s)}\big (\tilde{q}(\xi h)\big )|_{\xi =\zeta }}{s!}\prod \limits _{i=1}^ {s}(\xi h-c_ih).$$

Here \(f^{(s)}\big (\tilde{q}(\xi h)\big )\) denotes the sth derivative of \(f(\tilde{q}(t))\) with respect to t. We then obtain

$$\begin{aligned} \begin{aligned} H(\omega (h))-H(\mathbf {y}_0) =\,\,&-h \int _{0}^{1} \tilde{p}(\xi h)^{\intercal } \frac{f^{(s)}\big (\tilde{q}(\xi h)\big )|_{\xi =\zeta }}{s!}\prod \limits _{i=1}^ {s}(\xi h-c_ih) d\xi \\ =\,\,&-h^{s+1}\int _{0}^{1} \tilde{p}(\xi h)^{\intercal } \frac{f^{(s)}\big (\tilde{q}(\xi h)\big )|_{\xi =\zeta }}{s!}\prod \limits _{i=1}^ {s}(\xi -c_i) d\xi . \end{aligned} \end{aligned}$$

Since \(c_l\) for \(l=1,2,\ldots ,s\) are chosen as the node points of an s-point Gauss–Legendre quadrature on the interval [0, 1], \(\prod \limits _{i=1}^ {s}(\xi -c_i)\) is an orthogonal polynomial of degree s on the interval [0, 1]. Therefore, using Lemma 7.1 we obtain

$$\begin{aligned} \begin{aligned}&H(\omega (h))-H(\mathbf {y}_0) =-h^{s+1}\mathscr {O}(h^{s})=\mathscr {O}(h^{2s+1}).\\ \end{aligned} \end{aligned}$$

This gives the result of the theorem. \(\square \)

7.3.2 The Order of Quadratic Invariant

We next turn to the quadratic invariant \(Q(\mathbf {y})=q^{\intercal }Dp\) of (7.1). The quadratic form Q is a first integral of (7.1) if and only if \(p^{\intercal }Dp+q^{\intercal }D(f(q)-Mq)=0\) for all \(p,q\in \mathbb {R}^{d}\). This implies that D is a skew-symmetric matrix and that \(q^{\intercal }D(f(q)-Mq)=0\) for any \(q\in \mathbb {R}^{d}\). The following result states how accurately the method (7.16) preserves this quadratic invariant.

Theorem 7.2

Under the condition in Theorem 7.1, we have

$$Q(\omega (h))-Q(\mathbf {y}_0)=\mathscr {O}(h^{2s+1}),$$

where the constant symbolized by \(\mathscr {O}\) is independent of h.

Proof

From \(Q(\mathbf {y})=q^{\intercal }Dp\) and \(D^{\intercal }=-D\), it follows that

$$\begin{aligned} \begin{aligned}&Q(\omega (h))-Q(\mathbf {y}_0) =h \int _{0}^{1} \nabla Q(\omega (\xi h))^{\intercal }\omega '(\xi h)d\xi \\ =\,\,&h \int _{0}^{1} \Big (- \tilde{p}(\xi h)^{\intercal }D,\ \tilde{q}(\xi h)^{\intercal }D\Big )\left( \begin{array}{c} \tilde{p}(\xi h) \\ -M\tilde{q}(\xi h)+\sum \limits _{j=1}^ {s}l_j(\xi )f\big (\tilde{q}(c_j h)\big ) \end{array} \right) d\xi . \end{aligned} \end{aligned}$$

Since \(q^{\intercal }D(f(q)-Mq)=0\) for any \(q\in \mathbb {R}^{d}\), we have

$$\begin{aligned} \begin{aligned}&Q(\omega (h))-Q(\mathbf {y}_0) =h \int _{0}^{1} \tilde{q}(\xi h)^{\intercal }D \Big (-M\tilde{q}(\xi h)+\sum \limits _{j=1}^ {s}l_j(\xi )f\big (\tilde{q}(c_j h)\big )\Big )d\xi \\ =\,\,&-h \int _{0}^{1} \tilde{q}(\xi h)^{\intercal }D \frac{f^{(s)}\big (\tilde{q}(\xi h)\big )|_{\xi =\zeta }}{s!}\prod \limits _{i=1}^ {s}(\xi h-c_ih)d\xi \\ =\,\,&-h^{s+1} \int _{0}^{1} \tilde{q}(\xi h)^{\intercal }D \frac{f^{(s)}\big (\tilde{q}(\xi h)\big )|_{\xi =\zeta }}{s!}\prod \limits _{i=1}^ {s}(\xi -c_i)d\xi \\ =\,\,&\mathscr {O}(h^{s+1})\mathscr {O}(h^{s})=\mathscr {O}(h^{2s+1}). \end{aligned} \end{aligned}$$

This completes the proof. \(\square \)

7.3.3 The Algebraic Order

To emphasize the dependence of the solutions of \(\mathbf {y}'(t)=F(\mathbf {y}(t))\) on the initial values, for any given \(\tilde{t}\in [0,h]\), we denote by \(\mathbf {y}(\cdot ,\tilde{t}, \tilde{\mathbf {y}})\) the solution satisfying the initial condition \(\mathbf {y}(\tilde{t},\tilde{t}, \tilde{\mathbf {y}})=\tilde{\mathbf {y}}\) and set

$$\begin{aligned} \varPhi (s,\tilde{t}, \tilde{\mathbf {y}})=\frac{\partial \mathbf {y}(s,\tilde{t}, \tilde{\mathbf {y}})}{\partial \tilde{\mathbf {y}}}. \end{aligned}$$
(7.25)

Recalling the elementary theory of ODEs, we have the following standard result (see, e.g. [11])

$$\begin{aligned} \frac{\partial \mathbf {y}(s,\tilde{t}, \tilde{\mathbf {y}})}{\partial \tilde{t}}=-\varPhi (s,\tilde{t}, \tilde{\mathbf {y}})F(\tilde{\mathbf {y}}). \end{aligned}$$
(7.26)

The following theorem states the result on the order of the trigonometric collocation methods.

Theorem 7.3

Under the condition in Theorem 7.1, the trigonometric collocation method (7.16) satisfies

$$\mathbf {y}(h)-\omega (h)=\mathscr {O}(h^{2s+1}),$$

where the constant symbolized by \(\mathscr {O}\) is independent of h.

Proof

It follows from (7.25) and (7.26) that

$$\begin{aligned} \begin{aligned}&\mathbf {y}(h)-\omega (h) =\mathbf {y}(h,0, \mathbf {y}_0)-\mathbf {y}\big (h,h, \omega (h)\big )=- \int _{0}^{h} \frac{d\mathbf {y}\big (h,\tau , \omega (\tau )\big )}{d\tau }d\tau \\ =\,\,&- \int _{0}^{h}\Big [ \frac{\partial \mathbf {y}\big (h,\tau , \omega (\tau )\big )}{\partial \tilde{t}} +\frac{\partial \mathbf {y}\big (h,\tau , \omega (\tau )\big )}{\partial \tilde{\mathbf {y}}}\omega '(\tau )\Big ]d\tau \\ =\,\,&h \int _{0}^{1}\varPhi \big (h,\xi h, \omega (\xi h)\big )\Big [F\big (\omega (\xi h)\big )-\omega '(\xi h)\Big ]d\xi \\ =\,\,&h \int _{0}^{1}\varPhi \big (h,\xi h, \omega (\xi h)\big ) \left( \begin{array}{c} \mathbf {0}\\ f\big (\tilde{q}(\xi h)\big )-\sum \limits _{j=1}^ {s}l_j(\xi )f\big (\tilde{q}(c_j h)\big ) \end{array} \right) d\xi . \end{aligned} \end{aligned}$$

We rewrite \(\varPhi \big (h,\xi h, \omega (\xi h)\big )\) as a block matrix:

$$\varPhi \big (h,\xi h, \omega (\xi h)\big )=\left( \begin{array}{cc} \varPhi _{11}(\xi h) &{} \varPhi _{12}(\xi h) \\ \varPhi _{21}(\xi h) &{} \varPhi _{22}(\xi h) \\ \end{array} \right) , $$

where \(\varPhi _{ij}\ (i,j=1,2)\) are \(d\times d\) matrices.

We then obtain

$$\begin{aligned} \begin{aligned}&\mathbf {y}(h)-\omega (h) =h \left( \begin{array}{c} \int _{0}^{1}\varPhi _{12}(\xi h)\frac{f^{(s)}\big (\tilde{q}(\xi h)\big )|_{\xi =\zeta }}{s!}\prod \limits _{i=1}^ {s}(\xi h-c_ih)d\xi \\ \int _{0}^{1}\varPhi _{22}(\xi h) \frac{f^{(s)}\big (\tilde{q}(\xi h)\big )|_{\xi =\zeta }}{s!}\prod \limits _{i=1}^ {s}(\xi h-c_ih)d\xi \end{array} \right) \\ =\,\,&h^{s+1} \left( \begin{array}{c} \int _{0}^{1}\varPhi _{12}(\xi h)\frac{f^{(s)}\big (\tilde{q}(\xi h)\big )|_{\xi =\zeta }}{s!}\prod \limits _{i=1}^ {s}(\xi -c_i)d\xi \\ \int _{0}^{1}\varPhi _{22}(\xi h)\frac{f^{(s)}\big (\tilde{q}(\xi h)\big )|_{\xi =\zeta }}{s!}\prod \limits _{i=1}^ {s}(\xi -c_i)d\xi \end{array} \right) =h^{s+1}\mathscr {O}(h^{s}) =\mathscr {O}(h^{2s+1}).\\ \end{aligned} \end{aligned}$$

The proof is complete.\(\square \)

7.3.4 Convergence Analysis of the Iteration

Theorem 7.4

Assume that M is symmetric and positive semi-definite and that f satisfies a Lipschitz condition in the variable q, i.e., there exists a constant L such that \(\left\| f(q_1)-f(q_2)\right\| \le L\left\| q_1-q_2\right\| \). If

$$\begin{aligned} 0<h<\frac{1}{\sqrt{L\max \limits _{i,j= 1,\ldots , s}\int _{0}^1|l_j(c_iz)(1-z)|dz}}, \end{aligned}$$
(7.27)

then the fixed-point iteration for the method (7.16) is convergent.

Proof

Following Definition 7.1, the first formula of (7.16) can be rewritten as

$$\begin{aligned} Q&=\phi _{0}(c^{2}V)(e\otimes q_{0})+hc\phi _{1}(c^{2}V) (e\otimes p_{0})+h^2A(V)f(Q), \end{aligned}$$
(7.28)

where \(c=(c_1,\ldots ,c_s)^{\intercal },\ e=(1,\ldots ,1)^{\intercal },\ Q=\big (\tilde{q}_1^{\intercal },\ldots ,\tilde{q}_s^{\intercal }\big )^{\intercal },\ f(Q)=\big (f(\tilde{q}_1)^{\intercal },\ldots ,f(\tilde{q}_s)^{\intercal }\big )^{\intercal },\) \( A(V)=\big (a_{ij}(V)\big )_{s\times s}\), and the \(d\times d\) blocks \(a_{ij}(V)\) and the block diagonal matrices \(\phi _{0}(c^{2}V),\ c\phi _{1}(c^{2}V)\) are defined by

$$\begin{aligned} \begin{aligned} a_{ij}(V)&:= c_i^2\int _{0}^1l_j(c_iz)(1-z)\phi _1\big ((1 -z)^2c_i^2V\big )dz,\\ \phi _{0}(c^{2}V)&:= diag \big (\phi _{0}(c_1^{2}V),\ldots ,\phi _{0}(c_s^{2}V)\big ),\\ c\phi _{1}(c^{2}V)&:= diag \big (c_1\phi _{1}(c_1^{2}V),\ldots ,c_s\phi _{1}(c_s^{2}V)\big ).\\ \end{aligned} \end{aligned}$$

It follows from Proposition 2.1 in [18] that \(\left\| \phi _1\big ((1 -z)^2c_i^2V\big )\right\| \le 1\). Since \(0\le c_i\le 1\), we then obtain

$$\begin{aligned} \begin{aligned}\left\| a_{ij}(V)\right\|&\le c_i^2\int _{0}^1|l_j(c_iz)(1-z)|dz\le \int _{0}^1|l_j(c_iz)(1-z)|dz.\\ \end{aligned} \end{aligned}$$

Let

$$\varphi (x)=\phi _{0}(c^{2}V)(e\otimes q_{0})+hc\phi _{1}(c^{2}V) (e\otimes p_{0})+h^2A(V)f(x).$$

Then,

$$\begin{aligned} \begin{aligned} \left\| \varphi (x)-\varphi (y)\right\|&=\left\| h^2A(V)f(x)-h^2A(V)f(y)\right\| \le h^2L\left\| A(V)\right\| \left\| x-y\right\| \\&\le h^2L\max \limits _{i,j= 1,\ldots , s}\int _{0}^1|l_j(c_iz)(1-z)|dz\left\| x-y\right\| , \end{aligned} \end{aligned}$$

which means that \(\varphi (x)\) is a contraction under the assumption (7.27). The well-known Contraction Mapping Theorem then ensures the convergence of the fixed-point iteration. The proof is complete. \(\square \)

Remark 7.5

We note that the convergence of the methods is independent of \(\left\| M\right\| \). This point is of prime importance especially for highly oscillatory systems where \(\left\| M\right\| \gg 1\), which will be shown by the numerical results of Problem 2 in Sect. 7.4.
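As a small illustration of this remark (an added sketch), the right-hand side of (7.27) can be evaluated once the nodes and the Lipschitz constant are known; \(\left\| M\right\| \) does not enter.

```python
import numpy as np
from scipy.integrate import quad

def max_stepsize(c, lip):
    """Right-hand side of the convergence condition (7.27)."""
    c = np.asarray(c, dtype=float)
    s = len(c)
    def l(j, x):
        m = np.arange(s) != j
        return np.prod((x - c[m]) / (c[j] - c[m]))
    worst = max(quad(lambda z, i=i, j=j: abs(l(j, c[i] * z) * (1 - z)), 0, 1)[0]
                for i in range(s) for j in range(s))
    return 1.0 / np.sqrt(lip * worst)

# e.g. max_stepsize([(3 - 3 ** 0.5) / 6, (3 + 3 ** 0.5) / 6], lip=1.0)
```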

7.3.5 Stability and Phase Properties

In this part we are concerned with the stability and phase properties. We consider the test equation:

$$\begin{aligned} q^{\prime \prime }(t)+\omega ^{2}q(t)=-\varepsilon q(t)\ \ \mathrm {with} \ \ \omega ^{2}+ \varepsilon >0, \end{aligned}$$
(7.29)

where \(\omega \) represents an estimation of the dominant frequency \(\lambda \) and \(\varepsilon =\lambda ^{2}-\omega ^{2}\) is the error of that estimation. Applying (7.16) to (7.29) produces

$$ \left( \begin{array} [c]{c} \tilde{q}\\ h\tilde{p} \end{array} \right) =S(V,z)\left( \begin{array} [c]{c} q_{0}\\ hp_{0} \end{array} \right) , $$

where the stability matrix \(S(V,z)\) is given by

$$ S(V,z)=\left( \begin{array} [c]{cc} \phi _{0}(V)-z\bar{b}^{\intercal }(V)N^{-1}\phi _{0}(c^{2}V) &{} \phi _{1}(V)\!-\!z\bar{b} ^{\intercal }(V)N^{-1}(c\cdot \phi _{1}(c^{2}V))\\ -V\phi _{1}(V)\!-\!zb^{\intercal }(V)N^{-1}\phi _{0}(c^{2}V) &{} \phi _{0}(V)\!-\!zb^{\intercal } (V)N^{-1}(c\cdot \phi _{1}(c^{2}V)) \end{array} \right) $$

with \(N=I+zA(V)\), \(\bar{b}(V)=\Big (I_{1,1},\ldots ,I_{1,s} \Big )^{\intercal },\ b(V)=\Big (I_{2,1},\ldots ,I_{2,s} \Big )^{\intercal }.\)

Accordingly, we have the following definitions of the stability region, the dispersion order and the dissipation order for the method (7.16).

Definition 7.3

(See [30]) Let \(\rho (S)\) be the spectral radius of S,

$$R_{s}=\{(V,z)|\ V>0\ and \ \rho (S)<1\}$$

and

$$R_{p}=\{(V,z)|\ V>0,\ \rho (S)=1\ and \ \mathrm {tr}(S)^{2}<4\det (S)\}.$$

Then \(R_{s}\) and \(R_{p}\) are called the stability region and the periodicity region of the method (7.16), respectively. The quantities

$$\phi (\zeta )=\zeta -\arccos \Big (\frac{\mathrm {tr}(S)}{2\sqrt{\det (S)}}\Big ),\ \ d(\zeta )=1-\sqrt{\det (S)}$$

are called the dispersion error and the dissipation error of the method (7.16), respectively, where \(\zeta =\sqrt{V+z}\). Then, a method is said to be dispersive of order r and dissipative of order s, if \(\phi (\zeta )=\mathscr {O}(\zeta ^{r+1})\) and \(d(\zeta )=\mathscr {O}(\zeta ^{s+1})\), respectively. If \(\phi (\zeta )=0\) and \(d(\zeta )=0\), then the corresponding method is said to be zero dispersive and zero dissipative, respectively.

7.4 Numerical Experiments

As an example of the trigonometric collocation methods (7.16), we choose the node points of the two-point Gauss–Legendre quadrature on the interval [0, 1]:

$$\begin{aligned} \begin{aligned}&c_1=\frac{3-\sqrt{3}}{6},\ \ c_2=\frac{3+\sqrt{3}}{6}. \end{aligned}\end{aligned}$$
(7.30)

Then we choose \(s=2\) in (7.16) and denote the corresponding fourth-order method as LTCM.

The stability region of this method is shown in Fig. 7.1. Here we choose the subset \(V\in [0,100],\ z\in [-5,5]\) and the region shown in Fig. 7.1 only gives an indication of the stability of this method.
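The following Python sketch (added for illustration; the coefficient integrals are computed by quadrature) evaluates S(V, z) and its spectral radius for the LTCM nodes, which is essentially how a picture such as Fig. 7.1 can be produced.

```python
import numpy as np
from scipy.integrate import quad

def l_basis(c, j, x):
    c = np.asarray(c, dtype=float)
    m = np.arange(len(c)) != j
    return float(np.prod((x - c[m]) / (c[j] - c[m])))

def phi0(x):
    return np.cos(np.sqrt(x))

def phi1(x):
    return np.sinc(np.sqrt(x) / np.pi)

def stability_matrix(V, z, c):
    """S(V, z) of Sect. 7.3.5 for the test equation (7.29), coefficients by quadrature."""
    s = len(c)
    bbar = np.array([quad(lambda t, j=j: l_basis(c, j, t) * (1 - t) * phi1((1 - t) ** 2 * V), 0, 1)[0]
                     for j in range(s)])
    b = np.array([quad(lambda t, j=j: l_basis(c, j, t) * phi0((1 - t) ** 2 * V), 0, 1)[0]
                  for j in range(s)])
    A = np.array([[c[i] ** 2 * quad(lambda t, i=i, j=j: l_basis(c, j, c[i] * t) * (1 - t)
                                    * phi1((1 - t) ** 2 * c[i] ** 2 * V), 0, 1)[0]
                   for j in range(s)] for i in range(s)])
    Ninv = np.linalg.inv(np.eye(s) + z * A)
    u = np.array([phi0(ci ** 2 * V) for ci in c])        # phi_0(c^2 V)
    v = np.array([ci * phi1(ci ** 2 * V) for ci in c])   # c . phi_1(c^2 V)
    return np.array([[phi0(V) - z * bbar @ Ninv @ u, phi1(V) - z * bbar @ Ninv @ v],
                     [-V * phi1(V) - z * b @ Ninv @ u, phi0(V) - z * b @ Ninv @ v]])

# spectral radius at one point of the window shown in Fig. 7.1:
c = [(3 - 3 ** 0.5) / 6, (3 + 3 ** 0.5) / 6]
rho = max(abs(np.linalg.eigvals(stability_matrix(50.0, 1.0, c))))
```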

The dissipation error and dispersion error are given respectively by

$$\begin{aligned}\begin{aligned} d(\zeta )&= \frac{\varepsilon ^2}{24(\varepsilon +\omega ^{2})^2}\zeta ^4+\mathscr {O}(\zeta ^{5}),\ \ \ \ \phi (\zeta ) = \frac{\varepsilon ^2}{6(\varepsilon +\omega ^{2})^2}\zeta ^3+\mathscr {O}(\zeta ^{4}). \end{aligned}\end{aligned}$$
Fig. 7.1 Stability region (shaded area) of the method LTCM

Note that when \(M=0\), the method LTCM reduces to a fourth-order RKN method given by the Butcher tableau (7.21) with nodes in (7.30).

In order to show the efficiency and robustness of the fourth-order method LTCM, we select the following integrators from the literature for comparison:

  • TFCM: a fourth-order trigonometric Fourier collocation method in [26] with \(c_1=\frac{3-\sqrt{3}}{6},\ c_2=\frac{3+\sqrt{3}}{6},\ b_1=b_2=1/2,\ r=2\);

  • SRKM1: the symplectic Runge–Kutta method of order five in [20] based on Radau quadrature;

  • EPCM1: the “extended Lobatto IIIA method of order four” in [15], which is an energy-preserving collocation method (the case \(s=2\) in [10]);

  • EPRKM1: the energy-preserving Runge–Kutta method of order four (formula (19) in [1]).

Since all of these methods are implicit, we use the classical waveform Picard algorithm. For each experiment, we first show the convergence rate of the iterations for different error tolerances. Then, for all methods, we set the error tolerance to \(10^{-16}\) and the maximum number of iterations to 5. We display the global errors and, whenever the problem is a Hamiltonian system, the energy errors.

Problem 1

Consider the Hamiltonian equation which governs the motion of an artificial satellite (this problem has been considered in [19]) with the Hamiltonian

$$H(q,p)=\frac{1}{2}p^{\intercal }p+\frac{1}{2}\frac{\kappa }{2}q^{\intercal }q+\lambda \Big (\frac{(q_1q_3+q_2q_4)^2}{r^4}-\frac{1}{12r^2}\Big ),$$

where \(q=(q_1,q_2,q_3,q_4)^{\intercal }\) and \(r=q^{\intercal }q.\) The initial conditions are given on an elliptic equatorial orbit by

$$q_0=\sqrt{\frac{r_0}{2}}\Big (-1,-\frac{\sqrt{3}}{2},-\frac{1}{2},0\Big )^{\intercal },\ \ \ p_0=\frac{1}{2}\sqrt{K^2\frac{1+e}{2}}\Big (1,\frac{\sqrt{3}}{2},\frac{1}{2},0\Big )^{\intercal }.$$

Here \(M=\frac{\kappa }{2}I_4\) and \(\kappa \) is the total energy of the elliptic motion, defined by \(\kappa =\frac{K^2-2|p_0|^2}{r_0}-V_0 \) with \(V_0=-\frac{\lambda }{12r_0^3}.\) The parameters of this problem are chosen as \(K^2=3.98601\times 10^5\), \(r_0=6.8\times 10^3\), \(e=0.1\), \(\lambda =\frac{3}{2}K^2J_2R^2,\ J_2=1.08625\times 10^{-3},\ R=6.37122\times 10^3\). First, the problem is solved on the interval \([0, 10^4]\) with the stepsize \(h=\frac{1}{10}\) to show the convergence rate of iterations. Table 7.1 displays the CPU time of iterations for different error tolerances. Then this equation is integrated on [0, 1000] with the stepsizes \(1/2^i\) for \(i=2,3,4,5\). The global errors against CPU time are shown in Fig. 7.2i. We finally integrate this problem with the fixed stepsize \(h=1/20\) on the interval \([0,t_{\mathrm {end}}]\) with \(t_{\mathrm {end}}=10, 100, 10^3, 10^4\). The maximum global errors of Hamiltonian energy against CPU time are presented in Fig. 7.2ii.
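For completeness, a possible encoding of the data of Problem 1 is sketched below (an added illustration; the gradient of the perturbing potential is formed by central differences purely for brevity).

```python
import numpy as np

def satellite_setup():
    """Matrix M, right-hand side f(q) = -grad U(q) and initial data of Problem 1 (sketch)."""
    K2, r0, e = 3.98601e5, 6.8e3, 0.1
    J2, R = 1.08625e-3, 6.37122e3
    lam = 1.5 * K2 * J2 * R ** 2
    q0 = np.sqrt(r0 / 2) * np.array([-1.0, -np.sqrt(3) / 2, -0.5, 0.0])
    p0 = 0.5 * np.sqrt(K2 * (1 + e) / 2) * np.array([1.0, np.sqrt(3) / 2, 0.5, 0.0])
    V0 = -lam / (12 * r0 ** 3)
    kappa = (K2 - 2 * p0 @ p0) / r0 - V0
    M = (kappa / 2) * np.eye(4)

    def U(q):
        r = q @ q
        return lam * ((q[0] * q[2] + q[1] * q[3]) ** 2 / r ** 4 - 1 / (12 * r ** 2))

    def f(q, eps=1e-7):                         # central-difference gradient, for brevity only
        g = np.zeros(4)
        for i in range(4):
            d = np.zeros(4)
            d[i] = eps * max(1.0, abs(q[i]))
            g[i] = (U(q + d) - U(q - d)) / (2 * d[i])
        return -g

    return M, f, q0, p0
```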

Table 7.1 Results for Problem 1: The total CPU time (s) of iterations for different error tolerances (tol)
Fig. 7.2 Results for Problem 1. i The logarithm of the global error (GE) over the integration interval against the logarithm of CPU time. ii The logarithm of the maximum global error of Hamiltonian energy (GEH) against the logarithm of CPU time

Problem 2

Consider the Fermi–Pasta–Ulam problem [9].

The Fermi–Pasta–Ulam problem is a Hamiltonian system with the Hamiltonian

$$\begin{aligned} \begin{aligned} H(y,x) =\,\,&\frac{1}{2}\sum \limits _{i=1}^{2m}y_{i}^{2}+\frac{\omega ^{2}}{2}\sum \limits _{i=1}^{m}x_{m+i}^{2}+\frac{1}{4} \Big [(x_{1}-x_{m+1})^{4}\\&+\sum \limits _{i=1}^{m-1}(x_{i+1}-x_{m+i+1}-x_{i}-x_{m+i} )^{4}+(x_{m}+x_{2m})^{4}\Big ], \end{aligned} \end{aligned}$$

where \(x_{i}\) is a scaled displacement of the ith stiff spring, \(x_{m+i}\) represents a scaled expansion (or compression) of the ith stiff spring, and \(y_{i},\ y_{m+i}\) are their velocities (or momenta). This system can be rewritten as

$$ x^{\prime \prime }(t)+Mx(t)=-\nabla U(x),\qquad t\in [t_{0},t_{\mathrm {end}}], $$

where

$$\begin{aligned}&M=\left( \begin{array}{cc} \mathbf {0}_{m\times m} &{} \mathbf {0}_{m\times m} \\ \mathbf {0}_{m\times m} &{} \omega ^{2}I_{m\times m} \end{array} \right) ,\\ U(x)=\frac{1}{4}\Big [(x_{1}-x_{m+1})^{4}&+\textstyle \sum \limits _{i=1} ^{m-1}(x_{i+1}-x_{m+i+1}-x_{i}-x_{m+i})^{4}+(x_{m}+x_{2m})^{4}\Big ]. \end{aligned}$$

Following [9], we choose

$$ m=3,\ x_{1}(0)=1,\ y_{1}(0)=1,\ x_{4}(0)=\frac{1}{\omega },\ y_{4}(0)=1, $$

with zero for the remaining initial values.
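A possible encoding of this problem is sketched below (an added illustration; the gradient of U is formed by central differences purely for brevity).

```python
import numpy as np

def fpu_setup(m=3, omega=100.0):
    """Matrix M, right-hand side f(x) = -grad U(x) and initial data of Problem 2 (sketch)."""
    d = 2 * m
    M = np.zeros((d, d))
    M[m:, m:] = omega ** 2 * np.eye(m)

    def U(x):                                   # indices shifted: x[0] corresponds to x_1
        terms = [(x[0] - x[m]) ** 4]
        terms += [(x[i + 1] - x[m + i + 1] - x[i] - x[m + i]) ** 4 for i in range(m - 1)]
        terms += [(x[m - 1] + x[2 * m - 1]) ** 4]
        return 0.25 * sum(terms)

    def f(x, eps=1e-6):                         # central-difference gradient, for brevity only
        g = np.zeros(d)
        for i in range(d):
            e = np.zeros(d)
            e[i] = eps
            g[i] = (U(x + e) - U(x - e)) / (2 * eps)
        return -g

    x0, y0 = np.zeros(d), np.zeros(d)
    x0[0], y0[0], x0[m], y0[m] = 1.0, 1.0, 1.0 / omega, 1.0   # x_1, y_1, x_{m+1}, y_{m+1}
    return M, f, x0, y0
```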

First, the problem is solved on the interval [0, 1000] with the stepsize \(h=\frac{1}{100}\) and \(\omega =100,\ 200\) to show the convergence rate of iterations. See Table 7.2 for the total CPU time of iterations for different error tolerances. It can be observed that when \(\omega \) increases, the convergence rates of LTCM and TFCM are almost unaffected. However, the convergence rates of the other methods vary greatly as \(\omega \) becomes large.

We then integrate the system on the interval [0, 50] with \(\omega =50,100,150,200\) and the stepsizes \(h=1/(20\times {2^{j}})\) for \( j=1,\ldots ,4.\) The global errors are shown in Fig. 7.3. Finally, we integrate this problem with a fixed stepsize \(h=1/100\) on the interval \([0,t_{\mathrm {end}}]\) with \(t_{\mathrm {end}}=1, 10, 100, 1000.\) The maximum global errors of Hamiltonian energy are presented in Fig. 7.4. Here, it is noted that some results are too large, and hence we do not plot the corresponding points in Figs. 7.3 and 7.4. A similar situation occurs in the next two problems.

Table 7.2 Results for Problem 2: The total CPU time (s) of iterations for different error tolerances (tol)
Fig. 7.3 Results for Problem 2. The logarithm of the global error (GE) over the integration interval against the logarithm of CPU time

Fig. 7.4 Results for Problem 2. The logarithm of the maximum global error of Hamiltonian energy (GEH) against the logarithm of CPU time

Problem 3

Consider the nonlinear Klein-Gordon equation [17]

$$ \left\{ \begin{array} [c]{l} \frac{\partial ^{2}u}{\partial t^{2}}-\frac{\partial ^{2}u}{\partial x^{2}}=-u^3-u,\ \ \ 0<x<L,\ \ t>0,\\ u(x,0)=A(1+\cos (\frac{2\pi }{L}x)),\ \ u_{t}(x,0)=0,\ \ u(0,t)=u(L,t), \end{array}\right. $$

with \(L=1.28\), \(A=0.9\). Carrying out a semi-discretization on the spatial variable by using second-order symmetric differences yields

$$\begin{aligned} \begin{array}{l} \frac{d^2U}{dt^2}+ MU=F(U),\ \ \ 0<t\le t_{\mathrm {end}},\\ \end{array} \end{aligned}$$

where \(U(t)=\big (u_1(t),\ldots ,u_N(t)\big )^{\intercal }\) with \(u_i(t)\approx u(x_i,t)\) for \( i=1,\ldots ,N\),

$$\begin{aligned} M=\frac{1}{\varDelta x^2}\left( \begin{array} [c]{ccccc} 2 &{}-1 &{}&{} &{}-1\\ -1 &{}2 &{} -1&{} &{} \\ &{}\ddots &{}\ddots &{}\ddots &{} \\ &{}&{}-1 &{}2 &{} -1\\ -1 &{} &{} &{}-1&{}2 \\ \end{array} \right) _{N\times N} \end{aligned}$$

with \(\varDelta x= L/N\), \(x_i = i\varDelta x,\) \( F(U)=\big (-u_1^3-u_1,\ldots ,-u_N^3-u_N\big )^{\intercal }\) and \(N=32\). The corresponding Hamiltonian of this system is

$$\begin{aligned} H(U',U)=\frac{1}{2}U'^{\intercal }U'+\frac{1}{2}U^{\intercal }MU+\frac{1}{2}u^2_{1}+\frac{1}{4}u^4_{1}+\cdots + \frac{1}{2}u^2_{N}+\frac{1}{4}u^4_{N}. \end{aligned}$$
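The semi-discretised system is easy to assemble; the following Python sketch (added for illustration) mirrors the data given above.

```python
import numpy as np

def klein_gordon_setup(N=32, L=1.28, A=0.9):
    """Matrix M, nonlinearity F and initial data of the semi-discretised Problem 3 (sketch)."""
    dx = L / N
    x = dx * np.arange(1, N + 1)
    M = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dx ** 2
    M[0, -1] -= 1 / dx ** 2                  # periodic coupling u_0 = u_N
    M[-1, 0] -= 1 / dx ** 2
    F = lambda U: -U ** 3 - U
    U0 = A * (1 + np.cos(2 * np.pi * x / L))
    Up0 = np.zeros(N)
    return M, F, U0, Up0
```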

The problem is solved on the interval [0, 500] with the stepsize \(h=\frac{1}{100}\) to show the convergence rate of iterations. See Table 7.3 for the total CPU time of iterations for different error tolerances. We then solve this problem on [0, 20] with stepsizes \(h=1/(3\times 2^{j})\) for \(j=1,\ldots ,4\). Figure 7.5i shows the global errors. Finally, this problem is integrated with a fixed stepsize \(h=0.002\) on the interval \([0,t_{\mathrm {end}}]\) with \(t_{\mathrm {end}}= 10^i\) for \(i=0,1,2,3\). The maximum global errors of Hamiltonian energy are presented in Fig. 7.5ii.

Table 7.3 Results for Problem 3: The total CPU time (s) of iterations for different error tolerances (tol)
Fig. 7.5 Results for Problem 3. i The logarithm of the global error (GE) over the integration interval against the logarithm of CPU time. ii The logarithm of the maximum global error of Hamiltonian energy (GEH) against the logarithm of CPU time

Problem 4

Consider the wave equation

$$\begin{aligned}\begin{array}{ll} \frac{\partial ^2u}{\partial t^2}-a(x)\frac{\partial ^2u}{\partial x^2}+92u=f(t,x,u),\ \ \ 0<x<1,\ \ t>0,\\ \\ u(0,t)=0,\ \ \ u(1,t)=0,\ \ \ u(x,0)=a(x),\ \ \ u_t(x,0)=0 \end{array} \end{aligned}$$

with \( a(x) = 4x(1-x),\ f(t,x,u)=u^5-a^2(x)u^3+\frac{a^5(x)}{4}\sin ^2(20t)\cos (10t).\) The exact solution of this problem is \(u(x,t) = a(x) \cos (10t).\) Using semi-discretization on the spatial variable with second-order symmetric differences, we obtain

$$\begin{aligned} \begin{array}{ll} \frac{d^2U}{dt^2}+MU=F(t,U),\ U(0)=\big (a(x_1),\ldots ,a(x_{N-1})\big )^{\intercal },\ U'(0)=\mathbf{0}, \ 0<t\le t_{\mathrm {end}}, \end{array} \end{aligned}$$

where \(U(t)=\big (u_{1}(t),\ldots ,u_{N-1}(t)\big )^{\intercal }\) with \(u_{i}(t)\approx u(x_{i},t)\), \(x_i = i\varDelta x\), \(\varDelta x= 1/N\) for \(i=1,\ldots ,N-1,\)

$$\begin{aligned} M=92I_{N-1}+\frac{1}{\varDelta x^2}\left( \begin{array} [c]{ccccc} 2a(x_1)&{}-a(x_1) &{}&{} &{}\\ -a(x_2) &{}2a(x_2) &{} -a(x_2)&{} &{} \\ &{}\ddots &{}\ddots &{}\ddots &{} \\ &{}&{}-a(x_{N-2}) &{}2a(x_{N-2}) &{} -a(x_{N-2})\\ &{} &{} &{}-a(x_{N-1})&{}2a(x_{N-1}) \\ \end{array} \right) , \end{aligned}$$

and

$$\begin{aligned} F(t,U)=\big ( f(t,x_1,u_1), \ldots , f(t,x_{N-1},u_{N-1})\big )^{\intercal }. \end{aligned}$$
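As for the previous problem, the semi-discretisation can be assembled directly (an added sketch):

```python
import numpy as np

def wave_setup(N=40):
    """Matrix M, right-hand side F(t, U) and initial data of the semi-discretised Problem 4 (sketch)."""
    dx = 1.0 / N
    x = dx * np.arange(1, N)                       # interior nodes x_1, ..., x_{N-1}
    a = 4 * x * (1 - x)
    T = 2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)
    M = 92 * np.eye(N - 1) + np.diag(a) @ T / dx ** 2
    def F(t, U):
        return U ** 5 - a ** 2 * U ** 3 + (a ** 5 / 4) * np.sin(20 * t) ** 2 * np.cos(10 * t)
    U0 = a.copy()                                  # u(x, 0) = a(x)
    Up0 = np.zeros(N - 1)
    return M, F, U0, Up0
```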

The problem is solved on the interval [0, 100] with the stepsize \(h=\frac{1}{40}\) to show the convergence rate of iterations. See Table 7.4 for the total CPU time of iterations for different error tolerances. Then, the system is integrated on the interval \([0,100]\) with \(N=40\) and \(h=1/2^j\) for \( j=5,\ldots ,8.\) The global errors are shown in Fig. 7.6.

Remark 7.6

It follows from the numerical results that our method LTCM is very promising in comparison with the classical methods SRKM1, EPCM1 and EPRKM1. Although LTCM has a performance similar to that of TFCM in approximating the solution and preserving the energy, it can be observed from Figs. 7.2i, 7.3 and 7.5i that LTCM performs a bit better than TFCM in approximating the solution. Moreover, it follows from Tables 7.1, 7.2, 7.3 and 7.4 that LTCM shows better convergence of the iterations than TFCM. This means that LTCM can have a lower computational cost when the same error tolerance is required in the iteration procedure.

Table 7.4 Results for Problem 4: The total CPU time (s) of iterations for different error tolerances (tol)
Fig. 7.6 Results for Problem 4: The logarithm of the global error (GE) over the integration interval against the logarithm of CPU time

Remark 7.7

From Figs. 7.2ii, 7.4 and 7.5ii, it can be observed that the energy-preserving Runge–Kutta method EPRKM1 fails to preserve the Hamiltonian energy in these experiments, and the errors seem to grow with the CPU time when the stepsize is reduced. The reason for this phenomenon may be that EPRKM1 does not take advantage of the special structure introduced by the linear term Mq of the oscillatory system (7.1), and its convergence depends on \(\left\| M\right\| \). The method LTCM developed in this chapter makes good use of the matrix M appearing in the oscillatory system (7.1), and its convergence condition is independent of \(\left\| M\right\| \). This property enables LTCM to perform well in preserving the Hamiltonian energy, although it is not an energy-preserving method.

7.5 Conclusions and Discussions

It is known that the trigonometric Fourier collocation method is a kind of collocation method for ODEs (see, e.g. [7, 9, 10, 16, 28]). In this chapter we have investigated a class of trigonometric collocation methods based on Lagrange basis polynomials, the variation-of-constants formula and the idea of collocation for solving multi-frequency oscillatory second-order differential equations (7.1) efficiently. It has been shown that the convergence condition of these trigonometric collocation methods is independent of \(\left\| M\right\| \), which is crucial for solving highly oscillatory systems. This presents an approach to treating multi-frequency oscillatory systems. Numerical experiments were carried out, and the results show that the trigonometric collocation methods based on Lagrange basis polynomials derived in this chapter have remarkable efficiency compared with standard methods in the literature. However, it is believed that other collocation methods based on suitable bases different from the Lagrange basis are also possible for the numerical simulation of ODEs.

The material of this chapter is based on the work by Wang et al. [27].