1 Introduction

We consider the second-order initial value problem (IVP),

$$\begin{aligned} y^{\prime \prime }=f\left( x,y\right) ,\ y\left( x_{0}\right) =y_{0} ,\ y^{\prime }\left( x_{0}\right) =y_{0}^{\prime }. \end{aligned}$$
(1)

with y, \(y_{0}\), and \(y_{0}^{\prime }\in \mathfrak {R}^m.\) Expression (1) describes various problems in electronics, quantum mechanical scattering theory, celestial mechanics, theoretical physics, and chemistry.

Our group has dealt with methods solving IVPs [1,2,3,4]. Among these methods are Runge–Kutta [5,6,7,8,9,10,11,12,13], Runge–Kutta–Nyström [14,15,16], Numerov-type [17,18,19,20,21,22,23,24], finite differences [25], or other multistep methods [26,27,28,29].

Here we focus on explicit methods of the form

$$\begin{aligned}&y_{n+1}+b_{1}y_{n}+b_{2}y_{n-1}+b_{3}y_{n-2}+b_{4}y_{n-3} \nonumber \\&\quad =h^{2}\left( {\widehat{b}}_{1}y_{n}^{\prime \prime }+{\widehat{b}}_{2}y_{n-1}^{\prime \prime }+{\widehat{b}}_{3}y_{n-2}^{\prime \prime }+{\widehat{b}}_{4}y_{n-3}^{\prime \prime }+{\widehat{b}}_{5}y_{n+\alpha }^{\prime \prime }+{\widehat{b}} _{6}y_{n+\beta }^{\prime \prime }\right) , \end{aligned}$$
(2)

with

$$\begin{aligned}&y_{n+\alpha }+q_{1}y_{n}+q_{2}y_{n-1}+q_{3}y_{n-2}+q_{4}y_{n-3} \nonumber \\&\quad =h^{2}\left( {\widehat{q}}_{1}y_{n}^{\prime \prime }+{\widehat{q}}_{2}y_{n-1}^{\prime \prime }+{\widehat{q}}_{3}y_{n-2}^{\prime \prime }+{\widehat{q}}_{4}y_{n-3}^{\prime \prime }\right) , \end{aligned}$$
(3)
$$\begin{aligned}&y_{n+\beta }+r_{1}y_{n}+r_{2}y_{n-1}+r_{3}y_{n-2}+r_{4}y_{n-3} \nonumber \\&\quad =h^{2}\left( {\widehat{r}}_{1}y_{n}^{\prime \prime }+{\widehat{r}}_{2}y_{n-1}^{\prime \prime }+{\widehat{r}}_{3}y_{n-2}^{\prime \prime }+{\widehat{r}}_{4}y_{n-3}^{\prime \prime }+{\widehat{r}}_{5}y_{n+\alpha }^{\prime \prime }\right) , \end{aligned}$$
(4)

and \(y_j\approx y(x_0+j h)\), \(h=x_{n+1}-x_n=x_n-x_{n-1}=\cdots =x_1-x_0\).

In addition, \(y_{n+\alpha }\approx y(x_n+\alpha h)\) and \(y_{n+\beta }\approx y(x_n+\beta h)\). These four-step methods advance the solution from the data available in \([x_{n-3},x_n]\) to \(x_{n+1}\). Three function evaluations are spent per step, namely \(y^{\prime \prime }_n\), \(y^{\prime \prime }_{n+\alpha }\) and \(y^{\prime \prime }_{n+\beta }\).
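For illustration, one step of (2)–(4) can be organized as in the following Python sketch. This is only a structural sketch: the coefficient vectors b, \({\widehat{b}}\), q, \({\widehat{q}}\), r, \({\widehat{r}}\) and the abscissae \(\alpha ,\beta \) are assumed known (e.g., produced by the module FOUR8 of the Appendix), all identifiers are placeholders of ours, and the actual computations in this work were carried out with Mathematica and MATLAB.

```python
import numpy as np

def step(f, xn, ys, d2ys, h, alpha, beta, b, bh, q, qh, r, rh):
    """One step of (2)-(4) from x_n to x_{n+1}.

    ys   = [y_n, y_{n-1}, y_{n-2}, y_{n-3}]   (previous solution values)
    d2ys = [y''_n, y''_{n-1}, y''_{n-2}, y''_{n-3}]
    """
    # stage (3): y_{n+alpha}
    y_a = -np.dot(q, ys) + h**2 * np.dot(qh, d2ys)
    f_a = f(xn + alpha * h, y_a)
    # stage (4): y_{n+beta}
    y_b = -np.dot(r, ys) + h**2 * (np.dot(rh[:4], d2ys) + rh[4] * f_a)
    f_b = f(xn + beta * h, y_b)
    # final formula (2): y_{n+1}
    y_new = -np.dot(b, ys) + h**2 * (np.dot(bh[:4], d2ys) + bh[4] * f_a + bh[5] * f_b)
    # y''_{n+1} is reused as y''_n of the next step: three new evaluations per step
    return y_new, f(xn + h, y_new)
```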

Four-step methods with variable coefficients have appeared in [30,31,32,33,34]. In the relevant literature, there is less activity on methods with constant coefficients. Recently, Li and Wang [35] studied a variant of Runge–Kutta–Nyström type methods of sixth and seventh orders. Medvedev et al. [36] proposed two-stage sixth-order methods based on an interpolatory approach. Finally, in [37], seventh-order methods were given, also at a cost of three stages per step. Here we study the general case of methods (2)–(4) that attain eighth algebraic order.

Some interesting works on four-step methods were also given in [38, 39]. In particular, in [38] an implicit P-stable method of sixth order was derived at a cost of three stages per step, while in [39] another implicit method is presented and implemented in predictor–corrector form.

2 Basic Theory and Derivation of Methods

The initial requirement for (2) is zero stability [40, p. 467]. The roots of

$$\begin{aligned} \rho (\lambda )=b_4+b_3\lambda +b_2\lambda ^2+b_1\lambda ^3+\lambda ^4=0, \end{aligned}$$
(5)

have to lie within or on the unit circle, and any root on the unit circle is allowed to have multiplicity at most two. There are no stability requirements on the q's and r's, since \(y_{n+\alpha }\) and \(y_{n+\beta }\) do not propagate themselves. It is preferable that the coefficients \(q_i\), \(r_i\), \({\widehat{q}}_i\) and \({\widehat{r}}_i\) take small values, in order to limit round-off errors.

Taking (5) into consideration, we may proceed as follows.

Locally we require \(y_{n+\alpha }=y(x_n+\alpha h)+O(h^8)\). Thus, for (3) we use the nine available data of accuracy at least \(O(h^8)\),

$$\begin{aligned} y_n,\,y_{n-1},\,y_{n-2},\,y_{n-3},\,y^{\prime \prime }_n,\,y^{\prime \prime }_{n-1},\,y^{\prime \prime }_{n-2},\,y^{\prime \prime }_{n-3}, \end{aligned}$$
(6)

and \(\alpha \). Eight of them suffice for attaining \(O(h^8)\) accuracy, so the coefficient \(\alpha \) may be chosen freely. The remaining coefficients in (3) are then given with respect to \(\alpha \).
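As an illustration of this interpolatory procedure, the following sympy sketch (an analogue of ours, not the FOUR8 module itself) solves the eight order conditions of stage (3) for the \(q_i\) and \({\widehat{q}}_i\) with \(\alpha \) kept symbolic; stage (4) and the final formula (2) are treated analogously with the additional data described next.

```python
import sympy as sp

h, alpha, x = sp.symbols('h alpha x')
q  = sp.symbols('q1:5')          # q_1 .. q_4
qh = sp.symbols('qhat1:5')       # \hat{q}_1 .. \hat{q}_4
a  = sp.symbols('a0:9')          # Taylor coefficients of the local solution at x_n

yloc  = sum(a[k] * x**k for k in range(9))   # generic degree-8 polynomial around x_n
d2loc = sp.diff(yloc, x, 2)
Y   = lambda s: yloc.subs(x, s * h)          # y(x_n + s h)
D2Y = lambda s: d2loc.subs(x, s * h)         # y''(x_n + s h)

offsets = [0, -1, -2, -3]
residual = (Y(alpha)
            + sum(q[i] * Y(offsets[i]) for i in range(4))
            - h**2 * sum(qh[i] * D2Y(offsets[i]) for i in range(4)))

# exactness on polynomials of degree <= 7  <=>  an O(h^8) defect in (3)
eqs = [sp.expand(residual).coeff(a[k]) for k in range(8)]
sol = sp.solve(eqs, list(q) + list(qh))
print(sol[q[0]])                             # q_1 as a rational function of alpha, etc.
```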

We proceed by demanding locally \(y_{n+\beta }=y(x_n+\beta h)+O(h^8)\). This is based on the eight data (6), on \(h^2 y^{\prime \prime }_{n+\alpha }\) and on \(\beta \). Again, eight of them are enough, which leaves another two parameters free. Let us choose \(\beta \) and \({\widehat{r}}_5\). The remaining coefficients in (4) are given with respect to \({\widehat{r}}_5,\,\beta \) and \(\alpha \).

Finally, we evaluate \(y_{n+1}=y(x_n+h)+O(h^{10})\) based on the eight data (6), \(h^2 y^{\prime \prime }_{n+\alpha }\) and \(h^2 y^{\prime \prime }_{n+\beta }\). Now we need all of them for \(O(h^{10})\) accuracy. The remaining coefficients in (2) can be given with respect to \(\alpha \) and \(\beta \). Note that the errors of the data entering the final sum have to be of order \(O(h^{10})\) for achieving eighth order [40, p. 468].

The whole procedure above can be summed up in a Mathematica [41] module FOUR8 listed in “Appendix.”

There, the order requirements for \(y_{n+\alpha }\), \(y_{n+\beta }\) and \(y_{n+1}\) may be verified through Taylor series expansions. As input we give the three free parameters \(\alpha \), \(\beta \) and \({\widehat{r}}_5\). The output is a list of six lists containing the \(q_i\), \({\widehat{q}}_i\), \(r_i\), \({\widehat{r}}_i\), \(b_i\) and \({\widehat{b}}_i\), respectively.

Recently, a method of the same form (2)–(4) was studied in [35], using a Runge–Kutta–Nyström (RKN)-type approach. There, the coefficients were fixed to \(b_1=-2\), \(b_2=2\), \(b_3=-2\) and \(b_4=1\), and in consequence only seventh order of accuracy was achieved at a cost of three function evaluations per step. Our methodology applies to the most general case, for various selections of the coefficients in the vector b.

A new eighth-order method can be derived by typing:

figure a

In general, assuming the data in the previous steps are exact, the method (2)–(4) has a local truncation error of the form,

$$\begin{aligned}&T_{-1}D_{-1}+hT_{01}D_{01}+h^{2}T_{11}D_{11}+h^{3}T_{21}D_{21}+\cdots \\&\quad +\,h^{9}\left( T_{81}D_{81}+T_{82}D_{82}+\cdots +T_{8,36}D_{8,36}\right) \\&\quad +\,h^{10}\left( T_{91}D_{91}+T_{92}D_{92}+\cdots +T_{9,72}D_{9,72}\right) +O(h^{11}). \end{aligned}$$

This is analogous to RKN [42] and Numerov-type methods [43] for solving (1). \(T_{ij}\) are the truncation error coefficients depending exclusively on the coefficients, e.g.,

$$\begin{aligned} T_{-1}= & {} 1 + b_1 + b_2 + b_3 + b_4,\\ T_{01}= & {} -1 + b_2 + 2 b_3 + 3 b_4,\\ T_{11}= & {} 1 + b_2 + 4 b_3 + 9 b_4 - 2 {\widehat{b}}_1 - 2 {\widehat{b}}_2 - 2 {\widehat{b}}_3 - 2 {\widehat{b}}_4 - 2 {\widehat{b}}_5 - 2 {\widehat{b}}_6, \end{aligned}$$

etc.

\(D_{ij}\) are elementary differentials formed from the vectors \(y^{\prime }\) and f, the matrix \(f^{\prime }=\frac{\partial f}{\partial y}\) and the tensors \(f^{(k)}=\frac{\partial ^k f}{\partial y^k}\); they are problem dependent, e.g., \(D_{-1}=y\), \(D_{01}=y^{\prime },\,D_{11}=f,\,D_{21}=f^{\prime }y^{\prime }\), etc. For an eighth-order method,

$$\begin{aligned} T_{-1}=T_{01}=T_{11}=T_{21}=T_{31}=\cdots =T_{81}=T_{82}=\cdots =T_{8,36}=0, \end{aligned}$$

must hold. This requirement is fulfilled automatically by the interpolatory procedure described here.

For scalar autonomous problems, some of the differentials appearing above coincide. The principal truncation error of the method FS8p for the scalar autonomous problem then reads:

$$\begin{aligned} h^{10}\cdot \left( \begin{array}{l} 0.000161ff'^4+0.002816f^2f'^2f''+0.003299f'^3y'^2f'' \\ -\,0.000123f^3f''^2+0.004567ff'y'^2f''^2-0.000109y'^4f''^3 \\ +\,0.002116f^3f'f'''+0.011980ff'^2y'^2f'''-0.001590f^2y'^2f''f''' \\ +\,0.002566f'y'^4f''f'''-0.000492fy'^4f'''^2-0.000154f^4f^{(4)} \\ +\,0.005424f^2f'y'^2f^{(4)}+0.002900f'^2y'^4f^{(4)}-0.001020fy'^4f''f^{(4)} \\ -\,0.000123y'^6f'''f^{(4)}-0.000615f^3y'^2f^{(5)}+0.001501ff'y'^4f^{(5)} \\ -\,0.000114y'^6f''f^{(5)}-0.000307f^2y'^4f^{(6)}+0.000080f'y'^6f^{(6)} \\ -\,0.000041fy'^6f^{(7)}-0.000001y'^8f^{(8)} \end{array} \right) . \end{aligned}$$

There are 72 truncation error coefficients of ninth order, as shown above. Only 23 of them appear in this simplified case, but the result is quite indicative. We may derive them easily by modifying analogously the procedure given in [36]. The maximum coefficient observed (i.e., from the reduced set of scalar \(T_{9j}\)'s) is about 0.01198.

Another interesting outcome is that we may produce a method with nullified ninth-order truncation error for scalar autonomous problems. That is, the 23 truncation error coefficients \(T_{9j}\) remaining in the scalar autonomous case become zero. Indeed, writing

figure b

we may get such a method with the coefficients given in quadruple precision.

3 Periodic Problems

Lambert and Watson [44] proposed the use of the test problem [45,46,47],

$$\begin{aligned} y^{\prime \prime }=-\omega ^2y,~~\omega \in \mathrm{\mathfrak {R}}, \end{aligned}$$
(7)

for studying the periodic properties of methods used for solving (1). We define \(v=\omega h\), and after applying the method (2)–(4) to this test problem, we get a difference equation of the form:

$$\begin{aligned} y_{n+1}+m_1(v^2) y_n+m_2(v^2) y_{n-1}+m_3(v^2) y_{n-2}+m_4(v^2) y_{n-3}=0, \end{aligned}$$

with corresponding characteristic equation

$$\begin{aligned} \lambda ^4+m_1(v^2) \lambda ^3+m_2(v^2) \lambda ^2+m_3(v^2) \lambda +m_4(v^2) =0. \end{aligned}$$
(8)

The four roots of (8), \(\lambda _1\), \(\lambda _2\), \(\lambda _3\) and \(\lambda _4\), clearly depend on v. The interval \((0,{\widehat{v}})\) for which \(|\lambda _j|\le 1,\,j=1,2,3,4,\) for every \(v\le {\widehat{v}}\) is called the interval of absolute stability. The interval \((0,{\widehat{v}})\) for which \(|\lambda _1|=|\lambda _2|=1\) and \(|\lambda _j|\le 1,\,j=3,4,\) for every \(v\le {\widehat{v}}\) is called the interval of periodicity.
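Both intervals can be estimated numerically by a brute-force scan over v. In the sketch below, the callable m is a placeholder of ours that must return the coefficients \(m_1(v^2),\ldots ,m_4(v^2)\) of (8) for a concrete method.

```python
import numpy as np

def scan_intervals(m, v_max=5.0, dv=1e-3, tol=1e-7):
    """Estimate the intervals of absolute stability and of periodicity.

    `m(v2)` must return the coefficients [m1, m2, m3, m4] of (8).
    """
    v_stab = v_per = 0.0
    stab, per = True, True
    for v in np.arange(dv, v_max, dv):
        mods = np.sort(np.abs(np.roots([1.0] + list(m(v**2)))))
        inside = mods[-1] <= 1.0 + tol
        if stab and inside:
            v_stab = v                # all roots still in the closed unit disk
        else:
            stab = False
        if per and inside and mods[-2] >= 1.0 - tol:
            v_per = v                 # two roots of modulus one, the rest inside
        else:
            per = False
    return v_stab, v_per
```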

Method FS8p possesses an interval of periodicity [0, 1.725], while the method FS9sc possesses neither an interval of periodicity nor an interval of absolute stability.

For simplicity, we may also consider the initial conditions,

$$\begin{aligned} y(0)=1,\,y^{\prime }(0)=0. \end{aligned}$$
(9)

Then (7) together with (9) has the solution \(y(x)=\cos \omega x\).

After applying the method (2)–(4) to the problem (7), (9), and expanding the differences, we arrive at an error of the form:

$$\begin{aligned} p=p_{10}(\alpha ,\beta ,{\widehat{r}}_5)v^{10}+p_{12}(\alpha ,\beta ,{\widehat{r}}_5)v^{12}+p_{14}(\alpha ,\beta ,{\widehat{r}}_5)v^{14}+O\left( v^{16}\right) , \end{aligned}$$

where \(p_{10},\,p_{12},\ldots \) are rational functions. This error is called phase lag and represents the angle between the approximate and the true solution of the test problem [48].

If we require \(p_{10}(\alpha ,\beta ,{\widehat{r}}_5)=p_{12}(\alpha ,\beta ,{\widehat{r}}_5)=0\), we derive the following method,

figure c

which has a phase lag of \(O(v^{14})\) instead of the \(O(v^{10})\) of conventional eighth-order methods. FS8pl possesses no interval of periodicity, but its interval of absolute stability is (0, 0.68). The maximum coefficient from the reduced set of scalar \(T_{9j}\)'s is about 0.0273466, i.e., a little larger than the corresponding coefficient of FS8p.

4 Numerical Results

4.1 The Methods

The explicit, three-stage, four-step methods chosen to be tested are:

1. The seventh-order method by Li and Wang [35], named LW7 for simplicity.
2. The seventh-order method with phase-lag order 14 given in [37], named FS7pl.
3. The eighth-order method given here which performs as ninth order for scalar autonomous problems, named FS9sc.
4. The eighth-order method with phase-lag order 14 given here, named FS8pl.
5. The eighth-order method with an extended interval of periodicity given here, named FS8p.

The selection of methods is justified since previous results [35, 37] showed that LW7 and FS7pl outperformed other well-known methods from the literature.

The Runge–Kutta–Nyström pair of orders 8(6) from [49] was used to compute the starting values. In all tests, the number of accurate digits of the solution at the end-point is recorded.

4.2 The Problems

Various well-known problems from the literature were chosen for our tests.

4.2.1 The Model Problem

A first choice is the test equation problem

$$\begin{aligned} y^{\prime \prime }(t)=-25y(t),\;y(0)=0,\;y^{\prime }(0)=5. \end{aligned}$$

with theoretical solution

$$\begin{aligned} y(t) = \sin (5t). \end{aligned}$$

We integrate that problem for \(t\in \left[ 0,20\pi \right] \). The results are presented in Table 1.

Table 1 Accurate digits for the model problem

4.2.2 Bessel Equation

The Bessel equation has the form (see [50,51,52,53]),

$$\begin{aligned} y^{\prime \prime }=-\left( 100+\frac{1}{4x^2}\right) y, \end{aligned}$$

with initial conditions \(y\left( 1\right) =-0.2459357644513483, y^{\prime }\left( 1\right) =-0.5576953439142885\), for \(x\in [1,32.59406213134967].\)

The theoretical solution of this problem is \(y(x)=\sqrt{x}J_0\left( 10x\right) \), with \(J_0\) the Bessel function of the first kind. The 100th root of the solution is observed at \(x=32.59406213134967\). For a more accurate end-point, appropriate for quadruple precision computations, see [54]. The results are presented in Table 2.
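The quoted initial conditions and end-point can be cross-checked directly with SciPy (a side check only, independent of the tested methods):

```python
import numpy as np
from scipy.special import j0, j1

x0, xend = 1.0, 32.59406213134967
y  = lambda x: np.sqrt(x) * j0(10 * x)                                  # sqrt(x) J0(10x)
dy = lambda x: 0.5 / np.sqrt(x) * j0(10 * x) - 10 * np.sqrt(x) * j1(10 * x)
print(y(x0), dy(x0))    # -0.2459357644..., -0.5576953439...  (the initial conditions)
print(j0(10 * xend))    # approximately 0: the end-point is a root of the solution
```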

Table 2 Accurate digits for Bessel

4.2.3 The Orbital Problem of Stiefel and Bettis

The well-known orbital problem of Stiefel and Bettis [55] follows:

$$\begin{aligned} z^{\prime \prime }+z=0.001 e^{it},\, z(0)=1,\, z^{\prime }(0)=0.9995i,\, z \in \mathbb {C}. \end{aligned}$$

For this problem, the theoretical solution represents motion on a perturbation of a circular orbit in the complex plane and is given by [56,57,58],

$$\begin{aligned} z(t)= & {} u(t)+iv(t),\, \, u,v \in \mathfrak {R},\\ u(t)= & {} \cos (t) + 0.0005\, t \sin (t), \\ v(t)= & {} \sin (t) - 0.0005\, t \cos (t). \end{aligned}$$

The point z(t) spirals outwards, so at time t its distance from the origin is,

$$\begin{aligned} g(t)=\left[ u^2(t)+v^2(t) \right] ^{\frac{1}{2}}=\left[ 1+(0.0005t)^2 \right] ^{\frac{1}{2}}. \end{aligned}$$
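The closed-form solution and the distance formula are easily verified symbolically, e.g. with sympy (a quick check, not part of the integration):

```python
import sympy as sp

t = sp.symbols('t', real=True)
u = sp.cos(t) + sp.Rational(1, 2000) * t * sp.sin(t)
v = sp.sin(t) - sp.Rational(1, 2000) * t * sp.cos(t)
print(sp.simplify(sp.diff(u, t, 2) + u - sp.Rational(1, 1000) * sp.cos(t)))  # 0
print(sp.simplify(sp.diff(v, t, 2) + v - sp.Rational(1, 1000) * sp.sin(t)))  # 0
print(sp.simplify(u**2 + v**2))                                              # 1 + t**2/4000000
```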

We solve the following equivalent real-valued problem [59],

$$\begin{aligned} u^{\prime \prime }+u= & {} 0.001 \cos (t),\, u(0)=1,\, u^{\prime }(0)=0,\\ v^{\prime \prime }+v= & {} 0.001 \sin (t),\, v(0)=0,\, v^{\prime }(0)=0.9995, \end{aligned}$$

in \(\left[ 0,20\pi \right] \). The results are presented in Table 3.

Table 3 Accurate digits for Stiefel and Bettis

4.2.4 Wave Equation

The following nonlinear problem was introduced in [60],

$$\begin{aligned} \frac{\partial ^2u}{\partial t^2}=gd\left( x \right) \frac{\partial ^2u}{ \partial x^2}+\frac{1}{4}\lambda ^2\left( x,u\right) u,~ x\in \left[ 0,b\right] ,~ t\ge 0, \end{aligned}$$

with initial and boundary conditions,

$$\begin{aligned} \frac{\partial u}{\partial x}\left( t,0\right)= & {} \frac{\partial u}{\partial x}\left( t,b\right) =0, \\ u\left( 0,x\right)= & {} \sin \frac{\pi x}{b}, \\ \frac{\partial u}{\partial t}\left( 0,x\right)= & {} -\frac{\pi }{b}\sqrt{gd}\cos \frac{\pi x}{b}. \end{aligned}$$

The case \(b=100\), \(g=9.81\), \(d=10\left( 2+\cos \frac{2\pi x}{b}\right) \), \(\lambda =\frac{g\mid u\mid }{2500d}\) was implemented. We applied the method of lines across the spatial variable x, using second-order finite differences with \(\Delta x=10\). The problem is thus converted into a system of 11 ODEs. The ninth component \(u_9\) of the system approximates \(u(t,x)\) at \(x=8\Delta x=80.\) Following [60], we calculated the 10th zero of \(u_9\) to be 63.35062926689779 [61]. So we integrated the methods up to this point and recorded the accurate digits in the approximation of the ninth component there.
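A possible Python transcription of this semidiscretization is sketched below. The mirrored ghost-point treatment of the Neumann boundaries is our assumption (the text does not specify it), and the actual runs of this work were performed in MATLAB.

```python
import numpy as np

g, b, dx = 9.81, 100.0, 10.0
x = np.arange(0.0, b + dx, dx)                 # 11 grid points; x[8] = 80
d = 10.0 * (2.0 + np.cos(2.0 * np.pi * x / b))

def F(t, u):
    """Right-hand side of U'' = F(t, U) after the method of lines."""
    uxx = np.empty_like(u)
    uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    uxx[0]  = 2.0 * (u[1]  - u[0])  / dx**2    # u_x(t,0) = 0 via a mirrored ghost point
    uxx[-1] = 2.0 * (u[-2] - u[-1]) / dx**2    # u_x(t,b) = 0 likewise
    lam = g * np.abs(u) / (2500.0 * d)
    return g * d * uxx + 0.25 * lam**2 * u

u0  = np.sin(np.pi * x / b)                                   # initial displacement
du0 = -(np.pi / b) * np.sqrt(g * d) * np.cos(np.pi * x / b)   # initial velocity
```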

The corresponding results are given in Table 4.

Table 4 Accurate digits for the wave problem

4.2.5 Nonlinear Problem

$$\begin{aligned} y^{\prime \prime }=-100y+\sin y, \end{aligned}$$

with \(y\left( 0\right) =0\), \(y^{\prime }\left( 0\right) =1\), for \(x\in [0,20\pi ].\) The analytic solution is not known, but we have found that \(y\left( 20\pi \right) =3.928239914184\cdot 10^{-4}\) [62,63,64]. The corresponding results are given in Table 5.
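The quoted reference value can be cross-checked with a standard high-order integrator at tight tolerances (again only a side check, not one of the tested methods):

```python
import numpy as np
from scipy.integrate import solve_ivp

rhs = lambda t, z: [z[1], -100.0 * z[0] + np.sin(z[0])]   # z = [y, y']
sol = solve_ivp(rhs, [0.0, 20.0 * np.pi], [0.0, 1.0],
                method='DOP853', rtol=1e-13, atol=1e-14)
print(sol.y[0, -1])     # should agree with 3.928239914184e-4 to many digits
```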

Table 5 Accurate digits for nonlinear

4.2.6 Inhomogeneous Equation

A common test problem is the inhomogeneous equation [65]:

$$\begin{aligned} y^{\prime \prime }(t)=-100y(t)+99\sin (t),\;y(0)=1,\;y^{\prime }(0)=11, \end{aligned}$$

with analytical solution

$$\begin{aligned} y(t)=\cos (10t)+\sin (10t)+\sin (t). \end{aligned}$$
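The solution and the initial data are quickly confirmed symbolically:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.cos(10 * t) + sp.sin(10 * t) + sp.sin(t)
print(sp.simplify(sp.diff(y, t, 2) + 100 * y - 99 * sp.sin(t)))   # 0
print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))                     # 1 11
```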

We integrated this problem for \(t\in \left[ 0,10\pi \right] \) and present the results in Table 6.

Table 6 Accurate digits for inhomogeneous

4.2.7 Two Body Problem

Next, we choose the celebrated two-body problem [66] with eccentricity 0.5,

$$\begin{aligned} y^{\prime \prime }_1= & {} -\frac{y_1}{(y_1^2+y_2^2)^{3/2}},\quad y^{\prime \prime }_2=-\frac{y_2}{(y_1^2+y_2^2)^{3/2}} ,\\ y_1(0)= & {} \frac{1}{2},\quad y^{\prime }_1(0)=0,\quad y_2(0)=0,\quad y^{\prime }_2(0)=\sqrt{3}. \end{aligned}$$

For details on this problem, see [67]. We solved the above equations in the interval \(\left[ 0,200\pi \right] \), since \(y\left( 200\pi \right) =[\frac{1}{2},0]^T\). The results are recorded in Table 7.

Table 7 Accurate digits for two body

4.2.8 Duffing Equation

The Duffing equation has the form,

$$\begin{aligned}&y^{\prime \prime }(t) = -y(t)-y(t)^{3}+\frac{1}{500}\cdot \cos \left( 1.01t\right) ,\\&y(0)=0.2004267280699011, y^{\prime }\left( 0\right) =0, \end{aligned}$$

with an approximate analytical solution given in [52],

$$\begin{aligned} y(t)\approx \begin{array}{l} 0.2001794775368452\cos (1.01t)+2.469461432611\cdot 10^{-4}\cos (3.03t) \\ +\,3.040149839\cdot 10^{-7}\cos (5.05t)+3.743495\cdot 10^{-10}\cos (7.07t) \\ +\,4.609\cdot 10^{-13}\cos (9.09t)+6\cdot 10^{-16}\cos (11.11t). \end{array} \end{aligned}$$

We solved the above equation in the interval \(\left[ 0,\frac{100.5}{1.01}\pi \right] \), since \(y\left( \frac{100.5}{1.01}\pi \right) =0\). The results are recorded in Table 8.

Table 8 Accurate digits for Duffing

4.2.9 Scalar Autonomous

Finally, we introduce a scalar autonomous problem,

$$\begin{aligned} y^{\prime \prime }=-\sqrt{1+y^2}\,\cos y, \end{aligned}$$

with \(y\left( 0\right) =1\), \(y^{\prime }\left( 0\right) =0\), for \(x\in [0,6\pi ].\) We found \(y(6\pi )\approx -0.59346300057945297\) and recorded the accurate digits in Table 9.

Table 9 Accurate digits for scalar autonomous

All computations were performed in MATLAB [68].

4.3 The Tables with the Results

The first six problems are chosen to have a large linear part along with a small nonlinear one. FS7pl and FS8pl perform better there; namely, they cost less to achieve certain accuracies. This is expected due to their high phase-lag order and the exploitation of the other characteristics. When dealing with nonlinear problems, i.e., Sects. 4.2.7–4.2.9, it is obvious that the all-purpose methods FS8p and FS9sc outperform the other ones. Especially in the scalar autonomous case, it is seen that FS9sc justifies its algebraic order.

In the interesting case of the two-body problem, FS8p performs very well even though it is not a symmetric method. This latter property is perhaps needed only for very long integrations. In some cases, it has been shown that high-order conventional (i.e., non-symmetric) methods cannot be outperformed even for realistically large numbers of periods (e.g., time spans comparable to the age of the universe) [66].

Stars in the results indicate that the error was greater than 1.

5 Numerical Instabilities, Inaccuracies, and Implementation Issues

k-step methods of the form

$$\begin{aligned} \sum _{j=0}^{k} a_j y_{n+j}=h^2\sum _{j=0}^{k} b_j y''_{n+j} \end{aligned}$$

are notorious for various numerical instabilities. Analogously, the methods considered here have to be implemented carefully in order to avoid problems that may arise for the various reasons discussed below.

5.1 Double Root on the Unit Circle

Methods (2)–(4) share a characteristic polynomial of the form,

$$\begin{aligned} \rho (\lambda )=b_4+b_3\lambda +b_2\lambda ^2+b_1\lambda ^3+\lambda ^4=(\lambda -1)^2(\lambda -\lambda _3)(\lambda -\lambda _4). \end{aligned}$$

It is not proper to propagate the solution using (2) directly [40, p. 472]. It is suggested to split \(\rho (\lambda )\) into,

$$\begin{aligned} \rho (\lambda )=\rho _1(\lambda )\rho _2(\lambda )=(\lambda -1)\rho _2(\lambda ). \end{aligned}$$

Let us concentrate on method FS8p. Then

$$\begin{aligned} \rho (\lambda )=(\lambda -1)^2\left( \lambda ^2+\frac{78}{311}\lambda +1\right) =(\lambda -1)\left( \lambda ^3-\frac{233}{311}\lambda ^2+\frac{233}{311}\lambda -1\right) . \end{aligned}$$
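The consistency of the two factorizations, and the root positions quoted in Sect. 5.2, can be checked directly, e.g.:

```python
import sympy as sp

lam = sp.symbols('lambda')
rho  = (lam - 1)**2 * (lam**2 + sp.Rational(78, 311) * lam + 1)
rho2 = (lam - 1) * (lam**3 - sp.Rational(233, 311) * lam**2
                    + sp.Rational(233, 311) * lam - 1)
print(sp.expand(rho - rho2))          # 0: the two factorizations agree
print(sp.Poly(rho, lam).nroots())     # two roots at 1 and a conjugate pair of modulus one
```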

Now we are able to propagate the solution using the following two formulas,

$$\begin{aligned} u_n= & {} u_{n-1}+h\left( {\widehat{b}}_{1}y_{n}^{\prime \prime }+{\widehat{b}}_{2}y_{n-1}^{\prime \prime }+{\widehat{b}}_{3}y_{n-2}^{\prime \prime }+{\widehat{b}}_{4}y_{n-3}^{\prime \prime }+{\widehat{b}}_{5}y_{n+\alpha }^{\prime \prime }+{\widehat{b}} _{6}y_{n+\beta }^{\prime \prime }\right) , \\ y_{n+1}= & {} hu_n+\left( y_{n-2} - \frac{233}{311}y_{n-1} + \frac{233}{311}y_n\right) . \end{aligned}$$
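In code, this split propagation takes the following shape (a structural sketch of ours: `stages` stands for the evaluation of (3)–(4) and `bh` for the coefficients \({\widehat{b}}_j\), which are not listed explicitly here):

```python
C = 233.0 / 311.0   # the coefficient of the factor rho_2 for FS8p

def propagate(f, x, ys, d2ys, u, h, bh, stages, nsteps):
    """ys, d2ys: the four most recent y_j and y''_j (oldest first); u: current u_n."""
    for _ in range(nsteps):
        fa, fb = stages(x, ys, d2ys, h)          # y''_{n+alpha}, y''_{n+beta} from (3)-(4)
        u = u + h * (bh[0] * d2ys[-1] + bh[1] * d2ys[-2] + bh[2] * d2ys[-3]
                     + bh[3] * d2ys[-4] + bh[4] * fa + bh[5] * fb)
        y_new = h * u + (ys[-3] - C * ys[-2] + C * ys[-1])
        x += h
        ys, d2ys = ys[1:] + [y_new], d2ys[1:] + [f(x, y_new)]
    return ys[-1]
```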

The initial value is

$$\begin{aligned} u_2=\frac{y_3 - \frac{233}{311}y_2 + \frac{233}{311}y_1-y_0}{h}. \end{aligned}$$

Since this division magnifies round-off errors, we evaluate \(u_2\) indirectly as follows. The first three steps can be computed using a small step size with a high-order RKN pair [49]. RKN methods also furnish the values \(y'_0\), \(y'_1\), \(y'_2\), \(y'_3\), \(y''_0\), \(y''_1\), \(y''_2\) and \(y''_3\). Then the initial \(u_2\) can be evaluated through interpolation as:

$$\begin{aligned} u_2\approx & {} \frac{126y'_0}{311} + \frac{224y'_1}{311} + \frac{224y'_2}{311} + \frac{126y'_3}{311} + \left( \frac{136y''_0}{2799} - \frac{46y''_1}{311} + \frac{46y''_2}{311} - \frac{136y''_3}{2799}\right) h \\= & {} \frac{y_3 - \frac{233}{311}y_2 + \frac{233}{311}y_1-y_0}{h}+O(h^8). \end{aligned}$$

In consequence, the first computed value \(y_4\) is based on \(hu_3\), whose error is \(O(h^9)\), i.e., it attains locally eighth order of accuracy as required. We may use higher-order interpolation, since more values may be available from the initialization with RKN using smaller steps, e.g., \(y_{0.5},\,y'_{0.5},\,y''_{1.5},\) etc.
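A quick numerical sanity check of this interpolation formula, with \(y=\sin \) standing in for the starting values, behaves as expected:

```python
import numpy as np

c = 233.0 / 311.0
y, dy, d2y = np.sin, np.cos, lambda s: -np.sin(s)

for h in (0.4, 0.2, 0.1):
    t = h * np.arange(4)                               # x_0 .. x_3
    lhs = (y(t[3]) - c * y(t[2]) + c * y(t[1]) - y(t[0])) / h
    rhs = ((126 * dy(t[0]) + 224 * dy(t[1]) + 224 * dy(t[2]) + 126 * dy(t[3])) / 311
           + h * (136 * d2y(t[0]) / 2799 - 46 * d2y(t[1]) / 311
                  + 46 * d2y(t[2]) / 311 - 136 * d2y(t[3]) / 2799))
    print(h, abs(lhs - rhs))                           # decreases roughly like h**8
```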

5.2 Distribution of Spurious Roots

The other two roots of \(\rho (\lambda )\) have to be placed with some care in the unit circle [69]. In the case where \(b_4=1\) and \(b_1=b_3\), we may experience considerably bad behavior of methods with neighboring spurious roots [70, p. 485]. Here, the two methods with spurious roots on the unit circle (FS8p and FS9sc) do not have any problem of this nature. Their roots are \(-\,0.125402\pm 0.992106i\) and \(-\,0.0769717\pm 0.997033i\), respectively, and are in almost ideal positions.

5.3 Global Error Expression

The value \(\sigma (1)=\sum _{j=1}^6 {\widehat{b}}_j\) appears in the denominator of the expression for the global error [40, p. 471]. Thus, it is preferable to avoid small values of \(\sigma (1)\). All methods presented here (FS8p, FS8pl, FS9sc) have an acceptable \(\sigma (1)\ge 1\).

6 Conclusion

The general case of explicit, three-stage, four-step methods is studied here. Stability requirements, phase lag, and algebraic order conditions for interpolatory-type methods are examined. We derived particular methods with very pleasant characteristics. Their performance in various numerical tests justifies the effort of constructing them.