1 Introduction

Fourth-order ordinary differential equations (ODEs) occur in a number of areas of the applied sciences, including quantum mechanics, fluid mechanics, elasticity, physics and engineering. It is common knowledge that many classes of fourth-order ODEs defy analytical solution; only a small class of these equations can be solved by analytical techniques, so numerical methods become imperative. Owing to this fact, a number of authors have proposed and investigated numerical methods for solving fourth-order ODEs, among which are linear multistep and Runge–Kutta-related methods [1,2,3,4], where the fourth-order ODE must first be transformed into an equivalent system of first-order ODEs before the numerical integration can proceed. The computational inefficiency of this approach prompted direct integrators for fourth-order ODEs, such as the cubic spline collocation tau method [5], the logarithmic collocation method [6], the cubic spline method for fourth-order obstacle problems [7] and the fourth-order initial and boundary value problem integrators of [8]. Other such methods can be found in [9,10,11] and the references therein.

In this paper, we seek to construct and investigate a class of efficient numerical integrators for special fourth-order ODEs of the form

$$\begin{aligned} y^{iv}=f(x,y),\quad y(x_0)=y_0,\quad y'(x_0)=y_0',\quad y''(x_0)=y_0'',\quad y'''(x_0)=y_0''', \end{aligned}$$
(1)

where \(y\in R^r\) and \(f:R\times R^r\rightarrow R^r\) is a continuous vector-valued function. The special feature of (1) is that f does not depend explicitly on \(y',\,y'',\,y'''\). A typical example of (1) is the ill-posed problem of a beam on an elastic foundation, which has important engineering applications; this problem has been studied in [11, 12].

In line with the direct numerical integrators of Runge–Kutta type for special third-order ODEs [13], a direct numerical integrator of Runge–Kutta type was recently proposed for (1) [11]; its internal and update stages depend on the first, second and third derivatives of the solution at each step. Here we propose a class of new integrators whose stages do not depend on the derivatives of the solution.

Section 2 is devoted to the formulation of the proposed method. In Sect. 3, we present the theory of B-series and the associated rooted trees, through which the order conditions of the proposed method are derived. The local truncation error and order of convergence of the method are presented in Sect. 4. We present the algebraic order conditions of the method in Sect. 5. As examples, explicit one-stage and two-stage HMFD methods are presented in Sect. 6. Stability and convergence analysis is presented in Sect. 7, numerical experiments are presented in Sect. 8, and conclusions are given in Sect. 9.

2 Formulation of HMFD Method

To formulate the proposed HMFD method, we consider transforming (1) into the following system of first-order initial value problems (IVPs):

$$\begin{aligned} \left( \begin{array}{c} y(x)\\ u(x)\\ v(x)\\ w(x) \end{array}\right) '=\left( \begin{array}{c} u(x)\\ v(x)\\ w(x)\\ f(x,y(x)) \end{array}\right) ,\left( \begin{array}{c} y(x_0)\\ u(x_0)\\ v(x_0)\\ w(x_0) \end{array}\right) =\left( \begin{array}{c} y_0\\ y'_0\\ y''_0\\ y'''_0 \end{array}\right) . \end{aligned}$$
(2)

An s-stage Runge–Kutta method is defined by

$$\begin{aligned} Y_i= & {} y_n+h\sum _{j=1}^s{\bar{a}}_{i,j}f(x_n+c_jh,Y_j), \quad i=1,...,s,\nonumber \\ y_{n+1}= & {} y_n+h\sum _{i=1}^s{\bar{b}}_if(x_n+c_ih,Y_i). \end{aligned}$$
(3)
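For orientation, the following minimal Python sketch illustrates the indirect approach contrasted in the Introduction: the classical fourth-order Runge–Kutta scheme, a standard instance of (3), is applied to the transformed system (2), using Problem 1 of Sect. 8 as a test case. The helper names are illustrative only and are not part of the proposed method.

```python
import numpy as np

def rk4_step(F, x, Y, h):
    """One classical RK4 step for the first-order system Y' = F(x, Y)."""
    k1 = F(x, Y)
    k2 = F(x + h / 2, Y + h / 2 * k1)
    k3 = F(x + h / 2, Y + h / 2 * k2)
    k4 = F(x + h, Y + h * k3)
    return Y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def first_order_system(f):
    """System (2) for a scalar fourth-order ODE y'''' = f(x, y): Y = (y, u, v, w)."""
    return lambda x, Y: np.array([Y[1], Y[2], Y[3], f(x, Y[0])])

# Problem 1 of Sect. 8: y'''' = -4y with exact solution y(x) = exp(x) sin(x)
F = first_order_system(lambda x, y: -4.0 * y)
Y, x, h = np.array([0.0, 1.0, 2.0, 2.0]), 0.0, 0.01
for _ in range(100):
    Y = rk4_step(F, x, Y, h)
    x += h
print(abs(Y[0] - np.exp(x) * np.sin(x)))   # error at x = 1
```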

Applying (3) to (2), the following equations are obtained

$$\begin{aligned} Y_i= & {} y_n+h\sum _{j=1}^s{\bar{a}}_{i,j}\bar{Y}_j,\\ \bar{Y}_i= & {} y'_n+h\sum _{j=1}^s{\bar{a}}_{i,j}\bar{\bar{Y}}_j,\\ \bar{\bar{Y}}_i= & {} y''_n+h\sum _{j=1}^s{\bar{a}}_{i,j}{\bar{\bar{\bar{Y}}}}_j,\\ {\bar{\bar{\bar{Y}}}}_i= & {} y'''_n+h\sum _{j=1}^s{\bar{a}}_{i,j}f(x_n+c_jh,Y_j),\\ y_{n+1}= & {} y_n+h\sum _{i=1}^s{\bar{b}}_i\bar{Y_i},\\ y'_{n+1}= & {} y'_n+h\sum _{i=1}^s{\bar{b}}_i\bar{\bar{Y}}_i,\\ y''_{n+1}= & {} y''_n+h\sum _{i=1}^s{\bar{b}}_i{\bar{\bar{\bar{Y}}}}_i,\\ y'''_{n+1}= & {} y'''_n+h\sum _{i=1}^s{\bar{b}}_if(x_n+c_ih,Y_i). \end{aligned}$$

Eliminating \(\bar{Y}_i\), \(\bar{\bar{Y}}_i\) and \({\bar{\bar{\bar{Y}}}}_i\) in the equations above, we have

$$\begin{aligned} Y_i= & {} y_n+h\sum _{j=1}^s{\bar{a}}_{i,j}y'_n+h^2\sum _{j,k=1}^s{\bar{a}}_{i,j}{\bar{a}}_{j,k}y''_n+h^3\sum _{j,k,l=1}^s{\bar{a}}_{i,j}{\bar{a}}_{j,k}{\bar{a}}_{k,l}y'''_n \\&+\,h^4\sum _{j,k,l,m=1}^s{\bar{a}}_{i,j}{\bar{a}}_{j,k}{\bar{a}}_{k,l}{\bar{a}}_{l,m}f(x_n+c_mh,Y_m),\\ y_{n+1}= & {} y_n+h\sum _{i=1}^s{\bar{b}}_iy'_n+h^2\sum _{i,j=1}^s{\bar{b}}_i{\bar{a}}_{i,j}y''_n+h^3\sum _{i,j,k=1}^s{\bar{b}}_i{\bar{a}}_{i,j}{\bar{a}}_{j,k}y'''_n\\&+\,h^4\sum _{i,j,k,l=1}^s{\bar{b}}_i{\bar{a}}_{i,j}{\bar{a}}_{j,k}{\bar{a}}_{k,l}f(x_n+c_lh,Y_l),\\ y'_{n+1}= & {} y'_n+h\sum _{i=1}^s{\bar{b}}_iy''_n+h^2\sum _{i,j=1}^s{\bar{b}}_i{\bar{a}}_{i,j}y'''_n+h^3\sum _{i,j,k=1}^s{\bar{b}}_i{\bar{a}}_{i,j}{\bar{a}}_{j,k}f(x_n+c_kh,Y_k),\\ y''_{n+1}= & {} y''_n+h\sum _{i=1}^s{\bar{b}}_iy'''_n+h^2\sum _{i,j=1}^s{\bar{b}}_i{\bar{a}}_{i,j}f(x_n+c_jh,Y_j),\\ y'''_{n+1}= & {} y'''_n+h\sum _{i=1}^s{\bar{b}}_if(x_n+c_ih,Y_i). \end{aligned}$$

Denote

$$\begin{aligned} \sum _{j=1}^s{\bar{a}}_{i,j}= & {} c_i,\, \sum _{j,k=1}^s{\bar{a}}_{i,j}{\bar{a}}_{j,k}=\frac{c_i(c_i+1)}{2},\,\sum _{j,k,l=1}^s{\bar{a}}_{i,j}{\bar{a}}_{j,k}{\bar{a}}_{k,l}=\frac{c_i(c_i+1)(c_i+2)}{6},\\ \sum _{i=1}^s{\bar{b}}_i= & {} \sum _{i,j=1}^s{\bar{b}}_i{\bar{a}}_{i,j}=\sum _{i,j,k=1}^s{\bar{b}}_i{\bar{a}}_{i,j}{\bar{a}}_{j,k}=1,\, i=1,...,s. \end{aligned}$$

And

$$\begin{aligned} \sum _{k,l,m=1}^s{\bar{a}}_{i,k}{\bar{a}}_{k,l}{\bar{a}}_{l,m}{\bar{a}}_{m,j}=a_{i,j},\,\,\sum _{j,k,l=1}^s{\bar{b}}_j{\bar{a}}_{j,k}{\bar{a}}_{k,l}{\bar{a}}_{l,i}=b_i. \end{aligned}$$

The above suppositions, together with backward-difference formulas applied to the equations above, lead to the proposed method in vector form as follows:

$$\begin{aligned} y_{n+1}= & {} 4y_n-6y_{n-1}+4y_{n-2}-y_{n-3}+h^4\left( \mathbf b ^T\otimes \mathbf I \right) f(\mathbf Y ),\nonumber \\ \mathbf Y = & {} \frac{1}{6}\left[ \left( \mathbf C _1\otimes \mathbf I \right) y_n-3\left( \mathbf C _2\otimes \mathbf I \right) y_{n-1}+3\left( \mathbf C _3\otimes \mathbf I \right) y_{n-2}-\left( \mathbf C _4\otimes \mathbf I \right) y_{n-3}\right] +h^4\left( \mathbf A \otimes \mathbf I \right) f(\mathbf Y ), \end{aligned}$$
(4)

where \(\mathbf C _1=6\mathbf e +11\mathbf c +6\mathbf c ^2+\mathbf c ^3\), \(\mathbf C _2=6\mathbf c +5\mathbf c ^2+\mathbf c ^3\), \(\mathbf C _3=3\mathbf c +4\mathbf c ^2+\mathbf c ^3\), \(\mathbf C _4=2\mathbf c +3\mathbf c ^2+\mathbf c ^3\), \(\mathbf b =[b_1,...,b_s]^T\), \(\mathbf c =[c_1,...,c_s]^T\), \(\mathbf e =[1,...,1]^T\), \(\mathbf A =[a_{i,j}]_{s\times s}\), \(\mathbf Y =[Y_1,...,Y_s]^T\), \(f(\mathbf Y )=[f(x_n+c_1h,Y_1),...,f(x_n+c_sh,Y_s)]^T\) and \(\mathbf I \) is an identity matrix of appropriate dimension. The coefficients of the methods are summarized in Table 1.

Table 1 General coefficients of HMFD methods
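As a consistency check on these definitions, the following sympy sketch verifies that the combination \(\frac{1}{6}\left[ \mathbf C _1y_n-3\mathbf C _2y_{n-1}+3\mathbf C _3y_{n-2}-\mathbf C _4y_{n-3}\right] \) coincides, componentwise, with the Newton backward-difference form \(y_n+c\nabla y_n+\frac{c(c+1)}{2}\nabla ^2y_n+\frac{c(c+1)(c+2)}{6}\nabla ^3y_n\) obtained from the suppositions above when \(hy'_n\), \(h^2y''_n\) and \(h^3y'''_n\) are replaced by backward differences; this reading of the difference formulas is an assumption of the sketch, not information taken from Table 1.

```python
import sympy as sp

c, yn, ym1, ym2, ym3 = sp.symbols('c y_n y_nm1 y_nm2 y_nm3')
C1 = 6 + 11 * c + 6 * c**2 + c**3
C2 = 6 * c + 5 * c**2 + c**3
C3 = 3 * c + 4 * c**2 + c**3
C4 = 2 * c + 3 * c**2 + c**3
lhs = (C1 * yn - 3 * C2 * ym1 + 3 * C3 * ym2 - C4 * ym3) / 6

# Newton backward differences of the past solution values
d1 = yn - ym1
d2 = yn - 2 * ym1 + ym2
d3 = yn - 3 * ym1 + 3 * ym2 - ym3
rhs = yn + c * d1 + c * (c + 1) / 2 * d2 + c * (c + 1) * (c + 2) / 6 * d3

print(sp.simplify(lhs - rhs))   # -> 0
```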

3 B4-Series and Associated Rooted Trees

As in the case of the RKT, RKN and RK methods for third-, second- and first-order ODEs, when deriving the order conditions for HMFD methods for fourth-order ODEs we need to consider the autonomous case of problem (1):

$$\begin{aligned} y^{iv}=f(y),\quad y(x_0)=y_0,\quad y'(x_0)=y_0',\quad y''(x_0)=y_0'',\quad y'''(x_0)=y_0'''. \end{aligned}$$
(5)

In fact, the equation in (1) can be extended by one dimension \(v=x\) in order to rewrite the initial value problem (IVP) (1) equivalently as the following autonomous problem

$$\begin{aligned} v^{iv}= & {} 0,\,\,v(x_0)=x_0,\,\,v'(x_0)=1,\,\,v''(x_0)=0,\,\,v'''(x_0)=0,\nonumber \\ y^{iv}= & {} f(v,y),\,\,y(x_0)=y_0,\,\,y'(x_0)=y_0',\,\,y''(x_0)=y_0'',\,\,y'''(x_0)=y_0'''. \end{aligned}$$
(6)

Applying the scheme (4) to (6) gives

$$\begin{aligned} V_i= & {} \frac{1}{6}\left[ \mathbf C _1v_n-3\mathbf C _2v_{n-1}+3\mathbf C _3v_{n-2}-\mathbf C _4v_{n-3}\right] \nonumber \\= & {} v_n+hc_iv_n'+\frac{h^2}{2}c_i(c_i+1)v_n''+\frac{h^3}{6}c_i(c_i+1)(c_i+2)v_n''',\nonumber \\ Y_i= & {} \frac{1}{6}\left[ \mathbf C _1y_n-3\mathbf C _2y_{n-1}+3\mathbf C _3y_{n-2}-\mathbf C _4y_{n-3}\right] +h^4\sum _{j=1}^sa_{i,j}f(V_j,Y_j),\nonumber \\ v_{n+1}= & {} 4v_n-6v_{n-1}+4v_{n-2}-v_{n-3}=v_n+hv_n'+h^2v_n''+h^3v_n''',\nonumber \\ y_{n+1}= & {} 4y_n-6y_{n-1}+4y_{n-2}-y_{n-3}+h^4\sum _{i=1}^sb_if(V_i,Y_i). \end{aligned}$$
(7)

Substituting the first part of (6) into (7) gives

$$\begin{aligned} V_i= & {} x_n+c_ih,\nonumber \\ v_{n+1}= & {} x_n+h,\nonumber \\ Y_i= & {} \frac{1}{6}\left[ \mathbf C _1y_n-3\mathbf C _2y_{n-1}+3\mathbf C _3y_{n-2}-\mathbf C _4y_{n-3}\right] +h^4\sum _{j=1}^sa_{i,j}f(x_n+c_jh,Y_j),\nonumber \\ y_{n+1}= & {} 4y_n-6y_{n-1}+4y_{n-2}-y_{n-3}+h^4\sum _{i=1}^sb_if(x_n+c_ih,Y_i). \end{aligned}$$
(8)

Observe that the last two equations of (8) look exactly like (4). This implies that the HMFD method (4) applied to (5) produces the same numerical solution as when (4) is applied to (1). Thus, it suffices to discuss the numerical solution of (5). Hence, the method (4) takes the form

$$\begin{aligned} Y_i= & {} \frac{1}{6}\left[ \mathbf C _1y_n-3\mathbf C _2y_{n-1}+3\mathbf C _3y_{n-2}-\mathbf C _4y_{n-3}\right] +h^4\sum _{j=1}^sa_{i,j}f(Y_j),\nonumber \\ y_{n+1}= & {} 4y_n-6y_{n-1}+4y_{n-2}-y_{n-3}+h^4\sum _{i=1}^sb_if(Y_i). \end{aligned}$$
(9)

Repeated differentiation of the exact solution y(x) of (5) with respect to the independent variable x gives the following:

$$\begin{aligned} y'&=y',\, y''=y'',\, y'''=y''',\,y^{iv}=f(y),\, y^{v}=f'(y)y', \\ y^{vi}&=f''(y)(y',y')+f'(y)y'', y^{vii}=f'''(y)(y',y',y')\\&\quad +3f''(y)(y',y'')+f'(y)y''',\\ y^{viii}&=f^{iv}(y)(y',y',y',y')+6f'''(y)(y',y',y'')+3f''(y)(y'',y'')\\&\quad +4f''(y)(y',y''')+f'(y)f(y). \end{aligned}$$
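These expressions (written here for the scalar case) can be checked symbolically. The sketch below differentiates \(y^{iv}=f(y)\) repeatedly with sympy, re-substituting the differential equation after each differentiation; it is only a verification aid, not part of the derivation.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
f = sp.Function('f')

expr = f(y(x))            # y^{(iv)} = f(y)
derivs = {4: expr}
for k in range(5, 9):
    # differentiate once more and substitute y'''' -> f(y) wherever it appears
    expr = expr.diff(x).subs(y(x).diff(x, 4), f(y(x)))
    derivs[k] = sp.expand(expr)

print(derivs[8])          # scalar form of y^{(viii)} above
```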

3.1 Construction of Rooted Trees

Here, a detailed description of how the relevant trees in this paper are constructed is given. It is easy to associate each of the expressions for the derivatives of y above, and those of higher orders, with rooted trees. The relevant trees consist of four types of vertices, namely the small dot, big dot, small circle and big circle, representing \(y'\), \(y''\), \(y'''\) and f, respectively. A 'branch' of a tree is a line joining its vertices according to naturally defined rules. A line simply indicates differentiation with respect to the components of y, \(y'\), \(y''\) or \(y'''\). When a 'branch' grows to a small dot vertex, it represents differentiation with respect to the components of y; it is with respect to the components of \(y'\) if the 'branch' grows to a big dot vertex, with respect to the components of \(y''\) if the 'branch' grows to a small circle vertex, and with respect to the components of \(y'''\) if the destination vertex is a big circle. This relationship can also be interpreted as a parent–son relationship. For instance, the small dot vertex gives birth to the big dot vertex as its only son, because \(y'\) has only one nonzero derivative, with respect to itself, and none with respect to y, \(y''\) and \(y'''\). The only son of the big dot vertex, in turn, is the small circle vertex, because \(y''\) has only one nonzero derivative, with respect to itself, and none with respect to y, \(y'\) and \(y'''\). Similarly, the small circle vertex's only son is the big circle vertex. The big circle vertex, on the other hand, may have many sons, all of which are small dot vertices, because f depends on y only and can have multiple derivatives with respect to the components of y, but not with respect to \(y'\), \(y''\) and \(y'''\). The relevant trees can be defined recursively by simply modifying the notation used for the trees associated with second-order differential equations in [14].

Definition 1

The set of trees \(T_4\) is defined recursively as follows:

  1. (i)

    the graph consisting of a single small dot vertex (the root) is a member of \(T_4\); the two-, three- and four-vertex stem graphs, obtained by growing a big dot, a small circle and a big circle above it in turn, are also in \(T_4\); and the null tree \(\theta \) is also a member of \(T_4\);

  2. (ii)

    if \(t_1,...,t_m \in \) \(T_4\), then the graph \(t=\left[ t_1,...,t_m \right] _4\), obtained by connecting the roots of \(t_1,...,t_m\) to the big circle vertex of the four-vertex stem, is also in \(T_4\), where the small dot vertex at the bottom of the stem represents the root of the tree. The subscript 4 is a reminder that the tree is associated with fourth-order differential equations and that the 'stem' upon which 'branches' grow consists of a string of four vertices.

Let \(\tau _1\), \(\tau _2\), \(\tau _3\) and \(\tau _4\) denote the first-order, second-order, third-order and fourth-order trees, respectively.

Definition 2

The order \(\rho :T_4\rightarrow N\) of a tree t and the number \(\alpha (t)\) of its distinct labelings are defined recursively as follows:

  1. (i)

    \(\rho (\tau _1)=1\), \(\rho (\tau _2)=2\), \(\rho (\tau _3)=3\), \(\rho (\tau _4)=4\);

  2. (ii)

    for \(t=\left[ t_1,...,t_m \right] _4\in T_4\), \(\rho (t)=4+{\sum _{i=1}^m}\rho (t_i)\). In a nutshell, for each \(t\in T_4\), the order \(\rho (t)\) of a tree t is the number of vertices of the tree. The set of all trees in \(T_4\) of order q is denoted by \(T_{4q}\).

  3. (iii)

    \(\alpha (\theta )=\alpha (\tau _1)=\alpha (\tau _2)=\alpha (\tau _3)=\alpha (\tau _4)=1;\)

  4. (iv)

    if \(t=\left[ t_1^{\mu _1},...,t_m^{\mu _m} \right] _4 \in T_4\), with \(t_i\) distinct for different i and different from \(\theta \), then

    $$\begin{aligned} \alpha (t)= \left( \rho (t)-4\right) !\prod _{i=1}^m\frac{1}{\mu _i!}\left( \frac{\alpha (t_i)}{\rho (t_i)!}\right) ^{\mu _i}. \end{aligned}$$

The first four vertices of a tree are its small dot vertex, big dot vertex, small circle vertex and big circle vertex. Any distinct labeling involves only the remaining \(\rho (t)-4\) vertices. The first four vertices form the root and the ’stem’ upon which the ’branches’ (other vertices) grow.
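For illustration, Definitions 1 and 2 can be implemented directly once a concrete tree encoding is chosen. In the sketch below (the encoding itself is an assumption made for the example), 'theta' denotes the null tree, 'tau1'–'tau4' the four stem trees, and a tuple \((t_1,...,t_m)\) stands for \([t_1,...,t_m]_4\).

```python
from math import factorial
from collections import Counter

def rho(t):
    """Order of a tree: its number of vertices (Definition 2)."""
    if t == 'theta':
        return 0
    if isinstance(t, str):                 # 'tau1', ..., 'tau4'
        return int(t[-1])
    return 4 + sum(rho(s) for s in t)      # t = [t1, ..., tm]_4

def alpha(t):
    """Number of distinct labelings of a tree (Definition 2 (iii)-(iv))."""
    if isinstance(t, str):                 # theta and tau1..tau4
        return 1
    a = factorial(rho(t) - 4)
    for sub, mu in Counter(t).items():     # distinct subtrees with multiplicities
        a *= (alpha(sub) / factorial(rho(sub))) ** mu / factorial(mu)
    return a

t = ('tau1', ('tau1',))                    # a tree of order 1 + 5 + 4 = 10
print(rho(t), alpha(t))
```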

It is important to associate each elementary differential F(t) with a corresponding \(t\in T_4\).

Definition 3

The function F on \(T_4\) is defined recursively by

  1. (i)

    \(F(\theta )\left( y,y',y'',y'''\right) =y\), \(F(\tau _1)\left( y,y',y'',y'''\right) =y'\), \(F(\tau _2)\left( y,y',y'',y'''\right) =y''\), \(F(\tau _3)\left( y,y',y'',y'''\right) =y'''\) and \(F(\tau _4)\left( y,y',y'',y'''\right) =f(y)\);

  2. (ii)

    if \(t=\left[ t_1,...,t_m \right] _4 \in T_4\), then

    $$\begin{aligned} F(t)\left( y,y',y'',y'''\right) =f^{(m)}(y)\left( F(t_1)\left( y,y',y'',y'''\right) ,...,F(t_m)\left( y,y',y'',y'''\right) \right) . \end{aligned}$$

Definition 4

Let \(\beta :T_4\rightarrow R\) be a mapping, with \(\beta (\theta )=1\). The B4-series with coefficient function \(\beta \) is a formal series of the form

$$\begin{aligned} B(\beta ,y)= & {} y+h\alpha (\tau _1)\beta (\tau _1)y'+\frac{h^2}{2}\alpha (\tau _2)\beta (\tau _2)y''+\frac{h^3}{6}\alpha (\tau _3)\beta (\tau _3)y'''+...\nonumber \\= & {} \sum _{t\in T_4}{\frac{h^{\rho (t)}}{\rho (t)!}\alpha (t)\beta (t)F(t)\left( y,y',y'',y'''\right) }. \end{aligned}$$
(10)

In the derivation of order conditions, the following lemma, which states that \(h^4f(.)\) applied to a B4-series generates a B4-series [3, 13, 14], is very important.

Lemma

Let \(B(\beta ,y)\) be a B4-series with coefficient function \(\beta \). Then, \(h^4f(B(\beta ,y))\) is also a B4-series, i.e.,

$$\begin{aligned} h^4f(B(\beta ,y))=B(\beta ^{iv},y), \end{aligned}$$

with

$$\begin{aligned} \beta ^{iv}(\theta )=\beta ^{iv}(\tau _1)=\beta ^{iv}(\tau _2)=\beta ^{iv}(\tau _3)=0,\quad \beta ^{iv}(\tau _4)=24 \end{aligned}$$

and for every other tree \(t=\left[ t_1,...,t_m \right] _4\in T_4\),

$$\begin{aligned} \beta ^{iv}(t)=\rho (t)(\rho (t)-1)(\rho (t)-2)(\rho (t)-3)\prod _{i=1}^m{\beta (t_i)}. \end{aligned}$$

Proof

Since \(B(\beta ,y)=y+O(h)\), it is clear that \(h^4f(B(\beta ,y))\) can be expanded as a Taylor series about y. Furthermore, \(\beta ^{iv}(\theta )=\beta ^{iv}(\tau _1)=\beta ^{iv}(\tau _2)=\beta ^{iv}(\tau _3)=0\), because the expansion starts with \(h^4f(y)\), which also shows that \(\beta ^{iv}(\tau _4)=24\). Proceeding as in the proof of Lemma in [11, 13, 14], we have

$$\begin{aligned} h^4f(B(\beta ,y))= & {} h^4\sum _{m\ge 0}\frac{1}{m!}f^{(m)}(y)\left( B(\beta ,y)-y\right) ^m\\= & {} h^4\sum _{m\ge 0}\frac{1}{m!}\sum _{t_1\in T_4 \backslash \theta }...\sum _{t_m\in T_4 \backslash \theta }\frac{h^{\rho (t_1)+\cdots +\rho (t_m)}}{\rho (t_1)!...\rho (t_m)!}\alpha (t_1)...\alpha (t_m)\\&\beta (t_1)...\beta (t_m) f^{(m)}(y)\left( F(t_1)(y,y',y'',y'''),...,F(t_m)(y,y',y'',y''')\right) \\= & {} \sum _{m\ge 0}\sum _{t_1\in T_4 \backslash \theta }...\sum _{t_m\in T_4 \backslash \theta }\frac{h^{\rho (t)}}{(\rho (t)-4)!}\alpha (t)\frac{\mu _1!...\mu _m!}{m!}\\&\beta (t_1)...\beta (t_m)F(t)(y,y',y'',y''')\\= & {} \sum _{t\in T_4 \backslash \theta }\frac{h^{\rho (t)}}{(\rho (t)-4)!}\alpha (t)\beta (t_1)...\beta (t_m)F(t)(y,y',y'',y'''), \end{aligned}$$

because \(\frac{m!}{\mu _1!...\mu _m!}\) is the number of ways of ordering the labels \(t_1,...,t_m\) in \([t_1,...,t_m]_4\). Finally, with \(\beta ^{iv}\) as defined in the statement of the Lemma,

$$\begin{aligned} h^4f(B(\beta ,y))=\sum _{t\in T_4 \backslash \theta }\frac{h^{\rho (t)}}{\rho (t)!}\alpha (t)\beta ^{iv}(t)F(t)(y,y',y'',y''')=B(\beta ^{iv},y). \end{aligned}$$

\(\square \)
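In the same encoding as the sketch of Sect. 3.1 (and reusing the function rho defined there), the coefficient mapping of the Lemma can be written as follows; the argument beta is any coefficient function on trees.

```python
def beta_iv(t, beta):
    """Coefficient function of B(beta^{iv}, y) = h^4 f(B(beta, y)) (Lemma, Sect. 3)."""
    if t in ('theta', 'tau1', 'tau2', 'tau3'):
        return 0
    if t == 'tau4':
        return 24
    r = rho(t)                       # rho from the sketch in Sect. 3.1
    prod = 1
    for sub in t:                    # t = [t1, ..., tm]_4
        prod *= beta(sub)
    return r * (r - 1) * (r - 2) * (r - 3) * prod
```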

4 Local Truncation Error of HMFD and its Convergence Order

As with the hybrid methods presented in [14], to derive the order conditions of the HMFD methods (which are four-step methods) we regard them as one-step methods of the form

$$\begin{aligned} H_n=H_{n-1}+h\Phi (H_{n-1},h), \end{aligned}$$
(11)

where \(H_n\) is a well-defined numerical solution whose initial point \(H_0\) is generated by some starting procedure (see [14]). The first part of equation (4) (the update formula) can be written as a set of four first-order recurrences by letting

$$\begin{aligned} F_n:=\left( \frac{y_{n+1}-y_n}{h}\right) , \end{aligned}$$

so that

$$\begin{aligned} y_n=y_{n-1}+hF_{n-1}, \end{aligned}$$
(12)

which implies that

$$\begin{aligned} F_n=3F_{n-1}-3F_{n-2}+F_{n-3}+h^3\left( \mathbf b ^T\otimes \mathbf I \right) f(\mathbf Y ). \end{aligned}$$
(13)

Now, let

$$\begin{aligned} G_n:=\left( \frac{F_{n+1}-F_n}{h}\right) , \end{aligned}$$

which implies that

$$\begin{aligned} F_n=F_{n-1}+hG_{n-1}. \end{aligned}$$
(14)

Then, (13) becomes

$$\begin{aligned} G_n=2G_{n-1}-G_{n-2}+h^2\left( \mathbf b ^T\otimes \mathbf I \right) f(\mathbf Y ). \end{aligned}$$
(15)

Now, by letting

$$\begin{aligned} U_n:=\left( \frac{G_{n+1}-G_n}{h}\right) , \end{aligned}$$

we have

$$\begin{aligned} G_n=G_{n-1}+hU_{n-1}. \end{aligned}$$
(16)

Hence, (15) gives,

$$\begin{aligned} U_n=U_{n-1}+h\left( \mathbf b ^T\otimes \mathbf I \right) f(\mathbf Y ). \end{aligned}$$
(17)

The system of Eqs. (12), (14), (16) and (17) can be written as (11) with

$$\begin{aligned} H_n=\left( \begin{array}{c} y_n\\ F_n\\ G_n\\ U_n\end{array}\right) \quad \text {and} \quad \Phi (H_{n-1},h)=\left( \begin{array}{c} F_{n-1}\\ G_{n-1}\\ U_{n-1}\\ \left( \mathbf b ^T\otimes \mathbf I \right) f(\mathbf Y ) \end{array}\right) . \end{aligned}$$

The vector \(H_n\) is an approximation for \(w_n=w(x_n,h)\), where w is the exact-value function defined by

$$\begin{aligned} w(x,h)=\left( \begin{array}{c} y(x)\\ \frac{y(x+h)-y(x)}{h}\\ \frac{F(x+h)-F(x)}{h}\\ \frac{G(x+h)-G(x)}{h}\end{array}\right) . \end{aligned}$$
(18)

The local truncation error of the HMFD method at point \(x_n\) is defined by

$$\begin{aligned} d_n=w_n-w_{n-1}-h\Phi (w_{n-1},h), \end{aligned}$$
(19)

where

$$\begin{aligned} \Phi (w_{n-1},h)=\left( \begin{array}{c} \frac{y(x_n)-y(x_{n-1})}{h}\\ \frac{F(x_n)-F(x_{n-1})}{h}\\ \frac{G(x_n)-G(x_{n-1})}{h}\\ \left( \mathbf b ^T\otimes \mathbf I \right) f(\mathbf Y )\end{array}\right) . \end{aligned}$$
(20)

Supposing that each component of \(\mathbf Y \) in the second part of (4) can be expanded as a B4-series \(Y_i(x_n)=B(\psi _i,y(x_n))\), and writing \(C_{k,i}\) for the ith component of \(\mathbf C _k\), we get

$$\begin{aligned} B(\psi _i,y(x_n))= & {} \frac{1}{6}\left[ C_{1,i}\,y(x_n)-3C_{2,i}B\left( (-1)^{\rho (t)},y(x_n)\right) +3C_{3,i}B\left( (-2)^{\rho (t)},y(x_n)\right) \right. \nonumber \\&\left. -\,C_{4,i}B\left( (-3)^{\rho (t)},y(x_n)\right) \right] +\sum _{j=1}^sa_{i,j}h^4f\left( B(\psi _j,y(x_n))\right) . \end{aligned}$$
(21)

Applying the Lemma of Sect. 3 to (21), we get

$$\begin{aligned} B(\psi _i,y(x_n))= & {} \frac{1}{6}\left[ C_{1,i}\,y(x_n)-3C_{2,i}B\left( (-1)^{\rho (t)},y(x_n)\right) +3C_{3,i}B\left( (-2)^{\rho (t)},y(x_n)\right) \right. \nonumber \\&\left. -\,C_{4,i}B\left( (-3)^{\rho (t)},y(x_n)\right) \right] +\sum _{j=1}^sa_{i,j}B\left( \psi _j^{iv},y(x_n)\right) . \end{aligned}$$
(22)

It follows from (22) that, in vector form,

$$\begin{aligned} \psi (\theta )=\mathbf e ,\quad \psi (\tau _1)=\mathbf c ,\quad \psi (\tau _2)=\frac{1}{2}{} \mathbf c ^2,\quad \text {and} \quad \psi (\tau _3)=\frac{1}{6}{} \mathbf c ^3, \end{aligned}$$

and \(\forall \) tree \(t\in T_4\) with \(\rho (t)\ge 4\),

$$\begin{aligned} \psi _i(t)=\sum _{j=1}^sa_{i,j}\psi _j^{iv}(t)-\frac{1}{6}\left[ 3C_{2,i}(-1)^{\rho (t)}-3C_{3,i}(-2)^{\rho (t)}+C_{4,i}(-3)^{\rho (t)}\right] . \end{aligned}$$
(23)

Moreover, by the Lemma of Sect. 3, for trees \(t=[t_1,...,t_m]_4\in T_4\),

$$\begin{aligned} \psi _j^{iv}(t)=\rho (t)(\rho (t)-1)(\rho (t)-2)(\rho (t)-3)\prod _{i=1}^m{\psi _j(t_i)}. \end{aligned}$$
(24)

By substituting (24) in (23), the coefficients \(\psi _j(t)\) can be generated recursively.

Theorem

For exact starting values, the method (4) is convergent of order p if and only if, for all trees \(t\in T_4\),

$$\begin{aligned} \sum _{i=1}^sb_i\psi _i^{iv}(t)=1+6(-1)^{\rho (t)}-4(-2)^{\rho (t)}+(-3)^{\rho (t)}, \end{aligned}$$
(25)

holds for \(\rho (t)\le p+1\) but fails for at least one tree of order \(p+2\).

Proof

The proof is similar to that of Theorem 1 in [14]. From (18)–(20), it is obvious that the first, second and third components of the local truncation error \(d_n\) vanish, and the fourth component is

$$\begin{aligned} \frac{1}{h^3}\left[ \begin{array}{l} B(1,y(x_n))-4y(x_n)+6B\left( (-1)^{\rho (t)},y(x_n)\right) -4B\left( (-2)^{\rho (t)},y(x_n)\right) +\\ B\left( (-3)^{\rho (t)},y(x_n)\right) -\sum \limits _{i=1}^sb_i{B(\psi _i^{iv},y(x_n))} \end{array}\right] . \end{aligned}$$

The method is of order p if p is the largest integer such that

$$\begin{aligned} d_n=O\left( h^{p+1}\right) . \end{aligned}$$
(26)

The only condition that makes (26) hold for all \(n \ge 0\) is

$$\begin{aligned} \mathbf b ^T\psi ^{iv}(t)=1+6(-1)^{\rho (t)}-(-2)^{\rho (t)+2}+(-3)^{\rho (t)}, \quad \text {for} \quad \rho (t) \le p+1. \end{aligned}$$

\(\square \)

5 Algebraic Order Conditions

A relationship between the coefficients of a numerical method that causes the annihilation of successive terms in the Taylor series expansion of the local truncation error of the method is termed an order condition of the method [14]. To generate such relationships (order conditions) for HMFD methods for trees of different orders, Eq. (25) together with (23) and (24) is used. It is worth noting that the 'order' referred to here is the order of convergence of the HMFD methods, not the order of the rooted trees.

5.1 Fourth-Order Tree

The only tree of order four in \(T_4\) is \(\tau _4=[\theta ]_4\), and the order condition for it is

$$\begin{aligned} \sum _{i=1}^{s}{b_i}=1, \end{aligned}$$
(27)

and (23) gives

$$\begin{aligned} \psi _i(\tau _4)=24\sum _{j=1}^{s}{a_{i,j}}-6c_i^3-11c_i^2-6c_i. \end{aligned}$$

5.2 Fifth-Order Tree

The only tree of order five in \(T_4\) is \(t_{51}=[\tau _1]_4\). The corresponding order condition is

$$\begin{aligned} \sum _{i=1}^{s}{b_ic_i}=-1, \end{aligned}$$
(28)

and from (24) we obtain \(\psi _i^{iv}(t_{51})=120\psi _i(\tau _1)\), which, by (23), implies that

$$\begin{aligned} \psi _i(t_{51})=120\sum _{j=1}^{s}{a_{i,j}c_j}+25c_i^3+60c_i^2+36c_i. \end{aligned}$$

The trees of order up to nine are constructed below in accordance with the description given in Sect. 3.

(Rooted trees of orders four to nine, constructed according to the rules of Sect. 3.1; figure not reproduced.)

The corresponding order conditions of HMFD methods up to trees of order nine are presented in Table 2 below, where all summation indices \(i,\,j,\,k,...\) run from 1 to s.

Table 2 Order conditions

As with two-step hybrid methods, the summation in the order condition associated with any tree can easily be generated as follows: the stem of any tree of order \(\rho (t) \ge 4\) contributes a component of the vector \(\mathbf b \); each terminal small dot vertex contributes a component of the vector \(\mathbf c \); each branch from a big circle vertex to a terminal big dot vertex contributes a component of \(\mathbf c ^2\); each branch that grows from a big circle vertex to a terminal small circle vertex contributes a component of \(\mathbf c ^3\); and each branch growing from a big circle vertex to another big circle vertex contributes an element of the matrix \(\mathbf A \).
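This reading rule is easy to mechanize. The sketch below evaluates the left-hand-side sum of the order condition belonging to a given tree for numerical coefficient arrays A, b and c; the tree encoding extends the one used in the sketch of Sect. 3.1 by marking the three terminal branch types explicitly, and it is an illustration of the rule just described, not code from the paper.

```python
import numpy as np

def branch_weight(branch, A, c):
    """Vector whose i-th entry is the contribution of one branch to stage i."""
    if branch == 'theta':
        return np.ones_like(c)   # the null tree contributes nothing
    if branch == 'tau1':
        return c                 # terminal small dot      -> c_i
    if branch == 'tau2':
        return c**2              # terminal big dot        -> c_i^2
    if branch == 'tau3':
        return c**3              # terminal small circle   -> c_i^3
    # branch growing to another big circle vertex -> an element of A
    inner = np.ones_like(c)
    for sub in branch:
        inner = inner * branch_weight(sub, A, c)
    return A @ inner

def order_condition_lhs(tree, b, c, A):
    """Left-hand side of the order condition read off a tree (stem -> b_i)."""
    prod = np.ones_like(c)
    for br in tree:                  # the branches grown on the stem
        prod = prod * branch_weight(br, A, c)
    return b @ prod

# Examples: order_condition_lhs((), b, c, A) is sum_i b_i (condition (27)) and
# order_condition_lhs(('tau1',), b, c, A) is sum_i b_i c_i (condition (28)),
# with b, c, A taken e.g. from Table 3 or 4 (not reproduced here).
```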

5.3 Simplifying Assumptions

It can be seen in Table 2 that the number of order-condition equations of HMFD methods (like that of every other class of methods, e.g., RK, RKN and THM) increases with the order of the trees, resulting in more equations to be solved for higher-order methods. However, there exist certain relationships that naturally connect the coefficients of numerical methods and which, when exploited appropriately, can reduce the number of independent order-condition equations. These relationships are called simplifying assumptions.

Suppose the HMFD method (4) has stage order q, so that \(Y_i(x_n)=B(\psi _i,y(x_n))\) differs from \(y(x_n+c_ih)\) by terms of order \(h^{q+1}\) [14]; then

$$\begin{aligned} \psi _i(t)=c_i^{\rho (t)} \quad \text {for} \quad \rho (t)\le q. \end{aligned}$$

This implies that

$$\begin{aligned} \psi _i^{iv}(t)=\rho (t)(\rho (t)-1)(\rho (t)-2)(\rho (t)-3)c_i^{\rho (t)-4}, \end{aligned}$$

and (23) gives the set of simplifying assumptions as

$$\begin{aligned} \sum _{j=1}^sa_{i,j}c_j^{\lambda }=\frac{1}{(\lambda +1)(\lambda +2)(\lambda +3)(\lambda +4)}\left[ c_i^{\lambda +4}+\frac{1}{6}\left( 3C_{2,i}(-1)^{\lambda }-48C_{3,i}(-2)^{\lambda }+81C_{4,i}(-3)^{\lambda }\right) \right] , \end{aligned}$$
(29)

where \(0\le \lambda \le q-4\).

6 Construction of Explicit HMFD Methods

Having derived the order conditions for the proposed class of HMFD methods, we present in this section some explicit members of the class. It is noteworthy that the proposed methods possess certain special properties that are responsible for their computational efficiency: for example, although the methods are not self-starting, once the starting values have been obtained the integration proceeds with only \(s-3\) function evaluations per step. These properties are given special consideration in the derivation process.

6.1 One-Stage Explicit HMFD Method

To construct a one-stage method, which is the first member of the proposed class of HMFD methods, algebraic order conditions (see Table 2) up to trees of order 7 are considered. Note that \(s \ge 4\) in any case of HMFD methods. Putting \(s=4\) in the order conditions mentioned, we get

$$\begin{aligned}&b_1+b_2+b_3+b_4=1,\\&b_1c_1+b_2c_2+b_3c_3+b_4c_4=-1,\\&b_1c_1^2+b_2c_2^2+b_3c_3^2+b_4c_4^2=\frac{4}{3},\\&b_1c_1^3+b_2c_2^3+b_3c_3^3+b_4c_4^3=-2, \end{aligned}$$

which is a system of four equations in four unknown parameters (see Table 1) and therefore yields unique values of the unknowns. From Eq. (29) with \(\lambda =0\), the components of the matrix \(\mathbf A \) of the method are obtained. The coefficients are summarized in Table 3.

Table 3 Coefficients of one-stage method (HMFDs1)
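To illustrate the solution of this system, the sympy sketch below solves the four equations for the weights. The abscissa values \(c_1=-3\), \(c_2=-2\), \(c_3=-1\) used here are an assumption suggested by the implementation in Sect. 8.1, where \(Y_1\), \(Y_2\), \(Y_3\) coincide with \(y_{n-3}\), \(y_{n-2}\), \(y_{n-1}\); they are not read off Table 3, and the remaining abscissa \(c_4\) is left free.

```python
import sympy as sp

b1, b2, b3, b4, c4 = sp.symbols('b1 b2 b3 b4 c4')
b = [b1, b2, b3, b4]
c = [-3, -2, -1, c4]                      # assumed abscissae, c4 kept symbolic
rhs = [1, -1, sp.Rational(4, 3), -2]      # right-hand sides of the four conditions
eqs = [sp.Eq(sum(bi * ci**k for bi, ci in zip(b, c)), rhs[k]) for k in range(4)]

sol = sp.solve(eqs, b)                    # unique solution for c4 not in {-3, -2, -1}
for bi in b:
    print(bi, '=', sp.simplify(sol[bi]))
```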

6.2 Two-Stage Explicit HMFD Method

To obtain a two-stage method, \(s=5\) is substituted into the order conditions up to trees of order nine in Table 2, which gives

$$\begin{aligned}&b_1+b_2+b_3+b_4+b_5=1,\\&b_1c_1+b_2c_2+b_3c_3+b_4c_4+b_5c_5=-1,\\&b_1c_1^2+b_2c_2^2+b_3c_3^2+b_4c_4^2+b_5c_5^2=\frac{4}{3},\\&b_1c_1^3+b_2c_2^3+b_3c_3^3+b_4c_4^3+b_5c_5^3=-2,\\&b_1c_1^4+b_2c_2^4+b_3c_3^4+b_4c_4^4+b_5c_5^4=\frac{33}{10},\\&b_1c_1^5+b_2c_2^5+b_3c_3^5+b_4c_4^5+b_5c_5^5=-\frac{35}{6}. \end{aligned}$$

The system of equations above, coupled with (29) for \(\lambda =0,1\), is solved. A unique set of coefficients for the two-stage method is obtained, as given in Table 4.

Table 4 Coefficients of two-stage method (HMFDs2)
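The right-hand sides appearing in the two systems above follow directly from (25): for the trees whose branches are all terminal small dot vertices (order \(4+k\)), (24) gives \(\psi ^{iv}=(4+k)(3+k)(2+k)(1+k)c^k\), so (25) reduces to \(\sum _ib_ic_i^k\) equal to the right-hand side of (25) divided by \((4+k)(3+k)(2+k)(1+k)\). The short sketch below reproduces the values \(1,-1,\frac{4}{3},-2,\frac{33}{10},-\frac{35}{6}\).

```python
from fractions import Fraction

def rhs(p):
    """Right-hand side of the order condition (25) for a tree of order p."""
    return 1 + 6 * (-1)**p - 4 * (-2)**p + (-3)**p

for k in range(6):
    p = 4 + k
    denom = p * (p - 1) * (p - 2) * (p - 3)
    print(k, Fraction(rhs(p), denom))     # 1, -1, 4/3, -2, 33/10, -35/6
```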

7 Stability and Convergence Analysis

The update stage of the HMFD method (4), which is represented by its autonomous form (9), can be written as follows:

$$\begin{aligned} \sum _{i=0}^4\gamma _iy_{n+1-i}-h^4\sum _{i=1}^sb_if(Y_i)=0. \end{aligned}$$
(30)

7.1 Zero Stability

The HMFD method is said to be zero stable if the roots \(\xi _j\), \(j=1,2,3,4\), of the first characteristic polynomial \(\chi (\xi )\), given by

$$\begin{aligned} \chi (\xi )=\sum _{i=0}^4\gamma _i\xi ^{4-i}, \end{aligned}$$
(31)

satisfy \(\left| \xi _j\right| \le 1\), \(j=1,2,3,4\), and the multiplicity of any root with \(\left| \xi _j\right| =1\) does not exceed 4 (see [15, 16]).

7.2 Consistency

The HMFD method is said to be consistent if it has order \(p\ge 1\).

Remark

We note that the first characteristic polynomial (31) associated with the method (30) is

$$\begin{aligned} \chi (\xi )=\xi ^4-4\xi ^3+6\xi ^2-4\xi +1=(\xi -1)^4, \end{aligned}$$

so \(\xi =1\) is a root of multiplicity four. Therefore, the HMFD method is zero stable. We also note from Table 2 that the minimum order p of the method is 4, which implies that it is consistent. Hence, we conclude that the HMFD method is convergent.
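A quick numerical confirmation of the Remark:

```python
import numpy as np

# chi(xi) = xi^4 - 4 xi^3 + 6 xi^2 - 4 xi + 1 = (xi - 1)^4
roots = np.roots([1, -4, 6, -4, 1])
print(roots)                     # four roots clustered at xi = 1
print(np.max(np.abs(roots)))     # approximately 1, consistent with zero stability
```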

8 Numerical Experiment

This section presents numerical experiments in which the proposed methods, alongside some existing codes, are applied to several test problems. Efficiency of the codes is measured by plotting the \(\log _{10}\) of the maximum error recorded with different step lengths h on a given interval [a, b] against the total number of function calls for each code. While the one-stage method is integrated with step length h, step lengths 2h, 3h and 4h are used for the two-stage, three-stage and four-stage methods, respectively. The acronyms below are used in the paper:

  • HMFDs1: the proposed one-stage explicit HMFD method derived in Sect. 6 of this paper;

  • HMFDs2: the proposed two-stage explicit HMFD method derived in Sect. 6 of this paper;

  • RKFDs3: three-stage explicit RKFD method presented in [11];

  • HMs4: four-stage hybrid method for special second-order ODEs presented in [17];

  • THMs3: three-stage explicit hybrid method given in [18];

  • AWM: collocation method for solving fourth-order ODEs obtained in [6];

  • JTR: fourth-order ODE integrator proposed in [8].

8.1 Implementation

To implement the proposed schemes, three starting values of the solution, in addition to the given initial condition of the problem, are required. These starting values can be obtained by any one-step scheme, for instance the RKFDs3 scheme of [11]. Once the starting values are generated, the next task is to compute the components of the vector \(\mathbf Y \), the update of the solution (\(y_{n+1}\)) and the error (\(e_{n+1}\)) of the method in the step. To compute these quantities at the next grid point, a fixed step length h is added to the previous grid point (\(x_{n+1}=x_n+h\)). If, for example, the proposed HMFDs2 is implemented, only \(Y_4\) and \(Y_5\) need to be computed, because \(Y_1\), \(Y_2\), \(Y_3\) are readily available from the previous step, that is,

$$\begin{aligned} \text {for}\, n=3:\, Y_1^3= & {} y_0\quad (\text {given initial condition});\\ Y_2^3= & {} y_1\quad (\text {computed by RKFDs3});\\ Y_3^3= & {} y_2\quad (\text {computed by RKFDs3});\\&y_3 \quad (\text {computed by RKFDs3});\\ Y_4^3= & {} y_3+...+h^4\sum _{j=1}^3{a_{4,j}f(x_3+c_jh,Y_j^3)};\\ Y_5^3= & {} y_3+...+h^4\sum _{j=1}^4{a_{5,j}f(x_3+c_jh,Y_j^3)};\\&y_4;\,x_4=x_3+h;\,e_4=|y(x_4)-y_4|; \end{aligned}$$
$$\begin{aligned} \text {for}\, n=4:\, Y_1^4= & {} y_1\quad (\text {already computed by RKFDs3 in the previous step});\\ Y_2^4= & {} y_2\quad (\text {already computed by RKFDs3 in the previous step});\\ Y_3^4= & {} y_3\quad (\text {already computed by RKFDs3 in the previous step});\\ Y_4^4= & {} y_4+...+h^4\sum _{j=1}^3{a_{4,j}f(x_4+c_jh,Y_j^4)};\\ Y_5^4= & {} y_4+...+h^4\sum _{j=1}^4{a_{5,j}f(x_4+c_jh,Y_j^4)};\\&y_5;\,x_5=x_4+h;\,e_5=|y(x_5)-y_5|;\\&\vdots \end{aligned}$$

The procedure continues until \(x_n\ge b\), where b is the upper limit of the solution interval \(\left[ a,b\right] \). The maximum of the errors (\(e_n\)) computed over all steps is then recorded.
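The bookkeeping of one step is summarized by the following Python sketch for a general explicit HMFD method. It assumes that the coefficients \(\mathbf A \) (strictly lower triangular), \(\mathbf b \) and \(\mathbf c \) of Table 3 or 4 are available as numpy arrays and that the first three stages reuse past solution values, exactly as described above; it is an illustration of the procedure, not the authors' code.

```python
import numpy as np

def hmfd_step(y_hist, x_n, h, f, A, b, c):
    """One step of an explicit HMFD method for y'''' = f(x, y).

    y_hist = [y_{n-3}, y_{n-2}, y_{n-1}, y_n]; A, b, c are the method
    coefficients (e.g. from Table 3 or 4, not reproduced here), with A
    strictly lower triangular so that the stages can be computed in order.
    """
    y_nm3, y_nm2, y_nm1, y_n = y_hist
    # C_1, ..., C_4 evaluated componentwise, as defined after Eq. (4)
    C1 = 6 + 11 * c + 6 * c**2 + c**3
    C2 = 6 * c + 5 * c**2 + c**3
    C3 = 3 * c + 4 * c**2 + c**3
    C4 = 2 * c + 3 * c**2 + c**3
    s = len(c)
    Y, F = [None] * s, [None] * s
    for i in range(s):
        Y[i] = (C1[i] * y_n - 3 * C2[i] * y_nm1
                + 3 * C3[i] * y_nm2 - C4[i] * y_nm3) / 6 \
               + h**4 * sum(A[i, j] * F[j] for j in range(i))
        F[i] = f(x_n + c[i] * h, Y[i])
    return (4 * y_n - 6 * y_nm1 + 4 * y_nm2 - y_nm3
            + h**4 * sum(b[i] * F[i] for i in range(s)))

# Main loop of Sect. 8.1 (starting values y0, ..., y3 obtained e.g. by RKFDs3):
#   y_hist, x = [y0, y1, y2, y3], x3
#   while x < b_end:
#       y_new = hmfd_step(y_hist, x, h, f, A, b, c)
#       y_hist, x = y_hist[1:] + [y_new], x + h
#       e = abs(y_exact(x) - y_new)      # error monitored as in the text
```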

8.2 Test Problems

Problem 1

$$\begin{aligned} y''''= & {} -4y,\quad 0\le x \le 10, \\ y(0)= & {} 0, \quad y'(0)=1, \quad y''(0)=2,\quad y'''(0)=2,\\ y(x)= & {} \exp (x)\sin (x). \qquad \text {Source: } [11] \end{aligned}$$

Problem 2

$$\begin{aligned} y''''= & {} y^2+\cos ^2(x)+\sin (x)-1, \quad 0\le x \le 5,\\ y(0)= & {} 0, \quad y'(0)=1, \quad y''(0)=0,\quad y'''(0)=-1,\\ y(x)= & {} \sin (x). \qquad \text {Source: } [11] \end{aligned}$$

Problem 3

$$\begin{aligned} y''''= & {} \frac{3\sin (y)\left( 3+2\sin ^2(y)\right) }{\cos ^7(y)}, \quad 0\le x \le \frac{\pi }{8},\\ y(0)= & {} 0, \quad y'(0)=1, \quad y''(0)=0,\quad y'''(0)=1,\\ y(x)= & {} \sin ^{-1}(x). \qquad \text {Source: } [11] \end{aligned}$$

Problem 4

the ill-posed problem of a beam on elastic foundation:

$$\begin{aligned} y''''= & {} 1-y, \quad 0\le x \le 5,\\ y(0)= & {} 0, \quad y'(0)=0, \quad y''(0)=0,\quad y'''(0)=0,\\ y(x)= & {} 1-1/2\,{\mathrm{e}^{-1/2\,\sqrt{2}x}}\cos \left( 1/2\,\sqrt{2}x \right) -1/2\,{\mathrm{e}^{1/2\,\sqrt{2}x}}\cos \left( 1/2\,\sqrt{2}x \right) . \\&\text {Source: } [11, 12] \end{aligned}$$
Fig. 1

Efficiency curves for problem 1, \(h = 2^{-i},\, i = 2,...,6\)

Fig. 2

Efficiency curves for problem 2, \(h = 2^{-i},\, i = 2,...,6\)

Fig. 3

Efficiency curves for problem 3, \(h = 2^{-i},\, i = 4,...,8\)

Fig. 4

Efficiency curves for problem 4, \(h = 2^{-i},\, i = 2,...,6\)

Fig. 5

Efficiency curves for problem 5

Problem 5

$$\begin{aligned} y_1''''= & {} e^{3x}y_2, \quad y_1(0)=1,\quad y_1'(0)=-1,\quad y_1''(0)=1,\quad y_1'''(0)=-1,\\ y_2''''= & {} 256e^{-x}y_3, \quad y_2(0)=1,\quad y_2'(0)=-4,\quad y_2''(0)=16,\quad y_2'''(0)=-64,\\ y_3''''= & {} 81e^{-x}y_4, \quad y_3(0)=1,\quad y_3'(0)=-3,\quad y_3''(0)=9,\quad y_3'''(0)=-27,\\ y_4''''= & {} 16e^{-x}y_1, \quad y_4(0)=1,\quad y_4'(0)=-2,\quad y_4''(0)=4,\quad y_4'''(0)=-8,\\ y_1(x)= & {} e^{-x},\quad y_2(x)=e^{-4x},\quad y_3(x)=e^{-3x},\quad y_4(x)=e^{-2x},\quad 0\le x \le 2.\\&\text {Source: } [11, 12] \end{aligned}$$

The test problems are first transformed into equivalent systems of second-order equations so that HMs4 and THMs3 can be applied. Figures 1, 2, 3, 4 and 5 show the outcome of the numerical experiments, in which the computational efficiency of the proposed methods is compared with that of existing methods in the literature. For all the test problems solved, the efficiency and accuracy of the proposed methods are more pronounced, especially for the two-stage method (HMFDs2).

Table 5 Results for problem 1
Table 6 Results for problem 2
Table 7 Results for problem 3
Table 8 Results for problem 4
Table 9 Results for problem 5

Tables 5, 6, 7, 8 and 9 compare the numerical results of the proposed methods with those of the fourth-order ODE integrators of linear multistep type presented in [6] and [8], where the maximum error \(\left( |y(x_n)-y_n| \right) \) of each method on the given interval is recorded for each step size h. It is evident that for all the problems solved, our methods record the smallest errors, except for a few cases where the errors are almost equal to those of the multistep methods.

9 Conclusion

A class of four-step hybrid methods (HMFD) for the direct numerical integration of special fourth-order ODEs is proposed. This class of hybrid methods is similar to the class of two-step hybrid methods for solving special second-order ODEs directly [14]. Unlike the RKFD methods [11], HMFD methods do not depend on the derivatives of the solution. Using the theory of B-series with the fourth-order ODE trees that first appeared in [11], algebraic order conditions of the HMFD methods are derived. The order conditions are used to construct one-stage and two-stage methods. The numerical results presented in Figs. 1, 2, 3, 4 and 5 reveal that the new methods are more accurate and efficient than the hybrid methods for solving special second-order ODEs as well as the RKFD methods recently proposed in [11]. Similarly, the results in Tables 5, 6, 7, 8 and 9 suggest the superiority of the proposed schemes over the linear multistep schemes presented in [6] and [8].