1 Introduction

1.1 Scope

This work is concerned with numerical methods for the solution of the autonomous system of ordinary differential equations (ODEs):

$$\begin{aligned} \begin{aligned}&{\varvec{u}}'(t)= {\varvec{f}} \bigl ( {\varvec{u}}(t) \bigr ), \quad t\in (t_0, T], \\&{\varvec{u}} (t) = \bigl (u_1(t), \dots , u_M(t) \bigr )^{\mathrm {T}}, \quad {\varvec{f}} ( {\varvec{u}}) = \bigl (f_1( {\varvec{u}}), \dots , f_M( {\varvec{u}} ) \bigr )^{\mathrm {T}}, \end{aligned} \end{aligned}$$
(1.1)

where derivatives of a vector of univariate scalar functions are understood component-wise, posed along with initial data \({\varvec{u}}(t_0) = {\varvec{u}}_0\).

Taylor series methods for the numerical solution of initial-value problems of ODEs compute approximations to the solution at the next time instant using a Taylor polynomial of the unknown. The resulting methods are simple, since the expressions required for the iteration are exactly computable (i.e., with no error) from the equation, and the truncation error is governed by the error term of the Taylor formula, so that the order of accuracy of the global error of the method corresponds to the degree of the Taylor polynomial used. However, their implementation depends on the terms involved in the Taylor series, i.e., derivatives of the right-hand side, whose computation requires intensive symbolic calculus and is specific to each individual problem. Moreover, the need for solving auxiliary nonlinear equations, especially within the implicit versions, makes them computationally expensive, particularly as the required order of accuracy increases.

In this work, we focus on implicit Taylor methods, obtained by computing the Taylor polynomials centered on a future time instant, and often used to solve problems where explicit methods have strong stability restrictions, in particular stiff systems of ODEs (Hairer and Wanner 1996). First of all, we apply a strategy, based on the work by Baeza et al. (2017) for the explicit Taylor method, to efficiently approximate the derivatives of \({\varvec{f}}\). This approximation inherits the ease of implementation and performance of the explicit version.

The implicit character of the method requires the solution of an auxiliary system of equations, usually by Newton’s method, which requires the computation of the Jacobian matrix. This may be an easy task for low-order methods, but the resulting iteration can become complicated as the order of the scheme increases. We propose a new formulation to obtain high-order implicit Taylor schemes that are simpler to implement and more efficient than the exact implicit Taylor methods, which compute derivatives symbolically. This is the main novelty of this work.

That said, we remark that it is not our purpose to present a numerical scheme that can compete with any implicit scheme in any situation, but to introduce a methodology to obtain Rth-order implicit Taylor schemes for systems of M scalar ODEs, with arbitrarily high \(M\in {\mathbb {N}}\), that can be easily implemented and efficiently solved, independently of the complexity of the function \({\varvec{f}}\), thus removing the leading difficulty of exact implicit Taylor methods. Very-high-order implicit methods are a must in some problems: for instance, in dynamical systems and mechanics, there is a need for high-order (at least greater than 12) ODE solvers, especially Taylor integrators, as discussed, for instance, by Jorba and Zou (2005), Barrio et al. (2011), and Abad et al. (2012).

1.2 Related work

Miletics and Molnárka (2004) propose an alternative based on a numerical approximation of the derivatives of f in ODEs of the form \(u'=f(u)\) for the explicit Taylor method up to fourth order, and in Miletics and Molnárka (2005) for the implicit version up to fifth order. Later on, in Baeza et al. (2017), a procedure to numerically approximate the derivatives of \(f(u)=f\circ u\) was presented to generate arbitrarily high-order Taylor schemes, inspired by an approximate Cauchy–Kovalevskaya procedure developed for systems of conservation laws by Zorío et al. (2017), which simplifies the exact version presented by Qiu and Shu (2003). The method presented by Baeza et al. (2017) relies on the approximate computation of the terms that appear in the Taylor polynomials, in terms of function evaluations only, avoiding the explicit computation of the derivatives, and leads to a method which is simple to implement and outperforms its exact counterpart for complex systems.

Further references to implicit Taylor methods addressing combinations of implicit and explicit steps to improve stability or accuracy include Kirlinger and Corliss (1991) and Scott (2000).

1.3 Outline of the paper

The work is organized as follows: in Sect. 2, the basic facts about the exact Taylor methods are reviewed. A general procedure to generate Taylor schemes of arbitrarily high order of accuracy through Faà di Bruno’s formula (Faà di Bruno 1855) is described, as well as its corresponding approximate version presented by Baeza et al. (2017). Section 3 is devoted to the description of the novel formulation of implicit Taylor methods, following an idea akin to that of Baeza et al. (2017). Section 4 describes an efficient implementation of the Newton iteration required to update the solution of implicit Taylor methods. Section 5 presents several numerical experiments in which the approximate version of the implicit Taylor methods is compared against its exact counterpart, as well as against the approximate explicit version. Finally, in Sect. 6, some conclusions are drawn.

2 Taylor methods

2.1 Preliminaries

The (explicit) Rth-order Taylor methods are based on the expansion of the unknown function:

$$\begin{aligned} {\varvec{u}} (t+h) = {\varvec{u}} (t)+h {\varvec{u}}'(t)+\frac{h^2}{2} {\varvec{u}}''(t)+\dots +\frac{h^R}{R!}{\varvec{u}}^{(R)}(t) +\frac{h^{R+1}}{(R+1)!}{\varvec{u}}^{(R+1)}(\xi ) \end{aligned}$$
(2.1)

with \(\xi \) belonging to the open interval \(I(t, t+h)\) defined by t and \(t+h\). This expansion is valid provided \(u_1, \dots , u_{M} \in \smash {{{\mathcal {C}}}^{R}({\bar{I}}(t, t+h))}\) and \(\smash {u_1^{(R+1)}}, \dots , \smash {u_M^{(R+1)}}\) are bounded in \(I(t, t+h)\), where \({\bar{I}}(t, t+h)\) denotes the closure of \(I(t, t+h)\). Consider an equally spaced set of \(N+1\) points \(t_n = t_0 + nh\), \(0\le n \le N\), \(h=(T-t_0)/{N}\). Dropping the last term in (2.1) and taking \(t=t_n\), one obtains the approximation:

$$\begin{aligned} {\varvec{u}}(t_n+h)= {\varvec{u}} (t_{n+1})\approx {\varvec{u}}(t_n)+h {\varvec{u}}'(t_n)+\frac{h^2}{2} {\varvec{u}}''(t_n)+\dots +\frac{h^R}{R!} {\varvec{u}}^{(R)}(t_n). \end{aligned}$$
(2.2)

Then, (1.1) can be used to write:

$$\begin{aligned} {\varvec{u}}^{(k)}(t_n)= \bigl ( {\varvec{f}}( {\varvec{u}}) \bigr )^{(k-1)}(t_n) = \frac{\mathrm {d}^{k-1}}{\mathrm {d} t^{k-1}} \bigl ( {\varvec{f}}( {\varvec{u}}(t)) \bigr )\biggr |_{t= t_n} , \quad 1\le k\le R. \end{aligned}$$
(2.3)

Consequently, the first step to apply Taylor methods is to compute these derivatives up to an appropriate order.

2.2 Faà di Bruno’s formula

The evaluation of high-order derivatives of the function \(t \mapsto ( {\varvec{f}} \circ {\varvec{u}} ) (t)\), which arise in (2.2), is greatly simplified by Faà di Bruno’s formula, as stated by Baeza et al. (2017). To this end, we recall that for a multi-index \({\varvec{s}} = (s_1, \dots , s_r) \in {\mathbb {N}}_0^r\), one defines \(|{\varvec{s}} | := s_1 + \dots + s_r\) and:

$$\begin{aligned} \left( {\begin{array}{c}r\\ {\varvec{s}}\end{array}}\right) := \frac{r!}{s_1! s_2 ! \cdots s_r!}. \end{aligned}$$

Moreover, for \(r \in {\mathbb {N}}\), we define an index set:

$$\begin{aligned} {{\mathcal {P}}}_r := \left\{ {\varvec{s}} \in {\mathbb {N}}_0^r \left| \sum _{ \nu =1}^r \nu s_\nu =r \right. \right\} , \end{aligned}$$

and \(({\varvec{D}}^{{\varvec{s}}} {\varvec{u}})(t)\) to be a matrix of size \(M \times | {\varvec{s}}|\) whose \((s_1 + \dots +s_{j-1}+i)\)th column is given by:

$$\begin{aligned} \bigl ( ({\varvec{D}}^{{\varvec{s}}} {\varvec{u}})(t)\bigr )_{s_1 + \cdots +s_{j-1}+i} = \frac{1}{j!} \frac{\mathrm {d}^j}{\mathrm {d} t^j} {\varvec{u}} (t), \quad i=1 , \dots , s_j, \quad j= 1, \dots , r. \end{aligned}$$
(2.4)

Finally, we denote by \(f^{(k)} \bullet {\varvec{A}}\) the action of the kth-order derivative tensor of f on an \(M \times k\) matrix \({\varvec{A}} = (A_{ij})\):

$$\begin{aligned} f^{(k)} \bullet {\varvec{A}} := \sum _{i_1, \dots , i_k=1}^{M} \frac{\partial ^k f}{\partial u_{i_1} \cdots \partial u_{i_k} } ( {\varvec{u}} ) A_{i_1, 1} \cdots A_{i_k, k}. \end{aligned}$$

Proposition 1

(Faà di Bruno’s formula (Faà di Bruno 1855))

Assume that the functions \(f:{\mathbb {R}}^M \rightarrow {\mathbb {R}}\) and \({\varvec{u}}: {\mathbb {R}} \rightarrow {\mathbb {R}}^M\) are r times continuously differentiable. Then:

$$\begin{aligned} \frac{\mathrm {d}^r}{\mathrm {d} t ^r} f \bigl ( {\varvec{u}}(t) \bigr ) \equiv \bigl (f({\varvec{u}}) \bigr )^{(r)} (t) = \sum _{{\varvec{s}} \in {{\mathcal {P}}}_r} \left( {\begin{array}{c}r\\ {\varvec{s}}\end{array}}\right) \Bigl ( f^{(|{\varvec{s}}|)} \bullet ({\varvec{D}}^{{\varvec{s}}} {\varvec{u}})\Bigr ) (t). \end{aligned}$$
(2.5)

Proposition 1 applies to just one scalar function f, so to obtain all components of, say, \(\smash {({\varvec{f}} ( {\varvec{u}}))^{(k)}} (t_n)\) in (2.3), we must apply (2.5) to each of the components of \({\varvec{f}} = (f_1, \dots , f_M)^{\mathrm {T}}\). Clearly, the matrix \({\varvec{D}}^{{\varvec{s}}} {\varvec{u}}\) is the same for all these components.
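To make the mechanics of (2.5) concrete, the following sketch verifies it symbolically for \(M=1\) and \(r=2\), where \({{\mathcal {P}}}_2=\{(2,0),(0,1)\}\) and the formula reduces to \((f(u))''=f''(u)(u')^2+f'(u)u''\); the test function \(f(x)=\mathrm{e}^{\sin x}\) is an arbitrary choice:

```python
import sympy as sp

t, x = sp.symbols('t x')
u = sp.Function('u')(t)
f = lambda y: sp.exp(sp.sin(y))       # arbitrary smooth test function

# Left-hand side of (2.5) for r = 2: exact derivative of the composition.
lhs = sp.diff(f(u), t, 2)

# Right-hand side: P_2 = {(2,0), (0,1)} gives f''(u) (u')^2 + f'(u) u''.
f1 = sp.diff(f(x), x).subs(x, u)      # f'(u)
f2 = sp.diff(f(x), x, 2).subs(x, u)   # f''(u)
rhs = f2 * sp.diff(u, t)**2 + f1 * sp.diff(u, t, 2)

print(sp.simplify(lhs - rhs))         # prints 0
```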

2.3 Explicit Taylor methods

The derivatives \(\smash {( {\varvec{f}}( {\varvec{u}}) )^{(k-1)}}\) can be evaluated using Faà di Bruno’s formula (2.5) (see Baeza et al. (2017) for more details), leading to an expression of \(\smash {{\varvec{u}}^{(k)}(t_n)}\) in terms of \({\varvec{u}} (t_n)\) and derivatives of \({\varvec{f}}\), namely:

$$\begin{aligned} \begin{aligned} {\varvec{u}}^{(k)}(t_n)&= {\varvec{G}}_k \Bigl ( {\varvec{u}}(t_n), \bigl ( {\varvec{f}} ( {\varvec{u}}) \bigr ) (t_n), \bigl ( {\varvec{f}} ( {\varvec{u}}) \bigr )' (t_n), \dots , \bigl ( {\varvec{f}} ( {\varvec{u}}) \bigr )^{(k-1)} (t_n) \Bigr )=\varvec{{\tilde{G}}}_k \bigl ({\varvec{u}}(t_n) \bigr ). \end{aligned} \end{aligned}$$
(2.6)

Replacing the derivatives \(\smash {{\varvec{u}}^{(k)} (t_n)}\) in (2.2) by (2.6), we obtain the expression:

$$\begin{aligned} {\varvec{u}} (t_{n+1}) \approx T_R \bigl ( {\varvec{u}}(t_n), h \bigr ) = {\varvec{u}} (t_n)+\sum _{k=1}^{R} \frac{h^k}{k!} \varvec{{\tilde{G}}}_k \bigl ( {\varvec{u}}(t_n) \bigr ). \end{aligned}$$
(2.7)

The Rth-order Taylor method

$$\begin{aligned} {\varvec{u}}_{n+1} = T_R( {\varvec{u}}_n, h) \end{aligned}$$
(2.8)

is then obtained by replacing the exact values of the solution \({\varvec{u}}(t_n)\) and \({\varvec{u}}(t_{n+1})\) by their corresponding approximations in (2.7), denoted by \({\varvec{u}}_n\) and \({\varvec{u}}_{n+1}\), respectively. This means that the following expression is used in (2.8):

$$\begin{aligned} T_R( {\varvec{u}}_n, h) = {\varvec{u}}_n+\sum _{k=1}^{R} \frac{h^k}{k!} {\varvec{u}}_n^{(k)}, \quad {\varvec{u}}_n^{(k)}:= \varvec{{\tilde{G}}}_k({\varvec{u}}_n). \end{aligned}$$
(2.9)

From (2.1) and (2.9), we infer that the local truncation error is given by:

$$\begin{aligned} {\varvec{E}}_{\mathrm {L}} = \frac{h^{R+1}}{(R+1)!} {\varvec{u}}^{(R+1)}(\xi ), \end{aligned}$$

so that \({\varvec{E}}_{\mathrm {L}} = {{\mathcal {O}}}(h^{R+1})\) as long as \({\varvec{u}}^{(R+1)}\) is bounded in \([t_0, T]\). One then obtains that the method (2.8) has an \({{\mathcal {O}}}(h^{R})\) global error.
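For illustration, here is a minimal sketch (in Python; the names are hypothetical) of one step of the iteration (2.8)–(2.9), assuming the derivative functions \(\varvec{{\tilde{G}}}_k\) are supplied as callables. For the linear test problem \(u'=\lambda u\), one has \({\tilde{G}}_k(u)=\lambda ^k u\) exactly:

```python
import numpy as np

def explicit_taylor_step(u, h, G):
    """One step of the R-th order explicit Taylor method (2.9).

    G is a list of R callables with G[k-1](u) = G~_k(u) ~ u^{(k)}.
    """
    u_new = u.copy()
    coeff = 1.0                        # accumulates h^k / k!
    for k, Gk in enumerate(G, start=1):
        coeff *= h / k
        u_new += coeff * Gk(u)
    return u_new

# Example: u' = lam * u, for which u^{(k)} = lam^k u exactly.
lam, R = -2.0, 4
G = [lambda u, k=k: lam**k * u for k in range(1, R + 1)]
u, h, N = np.array([1.0]), 0.05, 20
for _ in range(N):
    u = explicit_taylor_step(u, h, G)
print(u[0], np.exp(lam * N * h))       # numerical vs exact value at t = 1
```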

3 Implicit Taylor methods

3.1 Exact implicit Taylor methods

Implicit Taylor methods are based on approximating \({\varvec{u}}(t_n)\) by means of the Taylor polynomial of \({\varvec{u}}\) centered at \(t_{n+1}\):

$$\begin{aligned} {\varvec{u}}(t_n) \approx T_R \bigl ( {\varvec{u}}(t_{n+1}), -h \bigr ), \end{aligned}$$
(3.1)

so that the value of \( {\varvec{u}}_{n+1} \approx {\varvec{u}}(t_{n+1})\) is determined as solution of the nonlinear system of algebraic equations:

$$\begin{aligned} {\varvec{u}}_n = T_R( {\varvec{u}}_{n+1}, -h). \end{aligned}$$
(3.2)

In the easiest case, \(R=1\), one gets the implicit Euler method. As in the case of explicit Taylor methods, the expressions of \(\smash {{\varvec{u}}^{(k)}(t_{n+1})}\) that appear in (3.1) can be expressed as functions of \({\varvec{u}}(t_{n+1})\) and the derivatives of \({\varvec{f}}\). As an example, the second-order implicit Taylor method is given by:

$$\begin{aligned} {\varvec{u}}_n = {\varvec{u}}_{n+1} - h {\varvec{f}}({\varvec{u}}_{n+1}) + \frac{h^2}{2}\left( \frac{\partial {\varvec{f}}}{\partial {\varvec{u}}}( {\varvec{u}}_{n+1}) {\varvec{f}}({\varvec{u}}_{n+1})\right) , \end{aligned}$$
(3.3)

where \(\smash {\partial {\varvec{f}} / \partial {\varvec{u}} = (\partial f_i / \partial u_j)_{1 \le i,j \le M}}\) is the Jacobian matrix of \({\varvec{f}} ( {\varvec{u}})\). In what follows, the family of methods based on (3.2) will be referred to as exact implicit Taylor methods, since they are based on exact expressions of the derivatives of \({\varvec{f}}\).

3.2 Approximate implicit Taylor methods

Let us briefly review approximate explicit Taylor methods as described by Baeza et al. (2017), whose formulation will be used to motivate and introduce our novel approximate implicit Taylor (henceforth, AIT) methods. These methods are based on computing approximations of the derivatives in (2.2) by means of finite differences, so that \(\smash {{\varvec{u}}^{(k)}(t_n)}\) is replaced by an approximation:

$$\begin{aligned} {\varvec{v}}_{h,n}^{(k)} = {\varvec{u}}^{(k)}(t_n) +{{\mathcal {O}}}(h^{R-k+1}), \quad k=2,\dots , R, \end{aligned}$$

resulting in an Rth-order accurate method:

$$\begin{aligned} {\varvec{v}}_{h, n+1}= {\varvec{v}}_{h,n} + \sum _{k=1}^R \frac{h^k}{k!} {\varvec{v}}_{h,n}^{(k)}, \end{aligned}$$

where the approximations \(\smash {{\varvec{v}}_{h,n}^{(k)}}\) are computed as follows:

$$\begin{aligned} {\varvec{v}}_{h,n}^{(0)}&= {\varvec{u}}_n,\\ {\varvec{v}}_{h,n}^{(1)}&= {\varvec{f}}({\varvec{u}}_n),\\ {\varvec{v}}_{h,n}^{(k+1)}&= \Delta _{h}^{k,\lceil \frac{R-k}{2}\rceil } {\varvec{f}} \bigl ( {\varvec{P}}_n^k(h) \bigr ), \quad k= 1, \dots , R-1. \end{aligned}$$

Here, we recall that \(\lceil \cdot \rceil \) denotes the ceiling operator defined by \(\lceil x \rceil =\min \{n \in {\mathbb {Z}} \mid x\le n\}\). Moreover, \({\varvec{P}}_n^k(\rho )\) is the M-component vector given by:

$$\begin{aligned} {\varvec{P}}^k_n(\rho ) = \sum _{l=0}^k\frac{{\varvec{v}}^{(l)}_{h,n}}{l!}\rho ^l, \end{aligned}$$

and \(\Delta _{h}^{p,q}\) is the centered finite-difference operator that approximates pth-order derivatives to order 2q on a grid with spacing h, i.e., the one that satisfies:

$$\begin{aligned} \Delta _h^{p,q}(y) = y^{(p)} + {{\mathcal {O}}}(h^{2q}) \end{aligned}$$

for a sufficiently differentiable function y. (The operator \(\Delta _{h}^{p,q}\) is understood as acting on each component of \({\varvec{f}} ( {\varvec{P}}_n^k(h))\) separately.)
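As an illustration (a sketch under the assumption of a symmetric stencil of half-width \(\gamma \), with \(\gamma \) chosen large enough for the target order), such centered weights can be generated by solving a small Vandermonde system:

```python
import numpy as np
from math import factorial

def central_fd_weights(p, gamma):
    """Weights w_j, j = -gamma..gamma, such that
    h^{-p} * sum_j w_j y(jh) approximates y^{(p)}(0).

    They solve sum_j w_j j^m = p! * delta_{m,p}, m = 0..2*gamma,
    which matches the Taylor expansion of y up to degree 2*gamma.
    """
    j = np.arange(-gamma, gamma + 1)
    V = np.vander(j, increasing=True).T     # V[m, i] = j[i]**m
    rhs = np.zeros(2 * gamma + 1)
    rhs[p] = factorial(p)
    return np.linalg.solve(V, rhs)

print(central_fd_weights(1, 1))  # [-0.5, 0., 0.5]: centered first derivative
print(central_fd_weights(2, 1))  # [ 1., -2., 1.]:  centered second derivative
```

The constants \(\smash {\beta ^{k, R}_j}\) appearing below play precisely this role for \(p=k\) and \(q=\lceil (R-k)/2\rceil \), with the powers of h grouped as in (3.4).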

There exist constants \(\smash {\beta ^{k, R}_j}\), so that for some integers \(\gamma _{k,R}\), we can write (see Zorío et al. 2017):

$$\begin{aligned} {\varvec{v}}_{h,n}^{(k+1)}=h^{-k}\sum _{j=-\gamma _{k,R}}^{\gamma _{k,R}} \beta _j^{k,R} {\varvec{f}} \left( \sum _{l=0}^k \frac{(jh)^l}{l!} {\varvec{v}}_{h,n}^{(l)}\right) . \end{aligned}$$
(3.4)

Using these approximations of the derivatives, and with the notation of the previous sections, one obtains the approximate explicit Taylor method:

$$\begin{aligned} {\varvec{u}}_{n+1} = {\tilde{T}}_R( {\varvec{u}}_{n}, h). \end{aligned}$$
(3.5)

For instance, the second-order approximate Taylor method is based on the approximation:

$$\begin{aligned} {\varvec{u}}^{(2)}(t_n)= \bigl ( {\varvec{f}}( {\varvec{u}}) \bigr )'(t_n)\approx \frac{1}{2h} \Bigl ( {\varvec{f}} \bigl ( {\varvec{u}}(t_n)+h {\varvec{f}} ( {\varvec{u}}(t_n))\bigr )- {\varvec{f}} \bigl ( {\varvec{u}}(t_n)-h {\varvec{f}}({\varvec{u}}(t_n))\bigr ) \Bigr ); \end{aligned}$$

hence, the method can be written as:

$$\begin{aligned} {\varvec{u}}_{n+1}={\varvec{u}}_n+h {\varvec{f}}( {\varvec{u}}_n)+\frac{h}{4} \Bigl ( {\varvec{f}} \bigl ( {\varvec{u}}_n+h {\varvec{f}}({\varvec{u}}_n)\bigr )- {\varvec{f}} \bigl ({\varvec{u}}_n-h {\varvec{f}}({\varvec{u}}_n)\bigr )\Bigr ); \end{aligned}$$

that is:

$$\begin{aligned} {\tilde{T}}_2({\varvec{u}}_n, h) ={\varvec{u}}_n+h {\varvec{f}}( {\varvec{u}}_n)+\frac{h}{4} \Bigl ( {\varvec{f}} \bigl ( {\varvec{u}}_n+h {\varvec{f}}({\varvec{u}}_n)\bigr )- {\varvec{f}} \bigl ({\varvec{u}}_n-h {\varvec{f}}({\varvec{u}}_n)\bigr )\Bigr ). \end{aligned}$$
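As a sketch, \({\tilde{T}}_2\) translates directly into code (the test problem below is an arbitrary choice):

```python
import numpy as np

def approx_taylor2_step(u, h, f):
    """One step u_{n+1} = T~_2(u_n, h) of the second-order
    approximate explicit Taylor method."""
    fu = f(u)
    return u + h * fu + (h / 4.0) * (f(u + h * fu) - f(u - h * fu))

# Example: u' = -u with u(0) = 1; exact solution u(t) = e^{-t}.
u, h, N = 1.0, 0.01, 100
for _ in range(N):
    u = approx_taylor2_step(u, h, lambda v: -v)
print(u, np.exp(-1.0))                 # second-order agreement at t = 1
```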

The new methods advanced in this contribution, namely approximate implicit Taylor methods, are obtained by replacing h by \(-h\) and interchanging \({\varvec{u}}_n\) and \({\varvec{u}}_{n+1}\) in (3.5):

$$\begin{aligned} {\varvec{u}}_{n} ={\tilde{T}}_R( {\varvec{u}}_{n+1}, -h). \end{aligned}$$

For \(R=2\), the second-order approximate implicit Taylor method is:

$$\begin{aligned}&{\varvec{u}}_n={\tilde{T}}_2( {\varvec{u}}_{n+1}, -h) \nonumber \\&\quad = {\varvec{u}}_{n+1}-h {\varvec{f}}( {\varvec{u}}_{n+1}) -\frac{h}{4} \Bigl ( {\varvec{f}} \bigl ( {\varvec{u}}_{n+1}-h {\varvec{f}}( {\varvec{u}}_{n+1}) \bigr )- {\varvec{f}} \bigl ( {\varvec{u}}_{n+1}+h {\varvec{f}}( {\varvec{u}}_{n+1}) \bigr ) \Bigr ). \end{aligned}$$
(3.6)
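Before turning to the efficient Newton formulation of Sect. 4, we note that (3.6) can be realized naively by handing the nonlinear equation to a generic root finder. A minimal sketch using SciPy's fsolve, with a stiff linear test problem chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import fsolve

def ait2_step(un, h, f):
    """One step of the second-order approximate implicit Taylor
    method: solve u_n = T~_2(u_{n+1}, -h), cf. (3.6)."""
    def F(w):
        fw = f(w)
        return (w - h * fw
                - (h / 4.0) * (f(w - h * fw) - f(w + h * fw)) - un)
    return fsolve(F, un)               # u_n as the initial guess

# Stiff example: u' = -50 u, u(0) = 1, solved with h far beyond
# the stability limit h < 2/50 of the explicit Euler method.
u, h, N = np.array([1.0]), 0.1, 10
for _ in range(N):
    u = ait2_step(u, h, lambda v: -50.0 * v)
print(u[0], np.exp(-50.0))             # both are essentially zero
```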

3.3 Linear stability

The linear stability of a numerical scheme for initial-value problems of ordinary differential equations is usually examined by applying it to the scalar linear equation:

$$\begin{aligned} u'=\lambda u, \quad \lambda \in {\mathbb {C}}, \quad \mathrm {Re} \, \lambda < 0. \end{aligned}$$
(3.7)

For the sake of completeness, we consider the non-homogeneous linear ODE:

$$\begin{aligned} u'=\lambda u+g(t), \quad \lambda \in {\mathbb {C}}, \end{aligned}$$

with g sufficiently smooth. For the solution u of the ODE, we can establish by induction on k that:

$$\begin{aligned} u^{(k)}=\lambda ^k u+\sum _{j=0}^{k-1}\lambda ^{k-j-1}g^{(j)}(t), \end{aligned}$$

so the explicit Taylor method reads in this case as:

$$\begin{aligned} u_{n+1}&=\sum _{k=0}^{R} \frac{h^{k}}{k!} \Bigg (\lambda ^k u_{n}+\sum _{j=0}^{k-1}\lambda ^{k-j-1}g^{(j)}(t_{n})\Bigg )\\&=u_{n}\sum _{k=0}^{R} \frac{(h\lambda )^{k}}{k!} + \sum _{j=0}^{R-1} \frac{g^{(j)}(t_{n})}{\lambda ^{j+1}} \sum _{k=j+1}^{R}\frac{(h\lambda )^{k}}{k!}\\&=Q_{R}(h\lambda )u_{n} + \sum _{j=0}^{R-1} \frac{g^{(j)}(t_{n})}{\lambda ^{j+1}} \big (Q_{R}(h\lambda )-Q_{j}(h\lambda )\big ), \end{aligned}$$

where

$$\begin{aligned} Q_{j}(x)=\sum _{k=0}^{j}\frac{x^k}{k!}. \end{aligned}$$

The implicit Taylor method is obtained by interchanging the roles of n and \(n+1\) and reads as:

$$\begin{aligned} u_{n}&=Q_{R}(-h\lambda )u_{n+1} + \sum _{j=0}^{R-1} \frac{g^{(j)}(t_{n+1})}{\lambda ^{j+1}} \big (Q_{R}(-h\lambda )-Q_{j}(-h\lambda )\big ),\nonumber \\ u_{n+1}&=\frac{1}{Q_R(-h\lambda )} u_{n}-\sum _{j=0}^{R-1} \frac{g^{(j)}(t_{n+1})}{\lambda ^{j+1}} \biggl (1-\frac{Q_j(-h\lambda )}{Q_R(-h\lambda )} \biggr ). \end{aligned}$$
(3.8)

In particular, for \(g=0\), the explicit and implicit Taylor methods of order R are given by the respective expressions:

$$\begin{aligned} u_{n+1}=Q_R(h\lambda ) u_n \end{aligned}$$
(3.9)

and

$$\begin{aligned} u_{n+1}=\frac{1}{Q_R(-h\lambda )} u_n. \end{aligned}$$

The exact Taylor method of order R is stable provided that \(|Q_R(h\lambda )| < 1\). Since \(\mathrm {Re} \, \lambda <0\), this condition is usually satisfied on a bounded domain only (as can be inferred from \(R=1\), in which case (3.9) is the explicit Euler method). On the other hand, the exact implicit Taylor method is stable for those values of \(z = h \lambda \) that satisfy:

$$\begin{aligned} \begin{aligned} z \in {{\mathcal {S}}} :=&\bigl \{ z \in {\mathbb {C}} \mid \mathrm {Re} \, z< 0, |Q_R(-z)|^{-1}< 1 \bigr \}=\bigl \{ z \in {\mathbb {C}} \mid \mathrm {Re} \, z < 0, |Q_R(-z)| > 1 \bigr \}. \end{aligned} \end{aligned}$$

As for its exact counterpart, it is shown in Baeza et al. (2020) that the approximate explicit Taylor method applied to (3.7) reduces to \({\tilde{T}}_R(u_{n}, h)=Q_R(h\lambda ) u_n\); consequently, the approximate implicit version satisfies \(u_{n+1}=Q_R(-h\lambda )^{-1} u_n\). Therefore, both methods have the same stability regions as their corresponding exact versions; in particular, the approximate implicit Taylor method is absolutely stable for any real \(\lambda < 0\).
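These stability claims can be checked numerically; the following sketch samples the left half-plane and reports the fraction of sampled points lying in \({{\mathcal {S}}}\) for several R (a sampling experiment, not a proof):

```python
import numpy as np
from math import factorial

def Q(R, x):
    """Truncated exponential series Q_R(x) = sum_{k<=R} x^k / k!."""
    return sum(x**k / factorial(k) for k in range(R + 1))

# Sample z with Re z < 0 and test membership in S = { z : |Q_R(-z)| > 1 }.
z = (np.linspace(-20, -1e-6, 300)[:, None]
     + 1j * np.linspace(-20, 20, 600)[None, :])
for R in (1, 2, 3, 4):
    print(R, np.mean(np.abs(Q(R, -z)) > 1))   # fraction of stable samples
```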

4 Newton iteration

The computation of \({\varvec{u}}_{n+1}\) for given \({\varvec{u}}_n\) using an implicit method requires the solution of an auxiliary equation \({\varvec{F}}( {\varvec{u}}_{n+1})= {\varvec{0}}\), which is often approximated by Newton’s method. In this section, we address the computation of the elements required by Newton’s method for both the exact and approximate implicit Taylor methods, which leads to a new, more efficient formulation for the approximate scheme. Although line-search strategies (Dennis and Schnabel 1996) for damping the Newton iteration can be used to enhance global convergence, we have not used them in our experiments.

4.1 Exact implicit Taylor method

As an example, let us consider the scalar nonlinear problem:

$$\begin{aligned} u'=u+u^2 \Rightarrow u''=(1+2u)u'=(1+2u)(u+u^2). \end{aligned}$$

The second-order exact implicit Taylor method can be written as:

$$\begin{aligned} u_n=u_{n+1}-h(u_{n+1}+u_{n+1}^2)+\frac{h^2}{2}(1+2u_{n+1})(u_{n+1}+u_{n+1}^2), \end{aligned}$$

which requires the solution of the following cubic equation:

$$\begin{aligned} F(u_{n+1}):=u_{n+1}-h(u_{n+1}+u_{n+1}^2)+\frac{h^2}{2}(1+2u_{n+1})(u_{n+1}+u_{n+1}^2)-u_n=0. \end{aligned}$$
(4.1)

In the general case, the solution of \({\varvec{F}}( {\varvec{u}}_{n+1})= {\varvec{0}}\) by means of Newton’s method requires the computation of the Jacobian matrix of \({\varvec{F}}\) (in the scalar case, the derivative \(F'(u_{n+1})\)). For (4.1) this is an easy task, but, in general, the resulting iteration can become complicated.
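For this particular example, the full Newton iteration fits in a few lines; a minimal sketch, with \(F'\) computed by hand from (4.1):

```python
def it2_step_example(un, h, tol=1e-12, maxit=50):
    """Solve the cubic equation (4.1) for u_{n+1} by Newton's method
    (second-order exact implicit Taylor method for u' = u + u^2)."""
    F = lambda u: (u - h * (u + u**2)
                   + 0.5 * h**2 * (1 + 2*u) * (u + u**2) - un)
    # F'(u), differentiating (4.1) by hand:
    dF = lambda u: (1 - h * (1 + 2*u)
                    + 0.5 * h**2 * (2 * (u + u**2) + (1 + 2*u)**2))
    u = un                             # previous value as initial guess
    for _ in range(maxit):
        du = -F(u) / dF(u)
        u += du
        if abs(du) < tol:
            break
    return u
```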

To simplify the computation of the Jacobian matrix, we introduce:

$$\begin{aligned} {\varvec{z}}_k\approx {\varvec{u}}_{n+1}^{(k)}, \quad k=0,\dots ,R, \end{aligned}$$

and use Faà di Bruno’s formula (2.5) to get the system:

$$\begin{aligned} {\varvec{u}}_{n}&= {\varvec{z}}_0-h {\varvec{z}}_1+\dots +(-1)^R\frac{h^R}{R!}{\varvec{z}}_R,\\ {\varvec{z}}_1&={\varvec{f}}({\varvec{z}}_0),\\ {\varvec{z}}_{r+1}&=\sum _{ {\varvec{s}} \in {{\mathcal {P}}}_r} \left( {\begin{array}{c}r\\ {\varvec{s}}\end{array}}\right) \begin{pmatrix} f_1^{(|{\varvec{s}}|)}({\varvec{z}}_0) \bullet \varvec{{\tilde{D}}}^{{\varvec{s}}} {\varvec{z}} \\ \vdots \\ f_M^{(|{\varvec{s}}|)}({\varvec{z}}_0) \bullet \varvec{{\tilde{D}}}^{{\varvec{s}}} {\varvec{z}} \end{pmatrix} ,\quad r=1,\dots ,R-1, \end{aligned}$$

where the definition of \(\smash {\varvec{{\tilde{D}}}^{{\varvec{s}}} {\varvec{z}}}\) mimics that of \(\smash {{\varvec{D}}^{{\varvec{s}}} {\varvec{z}}}\) in (2.4), by taking into account that \({\varvec{z}}_k \approx \smash { {\varvec{u}}^{(k)}} (t)\), namely:

$$\begin{aligned} \bigl ( \varvec{{\tilde{D}}}^{{\varvec{s}}} {\varvec{z}} \bigr )_{s_1 + \dots +s_{j-1}+i} = \frac{1}{j!} {\varvec{z}}_j, \quad i=1 , \dots , s_j, \quad j= 1, \dots , r. \end{aligned}$$

These equations can be differentiated systematically. For instance, for the case of one scalar equation, \(M=1\), one gets:

$$\begin{aligned}&\partial _{z_0} \Biggl ( \sum _{{\varvec{s}} \in {{\mathcal {P}}}_r} \left( {\begin{array}{c}r\\ {\varvec{s}}\end{array}}\right) f^{(|{\varvec{s}}|)} (z_0) \varvec{{\tilde{D}}}^{{\varvec{s}}} z \Biggr ) = \sum _{{\varvec{s}} \in {{\mathcal {P}}}_r} \left( {\begin{array}{c}r\\ {\varvec{s}}\end{array}}\right) f^{(|{\varvec{s}}|+1)} (z_0) \Bigl ( \frac{z_1}{1!} \Bigr )^{s_1} \cdots \Bigl ( \frac{z_r}{r!} \Bigr )^{s_r}, \\&\partial _{z_j} \Biggl ( \sum _{{\varvec{s}} \in {{\mathcal {P}}}_r} \left( {\begin{array}{c}r\\ {\varvec{s}}\end{array}}\right) f^{(|{\varvec{s}}|)} (z_0) \varvec{{\tilde{D}}}^{{\varvec{s}}} z \Biggr ) \\&\qquad \qquad = \sum _{{\varvec{s}} \in {{\mathcal {P}}}_r} \frac{s_j}{j!} \left( {\begin{array}{c}r\\ {\varvec{s}}\end{array}}\right) f^{(|{\varvec{s}}|+1)} (z_0) \Bigl ( \frac{z_1}{1!} \Bigr )^{s_1} \cdots \Bigl ( \frac{z_j}{j!} \Bigr )^{s_j-1} \cdots \Bigl ( \frac{z_r}{r!} \Bigr )^{s_r}. \end{aligned}$$

For this scalar case and the second-order implicit Taylor method (3.3), the system to be solved is:

$$\begin{aligned} \begin{aligned} 0&=z_0 -hz_1+\frac{h^2}{2}z_2 - u_n,\\ 0&=f(z_0)-z_1,\\ 0&= f'(z_0) z_1 -z_2. \end{aligned} \end{aligned}$$
(4.2)

If we rewrite system (4.2) as \({\varvec{F}} ( z_0, z_1, z_2) = {\varvec{0}}\), with the function \({\varvec{F}}\) defined by:

$$\begin{aligned} {\varvec{F}}( z_0, z_1, z_2)= \begin{pmatrix} F_1 ( z_0, z_1, z_2)\\ F_2 ( z_0, z_1, z_2)\\ F_3 ( z_0, z_1, z_2) \end{pmatrix}&= \begin{pmatrix} \displaystyle z_0 -hz_1+\frac{h^2}{2}z_2 - u_n,\\ f(z_0)-z_1,\\ f'(z_0) z_1 -z_2 \end{pmatrix}, \end{aligned}$$

then the corresponding Jacobian matrix is:

$$\begin{aligned} \displaystyle {{\mathcal {J}}}_{{\varvec{F}}} ( z_0, z_1, z_2) = \begin{bmatrix} \displaystyle \frac{\partial F_1}{\partial z_0}&{} \displaystyle \frac{\partial F_1}{\partial z_1} &{} \displaystyle \frac{\partial F_1}{\partial z_2} \\ \displaystyle \frac{\partial F_2}{\partial z_0} &{} \displaystyle \frac{\partial F_2}{\partial z_1} &{}\displaystyle \frac{\partial F_2}{\partial z_2}\\ \displaystyle \frac{\partial F_3}{\partial z_0} &{} \displaystyle \frac{\partial F_3}{\partial z_1} &{} \displaystyle \frac{\partial F_3}{\partial z_2}\\ \end{bmatrix} = \begin{bmatrix} 1&{}-h &{} h^2 / 2 \\ f'(z_0) &{} -1 &{} 0\\ f''(z_0)z_1 &{} f'(z_0) &{}-1\\ \end{bmatrix}. \end{aligned}$$
(4.3)

Depending on the expression of f, the Jacobian matrix may become highly complicated, even for low values of R. It is clear that, for higher-order methods, the system to be solved becomes more complicated. For instance, for \(R=4\), it reads as:

$$\begin{aligned} \begin{aligned} 0&=z_0 -hz_1+\frac{h^2}{2}z_2 - \frac{h^3}{6}z_3 + \frac{h^4}{24} z_4 -u_n,\\ 0&=f(z_0)-z_1,\\ 0&= f'(z_0) z_1 -z_2,\\ 0&= f''(z_0)z_1^2+f'(z_0)z_2-z_3,\\ 0&= f'''(z_0)z_1^3+3f''(z_0) z_1 z_2 +z_3 f'(z_0)-z_4, \end{aligned} \end{aligned}$$

which results in the expression:

$$\begin{aligned} \begin{aligned}&{{\mathcal {J}}}_{{\varvec{F}}} ( z_0, \ldots , z_4) \\&= \begin{bmatrix} 1 &{} -h &{} \displaystyle {\frac{h^2}{2}} &{} -\displaystyle {\frac{h^3}{6}} &{} \displaystyle {\frac{h^4}{24}} \\ f'(z_0) &{} -1 &{} 0 &{} 0 &{} 0 \\ f''(z_0)z_1 &{} f'(z_0) &{} -1 &{} 0 &{} 0 \\ f'''(z_0)z_1^2 +f''(z_0)z_2 &{} 2f''(z_0)z_1 &{} f'(z_0) &{} -1 &{} 0\\ f^{(4)}(z_0)z_1^3 +3 f'''(z_0)z_1z_2 &{} {3 f'''(z_0) z_1^2 +3f''(z_0) z_2} &{} {3f''(z_0) z_1} &{} {f'(z_0)} &{} {-1}\\ +f''(z_0)z_3 &{} &{} &{} &{} \end{bmatrix}. \end{aligned} \end{aligned}$$
(4.4)

Note that the submatrix composed by the first three rows and columns of (4.4) is exactly (4.3). It is easy to check that the Jacobian matrix corresponding to \(R=3\) is the submatrix of (4.4) composed by its first four rows and columns.

4.2 Approximate implicit Taylor method

For simplicity, let us start with the second-order approximate implicit Taylor method (3.6) for the scalar case \(M=1\). Similarly to the exact case, we introduce the unknowns \(z_0 =u_{n+1}\), \(z_1 =f(u_{n+1})\), and:

$$\begin{aligned} z_2 =\frac{1}{2} \Bigl (f \bigl (u_{n+1}-hf(u_{n+1})\bigr )-f \bigl (u_{n+1}+hf(u_{n+1}) \bigr ) \Bigr ), \end{aligned}$$

and one gets the system of equations:

$$\begin{aligned} 0&=z_0-hz_1-\frac{h}{2}z_2-u_n,\\ 0&=f(z_0)-z_1,\\ 0&=\frac{1}{2} f(z_0-hz_1)-\frac{1}{2} f(z_0+hz_1)-z_2, \end{aligned}$$

so that its solution gives the terms that appear in (3.6).

If we rewrite this system as \({\varvec{F}} (z_0, z_1, z_2) = {\varvec{0}}\) with the function \({\varvec{F}}\) defined by:

$$\begin{aligned} {\varvec{F}}( z_0, z_1, z_2)= \begin{pmatrix} F_1 ( z_0, z_1, z_2)\\ F_2 ( z_0, z_1, z_2)\\ F_3 ( z_0, z_1, z_2) \end{pmatrix}&= \begin{pmatrix} \displaystyle z_0-hz_1-\frac{h}{2}z_2-u_n\\ f(z_0)-z_1\\ \frac{1}{2} f(z_0-hz_1)-\frac{1}{2} f(z_0+hz_1)-z_2 \end{pmatrix}, \end{aligned}$$

then the corresponding Jacobian matrix (which is required so that Newton’s method can be applied to this system) is now given by:

$$\begin{aligned}&{{\mathcal {J}}}_{{\varvec{F}}} (z_0, z_1, z_2) = \begin{bmatrix} \displaystyle \frac{\partial F_1}{\partial z_0}&{} \displaystyle \frac{\partial F_1}{\partial z_1} &{} \displaystyle \frac{\partial F_1}{\partial z_2} \\ \displaystyle \frac{\partial F_2}{\partial z_0} &{} \displaystyle \frac{\partial F_2}{\partial z_1} &{}\displaystyle \frac{\partial F_2}{\partial z_2}\\ \displaystyle \frac{\partial F_3}{\partial z_0} &{} \displaystyle \frac{\partial F_3}{\partial z_1} &{} \displaystyle \frac{\partial F_3}{\partial z_2} \end{bmatrix}\\&= \begin{bmatrix} 1 &{} -h &{}- \displaystyle \frac{h}{2} \\ f'(z_0) &{} -1 &{} 0 \\ \displaystyle {\frac{1}{2} \bigl ( f'(z_0-h z_1)- f'(z_0+h z_1) \bigr ) } &{} \displaystyle {-\frac{h}{2} \bigl ( f'(z_0-h z_1)+f'(z_0+h z_1) \bigr )} &{} -1\\ \end{bmatrix}. \end{aligned}$$
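Putting the pieces together, one step of the scalar second-order AIT method can be sketched as follows (hypothetical names; note that only evaluations of f and \(f'\) are required):

```python
import numpy as np

def ait2_newton(un, h, f, df, tol=1e-12, maxit=50):
    """Solve F(z0, z1, z2) = 0 for the scalar second-order AIT method
    with Newton's method; returns u_{n+1} = z0."""
    z = np.array([un, f(un), 0.0])              # initial guess
    for _ in range(maxit):
        z0, z1, z2 = z
        fm, fp = f(z0 - h * z1), f(z0 + h * z1)
        F = np.array([z0 - h * z1 - 0.5 * h * z2 - un,
                      f(z0) - z1,
                      0.5 * fm - 0.5 * fp - z2])
        dm, dp = df(z0 - h * z1), df(z0 + h * z1)
        J = np.array([[1.0,             -h,                   -0.5 * h],
                      [df(z0),          -1.0,                  0.0],
                      [0.5 * (dm - dp), -0.5 * h * (dm + dp), -1.0]])
        delta = np.linalg.solve(J, -F)
        z += delta
        if np.linalg.norm(delta) < tol:
            break
    return z[0]

# Example: u' = u + u^2, for which f'(u) = 1 + 2u.
print(ait2_newton(0.5, 0.01, lambda u: u + u**2, lambda u: 1 + 2*u))
```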

4.3 General number of scalar equations

Let us now consider the general case of a system of M scalar ordinary differential equations. From (3.4), the approximate implicit Rth-order Taylor method can be written as:

$$\begin{aligned} {\varvec{u}}_{n}&={\tilde{T}}_R({\varvec{u}}_{n+1}, -h)=\sum _{k=0}^R\frac{(-h)^k}{k!} {\varvec{v}}_{-h,n}^{(k)}, \end{aligned}$$
(4.5)
$$\begin{aligned} {\varvec{v}}_{-h,n}^{(k+1)}&=(-h)^{-k}\sum _{j=-\gamma _{k,R}}^{\gamma _{k,R}} \beta _j^{k,R} {\varvec{f}} \left( \sum _{l=0}^k \frac{j^l (-h)^l}{l!} {\varvec{v}}_{-h,n}^{(l)}\right) . \end{aligned}$$
(4.6)

Let us denote \({\varvec{z}}_0 = {\varvec{v}}_{-h,n}^{(0)}\) and \( {\varvec{z}}_k=(-h)^{k-1} {\varvec{v}}_{-h,n}^{(k)}\) for \(k=1,\dots ,R\), so that (4.6) for \(k-1\) reads as:

$$\begin{aligned} {\varvec{z}}_{k}&=\sum _{j=-\gamma _{k-1,R}}^{\gamma _{k-1,R}} \beta _j^{k-1,R} {\varvec{f}} \left( {\varvec{z}}_0-h\sum _{l=1}^{k-1} \frac{j^l}{l!} {\varvec{z}}_l\right) \end{aligned}$$

and (4.5) as:

$$\begin{aligned} {\varvec{u}}_{n}&={\varvec{z}}_0-h\sum _{k=1}^{R}\frac{1}{k!} {\varvec{z}}_k. \end{aligned}$$

We define the function \(\smash {{\varvec{F}} = ( {\varvec{F}}_0, {\varvec{F}}_1, \dots , {\varvec{F}}_R)^{\mathrm {T}}: {\mathbb {R}}^{(R+1)M}\rightarrow {\mathbb {R}}^{(R+1)M}}\) by:

$$\begin{aligned} {\varvec{F}}_0&:={\varvec{z}}_0-h\sum _{k=1}^{R}\frac{1}{k!} {\varvec{z}}_k- {\varvec{u}}_n,\\ {\varvec{F}}_{k}&:=\sum _{j=-\gamma _{k-1,R}}^{\gamma _{k-1,R}} \beta _j^{k-1,R} {\varvec{f}} \left( {\varvec{z}}_0-h\sum _{l=1}^{k-1} \frac{j^l}{l!} {\varvec{z}}_l \right) -{\varvec{z}}_k,\quad k=1,\dots ,R. \end{aligned}$$

To solve \({\varvec{F}}({\varvec{z}})={\varvec{0}}\) by Newton’s method, we compute the Jacobian matrix of \({\varvec{F}}\) as the block matrix:

$$\begin{aligned} {{\mathcal {J}}}_{{\varvec{F}}} ( {\varvec{z}}) = \bigl ( {\varvec{F}}_{i,j} ({\varvec{z}}) \bigr )_{0 \le i,j \le R}, \quad \text {where} \quad {\varvec{F}}_{i,j} ( {\varvec{z}} ) = \frac{\partial {\varvec{F}}_i}{\partial {\varvec{z}}_j} ( {\varvec{z}}) \in {\mathbb {R}}^{M\times M}. \end{aligned}$$

If \({\varvec{I}}_{M}\) denotes the \(M\times M\) identity matrix, we get:

$$\begin{aligned} {\varvec{F}}_{0,0}&= {\varvec{I}}_M,\\ {\varvec{F}}_{0,l}&=-\frac{h}{l!} {\varvec{I}}_M,\quad l=1,\dots ,R,\\ {\varvec{F}}_{k,0}&=\sum _{j=-\gamma _{k-1,R}}^{\gamma _{k-1,R}}\beta _j^{k-1,R} {\varvec{f}}'\left( {\varvec{z}}_0-h\sum _{l=1}^{k-1}\frac{j^l}{l!} {\varvec{z}}_l\right) ,\quad k=1,\dots ,R,\\ {\varvec{F}}_{k,l}&=-h\sum _{j=-\gamma _{k-1,R}}^{\gamma _{k-1,R}} \beta _j^{k-1,R} {\varvec{f}}'\left( {\varvec{z}}_0-h\sum _{m=1}^{k-1}\frac{j^m}{m!} {\varvec{z}}_m \right) \frac{j^l}{l!} ,\quad {\left\{ \begin{array}{ll} l=1,\dots ,k-1, \\ k=1,\dots ,R, \end{array}\right. } \\ {\varvec{F}}_{k,k}&=-{\varvec{I}}_M,\quad k=1,\dots ,R,\\ {\varvec{F}}_{k,l}&= {\varvec{0}}, \quad l=k+1,\dots , R, \quad k=1,\dots ,R. \end{aligned}$$

Setting \(\smash {\varvec{\delta }^{(\nu )} = {\varvec{z}}^{(\nu +1)} - {\varvec{z}}^{(\nu )}}\), we may write an iteration of Newton’s method as:

$$\begin{aligned} {{\mathcal {J}}}_{{\varvec{F}}} ( {\varvec{z}}^{(\nu )}) \varvec{\delta }^{(\nu )}=- {\varvec{F}}( {\varvec{z}}^{(\nu )}). \end{aligned}$$

In block form and dropping \(\nu \), we get:

$$\begin{aligned} \begin{bmatrix} {\varvec{F}}_{0,0} &{} {\varvec{F}}_{0,1} &{} \cdots &{}{\varvec{F}}_{0,R} \\ {\varvec{F}}_{1,0} &{} {\varvec{F}}_{1,1} &{} \cdots &{} {\varvec{F}}_{1,R} \\ \vdots &{} \vdots &{} &{} \vdots \\ {\varvec{F}}_{R,0} &{} {\varvec{F}}_{R,1} &{} \cdots &{}{\varvec{F}}_{R,R} \end{bmatrix} \begin{pmatrix} \varvec{\delta }_0\\ \varvec{\delta }_{1} \\ \vdots \\ \varvec{\delta }_R \end{pmatrix}&= -\begin{pmatrix} {\varvec{F}}_0\\ {\varvec{F}}_1\\ \vdots \\ {\varvec{F}}_R \end{pmatrix}, \end{aligned}$$

which we write in compact form as:

$$\begin{aligned} \begin{bmatrix} {\varvec{F}}_{0,0} &{} {\varvec{F}}_{0,1:R} \\ {\varvec{F}}_{1:R,0} &{} {\varvec{F}}_{1:R,1:R} \\ \end{bmatrix} \begin{pmatrix} \varvec{\delta }_0 \\ \varvec{\delta }_{1:R} \end{pmatrix}&=- \begin{pmatrix} {\varvec{F}}_0 \\ {\varvec{F}}_{1:R} \end{pmatrix}. \end{aligned}$$
(4.7)

Since \({\varvec{F}}_{1:R,1:R}\) is blockwise lower triangular with the diagonal blocks given by \(-{\varvec{I}}_M\), this matrix is invertible and we deduce that:

$$\begin{aligned} \varvec{\delta }_{1:R}&=- {\varvec{F}}_{1:R, 1:R} ^{-1}( {\varvec{F}}_{1:R}+ {\varvec{F}}_{1:R,0} \varvec{\delta }_0), \end{aligned}$$

which, when inserted into the first equation of (4.7), yields:

$$\begin{aligned} \varvec{\delta }_0 = - \bigl ( {\varvec{F}}_{0,0} - {\varvec{F}}_{0,1:R} {\varvec{F}}_{1:R, 1:R}^{-1} {\varvec{F}}_{1:R,0} \bigr )^{-1} \bigl ( {\varvec{F}}_{0} - {\varvec{F}}_{0,1:R} {\varvec{F}}_{1:R, 1:R}^{-1} {\varvec{F}}_{1:R} \bigr ). \end{aligned}$$

If we denote

$$\begin{aligned} {\varvec{A}} := {\varvec{F}}_{1:R, 1:R}^{-1} {\varvec{F}}_{1:R}, \quad {\varvec{B}} := {\varvec{F}}_{1:R, 1:R}^{-1} {\varvec{F}}_{1:R,0}, \end{aligned}$$

then we can write:

$$\begin{aligned} \varvec{\delta }_0 = - \bigl ( {\varvec{F}}_{0,0} - {\varvec{F}}_{0,1:R} {\varvec{B}} \bigr )^{-1} \bigl ( {\varvec{F}}_{0} - {\varvec{F}}_{0,1:R} {\varvec{A}} \bigr ), \quad \varvec{\delta }_{1:R} =- \bigl ( {\varvec{A}}+ {\varvec{B}} \varvec{\delta }_0 \bigr ). \end{aligned}$$

Therefore, the system can be solved efficiently as long as \({\varvec{F}}_{0,0} - {\varvec{F}}_{0,1:R} {\varvec{B}}\) is invertible. Recall that the Newton iteration only requires the computation of \({\varvec{f}}\) and \({\varvec{f}}'\), in contrast with the exact version, which requires the computation of all the derivatives of \({\varvec{f}}\) up to order R.
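A sketch of the resulting linear algebra, with hypothetical argument names; the diagonal blocks \({\varvec{F}}_{k,k}=-{\varvec{I}}_M\) are exploited implicitly in the two block forward substitutions:

```python
import numpy as np

def block_newton_update(F0, F, F00, F0l, Fk0, Fkl):
    """One Newton update (4.7) via the block elimination of this section.

    F0  : (M,) residual block F_0        F00 : (M, M) block F_{0,0}
    F   : list of R residuals F_k        F0l : list of R blocks F_{0,l}
    Fk0 : list of R blocks F_{k,0}       Fkl : Fkl[k][l] = F_{k+1,l+1}
    """
    R = len(F)
    # Forward substitutions: A = F_{1:R,1:R}^{-1} F_{1:R} and
    # B = F_{1:R,1:R}^{-1} F_{1:R,0}, using F_{k,k} = -I_M.
    A, B = [], []
    for k in range(R):
        A.append(-F[k]   + sum(Fkl[k][i] @ A[i] for i in range(k)))
        B.append(-Fk0[k] + sum(Fkl[k][i] @ B[i] for i in range(k)))
    # Schur complement system for delta_0.
    S   = F00 - sum(F0l[l] @ B[l] for l in range(R))
    rhs = F0  - sum(F0l[l] @ A[l] for l in range(R))
    d0 = np.linalg.solve(S, -rhs)
    d  = [-(A[k] + B[k] @ d0) for k in range(R)]    # delta_{1:R}
    return d0, d
```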

4.3.1 Computational cost of AIT methods

From Sect. 4.3, we know that, for each iteration of Newton’s method, we need to compute the vectors:

$$\begin{aligned} \varvec{\delta }_0 = - \bigl ( {\varvec{F}}_{0,0} - {\varvec{F}}_{0,1:R} {\varvec{B}} \bigr )^{-1} \bigl ( {\varvec{F}}_{0} - {\varvec{F}}_{0,1:R} {\varvec{A}} \bigr ), \quad \varvec{\delta }_{1:R} =- \bigl ( {\varvec{A}}+ {\varvec{B}} \varvec{\delta }_0 \bigr ), \end{aligned}$$

where \({\varvec{A}}\) and \({\varvec{B}}\) are the matrices:

$$\begin{aligned} {\varvec{A}} := {\varvec{F}}_{1:R, 1:R}^{-1} {\varvec{F}}_{1:R}, \quad {\varvec{B}} := {\varvec{F}}_{1:R, 1:R}^{-1} {\varvec{F}}_{1:R,0}. \end{aligned}$$

For the computation of \({\varvec{A}}\) and \({\varvec{B}}\), we can exploit that \({\varvec{F}}_{1:R,1:R}\) is a blockwise lower triangular matrix with diagonal blocks given by \(-{\varvec{I}}_M\). Hence, we can obtain, for example, \({\varvec{B}}\) by the block forward substitution process:

$$\begin{aligned} {\varvec{B}}_{k}=-{\varvec{F}}_{k,0}+\sum _{i=1}^{k-1}{{\varvec{F}}_{k,i}{\varvec{B}}_{i}},\quad k=1,\dots ,R. \end{aligned}$$

This algorithm requires \((R^2-R)/{2}\) products of \(M\times M\) matrices.

With respect to \(\varvec{\delta }_0\), the computation of \({\varvec{F}}_{0,1:R} {\varvec{B}}\) requires R products of \(M\times M\) matrices, and its \({\varvec{L}}{\varvec{U}}\) decomposition requires \(\smash {{{\mathcal {O}}}(\frac{2}{3}M^3)}\) scalar operations. Neglecting operations with lower order cost in M, then, we obtain that the computation of \(\varvec{\delta }\) requires:

$$\begin{aligned}C_{\varvec{\delta }} := \left( \frac{R^2+R}{2}+\frac{2}{3}\right) M^3 \end{aligned}$$

scalar operations. Moreover, the formation of the blocks \({\varvec{F}}_{i,j}\) requires \(R^2\) Jacobian matrices of \({\varvec{f}}\), which yields, assuming a mean cost per entry of \(\beta \) scalar operations, \(R^2M^2\beta \) scalar operations. In consequence, the (approximate) computational cost of each Newton iteration for AIT methods is:

$$\begin{aligned} C_{\mathrm {AIT}} = C_{\varvec{\delta }} + R^2\beta M^2 = \left( \frac{R^2+R}{2}+\frac{2}{3}\right) M^3+R^2\beta M^2. \end{aligned}$$

5 Numerical experiments

5.1 Preliminaries

In this section, the performance of the AIT methods is analyzed. We first compare the AIT methods with their exact counterparts, the IT methods, of the same order. We only compare the AIT with the IT methods for scalar equations, since the implementation of the IT methods for systems is extremely involved. For linear scalar equations, the implementation for any order is performed using (3.8). These methods are compared in terms of error, numerical order, and computational time, using some scalar problems.

We then consider two initial-value problems for systems of equations. For these problems, the AIT methods are compared with approximate explicit Taylor (AET) methods of the same order (Baeza et al. 2017), so as to stress the superior stability of the implicit methods. In all the numerical examples, we show the numerical errors, computed in the \(L^1\)-norm, and the order of the numerical method, computed by:

$$\begin{aligned} o(N)=\log _2\left( \left| e(N/2)/e(N)\right| \right) , \end{aligned}$$

with e(N) standing for the numerical error for N time steps.

Table 1 Example 1 (linear scalar problem (5.1)): numerical errors and orders for IT and AIT methods
Table 2 Example 2 (nonlinear scalar problem (5.2)): numerical errors, orders, and computational time (in seconds) for IT and AIT methods

5.2 Examples 1 and 2: scalar equations

In Example 1 we consider the linear equation:

$$\begin{aligned} u'= -5u + 5 \sin (2 t) + 2\cos (2 t),\quad u(0)=0, \end{aligned}$$
(5.1)

with exact solution \(u(t) = \sin (2t)\). The results for the IT and AIT methods for \(T=5\) and orders \(R\in \{2, 3, 4, 5, 6\}\) are collected in Table 1, which shows that both methods recover the expected orders of convergence in all cases. Compared with the IT methods, the approximate version attains the expected order faster, but produces a slightly larger error at coarse resolutions. This is possibly due to the simplicity of the equation under consideration, which yields a local truncation error smaller than the error introduced by the derivative approximations of the AIT method, and which prevents the exact method from exhibiting the correct order of accuracy unless the step size is small enough.

In Example 2, we consider the more involved problem:

$$\begin{aligned} u'=\log \left( \frac{u+u^3+u^5}{1+u^2+u^4+u^6}\right) , \quad u(0)=1, \end{aligned}$$
(5.2)

and compute its solution up to \(T=1\) for orders \(R\in \{2,3,4,5,6\}\). The solution computed by the AIT method with \(R={6}\) and a resolution of 20000 points is taken as the reference solution. Table 2 shows that the errors of both methods are similar and that the numerical order converges to the expected value in each case. In Fig. 1, we compare the errors obtained by each method with respect to the CPU time required to run the algorithm (left) and with respect to the discretization step h (right). The performance is increasingly favorable to the approximate method as the order increases, as expected. Note that, for \(R= 5, 6\), although the errors are affected by round-off error, it can still be seen that the AIT method outperforms the IT method in terms of the computational time needed to obtain an approximate solution.

5.3 Examples 3 and 4: systems of ODEs

We consider now two problems modeled by systems of ODEs, used by Akinfenwa et al. (2013) to test stability properties and accuracy. Example 3 is a stiff nonlinear problem given by:

$$\begin{aligned} \left\{ \begin{aligned} y'&= -1002y+1000z^2,\\ z'&= y-z(1+z), \quad t >0; \end{aligned} \qquad \begin{aligned} y(0)=&1,\\ z(0)=&1, \end{aligned} \right. \end{aligned}$$
(5.3)

known as Kaps problem, with exact solution given by:

$$\begin{aligned} y(t)= \mathrm {e}^{-2t},\quad z(t)= \mathrm {e}^{-t}, \end{aligned}$$

which is independent of the stiffness parameter, here \(k=-1000\). We compare the solutions at \(T=5\) obtained by the approximate implicit (AIT) and approximate explicit (AET) methods of the same order. Both schemes recover the expected order, the implicit one attaining it on coarser meshes; see Table 3. Note that the explicit scheme does not attain good accuracy unless meshes with more than 2000 points are used, whereas the implicit scheme achieves the same error level with approximately four times fewer nodes.

Fig. 1 Example 2 (nonlinear scalar problem (5.2)): performance of the IT and the AIT methods

Table 3 Example 3 (stiff nonlinear problem (5.3)): numerical errors and orders for AIT and AET methods
Table 4 Example 4 (stiff linear problem (5.4)): numerical errors and orders for AIT and AET methods

Finally, in Example 4, we consider the system of ODEs:

$$\begin{aligned} \left\{ \begin{aligned} x'&=-21x+19y-20z,\\ y'&= 19x-21y+20z,\\ z'&=40x-40y-40z,\quad t>0; \end{aligned} \qquad \begin{aligned} x(0)&=1,\\ y(0)&=0,\\ z(0)&=-1, \end{aligned} \right. \end{aligned}$$
(5.4)

also taken from Akinfenwa et al. (2013) and whose exact solution is given by:

$$\begin{aligned} \left\{ \begin{aligned} x(t)=&\frac{1}{2}\left( \mathrm {e}^{-2t}+\mathrm {e}^{-40t}(\cos (40t)+\sin (40t))\right) ,\\ y(t)=&\frac{1}{2}\left( \mathrm {e}^{-2t}-\mathrm {e}^{-40t}(\cos (40t)+\sin (40t))\right) ,\\ z(t)=&-\mathrm {e}^{-40t}(\cos (40t)-\sin (40t)). \end{aligned} \right. \end{aligned}$$

As in the previous example, the explicit scheme needs more nodes to achieve the same error level as the implicit scheme. For instance, the explicit scheme needs about 64 times more nodes, \(N=320\), to obtain the same errors that the implicit scheme attains with \(N=5\); see Table 4 and Fig. 2. This shows that the implicit scheme is more appropriate when dealing with stiff problems.

Fig. 2 Example 4 (stiff linear problem (5.4)): performance of the AET and the AIT methods

6 Conclusions

This article is part of ongoing work to develop efficient high-order approximate Taylor ODE solvers. We have reviewed the exact implicit Taylor methods for ODEs and introduced a novel strategy that makes it possible to implement them systematically, although at the cost of differentiating the function in the ODE up to the order of the method.

On the other hand, using the same strategy that led to approximate explicit Taylor methods for ODEs, we have defined approximate implicit Taylor methods, whose only requirement is the evaluation of \({\varvec{f}}\) and of its first-order derivatives, used to build the Jacobian matrix of the auxiliary systems of nonlinear equations solved by Newton’s method.

The numerical results essentially confirm that the novel approach introduced in this work outperforms the exact version in terms of efficiency, although, as expected, this is not the case for low orders of accuracy. The AIT methods are therefore expected to be useful for ODEs that must be solved with a very-high-order implicit scheme.

Finally, it is worth pointing out that we intend to extend this analysis to the context of PDEs, where implicit methods are needed for some related problems. High-order methods are increasingly demanded for the accurate solution of such problems, and the AIT methods may therefore become useful in that context.