1 Introduction

A differential equation of the form

$$ y^{\prime}(x) = f(x,y(x),y(\alpha (x)),y^{\prime}(\alpha (x))) $$
(1)

appears in many real-life applications and has been investigated by many authors in recent years. The classical case is α(x) = x − τ, with τ a constant. When the right-hand side of (1) does not depend on the derivative of the unknown function y, the equation is known as a delay differential equation; otherwise, it is known as a neutral delay differential equation. In this article, we consider the numerical solution of functional differential equations of the form:

$$ \left\{ {\begin{array}{*{20}l} {y^{\prime}(x) = f(x,y(x),y(qx),y^{\prime}(qx)),} & {0 < x \le T,} \\ {y(0) = y_{0} ,} & {} \\ \end{array} } \right. $$
(2)

where 0 < q < 1, and

$$ \left\{ {\begin{array}{*{20}l} {y^{\prime } (x) = f(x,y(x),y(x - \tau ),y^{\prime } (x - \tau )),} & {0 < x \le T,} \\ {y(x) = \phi (x),} & {x \le 0.} \\ \end{array} } \right. $$
(3)

Equation (2), known as the pantograph equation, arises in many applications, for example in number theory, electrodynamics, and astrophysics. Detailed explanations can be found in [1–3]. Numerical solutions for (2) and (3) have been studied extensively; see for example [4–11] and the references cited therein. These methods produce one approximation in a single integration step. Block methods, however, produce more than one approximation in a step. Block methods have been used to solve a wide range of ordinary differential equations as well as delay differential equations (see [12–16] and the references cited therein).

In this article, the functional differential equations (2) and (3) are solved using a two-point block method with variable stepsize. In a single integration step, two new approximations to the unknown function are obtained using the same stepsize. For the next block, the stepsize is kept constant, doubled, or halved, depending on the local approximation error. In any variable stepsize method, the coefficients of the method must be recalculated whenever the stepsize changes. To avoid this tedious calculation, the coefficients corresponding to each stepsize ratio are calculated beforehand and stored at the start of the program.

The organization of this article is as follows. In Sect. 2, we briefly describe the development of the variable step block method. Stability region for the block method is discussed in Sect. 3. Numerical results for some functional differential equations are presented in Sect. 4 and finally Sect. 5 is the conclusion.

2 Method Development

Referring to (1), we seek a set of discrete solutions for the unknown function y in the interval [0, T]. The interval is divided by a sequence of mesh points \( \left\{ {x_{i} } \right\}_{i = 0}^{t} \), not necessarily equally spaced, such that \( 0 = x_{0} < x_{1} < \cdots < x_{t} = T \). Let the approximate solution for \( y(x_{n}) \) be denoted by \( y_{n} \). Suppose that the solutions have been obtained up to \( x_{n} \). At the current step, two new solutions \( y_{n+1} \) and \( y_{n+2} \) at \( x_{n+1} \) and \( x_{n+2} \), respectively, are approximated simultaneously using the same back values and the same stepsize. The points \( x_{n+1} \) and \( x_{n+2} \) form the current block, whose length is 2h. We refer to this particular method as the two-point one-block method. The block method is shown in Fig. 1.

Fig. 1 Two-point one-block method

In Fig. 1, the stepsize of the previous step is viewed as a multiple of the current stepsize. Thus, \( x_{n+1} - x_{n} = x_{n+2} - x_{n+1} = h \) and \( x_{n-1} - x_{n-2} = x_{n} - x_{n-1} = rh \). The value of r is either 1, 2, or \( \frac{1}{2} \), depending on the decision to change the stepsize. In this algorithm, we employ the strategy of keeping the stepsize constant, halving it, or doubling it.

The formulae for the block method can be written as the pair,

$$ \begin{aligned} y_{n + 1} &= y_{n} + h\sum\limits_{i = 0}^{4} {\beta_{i} (r)} f\left( {x_{n - 2 + i} ,y_{n - 2 + i} ,\bar{y}_{n - 2 + i} ,\hat{y}_{n - 2 + i} } \right), \\ y_{n + 2} &= y_{n} + h\sum\limits_{i = 0}^{4} {\beta_{i}^{*} (r)} f\left( {x_{n - 2 + i} ,y_{n - 2 + i} ,\bar{y}_{n - 2 + i} ,\hat{y}_{n - 2 + i} } \right), \\ \end{aligned} $$
(4)

where \( \bar{y}_{n} \) and \( \hat{y}_{n} \) are the approximations to \( y(\alpha (x_{n} )) \) and \( y^{\prime}(\alpha(x_n)) \), respectively. For simplicity, from now on we refer to \( f(x_{n} ,y_{n} ,\bar{y}_{n} ,\hat{y}_{n} ) \) as \( f_{n} \). The coefficient functions \( \beta_{i}(r) \) and \( \beta_{i}^{*}(r) \) give the coefficients of the method when r is 1, 2, or \( \frac{1}{2}. \)

The first formula in (4) is obtained by integrating (1) from \( x_{n} \) to \( x_{n+1} \), replacing the function f with the polynomial P, where P(x) is given by

$$ P(x) = \sum\limits_{j = 0}^{4} {L_{4,j} (x)f_{n + 2 - j} } , $$

and

$$ L_{4,j} (x) = \prod\limits_{\begin{subarray}{l} i = 0 \\ i \ne j \end{subarray} }^{4} {\frac{{(x - x_{n + 2 - i} )}}{{(x_{n + 2 - j} - x_{n + 2 - i} )}}} ,\quad {\text{for}}\quad j = 0,1, \ldots ,4. $$
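In code, P(x) is an ordinary Lagrange interpolant. The following is a minimal sketch, with `xs` and `fs` as stand-ins for the five nodes \( x_{n+2-i} \) and values \( f_{n+2-j} \):

```python
def lagrange_basis(xs, j, x):
    """Evaluate L_{4,j}(x): the product of (x - x_i)/(x_j - x_i) over i != j."""
    result = 1.0
    for i, xi in enumerate(xs):
        if i != j:
            result *= (x - xi) / (xs[j] - xi)
    return result

def interpolate(xs, fs, x):
    """Evaluate P(x) = sum_j L_{4,j}(x) * f_j over the five nodes."""
    return sum(lagrange_basis(xs, j, x) * fj for j, fj in enumerate(fs))
```

By construction the basis is cardinal, \( L_{4,j}(x_{k}) = \delta_{jk} \), so P reproduces any polynomial of degree at most 4 exactly.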

Similarly, the second formula in (4) is obtained by integrating (1) from \( x_{n} \) to \( x_{n+2} \), replacing the function f with the polynomial P. The value of \( \bar{y}_{n} \) is obtained by the Newton divided difference interpolation formula,

$$ \begin{aligned} \bar{y}_{n} & = y[x_{j} ] + (\alpha (x_{n} ) - x_{j} )y[x_{j} ,x_{j - 1} ] + \cdots \\ \, & \quad + (\alpha (x_{n} ) - x_{j} ) \cdots (\alpha (x_{n} ) - x_{j - 3} )y\,[x_{j} , \ldots ,x_{j - 4} ], \\ \end{aligned} $$

where

$$ y[x_{j} ,x_{j - 1} , \ldots ,x_{j - 4} ] = \frac{{y[x_{j} , \ldots ,x_{j - 3} ] - y[x_{j - 1} , \ldots ,x_{j - 4} ]}}{{x_{j} - x_{j - 4} }}, $$

provided that \( x_{j-1} \le \alpha (x_{n} ) \le x_{j} \), \( n \ge j \), \( j \ge 1 \). We approximate the value of \( \hat{y}_{n} \) by interpolating the values of f, that is,

$$ \begin{aligned} \hat{y}_{n} & = f[x_{j} ] + (\alpha (x_{n} ) - x_{j} )f[x_{j} ,x_{j - 1} ] + \cdots \\ & \quad + (\alpha (x_{n} ) - x_{j} ) \cdots (\alpha (x_{n} ) - x_{j - 3} )f[x_{j} , \ldots ,x_{j - 4} ], \\ \end{aligned} $$

where

$$ f[x_{j} ,x_{j - 1} , \ldots ,x_{j - 4} ] = \frac{{f[x_{j} , \ldots ,x_{j - 3} ] - f[x_{j - 1} , \ldots ,x_{j - 4} ]}}{{x_{j} - x_{j - 4} }}. $$
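Both interpolation formulas above share the same divided-difference recursion. A minimal sketch, written over ascending nodes (the backward form over \( x_{j}, \ldots, x_{j-4} \) is the identical recursion; the same code serves \( \bar{y}_{n} \) with y-values and \( \hat{y}_{n} \) with f-values):

```python
def divided_differences(xs, ys):
    """Return Newton coefficients y[x0], y[x0,x1], ..., computed by the
    standard in-place divided-difference table update."""
    coeffs = list(ys)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - k])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form at x by Horner-like nesting."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result
```

With five nodes the interpolant is exact for polynomials of degree at most 4, matching the order of the block formulas.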

The formulae in (4) are implicit; thus, a set of predictors is derived similarly using the same number of back values. The corrector formulae in (4) are iterated until convergence.

For greater efficiency while achieving the required accuracy, the algorithm is implemented in a variable stepsize scheme. The stepsize is changed based on the local error, which is controlled at the second point. A step is considered successful if the local error is less than a specified tolerance. If the current step is successful, we consider either doubling or keeping the same stepsize: if the same stepsize has been used for at least two blocks, we double the next stepsize; otherwise, the next stepsize is kept the same. If the current step fails, the next stepsize is reduced by half. For repeated failures, a restart with an optimal stepsize and one back value is required. In variable step algorithms, the coefficients of the method need to be recalculated whenever the stepsize changes. This recalculation cost is avoided by calculating the coefficients beforehand and storing them at the start of the program. With our stepsize changing strategy, we store the coefficients \( \beta_{i}(r) \) and \( \beta_{i}^{*} (r) \) for r = 1, 2, and \( \frac{1}{2}. \)
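The stepsize decisions above can be sketched as a small helper; the function and argument names here are illustrative, not the authors' implementation:

```python
def next_stepsize(h, local_error, tol, blocks_at_current_h):
    """Constant/halve/double strategy: halve on failure, double after the
    same h has been used for at least two successful blocks, else keep h."""
    if local_error >= tol:
        return 0.5 * h        # failed step: halve and repeat the block
    if blocks_at_current_h >= 2:
        return 2.0 * h        # h held for two blocks: double
    return h                  # successful step: keep the same h
```

Note that the ratio r in (4) is the previous stepsize divided by the new one, so doubling h selects the \( r = \frac{1}{2} \) coefficients while halving h selects the r = 2 coefficients.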

3 Region of Absolute Stability

In the development of a numerical method, it is of practical importance to study the behavior of the global error. The numerical solution \( y_{n} \) is expected to behave as the exact solution \( y(x_{n}) \) does as \( x_{n} \) approaches infinity. In this section, we present the results of the stability analysis of the two-point one-block method when it is applied to delay and neutral delay differential equations with real coefficients.

For the sake of simplicity and without loss of generality, we consider the equation

$$ \begin{array}{*{20}l} {y^{\prime } (x) = ay(x) + by(x - \tau ) + cy^{\prime } (x - \tau ),} & {x \ge 0,} \\ {y(x) = \phi (x),} & { - \tau \le x < 0,} \\ \end{array} $$
(5)

where \( a, b, c \in R \), τ is the delay term such that τ = mh, h is a constant stepsize with \( x_{n} = x_{0} + nh \), and \( m \in Z^{+} \). For \( i \in Z^{+} \), we define the vectors \( {\mathbf{Y}}_{N + i} = \left[ {\begin{array}{*{20}c} {y_{n - 3 + 2i} } \\ {y_{n - 2 + 2i} } \\ \end{array} } \right] \) and \( {\mathbf{F}}_{N + i} = \left[ {\begin{array}{*{20}c} {f_{n - 3 + 2i} } \\ {f_{n - 2 + 2i} } \\ \end{array} } \right]. \) Then, the block method (4) can be written in matrix form as,

$$ A_{1} {\mathbf{Y}}_{{N + 1}} + A_{2} {\mathbf{Y}}_{{N + 2}} = h\sum\limits_{{i = 0}}^{2} {B_{i} (r){\mathbf{F}}_{{N + i}} }, $$
(6)

where \( A_{1} = \left[ {\begin{array}{*{20}c} 0 & { - 1} \\ 0 & { - 1} \\ \end{array} } \right], \) \( A_{2} = \left[ {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \right], \) and B i (r) is a matrix that contains the coefficients β i (r) and \( \beta_{i}^{*} (r). \) Applying method (6) to (5), we get

$$ \begin{aligned} A_{1} {\mathbf{Y}}_{{N + 1}} + A_{2} {\mathbf{Y}}_{{N + 2}} = & H_{1} \sum\limits_{{i = 0}}^{2} {B_{i} (r){\mathbf{Y}}_{{N + i}} + H_{2} \sum\limits_{{i = 0}}^{2} {B_{i} (r){\mathbf{Y}}_{{N + i - m}} } } \\ {\text{ }} + & cA_{2} {\mathbf{Y}}_{{N + 2 - m}} + cA_{1} {\mathbf{Y}}_{{N + 1 - m}} , \\ \end{aligned} $$

where \( H_{1} = ha \) and \( H_{2} = hb \). Rearranging, we have

$$ \sum\limits_{i = 0}^{2} {(A_{i} - H_{1} B_{i} (r)){\mathbf{Y}}_{N + i} } = \sum\limits_{i = 0}^{2} {(H_{2} B_{i} (r) + cA_{i} ){\mathbf{Y}}_{N + i - m} } , $$
(7)

where A 0 is the null matrix. The characteristic polynomial for (7) is given by \( C_{m}(H_{1}, H_{2}, c, \zeta) \), where \( C_{m} \) is the determinant of the matrix polynomial in

$$ \sum\limits_{i = 0}^{2} {(A_{i} - H_{1} B_{i} (r))\zeta^{m + i} } - \sum\limits_{i = 0}^{2} {(H_{2} B_{i} (r) + cA_{i} )\zeta^{i} } = 0. $$
(8)
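For the delay-free specialization \( H_{2} = 0 \), c = 0 (so only the first sum in (8) survives), the determinant can be assembled with elementary polynomial arithmetic. The sketch below, with coefficients stored low-to-high in plain lists, is illustrative only; the general case adds the second sum in the same way:

```python
def pmul(p, q):
    """Multiply two polynomials given as low-to-high coefficient lists."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def char_poly(H1, m, B):
    """Determinant of sum_i (A_i - H1*B_i) * z^(m+i), i.e. (8) with
    H2 = 0 and c = 0; B holds the three 2x2 matrices B_i."""
    A = [[[0, 0], [0, 0]],    # A_0: null matrix
         [[0, -1], [0, -1]],  # A_1
         [[1, 0], [0, 1]]]    # A_2: identity
    M = [[[0.0] * (m + 3) for _ in range(2)] for _ in range(2)]
    for i in range(3):
        for r in range(2):
            for c in range(2):
                M[r][c][m + i] = A[i][r][c] - H1 * B[i][r][c]
    det = pmul(M[0][0], M[1][1])
    off = pmul(M[0][1], M[1][0])
    return [a - b for a, b in zip(det, off)]
```

At \( H_{1} = 0 \) the determinant collapses to \( \zeta^{2m+3}(\zeta - 1) \), whose only nonzero root is the principal root ζ = 1, as consistency requires.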

The numerical solution of (7) is asymptotically stable if and only if, for all m, all zeros of the characteristic polynomial (8) lie within the open unit disk in the complex plane. The stability region is defined as follows:

Definition 1

For a fixed stepsize h, \( a, b \in R \), and for any fixed c, the region S in the \( H_{1} - H_{2} \) plane is called the stability region of the method if, for any \( (H_{1}, H_{2}) \in S \), the numerical solution of (5) vanishes as \( x_{n} \) approaches infinity.

In Figs. 2, 3 and 4, the stability regions for c = 0 are depicted with r = 1, r = 2, and \( r = \frac{1}{2}, \) respectively; see also [15]. In Fig. 5, the stability regions for m = 1 and c = 0.5 are illustrated. We use the boundary locus technique as described in [17, 18]. The regions are sketched for r = 1, r = 2, and \( r = \frac{1}{2}. \)

Fig. 2 Stability regions for the block method with c = 0, r = 1

Fig. 3 Stability regions for the block method with c = 0, r = 2

Fig. 4 Stability regions for the block method with c = 0, r = 0.5

Fig. 5 Stability regions for the block method with c = 0.5

The coefficient matrices are given as follows:

For r = 1:

$$ B_{0} = \left[ {\begin{array}{*{20}c} 0 & {\tfrac{{11}}{{720}}} \\ 0 & {\tfrac{{ - 1}}{{90}}} \\ \end{array} } \right],\quad B_{1} = \left[ {\begin{array}{*{20}c} {\tfrac{{ - 74}}{{720}}} & {\tfrac{{456}}{{720}}} \\ {\tfrac{4}{{90}}} & {\tfrac{{24}}{{90}}} \\ \end{array} } \right],\quad {\text{and}}\quad B_{2} = \left[ {\begin{array}{*{20}c} {\tfrac{{346}}{{720}}} & {\tfrac{{ - 19}}{{720}}} \\ {\tfrac{{124}}{{90}}} & {\tfrac{{29}}{{90}}} \\ \end{array} } \right]. $$

For r = 2:

$$ B_{0} = \left[ {\begin{array}{*{20}c} 0 & {\tfrac{37}{14400}} \\ 0 & {\tfrac{ - 1}{900}} \\ \end{array} } \right],\quad B_{1} = \left[ {\begin{array}{*{20}c} {\tfrac{ - 335}{14400}} & {\tfrac{7455}{14400}} \\ {\tfrac{5}{900}} & {\tfrac{285}{900}} \\ \end{array} } \right],\quad {\text{and}}\quad B_{2} = \left[ {\begin{array}{*{20}c} {\tfrac{7808}{14400}} & {\tfrac{ - 565}{14400}} \\ {\tfrac{1216}{900}} & {\tfrac{295}{900}} \\ \end{array} } \right]. $$

For \( r = \frac{1}{2}: \)

$$ B_{0} = \left[ {\begin{array}{*{20}c} 0 & {\tfrac{145}{1800}} \\ 0 & {\tfrac{ - 20}{225}} \\ \end{array} } \right],\quad B_{1} = \left[ {\begin{array}{*{20}c} {\tfrac{ - 704}{1800}} & {\tfrac{1635}{1800}} \\ {\tfrac{64}{225}} & {\tfrac{15}{225}} \\ \end{array} } \right],\quad {\text{and}}\quad B_{2} = \left[ {\begin{array}{*{20}c} {\tfrac{755}{1800}} & {\tfrac{ - 31}{1800}} \\ {\tfrac{320}{225}} & {\tfrac{71}{225}} \\ \end{array} } \right]. $$
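The tabulated coefficients can be checked by exact rational integration of the Lagrange basis over the nodes \( -2r, -r, 0, 1, 2 \) (in units of h, with \( x_{n} \) at the origin). Since integrating the interpolant of \( f \equiv 1 \) must reproduce the interval length, each first row must sum to 1 and each second row to 2. A sketch using exact fractions:

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply polynomials stored as low-to-high Fraction coefficient lists."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def block_coeffs(r):
    """Exact beta_i(r) and beta_i^*(r): integrate each Lagrange basis
    polynomial over the nodes (-2r, -r, 0, 1, 2) from 0 to 1 and 0 to 2."""
    nodes = [-2 * r, -r, Fraction(0), Fraction(1), Fraction(2)]
    def basis_integral(j, upper):
        p = [Fraction(1)]
        for i, xi in enumerate(nodes):
            if i != j:
                p = poly_mul(p, [-xi, Fraction(1)])  # multiply by (x - x_i)
                p = [cf / (nodes[j] - xi) for cf in p]
        # integrate the basis polynomial term by term from 0 to `upper`
        return sum(cf * upper ** (k + 1) / (k + 1) for k, cf in enumerate(p))
    return ([basis_integral(j, Fraction(1)) for j in range(5)],
            [basis_integral(j, Fraction(2)) for j in range(5)])
```

Exact fractions avoid the floating-point round-off that would otherwise blur the consistency sums.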

Referring to Figs. 2, 3, 4 and 5, the stability regions are closed regions bounded by the corresponding boundary curves. It is observed that the stability region shrinks as the stepsize increases.

4 Numerical Results

In this section, we present some numerical examples to illustrate the accuracy and efficiency of the block method. The examples, taken from [8, 19], are as follows:

Example 1

$$ \begin{aligned} y^{\prime}(x) & = \frac{1}{2}y(x) + \frac{1}{2}e^{x/2} y\left( \frac{x}{2} \right),\quad 0 \le x \le 1, \\ y(0) & = 1. \\ \end{aligned} $$

The exact solution is \( y(x) = e^{x} \).
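To make Example 1 concrete, the sketch below solves it with a deliberately simple first-order scheme: forward Euler, with linear interpolation of the stored values for the delayed argument. This stand-in is for illustration only and is not the block method of Sect. 2; even so, it tracks the exact solution \( e^{x} \):

```python
import math

def solve_example1(N=1000):
    """Forward Euler for y'(x) = y(x)/2 + exp(x/2)*y(x/2)/2, y(0) = 1,
    on [0, 1] with N steps; the delayed value y(x/2) is recovered by
    linear interpolation between already-computed grid values."""
    h = 1.0 / N
    ys = [1.0]                           # ys[k] approximates y(k*h)
    for n in range(N):
        x = n * h
        xq = 0.5 * x                     # delayed argument q*x with q = 1/2
        if n == 0:
            yq = ys[0]
        else:
            k = min(int(xq / h), n - 1)  # bracket xq between stored values
            t = xq / h - k
            yq = (1.0 - t) * ys[k] + t * ys[k + 1]
        ys.append(ys[n] + h * (0.5 * ys[n] + 0.5 * math.exp(xq) * yq))
    return ys
```

Since q < 1, the delayed argument never runs ahead of the grid, so the scheme stays fully explicit; the block method replaces both the one-step update and the linear interpolation with their fifth-order counterparts.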

Example 2

$$ \begin{aligned} y^{\prime}(x) = & -\frac{5}{4}e^{ - x/4} y\left( {\frac{4}{5}x} \right),\quad 0 \le x \le 1, \\ y(0) = \,& 1. \\ \end{aligned} $$

The exact solution is \( y(x) = e^{-1.25x} \).

Example 3

$$ \begin{aligned} y^{\prime}(x) =\, & - y(x) + \frac{q}{2}y{\kern 1pt} (qx) - \frac{q}{2}e^{ - qx} ,\quad 0 \le x \le 1, \\ y(0) =\, & 1. \\ \end{aligned} $$

The exact solution is \( y(x) = e^{-x} \).

Example 4

$$ \begin{aligned} y^{\prime}(x) &= ay(x) + by(qx) + \cos x - a\sin x - b\sin (qx),\quad 0 \le x \le 1, \\ y(0) &= 0. \\ \end{aligned} $$

The exact solution is \( y(x) = \sin x \).

Example 5

$$ \begin{aligned} y^{\prime}(x) = & -y(x) + \frac{1}{2}y\left( \frac{x}{2} \right) + \frac{1}{2}y^{\prime}\left( \frac{x}{2} \right),\quad 0 \le x \le 1, \\ y(0) = & 1. \\ \end{aligned} $$

The exact solution is \( y(x) = e^{-x} \).

Example 6

$$ \begin{aligned} y^{\prime}(x) = & -y(x) + 0.1y(0.8x) + 0.5y^{\prime}(0.8x) \\ & \quad +(0.32x - 0.5)e^{ - 0.8x} + e^{ - x} ,\quad 0 \le x \le 10, \\ y(0) = & 0. \\ \end{aligned} $$

The exact solution is \( y(x) = xe^{-x} \).

Example 7

$$ \begin{array}{*{20}l} y^{\prime}(x) = y(x) + y(x - 1) - \frac{1}{4}y^{\prime}(x - 1), & 0 \le x \le 1, \\ y(x) = -x, & x \le 0. \\ \end{array} $$

The exact solution is \( y(x) = - \frac{1}{4} + x + \frac{1}{4}e^{x} . \)

Example 8

$$ \begin{array}{*{20}l} y^{\prime}(x) = y(x) + y(x - 1) - 2y^{\prime}(x - 1), & 0 \le x \le 1, \\ y(x) = -x, & x \le 0. \end{array} $$

The exact solution is \( y(x) = -2 + x + 2e^{x} \).

Numerical results for Examples 1–8 are given in Tables 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. The following abbreviations are used in the tables: TOL—the chosen tolerance, STEP—the total number of steps taken, FS—the number of failed steps, AVERR—the average error, and MAXE—the maximum error. The notation 7.02683E−01 means \( 7.02683 \times 10^{-1} \).

Table 1 Numerical results for Example 1
Table 2 Numerical results for Example 2
Table 3 Numerical results for Example 3, q = 0.2
Table 4 Numerical results for Example 3, q = 0.8
Table 5 Numerical results for Example 4, a = −1, b = 0.5, q = 0.1
Table 6 Numerical results for Example 4, a = − 1, b = 0.5, q = 0.5
Table 7 Numerical results for Example 5
Table 8 Numerical results for Example 6
Table 9 Numerical results for Example 7
Table 10 Numerical results for Example 8

From Tables 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, it is observed that the two-point block method achieves the desired accuracy for the given tolerances. As the tolerance decreases, smaller stepsizes are needed to achieve the required accuracy, and the total number of steps therefore increases.

5 Conclusion and Future Work

In this paper, we have discussed the development of a two-point block method for solving functional differential equations of delay and neutral delay types. The block method produces two approximate solutions in a single integration step using the same back values. The algorithm is implemented with a variable stepsize technique, where the coefficients for the various stepsize ratios are stored at the beginning of the program for greater efficiency. Stability regions for a general linear test equation are obtained for the stepsize ratios r = 1, 2, and \( \frac{1}{2} \). The numerical results indicate that the two-point block method achieves the desired accuracy efficiently.

In the future, research should focus on implementing the block method on parallel machines. The efficiency of the block method can be fully utilized if the computation for each point is divided among parallel tasks.