Introduction

Equations involving derivatives of a dependent variable are frequently used to model various phenomena arising in nature [1, 2]. The derivative of such a function at a point \(t_0\) can be approximated by using its values in a small neighborhood of \(t_0\). Thus the derivative is a local operator, which cannot model the memory and hereditary properties involved in real-world problems. To overcome this drawback, one has to include some nonlocal operator in the given equation. Volterra [3, 4] suggested the use of an integral operator (which is nonlocal) to model problems in population dynamics. In his monograph [3], he discussed the theory of integral, integro-differential and functional equations. Existence-uniqueness, stability and applications of integro-differential equations (IDE) are presented in a book by Lakshmikantham and Rao [5]. The existence theory of nonlinear IDEs is also discussed in [6]. Recently, Kostic [7] discussed the theory of abstract Volterra integro-differential equations.

A-stable linear multi-step methods to solve Volterra IDEs (VIDE) were proposed by Matthys in [8]. Brunner presented various numerical methods to solve VIDEs in [9]. Day [10] used the trapezoidal rule to devise a numerical method for nonlinear VIDEs. Linz [11] derived fourth order numerical methods for such equations. A Runge–Kutta method, a predictor-corrector method and an explicit multistep method were derived for IDEs by Wolfe and Phillips in [12]. Dehghan and Salehi [13] developed a numerical scheme based on the moving least squares method for these equations. Singular IDEs in generalized Hölder spaces are solved by using collocation and mechanical quadrature methods in [14]. Saeedi et al. [15] described the operational Tau method for solving nonlinear VIDEs of the second kind. The recently developed fourth order Adams–Bashforth–Moulton method [16] and multistep block methods [17, 18] are also useful for these equations.

The stability of numerical methods for VIDEs is studied in [19, 20]. A survey of numerical treatments of VIDEs is given in [21]. Each of these methods has its own importance. Most of these methods are either time efficient but less accurate, or vice versa. Our aim is to provide a time efficient method which produces solutions with comparable error bounds.

Applications of integro-differential equations can be found in many fields such as mechanics and electromagnetic theory [22], nuclear reactors [23], visco-elasticity [24, 25], heat conduction in materials with memory [26, 27] and man-environment epidemics [28].

We utilize a decomposition method proposed by Daftardar-Gejji and Jafari (DJM) [29] to generate a new numerical method for solving nonlinear VIDEs. The paper is organized as follows:

The preliminaries are given in section two. A new numerical method is presented in section three. Analysis of this numerical method is given in section four. Section five deals with different types of illustrative examples. Conclusions are summarized in section six. In the Appendix, we provide a software package.

Preliminaries

Basic Definitions and Results

In this section, we discuss some basic definitions and results:

Definition 1

[11] Let \(y_j\) be the approximation to the exact value \(y(x_j)\) obtained by a given method with step-size h. Then the method is said to be convergent if and only if

$$\begin{aligned} \lim \limits _{h\rightarrow 0} \mid y(x_j)-y_j\mid = 0,\quad j=1,2,\ldots , N. \end{aligned}$$
(1)

Definition 2

[11] A method is said to be of order p if p is the largest number for which there exists a finite constant \( \textsc {C}\) such that

$$\begin{aligned} \mid y(x_j)-y_j\mid \le \textsc {C}h^p,\quad j=1,2,\ldots , N. \end{aligned}$$
(2)

Theorem 1

[11] Consider the integral equation

$$\begin{aligned} y(x) = f(x) + \int _0^x K(x,t, y(t))dt, \quad y(x_0)=y_0. \end{aligned}$$
(3)

Assume that

  1. (i)

    f is continuous in \(0 \le x \le a\) ,

  2. (ii)

    K(x, t, y) is a continuous function for \(0 \le t \le x \le a\) and \(\parallel y\parallel < \infty \),

  3. (iii)

    K(x, t, y) satisfies a Lipschitz condition

    $$\begin{aligned} \parallel K(x, t, y_1)-K(x, t, y_2)\parallel \le L \parallel y_1-y_2 \parallel \end{aligned}$$
    (4)

for all \(0 \le t \le x \le a\). Then Eq. (3) has a unique solution.

Theorem 2

[5] Assume that \(f\in C[I\times \mathbb {R}^n,\mathbb {R}^n]\), \(K\in C[I\times I\times \mathbb {R}^n,\mathbb {R}^n]\) and \(\int _{s}^x \mid K(t,s,y(s))\mid dt \le N\), for \(x_0\le s \le x \le x_0 + a\), \(y\in \Omega = \{\phi \in C[I,\mathbb {R}^n]: \phi (x_0) = y_0 \ \text {and} \ \mid \phi (x) - y_0\mid \le b\}\). Then the IVP (3) possesses at least one solution.

Existence and Uniqueness Theorem

In this subsection we propose an existence and uniqueness theorem, which is a generalization of Theorem 1. Note that \(\parallel \cdot \parallel \) denotes the Euclidean norm.

Theorem 3

Consider the Volterra integro-differential equation

$$\begin{aligned} y'(x)= & {} f(x,y(x)) + \int _{x_0}^x K(x, t,y(t)) dt,\nonumber \\ \text {with initial condition} \quad y(x_0)= & {} y_0, \quad x\in [x_0,X]. \end{aligned}$$
(5)

Assume that f and K are continuous and satisfy Lipschitz conditions

$$\begin{aligned} \parallel f(x, y_1)-f(x, y_2)\parallel\le & {} L_1 \parallel y_1-y_2 \parallel \end{aligned}$$
(6)
$$\begin{aligned} \text {and}\,\parallel K(x, t, y_1)-K(x, t, y_2)\parallel\le & {} L_2 \parallel y_1-y_2 \parallel \end{aligned}$$
(7)

for every \(\mid x-x_0\mid \le a, \mid t-x_0\mid \le a, \parallel y_1 \parallel< \infty , \parallel y_2 \parallel < \infty \quad \text {and}\quad a>0\). Then the initial value problem (5) has a unique solution.

Proof

Integrating Eq. (5) and using \( y(x_0) = y_0\), we get

$$\begin{aligned} y(x)= & {} y_0 + \int _{x_0}^x f(z, y(z)) dz + \int _{x_0}^x \left( \int _{x_0}^z K(z, t, y(t))dt\right) dz.\nonumber \\ \Rightarrow y(x)= & {} y_0 + \int _{x_0}^x \left( f(z, y(z)) + \int _{x_0}^z K(z, t, y(t))dt\right) dz.\nonumber \\ \Rightarrow y(x)= & {} y_0 + \int _{x_0}^x G(z, y(z)) dz,\nonumber \\ \text {where}\quad G(z,y(z))= & {} f(z, y(z)) + \int _{x_0}^z K(z, t, y(t))dt. \end{aligned}$$
(8)

We have following observations

  1. (i)

\(y_0\) is continuous because it is a constant.

  2. (ii)

The kernel G is continuous for \(0 \le x \le a\), because f and K are continuous on the same domain.

    $$\begin{aligned} \text {(iii)}\parallel G(x, y_1)-G(x, y_2)\parallel= & {} \parallel f(x, y_1(x)) + \int _{x_0}^x K(x, t, y_1(t))dt - f(x, y_2(x))\\&- \int _{x_0}^x K(x, t, y_2(t))dt\parallel \\\le & {} \parallel f(x, y_1(x))-f(x, y_2(x))\parallel + \parallel \int _{x_0}^x K(x, t, y_1(t))dt\\&- \int _{x_0}^x K(x, t, y_2(t))dt\parallel \\\le & {} L_1 \parallel y_1-y_2 \parallel + a L_2 \parallel y_1-y_2 \parallel \quad (\because \mid x-x_0\mid \le a)\\\le & {} (L_1 + aL_2)\parallel y_1-y_2 \parallel . \end{aligned}$$

    \(\square \)

\(\therefore G\) satisfies a Lipschitz condition with constant \(L_1 + aL_2\).

\(\Rightarrow \) All the conditions of Theorem 1 are satisfied.

Hence Eq. (5) has a unique solution.

Daftardar-Gejji and Jafari Method

A new iterative method was introduced by Daftardar-Gejji and Jafari (DJM) [29] in 2006 for solving nonlinear functional equations. The DJM has been used to solve a variety of equations such as fractional differential equations [30], partial differential equations [31], boundary value problems [32, 33], evolution equations [34] and systems of nonlinear functional equations [35]. The method has also been successfully employed to solve the Newell–Whitehead–Segel equation [36], pantograph-type equations [36,37,38,39], the Ambartsumian equation [40], the fractional-order logistic equation [41] and some nonlinear dynamical systems [42]. Recently, DJM has been used to generate new numerical methods [43,44,45] for solving differential equations. In this section, we describe DJM, which is very useful for solving equations of the form

$$\begin{aligned} u= g + L(u) + N(u), \end{aligned}$$
(9)

where g is a given function and L and N are linear and nonlinear operators, respectively. DJM provides the solution to Eq. (9) in the form of the series

$$\begin{aligned} u= \sum _{i=0}^\infty u_i, \end{aligned}$$
(10)

where

$$\begin{aligned} u_0= & {} g, \nonumber \\ u_{m+1}= & {} L(u_m) + G_m,\quad m=0,1, 2,\ldots ,\nonumber \\ \text {where}\quad G_0 = N(u_0)\quad \text {and}\quad G_m= & {} N\left( \sum _{j=0}^m u_j\right) - N\left( \sum _{j=0}^{m-1} u_j\right) ,\quad m\ge 1. \end{aligned}$$
(11)

The k-term approximate solution is given by

$$\begin{aligned} u = \sum _{i=0}^{k-1} u_i \end{aligned}$$
(12)

for a suitable integer k.
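The recursion (11)–(12) can be sketched in a few lines of code. The following is a minimal illustration (our own code) for a scalar equation \(u = g + N(u)\) with \(L = 0\), which is the case used later in this paper; the contraction \(N(u) = 0.2\sin u\) is a toy choice for illustration only, not an equation from the paper.

```python
import math

def djm_solve(g, N, k):
    """Return the k-term DJM approximation u_0 + u_1 + ... + u_{k-1}
    for u = g + N(u), following the recursion (11) with L = 0."""
    terms = [g]                  # u_0 = g
    N_prev = 0.0                 # N applied to the empty sum, taken as 0
    for m in range(k - 1):
        S = sum(terms)           # partial sum u_0 + ... + u_m
        G_m = N(S) - N_prev      # G_0 = N(u_0); G_m = N(S_m) - N(S_{m-1})
        terms.append(G_m)        # u_{m+1} = L(u_m) + G_m with L = 0
        N_prev = N(S)
    return sum(terms)

# Illustrative contraction: u = 1 + 0.2*sin(u)
u = djm_solve(1.0, lambda x: 0.2 * math.sin(x), 6)
residual = abs(u - (1.0 + 0.2 * math.sin(u)))
```

With six terms the residual of the fixed-point equation is already negligible, reflecting the geometric decay of the terms \(G_m\) guaranteed by Theorems 4 and 5 for a contraction.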

The following convergence results for DJM have been proposed in the literature.

Theorem 4

[46] If N is \(C^{(\infty )}\) in a neighborhood of \(u_0\) and \(\left\| N^{(n)}(u_0) \right\| \le L\), for any n and for some real \(L>0\) and \(\left\| u_i\right\| \le M <\frac{1}{e}\), \(i=1,2,\ldots ,\) then the series \(\sum _{n=0}^{\infty } G_n\) is absolutely convergent to N and moreover,

$$\begin{aligned} \left\| G_n\right\| \le L M^n e^{n-1} (e-1), \quad n=1,2,\ldots . \end{aligned}$$

Theorem 5

[46] If N is \(C^{(\infty )}\) and \(\left\| N^{(n)}(u_0) \right\| \le M \le e^{-1}\), \(\forall n\), then the series \(\sum _{n=0}^{\infty } G_n\) is absolutely convergent to N.

The norm \(\parallel \cdot \parallel \) in the above convergence results is the Euclidean norm.

Numerical Method

In this section, we present a numerical method based on DJM to solve the Volterra integro-differential equation (5). Let \(h>0\) be a sufficiently small step-size. Define the uniform grid \(x_0, x_1=x_0+h,\ldots ,x_{j+1}=x_j+h,\ldots ,x_T=x_{T-1}+h=X\) on the interval \([x_0,X]\).

Integrating Eq. (5) from \(x=x_j \) to \(x=x_j+h=x_{j+1}\), we get

$$\begin{aligned} y({x_j+h})= & {} y(x_j) + \int _{x_j}^{x_j+h} f(x,y(x)) dx + \int _{x_j}^{x_j+h} \int _{x_0}^x K(x,t, y(t))dt dx, \nonumber \\ j= & {} 0,1,\ldots ,T-1. \end{aligned}$$
(13)

Applying the trapezium formula [47] to evaluate the integrals on the right of Eq. (13), we get

$$\begin{aligned} y({x_j+h})= & {} y(x_j) +\frac{h}{2} \left( f(x_j,y_j)+f(x_{j+1},y_{j+1})\right) \\&+\frac{h}{2} \left( \int _{x_0}^{x_j} K(x_j,t, y(t))dt +\int _{x_0}^{x_{j+1}} K(x_{j+1},t, y(t))dt \right) +O(h^3). \end{aligned}$$

Applying the trapezium formula again, to the integrals over \([x_0,x_j]\) and \([x_0,x_{j+1}]\), we get

$$\begin{aligned} y({x_j+h})= & {} y(x_j) + \frac{h}{2} f(x_j, y_j) + \frac{h^2}{4} \left( K(x_j, x_0, y_0) + K(x_j, x_j, y_j) + K(x_{j+1}, x_0, y_0)\right) \nonumber \\&+ \frac{h^2}{2}\left( \sum _{i=1}^{j-1} K(x_j, x_i, y_i) + \sum _{i=1}^j K(x_{j+1}, x_i, y_i)\right) + \frac{h}{2} f(x_{j+1}, y_{j+1}) \nonumber \\&+ \frac{h^2}{4} K(x_{j+1}, x_{j+1}, y_{j+1}) + O(h^3). \end{aligned}$$
(14)

If \(y_j\) is an approximation to \(y(x_j)\), then \(y_{j+1}\) is given by

$$\begin{aligned} y_{j+1}= & {} y_j + \frac{h}{2} f(x_j, y_j) + \frac{h^2}{4} \left( K(x_j, x_0, y_0) + K(x_j, x_j, y_j) + K(x_{j+1}, x_0, y_0)\right) \nonumber \\&+ \frac{h^2}{2}\left( \sum _{i=1}^{j-1} K(x_j, x_i, y_i) + \sum _{i=1}^j K(x_{j+1}, x_i, y_i)\right) + \frac{h}{2} f(x_{j+1}, y_{j+1}) \nonumber \\&+ \frac{h^2}{4} K(x_{j+1}, x_{j+1}, y_{j+1}). \end{aligned}$$
(15)

Equation (15) is of the form (9), where

$$\begin{aligned} u= & {} y_{j+1},\\ g= & {} y_j + \frac{h}{2} f(x_j, y_j) + \frac{h^2}{4} \left( K(x_j, x_0, y_0) + K(x_j, x_j, y_j) + K(x_{j+1}, x_0, y_0)\right) \\&+ \frac{h^2}{2}\left( \sum _{i=1}^{j-1} K(x_j, x_i, y_i) + \sum _{i=1}^j K(x_{j+1}, x_i, y_i)\right) \\ N(u)= & {} \frac{h}{2} f(x_{j+1}, u) + \frac{h^2}{4} K(x_{j+1}, x_{j+1}, u). \end{aligned}$$

Applying DJM to Eq. (15) (here \(L=0\)), we obtain the 3-term solution as

$$\begin{aligned} u= & {} u_0 + u_1 + u_2\\= & {} u_0 + N(u_0) + N(u_0+u_1) - N(u_0)\\= & {} u_0 + N(u_0+u_1)\\= & {} u_0 + N(u_0+N(u_0)). \end{aligned}$$

That is

$$\begin{aligned} y_{j+1}= & {} y_j + \frac{h}{2} f(x_j, y_j) + \frac{h^2}{4} \left( K(x_j, x_0, y_0) + K(x_j, x_j, y_j) + K(x_{j+1}, x_0, y_0)\right) \nonumber \\&+ \frac{h^2}{2}\left( \sum _{i=1}^{j-1} K(x_j, x_i, y_i) + \sum _{i=1}^j K(x_{j+1}, x_i, y_i)\right) \nonumber \\&+ N\left( y_j + \frac{h}{2} f(x_j, y_j) + \frac{h^2}{4} \left( K(x_j, x_0, y_0) + K(x_j, x_j, y_j) + K(x_{j+1}, x_0, y_0)\right) \right. \nonumber \\&\left. + \frac{h^2}{2}\left( \sum _{i=1}^{j-1} K(x_j, x_i, y_i) + \sum _{i=1}^j K(x_{j+1}, x_i, y_i)\right) \right) + N\left( y_j + \frac{h}{2} f(x_j, y_j)\right. \nonumber \\&\left. + \frac{h^2}{4} \left( K(x_j, x_0, y_0) + K(x_j, x_j, y_j) + K(x_{j+1}, x_0, y_0)\right) \right. \nonumber \\&\left. + \frac{h^2}{2}\left( \sum _{i=1}^{j-1} K(x_j, x_i, y_i) + \sum _{i=1}^j K(x_{j+1}, x_i, y_i)\right) \right) . \end{aligned}$$
(16)

If we set

$$\begin{aligned} M_1= & {} y_j + \frac{h}{2} f(x_j, y_j) + \frac{h^2}{4} \left( K(x_j, x_0, y_0) + K(x_j, x_j, y_j) + K(x_{j+1}, x_0, y_0)\right) \nonumber \\&+ \frac{h^2}{2}\left( \sum _{i=1}^{j-1} K(x_j, x_i, y_i) + \sum _{i=1}^j K(x_{j+1}, x_i, y_i)\right) \end{aligned}$$
(17)
$$\begin{aligned} \text {and}\, M_2= & {} M_1 + \frac{h}{2} f(x_{j+1}, M_1) + \frac{h^2}{4} K(x_{j+1}, x_{j+1}, M_1) \end{aligned}$$
(18)

then Eq. (16) becomes

$$\begin{aligned} y_{j+1}=M_1 + \frac{h}{2} f(x_{j+1}, M_2) + \frac{h^2}{4} K(x_{j+1}, x_{j+1}, M_2),\quad j=0,1,\ldots ,T-1. \end{aligned}$$
(19)
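For reference, the update (17)–(19) can be implemented directly. The sketch below is our own code (the function and variable names are ours); as a sanity check we apply it to the test problem \(y' = 1 + \int _0^x e^{-t}y^2(t)\,dt\), \(y(0)=1\), whose exact solution is \(y=e^x\).

```python
import math

def djm_vide_solve(f, K, x0, y0, X, h):
    """March Eq. (5) with the scheme (17)-(19): M1 collects the trapezium
    history terms, M2 is the DJM predictor, and the final step is (19)."""
    T = round((X - x0) / h)
    xs = [x0 + i * h for i in range(T + 1)]
    ys = [y0]
    for j in range(T):
        xj, xj1 = xs[j], xs[j + 1]
        # M1, Eq. (17): all terms known from the history up to x_j
        M1 = (ys[j] + h / 2 * f(xj, ys[j])
              + h * h / 4 * (K(xj, x0, y0) + K(xj, xj, ys[j]) + K(xj1, x0, y0))
              + h * h / 2 * (sum(K(xj, xs[i], ys[i]) for i in range(1, j))
                             + sum(K(xj1, xs[i], ys[i]) for i in range(1, j + 1))))
        # M2, Eq. (18): DJM predictor for the implicit terms at x_{j+1}
        M2 = M1 + h / 2 * f(xj1, M1) + h * h / 4 * K(xj1, xj1, M1)
        # Eq. (19): corrected value y_{j+1}
        ys.append(M1 + h / 2 * f(xj1, M2) + h * h / 4 * K(xj1, xj1, M2))
    return xs, ys

# Sanity check: y' = 1 + int_0^x e^{-t} y^2(t) dt, y(0) = 1, exact y = e^x
xs, ys = djm_vide_solve(lambda x, y: 1.0,
                        lambda x, t, y: math.exp(-t) * y * y,
                        0.0, 1.0, 1.0, 0.01)
err = max(abs(y - math.exp(x)) for x, y in zip(xs, ys))
```

Note that the scheme is explicit: the DJM predictor M2 replaces the solution of the implicit equation (15), so no nonlinear solve is needed per step.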

Analysis of Numerical Method

Error Analysis

Theorem 6

The numerical method (19) is of third order.

Proof

Suppose \(y_{j+1}\) is an approximation to \(y(x_{j+1})\). By using Eqs. (14) and (19), we obtain

$$\begin{aligned} \mid y(x_{j+1})-y_{j+1}\mid= & {} \mid \frac{h}{2} f(x_{j+1}, y_{j+1}) + \frac{h^2}{4} K(x_{j+1}, x_{j+1}, y_{j+1}) + O(h^3)\\&-\frac{h}{2} f(x_{j+1}, M_2)-\frac{h^2}{4} K(x_{j+1}, x_{j+1}, M_2)\mid \\\le & {} \frac{h}{2}\mid f(x_{j+1}, y_{j+1})- f(x_{j+1}, M_2)\mid + \frac{h^2}{4}\mid K(x_{j+1}, x_{j+1}, y_{j+1})\\&- K(x_{j+1}, x_{j+1}, M_2)\mid + O(h^3)\\\le & {} \left( \frac{h}{2}L_1 + \frac{h^2}{4}L_2\right) \mid y_{j+1}-M_2\mid + O(h^3). \end{aligned}$$

Using Eqs. (18) and (19), we get

$$\begin{aligned} \mid y(x_{j+1})-y_{j+1}\mid\le & {} \left( \frac{h}{2}L_1 + \frac{h^2}{4}L_2\right) \Big | M_1 + \frac{h}{2} f(x_{j+1}, M_2) + \frac{h^2}{4} K(x_{j+1}, x_{j+1}, M_2)\\&- M_1 - \frac{h}{2} f(x_{j+1}, M_1)- \frac{h^2}{4} K(x_{j+1}, x_{j+1}, M_1)\Big |+ O(h^3)\\\le & {} \left( \frac{h}{2}L_1 + \frac{h^2}{4}L_2\right) \left( \left| \frac{h}{2} f(x_{j+1}, M_2)-\frac{h}{2} f(x_{j+1}, M_1)\right| \right. \\&\left. + \Big | \frac{h^2}{4} K(x_{j+1}, x_{j+1}, M_2)-\frac{h^2}{4} K(x_{j+1}, x_{j+1}, M_1)\Big |\right) + O(h^3). \end{aligned}$$

Using Eqs. (17) and (18), we get

$$\begin{aligned} \mid y(x_{j+1})-y_{j+1}\mid\le & {} \left( \frac{h}{2}L_1 + \frac{h^2}{4}L_2\right) ^2\Big | \frac{h}{2} f(x_{j+1}, M_1) + \frac{h^2}{4} K(x_{j+1}, x_{j+1}, M_1)\Big |+ O(h^3)\\\le & {} h^3 \left( \left( \frac{1}{2}L_1 + \frac{h}{4}L_2\right) ^2\Big | \frac{1}{2} f(x_{j+1}, M_1) +\frac{h}{4} K(x_{j+1}, x_{j+1}, M_1)\Big | \right) \\&+ O(h^3). \end{aligned}$$

\(\Rightarrow \) The numerical method (19) is of third order. \(\square \)

Corollary 1

The numerical method (19) is convergent.

Proof

By Theorem 6 and Definition 1, the numerical method (19) is convergent. \(\square \)

Stability Analysis of Numerical Method

Consider the test equation

$$\begin{aligned} y' = \alpha y + \beta \int _{x_0}^x y(t)dt. \end{aligned}$$
(20)

Applying the numerical method (19) to this equation, we get

$$\begin{aligned} y_{j+1}= M_1 + \frac{h\alpha }{2}M_2 + \frac{h^2\beta }{4}M_2, \end{aligned}$$
(21)

where

$$\begin{aligned} M_1= & {} \frac{h^2\beta }{2}y_0 + h^2\beta \sum _{i=1}^{j-1} y_i +\left( 1+ \frac{h\alpha }{2} + \frac{3h^2\beta }{4}\right) y_j \end{aligned}$$
(22)
$$\begin{aligned} \text {and} \,M_2= & {} \left( 1+ \frac{h\alpha }{2} + \frac{h^2\beta }{4}\right) M_1. \end{aligned}$$
(23)

If we set \(u=h\alpha \) and \(v=h^2\beta \) then Eq. (21) can be written as

$$\begin{aligned} y_{j+1}= & {} \left( \frac{v}{2}+\frac{u v}{4}+\frac{u^2 v}{8}+\frac{v^2}{8}+\frac{u v^2}{8}+\frac{v^3}{32}\right) y_0\nonumber \\&+ \left( v+\frac{u v}{2}+\frac{u^2 v}{4}+\frac{v^2}{4}+\frac{u v^2}{4}+\frac{v^3}{16}\right) \sum _{i=1}^{j-1} y_i \nonumber \\&+ \left( 1+u+\frac{u^2}{2}+\frac{u^3}{8}+v+\frac{3 u v}{4}+\frac{5 u^2 v}{16}+\frac{v^2}{4}+\frac{7 u v^2}{32}+\frac{3 v^3}{64}\right) y_j. \end{aligned}$$
(24)

Simplifying Eq. (24), we get

$$\begin{aligned} y_j = b_1 y_{j-1} + b_2 y_{j-2}, \end{aligned}$$
(25)

where

$$\begin{aligned} b_1= & {} 2+u+\frac{u^2}{2}+\frac{u^3}{8}+v+\frac{3 u v}{4}+\frac{5 u^2 v}{16}+\frac{v^2}{4}+\frac{7 u v^2}{32}+\frac{3 v^3}{64},\end{aligned}$$
(26)
$$\begin{aligned} \text {and}\quad b_2= & {} -1-u-\frac{u^2}{2}-\frac{u^3}{8}-\frac{u v}{4}-\frac{u^2 v}{16}+\frac{u v^2}{32}+\frac{v^3}{64}. \end{aligned}$$
(27)

The characteristic equation [48] of the difference Eq. (25) is

$$\begin{aligned} r^2 - b_1 r -b_2 = 0. \end{aligned}$$
(28)

The characteristic roots are given by

$$\begin{aligned} r_1 =\frac{b_1 + \sqrt{b_1^2+4 b_2}}{2}, \quad r_2 = \frac{b_1 - \sqrt{b_1^2+4 b_2}}{2}. \end{aligned}$$
(29)

The zero solution of system (25) is asymptotically stable if \(\mid r_1 \mid < 1 \) and \(\mid r_2 \mid < 1 \). Thus the stability region of numerical method (19) is given by \(\mid r_1 \mid < 1 \) and \(\mid r_2 \mid < 1 \) as shown in Fig. 1.
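The stability region can be examined numerically by scanning the \((u,v)\)-plane with the following sketch (our own code), which evaluates \(b_1\) and \(b_2\) from Eqs. (26)–(27) and the characteristic roots (29):

```python
import cmath

def char_roots(u, v):
    """Roots (29) of r^2 - b1 r - b2 = 0, with b1, b2 from Eqs. (26)-(27)."""
    b1 = (2 + u + u**2 / 2 + u**3 / 8 + v + 3 * u * v / 4
          + 5 * u**2 * v / 16 + v**2 / 4 + 7 * u * v**2 / 32 + 3 * v**3 / 64)
    b2 = (-1 - u - u**2 / 2 - u**3 / 8 - u * v / 4 - u**2 * v / 16
          + u * v**2 / 32 + v**3 / 64)
    d = cmath.sqrt(b1**2 + 4 * b2)   # complex sqrt handles oscillatory cases
    return (b1 + d) / 2, (b1 - d) / 2

def is_stable(u, v):
    """Point (u, v) = (h*alpha, h^2*beta) lies in the stability region
    iff both characteristic roots have magnitude below one."""
    r1, r2 = char_roots(u, v)
    return max(abs(r1), abs(r2)) < 1
```

For example, the point \(u=-0.1,\ v=-0.01\) (i.e. \(\alpha =\beta =-1\) with \(h=0.1\)) lies inside the stability region, while \(u=0.1,\ v=0.01\) does not.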

Fig. 1

Stability region for numerical method (19)

Bifurcations Analysis

We discuss bifurcation analysis for the following test equation given in [49]

$$\begin{aligned} y' = -\int _{x_0}^x e^{-\alpha (x-t)} y(t)dt. \end{aligned}$$
(30)

As discussed in [49], Eq. (30), with its convolution kernel and exponentially fading memory, is practically important. The qualitative behavior of Eq. (30) is described in [49] and summarized in Table 1.

Now we apply the numerical scheme (19) to Eq. (30) and discuss the bifurcations. The resulting difference equation is

$$\begin{aligned} y_{j+1} = -a_0 e^{-\alpha h j}y_0 - a_1 \sum _{i=1}^{j-1} e^{-\alpha h (j-i)}y_i + a_2 y_j, \end{aligned}$$
(31)

where

$$\begin{aligned} a_0= & {} \frac{h^2}{4} \left( 1-\frac{h^2}{4} + \frac{h^4}{16}\right) \left( 1+e^{-\alpha h}\right) ,\\ a_1= & {} \frac{h^2}{2} \left( 1-\frac{h^2}{4} + \frac{h^4}{16}\right) \left( 1+e^{-\alpha h}\right) \\ \text {and}\quad a_2= & {} \left( 1-\frac{h^2}{4} + \frac{h^4}{16}\right) \left( 1-\frac{h^2}{2} + \frac{h^2}{4} e^{-\alpha h}\right) . \end{aligned}$$

Equation (31) can be written as

$$\begin{aligned} y_{j+2} - b_1 y_{j+1} + b_2 y_j = 0, \end{aligned}$$
(32)

where

$$\begin{aligned} b_1= & {} e^{-\alpha h}+a_2\\ \text {and}\quad b_2= & {} (a_1 + a_2)e^{-\alpha h}. \end{aligned}$$

The characteristic equation of the difference Eq. (32) is

$$\begin{aligned} r^2-b_1 r + b_2 = 0. \end{aligned}$$
(33)

The roots of this equation are

$$\begin{aligned} r = \frac{b_1 \pm \sqrt{b_1^2 - 4b_2}}{2}. \end{aligned}$$
(34)

Theorem 7

[48] All solutions of the difference equation

$$\begin{aligned} y_{j+2} + p_1 y_{j+1} + p_2 y_j = 0, \end{aligned}$$
(35)

converge to zero (i.e., the zero solution is asymptotically stable) if and only if \(\max \{|r_1|, |r_2|\} < 1\), where \(r_1,r_2\) are the roots of the characteristic equation \(r^2+p_1r+p_2=0\) of Eq. (35).

Using Theorem 7, we obtain the bifurcation values for (32) as below:

$$\begin{aligned} \alpha _0= & {} \frac{1}{h}\text {Log}\left( 1+\frac{h^6}{64}\right) ,\\ \alpha _1= & {} \frac{1}{h}\text {Log}\left( \frac{2 \left( 128+160 h^2-32 h^4+8 h^6-h^8+8 h \sqrt{1024+8 h^6-2 h^8}\right) }{\left( -4+h^2\right) ^2 \left( 16-4h^2+h^4\right) }\right) \quad \text {and}\\ \alpha _2= & {} \frac{1}{h}\text {Log}\left( -\frac{2 \left( -128-160 h^2+32 h^4-8 h^6+h^8+8 h \sqrt{1024+8 h^6-2 h^8}\right) }{\left( -4+h^2\right) ^2\left( 16-4 h^2+h^4\right) }\right) . \end{aligned}$$

Using Theorem 7, we have the following observations:

Case 1: The characteristic roots given by Eq. (34) are real and distinct,

$$\begin{aligned} r_1= & {} \frac{1}{2} \left( 1+e^{-h \alpha }-\frac{h^2}{2}-\frac{1}{2} e^{-h \alpha } h^2+\frac{h^4}{8}+\frac{1}{8} e^{-h \alpha } h^4-\frac{h^6}{64}-\frac{1}{32}e^{-h \alpha } h^6 \right. \\&\left. +\sqrt{-4 e^{-h \alpha } \left( 1+\frac{h^6}{64}\right) +\left( 1+e^{-h \alpha }-\frac{h^2}{2}-\frac{1}{2} e^{-h \alpha } h^2+\frac{h^4}{8}+\frac{1}{8}e^{-h \alpha } h^4-\frac{h^6}{64}-\frac{1}{32} e^{-h \alpha } h^6\right) ^2}\right) \\ r_2= & {} \frac{1}{2} \left( 1+e^{-h \alpha }-\frac{h^2}{2}-\frac{1}{2} e^{-h \alpha } h^2+\frac{h^4}{8}+\frac{1}{8} e^{-h \alpha } h^4-\frac{h^6}{64}-\frac{1}{32}e^{-h \alpha } h^6 \right. \\&\left. -\sqrt{-4 e^{-h \alpha } \left( 1+\frac{h^6}{64}\right) +\left( 1+e^{-h \alpha }-\frac{h^2}{2}-\frac{1}{2} e^{-h \alpha } h^2+\frac{h^4}{8}+\frac{1}{8}e^{-h \alpha } h^4-\frac{h^6}{64}-\frac{1}{32} e^{-h \alpha } h^6\right) ^2}\right) , \end{aligned}$$

when

Case 1.1: \(\alpha > \alpha _1\).

In this case the characteristic roots are of magnitude less than unity (cf. Region I in Fig. 2). Therefore, all the solutions of difference Eq. (32) converge to zero without oscillations.

Case 1.2: \(\alpha <\alpha _2\).

In this case the characteristic roots are of magnitude greater than unity. This is shown in Fig. 2 as Region IV. In this region, all the solutions will tend to infinity without oscillations.

Case 2: The characteristic roots are real and equal,

$$\begin{aligned} r=r_1=r_2=\frac{1}{2} \left( 1+e^{-h \alpha }-\frac{h^2}{2}-\frac{1}{2} e^{-h \alpha } h^2+\frac{h^4}{8}+\frac{1}{8} e^{-h \alpha } h^4-\frac{h^6}{64}-\frac{1}{32} e^{-h \alpha } h^6\right) , \end{aligned}$$

when

Case 2.1: \(\alpha = \alpha _1\).

In this case the characteristic root \(r\in (0,1)\). Hence the solutions of Eq. (32) tend to zero without oscillations as in case 1.1. This case is shown in Fig. 2 as boundary line of Region I and Region II.

Case 2.2: \(\alpha = \alpha _2\).

In this case the characteristic root \(r>1\) and the solutions are unbounded (cf. boundary line of Region III and Region IV in Fig. 2).

Case 3: The characteristic roots are complex,

$$\begin{aligned} r_1= & {} \frac{1}{2} \left( 1+e^{-h \alpha }-\frac{h^2}{2}-\frac{1}{2} e^{-h \alpha } h^2+\frac{h^4}{8}+\frac{1}{8} e^{-h \alpha } h^4-\frac{h^6}{64}-\frac{1}{32} e^{-h \alpha } h^6\right. \\&\left. + i \sqrt{4 e^{-h \alpha } \left( 1+\frac{h^6}{64}\right) -\left( 1+e^{-h \alpha }-\frac{h^2}{2}-\frac{1}{2} e^{-h \alpha } h^2+\frac{h^4}{8}+\frac{1}{8} e^{-h \alpha } h^4-\frac{h^6}{64}-\frac{1}{32} e^{-h \alpha } h^6\right) ^2}\right) \\ r_2= & {} \frac{1}{2} \left( 1+e^{-h \alpha }-\frac{h^2}{2}-\frac{1}{2} e^{-h \alpha } h^2+\frac{h^4}{8}+\frac{1}{8} e^{-h \alpha } h^4-\frac{h^6}{64}-\frac{1}{32} e^{-h \alpha } h^6\right. \\&\left. - i \sqrt{4 e^{-h \alpha } \left( 1+\frac{h^6}{64}\right) -\left( 1+e^{-h \alpha }-\frac{h^2}{2}-\frac{1}{2} e^{-h \alpha } h^2+\frac{h^4}{8}+\frac{1}{8} e^{-h \alpha } h^4-\frac{h^6}{64}-\frac{1}{32} e^{-h \alpha } h^6\right) ^2}\right) , \end{aligned}$$

when \(\alpha _2< \alpha <\alpha _1\).

Note that \(\mid r_1\mid = \mid r_2\mid \).

Case 3.1: If \(\alpha _0< \alpha <\alpha _1\) then \(\mid r_1\mid <1\). The solutions of Eq. (32) are oscillatory and converge to zero (Region II in Fig. 2).

Case 3.2: If \(\alpha =\alpha _0\) then \(\mid r_1\mid =1\) and bifurcation occurs. This is shown by the boundary line of Region II and Region III in Fig. 2.

Case 3.3: If \(\alpha _2< \alpha < \alpha _0\) then \(\mid r_1\mid > 1\). This implies that the solutions of Eq. (32) are oscillatory and diverge to infinity (Region III in Fig. 2).

Fig. 2
figure 2

Bifurcation diagram for Eq. (32)

Note: It is clear from Table 1 and Fig. 2 that the bifurcation values of the numerical scheme (32) coincide with those of the IDE (30) in the limit \(h\rightarrow 0\).

Table 1 Bifurcation in IDE (30)

As \(h\rightarrow 0\), the numerical solution (31) matches the exact solution of (30), because the method is convergent. Further, \(\lim _{h\rightarrow 0} \alpha _1=2\) and \(\lim _{h\rightarrow 0} \alpha _2=-2\). Therefore, the stability properties of the solution of (30) can be described as in Table 1.
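The quoted limits can be checked numerically. The sketch below (our own code) evaluates the closed-form bifurcation values \(\alpha _0, \alpha _1, \alpha _2\) given above at a small step-size:

```python
import math

def bifurcation_values(h):
    """Evaluate alpha_0, alpha_1, alpha_2 from their closed forms."""
    s = math.sqrt(1024 + 8 * h**6 - 2 * h**8)
    den = (h**2 - 4)**2 * (16 - 4 * h**2 + h**4)
    a0 = math.log(1 + h**6 / 64) / h
    a1 = math.log(2 * (128 + 160 * h**2 - 32 * h**4 + 8 * h**6
                       - h**8 + 8 * h * s) / den) / h
    a2 = math.log(-2 * (-128 - 160 * h**2 + 32 * h**4 - 8 * h**6
                        + h**8 + 8 * h * s) / den) / h
    return a0, a1, a2

# For small h, alpha_0 -> 0, alpha_1 -> 2 and alpha_2 -> -2
a0, a1, a2 = bifurcation_values(1e-3)
```

This confirms that the numerically induced bifurcation thresholds approach those of the continuous problem as the step-size shrinks.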

Illustrative Examples

Example 1

[10, 12] Consider the Volterra integro-differential equation

$$\begin{aligned} y'(x) = 1 + 2x - y(x) + \int _{x_0}^x x(1+2x)e^{t(x-t)}y(t)dt,\quad y(0) = 1. \end{aligned}$$
(36)

The exact solution of Eq. (36) is \(y(x) = e^{x^2}\). We compare the errors at different nodes and the Root Mean Square Error (RMSE) obtained using the numerical methods given in [10, 12], viz. the third order Runge–Kutta, predictor-corrector, multi-step and Day methods, with our method for \(h=0.1\) and \(h=0.025\) in Tables 2 and 3, respectively. It is observed that the error in our method is considerably smaller than that of the other numerical methods.

Note that the error values in Tables 2 and 3 for all other methods are taken from [10, 12].

Table 2 Comparison of numerical results of example (1) for \(h=0.1\)
Table 3 Comparison of numerical results of example (1) for \(h=0.025\)
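As an illustration (the code below is our own, not the package provided in the Appendix), applying the scheme (17)–(19) directly to Eq. (36) with \(h=0.1\) reproduces the exact solution \(y=e^{x^2}\) on [0, 1] to within a small error:

```python
import math

# Eq. (36): y' = 1 + 2x - y + int_0^x x(1+2x) e^{t(x-t)} y(t) dt, y(0) = 1
f = lambda x, y: 1 + 2 * x - y
K = lambda x, t, y: x * (1 + 2 * x) * math.exp(t * (x - t)) * y

h, T = 0.1, 10
xs = [i * h for i in range(T + 1)]
ys = [1.0]
for j in range(T):
    xj, xj1 = xs[j], xs[j + 1]
    # M1 (Eq. 17): trapezium sums over the stored history
    M1 = (ys[j] + h / 2 * f(xj, ys[j])
          + h * h / 4 * (K(xj, xs[0], ys[0]) + K(xj, xj, ys[j])
                         + K(xj1, xs[0], ys[0]))
          + h * h / 2 * (sum(K(xj, xs[i], ys[i]) for i in range(1, j))
                         + sum(K(xj1, xs[i], ys[i]) for i in range(1, j + 1))))
    # M2 (Eq. 18) and the corrected update (Eq. 19)
    M2 = M1 + h / 2 * f(xj1, M1) + h * h / 4 * K(xj1, xj1, M1)
    ys.append(M1 + h / 2 * f(xj1, M2) + h * h / 4 * K(xj1, xj1, M2))

max_err = max(abs(y - math.exp(x * x)) for x, y in zip(xs, ys))
```

The history sums make the cost of step j grow linearly in j, so the whole march over T steps costs \(O(T^2)\) kernel evaluations, typical of quadrature-based VIDE solvers.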

Example 2

[13] Consider the nonlinear Volterra integro-differential equation

$$\begin{aligned} y'(x) = 2x -\frac{1}{2} \sin (x^4) + \int _{0}^x x^2 t \cos (x^2y(t))dt,\quad y(0) = 0. \end{aligned}$$
(37)

The exact solution of Eq. (37) is \(y(x) = x^2\). We compare the maximum absolute errors obtained using the meshless method given in [13] and the CPU times used for different values of n. It is observed that the error in our method is considerably smaller than that of the linear meshless method. Further, our method is more time efficient than both the linear and quadratic meshless methods on the interval [0, 1], as shown in Table 4. In Table 5, we show that the proposed method is more time efficient than the Adams–Bashforth–Moulton method of order 3 (ABM3), the two-point multistep block method of order 3 (2PMBM) and the diagonally implicit multistep block method (DIMBM) discussed in [16,17,18] on the interval [0, 2].

Table 4 Maximum absolute errors and CPU times in second used for different values of n
Table 5 Maximum absolute errors and CPU times used for different values of h

Example 3

Consider the VIDE [13]

$$\begin{aligned} y'(x) = 1 -\frac{x}{2}+ \frac{x e^{-x^2}}{2} + \int _{0}^x x t e^{-y^2(t)} dt,\quad y(0) = 0. \end{aligned}$$
(38)

The exact solution of Eq. (38) is \(y(x) = x\). We compare the maximum absolute errors obtained using the meshless method given in [13] and the CPU times used for different values of n on the interval [0, 1]. It is observed that our method is more accurate than the linear meshless method and more time efficient than both the linear and quadratic meshless methods (Table 6).

Table 6 Maximum absolute errors and CPU times in second used for different values of n

We compare our solution with the exact solution for \(h=0.01\) in Fig. 3. It is observed that our solution coincides with the exact solution.

Fig. 3

Comparison of solutions of Eq. (38) for \(h=0.01\)

Example 4

Consider the VIDE

$$\begin{aligned} y'(x) = 1 + \int _{0}^x e^{-t} y^2(t) dt,\quad y(0) = 1. \end{aligned}$$
(39)

The exact solution of Eq. (39) is \(y(x) = e^x\). We compare our solution with the exact solution for \(h=0.01\) in Fig. 4. It is observed that our solution is in good agreement with the exact solution.

Fig. 4

Comparison of solutions of Eq. (39) for \(h=0.01\)

Conclusions

Volterra integro-differential equations (VIDE) have a wide range of applications due to their ability to model system memory. Solving these equations is not an easy task because of the involvement of both the derivative and the integral of an unknown function. In this work, we presented a numerical method combining the trapezium rule with DJM for solving these equations. The DJM is also used to derive the existence and uniqueness of solutions. The VIDE is converted to an equation involving a double integral, and the trapezium rule is used to approximate its value. The DJM is then used to evaluate the unknown term on the right side of the difference formula obtained from the trapezium rule. This combination proves very efficient: the error in this third order method is small, and it is time efficient too. We discussed the stability and bifurcation analysis of the proposed method by applying it to test equations. Further, we compared our method with a variety of classical as well as recent numerical methods and demonstrated the competence of the present method.