1 Introduction

Fractional differential equations (FDEs) and fractional integro-differential equations (FIDEs) have gained popularity because of their important applications in science and engineering (Sun et al. 2010; Yuzbasi 2013; Rossikhin and Shitikova 2010; Magin 2008). Although fractional calculus was introduced three centuries ago, it has started to flourish only recently. It is important to mention that fractional derivatives are nonlocal: they are defined globally, by an integral over the whole domain, whereas integer-order derivatives are defined locally, on an epsilon neighborhood of a chosen point. For this reason, fractional calculus has been shown to describe many phenomena in mechanics, physics, biology, chemistry, and other sciences more accurately than integer-order calculus. Moreover, many mathematical formulations of physical phenomena lead to fractional integro-differential equations (FIDEs), such as heat conduction in materials with memory, fluid dynamics, convection and radiation problems, and chemical kinetics (Larsson et al. 2014; Magin 2008; Podlubny 1999; Rossikhin and Shitikova 2010). Since FIDEs have gained much interest in different research areas and engineering applications, accurate methods for solving these equations are needed. In recent years, several numerical and analytical methods have been developed to obtain the solution of linear and nonlinear FIDEs. For example, Ma and Huang (2014) proposed and analyzed a spectral Jacobi-collocation method for the numerical solution of linear fractional integro-differential equations. Parand and Nikarya (2014) introduced a new numerical algorithm, a collocation method based on Bessel functions of the first kind, to solve fractional differential and integro-differential equations; they reduced the solution of a nonlinear fractional problem to the solution of a system of nonlinear algebraic equations.
Tang and Xu (2015) applied pseudospectral integration matrices for solving fractional differential, integral, and integro-differential equations. Furthermore, Mingxu et al. (2015) obtained the numerical solution of fractional integro-differential equations with weakly singular kernels by the Legendre wavelets method. Eslahchi et al. (2014) studied a collocation method for solving nonlinear FIDEs and investigated its convergence and stability. Mashayekhi and Razzaghi (2015) presented a new numerical method for solving nonlinear FIDEs based on hybrid-function approximation, combining block-pulse functions and Bernoulli polynomials. Dehghan and Shahini (2015) developed a pseudospectral approach based on rational Legendre and rational Chebyshev functions to solve the nonlinear integro-differential Volterra population model.

In this paper, we provide a convergent technique for the solution of FIDEs of the form:

$$\begin{aligned} D^{\alpha }y(x)=g(x)y(x)+f(x)+\int _{0}^{x} k(x,t)y(t)\mathrm{d}t, \ x\in [0,1], \end{aligned}$$
(1)

with the initial conditions

$$\begin{aligned} y^{(i)}(0)=c_i, \quad i=0,1,\ldots ,n-1, \ \ n-1<\alpha \le n, \end{aligned}$$
(2)

where f and g are given continuous functions, the kernel \(k(x,t)\) is a given smooth function, y(x) is the unknown function, and \(D^{\alpha }\) is the Caputo fractional derivative of order \(\alpha\), defined as (Oldham and Spanier 1974):

$$\begin{aligned} D^{\alpha }y(x)=I^{n-\alpha }D^n{y(x)}=\frac{1}{\Gamma (n-\alpha )}{\int _{0}^{x}{(x-\tau )^{n-\alpha -1}D^n{y(\tau )}\mathrm{d}\tau }}, \end{aligned}$$

and \(I^{\alpha }\) denotes the Riemann–Liouville fractional integral operator of order \(\alpha\), defined by Kilbas et al. (2006)

$$\begin{aligned} I^{\alpha }y(x)=\left\{ \begin{array}{ll} \frac{1}{\Gamma (\alpha )}\int _{0}^{x}(x-\tau )^{\alpha -1}y(\tau )\mathrm{d}\tau , \quad \alpha >0,\\ y(x), \quad \alpha =0. \end{array} \right. \end{aligned}$$

For the Riemann–Liouville fractional integral and Caputo fractional derivative, we have the following properties:

$$\begin{aligned} I^{\alpha }(x-a)^{\beta }& = \frac{\Gamma (\beta +1)}{\Gamma (\beta +\alpha +1)}(x-a)^{\beta +\alpha }, \\ I^{\alpha }I^{\beta }f(x)& = I^{\beta }I^{\alpha }f(x)=I^{\alpha +\beta }f(x),\\ I^{\alpha }D^{\alpha }f(x)& = f(x)-\sum _{k=0}^{n-1}{f^{(k)}(0)\frac{x^k}{k!}}. \end{aligned}$$
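These properties can be checked numerically. The sketch below (pure Python, illustrative only, not part of the paper's Maple computations) approximates \(I^{\alpha }\) by a midpoint rule after the substitution \(s=(x-\tau )^{\alpha }\), which removes the weak singularity of the kernel, and verifies the first property for \(a=0\), \(\alpha =\frac{1}{2}\), \(\beta =2\):

```python
import math

def frac_integral(f, x, alpha, n=4000):
    """Riemann-Liouville fractional integral I^alpha f(x).

    The weak singularity (x - tau)^(alpha - 1) is removed by the
    substitution s = (x - tau)^alpha, after which a plain midpoint
    rule is accurate enough for a sanity check.
    """
    upper = x ** alpha
    h = upper / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        total += f(x - s ** (1.0 / alpha))
    return h * total / math.gamma(alpha + 1.0)

# Power rule: I^alpha x^beta = Gamma(beta+1)/Gamma(beta+alpha+1) x^(beta+alpha)
alpha, beta, x = 0.5, 2.0, 1.0
numeric = frac_integral(lambda t: t ** beta, x, alpha)
exact = math.gamma(beta + 1) / math.gamma(beta + alpha + 1) * x ** (beta + alpha)
assert abs(numeric - exact) < 1e-5
```

The same helper can be applied to any continuous f; accuracy is limited only by the midpoint rule.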

In 2015, we introduced and successfully applied a new scheme, based on a hybrid of the variational iteration method and the Laplace transform, for solving multi-order fractional differential equations and fractional partial differential equations; it was named the reconstruction of variational iteration method (RVIM) (Hesameddini and Rahimi 2015; Hesameddini et al. 2016). In this work, we present the RVIM for solving FIDEs (1) subject to (2). Moreover, the convergence and stability of the proposed method are established by several theorems. Our aim is to provide an accurate scheme that is easy to implement numerically, requires no special assumptions, and is efficient and reliable. It is to be noted that, in comparison with some other numerical methods, the proposed method is more accurate and needs less computational cost. This paper is organized as follows. In Sect. 2, the RVIM is developed to solve FIDEs. The stability and convergence analysis in the infinity norm is described in Sect. 3. In Sect. 4, we apply the proposed method to several numerical experiments to demonstrate its accuracy and efficiency. In Sect. 5, we consider the fractional Volterra population growth model as an application. Finally, conclusions are summarized in Sect. 6.

2 The Basic Concept of the Reconstruction of Variational Iteration Method (RVIM)

To present the basic idea of our technique, we derive the reconstruction of variational iteration method (RVIM) and describe its implementation for problem (1), (2). Unlike the variational iteration method (VIM), in which a Lagrange multiplier must be identified optimally via variational theory, the RVIM yields an iterative relation without any knowledge of variational theory and without restrictive assumptions.

To describe the RVIM, we first take the Laplace transform of both sides of (1); using zero artificial initial conditions, the following relation results

$$\begin{aligned} s^{\alpha }\ell \{y(x)\}=\ell \left\{g(x)y(x)+f(x)+\int _{0}^{x}{k(x,t)y(t)\mathrm{d}t} \right\}. \end{aligned}$$
(3)

For convenience of notation, let \(N(y(x))=g(x)y(x)+f(x)+\int _{0}^{x}{k(x,t)y(t)\mathrm{d}t}\). Then,

$$\begin{aligned} \ell \{y(x)\}=\frac{1}{s^{\alpha }}\ell \{N(y(x))\}. \end{aligned}$$
(4)

Then, by applying the inverse Laplace transform to both sides of (4) and using the convolution theorem, we have

$$\begin{aligned} y(x)=\frac{1}{\Gamma (\alpha )}\int _{0}^{x}{(x-\tau )^{\alpha -1}N(y(\tau ))\mathrm{d}\tau }. \end{aligned}$$
(5)

Therefore, the following iteration formula is obtained:

$$\begin{aligned} y_{k+1}(x)=y_{0}(x)+\frac{1}{\Gamma (\alpha )}\int _{0}^{x}{(x-\tau )^{\alpha -1}N(y_{k}(\tau ))\mathrm{d}\tau }, \end{aligned}$$
(6)

where \(y_{k}(x)\) is the k-th approximate solution of y(x), i.e. \(y(x)\approx y_{k}(x)\). Since the actual conditions (2) must be imposed through \(y_{0}(x)\), the initial iterate is taken as the Taylor polynomial generated by (2), namely \(y_0(x)=\sum _{i=0}^{n-1}{c_i\frac{x^i}{i!}}\). Noting the definition of the Riemann–Liouville fractional integral operator, relation (6) can be written as:

$$\begin{aligned} y_{k+1}(x)=y_{0}(x)+I^{\alpha }N(y_{k}(x)). \end{aligned}$$
(7)

Finally, the exact solution y(x) is obtained by \(y(x)=\lim _{k\rightarrow \infty }{y_{k}{(x)}}.\) In the next section, we prove the stability and convergence properties of this method for solving (1).
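The iteration (7) can be carried out mechanically whenever the iterates are fractional power series, because \(I^{\alpha }\) acts termwise through the power rule for fractional integrals. The following pure-Python sketch (a toy illustration, not one of the examples treated later in this paper) runs (7) on \(D^{1/2}y=y\), \(y(0)=1\), i.e. \(N(y)=y\) with \(y_0=1\), and recovers the partial sums of the Mittag–Leffler series \(E_{1/2}(\sqrt{x})\):

```python
import math
from fractions import Fraction

def frac_int(series, alpha):
    """Apply I^alpha termwise: x^p -> Gamma(p+1)/Gamma(p+alpha+1) x^(p+alpha).

    A series is a dict mapping a (rational) power of x to its coefficient.
    """
    return {p + alpha: c * math.gamma(float(p) + 1) / math.gamma(float(p + alpha) + 1)
            for p, c in series.items()}

def rvim_step(yk, y0, alpha, N):
    """One RVIM iteration y_{k+1} = y_0 + I^alpha N(y_k), i.e. relation (7)."""
    out = dict(y0)
    for p, c in frac_int(N(yk), alpha).items():
        out[p] = out.get(p, 0.0) + c
    return out

# Toy problem: D^{1/2} y = y, y(0) = 1, exact solution E_{1/2}(sqrt(x)).
alpha = Fraction(1, 2)
y = y0 = {Fraction(0): 1.0}
for _ in range(4):
    y = rvim_step(y, y0, alpha, lambda s: s)

# After k iterations the first k+1 Mittag-Leffler coefficients are in place.
for j in range(5):
    p = Fraction(j, 2)
    assert abs(y[p] - 1.0 / math.gamma(j / 2 + 1)) < 1e-12
```

Exact rational exponents (`Fraction`) keep the termwise bookkeeping free of floating-point key collisions.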

3 Stability and Convergence of the RVIM

In this section, we will show the stability and convergence properties of the reconstruction of variational iteration method for the following Volterra fractional integro-differential equation:

$$\begin{aligned} \left\{ \begin{array}{ll} D^{\alpha }y(x)=g(x)y(x)+f(x)+\int _{0}^{x}{k(x,t)y(t)\mathrm{d}t}, \quad x\in [0,1], \\ y^{(i)}(0)=c_i, \quad i=0,1,\ldots ,n-1, \quad n-1<\alpha \le n. \end{array} \right. \end{aligned}$$
(8)

3.1 The Stability of Fractional Volterra Integro-Differential Equations

To show the stability of (8), it is sufficient to prove that a small change in the initial conditions causes only a small change in the obtained solution. Hence, we prove the following theorem.

Theorem 3.1

Let \(y(x)\) and \(\tilde{y}(x)\) be the solutions of the following problems:

$$\begin{aligned} \left\{ \begin{array}{ll} D^{\alpha }y(x)=g(x)y(x)+f(x)+\int _{0}^{x}{k(x,t)y(t)dt}, \ x\in [0,1], \\ y^{(i)}(0)=c_i,\ \ \ \ \ i=0,1,\ldots ,n-1, \ \ n-1<\alpha \le n, \end{array} \right. \end{aligned}$$
(9)

and

$$\begin{aligned} \left\{ \begin{array}{ll} D^{\alpha }\tilde{y}(x)=g(x)\tilde{y}(x)+f(x)+\int _{0}^{x}{k(x,t)\tilde{y}(t)dt}, \ x\in [0,1], \\ \\ \tilde{y}^{(i)}(0)=b_i,\ \ \ \ \ i=0,1,\ldots ,n-1, \ \ n-1<\alpha \le n, \end{array} \right. \end{aligned}$$
(10)

where \(|b_i-c_i|\le \varepsilon _i,\ i=0,1,\ldots ,n-1\), and suppose that there exist constants M and N such that

$$\begin{aligned} |k(x,t)|\le M,\ \ |g(x)|\le N, \quad \forall x \in [0,1], \ \forall t \in [0,1]. \end{aligned}$$
(11)

Then, we have

$$\begin{aligned} |y(x)-\tilde{y}(x)|\le \sum _{i=0}^{n-1} {\varepsilon _{i}x^{i}E_{\alpha ,i+1}\left(Nx^\alpha +M\frac{x^{\alpha +1}}{\alpha +1}\right)}, \end{aligned}$$
(12)

where \(E_{\alpha ,i+1}\left(Nx^\alpha +M\frac{x^{\alpha +1}}{\alpha +1}\right)\) is the two-parameter Mittag–Leffler function, which for any argument \(z\in \mathbb{C}\) and two parameters \(\alpha ,\beta \in \mathbb{C}\) with \(\mathrm{Re}(\alpha )>0\) is defined as:

$$\begin{aligned} E_{\alpha ,\beta }(z)=\sum _{k=0}^{\infty }{\frac{z^k}{\Gamma (\alpha k+\beta )}}. \end{aligned}$$
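For reference, \(E_{\alpha ,\beta }\) can be evaluated by truncating the defining series; a minimal sketch (adequate for moderate arguments, since \(\Gamma (\alpha k+\beta )\) eventually dominates any geometric growth of \(z^k\)):

```python
import math

def mittag_leffler(z, alpha, beta=1.0, terms=60):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(z),
    evaluated by truncating the defining series."""
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(terms))

# Sanity checks against known special cases:
# E_{1,1}(z) = exp(z) and E_{2,1}(z) = cosh(sqrt(z)).
assert abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-12
assert abs(mittag_leffler(4.0, 2.0) - math.cosh(2.0)) < 1e-10
```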

Proof

According to our method, \(y_k(x)\) and \(\tilde{y}_k(x)\) (the k-th approximate solutions of (9) and (10), respectively) satisfy the following iteration relations:

$$\begin{aligned} \left\{ \begin{array}{ll} y_{k+1}(x)=y_0(x)+I^{\alpha }g(x)y_k(x)+I^{\alpha }f(x)+I^{\alpha }\int _{0}^{x}{k(x,t)y_k(t)\mathrm{d}t}, \\ y_0(x)=\sum _{i=0}^{n-1}{c_i\frac{x^i}{i!}}, \end{array} \right. \end{aligned}$$
(13)

and

$$\begin{aligned} \left\{ \begin{array}{ll} \tilde{y}_{k+1}(x)=\tilde{y}_0(x)+I^{\alpha }g(x)\tilde{y}_k(x)+I^{\alpha }f(x)+I^{\alpha }\int _{0}^{x}{k(x,t)\tilde{y}_k(t)\mathrm{d}t}, \\ \tilde{y}_0(x)=\sum _{i=0}^{n-1}{b_i\frac{x^i}{i!}}. \end{array} \right. \end{aligned}$$
(14)

Equations (13) and (14) imply that

$$\begin{aligned} \left| y_0(x)-\tilde{y}_0(x)\right|& = \left| \sum _{i=0}^{n-1}{c_i\frac{x^i}{i!}}-\sum _{i=0}^{n-1}{b_i\frac{x^i}{i!}}\right| =\left| \sum _{i=0}^{n-1}{(c_i-b_i)\frac{x^i}{i!}}\right| \ \ \nonumber \\ & \le \sum _{i=0}^{n-1}{|c_i-b_i|\frac{x^i}{i!}}\le \sum _{i=0}^{n-1}{\varepsilon _{i}\frac{x^i}{i!}}. \end{aligned}$$
(15)

Using (13), (14) and (15), one obtains

$$\begin{aligned} \begin{array}{ll} & |y_1(x)-\tilde{y}_1(x)| \le |y_0(x)-\tilde{y}_0(x)|+I^{\alpha }|g(x)| |y_0(x)-\tilde{y}_0(x)|\\ &\qquad +\, I^{\alpha }\int _{0}^{x}{|k(x,t)| |y_0(t)-\tilde{y}_0(t)|\mathrm{d}t}\le \sum _{i=0}^{n-1}{\varepsilon _{i}\frac{x^i}{i!}} \\ & \qquad +\, NI^{\alpha }\sum _{i=0}^{n-1}{\varepsilon _{i}\frac{x^i}{i!}}+MI^{\alpha }\int _{0}^{x}{\sum _{i=0}^{n-1}{\varepsilon _{i}\frac{t^i}{i!}\mathrm{d}t}}=\sum _{i=0}^{n-1}{\varepsilon _{i}\frac{x^i}{i!}} \\ & \qquad + \, \sum _{i=0}^{n-1}{\varepsilon _{i}N\frac{x^{i+\alpha }}{\Gamma (i+\alpha +1)}}+\sum _{i=0}^{n-1}{\varepsilon _{i}M\frac{x^{i+\alpha +1}}{\Gamma (i+\alpha +2)}} \\ & \quad = \sum _{i=0}^{n-1}{\varepsilon _{i} \left(\frac{x^i}{i!}+\frac{1}{\Gamma (i+\alpha +1)}\left( Nx^{\alpha }+M\frac{x^{\alpha +1}}{i+\alpha +1}\right) x^i \right)}\\ & \quad = \sum _{i=0}^{n-1}{\varepsilon _{i}\sum _{j=0}^{1}{\frac{1}{\Gamma (j\alpha +i+1)}\left(Nx^{\alpha }+M\frac{x^{\alpha +1}}{\alpha j+i+1}\right)^jx^i}}. \end{array} \end{aligned}$$
(16)

Similarly, using (13), (14), (15) and (16), we have

$$\begin{aligned}&|y_2(x)-\tilde{y}_2(x)| \le |y_0(x)-\tilde{y}_0(x)|+I^{\alpha }|g(x)| |y_1(x)-\tilde{y}_1(x)| \nonumber \\& \qquad+ \, I^{\alpha }\int _{0}^{x}{|k(x,t)| |y_1(t)-\tilde{y}_1(t)|\mathrm{d}t}\le \sum _{i=0}^{n-1}{\varepsilon _{i}\frac{x^i}{i!}} \nonumber \\& \qquad + \, N\sum _{i=0}^{n-1}{\varepsilon _{i}I^{\alpha }\bigg (\frac{x^i}{i!}+N\frac{x^{i+\alpha }}{\Gamma (i+\alpha +1)}+M\frac{x^{i+\alpha +1}}{\Gamma (i+\alpha +2)}\bigg )} \nonumber \\& \qquad + \, M\sum _{i=0}^{n-1}{\varepsilon _{i}I^{\alpha }\int _{0}^{x}\bigg (\frac{t^{i}}{i!}+N\frac{t^{i+\alpha }}{\Gamma (i+\alpha +1)}+M\frac{t^{i+\alpha +1}}{\Gamma (i+\alpha +2)}\bigg )\mathrm{d}t} \nonumber \\& \quad =\sum _{i=0}^{n-1}\varepsilon _{i}\bigg (\frac{x^i}{i!}+N\frac{x^{i+\alpha }}{\Gamma (i+\alpha +1)}+M\frac{x^{i+\alpha +1}}{\Gamma (i+\alpha +2)}+N^2\frac{x^{i+2\alpha }}{\Gamma (i+2\alpha +1)} \nonumber \\& \qquad +2MN\frac{x^{i+2\alpha +1}}{\Gamma (i+2\alpha +2)}+M^2\frac{x^{i+2\alpha +2}}{\Gamma (i+2\alpha +3)}\bigg ) \nonumber \\& \quad \le \sum _{i=0}^{n-1}\varepsilon _{i}\bigg[\frac{x^i}{i!}+\frac{x^{i}}{\Gamma (i+\alpha +1)}\bigg (Nx^\alpha +M\frac{x^{\alpha +1}}{\alpha +1}\bigg ) \nonumber \\& \qquad +\frac{x^i}{\Gamma (i+2\alpha +1)}\left(N^2x^{2\alpha }+2MN\frac{x^{2\alpha +1}}{\alpha +1}+M^2\frac{x^{2\alpha +2}}{(\alpha +1)^2}\right)\bigg ]. \end{aligned}$$
(17)

Therefore, we obtain

$$\begin{aligned} |y_2(x)-\tilde{y}_2(x)|\le \sum _{i=0}^{n-1}{\varepsilon _{i}\sum _{j=0}^{2}{\frac{1}{\Gamma (j\alpha +i+1)}{\times}\bigg (Nx^{\alpha }+M\frac{x^{\alpha +1}}{\alpha +1}\bigg )^jx^i}}. \end{aligned}$$
(18)

Consequently, by continuing the above procedure, we can conclude that

$$\begin{aligned} |y_k(x)-\tilde{y}_k(x)|\le \sum _{i=0}^{n-1}{\varepsilon _{i}\sum _{j=0}^{k}{\frac{1}{\Gamma (j\alpha +i+1)}\left( Nx^{\alpha }+M\frac{x^{\alpha +1}}{\alpha +1}\right) ^jx^i}}. \end{aligned}$$
(19)

As we know, if \(k\rightarrow \infty\), then \(y_k(x)\rightarrow y(x)\) and \(\tilde{y}_k(x)\rightarrow \tilde{y}(x)\). So, if \(k\rightarrow \infty\), then using (19), one obtains

$$\begin{aligned} |y(x)-\tilde{y}(x)| &\le \sum _{i=0}^{n-1}{\varepsilon _{i}x^i\sum _{j=0}^{\infty }{\frac{1}{\Gamma (j\alpha +i+1)}\bigg (Nx^{\alpha }+M\frac{x^{\alpha +1}}{\alpha +1}\bigg )^j}}\ \ \nonumber \\ &=\sum _{i=0}^{n-1}{\varepsilon _{i}x^iE_{\alpha ,i+1}\bigg (Nx^\alpha +M\frac{x^{\alpha +1}}{\alpha +1}\bigg )}. \end{aligned}$$
(20)

Since the Mittag–Leffler function is a convergent series and \(x\in [0,1]\), (20) indicates that a small change in the initial conditions causes only a small change in the obtained solution. So the proof is completed. \(\square\)

3.2 The Convergence of RVIM for the Fractional Volterra Integro-Differential Equations

In this section, we investigate the convergence analysis of the proposed method based on the error estimate. The main result is presented in the following theorem.

Theorem 3.2

Consider the fractional Volterra integro-differential equation (8). The sequence (7) with \(y_0(x)=\sum _{i=0}^{n-1}{c_i\frac{x^i}{i!}}\) converges to y(x) whenever

$$\begin{aligned} k_1=\Vert g(x)\Vert _\infty<\infty , \quad k_2=\Vert k(x,t)\Vert _\infty <\infty . \end{aligned}$$
(21)

Moreover, the error estimate is given by the following relation:

$$\begin{aligned} \Vert E_{k+1}\Vert _{\infty } \le \Vert E_{0}\Vert _{\infty }\frac{(2LX^{\alpha +1})^{k+1}}{\Gamma (\alpha (k+1)+1)}, \end{aligned}$$
(22)

where \(L=\max \{k_1,k_2\}\) and \(E_k(x)=y_k(x)-y(x)\) is the k-th error for all \(k=1,2,\ldots\).

Proof

Evidently, by applying the fractional integral operator to both sides of (8), we have

$$\begin{aligned} y(x)=\sum _{i=0}^{n-1}c_i\frac{x^i}{i!}+I^{\alpha }g(x)y(x)+I^{\alpha }f(x)+I^{\alpha }\int _{0}^{x}{k(x,t)y(t)\mathrm{d}t}. \end{aligned}$$
(23)

Subtracting (7) from (23) yields

$$\begin{aligned} E_{k+1}(x)= I^{\alpha }g(x)(y_k(x)-y(x))+I^{\alpha }\int _{0}^{x}{k(x,t)(y_k(t)-y(t))\mathrm{d}t}. \end{aligned}$$

Then, we can write

$$\begin{aligned} \begin{array}{ll} |E_{k+1}(x)|&\le I^{\alpha }|g(x)| |y_k(x)-y(x)|+I^{\alpha }\int _{0}^{x}{|k(x,t)| |y_k(t)-y(t)|\mathrm{d}t} \\ &\le k_1I^{\alpha }|y_k(x)-y(x)|+k_2I^{\alpha }\int _{0}^{x}{|y_k(t)-y(t)|\mathrm{d}t} \\ &=k_1I^{\alpha }|E_k(x)|+k_2I^{\alpha +1}|E_k(x)|. \end{array} \end{aligned}$$
(24)

Also,

$$\begin{aligned} \begin{array}{ll} |E_{k}(x)|&\le I^{\alpha }|g(x)| |y_{k-1}(x)-y(x)|+I^{\alpha }\int _{0}^{x}{|k(x,t)| |y_{k-1}(t)-y(t)|\mathrm{d}t} \\ &\le k_1I^{\alpha }|y_{k-1}(x)-y(x)|+k_2I^{\alpha +1}|y_{k-1}(x)-y(x)| \\ &=(k_1I^{\alpha }+k_2I^{\alpha +1})|E_{k-1}(x)|. \end{array} \end{aligned}$$
(25)

Consequently, we arrive at

$$\begin{aligned} \begin{array}{ll} |E_{k+1}(x)|&\le (k_1I^{\alpha }+k_2I^{\alpha +1})|E_{k}(x)| \\ &\le (k_1I^{\alpha }+k_2I^{\alpha +1})^2|E_{k-1}(x)| \\ &\vdots \\ &\le (k_1I^{\alpha }+k_2I^{\alpha +1})^{k+1}|E_{0}(x)|\\ &\le (LI^{\alpha }+LI^{\alpha +1})^{k+1}|E_{0}(x)| \\ &\le L^{k+1}(I^{\alpha }+I^{\alpha +1})^{k+1}\max _{x\in [0,X]}{|E_{0}(x)|}. \end{array} \end{aligned}$$
(26)

Besides, we know that

$$\begin{aligned} (I^{\alpha }+I^{\alpha +1})^{k+1}& = I^{\alpha (k+1)}(1+I)^{k+1}=I^{\alpha (k+1)}\sum _{j=0}^{k+1}\binom{k+1}{j}I^j \nonumber \\& = \sum _{j=0}^{k+1}\binom{k+1}{j}I^{\alpha (k+1)+j}=\binom{k+1}{0}I^{\alpha (k+1)}+\binom{k+1}{1}I^{\alpha (k+1)+1} \nonumber \\& \quad+ \, \cdots +\binom{k+1}{k+1}I^{(\alpha +1)(k+1)}=\binom{k+1}{0}\frac{\int _{0}^{x}(x-\tau )^{\alpha (k+1)-1}\mathrm{d}\tau }{\Gamma (\alpha (k+1))} \nonumber \\& \quad + \, \binom{k+1}{1}\frac{\int _{0}^{x}(x-\tau )^{\alpha (k+1)}\mathrm{d}\tau }{\Gamma (\alpha (k+1)+1)} +\cdots +\binom{k+1}{k+1}\frac{\int _{0}^{x}(x-\tau )^{(\alpha +1)(k+1)-1}\mathrm{d}\tau }{\Gamma ((\alpha +1) (k+1))} \nonumber \\ & \le \binom{k+1}{0}\frac{\int _{0}^{x}(x-\tau )^{(\alpha +1)(k+1)-1}\mathrm{d}\tau }{\Gamma (\alpha (k+1))}+\binom{k+1}{1}\frac{\int _{0}^{x}(x-\tau )^{(\alpha +1)(k+1)-1}\mathrm{d}\tau }{\Gamma (\alpha (k+1))} \nonumber \\& \quad + \, \cdots +\binom{k+1}{k+1}\frac{\int _{0}^{x}(x-\tau )^{(\alpha +1)(k+1)-1}\mathrm{d}\tau }{\Gamma (\alpha (k+1))}=\frac{\int _{0}^{x}(x-\tau )^{(\alpha +1)(k+1)-1}\mathrm{d}\tau }{\Gamma (\alpha (k+1))}\sum _{j=0}^{k+1}\binom{k+1}{j} \nonumber \\&=2^{k+1}\frac{\int _{0}^{x}(x-\tau )^{(\alpha +1)(k+1)-1}\mathrm{d}\tau }{\Gamma (\alpha (k+1))} =2^{k+1}\frac{x^{(\alpha +1) (k+1)}}{(\alpha +1) (k+1)\Gamma (\alpha (k+1))} \nonumber \\ & \le 2^{k+1}\frac{x^{(\alpha +1) (k+1)}}{\alpha (k+1)\Gamma (\alpha (k+1))}\le 2^{k+1}\frac{X^{(\alpha +1) (k+1)}}{\Gamma (\alpha (k+1)+1)}. \end{aligned}$$
(27)

Considering (26) and (27), one can conclude that

$$\begin{aligned} |E_{k+1}(x)|\le L^{k+1}\max _{x\in [0,X]}{|E_{0}(x)|}\frac{(2X^{\alpha +1})^{k+1}}{\Gamma (\alpha (k+1)+1)}. \end{aligned}$$
(28)

Therefore,

$$\begin{aligned} \Vert E_{k+1}\Vert _{\infty } \le \Vert E_{0}\Vert _{\infty }\frac{(2LX^{\alpha +1})^{k+1}}{\Gamma (\alpha (k+1)+1)}. \end{aligned}$$
(29)

Since \(\sum _{j=0}^{\infty }\frac{(2LX^{\alpha +1})^{j}}{\Gamma (\alpha j+1)}=E_{\alpha }(2LX^{\alpha +1})\) is a convergent Mittag–Leffler series (Podlubny 1999), we have

$$\begin{aligned} \lim _{j\rightarrow \infty }\frac{(2LX^{\alpha +1})^{j}}{\Gamma (\alpha j+1)}=0. \end{aligned}$$
(30)

Therefore, if \(k\rightarrow \infty\) then \(\Vert E_{0}\Vert _{\infty }\frac{(2LX^{\alpha +1})^{k+1}}{\Gamma (\alpha (k+1)+1)}\rightarrow 0\) and this completes the proof. \(\square\)
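The decay of the bound (22) is easy to observe numerically. The sketch below (with illustrative values of L, X, and \(\alpha\), chosen only for demonstration) evaluates the right-hand side of (22) divided by \(\Vert E_{0}\Vert _{\infty }\) and confirms that it eventually decreases monotonically to zero even when the constant \(2LX^{\alpha +1}\) exceeds one:

```python
import math

def rvim_error_bound(k, L, X, alpha):
    """Right-hand side of (22) divided by ||E_0||_inf:
    (2 L X^(alpha+1))^(k+1) / Gamma(alpha (k+1) + 1)."""
    return (2 * L * X ** (alpha + 1)) ** (k + 1) / math.gamma(alpha * (k + 1) + 1)

# Here 2 L X^(alpha+1) = 3 > 1, yet the Gamma function in the
# denominator wins, so the bound still tends to zero.
vals = [rvim_error_bound(k, L=1.5, X=1.0, alpha=0.8) for k in range(4, 40)]
assert all(b < a for a, b in zip(vals, vals[1:]))
assert vals[-1] < 1e-10
```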

Fig. 1
figure 1

Comparison between the exact solution and the RVIM solution with \(k=4\) for Example 4.2 (case 1)

Fig. 2
figure 2

Comparison between the exact solution and the RVIM solution with \(k=5\) for Example 4.2 (case 1)

Fig. 3
figure 3

Comparison between the exact solution and the RVIM solution with \(k=6\) for Example 4.2 (case 1)

4 Numerical Experiments

In this section, we present some examples to illustrate the applicability, efficiency, and accuracy of our method. All test problems are taken from the literature, and the calculations were performed using Maple. Moreover, to show that the method is also practical for nonlinear FIDEs and partial FIDEs, two such examples are included.

Example 4.1

Consider the following FIDE (Sayevand et al. 2013)

$$\begin{aligned} D^{\alpha }y(x)-\int _{0}^{x}{(x-t)y(t)\mathrm{d}t}=x, \end{aligned}$$
(31)

subject to the initial condition

$$\begin{aligned} y(0)=0. \end{aligned}$$
(32)

The exact solution of this problem is \(y(x)=x^{\alpha +1}E_{\alpha +2,\alpha +2}(x^{\alpha +2})\) (Sayevand 2014).

According to the RVIM, the following iterative relation results

$$\begin{aligned} y_{n+1}(x)=y_{0}(x)+I^{\alpha +2}y_n(x)+I^\alpha x, \end{aligned}$$
(33)

where \(y_0(x)=0\) and \(y_{n}(x)\) indicates the n-th approximation of y(x).

According to (33), the following relations are obtained.

$$\begin{aligned} y_{1}(x)& = \frac{x^{\alpha +1}}{\Gamma (\alpha +2)}, \\ y_{2}(x)& = \frac{x^{\alpha +1}}{\Gamma (\alpha +2)}+\frac{x^{2\alpha +3}}{\Gamma (2\alpha +4)}, \\ y_{3}(x)& = \frac{x^{\alpha +1}}{\Gamma (\alpha +2)}+\frac{x^{2\alpha +3}}{\Gamma (2\alpha +4)}+\frac{x^{3\alpha +5}}{\Gamma (3\alpha +6)}, \\ \vdots \\ y_{n}(x)& = \sum _{k=1}^{n}\frac{x^{k(\alpha +2)-1}}{\Gamma (k(\alpha +2))}. \end{aligned}$$

Therefore,

$$\begin{aligned} \begin{array}{ll} y(x)&=\lim _{n\rightarrow \infty }{y_{n}(x)}=\sum _{k=1}^{\infty }\frac{x^{k(\alpha +2)-1}}{\Gamma (k(\alpha +2))}\\ &=\sum _{k=0}^{\infty }\frac{x^{(k+1)(\alpha +2)-1}}{\Gamma ((k+1)(\alpha +2))}=\sum _{k=0}^{\infty }\frac{x^{k(\alpha +2)}x^{\alpha +1}}{\Gamma (k(\alpha +2)+\alpha +2)}=x^{\alpha +1}E_{\alpha +2,\alpha +2}(x^{\alpha +2}),\\ \end{array} \end{aligned}$$

which is the exact solution of (31).
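The iterates above can be reproduced mechanically, since \(I^{\alpha +2}\) and \(I^{\alpha }\) act termwise on powers of x through the power rule. A short pure-Python check of (33), shown here for the sample value \(\alpha =\frac{1}{2}\):

```python
import math
from fractions import Fraction

def I(series, mu):
    """Termwise Riemann-Liouville integral I^mu of a fractional power series
    (a dict mapping a rational power of x to its coefficient)."""
    return {p + mu: c * math.gamma(float(p) + 1) / math.gamma(float(p + mu) + 1)
            for p, c in series.items()}

def add(a, b):
    out = dict(a)
    for p, c in b.items():
        out[p] = out.get(p, 0.0) + c
    return out

# Iteration (33) for Example 4.1 with alpha = 1/2:
# y_{n+1} = I^(alpha+2) y_n + I^alpha x,  starting from y_0 = 0.
alpha = Fraction(1, 2)
forcing = I({Fraction(1): 1.0}, alpha)      # I^alpha x
y = {}
for _ in range(4):
    y = add(I(y, alpha + 2), forcing)

# Compare with the closed-form iterate sum_k x^(k(alpha+2)-1)/Gamma(k(alpha+2)).
for k in range(1, 5):
    p = k * (alpha + 2) - 1
    assert abs(y[p] - 1.0 / math.gamma(float(k * (alpha + 2)))) < 1e-12
```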

Example 4.2

Consider the following FIDE in two cases.

$$\begin{aligned} \left\{ \begin{array}{ll} D^{\frac{5}{3}}y(x)+y^{\prime }(x)+xy(x)=f(x)+\frac{1}{2}\int _{0}^{x}(x-t)^2y(t)\mathrm{d}t, \ x\in [0,1], \\ y^{\prime }(0)=y(0)=0. \end{array} \right. \end{aligned}$$
(34)

Case 1: choose f(x) such that the exact solution of (34) is \(\frac{x^{\frac{10}{3}}}{\Gamma (\frac{13}{3})}+\frac{2x^{\frac{11}{3}}}{\Gamma (\frac{14}{3})}-\frac{x^4}{12}\).

According to the RVIM, the following iterative relation results

$$\begin{aligned} y_{k+1}(x)=y_{0}{(x)}+I^{\frac{5}{3}} \left[-y_k^{\prime }(x)-xy_k(x)+f(x)+\frac{1}{2}\int _{0}^{x}(x-t)^2y_k(t)\mathrm{d}t\right], \end{aligned}$$
(35)

where \(y_{0}{(x)}=0\).

The approximate solutions \(y_k(x)\) for \(k=4,5,6\) are shown in Figs. 1, 2 and 3. These figures show that the numerical solutions become more and more accurate as k increases. In Table 1, the errors of the presented method are given in the infinity and \(L_2\) norms for different values of k. The table also reports the maximum errors obtained for different values of N by the spectral-collocation method of Ma and Huang (2014) and by the method of Mashayekhi and Razzaghi (2015), which is based on hybrid functions consisting of block-pulse functions and Bernoulli polynomials. In comparison with Ma and Huang (2014) and Mashayekhi and Razzaghi (2015), our method provides more accurate results.

Table 1 The errors of RVIM for Example 4.2 (case 1)

Case 2: choose f(x) such that the exact solution of (34) is \(\frac{x^{\frac{5}{3}}}{\Gamma (\frac{8}{3})}+\frac{x^{\frac{10}{3}}}{\Gamma (\frac{13}{3})}+\frac{2x^{\frac{11}{3}}}{\Gamma (\frac{14}{3})}-\frac{x^4}{12}\). Similarly, Fig. 4 shows the approximate solutions \(y_k(x)\) for \(k=4,\ldots ,8\). Moreover, the absolute errors of our method for different values of k are given in Table 2, which shows that the approximate solutions become more and more accurate as the value of k increases.

Fig. 4
figure 4

Comparison between the exact solution and the RVIM solution with \(k=4,6,8\) for Example 4.2 (case 2)

Table 2 Error estimation of RVIM for Example 4.2 (case 2)

Example 4.3

Consider the following nonlinear fractional integro-differential equation (Eslahchi et al. 2014)

$$\begin{aligned} D^{0.5}y(x)=g(x)y(x)+f(x)+\int _{0}^{x}\sqrt{x}y^2(t)\mathrm{d}t, \quad y(0)=0, \end{aligned}$$
(36)

where \(g(x)=2\sqrt{x}+2x^{\frac{3}{2}}-(\sqrt{x}+x^{\frac{3}{2}})\ln (1+x)\) and \(f(x)=\frac{2\,\mathrm{arctanh}\left(\frac{\sqrt{x}}{\sqrt{1+x}}\right)}{\sqrt{\pi }\sqrt{1+x}}-2x^{\frac{3}{2}}\). The exact solution of this equation is \(y(x)=\ln (1+x)\) (Eslahchi et al. 2014). According to the RVIM, the following iterative relation results.

$$\begin{aligned} \left\{ \begin{array}{ll} y_{k+1}(x)=y_{0}{(x)}+I^{\frac{1}{2}}\left[g(x)y_k(x)+f(x)+\int _{0}^{x}\sqrt{x}y_k^2(t)\mathrm{d}t \right], \\ y_0(x)=0. \end{array} \right. \end{aligned}$$
(37)

Using (37), the following relations are obtained.

$$\begin{aligned} y_{1}(x)& = x-\frac{1}{4}(2+3\sqrt{\pi })x^2+\frac{1}{3}x^3-\frac{1}{4}x^4+\frac{1}{5}x^5-\frac{1}{6}x^6+O(x^{\frac{13}{2}}), \\ y_{2}(x)& = x-\frac{1}{4}(2+3\sqrt{\pi })x^2+\frac{1}{3}x^3-\frac{1}{4}x^4+\frac{1}{5}x^5-\frac{1}{6}x^6+O(x^{\frac{13}{2}}), \\ y_{3}(x)& = x-\frac{1}{2}x^2+\frac{1}{3}x^3-\frac{1}{2048}(512+525\pi ^{\frac{3}{2}})x^4 -\frac{1}{327{,}680}(70{,}875\pi ^{\frac{3}{2}}-65{,}536)x^5\\&- \, \frac{1}{1{,}572{,}864}(56{,}133\pi ^{\frac{3}{2}}+262{,}144)x^6+O(x^{\frac{13}{2}}), \\ y_{4}(x)& = x-\frac{1}{2}x^2+\frac{1}{3}x^3-\frac{1}{4}x^4 -\frac{1}{1{,}310{,}720}(165{,}375\pi ^2-262{,}144)x^5\\ & \quad -\frac{1}{100{,}663{,}296}(15{,}644{,}475\pi ^2+16{,}777{,}216)x^6+O(x^{\frac{13}{2}}),\\ \vdots \\ y_{k}(x)& = \sum _{n=1}^{k}\frac{(-1)^{n+1}x^n}{n}, \end{aligned}$$

therefore, \(y(x)=\lim _{k\rightarrow \infty }{y_{k}(x)}=\sum _{n=1}^{\infty }\frac{(-1)^{n+1}x^n}{n}=\ln (1+x),\) which is the exact solution of (36). This example was also solved by the collocation method (Eslahchi et al. 2014) and by fractional pseudospectral integration matrices (Tang and Xu 2015), with approximate results reported for various N. Eslahchi et al. (2014) showed that in the best case the resulting maximum absolute error is 9.0178e−06, and for \(N=150\) the infinity-norm error in Tang and Xu (2015) is 1.1127e−05, whereas we obtained the exact solution.

Example 4.4

Consider the following fractional partial integro-differential equation

$$\begin{aligned} \frac{\partial ^{\alpha } u(x,t)}{\partial t^\alpha }=\int _{0}^{t}(t-s)^{\frac{-1}{2}}\frac{\partial ^{2} u(x,s)}{\partial x^{2}}\mathrm{d}s,\ 0\le x\le 1, \end{aligned}$$
(38)

subject to the initial condition \(u(x,0)=\sin (\pi x).\)

The exact solution of this problem for \(\alpha =1\) is \(u(x,t)=\sin (\pi x)\sum _{n=0}^{\infty }\frac{(-\pi ^{\frac{5}{2}}t^{\frac{3}{2}})^n}{\Gamma (\frac{3}{2}n+1)}\).

According to the RVIM, the recursive relation is given by

$$\begin{aligned} \left\{ \begin{array}{ll} u_{k+1}(x,t)=u_{0}{(x,t)}+\sqrt{\pi }I^{\alpha +\frac{1}{2}}\frac{\partial ^{2} u_k(x,t)}{\partial x^{2}},\\ \\ u_{0}{(x,t)}=\sin (\pi x). \end{array} \right. \end{aligned}$$
(39)

Then, we have

$$\begin{aligned} u_{1}(x,t)& = \sin (\pi x)-\pi ^{\frac{5}{2}}\sin (\pi x)\frac{t^{\alpha +\frac{1}{2}}}{\Gamma (\alpha +\frac{3}{2})}, \\ u_{2}(x,t)& = \sin (\pi x)\left( 1-\pi ^{\frac{5}{2}}\frac{t^{\alpha +\frac{1}{2}}}{\Gamma (\alpha +\frac{3}{2})}+\pi ^5\frac{t^{2\alpha +1}}{\Gamma (2\alpha +2)}\right) ,\\ u_{3}(x,t)& = \sin (\pi x)\left( 1-\pi ^{\frac{5}{2}}\frac{t^{\alpha +\frac{1}{2}}}{\Gamma (\alpha +\frac{3}{2})}+\pi ^5\frac{t^{2\alpha +1}}{\Gamma (2\alpha +2)}-\pi ^{\frac{15}{2}}\frac{t^{3\alpha +\frac{3}{2}}}{\Gamma (3\alpha +\frac{5}{2})}\right) ,\\ \vdots \\ u_{k}(x,t)& = \sin (\pi x)\sum _{n=0}^{k}{\frac{(-1)^n\pi ^{\frac{5n}{2}}t^{n(\alpha +\frac{1}{2})}}{\Gamma (n(\alpha +\frac{1}{2})+1)}}. \end{aligned}$$

Therefore,

$$\begin{aligned} \begin{array}{ll} u(x,t)&=\lim _{k\rightarrow \infty }{u_{k}(x,t)}=\sin (\pi x)\sum _{n=0}^{\infty }{\frac{(-\pi ^{\frac{5}{2}}t^{\alpha +\frac{1}{2}})^n}{\Gamma (n(\alpha +\frac{1}{2})+1)}}\\ &=\sin (\pi x)E_{\alpha +\frac{1}{2}}(-\pi ^{\frac{5}{2}}t^{\alpha +\frac{1}{2}}). \end{array} \end{aligned}$$
(40)

Thus, for \(\alpha =1\), the closed-form solution of (38) results as:

$$\begin{aligned} u(x,t)=\sin (\pi x)E_{\frac{3}{2}}(-\pi ^{\frac{5}{2}}t^{\frac{3}{2}})=\sin (\pi x)\sum _{n=0}^{\infty }\frac{(-\pi ^{\frac{5}{2}}t^{\frac{3}{2}})^n}{\Gamma (\frac{3}{2}n+1)}, \end{aligned}$$

which is the exact solution of (38). This shows that the method can successfully be applied to fractional partial integro-differential equations. This example was previously solved in Mashayekhi and Razzaghi (2015), where the authors applied a pseudospectral spatial discretization based on Legendre–Gauss–Lobatto collocation points to convert problem (38) into a system of integro-differential equations. The absolute errors for different values of t with \(\alpha =1\) and \(k=16\) are reported in Mashayekhi and Razzaghi (2015); for their best approximate solution, the absolute error at \(t=0.4\) with \(N=6\) equals 9.7e−06. Clearly, the present method gives better results. The numerical solutions obtained for different values of \(\alpha\) are depicted in Figs. 5 and 6 and are consistent with the exact solution of this equation.
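Because \(\frac{\partial ^2}{\partial x^2}\sin (\pi x)=-\pi ^2\sin (\pi x)\), the recursion (39) reduces to a scalar recursion on the t-dependent factor \(c_k(t)\) in \(u_k(x,t)=\sin (\pi x)c_k(t)\), namely \(c_{k+1}=1-\pi ^{5/2}I^{\alpha +\frac{1}{2}}c_k\). The sketch below (illustrative, for \(\alpha =1\)) runs this recursion on fractional power series in t and checks the coefficients against the exact series:

```python
import math
from fractions import Fraction

def I(series, mu):
    """Termwise fractional integral of a power series in t (dict: power -> coeff)."""
    return {p + mu: c * math.gamma(float(p) + 1) / math.gamma(float(p + mu) + 1)
            for p, c in series.items()}

# Scalar recursion c_{k+1} = 1 - pi^(5/2) I^(alpha+1/2) c_k with c_0 = 1.
alpha = Fraction(1)            # the case compared with the exact solution
c = {Fraction(0): 1.0}
for _ in range(6):
    integ = I(c, alpha + Fraction(1, 2))
    c = {Fraction(0): 1.0}
    for p, coef in integ.items():
        c[p] = c.get(p, 0.0) - math.pi ** 2.5 * coef

# Coefficients of sum_n (-pi^(5/2) t^(3/2))^n / Gamma(3n/2 + 1) are reproduced.
for n in range(4):
    p = Fraction(3 * n, 2)
    expected = (-math.pi ** 2.5) ** n / math.gamma(1.5 * n + 1)
    assert abs(c[p] - expected) < 1e-6 * abs(expected)
```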

Fig. 5
figure 5

Plot of the RVIM solution with \(\alpha =0.25\) (right) and \(\alpha =0.5\) (left) for Example 4.4

Fig. 6
figure 6

Plot of the RVIM solution with \(\alpha =0.75\) (right) and \(\alpha =1\) (left) for Example 4.4

Example 4.5

Finally, consider the following fractional integro-differential equation

$$\begin{aligned} D^{\frac{1}{2}}y(x)=\frac{\frac{8}{3}x^{\frac{3}{2}}-2x^{\frac{1}{2}}}{\sqrt{\pi }}-\frac{3x^5-4x^4}{12}+\int _{0}^{x}{xty(t)\mathrm{d}t}, \quad 0\le x,\ t\le 1, \end{aligned}$$
(41)

subject to \(y(0)=0\).

The exact solution of this problem is \(y(x)=x^{2}-x\) (Kumar et al. 2016).

According to the RVIM, the following iterative relation results.

$$\begin{aligned} \left\{ \begin{array}{ll} y_{k+1}(x)=y_{0}{(x)}+I^{\frac{1}{2}}\left[ {\frac{\frac{8}{3}x^{\frac{3}{2}}-2x^{\frac{1}{2}}}{\sqrt{\pi }}-\frac{3x^5-4x^4}{12}+\int _{0}^{x}{xty_k(t)\mathrm{d}t}}\right] , \\ y_0(x)=0. \end{array} \right. \end{aligned}$$
(42)

Using (42), the following relations are obtained.

$$\begin{aligned} \begin{array}{ll} &y_{1}(x)=x^2-x+\frac{256}{945}\frac{x^{\frac{9}{2}}}{\sqrt{\pi }}-\frac{128}{693}\frac{x^{\frac{11}{2}}}{\sqrt{\pi }}, \\ &y_{2}(x)=x^2-x+O(x^{8}),\\ &y_{3}(x)=x^2-x+O(x^{\frac{23}{2}}),\\ &y_{4}(x)=x^2-x+O(x^{15}), \end{array} \end{aligned}$$

therefore, \(y(x)=\lim _{k\rightarrow \infty }{y_{k}(x)}=x^2-x,\) which is the exact solution of (41). This example was studied with three numerical schemes, Linear (S1), Quadratic (S2), and Quadratic-Linear (S3), in Kumar et al. (2016). The maximum absolute errors of S1, S2 and S3 are reported in Table 3. From this table, it is observed that in the best case the maximum absolute error is 5.20829e−05, whereas we obtained the exact solution.
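As a sanity check (illustrative only), one can confirm that \(y(x)=x^{2}-x\) satisfies (41) by combining the Caputo power rule \(D^{\frac{1}{2}}x^{p}=\frac{\Gamma (p+1)}{\Gamma (p+\frac{1}{2})}x^{p-\frac{1}{2}}\) with the exactly evaluated Volterra term \(\int _{0}^{x}xt(t^2-t)\mathrm{d}t=\frac{3x^5-4x^4}{12}\):

```python
import math

def caputo_half_power(p, x):
    """Caputo D^(1/2) of x^p for p >= 1: Gamma(p+1)/Gamma(p+1/2) x^(p-1/2)."""
    return math.gamma(p + 1) / math.gamma(p + 0.5) * x ** (p - 0.5)

def lhs(x):
    # D^(1/2) (x^2 - x)
    return caputo_half_power(2, x) - caputo_half_power(1, x)

def rhs(x):
    # f-part of (41) plus the exactly evaluated Volterra term
    # int_0^x x t (t^2 - t) dt = x (x^4/4 - x^3/3) = (3x^5 - 4x^4)/12
    f = ((8.0 / 3.0) * x ** 1.5 - 2.0 * x ** 0.5) / math.sqrt(math.pi) \
        - (3 * x ** 5 - 4 * x ** 4) / 12.0
    volterra = x * (x ** 4 / 4.0 - x ** 3 / 3.0)
    return f + volterra

# The two sides of (41) agree at sample points up to rounding error.
for x in [0.1, 0.25, 0.5, 0.75, 1.0]:
    assert abs(lhs(x) - rhs(x)) < 1e-12
```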

Table 3 Maximum absolute errors of three schemes (S1, S2, S3) for Example 4.5 in ref. Kumar et al. (2016)

5 An Application

Many problems in engineering and various fields of science can be modeled by fractional integro-differential equations. Among them is the fractional population growth model of a species within a closed system, which can be written as the following Volterra integro-differential equation:

$$\begin{aligned} D^{\alpha }u(t)=au(t)-bu^2(t)-cu(t)\int _{0}^{t}{u(s)\mathrm{d}s},\ \ \ \ 0<\alpha \le 1, \end{aligned}$$

with initial condition

$$\begin{aligned} u(0)=\beta , \end{aligned}$$

where \(a > 0\) is the birth rate coefficient, \(b>0\) is the crowding coefficient, and \(c > 0\) is the toxicity coefficient (Yuzbasi 2013). Here u(t) is the population at time t, with initial population u(0). The total metabolism, i.e. the total amount of toxins accumulated from time zero, enters the model through the integral term. Since the system is closed, the presence of the toxic term always causes the population level to fall to zero in the long run. It is worth mentioning that for \(\alpha =1\) we recover the classical logistic growth model.

In this part, we solve this model by the proposed method for \(\alpha =1,\ \beta =0.1\) and \(k=\frac{c}{ab}=0.1\); the following approximate solution is obtained.

$$\begin{aligned} u(t)& = 0.1+.9t+3.55t^2+6.316666667t^3-5.537500000t^4-63.70916666t^5\\&- \, 156.0804167t^6-18.47323414t^7+1056.288569t^8+O(t^9), \end{aligned}$$

also, for \(\alpha =\frac{1}{2},\ \beta =0.1\) and \(k=0.1\), we have

$$\begin{aligned} u(t)& = 0.1+1.015541250\sqrt{t}+7.199999998t+35.49637104t^{\frac{3}{2}}+90.42203878t^2\\&- \, 321.1576322t^{\frac{5}{2}}-5346.321826t^3-32307.78860t^{\frac{7}{2}}-82694.80770t^4+O(t^{\frac{9}{2}}), \end{aligned}$$

which are the same as the results given in Xu (2009).
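For \(\alpha =1\) the quoted coefficients can be reproduced with truncated Taylor polynomials. The sketch below assumes the standard nondimensional form \(k\,u^{\prime }(t)=u-u^{2}-u\int _{0}^{t}u(s)\mathrm{d}s\) with \(k=0.1\) and \(u(0)=0.1\) (an assumption consistent with the series quoted above) and iterates its integrated form:

```python
# Polynomial helpers on coefficient lists (index = power of t), truncated at degree N-1.
def pmul(a, b, n):
    out = [0.0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                out[i + j] += ai * bj
    return out

def pint(a, n):            # int_0^t a(s) ds
    out = [0.0] * n
    for i, ai in enumerate(a):
        if i + 1 < n:
            out[i + 1] = ai / (i + 1)
    return out

def padd(a, b):
    return [x + y for x, y in zip(a, b)]

N = 9                      # keep terms up to t^8
kappa, beta0 = 0.1, 0.1
u = [beta0] + [0.0] * (N - 1)
for _ in range(N):         # u_{new} = beta0 + (1/kappa) int[ u - u^2 - u * int u ]
    nonlinear = padd(pmul(u, u, N), pmul(u, pint(u, N), N))
    rhs = padd(u, [-c for c in nonlinear])
    u = padd([beta0] + [0.0] * (N - 1), [c / kappa for c in pint(rhs, N)])

# First coefficients quoted in the text are recovered.
for got, want in zip(u, [0.1, 0.9, 3.55, 6.316666667, -5.5375]):
    assert abs(got - want) < 1e-6
```

Each iteration fixes one further Taylor coefficient, so nine iterations suffice for the degree-8 truncation.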

Furthermore, to show the accuracy of our method for solving this model, we compute the following residual error:

$$\begin{aligned} E_N(t)=|D^{\alpha }u(t)-au(t)+bu^2(t)+cu(t)\int _{0}^{t}{u(s)\mathrm{d}s}|. \end{aligned}$$

Figures 7 and 8 show the numerical results and error estimate for the population growth model, respectively. These figures confirm the efficiency of our method for solving this type of equations.

Fig. 7
figure 7

The RVIM solution with \(\alpha =\frac{1}{3}\) and \(a=1, b=1, c=1\) for population growth model

Fig. 8
figure 8

Graph of the error functions with \(\alpha =\frac{1}{3}\) and \(a=1, b=1, c=1\) for population growth model

6 Conclusion

In this work, we applied an efficient scheme based on the VIM and the Laplace transform, named the reconstruction of variational iteration method (RVIM), to solve Volterra integro-differential equations of fractional order (FIDEs). The method is accurate, effective, and convenient for solving FIDEs, requires no knowledge of variational theory, and yields its iterative relation without any restrictive assumptions. Moreover, the stability and convergence of the method for FIDEs have been proved. Several examples were presented to demonstrate its efficiency and accuracy. The results showed that the RVIM obtains solutions with remarkable accuracy in only a few iterations. The tables and graphs of the numerical results indicate that, in comparison with some other well-known methods, our method is more effective and accurate. The method was also applied to nonlinear FIDEs, fractional partial integro-differential equations, and the population growth model, with very satisfactory results.