1 Introduction

Fractional calculus extends the differential and integral operators to real-number powers. This generalized calculus has enabled major advances in describing and modeling many phenomena in science and engineering. In mechanics, for instance, fractional-order derivatives have been used successfully to model damping forces with memory effects and to design state feedback controllers (Hemeda and Al-Luhaibi 2014; Kilbas et al. 2006; Mahdy and Mukhtar 2017; Manafian et al. 2014; Ramadan and Al-luhaibi 2015; Saadatmandi and Mehdi 2010; Samko et al. 1993; Sontakke and Shaikh 2016; Zaslavsky 2005). The reason is that fractional calculus allows realistic modeling of physical phenomena that depend not only on the current time instant but also on the previous time history. This non-integer-order calculus can be traced back to the genesis of integer-order differential calculus itself. Although G.W. Leibniz made some remarks on the meaning and possibility of fractional derivatives of order \(\frac{1}{2}\) in the late seventeenth century, a rigorous investigation was first carried out by Liouville in a series of papers from 1832 to 1837, where he first defined an operator of fractional integration. Today, fractional calculus extends the derivative and antiderivative operations of differential and integral calculus from integer orders to the entire complex plane. Many examples and techniques for solving fractional differential equations can be found in the work of S. Kumar et al. (Sunil et al. 2016, 2017; Sunil 2014; Sunil and Mohammad 2014; Sunil and Amit 2018). There are several approaches to generalizing the notion of differentiation to fractional orders, e.g., the Riemann–Liouville, Grünwald–Letnikov, Caputo and generalized function approaches.
The Riemann–Liouville fractional derivative is the one mostly used by mathematicians, but it is not well suited to real-world physical problems because it requires fractional-order initial conditions, which so far have no physically meaningful interpretation. Caputo introduced an alternative definition, which has the advantage of admitting integer-order initial conditions for fractional-order differential equations. Unlike the Riemann–Liouville approach, which derives its definition from repeated integration, the Grünwald–Letnikov formulation approaches the problem from the derivative side and is mostly used in numerical algorithms. In this work, we study a well-known fractional differential equation, the Bagley–Torvik equation (BTE)

$$\left\{ {\begin{array}{*{20}l} {y^{\left( i \right)} \left( 0 \right) = \delta_{i} , \quad i = 0, 1, 2, \ldots , r - 1, \;\;r \in {\mathbb{N}}} \hfill \\ {Ay^{\left( n \right)} \left( t \right) + BD_{t}^{\alpha } y\left( t \right) + Cy\left( t \right) = f\left( t \right)} \hfill \\ \end{array} } \right.$$
(1)

(where \(1 \le n,\) \(n \in {\mathbb{N}},\; r - 1 < \alpha \le r\), the constants \(A, C \in {\mathbb{R}}\) and \(B \ne 0\), the \(\delta_{i}\) are the given initial conditions and f : [0, 1] × R → R is a given continuous function) arises, for example, in the modeling of the motion of a rigid plate immersed in a Newtonian fluid. It was originally proposed in 1984 (see Raja et al. 2011) and has been thoroughly discussed since. In (Mahdy and Mukhtar 2017), the inhomogeneous BTE was studied and an analytical solution was proposed. Since then, several works have solved the BTE, starting with numerical procedures for the BTE reformulated as a system of functional differential equations of order \(\frac{3}{2}\). Following the numerical route, a generalization of Taylor’s and Bessel’s collocation methods (Daftardar-Gejji and Jafari 2006; Ramadan and Al-luhaibi 2014) and the use of evolutionary computation (Mainardi 2010) provided solutions acceptable from an engineering point of view. In order to obtain a unique solution of the BTE, homogeneous initial conditions are assumed. Here, in particular, \(D_{t}^{q}\) denotes the fractional differential operator of order \(q \notin {\mathbb{N}}\) in the sense of Caputo, defined by

$$D_{t}^{q} y\left( t \right) = J^{m - q} y^{\left( m \right)} \left( t \right),$$

where m is the integer defined by the relation \(m - 1 < q < m\) and \(J^{\alpha }\) is the fractional integral operator,

$$J^{\alpha } g\left( t \right) = \frac{1}{\varGamma \left( \alpha \right)}\mathop \int \limits_{0}^{t} (t - u)^{\alpha - 1} g\left( u \right){\text{d}}u.$$
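As a quick numerical illustration (a sketch of our own; the function names and the choice of test case are not from the text), the operator \(J^{\alpha}\) can be evaluated in two independent ways and cross-checked: by the standard power rule \(J^{\alpha } t^{k} = \frac{\varGamma \left( {k + 1} \right)}{\varGamma \left( {k + 1 + \alpha } \right)}t^{k + \alpha }\), and by quadrature after the substitution \(s = (t - u)^{\alpha}\), which removes the integrable singularity of \((t-u)^{\alpha - 1}\) at \(u = t\):

```python
import math

def frac_integral_monomial(k, alpha):
    """Power rule: J^alpha t^k = Gamma(k+1)/Gamma(k+1+alpha) * t^(k+alpha).
    Returns the coefficient of t^(k+alpha)."""
    return math.gamma(k + 1) / math.gamma(k + 1 + alpha)

def frac_integral_quad(g, t, alpha, n=100000):
    """J^alpha g(t) by trapezoid quadrature after the substitution s = (t-u)^alpha:
    J^alpha g(t) = 1/(alpha*Gamma(alpha)) * int_0^{t^alpha} g(t - s^(1/alpha)) ds."""
    upper = t ** alpha
    h = upper / n
    total = 0.5 * (g(t) + g(t - upper ** (1 / alpha)))  # endpoints s = 0 and s = t^alpha
    for i in range(1, n):
        s = i * h
        total += g(t - s ** (1 / alpha))
    return total * h / (alpha * math.gamma(alpha))

# cross-check: J^{1/2} of g(u) = u at t = 1 should equal Gamma(2)/Gamma(2.5)
print(frac_integral_monomial(1, 0.5))               # both ≈ 0.75225
print(frac_integral_quad(lambda u: u, 1.0, 0.5))
```

The substitution-based quadrature is only meant as a sanity check; in the examples below \(J^{\alpha}\) is always applied to monomials, where the power rule is exact.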

2 Different Approaches to the New Iterative Method (NIM)

Over the past decades, many mathematicians have introduced and developed the NIM. Working with this technique, they devised several approaches so that it can handle ordinary as well as partial differential equations. In recent years, attention has focused on extending the NIM to all types of fractional differential equations. We discuss each approach in detail and then introduce our own approach, which yields the exact solution instead of an approximate one for some special problems.

2.1 First Approach

To describe the idea of the first approach of the NIM (Manafian et al. 2014; Podlubny 1999; Raja et al. 2011; Ramadan and Al-luhaibi 2014, 2015), consider the following general functional equation

$$y\left( t \right) = g\left( t \right) + N\left( {y\left( t \right)} \right),$$
(2)

where N is a nonlinear operator and g is a known function. We seek a solution y having the series form

$$y\left( t \right) = \sum\limits_{i = 0}^{\infty } {y_{i} \left( t \right)} .$$

The operator N can be decomposed into the following

$$N\left( {\sum\limits_{i = 0}^{\infty } {y_{i} } } \right) = N\left( {y_{0} } \right) + \sum\limits_{i = 0}^{\infty } {\left\{ {N\left( {\sum\limits_{j = 0}^{i} {y_{j} } } \right) - N\left( {\sum\limits_{j = 0}^{i - 1} {y_{j} } } \right)} \right\}}$$
(3)

from Eqs. (2) and (3)

$$\sum\limits_{i = 0}^{\infty } {y_{i} } = g\left( t \right) + N\left( {y_{0} } \right) + \sum\limits_{i = 0}^{\infty } {\left\{ {N\left( {\sum\limits_{j = 0}^{i} {y_{j} } } \right) - N\left( {\sum\limits_{j = 0}^{i - 1} {y_{j} } } \right)} \right\}} .$$
(4)

We define the following recurrence relation

$$\begin{aligned} & y_{0} = g\left( t \right), \\ & y_{1} = N\left( {y_{0} } \right), \\ & {\begin{aligned} y_{n + 1} & = N\left( {y_{0} + y_{1} + \cdots + y_{n} } \right) \\ &\quad- N\left( {y_{0} + y_{1} + \cdots + y_{n - 1} } \right),\quad n = 1,2, \ldots \\ \end{aligned}}\end{aligned}$$

The k-term series solution of the general Eq. (2) takes the following form:

$$y\left( t \right) = y_{0} + y_{1} + \cdots + y_{k - 1} .$$
(5)
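To make the recurrence concrete in the simplest possible setting, here is a small scalar sketch (a toy example of our own, not from the text): solving the fixed-point equation \(y = 0.5 + y^{2}/4\), whose relevant root is \(2 - \sqrt{2}\), by summing the NIM terms \(y_{0} = g\), \(y_{1} = N(y_{0})\), \(y_{n + 1} = N(y_{0} + \cdots + y_{n}) - N(y_{0} + \cdots + y_{n - 1})\):

```python
def nim_series(g, N, k):
    """Sum the k-term NIM series for y = g + N(y) (scalar case).
    By telescoping, the partial sums satisfy S_{n+1} = g + N(S_n)."""
    terms = [g]
    partial = g
    prev_N = 0.0
    for _ in range(k - 1):
        cur_N = N(partial)
        terms.append(cur_N - prev_N)   # y_{n+1} = N(S_n) - N(S_{n-1})
        partial += terms[-1]
        prev_N = cur_N
    return partial

# toy functional equation y = 0.5 + y**2/4, exact root y = 2 - sqrt(2)
print(nim_series(0.5, lambda y: y * y / 4.0, 25))  # ≈ 0.5857864376
```

Since \(N\) is a contraction near the root (\(|N'(y)| = |y|/2 < 1\)), the terms decay geometrically and the 25-term sum agrees with \(2 - \sqrt{2}\) to machine precision.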

2.2 Second Approach

The basic mathematical theory of the second approach to the NIM is described as follows. This approach is preferred for nonlinear problems. Consider the following nonlinear equation:

$$y\left( t \right) = g\left( t \right) + \varepsilon \left( {y\left( t \right)} \right) + N\left( {y\left( t \right)} \right),$$
(6)

where \(\varepsilon\) and N are, respectively, linear and nonlinear operators acting on \(y\left( t \right)\), and \(g\left( t \right)\) is a known function. We seek a solution y having the series form

$$y\left( t \right) = \sum\limits_{k = 0}^{\infty } {y_{k} \left( t \right)} .$$
(7)

The linear operator \(\varepsilon\) can be decomposed into the following

$$\sum\limits_{k = 0}^{\infty } {\varepsilon \left( {y_{k} } \right)} = \varepsilon \left( {\sum\limits_{k = 0}^{\infty } {y_{k} } } \right).$$
(8)

The nonlinear operator N can be decomposed into the following

$$N\left( {\sum\limits_{k = 0}^{\infty } {y_{k} } } \right) = N\left( {y_{0} } \right) + \sum\limits_{k = 1}^{\infty } {\left\{ {N\left( {\sum\limits_{j = 0}^{k} {y_{j} } } \right) - N\left( {\sum\limits_{j = 0}^{k - 1} {y_{j} } } \right)} \right\}}$$
(9)

from Eqs. (6)–(9)

$$\sum\limits_{i = 0}^{\infty } {y_{i} } = g\left( t \right) + \varepsilon \left( {\sum\limits_{k = 0}^{\infty } {y_{k} } } \right) + N\left( {y_{0} } \right) + \sum\limits_{i = 0}^{\infty } {\left\{ {N\left( {\sum\limits_{j = 0}^{i} {y_{j} } } \right) - N\left( {\sum\limits_{j = 0}^{i - 1} {y_{j} } } \right)} \right\}} .$$

We define the following recurrence relation

$$\begin{aligned} & y_{0} = g\left( t \right), \\ & y_{1} = \varepsilon \left( {y_{0} } \right) + N\left( {y_{0} } \right), \\ & y_{2} = \varepsilon \left( {y_{1} } \right) + N\left( {y_{0} + y_{1} } \right) - N\left( {y_{0} } \right), \\ &{\begin{aligned} y_{n + 1} & = \varepsilon \left( {y_{n} } \right) + N\left( {y_{0} + y_{1} + \cdots + y_{n} } \right) \\&\quad- N\left( {y_{0} + y_{1} + \cdots + y_{n - 1} } \right),\quad n = 1,2, \ldots \\ \end{aligned}} \end{aligned}$$
(10)

The k-term series solution of the general Eq. (6) takes the following form:

$$y\left( t \right) = y_{0} + y_{1} + \cdots + y_{k - 1} .$$
(11)
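Again as a scalar sketch (a toy example of our own, not from the text), the second-approach recurrence (10) can be run on \(y = 0.2 + 0.3y + y^{2}/10\), where \(\varepsilon(y) = 0.3y\) is the linear part, \(N(y) = y^{2}/10\) the nonlinear part, and the relevant exact root is \((7 - \sqrt{41})/2\):

```python
def nim2_series(g, lin, N, k):
    """k-term series for y = g + lin(y) + N(y), per recurrence (10):
    y0 = g, y1 = lin(y0) + N(y0),
    y_{n+1} = lin(y_n) + N(S_n) - N(S_{n-1}) for n >= 1."""
    terms = [g]
    S, S_prev = g, None
    for n in range(k - 1):
        new = lin(terms[-1]) + N(S) - (N(S_prev) if S_prev is not None else 0.0)
        S_prev = S
        S += new
        terms.append(new)
    return S

# toy equation y = 0.2 + 0.3*y + y**2/10, exact root (7 - sqrt(41))/2
print(nim2_series(0.2, lambda y: 0.3 * y, lambda y: y * y / 10.0, 40))  # ≈ 0.2984378813
```

Because \(\varepsilon\) is linear, summing \(\varepsilon(y_{n})\) over all terms reproduces \(\varepsilon\) of the whole series, so the limit of the partial sums solves the full equation.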

Example 2.1

Consider the following nonlinear initial value problem (Yuzbas 2013)

$$y^{{{\prime \prime \prime }}} \left( t \right) + D_{t}^{{\frac{5}{2}}} y\left( t \right) + y^{2} \left( t \right) = t^{4} ,\quad y\left( 0 \right) = y^{{\prime }} \left( 0 \right) = 0,\;y^{{\prime \prime }} \left( 0 \right) = 2.$$
(12)

Isolating the unknown function

$$y\left( t \right) = J^{{\frac{5}{2}}} \left[ { - \,D^{3} y - y^{2} } \right] + J^{{\frac{5}{2}}} \left[ {t^{4} } \right] + t^{2}$$
(13)

by using Eq. (4), we get

$$\begin{aligned} y_{0} & = t^{2} + J^{{\frac{5}{2}}} \left[ {t^{4} } \right] = t^{2} + \frac{\varGamma \left( 5 \right)}{{\varGamma \left( {7.5} \right)}}t^{{\frac{13}{2}}} \quad {\text{and}}\quad N\left( y \right) = J^{{\frac{5}{2}}} \left[ { - D^{3} y - y^{2} } \right] \\ y_{1} & = N\left( {y_{0} } \right) = J^{{\frac{5}{2}}} \left[ { - \,D^{3} y_{0} - y_{0}^{2} } \right] \\ & = - \frac{\varGamma \left( 5 \right)}{{\varGamma \left( {7.5} \right)}}t^{{\frac{13}{2}}} - \frac{1}{30}t^{6} - \frac{17}{221760}t^{{\frac{39}{2}}} - \frac{70368744177664}{{4030227582157711875\pi^{{\frac{3}{2}}} }}t^{57/2} . \\ \end{aligned}$$

We get only an approximate solution of the form

$$y\left( t \right) = t^{2} - \frac{1}{30}t^{6} - \frac{17}{221760}t^{{\frac{39}{2}}} - \frac{70368744177664}{{4030227582157711875\pi^{{\frac{3}{2}}} }}t^{57/2} + \cdots$$
(14)

The graph of this solution is shown in figure a.

The same problem will be solved later on in this paper using the newly introduced method.

Example 2.2

Consider the time-dependent one-dimensional heat conduction equation (Torvik and Bagley 1984) as follows:

$$D_{t}^{\alpha } y(x,t) - a(y^{3} (x,t))_{xx} + y^{3} (x,t) - y(x,t) = 0,\quad y(x,0) = \exp \left( {\frac{x}{3\sqrt a }} \right).$$
(15)

Isolating the unknown function

$$y\left( {x,t} \right) = J^{\alpha } [a(y^{3} )_{xx} - y^{3} ] + J^{\alpha } \left[ y \right] + \exp \left( {\frac{x}{3\sqrt a }} \right)$$
(16)

by using the second approach of NIM in Eq. (10), suppose that

$$\begin{aligned} y_{0} & = \exp \left( {\frac{x}{3\sqrt a }} \right),\quad \varepsilon (y) = J^{\alpha } [y]\quad {\text{and}}\quad N(y) = J^{\alpha } [a(y^{3} )_{xx} - y^{3} ] \\ y_{1} & = \varepsilon (y_{0} ) + N(y_{0} ) = J^{\alpha } [y_{0} ] + J^{\alpha } [a(y_{0}^{3} )_{xx} - y_{0}^{3} ] \\ & = J^{\alpha } \left( {\exp \left( {\frac{x}{3\sqrt a }} \right)} \right) = \exp \left( {\frac{x}{3\sqrt a }} \right)\frac{{t^{\alpha } }}{\varGamma (\alpha + 1)} \\ y_{2} & = \varepsilon (y_{1} ) + N(y_{0} + y_{1} ) - N(y_{0} ) \\ & = \exp \left( {\frac{x}{3\sqrt a }} \right)\frac{{t^{2\alpha } }}{\varGamma (2\alpha + 1)}. \\ \end{aligned}$$

We get the exact solution taking the form

$$\begin{aligned} y\left( t \right) & = \exp \left( {\frac{x}{3\sqrt a }} \right)\left( {1 + \frac{{t^{\alpha } }}{{\varGamma \left( {\alpha + 1} \right)}} + \frac{{t^{2\alpha } }}{{\varGamma \left( {2\alpha + 1} \right)}} + \frac{{t^{3\alpha } }}{{\varGamma \left( {3\alpha + 1} \right)}} + \cdots } \right) \\ & = \exp \left( {\frac{x}{3\sqrt a }} \right)E_{\alpha } \left( {t^{\alpha } } \right), \\ \end{aligned}$$
(17)

where \(E_{\alpha } = E_{\alpha ,1}\) denotes the Mittag-Leffler function.
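The series in the exact solution can be checked numerically by truncation; a small sketch (the helper name and the sample point are ours). The factor multiplying the exponential is \(\sum_{n \ge 0} t^{n\alpha }/\varGamma (n\alpha + 1)\), and for \(\alpha = 1\) this reduces to the ordinary exponential series \(e^{t}\):

```python
import math

def mittag_leffler(z, alpha, n_terms=60):
    """Truncated Mittag-Leffler series E_alpha(z) = sum_{n>=0} z^n / Gamma(n*alpha + 1)."""
    return sum(z ** n / math.gamma(n * alpha + 1) for n in range(n_terms))

# sanity check at alpha = 1: E_1(t) = e^t, so the solution factor is exp(t)
t = 0.7
print(mittag_leffler(t, 1.0))   # ≈ exp(0.7) ≈ 2.0137527
```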

3 One-Step New Iterative Method (OSNIM)

A variety of problems in physics, chemistry, biology and engineering can be formulated in terms of the nonlinear functional equation

$$y = N\left( y \right) + g,$$
(18)

where g is a given function and N is a nonlinear operator. Equation (18) covers integral equations, ordinary differential equations (ODEs), partial differential equations (PDEs), differential equations of fractional order, systems of ODEs/PDEs and so on. Various methods such as the Laplace and Fourier transforms and Green’s function method have been used to solve linear equations. For solving nonlinear equations, however, one has to resort to numerical or iterative methods. The Adomian decomposition method (ADM) has proved to be a useful tool for solving the functional Eq. (18) (Adomian 1988, 1994; Daftardar-Gejji and Jafari 2005). Although the study of fractional differential equations (FDEs) in general, and of the BTE in particular, has been hampered by the absence of efficient and accurate techniques, deriving approximate solutions of FDEs remains an active topic that calls for robust and reliable schemes. Daftardar-Gejji and Jafari proposed an iterative method, called the new iterative method (NIM), for finding approximate solutions of differential equations (Al-Luhaibi 2015; Samreen et al. 2018). Unlike the ADM, the NIM does not require the calculation of tedious Adomian polynomials for nonlinear terms; unlike the VIM, it does not require the determination of a Lagrange multiplier; and unlike numerical methods, it does not require discretization. The method handles linear and nonlinear equations in an easy and straightforward way. Recently, it has been extended to differential equations of fractional order (Al-Luhaibi 2015; Kazem 2013; Podlubny 1999). It yields solutions in the form of rapidly converging infinite series that can be effectively approximated by computing only the first few terms.

In the present study, we implement the NIM to solve the fractional-order BTE, and we generalize the algorithm in order to make the BTE easier to solve. To describe the idea of the new generalized algorithm for the NIM, consider the following general Bagley–Torvik equation (BTE)

$$\left\{ {\begin{array}{*{20}l} {y^{\left( i \right)} \left( 0 \right) = \delta_{i} ,\quad i = 0, 1, 2, \ldots , r - 1,\;\;r \in {\mathbb{N}}} \hfill \\ {Ay^{\left( n \right)} \left( t \right) + BD_{t}^{\alpha } y\left( t \right) + Cy\left( t \right) = f\left( t \right),} \hfill \\ \end{array} } \right.$$
(19)

then by isolating the fractional derivative term

$$D_{t}^{\alpha } y\left( t \right) = f_{1} \left( t \right) - A_{1} y^{\left( n \right)} \left( t \right) - C_{1} y\left( t \right)$$
(20)

where \(f_{1} \left( t \right) = \frac{f\left( t \right)}{B},\;A_{1} = \frac{A}{B}\) and \(C_{1} = \frac{C}{B}\), and then

$$y\left( t \right) = J^{\alpha } \left[ {f_{1} \left( t \right) - A_{1} y^{(n)} \left( t \right) - C_{1} y\left( t \right)} \right] + \sum\limits_{i = 0}^{r - 1} {y^{(i)} (0)} \frac{{t^{i} }}{i!}.$$
(21)

Suppose we divide this equation into two parts as follows

$$y\left( t \right) = N\left( {y\left( t \right)} \right) + g\left( t \right),$$
(22)

where

$$N\left( {y\left( t \right)} \right) = J^{\alpha } \left[ {f_{1} \left( t \right) - A_{1} y^{\left( n \right)} \left( t \right) - C_{1} y\left( t \right)} \right],$$
(23)

where N, in general a nonlinear operator mapping a Banach space B into itself, is here applied to linear functions in the case of the BTE, and \(g\left( t \right)\) is a known function defined as

$$g\left( t \right) = \sum\limits_{i = 0}^{r - 1} {y^{(i)} (0)} \frac{{t^{i} }}{i!},$$
(24)

where we are looking for a solution \(y\left( t \right)\) of Eq. (22) having the series form

$$y\left( t \right) = \sum\limits_{i = 0}^{\infty } {y_{i} \left( t \right)} .$$
(25)

The operator N can be decomposed into the following

$$N\left( {\sum\limits_{i = 0}^{\infty } {y_{i} } } \right) = N\left( {y_{0} } \right) + \sum\limits_{i = 0}^{\infty } {\left\{ {N\left( {\sum\limits_{j = 0}^{i} {y_{j} } } \right) - N\left( {\sum\limits_{j = 0}^{i - 1} {y_{j} } } \right)} \right\}}$$
(26)

from Eqs. (22), (25) and (26)

$$\sum\limits_{i = 0}^{\infty } {y_{i} } = g\left( t \right) + N\left( {y_{0} } \right) + \sum\limits_{i = 0}^{\infty } {\left\{ {N\left( {\sum\limits_{j = 0}^{i} {y_{j} } } \right) - N\left( {\sum\limits_{j = 0}^{i - 1} {y_{j} } } \right)} \right\}} .$$
(27)

Computing the successive \(y_{i}\)

$$\begin{aligned} y_{0} & = g(t) = \sum\limits_{i = 0}^{r - 1} {y^{(i)} (0)\frac{{t^{i} }}{i!}} ,\quad r - 1 < \alpha \le r \\ y_{1} & = J^{\alpha } \left[ {f_{1} (t) - A_{1} y_{0}^{(n)} - C_{1} y_{0} (t)} \right] \\ y_{2} & = N\left( {y_{0} + y_{1} } \right) - N\left( {y_{0} } \right) \\ & = J^{\alpha } \left[ {f_{1} \left( t \right) - A_{1} y_{0}^{\left( n \right)} \left( t \right) - C_{1} y_{0} \left( t \right) - A_{1} y_{1}^{\left( n \right)} \left( t \right) - C_{1} y_{1} \left( t \right)} \right] \\ & \quad - \,J^{\alpha } \left[ {f_{1} \left( t \right) - A_{1} y_{0}^{\left( n \right)} \left( t \right) - C_{1} y_{0} \left( t \right)} \right] \\ & = J^{\alpha } \left[ { - \,A_{1} y_{1}^{\left( n \right)} \left( t \right) - C_{1} y_{1} \left( t \right)} \right] \\ y_{3} & = N\left( {y_{0} + y_{1} + y_{2} } \right) - N\left( {y_{0} + y_{1} } \right) \\ & = J^{\alpha } \left[ {f_{1} \left( t \right) - A_{1} y_{0}^{\left( n \right)} \left( t \right) - C_{1} y_{0} \left( t \right) - A_{1} y_{1}^{\left( n \right)} \left( t \right) - C_{1} y_{1} \left( t \right) }\right.\\ &\quad\left.{- A_{1} y_{2}^{\left( n \right)} \left( t \right) - C_{1} y_{2} \left( t \right)} \right] \\ & \quad - \,J^{\alpha } \left[ {f_{1} \left( t \right) - A_{1} y_{0}^{\left( n \right)} \left( t \right) - C_{1} y_{0} \left( t \right) - A_{1} y_{1}^{\left( n \right)} \left( t \right) - C_{1} y_{1} \left( t \right)} \right] \\ & = J^{\alpha } \left[ { - \,A_{1} y_{2}^{\left( n \right)} \left( t \right) - C_{1} y_{2} \left( t \right)} \right] \\ & \vdots \\ \end{aligned}$$

from this, we can deduce the following

$$\begin{aligned} y_{0} & = g\left( t \right) = \sum\limits_{i = 0}^{r - 1} {y^{(i)} (0)\frac{{t^{i} }}{i!}} \\ y_{1} & = J^{\alpha } \left[ {f_{1} \left( t \right) - A_{1} y_{0}^{\left( n \right)} \left( t \right) - C_{1} y_{0} \left( t \right)} \right] \\ y_{2} & = J^{\alpha } \left[ { - \,A_{1} y_{1}^{\left( n \right)} \left( t \right) - C_{1} y_{1} \left( t \right)} \right] \\ y_{3} & = J^{\alpha } \left[ { - \,A_{1} y_{2}^{\left( n \right)} \left( t \right) - C_{1} y_{2} \left( t \right)} \right] \\ & \vdots \\ y_{m} & = J^{\alpha } \left[ { - \,A_{1} y_{m - 1}^{\left( n \right)} \left( t \right) - C_{1} y_{m - 1} \left( t \right)} \right]; \\ \end{aligned}$$
(28)

then, the k-term series solution will be in the form

$$y\left( t \right) = y_{0} + y_{1} + y_{2} + \cdots + y_{k - 1} .$$
(29)

4 Convergence and Error Analysis

In this section, we prove that the method converges for the BTE and that the error is negligible, following the argument of A. A. Hemeda (Enesiz et al. 2010). Using Eq. (27), we define the following recurrence relation

$$\begin{aligned} & y_{0} = g\left( t \right), \\& y_{1} = N\left( {y_{0} } \right), \\ &{\begin{aligned} y_{n + 1} & = N\left( {y_{0} + y_{1} + \cdots + y_{n} } \right) \\ &\quad- N\left( {y_{0} + y_{1} + \cdots + y_{n - 1} } \right),\quad n = 1,2, \ldots \\ \end{aligned}} \end{aligned}$$
(30)

Let \(e = u^{*} - u\), where \(u^{*}\) is the exact solution, u is the approximate solution and e is the error in the solution of Eq. (22); obviously, e satisfies Eq. (22); that is,

$$e\left( t \right) = g\left( t \right) + N\left( {e\left( t \right)} \right);$$

the recurrence relation in Eq. (30) becomes

$$\begin{aligned} & e_{0} = g\left( t \right), \\ & e_{1} = N\left( {e_{0} } \right), \\ & {\begin{aligned} e_{n + 1} & = N\left( {e_{0} + e_{1} + \cdots + e_{n} } \right) \\ &\quad- N\left( {e_{0} + e_{1} + \cdots + e_{n - 1} } \right),\quad n = 1,2, \ldots \\ \end{aligned}} \end{aligned}$$

If

$$\left\| {N\left( x \right) - N\left( y \right)} \right\| < k\left\| {x - y} \right\|,\quad 0 < k < 1,\;{\text{then}}$$
$$\begin{aligned} e_{0} & = g\left( t \right), \\ \left\| {e_{1} } \right\| & = \left\| {N\left( {e_{0} } \right)} \right\| \le k\left\| {e_{0} } \right\|, \\ \left\| {e_{2} } \right\| & = \left\| {N\left( {e_{0} + e_{1} } \right) - N\left( {e_{0} } \right)} \right\| \le k\left\| {e_{1} } \right\| \le k^{2} \left\| {e_{0} } \right\|, \\ \left\| {e_{3} } \right\| & = \left\| {N\left( {e_{0} + e_{1} + e_{2} } \right) - N\left( {e_{0} + e_{1} } \right)} \right\| \le k\left\| {e_{2} } \right\| \le k^{3} \left\| {e_{0} } \right\| \\ \left\| {e_{n + 1} } \right\| & = \left\| {N\left( {e_{0} + e_{1} + \cdots + e_{n} } \right) - N\left( {e_{0} + e_{1} + \cdots + e_{n - 1} } \right)} \right\| \\ & \le k\left\| {e_{n} } \right\| \le k^{n + 1} \left\| {e_{0} } \right\|,\quad n = 1,2, \ldots \\ \end{aligned}$$

Thus, \(e_{n + 1} \to 0\) as \(n \to \infty ,\) which proves the convergence of the new iterative method for solving the general functional Eq. (22).
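The geometric decay \(\left\| {e_{n + 1} } \right\| \le k\left\| {e_{n} } \right\|\) can be observed numerically. In this sketch (a toy contraction of our own, not from the text), \(N(y) = \tfrac{1}{2}\sin y\) has Lipschitz constant \(k = \tfrac{1}{2}\), and each NIM term is bounded by half the previous one:

```python
import math

def N(y):
    return 0.5 * math.sin(y)   # Lipschitz constant k = 1/2, since |N'(y)| <= 1/2

g = 1.0
terms = [g]
S_prev, S = 0.0, g
for n in range(10):
    new = N(S) - (N(S_prev) if n > 0 else 0.0)   # y_{n+1} = N(S_n) - N(S_{n-1})
    terms.append(new)
    S_prev, S = S, S + new

ratios = [abs(terms[i + 1]) / abs(terms[i]) for i in range(1, len(terms) - 1)]
print(all(r <= 0.5 for r in ratios))   # True: decay ratio never exceeds k
```

In fact the observed ratios shrink well below \(k\), since near the fixed point the local contraction factor is \(|N'(y^{*})| \approx 0.036\), much smaller than the global Lipschitz bound.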

5 Illustrating Examples

In this section, we apply the algorithm presented in Sect. 3 to some special types of linear and nonlinear fractional differential equations. The numerical outcomes show that the method is very convenient and efficient.

Example 5.1

Consider the following BTE (Abu and Maayah 2018)

$$y^{{\prime \prime }} \left( t \right) + D_{t}^{{\frac{3}{2}}} y\left( t \right) + y\left( t \right) = 1 + t$$
(31)

subject to

$$y\left( 0 \right) = y^{{\prime }} \left( 0 \right) = 1,$$
(32)

where the exact solution is \(y\left( t \right) = 1 + t.\) By using Eq. (28), we get

$$\begin{aligned} y_{0} & = 1 + t \\ y_{1} & = J^{{\frac{3}{2}}} \left[ {1 + t - 1 - t} \right] = 0 \\ y_{2} & = J^{{\frac{3}{2}}} \left[ { - \,0 - 0} \right] = 0; \\ \end{aligned}$$
(33)

then, we can find out easily that the exact solution will take the form

$$y\left( t \right) = y_{0} + y_{1} + y_{2} + \cdots = 1 + t.$$
(34)

It is worth mentioning that A. A. Hemeda in 2013 (Raja et al. 2011) solved the same problem with a slight difference: he chose a different \(y_{0}\), which led to an approximate solution. His 6-term approximation took the form

$$\begin{aligned} y\left( t \right) & = \sum\limits_{i = 0}^{5} {y_{i} } \\ & = 1 + t - \frac{{t^{6} }}{144} - \frac{{5t^{7} }}{5040} - \frac{{t^{9} }}{36288} - \frac{{t^{10} }}{362880} - \frac{{t^{12} }}{479001600} \\ & \quad - \,\frac{{t^{13} }}{6227020800} - \frac{{32t^{4.5} }}{945\sqrt \pi } - \frac{{64t^{5.5} }}{10395\sqrt \pi } - \frac{{512t^{7.5} }}{405405\sqrt \pi } \\ & \quad - \,\frac{{1024t^{8.5} }}{6891885\sqrt \pi } - \frac{{512t^{10.5} }}{687465529\sqrt \pi } - \frac{{128t^{11.5} }}{1976463395\sqrt \pi }. \\ \end{aligned}$$

However, here we get the exact solution after this slight modification.
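The hand computation in (33)–(34) can be mechanized. The following sketch (the monomial-dictionary representation and all helper names are ours, not from the text) stores a function as a dictionary mapping powers of t to coefficients, applies \(J^{\alpha}\) termwise via the power rule \(J^{\alpha } t^{p} = \frac{\varGamma \left( {p + 1} \right)}{\varGamma \left( {p + 1 + \alpha } \right)}t^{p + \alpha }\), and runs the one-step recurrence (28) on Example 5.1:

```python
import math

def frac_J(poly, alpha):
    """Termwise J^alpha: t^p -> Gamma(p+1)/Gamma(p+1+alpha) * t^(p+alpha)."""
    return {p + alpha: c * math.gamma(p + 1) / math.gamma(p + 1 + alpha)
            for p, c in poly.items()}

def deriv(poly, n):
    """Termwise n-th integer derivative."""
    out = {}
    for p, c in poly.items():
        if p == int(p) and p < n:    # an integer power below n differentiates to 0
            continue
        for i in range(n):
            c *= p - i
        out[p - n] = out.get(p - n, 0.0) + c
    return out

def add(*scaled):
    """Linear combination of monomial dicts, dropping (numerically) zero terms."""
    out = {}
    for scale, poly in scaled:
        for p, c in poly.items():
            out[p] = out.get(p, 0.0) + scale * c
    return {p: c for p, c in out.items() if abs(c) > 1e-12}

def osnim(f1, A1, C1, alpha, n, y0, k):
    """One-step NIM, Eq. (28): y1 = J^a[f1 - A1 y0^(n) - C1 y0],
    y_m = J^a[-A1 y_{m-1}^(n) - C1 y_{m-1}] for m >= 2; returns the k-term sum."""
    terms = [dict(y0)]
    terms.append(frac_J(add((1.0, f1), (-A1, deriv(y0, n)), (-C1, y0)), alpha))
    for _ in range(k - 2):
        terms.append(frac_J(add((-A1, deriv(terms[-1], n)), (-C1, terms[-1])), alpha))
    return add(*[(1.0, t) for t in terms])

# Example 5.1: y'' + D^{3/2} y + y = 1 + t, y(0) = y'(0) = 1, exact y = 1 + t
f1 = {0: 1.0, 1: 1.0}   # f(t)/B with A = B = C = 1
y0 = {0: 1.0, 1: 1.0}   # 1 + t from the initial conditions
print(osnim(f1, 1.0, 1.0, 1.5, 2, y0, k=4))   # {0: 1.0, 1: 1.0}, i.e. y = 1 + t
```

As in the hand computation, \(y_{1}\) is the empty dictionary (the bracket vanishes identically), so every later term vanishes too and the sum reduces to \(y_{0} = 1 + t\).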

Example 5.2

Consider the following BTE (Abu and Maayah 2018)

$$y^{{\prime \prime }} \left( t \right) + D_{t}^{{\frac{3}{2}}} y\left( t \right) + y\left( t \right) = 2 + 4\sqrt {\frac{t}{\pi }} + t^{2}$$
(35)

subject to

$$y\left( 0 \right) = y^{{\prime }} \left( 0 \right) = 0,$$
(36)

where the exact solution is \(y\left( t \right) = t^{2} .\) By using Eq. (28), we get

$$\begin{aligned} y_{0} & = 0 \\ y_{1} & = J^{{\frac{3}{2}}} \left[ {2 + 4\sqrt {\frac{t}{\pi }} + t^{2} } \right] = \frac{{8t^{{\frac{3}{2}}} }}{3\sqrt \pi } + t^{2} + \frac{{32t^{{\frac{7}{2}}} }}{105\sqrt \pi } \\ y_{2} & = J^{{\frac{3}{2}}} \left[ { - \,y_{1}^{\prime \prime } \left( t \right) - y_{1} \left( t \right)} \right] = - 2t - \frac{{8t^{{\frac{3}{2}}} }}{3\sqrt \pi } - \frac{{2t^{3} }}{3} \\ &\quad- \frac{{32t^{{\frac{7}{2}}} }}{105\sqrt \pi } - \frac{{t^{5} }}{60} \\ y_{3} & = J^{{\frac{3}{2}}} \left[ { - \,y_{2}^{\prime \prime } \left( t \right) - y_{2} \left( t \right)} \right] = 2t + \frac{{16t^{{\frac{5}{2}}} }}{5\sqrt \pi } + \frac{{2t^{3} }}{3} \\ &\quad+ \frac{{64t^{{\frac{9}{2}}} }}{315\sqrt \pi } + \frac{{t^{5} }}{60} + \frac{{256t^{{\frac{13}{2}}} }}{135135\sqrt \pi } \\ & \vdots \\ \end{aligned}$$

Computing the higher-order terms, we notice that many noise terms appear. After these noise terms cancel, we easily find that the exact solution takes the form

$$y\left( t \right) = y_{0} + y_{1} + y_{2} + \cdots = t^{2} .$$
(37)
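The exact solution can also be verified directly against Eq. (35) using the Caputo power rule \(D_{t}^{q} t^{p} = \frac{\varGamma \left( {p + 1} \right)}{\varGamma \left( {p + 1 - q} \right)}t^{p - q}\) for \(p > q\); the sketch below (helper name ours) checks the residual at a sample point:

```python
import math

def caputo_power(p, q):
    """Caputo D^q of t^p for p > q: returns (coefficient, new power),
    per Gamma(p+1)/Gamma(p+1-q) * t^(p-q)."""
    return math.gamma(p + 1) / math.gamma(p + 1 - q), p - q

# y(t) = t^2: y'' = 2, and D^{3/2} t^2 = (2/Gamma(3/2)) t^{1/2} = 4 sqrt(t/pi)
c, p = caputo_power(2.0, 1.5)
t = 0.9
lhs = 2.0 + c * t ** p + t ** 2                      # y'' + D^{3/2} y + y
rhs = 2.0 + 4.0 * math.sqrt(t / math.pi) + t ** 2    # right-hand side of (35)
print(abs(lhs - rhs))   # ~ 0 (machine precision)
```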

Example 5.3

Consider the following BTE (Abu and Maayah 2018)

$$y^{{\prime \prime }} \left( t \right) + \frac{1}{2}D_{t}^{{\frac{1}{2}}} y\left( t \right) + y\left( t \right) = 3 + t^{2} \left( {\frac{1}{{\varGamma \left( {2.5} \right)}}t^{ - 0.5} + 1} \right)$$
(38)

subject to

$$y\left( 0 \right) = 1,\quad y^{{\prime }} \left( 0 \right) = 0,$$
(39)

where the exact solution is \(y\left( t \right) = t^{2} + 1.\) By using Eq. (28), we get

$$\begin{aligned} y_{0} & = 1 \\ y_{1} & = J^{{\frac{1}{2}}} \left[ {4 + \frac{2}{{\varGamma \left( {2.5} \right)}}t^{{\frac{3}{2}}} + 2t^{2} } \right] = \frac{8\sqrt t }{\sqrt \pi } + t^{2} + \frac{{32t^{{\frac{5}{2}}} }}{15\sqrt \pi } \\ y_{2} & = J^{{\frac{1}{2}}} \left[ { - 2y_{1}^{{\prime \prime }} \left( t \right) - 2y_{1} \left( t \right)} \right] = - 16t - \frac{8\sqrt t }{\sqrt \pi } - \frac{{4t^{3} }}{3} - \frac{{32t^{{\frac{5}{2}}} }}{15\sqrt \pi } \\ y_{3} & = J^{{\frac{1}{2}}} \left[ { - 2y_{2}^{{\prime \prime }} \left( t \right) - 2y_{2} \left( t \right)} \right] = 16t + \frac{{4t^{3} }}{3} + \frac{{64t^{{\frac{3}{2}}} }}{\sqrt \pi } + \frac{{256t^{{\frac{7}{2}}} }}{105\sqrt \pi } \\ & \vdots \\ \end{aligned}$$

Computing the higher-order terms, we notice that many noise terms appear. After these noise terms cancel, we easily find that the exact solution takes the form

$$y\left( t \right) = y_{0} + y_{1} + y_{2} + \cdots = t^{2} + 1$$
(40)

Example 5.4

Consider the following BTE (Hemeda 2013)

$$D_{t}^{\alpha } y\left( t \right) + y\left( t \right) = 0$$
(41)

subject to

$$y\left( 0 \right) = 1,\quad y^{{\prime }} \left( 0 \right) = 0,$$
(42)

the second condition applies only when \(\alpha > 1.\) In this problem, \(A_{1} = f_{1} \left( t \right) = 0.\) By using Eq. (28), we get

$$\begin{aligned} y_{0} & = 1 \\ y_{1} & = J^{\alpha } \left[ { - \,y_{0} } \right] = J^{\alpha } \left[ { - 1} \right] = \frac{{ - t^{\alpha } }}{{\varGamma \left( {\alpha + 1} \right)}} \\ y_{2} & = J^{\alpha } \left[ { - \,y_{1} } \right] = J^{\alpha } \left[ {\frac{{t^{\alpha } }}{{\varGamma \left( {\alpha + 1} \right)}}} \right] = \frac{{t^{2\alpha } }}{{\varGamma \left( {2\alpha + 1} \right)}} \\ y_{3} & = J^{\alpha } \left[ { - \,y_{2} } \right] = J^{\alpha } \left[ { - \frac{{t^{2\alpha } }}{{\varGamma \left( {2\alpha + 1} \right)}}} \right] = \frac{{ - t^{3\alpha } }}{{\varGamma \left( {3\alpha + 1} \right)}} \\ y_{4} & = J^{\alpha } \left[ { - \,y_{3} } \right] = J^{\alpha } \left[ {\frac{{t^{3\alpha } }}{{\varGamma \left( {3\alpha + 1} \right)}}} \right] = \frac{{t^{4\alpha } }}{{\varGamma \left( {4\alpha + 1} \right)}} \\ & \vdots \\ \end{aligned}$$

and so on; the higher-order terms continue in the same pattern. We then deduce that

$$\begin{aligned} &y\left( t \right) = y_{0} + y_{1} + y_{2} + \cdots \hfill \\&\;\,\quad= 1 - \frac{{t^{\alpha } }}{{\varGamma \left( {\alpha + 1} \right)}} + \frac{{t^{2\alpha } }}{{\varGamma \left( {2\alpha + 1} \right)}} - \frac{{t^{3\alpha } }}{{\varGamma \left( {3\alpha + 1} \right)}} + \frac{{t^{4\alpha } }}{{\varGamma \left( {4\alpha + 1} \right)}} - \cdots \hfill \\ &\,\;\quad= \sum\limits_{n = 0}^{\infty } {\frac{{( - \,t^{\alpha } )^{n} }}{{\varGamma \left( {n\alpha + 1} \right)}} } = E_{\alpha } ( - \,t^{\alpha } ) \hfill \\ \end{aligned}$$

which is the exact solution.
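The terms \(y_{0}, y_{1}, y_{2}, \ldots\) computed above can be checked numerically: applying the Caputo power rule termwise to the truncated series and adding the series itself should give a residual of \(D_{t}^{\alpha } y + y\) that vanishes up to truncation. A sketch (helper names, sample \(\alpha\) and t are ours):

```python
import math

def series_terms(alpha, n_terms):
    """Terms (coefficient, power) of y = sum_{n>=0} (-1)^n t^{n*alpha}/Gamma(n*alpha+1)."""
    return [((-1) ** n / math.gamma(n * alpha + 1), n * alpha) for n in range(n_terms)]

def caputo_terms(terms, q):
    """Termwise Caputo D^q via the power rule; constants differentiate to zero."""
    return [(c * math.gamma(p + 1) / math.gamma(p + 1 - q), p - q)
            for c, p in terms if p > 0]

alpha, t = 0.8, 0.5
y = series_terms(alpha, 40)
dy = caputo_terms(y, alpha)
residual = sum(c * t ** p for c, p in dy) + sum(c * t ** p for c, p in y)
print(abs(residual))   # tiny: the truncated series satisfies D^alpha y + y = 0
```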

Example 5.5

Consider the following BTE (Hemeda 2013)

$$D_{t}^{\alpha } y\left( t \right) + y\left( t \right) = \frac{{2t^{2 - \alpha } }}{{\varGamma \left( {3 - \alpha } \right)}} - \frac{{t^{1 - \alpha } }}{{\varGamma \left( {2 - \alpha } \right)}} + t^{2} - t$$
(43)

subject to

$$y\left( 0 \right) = 0,\quad y^{{\prime }} \left( 0 \right) = 0,$$
(44)

the second condition applies only when \(\alpha > 1.\) In this problem, \(A_{1} = 0.\) By using Eq. (28), we get

$$\begin{aligned} y_{0} & = 0 \\ y_{1} & = J^{\alpha } \left[ {f_{1} \left( t \right) - y_{0} } \right] = J^{\alpha } \left[ {\frac{{2t^{2 - \alpha } }}{{\varGamma \left( {3 - \alpha } \right)}} - \frac{{t^{1 - \alpha } }}{{\varGamma \left( {2 - \alpha } \right)}} + t^{2} - t} \right] \\ & = \frac{{2t^{2 + \alpha } }}{{\varGamma \left( {3 + \alpha } \right)}} - \frac{{t^{1 + \alpha } }}{{\varGamma \left( {2 + \alpha } \right)}} + t^{2} - t \\ y_{2} & = J^{\alpha } \left[ { - \,y_{1} } \right] = J^{\alpha } \left[ { - \frac{{2t^{2 + \alpha } }}{{\varGamma \left( {3 + \alpha } \right)}} + \frac{{t^{1 + \alpha } }}{{\varGamma \left( {2 + \alpha } \right)}} - t^{2} + t} \right] \\ & = - \frac{{2t^{2 + \alpha } }}{{\varGamma \left( {3 + \alpha } \right)}} + \frac{{t^{1 + \alpha } }}{{\varGamma \left( {2 + \alpha } \right)}} - \frac{{2t^{2 + 2\alpha } }}{{\varGamma \left( {3 + 2\alpha } \right)}} + \frac{{t^{1 + 2\alpha } }}{{\varGamma \left( {2 + 2\alpha } \right)}} \\ y_{3} & = J^{\alpha } \left[ { - \,y_{2} } \right] = J^{\alpha } \left[ {\frac{{2t^{2 + \alpha } }}{{\varGamma \left( {3 + \alpha } \right)}} - \frac{{t^{1 + \alpha } }}{{\varGamma \left( {2 + \alpha } \right)}} + \frac{{2t^{2 + 2\alpha } }}{{\varGamma \left( {3 + 2\alpha } \right)}} - \frac{{t^{1 + 2\alpha } }}{{\varGamma \left( {2 + 2\alpha } \right)}}} \right] \\ & = \frac{{2t^{2 + 2\alpha } }}{{\varGamma \left( {3 + 2\alpha } \right)}} - \frac{{t^{1 + 2\alpha } }}{{\varGamma \left( {2 + 2\alpha } \right)}} + \frac{{2t^{2 + 3\alpha } }}{{\varGamma \left( {3 + 3\alpha } \right)}} - \frac{{t^{1 + 3\alpha } }}{{\varGamma \left( {2 + 3\alpha } \right)}} \\ & \vdots \\ \end{aligned}$$

Computing the higher-order terms, we notice that many noise terms appear in a certain pattern. After these noise terms cancel, we easily find that the exact solution takes the form

$$y\left( t \right) = y_{0} + y_{1} + y_{2} + \cdots = t^{2} - t.$$
(45)
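The cancellation pattern can be checked numerically: summing \(y_{1} + y_{2} + y_{3}\) from the expressions above, the interior fractional-power pairs cancel and only \(t^{2} - t\) plus the last "noise" pair of \(y_{3}\) survives. A sketch (the sample values of \(\alpha\) and t are ours):

```python
import math

G = math.gamma
alpha, t = 0.5, 0.8

# the three NIM terms exactly as derived above (y0 = 0)
y1 = 2*t**(2+alpha)/G(3+alpha) - t**(1+alpha)/G(2+alpha) + t**2 - t
y2 = (-2*t**(2+alpha)/G(3+alpha) + t**(1+alpha)/G(2+alpha)
      - 2*t**(2+2*alpha)/G(3+2*alpha) + t**(1+2*alpha)/G(2+2*alpha))
y3 = (2*t**(2+2*alpha)/G(3+2*alpha) - t**(1+2*alpha)/G(2+2*alpha)
      + 2*t**(2+3*alpha)/G(3+3*alpha) - t**(1+3*alpha)/G(2+3*alpha))

# the only fractional-power terms that survive the telescoping are y3's last pair
tail = 2*t**(2+3*alpha)/G(3+3*alpha) - t**(1+3*alpha)/G(2+3*alpha)
print(abs((y1 + y2 + y3) - (t**2 - t) - tail))   # ~ 0
```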

Example 5.6

Consider the following nonlinear initial value problem (Torvik and Bagley 1984)

$$y^{{{\prime \prime \prime }}} \left( t \right) + D_{t}^{{\frac{5}{2}}} y\left( t \right) + y^{2} \left( t \right) = t^{4} ,\quad y\left( 0 \right) = y^{{\prime }} \left( 0 \right) = 0,\;\;y^{{\prime \prime }} \left( 0 \right) = 2.$$
(46)

Isolating the unknown function

$$y\left( t \right) = J^{{\frac{5}{2}}} \left[ { - \,D^{3} y - y^{2} + t^{4} } \right] + t^{2}$$
(47)

by using Eq. (28), we get

$$\begin{aligned} y_{0} & = t^{2} \quad {\text{and}}\quad N\left( y \right) = J^{{\frac{5}{2}}} \left[ { - \,D^{3} y - y^{2} + t^{4} } \right] \\ y_{1} & = N\left( {y_{0} } \right) = J^{{\frac{5}{2}}} \left[ { - \,D^{3} y_{0} - y_{0}^{2} + t^{4} } \right] = 0 \\ y_{2} & = N\left( {y_{0} + y_{1} } \right) - N\left( {y_{0} } \right) = J^{{\frac{5}{2}}} \left[ { - \,D^{3} \left( {y_{0} + y_{1} } \right) - (y_{0} + y_{1} )^{2} + t^{4} } \right] = 0 \\ & \vdots \\ \end{aligned}$$

Computing the higher-order terms, we notice that they all vanish thanks to the one-step NIM. We easily find that the exact solution takes the form

$$y\left( t \right) = y_{0} + y_{1} + y_{2} + \cdots = t^{2} .$$
(48)

We must also point out that we tried the usual NIM on the same problem but obtained only an approximate solution, of the form

$$y\left( t \right) = t^{2} - \frac{1}{30}t^{6} - \frac{17}{221760}t^{{\frac{39}{2}}} - \frac{70368744177664}{{4030227582157711875\pi^{{\frac{3}{2}}} }}t^{57/2} + \cdots$$
(49)

The graph of this solution is shown in figure b.

6 Conclusion

The aim of this article is to modify the NIM so as to provide a basic procedure for obtaining an exact solution of the Bagley–Torvik equation. The suggested modification is called the one-step new iterative method. Through this article, we have presented a successful algorithm to solve the BTE. The introduced algorithm gave the exact solution in all the examples studied in this paper, which indicates that the method is efficient, accurate and reliable when used to solve linear and nonlinear fractional differential equations, in particular the BTE.