1 Introduction

The study of fractional calculus dates back to the time when Leibniz and Newton invented differential calculus. Fractional calculus deals with derivatives and integrals of arbitrary real order. It is a powerful tool for modeling phenomena arising in diverse fields such as mechanics, physics, engineering, economics, finance, medicine, biology, and chemistry [1,2,3,4,5,6]. In the past few decades, fractional differential equations (FDEs) have been used in an increasing number of applications, for example, ultrasonic wave propagation in human cancellous bone [7], modeling of speech signals [8], modeling the cardiac tissue-electrode interface [9], sound wave propagation in rigid porous materials [10], lateral and longitudinal control of autonomous vehicles [11], the theory of viscoelasticity [12], fractional differentiation for edge detection [13], fluid mechanics [14], electrical impedance spectroscopy [15], and frequency-dependent acoustic wave propagation in porous media [16].

In general, no method yields an exact solution for an arbitrary fractional differential equation. Several analytical methods have been suggested for solving fractional differential equations, such as the homotopy perturbation method [17], Adomian's decomposition method [18,19,20], the homotopy analysis method [21], the Laplace transform method, the fractional Green's function, the power series method, and the method of orthogonal polynomials [22,23,24,25].

Several numerical methods have been published for producing approximate solutions of fractional differential equations. These include the implicit quadrature method introduced by Diethelm [26], the predictor-corrector method discussed by Diethelm, Ford and Freed [27], the approximate Mittag-Leffler method considered by Diethelm and Luchko [28], a collocation method described by Blank [29], and the finite difference method discussed by Gorenflo [6]; see also [30,31,32,33,34,35,36].

The modeling of real-world problems and physical systems leads to partial FDEs (PFDEs). Analytical solutions are available only for a few simple PFDEs. Although researchers have developed efficient numerical methods for PFDEs, the literature on the numerical approximation of partial fractional derivatives, and on simple, general, and efficient numerical methods for solving PFDEs, remains limited. Some analytical techniques presented in the literature for solving PFDEs include the method of separation of variables [37], the decomposition method [38], the variational iteration method [39], and the homotopy perturbation method [40]. For numerical methods for solving partial fractional differential equations, see [36, 41,42,43,44,45,46,47,48,49,50, 52].

One of the disadvantages of finite difference methods on uniform meshes for solving fractional differential equations is their high computational cost. We show that the computational cost of the non-uniform mesh scheme is lower than that of the uniform mesh scheme, without any loss of numerical accuracy.

This paper focuses on designing new numerical methods with uniform and non-uniform meshes for the partial fractional differential equation:

$$\begin{aligned} \left\{ {\begin{array}{{ll}} {\dfrac{{\partial u(x,t)}}{{\partial t}} = {\lambda _\alpha }{}_0^CD_x^\alpha u(x,t) + f(x,t), \,\,\, t > 0, \,\,x \in [0,L],}\\ {u(x,0) = g(x),\,\,\,\,\,0< \alpha < 1,}\\ {u(0,t) = {\mu _1}(t), \,\,\,\, u(L,t) = {\mu _2}(t),} \end{array}} \right. \end{aligned}$$
(1)

where \({\lambda _\alpha }<0\) and \(L>0\) are constants, and the fractional derivative operator \({}_0^CD_x^\alpha\) is the Caputo derivative, defined as [22]

$$\begin{aligned} {}_0^CD_x^\alpha Z(x) = \frac{1}{{\varGamma (n - \alpha )}}\int _0^x {\frac{{{Z^{(n)}}(s)}}{{{{(x - s)}^{\alpha - n + 1}}}}\,} {\text {ds}},\,\,\,\,\,n - 1< \alpha < n. \end{aligned}$$
(2)
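As a quick sanity check on definition (2), one can evaluate the Caputo derivative numerically: the substitution \(x-s = v^{1/(1-\alpha)}\) removes the weak singularity of the kernel, after which ordinary quadrature applies. This is only an illustrative sketch; the helper name `caputo_fd` and the test function \(Z(x)=x^2\) are our own choices, not part of the scheme developed below.

```python
import math

def caputo_fd(dZ, x, alpha, m=20000):
    """Caputo derivative (0 < alpha < 1) of Z at x, given Z' = dZ.
    The substitution x - s = v**(1/(1-alpha)) removes the weak
    singularity, so a plain trapezoidal rule converges quickly."""
    b = x ** (1.0 - alpha)                 # upper limit after substitution
    h = b / m
    g = lambda v: dZ(x - v ** (1.0 / (1.0 - alpha)))
    s = 0.5 * (g(0.0) + g(b)) + sum(g(k * h) for k in range(1, m))
    # the factor 1/((1-alpha)*Gamma(1-alpha)) equals 1/Gamma(2-alpha)
    return h * s / ((1.0 - alpha) * math.gamma(1.0 - alpha))

# For Z(x) = x^2 the exact Caputo derivative is 2 x^{2-alpha} / Gamma(3-alpha).
alpha, x = 0.5, 1.0
approx = caputo_fd(lambda s: 2.0 * s, x, alpha)
exact = 2.0 * x ** (2.0 - alpha) / math.gamma(3.0 - alpha)
```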

In this paper, an initial-boundary value problem for the partial fractional differential equation (1) is considered. We design new methods with uniform and non-uniform meshes and obtain error bounds for them. Finally, some examples are presented, and the results obtained by the new methods with uniform and non-uniform meshes are compared.

The rest of this paper is organized as follows. In Sect. 2, a new numerical method with uniform meshes is presented. In Sect. 3, a new numerical method with non-uniform meshes is developed. We perform the error analysis for those methods in Sect. 4. In Sect. 5, examples illustrating the performance of the new numerical schemes are presented. In the last section, conclusions are given.

2 Numerical method with uniform meshes

The purpose of this section is to present a new numerical method, based on piecewise linear interpolation with uniform meshes, for solving the partial fractional differential Eq. (1). We partition [0, L] into a uniform mesh with the space step size \(h = L/M\) and use the time step size \(\kappa = T/N\), where M, N are two positive integers. Thus, \(x_{n} = nh\) for \(n=0,1,..., M\) and \(t_{j}= j\kappa\) for \(j=0,1,..., N\).

By using Eq. (2), we can write

$$\begin{aligned} \frac{{\partial u(x,t)}}{{\partial t}} = & \lambda _{\alpha }\, {}_{0}^{C} D_{x}^{\alpha } u(x,t) + f(x,t) \\ = & \frac{{\lambda _{\alpha } }}{{\Gamma (1 - \alpha )}}\int\limits_{0}^{x} {(x - \tau )^{{ - \alpha }} \frac{{\partial u(\tau ,t)}}{{\partial \tau }}d\tau + f(x,t),} \\ \end{aligned}$$
(3)

If we take \(x = {x_{n + 1}}\) and \(t = {t_j}\), we have

$$\begin{aligned} \frac{{\partial u(x_{{n + 1}} ,t_{j} )}}{{\partial t}} = & \frac{{\lambda _{\alpha } }}{{\Gamma (1 - \alpha )}}\int\limits_{0}^{{x_{{n + 1}} }} {(x_{{n + 1}} - \tau )^{{ - \alpha }} \frac{{\partial u(\tau ,t_{j} )}}{{\partial \tau }}d\tau + f(x_{{n + 1}} ,t_{j} )} \\ = & \frac{{\lambda _{\alpha } }}{{\Gamma (1 - \alpha )}}\int\limits_{0}^{{x_{n} }} {(x_{{n + 1}} - \tau )^{{ - \alpha }} \frac{{\partial u(\tau ,t_{j} )}}{{\partial \tau }}d\tau } \\ & + \frac{{\lambda _{\alpha } }}{{\Gamma (1 - \alpha )}}\int\limits_{{x_{n} }}^{{x_{{n + 1}} }} {(x_{{n + 1}} - \tau )^{{ - \alpha }} \frac{{\partial u(\tau ,t_{j} )}}{{\partial \tau }}d\tau + f(x_{{n + 1}} ,t_{j} )} \\ {\mkern 1mu} = & I_{1} + I_{2} + f(x_{{n + 1}} ,t_{j} ). \\ \end{aligned}$$
(4)

The integral \({I_2}\) is approximated by using the piecewise linear interpolant of u at the nodes \({x_n}\) and \({x_{n+1}}\), as follows

$$\begin{aligned} I_{2} = & \frac{{\lambda _{\alpha } }}{{\Gamma (1 - \alpha )}}\int\limits_{{x_{n} }}^{{x_{{n + 1}} }} {(x_{{n + 1}} - \tau )^{{ - \alpha }} \frac{{\partial u(\tau ,t_{j} )}}{{\partial \tau }}d\tau } \\ \approx & \frac{{\lambda _{\alpha } }}{{\Gamma (1 - \alpha )}}\int\limits_{{x_{n} }}^{{x_{{n + 1}} }} {(x_{{n + 1}} - \tau )^{{ - \alpha }} \frac{{\partial \hat{u}(\tau ,t_{j} )}}{{\partial \tau }}d\tau } \\ = & \frac{{\lambda _{\alpha } }}{{\Gamma (1 - \alpha )}}\int\limits_{{x_{n} }}^{{x_{{n + 1}} }} {(x_{{n + 1}} - \tau )^{{ - \alpha }} \frac{\partial }{{\partial \tau }}\left[ {\frac{{\tau - x_{{n + 1}} }}{{x_{n} - x_{{n + 1}} }}} \right.u_{n}^{j} } + \left. {\frac{{\tau - x_{n} }}{{x_{{n + 1}} - x_{n} }}u_{{n + 1}}^{j} } \right]d\tau \\ = & \frac{{\lambda _{\alpha } h^{{ - \alpha }} }}{{\Gamma (2 - \alpha )}}\left[ {u_{{n + 1}}^{j} - u_{n}^{j} } \right], \\ \end{aligned}$$
(5)

where \({\hat{u}}\) is the piecewise linear interpolant of u and \(u_n^j = u({x_n},{t_j})\). Similarly, the integral \({I_1}\) is approximated by using the piecewise linear interpolant of u at the nodes \({x_k}\) and \({x_{k+1}}\), \(k=0,1,...,n-1\), as follows

$$\begin{aligned} \begin{array}{l} {I_1} = \dfrac{{{\lambda _\alpha }}}{{\varGamma (1 - \alpha )}}\int \limits _0^{{x_n}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\dfrac{{\partial u(\tau ,{t_j})}}{{\partial \tau }}d\tau } \\ \,\,\,\,\,\, \approx \dfrac{{{\lambda _\alpha }}}{{\varGamma (1 - \alpha )}}\int \limits _0^{{x_n}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\dfrac{{\partial \hat{u} (\tau ,{t_j})}}{{\partial \tau }}d\tau } = \dfrac{{{\lambda _\alpha }}}{{\varGamma (1 - \alpha )}}\sum \limits _{k = 0}^{n - 1} {\int \limits _{{x_k}}^{{x_{k + 1}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\dfrac{{\partial \hat{u} (\tau ,{t_j})}}{{\partial \tau }}d\tau } } \\ \,\,\,\,\,\, = \dfrac{{{\lambda _\alpha }}}{{\varGamma (1 - \alpha )}}\sum \limits _{k = 0}^{n - 1} {\int \limits _{{x_k}}^{{x_{k + 1}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\dfrac{\partial }{{\partial \tau }}\left[ {\dfrac{{\tau - {x_{k + 1}}}}{{{x_k} - {x_{k + 1}}}}} \right. u({x_k},{t_j}) + \left. {\dfrac{{\tau - {x_k}}}{{{x_{k + 1}} - {x_k}}}u({x_{k + 1}},{t_j})} \right] d\tau } } \\ \,\,\,\,\, = \dfrac{{{\lambda _\alpha }{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\sum \limits _{k = 0}^{n - 1} {\left[ {u_k^j - u_{k + 1}^j} \right] } \dfrac{{{{(n + 1 - [k + 1])}^{1 - \alpha }} - {{(n + 1 - k)}^{1 - \alpha }}}}{{([k + 1] - k)}}\\ \,\,\,\,\, = \dfrac{{{\lambda _\alpha }{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\sum \limits _{k = 0}^{{n}} {\left[ {\rho _{k,n + 1}^R + \rho _{k,n + 1}^L} \right] } u_k^j = \dfrac{{{\lambda _\alpha }{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\sum \limits _{k = 0}^{n - 1} {\left[ {\rho _{k,n + 1}^Ru_k^j + \rho _{k + 1,n + 1}^Lu_{k + 1}^j} \right] } \\ \,\,\,\,\, = \dfrac{{{\lambda _\alpha }{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\sum \limits _{k = 0}^{n - 1} {\rho _{k,n + 1}^R\left[ {u_k^j - u_{k + 1}^j} \right] } = \dfrac{{{\lambda _\alpha }{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\sum \limits _{k = 0}^{{n}} {{\rho _{k,n + 1}}} u_k^j, \end{array}
\end{aligned}$$
(6)

where \({\hat{u}}\) is the piecewise linear interpolation for u and

$$\begin{aligned} & \rho _{{k,n + 1}}^{R} = \left\{ \begin{gathered} \frac{{(n + 1 - [k + 1])^{{1 - \alpha }} - (n + 1 - k)^{{1 - \alpha }} }}{{([k + 1] - k)}},\quad 0 \le k \le n - 1, \hfill \\ 0,\quad \quad k = n, \hfill \\ \end{gathered} \right. \\ & \,\rho _{{k,n + 1}}^{L} = \left\{ \begin{gathered} 0,\quad \quad k = 0, \hfill \\ \frac{{(n + 1 - [k - 1])^{{1 - \alpha }} - (n + 1 - k)^{{1 - \alpha }} }}{{(k - [k - 1])}},\quad 1 \le k \le n. \hfill \\ \end{gathered} \right. \\ \end{aligned}$$
(7)

By using Eqs. (6) and (7), we can write

$$\begin{aligned} {\rho _{k,n + 1}} = \rho _{k,n + 1}^R + \rho _{k,n + 1}^L,\,\,\,\,\,\,\rho _{k + 1,n + 1}^L = - \rho _{k,n + 1}^R. \end{aligned}$$
(8)
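The weights in (7) and the identities in (8) can be checked directly. The following is an illustrative sketch (the function names are ours); each function takes k and n and returns the weight with second index n + 1:

```python
def rho_R(k, n, alpha):
    """rho^R_{k,n+1} from (7); zero at k = n."""
    if k == n:
        return 0.0
    return (n - k) ** (1.0 - alpha) - (n + 1 - k) ** (1.0 - alpha)

def rho_L(k, n, alpha):
    """rho^L_{k,n+1} from (7); zero at k = 0."""
    if k == 0:
        return 0.0
    return (n + 2 - k) ** (1.0 - alpha) - (n + 1 - k) ** (1.0 - alpha)

def rho(k, n, alpha):
    """rho_{k,n+1} = rho^R_{k,n+1} + rho^L_{k,n+1}, as in (8)."""
    return rho_R(k, n, alpha) + rho_L(k, n, alpha)

# check the second identity in (8): rho^L_{k+1,n+1} = -rho^R_{k,n+1}
n, alpha = 12, 0.4
ok = all(abs(rho_L(k + 1, n, alpha) + rho_R(k, n, alpha)) < 1e-12
         for k in range(n))
```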

Now, we set

$$\begin{aligned} \delta _{{n + 1}}^{j} = & \lambda _{\alpha }\, {}_{0}^{C} D_{x}^{\alpha } u(x_{{n + 1}} ,t_{j} ) + f(x_{{n + 1}} ,t_{j} ) \\ = & \frac{{\lambda _{\alpha } }}{{\Gamma (1 - \alpha )}}\int\limits_{0}^{{x_{{n + 1}} }} {(x_{{n + 1}} - \tau )^{{ - \alpha }} \frac{{\partial u(\tau ,t_{j} )}}{{\partial \tau }}d\tau + f(x_{{n + 1}} ,t_{j} )} \\ = & \frac{{\lambda _{\alpha } h^{{ - \alpha }} }}{{\Gamma (2 - \alpha )}}\left[ {u_{{n + 1}}^{j} - u_{n}^{j} } \right] + \frac{{\lambda _{\alpha } h^{{ - \alpha }} }}{{\Gamma (2 - \alpha )}}\sum\limits_{{k = 0}}^{n} {\rho _{{k,n + 1}} u_{k}^{j} + f_{{n + 1}}^{j} .} \\ \end{aligned}$$
(9)

Thus, we approximate the solution of Eq. (1) by using the Crank–Nicolson scheme, applied as follows.

Let \(u({x_n},{t_j}) = u_n^j\) and \(f({x_n},{t_j}) = f_n^j\). Then,

$$\begin{aligned} \frac{{u_{n + 1}^j - u_{n + 1}^{j - 1}}}{\kappa } = \frac{1}{2}\left[\delta _{n + 1}^j + \delta _{n + 1}^{j - 1}\right]. \end{aligned}$$
(10)

Therefore, substituting (9) into Eq. (10) and rearranging, we have

$$\begin{aligned} & u_{{n + 1}}^{j} - \frac{{\lambda _{\alpha } \kappa h^{{ - \alpha }} }}{{2\Gamma (2 - \alpha )}}\left[ {u_{{n + 1}}^{j} - u_{n}^{j} } \right] - \frac{{\lambda _{\alpha } \kappa h^{{ - \alpha }} }}{{2\Gamma (2 - \alpha )}}\sum\limits_{{k = 0}}^{n} {\rho _{{k,n + 1}} u_{k}^{j} } \\ & \, = u_{{n + 1}}^{{j - 1}} + \frac{{\lambda _{\alpha } \kappa h^{{ - \alpha }} }}{{2\Gamma (2 - \alpha )}}\left[ {u_{{n + 1}}^{{j - 1}} - u_{n}^{{j - 1}} } \right] + \frac{{\lambda _{\alpha } \kappa h^{{ - \alpha }} }}{{2\Gamma (2 - \alpha )}}\sum\limits_{{k = 0}}^{n} {\rho _{{k,n + 1}} u_{k}^{{j - 1}} } + \frac{{\kappa (f_{{n + 1}}^{j} + f_{{n + 1}}^{{j - 1}} )}}{2}, \\ \end{aligned}$$
(11)

finally, we can write

$$\begin{aligned} u_{n + 1}^j + \sum \limits _{k = 0}^{{n+1}} {\varPsi _{k,n + 1}^\alpha u_k^j} = u_{n + 1}^{j - 1} - \sum \limits _{k = 0}^{{n+1}} {\varPsi _{k,n + 1}^\alpha u_k^{j - 1} + } \frac{{\kappa (f_{n + 1}^j + f_{n + 1}^{j - 1})}}{2}, \end{aligned}$$
(12)

where

$$\varPsi _{{k,n + 1}}^{\alpha } = \left\{ \begin{gathered} \frac{{ - \lambda _{\alpha } \kappa h^{{ - \alpha }} }}{{2\Gamma (2 - \alpha )}}\left[ {\rho _{{k,n + 1}}^{R} - \rho _{{k - 1,n + 1}}^{R} } \right],\quad k = 1,2,...,n, \hfill \\ \frac{{ - \lambda _{\alpha } \kappa h^{{ - \alpha }} }}{{2\Gamma (2 - \alpha )}},\quad k = n + 1. \hfill \\ \end{gathered} \right.$$
(13)

By using Eqs. (12) and (13), and introducing

$$D = \left( {\begin{array}{*{20}c} {\varPsi _{{1,1}}^{\alpha } } & 0 & 0 & \cdots & 0 \\ {\varPsi _{{1,2}}^{\alpha } } & {\varPsi _{{2,2}}^{\alpha } } & 0 & \cdots & 0 \\ {\varPsi _{{1,3}}^{\alpha } } & {\varPsi _{{2,3}}^{\alpha } } & {\varPsi _{{3,3}}^{\alpha } } & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ {\varPsi _{{1,M}}^{\alpha } } & {\varPsi _{{2,M}}^{\alpha } } & {\varPsi _{{3,M}}^{\alpha } } & \cdots & {\varPsi _{{M,M}}^{\alpha } } \\ \end{array} } \right),$$
(14)

and

$$\begin{aligned} \begin{array}{l} {U^j} = [u_1^j,\,u_2^j,...,\,u_{M}^j]^{T}, \end{array} \end{aligned}$$
(15)

Eq. (12) takes the matrix-form as:

$$\begin{aligned} (I + D){U^j} = (I-D){U^{j - 1}} + {F^j}, \end{aligned}$$
(16)

where

$$F^{j} = \left[ {\begin{array}{c} {\frac{\kappa }{2}\left[ {f_{1}^{j} + f_{1}^{{j - 1}} } \right] - \varPsi _{{0,1}}^{\alpha } \left[ {u_{0}^{j} + u_{0}^{{j - 1}} } \right]} \\ {\frac{\kappa }{2}\left[ {f_{2}^{j} + f_{2}^{{j - 1}} } \right] - \varPsi _{{0,2}}^{\alpha } \left[ {u_{0}^{j} + u_{0}^{{j - 1}} } \right]} \\ {\frac{\kappa }{2}\left[ {f_{3}^{j} + f_{3}^{{j - 1}} } \right] - \varPsi _{{0,3}}^{\alpha } \left[ {u_{0}^{j} + u_{0}^{{j - 1}} } \right]} \\ \vdots \\ {\frac{\kappa }{2}\left[ {f_{{M - 1}}^{j} + f_{{M - 1}}^{{j - 1}} } \right] - \varPsi _{{0,M - 1}}^{\alpha } \left[ {u_{0}^{j} + u_{0}^{{j - 1}} } \right]} \\ {\frac{\kappa }{2}\left[ {f_{M}^{j} + f_{M}^{{j - 1}} } \right] - \varPsi _{{0,M}}^{\alpha } \left[ {u_{0}^{j} + u_{0}^{{j - 1}} } \right]} \\ \end{array} } \right].$$
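To make the assembly of (13)-(16) concrete, here is a minimal sketch. The parameter values are our own illustrative choices, and the right-hand-side vector \(F^j\) is left as a placeholder since it depends on the data f and the boundary values:

```python
import numpy as np
from math import gamma

def assemble_D(M, alpha, lam, kappa, h):
    """Lower-triangular matrix D of (14), with entries Psi from (13)."""
    c = -lam * kappa * h ** (-alpha) / (2.0 * gamma(2.0 - alpha))

    def rho_R(k, n):                       # rho^R_{k,n+1} from (7)
        if k == n:
            return 0.0
        return (n - k) ** (1.0 - alpha) - (n + 1 - k) ** (1.0 - alpha)

    D = np.zeros((M, M))
    for row in range(1, M + 1):            # row corresponds to n + 1
        n = row - 1
        for k in range(1, n + 1):          # k = 1, ..., n in (13)
            D[row - 1, k - 1] = c * (rho_R(k, n) - rho_R(k - 1, n))
        D[row - 1, row - 1] = c            # k = n + 1 (diagonal entry)
    return D

# one time step of (16): (I + D) U^j = (I - D) U^{j-1} + F^j
M, alpha, lam, kappa, h = 8, 0.5, -1.0, 0.01, 0.125
D = assemble_D(M, alpha, lam, kappa, h)
I = np.eye(M)
U_prev = np.ones(M)
F = np.zeros(M)                            # placeholder right-hand side
U = np.linalg.solve(I + D, (I - D) @ U_prev + F)
```

Note that D is lower triangular, so the linear system could also be advanced by forward substitution at each time step.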

3 Numerical method with non-uniform meshes

In Sect. 2, we designed the proposed scheme (12)-(13) with uniform meshes, which approximates the integral \(\int _0^{{x_n}} \cdot \, d\tau\) in

$$\begin{aligned} \frac{{\partial u(x_{{n + 1}} ,t_{j} )}}{{\partial t}} = & \frac{{\lambda _{\alpha } }}{{\Gamma (1 - \alpha )}}\int\limits_{0}^{{x_{n} }} {(x_{{n + 1}} - \tau )^{{ - \alpha }} \frac{{\partial u(\tau ,t_{j} )}}{{\partial \tau }}d\tau } \\ & + \frac{{\lambda _{\alpha } }}{{\Gamma (1 - \alpha )}}\int\limits_{{x_{n} }}^{{x_{{n + 1}} }} {(x_{{n + 1}} - \tau )^{{ - \alpha }} \frac{{\partial u(\tau ,t_{j} )}}{{\partial \tau }}d\tau + f(x_{{n + 1}} ,t_{j} )} \\ \end{aligned}$$
(17)

Since \({{{({x_{n + 1}} - \tau )}^{-\alpha }}}\) decays with power \(\alpha\), we can select fewer mesh points of [0, L], as \(0 = {\sigma _{0,n}}< {\sigma _{1,n}}< {\sigma _{2,n}}<... < {\sigma _{{m_n},n}} = {x_n}\), to approximate the integral \(\int _0^{{x_n}} \cdot \, d\tau\).

3.1 Algorithms for selecting the equidistributing meshes

For selecting the equidistributing meshes, we introduce two algorithms in this subsection [51].

Algorithm 1:

\(Equal-height\ distribution \ algorithm\) [51]

Assume that we have already obtained the point \({\sigma _{i,n}}\); we use two principles for selecting the next point \({\sigma _{i+1,n}}\). With these two principles, the numerical method does not lose accuracy but reduces the computational cost.

Principle 1:

The next point \({\sigma _{i+1,n}}\) is at least one step away from \({\sigma _{i,n}}\). The function values \(u(\tau ) ={({x_{n + 1}} - \tau )^{-\alpha }}\) are as equally distributed as possible, i.e.,

$$\begin{aligned} {{\bar{\sigma }} _{i + 1,n}} = \max \left\{ \begin{array}{l} \textrm{solve}({{{\bar{\sigma }} }_{i + 1,n}} - {\sigma _{i,n}} = h,\,\,{{{\bar{\sigma }} }_{i + 1,n}}),\\ \textrm{solve}(u({{{\bar{\sigma }} }_{i + 1,n}}) - u({\sigma _{i,n}}) = \varDelta u,\,\,{{{\bar{\sigma }} }_{i + 1,n}}) \end{array} \right\} , \end{aligned}$$
(18)

where \(\varDelta u\) is a given small positive real number and \(\textrm{solve}(equ, var)\) means the solution of equ with unknown variable var, e.g., \(\textrm{solve}(u({{\bar{\sigma }} _{i + 1,n}}) - u({\sigma _{i,n}}) = \varDelta u,{{\bar{\sigma }} _{i + 1,n}})\) means solving

$$\begin{aligned} {({x_{n + 1}} - {{{\bar{\sigma }} }_{i + 1,n}})^{-\alpha }} - {({x_{n + 1}} - {\sigma _{i,n}})^{-\alpha }} = \varDelta u. \end{aligned}$$
(19)

Therefore, we have

$$\begin{aligned} {{{\bar{\sigma }} }_{i + 1,n}} = {x_{n + 1}} - {[{({x_{n + 1}} - {\sigma _{i,n}})^{-\alpha }} + \varDelta u]^{\dfrac{-1}{{\alpha }}}}; \end{aligned}$$
(20)

Principle 2: To avoid involving non-equally divided nodes, we take

$$\begin{aligned} {\sigma _{i + 1,n}} = \left\lfloor {\dfrac{{{{{\bar{\sigma }} }_{i + 1,n}}}}{h}} \right\rfloor {*}h. \end{aligned}$$
(21)

Therefore, we have \({\sigma _{i + 1,n}} = {\sigma _{i,n}} + h\) or

$$\begin{aligned} {({x_{n + 1}} - {\sigma _{i + 1,n}})^{-\alpha }} - {({x_{n + 1}} - {\sigma _{i,n}})^{-\alpha }} \le \varDelta u. \end{aligned}$$
(22)

This algorithm is called \(equal-height\ distribution \ algorithm\) [51] (see Algorithm 1).
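A minimal sketch of the equal-height selection, combining (18), (20), and (21); the function name and the small floating-point guard inside `floor` are our own:

```python
import math

def equal_height_mesh(n, h, alpha, du):
    """Select nodes sigma_{i,n} from the uniform grid {k*h} on [0, x_n]
    so that the kernel (x_{n+1} - tau)^{-alpha} rises by about `du`
    (Delta u) between consecutive nodes."""
    x_next = (n + 1) * h                   # x_{n+1}
    idx = [0]                              # node indices (multiples of h)
    while idx[-1] < n:
        sig = idx[-1] * h
        # eq. (20): point where the kernel has grown by du
        bar = x_next - ((x_next - sig) ** (-alpha) + du) ** (-1.0 / alpha)
        # eq. (21) snaps back onto the grid; eq. (18) forces at least one step
        k = max(idx[-1] + 1, math.floor(bar / h + 1e-12))
        idx.append(min(k, n))
    return [k * h for k in idx]

mesh = equal_height_mesh(100, 0.01, 0.5, 0.05)
```

For n = 100 the selected mesh takes large steps far from \(x_{n+1}\) and refines toward unit steps near \(x_n\), so it uses noticeably fewer than the 101 uniform nodes.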

Algorithm 2:

\(Equal-area\ distribution \ algorithm\) [51]

Principle 1: To design the second algorithm for choosing the mesh points \({\sigma _{i,n}}\), we integrate \(u(\tau )={{{({x_{n + 1}} - \tau )}^{-\alpha }}}\) as

$$\begin{aligned} \int \limits _{{\sigma _{i,n}}}^{{{{\bar{\sigma }} }_{i + 1,n}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}} d\tau = \varDelta S, \end{aligned}$$

where \(\varDelta S\) is a given small positive real number. Solving for \({{\bar{\sigma }}_{i + 1,n}}\) gives

$$\begin{aligned} {{{\bar{\sigma }} }_{i + 1,n}} = {x_{n + 1}} - {[{({x_{n + 1}} - {\sigma _{i,n}})^{1-\alpha }} -(1- \alpha ) \varDelta S]^{\frac{1}{1-\alpha }}}; \end{aligned}$$
(23)

Principle 2: To avoid involving non-equally divided nodes, we take

$$\begin{aligned} {\sigma _{i + 1,n}} = \left\lfloor {\frac{{{{{\bar{\sigma }} }_{i + 1,n}}}}{h}} \right\rfloor *h, \end{aligned}$$
(24)

therefore, \({\sigma _{i+1,n}}\) belongs to the uniform nodes \(\{ {x_i}\} _{i = 0}^n\). It can be checked that

$$\begin{aligned} {({x_{n + 1}} - {\sigma _{i,n}})^{1-\alpha }} - {({x_{n + 1}} - {\sigma _{i + 1,n}})^{1-\alpha }} \le (1-\alpha ) \varDelta S,\,\,\,\,\text {or}\,\,\,\,\,\,{\sigma _{i + 1,n}} = {\sigma _{i,n}} + h. \end{aligned}$$
(25)

This algorithm is called \(equal-area\ distribution \ algorithm\) [51] (see Algorithm 2).

[Algorithm 1 and Algorithm 2: pseudocode listings]
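Similarly, a sketch of the equal-area selection, combining (23) and (24); the guard for the case where the remaining area under the kernel is smaller than \(\varDelta S\) is our own addition:

```python
import math

def equal_area_mesh(n, h, alpha, dS):
    """Select nodes sigma_{i,n} from the uniform grid {k*h} on [0, x_n]
    so that each subinterval carries about the same area `dS` (Delta S)
    under the kernel (x_{n+1} - tau)^{-alpha}."""
    x_next = (n + 1) * h                   # x_{n+1}
    idx = [0]                              # node indices (multiples of h)
    while idx[-1] < n:
        sig = idx[-1] * h
        rem = (x_next - sig) ** (1.0 - alpha) - (1.0 - alpha) * dS
        if rem <= 0.0:                     # less than dS of area remains
            idx.append(n)
            break
        bar = x_next - rem ** (1.0 / (1.0 - alpha))          # eq. (23)
        k = max(idx[-1] + 1, math.floor(bar / h + 1e-12))    # eq. (24)
        idx.append(min(k, n))
    return [k * h for k in idx]

mesh = equal_area_mesh(100, 0.01, 0.5, 0.1)
```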

3.2 Formulation of numerical method with equidistributing meshes

In Sect. 2, we partitioned the interval [0, L] into a uniform mesh. The non-uniform mesh points \({\sigma _{i,n}}\) chosen by Algorithm 1 or 2 still belong to the set of uniform mesh points. We take \({x_{{n_0}}} = 0\) and \({x_{{n_{{m_n}}}}} = {x_n}\), so that \({\sigma _{i,n}} = {x_{{n_i}}},\,\,\,i = 0,1,...,{m_n}\). Now, we assume that

$$\begin{aligned} & x = \left\{ {x_{0} ,x_{1} ,x_{2} ,...,x_{n} } \right\}, \\ & \sigma (i) = \left\{ {\sigma _{{0,n}} ,\sigma _{{1,n}} ,\sigma _{{2,n}} ,...,\sigma _{{m_{i} ,n}} } \right\}. \\ \end{aligned}$$
(26)

To design a new numerical method with the non-uniform mesh points, we have

$$\begin{aligned} \frac{{\partial u(x_{{n + 1}} ,t_{j} )}}{{\partial t}} = & \frac{{\lambda _{\alpha } }}{{\Gamma (1 - \alpha )}}\int\limits_{0}^{{x_{{n + 1}} }} {(x_{{n + 1}} - \tau )^{{ - \alpha }} \frac{{\partial u(\tau ,t_{j} )}}{{\partial \tau }}d\tau + f(x_{{n + 1}} ,t_{j} )} \\ = & \frac{{\lambda _{\alpha } }}{{\Gamma (1 - \alpha )}}\int\limits_{0}^{{x_{n} }} {(x_{{n + 1}} - \tau )^{{ - \alpha }} \frac{{\partial u(\tau ,t_{j} )}}{{\partial \tau }}d\tau } \\ & + \frac{{\lambda _{\alpha } }}{{\Gamma (1 - \alpha )}}\int\limits_{{x_{n} }}^{{x_{{n + 1}} }} {(x_{{n + 1}} - \tau )^{{ - \alpha }} \frac{{\partial u(\tau ,t_{j} )}}{{\partial \tau }}d\tau + f(x_{{n + 1}} ,t_{j} )} \\ = & \hat{I}_{1} + I_{2} + f(x_{{n + 1}} ,t_{j} ). \\ \end{aligned}$$
(27)

We approximate \({{{\hat{I}}}_1}\) as

$$\begin{aligned} \begin{array}{l} {{{\hat{I}}}_1} = \dfrac{{{\lambda _\alpha }}}{{\varGamma (1 - \alpha )}}\int \limits _0^{{x_n}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\dfrac{{\partial u(\tau ,{t_j})}}{{\partial \tau }}d\tau } \approx \dfrac{{{\lambda _\alpha }}}{{\varGamma (1 - \alpha )}}\int \limits _0^{{x_n}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\dfrac{{\partial {\bar{u}}(\tau ,{t_j})}}{{\partial \tau }}d\tau } \\ \,\,\,\,\,\,\,= \dfrac{{{\lambda _\alpha }}}{{\varGamma (1 - \alpha )}}\sum \limits _{i = 0}^{{m_n} - 1} {\int \limits _{{x_{{n_i}}}}^{{x_{{n_{i + 1}}}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\dfrac{{\partial {\bar{u}}(\tau ,{t_j})}}{{\partial \tau }}d\tau } } \\ \,\,\,\,\,\,\, = \dfrac{{{\lambda _\alpha }}}{{\varGamma (1 - \alpha )}}\sum \limits _{i = 0}^{{m_n} - 1} {\int \limits _{{x_{{n_i}}}}^{{x_{{n_{i + 1}}}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\dfrac{\partial }{{\partial \tau }}\left[ {\dfrac{{\tau - {x_{{n_{i + 1}}}}}}{{{x_{{n_i}}} - {x_{{n_{i + 1}}}}}}} \right. u_{{n_i}}^j + \left. 
{\dfrac{{\tau - {x_{{n_i}}}}}{{{x_{{n_{i + 1}}}} - {x_{{n_i}}}}}u_{{n_{i + 1}}}^j} \right] d\tau } } \\ \,\,\,\,\,\,\,= \dfrac{{{\lambda _\alpha }{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\sum \limits _{i = 0}^{{m_n} - 1} {\left[ {u_{{n_{i + 1}}}^j - u_{{n_i}}^j} \right] } \dfrac{{{{(n + 1 - {n_i})}^{1 - \alpha }} - {{(n + 1 - {n_{i + 1}})}^{1 - \alpha }}}}{{({n_{i + 1}} - {n_i})}}\\ \,\,\,\,\,\,\, = \dfrac{{{\lambda _\alpha }{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\sum \limits _{i = 0}^{{m_n} - 1} {\left[ {\theta _{i,n + 1}^R + \theta _{i,n + 1}^L} \right] } u_{{n_i}}^j = \dfrac{{{\lambda _\alpha }{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\sum \limits _{i = 0}^{{m_n} - 1} {\left[ {\theta _{i,n + 1}^Ru_{{n_i}}^j + \theta _{i + 1,n + 1}^Lu_{{n_{i + 1}}}^j} \right] } \\ \,\,\,\,\,\,\,= \dfrac{{{\lambda _\alpha }{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\sum \limits _{i = 0}^{{m_n} - 1} {\theta _{i,n + 1}^R\left[ {u_{{n_i}}^j - u_{{n_{i + 1}}}^j} \right] } = \dfrac{{{\lambda _\alpha }{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\sum \limits _{i = 0}^{{m_n}} {{\theta _{i,n + 1}}} u_{{n_i}}^j, \end{array} \end{aligned}$$
(28)

where \({{\bar{u}}}\) is the piecewise linear interpolation for u at the nodes \({x_{{n_i}}}\) and \({x_{{n_{i + 1}}}}\) with \(i=0,1,...,{m_n}-1\), and

$$\begin{aligned} \begin{array}{ll} {\theta _{i,n + 1}^R = \left\{ {\begin{array}{ll} {\dfrac{{{{(n + 1 - {n_{i + 1}})}^{1 - \alpha }} - {{(n + 1 - {n_i})}^{1 - \alpha }}}}{{({n_{i + 1}} - {n_i})}}, 0 \le i \le {m_n} - 1,}\\ {0, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, i = {m_n},} \end{array}} \right. }\\ {}\\ {\theta _{i,n + 1}^L = \left\{ {\begin{array}{ll} {0, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, i = 0,}\\ { - \dfrac{{{{(n + 1 - {n_i})}^{1 - \alpha }} - {{(n + 1 - {n_{i - 1}})}^{1 - \alpha }}}}{{({n_i} - {n_{i - 1}})}}, 1 \le i \le {m_n}.} \end{array}} \right. } \end{array} \end{aligned}$$
(29)

Also, the integral \({I_2}\) is approximated by Eq. (5). By using Eqs. (28) and (29), we have

$$\begin{aligned} {\theta _{i,n + 1}} = \theta _{i,n + 1}^R + \theta _{i,n + 1}^L, \theta _{i + 1,n + 1}^L = - \theta _{i,n + 1}^R. \end{aligned}$$
(30)

Remark 1

If we take, \({n_k} = k,\,\,\,0 \le k \le n\) (for uniform meshes), we can write

$$\begin{aligned} \sum\limits_{{k = 0}}^{n} {\rho _{{k,n + 1}} u_{k}^{j} } = & \sum\limits_{{k = 0}}^{{n - 1}} {\left[ {\rho _{{k,n + 1}}^{R} u_{k}^{j} + \rho _{{k + 1,n + 1}}^{L} u_{{k + 1}}^{j} } \right]} \\ = & \sum\limits_{{i = 0}}^{{m_{n} - 1}} {\sum\limits_{{k = n_{i} }}^{{n_{{i + 1}} - 1}} {\rho _{{k,n + 1}}^{R} \left[ {u_{k}^{j} - u_{{k + 1}}^{j} } \right]} } . \\ \end{aligned}$$
(31)

For the non-uniform mesh case, we take

$$\begin{aligned} \gamma _{{n + 1}}^{j} = & \lambda _{\alpha }\, {}_{0}^{C} D_{x}^{\alpha } u(x_{{n + 1}} ,t_{j} ) + f(x_{{n + 1}} ,t_{j} ) \\ = & \frac{{\lambda _{\alpha } }}{{\Gamma (1 - \alpha )}}\int\limits_{0}^{{x_{{n + 1}} }} {(x_{{n + 1}} - \tau )^{{ - \alpha }} \frac{{\partial u(\tau ,t_{j} )}}{{\partial \tau }}d\tau + f(x_{{n + 1}} ,t_{j} )} \\ = & \frac{{\lambda _{\alpha } h^{{ - \alpha }} }}{{\Gamma (2 - \alpha )}}\left[ {u_{{n + 1}}^{j} - u_{n}^{j} } \right] + \frac{{\lambda _{\alpha } h^{{ - \alpha }} }}{{\Gamma (2 - \alpha )}}\sum\limits_{{i = 0}}^{{m_{n} }} {\theta _{{i,n + 1}} } u_{{n_{i} }}^{j} + f_{{n + 1}}^{j} . \\ \end{aligned}$$
(32)

We take \(u({x_n},{t_j}) = u_n^j\), \(u({x_{{n_i}}},{t_j}) = u_{{n_i}}^j\), and \(f({x_n},{t_j}) = f_n^j\). Then, applying the Crank–Nicolson scheme to Eq. (1), the numerical method takes the following form.

$$\begin{aligned} \frac{{u_{n + 1}^j - u_{n + 1}^{j - 1}}}{\kappa } = \frac{1}{2}[\gamma _{n + 1}^j + \gamma _{n + 1}^{j - 1}]. \end{aligned}$$
(33)

Therefore, by using (32), Eq. (33) takes the following form

$$\begin{aligned} & u_{{n + 1}}^{j} - \frac{{\lambda _{\alpha } \kappa h^{{ - \alpha }} }}{{2\Gamma (2 - \alpha )}}\left[ {u_{{n + 1}}^{j} - u_{n}^{j} } \right] - \frac{{\lambda _{\alpha } \kappa h^{{ - \alpha }} }}{{2\Gamma (2 - \alpha )}}\sum\limits_{{i = 0}}^{{m_{n} }} {\theta _{{i,n + 1}} } u_{{n_{i} }}^{j} \\ & {\kern 1pt} = u_{{n + 1}}^{{j - 1}} + \frac{{\lambda _{\alpha } \kappa h^{{ - \alpha }} }}{{2\Gamma (2 - \alpha )}}\left[ {u_{{n + 1}}^{{j - 1}} - u_{n}^{{j - 1}} } \right] + \frac{{\lambda _{\alpha } \kappa h^{{ - \alpha }} }}{{2\Gamma (2 - \alpha )}}\sum\limits_{{i = 0}}^{{m_{n} }} {\theta _{{i,n + 1}} } u_{{n_{i} }}^{{j - 1}} \\ & \, + \frac{{\kappa (f_{{n + 1}}^{j} + f_{{n + 1}}^{{j - 1}} )}}{2}, \\ \end{aligned}$$
(34)

after some calculations, we have

$$\begin{aligned} u_{n + 1}^j + \sum \limits _{i = 0}^{{m_n+1}} {\varPhi _{{n_i},n + 1}^\alpha u_{{n_i}}^j} = u_{n + 1}^{j - 1} - \sum \limits _{i = 0}^{{m_n+1}} {\varPhi _{{n_i},n + 1}^\alpha u_{{n_i}}^{j - 1} + } \frac{{\kappa (f_{n + 1}^j + f_{n + 1}^{j - 1})}}{2}, \end{aligned}$$
(35)

where

$$\Phi _{{n_{i} ,n + 1}}^{\alpha } = \left\{ \begin{gathered} \frac{{ - \lambda _{\alpha } \kappa h^{{ - \alpha }} }}{{2\Gamma (2 - \alpha )}}\left[ {\theta _{{i,n + 1}}^{R} - \theta _{{i - 1,n + 1}}^{R} } \right],{\mkern 1mu} {\mkern 1mu} {\mkern 1mu} {\mkern 1mu} i = 1,2,...,m_{n} , \hfill \\ \frac{{ - \lambda _{\alpha } \kappa h^{{ - \alpha }} }}{{2\Gamma (2 - \alpha )}},{\mkern 1mu} {\mkern 1mu} {\mkern 1mu} {\mkern 1mu} {\mkern 1mu} {\mkern 1mu} {\mkern 1mu} {\mkern 1mu} i = m_{n} + 1. \hfill \\ \end{gathered} \right.$$

If we take \({U^j} = [u_1^j,\,u_2^j,...,\,u_{M}^j]^{T}\), then Eq. (35) takes the matrix form:

$$\begin{aligned} (I + {\hat{D}}){U^j} = (I - {\hat{D}}){U^{j - 1}} + {G^j}, \end{aligned}$$
(36)

where

$$\begin{aligned}{G^j} = \left[ {\begin{array}{ll} {\dfrac{\kappa }{2}\left[ {f_1^j + f_1^{j - 1}} \right] - \varPhi _{{n_0},1}^\alpha \left[ {u_{{n_0}}^j + u_{{n_0}}^{j - 1}} \right] }\\ {\dfrac{\kappa }{2}\left[ {f_2^j + f_2^{j - 1}} \right] - \varPhi _{{n_0},2}^\alpha \left[ {u_{{n_0}}^j + u_{{n_0}}^{j - 1}} \right] }\\ {\dfrac{\kappa }{2}\left[ {f_3^j + f_3^{j - 1}} \right] - \varPhi _{{n_0},3}^\alpha \left[ {u_{{n_0}}^j + u_{{n_0}}^{j - 1}} \right] }\\ \vdots \\ {\dfrac{\kappa }{2}\left[ {f_{M - 1}^j + f_{M - 1}^{j - 1}} \right] - \varPhi _{{n_0},M - 1}^\alpha \left[ {u_{{n_0}}^j + u_{{n_0}}^{j - 1}} \right] }\\ {\dfrac{\kappa }{2}\left[ {f_M^j + f_M^{j - 1}} \right] - \varPhi _{{n_0},M}^\alpha \left[ {u_{{n_0}}^j + u_{{n_0}}^{j - 1}} \right] } \end{array}} \right] \end{aligned}$$

and matrix \({\hat{D}}\) will be introduced in the next subsection.

3.3 An algorithm for generating the matrix \({\hat{D}}\)

In this subsection, we design an algorithm for generating the matrix \({\hat{D}}\) by using Algorithm 1 or 2.

Algorithm 3:

\(Matrix\ generation \ algorithm\)

We use the function GENXI or GENXII to generate the matrix \({\hat{D}}\) from the non-uniform mesh points on [0, L] chosen by Algorithm 1 (equal-height distribution) or Algorithm 2 (equal-area distribution). The matrix \({\hat{D}}\) is generated by the following process:

Step 1 We partition [0, L] into a uniform mesh with the space step size \(h = L/M\) and use the time step size \(\kappa = T/N\), where M, N are positive integers. Also, \({x_n} = nh\) for \(n = 1,...,M\) and \({t_j} = j\kappa\) for \(j = 1,...,N\).

Step 2 In this stage, we use the function GENXI or GENXII to select non-uniform mesh points on [0, L] by Algorithm 1 (equal-height distribution) or Algorithm 2 (equal-area distribution). We collect the indices of these non-uniform mesh points in a vector X:

$$\begin{aligned} X = [{n_0},{n_1},{n_2},...,{n_{{m_n}}}]. \end{aligned}$$

Within the uniform partition of [0, L], we replace the unused points by zero. We collect these entries in a vector \({{\bar{X}}}\):

$$\begin{aligned} {\bar{X}} = [0,..,0,{n_1},0,..,0,{n_2},0,...,0,...,0,...,0,{n_{{m_n}}}]. \end{aligned}$$

Step 3 In this stage, we compute the coefficients of \(u_i^j\), which are the matrix elements. If the i-th element of the vector \({\bar{X}}\) is zero, the coefficient is zero; if it is nonzero, the coefficient is obtained from the following relation:

$$\begin{aligned} \begin{array}{l} {{{\hat{D}}}_{n,i - 1}} = \dfrac{{ - {\lambda _\alpha }\kappa {h^{ - \alpha }}}}{{2\varGamma (2 - \alpha )}}\left[ {\dfrac{{{{(n - X(i))}^{1 - \alpha }} - {{(n - X(i + 1))}^{1 - \alpha }}}}{{(X(i + 1) - X(i))}}} \right. \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \left. { - \dfrac{{{{(n - X(i + 1))}^{1 - \alpha }} - {{(n - X(i + 2))}^{1 - \alpha }}}}{{(X(i + 2) - X(i + 1))}}} \right] ;\,\,\,\,2 \le i \le \textrm{length}({\bar{X}}) \end{array} \end{aligned}$$

Also, \({\hat{D}}_{2,1}\)=\(\dfrac{{ - {\lambda _\alpha }\kappa {h^{ - \alpha }}}}{{2\varGamma (2 - \alpha )}}\left[ {{2^{1 - \alpha }} - 2} \right]\) and \({\hat{D}}_{i,i}\)=\(\dfrac{{ - {\lambda _\alpha }\kappa {h^{ - \alpha }}}}{{2\varGamma (2 - \alpha )}}, i=1,...,n.\)

With these three steps, all the matrix elements will be obtained (see Algorithm 3).

[Algorithm 3: pseudocode listing]

Remark 2

To compute the total number of node evaluations (N) used in our methods, we design Algorithm 4. For example, the total number of nodes used in the proposed method with uniform meshes for solving PFDEs is computed from \(N= n(n + \frac{{n(n + 1)}}{2})\). So N is 650, 4600, 34400 and 265600, respectively, when \(h=\kappa =1/10,1/20,1/40,1/80\) and \(T=1,\,\,\,L=1\). Hence, the computational cost of the numerical method with uniform meshes for solving PFDEs grows rapidly as the mesh is refined.
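The count in Remark 2 can be reproduced directly; this one-line check of the formula (with n = 1/h) is our own illustration:

```python
def total_nodes(n):
    # N = n(n + n(n+1)/2): total node usage of the uniform-mesh scheme
    return n * (n + n * (n + 1) // 2)

counts = [total_nodes(n) for n in (10, 20, 40, 80)]  # h = 1/10, ..., 1/80
```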

[Algorithm 4: pseudocode listing]

4 Error analysis of methods

In this section, we study the error analysis of the methods with uniform and non-uniform meshes. Let A be a \(d \times d\) matrix and \(\left\| . \right\|\) be a norm on \({C^d}\). Let \({\lambda _1},\,{\lambda _2},...,{\lambda _\textrm{d}}\) be the eigenvalues of A. Then its spectral radius is

$$\begin{aligned} \rho (A) = \max \left\{ {\left| {{\lambda _1}} \right| ,\,\left| {{\lambda _2}} \right| ,...,\left| {{\lambda _\textrm{d}}} \right| } \right\} . \end{aligned}$$

Lemma 1

[53] (Gelfand’s Formula) Given any matrix norm \(\left\| . \right\|\) on \({C^d}\),

$$\begin{aligned} \rho (A) = \mathop {\lim }\limits _{n \rightarrow \infty } {\left\| {{A^n}} \right\| ^{\frac{1}{n}}}. \end{aligned}$$
(37)
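Gelfand's formula is easy to illustrate numerically; the matrix below is an arbitrary sample (not tied to the method's matrices), and the \(\infty\)-norm plays the role of the arbitrary matrix norm:

```python
import numpy as np

# Gelfand's formula: rho(A) = lim_n ||A^n||^(1/n), for any matrix norm.
A = np.array([[0.5, 0.4],
              [0.1, 0.3]])
rho = max(abs(np.linalg.eigvals(A)))    # spectral radius

# ||A^n||^(1/n) for growing n approaches rho from the norm side
approx = [np.linalg.norm(np.linalg.matrix_power(A, n), np.inf) ** (1.0 / n)
          for n in (1, 10, 100)]
```

For n = 1 the value is simply \(\|A\|_\infty\); by n = 100 it agrees with \(\rho(A)\) to a few decimal places.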

If \({A_1},{A_2},...,{A_s}\) are matrices that all commute, then by using Gelfand’s formula, we can write

$$\begin{aligned} \rho ({A_1}{A_2}...{A_s}) \le \rho ({A_1})\rho ({A_2})...\rho ({A_s}), \end{aligned}$$
(38)

because

$$\begin{aligned} \rho \left( {{A_1}{A_2} \cdots {A_s}} \right)&= \mathop {\lim }\limits _{n \rightarrow \infty } {\left\| {{{({A_1}{A_2} \cdots {A_s})}^n}} \right\| ^{\frac{1}{n}}} = \mathop {\lim }\limits _{n \rightarrow \infty } {\left\| {A_1^nA_2^n \cdots A_s^n} \right\| ^{\frac{1}{n}}}\\ &\le \mathop {\lim }\limits _{n \rightarrow \infty } {\left\| {A_1^n} \right\| ^{\frac{1}{n}}}\mathop {\lim }\limits _{n \rightarrow \infty } {\left\| {A_2^n} \right\| ^{\frac{1}{n}}} \cdots \mathop {\lim }\limits _{n \rightarrow \infty } {\left\| {A_s^n} \right\| ^{\frac{1}{n}}}\\ &= \rho \left( {{A_1}} \right) \rho \left( {{A_2}} \right) \cdots \rho \left( {{A_s}} \right) . \end{aligned}$$
(39)

Theorem 1

The proposed method with uniform meshes, which takes the following form,

$$\begin{aligned} (I + D){U^j} = (I-D){U^{j - 1}} + {F^j}, \end{aligned}$$
(40)

is stable for every initial vector \({U^0}\).

Proof

Since all eigenvalues of the matrix D are positive, all eigenvalues of \(I + D\) are greater than one; thus the matrix \(I + D\) is invertible. We can write

$$\begin{aligned} {U^j} = {(I + D)^{ - 1}}(I - D){U^{j - 1}} + {(I + D)^{ - 1}}{F^j}. \end{aligned}$$

If we take

$$\begin{aligned} A = {(I + D)^{ - 1}}(I - D),\,\,\,\,B = {(I + D)^{ - 1}}{F^j}, \end{aligned}$$

then we have

$$\begin{aligned} {U^j} = A{U^{j - 1}} + B. \end{aligned}$$

Suppose \({\upsilon _i}\), \(i = 1,2,...,M\), are the eigenvalues of the matrix D. Since for the matrix D we have \({\upsilon _i} = \dfrac{{ - {\lambda _\alpha }\kappa {h^{ - \alpha }}}}{{2\varGamma (2 - \alpha )}} > 0,\,\,i = 1,2,...,M,\) we can write

$$\begin{aligned} \rho (I - D)< 1,\,\,\,\,\,\rho ({(I + D)^{ - 1}}) < 1. \end{aligned}$$
(41)

Also, we can write

$$\begin{aligned} (I + D)(I - D) = I - {D^2} = (I - D)(I + D), \end{aligned}$$

and multiplying both sides of this identity by \({(I + D)^{ - 1}}\) on the left and on the right yields

$$\begin{aligned} {(I + D)^{ - 1}}(I - D) = (I - D){(I + D)^{ - 1}}, \end{aligned}$$

thus, \({(I + D)^{ - 1}}\) and \((I - D)\) commute. Therefore, by using Lemma 1 and (41), we have

$$\begin{aligned} \begin{array}{l} \rho (A) = \rho ({(I + D)^{ - 1}}(I - D))\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \le \rho ({(I + D)^{ - 1}})\rho ((I - D)) \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, < 1. \end{array} \end{aligned}$$

Thus, the proposed method with uniform meshes (12) is stable. \(\square\)
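The stability condition \(\rho ({(I + D)^{ - 1}}(I - D)) < 1\) can be checked numerically. The matrix D below is a sample symmetric positive definite matrix, not the method's actual coefficient matrix; for any such D with positive eigenvalues \(\mu\), the iteration matrix has eigenvalues \((1-\mu)/(1+\mu)\), all of modulus less than one:

```python
import numpy as np

# Sketch: for a positive definite D, rho((I+D)^{-1}(I-D)) < 1,
# which is the stability condition established in Theorem 1.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
D = M @ M.T + 6 * np.eye(6)           # sample symmetric positive definite D
I = np.eye(6)
Amp = np.linalg.solve(I + D, I - D)   # amplification matrix (I+D)^{-1}(I-D)
rho = max(abs(np.linalg.eigvals(Amp)))
assert rho < 1                        # bounded error amplification per step
```

The commutativity used in the proof also holds here, since \((I+D)^{-1}\) and \(I-D\) are both functions of D.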

Lemma 2

Let \(u \in {C^2}[0,L]\) and \(0<\alpha <1\), then

$$\begin{aligned} \left| {\int \limits _0^{{x_{n + 1}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\dfrac{\partial }{{\partial \tau }}\left[ {u(\tau ,{t_j}) - {\hat{u}}(\tau ,{t_j})} \right] } d\tau } \right| < Ch. \end{aligned}$$

Proof

By using the Taylor theorem, for \(\tau \in [{x_i},{x_{i + 1}}]\) there exists \({\xi _i} \in [{x_i},{x_{i + 1}}]\) such that \(u(\tau ,{t_j}) - {\hat{u}}(\tau ,{t_j}) = (\tau - {x_i})(\tau - {x_{i + 1}}){\left. {\dfrac{{{\partial ^2}u}}{{2\partial {\tau ^2}}}} \right|_{\tau = {\xi _i}}}\). Therefore,

$$\begin{aligned} &\left| {\int \limits _0^{{x_{n + 1}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\dfrac{\partial }{{\partial \tau }}\left[ {u(\tau ,{t_j}) - {\hat{u}}(\tau ,{t_j})} \right] } d\tau } \right| \\ &\quad \le \sum \limits _{i = 0}^n {\int \limits _{{x_i}}^{{x_{i + 1}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}} } \left| {\dfrac{\partial }{{\partial \tau }}\left[ {u(\tau ,{t_j}) - {\hat{u}}(\tau ,{t_j})} \right] } \right| d\tau \\ &\quad = \sum \limits _{i = 0}^n {\int \limits _{{x_i}}^{{x_{i + 1}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}} } \left| {\dfrac{\partial }{{\partial \tau }}\left[ {(\tau - {x_i})(\tau - {x_{i + 1}}){{\left. {\dfrac{{{\partial ^2}u}}{{2\partial {\tau ^2}}}} \right| }_{\tau = {\xi _i}}}} \right] } \right| d\tau \\ &\quad \le \dfrac{{{M_2}}}{2}\sum \limits _{i = 0}^n {\int \limits _{{x_i}}^{{x_{i + 1}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}} } \left| {2\tau - {x_i} - {x_{i + 1}}} \right| d\tau \\ &\quad \le \dfrac{{{M_2}}}{2}\sum \limits _{i = 0}^n {\int \limits _{{x_i}}^{{x_{i + 1}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}} } \left| {{x_{i + 1}} - {x_i}} \right| d\tau \\ &\quad \le {M_2}h\sum \limits _{i = 0}^n {\dfrac{{{{({x_{n + 1}} - {x_i})}^{1 - \alpha }} - {{({x_{n + 1}} - {x_{i + 1}})}^{1 - \alpha }}}}{{(1 - \alpha )}}} = {M_2}h\dfrac{{{{({x_{n + 1}})}^{1 - \alpha }}}}{{(1 - \alpha )}} \le Ch, \end{aligned}$$

where \({M_2} = \mathop {\sup }\limits _{z \in [0,L]} \left| {{{\left. {\dfrac{{{\partial ^2}u(\tau ,t)}}{{\partial {\tau ^2}}}} \right| }_{\tau = z}}} \right|\). \(\square\)
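The key ingredient of this proof is the standard piecewise linear interpolation error bound \(|u - {\hat{u}}| \le {M_2}{h^2}/8\). A quick numerical check (with \(u = \sin\), so \(M_2 \le 1\) on [0, 1]) confirms both the bound and the expected factor-of-four decay when h is halved:

```python
import numpy as np

# Piecewise-linear interpolation error check: for u in C^2,
# max|u - u_hat| <= M2 * h^2 / 8 on a uniform mesh of width h.
def interp_error(h):
    x = np.arange(0.0, 1.0 + h / 2, h)        # uniform mesh on [0, 1]
    fine = np.linspace(0.0, 1.0, 20001)       # dense evaluation grid
    u_hat = np.interp(fine, x, np.sin(x))     # piecewise linear interpolant
    return np.max(np.abs(np.sin(fine) - u_hat))

e1, e2 = interp_error(0.1), interp_error(0.05)
# e1 stays below 0.1**2 / 8, and e1/e2 is close to 4 (second-order decay)
```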

Lemma 3

[54] Let S be a positive definite matrix of order \(m-1\). Then, for any parameter \(\eta \ge 0\), the following inequality holds:

$$\begin{aligned} {\left\| {{{(I + \eta S)}^{ - 1}}(I - \eta S)} \right\| _\infty } \le 1. \end{aligned}$$
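Lemma 3 can be sanity-checked numerically in a deliberately simple case: for a diagonal positive definite S, the matrix \((I + \eta S)^{-1}(I - \eta S)\) is diagonal with entries \((1 - \eta s)/(1 + \eta s)\), each of absolute value at most one, so its \(\infty\)-norm is at most one for every \(\eta \ge 0\):

```python
import numpy as np

# Diagonal positive definite S: (I + eta*S)^{-1}(I - eta*S) is diagonal
# with entries (1 - eta*s)/(1 + eta*s), so its infinity norm is <= 1.
s = np.array([0.3, 1.0, 4.0, 9.5])            # sample positive diagonal
for eta in (0.0, 0.1, 1.0, 10.0):
    T = np.diag((1 - eta * s) / (1 + eta * s))
    assert np.linalg.norm(T, np.inf) <= 1.0 + 1e-15
```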

By using Lemma 2, we study the convergence of the method. For the method (16), we can write

$$\begin{aligned} \dfrac{{u_{n + 1}^j - u_{n + 1}^{j - 1}}}{\kappa }&= \dfrac{{{\lambda _\alpha }}}{2}\left( {_0^CD_x^\alpha u({x_{n + 1}},{t_j}) + _0^CD_x^\alpha u({x_{n + 1}},{t_{j - 1}})} \right) + \textrm{O}({\kappa ^2}),\\ {\lambda _\alpha }\,_0^CD_x^\alpha u({x_{n + 1}},{t_j})&= \dfrac{{{\lambda _\alpha }{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\left[ {u_{n + 1}^j - u_n^j} \right] + \dfrac{{{\lambda _\alpha }{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\sum \limits _{k = 0}^{n - 1} {{\rho _{k,n + 1}}u_k^j} + \textrm{O}(h). \end{aligned}$$
(42)

Thus, the local truncation error of (12) can be written as:

$$\begin{aligned} {T_{i,j}} = \textrm{O}({\kappa ^3} + \kappa {h}). \end{aligned}$$

Theorem 2

Let \({U^j}\) and \({u^j}\) be the numerical solution and exact solution of (12), respectively. Then, we have

$$\begin{aligned} {\left\| {{U^j} - {u^j}} \right\| _\infty } \le C({\kappa ^2} + {h}), \end{aligned}$$
(43)

where C is a positive constant.

Proof

We can write

$$\begin{aligned} U_{n + 1}^j + \sum \limits _{k = 0}^{n} {\varPsi _{k,n + 1}^\alpha U_k^j} = U_{n + 1}^{j - 1} - \sum \limits _{k = 0}^{n} {\varPsi _{k,n + 1}^\alpha U_k^{j - 1} + } \dfrac{{\kappa (f_{n + 1}^j + f_{n + 1}^{j - 1})}}{2} + {\mathrm{{O}}({\kappa ^3} + \kappa h)} \end{aligned}$$
(44)

and

$$\begin{aligned} u_{n + 1}^j + \sum \limits _{k = 0}^n {\varPsi _{k,n + 1}^\alpha u_k^j} = u_{n + 1}^{j - 1} - \sum \limits _{k = 0}^n {\varPsi _{k,n + 1}^\alpha u_k^{j - 1} + } \frac{{\kappa (f_{n + 1}^j + f_{n + 1}^{j - 1})}}{2}, \end{aligned}$$
(45)

Let us set \(e_i^j = U_i^j - u_i^j\). By subtracting (45) from (44), we have

$$\begin{aligned} e_{n + 1}^j + \sum \limits _{k = 0}^n {\varPsi _{k,n + 1}^\alpha e_k^j} = e_{n + 1}^{j - 1} - \sum \limits _{k = 0}^n {\varPsi _{k,n + 1}^\alpha e_k^{j - 1} + {\mathrm{{O}}({\kappa ^3} + \kappa h)},} \end{aligned}$$
(46)

thus, the matrix–vector form of (46) can be expressed as

$$\begin{aligned} (I + D){\textrm{E}^j} = (I - D){\textrm{E}^{j - 1}} + \mathrm{{O}}({\kappa ^3} + \kappa h)\chi , \end{aligned}$$

where \({\textrm{E}^j} = {[e_1^j,e_2^j,...,e_n^j]^T}\) and \(\chi = {[1,1,...,1]^T}\). Let us take

$$\begin{aligned} \varTheta = {(I + D)^{ - 1}}(I - D),\,\,\,\,\,\,\, \varXi = \mathrm{{O}}({\kappa ^3} + \kappa h){(I + D)^{ - 1}}, \end{aligned}$$

therefore, we can write

$$\begin{aligned} {\textrm{E}^j} = \varTheta {\textrm{E}^{j - 1}} + \varXi . \end{aligned}$$

By iterating, and using \({\textrm{E}^0} = 0\), we have

$$\begin{aligned} {\textrm{E}^j} = ({\varTheta ^{\mathrm{{j - 1}}}} + {\varTheta ^{\mathrm{{j - 2}}}} +... + I)\varXi . \end{aligned}$$

Since the eigenvalues of the matrix D are positive, D is a positive definite matrix. By Lemma 1 and Lemma 3, we can write

$$\begin{aligned} \left\| {{\text{E}}^{j} } \right\|_{\infty } \le & (\left\| {\Theta ^{{j - 1}} } \right\|_{\infty } + \left\| {\Theta ^{{j - 2}} } \right\|_{\infty } + ... + \left\| I \right\|_{\infty } )\left\| \Xi \right\|_{\infty } \\ \le & (1 + 1 + ... + 1)\left\| \Xi \right\|_{\infty } \\ \le & j{\text{O}}(\kappa ^{3} + \kappa h) = T{\text{O}}(\kappa ^{2} + h). \\ \end{aligned}$$

Finally,

$$\begin{aligned} {\left\| {{\textrm{E}^j}} \right\| _\infty } \le C({\kappa ^2} + h). \end{aligned}$$

\(\square\)
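In practice, the rate \(O(\kappa^2 + h)\) of Theorem 2 is verified by estimating the observed order from errors at successive step sizes, \(p \approx \log_2(e_h/e_{h/2})\). The error values below are hypothetical, for illustration only (they mimic a first-order decay, the dominant h-term when \(\kappa = h\)):

```python
import math

# Observed convergence order from errors at step sizes h and h/2.
def observed_order(e_h, e_half):
    return math.log2(e_h / e_half)

# hypothetical error values, halving with h (first-order behavior)
errors = {1/10: 3.2e-3, 1/20: 1.6e-3, 1/40: 8.0e-4}
p1 = observed_order(errors[1/10], errors[1/20])
p2 = observed_order(errors[1/20], errors[1/40])
# both estimates equal 1.0 for this synthetic data
```

This is the same post-processing used to fill the convergence-order columns of Tables 1–4.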

Theorem 3

Let \(u \in {C^1}[0,L]\) and \(\alpha \in (0,1)\), then for the equal-area distribution method, we have

$$\begin{aligned} \dfrac{1}{{\varGamma (1 - \alpha )}}\left| {\int \limits _0^{{x_{n + 1}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\left[ {\dfrac{{\partial \hat{u} (\tau ,{t_j})}}{{\partial \tau }} - \dfrac{{\partial \bar{u}(\tau ,{t_j})}}{{\partial \tau }}} \right] d\tau } } \right| \le C\dfrac{{\varDelta S}}{h} \end{aligned}$$
(47)

and, for the equal-height distribution method

$$\begin{aligned} \dfrac{1}{{\varGamma (1 - \alpha )}}\left| {\int \limits _0^{{x_{n + 1}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\left[ {\dfrac{{\partial \hat{u} (\tau ,{t_j})}}{{\partial \tau }} - \dfrac{{\partial \bar{u}(\tau ,{t_j})}}{{\partial \tau }}} \right] d\tau } } \right| \le C\dfrac{\varDelta u}{{{h^2}}}, \end{aligned}$$
(48)

specifically, when \(\varDelta S = \textrm{O}({h^2})\) or \(\varDelta u = \textrm{O}({h^3})\), then

$$\begin{aligned} \dfrac{1}{{\varGamma (1 - \alpha )}}\left| {\int \limits _0^{{x_{n + 1}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\left[ {\dfrac{{\partial \hat{u} (\tau ,{t_j})}}{{\partial \tau }} - \dfrac{{\partial \bar{u}(\tau ,{t_j})}}{{\partial \tau }}} \right] d\tau } } \right| \le Ch, \end{aligned}$$
(49)

where \({\hat{u}}\) is the piecewise linear interpolation for u at the method with uniform meshes and \({{\bar{u}}}\) is the piecewise linear interpolation for u at the method with non-uniform meshes.

Proof

Let \({\hat{u}}\) and \({{\bar{u}}}\) be the piecewise linear interpolations of u for the method with uniform meshes and the method with non-uniform meshes, respectively. For the equal-area distribution method, by using (25), we can write

$$\begin{aligned} {({x_{n + 1}} - {x_{{n_i}}})^{1 - \alpha }} - {({x_{n + 1}} - {x_{{n_{i + 1}}}})^{1 - \alpha }} \le (1 - \alpha )\varDelta S, \end{aligned}$$

thus, we have

$$\begin{aligned} \left[ {{{(n + 1 - {n_i})}^{1 - \alpha }} - {{(n + 1 - {n_{i + 1}})}^{1 - \alpha }}} \right] \le \frac{{(1 - \alpha )\varDelta S}}{{{h^{1 - \alpha }}}}. \end{aligned}$$
(50)

By using (50), (6) and (28), we can write

$$\begin{aligned} \begin{array}{l} \dfrac{1}{{\varGamma (1 - \alpha )}}\begin{array}{ll} {\left| {\int \limits _0^{{x_{n + 1}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\left[ {\dfrac{{\partial \hat{u} (\tau ,{t_j})}}{{\partial \tau }} - \dfrac{{\partial \bar{u}(\tau ,{t_j})}}{{\partial \tau }}} \right] d\tau } } \right| } \end{array}\\ = \dfrac{{{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\left| {\sum \limits _{k = 0}^n {{\rho _{k,n + 1}}} u_k^j - \sum \limits _{i = 0}^{{m_n}} {{\theta _{i,n + 1}}} u_{{n_i}}^j} \right| \\ = \dfrac{{{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\left| {\sum \limits _{i = 0}^{{m_n} - 1} {\sum \limits _{k = {n_i}}^{{n_{i + 1}} - 1} {\rho _{k,n + 1}^R} (u_k^j} - u_{k + 1}^j) - \sum \limits _{i = 0}^{{m_n} - 1} {\theta _{i,n + 1}^R} (u_{{n_i}}^j - u_{{n_{i + 1}}}^j)} \right| \\ = \dfrac{{{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\left| {\sum \limits _{i = 0}^{{m_n} - 1} {\sum \limits _{k = {n_i}}^{{n_{i + 1}} - 1} {\left[ {\rho _{k,n + 1}^R} \right. } (u_{{n_i}}^j + } \dfrac{{\partial u({\xi _k},{t_j})}}{{\partial x}}({x_k} - {x_{{n_i}}}))} \right. \\ \left. { - \rho _{k,n + 1}^R(u_{{n_i}}^j + \dfrac{{\partial u({\xi _{k + 1}},{t_j})}}{{\partial x}}({x_{k + 1}} - {x_{{n_i}}}))} \right] - \sum \limits _{i = 0}^{{m_n} - 1} {\left[ {\theta _{i,n + 1}^R} \right. } u_{{n_i}}^j - \theta _{i,n + 1}^R(u_{{n_i}}^j\\ \left. { + \dfrac{{\partial u({\xi _{{n_{i + 1}}}},{t_j})}}{{\partial x}}({x_{{n_{i + 1}}}} - {x_{{n_i}}}))} \right| \le \dfrac{{{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}\left| {\sum \limits _{i = 0}^{{m_n} - 1} {\left[ {\sum \limits _{k = {n_i}}^{{n_{i + 1}} - 1} {(\rho _{k,n + 1}^R - \rho _{k,n + 1}^R)} } \right. } } \right. \\ \left. { - \left. 
{(\theta _{i,n + 1}^R - \theta _{i,n + 1}^R)} \right] u_{{n_i}}^j} \right| + \dfrac{{{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}{\left\| {\dfrac{{\partial u}}{{\partial x}}} \right\| _\infty }\sum \limits _{i = 0}^{{m_n} - 1} {({x_{{n_{i + 1}}}} - {x_{{n_i}}})} \left| { - \theta _{i,n + 1}^R} \right| \\ \le \dfrac{{{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}{\left\| {\dfrac{{\partial u}}{{\partial x}}} \right\| _\infty }\sum \limits _{i = 0}^{{m_n} - 1} {({x_{{n_{i + 1}}}} - {x_{{n_i}}})} \left[ {{{(n + 1 - {n_i})}^{1 - \alpha }} - {{(n + 1 - {n_{i + 1}})}^{1 - \alpha }}} \right] \\ \le \dfrac{{{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}{\left\| {\dfrac{{\partial u}}{{\partial x}}} \right\| _\infty }{x_n}\dfrac{{(1 - \alpha )}}{{{h^{1 - \alpha }}}}\varDelta S = \dfrac{{{{x_n}}}}{{\varGamma (1 - \alpha )}}{\left\| {\dfrac{{\partial u}}{{\partial x}}} \right\| _\infty }\dfrac{{\varDelta S}}{h}\\ \le C\dfrac{{\varDelta S}}{h}. \end{array} \end{aligned}$$
(51)

We assume

$$\begin{aligned} \int _{{x_{{n_{{i^*}}}}}}^{{x_{{n_{{i^*} + 1}}}}} {{{({x_{n + 1}} - \tau )}^{-\alpha }}d\tau } = \mathop {\max }\limits _{0 \le i \le {m_n} - 1} \int _{{x_{{n_i}}}}^{{x_{{n_{i + 1}}}}} {{{({x_{n + 1}} - \tau )}^{-\alpha }}d\tau }, \end{aligned}$$
(52)

by using (22), we have

$$\begin{aligned} {({x_{n + 1}} - {x_{{n_{{i^*} + 1}}}})^{ - \alpha }} - {({x_{n + 1}} - {x_{{n_{{i^*}}}}})^{ - \alpha }} \le \varDelta u, \end{aligned}$$

thus, we can write

$$\begin{aligned} {({x_{n + 1}} - {x_{{n_{{i^*} + 1}}}})^{ - \alpha }} \le {({x_{n + 1}} - {x_{{n_{{i^*}}}}})^{ - \alpha }} + \varDelta u. \end{aligned}$$
(53)

Therefore, we can write

$$\begin{aligned} & \int_{{x_{{n_{{i^{*} }} }} }}^{{x_{{n_{{i^{*} + 1}} }} }} {(x_{{n + 1}} - \tau )^{{ - \alpha }} d\tau \le (x_{{n + 1}} - x_{{n_{{i^{*} + 1}} }} )^{{ - \alpha }} (x_{{n_{{i^{*} + 1}} }} - x_{{n_{{i^{*} }} }} )} \\ & \, \le \left[ {(x_{{n + 1}} - x_{{n_{{i^{*} }} }} )^{{ - \alpha }} + \Delta u} \right](x_{{n_{{i^{*} + 1}} }} - x_{{n_{{i^{*} }} }} ), \\ \end{aligned}$$
(54)

by means of the mean value theorem for \(v(\tau ) = {({x_{n + 1}} - \tau )^{ - \alpha }}\), there is a point \({x_{{n_{{i^*}}}}}\) such that

$$\begin{aligned} {({x_{n + 1}} - {x_{{n_{i + 1}}}})^{ - \alpha }} - {({x_{n + 1}} - {x_{{n_i}}})^{ - \alpha }}&= ({x_{{n_{i + 1}}}} - {x_{{n_i}}})\,\alpha \,{({x_{n + 1}} - {x_{{n_{{i^*}}}}})^{ - \alpha - 1}}\\ &= h({n_{i + 1}} - {n_i})\,\alpha \,{({x_{n + 1}} - {x_{{n_{{i^*}}}}})^{ - \alpha - 1}} \le \varDelta u, \end{aligned}$$

therefore, we have

$$\begin{aligned} {({x_{n + 1}} - {x_{{n_{{i^*}}}}})^{ - \alpha }} \le \frac{{\varDelta u}}{{h\alpha }}({x_{n + 1}} - {x_{{n_{{i^*}}}}}). \end{aligned}$$
(55)

By using (54) and (55), we can write

$$\begin{aligned} \int _{{x_{{n_{{i^*}}}}}}^{{x_{{n_{{i^*} + 1}}}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}d\tau \le \left[ {\dfrac{{\varDelta u}}{{h\alpha }}({x_{n + 1}} - {x_{{n_{{i^*}}}}}) + \varDelta u} \right] } ({x_{{n_{{i^*} + 1}}}} - {x_{{n_{{i^*}}}}}). \end{aligned}$$
(56)

Finally, by using (51), and the following relation

$$\begin{aligned} \left[ {{{(n + 1 - {n_i})}^{1 - \alpha }} - {{(n + 1 - {n_{i + 1}})}^{1 - \alpha }}} \right] = \dfrac{{(1 - \alpha )}}{{{h^{1 - \alpha }}}}\int _{{x_{{n_i}}}}^{{x_{{n_{i + 1}}}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}d\tau }, \end{aligned}$$

we can write

$$\begin{aligned} &\dfrac{1}{{\varGamma (1 - \alpha )}}\left| {\int \limits _0^{{x_{n + 1}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}\left[ {\dfrac{{\partial \hat{u}(\tau ,{t_j})}}{{\partial \tau }} - \dfrac{{\partial \bar{u}(\tau ,{t_j})}}{{\partial \tau }}} \right] d\tau } } \right| \\ &\quad \le \dfrac{{{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}{\left\| {\dfrac{{\partial u}}{{\partial x}}} \right\| _\infty }\sum \limits _{i = 0}^{{m_n} - 1} {({x_{{n_{i + 1}}}} - {x_{{n_i}}})} \left[ {{{(n + 1 - {n_i})}^{1 - \alpha }} - {{(n + 1 - {n_{i + 1}})}^{1 - \alpha }}} \right] \\ &\quad = \dfrac{{{h^{ - \alpha }}}}{{\varGamma (2 - \alpha )}}{\left\| {\dfrac{{\partial u}}{{\partial x}}} \right\| _\infty }\sum \limits _{i = 0}^{{m_n} - 1} {({x_{{n_{i + 1}}}} - {x_{{n_i}}})} \dfrac{{(1 - \alpha )}}{{{h^{1 - \alpha }}}}\int _{{x_{{n_i}}}}^{{x_{{n_{i + 1}}}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}d\tau } \\ &\quad \le \dfrac{1}{{\varGamma (1 - \alpha )h}}{\left\| {\dfrac{{\partial u}}{{\partial x}}} \right\| _\infty }\sum \limits _{i = 0}^{{m_n} - 1} {({x_{{n_{i + 1}}}} - {x_{{n_i}}})} \int _{{x_{{n_{{i^*}}}}}}^{{x_{{n_{{i^*} + 1}}}}} {{{({x_{n + 1}} - \tau )}^{ - \alpha }}d\tau } \\ &\quad \le \dfrac{{{x_n}}}{{\varGamma (1 - \alpha )h}}{\left\| {\dfrac{{\partial u}}{{\partial x}}} \right\| _\infty }\left[ {\dfrac{{\varDelta u}}{{h\alpha }}({x_{n + 1}} - {x_{{n_{{i^*}}}}}) + \varDelta u} \right] ({x_{{n_{{i^*} + 1}}}} - {x_{{n_{{i^*}}}}}) \\ &\quad \le C\dfrac{{\varDelta u}}{{{h^2}}}. \end{aligned}$$

\(\square\)

5 Numerical experiments

In this section, some examples are presented to illustrate the error bounds of the two methods with uniform and non-uniform meshes.

Example 1

Consider the following partial fractional differential equation:

$$\begin{aligned} \left\{ {\begin{array}{l} {\dfrac{{\partial u(x,t)}}{{\partial t}} = - _0^CD_x^\alpha u(x,t) + f(x,t), \,\,\,\,x \in [0,1],}\\ {u(x,0) = {x^2}{{(1 - x)}^2}, \,\,\,\, 0< \alpha < 1,}\\ {u(0,t) = 0,\,\,\,\,\,\,\,\,u(1,t) = 0,} \end{array}} \right. \end{aligned}$$
(57)

where

$$\begin{aligned} f(x,t) = - {x^2}{(1 - x)^2}{e^{ - t}} + {e^{ - t}}\left[ {\dfrac{{\varGamma (5){x^{4 - \alpha }}}}{{\varGamma (5 - \alpha )}}} \right. \left. { - \dfrac{{2\varGamma (4){x^{3 - \alpha }}}}{{\varGamma (4 - \alpha )}} + \dfrac{{\varGamma (3){x^{2 - \alpha }}}}{{\varGamma (3 - \alpha )}}} \right] . \end{aligned}$$

The exact solution of (57) is \(u(x,t) = {x^2}{(1 - x)^2}{e^{ - t}}\).
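A quick consistency check on this source term: the bracketed term in f is the Caputo derivative of \(x^2(1-x)^2 = x^2 - 2x^3 + x^4\), obtained term by term from the power rule \({}_0^CD_x^\alpha x^p = \frac{\varGamma(p+1)}{\varGamma(p+1-\alpha)}x^{p-\alpha}\), so at \(\alpha = 1\) it must reduce to the ordinary derivative \(2x - 6x^2 + 4x^3\). The sketch below verifies this limit and the boundary conditions:

```python
import math

g = math.gamma

# Bracketed term of f(x,t) in Example 1: the Caputo derivative of
# x^2(1-x)^2 via the power rule, applied term by term.
def caputo_bracket(x, alpha):
    return (g(5) * x**(4 - alpha) / g(5 - alpha)
            - 2 * g(4) * x**(3 - alpha) / g(4 - alpha)
            + g(3) * x**(2 - alpha) / g(3 - alpha))

def u_exact(x, t):
    return x**2 * (1 - x)**2 * math.exp(-t)

# at alpha = 1 the Caputo derivative is the ordinary derivative
for x in (0.25, 0.5, 0.9):
    d_exact = 2*x - 6*x**2 + 4*x**3
    assert abs(caputo_bracket(x, 1.0) - d_exact) < 1e-12

# boundary conditions of (57)
assert u_exact(0.0, 0.3) == 0.0 and u_exact(1.0, 0.3) == 0.0
```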

To solve this example by uniform and equidistributing meshes, different values of \(\alpha\), \(h=\kappa\), \(\varDelta u\) and \(\varDelta S\) with \(T=1, L=1\) are utilized. In Tables 1 and 2, we report the results for this problem. The proposed method with equidistributing meshes does not lose computational accuracy, while the computation cost of the method (36) is decreased compared to that of the proposed method with uniform meshes (16) (see column N in the tables). Other numerical results are shown in Fig. 1.

Fig. 1

The exact and numerical solutions by (16) and (36) (using Algorithms 1 and 2) for Example 1 (57), at \(T = 1,\,\,L=1\) and \(\alpha =0.2,\,\,\varDelta u = h,\,\,\varDelta S = 2\,h,\,\,h = 1/20\)

Table 1 Absolute errors and convergence orders for Example 1 by the listed methods at \(L= 1,\,\,\, T=1\), for different \(\varDelta u\), \(\varDelta S\), \(\alpha\) and h, respectively
Table 2 Absolute errors and convergence orders for Example 1 by the listed methods at \(L= 1,\,\,\, T=1\), for different \(\varDelta u\), \(\varDelta S\), \(\alpha\) and h, respectively

The choices of \(\varDelta u\) and \(\varDelta S\) in Algorithm 1 or Algorithm 2 for generating the mesh points are very important, because the generated mesh depends on \(h=\kappa\), \(\alpha\) and \(\varDelta u\) or \(\varDelta S\). If we choose suitable \(\varDelta u\) and \(\varDelta S\), then the computation cost of the non-uniform method (36) is decreased compared to that of the uniform method (16), while the numerical accuracy of the non-uniform method does not decrease.
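As an illustration of the equal-area criterion behind (50), a uniform mesh can be coarsened greedily so that each merged cell contributes at most \((1-\alpha)\varDelta S\) to the area of the kernel \((x_{n+1}-\tau)^{-\alpha}\). This is a hedged sketch, not the paper's Algorithm 2; the values \(\alpha = 0.5\) and \(\varDelta S = 10h\) are chosen to match the setting used in Example 2:

```python
# Greedy equal-area coarsening sketch: starting from uniform nodes
# x_0, ..., x_{n+1}, keep merging consecutive cells as long as the
# kernel area over the merged cell,
#   (x_{n+1}-x_a)^{1-alpha} - (x_{n+1}-x_b)^{1-alpha},
# would stay below (1-alpha)*dS.
def equal_area_nodes(n, h, alpha, dS):
    x_end = (n + 1) * h
    area = lambda a, b: (x_end - a)**(1 - alpha) - (x_end - b)**(1 - alpha)
    nodes, a = [0.0], 0.0
    for k in range(1, n + 1):
        b = k * h
        if area(a, b + h) > (1 - alpha) * dS:   # one more cell would overflow
            nodes.append(b)                     # close the segment at b
            a = b
    nodes.append(x_end)
    return nodes

coarse = equal_area_nodes(n=20, h=1/20, alpha=0.5, dS=10/20)
# cells merge where the kernel is flat (small tau) and stay of width h
# near x_{n+1}, where the kernel is singular
```

Each retained segment either has width h or satisfies the area budget, which is exactly the property the equal-area meshes exploit in Theorem 3.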

Example 2

We consider the following PFDE:

$$\begin{aligned} \left\{ {\begin{array}{ll} {\dfrac{{\partial u(x,t)}}{{\partial t}} = - _0^CD_x^\alpha u(x,t) + g(x,t), \,\,\,\,x \in [0,1],}\\ {u(x,0) = {x^2}{{(1 - x)}^2}, \,\,\,\, 0< \alpha < 1,}\\ {u(0,t) = 0,\,\,\,\,\,\,\,\,u(1,t) = 0,} \end{array}} \right. \end{aligned}$$
(58)

where \(g(x,t)\) is defined as:

$$\begin{aligned} g(x,t) = - {x^2}{(1 - x)^2}{\sin (t)} + {\cos (t)}\left[ {\dfrac{{\varGamma (5){x^{4 - \alpha }}}}{{\varGamma (5 - \alpha )}}} \right. \left. { - \dfrac{{2\varGamma (4){x^{3 - \alpha }}}}{{\varGamma (4 - \alpha )}} + \dfrac{{\varGamma (3){x^{2 - \alpha }}}}{{\varGamma (3 - \alpha )}}} \right] . \end{aligned}$$
Fig. 2

The exact and numerical solutions by (16) and (36) (using Algorithms 1 and 2) for Example 2 (58), at \(T = 1,\,\,L=1\) and \(\alpha =0.5,\,\,\varDelta u = 5\,h,\,\,\varDelta S = 10\,h,\,\,h = 1/20\)

Table 3 Absolute errors and convergence orders for Example 2 by the listed methods at \(L= 1,\,\,\, T=1\), for different \(\varDelta u\), \(\varDelta S\), \(\alpha\) and h, respectively
Table 4 Absolute errors and convergence orders for Example 2 by the listed methods at \(L= 1,\,\,\, T=1\), for different \(\varDelta u\), \(\varDelta S\), \(\alpha\) and h, respectively

The exact solution of this example (58) is \(u(x,t) = {x^2}{(1 - x)^2}{\cos (t)}\).

In Tables 3 and 4, we show the absolute errors of the proposed methods with uniform (16) and non-uniform meshes (36). In these tables, the results of the proposed methods for different values of \(h=\kappa\), \(\varDelta u\), \(\varDelta S\) and \(\alpha\), with \(T = 1, L=1\), are compared. Tables 3 and 4 show that the proposed method with non-uniform meshes works well and that the convergence order of the proposed method with uniform meshes is \(O(\kappa ^2+h)\). Other results are shown in Figs. 2 and 3.

Table 5 Numerical solutions at \(x=1\) and \(t=1\) (\({u_{1,1}^{(h)}}\)) and \(\left| {u_{1,1}^{(h)} - u_{1,1}^{(\frac{h}{2})}} \right|\) for Example 3 by the listed methods for different \(\varDelta u\), \(\varDelta S\), \(\alpha\) and h, respectively

Example 3

Consider the following partial fractional differential equation:

$$\begin{aligned} \left\{ {\begin{array}{l} {\dfrac{{\partial u(x,t)}}{{\partial t}} + _0^CD_x^\alpha u(x,t) = 0, \,\,\,\,x \in [0,1],}\\ {u(x,0) = {x^2}{{(1 - x)}^2}, \,\,\,\, 0< \alpha < 1,}\\ {u(0,t) = 0,\,\,\,\,\,\,\,\,u(1,t) = 0,} \end{array}} \right. \end{aligned}$$
(59)

The exact solution of (59) is unavailable.

In Table 5, using the proposed methods with uniform and equidistributing meshes, we report the numerical solutions of this problem at \(x=1\) and \(t=1\), \({u_{1,1}^{(h)}}\), and the differences \(\left| {u_{1,1}^{(h)} - u_{1,1}^{(\frac{h}{2})}} \right|\) for different values of \(\alpha\), \(h=\kappa\), \(\varDelta u\) and \(\varDelta S\), where \({u_{1,1}^{(h)}}\) is the numerical solution of Example 3 at \(x=1\) and \(t=1\) with step size h. Other results are shown in Fig. 4.

Fig. 3

The exact and numerical solutions by (16) and (36) (using Algorithms 1 and 2) for Example 2, with different \(\varDelta u\), \(\varDelta S\), \(\alpha\) and \(h=1/40\), respectively

Fig. 4

The numerical solutions by (16) and (36) (using Algorithms 1 and 2) for Example 3, with different \(\varDelta u\), \(\varDelta S\), \(\alpha\) and \(h=1/40\), respectively

6 Conclusion

In this paper, we design and develop algorithms based on piecewise linear interpolation polynomials for solving PFDEs with uniform and non-uniform meshes. The equal-height and equal-area distribution meshes are produced by means of these algorithms. We have also used these algorithms (the equal-height or equal-area distribution algorithm) to generate the coefficient matrix of the proposed method with non-uniform meshes. Next, the error bounds of the proposed methods are obtained. The computation cost of the numerical method with uniform meshes for the PFDEs grows nonlinearly with the number of time steps, whereas this work shows that the computation cost of the numerical method with non-uniform meshes grows only linearly, while the numerical accuracy does not deteriorate. Finally, we proved that the presented numerical method has a convergence order of \(O(\kappa ^2 + h)\).