1 Introduction

Fractional differential equations form an exciting field of applied mathematics, providing very important tools for describing and studying natural phenomena. Many authors are interested in the theory of fractional differential equations, since they give abstract formulations for many problems in physics, hydrology, engineering, chemistry, and finance. For details, we refer the reader to the books [1,2,3,4,5,6].

In recent studies, the finite difference method has made notable advances in solving fractional partial differential equations; see, e.g., [7, 8]. Several authors have proposed effective numerical approximations for SFDEs; for more details see the finite element method [9], the finite difference method [10], and spectral methods [11, 12], where the authors considered a reaction–diffusion model with the spatial operator described by the fractional Laplacian and a nonlinear source term, and developed a second-order stabilized semi-implicit time-stepping Fourier spectral method. They identified a practical criterion for choosing the time step size that ensures the stability of the semi-implicit method, and the efficiency of the approach was illustrated by solving several models of practical interest. Several second-order numerical schemes have also been proposed for solving STFDEs; we refer the reader to the papers [13,14,15], where the authors introduce a fourth-order finite difference approximation of Riemann–Liouville fractional derivatives together with a first-order approximation of Caputo fractional derivatives, discuss the stability and convergence of the proposed scheme, and conclude with numerical experiments demonstrating its efficiency. A second-order exponential wave integrator in time combined with the Fourier spectral method in space was applied in [16] to derive a scheme for a nonlinear space fractional Klein–Gordon equation (NSFKGE); the authors prove improved uniform error bounds using the regularity compensation oscillation technique, discuss complex and oscillatory complex NSFKGE with nonlinear terms of general power exponents, and confirm the theoretical results by numerical experiments. In the present study, we consider a numerical method for the following space-time fractional equation of diffusion (STFDE):

$$\begin{aligned} {}_{CF} D_{0, t}^\alpha u(x, t)=c(x, t) { }_{RL} D_{0, x}^\beta u(x, t)+f(x, t),\quad 0<x<L, 0<t \le T, \end{aligned}$$
(1.1)

subject to initial condition

$$\begin{aligned} u(x, 0)=\varphi (x),\quad 0 \le x \le L, \end{aligned}$$
(1.2)

and the boundary conditions

$$\begin{aligned} u(0, t)=0, \quad u(L, t)=v(t),\quad 0<t \le T. \end{aligned}$$
(1.3)

where \(\alpha \in (0,1)\), \(\beta \in (1,2)\), and \(c(x, t)>0\) is the diffusion coefficient.

Here \({ }_{R L} D_{0, x}^\beta u(x, t)\) is the Riemann–Liouville derivative of order \(\beta \in (1,2]\) (see [3, 17]), defined by

$$\begin{aligned} { }_{R L} D_{0, x}^\beta u(x, t)=\left\{ \begin{array}{ll} \frac{1}{\Gamma (2-\beta )} \frac{\partial ^2}{\partial x^2} \int _0^x \frac{u(\xi , t) d \xi }{(x-\xi )^{\beta -1}}, &{}\quad 1<\beta <2, \\ \frac{\partial ^2 u(x, t)}{\partial x^2},&{}\quad \beta =2. \end{array}\right. \end{aligned}$$
(1.4)

and \({ }_{C F} D_{0, t}^\alpha u(x, t)\) is the Caputo–Fabrizio derivative of order \(\alpha \in [0,1]\) (see [18]), defined for \(u \in H^1(a, b)\), \(b>a\), by

$$\begin{aligned} { }_{ CF} D_{0, t}^\alpha u(x, t)=\frac{M(\alpha )}{1-\alpha } \int _0^t \frac{\partial u(x,\xi ) }{\partial \xi }\exp \left[ -\alpha \frac{t-\xi }{1-\alpha }\right] d \xi , \end{aligned}$$
(1.5)

where \(M(\alpha )\) is a normalization function such that \(M(0)=M(1)=1\) [18]. If the function does not belong to \(H^1(a, b)\), the derivative can be redefined as

$$\begin{aligned} { }_{ CF} D_{0, t}^\alpha u(x, t)=\frac{\alpha M(\alpha )}{1-\alpha } \int _0^t(u(x,t)-u(x,\xi )) \exp \left[ -\alpha \frac{t-\xi }{1-\alpha }\right] d \xi \end{aligned}$$
(1.6)
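Definition (1.5) is straightforward to evaluate numerically for simple functions, which is useful as a sanity check on the discretizations developed below. A minimal Python sketch, assuming the common normalization \(M(\alpha )=1\): for \(u(t)=t\) the definition integrates in closed form to \(\frac{1}{\alpha }\left( 1-\exp \left[ -\frac{\alpha t}{1-\alpha }\right] \right) \).

```python
import numpy as np

def cf_derivative(du_dt, t, alpha, n_quad=20001):
    """Caputo-Fabrizio derivative (1.5) by the composite trapezoidal rule, M(alpha) = 1."""
    xi = np.linspace(0.0, t, n_quad)
    vals = du_dt(xi) * np.exp(-alpha * (t - xi) / (1.0 - alpha))
    dxi = xi[1] - xi[0]
    return np.sum((vals[1:] + vals[:-1]) * 0.5 * dxi) / (1.0 - alpha)

alpha, t = 0.5, 1.0
approx = cf_derivative(lambda s: np.ones_like(s), t, alpha)   # u(t) = t, so u'(t) = 1
exact = (1.0 - np.exp(-alpha * t / (1.0 - alpha))) / alpha    # closed form for u(t) = t
```

The quadrature value agrees with the closed form to within the trapezoidal error, which is far below the discretization errors considered later.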

The paper is organized as follows. In Sect. 2, we propose an implicit finite difference scheme to approximate the STFDE (1.1)–(1.3), using a Crank–Nicolson-type method to discretize the Caputo–Fabrizio time fractional derivative of order \(\alpha \in (0, 1)\), while the Riemann–Liouville space fractional derivative of order \(\beta \in (1, 2)\) is discretized by the standard Grünwald–Letnikov formula. We study the stability and convergence of the discrete scheme in Sect. 3, and some numerical experiments are performed in Sect. 4 to verify the efficiency and accuracy of the method.

2 Finite difference scheme

For the implicit numerical approximation scheme, we define the space and time steps \(h=\frac{\left( x_R-x_L\right) }{N}=\frac{L}{N}\) and \(\Delta t=\frac{T}{n}\), respectively, so that \(t_k=k \Delta t\), \(k=0,1, \ldots , n\), with \(0 \le t_k \le T\), and \(x_i=x_L+i h\) for \(i=0,1, \ldots , N\). Let \(U\left( x_i, t_k\right) \), \(i=1,2, \ldots, N\), \(k=1,2, \ldots, n\), denote the exact solution of the fractional partial differential equation (1.1), (1.2), and (1.3) at the node point \(\left( x_i, t_k\right) \), and let \(U_i^k\) be the numerical approximation to \(U\left( x_i, t_k\right) \).

A discrete approximation to the Caputo–Fabrizio derivative of fractional order can be obtained by a simple quadrature formula as follows:

$$\begin{aligned} { }_{ CF} D_{0, t}^\alpha U\left( x_i, t_{k+1}\right) =\frac{M(\alpha )}{1-\alpha } \int _0^{t_{k+1}} \frac{\partial U(x_i,\xi ) }{\partial \xi }\exp \left[ -\alpha \frac{t_{k+1}-\xi }{1-\alpha }\right] d \xi \end{aligned}$$
(2.1)

The linear approximation of the function U(t) in \(\left[ t_{k-1}, t_k\right] \) is defined as

$$\begin{aligned} U(t)_{app}=U\left( t_{k-1}\right) \frac{t_k-t}{\Delta t}+U\left( t_k\right) \frac{t-t_{k-1}}{\Delta t}, \quad t \in \left[ t_{k-1}, t_k\right] , \quad 1 \le k \le n. \end{aligned}$$

Using this piecewise linear approximation, the first-order difference quotient gives

$$\begin{aligned} { }_{ CF} D_{0, t}^\alpha U\left( x_i, t_{k+1}\right)&=\frac{M(\alpha )}{1-\alpha } \sum _{j=0}^k \int _{(j) \Delta t}^{(j+1) \Delta t}\left( \frac{U^{j+1}_{i}-U^{j}_{i}}{\Delta t}+O(\Delta t)\right) \\&\quad \exp \left[ -\alpha \frac{t_{k+1}-\xi }{1-\alpha }\right] d \xi . \end{aligned}$$

Pulling the difference quotients outside the integrals and then integrating the exponential kernel, we obtain

$$\begin{aligned}&\frac{M(\alpha )}{1-\alpha }\sum _{j=0}^k \left( \frac{U^{j+1}_{i}-U^{j}_{i}}{\Delta t}+O(\Delta t)\right) \int _{(j) \Delta t}^{(j+1) \Delta t}\exp \left[ -\alpha \frac{t_{k+1}-\xi }{1-\alpha }\right] d \xi , \\&{ }_{ CF} D_{0, t}^\alpha U\left( x_i, t_{k+1}\right) =\frac{M(\alpha )}{\alpha } \sum _{j=0}^k \left( \frac{U^{j+1}_{i}-U^{j}_{i}}{\Delta t}+O(\Delta t)\right) d_{j, k}, \end{aligned}$$

where

$$\begin{aligned} d_{j, k}= & {} \exp \left[ -\alpha \frac{\Delta t}{1-\alpha }(k-j)\right] -\exp \left[ -\alpha \frac{\Delta t}{1-\alpha }(k-j+1)\right] .\nonumber \\ \sum _{j=1}^k d_{j, k}= & {} \sum _{j=1}^k\left( \exp \left[ -\alpha \frac{\Delta t}{1-\alpha }(k-j)\right] -\exp \left[ -\alpha \frac{\Delta t}{1-\alpha }(k-j+1)\right] \right) \nonumber \\= & {} 1- \exp \left[ -\alpha \frac{\Delta t}{1-\alpha } k\right] \end{aligned}$$
(2.2)
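The weights \(d_{j,k}\) depend only on the difference \(k-j\), and the telescoping identity (2.2) can be verified directly; a small sketch:

```python
import numpy as np

def d(j, k, alpha, dt):
    """Weight d_{j,k} from (2.2); it depends only on k - j."""
    a = alpha * dt / (1.0 - alpha)
    return np.exp(-a * (k - j)) - np.exp(-a * (k - j + 1))

alpha, dt, k = 0.5, 0.01, 40
total = sum(d(j, k, alpha, dt) for j in range(1, k + 1))   # telescoping sum
closed = 1.0 - np.exp(-alpha * dt * k / (1.0 - alpha))     # right-hand side of (2.2)
```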

The exponential function can be approximated by its first-order Taylor expansion as

$$\begin{aligned} \exp \left[ -\alpha \frac{\Delta t}{1-\alpha }(k)\right] \approx 1-\alpha \frac{\Delta t}{1-\alpha }(k). \end{aligned}$$

Then replacing the above in Eq. (2.2), we obtain

$$\begin{aligned} \sum _{j=1}^k\left( \exp \left[ -\alpha \frac{\Delta t}{1-\alpha }(k-j)\right] -\exp \left[ -\alpha \frac{\Delta t}{1-\alpha }(k-j+1)\right] \right) \approx \alpha \frac{\Delta t}{1-\alpha }(k). \end{aligned}$$

Then the expression for the discrete derivative becomes

$$\begin{aligned} { }_{ CF} D_{0, t}^\alpha U\left( x_i, t_{k+1}\right) =\frac{M(\alpha )}{\alpha } \sum _{j=0}^k \left( \frac{U^{j+1}_{i}-U^{j}_{i}}{\Delta t}\right) d_{j, k}+\frac{M(\alpha ) \Delta t}{\alpha }(k) O(\Delta t). \end{aligned}$$

We therefore obtain the desired result

$$\begin{aligned} { }_{ CF} D_{0, t}^\alpha U\left( x_i, t_{k+1}\right) =\frac{M(\alpha )}{\alpha } \sum _{j=0}^k \left( \frac{U^{j+1}_{i}-U^{j}_{i}}{\Delta t}\right) d_{j, k}+O\left( \Delta t^2\right) \end{aligned}$$
(2.3)

Thus, the first-order difference quotient yields a second-order accurate approximation for the computation of the Caputo–Fabrizio derivative.

We now introduce some lemmas needed to approximate the Caputo–Fabrizio fractional derivative, replacing the first-order derivative by a simple quadrature formula, and to approximate the left Riemann–Liouville fractional derivative.

Lemma 1

Let \(g(t) \in C^2\left[ 0, t_k\right] \) and \(0<\alpha <1\). Then we have

$$\begin{aligned}&\left| { }_{ CF} D_{0, t}^\alpha g\left( t_{k}\right) - \frac{{\Delta t}^{-1} M(\alpha )}{\alpha }\left( d_{k, k}g^{k}+ \sum _{j=1}^{k-1} \left( d_{j, k}-d_{j+1, k}\right) g^{j}-d_{1, k}g^{0}\right) \right| \\&\quad \le \frac{(1-\alpha )M(\alpha )}{2 \alpha ^2} \max _{0 \le t \le t_k}\left| g^{ \prime \prime }({t})\right| \Delta t^2, \quad 1 \le k \le n, \end{aligned}$$

Proof

From the previous computation, it is not difficult to see that

$$\begin{aligned}&\frac{M(\alpha )}{1-\alpha }\sum _{j=1}^k \frac{g\left( t_j\right) -g\left( t_{j-1}\right) }{{\Delta t}} \int _{t_{j-1}}^{t_j} \exp \left[ -\alpha \frac{t_{k}-t}{1-\alpha }\right] d t =\frac{M(\alpha )}{\alpha } \sum _{j=1}^k \left( \frac{g^{j}-g^{j-1}}{\Delta t}\right) d_{j, k} \\&\quad =\frac{{\Delta t}^{-1} M(\alpha )}{\alpha }\left( d_{k, k}g^{k}+ \sum _{j=1}^{k-1} \left( d_{j, k}-d_{j+1, k}\right) g^{j}-d_{1, k}g^{0}\right) \end{aligned}$$

Now define

$$\begin{aligned} \frac{1-\alpha }{M(\alpha )}A&\equiv \int _0^{t_k} g^{\prime }(t) \exp \left[ -\alpha \frac{t_{k}-t}{1-\alpha }\right] d t -\sum _{j=1}^k \frac{g\left( t_j\right) -g\left( t_{j-1}\right) }{{\Delta t}}\\&\quad \int _{t_{j-1}}^{t_j} \exp \left[ -\alpha \frac{t_{k}-t}{1-\alpha }\right] d t. \end{aligned}$$

then

$$\begin{aligned} A&=\frac{ M(\alpha )}{1-\alpha } \sum _{j=1}^k \int _{t_{j-1}}^{t_j}\left[ g^{\prime }(t)-\frac{g\left( t_j\right) -g\left( t_{j-1}\right) }{{\Delta t}}\right] \exp \left[ -\alpha \frac{t_{k}-t}{1-\alpha }\right] d t \\&= \frac{ M(\alpha )}{1-\alpha } \sum _{j=1}^k \int _{t_{j-1}}^{t_j}\left[ g(t)- g(t)_{app}\right] ^{\prime } \exp \left[ -\alpha \frac{t_{k}-t}{1-\alpha }\right] d t, \end{aligned}$$

Integrating by parts, we get

$$\begin{aligned} A = -\frac{ M(\alpha )}{\alpha } \sum _{j=1}^k \int _{t_{j-1}}^{t_j}\left[ g(t)- g(t)_{app}\right] \exp \left[ -\alpha \frac{t_{k}-t}{1-\alpha }\right] d t. \end{aligned}$$

Using the error expansion of the linear interpolation given in [19],

$$\begin{aligned} g(t)- g(t)_{app}=\frac{g^{\prime \prime }\left( v_j\right) }{2}\left( t-t_{j-1}\right) \left( t-t_j\right) , \quad v_j, t \in \left( t_{j-1}, t_j \right) , \quad 1 \le j \le k, \end{aligned}$$

which yields

$$\begin{aligned} \left| A \right|&=\frac{ M(\alpha )}{\alpha } \left| \sum _{j=1}^k \int _{t_{j-1}}^{t_j}\left[ \frac{g^{\prime \prime }\left( v_j\right) }{2}\left( t-t_{j-1}\right) \left( t_j-t\right) \right] \exp \left[ -\alpha \frac{t_{k}-t}{1-\alpha }\right] d t \right| \\&\le \frac{M(\alpha )}{\alpha } \frac{ \Delta t^2 \max _{0 \le t \le t_k}\left| g^{ \prime \prime }({t})\right| }{2} \int _{t_0}^{t_k} \exp \left( -\frac{\alpha \left( t_k-s\right) }{1-\alpha }\right) \textrm{d}s\\&\le \frac{(1-\alpha )M(\alpha )}{2 \alpha ^2} \max _{0 \le t \le t_k}\left| g^{ \prime \prime }({t})\right| \Delta t^2. \end{aligned}$$

The proof is completed. \(\square \)

Lemma 2

[20] Let \(d(x) \in L^1(R),{ }_{RL} D_{-\infty , x}^{\beta +2} d(x)\) and its Fourier transform belong to \(L^1(R)\), and define the weighted and shifted Grünwald–Letnikov operator by

$$\begin{aligned} { }_L D_{h, p, q}^\beta d(x)=\frac{\lambda _1}{h^\beta } \sum _{j=0}^{\infty } g_j^{(\beta )} d(x-(j-p) h)+\frac{\lambda _2}{h^\beta } \sum _{j=0}^{\infty } g_j^{(\beta )} d(x-(j-q) h), \end{aligned}$$

where \(p, q\) are integers with \(p \ne q\), \(\lambda _1=\frac{\beta -2 q}{2(p-q)}\), and \(\lambda _2=\frac{2 p-\beta }{2(p-q)}\). Here the normalized Grünwald weights are defined by \( g_0^{(\beta )}=1, g_j^{(\beta )}=\frac{\Gamma (j-\beta )}{\Gamma (-\beta ) \Gamma (j+1)}, j=1,2, \ldots \)

Then we have

$$\begin{aligned} { }_L D_{h, p, q}^\beta d(x)={ }_{-\infty } D_x^\beta d(x)+O\left( h^2\right) \end{aligned}$$

uniformly for \(x \in R\).

According to Lemma 2, the fractional spatial derivative in (1.1) can be approximated as

$$\begin{aligned} { }_{RL} D_{0, x}^{\beta }u(x_i, t_{k+1})=\frac{\lambda _1}{h^\beta } \sum _{j=0}^{i+p} g_j^{(\beta )} U_{i-j+p}^{k+1}+\frac{\lambda _2}{h^\beta } \sum _{j=0}^{i+q} g_j^{(\beta )} U_{i-j+q}^{k+1}+O\left( h^2\right) . \end{aligned}$$

Thus, for \(p = 1\), \(q = 0,\) the fractional spatial derivative in (1.1) can be discretized as follows

$$\begin{aligned} { }_{RL} D_{0, x}^{\beta }u(x_i, t_{k+1})=\frac{1}{h^\beta } \sum _{j=0}^{i+1} w_j^{(\beta )} U_{i-j+1}^{k+1}+O\left( h^2\right) , \end{aligned}$$
(2.4)

where \(w_0^{(\beta )}=\frac{\beta }{2} g_0^{(\beta )}, w_j^{(\beta )}=\frac{\beta }{2} g_j^{(\beta )}+\frac{2-\beta }{2} g_{j-1}^{(\beta )}, j=1,2, \ldots , N\).
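As a quick check of (2.4), the shifted Grünwald formula can be compared with the closed-form Riemann–Liouville derivative of a monomial, \({}_{RL}D_{0,x}^{\beta } x^3 = \frac{\Gamma (4)}{\Gamma (4-\beta )} x^{3-\beta }\). The sketch below (the helper name `rl_grunwald` is ours) builds the \(g_j^{(\beta )}\) by the standard recurrence \(g_j^{(\beta )}=\big (1-\frac{\beta +1}{j}\big )g_{j-1}^{(\beta )}\) and checks that the error shrinks under grid refinement:

```python
import numpy as np
from math import gamma

beta = 1.8
rl_exact = gamma(4) / gamma(4 - beta) * 0.5**(3 - beta)   # RL derivative of x^3 at x = 1/2

def rl_grunwald(N):
    """Shifted Grunwald approximation (2.4) of RL D_{0,x}^beta x^3 at x = 1/2."""
    h, i = 1.0 / N, N // 2
    g = np.empty(i + 2)
    g[0] = 1.0
    for j in range(1, i + 2):
        g[j] = (1.0 - (beta + 1.0) / j) * g[j - 1]
    w = np.empty(i + 2)
    w[0] = 0.5 * beta
    w[1:] = 0.5 * beta * g[1:] + 0.5 * (2.0 - beta) * g[:-1]
    vals = ((i + 1 - np.arange(i + 2)) * h) ** 3          # U_{i-j+1} = x_{i-j+1}^3, j = 0..i+1
    return np.dot(w, vals) / h**beta

err_coarse = abs(rl_grunwald(128) - rl_exact)
err_fine = abs(rl_grunwald(512) - rl_exact)
```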

Substituting the Grünwald estimates into the superdiffusion equation (1.1), the resulting implicit finite difference equation is

$$\begin{aligned}&\frac{ d_{0} {\Delta t}^{-1} M(\alpha )}{\alpha }\left[ U_i^{k+1}-U_i^k\right] +\frac{{\Delta t}^{-1}M(\alpha )}{\alpha } \sum _{j=1}^k d_{j}\left[ U_i^{k-j+1}-U_i^{k-j}\right] \\&\quad =c_i^{k+1}\left( \frac{1}{h^\beta } \sum _{j=0}^{i+1} w_j^{(\beta )} U_{i-j+1}^{k+1}\right) +f_i^{k+1} \end{aligned}$$

where \(d_{j}=\exp \left[ -\alpha \frac{\Delta t}{1-\alpha }(j)\right] -\exp \left[ -\alpha \frac{\Delta t}{1-\alpha }(j+1)\right] \) and \(f_i^k=f\left( x_i, t_k\right) \).

Therefore, multiplying through by \(\frac{\alpha \Delta t}{d_{0} M(\alpha )}\), we have

$$\begin{aligned} \left[ U_i^{k+1}-U_i^k\right] +\frac{1}{d_{0}} \sum _{j=1}^k d_{j}\left[ U_i^{k-j+1}-U_i^{k-j}\right] =R_i^{k+1}\left( \sum _{j=0}^{i+1} w_j^{(\beta )} U_{i-j+1}^{k+1}\right) +F_i^{k+1} \end{aligned}$$

where \( R_i^{k+1}=\frac{\alpha {\Delta t} c_i^{k+1}}{d_{0} h^\beta M(\alpha )},\) and \(F_i^{k+1}=\frac{\alpha \Delta t}{d_{0} M(\alpha )} f_i^{k+1}, \quad i,k=0,1,2, \ldots \)

After further simplification, we get

$$\begin{aligned}&\left( 1-R_i^{k+1} w_1^{(\beta )}\right) U_i^{k+1} - R_i^{k+1} \sum _{j=0, j \ne 1}^{i+1} w_j^{(\beta )} U_{i-j+1}^{k+1}= \left( 1-\frac{d_{1}}{d_{0}}\right) U_i^k \nonumber \\&\quad +\frac{1}{d_{0}}\sum _{j=1}^{k-1}\left( d_{j}-d_{j+1}\right) U_i^{k-j} +\frac{d_{k}}{d_{0}} U_i^0+F_i^{k+1} \end{aligned}$$
(2.5)

Therefore, from Eq. (2.5), an implicit finite difference scheme for (1.1)–(1.3) can be expressed as follows

$$\begin{aligned}&\left( 1-R_i^{1} w_1^{(\beta )}\right) U_i^{1} - R_i^{1} \sum _{j=0, j \ne 1}^{i+1} w_j^{(\beta )} U_{i-j+1}^{1}= U_i^0 + F_i^{1}, \quad {\textit{for}} \quad k=0 \end{aligned}$$
(2.6)
$$\begin{aligned}&\left( 1-R_i^{k+1} w_1^{(\beta )}\right) U_i^{k+1} - R_i^{k+1} \sum _{j=0, j \ne 1}^{i+1} w_j^{(\beta )} U_{i-j+1}^{k+1}= \left( 1-\frac{d_{1}}{d_{0}}\right) U_i^k \nonumber \\&\quad +\frac{1}{d_{0}}\sum _{j=1}^{k-1}\left( d_{j}-d_{j+1}\right) U_i^{k-j} +\frac{d_{k}}{d_{0}} U_i^0+F_i^{k+1}, \quad {\textit{for}} \quad k \ge 1 \end{aligned}$$
(2.7)

initial condition:

$$\begin{aligned} U_i^0=\varphi \left( x_i\right) , \quad i=0, 1, 2, \ldots \end{aligned}$$
(2.8)

boundary conditions:

$$\begin{aligned} U_0^k=0, \quad U_N^k=v(t_k)=v^k, \quad k=0,1, 2, \ldots \end{aligned}$$
(2.9)
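Putting (2.6)–(2.9) together gives a time-marching algorithm: at each time level one linear system is solved, and when \(c\) is constant the coefficient \(R_i^{k+1}\) does not depend on \(i\) or \(k\), so the matrix can be assembled once. Below is a minimal Python sketch of the scheme under the simplifying assumptions \(c\equiv 1\), \(f\equiv 0\), \(v\equiv 0\), and \(M(\alpha )=1\); the function name `solve_stfde` is ours.

```python
import numpy as np

def solve_stfde(phi, alpha, beta, L=1.0, T=1.0, N=50, n=50):
    """Implicit scheme (2.6)-(2.9), assuming c = 1, f = 0, v = 0, M(alpha) = 1."""
    h, dt = L / N, T / n
    x = np.linspace(0.0, L, N + 1)
    # spatial weights w_j from the shifted Grunwald weights (p = 1, q = 0)
    g = np.empty(N + 1)
    g[0] = 1.0
    for j in range(1, N + 1):
        g[j] = (1.0 - (beta + 1.0) / j) * g[j - 1]
    w = np.empty(N + 1)
    w[0] = 0.5 * beta
    w[1:] = 0.5 * beta * g[1:] + 0.5 * (2.0 - beta) * g[:-1]
    # temporal weights d_j and the coefficient R (c = 1)
    a = alpha * dt / (1.0 - alpha)
    d = np.exp(-a * np.arange(n + 1)) - np.exp(-a * np.arange(1, n + 2))
    R = alpha * dt / (d[0] * h**beta)
    # matrix A for the interior unknowns U_1, ..., U_{N-1}
    m = N - 1
    A = np.eye(m)
    for i in range(1, m + 1):           # row for U_i
        for j in range(0, i + 2):       # coefficient of U_{i-j+1}
            col = i - j + 1
            if 1 <= col <= m:
                A[i - 1, col - 1] -= R * w[j]
    hist = [phi(x)]
    hist[0][0] = hist[0][-1] = 0.0      # boundary data (2.9) with v = 0
    for k in range(n):
        if k == 0:                      # scheme (2.6)
            rhs = hist[0][1:N].copy()
        else:                           # scheme (2.7)
            rhs = (1.0 - d[1] / d[0]) * hist[k][1:N]
            for j in range(1, k):
                rhs += (d[j] - d[j + 1]) / d[0] * hist[k - j][1:N]
            rhs += d[k] / d[0] * hist[0][1:N]
        Unew = np.zeros(N + 1)
        Unew[1:N] = np.linalg.solve(A, rhs)
        hist.append(Unew)
    return x, hist[-1]

x, U = solve_stfde(lambda s: s**2 * (1.0 - s)**2, alpha=0.5, beta=1.8)
```

By the stability result of Sect. 3, for \(1.6 \le \beta < 2\) the computed solution with \(f\equiv 0\) should never exceed the maximum of the initial data in absolute value.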

3 Stability and convergence analysis of STFDE

Denote the column vectors as follows

$$\begin{aligned} U^k&=\left( u_1^k, u_2^k, \ldots , u_N^k\right) ^{\textrm{T}}, \\ Q^{k-1}&=\left( u_1^{k-1}, u_2^{k-1}, \ldots , u_{N-1}^{k-1}, 0\right) ^{\textrm{T}}, \\ F^k&=\left( \frac{\alpha \Delta t }{d_{0} M(\alpha )} f_1^{k}, \frac{\alpha \Delta t}{d_{0} M(\alpha )} f_2^{k}, \ldots , \frac{\alpha \Delta t}{d_{0} M(\alpha )} f_{N-1}^{k}, v^k\right) ^{\textrm{T}}, \end{aligned}$$

The finite-difference equations (2.6)–(2.9) can be expressed in matrix form as:

$$\begin{aligned} A U^1= & {} Q^0+F^1 \end{aligned}$$
(3.1)
$$\begin{aligned} A U^{k+1}= & {} \left( 1-\frac{d_{1}}{d_{0}}\right) Q^k+\frac{1}{d_{0}} \sum _{j=1}^{k-1}\left( d_{j}-d_{j+1}\right) Q^{k-j}+\frac{d_{k}}{d_{0}} Q^0+ F^{k+1} \end{aligned}$$
(3.2)

where \(A=\left( a_{i j}\right) \) is the \(N \times N\) matrix of coefficients

$$\begin{aligned} A=\left[ \begin{array}{ccccccc} (1- R_1^{k+1} w_1^{(\beta )} ) &{} -R_{1}^{k+1} w_0^{(\beta )} &{} 0 &{} \cdots &{} \cdots &{} \cdots &{} 0 \\ -R_2^{k+1} w_2^{(\beta )} &{} (1- R_2^{k+1} w_1^{(\beta )} ) &{} -R_{2}^{k+1} w_0^{(\beta )} &{} 0 &{} \cdots &{} \cdots &{} 0 \\ -R_3^{k+1} w_3^{(\beta )} &{} -R_3^{k+1} w_2^{(\beta )}&{} (1- R_3^{k+1} w_1^{(\beta )} ) &{} -R_{3}^{k+1} w_0^{(\beta )}&{} 0 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \ddots &{} \ddots &{} \cdots &{} 0 \\ -R_{N-1}^{k+1} w_{N-1}^{(\beta )} &{} -R_{N-1}^{k+1} w_{N-2}^{(\beta )} &{} -R_{N-1}^{k+1} w_{N-3}^{(\beta )} &{} \cdots &{}\cdots &{} (1- R_{N-1}^{k+1} w_1^{(\beta )} ) &{} -R_{N-1}^{k+1} w_0^{(\beta )} \\ 0 &{} 0 &{} 0 &{} \cdots &{} \cdots &{} 0 &{} 1 \end{array}\right] \end{aligned}$$

We now introduce some lemmas on the properties of the coefficients of the discretized fractional operators.

Lemma 3

[17, 21] Let \(\beta , \beta _1\), and \(\beta _2\) be positive real numbers, and the integer \(n \ge 1\). Then the coefficients \(g_j^{(\beta )}(j=0,1, \cdots )\) possess the following properties

  1. (i)

    \(g_0^{(\beta )}=1, \quad g_j^{(\beta )}=\left( 1-\frac{\beta +1}{j}\right) g_{j-1}^{(\beta )}\) for \(j \ge 1\);

  2. (ii)

    \(\textrm{g}_1^{(\beta )}<\textrm{g}_2^{(\beta )}<\cdots <0, \quad \sum _{j=0}^n\,\textrm{g}_j^{(\beta )}>0\) for \(0<\beta <1\);

  3. (iii)

    \(g_2^{(\beta )}>g_3^{(\beta )}>\cdots >0, \quad \sum _{j=0}^n g_j^{(\beta )}<0\) for \(1<\beta <2\);

  4. (iv)

    \(\sum _{j=0}^n g_j^{(\beta )}=(-1)^n\binom{\beta -1}{n}\);

  5. (v)

    \(\sum _{j=0}^n g_j^{\left( \beta _1\right) } g_{n-j}^{\left( \beta _2\right) }=g_n^{\left( \beta _1+\beta _2\right) }\).

Lemma 4

[22] Let \(\beta \) be a positive real number. Then the coefficients \(w_j^{(\beta )}(j=0,1, \ldots )\) possess the following properties

$$\begin{aligned}&\mathrm{(i)}\;w_0^{(\beta )} =\frac{\beta }{2}, \quad w_1^{(\beta )}=\frac{2-\beta -\beta ^2}{2}, \quad w_2^{(\beta )}=\frac{\beta \left( \beta ^2+\beta -4\right) }{4}, \\&w_j^{(\beta )} =\frac{\beta }{2} g_j^{(\beta )}+\frac{2-\beta }{2} g_{j-1}^{(\beta )}, \quad \text {for}\quad j \ge 3; \\&\mathrm{(ii)}\;1> w_0^{(\beta )}>w_3^{(\beta )}>w_4^{(\beta )}>\cdots>0, \quad \sum _{j=0}^n w_j^{(\beta )}<0, \quad \text {for}\quad 1<\beta <2, \quad n \ge 2;\\&\mathrm{(iii)}\; w_0^{(\beta )} + w_2^{(\beta )} = -\frac{\beta w_1^{(\beta )}}{2} > 0. \end{aligned}$$
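The closed forms in (i) and the sign pattern in (ii) are easy to confirm numerically from the defining recurrences; a small sketch:

```python
import numpy as np

def w_weights(beta, n):
    """Weights w_j^(beta) built from the Grunwald weights g_j^(beta) as in (2.4)."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for j in range(1, n + 1):
        g[j] = (1.0 - (beta + 1.0) / j) * g[j - 1]
    w = np.empty(n + 1)
    w[0] = 0.5 * beta
    w[1:] = 0.5 * beta * g[1:] + 0.5 * (2.0 - beta) * g[:-1]
    return w

beta = 1.8
w = w_weights(beta, 40)
```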

Lemma 5

The coefficients \(d_{j}(j=1,2, \ldots )\) possess the following properties

$$\begin{aligned} d_{j}>0 \quad \text {and} \quad d_{j}>d_{j+1}. \end{aligned}$$
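Both properties follow from writing \(d_j = e^{-a j}\left( 1-e^{-a}\right) \) with \(a=\frac{\alpha \Delta t}{1-\alpha }>0\), and are immediate to check numerically:

```python
import numpy as np

alpha, dt = 0.5, 0.01
a = alpha * dt / (1.0 - alpha)
j = np.arange(0, 200)
d = np.exp(-a * j) - np.exp(-a * (j + 1))   # d_j = e^{-a j} (1 - e^{-a})
```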

Let \(\tilde{U}_i^k\) denote the approximate solution of the difference scheme obtained with the perturbed initial condition \(\tilde{U}_i^0\).

To discuss the stability of the numerical method, we put

$$\begin{aligned} \varepsilon _i^k=U_i^k-\tilde{U}_i^k, \quad 1 \le i \le N, \quad 1 \le k \le n, \end{aligned}$$

and

$$\begin{aligned} \varepsilon ^k=\left( \varepsilon _1^k, \varepsilon _2^k, \ldots , \varepsilon _N^k\right) ^{\textrm{T}}, \quad \left\| \varepsilon ^k\right\| _{\infty }=\max _{1 \le i \le N}\left| \varepsilon _i^k\right| , \end{aligned}$$

From the finite-difference equations (2.6)–(2.9), we have, for \(1 \le i \le N-1\):

$$\begin{aligned}&\left( 1-R_i^{1} w_1^{(\beta )}\right) \varepsilon _i^{1} - R_i^{1} \sum _{j=0, j \ne 1}^{i+1} w_j^{(\beta )} \varepsilon _{i-j+1}^{1}= \varepsilon _i^0, \quad {\textit{for}} \quad k=0 \end{aligned}$$
(3.3)
$$\begin{aligned}&\left( 1-R_i^{k+1} w_1^{(\beta )}\right) \varepsilon _i^{k+1} - R_i^{k+1} \sum _{j=0, j \ne 1}^{i+1} w_j^{(\beta )} \varepsilon _{i-j+1}^{k+1}= \left( 1-\frac{d_{1}}{d_{0}}\right) \varepsilon _i^k \nonumber \\&\quad +\frac{1}{d_{0}}\sum _{j=1}^{k-1}\left( d_{j}-d_{j+1}\right) \varepsilon _i^{k-j} +\frac{d_{k}}{d_{0}} \varepsilon _i^0,\quad {\textit{for}} \quad k \ge 1 \end{aligned}$$
(3.4)

Definition 1

Let \(\varepsilon ^0\) be an arbitrary initial rounding error. If there exists a positive number \(c\), independent of h and \(\Delta t\), such that \(\left\| \varepsilon ^k\right\| \le c\left\| \varepsilon ^0\right\| \) or \(\left\| \varepsilon ^k\right\| \le c\), then the difference approximation is stable.

Theorem 6

When \(1.6 \le \beta < 2\), the fractional finite difference scheme (2.6)–(2.9) is unconditionally stable.

Proof

Suppose \(\left\| \varepsilon ^1\right\| _{\infty }=\left| \varepsilon _l^1\right| =\max _{1 \le i \le N}\left| \varepsilon _i^1\right| \). According to Lemma 4, we have

$$\begin{aligned} \left\| \varepsilon ^1\right\| _{\infty }&=\left| \varepsilon _l^1\right| \le \left( 1-R_i^{1} w_1^{(\beta )} \right) \left| \varepsilon _l^1\right| -R_i^{1} \sum _{j=0, j \ne 1}^{l+1} w_j^{(\beta )}\left| \varepsilon _l^1\right| \\&\le \left( 1- R_i^{1} w_1^{(\beta )} \right) \left| \varepsilon _l^1\right| -R_i^{1} \sum _{j=0, j \ne 1}^{l+1} w_j^{(\beta )}\left| \varepsilon _{l-j+1}^1\right| \\&\le \left| \left( 1-w_1^{(\beta )} R_i^{1}\right) \varepsilon _l^1-R_i^{1} \sum _{j=0, j \ne 1}^{l+1} w_j^{(\beta )} \varepsilon _{l-j+1}^1\right| \\&=\left| \varepsilon _l^0\right| \le \left\| \varepsilon ^0\right\| _{\infty }. \end{aligned}$$

Supposing \(\left\| \varepsilon ^{k+1}\right\| _{\infty }=\left| \varepsilon _l^{k+1}\right| =\max _{1 \le i \le N}\left| \varepsilon _i^{k+1}\right| \), and assuming that \(\left\| \varepsilon ^k\right\| _{\infty } \le \left\| \varepsilon ^0\right\| _{\infty }\) \((1 \le k \le n)\) has already been proved, then by Lemma 5 and (3.4) we obtain:

$$\begin{aligned} \left\| \varepsilon ^{k+1}\right\| _{\infty }&=\left| \varepsilon _l^{k+1}\right| \le \left( 1- R_i^{k+1}w_1^{(\beta )} \right) \left| \varepsilon _l^{k+1}\right| -R_i^{k+1} \sum _{j=0, j \ne 1}^{l+1} w_j^{(\beta )}\left| \varepsilon _l^{k+1}\right| \\&\le \left( 1- R_i^{k+1}w_1^{(\beta )} \right) \left| \varepsilon _l^{k+1}\right| -R_i^{k+1} \sum _{j=0, j \ne 1}^{l+1} w_j^{(\beta )}\left| \varepsilon _{l-j+1}^{k+1}\right| \\&\le \left| \left( 1- R_i^{k+1} w_1^{(\beta )} \right) \varepsilon _l^{k+1}-R_i^{k+1} \sum _{j=0, j \ne 1}^{l+1} w_j^{(\beta )} \varepsilon _{l-j+1}^{k+1}\right| \\&=\left| \left( 1-\frac{d_{1}}{d_{0}}\right) \varepsilon _i^k +\frac{1}{d_{0}}\sum _{j=1}^{k-1}\left( d_{j}-d_{j+1}\right) \varepsilon _i^{k-j} +\frac{d_{k}}{d_{0}} \varepsilon _i^0\right| \\ \left\| \varepsilon ^{k+1}\right\| _{\infty }&\le \left( 1-\frac{d_{k}}{d_{0}}\right) \left\| \varepsilon ^k\right\| _{\infty }+\frac{1}{d_{0}}\sum _{j=1}^{k-1}\left( d_{j}-d_{j+1}\right) \left\| \varepsilon ^{k-j}\right\| _{\infty } +\frac{d_{k}}{d_{0}}\left\| \varepsilon ^0\right\| _{\infty } \\&\le \left( 1-\frac{d_{1}}{d_{0}}\right) \left\| \varepsilon ^0\right\| _{\infty }+\frac{1}{d_{0}}\sum _{j=1}^{k-1}\left( d_{j}-d_{j+1}\right) \left\| \varepsilon ^0\right\| _{\infty } +\frac{d_{k}}{d_{0}}\left\| \varepsilon ^0\right\| _{\infty } \\&=\left\| \varepsilon ^0\right\| _{\infty }. \end{aligned}$$

Hence, by mathematical induction, the finite difference scheme defined by (2.6)–(2.9) is unconditionally stable.

For the convergence of the numerical method, denote by \(r_i^k\) the local truncation error for \(1 \le i \le N-1\). It follows from (2.3) and (2.5) that

$$\begin{aligned} \left\| r_i^k\right\| _{\infty } \le C\left( \Delta t^{2}+h^2\right) , \quad 1 \le k \le n. \end{aligned}$$

and

$$\begin{aligned} r_i^1&=\left( 1-R_i^{1} w_1^{(\beta )}\right) e_i^{1} - R_i^{1} \sum _{j=0, j \ne 1}^{i+1} w_j^{(\beta )} e_{i-j+1}^{1}-e_i^0, \quad for \quad k=0 \end{aligned}$$
(3.5)
$$\begin{aligned} r_i^{k+1}&= \left( 1-R_i^{k+1} w_1^{(\beta )}\right) e_i^{k+1} - R_i^{k+1} \sum _{j=0, j \ne 1}^{i+1} w_j^{(\beta )} e_{i-j+1}^{k+1}- \left( 1-\frac{d_{1}}{d_{0}}\right) e_i^k \nonumber \\&\quad -\frac{1}{d_{0}}\sum _{j=1}^{k-1}\left( d_{j}-d_{j+1}\right) e_i^{k-j} -\frac{d_{k}}{d_{0}} e_i^0, \quad for \quad k \ge 1 \end{aligned}$$
(3.6)

where

$$\begin{aligned} e_i^k=U\left( x_i, t_k\right) -U_i^k, \quad 1 \le i \le N, \quad 1 \le k \le n, \end{aligned}$$

and

$$\begin{aligned} e^0=0, \quad e^k=\left( e_1^k, e_2^k, \ldots , e_N^k\right) ^{\textrm{T}}, \quad \left\| e^k\right\| _{\infty }=\max _{1 \le i \le N}\left| e_i^k\right| , \end{aligned}$$

\(\square \)

Theorem 7

When \(1.6 \le \beta <2\), the implicit finite difference scheme (2.6)–(2.9) is unconditionally convergent, and there exists a positive constant \(\tilde{C}\), independent of \(\Delta t\) and h, such that

$$\begin{aligned} \left\| U\left( x_i, t_k\right) -U_i^k\right\| _{\infty } \le \tilde{C} \left( \Delta t^{2}+h^2\right) , \quad 1 \le k \le n. \end{aligned}$$

Proof

Suppose \(\left\| e^1\right\| _{\infty }=\left| e_l^1\right| =\max _{1 \le i \le N}\left| e_i^1\right| \). According to Lemma 4, we have

$$\begin{aligned} \left\| e^1\right\| _{\infty }&=\left| e_l^1\right| \le \left( 1-R_i^{1} w_1^{(\beta )} \right) \left| e_l^1\right| -R_i^{1} \sum _{j=0, j \ne 1}^{l+1} w_j^{(\beta )}\left| e_l^1\right| \\&\le \left( 1- R_i^{1} w_1^{(\beta )} \right) \left| e_l^1\right| -R_i^{1} \sum _{j=0, j \ne 1}^{l+1} w_j^{(\beta )}\left| e_{l-j+1}^1\right| \\&\le \left| \left( 1-w_1^{(\beta )} R_i^{1}\right) e_l^1-R_i^{1} \sum _{j=0, j \ne 1}^{l+1} w_j^{(\beta )} e_{l-j+1}^1\right| \\&=\left| r_i^1+e_l^0 \right| = \left| r_i^1 \right| \le C\left( \Delta t^{2}+h^2\right) . \end{aligned}$$

Supposing \(\left\| e^{k+1}\right\| _{\infty }=\left| e_l^{k+1}\right| =\max _{1 \le i \le N}\left| e_i^{k+1}\right| \), and assuming that we have proved that

$$\begin{aligned} \left\| e^k\right\| _{\infty } \le C\left( \Delta t^{2}+h^2\right) \left( \frac{d_{k-1}}{d_{0}} \right) ^{-1}\quad (1 \le k \le n) \end{aligned}$$
(3.7)

Then, by Lemma 5, Eq. (3.7) turns into

$$\begin{aligned} \left\| e^k\right\| _{\infty } \le C\left( \Delta t^{2}+h^2\right) \left( \frac{d_{n}}{d_{0}} \right) ^{-1} \end{aligned}$$
(3.8)

Then, using Lemma 5 and (3.8), we obtain:

$$\begin{aligned} \left\| e^{k+1}\right\| _{ \infty }&=\left| e_l^{k+1}\right| \le \left( 1- R_i^{k+1}w_1^{(\beta )} \right) \left| e_l^{k+1}\right| -R_i^{k+1} \sum _{j=0, j \ne 1}^{l+1} w_j^{(\beta )}\left| e_l^{k+1}\right| \\&\le \left( 1- R_i^{k+1}w_1^{(\beta )} \right) \left| e_l^{k+1}\right| -R_i^{k+1} \sum _{j=0, j \ne 1}^{l+1} w_j^{(\beta )}\left| e_{l-j+1}^{k+1}\right| \\&\le \left| \left( 1- R_i^{k+1} w_1^{(\beta )} \right) e_l^{k+1}-R_i^{k+1} \sum _{j=0, j \ne 1}^{l+1} w_j^{(\beta )} e_{l-j+1}^{k+1}\right| \\&=\left| \left( 1-\frac{d_{1}}{d_{0}}\right) e_i^k +\frac{1}{d_{0}}\sum _{j=1}^{k-1}\left( d_{j}-d_{j+1}\right) e_i^{k-j} +r_i^{k+1} \right| \\ {}&\le \left( 1-\frac{d_{k}}{d_{0}}\right) \left\| e^k\right\| _{\infty }+\frac{1}{d_{0}}\sum _{j=1}^{k-1}\left( d_{j}-d_{j+1}\right) \left\| e^{k-j}\right\| _{\infty } +\left\| r_i^{k+1}\right\| _{\infty } \\&\le \left( \frac{d_{k-1}}{d_{0}} \right) ^{-1} \left( 1-\frac{d_{1}}{d_{0}}+\frac{1}{d_{0}}\sum _{j=1}^{k-1}\left( d_{j}-d_{j+1}\right) \right) C\left( \Delta t^{2}+h^2\right) +\left\| r_i^{k+1}\right\| _{\infty }\\ \left\| e^{k+1}\right\| _{ \infty }&\le \left( \frac{d_{n}}{d_{0}} \right) ^{-1} \left( 1-\frac{d_{1}}{d_{0}}+\frac{1}{d_{0}} \left( d_{1}-d_{k}\right) + \frac{d_{n}}{d_{0}} \right) C\left( \Delta t^{2}+h^2\right) \\&\le \left( \frac{d_{n}}{d_{0}} \right) ^{-1} C\left( \Delta t^{2}+h^2\right) \le \tilde{C}\left( \Delta t^{2}+h^2\right) . \end{aligned}$$

Hence, by induction, we conclude that for any x and t, \(U_i^k\) converges to \(U\left( x_i, t_k\right) \) as \((h, \Delta t) \rightarrow (0,0)\). The proof is completed. \(\square \)

Table 1 Example 1: numerical errors and the spatial convergence order of the fractional finite-difference scheme (2.6) to (2.9)
Table 2 Example 1: numerical errors and the temporal convergence order of the fractional finite-difference scheme (2.6) to (2.9)

4 Numerical experiments

In this section, we numerically demonstrate the theoretical results obtained above for the finite-difference scheme (2.6)–(2.9), including the numerical solution, the convergence orders, and the error in the \(L_{\infty }\) sense. Denote by \(E_{\infty }(\Delta t, h)\) the maximum error for temporal step \(\Delta t\) and spatial grid size h. The temporal and spatial convergence orders are computed, respectively, by

$$\begin{aligned} {\text {order}}_t(\Delta t, h)=\log _2\left( \frac{E_{\infty }(2 \Delta t, h)}{E_{\infty }(\Delta t, h)}\right) , \quad {\text {order}}_s(\Delta t, h)=\log _2\left( \frac{E_{\infty }(\Delta t, 2 h)}{E_{\infty }(\Delta t, h)}\right) . \end{aligned}$$
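In practice the observed orders are computed from the error table by halving one grid size at a time; a minimal helper:

```python
import math

def observed_order(err_coarse, err_fine):
    """Observed convergence order when the step is halved: log2(E(2h) / E(h))."""
    return math.log2(err_coarse / err_fine)

# synthetic second-order errors E = C h^2 with h halved in each row
errors = [4.0e-3, 1.0e-3, 2.5e-4]
orders = [observed_order(errors[i], errors[i + 1]) for i in range(len(errors) - 1)]
```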
Fig. 1

Example 1: comparison of the numerical and exact solutions for \(\alpha =0.5\), \(\beta =1.8\), and \(\Delta t =h=\frac{1}{20}\) at the time \(\textrm{T} =1\)

Fig. 2

Example 1: comparison of the numerical and exact solutions for \(\alpha =0.5\), \(\beta =1.8\), and \(\Delta t =h=\frac{1}{80}\) at the time \(\textrm{T} =1\)

Fig. 3

Example 1: the absolute error for \(\alpha =0.5\), \(\beta =1.8\), and \(\Delta t =h=\frac{1}{160}\) at the time \(\textrm{T} =1\)

Example 1

Consider the STFDE (1.1)–(1.3) on the finite domain \(0\le x\le 1\), \( 0\le t \le 1\) (Tables 1 and 2):

$$\begin{aligned} \left\{ \begin{array}{ll} { }_{CF} D_{0, t}^\alpha u(x, t)= { }_{RL} D_{0, x}^\beta u(x, t)+f(x, t), &{}\quad 0<x<1, \quad 0<t \le 1, \\ u(x, 0)=x^2(x-1)^2, &{}\quad 0 \le x \le 1, \\ u(0, t)=0, \quad u(1, t)=0,&{}\quad 0<t \le 1. \end{array}\right. \end{aligned}$$
(4.1)

We take \(c \equiv 1\), with the corresponding forcing term f defined by

$$\begin{aligned} f(x, t)&= -\frac{2 \cos (\pi t)}{\Gamma (3-\beta )} x^{2-\beta }+\frac{12 \cos (\pi t)}{\Gamma (4-\beta )} x^{3-\beta }-\frac{24 \cos (\pi t)}{\Gamma (5-\beta )} x^{4-\beta } \\&\quad -\frac{x^2(x-1)^2}{1-\alpha } \frac{\sigma \pi }{\sigma ^2+\pi ^2}\left( \sin (\pi t)-\pi \frac{\cos (\pi t)}{\sigma }+\exp (-\sigma t) \frac{\pi }{\sigma }\right) , \\ \sigma&=\frac{\alpha }{1-\alpha } \end{aligned}$$

with the nonzero initial condition \(u(x, 0)=x^2(x-1)^2\) and boundary conditions \( u(0,t)=u(1,t)=0 \). The exact solution of the fractional STFDE is then (Figs. 1, 2 and 3)

$$\begin{aligned} u(x, t)=\cos (\pi t)\, x^2(x-1)^2. \end{aligned}$$
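The consistency of the forcing term with this exact solution can be verified pointwise: evaluating the Caputo–Fabrizio derivative of \(u\) by quadrature should reproduce \({}_{RL}D_{0,x}^{\beta }u + f\) up to quadrature error. A sketch of this check, under our assumptions that \(M(\alpha )=1\) and that the frequency in the mixed cosine term of \(f\) equals \(\pi \):

```python
import numpy as np
from math import gamma, pi

alpha, beta = 0.5, 1.8
sigma = alpha / (1.0 - alpha)
poly = lambda x: x**2 * (x - 1)**2

def cf_time(x, t, n=20001):
    """CF D_t^alpha of cos(pi t) poly(x) by trapezoidal quadrature of (1.5), M(alpha) = 1."""
    xi = np.linspace(0.0, t, n)
    vals = -pi * np.sin(pi * xi) * poly(x) * np.exp(-sigma * (t - xi))
    dxi = xi[1] - xi[0]
    return np.sum((vals[1:] + vals[:-1]) * 0.5 * dxi) / (1.0 - alpha)

def rl_space(x, t):
    """Closed-form RL D_x^beta of cos(pi t) x^2 (x-1)^2."""
    return np.cos(pi * t) * (24 / gamma(5 - beta) * x**(4 - beta)
                             - 12 / gamma(4 - beta) * x**(3 - beta)
                             + 2 / gamma(3 - beta) * x**(2 - beta))

def f(x, t):
    """Forcing term of Example 1 (frequency in the mixed cosine term taken as pi)."""
    trig = np.sin(pi * t) - pi * np.cos(pi * t) / sigma + np.exp(-sigma * t) * pi / sigma
    return -rl_space(x, t) - poly(x) / (1.0 - alpha) * sigma * pi / (sigma**2 + pi**2) * trig

x0, t0 = 0.4, 0.5
residual = cf_time(x0, t0) - (rl_space(x0, t0) + f(x0, t0))
```

The residual vanishes up to the trapezoidal quadrature error, confirming that this \(u\) solves (1.1) with the stated \(f\).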

Example 2

Consider the problem (1.1)–(1.3) with \(L=1\) and \(c=1\), the initial condition \(\varphi (x)=x^4 (x-1)^4\), the boundary condition \(v(t)=0\), and the forcing term

$$\begin{aligned} f(x, t)= 2 \sin \left( \frac{\pi t}{2}\right) x^{2-\beta } - t^{1-\alpha } x^{3} (x-2)^{2} \end{aligned}$$
Fig. 4

Example 2: numerical solutions for \(\alpha =0.6\), \(\beta =1.7\), and \(\Delta t =h=\frac{1}{100}\) at the time \(\textrm{T} =0.5\), 1, and 1.5

Fig. 5

Example 2: numerical solution as a 3D graph for \(\alpha =0.6\), \(\beta =1.7\), and \(\Delta t =h=\frac{1}{100}\)

Fig. 6

Example 2: numerical solution for different values of \(\alpha \) and \(\beta \), where \(\Delta t =h=\frac{1}{100}\), at the time \(\textrm{T} =2\)

The 3D graph in Fig. 5 shows the numerical solution of Example 2 for \(\alpha =0.6\) and \(\beta =1.7\) with step sizes \(\Delta t =h=\frac{1}{100}\), which is consistent with the results in Fig. 4, where the curves depict the numerical solutions at \(\textrm{T} =0.5\), 1, and 1.5. Figure 6 displays the numerical solution for different values of \(\alpha \) and \(\beta \), with the same space-time step sizes, at \(T =2\).

5 Conclusion

In this study, a novel approximate method is proposed for simulating fractional diffusion equations: we derive a finite difference scheme for the time-space fractional diffusion equation. In contrast to previous works on fractional diffusion equations, we deal with an equation involving two trending fractional operators, the Caputo–Fabrizio derivative of order \(\alpha \in (0, 1)\) and the Riemann–Liouville derivative of order \(\beta \in (1, 2)\). We construct a finite difference scheme for the problem which is unconditionally stable and convergent in the maximum norm with order \(O\left( \Delta t^{2}+h^2\right) \) under the sufficient condition \(1.6 \le \beta <2\). The numerical experiments support the theoretical analysis.