1 Introduction

Differential equations with fractional derivatives serve as superior models in subjects as diverse as astrophysics, chaotic dynamics, fractal networks, signal processing, continuum mechanics, turbulent flow and wave propagation [29, 34, 40, 51]. Equations of this type incorporate non-local memory effects into the mathematical mechanism, thereby filling a significant gap left by classical models, which fail to describe natural phenomena such as anomalous transport. In general, the exact solutions can seldom be represented in closed form by elementary functions, which makes it challenging to derive sufficiently accurate analytic approximations; consequently, there has been keen interest in designing robust algorithms to investigate these equations numerically.

In this article, we aim to construct an efficient method to numerically solve the general problems:

  1. (I)

    1D time-fractional ADEs

    $$\begin{aligned} \frac{\partial ^\alpha u(x,t)}{\partial t^\alpha }+\kappa \frac{\partial u(x,t)}{\partial x}&-\varepsilon \frac{\partial ^2 u(x,t)}{\partial x^2} =f(x,t), \end{aligned}$$
    (1)

    with \(0<\alpha \le 1\), \(\kappa , \varepsilon \ge 0\), \(a\le x\le b\), \(t>0\), and the initial and boundary conditions

    $$\begin{aligned}&u(x,0)=\psi (x),\quad a\le x\le b, \end{aligned}$$
    (2)
    $$\begin{aligned}&u(a,t)=g_1(t), \quad u(b,t)=g_2(t), \quad t>0; \end{aligned}$$
    (3)
  2. (II)

    2D time-fractional ADEs

    $$\begin{aligned}&\frac{\partial ^\alpha u(x,y,t)}{\partial t^\alpha }+\kappa _x\frac{\partial u(x,y,t)}{\partial x}+\kappa _y\frac{\partial u(x,y,t)}{\partial y}\nonumber \\&\quad -\varepsilon _x\frac{\partial ^2 u(x,y,t)}{\partial x^2}-\varepsilon _y\frac{\partial ^2 u(x,y,t)}{\partial y^2} =f(x,y,t),\nonumber \\ \end{aligned}$$
    (4)

    with \(0<\alpha \le 1\), \(\kappa _x, \kappa _y, \varepsilon _x,\varepsilon _y \ge 0\), \((x,y)\in \Omega \), \(t>0\), and the initial and boundary conditions

    $$\begin{aligned}&u(x,y,0)=\psi (x,y),\quad (x,y)\in \Omega , \end{aligned}$$
    (5)
    $$\begin{aligned}&u(x,y,t)=g(x,y,t), \ \ (x,y)\in \partial \Omega ,\quad t>0, \end{aligned}$$
    (6)

    where \(\Omega =\{(x,y):a\le x\le b, c\le y\le d\}\) and \(\partial \Omega \) denotes its boundary;

  3. (III)

    2D space-fractional ADEs without advection

    $$\begin{aligned} \begin{aligned}&\frac{\partial u(x,y,t)}{\partial t}-\varepsilon _x\frac{\partial ^{\beta _1} u(x,y,t)}{\partial x^{\beta _1}}\\&\quad -\varepsilon _y\frac{\partial ^{\beta _2} u(x,y,t)}{\partial y^{\beta _2}} =f(x,y,t), \end{aligned} \end{aligned}$$
    (7)

    with \(1<\beta _1, \beta _2<2\), \(\varepsilon _x,\varepsilon _y \ge 0\), \((x,y)\in \Omega \), \(t>0\), and the initial and boundary conditions

    $$\begin{aligned} u(x,y,0)&=\psi (x,y),\quad (x,y)\in \Omega , \end{aligned}$$
    (8)
    $$\begin{aligned} u(x,y,t)&=0, \ \ (x,y)\in \partial \Omega ,\quad t>0, \end{aligned}$$
    (9)

    where \(\Omega \) and \(\partial \Omega \) are given as above.

In Eqs. (1), (4), the time-fractional derivatives are defined in the Caputo sense, i.e.,

$$\begin{aligned} \frac{\partial ^\alpha u(x,t)}{\partial t^\alpha }=\frac{1}{\Gamma (1-\alpha )} \int ^t_0\frac{\partial u(x,\xi )}{\partial \xi }\frac{d\xi }{(t-\xi )^\alpha }, \end{aligned}$$

while in Eq. (7), the space-fractional derivatives are defined in the Riemann–Liouville sense, i.e.,

$$\begin{aligned} \frac{\partial ^{\beta _1} u(x,y,t)}{\partial x^{\beta _1}}&=\frac{1}{\Gamma (2-\beta _1)}\frac{\partial ^2 }{\partial x^2} \int ^x_a\frac{u(\xi ,y,t)d\xi }{(x-\xi )^{\beta _1-1}},\\ \frac{\partial ^{\beta _2} u(x,y,t)}{\partial y^{\beta _2}}&=\frac{1}{\Gamma (2-\beta _2)}\frac{\partial ^2 }{\partial y^2} \int ^y_c\frac{u(x,\xi ,t)d\xi }{(y-\xi )^{\beta _2-1}}, \end{aligned}$$

and \(\frac{\partial ^\alpha u(x,y,t)}{\partial t^\alpha }\) is the analog of \(\frac{\partial ^\alpha u(x,t)}{\partial t^\alpha }\), where \(\Gamma (\cdot )\) is the Gamma function. Note that Eqs. (1)–(3), (4)–(6), and (7)–(9) reduce to the classical 1D or 2D ADEs when \(\alpha =1\) and \(\beta _1=\beta _2=2\).
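As a quick sanity check on the Caputo definition, the fractional derivative can be evaluated numerically with a product midpoint rule that integrates the singular kernel exactly on each panel. The sketch below is purely illustrative (the test function \(u(t)=t^2\) and the quadrature rule are our choices, not part of the method developed later); it compares the result with the closed form \({^C_0}D^\alpha _t t^2=2t^{2-\alpha }/\Gamma (3-\alpha )\).

```python
import math

def caputo(uprime, t, alpha, n=4000):
    """Caputo derivative of order alpha in (0,1) at time t, computed from the
    substituted form (1/Gamma(1-alpha)) * int_0^t u'(t-s) s^{-alpha} ds.
    The singular weight s^{-alpha} is integrated exactly on each panel and
    u' is sampled at the panel midpoint (a product midpoint rule)."""
    total = 0.0
    for i in range(n):
        s0, s1 = t * i / n, t * (i + 1) / n
        w = (s1**(1 - alpha) - s0**(1 - alpha)) / (1 - alpha)  # int s^{-alpha} ds
        total += uprime(t - 0.5 * (s0 + s1)) * w
    return total / math.gamma(1 - alpha)

# u(t) = t^2, u'(t) = 2t; exact Caputo derivative: 2 t^{2-alpha} / Gamma(3-alpha)
t, alpha = 1.0, 0.5
approx = caputo(lambda s: 2 * s, t, alpha)
exact = 2 * t**(2 - alpha) / math.gamma(3 - alpha)
print(abs(approx - exact))
```

Since \(u(0)=0\) here, the Caputo and Riemann–Liouville derivatives coincide, consistent with Eq. (10) below.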

In recent decades, fractional ADEs have been subjects of intense research. Apart from a few analytic solutions, various numerical methods have been developed for Eqs. (1)–(3) without advection, covering the implicit difference method [57], high-order finite element method (FEM) [17], Legendre wavelets and spectral Galerkin methods [11, 23], direct discontinuous Galerkin method [13], quadratic spline collocation method [25], cubic B-spline collocation method (CBCM) [38], orthogonal spline collocation method [50], pseudo-spectral method [9], high-order compact difference method [16], implicit radial basis function (RBF) meshless method [24], and nonpolynomial and polynomial spline methods [12]. In [3, 5, 37, 56], algorithms based on shifted fractional Jacobi polynomials, Sinc functions and shifted Legendre polynomials, Haar wavelets, and third-kind Chebyshev wavelets were developed via integral operational matrices or collocation strategies for Eqs. (1)–(3) with variable coefficients. A Gegenbauer polynomial spectral collocation method was proposed in [14] for the same type of equations, and a Sinc–Haar collocation method can be found in [33]. Uddin and Haq considered a radial basis interpolation approach [48]. Cui established a high-order compact exponential difference scheme [6]. Razminia et al. proposed a DQ method for time-fractional diffusion equations using Lagrange interpolation polynomials as test functions [35]. Shirzadi et al. solved 2D time-fractional ADEs with a reaction term via a local Petrov–Galerkin meshless method [39]. Gao and Sun derived two different three-point combined compact alternating direction implicit (ADI) schemes for Eqs. (4)–(6) [10], both of which achieve high accuracy. High-dimensional space-fractional ADEs remain challenging in both analytic and numerical respects owing to their complexity and heavy computational burden. Numerical methods for Eqs. (7)–(9) have not yet been widely developed; for the conventional algorithms, we refer the reader to [15, 28, 36, 44, 52, 54] and the references therein.

Trigonometric B-splines are a class of piecewise-defined functions constructed from algebraic trigonometric spaces, and they have received recognition since 1964. They are often preferred to the familiar polynomial B-splines because they frequently yield smaller errors when used as basis functions in interpolation. Nevertheless, the use of these basis splines in numerical algorithms is in its infancy and the related works are limited [1, 30]. In this study, a DQ method for the general ADEs is developed, with its weighted coefficients computed from cubic trigonometric B-spline (CTB) functions. The basis splines are slightly modified for brevity and a few other advantages. Difference schemes and Runge–Kutta Gill's method are introduced to discretize the resulting ODEs. The condition ensuring the stability of the time-stepping DQ method is discussed and found to be rather mild. We also propose a new cubic B-spline-based DQ method for the 2D space-fractional diffusion equations by introducing DQ approximations to the fractional derivatives. The weights are determined by deriving explicit formulas for the fractional derivatives of the B-splines through a recursive integration-by-parts technique. The present approaches are straightforward to apply and simple to implement on computers; numerical results highlight their superiority over some previous algorithms.

The remainder of this paper is organized as follows. In Sect. 2, we outline some basic definitions and the cubic spline functions used hereinafter. In Sect. 3, the determination of the weighted coefficients based on these CTB functions is studied and a time-stepping DQ method is constructed for Eqs. (1)–(3) and Eqs. (4)–(6). Section 4 elaborates on its stability analysis. In Sect. 5, we propose a spline-based DQ method for Eqs. (7)–(9) built on a set of cubic B-splines by explicitly computing the values of their fractional derivatives at the sampling points. A couple of numerical examples are included in Sect. 6, which demonstrate the effectiveness of our methods. The last section is devoted to conclusions.

2 Preliminaries

Let \(M, N\in {\mathbb {Z}}^+\) and a time–space lattice be

$$\begin{aligned} \Omega _{\tau }&=\{t_n:t_n=n\tau ,\ 0\le n\le N \},\\ \Omega _h&=\{x_i:x_i=a+ih,\ 0\le i\le M\}, \end{aligned}$$

with \(\tau =T/N \), \(h=(b-a)/M\) on \((0,T]\times [a,b]\). Some auxiliary results are now introduced as preliminaries.

2.1 Fractional derivatives and their discretizations

For a sufficiently smooth f(x,t), the formulas

$$\begin{aligned} {^C_0}D^\alpha _tf(x,t)&=\frac{1}{\Gamma (m-\alpha )} \int ^t_0\frac{\partial ^m f(x,\xi )}{\partial \xi ^m}\frac{\mathrm{d}\xi }{(t-\xi )^{1+\alpha -m}}, \\{^{\mathrm{RL}}_0}D^\alpha _tf(x,t)&=\frac{1}{\Gamma (m-\alpha )}\frac{\partial ^m}{\partial t^m} \int ^t_0\frac{f(x,\xi )\mathrm{d}\xi }{(t-\xi )^{1+\alpha -m}}, \end{aligned}$$

define the \(\alpha \)-th order Caputo and Riemann–Liouville derivatives, respectively, where \(m-1<\alpha <m\), \(m\in \mathbb {Z}^+\); in particular, when \(\alpha =m\), both reduce to the m-th integer-order derivative.

These two frequently used fractional derivatives are equivalent up to an additive term, i.e.,

$$\begin{aligned} {^C_0}D^\alpha _tf(x,t)={^{\mathrm{RL}}_0}D^\alpha _tf(x,t)-\sum ^{m-1}_{l=0}\frac{f^{(l)}(x,0)t^{l-\alpha }}{\Gamma (l+1-\alpha )}; \end{aligned}$$
(10)

see [21, 34] for details. Utilizing \({^{\mathrm{RL}}_0}D^\alpha _t t^l=\frac{\Gamma (l+1)t^{l-\alpha }}{\Gamma (l+1-\alpha )}\) and a proper scheme to discretize the Riemann–Liouville derivative on the right-hand side of Eq. (10), a difference scheme for the Caputo derivative reads

$$\begin{aligned} \begin{aligned} {^C_0}D^\alpha _tf(x,t_n)&\cong \frac{1}{\tau ^\alpha }\sum _{k=0}^{n}\omega ^\alpha _kf(x,t_{n-k}) \\&\ -\frac{1}{\tau ^\alpha }\sum ^{m-1}_{l=0}\sum ^{n}_{k=0}\frac{\omega ^\alpha _k f^{(l)}(x,0)t_{n-k}^{l}}{l!}, \end{aligned} \end{aligned}$$
(11)

with several valid alternatives of the discrete coefficients \(\{\omega ^\alpha _k\}_{k=0}^{n}\) [4]. Typically, we have

$$\begin{aligned} \omega ^\alpha _k=(-1)^k\left( {\begin{array}{c}\alpha \\ k\end{array}}\right) =\frac{\Gamma {(k-\alpha )}}{\Gamma {(-\alpha )}\Gamma {(k+1)}}, \ \ k\ge 0, \end{aligned}$$
(12)

whose truncation error is \({\mathscr {R}}_\tau ={\mathscr {O}}(\tau )\), and

$$\begin{aligned} \omega ^\alpha _k=\bigg (\frac{11}{6}\bigg )^\alpha \sum ^k_{p=0}\sum ^p_{q=0}\mu ^q\overline{\mu }^{p-q}l^\alpha _ql^\alpha _{p-q}l^\alpha _{k-p},\ \ k\ge 0, \end{aligned}$$
(13)

with \(\mu =\frac{4}{7+\sqrt{39}\text {i}}\), \(\overline{\mu }=\frac{4}{7-\sqrt{39}\text {i}}\), \(\text {i}=\sqrt{-1}\), and

$$\begin{aligned} l^\alpha _0=1,\ \ l^\alpha _k=\bigg (1-\frac{\alpha +1}{k}\bigg )l^\alpha _{k-1},\ \ k\ge 1, \end{aligned}$$
(14)

in which case the truncation error satisfies \({\mathscr {R}}_\tau ={\mathscr {O}}(\tau ^3)\). In fact, Eq. (14) is precisely the recurrence relation satisfied by the coefficients in Eq. (12). Moreover, the coefficients \(\{\omega ^\alpha _k\}_{k=0}^{n}\) in Eq. (12) satisfy

  1. (i)

    \(\omega ^\alpha _0=1, \quad \omega ^\alpha _k< 0\), \(\forall k\ge 1\),

  2. (ii)

    \(\sum _{k=0}^{\infty }\omega ^\alpha _k=0, \quad \sum _{k=0}^{n-1}\omega ^\alpha _k>0\).

These properties follow easily from [34].

Restricting to \(0<\alpha <1\), Eq. (11) thus turns into

$$\begin{aligned} {^C_0}D^\alpha _tf(x,t_n)&= \frac{1}{\tau ^\alpha }\sum _{k=0}^{n-1}\omega ^\alpha _kf(x,t_{n-k})\nonumber \\&\ -\frac{1}{\tau ^\alpha }\sum _{k=0}^{n-1}\omega ^\alpha _kf(x,0)+\mathscr {R}_\tau . \end{aligned}$$
(15)

It is noteworthy that Eq. (15) transitions smoothly to the classical schemes when \(\alpha =1\); for instance, Eq. (15) becomes the four-point backward difference scheme if \(\alpha =1\) and \(\{\omega ^\alpha _k\}_{k=0}^{n}\) are chosen as in Eq. (13), because these coefficients also fulfill \(\sum _{k=0}^{\infty }\omega ^\alpha _k=0\) and vanish except for \(\omega ^\alpha _0\), \(\omega ^\alpha _1\), \(\omega ^\alpha _2\) and \(\omega ^\alpha _3\).
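The coefficients above are easy to generate in practice. The following sketch is illustrative only (the test function \(t^2\) is our choice): it builds \(\{\omega ^\alpha _k\}\) of Eq. (12) through the recurrence (14), applies the scheme (15) to a function with known Caputo derivative, and evaluates Eq. (13) at \(\alpha =1\) to recover the four-point backward difference coefficients \(11/6\), \(-3\), \(3/2\), \(-1/3\).

```python
import cmath
import math

def gl_weights(alpha, n):
    # first-order coefficients (12), generated by the recurrence (14)
    w = [1.0]
    for k in range(1, n + 1):
        w.append((1 - (alpha + 1) / k) * w[-1])
    return w

def third_order_weights(alpha, n):
    # coefficients (13), built from the same recurrence and the
    # complex parameters mu, mu-bar
    l = gl_weights(alpha, n)
    mu = 4 / (7 + cmath.sqrt(39) * 1j)
    mub = 4 / (7 - cmath.sqrt(39) * 1j)
    w = []
    for k in range(n + 1):
        s = 0
        for p in range(k + 1):
            for q in range(p + 1):
                s += mu**q * mub**(p - q) * l[q] * l[p - q] * l[k - p]
        w.append(((11 / 6)**alpha * s).real)
    return w

alpha, n = 0.5, 512
w = gl_weights(alpha, n)

# scheme (15) for f(t) = t^2 (so f(0) = 0) at t = 1;
# exact Caputo derivative there: 2 / Gamma(3 - alpha)
tau = 1.0 / n
approx = tau**(-alpha) * sum(w[k] * ((n - k) * tau)**2 for k in range(n))
exact = 2 / math.gamma(3 - alpha)

# at alpha = 1, (13) keeps only the four-point backward difference weights
w3 = third_order_weights(1.0, 5)
print(w3[:4])
```

The weights from Eq. (12) also satisfy properties (i)–(ii) numerically: \(\omega ^\alpha _0=1\), all later coefficients are negative, and every partial sum stays positive.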

2.2 Cubic spline functions

Let \(x_{-i}=a-ih\), \(x_{M+i}=b+ih\), \(i=1,2,3\), be the six ghost knots outside \([a,b]\). Then the desired CTB basis functions \(\{\mathrm{CTB}_m(x)\}_{m=-1}^{M+1}\) are defined as [30, 49]

$$\begin{aligned} \mathrm{CTB}_m(x) = \frac{1}{\chi }\left\{ \begin{array}{l} \phi _1(x),\qquad x \in [{x_{m - 2}},{x_{m - 1}})\\ \phi _2(x),\qquad x \in [{x_{m - 1}},{x_m})\\ \phi _3(x), \qquad x \in [{x_m},{x_{m + 1}})\\ \phi _4(x),\qquad x \in [{x_{m + 1}},{x_{m + 2}})\\ 0, \qquad \quad \quad \mathrm{{otherwise}} \end{array} \right. \end{aligned}$$

where

$$\begin{aligned}&\phi _1(x)={p^3}({x_{m - 2}}), \\&\phi _2(x)=q({x_{m + 2}}){p ^2}({x_{m-1}})+p^2({x_{m - 2}})q ({x_m})\\&\qquad \quad \quad +p({x_{m-2}})p ({x_{m - 1}})q({x_{m + 1}}),\\&\phi _3(x)=p({x_{m - 2}}){q^2}({x_{m + 1}})+q^2({x_{m + 2}})p ({x_m})\\&\qquad \quad \quad + p({x_{m - 1}})q({x_{m + 1}})q({x_{m + 2}}),\\&\phi _4(x)={q ^3}({x_{m + 2}}), \end{aligned}$$

with the notations

$$\begin{aligned}&p({x_m})=\sin \bigg ({\frac{{x-{x_m}}}{2}}\bigg ),\\&q({x_m})=\sin \bigg ({\frac{{{x_m}-x}}{2}}\bigg ),\\&\chi =\sin \bigg ({\frac{h}{2}}\bigg )\sin (h)\sin \bigg ({\frac{{3h}}{2}}\bigg ). \end{aligned}$$

The values of \(\mathrm{CTB}_m(x)\) at each knot are given by

$$\begin{aligned} \mathrm{CTB}_m(x_i)=\left\{ \begin{aligned}&\sin ^2\bigg (\frac{h}{2}\bigg )\csc (h)\csc \bigg (\frac{3h}{2}\bigg ),\ i=m \pm 1 \\&\frac{2}{1+2\cos (h)},\ \ i = m \\&0,\ \ \mathrm {otherwise} \end{aligned} \right. \end{aligned}$$
(16)

and the values of \(\mathrm{CTB}'_m(x)\) at each knot are given by

$$\begin{aligned} \mathrm{CTB}'_m(x_i)=\left\{ \begin{aligned}&\frac{3}{4}\csc \bigg ( {\frac{{3h}}{2}} \bigg ),\ \ i=m - 1\\&-\frac{3}{4}\csc \bigg ( {\frac{{3h}}{2}} \bigg ),\ \ i = m+1\\&0,\ \ \mathrm {otherwise} \end{aligned} \right. \end{aligned}$$
(17)

Using the same grid information, the cubic B-spline basis functions \(\{B_m(x)\}_{m=-1}^{M+1}\) are defined by

$$\begin{aligned} {B_m}(x) = \frac{1}{h^3}\left\{ \begin{array}{l} \varphi _1(x),\qquad x \in [{x_{m - 2}},{x_{m - 1}})\\ \varphi _2(x),\qquad x \in [{x_{m - 1}},{x_m})\\ \varphi _3(x), \qquad x \in [{x_m},{x_{m + 1}})\\ \varphi _4(x),\qquad x \in [{x_{m + 1}},{x_{m + 2}})\\ 0, \qquad \quad \quad \mathrm{{otherwise}} \end{array} \right. \end{aligned}$$

with the piecewise functions

$$\begin{aligned}&\varphi _1(x)=(x-x_{m - 2})^3, \\&\varphi _2(x)=(x-x_{m - 2})^3-4(x-x_{m - 1})^3,\\&\varphi _3(x)=(x_{m + 2}-x)^3-4(x_{m + 1}-x)^3,\\&\varphi _4(x)=(x_{m + 2}-x)^3. \end{aligned}$$

Both \(\{\mathrm{CTB}_m(x)\}_{m=-1}^{M+1}\) and \(\{B_m(x)\}_{m=-1}^{M+1}\) are compactly supported and twice continuously differentiable. Since the knots \(x_{-1}\), \(x_{M+1}\) lie beyond \([a,b]\) and the weights associated with the B-splines centered at these knots do not enter the practical computation, hereunder, as in [26] for cubic B-splines, we modify the CTBs by

$$\begin{aligned} \left\{ \begin{aligned}&{\mathrm{MTB}_0}(x) = {\mathrm{CTB}_0}(x) + 2{\mathrm{CTB}_{ - 1}}(x),\\&{\mathrm{MTB}_1}(x)= {\mathrm{CTB}_1}(x) - {\mathrm{CTB}_{ - 1}}(x),\\&{\mathrm{MTB}_m}(x) = {\mathrm{CTB}_m}(x),\;\;m = 2,3, \ldots ,M - 2,\\&{\mathrm{MTB}_{M - 1}}(x) = {\mathrm{CTB}_{M - 1}}(x) - {\mathrm{CTB}_{M + 1}}(x),\\&{\mathrm{MTB}_M}(x) = {\mathrm{CTB}_M}(x) + 2{\mathrm{CTB}_{M + 1}}(x), \end{aligned} \right. \end{aligned}$$
(18)

for simplicity; this modification results in a strictly tri-diagonal algebraic system after discretization on the uniform grid. \(\{\mathrm{MTB}_m(x)\}_{m=0}^{M}\) are also linearly independent and constitute a family of basis elements of a spline space.
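As a check on the definitions above, the following sketch evaluates \(\mathrm{CTB}_m\) directly from its piecewise form and reproduces the knot values in Eq. (16); the grid parameters are arbitrary choices for illustration.

```python
import math

def ctb(x, m, a, h):
    """Evaluate CTB_m(x) on the uniform knots x_i = a + i*h (ghost knots
    included implicitly), following the piecewise definition above."""
    p = lambda xm: math.sin((x - xm) / 2)   # p(x_m) = sin((x - x_m)/2)
    q = lambda xm: math.sin((xm - x) / 2)   # q(x_m) = sin((x_m - x)/2)
    xk = lambda i: a + i * h
    chi = math.sin(h / 2) * math.sin(h) * math.sin(3 * h / 2)
    if xk(m - 2) <= x < xk(m - 1):
        v = p(xk(m - 2))**3
    elif xk(m - 1) <= x < xk(m):
        v = (q(xk(m + 2)) * p(xk(m - 1))**2 + p(xk(m - 2))**2 * q(xk(m))
             + p(xk(m - 2)) * p(xk(m - 1)) * q(xk(m + 1)))
    elif xk(m) <= x < xk(m + 1):
        v = (p(xk(m - 2)) * q(xk(m + 1))**2 + q(xk(m + 2))**2 * p(xk(m))
             + p(xk(m - 1)) * q(xk(m + 1)) * q(xk(m + 2)))
    elif xk(m + 1) <= x < xk(m + 2):
        v = q(xk(m + 2))**3
    else:
        v = 0.0
    return v / chi

a, h, m = 0.0, 0.1, 5
A0 = 2 / (1 + 2 * math.cos(h))                                 # value at x_m
A1 = math.sin(h / 2)**2 / (math.sin(h) * math.sin(3 * h / 2))  # at x_{m +/- 1}
print([ctb(a + i * h, m, a, h) for i in range(3, 9)])
```

The half-angle identity \(\sin (3h/2)=\sin (h/2)(1+2\cos h)\) confirms analytically that the central value equals \(2/(1+2\cos h)\).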

3 Description of CTB-based DQ method

On a 2D domain \([a,b]\times [c,d]\), letting \(M_x, M_y\in {\mathbb {Z}}^+\), we introduce a spatial lattice of equally spaced grid points with spacing \(h_x=(b-a)/M_x\) along the x-axis and \(h_y=(d-c)/M_y\) along the y-axis, i.e.,

$$\begin{aligned} \Omega _x&=\{x_i:x_i=a+ih_x,\ 0\le i\le M_x\},\\ \Omega _y&=\{y_j:y_j=c+jh_y,\ 0\le j\le M_y\}. \end{aligned}$$

The DQ method is a numerical technique for finding approximate solutions of differential equations; it reduces the original problem to a system of algebraic or ordinary differential equations by replacing the spatial partial derivatives with weighted combinations of function values at certain grid points over the whole domain [2]. The key procedures of such a method are the determination of its weights and the selection of test functions whose derivative values are explicit at the prescribed discrete grid points. Accordingly, we let

$$\begin{aligned} \frac{\partial ^su(x_i,t)}{\partial x^s} \cong \sum \limits _{j=0}^M {a_{ij}^{(s)}u(x_j,t)},\ \ 0\le i\le M, \end{aligned}$$
(19)

while for 2D problems, we let

$$\begin{aligned}&\frac{\partial ^su(x_i,y_j,t)}{\partial x^s} \cong \sum \limits _{m=0}^{M_x} {a_{im}^{(s)}u(x_m,y_j,t)}, \end{aligned}$$
(20)
$$\begin{aligned}&\frac{\partial ^su(x_i,y_j,t)}{\partial y^s} \cong \sum \limits _{m=0}^{M_y} {b_{jm}^{(s)}u(x_i,y_m,t)}, \end{aligned}$$
(21)

where \(s\in {\mathbb {Z}}^+\), \(0\le i\le M_x\), \(0\le j\le M_y\), and \(a_{ij}^{(s)}\), \(a_{im}^{(s)}\), \(b_{jm}^{(s)}\) are the weighted coefficients that allow us to approximate the s-th order derivatives or partial derivatives at the given grid points in the DQ methods.

3.1 The calculation of weighted coefficients

In the sequel, we apply \(\{\mathrm{MTB}_m(x)\}_{m=0}^{M}\) to calculate the unknown weights in 1D and 2D. Setting \(s=1\) and substituting these basis splines into Eq. (19), we get

$$\begin{aligned} \frac{\partial {\mathrm{MTB}_m}(x_i)}{\partial x}=\sum ^M_{j=0}a_{ij}^{(1)}{\mathrm{MTB}_m}(x_j),\ \ 0\le i,m\le M, \end{aligned}$$

with the weighted coefficients of the first-order derivative \(a_{ij}^{(1)}\), \(0\le i,j\le M\), yet to be determined. In view of (18) and the properties (16)–(17), some manipulations on the above equations yield the matrix–vector forms

$$\begin{aligned} \left\{ \begin{aligned}&\mathbf A {} \mathbf a ^{(1)}_0=\mathbf Z _0,\\&\mathbf A {} \mathbf a ^{(1)}_1=\mathbf Z _1,\\&\qquad \ \vdots \\&\mathbf A {} \mathbf a ^{(1)}_M=\mathbf Z _M,\\ \end{aligned} \right. \end{aligned}$$
(22)

where \(\mathbf A \) is the \((M+1)\times (M+1)\) coefficient matrix

$$\begin{aligned} \mathbf A =\left( \begin{array}{cccccc} A_0+2A_1&{} A_1 &{} &{} &{} &{}\\ 0&{} A_0 &{} A_1 &{} &{} &{} \\ &{} A_1 &{} A_0 &{} A_1 &{} &{} \\ &{} &{}\ddots &{}\ddots &{}\ddots &{} \\ &{} &{} &{} A_1 &{} A_0 &{} 0\\ &{} &{} &{} &{} A_1 &{} A_0+2A_1 \end{array} \right) , \end{aligned}$$
$$\begin{aligned}&A_0=\frac{2}{1+2\cos (h)},\\ {}&A_1=\sin ^2\bigg (\frac{h}{2}\bigg )\csc (h)\csc \bigg (\frac{3h}{2}\bigg ), \end{aligned}$$

\(\mathbf a ^{(1)}_k\), \(0\le k\le M\), are the weighted coefficient vectors at \(x_k\), i.e., \(\mathbf a ^{(1)}_k=[a_{k0}^{(1)},a_{k1}^{(1)},\ldots ,a_{kM}^{(1)}]^\mathrm{{T}}\), and the right-hand side vectors \(\mathbf Z _k\) at \(x_k\), \(0\le k\le M\), are as follows

$$\begin{aligned}&\mathbf Z _0 = \left( \begin{array}{c} -2z\\ 2z\\ 0\\ 0\\ \vdots \\ 0\\ 0 \end{array} \right) ,\ \ \mathbf Z _1=\left( \begin{array}{c} -z\\ 0\\ z\\ 0\\ \vdots \\ 0\\ 0 \end{array}\right) , \ldots , \\&\mathbf Z _{M-1}=\left( \begin{array}{c} 0\\ 0\\ \vdots \\ 0\\ -z\\ 0\\ z \end{array}\right) ,\ \ \mathbf Z _M=\left( \begin{array}{c} 0\\ 0\\ \vdots \\ 0\\ 0\\ -2z\\ 2z \end{array}\right) , \end{aligned}$$

with \(z=\frac{3}{4}\csc \left( \frac{3h}{2}\right) \), respectively. Thus, \(a_{ij}^{(1)}\) are obtained by solving Eqs. (22) for each point \(x_i\). There are two different ways to derive the weighted coefficients \(a_{ij}^{(2)}\) of the second-order derivative: (i) proceed in a similar fashion as above by putting \(s=2\) in Eq. (19) and solving an algebraic system for each grid point; (ii) find the weighted coefficients \(a_{ij}^{(s)}\), \(s\ge 2\), corresponding to the higher-order derivatives in a recursive style [41], i.e.,

$$\begin{aligned}&a_{ij}^{(s)}=s\Bigg ( {a_{ii}^{(s-1)}a_{ij}^{(1)}-\frac{a_{ij}^{(s-1)}}{x_i-x_j}}\Bigg ),\ \ i\ne j,\ 0\le i\le M,\\&a_{ii}^{(s)}=-\sum \limits _{j=0,j\ne i}^M {a_{ij}^{(s)}} ,\ \ i = j, \end{aligned}$$

which includes \(s=2\) as a special case. The former is less efficient since the associated systems must be solved first, so the latter is adopted throughout our computations. Proceeding as before with \(\Omega _h\) replaced by \(\Omega _x\), \(\Omega _y\) yields the 2D generalization giving \(a^{(1)}_{im}\), \(b^{(1)}_{jm}\) for the first-order partial derivatives with respect to the variables x, y in Eqs. (20)–(21); from these, the following relationships can further be applied, i.e.,

$$\begin{aligned}&a_{im}^{(s)}=s\Bigg ( {a_{ii}^{(s-1)}a_{im}^{(1)}-\frac{a_{im}^{(s-1)}}{x_i-x_m}}\Bigg ),\ \ i\ne m,\ 0\le i\le M_x,\\&a_{ii}^{(s)}=-\sum \limits _{m=0,m\ne i}^{M_x}{a_{im}^{(s)}} ,\ \ i = m,\\&b_{jm}^{(s)}=s\Bigg ( {b_{jj}^{(s-1)}b_{jm}^{(1)}-\frac{b_{jm}^{(s-1)}}{y_j-y_m}}\Bigg ),\ \ j\ne m,\ 0\le j\le M_y,\\&b_{jj}^{(s)}=-\sum \limits _{m=0,m\ne j}^{M_y}{b_{jm}^{(s)}} ,\ \ j = m, \end{aligned}$$

to calculate \(a^{(s)}_{im}\), \(b^{(s)}_{jm}\) with \(s\ge 2\).

A point worth noticing is that \(A_0,\ A_1>0\) when \(0<h<1\) and \(0<h_x,\ h_y<1\). Regarding \(A_0,\ A_1\) as functions of h, we obtain their derivatives

$$\begin{aligned} A'_0&=\frac{4\sin (h)}{(1+2\cos (h))^2},\\ A'_1&=\frac{\sec (\frac{h}{2})\tan (\frac{h}{2})(5+6\cos (h))}{4(1+2\cos (h))^2}. \end{aligned}$$

For \(0<h<1\), both derivatives are positive, i.e., \(A_0,\ A_1\) are increasing functions of h. On the other hand, \(A_0(0)=0.6667\) and \(A_1(1)=0.2738\). It then follows that

$$\begin{aligned} \frac{2}{1+2\cos (h)}>2\sin ^2\bigg (\frac{h}{2}\bigg )\csc (h)\csc \bigg (\frac{3h}{2}\bigg ), \end{aligned}$$

i.e., \(A_0>2A_1\), so \(\mathbf A \) is a strictly diagonally dominant tri-diagonal matrix. Hence, the Thomas algorithm can be applied to the algebraic systems (22), which requires only \({\mathscr {O}}(M+1)\) arithmetic operations and greatly economizes on memory and computing time in practice.
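To summarize the procedure, the sketch below is our illustrative implementation (with `numpy.linalg.solve` standing in for the Thomas algorithm, and \(f(x)=\sin x\) as an arbitrary smooth test function): it assembles \(\mathbf A\) and the vectors \(\mathbf Z_k\), solves Eqs. (22) for the first-order weights, and applies the recursion with \(s=2\).

```python
import numpy as np

def dq_weights(a, b, M):
    h = (b - a) / M
    x = a + h * np.arange(M + 1)
    A0 = 2 / (1 + 2 * np.cos(h))
    A1 = np.sin(h / 2)**2 / (np.sin(h) * np.sin(3 * h / 2))
    z = 0.75 / np.sin(3 * h / 2)

    # tridiagonal collocation matrix A (row m holds MTB_m at the knots)
    A = A0 * np.eye(M + 1) + A1 * (np.eye(M + 1, k=1) + np.eye(M + 1, k=-1))
    A[0, 0] += 2 * A1; A[1, 0] = 0.0
    A[M, M] += 2 * A1; A[M - 1, M] = 0.0

    # column i of Z is the vector Z_i, i.e. Z[m, i] = MTB'_m(x_i)
    Z = np.zeros((M + 1, M + 1))
    for i in range(1, M):
        Z[i - 1, i], Z[i + 1, i] = -z, z
    Z[0, 0], Z[1, 0] = -2 * z, 2 * z
    Z[M - 1, M], Z[M, M] = -2 * z, 2 * z

    W1 = np.linalg.solve(A, Z).T          # W1[i, j] = a_ij^(1)
    W2 = np.zeros_like(W1)                # recursion with s = 2
    for i in range(M + 1):
        for j in range(M + 1):
            if i != j:
                W2[i, j] = 2 * (W1[i, i] * W1[i, j] - W1[i, j] / (x[i] - x[j]))
        W2[i, i] = -W2[i].sum()
    return x, W1, W2

x, W1, W2 = dq_weights(0.0, 1.0, 40)
e1 = np.abs(W1 @ np.sin(x) - np.cos(x))[5:-5].max()
e2 = np.abs(W2 @ np.sin(x) + np.sin(x))[5:-5].max()
print(e1, e2)
```

The rows of \(W^{(1)}\) sum to zero exactly, since the modified basis reproduces constants at the knots, and the interior derivative errors for the test function are small.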

3.2 Construction of CTB-based DQ method

In this subsection, a DQ method based on \(\{\mathrm{MTB}_m(x)\}_{m=0}^{M}\) (MCTB-DQM) is constructed for Eqs. (1)–(3) and Eqs. (4)–(6). Let \(s=1\), 2. Substituting the weighted sums (19), (20)–(21) into the governing equations gives

$$\begin{aligned}&\frac{\partial ^\alpha u(x_i,t)}{\partial t^\alpha }+\kappa \sum \limits _{j=0}^M {a_{ij}^{(1)}u(x_j,t)}\\&\quad -\varepsilon \sum \limits _{j=0}^M {a_{ij}^{(2)}u(x_j,t)}=f(x_i,t), \end{aligned}$$

with \(i=0,1,\ldots ,M\), and

$$\begin{aligned}&\frac{\partial ^\alpha u(x_i,y_j,t)}{\partial t^\alpha }+\kappa _x\sum \limits _{m=0}^{M_x}{a_{im}^{(1)}u(x_m,y_j,t)}\\&\quad +\kappa _y\sum \limits _{m=0}^{M_y}{b_{jm}^{(1)}u(x_i,y_m,t)}\\&\quad -\varepsilon _x\sum \limits _{m=0}^{M_x}{a_{im}^{(2)}u(x_m,y_j,t)}\\&\quad -\varepsilon _y\sum \limits _{m=0}^{M_y}{b_{jm}^{(2)}u(x_i,y_m,t)}=f(x_i,y_j,t), \end{aligned}$$

with \(i=0,1,\ldots ,M_x\), \(j=0,1,\ldots ,M_y\), which are in fact systems of \(\alpha \)-th order ODEs subject to the boundary constraints (3), (6); the cases \(\alpha \in (0,1)\) and \(\alpha =1\) are treated separately. In what follows, we employ the notations

$$\begin{aligned}&u_i^n=u(x_i,t_n),\quad u_{ij}^n=u(x_i,y_j,t_n),\\&f^n_{i}=f(x_i,t_n),\quad f^n_{ij}=f(x_i,y_j,t_n),\\&g_1^n=g_1(t_n),\quad g_2^n=g_2(t_n),\quad g_{ij}^n=g(x_i,y_j,t_n), \end{aligned}$$

for the ease of exposition, where \(n=0,1,\ldots ,N\).

3.2.1 The case of fractional order

Discretizing the ODEs above by the difference scheme (15) and imposing boundary constraints, we have

$$\begin{aligned} \left\{ \begin{aligned}&\omega ^\alpha _0U_i^n+\kappa \tau ^\alpha \sum \limits _{j=1}^{M-1}a_{ij}^{(1)}U_j^n -\varepsilon \tau ^\alpha \sum _{j=1}^{M-1}a_{ij}^{(2)}U_j^n\\&=-\sum _{k=1}^{n-1}\omega ^\alpha _kU_i^{n-k}+\sum _{k=0}^{n-1}\omega ^\alpha _kU_i^0+ \tau ^\alpha G^n_i, \end{aligned}\right. \end{aligned}$$
(23)

with \(i=1,2,\ldots ,M-1\) and

$$\begin{aligned} G^n_i = f^n_i-\kappa \big (a_{i0}^{(1)}g^n_1+a_{iM}^{(1)}g^n_2\big )+\varepsilon \big (a_{i0}^{(2)}g^n_1+a_{iM}^{(2)}g^n_2\big ), \end{aligned}$$

for Eqs. (1)–(3), and the following scheme

$$\begin{aligned} \left\{ \begin{aligned}&\omega ^\alpha _0U_{ij}^n+\kappa _x\tau ^\alpha \sum _{m=1}^{M_x-1}a_{im}^{(1)}U_{mj}^n+\kappa _y\tau ^\alpha \sum _{m=1}^{M_y-1}b_{jm}^{(1)}U_{im}^n\\&\ -\varepsilon _x\tau ^\alpha \sum _{m=1}^{M_x-1}a_{im}^{(2)}U_{mj}^n-\varepsilon _y\tau ^\alpha \sum _{m=1}^{M_y-1}b_{jm}^{(2)}U_{im}^n\\&\ =-\sum _{k=1}^{n-1}\omega ^\alpha _kU_{ij}^{n-k}+\sum _{k=0}^{n-1}\omega ^\alpha _kU_{ij}^0+ \tau ^\alpha G^n_{ij}, \end{aligned}\right. \end{aligned}$$
(24)

with \(i=1,2,\ldots ,M_x-1\), \(j=1,2,\ldots ,M_y-1\), and

$$\begin{aligned} G^n_{ij} =&f^n_{ij}-\kappa _x\big (a_{i0}^{(1)}g^n_{0j}+a_{iM_x}^{(1)}g^n_{M_xj}\big ) -\kappa _y\big (b_{j0}^{(1)}g^n_{i0}+b_{jM_y}^{(1)}g^n_{iM_y}\big )\\&+\varepsilon _x\big (a_{i0}^{(2)}g^n_{0j}+a_{iM_x}^{(2)}g^n_{M_xj}\big ) +\varepsilon _y\big (b_{j0}^{(2)}g^n_{i0}+b_{jM_y}^{(2)}g^n_{iM_y}\big ), \end{aligned}$$

for Eqs. (4)–(6). Eqs. (23)–(24) can further be rewritten in matrix–vector form; for instance, letting

$$\begin{aligned} \mathbf U ^n&=[U^n_{11},\ldots ,U^n_{M_{x}-1,1},U^n_{12},\ldots ,U^n_{M_{x}-1,M_{y}-1}]^T,\\ \mathbf G ^n&=[G^n_{11},\ldots ,G^n_{M_{x}-1,1},G^n_{12},\ldots ,G^n_{M_{x}-1,M_{y}-1}]^T, \end{aligned}$$

for Eqs. (24), we have

$$\begin{aligned} \omega ^\alpha _0\mathbf U ^n+\tau ^\alpha \mathbf K {} \mathbf U ^n=-\sum _{k=1}^{n-1}\omega ^\alpha _k\mathbf U ^{n-k} +\sum _{k=0}^{n-1}\omega ^\alpha _k\mathbf U ^0+\tau ^\alpha \mathbf G ^n, \end{aligned}$$
(25)

where

$$\begin{aligned} \mathbf K =\kappa _x\mathbf I _y\otimes \mathbf W ^{1}_x+\kappa _y \mathbf W ^{1}_y\otimes \mathbf I _x -\varepsilon _x\mathbf I _y\otimes \mathbf W ^{2}_x-\varepsilon _y \mathbf W ^{2}_y\otimes \mathbf I _x, \end{aligned}$$

with \(\mathbf I _x\), \(\mathbf I _y\) being the identity matrices in x- and y-axis, “\(\otimes \)” being Kronecker product, and

$$\begin{aligned} \mathbf W ^{c}_{z} =\left( \begin{array}{llll} w^{(c)}_{11}&{}w^{(c)}_{12} &{}\cdots &{}w^{(c)}_{1,M_z-1}\\ w^{(c)}_{21}&{}w^{(c)}_{22} &{}\cdots &{}w^{(c)}_{2,M_z-1} \\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ w^{(c)}_{M_z-1,1}&{}w^{(c)}_{M_z-1,2}&{}\cdots &{}w^{(c)}_{M_z-1,M_z-1} \end{array} \right) , \ c=1,2, \end{aligned}$$

in which \(z=x\) if \(w=a\), while \(z=y\) if \(w=b\). The initial states are obtained from Eqs. (2), (5). As a result, the approximate solutions are computed by rewriting Eqs. (23)–(24) in matrix–vector form and iterating up to the last time level.
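The Kronecker assembly of \(\mathbf K\) can be checked against the componentwise sums in Eq. (24). In the sketch below the weight matrices are random stand-ins (the identity being verified is purely algebraic, independent of the actual DQ weights), and the unknowns are ordered as in \(\mathbf U ^n\), i.e., the x-index runs fastest.

```python
import numpy as np

rng = np.random.default_rng(0)
Mx, My = 5, 4                      # interior sizes M_x - 1, M_y - 1
W1x, W2x = rng.standard_normal((2, Mx, Mx))   # stand-ins for W_x^1, W_x^2
W1y, W2y = rng.standard_normal((2, My, My))   # stand-ins for W_y^1, W_y^2
kx, ky, ex, ey = 1.0, 2.0, 0.5, 0.25

Iy, Ix = np.eye(My), np.eye(Mx)
K = (kx * np.kron(Iy, W1x) + ky * np.kron(W1y, Ix)
     - ex * np.kron(Iy, W2x) - ey * np.kron(W2y, Ix))

# componentwise weighted sums as in the scheme (24)
U = rng.standard_normal((Mx, My))             # U[i, j] plays U_{i+1, j+1}
S = np.zeros_like(U)
for i in range(Mx):
    for j in range(My):
        S[i, j] = (kx * W1x[i] @ U[:, j] + ky * W1y[j] @ U[i, :]
                   - ex * W2x[i] @ U[:, j] - ey * W2y[j] @ U[i, :])

# column-major flattening matches the ordering of U^n (x-index fastest)
resid = np.max(np.abs(K @ U.flatten(order='F') - S.flatten(order='F')))
print(resid)
```

This confirms that one matrix–vector product with \(\mathbf K\) reproduces all four directional sums at once.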

3.2.2 The case of integer order

When \(\alpha =1\), although \(\omega ^1_0=1.8333\), \(\omega ^1_1=-3\), \(\omega ^1_2=1.5\), \(\omega ^1_3=-0.3333\) are the coefficients of the four-point backward difference scheme, starting values with errors of the same convergence rate are generally needed to initialize Eqs. (23)–(24). This situation does not arise if the \(\{\omega _k^\alpha \}_{k=0}^n\) in Eq. (12) are applied. In this case, to make the algorithm more cost-effective, we instead use Runge–Kutta Gill's method to handle the ODEs, which is explicit and fourth-order convergent. Rearranging the ODEs in the unified form

$$\begin{aligned} \frac{\partial \mathbf u }{\partial t}=\mathbf F (\mathbf u ), \end{aligned}$$
(26)

then the DQ method is constructed as follows

$$\begin{aligned}&\mathbf U ^{n}=\mathbf U ^{n-1}+\frac{1}{6}\Big [K_1+(2-\sqrt{2})K_2+(2+\sqrt{2})K_3+K_4\Big ],\nonumber \\&K_1=\tau \mathbf F \big (t_{n-1},\mathbf U ^{n-1}\big ),\nonumber \\&K_2=\tau \mathbf F \bigg (t_{n-1}+\frac{\tau }{2},\mathbf U ^{n-1}+\frac{K_1}{2}\bigg ),\nonumber \\&K_3=\tau \mathbf F \bigg (t_{n-1}+\frac{\tau }{2},\mathbf U ^{n-1}+\frac{\sqrt{2}-1}{2}K_1+\frac{2-\sqrt{2}}{2}K_2\bigg ),\nonumber \\&K_4=\tau \mathbf F \bigg (t_{n-1}+\tau ,\mathbf U ^{n-1}-\frac{\sqrt{2}}{2}K_2+\frac{2+\sqrt{2}}{2}K_3\bigg ), \end{aligned}$$
(27)

where \(\mathbf u \), \(\mathbf U ^{n}\), \(n=1,2,\ldots ,N\), are the unknown vectors and \(\mathbf F (\cdot )\) stands for the matrix–vector system corresponding to the weighted sums in the ODEs, containing \(a^{(s)}_{ij}\) or \(a^{(s)}_{im}\), \(b^{(s)}_{jm}\), \(s=1\), 2, as its elements. Meanwhile, the boundary constraints (3), (6) must be imposed on \(\mathbf F (\cdot )\) in the same way as in the fractional case before the procedure (27) can be carried out.
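For reference, one step of Runge–Kutta Gill's method (27) can be sketched as follows; the scalar test problem \(u'=-u\) is our choice for confirming the fourth-order behavior.

```python
import math

def gill_step(F, t, u, tau):
    """One step of Runge-Kutta Gill's method (27) for du/dt = F(t, u)."""
    s2 = math.sqrt(2)
    k1 = tau * F(t, u)
    k2 = tau * F(t + tau / 2, u + k1 / 2)
    k3 = tau * F(t + tau / 2, u + (s2 - 1) / 2 * k1 + (2 - s2) / 2 * k2)
    k4 = tau * F(t + tau, u - s2 / 2 * k2 + (2 + s2) / 2 * k3)
    return u + (k1 + (2 - s2) * k2 + (2 + s2) * k3 + k4) / 6

def solve(tau, T=1.0):
    # scalar test problem u' = -u, u(0) = 1, exact solution e^{-t}
    u, t = 1.0, 0.0
    while t < T - 1e-12:
        u = gill_step(lambda s, v: -v, t, u, tau)
        t += tau
    return u

e1 = abs(solve(0.1) - math.exp(-1))
e2 = abs(solve(0.05) - math.exp(-1))
print(e1 / e2)   # roughly 2^4 = 16 for a fourth-order method
```

Halving the step size reduces the error by about a factor of 16, consistent with fourth-order convergence.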

4 Stability analysis

This section studies the matrix stability of Eqs. (26) and the numerical stability of Eqs. (23)–(24). When \(\alpha =1\), we rewrite Eqs. (26) as

$$\begin{aligned} \frac{\partial \mathbf u }{\partial t}=-\mathbf K {} \mathbf u + \mathbf Q , \end{aligned}$$
(28)

where \(\mathbf Q \) is a vector containing the right-hand side and the boundary conditions, and \(\mathbf K \) is the weighted matrix mentioned before. We discuss the homogeneous case. The numerical stability of an algorithm for the ODEs generated by a DQ method relies on the stability of the ODEs themselves: only when their solutions are stable can a standard integrator such as Runge–Kutta Gill's method yield convergent solutions. For their stability it suffices that the real parts of the eigenvalues of the weighted matrix \(-\mathbf K \) are all non-positive. Denote by \({\varvec{\lambda }}_z^c\) the row vector consisting of the eigenvalues of \(\mathbf W ^{c}_z\), with \(z=x,y\) and \(c=1,2\). In view of the properties of the Kronecker product, the eigenvalues of \(\mathbf W ^{c}_y\otimes \mathbf I _x\), \(\mathbf I _y\otimes \mathbf W ^{c}_x\) are \({\varvec{\lambda }}^{c}_y\otimes \mathbf e _x\) and \(\mathbf e _y\otimes {\varvec{\lambda }}^{c}_x\) (see [22]), respectively, and therefore the eigenvalues of \(-\mathbf K \) in Eq. (28) are

$$\begin{aligned} {\varvec{\lambda }}=-\kappa _x\mathbf e _y\otimes {\varvec{\lambda }}^{1}_x-\kappa _y {\varvec{\lambda }}^{1}_y\otimes \mathbf e _x +\varepsilon _x\mathbf e _y\otimes {\varvec{\lambda }}^{2}_x+\varepsilon _y {\varvec{\lambda }}^{2}_y\otimes \mathbf e _x, \end{aligned}$$

where \(\mathbf e _x\), \(\mathbf e _y\) are the row vectors of sizes \(M_x+1\) and \(M_y+1\), respectively, with all components equal to 1. The exact solution of the ODEs is governed by \(\varvec{\lambda }\), and the condition \(\text {Re}\{\varvec{\lambda }\}\le 0\) is easy to meet because the \({\varvec{\lambda }}_z^2\) are always observed to be real and negative while the \({\varvec{\lambda }}_z^1\) are complex with real parts very close to zero; see Fig. 1 for an example. Moreover, the foregoing analysis is also valid for the 1D cases, and the behavior appearing in Fig. 1 becomes more pronounced as the number of grid points increases. Hence, we conclude that the ODEs are stable in most cases.
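This eigenvalue condition is easy to inspect numerically. In the sketch below, standard centered-difference matrices serve as hypothetical stand-ins for \(\mathbf W^1_z\), \(\mathbf W^2_z\) (they share the structure just described: a skew-symmetric first-derivative part with purely imaginary eigenvalues, and a symmetric negative-definite second-derivative part), so the real parts of the eigenvalues of \(-\mathbf K\) come out non-positive.

```python
import numpy as np

def stand_ins(M, a, b):
    # centered-difference stand-ins for the interior weight matrices
    h, n = (b - a) / M, M - 1
    W1 = (np.eye(n, k=1) - np.eye(n, k=-1)) / (2 * h)               # skew-symmetric
    W2 = (np.eye(n, k=1) + np.eye(n, k=-1) - 2 * np.eye(n)) / h**2  # negative definite
    return W1, W2

W1x, W2x = stand_ins(8, 0.0, 2.0)
W1y, W2y = stand_ins(8, 0.0, 2.0)
kx = ky = ex = ey = 1.0
Ix, Iy = np.eye(W1x.shape[0]), np.eye(W1y.shape[0])
K = (kx * np.kron(Iy, W1x) + ky * np.kron(W1y, Ix)
     - ex * np.kron(Iy, W2x) - ey * np.kron(W2y, Ix))
lam = np.linalg.eigvals(-K)
print(lam.real.max())
```

For these stand-ins the conclusion is in fact rigorous: the symmetric part of \(-\mathbf K\) is the negative-definite diffusion contribution, so every eigenvalue of \(-\mathbf K\) has negative real part.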

The discussion of the numerical stability of a fully discrete DQ method is difficult and the literature is still sparse [42, 43]. In the sequel, we show the conditionally stable nature of Eqs. (23)–(24) with respect to the \(L_2\)-norm \(||\cdot ||\); the analysis is not limited to the fractional case. Without loss of generality, consider the 2D case and the discrete coefficients \(\{\omega ^\alpha _k\}_{k=0}^{n}\) in Eq. (12). Let \(\tilde{\mathbf{U }}^0\) be the approximation of the initial values \(\mathbf U ^0\). Then

$$\begin{aligned} \tilde{\mathbf{U }}^n+\tau ^\alpha \mathbf K \tilde{\mathbf{U }}^n=-\sum _{k=1}^{n-1}\omega ^\alpha _k\tilde{\mathbf{U }}^{n-k} +\sum _{k=0}^{n-1}\omega ^\alpha _k\tilde{\mathbf{U }}^0+\tau ^\alpha \mathbf G ^n. \end{aligned}$$
(29)
Fig. 1

The eigenvalues of the weighted matrices generated by DQ method when \(a=c=0\), \(b=d=2\): (a) \(\mathbf W ^1_z\); (b) \(\mathbf W ^2_z\)

On subtracting Eq. (29) from (25) and letting \(\mathbf e ^n=\mathbf U ^n-\tilde{\mathbf{U }}^n\), we have the perturbation equation

$$\begin{aligned} \mathbf e ^n=-\sum _{k=1}^{n-1}\omega ^\alpha _k(\mathbf I +\tau ^\alpha \mathbf K )^{-1}{} \mathbf e ^{n-k} +\sum _{k=0}^{n-1}\omega ^\alpha _k(\mathbf I +\tau ^\alpha \mathbf K )^{-1}{} \mathbf e ^0, \end{aligned}$$
(30)

where \(\mathbf I \) is the identity matrix of the same size as \(\mathbf K \). To prove \(||\mathbf e ^n||\le ||\mathbf e ^0||\), we make the assumption

$$\begin{aligned} ||(\mathbf I +\tau ^\alpha \mathbf K )^{-1}||\le 1. \end{aligned}$$
(31)

When \(n=1\), taking \(||\cdot ||\) on both sides of Eq. (30), \(||\mathbf e ^1||\le ||\mathbf e ^0||\) follows trivially since \(\omega ^\alpha _0=1\). Assume

$$\begin{aligned} ||\mathbf e ^m||\le ||\mathbf e ^0||, \quad m=1,2,\ldots ,n-1. \end{aligned}$$

Using mathematical induction, it thus follows from the properties of \(\{\omega ^\alpha _k\}_{k=0}^{n}\) stated in Sect. 2 that

$$\begin{aligned} ||\mathbf e ^n||&=\Bigg |\Bigg |-\sum _{k=1}^{n-1}\omega ^\alpha _k(\mathbf I +\tau ^\alpha \mathbf K )^{-1}{} \mathbf e ^{n-k} +\sum _{k=0}^{n-1}\omega ^\alpha _k(\mathbf I +\tau ^\alpha \mathbf K )^{-1}{} \mathbf e ^0\Bigg |\Bigg | \\&\le \Bigg (1-\sum _{k=0}^{n-1}\omega ^\alpha _k+\sum _{k=0}^{n-1}\omega ^\alpha _k\Bigg ) ||(\mathbf I +\tau ^\alpha \mathbf K )^{-1}||\max \limits _{0\le m\le n-1}||\mathbf e ^m|| \\&=||(\mathbf I +\tau ^\alpha \mathbf K )^{-1}||\max \limits _{0\le m\le n-1}||\mathbf e ^m|| \le ||\mathbf e ^0||. \end{aligned}$$

Hereinafter, we proceed with a full numerical investigation of the assumption (31) to explore the factors that may lead to \(||(\mathbf I +\tau ^\alpha \mathbf K )^{-1}||> 1\). First, if \(\tau ^\alpha \) varies continuously from 1 to 0, then \(||(\mathbf I +\tau ^\alpha \mathbf K )^{-1}||\rightarrow 1\). However, this process affects the maximal ratio of the advection coefficients to the diffusivities for which (31) holds; we leave this case to the end of the discussion. To be representative, we take \(\tau =1.0\times 10^{-3}\), \(\alpha =0.5\), and \(M_x=M_y=5\), unless otherwise stated. The main procedure is divided into three steps: (i) fixing \(\varepsilon _x,\varepsilon _y\), and \(\Omega \), we let \(\kappa _x,\kappa _y\) vary and plot the values of \(||(\mathbf I +\tau ^\alpha \mathbf K )^{-1}||\) as a function of \(\kappa _x,\kappa _y\) in (a), (b) of Fig. 2; (ii) fixing \(\kappa _x,\kappa _y\), and \(\Omega \), we let \(M_x\), \(M_y\) vary and plot the results in (c) of Fig. 2, where \(\kappa _x=\kappa _y=500\); (iii) fixing \(\kappa _x,\kappa _y,\varepsilon _x,\varepsilon _y\), we let \(a=c=0\) and b, d vary, with the corresponding results presented in (d) of Fig. 2. It is worth noting that \(\Omega \) is the unit square except in case (iii), and parameters of the same type along the x- and y-axes are taken equal, for example, \(\varepsilon _x=\varepsilon _y\). Now, we consider the influence of \(\tau \). Resetting \(\tau =1.0\times 10^{-10}\), we let \(\varepsilon _x=\varepsilon _y=1\) and vary \(\kappa _x, \kappa _y\). The behavior of the objective quantity is plotted in subfigure (e), from which we see that the critical ratio between \(\kappa _x, \kappa _y\) and \(\varepsilon _x, \varepsilon _y\) that maintains (31) is about 40, far less than in case (i), and can be further improved by increasing \(M_x, M_y\).
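As a minimal illustration of how \(||(\mathbf I +\tau ^\alpha \mathbf K )^{-1}||\) can be surveyed, the sketch below replaces the spline-based DQ matrices with simple central-difference stand-ins (an assumption made purely for brevity). For these skew-plus-definite matrices the bound (31) always holds; the non-normal DQ matrices of the paper can exceed 1, which is exactly what Fig. 2 probes:

```python
import numpy as np

# 1D finite-difference stand-in for K = kappa*D1 - eps*D2 (Dirichlet BCs);
# the paper's K comes from spline-based DQ, so this is only illustrative.
def inv_norm(kappa, eps, M=5, tau=1e-3, alpha=0.5):
    h, n = 1.0 / M, M - 1                      # n interior nodes
    D1 = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
    D2 = (np.diag(np.ones(n - 1), 1) - 2 * np.eye(n)
          + np.diag(np.ones(n - 1), -1)) / h**2
    K = kappa * D1 - eps * D2
    return np.linalg.norm(np.linalg.inv(np.eye(n) + tau**alpha * K), 2)

# D1 is skew-symmetric and -D2 is positive definite, so the symmetric part
# of I + tau^alpha K dominates I and the norm never exceeds 1 here.
assert inv_norm(kappa=1.0, eps=1.0) <= 1.0 + 1e-12
assert inv_norm(kappa=500.0, eps=0.01) <= 1.0 + 1e-12
```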

Fig. 2

The values of \(||(\mathbf I +\tau ^\alpha \mathbf K )^{-1}||\) versus the variation of various factors: \(\kappa _x,\kappa _y\), \(\varepsilon _x,\varepsilon _y\), \(M_x, M_y\), and b, d

From the foregoing discussion and figures, we summarize the conclusions as follows: (i) if \(\varepsilon _x\), \(\varepsilon _y\) are not small, the tolerable ranges of \(\kappa _x\), \(\kappa _y\) that guarantee (31) are quite loose, and when \(\varepsilon _x\), \(\varepsilon _y\rightarrow \infty \), \(||(\mathbf I +\tau ^\alpha \mathbf K )^{-1}||\) can be very close to zero; (ii) if \(\kappa _x\), \(\kappa _y\) are larger than \(\varepsilon _x\), \(\varepsilon _y\) and \(\varepsilon _x\), \(\varepsilon _y\) themselves are small, \(||(\mathbf I +\tau ^\alpha \mathbf K )^{-1}||\) can exceed 1; however, this issue can be remedied by increasing the grid numbers; (iii) in general, the larger \(M_x\), \(M_y\), the smaller \(||(\mathbf I +\tau ^\alpha \mathbf K )^{-1}||\); (iv) when the computational domain expands, \(||(\mathbf I +\tau ^\alpha \mathbf K )^{-1}||\) grows at a rate that may invalidate (31) if \(\varepsilon _x\), \(\varepsilon _y\) and \(M_x, M_y\) remain unchanged; (v) when \(\tau \rightarrow 0\), the critical ratio between \(\kappa _x, \kappa _y\) and \(\varepsilon _x, \varepsilon _y\) that maintains this assumption appears to decrease, but it increases as the grid is refined.

Consequently, the assumption is meaningful and essentially a mild theoretical restriction in practice.

5 Description of cubic B-spline DQ method

In this section, a robust DQ method (MCB-DQM) based on the modified cubic B-splines \(\{\mathrm{MB}_m(x)\}_{m=0}^{M}\) is established for Eqs. (7)–(9) by introducing DQ approximations to the fractional derivatives. In light of the essence of traditional DQ methods, we consider

$$\begin{aligned}&\frac{\partial ^{\beta _1}u(x_i,y_j,t)}{\partial x^{\beta _1}}\cong \sum \limits _{m=0}^{M_x} {a_{im}^{(\beta _1)}u(x_m,y_j,t)}, \end{aligned}$$
(32)
$$\begin{aligned}&\frac{\partial ^{\beta _2}u(x_i,y_j,t)}{\partial y^{\beta _2}}\cong \sum \limits _{m=0}^{M_y} {b_{jm}^{(\beta _2)}u(x_i,y_m,t)}, \end{aligned}$$
(33)

for the fractional derivatives in constructing the DQ algorithm, where \(0\le i\le M_x\), \(0\le j\le M_y\), and the weighted coefficients \(a_{im}^{(\beta _1)}\), \(b_{jm}^{(\beta _2)}\) satisfy

$$\begin{aligned} \frac{\partial ^{\beta _1} {\mathrm{MB}_k}(x_i)}{\partial x^{\beta _1}}=\sum ^{M_x}_{m=0}a_{im}^{(\beta _1)}{\mathrm{MB}_k}(x_m),\ \ 0\le i,k\le M_x,\end{aligned}$$
(34)
$$\begin{aligned} \frac{\partial ^{\beta _2} {\mathrm{MB}_k}(y_j)}{\partial y^{\beta _2}}=\sum ^{M_y}_{m=0}b_{jm}^{(\beta _2)}{\mathrm{MB}_k}(y_m),\ \ 0\le j,k\le M_y. \end{aligned}$$
(35)

The validity of Eqs. (32)–(33) is ensured by the linearity of the fractional derivatives. \(a_{im}^{(\beta _1)}\), \(b_{jm}^{(\beta _2)}\) are then determined by solving the algebraic systems resulting from the above equations for each axis, provided that the values of the fractional derivatives of the B-splines \(\{\mathrm{MB}_m(x)\}_{m=0}^{M}\) at all sampling points are known.
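The algebraic determination of the weighting coefficients can be sketched as follows; for checkability we substitute a monomial basis and the integer order \(\beta =2\) for the modified cubic B-splines and the fractional derivatives tabulated in the “Appendix” (both substitutions are assumptions of this sketch, not the paper’s setup):

```python
import numpy as np

# Analogue of Eq. (34): B[m, k] holds basis values at the nodes and
# D[i, k] the beta-th derivative values; the weights a_{im} solve
# sum_m a_im * B[m, k] = D[i, k] for every basis index k.
x = np.linspace(0.0, 1.0, 5)                 # sampling points x_0..x_M
M = len(x) - 1
B = np.vander(x, M + 1, increasing=True)     # monomial basis x**k at nodes
D = np.zeros_like(B)
for k in range(2, M + 1):                    # d^2/dx^2 x^k = k(k-1) x^(k-2)
    D[:, k] = k * (k - 1) * x**(k - 2)

A = np.linalg.solve(B.T, D.T).T              # weighting coefficients a_{im}

# The weights differentiate any polynomial of degree <= M exactly:
u = x**3 - 2 * x
assert np.allclose(A @ u, 6 * x)
```

With the spline basis, only the matrices B and D change; the linear solve is identical.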

5.1 The explicit formulas of fractional derivatives

It is the weakly singular integral structure that makes it difficult to calculate the values of the fractional derivatives of a function such as a B-spline at a sampling point. In what follows, we concentrate on the explicit expressions of the \(\beta \)-th (\(1<\beta <2\)) Riemann–Liouville derivative of the B-splines \(\{B_m(x)\}_{m=-1}^{M+1}\) by a recursive technique of integration by parts. Since these basis splines are piecewise and locally compact on four consecutive subintervals, we have

$$\begin{aligned} {^{\mathrm{RL}}_{x_0}}D^\beta _xB_m(x)=\left\{ \begin{array}{l} 0,\qquad \qquad \qquad \ \, x \in [x_0,x_{m-2})\\ {^{\mathrm{RL}}_{x_{m-2}}}D^\beta _x\varphi _1(x), \qquad x \in [{x_{m - 2}},{x_{m - 1}})\\ {^{\mathrm{RL}}_{x_{m-2}}}D^\beta _{x_{m-1}}\varphi _1(x)\\ \quad +{^{\mathrm{RL}}_{x_{m-1}}}D^\beta _x\varphi _2(x), \ \, x \in [{x_{m - 1}},{x_m})\\ {^{\mathrm{RL}}_{x_{m-2}}}D^\beta _{x_{m-1}}\varphi _1(x)\\ \quad +{^{\mathrm{RL}}_{x_{m-1}}}D^\beta _{x_m}\varphi _2(x)\\ \quad +{^{\mathrm{RL}}_{x_m}}D^\beta _{x}\varphi _3(x), \quad \ x \in [{x_m},{x_{m + 1}})\\ {^{\mathrm{RL}}_{x_{m-2}}}D^\beta _{x_{m-1}}\varphi _1(x)\\ \quad +{^{\mathrm{RL}}_{x_{m-1}}}D^\beta _{x_m}\varphi _2(x)\\ \quad +{^{\mathrm{RL}}_{x_m}}D^\beta _{x_{m+1}}\varphi _3(x)\\ \quad +{^{\mathrm{RL}}_{x_{m+1}}}D^\beta _{x}\varphi _4(x), \ \ x \in [{x_{m + 1}},{x_{m + 2}})\\ {^{\mathrm{RL}}_{x_{m-2}}}D^\beta _{x_{m-1}}\varphi _1(x)\\ \quad +{^{\mathrm{RL}}_{x_{m-1}}}D^\beta _{x_m}\varphi _2(x)\\ \quad +{^{\mathrm{RL}}_{x_m}}D^\beta _{x_{m+1}}\varphi _3(x)\\ \quad +{^{\mathrm{RL}}_{x_{m+1}}}D^\beta _{x_{m+2}}\varphi _4(x), \ \, x \in [{x_{m + 2}},{x_M}] \end{array} \right. \end{aligned}$$

with \(2\le m\le M-2\). The compact supports of \(B_{M-1}(x)\), \(B_{M}(x)\), and \(B_{M+1}(x)\) partially lie outside \([x_0,x_M]\), as do those of \(B_{-1}(x)\), \(B_{0}(x)\), and \(B_{1}(x)\); nevertheless, \(B_{M-1}(x)\), \(B_{M}(x)\), and \(B_{M+1}(x)\) can be regarded as special cases of the foregoing argument and are therefore omitted here. Further, we have

$$\begin{aligned} {^{\mathrm{RL}}_{x_0}}D^\beta _xB_{-1}(x)=\left\{ \begin{array}{l} {^{\mathrm{RL}}_{x_0}}D^\beta _x\varphi _4(x), \quad \ x \in [x_0,x_1)\\ {^{\mathrm{RL}}_{x_0}}D^\beta _{x_1}\varphi _4(x), \quad \ x \in [x_1,{x_M}] \end{array} \right. \end{aligned}$$
$$\begin{aligned} {^{\mathrm{RL}}_{x_0}}D^\beta _xB_0(x)=\left\{ \begin{array}{l} {^{\mathrm{RL}}_{x_0}}D^\beta _x\varphi _3(x), \quad x \in [x_0,x_1)\\ {^{\mathrm{RL}}_{x_0}}D^\beta _{x_1}\varphi _3(x) \\ \quad +{^{\mathrm{RL}}_{x_1}}D^\beta _x\varphi _4(x), \ \ \ x \in [x_1,x_2)\\ {^{\mathrm{RL}}_{x_0}}D^\beta _{x_1}\varphi _3(x) \\ \quad +{^{\mathrm{RL}}_{x_1}}D^\beta _{x_2}\varphi _4(x), \ \ x \in [x_2,{x_M}] \end{array} \right. \end{aligned}$$
$$\begin{aligned} {^{\mathrm{RL}}_{x_0}}D^\beta _xB_1(x)=\left\{ \begin{array}{l} {^{\mathrm{RL}}_{x_0}}D^\beta _x\varphi _2(x), \quad x \in [x_0,x_1)\\ {^{\mathrm{RL}}_{x_0}}D^\beta _{x_1}\varphi _2(x) \\ \quad +{^{\mathrm{RL}}_{x_1}}D^\beta _x\varphi _3(x), \ \ \ x \in [x_1,x_2)\\ {^{\mathrm{RL}}_{x_0}}D^\beta _{x_1}\varphi _2(x) \\ \quad +{^{\mathrm{RL}}_{x_1}}D^\beta _{x_2}\varphi _3(x)\\ \quad +{^{\mathrm{RL}}_{x_2}}D^\beta _{x}\varphi _4(x), \ \ x \in [x_2,x_3)\\ {^{\mathrm{RL}}_{x_0}}D^\beta _{x_1}\varphi _2(x) \\ \quad +{^{\mathrm{RL}}_{x_1}}D^\beta _{x_2}\varphi _3(x)\\ \quad +{^{\mathrm{RL}}_{x_2}}D^\beta _{x_3}\varphi _4(x), \ \ x \in [x_3,{x_M}] \end{array} \right. \end{aligned}$$

On the other hand, the integrands in the fractional-derivative integrals, \(\varphi _i(x)\), \(i=1,2,3,4\), are cubic polynomials, whose degree shrinks by one each time integration by parts is applied. Being aware of this, we can eliminate the weakly singular integrals by repeating integration by parts four times for each \(\varphi _i(x)\) to derive fully explicit formulas. The derivation processes are lengthy and tedious; we therefore collect the specific expressions of \({^{\mathrm{RL}}_{x_0}}D^\beta _xB_m(x)\), \(-1\le m \le M+1\), in the “Appendix”.
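The elementary identity to which the repeated integration by parts reduces everything is the Riemann–Liouville derivative of a shifted monomial. A minimal sketch (the function name is ours):

```python
from math import gamma

# RL derivative of (x - a)^k of order beta, the closed form obtained once
# integration by parts has removed the weakly singular kernel:
#   _a D^beta_x (x-a)^k = Gamma(k+1)/Gamma(k+1-beta) * (x-a)^(k-beta)
def rl_monomial(k, beta, a, x):
    return gamma(k + 1) / gamma(k + 1 - beta) * (x - a) ** (k - beta)

# A cubic piece phi_i is a combination of such monomials, so its RL
# derivative follows by linearity. Consistency check at integer order:
# beta = 2 must reproduce d^2/dx^2 (x-a)^3 = 6 (x-a).
assert abs(rl_monomial(3, 2.0, 0.0, 0.5) - 6 * 0.5) < 1e-12
```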

5.2 Construction of cubic B-spline DQ method

Use the earlier notations for brevity. On using the DQ approximations (32)–(33) to handle the fractional derivatives, Eq. (7) is transformed into a set of first-order ODEs

$$\begin{aligned} \begin{aligned}&\frac{\partial u(x_i,y_j,t)}{\partial t}-\varepsilon _x\sum \limits _{m=0}^{M_x}{a_{im}^{(\beta _1)}u(x_m,y_j,t)}\\&\qquad \ -\varepsilon _y\sum \limits _{m=0}^{M_y}{b_{jm}^{(\beta _2)}u(x_i,y_m,t)}=f(x_i,y_j,t), \end{aligned} \end{aligned}$$
(36)

with \(i=0,1,\ldots ,M_x\), \(j=0,1,\ldots ,M_y\). Imposing the boundary constraint (9) on Eq. (36) and applying the Crank–Nicolson scheme in time, we thus obtain the following spline-based DQ scheme

$$\begin{aligned} \left\{ \begin{aligned}&U_{ij}^n-\frac{\tau \varepsilon _x}{2}\sum _{m=1}^{M_x-1}a_{im}^{(\beta _1)}U_{mj}^n-\frac{\tau \varepsilon _y}{2}\sum _{m=1}^{M_y-1}b_{jm}^{(\beta _2)}U_{im}^n\\&=U_{ij}^{n-1}+\frac{\tau \varepsilon _x}{2}\sum _{m=1}^{M_x-1}a_{im}^{(\beta _1)}U_{mj}^{n-1} \\&\qquad \quad +\frac{\tau \varepsilon _y}{2}\sum _{m=1}^{M_y-1}b_{jm}^{(\beta _2)}U_{im}^{n-1}+\tau f^{n-1/2}_{ij}, \end{aligned}\right. \end{aligned}$$
(37)

where \(i=1,2,\ldots ,M_x-1\), \(j=1,2,\ldots ,M_y-1\). It is evident that DQ methods are truly meshless and convenient to implement. Owing to their insensitivity to dimensional changes, (37) can easily be generalized to higher-dimensional space-fractional problems without causing a rapid increase in the computational burden.
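For concreteness, a 1D analogue of the Crank–Nicolson step in (37) can be sketched as below; the tridiagonal finite-difference matrix stands in for the DQ weighting matrix (an assumption for self-containedness), but the time-stepping structure is the same:

```python
import numpy as np

# One Crank-Nicolson step: (I - tau/2 L) U^n = (I + tau/2 L) U^(n-1) + tau f.
# L stands in for the DQ spatial operator of scheme (37).
def cn_step(U, L, tau, f_half):
    n = len(U)
    lhs = np.eye(n) - 0.5 * tau * L
    return np.linalg.solve(lhs, (np.eye(n) + 0.5 * tau * L) @ U + tau * f_half)

# Check against the heat equation u_t = u_xx, u = exp(-pi^2 t) sin(pi x).
M, tau = 40, 1e-3
x = np.linspace(0, 1, M + 1)[1:-1]           # interior nodes (Dirichlet)
h = 1.0 / M
L = (np.diag(np.ones(M - 2), 1) - 2 * np.eye(M - 1)
     + np.diag(np.ones(M - 2), -1)) / h**2
U = cn_step(np.sin(np.pi * x), L, tau, np.zeros_like(x))
err = np.max(np.abs(U - np.exp(-np.pi**2 * tau) * np.sin(np.pi * x)))
assert err < 1e-4
```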

6 Illustrative examples

In this section, a number of numerical examples are carried out to gauge the practical performance of MCTB-DQM and the new MCB-DQM. To check their accuracy, we compute the errors by using the norms

$$\begin{aligned}&e_\infty (M)\cong \max _{i}\Big |u^n_i-U^n_i\Big |,\\&e_2(M)\cong \sqrt{\frac{1}{M}\sum ^{M-1}_{i=1}\Big |u^n_i-U^n_i\Big |^2},\\&e_N(M)\cong \sqrt{\sum ^{M-1}_{i=1}\Big |u^n_{i}-U^n_{i}\Big |^2\bigg /\sum ^{M-1}_{i=1}\Big |U_{i}^0\Big |^2},\\&e_\infty (M_x,M_y)\cong \max _{i,j}\Big |u^n_{ij}-U^n_{ij}\Big |,\\&e_2(M_x,M_y)\cong \sqrt{\frac{1}{M_xM_y}\sum ^{M_x-1}_{i=1}\sum ^{M_y-1}_{j=1}\Big |u^n_{ij}-U^n_{ij}\Big |^2}, \end{aligned}$$

where \(e_N(M)\) is termed the normalized \(L_2\)-norm. As for \(\{\omega ^\alpha _k\}_{k=0}^{n}\) in the schemes (23)–(24), we use (12) in the first and fifth examples and (13) in the others, except the last two. In the computation, our algorithms are implemented on the MATLAB platform on a Lenovo PC with an Intel(R) Pentium(R) G2030 3.00 GHz CPU and 4 GB RAM, except for the fourth example. The obtained results are compared with earlier works available in the open literature.
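For reference, the 1D norms above translate directly into code; a straightforward sketch of the definitions:

```python
import numpy as np

# Discrete error norms used in the tables: u = exact values, U = numerical
# values on the M+1 grid points, U0 = numerical initial data.
def e_inf(u, U):
    return np.max(np.abs(u - U))

def e_2(u, U):
    M = len(u) - 1
    return np.sqrt(np.sum(np.abs(u[1:M] - U[1:M])**2) / M)

def e_N(u, U, U0):                 # normalized L2-norm
    return np.sqrt(np.sum(np.abs(u[1:-1] - U[1:-1])**2)
                   / np.sum(np.abs(U0[1:-1])**2))

# Toy data on 4 grid points (M = 3):
u = np.array([0.0, 1.0, 2.0, 0.0])
U = np.array([0.0, 1.0, 1.0, 0.0])
assert e_inf(u, U) == 1.0
assert abs(e_2(u, U) - np.sqrt(1.0 / 3.0)) < 1e-15
```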

Example 6.1

Let \(\kappa =1\), \(\varepsilon =2\); Eqs. (1)–(3) with \(\psi (x)=\exp (x)\), \(g_1(t)=E_\alpha (t^\alpha )\), \(g_2(t)=\exp (1)E_\alpha (t^\alpha )\) and homogeneous forcing term are considered on [0, 1], where \(E_\alpha (t^\alpha )\) is the Mittag–Leffler function

$$\begin{aligned} E_\alpha (z)=\sum _{k=0}^{\infty }\frac{z^k}{\Gamma (\alpha k+1)}, \quad 0<\alpha <1. \end{aligned}$$

It is verified that its solution is \(u(x,t)=\exp (x)E_\alpha (t^\alpha )\). To show the convergence of MCTB-DQM, we fix \(\tau =1.0\times 10^{-5}\) so that the temporal errors are negligible compared to the spatial errors. The numerical results at \(t=0.1\) for various \(\alpha \) are displayed in Table 1; the convergence rate is abbreviated as “Cov. rate”. As one sees, our method is stable and convergent with nearly second-order spatial accuracy for this problem.
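The Mittag–Leffler values needed for the exact solution can be obtained by truncating the series; a minimal sketch (the term count is our choice, adequate for the small arguments of this example):

```python
from math import exp, gamma

# Truncated Mittag-Leffler series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1).
def mittag_leffler(alpha, z, terms=60):
    return sum(z**k / gamma(alpha * k + 1) for k in range(terms))

# E_1(z) = exp(z) gives a quick consistency check:
assert abs(mittag_leffler(1.0, 0.5) - exp(0.5)) < 1e-12

# Boundary value g_1(t) = E_alpha(t^alpha) at t = 0.1, alpha = 0.5:
g1 = mittag_leffler(0.5, 0.1**0.5)
```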

Example 6.2

In this test, we solve a diffusion equation on [0, 1] with \(\varepsilon =1\), \(\psi (x)=4x(1-x)\), and homogeneous boundary conditions and zero right-hand side. Its true solution has the form

$$\begin{aligned} u(x,t)=\frac{16}{\pi ^3}\sum ^\infty _{k=1}\frac{1}{k^3}E_\alpha (-k^2\pi ^2t^\alpha )(1-(-1)^k)\sin (k\pi x). \end{aligned}$$

To compare with the numerical results given by FDS-D I, FDS-D II [53] and the semi-discrete FEM [18], we choose the same time stepsize \(\tau =1.0\times 10^{-4}\). Letting \(\alpha =0.1\), 0.5 and 0.95, the corresponding results of these four methods at \(t=1\) are tabulated side by side in Table 2, from which we conclude that MCTB-DQM is accurate and produces errors as small as those of the other three methods as the grid number M increases.

Table 1 The numerical results at \(t=0.1\) with \(\tau =1.0\times 10^{-5}\) for Example 6.1
Table 2 A comparison of \(e_N(M)\) at \(t=1\) with \(\tau =1.0\times 10^{-4}\) for Example 6.2
Fig. 3

The approximate solution and error distribution at \(t=1\) with \(\alpha =0.8\) for Example 6.3

Example 6.3

Let \(\kappa =0\), \(\varepsilon =1\); we solve Eqs. (1)–(3) with homogeneous initial and boundary values, and

$$\begin{aligned} f(x,t)=\frac{2t^{2-\alpha }\sin (2\pi x)}{\Gamma (3-\alpha )}+4\pi ^2t^2\sin (2\pi x), \end{aligned}$$

on [0, 1]. The true solution is \(u(x,t)=t^2\sin (2\pi x)\). The algorithm is first run with \(\alpha =0.8\), \(\tau =2.0\times 10^{-2}\) and \(M=50\). In Fig. 3, we plot the approximate solution and the pointwise error distribution at \(t=1\), where good accuracy is observed. In Table 3, we then report a comparison of \(e_2(M)\), \(e_\infty (M)\) at \(t=1\) between MCTB-DQM and CBCM [38] when \(\alpha =0.3\). Here, MCTB-DQM uses \(\tau =5.0\times 10^{-3}\), while CBCM chooses \(\tau =1.25\times 10^{-3}\). As expected, our approach generates approximate solutions with better accuracy than those obtained by CBCM.

Table 3 A comparison of \(e_2(M)\), \(e_\infty (M)\) at \(t=1\) with \(\alpha =0.3\) for Example 6.3
Fig. 4

The approximate solution and error distribution at \(t=0.5\) with \(\alpha =0.5\) for Example 6.4

Example 6.4

We consider a 2D diffusion equation on \([-1,1]\times [-1,1]\) with \(\varepsilon _x=\varepsilon _y=1\), which was used by Zhai and Feng as a test of a block-centered finite difference method (BCFDM) on nonuniform grids [55]. The forcing function is specified to enforce

$$\begin{aligned} u(x,y,t)=(1+t^2)\tanh (20x)\tanh (20y). \end{aligned}$$

Under \(\tau =1.0\times 10^{-2}\), \(M_x=M_y=60\) and \(\alpha =0.5\), we first plot the approximate solution and the pointwise error distribution at \(t=0.5\) in Fig. 4. Then, we compare MCTB-DQM and BCFDM in terms of \(e_\infty (M_x,M_y)\) at \(t=0.5\) in Table 4. It is obvious that MCTB-DQM produces significantly smaller errors than BCFDM as the grid number increases, despite the smaller time stepsize \(\tau =2.5\times 10^{-3}\) and the nonuniform grids BCFDM adopts; moreover, MCTB-DQM provides a better-than-quadratic convergence rate for this problem.

Table 4 A comparison of \(e_\infty (M_x,M_y)\) at \(t=0.5\) with \(\alpha =0.5\) for Example 6.4

Example 6.5

In this test, we simulate soliton propagation and collision governed by the following time-fractional nonlinear Schrödinger equation (NLS):

$$\begin{aligned} \text {i}\frac{\partial ^\alpha u}{\partial t^\alpha }+\frac{\partial ^2 u}{\partial x^2} + \beta |u|^2u=0, \ \ x\in (-\infty ,+\infty ), \end{aligned}$$

with \(\text {i}=\sqrt{-1}\) and \(\beta \) being a real constant, subjected to initial values of two types:

  1. (i)

    mobile soliton

    $$\begin{aligned} \psi (x)=\text {sech}(x)\exp (2\text {i}x); \end{aligned}$$
    (38)
  2. (ii)

    double solitons collision

    $$\begin{aligned} \psi (x)=\sum _{j=1}^2\text {sech}(x-x_j)\exp (\text {i}p(x-x_j)). \end{aligned}$$
    (39)

When \(\alpha =1\) and \(\beta =2\), the NLS with Eq. (38) has the soliton solution \(u(x,t)=\text {sech}(x-4t)\exp (\text {i}(2x-3t))\). As the solutions generally decay to zero as \(|x|\rightarrow \infty \), we truncate the system to a bounded interval \(\Omega =[a,b]\) with \(a\ll 0\) and \(b\gg 0\), and enforce periodic or homogeneous Dirichlet boundary conditions. Letting \(u(x,t)=U(x,t)+\text {i}V(x,t)\), the original equation can be recast as the coupled system

$$\begin{aligned}&\frac{\partial ^\alpha U}{\partial t^\alpha }+\frac{\partial ^2 V}{\partial x^2}+\beta (U^2+V^2)V=0, \\&\frac{\partial ^\alpha V}{\partial t^\alpha }-\frac{\partial ^2 U}{\partial x^2}-\beta (U^2+V^2)U=0. \end{aligned}$$
Table 5 The numerical results in terms of \(e_2(M)\) at \(t=0.1\) for Example 6.5
Fig. 5

The single soliton propagation for \(\alpha =0.98\), 1.0 with \(\tau =2.0\times 10^{-3}\) and \(M=200\)

After applying the scheme (23), however, a nonlinear system has to be solved at each time step. In this case, Newton’s iteration is utilized and terminated upon reaching a solution within the tolerance \(1.0\times 10^{-12}\) when \(\alpha =1\), for which the Jacobian matrix is

$$\begin{aligned} \mathbf J =\left( \begin{array}{cc} 2UV&{}U^2+3V^2\\ -3U^2-V^2&{}-2UV \end{array} \right) . \end{aligned}$$
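The quoted Jacobian can be cross-checked against a finite-difference approximation; a small sketch for scalar arguments, taking \(\beta =1\) (the quoted matrix matches the nonlinear terms up to the overall factor \(\beta \)):

```python
import numpy as np

# Nonlinear coupling of the real/imaginary system with beta = 1:
#   N(U, V) = ((U^2 + V^2) V, -(U^2 + V^2) U)
def N(U, V):
    return np.array([(U**2 + V**2) * V, -(U**2 + V**2) * U])

def J(U, V):                       # the Jacobian quoted above
    return np.array([[2*U*V,          U**2 + 3*V**2],
                     [-3*U**2 - V**2, -2*U*V       ]])

# Central-difference columns dN/dU and dN/dV agree with J entrywise.
U0, V0, h = 0.7, -0.3, 1e-6
fd = np.column_stack([(N(U0 + h, V0) - N(U0 - h, V0)) / (2*h),
                      (N(U0, V0 + h) - N(U0, V0 - h)) / (2*h)])
assert np.allclose(fd, J(U0, V0), atol=1e-8)
```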

When \(\alpha \ne 1\), because the analytic solutions remain unknown and Newton’s procedure relies heavily on its initial values, we instead employ the trust-region-dogleg algorithm built into MATLAB to improve the convergence of the iteration. First, taking \(\tau =2.0\times 10^{-3}\), \(M=100\), \(\beta =2\), and \(\Omega =[-10,10]\), the mean square errors at \(t=0.1\) with the initial condition (38) for various \(\alpha \) are reported in Table 5, where the solutions computed by using the coefficients (13) on a very fine time–space lattice, i.e., \(\tau =2.5\times 10^{-4}\), \(M=400\), are adopted as reference solutions (\(\alpha \ne 1\)). As seen from Table 5, our methods are convergent and applicable to nonlinear coupled problems; besides, the scheme (26) is clearly more efficient than (23), since an extra Newton outer loop is avoided. Then, retaking \(M=200\) and \(\Omega =[-20,20]\), we display the evolution of the amplitude of the mobile soliton created by (23) for \(\alpha =0.98\) and 1.0 in Fig. 5. Using the same discrete parameters, we consider the double-soliton collision for \(\alpha =0.96\) and 1.0 with \(x_1=-6\), \(x_2=6\), and \(p=\pm 2\) in Fig. 6. These figures show that the width and height of the solitons are significantly changed by the fractional derivative. In particular, when \(\alpha =1\), the double solitons collide without any reflection, transmission, trapping, or creation of new solitary waves, which indicates that the collision is elastic, while in the fractional cases, the shapes of the solitons may not be retained after they intersect each other.

Fig. 6

The interaction of double solitons for \(\alpha =0.96\), 1.0 with \(\tau =2.0\times 10^{-3}\) and \(M=200\)

Fig. 7

The true solution at \(t=1.25\) and spatial lattice points for Example 6.6

Fig. 8

The contour plots of Gaussian pulse at \(t=0\), 0.25, 0.75, 1.25 with \(\tau =5.0\times 10^{-3}\) and \(M_x=M_y=50\)

Example 6.6

In this test, we simulate the unsteady propagation of a Gaussian pulse governed by a classical 2D advection-dominated diffusion equation on the square domain \([0,2]\times [0,2]\), which has been extensively studied [19, 27, 31, 45], by using the scheme (26). The Gaussian pulse solution is expressed as

$$\begin{aligned}&u(x,y,t)=\frac{1}{1+4t}\\&\quad \times \exp \Bigg (-\frac{(x-\kappa _xt-0.5)^2}{\varepsilon _x(1+4t)} -\frac{(y-\kappa _yt-0.5)^2}{\varepsilon _y(1+4t)}\Bigg ), \end{aligned}$$

and the initial Gaussian pulse and boundary values are taken from the pulse solution. Letting \(\kappa _x=\kappa _y=0.8\), \(\varepsilon _x=\varepsilon _y=0.01\), we display its true solution at \(t=1.25\) with \(M_x=M_y=50\) and the lattice points used on the problem domain in Fig. 7, which describe a pulse centered at (1.5, 1.5) with a height of 1/6. Using the same grid number together with \(\tau =5.0\times 10^{-3}\), we present the contour plots of the approximate solutions at \(t=0\), 0.25, 0.75, 1.25 created by MCTB-DQM in Fig. 8. As the graphs show, the pulse is initially centered at (0.5, 0.5) with a height of 1 and then moves toward the position centered at (1.5, 1.5); during this process, its width and height vary continuously as time goes on. Besides, the last contour plot in Fig. 8 coincides with the true solution plotted in Fig. 7. Retaking \(\tau =6.25\times 10^{-3}\) and \(M_x=M_y=80\), we compare our results with those obtained by previous algorithms such as the nine-point high-order compact (HOC) schemes [19, 31], the Peaceman–Rachford ADI scheme (PR-ADI) [32], the HOC-ADI scheme [20], the exponential HOC-ADI scheme (EHOC-ADI) [46], the HOC boundary value method (HOC-BVM) [7], the compact integrated RBF ADI method (CIRBF-ADI) [45], the coupled compact integrated RBF ADI method (CCIRBF-ADI) [47], and the Galerkin FEM combined with the method of characteristics (CGFEM) [8], at \(t=1.25\) in Table 6. We implement CGFEM on a quasi-uniform triangular mesh with meshsize \(2.5\times 10^{-2}\) by using both Lagrangian P1 and P2 elements. Average absolute errors are also added as supplements to evaluate and compare accuracy. As seen from Table 6, all of these methods except the PR-ADI scheme capture the Gaussian pulse very accurately; moreover, our method reaches better accuracy than the others and even shows promise in treating advection–diffusion equations in the high Péclet number regime.

Table 6 A comparison of global errors at \(t=1.25\) with \(\tau =6.25\times 10^{-3}\) and \(M_x=M_y=80\) for Example 6.6
Table 7 A comparison of global errors at \(t=0.2\) with \(\tau =2.5\times 10^{-4}\), \(\beta _1=1.1\), and \(\beta _2=1.3\) for Example 6.7

Example 6.7

In the last test, we consider the 2D space-fractional Eqs. (7)–(9) on \([0,1]\times [0,1]\) with \(\varepsilon _x=\varepsilon _y=1\), \(\psi (x,y)=x^2(1-x)^2y^2(1-y)^2\), and homogeneous boundary values. The source term is manufactured as

$$\begin{aligned}&f(x,y,t)=-e^{-t}x^2(1-x)^2y^2(1-y)^2\\&\quad \ -\frac{2e^{-t}x^{2-\beta _1}y^2(1-y)^2}{\Gamma (3-\beta _1)}\Bigg (1 -\frac{6x}{3-\beta _1}+\frac{12x^2}{(3-\beta _1)(4-\beta _1)}\Bigg )\\&\quad \ -\frac{2e^{-t}x^2(1-x)^2 y^{2-\beta _2}}{\Gamma (3-\beta _2)}\Bigg (1 -\frac{6y}{3-\beta _2}+\frac{12y^2}{(3-\beta _2)(4-\beta _2)}\Bigg ) \end{aligned}$$

to enforce the analytic solution \(u(x,y,t)=e^{-t}x^2(1-x)^2y^2(1-y)^2\). Letting \(\beta _1=1.1\), \(\beta _2=1.3\), and \(\tau =2.5\times 10^{-4}\), we solve the problem via the FEM proposed in [52] and MCB-DQM, and compare their numerical results at \(t=0.2\) in Table 7, where the P1 element and structured meshes are adopted. The data indicate that the DQ method converges toward the analytic solution as the grid numbers increase and admits slightly better results than the FEM. More importantly, the CPU times of MCB-DQM are far less than those of the FEM, which confirms its high computing efficiency.

7 Conclusion

The ADEs are subjects of active interest in mathematical physics and related areas of research. In this work, we have proposed an effective DQ method for such equations involving derivatives of fractional order in time and space. Its weighted coefficients are calculated by using modified CTBs and cubic B-splines as test functions. The stability of the DQ method for the time-fractional ADEs is analyzed in the \(L_2\)-norm, and the theoretical condition required for the stability analysis is numerically surveyed at length. We test the codes on several benchmark problems, and the outcomes demonstrate that the method outperforms some previously reported algorithms, such as BCFDM and FEM, in terms of overall accuracy and efficiency.

In a linear space spanned by a set of proper basis functions such as B-splines, any function can be represented as a weighted combination of the basis functions. Although the basis functions are all known, the target function remains unknown because its expansion coefficients are unknown. However, when all the basis functions satisfy Eqs. (34)–(35), by virtue of linearity, the target function satisfies Eqs. (34)–(35) as well. This is the essence of DQ methods, which guarantees their convergence.

Although the error bounds are difficult to determine, the numerical results illustrate that the spline-based DQ method yields convergent results for the fractional ADEs. The presented approach can be generalized to higher-dimensional and other complex model problems arising in materials science, structural and fluid mechanics, heat conduction, biomedicine, differential dynamics, and so forth. High computing efficiency, low memory requirements, and ease of programming are its main advantages.