1 Introduction

Fractional calculus has been applied to numerous fields during the last decades, such as fluid mechanics, physics, chemistry, epidemiology, and finance [3, 15, 23, 24, 28], to characterize memory effects and/or nonlocality. Fractional models are more suitable than integer-order models for systems with memory and long-range interactions. The anomalous diffusion equation is an important class of fractional differential equations, which has been widely applied in the modeling of random walks, the unification of diffusion and wave propagation, etc. [1, 19]. In some scenarios, anomalous diffusion can be described by the time-space fractional diffusion equation.

In this paper, we first consider a numerical method for the following one-dimensional time-space fractional diffusion equation:

$$\begin{aligned} \left\{ \begin{array}{ll} {}_{\text{CH}}\textrm{D}^{\alpha }_{{\tilde{a}},t}u(x,t) -{}_{\text{RZ}}\textrm{D}^{\beta }_{x}u(x,t)=f(x,t), &{} (x,t)\in \varOmega \times ({\tilde{a}},T],\\ u(a,t) = u(b,t) = 0, &{} t\in ({\tilde{a}},T],\\ u(x,{\tilde{a}}) = u_{0}(x), &{} x\in \varOmega \end{array} \right. \end{aligned}$$
(1)

with \(\alpha \in (0,1)\), \(\beta \in (1,2)\), \({\tilde{a}}>0\), \(\varOmega =[a,b]\subset {\mathbb {R}}\) being a bounded interval, and \(u_{0}(x)\) being a given initial condition. Here \({}_{\text{CH}}\textrm{D}^{\alpha }_{{\tilde{a}},t}\) denotes the Caputo-Hadamard differentiation operator defined by

$$\begin{aligned} {}_{\text{CH}}{\textrm{D}}_{{\tilde{a}},t}^\alpha \varphi (t) =\frac{1}{\Gamma (n- \alpha )} \int ^{t}_{{\tilde{a}}}\left( \log \frac{t}{s}\right) ^{n- \alpha -1}\delta ^{n}\varphi (s)\frac{{\textrm{d}}s}{s},\quad 0<{\tilde{a}}<t, \end{aligned}$$

where

$$\begin{aligned} \delta ^{n}\varphi (s) = \left( s\frac{\textrm{d}}{\textrm{d} s}\right) ^{n}\varphi (s), \ n-1< \alpha < n \in {\mathbb {Z}}^+. \end{aligned}$$

A sufficient condition for the existence of the Caputo-Hadamard derivative \({}_{\text{CH}}\textrm{D}_{{\tilde{a}},t}^\alpha \varphi (t)\) is that \(\varphi (t)\in AC^{n}_{\delta }[{\tilde{a}},T]=\left\{ \varphi\!\!: \delta ^{n-1}\varphi \in AC[{\tilde{a}},T] \right\}\), where \(AC(\varOmega )\) denotes the space of absolutely continuous functions. The spatial derivative is the Riesz derivative of order \(\beta \ (1< \beta <2)\) given by

$$\begin{aligned} _{\text{RZ}}\textrm{D}_{x}^\beta \psi (x) = -\frac{1}{2\cos (\uppi \beta /2)} \left[ {}_{\text{RL}}\textrm{D}_{a,x}^\beta \psi (x)+ {}_{\text{RL}}\textrm{D}_{x,b}^\beta \psi (x)\right] , \end{aligned}$$

where

$$\begin{aligned} _{\text{RL}}\textrm{D}_{a,x}^\beta \psi (x) = \frac{1}{\Gamma (m - \beta )} \frac{\textrm{d}^{m}}{\textrm{d} x^{m}}\int _{a}^{x}(x-s)^{m-1-\beta }\psi (s)\textrm{d}s, \quad m-1<\beta <m\in {\mathbb {Z}}^{+}, \end{aligned}$$

and

$$\begin{aligned} _{\text{RL}}\textrm{D}_{x,b}^\beta \psi (x) = \frac{(-1)^{m}}{\Gamma (m - \beta )} \frac{\textrm{d}^{m}}{\textrm{d} x^{m}}\int _{x}^{b}(s-x)^{m-1-\beta }\psi (s)\textrm{d}s, \quad m-1<\beta <m\in {\mathbb {Z}}^{+}, \end{aligned}$$

are the left-sided and right-sided Riemann-Liouville fractional derivatives, respectively. A sufficient condition for the existence of \({}_{\text{RZ}}\textrm{D}_{x}^\beta \psi (x)\) is that \(\psi (x)\in AC^{\lceil \beta \rceil }[a,b]=\{ \psi\!\! : \psi ^{(k)}\in AC[a,b], k=0,1,\ldots ,\lfloor \beta \rfloor \}\).

We also consider the following two-dimensional time-space fractional diffusion equation:

$$\begin{aligned} \left\{ \begin{array}{ll} {}_{\text{CH}}\textrm{D}^{\alpha }_{{\tilde{a}},t}u(x,y,t) +\left( -\Delta \right) ^{\frac{\beta }{2}}u(x,y,t)=f(x,y,t), &{} (x,y)\in {\widetilde{\varOmega }},t \in ({\tilde{a}},T],\\ u(x,y,t) = 0, &{} (x,y)\in {\mathbb {R}}^2\backslash {\widetilde{\varOmega }}, t \in ({\tilde{a}},T],\\ u(x,y,{\tilde{a}}) = u_{0}(x,y), &{} (x,y)\in {\widetilde{\varOmega }} \end{array} \right. \end{aligned}$$
(2)

with \(\alpha \in (0,1)\), \(\beta \in (1,2)\), \({\tilde{a}}>0\), \({\widetilde{\varOmega }}=(-L,L)^2\subset {\mathbb {R}}^2\) being a bounded domain, and \(u_{0}(x,y)\) being a given initial condition. Here \(\left( -\Delta \right) ^{\frac{\beta }{2}}\) is the fractional Laplacian defined by the hyper-singular integral [16],

$$\begin{aligned} \left( -\Delta \right) ^{\frac{\beta }{2}}\psi ({\textbf{x}}) = c_{\beta } \mathrm {P.V.} \int _{{\mathbb {R}}^2} \frac{\psi ({\textbf{x}})-\psi ({\textbf{z}})}{|{\textbf{x}}-{\textbf{z}}|^{2+\beta }}\textrm{d}{\textbf{z}}, \, c_{\beta }=\frac{2^\beta \Gamma (1+\beta /2)}{\uppi |\Gamma (-\beta /2)|}, \end{aligned}$$
(3)

where \({\textbf{x}} = (x,y)\in {\mathbb {R}}^{2}\) and \(\mathrm {P.V.}\) stands for the Cauchy principal value. One sufficient condition for the existence of the fractional Laplacian \(\left( -\Delta \right) ^{\frac{\beta }{2}}\psi ({\textbf{x}})\) is that \(\psi ({\textbf{x}})\) belongs to the following Schwartz space:

$$\begin{aligned} {\mathcal {S}}=\left\{ \psi \in {\text{C}}^{\infty }({\mathbb {R}}^{2})\!\!: \sup \limits _{{\textbf{x}}\in {\mathbb {R}}^{2}}(1+|{\textbf{x}}|)^{N}\sum \limits _{k=0}^{N}|{\text{D}}^{k}\psi ({\textbf{x}})|<\infty , N = 0,1,2,\cdots \right\} . \end{aligned}$$

There have been several studies on numerical methods for time-space fractional diffusion equations. Liu et al. [20] proposed a first-order implicit finite difference scheme to solve the fractional diffusion equation with the temporal Caputo derivative and the spatial Riemann-Liouville derivative on a bounded domain in one spatial dimension. Cao and Li [4] derived two finite difference schemes for two kinds of time-space fractional diffusion equations by approximating Riemann-Liouville fractional derivatives with second-order accuracy via the weighted and shifted Grünwald-Letnikov formula. Arshad et al. [2] constructed a numerical scheme for the time-space fractional diffusion equation with second-order accuracy in both time and space, where the temporal and spatial fractional derivatives are in the senses of Caputo and Riesz, respectively. A finite difference scheme was developed for the time-space fractional diffusion equation with Dirichlet fractional boundary conditions in the work of Xie and Fang [29], where the fractional derivatives include the temporal Caputo derivative and the spatial Riemann-Liouville derivative.

In view of the aforementioned studies, several numerical schemes have been proposed for the time-space fractional diffusion equation. However, the temporal derivative is mostly the Caputo derivative, which is adequate for characterizing algebraic decay. In this paper, the temporal derivative in the considered time-space fractional diffusion equations is in the Caputo-Hadamard sense, which is suitable for describing ultra-slow processes [5, 12, 17]. For the numerical approximation of the Caputo-Hadamard derivative, Gohar et al. [13] introduced the L1 formula for the temporal Caputo-Hadamard derivative to deduce a semi-discrete difference scheme, and gave the stability and convergence analysis. Fan et al. [11] proposed three numerical formulae for the Caputo-Hadamard derivative of order \(\alpha\) with \((3-\alpha)\)-order accuracy, including the L1-2 and L2-1\(\sigma\) formulae for \(\alpha \in (0, 1)\) and the H2N2 formula for \(\alpha \in (1, 2)\). Li et al. [18] proposed and numerically analyzed an LDG scheme for the Caputo-Hadamard fractional sub-diffusion equation. Ou et al. [22] investigated a numerical scheme for the Caputo-Hadamard fractional diffusion-wave equation on exponential-type meshes. In the latter two works, the spatial derivative is the classical Laplacian.

The Riesz derivative is a linear combination of the left and right Riemann-Liouville derivatives, which allows modeling flow-regime impacts from either side of the domain [30, 31]. Meanwhile, the fractional Laplacian is frequently adopted to account for long-range interactions in higher dimensions [27]. Therefore, in the present paper the spatial derivative is chosen as the Riesz derivative in one dimension and the fractional Laplacian in two dimensions. Based on numerical methods for approximating the Riemann-Liouville derivative, several approximations for evaluating the Riesz fractional derivative have been proposed, such as the spline interpolation method [25] and the standard Grünwald-Letnikov formula and its modifications [21, 26]. In particular, a series of high-order algorithms for Riesz derivatives were constructed by Ding et al. [6,7,8,9]. Since a linear combination of shifted Grünwald-Letnikov formulae with different displacements and appropriate weights can evaluate the Riemann-Liouville derivative with higher-order accuracy, and the resulting finite difference schemes for time-dependent problems are stable, we choose the weighted and shifted Grünwald-Letnikov formula for the Riesz derivative. For the fractional Laplacian, obtaining numerical approximations remains challenging and is an active research topic. A recent work by Hao et al. [14] proposed a fractional centered difference formula, which generates a symmetric block-Toeplitz matrix with Toeplitz blocks and thus enables fast and efficient algorithms based on the fast Fourier transform. This approximation was adopted for solving the two-dimensional nonlinear Schrödinger equation with the fractional Laplacian [27], where the temporal derivative is still of integer order.

Numerical methods for partial differential equations combining the temporal Caputo-Hadamard derivative with a spatial fractional derivative are rare. This situation and the potential applications of ultra-slow diffusion motivate us to study numerical algorithms for (1) and (2).

The remaining part of this paper is organized as follows. Numerical approximations adopted in this paper for evaluating the Caputo-Hadamard derivative, Riesz derivative, and fractional Laplacian are shown in Sect. 2, along with corresponding properties. Fully discrete schemes for the one-dimensional time-space fractional diffusion equation (1) and the two-dimensional equation (2) are derived in Sect. 3. Rigorous stability analysis and error estimates are discussed as well. Numerical simulations in Sect. 4 verify the feasibility of the proposed numerical schemes and the theoretical analysis.

2 Preliminaries

In this section, we introduce approximations of the Caputo-Hadamard derivative, Riesz derivative in one dimension, and integral fractional Laplacian in higher dimensions that are applied in constructing numerical schemes for (1) and (2). In the following discussion, the L1 formula for the Caputo-Hadamard derivative [13], the weighted and shifted Grünwald-Letnikov formula for the Riesz derivative [26], and the fractional centered difference formula for the fractional Laplacian [14] are adopted.

2.1 L1 Formula for Caputo-Hadamard Derivative

Let \(t_k = {\tilde{a}} + k\tau\) with \(k = 0,1,\cdots ,N\, (N\in {\mathbb {Z}}^{+})\), where \(\tau = (T-{\tilde{a}})/N\) is the time step. For \(\varphi (t)\in C^2[{\tilde{a}},T]\), its Caputo-Hadamard derivative of order \(\alpha \in (0,1)\) at \(t = t_k\) can be evaluated by the following L1 approximation [13]:

$$\begin{aligned} \begin{aligned} _{\text{CH}}{\textrm{D}}_{{\tilde{a}},t}^\alpha \varphi (t)|_{t=t_k} =\sum \limits _{i=1}^k c^{(\alpha )}_{i,k}\left[ \varphi (t_i)-\varphi (t_{i-1})\right] +R^k_{_{\text{CH}}}, \end{aligned} \end{aligned}$$
(4)

where

$$\begin{aligned} c^{(\alpha )}_{i,k}=\frac{1}{\Gamma (2-\alpha )}\frac{1}{\log \frac{t_i}{t_{i-1}}} \left[ \left( \log \frac{t_k}{t_{i-1}}\right) ^{1-\alpha } - \left( \log \frac{t_k}{t_{i}}\right) ^{1-\alpha }\right] , \end{aligned}$$
(5)

and

$$\begin{aligned} R^k_{_{\text{CH}}}=\frac{1}{\Gamma (1-\alpha )}\sum _{i=1}^k\int ^{t_i}_{t_{i-1}}\left( \log \frac{t_k}{s}\right) ^{-\alpha }\left( \delta \varphi (s)-\frac{\varphi (t_i)-\varphi (t_{i-1})}{\log \frac{t_i}{t_{i-1}}} \right) \frac{\textrm{d}s}{s}. \end{aligned}$$
(6)

Lemma 1

[13] For \(0< \alpha <1\), the coefficients \(c_{i,k}^{(\alpha )}\ \left( 1\leqslant i \leqslant k,\ 1\leqslant k \leqslant N\right)\) given by (5) satisfy

$$\begin{aligned} c^{(\alpha )}_{k,k}>c^{(\alpha )}_{k-1,k}>\cdots>c^{(\alpha )}_{i,k}>c^{(\alpha )}_{i-1,k}>\cdots>c^{(\alpha )}_{1,k}>0. \end{aligned}$$
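The coefficients (5) and the monotonicity of Lemma 1 are easy to check numerically. The following is a minimal Python sketch (the function name `l1_coeffs` is ours, not from [13]), assuming the uniform temporal mesh \(t_i = {\tilde{a}} + i\tau\) of Sect. 2.1:

```python
import math

def l1_coeffs(alpha, a_tilde, tau, k):
    # Coefficients c_{i,k} of Eq. (5) on the uniform mesh t_i = a_tilde + i*tau.
    t = [a_tilde + i * tau for i in range(k + 1)]
    c = []
    for i in range(1, k + 1):
        pre = 1.0 / (math.gamma(2.0 - alpha) * math.log(t[i] / t[i - 1]))
        c.append(pre * (math.log(t[k] / t[i - 1]) ** (1.0 - alpha)
                        - math.log(t[k] / t[i]) ** (1.0 - alpha)))
    return c  # c[i-1] holds c_{i,k}, i = 1, ..., k
```

For instance, with \(\alpha = 0.5\), \({\tilde{a}} = 1\), \(\tau = 0.1\), and \(k = 10\), the returned coefficients are positive and strictly increasing in \(i\), as Lemma 1 asserts.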

Remark 1

Let \(0< \alpha <1\) and the coefficients \(c^{(\alpha )}_{i,k}\ \left( 1\leqslant i \leqslant k,\ 1\leqslant k \leqslant N\right)\) be defined by (5). There holds \(\displaystyle {\frac{1}{c^{(\alpha )}_{1,k}} < \frac{\Gamma (1-\alpha )}{{\tilde{a}}^{\alpha }} \left( k\tau \right) ^{\alpha }}\).

Proof

According to the mean value theorem,

$$\begin{aligned} \begin{aligned} c^{(\alpha )}_{1,k}&= \frac{1}{\Gamma (2-\alpha )}\frac{1}{\log \frac{t_1}{t_{0}}} \left( \left( \log \frac{t_k}{t_{0}}\right) ^{1-\alpha } - \left( \log \frac{t_k}{t_{1}}\right) ^{1-\alpha }\right) \\&= \frac{1}{\Gamma (2-\alpha )}\frac{1}{\log \frac{t_1}{t_{0}}} (1-\alpha )\xi ^{-\alpha }\log \frac{t_1}{t_{0}}\\&= \frac{1}{\Gamma (1-\alpha )}\xi ^{-\alpha }, \, \xi \in \left( \log \frac{t_k}{t_{1}}, \log \frac{t_k}{t_{0}}\right) . \end{aligned} \end{aligned}$$

As \(t_0 = {\tilde{a}}\), we have

$$\begin{aligned} c^{(\alpha )}_{1,k}> \frac{1}{\Gamma (1-\alpha )}\left( \log \frac{t_k}{t_{0}}\right) ^{-\alpha } = \frac{\left( \log \left(1+\frac{k\tau }{{\tilde{a}}}\right)\right) ^{-\alpha }}{\Gamma (1-\alpha )} > \frac{\left( \frac{k\tau }{{\tilde{a}}}\right) ^{-\alpha }}{\Gamma (1-\alpha )}. \end{aligned}$$

In other words,

$$\begin{aligned} \frac{1}{c^{(\alpha )}_{1,k}} < \frac{\Gamma (1-\alpha )}{{\tilde{a}}^{\alpha }} \left( k\tau \right) ^{\alpha }. \end{aligned}$$

The proof is thus completed.

Lemma 2

[13] If \(0< \alpha <1\) and \(\varphi (t)\in C^2[{\tilde{a}},T]\), then the local truncation error \(R^k_{_{\text{CH}}}\ (1\leqslant k \leqslant N)\) in (6) has the following estimate:

$$\begin{aligned} \begin{aligned} \left|R^k_{_{\text{CH}}}\right|\leqslant&\left( \frac{1}{\Gamma (2-\alpha )}\left( \log \frac{t_k}{t_{k-1}}\right) ^2 +\frac{1}{\Gamma (1-\alpha )}\mathop {\max }_{1\leqslant l\leqslant N}\left( \log \frac{t_l}{t_{l-1}}\right) ^2\right) \\& \times \left( \log \frac{t_k}{t_{k-1}}\right) ^{-\alpha }\mathop {\max }_{{\tilde{a}}\leqslant t\leqslant t_k}\left|\delta ^2 \varphi (t)\right|. \end{aligned} \end{aligned}$$

Remark 2

[13] The local truncation error given by (6) is bounded in the following sense:

$$\begin{aligned} \left|R^k_{_{\text{CH}}}\right|\leqslant C \tau ^{2-\alpha } \end{aligned}$$
(7)

with \(C>0\) being a constant independent of the temporal stepsize \(\tau\).
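The order \(2-\alpha\) in (7) can be observed numerically. The sketch below (our code, not from [13]) uses the test function \(\varphi (t) = (\log (t/{\tilde{a}}))^2\), whose Caputo-Hadamard derivative is \(\frac{\Gamma (3)}{\Gamma (3-\alpha )}(\log (t/{\tilde{a}}))^{2-\alpha }\), on the uniform mesh of Sect. 2.1:

```python
import math

def ch_l1(alpha, a, T, N, phi):
    # L1 approximation (4) of the Caputo-Hadamard derivative at t = T, uniform mesh.
    tau = (T - a) / N
    t = [a + i * tau for i in range(N + 1)]
    val = 0.0
    for i in range(1, N + 1):
        c = (math.log(t[N] / t[i - 1]) ** (1.0 - alpha)
             - math.log(t[N] / t[i]) ** (1.0 - alpha)) \
            / (math.gamma(2.0 - alpha) * math.log(t[i] / t[i - 1]))
        val += c * (phi(t[i]) - phi(t[i - 1]))
    return val

alpha, a, T = 0.4, 1.0, 2.0
phi = lambda t: math.log(t / a) ** 2
exact = math.gamma(3.0) / math.gamma(3.0 - alpha) * math.log(T / a) ** (2.0 - alpha)
errs = [abs(ch_l1(alpha, a, T, N, phi) - exact) for N in (40, 80)]
order = math.log2(errs[0] / errs[1])  # observed order, expected close to 2 - alpha
```

Halving the stepsize reduces the error by a factor close to \(2^{2-\alpha }\), consistent with (7).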

2.2 Weighted and Shifted Grünwald-Letnikov Formula for Riesz Derivative

Let \(x_j = a + jh\), \(j = 0,1,\cdots ,M\), where \(h = (b-a)/M\) is the spatial stepsize. Define grid function spaces \({\mathcal {U}}_h = \{w\ |\ w = (w_0,w_1,\cdots ,w_M)\}\) and \({\mathcal {U}}_h^{\circ } = \{w\ |\ w\in {\mathcal {U}}_h,w_0 = w_M =0 \}\). For any grid functions \(w=\{w_{j}\}\) and \(v=\{v_{j}\}\) in \({\mathcal {U}}_h^{\circ }\), a discrete inner product and the associated norm are defined as

$$\begin{aligned} (w,v)_{h} = h \sum _{j=1}^{M-1}w_{j}v_{j}, \ \ \Vert w \Vert _{h}^2 = (w,w)_{h}. \end{aligned}$$

Let \(\psi (x)\), \(_{\text{RL}}\textrm{D}_{a,x}^\beta \psi (x)\), \(_{\text{RL}}\textrm{D}_{x,b}^\beta \psi (x)\), and their Fourier transforms belong to \(L^{1}({\mathbb {R}})\). The Riesz derivative of \(\psi (x)\) at \(x=x_j\,(1\leqslant j \leqslant M-1)\) can be approximated by the following weighted and shifted Grünwald-Letnikov formula [26]:

$$\begin{aligned} \begin{aligned} {}_{{\text{RZ}}}\textrm{D}_{x}^\beta \psi \left( x_{j}\right)&=-\frac{\Psi _{\beta }}{h^{\beta }} \bigg [v_{1}\sum _{k=0}^{j+l_{1}}g^{(\beta )}_{k}\psi \left( x_{j-k+l_{1}}\right) +v_{2}\sum _{k=0}^{j+l_{2}}g^{(\beta )}_{k}\psi \left( x_{j-k+l_{2}}\right) \\ &\quad\,+v_{1}\sum _{k=0}^{M-j+l_{1}}g^{(\beta )}_{k}\psi (x_{j+k-l_{1}}) +v_{2}\sum _{k=0}^{M-j+l_{2}}g^{(\beta )}_{k}\psi (x_{j+k-l_{2}})\bigg ] +{\mathcal {O}}(h^2), \end{aligned} \end{aligned}$$
(8)

where \(\Psi _{\beta } = \frac{1}{2\cos (\frac{\uppi \beta }{2})},\ g^{(\beta )}_{k}=(-1)^{k}\left( {\begin{array}{c}\beta \\ k\end{array}}\right)\), and \(v_{1}=\frac{\beta -2l_{2}}{2(l_{1}-l_{2})},\ v_{2}=\frac{2l_{1}-\beta }{2(l_{1}-l_{2})},\ l_{1}\ne l_{2}\). In particular, the coefficients \(g^{(\beta )}_{k}\) can be computed via the following recursive formula:

$$\begin{aligned} g^{(\beta )}_{0}=1,\ g^{(\beta )}_{k}=\Big(1-\frac{\beta +1}{k}\Big)g^{(\beta )}_{k-1}, \ k = 1,2,\cdots . \end{aligned}$$
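This recursion is straightforward to implement; a brief sketch (the function name is ours) that also exhibits the sign pattern of Lemma 3:

```python
def gl_weights(beta, n):
    # Gruenwald-Letnikov weights g_k^{(beta)}, k = 0, ..., n, via the recursion above.
    g = [1.0]
    for k in range(1, n + 1):
        g.append((1.0 - (beta + 1.0) / k) * g[-1])
    return g
```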

Lemma 3

[23, 26] For \(1< \beta <2\), the coefficients \(g^{(\beta )}_{k}\ \left( k \geqslant 0 \right)\) in (8) satisfy

$$\begin{aligned} \left\{ \begin{array}{ll} g^{(\beta )}_{0} = 1, g^{(\beta )}_{1} = -\beta ,\\ 1\geqslant g^{(\beta )}_{2}\geqslant g^{(\beta )}_{3}\geqslant \cdots \geqslant 0,\\ \sum \limits _{k=0}\limits ^{\infty }g^{(\beta )}_{k} = 0, \sum \limits _{k=0}\limits ^{m}g^{(\beta )}_{k} < 0, \ m \geqslant 1. \end{array} \right. \end{aligned}$$
(9)

In the following discussion, we choose \((l_1,l_2)=(1,0)\) in (8) and the resulting approximation reads

$$\begin{aligned} {}_{\text{RZ}}\textrm{D}_{x}^\beta \psi \left( x_{j}\right) =-\frac{\Psi _{\beta }}{h^{\beta }} \left( \sum _{k=0}^{j+1}w^{(\beta )}_{k}\psi \left( x_{j-k+1}\right) +\sum _{k=0}^{M-j+1}w^{(\beta )}_{k}\psi \left( x_{j+k-1}\right) \right) +{\mathcal {O}}(h^2). \end{aligned}$$
(10)

Here \(w^{(\beta )}_{0} = \frac{\beta }{2}g^{(\beta )}_{0}\) and \(w^{(\beta )}_{k} = \frac{\beta }{2}g^{(\beta )}_{k}+\frac{2-\beta }{2}g^{(\beta )}_{k-1}\) for \(k \geqslant 1\). In view of Lemma 3, the coefficients \(w^{(\beta )}_{k}\) in (10) satisfy

$$\begin{aligned} \left\{ \begin{array}{ll} w^{(\beta )}_{0} = \frac{\beta }{2}, w^{(\beta )}_{1} = 1-\frac{\beta + \beta ^2}{2}<0, w^{(\beta )}_{2} = \frac{\beta (\beta ^2 + \beta -4)}{4},\\ 1\geqslant w^{(\beta )}_{0} \geqslant w^{(\beta )}_{3}\geqslant \cdots \geqslant 0,\\ \sum \limits _{k=0}\limits ^{\infty }w^{(\beta )}_{k} = 0, \sum \limits _{k=0}\limits ^{m}w^{(\beta )}_{k} < 0 ,m \geqslant 2. \end{array} \right. \end{aligned}$$
(11)

As a matter of fact, (10) can be rewritten as

$$\begin{aligned} \begin{aligned} _{\text{RZ}}\textrm{D}_{x}^\beta \psi \left( x_{j}\right)&=-\frac{1}{h^{\beta }} \left( \sum _{k=0}^{j}r^{(\beta )}_{j-k}\psi \left( x_{k}\right) +\sum _{k=j+1}^{M}r^{(\beta )}_{k-j}\psi \left( x_{k}\right) \right) +{\mathcal {O}}(h^2)\\&=-\frac{1}{h^{\beta }}\sum _{k=0}^{M}r^{(\beta )}_{j-k}\psi \left( x_{k}\right) +{\mathcal {O}}(h^2), \end{aligned} \end{aligned}$$
(12)

where

$$\begin{aligned} \left\{ \begin{array}{ll} r^{(\beta )}_{0} = 2\Psi _{\beta }w^{(\beta )}_{1}=\frac{1}{\cos (\frac{\uppi \beta }{2})} \left( 1-\frac{\beta + \beta ^2}{2}\right) >0,\\ r^{(\beta )}_{1} = \Psi _{\beta }(w^{(\beta )}_{0}+w^{(\beta )}_{2})=\frac{1}{2\cos (\frac{\uppi \beta }{2})}\frac{\beta (\beta ^2 + \beta -2)}{4}<0 ,\\ r^{(\beta )}_{k} = \Psi _{\beta }w^{(\beta )}_{k+1} =\frac{1}{2\cos (\frac{\uppi \beta }{2})} w^{(\beta )}_{k+1}<0, \ k\geqslant 2,\\ r^{(\beta )}_{k} = r^{(\beta )}_{-k}, \ k\geqslant 1. \end{array} \right. \end{aligned}$$
(13)
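The coefficients \(w^{(\beta )}_{k}\) and \(r^{(\beta )}_{k}\) can then be generated as follows (a sketch under our naming; the signs match (11) and (13)):

```python
import math

def riesz_coeffs(beta, n):
    # Weights w_k^{(beta)} of (10) and symmetric coefficients r_k^{(beta)} of (13), k = 0..n.
    g = [1.0]
    for k in range(1, n + 2):
        g.append((1.0 - (beta + 1.0) / k) * g[-1])
    w = [beta / 2.0 * g[0]]
    w += [beta / 2.0 * g[k] + (2.0 - beta) / 2.0 * g[k - 1] for k in range(1, n + 2)]
    psi = 1.0 / (2.0 * math.cos(math.pi * beta / 2.0))
    r = [2.0 * psi * w[1], psi * (w[0] + w[2])]
    r += [psi * w[k + 1] for k in range(2, n + 1)]
    return w, r
```

A quick check for \(\beta = 1.5\) confirms \(r^{(\beta )}_{0} > 0\), \(r^{(\beta )}_{k} < 0\) for \(k \geqslant 1\), and the positivity of the symmetric sums of Lemma 4 below.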

Lemma 4

For \(1< \beta <2\), the coefficients \(r^{(\beta )}_{k}\ \left( k \geqslant 0 \right)\) satisfy

$$\begin{aligned} \left\{ \begin{array}{ll} \sum \limits _{k=0}\limits ^{m}r^{(\beta )}_{j-k}> 0,\ \ m-1\geqslant j\geqslant 1,m\geqslant 2,\\ \sum \limits _{k=1}\limits ^{m}r^{(\beta )}_{k} < 0 ,\ \ m \geqslant 1,\\ \sum \limits _{k=1-m }\limits ^{m-1}r_{k}^{(\beta )} > 0 ,\ \ m \geqslant 2. \end{array} \right. \end{aligned}$$

Proof

It follows from (11) that \(\sum \limits _{k=0}\limits ^{m}w^{(\beta )}_{k} < 0\) with \(m \geqslant 2\). Thus,

$$\begin{aligned} \sum \limits _{k=0}\limits ^{m}r^{(\beta )}_{j-k} = \Psi _{\beta }\left( \sum _{k=0}^{j+1}w^{(\beta )}_{k}+\sum _{k=0}^{m-j+1}w^{(\beta )}_{k}\right) >0, \ j+1\geqslant 2, m-j+1\geqslant 2. \end{aligned}$$

In view of (13), \(r^{(\beta )}_{0}>0\) and \(r^{(\beta )}_{k}<0\) with \(k\ne 0.\) Therefore, when \(m\geqslant 1\), \(\sum \limits _{k=1}\limits ^{m}r^{(\beta )}_{k} < 0\). Furthermore, (11) and (13) yield

$$\begin{aligned} \sum \limits _{k=1-m }\limits ^{m-1}r_{k}^{(\beta )} = 2\Psi _{\beta }\sum \limits _{k=0}\limits ^{m}w^{(\beta )}_{k} > 0, \, m \geqslant 2. \end{aligned}$$

This completes the proof.

2.3 Fractional Centered Difference Formula for Fractional Laplacian

In the case with two dimensions, let \(h = 2L/M\) with \(M\in {\mathbb {Z}}^{+}\), \(x_j = -L + jh\) with \(0\leqslant j \leqslant M\), and \(y_k = -L + kh\) with \(0\leqslant k \leqslant M\). Denote \({\widetilde{\varOmega }}_h = \{(x_j,y_k)\ |\ 0\leqslant j, k \leqslant M\}\), \(\varOmega _h = {\widetilde{\varOmega }}_h \cap {\widetilde{\varOmega }}\), and \(\partial \varOmega _h = {\widetilde{\varOmega }}_h \cap \partial {\widetilde{\varOmega }}\).

For any grid function \(w=\{w_{jk}\},v=\{v_{jk}\}\) on \(S_h^{\circ } = \{w\ |\ w=\{w_{jk}\}, w_{jk} = 0\) with \((x_j,y_k) \in \partial \varOmega _h\)}, a discrete inner product and the associated norm are defined as

$$\begin{aligned} (w,v)_{h^2} = h^2 \sum _{j=1}^{M-1}\sum _{k=1}^{M-1}w_{jk}v_{jk}, \ \ \Vert w \Vert _{L_h^2}^2 = (w,w)_{h^2}. \end{aligned}$$

Set \(L_h^2 = \left\{ w\ |\ w= \{w_{jk}\}, \Vert w \Vert _{L_h^2}^2 < +\infty \right\}\). For \(w \in L_h^2\), we define the semi-discrete Fourier transform \({\hat{w}}\!\!: [-\frac{\uppi }{h},\frac{\uppi }{h}]^2 \rightarrow {\mathbb {C}}\) as

$$\begin{aligned} {\hat{w}}(\eta _1,\eta _2) = h^2 \sum _{j=1}^{M-1}\sum _{k=1}^{M-1}w_{jk}{\text{e}}^{-{\textbf{i}}(\eta _1jh+\eta _2kh)}, \end{aligned}$$

and the inverse semi-discrete Fourier transform

$$\begin{aligned} w_{jk}=\frac{1}{4\uppi ^2}\int _{-\frac{\uppi} h}^{\frac{\uppi} h}\int _{-\frac{\uppi} h}^{\frac{\uppi} h}{\hat{w}}(\eta _1,\eta _2){\text{e}}^{{\textbf{i}}(\eta _1jh+\eta _2kh)}\textrm{d}\eta _1\textrm{d}\eta _2 \end{aligned}$$

with \({\textbf{i}}\in {\mathbb {C}}\) being the imaginary unit. It follows from Parseval's identity that the discrete inner product can be expressed as

$$\begin{aligned} \begin{aligned} (w,v)_{h^2}&=\frac{1}{4\uppi ^2}\int _{-\frac{\uppi} h}^{\frac{\uppi} h}\int _{-\frac{\uppi} h}^{\frac{\uppi} h}{\hat{w}}(\eta _1,\eta _2)\overline{{\hat{v}}(\eta _1,\eta _2)}\textrm{d}\eta _1\textrm{d}\eta _2 \end{aligned} \end{aligned}$$

with the norm given by

$$\begin{aligned} \begin{aligned} \Vert w \Vert _{L_h^2}^2&= \frac{1}{4\uppi ^2}\int _{-\frac{\uppi} h}^{\frac{\uppi} h}\int _{-\frac{\uppi} h}^{\frac{\uppi} h} \left|{\hat{w}}(\eta _1,\eta _2)\right|^2\textrm{d}\eta _1\textrm{d}\eta _2. \end{aligned} \end{aligned}$$

For an arbitrary positive constant s, define the fractional Sobolev semi-norm \(|\cdot |_{H_h^s}\) as

$$\begin{aligned} \begin{aligned} |w|_{H_h^s}^2&=\frac{1}{4\uppi ^2}\int _{-\frac{\uppi} h}^{\frac{\uppi} h}\int _{-\frac{\uppi} h}^{\frac{\uppi} h}\left( \eta _1^2+\eta _2^2 \right) ^s\left|{\hat{w}}(\eta _1,\eta _2)\right|^2\textrm{d}\eta _1\textrm{d}\eta _2. \end{aligned} \end{aligned}$$

Based on the above settings, the fractional Laplacian in two dimensions can be evaluated by the two-dimensional fractional centered difference formula

$$\begin{aligned} \left( -\Delta _h\right) ^{\frac{\beta }{2}}\psi (x,y) = \frac{1}{h^{\beta }}\sum _{j,k\in {\mathbb {Z}}}a_{j,k}^{(\beta )}\psi (x+jh,y+kh) \end{aligned}$$
(14)

with

$$\begin{aligned} a_{j,k}^{(\beta )} = \frac{1}{4\uppi ^2}\int _{-\uppi }^{\uppi }\int _{-\uppi }^{\uppi }\left[ 4\textrm{sin}^2\Big(\frac{\eta _1}{2}\Big)+4\textrm{sin}^2\Big(\frac{\eta _2}{2}\Big) \right] ^{\frac{\beta }{2}}{\text{e}}^{-{\textbf{i}}(\eta _1j+\eta _2k)} \textrm{d}\eta _1\textrm{d}\eta _2. \end{aligned}$$
(15)

Here we describe the computation of \(a_{j,k}^{(\beta )}\) given in [14]. Taking an integer \(K > M\) and a stepsize \(\delta = 2\uppi /K\), we have

$$\begin{aligned} \begin{aligned} a_{j,k}^{(\beta )}&= \frac{1}{4\uppi ^2}\int _{-\uppi }^{\uppi }\int _{-\uppi }^{\uppi }\left[ 4\textrm{sin}^2\Big(\frac{\eta _1}{2}\Big)+4\textrm{sin}^2\Big(\frac{\eta _2}{2}\Big) \right] ^{\frac{\beta }{2}}{\text{e}}^{-{\textbf{i}}(\eta _1j+\eta _2k)} \textrm{d}\eta _1\textrm{d}\eta _2\\&\approx \frac{1}{K^2} \sum _{p=0}^{K-1}\sum _{q=0}^{K-1}\left[ 4\textrm{sin}^2\Big(\frac{\delta p}{2}\Big)+4\textrm{sin}^2\Big(\frac{\delta q}{2}\Big) \right] ^{\frac{\beta }{2}}{\text{e}}^{-{\textbf{i}}(j\delta p+k\delta q)}. \end{aligned} \end{aligned}$$

With the expression above, the coefficients \(a_{j,k}^{(\beta )}(0 \leqslant j,k \leqslant K-1)\) can be computed efficiently by the built-in function “fft2” in Matlab, where the accuracy of approximation is \({\mathcal {O}}(K^{-\beta -2})\) [14]. Throughout the numerical examples in this paper, K is taken as \(2^{12}\) to compute the coefficients \(a_{j,k}^{(\beta )}\).
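The same computation can be sketched in Python with `numpy.fft.fft2` (the function name `fl_coeffs` is ours); as a sanity check, \(\beta = 2\) recovers the standard five-point Laplacian coefficients \(a_{0,0} = 4\), \(a_{\pm 1,0} = a_{0,\pm 1} = -1\):

```python
import numpy as np

def fl_coeffs(beta, K):
    # Approximate a_{j,k}^{(beta)} of (15) by the K-point rectangle rule and fft2.
    delta = 2.0 * np.pi / K
    eta = delta * np.arange(K)
    symbol = (4.0 * np.sin(eta[:, None] / 2.0) ** 2
              + 4.0 * np.sin(eta[None, :] / 2.0) ** 2) ** (beta / 2.0)
    return np.real(np.fft.fft2(symbol)) / K ** 2
```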

Before introducing properties of the fractional centered difference formula, the following space should be introduced [10, 14, 27]:

$$\begin{aligned} {\mathcal {B}}^s({\mathbb {R}}^2) =\left\{ \psi \bigg {|}\ \psi \in L^1({\mathbb {R}}^2), \int _{{\mathbb {R}}^2}\left[ 1+|\eta |^2\right] ^s|{\hat{\psi }}(\eta _1,\eta _2)|{\textrm{d}}\eta _1{\textrm{d}}\eta _2 < \infty \right\} , \end{aligned}$$

where \(|\eta |^2 = \eta _1^2+\eta _2^2\).

Lemma 5

[14] Let \(\psi (x,y)\in {\mathcal {B}}^{2+\beta }({\mathbb {R}}^2)\). For the fractional centered difference operator in (14), it holds that

$$\begin{aligned} \left( -\Delta \right) ^{\frac{\beta }{2}}\psi (x,y) =\left( -\Delta _h\right) ^{\frac{\beta }{2}}\psi (x,y)+R_{L}(x,y),\ \ 0< \beta \leqslant 2, \end{aligned}$$

where the truncation error satisfies

$$\begin{aligned} \left|R_{L}(x,y)\right|\leqslant C_0 h^2\int _{{\mathbb {R}}^2}\left( 1+|\eta |\right) ^{\beta + 2} \left|{\hat{\psi }}(\eta _1,\eta _2)\right|\textrm{d}\eta _1\textrm{d}\eta _2 = Ch^2 \end{aligned}$$
(16)

with C being a constant independent of h.

Lemma 6

(Fractional semi-norm equivalence) [14] For \(\psi \in H_{h}^{\frac{\beta }{2}}({\mathbb {R}}^2)\), we have

$$\begin{aligned} \left( \frac{2}{\uppi }\right) ^{\beta }|\psi |^2_{H_h^{\frac{\beta }{2}}({\mathbb {R}}^2)} \leqslant \left( \left( -\Delta _h\right) ^{\frac{\beta }{2}}\psi ,\psi \right) _{h^2} \leqslant |\psi |^2_{H_h^{\frac{\beta }{2}}({\mathbb {R}}^2)}. \end{aligned}$$

3 Fully Discrete Schemes

In this section, we derive fully discrete schemes for the one-dimensional time-space fractional diffusion equation (1) and the two-dimensional time-space fractional diffusion equation (2), along with the corresponding stability analysis and error estimation.

3.1 Fully Discrete Scheme for (1)

Denote the exact solution u(x, t) and the numerical solution U(x, t) at the grid point \((x_j,t_n)\) by \(u_j^n\) and \(U_j^n\), respectively. Denote also \(f_j^n=f(x_j,t_n)\). Substituting (4) and (12) into (1) and omitting the high-order terms, we obtain the following implicit difference scheme:

$$\begin{aligned} \left\{ \begin{array}{ll} c^{(\alpha )}_{1,1}U^{1}_{j}-c^{(\alpha )}_{1,1}U^{0}_{j}+h^{-\beta }\sum \limits _{k=0}\limits ^{M}r_{j-k}^{(\beta )}U^{1}_{k}=f^{1}_{j}, 1\leqslant j \leqslant M-1,\\ \left( c^{(\alpha )}_{n,n}U^{n}_{j}+\sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i,n}-c^{(\alpha )}_{i+1,n})U^{i}_{j}-c^{(\alpha )}_{1,n}U^{0}_{j}\right) +h^{-\beta }\sum \limits _{k=0}\limits ^{M}r_{j-k}^{(\beta )}U^{n}_{k} =f^{n}_{j},\\ 2\leqslant n\leqslant N,\,1\leqslant j \leqslant M-1,\\ U^{0}_{j}=u_0(x_j), 1\leqslant j \leqslant M-1,\\ U^{n}_{0}=U^{n}_{M}=0, 1\leqslant n\leqslant N. \end{array} \right. \end{aligned}$$
(17)
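A minimal dense-matrix realization of scheme (17) is sketched below; the function names are ours, the meshes are uniform, and a production code would exploit the symmetric Toeplitz structure of the spatial matrix rather than calling a dense solver.

```python
import math
import numpy as np

def solve_scheme_17(alpha, beta, a_t, T, N, a, b, M, u0, f):
    # Implicit scheme (17): L1 in time, weighted-shifted GL in space (uniform meshes).
    tau, h = (T - a_t) / N, (b - a) / M
    x = a + h * np.arange(M + 1)
    t = a_t + tau * np.arange(N + 1)
    # coefficients r_k^{(beta)} of (13)
    g = [1.0]
    for k in range(1, M + 2):
        g.append((1.0 - (beta + 1.0) / k) * g[-1])
    w = [beta / 2.0] + [beta / 2.0 * g[k] + (2.0 - beta) / 2.0 * g[k - 1]
                        for k in range(1, M + 2)]
    psi = 1.0 / (2.0 * math.cos(math.pi * beta / 2.0))
    r = [2.0 * psi * w[1], psi * (w[0] + w[2])] + [psi * w[k + 1] for k in range(2, M)]
    R = np.array([[r[abs(j - k)] for k in range(1, M)] for j in range(1, M)]) / h ** beta

    def c(i, n):  # L1 coefficients c_{i,n} of (5)
        return (math.log(t[n] / t[i - 1]) ** (1.0 - alpha)
                - math.log(t[n] / t[i]) ** (1.0 - alpha)) \
               / (math.gamma(2.0 - alpha) * math.log(t[i] / t[i - 1]))

    U = [u0(x[1:M])]  # interior values; U[n][j-1] approximates u(x_j, t_n)
    for n in range(1, N + 1):
        rhs = f(x[1:M], t[n]) + c(1, n) * U[0]
        for i in range(1, n):
            rhs = rhs + (c(i + 1, n) - c(i, n)) * U[i]
        U.append(np.linalg.solve(c(n, n) * np.eye(M - 1) + R, rhs))
    return x, U
```

With \(f \equiv 0\), the computed solution satisfies \(\Vert U^n\Vert _h \leqslant \Vert U^0\Vert _h\), in line with the stability argument of Theorem 1.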

In order to rigorously analyze the numerical stability and convergence of the fully discrete scheme in (17), we first derive the following lemma.

Lemma 7

For any grid function \(v = (v_0,v_1,\cdots ,v_M) \in {\mathcal {U}}_h^{\circ }\), we have

$$\begin{aligned} h^{-\beta }h\sum \limits _{j=1}\limits ^{M-1}\left( \sum \limits _{k=0}\limits ^{M}r_{j-k}^{(\beta )}v_{k}\right) v_j > 0, \end{aligned}$$

where \(r_{k}^{(\beta )}\) is defined by (13) with \(\beta \in (1 ,2)\).

Proof

It follows from Lemma 4 and (13) that

$$\begin{aligned} \begin{aligned} A\triangleq\,\,&h^{-\beta }h\sum \limits _{j=1}\limits ^{M-1}\left( \sum \limits _{k=0}\limits ^{M}r_{j-k}^{(\beta )}v_{k}\right) v_j\\ =\,\,&h^{-\beta }\left[ h\sum \limits _{j=1}\limits ^{M-1}r_{0}^{(\beta )}v_j^2 + h\sum \limits _{j=1}\limits ^{M-1}\left( \sum \limits _{k=0, k\ne j }\limits ^{M}r_{j-k}^{(\beta )}v_{k}v_j\right) \right] \\ \geqslant\,\,&h^{-\beta }\left[ h\sum \limits _{j=1}\limits ^{M-1}r_{0}^{(\beta )}v_j^2 + \frac{h}{2}\sum \limits _{j=1}\limits ^{M-1}\sum \limits _{k=0, k\ne j }\limits ^{M}r_{j-k}^{(\beta )}(v_{k}^2 + v_j^2) \right] \\ =\,\,&h^{-\beta }\left( h\sum \limits _{j=1}\limits ^{M-1}r_{0}^{(\beta )}v_j^2 + \frac{h}{2}\sum \limits _{j=1}\limits ^{M-1}\sum \limits _{k=0, k\ne j }\limits ^{M}r_{j-k}^{(\beta )}v_{j}^2 + \frac{h}{2}\sum \limits _{j=1}\limits ^{M-1}\sum \limits _{k=0, k\ne j }\limits ^{M}r_{j-k}^{(\beta )}v_{k}^2 \right) \\ \triangleq\,\,&h^{-\beta }(A_1+A_2+A_3). \end{aligned} \end{aligned}$$
(18)

Note that

$$\begin{aligned} \begin{aligned} A_2&= \frac{h}{2}\sum \limits _{j=1}\limits ^{M-1}\sum \limits _{k=0, k\ne j }\limits ^{M}r_{j-k}^{(\beta )}v_{j}^2 = \frac{h}{2}\sum \limits _{j=1}\limits ^{M-1}v_{j}^2 \sum \limits _{k=0, k\ne j }\limits ^{M}r_{j-k}^{(\beta )} \\&= \frac{h}{2}\sum \limits _{j=1}\limits ^{M-1}v_{j}^2 \sum \limits _{k=j-M,k\ne 0 }\limits ^{j}r_{k}^{(\beta )} \geqslant \frac{h}{2}\sum \limits _{j=1}\limits ^{M-1}v_{j}^2 \sum \limits _{|k|=1 }\limits ^{M-1}r_{k}^{(\beta )}, \end{aligned} \end{aligned}$$
(19)

and

$$\begin{aligned} \begin{aligned} A_3&= \frac{h}{2}\sum \limits _{j=1}\limits ^{M-1}\sum \limits _{k=0, k\ne j }\limits ^{M}r_{j-k}^{(\beta )}v_{k}^2 \\&= \frac{h}{2}\left( \sum \limits _{j=1}\limits ^{M-1}\sum \limits _{k=j-M }\limits ^{-1}r_{k}^{(\beta )}v_{j-k}^2 +\sum \limits _{j=1}\limits ^{M-1}\sum \limits _{k=1 }\limits ^{j}r_{k}^{(\beta )}v_{j-k}^2 \right) \\&= \frac{h}{2}\left( \sum \limits _{k=1-M}\limits ^{-1}r_{k}^{(\beta )}\sum \limits _{j=1}\limits ^{k+M}v_{j-k}^2 +\sum \limits _{k=1}\limits ^{M-1}r_{k}^{(\beta )}\sum \limits _{j=k}\limits ^{M-1}v_{j-k}^2 \right) \\&\geqslant \frac{h}{2}\left( \sum \limits _{k=1-M}\limits ^{-1}r_{k}^{(\beta )} +\sum \limits _{k=1}\limits ^{M-1}r_{k}^{(\beta )}\right) \sum \limits _{j=1}\limits ^{M-1}v_{j}^2 \\&= \frac{h}{2}\sum \limits _{j=1}\limits ^{M-1}v_{j}^2 \sum \limits _{|k|=1 }\limits ^{M-1}r_{k}^{(\beta )}. \end{aligned} \end{aligned}$$
(20)

Substituting (19) and (20) into (18) gives

$$\begin{aligned} \begin{aligned} A&\geqslant h^{-\beta }\left( h\sum \limits _{j=1}\limits ^{M-1}r_{0}^{(\beta )}v_j^2 +h\sum \limits _{j=1}\limits ^{M-1}v_{j}^2 \sum \limits _{|k|=1 }\limits ^{M-1}r_{k}^{(\beta )} \right) \\&= h^{-\beta }h\left( r_{0}^{(\beta )}+\sum \limits _{|k|=1 }\limits ^{M-1}r_{k}^{(\beta )} \right) \sum \limits _{j=1}\limits ^{M-1}v_j^2 \\&=h^{1-\beta }\sum \limits _{k=1-M }\limits ^{M-1}r_{k}^{(\beta )} \sum \limits _{j=1}\limits ^{M-1}v_j^2 >0. \end{aligned} \end{aligned}$$
(21)

The proof is thus completed.
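The positivity in Lemma 7 can also be confirmed numerically; the following sketch (our naming) evaluates the quadratic form for a grid function vanishing at the endpoints:

```python
import math
import numpy as np

def quad_form(beta, M, h, v):
    # A = h^{1-beta} * sum_j ( sum_k r_{j-k} v_k ) v_j as in Lemma 7 (v_0 = v_M = 0).
    g = [1.0]
    for k in range(1, M + 3):
        g.append((1.0 - (beta + 1.0) / k) * g[-1])
    w = [beta / 2.0] + [beta / 2.0 * g[k] + (2.0 - beta) / 2.0 * g[k - 1]
                        for k in range(1, M + 2)]
    psi = 1.0 / (2.0 * math.cos(math.pi * beta / 2.0))
    r = [2.0 * psi * w[1], psi * (w[0] + w[2])] + [psi * w[k + 1] for k in range(2, M)]
    A = sum(v[j] * sum(r[abs(j - k)] * v[k] for k in range(M + 1)) for j in range(1, M))
    return h ** (1.0 - beta) * A
```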

Based on the above lemma, we are ready for the stability and convergence analysis of the fully discrete scheme in (17).

Theorem 1

Let \(\alpha \in (0,1)\) and \(\beta \in (1,2)\). Then the fully discrete scheme (17) for the time-space fractional diffusion equation (1) is unconditionally stable.

Proof

Let \({\widetilde{U}}^{n}_{j}\) be a perturbed solution of the fully discrete scheme (17), where \(U^{n}_{j}\) denotes the exact solution of (17) at \((x_{j}, t_{n})\). The perturbation \({\xi }^{n}_{j}= {\widetilde{U}}^{n}_{j}-U^{n}_{j}\) satisfies

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} c^{(\alpha )}_{1,1}{\xi }^{1}_{j}+h^{-\beta }\sum \limits _{k=0}\limits ^{M}r_{j-k}^{(\beta )}{\xi }^{1}_{k}=c^{(\alpha )}_{1,1}{\xi }^{0}_{j}, 1 \leqslant j \leqslant M-1, \\ c^{(\alpha )}_{n,n}{\xi }^{n}_{j}+h^{-\beta }\sum \limits _{k=0}\limits ^{M}r_{j-k}^{(\beta )}{\xi }^{n}_{k}=\sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n}){\xi }^{i}_{j}+c^{(\alpha )}_{1,n}{\xi }^{0}_{j}, \\ n\geqslant 2, 1 \leqslant j \leqslant M-1. \end{array} \right. \end{aligned} \end{aligned}$$
(22)

We prove the unconditional stability by mathematical induction. When \(n=1\), multiplying both sides of the first equation in (22) by \(\xi _j^1\) and summing j from 1 to \(M-1\) yield that

$$\begin{aligned} c^{(\alpha )}_{1,1}h\sum \limits _{j=1}\limits ^{M-1}({\xi }^{1}_{j})^2 + h^{-\beta }h\sum \limits _{j=1}\limits ^{M-1}\left( \sum \limits _{k=0}\limits ^{M}r_{j-k}^{(\beta )}{\xi }^{1}_{k}\right) {\xi }^{1}_{j} = c^{(\alpha )}_{1,1}h\sum \limits _{j=1}\limits ^{M-1}{\xi }^{0}_{j}{\xi }^{1}_{j}. \end{aligned}$$

According to Lemma 7, we have

$$\begin{aligned} h^{-\beta }h\sum \limits _{j=1}\limits ^{M-1}\left( \sum \limits _{k=0}\limits ^{M}r_{j-k}^{(\beta )}{\xi }^{1}_{k}\right) {\xi }^{1}_{j} >0, \end{aligned}$$

which gives

$$\begin{aligned} \begin{aligned} c^{(\alpha )}_{1,1}\Vert \xi ^1\Vert _h^2 \leqslant c^{(\alpha )}_{1,1}\Vert \xi ^1 \Vert _h\Vert \xi ^0 \Vert _h. \end{aligned} \end{aligned}$$
(23)

Note also that \(c^{(\alpha )}_{1,1}>0\). Therefore,

$$\begin{aligned} \Vert \xi ^1 \Vert _h \leqslant \Vert \xi ^0 \Vert _h. \end{aligned}$$

Assume that \(\Vert \xi ^m \Vert _h \leqslant \Vert \xi ^0 \Vert _h\) holds for \(1\leqslant m \leqslant n-1\). When \(m=n\), multiplying both sides of the second equation in (22) by \(\xi _j^n\) and summing j from 1 to \(M-1\) yield that

$$\begin{aligned} \begin{aligned} &c^{(\alpha )}_{n,n}h\sum \limits _{j=1}\limits ^{M-1}({\xi }^{n}_{j})^2+h^{-\beta }h\sum \limits _{j=1}\limits ^{M-1}\left( \sum \limits _{k=0}\limits ^{M}r_{j-k}^{(\beta )}{\xi }^{n}_{k}\right) {\xi }^{n}_{j} \\&=h\sum \limits _{j=1}\limits ^{M-1}\sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n}){\xi }^{i}_j{\xi }^{n}_{j} +c^{(\alpha )}_{1,n}h\sum \limits _{j=1}\limits ^{M-1}{\xi }^{0}_j{\xi }^{n}_{j}. \end{aligned} \end{aligned}$$

It follows from Lemma 7 that

$$\begin{aligned} h^{-\beta }h\sum \limits _{j=1}\limits ^{M-1}\left( \sum \limits _{k=0}\limits ^{M}r_{j-k}^{(\beta )}{\xi }^{n}_{k}\right) {\xi }^{n}_{j} >0, \end{aligned}$$

which gives

$$\begin{aligned} \begin{aligned} c^{(\alpha )}_{n,n}\Vert \xi ^n \Vert _h^2&\leqslant \sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n})h\sum \limits _{j=1}\limits ^{M-1}{\xi }^{i}_j{\xi }^{n}_{j} +c^{(\alpha )}_{1,n}h\sum \limits _{j=1}\limits ^{M-1}{\xi }^{0}_j{\xi }^{n}_{j} \\&\leqslant \sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n})\Vert \xi ^i \Vert _h\Vert \xi ^n \Vert _h +c^{(\alpha )}_{1,n}\Vert \xi ^0 \Vert _h\Vert \xi ^n \Vert _h \\&\leqslant \Vert \xi ^n\Vert _h\left( \sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n})+c^{(\alpha )}_{1,n} \right) \Vert \xi ^0 \Vert _h \\&= c^{(\alpha )}_{n,n}\Vert \xi ^n \Vert _h\Vert \xi ^0 \Vert _h, \end{aligned} \end{aligned}$$

where Lemma 1 is used. As a result, we have

$$\begin{aligned} \Vert \xi ^n \Vert _h \leqslant \Vert \xi ^0 \Vert _h. \end{aligned}$$

The proof is thus completed.
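The mechanism of this proof can be illustrated numerically: the recursion (22) is non-expansive whenever the weights are positive and increasing in i, so that the telescoping identity of Lemma 1, \(\sum _{i=1}^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n})+c^{(\alpha )}_{1,n}=c^{(\alpha )}_{n,n}\), holds, and the spatial operator is symmetric positive definite. The sketch below uses synthetic stand-ins for \(c^{(\alpha )}_{i,n}\) and \(h^{-\beta }\bigl( r^{(\beta )}_{j-k}\bigr)\), not the paper's actual L1 Caputo-Hadamard weights and Riesz coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 40, 30

# Stand-ins: any SPD matrix B and positive increasing weights c mimic the
# structure of (22); these are NOT the paper's actual coefficients.
Q = rng.standard_normal((M, M))
B = Q @ Q.T + M * np.eye(M)          # symmetric positive definite
c = np.sort(rng.random(N)) + 0.1     # 0 < c[0] <= c[1] <= ... <= c[N-1]

xi = [rng.standard_normal(M)]        # xi^0: initial perturbation
for n in range(1, N):
    # (c_{n,n} I + B) xi^n = sum_i (c_{i+1,n} - c_{i,n}) xi^i + c_{1,n} xi^0,
    # with c_{i,n} played by c[i-1]; cf. the second equation in (22).
    rhs = c[0] * xi[0]
    for i in range(1, n):
        rhs += (c[i] - c[i - 1]) * xi[i]
    xi.append(np.linalg.solve(c[n - 1] * np.eye(M) + B, rhs))

norms = [np.linalg.norm(v) for v in xi]
print(all(nrm <= norms[0] + 1e-10 for nrm in norms))  # non-expansive, as in Theorem 1
```

The same induction as in the proof shows every \(\Vert \xi ^n\Vert\) stays below \(\Vert \xi ^0\Vert\); the SPD term only adds damping.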

Theorem 2

Let \(u(x,t)\) be the exact solution to (1) and \(U(x,t)\) be the numerical solution given by scheme (17). Assume that \(u(x,t)\), \(_{\text{RL}}\textrm{D}_{a,x}^\beta u(x,t)\), \(_{\text{RL}}\textrm{D}_{x,b}^\beta u(x,t)\), and their Fourier transforms belong to \({\text{C}}^2\left( [{\tilde{a}},T], L^1({\mathbb {R}})\right)\). Then for the numerical error \(\varepsilon = u - U\), there holds

$$\begin{aligned} \Vert \varepsilon ^n \Vert _h \leqslant C\left( \tau ^{2-\alpha }+h^2\right) , \ 0\leqslant n \leqslant N, \end{aligned}$$
(24)

where \(\varepsilon ^n = (\varepsilon _0^n, \varepsilon _1^n, \cdots ,\varepsilon _{M-1}^n,\varepsilon _{M}^n) \in {\mathcal {U}}_h^{\circ }\), \(\alpha \in (0,1)\), and \(C>0\) is a constant.

Proof

It follows from (1) and (17) that

$$\begin{aligned} \left\{ \begin{array}{ll} c^{(\alpha )}_{1,1}\varepsilon ^{1}_{j}+ h^{-\beta }\sum \limits _{k=0}\limits ^{M}r_{j-k}^{(\beta )}\varepsilon ^{1}_{k} =c^{(\alpha )}_{1,1}\varepsilon ^{0}_{j} + R_j^1, 1 \leqslant j \leqslant M-1, \\ c^{(\alpha )}_{n,n}\varepsilon ^{n}_{j}+ h^{-\beta }\sum \limits _{k=0}\limits ^{M}r_{j-k}^{(\beta )}\varepsilon ^{n}_{k}= \sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n})\varepsilon ^{i}_{j}+c^{(\alpha )}_{1,n}\varepsilon ^{0}_{j}+R_j^n, \\ 2\leqslant n\leqslant N,\ 1 \leqslant j \leqslant M-1, \\ \varepsilon _j^0 = 0, 1 \leqslant j \leqslant M-1. \end{array} \right. \end{aligned}$$
(25)

By Lemma 2 and (8), the truncation error \(R_j^n\) satisfies

$$\begin{aligned} \left|R_j^n\right|\leqslant \widetilde{C}\left( \tau ^{2-\alpha }+h^2\right) , \ j = 1,2,\cdots ,M-1, \ n=1,2,\cdots ,N, \end{aligned}$$

where \({\widetilde{C}}\) is a positive constant.

Now we give the convergence analysis by proving

$$\begin{aligned} \Vert \varepsilon ^n \Vert _{h} \leqslant \frac{\Gamma (1-\alpha )}{{\tilde{a}}^{\alpha }} \max _{1\leqslant k\leqslant n}\left\{ (k\tau )^{\alpha }\Vert R^k \Vert _{h}\right\} ,\ \ 1 \leqslant n \leqslant N \end{aligned}$$
(26)

via mathematical induction. The case with \(n = 0\) is trivial as \(\Vert \varepsilon ^0 \Vert _h = 0\).

For \(n=1\), multiplying both sides of the first equation in (25) by \(\varepsilon _j^1\) and summing j from 1 to \(M-1\) yield

$$\begin{aligned} c^{(\alpha )}_{1,1}h\sum \limits _{j=1}\limits ^{M-1}({\varepsilon }^{1}_{j})^2 + h^{-\beta }h\sum \limits _{j=1}\limits ^{M-1}\left( \sum \limits _{k=0}\limits ^{M}r_{j-k}^{(\beta )}{\varepsilon }^{1}_{k}\right) {\varepsilon }^{1}_{j} = c^{(\alpha )}_{1,1}h\sum \limits _{j=1}\limits ^{M-1}{\varepsilon }^{0}_{j}{\varepsilon }^{1}_{j}+h\sum \limits _{j=1}\limits ^{M-1}{R}^{1}_{j}{\varepsilon }^{1}_{j}. \end{aligned}$$

Based on Lemma 7 and Remark 1, we give the following estimates:

$$\begin{aligned} c^{(\alpha )}_{1,1}\Vert \varepsilon ^1 \Vert _h^2 \leqslant c^{(\alpha )}_{1,1}\Vert \varepsilon ^0 \Vert _h\Vert \varepsilon ^1 \Vert _h + \Vert R^1 \Vert _h\Vert \varepsilon ^1 \Vert _h, \end{aligned}$$

from which follows

$$\begin{aligned} \begin{aligned} \Vert \varepsilon ^1 \Vert _h \leqslant \Vert \varepsilon ^0 \Vert _h + \frac{1}{c^{(\alpha )}_{1,1}}\Vert R^1 \Vert _h \leqslant \frac{\Gamma (1-\alpha )}{{\tilde{a}}^{\alpha }}\tau ^{\alpha }\Vert R^1 \Vert _h. \end{aligned} \end{aligned}$$

Assume that (26) holds for \(1 \leqslant n \leqslant m\) with \(1 \leqslant m \leqslant N-1\). When \(n = m + 1\), multiplying both sides of the second equation in (25) by \(\varepsilon _j^n\) and summing j from 1 to \(M-1\) give

$$\begin{aligned} \begin{aligned}c^{(\alpha )}_{n,n}h\sum \limits _{j=1}\limits ^{M-1}({\varepsilon }^{n}_{j})^2 +h^{-\beta }h\sum \limits _{j=1}\limits ^{M-1}\left( \sum \limits _{k=0}\limits ^{M}r_{j-k}^{(\beta )}{\varepsilon }^{n}_{k}\right) {\varepsilon }^{n}_{j} \\=h\sum \limits _{j=1}\limits ^{M-1}\sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n}){\varepsilon }^{i}_j{\varepsilon }^{n}_{j} +c^{(\alpha )}_{1,n}h\sum \limits _{j=1}\limits ^{M-1}{\varepsilon }^{0}_j{\varepsilon }^{n}_{j} +h\sum \limits _{j=1}\limits ^{M-1}{R}^{n}_{j}{\varepsilon }^{n}_{j} . \end{aligned} \end{aligned}$$

It follows from Lemma 7 and Remark 1 that

$$\begin{aligned} \begin{aligned} c^{(\alpha )}_{n,n}\Vert \varepsilon ^n \Vert _{h}^2&\leqslant \sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n})\Vert \varepsilon ^i \Vert _{h}\Vert \varepsilon ^n \Vert _{h} + c^{(\alpha )}_{1,n}\left( \Vert \varepsilon ^0 \Vert _{h}+\frac{\Vert R^n \Vert _{h}}{c^{(\alpha )}_{1,n}} \right) \Vert \varepsilon ^n \Vert _{h} \\&\leqslant \sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n})\frac{\Gamma (1-\alpha )}{{\tilde{a}}^{\alpha }}\max _{1\leqslant k\leqslant i}\left\{ (k\tau )^{\alpha }\Vert R^k \Vert _{h}\right\} \Vert \varepsilon ^n \Vert _{h} \\&\quad\,+ c^{(\alpha )}_{1,n} \frac{\Gamma (1-\alpha )}{{\tilde{a}}^{\alpha }}(n\tau )^{\alpha }\Vert R^n \Vert _{h}\Vert \varepsilon ^n \Vert _{h} \\&\leqslant \left[ \sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n})+ c^{(\alpha )}_{1,n} \right] \frac{\Gamma (1-\alpha )}{{\tilde{a}}^{\alpha }}\max _{1\leqslant k\leqslant n}\left\{ (k\tau )^{\alpha }\Vert R^k \Vert _{h}\right\} \Vert \varepsilon ^n \Vert _{h} \\&=c^{(\alpha )}_{n,n}\frac{\Gamma (1-\alpha )}{{\tilde{a}}^{\alpha }}\max _{1\leqslant k\leqslant n}\left\{ (k\tau )^{\alpha }\Vert R^k \Vert _{h}\right\} \Vert \varepsilon ^n \Vert _{h}. \end{aligned} \end{aligned}$$

Therefore,

$$\begin{aligned} \Vert \varepsilon ^n \Vert _{h}\leqslant \frac{\Gamma (1-\alpha )}{{\tilde{a}}^{\alpha }}\max _{1\leqslant k\leqslant n}\left\{ (k\tau )^{\alpha }\Vert R^k \Vert _{h}\right\} \leqslant \frac{\Gamma (1-\alpha )}{{\tilde{a}}^{\alpha }} T^{\alpha }\widetilde{C}\left( \tau ^{2-\alpha }+h^2\right) . \end{aligned}$$

Consequently, there exists a positive constant C, such that

$$\begin{aligned} \Vert \varepsilon ^n \Vert _h \leqslant C\left( \tau ^{2-\alpha }+h^2\right) . \end{aligned}$$

This finishes the proof.

3.2 Fully Discrete Scheme for (2)

For the two-dimensional fractional diffusion equation in (2), we denote \(u_{jk}^{n}=u(x_j, y_k, t_n)\) and \(f_{jk}^n = f(x_j,y_k,t_n)\) with \(x_j = -L+jh\), \(y_k = -L+kh\), and \(t_n=n\tau\). Here h and \(\tau\) are the spatial stepsize and the temporal stepsize, respectively. Adopting the L1 approximation in (4) for the temporal Caputo-Hadamard derivative and the fractional centred difference formula in (14) for the fractional Laplacian, we obtain the following fully discrete scheme after omitting the high-order terms:

$$\begin{aligned} \left\{ \begin{array}{ll} c^{(\alpha )}_{1,1}U^{1}_{jk}-c^{(\alpha )}_{1,1}U^{0}_{jk}+\left( -\Delta _h\right) ^{\frac{\beta }{2}}U^{1}_{jk} =f^{1}_{jk}, 1\leqslant j, k \leqslant M-1, \\ c^{(\alpha )}_{n,n}U^{n}_{jk}+\sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i,n}-c^{(\alpha )}_{i+1,n})U^{i}_{jk}-c^{(\alpha )}_{1,n}U^{0}_{jk}+\left( -\Delta _h\right) ^{\frac{\beta }{2}}U^{n}_{jk}=f^{n}_{jk}, \\ 2\leqslant n\leqslant N,\,1\leqslant j, k \leqslant M-1, \\ U^{0}_{jk}=u_0(x_j,y_k), 1\leqslant j, k \leqslant M-1, \\ U^{n}_{jk}=0, (x_j,y_k)\in \partial \varOmega _h, 0\leqslant n\leqslant N. \end{array} \right. \end{aligned}$$
(27)

Here \(U_{jk}^n = U(x_j,y_k,t_n)\) with \(U(x,y,t)\) being the approximation of \(u(x,y,t)\). This finite difference system can be written in the following matrix form:

$$\begin{aligned} \left\{ \begin{array}{ll} c^{(\alpha )}_{1,1}{\textbf{U}}^{1}-c^{(\alpha )}_{1,1}{\textbf{U}}^{0}+\frac{1}{h^{\beta }}{\textbf{A}}{\textbf{U}}^1 ={\textbf{F}}^{1},n=1, \\ c^{(\alpha )}_{n,n}{\textbf{U}}^{n}+\sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i,n}-c^{(\alpha )}_{i+1,n}){\textbf{U}}^{i}-c^{(\alpha )}_{1,n}{\textbf{U}}^{0}+\frac{1}{h^{\beta }}{\textbf{A}}{\textbf{U}}^{n}={\textbf{F}}^{n}, n\geqslant 2, \end{array} \right. \end{aligned}$$
(28)

where

$$\begin{aligned} {\textbf{F}}^n = (f_{11}^n,\cdots ,f_{(M-1)1}^{n},f_{12}^n,\cdots ,f_{(M-1)2}^{n},\cdots ,f_{1(M-1)}^n,\cdots ,f_{(M-1)(M-1)}^{n})^\textrm{T} \end{aligned}$$

and

$$\begin{aligned} {\textbf{U}}^n = \left( {{\mathbf{U}}_1^{n}}, {{\mathbf{U}}_{2}^{n}},\cdots , {{\mathbf{U}}_{M-1}^{n}}\right) ^\textrm{T} \end{aligned}$$

with \( {{\mathbf{U}}_j^n} = \left( U_{1j}^n,U_{2j}^n, \cdots , U_{(M-1)j}^n\right) ^\textrm{T}\). The coefficient matrix \({\textbf{A}}\) is a real symmetric block Toeplitz matrix with Toeplitz blocks, say,

$$\begin{aligned} {\textbf{A}} = \left( \begin{array}{ccccccc} A_0 &{}A_1 &{}A_2 &{}\cdots &{}A_{M-3} &{}A_{M-2} \\ A_1 &{}A_0 &{}A_1 &{}\cdots &{}A_{M-4} &{}A_{M-3} \\ A_2 &{}A_1 &{}A_0 &{}\cdots &{}A_{M-5} &{}A_{M-4} \\ \vdots &{} \vdots &{}\vdots &{}&{}\vdots &{}\vdots \\ A_{M-3} &{}A_{M-4} &{}A_{M-5} &{}\cdots &{}A_0 &{}A_1 \\ A_{M-2} &{}A_{M-3} &{}A_{M-4} &{}\cdots &{}A_1 &{}A_0 \end{array}\right) \end{aligned}$$

with

$$\begin{aligned} A_j = \left( \begin{array}{ccccccc} a_{0,j}^{(\beta )} &{}a_{1,j}^{(\beta )} &{}a_{2,j}^{(\beta )} &{}\cdots &{}a_{M-3,j}^{(\beta )} &{}a_{M-2,j}^{(\beta )} \\ a_{1,j}^{(\beta )} &{}a_{0,j}^{(\beta )} &{}a_{1,j}^{(\beta )} &{}\cdots &{}a_{M-4,j}^{(\beta )} &{}a_{M-3,j}^{(\beta )} \\ a_{2,j}^{(\beta )} &{}a_{1,j}^{(\beta )} &{}a_{0,j}^{(\beta )} &{}\cdots &{}a_{M-5,j}^{(\beta )} &{}a_{M-4,j}^{(\beta )} \\ \vdots &{} \vdots &{}\vdots &{} &{}\vdots &{}\vdots \\ a_{M-3,j}^{(\beta )} &{}a_{M-4,j}^{(\beta )} &{}a_{M-5,j}^{(\beta )} &{}\cdots &{}a_{0,j}^{(\beta )} &{}a_{1,j}^{(\beta )} \\ a_{M-2,j}^{(\beta )} &{}a_{M-3,j}^{(\beta )} &{}a_{M-4,j}^{(\beta )} &{}\cdots &{}a_{1,j}^{(\beta )} &{}a_{0,j}^{(\beta )} \end{array}\right) . \end{aligned}$$

It follows from Lemma 6 that the matrix \({{\textbf {A}}}\) is a real symmetric positive definite matrix.
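Since \({\textbf{A}}\) is determined entirely by the coefficients \(a^{(\beta )}_{m,j}\), it can be assembled from its first block column. The sketch below uses placeholder coefficients rather than the actual fractional Laplacian weights; only the block-Toeplitz-with-Toeplitz-blocks layout is the point.

```python
import numpy as np
from scipy.linalg import toeplitz

def assemble_bttb(a, m):
    """Assemble the (m-1)^2 x (m-1)^2 BTTB matrix from coefficients a[k, j].

    a : (m-1, m-1) array standing in for a_{k,j}^{(beta)} (placeholder values).
    Block (p, q) of the result is A_{|p-q|}, and each A_j is the symmetric
    Toeplitz matrix with first column a[:, j].
    """
    blocks = [toeplitz(a[:, j]) for j in range(m - 1)]
    return np.block([[blocks[abs(p - q)] for q in range(m - 1)]
                     for p in range(m - 1)])

m = 6
idx = np.arange(m - 1)
a = 1.0 / (1.0 + idx[:, None] ** 2 + idx[None, :] ** 2)  # placeholder coefficients
A = assemble_bttb(a, m)
print(np.allclose(A, A.T))  # BTTB with symmetric blocks is symmetric
```

In practice one rarely forms \({\textbf{A}}\) explicitly: the BTTB structure allows matrix-vector products in \({\mathcal {O}}(M^2\log M)\) operations by embedding \({\textbf{A}}\) into a block circulant matrix and applying two-dimensional FFTs, which is the standard way to make Krylov solvers for (28) efficient.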

Now we show the unconditional stability of the fully discrete scheme (27).

Theorem 3

Let \(1< \beta < 2\) and \(0< \alpha < 1\). The finite difference scheme (27) for the fractional diffusion equation (2) is unconditionally stable.

Proof

Let \({\widetilde{U}}^{n}_{jk}\) be a perturbed approximation of \(U^{n}_{jk}\), the exact solution to the fully discrete scheme in (27) at \((x_{j}, y_k, t_{n})\). It is evident that the perturbation \({\epsilon }^{n}_{jk}=\widetilde{U}^{n}_{jk}-U^{n}_{jk}\ (0 \leqslant j, k \leqslant M, 1 \leqslant n \leqslant N)\) satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} c^{(\alpha )}_{1,1}\epsilon ^{1}_{jk}-c^{(\alpha )}_{1,1}\epsilon ^{0}_{jk}+\left( -\Delta _h\right) ^{\frac{\beta }{2}}\epsilon ^{1}_{jk} =0, 1\leqslant j, k \leqslant M-1, \\ c^{(\alpha )}_{n,n}\epsilon ^{n}_{jk}+\sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i,n}-c^{(\alpha )}_{i+1,n})\epsilon ^{i}_{jk}-c^{(\alpha )}_{1,n}\epsilon ^{0}_{jk}+\left( -\Delta _h\right) ^{\frac{\beta }{2}}\epsilon ^{n}_{jk}=0, \\ 2\leqslant n \leqslant N,\,1 \leqslant j, k \leqslant M-1. \end{array} \right. \end{aligned}$$
(29)

Now we show that \(\Vert \epsilon ^n \Vert _{L_h^2} \leqslant \Vert \epsilon ^{0} \Vert _{L_h^2} \ (n \geqslant 0)\) via mathematical induction. The case with \(n=0\) is trivial. For \(n = 1\), taking the inner product of the first equation of (29) with \(\epsilon ^1\) gives

$$\begin{aligned} \left( c^{(\alpha )}_{1,1}\epsilon ^{1},\epsilon ^1\right) _{h^2} -\left( c^{(\alpha )}_{1,1}\epsilon ^{0},\epsilon ^1\right) _{h^2} +\left( \left( -\Delta _h\right) ^{\frac{\beta }{2}}\epsilon ^{1}, \epsilon ^1\right) _{h^2} = 0. \end{aligned}$$
(30)

Note that Lemma 6 gives \(\left( \left( -\Delta _h\right) ^{\frac{\beta }{2}}\epsilon ^{1}, \epsilon ^1\right) _{h^2} \geqslant 0\). By the Cauchy inequality, we have

$$\begin{aligned} c^{(\alpha )}_{1,1} \Vert \epsilon ^1 \Vert _{L_h^2}^2 =\left( c^{(\alpha )}_{1,1}\epsilon ^{1},\epsilon ^1\right) _{h^2} \leqslant \left( c^{(\alpha )}_{1,1}\epsilon ^{0}, \epsilon ^1\right) _{h^2} \leqslant c^{(\alpha )}_{1,1} \Vert \epsilon ^{0} \Vert _{L_h^2} \Vert \epsilon ^1 \Vert _{L_h^2}, \end{aligned}$$

which yields that

$$\begin{aligned} \Vert \epsilon ^1 \Vert _{L_h^2} \leqslant \Vert \epsilon ^{0} \Vert _{L_h^2}. \end{aligned}$$
(31)

Assume that \(\Vert \epsilon ^m \Vert _{L_h^2} \leqslant \Vert \epsilon ^{0} \Vert _{L_h^2}\) holds for \(m = 1, \cdots , n-1\). It follows from the second equation in (29) that

$$\begin{aligned} \left( c^{(\alpha )}_{n,n}\epsilon ^{n},\epsilon ^n\right) _{h^2} +\sum \limits _{i=1}\limits ^{n-1}\left( (c^{(\alpha )}_{i,n}-c^{(\alpha )}_{i+1,n})\epsilon ^{i}, \epsilon ^n\right) _{h^2} -\left( c^{(\alpha )}_{1,n}\epsilon ^{0}, \epsilon ^n\right) _{h^2} +\left( \left( -\Delta _h\right) ^{\frac{\beta }{2}}\epsilon ^{n}, \epsilon ^n\right) _{h^2} = 0. \end{aligned}$$
(32)

Applying Lemma 6 and the Cauchy inequality, we have

$$\begin{aligned} \begin{aligned} c^{(\alpha )}_{n,n}\Vert \epsilon ^n \Vert _{L_h^2}^2&\leqslant \sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n})\left( \epsilon ^{i}, \epsilon ^n\right) _{h^2} + c^{(\alpha )}_{1,n}\left( \epsilon ^{0}, \epsilon ^n\right) _{h^2} \\&\leqslant \sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n})\Vert \epsilon ^i \Vert _{L_h^2}\Vert \epsilon ^n \Vert _{L_h^2} + c^{(\alpha )}_{1,n}\Vert \epsilon ^{0} \Vert _{L_h^2}\Vert \epsilon ^n \Vert _{L_h^2} \\&\leqslant \Vert \epsilon ^n \Vert _{L_h^2} \left( \sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n})\Vert \epsilon ^{0} \Vert _{L_h^2} + c^{(\alpha )}_{1,n}\Vert \epsilon ^{0} \Vert _{L_h^2} \right) \\&\leqslant c^{(\alpha )}_{n,n}\Vert \epsilon ^{n} \Vert _{L_h^2}\Vert \epsilon ^{0} \Vert _{L_h^2}, \end{aligned} \end{aligned}$$

which yields

$$\begin{aligned} \Vert \epsilon ^n \Vert _{L_h^2} \leqslant \Vert \epsilon ^{0} \Vert _{L_h^2}. \end{aligned}$$

The proof is thus finished.

Now we show the convergence of the fully discrete scheme in (27).

Theorem 4

Assume that \(u(x,y,t) \in C^2([{\tilde{a}},T],{\mathcal {B}}^{2+\beta }({\widetilde{\varOmega }}))\) is the solution of (2). Let \(U_{jk}^n\, (0 \leqslant j, k \leqslant M,0 \leqslant n \leqslant N)\) be the numerical solution given by (27). Then the numerical error \(e^n = \{e_{jk}^n\}_{j,k=0}^M \in S_h\) with \(e_{jk}^n = U_{jk}^n - u_{jk}^n\, (0 \leqslant j, k \leqslant M,0 \leqslant n \leqslant N)\) satisfies

$$\begin{aligned} \Vert e^n \Vert _{L_h^2} \leqslant {C} \left( \tau ^{2-\alpha }+h^2\right) . \end{aligned}$$

Here C is a positive constant independent of h and \(\tau\).

Proof

It follows from (2) and (27) that

$$\begin{aligned} \left\{ \begin{array}{ll} c^{(\alpha )}_{1,1}e^{1}_{jk}-c^{(\alpha )}_{1,1}e^{0}_{jk}+\left( -\Delta _h\right) ^{\frac{\beta }{2}}e^{1}_{jk} =R^{1}_{jk}, 1\leqslant j \leqslant M-1,1\leqslant k \leqslant M-1, \\ c^{(\alpha )}_{n,n}e^{n}_{jk}+\sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i,n}-c^{(\alpha )}_{i+1,n})e^{i}_{jk}-c^{(\alpha )}_{1,n}e^{0}_{jk}+\left( -\Delta _h\right) ^{\frac{\beta }{2}}e^{n}_{jk}=R^{n}_{jk}, \\ 2\leqslant n \leqslant N,\,1 \leqslant j \leqslant M-1,1 \leqslant k \leqslant M-1, \\ e^{0}_{jk}=0, 1 \leqslant j \leqslant M-1,1\leqslant k \leqslant M-1, \\ e^{n}_{jk}=0, (x_j,y_k)\in \partial \varOmega _h, 0\leqslant n\leqslant N. \end{array} \right. \end{aligned}$$
(33)

By Lemmas 2 and  5, it is evident that for \(1\leqslant n \leqslant N\) and \(1 \leqslant j, k \leqslant M-1\),

$$\begin{aligned} \left|R_{jk}^n\right|\leqslant \widetilde{C} (\tau ^{2-\alpha }+h^2) \end{aligned}$$
(34)

with \(\widetilde{C}\) being a positive constant independent of h and \(\tau\). Now we prove

$$\begin{aligned} \Vert e^n \Vert _{L_h^2} \leqslant \frac{\Gamma (1-\alpha )}{{\tilde{a}}^{\alpha }} \max \limits _{1 \leqslant \ell \leqslant n} \left\{ (\ell \tau )^{\alpha }\Vert R^\ell \Vert _{L_h^2}\right\} , \ 0\leqslant n \leqslant N \end{aligned}$$
(35)

via mathematical induction. When \(n = 0\), it is obvious that

$$\begin{aligned} \Vert e^0 \Vert _{L_h^2} = 0. \end{aligned}$$
(36)

When \(n = 1\), taking the inner product of the first equation of (33) with \(e^1\) gives

$$\begin{aligned} \left( c^{(\alpha )}_{1,1}e^{1}, e^1\right) _{h^2} -\left( c^{(\alpha )}_{1,1}e^{0},e^1\right) _{h^2} +\left( \left( -\Delta _h\right) ^{\frac{\beta }{2}}e^{1}, e^1\right) _{h^2} = \left( R^{1}, e^1\right) _{h^2}. \end{aligned}$$
(37)

It follows from the above equation and Lemma 6 that

$$\begin{aligned} c^{(\alpha )}_{1,1} \Vert e^1 \Vert _{L_h^2}^2 =\left( c^{(\alpha )}_{1,1}e^{1}, e^1\right) _{h^2} \leqslant c^{(\alpha )}_{1,1}\Vert e^{0} \Vert _{L_h^2}\Vert e^1 \Vert _{L_h^2}+\Vert R^1 \Vert _{L_h^2} \Vert e^1 \Vert _{L_h^2}, \end{aligned}$$

where the Cauchy inequality is applied. As a result, (36) and Remark 1 yield that

$$\begin{aligned} \Vert e^1 \Vert _{L_h^2} \leqslant \Vert e^{0} \Vert _{L_h^2} + \frac{1}{c^{(\alpha )}_{1,1}} \Vert R^1 \Vert _{L_h^2} \leqslant \frac{\Gamma (1- \alpha )}{{{\tilde{a}}}^{\alpha }} \tau ^{\alpha }\Vert R^1 \Vert _{L_h^2}. \end{aligned}$$
(38)

Assume that (35) holds for \(1 \leqslant n \leqslant m-1\). For the case with \(n=m\), taking the inner product of the second equation of (33) with \(e^n\) gives

$$\begin{aligned} \begin{aligned}&\left( c^{(\alpha )}_{n,n}e^{n},e^n\right) _{h^2} +\sum \limits _{i=1}\limits ^{n-1}\left( (c^{(\alpha )}_{i,n}-c^{(\alpha )}_{i+1,n})e^{i}, e^n\right) _{h^2} -\left( c^{(\alpha )}_{1,n}e^{0}, e^n\right) _{h^2} +\left( \left( -\Delta _h\right) ^{\frac{\beta }{2}}e^{n}, e^n\right) _{h^2} = \left( R^{n}, e^n\right) _{h^2}. \end{aligned} \end{aligned}$$

Combining (36), Lemma 6, and Remark 1 gives

$$\begin{aligned} \begin{aligned} c^{(\alpha )}_{n,n}\Vert e^n \Vert _{L_h^2}^2&\leqslant \sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n})\left( e^{i}, e^n\right) _{h^2} + c^{(\alpha )}_{1,n}\left( e^{0}, e^n\right) _{h^2} + \left( R^{n}, e^n\right) _{h^2} \\&\leqslant \sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n})\Vert e^i \Vert _{L_h^2}\Vert e^n \Vert _{L_h^2} + c^{(\alpha )}_{1,n}\left( \frac{1}{c^{(\alpha )}_{1,n}}\Vert R^{n} \Vert _{L_h^2}\Vert e^n \Vert _{L_h^2}\right) , \end{aligned} \end{aligned}$$

where the Cauchy inequality is applied. Therefore,

$$\begin{aligned} \begin{aligned} c^{(\alpha )}_{n,n}\Vert e^n \Vert _{L_h^2}&\leqslant \sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n})\frac{\Gamma (1-\alpha )}{{\tilde{a}}^{\alpha }}\max _{1\leqslant \ell \leqslant i} \left\{ (\ell \tau )^{\alpha }\Vert R^\ell \Vert _{L_h^2}\right\} \\ &\quad\,+ c^{(\alpha )}_{1,n}\frac{\Gamma (1-\alpha )}{{\tilde{a}}^{\alpha }}(n\tau )^{\alpha }\Vert R^{n} \Vert _{L_h^2} \\&\leqslant \left[ \sum \limits _{i=1}\limits ^{n-1}(c^{(\alpha )}_{i+1,n}-c^{(\alpha )}_{i,n}) + c^{(\alpha )}_{1,n} \right] \frac{\Gamma (1-\alpha )}{{\tilde{a}}^{\alpha }}\max _{1\leqslant \ell \leqslant n}\left\{ (\ell \tau )^{\alpha }\Vert R^\ell \Vert _{L_h^2}\right\} \\&= c^{(\alpha )}_{n,n} \frac{\Gamma (1- \alpha )}{{\tilde{a}}^{\alpha }}\max _{1\leqslant \ell \leqslant n}\left\{ (\ell \tau )^{\alpha }\Vert R^\ell \Vert _{L_h^2}\right\} . \end{aligned} \end{aligned}$$

As a result,

$$\begin{aligned} \begin{aligned} \Vert e^n \Vert _{L_h^2} \leqslant \frac{\Gamma (1- \alpha )}{{\tilde{a}}^{\alpha }}\max _{1\leqslant \ell \leqslant n}\left\{ (\ell \tau )^{\alpha }\Vert R^\ell \Vert _{L_h^2}\right\} \leqslant \frac{\Gamma (1- \alpha )}{{\tilde{a}}^{\alpha }}T^{\alpha } \widetilde{C}(\tau ^{2-\alpha }+h^2). \end{aligned} \end{aligned}$$

Consequently, there exists a positive constant C, such that

$$\begin{aligned} \Vert e^n \Vert _{L_h^2} \leqslant {C}\left( \tau ^{2-\alpha }+h^2\right) . \end{aligned}$$

This finishes the proof.

4 Numerical Experiments

In this section, we numerically demonstrate the aforementioned theoretical results on the convergence and numerical stability.

Example 1

Consider the following fractional diffusion equation in one dimension with \(\alpha \in (0,1)\) and \(\beta \in (1,2)\):

$$\begin{aligned} {}_{\text{CH}}\textrm{D}^{\alpha }_{1,t}u(x,t) -{}_{\text{RZ}}\textrm{D}^{\beta }_{x}u(x,t)=f(x,t), \ t \in (1,2] \end{aligned}$$

on a finite domain \(0 \leqslant x \leqslant 1\), with a given force term

$$\begin{aligned} \begin{aligned} f(x,t)&=\frac{6}{\Gamma {\left( 4-\alpha \right) }}x^4(1-x)^4\left( \log t\right) ^{3-\alpha }+\frac{\left( \log t\right) ^{3-\alpha }}{2\cos (\uppi \beta /2)} \\&\quad\times \sum _{l=1}^{4}\left( \frac{(-1)^l4!(4+l)!}{l!(4-l)!\Gamma (5+l-\beta )} \left( x^{4+l-\beta }+(1-x)^{4+l-\beta }\right) \right) . \end{aligned} \end{aligned}$$

The initial condition and the boundary condition are given by \(u(x,1)=0\) and \(u(0,t)=u(1,t)=0\), respectively. Its exact solution is

$$\begin{aligned} u(x,t)=(\log t)^3x^4(1-x)^4. \end{aligned}$$

The numerical error is defined as

$$\begin{aligned} {\text{Error}}=\sqrt{h\sum _{j=1}^{M-1}\left|u_j^N-u(x_j,t_N)\right|^2}. \end{aligned}$$

Tables 1 and 2 show the numerical errors and the convergence orders given by the fully discrete scheme (17). We can see that the scheme is stable and the numerical error is indeed \({\mathcal {O}}\left( \tau ^{2-\alpha }+h^2\right)\).
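The convergence orders in such tables are computed from errors on successively refined grids via \(\mathrm {order} = \log _2\left( E(h)/E(h/2)\right)\). A minimal sketch with illustrative error values (not the values from Tables 1 and 2):

```python
import math

# Hypothetical errors on grids h, h/2, h/4, h/8 for a second-order scheme;
# illustrative numbers only, not the paper's measured errors.
errors = [1.6e-3, 4.1e-4, 1.03e-4, 2.6e-5]

# Observed order between consecutive refinements: log2 of the error ratio.
orders = [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print([round(p, 2) for p in orders])  # approaches 2 for an O(h^2) scheme
```

The same computation with the temporal errors recovers the \((2-\alpha )\) order in time.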

Table 1 Spatial convergence of scheme (17) with \(\tau = 0.000\;05\) for Example 1
Table 2 Temporal convergence of scheme (17) with \(h= 0.000\;1\) for Example 1

Example 2

Consider the following fractional diffusion equation in two dimensions with \(\alpha \in (0,1)\) and \(\beta \in (1,2)\):

$$\begin{aligned} \left\{ \begin{array}{ll} {}_{\text{CH}}\textrm{D}^{\alpha }_{1,t}u(x,y,t)+\left( -\Delta \right) ^{\frac{\beta }{2}}u(x,y,t)=f(x,y,t), &{}(x,y)\in {\widetilde{\varOmega }},t \in (1,2], \\ u(x,y,t) = 0, &{}(x,y)\in {\mathbb {R}}^2\backslash {\widetilde{\varOmega }},t \in (1,2], \\ u(x,y,1) = 0, &{}(x,y)\in {\mathbb {R}}^2, \end{array} \right. \end{aligned}$$

where \({\widetilde{\varOmega }}=(-1,1)^2\) and the exact solution is set as \(u(x,y,t) = (\log t)^3(1-x^2)^4(1-y^2)^4\). The source term f is not explicitly known, so we compute it numerically on a very fine grid, evaluating \(f\approx f_h={}_{\text{CH}}\mathrm{D}^{\alpha }_{1,t}u(x,y,t)+\left( -\Delta _h\right) ^{\frac{\beta }{2}}u(x,y,t)\) with \(h = 2^{-8}\). The numerical error in the spatial direction is

$$ E(h) = \sqrt{h^2 \sum _{j=0}^{M} \sum _{k=0}^{M}\left|u_{jk}^N(h,\tau ) - u_{2j,2k}^N(h/2,\tau ) \right|^2}, $$

where \(\tau\) is small enough. The numerical error in the temporal direction is

$$ F(\tau ) = \sqrt{h^2 \sum _{j=0}^{M} \sum _{k=0}^{M}\left|u_{jk}^N(h,\tau ) - u_{jk}^{2N}(h,\tau /2) \right|^2}, $$

where h is small enough.
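The two-grid errors E(h) and \(F(\tau )\) compare a coarse-grid value with the fine-grid value at the same physical point, since the coarse node \((x_j,y_k)\) coincides with the fine node \((x_{2j},y_{2k})\). A minimal sketch of E(h), using a smooth profile plus an artificial \({\mathcal {O}}(h^2)\) perturbation in place of the actual numerical solution:

```python
import numpy as np

def two_grid_error(u_coarse, u_fine, h):
    # E(h) = sqrt(h^2 * sum |u_jk(h) - u_{2j,2k}(h/2)|^2): every coarse node
    # (x_j, y_k) coincides with the fine node (x_{2j}, y_{2k}).
    diff = u_coarse - u_fine[::2, ::2]
    return np.sqrt(h ** 2 * np.sum(np.abs(diff) ** 2))

def sample(M):
    # Placeholder "numerical solution" on (-1,1)^2: a smooth profile plus an
    # O(h^2) perturbation mimicking second-order spatial error.
    x = np.linspace(-1.0, 1.0, M + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    h = 2.0 / M
    return (1 - X ** 2) ** 4 * (1 - Y ** 2) ** 4 + h ** 2 * np.cos(X + Y)

M = 32
E_h = two_grid_error(sample(M), sample(2 * M), 2.0 / M)
E_h2 = two_grid_error(sample(2 * M), sample(4 * M), 1.0 / M)
print(np.log2(E_h / E_h2))  # close to 2: second-order spatial accuracy
```

\(F(\tau )\) works the same way with the temporal index halved instead of the spatial ones.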

Tables 3 and 4 display the errors and convergence orders of the finite difference scheme (27). We can observe that the numerical results are stable, of order \((2-\alpha )\) in time and of second order in space, which verifies Theorems 3 and 4.

Table 3 Spatial convergence of scheme (27) with \(\tau = 1/2^7\) for Example 2
Table 4 Temporal convergence of scheme (27) with \(h=1/2^6\) for Example 2