1 Introduction

The mobile/immobile model has been applied successfully to unsaturated transport through homogeneous media (see [2, 5, 11, 24, 30]). The main objective of this model is to account for mass exchanges between the moving fluid and zones where the fluid is assumed to be immobile (see [13]). It was shown in [6, 27] that the continuous time random walk is equivalent to the continuum mobile/immobile equation in the same way that Brownian motion is equivalent to the diffusion equation. Moreover, the continuous time random walk converges toward a limit that corresponds to a mobile/immobile continuum model. This convergence property is useful for making long-term predictions. To explicitly distinguish the mobile and immobile status using fractional dynamics, Schumer et al. [28] developed the following fractional mobile/immobile convection–diffusion model for the total concentration:

$$\begin{aligned} \frac{\partial v}{\partial t}(x,t)+\beta \frac{\partial ^{\alpha } v}{\partial t^{\alpha }}(x,t)= D \frac{\partial ^{2} v}{\partial x^{2}}(x,t)-V \displaystyle \frac{\partial v}{\partial x}(x,t), \end{aligned}$$
(1.1)

where v denotes the solute concentration in the total (mobile and immobile) phase, \(\beta >0\) is the fractional capacity coefficient, and V and D are the convection and diffusion coefficients for the mobile phase (and hence may be directly measured), with \(D>0\). The time drift term \({\partial v}/{\partial t}\) accounts for the motion time and thus conveniently distinguishes the status of particles (see also the discussion in [1]). The term \(\partial ^{\alpha }v/{\partial t^{\alpha }}\) represents the Caputo fractional derivative of order \(\alpha \), which is defined by

$$\begin{aligned} \frac{\partial ^{\alpha } v}{\partial t^{\alpha }}(x,t)=\frac{1}{{\varGamma }(1-\alpha )} \int _{0}^{t} \frac{\partial v}{\partial s}(x,s) (t-s)^{-\alpha }\mathrm{d}s, \qquad 0<\alpha <1. \end{aligned}$$
(1.2)

In recent years, many works have been devoted to investigating the applications of the fractional mobile/immobile equation. Goltz et al. [12] and Harvey et al. [15] found that the mobile/immobile equation predicts an exponential approach to an asymptotic state where the mobile mass remains a constant positive fraction of the injected mass. The work in [28] by Schumer et al. shows that the fractional mobile/immobile convection–diffusion Eq. (1.1) is equivalent to the previous model of mobile/immobile transport with a power law memory function and is the limiting equation that governs continuous time random walks with heavy tailed random waiting times. Zhang et al. [39] developed a time Langevin approach to solve a fractional mobile/immobile transport model combined with multiscaling superdiffusion. Zhang et al. [38] extended and tested the applicability of the fractional mobile/immobile convection–diffusion Eq. (1.1) by applying the approach developed in [39]. Meerschaert et al. [22] used a grid-free particle tracking approach to solve a multi-dimensional fractional mobile/immobile equation. The similarity between the fractional mobile/immobile convection–diffusion Eq. (1.1) and the multiple-rate mass transfer model (see [14]) was discussed by Schumer et al. [28] and Benson and Meerschaert [1]. It was proved that the fractional mobile/immobile convection–diffusion Eq. (1.1) is identical to the multiple-rate mass transfer model with infinite mean power-law memories.

With the rapid development of applications of the fractional mobile/immobile equation, numerical methods have become important for computing its solution. A numerical treatment of the fractional mobile/immobile convection–diffusion Eq. (1.1) with a non-homogeneous source term was given in [20]. The main purpose there was to present a stable implicit numerical method based on a basic finite difference discretization, and the accuracy of the proposed method is only of order \(\mathcal{O}(\tau +h)\), where \(\tau \) is the time step and h is the spatial step. In [21], a meshless approach based on radial basis functions (RBFs) for the spatial discretization and a semi-discrete scheme for the temporal discretization were developed for a two-dimensional fractional mobile/immobile transport model. Since the backward Euler method was adopted to discretize the first time derivative, the meshless approach developed in that paper is only first-order accurate in time. Recently, Zhang et al. [37] treated numerically a fractional mobile/immobile convection–diffusion equation with a time fractional derivative of Coimbra variable order. The method proposed there is only first-order accurate in both time and space, and it reduces to the one presented in [20] when the variable fractional order is a constant. In the above works, the L1 approximation formula (see [4, 23, 29]) was used to discretize the Caputo time fractional derivative \({\partial ^{\alpha } v}/{\partial t^{\alpha }}\) and the backward Euler method was applied to discretize the first time derivative. Consequently, the resulting approximation to the Caputo time fractional derivative is only of order \(2-\alpha \), which is less than two, and the overall temporal accuracy is only first order. This motivated us to look for a more accurate approximation to the Caputo time fractional derivative \({\partial ^{\alpha } v}/{\partial t^{\alpha }}\) and to construct a high-order numerical method for solving the fractional mobile/immobile convection–diffusion Eq. (1.1).

To demonstrate the effectiveness of our method and to broaden its range of applications, we consider here a class of more general fractional mobile/immobile convection–diffusion equations in which the convection coefficient V is spatially variable, i.e., \(V=V(x)\). This class of equations, together with its boundary and initial conditions, is given in the form

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \frac{\partial v}{\partial t}(x,t)+\beta \frac{\partial ^{\alpha } v}{\partial t^{\alpha }}(x,t)=D \frac{\partial ^{2} v}{\partial x^{2}}(x,t) -V(x) \displaystyle \frac{\partial v}{\partial x}(x,t)+f(x,t),\\ \quad (x,t)\in (0,L)\times (0,T],\\ v(0,t)=\phi _{0}(t),\quad v(L,t)=\phi _{L}(t),\qquad t\in (0,T],\\ v(x,0)=0, \qquad x\in [0,L], \end{array} \right. \end{aligned}$$
(1.3)

where the given functions V(x), f(x, t), \(\phi _{0}(t)\) and \(\phi _{L}(t)\) are sufficiently smooth in their respective domains. The very recent work [33] proposed a compact finite difference method for the problem (1.3), but the temporal accuracy of that method is still only of order \(2-\alpha \). In this paper, we shall present a high-order compact finite difference method that possesses second-order temporal accuracy and can be used efficiently to solve the fractional mobile/immobile problem (1.3) with a general convection coefficient V(x). Some of the related numerical aspects will be rigorously investigated as well.

A common technique for designing high-order numerical methods with second-order temporal accuracy for Caputo-type time fractional differential equations is to transform the corresponding differential equation into an equivalent integral or integro-differential form (cf. [3, 9, 17, 18, 32, 35]). However, it is difficult to extend this technique, with a rigorous theoretical analysis, to the present time fractional mobile/immobile problem (1.3), because the governing equation involves both the first time derivative \({\partial v}/{\partial t}\) and the Caputo time fractional derivative \({\partial ^{\alpha } v}/{\partial t^{\alpha }}\). A direct method without any transformation was given in [10], where a modified L1 approximation formula was used to discretize the Caputo time fractional derivative directly, but a rigorous convergence analysis for the corresponding difference scheme is not yet available. Dimitrov [8] presented a second-order implicit difference scheme for a one-dimensional Caputo-type time fractional subdiffusion equation by using the Grünwald formula to directly approximate weighted averages of the Caputo derivatives. However, the difference scheme derived there makes the error analysis considerably more complex, and it is unclear whether the main idea of that paper can be generalized to a high-order compact finite difference scheme.

Here, we shall present a new technique for designing a high-order compact finite difference method with second-order temporal accuracy for the problem (1.3), based on the weighted and shifted Grünwald formula applied directly to the Caputo time fractional derivative \({\partial ^{\alpha } v}/{\partial t^{\alpha }}\). The high-order scheme derived in this way is very simple and effective for the problem (1.3), and it is also very convenient to use a technique of discrete energy analysis to carry out the stability and convergence analysis of the derived scheme. A similar technique was used in [16] to derive a third-order approximation formula for the Caputo time fractional derivative \({\partial ^{\alpha } v}/{\partial t^{\alpha }}\). However, our derivation in this paper is essentially different from that in [16]. Firstly, it is only assumed in this paper that the function \(v(\cdot ,t)\) is \(\mathcal{C}^{3}\)-continuous and \({\partial ^{k} v}/{\partial t^{k}}(\cdot ,0)=0\) for \(k=1,2\), whereas the stronger condition that the function \(v(\cdot ,t)\) is \(\mathcal{C}^{5}\)-continuous and \({\partial ^{k} v}/{\partial t^{k}}(\cdot ,0)=0\) for \(k=1,2,\ldots ,5\) is required in [16]. Secondly, since we obtain a detailed asymptotic expansion for the truncation error of the Grünwald approximation, it is easy to apply a Richardson extrapolation to further enhance the temporal accuracy of the computed solution from second order to third order, requiring only that the function \(v(\cdot ,t)\) is \(\mathcal{C}^{4}\)-continuous and \({\partial ^{k} v}/{\partial t^{k}}(\cdot ,0)=0\) for \(k=1,2,3\). As a result, we obtain an extrapolation algorithm that possesses third-order temporal accuracy under a weaker condition than that in [16]. It should be mentioned that, because of the first time derivative term \({\partial v}/{\partial t}\), it is difficult to apply the approach of [16] to the present problem (1.3) and achieve third-order temporal accuracy with a rigorous convergence proof.

The outline of the paper is as follows. In Sect. 2, we derive a second-order approximation to the Caputo time fractional derivative \({\partial ^{\alpha } v}/{\partial t^{\alpha }}\) by introducing an asymptotic expansion for the truncation error of the Grünwald approximation. We then discretize the fractional mobile/immobile problem (1.3) into a compact finite difference system. The local truncation error and the solvability of the resulting finite difference scheme are discussed in Sect. 3. In Sect. 4, we use a technique of discrete energy analysis to prove the stability and convergence of the method and to obtain an explicit error estimate for the numerical solution. The error estimate shows that the proposed method has second-order temporal accuracy and fourth-order spatial accuracy. Improvement of the temporal accuracy based on Richardson extrapolation is presented in Sect. 5, where a Richardson extrapolation algorithm is developed to enhance the temporal accuracy of the computed solution to third order. In Sect. 6, we give some applications to two model problems. Numerical results are included to demonstrate the accuracy of the compact finite difference method and the high efficiency of the Richardson extrapolation algorithm. The final section contains some concluding remarks.

2 Compact finite difference method

Without loss of generality, we have assumed the homogeneous initial condition \(v(x,0)=0\) in the problem (1.3). When the problem is given with a nonhomogeneous initial condition, the substitution \(z(x,t)=v(x,t)-v(x,0)\) transforms it into a problem of the same form with a homogeneous initial condition. Hence, our investigation applies directly to the nonhomogeneous initial condition without any complication.
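
To make this reduction explicit, write \(w(x)=v(x,0)\) and assume that w is twice differentiable. Since the Caputo derivative of a function that is constant in t vanishes, \(z(x,t)=v(x,t)-w(x)\) satisfies

$$\begin{aligned} \frac{\partial z}{\partial t}(x,t)+\beta \frac{\partial ^{\alpha } z}{\partial t^{\alpha }}(x,t)= D \frac{\partial ^{2} z}{\partial x^{2}}(x,t)-V(x) \frac{\partial z}{\partial x}(x,t)+\left( f(x,t)+D w''(x)-V(x) w'(x)\right) , \end{aligned}$$

together with \(z(x,0)=0\) and the boundary data \(\phi _{0}(t)-w(0)\) and \(\phi _{L}(t)-w(L)\), which is again a problem of the form (1.3).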

In general, a direct discretization of the problem (1.3) by a high-order compact difference approximation is rather complicated because of the dependence of V(x) on the spatial variable x. We therefore use an indirect approach by transforming (1.3) into a special and equivalent form. The main advantage of this approach is that it yields a very simple and effective high-order scheme for the variable coefficient problem (1.3). The approach is similar to that used in [19, 36] to treat other convection–diffusion problems with constant coefficients. Let

$$\begin{aligned} k(x)=\exp \left( -\frac{1}{2D}{\int _{0}^{x}V(s)\mathrm{d}s}\right) , \qquad u(x,t)=k(x) v(x,t). \end{aligned}$$

We transform the problem (1.3) into

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \frac{\partial u}{\partial t}(x,t)+\beta \frac{\partial ^{\alpha } u}{\partial t^{\alpha }}(x,t)=D \displaystyle \frac{\partial ^{2} u}{\partial x^{2}}(x,t)+q(x) u(x,t)+g(x,t),\\ \quad (x,t)\in (0,L)\times (0,T],\\ u(0,t)=\phi _{0}^{*}(t),\quad u(L,t)=\phi _{L}^{*}(t),\qquad t\in (0,T],\\ u(x,0)=0, \qquad x\in [0,L], \end{array} \right. \end{aligned}$$
(2.1)

where

$$\begin{aligned} q(x)= & {} \frac{1}{2} \left( \frac{\mathrm{d} V}{\mathrm{d} x}(x)-\frac{V^{2}(x)}{2D}\right) ,\qquad g(x,t)=k(x)f(x,t),\nonumber \\ \phi _{0}^{*}(t)= & {} \phi _{0}(t),\qquad \phi _{L}^{*}(t)=k(L) \phi _{L}(t). \end{aligned}$$
(2.2)

It is clear that v(x, t) is a solution of the original problem (1.3) if and only if u(x, t) is a solution of the transformed problem (2.1). Our compact finite difference method for the problem (1.3) is based on the above equivalent form (2.1).
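
For the reader's convenience, the identity behind (2.2) can be verified directly: since \(k'(x)=-\frac{V(x)}{2D}\,k(x)\), one has

$$\begin{aligned} k v_{x}=u_{x}+\frac{V}{2D}\,u, \qquad k v_{xx}=u_{xx}+\frac{V}{D}\,u_{x}+\left( \frac{V'}{2D}+\frac{V^{2}}{4D^{2}}\right) u, \end{aligned}$$

so that

$$\begin{aligned} D k v_{xx}-V k v_{x}=D u_{xx}+\frac{1}{2}\left( V'-\frac{V^{2}}{2D}\right) u=D u_{xx}+q(x)\,u. \end{aligned}$$

Multiplying the governing equation of (1.3) by k(x) then gives (2.1), because k does not depend on t.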

For a positive integer N, we let \(\tau =T/N\) be the time step. Denote \(t_{n}=n\tau \) \((0\le n\le N)\) and \(t_{n-\frac{1}{2}}=(n-\frac{1}{2})\tau \) \((1\le n\le N)\). Given a grid function \(w=\{w^{n}~|~0\le n\le N\}\), we define

$$\begin{aligned} w^{n-\frac{1}{2}}=\frac{1}{2}\left( w^{n}+w^{n-1}\right) ,\qquad \delta _{t}w^{n-\frac{1}{2}}=\frac{1}{\tau }\left( w^{n}-w^{n-1}\right) . \end{aligned}$$
(2.3)

Let \(h=L/M\) be the spatial step, where M is a positive integer. We partition [0, L] into a mesh by the mesh points \(x_{i}=ih\) \((0\le i\le M)\). For any grid function \(w=\{w_{i}~|~0\le i\le M\}\), we define spatial difference operators

$$\begin{aligned} \delta _{x}w_{i-\frac{1}{2}}= & {} \frac{1}{h}\left( w_{i}-w_{i-1} \right) ,\qquad \delta _{x}^{2}w_{i}=\frac{1}{h^{2}} \left( w_{i+1}-2w_{i}+w_{i-1}\right) ,\\ \mathcal{H}_{x}w_{i}= & {} \left( I+\frac{h^{2}}{12}\delta _{x}^{2}\right) w_{i}, \end{aligned}$$

where I denotes the identity operator.
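
As an illustration, the actions of \(\delta _{x}^{2}\) and \(\mathcal{H}_{x}\) on the interior nodes can be assembled as tridiagonal matrices. The following is a minimal Python sketch (the function names are ours; contributions from the two boundary nodes are omitted here, since in the scheme below they enter through the known boundary values):

```python
import numpy as np
from scipy.sparse import diags

def delta_x2(M, h):
    """Second difference operator delta_x^2 restricted to the interior nodes x_1, ..., x_{M-1}."""
    return diags([np.ones(M - 2), -2.0 * np.ones(M - 1), np.ones(M - 2)], [-1, 0, 1]) / h**2

def H_x(M):
    """Compact operator H_x = I + (h^2/12) delta_x^2, i.e. tridiag(1, 10, 1)/12, on the interior nodes."""
    return diags([np.ones(M - 2), 10.0 * np.ones(M - 1), np.ones(M - 2)], [-1, 0, 1]) / 12.0
```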

2.1 A second-order approximation to the Caputo time fractional derivative

For any \(\alpha \in (0,1)\) and any nonnegative integer l, the \((l+\alpha )\)th-order Caputo and Riemann–Liouville time fractional derivatives of the function y(t) on [0, T] are defined as

$$\begin{aligned} _{~0}^{C}\mathcal{D}_{t}^{l+\alpha }y(t)= & {} \displaystyle \frac{1}{{\varGamma }(1-\alpha )} \int _{0}^{t} \frac{\mathrm{d}^{l+1} y}{\mathrm{d} t^{l+1}}(s) (t-s)^{-\alpha }\mathrm{d}s,\nonumber \\ _{~~0}^{RL}\mathcal{D}_{t}^{l+\alpha }y(t)= & {} \displaystyle \frac{1}{{\varGamma }(1-\alpha )} \frac{\mathrm{d}^{l+1} ~}{\mathrm{d} t^{l+1}}\int _{0}^{t} y(s) (t-s)^{-\alpha }\mathrm{d}s. \end{aligned}$$
(2.4)

Also we introduce the shifted Grünwald difference operator with the step \(\tau \) for the function y(t):

$$\begin{aligned} \delta _{\tau ,p}^{\alpha } y(t)= \tau ^{-\alpha } \sum _{k=0}^{\left[ \frac{t}{\tau }\right] +p}w_{k}^{(\alpha )} y(t-(k-p)\tau ), \end{aligned}$$
(2.5)

where p is an integer and \(w_{k}^{(\alpha )}\), called the Grünwald weight, is defined as the kth coefficient of the binomial series \((1-z)^{\alpha }=\sum \nolimits _{k=0}^{\infty } w_{k}^{(\alpha )} z^{k}\), as follows:

$$\begin{aligned} w_{0}^{(\alpha )}=1, \qquad w_{k}^{(\alpha )}=(-1)^{k} {\alpha \atopwithdelims ()k}~(k\ge 1). \end{aligned}$$
(2.6)

It is clear that the Grünwald weights \(w_{k}^{(\alpha )}\) can be computed recursively by

$$\begin{aligned} w_{0}^{(\alpha )}=1, \qquad w_{k}^{(\alpha )}=\left( 1-\frac{\alpha +1}{k}\right) w_{k-1}^{(\alpha )}, \end{aligned}$$
(2.7)

and they have the following properties

$$\begin{aligned} w_{0}^{(\alpha )}>0, \qquad w_{1}^{(\alpha )}<w_{2}^{(\alpha )}<\cdots<w_{k}^{(\alpha )}<\cdots <0,\qquad \sum \limits _{k=0}^{\infty } w_{k}^{(\alpha )} =0. \end{aligned}$$
(2.8)
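
As a small illustration, the recursion (2.7) and the properties (2.8) are easy to check numerically. The following Python sketch (the function name is ours) computes the weights and verifies the sign and monotonicity properties for a sample order:

```python
import numpy as np

def grunwald_weights(alpha, n):
    """Grunwald weights w_k^(alpha), k = 0, ..., n, computed by the recursion (2.7)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = (1.0 - (alpha + 1.0) / k) * w[k - 1]
    return w

w = grunwald_weights(0.6, 5000)
assert w[0] > 0.0 and np.all(w[1:] < 0.0) and np.all(np.diff(w[1:]) > 0.0)
print(w[1], w.sum())   # w_1 = -alpha; the partial sums are positive and tend to 0 as n grows
```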

Lemma 2.1

Let \(\alpha \in (0,1)\), and let r be a positive integer. Suppose that \(y(t)\in \mathcal{C}^{r}[0,T]\), \(y^{(r+1)}(t)\in L^{1}[0,T]\) and \(y^{(k)}(0)=0\) \((k=0,1,\ldots ,r)\). Then

$$\begin{aligned} \delta _{\tau ,p}^{\alpha } y(t)= & {} {_{~0}^{C}\mathcal{D}_{t}^{\alpha }}y(t)+\sum _{k=0}^{r-1}a_{k,p}{_{~0}^{C}\mathcal{D}_{t}^{k+\alpha }}y(t)\tau ^{k} +\mathcal{O}(\tau ^{r}),\nonumber \\&t\in [-p\tau ,T],\quad p=-1,0,\nonumber \\ \end{aligned}$$
(2.9)

where \(a_{k,p}\) are the coefficients of the power series of the function \(\omega _{\alpha ,p}=\left( \frac{1-\mathrm{e}^{-z}}{z} \right) ^{\alpha } \mathrm{e}^{pz}-1\), i.e., \(\omega _{\alpha ,p}=\sum \nolimits _{k=0}^{\infty }a_{k,p}z^{k}\), and in particular,

$$\begin{aligned} a_{0,p}=0,\qquad a_{1,p}=p-\frac{\alpha }{2}, \qquad a_{2,p}=\frac{\alpha }{24}+\frac{1}{2} \left( p-\frac{\alpha }{2} \right) ^{2}. \end{aligned}$$
(2.10)

Proof

Firstly, the coefficients in (2.10) follow from (38) in [41]. Under the condition of the lemma, we have from Theorem 1 in [41] that

$$\begin{aligned} \delta _{\tau ,p}^{\alpha } y(t)= & {} {_{~~0}^{RL}\mathcal{D}_{t}^{\alpha }}y(t)+\sum _{k=0}^{r-1}a_{k,p}{_{~~0}^{RL}\mathcal{D}_{t}^{k+\alpha }}y(t)\tau ^{k} +\mathcal{O}(\tau ^{r}),\nonumber \\&t\in [-p\tau ,T],~~p=0,-1. ~~~ \end{aligned}$$
(2.11)

The Riemann–Liouville and Caputo fractional derivatives are related as (see [7], Page 53)

$$\begin{aligned} {_{~~0}^{RL}\mathcal{D}_{t}^{k+\alpha }}y(t)={_{~0}^{C}\mathcal{D}_{t}^{k+\alpha }}y(t)+\sum _{l=0}^{k} \frac{y^{(l)}(0)}{{\varGamma }(l-\alpha -k+1)} t^{l-\alpha -k}. \end{aligned}$$

We have from \(y^{(k)}(0)=0\) \((k=0,1,\ldots ,r)\) that

$$\begin{aligned} {_{~~0}^{RL}\mathcal{D}_{t}^{k+\alpha }}y(t)={_{~0}^{C}\mathcal{D}_{t}^{k+\alpha }}y(t), \qquad k=0,1,\ldots ,r-1. \end{aligned}$$
(2.12)

Substituting this relation into (2.11) leads to the desired result (2.9). \(\square \)
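
The coefficients in (2.10) can also be checked symbolically for a concrete order, say \(\alpha =1/2\). A small SymPy sketch (independent of the proof above):

```python
import sympy as sp

z = sp.symbols('z')
alpha = sp.Rational(1, 2)
for p in (0, -1):
    # omega_{alpha,p}(z) = ((1 - e^{-z})/z)^alpha * e^{pz} - 1, expanded around z = 0
    omega = ((1 - sp.exp(-z)) / z)**alpha * sp.exp(p * z) - 1
    series = sp.series(omega, z, 0, 3).removeO()
    a1 = p - alpha / 2                                        # a_{1,p} from (2.10)
    a2 = alpha / 24 + sp.Rational(1, 2) * (p - alpha / 2)**2  # a_{2,p} from (2.10)
    assert sp.simplify(series - (a1 * z + a2 * z**2)) == 0
    print(p, sp.expand(series))
```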

Based on Lemma 2.1, we immediately obtain the following approximation to the \(\alpha \)th-order Caputo time fractional derivative \({_{~0}^{C}\mathcal{D}_{t}^{\alpha }}y(t)\).

Theorem 2.1

Let \(\alpha \in (0,1)\), and define

$$\begin{aligned} a_{0}^{(\alpha )}=\left( 1+\frac{\alpha }{2}\right) w_{0}^{(\alpha )}, \qquad a_{k}^{(\alpha )}=\left( 1+\frac{\alpha }{2}\right) w_{k}^{(\alpha )} -\frac{\alpha }{2}w_{k-1}^{(\alpha )}~~(k\ge 1). \end{aligned}$$
(2.13)

Suppose that \(y(t)\in \mathcal{C}^{r}[0,T]\), \(y^{(r+1)}(t)\in L^{1}[0,T]\) and \(y^{(k)}(0)=0\) \((k=0,1,\ldots ,r)\), where \(r=1,2\) or 3. Then we have

$$\begin{aligned} {_{~0}^{C}\mathcal{D}_{t}^{\alpha }}y(t_{n})=\tau ^{-\alpha } \sum _{k=0}^{n} a_{k}^{(\alpha )} y(t_{n}-k\tau )+R_{r}^{n}(\tau ),\qquad 1\le n\le N, \end{aligned}$$
(2.14)

where

$$\begin{aligned} R_{r}^{n}(\tau )=\left\{ \begin{array}{l@{\quad }l} \mathcal{O}(\tau ^{r}), &{} \text{if } r=1 \text{ or } 2,\\ \displaystyle \frac{(5+3\alpha )\alpha }{24}{_{~0}^{C}\mathcal{D}_{t}^{2+\alpha }}y(t_{n})\tau ^{2}+\mathcal{O}(\tau ^{3}), &{} \text{if } r=3. \end{array} \right. \end{aligned}$$
(2.15)

Proof

By the definition of \(a_{k}^{(\alpha )}\),

$$\begin{aligned} \tau ^{-\alpha } \sum _{k=0}^{n} a_{k}^{(\alpha )} y(t_{n}-k\tau )=\left( 1+\frac{\alpha }{2} \right) \delta _{\tau ,0}^{\alpha }y(t_{n}) -\frac{\alpha }{2} \delta _{\tau ,-1}^{\alpha }y(t_{n}),\qquad 1\le n\le N. \end{aligned}$$

We have from (2.9) with \(r=1\) that

$$\begin{aligned} \left( 1+\frac{\alpha }{2} \right) \delta _{\tau ,0}^{\alpha }y(t_{n}) -\frac{\alpha }{2} \delta _{\tau ,-1}^{\alpha }y(t_{n})={_{~0}^{C}\mathcal{D}_{t}^{\alpha }}y(t_{n}) +\mathcal{O}(\tau ), \qquad 1\le n\le N. \end{aligned}$$

When \(r=2\) or 3, we have also from (2.9) that

$$\begin{aligned}&\displaystyle \left( 1+\frac{\alpha }{2} \right) \delta _{\tau ,0}^{\alpha }y(t_{n}) -\frac{\alpha }{2} \delta _{\tau ,-1}^{\alpha }y(t_{n})\nonumber \\&\quad = \displaystyle {_{~0}^{C}\mathcal{D}_{t}^{\alpha }}y(t_{n})+\sum _{k=0}^{r-1}\left( \left( 1+\frac{\alpha }{2} \right) a_{k,0}-\frac{\alpha }{2} a_{k,-1}\right) {_{~0}^{C}\mathcal{D}_{t}^{k+\alpha }}y(t_{n})\tau ^{k} +\mathcal{O}(\tau ^{r}),\nonumber \\&\quad 1\le n\le N. \end{aligned}$$

It follows from (2.10) that

$$\begin{aligned} \displaystyle \left( 1+\frac{\alpha }{2} \right) \delta _{\tau ,0}^{\alpha }y(t_{n}) -\frac{\alpha }{2} \delta _{\tau ,-1}^{\alpha }y(t_{n})= \displaystyle {_{~0}^{C}\mathcal{D}_{t}^{\alpha }}y(t_{n})+\mathcal{O}(\tau ^{2}),\qquad 1\le n\le N \end{aligned}$$

if \(r=2\), and

$$\begin{aligned}&\left( 1+\frac{\alpha }{2} \right) \delta _{\tau ,0}^{\alpha }y(t_{n}) -\frac{\alpha }{2} \delta _{\tau ,-1}^{\alpha }y(t_{n})={_{~0}^{C}\mathcal{D}_{t}^{\alpha }}y(t_{n})\\&\quad -\frac{(5+3\alpha )\alpha }{24}{~_{~0}^{C}\mathcal{D}_{t}^{2+\alpha }}y(t_{n})\tau ^{2}+\mathcal{O}(\tau ^{3}),\\&\quad 1\le n\le N \end{aligned}$$

if \(r=3\). This proves (2.14) and (2.15). \(\square \)
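
To illustrate Theorem 2.1, the second-order accuracy of the approximation (2.13)–(2.14) can be observed numerically on a test function whose Caputo derivative is known in closed form, for instance \(y(t)=t^{4}\), which satisfies the hypotheses with \(r=3\). A minimal Python sketch (the function names are ours):

```python
import numpy as np
from scipy.special import gamma

def grunwald_weights(alpha, n):
    w = np.empty(n + 1); w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = (1.0 - (alpha + 1.0) / k) * w[k - 1]
    return w

def caputo_wsgd(y_vals, alpha, tau):
    """Approximation (2.14) at t_n = n*tau with the weights a_k^(alpha) of (2.13);
    y_vals = [y(t_n), y(t_n - tau), ..., y(0)]."""
    n = len(y_vals) - 1
    w = grunwald_weights(alpha, n)
    a = (1.0 + alpha / 2.0) * w
    a[1:] -= (alpha / 2.0) * w[:-1]
    return tau**(-alpha) * np.dot(a, y_vals)

alpha, T = 0.4, 1.0
y = lambda t: t**4                                     # y, y', y'', y''' all vanish at t = 0
exact = gamma(5) / gamma(5 - alpha) * T**(4 - alpha)   # Caputo derivative of t^4 at t = T
for N in (40, 80, 160):
    tau = T / N
    t = T - tau * np.arange(N + 1)                     # t_N, t_N - tau, ..., 0
    print(N, abs(caputo_wsgd(y(t), alpha, tau) - exact))   # errors decrease by about a factor of 4
```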

2.2 The derivation of the compact finite difference scheme

Now we derive a high-order compact finite difference scheme for solving problem (2.1). Assume that the solution u(x, t) of (2.1) is in \(\mathcal{C}^{6,3}([0,L]\times [0,T])\). Define the grid functions

$$\begin{aligned} U_{i}^{n}= & {} u(x_{i},t_{n}),\qquad W_{i}^{n}=\displaystyle \frac{\partial u}{\partial t}(x_{i},t_{n}), \qquad Z_{i}^{n}=\displaystyle \frac{\partial ^{2} u}{\partial x^{2}}(x_{i},t_{n}), \qquad q_{i}=q(x_{i}),\nonumber \\ g_{i}^{n}= & {} g({x}_{i},t_{n}),\qquad \phi _{0}^{*,n}=\phi _{0}^{*}(t_{n}),\qquad \quad \phi _{L}^{*,n}=\phi _{L}^{*}(t_{n}). \end{aligned}$$

In view of the definitions in (1.2) and (2.4), we have

$$\begin{aligned} \frac{\partial ^{\alpha }u}{\partial t^{\alpha }}(x,t)={_{~0}^{C}\mathcal{D}_{t}^{\alpha }}u(x,t). \end{aligned}$$
(2.16)

An application of the approximation (2.14) yields that

$$\begin{aligned} \frac{\partial ^{\alpha }u}{\partial t^{\alpha }}(x_{i},t_{n})=\tau ^{-\alpha } \sum _{k=0}^{n} a_{k}^{(\alpha )} U_{i}^{n-k}+(R_{t}^{G})_{i}^{n}, \qquad 1\le i\le M-1, ~~1\le n\le N,\nonumber \\ \end{aligned}$$
(2.17)

where \((R_{t}^{G})_{i}^{n}\) is the corresponding local truncation error. Substituting (2.17) into the governing equation of (2.1), we obtain

$$\begin{aligned} \displaystyle W_{i}^{n}+\beta \tau ^{-\alpha } \sum \limits _{k=0}^{n} a_{k}^{(\alpha )} U_{i}^{n-k}= & {} D Z_{i}^{n}+q_{i}U_{i}^{n}+g_{i}^{n}-\beta (R_{t}^{G})_{i}^{n},\nonumber \\&\quad 1\le i\le M-1, ~~1\le n\le N. \end{aligned}$$
(2.18)

Similarly, on the time level \(n-1\), we have

$$\begin{aligned} \displaystyle W_{i}^{n-1}+\beta \tau ^{-\alpha } \sum \limits _{k=0}^{n-1} a_{k}^{(\alpha )} U_{i}^{n-k-1}= & {} D Z_{i}^{n-1}+q_{i}U_{i}^{n-1}+g_{i}^{n-1}-\beta (R_{t}^{G})_{i}^{n-1},\nonumber \\&\quad 1\le i\le M-1,~~2\le n\le N. \end{aligned}$$
(2.19)

Since \(\frac{\partial ^{\alpha } u}{\partial t^{\alpha }}(x,0)=0\) (see [8]), we have from (2.1) that \(W_{i}^{0}=D Z_{i}^{0}+q_{i}U_{i}^{0}+g_{i}^{0}\). This equality and \(U_{i}^{0}=0\) imply that (2.19) holds true also for \(n=1\) with \((R_{t}^{G})_{i}^{0}=0\). Taking the arithmetic mean of (2.18) and (2.19), we conclude that

$$\begin{aligned} \displaystyle W_{i}^{n-\frac{1}{2}}+\beta \tau ^{-\alpha } \sum \limits _{k=0}^{n-1} a_{k}^{(\alpha )} U_{i}^{n-k-\frac{1}{2}}= & {} D Z_{i}^{n-\frac{1}{2}}+q_{i}U_{i}^{n-\frac{1}{2}}+g_{i}^{n-\frac{1}{2}}-\beta (R_{t}^{G})_{i}^{n-\frac{1}{2}},\nonumber \\ 1\le & {} i\le M-1,~~1\le n\le N. \end{aligned}$$
(2.20)

An application of the Crank–Nicolson technique (see, e.g., [40]) gives

$$\begin{aligned} W_{i}^{n-\frac{1}{2}}=\delta _{t}U_{i}^{n-\frac{1}{2}}+(R_{t}^{c})_{i}^{n-\frac{1}{2}}, \qquad 1\le i\le M-1, ~~1\le n\le N, \end{aligned}$$
(2.21)

where

$$\begin{aligned} (R_{t}^{c})_{i}^{n-\frac{1}{2}}=\frac{\tau ^{2}}{16}\int _{0}^{1}\left( \frac{\partial ^{3}u}{\partial t^{3}}\left( {x}_{i},t_{n-\frac{1}{2}}+\frac{s\tau }{2}\right) +\frac{\partial ^{3}u}{\partial t^{3}}\left( {x}_{i},t_{n-\frac{1}{2}}-\frac{s\tau }{2}\right) \right) (1-s^{2})\mathrm{d}s.~~~~~~ \nonumber \\ \end{aligned}$$
(2.22)

This implies that

$$\begin{aligned} \displaystyle \delta _{t}U_{i}^{n-\frac{1}{2}}+\beta \tau ^{-\alpha } \sum \limits _{k=0}^{n-1} a_{k}^{(\alpha )} U_{i}^{n-k-\frac{1}{2}}= & {} D Z_{i}^{n-\frac{1}{2}}+q_{i}U_{i}^{n-\frac{1}{2}}+g_{i}^{n-\frac{1}{2}}+(R_{t})_{i}^{n-\frac{1}{2}},\nonumber \\ 1\le & {} i\le M-1,~~1\le n\le N, \end{aligned}$$
(2.23)

where

$$\begin{aligned} (R_{t})_{i}^{n-\frac{1}{2}}=-\left( \beta (R_{t}^{G})_{i}^{n-\frac{1}{2}}+(R_{t}^{c})_{i}^{n-\frac{1}{2}}\right) . \end{aligned}$$
(2.24)

For the spatial second-order derivative \(Z_{i}^{n}\), we adopt the following fourth-order compact approximation (see, e.g., [40]):

$$\begin{aligned} \mathcal{H}_{x}Z_{i}^{n}=\delta _{x}^{2}U_{i}^{n}+(R_{x})_{i}^{n}, \end{aligned}$$
(2.25)

where

$$\begin{aligned} (R_{x})_{i}^{n}=\frac{h^{4}}{360}\int _{0}^{1}\left( \frac{\partial ^{6}u}{\partial x^{6}}({x}_{i}-sh,t_{n})+\frac{\partial ^{6}u}{\partial x^{6}}({x}_{i}+sh,t_{n})\right) \zeta (s) \mathrm{d}s \end{aligned}$$
(2.26)

with \(\zeta (s)=5(1-s)^{3}-3(1-s)^{5}\). Applying \(\mathcal{H}_{x}\) to both sides of (2.23) yields

$$\begin{aligned}&\displaystyle \mathcal{H}_{x}\delta _{t}U_{i}^{n-\frac{1}{2}}+\beta \tau ^{-\alpha } \sum _{k=0}^{n-1} a_{k}^{(\alpha )} \mathcal{H}_{x} U_{i}^{n-k-\frac{1}{2}}\nonumber \\&\quad =\displaystyle D \delta _{x}^{2}U_{i}^{n-\frac{1}{2}}+\mathcal{H}_{x} \left( q_{i}U_{i}^{n-\frac{1}{2}}\right) +\mathcal{H}_{x}g_{i}^{n-\frac{1}{2}}\nonumber \\&\qquad +\,(R_{xt})_{i}^{n-\frac{1}{2}},\quad 1\le i\le M-1, ~~1\le n\le N,~~~ \end{aligned}$$
(2.27)

where

$$\begin{aligned} (R_{xt})_{i}^{n-\frac{1}{2}}=\mathcal{H}_{x}(R_{t})_{i}^{n-\frac{1}{2}}+D (R_{x})_{i}^{n-\frac{1}{2}}. \end{aligned}$$
(2.28)

Omitting the small term \((R_{xt})_{i}^{n-\frac{1}{2}}\) in (2.27), we obtain the following compact finite difference scheme:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \mathcal{H}_{x}\delta _{t}u_{i}^{n-\frac{1}{2}}+\beta \tau ^{-\alpha } \sum _{k=0}^{n-1} a_{k}^{(\alpha )} \mathcal{H}_{x} u_{i}^{n-k-\frac{1}{2}}=\displaystyle D \delta _{x}^{2}u_{i}^{n-\frac{1}{2}}+\mathcal{H}_{x} \left( q_{i}u_{i}^{n-\frac{1}{2}}\right) +\mathcal{H}_{x}g_{i}^{n-\frac{1}{2}},\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~1\le i\le M-1, ~~1\le n\le N,\\ u_{0}^{n}=\phi _{0}^{*,n},\quad u_{M}^{n}=\phi _{L}^{*,n}, \qquad 1\le n\le N,\\ u_{i}^{0}=0, \qquad 0\le i\le M, \end{array}\right. \end{aligned}$$
(2.29)

where \(u_{i}^{n}\) denotes the finite difference approximation to \(U_{i}^{n}\).

3 Truncation error and solvability

Firstly, we estimate the truncation error \((R_{xt})_{i}^{n-\frac{1}{2}}\) of the compact scheme (2.29).

Theorem 3.1

Assume that the solution u(x, t) of the problem (2.1) is in \(\mathcal{C}^{6,3}([0,L]\times [0,T])\) and \(\frac{\partial ^{k} u}{\partial t^{k}}(x,0)=0\) for \(k=1,2\). Then the truncation error \((R_{xt})_{i}^{n-\frac{1}{2}}\) of the compact scheme (2.29) satisfies

$$\begin{aligned} \left| (R_{xt})_{i}^{n-\frac{1}{2}}\right| \le C^{*} \left( \tau ^{2}+h^{4}\right) , \qquad 1\le i\le M-1,~~1\le n\le N, \end{aligned}$$
(3.1)

where \(C^{*}\) is a positive constant independent of the time step \(\tau \), the spatial step h and the time level n.

Proof

Under the condition of the theorem, we have from (2.15) that the local truncation error \((R_{t}^{G})_{i}^{n-\frac{1}{2}}\) in (2.20) satisfies

$$\begin{aligned} (R_{t}^{G})_{i}^{n-\frac{1}{2}}=\mathcal{O}(\tau ^{2}),\qquad 1\le i\le M-1,~~1\le n\le N. \end{aligned}$$
(3.2)

By (2.22), we obtain

$$\begin{aligned} (R_{t}^{c})_{i}^{n-\frac{1}{2}}=\mathcal{O}(\tau ^{2}), \qquad 1\le i\le M-1,~~1\le n\le N. \end{aligned}$$
(3.3)

Then we have from (2.24), (3.2) and (3.3) that

$$\begin{aligned} \displaystyle (R_{t})_{i}^{n-\frac{1}{2}}=\mathcal{O}(\tau ^{2}),\qquad 1\le i\le M-1,~1\le n\le N. \end{aligned}$$
(3.4)

Since \(\mathcal{H}_{x}w_{i}=\frac{1}{12}(w_{i-1}+10w_{i}+w_{i+1})\) for any grid function \(w=\{w_{i}~|~0\le i\le M\}\), we apply the estimates (2.26) and (3.4) in (2.28) to get the desired estimate (3.1) immediately. \(\square \)

For implementing the compact scheme (2.29), it is more convenient to consider its matrix form. To do this, we define the following column vectors:

$$\begin{aligned} \mathbf{u}^{n}= & {} \left( u_{1}^{n}, u_{2}^{n}, \ldots , u_{M-1}^{n}\right) ^{T}, \qquad \mathbf{g}^{n-\frac{1}{2}}=\left( g_{1}^{n-\frac{1}{2}}, g_{2}^{n-\frac{1}{2}}, \ldots , g_{M-1}^{n-\frac{1}{2}}\right) ^{T},\nonumber \\ {\hat{\mathbf{u}}}^{n-1,*}= & {} \left( {\hat{u}}_{1}^{n-1,*}, {\hat{u}}_{2}^{n-1,*}, \ldots , {\hat{u}}_{M-1}^{n-1,*}\right) ^{T}, \end{aligned}$$

where

$$\begin{aligned} {\hat{u}}_{i}^{n-1,*}=-\tau ^{-\alpha }\sum \limits ^{n-1}_{k=1}a_{k}^{(\alpha )}u_{i}^{n-k-\frac{1}{2}}, \qquad 1\le i\le M-1. \end{aligned}$$
(3.5)

We also define the following \((M-1)\)-order tridiagonal or diagonal matrices:

$$\begin{aligned} A=\mathrm{tridiag}\left( -1,2,-1\right) , \qquad B=\frac{1}{12}\mathrm{tridiag}\left( 1, 10, 1\right) , \qquad Q=\mathrm{diag}\left( q_{1}, q_{2}, \ldots , q_{M-1}\right) . \end{aligned}$$

A straightforward calculation shows that the compact scheme (2.29) can be expressed in matrix form as

$$\begin{aligned}&\displaystyle \left( \left( 1+\frac{\beta (\alpha +2)}{4}\tau ^{1-\alpha }\right) B+ \frac{D}{2} \frac{\tau }{h^{2}} A-\frac{\tau }{2} B Q \right) \mathbf{u}^{n}\nonumber \\&\quad =\displaystyle \left( \left( 1-\frac{\beta (\alpha +2)}{4}\tau ^{1-\alpha }\right) B-\frac{D}{2} \frac{\tau }{h^{2}} A+\frac{\tau }{2} B Q \right) \mathbf{u}^{n-1}\nonumber \\&\qquad +\,\tau B \left( \beta {\hat{\mathbf{u}}}^{n-1,*}+\mathbf{g}^{n-\frac{1}{2}} \right) +\mathbf{r}^{n}, \end{aligned}$$
(3.6)

where \(\mathbf{r}^{n}\) absorbs the boundary values of the solution vector and the source term.
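
For illustration, the matrix form (3.6) can be advanced in time with a sparse direct solver. The following Python sketch is our own and uses the simplifying assumption of homogeneous boundary data, so that \(\mathbf{r}^{n}=0\); for nonhomogeneous boundary data the known values \(u_{0}^{n}\) and \(u_{M}^{n}\) entering \(\delta _{x}^{2}\) and \(\mathcal{H}_{x}\) at \(i=1\) and \(i=M-1\) would be collected into \(\mathbf{r}^{n}\) and added to the right-hand side:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

def solve_compact_scheme(alpha, beta, D, q, g, L, T, M, N):
    """A sketch of the compact scheme (2.29) in its matrix form (3.6) with homogeneous
    boundary data (so r^n = 0). q(x) and g(x, t) are vectorized callables; the routine
    returns the interior nodes x_1, ..., x_{M-1} and the approximation at t = T."""
    h, tau = L / M, T / N
    x = h * np.arange(1, M)
    t = tau * np.arange(N + 1)

    # weights a_k^(alpha) of (2.13), built from the Grunwald recursion (2.7)
    w = np.empty(N + 1); w[0] = 1.0
    for k in range(1, N + 1):
        w[k] = (1.0 - (alpha + 1.0) / k) * w[k - 1]
    a = (1.0 + alpha / 2.0) * w
    a[1:] -= (alpha / 2.0) * w[:-1]

    # A = tridiag(-1, 2, -1), B = tridiag(1, 10, 1)/12, Q = diag(q_1, ..., q_{M-1})
    A = diags([-np.ones(M - 2), 2.0 * np.ones(M - 1), -np.ones(M - 2)], [-1, 0, 1])
    B = diags([np.ones(M - 2), 10.0 * np.ones(M - 1), np.ones(M - 2)], [-1, 0, 1]) / 12.0
    Q = diags(q(x), 0)

    c = beta * (alpha + 2.0) / 4.0 * tau**(1.0 - alpha)
    lhs = splu(((1.0 + c) * B + D * tau / (2.0 * h**2) * A - tau / 2.0 * (B @ Q)).tocsc())
    rhs_mat = (1.0 - c) * B - D * tau / (2.0 * h**2) * A + tau / 2.0 * (B @ Q)

    u = np.zeros((N + 1, M - 1))          # u[n] holds the interior values at t_n; u[0] = 0
    for n in range(1, N + 1):
        # history term (3.5): hat{u}^{n-1,*} = -tau^{-alpha} * sum_{k=1}^{n-1} a_k u^{n-k-1/2}
        hist = np.zeros(M - 1)
        for k in range(1, n):
            hist -= tau**(-alpha) * a[k] * 0.5 * (u[n - k] + u[n - k - 1])
        g_half = 0.5 * (g(x, t[n]) + g(x, t[n - 1]))
        u[n] = lhs.solve(rhs_mat @ u[n - 1] + tau * (B @ (beta * hist + g_half)))
    return x, u[N]
```

Because of the history sum in (3.5), the cost of the n-th step is proportional to n, so the total work grows like \(\mathcal{O}(N^{2}M)\) in addition to the N tridiagonal solves.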

Theorem 3.2

The compact scheme (2.29) is uniquely solvable if and only if the matrix

$$\begin{aligned} Q^{*}\equiv \left( 1+\frac{\beta (\alpha +2)}{4}\tau ^{1-\alpha }\right) B+ \frac{D}{2} \frac{\tau }{h^{2}} A-\frac{\tau }{2} B Q \end{aligned}$$
(3.7)

is nonsingular.

Define

$$\begin{aligned} \overline{q}=\max _{x\in [0,L]} q(x), \qquad \underline{q}=\min _{x\in [0,L]} q(x). \end{aligned}$$
(3.8)

A sufficient condition for the matrix \(Q^{*}\) to be nonsingular is given by

$$\begin{aligned} \tau \max \left\{ \overline{q}, \frac{5\overline{q}-\underline{q}}{4}\right\} \le \frac{\beta (\alpha +2)}{2}\tau ^{1-\alpha }+2. \end{aligned}$$
(3.9)

Corollary 3.1

The compact scheme (2.29) is uniquely solvable if the condition (3.9) holds true.

Proof

In fact, \(Q^{*}=\mathrm{tridiag}(p_{i-1}^{*}, q_{i}^{*}, p_{i+1}^{*})\), where \(p_{0}^{*}=p_{M}^{*}=0\) and for each \(1\le i\le M-1\),

$$\begin{aligned} p_{i}^{*}= & {} \frac{1}{12} \left( 1+ \frac{\beta (\alpha +2)}{4}\tau ^{1-\alpha }\right) -\frac{D}{2} \frac{\tau }{h^{2}}- \frac{q_{i}}{24}\tau , \\ q_{i}^{*}= & {} \frac{5}{6} \left( 1+\frac{\beta (\alpha +2)}{4}\tau ^{1-\alpha }\right) +D \frac{\tau }{h^{2}}- \frac{5q_{i}}{12}\tau . \end{aligned}$$

The condition (3.9) implies that \(q_{i}^{*}>0\) for each \(1\le i\le M-1\).

Case 1. Assume that \(p_{i}^{*}\not =0\) for all \(1\le i\le M-1\). In this case, the matrix \(Q^{*}\) is irreducible. By the condition (3.9), we have that for \(2\le i\le M-2\),

$$\begin{aligned} |p_{i-1}^{*}|+|p_{i+1}^{*}|\displaystyle\le & {} \frac{1}{6} \left( 1+\frac{\beta (\alpha +2)}{4}\tau ^{1-\alpha }\right) +D \frac{\tau }{h^{2}}-\frac{q_{i-1}+q_{i+1}}{24}\tau \\ \displaystyle\le & {} \frac{1}{6} \left( 1+\frac{\beta (\alpha +2)}{4}\tau ^{1-\alpha }\right) + D \frac{\tau }{h^{2}}-\frac{\tau }{12}\underline{q}\\ \displaystyle\le & {} \frac{5}{6} \left( 1+\frac{\beta (\alpha +2)}{4}\tau ^{1-\alpha }\right) +D \frac{\tau }{h^{2}}- \frac{5q_{i}}{12}\tau =|q_{i}^{*}|. \end{aligned}$$

Similarly,

$$\begin{aligned} \displaystyle |p_{2}^{*}|\le & {} \frac{1}{12} \left( 1+\frac{\beta (\alpha +2)}{4}\tau ^{1-\alpha }\right) +\frac{D}{2}\frac{\tau }{h^{2}}- \frac{q_{2}}{24}\tau< q_{1}^{*}=|q_{1}^{*}|,\\ \displaystyle |p_{M-2}^{*}|\le & {} \frac{1}{12} \left( 1+\frac{\beta (\alpha +2)}{4}\tau ^{1-\alpha }\right) +\frac{D}{2}\frac{\tau }{h^{2}}- \frac{q_{M-2}}{24}\tau < q_{M-1}^{*}= |q_{M-1}^{*}|. \end{aligned}$$

This proves that \(Q^{*}\) is irreducibly diagonally dominant, and thus nonsingular (see [31]).

Case 2. Assume that \(p_{i_{0}}^{*}= 0\) for some \(1\le i_{0}\le M-1\). In this case, we complete the proof by partitioning \(Q^{*}\) and considering the submatrices of \(Q^{*}\). \(\square \)

Corollary 3.2

The compact scheme (2.29) is uniquely solvable if the function q(x) is nonpositive and convex in [0, L].

Proof

We write \(Q^{*}=\mathrm{tridiag}(p_{i-1}^{*}, q_{i}^{*}, p_{i+1}^{*})\) as in Corollary 3.1. Since the function q(x) is nonpositive and convex, we have that for \(2\le i\le M-2\),

$$\begin{aligned} |p_{i-1}^{*}|+|p_{i+1}^{*}|\le & {} \displaystyle \frac{1}{6} \left( 1+\frac{\beta (\alpha +2)}{4}\tau ^{1-\alpha }\right) +D \frac{\tau }{h^{2}}-\frac{q_{i-1}+q_{i+1}}{24}\tau \\< & {} \displaystyle \frac{5}{6} \left( 1+\frac{\beta (\alpha +2)}{4}\tau ^{1-\alpha }\right) +D \frac{\tau }{h^{2}}-\frac{5q_{i}}{12}\tau =|q_{i}^{*}|, \end{aligned}$$

and

$$\begin{aligned} \displaystyle |p_{2}^{*}|\le & {} \frac{1}{12} \left( 1+\frac{\beta (\alpha +2)}{4}\tau ^{1-\alpha }\right) +\frac{D}{2}\frac{\tau }{h^{2}}- \frac{q_{2}}{24}\tau<q_{1}^{*}= |q_{1}^{*}|,\\ \displaystyle |p_{M-2}^{*}|\le & {} \frac{1}{12} \left( 1+\frac{\beta (\alpha +2)}{4}\tau ^{1-\alpha }\right) +\frac{D}{2}\frac{\tau }{h^{2}}- \frac{q_{M-2}}{24}\tau <q_{M-1}^{*}= |q_{M-1}^{*}|. \end{aligned}$$

This shows that the matrix \(Q^{*}\) is strictly diagonally dominant, and thus nonsingular (see [31]). \(\square \)

Remark 3.1

When \(q(x)\equiv q\) is independent of x and \(q\le 0\), the conditions in Corollaries 3.1 and 3.2 are trivially satisfied. We notice that if the convection coefficient V(x) in the original problem (1.3) is independent of x, i.e., \(V(x)\equiv V\), we must have \(q(x)\equiv -\frac{V^{2}}{4D}\le 0\). Therefore, for the fractional mobile/immobile convection–diffusion problem (1.3) with constant coefficients, the corresponding compact scheme (2.29) is always uniquely solvable without any additional constraints.

4 Stability and convergence

We now carry out the stability and convergence analysis of the compact scheme (2.29) using a technique of discrete energy analysis. Let \(\mathcal{S}_{h}=\{w~|~ w=(w_{0}, w_{1}, \ldots , w_{M}), w_{0}=w_{M}=0\}\) be the space of the grid functions defined on the spatial mesh and vanishing on two boundary points. For grid functions \(w,z\in \mathcal{S}_{h}\), we define the inner product (wz), \(L^{2}\) norm \(\Vert w \Vert \) and \(L^{\infty }\) norm \(\Vert w \Vert _{\infty }\) by

$$\begin{aligned} (w,z)=h\sum \limits _{i=1}^{M-1} w_{i}z_{i}, \qquad \Vert w\Vert =(w,w)^{\frac{1}{2}}, \qquad \Vert w\Vert _{\infty }=\max _{0\le i\le M} |w_{i}|. \end{aligned}$$

We also define

$$\begin{aligned} (\delta _{x}w,\delta _{x}z]=h\sum \limits _{i=1}^{M} \delta _{x}w_{i-\frac{1}{2}}\delta _{x}z_{i-\frac{1}{2}}, \qquad |w|_{1}=(\delta _{x}w,\delta _{x}w]^{\frac{1}{2}}. \end{aligned}$$

The inverse estimate \(h\Vert \delta _{x}^{2}w\Vert \le 2|w|_{1}\) (e.g., see [26]) implies that \(|w|_{1}^{2}-\frac{h^{2}}{12} \Vert \delta _{x}^{2} w\Vert ^{2}\ge \frac{2}{3} |w|_{1}^{2}\). For convenience, we introduce the following notation:

$$\begin{aligned} \Vert w\Vert _{*}=\left( |w|_{1}^{2}-\frac{h^{2}}{12} \Vert \delta _{x}^{2} w\Vert ^{2}\right) ^{\frac{1}{2}}. \end{aligned}$$

Then we have the following lemma from [33, 34].

Lemma 4.1

Let \(\rho (x)\) be a function in \(\mathcal{C}[0,L]\). For any grid function \(w\in \mathcal{S}_{h}\), we have

$$\begin{aligned} \left( \mathcal{H}_{x}w, \delta _{x}^{2}w\right) =-\Vert w\Vert _{*}^{2}, \qquad \Vert \mathcal{H}_{x}w \Vert ^{2}\le \frac{3L^{2}}{16} \Vert w\Vert _{*}^{2}, \qquad \Vert \mathcal{H}_{x}(\rho w ) \Vert \le \Vert \rho \Vert _{\infty } \Vert w\Vert . \end{aligned}$$

Lemma 4.2

For any grid function \(w\in \mathcal{S}_{h}\), we have

$$\begin{aligned} \frac{1}{3} \Vert w \Vert ^{2}\le \Vert \mathcal{H}_{x}w \Vert ^{2}\le \Vert w \Vert ^{2}. \end{aligned}$$
(4.1)

Proof

Since \((w,\delta _{x}^{2}w)=-(\delta _{x}w,\delta _{x}w]=-|w|_{1}^{2}\), we have \( \Vert \mathcal{H}_{x}w \Vert ^{2}=\Vert w\Vert ^{2}-\frac{h^{2}}{6}|w|_{1}^{2}+\frac{h^{4}}{144} \Vert \delta _{x}^{2}w \Vert ^{2}\). The inverse estimates \(h|w|_{1}\le 2\Vert w\Vert \) and \(h\Vert \delta _{x}^{2}w\Vert \le 2|w|_{1}\) (e.g., see [26]) imply that

$$\begin{aligned} \Vert \mathcal{H}_{x}w \Vert ^{2}\ge \Vert w\Vert ^{2}-\frac{h^{2}}{6}|w|_{1}^{2}\ge \frac{1}{3} \Vert w\Vert ^{2},\qquad \Vert \mathcal{H}_{x}w \Vert ^{2}\le \Vert w\Vert ^{2}-\frac{5h^{2}}{36}|w|_{1}^{2}\le \Vert w\Vert ^{2}. \end{aligned}$$

This proves the lemma. \(\square \)

Lemma 4.3

Let \(a_{k}^{(\alpha )}\) \((k\ge 0)\) be defined by (2.13). Then for any positive integer m and \(w^{n}\in \mathcal{S}_{h}\) \((n\ge 1)\), it holds that

$$\begin{aligned} \sum _{n=1}^{m}\sum _{k=0}^{n-1} a_{k}^{(\alpha )} (w^{n-k}, w^{n})\ge 0. \end{aligned}$$
(4.2)

Proof

In view of the definition of the inner product \((w^{n-k}, w^{n})\),

$$\begin{aligned} \sum _{n=1}^{m}\sum _{k=0}^{n-1} a_{k}^{(\alpha )} (w^{n-k}, w^{n})=h\sum _{i=1}^{M-1} \sum _{n=1}^{m}\sum _{k=0}^{n-1} a_{k}^{(\alpha )} w_{i}^{n-k}w_{i}^{n}. \end{aligned}$$

By Lemma 3.2 in [35], we have that for each i,

$$\begin{aligned} \sum _{n=1}^{m}\sum _{k=0}^{n-1} a_{k}^{(\alpha )} w_{i}^{n-k}w_{i}^{n}\ge 0. \end{aligned}$$

The proof is completed. \(\square \)

Lemma 4.4

(Discrete Gronwall lemma [25]) Assume that \(\{k_{n}\}\) and \(\{s_{n}\}\) are nonnegative sequences, and that the sequence \(\{\phi _{n}\}\) satisfies

$$\begin{aligned} \phi _{0}\le g_{0},\quad \phi _{n}\le g_{0}+\sum \limits ^{n-1}_{l=0}s_{l}+\sum \limits _{l=0}^{n-1}k_{l}\phi _{l},\qquad n\ge 1, \end{aligned}$$

where \(g_{0}\ge 0\). Then the sequence \(\{\phi _{n}\}\) satisfies

$$\begin{aligned} \phi _{n}\le \left( g_{0}+\sum \limits _{l=0}^{n-1}s_{l}\right) \exp \left( \sum \limits _{l=0}^{n-1}k_{l}\right) ,\qquad n\ge 1. \end{aligned}$$

Based on the above lemmas, we now discuss the stability of the compact scheme (2.29) with respect to the initial value \(u_{i}^{0}\) and the source term g.

Theorem 4.1

Let \(u^{n}=(u_{0}^{n}, u_{1}^{n}, \ldots , u_{M}^{n})\) be the solution of the compact scheme (2.29) with the initial value \(u_{i}^{0}\) and the boundary values \(u_{0}^{n}=u_{M}^{n}=0\). Then when \(\tau \Vert q\Vert _{\infty }^{2} \le \frac{10D}{3L^{2}}\), we have that for \(1\le n\le N\),

$$\begin{aligned} \displaystyle \left\| u^{n} \right\| ^{2}&\le \left( \displaystyle 48 \left\| \mathcal{H}_{x}u^{0} \right\| ^{2}+\displaystyle \frac{9L^{2}\tau }{2 D}\left( \Vert q\Vert ^{2}_{\infty } \left\| u^{0} \right\| ^{2}\displaystyle +2\sum _{k=1}^{n} \left\| \mathcal{H}_{x}g^{k-\frac{1}{2}}\right\| ^{2} \right) \right) \nonumber \\&\quad \times \, \mathrm{exp}\left( {\frac{9L^{2}\Vert q\Vert ^{2}_{\infty }T}{D}}\right) . \end{aligned}$$
(4.3)

Proof

Taking the inner product of (2.29) with \(\mathcal{H}_{x}u^{n-\frac{1}{2}}\) gives

$$\begin{aligned}&\left( \mathcal{H}_{x}\delta _{t}u^{n-\frac{1}{2}},\mathcal{H}_{x}u^{n-\frac{1}{2}} \right) +\beta \tau ^{-\alpha } \displaystyle \sum _{k=0}^{n-1} a_{k}^{(\alpha )} \left( \mathcal{H}_{x}u^{n-k-\frac{1}{2}}, \mathcal{H}_{x}u^{n-\frac{1}{2}}\right) \nonumber \\&\quad =D \left( \delta _{x}^{2}u^{n-\frac{1}{2}},\mathcal{H}_{x}u^{n-\frac{1}{2}}\right) +\left( \mathcal{H}_{x}(q u^{n-\frac{1}{2}}),\mathcal{H}_{x}u^{n-\frac{1}{2}}\right) + \left( \mathcal{H}_{x}g^{n-\frac{1}{2}},\mathcal{H}_{x}u^{n-\frac{1}{2}}\right) .~~~~ \nonumber \\ \end{aligned}$$
(4.4)

It is clear that

$$\begin{aligned} \left( \mathcal{H}_{x}\delta _{t}u^{n-\frac{1}{2}},\mathcal{H}_{x}u^{n-\frac{1}{2}} \right) =\frac{1}{2\tau } \left( \left\| \mathcal{H}_{x}u^{n} \right\| ^{2}-\left\| \mathcal{H}_{x}u^{n-1} \right\| ^{2} \right) . \end{aligned}$$
(4.5)

By Lemma 4.1, \(( \delta _{x}^{2}u^{n-\frac{1}{2}},\mathcal{H}_{x}u^{n-\frac{1}{2}} )= -\Vert u^{n-\frac{1}{2}} \Vert _{*}^{2}\) and \(\Vert u^{n-\frac{1}{2}} \Vert _{*}^{2}\ge \frac{16}{3L^{2}}\Vert \mathcal{H}_{x}u^{n-\frac{1}{2}} \Vert ^{2}\). Therefore, we have

$$\begin{aligned} D \left( \delta _{x}^{2}u^{n-\frac{1}{2}},\mathcal{H}_{x}u^{n-\frac{1}{2}}\right) =-D \left\| u^{n-\frac{1}{2}} \right\| _{*}^{2}\le \displaystyle -\frac{16D}{3L^{2}}\left\| \mathcal{H}_{x}u^{n-\frac{1}{2}} \right\| ^{2}. \end{aligned}$$
(4.6)

On the other hand, by the Cauchy–Schwarz inequality, i.e., \((w,z)\le \varepsilon \Vert w\Vert ^{2}+\frac{1}{4\varepsilon } \Vert z\Vert ^{2}\) for all \(w,z\in \mathcal{S}_{h}\) and \(\varepsilon >0\), we obtain

$$\begin{aligned} \left( \mathcal{H}_{x}g^{n-\frac{1}{2}},\mathcal{H}_{x}u^{n-\frac{1}{2}}\right)\le & {} \displaystyle \frac{3L^{2}}{32D} \left\| \mathcal{H}_{x}g^{n-\frac{1}{2}} \right\| ^{2}+\frac{8D}{3L^{2}} \left\| \mathcal{H}_{x}u^{n-\frac{1}{2}} \right\| ^{2},\nonumber \\ \left( \mathcal{H}_{x}(q u^{n-\frac{1}{2}}),\mathcal{H}_{x}u^{n-\frac{1}{2}}\right)\le & {} \displaystyle \frac{3L^{2}}{32D} \left\| \mathcal{H}_{x}(q u^{n-\frac{1}{2}}) \right\| ^{2}+\frac{8D}{3L^{2}} \left\| \mathcal{H}_{x}u^{n-\frac{1}{2}} \right\| ^{2}\nonumber \\\le & {} \displaystyle \frac{3L^{2}\Vert q\Vert ^{2}_{\infty }}{32D} \left\| u^{n-\frac{1}{2}} \right\| ^{2}+\frac{8D}{3L^{2}} \left\| \mathcal{H}_{x}u^{n-\frac{1}{2}} \right\| ^{2}. \end{aligned}$$
(4.7)

The last inequality above follows from Lemma 4.1. Substituting (4.5)–(4.7) into (4.4) leads to

$$\begin{aligned}&\left\| \mathcal{H}_{x}u^{n} \right\| ^{2}+ 2\beta \tau ^{1-\alpha } \displaystyle \sum _{k=0}^{n-1} a_{k}^{(\alpha )} \left( \mathcal{H}_{x}u^{n-k-\frac{1}{2}}, \mathcal{H}_{x}u^{n-\frac{1}{2}}\right) \\&\quad \le \displaystyle \left\| \mathcal{H}_{x}u^{n-1} \right\| ^{2} +\frac{3L^{2}\Vert q\Vert ^{2}_{\infty }\tau }{16D} \left\| u^{n-\frac{1}{2}} \right\| ^{2} +\displaystyle \frac{3L^{2}\tau }{16D} \left\| \mathcal{H}_{x}g^{n-\frac{1}{2}}\right\| ^{2}. \end{aligned}$$

Replacing n by m and summing up for m from 1 to n on both sides of the above inequality, we have

$$\begin{aligned}&\left\| \mathcal{H}_{x}u^{n} \right\| ^{2}+ 2\beta \tau ^{1-\alpha } \displaystyle \sum _{m=1}^{n} \sum _{k=0}^{m-1} a_{k}^{(\alpha )} \left( \mathcal{H}_{x}u^{m-k-\frac{1}{2}}, \mathcal{H}_{x}u^{m-\frac{1}{2}}\right) \\&\quad \le \displaystyle \left\| \mathcal{H}_{x}u^{0} \right\| ^{2} +\frac{3L^{2}\Vert q\Vert ^{2}_{\infty }\tau }{16D} \sum _{m=1}^{n} \left\| u^{m-\frac{1}{2}} \right\| ^{2} +\displaystyle \frac{3L^{2}\tau }{16D} \sum _{m=1}^{n}\left\| \mathcal{H}_{x}g^{m-\frac{1}{2}}\right\| ^{2}. \end{aligned}$$

Applying Lemma 4.3 and then relabeling m as k, we obtain

$$\begin{aligned} \left\| \mathcal{H}_{x}u^{n} \right\| ^{2}\le \left\| \mathcal{H}_{x}u^{0} \right\| ^{2}+\displaystyle \frac{3L^{2}\Vert q\Vert ^{2}_{\infty }\tau }{16D} \sum _{k=1}^{n} \left\| u^{k-\frac{1}{2}} \right\| ^{2} +\frac{3L^{2}\tau }{16D} \sum _{k=1}^{n} \left\| \mathcal{H}_{x}g^{k-\frac{1}{2}}\right\| ^{2}. \end{aligned}$$

Furthermore, by the relation \(\Vert u^{k-\frac{1}{2}}\Vert ^{2}\le \frac{1}{2} ( \Vert u^{k} \Vert ^{2}+ \Vert u^{k-1} \Vert ^{2})\), we have

$$\begin{aligned} \left\| \mathcal{H}_{x}u^{n} \right\| ^{2}&\le \left\| \mathcal{H}_{x}u^{0} \right\| ^{2}+\displaystyle \frac{3L^{2}\Vert q\Vert ^{2}_{\infty }\tau }{32 D} \left( \left\| u^{n} \right\| ^{2}+\left\| u^{0} \right\| ^{2} \right) \nonumber \\&\quad +\displaystyle \frac{3L^{2}\Vert q\Vert ^{2}_{\infty }\tau }{16D} \sum _{k=1}^{n-1} \left\| u^{k} \right\| ^{2}\displaystyle +\frac{3L^{2}\tau }{16D} \sum _{k=1}^{n} \left\| \mathcal{H}_{x}g^{k-\frac{1}{2}}\right\| ^{2}. \end{aligned}$$
(4.8)

An application of Lemma 4.2 gives

$$\begin{aligned}&\left( \displaystyle \frac{1}{3}- \frac{3L^{2}\Vert q\Vert ^{2}_{\infty }\tau }{32 D} \right) \left\| u^{n} \right\| ^{2}\le \left\| \mathcal{H}_{x}u^{0} \right\| ^{2}+\displaystyle \frac{3L^{2}\Vert q\Vert ^{2}_{\infty }\tau }{32 D} \left\| u^{0} \right\| ^{2}\nonumber \\&\quad +\displaystyle \frac{3L^{2}\Vert q\Vert ^{2}_{\infty }\tau }{16D} \sum _{k=1}^{n-1} \left\| u^{k} \right\| ^{2}\displaystyle +\frac{3L^{2}\tau }{16D} \sum _{k=1}^{n} \left\| \mathcal{H}_{x}g^{k-\frac{1}{2}}\right\| ^{2}. \end{aligned}$$
(4.9)

When \(\tau \Vert q\Vert _{\infty }^{2} \le \frac{10D}{3L^{2}}\), we have

$$\begin{aligned} \left\| u^{n} \right\| ^{2}\le & {} \displaystyle 48 \left\| \mathcal{H}_{x}u^{0} \right\| ^{2}\nonumber \\&+\displaystyle \frac{9L^{2}\tau }{2 D} \left( \Vert q\Vert ^{2}_{\infty }\left\| u^{0} \right\| ^{2}+\displaystyle 2 \Vert q\Vert ^{2}_{\infty } \sum _{k=1}^{n-1} \left\| u^{k} \right\| ^{2}\displaystyle +2 \sum _{k=1}^{n} \left\| \mathcal{H}_{x}g^{k-\frac{1}{2}}\right\| ^{2} \right) . \qquad \qquad \end{aligned}$$
(4.10)

The estimate (4.3) follows immediately from Lemma 4.4 (Discrete Gronwall lemma). \(\square \)

Theorem 4.1 shows that the compact scheme (2.29) is almost unconditionally stable with respect to the initial value \(u_{i}^{0}\) and the source term g; more precisely, it is stable for a general q(x) under the mild assumption \(\tau \Vert q\Vert _{\infty }^{2} \le \frac{10D}{3L^{2}}\). For the special case when \(q(x)\equiv q\) is independent of x and \(q< \frac{16D}{3L^{2}}\), this mild assumption can be removed and the compact scheme (2.29) is unconditionally stable. Specifically, we have the following result.

Theorem 4.2

Let \(u^{n}=(u_{0}^{n}, u_{1}^{n}, \ldots , u_{M}^{n})\) be the solution of the compact scheme (2.29) with the initial value \(u_{i}^{0}\) and the boundary values \(u_{0}^{n}=u_{M}^{n}=0\). Assume that \(q(x)\equiv q\) is independent of x and that \(q< \frac{16D}{3L^{2}}\). Then we have

$$\begin{aligned} \displaystyle \left\| u^{n} \right\| ^{2} \le \displaystyle 3\left\| \mathcal{H}_{x}u^{0} \right\| ^{2} +\frac{3\tau }{2C_{1}} \displaystyle \sum _{k=1}^{n} \left\| \mathcal{H}_{x}g^{k-\frac{1}{2}}\right\| ^{2}, \end{aligned}$$
(4.11)

where \( C_{1}=\frac{16D}{3L^{2}}-q\).

Proof

The proof follows from an argument similar to that used in the proof of Theorem 4.1. When \(q(x)\equiv q\) is independent of x and \(q<\frac{16D}{3L^{2}}\), we have

$$\begin{aligned} \left( \mathcal{H}_{x}(q u^{n-\frac{1}{2}}),\mathcal{H}_{x}u^{n-\frac{1}{2}}\right) =q \left\| \mathcal{H}_{x}u^{n-\frac{1}{2}} \right\| ^{2}=\left( \frac{16D}{3L^{2}}-C_{1} \right) \left\| \mathcal{H}_{x}u^{n-\frac{1}{2}} \right\| ^{2}, \qquad \end{aligned}$$
(4.12)

where \(C_{1}=\frac{16D}{3L^{2}}-q>0\). In this case, we replace the first estimate in (4.7) by

$$\begin{aligned} \left( \mathcal{H}_{x}g^{n-\frac{1}{2}},\mathcal{H}_{x}u^{n-\frac{1}{2}}\right) \le \displaystyle \frac{1}{4C_{1}} \left\| \mathcal{H}_{x}g^{n-\frac{1}{2}} \right\| ^{2}+C_{1} \left\| \mathcal{H}_{x}u^{n-\frac{1}{2}} \right\| ^{2}. \end{aligned}$$
(4.13)

Using (4.4) (with \(q(x)\equiv q\)), (4.5), (4.6), (4.12) and (4.13), we obtain

$$\begin{aligned}&\left\| \mathcal{H}_{x}u^{n} \right\| ^{2} +2\beta \tau ^{1-\alpha } \displaystyle \sum _{k=0}^{n-1} a_{k}^{(\alpha )} \left( \mathcal{H}_{x}u^{n-k-\frac{1}{2}}, \mathcal{H}_{x}u^{n-\frac{1}{2}}\right) \\&\quad \le \displaystyle \left\| \mathcal{H}_{x}u^{n-1} \right\| ^{2} +\displaystyle \frac{\tau }{2C_{1}} \left\| \mathcal{H}_{x}g^{n-\frac{1}{2}}\right\| ^{2}. \end{aligned}$$

By Lemma 4.3,

$$\begin{aligned} \displaystyle \left\| \mathcal{H}_{x}u^{n} \right\| ^{2} \le \displaystyle \left\| \mathcal{H}_{x}u^{0} \right\| ^{2} +\frac{\tau }{2C_{1}} \displaystyle \sum _{k=1}^{n} \left\| \mathcal{H}_{x}g^{k-\frac{1}{2}}\right\| ^{2}. \end{aligned}$$
(4.14)

An application of Lemma 4.2 shows that the estimate (4.11) holds. \(\square \)

Remark 4.1

The condition \(q< \frac{16D}{3L^{2}}\) is automatically satisfied if \(q\le 0\). The latter is certainly satisfied if the convection coefficient V(x) in the original problem (1.3) is independent of x, i.e., \(V(x)\equiv V\). This implies that for the fractional mobile/immobile convection–diffusion problem (1.3) with constant coefficients, the corresponding compact scheme (2.29) is always unconditionally stable.

We now consider the convergence of the compact scheme (2.29). Let \(e_{i}^{n}=U_{i}^{n}-u_{i}^{n}\). From (2.27) and (2.29), we get the following error equation:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \mathcal{H}_{x}\delta _{t}e_{i}^{n-\frac{1}{2}}+\beta \tau ^{-\alpha } \sum _{k=0}^{n-1} a_{k}^{(\alpha )} \mathcal{H}_{x} e_{i}^{n-k-\frac{1}{2}}=\displaystyle D\delta _{x}^{2}e_{i}^{n-\frac{1}{2}}+\mathcal{H}_{x} \left( q_{i}e_{i}^{n-\frac{1}{2}}\right) +(R_{xt})_{i}^{n-\frac{1}{2}},\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\qquad 1\le i\le M-1, ~~1\le n\le N,\\ e_{0}^{n}=e_{M}^{n}=0, \qquad 1\le n\le N,\\ e_{i}^{0}=0, \qquad 0\le i\le M. \end{array}\right. \end{aligned}$$
(4.15)

Based on this error equation, we have the following convergence results.

Theorem 4.3

Assume that the condition in Theorem 3.1 is satisfied. Let \(U_{i}^{n}\) denote the value of the solution u(x, t) of (2.1) at the mesh point \((x_{i},t_{n})\) and let \(U^{n}=(U_{0}^{n}, U_{1}^{n}, \ldots , U_{M}^{n})\). Also let \(u^{n}=(u_{0}^{n}, u_{1}^{n}, \ldots , u_{M}^{n})\) be the solution of the compact scheme (2.29). Then when \(\tau \Vert q\Vert _{\infty }^{2}\le \frac{10D}{3L^{2}}\), we have

$$\begin{aligned} \left\| U^{n}-u^{n} \right\| \le \displaystyle \left( \displaystyle \frac{9L^{3}T }{D} \mathrm{exp}\left( \frac{9L^{2}\Vert q\Vert _{\infty }^{2}T}{D}\right) \right) ^{\frac{1}{2}} C^{*}\left( \tau ^{2}+h^{4}\right) ,\qquad 1\le n\le N, \nonumber \\ \end{aligned}$$
(4.16)

where the positive constant \(C^{*}\) is the same as that in (3.1).

Proof

It follows from (4.15) and Theorem 4.1 that

$$\begin{aligned} \left\| e^{n} \right\| ^{2} \le \displaystyle \frac{9L^{2}\tau }{D} \mathrm{exp}\left( \frac{9L^{2}\Vert q\Vert _{\infty }^{2}T}{D}\right) \sum _{k=1}^{n} \left\| (R_{xt})^{k-\frac{1}{2}}\right\| ^{2}. \end{aligned}$$

Applying Theorem 3.1, we get

$$\begin{aligned} \left\| e^{n} \right\| ^{2} \le \displaystyle \frac{9L^{3}T }{D} \mathrm{exp}\left( \frac{9L^{2}\Vert q\Vert _{\infty }^{2}T}{D}\right) {C^{*}}^{2} \left( \tau ^{2}+h^{4}\right) ^{2}. \end{aligned}$$

The estimate (4.16) is proved. \(\square \)

Theorem 4.4

Assume that the condition in Theorem 3.1 is satisfied. Let \(U_{i}^{n}\) denote the value of the solution u(x, t) of (2.1) at the mesh point \((x_{i},t_{n})\) and let \(U^{n}=(U_{0}^{n}, U_{1}^{n}, \ldots , U_{M}^{n})\). Also let \(u^{n}=(u_{0}^{n}, u_{1}^{n}, \ldots , u_{M}^{n})\) be the solution of the compact scheme (2.29). If \(q(x)\equiv q\) is independent of x and \(q< \frac{16D}{3L^{2}}\), we have

$$\begin{aligned} \left\| U^{n}-u^{n}\right\| \le \displaystyle \left( \frac{3LT}{2 C_{1}}\right) ^{\frac{1}{2}} C^{*}\left( \tau ^{2}+h^{4}\right) ,\qquad 1\le n\le N, \end{aligned}$$
(4.17)

where the positive constants \(C^{*}\) and \(C_{1}\) are the same as those in (3.1) and (4.11).

Proof

The proof follows from (4.15) and Theorems 3.1 and 4.2. \(\square \)

Theorems 4.3 and 4.4 show that the compact scheme (2.29) converges with the convergence order \(\mathcal{O}(\tau ^{2}+h^{4})\), regardless of the order \(\alpha \) of the fractional derivative.

Remark 4.2

In Theorem 4.3, the optimal error estimate (i.e., the error estimate with the same order as the truncation error) of the compact scheme (2.29) is obtained under the mild condition \(\tau \Vert q\Vert _{\infty }^{2}\le \frac{10D}{3L^{2}}\) for the general q(x). Theorem 4.4 shows that this mild condition is no longer required to obtain the same optimal error estimate if \(q(x)\equiv q\) is independent of x and \(q< \frac{16D}{3L^{2}}\). In particular, this is the case for the fractional mobile/immobile convection–diffusion problem (1.3) with constant coefficients.

Remark 4.3

The constraint \(q< \frac{16D}{3L^{2}}\) in Theorems 4.2 and 4.4 is easily verifiable for practical problems. If it does not hold, we have the estimates (4.3) and (4.16) instead of the estimates (4.11) and (4.17), respectively, for sufficiently small \(\tau \). When \(C_{1}\) is very small, the estimates (4.11) and (4.17) are poor; in this case, it is also better to use the estimates (4.3) and (4.16) for sufficiently small \(\tau \). The restriction on \(\tau \) in Theorems 4.1 and 4.3 is needed only for the analysis of the stability and convergence of the compact scheme (2.29) with a general q(x). One of the numerical experiments in Sect. 6 shows that it is only a sufficient condition. Improvement of this condition could be interesting both theoretically and computationally.

Once we have the error estimate between the solution \(U_{i}^{n}=u(x_{i},t_{n})\) of the transformed problem (2.1) and the solution \(u_{i}^{n}\) of the compact scheme (2.29), it is very straightforward to obtain the error estimate between the solutions of the original problem (1.3) and the compact scheme (2.29). Let \(V_{i}^{n}=v(x_{i},t_{n})\) be the value of the solution v(x, t) of the original problem (1.3) at the mesh point \((x_{i},t_{n})\), and let \(v_{i}^{n}=u_{i}^{n}/k_{i}\), where \(k_{i}=k(x_{i})\). Since \(V_{i}^{n}=U_{i}^{n}/k_{i}\), we have from (4.16) or (4.17) that

$$\begin{aligned} \Vert V^{n}- v^{n} \Vert \le C_{2} \left( \tau ^{2}+h^{4}\right) ,\qquad 1\le n\le N, \end{aligned}$$
(4.18)

where \(C_{2}\) is a positive constant independent of the time step \(\tau \), the spatial step h and the time level n. The estimate (4.18) will be used in our numerical experiments in Sect. 6.

5 Richardson extrapolation of the compact finite difference method

The asymptotic expansion of the truncation error in (2.15) allows us to develop a Richardson extrapolation algorithm that further enhances the temporal accuracy of the solution computed by the compact scheme (2.29).
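
In practice, once the leading temporal error term is known to be proportional to \(\tau ^{2}\) (this is what the expansion derived below establishes), one standard realization of Richardson extrapolation combines two runs of the scheme with time steps \(\tau \) and \(\tau /2\) at the common time levels. A minimal sketch of this combination, reusing the solve_compact_scheme sketch given after (3.6); the problem data below are arbitrary illustrative choices, not taken from Sect. 6:

```python
import numpy as np

# illustrative problem data only (arbitrary smooth choices)
alpha, beta, D, L, T = 0.5, 1.0, 1.0, 1.0, 1.0
q = lambda x: -np.ones_like(x)
g = lambda x, t: np.sin(np.pi * x) * t**2
M, N = 64, 32

x, u_coarse = solve_compact_scheme(alpha, beta, D, q, g, L, T, M, N)      # time step tau = T/N
x, u_fine   = solve_compact_scheme(alpha, beta, D, q, g, L, T, M, 2 * N)  # time step tau/2
u_extrap = (4.0 * u_fine - u_coarse) / 3.0   # cancels the tau^2 error term at t = T
```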

Assume that the solution u(x, t) of the problem (2.1) is in \(\mathcal{C}^{6,4}([0,L]\times [0,T])\) and \(\frac{\partial ^{k} u}{\partial t^{k}}(x,0)=0\) for \(k=1,2,3\). Then we have from (2.14) and (2.15) with \(r=3\) that the truncation error \((R_{t}^{G})_{i}^{n-\frac{1}{2}}\) in (2.20) or (3.2) can be written as

$$\begin{aligned} (R_{t}^{G})_{i}^{n-\frac{1}{2}}=\frac{(5+3\alpha )\alpha }{48}\left( {_{~0}^{C}}\mathcal{D}_{t}^{2+\alpha } u({x}_{i},t_{n})+{_{~0}^{C}}\mathcal{D}_{t}^{2+\alpha } u ({x}_{i},t_{n-1}) \right) \tau ^{2}+\mathcal{O}(\tau ^{3}). \nonumber \\ \end{aligned}$$
(5.1)

By Taylor expansion, the truncation error \((R_{t}^{c})_{i}^{n-\frac{1}{2}}\) in (2.22) or (3.3) has the form

$$\begin{aligned} (R_{t}^{c})_{i}^{n-\frac{1}{2}}= & {} \frac{\tau ^{2}}{24} \left( \frac{\partial ^{3}u}{\partial t^{3}} ({x}_{i},t_{n})+ \frac{\partial ^{3}u}{\partial t^{3}} ({x}_{i},t_{n-1}) \right) +\mathcal{O}(\tau ^{4}),\nonumber \\ 1\le & {} i\le M-1,~~1\le n\le N. \end{aligned}$$
(5.2)

Define

$$\begin{aligned} \displaystyle g^{*}(x,t)= & {} -\frac{(5+3\alpha )\alpha \beta }{24} {~_{~0}^{C}}\mathcal{D}_{t}^{2+\alpha } u(x,t)- \frac{1}{12}\frac{\partial ^{3}u}{\partial t^{3}} ({x},t),\nonumber \\ \displaystyle g_{i}^{*,n-\frac{1}{2}}= & {} \frac{1}{2} \left( g^{*}(x_{i},t_{n})+g^{*}(x_{i},t_{n-1}) \right) . \end{aligned}$$
(5.3)

By (5.1) and (5.2), the truncation error \((R_{t})_{i}^{n-\frac{1}{2}}\) in (2.24) or (3.4) can be written as

$$\begin{aligned} \displaystyle (R_{t})_{i}^{n-\frac{1}{2}}=g_{i}^{*,n-\frac{1}{2}} \tau ^{2}+\mathcal{O}(\tau ^{3}),\qquad 1\le i\le M-1,~1\le n\le N. \end{aligned}$$
(5.4)

Let \(u^{*}(x,t)\) be the solution of the following problem:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \frac{\partial u^{*}}{\partial t}(x,t)+\beta \frac{\partial ^{\alpha } u^{*}}{\partial t^{\alpha }}(x,t)=D \displaystyle \frac{\partial ^{2} u^{*}}{\partial x^{2}}(x,t) +q(x)u^{*}(x,t)+g^{*}(x,t),\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\quad (x,t)\in (0,L)\times (0,T],\\ u^{*}(0,t)= u^{*}(L,t)=0,\qquad t\in (0,T],\\ u^{*}(x,0)=0,\qquad x\in [0,L], \end{array} \right. \end{aligned}$$
(5.5)

where the source term \(g^{*}(x,t)\) is defined in (5.3). Assume \(u^{*}(x,t)\in \mathcal{C}^{6,3}([0,L]\times [0,T])\). Since the above problem has the same form as the problem (2.1), the same argument for (2.27) with the source term g being replaced by \(g^{*}\) shows that \(U_{i}^{*,n}=u^{*}(x_{i},t_{n})\) satisfies

$$\begin{aligned}&\displaystyle \mathcal{H}_{x}\delta _{t}U_{i}^{*,n-\frac{1}{2}}+\beta \tau ^{-\alpha } \sum _{k=0}^{n-1} a_{k}^{(\alpha )} \mathcal{H}_{x} U_{i}^{*,n-k-\frac{1}{2}}\nonumber \\&\quad =\displaystyle D \delta _{x}^{2}U_{i}^{*,n-\frac{1}{2}}+\mathcal{H}_{x} \left( q_{i}U_{i}^{*,n-\frac{1}{2}}\right) +\mathcal{H}_{x}g_{i}^{*,n-\frac{1}{2}}+(R_{xt}^{*})_{i}^{n-\frac{1}{2}},\nonumber \\&\quad \qquad 1\le i\le M-1, ~1\le n\le N, \end{aligned}$$
(5.6)

where

$$\begin{aligned} (R_{xt}^{*})_{i}^{n-\frac{1}{2}}=\mathcal{H}_{x}(R_{t}^{*})_{i}^{n-\frac{1}{2}}+D (R_{x}^{*})_{i}^{n-\frac{1}{2}}. \end{aligned}$$
(5.7)

The truncation errors \((R_{t}^{*})_{i}^{n-\frac{1}{2}}\) and \((R_{x}^{*})_{i}^{n-\frac{1}{2}}\) in (5.7) are analogously defined by (2.24) and (2.26) with u being replaced by \(u^{*}\).

We now estimate the truncation error \((R_{xt}^{*})_{i}^{n-\frac{1}{2}}\). In view of \(\frac{\partial ^{3} u}{\partial t^{3}}(x,0)=0\), we have

$$\begin{aligned} g^{*}(x,0)=-\frac{(5+3\alpha )\alpha \beta }{24} {~_{~0}^{C}}\mathcal{D}_{t}^{2+\alpha } u(x,0)- \frac{1}{12}\frac{\partial ^{3}u}{\partial t^{3}} ({x},0)=0. \end{aligned}$$
(5.8)

Further, by \(u^{*}(x,0)=0\) for all \(x\in [0,L]\),

$$\begin{aligned} \displaystyle \frac{\partial u^{*}}{\partial t}(x,0)= & {} -\beta \frac{\partial ^{\alpha } u^{*}}{\partial t^{\alpha }}(x,0)+D \displaystyle \frac{\partial ^{2} u^{*}}{\partial x^{2}}(x,0) \nonumber \\&+\,q(x)u^{*}(x,0)+g^{*}(x,0)=0, \qquad x\in [0,L]. \end{aligned}$$
(5.9)

Consequently, an argument similar to that for Theorem 3.1 gives

$$\begin{aligned} \left| (R_{xt}^{*})_{i}^{n-\frac{1}{2}}\right| \le C^{**} \left( \tau +h^{4}\right) , \qquad 1\le i\le M-1,~~1\le n\le N, \end{aligned}$$
(5.10)

where \(C^{**}\) is a positive constant independent of the time step \(\tau \), the spatial step h and the time level n.

Multiplying (5.6) by \(-\tau ^{2}\) and adding the resulting equation to (2.27) lead to the following equation for the function \(W_{i}^{n}=U_{i}^{n}-\tau ^{2} U_{i}^{*,n}\):

$$\begin{aligned}&\displaystyle \mathcal{H}_{x}\delta _{t}W_{i}^{n-\frac{1}{2}}+\beta \tau ^{-\alpha } \sum _{k=0}^{n-1} a_{k}^{(\alpha )} \mathcal{H}_{x} W_{i}^{n-k-\frac{1}{2}}\nonumber \\&\quad =\displaystyle D \delta _{x}^{2}W_{i}^{n-\frac{1}{2}}+\mathcal{H}_{x} \left( q_{i}W_{i}^{n-\frac{1}{2}}\right) +\mathcal{H}_{x}g_{i}^{n-\frac{1}{2}}+({\widetilde{R}}_{xt})_{i}^{n-\frac{1}{2}},\nonumber \\&\quad 1\le i\le M-1, ~~1\le n\le N, \end{aligned}$$
(5.11)

where

$$\begin{aligned} ({\widetilde{R}}_{xt})_{i}^{n-\frac{1}{2}}=(R_{xt})_{i}^{n-\frac{1}{2}}-\tau ^{2}\mathcal{H}_{x}g_{i}^{*,n-\frac{1}{2}}-\tau ^{2}(R_{xt}^{*})_{i}^{n-\frac{1}{2}}. \end{aligned}$$

Clearly, by (2.28), (5.4) and (5.10),

$$\begin{aligned} \left| ({\widetilde{R}}_{xt})_{i}^{n-\frac{1}{2}}\right| \le {\widetilde{C}} \left( \tau ^{3}+h^{4}\right) , \qquad 1\le i\le M-1,~~1\le n\le N, \end{aligned}$$
(5.12)

where \({\widetilde{C}}\) is a positive constant independent of the time step \(\tau \), the spatial step h and the time level n.

Let \(\overline{e}_{i}^{n}=W_{i}^{n}-u_{i}^{n}\), where \(u_{i}^{n}\) is the solution of the compact scheme (2.29). We get from (2.29) and (5.11) that

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \mathcal{H}_{x}\delta _{t}\overline{e}_{i}^{n-\frac{1}{2}}+\beta \tau ^{-\alpha } \sum _{k=0}^{n-1} a_{k}^{(\alpha )} \mathcal{H}_{x} \overline{e}_{i}^{n-k-\frac{1}{2}}=\displaystyle D\delta _{x}^{2}\overline{e}_{i}^{n-\frac{1}{2}}+\mathcal{H}_{x} \left( q_{i}\overline{e}_{i}^{n-\frac{1}{2}}\right) +({\widetilde{R}}_{xt})_{i}^{n-\frac{1}{2}},\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\qquad 1\le i\le M-1, ~~1\le n\le N,\\ \overline{e}_{0}^{n}=\overline{e}_{M}^{n}=0, \qquad 1\le n\le N,\\ \overline{e}_{i}^{0}=0, \qquad 0\le i\le M. \end{array}\right. \end{aligned}$$
(5.13)

In view of this relation, we have the following estimates.

Theorem 5.1

Assume that the solution u(x,t) of the problem (2.1) is in \(\mathcal{C}^{6,4}([0,L]\times [0,T])\) and that \(\frac{\partial ^{k} u}{\partial t^{k}}(x,0)=0\) for \(k=1,2,3\). Also assume that the solution \(u^{*}(x,t)\) of the problem (5.5) is in \(\mathcal{C}^{6,3}([0,L]\times [0,T])\). Let \(U^{n}=(U_{0}^{n}, U_{1}^{n}, \ldots , U_{M}^{n})\) and \(U^{*,n}=(U_{0}^{*,n}, U_{1}^{*,n}, \ldots , U_{M}^{*,n})\), where \(U_{i}^{n}=u(x_{i},t_{n})\) and \(U_{i}^{*,n}=u^{*}(x_{i},t_{n})\). Also let \(u^{n}=(u_{0}^{n}, u_{1}^{n}, \ldots , u_{M}^{n})\) be the solution of the compact scheme (2.29). Then when \(\tau \Vert q\Vert _{\infty }^{2}\le \frac{10D}{3L^{2}}\), we have

$$\begin{aligned}&\left\| U^{n}-\tau ^{2}U^{*,n}-u^{n} \right\| \le \displaystyle \left( \displaystyle \frac{9L^{3}T }{D} \mathrm{exp} \left( \frac{9L^{2}\Vert q\Vert _{\infty }^{2}T}{D}\right) \right) ^{\frac{1}{2}} {\widetilde{C}}\left( \tau ^{3}+h^{4}\right) ,\nonumber \\&\quad 1\le n\le N, \end{aligned}$$
(5.14)

where the positive constant \({\widetilde{C}}\) is the same as that in (5.12).

Proof

The proof follows from the same argument as that in the proof of Theorem 4.3 by replacing (4.15) and (3.1) by (5.13) and (5.12), respectively. \(\square \)

Theorem 5.2

Assume that the solution u(x,t) of the problem (2.1) is in \(\mathcal{C}^{6,4}([0,L]\times [0,T])\) and that \(\frac{\partial ^{k} u}{\partial t^{k}}(x,0)=0\) for \(k=1,2,3\). Also assume that the solution \(u^{*}(x,t)\) of the problem (5.5) is in \(\mathcal{C}^{6,3}([0,L]\times [0,T])\). Let \(U^{n}=(U_{0}^{n}, U_{1}^{n}, \ldots , U_{M}^{n})\) and \(U^{*,n}=(U_{0}^{*,n}, U_{1}^{*,n}, \ldots , U_{M}^{*,n})\), where \(U_{i}^{n}=u(x_{i},t_{n})\) and \(U_{i}^{*,n}=u^{*}(x_{i},t_{n})\). Also let \(u^{n}=(u_{0}^{n}, u_{1}^{n}, \ldots , u_{M}^{n})\) be the solution of the compact scheme (2.29). If \(q(x)\equiv q\) is independent of x and \(q< \frac{16D}{3L^{2}}\), we have

$$\begin{aligned} \left\| U^{n}-\tau ^{2}U^{*,n}-u^{n} \right\| \le \displaystyle \left( \frac{3LT}{2 C_{1}}\right) ^{\frac{1}{2}} {\widetilde{C}} \left( \tau ^{3}+h^{4}\right) ,\qquad 1\le n\le N, \end{aligned}$$
(5.15)

where the positive constants \({\widetilde{C}}\) and \(C_{1}\) are the same as those in (5.12) and (4.11).

Proof

The proof is completed by using the same argument as that in the proof of Theorem 4.4 with (4.15) and (3.1) being replaced by (5.13) and (5.12), respectively. \(\square \)

An important application of Theorems 5.1 and 5.2 is the construction of a Richardson extrapolation algorithm for enhancing the temporal accuracy of the computed solution by the compact scheme (2.29). To this end, we denote by \(u^{n}(\tau )=\left( u_{0}^{n}(\tau ),u_{1}^{n}(\tau ), \ldots , u_{M}^{n}(\tau )\right) \) the solution of the compact scheme (2.29) obtained with the time step \(\tau \). The algorithm proceeds as follows.

Richardson extrapolation algorithm:

Step 1:

Compute \(u^{n}(\tau )\) and \(u^{2n}(\tau /2)\) by the compact scheme (2.29).

Step 2:

Compute the extrapolation solution \(u^{\mathrm{e},n}(\tau )=(u_{0}^{\mathrm{e},n}(\tau ),u_{1}^{\mathrm{e},n}(\tau ), \ldots , u_{M}^{\mathrm{e},n}(\tau ))\) by

$$\begin{aligned} u_{i}^{\mathrm{e},n}(\tau )=-\frac{1}{3}u_{i}^{n}(\tau )+\frac{4}{3}u_{i}^{2n}(\tau /2), \qquad 0\le i\le M. \end{aligned}$$
(5.16)
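
The algorithm is straightforward to implement on top of any solver for the compact scheme (2.29). The following minimal Python sketch illustrates the two steps; the function solve_compact is a hypothetical stand-in for an implementation of (2.29) that returns the computed solution at every time level on a fixed spatial mesh.

```python
import numpy as np

def richardson_extrapolate(solve_compact, tau, N, *args):
    """Temporal Richardson extrapolation (5.16) for the compact scheme (2.29).

    solve_compact(tau, N, *args) is assumed to return an array of shape
    (N + 1, M + 1) containing the computed solution at the time levels
    t_n = n * tau, n = 0, ..., N, on a fixed spatial mesh x_0, ..., x_M.
    """
    u_coarse = solve_compact(tau, N, *args)          # Step 1: time step tau
    u_fine = solve_compact(tau / 2, 2 * N, *args)    # Step 1: time step tau / 2

    # Step 2: combine the two solutions at the common time levels t_n = n * tau;
    # the fine solution at t_n is its (2n)-th time level, i.e. u_fine[::2].
    return (-u_coarse + 4.0 * u_fine[::2]) / 3.0
```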

For the extrapolation solution \(u^{\mathrm{e},n}(\tau )\), we have the following error estimates.

Theorem 5.3

Assume that the conditions of Theorem 5.1 are satisfied. Let \(U_{i}^{n}(\tau )\) be the value of the solution u(x,t) of the problem (2.1) at the mesh point \((x_{i}, t_{n})\) with the time step \(\tau \), and let \(U^{n}(\tau )=(U_{0}^{n}(\tau ),U_{1}^{n}(\tau ), \ldots , U_{M}^{n}(\tau ))\). Also let \(u^{\mathrm{e},n}(\tau )\) be the extrapolation solution from the Richardson extrapolation algorithm. Then we have

$$\begin{aligned}&\left\| U^{n}(\tau )-u^{\mathrm{e},n}(\tau ) \right\| \le \displaystyle \left( \displaystyle \frac{34L^{3}T }{D} \mathrm{exp} \left( \frac{9L^{2}\Vert q\Vert _{\infty }^{2}T}{D} \right) \right) ^{\frac{1}{2}} {\widetilde{C}} \left( \tau ^{3}+h^{4}\right) ,\nonumber \\&\quad 1\le n\le N, \end{aligned}$$
(5.17)

where the positive constant \({\widetilde{C}}\) is the same as that in (5.12).

Proof

Let \(U_{i}^{*,n}(\tau )\) be the value of the solution \(u^{*}(x,t)\) of the problem (5.5) at the mesh point \((x_{i}, t_{n})\) with the time step \(\tau \), and let \(W_{i}^{n}(\tau )=U_{i}^{n}(\tau )-\tau ^{2} U_{i}^{*,n}(\tau )\). Since \(U_{i}^{n}(\tau )=U_{i}^{2n}(\tau /2)\) and \(U_{i}^{*,n}(\tau )=U_{i}^{*,2n}(\tau /2)\), we have that

$$\begin{aligned} -\frac{1}{3}W_{i}^{n}(\tau )+\frac{4}{3}W_{i}^{2n}(\tau /2)=U_{i}^{n}(\tau ), \qquad 0\le i\le M. \end{aligned}$$
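
Indeed, since \(W_{i}^{n}(\tau )=U_{i}^{n}(\tau )-\tau ^{2}U_{i}^{*,n}(\tau )\) and \(W_{i}^{2n}(\tau /2)=U_{i}^{2n}(\tau /2)-(\tau /2)^{2}U_{i}^{*,2n}(\tau /2)=U_{i}^{n}(\tau )-\frac{\tau ^{2}}{4}U_{i}^{*,n}(\tau )\), a direct computation confirms this identity:

$$\begin{aligned} -\frac{1}{3}W_{i}^{n}(\tau )+\frac{4}{3}W_{i}^{2n}(\tau /2)=\left( -\frac{1}{3}+\frac{4}{3}\right) U_{i}^{n}(\tau )+\left( \frac{1}{3}-\frac{1}{3}\right) \tau ^{2}U_{i}^{*,n}(\tau )=U_{i}^{n}(\tau ). \end{aligned}$$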

By this equality and (5.16),

$$\begin{aligned} U_{i}^{n}(\tau )-u_{i}^{\mathrm{e},n}(\tau )=-\frac{1}{3} \left( W_{i}^{n}(\tau )-u_{i}^{n}(\tau )\right) +\frac{4}{3}\left( W_{i}^{2n}(\tau /2)-u_{i}^{2n}(\tau /2)\right) . \end{aligned}$$

Thus, the estimate (5.17) follows immediately from the estimate (5.14). \(\square \)

Theorem 5.4

Assume that the conditions of Theorem 5.2 are satisfied. Let \(U_{i}^{n}(\tau )\) be the value of the solution u(x,t) of the problem (2.1) at the mesh point \((x_{i}, t_{n})\) with the time step \(\tau \), and let \(U^{n}(\tau )=(U_{0}^{n}(\tau ),U_{1}^{n}(\tau ), \ldots , U_{M}^{n}(\tau ))\). Also let \(u^{\mathrm{e},n}(\tau )\) be the extrapolation solution from the Richardson extrapolation algorithm. Then we have

$$\begin{aligned} \left\| U^{n}(\tau )-u^{\mathrm{e},n}(\tau ) \right\| \le \displaystyle \left( \frac{17LT}{3 C_{1}}\right) ^{\frac{1}{2}} {\widetilde{C}} \left( \tau ^{3}+h^{4}\right) ,\qquad 1\le n\le N, \end{aligned}$$
(5.18)

where the positive constants \({\widetilde{C}}\) and \(C_{1}\) are the same as those in (5.12) and (4.11).

Proof

The proof is similar to that of Theorem 5.3, using the estimate (5.15) instead of the estimate (5.14). \(\square \)

Theorems 5.3 and 5.4 show that the extrapolation solution \(u_{i}^{\mathrm{e},n}\) from the Richardson extrapolation algorithm converges to the solution of the problem (2.1) with the order \(\mathcal{O}\left( \tau ^{3}+h^{4}\right) \). This implies that the Richardson extrapolation algorithm enhances the temporal accuracy of the computed solution by the compact scheme (2.29) from the second order to the third order. It is worth noting that our extrapolation algorithm requires a weaker condition on the solution u(x,t) than that in [16] to achieve the third-order temporal accuracy.

Remark 5.1

With the same discretization parameters, the Richardson extrapolation algorithm requires more arithmetic operations than the compact scheme (2.29) itself. However, its higher temporal accuracy allows the use of a much larger time step to obtain satisfactory numerical results, and thus a substantial amount of computational work is saved (see the numerical results in the next section).

Remark 5.2

The same remark as given in Remark 4.3 for Theorems 4.3 and 4.4 holds true for Theorems 5.3 and 5.4.

We end this section with a brief comment on the derivative conditions at the initial time used in Theorems 3.1, 5.1 and 5.2. In Theorems 3.1, 5.1 and 5.2, we have assumed the derivative conditions \(\frac{\partial ^{k} u}{\partial t^{k}}(x,0)=0\) for \(k=1,2\) and \(\frac{\partial ^{k} u}{\partial t^{k}}(x,0)=0\) for \(k=1,2,3\), respectively. If these derivative conditions are not satisfied, one may consider the problem (2.1) for

$$\begin{aligned} z(x,t)=u(x,t)-\sum _{k=1}^{2}\frac{1}{k!}\frac{\partial ^{k} u}{\partial t^{k}}(x,0) t^{k}\quad \mathrm{or}\quad z(x,t)=u(x,t)-\sum _{k=1}^{3}\frac{1}{k!}\frac{\partial ^{k} u}{\partial t^{k}}(x,0) t^{k} \end{aligned}$$

instead, where the coefficients \(\frac{\partial ^{k} u}{\partial t^{k}}(x,0)\) \((k=1,2,3)\) can be computed from the known function g(x,t) as given in the following propositions.
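
In either case, z satisfies a problem of the same form as (2.1), with the boundary data modified accordingly (the initial data are unchanged since the subtracted terms vanish at \(t=0\)). A short calculation (a sketch, under the regularity assumed above, with \(m=2\) or \(m=3\) denoting the number of subtracted terms) gives the corresponding source term explicitly:

$$\begin{aligned} g_{z}(x,t)=g(x,t)-\sum _{k=1}^{m}\left( \frac{t^{k-1}}{(k-1)!}+\frac{\beta \, t^{k-\alpha }}{{\varGamma }(k+1-\alpha )}\right) \frac{\partial ^{k} u}{\partial t^{k}}(x,0) +\sum _{k=1}^{m}\frac{t^{k}}{k!}\left( D\frac{\partial ^{2} ~}{\partial x^{2}}+q(x)\right) \frac{\partial ^{k} u}{\partial t^{k}}(x,0). \end{aligned}$$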

Proposition 5.1

Assume that the solution u(x,t) of the problem (2.1) is in \(\mathcal{C}^{2,1}([0,L]\times [0,T])\). Then

$$\begin{aligned} \frac{\partial u}{\partial t}(x,0)=g(x,0). \end{aligned}$$
(5.19)

Proposition 5.2

Assume that the solution u(x,t) of the problem (2.1) is in \(\mathcal{C}^{2,2}([0,L]\times [0,T])\). Let

$$\begin{aligned} F(x,t)=g(x,t)-\frac{\beta g(x,0)}{{\varGamma }(2-\alpha )} t^{1-\alpha }. \end{aligned}$$
(5.20)

Then the limit \(\lim \nolimits _{t\rightarrow 0} \frac{\partial F}{\partial t} (x,t)\) exists and

$$\begin{aligned} \displaystyle \frac{\partial ^{2} u}{\partial t^{2}}(x,0)=\lim \limits _{t\rightarrow 0} \frac{\partial F}{\partial t} (x,t)+D \displaystyle \frac{\partial ^{2} g}{\partial x^{2}}(x,0)+q(x)g(x,0). \end{aligned}$$
(5.21)

Proposition 5.3

Assume that the solution u(x,t) of the problem (2.1) is in \(\mathcal{C}^{2,3}([0,L]\times [0,T])\). Let

$$\begin{aligned} G(x,t)=g(x,t)-\frac{\beta g(x,0)}{{\varGamma }(2-\alpha )} t^{1-\alpha }-\frac{\beta }{{\varGamma }(3-\alpha )} \frac{\partial ^{2} u}{\partial t^{2}}(x,0) t^{2-\alpha }. \end{aligned}$$
(5.22)

Then the limit \(\lim \nolimits _{t\rightarrow 0} \frac{\partial ^{2} G}{\partial t^{2}} (x,t)\) exists and

$$\begin{aligned} \displaystyle \frac{\partial ^{3} u}{\partial t^{3}}(x,0)=\lim \limits _{t\rightarrow 0} \frac{\partial ^{2} G}{\partial t^{2}} (x,t)+D \displaystyle \frac{\partial ^{2} ~}{\partial x^{2}} \left( \frac{\partial ^{2} u}{\partial t^{2}}(x,0)\right) +q(x)\frac{\partial ^{2} u}{\partial t^{2}}(x,0), \qquad \end{aligned}$$
(5.23)

where \(\frac{\partial ^{2} u}{\partial t^{2}}(x,0)\) is computed from (5.21).
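
The particular correction terms used in (5.20) and (5.22) reflect the small-time behaviour of the Caputo derivative. A brief heuristic (a sketch only, assuming u is sufficiently smooth in time near \(t=0\)): for \(0<\alpha <1\),

$$\begin{aligned} \frac{\partial ^{\alpha } u}{\partial t^{\alpha }}(x,t)=\sum _{k=1}^{m} \frac{\partial ^{k} u}{\partial t^{k}}(x,0)\, \frac{t^{k-\alpha }}{{\varGamma }(k+1-\alpha )}+o\left( t^{m-\alpha }\right) ,\qquad t\rightarrow 0^{+}. \end{aligned}$$

Hence, by the differential equation in (2.1), the source term \(g=\frac{\partial u}{\partial t}+\beta \frac{\partial ^{\alpha } u}{\partial t^{\alpha }}-D\frac{\partial ^{2} u}{\partial x^{2}}-q(x)u\) contains the nonsmooth components \(\frac{\beta g(x,0)}{{\varGamma }(2-\alpha )}t^{1-\alpha }\) and \(\frac{\beta }{{\varGamma }(3-\alpha )}\frac{\partial ^{2} u}{\partial t^{2}}(x,0)t^{2-\alpha }\), and these are exactly the terms subtracted in (5.20) and (5.22) so that the limits in (5.21) and (5.23) exist.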

The proofs of the above propositions will be given in the Appendix.

6 Applications and numerical results

In this section, we apply the proposed compact finite difference method and Richardson extrapolation algorithm to two fractional mobile/immobile convection–diffusion problems of the form (1.3). The exact analytical solution v(x,t) of each problem is explicitly known and is used to compare with the computed solution \(v_{i}^{n}=u_{i}^{n}/k_{i}\) and the extrapolation solution \(v_{i}^{\mathrm{e},n}=u_{i}^{\mathrm{e},n}/k_{i}\), so as to check the accuracy of the compact finite difference method and the efficiency of the Richardson extrapolation algorithm. Here \(u_{i}^{n}\) and \(u_{i}^{\mathrm{e},n}\) are the solutions of the compact scheme (2.29) and the Richardson extrapolation algorithm, respectively, and \(k_{i}=k(x_{i})\). In order to demonstrate the high-order temporal accuracy of the compact scheme (2.29), we also make some numerical comparisons with the compact scheme (2.17) given in [33].

To demonstrate the accuracy of the computed solution \(v_{i}^{n}\) and the extrapolation solution \(v_{i}^{\mathrm{e},n}\), we compute their errors by

$$\begin{aligned} \mathrm{e}(\tau ,h)=\max _{0\le n\le N}\left\| V^{n}-z^{n}\right\| , \end{aligned}$$
(6.1)

where \(V^{n}_{i}=v(x_{i},t_{n})\) and \(z_{i}^{n}\) represents the computed solution \(v_{i}^{n}\) or the extrapolation solution \(v_{i}^{\mathrm{e},n}\). The temporal and spatial convergence orders are computed, respectively, by

$$\begin{aligned} \mathrm{order}_{ t}(\tau ,h)=\log _{2}\left( \frac{\mathrm{e}(2\tau ,h)}{\mathrm{e}(\tau ,h)}\right) ,\qquad \mathrm{order}_{ s}(\tau ,h)=\log _{2}\left( \frac{\mathrm{e}(\tau ,2h)}{\mathrm{e}(\tau ,h)}\right) . \end{aligned}$$
(6.2)

All computations are carried out using a MATLAB subroutine on a computer with a Xeon X5650 CPU and 96 GB of memory.
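
Purely for illustration (the actual computations are done in MATLAB as noted above, and the norm \(\Vert \cdot \Vert \) is taken here as a discrete \(L^{2}\) norm over the interior mesh points, which is an assumption of this sketch), the quantities (6.1) and (6.2) can be evaluated along the following lines in Python.

```python
import numpy as np

def max_error(V, z, h):
    """e(tau, h) in (6.1): the maximum over all time levels of ||V^n - z^n||,
    with the norm taken as a discrete L2 norm over the interior mesh points."""
    diff = (V - z)[:, 1:-1]            # V, z have shape (N + 1, M + 1); exclude boundary nodes
    norms = np.sqrt(h * np.sum(diff ** 2, axis=1))
    return norms.max()

def observed_order(e_coarse, e_fine):
    """order_t or order_s in (6.2): log2 of the error ratio when the step is halved."""
    return np.log2(e_coarse / e_fine)
```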

Example 6.1

We first consider the problem (1.3) in the domain \([0,\pi ]\times [0,1]\) with \(\beta =D=1\), \(V(x)=-\sin x\) and

$$\begin{aligned} f(x,t)=t^{2+\alpha }\left( 3+\alpha +t+\frac{{\varGamma }(4+\alpha ) }{6} t^{1-\alpha }\right) \cos x+t^{3+\alpha } \sin ^{2} x. \end{aligned}$$
(6.3)

The boundary functions are given by

$$\begin{aligned} \phi _{0}(t)=t^{3+\alpha }, \qquad \phi _{L}(t)=-t^{3+\alpha }. \end{aligned}$$
(6.4)

It is easy to check that \(v(x,t)=t^{3+\alpha }\cos x\) is the solution of this problem and the function q(x) in the problem (2.1) is given by \(q(x)=-\frac{1}{4} \left( 2\cos x+\sin ^{2} x\right) \).
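
As a quick symbolic check (a sketch only: it assumes that (1.3) takes the form \(\frac{\partial v}{\partial t}+\beta \frac{\partial ^{\alpha } v}{\partial t^{\alpha }}=D\frac{\partial ^{2} v}{\partial x^{2}}-V(x)\frac{\partial v}{\partial x}+f(x,t)\) and uses the Caputo formula \(\frac{\partial ^{\alpha }}{\partial t^{\alpha }}t^{\mu }={\varGamma }(\mu +1)t^{\mu -\alpha }/{\varGamma }(\mu +1-\alpha )\)), one can verify with sympy that \(v(x,t)=t^{3+\alpha }\cos x\) reproduces the source term (6.3):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
alpha = sp.symbols('alpha', positive=True)     # fractional order, 0 < alpha < 1

v = t**(3 + alpha) * sp.cos(x)                 # exact solution of Example 6.1
V = -sp.sin(x)                                 # variable convection coefficient

# Caputo derivative of t^(3 + alpha): Gamma(4 + alpha) / Gamma(4) * t^3
caputo_v = sp.gamma(4 + alpha) / 6 * t**3 * sp.cos(x)

# source term from the assumed form of (1.3) with beta = D = 1
f_from_pde = sp.diff(v, t) + caputo_v - sp.diff(v, x, 2) + V * sp.diff(v, x)

# source term (6.3) as given in the paper
f_paper = (t**(2 + alpha) * (3 + alpha + t + sp.gamma(4 + alpha) / 6 * t**(1 - alpha))
           * sp.cos(x) + t**(3 + alpha) * sp.sin(x)**2)

print(sp.simplify(f_from_pde - f_paper))       # expected output: 0
```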

We first test the temporal convergence order of the compact scheme (2.29) and the Richardson extrapolation algorithm for different \(\alpha \). Let the spatial step \(h=\pi /400\). Table 1 gives the error \(\mathrm{e}(\tau ,h)\) and the temporal convergence order \(\mathrm{order}_{ t}(\tau ,h)\) of the computed solution \(v_{i}^{n}\) for \(\alpha =1/4, 1/2, 3/4\) and different time steps \(\tau \). The corresponding error and temporal convergence order of the extrapolation solution \(v_{i}^{\mathrm{e},n}\) are given in Table 2. As expected from our theoretical analysis, the computed solution \(v_{i}^{n}\) has the second-order temporal accuracy while the extrapolation solution \(v_{i}^{\mathrm{e},n}\) possesses the third-order temporal accuracy. For comparison, the error \(\mathrm{e}(\tau ,h)\) and the temporal convergence order \(\mathrm{order}_{ t}(\tau ,h)\) of the computed solution \(v_{i}^{n}\) by the compact scheme (2.17) given in [33] are also listed in Table 1. It is seen that this scheme has only the temporal accuracy of order \(2-\alpha \), which is less than two, and thus it is considerably less accurate than the compact scheme (2.29) proposed in this paper.

Table 1 The error and the temporal convergence order of the computed solution \(v_{i}^{n}\) for Example 6.1 \((h=\pi /400)\)
Table 2 The error and the temporal convergence order of the extrapolation solution \(v_{i}^{\mathrm{e},n}\) for Example 6.1 \((h=\pi /400)\)

We next compute the spatial convergence order of the compact scheme (2.29). Table 3 presents the error \(\mathrm{e}(\tau ,h)\) and the spatial convergence order \(\mathrm{order}_{ s}(\tau ,h)\) of the computed solution \(v_{i}^{n}\) for \(\alpha =1/4, 1/2, 3/4\) and different spatial steps h, where the time step \(\tau =1/5000\). The data in this table demonstrate that the computed solution \(v_{i}^{n}\) has the fourth-order spatial accuracy, in good agreement with the analysis.

Table 3 The error and the spatial convergence order of the computed solution \(v_{i}^{n}\) for Example 6.1 \((\tau = 1/5000)\)

In order to demonstrate the effectiveness of using a larger time step in the Richardson extrapolation algorithm, we compare it with the compact scheme (2.29) for a fixed spatial step \(h= \pi /800\) and different time steps. Tables 4 and 5 give the error \(\mathrm{e}(\tau ,h)\) and the corresponding computational cost (CPU time in seconds) of the computed solution \(v_{i}^{n}\) and the extrapolation solution \(v_{i}^{\mathrm{e},n}\), respectively, for \(\alpha =1/4, 1/2, 3/4\). We see that, in order to obtain a computed solution \(v_{i}^{n}\) by the compact scheme (2.29) for \(\alpha =1/4\) with an error of about \(2.252\times 10^{-9}\), we need to take \(\tau =1/10240\), at a cost of 110.260 CPU seconds. In contrast, the Richardson extrapolation algorithm provides a more accurate computed solution already with \(\tau =1/160\): in this case, the error is \(2.142\times 10^{-9}\), and the corresponding cost is only 0.375 CPU seconds. Similar comparisons can be made with the other data. These comparisons demonstrate the high efficiency of using a larger time step in the Richardson extrapolation algorithm and justify our efforts to develop this extrapolation algorithm.

Table 4 The error and the computational cost of the computed solution \(v_{i}^{n}\) for Example 6.1 \((h=\pi /800)\)
Table 5 The error and the computational cost of the extrapolation solution \(v_{i}^{\mathrm{e},n}\) for Example 6.1 \((h= \pi /800)\)

Example 6.2

In this example, we consider the problem (1.3) in the domain \([0,100]\times [0,1]\) with \(\beta =D=1\), \(V(x)=\cos (\pi x)\) and

$$\begin{aligned} f(x,t)= & {} \Big ( {\varGamma }(4+\alpha )t^{3}+\frac{24}{{\varGamma }(4-\alpha )}t^{3-\alpha }+\frac{6}{{\varGamma }(3-\alpha )}t^{2-\alpha }+\frac{2}{{\varGamma }(2-\alpha )}t^{1-\alpha }\nonumber \\&+\,\eta (x,t)\Big ) \cos (\pi x), \end{aligned}$$
(6.5)

where \(\eta (x,t)=\varsigma ^{\prime }(t)+\pi (\pi - \sin (\pi x) ) \varsigma (t)\) with \(\varsigma (t)=6t^{3+\alpha }+4t^{3}+3t^{2}+2t\). The boundary functions are chosen as

$$\begin{aligned} \phi _{0}(t)=\phi _{L}(t)=\varsigma (t). \end{aligned}$$
(6.6)

This choice implies that \(v(x,t)=\varsigma (t)\cos (\pi x)\) is the solution of this problem. Clearly, the function q(x) in the problem (2.1) is given by \(q(x)=-\frac{1}{4} \left( 2\pi \sin (\pi x)+\cos ^{2}(\pi x)\right) \).

For this example, the problem (2.1) has the solution \(u(x,t)=\varsigma (t)k(x) \cos (\pi x)\), where \(k(x)=\exp \left( -{\sin (\pi x)}/{(2\pi )}\right) \). One can check that the zero derivative conditions \(\frac{\partial ^{k} u}{\partial t^{k}}(x,0)=0\) \((k=1,2)\) in Theorem 3.1 are no longer satisfied. We now test the performance of the compact scheme (2.29) in this case. Let the spatial step \(h=1/400\). Table 6 lists the error \(\mathrm{e}(\tau ,h)\) and the temporal convergence order \(\mathrm{order}_{ t}(\tau ,h)\) of the computed solution \(v_{i}^{n}\) for \(\alpha =1/4, 1/2, 3/4\) and different time steps \(\tau \). It is seen that the second-order temporal accuracy of the computed solution \(v_{i}^{n}\) is still achieved for \(\alpha =1/4\), but not for \(\alpha =1/2\) and \(\alpha =3/4\). This suggests that the zero derivative conditions in Theorem 3.1 are sufficient but not necessary, although certain zero derivative conditions are indeed indispensable for ensuring the second-order temporal accuracy of the compact scheme (2.29).

Table 6 The error and the temporal convergence order of the computed solution \(v_{i}^{n}\) for Example 6.2 \((h=1/400)\)

According to Propositions 5.1–5.3, we know that

$$\begin{aligned} \frac{\partial u}{\partial t}(x,0)= & {} 2k(x)\cos (\pi x), \qquad \frac{\partial ^{2} u}{\partial t^{2}}(x,0)= 6k(x)\cos (\pi x),\\ \frac{\partial ^{3} u}{\partial t^{3}}(x,0)= & {} 24k(x)\cos (\pi x). \end{aligned}$$
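
These values are easy to confirm: since \(u=\varsigma (t)k(x)\cos (\pi x)\), we have \(\frac{\partial ^{j} u}{\partial t^{j}}(x,0)=\varsigma ^{(j)}(0^{+})k(x)\cos (\pi x)\), and a small sympy sketch (shown for the sample value \(\alpha =1/2\)) verifies \(\varsigma ^{\prime }(0^{+})=2\), \(\varsigma ^{\prime \prime }(0^{+})=6\) and \(\varsigma ^{\prime \prime \prime }(0^{+})=24\):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
alpha = sp.Rational(1, 2)                      # a sample fractional order in (0, 1)
varsigma = 6*t**(3 + alpha) + 4*t**3 + 3*t**2 + 2*t

# one-sided limits at t = 0 of the first three derivatives of varsigma
for j in (1, 2, 3):
    print(j, sp.limit(sp.diff(varsigma, t, j), t, 0, '+'))   # prints 2, 6, 24
```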

We now transform the problem (2.1) of this example by letting

$$\begin{aligned} z(x,t)=u(x,t)-k(x)(4t^{3}+3t^{2}+2t)\cos (\pi x), \end{aligned}$$
(6.7)

so that the zero derivative conditions \(\frac{\partial ^{k} u}{\partial t^{k}}(x,0)=0\) \((k=1,2,3)\) in Theorems 3.1, 5.1 and 5.2 are satisfied by the transformed problem. We now use the compact scheme (2.29) and the Richardson extrapolation algorithm to solve this transformed problem. In Table 7, we give the error \(\mathrm{e}(\tau ,h)\) and the temporal convergence order \(\mathrm{order}_{ t}(\tau ,h)\) of the computed solution \(v_{i}^{n}\) by the compact scheme (2.29) for \(\alpha =1/4, 1/2, 3/4\) and different time steps \(\tau \), where the spatial step \(h=1/400\). The corresponding error and temporal convergence order of the extrapolation solution \(v_{i}^{\mathrm{e},n}\) are given in Table 8, where the spatial step \(h=1/1000\). It is seen that the computed solution \(v_{i}^{n}\) has the second-order temporal accuracy while the extrapolation solution \(v_{i}^{\mathrm{e},n}\) possesses the third-order temporal accuracy. This is in accordance with the explanation given at the end of Sect. 5. In Table 7, we also list the error \(\mathrm{e}(\tau ,h)\) and the temporal convergence order \(\mathrm{order}_{ t}(\tau ,h)\) of the computed solution \(v_{i}^{n}\) by the compact scheme (2.17) in [33]. It is seen that the compact scheme (2.17) in [33] has only the temporal accuracy of order \(2-\alpha \).

Table 7 The error and the temporal convergence order of the computed solution \(v_{i}^{n}\) for the transformed problem of Example 6.2 \((h=1/400)\)
Table 8 The error and the temporal convergence order of the extrapolation solution \(v_{i}^{\mathrm{e},n}\) for the transformed problem of Example 6.2 \((h=1/1000)\)

Since \(L=100\), \(D=1\) and \(\Vert q \Vert _{\infty }=\frac{\pi }{2}\), the restriction condition on \(\tau \) in Theorems 4.1, 4.3 and 5.3 reduces for the present problem to \(\tau \le \frac{1}{750\pi ^{2}}\) (indeed, \(\tau \Vert q\Vert _{\infty }^{2}\le \frac{10D}{3L^{2}}\) becomes \(\tau \frac{\pi ^{2}}{4}\le \frac{1}{3000}\), i.e., \(\tau \le \frac{1}{750\pi ^{2}}\)). Clearly, this condition is not satisfied for all \(\tau \) in Tables 7 and 8. However, the corresponding numerical results in these tables show that the compact scheme (2.29) and the Richardson extrapolation algorithm are still stable and convergent. This indicates that the restriction condition on \(\tau \) in those theorems is only a sufficient condition for the stability and convergence of the compact scheme (2.29) and the Richardson extrapolation algorithm with a general q(x).

The numerical results in Table 9 give the error \(\mathrm{e}(\tau ,h)\) and the spatial convergence order \(\mathrm{order}_{ s}(\tau ,h)\) of the computed solution \(v_{i}^{n}\) by the compact scheme (2.29) for \(\alpha =1/4, 1/2, 3/4\) and different spatial steps h, where the time step \(\tau =1/20000\). These results show that the compact scheme (2.29) achieves the fourth-order spatial accuracy.

Table 9 The error and the spatial convergence order of the computed solution \(v_{i}^{n}\) for the transformed problem of Example 6.2 \((\tau =1/20000)\)

In Tables 10 and 11, we compare the Richardson extrapolation algorithm with the compact scheme (2.29) in terms of accuracy and computational cost (CPU time in seconds). It is seen that the Richardson extrapolation algorithm attains a more accurate computed solution with a much larger time step than the compact scheme (2.29), and thus saves a substantial amount of computational work. This again demonstrates the effectiveness of the Richardson extrapolation algorithm.

Table 10 The error and the computational cost of the computed solution \(v_{i}^{n}\) for the transformed problem of Example 6.2 \((h=1/1000)\)
Table 11 The error and the computational cost of the extrapolation solution \(v_{i}^{\mathrm{e},n}\) for the transformed problem of Example 6.2 \((h= 1/1000)\)

7 Concluding remarks

In this work, we have presented and analyzed a high-order compact finite difference method for a class of fractional mobile/immobile convection–diffusion equations. The convection coefficient of the equation may be spatially variable. In our method, a fourth-order compact finite difference approximation is used for the spatial derivative, and a second-order difference approximation is applied to the time first derivative and the Caputo time fractional derivative. We have proved that the resulting scheme is uniquely solvable and (almost) unconditionally stable and convergent. The error estimates show that the proposed method has the second-order temporal accuracy and the fourth-order spatial accuracy, regardless of the order of the fractional derivative. A Richardson extrapolation algorithm has been developed to enhance the temporal accuracy of the final computed solution (the extrapolation solution) to the third order. The numerical results agree well with the analysis and show that the Richardson extrapolation algorithm is particularly attractive due to its improved temporal accuracy, compared with the original compact difference scheme.

Another important feature of the proposed method is that it yields a very simple and effective high-order scheme for time fractional convection–diffusion equations with spatially variable convection coefficients, and the related theoretical analysis is quite transparent. In the future, we plan to make further investigations and extend this method to other fractional differential equations.