1 Introduction

Partial integro-differential equations with a weakly singular kernel have many applications in various fields of science and engineering, such as heat conduction in materials with memory, viscoelasticity, reactor dynamics, biomechanics, and pressure in porous media [1,2,3,4]. Several numerical methods have been used for solving integro-differential equations with a weakly singular kernel. For example, Wang et al. [5] proposed a high-order compact alternating direction implicit scheme for solving two-dimensional time-fractional integro-differential equations with a weak singularity near the initial time. Qiu et al. [6] introduced and analyzed a Sinc–Galerkin method for solving the fourth-order partial integro-differential equation with a weakly singular kernel. In [7], the Sinc-collocation approach combined with the double exponential transformation was employed for solving a class of variable-order fractional integro-partial differential equations. Fakhar-Izadi [8] derived a space-time Spectral–Galerkin method for the solution of one- and two-dimensional fourth-order time-fractional partial integro-differential equations with a weakly singular kernel. Zhang et al. [9] proposed the quintic B-spline collocation method for solving fourth-order partial integro-differential equations with a weakly singular kernel. Dehestani et al. [10] applied Legendre–Laguerre functions and the collocation method for solving variable-order time-fractional partial integro-differential equations. Hashemizadeh et al. [11] presented a spectral method based on Genocchi polynomials for solving nonlinear Volterra integral equations with a weakly singular kernel. Biazar and Sadri [12] presented an operational approach based on shifted Jacobi polynomials for solving a class of weakly singular fractional integro-differential equations.

Fractional calculus has proved to be a valuable tool in the modeling of different materials and processes in many applied sciences, such as biology, biomechanics, and electrochemistry, owing to their memory and hereditary properties [13,14,15,16]. Various numerical schemes have been presented for solving fractional partial differential equations, such as finite difference [17,18,19,20,21,22], spectral [23,24,25], meshless [26, 27], and finite element [28, 29] methods.

In this paper, we consider the fourth-order time-fractional integro-differential equation with a weakly singular kernel as follows [30]:

$${\left\{ \begin{array}{l} _{C}{\mathcal {D}}_{0,t}^\alpha u(x,t)-u_{xx}(x,t)-{\mathcal {I}}^{(\beta )}u_{xx}(x,t)+u_{xxxx}(x,t)=f(x,t),\\ (x,t)\in \Omega ,\\ u(x,0)=u^{0}(x), 0\le x \le L,\\ u(0,t)=u(L,t)=u_{xx}(0,t)=u_{xx}(L,t)=0, 0<t\le T, \end{array}\right. }$$
(1)

where \(\Omega =(0,L)\times (0,T]\), \(0<\alpha ,\beta <1\), f(x, t) is the source term, and \(u^{0}(x)\) is a given smooth function. In fact, problem (1) is equivalent to

$${\left\{ \begin{array}{l} _{C}{\mathcal {D}}_{0,t}^\alpha u(x,t)-v(x,t)-{\mathcal {I}}^{(\beta )}v(x,t)+v_{xx}(x,t)=f(x,t),\\ (x,t)\in \Omega ,\\ v(x,t)=u_{xx}(x,t),0<x<L,0<t\le T,\\ u(x,0)=u^{0}(x), 0\le x \le L,\\ u(0,t)=u(L,t)=v(0,t)=v(L,t)=0, 0<t\le T. \end{array}\right. }$$
(2)

In (2), \(_{C}{\mathcal {D}}_{0,t}^\alpha\) is the fractional derivative operator in the Caputo sense and \({\mathcal {I}}^{(\beta )}\) is defined as follows:

$$\begin{aligned} {\mathcal {I}}^{(\beta )}u_{xx}(x,t)=\frac{1}{\Gamma (\beta )}\int _{0}^{t}(t-s)^{\beta -1}u_{xx}(x,s)ds,t>0, \end{aligned}$$
(3)

where \(\Gamma (\cdot )\) is the Gamma function.

Equation (1) arises in the modeling of floor systems, window glasses, airplane wings, and bridge slabs [31, 32]. In fact, fourth-order spatial derivative operators are needed in the modeling of heat flow in materials with memory, strain gradient elasticity, and phase separation in binary mixtures [33,34,35].

The fourth-order fractional equations have recently attracted the attention of researchers. For example, in [36], the authors proposed a new study of weakly singular kernel fractional fourth-order partial integro-differential equations by means of the optimum q-HAM. Tariq and Akram developed a quintic spline technique for time-fractional fourth-order partial differential equations [32]. Heydari and Avazzadeh used the orthonormal Bernstein polynomials to solve nonlinear variable-order time-fractional fourth-order diffusion-wave equations with a nonsingular fractional derivative [37]. Abdelkawy et al. [38] derived a highly accurate technique for solving fourth-order distributed-order time-fractional sub-diffusion equations. Yang et al. [39] introduced a quasi-wavelet based numerical method for fourth-order partial integro-differential equations with a weakly singular kernel. Roul and Goura considered a high-order numerical method for time-fractional fourth-order partial differential equations [40].

Cubic B-spline quasi-interpolation has been applied in several papers; see [41,42,43,44,45,46,47]. The fundamental benefit of B-spline quasi-interpolants is that they may be built directly, without solving any system of linear equations. They also yield a good approximation of smooth functions. Furthermore, they are local in the sense that the value of a B-spline quasi-interpolant at a given point is determined solely by the values of the given function in a neighborhood of that point. Sablonnière [48] found that the first derivative of the cubic B-spline quasi-interpolant is more accurate than the corresponding finite difference approximation. Among the numerical methods so far proposed to solve time-fractional integro-differential equations, B-spline quasi-interpolations have rarely been used. This motivates us to construct a numerical scheme based on cubic B-spline quasi-interpolation to solve equation (1).

In this paper, we construct a difference method using cubic B-spline quasi-interpolation for problem (1). We approximate the temporal Caputo derivative with the \(L_1\)-discrete formula and apply a second-order formula to approximate the operator \({\mathcal {I}}^{(\beta )}\). We then prove the stability and convergence of the difference method; the convergence order of the scheme is \(2-\alpha\) in time and 2 in space. Numerical examples verify the accuracy of the proposed method. The advantages of the method are its flexibility, simplicity, and low computational cost.

The remainder of the paper is organized as follows. In section 2, we introduce some definitions and preliminaries to fractional calculus and cubic B-spline quasi-interpolation. The difference scheme for the fourth-order time-fractional integro-differential equation with a weakly singular kernel is derived in section 3. The stability and convergence of the method are investigated in Sections 4 and 5. In section 6, some numerical examples are provided to demonstrate the theoretical results. A conclusion ends the article.

2 Some definitions and preliminaries

The domain is divided into a uniform grid of mesh points \((x_j,t_k)\) with \(x_{j}=jh\), \(h=\frac{L}{M}\), \(0\le j\le M\) and \({t_k} = k\tau\), \(\tau =\frac{T}{N}\), \(0\le k\le N\). The value of the function u at a grid point is denoted by \(u(x_{j},t_{k})\), and \(U_{j}^{k}\) is the approximate solution at the point \((x_{j},t_{k})\).

Definition 1

The left- and right-sided Riemann–Liouville integrals of a suitably smooth function f(x) on (a, b) are defined by [31, 49, 50]

$$\begin{aligned}&_{RL}{\mathcal {I}}_{a,x}^{\alpha }f(x)=\frac{1}{\Gamma (\alpha )}\int _{a}^{x}\frac{f(t)}{(x-t)^{1-\alpha }}dt,\quad a<x,\ \alpha >0,\end{aligned}$$
(4)
$$\begin{aligned}&_{RL}{\mathcal {I}}_{x,b}^{\alpha }f(x)=\frac{1}{\Gamma (\alpha )}\int _{x}^{b}\frac{f(t)}{(t-x)^{1-\alpha }}dt,\quad x<b,\ \alpha >0, \end{aligned}$$
(5)

respectively.

Definition 2

The left- and right-sided Riemann–Liouville derivatives of order \(\alpha\) are defined by [31, 49, 51]

$$\left\{ \begin{array}{l} _{RL}{\mathcal {D}}_{a,x}^{\alpha }f(x)=\frac{d^{n}}{dx^{n}}\big (_{RL}{\mathcal {D}}_{a,x}^{-(n-\alpha )}f(x)\big )\\ \quad\quad\quad\quad\quad\, =\frac{1}{\Gamma (n-\alpha )}\frac{d^{n}}{dx^{n}}\int _{a}^{x}\frac{f(t)}{(x-t)^{\alpha -n+1}}dt,x>a, \end{array}\right.$$
(6)

and

$$\left\{ \begin{array}{l} _{RL}{\mathcal {D}}_{x,b}^{\alpha }f(x)=(-1)^{n}\frac{d^{n}}{dx^{n}}\big (_{RL}{\mathcal {D}}_{x,b}^{-(n-\alpha )}f(x)\big )\\ \quad\quad\quad\quad\quad\,=\frac{(-1)^n}{\Gamma (n-\alpha )}\frac{d^{n}}{dx^{n}}\int _{x}^{b}\frac{f(t)}{(t-x)^{\alpha -n+1}}dt,x<b, \end{array}\right.$$
(7)

respectively, where n is a positive integer satisfying \(n-1<\alpha \le n\).

Definition 3

The left- and right-sided Caputo derivatives of order \(\alpha\) are defined by [16, 31, 49]

$$\begin{aligned} _{C}{\mathcal {D}}_{a,x}^{\alpha }f(x)=\frac{1}{\Gamma (n-\alpha )}\int _{a}^{x}\frac{f^{(n)}(t)}{(x-t)^{\alpha -n+1}}dt,\quad a<x, \end{aligned}$$
(8)

and

$$\begin{aligned} _{C}{\mathcal {D}}_{x,b}^{\alpha }f(x)=\frac{(-1)^n}{\Gamma (n-\alpha )}\int _{x}^{b}\frac{f^{(n)}(t)}{(t-x)^{\alpha -n+1}}dt,\quad x<b, \end{aligned}$$
(9)

respectively, where n is a positive integer satisfying \(n-1<\alpha \le n\).

Lemma 1

(\(L_1\) approximation) Let \(\alpha \in (0,1)\) and \(u(\cdot ,t)\in C_t^2([0,T])\); then the following approximation formula holds [52, 53]:

$$\begin{aligned} _{C}{\mathcal {D}}_{0,t}^{\alpha }u(x,t_k)&=\frac{\tau ^{-\alpha }}{\Gamma (2-\alpha )}\bigg [b_0u(x,t_k)-\sum _ {j=1}^{k-1}(b_{k-j-1}-b_{k-j})u(x,t_j)\nonumber \\&-b_{k-1}u(x,t_0)\bigg ]+R, \end{aligned}$$
(10)

in which

$$\begin{aligned}&b_l=\big [(l+1)^{1-\alpha }-l^{1-\alpha }\big ],\quad 0\le l\le k-1,\end{aligned}$$
(11)
$$\begin{aligned}&|R|\le C\tau ^{2-\alpha } \end{aligned}$$
(12)

where C is a positive constant given by

$$\begin{aligned} C=\frac{1}{\Gamma (2-\alpha )}\big [\frac{1-\alpha }{12}+\frac{2^{2-\alpha }}{2-\alpha }-(2^{-\alpha }+1)\big ]\max _{t_0\le t\le t_k}\big |\partial _t^{2}u(x,t)\big |. \end{aligned}$$
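For readers who wish to check the \(L_1\) formula (10) numerically, the following Python sketch (the helper name `l1_caputo` is ours, not from the paper) applies the weights \(b_l\) to \(u(t)=t^2\), whose exact Caputo derivative of order \(\alpha\) is \(2t^{2-\alpha }/\Gamma (3-\alpha )\):

```python
import math

def l1_caputo(u_vals, tau, alpha):
    # L1 approximation (10) of the Caputo derivative of order alpha
    # at t_k, given samples u_vals = [u(t_0), ..., u(t_k)].
    k = len(u_vals) - 1
    b = [(l + 1) ** (1 - alpha) - l ** (1 - alpha) for l in range(k)]
    acc = b[0] * u_vals[k] - b[k - 1] * u_vals[0]
    for j in range(1, k):
        acc -= (b[k - j - 1] - b[k - j]) * u_vals[j]
    return acc * tau ** (-alpha) / math.gamma(2 - alpha)

alpha, T, N = 0.5, 1.0, 400
tau = T / N
u = [(n * tau) ** 2 for n in range(N + 1)]             # u(t) = t^2
exact = 2 * T ** (2 - alpha) / math.gamma(3 - alpha)   # Caputo derivative at t = T
approx = l1_caputo(u, tau, alpha)
```

With \(\tau =1/400\) and \(\alpha =0.5\), the observed error is of the order \(\tau ^{2-\alpha }\), consistent with (12).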

Lemma 2

Let \(\beta \in (0,1)\) and let \(u(\cdot , t)\) be suitably smooth on (0, T). Then for \({\mathcal {I}}^{(\beta )}\) it holds that [49]

$$\begin{aligned} {\mathcal {I}}^{(\beta )}u(x,t_k)=\sum _{j=0}^{k}a_{j,k}u(x,t_j)+O(\tau ^2), \end{aligned}$$
(13)

where

$$\begin{aligned} a_{j,k}=\frac{\tau ^{\beta }}{\Gamma (\beta +2)} {\left\{ \begin{array}{ll} (k-1)^{\beta +1}-(k-1-\beta )k^{\beta },& j=0,\\ (k-j+1)^{\beta +1}+(k-1-j)^{\beta +1}-2(k-j)^{\beta +1},\\ &1\le j\le k-1,\\ 1,&j=k. \end{array}\right. } \end{aligned}$$
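The weights \(a_{j,k}\) in (13) come from a product-trapezoidal rule and therefore integrate piecewise-linear functions exactly. A minimal Python check (the helper name `frac_int_weights` is ours) uses \(u(t)=t\), for which \({\mathcal {I}}^{(\beta )}u(t_k)=t_k^{\beta +1}/\Gamma (\beta +2)\):

```python
import math

def frac_int_weights(k, tau, beta):
    # Quadrature weights a_{j,k}, j = 0..k, of formula (13) for the
    # operator I^beta evaluated at t_k = k*tau.
    c = tau ** beta / math.gamma(beta + 2)
    w = [c * ((k - 1) ** (beta + 1) - (k - 1 - beta) * k ** beta)]  # j = 0
    for j in range(1, k):
        w.append(c * ((k - j + 1) ** (beta + 1)
                      + (k - j - 1) ** (beta + 1)
                      - 2 * (k - j) ** (beta + 1)))
    w.append(c)  # j = k
    return w

beta, tau, k = 0.4, 0.01, 50
w = frac_int_weights(k, tau, beta)
# The rule integrates piecewise-linear u exactly; test with u(t) = t:
approx = sum(wj * (j * tau) for j, wj in enumerate(w))
exact = (k * tau) ** (beta + 1) / math.gamma(beta + 2)
```

For smooth but nonlinear u the same weights give the \(O(\tau ^2)\) error stated in (13).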

Now, we introduce B-spline and univariate B-spline quasi-interpolants that we will use in the next section. In order to define B-splines, we need the concept of knot sequences.

Definition 4

A knot sequence \(\varvec{\xi }\) is a nondecreasing sequence of real numbers,

$$\begin{aligned} \varvec{\xi }:=\{\xi _i\}_{i=1}^{m}=\{\xi _1\le \xi _2\le \cdots \le \xi _m\},m\in {\mathbb {N}}. \end{aligned}$$

The elements \(\xi _i\) are called knots.

Provided that \(m\ge p+2\) we can define B-splines of degree p over the knot sequence \(\varvec{\xi }\).

Definition 5

Suppose for a nonnegative integer p and some integer j that \(\xi _{j-p-1}\le \xi _{j-p}\le \cdots \le \xi _{j}\) are \(p+2\) real numbers taken from a knot sequence \(\varvec{\xi }\). The j-th B-spline \(B_{j,p,\varvec{\xi }}:{\mathbb {R}}\rightarrow {\mathbb {R}}\) of degree p is identically zero if \(\xi _{j-p-1}=\xi _{j}\) and otherwise defined recursively by [54]

$$\begin{aligned} B_{j,p,\varvec{\xi }}(x)=\frac{x-\xi _{j-p-1}}{\xi _{j-1}-\xi _{j-p-1}}B_{j-1,p-1,\varvec{\xi }}(x)+\frac{\xi _{j}-x}{\xi _{j}-\xi _{j-p}}B_{j,p-1,\varvec{\xi }}(x), \end{aligned}$$
(14)

starting with

$$\begin{aligned} B_{i,0,\varvec{\xi }}(x)= {\left\{ \begin{array}{ll} 1,&{}\quad \text {if } x\in [\xi _{i-1},\xi _{i}),\\ 0,&{}\quad \text {otherwise}. \end{array}\right. } \end{aligned}$$
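The recursion above is the classical Cox–de Boor recursion. The following Python sketch implements it with 0-based indexing (so \(B_{i,p}\) is supported on \([\xi _{i},\xi _{i+p+1}]\), rather than the paper's \([\xi _{j-p-1},\xi _{j}]\)) and verifies the partition-of-unity property of cubic B-splines on an open knot vector:

```python
def bspline(i, p, t, x):
    # Cox-de Boor recursion (14), 0-based: B_{i,p} is supported on
    # [t[i], t[i+p+1]]; terms with a zero-length knot span are dropped.
    if p == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    val = 0.0
    if t[i + p] > t[i]:
        val += (x - t[i]) / (t[i + p] - t[i]) * bspline(i, p - 1, t, x)
    if t[i + p + 1] > t[i + 1]:
        val += (t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1]) * bspline(i + 1, p - 1, t, x)
    return val

# Open cubic knot vector on [0, 5]: end knots of multiplicity p + 1 = 4.
knots = [0, 0, 0, 0, 1, 2, 3, 4, 5, 5, 5, 5]
x = 2.3
vals = [bspline(i, 3, knots, x) for i in range(len(knots) - 4)]
total = sum(vals)   # cubic B-splines form a partition of unity
```

At any point of \([\xi _0,\xi _n)\) the basis values are nonnegative and sum to 1.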

A B-spline of degree 3 is also called a cubic B-spline. Using the relation (14), the cubic B-spline \(B_{j,3,\varvec{\xi }}\) is given by

$$\begin{aligned} B_{j,3,\varvec{\xi }}(x)= {\left\{ \begin{array}{ll} \frac{(x-\xi _{j-4})^3}{(\xi _{j-3}-\xi _{j-4})(\xi _{j-2}-\xi _{j-4})(\xi _{j-1}-\xi _{j-4})},&{}\text {if } \xi _{j-4}\le x< \xi _{j-3},\\ \frac{(x-\xi _{j-4})^2(\xi _{j-2}-x)}{(\xi _{j-2}-\xi _{j-4})(\xi _{j-2}-\xi _{j-3})(\xi _{j-1}-\xi _{j-4})} +\frac{(x-\xi _{j-4})(\xi _{j-1}-x)(x-\xi _{j-3})}{(\xi _{j-1}-\xi _{j-4})(\xi _{j-1}-\xi _{j-3})(\xi _{j-2}-\xi _{j-3})}&{}\\ \quad +\frac{(\xi _{j}-x)(x-\xi _{j-3})^2}{(\xi _{j}-\xi _{j-3})(\xi _{j-1}-\xi _{j-3})(\xi _{j-2}-\xi _{j-3})},&{}\text {if } \xi _{j-3}\le x< \xi _{j-2},\\ \frac{(x-\xi _{j-4})(\xi _{j-1}-x)^2}{(\xi _{j-1}-\xi _{j-4})(\xi _{j-1}-\xi _{j-3})(\xi _{j-1}-\xi _{j-2})} +\frac{(x-\xi _{j-3})(\xi _{j-1}-x)(\xi _{j}-x)}{(\xi _{j-1}-\xi _{j-3})(\xi _{j-1}-\xi _{j-2})(\xi _{j}-\xi _{j-3})}&{}\\ \quad +\frac{(\xi _{j}-x)^2(x-\xi _{j-2})}{(\xi _{j}-\xi _{j-3})(\xi _{j}-\xi _{j-2})(\xi _{j-1}-\xi _{j-2})},&{}\text {if } \xi _{j-2}\le x< \xi _{j-1},\\ \frac{(\xi _{j}-x)^3}{(\xi _{j}-\xi _{j-3})(\xi _{j}-\xi _{j-2})(\xi _{j}-\xi _{j-1})},&{}\text {if } \xi _{j-1}\le x< \xi _{j},\\ 0,&{}\quad \text {otherwise}. \end{array}\right. } \end{aligned}$$
(15)

Following [54], suppose for integers \(n>p\ge 0\) that a knot sequence

$$\begin{aligned}\varvec{\xi }:=\{\xi _i\}_{i=n-p-1}^{n+p}=\{\xi _{n-p-1}\le \xi _{n-p}\le \cdots \le \xi _{n+p}\},n\in {\mathbb {N}},p\in {\mathbb {N}}_{0},\end{aligned}$$

is given. This knot sequence allows us to define a set of \(n+p\) B-splines of degree p, namely

$$\begin{aligned} \{ B_{1,p,\varvec{\xi }},\cdots , B_{n+p,p,\varvec{\xi }}\}. \end{aligned}$$
(16)

We consider the space of splines spanned by the B-splines in (16) over the interval \([\xi _{0},\xi _{n}]\),

$$\begin{aligned} {\mathcal {S}}_{p,\varvec{\xi }}:=\{s:[\xi _{0},\xi _{n}]\rightarrow {\mathbb {R}}:s=\sum _{j=1}^{n+p}c_{j}B_{j,p,\varvec{\xi }},c_{j}\in {\mathbb {R}}\}. \end{aligned}$$
(17)

We now introduce two definitions about knots which are crucial for splines.

Definition 6

A knot sequence \(\varvec{\xi }\) is called \((p+1)\)-regular if \(\xi _{i-p-1}<\xi _{i}\) for \(i=1,\cdots ,n+p\). Such a knot sequence ensures that none of the B-splines in (16) is identically zero [54].

Definition 7

A knot sequence \(\varvec{\xi }\) is called \((p+1)\)-open on an interval [a, b] if it is \((p+1)\)-regular and has end knots of multiplicity \(p+1\), i.e., [54]

$$\begin{aligned} \begin{aligned} a&:=\xi _{-p}=\xi _{-p+1}=\cdots =\xi _{-1}=\xi _{0}<\xi _{1}\le \xi _{2}\le \cdots \le \xi _{n-1}<\xi _{n}\\ {}&=\xi _{n+1}=\cdots =\xi _{n+p}=:b. \end{aligned} \end{aligned}$$
(18)

Suppose \(\{B_{j,p,\varvec{\xi }}\}_{j=1}^{n+p}\) form a basis for \({\mathcal {S}}_{p,\varvec{\xi }}\). For each \(j=1,\cdots ,n+p,\) let \(\lambda _{j}\) be a linear functional defined on C[ab] that can be computed from values of f at some set of points in [ab]. We have the following definition.

Definition 8

A formula of the form

$$\begin{aligned} Q_{p}f(x):=\sum _{j=1}^{n+p}(\lambda _{j}f)B_{j,p,\varvec{\xi }}(x), \end{aligned}$$
(19)

is called a B-spline quasi-interpolation formula of degree p [55].

According to [54, 56], the error of a quasi-interpolant satisfies

$$\begin{aligned} |f(x)-(Q_{p}f)(x)|\le \frac{\Vert Q_{p}\Vert }{(p+1)!}\Vert f^{(p+1)}\Vert _{\infty ,S_{x}}\Delta (x)^{p+1},x\in S_{\varvec{\xi }}^{p}, \end{aligned}$$
(20)

where \(S_{\varvec{\xi }}^{p}=[\xi _{0},\xi _{n}]\), \(S_{x}\) is the union of the supports of the B-splines \(B_{i,p,\varvec{\xi }}\) with \(i\sim x\), i.e., of those B-splines whose support contains x, \(\Vert f^{(p+1)}\Vert _{\infty ,S_{x}}\) denotes the maximum norm of \(f^{(p+1)}\) on \(S_{x}\), and \(\Delta (x)=\max _{y\in S_{x}}|y-x|\). If the local mesh ratio is bounded, i.e., if the quotients of the lengths of adjacent knot intervals are bounded by \(r_{y}\), then the error of the derivatives on the knot intervals of \((\xi _{0},\xi _{n})\) can be estimated by

$$\begin{aligned} |f^{(j)}(x)-(Q_{p}f)^{(j)}(x)|\le c(p,r_{y})\Vert Q_{p}\Vert \Vert f^{(p+1)}\Vert _{\infty ,S_{x}}\Delta (x)^{p+1-j}, \end{aligned}$$
(21)

for \(j\le p\).

Suppose \(a=x_{0}<\dots <x_{n}=b\) are equally spaced points in the interval [ab]. We have the following theorem.

Theorem 1

Given a function f defined on [ab], let

$$\begin{aligned} \lambda _{j}f:= {\left\{ \begin{array}{ll} f(x_{0}), &{} j=1, \\ \frac{1}{18}(7f(x_{0})+18f(x_{1})-9f(x_{2})+2f(x_{3})),&{} j=2,\\ \frac{1}{6}(-f(x_{j-3})+8f(x_{j-2})-f(x_{j-1})),&{} 3\le j\le n+1,\\ \frac{1}{18}(2f(x_{n-3})-9f(x_{n-2})+18f(x_{n-1})+7f(x_{n})),&{} j=n+2,\\ f(x_{n}),&{} j=n+3. \end{array}\right. } \end{aligned}$$
(22)

Then (19) with \(p=3\) defines a linear operator mapping C[a, b] into \({\mathcal {S}}_{3,\varvec{\xi }}\) with \(Q_{3}s=s\) for all cubic polynomials s [55].

To approximate the derivatives of f by the derivatives of \(Q_{3}f\) up to order \(h^{3}\), we evaluate \(f'\) and \(f''\) at \(x_{j}\) by \((Q_{3}f)'(x)=\sum \nolimits _{i = 1}^{n+3}(\lambda _{i}f)B^{'}_{i,p,\varvec{\xi }}(x)\) and \((Q_{3}f)''(x)=\sum \nolimits _{i= 1}^{n+3}(\lambda _{i}f)B^{''}_{i,p,\varvec{\xi }}(x)\). We set \(Y=(f_{0},f_{1},\dots ,f_{n})^{T}\), \(Y'=(f'_{0},f'_{1},\dots ,f'_{n})^{T}\) and \(Y''=(f''_{0},f''_{1},\dots ,f''_{n})^{T}\), where \(f_{j}^{'}=(Q_{3}f)'(x_{j})\) and \(f_{j}^{''}=(Q_{3}f)''(x_{j})\), \(j=0,\dots ,n\). The first and the second derivatives of \(Q_{3}f\) are calculated as

$$\begin{aligned} f_{j}^{'}=\sum \limits _{i = 1}^{n+3}(\lambda _{i}f)B^{'}_{i,p,\varvec{\xi }}(x_{j}), \quad j=0,1,\dots ,n, \end{aligned}$$
(23)
$$\begin{aligned} f_{j}^{''}=\sum \limits _{i = 1}^{n+3}(\lambda _{i}f)B^{''}_{i,p,\varvec{\xi }}(x_{j}), \quad j=0,1,\dots ,n, \end{aligned}$$
(24)

where \(B^{'}_{i,p,\varvec{\xi }}(x)\) and \(B^{''}_{i,p,\varvec{\xi }}(x)\) are obtained from (15) with \(\xi _j=x_j\) for \(j=0,1,\dots ,n\). Now, according to (23) and (24), we have

$$\begin{aligned}(Q_{3}f)'(x_0)&=\frac{1}{h}\bigg (-\frac{11}{6}f(x_0)+3f(x_1)-\frac{3}{2}f(x_2)+\frac{1}{3}f(x_3)\bigg )\\(Q_{3}f)'(x_1)&=\frac{1}{h}\bigg (-\frac{1}{3}f(x_0)-\frac{1}{2}f(x_1)+f(x_2)-\frac{1}{6}f(x_3)\bigg )\\ (Q_{3}f)'(x_{n-1})&=\frac{1}{h}\bigg (\frac{1}{6}f(x_{n-3})-f(x_{n-2})+\frac{1}{2}f(x_{n-1})+\frac{1}{3}f(x_{n})\bigg )\\ (Q_{3}f)'(x_n)&=\frac{1}{h}\bigg (-\frac{1}{3}f(x_{n-3})+\frac{3}{2}f(x_{n-2})-3f(x_{n-1})+\frac{11}{6}f(x_n)\bigg )\\(Q_{3}f)'(x_j)&=\frac{1}{h}\bigg (\frac{1}{12}f(x_{j-2})-\frac{2}{3}f(x_{j-1})+\frac{2}{3}f(x_{j+1})-\frac{1}{12}f(x_{j+2})\bigg ),\\&\qquad 2\le j\le (n-2), \end{aligned}$$

and

$$\begin{aligned}&(Q_{3}f)''(x_0)=\frac{1}{h^2}\bigg (2f(x_0)-5f(x_1)+4f(x_2)-f(x_3)\bigg )\\&(Q_{3}f)''(x_1)=\frac{1}{h^2}\bigg (f(x_0)-2f(x_1)+f(x_2)\bigg )\\ {}&(Q_{3}f)''(x_{n-1})=\frac{1}{h^2}\bigg (f(x_{n-2}) -2f(x_{n-1})+f(x_{n})\bigg )\\ {}&(Q_{3}f)''(x_n)=\frac{1}{h^2}\bigg (-f(x_{n-3})+4f(x_{n-2})-5f(x_{n-1})+2f(x_n)\bigg )\\&(Q_{3}f)''(x_j)=\frac{1}{h^2}\bigg (-\frac{1}{6}f(x_{j-2})+\frac{5}{3}f(x_{j-1})-3f(x_{j})+\frac{5}{3}f(x_{j+1})\\&\quad -\frac{1}{6}f(x_{j+2})\bigg ),\quad 2\le j\le (n-2). \end{aligned}$$

Therefore, we can display the approximation of \(f'\) and \(f''\) in the following matrix form

$$\begin{aligned} Y^{'}=\frac{1}{h}D_{1}Y, Y^{''}=\frac{1}{h^{2}}D_{2}Y, \end{aligned}$$
(25)

where \(D_{1},D_{2}\in {\mathbb {R}}^{(n+1)\times (n+1)}\) are obtained as follows:

$$\begin{aligned} D_{1}= & {} \begin{pmatrix} -\frac{11}{6}&{}3&{}-\frac{3}{2}&{}\frac{1}{3}&{}0&{}0&{}\dots &{}0&{}0\\ -\frac{1}{3}&{}-\frac{1}{2}&{}1&{}-\frac{1}{6}&{}0&{}0&{}\dots &{}0&{}0\\ \frac{1}{12}&{}-\frac{2}{3}&{}0&{}\frac{2}{3}&{}-\frac{1}{12}&{}0&{}\dots &{}0&{}0\\ 0&{}\frac{1}{12}&{}-\frac{2}{3}&{}0&{}\frac{2}{3}&{}-\frac{1}{12}&{}\dots &{}0&{}0\\ \vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots \\ 0&{}0&{}\dots &{}\frac{1}{12}&{}-\frac{2}{3}&{}0&{}\frac{2}{3}&{}-\frac{1}{12}&{}0\\ 0&{}0&{}\dots &{}0&{}\frac{1}{12}&{}-\frac{2}{3}&{}0&{}\frac{2}{3}&{}-\frac{1}{12}\\ 0&{}0&{}\dots &{}0&{}0&{}\frac{1}{6}&{}-1&{}\frac{1}{2}&{}\frac{1}{3}\\ 0&{}0&{}\dots &{}0&{}0&{}-\frac{1}{3}&{}\frac{3}{2}&{}-3&{}\frac{11}{6} \end{pmatrix}, \\ D_{2}= & {} \begin{pmatrix} 2&{}-5&{}4&{}-1&{}0&{}0&{}\dots &{}0&{}0\\ 1&{}-2&{}1&{}0&{}0&{}0&{}\dots &{}0&{}0\\ -\frac{1}{6}&{}\frac{5}{3}&{}-3&{}\frac{5}{3}&{}-\frac{1}{6}&{}0&{}\dots &{}0&{}0\\ 0&{}-\frac{1}{6}&{}\frac{5}{3}&{}-3&{}\frac{5}{3}&{}-\frac{1}{6}&{}\dots &{}0&{}0\\ \vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots \\ 0&{}0&{}\dots &{}-\frac{1}{6}&{}\frac{5}{3}&{}-3&{}\frac{5}{3}&{}-\frac{1}{6}&{}0\\ 0&{}0&{}\dots &{}0&{}-\frac{1}{6}&{}\frac{5}{3}&{}-3&{}\frac{5}{3}&{}-\frac{1}{6}\\ 0&{}0&{}\dots &{}0&{}0&{}0&{}1&{}-2&{}1\\ 0&{}0&{}\dots &{}0&{}0&{}-1&{}4&{}-5&{}2 \end{pmatrix}. \end{aligned}$$
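Since the quasi-interpolant reproduces cubic polynomials, every row of \(D_1\) and \(D_2\) is exact on cubics. The following Python sketch (variable names are ours) assembles the matrices of (25) on a uniform grid and verifies this for \(f(x)=x^3\):

```python
def quasi_diff_matrices(n):
    # D1 and D2 of (25): differentiation matrices of the cubic
    # B-spline quasi-interpolant on n + 1 uniform nodes (n >= 4).
    D1 = [[0.0] * (n + 1) for _ in range(n + 1)]
    D2 = [[0.0] * (n + 1) for _ in range(n + 1)]
    D1[0][0:4] = [-11/6, 3.0, -3/2, 1/3]
    D1[1][0:4] = [-1/3, -1/2, 1.0, -1/6]
    D1[n - 1][n - 3:n + 1] = [1/6, -1.0, 1/2, 1/3]
    D1[n][n - 3:n + 1] = [-1/3, 3/2, -3.0, 11/6]
    D2[0][0:4] = [2.0, -5.0, 4.0, -1.0]
    D2[1][0:3] = [1.0, -2.0, 1.0]
    D2[n - 1][n - 2:n + 1] = [1.0, -2.0, 1.0]
    D2[n][n - 3:n + 1] = [-1.0, 4.0, -5.0, 2.0]
    for i in range(2, n - 1):                       # interior rows
        D1[i][i - 2:i + 3] = [1/12, -2/3, 0.0, 2/3, -1/12]
        D2[i][i - 2:i + 3] = [-1/6, 5/3, -3.0, 5/3, -1/6]
    return D1, D2

n, h = 10, 0.1
x = [i * h for i in range(n + 1)]
Y = [t**3 for t in x]                               # f(x) = x^3
D1, D2 = quasi_diff_matrices(n)
Yp = [sum(D1[i][j] * Y[j] for j in range(n + 1)) / h for i in range(n + 1)]
Ypp = [sum(D2[i][j] * Y[j] for j in range(n + 1)) / h**2 for i in range(n + 1)]
err1 = max(abs(Yp[i] - 3 * x[i]**2) for i in range(n + 1))
err2 = max(abs(Ypp[i] - 6 * x[i]) for i in range(n + 1))
```

Both errors vanish up to rounding, confirming exactness on cubics; for general smooth f, the accuracy is \(O(h^{2})\) or better, as in (21).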

3 Description of the difference scheme

In the present section we construct a difference scheme for solving (1).

Considering (2) at the point \((x_i,t_k)\), one has

$$\begin{aligned} \begin{aligned} _{C}{\mathcal {D}}_{0,t}^\alpha&u(x_i,t_k)-v(x_i,t_k)-{\mathcal {I}}^{(\beta )}v(x_i,t_k)+v_{xx}(x_i,t_k)=f(x_i,t_k),\\&v(x_i,t_k)=u_{xx}(x_i,t_k),\quad 1\le i\le M-1,\ 1\le k\le N. \end{aligned} \end{aligned}$$
(26)

Using (10), (13), (23), and (24), equation (26) can be approximated by

$$\begin{aligned}&\frac{\tau ^{-\alpha }}{\Gamma (2-\alpha )}\bigg [b_0u_i^k-\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})u_i^j-b_{k-1}u_i^0\bigg ] -v_i^k-\sum _{j=0}^{k}a_{j,k}v_i^j\nonumber \\&\quad +\sum _{j=0}^{M}\frac{d_{ij}^2}{h^2}v_j^k=f_i^k+(R_1)_i^k,\end{aligned}$$
(27)
$$\begin{aligned}&v_i^k=\sum _{j=0}^{M}\frac{d_{ij}^2}{h^2}u_j^k+(R_2)_i^k,1\le i\le M-1,1\le k\le N, \end{aligned}$$
(28)

where \(|(R_1)_i^k|\le C(\tau ^{2-\alpha }+h^2)\) and \(|(R_2)_i^k|\le Ch^2\).

After simplification we obtain

$$\begin{aligned} \begin{aligned}&u_i^k-(\mu +\mu a_{k,k})v_i^k+\mu \sum _{j=0}^{M}\frac{d_{ij}^2}{h^2}v_j^k=\mu f_i^k+\mu \sum _{j=0}^{k-1}a_{j,k}v_i^j+\\&\quad \sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})u_i^j+b_{k-1}u_i^0+\mu (R_1)_i^k,\\&v_i^k=\sum _{j=0}^{M}\frac{d_{ij}^2}{h^2}u_j^k+(R_2)_i^k,1\le i\le M-1,1\le k\le N, \end{aligned} \end{aligned}$$
(29)

where \(\mu =\tau ^{\alpha }\Gamma (2-\alpha )\).

Ignoring \((R_1)_i^k,(R_2)_i^k\) and replacing the functions \(u_i^k\) and \(v_i^k\) with their numerical approximations \(U_i^k\) and \(V_i^k\) in (29), we obtain the following difference scheme:

$$\begin{aligned}&U_i^k-(\mu +\mu a_{k,k})V_i^k+\mu \sum _{j=0}^{M}\frac{d_{ij}^2}{h^2}V_j^k=\mu f_i^k+\mu \sum _{j=0}^{k-1}a_{j,k}V_i^j\end{aligned}$$
(30)
$$\begin{aligned}&+\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})U_i^j+b_{k-1}U_i^0\end{aligned}$$
(31)
$$\begin{aligned}&1\le i\le M-1,1\le k\le N,\end{aligned}$$
(32)
$$\begin{aligned}&V_i^k=\sum _{j=0}^{M}\frac{d_{ij}^2}{h^2}U_j^k,1\le i\le M-1,1\le k\le N,\end{aligned}$$
(33)
$$\begin{aligned}&U_0^k=U_M^k=0,V_0^k=V_M^k=0,1\le k\le N,\end{aligned}$$
(34)
$$\begin{aligned}&U_i^0=u^{0}(x_i),\quad 0\le i\le M. \end{aligned}$$
(35)

We set \(l_1=\mu +\mu a_{k,k}\) and \(l_{2}=\frac{\mu }{h^2}\), so that at each time step we must solve the following system of linear equations:

$$\begin{aligned} AU^k=F^k, \end{aligned}$$
(36)

where

$$\begin{aligned} A= \begin{pmatrix} I&{}B\\ C&{}I \end{pmatrix}, U^{k}= \begin{pmatrix} U_{1}^{k}\\ U_{2}^{k}\\ \vdots \\ U_{M-2}^{k}\\ U_{M-1}^{k}\\ V_{1}^{k}\\ V_{2}^{k}\\ \vdots \\ V_{M-2}^{k}\\ V_{M-1}^{k} \end{pmatrix}, F^k=\begin{pmatrix} F^1\\ F^2 \end{pmatrix}, \end{aligned}$$

where B and C are pentadiagonal matrices and I is the identity matrix:

$$\begin{aligned} B= & {} \begin{pmatrix} -l_1-2l_2&{}l_2&{}0&{}0&{}0&{}0&{}\dots &{}0&{}0\\ \frac{5}{3}l_2&{}-l_1-3l_2&{}\frac{5}{3}l_2&{}-\frac{1}{6}l_2&{}0&{}0&{}\dots &{}0&{}0\\ -\frac{1}{6}l_2&{}\frac{5}{3}l_2&{}-l_1-3l_2&{}\frac{5}{3}l_2&{}-\frac{1}{6}l_2&{}0&{}\dots &{}0&{}0\\ \vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots \\ 0&{}0&{}\dots &{}0&{}-\frac{1}{6}l_2&{}\frac{5}{3}l_2&{}-l_1-3l_2&{}\frac{5}{3}l_2&{}-\frac{1}{6}l_2\\ 0&{}0&{}\dots &{}0&{}0&{}-\frac{1}{6}l_2&{}\frac{5}{3}l_2&{}-l_1-3l_2&{}\frac{5}{3}l_2\\ 0&{}0&{}\dots &{}0&{}0&{}0&{}0&{}l_2&{}-l_1-2l_2 \end{pmatrix}, \\ C= & {} \begin{pmatrix} \frac{2}{h^2}&{}-\frac{1}{h^2}&{}0&{}0&{}0&{}0&{}\dots &{}0&{}0\\ -\frac{5}{3h^2}&{}\frac{3}{h^2}&{}-\frac{5}{3h^2}&{}\frac{1}{6h^2}&{}0&{}0&{}\dots &{}0&{}0\\ \frac{1}{6h^2}&{}-\frac{5}{3h^2}&{}\frac{3}{h^2}&{}-\frac{5}{3h^2}&{}\frac{1}{6h^2}&{}0&{}\dots &{}0&{}0\\ \vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots \\ 0&{}0&{}\dots &{}0&{}\frac{1}{6h^2}&{}-\frac{5}{3h^2}&{}\frac{3}{h^2}&{}-\frac{5}{3h^2}&{}\frac{1}{6h^2}\\ 0&{}0&{}\dots &{}0&{}0&{}\frac{1}{6h^2}&{}-\frac{5}{3h^2}&{}\frac{3}{h^2}&{}-\frac{5}{3h^2}\\ 0&{}0&{}\dots &{}0&{}0&{}0&{}0&{}-\frac{1}{h^2}&{}\frac{2}{h^2} \end{pmatrix}, \\ F^{1}= & {} \begin{pmatrix} \mu f_1^k+\mu \sum _{j=0}^{k-1}a_{j,k}V_1^j+\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})U_1^j+b_{k-1}U_1^0-l_2V_0^k \\ \mu f_2^k+\mu \sum _{j=0}^{k-1}a_{j,k}V_2^j+\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})U_2^j+b_{k-1}U_2^0+\frac{l_2}{6}V_0^k \\ \mu f_3^k+\mu \sum _{j=0}^{k-1}a_{j,k}V_3^j+\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})U_3^j+b_{k-1}U_3^0 \\ \vdots \\ \mu f_{M-2}^k+\mu \sum _{j=0}^{k-1}a_{j,k}V_{M-2}^j+\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})U_{M-2}^j+b_{k-1}U_{M-2}^0+\frac{l_2}{6}V_{M}^k \\ \mu f_{M-1}^k+\mu \sum _{j=0}^{k-1}a_{j,k}V_{M-1}^j+\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})U_{M-1}^j+b_{k-1}U_{M-1}^0-l_2V_{M}^k \end{pmatrix},\\ F^2= & {} \begin{pmatrix} \frac{U_0^k}{h^2} \\ -\frac{U_0^k}{6h^2}\\ 0\\ 0\\ \vdots \\ 0\\ 0\\ -\frac{U_M^k}{6h^2}\\ \frac{U_M^k}{h^2} \end{pmatrix}. \end{aligned}$$
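A hedged sketch of the assembly of (36) in Python: assuming, as the displays above suggest, that \(B=-l_1I+l_2D_{2}^{\mathrm {int}}\) and \(C=-D_{2}^{\mathrm {int}}/h^2\), where \(D_{2}^{\mathrm {int}}\) is the restriction of \(D_2\) from (25) to the interior nodes, the block matrix A can be built as follows (the parameter values are illustrative only, not taken from the paper's experiments):

```python
import math
import numpy as np

def assemble_A(M, h, l1, l2):
    # Block matrix A = [[I, B], [C, I]] of (36), with B = -l1*I + l2*D2_int
    # and C = -(1/h^2)*D2_int, where D2_int restricts D2 of (25) to the
    # interior nodes 1..M-1 (boundary columns move to the right-hand side).
    D2 = np.zeros((M + 1, M + 1))
    D2[0, 0:4] = [2, -5, 4, -1]
    D2[1, 0:3] = [1, -2, 1]
    D2[M - 1, M - 2:M + 1] = [1, -2, 1]
    D2[M, M - 3:M + 1] = [-1, 4, -5, 2]
    for i in range(2, M - 1):
        D2[i, i - 2:i + 3] = [-1/6, 5/3, -3, 5/3, -1/6]
    D2i = D2[1:M, 1:M]
    I = np.eye(M - 1)
    return np.block([[I, -l1 * I + l2 * D2i], [-D2i / h**2, I]])

# Illustrative parameter values (ours, not from the paper):
M, h, tau, alpha, beta = 16, 1/16, 0.01, 0.5, 0.5
mu = tau**alpha * math.gamma(2 - alpha)
a_kk = tau**beta / math.gamma(beta + 2)   # weight a_{k,k} from Lemma 2
l1, l2 = mu * (1 + a_kk), mu / h**2
A = assemble_A(M, h, l1, l2)
```

The entry A[0, M-1] reproduces the corner value \(-l_1-2l_2\) of the block B displayed above, and for these parameters A is nonsingular, so each time step is well defined.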

4 Stability analysis

In the current section, the stability of the scheme (31)–(33) is analyzed by using the Fourier method [57]. We assume that the exact solution u is continuous and its derivatives are square integrable. Let \(\tilde{U}^{k}_{j}\) be the approximate solution of the scheme, and define

\(\zeta ^{k}_{j}={U}^{k}_{j}-\tilde{U}^{k}_{j}, \qquad 1\le j\le M-1, \quad 1\le k\le N,\)

with corresponding vector

$$\begin{aligned} \zeta ^{k}=(\zeta ^{k}_{1},\zeta ^{k}_{2},\ldots ,\zeta ^{k}_{M-1})^{T}. \end{aligned}$$

Thanks to (31)–(33) we have

$$\begin{aligned}&U_i^k-s\left( -\frac{1}{6}U_{i-2}^k+\frac{5}{3}U_{i-1}^k-3U_{i}^k+\frac{5}{3}U_{i+1}^k-\frac{1}{6}U_{i+2}^k\right) \\&\quad +r\left( -\frac{1}{6}V_{i-2}^k+\frac{5}{3}V_{i-1}^k-3V_{i}^k+\frac{5}{3}V_{i+1}^k-\frac{1}{6}V_{i+2}^k\right) \\&\quad =\mu f_i^k+\mu \sum _{j=0}^{k-1}a_{j,k}V_i^j+\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})U_i^j+b_{k-1}U_i^0,\\&s=\frac{\mu +\mu a_{k,k}}{h^2},r=\frac{\mu }{h^2}. \end{aligned}$$

Substituting \(V_i^k\) from (33), we obtain

$$\begin{aligned}&U_i^k-s\left( -\frac{1}{6}U_{i-2}^k+\frac{5}{3}U_{i-1}^k-3U_{i}^k+\frac{5}{3}U_{i+1}^k-\frac{1}{6}U_{i+2}^k\right) \\&\quad -\frac{r}{6h^2}\left( -\frac{1}{6}U_{i-4}^k+\frac{5}{3}U_{i-3}^k-3U_{i-2}^k+\frac{5}{3}U_{i-1}^k-\frac{1}{6}U_{i}^k\right) \\&\quad +\frac{5r}{3h^2}\left( -\frac{1}{6}U_{i-3}^k+\frac{5}{3}U_{i-2}^k-3U_{i-1}^k+\frac{5}{3}U_{i}^k-\frac{1}{6}U_{i+1}^k\right) \\&\quad -\frac{3r}{h^2}\left( -\frac{1}{6}U_{i-2}^k+\frac{5}{3}U_{i-1}^k-3U_{i}^k+\frac{5}{3}U_{i+1}^k-\frac{1}{6}U_{i+2}^k\right) \\&\quad +\frac{5r}{3h^2}\left( -\frac{1}{6}U_{i-1}^k+\frac{5}{3}U_{i}^k-3U_{i+1}^k+\frac{5}{3}U_{i+2}^k-\frac{1}{6}U_{i+3}^k\right) \\&\quad -\frac{r}{6h^2}\left( -\frac{1}{6}U_{i}^k+\frac{5}{3}U_{i+1}^k-3U_{i+2}^k+\frac{5}{3}U_{i+3}^k-\frac{1}{6}U_{i+4}^k\right) \\&\quad =\mu f_i^k+\mu \sum _{j=0}^{k-1}\frac{a_{j,k}}{h^2}\left( -\frac{1}{6}U_{i-2}^j+\frac{5}{3}U_{i-1}^j-3U_{i}^j +\frac{5}{3}U_{i+1}^j-\frac{1}{6}U_{i+2}^j\right) \\&\quad +\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})U_i^j+b_{k-1}U_i^0. \end{aligned}$$

Set

$$\begin{aligned}&\lambda _1=\frac{r}{36h^2},\lambda _2=-\frac{10r}{18h^2},\lambda _3 =\frac{s}{6}+\frac{r}{h^2}+\frac{25r}{9h^2},\\&\lambda _4=-\frac{5s}{3}-\frac{10r}{18h^2}-\frac{10r}{h^2}, \lambda _5=1+3s+\frac{r}{18h^2}+\frac{50r}{9h^2}+\frac{9r}{h^2} \end{aligned}$$

then we have

$$\begin{aligned} \begin{aligned}&\lambda _1U_{i-4}^k+\lambda _2U_{i-3}^k+\lambda _3U_{i-2}^k+\lambda _4U_{i-1}^k+\lambda _5U_i^k+\lambda _4U_{i+1}^k+\lambda _3U_{i+2}^k\\&\quad +\lambda _2U_{i+3}^k+\lambda _1U_{i+4}^k\\&\quad =\mu f_i^k+\mu \sum _{j=0}^{k-1}\frac{a_{j,k}}{h^2}\left( -\frac{1}{6}U_{i-2}^j+\frac{5}{3}U_{i-1}^j-3U_{i}^j+\frac{5}{3}U_{i+1}^j- \frac{1}{6}U_{i+2}^j\right) \\&\quad +\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})U_i^j+b_{k-1}U_i^0. \end{aligned} \end{aligned}$$
(37)

Next, we define the grid function as follows:

$$\begin{aligned} \zeta ^{k}(x)= {\left\{ \begin{array}{ll} \zeta ^{k}_{j}, &{} x_{j}-\frac{h}{2}<x\le x_{j}+\frac{h}{2},\\ 0, &{} 0\le x\le \frac{h}{2}\ \text {or}\ L-\frac{h}{2}<x\le L. \end{array}\right. } \end{aligned}$$
(38)

We can expand \(\zeta ^{k}(x)\) into a Fourier series

$$\begin{aligned} \zeta ^{k}(x)=\sum \limits _{l = -\infty }^{\infty }d_{k}(l)e^{i2\pi lx/L}, \end{aligned}$$
(39)

where

$$\begin{aligned} d_{k}(l)=\frac{1}{L}\int _{0}^{L}\zeta ^{k}(x)e^{-i2\pi lx/L}\,\textrm{d}x. \end{aligned}$$
(40)

Denoting

$$\begin{aligned} \Vert \zeta ^{k}\Vert _{2}=\bigg (\int _{0}^{L}\Vert \zeta ^{k}(x)\Vert ^{2}\,\textrm{d}x\bigg )^{\frac{1}{2}}, \end{aligned}$$
(41)

and using the Parseval equality

$$\begin{aligned} \int _{0}^{L}\Vert \zeta ^{k}(x)\Vert ^{2}\,\textrm{d}x=\sum \limits _{l = -\infty }^{\infty }\Vert d_{k}(l)\Vert ^{2}, \end{aligned}$$
(42)

one has

$$\begin{aligned} \Vert \zeta ^{k}\Vert ^{2}=\sum \limits _{l = -\infty }^{\infty }\Vert d_{k}(l)\Vert ^{2}. \end{aligned}$$
(43)

We can expand \(\zeta _{j}^{k}\) into a Fourier series, and because the difference equations are linear, we can analyze the behavior of the total error by tracking an arbitrary single mode [58]. So we can assume that the solution of (37) has the following form:

$$\begin{aligned} \zeta _{j}^{k}=d_{k}e^{i\sigma _{x}jh}, \end{aligned}$$

where \(\sigma _{x}=2\pi l/L\). Substituting the above expression into (37) we obtain

$$\begin{aligned} d_k=\frac{b_{k-1}}{z}d_0+\frac{\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})d_j}{z}+\frac{rs'\sum _{j=0}^{k-1}a_{j,k}d_j}{z}, \end{aligned}$$
(44)

where

$$\begin{aligned}&s'=-\frac{1}{3}\cos (2\sigma _{x}h)+\frac{10}{3}\cos (\sigma _{x}h)-3\le 0,\\&z=2\lambda _1\cos (4\beta h)+2\lambda _2\cos (3\beta h)+2\lambda _3\cos (2\beta h)+2\lambda _4\cos (\beta h)+\lambda _5. \end{aligned}$$
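The inequality \(s'\le 0\) follows from the factorization \(s'=-\tfrac{2}{3}(\cos \theta -1)(\cos \theta -4)\) with \(\theta =\sigma _{x}h\) (this factorization is our own observation, obtained via \(\cos 2\theta =2\cos ^2\theta -1\)); a quick numerical confirmation in Python:

```python
import math

# s'(theta) = -(1/3)cos(2 theta) + (10/3)cos(theta) - 3 factors as
# -(2/3)(cos(theta) - 1)(cos(theta) - 4), so s' <= 0 for all real theta,
# with equality only where cos(theta) = 1.
def s_prime(theta):
    return -math.cos(2 * theta) / 3 + 10 * math.cos(theta) / 3 - 3

def s_prime_factored(theta):
    c = math.cos(theta)
    return -(2 / 3) * (c - 1) * (c - 4)

thetas = [2 * math.pi * m / 1000 for m in range(1001)]
worst = max(s_prime(t) for t in thetas)                          # should be <= 0
gap = max(abs(s_prime(t) - s_prime_factored(t)) for t in thetas)  # identity check
```

Since \(\cos \theta \in [-1,1]\), both factors \(\cos \theta -1\) and \(\cos \theta -4\) are nonpositive, which gives \(s'\le 0\) analytically as well.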

Theorem 2

Suppose that \(d_{k}\), \(1\le k\le N-1\), are defined by (44). Then we obtain

\(\vert d_{k} \vert \le C_k \vert d_{0} \vert , \quad k=1,2,\cdots ,N-1.\)

Proof

We prove this claim by mathematical induction. For \(k=1\), we show that there exists a constant \(C_1\) such that

$$\begin{aligned} \big |d_1\big |=\big |d_0\big |\frac{\big |1+rs'a_{0,1} \big |}{\big | z\big |}\le C_1\big |d_0\big |. \end{aligned}$$

For this purpose, we have

$$\begin{aligned}&1+rs'a_{0,1}=1+\frac{\mu }{h^2}a_{0,1}\bigg (-\frac{1}{3}\bigg (1-2\sigma _{x}^2h^2+O(h^4)\bigg )\\&\quad +\frac{10}{3}\bigg (1-\frac{\sigma _{x}^2h^2}{2}+O(h^4)\bigg )-3\bigg )\\&\quad =1+\frac{\mu }{h^2}a_{0,1}\bigg (\frac{2\sigma _{x}^2h^2}{3}-\frac{5\sigma _{x}^2h^2}{3}+O(h^4)\bigg )=1-\mu a_{0,1}\sigma _{x}^2+\mu a_{0,1}O(h^2), \end{aligned}$$

and

$$\begin{aligned} z&=\frac{\mu }{18h^4}\bigg (1-8\beta ^2h^2+O(h^4)\bigg )-\frac{10\mu }{9h^4}\bigg (1-\frac{9\beta ^2h^2}{2}+O(h^4)\bigg )\\&\quad+\bigg (\frac{\mu +\mu a_{1,1}}{3h^2}+\frac{2\mu }{h^4}+\frac{50\mu }{9h^4}\bigg )\bigg (1-2\beta ^2h^2+O(h^4)\bigg )\\&\quad+\bigg (\frac{-10\mu -10\mu a_{1,1}}{3h^2}-\frac{10\mu }{9h^4}-\frac{20\mu }{h^4}\bigg )\bigg (1-\frac{\beta ^2h^2}{2}+O(h^4)\bigg )+1\\&\quad+\frac{3\mu +3\mu a_{1,1}}{h^2}+\frac{\mu }{18h^4}+\frac{50\mu }{9h^4}+\frac{9\mu }{h^4}\\&=\frac{\mu }{18h^4}-\frac{10\mu }{18h^4}+\frac{2\mu }{h^4} +\frac{50\mu }{9h^4}-\frac{10\mu }{9h^4}-\frac{20\mu }{h^4}+\frac{\mu }{18h^4}+\frac{50\mu }{9h^4}+\frac{9\mu }{h^4}\\&-\frac{8\mu \beta ^2}{18h^2}+\frac{5\mu \beta ^2}{h^2}-\bigg (\frac{2\mu +2\mu a_{1,1}}{3}\bigg )\beta ^2\\&\quad +\bigg (\frac{5\mu +5\mu a_{1,1}}{3}\bigg )\beta ^2+\frac{3\mu +3\mu a_{1,1}}{h^2}+\frac{\mu +\mu a_{1,1}}{3h^2}-\bigg (\frac{10\mu +10\mu a_{1,1}}{3h^2}\bigg )\\&\quad +\frac{\mu }{18}O(1)-\frac{10\mu }{9}O(1)+\frac{68\mu }{9}O(1)\\ {}&-\frac{190\mu }{9}O(1)+\bigg (\frac{\mu +\mu a_{1,1}}{3}\bigg )O(h^2)-\bigg (\frac{10\mu +10\mu a_{1,1}}{3}\bigg )O(h^2)+1\\&=-\frac{8\mu \beta ^2}{18h^2}+\frac{5\mu \beta ^2}{h^2}+(\mu +\mu a_{1,1})\beta ^2-\frac{263\mu }{18}O(1)\\ {}&-(3\mu +3\mu a_{1,1})O(h^2)+1. \end{aligned}$$

We take the limit of \(\frac{1+rs'a_{0,1}}{z}\) as \(\tau \rightarrow 0\) and \(h\rightarrow 0\) while keeping the ratio \(\frac{\mu }{h^2}=\frac{\tau ^{\alpha }\Gamma (2-\alpha )}{h^2}\) equal to a fixed constant H, so that

$$\begin{aligned} \frac{1+rs'a_{0,1}}{z}\rightarrow \frac{1}{\frac{41}{9}\beta ^2H+1}=H_1. \end{aligned}$$

As a result, there is a positive constant \(C_1\), independent of N and M, such that

$$\begin{aligned} \bigg |\frac{1+rs'a_{0,1}}{z}\bigg |\le C_1. \end{aligned}$$

Assume that

$$\begin{aligned} \big |d_n\big |\le C_n \big |d_0\big |,\quad 1\le n\le k-1. \end{aligned}$$

We have

$$\begin{aligned} \big |d_k\big |\le \frac{b_{k-1}|d_0|+\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})|d_j|+|rs'||\sum _{j=0}^{k-1}a_{j,k}||d_j|}{|z|}. \end{aligned}$$

Now assume that

$$\begin{aligned} C'=\max \{C_1,C_2,\dots ,C_{k-1}\},\quad C''>C',\quad C''\ge 1, \end{aligned}$$
(45)

so, similarly to the initial case \(k=1\), we obtain

$$\begin{aligned} \big |d_k\big |&\le \frac{b_{k-1}C''|d_0|+\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})C''|d_0|+|rs'|C''\big |\sum _{j=0}^{k-1}a_{j,k}\big ||d_0|}{|z|}\\&=\frac{\left( b_{k-1}+\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})\right) C''|d_0|+|rs'|C''\big |\sum _{j=0}^{k-1}a_{j,k}\big ||d_0|}{|z|}\\&=\frac{\left( C''+|rs'|C''\big |\sum _{j=0}^{k-1}a_{j,k}\big |\right) |d_0|}{|z|}\le C_k|d_0|. \end{aligned}$$

This completes the proof. \(\square\)
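The last equality in the bound above uses the telescoping identity \(b_{k-1}+\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})=b_0=1\). As a quick numerical check (in Python rather than the MATLAB used for the experiments; the weights \(b_j=(j+1)^{1-\alpha }-j^{1-\alpha }\) below are the standard L1 weights, assumed to match the definition given earlier in the paper):

```python
# Numerical check of the telescoping identity used in the bound above:
#   b_{k-1} + sum_{j=1}^{k-1} (b_{k-j-1} - b_{k-j}) = b_0 = 1,
# with the standard L1 weights b_j = (j+1)^(1-alpha) - j^(1-alpha)
# (this form of b_j is an assumption; the weights are defined earlier in the paper).

def b(j, alpha):
    return (j + 1) ** (1 - alpha) - j ** (1 - alpha)

def telescoped(k, alpha):
    # b_{k-1} plus the telescoping sum over j = 1, ..., k-1
    return b(k - 1, alpha) + sum(b(k - j - 1, alpha) - b(k - j, alpha)
                                 for j in range(1, k))

for alpha in (0.1, 0.5, 0.9):
    for k in (1, 5, 50):
        assert abs(telescoped(k, alpha) - b(0, alpha)) < 1e-12
        assert abs(b(0, alpha) - 1.0) < 1e-12
```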

Theorem 3

The finite difference scheme (31)–(35) is unconditionally stable for \(\alpha \in (0,1).\)

Proof

Thanks to Theorem 2 and Parseval’s equality, we obtain

$$\begin{aligned} {\Vert {U}^{k}-\tilde{U}^{k}\Vert }^{2}_{l^{2}}&={\Vert \zeta ^{k}\Vert }^{2}_{l^{2}}\le C_k^2\Vert \zeta ^{0}\Vert _{l^{2}}^{2}, \end{aligned}$$

so that

$$\begin{aligned}{\Vert {U}^{k}-\tilde{U}^{k}\Vert }_{l^{2}}\le C{\Vert {U}^{0}-\tilde{U}^{0}\Vert }_{l^{2}},\end{aligned}$$

which indicates that the numerical scheme is stable. \(\square\)

5 Convergence

In this section, we prove the convergence of the difference scheme (31)–(35). As in the previous section, let \(e_{j}^{k}=u_{j}^{k}-U_{j}^{k},1\le j\le M-1, 0\le k\le N-1,\) and denote \(e^{k}=(e^{k}_{1},e^{k}_{2},\ldots ,e^{k}_{M-1})^{T},{\textbf{R}}^{k}=(R^{k}_{1},R^{k}_{2},\ldots ,R^{k}_{M-1})^{T},0\le k\le N-1.\)

From Equations (31)–(35), the truncation error bound \(R_{j}^{k+1}=O(\tau ^{2-\alpha }+h^{2})\), and the fact that \(e_{j}^{0}=0\), one obtains, similarly to (37),

$$\begin{aligned} \begin{aligned}&\lambda _1e_{i-4}^k+\lambda _2e_{i-3}^k+\lambda _3e_{i-2}^k+\lambda _4e_{i-1}^k+\lambda _5e_i^k+\lambda _4e_{i+1}^k+\lambda _3e_{i+2}^k\\&\quad +\lambda _2e_{i+3}^k+\lambda _1e_{i+4}^k\\&\quad =\mu \sum _{j=0}^{k-1}\frac{a_{j,k}}{h^2}\bigg (-\frac{1}{6}e_{i-2}^j+\frac{5}{3}e_{i-1}^j-3e_{i}^j+\frac{5}{3}e_{i+1}^j-\frac{1}{6}e_{i+2}^j\bigg )\\&\quad +\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})e_i^j+\mu R_i^k. \end{aligned} \end{aligned}$$
(46)

Using the similar idea of stability analysis, we define the following functions

$$\begin{aligned} e^{k}(x)= {\left\{ \begin{array}{ll} e^{k}_{j}, &{} x_{j}-\frac{h}{2}<x\le x_{j}+\frac{h}{2},1\le j\le M-1,\\ 0, &{} 0\le x\le \frac{h}{2}\ \text {or}\ L-\frac{h}{2}<x\le L. \end{array}\right. } \end{aligned}$$
(47)

and

$$\begin{aligned} R^{k}(x)= {\left\{ \begin{array}{ll} R^{k}_{j}, &{} x_{j}-\frac{h}{2}<x\le x_{j}+\frac{h}{2},1\le j\le M-1,\\ 0, &{} 0\le x\le \frac{h}{2}\ \text {or}\ L-\frac{h}{2}<x\le L. \end{array}\right. } \end{aligned}$$
(48)

We expand the \(e^{k}(x)\) and \(R^{k}(x)\) into the following Fourier series expansions

$$\begin{aligned} e^{k}(x)=\sum \limits _{l = -\infty }^{\infty }\eta _{k}(l)e^{i2\pi lx/L}, R^{k}(x)=\sum \limits _{l = -\infty }^{\infty }\xi _{k}(l)e^{i2\pi lx/L}, \end{aligned}$$
(49)

where

$$\begin{aligned} \eta _{k}(l)=\frac{1}{L}\int _{0}^{L}e^{k}(x)e^{-i2\pi lx/L}\,\textrm{d}x, \xi _{k}(l)=\frac{1}{L}\int _{0}^{L}R^{k}(x)e^{-i2\pi lx/L}\,\textrm{d}x. \end{aligned}$$
(50)

Applying the Parseval equality

$$\begin{aligned} \int _{0}^{L}\Vert e^{k}(x)\Vert ^{2}\,\textrm{d}x=\sum \limits _{l = -\infty }^{\infty }\Vert \eta _{k}(l)\Vert ^{2}, \int _{0}^{L}\Vert R^{k}(x)\Vert ^{2}\,\textrm{d}x=\sum \limits _{l = -\infty }^{\infty }\Vert \xi _{k}(l)\Vert ^{2}, \end{aligned}$$
(51)

and

$$\begin{aligned} \int _{0}^{L}\Vert e^{k}(x)\Vert ^{2}\,\textrm{d}x=\sum \limits _{j = 1}^{M-1}h\Vert e_{j}^{k}\Vert ^{2}, \int _{0}^{L}\Vert R^{k}(x)\Vert ^{2}\,\textrm{d}x=\sum \limits _{j = 1}^{M-1}h\Vert R_{j}^{k}\Vert ^{2}, \end{aligned}$$
(52)

we have

$$\begin{aligned} \Vert e^{k}\Vert _{2}^{2}=\sum \limits _{l = -\infty }^{\infty }\Vert \eta _{k}(l)\Vert ^{2}, \Vert R^{k}\Vert _{2}^{2}=\sum \limits _{l = -\infty }^{\infty }\Vert \xi _{k}(l)\Vert ^{2}. \end{aligned}$$
(53)

Now, we suppose that

$$\begin{aligned}e_{j}^{k}=\eta _{k}e^{i\sigma _{x} jh},\\R_{j}^{k}=\xi _{k}e^{i\sigma _{x} jh},\end{aligned}$$

where \(\sigma _{x}=\frac{2l\pi }{L}\). Substituting the above relations into (46) leads to

$$\begin{aligned} \eta _k=\frac{\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})\eta _j}{z}+\frac{rs'\sum _{j=0}^{k-1}a_{j,k}\eta _j}{z}+\frac{\mu \xi _k}{z}. \end{aligned}$$
(54)

Lemma 3

(Discrete Gronwall inequality [59]) Let \(\{y_n\}\) and \(\{g_n\}\) be nonnegative sequences and b a nonnegative constant. If

$$\begin{aligned}y_n\le b+\sum _{0\le k<n}g_ky_k,\quad n\ge 0,\end{aligned}$$

then

$$\begin{aligned}y_n\le b\prod _{0\le j<n}(1+g_j)\le b\exp \bigg (\sum _{0\le j<n}g_j\bigg ).\end{aligned}$$
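A minimal numerical illustration of Lemma 3; the constant b and the sequence \(g_k\) below are arbitrary test values (not from the paper), chosen so the hypothesis holds with equality.

```python
import math

# Illustration of the discrete Gronwall inequality with a sequence that
# satisfies the hypothesis with equality: y_0 = b, y_n = b + sum_{k<n} g_k*y_k.
# The values of b and g_k are arbitrary test data.
b = 2.0
g = [0.3, 0.1, 0.25, 0.05, 0.2]

y = []
for n in range(len(g) + 1):
    y.append(b + sum(g[k] * y[k] for k in range(n)))

for n in range(len(y)):
    prod = math.prod(1 + g[j] for j in range(n))
    # y_n <= b * prod_{j<n}(1 + g_j)
    assert y[n] <= b * prod + 1e-12
    # b * prod_{j<n}(1 + g_j) <= b * exp(sum_{j<n} g_j), since 1+x <= e^x
    assert b * prod <= b * math.exp(sum(g[:n])) + 1e-12
```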

Theorem 4

If \(\eta _{k}\) is the solution of Equation (54), then there is a positive constant C such that

$$\begin{aligned} |\eta _{k}|\le C|\xi _{1}|. \end{aligned}$$
(55)

Proof

In view of the convergence of the series on the right-hand side of Equation (53), we know that there exists a positive constant \(C_{2}\), such that

$$\begin{aligned} |\xi _{k}|\le C_{2} |\xi _{1}|, k=1,2,\dots , N-1. \end{aligned}$$
(56)

According to Equations (54) and (56), Theorem 2, and Lemma 3 (the discrete Gronwall inequality), we have

$$\begin{aligned} |\eta _{k}|&\le \sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})\times \frac{|\eta _j|}{|z|}+\sum _{j=0}^{k-1}a_{j,k}\frac{|\eta _j||rs'|}{|z|}+\frac{\mu |\xi _k|}{|z|}\\&\le \bigg (C_1\sum _{j=1}^{k-1}(b_{k-j-1}-b_{k-j})+C_2\sum _{j=0}^{k-1}a_{j,k}\bigg )\max _{0\le j\le k-1}|\eta _j|+C_3|\xi _1|\\&\le C_3|\xi _1|\exp (C_1(1-b_{k})+C_2C_4)\\ {}&\le C_3|\xi _1|\exp (C_1+C_2C_4)=C|\xi _1|. \end{aligned}$$

This completes the proof. \(\square\)

Theorem 5

The difference scheme (31)–(35) is convergent, and the order of convergence is \(O(\tau ^{2-\alpha } +h^{2})\).

Proof

By Theorem 4 and Equation (56), we obtain

$$\begin{aligned} \Vert e^{k} \Vert _{l^{2}}^{2}&=\sum \limits _{l = -\infty }^{\infty }\Vert \eta _{k}(l)\Vert ^{2}\le \sum \limits _{l = -\infty }^{\infty }C^{2}\Vert \xi _{1}(l)\Vert ^{2}\\ {}&= C^{2}\sum \limits _{l = -\infty }^{\infty }\Vert \xi _{1}(l)\Vert ^{2}=C^{2} \Vert R^{1} \Vert _{l^{2}}^{2}, \end{aligned}$$

furthermore, there exists a positive constant \(C_1\) such that

$$\begin{aligned} |R_i^k|\le C_1(\tau ^{2-\alpha }+h^2)\Rightarrow \Vert R^k\Vert&\le C_1\sqrt{(M-1)h}\,(\tau ^{2-\alpha }+h^2)\\ {}&\le C_1\sqrt{L}\,(\tau ^{2-\alpha }+h^2). \end{aligned}$$

It follows that

$$\begin{aligned}\Vert e^{k} \Vert _{l^{2}}\le C\Vert R^{1} \Vert _{l^{2}}\le C'(\tau ^{2-\alpha }+h^2),\end{aligned}$$

where \(C'=CC_1\sqrt{L}\). This completes the proof. \(\square\)

6 Numerical experiments

In this section, five test problems are presented to check the effectiveness, validity, stability, and convergence orders of the present method. The domain in all examples is \(\Omega =[0,1]\times [0,1]\). All computations are implemented with MATLAB R2020b. The error norms used in this section are as follows:

$$\begin{aligned}&||e||_\infty =\max _{0\le i\le N,0\le j\le M}|u(x_i,t_j)-U(x_i,t_j)|,\\ {}&{\Vert e(\tau ,h) \Vert }={\Vert e^{N} \Vert } =\bigg (\Delta x\sum _{j=1}^{M}(e_{j}^{N})^{2}\bigg )^{\frac{1}{2}} , \end{aligned}$$

where \(e_{j}^{k}=u(x_{j},t_{k})-U_{j}^{k}\). In all examples we have used the following formulas to calculate the convergence rate:

$$\begin{aligned} r_1(\tau ,h)=\log _{2}\bigg (\frac{{\Vert e(\tau ,2h) \Vert }}{{\Vert e(\tau ,h) \Vert }}\bigg ), r_2(\tau ,h)=\log _{2}\bigg (\frac{{\Vert e(2\tau ,h) \Vert }}{{\Vert e(\tau ,h) \Vert }}\bigg ). \end{aligned}$$
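In practice, these rate formulas are applied to errors obtained on successively halved steps. A small Python sketch (the error values are hypothetical, chosen to exhibit clean second-order decay):

```python
import math

# Applying the rate formula r1(tau, h) = log2( ||e(tau, 2h)|| / ||e(tau, h)|| )
# to hypothetical L2-norm errors on successively halved spatial steps;
# with clean second-order decay the observed rate is exactly 2.
errors = {1/10: 4.0e-3, 1/20: 1.0e-3, 1/40: 2.5e-4}   # h -> ||e(tau, h)||

def rate(e_coarse, e_fine):
    # r = log2(error on step 2h / error on step h)
    return math.log2(e_coarse / e_fine)

r_10_20 = rate(errors[1/10], errors[1/20])
r_20_40 = rate(errors[1/20], errors[1/40])
assert abs(r_10_20 - 2.0) < 1e-12
assert abs(r_20_40 - 2.0) < 1e-12
```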

Example 1

For the first example, consider the following problem:

$$\begin{aligned}_{C}{\mathcal {D}}_{0,t}^\alpha u(x,t)-u_{xx}(x,t)-{\mathcal {I}}^{(\beta )}u_{xx}(x,t)+u_{xxxx}(x,t)=f(x,t),\end{aligned}$$

with the initial condition \(u^0(x)=0\). The source term is

$$\begin{aligned}f(x,t)=\bigg (\frac{\Gamma (\alpha +\beta +1)}{\Gamma (\beta +1)}t^{-\alpha }+\frac{\pi ^2\Gamma (\alpha +\beta +1)}{\Gamma (\alpha +2\beta +1)}t^{\beta }+\pi ^2+\pi ^4\bigg )t^{\alpha +\beta }\sin (\pi x).\end{aligned}$$

The exact solution is

$$\begin{aligned}u(x,t)=t^{\alpha +\beta }\sin (\pi x).\end{aligned}$$
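As a consistency check on the weakly singular term of this manufactured source, the sketch below evaluates the fractional integral of \(t^{\alpha +\beta }\) by quadrature and compares it with the closed form \(\frac{\Gamma (\alpha +\beta +1)}{\Gamma (\alpha +2\beta +1)}t^{\alpha +2\beta }\). The standard Riemann–Liouville normalization \(\frac{1}{\Gamma (\beta )}\int _0^t(t-s)^{\beta -1}v(s)\,\textrm{d}s\) is an assumption here, since \({\mathcal {I}}^{(\beta )}\) is defined earlier in the paper.

```python
import math

# Quadrature check of the fractional integral acting on t^(alpha+beta),
# ASSUMING the standard Riemann-Liouville normalization
#   I^beta v(t) = (1/Gamma(beta)) * int_0^t (t-s)^(beta-1) v(s) ds.
# For v(s) = s^(alpha+beta) the closed form is
#   Gamma(alpha+beta+1)/Gamma(alpha+2*beta+1) * t^(alpha+2*beta).
def rl_integral_power(alpha, beta, t, n=200_000):
    # substitute s = t*u and use the midpoint rule, which avoids
    # evaluating the integrand at the singular endpoint u = 1
    total = 0.0
    for k in range(n):
        u = (k + 0.5) / n
        total += (1 - u) ** (beta - 1) * u ** (alpha + beta)
    return t ** (alpha + 2 * beta) * total / (n * math.gamma(beta))

alpha, beta, t = 0.3, 0.45, 0.8
closed_form = (math.gamma(alpha + beta + 1) / math.gamma(alpha + 2 * beta + 1)
               * t ** (alpha + 2 * beta))
assert abs(rl_integral_power(alpha, beta, t) - closed_form) < 5e-3
```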
Table 1 L2-norm errors and order of convergence for \(\alpha =0{.}1,0{.}3,0{.}5\) and \(\beta =0{.}1,0{.}15,0{.}45\) for example 1.
Table 2 L2-norm errors and order of convergence for \(\alpha =0{.}1,0{.}3,0{.}95\) and \(\beta =0{.}65,0{.}45,0{.}15\) for example 1.
Table 3 L2-norm errors and orders for \(\tau =1/109\) and \(\alpha =0{.}9,\beta =0{.}75\) for example 1.
Table 4 \(L_{\infty }\)-norm errors with \(h=1/512\) for example 1.
Table 5 Error for different \(T,\alpha\) and fixed \(h,\tau ,\beta\) for example 1.

In tables 1 and 3, we record the error norms and convergence orders in the spatial direction for different values of \(\alpha\) and \(\beta\). In table 2, the orders of convergence with respect to time for different values of \(\alpha\) and \(\beta\) are reported. For each value of \(\alpha\) and \(\beta\), we chose spatial step sizes \(h=1/10,1/20,\dots ,1/320\) and a fixed temporal step length \(\tau\) to obtain the numerical convergence rates in space, which are in excellent agreement with our theoretical results. In table 4, we compare our results with those of reference [30]. In table 5, we present the results for large time instants t.

Figure 1 compares the plots of the exact and numerical solutions computed by the difference scheme using \(\tau =\frac{1}{55}\) and \(h=\frac{1}{320}\). The plot of pointwise errors and the contour plot of the numerical solution at \(t=1\) with \(\tau =\frac{1}{55}\) and \(h=\frac{1}{320}\) are illustrated in figure 2. In figure 3, a comparison between the numerical and exact solutions at \(t=1\) with \(\tau =\frac{1}{55}\) and \(h=\frac{1}{320}\) is demonstrated. Tables 1 and 3 show that, as the spatial step sizes decrease, we obtain better results. In tables 1, 2, and 3 the CPU time is almost 2 seconds. All the figures show that the numerical scheme is efficient and effective.

Figure 1
figure 1

The graph of exact (left) and numerical (right) solutions at \(\tau =1/55\) and \(h=1/320\) with \(\alpha =\beta =0{.}1\) for Example 1.

Figure 2
figure 2

The surface of absolute pointwise errors when \(\tau =1/55\) , \(h=\frac{1}{320}\) and contour plot of numerical solution for Example 1.

Figure 3
figure 3

The comparison between numerical solution and exact solution (left) and pointwise absolute error (right) with \(\tau =1/55\) and \(h=\frac{1}{320}\) at \(t=1\) for example 1.

Example 2

Consider the problem (1) with exact solution \(u(x,t) = t^{\beta }\sin (\pi x),(x,t)\in \Omega\). The source term is taken as

$$\begin{aligned}f(x,t)=\bigg (\frac{\Gamma (\beta +1)}{\Gamma (\beta +1-\alpha )}t^{-\alpha }+\frac{\pi ^2\Gamma (\beta +1)}{\Gamma (2\beta +1)}t^{\beta }+\pi ^2+\pi ^4\bigg )t^{\beta }\sin (\pi x).\end{aligned}$$
Table 6 L2-norm errors and order of convergence for \(\alpha =0{.}1,0{.}4,0{.}8\) and \(\beta =0{.}3,0{.}5,0{.}6\) when \(\tau =1/277\) for example 2.
Table 7 \(L_{\infty }\)-norm errors with \(h=1/512\) for example 2.
Table 8 Error for different \(T,\alpha\) and fixed \(h,\tau ,\beta\) for example 2.
Figure 4
figure 4

The exact (left) and numerical (right) solutions at \(\tau =1/277\) and \(h=1/640\) with \(\alpha =0.8,\beta =0.6\) for example 2.

Figure 5
figure 5

The comparison between numerical solution and exact solution (left) and pointwise absolute error (right) with \(\tau =1/277\) and \(h=\frac{1}{640}\) at \(t=1\) for example 2.

Figure 6
figure 6

Pointwise errors for Example 2: top left \((\alpha =0{.}8,\beta =0{.}6,\tau =1/10,h=1/640)\), top right \((\alpha =0{.}1,\beta =0{.}3,\tau =1/277,h=1/640)\), bottom left \((\alpha =0{.}4,\beta =0{.}5,\tau =1/10,h=1/640)\), bottom right \((\alpha =0{.}6,\beta =0{.}9,\tau =1/5,h=1/640)\).

In table 6, we list L2-norm errors and experimental orders of convergence for the difference scheme. Herein, we take \(\tau =\frac{1}{277}\) and choose different spatial step sizes for different values of \(\alpha\) and \(\beta\). The convergence rate in space is seen to be about 2. The CPU time is less than 30 seconds. In table 7, we compare our results with those of reference [30]. In table 8, we present the results for large time instants t. Figure 4 shows the exact and numerical solutions. Figure 5 presents the exact and numerical solutions and the absolute error at \(t=1\). In figure 6, we depict graphs of pointwise errors for different values of \(\alpha ,\beta ,\tau ,h\). It is apparent from the tables and figures that the numerical scheme works well.

Example 3

In this example, the exact solution of the problem (1) is given by \(u(x,t) = t^3x^3(1-x)^3\) and the inhomogeneous term is

$$\begin{aligned}f(x,t)&=\frac{6x^3(1-x)^3t^{3-\alpha }}{\Gamma (4-\alpha )}-6t^3x(-5x^3+10x^2-6x+1)\\&\quad -\frac{36x(-5x^3+10x^2-6x+1)t^{\beta +3}}{\Gamma (\beta +4)}-72(5x^2-5x+1)t^3. \end{aligned}$$
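The spatial pieces of this source term can be verified directly from \(u = t^3x^3(1-x)^3\). The sketch below differentiates the polynomial factor using plain coefficient lists and compares against the factors printed above.

```python
# Verification of the spatial pieces of f in Example 3 for the exact
# solution u = t^3 * x^3 (1-x)^3, using coefficient lists (index = power
# of x) so no symbolic library is needed.

def deriv(c):
    # derivative of a polynomial given by its coefficient list
    return [k * c[k] for k in range(1, len(c))]

# p(x) = x^3 (1-x)^3 = x^3 - 3x^4 + 3x^5 - x^6
p = [0, 0, 0, 1, -3, 3, -1]

p2 = deriv(deriv(p))     # p''
p4 = deriv(deriv(p2))    # p''''

# -u_xx factor claimed in f: -6x(-5x^3 + 10x^2 - 6x + 1)
#   = -6x + 36x^2 - 60x^3 + 30x^4
claimed_m_uxx = [0, -6, 36, -60, 30]
assert [-c for c in p2] == claimed_m_uxx

# +u_xxxx factor claimed in f: -72(5x^2 - 5x + 1) = -72 + 360x - 360x^2
claimed_uxxxx = [-72, 360, -360]
assert p4 == claimed_uxxxx
```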
Table 9 L2-norm errors and order of convergence for \(\alpha =0{.}1,0{.}5,0{.}7\) and \(\beta =0{.}1,0{.}5,0{.}7\) for example 3.
Table 10 L2-norm errors and order of convergence for \(\alpha =0{.}1,0{.}2,0{.}95\) and \(\beta =0{.}71,0{.}85,0{.}15\) for Example 3.
Table 11 Error for different \(T,\alpha\) and fixed \(h,\tau ,\beta\) for Example 3.

Tables 9 and 10 give the L2-norm errors and convergence orders of the present numerical method. The numerical solutions are observed to be in good agreement with the exact ones. In table 11, we present the results for large time instants t. The CPU time is less than 12 seconds. In figure 8, surfaces of pointwise errors are portrayed for different \(\alpha ,\beta ,\tau ,h, T\). In figure 7, the numerical and exact solution curves are demonstrated. In figure 9, the comparison between \(u(x_j,t_k)\) and \(U_j^k\) at \(t=1\) is presented to show the efficiency of the method. The CPU time also illustrates that the proposed scheme is fast.

Figure 7
figure 7

The exact (left) and numerical (right) solutions curves at \(\tau =1/95\) and \(h=1/1280\) with \(\alpha =0{.}7,\beta =0{.}7\) for example 3.

Figure 8
figure 8

Pointwise errors for Example 3: top left \((\alpha =0{.}7,\beta =0{.}7,\tau =1/95,h=1/1280,T=1)\), top right \((\alpha =0{.}9,\beta =0{.}1,\tau =0{.}02,h=1/640,T=2)\), bottom left \((\alpha =0{.}9,\beta =0{.}1,\tau =0{.}04,h=1/640,T=4)\), bottom right \((\alpha =0{.}8,\beta =0{.}4,\tau =0{.}08,h=1/640,T=8)\).

Figure 9
figure 9

The comparison between numerical solution and exact solution (left) and absolute error (right) with \(\tau =1/277,h=\frac{1}{1280}\) and \(\alpha =0{.}7,\beta =0{.}7\) at \(t=1\) for example 3.

Example 4

Consider the fourth-order time-fractional integro-differential equation with a weakly singular kernel (1) with exact solution \(u(x,t) = t^2e^xx^3(1-x)^3,(x,t)\in \Omega\). The source term is taken as

$$\begin{aligned}f(x,t)&=\frac{2t^{2-\alpha }e^xx^3(1-x)^3}{\Gamma (3-\alpha )}\\&\quad +t^2e^xx(x^5+9x^4+3x^3-37x^2+30x-6)\\&\quad +\frac{2e^xx(x^5+9x^4+3x^3-37x^2+30x-6)}{\Gamma (\beta +3)}t^{\beta +2}\\&\quad -t^2e^x(x^6+21x^5+123x^4+167x^3-156x^2-108x+48). \end{aligned}$$
Table 12 L2-norm errors and order of convergence for \(\alpha =0{.}1,0{.}6,0{.}8\) and \(\beta =0{.}1,0{.}3,0{.}5\) when \(\tau =1/40\) for Example 4.
Table 13 Error for different \(\tau ,T,\alpha\) and fixed \(h,\beta\) for Example 4.
Figure 10
figure 10

The exact (left) and numerical (right) solutions plots at \(\tau =1/40\) and \(h=1/1280\) with \(\alpha =0{.}8,\beta =0{.}5\) for example 4.

Figure 11
figure 11

The pointwise error (left) and comparison between the numerical and exact solutions (right) for example 4.

From table 12, we can see that more accurate results are achieved as h decreases. In table 13, L2-norm errors are demonstrated for \(\alpha =0{.}3,0{.}8\). It is clear from table 12 that the presented method is accurate with a good order of convergence. The CPU time is less than 12 seconds. Figure 10 shows that the exact and numerical solutions agree closely. The plot of the pointwise error and the comparison between numerical and exact solutions at \(t=1\) are illustrated in figure 11. All the tables and figures clearly show that the present difference scheme is impressive in terms of accuracy.

Example 5

In the last example, we consider problem (1) with exact solution \(u(x,t)=(x^2-x)^4\sin (\pi x)t^\alpha\).

Table 14 L2-norm errors for \(\alpha =\beta =0{.}1,0{.}3,0{.}7\) for Example 5.
Table 15 Error for different \(T,\alpha\) and fixed \(h,\tau ,\beta\) for Example 5.

In table 14, L2-norm errors are reported for \(\alpha =\beta =0.1,0.3,0.7\). In table 15, numerical results are presented for large time instants \((T=2,4,6,10)\). Tables 14 and 15 verify the efficiency of the proposed method.

7 Conclusions

In this paper, we presented a difference scheme based on cubic B-spline quasi-interpolation for the numerical solution of a fourth-order time-fractional integro-differential equation with a weakly singular kernel. The time-fractional derivative is approximated by a scheme of order \(O(\tau ^{2-\alpha })\), the spatial derivatives are replaced with second-order approximations, and the fractional integral is approximated by polynomial interpolation. The method is easy to implement and computationally fast. We proved the stability and convergence of the numerical method, with order of convergence \(O(\tau ^{2-\alpha }+h^2)\). Five test problems were performed to show the convergence orders, applicability, and capability of the method. All numerical computations were carried out in MATLAB R2020b.