1 Introduction

Options are among the most commonplace derivative financial instruments, from both theoretical and practical points of view. A solid grasp of the schemes by which options are priced is therefore essential. In 1973, Black and Scholes (1973) and Merton (1973) introduced a model, known as the Black–Scholes (B–S) model, for describing the approximate behavior of the underlying assets in pricing options. It has been extensively used by options traders and is known to have led to considerable growth in options trading due to its effectiveness and accuracy in predicting option prices. More recently, several techniques have been proposed for numerically approximating the B–S model, such as those of Farnoosh et al. (2015, 2016, 2017), Golbabai and Mohebianfar (2017a, b), Golbabai et al. (2012, 2014), Rad et al. (2015), Rashidinia and Jamalzadeh (2017a, b), and Sobhani and Milev (2018).

The standard Brownian motion involved in the classical model has been replaced with fractional Brownian motion, which introduces a fractal assembly in the stochastic process and the financial field, so that fractional partial differential equations and fractional calculus can be implemented in financial theory. Because fractional Brownian motion is not a semi-martingale, the Itô theory of stochastic integrals cannot be applied to it directly. It is possible to use a variation of the path-wise Riemann–Stieltjes integral instead of the Itô integral. However, the resulting option value model admits arbitrage, as shown by Rogers (1997). Therefore, under a frictionless and complete setting, there are chances for arbitrage in the fractional Black–Scholes model. In recent years, the B–S equation has been generalized by many researchers (Björk and Hult 2005; Meerschaert and Sikorskii 2012), since fractional order integrals and derivatives are powerful tools for explaining the hereditary and memory characteristics of different substances. As a result, using a model driven by fractional order processes is one way of taking into account the high volatility of the stock exchange market.

As an example, the European call option was priced using a time fractional Black–Scholes model (TFBSM), as mentioned by Wyss (2017). The TFBSM is itself a special case of the bi-fractional B–S model proposed recently by Liang et al. (2010). Cartea (2013) investigated this model further and showed that the value of European-style derivatives can be described by a partial integro-differential equation involving a non-local time-to-maturity operator, namely the Caputo fractional derivative. Moreover, explicit solutions have been provided by the authors of Leonenko et al. (2013), who implemented spectral techniques for fractional Pearson diffusions based on the corresponding time-fractional diffusion model, which has been successfully applied to extend the B–S formalism. The authors also used a non-Markovian inverse stable time change to offer stochastic solutions.

It is considerably difficult to obtain an accurate solution for this problem, owing to the memory trait of fractional derivatives. As a result, numerous researchers have developed techniques for approximating such problems (Golbabai and Nikan 2015a, b; Golbabai et al. 2019; Keshi et al. 2018; Moghaddam and Machado 2017a, b; Moghaddam et al. 2018, 2019; Vitali et al. 2017; Zaky and Machado 2017; Zaky 2018). The following are among the analytical methods used to solve the TFBSM: the separation of variables method (Chen 2014), hybrid methods based on wavelets (Hariharan et al. 2013), the Fourier–Laplace transform method (Duan et al. 2018), the homotopy analysis and homotopy perturbation methods (Kumar et al. 2016), and integral transform methods (Chen et al. 2015a; Kumar et al. 2012). The solutions obtained via these methods are usually in the form of an infinite series involving an integral or a convolution of some functions, which makes them difficult to evaluate. For this reason, more attention has been paid to developing computationally effective numerical methods for fractional B–S models. Some of these methods are reviewed in the following. The FMLS process with spatial fractional derivatives was solved numerically in Cartea and del Castillo-Negrete (2007) via backward difference techniques and the shifted Grünwald–Letnikov scheme. The numerical comparisons and investigations in Marom and Momoniat (2009) were carried out for the CGMY, KoBoL, and FMLS models and were used to analyze the corresponding convergence conditions for these models. A solution for pricing options was obtained by the authors of Song and Wang (2013) and Zhang et al. (2014) under a TFBSM, employing a \(\theta \) finite difference scheme with second-order accuracy along with an implicit finite difference scheme with first-order accuracy. The TFBSM was approximated numerically in Koleva and Vulkov (2017) using a weighted finite difference scheme.

Bhowmik (2014) approximated the partial integro-differential equation that arises in option pricing theory by utilizing a finite difference method, a low-convergence-order explicit–implicit numerical technique, which was shown to be conditionally stable. American option pricing was investigated by Chen et al. (2015b) using a predictor–corrector method under the finite moment log-stable model. A discrete implicit numerical approach was proposed by Zhang et al. (2016a) for European option pricing using the TFBSM, with a temporal accuracy order of \(2 -\alpha \) and second-order spatial accuracy. A similar study was undertaken in Zhang et al. (2016b) for the case of tempered fractional derivatives. De Staelen and Hendy (2017) improved the spatial accuracy of the scheme to fourth order while maintaining the temporal \(2 -\alpha \) order; furthermore, they performed a convergence and stability analysis of their numerical scheme. Golbabai and Nikan (2019) adopted the moving least-squares method for determining the approximate solution of the TFBSM. The TFBSM governing the value of an option, with terminal and boundary (barrier) conditions, is given by:

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{{\partial }^ \alpha C({S},t)}{{\partial t}^\alpha }+\frac{1}{2} {\sigma }^{2}{S^2}\frac{{\partial }^2 C({S},t)}{{\partial S}^2 }+(r-D)S\frac{{\partial } C({S},t)}{\partial S}-rC(S,t)=0,&{} (S,t) \in (0,\infty )\times (0,T),\\ C(0,t)=p(t),\qquad C(\infty ,t)=q(t),&{}\\ C({S},T)=v(S), \end{array}\right. } \end{aligned}$$
(1)

where \( 0 < \alpha \le 1\), T is the expiry time, r is the risk-free interest rate, D is the dividend rate, \(\sigma (\ge 0)\) is the volatility of the returns from the underlying stock price S, and \(\frac{{\partial }^ \alpha C({S},t)}{{\partial t}^\alpha }\) denotes the modified right Riemann–Liouville derivative (Podlubny 1999) defined as follows:

$$\begin{aligned} \frac{{\partial }^\alpha C({S},t)}{{\partial t}^\alpha }= {\left\{ \begin{array}{ll} \frac{1}{\varGamma \left( 1-\alpha \right) }\frac{\mathrm{d}}{\mathrm{d}t}\int \limits ^{T}_{t}\dfrac{C(S,\eta )-C(S,T)}{(\eta -t)^\alpha }\mathrm{d}\eta , &{} 0<\alpha <1, \\ \\ \frac{\partial {C(S,t)}}{{\partial t}},&{}\alpha =1 . \end{array}\right. } \end{aligned}$$
(2)

For the special case \(\alpha = 1\), the model (1) reduces to the classical B–S model. Letting \(t = T-\uptau \), for \(0<\alpha < 1\), we get

$$\begin{aligned} \begin{aligned} \frac{\partial ^\alpha C({S},t)}{\partial t^\alpha }&=\frac{1}{\varGamma \left( 1-\alpha \right) }\frac{-\mathrm{d}}{\mathrm{d}\uptau }\int \limits ^{T}_{t}\dfrac{C(S,\eta )-C(S,T)}{(\eta -(T - \uptau ))^\alpha }\mathrm{d}\eta \\&=\frac{1}{\varGamma \left( 1-\alpha \right) }\frac{-\mathrm{d}}{\mathrm{d}\uptau }\int \limits ^{T}_{T - \uptau }\frac{C(S,\eta )-C(S,T)}{(\eta -(T - \uptau ))^\alpha }\mathrm{d}\eta \\&=\frac{-1}{\varGamma \left( 1-\alpha \right) }\frac{\mathrm{d}}{\mathrm{d}\uptau }\int \limits ^{\uptau }_{0}\frac{C(S,T - \xi )-C(S,T)}{{(\uptau -\xi )}^\alpha }\mathrm{d}\xi . \end{aligned} \end{aligned}$$

The model (1) can be rewritten by supposing \(x = \ln S\) and defining \(U(x,\uptau )=C(e^x,T-\uptau )\) according to the following expression:

$$\begin{aligned} \left\{ \begin{array}{lll} \frac{{\partial }^\alpha U({x},\uptau )}{{\partial \uptau }^\alpha }=\frac{1}{2} {\sigma }^{2}{}\frac{{\partial }^2 U({x},\uptau )}{{\partial x}^2 }+\left( r-\frac{1}{2}\sigma ^2-D\right) \frac{{\partial } U({x},\uptau )}{\partial x} -rU(x,\uptau ), &{} \\ U(-\infty , \uptau )=p(\uptau ), U(\infty ,\uptau ) = q(\uptau ),\\ U(x, 0) = u(x), \end{array} \right. \end{aligned}$$
(3)

where the fractional derivative is defined by

$$\begin{aligned} {}_{0}D_{\uptau }^\alpha U(x,\uptau )=\frac{1}{\varGamma \left( 1-\alpha \right) }\frac{\mathrm{d}}{\mathrm{d}\uptau }\int \limits ^{\uptau }_{0}\dfrac{U(x,\eta )-U(x,0)}{(\uptau -\eta )^\alpha }\mathrm{d}\eta ,\qquad (0<\alpha <1). \end{aligned}$$
(4)

To approximate the numerical solution of the above-mentioned model properly, it is essential to work on a bounded interval. Therefore, we truncate the domain of the variable x in Eq. (3) to a finite interval \((I_d, I_u)\) and consider the following dimensionless model:

$$\begin{aligned} \left\{ \begin{array}{lll} {}_{0}D_{\uptau }^\alpha U(x,\uptau )=\gamma _{1}\frac{{\partial }^2 U({x},\uptau )}{{\partial x}^2 }+\gamma _{2}\frac{{\partial } U(x,\uptau )}{\partial x}-\gamma _{3}U(x,\uptau )+f(x,\uptau ),\\ U(I_d , \uptau )=p(\uptau ), U(I_u ,\uptau ) = q(\uptau ),\\ U(x, 0) =u(x), \end{array} \right. \end{aligned}$$
(5)

where \(\gamma _{1}=\frac{1}{2}\sigma ^2>0, \gamma _{2} = r - D-\gamma _{1}, \gamma _{3} = r > 0 \). For validation purposes in Sect. 5, a source term \(f(x,\uptau )\) is added.

1.1 A general insight into meshless methods

A mesh is defined as a net resulting from connecting nodes in a prescribed manner; it is synonymous with grid, elements, or cells. In a meshless technique, one is not required to have a predefined mesh, and no mesh needs to be generated to solve the problem. In contrast, conventional methods such as finite volume, finite element, and finite difference methods need a mesh to be generated, which involves triangulating the problem domain. Problems involving moving boundaries, steep gradients, and sharp corners demand more flexibility in some regions; for such problems, meshless techniques can be superior to grid-based techniques. Numerous research areas in approximation theory and computational science have recently taken an interest in meshless schemes. These areas include numerical solution and optimization of partial differential equations, image processing, computer graphics, and artificial intelligence. These techniques have exhibited a promising prospect as partial differential equation (PDE) solvers in irregular and complicated domains. The radial basis function (RBF) technique is not grid-based and belongs to the class of techniques named meshless methods. The RBF method has become the foremost means of interpolating multidimensional scattered data. The RBF technique works in very general settings by composing a univariate function with the Euclidean norm, which converts a multidimensional problem into a virtually one-dimensional one. Among the many advantages of RBF techniques are spectral convergence, dimension insensitivity, lack of need for node connectivity, and simple implementation; the choice of the basis function determines the spectral convergence. The RBF technique originated in fields such as metrology, mapping, geophysics, and geodesy. Subsequently, further applications were found in other fields such as optimization, finance, statistics, sampling theory, signal processing, neural networks, learning theory, artificial intelligence, and PDEs.

The attention of many researchers in engineering and science has been drawn in the recent decade to the development of RBF methods as truly meshless techniques for approximating the solution of PDEs. The RBF approach is a significant subject of mathematical research for solving PDEs, with numerous real-world applications such as quantum mechanics, astrophysics, and geophysics (climate and weather modeling). Hardy (1971) introduced the multiquadric (MQ) RBF in 1971, although his MQ interpolation did not attract attention until 1979. Franke (1982) demonstrated the superiority of the MQ method for solving scattered data interpolation problems. Madych and Nelson (1990) also showed that the convergence rate of MQ interpolation is spectral. It was shown by Hardy (1990) that MQ RBFs correspond to a consistent solution of the biharmonic potential problem. Kansa (1990) first used the MQ method in 1990 for solving differential equations. Fasshauer (1996) formulated the RBF method for solving PDEs in domains of irregular shape. Subsequent to the first use of the MQ technique for solving PDEs (Larsson and Fornberg 2003), the method experienced a rapid growth in popularity and found numerous applications. The existence, uniqueness, and convergence of the RBF approximation were discussed in detail by Franke and Schaback (1998), Madych and Nelson (1990), and Micchelli (1986).

1.2 Background and overview of the current research

The objective of the current paper is to extend the RBF-based collocation approach to approximate the TFBSM. The present work is outlined as follows: Sect. 2 introduces a brief summary of several definitions needed for understanding RBFs. In Sect. 3, we first discretize the time fractional derivative of the TFBSM using a finite difference scheme and then approximate the spatial derivatives by RBF meshless methods. Section 4 analyzes the stability and convergence of the proposed temporal discrete scheme. Section 5 reports the numerical results of solving the TFBSM to show the high accuracy and efficiency of the method and to confirm the theoretical predictions. Finally, Sect. 6 contains a brief conclusion.

2 Approximation based on radial basis function

2.1 Definition of radial basis functions

Definition 1

A function \(\varPhi (r) :{\mathbb {R}}^d \longrightarrow {\mathbb {R}} \) is called radial if there is a univariate function \(\phi : [0, \infty ) \longrightarrow {\mathbb {R}}\) such that \(\varPhi ({\mathbf{x}}) = \phi (r)\), where \(r = ||{\mathbf{x}}||\) and \(||\cdot ||\) is some norm on \({\mathbb {R}}^d \), typically the Euclidean norm.

Definition 2

Let \(X = \{{\mathbf{x}}_1,\ldots ,{\mathbf{x}}_N\} \subseteq {\mathbb {R}}^d \), and data \(f_i = f({\mathbf{x}}_i), i=1,\ldots ,N,\) be given. The scattered data interpolation problem is to find a function \( s:{\mathbb {R}}^d \longrightarrow {\mathbb {R}} \) such that \(s({\mathbf{x}}_i)=f_i\), for \(i = 1,\ldots , N\).

Definition 3

A radial basis function, \(\varPhi (r)\), is a one-variable, continuous function defined for \(r\ge 0\) that has been radialized by composition with the Euclidean norm on \( {\mathbb {R}}^d\). If one chooses N points \(\lbrace {\mathbf{x}}_{j}\rbrace ^{N} _{j=1} \) in \( {\mathbb {R}}^d\), then by custom \(s({\mathbf{x}}) = \sum \nolimits _{j = 1}^N {{\lambda _j}} \phi (||{\mathbf{x}} - {\mathbf{x}}_j||); ~~ \lambda _j \in {{\mathbb {R}}}\) is also called a radial basis function (Baxter 2010) (see Table 1).

Table 1 Definition of some types of RBFs

Standard RBFs can be classified into two main categories (Khattak et al. 2009):

Category 1. Infinitely smooth RBFs (Khattak et al. 2009):

These basis functions are infinitely differentiable and depend on a shape parameter c, e.g., the Hardy multiquadric (MQ), Gaussian (GA), inverse multiquadric (IMQ), and inverse quadratic (IQ).

Category 2. Infinitely smooth (except at centers) RBFs (Khattak et al. 2009):

The basis functions of this class are not infinitely differentiable. They are free of a shape parameter and are comparatively less accurate than the basis functions listed in Category 1; an example is the thin plate spline.
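For concreteness, the commonly used forms of these basis functions can be written down directly. The following short sketch (in Python; the normalizations are assumptions of this illustration and may differ from those in Table 1) defines the infinitely smooth MQ, GA, IMQ and IQ functions of Category 1 and the shape-parameter-free thin plate spline of Category 2.

```python
import numpy as np

# Common radial basis functions phi(r); c is the shape parameter.
# The normalizations below are assumed and may differ from Table 1.
def mq(r, c):
    return np.sqrt(r**2 + c**2)            # Hardy multiquadric

def ga(r, c):
    return np.exp(-(c * r)**2)             # Gaussian

def imq(r, c):
    return 1.0 / np.sqrt(r**2 + c**2)      # inverse multiquadric

def iq(r, c):
    return 1.0 / (r**2 + c**2)             # inverse quadratic

def tps(r):
    # thin plate spline (Category 2, shape-parameter free)
    return np.where(r > 0, r**2 * np.log(np.where(r > 0, r, 1.0)), 0.0)
```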

2.2 RBF collocation method

Considering a finite set of scattered nodes \(\chi =\lbrace {\mathbf{x}} _1, {\mathbf{x}} _2,\ldots ,{\mathbf{x}} _N\rbrace \subset {\mathbb {R}}^d\) with corresponding values of a function \(u: \varOmega \rightarrow {\mathbb {R}}\), the basic RBF interpolant \(S({\mathbf{x}})\) is expanded as:

$$\begin{aligned} u({\mathbf{x}})\simeq S({\mathbf{x}}) = \sum \limits _{j = 1}^N {{\lambda _j}} \phi (||{\mathbf{x}} - {{\mathbf{x}}_j}||) + p({\mathbf{x}}), \end{aligned}$$
(6)

where \(\Vert . \Vert \) is the Euclidean norm and \(\phi \) is a radial function. In addition, \(p({\mathbf{x}})\) is a linear combination of polynomials on \({\mathbb {R}}^d\) of total degree at most \(m-1\) as follows:

$$\begin{aligned} p({\mathbf{x}}) = \sum \limits _{k = 1}^M {{\gamma _k}} {p_k}({\mathbf{x}}),\quad M =\genfrac(){0.0pt}0{d + m - 1}{m - 1}. \end{aligned}$$
(7)

If X is \((m-1)\)-unisolvent, we are guaranteed a unique solution for the above interpolation problem, and M is the dimension of the linear space \({\varPi _{m - 1}}({{\mathbb {R}}^d})\) of polynomials of total degree less than or equal to \(m-1\) in d variables. To calculate the coefficients \(\left\{ {{\lambda _j}} \right\} _{j = 1}^N\) and \(\left\{ {{\gamma _k}} \right\} _{k = 1}^M\), the collocation method is used. However, in addition to the N equations resulting from collocating Eq. (6) at the N points, M extra conditions for the polynomial part are required to guarantee a unique solution of the linear system. Imposing the interpolation conditions on the interpolant \(S(\cdot )\) and mimicking the natural conditions gives the following additional conditions:

$$\begin{aligned} {\left\{ \begin{array}{ll} s({{\mathbf{x}}_j}) = {u_j},\quad \forall j = 1,\ldots ,N,\\ \sum \limits _{j = 1}^N {{\lambda _j}} {p_k}({{\mathbf{x}}_j}) = 0,\quad \forall k = 1,2,\ldots ,M ,\quad \forall p_k \in {\varPi _{m - 1}}({{\mathbb {R}}^d}). \end{array}\right. } \end{aligned}$$
(8)

In view of Eqs. (6) and (8), one can obtain the following matrix form:

$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} A&{}P\\ {{P^T}}&{}{{0}} \end{array}} \right] \left[ \begin{array}{l} \lambda \\ \gamma \end{array} \right] = \left[ \begin{array}{l} {u}\\ {{0}} \end{array} \right] , \end{aligned}$$
(9)

where

$$\begin{aligned} A_{j,k}=\phi (\Vert {\mathbf{x}}_{j}-{\mathbf{x}}_{k}\Vert ),\quad j,k=1,\ldots ,N, \quad P_{j,k}=p_{k}({\mathbf{x}}_{j}), \quad j=1,\ldots ,N, \quad k = 1,\ldots ,M, \end{aligned}$$

\(\lambda = {[{\lambda _1},\ldots ,{\lambda _N}]^T},{u} = {[{u_1},\ldots ,{u_N}]^T}, \gamma = [{\gamma _1},\ldots ,{\gamma _M}]^T\). The value of \(u({\mathbf{x}})\) can be estimated as below:

$$\begin{aligned} u({\mathbf{x}}) \approx \sum \limits _{x_j\in \chi } {{\lambda _j}} \phi (||{\mathbf{x}} - {{\mathbf{x}}_j}||) + p({\mathbf{x}}), \end{aligned}$$
(10)

and, for any partial differential operator \({\mathcal {L}}\), \({\mathcal {L}}u\) can be expressed as

$$\begin{aligned} {\mathcal {L}}u({\mathbf{x}}) \approx \sum \limits _{x_j\in \chi } {{\lambda _j}} {\mathcal {L}}\phi (||{\mathbf{x}} - {{\mathbf{x}}_j}||) + {\mathcal {L}}p({\mathbf{x}}). \end{aligned}$$
(11)

Substituting this equality into the original equation helps to determine the unknown coefficients \({{\lambda _j}}\) (Cheney and Light 2009; Fasshauer 2007). Suppose \(\lbrace {\mathbf{x}} _{j}\rbrace ^{N} _{j=1} \) are N nodes in a convex domain \(\varOmega \); the radial (fill) distance is

$$\begin{aligned} h_{\varOmega ,{\mathbf{x}}}=\smash {\displaystyle \max _{{\mathbf{x}} \in \varOmega }\displaystyle \min _{1\le i \le N}}||{\mathbf{x}} -{\mathbf{x}} _i||_{2} , \end{aligned}$$

then, we get:

$$\begin{aligned} \Vert u_N{({\mathbf{x}} )}-u{({\mathbf{x}} )}\Vert \le O(\eta ^{ \beta / h}), \end{aligned}$$

where \(0< \eta < 1\) is a real number and \(\eta =\exp (-\theta )\) with \(\theta > 0.\) From the above relation, it is clear that the rate of convergence depends on the parameter \(\beta \) and the radial distance h (Franke and Schaback 1998; Madych and Nelson 1990; Micchelli 1986).
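As an illustration of the interpolation problem (6)–(9), the following minimal sketch (Python; an assumption of this presentation, not the authors' MATLAB code) assembles the augmented system (9) for one-dimensional data with the MQ basis and a linear polynomial part, and evaluates the resulting interpolant (10).

```python
import numpy as np

def mq(r, c=0.5):
    return np.sqrt(r**2 + c**2)

def rbf_interpolant(x_nodes, u_vals, c=0.5):
    """Solve the augmented system (9) in 1-D with a linear polynomial part
    p(x) = gamma_1 + gamma_2 x (so m = 2, M = 2) and return the interpolant (10)."""
    N = len(x_nodes)
    A = mq(np.abs(x_nodes[:, None] - x_nodes[None, :]), c)   # A_{jk} = phi(|x_j - x_k|)
    P = np.column_stack([np.ones(N), x_nodes])                # P_{jk} = p_k(x_j)
    K = np.block([[A, P], [P.T, np.zeros((2, 2))]])
    coeff = np.linalg.solve(K, np.concatenate([u_vals, np.zeros(2)]))
    lam, gam = coeff[:N], coeff[N:]

    def s(x):                                                 # evaluate S(x) of Eq. (10)
        x = np.atleast_1d(np.asarray(x, dtype=float))
        Phi = mq(np.abs(x[:, None] - x_nodes[None, :]), c)
        return Phi @ lam + gam[0] + gam[1] * x
    return s

# usage: interpolate f(x) = sin(pi x) on mildly scattered nodes
xn = np.linspace(0.0, 1.0, 12) + 0.01 * np.sin(np.arange(12))
s = rbf_interpolant(xn, np.sin(np.pi * xn))
print(abs(s(0.37)[0] - np.sin(np.pi * 0.37)))                 # small interpolation error
```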

3 Numerical implementation

In the current section, we explain the numerical approximation of Eq. (5). First of all, we define N equally spaced nodes \(\{x_j | j= 1,2,3,\ldots ,N\}\) in the bounded interval \([I_d,I_u]\) such that \({x}_1, {x}_N\) are the boundary nodes, and the grid nodes in the time interval [0, T] are tagged as \(\uptau _n = n\delta t, n = 0,1,2,3,\ldots , M\), where \(h = (I_u-I_d)/N, \delta t = T/M\) and \(U^n(x_i) = U(x_i,\uptau _{n})\).

3.1 Time fractional derivative discretization

We will show that the time fractional derivative \({}_{0}D_{\uptau }^\alpha U(x,\uptau )\) appearing in relation (5) coincides with the \(\alpha \)-order Caputo fractional derivative. Let \(U(x, \uptau )\in C^{(1) }\) with respect to the time variable \(\uptau \); then, for \(0< \alpha <1\), the modified Riemann–Liouville derivative satisfies:

$$\begin{aligned} {}_{0}D_{\uptau }^\alpha U(x,\uptau )&=\frac{1}{\varGamma (1-\alpha )}\frac{\mathrm{d}}{\mathrm{d}\uptau } \int \limits ^{\uptau }_{0}\frac{U(x,\eta )-U(x,0)}{(\uptau -\eta )^\alpha }{\mathrm{d}\eta }\nonumber \\&=\frac{1}{\varGamma (1-\alpha )}\frac{\mathrm{d}}{\mathrm{d}\uptau }\int \limits ^{\uptau }_{0} \frac{U(x,\eta )}{(\uptau -\eta )^\alpha }{\mathrm{d}\eta }-\dfrac{1}{\varGamma (1-\alpha )} \dfrac{\mathrm{d}}{\mathrm{d}\uptau }\int \limits ^{\uptau }_{0}\dfrac{U(x,0)}{(\uptau -\eta )^\alpha }{\mathrm{d}\eta }\nonumber \\&=\frac{1}{\varGamma (1-\alpha )}\frac{\mathrm{d}}{\mathrm{d}\uptau }\int \limits ^{\uptau }_{0}\dfrac{U(x,\eta )}{(\uptau -\eta )^\alpha }{\mathrm{d}\eta } -{U(x,0)}\frac{\uptau ^{-\alpha }}{\varGamma (1-\alpha )}\nonumber \\&=\dfrac{1}{\varGamma (1-\alpha )}\int \limits ^{\uptau }_{0}\dfrac{\partial U(x,\eta )}{\partial {\eta }}(\uptau -\eta )^{-\alpha }\mathrm{d}{\eta }={}^C_{0}D_{\uptau }^\alpha U(x,\uptau ), \end{aligned}$$
(12)

where the operator \({}^C_{0}D_{\uptau }^\alpha U(x,\uptau )\) is the Caputo derivative (Podlubny 1999). Now, based on the finite difference scheme, Eq. (12) can be approximated as below:

$$\begin{aligned} {}_{0}D_{\uptau }^\alpha U(x_{i},\uptau _{n+1})&= \frac{1}{\varGamma (1-\alpha )}\int _{0}^{\uptau _{n+1}}\dfrac{\partial U(x_{i},\eta )}{\partial \eta }(\uptau _{n+1}-\eta )^{-\alpha }\mathrm{d}{\eta }\nonumber \\&= \frac{1}{\varGamma (1-\alpha )}\sum \limits _{k=0}^{n}\int \limits _{k\delta t}^{(k+1)\delta t}\left[ \frac{U^{k+1}(x_i)-U^{k}(x_i)}{\delta t}+{\mathcal {O}}(\delta t)\right] (\uptau _{n+1}-\eta )^{-\alpha }\mathrm{d}\eta \nonumber \\&=\frac{1}{\varGamma (1-\alpha )}\sum \limits _{k=0}^{n}\left[ \frac{U^{k+1}(x_i)-U^{k}(x_i)}{\delta t}+{\mathcal {O}}(\delta t)\right] \int \limits _{k\delta t}^{(k+1)\delta t}\big (({n+1}){\delta t}-\eta \big )^{-\alpha }\mathrm{d}\eta \nonumber \\&=\frac{1}{\varGamma (1-\alpha )}\sum \limits _{k=0}^{n}\left[ \frac{U^{k+1}(x_i)-U^{k}(x_i)}{\delta t}+{\mathcal {O}}(\delta t)\right] \nonumber \\&\quad \times \left[ \frac{(n+1-k)^{1-\alpha }-{(n-k)}^{1-\alpha }}{1-\alpha }\right] (\delta t)^{1-\alpha }+{\mathcal {O}}(\delta t^{2-\alpha })\nonumber \\&=\frac{\delta t^{-\alpha }}{\varGamma (2-\alpha )}\sum \limits _{k=0}^{n}[{U^{n+1-k}(x_i)-U^{n-k}(x_i)}] [{(k+1)^{1-\alpha }-{k}^{1-\alpha }}]+{\mathcal {O}}(\delta t^{2-\alpha })\nonumber \\&=a_{\alpha }\left[ ({U^{n+1}(x_i)-U^{n}(x_i)})+\sum \limits _{k=1}^{n}b_{k}({U^{n+1-k}(x_i)-U^{n-k}(x_i)})\right] +{\mathcal {O}}(\delta t^{2-\alpha }), \end{aligned}$$
(13)

where \(a_{\alpha }=\frac{\delta t^{-\alpha }}{\varGamma (2-\alpha )},~b_k={(k+1)^{1-\alpha }-{k}^{1-\alpha }}\). The time discretization of the TFBSM is obtained by substituting Eq. (13) into (5) between two successive time steps n and \(n+1\), which yields the following scheme:

$$\begin{aligned}&a_\alpha U^{n+1}-\gamma _{1}{\nabla }^2 U^{n+1}-\gamma _{2}{\nabla } U^{n+1}+ \gamma _{3} U^{n+1} \nonumber \\&\quad = {\left\{ \begin{array}{ll} a_\alpha \left[ U^{n}-{\sum \limits _{k=1}^n b_k(U^{n+1-k}-U^{n-k})}\right] +f^{n+1},&{} n\ge 1,\\ \\ a_\alpha {U^0}+f^{1},&{}n=0, \end{array}\right. } +R_{}^{n+1}, \end{aligned}$$
(14)

where \(\nabla \) is the gradient differential operator and \(f^{n+1}=f(x,\uptau _{n+1});~n=0,1,\ldots ,M.\) In addition, the truncation error \(R_{}^{n+1}\) satisfies

$$\begin{aligned} R_{}^{n+1}(x)\le {C}\delta t^{2}, \end{aligned}$$

where C is a positive constant. The semi-discrete scheme is obtained by denoting \(u^{n}\) as the approximation of \(U^{n}\) and omitting the small term \( R_{}^{n+1}\) as:

$$\begin{aligned}&a_\alpha u^{n+1}-\gamma _{1}{\nabla }^2 u^{n+1}-\gamma _{2}{\nabla } u^{n+1}+ \gamma _{3} u^{n+1} \nonumber \\&\quad = {\left\{ \begin{array}{ll} a_\alpha \left[ u^{n}-{\sum \limits _{k=1}^n b_k(u^{n+1-k}-u^{n-k})}\right] +f^{n+1},&{} n\ge 1,\\ \\ a_\alpha {u^0}+f^{1},&{}n=0. \end{array}\right. } \end{aligned}$$
(15)
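Before moving to the spatial discretization, the temporal scheme can be checked in isolation. The sketch below (Python, an illustration assumed by this presentation rather than the authors' code) evaluates the approximation (13) for the test function \(U(\uptau )=\uptau ^2\), whose exact Caputo derivative at \(\uptau =T\) is \(2T^{2-\alpha }/\varGamma (3-\alpha )\); the observed error should behave like \({\mathcal {O}}(\delta t^{2-\alpha })\).

```python
import numpy as np
from math import gamma

def l1_caputo_at_end(U, dt, alpha):
    """Evaluate scheme (13) at tau_{n+1}, given the nodal values U^0, ..., U^{n+1}."""
    n = len(U) - 2                                            # tau_{n+1} = (n + 1) * dt
    a_alpha = dt**(-alpha) / gamma(2 - alpha)
    k = np.arange(n + 1)
    b = (k + 1)**(1 - alpha) - k**(1 - alpha)                 # b_0 = 1, b_1, ..., b_n
    diffs = np.array([U[n + 1 - j] - U[n - j] for j in k])
    return a_alpha * np.sum(b * diffs)

alpha, dt, T = 0.7, 1.0e-3, 1.0
tau = np.arange(0.0, T + dt / 2, dt)
approx = l1_caputo_at_end(tau**2, dt, alpha)                  # test function U(tau) = tau^2
exact = 2.0 * T**(2 - alpha) / gamma(3 - alpha)               # exact Caputo derivative at tau = T
print(approx, exact, abs(approx - exact))                     # error of order dt^{2 - alpha}
```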

Next, we apply meshless methods based on RBFs to discretize the spatial terms; this is described in detail in the following two subsections.

3.2 Discretization in space: the RBF meshless method

To apply the RBF approximation scheme based on Kansa's approach, we collocate N distinct points \(\lbrace { {x}}_j |j=1,\ldots ,N\rbrace \), where \({x}_1\) and \( {x}_N \) are boundary nodes and the other \((N-2)\) points are inner nodes \(\lbrace {x}_j |j=2, \ldots ,N-1\rbrace \). The numerical approximation of \(u({x}_i,\uptau _{n+1}) \) at a node of interest \({ {x}}_i\) may be expanded as:

$$\begin{aligned} u_{i}^{n+1}=u({x}_{i},\uptau _{n+1}) = \sum \limits _{j = 1}^ {N} \lambda _j^{n+1} \phi (r_{ij}) + \lambda _{N+1}^{n+1}{x}_{i}+\lambda _{N+2}^{n+1}, \end{aligned}$$
(16)

where \(\lbrace \lambda _j^{n+1}\rbrace \) are the unknown coefficients of the \((n+1)\)th time layer, \(\phi (r_{ij})\) is the radial basis function, and \( r_{ij}=|{x}_i - {{x}_j}|\). In addition to the N equations resulting from collocating Eq. (16) at the N points, two extra equations are needed, given by the following regularization conditions:

$$\begin{aligned} \sum \limits _{j = 1}^ {N} \lambda _j^{n+1}=\sum \limits _{j = 1}^ {N} \lambda _j^{n+1}{x}_j=0. \end{aligned}$$
(17)

Equations (16) and (17) together can be restated in the following matrix form:

$$\begin{aligned} \lbrace u \rbrace ^{n+1}=A\lbrace \lambda \rbrace ^{n+1}, \end{aligned}$$
(18)

where \(\lbrace u \rbrace ^{n+1}=[u_1^{n+1},\ldots ,u_N^{n+1},0,0]^T ,~~\lbrace \lambda \rbrace ^{n+1}=[\lambda _1^{n+1},\ldots ,\lambda _{N+2}^{n+1}]^T\) and the matrix \(A=(a_{ij})_{{(N+2)}\times {(N+2)}}\) is defined as:

$$\begin{aligned} A=\left[ {\begin{array}{*{20}{c}} \varPhi &{} P_{{N} \times 2}\\ {{P^T}}&{}{\mathbf{0}}_{2\times 2} \end{array}} \right] , \end{aligned}$$

where \(\varPhi =[\phi (r_{ij})]_{{N}\times {N}}\) and \(P=\left[ {\begin{array}{*{20}{c}} x_{1} &{} 1 \\ \vdots &{}\vdots \\ x_N &{} 1 \\ \end{array}} \right] _{N\times 2}.\) Collocating Eq. (15) at \(n=0\) can be written in matrix form as below:

$$\begin{aligned} B\lbrace \lambda \rbrace ^{1}=\lbrace b \rbrace ^{1}, \end{aligned}$$
(19)

in which

$$\begin{aligned} B=\left[ {\begin{array}{*{20}{c}} L(\varPhi ) &{} L(P)\\ {{P^T}}&{}{\mathbf{0}} \end{array}} \right] _{(N+2)\times (N+2)}, \end{aligned}$$

where L denotes the operator

$$\begin{aligned} L(*)= {\left\{ \begin{array}{ll} [a_{\alpha }+\gamma _{3}-\gamma _{2}{\nabla }-\gamma _{1}{\nabla }^2](*),&{} 1<i<N,\\ (*), &{} i=1~or~ N,\\ \end{array}\right. } \end{aligned}$$
(20)

and \( \lbrace b \rbrace ^{1}=[b_1^{1},\ldots ,b_N^{1},0,0]^T\), where \(b_1^{1}=g_1^{1},\quad \) \(b_N^{1}=g_2^{1} \) and \( b_i^{1}=a_{\alpha }{u_i^0}+f_{i}^{1},\quad i=2,3,\ldots ,N-1,\) with \(g_1^{n}=p(\uptau _{n})\) and \(g_2^{n}=q(\uptau _{n})\) denoting the boundary data.

In addition, for \(n\ge 1\)

$$\begin{aligned} B\lbrace \lambda \rbrace ^{n+1}=\lbrace b \rbrace ^{n+1}, \end{aligned}$$
(21)

\( \lbrace b \rbrace ^{n+1}=[b_1^{n+1},\ldots ,b_N^{n+1},0,0]^T\) is obtained from Eq. (15) as:

$$\begin{aligned} b_{i}^{n+1} = {\left\{ \begin{array}{ll} g_1^{n+1},&{} i=1,\\ \\ a_{\alpha }\left[ u_{i}^{n}-{\sum \limits _{k=1}^n b_k(u_{i}^{n+1-k}-u_{i}^{n-k})}\right] +f_{i}^{n+1},&{} 1<i<N,\\ \\ g_2^{n+1}, &{} i=N. \end{array}\right. } \end{aligned}$$
(22)

The solution can be constructed using Eq. (18) after solving the algebraic system of equations \(B\lbrace \lambda \rbrace ^{n+1}=\lbrace b \rbrace ^{n+1}\) at each time step.
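A minimal sketch of the assembly described by Eqs. (16)–(22) is given below (Python with the MQ basis; the helper names and calling conventions are assumptions of this illustration, not the authors' implementation). It builds the matrices A of (18) and B of (19)/(21) once and solves one generic time level; here `rhs_interior` holds the interior entries of (22) and `bc_left`/`bc_right` the boundary data \(g_1^{n+1}\), \(g_2^{n+1}\).

```python
import numpy as np

def kansa_matrices(x, c, a_alpha, g1, g2, g3):
    """Assemble A of Eq. (18) and B of Eqs. (19)/(21) for the MQ basis
    phi(r) = sqrt(r^2 + c^2) augmented with the linear polynomial of (16)."""
    N = len(x)
    d = x[:, None] - x[None, :]
    phi = np.sqrt(d**2 + c**2)
    phi_x = d / phi                              # d(phi)/dx at the collocation points
    phi_xx = c**2 / phi**3                       # d^2(phi)/dx^2
    P = np.column_stack([x, np.ones(N)])

    A = np.block([[phi, P], [P.T, np.zeros((2, 2))]])

    # L(*) = [a_alpha + g3 - g2 d/dx - g1 d^2/dx^2](*) at interior rows, plain evaluation at x_1, x_N
    Lphi = (a_alpha + g3) * phi - g2 * phi_x - g1 * phi_xx
    LP = (a_alpha + g3) * P
    LP[:, 0] -= g2                               # L applied to p(x) = x (its second derivative vanishes)
    Lphi[[0, -1], :] = phi[[0, -1], :]           # boundary rows: identity operator
    LP[[0, -1], :] = P[[0, -1], :]
    B = np.block([[Lphi, LP], [P.T, np.zeros((2, 2))]])
    return A, B

def advance_one_level(A, B, rhs_interior, bc_left, bc_right):
    """Solve B {lambda}^{n+1} = {b}^{n+1} (Eqs. (19)/(21)) and recover u^{n+1} from (18)."""
    b = np.concatenate([[bc_left], rhs_interior, [bc_right], [0.0, 0.0]])
    lam = np.linalg.solve(B, b)
    return (A @ lam)[:-2]                        # nodal values u^{n+1} at x_1, ..., x_N
```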

3.3 Discretization in space: the RBF-PS meshless method

Fasshauer (2005) linked the RBF collocation scheme to the pseudo-spectral (PS) approach, which is known as the RBF-PS method. Fasshauer adopted the RBF-PS scheme to approximate the Allen–Cahn model, the 2D Laplace model and the 2D Helmholtz model with piecewise boundary conditions (Fasshauer and Zhang 2007). Ferreira and Fasshauer (2006) formulated the RBF-PS scheme for analyzing plate, beam and shell problems. Roque et al. (2010) adopted the RBF-PS method for composite and sandwich plate problems. Uddin and Ali (2012) and Uddin (2013) proposed the RBF-PS method to approximate some wave-type PDEs. Now, we will develop the scheme of Dehghan et al. (2015) and Fasshauer (2005) for solving Eq. (5). First, we introduce the properties of differentiation matrices (DM). Let us assume \(\phi _{j},~j=1,2,\ldots ,N\) to be an arbitrary linearly independent set of smooth functions acting as the basis for the approximation space, and let us assume \({\chi }=\lbrace {x}_{1},{x}_{2},\ldots ,{x}_{N}\rbrace ~\) to be a collection of distinct points in \(\varOmega \). The approximate solution is assumed to be of the form below:

$$\begin{aligned} u^h{({x})}=\sum _{j=1}^{N}\lambda _{j}\phi _{j}{({x})},\quad x \in {\mathbb {R}}, \end{aligned}$$
(23)

where \(h= h_{x,\varOmega }:=\sup \limits _{x \in \varOmega } \min \limits _{1\le j \le N}\Vert x- x_j\Vert _2 \). Evaluating Eq. (23) at the nodes \(x_{i}\) yields

$$\begin{aligned} u^h{(x_{i})}=\sum _{j=1}^{N}{\lambda }_{j}\phi _{j}{({x}_{i})},\quad i=1,2,\ldots ,N. \end{aligned}$$
(24)

Equation (24) can be written in matrix-vector form as follows:

$$\begin{aligned} {\mathbf{u}}={\mathbf{A}}\lambda , \end{aligned}$$
(25)

where

$$\begin{aligned} {{\lambda }}=[\lambda _{1},\lambda _{2},\ldots ,\lambda _{N}]^T, \end{aligned}$$

and \({\mathbf{A}}\) is the evaluation matrix with elements \(~A_{ij}=\phi _{j}{(x_{i})}~\)

$$\begin{aligned} {\mathbf{u}}=[u^h{({x}_{1})},u^h{({x}_{2})},\ldots ,u^h{({x}_{N})}]^T. \end{aligned}$$

The derivative of \(u^{h}\) can be obtained by differentiating the basis functions in (23):

$$\begin{aligned} \frac{\partial u^h{({x})}}{ \partial x}=\sum _{j=1}^{N}{\lambda _{j}\frac{\partial \phi _{j}{({x})}}{ \partial x}}. \end{aligned}$$
(26)

Now, we collocate Eq. (26) at the grid nodes \({x}_{i}\) in the following form,

$$\begin{aligned} {\mathbf{u}}_{x}={\mathbf{A}}_{x}\lambda , \end{aligned}$$
(27)

where the matrix \({\mathbf{A}}_{x}\) has elements \(\frac{\partial \phi _{j}{({x}_{i})}}{\partial x}\). In fact, we need to ensure the invertibility of the evaluation matrix \(~{\mathbf{A}}~\) in order to obtain the differentiation matrix \({\mathbf{D}}\). This relies on both the basis functions chosen and the location of the grid nodes \({x_{i}}\). Based on Bochner's theorem, the invertibility of the matrix \({\mathbf{A}}\) for any set of distinct grid points \({x_{i}}\) is guaranteed when positive definite RBFs are used. Now, from (25) one gets:

$$\begin{aligned} \lambda ={\mathbf{A}}^{-1}{\mathbf{u}}. \end{aligned}$$

Based on Eq. (27) and the above result, we get:

$$\begin{aligned} {\mathbf{u}}_{x}={\mathbf{A}}_{x}{\mathbf{A}}^{-1}{\mathbf{u}}. \end{aligned}$$
(28)
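As an illustration of Eq. (28), the following sketch (Python, assuming the MQ basis on equally spaced nodes; the node count and shape parameter are illustrative choices) forms the first- and second-derivative differentiation matrices and checks them on a smooth test function.

```python
import numpy as np

N, c = 15, 0.2
x = np.linspace(0.0, 1.0, N)
d = x[:, None] - x[None, :]
A = np.sqrt(d**2 + c**2)                   # A_{ij} = phi(|x_i - x_j|) for the MQ basis
A_x = d / A                                # entries d(phi_j)/dx evaluated at x_i
A_xx = c**2 / A**3

D_x = A_x @ np.linalg.inv(A)               # first-derivative differentiation matrix, Eq. (28)
D_xx = A_xx @ np.linalg.inv(A)             # second-derivative analogue, Eq. (32)

u = np.sin(np.pi * x)
print(np.max(np.abs(D_x @ u - np.pi * np.cos(np.pi * x))))        # differentiation error
print(np.max(np.abs(D_xx @ u + np.pi**2 * np.sin(np.pi * x))))    # typically largest near the ends
```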

The approximate solution of Eq. (5) at \(x_{i}\) is written as:

$$\begin{aligned} u^{n+1}{(x_{i})}=\sum _{j=1}^{N}\lambda _{j}{\phi {(r_{ij})}},\quad i=1,2,\ldots ,N, \end{aligned}$$
(29)

Equation (29) can be expressed in the matrix form as:

$$\begin{aligned} {\mathbf{u}}^{n+1}={\mathbf{A}}\lambda , \end{aligned}$$
(30)

where

$$\begin{aligned} \lambda =(\lambda _{1},\lambda _{2},\ldots ,\lambda _{N})^{T}~~~~~~~{\mathbf{u}}^{n+1}=(u_{1}^{n+1},u_{2}^{n+1},\ldots ,u_{N}^{n+1})^{T}. \end{aligned}$$

Differentiating Eq. (29) twice with respect to x and evaluating it at the nodes \(x_{i}\), the matrix-vector form is obtained as:

$$\begin{aligned} {\mathbf{u}}^{n+1}_{xx}={\mathbf{A}}_{xx}\lambda , \end{aligned}$$
(31)

where

$$\begin{aligned} {\mathbf{u}}^{n+1}_{xx}=\left( \frac{\partial ^2{u_{1}^{n+1}}}{\partial {x}^2}, \frac{\partial ^2{u_{2}^{n+1}}}{\partial {x}^2},\ldots , \frac{\partial ^2{u_{N}^{n+1}}}{\partial {x}^2}\right) ^{T}, \end{aligned}$$

and elements of matrix \({\mathbf{A}}_{xx}\) are \(~{A}_{xx,ij}=\frac{\partial ^2{\phi {(\Vert {x_{i}-x_{j}}\Vert )}}}{\partial {x}^2}\). Regarding Eq. (30), we get:

$$\begin{aligned} \lambda ={\mathbf{A}}^{-1}{\mathbf{u}}^{n+1}, \end{aligned}$$

and Eq. (31) then gives

$$\begin{aligned} {\mathbf{u}}^{n+1}_{xx}={\mathbf{A}}_{xx}{\mathbf{A}}^{-1}{\mathbf{u}}^{n+1}, \end{aligned}$$
(32)

Now, substituting Eqs. (28) and (32) into Eq. (15) yields

$$\begin{aligned}&a_\alpha {\mathbf{u}}^{n+1}-\gamma _{1}{\mathbf{A}}_{xx}{\mathbf{A}}^{-1}{\mathbf{u}}^{n+1}-\gamma _{2}{\mathbf{A}}_{x}{\mathbf{A}}^{-1}{\mathbf{u}}^{n+1}+ \gamma _{3} {\mathbf{u}}^{n+1} \end{aligned}$$
(33)
$$\begin{aligned}&\quad ={\left\{ \begin{array}{ll} a_\alpha \left[ {\mathbf{u}}^{n}-{\sum \limits _{k=1}^n b_k({\mathbf{u}}^{n+1-k}-{\mathbf{u}}^{n-k})}\right] +f^{n+1},&{} n\ge 1,\\ \\ a_\alpha {{\mathbf{u}}^0}+f^{1},&{}n=0. \end{array}\right. } \end{aligned}$$
(34)

The above relation also can be rewritten in a compact matrix form as follows:

$$\begin{aligned} {\mathbf{D}}{\mathbf{u}}^{n+1}={\left\{ \begin{array}{ll} a_\alpha \left[ {\mathbf{u}}^{n}-{\sum \limits _{k=1}^n b_k({\mathbf{u}}^{n+1-k}-{\mathbf{u}}^{n-k})}\right] +f^{n+1},&{} n\ge 1,\\ \\ a_\alpha {{\mathbf{u}}^0}+f^{1},&{}n=0, \end{array}\right. } \end{aligned}$$

in which

$$\begin{aligned} {\mathbf{D}}_{}=(a_{\alpha }+\gamma _{3}){\mathbf{I}}-\gamma _{1}{\mathbf{A}}_{xx}{\mathbf{A}}^{-1}-\gamma _{2}{\mathbf{A}}_{x}{\mathbf{A}}^{-1}, \end{aligned}$$

where \({\mathbf{I}}\) is the identity matrix. Finally, the numerical solution is obtained by solving this linear system at each time level.
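A compact sketch of the resulting RBF-PS marching scheme is given below (Python with the MQ basis; the function handles u0, f, p and q are hypothetical placeholders for the initial data, source term and boundary data of model (5)).

```python
import numpy as np
from math import gamma

def rbf_ps_tfbsm(x, M, T, alpha, g1, g2, g3, c, u0, f, p, q):
    """March D u^{n+1} = rhs in time for model (5); x are the spatial nodes,
    u0(x) the initial data, f(x, tau) the source, p(tau)/q(tau) the boundary data."""
    N, dt = len(x), T / M
    a_alpha = dt**(-alpha) / gamma(2 - alpha)
    bk = lambda k: (k + 1)**(1 - alpha) - k**(1 - alpha)       # b_k, with b_0 = 1

    d = x[:, None] - x[None, :]
    A = np.sqrt(d**2 + c**2)                                   # MQ evaluation matrix
    A_inv = np.linalg.inv(A)
    D = (a_alpha + g3) * np.eye(N) - g1 * (c**2 / A**3) @ A_inv - g2 * (d / A) @ A_inv
    D[[0, -1], :] = 0.0
    D[0, 0] = D[-1, -1] = 1.0                                  # Dirichlet boundary rows

    u = [u0(x)]
    for n in range(M):
        tau = (n + 1) * dt
        memory = sum(bk(k) * (u[n + 1 - k] - u[n - k]) for k in range(1, n + 1))
        rhs = a_alpha * (u[n] - memory) + f(x, tau)            # right-hand side of the compact scheme
        rhs[0], rhs[-1] = p(tau), q(tau)
        u.append(np.linalg.solve(D, rhs))
    return np.array(u)                                         # row n approximates U(x, n*dt)
```

For Example 1 below, for instance, one would pass `u0 = lambda x: x**2*(1 - x)`, homogeneous boundary data `p = q = lambda tau: 0.0`, and the manufactured source term given there.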

4 Error analysis of the time-discrete scheme

To evaluate the error estimation of our approximation, we present some functional spaces endowed with the standard norms and inner products that will be used hereafter.

4.1 Notation about functional analysis

Let \(\varOmega \) be a bounded open domain in \({\mathbb {R}}^d\) and let dx be the Lebesgue measure on \({\mathbb {R}}^d\). For \(p<\infty \), we denote by \( {L}^{p}(\varOmega )\) the space of measurable functions \(u: \varOmega \longrightarrow {\mathbb {R}}\) such that \(\int \limits _{\varOmega }|u({x} )|^{p}\mathrm{d}{x} < \infty \). Recall that \( {L}^{p}(\varOmega )\) is a Banach space with the norm

$$\begin{aligned} ||u||_{L^p(\varOmega )}=\bigg (\int \limits _{\varOmega }|u({x} )|^p \mathrm{d}{x}\bigg )^\frac{1}{p}. \end{aligned}$$

The space \({L^2(\varOmega )}\) is a Hilbert space with the inner product

$$\begin{aligned} (u,v)=\int \limits _{\varOmega }u({x})v({x})\mathrm{d}{x}, \end{aligned}$$

with the endowed norm in \(L^2\),

$$\begin{aligned} ||v||_{}=[(v,v)]^{\frac{1}{2}}=\left[ \int \limits _{\varOmega }v({x} )v({x})\mathrm{d}{x} \right] ^{\frac{1}{2}}. \end{aligned}$$

Moreover we assume that \(\varOmega \) is an open domain in \({\mathbb {R}}^d\), \(\gamma =(\gamma _{1},\ldots ,\gamma _{d})\) is a d-tuple of nonnegative integers and \(|\gamma |=\sum \nolimits _{i=1}^{d}\gamma _{i}\) and let us consider

$$\begin{aligned} D^\gamma v=\dfrac{\partial ^{|\gamma |}v}{\partial x_{1}^{\gamma _{1}}\partial x_{2}^{\gamma _{2}}\ldots \partial x_{d}^{\gamma _{d}}}. \end{aligned}$$

With this regard, one can define:

$$\begin{aligned} H_{}^{1}(\varOmega )= & {} \{v \in L^{2}(\varOmega ),~\frac{\mathrm{d}v}{\mathrm{d}x}\in L^{2}(\varOmega )\}, \\ H_{0}^{1}(\varOmega )= & {} \{v \in H^{1}(\varOmega ),~v|_{\partial (\varOmega )}=0~\}, \\ H^{m}(\varOmega )= & {} \{v \in L^{2}(\varOmega ),~~D^{\gamma }v \in L^{2}(\varOmega ),~ \mathrm {for~ all ~positive~ integer~} ~|{\gamma }|\le m\}. \end{aligned}$$

Now, we introduce the inner product on the Hilbert space \(H^{m}(\varOmega )\):

$$\begin{aligned} (u,v)_{m}=\sum \limits _{|\gamma |\le m}{}\int \limits _{\varOmega }D^{\gamma }u({x})D^{\gamma }v({x})\mathrm{d}x, \end{aligned}$$

which induces the norm

$$\begin{aligned} ||u||_{H^m{(\varOmega )}}=\left( \sum \limits _{|\gamma |\le m}||D^{\gamma }u||_{L^2(\varOmega )}^2\right) ^{\frac{1}{2}}. \end{aligned}$$

The Sobolev space \( W^{1,p}(I) \) is defined to be

$$\begin{aligned} W^{1,p}(I)=\left\{ u \in L^p(I);~\exists g \in L^p(I):\int \limits _{I}u\varphi ^{'}=-\int \limits _{I}g\varphi , \forall \varphi \in C_{c}^{1}{(I)}\right\} . \end{aligned}$$

To this end, the inner products and the associated norms in \(L^{2}\) and \(H^{1}\) are defined as

$$\begin{aligned} ||v||=(v,v)^{1/2}, \quad ||v||_{1}=(v,v)_{1}^{1/2},\quad |v|_{1}=\left( \frac{\partial v}{\partial x},\frac{\partial v}{\partial x}\right) ^{1/2}, \end{aligned}$$

by making use of the inner products of \(L^{2}({\varOmega })\) and \(H^{1}({\varOmega }),\)

$$\begin{aligned} (u,v)=\int u(x) v(x)\mathrm{d}x, \quad (u,v)_{1}=(u,v)+\left( \frac{\partial u}{\partial x},\frac{\partial v}{\partial x}\right) , \end{aligned}$$

respectively.

4.2 Stability and convergence

In this section, we will comprehensively analyze the stability and convergence of the time-discrete scheme in the presented numerical solution. The relation (15) can be restated as follows:

$$\begin{aligned} \ u^{k+1}-\mu _{1}{\nabla }^2 u^{k+1}-\mu _{2}{\nabla } u^{k+1}= (1-b_{1})u^{k}+\sum \limits _{j=1}^{k-1} (b_j-b_{j+1})u^{k-j}+b_{k}u^{0}+F^{k+1},\nonumber \\ \end{aligned}$$
(35)

where  \(\mu _{1}=(a_{\alpha }+\gamma _{3})^{-1}\gamma _{1},~ \mu _{2}=(a_{\alpha }+\gamma _{3})^{-1}\gamma _{2},~F=(a_{\alpha }+\gamma _{3})^{-1}f.\) First, let us introduce three lemmas concerning the discretization of the time fractional derivative.

Lemma 1

(See Sun and Wu 2006) Let \(g(\uptau )\in C^2[0,\uptau _k],\) and \(0<\alpha <1,\) then

$$\begin{aligned} \begin{aligned}&\bigg |\frac{1}{\varGamma (1-\alpha )}\int ^{\uptau _{k}}_0\frac{g'(\uptau )}{(\uptau _{k}-\uptau )^\alpha }\mathrm{d}\uptau -\frac{\delta t^{-\alpha }}{\varGamma (2-\alpha )}\\&\qquad \times \bigg [b_{0}g(\uptau _{k})-\sum \limits _{j=1}^{k-1} (b_{k-j-1}-b_{k-j})g(\uptau _{j})-b_{k-1}g(\uptau _{0})\bigg ]\bigg |\\&\quad \le \dfrac{1}{\varGamma (2-\alpha )}\left[ \frac{1-\alpha }{12}+\frac{2^{2-\alpha }}{2-\alpha }-(1+2^{-\alpha })\right] \max \limits _{0\le \uptau \le \uptau _{k}}| g^{''}(\uptau )|\delta t^{2-\alpha } \end{aligned} \end{aligned}$$

where   \(b_j=(j+1)^{1-\alpha } -j^{1-\alpha }.\)

Proof

See Sun and Wu (2006) for the proof. \(\square \)

Lemma 2

The coefficients \(b_j~(j=0,1,2,\ldots ),\) appearing in (35) satisfy the following:

  • \(b_0=1, b_j>0,~j=0,1,2,\ldots , b_{n}\rightarrow 0 ~~as ~~n\rightarrow \infty ;\)

  • we have

    $$\begin{aligned} \begin{aligned}&b_j>b_{j+1},~j=0,1,2,\ldots ;\\&\sum \limits _{j=0}^{k-1}(b_{j}-b_{j+1})+b_{k}=(1-b_{1}) +\sum \limits _{j=1}^{k-1}(b_{j}-b_{j+1})+b_{k}=1; \end{aligned} \end{aligned}$$
  • there exists a positive constant \(C > 0\) such that

    $$\begin{aligned} \begin{aligned}&\delta t<C b_{j}\delta t^{\alpha },~j=0,1,2,\ldots ,\\&\sum \limits _{j=0}^{k}b_{j}\delta t^{1-\alpha }=\big ((k+1)\delta t\big )^{1-\alpha }\le T^{1-\alpha }. \end{aligned} \end{aligned}$$

Proof

These properties follow directly from the definition \(b_j=(j+1)^{1-\alpha } -j^{1-\alpha }\) with \(0<\alpha <1\). \(\square \)
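The properties in Lemma 2 are also easy to confirm numerically; a quick check (an assumed Python sketch) is:

```python
import numpy as np

alpha, K = 0.7, 50
j = np.arange(K + 1)
b = (j + 1)**(1 - alpha) - j**(1 - alpha)            # b_0, ..., b_K

assert abs(b[0] - 1.0) < 1e-14                       # b_0 = 1
assert np.all(b > 0) and np.all(np.diff(b) < 0)      # positive and strictly decreasing
# telescoping identity: sum_{j=0}^{K-1} (b_j - b_{j+1}) + b_K = b_0 = 1
print(np.sum(b[:-1] - b[1:]) + b[-1])
```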

Lemma 3

If \(u^k(x) \in H_{}^{1}(\varOmega )~k=0,1,\ldots ,M\) is the solution of Eq. (35), then

$$\begin{aligned} \Vert u^k \Vert _{}\le \Vert u^0 \Vert _{}+b_{k-1}^{-1}\max \limits _{0\le l \le M}||F^l||_{}. \end{aligned}$$

Proof

We will verify the result by the principle of induction. When \(k=0\), we obtain

$$\begin{aligned} u^{1}=\mu _{1}{\nabla }^2 u^{1}+\mu _{2}{\nabla } u^{1}+u^{0}+F^{1}. \end{aligned}$$
(36)

Multiplying both sides of the above equation by \(u^1\) and integrating over \(\varOmega \), we have

$$\begin{aligned} ||u^{1}||^{2}-\mu _{1}({\nabla }^2 u^{1},u^1)-\mu _{2}({\nabla } u^{1},u^1)=(u^{0},u^1)+(F^{1},u^1). \end{aligned}$$

Using the Cauchy–Schwarz inequality and \(u^k(x) \in H_{}^1(\varOmega )\) yields

$$\begin{aligned} ||u^{1}||\le ||u^{0}||_{}+ ||F^{1}||_{}\le ||u^{0}||_{}+\max _{0\le l \le M}||F^l||_{}, \end{aligned}$$

which proves the base case. Now suppose

$$\begin{aligned} ||u^{j}||\le ||u^{0}||_{}+b_{j-1}^{-1}\max _{0\le l \le M}||F^l||_{},\quad j=1,2,\ldots , k. \end{aligned}$$
(37)

Multiplying Eq. (35) by \(u^{k+1}\) and integrating on \(\varOmega \), one can conclude that

$$\begin{aligned} \begin{aligned}&||u^{k+1}||^{2}-\mu _{1}({\nabla }^2 u^{k+1},u^{k+1})-\mu _{2}({\nabla } u^{k+1},u^{k+1})=(1-b_{1})(u^{k},u^{k+1})\\&\quad +\sum \limits _{j=1}^{k-1} (b_j-b_{j+1})(u^{k-j},u^{k+1})+b_{k}(u^{0},u^{k+1})+(F^{k+1},u^{k+1}). \end{aligned} \end{aligned}$$

The use of the Cauchy–Schwarz inequality, \(u^k(x) \in H_{}^1(\varOmega )\) and \(b_{j+1}< b_j \le 1\) yields

$$\begin{aligned} ||u^{k+1}||\le (1-b_{1})||u^{k}||_{}+\sum \limits _{j=1}^{k-1} (b_j-b_{j+1})||u^{k-j}||_{}+b_{k}||u^{0}||_{}+||F^{k+1}||_{}. \end{aligned}$$
(38)

Using Eq. (37), the above relation can be stated as:

$$\begin{aligned} ||u^{j}||\le ||u^{0}||_{}+b_{j-1}^{-1}\max _{0\le l \le M}||F^l||_{}\le ||u^{0}||_{}+b_{j}^{-1}\max _{0\le l \le M}||F^l||_{}. \end{aligned}$$
(39)

Noting Lemma 2, we have \(b_{j}\le b_{i} < 1;~~ 1 \le i \le j \). Therefore, one can obtain:

$$\begin{aligned}&(1-b_{1})||u^{k}||_{}+\sum \limits _{j=1}^{k-1} (b_j-b_{j+1})||u^{k-j}||_{}=\sum \limits _{j=0}^{k-1} (b_j-b_{j+1})||u^{k-j}||_{}\nonumber \\&\quad \le \sum \limits _{j=0}^{k-1} (b_j-b_{j+1})\left[ ||u^0||_{}+b_{k-j-1}^{-1}\max _{0\le l \le M}||F^l||\right] _{}\nonumber \\&\quad \le (1-b_{k})||u^{0}||_{}+~(1-b_{k})b_{k}^{-1}\max _{0\le l \le M}||F^l||_{}\nonumber \\&\quad =(1-b_{k})||u^{0}||_{}+~(b_{k}^{-1}-1)\max _{0\le l \le M}||F^l||_{}. \end{aligned}$$
(40)

Consequently, from Eqs. (38)–(40), we obtain the following inequality:

$$\begin{aligned} ||u^{k+1}||\le ||u^{0}||_{}+b_{k}^{-1}\max \limits _{0\le l \le M}||F^l||_{}. \end{aligned}$$

Therefore, the Lemma 3 is proven by induction on k. \(\square \)

Theorem 1

The fractional implicit numerical method defined by Eq. (35) is unconditionally stable.

Proof

We suppose that \({\widehat{u}}^k({x}),~k=0,1,\ldots ,M,\) is the solution of the method (35) with the initial condition \({\widehat{u}}^0=u({x},0)\), then the error function \(\varepsilon ^k={u}^k({x})-{\widehat{u}}^k({x})\) satisfies

$$\begin{aligned} \varepsilon ^{k+1}-\mu _{1}{\nabla }^2 \varepsilon ^{k+1}-\mu _{2}{\nabla } \varepsilon ^{k+1}= (1-b_{1})\varepsilon ^{k}+\sum \limits _{j=1}^{k-1} (b_j-b_{j+1})\varepsilon ^{k-j}+b_{k}\varepsilon ^{0}, \end{aligned}$$

and \( \varepsilon ^{k}|_{{\partial \varOmega }}=0\). By virtue of Lemma 3, we obtain:

$$\begin{aligned} \Vert \varepsilon ^{k}\Vert _{} \le \Vert \varepsilon ^{0}\Vert _{},\quad ~k=0,1,\ldots ,M, \end{aligned}$$

and the proof of Theorem 1 is completed. \(\square \)

Theorem 2

Suppose that \(\{U({x},\uptau _k) \}_{k=1}^{M}\) is the exact solution of Eq. (5) and \(\{u^{k}({x}) \}_{k=1}^{M}\) is the time-discrete solution of Eq. (35) with initial condition \(u^{0}({x})=U({x},0)\). Then, we have the following error estimate:

$$\begin{aligned} || U({x},\uptau _k) -u^{k}({x})|| \le C \delta t ^{2-\alpha }, \end{aligned}$$

where C is a positive constant.

Proof

We denote the error term by \(\rho ^{k}= U({x},\uptau _k) -u^{k}({x})\) at \(\uptau =\uptau _k, k = 1,2,\ldots , M\). Subtracting Eq. (15) from Eq. (14) gives

$$\begin{aligned} \rho ^{k+1}-\mu _{1}{\nabla }^2 \rho ^{k+1}-\mu _{2}{\nabla } \rho ^{k+1}= \rho ^{k}-{\sum \limits _{j=1}^k b_j(\rho ^{k+1-j}-\rho ^{k-j})}+R^{k+1}, \end{aligned}$$

\( \rho ^0({x})=0 \) and \( \rho ^0({x})|_{{\partial \varOmega }}=0\). Regarding Lemma 3, we arrive at

$$\begin{aligned} ||\rho ^{k}||_{}\le b_{k-1}^{-1}\max \limits _{0\le l \le M} ||R^l||_{}\le C b_{k-1}^{-1}\delta t^{2}. \end{aligned}$$

Since \(b_{k-1}^{-1}\delta t^{\alpha }\) is bounded (Liu et al. 2007), we have

$$\begin{aligned} ||\rho ^{k}||=|| U({x},\uptau _{k}) -u^{k}({x})|| \le C \delta t^{2-\alpha }, \end{aligned}$$

which finishes the proof. The convergence order of the time approximation will be verified by the numerical results in the next section. \(\square \)

5 Numerical results and discussions

In this part, two examples that possess an exact solution are put forth to show the solution accuracy and the convergence order of the numerical method proposed in Sect. 3. In addition, the aforementioned method is utilized for pricing the European option under a TFBSM, one of the most interesting models in the financial market. To measure the accuracy of the method, we compute the following error norms:

$$\begin{aligned}&L_{\infty }=\max _{1\le i \le N-1}|U({{x}}_{i},T)-u({{x}}_{i},T)|,\\&\Vert {\mathrm {Error}}\Vert _\infty =\max \limits _{\small {\begin{matrix}{1\le {i}\le {N-1}}\\ {1\le {j}\le {M-1}}\end{matrix}}}| {U}(x_i,\uptau _j)-u(x_i, \uptau _j)|. \end{aligned}$$

The computational orders are checked using the following formulas (Cui 2009; De Staelen and Hendy 2017):

$$\begin{aligned} \begin{aligned} C_{1}-\mathrm {order}&=\log _{2}\left( \frac{|| L_{\infty }(2\delta t , h)||}{|| L_{\infty }(\delta t, h)||}\right) ,\\ C_{2}-\mathrm {order}&=\log _{2}\bigg (\frac{\Vert {\mathrm {Error}}\Vert _\infty (16\delta t ,2 h)}{ \Vert {\mathrm {Error}}\Vert _\infty (\delta t, h)}\bigg ), \end{aligned} \end{aligned}$$

for the time variable and the space variable, respectively.
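For instance, the temporal order \(C_1\) is obtained from errors at successive time steps as in the following sketch (Python; the error values are generated synthetically from the predicted rate purely for illustration and are not taken from the tables):

```python
import numpy as np

alpha = 0.7
dts = np.array([1/10, 1/20, 1/40, 1/80])
errs = 0.05 * dts**(2 - alpha)         # synthetic errors following the predicted rate (illustration only)

# C1-order: log2 of the ratio of L_infinity errors at time steps 2*dt and dt (same h)
orders = np.log2(errs[:-1] / errs[1:])
print(orders)                          # each entry equals 2 - alpha = 1.3
```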

It is worth noting that the selection of the optimal shape parameter in RBFs is so far generally considered an open problem. An appropriate shape parameter is determined experimentally for each type of RBF, and the optimal value of c in these experiments must be determined numerically for each individual temporal step. We would like to mention that the numerical experiments were carried out with MATLAB 7 on a Pentium IV, 2800 MHz CPU machine with 2 GB of memory.

Example 1

First, we consider the following TFBSM:

$$\begin{aligned} {\left\{ \begin{array}{ll} {}_{0}D_{\uptau }^\alpha U(x,\uptau )=\gamma _{1}\frac{{\partial }^2 U({x},\uptau )}{{\partial x}^2 }+\gamma _{2}\frac{{\partial } U(x,\uptau )}{\partial x}-\gamma _{3}U(x,\uptau )+f(x,\uptau ),\\ U(0 , \uptau )=0, U(1 ,\uptau ) = 0,\\ U(x, 0) = x^2(1-x), \end{array}\right. } \end{aligned}$$
(41)

where the source term \(f=(\frac{2\uptau ^{2-\alpha }}{\varGamma (3-\alpha )}+\frac{2\uptau ^{1-\alpha }}{\varGamma (2-\alpha )})x^2(1-x)-(\uptau +1)^2[\gamma _{1}(2-6x)+\gamma _{2}(2x-3x^2)-\gamma _{3}x^2(1-x)]\) is selected so that the exact solution of (41) is \(U = (\uptau +1)^{2}x^2(1-x)\) (De Staelen and Hendy 2017; Zhang et al. 2016b). The related parameters are chosen as \(r = 0.05\), \(D=0\), \(\sigma =0.25\), \(\gamma _{1}= \frac{1}{2}\sigma ^{2}\), \(\gamma _{2}= r -\gamma _{1}-D\), \( \gamma _{3} = r\) and \(T = 1\). The obtained results are displayed in Tables 2, 3, 4 and 5.

Table 2 Time order of convergence (TCO) by MQ-RBF at \(T=1,\) \(N=100\) and \(\alpha =0.7\) for Example 1
Table 3 Time order of convergence (TCO) by MQ-RBF at \(T=1,\) \(N=150\) and \(\alpha =0.7\) for Example 1
Table 4 Space order of convergence with \(c=0.5\) and MQ-RBF for Example 1
Table 5 The condition number and errors obtained using proposed schemes with \(\delta t=1/100\) for Example 1
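For reproducibility, the exact solution and manufactured source term of Example 1 can be coded directly as below (a Python sketch under the stated parameter values); these handles can be passed, for instance, to the RBF-PS marching sketch of Sect. 3.3 together with \(u(x)=x^2(1-x)\) and homogeneous boundary data.

```python
import numpy as np
from math import gamma

alpha, r, D, sigma, T = 0.7, 0.05, 0.0, 0.25, 1.0
g1 = 0.5 * sigma**2
g2 = r - D - g1
g3 = r

def U_exact(x, tau):
    """Exact solution of Example 1."""
    return (tau + 1.0)**2 * x**2 * (1.0 - x)

def f(x, tau):
    """Manufactured source term of Example 1."""
    caputo = (2*tau**(2 - alpha)/gamma(3 - alpha) + 2*tau**(1 - alpha)/gamma(2 - alpha)) * x**2 * (1 - x)
    spatial = (tau + 1.0)**2 * (g1*(2 - 6*x) + g2*(2*x - 3*x**2) - g3*x**2*(1 - x))
    return caputo - spatial

x = np.linspace(0.0, 1.0, 5)
print(U_exact(x, T), f(x, 0.5))
```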

Example 2

Consider the following TFBSM with non-homogeneous boundary conditions:

$$\begin{aligned} {\left\{ \begin{array}{ll} {}_{0}D_{\uptau }^\alpha U(x,\uptau )=\gamma _{1}\frac{{\partial }^2 U({x},\uptau )}{{\partial x}^2 }+\gamma _{2}\frac{{\partial } U(x,\uptau )}{\partial x}-\gamma _{3}U(x,\uptau )+f(x,\uptau ),\\ U(0 , \uptau )=(\uptau +1)^2, U(1 ,\uptau ) = 3(\uptau +1)^2,\\ U(x, 0) = x^3+x^2+1, \end{array}\right. } \end{aligned}$$
(42)

such that the source term \(f=(\frac{2\uptau ^{2-\alpha }}{\varGamma (3-\alpha )}+\frac{2\uptau ^{1-\alpha }}{\varGamma (2-\alpha )})( x^3+x^2+1)-(\uptau +1)^{2}[\gamma _{1}(6x+2)+\gamma _{2}(3x^2+2x)-\gamma _{3}(x^3+x^2+1)]\) is selected so that the exact solution of (42) is \(U = (\uptau +1)^{2}(x^3 +x^2 +1)\) (De Staelen and Hendy 2017; Zhang et al. 2016b). The related parameters are chosen as \(r = 0.5\), \(D=0\), \(\gamma _{1} = 1\), \(\gamma _{2} = r -\gamma _{1}-D\), \( \gamma _{3} = r\) and \(T = 1\). The obtained results are shown in Tables 6, 7, 8 and 9.

Table 6 Time order of convergence (TCO) with MQ-RBF at \(T=1,\) \(N=100\) and \(\alpha =0.7\) for Example 2
Table 7 Time order of convergence (TCO) with MQ-RBF at \(T=1,\) \(N=150\) and \(\alpha =0.7\) for Example 2
Table 8 Space order of convergence with \(c=0.5\) and MQ-RBF for Example 2
Table 9 The condition number and errors obtained using proposed schemes with \(\delta t=1/100\) of Example 2

Tables 2, 3, 4, 5, 6, 7, 8 and 9 illustrate the numerical errors, comparisons and their corresponding computational orders, which indicate the high accuracy and efficacy of the proposed methods. As mentioned in Sect. 3, both examples confirm the theoretical results established in Theorem 2. Based on the comprehensive comparisons in Tables 2, 3, 6 and 7, it is concluded that the numerical results are in good agreement with the implicit finite difference method (Zhang et al. 2016b) and the compact finite difference method (De Staelen and Hendy 2017). The CPU time consumed by the scheme is reported for various temporal discretization steps; the scheme gives highly accurate results with very low CPU time. In addition, as shown in Tables 4 and 8, we conclude that the convergence order of our proposed numerical approach in space is in good agreement with De Staelen and Hendy (2017). According to Tables 5 and 9, the RBF collocation technique has an error close to that of the RBF-PS collocation method, although the RBF-PS collocation technique has a better conditioned coefficient matrix than the RBF collocation technique. It is worthy of mention that “Cond(M)” denotes the condition number of the coefficient matrix of the proposed methods.

Fig. 1

Comparison of numerical solution at \(\alpha =1\) with the B–S solution (left) and different \(\alpha \) (right) for \(h=0.05,\) \(\delta t=0.01, c=0.9\)

Example 3

Lastly, we consider the following TFBSM governing a European option (Kumar et al. 2016):

$$\begin{aligned} \frac{{\partial }^\alpha C(S,t)}{{\partial t}^\alpha }=\frac{{\partial }^2 C(S,t)}{{\partial S}^2 }+(k-1)\frac{\partial C(S,t)}{\partial S} -kC(S,t),\quad 0\le S\le 2,\quad t \in [0,2], \end{aligned}$$

with terminal (payoff) condition \(C(S,T)=v(S)=\max (e^{S}-1, 0)\). It is to be noted that this equation contains just two dimensionless parameters: \(k=\frac{2r}{\sigma ^2}\), which represents the balance between the interest rate and the variability of stock returns, and the dimensionless time to expiry \(\frac{1}{2}\sigma ^2 T\); in contrast, the original statement of the problem involves four dimensional parameters, K, T, \(\sigma ^2 \) and r. When \(\alpha =1\), the analytical solution of this model is

$$\begin{aligned} C(S,t)=\max (e^S,0)(1-e^{-kt})+\max (e^S-1,0)e^{-kt}. \end{aligned}$$

We consider the vanilla call option with parameters \(\sigma =0.2,\) \(r=0.04,\) \( D=0\). We solve this model with the method presented in this paper, with the values of h, \(\delta t\) and c given in the figure captions, at \(t=2\). In the case where \(\alpha =1\), the approximation in this paper differs little from the corresponding option price governed by the B–S model. A comparison between the numerical solution of this paper and the corresponding B–S solution is shown in Fig. 1. Furthermore, it is seen in Fig. 1 that as \(\alpha \) approaches 1, the numerical solution of the fractional partial differential equation determined by the MQ-RBF scheme converges to the solution of the integer-order partial differential equation. The graphs of the approximate solution corresponding to \(\alpha =1\) and \(\alpha =0.8\) with \(k=2\) are displayed in Fig. 2.
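For reference, the payoff and the analytical solution quoted above for \(\alpha =1\) can be evaluated as in the following sketch (Python; the value \(k=2\) corresponds to the stated parameters \(r=0.04\) and \(\sigma =0.2\)).

```python
import numpy as np

sigma, r = 0.2, 0.04
k = 2 * r / sigma**2                                        # k = 2r / sigma^2 = 2

payoff = lambda S: np.maximum(np.exp(S) - 1.0, 0.0)         # terminal (payoff) condition v(S)

def C_classical(S, t):
    """Analytical solution quoted for alpha = 1 (classical limit of Example 3)."""
    return (np.maximum(np.exp(S), 0.0) * (1.0 - np.exp(-k*t))
            + np.maximum(np.exp(S) - 1.0, 0.0) * np.exp(-k*t))

S = np.linspace(0.0, 2.0, 5)
print(payoff(S), C_classical(S, 2.0))
```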

Fig. 2

Solutions of call option for \(\alpha =1\) (left) and \(\alpha =0.8\) (right) with \(h=0.05,\) \(\delta t=0.01, c=0.9\)

6 Conclusion

The TFBSM can be interpreted as a generalization of the classical B–S model in the area of mathematical finance. The “non-local” characteristic of the fractional order derivative, which makes the variation rate of the function near a point depend on the behavior of the function over the entire computational domain rather than just near the point itself, makes obtaining both exact and numerical solutions more difficult than for the integer-order model. In the present study, a variable transformation is used to obtain a Caputo fractional derivative from the modified Riemann–Liouville fractional derivative. First, a description of how the problem is discretized in the temporal sense via the finite difference technique (of order \(2-\alpha \)) is provided. Then, a fully discrete scheme is obtained using the meshless method based on the RBF collocation method and the RBF-PS method. It is worth recalling that the renowned RBF-PS scheme is none other than a generalized finite difference method, and that the numerical outcomes of the RBF collocation method and the RBF-PS method are equivalent; however, the condition number of the coefficient matrix of the RBF-PS method is smaller than that of the RBF collocation technique. Moreover, a discussion of the convergence analysis of the present technique is given along with the convergence rate. To demonstrate the convergence order and accuracy of the numerical technique, two numerical examples that have analytical solutions are chosen. The experimental data show that the obtained results are consistent with the theoretical analysis. In conclusion, the TFBSM and the proposed numerical method are utilized for the pricing of European options from an application-based viewpoint. It is the belief of the authors that the numerical techniques proposed herein may also be applied in other similar fractional models to price various European options in the fractional B–S market.