1 Introduction

In recent years, fractional partial differential equations have become powerful tools for describing many phenomena in applied science, such as anomalous diffusion transport, fluid flow in porous materials, electrical conductance of biological systems, and signal processing [1,2,3]. To some extent, their appearance and development make up for the shortcomings of classical integer-order partial differential equations, because fractional derivatives can describe the long-memory and nonlocal characteristics of many anomalous processes.

Against this background, a variety of fractional Black–Scholes models have been proposed. As is well known, it is generally difficult to obtain analytical solutions of fractional partial differential equations due to the nonlocal property of fractional derivatives. Therefore, many researchers have developed numerical methods for both the spatial-fractional Black–Scholes model [4,5,6,7] and the time-fractional Black–Scholes model [8,9,10,11]. In this paper, we consider the following time-fractional Black–Scholes model [11]

$$\begin{aligned} \left\{ \begin{array}{l} \dfrac{\partial ^\alpha U(S,\tau )}{\partial \tau ^\alpha } + \dfrac{1}{2}\sigma ^2 S^2 \dfrac{\partial ^2 U(S,\tau )}{\partial S^2 }+ rS\dfrac{\partial U(S,\tau )}{\partial S} - rU(S,\tau ) = 0,\\ \\ \quad (S,\tau )\in (0,\infty )\times (0,T), \\ U(S,T) = z(S),\\ U(0,\tau ) = p(\tau ),\quad U(\infty ,\tau ) = q(\tau ), \\ \end{array} \right. \end{aligned}$$
(1)

where \(U(S,\tau )\) denotes the price of an option, S is the asset price, \(\tau \) is the current time, \(T > 0\) is the expiry time, \(\sigma > 0\) is the volatility of the underlying asset, and \(r > 0\) is the risk-free interest rate. Here \(\frac{{\partial ^\alpha U(S,\tau )}}{{\partial \tau ^\alpha }}\) is the modified right Riemann–Liouville fractional derivative of order \(\alpha \,(0 < \alpha \le 1)\) defined by

$$\begin{aligned} \frac{\partial ^\alpha U(S,\tau )}{\partial \tau ^\alpha }=\left\{ \begin{array}{l} \dfrac{1}{\Gamma (1 - \alpha ) } \dfrac{\mathrm {d}}{\mathrm {d}\tau }\displaystyle {\int _\tau ^T \dfrac{U(S,\chi )-U(S,T)}{ (\chi -\tau )^ \alpha }\,\mathrm {d}\chi },\,\,0< \alpha < 1, \\ \\ \dfrac{\partial U(S,\tau )}{\partial \tau },\, \alpha =1. \\ \end{array}\right. \end{aligned}$$

Model (1) reduces to the classical Black–Scholes model when \(\alpha =1\).

For the time-fractional Black–Scholes equation governing European put options, Zhang et al. [12] presented an implicit difference scheme that is \((2-\alpha)\)-order accurate in time and second-order accurate in space. Koleva and Vulkov [13] developed a weighted finite difference method with first-order temporal accuracy and second-order spatial accuracy for solving a time-fractional Black–Scholes equation. In order to price American options, Zhou and Gao [14] gave a Laplace transform method and a boundary-searching finite difference method for a free-boundary time-fractional Black–Scholes equation; the method is \((2-\alpha)\)-order convergent with respect to the time variable and second-order convergent with respect to the space variable. Cen et al. [15] derived a numerical technique for problem (1) by applying an integral discretization scheme in the time direction and a central difference scheme for the spatial discretization; the approximate solution converges to the exact solution with first-order accuracy in time and second-order accuracy in space. From the above literature, it appears that the convergence accuracy of these methods is relatively low. Thus, based on the work of [12], Staelen and Hendy [16] constructed an implicit difference scheme with \((2-\alpha)\)-order convergence in time and fourth-order convergence in space. Tian et al. [17] derived three different compact finite difference schemes for the time-fractional Black–Scholes model governing European option pricing, in which, by employing a Padé approximation for the space discretization, the spatial convergence accuracy of all three algorithms is improved to fourth order, while the temporal convergence orders are \(2-\alpha \), 2 and \(3-\alpha \), respectively.

As can be seen from the previous work, the numerical methods for the time-fractional Black–Scholes model have mostly been based on finite differences. In [18], Roul described a quintic spline collocation method for model (1); the technique achieves \((2-\alpha)\)-order accuracy in time and fourth-order accuracy in space.

Quadratic B-spline basis functions have an attractive property: they possess maximum smoothness in the quadratic spline space. Furthermore, compared with finite difference methods, the quadratic spline collocation (QSC) method yields an approximation at any point of the solution domain, whereas a finite difference method provides approximations only at the grid points. Thus the QSC method is an effective technique for approximating the solutions of differential equations. However, applications of the QSC method to fractional diffusion equations are still limited [19,20,21,22].

In this paper, we establish a compact QSC method for the numerical solution of the time-fractional Black–Scholes model (1) that retains fourth-order accuracy in space while improving the temporal accuracy to order \(3-\alpha \). The outline of this paper is as follows. In Sect. 2, we transform the time-fractional Black–Scholes model into an equivalent time-fractional sub-diffusion model by an exponential transformation. In Sect. 3, the quadratic spline space, its B-spline basis and some interpolation results are recalled. In Sect. 4, a compact QSC method with temporal accuracy of order \(3-\alpha \) and spatial accuracy of fourth order is derived for solving the problem. In Sect. 5, the uniqueness of the solution of the collocation equations and the convergence of the new scheme are proved. In Sect. 6, numerical examples are carried out to confirm the high accuracy and efficiency of the proposed technique. A conclusion is given in Sect. 7.

2 Transformation of the time-fractional Black–Scholes Model

For model (1), we introduce the following variable transformations:

$$\begin{aligned} S = e^x ,\tau = T -t ,V(x,t ) = U(e^x ,T - t ). \end{aligned}$$

According to Zhang et al. [12], the modified right R-L fractional derivative in the equation can be transformed to the following Caputo form:

$$\begin{aligned} {}_0^C D_t ^\alpha V(x,t)=\dfrac{{1}}{{\Gamma (1 - \alpha )}}\int _0^t \dfrac{{\partial V(x,\zeta )}}{{\partial \zeta }}(t-\zeta )^{-\alpha }\,\mathrm {d}\zeta . \end{aligned}$$

Thus problem (1) can be expressed as

$$\begin{aligned} \left\{ {\begin{array}{l} {{}_0^C D_t ^\alpha V(x,t ) - \dfrac{1}{2}\sigma ^2 V_{xx} (x,t ) - \left( r - \dfrac{1}{2}\sigma ^2 \right) V_x (x,t ) + rV(x,t ) = 0,}\\ {(x,t)\in (-\infty ,\infty )\times (0,T],} \\ {V(x,0) = z(x),} \\ {V(-\infty ,t ) = p(t),\,V(+\infty ,t ) = q(t) .} \\ \end{array}} \right. \end{aligned}$$
(2)

Truncating the original unbounded domain to a finite interval \([a,b]\) and adding a source term \(f(x,t)\) to the right-hand side of the equation (without loss of generality), (2) can be rewritten as follows:

$$\begin{aligned} \left\{ {\begin{array}{l} {{}_0^C D_t ^\alpha V(x,t) - \dfrac{1}{2}\sigma ^2 V_{xx} (x,t ) - \left( r - \dfrac{1}{2}\sigma ^2 \right) V_x (x,t ) + rV(x,t ) = f(x,t ),}\\ { a< x< b,\;0< t \le T,} \\ {V(x,0) = z(x),\quad a< x < b,} \\ {V(a,t ) = p(t) ,\quad V(b,t ) = q(t),\quad 0 \le t \le T.} \\ \end{array}} \right. \end{aligned}$$
(3)

Multiplying both sides of the first equation in model (3) by \(\frac{2}{{\sigma ^2 }}\), we have

$$\begin{aligned} \frac{2}{{\sigma ^2 }}{}_0^C D_t ^\alpha V(x,t ) - V_{xx} (x,t ) + \left( 1 - \frac{{2r}}{{\sigma ^2 }}\right) V_x (x,t ) + \frac{{2r}}{{\sigma ^2 }}V(x,t ) = \frac{2}{{\sigma ^2 }}f(x,t ). \end{aligned}$$
(4)

Letting \(\beta = 1 - \frac{{2r}}{{\sigma ^2 }}\) and introducing an exponential transformation similar to that of Liao (see [23]):

$$\begin{aligned} V(x,t ) = e^{\frac{1}{2}\displaystyle \int _0^x {\beta ds} } \cdot v(x,t ) = e^{\frac{1}{2}\beta x} \cdot v(x,t ), \end{aligned}$$

we can eliminate the convection term in Eq. (4) and transform it into

$$\begin{aligned} {}_0^C D_t ^\alpha v(x,t ) - \dfrac{{\sigma ^2 }}{2}\dfrac{{\partial ^2 v(x,t )}}{{\partial x^2 }} + \left[ \dfrac{1}{{2\sigma ^2 }}\left( r - \dfrac{{\sigma ^2 }}{2}\right) ^2 + r\right] v(x,t ) = f(x,t ) \cdot e^{ - \frac{1}{2}\beta x}. \end{aligned}$$

To simplify the above equation, denote \(\frac{{\sigma ^2 }}{2} = \lambda \) and \(\frac{1}{{2\sigma ^2 }}(r - \frac{{\sigma ^2 }}{2})^2 + r = w\); it follows immediately that \({\lambda>0} ,\;w>0 \) from the meaning of the model parameters. Thus, (3) leads to

$$\begin{aligned} \left\{ {\begin{array}{l} {{}_0^C D_t ^\alpha v(x,t ) - \lambda \dfrac{\partial ^2 v(x,t)}{\partial x^2} +w v(x,t)=f(x,t)\cdot e^{-\frac{1}{2}\beta x},a< x< b,\;0< t \le T,}\\ \\ {v(x,0) = z(x)\cdot e^{-\frac{1}{2}\beta x},\quad a< x < b,} \\ {v(a,t) = p(t)\cdot e^{-\frac{1}{2}\beta a},\quad v(b,t) = q(t)\cdot e^{-\frac{1}{2}\beta b},\quad 0 \le t \le T.} \\ \end{array}} \right. \end{aligned}$$
(5)

For convenience, we let \(G(x,t)=f(x,t)\cdot e^{-\frac{1}{2}\beta x} \), \(\phi (x)=z(x)\cdot e^{-\frac{1}{2}\beta x} \), \(\varphi (t)=p(t)\cdot e^{-\frac{1}{2}\beta a} \), \(\psi (t)=q(t)\cdot e^{-\frac{1}{2}\beta b}\). Therefore (5) can be represented as the following time-fractional sub-diffusion model

$$\begin{aligned} {}_0^C D_t ^\alpha v(x,t ) - \lambda \dfrac{\partial ^2 v(x,t)}{\partial x^2} +w v(x,t)=G(x,t),\quad a< x< b,\;0 < t \le T, \end{aligned}$$
(6)

subject to the initial condition:

$$\begin{aligned} v(x,0) = \phi (x),\quad a< x < b, \end{aligned}$$
(7)

and the boundary conditions:

$$\begin{aligned} v(a,t) = \varphi (t) ,\quad v(b,t) =\psi (t),\quad 0 \le t \le T. \end{aligned}$$
(8)
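
To make the chain of substitutions in this section concrete, the following Python sketch (our illustration, not part of the original derivation; the function names are ours) recovers the option price \(U(S,\tau )\) from the transformed unknown \(v(x,t)\) through \(S=e^x\), \(\tau =T-t\) and \(V=e^{\frac{1}{2}\beta x}v\).

```python
import numpy as np

def to_log_price(S):
    """x = ln S, the log-price variable introduced by the substitution S = e^x."""
    return np.log(S)

def option_price_from_v(v, x, t, T, r, sigma):
    """Recover (S, tau, U) from the transformed unknown v(x, t).

    The chain of substitutions is S = e^x, tau = T - t,
    V(x, t) = U(e^x, T - t) and V(x, t) = exp(beta*x/2) * v(x, t)
    with beta = 1 - 2r/sigma**2.
    """
    beta = 1.0 - 2.0 * r / sigma**2
    S = np.exp(x)
    tau = T - t
    U = np.exp(0.5 * beta * x) * v
    return S, tau, U

# tiny usage example with made-up numbers
if __name__ == "__main__":
    x = np.linspace(-2.3, 4.6, 5)       # corresponds to S in [0.1, 100]
    v = np.ones_like(x)                 # placeholder values of v(x, t)
    S, tau, U = option_price_from_v(v, x, t=0.4, T=1.0, r=0.05, sigma=0.55)
    print(S, tau, U)
```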

3 Preliminaries

For positive integers \(N_{t}\) and \(N_{h}\), let \(t_{n} = (n-1)\cdot \Delta t,\ n=1,2,\ldots ,N_{t}+1\), and \(x_{i} = a+(i-1)\cdot h,\ i=1,2,\ldots ,N_{h}+1\), where \(\Delta t = \dfrac{T}{N_{t}}\) and \( h = \dfrac{b-a}{N_{h}}\) are the time step size and the spatial step size, respectively.

We denote by \(P^2([x_i,x_{i+1}])\) the set of quadratic polynomials on \([x_i,x_{i+1}]\), and define

$$\begin{aligned} S_2=\{s\in C^{1}([a,b]) \mid s\in P^2([x_i,x_{i+1}]),i=1,2,\ldots ,N_{h}\} \end{aligned}$$

as the space of quadratic splines.

Let

$$\begin{aligned} u(x)=\left\{ \begin{array}{ll} x^2,&{}\quad x \in [0,1], \\ -3+6x-2x^2,&{}\quad x \in [1,2], \\ 9-6x+x^2,&{}\quad x \in [2,3], \\ 0,&{}\quad elsewhere,\\ \end{array}\right. \end{aligned}$$

be the quadratic B-spline function, and \(\{B_k\}_{k=1}^{N_{h}+2}\) with

$$\begin{aligned} \left\{ \begin{array}{l} B_2(x)=\dfrac{1}{2}u\left( \dfrac{x-a}{h}+1\right) -\dfrac{1}{2}u\left( \dfrac{x-a}{h}+2\right) ,\\ [.1in] B_k(x)=\dfrac{1}{2}u\left( \dfrac{x-a}{h}-k+3\right) ,\, k=1,3,\ldots ,N_{h},N_{h}+2, \\ B_{N_{h}+1}(x)=\dfrac{1}{2}u\left( \dfrac{x-a}{h}-N_{h}+2\right) -\dfrac{1}{2}u\left( \dfrac{x-a}{h}-N_{h}+1\right) , \\ \end{array}\right. \end{aligned}$$

can be chosen as a set of the basis functions of the space \(S_2\).

Take the collocation points \(\{\eta _{i}\}_{i=1}^{N_{h}+2}\) as \(\eta _{1}=a\), \(\eta _{N_{h}+2}=b\) and \(\eta _{i}=a+(i-\frac{3}{2})h\) for \(i=2,3,\ldots ,N_{h}+1\), i.e. the midpoints of the subintervals. By direct calculation of the quadratic B-spline basis functions and their second-order derivatives at these collocation points, we obtain the following conclusions.

Proposition 1

  1. (1)

    For the basis function \(B_1(x)\), we have

    $$\begin{aligned} \begin{matrix} B_1(\eta _{i})=\left\{ \begin{array}{ll} \dfrac{1}{2},&{}\quad i=1, \\ &{} \\ \dfrac{1}{8},&{} \quad i=2, \\ &{} \\ 0,&{}\quad i=3,\ldots ,N_{h}+2, \\ \end{array}\right. &{}B_1^{\prime \prime }(\eta _{i})=\left\{ \begin{array}{ll} \dfrac{1}{h^2},&{}\quad i=2, \\ 0,&{}\quad i=3,\ldots ,N_{h}+1. \\ \end{array}\right. \end{matrix} \end{aligned}$$
  2. (2)

    For the basis function \(B_2(x)\), we have

    $$\begin{aligned} \begin{matrix} B_2(\eta _{i})=\left\{ \begin{array}{ll} \dfrac{5}{8},&{} \quad i=2, \\ &{} \\ \dfrac{1}{8},&{}\quad i=3, \\ &{} \\ 0,&{}i=1,4,\ldots ,N_{h}+2, \\ \end{array}\right. &{}B_2^{\prime \prime }(\eta _{i})=\left\{ \begin{array}{ll} -\dfrac{3}{h^2},&{}\quad i=2, \\ \dfrac{1}{h^2},&{}\quad i=3, \\ 0,&{}i=4,\ldots ,N_{h}+1. \\ \end{array}\right. \end{matrix} \end{aligned}$$
  3. (3)

    For the basis function \(B_k(x)\) with \(k=3,\ldots ,N_{h}\), we have

    $$\begin{aligned}&\begin{matrix} B_k(\eta _{i})=\left\{ \begin{array}{ll} \dfrac{1}{8},&{}\quad |i-k|=1, \\ &{} \\ \dfrac{3}{4},&{}\quad i=k, \\ &{} \\ 0,&{}\quad else, \\ \end{array}\right.&i=1,2,\ldots ,N_{h}+2, \end{matrix}\\&\begin{matrix} B_k^{\prime \prime }(\eta _{i})=\left\{ \begin{array}{ll} \dfrac{1}{h^2},&{}\quad |i-k|=1, \\ -\dfrac{2}{h^2},&{}\quad i=k, \\ 0,&{}\quad else, \\ \end{array}\right.&i=2,\ldots ,N_{h}+1. \end{matrix} \end{aligned}$$
  4. (4)

    For the basis function \(B_{N_{h}+1}(x)\), we have

    $$\begin{aligned} B_{N_{h}+1}(\eta _{i})= & {} \left\{ \begin{array}{ll} \dfrac{1}{8},&{}\quad i=N_{h}, \\ &{} \\ \dfrac{5}{8},&{} \quad i=N_{h}+1, \\ &{} \\ 0,&{}\quad i=1,\ldots ,N_{h}-1, N_{h}+2, \\ \end{array}\right. \\ B_{N_{h}+1}^{\prime \prime }(\eta _{i})= & {} \left\{ \begin{array}{ll} \dfrac{1}{h^2},&{}\quad i=N_{h},\\ &{} \\ -\dfrac{3}{h^2},&{}\quad i=N_{h}+1,\\ &{} \\ 0,&{}\quad i=1,\ldots ,N_{h}-1. \\ \end{array}\right. \end{aligned}$$
  5. (5)

    For the basis function \(B_{N_{h}+2}(x)\), we have

    $$\begin{aligned} \begin{matrix} B_{N_{h}+2}(\eta _{i})=\left\{ \begin{array}{ll} \dfrac{1}{8},&{}\quad i=N_{h}+1, \\ &{} \\ \dfrac{1}{2},&{} \quad i=N_{h}+2, \\ &{} \\ 0,&{}\quad i=1,\ldots ,N_{h}, \\ \end{array}\right. &{}B_{N_{h}+2}^{\prime \prime }(\eta _{i})=\left\{ \begin{array}{ll} \dfrac{1}{h^2},&{}\quad i=N_{h}+1, \\ 0,&{}\quad i=2,\ldots ,N_{h}. \\ \end{array}\right. \end{matrix} \end{aligned}$$
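
The values in Proposition 1 are straightforward to check numerically. The sketch below (our illustration, assuming the collocation points are the interval endpoints and subinterval midpoints as described above) evaluates a few of the entries \(B_k(\eta _{i})\).

```python
import numpy as np

def u(x):
    """Quadratic B-spline mother function on [0, 3] (piecewise definition above)."""
    x = np.asarray(x, dtype=float)
    return np.where((0 <= x) & (x <= 1), x**2,
           np.where((1 < x) & (x <= 2), -3 + 6*x - 2*x**2,
           np.where((2 < x) & (x <= 3), 9 - 6*x + x**2, 0.0)))

def basis(k, x, a, h, Nh):
    """B_k(x), k = 1,...,Nh+2, following the definition of {B_k} above."""
    s = (x - a) / h
    if k == 2:
        return 0.5*u(s + 1) - 0.5*u(s + 2)
    if k == Nh + 1:
        return 0.5*u(s - Nh + 2) - 0.5*u(s - Nh + 1)
    return 0.5*u(s - k + 3)

a, b, Nh = 0.0, 1.0, 8
h = (b - a) / Nh
# collocation points: endpoints plus midpoints of the subintervals
eta = np.concatenate(([a], a + (np.arange(2, Nh + 2) - 1.5)*h, [b]))

# spot checks against Proposition 1
print(basis(1, eta[0], a, h, Nh))   # B_1(eta_1) -> 0.5
print(basis(5, eta[4], a, h, Nh))   # B_5(eta_5) -> 0.75
print(basis(5, eta[5], a, h, Nh))   # B_5(eta_6) -> 0.125
```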

For quadratic spline interpolation, some known results (see [25, 26]) are given as follows.

Lemma 1

For a function \(v(x)\in C^4([a,b])\), let \(v_s(x)\) be the quadratic spline interpolant of \(v(x)\) such that

$$\begin{aligned}&v_s(\eta _{i})=v(\eta _{i}),i=2,3,\ldots ,N_{h}+1; \end{aligned}$$
(9)
$$\begin{aligned}&v_s(\eta _{i})=v(\eta _{i})-\dfrac{h^4}{128}\dfrac{\partial ^4 v}{\partial x^4}(\eta _{i}),i=1,N_{h}+2; \end{aligned}$$
(10)
$$\begin{aligned}&\dfrac{\partial ^2 v_s}{\partial x^2}(\eta _{i})=\dfrac{\partial ^2 v}{\partial x^2}(\eta _{i})-\dfrac{h^2}{24}\dfrac{\partial ^4 v}{\partial x^4}(\eta _{i})+O(h^4),i=2,3,\ldots ,N_{h}+1; \end{aligned}$$
(11)
$$\begin{aligned}&\Vert v(\eta )-v_s(\eta )\Vert _{\infty }=O(h^4), \end{aligned}$$
(12)

where \(\eta =(\eta _{1},\eta _{2},\ldots ,\eta _{N_{h}+2})^{\top }\), \(\Vert v-v_s\Vert _{\infty }=\mathrm{max}\{|v(x)-v_s(x)|,x\in [a,b]\}\).

4 Compact QSC method for the time-fractional Black–Scholes model

The main purpose of this section is to construct a new higher order numerical method for problem (6)–(8).

4.1 Time discretization

At time \(t=t_{n+1}(n=1,2,\ldots , N_{t})\), Eq. (6) can be expressed as

$$\begin{aligned} {}_0^C D_t ^\alpha v(x,t_{n+1} ) - \lambda \dfrac{\partial ^2 v(x,t_{n+1})}{\partial x^2} +w v(x,t_{n+1})=G(x,t_{n+1}),\quad a< x < b{.} \end{aligned}$$
(13)

Using the \(L1 - 2\) formula (see [24]), the Caputo time-fractional derivative in the above equation is discretized as

$$\begin{aligned} {}_0^C D_t ^\alpha v(x, t _{n+1} )= & {} \dfrac{\Delta t^{-\alpha }}{\Gamma (2-\alpha )}\Big [\tilde{c}_1^{(\alpha )} v(x, t _{n+1})-\sum \limits _{k=2}^{n}(\tilde{c}_{n-k+1}^{(\alpha )}-\tilde{c}_{n-k+2}^{(\alpha )})v(x,t_{k})\nonumber \\&-\,\tilde{c}_{n}^{(\alpha )}v(x,t_1)\Big ]+O(\Delta t^{3-\alpha }), \end{aligned}$$
(14)

where the truncation error term \(O(\Delta t^{3-\alpha })\) holds under the assumption that \(v(x,\cdot )\in C^3([0,T])\).

In Eq. (14), when \(n=1\),

$$\begin{aligned} \tilde{c}_1^{(\alpha )} = \tilde{a}_1^{(\alpha )} = 1, \end{aligned}$$

and when \(n \ge 2\)

$$\begin{aligned} \tilde{c}_j^{(\alpha )} = \left\{ {\begin{array}{ll} {\tilde{a}_1^{(\alpha )} + \tilde{b}_1^{(\alpha )} ,} &{} \quad {j = 1,} \\ &{}\\ {\tilde{a}_{j}^{(\alpha )} + \tilde{b}_{j}^{(\alpha )} - \tilde{b}_{j-1}^{(\alpha )} ,} &{} \quad {2 \le j \le n - 1,} \\ {\tilde{a}_{j}^{(\alpha )} - \tilde{b}_{j-1}^{(\alpha )} ,} &{} \quad {j = n}, \\ \end{array}} \right. \end{aligned}$$

in which

$$\begin{aligned} \tilde{a}_l^{(\alpha )}= & {} l^{1 - \alpha } - (l-1)^{1 - \alpha } ,\quad 1 \le l \le n,\\ \tilde{b}_l^{(\alpha )}= & {} \dfrac{1}{{2 - \alpha }}[ l^{2 - \alpha } - (l-1)^{2 - \alpha } ] - \dfrac{1}{2}[l^{1 - \alpha } + (l-1)^{1 - \alpha } ],\;\quad 1 \le l \le n-1. \end{aligned}$$

The properties of the coefficients \(\tilde{c}_j^{(\alpha )}\,(j=1,2,\ldots , n)\) appearing in formula (14) can be found in [24].
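
For reference, a minimal sketch of how the weights \(\tilde{c}_j^{(\alpha )}\) of the \(L1-2\) formula can be computed (the function name and interface are our own):

```python
import numpy as np

def l1_2_coefficients(n, alpha):
    """Coefficients c_tilde_j^(alpha), j = 1..n, of the L1-2 formula (14).

    For n = 1 the single coefficient is 1; for n >= 2 the coefficients follow
    the piecewise definition above in terms of a_l and b_l.
    """
    if n == 1:
        return np.array([1.0])
    l = np.arange(1, n + 1, dtype=float)
    a = l**(1 - alpha) - (l - 1)**(1 - alpha)                  # a_l, 1 <= l <= n
    b = ((l**(2 - alpha) - (l - 1)**(2 - alpha)) / (2 - alpha)
         - 0.5 * (l**(1 - alpha) + (l - 1)**(1 - alpha)))      # b_l, only l <= n-1 is used
    c = np.empty(n)
    c[0] = a[0] + b[0]                                         # j = 1
    c[1:n - 1] = a[1:n - 1] + b[1:n - 1] - b[0:n - 2]          # 2 <= j <= n-1
    c[n - 1] = a[n - 1] - b[n - 2]                             # j = n
    return c

print(l1_2_coefficients(5, 0.5))
```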

Setting

$$\begin{aligned} d_1= & {} -\dfrac{\tilde{c}_{n}^{(\alpha )}\Delta t^{-\alpha }}{\Gamma (2-\alpha )},\\ d_k= & {} \dfrac{-(\tilde{c}_{n-k+1}^{(\alpha )}-\tilde{c}_{n-k+2}^{(\alpha )})\Delta t^{-\alpha }}{\Gamma (2-\alpha )},\quad k=2,3,\ldots ,n,\\ d_{n+1}= & {} \dfrac{\tilde{c}_1^{(\alpha )}\Delta t^{-\alpha }}{\Gamma (2-\alpha )}, \end{aligned}$$

(14) becomes

$$\begin{aligned} {}_0^C D_t ^\alpha v(x, t _{n+1} )= & {} d_{n+1} v(x, t _{n+1})+\sum _{k=2}^{n}d_k v(x,t_{k}) \nonumber \\&+ d_1v(x,t_1)+O(\Delta t^{3-\alpha }){.} \end{aligned}$$
(15)

Substituting (15) into (13), denoting by \(v^{n+1}(x)\) the approximate solution of \(v(x,t_{n+1})\) and dropping the truncation error term, we obtain the following time discretization of model (6)–(8) at the \((n+1)\)-th time level

$$\begin{aligned} (d_{n+1}+w) v^{n+1}(x) - \lambda \dfrac{\partial ^2 v^{n+1}(x)}{\partial x^2}=g^{n+1}(x), \end{aligned}$$
(16)

where \(g^{n+1}(x)=G^{n+1}(x)-\sum \limits _{k=2}^n {d_k v^{k}(x)}-d_1 v^{1}(x)\), \(a< x < b\), \(n=1,2,\ldots ,N_{t}\). The corresponding initial and boundary conditions can be discretized as

$$\begin{aligned} v^1(x)= & {} \phi (x),\quad a< x < b, \end{aligned}$$
(17)
$$\begin{aligned} v^{n+1}(a)= & {} \varphi (t_{n+1}) ,\quad v^{n+1}(b) =\psi (t_{n+1}),\quad n=1,2,\ldots ,N_{t}. \end{aligned}$$
(18)
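
A possible implementation sketch of the history part of the time discretization, i.e. the coefficients \(d_k\) and the right-hand side \(g^{n+1}(x)\) in (16); it assumes the weights \(\tilde{c}_j^{(\alpha )}\) are available (e.g. from the sketch above) and that the previous time levels \(v^1,\ldots ,v^n\) are supplied as callables (all names are ours):

```python
import numpy as np
from math import gamma

def history_coefficients(n, alpha, dt, c_tilde):
    """Coefficients d_1,...,d_{n+1} of (15), built from the L1-2 weights c_tilde (length n)."""
    factor = dt**(-alpha) / gamma(2 - alpha)
    d = np.empty(n + 1)
    d[0] = -c_tilde[n - 1] * factor                        # d_1
    for k in range(2, n + 1):                              # d_2, ..., d_n
        d[k - 1] = -(c_tilde[n - k] - c_tilde[n - k + 1]) * factor
    d[n] = c_tilde[0] * factor                             # d_{n+1}
    return d

def rhs_g(x, G_np1, v_history, d):
    """g^{n+1}(x) = G^{n+1}(x) - sum_{k=2}^{n} d_k v^k(x) - d_1 v^1(x), cf. (16).

    v_history is the list of callables [v^1, ..., v^n]."""
    g = G_np1(x) - d[0] * v_history[0](x)
    for k in range(2, len(v_history) + 1):
        g -= d[k - 1] * v_history[k - 1](x)
    return g
```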

4.2 Space discretization

Discretizing Eq. (16) at the collocation points \(\{\eta _{i}\}, i=2,3,\ldots ,N_{h}+1\), we have

$$\begin{aligned} (d_{n+1}+w) v^{n+1}(\eta _{i}) - \lambda \dfrac{\partial ^2 v^{n+1}}{\partial x^2}(\eta _{i})=g^{n+1}(\eta _{i}), \quad n=1,2,\ldots ,N_{t}. \end{aligned}$$
(19)

Inserting (9) and (11) into the above equation yields

$$\begin{aligned} (d_{n+1}+w) v_s^{n+1}(\eta _{i}) - \lambda \dfrac{\partial ^2 v_s^{n+1}}{\partial x^2}(\eta _{i})-\dfrac{h^2}{24}\lambda \dfrac{\partial ^4 v^{n+1}}{\partial x^4}(\eta _{i})=g^{n+1}(\eta _{i})+O(h^4). \end{aligned}$$
(20)

Noticing Eq. (16), we get

$$\begin{aligned} \lambda \dfrac{\partial ^2 v^{n+1}(x)}{\partial x^2}=(d_{n+1}+w) v^{n+1}(x)-g^{n+1}(x). \end{aligned}$$
(21)

Then we can deduce that

$$\begin{aligned} \lambda \dfrac{\partial ^4 v^{n+1}(x)}{\partial x^4}=(d_{n+1}+w) \dfrac{\partial ^2 v^{n+1}(x)}{\partial x^2}-\dfrac{\partial ^2 g^{n+1}(x)}{\partial x^2}. \end{aligned}$$
(22)

By using (22), Eq. (20) can be written

$$\begin{aligned}&(d_{n+1}+w) v_s^{n+1}(\eta _{i}) - \lambda \dfrac{\partial ^2 v_s^{n+1}}{\partial x^2}(\eta _{i})-\dfrac{h^2}{24}\left[ (d_{n+1}+w)\dfrac{\partial ^2 v^{n+1}}{\partial x^2}(\eta _{i})\right. \nonumber \\&\quad \left. -\dfrac{\partial ^2 g^{n+1}}{\partial x^2}(\eta _{i})\right] =g^{n+1}(\eta _{i})+O(h^4). \end{aligned}$$
(23)

Further, applying (21) to (23), we obtain

$$\begin{aligned}&\left[ (d_{n+1}+w)-\dfrac{h^2(d_{n+1}+w)^2}{24\lambda } \right] v_s^{n+1}(\eta _{i}) - \lambda \dfrac{\partial ^2 v_s^{n+1}}{\partial x^2}(\eta _{i})\nonumber \\&\quad =\left[ 1-\dfrac{h^2(d_{n+1}+w)}{24\lambda }\right] g^{n+1}(\eta _{i})-\dfrac{h^2}{24}\dfrac{\partial ^2 g^{n+1}}{\partial x^2}(\eta _{i})+O(h^4). \end{aligned}$$
(24)

For convenience of discussion, we let \(\sigma _1=(d_{n+1}+w)-\dfrac{h^2(d_{n+1}+w)^2}{24\lambda }\), \(\sigma _2=-\lambda \) and \(\sigma _3=1-\dfrac{h^2(d_{n+1}+w)}{24\lambda }\); then Eq. (24) can be arranged in the following form

$$\begin{aligned} \sigma _1 v_s^{n+1}(\eta _{i}) +\sigma _2 \dfrac{\partial ^2 v_s^{n+1}}{\partial x^2}(\eta _{i})=\sigma _3 g^{n+1}(\eta _{i}) -\dfrac{h^2}{24}\dfrac{\partial ^2 g^{n+1}}{\partial x^2}(\eta _{i})+O(h^4). \end{aligned}$$
(25)

By (10), the corresponding boundary conditions are

$$\begin{aligned} v_s^{n+1}(\eta _{1})+O(h^4)=\varphi (t_{n+1}),\,\, v_s^{n+1}(\eta _{N_h+2})+O(h^4)=\psi (t_{n+1}). \end{aligned}$$
(26)

In Eq. (25), denoting by \(v_h^{n+1}(\eta _{i}) \) the approximation of \(v_s^{n+1}(\eta _{i})\) and omitting the \(O(h^4)\) terms, we construct the following collocation system

$$\begin{aligned}&\sigma _1 v_h^{n+1}(\eta _{i}) +\sigma _2 \dfrac{\partial ^2 v_h^{n+1}}{\partial x^2}(\eta _{i})=\sigma _3 g^{n+1}(\eta _{i})-\dfrac{h^2}{24}\dfrac{\partial ^2 g^{n+1}}{\partial x^2}(\eta _{i}),\nonumber \\&\quad i=2,3,\ldots ,N_{h}+1, n=1,2,\ldots ,N_{t}, \end{aligned}$$
(27)

with the initial condition

$$\begin{aligned} v_h^{1}(\eta _{i})=\phi (\eta _{i}), \quad i=2,3,\ldots ,N_{h}+1, \end{aligned}$$
(28)

and the boundary conditions

$$\begin{aligned} v_h^{n+1}(\eta _{1})=\varphi (t_{n+1}), \quad v_h^{n+1}(\eta _{N_h+2})=\psi (t_{n+1}). \end{aligned}$$
(29)

At the time mesh point \(t=t_{n+1}\), according to the collocation technique, \(v_h^{n+1}(x)\) can be expressed as a linear combination of the quadratic B-spline basis functions \(B_k(x)\) introduced in Sect. 3:

$$\begin{aligned} v_h^{n+1}(x)=\sum \limits _{k=1}^{N_{h}+2}\xi _k^{{n+1}} B_k(x), \end{aligned}$$
(30)

where coefficients \(\{\xi _k^{n+1}\}_{k=1}^{N_{h}+2}\) are to be determined. By (29), it is easy to obtain

$$\begin{aligned} \xi _1^{n+1}=2\varphi (t_{n+1}), \xi _{N_{h}+2}^{n+1}=2\psi (t_{n+1}),\quad n=1,2,\ldots ,N_{t}. \end{aligned}$$
(31)

Substituting (30) into Eq.(27), the collocation equation can be represented as

$$\begin{aligned} \sigma _1 \sum _{k=1}^{N_{h}+2}&\xi _k^{n+1}B_k(\eta _{i}) +\sigma _2 \sum _{k=1}^{N_{h}+2}\xi _k^{n+1}\dfrac{\partial ^2 B_k}{\partial x^2}(\eta _{i})=\sigma _3 g^{n+1}(\eta _{i})-\dfrac{h^2}{24}\dfrac{\partial ^2 g^{n+1}}{\partial x^2}(\eta _{i}){,}\nonumber \\&\qquad i=2,3,\ldots ,N_{h}+1, n=1,2,\ldots ,N_{t}. \end{aligned}$$
(32)

According to Proposition 1, we let

$$\begin{aligned} \begin{matrix} A=\dfrac{1}{8} \left( \begin{array}{ccccc} 5&{}\quad 1&{}\quad &{}\quad &{}\quad \\ 1&{}\quad 6&{}\quad 1&{}\quad &{}\quad \\ &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{} \quad \\ &{}\quad &{}\quad 1&{}\quad 6&{}\quad 1\\ &{}\quad &{}\quad &{}\quad 1&{}\quad 5\\ \end{array}\right) _{N_{h}\times N_{h}}, &{}D=\dfrac{1}{h^2}\begin{pmatrix} -3\quad &{}1\quad &{} \quad &{} \quad &{} \quad \\ 1&{}\quad -2&{}\quad 1&{}\quad &{}\quad \\ &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \\ &{} \quad &{}\quad 1&{}\quad -2&{}\quad 1\\ &{}\quad &{}\quad &{}\quad 1&{}\quad -3\\ \end{pmatrix}_{N_{h}\times N_{h}}. \end{matrix} \end{aligned}$$

Then the above Eq. (32) can be written in the following matrix form

$$\begin{aligned} (\sigma _1 A+\sigma _2 D)\xi ^{n+1}=C^{n+1}, \end{aligned}$$
(33)

where \(\xi ^{n+1}=(\xi _2^{n+1},\xi _3^{n+1},\ldots ,\xi _{N_{h}+1}^{n+1})^{\top }\) and \(C^{n+1}=(c_2^{n+1},c_3^{n+1},\ldots ,c_{N_{h}+1}^{n+1})^{\top },\) in which

$$\begin{aligned} \left\{ \begin{array}{l} c_2^{n+1}=\sigma _3 g^{n+1}(\eta _{2})-\dfrac{h^2}{24}\dfrac{\partial ^2 g^{n+1}}{\partial x^2}(\eta _{2})-(\dfrac{\sigma _1}{4}+\dfrac{2\sigma _2}{h^2})\varphi (t_{n+1}),\\ \\ c_i^{n+1}=\sigma _3 g^{n+1}(\eta _{i})-\dfrac{h^2}{24}\dfrac{\partial ^2 g^{n+1}}{\partial x^2}(\eta _{i}),\quad i=3,4,\ldots ,N_{h},\\ \\ c_{N_{h}+1}^{n+1}=\sigma _3 g^{n+1}(\eta _{N_{h}+1})-\dfrac{h^2}{24}\dfrac{\partial ^2 g^{n+1}}{\partial x^2}(\eta _{N_{h}+1})-(\dfrac{\sigma _1}{4}+\dfrac{2\sigma _2}{h^2})\psi (t_{n+1}){.}\\ \end{array}\right. \end{aligned}$$

Letting \(H=\sigma _1 A+\sigma _2 D\), the matrix equation (33) is simplified as

$$\begin{aligned} H\xi ^{n+1}=C^{n+1},\quad n=1,2,\ldots ,N_{t}. \end{aligned}$$
(34)

Combining \(\xi ^{n+1}\) obtained from (34) with \(\xi _1^{n+1}\) and \(\xi _{N_{h}+2}^{n+1}\) obtained from (31), we let \(\tilde{\xi }^{n+1}=(\xi _1^{n+1},\xi _2^{n+1},\ldots ,\xi _{N_{h}+1}^{n+1},\xi _{N_{h}+2}^{n+1})^{\top }\). Meanwhile, by Proposition 1 we set

$$\begin{aligned} \tilde{A}=\dfrac{1}{8} \left( \begin{array}{ccccccc} {4}&{}\quad {0}&{}\quad {0}&{}\quad &{}\quad &{}\quad &{}\quad \\ 1&{}\quad 5&{}\quad 1&{}\quad &{}\quad &{}\quad &{}\quad \\ &{}\quad 1&{}\quad 6&{}\quad 1&{}\quad &{}\quad &{}\quad \\ &{}\quad &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad &{} \quad \\ &{}\quad &{}\quad &{}\quad 1&{}\quad 6&{}\quad 1&{}\quad \\ &{} \quad &{} \quad &{} \quad &{}\quad 1 &{}\quad 5 &{}\quad 1\\ &{} \quad &{} \quad &{} \quad &{}\quad {0}&{}\quad {0}&{}\quad {4}\\ \end{array}\right) _{{(N_{h}+2)}\times (N_{h}+2)}. \end{aligned}$$

According to the expression (30),

$$\begin{aligned} v_h^{n+1}({\eta })=\tilde{A} \tilde{\xi }^{n+1} \end{aligned}$$
(35)

is the quadratic spline approximate solution of system (6)–(8) at each time level \(t_{n+1}\), where \(\eta =(\eta _{1},\eta _{2},\ldots ,\eta _{N_{h}+2 })^{\top }\).
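
To summarize the space discretization, the following sketch (our own illustration, not the authors' code) assembles \(A\), \(D\) and \(\tilde{A}\) according to Proposition 1, forms \(H=\sigma _1 A+\sigma _2 D\) and the right-hand side \(C^{n+1}\), solves (34), and evaluates \(v_h^{n+1}(\eta )\) via (31) and (35). The callables g and g_xx, supplying \(g^{n+1}\) and its second derivative at the collocation points, are assumed to be provided by the caller (in practice they depend on previous time levels through (16)).

```python
import numpy as np

def build_matrices(Nh, h):
    """Collocation matrices A, D (interior points) and A_tilde (all points), cf. Proposition 1."""
    A = np.zeros((Nh, Nh))
    D = np.zeros((Nh, Nh))
    for i in range(Nh):
        A[i, i] = 6/8
        D[i, i] = -2/h**2
        if i > 0:
            A[i, i-1] = 1/8
            D[i, i-1] = 1/h**2
        if i < Nh - 1:
            A[i, i+1] = 1/8
            D[i, i+1] = 1/h**2
    A[0, 0] = A[-1, -1] = 5/8
    D[0, 0] = D[-1, -1] = -3/h**2
    At = np.zeros((Nh + 2, Nh + 2))
    At[0, 0] = At[-1, -1] = 4/8
    for i in range(1, Nh + 1):
        At[i, i-1:i+2] = [1/8, 6/8, 1/8]
    At[1, 1] = At[Nh, Nh] = 5/8
    return A, D, At

def qsc_time_level(g, g_xx, d_np1, w, lam, a, b, Nh, bc_left, bc_right):
    """One time level of the compact QSC scheme: solve (34) and return v_h^{n+1}(eta)."""
    h = (b - a) / Nh
    eta = np.concatenate(([a], a + (np.arange(1, Nh + 1) - 0.5)*h, [b]))
    A, D, At = build_matrices(Nh, h)
    s1 = (d_np1 + w) - h**2*(d_np1 + w)**2/(24*lam)   # sigma_1
    s2 = -lam                                         # sigma_2
    s3 = 1 - h**2*(d_np1 + w)/(24*lam)                # sigma_3
    # right-hand side C^{n+1} at the interior collocation points
    gi, gxxi = g(eta[1:-1]), g_xx(eta[1:-1])
    C = s3*gi - h**2/24*gxxi
    C[0]  -= (s1/4 + 2*s2/h**2)*bc_left
    C[-1] -= (s1/4 + 2*s2/h**2)*bc_right
    xi = np.linalg.solve(s1*A + s2*D, C)                          # interior coefficients, (34)
    xi_full = np.concatenate(([2*bc_left], xi, [2*bc_right]))     # boundary coefficients, (31)
    return eta, At @ xi_full                                      # collocation values, (35)
```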

5 Convergence analysis

In this section, we investigate the unique solvability of the collocation system and the convergence of the compact QSC method.

Proposition 2

If \(\sigma > 0.527\), then \(\Vert (\sigma _2 D)^{-1}\Vert _{\infty }<1\) holds, where \(\sigma > 0\) is the volatility of the underlying asset.

Proof

Noting that \(\sigma _2=-\dfrac{\sigma ^2}{2}\), we have

$$\begin{aligned} \Vert (\sigma _2 D)^{-1}\Vert _{\infty }=\dfrac{2h^2}{\sigma ^2}\left\| \begin{pmatrix} 3&{}\quad -1&{}\quad &{}\quad &{}\quad \\ -1&{}\quad 2&{}\quad -1&{}\quad &{}\quad \\ &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \\ &{} \quad &{}\quad -1&{}\quad 2&{}\quad -1\\ &{}\quad &{}\quad &{}\quad -1&{}\quad 3\\ \end{pmatrix}^{-1}\right\| _{\infty }. \end{aligned}$$

For convenience of the subsequent discussion, let \(E=\begin{pmatrix} 3&{}-1&{} &{} &{} \\ -1&{}2&{}-1&{} &{} \\ &{}\ddots &{}\ddots &{}\ddots &{} \\ &{} &{}-1&{}2&{}-1\\ &{}&{}&{}-1&{}3\\ \end{pmatrix}_{N_{h}\times N_{h}}\). By simple calculation we have the following conclusions.

  • When \(N_{h}=2\), \(D=\dfrac{1}{h^2}\begin{pmatrix} -3&{}1 \\ 1&{}-3\\ \end{pmatrix}\), so \(E^{-1}=\begin{pmatrix} 3&{}-1 \\ -1&{}3\\ \end{pmatrix}^{-1}=\dfrac{1}{8}\begin{pmatrix} 3&{}1 \\ 1&{}3\\ \end{pmatrix}\), then we have \(\Vert E^{-1}\Vert _{\infty }=\dfrac{1}{2}\) ;

  • When \(N_{h}=4\), \(\Vert E^{-1}\Vert _{\infty }=\dfrac{(4-1)\cdot (1+3)+(4+1)\cdot (1+3)}{4\cdot 4}=2\) ;

  • When \(N_{h}=6\), \(\Vert E^{-1}\Vert _{\infty }=\dfrac{(6-1)\cdot (1+3+5)+(6+1)\cdot (1+3+5)}{4\cdot 6}=\dfrac{9}{2}\) ;

  • When \(N_{h}=8\), \(\Vert E^{-1}\Vert _{\infty }=\dfrac{(8-1)\cdot (1+3+5+7)+(8+1)\cdot (1+3+5+7)}{4\cdot 8}=8\) ;

    $$\begin{aligned} \cdots \end{aligned}$$
  • When \(N_{h}=2k+2,(k=1,2,\cdots )\),

    $$\begin{aligned} \Vert E^{-1}\Vert _{\infty }=\dfrac{[(N_{h}-1)+(N_{h}+1)]\cdot [1+3+\cdots +(N_{h}-1)]}{4\cdot N_{h}}=\dfrac{N_{h}^2}{8}. \end{aligned}$$

So, for \(\Vert (\sigma _2 D)^{-1}\Vert _{\infty }<1\) to hold when \(N_{h}=2k\ (k=1,2,\cdots )\), we only need \(\dfrac{2h^2}{\sigma ^2}\cdot \dfrac{N_{h}^2}{8}<1\). Since \(h=\dfrac{1}{N_{h}}\), this gives \(\sigma >\dfrac{1}{2}\).

  • When \(N_{h}=3\), \(\Vert E^{-1}\Vert _{\infty }=\dfrac{2\cdot 3\cdot 1+3^2}{4\cdot 3}=\dfrac{5}{4}\) ;

  • When \(N_{h}=5\), \(\Vert E^{-1}\Vert _{\infty }=\dfrac{2\cdot 5\cdot (1+3)+5^2}{4\cdot 5}=\dfrac{13}{4}\) ;

  • When \(N_{h}=7\), \(\Vert E^{-1}\Vert _{\infty }=\dfrac{2\cdot 7\cdot (1+3+5)+7^2}{4\cdot 7}=\dfrac{25}{4}\) ;

    $$\begin{aligned} \cdots \end{aligned}$$
  • When \(N_{h}=2k+1,(k=1,2,\cdots )\),

    $$\begin{aligned} \Vert E^{-1}\Vert _{\infty }=\dfrac{2\cdot N_{h}\cdot [1+3+\cdots +(N_{h}-2)]+N_{h}^2}{4\cdot N_{h}}=\dfrac{N_{h}^2+1}{8}. \end{aligned}$$

Therefore, for \(\Vert (\sigma _2 D)^{-1}\Vert _{\infty }<1\) to hold when \(N_{h}=2k+1\ (k=1,2,\cdots )\), we only need \(\dfrac{2h^2}{\sigma ^2}\cdot \dfrac{N_{h}^2+1}{8}<1\). Further, because \(h=\dfrac{1}{N_{h}}\), we have \(\sigma >\dfrac{1}{2}\sqrt{1+\dfrac{1}{N_{h}^2}}\), which attains its largest value \(0.527\) at \(N_{h}=3\).

In summary, when \(\sigma >0.527\), \(\Vert (\sigma _2 D)^{-1}\Vert _{\infty }<1\) holds for all \(N_{h}\ge 2\). \(\square \)

It is worth noting that the above expressions for \(\Vert E^{-1}\Vert _{\infty }\) are easy to verify through numerical experiments, as in the following sketch.
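
The following Python snippet (ours) compares the computed value of \(\Vert E^{-1}\Vert _{\infty }\) with the closed-form expressions \(N_{h}^2/8\) (even \(N_{h}\)) and \((N_{h}^2+1)/8\) (odd \(N_{h}\)) derived above.

```python
import numpy as np

def E_inv_inf_norm(Nh):
    """Compute ||E^{-1}||_inf numerically for the tridiagonal matrix E above."""
    E = (np.diag(2.0*np.ones(Nh))
         + np.diag(-np.ones(Nh - 1), 1)
         + np.diag(-np.ones(Nh - 1), -1))
    E[0, 0] = E[-1, -1] = 3.0
    return np.abs(np.linalg.inv(E)).sum(axis=1).max()

for Nh in range(2, 10):
    predicted = Nh**2/8 if Nh % 2 == 0 else (Nh**2 + 1)/8
    print(Nh, E_inv_inf_norm(Nh), predicted)
```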

Theorem 1

If \(\sigma _1>0\), then the solution of the collocation system (27)–(29) is unique.

Proof

Since \(\lambda =\dfrac{\sigma ^2}{2}\), we immediately get \(\lambda >0\). Noting that \(\sigma _2=-\lambda \), we have

$$\begin{aligned} H= & {} \sigma _1 A+\sigma _2 D=\sigma _1 \cdot \dfrac{1}{8} \left( \begin{array}{ccccc} 5&{}\quad 1&{}\quad &{}\quad &{}\quad \\ 1&{}\quad 6&{}\quad 1&{}\quad &{}\quad \\ &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \\ &{} \quad &{}\quad 1&{}\quad 6&{}\quad 1\\ &{}\quad &{}\quad &{}\quad 1&{}\quad 5\\ \end{array}\right) \\&+\dfrac{\lambda }{h^2}\begin{pmatrix} 3&{}\quad -1&{}\quad &{}\quad &{}\quad \\ -1&{}\quad 2&{}\quad -1&{}\quad &{}\quad \\ &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \\ &{}\quad &{}\quad -1&{}\quad 2&{}\quad -1\\ &{}\quad &{}\quad &{}\quad -1&{}\quad 3\\ \end{pmatrix}. \end{aligned}$$

For the first and last rows of the matrix \(H\),

$$\begin{aligned} |\dfrac{5}{8}\sigma _1+\dfrac{3}{h^2}\lambda |-|\dfrac{1}{8}\sigma _1-\dfrac{1}{h^2}\lambda | =\left\{ {\begin{array}{ll} {\dfrac{4}{8}\sigma _1+\dfrac{4}{h^2}\lambda>0,} &{}\quad {\dfrac{1}{8}\sigma _1{\ge }\dfrac{1}{h^2}\lambda ,} \\ &{} \\ {\dfrac{6}{8}\sigma _1+\dfrac{2}{h^2}\lambda >0 ,} &{}\quad {\dfrac{1}{8}\sigma _1<\dfrac{1}{h^2}\lambda .} \\ \end{array}} \right. \end{aligned}$$

For rows 2 through \(N_h-1\) of the matrix \(H\),

$$\begin{aligned} |\dfrac{6}{8}\sigma _1+\dfrac{2}{h^2}\lambda |-2|\dfrac{1}{8}\sigma _1-\dfrac{1}{h^2}\lambda | =\left\{ {\begin{array}{ll} {\dfrac{4}{8}\sigma _1+\dfrac{4}{h^2}\lambda>0,} &{}\quad {\dfrac{1}{8}\sigma _1{\ge }\dfrac{1}{h^2}\lambda ,} \\ {\sigma _1 >0 ,} &{}\quad {\dfrac{1}{8}\sigma _1<\dfrac{1}{h^2}\lambda .} \\ \end{array}} \right. \end{aligned}$$

By the definition of a strictly diagonally dominant matrix, when \(\sigma _1>0\) the matrix \(H\) is strictly diagonally dominant. Hence \(H\) is nonsingular, which implies that the solution of the matrix equation (34) exists and is unique.

According to (31) and (35), the solution obtained from the collocation system (27)–(29) is unique. \(\square \)

In fact, the condition \(\sigma _1>0\) is satisfied in most cases in the following numerical experiments; see the sketch below.
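
For example, with the step sizes and parameters of Example 1 below (\(h=1/200\), \(\Delta t=1/1000\), \(\alpha =0.5\), \(\sigma =0.8\), \(r=0.02\)), a quick check of \(\sigma _1>0\), which amounts to \(h^2(d_{n+1}+w)<24\lambda \), can be coded as follows (our own sketch):

```python
from math import gamma

alpha, sigma, r = 0.5, 0.8, 0.02            # parameters of Example 1
dt, h = 1/1000, 1/200                       # step sizes used in the tables
lam = sigma**2/2
w = (r - sigma**2/2)**2/(2*sigma**2) + r
# d_{n+1} = c_tilde_1 * dt^{-alpha} / Gamma(2 - alpha); for n >= 2,
# c_tilde_1 = a_1 + b_1 = 1 + 1/(2 - alpha) - 1/2
c1 = 1 + 1/(2 - alpha) - 0.5
d_np1 = c1 * dt**(-alpha) / gamma(2 - alpha)
sigma1 = (d_np1 + w) - h**2*(d_np1 + w)**2/(24*lam)
print(sigma1 > 0, sigma1)
```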

Theorem 2

Let \(v^{n+1}(\eta )\) be the exact solution of problem (16)–(18) at the collocation points and \(v_h^{n+1}(\eta )\) be the collocation approximation from (27)–(29). Supposing \(v^{n+1}(x)\in C^{4}([a,b])\), when \(\sigma > 0.527\) and \(\Vert (\sigma _2 D)^{-1}(\sigma _1 A)\Vert _{\infty }<1\), the spatial error satisfies

$$\begin{aligned} \Vert v^{n+1}(\eta )-v_h^{n+1}(\eta )\Vert _{\infty }=O(h^4), \end{aligned}$$
(36)

where \(\eta =(\eta _{1},\eta _{2},\ldots ,\eta _{N_{h}+2})^{\top }\), \(\Vert v-v_h\Vert _{\infty }=\mathrm{max}\{|v(x)-v_h(x)|,x\in [a,b]\}\).

Proof

Subtracting (27) from (25), we have

$$\begin{aligned} \sigma _1 (v_s^{n+1}(\eta _{i})-v_h^{n+1}(\eta _{i})) +\sigma _2 \dfrac{\partial ^2 (v_s^{n+1}-v_h^{n+1})}{\partial x^2}(\eta _{i})=O(h^4), \end{aligned}$$
(37)

where \(i=2,3,\ldots ,N_{h}+1; n=1,2,\ldots ,N_{t}\).

Let

$$\begin{aligned} v_s^{n+1}(x)=\sum \limits _{k=1}^{N_{h}+2}\theta _k^{n+1} B_k(x), \end{aligned}$$

(37) yields

$$\begin{aligned} \sigma _1 \sum _{k=1}^{N_{h}+2}(\theta _k^{n+1}-\xi _k^{n+1}) B_k(\eta _{i}) +\sigma _2 \sum _{k=1}^{N_{h}+2}(\theta _k^{n+1}-\xi _k^{n+1})\dfrac{\partial ^2 B_k}{\partial x^2}(\eta _{i})=O(h^4). \end{aligned}$$
(38)

To complete the error analysis, we first prove that the matrix D is invertible.

Let \(Dy=0\) with \(y=(y_1,y_2,\ldots ,y_{N_{h}})^{\top }\), we have

$$\begin{aligned}&-3y_1+y_2=0;\\&y_1-2y_2+y_3=0;\\&y_2-2y_3+y_4=0;\\&\cdots \\&y_{N_{h}-2}-2y_{N_{h}-1}+y_{N_{h}}=0;\\&y_{N_{h}-1}-3y_{N_{h}}=0. \end{aligned}$$

Then we get \(y_2=3y_1, y_3=5y_1, y_4=7y_1, \ldots , y_{N_{h}-1}=(2N_{h}-3)y_1, y_{N_{h}}=(2N_{h}-1)y_1\), and the last equation \(y_{N_{h}-1}-3y_{N_{h}}=0\) then forces \(y_1=y_2=\cdots =y_{N_{h}}=0\). Hence the matrix D is invertible.

Next, we prove that \(\Vert H^{-1}\Vert _{\infty }\) is bounded.

Noticing

$$\begin{aligned} H^{-1}=\left[ (\sigma _2 D)\left( I+(\sigma _2 D)^{-1}(\sigma _1 A)\right) \right] ^{-1}=[I+(\sigma _2 D)^{-1}(\sigma _1 A)]^{-1}(\sigma _2 D)^{-1}, \end{aligned}$$

when \(\Vert (\sigma _2 D)^{-1}(\sigma _1 A)\Vert _{\infty }<1\), we can apply the Neumann series expansion and obtain

$$\begin{aligned} H^{-1}=\{I+\sum _{{m}=1}^{\infty }(-1)^{{m}}[(\sigma _2 D)^{-1}(\sigma _1 A)]^{{m}}\}(\sigma _2 D)^{-1}. \end{aligned}$$

According to Proposition 2, \(\Vert (\sigma _2 D)^{-1}\Vert _{\infty }<1\) holds when \(\sigma > 0.527\). We can obtain the following inequality

$$\begin{aligned} \Vert H^{-1}\Vert _{\infty }&\le \left[ 1+\sum \limits _{{m}=1}^{\infty }\Vert (\sigma _2 D)^{-1}(\sigma _1 A)\Vert _{\infty }^{{m}}\right] \Vert (\sigma _2 D)^{-1}\Vert _{\infty }\\&<1+\sum \limits _{{m}=1}^{\infty }\Vert (\sigma _2 D)^{-1}(\sigma _1 A)\Vert _{\infty }^{m}\\&=\dfrac{1}{1-\Vert (\sigma _2 D)^{-1}(\sigma _1 A)\Vert _{\infty }}. \end{aligned}$$

Thus, when \(\Vert (\sigma _2 D)^{-1}(\sigma _1 A)\Vert _{\infty }<1\) the boundedness of \(\Vert H^{-1}\Vert _{\infty }\) is proved.

Setting \(\varepsilon _k^{n+1}=\theta _k^{n+1}-\xi _k^{n+1}\ (k=2,3,\ldots ,N_{h}+1)\) and \(\varepsilon ^{n+1}=(\varepsilon _2^{n+1},\varepsilon _3^{n+1},\ldots ,\varepsilon _{N_h+1}^{n+1})^{\top }\), and taking into account (38), which in matrix form reads \(H\varepsilon ^{n+1}=O(h^4)\), the boundedness of \(\Vert H^{-1}\Vert _{\infty }\) gives \(\Vert \varepsilon ^{n+1}\Vert _{\infty }=O(h^4)\).

By (31), we get \(\varepsilon _1^{n+1}=\varepsilon _{N_h+2}^{n+1}=0\). Let \(\tilde{\varepsilon }^{n+1}=(0,\varepsilon _2^{n+1},\varepsilon _3^{n+1},\ldots ,\varepsilon _{N_h+1}^{n+1},0)^{\top }\). Since

$$\begin{aligned} \Vert v_s^{n+1}({\eta })-v_h^{n+1}({\eta })\Vert _{\infty }=\Vert \tilde{A}\tilde{\varepsilon }^{n+1}\Vert _{\infty }\le \Vert \tilde{A}\Vert _{\infty }\Vert \tilde{\varepsilon }^{n+1}\Vert _{\infty }, \end{aligned}$$

and \(\Vert \tilde{A}\Vert _{\infty }\) is bounded, we have

$$\begin{aligned} \Vert v_s^{n+1}(\eta )-v_h^{n+1}(\eta )\Vert _{\infty }=O(h^4). \end{aligned}$$
(39)

By formulas (12) and (39) and the triangle inequality, we have the spatial error bound

$$\begin{aligned} \Vert v^{n+1}(\eta )-v_h^{n+1}(\eta )\Vert _{\infty }&\le \Vert v^{n+1}(\eta )-v_s^{n+1}(\eta )\Vert _{\infty }+\Vert v_s^{n+1}(\eta )-v_h^{n+1}(\eta )\Vert _{\infty }\nonumber \\&= O(h^4)+O(h^4)=O(h^4). \end{aligned}$$

\(\square \)

Theorem 3

Let \(v(x,t)\in C_{x,t}^{4,3}([a,b]\times [0,T])\). When \(\sigma > 0.527\) and \(\Vert (\sigma _2 D)^{-1}(\sigma _1 A)\Vert _{\infty }<1\), the exact solution \(v(\eta ,t_{n+1})\) of problem (6)–(8) and the collocation solution \(v_h^{n+1}(\eta )\) of the proposed numerical method (27)–(29) satisfy

$$\begin{aligned} \Vert v(\eta ,t_{n+1})-v_h^{n+1}(\eta )\Vert _{\infty }=O(h^4+\Delta t^{3-\alpha }), \end{aligned}$$

where \(\eta =(\eta _{1},\eta _{2},\ldots , \eta _{N_{h}+2})^{\top }\), \(n=1,2,\ldots ,N_{t}.\)

Proof

According to Theorem 2 and taking into account the temporal discretization error in (14), the estimate

$$\begin{aligned} \Vert v(\eta ,t_{n+1})-v_h^{n+1}(\eta )\Vert _{\infty }=O(h^4+\Delta t^{3-\alpha }) \end{aligned}$$

follows immediately. \(\square \)

6 Numerical experiments

In this section, two examples with exact solutions are presented to demonstrate the high accuracy of the compact QSC method proposed in Sect. 4. The corresponding numerical results are listed below. Moreover, to show the effectiveness of the new scheme, we consider three different European option pricing problems: a European put option, a European call option and a European double barrier knock-out call option. After that, we take the European put option as an example to illustrate the effect of different parameters on the option price in the time-fractional Black–Scholes model.

Example 1

Consider the following time-fractional Black–Scholes model with homogeneous boundary conditions

$$\begin{aligned} \left\{ {\begin{array}{l} {{}_0^CD_t ^\alpha V(x,t ) - \dfrac{{\sigma ^2 }}{2}\dfrac{{\partial ^2 V(x,t )}}{{\partial x^2 }} - (r - \dfrac{{\sigma ^2 }}{2})\dfrac{{\partial V(x,t )}}{{\partial x}} + rV(x,t) = f(x,t ),} \\ \\ \quad {0< x< 1,\;0< t \le 1,}\\ {V(x,0) = x^5-x^4,\quad 0< x < 1,} \\ {V(0,t ) = 0 ,\quad V(1,t ) = 0 ,\quad 0 \le t \le 1,} \\ \end{array}} \right. \end{aligned}$$
(40)

with \(0<\alpha <1\), \( r = 0.02\) and \(\sigma = 0.8\), where

$$\begin{aligned} f(x,t)= & {} \dfrac{6 t^{3-\alpha }}{\Gamma (4-\alpha )}(x^5-x^4)-(t^{3}+1)\\&\cdot \left[ \dfrac{\sigma ^2}{2}(20x^3-12x^2) +\left( r-\dfrac{\sigma ^2}{2}\right) (5x^4-4x^3) -r(x^5-x^4)\right] \end{aligned}$$

is chosen such that the exact solution is \(V(x,t) = (t^{3} + 1)x^4(x-1)\).
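
For reproducibility, the exact solution and the source term of Example 1 can be coded as follows (a sketch; the function names are ours):

```python
import numpy as np
from math import gamma

r, sigma = 0.02, 0.8

def V_exact(x, t):
    """Exact solution of Example 1: V(x, t) = (t^3 + 1) x^4 (x - 1)."""
    return (t**3 + 1) * x**4 * (x - 1)

def f(x, t, alpha):
    """Source term of Example 1, chosen to match the exact solution above."""
    caputo = 6 * t**(3 - alpha) / gamma(4 - alpha) * (x**5 - x**4)
    spatial = (sigma**2/2 * (20*x**3 - 12*x**2)
               + (r - sigma**2/2) * (5*x**4 - 4*x**3)
               - r * (x**5 - x**4))
    return caputo - (t**3 + 1) * spatial

# e.g. evaluate on a grid when assembling the right-hand side
x = np.linspace(0, 1, 11)
print(f(x, 0.5, alpha=0.7))
```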

Example 2

Consider the following time-fractional Black–Scholes model with nonhomogeneous boundary conditions

$$\begin{aligned} \left\{ {\begin{array}{l} {{}_0^CD_t ^\alpha V(x,t ) - \dfrac{{\sigma ^2 }}{2}\dfrac{{\partial ^2 V(x,t )}}{{\partial x^2 }} - \left( r - \dfrac{{\sigma ^2 }}{2}\right) \dfrac{{\partial V(x,t )}}{{\partial x}} + rV(x,t) = f(x,t ),} \\ \quad {0< x< 1,\;0< t \le 1,}\\ {V(x,0) = x^4 + 1,\quad 0< x < 1,} \\ {V(0,t ) = t^{3} + 1 ,\quad V(1,t ) =2( t^{3} + 1 ),\quad 0 \le t \le 1,} \\ \end{array}} \right. \end{aligned}$$
(41)

with \(0<\alpha <1\), \( r = 0.5\) and \(\sigma = \sqrt{2} \), where

$$\begin{aligned} f(x,t)=\dfrac{6 t^{3-\alpha }}{\Gamma (4-\alpha )}( x^4 + 1)-(t^{3}+1)\left[ \dfrac{\sigma ^2}{2}\cdot 12x^2 +\left( r-\dfrac{\sigma ^2}{2}\right) \cdot 4x^3 -r(x^4 + 1)\right] \end{aligned}$$

is chosen such that the exact solution is \(V(x,t) = (t^{3} + 1)(x^4 + 1)\).

Table 1 Numerical errors and convergence orders for Example 1 with different \(\Delta t\) when \( h =1/200 \)

Tables 1, 2, 3 and 4 list the numerical results for the time-fractional Black–Scholes models (40) and (41). In these tables, Max-error denotes the maximum-norm error, which is measured by

$$\begin{aligned} \max \limits _{{1}\le i \le {N_{h}+2},1\le n \le N_{t}}|V(\eta _{i},t_{n+1})-v_h^{n+1}(\eta _{i})\cdot e^{\frac{1}{2}\beta \eta _{i}}|, \end{aligned}$$

where \(V(\eta _{i},t_{n+1})\) is the true solution of the Black–Scholes model at the point \((\eta _{i},t_{n+1})\) and \(v_h^{n+1}(\eta _{i})\) is the compact QSC approximate solution of system (27)–(29). The temporal convergence order is given by the formula \(Rate=\log _2 \dfrac{{Max-error}(\Delta t)}{{Max-error}(\Delta t/2)}\), and the spatial convergence order is computed analogously as \({Rate}=\log _2 \dfrac{{Max-error}(h)}{{Max-error}(h/2)}\) for the spatial step size h.
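
The observed convergence orders in the tables are obtained by the halving formulas above; a minimal helper (ours, with hypothetical error values in the usage line) is:

```python
import numpy as np

def convergence_rates(errors):
    """Observed orders: rate_k = log2(err_k / err_{k+1}) for successive halvings
    of the step size (time step or mesh size)."""
    errors = np.asarray(errors, dtype=float)
    return np.log2(errors[:-1] / errors[1:])

# hypothetical max-errors obtained with dt, dt/2, dt/4
print(convergence_rates([3.2e-4, 5.9e-5, 1.05e-5]))
```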

From Tables 1 and 3, it can be seen that, for different values of \(\alpha \), the compact QSC method yields \((3 - \alpha)\)-order accuracy in time for both Examples 1 and 2.

To verify the spatial accuracy, the computational results of Examples 1 and 2 for different space steps h and different values of the fractional order \(\alpha \) are shown in Tables 2 and 4, respectively. One can observe that the convergence order of the new algorithm is four in the space direction in all cases.

Table 2 Numerical errors and convergence orders for Example 1 with different h when \( \Delta t =1/1000 \)
Table 3 Numerical errors and convergence orders for Example 2 with different \(\Delta t\) when \( h =1/200 \)
Table 4 Numerical errors and convergence orders for Example 2 with different h when \( \Delta t =1/1000 \)

Example 3

Consider the time-fractional Black–Scholes model [6]

$$\begin{aligned} \left\{ \begin{array}{l} \dfrac{\partial ^\alpha U(S,\tau )}{\partial \tau ^\alpha } + \dfrac{1}{2}\sigma ^2 S^2 \dfrac{\partial ^2 U(S,\tau )}{\partial S^2} + (r-E)S\dfrac{\partial U(S,\tau )}{\partial S} - rU(S,\tau ) = 0,\\ \quad (S,\tau )\in (S_a ,S_b ) \times (0,T), \\ \\ U(S,T) = z(S), \\ U(S_a ,\tau ) = p(\tau ),\,\quad U(S_b ,\tau ) =q(\tau ). \end{array} \right. \end{aligned}$$
(42)
Fig. 1 Curves of European put option with different \(\alpha \)

Fig. 2 Curves of European call option with different \(\alpha \)

Fig. 3 Curves of double barrier option with different \(\alpha \)

Fig. 4 Curves of European put option with different values of parameters

When \(z(S)=\mathrm{max}\{K-S,0\}\), \(p(\tau )=Ke^{-r(T-\tau )}\) and \(q(\tau )=E=0\), problem (42) describes a European put option. For the European call option, the initial and boundary conditions are correspondingly \(z(S)=\mathrm{max}\{S-K,0\}\), \(p(\tau )=E=0\) and \(q(\tau )=S_b-Ke^{-r(T-\tau )}\). Here the parameters are \(r=0.05\), \({\sigma =0.55}\), the strike price \(K=50\), \(S_a=0.1\ (a=-2.3)\), \(S_b=100\ (b=4.6)\) and \(T=1\) (year). Applying the compact QSC method, Figs. 1 and 2 illustrate the effect of the time-fractional derivative order \(\alpha \) on the option price for the European put option and the European call option, respectively. As can be seen from the two figures, the fractional order has little effect on the option price in the deep-in-the-money (\(S\ll K\)) and deep-out-of-the-money (\(S\gg K\)) cases, and a significant effect near at-the-money (\(S\approx K\)).
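
For concreteness, the terminal and boundary data of the put and call options used for Figs. 1 and 2 can be written as follows (a sketch with the parameter values listed above; the function names are ours):

```python
import numpy as np

K, r, T = 50.0, 0.05, 1.0          # strike, risk-free rate, maturity for Figs. 1 and 2

def put_terminal(S):               # z(S) = max(K - S, 0)
    return np.maximum(K - S, 0.0)

def put_left_boundary(tau):        # p(tau) = K * exp(-r (T - tau)),  q(tau) = 0
    return K * np.exp(-r * (T - tau))

def call_terminal(S):              # z(S) = max(S - K, 0)
    return np.maximum(S - K, 0.0)

def call_right_boundary(tau, Sb=100.0):   # q(tau) = Sb - K * exp(-r (T - tau)),  p(tau) = 0
    return Sb - K * np.exp(-r * (T - tau))
```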

When \(z(S)=\mathrm{max}\{S-K,0\}\) and \(p(\tau )=q(\tau )=0\), model (42) describes a time-fractional European double barrier knock-out call option. The parameters are \(r=0.03\), \({\sigma =0.55}\), \( S_a=3\ (a=1.1)\), \( S_b=15\ (b=2.7)\), \( T=1\) (year), the strike price \(K=10\) and the dividend yield \(E=0.01\). The curves of the double barrier option price with different \(\alpha \) are plotted in Fig. 3. For \(\alpha =1\) the above problem reduces to the classical Black–Scholes model. One can observe from the figure that the smaller the order \(\alpha \), the higher the peak of the curve when S is greater than the strike price K. This shows that, compared with the classical Black–Scholes model, the time-fractional Black–Scholes model can better capture jump-like movements of the option price.

We now take the European put option as an example to investigate the effect of different parameters on the option price in the time-fractional Black–Scholes model.

Example 4

Consider the time-fractional Black–Scholes model governing European put option

$$\begin{aligned} \left\{ {\begin{array}{l} \dfrac{{\partial ^\alpha U(S,\tau )}}{{\partial \tau ^\alpha }} + \dfrac{1}{2}\sigma ^2 S^2 \dfrac{{\partial ^2 U(S,\tau )}}{{\partial S^2 }} + rS\dfrac{{\partial U(S,\tau )}}{{\partial S}} \\ \\ \qquad - rU(S,\tau ) = 0,\quad (S,\tau ) \in (S_a ,S_b ) \times (0,T), \\ {U(S,T) = \mathrm{max}\{K-S,0\},\quad } \\ {U(S_a ,\tau ) = Ke^{-r(T-\tau )},\,\quad U(S_b ,\tau ) =0,\quad } \\ \end{array}} \right. \end{aligned}$$
(43)

with \(\alpha =0.5\), and \(S_a=0.1(a=-2.3), \,S_b=100(b=4.6)\).

Using the new algorithm proposed in Sect. 4, the curves of the European put option price for different values of the parameters are shown in Fig. 4a–d.

Figure 4a shows the influence of the volatility of the stock price on the option price, which confirms a well-known statement in the real financial world: high risk, high return. From Fig. 4b, it can be seen that the higher the interest rate, the lower the put option price. One can deduce from Fig. 4c that the option price goes up when the exercise price increases. Finally, Fig. 4d illustrates that when the stock price is much lower than the strike price, an option with a shorter expiration date is more profitable than an option with a longer expiration date, while for higher stock prices an option with a longer expiration date is more favorable.

The above results match what happens in the real market very well.

7 Conclusion

In this work, a compact QSC method for the time-fractional Black–Scholes model governing European option pricing has been studied. Firstly, by an exponential transformation the time-fractional Black–Scholes equation was transformed into a time-fractional sub-diffusion equation. Then the Caputo time-fractional derivative was approximated by the \(L1 - 2\) formula, and in the space direction we used a collocation method based on quadratic B-spline basis functions. Thus we constructed a new higher-order numerical method with convergence order \(O(\Delta t^{3-\alpha }+h ^4)\) for the time-fractional Black–Scholes model. Moreover, the error bound of the scheme was discussed. Finally, numerical examples showed the accuracy and effectiveness of the proposed technique. The extension of the method to Black–Scholes models with spatial fractional derivatives and to other fractional models will be our future work.