1 Introduction

The development of modern option pricing began with the publication of the Black–Scholes (BS) option pricing formula in 1973, which computes the value of a European option (Black and Scholes 1973; Merton 1973). The BS formula values a European option in terms of the underlying asset price, the exercise price, the volatility of the asset, and the expiration time of the option (Hull 1997; Wilmott et al. 1993). A European option can be exercised only at the expiration date, while an American option carries an early exercise privilege, that is, the holder may exercise the option prior to the expiration date. Since this additional right should not be worthless, we expect an American option to be worth more than its European counterpart; the extra value is called the early exercise premium (Kwok 1998; Wilmott 1998). Analytical solutions of BS models for American option pricing problems are seldom available, so a numerical technique must be used. During the last decades various numerical techniques have been investigated for solving these types of problems; we refer the interested reader to Amin and Khanna (1994), Barraquand and Pudet (1994), Broadie and Detemple (1996), Kalanatri et al. (2015), Kwok (2009), Ross (1999), San-Lin (2000) and Meng et al. (2014).

A fractional differential equation provides an interesting instrument for describing memory and hereditary properties of various materials and processes. This is the main advantage of fractional derivatives over classical integer-order models, in which such effects are in fact neglected. The advantages of fractional derivatives have become apparent in physics, chemistry, engineering, finance, and other sciences over the last decades (Podlubny 1999).

In this paper we use fractional stochastic dynamics of the stock exchange to obtain the FBS model. Since American option pricing under the FBS model involves a free boundary (the optimal exercise boundary) that is unknown, we use the quasi-stationary method to find \(\overline{S}(t)\) and obtain a fractional partial differential equation (FPDE) with a known boundary. We then introduce a stable and convergent finite difference method for solving the new problem.

The remainder of the paper is organized as follows: In Sect. 2, the FBS model for American put option pricing is introduced. To remove the free boundary, we use the quasi-stationary method in Sect. 3. In Sect. 4, we investigate stability and convergence of finite difference method for FBS model. In Sect. 5, the Newton interpolation is used to evaluate option pricing at some intermediate points. Finally, a brief conclusion is given in Sect. 6.

2 FBS Model for American Put Option Pricing

The stochastic differential equation of stock exchange dynamics is used in the form

$$\begin{aligned} dS= & {} rSdt +\sigma S b(t, \alpha )\nonumber \\= & {} rSdt +\sigma S\omega (t)(dt)^{\frac{\alpha }{2}} \end{aligned}$$
(1)

where \(\omega (t)\) is normalized Gaussian white noise, i.e., with zero mean and unit variance. In addition, we denote by r the interest rate, and by P(S,t) the American put option price. The details of obtaining the FBS model from (1) are explained in Jumarie (2008); the American put option pricing problem under the FBS model reads

$$\begin{aligned} \frac{\partial ^{\alpha } P}{\partial t^{\alpha }}= & {} \left( rP-rS\frac{\partial P}{\partial S}\right) \frac{t^{1-\alpha }}{(1-\alpha )!}-\frac{\alpha !}{2}\sigma ^2 S^2\frac{\partial ^2P}{\partial S^2}, \qquad S >\overline{S}(t), \quad 0 \le t < T,\;\;\quad \end{aligned}$$
(2)
$$\begin{aligned} P(S,T)= & {} \max (E-S,0), \qquad \qquad \qquad \qquad \qquad \qquad \quad \quad S \ge 0, \end{aligned}$$
(3)
$$\begin{aligned} \frac{\partial P}{\partial S}(\overline{S},t)= & {} -1, \end{aligned}$$
(4)
$$\begin{aligned} P(\overline{S}(t),t)= & {} E-\overline{S}(t), \end{aligned}$$
(5)
$$\begin{aligned} \lim _{S\rightarrow \infty } P(S,t)= & {} 0, \end{aligned}$$
(6)
$$\begin{aligned} \overline{S}(T)= & {} E, \end{aligned}$$
(7)
$$\begin{aligned} P(S,t)= & {} E-S, \quad 0\le S< \overline{S}(t), \end{aligned}$$
(8)

where \(\overline{S}(t)\) denotes the free boundary. The parameters \(\sigma \), r and E denote the volatility of the underlying asset, the interest rate and the exercise price of the option, respectively.
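As an illustration, the dynamics (1) can be simulated with a simple Euler-type scheme in which the term \(\omega (t)(dt)^{\frac{\alpha }{2}}\) is replaced by a standard normal draw scaled by \((\Delta t)^{\alpha /2}\). This is only a sketch (for \(\alpha =1\) it reduces to the classical Euler–Maruyama scheme), and the parameter values are hypothetical:

```python
import numpy as np

def simulate_paths(S0, r, sigma, alpha, T, n_steps, n_paths, seed=0):
    """Euler-type simulation of dS = r S dt + sigma S w(t) (dt)^(alpha/2)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    for _ in range(n_steps):
        Z = rng.standard_normal(n_paths)      # normalized Gaussian white noise
        S = S + r * S * dt + sigma * S * Z * dt ** (alpha / 2)
    return S

# hypothetical parameter values
ST = simulate_paths(S0=1.0, r=0.05, sigma=0.4, alpha=8/9, T=0.5,
                    n_steps=200, n_paths=10_000)
```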

Remark 1

The fractional Black–Scholes model of American option pricing reflects several considerations of the real market, because of the following advantages:

  (1) the fractional derivative is a generalization of the ordinary derivative,

  (2) the fractional derivative is a non-local operator, while the ordinary derivative is a local operator (Baleanu et al. 2012),

  (3) the FBS model based on fractional Brownian motion is more accurate than one based on ordinary Brownian motion, i.e., substituting \(b(t,\alpha )\) for \(b(t)\) as in (1) (Jumarie 2008).

3 Quasi-Stationary Method for Determining \(\overline{S}(t)\)

Setting \(\frac{\partial ^{\alpha } P}{\partial t^{\alpha }}=0\) in (2) leads to the second order ordinary differential equation

$$\begin{aligned} \left( rP-rS\frac{\partial P}{\partial S}\right) \frac{t^{1-\alpha }}{(1-\alpha )!}-\frac{\alpha !}{2}\sigma ^2 S^2\frac{\partial ^2P}{\partial S^2}=0 \end{aligned}$$
(9)

with general solution

$$\begin{aligned} P(S,t)=C_1S +C_2S^{-\frac{2rt^{1-\alpha }}{(1-\alpha )!\alpha !\sigma ^2}}. \end{aligned}$$
(10)

According to (6) we should set \(C_1=0,\) then

$$\begin{aligned} P(S,t)=C_2S^{-\frac{2rt^{1-\alpha }}{(1-\alpha )!\alpha !\sigma ^2}} \end{aligned}$$
(11)

and we use (4) to get

$$\begin{aligned} \frac{{\partial }P}{\partial S}(\overline{S}(t),t)=-C_2\frac{2rt^{1-\alpha }}{(1-\alpha )!\alpha !\sigma ^2}\overline{S}(t)^{-\frac{2rt^{1-\alpha }}{(1-\alpha )!\alpha !\sigma ^2}-1}=-1, \end{aligned}$$
(12)

which is solved for \( \overline{S}(t)\) as

$$\begin{aligned} \overline{S}(t)=\left( \frac{(1-\alpha )!\alpha !\sigma ^2}{2C_2 rt^{1-\alpha }}\right) ^{\left( \frac{(1-\alpha )!\alpha !\sigma ^2}{-2rt^{1-\alpha }-(1-\alpha )!\alpha !\sigma ^2}\right) }. \end{aligned}$$
(13)

Substituting (12) and (13) into (5), the exercise price E takes the form

$$\begin{aligned} E=G^{\frac{1}{-2rt^{1-\alpha }-(1-\alpha )! \alpha !\sigma ^2} }\left( C_2G^{-2rt^{1-\alpha }}+G^{(1-\alpha )!\alpha !\sigma ^2}\right) \end{aligned}$$
(14)

with

$$\begin{aligned} G=\frac{(1-\alpha )!\alpha !\sigma ^2}{2C_2rt^{1-\alpha }}. \end{aligned}$$
(15)

Put

$$\begin{aligned} K:= & {} 2rT^{1-\alpha }+(1-\alpha )! \alpha !\sigma ^2,\;\;\;O:=\frac{H}{J}, \end{aligned}$$
(16)
$$\begin{aligned} H:= & {} (1-\alpha )!\alpha !\sigma ^2,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;J:=2rT^{1-\alpha }, \end{aligned}$$
(17)

then

$$\begin{aligned} G=\frac{O}{C_2} \end{aligned}$$

and it follows from (5), (11), (13) and (14) that

$$\begin{aligned} \left( \frac{C_2}{O}\right) ^{\frac{1}{K} }\left( \left( \frac{HC_2}{ OJ}\right) \left( \frac{C_2}{O}\right) ^{J}+\left( \frac{O}{C_2}\right) ^{H}\right) -\left( \frac{C_2}{O}\right) ^{\left( \frac{H}{K}\right) }=0, \end{aligned}$$
(18)

which can be solved for \(C_2\) by using a suitable iterative method. Thus, the FBS model is formulated as the boundary value problem

$$\begin{aligned}&\frac{\partial ^{\alpha } P}{\partial t^{\alpha }}=\left( rP-rS\frac{\partial P}{\partial S}\right) \frac{t^{1-\alpha }}{(1-\alpha )!}-\frac{\alpha !}{2}\sigma ^2 S^2\frac{\partial ^2P}{\partial S^2},\nonumber \\&S >\left( \frac{H}{2C_2 rt^{1-\alpha }}\right) ^{\left( \frac{H}{-2rt^{1-\alpha }-H}\right) }, \quad 0 \le t < T, \end{aligned}$$
(19)
$$\begin{aligned}&P(S,T)=\max (E-S,0), \quad \quad \,\,\, S \ge 0, \end{aligned}$$
(20)
$$\begin{aligned}&P(S,t)=E-S, \qquad \qquad \quad \qquad \,\, 0\le S< \left( \frac{H}{2C_2 rt^{1-\alpha }}\right) ^{\left( \frac{H}{-2rt^{1-\alpha }-H}\right) }. \end{aligned}$$
(21)

4 Stability and Convergence of Finite Difference Method for FBS Model

This section concerns the conditions that must be satisfied for the solution of the finite difference equations to be a reasonably accurate approximation to the corresponding solution of the FBS model of American option pricing. The conditions are associated with two different but interrelated problems: the first concerns the stability of the exact solution of the finite difference equations; the second concerns the convergence of the finite difference equations to the solution of the FBS model.

Now we use a finite difference scheme for the derivatives on the right-hand side and the Grünwald–Letnikov approximation for the fractional derivative on the left-hand side of (19), and get

$$\begin{aligned}&(\Delta t)^{-\alpha }\sum _{i =0}^{k+1}g_i^{\alpha }P^{k+1-i}_j\nonumber \\&\quad =\left( rP_j^k-rS_j\frac{ P^k_j-P^k_{j-1}}{\Delta S}\right) \frac{t^{1-\alpha }}{(1-\alpha )!}-\frac{\alpha !}{2}\sigma ^2 S_j^2\frac{P_{j-1}^k -2P_j^k+P_{j+1}^k}{\Delta S^2}\nonumber \\&\qquad \quad S_j >\left( \frac{H}{2C_2 rt_k^{1-\alpha }}\right) ^{\left( \frac{H}{-2rt_k^{1-\alpha }-H}\right) }, \quad 0 \le t_k < T, \end{aligned}$$
(22)
$$\begin{aligned}&P(S_j,T)=\max (E-S_j,0), \end{aligned}$$
(23)
$$\begin{aligned}&P(S_j,t_k)=E-S_j, \quad 0\le S_j\le \left( \frac{H}{2C_2 rt_k^{1-\alpha }}\right) ^{\left( \frac{H}{-2rt_k^{1-\alpha }-H}\right) }, \end{aligned}$$
(24)

where

$$\begin{aligned} g^{\alpha }_i=\left( 1-\frac{1+\alpha }{i}\right) g^{\alpha }_{i-1},\quad i=1,2,3,\ldots ,\;\;g_0^{\alpha }=1. \end{aligned}$$
(25)
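A minimal sketch for generating the coefficients (25); as a sanity check, their partial sums tend to \(\sum _{i=0}^{\infty }g_i^{\alpha }=0\):

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov coefficients g_i^alpha from the recurrence (25)."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for i in range(1, n + 1):
        g[i] = (1.0 - (1.0 + alpha) / i) * g[i - 1]
    return g

g = gl_weights(8/9, 2000)
```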

Let \(t_k=k\Delta t\), \(k=0,1,2,\ldots ,n-1\), and \(S_j=j\Delta S\), \(j=1,2,\ldots ,m-1\), where \(\Delta t=\frac{T}{n}\) and \(\Delta S=\frac{S_{max}}{m}\) are the time and stock price steps, respectively, for \(0\le t\le T\). Therefore, Eq. (22) provides the recursive formula

$$\begin{aligned} P^k_{j+1}= & {} \left( \frac{2\Delta S(r\Delta S-rS_j)t^{1-\alpha }_k }{H S_j^2}+2\right) P^k_j \nonumber \\&-\frac{2(\Delta t)^{-\alpha }\Delta S^2}{\alpha !\sigma ^2 S_j^2}\sum _{i =0}^{k+1}g_i^{\alpha }P^{k+1-i}_j+\left( \frac{2r\Delta S t_k^{1-\alpha }}{H S_j}-1\right) P^k_{j-1}. \end{aligned}$$
(26)

for \(P^k_j\), \(j=1,2,\ldots ,m-1\).

It also follows from (24) for \(j=0\) and all time values that \(\mathbf {P}_0=E\) (since \(S_0=0\)), and if

$$\begin{aligned} S_1=\Delta S\le \left( \frac{H}{2C_2 rt_k^{1-\alpha }}\right) ^{\left( \frac{H}{-2rt_k^{1-\alpha }-H}\right) }, \end{aligned}$$

then from Eq. (21) we have \(P(S_1,t)=E-S_1\) and so

$$\begin{aligned} C\mathbf {P}_{j+1}= & {} A\mathbf {P}_j+B\mathbf {P}_{j-1},\nonumber \\ \mathbf {P}_0= & {} E,\;\;\;\mathbf {P}_1=E-S_1, \end{aligned}$$
(27)

where \(\mathbf {P}_j=[P^0_j,P^1_j,P^2_j ,\ldots ,P^{n}_j]^T\) and the \((n+1)\times (n+1)\) matrices \(A=(a_{ij})\), \(B=(b_{ij})\) and \(C=(c_{ij})\) are given by

$$\begin{aligned} a_{ij}=\left\{ \begin{array}{lllllll} Wg^{\alpha }_{i},&{} \quad i=j\ne n \\ 0, &{} \quad i\ne j\\ Wg^{\alpha }_{i}+V,&{} \quad i=j=n \end{array}\right. , \ b_{ij}=\left\{ \begin{array}{lllllll} 0,&{}\quad i=j\ne n \\ 0,&{}\quad i\ne j \\ D, &{} \quad i=j=n \end{array} \right. , \ c_{ij}=\left\{ \begin{array}{lllllll} 0,&{}\quad i=j\ne n \\ 0,&{} \quad i\ne j \\ 1, &{}\quad i=j=n \end{array} \right. \nonumber \\ \end{aligned}$$
(28)

with

$$\begin{aligned} W= & {} -\frac{2(\Delta t)^{-\alpha }\Delta S^2}{\alpha !\sigma ^2 S_j^2},\;\;D=\frac{2r\Delta S t_k^{1-\alpha }}{H S_j}-1,\;\;V=\frac{2\Delta S(r\Delta S-rS_j)t^{1-\alpha }_k }{H S_j^2}+2. \end{aligned}$$
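A sketch of assembling the matrices in (28). Note that W, D and V depend on j and k through \(S_j\) and \(t_k\), so the snippet fixes one pair (j, k); all parameter values are hypothetical:

```python
import numpy as np
from math import gamma

def assemble_ABC(alpha, sigma, r, Sj, tk, dt, dS, n):
    """Build the (n+1)x(n+1) matrices A, B, C of (28) for one fixed (j, k)."""
    fact = lambda x: gamma(x + 1)                  # interpret x! as Gamma(x+1)
    H = fact(1 - alpha) * fact(alpha) * sigma ** 2
    W = -2 * dt ** (-alpha) * dS ** 2 / (fact(alpha) * sigma ** 2 * Sj ** 2)
    D = 2 * r * dS * tk ** (1 - alpha) / (H * Sj) - 1
    V = 2 * dS * (r * dS - r * Sj) * tk ** (1 - alpha) / (H * Sj ** 2) + 2

    g = np.empty(n + 1)                            # weights from (25)
    g[0] = 1.0
    for i in range(1, n + 1):
        g[i] = (1 - (1 + alpha) / i) * g[i - 1]

    A = np.diag(W * g)                             # a_ii = W g_i for i < n
    A[n, n] = W * g[n] + V                         # a_nn = W g_n + V
    B = np.zeros((n + 1, n + 1)); B[n, n] = D      # b_nn = D
    C = np.zeros((n + 1, n + 1)); C[n, n] = 1.0    # c_nn = 1
    return A, B, C

# hypothetical parameter values for one grid point
A, B, C = assemble_ABC(alpha=8/9, sigma=0.4, r=0.05,
                       Sj=1.0, tk=0.25, dt=0.125, dS=0.2, n=4)
```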

Let \(\tilde{P}_{j}^k\) ( \(k=0,1,\ldots ,n \) ; \(j=0,1,\ldots , m\)) be approximations to \(P_{j}^k\), then the errors \(\varepsilon _{j}^k=\tilde{P}_{j}^k-P_{j}^k\) ( \(k=0,1,\ldots ,n \) , \(j=0,1,\ldots ,m\)) satisfy the recurrence relation

$$\begin{aligned} \varepsilon ^k_{j+1}= & {} \left( \frac{2\Delta S(r\Delta S-rS_j)t^{1-\alpha }_k }{H S_j^2}+2\right) \varepsilon ^k_j \nonumber \\&-\frac{2(\Delta t)^{-\alpha }\Delta S^2}{\alpha !\sigma ^2 S_j^2}\sum _{i =0}^{k+1}g_i^{\alpha } \varepsilon ^{k+1-i}_j+\left( \frac{2r\Delta S t_k^{1-\alpha }}{H S_j}-1\right) \varepsilon ^k_{j-1} \end{aligned}$$
(29)

or equivalently

$$\begin{aligned} C\mathbf {E}_{j+1}= & {} A\mathbf {E}_j+B\mathbf {E}_{j-1}\nonumber \\&\mathbf {E}_0, \mathbf {E}_1, \text{ given }, \end{aligned}$$
(30)

for \(\mathbf {E}_j=[\varepsilon _{j}^0,\varepsilon _{j}^1,\ldots ,\varepsilon _{j}^{n}]^T\) and \(\parallel \mathbf {E}_{j+1}\parallel _{\infty }=\displaystyle \max _{1\le k \le n-1} \mid \varepsilon _{j+1}^{k}\mid \).

Definition 1

(see Smith 1985; Yu and Tan 2003) If, for arbitrary given initial rounding errors \(\mathbf {E}_0\) and \(\mathbf {E}_1\), there exists a positive constant K, independent of \(\Delta S\) and \(\Delta t\), such that

$$\begin{aligned} \parallel \mathbf {E}_k\parallel \le K \max \{\parallel \mathbf {E}_0\parallel ,\parallel \mathbf {E}_1\parallel \}\;\;\text{ or }\;\;\parallel A^k\parallel \le K, \end{aligned}$$
(31)

then the difference approximation (27) is stable.

Lemma 1

(see Zhuang et al. 2009) The coefficients \(g^{\alpha }_i\) for \(i=0,1,2,\ldots \), that were defined in (25) satisfy

  (1) \(g^{\alpha }_0=1\), \( g^{\alpha }_1=-\alpha \) and \(g^{\alpha }_i >0\), \(i > 1\).

  (2) \(\sum ^{\infty }_{i=0}g^{\alpha }_i=0\) and \(\sum ^{l}_{i=0}g^{\alpha }_i <0\) for \(l=1,2,\ldots \).

Remark 2

Theorem 1

(Stability) If

$$\begin{aligned} \Delta S\le \left( \frac{H}{2C_2 rt_k^{1-\alpha }}\right) ^{\left( \frac{H}{-2rt_k^{1-\alpha }-H}\right) }, \end{aligned}$$

then the finite difference scheme (27) is stable and we have

$$\begin{aligned} \parallel \mathbf {E}_{j+1}\parallel _{\infty } \le K\max \{\parallel \mathbf {E} _0\parallel _{\infty },\parallel \mathbf {E} _1 \parallel _{\infty }\},\quad j=1,2,\ldots ,m-1, \end{aligned}$$

where K is a positive constant independent of \(\Delta t\) , \(\Delta S\) and j.

Proof

For \(k=0,1,\ldots ,n-1\), \(j=1,2,\ldots ,m-1\), using (29) we obtain

$$\begin{aligned} \mid \varepsilon ^k_{j+1}\mid\le & {} \left( \frac{2\Delta Sr(S_j-\Delta S)t^{1-\alpha }_k }{H S_j^2}+2\right) \mid \varepsilon ^k_j\mid \\&+\,\frac{2(\Delta t)^{-\alpha }\Delta S^2}{\alpha !\sigma ^2 S_j^2}\sum _{i =0,i\ne 1}^{k+1}g_{i}^{\alpha }\mid \varepsilon ^{k+1-i}_j\mid +\mid 1- \frac{2\Delta Sr t_k^{1-\alpha }}{H S_j}\mid \;\mid \varepsilon ^k_{j-1}\mid \\< & {} \left( \frac{2\Delta S r(S_j-\Delta S)t^{1-\alpha }_k }{H S_j^2}+2\right) \parallel \mathbf {E} _j \parallel _{\infty }\\&\qquad +\,\frac{2(\Delta t)^{-\alpha }\Delta S^2}{\alpha !\sigma ^2 S_j^2}\sum _{i =0}^{k+1}g_{i}^{\alpha } \parallel \mathbf {E} _j \parallel _{\infty }+\left( 1+ \frac{2\Delta S r t_k^{1-\alpha }}{H S_j}\right) \parallel \mathbf {E} _{j-1} \parallel _{\infty }. \end{aligned}$$

Since \(\sum _{i =0}^{k+1}g_{i}^{\alpha } <0,\) then

$$\begin{aligned} \mid \varepsilon ^k_{j+1}\mid < \left( \Delta S\frac{2r(S_j-\Delta S)t^{1-\alpha }_k }{H S_j^2}+2\right) \parallel \mathbf {E} _j \parallel _{\infty } +\,\left( \Delta S \frac{2r t_k^{1-\alpha }}{H S_j}+1\right) \parallel \mathbf {E} _{j-1} \parallel _{\infty }. \end{aligned}$$

If we set

$$\begin{aligned} L_1= & {} \max _{j}\left\{ \frac{2r(S_j-\Delta S)t^{1-\alpha }_k }{H S_j^2}+\frac{1}{\Delta S}\right\} ,\;\; L_2=\max _{j}\left\{ \frac{2r t_k^{1-\alpha }}{H S_j}\right\} , \end{aligned}$$

then

$$\begin{aligned}&\mid \varepsilon ^k_{j+1}\mid < \left( \Delta S L_1+1\right) \parallel \mathbf {E} _j \parallel _{\infty } +\left( \Delta S L_2+1\right) \parallel \mathbf {E} _{j-1} \parallel _{\infty }\\&\quad \le \left( \Delta S( L_1+L_2+\frac{1}{\Delta S})+1\right) ^{j}\max \{\parallel \mathbf {E} _0\parallel _{\infty },\parallel \mathbf {E} _1 \parallel _{\infty }\}\\&\quad \le e^{( L_1+L_2+\frac{1}{\Delta S}) S_{max}}\max \{\parallel \mathbf {E} _0\parallel _{\infty },\parallel \mathbf {E} _1 \parallel _{\infty }\}\\&\quad =K\max \{\parallel \mathbf {E} _0\parallel _{\infty },\parallel \mathbf {E} _1 \parallel _{\infty }\},j=1,2,\ldots ,m-1, \end{aligned}$$

where, we used the inequality

$$\begin{aligned}&\left( \Delta S( L_1+L_2+\frac{1}{\Delta S}) +1\right) ^j\le e^{( L_1+L_2+\frac{1}{\Delta S}) j\Delta S }\nonumber \\&\quad = e^{( L_1+L_2+\frac{1}{\Delta S}) S_j }\le e^{( L_1+L_2+\frac{1}{\Delta S}) S_{max}}. \end{aligned}$$
(32)

Thus

$$\begin{aligned} \parallel \mathbf {E} _{j+1}\parallel _{\infty }=\max _k\{\mid \varepsilon ^k_{j+1}\mid \} \le K\max \{\parallel \mathbf {E} _0\parallel _{\infty },\parallel \mathbf {E} _1 \parallel _{\infty }\} , \end{aligned}$$

by Definition 1; this proves the stability of (27). \(\square \)

To prove convergence of the scheme (27), let \(P(S_j, t_k)\) (\(k=0,1,2,\ldots ,n-1 \); \(j=1,2,\ldots ,m-1\)) be the exact solution of (19)–(21) at the mesh point \((S_j,t_k)\). Define

$$\begin{aligned} \epsilon _j^k=P(S_j ,t_k) -P^k_j,\;\;j=1,2,\ldots ,m-1; k=0,1,\ldots ,n-1 \end{aligned}$$

and

$$\begin{aligned} \mathbf {F}_j=(\epsilon _j^0,\epsilon _j^1,\ldots , \epsilon _j^{n})^T. \end{aligned}$$

Using \(\mathbf {F_0}=\mathbf {F_1}=0\) and substituting \(P_j^k=P(S_j,t_k)-\epsilon ^k_j \) into (26), we have

$$\begin{aligned} \epsilon ^k_{j+1}= & {} \left( \frac{2\Delta S(r\Delta S-rS_j)t^{1-\alpha }_k }{H S_j^2}+2\right) \epsilon ^k_j \nonumber \\&-\,\frac{2(\Delta t)^{-\alpha }\Delta S^2}{\alpha !\sigma ^2 S_j^2}\sum _{i =0}^{k+1}g_i^{\alpha } \epsilon ^{k+1-i}_j+\left( \frac{2r\Delta S t_k^{1-\alpha }}{H S_j}-1\right) \epsilon ^k_{j-1}, \end{aligned}$$
(33)

where \(k=0,1,2\ldots ,n-1 \); \(j=1,2,\ldots ,m-1\). Now, we prove the following convergence.

Theorem 2

(Convergence) Let (22) have the smooth solution \(P(S,t)\in C_{S,t}^{2,\alpha }(\Omega )\), and let \(P^k_j\) be the numerical solution computed by (27). Then \(P^k_j\) converges to \(P(S_j,t_k)\) if

$$\begin{aligned} \Delta S\le \left( \frac{H}{2C_2 rt_k^{1-\alpha }}\right) ^{\left( \frac{H}{-2rt_k^{1-\alpha }-H}\right) }. \end{aligned}$$

Proof

We have from (33)

$$\begin{aligned} \mid \epsilon ^k_{j+1}\mid\le & {} \left( \frac{2\Delta Sr(S_j-\Delta S)t^{1-\alpha }_k }{H S_j^2}+2\right) \mid \epsilon ^k_j\mid \nonumber \\&+\,\frac{2(\Delta t)^{-\alpha }\Delta S^2}{\alpha !\sigma ^2 S_j^2}\sum _{i =0,i\ne 1}^{k+1}g_{i}^{\alpha }\mid \epsilon ^{k+1-i}_j\mid +\mid \frac{2\Delta Sr t_k^{1-\alpha }}{H S_j}-1\mid \;\mid \epsilon ^k_{j-1}\mid \\< & {} \left( \frac{2\Delta S r(S_j-\Delta S)t^{1-\alpha }_k }{H S_j^2}+2\right) \parallel \mathbf {F} _j \parallel _{\infty }\\&+\,\frac{2(\Delta t)^{-\alpha }\Delta S^2}{\alpha !\sigma ^2 S_j^2}\sum _{i =0}^{k+1}g_{i}^{\alpha } \parallel \mathbf {F} _j \parallel _{\infty }+\left( \frac{2\Delta Sr t_k^{1-\alpha }}{H S_j}+1\right) \parallel \mathbf {F} _{j-1} \parallel _{\infty }. \end{aligned}$$

Since \(\sum _{i =0}^{k+1}g_{i}^{\alpha } <0,\) then

$$\begin{aligned}&\mid \epsilon ^k_{j+1}\mid < \left( \Delta S\frac{2r(S_j-\Delta S)t^{1-\alpha }_k }{H S_j^2}+2\right) \parallel \mathbf {F} _j \parallel _{\infty }\\&\qquad +\,\left( \Delta S \frac{2r t_k^{1-\alpha }}{H S_j}+1\right) \parallel \mathbf {F} _{j-1} \parallel _{\infty }. \end{aligned}$$

Now, we set

$$\begin{aligned} L_1= & {} \max _{j}\left\{ \frac{2r(S_j-\Delta S)t^{1-\alpha }_k }{H S_j^2}+\frac{1}{\Delta S}\right\} ,\;\;L_2=\max _{j}\left\{ \frac{2r t_k^{1-\alpha }}{H S_j}\right\} \end{aligned}$$

and use (32) to get

$$\begin{aligned}&\mid \epsilon ^k_{j+1}\mid < \left( \Delta S L_1+1\right) \parallel \mathbf {F} _j \parallel _{\infty } +\left( \Delta S L_2+1\right) \parallel \mathbf {F} _{j-1} \parallel _{\infty }\\&\quad \le \left( \Delta S( L_1+L_2+\frac{1}{\Delta S})+1\right) ^{j}\max \{\parallel \mathbf {F} _0\parallel _{\infty },\parallel \mathbf {F} _1 \parallel _{\infty }\}\\&\quad \le e^{( L_1+L_2+\frac{1}{\Delta S}) S_j}\max \{\parallel \mathbf {F} _0\parallel _{\infty },\parallel \mathbf {F} _1 \parallel _{\infty }\}\\&\quad \le e^{( L_1+L_2+\frac{1}{\Delta S}) S_{max}}\max \{\parallel \mathbf {F} _0\parallel _{\infty },\parallel \mathbf {F} _1 \parallel _{\infty }\}\\&\quad =K\max \{\parallel \mathbf {F} _0\parallel _{\infty },\parallel \mathbf {F} _1 \parallel _{\infty }\}, j=1,2,\ldots ,m-1. \end{aligned}$$

Since \(\mathbf {F}_0=\mathbf {F}_1=0\), this proves that \(P^k_j\) converges to \(P(S_j,t_k)\). \(\square \)

5 Solving FBS Model for American Put Option Pricing

Now we solve (22)–(24). Since the American put option price is only known at the end point (exercise time), in order to use a higher order Grünwald–Letnikov approximation we need the option price at some other points. To get these intermediate values, we use Newton's interpolation method; that is, for the points \(P_j^0=P(S_j ,0)\) and \(P_j^T=P(S_j,T)\), we get

$$\begin{aligned} P_j^{(1)}(\tau )=P^{0}_j+\frac{P^{T}_j-P^{0}_j}{T-0}(\tau -0), \end{aligned}$$
(34)

which yields

$$\begin{aligned} P_j^{(1)}\left( \frac{T}{2}\right) =\frac{P^{T}_j+P^{0}_j}{2}:=P^{\frac{T}{2}}_j \end{aligned}$$
(35)

and for the points \(P^0_j\), \(P^{\frac{T}{2}}_j\) and \(P^T_j\),

$$\begin{aligned} P^{(2)}_j(\tau )=P_j^{(1)}(\tau )+\frac{-4P^{\frac{T}{2}}_j+2P^{T}_j+2P^{0}_j}{T^2}(\tau -0)(\tau -T) \end{aligned}$$

and, since \(P^{\frac{T}{2}}_j\) in (35) is the midpoint of the linear interpolant, the quadratic coefficient vanishes, so that

$$\begin{aligned} P^{ (2)}_j(\tau )=P^{ (1)}_j(\tau ):=P^{\tau }_j. \end{aligned}$$

Then it follows from (34) that

$$\begin{aligned} P^{\frac{T}{4}}_j=\frac{P^{T}_j+3P^{0}_j}{4} \end{aligned}$$
(36)

and

$$\begin{aligned} P^{\frac{3T}{4} }_j=\frac{3P^{T}_j+P^{0}_j}{4}. \end{aligned}$$
(37)
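The quarter-point values (35)–(37) follow directly from the linear interpolant (34); a minimal sketch:

```python
def quarter_values(P0, PT):
    """Intermediate values from the linear interpolant, Eqs. (35)-(37)."""
    P_half = (PT + P0) / 2            # (35)
    P_quarter = (PT + 3 * P0) / 4     # (36)
    P_3quarter = (3 * PT + P0) / 4    # (37)
    return P_quarter, P_half, P_3quarter
```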

Proposition 1

For \(n=4\), the discrete form of the FBS model is in the form

$$\begin{aligned} \mathbf {UP}=\mathbf {WF}, \end{aligned}$$
(38)

where

$$\begin{aligned} U= & {} \left( \begin{array}{ccccccc} a_1&{} \quad b_1 &{}\\ c_2&{}\quad a_2 &{} \quad b_2&{}\quad &{} \quad &{} \\ &{}\quad c_3&{}\quad a_3 &{} \quad b_3&{} \\ &{}\quad &{}\ddots \quad &{}\ddots &{}\quad \ddots \\ &{}\quad &{}\quad &{}\quad c_8&{}\quad a_8 &{}\quad b_8\\ &{}\quad &{}\quad &{}\quad &{}\quad c_9&{}\quad a_9 &{} \quad b_9\\ &{} \quad &{} \quad &{}\quad &{}\quad &{} \quad c_{10}&{}\quad a_{10} \end{array} \right) , \ \nonumber \\ \mathbf {W}= & {} \left( \begin{array}{ccccccc} a'_1&{}\quad b'_1 &{}\\ c'_2&{} \quad a'_2 &{} b'_2&{}\quad &{}\quad &{} \\ &{}\quad c'_3&{}\quad a'_3 &{}\quad b'_3&{} \\ &{}\quad &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots \\ &{}\quad &{} \quad &{}\quad c'_8&{}\quad a'_8 &{}\quad b'_8\\ &{}\quad &{}\quad &{}\quad &{}\quad c'_9&{}\quad a'_9 &{}\quad b'_9\\ &{}\quad &{} \quad &{}\quad &{}\quad &{}\quad c'_{10}&{}\quad a'_{10} \end{array} \right) \\ \mathbf {P}= & {} \left( \begin{array}{ccccccc} P^{0}_1\\ P^{0}_2\\ P^{0}_3\\ \vdots \\ P^{0}_8\\ P^{0}_9\\ P^{0}_{10} \end{array} \right) , \ \ \ \mathbf {F}=\left( \begin{array}{ccccccc} e_1\\ e_2\\ e_3\\ \vdots \\ e_8\\ e_9 \\ e_{10} \end{array} \right) \end{aligned}$$

with

$$\begin{aligned} c_j= & {} \frac{1}{4}\left( \frac{rS_j}{\Delta S}\frac{(\frac{3T}{4})^{1-\alpha }}{(1-\alpha )!}-\frac{\alpha !}{2} \frac{\sigma ^2S_j^2}{(\Delta S )^2}\right) ,\;\;b_j=-\frac{\alpha !}{8}\frac{\sigma ^2S_j^2}{(\Delta S)^2},\\ a_j= & {} \alpha (\frac{T}{4})^{-\alpha }\left( \frac{1}{4}+\frac{(1-\alpha )}{4} +\frac{3(1-\alpha )(2-\alpha )}{4!}+\frac{(1-\alpha )(2-\alpha )(3-\alpha )}{4!}\right) \\&\quad +\,\frac{1}{4}\left( (r-\frac{rS_j}{\Delta S})\frac{(\frac{3T}{4})^{1-\alpha }}{(1-\alpha )!}+\frac{\alpha !\sigma ^2S_j^2}{(\Delta S)^2}\right) ,\\ c'_j= & {} -\frac{3}{4}\left( \frac{rS_j}{\Delta S}\frac{(\frac{3T}{4})^{1-\alpha }}{(1-\alpha )!}-\frac{\alpha !}{2} \frac{\sigma ^2 S_j^2}{(\Delta S )^2}\right) ,\;\;b'_j=\frac{3\alpha !}{8}\frac{\sigma ^2S_j^2}{(\Delta S)^2},\\ a'_j= & {} \left( \frac{T}{4}\right) ^{-\alpha }\left( 1- \frac{3\alpha }{4} -\frac{\alpha (1-\alpha )}{4}-\frac{\alpha (1-\alpha )(2-\alpha )}{4!}\right) \\&\quad -\,\frac{3}{4}\left( (r-\frac{rS_j}{\Delta S})\frac{(\frac{3T}{4})^{1-\alpha }}{(1-\alpha )!}+\frac{\alpha !\sigma ^2S_j^2}{(\Delta S)^2}\right) ,\\ e_j= & {} \max (E-S_j ,0), \end{aligned}$$

for \(j=1,2,\ldots ,10.\)

Fig. 1

(a) \(T=0.5\), \(\sigma =0.5\), \(r=0.05\), \(\alpha =\frac{8}{9}\); (b) \(T=0.5\), \(\sigma =0.4\), \(r=0.05\), \(\alpha =\frac{8}{9}\); (c) \(T=0.25\), \(\sigma =0.4\), \(r=0.05\), \(\alpha =\frac{9}{11}\); (d) \(T=0.75\), \(\sigma =0.32\), \(r=0.07\), \(\alpha =\frac{5}{9}\); (e) \(T=1\), \(\sigma =0.4\), \(r=0.01\), \(\alpha =\frac{8}{9}\); (f) \(T=1\), \(\sigma =0.6\), \(r=0.02\), \(\alpha =\frac{1}{2}\); (g) \(T=0.5\), \(\sigma =0.4\), \(r=0.1\), \(\alpha =\frac{2}{3}\)

Proof

We return to the Eq. (22) and obtain for \(n=4\),

$$\begin{aligned}&(\frac{T}{4})^{-\alpha }P_j^{T}-\frac{\alpha (1-\alpha )}{2} (\frac{T}{4})^{-\alpha }(\frac{P^{T}_j+P^{0}_j}{2})\\&\qquad -\frac{\alpha (1-\alpha )(2-\alpha )}{3!} (\frac{T}{4})^{-\alpha }(\frac{P^{T}_j+3P^{0}_j}{4})\\&\quad -\,\frac{\alpha (1-\alpha )(2-\alpha )(3-\alpha )}{4!} (\frac{T}{4})^{-\alpha }P_j^{0}\\&\quad =\left( (r-\frac{rS_j}{\Delta S})\frac{(\frac{3T}{4})^{1-\alpha }}{(1-\alpha )!}+\frac{\alpha !\sigma ^2S_j^2}{(\Delta S)^2}+\alpha (\frac{T}{4} )^{-\alpha }\right) (\frac{3P^{T}_j+P^{0}_j}{4})\\&\qquad +\left( \frac{rS_j}{\Delta S}\frac{(\frac{3T}{4})^{1-\alpha }}{(1-\alpha )!}-\frac{\alpha !}{2} \frac{\sigma ^2S_j^2}{(\Delta S )^2}\right) (\frac{3P^{T}_{j-1}+P^{0}_{j-1}}{4})\\&\qquad -\,\frac{\alpha !}{2}\frac{\sigma ^2S_j^2}{(\Delta S)^2}(\frac{3P^{T}_{j+1}+P^{0}_{j+1}}{4}), \end{aligned}$$

or, equivalently

$$\begin{aligned}&-\frac{3}{4}\left( \frac{rS_j}{\Delta S}\frac{(\frac{3T}{4})^{1-\alpha }}{(1-\alpha )!}-\frac{\alpha !}{2} \frac{\sigma ^2S_j^2}{(\Delta S )^2}\right) P^{T}_{j-1}\\&\quad \left[ \left( \frac{T}{4}\right) ^{-\alpha }\left( 1- \frac{3\alpha }{4} -\frac{\alpha (1-\alpha )}{4}-\frac{\alpha (1-\alpha )(2-\alpha )}{4!}\right) \right. \\&\qquad \left. -\,\frac{3}{4}\left( (r-\frac{rS_j}{\Delta S})\frac{(\frac{3T}{4})^{1-\alpha }}{(1-\alpha )!}+\frac{\alpha !\sigma ^2S_j^2}{(\Delta S)^2}\right) \right] P_j^{T}+\frac{3\alpha !}{8}\frac{\sigma ^2 S_j^2}{(\Delta S)^2} P^{T}_{j+1}\\&\quad =\left[ \alpha \left( \frac{T}{4}\right) ^{-\alpha }\left( \frac{1}{4}+\frac{(1-\alpha )}{4} +\frac{3(1-\alpha )(2-\alpha )}{4!}+\frac{(1-\alpha )(2-\alpha )(3-\alpha )}{4!}\right) \right. \\&\qquad \left. +\,\frac{1}{4}\left( (r-\frac{rS_j}{\Delta S})\frac{(\frac{3T}{4})^{1-\alpha }}{(1-\alpha )!}+\frac{\alpha !\sigma ^2S_j^2}{(\Delta S)^2}\right) \right] P^{0}_j\\&\qquad +\frac{1}{4}\left( \frac{rS_j}{\Delta S}\frac{(\frac{3T}{4})^{1-\alpha }}{(1-\alpha )!}-\frac{\alpha !}{2} \frac{\sigma ^2S_j^2}{(\Delta S )^2}\right) P^{0}_{j-1}-\frac{\alpha !}{8}\frac{\sigma ^2S_j^2}{(\Delta S)^2}P^{0}_{j+1}, \end{aligned}$$

which can be rearranged as (38). \(\square \)

The algorithm of the proposed method can be summarized as follows: solve (18) for \(C_2\) by a suitable iterative method, evaluate the free boundary \(\overline{S}(t)\) from (13), compute the intermediate values (35)–(37) by Newton interpolation, and finally assemble and solve the linear system (38).
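Under the assumptions of Proposition 1 (\(n=4\), \(j=1,2,\ldots ,10\)), the system (38) can be assembled and solved as sketched below; the coefficients follow the expressions given after (38), the parameters match panel (b) of Fig. 1, and the exercise price \(E=1\) is an assumption made for illustration:

```python
import numpy as np
from math import gamma

def solve_fbs_n4(T, sigma, r, alpha, E=1.0, Smax=2.0, m=10):
    """Assemble the tridiagonal system U P = W F of (38) and solve for P^0."""
    fact = lambda x: gamma(x + 1)                  # interpret x! as Gamma(x+1)
    dS = Smax / m
    S = dS * np.arange(1, m + 1)
    q = (3 * T / 4) ** (1 - alpha) / fact(1 - alpha)
    s2 = sigma ** 2 * S ** 2 / dS ** 2

    c = 0.25 * (r * S / dS * q - 0.5 * fact(alpha) * s2)
    b = -fact(alpha) / 8 * s2
    a = (alpha * (T / 4) ** (-alpha)
         * (0.25 + (1 - alpha) / 4
            + 3 * (1 - alpha) * (2 - alpha) / 24
            + (1 - alpha) * (2 - alpha) * (3 - alpha) / 24)
         + 0.25 * ((r - r * S / dS) * q + fact(alpha) * s2))
    cp, bp = -3 * c, -3 * b                        # c'_j = -3 c_j, b'_j = -3 b_j
    ap = ((T / 4) ** (-alpha)
          * (1 - 3 * alpha / 4 - alpha * (1 - alpha) / 4
             - alpha * (1 - alpha) * (2 - alpha) / 24)
          - 0.75 * ((r - r * S / dS) * q + fact(alpha) * s2))

    U = np.diag(a) + np.diag(b[:-1], 1) + np.diag(c[1:], -1)
    Wmat = np.diag(ap) + np.diag(bp[:-1], 1) + np.diag(cp[1:], -1)
    F = np.maximum(E - S, 0.0)                     # e_j = max(E - S_j, 0)
    return np.linalg.solve(U, Wmat @ F)

# parameters of Fig. 1b; E = 1 is an assumed exercise price
P0 = solve_fbs_n4(T=0.5, sigma=0.4, r=0.05, alpha=8/9)
```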

For some special values of the parameters T, \(\sigma \), r, \(\alpha \) and with \(S_{max}=2\), the results are plotted in Fig. 1a–g.

Remark 3

For a call (put) option, either European or American, when the current asset price is higher, the option has a strictly higher (lower) chance of being exercised, and when exercised it induces a higher (lower) cash inflow. Therefore, the call (put) option price is an increasing (decreasing) function of the asset price, that is,

$$\begin{aligned} C(S_2, t_0) \ge C(S_1, t_0), \quad S_2 > S_1, \end{aligned}$$

or

$$\begin{aligned} P(S_2, t_0) \le P(S_1, t_0), \quad S_2 > S_1. \end{aligned}$$
(39)

Fig. 1a–g shows that our results are consistent with this monotonicity for \(t_0=0\), in accordance with Remark 3.

6 Conclusion

We introduced the FBS model obtained from a fractional stochastic differential equation. We then investigated the stability and convergence of a finite difference scheme and showed that it is stable and convergent under a condition on the space step. Using Newton interpolation for the intermediate time levels, we solved the American put option problem. Finally, we presented numerical results in several figures that satisfy the monotonicity property of American put option prices.