1 Introduction

In option pricing theory (Wilmott et al. 1994; Crack 2009), the Black–Scholes (B–S) model, first introduced by Black and Scholes (1973), is the best-known continuous-time mathematical model. The B–S model describes the behavior of an asset price through a linear parabolic partial differential equation for the value of an option. An option is an agreement that allows the holder to buy (call option) or sell (put option) an underlying asset at a specified future time (expiration or maturity time) for a specified price (strike or exercise price). According to the dates on which they may be exercised, options are of two types: those that can be exercised only on the maturity date (European options) and those that can be exercised on or prior to the expiration date (American options). By reducing the B–S equation to the heat equation, a closed-form solution to the B–S equation, known as the B–S formula, can be found (Jiang 2003). In addition, many authors have solved the B–S equation via different methods (Bohner and Zheng 2009; Gulkac 2010; Jodar et al. 2005).

In the past decades, fractional calculus has played a very important role in various fields such as physics, mechanics, electricity, and economics (see, for example, Kilbas et al. 2006; Podlubny 1999; Oldham and Spanier 1974; Miller and Ross 2003, and the references therein). Indeed, many phenomena are described by differential equations containing fractional-order derivatives. Recently, with the increasing applications of fractional differential equations (FDEs), researchers have studied the B–S equation in this form as well. Wyss (2000) derived a time-fractional B–S equation to evaluate European call options. In some recent papers, analytical approximate solutions to the B–S problem have been obtained using methods such as the homotopy perturbation method and the homotopy analysis method (Kumar et al. 2014), the variational iteration method (Ahmad et al. 2013), and the reconstruction of variational iteration method (RVIM) (Akrami and Erjaee 2015). By considering a fractional stochastic differential equation model and using Ito’s Lemma together with the fractional Taylor series, Jumarie derived two fractional B–S equations and obtained their solutions (Jumarie 2010). Recently, Song and Wang (2013) solved Jumarie’s time-fractional B–S equation for European put options using the finite-difference method.

Jumarie’s fractional B–S equations were derived from a stochastic differential equation with fractional Brownian motion, which describes the noise memory effect of the underlying asset price. In this case, the memory effect is measured by a memory parameter, namely the Hurst index, denoted by H \(( 0< H < 1)\). If \(0< H < 1/2\), the stock price is called a short-range memory process; if \(1/2< H < 1\), a long-range memory process; and if \(H = 1/2\), it has no dependence and no memory effect. It is well known that fractional calculus is one of the best tools to describe memory effects (Das 2008). Indeed, memory effects include the trend memory, which can be described using fractional derivatives. Therefore, through fractional calculus, we can depict the memory process of the increment in the classical stochastic differential equation. In Li et al. (2014), by considering a fractional order \(\alpha\) both in the differential of the underlying asset price, dS(t), and in dt, a new fractional-order stochastic differential equation with classical Brownian motion was constructed, which describes the trend memory effect of the underlying asset price. In this article, a new time-fractional B–S equation will be derived from this new fractional stochastic differential equation, and the RVIM will then be applied to solve the derived FDE model of the B–S equation.

2 Fractional Calculus

This section states some concepts of fractional calculus that will be needed later. We follow the definitions and conventions used by Jumarie (2010).

Definition 1

Suppose that \(f:\mathfrak {R}\longrightarrow \mathfrak {R}, x \longrightarrow f(x)\) is a continuous (but not necessarily differentiable) function, and let \(h>0\) denote a constant discretization span. Then, the fractional difference of order \(\alpha \in \mathfrak {R}\), \(0 < \alpha \le 1\) of f(x) is defined as follows:

$$\begin{aligned} \Delta ^{\alpha }f(x)=\textstyle \sum _{k=0}^{\infty }(-1)^{k}\left( {\begin{array}{c}\alpha \\ k\end{array}}\right) f\left[ x+(\alpha -k)h\right] , \end{aligned}$$
(1)

where \(\textstyle \left( {\begin{array}{c}\alpha \\ k\end{array}}\right) =\dfrac{\Gamma (\alpha + 1)}{k! \Gamma (\alpha - k +1)}\) and \(\Gamma (.)\) is the Gamma function.
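As a quick numerical illustration (ours, not part of the original text), the generalized binomial coefficients in Eq. (1) can be evaluated directly from the Gamma function; the helper name `frac_binom` is our own:

```python
from math import gamma, factorial

def frac_binom(alpha, k):
    """Generalized binomial coefficient Gamma(alpha+1)/(k! * Gamma(alpha-k+1)).

    Valid when alpha - k + 1 is not a non-positive integer (math.gamma
    has poles there, e.g. for integer alpha with k > alpha)."""
    return gamma(alpha + 1) / (factorial(k) * gamma(alpha - k + 1))

# First few weights of the fractional difference (1) for alpha = 1/2
weights = [frac_binom(0.5, k) for k in range(4)]
```

For \(\alpha = 1/2\), the computed weights match the hand-computed values \(1, 1/2, -1/8, 1/16\).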

Definition 2

(Modified Riemann–Liouville Derivative). For a function f as in Definition 1, the fractional derivative of order \(\alpha\) is defined as follows:

$$\begin{aligned} f^{(\alpha )}(x) := \dfrac{d^{\alpha }f(x)}{\mathrm{d}x^{\alpha }}= \left\{ \begin{array}{lll} \dfrac{1}{\Gamma (-\alpha )}\int _{0}^{x}(x-\xi )^{-\alpha -1}(f(\xi )-f(0))\mathrm{d}\xi , &\alpha < 0&, \\ \dfrac{1}{\Gamma (1-\alpha )}\dfrac{\mathrm{d}}{\mathrm{d}x}\int _{0}^{x}(x-\xi )^{-\alpha }(f(\xi )-f(0))\mathrm{d}\xi , &0 <\alpha \le 1,& \\ \dfrac{d^{n}\left( f^{(\alpha - n)}(x)\right) }{\mathrm{d}x^{n}}, &n < \alpha \le n+1,& \end{array} \right. \end{aligned}$$
(2)

where n denotes a positive integer.
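For a function with \(f(0)=0\), the modified Riemann–Liouville derivative in the second case of Eq. (2) coincides with the classical Riemann–Liouville derivative, so it can be checked numerically; here is a small sketch (ours) using mpmath's `differint` for \(f(x)=x^{2}\), whose derivative of order \(\alpha\) is \(\Gamma(3)/\Gamma(3-\alpha)\,x^{2-\alpha}\) by the power rule:

```python
from mpmath import mp, differint, gamma

mp.dps = 25
alpha = mp.mpf('0.5')
x = mp.mpf('2')

# Riemann-Liouville derivative of f(t) = t^2 of order 1/2 at x = 2;
# since f(0) = 0, this equals the modified R-L derivative of Eq. (2).
numeric = differint(lambda t: t**2, x, alpha)

# Power rule implied by Eq. (2): D^alpha x^2 = Gamma(3)/Gamma(3-alpha) * x^(2-alpha)
exact = gamma(3) / gamma(3 - alpha) * x**(2 - alpha)
```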

Proposition 1

For a function f as in Definition 1, the following equality holds:

$$\begin{aligned} f^{(\alpha )}(x)=\lim _{h\rightarrow 0}\frac{\Delta ^{\alpha }f(x)}{h^{\alpha }}. \end{aligned}$$
(3)

Proof

For the proof, see Jumarie (1993).

Proposition 2

Assume that a continuous function \(f:\mathfrak {R}\longrightarrow \mathfrak {R}, x \longrightarrow f(x)\) has fractional derivative of order \(k \alpha\), for any positive integer k and any \(\alpha\), \(0 < \alpha \le 1\). Then, the following equality holds:

$$\begin{aligned} f(x+h)=\textstyle \sum _{k=0}^{\infty }\dfrac{h^{\alpha k}}{\Gamma (1+\alpha k)}f^{(\alpha k)}(x), \end{aligned}$$
(4)

where \(f^{(\alpha k)}(x)\) denotes the derivative of order \(\alpha k\) of f(x).

Proof

For the proof, see Jumarie (2005, 2006).

Corollary 1

By assuming the conditions of Proposition 2, for \(0< \alpha \le 1\), we have

$$\begin{aligned} d ^{\alpha }f\left( x\right) \cong \Gamma \left( 1+\alpha \right) d f\left( x\right) . \end{aligned}$$
(5)

Proof

From Eq. (4), one obtains

$$\begin{aligned} f(x+h) = f (x) + \dfrac{h^{\alpha }}{\Gamma (1+\alpha )}f^{(\alpha )}(x) + \textstyle \sum _{k=2}^{\infty }\dfrac{h^{\alpha k}}{\Gamma (1+\alpha k)}f^{(\alpha k)}(x). \end{aligned}$$
(6)

Now, according to the definition of the first-order forward difference operator, \(\Delta f(x) = f(x+h) - f(x)\), we have

$$\begin{aligned} \Delta f(x) = \dfrac{h^{\alpha }}{\Gamma (1+\alpha )}f^{(\alpha )}(x) +\textstyle \sum _{k=2}^{\infty }\dfrac{h^{\alpha k}}{\Gamma (1+\alpha k)}f^{(\alpha k)}(x). \end{aligned}$$
(7)

Multiplying both sides of (7) by \(\dfrac{\Gamma (1 + \alpha )}{h^{\alpha }}\) and taking limit as \(h\longrightarrow 0\) yields

$$\begin{aligned} \Gamma \left( 1 + \alpha \right) \lim _{h\rightarrow 0}\frac{\Delta f\left( x\right) }{h^{\alpha }} = f^{\left( \alpha \right) }\left( x\right) . \end{aligned}$$
(8)

Therefore, according to (3) and (8), we get

$$\begin{aligned} \Delta ^{\alpha }f(x) \cong \Gamma \left( 1 + \alpha \right) \Delta f(x), \end{aligned}$$
(9)

or in a differential form

$$\begin{aligned} d^{\alpha } f(x) \cong \Gamma \left( 1 + \alpha \right) \mathrm{d}f(x). \end{aligned}$$
(10)

Corollary 2

As a relation between \(dx^{\alpha }\) and dx, the following equality holds:

$$\begin{aligned} dx^{\alpha } = \Gamma (1+\alpha )\Gamma (2-\alpha )x^{\alpha -1} dx, \quad 0< \alpha \le 1. \end{aligned}$$
(11)

Proof

By assuming \(\gamma > 0\), according to Definition 2, for \(0< \alpha \le 1\), the following equality holds:

$$\begin{aligned} d^{\alpha }\left( x^{\gamma }\right) = \Gamma \left( \gamma +1\right) \Gamma ^{-1}\left( \gamma +1-\alpha \right) x^{\gamma -\alpha }\mathrm{d}x^{\alpha }. \end{aligned}$$
(12)

For \(\gamma =1\), according to Eq. (10), we get Eq. (11).

Corollary 3

The following equality holds:

$$\begin{aligned} f_{x}^{(\alpha )} [u(x)] = \Gamma (2-\alpha ) u^{\alpha - 1}(x) f^{(\alpha )}_{u}[u(x)] u^{(\alpha )}_{x}(x). \end{aligned}$$
(13)

Proof

By considering the differential form of \(f_{x}^{(\alpha )} [u(x)]\), we have

$$\begin{aligned} f_{x}^{(\alpha )} [u(x)] = \dfrac{d^{\alpha } f[u(x)]}{\mathrm{d}x^{\alpha }} = \dfrac{d^{\alpha } f[u(x)]}{\mathrm{d}u^{\alpha }} \dfrac{\mathrm{d}u^{\alpha }}{\mathrm{d}x^{\alpha }}. \end{aligned}$$
(14)

Replacing \(\mathrm{d}u^{\alpha }\) by \(d^{\alpha }u\) in Eq. (14) and according to Eq. (12) for \(\gamma =1\), we get

$$\begin{aligned} f_{x}^{(\alpha )} [u(x)]& = \dfrac{d^{\alpha } f[u(x)]}{\mathrm{d}u^{\alpha }}\dfrac{\Gamma (2-\alpha )u^{\alpha - 1}d^{\alpha }u}{d^{\alpha }x}\\ & = \Gamma (2-\alpha ) u^{\alpha - 1}(x) f^{(\alpha )}_{u}[u(x)] u^{(\alpha )}_{x}(x). \end{aligned}$$

Proposition 3

Let \(f: \mathfrak {R}\longrightarrow \mathfrak {R}, x \longrightarrow f(x)\) denote a continuous function. Then, the Laplace transform of the modified Riemann–Liouville derivative of order \(\alpha\), \(0 < \alpha \le 1\), is

$$\begin{aligned} L \lbrace f^{(\alpha )}(x)\rbrace = s^{\alpha }L \lbrace f(x)\rbrace -s^{\alpha - 1}f(0). \end{aligned}$$
(15)

Proof

Using Definition 2 for \(0 <\alpha \le 1\), the Laplace transform of a derivative, and the convolution theorem, one gets

$$\begin{aligned} L[f^{(\alpha )}(x)]& = \dfrac{1}{\Gamma (1- \alpha )} L \left[ \dfrac{\mathrm{d}}{\mathrm{d}x}\int _{0}^{x}(x - \xi )^{- \alpha } ( f(\xi ) - f(0)) \mathrm{d}\xi \right] \nonumber \\& = \dfrac{s}{\Gamma (1- \alpha )}L\left[ \int _{0}^{x}(x - \xi )^{- \alpha } ( f(\xi ) - f(0)) \mathrm{d}\xi \right] \nonumber \\& = \dfrac{s}{\Gamma (1- \alpha )}\left( L[x^{- \alpha }] L[f(x) - f(0)]\right) \nonumber \\& = s^{\alpha }L[f(x)] - s^{\alpha - 1} f(0). \end{aligned}$$
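Equation (15) can also be verified numerically. The sketch below (ours, with illustrative values of \(\alpha\) and s) uses the power rule \(f^{(\alpha)}(x)=x^{1-\alpha}/\Gamma(2-\alpha)\) for \(f(x)=x\) and computes the Laplace integral by quadrature:

```python
from math import gamma, exp
from scipy.integrate import quad

alpha, s = 0.6, 1.5   # illustrative values, 0 < alpha <= 1, s > 0

# Modified R-L derivative of f(x) = x (power rule): x^(1-alpha)/Gamma(2-alpha)
f_alpha = lambda x: x**(1 - alpha) / gamma(2 - alpha)

# Left-hand side of Eq. (15): L{f^(alpha)}(s), computed by quadrature
lhs, _ = quad(lambda x: exp(-s * x) * f_alpha(x), 0, float('inf'))

# Right-hand side: s^alpha * L{x}(s) - s^(alpha-1) * f(0), with L{x} = 1/s^2
rhs = s**alpha / s**2 - s**(alpha - 1) * 0.0
```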

Theorem 1

Consider a fractional partial differential equation (FPDE) as follows:

$$\begin{aligned} f(x,y,u)\dfrac{\partial ^{\alpha }u}{\partial x^{\alpha }}+g(x,y,u)\dfrac{\partial u}{\partial y}=h(x,y,u), \quad 0 < \alpha \le 1, \end{aligned}$$

where \(u : R^{2} \longrightarrow R, (x,y) \longrightarrow u(x,y)\). Then, the auxiliary (characteristic) system associated with the FPDE is

$$\begin{aligned} \dfrac{dx^{\alpha }}{f} = \dfrac{d^{\alpha }y}{g} = \dfrac{du^{\alpha }}{h}. \end{aligned}$$

Proof

For the proof, see Jumarie (2007).

3 Reconstruction of Variational Iteration Method

To illustrate a basic idea of the RVIM, we consider the following fractional partial differential equation (FPDE) with variable coefficients:

$$\begin{aligned} \dfrac{\partial ^{\alpha }u(x,t)}{\partial t^{\alpha }} = \textstyle \sum _{j=0}^{2} a_{j}(x,t)\dfrac{\partial ^{j} u(x,t)}{\partial x^{j}}, \quad 0 < \alpha \le 1 \end{aligned}$$
(16)

where \(\partial ^{\alpha }/\partial t^{\alpha }\) denotes fractional derivative in modified Riemann–Liouville sense and \(\partial ^{0}u(x,t)/\partial x^{0}=u(x,t)\), with initial condition \(u(x,0)=f(x), a< x< b, 0 < t \le T.\)

Now, by taking Laplace transform on both sides of Eq. (16) with respect to time variable t and considering Proposition 3, we get

$$\begin{aligned} s^{\alpha }L \lbrace u(x,t) \rbrace -s^{\alpha - 1}u(x,0) = L \left\{ \textstyle \sum _{j=0}^{2} a_{j}(x,t)\dfrac{\partial ^{j} u(x,t)}{\partial x^{j}} \right\} . \end{aligned}$$

Then

$$\begin{aligned} L \lbrace u(x,t) \rbrace = \dfrac{1}{s}f(x) + \dfrac{1}{s^{\alpha }}L \left\{ \textstyle \sum _{j=0}^{2} a_{j}(x,t)\dfrac{\partial ^{j} u(x,t)}{\partial x^{j}} \right\} . \end{aligned}$$
(17)

Taking the inverse Laplace transform of both sides of Eq. (17) and using the convolution theorem, we get

$$\begin{aligned} u(x,t)& = L^{-1} \textstyle \left\{ \dfrac{1}{s}f(x)\right\} + L^{-1} \left\{ \dfrac{1}{s^{\alpha }}L \left\{ \sum _{j=0}^{2} a_{j}(x,t)\dfrac{\partial ^{j} u(x,t)}{\partial x^{j}} \right\} \right\} \\& = f(x) + \textstyle \left( \dfrac{t^{\alpha - 1}}{\Gamma (\alpha )}\right) *\left( \sum _{j=0}^{2} a_{j}(x,t)\dfrac{\partial ^{j} u(x,t)}{\partial x^{j}} \right) \\& = f(x)+ \dfrac{1}{\Gamma (\alpha )}\int _{0}^{t}(t - \xi )^{\alpha - 1}\left( \textstyle \sum _{j=0}^{2} a_{j}(x,\xi )\dfrac{\partial ^{j} u(x,\xi )}{\partial x^{j}}\right) \mathrm{d}\xi . \end{aligned}$$

Now, we can construct an iteration formula as follows:

$$\begin{aligned} u_{n + 1}(x,t) = f(x) + \dfrac{1}{\Gamma (\alpha )}\int _{0}^{t}(t - \xi )^{\alpha - 1}\left( \textstyle \sum _{j=0}^{2} a_{j}(x,\xi )\dfrac{\partial ^{j} u_{n}(x,\xi )}{\partial x^{j}}\right) \mathrm{d}\xi . \end{aligned}$$
(18)

Therefore, the RVIM provides a solution for problem (16) as follows:

$$\begin{aligned} u(x,t) = \lim _{n \longrightarrow \infty } u_{n}(x,t). \end{aligned}$$
(19)

Obviously, the RVIM derives a series solution. That is, if we define an operator \(\psi [u]\) and components \(v_{n}\), \(n = 1,2,\ldots\), such that

$$\begin{aligned} \psi [u]=\dfrac{1}{\Gamma (\alpha )}\int _{0}^{t}(t - \xi )^{\alpha - 1}\left( \textstyle \sum _{j=0}^{2} a_{j}(x,\xi )\dfrac{\partial ^{j} u(x,\xi )}{\partial x^{j}}\right) \mathrm{d}\xi \end{aligned}$$
(20)

and

$$\begin{aligned} v_{0}=u_{0}, \quad v_{n}=\psi \textstyle \left[ \sum _{j=0}^{n-1}v_{j}\right] -\left( \sum _{j=1}^{n-1}v_{j}\right) . \end{aligned}$$
(21)

Then, according to Eqs. (18), (20), and (21), we have \(u_{0}=v_{0}, u_{n}=\sum _{j=0}^{n}v_{j}\) for \(n=1,2,...\) . Consequently, according to (19), we have \(u(x,t)=\sum _{j=0}^{\infty }v_{j}(x,t).\)

The convergence analysis of the RVIM has been discussed in Hesameddini and Rahimi (2015); therefore, we omit it here.
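As a minimal worked example of iteration (18) (a toy problem of our own, not from the paper), consider \(\partial^{\alpha}u/\partial t^{\alpha} = \partial u/\partial x\) with \(u(x,0)=x\) and \(\alpha=1/2\); the iteration reaches the exact solution \(u = x + t^{\alpha}/\Gamma(1+\alpha)\) after one step:

```python
import sympy as sp

x, t, xi = sp.symbols('x t xi', positive=True)
alpha = sp.Rational(1, 2)
f = x  # initial condition u(x, 0) = x

def rvim_step(u_prev):
    # Iteration (18) for d^alpha u/dt^alpha = du/dx:
    # u_{n+1} = f(x) + (1/Gamma(alpha)) * int_0^t (t-xi)^(alpha-1) du_prev/dx dxi
    integrand = (t - xi)**(alpha - 1) * sp.diff(u_prev, x).subs(t, xi)
    return f + sp.integrate(integrand, (xi, 0, t)) / sp.gamma(alpha)

u1 = sp.simplify(rvim_step(f))    # x + sqrt(t)/Gamma(3/2) = x + 2*sqrt(t)/sqrt(pi)
u2 = sp.simplify(rvim_step(u1))   # fixed point: u2 equals u1
```

Since \(\partial u_{1}/\partial x = 1 = \partial u_{0}/\partial x\), all later iterates coincide with \(u_{1}\), so the limit (19) is reached exactly.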

4 A New Fractional Black–Scholes Equation

In this section, we will derive a new time-fractional Black–Scholes equation and then discuss the solution by means of the RVIM under a terminal condition for the European call option.

4.1 Derivation

We first suppose that the underlying asset price, S(t), satisfies a fractional-order stochastic differential equation (FSDE) as follows:

$$\begin{aligned} d^{\alpha }S(t)=\mu S(t)\mathrm{d}t^{\alpha }+\sigma S(t)\mathrm{d}B(t), \end{aligned}$$
(22)

where \(\alpha =2H, 0< H < 1\), H is the Hurst index, an exponent describing the memory of a time series (Hurst 1951), the constants \(\mu\) and \(\sigma\) are the expected return rate and the volatility, respectively, and B(t) is classical Brownian motion. Here, we only consider the case \(0.25 < H \le 0.5\), i.e., \(0.5 < \alpha \le 1\). For \(\alpha =1\), Eq. (22) reduces to a classical stochastic differential equation. Note that in FSDE (22), the fractional derivative of order \(\alpha\) describes the trend memory of the asset price.

To derive our new model based on FSDE (22), we first build a new fractional version of Ito’s formula.

Let V(S,t) denote the value of an option on an underlying asset whose price satisfies Eq. (22), and assume that V(S,t) has a fractional derivative of order \(\alpha\) with respect to t and is twice differentiable with respect to S. Then, according to Jumarie’s fractional Taylor series (Jumarie 2010),

$$\begin{aligned} \mathrm{d}V = \dfrac{1}{\Gamma (1+\alpha )}\dfrac{\partial ^{\alpha }V}{\partial t^{\alpha }}\mathrm{d}t^{\alpha } + \dfrac{\partial V}{\partial S}\mathrm{d}S +\dfrac{1}{2}\dfrac{\partial ^{2}V}{\partial S^{2}}(\mathrm{d}S)^{2}. \end{aligned}$$
(23)

Considering Maruyama’s notation for Brownian motion, \(\mathrm{d}B(t)=w(t)\mathrm{d}t^{1/2}\), and Eq. (5), FSDE (22) yields

$$\begin{aligned} \mathrm{d}S = \dfrac{\mu S}{\Gamma (1+\alpha )}\mathrm{d}t^{\alpha }+ \dfrac{\sigma S}{\Gamma (1+\alpha )}w(t)\mathrm{d}t^{1/2}, \end{aligned}$$
(24)

where w(t) is a normalized Gaussian white noise with \(E[w(t)]=0\) and \(E[w^{2}(t)]=1\). Here, E is mathematical expectation. We thus obtain (Li et al. 2014)

$$\begin{aligned} (\mathrm{d}S)^{2} &= \dfrac{\mu ^{2}S^{2}}{\Gamma ^{2}(1+\alpha )} \mathrm{d}t^{2 \alpha } + \dfrac{\sigma ^{2}S^{2}}{\Gamma ^{2}(1+\alpha )}w^{2}(t)\mathrm{d}t\\ &\quad +\dfrac{2 \mu \sigma S^{2}}{\Gamma ^{2}(1+\alpha )}w(t)\mathrm{d}t^{\alpha + 1/2}\longrightarrow \dfrac{\sigma ^{2}S^{2}}{\Gamma ^{2}(1+\alpha )}\mathrm{d}t. \end{aligned}$$
(25)
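Equation (24) also suggests a simple simulation scheme. The sketch below (ours; the Euler-type discretization, step size, and parameters are illustrative assumptions, not taken from the paper) replaces \(\mathrm{d}t^{\alpha}\) by \((\Delta t)^{\alpha}\) and \(w(t)\mathrm{d}t^{1/2}\) by a standard normal draw times \(\sqrt{\Delta t}\):

```python
import random
from math import gamma, sqrt, exp

def simulate_fsde(S0, mu, sigma, alpha, T, n_steps, seed=0):
    """Illustrative Euler-type scheme for Eq. (24):
    dS = mu*S/Gamma(1+alpha) dt^alpha + sigma*S/Gamma(1+alpha) w(t) dt^(1/2)."""
    rng = random.Random(seed)
    dt = T / n_steps
    g = gamma(1 + alpha)
    S = S0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, 1.0) * sqrt(dt)
        S += mu * S * dt**alpha / g + sigma * S * dW / g
    return S

# Sanity check: with sigma = 0 and alpha = 1 the scheme is deterministic
# Euler for dS = mu*S dt, so S(T) should approach S0*exp(mu*T).
approx = simulate_fsde(100.0, 0.05, 0.0, 1.0, 1.0, 10000)
```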

Substituting (24) and (25) into Eq. (23), we get

$$\begin{aligned} \mathrm{d}V &= \dfrac{1}{\Gamma (1+\alpha )}\dfrac{\partial ^{\alpha } V}{\partial t^{\alpha }}\mathrm{d}t^{\alpha } + \dfrac{\mu S}{\Gamma (1+\alpha )}\dfrac{\partial V}{\partial S}\mathrm{d}t^{\alpha } \\ &\quad + \dfrac{\sigma ^{2}S^{2}}{2\Gamma ^{2}(1+\alpha )}\dfrac{\partial ^{2}V}{\partial S^{2}}\mathrm{d}t+\dfrac{\sigma S}{\Gamma (1+\alpha )}\dfrac{\partial V}{\partial S}\mathrm{d}B(t). \end{aligned}$$
(26)

Replacing \(\mathrm{d}t^{\alpha }\) by \(\mathrm{d}t\), according to Eq. (11), a fractional version of Ito’s formula will be produced as follows:

$$\begin{aligned} \mathrm{d}V& = \left( \Gamma (2 - \alpha ) t^{\alpha - 1}\dfrac{\partial ^{\alpha }V}{\partial t^{\alpha }} + \mu \Gamma (2 - \alpha )t^{\alpha - 1}S \dfrac{\partial V}{\partial S} + \dfrac{\sigma ^{2} S^{2}}{2\Gamma ^{2}(1+\alpha )}\dfrac{\partial ^{2}V}{\partial S^{2}}\right) \mathrm{d}t\nonumber \\&+ \dfrac{\sigma S}{\Gamma (1+\alpha )}\dfrac{\partial V}{\partial S}\mathrm{d}B(t). \end{aligned}$$
(27)

Now, we are ready to derive our new time-fractional B–S equation. In this derivation, we suppose that the underlying asset pays no dividends, there are no transaction costs or taxes, and the market is arbitrage free. Thus, we first construct a riskless portfolio \(\Pi\) of V(S,t) and S by the following expression:

$$\begin{aligned} \Pi =V- \delta S, \end{aligned}$$
(28)

where \(\delta\) denotes the number of shares of the underlying asset; it should be chosen such that \(\mathrm{d}\Pi (t) = r \Pi (t)\mathrm{d}t\), where the constant r is the risk-free interest rate. Therefore

$$\begin{aligned} \mathrm{d}V - \delta \mathrm{d}S = r \Pi \mathrm{d}t = r\left( V - \delta S\right) \mathrm{d}t. \end{aligned}$$
(29)

Substituting (27) into (29) and due to the fact that

$$\begin{aligned} \mathrm{d}S= \mu S \Gamma (2 - \alpha ) t^{\alpha - 1}\mathrm{d}t + \dfrac{\sigma S}{\Gamma (1 + \alpha )}\mathrm{d}B(t), \end{aligned}$$

we calculate

$$\begin{aligned} \begin{array}{lll} \left( \Gamma (2 - \alpha ) t^{\alpha - 1}\dfrac{\partial ^{\alpha }V}{\partial t^{\alpha }} + \mu \Gamma (2 - \alpha )t^{\alpha - 1}S \dfrac{\partial V}{\partial S} + \dfrac{\sigma ^{2} S^{2}}{2\Gamma ^{2}(1 + \alpha )}\dfrac{\partial ^{2}V}{\partial S^{2}}-\delta \mu S \Gamma (2 - \alpha )t^{\alpha -1}\right) \mathrm{d}t \\ +\left( \dfrac{\sigma S}{\Gamma (1 + \alpha )}\dfrac{\partial V}{\partial S}-\delta \dfrac{\sigma S}{\Gamma (1 + \alpha )} \right) \mathrm{d}B(t)= r\left( V-\delta S \right) \mathrm{d}t. \end{array} \end{aligned}$$
(30)

Since the right-hand side of Eq. (30) is risk free, the coefficient of the random term \(\mathrm{d}B(t)\) on the left-hand side must be zero. That is, we need to choose \(\delta = \partial V/\partial S\). With this substitution, Eq. (30) provides the following Black–Scholes equation in the form of a fractional partial differential equation:

$$\begin{aligned} \dfrac{\partial ^{\alpha }V}{\partial t^{\alpha }}=\left( rV - \dfrac{\sigma ^{2} S^{2}}{2\Gamma ^{2}(1 + \alpha )}\dfrac{\partial ^{2}V}{\partial S^{2}} - rS\dfrac{\partial V}{\partial S}\right) \dfrac{t^{1 - \alpha }}{\Gamma (2 - \alpha )} . \end{aligned}$$
(31)

Note that by taking \(\alpha = 1\) in Eq. (31), we recover the standard B–S equation.
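This reduction can be confirmed symbolically (a check of ours): substituting \(\alpha = 1\) into the right-hand side of Eq. (31) gives exactly the standard B–S operator, since \(\Gamma(2)=\Gamma(1)=1\) and \(t^{1-\alpha}=1\):

```python
import sympy as sp

a, t, S, r, sigma = sp.symbols('alpha t S r sigma', positive=True)
V = sp.Function('V')(S, t)

# Right-hand side of Eq. (31)
rhs = (r*V - sigma**2*S**2/(2*sp.gamma(1 + a)**2)*sp.diff(V, S, 2)
       - r*S*sp.diff(V, S)) * t**(1 - a) / sp.gamma(2 - a)

# Right-hand side of the standard Black-Scholes equation (for dV/dt)
classical = r*V - sigma**2*S**2/2*sp.diff(V, S, 2) - r*S*sp.diff(V, S)

diff_at_1 = sp.simplify(rhs.subs(a, 1) - classical)   # zero at alpha = 1
```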

To see the difference between our version of the B–S equation in Eq. (31) and its counterparts, we state two main models that have been derived previously.

  1.

    Wyss (2000) derived a B–S equation in the form of

    $$\begin{aligned} \dfrac{\partial ^{\alpha }V}{\partial t^{\alpha }}= rV - \dfrac{\sigma ^{2}}{2}S^{2}\dfrac{\partial ^{2}V}{\partial S^{2}} - rS\dfrac{\partial V}{\partial S}. \end{aligned}$$
  2.

    Jumarie (2010) derived the following time-fractional B–S equation:

    $$\begin{aligned} \dfrac{\partial ^{\alpha } V}{\partial t^{\alpha }} = \left( rV -rS\dfrac{\partial V}{\partial S}\right) \dfrac{t^{1-\alpha }}{\Gamma ( 2 - \alpha )}-\dfrac{\Gamma (1 + \alpha )}{2}\sigma ^{2}S^{2}\dfrac{\partial ^{2}V}{\partial S^{2}}, \end{aligned}$$

    where S(t) satisfies the following stochastic differential equation with fractional Brownian motion (fBm), which describes the noise memory of the asset price:

    $$\begin{aligned} \mathrm{d}S(t) = \mu S(t)\mathrm{d}t+\sigma S(t)w(t)\mathrm{d}t^{\alpha /2}, \quad 0< \alpha \le 1. \end{aligned}$$

4.2 Solution

To solve Eq. (31) for the European call option, we consider the terminal condition \(V(S,T) = \mathrm{Max} \{S-E,0\}, 0 \le S < \infty\), where T is the expiration date and E is the strike price. We first remove the rV term in Eq. (31) by defining V(S,t) in the form \(V(S,t) = e^{-r(T - t)}U(S,t)\). Thus, we get the following equation:

$$\begin{aligned} \dfrac{\partial ^{\alpha }U}{\partial t^{\alpha }}=\left( - \dfrac{\sigma ^{2} S^{2}}{2\Gamma ^{2}(1 + \alpha )}\dfrac{\partial ^{2}U}{\partial S^{2}} - rS\dfrac{\partial U}{\partial S}\right) \dfrac{t^{1 - \alpha }}{\Gamma (2 - \alpha )} . \end{aligned}$$
(32)

Now, to obtain a fractional PDE from Eq. (32) without the coefficients S and \(S^{2}\), we make the change of variable \(x = \mathrm{log} S\). With this change, Eq. (32) reads

$$\begin{aligned} \dfrac{\partial ^{\alpha }U}{\partial t^{\alpha }}=\Bigg (\left( \dfrac{\sigma ^{2}}{2\Gamma ^{2}(1 + \alpha )}-r\right) \dfrac{\partial U}{\partial x} - \dfrac{\sigma ^{2}}{2\Gamma ^{2}(1 + \alpha )}\dfrac{\partial ^{2} U}{\partial x^{2}}\Bigg )\dfrac{t^{1 - \alpha }}{\Gamma (2 - \alpha )}. \end{aligned}$$
(33)

To guess the general form of the solution of Eq. (33), we refer to the following equation:

$$\begin{aligned} \dfrac{\partial ^{\alpha }\overline{U}}{\partial t^{\alpha }}+\left( r - \dfrac{\sigma ^{2}}{2\Gamma ^{2}(1 + \alpha )}\right) \dfrac{t^{1 - \alpha }}{\Gamma (2 - \alpha )}\dfrac{\partial \overline{U}}{\partial x}=0. \end{aligned}$$
(34)

To find the general solution of Eq. (34), according to Theorem 1, we consider the auxiliary system associated with it as follows:

$$\begin{aligned} \dfrac{\mathrm{d}t^{\alpha }}{1} = \dfrac{d^{\alpha }x}{\left( r - \dfrac{\sigma ^{2}}{2\Gamma ^{2}(1 + \alpha )}\right) \dfrac{t^{1 - \alpha }}{\Gamma (2 - \alpha )}}=\dfrac{\mathrm{d} \overline{U}^{\alpha }}{0}. \end{aligned}$$
(35)

Zero in the last fraction means that \(\overline{U}\) is constant along characteristics. Therefore, according to Eqs. (5) and (11), we get the general solution of Eq. (34) in the following form:

$$\begin{aligned} \overline{U}(x,t) = F\Bigg (x+ \left( \dfrac{\sigma ^{2}}{2\Gamma ^{2}(1 + \alpha )}-r\right) t\Bigg ), \end{aligned}$$
(36)

where F is an arbitrary function. Eq. (36) suggests looking for U(x,t) in the form:

$$\begin{aligned} U(x,t)= R(u,t), \end{aligned}$$
(37)

where \(u = x + \left( r-\dfrac{\sigma ^{2}}{2\Gamma ^{2}(1 + \alpha )}\right) \left( T - t\right)\). Substituting Eq. (37) into Eq. (33) according to Eq. (36), we get the following equation:

$$\begin{aligned} \dfrac{\partial ^{\alpha } R}{\partial t^{\alpha }}= -m t^{1-\alpha }\dfrac{\partial ^{2} R}{\partial u^{2}}, \end{aligned}$$
(38)

where \(m = \sigma ^{2}/\left( 2\Gamma ^{2}(1 + \alpha )\Gamma (2 - \alpha )\right)\) and the terminal condition is \(R(u,T) = \mathrm{Max}\{e^{u} - E,0\}\).

Now, to obtain an initial condition, we make the change of variable \(\tau = T - t\). Therefore, according to Eq. (13), we get

$$\begin{aligned} \dfrac{\partial ^{\alpha } R}{\partial \tau ^{\alpha }}& = \Gamma (2 - \alpha ) ( T - \tau )^{\alpha - 1} \dfrac{\partial ^{\alpha } R}{\partial t^{\alpha }} \dfrac{\partial ^{\alpha } (T - \tau )}{\partial \tau ^{\alpha }}\nonumber \\& = \Gamma (2 - \alpha ) ( T - \tau )^{\alpha - 1} \dfrac{\partial ^{\alpha } R}{\partial t^{\alpha }}\left( 0 - \dfrac{\tau ^{1 - \alpha }}{\Gamma (2 - \alpha )}\right) . \end{aligned}$$

In this case, Eq. (38) will be transformed into the following equation:

$$\begin{aligned} \tau ^{\alpha -1}(T - \tau )^{1 - \alpha }\dfrac{\partial ^{\alpha }R}{\partial \tau ^{\alpha }}= m(T - \tau )^{1 - \alpha } \dfrac{\partial ^{2}R}{\partial u^{2}}, \end{aligned}$$

or

$$\begin{aligned} \dfrac{\partial ^{\alpha } R}{\partial \tau ^{\alpha }}= m \tau ^{1-\alpha }\dfrac{\partial ^{2} R}{\partial u^{2}}, \end{aligned}$$
(39)

with initial condition \(R(u,0) = \mathrm{Max} \{e^{u} - E,0\}\).

Now, by taking Laplace transform on both sides of Eq. (39) with respect to time variable \(\tau\) and considering Eq. (15), we get

$$\begin{aligned} L \lbrace R(u,\tau ) \rbrace = \dfrac{1}{s}R(u,0) + \dfrac{1}{s^{\alpha }}L \left\{ m \tau ^{1-\alpha }\dfrac{\partial ^{2} R}{\partial u^{2}}\right\} . \end{aligned}$$
(40)

Taking the inverse Laplace transform of both sides of Eq. (40) and using the convolution theorem, we get

$$\begin{aligned} R(u,\tau ) = R(u,0)+\dfrac{m}{\Gamma (\alpha )}\int _{0}^{\tau }(\tau - \xi )^{\alpha - 1}\xi ^{1-\alpha }\dfrac{\partial ^{2} R(u,\xi )}{\partial u^{2}}\mathrm{d}\xi . \end{aligned}$$
(41)

Now, we can construct an iteration formula as follows:

$$\begin{aligned} R_{n}(u,\tau ) &= R_{0}(u,\tau )\\ & \quad +\dfrac{m}{\Gamma (\alpha )}\int _{0}^{\tau }(\tau - \xi )^{\alpha - 1}\xi ^{1-\alpha }\dfrac{\partial ^{2} R_{n-1}(u,\xi )}{\partial u^{2}}\mathrm{d}\xi , \quad n = 1,2, ... , \end{aligned}$$
(42)

where \(R_{0}(u,\tau ) = \mathrm{Max} \{e^{u} - E,0\}\). Thus, from Eq. (42), approximations are obtained as follows:

$$\begin{aligned} R_{n}(u,\tau ) &= \mathrm{Max} \{e^{u} - E,0\}\\ &\quad + e^{u}\sum _{j=1}^{n}\dfrac{\Gamma (2-\alpha )\cdots \Gamma (j+1-\alpha )}{\Gamma (2)\cdots \Gamma (j+1)}(m \tau )^{j}, \quad n = 1,2, \ldots . \end{aligned}$$
(43)

Note that in the above approximations, we have used the following integral formula:

$$\begin{aligned} \dfrac{1}{\Gamma (\alpha )}\int _{a}^{x}(x - t)^{\alpha - 1} (t - a)^{\beta } \mathrm{d}t = \dfrac{\Gamma (\beta + 1)}{\Gamma (\alpha + \beta + 1)}(x - a)^{\alpha + \beta }, \quad \beta > -1. \end{aligned}$$
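This is the Euler Beta integral; as a sanity check (ours, with illustrative parameter values), it can be verified with SciPy's algebraic-weight quadrature:

```python
from math import gamma
from scipy.integrate import quad

alpha, beta, a, x = 0.7, 0.5, 0.0, 2.0   # illustrative values, beta > -1

# The integrand (x-t)^(alpha-1) * (t-a)^beta is handled via the algebraic
# weight w(t) = (t-a)^wvar[0] * (x-t)^wvar[1]
val, _ = quad(lambda t: 1.0, a, x, weight='alg', wvar=(beta, alpha - 1))
lhs = val / gamma(alpha)
rhs = gamma(beta + 1) / gamma(alpha + beta + 1) * (x - a)**(alpha + beta)
```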

Now, according to Eq. (43) and the following theorem, we will find the solution of Eq. (39).

Theorem 2

Let \(\Omega = \left\{ (u,\tau ) : \vert u \vert \le u_{max}, 0 \le \tau \le T \right\}\) be the solution domain of Eq. (39), where \(u_{max}\) is a realistic and practical approximation of infinity, and assume \(mT < 1\). Then, there exists \(R^{*}(u,\tau )\) that satisfies Eq. (39).

Proof

First, we find an upper bound for \(\left| R_{n}(u,\tau ) - R_{n -1}(u,\tau )\right|\), \(n=1,2,\ldots\), on \(\Omega\). We can rewrite Eq. (43) for \(n = 1,2,\ldots\) as follows:

$$\begin{aligned} R_{n}(u,\tau ) = R_{n-1}(u,\tau ) + e^{u}\dfrac{\Gamma (2-\alpha )\cdots \Gamma (n+1-\alpha )}{\Gamma (2) \cdots \Gamma (n+1)}(m \tau )^{n}. \end{aligned}$$
(44)

Therefore

$$\begin{aligned} \left| R_{n}(u,\tau ) - R_{n -1}(u,\tau )\right| \le e^{u_\mathrm{{max}}}(mT)^{n}. \end{aligned}$$
(45)

Since \(\Omega\) is compact and each member of the sequence \(\{R_{n}(u,\tau )\}_{n = 0}^{\infty }\) is continuous on \(\Omega\), we have \(\{R_{n}(u,\tau )\}_{n = 0}^{\infty } \subseteq C(\Omega )\), where \(C(\Omega )\) denotes the set of continuous and bounded functions on \(\Omega\). Since \(C(\Omega )\) with the supremum norm is a complete metric space, every Cauchy sequence in \(C(\Omega )\) is convergent. Therefore, it is sufficient to show that the sequence \(\{R_{n}(u,\tau )\}_{n = 0}^{\infty }\) is a Cauchy sequence. To see this, suppose \(n\ge m\). Then, according to Eq. (45) and the assumption \(\theta = mT < 1\), we get

$$\begin{aligned} \left| R_{n}(u,\tau ) - R_{m}(u,\tau )\right| \le e^{u_\mathrm{{max}}}\sum _{j = m}^{n -1}\theta ^{j+1} \le e^{u_\mathrm{{max}}}\sum _{j = m}^{\infty }\theta ^{j+1} = \dfrac{e^{u_\mathrm{{max}}}\theta ^{m+1} }{1 - \theta }. \end{aligned}$$
(46)

Since \(\lim _{m \longrightarrow \infty } e^{u_\mathrm{{max}}}\theta ^{m+1}/(1-\theta ) =0\), for every \(\varepsilon > 0\), there is an integer N such that \(m \ge N\) implies \(e^{u_\mathrm{{max}}}\theta ^{m+1}/(1-\theta ) < \varepsilon\). Therefore, if \(n \ge m \ge N\), then \(\left| R_{n}(u,\tau ) - R_{m}(u,\tau )\right| < \varepsilon\) and \(\Vert R_{n}(u,\tau ) - R_{m}(u,\tau ) \Vert \le \varepsilon\), where \(\Vert . \Vert\) is the supremum norm. That is, the sequence \(\{R_{n}(u,\tau )\}_{n = 0}^{\infty }\) is a Cauchy sequence in \(C(\Omega )\), and thus, it is convergent. In other words, there is \(R^{*}(u,\tau )\) such that \(\lim _{n\longrightarrow \infty }R_{n}(u,\tau )= R^{*}(u,\tau )\). Note that, according to Eq. (43), we can write \(R^{*}(u,\tau )\) as follows:

$$\begin{aligned} R^{*}(u,\tau )= \mathrm{{Max}}\{e^{u} - E,0\}+ e^{u}\sum _{j=1}^{\infty }\dfrac{\Gamma (2-\alpha )\cdots \Gamma (j+1-\alpha )}{\Gamma (2)\cdots \Gamma (j+1)}(m \tau )^{j}. \end{aligned}$$
(47)

Now, we also verify that Eq. (47) is an exact solution of fractional PDE (39). First note that according to Eqs. (20) and (21), we have

$$\begin{aligned} \psi [R]=\dfrac{1}{\Gamma (\alpha )}\int _{0}^{\tau }(\tau - \xi )^{\alpha - 1}\xi ^{1 - \alpha } \left( m \dfrac{\partial ^{2} R}{\partial u^{2}}\right) \mathrm{d}\xi , \end{aligned}$$
(48)
$$\begin{aligned} v_{0}=R_{0}, \quad v_{n}=\psi \textstyle \left[ \sum _{j=0}^{n-1}v_{j}\right] -\left( \sum _{j=1}^{n-1}v_{j}\right) . \end{aligned}$$
(49)

Then, as we discussed in Section 3, Eq. (47) is equal to \(\sum _{j=0}^{\infty }v_{j}\). Therefore, it is sufficient to show that \(\sum _{j=0}^{\infty }v_{j}\) satisfies Eq. (39).

To start with, according to the definition of \(\psi [R]\) in Eq. (48), we obtain the following equality:

$$\begin{aligned} \dfrac{\partial ^{\alpha }\psi [R]}{\partial \tau ^{\alpha }} = m \tau ^{1 - \alpha } \dfrac{\partial ^{2} R}{\partial u^{2}}. \end{aligned}$$
(50)

Since \(\sum _{j=0}^{\infty }v_{j}\) is convergent, we have

$$\begin{aligned} \lim _{j \rightarrow \infty }v_{j} = 0, \quad \sum _{j = 0}^{\infty }(v_{j+1}-v_{j}) = \lim _{j \rightarrow \infty }(v_{j+1}-v_{0})= -v_{0}. \end{aligned}$$
(51)

Thus, Eq. (49) yields

$$\begin{aligned} v_{1} = \psi [v_{0}], \quad v_{j+1} = \psi \left[ \sum _{k = 0}^{j} v_{k}\right] - \psi \left[ \sum _{k = 0}^{j-1} v_{k}\right] ,\, j = 1, 2,... . \end{aligned}$$
(52)

Now, according to Eqs. (50) and (52), we get

$$\begin{aligned} \dfrac{\partial ^{\alpha }(v_{1} - v_{0})}{\partial \tau ^{\alpha }} = \dfrac{\partial ^{\alpha }(\psi [v_{0}] - v_{0})}{\partial \tau ^{\alpha }}= m \tau ^{1 - \alpha } \dfrac{\partial ^{2} v_{0}}{\partial u^{2}} - \dfrac{\partial ^{\alpha }v_{0}}{\partial \tau ^{\alpha }}, \end{aligned}$$
(53)
$$\begin{aligned} \dfrac{\partial ^{\alpha }(v_{j +1} - v_{j})}{\partial \tau ^{\alpha }}& = \dfrac{\partial ^{\alpha }}{\partial \tau ^{\alpha }}\psi \left[ \sum _{k = 0}^{j} v_{k}\right] - \dfrac{\partial ^{\alpha }}{\partial \tau ^{\alpha }}\psi \left[ \sum _{k = 0}^{j-1} v_{k}\right] -\dfrac{\partial ^{\alpha }v_{j}}{\partial \tau ^{\alpha }}\nonumber \\& = m \tau ^{1 - \alpha } \dfrac{\partial ^{2}}{\partial u^{2}}\sum _{k = 0}^{j} v_{k} - m \tau ^{1 - \alpha } \dfrac{\partial ^{2}}{\partial u^{2}}\sum _{k = 0}^{j-1} v_{k} -\dfrac{\partial ^{\alpha }v_{j}}{\partial \tau ^{\alpha }} \nonumber \\& = m \tau ^{1 - \alpha } \dfrac{\partial ^{2}v_{j}}{\partial u^{2}}-\dfrac{\partial ^{\alpha }v_{j}}{\partial \tau ^{\alpha }}, \quad j = 1,2,\ldots . \end{aligned}$$
(54)

From Eqs. (53) and (54), we get the following result:

$$\begin{aligned} \dfrac{\partial ^{\alpha }}{\partial \tau ^{\alpha }}\sum _{j = 0}^{\infty }(v_{j+1} - v_{j}) = m \tau ^{1 - \alpha } \dfrac{\partial ^{2}}{\partial u^{2}}\sum _{j = 0}^{\infty }v_{j} - \dfrac{\partial ^{\alpha }}{\partial \tau ^{\alpha }}\sum _{j = 0}^{\infty }v_{j}. \end{aligned}$$
(55)

On the other hand, Eq. (51) implies that

$$\begin{aligned} \dfrac{\partial ^{\alpha }}{\partial \tau ^{\alpha }}\sum _{j = 0}^{\infty }(v_{j+1} - v_{j}) = \dfrac{\partial ^{\alpha }v_{0}}{\partial \tau ^{\alpha }}=0. \end{aligned}$$
(56)

Therefore

$$\begin{aligned} \dfrac{\partial ^{\alpha }}{\partial \tau ^{\alpha }}\sum _{j = 0}^{\infty }v_{j}= m \tau ^{1 - \alpha } \dfrac{\partial ^{2}}{\partial u^{2}}\sum _{j = 0}^{\infty }v_{j}, \end{aligned}$$
(57)

i.e., Eq. (47) is an exact solution of the fractional PDE (39).

Remark

By transforming back \(\tau = T -t\), \(u = x + \left( r-\dfrac{\sigma ^{2}}{2\Gamma ^{2}(1 + \alpha )}\right) (T - t)\), \(x = \mathrm{{log}}S\), and \(V(S,t) = e^{-r(T - t)}U(S,t)\) into Eq. (47), we get the solution in terms of the original variables (S,t) as

$$\begin{aligned} V(S,t) & = \mathrm{{Max}}\{S e^{-\sigma ^{2}(T - t)/2\Gamma ^{2}(1 + \alpha )} - E e^{-r(T - t)},0\}\nonumber \\& \quad + S e^{-\sigma ^{2}(T - t)/2\Gamma ^{2}(1 + \alpha )}\sum _{j=1}^{\infty }\dfrac{\Gamma (2-\alpha )\cdots \Gamma (j+1-\alpha )}{\Gamma (2)\cdots \Gamma (j+1)}\left( m(T - t)\right) ^{j}, \end{aligned}$$
(58)

where \(m = \sigma ^{2}/\left( 2\Gamma ^{2}(1 + \alpha )\Gamma (2 - \alpha )\right)\).
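As a numerical sketch (ours; the truncation level and the market parameters below are illustrative, and the series requires \(m(T-t) < 1\) as in Theorem 2), Eq. (58) can be evaluated by accumulating the coefficient ratios \(\Gamma(j+1-\alpha)/\Gamma(j+1)\):

```python
from math import gamma, exp

def call_price(S, E, r, sigma, alpha, tau, n_terms=40):
    """Truncated series (58); tau = T - t, and m*tau < 1 is required (Theorem 2)."""
    m = sigma**2 / (2 * gamma(1 + alpha)**2 * gamma(2 - alpha))
    disc = exp(-sigma**2 * tau / (2 * gamma(1 + alpha)**2))
    total, coef = 0.0, 1.0
    for j in range(1, n_terms + 1):
        coef *= gamma(j + 1 - alpha) / gamma(j + 1)   # running coefficient product
        total += coef * (m * tau)**j
    return max(S * disc - E * exp(-r * tau), 0.0) + S * disc * total

p = call_price(100.0, 50.0, 0.05, 0.2, 1.0, 1.0)   # alpha = 1 case
```

At \(\alpha = 1\), the coefficients reduce to \(1/j!\), the series sums to \(e^{m\tau}-1\), and for an in-the-money call, the value collapses to \(S - Ee^{-r(T-t)}\), which is indeed a solution of the classical B–S equation.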

5 Conclusion

In this article, we first derived a new time-fractional Black–Scholes equation based on a fractional stochastic differential equation that describes the trend memory effect. Then, we found an exact solution of this equation using the RVIM. Furthermore, by taking \(\alpha = 1\) in Eq. (58), we obtain a solution of the classical B–S equation in the form of a convergent series with easily computable components.