
1 Introduction

After 2008, excessive reliance on mathematical models was blamed for the subprime collapse and the ensuing global crisis. This criticism soon reached the popular level, in newspaper articles and cartoons.


It is true that a closed mathematical model of the economy may be an impossible dream, as is also true that some models use unrealistic assumptions, trading empirical evidence for mathematical beauty or simplicity. On the other hand, there are also cases where the mathematics is there, providing sound results, but nobody pays attention, because sometimes not paying attention is more profitable in the short term. An example:

A theorem by Föllmer and Schied [1] states that if M is the set of all probability measures on a finite space of scenarios \(\varOmega \), \( \rho \) will be a convex risk measure iff there is a penalty function \(\alpha :M\rightarrow \left( -\infty ,\infty \right] \) such that

$$\begin{aligned} \rho (X) =\underset{Q\in M}{\sup }(E_{Q}[-X] -\alpha (Q)) \end{aligned}$$
(1)

with \(\alpha \) convex, lower semicontinuous and \(\alpha (Q) \ge -\rho (0) \).

In such a convex risk measure, the first term represents the maximal expected loss in the scenario Q and \(\alpha (Q) \) accounts for the probability of the scenario. Whereas the calculation of \( E_{Q}[-X] \) is a simple exercise in stochastic analysis, estimation of \(\alpha (Q) \) involves many factors which are frequently not taken into account. For example, if the historical data that is being used does not contain unfavorable events, it is tempting (or profitable) to say that meltdowns are improbable.

Intrigued by why the well-qualified experts of the rating agencies had rated the “toxic” products AAA and had not predicted the 2008 crisis, four economists of the Federal Reserve Bank of Atlanta made an extensive analysis of their reports in the years before the crisis. The conclusion was that most experts reported that a small fall in house prices would lead to disaster, but assigned a very small (penalty) probability to that event [2]. In the Atlanta paper, instead of the language of convex measures, the authors consider the probability of foreclosures decomposed into

$$\begin{aligned} \frac{df}{dt}=\frac{df}{dp}\frac{dp}{dt} \end{aligned}$$
(2)

\(\frac{df}{dp}\) being the sensitivity of foreclosures to price (HPA) and \( \frac{dp}{dt}\) the time variation of house prices. Their conclusion is that the estimation of \(\frac{df}{dp}\) was correct but that of \(\frac{dp}{dt}\) was not (which is equivalent to the effect of the penalty term). However, looking at the housing bubble, it should have been clear after 2001 that the probability of a downturn in HPA was high. In addition, how could inflation-adjusted prices continue to rise when the real incomes of most Americans, especially at the bottom, continued to fall?

Why was a small value assigned to the penalty term? A conflict of interests is a possible reason. The SEC recognition of the main agencies (Fitch, Moody’s, S&P and Dominion), together with the recommendations of Basel II, put them at the center of the financial world. However, their clients were precisely the creators of the securities: if one agency does not provide favorable reports, the client may simply look elsewhere. A recent event lends credibility to this hypothesis: in February 2011, Redwood Trust put on the market the second (since the crisis) mortgage-based private-label security in the USA. Redwood asked both Fitch and Moody’s for ratings, but only published Fitch’s report. Later on, perhaps to assert its credibility, Moody’s published its report, which was quite negative. On the other hand, if the client were the buyer of the securities, a conflict of interest of the opposite sign might also occur. A recent proposal is the creation of government-sponsored clearing agencies. Will it ever work?

Another example is the Bernard Madoff affair. Already in 1999, and later in 2005, Markopolos’ letters to the SEC had shown, using simple mathematics, that Madoff’s strategy could not generate the 12% average annual return unless he was either using insider trading or running a Ponzi scheme. Although some large brokerage institutions (Goldman Sachs, for example) stayed away from any deals with Madoff, he continued to attract a lot of investment in Europe and the USA. Once again, the mathematics was there but very few were paying attention.

The work that is reported in this paper concerns another aspect of the relation of mathematics to economic life, namely the modelization of the price S(t) fluctuations in stock exchanges. The basic (stylized) facts provided by the empirical data are:

  1. The returns (\(r(t,\varDelta ) =\frac{S(t+\varDelta ) -S(t) }{S(t) }\)) have nearly no autocorrelation;

  2. The autocorrelations of \(\left| r(t,\varDelta ) \right| \) decline slowly with increasing lag \(\varDelta \), a long memory effect;

  3. Leptokurtosis: asset returns have distributions with fat tails and excess peakedness at the mean;

  4. Autocorrelations of the sign of \(r(t,\varDelta ) \) are insignificant;

  5. Volatility clustering: large changes tend to follow large changes and small changes to follow small changes. Volatility occurs in bursts;

  6. Volatility is mean-reverting and its distribution is close to lognormal or inverse gamma;

  7. Leverage effect: volatility tends to rise more following a large price fall than following a price rise.
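These stylized facts are easy to probe numerically. The following sketch is a toy illustration (not fitted to any market data): it generates a series with slowly varying lognormal volatility and compares the sample autocorrelation of the returns with that of their absolute values, qualitatively reproducing facts 1, 2 and 5.

```python
import math
import random

def autocorr(x, lag):
    """Sample autocorrelation of the series x at the given lag."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / var

# toy series: AR(1) log-volatility gives volatility clustering
rng = random.Random(0)
log_s, returns = 0.0, []
for _ in range(20000):
    log_s = 0.98 * log_s + 0.1 * rng.gauss(0, 1)
    returns.append(math.exp(log_s - 3) * rng.gauss(0, 1))

print(autocorr(returns, 1))                    # near zero: fact 1
print(autocorr([abs(v) for v in returns], 1))  # clearly positive: facts 2 and 5
```

The persistence parameters (0.98, 0.1) are arbitrary; any slowly varying positive volatility process produces the same qualitative contrast.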

Geometrical Brownian motion (GBM)

$$\begin{aligned} \frac{dS_{t}}{S_{t}}=\mu dt+\sigma dB(t) \end{aligned}$$
(3)

is a basis for most of mathematical finance (Black-Scholes, etc.). Is it consistent with the empirical data? No. In GBM price changes would be log-normal: no leptokurtosis, and the scaling property \(E\left| \frac{ S(t+\varDelta ) -S\left( t\right) }{S(t) }\right| \approx \varDelta ^{1/2}\), which is not borne out by the data. In addition, the volatility \(\sigma \) being constant, there is neither volatility clustering nor a leverage effect. One of the most famous consequences of GBM is the Black-Scholes formula for option pricing. When the historical volatility \( \sigma \) is used in the formula, the resulting price is quite distinct from the one actually practiced in the market. Nevertheless the Black-Scholes formula continues to be used, in the following way: the market price of liquid options is used to infer the (implied) volatility \( \sigma _{imp}\) that would lead to that price; \(\sigma _{imp}\) is then used to compute the price of less traded options. Black-Scholes is used as an invertible mapping between price and \(\sigma _{imp}\), not for its theoretical value.
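The inversion just described is a one-dimensional root-finding problem, the Black-Scholes call price being monotone in \(\sigma \). A minimal sketch (bisection; all numerical values illustrative):

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, r, T, lo=1e-6, hi=5.0, tol=1e-10):
    """Invert Black-Scholes by bisection: market price -> sigma_imp."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, mid, T) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# round trip: the quoted price of a liquid option defines sigma_imp
market_price = bs_call(100, 105, 0.01, 0.25, 0.5)
print(implied_vol(market_price, 100, 105, 0.01, 0.5))  # recovers 0.25
```

Bisection is used here only because it is robust; in practice a Newton iteration with the vega would converge faster.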

In conclusion: the wide use of GBM is a case where oversimplification of the mathematics leads to important departures from the empirical evidence. This was the motivation that led to the present attempt to construct a market model, which although preserving some of the nice features of GBM, would be consistent with (and directly inspired by) the empirical data.

2 A Data-Reconstructed Market Model [3]

The basic hypothesis for the model construction were:

  • (H1) The log-price process \(\log S_{t}\) belongs to a probability space \(\varOmega \otimes \varOmega ^{^{\prime }}\), where the first one, \(\varOmega \), is the Wiener space W and the second, \(\varOmega ^{^{\prime }}\), is a probability space to be empirically reconstructed.

    Denote the log-price by \(\log S_{t}(\omega ,\omega ^{^{\prime }}) \) with \(\omega \in \varOmega \), \(\omega ^{^{\prime }}\in \varOmega ^{^{\prime }}\), and let \( \mathscr {F}_{t}\), \(\mathscr {F}_{t}^{^{\prime }}\) be the \(\sigma \)-algebras in \(\varOmega \) and \(\varOmega ^{^{\prime }}\) generated by the processes up to time t.

  • (H2) The second hypothesis is stronger: assume that for each fixed \(\omega ^{^{\prime }}\), \(\log S_{t}(\bullet ,\omega ^{^{\prime }}) \) is a square integrable random variable in \(\varOmega \).

From (H2) it follows [4] that, for each fixed \(\omega ^{^{\prime }} \),

$$\begin{aligned} \begin{array}{lll} \frac{dS_{t}}{S_{t}}(\bullet ,\omega ^{^{\prime }})= & {} \mu _{t}(\bullet ,\omega ^{^{\prime }}) dt+\sigma _{t}(\bullet ,\omega ^{^{\prime }}) dB(t) \end{array} \end{aligned}$$
(4)

with \(\mu _{t}(\bullet ,\omega ^{^{\prime }}) \) and \(\sigma _{t}(\bullet ,\omega ^{^{\prime }}) \) well-defined processes in \( \varOmega \).

If \(\left\{ X_{t},\mathscr {F}_{t}\right\} \) is a process such that

$$\begin{aligned} \begin{array}{lll} dX_{t}= & {} \mu _{t}dt+\sigma _{t}dB(t) \end{array} \end{aligned}$$
(5)

with \(\mu _{t}\) and \(\sigma _{t}\) being \(\mathscr {F}_{t}-\)adapted processes, then

$$\begin{aligned} \begin{array}{lll} \mu _{t} &{} = &{} \underset{\varepsilon \rightarrow 0}{\lim }\frac{1}{ \varepsilon }E\left\{ \left. X_{t+\varepsilon }-X_{t}\right| \mathscr {F}_{t}\right\} \\ \sigma _{t}^{2} &{} = &{} \underset{\varepsilon \rightarrow 0}{\lim }\frac{1}{ \varepsilon }E\left\{ \left. (X_{t+\varepsilon }-X_{t}) ^{2}\right| \mathscr {F}_{t}\right\} \end{array} \end{aligned}$$
(6)

The process associated to the probability space \(\varOmega ^{^{\prime }}\) is now inferred from the data. For each fixed \(\omega ^{^{\prime }}\) realization in \(\varOmega ^{^{\prime }}\) one has

$$\begin{aligned} \sigma _{t}^{2}(\bullet ,\omega ^{^{\prime }}) =\underset{ \varepsilon \rightarrow 0}{\lim }\frac{1}{\varepsilon }\left\{ E(\log S_{t+\varepsilon }-\log S_{t}) ^{2}\right\} \end{aligned}$$
(7)

Because each set of market data corresponds to a particular realization \( \omega ^{^{\prime }}\), the \(\sigma _{t}^{2}\) process may indeed be reconstructed from the data. The question is how to construct a mathematical model for this induced volatility process. For this purpose we looked for scaling properties of the data, namely

$$\begin{aligned} E\left| \sigma (t+\varDelta ) -\sigma (t) \right| \sim \varDelta ^{H}\qquad \quad E\left| \frac{\sigma ( t+\varDelta ) -\sigma (t) }{\sigma (t) } \right| \sim \varDelta ^{H} \end{aligned}$$
(8)

but neither of these holds. By contrast, the empirical integrated log-volatility is well represented by a relation of the form \( \sum _{n=0}^{t/\delta }\log \sigma (n\delta ) =\beta t+R_{\sigma }(t) \), with the \(R_{\sigma }(t) \) process displaying very accurate self-similarity properties (Fig. 1).

Fig. 1
figure 1

The \(R_{\sigma }\) process and its scaling properties
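The decomposition of the integrated log-volatility into a linear trend \(\beta t\) plus a fluctuation \(R_{\sigma }(t) \), and the estimation of a scaling exponent for \(R_{\sigma }\), can be sketched as follows. This is only an illustration of the procedure, not the authors' exact estimator: endpoint detrending and the particular choice of lags are simplifying assumptions.

```python
import math
import random

def detrended_cumulative(log_sigma, delta=1.0):
    """Integrated log-volatility with the linear trend beta*t removed,
    leaving the fluctuation process R_sigma(t).  The drift beta is set
    so that R_sigma vanishes at the final time (a simple choice)."""
    cum, acc = [], 0.0
    for v in log_sigma:
        acc += v
        cum.append(acc)
    beta = cum[-1] / (delta * len(log_sigma))
    return [c - beta * (n + 1) * delta for n, c in enumerate(cum)]

def scaling_exponent(R, lags):
    """Estimate H from E|R(t+Delta)-R(t)| ~ Delta^H by log-log regression."""
    xs, ys = [], []
    for lag in lags:
        diffs = [abs(R[t + lag] - R[t]) for t in range(len(R) - lag)]
        xs.append(math.log(lag))
        ys.append(math.log(sum(diffs) / len(diffs)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# sanity check on ordinary Brownian motion, where the exponent should be ~0.5
random.seed(1)
bm, acc = [], 0.0
for _ in range(20000):
    acc += random.gauss(0, 1)
    bm.append(acc)
print(scaling_exponent(bm, [1, 2, 4, 8, 16]))
```

Applied to market log-volatility series, the same estimator yields the exponents in the 0.8–0.9 range quoted below.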

If a nondegenerate process \(X_{t}\) has finite variance, stationary increments and is self-similar

$$\begin{aligned} {Law}(X_{at}) ={Law}(a^{H}X_{t}) \end{aligned}$$

then it necessarily has covariance

$$\begin{aligned} {Cov}(X_{s},X_{t}) =\frac{1}{2}(\left| s\right| ^{2H}+\left| t\right| ^{2H}-\left| s-t\right| ^{2H}) E(X_{1}^{2}) \end{aligned}$$
(9)

and the simplest process with these properties is a Gaussian process called fractional Brownian motion. Therefore the simplest model compatible with the data is

$$\begin{aligned} dS_{t}= & {} \mu _{t}S_{t}\,dt+\sigma _{t}S_{t}\,dB(t) \nonumber \\ \log \sigma _{t}= & {} \beta +\frac{k}{\delta }\left\{ B_{H}(t) -B_{H}(t-\delta ) \right\} \end{aligned}$$
(10)

which has been called the fractional volatility model (FVM). \(\delta \) is the observation time scale and, from the data, H is found to be in the range 0.8–0.9. From (10) it follows that the volatility (at resolution \(\delta \)) is

$$\begin{aligned} \sigma (t) =\theta e^{\frac{k}{\delta }\left\{ B_{H}(t) -B_{H}(t-\delta ) \right\} -\frac{1}{2}(\frac{k}{ \delta }) ^{2}\delta ^{2H}} \end{aligned}$$
(11)
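On a finite grid, \(B_{H}\) is a Gaussian vector with the covariance (9), so a path can be sampled by Cholesky factorization and the volatility then follows from (11). A minimal sketch (pure Python, small grid; the parameter values are merely illustrative and \(\delta \) is taken equal to the grid step):

```python
import math
import random

def fbm_path(n, H, dt=1.0, seed=42):
    """Sample B_H at times dt, 2*dt, ..., n*dt via a Cholesky factorization
    of the self-similar covariance Cov(B_s, B_t) = (s^2H + t^2H - |s-t|^2H)/2."""
    t = [dt * (i + 1) for i in range(n)]
    cov = [[0.5 * (t[i] ** (2 * H) + t[j] ** (2 * H)
                   - abs(t[i] - t[j]) ** (2 * H)) for j in range(n)]
           for i in range(n)]
    # plain Cholesky factorization cov = L L^T
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(cov[i][i] - s) if i == j \
                else (cov[i][j] - s) / L[j][j]
    rng = random.Random(seed)
    z = [rng.gauss(0, 1) for _ in range(n)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

def volatility(bh, k, delta, theta, H):
    """Eq. (11): sigma(t) built from increments of B_H at resolution delta."""
    comp = 0.5 * (k / delta) ** 2 * delta ** (2 * H)
    return [theta * math.exp((k / delta) * (bh[i] - bh[i - 1]) - comp)
            for i in range(1, len(bh))]

b = fbm_path(50, 0.85)
sig = volatility(b, 0.6, 1.0, math.exp(-5), 0.85)
```

Because the increments of \(B_{H}\) are persistent for \(H>1/2\), the resulting \(\sigma (t) \) series already displays the clustering of fact 5.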

3 Mathematical Consistency of the Fractional Volatility Model: Arbitrage and Completeness [5]

The main consistency check for any market model is the no-arbitrage principle. In addition completeness or incompleteness of the model should also be checked.

The no-arbitrage principle: a (perfect) market does not allow for risk-free profits with no initial investment or, equivalently, for profits without any risk.

A self-financing portfolio \(V(t) =\sum _{i=1}^{n}h_{t}^{i}S_{t}^{i}\) is an arbitrage portfolio if

$$\begin{aligned} V(0) =0;\;P(V(T)>0) >0 \end{aligned}$$
(12)

for \(T>0\), P being the probability measure on the market scenarios. A market is arbitrage free if and only if there is an equivalent martingale measure Q for the discounted price processes [1, 6].

On the other hand, completeness is related to the possibility of hedging portfolios. H is said to be a hedge for the portfolio X (or to replicate X) if

$$\begin{aligned} H\text { is self-financing; }V_{H}(T) =X(T) ,\text { } P-\text {almost surely} \end{aligned}$$
(13)

A market is complete if all X can be hedged. Furthermore, a market is complete if and only if the martingale measure Q is unique.

3.1 No-Arbitrage

In the fractional volatility model (FVM) one has two probability spaces \( ( \varOmega _{1},\mathscr {F}_{1},P_{1}) _{W}\) and \((\varOmega _{2},\mathscr {F}_{2},P_{2}) _{B_{H}}\) and the product space \(( \overline{\varOmega },\overline{\mathscr {F}},\overline{P}) \) with \(\pi _{1}\) and \(\pi _{2}\) the projections of \(\overline{\varOmega }\) onto \(\varOmega _{1}\) and \(\varOmega _{2}\).

Consider a risky asset with price \(S_{t}\) and a risk-free asset with dynamics

$$\begin{aligned} dA_{t}=rA_{t}\,dt\quad \,A_{0}=1 \end{aligned}$$
(14)

The volatility \(\sigma _{t}\) of the risky asset is a measurable \(\overline{ \mathbb {F}}\)–adapted process satisfying for \(0\le t<\infty \)

$$\begin{aligned} \mathbb {E}_{\overline{P}}\left[ \int _{0}^{t}\sigma _{s}^{2}ds\right]= & {} \int _{0}^{t}\theta ^{2}e^{-(\frac{k}{\delta }) ^{2}\delta ^{2H}}\mathbb {E}_{\overline{P}}\left[ e^{\frac{2k}{\delta }\left\{ B_{H}(s) -B_{H}(s-\delta ) \right\} }\right] ds \nonumber \\= & {} \theta ^{2}\exp \left\{ (\frac{k}{\delta }) ^{2}\delta ^{2H}\right\} t<\infty \end{aligned}$$
(15)

by Fubini’s theorem and the moment generating function of the Gaussian random variable \(B_{H}(s) -B_{H}(s-\delta ) \).
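The computation above only uses Fubini's theorem and the lognormal moment \(E[e^{aX}]=e^{a^{2}s^{2}/2}\) for \(X\sim N(0,s^{2}) \) with \(s^{2}=\delta ^{2H}\). A quick Monte-Carlo sanity check of (15), with illustrative parameter values:

```python
import math
import random

# illustrative parameters (not calibrated to data)
k, delta, H, theta = 0.6, 1.0, 0.85, 1.0
s2 = delta ** (2 * H)          # Var(B_H(s) - B_H(s - delta))

rng = random.Random(7)
n = 200000
# Monte-Carlo estimate of E[exp(2(k/delta) X)], X ~ N(0, s2)
mc = sum(math.exp(2 * (k / delta) * rng.gauss(0, math.sqrt(s2)))
         for _ in range(n)) / n
lhs = theta ** 2 * math.exp(-(k / delta) ** 2 * s2) * mc
rhs = theta ** 2 * math.exp((k / delta) ** 2 * s2)
print(lhs, rhs)   # the two sides agree up to Monte-Carlo error
```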

\(\int _{0}^{t}\left| \mu _{s}\right| ds\) being finite \(\overline{P}\)–almost surely for \(0\le t<\infty \), application of Itô’s formula yields

$$\begin{aligned} S_{t}=S_{0}\exp \left\{ \int _{0}^{t}(\mu _{s}-\frac{1}{2}\sigma _{s}^{2}) ds+\int _{0}^{t}\sigma _{s}dB_{s}\right\} \end{aligned}$$
(16)

Lemma 1

Consider the measurable process

$$\begin{aligned} \gamma _{t}=\frac{r-\mu _{t}}{\sigma _{t}},\quad 0\le t<\infty \end{aligned}$$
(17)

with \(\mu \in L^{\infty }(\left[ 0,T\right] \times \overline{\varOmega })\). Then, for a continuous version of \(B_{H}\)

$$\begin{aligned} \exp \left[ \frac{1}{2}\int _{0}^{T}\gamma _{s}^{2}(\omega _{2}) ds\right]<A(\omega _{2}) <\infty \end{aligned}$$
(18)

for \(P_{2}\)–almost all \(\omega _{2}\in \varOmega _{2}\).

The proof uses the fact that, \(P_{2}\)–almost surely, a continuous version of fractional Brownian motion is Hölder continuous of any order \(\alpha \), \(0\le \alpha < H\); that is, there is a random variable \(C_{\alpha }>0\) such that, for \(P_{2}\)–almost all \(\omega _{2}\in \varOmega _{2}\), \(\left| B_{H}(t) -B_{H}(s) \right| \le C_{\alpha }(\omega _{2}) \left| t-s\right| ^{\alpha }\).

Theorem 1

The market (\(A_{t},S_{t},\sigma _{t}\)) is free of arbitrage

Proof

Restricting the process to a particular path \(\omega _{2}\) of the \(B_{H}\)–process, construct the stochastic exponential of \( \int _{0}^{t}\gamma _{s}(\omega _{2}) dB_{s}\),

$$\begin{aligned} \eta _{t}(\omega _{2}) =\exp \left\{ \int _{0}^{t}\gamma _{s}(\omega _{2}) dB_{s}-\frac{1}{2}\int _{0}^{t}\gamma _{s}^{2}(\omega _{2}) ds\right\} \end{aligned}$$
(19)

The bound in Lemma 1 is the Kallianpur condition that ensures

$$\begin{aligned} \mathbb {E}_{P_{1}}\left[ \eta _{t}(\omega _{2}) \right] =1 \quad \quad \omega _{2}-\mathrm {a.s.} \end{aligned}$$
(20)

Hence, we are in the framework of the Girsanov theorem and \(\eta _{t}(\omega _{2}) \) is a true \(P_{1}\)–martingale. We can define for each \( 0\le T<\infty \) a new probability measure \(Q_{T}(\omega _{2}) \) on \(\mathscr {F}_{1}\) by

$$\begin{aligned} \frac{dQ_{T}(\omega _{2}) }{dP_{1}}=\eta _{T}(\omega _{2}) ,\quad P_{1}-\mathrm {a.s.} \end{aligned}$$
(21)

By the Cameron-Martin-Girsanov theorem, for each \(T\in \left[ 0,\infty \right) \), the process

$$\begin{aligned} B_{t}^{*}=B_{t}-\int _{0}^{t}\frac{r-\mu _{s}}{\sigma _{s}(\omega _{2}) }\,ds\quad 0\le t\le T \end{aligned}$$
(22)

is a Brownian motion on the probability space \((\varOmega ,\mathscr {F} _{1},Q_{T}(\omega _{2})) \).

Under the new probability measure \(Q_{T}(\omega _{2}) \) (equivalent to \(P_{1}\) on \(\mathscr {F}_{1}\)) the discounted price process, \( Z_{t}=\frac{S_{t}}{A_{t}}\quad 0\le t\le T\) with dynamical law

$$\begin{aligned} Z_{t}(\omega _{2}) =Z_{0}+\int _{0}^{t}\sigma _{s}(\omega _{2}) Z_{s}(\omega _{2}) \,dB_{s}^{*} \end{aligned}$$
(23)

is a martingale in the probability space \((\varOmega _{1},\mathscr {F} _{1},Q_{T}(\omega _{2})) \). By the fundamental theorem of asset pricing [1, 6], the existence of an equivalent martingale measure for \(Z_{t}\) implies that there are no arbitrages, that is, \(\mathbb {E}_{Q_{T}(\omega _{2}) } \left[ Z_{t}(\omega _{2}) |\mathscr {F}_{1,s}\right] =Z_{s}(\omega _{2}) \) for \(0\le s<t\le T\).

This proves that there are no arbitrages for \(P_{2}-\)almost all \(\omega _{2}\) trajectories of the \(B_{H}\) process. Because this process is independent of the B process, it follows that the no-arbitrage result is also valid in the product space. \(\square \)

3.2 Incompleteness

In this financial model, trading takes place only in the risky asset and in the money market. As a consequence, the volatility risk cannot be hedged. Having more sources of risk than tradable assets suggests that the market is incomplete.

Theorem 2

The market defined by (\(A_{t},S_{t},\sigma _{t}\)) is incomplete

Proof

Use an integral representation for the fractional Brownian motion

$$\begin{aligned} B_{H}(t) =\int _{0}^{t}K_{H}(t,s) dW_{s} \end{aligned}$$
(24)

\(W_{t}\) being a Brownian motion independent of \(B_{t}\) and \(K_{H}\) the square-integrable kernel

$$\begin{aligned} K_{H}(t,s) =C_{H}s^{\frac{1}{2}-H}\int _{s}^{t}(u-s)^{H-\frac{3}{2}}u^{H-\frac{1}{2}}\,du,\quad s<t \end{aligned}$$
(25)

(\(H>1/2)\). Then the process

$$\begin{aligned} \eta _{t}^{\prime }=\exp (W_{t}-\frac{1}{2}t) \end{aligned}$$
(26)

is a square-integrable \(P_{2}\)–martingale. Now, define a standard bi-dimensional Brownian motion \(W_{t}^{*}=(B_{t},W_{t})\) and the process \(\eta _{t}^{*}(\omega _{2}) =\eta _{t}(\omega _{2}) \eta _{t}^{^{\prime }}\)

$$\begin{aligned} \eta _{t}^{*}(\omega _{2}) =\exp \left\{ \int _{0}^{t}\varGamma _{s}(\omega _{2}) \bullet dW_{t}^{*}-\frac{1}{2} \int _{0}^{t}\left\| \varGamma _{s}(\omega _{2}) \right\| ^{2}ds\right\} \end{aligned}$$
(27)

where \(\varGamma (\omega _{2}) =(\gamma (\omega _{2}) ,1) \); by Lemma 1, \(\eta _{t}^{*}(\omega _{2}) \) is also a \(P_{1}\)–martingale. Then, by the Cameron-Martin-Girsanov theorem, the process \(\widetilde{W}_{t}^{*}=(\widetilde{W}_{t}^{*(1)},\widetilde{W}_{t}^{*(2)}) \) defined by

$$\begin{aligned} \widetilde{W}_{t}^{*(1)}=B_{t}-\int _{0}^{t}\gamma _{s}(\omega _{2}) ds;\;\widetilde{W}_{t}^{*(2)}=W_{t}-t \end{aligned}$$
(28)

is a bi-dimensional Brownian motion on the probability space \((\varOmega _{1},\mathscr {F}_{1},Q_{T}^{*}(\omega _{2})) \), where \(Q_{T}^{*}(\omega _{2}) \) is the probability measure \(\frac{ dQ_{T}^{*}(\omega _{2}) }{dP_{1}}=\eta _{T}^{*}(\omega _{2}) \). Moreover, the discounted price process Z remains a martingale with respect to the new measure \(Q_{T}^{*}(\omega _{2}) \). \(Q_{T}^{*}(\omega _{2}) \) being an equivalent martingale measure distinct from \(Q_{T}(\omega _{2}) \), the market is incomplete. \(\square \)

3.3 Leverage, a Modified Model and Completeness [5, 7]

The following nonlinear correlation of the returns

$$\begin{aligned} L(\tau ) =\left\langle \left| r(t+\tau ) \right| ^{2}r(t) \right\rangle -\left\langle \left| r(t+\tau ) \right| ^{2}\right\rangle \left\langle r(t) \right\rangle \end{aligned}$$
(29)

is called leverage and the leverage effect is the fact that, for \(\tau >0\), \(L(\tau ) \) starts from a negative value whose modulus decays to zero whereas for \(\tau <0\) it has almost negligible values (see Fig. 2 which shows a typical behavior in the NYSE data).

Fig. 2
figure 2

Example of the leverage effect in NYSE data
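An empirical estimator of (29) is immediate. The sketch below applies it to a toy series in which the volatility is made to jump after a price fall; these dynamics are an assumption used only to display the sign behavior of the estimator, not a model of any market.

```python
import random

def leverage(returns, tau):
    """Empirical version of Eq. (29):
    L(tau) = <|r(t+tau)|^2 r(t)> - <|r(t+tau)|^2><r(t)>."""
    n = len(returns)
    pairs = [(t, t + tau) for t in range(n) if 0 <= t + tau < n]
    m_r = sum(returns[t] for t, _ in pairs) / len(pairs)
    m_a = sum(returns[u] ** 2 for _, u in pairs) / len(pairs)
    cross = sum(returns[t] * returns[u] ** 2 for t, u in pairs) / len(pairs)
    return cross - m_r * m_a

# toy series: volatility jumps after a negative return
rng = random.Random(3)
r, sig = [], 1.0
for _ in range(50000):
    x = sig * rng.gauss(0, 1)
    r.append(x)
    sig = 1.0 + 0.5 * max(0.0, -x)

print(leverage(r, 1), leverage(r, -1))  # negative for tau > 0, near zero for tau < 0
```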

As expressed in (10) the fractional volatility model has the volatility process \(\sigma _{t}\) acting on the log-price, but not conversely. Therefore, in its simplest form, the fractional volatility model contains no leverage effect.

However, leverage may be implemented by a small modification of the model: use a (truncated) representation of fractional Brownian motion as a stochastic integral over Brownian motion and identify the random generator of the log-price process with the stochastic integrator of the volatility. A leverage effect is then obtained.

$$\begin{aligned} \mathscr {H}( t) =\varPi ^{(M)}\left[ C_{H}\left\{ \begin{array}{c} \int _{-\infty }^{0}(( t-u) ^{H-\frac{1}{2}}-(-u) ^{H-\frac{1}{2}}) dW_{u} \\ +\int _{0}^{t}( t-u) ^{H-\frac{1}{2}}dW_{u} \end{array} \right\} \right] \end{aligned}$$
(30)

\(\varPi ^{(M)}\) meaning the truncation of the representation to an interval \( \left[ -M,M\right] \) with M arbitrarily large.

Now, instead of two, there is only one source of risk and the new fractional volatility model would be

$$\begin{aligned} dS_{t}^{\prime }= & {} \mu _{t}S_{t}^{\prime }dt+\sigma _{t}^{\prime }S_{t}^{\prime }dW_{t} \nonumber \\ \log \sigma _{t}^{\prime }= & {} \beta +\frac{k^{\prime }}{\delta }\left\{ \mathscr {H}(t) -\mathscr {H}(t-\delta ) \right\} \end{aligned}$$
(31)

Theorem 3

The market (\(A_{t},S_{t}^{\prime },\sigma _{t}^{\prime }\)) is free of arbitrage and complete.

Proof

Because the two processes are not independent, we cannot use the same argument as before to obtain the Kallianpur condition. However, with the truncation, the Hölder condition is trivially verified for all the truncated paths of \(\sigma _{t}^{\prime }\) and the construction of an equivalent martingale measure follows the same steps as in Theorem 1. Hence we have a \(P_{1}\)-martingale with respect to \((\mathscr {F} _{1,t}) _{0\le t<T}\)

$$\begin{aligned} \eta _{t}=\exp \left\{ \int _{0}^{t}\frac{r-\mu _{s}}{\sigma _{s}^{\prime }}dW_{s}- \frac{1}{2}\int _{0}^{t}(\frac{r-\mu _{s}}{\sigma _{s}^{\prime }}) ^{2}ds\right\} \end{aligned}$$
(32)

and \(Q_{T}\), defined by \(\frac{dQ_{T}}{dP_{1}}=\eta _{T}\), is an equivalent martingale measure.

The set of equivalent local martingale measures being non-empty, let \( Q^{*}\) be an element of this set. By the converse of the Girsanov theorem there is an \( \mathbb {R}\)-valued process \(\phi \) such that the Radon-Nikodym density of \( Q^{*}\) is

$$\begin{aligned} \frac{dQ_{T}^{*}}{dP_{1}}=\exp \left\{ \int _{0}^{T}\phi _{s}dW_{s}-\frac{ 1}{2}\int _{0}^{T}\phi _{s}^{2}ds\right\} \end{aligned}$$
(33)

Moreover the process \(W_{t}^{*}\) given by

$$\begin{aligned} W_{t}^{*}=W_{t}-\int _{0}^{t}\phi _{s}ds \end{aligned}$$
(34)

is a standard \(Q^{*}\)–Brownian motion and the discounted price process \( Z^{\prime }\) satisfies the following stochastic differential equation

$$\begin{aligned} dZ_{t}^{\prime }=(\mu _{t}-r+\sigma _{t}^{\prime }\phi _{t})Z_{t}^{\prime }dt+\sigma _{t}^{\prime }Z_{t}^{\prime }dW_{t}^{*} \end{aligned}$$
(35)

Because \(Z_{t}^{\prime }\) is a \(Q^{*}\)–martingale, it must hold that \(\mu (t,\omega )-r+\sigma ^{\prime }(t,\omega )\phi (t,\omega )=0\) almost everywhere w.r.t. \(dt\times P_{1}\) in \(\left[ 0,T\right] \times \varOmega _{1}\). This implies

$$\begin{aligned} \phi (t,\omega )=\frac{r-\mu (t,\omega )}{\sigma ^{\prime }(t,\omega )} \end{aligned}$$
(36)

a. e. \((t,\omega ) \in \left[ 0,T\right] \times \varOmega _{1}\). Hence \(Q_{T}^{*}=Q_{T}\), that is, \(Q_{T}\) is the unique equivalent martingale measure. This market model is complete. \(\square \)

Figure 3 compares the leverage effect in the two models, that is, the original FVM (10) and the modified one (31).

Fig. 3
figure 3

Comparison of leverage in the original and the modified model

3.4 A Remark on Long Memory and Fractional Brownian Motion

In the past, several authors had already tried to describe long memory effects in market data by replacing Brownian motion by fractional Brownian motion with \(H>1/2\) in the price process. However, it was soon realized [8–11] that this replacement implied the existence of arbitrage. These results might be avoided either by restricting the class of trading strategies [12], introducing transaction costs [13] or replacing pathwise integration by a different type of integration [14, 15]. However, this is not free of problems, because the Skorohod integral approach requires the use of a Wick product either on the portfolio or on the self-financing condition, leading to situations that are unreasonable from the economic point of view (for example, a positive portfolio with negative Wick value) [16].

The fractional volatility model in Eq. (10) is not affected by these considerations, because it is the volatility process that is driven by fractional noise, not the price process and, as shown, a no-arbitrage result may be proven. This is no surprise because the requirement (H2) that, for each sample path \(\omega _{2}\in \varOmega _{2}\), \( \log S_{t}(\cdot ,\omega _{2}) \) is a square integrable random variable in \(\varOmega _{1}\) already implies that \(\int \sigma _{t}dB_{t}\) is a martingale. The square integrability is also essential to guarantee the possibility of reconstruction of the \(\sigma \) process from the data.

4 Agent-Based Interpretation of the Fractional Volatility Model

In [17] two agent-based models were considered:

  • In the first the traders strategies play a determinant role.

  • In the second the determinant effect is the limit-order book dynamics, the agents having a random nature.

4.1 A Market Model with Self-adapted or Fixed Strategies

A set of investors is playing against the market. In addition to the impact of this group of investors, other factors are represented by a stochastic process \(\eta _{t}\)

$$\begin{aligned} z_{t+1}=f(z_{t},\omega _{t}) +\eta _{t} \end{aligned}$$
(37)

\((z_{t}=\log S_{t}) \), \(\omega _{t}\) being the total investment made by the group of traders. After r time steps, s agents copy the strategy of the s best performers and, at the same time, have some probability of mutating that strategy.

The model was run with different initial conditions and with or without evolution of the strategies. When the model is run with evolution, the asymptotic steady-state behavior depends on the initial conditions. Different types of return statistics correspond to the relative importance of either “value investors” or “technical traders”. The occurrence of market bubbles and fat tails corresponds to situations where technical-trader strategies are well represented. A situation with \(50\,\%\) fundamental (value-investing) strategies and \(50\,\%\) trend-following strategies was chosen to compare its statistical properties with those of the FVM. The squared volatility \(\sigma _{t}^{2}=\frac{1}{\left| T_{0}-T_{1}\right| }\mathrm {var}(\log p_{t}) \) and the parameters in \(\sum _{n=0}^{t/\delta }\log \sigma (n\delta ) =\beta t+R_{\sigma }(t) \) and \(\left| R_{\sigma }(t+\varDelta ) -R_{\sigma }(t) \right| \) were estimated from the simulations. Figure 4 shows the results.

Fig. 4
figure 4

Statistical properties of the model with agent strategies

Notice the lack of long-memory scaling of \(R_{\sigma }(t) \): the asymptotic exponent 0.55 denotes the lack of memory of the volatility process. This might already be evident from the time behavior of \(R_{\sigma }(t) \) in the lower left plot. Also, although the returns have fat tails in this case, their shape differs from that observed in the market data. Similar conclusions are obtained with other combinations of agent strategies.

In conclusion: It seems that the features of the fractional volatility model (which are also those of the bulk market data) are not easily captured by a choice of strategies in an agent-based model.

Agents’ reactions and strategies are very probably determinant during market crises and market bubbles, but not on business-as-usual days.

4.2 A Limit-Order Book Market Model

In this model, asks and bids arrive at random on a window \( [S_{t}-w,S_{t}+w]\) around the current price \(S_{t}\). Every time a buy order arrives it is fulfilled by the closest non-empty ask slot, the new current price being determined by the value of the ask that fulfills it. If no ask exists when a buy order arrives it goes to a cumulative register to wait to be fulfilled. The symmetric process occurs when a sell order arrives, the new price being the bid that buys it. Sell and buy orders, asks and bids all arrive at random. Because the window around the current price moves up and down, asks and bids that are too far away from the current price are automatically eliminated.

The only parameters of the model are the width w of the limit-order book and the size n of the asks and bids, the sell and buy orders being normalized to one.
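The mechanism just described can be sketched as follows. This is a simplified toy, not the authors' exact simulation: unfilled market orders are discarded rather than queued, and all numerical values are illustrative.

```python
import random

def simulate_lob(steps=20000, w=10, n=2, dp=0.1, seed=5):
    """Toy limit-order book: asks and bids arrive at random in a window of
    2*w+1 price slots around the current price; a market buy (sell) order is
    filled at the closest ask (bid), which then sets the new price."""
    rng = random.Random(seed)
    p = 1000                       # current price in units of dp
    asks, bids = {}, {}            # price level -> outstanding size
    path = []
    for _ in range(steps):
        # drop book entries that drifted outside the window [p - w, p + w]
        asks = {q: s for q, s in asks.items() if p < q <= p + w}
        bids = {q: s for q, s in bids.items() if p - w <= q < p}
        ev = rng.randrange(4)
        if ev == 0:                          # new limit ask above the price
            q = p + rng.randint(1, w)
            asks[q] = asks.get(q, 0) + n
        elif ev == 1:                        # new limit bid below the price
            q = p - rng.randint(1, w)
            bids[q] = bids.get(q, 0) + n
        elif ev == 2 and asks:               # market buy -> closest ask
            q = min(asks)
            asks[q] -= 1
            if asks[q] == 0:
                del asks[q]
            p = q
        elif ev == 3 and bids:               # market sell -> closest bid
            q = max(bids)
            bids[q] -= 1
            if bids[q] == 0:
                del bids[q]
            p = q
        path.append(p * dp)
    return path

path = simulate_lob(steps=5000)
```

Even with purely random order flow, the resulting price series already shows the fast decay of return correlations together with persistent volatility, the point made in the conclusion below.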

Fig. 5
figure 5

Statistical properties of the limit order book model

The model was run for different widths w and liquidities n. Although the exact values of the statistical parameters depend on w and n, the statistical nature of the results is essentially the same. In Fig. 5 (\(n=2\)) the limit-order book is divided into \(2w+1=21\) discrete price slots with \(\varDelta p=0.1\). The scaling properties of \( R_{\sigma }(t) \) are quite evident from the lower right plot in the figure, the Hurst coefficient being 0.96.

Conclusion: the main statistical properties of the market data (fast decay of the linear correlation of the returns, non-Gaussianity and volatility memory) are already generated by the dynamics of the limit-order book with random behavior of the agents. A large part of the market statistical properties (in normal business-as-usual days) depends more on the nature of the price fixing financial institutions than on particular investor strategies.

5 Further Properties of the Fractional Volatility Model

In the FVM the statistics of returns is obtained in closed form. From

$$\begin{aligned} P_{\delta }(r(\varDelta )) =\frac{1}{4\pi \theta k\delta ^{H-1}\sqrt{\varDelta }}\int _{0}^{\infty }dxx^{-\frac{1}{2}}e^{-\frac{1 }{C}(\log x) ^{2}}e^{-\lambda x} \end{aligned}$$
(38)
$$\begin{aligned} r(\varDelta ) =\log S_{t+\varDelta }-\log S_{t}\;,\;\theta =e^{\beta },\;\lambda =\frac{(r( \varDelta ) -r_{0}) ^{2}}{2\varDelta \theta ^{2}} \end{aligned}$$
(39)
$$\begin{aligned} r_{0}=\left( \mu -\frac{\sigma ^{2}}{2}\right) \varDelta \;,\;C=8k^{2}\delta ^{2H-2} \end{aligned}$$
(40)

one obtains

(41)

with asymptotic behavior, for large returns

(42)
Fig. 6
figure 6

Statistics of returns in the FVM compared with NYSE data

This form provides a good fit of the empirical data. Figure 6 compares NYSE data with (41) for \(H=0.83,\) \(k=0.59,\) \(\beta =-5,\) \( \delta =1\), \(\varDelta =1\) and \(\varDelta =10\).
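The integral in (38) is straightforward to evaluate numerically: with the substitution \(x=e^{u}\) the integrand becomes \(e^{u/2-u^{2}/C-\lambda e^{u}}\), which is well behaved. A sketch using a plain Riemann sum and the parameter values of the Fig. 6 fit:

```python
import math

def pdf_unnorm(r, r0=0.0, theta=math.exp(-5), k=0.59, H=0.83,
               delta=1.0, Delta=1.0):
    """Return distribution of Eq. (38), evaluated with the substitution
    x = exp(u); defaults are the parameters quoted for the Fig. 6 fit."""
    C = 8 * k ** 2 * delta ** (2 * H - 2)
    lam = (r - r0) ** 2 / (2 * Delta * theta ** 2)
    total, du, u = 0.0, 0.01, -30.0
    while u < 30.0:
        total += math.exp(0.5 * u - u * u / C - lam * math.exp(u)) * du
        u += du
    return total / (4 * math.pi * theta * k * delta ** (H - 1) * math.sqrt(Delta))

p0 = pdf_unnorm(0.0)
p1 = pdf_unnorm(0.02)
print(p0, p1)   # the density decreases away from r0, as expected
```

The fixed integration range \([-30,30]\) and step 0.01 are ad hoc choices; an adaptive quadrature would be preferable for production use.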

Fig. 7
figure 7

Statistics of returns for 1 min exchange data

That NYSE returns are well described by (41) is no wonder, because the FVM itself was reconstructed from that data. What seemed stranger, at first sight, was the fact that, once the parameters of the model are fixed, a simple change of \(\varDelta \) would predict the returns in a quite different market. This is shown in Fig. 7, where the same parameters as above were used, simply changing to \(\varDelta =\frac{1}{440}\) (1 min). The prediction of the model is compared with 1-min data from the US Dollar-Euro market for a couple of months in 2001. The result may be surprising, because one would not expect the volatility parametrization to carry over to such a different time scale, and also because one is dealing with a different market. However, if the conclusion from the agent-based models is correct, namely that on business-as-usual days the statistics of the data depends more on the price fixation process than on agent strategies or other market features, then this result is no longer a surprise.

Fig. 8
figure 8

Option price results V/K, comparison with Black-Scholes \((V-C)\)/K and the corresponding implied volatility

Using a simple risk neutrality argument, a new option pricing was also obtained, namely

(43)

with

$$\begin{aligned} M(\alpha ,a,b)= & {} \frac{1}{2\pi \alpha }\int _{-1}^{\infty }dy\int _{0}^{\infty }dxe^{-\frac{\log ^{2}x}{2\alpha ^{2}}}e^{-\frac{y^{2}}{2 }(ax+\frac{b}{x}) ^{2}} \nonumber \\= & {} \frac{1}{4\alpha }\sqrt{\frac{2}{\pi }}\int _{0}^{\infty }dx\frac{e^{- \frac{\log ^{2}x}{2\alpha ^{2}}}}{ax+\frac{b}{x}}{{erf}{c}}\left( -\frac{ax}{\sqrt{2}}-\frac{b}{\sqrt{2}x}\right) \end{aligned}$$
(44)

K is the strike price and T the maturity time. In Fig. 8, \( V(S_{t},\sigma _{t},t) \) is plotted in the range \(T-t\in [5,100]\) with \(S/K\in [0.5,1.5]\), as well as \((V( S_{t},\sigma _{t},t) -C( S_{t},\sigma _{t},t)) /K\) for \(k=1\) and \( k=2\), where \(C( S_{t},\sigma _{t},t)\) is the Black-Scholes result. Other parameters are fixed at \(\sigma =0.01\), \(r=0.001\), \(\delta =1\), \(H=0.8\). To compare with Black-Scholes (BS), the implied volatility that would reproduce the same results was also computed. The implied volatility surface corresponding to \(V( S_{t},\sigma _{t},t) \) is shown for \(k=1\). It predicts a smile effect, with the smile increasing as maturity approaches.