
This chapter introduces computational tools which can be used to implement the functionals presented in this book. In particular, we focus on the non-central chi-squared distribution, which appeared in the context of the MMM and the TCEV model, and the non-central beta distribution, which appeared in the context of pricing exchange options. Lastly, we discuss the inversion of Laplace transforms, which can be used to recover transition densities from their Laplace transforms.

13.1 Some Identities Related to the Non-central Chi-Squared Distribution

The non-central chi-squared distribution featured prominently when pricing European call and put options under the MMM and TCEV model, see Sect. 3.3. In the current section, we recall the distribution, and in Sect. 13.2 we will present an algorithm showing how to implement the distribution, where we follow ideas presented in Hulley (2009).

First, we recall the link between the squared Bessel process and the non-central chi-squared distribution, which is given by

$$\frac{X_t}{t} \stackrel{d}{=} \chi^2_{\delta} \biggl( \frac{x}{t}\biggr) , $$

where X={X t , t≥0} denotes a squared Bessel process of dimension δ, and \(\chi^{2}_{\delta} (\lambda)\) denotes a non-central chi-squared random variable with δ degrees of freedom and non-centrality parameter λ>0. We recall from Lemma 8.2.2 that the non-central χ 2-distribution with δ>0 degrees of freedom and non-centrality parameter λ>0 has the density function

$$ p(x, \delta, \lambda) = \frac{1}{2} \exp \biggl\{ - \frac{x+ \lambda}{2} \biggr\} \biggl( \frac{x}{\lambda} \biggr)^{\frac{\delta }{4}-\frac {1}{2}} I_{\frac{\delta}{2}-\frac{1}{2}} ( \sqrt{\lambda x} ) , \quad x \geq0 . $$
(13.1.1)

Here \(I_{\nu}(x) = \sum_{j \geq0} \frac{1}{j! \varGamma(j+ \nu+1)} ( \frac{x}{2} )^{2j + \nu}\) denotes the modified Bessel function of the first kind of order ν>−1. The following equality is given in Hulley (2009), where

$$\begin{aligned} \frac{\lambda}{x} p(x,4,\lambda) =& \frac{1}{2} e^{ - \frac {\lambda +x}{2}} \biggl( \frac{\lambda}{x} \biggr)^{\frac{1}{2}} I_1 ( \sqrt {\lambda x} \,) \\ =& \frac{1}{2} e^{ - \frac{\lambda+x}{2}} \biggl( \frac {x}{\lambda} \biggr)^{- \frac{1}{2}} I_{-1} ( \sqrt{\lambda x}\, ) = p(x, 0 ,\lambda) , \end{aligned}$$
(13.1.2)

for x∈(0,∞) and λ>0, since the modified Bessel function of the first kind satisfies I 1=I −1, see e.g. Abramowitz and Stegun (1972), Eq. (9.6.6). This equality involves p(x,0,λ), the density of the continuous part of a non-central chi-squared random variable with zero degrees of freedom. Such a random variable consists of a discrete part, as it places positive mass at zero, and a continuous part assuming values in the interval (0,∞). We return to this issue when discussing distributions of this type below. From Eq. (13.1.2), we immediately obtain the following formula, which is employed frequently in the context of the MMM, see Sect. 3.3:

$$\begin{aligned} \begin{aligned}[b] & E \biggl( \frac{\lambda(t,S)}{\chi^2_4( \lambda(t,S)) } g \bigl( \chi^2_4 \bigl( \lambda(t,S) \bigr) \bigr) \biggr) \\ & \quad = E \bigl( g \bigl( \chi^2_4 \bigl( \lambda(t,S) \bigr) \bigr) \bigr) - g(0) \exp \biggl\{ - \frac{\lambda(t,S)}{2} \biggr\} , \end{aligned} \end{aligned}$$
(13.1.3)

for an appropriately integrable function g(⋅). Next, we introduce the cumulative distribution function of a non-central chi-squared random variable. The following equality, see Eq. (29.3) in Johnson et al. (1995), introduces the non-central chi-squared distribution as a weighted average of central chi-squared distributions, the weights being Poisson weights:

$$ P \bigl( \chi^2_{\delta} ( \lambda) \leq x\bigr) = \sum^{\infty}_{j=0} \frac{ \exp \{ - \lambda/2 \} (\lambda/2 )^j }{j!} P \bigl( \chi ^2_{\delta+2 j} \leq x \bigr) , $$
(13.1.4)

for all x∈(0,∞), δ>0 and λ>0, where \(\chi ^{2}_{\delta}\) denotes the central chi-squared random variable. The distribution of the central chi-squared random variable admits the following presentation in terms of the regularized incomplete gamma function \(\mathcal {P}( \cdot{,} \cdot)\), see Johnson et al. (1994), Eq. (18.3):

$$ P \bigl( \chi^2_{\delta} \leq x\bigr) = \mathcal{P}\biggl( \frac{\delta}{2} , \frac {x}{2} \biggr) , $$
(13.1.5)

for x∈(0,∞) and δ>0, where

$$ \mathcal{P}( a, z) := \frac{1}{\varGamma(a)} \int^z_0 \exp \{ - t \} t^{a-1}\,dt , $$
(13.1.6)

for z∈ℜ+ and a>0. We can obtain an expression similar to Eq. (13.1.4) for the density of a non-central chi-squared random variable,

$$p(x, \delta, \lambda) = \sum^{\infty}_{j=0} \frac{\exp \{ - \lambda/2 \} (\lambda/2)^j}{j!} p(x, \delta+ 2 j) , $$

for x∈(0,∞), δ>0 and λ>0, and where p(x,δ) denotes the probability density function of a chi-squared random variable with δ>0 degrees of freedom. Finally, we focus on the non-central chi-squared distribution with zero degrees of freedom, which also featured in the context of the MMM in Sect. 3.3. From Eq. (13.1.4), we get

$$ P\bigl( \chi^2_0 ( \lambda) \leq x\bigr) = \sum^{\infty}_{j=0} \frac{\exp \{ - \lambda/2 \} ( \frac{\lambda}{2} )^j }{j!} P \bigl( \chi^2_{2 j} \leq x \bigr) , $$
(13.1.7)

for x≥0 and λ>0. However, \(\chi^{2}_{0}\), a central chi-squared random variable of zero degrees of freedom, is simply equal to zero, i.e.,

$$P\bigl( \chi^2_0 \leq x\bigr)=1 , $$

for all x≥0, hence

$$P \bigl( \chi^2_0( \lambda) =0\bigr) = \exp \{ {-} \lambda/2 \} , $$

where λ>0. From Eq. (13.1.7) we get

$$\begin{aligned} P \bigl( \chi^2_0 (\lambda) \leq x \bigr) =& P\bigl( \chi^2_0 (\lambda) = 0 \bigr) + P\bigl(0 < \chi^2_0 (\lambda) \leq x \bigr) \\ =& \exp \{ {-} \lambda/ 2 \} + \int^x_0 p( y, 0 , \lambda )\,dy , \end{aligned}$$

for x≥0, λ>0. We remark that a non-central chi-squared random variable of 0 degrees of freedom is not continuous, but places mass at the origin, and hence p(x,0,λ) is not a probability density function. Nevertheless, it is obtained by formally setting δ=0 in Eq. (13.1.1).
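The mixture representation (13.1.7) can be evaluated directly by truncating both the Poisson sum and the series for the regularized incomplete gamma function. The following sketch (the function name is ours; truncation depths are fixed rather than controlled by a rigorous error bound) recovers, in particular, the mass exp{−λ/2} that χ²₀(λ) places at the origin.

```python
import math

def ncx2_cdf_zero_df(x, lam, jmax=100, kmax=300):
    """P(chi^2_0(lam) <= x) via the Poisson mixture (13.1.7)."""
    z = x / 2.0
    total = math.exp(-lam / 2.0)  # j = 0 term: chi^2_0 is identically zero
    if z <= 0:
        return total  # only the point mass at the origin contributes
    for j in range(1, jmax):
        # Poisson weight exp(-lam/2) (lam/2)^j / j!, computed in log space
        logw = -lam / 2.0 + j * math.log(lam / 2.0) - math.lgamma(j + 1)
        # P(chi^2_{2j} <= x) = regularized incomplete gamma P(j, x/2),
        # via the series P(a, z) = sum_k exp(-z) z^{a+k} / Gamma(a+k+1)
        p = sum(math.exp(-z + (j + k) * math.log(z) - math.lgamma(j + k + 1))
                for k in range(kmax))
        total += math.exp(logw) * p
    return total
```

For λ = 2, the value near x = 0 is approximately e⁻¹ ≈ 0.3679, the point mass at zero, and the function increases to one as x grows.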

We conclude this section with some useful identities pertaining to the non-central chi-squared distribution. These equalities feature frequently in Sect. 3.3. We recall that p(⋅,δ,λ) denotes the probability density function of a non-central χ²-distributed random variable with δ degrees of freedom and non-centrality parameter λ, and we use Ψ(⋅,δ,λ) to denote the corresponding distribution function.

Lemma 13.1.1

The following useful properties hold:

$$\begin{aligned} \biggl( \frac{\lambda}{x} \biggr)^{\frac{\nu-2}{2}} p( x, \nu, \lambda ) =& p( \lambda, \nu,x) \end{aligned}$$
(13.1.8)
$$\begin{aligned} \int^{\infty}_0 p(x, \nu+2 ,y ) \,dy =& \varPsi( x, \nu, 0) \end{aligned}$$
(13.1.9)
$$\begin{aligned} \int^{\infty}_{\lambda} p(x, \nu+2, y ) \,dy =& \varPsi( x, \nu, \lambda) \end{aligned}$$
(13.1.10)
$$\begin{aligned} \int^{\lambda}_0 p(x, \nu+2, y ) \,dy =& \varPsi( x, \nu, 0 ) - \varPsi (x, \nu, \lambda) . \end{aligned}$$
(13.1.11)
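The first identity of Lemma 13.1.1 is straightforward to verify numerically. The sketch below (our own helper, not from the text) evaluates the non-central chi-squared density as a Poisson mixture of central chi-squared densities and checks Eq. (13.1.8).

```python
import math

def ncx2_pdf(x, nu, lam, terms=200):
    """Density p(x, nu, lam) as a Poisson mixture of central chi-squared densities."""
    total = 0.0
    for j in range(terms):
        # Poisson weight exp(-lam/2) (lam/2)^j / j!
        logw = -lam / 2 + j * math.log(lam / 2) - math.lgamma(j + 1)
        # central chi-squared density with nu + 2j degrees of freedom at x
        a = nu / 2 + j
        logp = (a - 1) * math.log(x) - x / 2 - a * math.log(2) - math.lgamma(a)
        total += math.exp(logw + logp)
    return total

# check (13.1.8): (lam/x)^((nu-2)/2) p(x, nu, lam) = p(lam, nu, x)
x, nu, lam = 2.3, 3.0, 1.7
lhs = (lam / x) ** ((nu - 2) / 2) * ncx2_pdf(x, nu, lam)
rhs = ncx2_pdf(lam, nu, x)
```

The identity expresses the symmetry of the density in its argument and its non-centrality parameter, up to the stated power factor.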

13.2 Computing the Non-central Chi-Squared Distribution

The aim of this section is to introduce an algorithm allowing us to compute the non-central chi-squared distribution. We recall from Sect. 3.3 that in order to price calls and puts, we need to be able to evaluate this distribution function. Furthermore, we point out that we need to be able to evaluate this distribution function for zero degrees of freedom and for a variety of non-centrality parameters. In particular, for large maturities, the non-centrality parameter is small, whereas for small maturities, the non-centrality parameter is large. This section follows Hulley (2009) closely. As in this reference, we base our approach on an algorithm from Ding (1992), which performs well for small values of the non-centrality parameter but not for large ones, for which we employ an analytic approximation due to Sankaran (1963). We introduce the non-central regularized incomplete gamma function, given by

$$ \mathcal{P}(a,b,z) := \sum ^{\infty}_{j=0} \frac{\exp \{ {-} b \} b^j}{j!} \mathcal{P}(a+j,z) , $$
(13.2.12)

for all z≥0 and a,b≥0. Formally, we set \(\mathcal {P}(0,z):=1\), as the regularized incomplete gamma function from Eq. (13.1.6) is not well-defined in this case. We can express the distribution function of the non-central chi-squared and the chi-squared random variables in terms of the non-central regularized incomplete gamma function,

$$P\bigl( \chi^2_{\delta} \leq x\bigr) = \mathcal{P} \biggl( \frac{\delta}{2} , 0, \frac {x}{2} \biggr) , $$

where x∈(0,∞) and δ>0, and

$$P \bigl( \chi^2_{\delta} ( \lambda) \leq x\bigr) = \mathcal{P} \biggl( \frac{\delta}{2} , \frac{\lambda}{2} , \frac{x}{2} \biggr) , $$

for x∈(0,∞) (respectively x∈ℜ+), δ>0 (respectively δ=0), and λ>0. We assume for the remainder of this section that one of the following conditions is satisfied:

  • z∈(0,∞) and a,b>0;

  • z∈ℜ+, a=0 and b>0,

which correspond to the cases δ>0 and δ=0, respectively.

In a first step, we rewrite the terms \(\mathcal{P}(a+j,z)\) on the right-hand side of Eq. (13.2.12) in terms of an infinite sum. Using integration by parts and the identity Γ(a+j+1)=(a+j)Γ(a+j), we obtain

$$ \mathcal{P}(a+j+1,z) = \mathcal{P}(a+j,z) - \frac{\exp \{ {-}z \} z^{a+j}}{\varGamma(a+j+1)} , $$
(13.2.13)

which also holds for a=j=0, as by definition \(\mathcal{P}(0,z)=1\). A recursive application of Eq. (13.2.13) yields

$$ \mathcal{P}(a+j,z)= \mathcal{P}(a+j+1,z) + \frac{\exp \{ {-}z \} z^{a+j} }{\varGamma(a+j+1)} = \sum ^{\infty}_{k=j} \frac{\exp \{ {-}z \} z^{a+k}}{\varGamma(a+k+1)} , $$
(13.2.14)

for j∈{0,1,2,…}. Defining

$$A_k = \sum^k_{j=0} \frac{\exp \{ {-}b \} b^j }{j!} $$

and

$$T_k = \frac{\exp \{ {-} z \} z^{a+k} }{\varGamma(a+k+1)} , $$

we have

$$ \mathcal{P}(a,b,z) = \sum ^{\infty}_{k=0} A_k T_k . $$
(13.2.15)

The idea is to truncate the series in Eq. (13.2.15),

$$\begin{aligned} \mathcal{P}(a,b,z) =& \sum^{N-1}_{k=0} A_k T_k + \sum^{\infty}_{k=N} A_k T_k \\ =& \tilde{\mathcal{P}}_{N}(a,b,z) + \epsilon_N . \end{aligned}$$

We now aim to find an effective bound for \(\epsilon_{N} = \sum^{\infty }_{k=N} A_{k} T_{k}\). We have the trivial bound

$$A_k = \sum^{k}_{j=0} \frac{\exp \{ {-} b \} b^j}{j!} < \sum^{\infty}_{j=0} \frac{\exp \{ {-}b \} b^j }{j!} =1 , $$

and hence

$$\epsilon_N =\sum^{\infty}_{k=N} A_k T_k < \sum^{\infty}_{k=N} T_k . $$

We note that the T k , \(k \in{\mathcal{N}}\), admit the following recursive formula:

$$ T_k = \frac{\exp \{ {-} z \} z^{a+k}}{\varGamma(a+k+1) } = \frac {z}{a+k} \frac{\exp \{ {-}z \} z^{a+k-1} }{\varGamma(a+k)} = \frac {z}{a+k} T_{k-1} , $$
(13.2.16)

for \(k \in{\mathcal{N}}\). Hence

$$T_k = \prod^k_{l=N} \frac{z}{a+l} T_{N-1} \leq \biggl( \frac{z}{a+N} \biggr)^{k-N+1} T_{N-1} , $$

for \(N \in{\mathcal{N}}\) and k∈{N,N+1,N+2,…}. This allows us to obtain the following bound on ϵ N :

$$ \epsilon_N < \sum ^{\infty}_{k=N} \biggl( \frac{z}{a+N} \biggr)^{k-N+1} T_{N-1} = \sum^{\infty}_{k=1} \biggl( \frac{z}{a+N} \biggr)^k T_{N-1} = \frac{z}{a+N-z} T_{N-1} , $$
(13.2.17)

for each \(N \in \{ N^*, N^*+1, N^*+2, \dots \}\), where

$$N^* := \min \bigl\{ n \in \{ 0, 1, 2, \dots \} \bigm\vert z < a+n \bigr\} . $$

In Algorithm 13.1 below, we present pseudo-code for an algorithm which computes the non-central chi-squared distribution. In words, the algorithm proceeds as follows: we specify a desired level of accuracy, say ϵ∈(0,1). Next, we compute N*. Obtaining N* is crucial, as our error bound in Eq. (13.2.17) only applies for N ≥ N*. We then compute \(\tilde{\mathcal{P}}_{N^{*}}(a,b,z)\), and subsequently check the truncation error incurred via Eq. (13.2.17). We then proceed to add further terms \(A_k T_k\), for k∈{N*, N*+1, N*+2,…}. As soon as the bound for the truncation error ϵ N has fallen below ϵ, we terminate the loop and obtain a value \(\tilde{\mathcal{P}}_{N}(a,b,z) \in( \mathcal{P}(a,b,z) - \epsilon, \mathcal{P}(a,b,z))\), for some N∈{N*, N*+1, N*+2,…}.

Algorithm 13.1

Non-central regularized incomplete gamma function
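A minimal Python sketch of this procedure (function and parameter names are ours, not from Algorithm 13.1) might read as follows; the stopping rule is the error bound (13.2.17).

```python
import math

def ncp_reg_gamma(a, b, z, eps=1e-12, max_terms=100000):
    """Approximate P(a, b, z) = sum_k A_k T_k to within eps, a = delta/2 >= 0,
    b = lam/2 >= 0, z = x/2; truncation controlled by the bound (13.2.17)."""
    if z <= 0:
        # P(a+j, 0) = 0 for a + j > 0; for a = 0 the j = 0 term contributes exp(-b)
        return math.exp(-b) if a == 0 else 0.0
    # T_0 = exp(-z) z^a / Gamma(a+1); for a = 0 this reduces to exp(-z)
    T = math.exp(-z + a * math.log(z) - math.lgamma(a + 1))
    B = math.exp(-b)   # Poisson term exp(-b) b^k / k!
    A = B              # partial Poisson sum A_0
    total = A * T
    for k in range(1, max_terms):
        T *= z / (a + k)        # recursion (13.2.16)
        B *= b / k
        A += B
        total += A * T
        # bound (13.2.17) applies once a + (k+1) > z
        if a + k + 1 > z and z / (a + k + 1 - z) * T < eps:
            break
    return total
```

The returned value lies in (P(a,b,z) − ϵ, P(a,b,z)), since every summand is positive and the discarded tail is bounded by (13.2.17).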

Finally, we discuss the implementation of the algorithm. Recall that the T k can be computed recursively using Eq. (13.2.16), with only one multiplication and division required to compute the next term. Lastly, A k admits the representation

$$A_k = \sum^k_{j=0} \frac{\exp \{ {-} b \} b^j }{j!} = A_{k-1} + \frac{\exp \{ {-}b \} b^k }{k!} = A_{k-1} + B_k , $$

where

$$B_k = \frac{\exp \{ {-} b \} b^{k} }{k!} = \frac{b}{k} B_{k-1} . $$

Hence we can also obtain the A k recursively, with one multiplication, division, and addition required. This means that we can compute \(\tilde{\mathcal{P}}_{N}(a,b,z)\) in linear time, i.e. using O(N) operations. In a detailed study, Dyrting (2004) discovered that the algorithm outlined above performs well for small and moderate values of b. For large values of b, the series in Eq. (13.2.12) converges slowly, meaning a large number of terms have to be used to achieve a particular precision ϵ. Furthermore, underflow problems can occur, as the individual terms in the series are small.

To remedy this shortcoming, Hulley (2009) fixed a maximum number of terms to be used in the summation. Once this limit is reached, an analytical approximation to the non-central incomplete gamma function is used. For this there are numerous possibilities, see Johnson et al. (1995), Sect. 29.8. We follow the advice of Schroder (1989), who recommends the analytic approximation due to Sankaran (1963),

$$\mathcal{P}(a,b,z) \approx\varPhi(x) , $$

where Φ denotes the standard normal cumulative distribution function and

$$x := - \frac{1 - h p \bigl( 1 - h + \frac{(2-h)m p}{2} \bigr) - \bigl( \frac{z}{a+b} \bigr)^h}{h \sqrt{2 p (1 + m p)}} $$

with

$$h = 1 - \frac{2}{3} \frac{(a+b)(a+3 b)}{(a+2b)^2} , \qquad p = \frac {1}{2} \frac{a+2b}{(a+b)^2} , \qquad m = (h-1) (1-3 h) . $$

This is a robust and efficient scheme, see Dyrting (2004). In addition, the approximation improves as the value of b increases, which is of particular relevance to us, as our original scheme performs worse as b increases. The pseudo-code in Algorithm 13.1 incorporates this approximation.
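A direct transcription of Sankaran's approximation (the function name is ours; the standard normal distribution function is evaluated via math.erf) is sketched below.

```python
import math

def sankaran_ncp_gamma(a, b, z):
    """Analytic approximation to P(a, b, z), intended for large non-centrality b."""
    h = 1.0 - (2.0 / 3.0) * (a + b) * (a + 3 * b) / (a + 2 * b) ** 2
    p = 0.5 * (a + 2 * b) / (a + b) ** 2
    m = (h - 1.0) * (1.0 - 3.0 * h)
    num = 1.0 - h * p * (1.0 - h + 0.5 * (2.0 - h) * m * p) - (z / (a + b)) ** h
    x = -num / (h * math.sqrt(2.0 * p * (1.0 + m * p)))
    # standard normal cumulative distribution function Phi(x)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

The approximation treats (χ²_δ(λ)/(δ+λ))^h as approximately Gaussian, which is why its quality improves as b grows.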

13.3 The Doubly Non-central Beta Distribution

We first introduce the (central) beta random variable, then the singly non-central beta random variable, and finally the doubly non-central beta random variable, all with strictly positive shape parameters. However, in Sect. 3.3, we presented formulas for exchange options in terms of the non-central beta distribution with one shape parameter assuming the value zero, see Eq. (3.3.16). Hence in this section, we follow Hulley (2009) and extend the doubly non-central beta distribution to allow one shape parameter to assume the value zero. In Sect. 13.4, we show how to compute the doubly non-central beta distribution.

It is well-known that the (central) beta random variable with shape parameters δ 1/2>0 and δ 2/2>0 admits the following representation in terms of chi-squared random variables,

$$ \beta_{\delta_1, \delta_2} := \frac{\chi^2_{\delta_1}}{\chi ^2_{\delta _1} + \chi^2_{\delta_2}} , $$
(13.3.18)

see Johnson et al. (1995), Chap. 25. As chi-squared random variables are strictly positive, \(\beta_{\delta_{1}, \delta_{2}}\) assumes values in (0,1). The distribution of \(\beta_{\delta_{1}, \delta_{2}}\) can be expressed in terms of the regularized incomplete beta function,

$$ P(\beta_{\delta_1, \delta_2} \leq x) = I_x \biggl(\frac{\delta_1}{2} , \frac {\delta_2}{2} \biggr) , $$
(13.3.19)

for x∈(0,1), where

$$ I_z(a,b) := \frac{\varGamma(a+b)}{\varGamma(a) \varGamma(b)} \int^z_0 t^{a-1} (1-t)^{b-1}\, dt , $$
(13.3.20)

for all z∈[0,1] and a,b>0. We now define the singly non-central beta distribution, with shape parameters δ 1/2>0 and δ 2/2>0 and non-centrality parameter λ>0, which is given by

$$ \beta_{\delta_1, \delta_2} ( \lambda, 0) := \frac{\chi^2_{\delta_1} (\lambda)}{\chi^2_{\delta_1} (\lambda) + \chi^2_{\delta_2}} . $$
(13.3.21)

This distribution was introduced in Tang (1938) and Patnaik (1949), in connection with the power function for the analysis of variance tests. We remark that (13.3.21) is referred to as Type I non-central beta random variable in Chattamvelli (1995), distinguishing it from a Type II non-central beta random variable, given by

$$\beta_{\delta_1, \delta_2} (0, \lambda) := 1- \beta_{\delta_1, \delta _2} (\lambda, 0) = \frac{\chi^2_{\delta_2}}{\chi^2_{\delta_1} (\lambda) + \chi^2_{\delta_2}} . $$

The doubly non-central beta distribution, with shape parameters δ₁/2>0 and δ₂/2>0 and non-centrality parameters λ₁>0 and λ₂>0, is given by

$$ \beta_{\delta_1, \delta_2} (\lambda_1, \lambda_2) := \frac{\chi ^2_{\delta_1} (\lambda_1)}{\chi^2_{\delta_1}(\lambda_1) + \chi ^2_{\delta _2} (\lambda_2)} . $$
(13.3.22)

We recall from Eq. (13.1.4) that the distribution of the non-central chi-squared distribution could be expressed as a Poisson weighted mixture of central chi-squared distributions. Analogously, the distribution of the non-central beta distribution can be expressed as a Poisson weighted mixture of central beta distributions

$$ P\bigl( \beta_{\delta_1, \delta_2} (\lambda, 0) \leq x\bigr) = \sum^{\infty }_{j=0} \frac{\exp \{ {-} \lambda/2 \} (\lambda/2)^j }{j!} P( \beta_{\delta_1 + 2j, \delta_2} \leq x) , $$
(13.3.23)

for all x∈(0,1), δ 1,δ 2>0 and λ>0, and

$$\begin{aligned} & P\bigl(\beta_{\delta_1, \delta_2} ( \lambda_1, \lambda_2) \leq x\bigr) \\ & \quad = \sum^{\infty}_{j=0} \frac{\exp \{ {-} \lambda_1 / 2 \} (\lambda_1 /2)^j}{j!} \sum ^{\infty}_{k=0} \frac{\exp \{ {-} \lambda _2 /2 \} (\lambda_2 / 2)^k }{k!} P ( \beta_{\delta_1 +2 j, \delta_2 + 2 k} \leq x ) , \end{aligned}$$

for all x∈(0,1), δ 1,δ 2>0 and λ 1,λ 2>0.

Now, we discuss how to extend the singly and doubly non-central beta distributions to the case where one of the shape parameters is zero. We remark that the mixture representations above do not allow for this, as the gamma function is not defined at zero. We hence follow Hulley (2009), where techniques from Siegel (1979) were used to extend the non-central chi-squared distribution to include the case of zero degrees of freedom. As with the non-central chi-squared distribution, the non-central beta distribution with one shape parameter equal to zero is no longer continuous, but comprises a discrete part placing mass at the end points of the interval [0,1], and a continuous part assuming values in (0,1). Setting δ₂=0 in Eq. (13.3.21) results in a random variable identically equal to one. However, setting δ₁=0 yields a non-trivial random variable assuming values in [0,1). Similarly, setting δ₁=0 in (13.3.22) results in a non-trivial random variable assuming values in [0,1), and setting δ₂=0 in Eq. (13.3.22) results in a non-trivial random variable assuming values in (0,1]. For the remainder of this section, we set δ₁=0 in Eqs. (13.3.21) and (13.3.22), write δ=δ₂>0, and define

$$ \beta_{0,\delta}(\lambda, 0) := \frac{\chi^2_0 ( \lambda) }{\chi^2_0 (\lambda) + \chi^2_{\delta}} , $$
(13.3.24)

for all δ>0 and λ>0, and

$$ \beta_{0,\delta}(\lambda_1 , \lambda_2) := \frac{\chi^2_{0} (\lambda _1)}{\chi^2_0 (\lambda_1) + \chi^2_{\delta} (\lambda_2)} , $$
(13.3.25)

for all δ>0, and λ₁,λ₂>0. The following result from Hulley (2009) shows how to extend the doubly non-central beta distribution to the case where one of the shape parameters assumes the value zero.

Proposition 13.3.1

Suppose x∈[0,1), δ>0, and λ 1,λ 2>0. Then

$$\begin{aligned} P \bigl( \beta_{0, \delta} (\lambda_1, \lambda_2) \leq x\bigr) =& \sum^{\infty}_{j=0} \frac {\exp \{ {-} \lambda_1 /2 \} ( \lambda_1 /2 )^j }{j!} \sum^{\infty}_{k=0} \frac{\exp \{ {-} \lambda_2 /2 \} (\lambda _2 / 2)^k }{k!} \\ &{}\times P ( \beta_{2j, \delta+ 2k} \leq x ) . \end{aligned}$$
(13.3.26)

Proof

We employ Eqs. (13.3.25) and (13.1.4), (13.1.5), (13.1.6), (13.1.1), to obtain

$$\begin{aligned} & P \bigl(\beta_{0, \delta}(\lambda_1, \lambda_2) \leq x\bigr) \\ & \quad = P \biggl( \chi ^2_0 ( \lambda_1) \leq\frac{x}{1-x} \chi^2_{\delta} (\lambda_2) \biggr) \\ & \quad = \int^{\infty}_0 P \biggl( \chi^2_0 (\lambda_1) \leq\frac {x}{1-x} \xi\biggr) p( \xi, \delta, \lambda_2) \,d \xi \\ & \quad = \sum^{\infty}_{j=0} \frac{\exp \{ {-} \lambda_1 /2 \} (\lambda_1 /2)^j }{j!}\int^{\infty}_0 P \biggl( \chi^2_{2j} \leq \frac {x}{1-x} \xi \biggr) \\ & \qquad {} \times \frac{1}{2} \exp \biggl\{ {-} \frac{\lambda_2 + \xi }{2} \biggr\} \biggl( \frac{\xi}{\lambda_2} \biggr)^{\frac{\delta- 2}{4}} \sum^{\infty}_{k=0} \frac{ ( \sqrt{\lambda_2 \xi} /2 )^{\frac{\delta- 2}{2} +2k}}{k! \varGamma(\delta/2 + k)} \,d \xi \\ & \quad = \sum^{\infty}_{j=0} \frac{\exp \{ {-} \lambda_1 /2 \} (\lambda_1 /2)^j }{j!} \sum^{\infty}_{k=0} \frac{\exp \{ {-} \lambda_2/2 \} }{k!} \frac{1}{2} \biggl( \frac{2}{\lambda_2} \biggr)^{\frac{\delta-2}{2}} \\ & \qquad {} \times\int^{\infty}_0 \frac{ ( \lambda_2 \xi/4 )^{\frac{\delta- 2}{2} +k} \exp \{ {-} \xi/ 2 \} }{\varGamma(\delta/ 2 +k)} P \biggl( \chi^2_{2j} \leq\frac{x \xi}{1-x} \biggr) \,d \xi \\ & \quad = \exp \{ {-} \lambda_1 /2 \} + \sum ^{\infty }_{j=1} \frac{\exp \{ {-} \lambda_1 /2 \} (\lambda_1 /2)^j }{j!} \sum ^{\infty}_{k=0} \frac{\exp \{ {-} \lambda_2 /2 \} (\lambda_2 /2)^k }{k!} \\ & \qquad {} \times\frac{1}{2}\int^{\infty}_0 \frac{(\xi/2)^{\frac{\delta -2}{2} +k} \exp \{ {-} \xi/2 \} }{\varGamma(\delta/2 + k)} \mathcal{P} \biggl(j, \frac{x}{2(1-x)} \xi\biggr) \,d \xi \\ & \quad = \exp \{ {-} \lambda_1 /2 \} + \sum ^{\infty }_{j=1} \frac{ \exp \{ {-} \lambda_1 /2 \} (\lambda_1 /2)^j }{j!} \sum ^{\infty}_{k=0} \frac{\exp \{ {-} \lambda_2 /2 \} (\lambda_2 / 2)^k }{k!} \\ & \qquad {} \times\frac{1}{\varGamma(j) \varGamma(\delta/2 + k)} \int^{\infty}_0 \zeta^{\delta/2 + k -1} \exp \{ {-} \zeta \} \int^{\frac{x \zeta}{1-x}}_0 t^{j-1} \exp \{ {-} t \}\,dt \,d \zeta \\ & \quad = \exp \{ {-} \lambda_1 /2 \} + \sum ^{\infty }_{j=1} \frac{\exp \{ {-} \lambda_1 /2 \} ( \lambda_1 /2)^j }{j!} \sum ^{\infty}_{k=0} \frac{\exp \{ {-} \lambda_2 /2 \} (\lambda_2 /2)^k }{k!} \\ & \qquad {} \times\frac{\varGamma( \delta/2 + j +k )}{\varGamma(j) \varGamma (\delta/2 +k)} \int^{\frac{x}{1-x}}_0 \frac{u^{j-1}}{(1+u)^{\delta /2 + j+k}} \,du \\ & \quad = \exp \{ {-} \lambda_1 /2 \} + \sum ^{\infty }_{j=1} \frac{\exp \{ {-} \lambda_1 /2 \} (\lambda_1 /2)^j}{j!} \sum ^{\infty}_{k=0} \frac{\exp \{ {-} \lambda_2 /2 \} (\lambda_2 / 2)^k }{k!} \\ & \qquad {} \times\frac{\varGamma( \delta/2 + j +k)}{\varGamma(j) \varGamma (\delta/2 + k)} \int^x_0 v^{j-1} (1-v)^{\delta/2 + k - 1} \,dv \\ & \quad = \exp \{ {-} \lambda_1 /2 \} \\ & \qquad {} + \sum^{\infty}_{j=1} \frac{\exp \{ {-} \lambda _1 /2 \} (\lambda_1 /2)^j }{j!} \sum^{\infty}_{k=0} \frac{\exp \{ {-} \lambda_2 /2 \} (\lambda_2 /2)^k }{k!} I_x \biggl(j, \frac{\delta }{2} + k\biggr) \\ & \quad = \exp \{ {-} \lambda_1 /2 \} \\ & \qquad {} + \sum^{\infty}_{j=1} \frac{\exp \{ {-} \lambda_1 /2 \} (\lambda_1 /2)^j}{j!} \sum^{\infty}_{k=0} \frac{\exp \{ {-} \lambda_2 /2 \} (\lambda _2 /2 )^k }{k!} P ( \beta_{2j, \delta+2k} \leq x ) , \end{aligned}$$
(13.3.27)

where we used the transformations ξ/2↦ζ, t/ζu, and u/(1+u)↦v, together with Eq. (13.3.19). We note that since central chi-squared random variables with zero degrees of freedom are equal to zero, the same applies to β 0,δ+2k , for all k∈{0,1,2,…}, see Eq. (13.3.18). Hence we have

$$\exp \{ {-} \lambda_1 /2 \} = \exp \{ {-} \lambda_1 /2 \} \sum^{\infty}_{k=0} \frac{\exp \{ {-} \lambda_2 /2 \} (\lambda_2 /2)^k }{k!} P ( \beta_{0, \delta+2 k} \leq x ) , $$

which completes the proof. □

Inspecting Eq. (13.3.27), we remark that the first term can be interpreted as P(β 0,δ (λ 1,λ 2)=0) and the double sum as the probability P(0<β 0,δ (λ 1,λ 2)≤x) for all x∈(0,1), δ>0 and λ 1,λ 2>0. Hence we can decompose the distribution of β 0,δ (λ 1,λ 2) into a discrete component placing mass exp{−λ 1/2} at zero and a continuous component describing the distribution over (0,1). Finally, setting λ 2=0 we obtain the distribution of a singly non-central beta random variable

$$ P \bigl( \beta_{0, \delta} (\lambda, 0) \leq x\bigr) = \sum^{\infty}_{j=0} \frac {\exp \{ {-} \lambda/2 \} (\lambda/2)^j }{j!} P ( \beta _{2j, \delta} \leq x ) , $$
(13.3.28)

for all x∈[0,1), δ>0 and λ>0. Finally, we present the extended versions of the Type II beta random variables,

$$ \beta_{\delta, 0} ( 0, \lambda) := 1 - \beta_{0, \delta}( \lambda , 0) = \frac{\chi^2_{\delta}}{\chi^2_0 ( \lambda) + \chi^2_{\delta}} , $$
(13.3.29)

for all δ>0 and λ>0, and

$$ \beta_{\delta, 0} ( \lambda_2, \lambda_1) := 1 - \beta_{0, \delta} (\lambda_1, \lambda_2 ) = \frac{\chi^2_{\delta} (\lambda_2) }{\chi^2_0 (\lambda_1) + \chi^2_{\delta} (\lambda_2)} , $$
(13.3.30)

where δ>0 and λ 1,λ 2>0, whose values lie in (0,1]. Equations (13.3.30), (13.3.26), and (13.3.18) yield

$$\begin{aligned} &P\bigl( \beta_{\delta, 0}(\lambda_2, \lambda_1) \leq x\bigr) \\ & \quad = 1 - P\bigl( \beta_{0, \delta}( \lambda_1, \lambda_2) < 1-x\bigr) \\ & \quad = 1 - \sum^{\infty}_{j=0} \frac{\exp \{ {-} \lambda_1 /2 \} ( \lambda_1 /2)^j }{j!} \sum^{\infty}_{k=0} \frac{ \exp \{ {-} \lambda_2 /2 \} ( \lambda_2 /2)^k }{k!} P ( \beta_{2 j , \delta+2 k} < 1-x ) \\ & \quad = \sum^{\infty}_{j=0} \frac{\exp \{ {-} \lambda_1 /2 \} (\lambda_1 /2)^j }{j!} \sum ^{\infty}_{k=0} \frac{\exp \{ {-} \lambda _2 /2 \} ( \lambda_2 /2)^k }{k!} \bigl( 1 - P ( \beta _{2 j, \delta+2 k} < 1 - x ) \bigr) \\ & \quad = \sum^{\infty}_{j=0} \frac{\exp \{ {-} \lambda_1 /2 \} (\lambda_1 /2 )^j }{j!} \sum ^{\infty}_{k=0} \frac{\exp \{ {-} \lambda _2 /2 \} (\lambda_2 /2)^k }{k!} P ( \beta_{\delta+2 k, 2j} \leq x ) , \end{aligned}$$

for all x∈(0,1], δ>0 and λ 1,λ 2>0. We note that β δ+2k,0 is identically equal to one for all k∈{0,1,2,…}. Hence β δ,0(λ 2,λ 1) can be decomposed into a discrete component that places mass exp{−λ 1/2} at one and a continuous component taking values in (0,1) for all δ>0, λ 1,λ 2>0. Similarly, we obtain

$$P \bigl( \beta_{\delta, 0} ( 0 , \lambda) \leq x \bigr) = \sum ^{\infty }_{j=0} \frac{\exp \{ {-} \lambda/2 \} ( \lambda/2)^j }{j!} P ( \beta_{\delta, 2 j} \leq x ) , $$

for all x∈(0,1], δ>0 and λ>0. Again, β δ,0 is identically equal to one, hence β δ,0(0,λ) can be decomposed into a discrete part placing mass exp{−λ/2} at one and a continuous component assuming values in (0,1).
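The decompositions above can be illustrated by simulation. Using the representation of χ²₀(λ) as a Poisson(λ/2) mixture of central χ²_{2N} random variables (a sum of N independent Exp(1/2) draws), and the additivity χ²_δ(λ) =ᵈ χ²_δ + χ²₀(λ) with independent summands, the following sketch (our own construction, not from the text) estimates the point mass of β₀,δ(λ₁,λ₂) at zero.

```python
import math, random

def sample_chisq0(lam, rng):
    """Sample chi^2_0(lam): draw N ~ Poisson(lam/2), then sum N draws of Exp(1/2)."""
    # Knuth's Poisson sampler, adequate for moderate lam
    threshold, n, prod = math.exp(-lam / 2), 0, rng.random()
    while prod > threshold:
        n += 1
        prod *= rng.random()
    return sum(rng.expovariate(0.5) for _ in range(n))  # each draw is a chi^2_2

def sample_beta0(delta, lam1, lam2, rng):
    """Sample beta_{0,delta}(lam1, lam2) via (13.3.25)."""
    num = sample_chisq0(lam1, rng)
    # chi^2_delta(lam2) = chi^2_delta + chi^2_0(lam2), independent summands
    den = num + rng.gammavariate(delta / 2, 2.0) + sample_chisq0(lam2, rng)
    return num / den

rng = random.Random(0)
n = 20000
zero_mass = sum(sample_beta0(3.0, 2.0, 1.5, rng) == 0.0 for _ in range(n)) / n
```

With λ₁ = 2, the estimated mass at zero should lie close to exp{−λ₁/2} = e⁻¹ ≈ 0.368.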

13.4 Computing the Doubly Non-central Beta Distribution

In this section, we present an algorithm, which shows how to implement the doubly non-central beta distribution. The algorithm is based on Hulley (2009), where an idea from Posten (1989, 1993) is used to enhance an algorithm presented by Seber (1963) for computing the distribution function of standard singly non-central beta random variables.

We define the doubly non-central regularized incomplete beta function

$$ I_z(a,b,c,d) := \sum ^{\infty}_{j=0} \frac{\exp \{ {-} c \} c^j }{j!} \sum ^{\infty}_{k=0} \frac{\exp \{ {-} d \} d^k }{k!} I_z(a+j, b+k) , $$
(13.4.31)

for all z∈[0,1] and a,b,c,d≥0, such that either a>0 or b>0, and where I z (a,b) is given by Eq. (13.3.20). We formally set

$$ I_z(0,b) := 1 , \quad z \in[0,1] , \qquad I_z(a,0) := \begin{cases} 0 , & z \in[0,1) , \\ 1 , & z = 1 , \end{cases} $$
(13.4.32)

for all a,b>0. This is necessary, as the gamma functions in Eq. (13.3.20) are not well-defined at zero. We note that we can express the distribution functions of both the central and the non-central beta distributions in terms of the doubly non-central regularized incomplete beta function in Eq. (13.4.31). In particular, we have

$$P( \beta_{\delta_1, \delta_2} \leq x) = I_x \biggl( \frac{\delta_1}{2}, \frac {\delta_2}{2}, 0 , 0\biggr) , $$

for all x∈(0,1) and δ 1,δ 2>0. The distribution of the Type I singly non-central beta distribution satisfies

$$P\bigl( \beta_{\delta_1, \delta_2} ( \lambda, 0) \leq x\bigr) = I_x \biggl( \frac {\delta _1}{2}, \frac{\delta_2}{2} , \frac{\lambda}{2} , 0 \biggr) , $$

for all x∈(0,1) (respectively x∈[0,1) ), δ 1>0 (respectively δ 1=0), δ 2>0 and λ>0, while for the Type II singly non-central beta distribution we obtain

$$P \bigl( \beta_{\delta_2, \delta_1} (0, \lambda) \leq x\bigr) = I_x \biggl( \frac {\delta _2}{2} , \frac{\delta_1}{2} , 0 , \frac{\lambda}{2} \biggr) $$

for all x∈(0,1) ( respectively x∈(0,1]), δ 1>0 (respectively δ 1=0), δ 2>0 and λ>0. Lastly, the distribution function of the doubly non-central beta distribution satisfies the equality

$$P\bigl(\beta_{\delta_1, \delta_2} (\lambda_1, \lambda_2) \leq x\bigr) = I_x \biggl( \frac{\delta_1}{2}, \frac{\delta_2}{2}, \frac{\lambda_1}{2} , \frac {\lambda_2}{2} \biggr) , $$

for all x∈(0,1) (respectively x∈[0,1); x∈(0,1]), δ 1,δ 2>0 (respectively δ 1=0, δ 2>0; δ 1>0, δ 2=0) and λ 1,λ 2>0. We assume that one of the following parameter combinations is in force:

  1. (i)

    z∈(0,1), \(a,b \in{\mathcal{N}}\), c,d>0;

  2. (ii)

    z∈[0,1), a=0, \(b \in{\mathcal{N}}\), c,d>0;

  3. (iii)

    z∈(0,1], \(a \in{\mathcal{N}}\), b=0, c,d>0.

Assuming condition (i) is in force, we obtain from Seber (1963)

$$\begin{aligned} I_z(a,b,c,d) =& \exp \bigl\{ {-} c (1-z) \bigr\} z^a \sum^{\infty}_{k=0} \frac{\exp \{ {-} d \} d^k }{k!} \sum^{b+k-1}_{n=0} (1-z)^n L^{(a-1)}_n (-c z) \\ =& \exp \bigl\{ {-} c (1-z) \bigr\} z^a \sum^{\infty}_{k=0} P_k T_k , \end{aligned}$$
(13.4.33)

where we have defined the Poisson weights

$$P_k := \frac{\exp \{ {-} d \} d^k }{k!} , \quad k \in \{ 0 , 1, 2 , \dots \} , $$

and

$$T_k := \sum^{b+k-1}_{n=0} (1-z)^n L^{(a-1)}_n ( - c z) , \quad k \in \{ 0, 1, 2, \dots \} . $$
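Before turning to conditions (ii) and (iii), the expansion (13.4.33) can be sketched directly for condition (i); the Laguerre values \(L^{(a-1)}_n\) are generated by the standard three-term recurrence (stated in Eq. (13.4.37) below). Function names are ours, and the outer sum is plainly truncated at a fixed number of terms rather than by the N ϵ rule discussed below.

```python
import math

def laguerre_values(alpha, x, nmax):
    """L_0^{(alpha)}(x), ..., L_nmax^{(alpha)}(x) via the three-term recurrence."""
    vals = [1.0]
    if nmax >= 1:
        vals.append(alpha + 1.0 - x)
    for n in range(2, nmax + 1):
        vals.append(((2 * n + alpha - 1 - x) * vals[n - 1]
                     - (n + alpha - 1) * vals[n - 2]) / n)
    return vals

def seber_inc_beta(z, a, b, c, d, nterms=80):
    """I_z(a, b, c, d) via the expansion (13.4.33), for integer a, b >= 1, c, d > 0."""
    L = laguerre_values(a - 1, -c * z, b + nterms - 2)
    T = sum((1 - z) ** n * L[n] for n in range(b))   # T_0
    P = math.exp(-d)                                  # Poisson weight P_0
    total = P * T
    for k in range(1, nterms):
        P *= d / k                                    # next Poisson weight
        T += (1 - z) ** (b + k - 1) * L[b + k - 1]    # T_k extends T_{k-1} by one term
        total += P * T
    return math.exp(-c * (1 - z)) * z ** a * total
```

Note that each T k extends its predecessor by a single summand, so the whole truncated sum costs only one Laguerre evaluation per term.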

When condition (ii) is satisfied with z=0, the problem is trivial, I 0(0,b,c,d)=exp{−c}. But for condition (ii) and z∈(0,1), Eqs. (13.3.20) and (13.4.32) yield I z (j,b+k)=1−I 1−z (b+k,j), for each j,k∈{0,1,2,…}, and hence

$$\begin{aligned} I_z(0,b,c,d) =& \sum^{\infty}_{j=0} \frac{\exp \{ {-} c \} c^j}{j!} \sum^{\infty}_{k=0} \frac{\exp \{ {-} d \} d^k }{k!} \bigl( 1 - I_{1-z} (b+k, j) \bigr) \\ =& 1 - \sum^{\infty}_{j=1} \frac{\exp \{ {-}c \} c^j}{j!} \sum^{\infty}_{k=0} \frac{\exp \{ {-} d \} d^k}{k!} I_{1-z} (b+k,j) \\ =& 1 - \exp \{ {-} d z \} (1-z)^b \sum ^{\infty }_{j=1} \frac{\exp \{ {-} c \} c^{j} }{j!} \sum ^{j-1}_{n=0} z^n L^{(b-1)}_n \bigl( -d (1-z)\bigr) \\ =& 1 - \exp \{ {-} d z \} (1-z)^b \sum^{\infty}_{j=1} P_j T_j , \end{aligned}$$
(13.4.34)

from (13.4.32) and Seber (1963), where in this case \(P_j := \frac{\exp \{ - c \} c^j}{j!}\) and \(T_j := \sum^{j-1}_{n=0} z^n L^{(b-1)}_n ( -d (1-z))\). Finally, if the arguments satisfy condition (iii) with z=1, then the problem is again trivial since I 1(a,0,c,d)=1. On the other hand, if the arguments satisfy condition (iii) with z∈(0,1), then applying (13.4.32) and Seber (1963) yields

$$\begin{aligned} I_z(a,0,c,d) =& \sum^{\infty}_{j=0} \frac{\exp \{ {-} c \} c^j }{j!} \sum^{\infty}_{k=1} \frac{\exp \{ {-} d \} d^k }{k!} I_z( a+j,k) \\ =& \exp \bigl\{ {-} c (1-z) \bigr\} z^a \sum ^{\infty }_{k=1} \frac{ \exp \{ {-} d \} d^k }{k!} \sum ^{k-1}_{n=0} (1-z)^n L^{(a-1)}_n (- c z) \\ =& \exp \bigl\{ {-} c (1-z) \bigr\} z^a \sum^{\infty}_{k=1} P_k T_k . \end{aligned}$$
(13.4.35)

We remark that by \(L^{(\alpha)}_{n}\) we denote the Laguerre polynomials, which are defined for n∈{0,1,2,…} , α∈ℜ∖{−1,−2,…}. However, for α∈{0,1,2,…} we have

$$ L^{(\alpha)}_n ( \zeta) = \sum ^n_{m=0} (-1)^m \binom{n + \alpha}{n - m} \frac {\zeta^m}{m!} , $$
(13.4.36)

for all ζ∈ℜ and each n∈{0,1,2,…}. Equation (13.4.36) implies the following recurrence relation, see also Abramowitz and Stegun (1972), Chap. 22,

$$ \begin{aligned}[c] L^{(\alpha)}_0 (\zeta) &= 1 , \\ L^{(\alpha)}_1 (\zeta) &=\alpha+ 1 - \zeta, \\ n L^{(\alpha)}_n ( \zeta) &= ( 2 n +\alpha- 1 - \zeta) L^{(\alpha )}_{n-1} (\zeta) - (n+\alpha-1) L^{(\alpha)}_{n-2} (\zeta) , \end{aligned} $$
(13.4.37)

for all ζ∈ℜ, and for all α∈{0,1,2,…} and n∈{2,3,…}. Comparing Eqs. (13.4.33), (13.4.34), and (13.4.35), we note that it suffices to focus on condition (i), as conditions (ii) and (iii) can be covered using the same algorithm. Regarding the outer, infinite sum, we employ an idea from Posten (1989, 1993), and sum the terms in decreasing order of the Poisson weights. The maximal Poisson weight is approximately attained at the index value \(k^* = \lceil d \rceil\), hence we truncate the outer sum to the range of index values \((k^* - N_{\epsilon})^+, \dots, k^* + N_{\epsilon}\), where N ϵ is given by

$$ N_{\epsilon} := \min \Biggl\{ N \in \{ 0, 1, 2, \dots \} \bigm\vert\sum^{k^* +N}_{k=(k^* - N)^+} P_k > 1 - \epsilon \Biggr\} , $$
(13.4.38)

i.e. we approximate \(I_z(a,b,c,d)\) using \(\tilde{I}_{N_{\epsilon},z}(a,b,c,d)\), which is given by

$$ \tilde{I}_{N_{\epsilon},z} (a,b,c,d) := \exp \bigl\{ {-}c (1 -z) \bigr\} z^a \sum^{k^* + N_{\epsilon}}_{k=(k^* - N_{\epsilon})^+} P_k T_k . $$
(13.4.39)

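The search for \(N_{\epsilon}\) can be sketched in Python as follows. This is a minimal sketch, assuming only that \(P_k = e^{-d} d^k / k!\) are Poisson weights with d>0; the function name n_epsilon is ours, and the weights are evaluated in log-space to avoid overflow of \(d^k/k!\) for large d.

```python
import math

def n_epsilon(d, eps=1e-12):
    """Smallest N such that the Poisson(d) weights P_k, summed over
    k = (k*-N)^+, ..., k*+N, exceed 1 - eps, cf. (13.4.38).
    Here k* = ceil(d) approximately maximizes P_k = exp(-d) d^k / k!."""
    k_star = math.ceil(d)
    # log P_k, computed via lgamma to stay in a safe floating-point range
    log_p = lambda k: -d + k * math.log(d) - math.lgamma(k + 1)
    total = math.exp(log_p(k_star))
    N = 0
    while total <= 1.0 - eps:
        N += 1
        total += math.exp(log_p(k_star + N))
        if k_star - N >= 0:
            total += math.exp(log_p(k_star - N))
    return N
```

Since the Poisson weights sum to one and each iteration adds the two next-largest remaining weights, the loop terminates for any ϵ>0.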
We now aim to produce a good bound on the approximation error

$$ I_z(a,b,c,d) - \tilde{I}_{N_{\epsilon} , z} (a,b,c,d) . $$
(13.4.40)

From Seber (1963) we have

$$ \exp \bigl\{ {-} c (1-z) \bigr\} z^a T_k = I_z (a,b+k,c,0) \in(0,1), $$
(13.4.41)

hence

$$\begin{aligned} &I_z(a,b,c,d) - \tilde{I}_{N_{\epsilon}, z} (a,b,c,d) \\ & \quad = \exp \bigl\{ {-} c (1-z) \bigr\} z^a \Biggl( \sum ^{ (k^* - N_{\epsilon})^+ -1 }_{k=0} P_k T_k + \sum ^{\infty}_{k = k^* + N_{\epsilon} +1} P_k T_k \Biggr) \\ & \quad \leq \sum^{(k^* - N_{\epsilon})^+ -1}_{k=0} P_k + \sum^{\infty }_{k = k^* +N_{\epsilon} +1} P_k = 1 - \sum ^{k^* +N_{\epsilon} }_{k = (k^* -N_{\epsilon})^+} P_k < \epsilon, \end{aligned}$$

where we used the fact that the Poisson weights sum to one, and the last inequality follows from the definition of \(N_{\epsilon}\). Hence we have

$$\tilde{I}_{N_{\epsilon},z} (a,b,c,d) \in\bigl(I_z (a,b,c,d) - \epsilon, I_z(a,b,c,d) \bigr) , $$

so the truncation error is bounded by ϵ. Clearly, the value of \(N_{\epsilon}\) cannot be determined explicitly in advance, but can only be determined by iteratively adding Poisson weights until their sum exceeds 1−ϵ. The \(P_k\) satisfy

$$P_k = P_{k-1} \frac{d}{k} , $$

for each \(k \in{\mathcal{N}}\), which allows for a rapid computation of these weights. Finally, we attend to the inner sum in Eq. (13.4.33). In Algorithm 13.2 below, we make use of a list which stores the values of the Laguerre polynomials used to compute the \(T_k\). Firstly, we calculate the Laguerre polynomials needed to compute \(T_{k^{*}}\), and store them in a list. Thereafter, we use the following iterative scheme, based on (13.4.37), to compute \(T_{k^{*}+1} , T_{k^{*}-1}, T_{k^{*}+2}, T_{k^{*}-2} , \dots\):

for each \(k \in{\mathcal{N}}\), where \(A_k := a+2b+2k-4+cz\) and \(B_k := a+b+k-3\). We present in Algorithm 13.2 below the algorithm for computing the doubly non-central regularized incomplete beta function, as given in Hulley (2009). The term Laglist denotes the list of Laguerre polynomials, list 〈〈i〉〉 references element i of list, and by the symbol list ⊎ x we mean that the value x is appended to list.

Algorithm 13.2

Doubly non-central regularized incomplete beta function
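A hedged Python sketch of the core of such a computation can be given for the case (13.4.35), i.e. condition (iii) with b=0, where \(P_k = e^{-d} d^k/k!\) and \(T_k = \sum_{n=0}^{k-1}(1-z)^n L^{(a-1)}_n(-cz)\). For brevity we use a simpler one-sided truncation of the outer sum in place of the two-sided rule (13.4.38); the function names are ours, and the Laguerre values are generated by the recurrence (13.4.37), with \(L^{(\alpha)}_1(\zeta)=\alpha+1-\zeta\).

```python
import math

def laguerre_list(n_max, alpha, zeta):
    """L_0^{(alpha)}(zeta), ..., L_{n_max}^{(alpha)}(zeta) via the
    three-term recurrence (13.4.37)."""
    L = [1.0, alpha + 1.0 - zeta]
    for n in range(2, n_max + 1):
        L.append(((2 * n + alpha - 1 - zeta) * L[n - 1]
                  - (n + alpha - 1) * L[n - 2]) / n)
    return L[:n_max + 1]

def I_tilde(a, z, c, d, eps=1e-12):
    """Truncated series (13.4.35) for I_z(a,0,c,d):
    exp(-c(1-z)) z^a sum_k P_k T_k, with P_k the Poisson(d) weights and
    T_k = sum_{n=0}^{k-1} (1-z)^n L_n^{(a-1)}(-c z)."""
    # one-sided truncation: take k = 1, ..., K with Poisson tail below eps
    K, P, cum = 1, math.exp(-d) * d, math.exp(-d)
    while cum + P <= 1.0 - eps:
        cum += P
        K += 1
        P *= d / K
    L = laguerre_list(K - 1, a - 1.0, -c * z)
    T, total, P_k = 0.0, 0.0, math.exp(-d)
    for k in range(1, K + 1):
        P_k *= d / k                        # P_k = exp(-d) d^k / k!
        T += (1.0 - z) ** (k - 1) * L[k - 1]  # extend T_k by one term
        total += P_k * T
    return math.exp(-c * (1.0 - z)) * z ** a * total
```

In a full implementation of Algorithm 13.2 the list of Laguerre values plays the role of Laglist, and the two-sided truncation around \(k^*\) replaces the simple tail test used here.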

13.5 Inverting Laplace Transforms

In this section, we discuss how to compute values of a function f:ℜ+→ℜ from its Laplace transform

$$\hat{f}(s) = \int^{\infty}_0 \exp \{ {-} s t \} f(t) \,dt , $$

where s is a complex variable with a nonnegative real part. We present the Euler method from Abate and Whitt (1995), which is based on the Bromwich contour inversion integral. We let this contour be any vertical line s=a so that \(\hat{f}(s)\) has no singularities on or to the right of it, and hence obtain, as in Abate and Whitt (1995),

$$f(t)= \frac{2 \exp \{ a t \} }{\pi} \int^{\infty}_0 Re \bigl( \hat{f} (a + \imath u ) \bigr) \cos ( u t) \,du . $$

The integral is evaluated numerically using the trapezoidal rule. Specifying the step size as h gives

$$f(t) \approx f_h(t) := \frac{h \exp \{ a t \}}{\pi} Re \bigl(\hat {f} (a) \bigr) + \frac{2 h \exp \{ a t \}}{\pi} \sum^{\infty }_{k=1} Re \bigl( \hat{f} (a + \imath k h )\bigr) \cos ( k h t) . $$

Setting \(h=\frac{\pi}{2 t}\) and \(a= \frac{A}{2 t}\), one arrives at the nearly alternating series

$$ f_h(t) = \frac{\exp \{ A /2 \} }{2t} Re \biggl( \hat{f} \biggl( \frac {A}{2t} \biggr) \biggr) + \frac{\exp \{ A /2 \}}{t} \sum ^{\infty}_{k=1} (-1)^k Re \biggl( \hat{f} \biggl( \frac{A + 2 k \pi\imath}{2t} \biggr) \biggr) . $$
(13.5.42)

Regarding the parameters, we need to know how to choose A: in Abate and Whitt (1995) it is shown that, in order to achieve a discretization error of the order \(10^{-\gamma}\), we should set \(A=\gamma \log10\). Consequently, truncating the series after n terms, we have

$$ s_n(t) = \frac{\exp \{ A/2 \}}{2 t} Re\biggl( \hat{f} \biggl(\frac{A}{2 t} \biggr)\biggr) + \frac{\exp \{ A/2 \}}{t} \sum ^n_{k=1} (-1)^k a_k(t) , $$
(13.5.43)

where

$$a_k(t) = Re \biggl( \hat{f} \biggl( \frac{A + 2 k \pi\imath}{2t} \biggr) \biggr) . $$

Lastly, we apply the Euler summation, which explains the name of the algorithm. In particular, we apply the Euler summation to m terms after the initial n terms, so that the Euler summation, which approximates (13.5.42), is given by

$$ E(m,n,t) = \sum^m_{k=0} \binom{m}{k} 2^{-m} s_{n+k}(t) , $$
(13.5.44)

where \(s_n(t)\) is given by (13.5.43). We note that E(m,n,t) is the weighted average of the partial sums \(s_n(t), \dots, s_{n+m}(t)\) with respect to a binomial probability distribution with parameters m and \(p=\frac{1}{2}\). In Abate and Whitt (1995), the parameters m=11 and n=15 are used, and it is suggested to increase n as necessary. In the following subsection, we illustrate how to use this algorithm to recover a bivariate probability density function. Using Lie symmetry methods, the first inversion can be performed analytically; for the second we use the Euler method presented in this section, given by (13.5.44).
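A minimal Python sketch of the Euler method, combining the partial sums (13.5.43) with the Euler summation (13.5.44); the function name euler_inversion and the default A=18.4 (i.e. γ≈8 in A=γlog10) are our choices.

```python
import math

def euler_inversion(fhat, t, m=11, n=15, A=18.4):
    """Euler method of Abate and Whitt (1995): approximate f(t) from its
    Laplace transform fhat via (13.5.43) and the Euler summation (13.5.44)."""
    x = math.exp(A / 2.0) / t

    def s(N):
        # partial sum s_N(t) of the nearly alternating series (13.5.43)
        total = 0.5 * x * fhat(complex(A / (2.0 * t))).real
        total += x * sum((-1) ** k * fhat((A + 2j * k * math.pi) / (2.0 * t)).real
                         for k in range(1, N + 1))
        return total

    # E(m, n, t): binomial average of the partial sums s_n(t), ..., s_{n+m}(t)
    return sum(math.comb(m, k) * s(n + k) for k in range(m + 1)) / 2.0 ** m
```

Recomputing s(N) for each k is wasteful but keeps the sketch short; caching the terms \(a_k(t)\) makes the cost linear in n+m transform evaluations.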

13.5.1 Recovering the Joint Distribution to Price Realized Variance

In this subsection, we apply the methodology discussed in this section to the pricing of realized variance derivatives, in particular, options on volatility, see Sect. 8.5.2. To price such products, we need to recover the joint distribution of \(( Y_{T}, \int^{T}_{0} \frac{1}{Y_{t}}\,dt )\). At first sight, obtaining the joint distribution should entail the inversion of a double Laplace transform. However, since Lie symmetry methods provide us with fundamental solutions, we already have the inversion with respect to one of the variables. Consequently, one only needs to invert a one-dimensional Laplace transform numerically, to obtain the joint density over ℜ+×ℜ+. We subsequently map the joint density into [0,1]2, following the discussion in Kuo et al. (2008), and hence can employ a randomized quasi-Monte Carlo point set to compute prices. Assuming that the one-dimensional Laplace transform can be inverted at a constant computational complexity, the resulting computational complexity is O(N), where N is the number of two-dimensional quasi-Monte Carlo points employed.
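The mapping and quadrature step can be illustrated with a hedged toy example. We integrate the product density \(e^{-x-y}\) over ℜ+×ℜ+ (a stand-in for the actual joint density, not the density itself), mapping [0,1]2 to the positive quadrant via x=u/(1−u), which is one convenient choice (Kuo et al. (2008) discuss the general methodology), and we use a plain two-dimensional Halton point set as a stand-in for a randomized quasi-Monte Carlo rule; all function names are ours.

```python
import math

def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integral_quadrant(density, N=4096):
    """Integrate density over R_+ x R_+ with N Halton points, after mapping
    (0,1)^2 -> R_+^2 via x = u/(1-u), whose Jacobian factor is 1/(1-u)^2."""
    total = 0.0
    for i in range(1, N + 1):
        u, v = halton(i, 2), halton(i, 3)
        x, y = u / (1.0 - u), v / (1.0 - v)
        total += density(x, y) / ((1.0 - u) ** 2 * (1.0 - v) ** 2)
    return total / N
```

The cost is one density evaluation per quasi-Monte Carlo point, i.e. O(N), as discussed above.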

We numerically invert the one-dimensional Laplace transform given in (5.4.16) using the Euler method from Abate and Whitt (1995), which was also employed in Hulley and Platen (2008), see also Craddock et al. (2000). We display the joint density in Fig. 13.5.1.

Fig. 13.5.1

Joint density of \((Y_{T}, \int^{T}_{0} \frac{1}{Y_{t}}\,dt )\) for \(Y_0=1\), \(\eta=0.052\), T=1

Inverting the Laplace transform produces the joint density of \(( Y_{T}, \int^{T}_{0} \frac{1}{Y_{t}}\,dt )\) over ℜ+×ℜ+. One could now employ a product rule, such as the tensor product of two one-dimensional trapezoidal rules, using N points for each co-ordinate, and perform the numerical integration using N 2 points, at a computational complexity of O(N 2), assuming the Laplace inversion can be performed in constant time. However, instead we map the joint distribution into the unit square, and employ an N point quasi-Monte Carlo rule to obtain a quadrature rule whose computational complexity is only O(N), see Sect. 12.2.