Abstract
It is the aim of Chap. 13 to introduce computational tools which can be used to implement the functionals presented in this book. The first part of the chapter focuses on the non-central chi-squared distribution, which arose in the context of pricing financial derivatives under the Minimal Market Model introduced in Chap. 3. We provide both theoretical results and a stable algorithm for computing the distribution function. In the second part of the chapter we focus on the non-central beta distribution, which arose in the context of pricing exchange options under the Minimal Market Model. Again, we provide both theoretical results and a stable algorithm for computing the distribution function. The chapter concludes by discussing the inversion of Laplace transforms, which can be used to recover transition densities from the Laplace transforms presented throughout this book. We illustrate this approach in the context of the Minimal Market Model of Chap. 3.
It is the aim of this chapter to introduce computational tools, which can be used to implement the functionals presented in this book. In particular, we focus on the non-central chi-squared distribution, which appeared in the context of the MMM and the TCEV model, and the non-central beta distribution, which appeared in the context of pricing exchange options. Lastly, we discuss the inversion of Laplace transforms, which can be used to recover transition densities from the Laplace transforms.
13.1 Some Identities Related to the Non-central Chi-Squared Distribution
The non-central chi-squared distribution featured prominently when pricing European call and put options under the MMM and TCEV model, see Sect. 3.3. In the current section, we recall the distribution, and in Sect. 13.2 we will present an algorithm showing how to implement the distribution, where we follow ideas presented in Hulley (2009).
First, we recall the link between the squared Bessel process and the non-central chi-squared distribution, which is given by
where \(X=\{X_{t},\ t\geq 0\}\) denotes a squared Bessel process of dimension δ, and \(\chi^{2}_{\delta} (\lambda)\) denotes a non-central chi-squared random variable with δ degrees of freedom and non-centrality parameter λ>0. We recall from Lemma 8.2.2 that the non-central \(\chi^{2}\)-distribution with δ>0 degrees of freedom and non-centrality parameter λ>0 has the density function
Here \(I_{\nu}(x) = \sum_{j \geq0} \frac{1}{j! \varGamma(j+ \nu+1)} ( \frac{x}{2} )^{2j + \nu}\) denotes the modified Bessel function of the first kind of order ν>−1. The following equality is given in Hulley (2009), where
for x∈(0,∞) and λ>0, since the modified Bessel function of the first kind satisfies \(I_{1}=I_{-1}\), see e.g. Abramowitz and Stegun (1972), Eq. (9.6.6). Clearly, this equality entails the probability density function p(x,0,λ) of a non-central chi-squared random variable with zero degrees of freedom. Such a random variable comprises a discrete part, as it places positive mass at zero, and a continuous part assuming values in the interval (0,∞). We return to this issue when discussing this type of probability distribution below. From Eq. (13.1.2), we immediately obtain the following formula, which is employed frequently in the context of the MMM, see Sect. 3.3:
for an appropriately integrable function g(⋅). Next, we introduce the cumulative distribution function of a non-central chi-squared random variable. The following equality, see Eq. (29.3) in Johnson et al. (1995), introduces the non-central chi-squared distribution as a weighted average of central chi-squared distributions, the weights being Poisson weights:
for all x∈(0,∞), δ>0 and λ>0, where \(\chi^{2}_{\delta}\) denotes the central chi-squared random variable. The distribution of the central chi-squared random variable admits the following representation in terms of the regularized incomplete gamma function \(\mathcal{P}(\cdot,\cdot)\), see Johnson et al. (1994), Eq. (18.3):
for x∈(0,∞) and δ>0, where
for z∈ℜ+ and a>0. We can obtain an expression similar to Eq. (13.1.4) for the density of a non-central chi-squared random variable,
for x∈(0,∞), δ>0 and λ>0, and where p(x,δ) denotes the probability density function of a chi-squared random variable with δ>0 degrees of freedom. Finally, we focus on the non-central chi-squared distribution with zero degrees of freedom, which also featured in the context of the MMM in Sect. 3.3. From Eq. (13.1.4), we get
for x≥0 and λ>0. However, \(\chi^{2}_{0}\), a central chi-squared random variable of zero degrees of freedom, is simply equal to zero, i.e.,
for all x≥0, hence
where λ>0. From Eq. (13.1.7) we get
for x≥0, λ>0. We remark that a non-central chi-squared random variable with zero degrees of freedom is not continuous, but places mass at the origin, and hence p(x,0,λ) is not a probability density function. Nevertheless, it is obtained by formally setting δ=0 in Eq. (13.1.1).
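The mixed discrete-continuous nature of this distribution can be illustrated by simulation, drawing \(\chi^{2}_{0}(\lambda)\) as a Poisson(λ/2) mixture of central chi-squared random variables in the spirit of Siegel (1979); this is only an illustrative sketch, and the parameter values and variable names below are ours.

```python
import numpy as np

# chi^2_0(lam) realized as a central chi-squared with 2N degrees of
# freedom, N ~ Poisson(lam / 2), where N = 0 contributes the discrete
# mass at the origin.  A chi-squared with 2N degrees of freedom is a
# Gamma(N, scale=2) random variable.
rng = np.random.default_rng(0)
lam, n_samples = 3.0, 500_000

N = rng.poisson(lam / 2.0, size=n_samples)
samples = np.where(N > 0, rng.gamma(np.maximum(N, 1), scale=2.0, size=n_samples), 0.0)

# discrete part: P(chi^2_0(lam) = 0) = exp(-lam / 2); mean: E = lam
```

The empirical mass at the origin should be close to exp{−λ/2}, and the sample mean close to λ, consistent with the decomposition described above.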
We conclude this section with some useful identities pertaining to the non-central chi-squared distribution. These equalities feature frequently in Sect. 3.3. We recall that p(⋅,δ,λ) denotes the probability density function of a non-central \(\chi^{2}\)-distributed random variable, and we use Ψ(⋅,δ,λ) to denote the distribution function of a non-central \(\chi^{2}\)-distributed random variable with δ degrees of freedom and non-centrality parameter λ.
Lemma 13.1.1
The following useful properties hold:
13.2 Computing the Non-central Chi-Squared Distribution
The aim of this section is to introduce an algorithm allowing us to compute the non-central chi-squared distribution. We recall from Sect. 3.3 that in order to price calls and puts, we need to be able to evaluate this distribution function. Furthermore, we point out that we need to be able to evaluate this distribution function for zero degrees of freedom and for a variety of non-centrality parameters. In particular, for large maturities, the non-centrality parameter is small, whereas for small maturities, the non-centrality parameter is large. This section follows Hulley (2009) closely. As in this reference, we base our approach on an algorithm from Ding (1992), which performs well for small values of the non-centrality parameter, but not for large ones. For this reason, we employ an analytic approximation due to Sankaran (1963) for large values. We introduce the non-central regularized incomplete gamma function, given by
for all \(z \in \Re^{+}_{0}\) and a,b≥0. Formally, we set \(\mathcal{P}(0,z):=1\), as the regularized incomplete gamma function from Eq. (13.1.6) is not well-defined in this case. We can express the distribution function of the non-central chi-squared and the chi-squared random variables in terms of the non-central regularized incomplete gamma function,
where x∈(0,∞) and δ>0, and
for x∈(0,∞) (respectively x∈ℜ+), δ>0 (respectively δ=0) and λ>0. We assume for the remainder of this section that one of the following conditions is satisfied:
- z∈(0,∞) and a,b>0;
- z∈ℜ+, a=0 and b>0,
which correspond to the cases δ>0 and δ=0, respectively.
In a first step, we rewrite the terms \(\mathcal{P}(a+j,z)\) on the right-hand side of Eq. (13.2.12) in terms of an infinite sum. Using integration by parts and the identity Γ(a+j+1)=(a+j)Γ(a+j), we obtain
which also holds for a=j=0, as by definition \(\mathcal{P}(0,z)=1\). A recursive application of Eq. (13.2.13) yields
for j∈{0,1,2,…}. Defining
and
we have
The idea is to truncate the series in Eq. (13.2.15),
We now aim to find an effective bound for \(\epsilon_{N} = \sum^{\infty }_{k=N} A_{k} T_{k}\). We have the trivial bound
and hence
We note that the \(T_{k}\), \(k \in{\mathcal{N}}\), admit the following recursive formula:
for \(k \in{\mathcal{N}}\). Hence
for \(N \in{\mathcal{N}}\) and k∈{N,N+1,N+2,…}. This allows us to obtain the following bound on ϵ N :
for each N∈{N ∗,N ∗+1,N ∗+2,…}, where
In Algorithm 13.1 below, we present pseudo-code for an algorithm which computes the non-central chi-squared distribution. In words, the algorithm proceeds as follows: we specify a desired level of accuracy, say ϵ∈(0,1). Next, we compute N ∗. Obtaining N ∗ is crucial, as our error bound in Eq. (13.2.17) only applies for N≥N ∗. We then compute \(\tilde{P}_{N^{*}}(a,b,z)\) and check the truncation error incurred via Eq. (13.2.17). We then proceed to add further terms A k T k , where k∈{N ∗,N ∗+1,N ∗+2,…}. As soon as the bound for the truncation error ϵ N has fallen below ϵ, we terminate the loop and obtain a value \(\tilde{\mathcal{P}}_{N}(a,b,z) \in( \mathcal{P}(a,b,z) - \epsilon, \mathcal{P}(a,b,z))\), where N∈{N ∗,N ∗+1,N ∗+2,…}.
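The series scheme just described can be sketched in Python as follows. This is a minimal sketch, assuming the \(T_{k}\) satisfy the recursion \(T_{k}=T_{k-1}\,z/(a+k)\) and that the \(A_{k}\) are cumulative Poisson weights, with a stopping rule in the spirit of the geometric tail bound of Eq. (13.2.17); all function and variable names are ours.

```python
import math

def nc_gamma_P(a, b, z, eps=1e-10, max_terms=100_000):
    # Series for the non-central regularized incomplete gamma function
    # P(a, b, z) = sum_k A_k T_k, where A_k is the cumulative Poisson(b)
    # weight and T_k = z^(a + k) e^(-z) / Gamma(a + k + 1).
    if z == 0.0:
        return math.exp(-b) if a == 0.0 else 0.0
    p_k = math.exp(-b)                                        # Poisson weight p_0
    A = p_k                                                   # A_0
    T = math.exp(a * math.log(z) - z - math.lgamma(a + 1.0))  # T_0
    total = A * T
    for k in range(1, max_terms):
        p_k *= b / k                                 # p_k from p_{k-1}
        A += p_k                                     # A_k = A_{k-1} + p_k
        T *= z / (a + k)                             # T_k = T_{k-1} z / (a + k)
        total += A * T
        r = z / (a + k + 1.0)                        # ratio of successive terms
        if r < 1.0 and T * r / (1.0 - r) < eps:      # geometric tail bound
            break
    return total
```

Assuming the parameterization \(\Psi(x,\delta,\lambda)=\mathcal{P}(\delta/2,\lambda/2,x/2)\) implied by the Poisson mixture (13.1.4), a call such as `nc_gamma_P(delta / 2, lam / 2, x / 2)` evaluates the non-central chi-squared distribution function, including the case δ=0.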
Finally, we discuss the implementation of the algorithm. Recall that the T k can be computed recursively using Eq. (13.2.16), with only one multiplication and division required to compute the next term. Lastly, A k admits the representation
where
Hence we can also obtain the A k recursively, with one multiplication, one division, and one addition required. This means that we can compute \(\tilde{P}_{N}(a,b,z)\) in linear time, i.e. using O(N) operations. In a detailed study, Dyrting (2004) found that the algorithm outlined above performs well for small and moderate values of b. For large values of b, the series in Eq. (13.2.12) converges slowly, meaning that a large number of terms has to be used to achieve a given precision ϵ. Furthermore, underflow problems can occur, as the individual terms in the series are small.
To remedy this shortcoming, Hulley (2009) fixed a maximum number of terms to be used in the summation. Once this limit is reached, an analytical approximation to the non-central incomplete gamma function is used. For this there are numerous possibilities, see Johnson et al. (1995), Sect. 29.8. We follow the advice of Schroder (1989), who recommends the analytic approximation due to Sankaran (1963),
where Φ denotes the standard normal cumulative distribution function and
with
This is a robust and efficient scheme, see Dyrting (2004). In addition, the approximation improves as the value of b increases. This fact is of particular relevance to us, as our original scheme performs worse as b increases. We present the pseudo-code of this algorithm in Algorithm 13.1 below.
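A sketch of this approximation follows; since the displayed formulas are not reproduced above, we use the form of Sankaran's approximation as commonly quoted in the literature (e.g. in Schroder (1989)), so the quantities h, p, m below are taken from that standard statement rather than from the text.

```python
import math

def sankaran_cdf(x, df, nc):
    # Normal approximation Psi(x, df, nc) ~ Phi(z), in the commonly
    # quoted form of Sankaran's formula; accurate for large nc.
    s = df + nc
    h = 1.0 - (2.0 / 3.0) * s * (df + 3.0 * nc) / (df + 2.0 * nc) ** 2
    p = (df + 2.0 * nc) / s ** 2
    m = (h - 1.0) * (1.0 - 3.0 * h)
    z = ((x / s) ** h - (1.0 + h * p * (h - 1.0 - 0.5 * (2.0 - h) * m * p))) \
        / (h * math.sqrt(2.0 * p) * (1.0 + 0.5 * m * p))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # Phi(z)
```

For a large non-centrality parameter such as nc=500, the approximation agrees with the exact distribution function to within a few multiples of 10⁻⁴ across the bulk of the distribution.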
13.3 The Doubly Non-central Beta Distribution
We first introduce the (central) beta random variable, then the singly non-central beta random variable, and finally the doubly non-central beta random variable, all with strictly positive shape parameters. However, in Sect. 3.3, we presented formulas for exchange options in terms of the non-central beta distribution with one shape parameter assuming the value zero, see Eq. (3.3.16). Hence in this section, we follow Hulley (2009) and extend the doubly non-central beta distribution to allow one shape parameter to assume the value zero. In Sect. 13.4, we show how to compute the doubly non-central beta distribution.
It is well-known that the (central) beta random variable with shape parameters δ 1/2>0 and δ 2/2>0 admits the following representation in terms of chi-squared random variables,
see Johnson et al. (1995), Chap. 25. As chi-squared random variables are strictly positive, \(\beta_{\delta_{1}, \delta_{2}}\) assumes values in (0,1). The distribution of \(\beta_{\delta_{1}, \delta_{2}}\) can be expressed in terms of the regularized incomplete beta function,
for x∈(0,1), where
for all z∈[0,1] and a,b>0. We now define the singly non-central beta distribution, with shape parameters δ 1/2>0 and δ 2/2>0 and non-centrality parameter λ>0, which is given by
This distribution was introduced in Tang (1938) and Patnaik (1949), in connection with the power function of the analysis of variance tests. We remark that (13.3.21) is referred to as a Type I non-central beta random variable in Chattamvelli (1995), distinguishing it from a Type II non-central beta random variable, given by
The doubly non-central beta distribution, with shape parameters δ 1/2>0, δ 2/2>0 and non-centrality parameters λ 1>0 and λ 2>0, is given by
We recall from Eq. (13.1.4) that the distribution of the non-central chi-squared distribution could be expressed as a Poisson weighted mixture of central chi-squared distributions. Analogously, the distribution of the non-central beta distribution can be expressed as a Poisson weighted mixture of central beta distributions
for all x∈(0,1), δ 1,δ 2>0 and λ>0, and
for all x∈(0,1), δ 1,δ 2>0 and λ 1,λ 2>0.
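Under the mixture representation just described, the distribution function can be sketched numerically as follows. SciPy's `betainc` is the regularized incomplete beta function; the truncation level `n_terms` is a crude illustrative choice, not the controlled truncation rule developed in Sect. 13.4, and all names are ours.

```python
import numpy as np
from scipy.special import betainc   # regularized incomplete beta I_z(a, b)
from scipy.stats import poisson

def dnc_beta_cdf(x, d1, d2, l1, l2, n_terms=200):
    # Double Poisson mixture of central beta distribution functions,
    # in the spirit of Eq. (13.3.24).
    j = np.arange(n_terms)
    w1 = poisson.pmf(j, l1 / 2.0)                 # weights in lambda_1
    w2 = poisson.pmf(j, l2 / 2.0)                 # weights in lambda_2
    grid = betainc(d1 / 2.0 + j[:, None], d2 / 2.0 + j[None, :], x)
    return float(w1 @ grid @ w2)
```

Setting λ 1=λ 2=0 recovers the central regularized incomplete beta function, and the result can be cross-checked against the chi-squared ratio representation by Monte Carlo simulation.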
Now, we discuss how to extend the singly and doubly non-central beta distributions to the case where one of the shape parameters is zero. We remark that the distributions in (13.3.23) and (13.3.24) do not allow for this, as the gamma function is not defined at zero. We hence follow Hulley (2009), where techniques from Siegel (1979) were used to extend the non-central chi-squared distribution to include the case of zero degrees of freedom. As with the non-central chi-squared distribution, the distribution of the non-central beta random variable with one shape parameter equal to zero is no longer continuous, but comprises a discrete part placing mass at the end points of the interval [0,1] and a continuous part assuming values in (0,1). Setting δ 2=0 in Eq. (13.3.21) results in a random variable identically equal to one. However, setting δ 1=0 yields a non-trivial random variable assuming values in [0,1). Similarly, setting δ 1=0 in (13.3.22) results in a non-trivial random variable assuming values in [0,1), and setting δ 2=0 in Eq. (13.3.22) results in a non-trivial random variable assuming values in (0,1]. For the remainder of this section, we set δ 1=0 in Eqs. (13.3.23) and (13.3.22), write δ=δ 2>0, and define
for all δ>0 and λ>0, and
for all δ>0 and λ 1,λ 2>0. The following result from Hulley (2009) shows how to extend the doubly non-central beta distribution to the case where one of the shape parameters assumes the value zero.
Proposition 13.3.1
Suppose x∈[0,1), δ>0, and λ 1,λ 2>0. Then
Proof
We employ Eqs. (13.3.25) and (13.1.4), (13.1.5), (13.1.6), (13.1.1), to obtain
where we used the transformations ξ/2↦ζ, t/ζ↦u, and u/(1+u)↦v, together with Eq. (13.3.19). We note that since central chi-squared random variables with zero degrees of freedom are equal to zero, the same applies to β 0,δ+2k , for all k∈{0,1,2,…}, see Eq. (13.3.18). Hence we have
which completes the proof. □
Inspecting Eq. (13.3.27), we remark that the first term can be interpreted as P(β 0,δ (λ 1,λ 2)=0) and the double sum as the probability P(0<β 0,δ (λ 1,λ 2)≤x) for all x∈(0,1), δ>0 and λ 1,λ 2>0. Hence we can decompose the distribution of β 0,δ (λ 1,λ 2) into a discrete component placing mass exp{−λ 1/2} at zero and a continuous component describing the distribution over (0,1). Finally, setting λ 2=0 we obtain the distribution of a singly non-central beta random variable
for all x∈[0,1), δ>0 and λ>0. We now present the extended versions of the Type II beta random variables,
for all δ>0 and λ>0, and
where δ>0 and λ 1,λ 2>0, whose values lie in (0,1]. Equations (13.3.30), (13.3.26), and (13.3.18) yield
for all x∈(0,1], δ>0 and λ 1,λ 2>0. We note that β δ+2k,0 is identically equal to one for all k∈{0,1,2,…}. Hence β δ,0(λ 2,λ 1) can be decomposed into a discrete component that places mass exp{−λ 1/2} at one and a continuous component taking values in (0,1) for all δ>0, λ 1,λ 2>0. Similarly, we obtain
for all x∈(0,1], δ>0 and λ>0. Again, β δ,0 is identically equal to one, hence β δ,0(0,λ) can be decomposed into a discrete part placing mass exp{−λ} at one and a continuous component assuming values in (0,1).
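The decompositions above can be checked by simulation. The following sketch samples \(\beta_{0,\delta}(\lambda_{1},\lambda_{2})\) via its chi-squared ratio representation, with the numerator \(\chi^{2}_{0}(\lambda_{1})\) drawn as a Poisson mixture of central chi-squared random variables; the parameter values and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
delta, l1, l2, n = 4.0, 3.0, 2.0, 400_000

# numerator chi^2_0(l1): Poisson(l1 / 2) mixture of central chi-squared
# variables with 2N degrees of freedom, equal to zero when N = 0
N = rng.poisson(l1 / 2.0, size=n)
chi0 = np.where(N > 0, rng.gamma(np.maximum(N, 1), scale=2.0, size=n), 0.0)
# denominator adds an independent chi^2_delta(l2) > 0
chi_d = rng.noncentral_chisquare(delta, l2, size=n)
b_samples = chi0 / (chi0 + chi_d)

# discrete part: P(beta_{0,delta}(l1, l2) = 0) = exp(-l1 / 2)
```

The empirical mass at zero should be close to exp{−λ 1/2}, while all samples remain strictly below one, consistent with the decomposition into a discrete component at zero and a continuous component on (0,1).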
13.4 Computing the Doubly Non-central Beta Distribution
In this section, we present an algorithm, which shows how to implement the doubly non-central beta distribution. The algorithm is based on Hulley (2009), where an idea from Posten (1989, 1993) is used to enhance an algorithm presented by Seber (1963) for computing the distribution function of standard singly non-central beta random variables.
We define the doubly non-central regularized incomplete beta function
for all z∈[0,1] and a,b,c,d≥0, such that either a>0 or b>0 and where I z (a,b) is given by Eq. (13.3.20). We formally set
for all z∈[0,1] and a,b>0. This is necessary, as the gamma functions in Eq. (13.3.20) are not well-defined at zero. We note that we can express the distribution functions of both the central and the non-central beta distributions in terms of the doubly non-central regularized incomplete beta function in Eq. (13.4.31). In particular, we have
for all x∈(0,1) and δ 1,δ 2>0. The distribution of the Type I singly non-central beta distribution satisfies
for all x∈(0,1) (respectively x∈[0,1) ), δ 1>0 (respectively δ 1=0), δ 2>0 and λ>0, while for the Type II singly non-central beta distribution we obtain
for all x∈(0,1) ( respectively x∈(0,1]), δ 1>0 (respectively δ 1=0), δ 2>0 and λ>0. Lastly, the distribution function of the doubly non-central beta distribution satisfies the equality
for all x∈(0,1) (respectively x∈[0,1); x∈(0,1]), δ 1,δ 2>0 (respectively δ 1=0, δ 2>0; δ 1>0, δ 2=0) and λ 1,λ 2>0. We assume that one of the following parameter combinations is in force:
- (i) z∈(0,1), \(a,b \in{\mathcal{N}}\), c,d>0;
- (ii) z∈[0,1), a=0, \(b \in{\mathcal{N}}\), c,d>0;
- (iii) z∈(0,1], \(a \in{\mathcal{N}}\), b=0, c,d>0.
Assuming condition (i) is in force, we obtain from Seber (1963)
where we have defined the Poisson weights
and
When condition (ii) is satisfied with z=0, the problem is trivial, since I 0(0,b,c,d)=exp{−c}. For condition (ii) and z∈(0,1), Eqs. (13.3.20) and (13.4.32) yield I z (j,b+k)=1−I 1−z (b+k,j) for each j,k∈{0,1,2,…}, and hence
from (13.4.32) and Seber (1963). Finally, if the arguments satisfy condition (iii) with z=1, then the problem is again trivial since I 1(a,0,c,d)=1. On the other hand, if the arguments satisfy condition (iii) with z∈(0,1), then applying (13.4.32) and Seber (1963) yields
We remark that by \(L^{(\alpha)}_{n}\) we denote the Laguerre polynomials, which are defined for n∈{0,1,2,…} and α∈ℜ∖{−1,−2,…}. However, for α∈{0,1,2,…} we have
for all ζ∈ℜ and each n∈{0,1,2,…}. Equation (13.4.36) implies the following recurrence relation, see also Abramowitz and Stegun (1972), Chap. 22,
for all ζ∈ℜ, and for all α∈{0,1,2,…} and n∈{2,3,…}. Comparing Eqs. (13.4.33), (13.4.34), and (13.4.35), we note that it suffices to focus on condition (i), as conditions (ii) and (iii) can be covered using the same algorithm. Regarding the outer, infinite sum, we employ an idea from Posten (1989, 1993), and sum the terms in decreasing order of the Poisson weights. The maximal Poisson weight is approximately attained by the index value k ∗=⌈d⌉, hence we truncate the outer sum to the range of index values (k ∗−N ϵ )+,…,k ∗+N ϵ , where N ϵ is given by
i.e. we approximate I z (a,b,c,d) using \(\tilde{I}_{z}(a,b,c,d)\), which is given by
We now aim to produce a good bound on the approximation error
From Seber (1963) we have
hence
where we used the fact that the Poisson weights sum to one and the last inequality follows from the definition of N ϵ . Hence we have
so the truncation error is bounded by ϵ. Clearly, the value of N ϵ cannot be determined explicitly in advance, but can only be determined by iteratively adding Poisson weights until their sum exceeds 1−ϵ. The P k satisfy
for each \(k \in{\mathcal{N}}\), which allows for a rapid computation of these weights. Finally, we attend to the inner sum in Eq. (13.4.33). In Algorithm 13.2 below, we make use of a list which stores the values of the Laguerre polynomials used to compute the T k . Firstly, we calculate the Laguerre polynomials needed to compute \(T_{k^{*}}\), and store them in a list. Thereafter, we use the following iterative scheme, based on (13.4.37), to compute \(T_{k^{*}+1} , T_{k^{*}-1}, T_{k^{*}+2}, T_{k^{*}-2} , \dots\):
for each \(k \in{\mathcal{N}}\), where A k :=a+2b+2k−4+cz and B k :=a+b+k−3. We present in Algorithm 13.2 below the algorithm, given in Hulley (2009), which shows how to compute the doubly non-central regularized incomplete beta function. The term Laglist denotes the list of Laguerre polynomials, list 〈〈i〉〉 references element i of list, and by the symbol list ⊎x we mean that the value x is appended to list.
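The truncation rule for the outer sum can be sketched as follows, assuming the Poisson weights have mean d, consistent with the modal index k ∗=⌈d⌉ stated above; the function and variable names are ours.

```python
import math

def poisson_window(d, eps):
    # Accumulate Poisson(d) weights P_k outwards from k* = ceil(d)
    # until their sum exceeds 1 - eps; the resulting window of index
    # values is (k* - n_eps)_+, ..., k* + n_eps.
    def pmf(k):
        if d == 0.0:
            return 1.0 if k == 0 else 0.0
        return math.exp(-d + k * math.log(d) - math.lgamma(k + 1.0))
    k_star = math.ceil(d)
    total = pmf(k_star)
    n_eps = 0
    while total <= 1.0 - eps:
        n_eps += 1
        total += pmf(k_star + n_eps)
        if k_star - n_eps >= 0:
            total += pmf(k_star - n_eps)
    return k_star, n_eps, total
```

Since the Poisson weights decay rapidly away from the mode, the window captures all but at most ϵ of the total mass after a modest number of terms.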
13.5 Inverting Laplace Transforms
In this section, we discuss how to compute values of a function f:ℜ+→ℜ from its Laplace transform
where s is a complex variable with a nonnegative real part. We present the Euler method from Abate and Whitt (1995), which is based on the Bromwich contour inversion integral. We let this contour be any vertical line s=a so that \(\hat{f}(s)\) has no singularities on or to the right of it, and hence obtain, as in Abate and Whitt (1995),
The integral is evaluated numerically using the trapezoidal rule. Specifying the step size as h gives
Setting \(h=\frac{\pi}{2 t}\) and \(a= \frac{A}{2 t}\), one arrives at the nearly alternating series
Regarding the parameters, we need to know how to choose A: in Abate and Whitt (1995) it is shown that to achieve a discretization error of \(10^{-\gamma}\), we should set \(A=\gamma \log 10\). Consequently, truncating the series after n terms, we have
where
Lastly, we apply the Euler summation, which explains the name of the algorithm. In particular, we apply the Euler summation to m terms after the initial n terms, so that the Euler summation, which approximates (13.5.42), is given by
where s n (t) is given by (13.5.43). We note that E(m,n,t) is the weighted average of the last m partial sums under a binomial probability distribution with parameters m and \(p=\frac{1}{2}\). In Abate and Whitt (1995), the parameters m=11 and n=15 are used, and it is suggested to increase n as necessary. In the following subsection, we illustrate how to use this algorithm to recover a bivariate probability density function. Using Lie symmetry methods, the first inversion can be performed analytically; for the second we use the Euler method presented in this section, given by (13.5.44).
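A compact sketch of the Euler method follows, with \(A=\gamma\log 10\) and the binomial averaging of partial sums as in (13.5.44); for clarity each partial sum is recomputed from scratch rather than incrementally, and all names are ours.

```python
import math

def euler_inversion(F, t, m=11, n=15, gamma=10):
    # Euler method of Abate and Whitt (1995): trapezoidal discretization
    # of the Bromwich integral with A = gamma * log(10), followed by
    # binomial (Euler) averaging of the partial sums s_n, ..., s_{n+m}.
    A = gamma * math.log(10.0)

    def s(N):
        # partial sum s_N(t) of the nearly alternating series
        total = 0.5 * F(complex(A, 0.0) / (2.0 * t)).real
        for k in range(1, N + 1):
            total += (-1) ** k * F(complex(A, 2.0 * k * math.pi) / (2.0 * t)).real
        return math.exp(A / 2.0) / t * total

    return sum(math.comb(m, k) * s(n + k) for k in range(m + 1)) / 2.0 ** m
```

For transforms with known inverses, such as \(\hat{f}(s)=1/(s+1)\) with \(f(t)=e^{-t}\), the scheme with the default parameters is accurate to many decimal places.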
13.5.1 Recovering the Joint Distribution to Price Realized Variance
In this subsection, we apply the methodology discussed in this section to the pricing of realized variance derivatives, in particular, options on volatility, see Sect. 8.5.2. To price such products, we need to recover the joint distribution of \(( Y_{T}, \int^{T}_{0} \frac{1}{Y_{t}}\,dt )\). At first sight, obtaining the joint distribution should entail the inversion of a double Laplace transform. However, since Lie symmetry methods provide us with fundamental solutions, we already have the inversion with respect to one of the variables. Consequently, one only needs to invert a one-dimensional Laplace transform numerically, to obtain the joint density over ℜ+×ℜ+. We subsequently map the joint density into [0,1]2, following the discussion in Kuo et al. (2008), and hence can employ a randomized quasi-Monte Carlo point set to compute prices. Assuming that the one-dimensional Laplace transform can be inverted at a constant computational complexity, the resulting computational complexity is O(N), where N is the number of two-dimensional quasi-Monte Carlo points employed.
We numerically invert the one-dimensional Laplace transform given in (5.4.16) using the Euler method from Abate and Whitt (1995), which was also employed in Hulley and Platen (2008), see also Craddock et al. (2000). We display the joint density in Fig. 13.5.1.
Inverting the Laplace transform produces the joint density of \(( Y_{T}, \int^{T}_{0} \frac{1}{Y_{t}}\,dt )\) over ℜ+×ℜ+. One could now employ a product rule, such as the tensor product of two one-dimensional trapezoidal rules, using N points for each co-ordinate, and perform the numerical integration using N 2 points, at a computational complexity of O(N 2), assuming the Laplace inversion can be performed in constant time. However, instead we map the joint distribution into the unit square, and employ an N point quasi-Monte Carlo rule to obtain a quadrature rule whose computational complexity is only O(N), see Sect. 12.2.
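As a toy illustration of this quadrature step, the following sketch integrates a payoff against a known joint density on ℜ+×ℜ+ (independent unit-exponential margins standing in for the density recovered by Laplace inversion), mapping the plane to the unit square and using a scrambled Sobol point set from SciPy; the payoff and all names here are illustrative.

```python
import numpy as np
from scipy.stats import qmc

# Known joint density on R_+ x R_+, standing in for the density
# recovered by Laplace inversion; exact answer available below.
def density(x, y):
    return np.exp(-x - y)

def payoff(x, y):
    return np.maximum(x - y, 0.0)

# Map (0,1)^2 -> R_+^2 via x = u / (1 - u), with Jacobian
# 1 / (1 - u)^2 in each co-ordinate.
sob = qmc.Sobol(d=2, scramble=True, seed=1)
u = sob.random(2 ** 14)                       # power-of-two Sobol points
x = u[:, 0] / (1.0 - u[:, 0])
y = u[:, 1] / (1.0 - u[:, 1])
jac = 1.0 / ((1.0 - u[:, 0]) ** 2 * (1.0 - u[:, 1]) ** 2)
estimate = float(np.mean(payoff(x, y) * density(x, y) * jac))

# exact value for this toy problem: E[(X - Y)^+] = 1/2 for iid Exp(1)
```

The O(N) cost is apparent: one density evaluation per quasi-Monte Carlo point, in contrast to the O(N²) cost of a tensor-product rule.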
References
Abate, J., Whitt, W.: Numerical inversion of Laplace transforms of probability distributions. ORSA J. Comput. 7(1), 36–43 (1995)
Abramowitz, M., Stegun, I.A. (eds.): Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover, New York (1972)
Chattamvelli, R.: A note on the noncentral beta distribution function. Am. Stat. 49(2), 231–234 (1995)
Craddock, M., Heath, D., Platen, E.: Numerical inversion of Laplace transforms: a survey with applications to derivative pricing. J. Comput. Finance 4(1), 57–81 (2000)
Ding, C.G.: Algorithm AS 275: computing the non-central χ 2 distribution function. Appl. Stat. 41(2), 478–482 (1992)
Dyrting, S.: Evaluating the noncentral chi-square distribution for the Cox-Ingersoll-Ross process. Comput. Econ. 24(1), 35–50 (2004)
Hulley, H.: Strict local martingales in continuous financial market models. PhD thesis, UTS, Sydney (2009)
Hulley, H., Platen, E.: Laplace transform identities for diffusions, with applications to rebates and barrier options. In: Stettner, L. (ed.) Advances in Mathematical Finance. Banach Center Publications, vol. 83, pp. 139–157 (2008)
Johnson, N.L., Kotz, S., Balakrishnan, N.: Continuous Univariate Distributions, 2nd edn. Wiley Series in Probability and Mathematical Statistics, vol. 1. Wiley, New York (1994)
Johnson, N.L., Kotz, S., Balakrishnan, N.: Continuous Univariate Distributions, 2nd edn. Wiley Series in Probability and Mathematical Statistics, vol. 2. Wiley, New York (1995)
Kuo, F.Y., Dunsmuir, W.T.M., Sloan, I.H., Wand, M.P., Womersley, R.: Quasi-Monte Carlo for highly structured generalised response models. Methodol. Comput. Appl. Probab. 10(2), 239–275 (2008)
Patnaik, P.B.: The non-central χ 2- and F-distributions and their applications. Biometrika 36(1/2), 202–232 (1949)
Posten, H.O.: An effective algorithm for the noncentral chi-squared distribution function. Am. Stat. 43(4), 261–263 (1989)
Posten, H.O.: An effective algorithm for the noncentral beta distribution function. Am. Stat. 47(2), 129–131 (1993)
Sankaran, M.: Approximations to the non-central chi-square distribution. Biometrika 50(1/2), 199–204 (1963)
Schroder, M.: Computing the constant elasticity of variance option pricing formula. J. Finance 44(1), 211–219 (1989)
Seber, G.A.F.: The non-central chi-squared and beta distributions. Biometrika 50(3/4), 542–544 (1963)
Siegel, A.F.: The noncentral chi-squared distribution with zero degrees of freedom and testing for uniformity. Biometrika 66(2), 381–386 (1979)
Tang, P.C.: The power function of the analysis of variance tests with tables and illustrations of their use. Stat. Res. Mem. 2, 126–150 (1938)
© 2013 Springer International Publishing Switzerland
Baldeaux, J., Platen, E. (2013). Computational Tools. In: Functionals of Multidimensional Diffusions with Applications to Finance. Bocconi & Springer Series, vol 5. Springer, Cham. https://doi.org/10.1007/978-3-319-00747-2_13
Print ISBN: 978-3-319-00746-5
Online ISBN: 978-3-319-00747-2