1 Introduction

Quantitative finance and risk management research requires finance theories, finance policies, and methodology. Finance theories and finance policies have been discussed in Chaps. 1 and 2. This chapter briefly discusses the alternative methodologies used in quantitative finance and risk management research. In Sect. 3.2 we review the statistical theory and methods used in quantitative finance and risk management research. The econometric and mathematical methods used in such research are presented in Sects. 3.3 and 3.4, respectively. In addition, in Sect. 3.5 we briefly review other methods used in quantitative finance and risk management, such as operations research, stochastic processes, computer science and technology, entropy, and fuzzy set theory. Finally, our conclusion is presented in Sect. 3.6.

2 Statistics

Statistics is important in developing finance theories and in implementing empirical models. It is a prerequisite for studying financial economics, especially for researchers who must deal with uncertainty. Distributions such as the binomial, multinomial, normal, lognormal, and non-central chi-square distributions help researchers develop alternative option pricing models. In this book, Chap. 24 discusses the application of the binomial distribution to evaluating call options. Chapter 25 discusses the multinomial option pricing model. It extends the binomial option pricing model to the multinomial case, derives the multinomial option pricing model, and applies it to the limiting case of the Black and Scholes (1973) model. Finally, it introduces a lattice framework for option pricing and its application to option valuation. In Chaps. 27 and 28, option pricing models based on the normal, lognormal, and bivariate normal distributions are discussed. Chapter 27 introduces the normal distribution, the lognormal distribution, and their relationship; multivariate normal and lognormal distributions are also examined. Finally, both distributions are applied to derive the Black-Scholes formula under the assumption that the stock price follows a lognormal distribution. Chapter 28 presents the American option pricing model on stocks with and without dividend payments. A Microsoft Excel program for evaluating the American option pricing model is also presented. Chapter 29 compares American option prices with one known dividend under two alternative specifications of the underlying stock price: displaced lognormal and lognormal processes. Many option pricing models follow the standard assumption of the Black-Scholes (1973) model, in which the stock price follows a lognormal process. However, in order to reach a closed-form solution for the American option price with one known dividend, Roll (1977), Geske (1979), and Whaley (1981) assume a displaced lognormal process for the cum-dividend stock price, which results in a lognormal process for the ex-dividend stock price. The chapter compares the two alternative pricing results. Chapter 31 describes the non-central chi-square type of option pricing model. It reviews the renowned constant elasticity of variance (CEV) option pricing model and gives detailed derivations. There are two purposes of this chapter. First, it shows the details of the formulas needed in deriving the option pricing model and bridges the gaps in deriving the necessary formulas for the model. Second, it uses a result by Feller to obtain the transition probability density function of the stock price at time T given its price at time t with t < T. In addition, some computational considerations are given that will facilitate the computation of the CEV option pricing formula.
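
To make the distributional machinery concrete, the following minimal sketch (with purely illustrative parameters) prices a European call on a Cox-Ross-Rubinstein binomial lattice and checks it against the closed-form Black-Scholes price that the lattice converges to; it is a textbook illustration, not code from the chapters cited above.

```python
import math
from scipy.stats import norm

def crr_call(S, K, r, sigma, T, n):
    """European call on a Cox-Ross-Rubinstein binomial lattice."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))    # up factor
    d = 1 / u                              # down factor
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    payoff = sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
                 * max(S * u**j * d**(n - j) - K, 0.0)
                 for j in range(n + 1))
    return math.exp(-r * T) * payoff

def bs_call(S, K, r, sigma, T):
    """Closed-form Black-Scholes price the lattice converges to."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)

print(crr_call(100, 100, 0.05, 0.2, 1.0, 500))  # ~10.45
print(bs_call(100, 100, 0.05, 0.2, 1.0))        # 10.4506
```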

Chapter 34 analyzes the convergence rate and pattern of the binomial model. It extends the generalized Cox-Ross-Rubinstein (hereafter GCRR) model of Chung and Shih (2007) and provides a further analysis of the convergence rates and patterns of various GCRR models. The numerical results indicate that the GCRR-XPC model and the GCRR-JR (p = 1/2) model (defined in Table 34.1) outperform the other GCRR models for pricing European calls and American puts. The results confirm that node positioning and the selection of tree structures (mainly the asymptotic behavior of the risk-neutral probability) are important factors in determining the convergence rates and patterns of binomial models. Chapter 35 uses mixture distributions to estimate implied probabilities from option prices. It examines a variety of methods for extracting implied probability distributions from option prices and the underlying. The chapter first explores non-parametric procedures for reconstructing densities directly from options market data. It then considers local volatility functions, both through implied volatility trees and volatility interpolation, before turning to alternative specifications of the stochastic process for the underlying. A mixture-of-lognormals model is estimated and applied to exchange rate data, and it is illustrated how to conduct forecast comparisons. Finally, the chapter turns to the estimation of jump risk by extracting bipower variation. Chapter 57 uses the generalized hyperbolic distribution to develop a generalized model for the optimal futures hedge ratio. It proposes the generalized hyperbolic distribution as the joint log-return distribution of the spot and futures. Using the parameters of this distribution, it derives several of the most widely used optimal hedge ratios: minimum variance, maximum Sharpe measure, and minimum generalized semivariance. Under mild assumptions on the parameters, it finds that these hedge ratios are identical. Regarding the equivalence of these optimal hedge ratios, the analysis suggests that the martingale property plays a much more important role than the joint distribution assumption.
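
As a minimal numerical illustration of the minimum-variance hedge ratio mentioned above (not the generalized hyperbolic framework of Chap. 57), the sketch below estimates h* = Cov(spot, futures)/Var(futures) from simulated joint log-returns; all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated joint log-returns of spot and futures (illustrative data)
futures = rng.normal(0.0, 0.012, 1000)
spot = 0.9 * futures + rng.normal(0.0, 0.004, 1000)

# Minimum-variance hedge ratio: h* = Cov(spot, futures) / Var(futures)
h = np.cov(spot, futures)[0, 1] / np.var(futures, ddof=1)
hedged = spot - h * futures
print(f"h* = {h:.3f}, unhedged var = {np.var(spot):.2e}, "
      f"hedged var = {np.var(hedged):.2e}")
```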

Multivariate statistics, such as factor analysis and discriminant analysis, is also important in performing empirical studies in quantitative finance and risk management.

This naturally leads to the use of regression analysis, which includes single-equation and multiple-equation models. Maximum likelihood methods are also important in statistical estimation. Moreover, when the estimation requires fewer distributional assumptions about the variables, nonparametric estimation is preferred. Chapter 68 uses non-parametric methods to examine the segmentation of the financial services market. It analyzes the segmentation of financial markets based on general segmentation bases. In particular, it identifies potentially attractive market segments for financial services using a customer dataset. It develops a multi-group discriminant model to classify customers into three ordinal classes: prime customers, highly valued customers, and price shoppers, based on their income, loan activity, and demographics (age). The multi-group classification of customer segments uses both classical statistical techniques and a mathematical programming formulation. The study uses the characteristics of a real dataset to simulate multiple datasets of customer characteristics. The results of the proposed experiments show that the mathematical programming model in many cases consistently outperforms standard statistical approaches in attaining lower apparent error rates (APER) over 100 replications in both high and low correlation cases.
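
A toy version of the three-class customer discrimination described above can be sketched with ordinary linear discriminant analysis on simulated data; the class structure, feature means, and APER computation below are illustrative stand-ins for the chapter's real dataset and mathematical programming model.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Hypothetical features: income, loan activity, age (standardized units)
n_per_class = 100
means = ([2, 2, 1], [1, 1, 0], [0, 0, -1])   # one mean vector per class
X = np.vstack([rng.normal(m, 1.0, (n_per_class, 3)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)  # 0=prime, 1=highly valued, 2=price shopper

lda = LinearDiscriminantAnalysis().fit(X, y)
aper = (lda.predict(X) != y).mean()    # apparent error rate on training data
print(f"APER = {aper:.3f}")
```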

Researchers use Bayesian inference, in which observations are used to update the probability that a hypothesis is true. Chapter 11 discusses Bayesian triangulation in asset pricing. It surveys some relevant academic research on the subject, including work on the combining of forecasts (Bates and Granger 1969), the Black-Litterman model (1991, 1992), the combining of Bayesian priors and regression estimates (Pastor 2000), model uncertainty and Bayesian model averaging (Hoeting et al. 1999; Cremers 2002), the theory of competitive storage (Deaton and Laroque 1992), and the combination of valuation estimates (Yee 2007). Despite its wide-ranging applicability, the Bayesian approach is not a license for data snooping. The second half of that chapter describes common pitfalls in fundamental analysis and comments on the role of theoretical guidance in mitigating these pitfalls. Chapter 91 proposes Bayesian inference for financial models using Markov Chain Monte Carlo (MCMC) algorithms. It introduces the Gibbs sampler and other MCMC algorithms. The marginal posterior probability densities obtained by the MCMC algorithms are compared to the exact marginal posterior densities. The chapter presents two financial applications of MCMC algorithms: the CKLS model of the Japanese uncollateralized call rate and the Gaussian copula model of the S&P 500 and FTSE 100.
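
The updating logic behind these Bayesian applications can be illustrated with a few lines of random-walk Metropolis sampling, one member of the MCMC family mentioned above; the normal model, prior, and tuning constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(1.5, 1.0, 200)        # synthetic observations, known sigma = 1

def log_posterior(mu):
    """N(0, 10^2) prior on mu plus Gaussian log-likelihood (up to a constant)."""
    return -mu**2 / 200.0 - 0.5 * np.sum((data - mu)**2)

chain, mu = [], 0.0
for _ in range(20000):                  # random-walk Metropolis
    proposal = mu + rng.normal(0.0, 0.2)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal                   # accept; otherwise keep current state
    chain.append(mu)
draws = np.array(chain[5000:])          # discard burn-in
print(f"posterior mean {draws.mean():.3f}, "
      f"95% interval {np.percentile(draws, [2.5, 97.5])}")
```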

The concept of stochastic dominance is important in portfolio optimization problems. Chapter 15 considers the problem of constructing a portfolio of finitely many assets whose return rates are described by a discrete joint distribution. The chapter presents a new approach to portfolio selection based on stochastic dominance: the portfolio return rate in the new model is required to stochastically dominate a random benchmark. It formulates optimality conditions and duality relations for these models and constructs equivalent optimization models with utility functions. Two different formulations of the stochastic dominance constraint, primal and inverse, lead to two dual problems that involve von Neumann-Morgenstern utility functions for the primal formulation and rank-dependent (or dual) utility functions for the inverse formulation. The chapter also discusses the relation of this approach to value at risk and conditional value at risk, and provides a numerical illustration.
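
A sketch of how a stochastic dominance relation can be checked empirically is given below: it compares empirical CDFs (first order) or their running integrals (second order) on a common grid. This is a naive sampling-based check under illustrative normal return assumptions, not the optimization formulation of Chap. 15.

```python
import numpy as np

def dominates(a, b, order=2, grid=200):
    """Empirical check: do returns `a` stochastically dominate returns `b`?"""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    t = np.linspace(lo, hi, grid)
    Fa = np.array([(a <= x).mean() for x in t])   # empirical CDFs
    Fb = np.array([(b <= x).mean() for x in t])
    if order == 1:                                # FSD: Fa below Fb everywhere
        return bool(np.all(Fa <= Fb + 1e-12))
    Ia = np.cumsum(Fa) * (t[1] - t[0])            # SSD: integrated CDFs ordered
    Ib = np.cumsum(Fb) * (t[1] - t[0])
    return bool(np.all(Ia <= Ib + 1e-12))

rng = np.random.default_rng(3)
x = rng.normal(0.08, 0.10, 5000)   # same risk, higher mean
y = rng.normal(0.05, 0.10, 5000)
print(dominates(x, y, order=2))    # True in theory; sampling noise permitting
```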

Characteristic functions and spectral analysis are important in option pricing research. Chapter 37 applies the characteristic function to derive a generalized option pricing model, and Chap. 38 discusses the application of the characteristic function in financial research more broadly. The technique of the characteristic function is useful for many option pricing models, and two such models are derived in detail from their characteristic functions. Copula distributions are also important in research on default risk. Chapter 46 provides a new approach to estimating future credit risk on a target portfolio based on the CreditMetrics framework by J.P. Morgan. However, it adopts the perspective of the factor copula and brings the principal component analysis concept into the factor structure to construct a more appropriate dependence structure among credits. To examine the proposed method, the chapter uses real market data instead of virtual data. It also develops a tool for risk analysis that is convenient to use, especially for banking loan businesses. The results show that assuming normally distributed dependence structures will indeed lead to an underestimate of risk. In contrast, the proposed method captures the features of risk better and shows the fat-tail effects conspicuously, even though the factors are assumed to be normally distributed.
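
As an illustration of characteristic function techniques, the sketch below recovers the Black-Scholes call price by Gil-Pelaez Fourier inversion of the characteristic function of the log price; the model and parameters are assumptions chosen so the answer can be verified against the closed form, not the generalized models of Chaps. 37 and 38.

```python
import numpy as np
from scipy.integrate import quad

S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # illustrative parameters

def phi(u):
    """Characteristic function of ln S_T under risk-neutral Black-Scholes."""
    m = np.log(S) + (r - 0.5 * sigma**2) * T
    return np.exp(1j * u * m - 0.5 * u**2 * sigma**2 * T)

def Pi(j):
    """Gil-Pelaez exercise probabilities Pi_1 (j=1) and Pi_2 (j=2)."""
    k = np.log(K)
    if j == 1:   # stock-numeraire measure; phi(-i) normalizes the twist
        f = lambda u: (np.exp(-1j * u * k) * phi(u - 1j)
                       / (1j * u * phi(-1j))).real
    else:
        f = lambda u: (np.exp(-1j * u * k) * phi(u) / (1j * u)).real
    return 0.5 + quad(f, 1e-8, 200.0)[0] / np.pi

call = S * Pi(1) - K * np.exp(-r * T) * Pi(2)
print(f"CF-based call price = {call:.4f}")   # matches Black-Scholes, ~10.45
```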

3 Econometrics

Econometric models are necessary research tools in quantitative finance and risk management. The major econometric methods used in modern finance research include, but are not limited to, the following subjects: linear regression models, time series modeling, multiple-equation models, the generalized method of moments, and panel data models. Many of the aforementioned subjects also contain well-developed sub-subjects. For example, the concepts of stationarity, autoregressive and moving average models, and cointegration, developed within the time series framework, are widely applied in finance research. Multiple-equation models include the application of vector autoregression (VAR), simultaneous equation models, and seemingly unrelated regression.

Chapter 73 discusses ARM processes and their properties. It also presents an ARM modeling and forecasting methodology that tries to capture a strong statistical signature of the empirical data. The chapter starts with a high-level discussion of stochastic model fitting and explains the fitting approach that motivates the introduction of ARM processes. It then continues with a review of ARM processes and their fundamental properties, including their construction, transition structure, and autocorrelation structure. Next, the chapter outlines the ARM modeling methodology by describing the key steps in fitting an ARM model to empirical data. It also describes in some detail the ARM forecasting methodology and the computation of the conditional expectations that serve as point estimates (forecasts of future values), together with their underlying conditional distributions from which confidence intervals are constructed for the point estimates. Finally, the chapter concludes with an illustration of the efficacy of the ARM modeling and forecasting methodologies through an example utilizing a sample from the S&P 500 Index. Chapter 76 proposes univariate and bivariate GARCH analyses of volume versus GARCH effects. It tests for the generalized autoregressive conditional heteroscedasticity (GARCH) effect on U.S. stock markets for different periods and re-examines the findings of Lamoureux and Lastrapes (1990) by using alternative proxies for information flow, which are included in the conditional variance equation of a GARCH model. It also examines the spillover effects of volume and turnover on conditional volatility using a bivariate GARCH approach. The findings show that volume and turnover have effects on conditional volatility and that the introduction of volume/turnover as exogenous variable(s) in the conditional variance equation reduces the persistence of GARCH effects as measured by the sum of the GARCH parameters. The results confirm the existence of the volume effect on volatility, consistent with the findings of Lamoureux and Lastrapes (1990) and others, suggesting that the earlier findings were not a statistical fluke. The findings also suggest that, unlike other anomalies, the volume effect on volatility is not likely to be eliminated after discovery. In addition, the study rejects the pure random walk hypothesis for stock returns. The bivariate analysis also indicates that there are no volume or turnover spillover effects on conditional volatility among the companies, suggesting that the volatility of a company's stock return may not necessarily be influenced by the volume and turnover of other companies.
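
The GARCH(1,1) recursion underlying these studies can be sketched directly: the code below simulates a GARCH(1,1) return series and recovers the parameters by maximum likelihood. The parameter values and optimizer settings are illustrative, and the conditional variance equation here omits the volume/turnover regressors that Chap. 76 adds.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
# Simulate GARCH(1,1): sigma2_t = w + a * r_{t-1}^2 + b * sigma2_{t-1}
w0, a0, b0, n = 1e-6, 0.08, 0.90, 3000
r = np.zeros(n)
s2 = w0 / (1 - a0 - b0)                 # start at the unconditional variance
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = w0 + a0 * r[t]**2 + b0 * s2

def neg_loglik(theta):
    w, a, b = theta
    s2, ll = np.var(r), 0.0             # initialize at the sample variance
    for rt in r:                        # Gaussian conditional likelihood
        ll += -0.5 * (np.log(2 * np.pi * s2) + rt**2 / s2)
        s2 = w + a * rt**2 + b * s2
    return -ll

res = minimize(neg_loglik, x0=[1e-5, 0.10, 0.80], method="L-BFGS-B",
               bounds=[(1e-9, None), (1e-6, 0.999), (1e-6, 0.999)])
w, a, b = res.x
print(f"omega={w:.2e}, alpha={a:.3f}, beta={b:.3f}, persistence={a + b:.3f}")
```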

Chapter 82 develops defensive forecasting methods to forecast stock prices. It introduces a new method of forecasting, defensive forecasting, and illustrates its potential by showing how it can be used to predict the yields of corporate bonds in real time. Defensive forecasting identifies a betting strategy that succeeds if forecasts are inaccurate and then makes forecasts that will defeat this strategy. The theory of defensive forecasting is based on the game-theoretic framework for probability introduced by Shafer and Vovk in 2001. The study frames statistical tests as betting strategies: a hypothesis is rejected when a strategy for betting against it multiplies the capital it risks by a large factor. Each classical theorem, such as the law of large numbers, is proved by constructing a betting strategy that multiplies the capital it risks by a large factor if the theorem's prediction fails. Defensive forecasting plays against strategies of this type. Chapter 86 discusses the application of simultaneous equations in finance research. It introduces the concept of a simultaneous equation system and the application of such systems to finance research. The study first discusses the order and rank conditions of the identification problem. It then introduces the two-stage least squares (2SLS) and three-stage least squares (3SLS) estimation methods. The pros and cons of these estimation methods are summarized. Finally, the results of a study on executive compensation structure and risk-taking are used to illustrate the difference between single-equation and simultaneous-equation methods. Chapter 89 proposes a method to detect structural instability in financial time series. It reviews the literature that deals with structural shifts in statistical models for financial time series. The issue of structural instability is at the heart of the capital asset pricing theories on which many modern investment portfolio concepts and strategies rely. The need to better understand the nature of structural evolution and to devise effective control of the risk in real-world investments has become ever more urgent in recent years. Since the early 2000s, concepts of financial engineering and the resulting product innovations have advanced at an astonishing pace, but unfortunately often on a faulty premise and without regulation keeping pace. These developments have threatened to destabilize worldwide financial markets. The chapter explains the principles and concepts behind these phenomena. Along the way it highlights developments related to these issues over the past few decades in the statistical, econometric, and financial literature. It also discusses recent events that illuminate the dynamics of structural evolution in the mortgage lending and credit industry. Chapter 93 discusses the theoretical VAR model and its applications. Chapter 96 uses GARCH and Spline-GARCH models to model and forecast volatilities and asset returns. Dynamic modeling of asset returns and their volatilities is a central topic in quantitative finance. The chapter reviews basic statistical models and methods for the analysis and forecasting of volatilities, and surveys several recent developments in regime-switching, change-point, and multivariate volatility models.
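
Of the methods above, 2SLS is easy to demonstrate end to end: the sketch below builds an endogenous regressor, shows the OLS bias, and removes it by projecting on instruments. The data-generating process is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
z = rng.normal(size=(n, 2))                      # instruments
u = rng.normal(size=n)                           # structural error
x = z @ np.array([1.0, -0.5]) + 0.8 * u + rng.normal(size=n)  # endogenous x
y = 2.0 + 1.5 * x + u                            # true slope is 1.5

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
print("OLS (biased):", ols(X, y))
x_hat = Z @ ols(Z, x)                            # stage 1: project x on Z
print("2SLS:", ols(np.column_stack([np.ones(n), x_hat]), y))  # stage 2
```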

Chapter 99 uses combined forecasting methods to estimate the cost of equity capital for property/casualty insurers. It estimates the cost of equity capital by applying three alternative asset pricing models: the capital asset pricing model (CAPM), the arbitrage pricing theory (APT), and a unified CAPM/APT model (Wei 1988). The in-sample forecasting ability of the models is evaluated by applying the mean squared error method, the Theil (1966) U2 statistic, and the Granger (1974) conditional efficiency evaluation. Based on the forecast evaluation procedures, the APT and Wei's unified CAPM/APT models perform better than the CAPM in estimating the cost of equity capital for the property/casualty insurers, and a combined forecast may outperform the individual forecasts. Chapter 103 uses the Box-Cox functional form transformation in closed-end country fund research. It proposes a generalized functional form CAPM for the performance evaluation of international closed-end country funds and examines the effect of heterogeneous investment horizons on portfolio choices in the global market. Empirical evidence suggests that there are some anomalies that are inconsistent with the traditional CAPM. These inconsistencies arise because the specification of the CAPM ignores the discrepancy between observed and true investment horizons. A comparison between the functional forms for share returns and NAV returns of closed-end country funds suggests that foreign investors may have more heterogeneous investment horizons than their U.S. counterparts. Market segmentation and government regulation do have some effect on market efficiency. No matter which generalized functional model is used, the empirical evidence indicates that, on average, the risk-adjusted performance of international closed-end funds is negative even before expenses.

The fixed-effect panel data model and the random-effect panel data model are also important in much empirical finance research. Chapter 51 uses a panel threshold model to examine the effect of default risk on equity liquidity. The chapter sets out to investigate the relationship between credit risk and equity liquidity. It posits that as a firm's default risk increases, informed trading in the firm's stock increases and uninformed traders exit the market; market-makers widen spreads in response to the increased probability of trading against informed investors. Using the default likelihood measure calculated by Merton's method, the chapter investigates whether financially ailing firms do indeed have higher bid-ask spreads. The panel threshold regression model is employed to examine the possible non-linear relationship between credit risk and equity liquidity. Since high default probability and worse economic prospects lead to greater expropriation by managers, and thus greater asymmetric information costs, liquidity providers incur relatively higher costs and therefore offer higher bid-ask spreads. This issue is further analyzed by investigating whether there is any evidence of increased vulnerability in the equity liquidity of firms with high credit risk. The results show that an increase in default likelihood might precipitate investors' loss of confidence in equities trading and thus cause a decrease in liquidity, as was particularly evident during the Enron crisis period. Chapter 69 discusses spurious regression and data mining in conditional asset pricing models. Stock returns are not highly autocorrelated, but there is a spurious regression bias in predictive regressions for stock returns similar to that in the classic studies of Yule (1926) and Granger (1974). Data mining for predictor variables reinforces spurious regression bias because more highly persistent series are more likely to be selected as predictors. In asset pricing regression models with conditional alphas and betas, the biases occur only in the estimates of conditional alphas. However, when time-varying alphas are suppressed and only time-varying betas are considered, the betas become biased. The analysis shows that the significance of many standard predictors of stock returns and time-varying alphas in the literature is often overstated.
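
The spurious regression phenomenon discussed in Chap. 69 is easy to reproduce: regressing one random walk on another, independent one yields "significant" t-statistics far more often than the nominal 5%. The simulation below is a minimal sketch with arbitrary sample sizes.

```python
import numpy as np

rng = np.random.default_rng(6)
T, reps, tstats = 200, 500, []
for _ in range(reps):
    y = np.cumsum(rng.standard_normal(T))       # two independent random walks
    x = np.cumsum(rng.standard_normal(T))
    X = np.column_stack([np.ones(T), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (T - 2)                # OLS residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    tstats.append(beta[1] / se)
share = np.mean(np.abs(np.array(tstats)) > 1.96)
print(f"|t| > 1.96 in {share:.0%} of regressions of unrelated series")
```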

There are many other econometric methods discussed in this book as well. For example, Chap. 43 uses combinatorial methods for constructing credit risk ratings. The study uses a novel combinatorial method, the Logical Analysis of Data (LAD), to reverse-engineer and construct credit risk ratings that represent the creditworthiness of financial institutions and countries. The proposed approaches are shown to generate transparent, consistent, self-contained, and predictive credit risk rating models that closely approximate the risk ratings provided by some of the major rating agencies. The scope of applicability of the proposed method extends beyond the rating problems discussed in the study: it can be used in many other contexts where ratings are relevant, and it applies in the general case of inferring an objective rating system from archival data, given that the rated objects are characterized by vectors of attributes taking numerical or ordinal values.

Chapter 50 uses a dynamic econometric loss model in the study of defaults in the U.S. subprime market. It applies not only static loan and borrower characteristic variables, such as loan terms, the combined loan-to-value ratio (CLTV), and the Fair Isaac credit score (FICO), together with dynamic macroeconomic variables such as HPA (home price appreciation), to project defaults and prepayments, but also the spectrum of delinquencies as an error correction term, which adds an additional 15% accuracy to the model projections. In addition to the delinquency attribute findings, the chapter also shows that cumulative HPA and the change of HPA contribute various dimensions that greatly influence defaults. Another interesting finding is a significant long-term correlation between the home price index (HPI) and the disposable income level (DPI). Since DPI is more stable and easier to project, this suggests that HPI will eventually adjust to coincide with the DPI growth rate trend and that HPI could potentially experience as much as a 14% decrease by the end of 2009. Chapter 62 uses robust logistic regression to study default prediction with outliers. It suggests a Robust Logit method, which extends the conventional logit model by taking outliers into account, to forecast defaulted firms. The study employs five validation tests to assess the in-sample and out-of-sample forecast performances. With respect to in-sample forecasts, the Robust Logit method is substantially superior to the logit method across all validation tools. With respect to out-of-sample forecasts, the superiority of the Robust Logit method is less pronounced. Issues related to errors-in-variables (EIV) problems in asset pricing tests are discussed in Chap. 70. That chapter develops a new method of correcting the EIV problem in the estimation of the price of beta risk within the two-pass estimation framework. The study performs the correction by incorporating the extracted information about the relationship between the measurement error variance and the idiosyncratic error variance into the maximum likelihood estimation, under either homoscedasticity or heteroscedasticity of the disturbance term of the market model. The evidence shows that after correcting for the EIV problem, market beta has significant explanatory power for average stock returns, and the firm size factor becomes much less important. The chapter also reexamines the explanatory power of beta and firm size for average stock returns using the proposed correction. If no correction for the EIV problem is implemented, then, consistent with recent empirical studies, there is no significant relation between beta and expected returns, especially when firm size is included as an additional explanatory variable. With the EIV correction, both the price and the significance of beta increase, while the impact of the firm size variable diminishes. Nevertheless, firm size remains a significant force in explaining the cross-section of average returns. While these results suggest a more important role for market beta, the evidence is still consistent with a misspecification in the capital asset pricing model. Chapter 72 discusses term structure models that incorporate Markov regime shifts, covering both discrete-time and continuous-time regime-switching term structure models, and briefly reviews models of the term structure of interest rates under regime shifts.
These models retain the tractability of single-regime affine models on the one hand, and allow more general and flexible specifications of the market prices of risk, the dynamics of the state variables, and the short-term interest rate on the other. These additional flexibilities can be important in capturing some salient features of the term structure of interest rates in empirical applications. The models can also be applied to analyze the implications of regime switching for bond portfolio allocation and the pricing of interest rate derivatives. Chapter 78 discusses hedonic regression analysis in real estate markets. It provides an overview of the nature and variety of hedonic pricing models that are employed in the market for real estate. It explores the history of hedonic modeling and summarizes the field's utility-theory-based, microeconomic foundations. It also provides empirical examples of each of the three principal methodologies and a discussion of, and potential solutions for, common problems associated with hedonic modeling. Chapter 92 discusses the instrumental variable approach to correcting for endogeneity in corporate finance. The theoretical literature on the link between an incumbent firm's capital structure (financial leverage, debt/equity ratio) and entry into its product market is based on two classes of arguments: the "limited liability" arguments and the "deep pocket" arguments. However, these two classes of arguments provide contradictory predictions. The study provides a distinct strategic model of the link between capital structure and entry that is capable of rationally producing both of the existing contradictory predictions. The focus is on the role of beliefs and the cost of adjusting capital structure, two factors that are absent from existing models. The beliefs may be exogenously given or endogenized by a rational expectations criterion.

4 Mathematics

The mathematical methodology used in finance has roots dating back to research in economics. Financial researchers are interested in equilibrium analysis, optimization problems, and dynamic analysis. The use of linear and matrix algebra, real analysis, multivariate calculus, constrained and unconstrained optimization, nonlinear programming, and optimal control theory is indispensable. Research in portfolio theory and its applications frequently uses constrained maximization. Chapter 5 discusses risk aversion, capital asset allocation, and the Markowitz portfolio selection model. It introduces the utility function and the indifference curve. Based on utility theory, it derives the Markowitz model and the efficient frontier through the creation of efficient portfolios of varying risk and return. It also includes methods of solving for the efficient frontier both graphically and mathematically, with and without explicitly incorporating short selling. Chapter 7 demonstrates how index models can be used for portfolio selection. It discusses both the single-index and multiple-index portfolio selection models. It uses a constrained maximization rather than a minimization procedure to calculate the portfolio weights, and finds that both single-index and multi-index models can be used to simplify the Markowitz model for portfolio selection.

Chapter 8 discusses the Sharpe measure, the Treynor measure, and optimal portfolio selection. Following Elton et al. (1976) and Elton et al. (2007), it introduces the performance-measure approaches to determining optimal portfolios. It finds that the performance-measure approaches for optimal portfolio selection are complementary to the Markowitz full variance-covariance method and the Sharpe index-model method. The economic rationale of the Treynor method is also discussed in detail. Chapter 10 discusses portfolio optimization models and mean-variance spanning tests. It introduces the theory of modern portfolio theory and its computer implementation. The notion of diversification, captured by the age-old adage "don't put all your eggs in one basket," obviously predates economic theory. However, a formal model showing how to make the most of the power of diversification was not devised until 1952, a feat for which Harry Markowitz eventually won the Nobel Prize in economics. Markowitz's theory shows that as assets are added to an investment portfolio, the total risk of that portfolio, as measured by the variance (or standard deviation) of total return, declines continuously, while the expected return of the portfolio remains a weighted average of the expected returns of the individual assets. In other words, by investing in portfolios rather than in individual assets, investors can lower the total risk of investing without sacrificing return. In the second part, the chapter introduces the mean-variance spanning test, which follows directly from the portfolio optimization problem. Chapter 12 discusses estimation risk and power utility portfolio selection. It examines additional ways of estimating joint return distributions that allow us to explore the stationary/nonstationary tradeoffs implicit in "expanding" versus "moving" window estimation methods, the benefits of alternative methods of allowing for the "memory loss" inherent in the moving-window EPAA, and the possibility that weighting more recent observations more heavily may improve investment performance. Chapter 13 discusses the theory and methods of international portfolio management. It investigates the impact of various investment constraints on the benefits and asset allocation of the international optimal portfolio for domestic investors in various countries. The empirical results indicate that local investors in less developed countries, particularly in East Asia and Latin America, benefit more from global diversification. Though the global financial market is becoming more integrated, adding constraints reduces but does not completely eliminate the diversification benefits of international investment. The addition of portfolio bounds yields the following characteristics of asset allocation: a reduction in the temporal deviation of diversification benefits, a decrease in the time-variation of components in the optimal portfolio, and an expansion in the range of comprising assets. These findings are useful for asset management professionals in determining target markets in which to promote national/international funds and in managing risk in global portfolios.
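
The mean-variance mathematics behind these chapters reduces, when short selling is allowed, to a closed-form frontier: minimizing w'Σw subject to w'μ = m and w'1 = 1 gives w = Σ⁻¹(λμ + γ1), with λ and γ pinned down by the two constraints. The sketch below traces a few frontier points using an invented covariance matrix.

```python
import numpy as np

mu = np.array([0.08, 0.12, 0.10])            # expected returns (illustrative)
cov = np.array([[0.040, 0.006, 0.012],
                [0.006, 0.090, 0.021],
                [0.012, 0.021, 0.060]])      # covariance matrix (illustrative)
inv, ones = np.linalg.inv(cov), np.ones(3)
A, B, C = ones @ inv @ mu, mu @ inv @ mu, ones @ inv @ ones

for m in (0.09, 0.10, 0.11):                 # target expected returns
    lam = (C * m - A) / (B * C - A**2)       # Lagrange multipliers from the
    gam = (B - A * m) / (B * C - A**2)       # two frontier constraints
    w = inv @ (lam * mu + gam * ones)
    print(f"target {m:.2f}: weights {np.round(w, 3)}, "
          f"sd {np.sqrt(w @ cov @ w):.3f}")
```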

In derivatives and option pricing research, the use of Itô calculus and the derivation of ordinary and partial differential equations are important in arriving at closed-form solutions for option pricing models. Applications of Itô calculus in deriving option pricing models can be found in Chap. 30. It develops certain relatively recent mathematical discoveries known generally as stochastic calculus, or more specifically as Itô calculus, and illustrates their application in the pricing of options. The mathematical methods of stochastic calculus are illustrated in alternative derivations of the celebrated Black-Scholes-Merton model. The topic is motivated by a desire to provide an intuitive understanding of certain probabilistic methods that have found significant use in financial economics. Chapter 32 uses Itô calculus in stochastic volatility option pricing models. It assumes that the volatility in the option pricing model is stochastic rather than deterministic, and applies this assumption to the non-closed-form solution developed by Scott (1987) and the closed-form solution of Heston (1993). In both cases, it considers a model in which the variance of stock price returns varies according to an independent diffusion process. For the closed-form option pricing model, the results are expressed in terms of the characteristic function. Chapter 33 uses traditional calculus to derive the Greek letters and discusses their applications. It introduces the definitions of the Greek letters and provides derivations of the Greek letters for call and put options on both dividend-paying and non-dividend-paying stocks. It then discusses some applications of the Greek letters. Finally, it shows the relationships among the Greek letters, one example of which is the Black-Scholes partial differential equation. Chapter 37 applies Itô calculus to option pricing and hedging performance under stochastic volatility and stochastic interest rates. Recent studies have extended the Black-Scholes model to incorporate either stochastic interest rates or stochastic volatility, but there has not yet been any comprehensive empirical study demonstrating whether, and by how much, each generalized feature improves option pricing and hedging performance. The chapter fills this gap by first developing an implementable closed-form option model that admits both stochastic volatility and stochastic interest rates and is parsimonious in the number of parameters. The model includes many known models as special cases. Based on the model, both delta-neutral and single-instrument minimum-variance hedging strategies are derived analytically. Using S&P 500 option prices, the chapter then compares the pricing and hedging performance of this model with that of three existing ones that respectively allow for (1) constant volatility and constant interest rates (the Black-Scholes model), (2) constant volatility but stochastic interest rates, and (3) stochastic volatility but constant interest rates. Overall, incorporating stochastic volatility and stochastic interest rates produces the best performance in pricing and hedging, with the remaining pricing and hedging errors no longer systematically related to contract features. The second performer in the horse race is the stochastic volatility model, followed by the stochastic interest rates model and then by the Black-Scholes model.
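
Since Chap. 33 derives the Greeks in closed form, a compact sketch of the standard Black-Scholes call Greeks may be helpful; the formulas are the textbook ones and the parameters are illustrative.

```python
import math
from scipy.stats import norm

def bs_call_greeks(S, K, r, sigma, T):
    """Closed-form Greeks of a European call under Black-Scholes."""
    sq = sigma * math.sqrt(T)
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / sq
    d2 = d1 - sq
    delta = norm.cdf(d1)
    gamma = norm.pdf(d1) / (S * sq)
    vega = S * norm.pdf(d1) * math.sqrt(T)
    theta = (-S * norm.pdf(d1) * sigma / (2 * math.sqrt(T))
             - r * K * math.exp(-r * T) * norm.cdf(d2))
    rho = K * T * math.exp(-r * T) * norm.cdf(d2)
    return dict(delta=delta, gamma=gamma, vega=vega, theta=theta, rho=rho)

print(bs_call_greeks(100, 100, 0.05, 0.2, 1.0))
# The Greeks tie together through the Black-Scholes PDE:
# theta + r*S*delta + 0.5*sigma^2*S^2*gamma = r * call_price
```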

Chapter 44 uses Itô calculus in the structural approach to modeling credit risk. It presents a survey of recent developments in the structural approach, first reviewing some models for measuring credit risk based on this approach and then discussing the empirical evidence in the literature on the performance of structural credit risk models. Chapter 85 demonstrates applications of ordinary differential equations (ODEs) in finance and economics research. It introduces the ordinary differential equation and its application in finance and economics research. There are various approaches to solving an ordinary differential equation; the study mainly introduces the classical approach for a linear ODE and the Laplace transform approach. It illustrates the application of ODEs in both deterministic and stochastic systems. As an application in deterministic systems, the study applies ODEs to solve a simple gross domestic product (GDP) model and an investment model of a firm that is well studied in Gould (1968). As an application in stochastic systems, the chapter employs ODEs in a jump-diffusion model and derives the expected discounted penalty given that the asset value follows a phase-type jump-diffusion process. In particular, the study examines a capital structure management model and derives the optimal default-triggering level. For related results, see Chen et al. (2007). Chapter 98 discusses an ODE approach for the expected discounted penalty at ruin in a jump-diffusion model. Under the assumption that the asset value follows a jump-diffusion, the chapter shows that the expected discounted penalty satisfies an ODE and obtains a general form for the expected discounted penalty. In particular, if only downward phase-type jumps are allowed, an explicit formula is obtained in terms of the penalty function and the jump distribution. On the other hand, if the downward jump distribution is a mixture of exponential distributions (and upward jumps are determined by a general Lévy measure), closed-form solutions for the expected discounted penalty can be obtained. The chapter also analyzes Leland's structural model with jumps. For earlier and related results, see Gerber and Landry (1998), Hilberink and Rogers (2002), Mordecki (2002), Kou and Wang (2004), Asmussen et al. (2004), and others.

Chapters 101 and 102 use the stochastic differential equation approach to derive theoretical models of positive interest rates. Chapter 101 presents a dynamic term structure model in which interest rates of all maturities are bounded from below at zero. Positivity and continuity, combined with no arbitrage, result in only one functional form for the term structure with three sources of risk. The chapter casts the model into state-space form and extracts the three sources of systematic risk from both U.S. Treasury yields and U.S. dollar swap rates. It analyzes the different dynamic behaviors of the two markets during credit crises and liquidity squeezes. Chapter 102 explores what types of diffusion and drift terms forbid negative yields while nevertheless allowing any yield to be arbitrarily close to zero. It shows that several models have these characteristics; however, they may also have other odd properties. In particular, the square root model of Cox-Ingersoll-Ross has such a solution, but only in a singular case. In other cases, bubbles occur in bond prices, leading to unusually behaved solutions. Other models, such as the CIR three-halves power model, are free of such oddities.
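
The positivity issues discussed in Chap. 102 can be explored numerically. The sketch below simulates the Cox-Ingersoll-Ross square root process with a full-truncation Euler scheme (an assumption of this illustration; the chapters work with the continuous-time models directly). With the parameters chosen here the Feller condition 2κθ ≥ σ² holds, so the continuous-time process stays positive.

```python
import numpy as np

rng = np.random.default_rng(7)
kappa, theta, sigma = 0.5, 0.04, 0.10     # 2*kappa*theta = 0.04 >= sigma^2
T, n_steps, n_paths = 5.0, 1250, 10000
dt = T / n_steps

r = np.full(n_paths, 0.04)                # start at the long-run level
for _ in range(n_steps):
    r_pos = np.maximum(r, 0.0)            # full truncation keeps sqrt defined
    r = (r + kappa * (theta - r_pos) * dt
         + sigma * np.sqrt(r_pos * dt) * rng.standard_normal(n_paths))
print(f"mean r(T) = {r.mean():.4f}, min r(T) = {r.min():.4f} (theta = {theta})")
```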

Fuzzy set theory is also important in quantitative finance and risk management research. Chapter 77 discusses fuzzy set theory and its applications to finance research. Chapter 87 discusses fuzzy set and data mining applications in accounting and finance research. To reduce the complexity of computing tradeoffs among multiple objectives, the chapter adopts a fuzzy set approach to solve various accounting and finance problems such as international transfer pricing, human resource allocation, accounting information system selection, and capital budgeting. A solution procedure is proposed to systematically identify a satisfying selection of possible solutions that can reach the best compromise value for the multiple objectives and multiple constraint levels. The fuzzy solution can help top management make realistic decisions regarding its various resource allocation problems as well as the firm's overall strategic management when environmental factors are uncertain. The purpose of the last part of the chapter is to test the effectiveness of a multiple criteria linear programming (MCLP) approach to data mining for bankruptcy prediction using financial data. Data mining applications have recently been receiving more attention in general business areas, but there is a need to use more of these applications in accounting and finance when dealing with large amounts of both financial and nonfinancial data. The results of the MCLP data mining approach in the bankruptcy prediction studies are promising and may extend to other countries.

Chapter 61 discusses actuarial mathematics and its applications in quantitative finance. First, it introduces the traditional actuarial interest functions and uses them to price different types of insurance contracts and annuities. Based on the equivalence principle, risk premiums for different payment schemes are calculated. After premium payments and the promised benefits are determined, actuarial reserves are calculated and set aside to cushion the expected losses. In a similar manner, actuarial mathematics is used to price the risky bond, the credit default swap, and the default digital swap, where the interest rate structure is not flat and the future lifetime is replaced by the time of default. Moreover, product market games and signaling models in finance are discussed in Chap. 94. A crucial set of assumptions in a large class of models of financial market signaling and product market/financial market interactions relates to the super- or submodularity of the maximand. The assumption of one or the other can produce result reversals or indeterminacy. To the extent that data or the underlying economics cannot unambiguously choose between super- and submodularity, the theoretical robustness and empirical testability of such models become questionable. Based on three examples drawn from the recent finance literature, the study discusses the role these assumptions play and raises issues of empirical identifiability in testing model predictions.

5 Other Disciplines

In addition to statistics, econometrics, and mathematics, many other disciplines are used in finance research. Operations research is one example: linear programming and quadratic programming models are used in cash flow matching and portfolio immunization. Chapter 14 discusses the Le Châtelier principle in the Markowitz quadratic programming investment model using the case of the world equity fund market. Because the number of reliable international equity funds is limited, the Markowitz investment model is ideal for constructing an international portfolio. Overinvestment in one or several fast-growing markets can be disastrous when political instability and exchange rate fluctuations reign supreme. The chapter applies the Le Châtelier principle to the international equity fund market with a set of upper limits. Tracing out a set of efficient frontiers, the study inspects the shifting phenomenon in the mean-variance space. The optimal investment policy can be easily implemented and risk minimized. Chapter 16 discusses quadratic programming in portfolio analysis. Markowitz portfolio analysis delineates a set of highly desirable portfolios: those with the maximum return over a range of different risk levels. Conversely, Markowitz portfolio analysis can find the same set of desirable investments by delineating portfolios that have the minimum risk over a range of different rates of return. The set of desirable portfolios is called the efficient frontier. Portfolio analysis analyzes rate-of-return statistics, risk statistics (standard deviations), and correlations from a list of candidate investments (stocks, bonds, and so forth) to determine which investments, in what proportions (weights), enter into the efficient portfolios. Markowitz's portfolio theory can be analyzed further to determine interesting asset pricing implications.

The role of risk measures such as value at risk (VaR) and conditional value at risk (CVaR) in risk management is of interest to both academics and practitioners. Chapter 95 demonstrates the estimation of short- and long-term VaR for long-memory stochastic volatility models. The phenomenon of long-memory stochastic volatility (LMSV) has been extensively documented for speculative returns. The chapter investigates the effect of LMSV on estimating the value at risk (VaR), or quantile, of returns. The proposed model allows the return's volatility component to be short- or long-memory. The chapter derives various types of limit theorems that can be used to construct confidence intervals for VaR for both short- and long-term returns. For the latter case, the results are particularly of interest to financial institutions with exposure to long-term liabilities, such as pension funds and life insurance companies, which need a quantitative methodology to control market risk over longer horizons.
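
A minimal sketch of VaR estimation contrasts historical simulation with the Gaussian approximation on fat-tailed simulated returns; the Student-t data, confidence level, and square-root-of-time scaling below are illustrative assumptions, not the LMSV methodology of Chap. 95.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
returns = 0.01 * rng.standard_t(df=4, size=2500)    # fat-tailed daily returns

alpha = 0.99
var_hist = -np.quantile(returns, 1 - alpha)         # historical simulation
var_norm = -(returns.mean() + returns.std() * norm.ppf(1 - alpha))
print(f"1-day 99% VaR: historical {var_hist:.4f}, Gaussian {var_norm:.4f}")
# sqrt-of-time scaling to 10 days is exact only under iid normality
print(f"10-day Gaussian VaR ~ {var_norm * np.sqrt(10):.4f}")
```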

Computer science is another important discipline in quantitative finance and risk management research. Program design, data structures, and algorithms help finance researchers implement their models on large data sets. The concepts of neural networks have also been applied in the security analysis and bankruptcy prediction literature. In the empirical implementation of derivatives pricing models, numerical analysis is required to compute the fair value of the derivatives. Chapter 39 uses numerical methods to evaluate Asian options. It introduces Monte Carlo simulations and diverse mathematical and numerical techniques designed to estimate the distribution of the underlying asset, then uses arbitrage arguments, binomial tree models, and the application of insurance formulas in the evaluation of options. Chapter 40 demonstrates the numerical valuation of Asian options with higher moments in the underlying distribution. It develops a modified Edgeworth binomial model with higher-moment consideration for pricing European or American Asian options. If the number of time steps increases, the numerical algorithm is as precise as that of Chalasani et al. (1999), with a lognormal underlying distribution for benchmark comparison. If the underlying distribution displays negative skewness and leptokurtosis, as is often observed for stock index returns, the estimates are better and very similar to the benchmarks in Hull and White (1993). The results show that the modified Edgeworth binomial model can value European and American Asian options with greater accuracy and speed given higher moments in the underlying distribution. Chapter 79 shows numerical solutions of financial partial differential equations (PDEs). It provides a general survey of important PDE numerical solutions and studies in detail certain numerical methods specific to finance, with programming samples. These numerical solutions for financial PDEs include finite difference, finite volume, and finite element methods. Finite difference is simple to discretize and easy to implement; however, the explicit method is not guaranteed to be stable. Finite volume has an advantage over finite difference in that it does not require a structured mesh, although if a structured mesh is used, as in most cases of pricing financial derivatives, the finite volume and finite difference methods yield the same discretization equations. The finite difference method can be considered a special case of the finite element method, and in general the finite element method gives a better-quality approximation to the exact solution. Since most PDEs in financial derivatives pricing have simple boundary conditions, the implicit finite difference method is preferred to the finite element method in financial engineering applications due to its low cost of implementation.
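
As a sketch of the implicit finite difference method preferred above, the code below solves the Black-Scholes PDE backward in time on a uniform grid for a European call; the grid sizes and dense linear solve are illustrative simplifications (a production code would use a tridiagonal solver).

```python
import numpy as np

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
Smax, M, N = 300.0, 300, 300                 # space and time grid sizes
dS, dt = Smax / M, T / N
S = np.linspace(0.0, Smax, M + 1)
V = np.maximum(S - K, 0.0)                   # terminal payoff at t = T

j = np.arange(1, M)                          # interior nodes S_j = j * dS
a = 0.5 * dt * (r * j - sigma**2 * j**2)     # fully implicit coefficients
b = 1.0 + dt * (sigma**2 * j**2 + r)
c = -0.5 * dt * (r * j + sigma**2 * j**2)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)

for n in range(N, 0, -1):                    # march backward from T to 0
    tau = T - (n - 1) * dt                   # time to maturity at new level
    rhs = V[1:M].copy()
    rhs[-1] -= c[-1] * (Smax - K * np.exp(-r * tau))  # upper boundary value
    V[1:M] = np.linalg.solve(A, rhs)         # dense solve for clarity only
    V[0], V[M] = 0.0, Smax - K * np.exp(-r * tau)
print(f"implicit FD call price at S0: {np.interp(S0, S, V):.4f}")  # ~10.45
```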

Moreover, in constructing stochastic or probabilistic financial models, as opposed to traditional static and deterministic models, the Monte Carlo method is required to compute the results. Chapters 71 and 91 show how Markov Chain Monte Carlo (MCMC) methods are utilized in finance research. Chapter 71 proposes using MCMC methods to estimate the parameters of stochastic volatility models with several factors varying at different time scales. The originality of this approach, in contrast with classical factor models, is the identification of two factors driving univariate series at well-separated time scales. This is tested with simulated data as well as foreign exchange data. Other methodologies used in quantitative finance and risk management research include entropy methods. Chapters 104 and 105 show how entropy methods can be applied in finance research. Chapter 104 expresses the density of the minimal entropy martingale measure in terms of the value process of the related optimization problem and shows that this value process is determined as the unique solution of a semimartingale backward equation. The chapter considers some extreme cases in which this equation admits an explicit solution. Chapter 105 derives the density process of the minimal entropy martingale measure in the stochastic volatility model proposed by Barndorff-Nielsen and Shephard. The density is represented by the logarithm of the value function for an investor with exponential utility and no claim issued, and a Feynman-Kac representation of this function is provided. The dynamics of the processes determining the price and volatility are explicitly given under the minimal entropy martingale measure, and the study derives a Black-Scholes equation with an integral term for the price dynamics of derivatives. It turns out that the minimal entropy price of a derivative is given by the solution of a coupled system of two integro-partial differential equations.
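
Plain Monte Carlo, as opposed to MCMC, is the workhorse for path-dependent payoffs such as the Asian options of Chaps. 39 and 40. The sketch below prices an arithmetic-average Asian call under risk-neutral geometric Brownian motion with illustrative parameters and reports a 95% confidence band.

```python
import numpy as np

rng = np.random.default_rng(9)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
steps, paths = 252, 20000
dt = T / steps

# Risk-neutral geometric Brownian motion paths
z = rng.standard_normal((paths, steps))
log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.cumsum(log_increments, axis=1))

payoff = np.maximum(S.mean(axis=1) - K, 0.0)        # arithmetic-average call
price = np.exp(-r * T) * payoff.mean()
half_width = 1.96 * np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(paths)
print(f"Asian call ~ {price:.4f} +/- {half_width:.4f} (95% CI)")
```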

6 Conclusion

This chapter has discussed the methodologies used in quantitative finance and risk management research. The subjects that contribute to modern finance research include statistics, econometrics, mathematics, and other disciplines such as operations research and computer science. This list is by no means exhaustive, as finance research requires a broad range of methodological tools from various disciplines. Detailed applications of the aforementioned subjects are discussed in the later parts of this handbook.