74.1 Introduction

Financial volatility is a key input in derivative pricing, asset allocation, investment decisions, hedging, and risk analysis; volatility modeling has thus become an important task in financial markets, and it has held the attention of academics and practitioners over the last three decades. Nevertheless, as noted by Barndorff-Nielsen and Shephard (2005) and Andersen et al. (2003), financial volatility is a latent factor and hence cannot be observed directly. It can only be estimated from its signature on certain observable market price processes; when the underlying process is more sophisticated or when observed market prices suffer from market microstructure noise, the results are less clear.

It is well known that time series of asset prices usually exhibit volatility clustering and autocorrelation. To incorporate these characteristics into the dynamic process, the generalized autoregressive conditional heteroskedasticity (GARCH) family of models proposed by Engle (1982) and Bollerslev (1986) and the stochastic volatility (SV) models advocated by Taylor (1986) are two popular and useful alternatives for estimating and modeling time-varying conditional financial volatility. However, as pointed out by Alizadeh et al. (2002), Brandt and Diebold (2006), Chou (2005), and others, both GARCH and SV models are inaccurate and inefficient, because they are based only on the closing prices of the reference period and fail to use the information inside that period. In other words, the path of the price within the reference period is ignored entirely when volatility is estimated by these models. Especially on turbulent days with drops and recoveries in the markets, the traditional close-to-close volatility indicates a low level, while the daily price range correctly shows that volatility is high.

The price range, also known as the high/low range or range volatility, is defined as the difference between the highest and lowest market prices over a fixed sampling interval. The price range has been known for a long time and has recently experienced renewed interest as a proxy for the latent volatility. The information contained in the opening, highest, lowest, and closing prices of an asset is widely used in Japanese candlestick charting techniques and other technical analysis indicators, such as the directional movement indicator (DMI). Early applications of the range in finance can be traced to Mandelbrot (1971), and academic work on range-based volatility estimators began in the early 1980s. Starting with Parkinson (1980), several authors developed volatility measures far more efficient than the classical return-based volatility estimators.

Building on the earlier results of Parkinson (1980), many studies showed that the price range information can be used to improve volatility estimation. Alizadeh et al. (2002) demonstrated not only that the range is significantly more efficient than the squared daily return but also that the conditional distribution of the log range is approximately Gaussian, which greatly facilitates maximum likelihood estimation of stochastic volatility models. Moreover, as pointed out by Alizadeh et al. (2002) and Brandt and Diebold (2006), the range-based volatility estimator appears robust to microstructure noise such as bid-ask bounce. By adding microstructure noise to a Monte Carlo simulation, Shu and Zhang (2006) also supported the finding of Alizadeh et al. (2002) that range estimators are fairly robust to microstructure effects.

Cox and Rubinstein (1985) noted the puzzle that, despite the elegant theory and the support of simulation results, the range-based volatility estimator performed poorly in empirical studies. Chou (2005) argued that the failure of the range-based models in the earlier literature stems from their neglect of the temporal movements of the price range. Using a proper dynamic structure for the conditional expectation of the range, the conditional autoregressive range (CARR) model proposed by Chou (2005) successfully resolves this puzzle and retains its superiority in empirical forecasting. In-sample and out-of-sample volatility forecasting using S&P 500 index data shows that the CARR model provides a more accurate volatility estimate than the GARCH model. Similarly, Brandt and Jones (2006) formulated a model that is analogous to Nelson’s (1991) EGARCH model but uses the square root of the intraday price range in place of the absolute return. Both studies find that the range-based volatility estimators offer a significant improvement over their return-based counterparts. Moreover, Chou et al. (2009) extended CARR to a multivariate context using the dynamic conditional correlation (DCC) model proposed by Engle (2002a). They found that this range-based DCC model performs better than other return-based volatility models in forecasting covariances. In this chapter, we also review alternative range-based multivariate volatility models in Sect. 74.3.

Recently, many studies have used high-frequency data to obtain an unbiased and highly efficient estimator of volatility; see Andersen et al. (2003) and McAleer and Medeiros (2008) for a review. The resulting nonparametric measure is called realized volatility, calculated as the sum of nonoverlapping squared returns within a fixed time interval. Martens and van Dijk (2007) replaced the squared return with the price range to obtain a more efficient estimator, namely, the realized range. In their empirical study, the realized range significantly improved on realized return volatility. In addition, Christensen and Podolskij (2007) independently developed the realized range and showed that this estimator is consistent and relatively efficient under some specific assumptions.

The remainder of the chapter is laid out as follows. Section 74.2 introduces the price range estimators. Section 74.3 describes the range-based volatility models, both univariate and multivariate. Section 74.4 presents the realized range. The financial applications of range volatility are provided in Sect. 74.5. Finally, Sect. 74.6 concludes.

74.2 The Price Range Estimators

A few price range estimators and their estimation efficiency are briefly introduced and discussed in this section. The price ranges, which can be calculated from the daily opening, highest, lowest, and closing prices, are readily available for many assets. Most data suppliers provide daily highest/lowest prices as summaries of intraday activity. For example, Datastream records the intraday price range for most securities, including equities, currencies, and commodities, going back to 1955. Range-based volatility proxies are thus easily calculated, and this additional information yields a great improvement in financial applications. Roughly speaking, knowing these records allows us to get closer to the real underlying process, even if we do not know the whole path of asset prices. For an asset, let us define the following variables:

  • O_t = the opening price of the tth trading day.

  • C_t = the closing price of the tth trading day.

  • H_t = the highest price of the tth trading day.

  • L_t = the lowest price of the tth trading day.

The efficiency of the Parkinson (1980) estimator intuitively comes from the fact that the intraday price range gives more information about future volatility than two arbitrary points in the series (the closing prices). Assuming that the asset price follows a simple diffusion model without a drift term, his estimator \( {\widehat{\sigma}}_P^2 \) can be written as follows:

$$ {\widehat{\sigma}}_P^2=\frac{1}{4\kern0.2em \ln \kern0.2em 2}{\left( \ln \kern0.2em {H}_t- \ln \kern0.2em {L}_t\right)}^2. $$
(74.1)
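As a concrete illustration, the following minimal sketch computes Eq. 74.1 from arrays of daily highs and lows (the function name and NumPy-based layout are ours, not part of the original literature):

```python
import numpy as np

def parkinson_var(high, low):
    """Parkinson (1980) daily variance estimates (Eq. 74.1).

    high, low: arrays of daily highest and lowest prices.
    Returns one variance estimate per trading day.
    """
    log_hl = np.log(np.asarray(high, dtype=float) / np.asarray(low, dtype=float))
    return log_hl ** 2 / (4.0 * np.log(2.0))
```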

Instead of only two data points (the highest and lowest prices), four data points (the opening, closing, highest, and lowest prices) may provide extra information. Garman and Klass (1980) proposed several volatility estimators based on the knowledge of the opening, closing, highest, and lowest prices. Like Parkinson (1980), they assumed the same diffusion process and proposed their estimator \( {\widehat{\sigma}}_{GK}^2 \) as

$$ \begin{array}{l}{\widehat{\sigma}}_{GK}^2=0.511{\left[ \ln \left({H}_t/{L}_t\right)\right]}^2-0.019\Big\{ \ln \left({C}_t/{O}_t\right)\left[ \ln \left({H}_t\right)+ \ln \left({L}_t\right)-2 \ln \left({O}_t\right)\right]\\ {}-2\left[ \ln \left({H}_t/{O}_t\right) \ln \left({L}_t/{O}_t\right)\right]\Big\}-0.383{\left[ \ln \left({C}_t/{O}_t\right)\right]}^2.\end{array} $$
(74.2)

As mentioned in Garman and Klass (1980), their estimator can be presented practically as \( {\widehat{\sigma}}_{G{K}^{\mathit{\prime}}}^2=0.5{\left[ \ln \left({H}_t/{L}_t\right)\right]}^2-\left(2 \ln 2-1\right){\left[ \ln \left({C}_t/{O}_t\right)\right]}^2 \). Molnár (2012) showed that, in the absence of high-frequency data, returns normalized by their estimator are approximately normally distributed.
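A corresponding sketch of this practical Garman-Klass form (again, naming is ours):

```python
import numpy as np

def garman_klass_var(open_, high, low, close):
    """Practical Garman-Klass (1980) daily variance estimates.

    Implements 0.5*[ln(H/L)]^2 - (2*ln2 - 1)*[ln(C/O)]^2 per day.
    """
    hl = np.log(np.asarray(high, dtype=float) / np.asarray(low, dtype=float))
    co = np.log(np.asarray(close, dtype=float) / np.asarray(open_, dtype=float))
    return 0.5 * hl ** 2 - (2.0 * np.log(2.0) - 1.0) * co ** 2
```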

The price path cannot be monitored when markets are closed, and trading is discrete; Wiggins (1991) thus found that both the Parkinson and Garman-Klass estimators remain biased downward compared to the traditional estimator, because the observed highs are lower, and the observed lows higher, than the actual extremes. Garman and Klass (1980) and Grammatikos and Saunders (1986), nevertheless, estimated the potential bias using simulation analysis and showed that the bias decreases as the number of transactions increases. It is therefore relatively easy to adjust the estimates of daily variances to eliminate this source of bias.

Because the Parkinson (1980) and Garman and Klass (1980) estimators implicitly assume that the log price follows a geometric Brownian motion with no drift term, further refinements were made by Rogers and Satchell (1991) and Kunitomo (1992). Rogers and Satchell (1991) added a drift term to the stochastic process and derived a volatility estimator that uses only the daily opening, highest, lowest, and closing prices. Their estimator \( {\widehat{\sigma}}_{RS}^2 \) can be written as follows:

$$ \begin{array}{l}{\widehat{\sigma}}_{RS}^2=\frac{1}{N}{\displaystyle \sum_{n=t-N}^t}\left\{ \ln \left({H}_n/{O}_n\right)\left[ \ln \left({H}_n/{O}_n\right)- \ln \left({C}_n/{O}_n\right)\right]\right.\\ {}\kern2em \left.+ \ln \left({L}_n/{O}_n\right)\left[ \ln \left({L}_n/{O}_n\right)- \ln \left({C}_n/{O}_n\right)\right]\right\}.\end{array} $$
(74.3)
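In code, the daily Rogers-Satchell term is easy to compute; averaging it over an N-day window gives Eq. 74.3 (a sketch under the chapter's notation):

```python
import numpy as np

def rogers_satchell_terms(open_, high, low, close):
    """Daily Rogers-Satchell (1991) variance terms; their mean over an
    N-day window is the drift-independent estimator of Eq. 74.3."""
    o = np.asarray(open_, dtype=float)
    ho = np.log(np.asarray(high, dtype=float) / o)
    lo = np.log(np.asarray(low, dtype=float) / o)
    co = np.log(np.asarray(close, dtype=float) / o)
    return ho * (ho - co) + lo * (lo - co)
```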

Rogers et al. (1994) reported that the Rogers-Satchell estimator yields theoretical efficiency gains over the Garman-Klass estimator. They also reported that the Rogers-Satchell estimator performs well under a changing drift with as few as 30 daily observations.

In contrast to Rogers and Satchell (1991), Kunitomo (1992) used the opening and closing prices to estimate a modified range under the hypothesis that the transformed log price follows a Brownian bridge. This essentially corrects the highest and lowest prices for the drift term:

$$ {\widehat{\sigma}}_K^2=\frac{1}{\beta_N}{\displaystyle \sum_{n=t-N}^t{\left[ \ln \left({\widehat{H}}_n/{\widehat{L}}_n\right)\right]}^2,} $$
(74.4)

where \( {\widehat{H}}_n=\underset{t_i\in \left[n-1,n\right]}{\mathrm{Max}}\left\{{P}_{t_i}-\left[{O}_n+\left({C}_n-{O}_n\right)\left({t}_i-n+1\right)\right]\right\}+\left({C}_n-{O}_n\right) \) and \( {\widehat{L}}_n=\underset{t_i\in \left[n-1,n\right]}{\mathrm{Min}}\left\{{P}_{t_i}-\left[{O}_n+\left({C}_n-{O}_n\right)\left({t}_i-n+1\right)\right]\right\}+\left({C}_n-{O}_n\right) \) are the end-of-day drift-corrected highest and lowest prices, and \( {\beta}_N=N{\pi}^2/6 \) is a correction parameter.

Finally, Yang and Zhang (2000) made further refinements by deriving a price range estimator that is unbiased, independent of any drift, and consistent in the presence of opening price jumps. Their estimator \( {\widehat{\sigma}}_{YZ}^2 \) thus can be written as follows:

$$ \begin{array}{l}{\widehat{\sigma}}_{YZ}^2=\frac{1}{N-1}{\displaystyle \sum_{n=t-N}^t{\left[ \ln \left({O}_n/{C}_{n-1}\right)-\overline{ \ln \left({O}_n/{C}_{n-1}\right)}\right]}^2}\\ {}\kern2.25em +\frac{k}{N-1}{\displaystyle \sum_{n=t-N}^t{\left[ \ln \left({C}_n/{O}_n\right)-\overline{ \ln \left({C}_n/{O}_n\right)}\right]}^2}+\left(1-k\right)\ {\widehat{\sigma}}_{RS}^2,\end{array} $$
(74.5)

where \( k=\frac{0.34}{1.34+\left(N+1\right)/\left(N-1\right)} \). The symbol \( \overline{X} \) denotes the sample mean of X, and \( {\widehat{\sigma}}_{RS}^2 \) is the Rogers-Satchell estimator. The Yang-Zhang estimator is simply the weighted sum of the estimated overnight (close-to-open) variance, the estimated open-to-close variance, and the Rogers and Satchell (1991) drift-independent estimator. The resulting estimator therefore explicitly incorporates a term for the closed-market variance.
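The Yang-Zhang estimator can be sketched as follows, reusing the rogers_satchell_terms function above; dropping the first day (which has no previous close) is our own implementation assumption:

```python
import numpy as np

def yang_zhang_var(open_, high, low, close):
    """Yang-Zhang (2000) variance over an N-day window (Eq. 74.5)."""
    o, h, l, c = (np.asarray(x, dtype=float) for x in (open_, high, low, close))
    n = len(c) - 1                       # usable days: need previous close
    overnight = np.log(o[1:] / c[:-1])   # ln(O_n / C_{n-1}), close-to-open
    open_close = np.log(c[1:] / o[1:])   # ln(C_n / O_n), open-to-close
    k = 0.34 / (1.34 + (n + 1) / (n - 1))
    var_rs = rogers_satchell_terms(o[1:], h[1:], l[1:], c[1:]).mean()
    return overnight.var(ddof=1) + k * open_close.var(ddof=1) + (1 - k) * var_rs
```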

Shu and Zhang (2006) investigated the relative performance of four range-based volatility estimators (Parkinson, Garman-Klass, Rogers-Satchell, and Yang-Zhang) for S&P 500 index data and found that the price range estimators all perform very well when an asset price follows a continuous geometric Brownian motion. However, significant differences among the various range estimators are detected when the asset return distribution involves an opening jump or a large drift.

In terms of efficiency, all the previous estimators exhibit substantial improvements. Define the efficiency of a volatility estimator \( {\widehat{\sigma}}_i^2 \) as the ratio of the variance of the classical close-to-close estimator \( {\widehat{\sigma}}^2 \) to the variance of the estimator itself, that is,

$$ Eff\left({\widehat{\sigma}}_i^2\right)=\frac{ Var\left({\widehat{\sigma}}^2\right)}{ Var\left({\widehat{\sigma}}_i^2\right)}. $$
(74.6)

Parkinson (1980) reported a theoretical relative efficiency gain ranging from 2.5 to 5, meaning that the estimation variance is 2.5-5 times lower than that of the close-to-close estimator. Garman and Klass (1980) reported that their estimator has an efficiency of 7.4, while the Yang and Zhang (2000) and Kunitomo (1992) variance estimators achieve theoretical efficiency gains of 7.3 and 10, respectively.

In addition to variance estimation, Rogers and Zhou (2008) proposed a new estimator of the correlation between two assets based on their opening, closing, high, and low prices. However, they concluded that the range-based estimator of correlation does not perform better than the simpler estimator based only on the opening and closing prices. Nevertheless, it still points to new possibilities for future research.

74.3 The Range-Based Volatility Models

This section provides a brief overview of the models used to forecast range-based volatility. In what follows, the models are presented in increasing order of complexity. For an asset, the range of the log prices is defined as the difference between the daily highest and lowest prices in logarithms. It can be denoted by

$$ {R}_t= \ln \left({H}_t\right)- \ln \left({L}_t\right). $$
(74.7)

According to Christoffersen's (2002) results for S&P 500 data, the range-based volatility R_t shows more persistence than the squared return, based on estimated autocorrelations. The range-based volatility estimator can thus be used instead of the squared return for evaluating the forecasts from volatility models, and with the time series of R_t, one can easily construct a volatility model under the traditional autoregressive framework.

Rather than the raw range, however, Alizadeh et al. (2002) focused on the log range, ln(R_t), since they found that in many applied situations the log range follows an approximately normal distribution. Therefore, all the models introduced in this section except Chou's CARR model are estimated and forecast using the log range.

The range-based volatility models below are first introduced with some simple specifications, including random walk, moving average (MA), exponentially weighted moving average (EWMA), and autoregressive (AR) models; Hanke and Wichern (2005) regard these as fairly basic techniques in the applied forecasting literature. We then present models with a much higher degree of complexity, such as the stochastic volatility (SV), CARR, and range-based multivariate volatility models.

74.3.1 The Random Walk Model

The log range ln(R_t) can be modeled as a random walk, meaning that the best forecast of the next period's log range is this period's log range. As in most papers, the random walk model is used as the benchmark for comparison:

$$ E\left[ \ln \left({R}_{t+1}\right)\Big|{I}_t\right]= \ln \left({R}_t\right), $$
(74.8)

where I_t is the information set at time t; the forecast E[ln(R_{t+1})|I_t] is formed conditional on I_t.

74.3.2 The MA Model

MA methods are widely used in time series forecasting. In most cases, a moving average of length N, where N = 20, 60, or 120 days, is used to generate log range forecasts. These lengths are fairly standard because they correspond to 1 month, 3 months, and 6 months of trading days, respectively. The expression for the N-day moving average is shown below:

$$ E\left[ \ln \left({R}_{t+1}\right)\Big|{I}_t\right]=\frac{1}{N}{\displaystyle \sum_{j=0}^{N-1} \ln \left({\mathrm{R}}_{t-j}\right)}. $$
(74.9)

74.3.3 The EWMA Model

EWMA models are also very widely used in applied forecasting. In an EWMA model, the current forecast of the log range is calculated as the weighted average of the previous period's log range and the previous period's forecast. This specification is appropriate when the underlying log range series has no trend.

$$ E\left[ \ln \left({\mathrm{R}}_{t+1}\right)\Big|{I}_t\right]=\lambda E\left[ \ln \left({R}_t\right)\Big|{I}_{t-1}\right]+\left(1-\lambda \right) \ln \left({\mathrm{R}}_t\right). $$
(74.10)

The smoothing parameter λ lies between zero and one. If λ is zero, the EWMA model reduces to a random walk; if λ is one, the EWMA model places all of the weight on the past forecast. In the estimation process the optimal value of λ is chosen by the mean squared error criterion: the optimal λ is the one that yields the lowest MSE, as sketched below.
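A minimal sketch of Eq. 74.10 and the grid search for λ follows; seeding the first forecast with the first observation is our assumption, not specified in the text:

```python
import numpy as np

def ewma_forecasts(log_range, lam):
    """One-step-ahead EWMA forecasts of the log range (Eq. 74.10)."""
    log_range = np.asarray(log_range, dtype=float)
    f = np.empty(len(log_range) + 1)
    f[0] = log_range[0]                      # seed: an assumption
    for t in range(len(log_range)):
        f[t + 1] = lam * f[t] + (1.0 - lam) * log_range[t]
    return f[1:]                             # f[t+1] uses data up to time t

def best_lambda(log_range, grid=np.linspace(0.0, 0.99, 100)):
    """Choose lambda by minimizing in-sample MSE, as described above."""
    log_range = np.asarray(log_range, dtype=float)
    mse = [np.mean((ewma_forecasts(log_range, g)[:-1] - log_range[1:]) ** 2)
           for g in grid]
    return grid[int(np.argmin(mse))]
```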

Extending the EWMA model, Harris and Yilmaz (2010) combined the Parkinson range estimator and the open-to-close return to propose a hybrid EWMA variance model:

$$ {\widehat{\sigma}}_{t+1}^{\mathrm{Hybrid}}=\lambda {\widehat{\sigma}}_t^{\mathrm{Hybrid}}+\left(1-\lambda \right){\widehat{\sigma}}_{P,t}^{\prime }, $$
(74.11)

where \( {\widehat{\sigma}}_{P,t}^{\prime }=\frac{1}{4 \ln 2}{\left( \ln {H}_t- \ln {L}_t\right)}^2+{\left( \ln {O}_t- \ln {C}_{t-1}\right)}^2. \)

74.3.4 The AR Model

This model uses an autoregressive process for the log range, combining volatility dynamics with the range information. The n lagged values of the log range are used as regressors to forecast one period ahead:

$$ E\left[ \ln \left({\mathrm{R}}_{t+1}\right)\right]={\beta}_0+{\displaystyle \sum_{i=1}^n{\beta}_i \ln \left({\mathrm{R}}_{t+1-i}\right).} $$
(74.12)
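A minimal OLS sketch of the AR(n) specification in Eq. 74.12 (the lstsq-based fitting routine is our own illustration):

```python
import numpy as np

def fit_ar_log_range(log_range, n_lags=2):
    """OLS fit of the AR(n) log-range model (Eq. 74.12).

    Returns [beta_0, beta_1, ..., beta_n]; the one-step forecast is
    beta_0 + sum_i beta_i * log_range[-i].
    """
    log_range = np.asarray(log_range, dtype=float)
    y = log_range[n_lags:]
    X = np.column_stack(
        [np.ones(len(y))]
        + [log_range[n_lags - i:-i] for i in range(1, n_lags + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```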

Li and Hong (2011) introduced the range-based autoregressive volatility (AV) model, first proposed by Hsieh (1991, 1993). Their empirical study showed that the range-based AV model performs better than the GARCH model in both in-sample and out-of-sample comparisons.

74.3.5 The Discrete-Time Range-Based SV Model

Alizadeh et al. (2002) presented a formal derivation of the discrete-time SV model from the continuous-time SV model. The conditional distribution of log range is approximately Gaussian:

$$ \ln \kern0.3em {R}_{t+1}\Big| \ln \kern0.3em {R}_t\sim N\left[ \ln \kern0.3em \overline{R}+\rho \left( \ln \kern0.3em {R}_t- \ln \kern0.3em \overline{R}\right),{\beta}^2\Delta t\right], $$
(74.13)

where Δt = T/N, T is the sample period, and N is the number of intervals. The parameter β models the volatility of the latent volatility. Following Harvey et al. (1994), a linear state space system including the state equation and the signal equation can be written as

$$ \ln \kern0.3em {R}_{\left(i+1\right)\Delta t}= \ln \kern0.3em \overline{R}+{\rho}_{\Delta t}\left( \ln \kern0.3em {R}_{i\Delta t}- \ln \kern0.3em \overline{R}\right)+\beta \sqrt{\Delta t}{\upsilon}_{\left(i+1\right)\Delta t}. $$
(74.14)
$$ \ln \left|f\left({s}_{i\Delta t,\left(i+1\right)\Delta t}\right)\right|=\gamma \kern0.3em \ln \kern0.3em {R}_{i\Delta t}+E\left[ \ln \left|f\left({s}_{i\Delta t,\left(i+1\right)\Delta t}^{*}\right)\right|\right]+{\varepsilon}_{\left(i+1\right)\Delta t}. $$
(74.15)

Equation 74.14 is the state equation and Eq. 74.15 is the signal equation. In Eq. 74.15, E is the mathematical expectation operator. The state equation errors are i.i.d. N(0,1) and the signal equation errors have a mean of zero.

A two-factor model can be represented by the following state equation:

$$ \ln \kern0.3em {R}_{\left(i+1\right)\Delta t}= \ln \kern0.3em \overline{R}+ \ln \kern0.3em {R}_{1,\left(i+1\right)\Delta t}+ \ln \kern0.3em {R}_{2,\left(i+1\right)\Delta t}. $$
$$ \ln \kern0.3em {R}_{1,\left(i+1\right)\Delta t}={\rho}_{1,\Delta t}\kern0.3em \ln \kern0.3em {R}_{1,i\Delta t}+{\beta}_1\sqrt{\Delta t}{\upsilon}_{1,\left(i+1\right)\Delta t}. $$
(74.16)
$$ \ln \kern0.3em {R}_{2,\left(i+1\right)\Delta t}={\rho}_{2,\Delta t}\kern0.3em \ln \kern0.3em {R}_{2,i\Delta t}+{\beta}_2\sqrt{\Delta t}{\upsilon}_{2,\left(i+1\right)\Delta t}. $$

The error terms υ_1 and υ_2 are contemporaneously and serially independent N(0, 1) random variables. Compared with the one-factor volatility model for currency futures prices, the two-factor model shows more desirable regression diagnostics. Asai and Unite (2010) extended this model to capture leverage and size effects, but their empirical results did not support the theory of Alizadeh et al. (2002); on the contrary, they showed that the conditional distributions of the selected returns are non-normal.

74.3.6 The Range-Based EGARCH Model

Brandt and Jones (2006) incorporated the range information into the EGARCH model, naming the result the range-based EGARCH model. The model significantly improves both in-sample and out-of-sample volatility forecasts. The daily log range and log return are specified as follows:

$$ \ln \left({R}_t\right)\left|{I}_{t-1}\sim N\left(0.43+ \ln \kern0.3em {h}_t,{0.29}^2\right),\kern0.5em {r}_t\right|{I}_{t-1}\sim N\left(0,{h}_t^2\right), $$
(74.17)

where h t is the conditional volatility of the daily log return r t . Then, the range-based EGARCH for the daily volatility can be expressed by

$$ \ln \kern0.3em {h}_t- \ln \kern0.3em {h}_{t-1}=\kappa \left(\theta - \ln \kern0.3em {h}_{t-1}\right)+\phi {X}_{t-1}^R+\delta {r}_{t-1}/{h}_{t-1}, $$
(74.18)

where θ denotes the long-run mean of the volatility process and κ the speed of mean reversion. The coefficient δ governs the asymmetric effect of lagged returns. The innovation

$$ {X}_{t-1}^R=\frac{ \ln \left({R}_{t-1}\right)-0.43- \ln \kern0.3em {h}_{t-1}}{0.29} $$
(74.19)

is defined as the standardized deviation of the log range from its expected value, so ϕ measures the sensitivity to lagged log ranges. In short, the range-based EGARCH model replaces the return innovation term with the standardized log range.
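To make the recursion concrete, the following sketch filters the conditional volatility path given parameter values (in practice κ, θ, ϕ, and δ would be estimated by maximum likelihood; the initialization h0 is our assumption):

```python
import numpy as np

def range_egarch_path(log_range, returns, kappa, theta, phi, delta, h0):
    """Filter conditional volatility h_t under Eqs. 74.17-74.19."""
    log_h = np.empty(len(returns))
    log_h[0] = np.log(h0)                     # initialization: an assumption
    for t in range(1, len(returns)):
        x_r = (log_range[t - 1] - 0.43 - log_h[t - 1]) / 0.29       # Eq. 74.19
        log_h[t] = (log_h[t - 1]
                    + kappa * (theta - log_h[t - 1])                # mean reversion
                    + phi * x_r                                     # range innovation
                    + delta * returns[t - 1] / np.exp(log_h[t - 1]))  # asymmetry
    return np.exp(log_h)
```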

74.3.7 The CARR Model

This section provides a brief overview of the CARR model used to forecast range-based volatility. The CARR model is a special case of the multiplicative error model (MEM) of Engle (2002b), extended from the GARCH approach. The MEM is used to model nonnegative-valued processes, such as trading volume, duration, realized volatility, and range. Like the GARCH approach, the MEM provides conditional expectations of the variables, and it avoids the problems with zero observations that arise when resorting to logs. It can be extended to a multivariate case through the use of copula functions (Cipollini et al. 2009).

Instead of modeling the log range as in the previous parts of this section, Chou (2005) modeled the process of the price range directly. With the time series of the price range R_t, Chou (2005) presented the CARR model of order (p, q), or CARR(p, q), as

$$ \begin{array}{l}{R}_t={\lambda}_t{\varepsilon}_t,{\varepsilon}_t\sim f(.),\\ {}{\lambda}_t=\omega +{\displaystyle \sum_{i=1}^p{\alpha}_i}{R}_{t-i}+{\displaystyle \sum_{j=1}^q{\beta}_j{\lambda}_{t-j}},\end{array} $$
(74.20)

where λ t is the conditional mean of the range based on all information up to time t and the distribution of the disturbance term ε t , or the normalized range, is assumed to have a density function f(.) with a unit mean. Since ε t is positively valued given that both the price range R t and its expected value λ t are positively valued, a natural choice for the distribution is the exponential distribution.
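A sketch of the CARR(1,1) recursion and its quasi-maximum likelihood estimation under the unit-mean exponential density (starting values and the initialization of λ at the sample mean are our assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def carr11_filter(ranges, omega, alpha, beta):
    """Conditional mean lambda_t of CARR(1,1) (Eq. 74.20)."""
    ranges = np.asarray(ranges, dtype=float)
    lam = np.empty(len(ranges))
    lam[0] = ranges.mean()                    # initialization: an assumption
    for t in range(1, len(ranges)):
        lam[t] = omega + alpha * ranges[t - 1] + beta * lam[t - 1]
    return lam

def carr11_fit(ranges):
    """QMLE with exponential likelihood: minimize sum of log(lam) + R/lam."""
    ranges = np.asarray(ranges, dtype=float)

    def neg_loglik(p):
        omega, alpha, beta = p
        if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
            return np.inf                     # enforce positivity/stationarity
        lam = carr11_filter(ranges, omega, alpha, beta)
        return np.sum(np.log(lam) + ranges / lam)

    res = minimize(neg_loglik, x0=[0.1 * ranges.mean(), 0.2, 0.7],
                   method="Nelder-Mead")
    return res.x                              # [omega, alpha, beta]
```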

The equation of the conditional expectation of range can easily be extended to incorporate other explanatory variables, such as trading volume, time to maturity, and lagged return:

$$ {\lambda}_t=\omega +{\displaystyle \sum_{i=1}^p{\alpha}_i}{R}_{t-i}+{\displaystyle \sum_{j=1}^q{\beta}_j{\lambda}_{t-j}}+{\displaystyle \sum_{k=1}^L{l}_k}{X}_k. $$
(74.21)

This model is called the CARR model with exogenous variables, or the CARRX model. The CARR model is essentially a symmetric model. In order to describe the leverage effect of financial time series, Chou (2006) divided the whole price range into two one-sided price ranges, the upward range and the downward range. He defined the upward range UPR_t and the downward range DNR_t as the differences between the daily high and the opening price and between the opening price and the daily low, respectively, at time t. This can be expressed as follows:

$$ UP{R}_t= \ln \left({H}_t\right)- \ln \left({O}_t\right), $$
(74.22)
$$ DN{R}_t= \ln \left({O}_t\right)- \ln \left({L}_t\right). $$
(74.23)

Similarly, with the time series of a one-sided price range, UPR_t or DNR_t, Chou (2006) extended the CARR model to the asymmetric CARR (ACARR) model. In volatility forecasting, the asymmetric model performed better than the symmetric model. Chen et al. (2008) proposed a range-based threshold conditional autoregressive (TARR) model with superior volatility forecasting ability. In addition, Lin et al. (2012) proposed a nonlinear smooth transition CARR model to capture smooth volatility asymmetries in international financial markets.

74.3.8 The Range-Based Multivariate Volatility Model

Multivariate volatility models have been extensively researched in recent studies. They provide relevant financial applications in various areas, such as asset allocation, hedging, and risk management. Bauwens et al. (2006) offered a review of multivariate volatility models. As an extension of the univariate range models, Fernandes et al. (2005) proposed a multivariate CARR (MCARR) model using the formula Cov(X, Y) = [V(X + Y) − V(X) − V(Y)]/2. Moreover, Lee and Shin (2008) derived conditions for stationarity, geometric ergodicity, and β-mixing with exponential decay. Analogous to Fernandes et al.'s (2005) work, Brandt and Diebold (2006) used no-arbitrage conditions to express the covariances in terms of variances. However, this kind of method applies essentially only to the bivariate case.

Chou et al. (2009) combined the CARR model with the DCC model of Engle (2002a) to propose a range-based volatility model, which uses the ranges to replace the GARCH volatilities in the first step of DCC. They concluded that the range-based DCC model performs better than other return-based models (MA100, EWMA, CCC, return-based DCC, and diagonal BEKK) through the statistical measures, RMSE and MAE, based on four benchmarks of implied and realized covariance.

The DCC model is a two-step forecasting model that estimates univariate GARCH models for each asset and then calculates the time-varying correlations using the standardized residuals from the first step. Related discussions of the DCC model can be found in Engle and Sheppard (2001), Engle (2002a), and Cappiello et al. (2006). It can be viewed as a generalization of the constant conditional correlation (CCC) model proposed by Bollerslev (1990). The conditional covariance matrix H_t of a k × 1 return vector r_t in CCC (with \( {\mathbf{r}}_t\left|{I}_{t-1}\right.\sim N\left(\mathbf{0},{\mathbf{H}}_t\right) \)) can be expressed as

$$ {\mathbf{H}}_t={\mathbf{D}}_t\mathbf{R}{\mathbf{D}}_t, $$
(74.24)

where D_t is a k × k diagonal matrix whose ith diagonal element is the time-varying standard deviation \( \sqrt{h_{i,t}} \) of the ith return series from GARCH, and R is the sample correlation matrix of r_t.

The DCC is formulated in the following specification:

$$ {\mathbf{H}}_t={\mathbf{D}}_t{\mathbf{R}}_t{\mathbf{D}}_t,{\mathbf{R}}_t= diag{\left\{{\mathbf{Q}}_t\right\}}^{-\raisebox{1ex}{$1$}\left/ \raisebox{-1ex}{$2$}\right.}{\mathbf{Q}}_t diag{\left\{{\mathbf{Q}}_t\right\}}^{-\raisebox{1ex}{$1$}\left/ \raisebox{-1ex}{$2$}\right.}, $$
(74.25)
$$ {\mathbf{Q}}_t=\mathbf{S}\circ \left(\boldsymbol{\upiota} {\boldsymbol{\upiota}}^{\mathbf{\prime}}-\mathbf{A}-\mathbf{B}\right)+\mathbf{A}\circ {\mathbf{Z}}_{t-1}{{\mathbf{Z}}^{\mathbf{\prime}}}_{t-1}+\mathbf{B}\circ {\mathbf{Q}}_{t-1},\kern0.5em {\mathbf{Z}}_t={\mathbf{D}}_t^{-1}{\mathbf{r}}_t, $$

where ι is a vector of ones and ∘ is the Hadamard product of two identically sized matrices, computed simply by element-by-element multiplication. Q_t and S are, respectively, the conditional and unconditional covariance matrices of the standardized residual vector Z_t obtained from GARCH. For the CARR case, the standardized residual vector Z*_t is calculated from the adjusted conditional range. A and B are parameter matrices to be estimated; in most cases, however, they are set as scalars. In short, DCC differs from CCC only by allowing R to be time-varying.
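A sketch of the scalar DCC recursion in Eq. 74.25, taking as input the T × k matrix of standardized residuals (from GARCH, or from the CARR-adjusted ranges in the range-based variant); initializing Q at S is our assumption:

```python
import numpy as np

def dcc_correlations(Z, a, b):
    """Scalar DCC recursion (Eq. 74.25); returns the T conditional
    correlation matrices R_t from standardized residuals Z (T x k array)."""
    T, k = Z.shape
    S = np.corrcoef(Z, rowvar=False)    # unconditional correlation matrix
    Q = S.copy()                        # initialization: an assumption
    R = np.empty((T, k, k))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)       # diag{Q}^(-1/2) Q diag{Q}^(-1/2)
        Q = (1 - a - b) * S + a * np.outer(Z[t], Z[t]) + b * Q
    return R
```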

It is difficult to introduce exogenous variables into the DCC model because of technical limitations of the mean-reverting process. Chou and Cai (2009) therefore proposed a double smooth transition conditional correlation CARR (DSTCC-CARR) model. In addition to the multi-asset CARR part, the DSTCC-CARR model builds a smooth transition correlation structure on the standardized residuals Z*_t of the rescaled range.

$$ E\left[{\mathbf{Z}}_t^{*}{{\mathbf{Z}}^{\mathit{\prime}}}_t^{*}\Big|{\Omega}_{t-1}\right]={P}_t, $$
(74.26)
$$ {P}_t=\left(1-{G}_{2t}\right)\left(\left(1-{G}_{1t}\right){P}_{(11)}+{G}_{1t}{P}_{(21)}\right)+{G}_{2t}\left(\left(1-{G}_{1t}\right){P}_{(12)}+{G}_{1t}{P}_{(22)}\right), $$
(74.27)

where the transition logistic functions are \( {G}_{jt}={\left(1+{e}^{-{\gamma}_j\left({s}_{jt}-{c}_j\right)}\right)}^{-1},{\gamma}_j>0,j=1,2 \). The symbols c_j and γ_j in the transition function are location and speed parameters, respectively; see Chou and Cai (2009) for details. Based on this framework, Cai et al. (2009) used the CPI and the VIX as transition variables to investigate the correlations among six international stock indices.

74.3.9 Other Model Extensions

Beyond the model classes above, Harris et al. (2011) developed a cyclical volatility model that employs the range to investigate the short- and long-run dynamics of exchange rate volatility. Their results indicated that the cyclical volatility model outperforms the range-based EGARCH and FIEGARCH models in computational efficiency and out-of-sample forecasting. In contrast to modeling the range directly, some studies put the range into existing models to increase their explanatory power. For example, Lin and Rozeff (1994) put the estimated range process into the GARCH model and showed that the range estimator was still useful in explaining the conditional variance.

74.4 The Realized Range Volatility

There has been much research on the measurement of volatility using high-frequency data. In particular, realized volatility, calculated as the sum of squared intraday returns, provides a more efficient estimate of volatility. Reviews of realized volatility can be found in Andersen et al. (2001), Andersen et al. (2003), Barndorff-Nielsen and Shephard (2005), Andersen et al. (2006, 2007), and McAleer and Medeiros (2008). Martens and van Dijk (2007) and Christensen and Podolskij (2007) replaced the squared intraday return with the high/low range to obtain a new estimator called the realized range.

Initially, assume that the asset price P_t follows a geometric Brownian motion:

$$ d{P}_t=\mu {P}_t dt+\sigma {P}_td{z}_t, $$
(74.28)

where μ is the drift term, σ is the constant volatility, and z_t is a Brownian motion. A trading day is divided into τ equal-length intervals. The daily realized volatility RV_t at time t can then be expressed as

$$ R{V}_t={\displaystyle \sum_{i=1}^{\tau }{\left( \ln {P}_{t,i}- \ln {P}_{t,i-1}\right)}^2}, $$
(74.29)

where P_{t,i} is the price at time i × Δ on trading day t and Δ is the time interval, so that τ × Δ is the length of the trading day. The realized range RR_t is then

$$ R{R}_t=\frac{1}{4 \ln 2}{\displaystyle \sum_{i=1}^{\tau }{\left( \ln {H}_{t,i}- \ln {L}_{t,i}\right)}^2}, $$
(74.30)

where H t,i and L t,i are the highest price and the lowest price of the ith interval on the tth trading day, respectively.
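A sketch of Eq. 74.30 for one trading day, given the τ intraday interval highs and lows:

```python
import numpy as np

def realized_range(high_intraday, low_intraday):
    """Daily realized range (Eq. 74.30) from the tau interval highs/lows."""
    h = np.log(np.asarray(high_intraday, dtype=float))
    l = np.log(np.asarray(low_intraday, dtype=float))
    return np.sum((h - l) ** 2) / (4.0 * np.log(2.0))
```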

As mentioned before, several studies suggest improving efficiency by also using the opening and closing prices, as in Garman and Klass (1980). Furthermore, assuming that P_t follows a continuous sample-path martingale, Christensen and Podolskij (2007) showed that the realized range estimates the integrated volatility and remains consistent in the presence of stochastic volatility:

$$ \ln \kern0.3em {P}_t= \ln \kern0.3em {P}_0+{\displaystyle {\int}_0^t{\mu}_s ds+{\displaystyle {\int}_0^t{\sigma}_{s-}d{z}_s}},\kern0.5em \mathrm{for}\kern0.5em 0\le t<\infty . $$
(74.31)

An obvious and important concern is that the realized range may be seriously affected by microstructure noise. Martens and van Dijk (2007) considered a bias-adjustment procedure, which scales the realized range by the ratio of the average level of the daily range to the average level of the realized range. Christensen et al. (2009) provided another bias correction for the realized range, showing how to sample the high-frequency data to minimize its asymptotic conditional variance. Both found that the scaled realized ranges perform better than the (scaled) realized volatility. Todorova (2012) also showed that the adjusted realized ranges perform better than the daily range for the DAX 30 index.

It is interesting to note that the realized range can be extended to estimate covariances. Bannouh et al. (2009) used the concept of Brandt and Diebold's (2006) no-arbitrage portfolio to propose a realized co-range estimator:

$$ RC{R}_t=\frac{1}{2{\lambda}_1{\lambda}_2}\left(R{R}_{P,t}-{\lambda}_1^2R{R}_{1,t}-{\lambda}_2^2R{R}_{2,t}\right), $$
(74.32)

where RR_{P,t}, RR_{1,t}, and RR_{2,t} are the realized ranges of the portfolio P, asset 1, and asset 2, and λ_1 and λ_2 are the weights of the two assets in the portfolio (λ_1 + λ_2 = 1).
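As a sketch, Eq. 74.32 is a one-line computation once the three realized ranges are available (argument names are ours):

```python
def realized_corange(rr_portfolio, rr_asset1, rr_asset2, w1, w2):
    """Realized co-range (Eq. 74.32) for a two-asset portfolio with
    weights w1 + w2 = 1."""
    return (rr_portfolio - w1 ** 2 * rr_asset1
            - w2 ** 2 * rr_asset2) / (2.0 * w1 * w2)
```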

74.5 The Financial Applications of Range Volatility

The range discussed in this chapter is a measure of volatility. From a theoretical point of view, it provides a more efficient estimator of volatility than the return, which is intuitively reasonable given the extra information contained in the range data. Return-based volatility also neglects the intraday price fluctuation, especially when the closing prices of two consecutive trading days are close to each other. We can therefore conclude that the high/low range volatility contains additional information relative to close-to-close volatility. Moreover, the range is readily available at low cost. Hence, most research related to volatility can be applied to the range. Bollerslev et al. (1992) and Poon and Granger (2003) provide extensive discussions of the applications of volatility in financial markets.

Before the range was embedded in dynamic structures, however, its application was very limited. Based on the SV framework, Gallant et al. (1999) and Alizadeh et al. (2002) incorporated the range into equilibrium asset pricing models. Chou (2005) and Brandt and Jones (2006), on the other hand, filled the gap between discrete-time dynamic models and the range. Their work creates many opportunities for future research. In the following sections, we give a classified review of the financial applications of range volatility.

74.5.1 Value at Risk

Value at Risk (VaR) is designed to measure the potential loss of an asset and is widely used in financial markets. It can be calculated as \( Va{R}_t^{\alpha }={\mu}_t+{f}_{\alpha }{\sigma}_t \), where α is the given significance level and f_α is the left α-quantile of the return distribution F. Asai and Brugal (2012) used an asymmetric heterogeneous ARMA model for the range to estimate VaR and showed that the one-step-ahead VaR forecast based on the log range performed well during the global financial crisis. Chen et al. (2012) proposed a range-based threshold conditional VaR (CAViaR) model which also outperformed other models during the crisis. Moreover, Brownlees and Gallo (2010) showed that the daily range performs as well for VaR prediction as the ultra-high-frequency data (UHFD) volatility measures they proposed. In addition, Shao, Lian, and Yin (2009) used the CARR model to model the realized range in estimating VaR, although it performed only on par with the realized volatility model. However, Louzis et al. (2012) found that the adjusted realized range can generate superior VaR estimates.

74.5.2 Hedge

With the development of conditional volatility models, there has been a dramatic increase in research on futures hedging. From the minimum-variance hedging calculation, the optimal dynamic hedge ratio can be expressed as \( {h}_t=\rho {\sigma}_{S,t}/{\sigma}_{F,t} \), where ρ is the correlation of spot and futures returns and σ_{S,t} and σ_{F,t} are the standard deviations of spot and futures returns, respectively. Within the frameworks of a constant conditional correlation (CCC) model and a dynamic conditional correlation (DCC) model, Chou and Liu (2011) showed that the range-based multivariate volatility model yields greater efficiency gains than the return-based approaches.

74.5.3 Volatility Spillover

Volatility spillovers reflect the information flow among financial markets. Most studies analyze spillover behavior by estimating conditional variances and covariances. Gallo and Otranto (2008) estimated the weekly range with a new Markov-switching bivariate model to show the relevant role of Hong Kong as a dominant market. Engle et al. (2012) applied the daily range in a multivariate MEM approach to study volatility transmission across East Asian markets. Chiang and Wang (2011) combined copula functions with a time-varying logarithmic CARR (TVLCARR) model to investigate volatility contagion in the G7 stock markets.

74.5.4 Portfolio Management

The covariance process plays an important role in asset allocation. Based on the conditional mean-variance framework, Chou and Liu (2010) used the method of Chou et al. (2009) to show that the economic value of volatility timing based on the range is significant in comparison with the return. Wu and Liang (2011) did similar work by incorporating dynamic copulas into an asymmetric CARR model. The results imply that range volatility can be extended to practical applications.

74.5.5 Microstructure Issues

In recent years, attention has shifted to high-frequency data for some financial assets. It must be noted that microstructure analysis is often accompanied by different levels of microstructure noise. Martens and van Dijk (2007) claimed that the realized range also cannot avoid the bias caused by microstructure noise. However, Akay et al. (2010) showed that alternative range-based volatility estimates are relatively efficient and remove the upward bias caused by microstructure noise. Kalev and Duong (2008) utilized Martens and van Dijk's (2007) realized range to test the Samuelson hypothesis for futures contracts.

74.5.6 Other Financial Applications

As mentioned above, the range is readily available and can easily be applied to volatility issues. Chou et al. (2013) adopted the CARR model to investigate the long-term impact of terrorist attacks on the maturity, volume, and open interest effects for S&P 500 index futures. Corrado and Truong (2007) reported that the range estimator has forecasting ability for volatility similar to that of implied volatility. However, implied volatilities are not available for many assets, and option markets are insufficiently developed in many countries; in such cases, the range is more practical. The range is also often used as a volatility proxy; see Liu and Hung (2010), Patton (2011), Liu et al. (2012), Chen and Wu (2009), Karanasos and Kartsaklas (2009), and Gallo and Otranto (2008).

In contrast to the range itself, some studies pay more attention to the high and low prices themselves. Cheung et al. (2009) employed a vector error correction model (VECM) to capture the dynamic relationship between the high and low prices of stock indices. See also He and Wan (2009) and He et al. (2010) for related applications.

74.6 Conclusion and Limitations

Volatility plays a central role in many areas of finance. In view of the theoretical and practical studies, the price range provides an intuitive and efficient estimator of volatility. In this chapter, we began our discussion by reviewing the price range estimators. There has been a dramatic increase in the number of publications on this subject since Parkinson (1980) introduced the high/low range. Since then, new range estimators incorporating the opening and closing prices have been developed, assigning feasible weights to the differences among the highest, lowest, opening, and closing prices. Through this analysis, we can gain a better understanding of the nature of the range.

Some dynamic volatility models combined with the price range were also introduced in this chapter. They have led to broad applications in finance, especially the CARR model, which combines the superiority of the range in forecasting volatility with the flexibility of the GARCH specification. In addition, the range-based volatility models contribute significantly to financial applications. Last, the realized range replaces the squared intraday return of realized volatility with the high/low range to obtain a more efficient estimator. Although the financial applications of range volatility are still in their infancy, promising areas such as risk management, investment, and microstructure issues were outlined in this chapter. Further studies of this topic are clearly warranted.

The range estimator undoubtedly has some inherent shortcomings. It is well known that financial asset prices are very volatile and easily influenced by instantaneous information, and in statistics the range is very sensitive to outliers. Chou (2005) provided an answer by using a quantile range to obtain a robust measure of the price range; for example, a new range estimator can be calculated as the difference between the averages of the top and bottom 5 % of observations. See also Yeh et al. (2009) for further discussion.

In theory, many of the range estimators in the previous sections depend on the assumption of a continuous-time geometric Brownian motion. The estimators of Parkinson (1980) and Garman and Klass (1980) require a geometric Brownian motion with zero drift; Rogers and Satchell (1991) allowed a nonzero drift, and Yang and Zhang (2000) further allowed overnight price jumps. Moreover, only finitely many observations can be used to construct the range, so the range will exhibit some unexpected bias, especially for assets with low liquidity and limited transaction volume. Garman and Klass (1980) pointed out that discrete trading produces a late opening and an early closing relative to the continuous process, and that the difference between the observed highs and lows will be smaller than that between the actual highs and lows, so the calculated high/low estimator is biased downward. In addition, Beckers (1983) pointed out that disadvantaged buyers and sellers may trade at the highest and lowest prices, so the range values might be less representative for measuring volatility. Because of these limitations and the importance of the range volatility measure, range-based volatility modeling will continue to be a specialist subject studied vigorously.