
1 Introduction

With the continual development of new financial instruments, there is a growing demand for theoretical and empirical knowledge of financial volatility. It is well known that financial volatility plays a central role in derivative pricing, asset allocation, and risk management. As noted by Barndorff-Nielsen and Shephard (2003) and Andersen et al. (2003), however, financial volatility is a latent factor and hence not directly observable. It can only be estimated from its signature on an observed market price process; when the underlying process is more sophisticated or when observed market prices suffer from market microstructure noise, the results are less clear.

It is well known that many financial time series exhibit volatility clustering and autocorrelation. To incorporate these characteristics into a dynamic process, the generalized autoregressive conditional heteroskedasticity (GARCH) family of models proposed by Engle (1982) and Bollerslev (1986) and the stochastic volatility (SV) models advocated by Taylor (1986) are two popular and useful alternatives for estimating and modeling time-varying conditional financial volatility. However, as pointed out by Alizadeh et al. (2002), Brandt and Diebold (2006), Chou (2005), and other authors, both GARCH and SV models are inaccurate and inefficient because they are based only on the closing prices of the reference period and thus fail to use the information contained within it. In other words, the path of the price inside the reference period is entirely ignored when volatility is estimated by these models. Especially on turbulent days with intraday declines and recoveries, the traditional close-to-close measure indicates low volatility while the daily price range correctly shows that volatility is high.

The price range, defined as the difference between the highest and lowest market prices over a fixed sampling interval, has been known for a long time and has recently experienced renewed interest as an estimator of the latent volatility. The information contained in the opening, highest, lowest, and closing prices of an asset is widely used in Japanese candlestick charting techniques and other technical indicators (Nison 1991). Early applications of the range in finance can be traced to Mandelbrot (1971), and academic work on range-based volatility estimators began in the early 1980s. Several authors, dating back to Parkinson (1980), developed from the range several volatility measures far more efficient than the classical return-based estimators.

Building on the earlier results of Parkinson (1980), many studies show that the price range information can be used to improve volatility estimation. Alizadeh et al. (2002) demonstrate that the log range is not only significantly more efficient than the squared daily return but also approximately Gaussian in its conditional distribution, which greatly facilitates maximum likelihood estimation of stochastic volatility models. Moreover, as pointed out by Alizadeh et al. (2002) and Brandt and Diebold (2006), the range-based volatility estimator appears robust to microstructure noise such as bid-ask bounce. By adding microstructure noise to a Monte Carlo simulation, Shu and Zhang (2006) also support the finding of Alizadeh et al. (2002) that range estimators are fairly robust to microstructure effects.

Cox and Rubinstein (1985) stated the puzzle that, despite the elegant theory and the support of simulation results, the range-based volatility estimator had performed poorly in empirical studies. Chou (2005) argued that the failure of the range-based models in the literature stems from their ignoring the temporal movements of the price range. Using a proper dynamic structure for the conditional expectation of the range, the conditional autoregressive range (CARR) model proposed by Chou (2005) successfully resolves this puzzle and retains superior empirical forecasting ability. In-sample and out-of-sample volatility forecasting using S&P 500 Index data shows that the CARR model provides more accurate volatility estimates than the GARCH model. Similarly, Brandt and Jones (2006) formulate a model that is analogous to Nelson's (1991) EGARCH model but uses the square root of the intraday price range in place of the absolute return. Both studies find that range-based volatility estimators offer a significant improvement over their return-based counterparts. Moreover, Chou et al. (2007) extend CARR to a multivariate context using the dynamic conditional correlation (DCC) model proposed by Engle (2002a). They find that this range-based DCC model performs better than return-based volatility models in forecasting covariances. This paper also reviews alternative range-based multivariate volatility models.

Recently, many studies use high-frequency data to obtain an unbiased and highly efficient estimator of volatility; see Andersen et al. (2003) and McAleer and Medeiros (2008) for a review. The volatility measure built by these nonparametric methods is called realized volatility and is calculated as the sum of nonoverlapping squared returns within a fixed time interval. Martens and van Dijk (2007) replace the squared return with the price range to obtain a more efficient estimator, namely the realized range. In their empirical study, the realized range significantly improves upon realized return volatility. In addition, Christensen and Podolskij (2007) independently develop the realized range and show that this estimator is consistent and relatively efficient under some specific assumptions.

The remainder of the chapter is laid out as follows. Section 83.2 introduces the price range estimators. Section 83.3 describes the range-based volatility models, both univariate and multivariate. Section 83.4 presents the realized range. The financial applications of range volatility are provided in Sect. 83.5. Finally, Sect. 83.6 concludes.

2 The Price Range Estimators

A few price range estimators and their estimation efficiency are briefly introduced and discussed in this section. A significant practical advantage of the price range is that, for many assets, daily opening, highest, lowest, and closing prices are readily available. Most data suppliers provide daily highs and lows as summaries of intraday activity; for example, Datastream records the intraday price range for most securities, including equities, currencies, and commodities, going back to 1955. Range-based volatility proxies are therefore easily calculated, and this additional information yields a great improvement in financial applications. Roughly speaking, knowing these records allows us to get closer to the real underlying process, even if we do not know the whole path of asset prices. For an asset, let us define the following variables:

\({O}_{t}\) = the opening price of the t th trading day,

\({C}_{t}\) = the closing price of the t th trading day,

\({H}_{t}\) = the highest price of the t th trading day,

\({L}_{t}\) = the lowest price of the t th trading day.

The efficiency of the Parkinson (1980) estimator intuitively comes from the fact that the intraday price range gives more information about future volatility than two arbitrary points in the series (the closing prices). Assuming that the asset price follows a simple diffusion model without a drift term, his estimator \(\hat{{\sigma}}_{P}^{2}\) can be written as:

$$\hat{{\sigma}}_{P}^{2} = \frac{1} {4\ln 2}{(\ln {H}_{t} -\ln {L}_{t})}^{2}.$$
(83.1)
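As a minimal illustration, Equation (83.1) maps directly onto daily high/low series; the following Python sketch (the function name and array inputs are my own) returns one variance proxy per day, and averaging the output over a window gives a variance estimate for that period:

```python
import numpy as np

def parkinson_variance(high, low):
    """Parkinson (1980) daily variance proxy, Equation (83.1)."""
    high = np.asarray(high, dtype=float)
    low = np.asarray(low, dtype=float)
    return np.log(high / low) ** 2 / (4.0 * np.log(2.0))
```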

But instead of using only two data points, the highest and lowest prices, four data points, the opening, closing, highest, and lowest prices, may give extra information. Garman and Klass (1980) propose several volatility estimators based on the knowledge of the opening, closing, highest, and lowest prices. Like Parkinson (1980), they assume the same diffusion process and propose their estimator \(\hat{{\sigma}}_{\mathit{GK}}^{2}\) as:

$$\begin{array}{rcl} \hat{{\sigma}}_{\mathit{GK}}^{2}& =& 0.511{[\ln ({H}_{t}/{L}_{t})]}^{2} - 0.019\{\ln ({C}_{t}/{O}_{t}) \\ & & \times \,[\ln ({H}_{t}) +\ln ({L}_{t}) - 2\ln ({O}_{t})] \\ & & -2[\ln ({H}_{t}/{O}_{t})\ln ({L}_{t}/{O}_{t})]\} - 0.383{[\ln ({C}_{t}/{O}_{t})]}^{2}.\end{array}$$
(83.2)

As mentioned in Garman and Klass (1980), their estimator can be presented practically as \(\hat{{\sigma}}_{G{K}^{{\prime}}}^{2} = 0.5{[\ln ({H}_{t}/{L}_{t})]}^{2} - [2\ln 2 - 1]{[\ln ({C}_{t}/{O}_{t})]}^{2}\).
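Both forms translate directly into code; a sketch under the same notation (function names are illustrative, and the 0.019 coefficient is the published Garman–Klass value):

```python
import numpy as np

def garman_klass_variance(open_, high, low, close):
    """Garman-Klass (1980) daily variance, Equation (83.2)."""
    o, h, l, c = (np.log(np.asarray(x, dtype=float)) for x in (open_, high, low, close))
    u, d, r = h - o, l - o, c - o  # normalized high, low, and close
    return 0.511 * (u - d) ** 2 - 0.019 * (r * (u + d) - 2.0 * u * d) - 0.383 * r ** 2

def garman_klass_practical(open_, high, low, close):
    """The 'practical' form quoted above."""
    o, h, l, c = (np.log(np.asarray(x, dtype=float)) for x in (open_, high, low, close))
    return 0.5 * (h - l) ** 2 - (2.0 * np.log(2.0) - 1.0) * (c - o) ** 2
```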

Since the price path cannot be monitored when markets are closed, however, Wiggins (1991) finds that both the Parkinson estimator and the Garman–Klass estimator are biased downward compared to the traditional estimator, because the observed highs and lows are smaller than the actual highs and lows. Garman and Klass (1980) and Grammatikos and Saunders (1986), nevertheless, estimate the potential bias using simulation analysis and show that the bias decreases as the number of transactions increases. It is therefore relatively easy to adjust daily variance estimates to eliminate this source of bias.

Because the Parkinson (1980) and Garman and Klass (1980) estimators implicitly assume that the log price follows a geometric Brownian motion with no drift term, further refinements are given by Rogers and Satchell (1991) and Kunitomo (1992). Rogers and Satchell (1991) add a drift term to the stochastic process and incorporate it into a volatility estimator that uses only the daily opening, highest, lowest, and closing prices. Their estimator \(\hat{{\sigma}}_{\mathit{RS}}^{2}\) can be written as:

$$\hat{{\sigma}}_{\mathit{RS}}^{2} = \frac{1}{N}\sum _{n=t-N}^{t}\left\{\ln \left ({H}_{n}/{O}_{n}\right )\left [\ln \left ({H}_{n}/{O}_{n}\right ) -\ln \left ({C}_{n}/{O}_{n}\right )\right ] +\ln \left ({L}_{n}/{O}_{n}\right )\left [\ln \left ({L}_{n}/{O}_{n}\right ) -\ln \left ({C}_{n}/{O}_{n}\right )\right ]\right\}.$$
(83.3)

Rogers et al. (1994) report that the Rogers–Satchell estimator yields theoretical efficiency gains compared to the Garman–Klass estimator. They also report that the Rogers–Satchell estimator appears to perform well with changing drift and as few as 30 daily observations.
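A sketch of the daily Rogers–Satchell summand follows (names are illustrative); note that the bracketed differences in Equation (83.3) reduce to \(\ln ({H}_{n}/{C}_{n})\) and \(\ln ({L}_{n}/{C}_{n})\), which is the form used below:

```python
import numpy as np

def rogers_satchell_daily(open_, high, low, close):
    """Daily summand of the Rogers-Satchell (1991) estimator, Equation (83.3)."""
    o, h, l, c = (np.log(np.asarray(x, dtype=float)) for x in (open_, high, low, close))
    return (h - o) * (h - c) + (l - o) * (l - c)

# The N-day estimator is simply the average of the daily summands:
# sigma2_rs = rogers_satchell_daily(o, h, l, c).mean()
```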

Kunitomo (1992) uses the opening and closing prices to estimate a modified range corresponding to a Brownian bridge of the transformed log price; essentially, this also corrects the highest and lowest prices for the drift term:

$${\hat{{\sigma}}_{K}^{2} = {\beta}_{N}\sum _{n=t-N}^{t}{\left [\ln \left (\hat{{H}}_{n}/\hat{{L}}_{n}\right )\right ]}^{2}},$$
(83.4)

where \(\hat{{H}}_{n} {=\mathit{Max}}_{{t}_{i}}\left \{{P}_{{t}_{i}} -\left [{O}_{n} + ({C}_{n} - {O}_{n})({t}_{i} - n + 1)\right ]\,\vert \,{t}_{i} \in [n - 1,n]\right \}\) and \(\hat{{L}}_{n} {=\mathit{Min}}_{{t}_{i}}\left \{{P}_{{t}_{i}} -\left [{O}_{n} + ({C}_{n} - {O}_{n})({t}_{i} - n + 1)\right ]\,\vert \,{t}_{i} \in [n - 1,n]\right \}\) denote the end-of-day drift-corrected highest and lowest prices, and β N = 6 ∕ (Nπ2) is a correction parameter.

Finally, Yang and Zhang (2000) make further refinements by deriving a price range estimator that is unbiased, independent of any drift, and consistent in the presence of opening price jumps. Their estimator \(\hat{{\sigma}}_{\mathit{YZ}}^{2}\) thus can be written

$$\begin{array}{rcl} \hat{{\sigma}}_{\mathit{YZ}}^{2}& =& \frac{1}{N - 1}\sum _{n=t-N}^{t}{\left [\ln ({O}_{n}/{C}_{n-1}) -\overline{\ln ({O}_{n}/{C}_{n-1})}\right ]}^{2} \\ & & +\, \frac{k}{N - 1}\sum _{n=t-N}^{t}{\left [\ln ({C}_{n}/{O}_{n}) -\overline{\ln ({C}_{n}/{O}_{n})}\right ]}^{2} \\ & & +\,(1 - k)\,\hat{{\sigma}}_{\mathit{RS}}^{2}, \end{array}$$
(83.5)

where \(k = \frac{0.34} {1.34+(N+1)\left /\right.(N-1)}\). The symbol \(\bar{X}\) denotes the unconditional mean of X, and \(\hat{{\sigma}}_{\mathit{RS}}^{2}\) is the Rogers–Satchell estimator. The Yang–Zhang estimator is simply the sum of the estimated overnight variance, the estimated open-market (open-to-close) variance, and the Rogers and Satchell (1991) drift-independent estimator. The resulting estimator therefore explicitly incorporates a term for the closed-market variance.
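Assuming daily data in a pandas DataFrame with Open, High, Low, and Close columns (the column names and the 21-day window below are illustrative choices, not part of the original derivation), Equation (83.5) can be sketched as:

```python
import numpy as np
import pandas as pd

def yang_zhang_variance(df, N=21):
    """Rolling Yang-Zhang (2000) variance, Equation (83.5)."""
    o, h, l, c = (np.log(df[col]) for col in ("Open", "High", "Low", "Close"))
    overnight = o - c.shift(1)                  # close-to-open log return
    open_close = c - o                          # open-to-close log return
    rs = (h - o) * (h - c) + (l - o) * (l - c)  # Rogers-Satchell summand
    k = 0.34 / (1.34 + (N + 1) / (N - 1))
    return (overnight.rolling(N).var()          # overnight variance (ddof=1)
            + k * open_close.rolling(N).var()   # open-market variance
            + (1 - k) * rs.rolling(N).mean())   # drift-independent RS term
```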

Shu and Zhang (2006) investigate the relative performance of four range-based volatility estimators (the Parkinson, Garman–Klass, Rogers–Satchell, and Yang–Zhang estimators) for S&P 500 Index data and find that all of them perform very well when the asset price follows a continuous geometric Brownian motion. However, significant differences among the estimators emerge when the asset return distribution involves an opening jump or a large drift.

In terms of efficiency, all of the previous estimators deliver very substantial improvements. Define the efficiency of a volatility estimator \(\hat{{\sigma}}_{i}^{2}\) as the variance of the close-to-close estimator \(\hat{{\sigma}}^{2}\) relative to its own; that is:

$$\mathit{Eff}(\hat{{\sigma}}_{i}^{2}) = \frac{\mathit{Var}(\hat{{\sigma}}^{2})} {\mathit{Var}(\hat{{\sigma}}_{i}^{2})}.$$
(83.6)

Parkinson (1980) reports a theoretical relative efficiency gain ranging from 2.5 to 5, which means that the estimation variance is 2.5 to 5 times lower. Garman and Klass (1980) report that their estimator has an efficiency of 7.4, while the Yang and Zhang (2000) and Kunitomo (1992) variance estimators achieve theoretical efficiency gains of 7.3 and 10, respectively.
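These gains are easy to check by simulation. The sketch below (seed, grid size, and sample size are arbitrary choices) approximates a driftless geometric Brownian motion on a fine intraday grid and computes the efficiency ratio of Equation (83.6) for the Parkinson estimator; because of the discrete grid, the printed ratio falls slightly below the continuous-time value of about 5:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_day(sigma=0.01, steps=1000):
    """One day of driftless log price; returns intraday high, low, and close."""
    path = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, sigma / np.sqrt(steps), steps))))
    return path.max(), path.min(), path[-1]

h, l, c = np.array([one_day() for _ in range(50_000)]).T
close_close = c ** 2                            # squared-return estimator
parkinson = (h - l) ** 2 / (4.0 * np.log(2.0))  # Equation (83.1)
print("Parkinson efficiency:", close_close.var() / parkinson.var())
```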

3 The Range-Based Volatility Models

This section provides a brief overview of the models used to forecast range-based volatility. In what follows, the models are presented in increasing order of complexity. For an asset, the range of the log prices is defined as the difference between the daily highest and lowest prices in logarithms. It is denoted by:

$${R}_{t} =\ln ({H}_{t}) -\ln ({L}_{t}).$$
(83.7)

According to Christoffersen (2002), for the S&P 500 data the autocorrelations of the range-based volatility \({R}_{t}\) show more persistence than the squared-return autocorrelations. Thus, the range-based volatility estimator can be used instead of the squared return for evaluating the forecasts from volatility models, and with the time series of \({R}_{t}\) one can easily construct a volatility model within the traditional autoregressive framework.

Instead of using the raw range data, however, Alizadeh et al. (2002) focus on the log range, \(\ln ({R}_{t})\), since they find that in many applied situations the log range approximately follows a normal distribution. Therefore, all the models introduced in this section except Chou's CARR model are estimated and forecast using the log range.

The range-based volatility models are first introduced in some simple specifications, including the random walk, moving average (MA), exponentially weighted moving average (EWMA), and autoregressive (AR) models. Hanke and Wichern (2005) regard these as fairly basic techniques in the applied forecasting literature. We then present models of considerably higher complexity, such as the stochastic volatility (SV), CARR, and range-based multivariate volatility models.

3.1 The Random Walk Model

The log range \(\ln ({R}_{t})\) can be modeled as a random walk, meaning that the best forecast of next period's log range is this period's log range. As in most papers, the random walk model is used as the benchmark for comparison:

$$E[\ln ({R}_{t+1})\vert {I}_{t}] =\ln ({R}_{t}),$$
(83.8)

where \({I}_{t}\) is the information set at time t; the forecast \(E[\ln ({R}_{t+1})\vert {I}_{t}]\) is formed conditional on this information.

3.2 The MA Model

MA methods are widely used in time series forecasting. In most cases, a moving average of length N, where N = 20, 60, or 120 days, is used to generate log range forecasts. These lengths are fairly standard choices because they correspond to 1 month, 3 months, and 6 months of trading days, respectively. The expression for the N-day moving average is shown below:

$$E[\ln ({R}_{t+1})\vert {I}_{t}] = \frac{1} {N}\sum _{j=0}^{N-1}\ln ({R}_{t-j}).$$
(83.9)

3.3 The EWMA Model

EWMA models are also very widely used in applied forecasting. In an EWMA model, the current forecast of the log range is calculated as a weighted average of the previous period's log range and the previous period's forecast. This specification is appropriate provided the underlying log range series has no trend.

$$E[\ln ({R}_{t+1})\vert {I}_{t}] = \lambda E[\ln ({R}_{t})\vert {I}_{t-1}] + (1 - \lambda )\ln ({R}_{t}).$$
(83.10)

The smoothing parameter, λ, lies between zero and one. If λ is zero, the EWMA model reduces to a random walk; if λ is one, it places all of the weight on the past forecast. In the estimation process, the optimal λ is chosen by the mean squared error criterion: the optimal λ is the one that yields the lowest MSE.
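A sketch of this selection procedure (the λ grid and the initialization of the first forecast are assumptions; the input is a numpy array of log ranges):

```python
import numpy as np

def ewma_forecasts(log_range, lam):
    """One-step-ahead EWMA forecasts of the log range, Equation (83.10)."""
    f = np.empty_like(log_range)
    f[0] = log_range[0]                 # initialization is an assumption
    for t in range(1, len(log_range)):
        f[t] = lam * f[t - 1] + (1 - lam) * log_range[t - 1]
    return f

def best_lambda(log_range, grid=np.linspace(0.0, 0.99, 100)):
    """Pick the lambda with the lowest in-sample MSE, as described above."""
    mse = [np.mean((log_range[1:] - ewma_forecasts(log_range, g)[1:]) ** 2)
           for g in grid]
    return grid[int(np.argmin(mse))]
```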

3.4 The AR Model

This model uses an autoregressive process for the log range, with n lagged values of the log range serving as drivers of a one-period-ahead forecast.

$$E[\ln ({R}_{t+1})] = {\beta}_{0} +\sum _{i=1}^{n}{\beta}_{i}\ln ({R}_{t+1-i}).$$
(83.11)
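The coefficients of Equation (83.11) can be estimated by ordinary least squares; a minimal sketch, with an arbitrarily chosen lag order:

```python
import numpy as np

def fit_ar_log_range(log_range, n=5):
    """OLS estimates [beta_0, beta_1, ..., beta_n] for Equation (83.11)."""
    y = log_range[n:]
    X = np.column_stack([np.ones(len(y))]
                        + [log_range[n - i:-i] for i in range(1, n + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def forecast_ar(log_range, beta):
    """One-period-ahead forecast from the last n observations."""
    n = len(beta) - 1
    lags = log_range[-1:-n - 1:-1]      # most recent observation first
    return beta[0] + beta[1:] @ lags
```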

3.5 The Discrete-Time Range-Based SV Model

Alizadeh et al. (2002) present a formal derivation of the discrete-time SV model from its continuous-time counterpart. The conditional distribution of the log range is approximately Gaussian:

$$\ln \,{R}_{t+1}\vert \ln \,{R}_{t} \sim N[\ln \bar{R} + \rho (\ln \,{R}_{t} -\ln \,\bar{R}),\,{\beta}^{2}\Delta t],$$
(83.12)

where \(\Delta t = T/N\), T is the sample period, and N is the number of intervals. The parameter β measures the volatility of the latent volatility. Following Harvey et al. (1994), a linear state space system consisting of a state equation and a signal equation can be written:

$$\begin{array}{rcl} \ln {R}_{(i+1)\Delta t}& =& \ln \bar{R} + {\rho}_{\Delta t}\left (\ln {R}_{i\Delta t} -\ln \overline{R}\right ) \\ & & +\beta \sqrt{\Delta t}{\upsilon}_{(i+1)\Delta t}.\end{array}$$
(83.13)
$$\begin{array}{rcl} \ln \left \vert f({s}_{i\Delta t,(i+1)\Delta t})\right \vert & =& \gamma \ln {R}_{i\Delta t} + E\left [\ln \left \vert f({s}_{i\Delta t,(i+1)\Delta t}^{{_\ast}})\right \vert \,\right ] \\ & & +{\epsilon}_{(i+1)\Delta t}. \end{array}$$
(83.14)

Equation (83.13) is the state equation and Equation (83.14) is the signal equation; in Equation (83.14), E is the mathematical expectation operator. The state equation errors are i.i.d. N(0, 1), and the signal equation errors have zero mean.

A two-factor model can be represented by the following state equations.

$$\begin{array}{rcl} \ln {R}_{(i+1)\Delta t}& =& \ln \bar{R} +\ln {R}_{1,(i+1)\Delta t} +\ln {R}_{2,(i+1)\Delta t}, \\ \ln {R}_{1,(i+1)\Delta t}& =& {\rho}_{1,\Delta t}\ln {R}_{1,i\Delta t} + {\beta}_{1}\sqrt{\Delta t}\,{\upsilon}_{1,(i+1)\Delta t}, \\ \ln {R}_{2,(i+1)\Delta t}& =& {\rho}_{2,\Delta t}\ln {R}_{2,i\Delta t} + {\beta}_{2}\sqrt{\Delta t}\,{\upsilon}_{2,(i+1)\Delta t}.\end{array}$$
(83.15)

The error terms \({\upsilon}_{1}\) and \({\upsilon}_{2}\) are contemporaneously and serially independent N(0, 1) random variables. Alizadeh et al. (2002) estimate and compare one-factor and two-factor latent volatility models for currency futures prices and find that the two-factor model shows more desirable regression diagnostics.

3.6 The Range-Based EGARCH Model

Brandt and Jones (2006) incorporate the range information into the EGARCH model, yielding what they call the range-based EGARCH model; it significantly improves both in-sample and out-of-sample volatility forecasts. The daily log range and log return are modeled as follows:

$$\ln ({R}_{t})\vert {I}_{t-1} \sim N(0.43 +\ln {h}_{t},\,{0.29}^{2}),\quad {r}_{t}\vert {I}_{t-1} \sim N(0,{h}_{t}^{2}),$$
(83.16)

where \({h}_{t}\) is the conditional volatility of the daily log return \({r}_{t}\). The range-based EGARCH dynamics of daily volatility can then be expressed as:

$$\ln {h}_{t} -\ln {h}_{t-1} = \kappa (\theta -\ln {h}_{t-1}) + \phi {X}_{t-1}^{R} + \Delta {r}_{t-1}/{h}_{t-1},$$
(83.17)

where θ denotes the long-run mean of the volatility process and κ the speed of mean reversion, while the coefficient Δ governs the asymmetric effect of lagged returns. The innovation,

$${X}_{t-1}^{R} = \frac{\ln ({R}_{t-1}) - 0.43 -\ln {h}_{t-1}} {0.29},$$
(83.18)

is defined as the standardized deviation of the log range from its expected value, so ϕ measures the sensitivity to lagged log ranges. In short, the range-based EGARCH model replaces the innovation term of the EGARCH model with the standardized log range.
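A one-step update of Equations (83.17) and (83.18) can be sketched as follows (the function and argument names are mine; the parameter values must come from estimation):

```python
import numpy as np

def rb_egarch_update(ln_h_prev, range_prev, ret_prev, kappa, theta, phi, delta):
    """Advance ln h_t by one step using the lagged range and return."""
    x = (np.log(range_prev) - 0.43 - ln_h_prev) / 0.29         # Equation (83.18)
    return (ln_h_prev + kappa * (theta - ln_h_prev)
            + phi * x + delta * ret_prev / np.exp(ln_h_prev))  # Equation (83.17)
```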

3.7 The CARR Model

This section provides a brief overview of the CARR model. The CARR model is also a special case of the multiplicative error model (MEM) of Engle (2002b). Instead of modeling the log range, Chou (2005) models the process of the price range directly. With the time series of price ranges \({R}_{t}\), the CARR model of order (p, q), or CARR(p, q), is given by

$$\begin{array}{rcl} & & {R}_{t} = {\lambda}_{t}{\epsilon}_{t},\,{\epsilon}_{t} \sim f(.), \\ & & {\lambda}_{t} = \omega +\sum _{i=1}^{p}{\alpha}_{i}{R}_{t-i} +\sum _{j=1}^{q}{\beta}_{j}{\lambda}_{t-j},\end{array}$$
(83.19)

where \({\lambda}_{t}\) is the conditional mean of the range based on all information up to time t, and the disturbance term \({\epsilon}_{t}\), the normalized range, is assumed to have a density function f(. ) with unit mean. Since both the price range \({R}_{t}\) and its expected value \({\lambda}_{t}\) are positive, \({\epsilon}_{t}\) is positively valued, and a natural choice for its distribution is the exponential distribution.
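For concreteness, a CARR(1,1) with exponential disturbances can be estimated by maximum likelihood; the sketch below (initialization of λ and the starting values are assumptions) minimizes the negative log-likelihood implied by the unit-mean exponential density:

```python
import numpy as np
from scipy.optimize import minimize

def carr_lambda(params, R):
    """Conditional mean range for CARR(1,1): lambda_t = w + a R_{t-1} + b lambda_{t-1}."""
    w, a, b = params
    lam = np.empty_like(R)
    lam[0] = R.mean()                   # initialization is an assumption
    for t in range(1, len(R)):
        lam[t] = w + a * R[t - 1] + b * lam[t - 1]
    return lam

def neg_loglik(params, R):
    """Negative log-likelihood for a unit-mean exponential disturbance."""
    lam = carr_lambda(params, R)
    if np.any(lam <= 0):
        return np.inf
    return np.sum(np.log(lam) + R / lam)

# Illustrative call, with R a numpy array of daily price ranges:
# fit = minimize(neg_loglik, x0=[0.01, 0.2, 0.7], args=(R,), method="Nelder-Mead")
```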

The equation for the conditional expectation of the range can easily be extended to incorporate other explanatory variables, such as trading volume, time to maturity, and lagged returns.

$${\lambda}_{t} = \omega +\sum _{i=1}^{p}{\alpha}_{i}{R}_{t-i} +\sum _{j=1}^{q}{\beta}_{j}{\lambda}_{t-j} +\sum _{k=1}^{L}{l}_{k}{X}_{k}.$$
(83.20)

This model is called the CARR model with exogenous variables, or the CARRX model. The CARR model is essentially symmetric. To capture the leverage effect in financial time series, Chou (2006) divides the whole price range into two one-sided price ranges, the upward range and the downward range. He defines the upward range \({\mathit{UPR}}_{t}\) and the downward range \({\mathit{DNR}}_{t}\) at time t as the difference between the daily high and the opening price and the difference between the opening price and the daily low, respectively. In other words,

$$\begin{array}{rcl}{\mathit{UPR}}_{t}& =& \ln ({H}_{t}) -\ln ({O}_{t}),\end{array}$$
(83.21)
$$\begin{array}{rcl}{\mathit{DNR}}_{t}& =& \ln ({O}_{t}) -\ln ({L}_{t}).\end{array}$$
(83.22)

Similarly, with the time series of a one-sided price range, \({\mathit{UPR}}_{t}\) or \({\mathit{DNR}}_{t}\), Chou (2006) extends the CARR model to the asymmetric CARR (ACARR) model. In volatility forecasting, the asymmetric model also outperforms the symmetric one.

3.8 The Range-Based DCC Model

Multivariate volatility models have been extensively researched in recent studies, as they support financial applications in various areas such as asset allocation, hedging, and risk management; Bauwens et al. (2006) offer a review. As an extension of the univariate range models, Fernandes et al. (2005) propose a multivariate CARR model using the identity Cov(X, Y ) = [V (X + Y ) − V (X) − V (Y )] ∕ 2. Analogous to Fernandes et al.'s (2005) work, Brandt and Diebold (2006) use no-arbitrage conditions to build the covariances from variances. However, this kind of method is essentially limited to the bivariate case.

Chou et al. (2007) combine the CARR model with the DCC model of Engle (2002a) to propose a range-based DCC model, which uses range-implied volatilities to replace the GARCH volatilities in the first step of DCC. They conclude that the range-based DCC model performs better than return-based models (MA100, EWMA, CCC, return-based DCC, and diagonal BEKK) under the statistical measures RMSE and MAE, computed against four benchmarks of implied and realized covariance.

The DCC model is a two-step forecasting model that first estimates a univariate GARCH model for each asset and then calculates the time-varying correlations using the standardized residuals from the first step. Related discussions of the DCC model can be found in Engle and Sheppard (2001), Engle (2002a), and Cappiello et al. (2006). It can be viewed as a generalization of the constant conditional correlation (CCC) model proposed by Bollerslev (1990), in which the conditional covariance matrix \({\mathbf{H}}_{t}\) of a k × 1 return vector \({\mathbf{r}}_{t}\), with \({\mathbf{r}}_{t}\vert {\Omega}_{t-1} \sim N(\mathbf{0},{\mathbf{H}}_{t})\), is expressed as

$${\mathbf{H}}_{t} ={\mathbf{D}}_{t}{\mathbf{RD}}_{t}$$
(83.23)

where \({\mathbf{D}}_{t}\) is a k × k diagonal matrix whose i th diagonal element is the time-varying GARCH standard deviation \(\sqrt{{h}_{i,t}}\) of the i th return series, and R is the sample correlation matrix of \({\mathbf{r}}_{t}\).

The DCC is formulated as the following specification:

$$\begin{array}{rcl}{\mathbf{H}}_{t}& =&{\mathbf{D}}_{t}{\mathbf{R}}_{t}{\mathbf{D}}_{t}, \\ {\mathbf{R}}_{t}& =& \mathit{diag}\{{\mathbf{Q}{}_{t}\}}^{-1/2}{\mathbf{Q}}_{t}\,\mathit{diag}\{{\mathbf{Q}{}_{t}\}}^{-1/2}, \\ {\mathbf{Q}}_{t}& =& \mathbf{S} \circ ({\iota \iota}^{{\prime}}-\mathbf{A} -\mathbf{B}) + \mathbf{A} \circ {\mathbf{Z}}_{t-1}{\mathbf{Z}}_{t-1}^{{\prime}} \\ & & +\,\mathbf{B} \circ {\mathbf{Q}}_{t-1},\quad {\mathbf{Z}}_{t} ={\mathbf{D}}_{t}^{-1}{\mathbf{r}}_{t}, \end{array}$$
(83.24)

where ι is a vector of ones and ∘ denotes the Hadamard product of two identically sized matrices, computed element by element. \({\mathbf{Q}}_{t}\) and S are the conditional and unconditional covariance matrices, respectively, of the standardized residual vector \({\mathbf{Z}}_{t}\) from the GARCH step, and A and B are parameter matrices, which in most cases are set to scalars. In a word, DCC differs from CCC only by allowing R to be time varying.
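Taking the standardized residuals as given, the scalar version of Equation (83.24) can be sketched as follows (the parameter values and the initialization of Q are assumptions):

```python
import numpy as np

def dcc_correlations(Z, a=0.05, b=0.90):
    """Scalar DCC recursion on a T x k matrix Z of standardized residuals."""
    T, k = Z.shape
    S = np.corrcoef(Z, rowvar=False)    # unconditional correlation of Z
    Q = S.copy()                        # initialization is an assumption
    R = np.empty((T, k, k))
    for t in range(T):
        if t > 0:
            z = Z[t - 1][:, None]
            Q = S * (1 - a - b) + a * (z @ z.T) + b * Q
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)       # diag{Q}^{-1/2} Q diag{Q}^{-1/2}
    return R
```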

4 The Realized Range Volatility

Volatility measurement with high-frequency data has been widely investigated. In particular, realized volatility, calculated as the sum of squared intraday returns, provides a more efficient estimate of volatility; reviews appear in Andersen et al. (2001, 2003), Barndorff-Nielsen and Shephard (2003), Andersen et al. (2006a, b), and McAleer and Medeiros (2008). Martens and van Dijk (2007) and Christensen and Podolskij (2007) replace the squared intraday return with the high-low range to obtain a new estimator, called the realized range.

Initially, we assume that the asset price P t follows the geometric Brownian motion:

$${\mathit{dP}}_{t} = \mu {P}_{t}\mathit{dt} + \sigma {P}_{t}{\mathit{dz}}_{t},$$
(83.25)

where μ is the drift term, σ is the constant volatility, and \({z}_{t}\) is a Brownian motion. A trading day is divided into τ intervals of equal length. The daily realized volatility \({\mathit{RV}}_{t}\) at time t can then be expressed as:

$${\mathit{RV}}_{t} =\sum _{i=1}^{\tau}{(\ln {P}_{t,i} -\ln {P}_{t,i-1})}^{2},$$
(83.26)

where \({P}_{t,i}\) is the price at time i × Δ on trading day t, and Δ is the length of each interval, so that τ × Δ is the total trading time in a day. The realized range \({\mathit{RR}}_{t}\) is then:

$${\mathit{RR}}_{t} = \frac{1} {4\ln 2}\sum _{i=1}^{\tau}{(\ln {H}_{t,i} -\ln {L}_{t,i})}^{2},$$
(83.27)

where \({H}_{t,i}\) and \({L}_{t,i}\) are the highest and lowest prices in the i th interval of the t th trading day, respectively.
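Given interval-level data for one trading day, Equations (83.26) and (83.27) can be computed as below (the input layout, plain arrays of interval prices, highs, and lows, is an assumption):

```python
import numpy as np

def realized_volatility(interval_prices):
    """Realized variance, Equation (83.26): sum of squared intraday log returns."""
    logp = np.log(np.asarray(interval_prices, dtype=float))
    return np.sum(np.diff(logp) ** 2)

def realized_range(interval_highs, interval_lows):
    """Realized range, Equation (83.27): scaled sum of squared interval log ranges."""
    hl = np.log(np.asarray(interval_highs, dtype=float)
                / np.asarray(interval_lows, dtype=float))
    return np.sum(hl ** 2) / (4.0 * np.log(2.0))
```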

As mentioned before, several studies such as Garman and Klass (1980) suggest improving efficiency by using the opening and closing prices. Furthermore, assuming that the log price follows the continuous sample-path martingale in Equation (83.28), Christensen and Podolskij (2007) show that the realized range remains a consistent estimator of integrated volatility in the presence of stochastic volatility.

$$\ln {P}_{t} =\ln {P}_{0} +{\int \nolimits \nolimits}_{0}^{t}{\mu}_{s}\mathit{ds} +{\int \nolimits \nolimits}_{0}^{t}{\sigma}_{s-}{\mathit{dz}}_{t},\,\mathrm{for}\,0 \leq t < \infty.$$
(83.28)

An obvious and important concern is that the realized range may be seriously affected by microstructure noise. Martens and van Dijk (2007) therefore consider a bias-adjustment procedure that scales the realized range by the ratio of the average level of the daily range to the average level of the realized range. They find that the scaled realized range is more efficient than the (scaled) realized volatility.
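The spirit of that adjustment can be sketched as follows (the trailing window length and the use of the scaled squared daily range as the benchmark level are my assumptions; see Martens and van Dijk (2007) for the exact construction):

```python
import numpy as np

def scaled_realized_range(rr, daily_range_sq, q=66):
    """Scale each realized range by the trailing ratio of the average daily
    squared range to the average realized range over the past q days."""
    rr = np.asarray(rr, dtype=float)
    dr = np.asarray(daily_range_sq, dtype=float)
    out = np.full_like(rr, np.nan)
    for t in range(q, len(rr)):
        out[t] = rr[t] * dr[t - q:t].sum() / rr[t - q:t].sum()
    return out
```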

5 The Financial Applications and Limitations of the Range Volatility

The range discussed in this paper is a measure of volatility. From a theoretical point of view, it provides a more efficient estimator of volatility than the return, which is intuitively reasonable given the extra information contained in the range data. In addition, return-based volatility neglects the intraday price fluctuation; in particular, the closing prices of two adjacent trading days may be close to each other even when prices moved substantially in between. We can therefore conclude that high-low volatility contains additional information relative to close-to-close volatility. Moreover, range data are readily available at low cost, so most research related to volatility can be applied to the range. Poon and Granger (2003) provide extensive discussions of the applications of volatility in financial markets.

The range estimator undoubtedly has some inherent shortcomings. Financial asset prices are highly volatile and easily influenced by instantaneous information, and in statistics the range is very sensitive to outliers. Chou (2005) provides an answer in the form of a quantile range: for example, a new range estimator can be calculated as the difference between the averages of the top 5% and the bottom 5% of observations.

In theory, many of the range estimators in the previous sections depend on the assumption of a continuous-time geometric Brownian motion. The estimators of Parkinson (1980) and Garman and Klass (1980) require a geometric Brownian motion with zero drift; Rogers and Satchell (1991) allow a nonzero drift, and Yang and Zhang (2000) further allow overnight price jumps. Moreover, only finitely many observations are available to construct the range, so the range will exhibit some unexpected bias, especially for assets with low liquidity and limited transaction volume. Garman and Klass (1980) point out that discrete trading effectively produces later openings and earlier closings, and that the difference between the observed highs and lows will be smaller than that between the actual highs and lows, so the calculated high-low estimator is biased downward. In addition, Beckers (1983) notes that the highest and lowest prices may be set by disadvantaged buyers and sellers, making range values less representative for measuring volatility.

Before the range was embedded in dynamic structures, however, its application was very limited. Within the SV framework, Gallant et al. (1999) and Alizadeh et al. (2002) incorporate the range into equilibrium asset pricing models. Chou (2005) and Brandt and Jones (2006), on the other hand, fill the gap between discrete-time dynamic models and the range, greatly extending the applications of range volatility. Among earlier studies, Bollerslev et al. (1992) give good illustrations of conditional volatility applications. Based on the conditional mean-variance framework, Chou and Liu (2008a) show that the economic value of volatility timing for the range is significant in comparison with that for the return, which means that range volatility can be applied to practical cases. In addition, Corrado and Truong (2007) report that the range estimator has forecasting ability similar to that of implied volatility. However, implied volatilities are not available for many assets, and option markets are insufficiently developed in many countries; in such cases, the range is more practical. More recently, Kalev and Duong (2008) utilize Martens and van Dijk's (2007) realized range to test the Samuelson Hypothesis for futures contracts.

6 Conclusion

Volatility plays a central role in many areas of finance. From both theoretical and practical standpoints, the price range provides an intuitive and efficient estimator of volatility. In this paper, we began our discussion by reviewing the range estimators. There has been a dramatic increase in the number of publications on this topic since Parkinson (1980) introduced the high/low range, and subsequent estimators incorporate the opening and closing prices, assigning feasible weights to the differences among the highest, lowest, opening, and closing prices. Through this analysis, we can gain a better understanding of the nature of the range.

Some dynamic volatility models built on the range are also introduced in this study, and they have led to broad applications in finance. In particular, the CARR model combines the superiority of the range in forecasting volatility with the flexibility of the GARCH specification. Moreover, the range-based DCC model, which combines CARR with DCC, contributes to multivariate applications; this research may provide an alternative for risk management and asset allocation. Finally, replacing the squared intraday return in realized volatility with the high-low range yields a more efficient estimator, the realized range.

Undoubtedly, the range is statistically sensitive to outliers, yet few researchers mention this problem; it would be useful and meaningful to replace the standard range with the quantile range to obtain a robust measure. Moreover, multivariate work on the range is still in its infancy. Future research on this topic is clearly needed.