
1 Introduction

Structural credit risk models, first formalized by [17], treat a firm’s equity and debt as contingent claims partitioning the asset value of the firm. Empirical tests of structural credit risk models show poor predictions of default probabilities and credit spreads, especially at short maturities. Estimation and calibration studies provide evidence that predicted credit spreads are far below observed ones [14], that the structural variables explain little of the credit spread variation [12], and that pricing errors are large for corporate bonds [18].

A critical issue in the implementation of these models is that the firm’s asset value and volatility are not directly observable. The idea is then to use the information content of equity prices and back out the firm’s asset volatility. Nevertheless, the market microstructure literature strongly suggests that trading noise can affect equity prices, so that estimating equity volatility and related quantities may become a difficult task. For instance, as is well documented, observed equity prices can diverge from their equilibrium value due to illiquidity, asymmetric information, price discreteness and other measurement errors. The effects of trading noise on how frequently one should sample equity prices are analyzed in [2, 4]. In the specific context of structural credit risk models, the relationship between the unobservable asset volatility and the observed equity value predicted by the pricing model is masked by trading noise; ignoring microstructure effects could non-trivially inflate estimates of the “true” asset volatility, which would in turn produce misleading estimates of default probabilities and credit spreads. This issue has been analyzed in [9], where the authors extend [8] to explicitly account for trading noise contamination of equity prices: they devise a particle filter-based maximum likelihood method, based solely on the time series of observed equity values, which is robust to market microstructure effects. The importance of using high-frequency data to back out the parameters of the firm’s value dynamics has been highlighted by [20], who propose a novel approach to identify the volatility and jump risk components of individual firms from high-frequency equity prices. Their analysis suggests that high-frequency-based volatility measures can help to better explain credit spreads, above and beyond what is already captured by the true leverage ratio. However, the highest frequency considered in that paper is a conservative 5-minute sampling frequency, chosen to eliminate microstructure effects.

In this paper we propose an econometric approach to structural model calibration based on non-parametric estimation of equity volatility from high-frequency intra-day equity prices. Several non-parametric estimators of daily stock volatility have been proposed in the econometric literature which exploit the information contained in intra-day high-frequency data while filtering out microstructure effects [1, 2, 4, 11, 13, 15, 19]. We propose a Monte Carlo simulation study based on the Merton [17] structural model, comparing the performance of different non-parametric volatility estimators in their ability to i) filter out the market microstructure noise, ii) extract the true (unobservable) underlying asset volatility level, and iii) predict default probabilities. We show that the choice of the volatility estimator can largely affect calibrated default probabilities and hence risk evaluation. In particular, the commonly used Realized Volatility estimator is unable to provide reliable estimates of the volatility risk, leading to a significant underestimation of default probabilities.

2 Structural Model Under Market Microstructure Effects

We follow the Merton [17] structural model, assuming that the firm’s asset value follows a geometric Brownian motion evolving according to

$$d{A_t} = (\mu - \delta ){A_t}dt + \sigma {A_t}d{W_t}$$
((1))

where $A_t$ is the asset value at time $t$, $\mu$ the instantaneous asset return, $\delta$ the asset payout ratio and $\sigma$ the asset volatility. The firm has two classes of outstanding claims: equity and a zero-coupon debt with promised payment $B$ at maturity $T$. To price corporate debt as in [17] we assume that: (i) default occurs only at maturity, with the debt face value as default boundary; (ii) when default occurs, the absolute priority rule prevails. The payoffs to debt holders and equity holders at time $T$ are, respectively,

$${D_T} = \min ({A_T},B),\quad {S_T} = \max ({A_T} - B,0).$$


From now on, we focus our attention on equity value and default probabilities in order to develop our computational econometric analysis. The equity claim can be priced at each time $t < T$ through the standard Black–Scholes option pricing model as a European call option given by

$${S_t} = S({A_t};\sigma ,B,r,\delta ,T) = {A_t}\phi ({d_t}) - B{e^{ - r(T - t)}}\phi ({d_t} - \sigma \sqrt {T - t} ),$$
((2))

where

$${d_t}: = \frac{{\log \left( {\frac{{{A_t}}}{B}} \right) + \left( {r - \delta + {\sigma ^2}/2} \right)\left( {T - t} \right)}}{{\sigma \sqrt {T - t} }},$$
((3))

and $\phi(\cdot)$ is the standard normal distribution function. Therefore, by applying Itô’s lemma, the instantaneous volatility $\Sigma_t^s$ of the log equity price can be written as

$$\Sigma_t^s := \frac{A_t}{S_t}\frac{\partial S_t}{\partial A_t}\,\sigma.$$
((4))

Notice that the equity volatility is driven by the time-varying factor $A_t$, whereas the asset volatility $\sigma$ is constant. The firm’s probability of default at maturity $T$ is the probability of $A_T$ being below the constant barrier represented by the face value of debt $B$. Under the physical probability measure $\mathbb{P}$ we have

$$\mathbb{P}({A_T} < B|{A_t}) = \phi (\sigma \sqrt {T - t} - d_t^\mathbb{P}),$$
((5))

with $d_t^{\mathbb{P}}$ given by Eq. (3), where the interest rate $r$ is replaced by $\mu$.
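For illustration, Eqs. (2)–(5) can be implemented in a few lines of Python. This is a sketch with our own function and argument names; `scipy.stats.norm.cdf` plays the role of $\phi(\cdot)$:

```python
import numpy as np
from scipy.stats import norm


def d_t(A, B, rate, delta, sigma, tau):
    """d_t of Eq. (3); pass rate = r for pricing, rate = mu for Eq. (5)."""
    return (np.log(A / B) + (rate - delta + 0.5 * sigma**2) * tau) \
        / (sigma * np.sqrt(tau))


def equity_value(A, B, r, delta, sigma, tau):
    """Merton equity value of Eq. (2): a European call on the firm's assets."""
    d = d_t(A, B, r, delta, sigma, tau)
    return A * norm.cdf(d) - B * np.exp(-r * tau) * norm.cdf(d - sigma * np.sqrt(tau))


def default_prob(A, B, mu, delta, sigma, tau):
    """Physical default probability of Eq. (5): P(A_T < B | A_t)."""
    d = d_t(A, B, mu, delta, sigma, tau)
    return norm.cdf(sigma * np.sqrt(tau) - d)
```

For the class-A parameters used later ($\sigma = 0.2128$, $\mu = 0.0643$, $B = 43.13$, $r = 0.05$, $\delta = 0.02$) and an illustrative asset value $A_t = 100$, the 5-year default probability is roughly 2%.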

For a given firm, one can obtain a time series of equity prices $\{S_{t_j},\, j = 0,\dots,N\}$ at a given sampling frequency $h = t_j - t_{j-1}$, assumed constant for ease of exposition. If one could observe the “true” equity price, then equity volatility could be easily estimated at any desired accuracy level by the well-known Realized Volatility estimator [7] using high-frequency data. However, as noted for instance by [9], the relationship between the unobserved asset value and the observed equity value predicted by the pricing formula (2) may be masked by trading noise. We assume an additive error structure for the trading noise on the logarithmic equity value as follows

$$\log \tilde S({t_j}) = \log S({t_j}) + \eta ({t_j}),$$
((6))

where the random shocks $\eta(t_j)$, $0 \le j \le N$, are i.i.d. random variables with mean zero and bounded fourth moment, independent of the efficient log-return process. The independence assumption can be relaxed by considering the particular form of dependent noise proposed in [11], with market microstructure noise that is time-dependent in tick time and correlated with efficient returns

$${\tilde \eta _j}: = \alpha \left[ {\log S({t_j}) - \log S({t_{j - 1}})} \right] + {\eta _j},$$
((7))

where $\alpha$ is a real constant and $\tilde\eta_j$ and $\eta_j$ are shorthand for $\tilde\eta(t_j)$ and $\eta(t_j)$. The case $\alpha = 0$ recovers the independent noise assumption.
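A minimal sketch of the noise contamination of Eqs. (6)–(7), with our own helper name and the noise standard deviation left as an input:

```python
import numpy as np

rng = np.random.default_rng(0)


def add_trading_noise(log_S, noise_std, alpha=0.0):
    """Contaminate efficient log-prices with trading noise.

    alpha = 0 gives the i.i.d. noise of Eq. (6); alpha != 0 adds the
    return-correlated term of Eq. (7)."""
    eta = rng.normal(0.0, noise_std, size=log_S.shape)
    noise = eta.copy()
    # Eq. (7): alpha times the efficient log-return (no return before j = 1)
    noise[1:] += alpha * np.diff(log_S)
    return log_S + noise
```

In the simulations of Sect. 4 the noise standard deviation is set to twice the log-equity return standard deviation, with $\alpha = 0.5$ in the dependent case.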

3 Equity Volatility Estimation

The basic idea of our paper is that, using suitable volatility estimators, we can infer the true volatility process $\Sigma_t^s$ of equity returns from noisy high-frequency data. The equity volatility estimate can then be used to back out the asset volatility $\sigma$ so as to exactly fit, say, the 5-year probability of default by solving Equation (5) with respect to $\sigma$, as explained in detail in the following.

For the reader’s convenience, we now give some details about the implementation of the volatility measures employed in our analysis. We set $\tilde p_t := \log \tilde S_t$, the noisy equity log-price. Time is measured in daily units. We build daily measures of volatility by considering daily windows of $n$ intra-day equity data $\tilde p_{t,j}$, $j = 0,1,\dots,n$. Besides the well-known Realized Volatility estimator $\Sigma_t^{RV} := \sum_{j=1}^{n} \delta_j(\tilde p)^2$, where $\delta_j(\tilde p) := \tilde p_{t,j} - \tilde p_{t,j-1}$ is the $j$-th within-day equity log-return on day $t$, we consider the following estimators of the volatility process $\Sigma_t^s$: the bias-corrected estimator by Hansen and Lunde [11]

$$\Sigma_t^{HL} := \Sigma_t^{RV} + 2\,\frac{n}{n-1}\sum\limits_{j = 1}^{n - 1} \delta_j(\tilde p)\,\delta_{j+1}(\tilde p)$$
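As a minimal illustration (our own Python sketch, not the authors' code), the Realized Volatility and the Hansen–Lunde correction can be computed from a vector of within-day noisy log-prices as:

```python
import numpy as np


def realized_volatility(p):
    """Realized Volatility: sum of squared within-day log-returns."""
    return np.sum(np.diff(p) ** 2)


def hansen_lunde(p):
    """Hansen-Lunde estimator: RV plus twice the (scaled) first-order
    return autocovariance, offsetting the noise-induced bias."""
    d = np.diff(p)
    n = d.size
    return np.sum(d ** 2) + 2.0 * n / (n - 1) * np.sum(d[:-1] * d[1:])
```

On noise-dominated data the first-order return autocovariance is negative, so the correction pulls the estimate down toward the true volatility level.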

the flat-top realized kernels by [4, 6]

$$\Sigma_t^{K} := \sum\limits_{h = -H}^{H} k\left( \frac{h}{H + 1} \right)\sum\limits_{j = \left| h \right| + 1}^{n} \delta_j(\tilde p)\,\delta_{j - \left| h \right|}(\tilde p)$$

with kernel of Tukey–Hanning$_2$ (TH$_2$) type, $k(x) = \sin^2\left(\frac{\pi}{2}(1-x)^2\right)$. The realized kernels may be considered as unbiased corrections of the Realized Volatility by means of the first $H$ autocovariances of the returns. In particular, when $H$ is set to zero the realized kernel reduces to the Realized Volatility. Our analysis also includes the two-scale estimator by [19]

$$\Sigma_t^{TS} := \frac{S}{S - 1}\left( \frac{1}{S}\sum\limits_{s = 1}^{S} \Sigma_t^{G^{(s)}} - \frac{1}{S}\Sigma_t^{RV} \right).$$


The two-scale (subsampling) estimator is a bias-adjusted average of lower-frequency realized volatilities computed on $S$ non-overlapping observation subgrids $G^{(s)}$, each containing $n/S$ observations. Recently, [13] proposed a pre-averaging technique as an alternative to subsampling for reducing microstructure effects. The idea is that by averaging a number of observed log-prices, one gets closer to the latent process $p(t)$. This approach, when properly implemented, gives rise to rate-optimal estimators of power variations. In particular, a consistent estimator of the integrated volatility can be constructed as

$$\Sigma_t^{PA} = \frac{\sqrt{\Delta}}{\theta \psi_2}\sum\limits_{s = 0}^{n - k_n + 1} \bar\delta_s(\tilde p)^2 - \frac{\psi_1 \Delta}{2\theta^2 \psi_2}\sum\limits_{s = 1}^{n} \delta_s(\tilde p)^2,$$

where the pre-averaged return process is given by

$$\bar\delta_s(\tilde p) := \sum\limits_{r = 1}^{k_n} g\left( \frac{r}{k_n} \right)\delta_{s + r}(\tilde p) = \frac{1}{k_n}\left( \sum\limits_{j = k_n/2}^{k_n - 1} \tilde p_{t,s+j} - \sum\limits_{j = 0}^{k_n/2 - 1} \tilde p_{t,s+j} \right),$$

$\theta = k_n\sqrt{\Delta}$, $\psi_1 = 1$ and $\psi_2 = 1/12$, corresponding to the “hat” weight function $g(x) = x \wedge (1 - x)$. The Fourier estimator [15] is given by

$$\Sigma_t^{F} = \frac{(2\pi)^2}{2N + 1}\sum\limits_{s = -N}^{N} c_s(d\tilde p_n)\,c_{-s}(d\tilde p_n),$$

where $c_k(d\tilde p_n) = \frac{1}{2\pi}\sum_{i=1}^{n} e^{-\mathrm{i}kt_{i-1}}\,\delta_i(\tilde p)$. Finite-sample MSE-based optimal rules for choosing the parameters employed by these estimators are discussed in [3, 5, 16, 19]. Here, we proceed according to the following rules. A simple approximation of the optimal sampling frequency for the Realized Volatility estimator is to choose the number of observations approximately equal to $n^* = \left(Q/(4E[\eta^2]^2)\right)^{1/3}$, where $Q$ is the integrated quarticity estimated by means of low-frequency returns. The optimal number of subgrids for the Two-scale estimator is $S = c^* n^{2/3}$, where $c^* = \left(Q/(48E[\eta^2]^2)\right)^{-1/3}$. For the Kernel estimator, we apply the optimal mean square error bandwidth selection suggested by [5] and get $H = c^*\xi^{4/5} n^{3/5}$, where $c^* = (144/0.269)^{1/5}$ and $\xi^2 = E[\eta^2]/\sqrt{Q}$. In the case of the Pre-averaging estimator, inspired by [5], we choose $k_n = c^*\xi^{4/5} n^{3/5}$. Finally, for the Fourier estimator, the optimal cutting frequency $N$ can be obtained by direct minimization of the estimated MSE given by Theorem 3 in [16].
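To make the definitions above concrete, the following sketch implements the realized kernel, two-scale and pre-averaging estimators in Python. These are our own minimal implementations of the displayed formulas, not the authors' code; function and variable names are ours:

```python
import numpy as np


def tukey_hanning2(x):
    """TH2 kernel: k(x) = sin^2((pi/2) * (1 - x)^2); k(0) = 1, k(1) = 0."""
    return np.sin(0.5 * np.pi * (1.0 - x) ** 2) ** 2


def realized_kernel(p, H, kernel=tukey_hanning2):
    """Realized kernel: RV plus kernel-weighted return autocovariances."""
    d = np.diff(p)
    est = np.sum(d ** 2)  # h = 0 term, since k(0) = 1
    for h in range(1, H + 1):
        est += 2.0 * kernel(h / (H + 1)) * np.sum(d[h:] * d[:-h])
    return est


def two_scale(p, S):
    """Two-scale estimator: bias-adjusted average of the RVs computed on
    S non-overlapping subgrids (here: every S-th observation)."""
    rv_full = np.sum(np.diff(p) ** 2)
    rv_sub = np.mean([np.sum(np.diff(p[s::S]) ** 2) for s in range(S)])
    return S / (S - 1) * (rv_sub - rv_full / S)


def preaveraged(p, k_n):
    """Pre-averaging estimator with the 'hat' weight g(x) = min(x, 1 - x),
    psi1 = 1, psi2 = 1/12 and theta = k_n * sqrt(Delta)."""
    d = np.diff(p)
    n = d.size
    delta = 1.0 / n
    theta = k_n * np.sqrt(delta)
    psi1, psi2 = 1.0, 1.0 / 12.0
    r = np.arange(1, k_n) / k_n           # g(1) = 0, so r stops at k_n - 1
    g = np.minimum(r, 1.0 - r)
    bar = np.correlate(d, g, mode="valid")  # bar_s = sum_r g(r/k_n) d_{s+r}
    return (np.sqrt(delta) / (theta * psi2)) * np.sum(bar ** 2) \
        - (psi1 * delta / (2.0 * theta ** 2 * psi2)) * np.sum(d ** 2)
```

With `H = 0` the realized kernel collapses to the Realized Volatility, as noted above.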

4 Volatility and Default Probability Computation

In this section we provide the numerical results of our simulation study, showing and quantifying the impact of different volatility measures on default probability estimation, based on high-frequency equity data affected by trading noise. We analyze the performance of the volatility estimators with respect to their ability to i) filter the microstructure noise and correctly extract the equity volatility, ii) back out the asset volatility and iii) predict default probabilities.

We perform Monte Carlo simulations by generating the underlying asset dynamics according to model (1) for rating classes A, BBB and BB. Then, high-frequency equity prices are simulated through Equation (2). The sample contains 500 days, and equispaced intra-day data are generated at a frequency $h = 4$ sec (i.e. $n = 21600$). Based on the calibration results in [20], we consider the model parameters $r = 0.05$, $\delta = 0.02$; class A: $\sigma = 0.2128$, $\mu = 0.0643$, $B = 43.13$; class BBB: $\sigma = 0.2296$, $\mu = 0.0655$, $B = 48.02$; class BB: $\sigma = 0.2371$, $\mu = 0.057$, $B = 58.63$. Market microstructure noise is considered, alternatively, for both cases described by Eqs. (6)–(7). The random shocks $\eta_j$ are i.i.d. Gaussian random variables with zero mean and standard deviation equal to 2 times the log-equity return standard deviation. We set $\alpha = 0.5$ in the dependent noise case (7). Once we have noisy equity prices, we compare the performance of the different volatility estimators in their ability to extract the “true” equity volatility $\Sigma_t^s$ and other related quantities for the underlying process describing the firm’s asset value dynamics.
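The simulation design can be sketched as follows (one simulated day under the class-A parameters; the day-count convention of 252 trading days per year and the initial asset value $A_0 = 100$ are our illustrative assumptions, not taken from the text):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Class-A parameters from the text
mu, delta, sigma, B, r, tau = 0.0643, 0.02, 0.2128, 43.13, 0.05, 5.0
n = 21600                  # 4-second observations in one day
dt = 1.0 / (252 * n)       # intra-day step in years (assumed convention)

# Asset path of Eq. (1) via the exact log-Euler scheme
z = rng.normal(size=n)
logA = np.log(100.0) + np.cumsum((mu - delta - 0.5 * sigma**2) * dt
                                 + sigma * np.sqrt(dt) * z)
A = np.exp(logA)

# Equity prices through the pricing formula (2)
d = (np.log(A / B) + (r - delta + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
S = A * norm.cdf(d) - B * np.exp(-r * tau) * norm.cdf(d - sigma * np.sqrt(tau))

# Independent trading noise of Eq. (6), std = 2 x log-return std
log_S = np.log(S)
noise_std = 2.0 * np.std(np.diff(log_S))
log_S_noisy = log_S + rng.normal(0.0, noise_std, size=n)
```

The noisy series `log_S_noisy` is then fed to the volatility estimators of Sect. 3.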

Table 1 presents numerical evidence for each equity volatility estimator introduced in Sect. 3 and used in our comparison, when (a) trading noise is independent of intra-day equity log-returns and (b) trading noise is correlated with intra-day equity log-returns, respectively. The table lists the average relative error (ARE), the mean squared error (MSE) and the bias achieved by the different equity volatility estimators. $\Sigma_t^{RV}$ denotes the Realized Volatility estimator using all tick-by-tick equity data, while $\Sigma_t^{RVSS}$ refers to the Realized Volatility estimator based on sparse sampling, where the sampling frequency is optimized in order to filter the microstructure effects, as explained in Sect. 3. Our results strongly confirm well-known stylized facts documented by the econometric literature and highlighted by [16]: $\Sigma_t^{RV}$ estimates are completely swamped by noise, and sparse sampling can provide only moderately efficient estimates. The first-order correction of $\Sigma_t^{HL}$, as an alternative to sparse sampling, can reduce the bias due to the spurious first-order autocorrelation in equity returns introduced by the trading noise. The best results are provided by $\Sigma_t^{F}$ and by the other estimators specifically designed to handle microstructure effects, $\Sigma_t^{TS}$, $\Sigma_t^{K}$ and $\Sigma_t^{PA}$. However, the ranking of the estimators differs when we consider absolute error measures such as MSE and bias versus percentage error measures such as ARE. In the latter case $\Sigma_t^{HL}$ performs better than $\Sigma_t^{PA}$ and $\Sigma_t^{F}$, which means that $\Sigma_t^{HL}$ performs better under low volatility regimes. Finally, we notice that $\Sigma_t^{PA}$ tends to underestimate volatility, unlike all the other estimators.

Table 1 Equity Volatility Estimators. The table shows average relative error (ARE), mean squared error (MSE) and BIAS for different equity volatility estimators for A-rated firms. Results are based on 500 daily Monte Carlo simulations. Panel (a) refers to the trading noise given in Equation (6), panel (b) to Equation (7), with α = 0.5

In order to study the influence of the different equity volatility estimators on the default probability predicted by the Merton [17] model, we develop the following calibration exercise. For each equity volatility estimator, generically denoted by $E$, and each day $t$ in our sample, we find the corresponding asset volatility estimate $\hat\sigma^E$ by matching the 5-year default probability coming from our equity volatility estimate $\Sigma_t^{E}$ with the one evaluated through the model using the given parameter values. In so doing, we act as if we did not know the asset values, to mimic the real-life estimation situation, and we conduct inference based only on observable quantities, namely the measurable equity volatility and the 5-year default probability. Default probabilities are computed, for any maturity, according to Eq. (5). In order to avoid arbitrage opportunities, following [20], our key assumption is that all securities written on the underlying firm value $A_t$ must have the same Sharpe ratio, see [17], Equation (6). This enables us to express the instantaneous asset return $\mu$ as a function of the unknown asset volatility, given each equity volatility estimate $\Sigma_t^E$, and then to solve Eq. (5) for the 5-year default probability with respect to the asset volatility, obtaining the corresponding asset volatility estimate $\hat\sigma^E$. Once the asset volatility is known, we can compute default probabilities for any other maturity according to Eq. (5).
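The inversion step can be sketched as a one-dimensional root search. This is a simplified version: we hold $\mu$ fixed rather than linking it to $\sigma$ through the Sharpe-ratio condition used in the paper, and the function names are ours:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm


def default_prob(A, B, mu, delta, sigma, tau):
    """Physical default probability of Eq. (5)."""
    d = (np.log(A / B) + (mu - delta + 0.5 * sigma**2) * tau) \
        / (sigma * np.sqrt(tau))
    return norm.cdf(sigma * np.sqrt(tau) - d)


def implied_asset_vol(target_dp, A, B, mu, delta, tau=5.0):
    """Solve Eq. (5) for sigma so that the model matches a target 5-year
    default probability; Eq. (5) is monotone in sigma for A > B."""
    f = lambda s: default_prob(A, B, mu, delta, s, tau) - target_dp
    return brentq(f, 1e-4, 2.0)
```

A round trip (compute the 5-year default probability at the true $\sigma$, then invert it) recovers $\sigma$ up to the solver tolerance.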

Before analyzing the impact of the different volatility estimators on the calibrated default probabilities, we analyze their influence on the calibration of the asset volatility $\sigma$. Figure 1, which plots the ratio $\hat\sigma^E/\sigma$ in the case of independent noise for an A-rated firm, gives visual insight into the variability of the asset volatility estimates obtained with the procedures based on the different equity volatility estimators. It is evident that the high-frequency Realized Volatility procedure (black line at the bottom) largely underestimates the asset volatility. Similar results are obtained in the dependent noise case.

Fig. 1

Calibrated Asset Volatility. The plot shows the ratio between the calibrated asset volatility $\hat\sigma^E$ and the true asset volatility $\sigma$ for 500 daily Monte Carlo simulations. The plot refers to the trading noise given in Equation (6). Panel (a) of Table 2 gives descriptive statistics of these values

Table 2 shows descriptive statistics of the calibrated asset volatility. We report results obtained by matching 5-year default probabilities for each equity volatility estimator $E$. Panel (a) of the table refers to trading noise of the form (6); panel (b) refers to results obtained for trading noise of the form (7). It is evident from the table that the Kernel and Two-scale estimators provide the most accurate estimates of the true value $\sigma = 0.2128$, with the smallest standard deviation. On the contrary, $\hat\sigma^{RV}$ is strongly biased due to microstructure effects, while the optimized $\hat\sigma^{RVSS}$ is less biased, at the price of a slightly larger variance. These results are confirmed by the statistics for the ratio $\hat\sigma^E/\sigma$ as well.

Table 2 Calibrated Asset Volatility. The table shows descriptive statistics of the calibrated asset volatility for different equity volatility estimators. Results are based on 500 daily Monte Carlo simulations for an A-rated firm. Panel (a) refers to the trading noise given in Eq. (6), panel (b) to Eq. (7), with α = 0.5

The average (over all the ‘daily’ Monte Carlo replications) default probabilities for different maturities and calibration procedures are plotted in Fig. 2. For each day in the sample and for each volatility estimator $E$, we use the calibrated asset volatility $\hat\sigma^E$ to compute the default probabilities for each maturity (from 1 to 5 years) by Eq. (5). The figure makes evident that the Realized Volatility approach based on high-frequency noisy data drastically underestimates default probabilities, so that sparse sampling becomes mandatory when equity data are affected by microstructure effects. On the whole, all the other procedures seem to provide sensible results, and only a deeper analysis reveals differences among the calibration procedures.

Fig. 2

Default Probability. The plot shows default probabilities given by (5). By matching the 5-year default probability coming from our equity volatility estimate with the one provided by the model, we obtain the corresponding calibrated asset volatility and use it to compute default probabilities for all maturities

The analysis of the effects of using alternative equity volatility estimators on default probabilities is conducted for different maturities (from 1 to 5 years) through the comparison of the mean relative error between the estimated default probability and the theoretical one. For each maturity, we consider the following measure

$$DP_{Err}^{E} := \mathbb{E}\left[ \frac{DP(\Sigma_t^E) - DP(\Sigma_t^s)}{DP(\Sigma_t^s)} \right] \cdot 100,$$
((8))

where $DP(\Sigma_t^E)$ is the default probability calibrated from the Merton [17] structural model when equity and asset volatility are estimated through estimator $E$, and $DP(\Sigma_t^s)$ is the corresponding theoretical default probability when equity and asset volatility are $\Sigma_t^s$ and $\sigma$, respectively. Figure 3 shows the mean relative error of the different calibration procedures for an A-rated firm with noise setting (7). Similar plots are obtained for the noise case (6) and for the other credit qualities. The results for the Realized Volatility estimator using tick-by-tick data have been omitted, since the relative error in this case reaches 78% for the shortest maturities. When data are optimally sampled, the Realized Volatility calibration procedure is still affected by an error of up to 6% for the shortest maturities. A negative (positive) error reveals that the calibration procedure underestimates (overestimates) default probabilities. Therefore, the classical Realized Volatility approach can severely underestimate risk. Except for the Fourier approach, all the other approaches slightly overestimate default probabilities, with $\Sigma_t^{TS}$ and $\Sigma_t^{K}$ providing the best estimation of risk.
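The error measure of Eq. (8) is straightforward to compute; as a trivial sketch (our own helper name):

```python
import numpy as np


def dp_relative_error(dp_est, dp_true):
    """Mean relative error of Eq. (8), expressed in percent."""
    dp_est, dp_true = np.asarray(dp_est), np.asarray(dp_true)
    return np.mean((dp_est - dp_true) / dp_true) * 100.0
```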

Fig. 3

Default Probability Relative Error. The plot shows the default probability mean relative error given in Eq. (8) for maturities from 1 to 5 years. Results are based on 500 Monte Carlo simulations and refer to an A-rated firm with trading noise (7) and α = 0.5

Table 3 shows the corresponding numerical results for all the rating classes A, BBB and BB, suggesting that the choice of the volatility estimator largely affects default probability estimation. The performance of each estimator in terms of default probability estimation is more strongly affected by the equity risk premium, i.e. ultimately by $\mu$, than by the latent asset volatility $\sigma$. Therefore, $DP_{Err}^E$ generally does not worsen from class A to classes BBB and BB, as long as low-rated firms are associated with small values of $\mu$, like the ones calibrated by [20]. Rather, the ranking among the different estimators remains the same regardless of the firm’s credit rating. In particular, $\Sigma_t^{TS}$, $\Sigma_t^{HL}$, $\Sigma_t^{K}$ and $\Sigma_t^{PA}$ are the estimators providing the best risk evaluation in all the considered scenarios, immediately followed by $\Sigma_t^{F}$.

Table 3 Default Probability Relative Error. The table shows the default probability mean relative error given in Eq. (8) for maturities from 1 to 4.5 years. Results are based on 500 daily Monte Carlo simulations using different equity volatility estimators. Relative errors are in percentage (%). Panel (a) refers to the trading noise given in Eq. (6), panel (b) to Eq. (7)

5 Conclusions

In this paper we consider the Merton [17] structural model and use high-frequency equity prices in order to back out the unobservable asset volatility and calibrate default probabilities. We perform a Monte Carlo simulation study based on equispaced simulated equity prices and compare alternative equity volatility estimators, assuming the data are contaminated by trading noise: the aim is to exploit the information content of high-frequency intra-day data while filtering out microstructure effects. While [9] propose a particle filter-based maximum likelihood method, we follow a different econometric approach: we consider alternative non-parametric (equity) volatility estimators and compare their performance in i) filtering out the market microstructure noise, ii) extracting the (unobservable) true underlying asset volatility level, and iii) predicting default probabilities. We consider, alternatively, trading noise that is a) independent and Gaussian distributed on the log-price, b) correlated with intra-day equity log-returns. The non-observability of a firm’s asset value does not actually impede the implementation of a structural credit risk model; nevertheless, the choice of the volatility estimator can largely affect the calibrated default probabilities, and thus risk evaluation, as shown for the A, BBB and BB rating classes analyzed. The commonly used Realized Volatility estimator is unable to provide reliable estimates of the asset volatility under market microstructure noise, leading to a significant underestimation of asset volatility and default probabilities. A natural next step is to extend the current analysis to credit spread estimation and to more sophisticated underlying asset dynamics, i.e. stochastic volatility and/or jump-diffusion models.