1 Introduction

John Taylor suggested in Taylor (1993) that an interest rate rule for well-conducted monetary policy fitted the Fed’s behaviour since 1987 rather well in a single equation regression. Since then a variety of similar studies have confirmed his finding—most of these have focused on a data sample beginning in the early-to-mid 1980s. For the period from the late 1960s to the early 1980s the results have been more mixed. Thus Clarida et al. (2000) reported that the Taylor Rule fitted but with a coefficient on inflation of less than unity; in a full New Keynesian model this fails under the usual criteria to create determinacy in inflation and they argue that this could be the reason for high inflation and output volatility in this earlier post-war period. They concluded that the reduction in macro volatility between these two periods (the ‘Great Moderation’) was due to the improvement in monetary policy as captured by this change in the operative Taylor Rule.

This view of the Great Moderation has been widely challenged in econometric studies of the time series. These studies have attempted to decompose the reduction in macro variance into the effect of parameter changes and the effect of changes in shock variances. Virtually all have found that the reduction in shock variances dominated, implying that the monetary policy rule in operation did not change very much.

A further challenge to the Taylor Rule account of post-war monetary policy has come from Cochrane (2011) and others (Minford et al. 2002), who argue that the Taylor Rule is not identified as a single equation because a DSGE model with a different monetary policy rule (such as a money supply rule) could equally well generate an equation of the Taylor Rule form. Much of the work estimating the Taylor Rule could therefore be spurious. However, identification can be achieved when the Taylor Rule is embedded in a full DSGE model because of the over-identifying restrictions implied by rational expectations. There then remains the question of whether such a model represents the data as well as one that is otherwise the same but has an alternative monetary policy rule.Footnote 1

This points to a way forward for testing models of monetary policy. One may specify a complete DSGE model with alternative monetary rules and use the over-identifying restrictions to determine their differing behaviours. One may then test which of these is more acceptable to the data and hence comes closest to the true model. This is the approach taken here.

We look at a particular rival to the Taylor Rule, the Optimal Timeless Rule. This is of interest because under it the Fed plays a more precisely optimising role than under the Taylor Rule, which is a simple rule that can be operated with limited current information, namely on output and inflation. The Optimal Timeless Rule assumes that the Fed can solve the DSGE model for all the shocks and so choose in a discriminating way its reaction to each shock. Besides the Optimal Timeless Rule we also look at variants of the Taylor Rule, including one that closely mimics the Optimal Timeless Rule.

To make our testing bounded and tractable we use each monetary rule in conjunction with the most widely accepted parsimonious DSGE representation, in which the model is reduced to two equations, a forward-looking ‘IS’ curve and a New Keynesian Phillips curve, plus the monetary rule. We allow each rule/model combination to be calibrated with the best chance of matching the data and then test that best calibration, using the method of Indirect Inference, under which the model’s simulated behaviour is formally tested for congruence with the behaviour of the data. Our efforts here join others that have brought DSGE models to bear on this issue—notably Ireland (2007), Smets and Wouters (2007) and the related Le et al. (2011), and Fernandez-Villaverde et al. (2009, 2010). These authors have all used much larger DSGE models, in some cases non-stationary data, and in most cases Bayesian estimation methods. Their work is largely complementary to ours and we discuss their findings in relation to ours below. Bayesian estimation is a method for improving on calibrated parameters, but our method of indirect inference takes matters further and asks whether the finally estimated model is consistent overall with the data behaviour; if it is not, the method searches for some parameter set permissible within the theory that is consistent, getting as close to consistency as possible given the model and the data. This method is the major innovation we introduce for the treatment of the topic here; it is based on classical statistical inference, and we explain and defend it carefully below.

In using indirect inference to test the models we deviate from the popular use of Bayesian methods to evaluate models. This is because Bayesian evaluation of a model (by likelihood and odds ratio tests) does not test the model as a whole against the data; indeed Bayesians dismiss the very idea of ‘testing models’. Bayesian estimates depend on the choice of prior distributions, which are designed to restrict the estimates. By testing a model as a whole, we mean testing jointly both the effect of imposing the priors and the usual structural restrictions of the model. In contrast, Bayesian evaluation examines which of two specifications of a structural model is more probable given the priors assumed; any ranking of the two models is thus dependent on the priors chosen.Footnote 2 Thus Bayesian methods cannot be used to test models against the data in the sense in which we wish to test (and rank) a model.

It may, of course, still be argued that it is wrong to test models as a whole against the data and that one should only check improvements conditional on prior assumptions that are not themselves challenged. It is, however, hard to argue that any set of prior assumptions can be taken for granted as true and beyond challenge, as can be seen from the number of ‘schools of thought’ still in existence in macroeconomics. This wide divergence of beliefs has, if anything, been exacerbated by the financial crisis of the late 2000s. Whether one likes it or not, to establish a model scientifically to the satisfaction of other economists and policymakers, it must be shown that the model being proposed for a policy use is consistent with the data in a manner that enables it to be used for that purpose. We show below that indirect inference fulfils that need.

Direct inference, as in the Likelihood Ratio test, is an alternative to indirect inference for testing models against the data. However, as we elaborate below, Likelihood Ratio tests appear to have considerably less power in small samples than the indirect inference test we use here. By implication, indirect inference will therefore provide a more powerful discrimination between the models.

In Section 2 we review the work on the Great Moderation. In Section 3 we set out the model and the rules to be tested, and in Section 4 our test procedure, together with a discussion of the alternatives. Section 5 shows the results, while Section 6 draws out the implications for the Great Moderation and Section 7 concludes.

2 Causes of the Great Moderation

The Great Moderation refers to the period during which the volatility of the main economic variables was relatively modest. This began in the US around the early 1980s, although there is no consensus on the exact date. Figure 1 below shows the time paths of three main US macro variables from 1972 to 2007: the nominal Fed interest rate, the output gap and CPI inflation. It shows that the massive fluctuations of the 1970s ceased after the early 1980s, indicating the economy’s transition from the Great Acceleration to the Great Moderation.

Fig. 1 Time paths of main macro variables of the US economy

Changes in the monetary policy regime could have produced the Great Moderation. This is typically illustrated with the three-equation New Keynesian framework, consisting of the IS curve derived from the household’s optimization problem, the Phillips curve derived from the firm’s optimal price-setting behaviour, and a Taylor Rule approximating the Fed’s monetary policy. Using simulated behaviour from models of this sort, a number of authors suggest that the US economy’s improved stability was largely due to stronger monetary policy responses to inflation (Clarida et al. 2000; Lubik and Schorfheide 2004; Boivin and Giannoni 2006 and Benati and Surico 2009). The contrast is between the ‘passive’ monetary policy of the 1970s, with low Taylor Rule responses, and the ‘active’ policy of the later period in which the conditions for a unique stable equilibrium (the ‘Taylor Principle’) are met, these normally being that the inflation response in the Taylor Rule be greater than unity. Thus it was argued that the indeterminacy caused by the passive 1970s policy generated sunspots and so the Great Acceleration; with the Fed’s switch this was eliminated, hence the Great Moderation.

By contrast other authors, mainly using structural VAR analysis, have suggested that the Great Moderation was caused not by policy regime change but by a reduction in the variance of shocks. Thus Stock and Watson (2002) claimed that over 70 % of the reduction in GDP volatility was due to lower shocks to productivity, commodity prices and forecast errors. Primiceri (2005) argued that the stagflation in the 1970s was mostly due to non-policy shocks. A similar conclusion was drawn by Gambetti et al. (2008), while Sims and Zha (2006) found in much the same vein that an empirical model with variation only in the variance of the structural errors fitted the data best and that alteration in the monetary regime—even if assumed to occur—would not much influence the observed inflation dynamics.

The logic underlying the structural VAR approach is that, when actual data are modelled with a structural VAR, their dynamics will be determined both by the VAR coefficient matrix that represents the propagation mechanism (including the monetary regime) and by the variance-covariance matrix of prediction errors which takes into account the impact of exogenous disturbances. Hence by analysing the variation of these two matrices across different subsamples it is possible to work out whether it is the change in the propagation mechanism or in the error variability that has caused the change in the data variability. It is the second that these studies have identified as the dominant cause. Hence almost all structural VAR analyses have suggested ‘good shocks’ (or ‘good luck’) as the main cause of the Great Moderation, with the change of policy regime in a negligible role.
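To make this counterfactual logic concrete, the following minimal Python sketch computes the unconditional variances implied by a stationary VAR(1) under different combinations of subsample coefficient matrix and shock covariance. The matrices here are illustrative placeholders, not estimates from any of the cited studies; only the bookkeeping of the decomposition is intended to match the text.

```python
import numpy as np

def var1_unconditional_cov(A, Sigma):
    """Unconditional covariance V of a stationary VAR(1) y_t = A y_{t-1} + e_t,
    Var(e_t) = Sigma, solving V = A V A' + Sigma via vec(V) = (I - A kron A)^{-1} vec(Sigma)."""
    m = A.shape[0]
    vec_v = np.linalg.solve(np.eye(m * m) - np.kron(A, A), Sigma.flatten())
    return vec_v.reshape(m, m)

# Illustrative (not estimated) matrices: A1, Sigma1 for the earlier subsample,
# A2, Sigma2 for the later one.
A1 = np.array([[0.85, 0.10, 0.00],
               [0.20, 0.60, 0.10],
               [0.05, 0.20, 0.50]])
A2 = np.array([[0.80, 0.05, 0.00],
               [0.15, 0.55, 0.10],
               [0.05, 0.15, 0.50]])
Sigma1 = np.diag([1.0, 1.0, 1.0])
Sigma2 = np.diag([0.2, 0.2, 0.2])

# Counterfactuals: change only the propagation matrix, or only the shock covariance,
# and compare the implied variances with the baseline.
base            = np.diag(var1_unconditional_cov(A1, Sigma1))
new_coeffs_only = np.diag(var1_unconditional_cov(A2, Sigma1))
new_shocks_only = np.diag(var1_unconditional_cov(A1, Sigma2))
print(base, new_coeffs_only, new_shocks_only)
```

Comparing the three sets of variances shows whether the change in the coefficient matrix or the change in the shock covariance does most of the work in reducing volatility, which is the comparison the cited studies perform.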

Nevertheless, since this structural VAR approach relies critically on supposed model restrictions to decompose the variations in the VAR between its coefficient matrix and the variance-covariance matrix of its prediction errors, there is a pervasive identification problem. As Benati and Surico (2009) have pointed out, the problem that ‘lies at the very heart’ is the difficulty in connecting the structure of a DSGE model to the structure of a VAR. In other words one cannot retrieve from the parameters of an SVAR the underlying structural parameters of the DSGE model generating it, unless one is willing to specify the DSGE model in detail. None of these authors have done this. Hence one cannot know from their studies whether in fact the DSGE model that produced the SVAR for the Great Acceleration period differed or did not differ from the DSGE model producing the SVAR for the Great Moderation period. It is not enough to say that the SVAR parameters ‘changed little’ since we do not know what changes would have been produced by the relevant changes in the structural DSGE models. Different DSGE models with similar shock distributions could have produced these SVARs with similar coefficients and different shock distributions.

Essentially it is this problem that we attempt to solve in the work we present below. We estimate a VAR for each period and we then ask what candidate DSGE models could have generated each VAR. Having established which model comes closest to doing so, we then examine how the difference between them accounts for the Great Moderation. Since these models embrace the ones put forward by the authors who argue that policy regime change accounts for it, we are also able to evaluate these authors’ claims statistically. Thus we bring evaluative statistics to bear on the authors who claim policy regime change, while we bring identification to bear on the authors who use SVARs.

3 A Simple New Keynesian Model for Interest Rate, Output Gap and Inflation Determination

We follow a common practice among New Keynesian authors of setting up a full DSGE model with representative agents and reducing it to a three-equation framework consisting of an IS curve, a Phillips curve and a monetary policy rule.

Under rational expectations the IS curve derived from the household’s problem and the Phillips curve derived from the firm’s problem under Calvo (1983) contracts can be shown to be:

$$ {x}_t={E}_t{x}_{t+1}-\left(\frac{1}{\sigma}\right)\left({\tilde{\iota}}_t-{E}_t{\pi}_{t+1}\right)+{v}_t $$
(1)
$$ {\pi}_t=\chi {E}_t{\pi}_{t+1}+\gamma {x}_t+\kappa {u}_t^w $$
(2)

where \( x_t \) is the output gap, \( {\tilde{\iota}}_t \) is the deviation of the interest rate from its steady-state value, \( \pi_t \) is price inflation, and \( v_t \) and \( u_t^w \) are the ‘demand shock’ and ‘supply shock’, respectively.Footnote 3

We consider three monetary regime versions widely suggested for the US economy. These are the Optimal Timeless Rule when the Fed commits itself to minimizing a typical quadratic social welfare loss function; the original Taylor Rule; and an interest-rate-smoothed Taylor Rule.

In particular, the Optimal Timeless Rule is derived following Woodford’s (1999) idea of ignoring the initial conditions confronting the Fed at the regime’s inception. It implies that, if the Fed were a strict, consistent optimizer, it would always have kept inflation proportional to the (negative of the) first difference of the output gap, ensuring

$$ {\pi}_t=-\frac{\alpha }{\gamma}\left({x}_t-{x}_{t-1}\right) $$
(3)

where α is the relative weight it puts on the loss from output variations against inflation variations and γ is the slope of the Phillips curve constraint (reflecting price stickiness) that it faces.Footnote 4

Unlike Taylor Rules that specify a systematic policy instrument response to economic changes, this Timeless Rule sets an optimal trade-off between economic outcomes—here, it punishes excess inflation with a fall in the output growth rate. It then chooses the policy instrument setting to achieve these outcomes; thus the policy response is implicit. Svensson and Woodford (2004) categorized such a rule as ‘high-level monetary policy’; they argued that by connecting the central bank’s monetary actions to its ultimate policy objectives this rule has the advantage of being more transparent and robust.Footnote 5

Thus, in order to implement the Optimal Timeless Rule the Fed must fully understand the model (including the shocks hitting the economy) and set its policy instrument (here the Fed rate) to whatever supports the Rule. Nevertheless, the Fed may make errors of implementation that cause the rule not to be met exactly—‘trembling hand’ errors, \( \xi_t \). Here, since (3) is a strict optimality condition, we think of such policy mistakes as due either to an imperfect understanding of the model or to an inability to identify and react to the demand and supply shocks correctly. This differs from the error in typical Taylor Rules, which consists of the Fed’s discretionary departures from the rule.
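As an illustration, the implied ‘trembling hand’ error can be backed out directly from Eq. (3). The short Python sketch below does this for made-up inflation and output-gap series and placeholder values of α and γ; these numbers are not the paper’s calibration.

```python
import numpy as np

# Placeholder preference weight and Phillips curve slope (NOT the paper's calibration)
alpha, gamma = 0.05, 0.1

def timeless_rule_residual(pi, x):
    """xi_t = pi_t + (alpha/gamma) * (x_t - x_{t-1}); the first observation is lost."""
    pi, x = np.asarray(pi), np.asarray(x)
    return pi[1:] + (alpha / gamma) * np.diff(x)

# Example with made-up quarterly series for inflation and the output gap
pi = np.array([0.8, 0.6, 0.9, 0.7])
x = np.array([1.0, 0.5, -0.2, 0.1])
print(timeless_rule_residual(pi, x))
```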

Thus the three model economies with differing monetary policy settings are readily comparable. These are summarised in Table 1.Footnote 6

Table 1 Competing rival models

Since these models differ only in the monetary policies being implemented, by comparing their capacity to fit the data one should be able to tell which rule, when included in a simple New Keynesian model, provides the best explanation of the facts and therefore the most appropriate description of the underlying policy. We go on to investigate this in what follows.

4 Model Estimation and Evaluation by Indirect Inference

Indirect inference has been widely used in the estimation of structural models (e.g., Smith 1993; Gregory and Smith 1991, 1993; Gourieroux et al. 1993; Gourieroux and Monfort 1996 and Canova 2005). Here we make a further use of indirect inference to evaluate an already estimated or calibrated (DSGE) macroeconomic model using classical statistical inference. This is related to, but is different from, estimating a macroeconomic model by indirect inference. The common feature is the use of an auxiliary model in addition to the structural macroeconomic model. For a full description of the method of indirect inference see also Le et al. (2011). In addition to testing a particular prior numerical specification of the DSGE model, we examine how we might compare and test alternative numerical specifications of the model. Next we set out the main features of indirect inference.

4.1 Estimation

Estimation by indirect inference chooses the parameters of the macroeconomic model so that, when this model is simulated, it generates estimates of the auxiliary model similar to those obtained from the observed data. The optimal choice of parameters for the macroeconomic model is the one that minimizes the distance between a given function of the auxiliary-model coefficients estimated on the observed data and the same function estimated on the simulated data. Common choices of this function are (i) the actual coefficients, (ii) the scores, and (iii) the impulse response functions. In effect, estimation by indirect inference provides an optimal calibration.

Suppose that \( y_t \) is an m × 1 vector of observed data, t = 1,…,T; \( x_t(\theta) \) is an m × 1 vector of simulated time series generated from the structural macroeconomic model; \( \theta \) is a k × 1 vector of the parameters of the macroeconomic model; and \( x_t(\theta) \) and \( y_t \) are assumed to be stationary and ergodic. The auxiliary model is \( f\left[{y}_t\right] \). We assume that there exists a particular value of θ, given by \( \theta_0 \), such that \( {\left\{{x}_t\left({\theta}_0\right)\right\}}_{s=1}^S \) and \( {\left\{{y}_t\right\}}_{t=1}^T \) share the same distribution, i.e.

$$ f\left[{x}_t\left({\theta}_0\right),\alpha \left({\theta}_0\right)\right]=f\left[{y}_t,\alpha \right] $$

where α is the vector of parameters of the auxiliary model and α(θ) is the binding function relating θ to α.

The likelihood function for the auxiliary model defined for the observed data \( {\left\{{y}_t\right\}}_{t=1}^T \) is

$$ {\mathrm{\mathcal{L}}}_T\left({y}_t;\alpha \right)={\displaystyle {\sum}_{t=1}^T \log f\left[{y}_t,\alpha \right]} $$

The maximum likelihood estimator of α is then

$$ {a}_T=\underset{\alpha }{ \arg \kern0.5em \max }{\mathrm{\mathcal{L}}}_T\left({y}_t;\alpha \right) $$

The corresponding likelihood function based on the simulated data \( {\left\{{x}_t\left(\theta \right)\right\}}_{s=1}^S \) is

$$ {\mathrm{\mathcal{L}}}_S\left[{x}_t\left(\theta \right);\alpha \right]={\displaystyle {\sum}_{t=1}^S \log f\left[{x}_t\left(\theta \right),\alpha \right]} $$

with

$$ {a}_S\left(\theta \right)=\underset{a}{ \arg \max }{\mathrm{\mathcal{L}}}_S\left[{x}_t\left(\theta \right);\alpha \right] $$

The simulated quasi maximum likelihood estimator (SQMLE) of θ is

$$ {\theta}_{T,S}=\underset{\theta }{ \arg \max }{\mathrm{\mathcal{L}}}_T\left[{y}_t;{\alpha}_S\left(\theta \right)\right] $$

This value of θ corresponds to the value of α that maximises the likelihood function using the observed data. Further, as \( x_t(\theta) \) and \( y_t \) are assumed to be stationary and ergodic, from Canova (2005),

$$ plim\kern0.5em {a}_T= plim\kern0.5em {a}_S\left(\theta \right)=\alpha . $$

It can then be shown that

$$ {T}^{1/2}\left({a}_S\left(\theta \right)-\alpha \right)\to N\left[0,\varOmega \left(\theta \right)\right],\kern1em \varOmega \left(\theta \right)=E{\left[-\frac{\partial^2\mathrm{\mathcal{L}}\left[\alpha \left(\theta \right)\right]}{\partial {\alpha}^2}\right]}^{-1}E\left[\frac{\partial \mathrm{\mathcal{L}}\left[\alpha \left(\theta \right)\right]}{\partial \alpha}\frac{\partial \mathrm{\mathcal{L}}{\left[\alpha \left(\theta \right)\right]}^{\prime }}{\partial \alpha}\right]E{\left[-\frac{\partial^2\mathrm{\mathcal{L}}\left[\alpha \left(\theta \right)\right]}{\partial {\alpha}^2}\right]}^{-1} $$

The covariance matrix can be obtained either analytically or by bootstrapping the simulations.

The simulated moments estimator may be extended to match a function of the auxiliary-model coefficients rather than the coefficients themselves; this gives the extended method of simulated moments estimator (EMSME). Let \( g\left({a}_T\right) \) and \( g\left({\alpha}_S\left(\theta \right)\right) \) denote a continuous p × 1 vector of functions, which could, for example, be moments or scores, and let the mean functions be \( {G}_T\left({a}_T\right)=\frac{1}{T}{\displaystyle {\sum}_{t=1}^Tg\left({a}_T\right)} \) and \( {G}_S\left({\alpha}_S\left(\theta \right)\right)=\frac{1}{S}{\displaystyle {\sum}_{s=1}^Sg\left({\alpha}_S\left(\theta \right)\right)} \). We require that \( {a}_T\to {\alpha}_S\left(\theta \right) \) in probability and that \( {G}_T\left({a}_T\right)\to {G}_S\left({\alpha}_S\left(\theta \right)\right) \) in probability for each θ. The EMSME is

$$ {\theta}_{T,S}=\underset{\theta }{ \arg \min}\left[{G}_T\left({a}_T\right)-{G}_S\left({\alpha}_S\left(\theta \right)\right)\right]^{\prime }W\left(\theta \right)\left[{G}_T\left({a}_T\right)-{G}_S\left({\alpha}_S\left(\theta \right)\right)\right] $$

where \( W\left({\theta}_0\right) \) is the inverse of the variance-covariance matrix of the distribution of \( {G}_S\left({a}_S\right)-\overline{G\left[{a}_S\left({\theta}_0\right)\right]} \). The estimator is consistent and asymptotically normal (Smith 1993; Gourieroux et al. 1993; Canova 2005).
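The following Python sketch illustrates the shape of this minimand for the VAR(1)-coefficient case used later in the paper. The structural simulator here is a deliberately trivial placeholder standing in for the solved DSGE model; only the structure of the objective (simulate, fit the auxiliary VAR, weight the coefficient gap by the inverse covariance of the simulated coefficients) is intended to match the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_var1(y):
    """OLS VAR(1) coefficients (no constant) on demeaned data, returned as a flat vector."""
    Y, X = y[1:], y[:-1]
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)   # solves Y ~ X @ A
    return A.T.flatten()

def simulate_model(theta, T, m=3):
    """Placeholder simulator: stands in for solving the DSGE model under theta and
    simulating it with (bootstrapped) structural errors. NOT the New Keynesian model."""
    y = np.zeros((T, m))
    for t in range(1, T):
        y[t] = theta[0] * y[t - 1] + theta[1] * rng.standard_normal(m)
    return y

def ii_distance(theta, y_actual, n_reps=200):
    """EMSME-style minimand: squared distance between the actual and mean simulated
    auxiliary coefficients, weighted by the inverse covariance of the simulated ones."""
    T = len(y_actual)
    sims = np.array([fit_var1(simulate_model(theta, T)) for _ in range(n_reps)])
    W = np.linalg.inv(np.cov(sims, rowvar=False))
    diff = fit_var1(y_actual) - sims.mean(axis=0)
    return diff @ W @ diff

# e.g. ii_distance(np.array([0.9, 1.0]), y_actual=simulate_model(np.array([0.8, 1.0]), T=140))
```

The estimator \( \theta_{T,S} \) would then be obtained by minimizing this objective over the theoretically admissible range of θ.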

4.2 Model Evaluation

In model evaluation indirect inference is used in a different way. The aim here is to compare the performance of an auxiliary model based on observed data with its performance based on data simulated from a calibrated or previously estimated macroeconomic model. We choose the auxiliary model to be a VAR and base our test on a function of the VAR coefficients. The test statistic is formed from the minimand of the EMSME evaluated using estimates of α derived from observed data and data simulated from the given numerically specified DSGE model. The distribution of this Wald-type of test statistic is obtained numerically through bootstrapping.

Non-rejection of the null hypothesis is taken to indicate that the dynamic behaviour of the macroeconomic model is not significantly different from that of the observed data. Rejection is taken to imply that the macroeconomic model is incorrectly specified. Comparison of the impulse response functions of the observed and simulated data should reveal in what respects the macroeconomic model fails to capture the auxiliary model.

A formal statement of the inferential problem is as follows. Using the same notation as before, we define \( y_t \), an m × 1 vector of observed data (t = 1,…,T); \( x_t(\theta) \), an m × 1 vector of simulated time series of S observations generated from the structural macroeconomic model; and \( \theta \), a k × 1 vector of the parameters of the macroeconomic model. \( x_t(\theta) \) and \( y_t \) are assumed to be stationary and ergodic. We set S = T since we require that the actual data sample be regarded as a potential replication from the population of bootstrapped samples. The auxiliary model is \( f\left[{y}_t,\alpha \right] \); an example is the VAR(p) \( {y}_t={\displaystyle {\sum}_{i=1}^p{A}_i{y}_{t-i}}+{\eta}_t \), where α is a vector comprising elements of the \( A_i \) and of the covariance matrix of \( y_t \). The null hypothesis is \( {H}_0:\theta ={\theta}_0 \), the stated value of θ whether obtained by calibration or estimation; under the null the auxiliary model is \( f\left[{x}_t\left({\theta}_0\right),\alpha \left({\theta}_0\right)\right]=f\left[{y}_t,\alpha \right] \). We wish to test the null hypothesis through the q × 1 vector of continuous functions \( g\left(\alpha \right) \); such a formulation includes impulse response functions. Under \( H_0 \), \( g\left(\alpha \right)=g\left[\alpha \left({\theta}_0\right)\right] \).

If \( a_T \) denotes the estimator of α using actual data and \( {a}_S\left({\theta}_0\right) \) is the estimator of α based on simulated data for \( \theta_0 \), we may obtain \( g\left({a}_T\right) \) and \( g\left[{a}_S\left({\theta}_0\right)\right] \). Using N independent sets of simulated data obtained using the bootstrap we can also define the bootstrap mean of the \( g\left[{a}_S\left({\theta}_0\right)\right] \), \( \overline{g\left[{a}_S\left({\theta}_0\right)\right]}=\frac{1}{N}{\displaystyle {\sum}_{k=1}^N{g}_k\left[{a}_S\left({\theta}_0\right)\right]} \). The Wald test statistic is based on the distribution of \( g\left({a}_T\right)-\overline{g\left[{a}_S\left({\theta}_0\right)\right]} \), where we assume that \( g\left({a}_T\right)-\overline{g\left[{a}_S\left({\theta}_0\right)\right]}\overset{p}{\to }0 \). The resulting Wald statistic (WS) may be written as

$$ WS=\left(g\left({a}_T\right)-\overline{g\left[{a}_S\left({\theta}_0\right)\right]}\right)^{\prime }W\left({\theta}_0\right)\left(g\left({a}_T\right)-\overline{g\left[{a}_S\left({\theta}_0\right)\right]}\right) $$

where \( W\left({\theta}_0\right) \) is the inverse of the variance-covariance matrix of the distribution of \( g\left({a}_T\right)-\overline{g\left[{a}_S\left({\theta}_0\right)\right]} \). \( W{\left({\theta}_0\right)}^{-1} \) can be obtained from the asymptotic distribution of \( g\left({a}_T\right)-\overline{g\left[{a}_S\left({\theta}_0\right)\right]} \), in which case the asymptotic distribution of the Wald statistic would be chi-squared. Instead, the empirical distribution of the Wald statistic is derived using bootstrap methods as follows.

  • Step 1: Determine the errors of the economic model conditional on the observed data and \( \widehat{\theta} \) .

    Solve the DSGE macroeconomic model for the structural errors \( \varepsilon_t \), given \( \widehat{\theta} \) and the observed data.Footnote 7 The number of independent structural errors is taken to be less than or equal to the number of endogenous variables. The errors are not assumed to be normal.

  • Step 2: Construct the empirical distribution of the structural errors

    On the null hypothesis the \( {\left\{{\varepsilon}_t\right\}}_{t=1}^T \) errors are omitted variables. Their empirical distribution is assumed to be given by these structural errors. The simulated disturbances are drawn from these errors. In some DSGE models the structural errors are assumed to be generated by autoregressive processes. This is the case with the New Keynesian model here; we discuss below the precise assumptions made.

As our test is based on a comparison of the VAR coefficient vector itself rather than a multi-valued function of it such as the IRFs, we have

$$ g\left({a}_T\right)-g\left({\alpha}_S\left(\theta \right)\right)={a}_T-{\alpha}_S\left(\theta \right) $$

and

$$ {G}_T\left({a}_T\right)-{G}_S\left({\alpha}_S\left(\widehat{\theta}\right)\right)={a}_T-{\alpha}_S\left(\widehat{\theta}\right) $$

The distribution of \( {a}_T-{\alpha}_S\left(\widehat{\theta}\right) \) and its covariance matrix \( W{\left(\widehat{\theta}\right)}^{-1} \) are estimated by bootstrapping \( {\alpha}_S\left(\widehat{\theta}\right) \). We draw N bootstrap samples of the structural model and estimate the auxiliary VAR on each.Footnote 8 N is generally set to 1,000. From these samples we compute the sample mean and covariance matrix. We also obtain the bootstrap distribution of the Wald statistic \( \left[{a}_T-{\alpha}_S\left(\widehat{\theta}\right)\right]^{\prime }W\left(\widehat{\theta}\right)\left[{a}_T-{\alpha}_S\left(\widehat{\theta}\right)\right] \) and a confidence interval. Such a distribution is generally more accurate in small samples than the asymptotic distribution. Le et al. (2011) also show it to be consistent, as the Wald statistic is asymptotically pivotal, and to have good accuracy in small-sample Monte Carlo experiments.Footnote 9
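A compact sketch of the bootstrap Wald computation is given below. It assumes the descriptor vector (here standing in for the nine VAR(1) coefficients plus the three variances) has already been estimated on the observed data and on each of the N bootstrap simulations of the model; the array names and the toy numbers are ours, purely for illustration.

```python
import numpy as np

def wald_test(coef_data, coef_boot):
    """Wald statistic of the data's descriptors against the bootstrap distribution
    implied by the model.
    coef_data : (q,) descriptor vector estimated on observed data
    coef_boot : (N, q) descriptors estimated on N bootstrap simulations of the model
    Returns the data Wald value and its percentile in the bootstrap Wald distribution."""
    mean_boot = coef_boot.mean(axis=0)
    W = np.linalg.inv(np.cov(coef_boot, rowvar=False))   # inverse covariance as weight

    def wald(g):
        d = g - mean_boot
        return d @ W @ d

    ws_data = wald(coef_data)
    ws_boot = np.array([wald(g) for g in coef_boot])
    percentile = 100.0 * (ws_boot < ws_data).mean()      # where the data value falls
    return ws_data, percentile

# Toy illustration with made-up numbers (q = 12: nine VAR(1) coefficients + three variances)
rng = np.random.default_rng(1)
coef_boot = rng.standard_normal((1000, 12))
coef_data = rng.standard_normal(12)
ws, pct = wald_test(coef_data, coef_boot)
print(f"Wald percentile: {pct:.1f} (p-value {(100 - pct) / 100:.3f})")
```

The reported Wald percentile is the position of the data’s Wald value in this bootstrap distribution; the associated p-value, (100 minus the percentile)/100, is the quantity used in Section 4.3.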

4.3 An Extension

A potential problem is that the given numerical values of θ being tested are not the ‘true’ values of θ. We therefore extend our test procedure by searching for alternative values of θ that might perform better in the test. This involves calculating a minimum-value Wald statistic for each period using a powerful algorithm based on Simulated Annealing (SA), in which the search takes place in a wide neighbourhood of the initial values of θ and the optimising search is accompanied by random jumps around the parameter space.Footnote 10 The merit of this extended procedure is that we are then testing the best possible numerical specification of each model against the actual data.
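A minimal sketch of such a search is shown below, using SciPy’s dual_annealing as a stand-in for the authors’ Simulated Annealing implementation and a toy quadratic objective in place of the model-specific Wald minimand (for example, the ii_distance function sketched in Section 4.1). The bounds are placeholders for the theoretically admissible parameter ranges.

```python
import numpy as np
from scipy.optimize import dual_annealing

def wald_minimand(theta):
    """Toy objective standing in for the indirect inference Wald statistic."""
    return np.sum((theta - np.array([2.0, 0.85])) ** 2)

bounds = [(1.0, 5.0),     # e.g. a risk-aversion parameter within its admissible range
          (0.5, 0.99)]    # e.g. a Calvo non-adjustment probability

result = dual_annealing(wald_minimand, bounds, maxiter=500, seed=42)
print(result.x, result.fun)
```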

Several outcomes are possible.

  a. One model is rejected, but the other is not. In this case only one model is compatible with actual data and the other can therefore be disregarded.

  b. Both models are rejected, but the Wald statistic of one is lower than that of the other.

  c. Neither model is rejected, but the Wald statistic for one is lower than for the other.

The p-value of the Wald statistic can loosely be described as the probability of the model being true given the data: it is the proportion of the bootstrap distribution lying to the right of the Wald value computed from the data, i.e. (100 minus the Wald percentile)/100. Ranking the models by their p-values therefore gives, in case b), information only about their relative misspecification; in case c) it suggests that the model with the higher p-value can be regarded as the better approximation to the ‘true’ model.

4.4 Comment

We explained earlier that we have chosen to use indirect inference to test a model, whether or not it has been estimated by Bayesian methods, rather than standard Bayesian evaluation methods because we wish to test the whole model against the data, including the assumptions embodied in the priors in the case of Bayesian estimates. Even a major model like the Smets and Wouters (2007) model of the U.S., that has been carefully estimated by Bayesian methods, is rejected by our indirect inference test, see Le et al. (2011). We also claimed that indirect inference has greater power than another direct test, the Likelihood Ratio (LR) test. This claim is based on Le et al. (2012) who find that LR is much less powerful in small samples as a test of specification than a Wald test based on indirect inference. Presumably this result is related to the nature of the two tests. The LR test is based on a model’s in-sample current forecasting ability, whereas the Wald is an in-sample test based on the model’s ability to replicate data behaviour as represented by the VAR coefficients and the data variances which reflect the causal processes at work in the data. Models that are somewhat mis-specified may still be able to forecast well in sample as the error processes will capture some of the effects of mis-specification, but mis-specified models imply a reduced form that differs materially from the true one. Similarly, a VAR approximation to a mis-specified reduced form will deviate from the VAR associated with the true model.

Table 2 reproduces the findings by Le et al. (2012) comparing the two tests (on the Smets-Wouters model, for a 3-variable VAR(1)).

Table 2 Rejection rates for Wald and likelihood ratio for 3-Variable VAR(1)

In sum, we could use LR instead of indirect inference as a test of our competing models. But it would be a much weaker test and hence we would get much less discrimination between the models. As will be seen below, the indirect inference Wald test discriminates powerfully between these models.

5 Data and Results

We evaluate the models against the US experience since the breakdown of the Bretton Woods system, using quarterly data published by the Federal Reserve Bank of St. Louis from 1972 to 2007.Footnote 11 This covers both the Great Acceleration and the Great Moderation episodes of US history.

The time series involved for the baseline model include \( {\tilde{\iota}}_t \), measured as the deviation of the current Fed rate from its steady-state value; the output gap \( x_t \), approximated by the percentage deviation of real GDP from its HP trend; and the quarterly rate of inflation \( \pi_t \), defined as the quarterly log difference of the CPI.Footnote 12

We should find a break in the VAR process reflecting the start of the Great Moderation. Accordingly we split the time series into two subsamples and estimate the VAR representation before and after the break; the baseline model is then evaluated against the VAR of each subsample separately. We set the break at 1982Q3. Most discussions of the Fed’s behaviour (especially those based on Taylor Rules) are concerned with periods that begin sometime around the mid-1980s, but we chose 1982 as the break point here because many (including Bernanke and Mihov 1998, and Clarida et al. 2000) have argued that it was around then that the Fed switched from using non-borrowed reserves to setting the Fed Funds rate as the instrument of monetary policy. Such a choice is consistent with the Qu and Perron (2007) test, which gives a 95 % confidence interval for the break between 1980Q1 and 1984Q4.Footnote 13

For simplicity, the data we use are demeaned so that a VAR(1) representation of them contains no constants but only nine autoregressive parameters in the coefficient matrix; a linear trend is also taken out of the interest rate series for the post-1982 sample to ensure stationarity.Footnote 14
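A sketch of this data construction is given below, assuming quarterly FRED series GDPC1 (real GDP), CPIAUCSL (CPI) and FEDFUNDS (the Fed funds rate) have already been downloaded into a DataFrame. These mnemonics and the details of the transformations (for instance, how the subsamples are demeaned separately) are our assumptions, not necessarily the authors’ exact choices.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def build_dataset(raw, break_date="1982-09-30"):
    """raw: quarterly DataFrame with columns GDPC1, CPIAUCSL, FEDFUNDS,
    indexed by quarter-end Timestamps. Returns demeaned (i, x, pi) series."""
    df = pd.DataFrame(index=raw.index)
    # Output gap: percentage deviation of log real GDP from its HP trend (lambda = 1600)
    cycle, _trend = sm.tsa.filters.hpfilter(np.log(raw["GDPC1"]), lamb=1600)
    df["x"] = 100 * cycle
    # Quarterly inflation: log difference of the CPI
    df["pi"] = 100 * np.log(raw["CPIAUCSL"]).diff()
    df["i"] = raw["FEDFUNDS"]
    df = df.dropna()
    # Linear trend removed from the post-break interest rate to ensure stationarity
    post = df.index > pd.Timestamp(break_date)
    t = np.arange(post.sum())
    slope, intercept = np.polyfit(t, df.loc[post, "i"].to_numpy(), 1)
    df.loc[post, "i"] = df.loc[post, "i"].to_numpy() - (slope * t + intercept)
    # Demean all series so the VAR(1) representation needs no constants
    return df - df.mean()
```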

The model is calibrated by choosing the parameters commonly accepted for the US economy in the literature. The error processes these imply for each structural equation are then backed out and estimated as explained above. As we will go on to re-estimate all these parameters in a second stage of evaluation, we comment further on them at that point. We now go on to review the performance of each model with these calibrated parameters; since these are widely used in other papers, this allows us to relate our findings more easily to existing work, as well as illustrating the essential elements of our methods.

5.1 Results With Calibrated Parameters

The test results for the models considered are presented in what follows; these are based on the nine autoregressive coefficients of a VAR(1) representation and three variances of the model variables, the chosen descriptors of the dynamics and volatility of the data as discussed above. Our evaluation is based on the Wald test, and we calculate two kinds of Wald statistic, namely, a ‘directed Wald’ that accounts either only for dynamics (the VAR coefficients) or only for the volatility (the variances) of the data, and a ‘full Wald’ where these features are jointly evaluated. In both cases we report the Wald statistic as a percentile, i.e. the percentage point where the data value comes in the bootstrap distribution. The models’ performance in each subsample follows.
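For concreteness, the twelve data descriptors can be assembled as follows. This is a sketch using statsmodels; the variable ordering and the estimation options are our assumptions rather than details taken from the paper.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def auxiliary_descriptors(data):
    """The twelve descriptors used in the tests: nine VAR(1) autoregressive
    coefficients plus the variances of the three variables.
    data: (T, 3) array, e.g. ordered (interest rate, output gap, inflation)."""
    res = VAR(data).fit(1, trend="n")        # no deterministic terms: the data are demeaned
    coeffs = res.coefs[0].flatten()          # the 3x3 coefficient matrix -> 9 numbers
    variances = data.var(axis=0)             # the 3 data variances
    return np.concatenate([coeffs, variances])

# e.g. descriptors = auxiliary_descriptors(df[["i", "x", "pi"]].to_numpy())
```

The same function would be applied to the observed data and to each bootstrap simulation, and the resulting vectors fed into the Wald computation sketched in Section 4.2.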

5.1.1 Model Performance in the Great Moderation

We start with the post-1982 period, the Great Moderation subsample, as this has been the main focus of econometric work to date. Table 3 summarises the performance of the different models. The Optimal Timeless Rule model passes the tests by a comfortable margin, both overall, with a Wald percentile of 77.1 (implying a p-value of 0.229), and specifically on the dynamics alone (a p-value of 0.136) and the volatilities alone (0.104). The conclusion is that the US facts do not reject the Timeless Rule model as the data-generating process post-1982.

Table 3 Wald statistics for calibrated models in the great moderation

This is not the case, however, when Taylor Rules of the standard sort are substituted for it. Table 3 shows that when the original Taylor Rule or the interest-rate-smoothed Taylor Rule is combined with the same IS-Phillips curve framework under these commonly accepted calibrations, the post-1982 data strongly reject the model at the 99 % level from all perspectives.

5.1.2 Model Performance in the Great Acceleration

We now proceed to evaluate how the models behave before 1982, the Great Acceleration period. Table 4 reveals the performance of the Optimal Timeless Rule model.

Table 4 Wald statistics for calibrated models in the great acceleration

We can see that, although the model explains the data dynamics less well here than it did in the Moderation subsample, with a directed Wald of 98.2, the directed Wald for the data volatilities, at 89.6, lies within the 90 % confidence bound. Overall, the full Wald percentile of 97.3 falls between the 95 % and the 99 % confidence bounds. So while the model fits the facts less well than in the case of the Great Moderation, it just about fits those of the turbulent Great Acceleration episode if we are willing to reject only at a higher threshold. As we will see next, it also fits them better than its rival Taylor Rule models.

Unfortunately we are unable to test the DSGE model with the generally proposed pre-1982 Taylor Rules because the solution is indeterminate, the model not satisfying the Taylor Principle. Such models have a sunspot solution and therefore any outcome is possible and also consistent formally with the theory. The assertion of those supporting such models is that the solutions, being sunspots, accounted for the volatility of inflation. Unfortunately there is no way of testing such an assertion. Since a sunspot can be anything, any solution for inflation that occurred implies such a sunspot—equally of course it might not be due to a sunspot, rather it could be due to some other unspecified model. There is no way of telling. To put the matter technically in terms of indirect inference testing using the bootstrap, we can solve the model for the sunspots that must have occurred to generate the outcomes; however, the sunspots that occurred cannot be meaningfully bootstrapped because by definition the sunspot variance is infinite. Values drawn from an infinite-variance distribution cannot give a valid estimate of the distribution, as they will represent it with a finite-variance distribution. To draw representative random values we would have to impose an infinite variance; by implication all possible outcomes would be embraced by the simulations of the model and hence the model cannot be falsified. Thus the pre-1982 Taylor Rule DSGE model proposed is not a testable theory of this period.Footnote 15

However, it is open to us to test the model with a pre-1982 Taylor Rule that gives a determinate solution; we do this by making the Taylor Rule as unresponsive to inflation as is consistent with determinacy, implying a long-run inflation response of just above unity. Such a rule shows considerably more monetary ‘weakness’ than the rule typically used for the post-1982 period, as calibrated here with a long-run response of interest rate to inflation of 1.5.

We implement this weak Taylor Rule across a spectrum of combinations of smoothing coefficient and short-run response to inflation, with in all cases the long-run coefficient equalling 1.001. The Wald test results are shown in Table 4. What we see here is that with a low smoothing coefficient the model encompasses the variance of the data well, in other words picking up the Great Acceleration. However, when it does so, the data dynamics reject the model very strongly. If one increases the smoothing coefficient, the model is rejected less strongly by the data dynamics and also overall but it is then increasingly at odds with the data variance. In all cases the model is rejected strongly overall by the data, though least badly with the highest smoothing coefficient. Thus the testable model that gets nearest to the position that the shift in US post-war behaviour was due to the shift in monetary regime (reflected in Taylor Rule coefficients) is rejected most conclusively.

5.2 Simulated Annealing and Model Tests With Final Parameter Selection

The above results based on calibration thus suggest that the Optimal Timeless Rule, when embedded in our IS-Phillips curves model, outperforms testable Taylor Rules of the standard sort in representing the Fed’s monetary behaviour since 1972. In both the Great Acceleration and the Great Moderation the only model version that fails to be strongly rejected is the one in which the optimal timeless policy was effectively operating.Footnote 16

However, fixing model parameters in such a way is an excessively strong assumption in terms of testing and comparing DSGE models. This is because the numerical values of a model’s parameters could in principle be calibrated anywhere within a range permitted by the model’s theoretical structure, so that a model rejected with one set of assumed parameters may not be rejected with another. Going back to what we have just tested, this could mean that the Taylor Rule models were rejected not because the policy specified was incorrect but because the calibrated IS and Phillips curves had failed to reflect the true structure of the economy. Thus, to compare the Timeless Rule model and Taylor Rule models thoroughly one cannot assume the models’ parameters are fixed always at particular values; rather one is compelled to search over the full range of potential values the models can take and test if these models, with the best set of parameters from their viewpoints, can be accepted by the data.

Accordingly we now allow the model parameters to be altered to achieve for each model the lowest Wald possible, subject to the theoretical ranges permitted by the model theory.Footnote 17 This estimation method is that of Indirect Inference; we use the Simulated Annealing (SA) algorithm for the parameter search, as discussed above in Section 4. In this process we allow each model to be estimated with different parameters for each episode. Thus we are permitting changes between the episodes in both structural parameters and the parameters of monetary policy; in so doing we are investigating whether either structural or policy rule changes were occurring and so contributing to the Great Moderation.Footnote 18

5.2.1 The Estimated Optimal Timeless Rule Model

The SA estimates of the Timeless Rule model in both post-war subperiods are reported in Table 5 (main parameters) and Table 6 (shock persistence). We can see that the estimated model is not very different from its calibrated version in the Great Moderation. For the Great Acceleration period, however, the estimation now suggests substantially lower elasticities of intertemporal consumption (the inverse of σ) and labour supply (the inverse of η), and a much higher Calvo contract non-adjusting probability (ω); the latter implies a lower γ and hence a much flatter Phillips curve. The estimation also suggests the Fed had a low relative weight on output variations (α) pre-1982 but that high nominal rigidity forced it to reduce inflation more strongly in response to output growth (due to a higher α/γ). The shocks’ persistence is not much altered in either period from that in the calibrated model.

Table 5 SA estimates of the competing models
Table 6 SA estimates of shock persistence

Table 7 shows that estimation brings the model substantially closer to the data. This is particularly so for the pre-1982 period, where the calibrated model was rejected at 95 % confidence; here, as we have just seen, substantial parameter changes were needed to get the model to fit. The full Wald percentile in both episodes is now around 70 %, so that the model easily fails to be rejected at 95 %.

Table 7 Performance of the Timeless Rule Model under Calibration and Estimation

5.2.2 Taylor Rule Model Under Estimation

In estimating the Taylor Rule model alternative we substitute the smoothed version (Table 1) for the Optimal Timeless Rule in the identical IS-Phillips curves framework. This specification covers all the Taylor Rule versions considered in the earlier evaluation: when ρ is zero it reduces to the original Taylor Rule, while when \( {\gamma}_{\pi } \) is just above unity it becomes the weak Taylor Rule variant.

As with the Optimal Timeless Rule model the estimation process achieves a substantial improvement in the closeness of the Taylor Rule model to the data in both episodes. Pre-1982 the best weak Taylor Rule version was strongly rejected; after estimation it is still rejected at the 95 % level but not at the 99 % level. Most importantly, the estimates include a much stronger Taylor Rule response to inflation than the calibrated version for this early episode; hence the evidence supports the view that the Taylor Rule principle was easily satisfied in this period. The response is essentially the same as that found in the later period by this estimation process: the weaker the response, the further the model is from fitting the data. Tables 5 and 6 show the details. The elasticity of intertemporal consumption and that of labour are found to be fairly similar to those estimated with the Optimal Timeless Rule, as is the Calvo rigidity parameter which is again higher in the first episode. For the model to get close to the data there needs to be interest rate smoothing in both episodes.

The resulting Wald statistics in Table 8 thus show that the Taylor Rule model is now close to passing at 95 % pre-1982 and passes comfortably post-1982. However relative to the Timeless Rule it is substantially further from the data, as summarized in Table 9 where the p-values are also reported. This suggests that, although it is possible to fit the post-1982 period with a Taylor Rule model, policy is better understood in terms of the Timeless Rule model.

Table 8 Performance of Taylor Rule model under calibration and estimation
Table 9 Summary of model performance with estimated parameters

5.2.3 The Identification Problem Revisited in the Light of our Results

Having established that the Optimal Timeless Rule model gives the best representation of the key features of the US post-war data, we can now ask whether this model can also account for the single-equation findings for the Taylor Rule.

The above suggests that the widespread success reported in single-equation Taylor Rule regressions on US data could simply reflect a statistical relation emerging from the model with the Optimal Timeless Rule. To examine this possibility, we treat the Optimal Timeless Rule model as the true model and ask whether the existence of empirical Taylor Rules would be consistent with it. Technically this is again a process of model evaluation based on indirect inference, but instead of a VAR the Taylor Rule regression coefficients are used as the data descriptors for the model to fit.

Table 10 shows the OLS estimates of several popular Taylor Rule variants when these are fitted, respectively, to the data for the two post-war episodes. In comparing the regression results here with those commonly found in the US Taylor Rule literature, where unfiltered interest rate data are normally used, we must emphasize that here a linear trend is taken out of the interest rate series for the post-1982 subsample so that stationarity is ensured. These Taylor Rules, when estimated on the stationary data we have used here, generally fail to satisfy the Taylor Principle, in much the same way as pre-1982. Thus, econometrically, the standard estimates of the long-run Taylor Rule response to inflation post-1982 are biased by the non-stationarity of the interest rate. There is little statistical difference in the estimates across the two periods. The reported Wald percentiles indicate that these empirical ‘Taylor Rules’ are indeed consistent with what the Timeless Rule implies: in both panels the estimated Taylor Rule regressions all lie within or on the 95 % confidence bounds implied by the estimated Timeless Rule model.

Table 10 ‘Taylor Rules’ in the Data (with OLS): consistency with the estimated Timeless Rule Model
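As a sketch of how such single-equation estimates are obtained, the following Python code fits a smoothed Taylor Rule by OLS on demeaned data and recovers the implied long-run inflation response. The data here are simulated purely for illustration; the function and variable names are ours.

```python
import numpy as np
import statsmodels.api as sm

def estimate_taylor_rule(i, pi, x):
    """OLS of i_t = rho*i_{t-1} + b_pi*pi_t + b_x*x_t + e_t on demeaned data (no constant);
    returns rho and the implied long-run inflation response b_pi / (1 - rho)."""
    y = i[1:]
    X = np.column_stack([i[:-1], pi[1:], x[1:]])
    res = sm.OLS(y, X).fit()
    rho, b_pi, b_x = res.params
    return rho, b_pi / (1.0 - rho)

# Toy illustration: simulate data from a smoothed Taylor Rule with rho = 0.7 and
# a long-run inflation response of 1.5, then recover both by OLS.
rng = np.random.default_rng(2)
T = 120
pi = rng.standard_normal(T)
x = rng.standard_normal(T)
i = np.zeros(T)
for t in range(1, T):
    i[t] = 0.7 * i[t - 1] + 0.3 * (1.5 * pi[t] + 0.5 * x[t]) + 0.1 * rng.standard_normal()
print(estimate_taylor_rule(i, pi, x))
```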

This illustrates the identification problem with which we began this paper: a Taylor Rule regression that fits the data well may be generated by a model in which there is no structural Taylor Rule at all. Here we suggest that the Timeless Rule model, which we have found comes closest to fitting the US data in each episode, is also generating these single-equation Taylor Rule relationships.

5.2.4 The ‘Interest Rate Smoothing’ Illusion: A Further Implication

Another issue on which the above sheds light is the phenomenon of ‘interest rate smoothing’. Clarida et al. (1999) noted that the Optimal Timeless Rule required the nominal interest rate to be adjusted in a once-and-for-all manner, but that empirical evidence from Taylor Rule regressions usually displayed clear interest rate smoothing. This, they argued, created a ‘puzzle’: sluggish interest rate movements could not be justified as optimal.

While various authors have tried to explain such a discrepancy either from an economic (e.g., Rotemberg and Woodford 1997, 1998; Woodford 1999, 2003a, b) or from an econometric (e.g., Sack and Wieland 2000; Rudebusch 2002) viewpoint, the Taylor Rule regressions above show that ‘smoothing’ is a regression result that is generated by the Optimal Timeless Rule model, in which there is no smoothing present. The source of inertia in the model is the persistence in the shocks themselves.

6 What Caused the Great Moderation?

We have found that the Optimal Timeless Rule is the best guide to US monetary policy since the breakdown of Bretton Woods; we have also obtained estimates of the model under a Taylor Rule which, though fitting the data considerably less well, nevertheless fails to be rejected in absolute terms. These models enable us finally to examine the causes of the Great Moderation. We have made a number of empirical findings about changes in the structural parameters, the parameters of the monetary rule trade-off, and the behaviour of the shocks. We now examine the contribution of each of these changes to the Great Moderation.
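The decomposition reported in the tables below can be thought of as a set of counterfactual simulations: hold the shocks at their early-period scale while switching in the later-period parameters, and vice versa. The sketch below shows the bookkeeping with a toy stand-in for the structural simulator; the parameter and shock values are placeholders, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(persistence, shock_sd, T=5000):
    """Toy stand-in for simulating (i, x, pi) from the estimated DSGE model."""
    y = np.zeros((T, 3))
    for t in range(1, T):
        y[t] = persistence * y[t - 1] + shock_sd * rng.standard_normal(3)
    return y

params = {"pre": 0.80, "post": 0.75}     # placeholder 'parameters' per episode
shocks = {"pre": 1.00, "post": 0.35}     # placeholder shock scales per episode

# Variances under each combination of episode parameters and episode shocks
for p_era, s_era in [("pre", "pre"), ("post", "pre"), ("pre", "post"), ("post", "post")]:
    v = simulate(params[p_era], shocks[s_era]).var(axis=0)
    print(f"params={p_era:4s} shocks={s_era:4s} -> variances {np.round(v, 2)}")
```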

Table 11 shows that under our preferred model with the Optimal Timeless Rule the Great Moderation is almost entirely the result of reduced volatility in the shocks. There is a small contribution to lowered inflation variance from the policy parameters; otherwise the contribution from both structural and policy parameters is slightly to increase macro variance in the later period. If one then examines which shocks’ volatility fell, Table 12 shows that it fell for all three of our shocks, with a drop in standard deviation of 60–70 %.

Table 11 Accountability of factor variations for reduced data volatility (Timeless Rule model)
Table 12 Reduced size of shocks (Timeless Rule model)

If we look at the Taylor Rule model the story is essentially the same. As we saw above the inflation response of the Taylor Rule hardly changes across the two periods. The main change is a doubling of the smoothing parameter which accordingly contributes about a third of the reduction in interest rate variance. Otherwise structural and policy parameter changes contribute negligibly to the variance reduction. Thus again the reduction in shock variability dominates as the cause of the Great Moderation. Here too all the shocks have large falls in standard deviation; the largest at 86 % is the monetary shock (Tables 13 and 14).

Table 13 Accountability of factor variations for reduced data volatility (II) (Taylor Rule model)
Table 14 Reduced size of shocks (II) (Taylor Rule model)

Thus what we find is that the Great Moderation is essentially a story of ‘good shocks’, as proposed in the time-series studies we cited earlier. We have also found no evidence of the weak monetary regime regarded by an earlier DSGE literature as responsible for the Great Acceleration, and in the same vein no evidence of much change in the monetary regime during the Great Moderation. However, what we do find about monetary policy is that the ‘trembling hand’ trembled enormously more in the earlier period than in the later; thus monetary error is a large source of the Great Acceleration and its reduction an important reason for the Moderation. For those who embrace a Taylor Rule model in spite of its poorer data fit, the story is the same: in this case monetary ‘judgement’ was substantially more erratic in its effect in the earlier period.

6.1 A Comparison With Other Recent DSGE Models

As we noted earlier, Ireland (2007), Smets and Wouters (2007), Le et al. (2011) and Fernandez-Villaverde et al. (2009, 2010) have also estimated models of these periods and we can compare their results in a general way with ours. Other than Ireland’s, these models follow Christiano et al. (2005). Smets and Wouters use this model with some small modifications and estimate it by Bayesian methods. Le et al. add a competitive sector and re-estimate the model using Indirect Inference, since they found that with the previous estimates it was rejected quite badly overall by the data. When re-estimated in this way the model was accepted, at 99 % for the full post-war period and at 95 % for the Great Moderation period, for the key subset of variables (output, inflation and the interest rate) represented by a VAR(1). Fernandez-Villaverde, Guerron-Quintana and Rubio-Ramirez add moving volatility in the errors and drift in the parameters of the Taylor Rule; like Smets and Wouters they estimate the model by Bayesian methods.

What is striking about all these studies is that none of them find evidence of much difference in monetary regime between the two periods—interestingly, Fernandez-Villaverde, Guerron-Quintana and Rubio-Ramirez find variations of ‘monetary toughness’ within both periods, while not finding much difference on average across the two. Both Smets and Wouters and Le et al. in their reworking of them find little change in the inflation response coefficient of the Taylor Rule. In this these models echo Ireland, even though their Taylor Rule representations differ from his. Thus these studies agree with ours in finding that it is the shocks that account for the difference in volatility.

Nevertheless, all also agree with us that the scale of the monetary shock has declined between the two periods. Thus a pattern is visible in ours, Ireland’s and these other studies: while the monetary regime did not apparently change much, the scale of the monetary ‘error’ fell between the two periods. Ireland interprets this, based on his connection of it with other shocks, as ‘opportunism’, where the Fed was allowing the inflation target to drift with events, pushing it downwards when events allowed this to be done with less perceived cost. Other studies, like ours, do not model it other than as a pure error.

An important implication of the lack of regime change is that, according to all of these studies including ours, there is no evidence of indeterminacy in the earlier period. None of these studies, which are based on full-information system estimates, finds the evidence that appears to come out of single-equation studies that the earlier period’s Taylor Rule responded weakly to inflation. As we have seen, this is consistent with the lack of identification of the Taylor Rule as a single equation; indeed the models that fit the data overall could easily have ‘generated’ single-equation Taylor Rules of this ‘weak’ type.

7 Conclusion

In this study we have used the method of indirect inference to estimate and test a three-equation DSGE model against the data for the Great Acceleration and the Great Moderation. The method has the advantage over alternatives that it tests the model overall in its ability to fit the data’s behaviour. Nevertheless, in spite of differences in method, our results echo those of other recent work where DSGE models of greater complexity than ours have been estimated by a variety of methods. We have found that the monetary regimes being followed in the two periods are rather similar. We have also found that, while these regimes can be represented by Taylor Rules of the usual sort, they more closely fit the facts if represented by an Optimal Timeless Rule, essentially the same as the Taylor Rule form suggested by Ireland, which he also finds fits the facts best.

A corollary of this finding is that there is no evidence of indeterminacy due to the ‘weakness’ of the monetary regime during the Great Acceleration. Previous findings to this effect seem to have arisen from single-equation estimates that suffered from a lack of identification and are quite consistent with the DSGE models estimated here.

By implication we also find, in common with these other studies using full DSGE models, that the Great Moderation was mainly the result of ‘good shocks’—a fall in the variance of the errors in the model. This reinforces the results of a large number of time-series studies using Structural VARs, but it does so through finding structural DSGE parameters that can replicate these VARs and so allows them to be interpreted structurally.

Nevertheless, the falling variance of shocks includes that of monetary shocks. Within this fall lies the remedying of a failure of monetary policy. Whether this failure was due to an ‘opportunistic’ pursuit of varying inflation targets as in Ireland (2007), to sheer inefficiency, or to some other reason, our work cannot say; this remains a fruitful avenue for future work. Clearly and perhaps not surprisingly given the size and novelty of the shocks bombarding the 1970s economy, monetary policy was far from perfect in this early period. But at least we and other recent DSGE modellers are clear that it was not just plain stupid.