A Phillips curve is an equation that relates the unemployment rate, or some other measure of aggregate economic activity, to a measure of the inflation rate. This equation continues to prompt a great deal of research in macroeconomics, as it has for most of the years since the influential Phillips (1958) and Samuelson and Solow (1960) articles. The early work documents a negative relationship between the unemployment rate and either nominal wage growth or inflation; equations relating the unemployment rate to inflation were the first to be called Phillips curves. Samuelson and Solow (1960) were bold enough to posit a stable and exploitable structural relationship between unemployment and inflation. The viability of a policy of using inflation to combat unemployment was debunked theoretically in Friedman’s (1968) classic presidential address and empirically in the subsequent decade.

The rise of inflation over the 1970s came along with a breakdown in the inflation–unemployment relationship and gave birth to the ‘expectations-augmented’ Phillips curve. This formulation allows the relationship between unemployment and inflation to shift with changes in inflation expectations. Figure 1 shows how such a formulation can be used to fit the data: it plots unemployment against NIPA personal consumption deflator inflation for different sub-periods over the years 1948–2004, along with regression lines. Table 1 reports the regression coefficients, the R² for each regression, and the means of inflation and unemployment. For the whole sample there is a significant positive relationship. However, there is always a sequence of consecutive dates over which the regression slope is negative. The slope coefficient is also highly significant in all cases but one. The shifts in the regression line correspond to changes in the mean inflation and unemployment rates. Another way to verify that there is a strong association between inflation and unemployment is to focus on business cycle frequencies, shown in the bottom right-hand corner of Fig. 1 and the second row of Table 1. Clearly, inflation and unemployment are highly correlated at business cycle frequencies.

Phillips Curve (New Views), Fig. 1 The US Phillips curve, 1948:1–2004:4 (Sources: authors’ calculations; unemployment rate: US Bureau of Labor Statistics; personal consumption expenditure deflator: US Department of Commerce)

Phillips Curve (New Views), Table 1 The US Phillips curve, 1948:1–2004:4

Since there is a significant correlation between inflation and unemployment over some horizons, understanding this correlation should yield insight into the impulses the economy faces and the mechanisms that propagate their effects. Since the 1990s, research has focused on making progress in three main areas: forecasting, microeconomic foundations and empirical tests of the microfoundations. This article reviews the recent research in each of these areas.

Forecasting with the Phillips Curve

Inflation forecasting models rely heavily on the Phillips curve. For many years, even as the traditional Phillips curve relationship evaporated, variables such as the unemployment rate continued to be very useful predictors of future inflation. Stock and Watson (1999) argued they could do better. They proposed using principal components of large numbers of data series to aid in forecasting macroeconomic variables. The idea is that this approach exploits the information in a large number of variables, which is infeasible with traditional regression-based forecasting. One of their most interesting findings involves the first principal component of roughly 80 macroeconomic variables, including measures of production and income; employment, unemployment and hours; personal consumption and housing; and sales, orders and inventories. They argued that this ‘activity index’ variable is more useful than even unemployment for predicting inflation. Such a finding strongly suggests a connection between current activity and future inflation, which is essentially the Phillips curve relationship.
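
To make the idea concrete, the following is a minimal sketch in the spirit of the Stock–Watson approach, not their actual implementation: the data panel, factor structure and coefficients are all simulated stand-ins.

```python
# Sketch: extract the first principal component of a large macro panel and
# use it, together with lagged inflation, in an h-step-ahead forecasting
# regression. All data below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(1)
T, N, h = 240, 80, 12                  # periods, series, forecast horizon
factor = 0.1 * rng.standard_normal(T).cumsum()          # common activity factor
panel = factor[:, None] + rng.standard_normal((T, N))   # toy data panel
infl = 2.0 + 0.5 * factor + 0.2 * rng.standard_normal(T)

# First principal component of the standardized panel: the 'activity index'
Xs = (panel - panel.mean(0)) / panel.std(0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
activity = Xs @ Vt[0]                  # scores on the first component

# Forecasting regression: inflation h steps ahead on the index and own lag
y = infl[h:]
X = np.column_stack([np.ones(T - h), activity[:-h], infl[:-h]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
forecast = np.array([1.0, activity[-1], infl[-1]]) @ coef
print(f"{h}-step-ahead inflation forecast: {forecast:.2f}")
```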

Atkeson and Ohanian (2001) argued that the success of the Phillips curve in forecasting is just as illusory as a stable Phillips curve. They argued that, for forecasting one year ahead, a simple random walk suffices – the best predictor of one-year-ahead inflation is current inflation. Atkeson and Ohanian’s (2001) finding has proven to be remarkably robust (see Brave and Fisher 2004; Fisher et al. 2002). However, the random walk result does depend on the sample period considered by Atkeson and Ohanian (2001), which is 1984–99. Beginning the sample in 1984 is justified by evidence of a major structural change around that time (see Fisher 2006). As the sample is extended, however, the random walk loses some of its lustre. The poor performance of Phillips curve-based forecasting models is mainly confined to the period 1984–93; since the mid-1990s traditional variables such as unemployment have again been useful forecasters. These findings are easily explained by noting that inflation was generally falling from 1984 to the mid-1990s as the economy adjusted to the Federal Reserve’s stronger willingness to fight inflation. It is natural for old models to fail after a major structural change. Moreover, in an environment where output is growing strongly while inflation is falling, it is not surprising that the random walk model does well between 1984 and 1993.
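
The structure of the comparison can be sketched as a pseudo-out-of-sample horse race between the random-walk forecast and a Phillips-curve regression. The series, sample split and coefficients below are simulated stand-ins, and the exercise only mimics the shape of the Atkeson–Ohanian test, not their actual data or specification.

```python
# Sketch: recursive out-of-sample comparison of (i) a random-walk forecast
# (inflation h quarters ahead = inflation today) and (ii) a Phillips-curve
# regression on current unemployment and inflation. Toy data throughout.
import numpy as np

rng = np.random.default_rng(2)
T, h, start = 160, 4, 80               # quarters, 1-year horizon, cutoff
u = 6 + 0.1 * np.cumsum(rng.standard_normal(T))           # toy unemployment
infl = 3 - 0.3 * (u - 6) + 0.5 * rng.standard_normal(T)   # toy inflation

rw_err, pc_err = [], []
for t in range(start, T - h):
    rw_err.append(infl[t + h] - infl[t])                  # random-walk error
    # Phillips-curve forecast: regress inflation at s+h on data dated s
    X = np.column_stack([np.ones(t - h + 1), u[:t - h + 1], infl[:t - h + 1]])
    b, *_ = np.linalg.lstsq(X, infl[h:t + 1], rcond=None)
    pc_err.append(infl[t + h] - np.array([1.0, u[t], infl[t]]) @ b)

rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
print(f"random walk RMSE: {rmse(rw_err):.3f}  "
      f"Phillips curve RMSE: {rmse(pc_err):.3f}")
```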

Microfoundations of the Phillips Curve

Since Lucas (1972), economists have known how to formulate models in which inflation and activity are correlated but there is no policy-exploitable Phillips curve. The focus of much of the recent literature has been on the Calvo–Yun Phillips curve, named after Calvo (1983) and Yun (1996), which arises from one particular model. Most of the literature uses the hopelessly ambiguous term ‘New Keynesian’ to describe this model of the Phillips curve.

Phillips curves arise naturally in models where firms set prices and at least some of those prices do not respond to every shock to the economy. Calvo’s contribution is a very simple model of sticky prices. He assumed monopolistically competitive firms can re-optimize their price with a fixed probability, 1 − θ, each period, so that firms re-optimize prices on average every 1/(1 − θ) periods (usually quarters of a year). This formulation can be taken literally, in which case prices are fixed until the next opportunity to re-optimize. Alternatively, firms might follow simple pricing rules at high frequencies and occasionally adjust these rules over time. Under this interpretation firms can index their prices to inflation.
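
The timing logic is easy to verify by simulation; the following sketch is purely illustrative, with θ set to an arbitrary value.

```python
# Under Calvo pricing a firm re-optimizes with probability 1 - theta each
# quarter, so spells between re-optimizations are geometrically distributed
# with mean 1/(1 - theta). Simulate to confirm.
import numpy as np

rng = np.random.default_rng(3)
theta = 0.75                            # probability of NOT re-optimizing
spells = rng.geometric(1 - theta, size=100_000)
print(f"mean spell: {spells.mean():.2f} quarters "
      f"(theory: {1/(1-theta):.2f})")
```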

Yun derived a Phillips curve by introducing the Calvo model of price adjustment into an otherwise standard monetary model with monopolistic competition and constant markups. The result is the Calvo–Yun Phillips curve:

$$ {\widehat{\pi}}_t=\beta {E}_t{\widehat{\pi}}_{t+1}+\frac{\left(1-\beta \theta \right)\left(1-\theta \right)}{\theta } AD{\widehat{s}}_t. $$
(1)

The variable \( {\widehat{\pi}}_t \) is the deviation of the log of the gross inflation rate from its steady state value, \( {\widehat{s}}_t \) is the log deviation of real marginal cost for the representative firm, and β is the time discount factor of the representative household. The A and D terms are equal to unity in the Yun paper. This equation is derived from the log-linearized necessary conditions of the equilibrium. To linearize around a steady state with positive inflation, firms must index their prices to inflation.
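
For a sense of magnitudes, take illustrative quarterly values β = 0.99 and θ = 0.75, so that prices are re-optimized on average once a year. With A = D = 1, the slope on marginal cost in (1) is

$$ \frac{\left(1-\beta \theta \right)\left(1-\theta \right)}{\theta }=\frac{\left(1-0.99\times 0.75\right)\left(1-0.75\right)}{0.75}\approx 0.086, $$

so a one per cent deviation of real marginal cost from its steady state moves quarterly inflation by roughly 0.09 percentage points, holding expected inflation fixed.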

Eichenbaum and Fisher (2007) describe how Kimball’s (1995) extension to variable markups implies that 0 < A ≤ 1, where A depends on the shape of the firm’s demand curve and equals unity in the constant markup case. Eichenbaum and Fisher also study Woodford’s (2003, 2005) model of capital adjustment and describe how this yields 0 < D ≤ 1, where D depends on the firm’s supply curve. Generally, marginal cost is increasing in output. Since β and θ also lie between zero and unity, the coefficient on marginal cost is positive, and (1) is an equilibrium relationship in which output and inflation are positively related. In most of the literature assumptions are such that A = D = 1. This literature generally predicts reasonably large effects of monetary shocks if firms adjust their prices once a year.

The Calvo model is called a time-dependent model because the opportunity to change prices depends only on the passage of time. Taylor’s (1980) model, in which firms take turns changing their prices on a staggered schedule, is also a time-dependent model. The main alternative is state-dependent models, where changing the price is a choice of the firm which depends both on firm-level variables, such as productivity, and on aggregate variables, like the interest rate. The dominant state-dependent model involves menu costs. Studying state-dependent models is more difficult than studying time-dependent models because the distribution of prices is endogenous.

Five papers make major progress toward understanding menu cost models. Dotsey et al. (1999) study a model with random menu costs and Taylor-style staggering. A key advantage of their model is that it can be linearized like a simple real business cycle model. Klenow and Kryvtsov (2005) calibrate this model to US consumer price index (CPI) micro data for the years 1988–2003. They find that matching the micro data yields a model which behaves very much like the Calvo–Yun model. Golosov and Lucas (2003) study a menu cost model with a constant menu cost but where firms face exogenous technology and/or preference shocks. Under the assumption that the shocks are Gaussian, Golosov and Lucas find that firms choose to adjust their prices a lot when there is a monetary shock, and this makes prices flexible enough that monetary shocks have small effects. Midrigan (2005) uses scanner data to estimate the distribution of technology or preference shocks in the Golosov–Lucas model. He estimates this distribution to be non-Gaussian with fat tails. With the estimated distribution, monetary shocks have effects similar to those in models with a Calvo–Yun Phillips curve. Gertler and Leahy (2005) develop an analytically tractable state-dependent model which also behaves like the Calvo–Yun model.

Another key area of research involves building fully specified dynamic general equilibrium models with Phillips curves which fit the data well. This work has focused on the Calvo–Yun Phillips curve, rather than more deeply motivated models, because of its simplicity. The key contribution is Christiano et al. (2005). Their model also includes portfolio rigidities, adjustment costs in capital, and a Calvo-style version of nominal wage setting. They find their model does a good job of matching the evidence on how the economy responds to a monetary shock with only a small amount of price stickiness, but wages must be more rigid. A growing body of research reaches the same basic conclusion: the wage–activity relationship is more important for understanding macroeconomic dynamics than the traditional Phillips curve (cf. Galí et al. 2007).

Empirical Evaluation of the Microfoundations

Equation (1) is to the empirical macro literature as the Lucas (1978) asset pricing relationship is to empirical finance. In recent years it has come under considerable empirical scrutiny. Galí and Gertler (1999) were the first to use Hansen’s (1982) generalized method of moments (GMM) to estimate θ and test (1). They measured marginal cost using labour’s share of GDP, a measure that is valid if firms use a Cobb–Douglas production technology. Gagnon and Khan (2005) consider other production structures, in which marginal cost is not measured by labour’s share, and conclude that the Galí and Gertler (1999) findings hold up. Galí and Gertler’s estimate of θ implies more than a year between price changes, but they cannot reject the equation. Micro price data might be useful for identifying θ, but, under the frequency-of-re-optimization interpretation of the Calvo model, estimates of θ implying more than a year between price changes seem too high for the United States (cf. Blinder et al. 1998). Galí and Gertler consider an alternative model with ‘rule-of-thumb’ firms who use lagged inflation to update their prices when they have the opportunity to re-optimize. This model is motivated by the fact that lagged inflation enters significantly in empirical versions of (1). The model with rule-of-thumb firms is not rejected, and the estimates of θ imply prices are re-optimized every two or three quarters. The latter estimates are within the range of plausibility. Galí and Gertler also estimate the fraction of rule-of-thumb firms to be small and emphasize that (1) holds approximately. Bakhshi et al. (2005) argue that the Galí–Gertler ‘hybrid’ model is a good approximation to Dotsey et al.’s (1999) menu cost model.
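
A minimal sketch of the GMM logic follows, assuming A = D = 1 and using simulated stand-in data rather than Galí and Gertler’s actual series, instrument set or weighting scheme; the helper functions are hypothetical.

```python
# Sketch: GMM estimation of the Calvo-Yun Phillips curve, eq. (1).
# Moment condition, with A = D = 1 and instruments z dated t-1:
#   E[(pi_t - beta*pi_{t+1} - lam*s_t) z_{t-1}] = 0,
#   lam = (1 - beta*theta)(1 - theta)/theta,
# where s_t proxies real marginal cost (e.g. labour's share, in log deviation).
import numpy as np
from scipy.optimize import minimize

def slope(beta, theta):
    return (1 - beta * theta) * (1 - theta) / theta

def gmm_objective(params, pi, s, W):
    beta, theta = params
    # Euler residual for t = 1,...,T-2; realized pi_{t+1} replaces E_t pi_{t+1}
    u = pi[1:-1] - beta * pi[2:] - slope(beta, theta) * s[1:-1]
    Z = np.column_stack([np.ones(len(u)), pi[:-2], s[:-2]])  # lagged instruments
    g = Z.T @ u / len(u)               # sample moment vector
    return g @ W @ g                   # GMM quadratic form

# Toy data: inflation loosely driven by marginal cost
rng = np.random.default_rng(0)
T = 200
s = rng.standard_normal(T)
pi = 0.1 * s + 0.05 * rng.standard_normal(T)

W = np.eye(3)                          # identity weighting (first stage)
res = minimize(gmm_objective, x0=[0.99, 0.75], args=(pi, s, W),
               bounds=[(0.9, 0.9999), (0.01, 0.99)])
beta_hat, theta_hat = res.x
print(f"beta = {beta_hat:.3f}, theta = {theta_hat:.3f}, "
      f"implied quarters between re-optimizations = {1/(1-theta_hat):.1f}")
```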

It is clear that θ is not identified separately from A and D in (1). However, A and D can be identified with auxiliary information. Sbordone (2002) identifies D by assuming the stock of capital is fixed exogenously at each firm for all time. Under the usual assumption of a Cobb–Douglas production function, auxiliary information on the share of labour income in GDP can then be used to identify D. Sbordone considers the forward-looking solution to (1), as well as the solution to a similar equation for the labour market. The expected present-value calculations needed for this estimation are carried out with a vector autoregression. The empirical strategy is analogous to Abel and Blanchard’s (1986) approach to estimating investment adjustment costs. Sbordone estimates that prices are re-optimized every one to two quarters. Galí et al. (2001) apply Sbordone’s fixed capital assumption to their rule-of-thumb model; they estimate the frequency of re-optimization to be a little higher than in Galí and Gertler (1999), and continue to estimate small but significantly positive fractions of rule-of-thumb firms.
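
The present-value step can be sketched as follows; the VAR companion matrix and state vector here are made up for illustration and are not Sbordone’s estimates.

```python
# Sketch of the present-value calculation behind the VAR approach. Solving
# (1) forward (with A = D = 1) gives
#   pi_t = lam * E_t sum_j beta^j s_{t+j},  lam = (1-beta*theta)(1-theta)/theta.
# If the data follow a VAR with companion matrix F, X_{t+1} = F X_t + e_{t+1},
# and s_t = e_s' X_t, the expected present value is e_s' (I - beta F)^{-1} X_t.
import numpy as np

beta, theta = 0.99, 0.75
lam = (1 - beta * theta) * (1 - theta) / theta

# Hypothetical 2-variable VAR(1) in (marginal cost, inflation); F is made up.
F = np.array([[0.9, 0.1],
              [0.2, 0.5]])
e_s = np.array([1.0, 0.0])            # selects marginal cost from the state
X_t = np.array([0.02, 0.01])          # current state (log deviations)

pv = e_s @ np.linalg.inv(np.eye(2) - beta * F) @ X_t
print(f"model-implied inflation deviation: {lam * pv:.4f}")
```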

Eichenbaum and Fisher (2007) explore Woodford’s (2003, 2005) dynamic version of Sbordone’s (2002) model in an environment which also includes Kimball’s (1995) variable markup. As in Galí and Gertler (1999) and Galí et al. (2001), they adopt a GMM estimation and testing strategy. To improve the power of their tests, Eichenbaum and Fisher impose the restrictions that (1) places on the moving average structure of the Euler equation errors and reduce the number of instruments compared with the previous papers. They easily reject (1) assuming the Euler error is an MA(0). This motivates them to include the auxiliary assumption that firms make decisions based on lagged information. With one such implementation lag, this yields an MA(1) structure which is not rejected, and which implies a re-optimization frequency of about two years if A = D = 1. With empirically motivated values for the curvature of the demand curve and the size of capital adjustment costs, re-optimization every two quarters cannot be ruled out at conventional significance levels. Eichenbaum and Fisher also include dynamic indexation (prices indexed to the most recent inflation rate) for the prices of firms that do not re-optimize in a given period. This is an alternative to rule-of-thumb firms as a way of introducing a lagged inflation term into (1). Eichenbaum and Fisher find they cannot reject the possibility that there are no rule-of-thumb firms under dynamic indexation.

Recently much work has been done to document price setting at the micro level. Blinder et al. (1998) survey actual price setters and find that, among firms reporting regular price reviews, annual reviews are by far the most common. Other key contributions include Bils and Klenow (2004), Klenow and Kryvtsov (2005) and work with European data, for example by Stahl (2005). Much of this literature emphasizes the frequency of price changes. For example, Blinder et al. (1998) report that the median time between price changes among the firms they survey is roughly three quarters. Comparing the Calvo–Yun Phillips curve with these findings is delicate. With price indexation the model implies that prices change too frequently relative to the micro data, because all prices are changing all the time. Also, just because firms are changing prices does not mean that they have re-optimized those prices: a subset of the recorded price changes could reflect various forms of time-dependent pricing rules.

Integrating over all the micro evidence, for a low-inflation economy like the United States, versions of the Calvo–Yun Phillips curve with an implementation lag, dynamic indexation, capital adjustment costs and time-varying markups can be reconciled with the macro data without requiring implausible degrees of rigidity in price-setting behaviour at the micro level. Of course this model is not literally ‘true’. For instance, the model has the implausible implication that any CPI observation for which \( P_{i,t}/P_{i,t-1} \) is not equal to \( \pi_{t-1} \) involves re-optimization. Developing tractable models that are fully consistent with the salient macro facts and with the emerging literature on the behaviour of individual goods prices is a key challenge going forward.

See Also