1 Introduction

The quality of analysts’ earnings forecasts has been a frequent topic in financial and accounting research. However, the assessment of analysts’ revenue forecasts has received substantially less attention. Revenues are the second most common forecast type after earnings and are therefore significant for investors who use analyst forecast data. The purpose of our study is to evaluate the accuracy of analysts’ revenue forecasts. We find that accuracy is determined by forecast and analyst characteristics, namely forecast horizon, days elapsed since the last forecast, analysts’ forecasting experience, forecast frequency, forecast portfolio, reputation, earnings forecast issuance, forecast boldness, and analysts’ prior performance in forecasting revenues and earnings.

Understanding revenue forecasts is important in the capital market. They are a key variable in investors’ fundamental analyses because they serve as an important driver of firm value and signal a firm’s prospects (Keung 2010). Revenue forecasts are also important for analyses of earnings forecasts. They enhance the understanding of whether changes in earnings forecasts are due to predicted changes in expenses or in revenues, with the latter implying more sustainable changes (Keung 2010). An in-depth understanding of revenue forecasts and their accuracy determinants enables investors as well as academic researchers to distinguish ex ante which revenue forecasts are more accurate.

We conduct our empirical analyses on a data sample of analysts’ revenue forecasts issued from 1998 to 2014. We find that revenue forecast accuracy is determined by both forecast and analyst characteristics. All else equal, revenue forecasts are more accurate when issued closer to the revenue announcement and shortly after other revenue forecasts for the same firm. Their accuracy is greater when they are issued by analysts with more general revenue forecasting experience, higher forecast frequency, and fewer industries in the revenue forecast portfolio. Revenue forecasts are also more accurate when issued by All-Star analysts and by analysts who also forecast the corresponding earnings for the same firm-year. Our results provide evidence that revenue forecast accuracy is determined by factors similar to those for earnings forecast accuracy.

We also find that analysts’ prior performance in forecasting revenues and earnings can explain revenue forecast accuracy. Based on our findings for the analyst and forecast characteristics that determine revenue forecast accuracy, we develop a model that predicts the usefulness of revenue forecasts. This model can be used to identify relatively accurate forecasts ex ante. Additionally, we find that revenue forecasts are more accurate when they are consistent with the respective earnings forecasts (i.e., when the analyst expects an increase or a decrease in both revenues and earnings). We find that bold revenue forecasts are more accurate than herding ones because analysts who issue bold forecasts have superior information. We also find that revenue forecasts are less accurate when the analyst also supplies cash flow forecasts. This finding suggests that analysts with weak revenue forecasting abilities consider it necessary to issue other forecast items. Furthermore, our results reveal that analysts concern themselves with their revenue forecasting performance: analysts with poor performance are more likely to stop forecasting revenues than better-performing analysts. This reaction is reasonable because we also find that revenue forecast accuracy affects analysts’ career prospects in terms of promotion or termination.

We make three contributions. First, we contribute to the academic literature on analysts’ forecasts. The literature contains attempts over decades to penetrate the “black box” analyst and understand analysts’ forecasts and incentives (Bradshaw 2011). Analysts’ earnings forecasts have been widely examined and are compared, for example, to time-series or price-based forecasts (Elgers et al. 2016). In contrast, revenue forecasts have received less attention, and several of their characteristics remain unexplored. However, revenues are one of the most important kinds of forecast, and their provision has increased strongly over the past two decades. In 1998, 13.92% (572/4110) of the analysts in the I/B/E/S database supplied at least one revenue forecast, and there was at least one published revenue forecast for 12.65% (758/5994) of the firms. In 2014, 85.00% (4969/5846) of the analysts supplied at least one revenue forecast, and there was at least one published revenue forecast for 95.18% (4637/4872) of the firms. Consequently, revenue forecasts are the second most common kind of forecast issued by financial analysts in the I/B/E/S database.

Second, we help elucidate revenue forecasts and their accuracy, which also supports investors’ evaluation of earnings forecasts, for example, for investors who search for accurate earnings forecasts for firm valuation. Every financial statement forecast starts with forecasting revenues (Curtis et al. 2014). Usually, earnings forecasts are derived as revenue forecasts minus expense forecasts and therefore depend strongly on the respective revenue forecasts. Keung (2010) finds that earnings forecasts are more accurate if they are supplemented with the respective revenue forecasts. However, we argue that investors can improve the knowledge they gain from revenue forecasts if they consider not only their existence but also their values. Revenue forecasts help in evaluating the reliability of analysts’ earnings forecasts because they disclose whether expected earnings developments are derived from anticipated changes in revenues or costs. For example, an expected earnings increase can be attributed to an expected increase in revenues or a decrease in costs, with an increase in revenues being the more persistent development (Keung 2010).

Third, by documenting determinants of revenue forecast accuracy that can be observed ex ante, we help investors and researchers identify accurate revenue forecasts. We introduce a prediction model that can be used to evaluate the usefulness of revenue forecasts. Since all required factors can already be observed at the beginning of the fiscal year, our prediction model can be used to identify accurate forecasts ex ante. Accurate revenue forecasts are highly important to investors because they contain information on revenue growth and marketing plans that cannot be gleaned from earnings forecasts (Penman 2013). They also provide data regarding expected volume and profit margin. Ertimur et al. (2011) argue that volume and profit margin should be important to I/B/E/S clients who use analyst forecast information because they are first-order drivers of a firm’s economic performance. Our study might also be of interest to brokerage houses that strive for efficient hiring decisions and an efficient allocation of resources among analysts. Brokerage houses need to identify high-ability analysts because their forecasts are more accurate and more valuable (Clement et al. 2007) and because analysts’ superior or inferior performance tends to persist over time (Hsu and Chiao 2011).

This study proceeds as follows. Section 2 discusses prior literature and develops our hypotheses. Section 3 describes the research design and the data sample. We present our primary results in Sect. 4 and the results from additional analyses in Sect. 5. Section 6 concludes.

2 Background and hypothesis development

Academic literature on analysts’ forecasts mostly concentrates on earnings forecasts. However, analysts also supply other forecast data that investors use as signals of a firm’s expected development. Information on a firm’s revenues is particularly important for investors because it enhances the understanding of value-determining factors. Revenue forecasts are a key variable in investors’ fundamental analyses. Ertimur et al. (2011) and Keung (2010) observe that the financial market reacts more strongly to earnings forecasts that are accompanied by revenue forecasts, compared to stand-alone earnings forecasts. This finding provides evidence that investors observe and react to analysts’ revenue forecasts. Thus, understanding their accuracy should be of interest to investors and academic researchers.

Obviously, earnings and revenue forecasts are closely connected because earnings are usually derived as revenues minus expenses. Ertimur et al. (2011) argue that all analysts privately produce revenue forecasts, but the decision to publish these forecasts depends on analysts’ reputational incentives. Analysts with low reputation might publish revenue forecasts to establish reputation and generate trading volume. By presenting revenue forecasts, they demonstrate their understanding and draw more attention from investors. Issuing revenue forecasts is a relatively easy instrument for analysts to gain notice and exposure, while other reputation-building instruments, such as experience and prior forecasting performance, require a longer time series of forecasting (Ertimur et al. 2011). By contrast, high-reputation analysts might have less to gain and more to lose from issuing forecasts. Therefore, low-reputation analysts have greater incentives to publish revenue forecasts than high-reputation analysts (Ertimur et al. 2011; Marks 2008). We abstract from analysts’ decision to publish revenue forecasts because we only investigate analysts who publish revenue forecasts.

Keung (2010) finds that earnings forecasts that are supplemented with revenue forecasts are more accurate than stand-alone earnings forecasts. This finding can be attributed to analysts’ self-selection: analysts with high forecasting ability have stronger incentives to publish revenue forecasts than analysts with low forecasting ability because only high-ability analysts want to receive higher market attention (Bilinski 2014). Keung’s (2010) finding offers investors a ready way to identify accurate earnings forecasts by observing revenue forecasts. However, we expect that the results from monitoring revenue forecasts might be improved if their quality and accuracy determinants are also considered.

The stock market reaction to revenue forecasts underlines their importance. The capital market reacts more strongly if earnings forecasts are accompanied by revenue forecasts because analysts who additionally publish revenues are regarded as more credible and competent (Keung 2010; Ertimur et al. 2011). Additionally, investors seem to distinguish earnings components because the market reacts more strongly to revenue surprises than to expense surprises (Ertimur et al. 2003). Revenue forecasts allow investors a timely judgment of whether, for example, expected earnings increases arise from revenue growth or expense reductions. Investors rely strongly on revenue increases because they are more persistent than expense reductions (Bilinski 2014). Furthermore, Rees and Sivaramakrishnan (2007) find that firms’ market premium for meeting or beating earnings forecasts increases [decreases] when the revenue forecast is also met [missed]. We conclude that investors pay attention not only to earnings forecasts but also to revenue forecasts. Edmonds et al. (2013) find that analysts’ revenue forecasts are also important to firms’ management because CEOs who miss analysts’ revenue forecasts receive smaller bonuses.

Bilinski (2014) argues that analysts are more likely to publish revenue forecasts for firms with low-quality financial reporting. For these firms, investors demand revenue forecasts because revenues are less affected by low-quality financial reporting than earnings are. Investors’ demand for revenue forecasts also motivates firms’ management to provide revenue forecasts. Management especially uses supplementary revenue forecasts to support good results (Hutton et al. 2003). Investors judge management earnings forecasts as more credible if they are accompanied by revenues (Hirst et al. 2007). This assessment might be driven by investors’ belief that revenues are harder to manipulate than earnings. Mest and Plummer (2003) find that analysts’ optimistic bias is less pronounced for revenue forecasts than for earnings forecasts. Baginski et al. (2004) document that management earnings forecasts cause a stronger market reaction when they are supplemented with revenues.

Our work is related to that of Pae and Yoon (2012), who examine determinants of cash flow forecast accuracy. Pae and Yoon (2012) find that cash flow forecast accuracy is determined by cash flow forecasting experience, cash flow forecast frequency, number of followed companies, forecast horizon, and past cash flow forecasting performance. Most of their findings are in line with our findings on revenue forecasts. However, we argue that understanding revenue forecasts is even more important and helpful to investors than understanding cash flow forecasts. Cash flow forecasts are usually produced after the respective earnings forecast. Givoly et al. (2009) argue that cash flow forecasts are naïve extensions of earnings forecasts, simply derived by adding depreciation and amortization expenses to earnings forecasts. In contrast, revenue forecasts are the first step in forecasting and are therefore not simple extensions of any other kind of forecast; instead, they serve as the foundation of analysts’ forecasts.

The literature has identified numerous determinants of earnings forecast accuracy. Empirical studies show that analysts’ earnings forecasts are, all else equal, more accurate if the forecast is issued closer to the announcement of actual earnings (O’Brien 1988) and with fewer days elapsed since the previous forecast issued by any analyst for the same firm-year (Clement and Tse 2003). Earnings forecast accuracy increases with analysts’ firm-specific and general experience (Mikhail et al. 1997; Clement 1999), and decreases with analysts’ portfolio complexity measured as the number of covered firms and industries (Clement 1999). Earnings forecasts are more accurate when the analyst is employed by a large brokerage house (Clement 1999), is elected as an All-Star by the Institutional Investor magazine (Stickel 1992), and updates the forecasts more frequently (Jacob et al. 1999). Analysts with higher ability, measured from prior earnings forecasting performance, are on average more accurate earnings forecasters (Brown 2001; Brown and Mohammad 2010).

We argue that revenue forecast accuracy is determined by characteristics similar to those for earnings forecast accuracy. We expect that forecast and analyst characteristics (i.e., forecast horizon, days elapsed since the last forecast, forecasting experience, forecast frequency, portfolio complexity, size of analysts’ employer, All-Star status, and issuance of earnings forecasts) explain differences in revenue forecast accuracy.Footnote 1 Our first hypothesis states the following:

H1

The accuracy of analysts’ revenue forecasts is determined by forecast and analyst characteristics similar to those underlying earnings forecast accuracy.

We conjecture that the quality of analysts’ revenue forecasts depends on their forecasting ability. The earnings forecast literature (e.g., Brown 2001; Brown and Mohammad 2010) consistently finds that analysts’ forecasting ability, measured as prior forecast accuracy, explains differential earnings forecasting performance. We expect that this finding can be transferred to revenues. We argue that prior revenue forecast accuracy is an indicator of analysts’ revenue forecasting ability and that it significantly determines concurrent forecasts. Furthermore, we posit that forecasting earnings and revenues are closely connected operations and that analysts’ ability in forecasting earnings also indicates their ability to forecast revenues. This reasoning leads to the following hypothesis:

H2

The accuracy of analysts’ revenue forecasts can be explained by the analysts’ prior performance in forecasting revenues and earnings.

3 Research design and data sample

3.1 Research design

We investigate analysts’ revenue forecast accuracy, that is, the absolute forecast errors calculated as the absolute difference between analyst \( i \)’s most recent forecast and the actual revenue value for firm \( j \) in year \( t \):

$$ Absolute\_Forecast\_Error_{ijt} = \left| Revenue\_Forecast_{ijt} - Actual\_Revenue_{jt} \right|. $$
(1)

Similar to Clement and Tse (2003, 2005), we apply a range adjustment and use

$$ Accuracy_{ijt} = \frac{{Absolute\_Forecast\_Error_{maxjt} - Absolute\_Forecast\_Error_{ijt} }}{{Absolute\_Forecast\_Error_{maxjt} - Absolute\_Forecast\_Error_{minjt} }}, $$
(2)

where \( Absolute\_Forecast\_Error_{maxjt} \) [\( Absolute\_Forecast\_Error_{minjt} \)] is the maximum [minimum] absolute forecast error of all analysts who follow firm \( j \) in year \( t \). Thereby, we measure analyst \( i \)’s accuracy relative to other analysts who follow the same firm in the same year. This procedure controls for firm-specific and year-specific influences. \( Accuracy_{ijt} \) increases with forecast accuracy and ranges from 0 (for the least accurate forecaster of firm \( j \) in year \( t \)) to 1 (for the most accurate forecaster of firm \( j \) in year \( t \)).
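To illustrate the measure, the following sketch computes Eqs. (1) and (2) on a toy dataset. The column names (analyst, firm, year, revenue_forecast, actual_revenue) are illustrative placeholders rather than actual I/B/E/S field names.

```python
import pandas as pd

# Toy data: four analysts covering two firm-years (the paper requires at
# least four analysts per firm-year; see Sect. 3.2).
df = pd.DataFrame({
    "analyst": [1, 2, 3, 4, 1, 2, 3, 4],
    "firm": ["A"] * 4 + ["B"] * 4,
    "year": [2014] * 8,
    "revenue_forecast": [98.0, 105.0, 101.0, 110.0, 48.0, 52.0, 55.0, 50.5],
    "actual_revenue": [100.0] * 4 + [50.0] * 4,
})

# Eq. (1): absolute forecast error.
df["afe"] = (df["revenue_forecast"] - df["actual_revenue"]).abs()

# Eq. (2): range adjustment within each firm-year; 1 marks the most
# accurate and 0 the least accurate analyst of that firm-year.
grp = df.groupby(["firm", "year"])["afe"]
df["accuracy"] = (grp.transform("max") - df["afe"]) / (
    grp.transform("max") - grp.transform("min"))
print(df[["analyst", "firm", "accuracy"]])
```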

We estimate the following regression, based on OLS:

$$ \begin{aligned} Accuracy_{ijt} & = \alpha_{0} + \alpha_{1} Forecast\_Horizon_{ijt} + \alpha_{2} Days\_Elapsed_{ijt} + \alpha_{3} Firm\_Experience_{ijt} \\ & \quad + \,\alpha_{4} Gen\_Experience_{ijt} + \alpha_{5} Frequency_{ijt} + \alpha_{6} Companies_{ijt} \\ & \quad + \,\alpha_{7} Industries_{ijt} + \alpha_{8} Broker\_Size_{ijt} + \alpha_{9} All\_Star_{it} + \alpha_{10} EPS\_Dummy_{ijt} + \varepsilon_{ijt} , \\ \end{aligned} $$
(3)

where, \( Forecast\_Horizon_{ijt} \), range-adjusted number of days between analyst \( i \)’s revenue forecast and the revenue announcement date of firm \( j \) in fiscal year \( t \); \( Days\_Elapsed_{ijt} \), range-adjusted number of days between analyst \( i \)’s revenue forecast for firm \( j \) in fiscal year \( t \) and the most recent preceding revenue forecast issued by any analyst for firm \( j \) in fiscal year \( t \); \( Firm\_Experience_{ijt} \), range-adjusted number of years (including the current year) for which analyst \( i \) has issued at least one revenue forecast for firm \( j \) in fiscal year \( t \); \( Gen\_Experience_{ijt} \), range-adjusted number of years (including the current year) for which analyst \( i \) has issued at least one revenue forecast for any firm in fiscal year \( t \); \( Frequency_{ijt} \), range-adjusted number of revenue forecasts issued by analyst \( i \) for firm \( j \) in fiscal year \( t \); \( Companies_{ijt} \), range-adjusted number of firms for which analyst \( i \) issues at least one revenue forecast in year \( t \); \( Industries_{ijt} \), range-adjusted number of industries (based on two-digit Sector/Industry/Group codes) for which analyst \( i \) issues at least one revenue forecast in year \( t \); \( Broker\_Size_{ijt} \), range-adjusted number of analysts that are employed by the brokerage house for which analyst \( i \) works in year \( t \); \( All\_Star_{it} \), binary variable set to 1 if analyst \( i \) is listed in the Institutional Investor’s All-America First Research Team in year \( t \), and set to 0 otherwise; \( EPS\_Dummy_{ijt} \), binary variable set to 1 if analyst \( i \) issues at least one earnings forecast for firm \( j \) in year \( t \), and set to 0 otherwise.

The raw variables (except for binary variables) are range adjusted as follows:

$$ Variable_{ijt} = \frac{{Raw\_Variable_{ijt} - Raw\_Variable_{minjt} }}{{Raw\_Variable_{maxjt} - Raw\_Variable_{minjt} }}. $$
(4)

Thus, \( Variable_{ijt} \) ranges between 0 and 1, and the value 0 [1] is assigned to the analyst with the lowest [highest] raw value of the respective \( Variable \). Note that the range adjustment of the independent variables is inverted relative to that of the accuracy measure because the \( Absolute\_Forecast\_Error \) decreases with accuracy, while \( Accuracy \) increases with it. Thus, significantly positive [negative] coefficient estimates suggest that an independent variable positively [negatively] affects forecast accuracy.

We run OLS regressions with analyst-clustered standard errors. We cluster on this dimension because the range-adjusted variables already account for firm- and year-specific effects. We anticipate that the included forecast and analyst characteristics significantly influence \( Accuracy \).
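A minimal sketch of this estimation, assuming the range-adjusted variables of Eq. (3) have already been constructed; the data here are simulated placeholders, and the analyst-clustered standard errors are obtained via statsmodels’ cluster-robust covariance.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# Simulated stand-ins for the range-adjusted variables in Eq. (3).
cols = ["accuracy", "forecast_horizon", "days_elapsed", "firm_experience",
        "gen_experience", "frequency", "companies", "industries",
        "broker_size"]
df = pd.DataFrame({c: rng.uniform(0, 1, n) for c in cols})
df["all_star"] = rng.integers(0, 2, n)
df["eps_dummy"] = rng.integers(0, 2, n)
df["analyst_id"] = rng.integers(0, 50, n)  # clustering dimension

model = smf.ols(
    "accuracy ~ forecast_horizon + days_elapsed + firm_experience"
    " + gen_experience + frequency + companies + industries"
    " + broker_size + all_star + eps_dummy",
    data=df,
)
# Analyst-clustered standard errors, as in the paper's design.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["analyst_id"]})
print(result.summary())
```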

3.2 Sample selection

We obtain analysts’ 1-year-ahead annual revenue forecasts and actual values from the Institutional Brokers Estimate System (I/B/E/S) Adjusted Detail History Database. We use forecasts issued from 1998 to 2014 because analysts’ provision of revenue forecasts is extremely rare before 1998.Footnote 2 We restrict our analyses to annual forecasts to avoid seasonality effects. We obtain 1-year-ahead annual earnings per share forecasts and actual values from the I/B/E/S Unadjusted Actuals History Database, and we obtain 1-year-ahead annual cash flow forecasts from the I/B/E/S Adjusted Detail History Database. We use SIG Codes from the I/B/E/S Sector/Industry/Group Codes Database.

We omit stale forecasts that are issued more than 360 days before the revenue announcement, herding forecasts that are issued within the last 30 days before the revenue announcement, and forecasts issued after the fiscal year-end. We exclude team forecasts because we aim to investigate individual analysts’ characteristics. We use only the most recent revenue forecast of each analyst before a firm’s revenue announcement because each analyst-firm-year combination should not be included more than once in the regression (see, e.g., Mikhail et al. 1997; Clement 1999). We require that each firm-year is covered by at least four analysts because we employ an adjustment that compares an analyst to other analysts who cover the same firm in the same year.Footnote 3 Lastly, we omit observations with missing values for the data necessary to calculate the independent variables. These requirements yield a final sample of 346,374 analyst-firm-year observations from 1998 to 2014. The sample comprises 31,382 firm-years and 45,636 analyst-years. Table 1, Panel A, summarizes the selection process.
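A pandas sketch of these selection steps might look as follows. Column names are hypothetical, and the exact treatment of the 30- and 360-day boundaries is an assumption.

```python
import pandas as pd

def apply_sample_filters(df: pd.DataFrame) -> pd.DataFrame:
    """df: one row per revenue forecast with (hypothetical) columns analyst,
    firm, year, forecast_date, announce_date, fiscal_year_end."""
    horizon = (df["announce_date"] - df["forecast_date"]).dt.days
    # Drop stale (> 360 days) and herding (last 30 days) forecasts;
    # boundary inclusion is an assumption.
    df = df[(horizon <= 360) & (horizon > 30)]
    # Drop forecasts issued after the fiscal year-end.
    df = df[df["forecast_date"] <= df["fiscal_year_end"]]
    # Keep only each analyst's most recent forecast per firm-year.
    df = df.sort_values("forecast_date").groupby(
        ["analyst", "firm", "year"]).tail(1)
    # Require at least four analysts per firm-year (needed for Eq. (2)).
    n_analysts = df.groupby(["firm", "year"])["analyst"].transform("nunique")
    return df[n_analysts >= 4]
```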

Table 1 Data sample

In Table 1, Panel B, we report statistics on the data sample by announcement year. The numbers of observations, analysts, and firms rise over time as the provision of revenue forecasts has increased substantially. At the beginning of our sample period in particular, revenue forecasts are issued relatively rarely. We also find a strong increase in the mean number of firms for which analysts issue revenue forecasts, from 6.18 firms in 1998 to 10.06 firms in 2014. The average number of analysts following a firm also increases, from 5.36 analysts in 1998 to 12.73 analysts in 2014. These results underline the growing importance of revenue forecasts and the need to investigate their nature.

3.3 Data sample description

Table 2, Panel A [Panel B], presents descriptive statistics for the unadjusted [range-adjusted] variables. The revenue forecasts in our sample have an average forecast horizon of 124.35 days, with 11.20 days elapsed since the previous forecast for the same firm issued by any analyst. The forecasts are issued by analysts who have an average of 3.23 years of firm-specific and 5.74 years of general revenue forecasting experience. Forecasts are revised 4.47 times per year on average. The forecasts are issued by analysts who issue revenue forecasts for an average of 15.30 firms from 2.20 industries, and who work for brokerage houses that have an average of 60.56 revenue forecasting employees.Footnote 4 All-Star analysts issue 2.06% of the forecasts, and 93.90% of the forecasts are accompanied by earnings per share forecasts issued by the same analyst for the same firm in the same fiscal year. The adjusted variables range between 0 and 1. We find that \( Accuracy \) skews to the left, while nearly all other variables skew to the right, which is consistent with prior literature (Clement and Tse 2003, 2005).

Table 2 Data sample description

We provide Pearson correlation coefficients for the range-adjusted variables in Table 2, Panel C. We find negative correlations between analysts’ revenue forecast accuracy and forecast horizon (\( \rho = - 0.398 \), p < 0.001) and days elapsed (\( \rho = - 0.031 \), p < 0.001). We find positive correlations between revenue forecast accuracy and general revenue experience (\( \rho = 0.522 \), p < 0.001), forecast frequency (\( \rho = 0.167 \), p < 0.001), All-Star status (\( \rho = 0.132 \), p < 0.001), and earnings forecast issuance (\( \rho = 0.020 \), p < 0.001). Surprisingly, for the number of firms and industries followed and for brokerage house size, the correlation coefficients have signs opposite to our expectations.

4 Results

4.1 Determinants of analysts’ revenue forecast accuracy

We run the regression described in Eq. (3) and report the coefficient estimates in Table 3. We find that revenue forecasts are more accurate when issued closer to the revenue announcement (\( Forecast\_Horizon \)) and shortly after other revenue forecasts for the same firm (\( Days\_Elapsed \)). They are more accurate when issued by analysts with more general revenue forecasting experience (\( Gen\_Experience \)), higher forecast frequency (\( Frequency \)), and fewer industries in the revenue forecast portfolio (\( Industries \)). Revenue forecasts are also more accurate when issued by All-Star analysts (\( All\_Star \)) and by analysts who also forecast earnings for the same firm-year (\( EPS\_Dummy \)). These findings are in line with previous results on earnings forecast accuracy (e.g., Clement 1999; Mikhail et al. 1997). Consistent with findings for earnings forecasts (e.g., O’Brien 1988; Brown and Mohd 2003), we provide evidence that forecast horizon is the most important determinant in explaining revenue forecast accuracy differences.Footnote 5

Table 3 Results from regression analysis of revenue forecast accuracy determinants

However, three out of ten investigated characteristics do not have the expected impact on revenue forecast accuracy: we find negative coefficient estimates for \( Firm\_Experience \) and \( Broker\_Size \) and a positive coefficient estimate for \( Companies \). These findings are similar to the results on determinants of cash flow forecast accuracy reported by Pae and Yoon (2012), who find that firm experience and broker size do not have the expected impact on cash flow forecast accuracy. The positive coefficient estimate of \( Companies \) suggests that analysts become better revenue forecasters when covering a higher number of firms; their forecasting ability seems to improve when they prepare forecasts for a larger portfolio.Footnote 6 However, this result contradicts common findings regarding earnings forecast accuracy, which decreases with a more complex forecast portfolio, that is, with a larger number of firms and industries covered (e.g., Clement 1999).

Nevertheless, we find that seven out of ten investigated characteristics that determine earnings forecast accuracy can be transferred to revenue forecast accuracy, and we regard hypothesis H1 as substantiated.

4.2 The influence of analysts’ prior performance in forecasting revenues and earnings

We find that several analyst characteristics explain revenue forecast accuracy differences, and we conjecture that prior forecasting performance is also an influencing factor. The literature (e.g., Brown 2001) provides evidence that prior performance in forecasting earnings is one of the major determinants of earnings forecast accuracy. Prior forecasting performance is commonly interpreted as a measure of analyst forecasting ability and is usually based on analysts’ prior performance in forecasting a specific firm. Additionally, Brown and Mohammad (2010) argue that analysts’ prior forecasting ability can also be measured by considering their performance for all firms covered in the preceding year. We posit that the relation between prior and present forecasting performance also applies to revenues. We presume that both kinds of prior forecasting performance measures (i.e., a firm-specific and a general measure) can be used to explain revenue forecast accuracy differences. We estimate the following regression based on OLS:

$$ \begin{aligned} Accuracy_{ijt} & = \alpha_{0} + \alpha_{1} Lagged\_Accuracy_{ijt} + \alpha_{2} Gen\_Lagged\_Acc._{ijt} + \alpha_{3} Forecast\_Horizon_{ijt} \\ & \quad + \,\alpha_{4} Days\_Elapsed_{ijt} + \alpha_{5} Firm\_Experience_{ijt} + \alpha_{6} Gen\_Experience_{ijt} \\ & \quad + \, \alpha_{7} Frequency_{ijt} + \alpha_{8} Companies_{ijt} + \alpha_{9} Industries_{ijt} \\ & \quad + \,\alpha_{10} Broker\_Size_{ijt} + \alpha_{11} All\_Star_{it} + \alpha_{12} EPS\_Dummy_{ijt} + \varepsilon_{ijt} , \\ \end{aligned} $$
(5)

where, \( Lagged\_Accuracy_{ijt} \), 1-year lagged \( Accuracy_{ijt} \), that is, 1-year lagged range adjusted absolute forecast error of analyst \( i \)’s revenue forecast for firm \( j \) in fiscal year \( t \); \( Gen\_Lagged\_Acc._{ijt} \), mean 1-year lagged \( Accuracy_{ijt} \) over all firms analyst \( i \) covers in year \( t \).

The variables are based on forecast accuracy and are range adjusted as in Eq. (2).

We examine the influence of lagged revenue forecast accuracy separately and do not include the respective variables in our primary analysis in Eq. (3) because the inclusion of data from the preceding fiscal year requires that the analyst has issued a revenue forecast for the same firm in the preceding year. Thus, we have to omit 33.85% of the observations from the primary sample when including \( Lagged\_Accuracy \). When including \( Lagged\_Accuracy \) and \( Gen\_Lagged\_Acc. \), we additionally require that the analyst has issued revenue forecasts in the preceding year for at least one other firm. Thereby, we are able to calculate an average. This restriction reduces the sample by 35.43% to 223,653 observations.
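One way to construct the two lagged measures, sketched under the assumption of one row per analyst-firm-year carrying the range-adjusted accuracy of Eq. (2); the column names are illustrative.

```python
import pandas as pd

def add_lagged_accuracy(df: pd.DataFrame) -> pd.DataFrame:
    # Shift each analyst-firm accuracy forward one year so that the year t
    # row carries the year t-1 value (Lagged_Accuracy).
    prior = df[["analyst", "firm", "year", "accuracy"]].copy()
    prior["year"] += 1
    prior = prior.rename(columns={"accuracy": "lagged_accuracy"})
    out = df.merge(prior, on=["analyst", "firm", "year"], how="left")
    # Gen_Lagged_Acc.: mean lagged accuracy over all firms the analyst
    # covers in year t (missing lagged values are skipped by the mean).
    out["gen_lagged_acc"] = (
        out.groupby(["analyst", "year"])["lagged_accuracy"].transform("mean"))
    return out
```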

Furthermore, we argue that analysts’ abilities in forecasting revenues and earnings are related; that is, ability can also be derived from earnings forecasting performance. We use a set of annual earnings per share forecasts and employ the same restriction criteria as for revenue forecasts. We match the earnings and revenue forecasts and include analysts’ prior earnings forecasting performance in our analysis. We estimate the following regression based on OLS:

$$ \begin{aligned} Accuracy_{ijt} & = \alpha_{0} + \alpha_{1} Lagged\_Accuracy_{ijt} + \alpha_{2} Gen\_Lagged\_Acc._{ijt} + \alpha_{3} Lagged\_EPS\_Acc._{ijt} \\ & \quad + \,\alpha_{4} Gen\_Lagged\_EPS\_Acc._{ijt} + \alpha_{5} Forecast\_Horizon_{ijt} + \alpha_{6} Days\_Elapsed_{ijt} \\ & \quad + \,\alpha_{7} Firm\_Experience_{ijt} + \alpha_{8} Gen\_Experience_{ijt} + \alpha_{9} Frequency_{ijt} \\ & \quad + \,\alpha_{10} Companies_{ijt} + \alpha_{11} Industries_{ijt} + \alpha_{12} Broker\_Size_{ijt} + \alpha_{13} All\_Star_{it} + \varepsilon_{ijt} , \\ \end{aligned} $$
(6)

where, \( Lagged\_EPS\_Acc._{ijt} \), 1-year-lagged range-adjusted earnings forecast accuracy, that is, range-adjusted absolute forecast error of analyst \( i \)’s earnings forecast for firm \( j \) in fiscal year \( t \); \( Gen\_Lagged\_EPS\_Acc._{ijt} \), mean \( Lagged\_EPS\_Acc._{ijt} \) over all firms analyst \( i \) covers in year \( t \).

Again, the variables are based on forecast accuracy and are range adjusted as in Eq. (2). We omit \( EPS\_Dummy \) from the regression because the sample is restricted to revenue forecasts that are accompanied by earnings forecasts issued by the same analyst for the same firm year. Again, the research design requires further curtailment of the data sample because we omit observations if the earnings forecasting performance in the previous fiscal year cannot be calculated. The sample comprises 206,145 observations when including \( Lagged\_EPS\_Acc. \), and 205,576 observations when including both measures of lagged earnings accuracy.

We report regression results for Eq. (5) in Table 4, Panel A. As expected, we find significantly positive coefficient estimates when including \( Lagged\_Accuracy \) (0.0982, t value 30.21) and when including both measures of prior revenue forecasting performance (\( Lagged\_Accuracy \) 0.0754, t value 25.08; \( Gen\_Lagged\_Acc. \) 0.1370, t value 16.96). Both variables have strong explanatory power, which is in line with earnings forecasting literature (e.g., Brown and Mohammad 2010).

Table 4 Results from regression analyses of influence of analysts’ prior performance

The other independent variables remain basically unchanged; that is, we still find the expected results for seven out of ten characteristics. The exception is the number of companies in an analyst’s revenue forecast portfolio, which has shifted from a positive to an insignificant result. We cannot directly compare the adjusted R² values (18.19% when including only the firm-specific measure; 18.22% when including both measures) to the adjusted R² of regression (3) when none of the measures of prior revenue forecasting performance are included (16.50%) because we use a different data sample. Thus, to facilitate a comparison, we run Eq. (3) for the same data sample of 223,653 observations. We find an adjusted R² of 16.77% for model (3) [compared to 18.22% for model (5)]. We use an F test to compare the models, and the F value of 1984.32 shows that a better model fit is achieved when prior revenue forecasting performance is included.

We report the results for Eq. (6) in Table 4, Panel B. We find that current revenue forecast accuracy is also explained by analysts’ prior performance in forecasting earnings. Including both measures of prior earnings forecasting performance results in coefficient estimates of 0.0084 (t value 3.93) for the firm-specific and 0.0214 (t value 2.92) for the general measure. Compared with the results for Eq. (5) in Panel A, the results for the other independent variables remain qualitatively unchanged. Again, we aim to compare model (3) with model (6), so we run regression (3) for the data sample of 205,576 observations used in model (6). Naturally, we omit \( EPS\_Dummy \) from model (3) because the sample only comprises observations for which the analyst issues both revenue and earnings forecasts. The adjusted R² (17.55%) of model (6) exceeds the adjusted R² (16.31%) of model (3), and an F test (F value of 840.49) provides evidence that the model fit improves when prior earnings forecasting performance is included.

In conclusion, our results substantiate that revenue forecast accuracy can be explained by analysts’ prior performance in forecasting revenues and earnings (hypothesis H2). However, we base the research designs for the following additional analyses on the basic model (3) and omit the prior forecasting performance variables. Their inclusion would dramatically shrink the sample by at least 33.85% (when including only firm-specific prior revenue forecasting performance) and up to 40.65% (when including firm-specific and general measures of prior revenue and earnings forecasting performance).

5 Additional analyses

5.1 Predicting the usefulness of revenue forecasts

We use the findings from the regression results in Sect. 4 to develop a model that ex ante identifies the most accurate revenue forecasts. To this end, we predict the accuracy of a revenue forecast based on forecast and analyst information. We use the relations identified in Sect. 4, namely the coefficient estimates from regression models (5) and (6). We use the results from the four slightly distinct analyses (see Table 4, Panels A and B) and, for each coefficient estimate, calculate an average value over all four regression results. This process leads to the following model, which predicts the usefulness of analyst \( i \)’s revenue forecast for firm \( j \) in year \( t \):

$$ \begin{aligned} Pred.Usefulness_{ijt} & = 0.0787 \cdot Lagged\_Accuracy_{ijt} + 0.1310 \cdot Gen\_Lagged\_Acc._{ijt} \\ & \quad + \,0.0141 \cdot Lagged\_EPS\_Acc._{ijt} + 0.0214 \cdot Gen\_Lagged\_EPS\_Acc._{ijt} \\ & \quad - \,0.3560 \cdot Forecast\_Horizon_{ijt} - 0.0407 \cdot Days\_Elapsed_{ijt} \\ & \quad + \,0.0102 \cdot Firm\_Experience_{ijt} + 0.0181 \cdot Gen\_Experience_{ijt} \\ & \quad + \,0.0588 \cdot Frequency_{ijt} - 0.0077 \cdot Industries_{ijt} + 0.0243 \cdot Broker\_Size_{ijt} \\ & \quad + \,0.0201 \cdot All\_Star_{it} - 0.1535 + \varepsilon_{ijt} \\ \end{aligned} $$
(7)

We exclude the number of companies because the results in Table 4, Panels A and B, are insignificant throughout. We additionally include a constant (− 0.1535) to obtain a more intuitive interpretation of \( Pred.Usefulness_{ijt} \). We argue that a revenue forecast is useful and relatively accurate if \( Pred.Usefulness_{ijt} > 0. \) Otherwise, the forecast is not accurate enough. We use the constant to obtain a partition that results in approximately 25% useful revenue forecasts. We recommend that investors and researchers who aim to identify the most accurate revenue forecasts rely on the forecasts with a positive \( Pred.Usefulness_{ijt} \). Thereby, investors are able to identify the most accurate 25% of revenue forecasts.
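In practice, Eq. (7) reduces to a simple scoring rule. The sketch below hard-codes the averaged coefficients reported above (the error term drops out for prediction); the input names are illustrative stand-ins for the range-adjusted variables.

```python
def pred_usefulness(row) -> float:
    """Score a revenue forecast with the averaged coefficients of Eq. (7);
    'row' maps (hypothetical) variable names to range-adjusted values."""
    return (0.0787 * row["lagged_accuracy"]
            + 0.1310 * row["gen_lagged_acc"]
            + 0.0141 * row["lagged_eps_acc"]
            + 0.0214 * row["gen_lagged_eps_acc"]
            - 0.3560 * row["forecast_horizon"]
            - 0.0407 * row["days_elapsed"]
            + 0.0102 * row["firm_experience"]
            + 0.0181 * row["gen_experience"]
            + 0.0588 * row["frequency"]
            - 0.0077 * row["industries"]
            + 0.0243 * row["broker_size"]
            + 0.0201 * row["all_star"]
            - 0.1535)

# A forecast is recommended when the score is positive, e.g.:
# recommended = df.apply(pred_usefulness, axis=1) > 0
```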

We estimate \( Pred.Usefulness_{ijt} \) for a sample of 205,576 observations. We apply the same data sample as in the analysis that includes \( Lagged\_EPS\_Acc._{ijt} \) and \( Gen\_Lagged\_EPS\_Acc._{ijt} \) [see regression model (6)], which allows us to employ all variables of interest. Next, we partition the data sample into deciles based on predicted usefulness. In Table 5, Panel A, we report the average \( Pred.Usefulness_{ijt} \) and the average actual revenue forecast accuracy separately for each decile. We find that the average actual revenue forecast accuracy monotonically decreases from decile 1 (0.8128) to decile 10 (0.3980). In Panel B of Table 5, we partition the sample into revenue forecasts we recommend (\( Pred.Usefulness_{ijt} > 0 \)) and do not recommend (\( Pred.Usefulness_{ijt} \le 0 \)). As expected, we find that the recommended revenue forecasts are on average more accurate (0.8072) than the rest (0.6676). A t value of 86.62 confirms that the difference is statistically significant.

Table 5 Results from models that predict the usefulness of revenue forecasts

To make the model more practical for investors, we exclude any variables that cannot be determined by the beginning of a fiscal year. Thus, we eliminate \( Forecast\_Horizon \), \( Days\_Elapsed \), \( Frequency \), and \( Industries \). The variables that capture information from the preceding forecast period (\( Lagged\_Accuracy, Gen\_Lagged\_Acc., Lagged\_EPS\_Acc.,Gen\_Lagged\_EPS\_Acc. \)), as well as an analyst’s firm-specific and general experience, the size of the employer, and a possible All-Star status, can already be determined at the beginning. Thus, the model is reduced to the following:

$$ \begin{aligned} Pred.Usefulness_{ijt} & = 0.0787 \cdot Lagged\_Accuracy_{ijt} + 0.1310 \cdot Gen\_Lagged\_Acc._{ijt} \\ & \quad + \,0.0141 \cdot Lagged\_EPS\_Acc._{ijt} + 0.0214 \cdot Gen\_Lagged\_EPS\_Acc._{ijt} \\ & \quad + \,0.0102 \cdot Firm\_Experience_{ijt} + 0.0181 \cdot Gen\_Experience_{ijt} \\ & \quad + \,0.0243 \cdot Broker\_Size_{ijt} + 0.0201 \cdot All\_Star_{it} - 0.1825 + \varepsilon_{ijt} \\ \end{aligned} $$
(8)

Note that we adapt the constant slightly. The exclusion of some variables (especially \( Forecast\_Horizon \)) increases the average value of \( Pred.Usefulness_{ijt} \). To obtain a similar partition with approximately 25% of the forecasts classified as useful (\( Pred.Usefulness_{ijt} > 0 \)), we adjust the constant.

Again, we estimate \( Pred.Usefulness_{ijt} \) for a sample of 205,576 observations and partition the sample into deciles based on predicted usefulness. In Table 5, Panel C, we report the average values of \( Pred.Usefulness_{ijt} \) and the average actual revenue forecast accuracy by decile. As in Panel A, the average actual revenue forecast accuracy decreases monotonically from decile 1 (0.7628) to decile 10 (0.5926). Table 5, Panel D, shows results for the partition into recommended and nonrecommended revenue forecasts. On average, the recommended revenue forecasts are more accurate (0.7508) than the nonrecommended ones (0.6847). Again, a t test confirms the statistical significance of the difference (t value 41.10).

A comparison of the results from Table 5, Panels A and B, with those from Panels C and D shows that excluding some determinants reduces the model’s predictive power. We therefore recommend that investors use model (7) when possible (i.e., when the analysis is conducted close to the end of a fiscal year) and model (8) otherwise (i.e., at the beginning of the fiscal year).

We demonstrate the practical usefulness of our models by applying them to the dataset for 2015. Note that we develop models (7) and (8) from a dataset of forecasts issued from 1998 to 2014. The following analysis shows that the models can also be used for succeeding periods, such as 2015. The dataset from 2015 comprises 31,743 revenue forecasts. We use models (7) and (8) to calculate the predicted usefulness and partition the sample into recommended (\( Pred.Usefulness_{ijt} > 0 \)) and nonrecommended (\( Pred.Usefulness_{ijt} \le 0 \)) revenue forecasts. In Table 5, Panel E, we report the results for model (7), and in Table 5, Panel F, we report the results for the restricted model (8). For model (7), the recommended revenue forecasts are on average more accurate (0.7953) than the other forecasts (0.6667). The same applies to model (8), in which we only include determinants that can be observed at the beginning of the fiscal year (0.7653 vs. 0.6760). T values of 23.22 and 18.45 confirm that the differences are statistically significant. The findings for the 2015 sample verify the practical applicability of our models. Even at the beginning of a fiscal year, our results help to identify relatively accurate revenue forecasts.

5.2 Analysis of the consistency between revenue and earnings forecasts

We find that 94% of the revenue forecasts in our sample are accompanied by earnings forecasts issued by the same analyst for the same firm in the same fiscal year. Revenues are the first step when forecasting earnings; that is, analysts build earnings forecasts on revenue forecasts. Thus, we conjecture that analysts should expect either an increase or a decrease in both revenues and earnings compared to the results of the preceding period. If an analyst expects one of these items to increase and the other one to decrease, then the analyst either anticipates a strong or unusual development of the expense components or has conducted an insufficient forecasting process, for example, by making wrong assumptions about expenses. Keung (2010) finds that earnings forecasts are more accurate when issued with consistent revenue forecasts than with inconsistent ones. We expect that Keung’s (2010) finding on earnings also applies to revenues. We argue that analysts who disclose consistent revenue and earnings forecasts have made smoother assumptions and applied a more structured approach, resulting in more accurate revenue forecasts.

We introduce a binary variable \( Consistent\_EPS_{ijt} \) to measure whether revenue and earnings forecasts are consistent. \( Consistent\_EPS_{ijt} \) is set to 1 if analyst \( i \)’s revenue and earnings forecasts for firm \( j \) in fiscal year \( t \) are both above/below the firm’s actual revenue and earnings values in fiscal year \( t - 1 \), and set to 0 otherwise. We run the following regression based on OLS and expect to find a positive coefficient estimate for \( Consistent\_EPS \):

$$ \begin{aligned} Accuracy_{ijt} & = \alpha_{0} + \alpha_{1} Consistent\_EPS_{ijt} + \alpha_{2} Forecast\_Horizon_{ijt} + \alpha_{3} Days\_Elapsed_{ijt} \\ & \quad + \,\alpha_{4} Firm\_Experience_{ijt} + \alpha_{5} Gen\_Experience_{ijt} + \alpha_{6} Frequency_{ijt} \\ & \quad + \,\alpha_{7} Companies_{ijt} + \alpha_{8} Industries_{ijt} + \alpha_{9} Broker\_Size_{ijt} + \alpha_{10} All\_Star_{it} + \varepsilon_{ijt} \\ \end{aligned} $$
(9)
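A minimal sketch of the consistency dummy, assuming columns for the two forecasts and the prior year’s actual values; observations where a forecast equals the prior actual are assumed to have been excluded beforehand, as described below.

```python
import pandas as pd

def consistent_eps(df: pd.DataFrame) -> pd.Series:
    """1 if revenue and earnings forecasts both lie above or both lie below
    the prior year's actuals, 0 otherwise (hypothetical column names)."""
    rev_up = df["revenue_forecast"] > df["prior_actual_revenue"]
    eps_up = df["eps_forecast"] > df["prior_actual_eps"]
    return (rev_up == eps_up).astype(int)
```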

We use a data sample of 228,187 revenue forecasts that have corresponding earnings data. We use data from the respective preceding period to determine whether revenue and earnings forecasts are above or below the prior year’s actual values, and we thus require that data from the preceding forecast periods are available. We exclude observations if either the revenue or the earnings forecast equals the preceding year’s actual value. We find that 72.64% of the revenue forecasts are consistent with the corresponding earnings forecasts.Footnote 7 On average, revenue forecasts that are consistent with the respective earnings forecasts are more accurate than inconsistent revenue forecasts (0.6991 vs. 0.6862, t value 8.45).

We tabulate the results from Eq. (9) in the first column of Table 6. As expected, we find a positive coefficient estimate for \( Consistent\_EPS \) (coefficient estimate 0.0146, t value 8.94). This finding implies that revenue forecasts are more accurate when they are consistent with the corresponding earnings forecasts. In the next step, we investigate whether analyst or forecast characteristics differ between consistent and inconsistent revenue forecasts. We compare average values of the determinants but find only small differences. We thus run regression (9) separately for subsamples of consistent and inconsistent forecasts, while omitting \( Consistent\_EPS \), to examine whether the determinants have differential influences on revenue forecast accuracy. We report the results in the second and third columns of Table 6. The coefficient estimates are relatively close; we find strong differences only for firm experience and the number of companies followed, neither of which has an impact in the subsample of inconsistent forecasts. We posit that the differential forecast accuracy of consistent and inconsistent forecasts is driven neither by different characteristics of the determinants nor by differences in their influence. We conclude that the differences are due to analysts’ smoother assumptions and more structured forecasting approaches when issuing consistent revenue and earnings forecasts.

Table 6 Analysis of the consistency between revenue and earnings forecasts

5.3 The influence of revenue forecast boldness on forecast accuracy

We argue that analysts who issue bold revenue forecasts have superior information, compared to analysts who issue herding revenue forecasts. Thus, we expect that bold revenue forecasts are more accurate than herding ones. This relation between forecast boldness and accuracy has been shown for earnings forecasts (e.g., Clement and Tse 2005). Similar to Clement and Tse (2003, 2005) and Gleason and Lee (2003), we use a binary variable to classify forecasts as bold (i.e., diverging from the consensus forecast) or herding. \( Boldness_{ijt} \) is set to 1 if analyst \( i \)’s revenue forecast for firm \( j \) in year \( t \) is above [below] the analyst’s prior forecast and above [below] the consensus forecast immediately before the forecast revision, and set to 0 otherwise. The primary sample is reduced to 295,079 forecasts for which \( Boldness_{ijt} \) can be determined.Footnote 8 We classify 67.48% of the forecasts as bold and 32.52% as herding. This ratio is similar to categorizations for earnings forecasts: Clement and Tse (2005) categorize 73.31% as bold, and Gleason and Lee (2003) categorize 76.30% as bold. We run the following regression based on OLS:

$$ \begin{aligned} Accuracy_{ijt} & = \alpha_{0} + \alpha_{1} Boldness_{ijt} + \alpha_{2} Forecast\_Horizon_{ijt} + \alpha_{3} Days\_Elapsed_{ijt} \\ & \quad + \,\alpha_{4} Firm\_Experience_{ijt} + \alpha_{5} Gen\_Experience_{ijt} + \alpha_{6} Frequency_{ijt} + \alpha_{7} Companies_{ijt} \\ & \quad + \,\alpha_{8} Industries_{ijt} + \alpha_{9} Broker\_Size_{ijt} + \alpha_{10} All\_Star_{it} + \alpha_{11} EPS\_Dummy_{ijt} + \varepsilon_{ijt} \\ \end{aligned} $$
(10)
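The boldness classification might be computed as follows, assuming columns holding the analyst’s prior own forecast and the consensus immediately before the revision (both names hypothetical).

```python
import pandas as pd

def boldness(df: pd.DataFrame) -> pd.Series:
    """1 if the revised forecast lies above [below] both the analyst's own
    prior forecast and the pre-revision consensus, 0 (herding) otherwise."""
    above = (df["revenue_forecast"] > df["prior_own_forecast"]) & \
            (df["revenue_forecast"] > df["consensus"])
    below = (df["revenue_forecast"] < df["prior_own_forecast"]) & \
            (df["revenue_forecast"] < df["consensus"])
    return (above | below).astype(int)
```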

We tabulate the results in the first column of Table 7, Panel A. We find that bold revenue forecasts are significantly more accurate than herding forecasts (coefficient estimate 0.0475, t value 34.95). Most of the findings for the other independent variables remain qualitatively unchanged.

Table 7 Analyses of forecast boldness and cash flow forecasts as accuracy determinants

In the next step, we partition the sample into bold and herding forecasts to analyze whether their accuracy is affected by different determinants. We conduct regression (3) separately for both subsamples and tabulate the results in the second and third columns of Table 7, Panel A. We find differences only concerning the significance of analysts’ All-Star status and the magnitude of the number of covered industries. However, these differences are rather weak.

5.4 The influence of cash flow forecast issuance on revenue forecast accuracy

The literature on analysts’ cash flow forecasts actively debates their usefulness. In contrast to revenue forecasts, it is unclear whether all analysts estimate cash flows in the process of forecasting earnings. Additionally, it is not certain whether forecasting cash flows requires different skills, and it is unknown how much effort the forecasting involves. Givoly et al. (2009) argue that cash flow forecasts are naïve extensions of earnings forecasts. They find that cash flow forecasts are less accurate than earnings forecasts by an amount that cannot be fully attributed to increased difficulty but rather indicates insufficient effort. Call et al. (2013) contradict this claim. They argue that only 7.8% of their investigated cash flow forecasts are extensions of earnings forecasts and that cash flow forecasts add value for investors. Call et al. (2013) conclude that analysts apply a structured approach to forecasting cash flows, which involves more than mechanical, simple adjustments to earnings. Call et al. (2009) find that earnings forecasts are more accurate when accompanied by cash flow forecasts, but Lehavy (2009) critically comments on the research design and doubts their findings.

In this section, we examine whether revenue forecasts are more accurate when accompanied by cash flow forecasts. Thereby, we aim to contribute to the ongoing discussion of the usefulness and influence of cash flow forecasts. We already find that revenue forecast accuracy is higher if the analyst also issues earnings forecasts for the same firm-year. Thus, one might expect that revenue forecast accuracy increases even further if the analyst also issues another kind of forecast. Similar to Call et al. (2009), one might argue that analysts who issue cash flow forecasts conduct a more intensive research process and have superior information.

However, one might expect that cash flow forecast issuance does not affect or possibly even negatively affects revenue forecast accuracy. One might refer to the perspective of financial statement analyses and argue that the forecasting process starts with revenues, continues with earnings, and finishes with cash flows. It is therefore possible that cash flows are not connected closely enough to revenues to have a positive influence. Furthermore, it might be argued that analysts who cannot accurately forecast revenues may feel that it is necessary to issue additional forecast items such as cash flows. Therefore, one might even expect a negative relationship between cash flow forecast issuance and revenue forecast accuracy.

We employ a binary variable \( CPS\_Dummy_{ijt} \) that is set to 1 if analyst \( i \) issues at least one cash flow forecast for firm \( j \) in year \( t \), and set to 0 otherwise. We find that 16.85% of the revenue forecasts in our sample are accompanied by corresponding cash flow forecasts. We estimate the following regression based on OLS:

$$ \begin{aligned} Accuracy_{ijt} & = \alpha_{0} + \alpha_{1} CPS\_Dummy_{ijt} + \alpha_{2} Forecast\_Horizon_{ijt} + \alpha_{3} Days\_Elapsed_{ijt} \\ & \quad + \,\alpha_{4} Firm\_Experience_{ijt} + \alpha_{5} Gen\_Experience_{ijt} + \alpha_{6} Frequency_{ijt} + \alpha_{7} Companies_{ijt} \\ & \quad + \,\alpha_{8} Industries_{ijt} + \alpha_{9} Broker\_Size_{ijt} + \alpha_{10} All\_Star_{it} + \alpha_{11} EPS\_Dummy_{ijt} + \varepsilon_{ijt} \\ \end{aligned} $$
(11)

We report the results in the first column of Table 7, Panel B. Revenue forecasts that are accompanied by cash flow forecasts are significantly less accurate than those not accompanied by cash flow forecasts (coefficient estimate − 0.0107, t value − 3.72). The qualitative findings for the other independent variables remain generally unchanged.

We seek to determine why revenue forecasts are less accurate when cash flow forecasts are issued. Therefore, we run regression (3) separately for two subsamples of revenue forecasts accompanied and unaccompanied by cash flow forecasts, aiming to find differences in the determinants of these forecast groups. We report the results in the second and third columns of Table 7, Panel B. We find that firm-specific experience and All-Star status do not determine the accuracy of revenue forecasts that are accompanied by cash flows. Accompanied revenue forecasts (which are generally less accurate than unaccompanied ones) are more accurate when the analyst also issues an earnings forecast. We conclude that the negative effect of issuing cash flow forecasts is partly compensated by additional earnings forecasts. All other differences are rather small.

5.5 Do analysts stop forecasting revenues after a poor forecasting performance?

Analysts’ main incentive for publishing revenue forecasts is to strengthen their reputation. They do not directly receive financial compensation for providing forecasts on I/B/E/S, but they benefit from the exposure to clients in the database (Ertimur et al. 2011). A good reputation is extremely important to analysts because it affects their careers and their opportunities to receive commissions. Jackson (2005) finds that analysts’ trading volume rises with higher reputation and that reputation depends on recent forecast accuracy. Thus, analysts receive indirect monetary compensation from issuing accurate revenue forecasts that signal forecasting ability to potential clients. Analysts are therefore motivated to publish accurate revenue forecasts. Conversely, issuing inaccurate revenue forecasts can negatively affect analysts’ careers, and naturally, analysts attempt to protect their reputation.

We argue that each year, analysts decide which firms to follow and that this decision is driven by prior performance. Analysts with accurate prior revenue forecasts might have improved their reputation, which can lead to higher trading volume. Therefore, they face incentives to issue revenue forecasts for the same firm in the succeeding years to even further improve their reputation. Analysts with inaccurate revenue forecasts might have experienced reputational damage. Thus, their motivation to issue revenue forecasts for the same firm in the next year is lower. However, analysts might still follow the firm and privately forecast revenues as an intermediate step to forecasting earnings, but they might not publish the revenue forecast because of negative prior experience. We thus argue that analysts with a relatively poor prior performance forecasting revenues for a specific firm are more likely to stop issuing revenue forecasts for that firm compared to analysts with a better performance in the preceding period.

We introduce a binary variable \( Firm\_Stop_{ijt} \), which is set to 1 if analyst \( i \) does not issue any revenue forecast for firm \( j \) in year \( t \) but did so in year \( t - 1 \), and set to 0 otherwise. We use a sample of 284,688 revenue forecasts from 1999 to 2014 for which information on the preceding fiscal year is available. If year \( t \) is the examined year of interest, we require that the analyst issued a revenue forecast for the firm in \( t - 1 \). Additionally, we require that the analyst issues at least one revenue forecast in year \( t \) for any firm to ensure that the analyst did not stop forecasting revenues altogether.Footnote 9 We run the following logistic regression:

$$ Firm\_Stop_{ijt} = \beta_{0} + \beta_{1} Accuracy_{ijt - 1} + \varepsilon_{ijt}, \tag{12} $$

where \( Accuracy_{ijt - 1} \) is the range-adjusted absolute forecast error of analyst \( i \)’s revenue forecast for firm \( j \) in fiscal year \( t - 1 \). We conduct the range adjustment as in Eq. (2).
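To make the sample construction concrete, the following is a minimal sketch of how \( Firm\_Stop_{ijt} \) and logit model (12) could be implemented in Python; the input frame and its column names are hypothetical stand-ins for the I/B/E/S-based sample:

import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per analyst-firm-year with the range-adjusted
# accuracy of analyst i's revenue forecast for firm j in that year.
forecasts = pd.read_csv("revenue_forecasts.csv")  # analyst, firm, year, accuracy

issued = set(zip(forecasts["analyst"], forecasts["firm"], forecasts["year"]))
active = set(zip(forecasts["analyst"], forecasts["year"]))

df = forecasts.copy()
# Firm_Stop = 1 if the analyst-firm pair has no revenue forecast in t = year + 1.
df["firm_stop"] = [int((a, f, y + 1) not in issued)
                   for a, f, y in zip(df["analyst"], df["firm"], df["year"])]
# Keep only rows where the analyst still forecasts revenues for some firm in t.
df = df[[(a, y + 1) in active for a, y in zip(df["analyst"], df["year"])]]

# Logit model (12): stop decision in year t on forecast accuracy in t - 1.
fit = sm.Logit(df["firm_stop"], sm.add_constant(df["accuracy"])).fit()
print(fit.summary())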

We find that analysts stop forecasting revenues in 19.56% of the observations. The average accuracy measure \( Accuracy_{ijt - 1} \) for these observations is 0.5951, compared to 0.7350 for observations for which analysts continue forecasting revenues for the respective firm. A t test (t value 93.52) confirms that these values differ significantly. This descriptive finding supports our conjecture that analysts who stop forecasting revenues for a specific firm are driven by poor prior performance.
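The two-sample comparison can be reproduced with a standard t test; continuing the sketch above (column names remain hypothetical):

from scipy import stats

# Mean prior accuracy of analysts who stop vs. those who continue.
stopped = df.loc[df["firm_stop"] == 1, "accuracy"]
continued = df.loc[df["firm_stop"] == 0, "accuracy"]
t_stat, p_val = stats.ttest_ind(stopped, continued)
print(f"means: {stopped.mean():.4f} vs. {continued.mean():.4f}, t = {t_stat:.2f}")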

We tabulate the results from logit model (12) in Table 8, Panel A. We find that the likelihood that an analyst stops forecasting revenues for a specific firm decreases with revenue forecast accuracy; that is, analysts are more likely to stop forecasting revenues for a given firm if they performed poorly in the preceding forecast period. The odds that the analyst with the best forecasting performance for a given firm in the preceding forecast period stops forecasting revenues for this firm are 70.8% smaller than the corresponding odds for the analyst with the worst performance (see footnote 10).
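Because \( Accuracy_{ijt - 1} \) is range-adjusted so that the worst and best performers for a firm score 0 and 1, respectively, the reported odds reduction maps directly onto the logit coefficient. The implied value of \( \beta_{1} \) below is our back-calculation, not a tabulated estimate:

$$ \frac{\text{odds}(Accuracy_{ijt - 1} = 1)}{\text{odds}(Accuracy_{ijt - 1} = 0)} = e^{\beta_{1}} = 1 - 0.708 = 0.292, \quad \text{so} \quad \beta_{1} = \ln(0.292) \approx -1.23. $$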

Table 8 Analysis of termination of revenue forecast issuance

In the next part of our forecast stop analysis, we evaluate whether analysts stop issuing revenue forecasts in general. We argue that a poor revenue forecasting performance influences not only the decision to stop forecasting revenues for a specific firm but also the decision to stop forecasting revenues altogether. Analysts who achieved poor overall results in forecasting revenues are less motivated to continue publishing revenue forecasts because inaccurate results might harm their reputation. We thus argue that analysts who had negative experiences with forecasting revenues are more likely to stop forecasting revenues in general, compared to analysts who had positive experiences.

We conduct this analysis at the analyst-year level; that is, each analyst is included only once per year in the sample. We use a binary variable \( Revenue\_Stop_{it} \), which is set to 1 if analyst \( i \) does not issue any revenue forecast for any firm in year \( t \) but did so in year \( t - 1 \), and set to 0 otherwise. The sample consists of 35,682 analyst-year observations from 1999 to 2014 for which information on the preceding fiscal year is available. We require that the analyst issues a revenue forecast for some firm in the preceding year and at least one earnings or revenue forecast in year \( t \) for any firm. By extending the restriction to earnings and revenue forecasts, we ensure that the analyst did not stop issuing forecasts altogether, for example, because of retirement or a change of occupation. We run the following logistic regression:

$$ Revenue\_Stop_{it} = \beta_{0} + \beta_{1} \overline{Accuracy}_{it - 1} + \varepsilon_{it}, \tag{13} $$

where \( \overline{Accuracy}_{it - 1} \) is analyst \( i \)’s average revenue forecast accuracy in year \( t - 1 \), calculated as

$$ \overline{Accuracy}_{it - 1} = \frac{1}{J}\sum_{j = 1}^{J} Accuracy_{ijt - 1}. \tag{14} $$

We find that 11.23% of the analyst-year observations correspond to analysts who stop forecasting revenues in general (see footnote 11). The average accuracy measure \( \overline{Accuracy}_{it - 1} \) for these observations is 0.5472, compared to 0.7058 for the remaining observations of analysts who continue forecasting revenues. A t value of 44.65 indicates that this difference is statistically significant.
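Extending the earlier sketch to the analyst-year level, Eq. (14) and logit model (13) could be implemented as follows; as before, the input frame and column names are hypothetical:

import pandas as pd
import statsmodels.api as sm

forecasts = pd.read_csv("revenue_forecasts.csv")  # analyst, firm, year, accuracy

# Eq. (14): average the range-adjusted accuracies per analyst-year.
avg = (forecasts.groupby(["analyst", "year"])["accuracy"]
                .mean().rename("avg_accuracy").reset_index())

# Revenue_Stop = 1 if the analyst issues no revenue forecast at all in t = year + 1.
active = set(zip(avg["analyst"], avg["year"]))
avg["revenue_stop"] = [int((a, y + 1) not in active)
                       for a, y in zip(avg["analyst"], avg["year"])]
# The paper additionally requires at least one earnings or revenue forecast in t;
# that filter needs an earnings-forecast table and is omitted from this sketch.

# Logit model (13): general stop decision on lagged average accuracy.
fit = sm.Logit(avg["revenue_stop"], sm.add_constant(avg["avg_accuracy"])).fit()
print(fit.summary())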

In Table 8, Panel B, we tabulate the logit regression results from Eq. (13). We find that the odds that an analyst stops forecasting revenues in general decrease with revenue forecast accuracy; that is, analysts are more likely to stop forecasting revenues altogether if their overall revenue forecasting performance in the preceding year was poor. The odds that the analyst with the best overall revenue forecasting performance in the preceding forecast period stops forecasting revenues are 93.5% smaller than the corresponding odds for the analyst with the worst overall performance (see footnote 12).

5.6 Influence of revenue forecast accuracy on analyst career prospects

Analysts’ earnings forecast accuracy significantly influences their career prospects. The literature (e.g., Hong and Kubik 2003; Hong et al. 2000; Ke and Yu 2006) finds that analysts are more likely to move to a brokerage house of higher prestige if their earnings forecasts are more accurate, and that relatively inaccurate forecasts increase the likelihood that an analyst is demoted. We argue that analysts’ career prospects depend not only on their earnings forecasts but also on their revenue forecasts. In line with this expectation, Ertimur et al. (2011) find that analysts who publish revenue forecasts in addition to their earnings forecasts have more favorable career outcomes in terms of moving to more prestigious brokerage houses. However, we conjecture that career outcomes are affected not only by the existence of revenue forecasts but also by their accuracy. We argue that analysts who issue more accurate revenue forecasts have better career prospects; that is, they are more likely to be promoted in the next period and less likely to be demoted.

We introduce two binary variables that measure whether analysts are promoted (\( Promoted_{it} \)) or demoted (\( Demoted_{it} \)). \( Promoted_{it} \) is set to 1 if analyst \( i \) moves to a high-prestige brokerage house and/or is elected as an All-Star in period \( t \), and set to 0 otherwise. \( Demoted_{it} \) is set to 1 if analyst \( i \) moves from a high-prestige brokerage house to a non-high-prestige brokerage house or completely stops issuing forecasts in period \( t \), and set to 0 otherwise. We argue that analysts’ prestige is defined by either their employer or their possible All-Star status. Analysts who are employed by more prestigious brokerage houses are likely to benefit from this prestige and might even receive higher salaries. We classify the ten biggest brokerage houses each year as high prestige. Similar to Hong and Kubik (2003), we measure size by the number of analysts employed and argue that a binary classification is more reasonable than a continuous or finer discrete measure because brokers’ prestige is not linear in size. Additionally, we classify brokerage houses rated at least 9.0 in the Carter–Manaster Ranking as high prestige (see footnote 13). The Carter–Manaster Ranking strongly coincides with the classification based on broker size.
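A compact sketch of this prestige classification, assuming a hypothetical employment table with one row per analyst-year (the file, column names, and simplifications are ours; All-Star elections and database dropouts are left out):

import pandas as pd

emp = pd.read_csv("analyst_employment.csv")  # analyst, year, broker, cm_rating

# Broker size = number of analysts employed per year; the ten largest brokers
# in each year are classified as high prestige.
size = (emp.groupby(["year", "broker"])["analyst"].nunique()
           .rename("n_analysts").reset_index())
top10 = (size.sort_values(["year", "n_analysts"], ascending=[True, False])
             .groupby("year").head(10)[["year", "broker"]])
top10["big_broker"] = True

emp = emp.merge(top10, on=["year", "broker"], how="left")
emp["high_prestige"] = (emp["big_broker"].fillna(False).astype(bool)
                        | (emp["cm_rating"] >= 9.0))

# Promoted/Demoted from year-over-year changes in prestige status; an analyst's
# first observed year is treated as a non-prestige baseline in this sketch.
emp = emp.sort_values(["analyst", "year"])
prev = emp.groupby("analyst")["high_prestige"].shift().fillna(False).astype(bool)
emp["promoted"] = (emp["high_prestige"] & ~prev).astype(int)
emp["demoted"] = (~emp["high_prestige"] & prev).astype(int)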

Analysts who drop out of the database (i.e., who do not issue any forecast in period \( t \) but did so in \( t - 1 \)) are assumed to have been terminated. We cannot exclude the possibility that some analysts who disappear from the database left the profession voluntarily because of retirement or a better job offer. However, we assume that most dropouts occur because the analyst was fired.

We examine promotion and demotion separately because these distinct outcomes might be driven by different factors (see footnote 14). We run the following logistic regressions:

$$ \begin{aligned} Promoted_{it} & = \gamma_{0} + \gamma_{1} \overline{Accuracy}_{it - 1} + \gamma_{2} \overline{EPS\_Accuracy}_{it - 1} \\ & \quad + \gamma_{3} \overline{Boldness\_EPS}_{it - 1} + \gamma_{4} \overline{Companies}_{it - 1} + \varepsilon_{it} \end{aligned} \tag{15} $$

and

$$ \begin{aligned} Demoted_{it} & = \gamma_{0} + \gamma_{1} \overline{Accuracy}_{it - 1} + \gamma_{2} \overline{EPS\_Accuracy}_{it - 1} \\ & \quad + \gamma_{3} \overline{Boldness\_EPS}_{it - 1} + \gamma_{4} \overline{Companies}_{it - 1} + \varepsilon_{it} \end{aligned} \tag{16} $$

where \( \overline{Accuracy}_{it - 1} \) is the mean range-adjusted revenue forecast accuracy over all firms that analyst \( i \) covers in year \( t - 1 \); \( \overline{EPS\_Accuracy}_{it - 1} \) is the mean range-adjusted earnings forecast accuracy over all firms that analyst \( i \) covers in year \( t - 1 \); \( \overline{Boldness\_EPS}_{it - 1} \) is the mean boldness over all firms that analyst \( i \) covers in year \( t - 1 \), where the binary variable \( Boldness_{ijt - 1} \) is set to 1 if analyst \( i \)’s earnings forecast for firm \( j \) in fiscal year \( t - 1 \) lies above or below both the analyst’s prior forecast and the consensus forecast immediately before the forecast revision, and set to 0 otherwise; and \( \overline{Companies}_{it - 1} \) is the range-adjusted number of firms for which analyst \( i \) issues at least one earnings forecast in year \( t - 1 \).
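Given an analyst-year frame containing these lagged regressors, Eqs. (15) and (16) reduce to two standard logit fits; a sketch with hypothetical file and column names (the coverage filter of at least three firms is described below):

import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("analyst_year_panel.csv")
panel = panel[panel["n_firms_lag"] >= 3]  # require coverage of at least three firms

rhs = "avg_accuracy_lag + eps_accuracy_lag + boldness_eps_lag + companies_lag"
promoted_fit = smf.logit("promoted ~ " + rhs, data=panel).fit()  # Eq. (15)
demoted_fit = smf.logit("demoted ~ " + rhs, data=panel).fit()    # Eq. (16)
print(promoted_fit.summary())
print(demoted_fit.summary())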

We expect a positive influence of \( \overline{Accuracy}_{it - 1} \) on the likelihood that an analyst is promoted, and a negative influence on the likelihood of being demoted. The control variables \( \overline{EPS\_Accuracy}_{it - 1} \), \( \overline{Boldness\_EPS}_{it - 1} \), and \( \overline{Companies}_{it - 1} \) are based on analyst \( i \)’s earnings (not revenue) forecasts. We employ these controls because prior literature (e.g., Hong and Kubik 2003) finds that they influence analysts’ career prospects.

We use a data sample of 25,668 observations at the analyst-year level; that is, each analyst is included only once per year in the sample. The independent variables in Eqs. (15) and (16) are average values, calculated over all firms analyst \( i \) covers in year \( t - 1 \). To ensure meaningful averages, we require that analysts cover at least three firms in a particular year. The sample includes data from 1999 to 2014.

In Table 9, Panel A, we report statistics on analysts who were promoted or demoted. We find that 15.70% of the observations correspond to demotions, most of which are due to analysts dropping out of the database, and 3.94% correspond to promotions. We also find that demoted analysts have an average revenue forecast accuracy in the preceding period (0.5449) that is significantly lower than that of promoted analysts (0.7312) and analysts whose career remains unchanged (0.6920). T tests provide evidence that the average values differ significantly (t value of 46.57 for demoted vs. unchanged; t value of 10.17 for promoted vs. unchanged). A year-wise comparison of the demoted, promoted, and unchanged groups shows that, in every year, demotion is associated with lower-than-average revenue forecast accuracy in the preceding period, while promotion is associated with higher-than-average accuracy.

Table 9 Analysis of the impact of revenue forecast accuracy on analysts’ career prospects

We report the logit regression results for Eq. (15) in Table 9, Panel B. As expected, we find that the likelihood that an analyst is promoted increases with prior revenue forecast accuracy. We also find that the odds of being promoted increase with prior earnings forecast accuracy and with the number of companies followed. In Table 9, Panel C, we report the results for Eq. (16). We find that the odds that an analyst is demoted decrease with prior revenue forecast accuracy; that is, more accurate revenue forecasters are less likely to be demoted in the next year. We also find that the odds of being demoted decrease with prior earnings forecast accuracy, prior earnings forecast boldness, and the number of covered firms. In sum, the empirical findings support the expectation that analysts’ career prospects are determined not only by prior earnings forecasting performance but also by prior revenue forecasting performance.

6 Conclusion

This study investigates analysts’ revenue forecasts and identifies determinants of their accuracy. We find that revenue forecast accuracy is determined by forecast and analyst characteristics. All else equal, revenue forecasts are more accurate when issued closer to the revenue announcement and shortly after other revenue forecasts for the same firm. Their accuracy increases with analysts’ experience in forecasting revenues and with their forecast frequency. Forecasts are also more accurate when issued by All-Star analysts, by analysts who cover fewer industries in their revenue forecast portfolio, and by analysts who also forecast the corresponding earnings for the same firm-year. Our findings indicate that revenue forecast accuracy is driven by factors similar to those driving earnings forecast accuracy.

Our results also reveal that revenue forecast accuracy can be explained by analysts’ prior performance in forecasting revenues and earnings. This finding suggests that the quality of analysts’ revenue forecasts is driven by their forecasting ability, and that forecasting earnings and forecasting revenues are closely connected tasks. We use our results to develop a model that predicts revenue forecast usefulness based on analyst and forecast characteristics, and we show that it can identify relatively accurate revenue forecasts. Because the model relies on determinants that are already observable at the beginning of a fiscal year, investors can use it to identify accurate forecasts ex ante.

We also find that revenue forecasts are more accurate when they are consistent with the respective earnings forecasts; that is, when the analyst expects either an increase or a decrease in both revenues and earnings relative to the preceding period. Analysts who expect one of these items to increase and the other to decrease either anticipate a strong or unusual development of the expense components or have conducted an insufficient forecasting process. Additionally, we find that bold revenue forecasts are more accurate than herding ones; we posit that analysts who issue bold forecasts have better information. We find that revenue forecasts are less accurate when the analyst also supplies cash flow forecasts, which might indicate that analysts who cannot accurately forecast revenues regard issuing other kinds of forecasts as necessary. However, the decrease in revenue forecast accuracy is partly offset if the analyst also supplies earnings forecasts.

Our results furthermore reveal that analysts with a poor revenue forecasting performance are more likely to stop forecasting revenues than analysts with a better performance. Inaccurate revenue forecasters might have suffered reputational damage and are therefore less motivated to issue revenue forecasts in succeeding years. Nevertheless, they might still produce revenue forecasts as an intermediate step toward forecasting earnings but simply not publish them. We also find that revenue forecast accuracy affects analysts’ career prospects in terms of being promoted or terminated; that is, analysts’ career prospects are driven not only by their earnings forecasting performance but also by their revenue forecasting performance.

The results of our study are important to investors who use analysts’ revenue forecasts for their investment decisions. Academic researchers who are interested in analysts’ research reports and use revenue forecasts in various research settings can also benefit from our findings. With our study, we contribute to the academic literature on analyst forecasts by elucidating revenue forecasts, one of the most important kinds of forecasts, and the determinants of their accuracy. Our findings also support investors who aim to evaluate earnings forecasts: revenue forecasts help in assessing the reliability of earnings forecasts and disclose whether expected developments in earnings are due to expected changes in revenues or in expenses. Investors and researchers who aim to identify accurate revenue forecasts can benefit from our findings because we document accuracy determinants and introduce a model that predicts revenue forecast accuracy based on factors observable ex ante.