1 Introduction

Since Peters and Waterman (1982) first identified what they felt were the best managed companies in In Search of Excellence: Lessons from America’s Best Run Companies, a steady stream of books and periodicals with lists of superlative companies has been published in the business press. These lists range from companies with the toughest bosses to those providing employees with the most support in managing work/family issues. Among the best known of these rankings is Fortune’s “Most Admired Companies,” which has appeared each year since 1983 and ranks both the “most admired” and “least admired” companies based on an annual survey.

The increased popularity of these rankings has prompted scholars to question whether the rankings carry any information content, i.e., whether there is a relation between a company’s inclusion on a best practices list and the raw and risk-adjusted returns to its investors at the time of and subsequent to the announcement of the list. Clayman (1987, 1994), investigating “In Search of Excellence” firms, finds mixed results regarding the post-publication performance of the excellent firms. Such divergent results seem commonplace in the investigation of a survey’s impact on shareholder returns. In some instances (see, e.g., Filbeck et al. 1997), there is evidence indicating that Fortune’s higher-ranked companies do indeed outperform the market. Others have found that the returns to these companies are not significantly different from market returns or from a matched set of unranked but otherwise similar companies (see, e.g., Kolodny et al. 1989). In such instances, perhaps the information content of the surveys is simply a very specific detail about a company (e.g., where a working mother might best seek employment, based on Working Mother magazine’s “100 Best Companies for Working Mothers”).

The focus of this study is a subject which has yet to be explored in the context of company rankings: whether there are cumulative or interactive effects on shareholder wealth from being listed on one or more of these rankings. As such, we seek to determine whether the information content from being listed on additional surveys may be viewed as incremental information that validates the first ranking or whether it is viewed as a redundant acknowledgement of what the market had already established.

The study of agreement or disagreement among assessments of a company’s prospects has been a frequent topic in the analysts’ forecast literature (see, e.g., Ramnath et al. 2008, for a review). One strand of this literature focuses on the dispersion in analysts’ forecasts (e.g., Johnson 2004), while another studies the consequences of analysts’ consensus (e.g., Bradshaw 2004) for subsequent shareholder returns. This paper is more closely connected with the latter perspective of analyzing whether there is incremental value in evidence of consensus, in this case inclusion on an additional ranking of best corporate practices. In general, best corporate practices are recognized by the market: Ferreira et al. (2008) find that large firms that receive ISO 9000 certification for quality management experience positive, statistically significant abnormal returns over longer-term horizons. Unlike analysts’ forecasts, which attempt to directly predict the future prospects of a company, the intent of a corporate ranking is more indirect.

By testing for the effects of cumulative listings on shareholder wealth, we explore a new research question and contribute to the literature that asks the fundamental question these surveys may help answer, viz., which are the best managed companies. This perspective has been raised by others such as Russo and Fouts (1997), who hypothesize that environmental performance may proxy for effective management. Their underlying assumption is that environmental management and the associated performance outcomes are integral parts of effective management. This perspective defines effective management in holistic terms by considering all corporate stakeholders, including the environment, in management decisions.

In the same way then, we might presume that a company that treats its employees well, or is otherwise known for its commitment to multiple stakeholders, might be viewed as a well-managed company, and is therefore more likely to be a consistently superior performer by virtue of its ability to manage the many internal and external challenges that face modern managers. Our hypothesis is that to the extent that we discover cumulative or interactive effects among the rankings, we are confirming the belief that the survey results provide potential investors a better ability to identify the best managed companies and, as such, the companies most deserving of their investment dollars. In this study we have chosen four of the most well-known surveys: Fortune’s Most Admired Companies, Business Ethics’ 100 Best Corporate Citizens, Working Mother’s 100 Best Companies for Working Mothers and Fortune’s Best 100 Companies to Work For in America as the basis for testing our hypothesis.

The rest of the paper is organized as follows: In Sect. 2, we review the literature. Section 3 contains the data selection and descriptions of each of our included surveys. The research hypotheses and methods, along with the corresponding empirical results on the rankings samples and associated benchmarks, are presented in Sect. 4. Section 5 contains some additional robustness tests on our results. In Sect. 6, we discuss our results and offer concluding remarks.

2 Literature review

Studies of Fortune’s corporate awards are numerous and offer mixed results. Filbeck et al. (1997), Vergin and Qoronfleh (1998), and Anderson and Smith (2006) study the performance of Fortune’s “Most Admired Companies,” and each finds that the most admired firms do indeed outperform the market. In contrast, Statman et al. (2008) find the opposite. More recently, Anginer and Statman (2010), investigating the returns to America’s Most Admired Companies, find that stocks of the most admired companies had lower returns than stocks of spurned companies (those with the lowest Fortune scores) over the period from April 1983 through December 2007. They also observe greater volatility in the spurned portfolio.

In two related works, Preece and Filbeck (1999) and Filbeck and Preece (2003) examine the returns to companies named to Working Mother magazine’s “100 Best Companies for Working Mothers” and Fortune’s “Best 100 Companies to Work For in America,” respectively, and compare both raw and risk-adjusted returns of these best practices companies to the S&P 500 and a matched sample of companies. In the case of the “Working Mothers” portfolio, they find that investors do not earn statistically significant excess raw returns relative to the S&P 500; after adjusting for risk, however, the portfolio outperforms the market but underperforms the matched sample portfolio. In the “Best Companies to Work For” case, the best practice portfolio outperforms the matched sample portfolio on both a raw and a risk-adjusted basis. Edmans (2011) reports similar results, finding superior returns to the “Best Companies to Work For.” Business Ethics, which publishes the “100 Best Corporate Citizens” (now through Corporate Responsibility (CR) Magazine), identifies good corporate citizens from among companies included in the Russell 1000 Index, Domini 400 Index, and S&P 500 Index.

Verschoor and Murphy (2002) are the first to investigate the financial performance of the best corporate citizens by examining the companies in the 2001 Business Ethics survey. They separate out the top 100 corporate citizens that are also listed in the S&P 500, the Fortune 500, and Fortune’s most admired companies and compare them to companies from these lists that are not among the top 100 corporate citizens. Looking at measures such as total profitability, market capitalization, and the Fortune most admired scores, they conclude that the financial performance of the top 100 corporate citizens is at least as good as that of the other companies. Filbeck et al. (2009) perform additional tests on the companies from the Business Ethics rankings and find that a portfolio formed from these firms outperforms both the S&P 500 and a sample of matched firms. Statman (2006) shows that the returns of socially responsible indices are generally higher than the returns of the S&P 500 Index; however, while the monthly alpha of the DSI 400 Index for the period May 1990–April 2004 exceeds that of the S&P 500 Index by 0.09 %, none of the alphas are statistically significant. Nelling and Webb (2008) find that strong stock market performance leads to greater firm investment in corporate social responsibility (CSR) activity more than CSR activity improves financial performance.

Fang and Peress (2009) analyze cross-sectional relationships between media coverage and expected stock returns. Their results show a significant return premium on stocks with little or no media coverage and suggest that the mass media’s power to influence security pricing comes from its ability to reach large numbers of people, not from its ability to form opinions. Palmon et al. (2009), investigating recommendations made by columnists for Business Week, Forbes, and Fortune, find that recommendations that contain references to management or rumors of mergers/acquisitions result in greater market reactions. The four sample listings selected for this study, representing three media outlets, offer the opportunity to study a cross-section of outlets to see the impact of media coverage. According to Fortune’s media kit (http://www.fortunemediakit.com/readerpro.htm), Fortune claims a readership of 4,384,000. Working Mother’s readership is in excess of 2,200,000 (http://mediakit.workingmother.com/web?service=vpage/4275). According to CR Magazine, publisher of the Best Corporate Citizens survey (http://www.thecro.com/files/CR%20Media%20Guide%202010_new.pdf), over 3 million individuals are reached by its print and online editions, while Corporate Responsibility Magazine itself has 20,000 subscribers (http://www.bioportfolio.com/corporate/company/3487/Corporate-Responsibility-Magazine.html).

3 Sample selection and description of the four listings

3.1 The most admired companies (MAC) list description

Each year since 1983, Fortune magazine has published a list of firms deemed America’s “Most Admired Companies.” This designation is based on a survey of business executives and analysts who are asked to rate companies on such factors as product quality and reputation of management. The survey asks more than 8,000 financial analysts, senior executives, and outside directors to rate the ten largest companies in their own industry on eight reputational indicators on a scale of zero (poor) to ten (excellent). The characteristics include the quality of management; the stewardship of corporate assets; financial soundness; the value of long-term assets; the quality of the products or services; innovativeness; the ability to attract, develop, and keep talented people; and the responsibility to the community and the environment. The eight scores are then averaged to arrive at a final score. For example, in 2010 Apple had an average score of 7.95, the highest rating in the survey, while Japan Airlines was the lowest-rated firm with a score of 2.96.

3.2 The best companies to work for (BCWF) list description

Fortune created the annual “100 Best Companies to Work For in America” award in the January 12, 1998 issue. The “100 Best Companies to Work For in America” list is significantly different from other awards in that the authors, Robert Levering and Milton Moskowitz, survey employees rather than “experts” and company executives. Working Mother uses corporate reporting of work/family policies such as the use of flexible work schedules and on or near-site childcare to create its list of the “100 Best Firms For Working Mothers.” In Working Mother’s case, companies fill out a comprehensive survey to become eligible for the recognition.

In the initial survey of the “100 Best Companies to Work For in America,” Levering and Moskowitz selected 238 companies from a database of more than 1,000 firms that they considered most suitable for the award. Companies must be at least 10 years old and have a minimum of 500 employees. One hundred sixty-one firms agreed to participate out of the 238 identified companies. The 161 candidate companies were asked to randomly select 225 employees to receive the Great Place to Work Trust Index. The survey, developed by the Great Place to Work Institute of San Francisco, evaluates trust in management, pride in work/company, and camaraderie. Companies also fill out a comprehensive 29-page questionnaire developed by Hewitt Associates. Finally, company officials submit employee benefits booklets, videos and newsletters.

Many of the areas that Working Mother considers for its award, such as support for work/family balance, are incorporated in the Fortune “100 Best Firms” award. In fact, several firms (e.g., Corning and Johnson & Johnson) show up on both lists. However, the Fortune award is broader, considering both work/family issues and matters that are important to all employees. For instance, compensation is crucial in the Fortune survey: companies that offer stock options to the majority of employees, not just top management, get high ratings from workers. But again, this award also encompasses non-pecuniary benefits beyond how much a worker is paid.

3.3 The best corporate citizens (BCC) list description

In the press release of its inaugural issue of the 100 Best Corporate Citizens in March of 2000, the editors of Business Ethics noted that serving stockholders is not the only definition of corporate success. They further wrote that among the benefits of corporate citizenship are better employees, customer loyalty, minimal risks of litigation, and possibly a lower cost of capital. In their view, a good corporate citizen is one that excels at serving a variety of stakeholders well. To make this determination, Business Ethics collected information on companies included in the Russell 1000 Index, Domini 400 Index and S&P 500 Index. KLD Research and Analytics was the initial source of the ratings data for the companies considered in the rankings. When Corporate Responsibility Magazine took over the rankings in 2007, IW Financial assumed responsibility for the research behind the rankings.

In determining its overall score for companies, Business Ethics identified seven stakeholder groups: shareholders, community, minorities and women, employees, environment, non-US stakeholders, and customers. In each category, KLD indicated where the companies have strengths and concerns. The net score in each category is the number of strengths minus the number of concerns.

Examples of strengths in the employee category might include profit sharing, retirement benefits, and employee involvement, while poor union relations and workforce reductions may constitute concerns. Environmental strengths might include life cycle analysis, recyclable products, and emission controls. Examples of concerns are toxic emissions, superfund litigation, and local statute violations.

Since the seven variables have different scales, each is standardized to indicate performance relative to peers: the score represents the number of standard deviations above or below the mean of the peer group. For the shareholder performance measure, a 1-year total return (stock appreciation plus dividends) is used, standardized in the same manner.

The score for each category is then included in an equally-weighted average of all seven stakeholder measures. As a final step, a selection committee conducts additional research on any corporate scandals, or other negative issues that may have arisen, and may recommend that a firm be eliminated from further consideration. For example, companies may be removed for accounting fraud, or if they lost money for 2 or more years in a row.
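The scoring arithmetic described above can be summarized in a short sketch. The snippet below is illustrative only: the category names, peer grouping, and values are hypothetical placeholders rather than KLD’s actual data, and it simply standardizes each stakeholder category against the peer group and averages the seven standardized scores with equal weights.

```python
import pandas as pd

# Hypothetical raw net scores (strengths minus concerns) for three firms;
# in the actual rankings these figures came from KLD Research and Analytics.
raw = pd.DataFrame(
    {
        "shareholders": [0.12, -0.03, 0.25],   # 1-year total return used for this category
        "community": [2, 0, 1],
        "minorities_women": [1, -1, 2],
        "employees": [3, 1, 0],
        "environment": [-1, 2, 1],
        "non_us_stakeholders": [0, 1, -1],
        "customers": [1, 0, 2],
    },
    index=["Firm A", "Firm B", "Firm C"],
)

# Standardize each category within the peer group: number of standard
# deviations above or below the peer-group mean.
standardized = (raw - raw.mean()) / raw.std(ddof=0)

# Overall score: equally-weighted average of the seven stakeholder measures.
overall = standardized.mean(axis=1).sort_values(ascending=False)
print(overall)
```

In the published rankings, the selection committee could still remove a firm from the resulting ordering for scandals or repeated losses, a step not captured in this sketch.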

3.4 The best companies for working mothers (BCWM) list description

Since its initial publication in 1986, Working Mother’s annual “100 Best Companies for Working Mothers” has emerged as one of the most important corporate awards. The number of companies recognized by Working Mother has grown from 35 in 1986 to 100 today, with the number of entrants vying for the award rising commensurately. The increase is due not only to growing interest among firms hoping to earn the award, but also to the extraordinary increase in the number of firms offering family-friendly benefits. According to the Wall Street Journal, writers used a mover’s dolly to cart the volumes of applications submitted by firms.

Working Mother bases its award on five factors (four prior to 1996): pay, opportunities for women to advance, childcare assistance, other family-friendly benefits, and, since 1996, workplace flexibility as a separate category. Specific policies, such as on-site or near-site childcare facilities, flexible work schedules, job sharing, reduced work options, compressed work weeks, paid paternity leave, and leave to care for the elderly, are considered family-oriented policies by Working Mother as well as other publications.

3.5 Hypothesis

The first hypothesis is that there will be a positive market reaction to the announcement of a firm’s inclusion in each of the four surveys included in this study. This hypothesis is consistent with the market’s perception that the share prices of these firms are worthy of upward reevaluation when the news associated with each survey is published. This scenario would support the theory that the positive benefits accruing to the firm and thus to the shareholders, such as reduced turnover, enhanced recruitment, and high worker morale, would outweigh the costs of providing the benefits. In addition, to the extent that we discover cumulative or interactive effects among the rankings, we validate the belief that the survey results give potential investors a better ability to identify which are the best managed companies and, as such, the companies most deserving of their investment dollars.

Alternatively, the market may not respond at all to the announcement, which would indicate either that the market does not value the information contained in the survey(s) or that the “news” contained in the survey(s) is already fully valued.

Our second hypothesis is that the impact of inclusion in surveys will produce long-term superior returns when adjusted for risk. While the survey information becomes public at the time of associated press releases, dissemination of the news can be further confounded by readers receiving news from print or Internet sources on different days due to differences in mailing times or online access. For these reasons, the use of return measures involving holding periods measured in months rather than days may be more revealing. To test our second hypothesis, we employ a number of long-term risk-adjusted performance measures.

3.6 The study sample

Our sample period for this study includes 9 years (2000–2008) of each of the four publications. To be included in the sample, the company must meet the following criteria:

1. The sample companies must have return records on the Center for Research in Security Prices (CRSP) Daily Combined Return File for the 301 trading days immediately prior to the announcement date.

2. The sample companies must have return records on the CRSP Daily Combined Return File from the announcement date until the next press release date of the survey.

3. The company must have complete data on Standard and Poor’s Research Insight.

Across the 9 years of the surveys, there were 417 viable announcements from the BCWF, 3,460 from the MAC, 872 from the BCC, and 497 from the BCWM. These 5,246 company-year announcements constitute the whole sample. The number of firms per year per survey is reported in Panel A of Table 1.

Table 1 Descriptive statistics for the whole sample, matched sample, and each individual survey sample

Next, we construct a matched sample on the basis of market capitalization and the book value of common equity-to-market value of common equity (BE/ME) ratio. Barber and Lyon (1997) document the empirical power and specification of test statistics designed to detect long-run abnormal returns, including those based on a reference portfolio approach. They argue that matching sample companies to control companies of similar size and BE/ME ratio will correct for the possible sources of misspecification and yield well-specified test statistics because it alleviates the new listing, rebalancing, and skewness biases. Following Loughran and Ritter (1995), we do not match the sample by market capitalization and industry, for two reasons: first, our matching method minimizes possible industry misclassification; and second, suitable industry matches are not always possible due to the limited number of companies within an industry that are comparable to the sample companies.

We calculate the previous year-end market capitalization and BE/ME ratio of all stocks that have available data from Research Insight for each year. We define the market value of common equity (ME) as the previous year-end share price times the number of shares outstanding, and the BE/ME ratio as the book value of common equity from Research Insight divided by the previous year-end market value of common equity. We delete companies with negative book-to-common-equity ratios. Our potential universe of matching companies consists of all remaining stocks that are not in our whole sample. In order to derive the best possible match for each firm in our whole sample, we calculate the following matching score (MS) for each sample stock against each of the stocks in the matching universe:

$$ MS = \left[ \frac{X_{1}^{B} - X_{1}^{M}}{(X_{1}^{B} + X_{1}^{M})/2} \right]^{2} + \left[ \frac{X_{2}^{B} - X_{2}^{M}}{(X_{2}^{B} + X_{2}^{M})/2} \right]^{2} $$
(1)

where $X_{1}$ denotes the first matching characteristic (market capitalization); $X_{2}$ denotes the second matching characteristic (the BE/ME ratio); the superscript $B$ refers to the firm from the whole sample; and the superscript $M$ refers to the firm from the matching universe.

Then, for each stock in whole sample, we select the stock from the matching universe with the smallest MS. We repeat the same procedure for each sample year in our study to create the matched sample.
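For concreteness, the matching-score search can be sketched as follows. This is a minimal illustration of Eq. (1), assuming two pandas DataFrames indexed by a firm identifier with hypothetical columns 'mktcap' (prior year-end market capitalization) and 'be_me' (BE/ME ratio); it is not the authors’ actual code.

```python
import pandas as pd

def matching_score(x_b, x_m):
    # One term of Eq. (1): squared symmetric percentage difference between the
    # sample firm's characteristic (x_b, a scalar) and the candidates' (x_m, a Series).
    return ((x_b - x_m) / ((x_b + x_m) / 2.0)) ** 2

def find_matches(sample, universe):
    # sample, universe: DataFrames with columns 'mktcap' and 'be_me'; firms with
    # negative book equity are assumed to have been dropped already.
    matches = {}
    for firm, row in sample.iterrows():
        ms = (matching_score(row["mktcap"], universe["mktcap"])
              + matching_score(row["be_me"], universe["be_me"]))
        matches[firm] = ms.idxmin()  # candidate with the smallest MS is the match
    return pd.Series(matches, name="matched_firm")
```

Repeating this search year by year, with candidates drawn only from stocks outside the whole sample, reproduces the matching procedure described above.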

The characteristics of our whole sample, the matched sample, and each individual survey sample are presented in Panel B of Table 1. The table shows that the whole sample (and also each individual survey sample) and matched sample are very similar in market capitalizations and the BE/ME ratio. Comparing the four surveys, on average the BCWM sample has the largest market capitalization, while the MAC sample has the highest BE/ME ratio. The MAC sample has the smallest market capitalization on average, while BCWF sample has the lowest BE/ME ratio.

4 Stock performance of the four survey samples

In this section, we examine the announcement effect of being included on any of the four survey lists. Our tests are conducted in two parts. First, in Sect. 4.1 we examine the short-run market impact for these samples using an event study. Then we examine the long-run stock performance using methods described in Sect. 4.2.

4.1 Short-run market impacts

Consistent with previous research, our first hypothesis is that firms will exhibit a positive market reaction to the announcement of inclusion in any of these four surveys. This hypothesis is consistent with the market’s perception that the share prices of these companies warrant an upward re-evaluation when the news is released. This scenario would support the theory that the positive benefits accruing to the firm and thus to the shareholders would outweigh whatever costs may be incurred in establishing this enhanced reputation. Alternatively, the market may not respond to the announcement, which would indicate that either the market does not incrementally value the information contained in the survey or that the “news” contained in the survey is already fully valued.

Although each survey has a well-defined publication or release date, it is possible that some companies are notified of their inclusion in advance and leak that information to the press or their shareholders a few days prior to this event date, which would argue for a price run-up leading up to the event date. It is also possible that word could spread further after the press release as companies issue their own press releases touting their inclusion on the list. Therefore, identifying a specific date for a market reaction is somewhat problematic. Our study is not unique in this regard; we use a process similar to Filbeck and Preece (2003) to establish an appropriate event window. To check for possible information leakage prior to the press release date, we conduct a search of major newspapers (e.g., The Wall Street Journal and The New York Times) and Lexis/Nexis. If a newswire carries a press release prior to the news release date of any of the annual surveys, the earlier date becomes the event date. For example, the 100 BCWF list for 2008 was released in the February 4, 2008, issue of Fortune magazine, while the news that Scottrade was named to the list for the first time was announced by Business Wire as early as January 22, 2008. Other news releases for the companies on the list spanned the period from January 22, 2008, until 2 weeks after the Fortune issue date (e.g., Paychex Inc.’s inclusion on the list was reported on February 18 by Business & Finance Week). If a company’s news release follows the official release of the survey, the survey release date serves as the event date. Depending on the survey year, the official release dates of the BCWF and MAC surveys are usually 2–3 weeks before the publication date of Fortune magazine.

Standard event-study methodology is followed as outlined in Mikkelson and Partch (1985). We test the share price response to the release of each survey by calculating daily abnormal returns (ARs) and cumulative abnormal returns (CARs) over our event window (days −5 to +5). Expected returns over the interval (−5, 5) are based on market-model parameters estimated over the trading-day period (−301, −46), and significance is tested following Patell (1976). Table 2 reports the results of the event study for the whole sample and each individual survey sample. Panel A shows the abnormal returns around the event date, and Panel B shows the CARs.
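A bare-bones version of this calculation is sketched below. It estimates market-model parameters over days (−301, −46), computes abnormal returns over (−5, +5), and sums them into a CAR; the Patell (1976) standardization used for the significance tests is omitted, and the variable names are illustrative.

```python
import numpy as np
import pandas as pd

def market_model_car(stock_ret, mkt_ret, event_loc,
                     est_window=(-301, -46), evt_window=(-5, 5)):
    # stock_ret, mkt_ret: pd.Series of daily returns aligned on trading days;
    # event_loc: integer position of the event date within those series
    # (at least 301 trading days of prior history are assumed).
    est = slice(event_loc + est_window[0], event_loc + est_window[1] + 1)
    evt = slice(event_loc + evt_window[0], event_loc + evt_window[1] + 1)

    # Market-model parameters from the estimation window: R_i = a + b * R_m + e
    b, a = np.polyfit(mkt_ret.iloc[est], stock_ret.iloc[est], 1)

    # Daily abnormal returns and the cumulative abnormal return over the event window
    ar = stock_ret.iloc[evt] - (a + b * mkt_ret.iloc[evt])
    return ar, ar.sum()
```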

Table 2 Results of the event study for the whole sample and each individual survey

The results show a positive cumulative abnormal return of 0.28 % (significant at the 1.0 % level) for the event window (−5, 5) for the MAC sample, and significantly positive cumulative abnormal returns (at the 1.0 % level) for both the event window (1, 5) and the entire event period (−5, 5) for the BCC and BCWM samples. For the BCWF sample stocks, we observe a statistically significant abnormal return of 0.32 % (at the 5.0 % level) on the event date, although not over the event window or the entire event period. Combining the four surveys, we observe a positive cumulative abnormal return of 0.48 % (significant at the 1.0 % level) over the entire event period (−5, 5) for the whole sample, indicating that overall these surveys convey new positive information that had not been reflected in stock prices.

The Best Companies to Work For is the first ranking released each calendar year (usually in January), followed by the Most Admired Companies (usually in February), then the Best Corporate Citizens (usually in March), and finally the Best Companies for Working Mothers (usually in September). Given these release dates, three of the four surveys appear in close proximity. Moreover, some companies may be listed in more than one survey during the year. For example, Microsoft was named to the BCWF, MAC, and BCWM lists for 2008. As a result, the announcement effects of the four surveys are intertwined, making it more challenging to differentiate the effects of the individual surveys. Also, some companies repeat as “winners” across time in individual surveys. For example, Alcon, Inc., was recognized as one of the BCWF by Fortune magazine for the 10th consecutive year in 2008. It is possible that immediately prior to the press release date of the BCWF survey, investors had already anticipated the inclusion of Alcon, Inc., on the BCWF list. The inclusion of such companies may not convey incremental information to the market, and therefore the market reaction may be muted for these stocks. Conversely, if a company is selected for one of the lists for the first time, this might bring new information to the market, spurring a positive reaction. To examine these issues, we construct appropriate sub-samples from our overall sample.

First, we explore whether there are differential announcement effects to a survey’s release for newly listed versus repeat companies. Also, within a given year, we explore whether winners selected across different surveys bring additional information to the market with each consecutive survey release. To address these issues, we construct three groups of sub-samples (which constitute 16 sub-samples) each year:

  • The “new listing” sample contains only companies that were not listed in the same survey in the previous year. For the BCC sample, since 2000 was the first year of the survey, we include all stocks on the 2000 listing; for the other three surveys, we include only companies that were not listed in the same survey in the previous year.

  • The “repeat winners” sample contains stocks that are listed in the same survey for two or more consecutive years.

  • The “consecutive events” samples contain stocks that are listed in two or more different but consecutively released surveys during a 1-year window. The “consecutive 2 events” sample includes consecutive winners from BCWF to MAC, from MAC to BCC, from BCC to BCWM, and from BCWM to BCWF. The “consecutive 3 events” sample contains stocks listed in three consecutive surveys during a 1-year window, and the “consecutive 4 events” sample contains stocks listed in all four consecutive surveys during a 1-year window.

Descriptive statistics for these sub-samples are reported in Table 3. We report the event study results for our sub-samples in Table 4. The new listing sample displays positive abnormal returns over the event window and exhibits a statistically significant (at the 1.0 % level) cumulative abnormal return of 2.03 % during the entire event period (−5, 5). However, this result is driven by the CARs of 1.46 and 2.94 % from the MAC and BCC survey subsamples, respectively. The firms added to the BCWF and BCWM surveys do not show statistically significant positive abnormal returns during the event window or event period.

Table 3 Descriptive statistics for the sub-samples
Table 4 Results of the event study for different sub-samples

We also observe statistically significant results with the CARs for our repeat winner sample across the four surveys. This result is driven by the repeat winners of MAC and BCWM surveys, which show statistically significant positive CARs for the overall (−5, 5) event window, while repeat winners of BCWF and BCC do not. Comparing the new listing sample and the repeat winners sample, we find that the new listing sample shows higher CARs on average than the repeat winners sample. This finding is true for the overall new listing sample and the new listing sample for each individual survey.

Finally, the market reacts positively to consecutive winners of MAC and the BCC and consecutive winners of BCC and BCWM, but not to the other two “consecutive 2 events” samples. We observe a positive CAR of 1.26 % (significant at 1.0 % level) over the event window of (−5, 5) for the “consecutive 3 events” sample, although the results are not significant for the “consecutive 4 events” sample. This may indicate that the BCC sample is more distinct from the others and, as such, the inclusion of a firm in the BCC coupled with its inclusion in another survey may reveal more about a company’s commitment to management excellence than its inclusion in two more similar surveys such as BCWF and BCWM.

Overall, our event study results indicate that the market reacts favorably on the days surrounding the press release date when a company initially appears on the MAC and BCC rankings. Repeat winners on these lists also experience a significant, although smaller, price effect during the event window. This is consistent with the supposition that new information is being priced when a firm is initially listed on a survey, whereas subsequent listings add relatively less information about the company’s future prospects. The same holds for consecutive winners across different listings. We find qualified support for incremental information effects across subsequent survey releases; however, being selected across all four surveys consecutively adds little new information, as we observe no significant price effect for these companies.

4.2 Long-term stock return performance

In this section, we examine the long-term return performance of the whole sample after each event date. Numerous researchers (e.g., Barber and Lyon 1997; Fama 1998; Loughran and Ritter 2000) have shown that the magnitude, and sometimes even the sign, of the long-run abnormal returns are sensitive to alternative measurement methodologies. To determine the sensitivity of our test results, we examine the long-term return performance of our sample stocks using several approaches.

We initially test the long-run stock performance of the sample stocks by forming a portfolio consisting of the top companies from each survey on their respective release (or “event”) date in 2000. This portfolio is “held” until the event date of the following year, at which point it is rebalanced to reflect the inclusion of newly listed companies and the elimination of companies not appearing on the subsequent year’s listing. We repeat this procedure for each subsequent holding period and for each survey.
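The rebalancing scheme can be sketched as follows, assuming a DataFrame of daily returns and a mapping from each release date to that year’s list; the equal weighting of listed firms is our simplifying assumption, since the weighting is not specified in the text.

```python
import pandas as pd

def survey_portfolio_returns(daily_returns, membership):
    # daily_returns: DataFrame of daily stock returns (dates x tickers);
    # membership: dict mapping each release ("event") date to the tickers on
    # that year's list. Each portfolio is held until the next event date and
    # then rebalanced, as described above.
    event_dates = sorted(membership)
    pieces = []
    for i, start in enumerate(event_dates):
        end = event_dates[i + 1] if i + 1 < len(event_dates) else daily_returns.index[-1]
        window = daily_returns.loc[start:end, membership[start]]
        if i + 1 < len(event_dates):
            window = window.iloc[:-1]       # next holding period starts on the next event date
        pieces.append(window.mean(axis=1))  # equal-weighted daily portfolio return
    return pd.concat(pieces)
```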

We use the matched sample (matched by market capitalization and BE/ME ratio) as a benchmark portfolio to test the abnormal returns of the sample stocks. We employ the Fama and French (1993) 3-factor model and a 4-factor model to test for abnormal returns. Statman et al. (2008) conclude that the 4-factor model is effective in modeling expected returns and affect, in the latter case because of the capitalization, style, and momentum factors that are part of the model. We then rerun the Fama–French 3-factor model using the Fama and MacBeth (1973) and Petersen (2009) methods to control for correlated standard errors. Next, we calculate buy-and-hold abnormal returns (BHARs) over the holding period until the next event date. Our methods and test results are discussed in the following sections.

4.2.1 Fama–French 3-factor and 4-factor models

The 3-factor model is applied by regressing the post-event daily excess returns for each sample stock i on a market factor, a size factor, and a book-to-market factor. The 4-factor model augments the Fama–French 3-factor model with an additional factor capturing the 1-year momentum anomaly reported by Jegadeesh and Titman (1993). Specifically, the 3- and 4-factor models are defined, respectively, as:

$$ R_{it} - R_{ft} = a + b(R_{mt} -R_{ft} ) + sSMB_{t} + hHML_{t} + e_{it} $$
(2)
$$ R_{it} -R_{ft} = a + b(R_{mt} -R_{ft} ) + sSMB_{t} + hHML_{t} + mUMD_{t} + e_{it} $$
(3)

where $R_{it}$ is the return on sample stock $i$ on day $t$; $R_{ft}$ is the return on 1-month Treasury bills; $R_{mt}$ is the return on a value-weighted market index; $SMB_{t}$ is the return on a value-weighted portfolio of small stocks less the return on a value-weighted portfolio of big stocks; $HML_{t}$ is the return on a value-weighted portfolio of high book-to-market stocks less the return on a value-weighted portfolio of low book-to-market stocks; and $UMD_{t}$ is the return on the two high prior-return portfolios less the return on the two low prior-return portfolios.

We run stock-by-stock regressions for each sample stock and test the t-statistics of the regression intercepts. A positive intercept, a, indicates that after controlling for the market, size, and book-to-market (and momentum) factors in returns, the sample portfolio has performed better than expected.
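A per-stock regression of this kind takes only a few lines. The sketch below assumes daily post-event excess returns for one stock and a factor DataFrame with columns named MKT_RF, SMB, HML, and UMD (e.g., as distributed in Kenneth French’s data library); the column names are our own convention, not the authors’.

```python
import statsmodels.api as sm

def factor_alpha(excess_ret, factors, momentum=True):
    # excess_ret: pd.Series of daily post-event excess returns (R_it - R_ft);
    # factors: DataFrame with columns 'MKT_RF', 'SMB', 'HML' and, if momentum
    # is True, 'UMD', aligned on the same dates as excess_ret.
    cols = ["MKT_RF", "SMB", "HML"] + (["UMD"] if momentum else [])
    X = sm.add_constant(factors[cols])
    fit = sm.OLS(excess_ret, X, missing="drop").fit()
    # The intercept is the alpha of Eq. (2) or (3); its t-statistic is the test of interest.
    return fit.params["const"], fit.tvalues["const"]
```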

Table 5 shows the results of the two regressions for the whole sample, each individual survey sample, and each of our sub-samples. We report only the regression intercepts and their respective t-statistics for brevity. The results show that the regression intercepts are positive in all cases except the new listing Working Mothers subsample. These results parallel our event study results: the whole sample, and especially the new listing subsample, earns superior returns, in this case after controlling for the market, size, book-to-market, and momentum factors. Some of the repeat winners and consecutive winners also show significantly positive alphas, although to a lesser degree than the new listing subsample. Generally, the subsamples involving BCWF and BCWM firms tend to show less significance.

Table 5 Regression results of the Fama–French three- and four-factor models for the whole sample and sub-samples

4.2.2 Fama–MacBeth model and Petersen (2009) model

Our results in Table 5 are based on the mean regression coefficients and t-statistics of the stock-by-stock regressions. It is well known that for panel data the residuals may be correlated across firms or across time, so OLS standard errors may be biased. To test whether our results from Table 5 remain robust after controlling for biased OLS standard errors, we employ the Fama and MacBeth (1973) and Petersen (2009) methods. For the Fama–MacBeth method, we first regress post-event daily excess returns for each sample stock i on a market factor, a size factor, and a book-to-market factor. Then, for each trading day t in our sample, we run cross-sectional regressions using the regression coefficients from the first-stage regressions. Finally, we calculate the mean coefficients from the second-stage regressions and report the corresponding t-statistics.
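The second-pass step can be sketched as below, assuming the first-pass factor loadings have already been estimated stock by stock as in the previous subsection; the data layout and names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fama_macbeth(excess_ret, betas):
    # excess_ret: DataFrame of post-event excess returns (days x stocks);
    # betas: DataFrame of first-pass loadings (stocks x factors, e.g. market, SMB, HML).
    daily_coefs = []
    X = sm.add_constant(betas)
    for day in excess_ret.index:
        y = excess_ret.loc[day].reindex(betas.index)   # cross-section for one day
        fit = sm.OLS(y, X, missing="drop").fit()
        daily_coefs.append(fit.params)
    coefs = pd.DataFrame(daily_coefs)
    # Fama-MacBeth estimates: time-series means of the daily cross-sectional coefficients
    mean = coefs.mean()
    t_stat = mean / (coefs.std(ddof=1) / np.sqrt(len(coefs)))
    return mean, t_stat
```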

To address both sources (i.e., firm and time) of correlation in the standard errors, we employ the Petersen (2009) method. Specifically, we estimate the following variance–covariance matrix:

$$ V_{Firm\& Time} = V_{Firm} + V_{Time} - V_{White} , $$
(4)

where $V_{Firm}$ ($V_{Time}$) is the variance–covariance matrix clustered by firm (by time) and $V_{White}$ is the variance–covariance matrix based on White standard errors; the estimator thus combines standard errors clustered by firm with standard errors clustered by time.
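Equation (4) can be implemented directly with a sandwich estimator for each clustering dimension, as in the sketch below; it omits the finite-sample adjustments that packaged routines typically apply, and the input layout is our assumption.

```python
import numpy as np

def _clustered_cov(X, resid, clusters):
    # Sandwich covariance with residuals clustered on the given group labels.
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        Xg = X[clusters == g]
        ug = resid[clusters == g]
        s = Xg.T @ ug
        meat += np.outer(s, s)
    return bread @ meat @ bread

def firm_time_clustered_cov(X, y, firm_id, time_id):
    # Two-way clustered covariance of Eq. (4): V = V_firm + V_time - V_white.
    # X: 2-D array of regressors including a constant column; y: 1-D array of
    # returns; firm_id, time_id: group labels, one per observation.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    v_firm = _clustered_cov(X, resid, firm_id)
    v_time = _clustered_cov(X, resid, time_id)
    v_white = _clustered_cov(X, resid, np.arange(len(y)))  # singleton clusters = White
    return v_firm + v_time - v_white
```

Standard errors are the square roots of the diagonal of the returned matrix; White’s estimator appears as the special case in which every observation is its own cluster.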

Table 6 shows the results of the two estimation methods for the whole sample, each individual survey sample, and each of our sub-samples. We report only the regression intercepts and their respective t-statistics for brevity. The results show that in all cases except the consecutive-event subsample running from the Best Companies to Work For to the Most Admired Companies, the regression intercepts are positive. These results are similar to, and reinforce, our event study results. In Table 6, the whole sample, and especially the new listing subsample, earns superior returns after controlling for the possible correlation of standard errors.

Table 6 Regression results of the Fama and MacBeth (1973) model and Petersen (2009) model for the whole sample and sub-samples

4.2.3 Buy-and-hold abnormal returns (BHARs)

Long-term performance is also assessed using buy-and-hold abnormal returns (BHARs). Building on the work of Ritter (1991), Barber and Lyon (1997) find that BHARs can be used to address several issues regarding portfolio performance. A BHAR is the return on a buy-and-hold investment in the company of interest less the return on a buy-and-hold investment in a similar asset or portfolio. Barber and Lyon note that BHARs can overcome several biases inherent in estimating long-term CARs. Specifically, the BHAR is calculated as:

$$ BHAR_{iT} = \prod_{t = 1}^{T} [1 + R_{it}] - \prod_{t = 1}^{T} [1 + E(R_{it})] , $$
(5)

$BHAR_{iT}$, as defined in Barber and Lyon (1997), represents the return on a buy-and-hold investment in the sample firm less the return on a buy-and-hold investment in an asset/portfolio with an appropriate expected return, where $R_{it}$ is the day-$t$ return of stock $i$ in the whole sample and $E(R_{it})$ is the daily expected return of the sample firm.

Barber and Lyon (1997) argue that matching sample companies to control companies of similar size and BE/ME ratio corrects for the possible sources of misspecification. For the calculation of BHARs, therefore, we use only our matched sample as the benchmark portfolio: in this study, the BHAR is measured as the buy-and-hold return of each sample firm less the buy-and-hold return of its matched firm.

We report the results of the BHARs in Table 7. BHARs for the whole sample and each individual survey sample appear in Panel A, while Panel B shows the BHARs of the different sub-samples. We test the null hypothesis that the mean BHAR (i.e., the mean difference between the buy-and-hold return of each sample stock and that of its matched stock) is equal to zero using a parametric test statistic. The t-statistic is calculated as the sample mean $BHAR_{iT}$ divided by the sample standard deviation of the abnormal returns.
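The BHAR of Eq. (5) and its test statistic reduce to a few lines of code. The sketch below assumes aligned daily return series for a sample firm and its matched firm over one holding period, and it uses the conventional cross-sectional t-test in which the standard deviation is scaled by the square root of the number of observations, a detail the text does not spell out.

```python
import numpy as np
import pandas as pd

def bhar(sample_ret, matched_ret):
    # Eq. (5): compounded return of the sample firm less the compounded return
    # of its size/BE-ME matched firm over the same holding period.
    return (1 + sample_ret).prod() - (1 + matched_ret).prod()

def bhar_t_stat(bhars):
    # Parametric test that the mean BHAR equals zero (cross-sectional t-test).
    bhars = pd.Series(bhars).dropna()
    return bhars.mean() / (bhars.std(ddof=1) / np.sqrt(len(bhars)))
```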

Table 7 Buy and hold abnormal returns (BHARs) for the whole sample and sub-samples

In general, the t-stats from Panel A demonstrate that the MAC and BCC companies outperform their matched samples using the annual buy-and-hold strategy (statistically significant at the 1.0 % level). Consistent with prior results, the BCWF and BCWM companies do not outperform their matched samples.

For the sub-samples comparisons in Panel B of Table 7, while the repeat winner and consecutive winner sub-samples do not yield statistically significant abnormal returns compared with their respective matched samples, the new listing sub-sample yields a statistically significant (at the 1.0 % level) BHAR of 7.7 % compared with the matched sample, driven by the MAC and BCC companies.

Overall, our tests of long-term stock return performance indicate that the new listing samples of the MAC and the BCC provide statistically significant long-term positive abnormal returns, and this conclusion is not sensitive to the different test statistics and measurement methods we employ in this paper. The repeat winner and consecutive winner sub-samples yield positive, although not always statistically significant, abnormal returns depending on the methodology used.

Among the four surveys, the MAC and the BCC listings seem to have more incrementally useful information about the prospects for the companies included in those surveys. In the next section, we will use regression analysis to further test the long-term stock performance of the four listings.

5 Which survey has a stronger price effect?

Our results so far indicate that companies on the aggregated four surveys outperform their matched firms in the holding period following inclusion on their respective surveys. However, one basic question particularly relevant to potential investors remains partially unanswered: which survey has the stronger price effect? That is, are some surveys “better” than others, and do firms need to be listed on all four surveys to gain the greatest returns?

To investigate this issue, we employ regression analysis on company returns and dummy variables representing the companies’ inclusion on the various “best of” lists. Specifically, we retrieve the daily returns of all stocks that have available data from both CRSP and Research Insight during our sample period, 2000–2008. The number of stocks with available data ranges from 5,397 to 7,278 on an annual basis. Then, for each year, we calculate the annual return for each stock in the dataset. Next, we identify our survey sample stocks and sub-sample stocks from the whole universe of stocks and construct the following dummy variables:

  • Best_Co (Admired, Corporate, Work_Mom) = 1 if the firm is listed as one of the Best Companies to Work For (Most Admired Companies, Best Corporate Citizens, Best Companies for Working Mothers) during the year; = 0 otherwise.

  • Best_Only (Admired_Only, Corporate_Only, Work_Mom_Only) = 1 if the firm is listed as one of the Best Companies to Work For (Most Admired Companies, Best Corporate Citizens, Best Companies for Working Mothers) only (but not for other surveys) during that year; = 0 otherwise.

  • Win2Events (Win3Events, Win4Events) = 1 if the firm is listed in two (three, four) surveys on an annual cycle; = 0 otherwise.

  • New_Best (New_Admired, New_Corporate, New_Work_Mom) = 1 if the firm is a new listing on the Best Companies to Work For (Most Admired Companies, Best Corporate Citizens, Best Companies for Working Mothers) during the year; = 0 otherwise.

  • New = 1 if the firm is a new listing company for any of the four surveys during the year; = 0 otherwise.

  • Repeat_Best (Repeat_Admired, Repeat_Corporate, Repeat_Work_Mom) = 1 if the firm is a Repeat winner of Best Companies to Work For (Most Admired Companies, Best Corporate Citizens, Best Companies for Working Mothers) during the year; = 0 otherwise.

  • Repeat = 1 if the firm is a Repeat winner for any of the four surveys during the year; = 0 otherwise.

The results are reported in Table 8. To control for the size and book-to-market effects, we include the log of MVE (log of market capitalization) and the BE/ME ratio in each regression. Results from Models 1 and 2 show that the MAC and BCC stocks have higher annual returns than the other survey stocks after controlling for the size and book-to-market effects. Results from Models 3, 4, and 5 show that being listed on two or three surveys during the year still sends the market a positive signal, while appearing on all four listings does not add any additional benefit. Results from Models 6, 7, 8, and 9 show that, of the new listing sub-samples, only the new listings of the MAC and BCC surveys yield higher annual returns. Of the repeat winner sub-samples, only repeat winners of the MAC send a positive signal to the market. These results support the earlier findings that the MAC and BCC surveys are the two most relevant to investors.
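Regressions of this form can be sketched with a formula interface, assuming a firm-year panel that already contains the annual return, the dummy variables defined above, the log of market capitalization, and the BE/ME ratio; the column names and the pairing of dummies into specifications below are hypothetical and do not reproduce the authors’ exact model numbering in Table 8.

```python
import statsmodels.formula.api as smf

def survey_return_regressions(panel):
    # panel: one row per firm-year with the annual return ('annual_ret'), the
    # survey dummies defined above, 'log_MVE', and 'BE_ME'. Column names are
    # our own illustrative convention.
    specs = {
        "inclusion": "annual_ret ~ Best_Co + Admired + Corporate + Work_Mom"
                     " + log_MVE + BE_ME",
        "multiple_listings": "annual_ret ~ Win2Events + Win3Events + Win4Events"
                             " + log_MVE + BE_ME",
    }
    # Fit each specification by OLS and return the fitted results keyed by name.
    return {name: smf.ols(formula, data=panel).fit() for name, formula in specs.items()}
```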

Table 8 Results of regression analysis

6 Concluding remarks

The focus of this study is whether there are cumulative or interactive effects from being listed on one or more of four popular annual surveys of exceptional firms (Fortune’s Best Companies to Work For and Most Admired Companies, Business Ethics’ 100 Best Corporate Citizens, and Working Mother’s 100 Best Companies for Working Mothers). We investigate whether the information content from being listed on additional rankings may be viewed as incremental information that validates the first ranking or whether it is viewed as a redundant acknowledgement of what the market had already established.

Our event study results indicate that the market reacts favorably when a firm appears on the MAC and BCC rankings, but less so for the BCWF and the BCWM. Subsequent inclusion on the MAC and BCC surveys produces significant, although smaller, price effects during the event window. In the former case, our results are consistent with most research on the ability of investors to add value to their portfolios by adding selected firms. In the latter case, given its small subscription base, our results support Fang and Peress’s (2009) assertion that a significant return premium is available on stocks with previously little or no media coverage. During a given annual cycle, being listed in two or three consecutive surveys produces statistically significant abnormal returns, but a fourth selection adds little new information.

With respect to longer-term performance, we observe that the new listing samples of the MAC and the BCC provide significant positive abnormal returns, regardless of the method used to analyze performance. The repeat winner and consecutive winner sub-samples also yield positive abnormal returns, although the statistical significance of these returns depends on the methodology employed.

Finally, through regression analysis, we determine that (1) the MAC and the BCC survey firms have higher annual returns than those of the other two surveys after controlling for the size and book-to-market effects; (2) being listed on two or three surveys during the year yields abnormal returns, while appearing on all four listings does not add additional benefit; (3) only the MAC and BCC surveys yield higher annual returns upon first-time inclusion; and (4) only reselection for inclusion in subsequent MAC surveys garners incremental value.

Our paper adds value to the existing literature by synthesizing the effects of financial surveys of noteworthy companies and by helping investors decide which surveys are more deserving of attention as they make security selections within their portfolios. On a practical level, our results suggest that investors looking to purchase stocks should eagerly await the announcement of the lists of the Most Admired Companies and the Best Corporate Citizens, while companies striving for higher returns should adopt corporate practices that might lead to their inclusion on these same two lists. Our hope is that our work on the cumulative effect of corporate rankings leads others to investigate the possible interactive nature of market responses to more than one survey. While our study does not purport to answer all of the questions that might come from a more exhaustive study of corporate rankings, we believe our results are sufficiently provocative to be viewed as a call for additional research in this area.