1 Introduction

Rankings are the Swiss Army knife of higher education – a single tool with many uses. Like many other universities, Texas Tech University utilizes rankings as a barometer to judge whether the university exhibits dimensions of quality. (The term “universities” will be used to describe all postsecondary institutions throughout this chapter.) The “Goal Two: Academic Excellence” section of its 2005 strategic plan cites rankings 12 times. Three of the nine objectives in this section of the plan are explicitly aimed at improving the institution’s national ranking, whether in selectivity, grants, scholarly productivity, or the quality of the university’s library system (Texas Tech University 2005). The use of rankings as a measure of a college or university’s excellence, improvement in quality, prestige, character, hipness, or value is ubiquitous. The pervasiveness of ranking systems has spread to institutions outside the United States as well. At world-renowned institutions like the University of Melbourne in Australia, for example, international rank is so important that it occupies the second highlight on the “About the University” page, sandwiched between the institution’s founding date and the number of enrolled students (University of Melbourne 2010). Even lesser-known institutions, like the University of KwaZulu-Natal in South Africa, use higher education rankings both in creating strategic plans and as guideposts for determining institutional quality (University of KwaZulu-Natal 2007). As these examples demonstrate, universities have adopted the use of rankings as a means of assuring internal actors that the institution is on course toward its goals.

Other institutions use rankings as a signal flare, highlighting their quality for external constituents. Benedictine College, in Atchison, Kansas, for example, proudly displays the US News “Best Colleges” emblem on its homepage and notes its top 20 ranking in the Newman Guide to Choosing A Catholic College on its “About Benedictine” website (Benedictine 2009). Rankings assure prospective students and their parents that Benedictine is a legitimate Catholic institution of quality. The use of rankings as a communicative tool is so powerful that it has spread well beyond the institutional level. It has become common for a college or university – and its academic units – to explicitly cite popular rankings, such as US News or Business Week, as arbiters of its caliber. This is true of privates and publics, elites and non-elites. The College of Engineering at the University of California, Berkeley, part of a university acknowledged in many national and international rankings as an eminent postsecondary institution, cites its position in national rankings, particularly its specific position relative to archrival Stanford University, on its webpage (UC Berkeley 2009). Caltech, also a renowned US university, devotes space on its webpage to documenting its position in international rankings, such as The Times Higher Education Supplement (Caltech 2009). Universities use rankings, in their current form, to serve both informational and promotional purposes for internal and external constituents alike.

This chapter provides an analysis of (a) how universities are controlled by higher education rankings; (b) how universities react to rankings; (c) the importance of reputation – a major factor in the rankings – as an intangible resource; (d) equity concerns relevant to higher education rankings; and (e) resultant lessons regarding the efficacy of pursuing a change in rankings. The goal is not to arrive at a normative conclusion, but to broadly assess how a university’s leaders might utilize information about higher education rankings to make relevant institutional policy. Further, this chapter focuses primarily on colleges and universities in the United States, but its topics can be broadly applied to international institutions.

2 Higher Education as Fertile Ground for Rankings

It should come as no surprise that rankings are a popular device for universities. Because identifying quality is so difficult in institutionalized fields such as higher education, organizational myths of structure and behavior take on important meaning (Meyer and Rowan 1977). In technical fields, for example, there is less need for rankings that incorporate “soft” variables such as reputation and the relationship between inputs and outputs. Instead, organizations in these fields can be compared on objective measures explicitly linked to the quality and quantity of their outputs. Such measures are unavailable in higher education, however, and in their absence rankings are simultaneously myth making and sense making.

There are at least two reasons to explain why rankings have become so valuable, both of which relate to the institutional nature of the higher education environment. First, because both the products and technology of higher education are nebulous and hard to measure, rankings provide a seemingly objective input into any discussion or assessment of what constitutes quality in higher education. As Gioia and Thomas (1996) point out, there are “few bottom-line measures like profit or return on investment that apply to” higher education (p. 370). To fill this void, organizations inside and outside higher education have created ranking systems.

Higher education rankings also take advantage of the fact that the institutionalized nature of the industry makes mimicry more useful and likely. Because it is so difficult – for insiders and outsiders – to judge the quality of a university’s technology and outputs, it is often more cost-effective and useful for institutions to copy the structures and behaviors of universities perceived as being successful. The ambiguity of organizational goals within higher education makes this kind of modeling convenient and predictable (DiMaggio and Powell 1991). Programs such as the German Excellence Initiative, designed to thwart the dominance of international rankings by US and UK universities, illustrate the universality of this type of mimicry in higher education (Labi 2010).

Rankings produced by organizations such as US News or The Times complement and buttress the already isomorphic nature of higher education. By codifying and ordering the successful practices and structures of elite organizations such as Harvard or Oxford, rankings produce a navigable roadmap for less-prestigious institutions to follow. Rankings single out the key attributes of elite institutions; these attributes are then weighted heavily in the ranking algorithm, which in turn produces results that confirm the initial assumptions. Naturally examined, dissected, and ultimately mirrored by their non-elite counterparts, elite universities establish the standard by which all institutions are gauged. By drawing fine distinctions between the seemingly successful and the seemingly less-successful institutions, the creation of a set of rankings inevitably quantifies the various academic dimensions of all institutions. This quantification of relationships between institutions exacerbates and amplifies the mimetic tendencies already found in higher education. While mimicry often occurs without the existence of rankings, rankings further legitimate such practices by substituting improvement in rank for evidence of real improvement. Devinney et al. (2008) argue that the “dark side” of mimetic isomorphism in higher education is that institutions will stop experimenting and instead favor herd behavior that is ultimately destructive to their organizational field. In short, it is both predictable and problematic that rankings catalyze the mimetic tendencies of organizational behavior in higher education.

3 The Control Exerted by Higher Education Rankings

The discussion above explains, conceptually, why universities publicize rankings and tightly couple strategic actions to them. In this section, we investigate what the limited literature on the subject tells us about how and why universities utilize rankings.

Researchers who studied the reactions of university leaders when rankings were first introduced agree that administrators initially viewed rankings such as US News as less than legitimate. The Dean of Harvard’s Law School referred to the 1998 rankings as “Mickey Mouse” when asked about their relevance to the field (Parloff 1998). A decade or so later, even given this derision, rankings have become so legitimate as to influence the behavior and culture of law schools. Sauder and Espeland (2009) describe the influence of rankings as impossible to ignore and difficult to manage. Because they have come to occupy a central position in the application process for prospective law school students, rankings have come to play a permanent, indelible role for the schools. The mark they receive from US News is a kind of tattoo that instantly and powerfully communicates their standing in the larger field. Sauder and Espeland use Foucault’s conception of discipline to make sense of how law schools are forced to internalize and incorporate the values of rankings. However, they could just as easily be describing Weber’s (1958) concept of the iron cage, which focused on the means by which organizations and their actors are increasingly constrained by a bounded rationality predicated on goals. In this light, the goal of being a “quality institution” has forced and legitimated the use of rankings onto law schools, as well as other institutions. One of the more explicit pieces of evidence substantiating this argument appears in Loyola University’s (Louisiana 2009) strategic plan, which states:

To enhance our reputation and stature, as reflected in the rankings of U.S. News and World Report, we are committed to a university-wide rethinking of our programs in a way that builds upon our strengths and utilizes new initiatives that respond to national needs and student demands. Such an approach seeks to increase demand and attract more and better students, which will decrease the need to discount tuition, while allowing Loyola to attract students from deserving communities and shape our incoming classes. An increase in ranking will directly affect an increase in revenue.

Rankings have been so successful in demarcating what constitutes quality in higher education that university strategic plans now commonly refer to them as a valid arbiter of quality. Why? The discussion above notes the institutional nature of higher education, but there is evidence that other organizational types also respond aggressively to being ranked, particularly when that rank threatens their legitimacy within a specific organizational field. Chatterji and Toffel’s (2008) research on the effects of third-party environmental ratings on for-profit firms, for example, delineates how firms with low environmental ratings responded positively to such a less-than-favorable rating. Firms with lower environmental ratings improved their performance on these criteria more than those rated higher. Both institutional and strategic choice theories explain these behaviors. Organizations facing the prospect of being delegitimized by a third-party rating must choose how to respond, particularly if that rating carries credence with important constituents. The research suggests that firms with particularly low ratings are more likely than their higher-rated peers to respond with practices that leverage the “low hanging fruit” available to them and thus improve their rating. These findings suggest that ratings should incorporate both “sticks” and “carrots” in order to effect change in both high- and low-rated organizations and avoid the negative convergence that may accompany ratings that focus only on problems.

In a similar fashion, university rankings have determined, even codified, what types of organizational behaviors and practices are legitimate (Wedlin 2007; Hazelkorn 2009; Chap. 11). This is particularly true in the case of law and business school rankings, where research suggests that students – and, as a result, deans – have come to view rankings as “the” primary determinant in choosing to apply to and/or attend a specific university (Elsbach and Kramer 1996). The results are predictable; organizations, regardless of their status, conform to the rankings agenda, even as new rankings are introduced by those who, for example, fear their organizational or national identity is being marginalized. Wedlin’s (2007) work on the compelling nature of international MBA rankings suggests that faculty and administrative staff at business schools are seeing their exclusive role in shaping curricular and programmatic decisions usurped by rankings that prescribe what “a good and proper international business school is, or what it should be; what programs and features are important, how schools should structure and carry out work” (p. 28). The work of others (Sauder and Espeland 2009; Hazelkorn 2007; Espeland and Sauder 2007; Martins 2005) substantiates these claims.

4 Reacting to Rankings

In response to the public’s and higher education’s demonstrated embrace of rankings, universities are adjusting their educational practices and strategies to obtain a favorable rank from both the media organizations that produce rankings and the consumers who use them. Evidence documenting the beneficial outcomes associated with rankings is relatively new, yet there are some intriguing conclusions deserving further analysis by researchers and policymakers. From a macro point of view, the research suggests that universities have relatively little control over their rankings, whereas, from a micro perspective, smaller yet important changes may be possible as a result of concentrated behavioral change. Overall, a paradox is emerging: Rankings are a game everyone plays, but a game with constantly shifting rules that no one can control.

Several findings have been confirmed and reconfirmed in multiple studies on the effects of rankings. For example, a higher rank in a given year, controlling for other factors, will result in more applicants for admission, a lower acceptance rate (i.e., greater selectivity), and higher median test scores among both the applicant and enrolled student pools (Monks and Ehrenberg 1999). Prospective applicants notice and respond to rankings. Other studies that examined similar phenomena (Bowman and Bastedo 2009; Meredith 2004; Sauder and Lancaster 2006) suggest that ranking outcomes associated with student admissions compound over time. That is, an improvement in rank in one year creates a more favorable situation for the institution in subsequent years. However, this phenomenon cuts both ways: lower rankings year after year produce applicant and enrolled student populations that further erode the ability of the university in question to attract and enroll high-ability students.

Beyond admissions, there are other institutional outcomes researchers have linked to rankings. Bastedo and Bowman (2009), for example, found rankings to directly affect the funding of research and development from government, industry, and foundations, as well as the total amount of alumni donations. This effect confirms previous assumptions about financial contributions to higher education: donors utilize rankings in order to be associated with successful universities. Current and past research on the subject documents how donors – alumni and those without connections to the university in question – are more likely to contribute when tangible indicators of success are present, including but not limited to a growing endowment or a successful athletics program (Leslie and Ramey 1988; Ehrenberg and Smith 2003; McCormick and Tinsley 1990). It is apparent that rankings work like other signaling devices in higher education. Better students, stronger faculty, and wealthy donors are attracted to universities perceived as better, more prestigious, or of higher quality because of the perceived benefits of being associated with these successful organizations.

Due to these financial and non-financial benefits, institutions eagerly find ways to improve their rankings. Because certain ranking schemes rely on easily manipulated, self-reported data, universities employ a number of gaming techniques to improve their position in the rankings. For example, a university may ignore adjunct instructors altogether when reporting the percentage of full-time faculty employed – a known component of many rankings. In the most recent round of US News rankings, several well-known public universities, including the Georgia Institute of Technology, the Pennsylvania State University, the University of Iowa, North Carolina State University, and the University of Nebraska, reported faculty data without including many or all adjuncts, despite the magazine’s explicit request for institutions to include adjuncts in their self-reported calculations (Jaschik 2009). Several explanations were given. Adjuncts were considered employees, not faculty, at Penn State. North Carolina State and Iowa considered adjuncts faculty only if they held permanent appointments, which most did not. In any case, these universities’ rankings benefited from the misreporting, which can reasonably be surmised to have been the intent.

The conjuring of numbers is only one of many schemes used by institutions trying to improve their rank. Colleges and universities employ a number of other manipulative tricks as well, all born from and focused on the various components used in the rankings algorithm. For example, to manipulate the “beginning characteristics” in US News, a 15% component of the overall index score, institutions have been found to intentionally misreport admissions data as well as encourage unqualified students to apply, only to coldly reject them later – boosting the apparent selectivity rate (Ehrenberg 2002). Other institutions misreport a current student’s single gift as a multi-year gift, enabling the institution to claim these donations as alumni gifts (Golden 2007). Law schools spend over $100,000 a year creating, printing, and sending glossy marketing brochures to other law school administrators in the hope of influencing “reputation” scores, a 25% component (Espeland and Sauder 2007). While these gaming techniques seem underhanded and unrelated to institutional quality, they serve as a means to a more favorable ranking end. If anything, “gaming challenges the legitimacy of rankings by subverting their appearance as accurate representations of the schools they measure […] but gaming simultaneously reinforces the legitimacy of rankings by furthering educators’ investment in them” (Sauder and Espeland 2009: 78). Stated otherwise, the gaming techniques practiced by contemporary institutions of higher education reveal both the destructive and the staying power of rankings.
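To make the arithmetic behind this incentive concrete, consider a deliberately simplified, hypothetical worked example of a weighted composite index of the kind described above. The 15% weight for beginning characteristics and the 25% weight for reputation come from the components cited in the text; the residual 60% weight and all component scores are invented solely for illustration and do not reproduce any actual ranking formula.

```latex
% Hypothetical composite index I built from three components:
% B = beginning characteristics (15%), R = reputation (25%),
% O = all other components lumped together (assumed 60%).
\[
  I = 0.15\,B + 0.25\,R + 0.60\,O
\]
% Illustrative scores before gaming:
\[
  I = 0.15(70) + 0.25(60) + 0.60(65) = 64.5
\]
% After inflating the self-reported B component from 70 to 85:
\[
  I = 0.15(85) + 0.25(60) + 0.60(65) = 66.75
\]
```

Under these assumed numbers, inflating a single self-reported component moves the composite by roughly two points on a 100-point scale – small in absolute terms, but in a tightly bunched field such a shift can be enough to change an institution’s ordinal position.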

Despite a strong desire to improve in the rankings, the amount of control institutions have over the process is highly debatable. One study finds that around 70–80% of the variability between annual rankings is transitory “noise” that disappears within two years. These results suggest that rankings do very little to document or reward real improvements in quality (Dichev 2001). Similarly, the stability of the “elite” group is striking. For example, in US News, the dominant United States ranking guide, only 29 schools occupied the top 25 spots between 1988 and 1998, and 20 institutions never fell out of the top 25. In reality, it is nearly impossible for any university outside the top 25 to break into this elite group, and aspirations to do so represent, in the vast majority of cases, organizational daydreaming. Moreover, the fierce competition for a top spot among all institutions, in the zero-sum game of rankings, only serves to make positive movement that much harder. In a recent survey of higher education administrators, Hazelkorn (2007) noted that 93% and 82% of respondents wanted to improve their national and international rank, respectively. Additionally, she found that 70% wanted to be in the top 10% nationally and 71% in the top 25% internationally. Devinney et al. (2008) take the impossibility of institutional control one step further, providing evidence that “most of the critical attributes that matter to the rankings are correlated with structural factors” (p. 10) – factors that are either impossible or financially impractical for institutions to manipulate. The evidence points to a paradox in pursuing a higher ranking: the desire to improve rank is increasing even as the ability to do so decreases.

5 The Power of Reputation

Reputation is an intangible organizational asset that is both hard to construct and, if lost, hard to recover. The empirical evidence on the subject indicates that organizations, including universities, are right to worry about their reputation and the benefits attached to it. Studies of for-profit firms have demonstrated that managers value an organization’s reputation as the most important intangible resource a business can have, more important than, for example, employee know-how. In fields like higher education, which lack the technical data that delineate organizational strengths and weaknesses in for-profit enterprises, reputation is likely to matter even more. Here, though, the reliance on reputation can drastically amplify its effect on internal and external constituents. Regardless of industry, the value of an organization’s reputation is reinforced by the fact that this resource is very difficult to develop and requires a long period to rebuild (Hall 1992; Deephouse 2000; Rindova et al. 2005).

Widely cited by managers as critically influential, reputations are the invisible, unquantifiable “dark matter” of organizations. Because reputation is difficult to see or manipulate, very few studies have calculated its exact impact on organizational performance. Those that have, however, project a unified voice: Reputation has “considerable significance with respect to the sustainability of advantage” (Hall 1992: 143). Attributing various performance measures, like financial success, solely to an organization’s reputation is difficult at best, but evidence suggests reputation can significantly affect performance. In an analysis of banks in the Twin Cities (Minnesota, USA) area, one study concluded that a bank with a relatively strong reputation enjoys a significant financial advantage in competition with other banks. This advantage manifests itself in several important outputs, including lower costs, the ability to price goods and services at a premium, and a competitive advantage that is hard to overcome (Deephouse 2000). Similar advantages can be found in higher education, where universities sporting strong reputations relative to their peer group can raise tuition prices and enjoy increased numbers of applicants and revenues (Weisbrod et al. 2008). Conversely, universities without such strong reputations may be forced to cut the cost of tuition in order to attract greater numbers of students, who would otherwise apply to similarly priced universities with better reputations (Jaschik 2008). Broader conclusions about the value of reputation are supported by Roberts and Dowling (2002), who argued, in their 14-year analysis of firms on Fortune’s “America’s Most Admired Corporations” list, that reputation served as a buttress for better long-term financial performance. From this perspective, although reputation is usually considered a non-technical or “soft” criterion, it is actually a kind of “hard” asset that does not readily erode and can serve an organization during periods of both stability and instability. These findings suggest that reputation – while invisible and difficult to control – is critical in separating the top performers from the rest of the field.

Reputations hold value precisely because of the competitive advantage they provide and the relative cost and difficulty of procuring a similarly positive reputation. Organizations may use other means of substituting for a positive reputation, such as guarantees or warranties, but these substitutes have real costs and may not provide similar value for the organization or the consumer (Klein et al. 1978). In higher education, the lack of a positive reputation can limit the approaches available to universities in their marketing to students. Metropolitan State College in Denver, Colorado, which lacks a particularly strong reputation (Tier 4 among US News liberal arts colleges), recently made headlines by offering free remediation to any of its teacher education graduates who were unsuccessful in the classroom (Denver Post 2009). Similarly, Doane College, a lower-ranked (Tier 3 among US News liberal arts colleges) small baccalaureate college in Crete, Nebraska, guarantees that full-time students will graduate in four years; if they do not, students receive any additional courses tuition-free (Doane College 2009). Even relatively highly ranked Juniata College (#85 among US News liberal arts colleges) offers its students a “buy four, get one free” guarantee, providing a free year of tuition to all full-time students who fail to graduate in four years or less (Weggel 2007). The highest-ranked universities need not offer such warranties, potentially saving them money.

The difficulty in higher education – and other organizational fields – is that reputation is a resource that cannot be easily purchased or improved. Positive organizational reputations may be the product of historical incidents that cannot be replicated, making them “imperfectly imitable” (Barney 1991: 115). Similar to the monolithic nature of rankings, organizations with positive reputations find it relatively easy to maintain them, while those with less-positive reputations find improvement very difficult, particularly relative to organizations in the same field with longstanding positive reputations. On the other hand, recent studies of US News rankings of US universities show how a move in ranking, particularly when a university changes tiers, can have a positive impact on future peer assessments of the same university. In other words, while reputation is difficult to improve, it is not impossible, especially when improvement is reflected in the rankings, because peer assessment of an institution can be changed over time through improvements in selectivity and the utilization of resources (Bastedo and Bowman 2009; Bowman and Bastedo 2009). This finding is at odds with decades of reputational stability among American universities:

Reputational surveys of American universities conducted in the 1950s, 1960s, and 1970s revealed an academic pecking order of remarkable stability. In the competition for top-twenty rankings, rarely was there a new institutional face (Graham and Diamond 1997: 2).

The good news for those in higher education with strong positive reputations (and bad news for the rest) is that reputation carries tremendous weight in many national and international rankings. For example, rankings by AsiaWeek, Education18.com, Melbourne, The Times World University Rankings, Netbig, US News, Wuhan, and Maclean’s include a variable linked to reputation. Among these, three (Education18.com, The Times, and US News) weight reputational scores very heavily – at least 25% (Usher and Savino 2006). The use of reputation as a variable makes it nearly certain that these rankings will display relatively little variation among their top-rated universities.

6 Equity Concerns and Students’ Use of Rankings

Research on college choice depicts a dynamic process, whereby the decision to apply to a specific university is a function of both self-selection and societal context. In the first stage of what is generally described as a three-stage process, students consider their options by forming assumptions about what kind of university they want to attend and assessing their own probability of attending such an institution. The assumptions held by students are largely a function of socioeconomic status (SES) (Hossler et al. 1999). Using these initial constraints, students develop a “choice set” of universities, often excluding those viewed as unaffordable. This set may, however, contain “safety schools” as well as “reach” or aspirational choices, based on the selectivity and cost of the university (Hossler and Gallagher 1987).

The research on the outcomes of this pre-application selection stage is quite clear: SES plays a substantial role in the college choice process. Lower-income students, constrained by their socioeconomic status, are inevitably less likely than their more privileged peers to choose a selective, more expensive institution (Steinberg et al. 2009). Lower-income students are also more likely than their peers, controlling for other factors, to choose to attend a university close to home (McDonough 1997; St John et al. 2001; Pryor et al. 2008).

SES also determines, to some degree, the type of information prospective students use throughout the college choice process. Because educational quality proves difficult to assess, students tend to rely on indicators of admissions selectivity as a means of discerning differences between institutions. From here, the vast majority of students engage in “self-selection,” the process of applying only to institutions to which they believe they can gain admission and which they can afford (Hossler and Litten 1993; Hossler et al. 1999; McDonough 1997); thus, a student’s ability to accurately judge a university’s admission standards is crucial. Knowledge about universities, however, is not evenly distributed among students. Students from underrepresented backgrounds often have less access to informational resources such as high school counselors, who may have little time to invest in shaping students’ postsecondary aspirations (McDonough 1997; McDonough and Calderone 2006). In short, students from lower socioeconomic backgrounds, relative to their peers, choose among less-prestigious, lower-ranked institutions and have less access to critical information.

As a substitute for this institutional knowledge, rankings are often sold as a means to “find the best college for you” and a tool to “find your perfect fit” (US News & World Report 2010). While this function is frequently debated among practitioners and non-practitioners alike, who actually uses the rankings, and for what purposes, is an important variable in the college choice process. In fact, the utilization of rankings is strongly correlated with a student’s socioeconomic status. Students from families with higher levels of income and education use rankings more often and are more likely to report university rank as an important factor in their college choice decisions; poorer students use rankings less often and are more likely to report that a university’s rank is not at all important. Examining the college choice process and the role rankings play, McDonough et al. (1998) argued that, instead of aiding in finding a college that “fits” a student, rankings are used by high-income students to signal their status and are “merely reinforcing and legitimizing these students’ status obsession” (p. 531).

High-income students not only use rankings in their college choice, but they also benefit from the rankings themselves. To boost their own rankings, colleges and universities naturally seek students with the strongest “beginning characteristics,” such as GPA and SAT scores. Not surprisingly, these student selection indicators are directly correlated with students’ socioeconomic status (Meredith 2004). These indicators play an exaggerated role in the index scores of many national and international rankings. For example, 14 ranking systems from around the world incorporate some form of beginning characteristics into their calculus. Among these, four (Guardian University Guide, AsiaWeek, Education18, and US News) give these scores substantial weight – at least 15% (Usher and Savino 2006). Given this prominence in the rankings, universities strive to maximize their beginning characteristics, as evidenced by the increasing use of merit scholarships to recruit incoming students, much to the detriment of lower-SES students. Clearly, rankings stress what is already emphasized in university admissions and greatly favor students from more privileged backgrounds.

All of these organizational behaviors (e.g., gamesmanship, mimicry, recruiting high-ability students) tend to exacerbate the Matthew Effect – whereby advantage accrues to the already advantaged – in the competitive forces of higher education. Although this is wonderful news for the strongest students and the strongest institutions, the consequences for student access, choice, and opportunity tend to be particularly negative for low-income and minority students (Clark 2007). Similar to the isomorphic effect rankings have on institutional practices, rankings are also contributing to the homogenization of the socioeconomic composition found in most universities.

7 Lessons for University Leaders on the Efficacy of Leveraging Rankings

This chapter suggests a number of lessons relevant to university leaders considering whether and how to attempt to change their university’s ranking. First, it is apparent that rankings – however “Mickey Mouse” – are here to stay and represent social constructs with real and lasting consequences. The difficulty of measuring quality in higher education helps explain the attention rankings have received from prospective students and other external constituents. The decision to simply ignore rankings can no longer be considered conscientious and will likely have consequences for any institution. These consequences are likely to be greatest for universities near the top of the rankings; regardless of the lingering effect of reputation, even these organizations would gently descend the rankings ladder if they ignored them.

Second, given the documented value of reputation – a key component of many ranking schemes – there is a substantial rationale for improving a poor university reputation or protecting an existing positive one. Granted, a boost in rankings can provide a means to improve a reputation, and vice versa, but the reputational criteria utilized in contemporary ranking schemes poorly represent institutional quality. It seems more likely that the reputational value currently found in rankings reflects the ability to charge tuition premiums and/or to pay for the right to recruit and enroll high-quality students at heavily discounted tuition.

Third, a large number of institutions have responded to rankings either by incorporating gaming techniques – manipulating what they can to achieve short-term improvement – or by featuring aspirational rankings in their organizational strategies. The number of institutions pursuing these tactics should give pause to leaders at other universities. Not everyone can be in the top 25. The rush to join the “front page” of the rankings, even given the increased number of applicants accompanying such a feat, is likely to leave many universities falling far short of their goal, even after investing substantial resources in such a plan.

Finally, any university attempting to leverage its ranking should give due consideration to the demonstrable equity concerns associated with such approaches. Current and historical studies on the topic demonstrate again and again that, while higher rankings may produce more and better applicants, these prospective students are rarely distributed evenly across the SES spectrum. Instead, higher rankings are usually strongly correlated with less access for students from historically underrepresented populations. If the university attempting a higher ranking is public, or pursues a mission inclusive of these students, careful consideration of this latent consequence is a prerequisite.

We suggest the following advice for university leaders considering the efficacy of raising their institution’s ranking:

  • Recognize the inevitability of rankings and the constraints they impose on universities. Given the ubiquity of rankings and the attention paid to them by external and internal constituents, a “head in the sand” approach will surely fail. That said, do your homework and fully understand the variables used in the rankings that have consequences for your university. Which variables provide some room for opportunity for your institution? It is likely that there will be some “low hanging fruit” that can be harvested from the rankings, but unless such a harvest will produce significant movement – from one tier to another, perhaps – don’t expect long-term results. Identify what kind of movement is possible and consequential, given the university’s mission and resources.

  • Avoid the allure of rankings (see Teichler, this volume, for more details). It is common for university leaders to build ranking objectives into their strategic plans and vision statements and to make aspirational statements related to rankings. University leaders, however, should recognize that rankings are not dynamic indicators. Rather, they more reasonably signal the rigid stability of the status quo in higher education. There is ample evidence that very few universities have moved up in the rankings and sustained this newfound position. The empirical evidence on the subject indicates that, while movement may be possible and even important if it affects perceptions of reputational quality, the quest for a higher ranking is much more likely to result in something less than success.

  • Recognize the importance of and buttress the university’s reputation. Rankings tend to measure similar things: faculty resources, student quality, research outputs, etc. Reputations in higher education can be built upon broader variables, such as connections to the community, roles in local and regional economic development, and a commitment to mission (even if that mission is not valued by rankings indicators). There are many universities that enjoy strong reputations, with internal and external constituents, as a result of leveraging a specific niche. Although the path is not prescribed in common ranking guides, if a higher ranking is out of your university’s reach, recognize that building a better reputation is valuable and entirely possible.

  • Beware the isomorphic grip of globalization. The criteria in the early ranking systems of the 1980s and 1990s instigated a new struggle between colleges and universities for students, faculty, facilities, and endowments. Although this competition arguably creates winners and educational improvements as well as losers and gross inefficiencies, it carries significant consequences for all who participate. The more recent addition of international ranking systems will only intensify this arms race between institutions and further divide the haves and have-nots, especially as globalization extends its reach to all corners of the academic world. As institutions enter global competition for resources, they find themselves at the mercy of a cutthroat, winner-takes-all campaign, and the resulting inequalities can have devastating effects on academic institutions and their constituencies.