1.1 Introduction

League tables are all around us. In sports, for instance, there are seasonal league tables for baseball or football competitions, lists ranking the number of times cyclists have won the Tour de France, the fastest runners in marathons, and so on. Since the early twenty-first century we have also had league tables in higher education and research: global university rankings usually showing Harvard as the best university in the world, followed by a number of other globally renowned universities. But while sporting league tables are well accepted, university rankings remain hotly debated. Later in this book we will go into greater detail about the methodological critique of university league tables. This chapter briefly introduces three basic ideas that we will elaborate in the rest of this volume and which together define our ‘new’ approach to ranking in higher education and research:

  • ‘user-driven’ rankings

  • multidimensionality and multileveledness

  • a participative approach to ranking

We start with our epistemological position. The more we engaged in the ranking debate, the more we realized that there is a deep, epistemological reason why the whole idea of league tables is wrong, and why transparency tools or rankings of higher education and research institutions can only be user-driven, adaptable to users’ needs.

1.2 An Epistemological Argument

Each and every observation of reality is theory-driven: every observation of a slice of reality is shaped by the conceptual framework that we use. In the scientific debate, this statement has been accepted at least since Popper’s work (Popper, 1980): he argued extensively that theories are ‘searchlights’ that cannot encompass all of reality, but necessarily highlight only certain aspects of it. He also showed that scientific knowledge is ‘common sense writ large’ (Popper, 1980, p. 22), meaning that the demarcation between common sense and scientific knowledge is that the latter has to be justified rationally: scientific theories are logically coherent sets of statements which, moreover, are testable, so that we can check whether they are consistent with the facts.

In the absence of scientific theories, sports have been organized around (democratic) forums that are accepted as the bodies authorized to set the rules. The conceptual frameworks behind sports league tables are well established: the rules of the game define the winners and create league tables from the results. Yet those rules have been designed by humans and may be subject to change: in the 1980s–1990s football associations went from awarding two points for winning a match to three points, changing tactics in the game (more attacking play late in a drawn match), altering league table outcomes to some extent, and sparking off debates among commentators of the sport for and against the new rule (Footnote 1). Commentators also debate the meaning of Tour de France winners’ lists: the route of the Tour changes from year to year, so is winning the Tour in year x an achievement equal to that of winning in year y? Similarly, marathons are run on different courses which offer different chances of scoring a world record time—some courses (ironically including the original Marathon-to-Athens route) do not even qualify under the rules for official marathon records, and fast times run on these courses are not recognized (Footnote 2).
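
To make the arithmetic concrete, here is a minimal sketch in Python of how the very same season results produce a different league table under the old two-points-per-win rule and the current three-points-per-win rule. The team names and records are invented purely for illustration; they are not real data.

```python
# Hypothetical season records: team -> (wins, draws, losses). Invented values.
records = {
    "Attackers FC":   (3, 0, 3),
    "Drawers United": (1, 5, 0),
    "Midtable Town":  (2, 1, 3),
}

def league_table(records, win_points, draw_points=1):
    """Rank teams by total points, given the points awarded for a win."""
    points = {team: w * win_points + d * draw_points
              for team, (w, d, _losses) in records.items()}
    return sorted(points.items(), key=lambda item: item[1], reverse=True)

print("2 points per win:", league_table(records, 2))
print("3 points per win:", league_table(records, 3))
# Under 2 points per win, Drawers United (7) tops Attackers FC (6);
# under 3 points per win, Attackers FC (9) overtakes Drawers United (8).
```

The data and the matches are identical in both runs; only the rule, chosen by a human body, changes, and with it the ranking.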

This disquisition into sports illustrates the lighter side of our epistemological point about university rankings. All rankings are made up of selected ‘indicators’ that imply the conceptual framework through which reality is addressed. There is a body in charge of choosing those ‘indicators’. In sports, such bodies are recognized organizations and it is accepted that they design and redefine the rules of the game, including the indicators. It is equally understood that rules and indicators are not derived scientifically but are artificial: rugby and football are different games, and it is impossible to say whether the number one rugby team is a better sports team than the number one football team, because there is no such thing as a theory of sport per se. There are theories about sport psychology, sports training or sports fans’ behavior, but not a scientific theory of the ‘best’ sport.

In university rankings, the rules of the ranking game are equally arbitrary, because there is no scientific theory of ‘the best university’, nor even of the quality of higher education. But unlike in sports, there are no officially recognized bodies that are accepted as having the authority to define the rules of the game, nor is there an explicit understanding that different conceptual frameworks (and hence different indicators) define different competitions and therefore different but incomparable rankings, each valid in its own terms. There is no understanding, in other words, that e.g. the Shanghai ranking is simply a game that is as different from the Times Higher ranking game as rugby is from football. Equally, there is no understanding that the organization making up one set of rules and indicators has no more authority than any other to define a particular set of rules and indicators.

The issue with the usual university rankings is that they tend to be presented as if their collection of indicators reflected the definitive quality of the institution; they pretend, in that sense, to be guided by what is in reality a nonexistent theory of the quality of higher education.

We do not accept that position. Rather than assume an unwarranted position of authority, we want to reflect critically on the different roles of higher education and research institutions vis-à-vis different groups of stakeholders, to define explicitly our conceptual frameworks regarding the differing functions of higher education institutions, and to derive sets of indicators from those conceptual frameworks together with input from the relevant stakeholders. Finally, we want to present the information encapsulated in those indicators in such a transparent way that the end-users of rankings can make their own decisions about what is best for their purpose(s), resulting in individually tailored and time-dependent rankings.

In this sense, we want to ‘democratize’ rankings in higher education and research. Based on the epistemological position that any choice of sets of indicators is driven by their makers’ conceptual frameworks, we suggest a user-driven approach to rankings. Users and stakeholders themselves should be able to decide which indicators they want to select to create rankings that are relevant to their purposes.
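As an illustration of what ‘user-driven’ means in practice, the following minimal sketch lets a user pick the indicators that matter to them and derives a ranking from that selection alone. The institutions, indicator names and scores are invented; this is not U-Multirank code or data, merely a toy model of the idea.

```python
# Invented institutions and indicator scores (0-100); for illustration only.
indicators = {
    "University A": {"citations": 90, "graduation_rate": 60, "regional_engagement": 40},
    "University B": {"citations": 55, "graduation_rate": 85, "regional_engagement": 70},
    "University C": {"citations": 70, "graduation_rate": 75, "regional_engagement": 90},
}

def user_ranking(data, selected):
    """Rank institutions on the unweighted average of the user-selected indicators."""
    scores = {inst: sum(vals[name] for name in selected) / len(selected)
              for inst, vals in data.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# A research-oriented user and an education/engagement-oriented user
# obtain different 'best' institutions from the same underlying data.
print(user_ranking(indicators, ["citations"]))
print(user_ranking(indicators, ["graduation_rate", "regional_engagement"]))
```

The point is not the averaging formula, which is itself a design choice to be made with users, but that the selection of indicators, and hence the resulting ranking, lies in the hands of the user rather than the ranker.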

1.3 Multiple Dimensions and Multiple Levels

A second basic principle behind our departure from current practices in international rankings of higher education and research institutions concerns multidimensionality. It is only a slight overstatement to say that current international rankings focus on a single dimension of the institutions’ activities, viz. research. The bulk of the indicators used in those rankings, as we will show in Chaps. 3 and 4, concern research output (publications), research impact (citations) and research reception by the academic community (citations, Nobel prizes). We will also argue that the reputation of higher education institutions, as measured in international surveys, likewise reflects research renown—if it measures anything specific at all. The main reason the majority of current international rankings focus on research indicators lies in data availability: publication and citation databases already exist and are relatively easily transformed into league tables.

The two main shortcomings of that approach are interconnected. The first and main point is that higher education and research institutions engage in activities other than research, and see their mission as resting partly in those other activities as well (meaning that these other activities are not accidental or unimportant). Historically, going back to their medieval beginnings, education was the first mission of universities. Science and research became a central mission only with the rise of the German research university in the nineteenth century. Since around that time, other categories of higher education institution have been introduced to maintain a special focus on education, such as the Grandes Écoles in France and, subsequently, the polytechniques/polytechnics in other countries. At the same time, the learned societies or academies expanded into specialized research institutions. More recently, explicit attention has also been given to the ‘third mission’ of higher education and research institutions, variously defined as knowledge transfer and as engagement with the institution’s regional community. A good ranking must take those different missions into account, and must reflect the different portfolios of individual institutions in those areas. The way to do this would seem to be to offer a wide selection of indicators covering the different mission elements: research, education and the third mission. This differs from the way in which some current global rankings have adapted their methodology, i.e. by allowing users to choose one indicator out of their research-oriented composite indicator. That amounts to ‘subdimensionality’ rather than multidimensionality.
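The difference between a composite league table and a multidimensional profile can be shown in a few lines. The sketch below, again with invented dimensions, indicators and scores, contrasts collapsing everything into one number with reporting one result per mission dimension and leaving the comparison to the user.

```python
# Invented scores, grouped by mission dimension; for illustration only.
profiles = {
    "University A": {
        "research":      {"publications": 88, "citations": 92},
        "education":     {"graduation_rate": 58, "student_staff_ratio": 50},
        "third_mission": {"regional_income": 35},
    },
    "University B": {
        "research":      {"publications": 60, "citations": 55},
        "education":     {"graduation_rate": 85, "student_staff_ratio": 80},
        "third_mission": {"regional_income": 75},
    },
}

def composite(profile):
    """The league-table approach: collapse all indicators into one number."""
    values = [v for dim in profile.values() for v in dim.values()]
    return sum(values) / len(values)

def dimension_profile(profile):
    """The multidimensional approach: one score per dimension, no overall winner."""
    return {dim: sum(vals.values()) / len(vals) for dim, vals in profile.items()}

for name, prof in profiles.items():
    print(name,
          "composite:", round(composite(prof), 1),
          "profile:", {d: round(s, 1) for d, s in dimension_profile(prof).items()})
# University B has the higher composite score, yet University A clearly leads
# on the research dimension -- information the composite number hides.
```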

The other, associated shortcoming is that different stakeholders (students, parents, employers, policy makers, institutional leaders, etc.) are interested in, and need to take decisions about, different activities. Prospective students are the most pertinent example, as many rankings publicly claim to help students and prospective students find the best place to study. Future students will want to know ‘what they will get’ if they invest considerable amounts of time, money and intellectual effort in a certain study program; in other words, information about the education offered by specific study programs. The link between research and education has been debated for a long time in the higher education literature, but whatever the answer, it is clear that there is no automatic, deterministic and positive relationship between indicators of research output and the student learning experience. Good rankings must therefore include education indicators for prospective students. Similar lines of argument can be developed for other groups of stakeholders: each needs specific information on one or more of the mission elements of higher education and research institutions and is not well served by a standard set of research-oriented indicators only.

More or less hidden in the statement that prospective students want information ‘about education in a certain study program’ is the issue of multiple levels. Students will experience specific study programs, not the whole institution, especially in large, comprehensive higher education institutions and where study programs are offered as specialized paths. Similarly, other stakeholders may be interested in the performance of specific research groups or specific training programs rather than in the performance of an institution as a whole. There is a need, accordingly, for rankings focused at the level of (disciplinary or multidisciplinary) ‘fields’: field-based rankings alongside the institutional rankings that appear to be of prime interest to institutional management, political decision-making, etc.
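To illustrate the multi-level point, the sketch below (with invented field-level scores on a single hypothetical indicator) shows that the same data can be ranked per field or aggregated to the institutional level, and that the two views need not agree.

```python
# Invented field-level scores on one indicator; for illustration only.
field_scores = {
    ("University A", "physics"):   95,
    ("University A", "economics"): 45,
    ("University B", "physics"):   70,
    ("University B", "economics"): 80,
}

def field_ranking(data, field):
    """Rank institutions within one field."""
    rows = [(inst, score) for (inst, f), score in data.items() if f == field]
    return sorted(rows, key=lambda item: item[1], reverse=True)

def institutional_ranking(data):
    """Rank institutions on the mean of their field scores."""
    totals, counts = {}, {}
    for (inst, _field), score in data.items():
        totals[inst] = totals.get(inst, 0) + score
        counts[inst] = counts.get(inst, 0) + 1
    return sorted(((inst, totals[inst] / counts[inst]) for inst in totals),
                  key=lambda item: item[1], reverse=True)

print("physics:    ", field_ranking(field_scores, "physics"))     # University A leads
print("economics:  ", field_ranking(field_scores, "economics"))   # University B leads
print("institution:", institutional_ranking(field_scores))        # University B leads overall
```

A prospective physics student looking only at the institutional ranking would be pointed away from the institution that is stronger in their own field.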

1.4 A Participative Approach

Discussions about the quality and effects of rankings often focus on the selection and operationalization of indicators and their weights. The choice and construction of indicators is a crucial issue, but it is not the only one. Each ranking’s quality is also determined by its underlying processes of data collection, data quality control, etc. For these processes, we argue, the interaction of ranking organizations with their stakeholders and with higher education and research institutions is crucial. Let us define as ‘stakeholders’ all the different groups interested in a ranking: students, parents, university leaders and management, academics, employers, policy makers, and the general public.

Looking at existing rankings, we find that the depth of stakeholder involvement varies considerably. We intend to contrast our approach with the current global rankings, which are the archetypal object of public discussion. We will show in detail in Chaps. 3 and 4 that those international rankings are mainly based on publicly available, often bibliometric, data, and use indicator weights determined by the rankers themselves. The institutions that produce such rankings apparently do not need intensive stakeholder input to do so (Footnote 3). In our concept of user-driven, multidimensional rankings, stakeholder involvement plays a crucial role in the whole process from conceptualization to presentation of the ranking. In this sense our ranking methodology implies a participative approach.

Three arguments highlight the important role of stakeholder involvement. First, let us assume that a specific ranking tool uses indicators which are perfectly designed: they are reliable, valid, comparable and available in the international context. Even so, it is not guaranteed that this hypothetical, methodologically correct ranking is really useful for potential users. The risk is that the resulting ranking is not relevant for its users, because it does not relate to the decisions and choices that users intend to support by consulting the ranking. A fundamental principle in formulating a ranking and indicator system should therefore be to test its relevance against stakeholder needs from the initial design phase onwards. In a user-driven ranking, the design phase should aim to identify a broad set of indicators related to the needs of the relevant stakeholder groups, for instance through stakeholder workshops or online surveys. Moreover, stakeholders can be offered the opportunity in later phases to assess the usefulness of the resulting ranking system, which can inform amendments to the design.

A second argument concerns the difference between the customary unidimensional rankings and our multidimensional approach. Multidimensional rankings are more complex than a single composite ranking, and more effort is needed to explain to users how they can be used in a meaningful manner. User-friendliness thus becomes an important feature of a good multidimensional ranking. But user-friendliness cannot be achieved without stakeholder consultation to establish what makes a ranking understandable and relevant to users. User-friendliness will mean different things to different stakeholder groups; a ‘lay’ user such as a prospective student, confronted with the intricacies of higher education for the first time, may need more, and different, explanations than a university president. Adequate presentation modes will therefore have to be discussed with stakeholders in an intensive dialogue process.

A third important argument in favor of stakeholder involvement is the consultation of field experts in the case of a field-based ranking (i.e. a ranking of a specific field of knowledge rather than of the whole institution). The challenge of field-based rankings is to adapt data collection instruments and indicators to the specific situation of the respective field. Since the development of most fields in the knowledge society is highly dynamic, one can only benefit from the virtues of field-based ranking if the model and indicators are regularly discussed with field experts. Rankings, and not only those that are field-based, need a continuous advisory structure to adapt the ranking methodology to ongoing developments in the higher education and research system. Good rankings have to implement a continuous process of stakeholder consultation, not only in the design phase but in the implementation phase as well.

These arguments demonstrate that stakeholder consultation should not be regarded as merely a formal element of legitimization. Stakeholders’ input is needed, must be taken seriously and must be integrated systematically in the processes of designing, producing and implementing rankings. Of course the responsibility for the methodology and results of a ranking cannot be shifted to stakeholders; responsibility always rests with those producing a ranking.

The points outlined in the previous sections require further explanation, which we will present extensively in Part I of this book. We simply wanted to establish from the outset our position concerning rankings, and the reasons for developing our user-driven and multidimensional ranking approach.

In Part II we will report on the design and development of a new global ranking tool based on the basic principles just described. This new ranking tool, called U-Multirank, was developed and tested in a two-year international project funded by the European Commission. The full report on this project is available, free of charge, at: http://ec.europa.eu/education/higher-education/doc/multirank_en.pdf