Introduction

I recently logged into my work email account and the first message I found was from a microsoft.com email address with the title “Myanalytics | Network Edition: Discover your habits, work smarter.” I was surprised to discover this report on my activity, profile photos of my colleagues, and the number of hours I spent actively “collaborating” with them, collaborators being defined as “the people you have recently contacted through meetings, email, chats, and calls.” Implicit within this message is another relationship, between myself, my employer, and a market for data and their analysis. The analytics presume that I am a data consumer, someone interested in gaining “insights” into myself, my work, and my colleagues. Furthermore, the message encouraged me to “explore more” and “find out where your time goes”. A set of moral implications about me is conveyed in these analyses, instigating self-reflection on my productivity, effectiveness, status, and relative worth as an employee. Such quantitative tools now permeate the daily lives of workers in higher education institutions, and careers may hinge on them. In some institutions, the only way to be publicly visible is to submit oneself to quantification platforms (see Lim, 2019), and such metrics tie local sites of academic work into a global university rankings surveillance assemblage (Barron, 2021).

Numbers are increasingly used as tools to know and interpret oneself; this ubiquity of numbers and their use in self-knowledge is captured in the concept of the quantified self (Lim, 2019; Lupton, 2016). Without consenting, I became a product to be divided into bits of data and sold back to me for my own consumption. These conditions are what Deleuze (1992) has described as societies of control, and the message in my inbox is a window into what Haggerty & Ericson (2000) have called the surveillant assemblage, the collection and flow of data pertaining to individuals by unknown others for purposes that can be impossible to know. What I have described above are ruling relations (Smith, 1987, 2005), a concept that “directs attention to the distinctive translocal forms of social organization and social relations mediated by texts of all kinds (print, film, television, computer and so on)…they are objectified forms of consciousness and organization, constituted externally to particular people and places, creating and relying on textually based realities” (Smith, 2005:227). When I encountered the Microsoft email, ruling relations became visible, providing me with a potential entry point for an inquiry into how I am situated by these relations, with new material and moral implications. University rankings and related practices of quantification, classification, and metrics are similar extralocal relations that come to bear on local actuality. The sociology of quantification examines the production and consumption of numbers as well as their effects on people and their work within organizations. It can contribute to a better understanding of the forces that are shaping higher education and academic work globally while simultaneously providing an empirical realm for furthering the field and our knowledge of quantification as a social phenomenon.

In this paper, I propose a project of transnational institutional ethnography (Grace, 2013) to address concerns in the sociology of quantification as well as studies of university rankings and related practices. By following coordinated action from a local standpoint, the institutional ethnographer can articulate coordination across organizational and national boundaries while facilitating the identification of new lines of inquiry that can be taken up by other scholars. Collectively, this can produce a map of internationally coordinated action that can help people situate themselves in regard to the institutional complexes that shape their local experiences.

This paper proceeds with a review of literature and demonstrates how institutional ethnography can further scholarship on university rankings, metrics, infrastructure, and associated work. I then describe institutional ethnography in more detail as well as my own research methods. I proceed with empirical examples from my study of university rankings and related practices. I emphasize data and infrastructure as important parts of how rankings coordinate local action as well as diffuse judgment that takes place across organizational units and international locales. This paper shares the results of the first institutional ethnography of global university rankings and related practices. As such, this paper is not only a unique contribution to institutional ethnography—one of few transnational institutional ethnographies—but also offers an original empirical and methodological contribution to the sociology of quantification and university rankings.

Literature review

In a review of the sociology of quantification, Diaz-Bone & Didier (2016) identified a common set of problems and limitations in this developing field. Perhaps most pertinent to the project at hand is a call for scholars of quantification to undertake more studies of processes in quantification and their socio-epistemological prerequisites, including categorization, classification, valuation, and visualization (Diaz-Bone & Didier, 2016). Many studies presume that quantification and categorization have a singular meaning or only benefit certain social classes or interests. Yet the authors find that such phenomena must be interpreted and applied by actors within specific situations; they are dependent upon socio-political relations. The sociology of quantification is an international project divided by national boundaries and language barriers, not yet a unified field; it lacks a clear set of problems or a coherent agenda and has no shared methodological or theoretical approach (Diaz-Bone & Didier, 2016).

I am proposing transnational institutional ethnography (see Grace, 2013) as a means of uniting the sociology of quantification and building a global project that facilitates the elaboration of ruling relations articulated in university rankings and related metrics. Institutional ethnography is a project of discovery that makes extralocal coordination visible to people working in particular localities. Chains of action can be traced out from local standpoints across organizational boundaries; as Smith (2005) notes, “in the real world, the social relations that are significant in organizing people’s ordinary participation do not conform to what can be represented institutionally” (p.68). That is, institutions do not fit neatly into the categories we commonly use in conversation and do not respect the boundaries we presume in common parlance. Institutional ethnography begins with a problematic that situates research in a particular standpoint and inquires about everyday work from that standpoint. The project then expands the inquiry to other standpoints by following coordinated action until a complex of relations is articulated. Institutional ethnography examines local and personal evaluating positions and, by its very nature, traces out socio-political constellations across local and extralocal sites (see Diaz-Bone & Didier, 2016).

Berman and Hirschman (2018) have identified four sets of questions that orient the sociology of quantification. The first: what shapes the production of numbers? Studies pursuing this question tend to focus on technical and political decision-making. Second, when and how do numbers matter and when does quantification make a difference? They note that many numbers are produced and ignored while others are used to justify pre-determined decisions. Third, how do we and how should we govern quantification? They cite a number of controversies related to this, including that quantification can limit the scope of democratic deliberation. Finally, how should quantification be studied? There are three answers: studies of the effects of a particular genre of quantification; comparisons of quantifications that share common features; and case studies examining quantification in a particular field or decision-making context. An important observation is that “value entrepreneurs” produce ways of seeing parts of the world that are tailored to specific audiences; numbers do not matter unless people are convinced to use them, and the politics of quantification are often closed and technocratic (also see Lim, 2018). Berman and Hirschman conclude that the field can further explore how numbers are made and when and how they have effects, and can develop theory to give the field a common language.

Institutional ethnography is effective for growing the sociology of quantification across domains of interest and can provide a common language. Institutional ethnography has a unique ontology and is an alternative to what Smith (2005) calls “mainstream sociology,” which reifies concepts as actors and forces in the real world. Such concepts exemplify what Smith (2005) refers to as a blob ontology: “for every such concept, there is taken to be a something out there corresponding to it,” and agency is given to constructed and abstract entities without referents, such as social structure.

In a review of university ranking research, Ringel & Werron (2020) divide studies into three sorts: those that examine the production of rankings, their effects, or their discursive reception and institutionalization. Ringel & Werron (2020) argue that new alignments between notions of performance, competition, and publicity have led observers to regard rankings as useful and necessary means of comparison. By comparing the emergence of rankings in science, arts, and sports, they show how rankings became embedded within each field to varying degrees. They note contingencies based on the fit with field-specific ideas of performance, competition, and publicity or transparency, as well as on how audiences of rankings are imagined. Proto-rankings emerged in the field of European arts during the early eighteenth and nineteenth centuries, while modern rankings featuring regular publication, quantification, and visualizations emerged in the field of sports in the UK and USA. University rankings emerged later. They argue that in order to explain the growth of rankings, the field needs studies of how performance, competition, and publicity are conceived across different domains. Many researchers marshal neoliberalism as an explanation for the advancement of quantification and rankings, but Ringel and Werron show that rankings proliferated well before any idea of neoliberalism had formed. Early academic rankings were based in measurement that followed a tradition of eugenics—and were intended to argue for additional compensation for eminent men featured in the rankings—a rationale quite different from neoliberalism (Hammarfelt et al., 2017; Ringel & Werron, 2020). Scholars must be sensitive to the fact that different modes of government tend to have distinctive modes of quantification (Diaz-Bone & Didier, 2016). Institutional ethnography addresses these concerns because it focuses on coordination from particular sites to reveal how rationales vary from location to location based on what people actually do, think, and say. It grounds inquiry without any presumption about the degree to which something is neoliberal; any such assessment is reserved for whether actual happenings are neoliberal or operate by a different rationale.

A few examples of studies in university ranking production, effects, and institutionalization are illustrative of concerns in the aforementioned reviews. Through a lens of mediatization, higher education can be observed as reorganizing according to media logics that “encompass a belief in commercialization as common sense for public and private spaces” (Stack, 2016:1), in which students, professors, organizations, and post-secondary sectors are regarded as resources to be mined in the production of new products and services. In Hazelkorn’s (2011) comprehensive overview of the history and types of university rankings around the globe, she also included the first global opinion survey on rankings. The survey identified academic concerns with rankings, including that priorities articulated in rankings may not be related to local interests or student needs. Hazelkorn (2011) also noted that when students live at a greater distance from a university, they are more likely to use rankings to inform their choices. Rankings create a situation where individual and organizational reputations are put at risk (Power et al., 2009) because data products are made to render them more visible, marketable, and consumable. Indeed, forcing visibility and reputation risk on individuals and sectors is an important aspect of the business of rankings and related data products (see Power et al., 2009; Sauder & Espeland, 2009; Barron, 2017). Markets and product consumers are imagined to have particular needs and interests. University ranking marketing materials are often whitewashed, imagining a wealthy white global consumer-student interested in studying abroad while enjoying foreign experiences (Estera & Shahjahan, 2019; Shahjahan et al., 2020). Rankings are made for audiences by “value entrepreneurs” and are institutionalized when they are taken seriously (see Berman & Hirschman, 2018).

Yet, the imagined needs of audiences are often out of sync with their local values and professional practices. Malsch & Tessier (2015) have examined rankings and metrics in academic evaluation and explained that these lead to identity fragmentation and politicization, particularly for junior scholars. University rankings and metrics too often stand in place of thoughtful academic judgment. Instead, professors become machines focused on publishing in high-ranking journals rather than making contributions to knowledge or society (Spence, 2019). Similarly, students in higher education institutions may not develop the critical thinking skills needed to be successful in life. The problem is that metrics must be made to inform judgment rather than replace it, and should reflect what matters to audiences, such as student cognitive development (Spence, 2019). Working from locally situated actuality, institutional ethnography identifies disjunctures between extralocal and local practices, demonstrating how these create a bifurcated consciousness for people situated in such relations (Smith, 2006).

Data and infrastructure have become topics of study in their own right (Cheney-Lippold, 2017; Halford & Savage, 2017; Lupton, 2016; Bowker et al., 2010; Bowker & Star, 1999; Helles & Flyverbom, 2019; Power, 2015, 2019; Star & Ruhleder, 1996), but they are also matters of interest for scholars of university rankings and quantification (Barron, 2021). Infrastructure is an important part of the socio-epistemological prerequisites of quantification noted by Diaz-Bone and Didier (2016), as it is with rankings (see Barron, 2021). Recently, such studies have begun to examine platforms and their relations across society (see Wood & Monahan, 2019). Platforms are becoming the only way for a researcher to be visible on official university webpages (Lim, 2019). The software also produces quantified representations of scholarly work, which convey a sense that its quality and its contribution to the institution and the scholar’s field can be judged easily (Lim, 2019). Data and infrastructure can simultaneously be sites of refusal, resistance, and the construction of alternatives, or they can strengthen and expand control over individuals and organizations (Barron, 2021). In part, this is because as data and infrastructure are introduced to settings and individuals orient themselves to them, those individuals think of themselves and relate to others differently, making new and consequential realities (see Power, 2015).

Infrastructures are sets of material, symbolic, and situational relations that resolve the tension between “local and global,” for example, wherein local practices are enabled by a broader technology (Star & Ruhleder, 1996). More concisely, infrastructures are “material forms that allow for the possibility of exchange over space” (Larkin, 2013, p.327). Like university ranking websites and interactive media, platforms such as eBay and Uber have been examined as evaluative infrastructures, “an ecology of devices that disclose values of actions, events and objects in heterarchically organized systems (such as platforms) through the maintenance of protocol” (Kornberger et al., 2017, p. 85). Evaluative infrastructures are relational, assemblages of heterogeneous elements that create relations between things in space and time; they are disclosing, in that they contextualize events, actions, and characteristics; and they are protocol, a form of control (see Deleuze, 1992) without centralized power (Kornberger et al., 2017). The introduction of new data and its infrastructure plays a part in folding individuals and organizations into new relations that alter their rationales, interests, and practices. Infrastructures are important in quantification and rankings as they are foundational to the production of numbers and classifications and to bringing products to audiences. Ruling relations articulated in rankings and platforms create visibility that places academics in a precarious position (Kornberger et al., 2017; Lim, 2019).

In conceptualizing ruling relations, Smith (2005) includes infrastructure while noting its emergence over two centuries and its role in facilitating institutional stability. As people creatively interact with infrastructure, it grows, and the global ranking surveillance assemblage expands with it (Barron, 2021). Institutional ethnography is concerned with such coordinating infrastructure. These relations articulate institutions, which generalize knowledge and experience by abstracting, objectifying, and truncating people’s lived realities. Institutional ethnography provides a framework that grounds research in the material conditions of people, emphasizes institutional coordination, and shows how objectification happens. It is a sociology that was created exactly for the study of practices such as quantification, metrics, university rankings, and their infrastructure.

Research into rankings rarely examines internationally mediated actions to demonstrate the co-production of global higher education and rankings. Such a program of research would describe how rankings are made at the same time as individual organizations and global higher education are shaped in that making. Instead, there is a tendency to study the effects of rankings within a particular field and geographic location without extending the analysis from the local site into the extralocal to consider how rankings and related practices form an institution in their own right. Indeed, quantification in all of its forms has not been considered as an institution in and of itself (see DeVault & McCoy, 2006). Institutions, Smith explains, have the specific capacity to generalize and are generalized; therefore, people in institutional settings are active in creating the general out of the particular. This is an apt description of the relations that make up quantification, metrics, and university rankings.

Institutional ethnography was designed to guide inquiry into how people coordinate and are coordinated with others across locales by examining material conditions. Its ontology situates research from standpoints of particular people in specific chains of coordinated action. In doing so, it provides a foundation for considering the ethics of such conditions that is locally situated and generalizable insofar as institutions are themselves generalizing. It thereby creates the opportunity for expanding people’s expert knowledge of their own everyday lives and can inform an ethics of quantification (see Espeland & Stevens, 2008; Espeland & Yung, 2019). Institutional ethnography is an alternative sociology that can advance the sociology of quantification and the study of university rankings as an international, collaborative scholarly endeavor.

Centers of calculation (Latour, 1987) have long been a concern for scholars of science, technology, and surveillance studies (see Haggerty & Ericson, 2000). The approach I promote here examines such centers, but is concerned with the distributed nature of relations that facilitate global coordination; in institutional ethnography’s terms, it elucidates the extralocal that comes to bear on local experience (see Smith, 1987, 2005, 2006; Walby, 2007). By examining coordinated action, researchers are able to see how rankings and metrics are assembled, affect one another, and organize local and global higher education. In my own work, I have emphasized data work and infrastructure work as these are everyday practices with observable material relations that coordinate extralocally. I now share additional insight into institutional ethnography before providing my research methods and empirical findings.

Institutional ethnography

Institutional ethnography is not “ethnography” as referred to in research methods courses; it is not a research method or methodology but an alternative sociology. It is an alternative because of its unique ontology and refusal of traditional sociological theory. It is not opposed to concepts, but these must be grounded in the actuality of people’s lives and be descriptive of actual happenings. Whereas mainstream sociology treats people and their actions as distinct, institutional ethnography understands actions as inseparable from those who act. Actions are done by people in their bodies in a dialogic process of working from what they have done in the past. As Smith (2005) states, “people’s doings are caught up and responsive to what others are doing; what they are doing is responsive to and given by what has been going on; every next act, as it is concerted with those of others, picks up and projects forward into the future” (p.65). Such recognition leaves concepts like social structure to the side in favor of empirical examination of the historically committed coordination of people’s doings.

Researchers begin a project by grounding their work in the standpoints of particular individuals, whom the ethnographer treats as experts of their own world. A standpoint is local, particular, and embodied (Diamond, 2006; Smith, 1987, 2005). Taking a standpoint allows the researcher to work outward from that place in order to trace extralocal relations. The intent is to learn about objective relations from the people who live within them. Conceptual practices must “explicate the social in people’s actual doings, and they have to be modified or discarded as further discoveries display problems or inadequacies” (Smith, 2005: 55). The social for institutional ethnography is defined as “people’s activities as they are coordinated with those of others” (Smith, 2005:59). Drawing on Marx and Engels, Smith’s ontology grounds social science in the activities of actual people and their material conditions. It is attending to the actual social relations that makes institutional ethnography ethnographic. The ethnographer “explores and describes the same world as that in which the inquiry is done” (Smith, 2005:49) and requires a “commitment to learning from actualities as they are experienced and spoken or written” (Smith, 2005:50).

Cognition-orienting texts are objects of primary concern in studies of coordinated action. Here, the term “text” refers to material forms that allow replication and may include print, electronic media, algorithms, and so on. An emphasis is placed on materiality because it allows analysts to observe a text’s presence in everyday life. Texts create stability of organization and institution (Smith, 2005). People activate texts, which coordinates their work and initiates further sequences of text-mediated action. How a text is activated depends on a particular interpretive framework specific to the person activating the text and the institution in which they are embedded (Smith, 2005). The ethnographer asks questions to examine how problems and situations are organized through text-mediated sequences that are a part of extralocal relations of ruling (Smith, 1987, 2005, 2006; Walby, 2007).

The idea of the extralocal describes practices and work that come to bear on people but occur farther afield from local sites of inquiry (Rankin, 2017). That is, institutional ethnographers are concerned with how people, rationales, technologies, and policies come to bear on the everyday lives of people who work and play at a distance from where decisions about such relations are made. Institutional ethnography often shows how local experience and knowledge are made to disappear in objectified accounts, such as texts including video footage, statistics, and policy, and in other managerial and discursive practices (Smith, 2005; Walby, 2007). Objectification produces particular facts as “truth,” which truncates the everyday actuality of local practice (Walby, 2007). For example, Nichols (2008) began an institutional ethnography with a problem faced by a young man in Toronto—his need for shelter. By following her informant to the social services that are ostensibly in place to help him secure employment and housing, she demonstrated how these organizations situated him in a continuing cycle of unemployment and homelessness despite his best efforts to the contrary.

Informants’ accounts of their experiences are required to understand their work and how it is coordinated. Work here is not to be confused with paid employment; it includes mundane activities that are often overlooked or ignored by objectified knowledge. Anything that people do which takes time and effort is considered work (Smith, 2005). When I refer to data work or infrastructure work, I am merely identifying practices that institutional ethnographers might attend to in beginning their inquiry to discover the extralocal coordination involved with university rankings. By asking people about how they do their work and how they know how to do their work, researchers can follow how their activities are connected to and mediated with others.

An emphasis on people and texts can lead one to ignore the possibilities of how non-humans might be involved in shaping institutions and coordinating the actions of their people (Walby, 2007). Institutional ethnography’s notion of institutions as text-mediated action around a function may also limit the researcher’s attention to what is regarded as a specific institution. However, Smith (2005) has been clear that all sorts of objectified forms of consciousness beyond written documents are involved in mediating the social relations people are tied up with. As forms of objectified consciousness, algorithms and the like mediate action based on the intentions of the people who made them; they become a part of the past that people in the present engage with in their ongoing coordinating. Institutional ethnographers ought to cultivate sensitivity to the fact that coordination in actual practice does not conform to discrete boundaries or conventional ideas of what counts as an institution, such as the state, the family, and other institutional orders articulated in the institutional logics perspective (Thornton et al., 2012). Part of the objective of institutional ethnography is to unpack how an algorithm or set of institutional practices happens and is involved in coordination across taken-for-granted everyday experiences.

Mapping out institutional relations around people’s standpoints is a common part of institutional ethnographic studies. Much like the map in a shopping center that informs its reader that “you are here!” (see Smith, 2005), such maps situate a standpoint, or multiple standpoints, to demonstrate how they are connected. Maps can be useful means for supporting intervention into the ruling relations in order to make changes that are more desirable to the people situated within them. In doing so, transnational institutional ethnographic contributions to rankings research can build on one another and map out a global landscape of ranking relations that shape higher education.

Method

Institutional ethnography does not prescribe a particular research method, though qualitative approaches such as observations, interviews, and document analysis are common. Combinations of these methods are useful as they support rich descriptions grounded in people’s day-to-day lives. In my own research, I used such a combination. I began inquiry from my own standpoint as a graduate student and would-be independent scholar by asking a set of broad questions regarding the academic field and the curiosities I had about quantified knowledge that appeared to be shaping my potential future career, including university rankings, journal impact factors, citation counts, and journal rankings. I asked: how are the work and practices of people transformed into numbers, and how do those numbers transform work and practices? Who decides these processes and for what purposes? How and through what work is this particular knowledge produced? What do the numbers conceal or make visible through such transformations? Why, despite the fact that everyone seems to know rankings are problematic, do they continue to have such force? I drew on my data to construct a cognitive map (illustrated in Fig. 1 and described as follows) of how these practices are coordinated across organizational boundaries.

Fig. 1 Map of extralocally coordinated ranking-university relations

The current paper draws on data from my larger study of rankings and metrics. The whole study involved interviews with 61 professors, deans, support staff, and people working at ranking organizations (47 h) and observations at three ranking-related conferences (56 h). I was also hired as a research assistant to rate a university on its sustainability performance (180 h), which involved working with a committee to negotiate definitions of what sustainability is and the types of academic work that count as such, gathering information that fell into these categories, reporting on findings, interpreting results, and submitting them to the ratings organization. I participated on a number of university governing committees (93 h). I also attended several workshops on performance metrics and related practices in universities (10 h) and read hundreds of news media articles from the USA, Canada, Australia, and Europe (> 600). Most of my time was spent studying work within the University of Alberta (42 interviews), where I was registered in my doctoral program. Mount Royal University, also in Alberta, was the other location where I conducted many interviews (11). I interviewed two other institutional analysis staff at the University of Calgary and had one interview with an institutional analysis staff member at another Canadian university (this person wished to remain anonymous).

I chose organizations and sites primarily by convenience. Convenience samples of this sort are a pragmatic matter, not purely methodological, in that they provide points of entry into examining and analyzing informants’ social relations from their standpoints.

I used interviewing techniques developed by institutional ethnographers for investigating organizational and institutional processes (DeVault & McCoy, 2006). The interviewer’s objective is to elicit talk that describes circumstances and reveals connections across many sites. Some of the most important conversations were held while I observed people at work on committees or networking at conferences, or when I asked about their work and how they go about doing it (see DeVault & McCoy, 2006). Researchers can begin interviews by describing the topics they would like to hear about and then letting respondents tell their stories, asking questions as the story proceeds (DeVault & McCoy, 2006:26). In my own work, I described my interest in the university, rankings, and performance metrics. Then, I would ask them to tell me about their job. As participants described their job titles, I would ask who they work with, who they report to, who reports to them, and how they communicate and get information to complete their work. I listened for references to other processes, texts, and objects (data, databases, templates, committees); asked how such items work and what purpose they serve; whether I could have a copy of an item of interest; whether I could observe the informant using the item; or if they could describe how it was typically used. I also asked who I should speak with to learn more about the process or item in question. In this way, I was able to trace out the coordinating relations that my informants make and use as they go about their daily work. I followed these relationships to the next person who could fill information gaps that appeared in earlier interviews. This process continued until I had established a sense that further interviews would yield few new details.

Sorting data

I used an audio recorder to capture my interviews as well as proceedings at conferences and other events that I attended, taking notes during or immediately afterward to contextualize and remember what had happened. I transcribed my interview recordings into text and read them in order to determine what was interesting or relevant to my project. To sort the information, I created a set of folders and notes in a bibliographic management application called Zotero. I organized documents, extracts from transcribed interviews, websites, news articles, and other media. I created a folder for my analysis and a series of notes with labels that included “conflict,” “data,” “data inputs,” “data flows,” “data/rankings,” “equivalences,” “identity,” “jobs, titles, roles,” “strategy,” “desires, goals, wants,” “connections between organizations,” “judgment, assessment,” and “values, evaluation.” These were all topics and themes I expected to track based on my reading of past research and an analysis of news media I did for a related project. As I undertook more interviews, I continued the process of creating new themes and topics to assemble an image of how practices within universities, interests among academics, data, and infrastructure across organizations were connected. The objective was not to abstract or enforce a conceptual interpretation, but to piece together processes so that I could effectively describe them as they happen.
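To make the sorting logic concrete, the following short Python sketch mirrors it schematically. It is purely illustrative: the actual sorting was done with folders and notes in Zotero, and the extract records, sources, and theme assignments shown here are hypothetical.

from collections import defaultdict

# Theme labels drawn from the notes described above; new themes were added
# as further interviews were transcribed and read.
THEMES = [
    "conflict", "data", "data inputs", "data flows", "data/rankings",
    "equivalences", "identity", "jobs, titles, roles", "strategy",
    "desires, goals, wants", "connections between organizations",
    "judgment, assessment", "values, evaluation",
]

def sort_extracts(extracts, themes):
    """Group transcript extracts under each theme assigned to them."""
    by_theme = defaultdict(list)
    for extract in extracts:
        for theme in extract["themes"]:
            if theme not in themes:
                themes.append(theme)  # the theme list grew as the study proceeded
            by_theme[theme].append(extract["text"])
    return by_theme

# Hypothetical example of use
extracts = [
    {"source": "interview-07", "text": "copy and paste into the template",
     "themes": ["data flows", "judgment, assessment"]},
]
print(dict(sort_extracts(extracts, THEMES)))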

In what follows, I share a few examples of data and infrastructure work across sites of production of global rankings, which are only a small part of the relations I illustrated in Fig. 1. Following infrastructure and data work through their processes also demonstrates an important insight: data and infrastructure work are distributed beyond centers of calculation, often done by single individuals following routines that they may not realize are connected to ruling relations articulated in rankings and related practices. The work and infrastructure demonstrate what I call diffuse judgment: the contingencies through which rankings and metrics are produced and have their effects.

Diffuse judgment, infrastructure, and data work

I was able to identify practices that frequently followed the same policies and procedures, used the same templates and data sources, and transmitted information to the same distant locations. Figure 1 illustrates the broadest set of coordinated relations—a rough map—of extralocal relations that individuals working at universities were hooked into with ranking organizations, publication corporations, and the like. Beginning on the left of the map, we can see that university administrators work to create strategic plans and set performance expectations. These plans bear on students, who complete registration forms to attend university. Such forms hold information that is extracted into customer or employee relations management databases, which in turn feed a data warehouse. Professor and staff details are similarly recorded and reported to the data warehouse; professors and staff also engage in research and peer review to produce publications, whose metadata are sent to the data warehouse and are also extracted by publishers for their own databases to produce metrics such as citations or impact factor. These metrics are incorporated into other products and platforms, such as SciVal, Scopus, various rankings, and QS Stars, which are sold back to universities for integration into administrative use. Professors and other academics are typically surveyed to provide a sort of peer review of universities; these survey data are integrated with the aforementioned publication metadata as well as data from university data warehouses to create the rankings and other products. Importantly, university peer review processes for tenure, promotion, and academic awards (see Lamont, 2010) are a part of these processes, which share a discourse with the marketing and rationales of the businesses that make the aforementioned products (Barron, 2021). While this is a rough and general map, it gives a sense of the complicated coordinating processes that are happening across the global university and rankings surveillance assemblage (Barron, 2021). Further empirical work at different universities can elaborate such a map and identify direct lines of connection and coordination with those identified in my own study. In this way, institutional ethnographers can build on each other’s projects to map the transnational ruling relations of global rankings and higher education. In what follows, I describe parts of the coordinated relations illustrated in Fig. 1.
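To summarize the flows just described, the sketch below expresses them as a simple list of (source, data, destination) relations and traces where data originating at one site eventually travel. It is a rough paraphrase of Fig. 1 for illustration only; the node labels are mine and are neither official nor exhaustive.

# Coordinated relations paraphrased from Fig. 1 as (source, data, destination) flows.
FLOWS = [
    ("student registration forms", "student records", "relations management databases"),
    ("relations management databases", "student and staff counts", "university data warehouse"),
    ("professors and staff", "publications and metadata", "publisher databases"),
    ("publisher databases", "citations and impact factors", "ranking products (SciVal, Scopus, QS Stars)"),
    ("professors and other academics", "reputation survey responses", "ranking organizations"),
    ("university data warehouse", "institutional data submissions", "ranking organizations"),
    ("ranking organizations", "rankings and related products", "university administrators"),
]

def downstream(node, flows=FLOWS, seen=None):
    """Trace every site that data originating at `node` eventually reach."""
    seen = set() if seen is None else seen
    for source, _, destination in flows:
        if source == node and destination not in seen:
            seen.add(destination)
            downstream(destination, flows, seen)
    return seen

# Example: where do student registration data travel?
print(downstream("student registration forms"))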

Reporting to ranking organizations

To better understand how reputation surveys worked, I asked participants who had responded to such surveys about their experiences. I also interviewed a senior scholar as he responded to the Times Higher Education Reputation survey; below, I describe part of the conversation we had. I then describe my conversation with an institutional analyst who responded to the THE’s request for university-level data using their template and online form. These surveys strip away the context that facilitates scholarly judgments of quality. Academic fields can be quite large, and professors may know their discipline, but their networks are focused within specific areas of research. Unlike peer review in tenure, promotion, or grant application processes, where scholars orient to a body of work through negotiated cognitive contextualization (see Lamont, 2010), reputation surveys position academics in front of a screen with a form that asks them to report on the best departments and universities in the world. They have little other than personal memory and the Internet to assess global fields and distant institutions.

Reputation surveys

The survey asks respondents to nominate the 10 best research and teaching universities in their subject area for their region. In our case, the subject area was sociology and the region was North America. It then asks for non-rank-ordered nominations for the best in the world; the list is to be generated from memory. The professor listed seven universities as nominees for best in sociology in North America, seven for best teaching in North America, five for best research in the world, and seven for best teaching globally. When I inquired about his rationale for his choices for best research in North America, he explained that he was aware that they all had good quantitative sociology programs and that there were scholars there who do research he follows. During the interview, he tried to recall research done by colleagues in his particular area—not sociology as a discipline more broadly. He explained, “Part of my problem is I don’t really think about universities, I think about people. You know, who is writing in my area and so on. I don’t even know where some of them are, maybe some people are more conscious of that.” Answering my question jogged his memory, and he added the University of Minnesota to his list of nine to make a total of 10 because he recalled a co-author of scholars he knew worked there. He also explained that he included some in his list based on reputation because, “It’s probably well deserved. Good reputation leads to people going there and being recruited… So I'm going part old reputation and current people.” The fact that past reputation enters into his judgment is illustrative of how rankings reproduce existing and past hierarchies.

His attempts to muster personal knowledge of institutions on a global level simply did not succeed. He then added Michigan State to the list and ended his attempt, noting that he could have included some of the others from his North American list and that he believed Europe must have some universities that he should include but, “I just don’t know enough about Europe so I’m not willing to, personally, put them in there.” He decided to end his attempts to list more universities because he felt there might be European institutions that were more deserving than the ones he could name. He did not feel that his knowledge of additional universities was sufficient to make valid nominations, though he did mention that my presence may have made him more thoughtful than he would have been alone. His experience made him feel that, “this reputation stuff feels really shaky.” Other scholars I interviewed said they had similar experiences in terms of their inability to generate a list based on effective knowledge of the institutions they were asked to rank.

Through this interview, we see how one part of diffuse judgment across ranking infrastructure works. Specifically, the questionnaire imposes its own understanding of academic judgment onto the scholar who is working with it. The questionnaire presumes academic judgment and knowledge of the higher education field is universal and expansive. The scholar I interviewed struggled to use intimate knowledge for his assessment and he felt it would be inappropriate to add names to his lists. This illustrates what institutional ethnographers refer to as “bifurcated consciousness,” the division of local, bodily, and experiential ways of knowing from institutionalized and objectified knowledge (Smith, 2006; Walby, 2007). Such bifurcation submits experiential knowledge to domination by the text-based institutional form. Rather than create an assessment based on an in-depth review of research and teaching at a university, which is typical of peer review and academic judgment, he had to base his list on limited personal experience. The interaction between the scholar and the ranking is text-mediated and the questionnaire imposes on the scholar, making his response difficult and “shaky.” Following infrastructure and data work, we can also see how information flows into rankings through official university responses.

University official response

Universities must have effective data to perform their identities to audiences. The University of Alberta created a new data warehouse, called Acorn. There had been growing interest in a user-friendly data source that would provide standard information for reporting and administration. An administrator in the Registrar’s Office explained intentions for the data warehouse saying, “…it is the single source of truth.” Acorn was also designed to provide data that ranking organizations request. I met with an institutional analyst named Deborah who provides reports to ranking organizations each year. She shared the template she used for the submission to the Times Higher Education ranking; a portion is illustrated in Table 1. The table required her to copy and paste data from her database into the template, then copy and paste from the template into the THE’s online form. The template asks her to provide her university’s number of academic staff and students of different categories (international, research, undergraduate, graduate), number of degrees awarded (doctoral, undergraduate), overall institutional income, research income, and research income from industry. She explained to me that the ranking organization develops definitions for the template and, “then we apply them as best we can.” The Times Higher Education ranking definitions standardize information that universities across the globe submit to the ranking. Because the ranking standards are not based on local knowledge, universities can only “apply them as best we can.” In doing so, local information and context are truncated or erased. Differences in how universities count and understand themselves are transformed and homogenized into the form that rankings require.
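The standardization this template enforces can be made concrete with a short sketch. The following Python fragment is hypothetical: the field names, local categories, and counts are invented for illustration, and the actual submission was done by copying and pasting between a database, the template, and the THE online form rather than by code. The point it illustrates is the one Deborah described: local categories are forced into the ranking organization’s definitions “as best we can,” and whatever does not map is dropped.

# Hypothetical template fields, paraphrased from the prose above.
THE_TEMPLATE_FIELDS = [
    "academic staff", "international academic staff", "research staff",
    "students", "international students", "undergraduate degrees awarded",
    "doctoral degrees awarded", "institutional income", "research income",
    "research income from industry",
]

def fill_template(local_counts, crosswalk):
    """Map local categories onto template fields; unmapped local context is discarded."""
    submission = {field: 0 for field in THE_TEMPLATE_FIELDS}
    unmapped = []
    for category, value in local_counts.items():
        field = crosswalk.get(category)
        if field is None:
            unmapped.append(category)  # local distinctions the template cannot carry
        else:
            submission[field] += value  # distinct local categories are homogenized
    return submission, unmapped

# Invented local categories, counts, and crosswalk for illustration only.
local_counts = {"continuing faculty": 1200, "contract instructors": 800, "postdoctoral fellows": 450}
crosswalk = {"continuing faculty": "academic staff", "contract instructors": "academic staff"}
print(fill_template(local_counts, crosswalk))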

Table 1 University ranking reporting template with data definitions

Professor subjectivity

For international rankings, the major publishing corporation databases are a primary data source; Elsevier and Thomson Reuters have been primary contributors. Each of these companies owns masses of publications that are licensed to university libraries. They also index academic journals and books, scraping them for metadata such as author name, location of employment, and works cited. This is one nexus with local infrastructure I was unable to examine in my research. However, I was able to observe how such data return to universities and professors to shape their work. The data come home in many forms, including impact factors, university rankings, journal rankings, and citation scores.

An associate dean of research I spoke with used journal rankings to encourage his faculty to target high-impact journals to increase the faculty’s visibility. He also considered journal impact to be an indicator of quality research: “I think aspirational publication is a good way to frame to faculty, to share the interest that exists between publishing impactfully and also publishing to maximize reach on audience… I have concerns, for example, if a faculty member is consistently publishing in a journal that doesn’t have an impact. That raises concern for me.” Metrics may define impact; they can raise this dean’s concern or inform him that a professor has aspirations and is achieving them. Here, faculty and dean consciousness are coordinated by notions of impact developed from afar, but the coordination is not completely dominating. The dean works according to his own ethic and within his local cultural and interpersonal milieu to practice impact in a particular way.

He explained that one of his concerns with quality was that his faculty was traditionally oriented toward quantity, and in an environment with predatory journals without peer review, quantity could be a problem. Journal rankings and metrics became a tool for this associate dean to direct his professors toward quality and visibility rather than quantity, and a reference point in conversations regarding what constitutes an acceptable body of work. He believed part of his job was to coach faculty to achieve personal and organizational goals. The journal rankings define quality, articulated as impact, which is coupled to visibility and orients mentoring, publication, and aspirations.

A professor in library sciences whom I met had used a variety of data to make his case for promotion. He explained how he worked with all sorts of data.

I got promoted to full prof last year in July. As part of my package… I created a... document where I talked about my research work in terms of institutional national or international impact. And I used a broad range of measures. So citations being just one factor, Google Scholar, then I used some services that allow you to see the status and the location of all those people, where they come from. For example, there is a service called Mendeley [and] for each of my publications, I managed to say okay, how many times this has been downloaded and viewed and those come from what countries in the world.

He engaged in elaborate data work to represent himself, the quality of his work, and the degree to which he had an international reputation. Notably, the data bestow reputation upon him (see Austin, 1975). The infrastructure he engaged with provided him with data about himself and his work, but also about distant others who were engaging with it. He speaks on behalf of the data, which tell him and the promotion committee that he is worthy of the promotion. The data coordinate his consciousness as they do that of the committee reviewing his case.

I also spoke with numerous department chairs and deans about university rankings, metrics, and journal rankings in regard to how they shape academic merit, tenure, and promotion decisions. In my conversation with an accounting department chair, I asked whether it would be impossible to make a case for tenure or promotion if someone was doing good work but did not publish in the listed journals. He replied, “it’s a harder sell, if it’s in the list there’s no questions, bang, bang done.” He was concerned that rankings can replace academic judgment: “…as a faculty evaluation committee, we do worry about that, do we end up just mechanically giving people tenure and promotion based on some list.” In this instance, it is clear that the journal rankings can erode or replace academic judgment and conversations regarding merit. This finding echoes concerns expressed by Spence (2019), but it also demonstrates that academics can exercise judgment with rankings and metrics rather than be replaced by them. However, the committee has to consciously resist the ranking in order to prevent it from doing the work the committee is intended to do.

Summary and conclusion

University ranking research and the sociology of quantification have examined new types of quantification, their institutionalization, and case comparisons, including infrastructure and platforms. Scholars have argued that the sociology of quantification needs an approach to research that can help unify the field, draw together empirical projects across borders, and examine the socio-epistemological prerequisites of quantification. I have argued that institutional ethnography is an alternative sociology that offers a means to address many of the interests in the aforementioned scholarship. By examining extralocal coordination in people’s everyday work with data and infrastructure, we can collaborate to understand how cases are connected through such coordination and observe the socio-epistemological prerequisites in action. In this paper, I shared a broad map of extralocal coordination that extends beyond the universities where I engaged in most of my research. I illustrated the junctures where work at other locations plugs into the international coordination of rankings and higher education—their ruling relations. I demonstrated the socio-epistemological prerequisites of rankings as quantitative forms in local data, infrastructure, and data work in survey responses and cases for tenure and promotion. While I have not shared empirical examples from my research at conferences and promotion events put on by ranking organizations, or interviews from universities in other geographies, these were also part of the coordination binding people and organizations into the ruling relations of rankings, expanding their reach, institutionalizing their practices, and coordinating the consciousness of professors, deans, and other university staff to make rankings salient to audiences.

Interest in metrics and data for university administration dates to at least the late nineteenth century, when presidents began to hire analysts to collect data and compare their universities against others (Veysey, 1965). Administrators build new infrastructure in response to new administrative questions, new rankings, or metrics, and university employees integrate the infrastructure and the data it carries into their work. In doing so, they engage in data and infrastructure work that further institutionalizes the norms and judgments carried by the data and infrastructure. Organizational identities and individual subjectivity are shaped along the way. By following coordinated action, we can see how individual professors and nonacademic staff become a part of that infrastructure while artfully working with and against data to situate, interpret, and represent themselves. My concern for infrastructure and data work is not only because they are useful to follow in conducting research on higher education and rankings, but because infrastructure and data actually work; they intervene in all sorts of relationships in higher education, not only between universities and the public, but also between individual professors and themselves. Interviews, observations, and document analysis are all useful methods for tracing and mapping the extent of transnationally coordinated relations, and institutional ethnography can grow a transnational sociology of quantification and university rankings to develop strategies with regard to local concerns that are extralocally coordinated.