Abstract
In recent years, science policy experts have been promoting interdisciplinary research (IDR) in order to foster innovation and address grand scientific challenges. But to date we know little about whether, how, and to what extent universities are committed to fostering this type of research. This paper develops the first measure of university commitment to IDR, which relies on the organizational structuring of research activity into research centers and departments. We extend the previous literature by measuring, rather than assuming, the interdisciplinary nature of research units. Using a large amount of textual data from 157 research universities in the United States, and combining machine learning and confirmatory factor analysis techniques, we develop a continuous and composite measure that taps universities’ structural commitment to IDR. We then examine the commitment exhibited by specific universities and how such commitment varies by university characteristics like size, resources, and region. Results show that the fraction of centers and departments that are interdisciplinary is critical to measuring a university’s structural commitment to IDR and to developing specific research policies aimed at fostering IDR.
In recent years, federal agencies and science policy experts have been trying to foster interdisciplinary research (IDR)—which “integrates perspectives, information, data, techniques, tools, concepts, and/or theories from two or more disciplines” (National Academies of Science et al. 2005: 188)—because of its high potential to spark innovation and solve complex, real-world problems (Lyall et al. 2013; Rhoten and Parker 2004). This is evident in recent publications, including the National Academies of Sciences’ Facilitating Interdisciplinary Research (2005) and Convergence: Facilitating Transdisciplinary Integration of Life Sciences, Physical Sciences, Engineering, and Beyond (2014), which suggest ways to foster cross-cutting and potentially innovative research. The National Science Foundation has accelerated cross-cutting program initiatives like CREATIV, interdisciplinary programs like Science of Science and Innovation Policy (SciSIP), and interdisciplinary research centers. And in recent years, the White House Office of Science and Technology Policy has been investigating how best to foster IDR.
Anecdotal evidence suggests that a number of universities are heeding this call, and trying to catalyze IDR on their campuses. Research administrators are promoting IDR in many ways. Some ways can be developed and modified with relative ease, such as using internal seed grants to foster cross-disciplinary teams, developing new administrative positions to foster IDR initiatives (e.g., Duke University’s Vice Provost for Interdisciplinary Studies), pursuing cluster hiring initiatives (e.g., Dartmouth, Purdue University and UC Riverside) and hiring interdisciplinary faculty (Duke University 2017; Flaherty 2016; Jaschik 2014; Sá 2008). Other ways require not only financial support, but a longer-term commitment and organizational restructuring, such as establishing and maintaining research centers and interdisciplinary departments. For example, the Boston University School of Medicine founded the Evans Center for Interdisciplinary Biomedical Research as well as an intra-departmental Section of Computational Biomedicine in order to facilitate IDR and innovation (Coleman et al. 2013). It is these more structural manifestations of IDR that interest us here.
Beyond anecdotal evidence, we know little about whether and how universities are committed to interdisciplinary research. To be sure, a number of scholars have proposed and developed ways to measure the interdisciplinarity of a given research paper (Mugabushaka et al. 2016; Porter et al. 2007), a scholar (Leahey et al. 2017), a topic (Light and Adams 2017), a lab (Jensen and Lutkouskaya 2014), and even a field (Brint et al. 2009). But how can we capture commitment to interdisciplinarity at the more aggregate and policy-relevant level of the university? In this paper, we build upon and extend ideas proposed by Brint et al. (2009) and Jacobs and Frickel (2009) to measure universities’ structural commitment to IDR. The number of research centers (Jacobs and Frickel 2009) and departments (NAS 2005) alone may indicate IDR, and the ratio of the two may be informative (Gumport and Snydman 2002; Jacobs and Frickel 2009). Traditional departments are indeed disciplinary (Lee 2007), but we agree with Jacobs and Frickel (2009) that it is unwise to assume that all departments are disciplinary units, and build on Brint et al.’s (2009) effort to distinguish interdisciplinary departments from others. And although most research centers were developed as boundary organizations (Guston 2000) at the interstices of traditional disciplines (Geiger 2013), not all of them may be interdisciplinary in nature. We thus interrogate traditional assumptions by distinguishing interdisciplinary research centers from other centers, and interdisciplinary departments from other departments. We use this more nuanced understanding to develop a measure of universities’ structural commitment to IDR.
It is important to examine universities’ commitment to IDR because over half of the basic research conducted in the United States takes place within a university setting (Geiger and Sa 2005). Universities also play an increasingly prominent role outside the academy, by collaborating with industry, helping found start-ups, and providing expertise to companies’ scientific advisory boards (Barringer and Slaughter 2016; Barringer and Riffe 2018; Mathies and Slaughter 2013; Slaughter et al. 2014; Stuart and Ding 2006). As university and industry models of research, development, and evaluation become more intertwined (Bercovitz and Feldman 2011; Owen-Smith 2003), it is critical to understand how and to what extent universities promote innovative knowledge production and commercialization through their pursuit of IDR. While previous research has found that research centers perform this function at a single university (Biancani et al. 2014), we investigate commitment to IDR among the entire population of top research universities—something that is missing from the literature to date (Sá 2008).
Developing a measure of universities’ structural commitment to interdisciplinary research is a critical and timely task that will advance scholarship and research policy. By examining interdisciplinarity across all fields, we expand the scope of previous investigations, which have tended to focus on specific fields like biochemistry (Chen et al. 2015), computer science (Chakraborty 2018), nanotechnology (Schummer 2004), and bionanoscience (Rafols and Meyer 2010), and often at a single university or center (Biancani et al. 2014; Gowanlock and Gazan 2013; Taşkın and Aydinoglu 2015). It allows us, via descriptive analyses presented herein, to gauge how this form of commitment to IDR varies across universities and university types. The measure and our findings may be useful to universities, their administrators, and science policy experts at the national level who continue to seek ways to promote interdisciplinary research, and thereby innovative and transformative science (National Research Council 2014).
Efforts to measure IDR
Given its timeliness and policy import, a number of scholars have developed measures of interdisciplinary research,Footnote 1 most of which are binary and measured at the scholar or paper level. At the scholar level, Jacobs and Frickel (2009) suggest that holding a joint appointment, or working in a discipline that differs from the discipline one was trained in, may distinguish interdisciplinary scholars from others. This measure has been used by Leahey and Blume (2017) to examine how interdisciplinary appointments influence patenting activity. At the paper level, at least three measures have been developed. Chen et al. (2015: 454) suggest that interdisciplinary papers are those that are cited by a discipline other than the discipline the author(s) represent. Larivière and Gingras (2010) assess the percentage of a paper’s references made to journals from other fields. Porter et al. (2007) develop a paper-level measure of what they call “integration” that uniquely considers the relatedness of the fields (e.g., whether a sociology paper references a cognate field like anthropology, or a more distant field like geology), and not just the variety of fields and the evenness of their distribution.Footnote 2 This measure has been used by Leahey et al. (2017) to understand the career consequences of engaging in interdisciplinary research.
For several reasons, we choose not to aggregate such scholar- and paper-level measures of IDR to the university level. First, the most appropriate method of aggregation is uncertain, but also consequential. To compare universities in the U.K., Rafols et al. (2012) pool references appearing in all articles published by all university members, and then calculate an overall, or global, measure of interdisciplinarity. Cassi et al. (2014) show that this captures something substantively different than first calculating IDR at the article level, and then averaging those scores to generate a university level measure.Footnote 3 Second, these scholar- and paper-level measures capture the extent of engagement in IDR rather than commitment to it, which is our focus. And third, the most nuanced measures of IDR (like Porter et al.’s (2007) integration score) rely not only on information about a focal paper, but also on information about that paper’s references, so the data demands are daunting for the large number of universities (n = 157) we study. We would need to extract and code detailed information from all publications (including their bibliographies) associated with the 157 universities in our sample. For these reasons, we aim to develop an organization-level measure.
A few unit-level measures have been proposed to date, but most of these are at the level of the project, department, or center, rather than the larger university of which they are a part. Because “it’s very difficult to define what an interdisciplinary institution is” (Cheng and Liu 2006: 139), most scholars have circumvented the task of assessing interdisciplinarity at the university level. Cassi et al. (2014) determine authors’ affiliations, and then average their article-level interdisciplinary scores to the university level. Birnbaum (1981) identifies interdisciplinary projects in part by seeing whether different bodies of knowledge are represented in the research group, but this yields little variation at the project level (s.d. = .08), and likely even less at the university level. Boardman and Corley (2008) characterize centers by the number of departments that affiliated faculty members represent, but co-existence of multiple disciplines is a necessary but not sufficient condition for the integration of disciplines that interdisciplinarity requires, according to the National Academies of Science (2005) definition. Jacobs and Frickel (2009: 59) identify interdisciplinary fields by examining the proportion of faculty with degrees in other disciplines. And because the vast majority of research centers “are ostensibly interdisciplinary, at least in name and self-presentation,” Jacobs and Frickel (2009: 54) imply that the number of research centers, or the ratio of centers to (traditional, disciplinary) departments, may indicate interdisciplinarity at the university level. We develop this idea by also distinguishing interdisciplinary units (centers and departments) from other units.
We conceptualize universities’ structural commitment to IDR as efforts to promote IDR within their extant research infrastructure of centers and departments. How many centers exist, and are they really interdisciplinary? How many departments exist, and are they really all mono-disciplinary? Answers to these questions should give us a glimpse of universities’ structural commitment to IDR. Before discussing centers and departments in more depth, we make two points that are critical to the conceptualization and measurement of this construct. First, our interest in structural commitment intentionally excludes efforts that require planning and financial resources but leave the organizational structure intact, such as: internal grant mechanisms; cluster hiring initiatives; and interdisciplinary teaching.Footnote 4 Second, our interest in structural commitment is not intended to speak to the underlying motivations for departmental mergers and consolidations. As competition for resources within the field of higher education organizations has increased (Barringer 2016; Taylor et al. 2013; Weisbrod et al. 2008), department consolidations may be pursued solely to enhance administrative efficiencies (Gumport and Snydman 2002). And though we do see this pattern among lower tier comprehensive universities (like University of Wisconsin—Stevens Point most recently), it is less common among the population of top research universities we study here.
Moreover, even among top research universities, we find no empirical evidence that the universities most pressured to consolidate for efficiency’s sake (i.e., public universities and those with lower revenues) actually have fewer departments, or a greater relative number of interdisciplinary departments.Footnote 5 In sum, our focus is purely structural: we are not interested in other, non-structural ways of fostering IDR, and we are not interested in the underlying motivations for merging departments and the reasons why the organization is structured as it is. Indeed, as we have shown, motivations for efficiency’s sake are not relevant to the set of top universities we study, and even if they were, we suggest that a unit created for one reason (i.e., administrative efficiencies) can also have unexpected effects (i.e., demonstrating commitment to IDR).
Research centers
To date, most researchers have presumed that research centers are interdisciplinary. This is a reasonable assumption, given that: (1) most centers (especially those founded in recent years) were developed as interstitial entities to foster research that couldn’t be conducted within single departments (Boardman and Corley 2008; Brint et al. 2009; Sá 2008: 542), (2) most IDR that is conducted at research universities takes place within centers (Nickelhoff and Nyatepe-Coo 2012), and (3) the “vast majority” of research centers present themselves as interdisciplinary (Jacobs and Frickel 2009: 54). Indeed, cross-disciplinary interaction is the main justification for the development of research centers (Geiger 1990). Thus there is reason to believe that research centers—which at some universities number over 100—serve as bridges between disciplines and departments, providing a more flexible space to coordinate collaborative research (Biancani et al. 2014). As Jacobs (2013: 92) writes, “research centers may function—imperfectly to be sure—as intra-university ‘boundary organizations’ [see Guston (2000)] that help to bridge disciplinary divides.” Frickel and Gross (2005) note that centers may serve as micro-mobilization contexts for recruitment into emerging interdisciplinary fields. At minimum, centers represent an “organizational context for bringing together scholars from diverse backgrounds with shared interests” (Jacobs 2013: 92). Indeed, this guided Bozeman and Boardman (2013), who defined research center affiliation as affiliation with an institute that includes participants from at least two distinct departments.
However, some centers appear to have been developed without an interdisciplinary mission (Rhoten 2003; Rhoten 2005), and even those with such a mission did not always live up to expectations (Rhoten 2005; Stahler and Tash 1994). This seems to be particularly true among social science research centers (Ikenberry and Friedman 1972). Indeed, research centers are sometimes established to meet other goals: to facilitate the recruitment of high-status faculty and help to retain them once hired; to allow researchers with diverse skill sets to work together; to provide access to specialized (and expensive) instruments and other types of support for research that may or may not be interdisciplinary (Kaplan et al. 2017); to be more flexible than departments so they can respond quickly to changing environments and funding opportunities (Mallon 2006); and to connect academe and industry and aid technology transfer (for example, the Industry-University Cooperative Research Centers (IUCRCs) studied by Leahey et al. (2017)).
With this in mind, we pursue a more nuanced and conservative approach. It may very well be the case that universities that house more research centers (relative to traditional, presumably disciplinary departments) manifest greater commitment to IDR than universities that house fewer centers. But we interrogate the presumed link between centers and IDR by using state-of-the-art natural language processing techniques to distinguish interdisciplinary centers from others, and thereby more closely tap universities’ structural commitment to IDR.
Departments
Even though departments are closely associated with disciplines (Abbott 2001; Abbott 1999; Lee 2007), research also documents the porous and interdisciplinary nature of some departments, suggesting that the sheer number of departments may indicate a structural commitment to interdisciplinarity. For example, Jacobs and Frickel (2009) note that departments hire faculty trained in other disciplines, and that ideas regularly cross disciplinary bounds. In his recent book, Jacobs (2013) shows that even among the traditional disciplines, there is much variation: fields like education readily and consistently engage in interdisciplinary scholarship, in contrast to more homogeneous fields like economics. And many of the fields and subfields that have emerged in recent decades are highly interdisciplinary in nature. Genetic toxicology, for example, developed out of and synthesized research in genetics, toxicology, public health, and environmental science and politics (Frickel 2004). Behavioral genetics is another new and highly interdisciplinary field (Panofsky 2014). Comparative studies, like Moody’s (2004), reveal that some disciplines, like sociology, are much more integrative and perhaps even interdisciplinary than other more insular fields, like economics. Adams and Light (2014) also found variation in the degree of interdisciplinarity across fields as well as subfields and topics like Demography and HIV/AIDS research.
Given this variability, we build on the approach pursued by Brint et al. (2009) to distinguish more and less interdisciplinary fields. By reviewing and manually coding the content of course catalogs that were available for almost 300 American colleges and universities, they distinguished fields typically organized as (disciplinary) departments from fields typically organized as interdisciplinary programs. To qualify as interdisciplinary, a field had to meet two criteria: (1) it had to draw faculty from two or more departments, and (2) it had to be classified as interdisciplinary by more than two-thirds of the universities in their sample. Examples of interdisciplinary programs include Asian studies, Race and Ethnic Studies, International/Global Studies, Brain and Biomedical Sciences, and Women’s Studies. Examples of traditional, disciplinary departments include Economics, Chemistry, and English. Their approach reveals that not all departments are mono-disciplinary, and that the names of departments themselves may well be indicative of an interdisciplinary focus. We build on Brint’s work by using natural language processing techniques to distinguish interdisciplinary departments from others.
In sum, the primary goal of our paper is to construct and validate a measure of universities’ structural commitment to IDR. We begin with measures suggested or implied by extant research, including the number of departments and centers, and the ratio of (presumably interdisciplinary) centers to (presumably disciplinary) departments. We build on this in several ways. First, we question the content validity (i.e., coverage) of previously proposed measures that rely only on a comparison of the sheer number of research centers and departments. Quantity isn’t always the best gauge of commitment, especially given that “interdisciplinary initiatives can lead just as easily to the multiplication of academic units rather than their consolidation” (Jacobs and Frickel 2009: 60).Footnote 6 Second, we interrogate the assumption that research centers are inherently interdisciplinary (e.g., Jacobs and Frickel 2009), and departments are inherently mono-disciplinary (Lee 2007). Instead, we assess this empirically using state-of-the-art methods for large-scale textual analysis, which allow us to identify (rather than presume) the interdisciplinary nature of units. Third, we develop a single continuous measure of universities’ structural commitment to IDR that incorporates information about the prevalence and nature of units that typically house research activity: centers and departments.
Data and methods
Sample
To provide the first large-scale and systematic analysis of interdisciplinarity at the university level, we study the population of top research universities across the country. Like Brint et al. (2009: 162) we focus on large and prestigious research universities because they “have the wealth and organizational capacity to respond to new developments” like shifts in science policy initiatives to foster IDR. Furthermore, research universities are an ideal site to examine commitment to IDR given their research-oriented mission and their support of research activity in a wide variety of fields. We define research universities as those schools included in the “very high research activity” category of colleges and universities listed in the 2010 or 2005 Basic Carnegie Classifications and those classified as research extensive institutions in the 2000 Basic Carnegie Classifications (N = 157). This approach encompasses longstanding core research universities, as well as more peripheral members of this group.
Given recent investments and interest in IDR (Jacobs 2013; National Academies of Science 2005; National Research Council 2014), we focus on a recent academic year (2012–2013) for which all the necessary data were available. We rely on two main sources to obtain data on the organizational units in which most research is conducted: centers and departments.
Data
Research Center Data We identified each university’s set of research centers (RCs) from the 2012 edition of the Gale Research Center Directory. Though underutilized in the scholarly literature, a comparison of the Gale data and data obtained via manual web-searches of all 157 universities led us to believe that the Gale data were more valid for two reasons. First, Gale uses a stricter definition of research center, whereas searches of university web pages tended to return a broad array of entities including not only centers and institutes, but also labs, museums, and libraries. Second, the Gale data were better vetted and curated, and were less likely to include duplicates of the same center with slight name variations, relative to university websites. In short, compared to web searches, Gale provided a more conservative count of research centers at universities. Moreover, it was reassuring to find no systematic differences in the disciplinary home or the apparent interdisciplinarity of research centers across the two sources.Footnote 7 Relying on the Gale Research Center Directory, we retrieved information on a total of 9,211 centers housed at the 157 universities we study. The text we analyze includes the name of the research center, the subject headings available from the Gale Research Center Directory (e.g., the center “The Florida-Mexico Institute” with subjects “Mexico” and “U.S.-Mexican relations”), and the “Multidisciplinary” field code which Gale assigned to some centers.
Department Data We identified each university’s set of departments from university websites, which are a standard source of data for research on higher education organizations (Barringer and Slaughter 2016; Barringer and Riffe 2018; Harris 2010; Morphew and Hartley 2006). This entailed visiting each university’s website, and obtaining either its master list of departments (when available), or visiting each constituent school or college’s website and identifying member departments. When determining what ‘counts’ as a department at some universities, professional schools presented a special case. Our perusal of websites revealed that a subset of professional schools—Law, Nursing, Music, Journalism/Communications, Pharmacy, and Architecture—operate effectively as departments, so we counted them as such. Other professional schools (including Engineering, Education, Business, Medical, and Agriculture) tend to have departments within them; if so, we counted each department individually, but if not, we counted them as a single department. We excluded honors, graduate, continuing education, distance colleges and schools, and administrative departments from our counts. We included programs only when they were listed under the heading “departments” on a university’s website, and thus presumably functioning as departments. In total, we retrieved information on 12,323 departments housed at the 157 universities we study. The text we analyze is the department name itself, which is surprisingly informative.Footnote 8
Analytic approach
Overview
As we describe in detail below, we combine a variety of methods to convert these textual data on centers and departments into quantitative indicators of universities’ structural commitment to IDR. We use semi-supervised machine learning techniques in the Python programming language to distinguish interdisciplinary units (centers and departments) from other units. We then aggregate these binary classifications to the university level by calculating the fraction of centers that are interdisciplinary and the fraction of departments that are interdisciplinary, and these continuous variables serve as indicators in confirmatory factor analysis (CFA) models that we specify. From the best fitting CFA, a continuous factor score is derived, allowing us to identify how universities compare in terms of their structural commitment to IDR. To our knowledge, this combination of techniques has not been used to date.
Coding and classification
Our first aim is to empirically gauge whether each center and department is interdisciplinary. Our approach to this measurement task is more akin to qualitative content analysis than traditional bibliometric measures, which fail to grasp complexity (Cassi et al. 2014: 1873) and “leave considerable gaps in understanding” (Wagner et al. 2011). To distinguish interdisciplinary units from others, we rely on semi-supervised machine learning, a guided form of natural language processing that translates text into “socially relevant data” (Evans and Aceves 2016: 29). It is useful for classifying large amounts of textual data, which is the case here: across the 157 universities under study, we collated information about 9,211 centers and 12,323 departments. The use of machine learning allows us to overcome the main limitation of analyzing the content of textual data: it is time consuming and not conducive to large-scale inquiries and systematic comparisons (Cassi et al. 2014: 1873). Machine learning is becoming more common in the social sciences (Evans and Aceves 2016), and has been used by others to identify interdisciplinary text (Evans 2016).
Semi-supervised machine learning is a two-stage process, each with multiple steps.
Stage 1 To build, train, and test a classifier requires not only programming skills but also thoughtful human input from which the machine can learn to identify and classify pieces of text. We built the classifier in Python using the scikit-learn package (in our case, a linear support vector classifier, LinearSVC, proved ideal). The classifier was trained (i.e., the machine ‘learns’) from two related forms of human input: (1) a list of features that we deem indicative of interdisciplinarity, which encourage (but do not require) the classifier to assign an interdisciplinary classification; and (2) example pieces of text that we manually coded as interdisciplinary or not (i.e., the labeled dataset).
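The training setup can be sketched in a few lines of scikit-learn. This is an illustrative reconstruction rather than the authors’ actual code: the unit names, labels, and feature settings below are all hypothetical assumptions.

```python
# Illustrative sketch (not the authors' pipeline): training a linear SVC
# on manually labeled unit names. All data here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labeled examples: 1 = interdisciplinary, 0 = not.
texts = [
    "Department of Sociology and Anthropology",
    "Center for Interdisciplinary Biomedical Research",
    "Institute for Gender and Women's Studies",
    "Center for Integration of Bio and Chem Sciences",
    "Department of Economics",
    "Department of Chemistry",
    "Department of English",
    "Center for Economic Research",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Character n-grams within word boundaries let disciplinary stems such as
# 'bio' and 'chem' act as features, alongside terms like 'interdisciplinary'.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 6)),
    LinearSVC(),
)
clf.fit(texts, labels)

pred = clf.predict(["Department of Physics"])  # binary classification per unit
```

In practice the hand-curated feature list described above would be injected alongside (or in place of) automatically extracted n-grams; how exactly that was done is not specified in the text.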
Both forms of human input are based on an elaborate coding scheme that we developed (see abbreviated form in “Appendix”). We based the coding scheme on approaches that others have used to identify interdisciplinary articles (Evans 2016) and units (Brint et al. 2009), on extant definitions of interdisciplinarity (Wagner et al. 2011), and on disciplinary classifications made by NSF and the Classification of Instructional Programs (CIP). The coding scheme delineates indications of interdisciplinarity that tend to appear in the name of departments and research centers. These include direct reference to the term ‘interdisciplinarity’ or related terms like ‘integration’ and ‘synthesis,’ reference to two or more disciplines or disciplinary stems (like ‘Bio’ and ‘Chem,’ ‘Geophysics,’ and ‘Department of Sociology and Anthropology’), and reference to one of the area/global/ethnic studies fields identified as interdisciplinary by Brint et al. (2009).
In addition to being programmed directly into the classifier as suggestive ‘features,’ the coding scheme guides our manual classification of example pieces of text and the resulting labeled dataset. These example pieces of text are only used to train and then test the classifier (which is then used to classify the real data of interest); they therefore cannot be a subset of the real textual data of interest. To construct a labeled datafile for training and testing purposes, we gathered the same textual data (i.e., research center names and subjects, and department names) for six universities just outside our population of interest in terms of research capacity.Footnote 9 Research team members labeled (i.e., coded) these pieces of text manually, and we reconciled differences jointly. Inter-coder reliability for manual coding of RCs was 88%. The remaining 12% of cases on which we did not agree did not help train the classifier, as they were inherently too ambiguous to serve as good examples.
Following protocol, the labeled dataset was divided so the data the machine was ‘trained’ on is not identical to the data used to ‘test’ it (i.e., assess its accuracy). Three quarters of the labeled dataset was used as input to help train a ‘machine’ (i.e., computer) to ‘learn’ what is and is not an interdisciplinary piece of text. The remaining quarter of the labeled dataset was used to test the accuracy of the classifier. This step produced performance information for the classifier: for example, of the 135 research centers and departments that we (manually) deemed interdisciplinary, the classifier tagged 122 as interdisciplinary, a recall of roughly 90%.
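The split protocol and the reported figure can be illustrated with a brief sketch; the texts and labels below are placeholders, and only the 135/122 figure comes from the validation exercise described above.

```python
# Sketch of the 75/25 train/test protocol (placeholder data).
from sklearn.model_selection import train_test_split

texts = ["unit %d" % i for i in range(200)]  # stand-in labeled texts
labels = [i % 2 for i in range(200)]         # stand-in binary labels

# Three quarters for training, one quarter held out for testing.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0
)

# The reported figure: of 135 units humans coded interdisciplinary,
# the classifier tagged 122.
rate = 122 / 135
print(round(rate, 3))  # 0.904
```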
Stage 2 With an accurate classifier in hand, the labeled dataset was set aside, and the second stage was implemented: we ran the classifier on the textual data (9,211 centers and 12,323 departments) from the 157 universities under study (i.e., our real data). The output from this second stage is a list of all of the pieces of text (either RC names + subjects, or department names) of interest to us, and a binary variable indicating whether the classifier deemed each interdisciplinary or not.
We validate these classifications by making comparisons to other measures. To validate research center classifications, we build upon Boardman and Corley (2008) and Bozeman and Boardman (2013), who characterize centers by the number of departments that affiliated faculty members represent. We selected a small set of well-known interdisciplinary centers that are often discussed in the literature (e.g., the Broad Institute at MIT), and for each we selected a comparable center at the same university that was not classified as interdisciplinary and had sufficient information available online, with an eye toward representing different disciplines. For each center, we determined the department affiliation of each affiliated faculty member (see Table 1). At the centers that our classifier deemed interdisciplinary, there are at least 16 departments represented, and no more than 30% of affiliated faculty hail from the modal department. In contrast, among centers that our classifier did not tag as interdisciplinary, there are only 1–3 departments represented, and more than three-quarters of affiliated faculty hail from the modal department. To validate department classifications, we draw on Jacobs and Frickel’s (2009) approach to identify interdisciplinary departments by looking at the fields in which member faculty received their Ph.D. For one of our own (large, public, R1) universities, we closely researched a few fields that Brint et al. (2009) and our classifier tagged as interdisciplinary and not (see Table 2). We find that interdisciplinary departments, like Gender and Women’s Studies, have faculty with Ph.D.s from more than a handful (5–10) of fields, and in only one case were more than half trained in the same modal field (27–55%). In contrast, departments that were not tagged as interdisciplinary, such as Physics and Economics, have faculty with Ph.D.s from fewer than a handful (1–5) of fields, and almost all of them (81–100%) were trained in the same modal field.
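The affiliation-based checks just described amount to two simple statistics per unit: the number of departments (or Ph.D. fields) represented, and the modal department’s share of the roster. A sketch, using invented faculty rosters:

```python
# Two validation statistics per unit: how many departments are represented
# among affiliated faculty, and what share the modal department holds.
from collections import Counter

def affiliation_profile(departments):
    """Return (departments represented, modal department's share of faculty)."""
    counts = Counter(departments)
    return len(counts), max(counts.values()) / len(departments)

# Hypothetical faculty rosters.
id_center = ["Biology", "Computer Science", "Statistics",
             "Sociology", "Chemistry", "Physics"]
mono_center = ["Physics", "Physics", "Physics", "Chemistry"]

print(affiliation_profile(id_center))    # many departments, low modal share
print(affiliation_profile(mono_center))  # few departments, high modal share
```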
As an additional validation, we ran the list of 39 fields deemed interdisciplinary by Brint et al. (2009) through our classifier, and 92.3% were classified as interdisciplinary. Combined, these comparisons confirm the accuracy of our classifier and attest to the validity of our classifications of both research centers and departments.
Because our ultimate aim is not to classify a specific RC or department, but to obtain the aggregate fraction of each university’s centers (and departments) that are interdisciplinary, we applied the correction developed by Hopkins and King (2010). Hopkins and King point out that, when machine learning has adequately high precision rates, any single document is fairly likely to be classified correctly; however, if the classifier is more likely to make mistakes when classifying certain types of documents (for example, if it systematically underestimates interdisciplinarity), the aggregate-level ratio will be biased. The correction they developed adjusts for this and produces bias-free aggregate-level ratios. We implement the correction in Stata 14 for each university in our sample, separately for each type of text (Footnote 10).
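The intuition behind the correction can be illustrated for the binary case: the observed aggregate proportion is a known mixture of the true proportions, weighted by the classifier's error rates, and that mixture can be inverted. This is a simplified sketch of the idea; Hopkins and King's estimator generalizes it to many categories, and the rates below are made up.

```python
def corrected_proportion(p_observed, sensitivity, specificity):
    """Misclassification-corrected aggregate proportion (binary case).

    Solves  p_obs = sens * p_true + (1 - spec) * (1 - p_true)  for p_true.
    A simplified, two-category version of the Hopkins-King idea.
    """
    return (p_observed + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Hypothetical rates: the classifier flags 55% of a university's centers as
# interdisciplinary, with 90% sensitivity and 85% specificity on labeled data.
p_true = corrected_proportion(0.55, 0.90, 0.85)  # ≈ 0.533
```

With a perfect classifier (sensitivity = specificity = 1), the correction leaves the observed proportion unchanged.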
With corrected estimates of the number of interdisciplinary RCs and departments, we developed the primary indicators of interest (see Table 3). Because new departments often reflect a “blending of previously distinct fields” and yet the previously distinct fields rarely disappear, the sheer number of departments can indicate interdisciplinarity (National Academies of Science 2005: 19). We therefore include the number of departments housed at each university (#Depts) as an indicator of universities’ structural commitment to IDR. We also include the ratio of research centers to departments (#RCs/Depts), a measure we develop as a logical extension of Jacobs and Frickel’s (2009) work. Additional measures move beyond sheer prevalence to consider the nature of the units, i.e., whether they are interdisciplinary or not, as assessed with the machine learning classifier. For example, we calculated the ratio of interdisciplinary research centers to departments (#idRCs/Depts). We also calculated the fraction of RCs that are interdisciplinary (#idRCs/RCs) and the fraction of departments that are interdisciplinary (#idDepts/Depts).
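Assembling the Table 3 indicators from the corrected counts is straightforward arithmetic; the counts below are hypothetical.

```python
def idr_indicators(n_depts, n_rcs, n_id_depts, n_id_rcs):
    """Compute the Table 3 indicators from (corrected) unit counts."""
    return {
        "#Depts": n_depts,
        "#RCs/Depts": n_rcs / n_depts,
        "#idRCs/Depts": n_id_rcs / n_depts,
        "#idRCs/RCs": n_id_rcs / n_rcs,
        "#idDepts/Depts": n_id_depts / n_depts,
    }

# A hypothetical university with 80 departments (40 interdisciplinary)
# and 65 research centers (45 interdisciplinary).
ind = idr_indicators(n_depts=80, n_rcs=65, n_id_depts=40, n_id_rcs=45)
```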
Confirmatory factor analysis
The indicators shown in Table 3 serve as the basis for our main analysis: a confirmatory factor analysis (CFA) model. Given that university commitment to IDR has not been measured before, and that multiple indicators are available, we conceive of it as a latent (unobserved) variable that is best tapped via confirmatory factor analysis (Bollen 1989). Because our measurement approach is driven by theory and prior literature on the topic, and because we designed our data collection efforts to obtain these specific indicators, we rely on confirmatory rather than exploratory factor analysis. Confirmatory factor analysis has been used successfully by others in the field of scientometrics to develop new measures (Horta and Santos 2016). We estimate and compare a variety of CFA models to identify the best way to measure structural commitment to IDR. Predictions from the best fitting model allow us to calculate a factor score for each university—essentially a single value revealing its structural commitment to IDR relative to other universities.
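Once loadings are estimated, a factor score for each university can be computed with the standard regression method, f̂ = Λ′S⁻¹(x − x̄). The sketch below uses made-up loadings and random stand-in data; the paper's exact estimation settings are not reproduced here.

```python
import numpy as np

# Regression-method factor scores for a one-factor model.
# 'loadings' (Lambda) and the indicator data X are hypothetical stand-ins.
loadings = np.array([1.00, 0.45, 0.60])   # first loading fixed at 1 for scaling
rng = np.random.default_rng(0)
X = rng.normal(size=(157, 3))             # 157 universities x 3 indicators

S = np.cov(X, rowvar=False)               # indicator covariance matrix
weights = np.linalg.solve(S, loadings)    # S^{-1} Lambda
scores = (X - X.mean(axis=0)) @ weights   # one factor score per university

ranking = np.argsort(-scores)             # relative standing only; the raw
                                          # score values are not interpretable
```

The last comment mirrors the point made below: only a university's position relative to others is meaningful.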
Descriptive comparisons
The last part of our analysis examines how university commitment to IDR varies across university characteristics. This allows us to determine if, as Brint et al.’s (2009) work suggests, interdisciplinary departments are more prevalent at larger, arts-and-science-oriented universities on either the east or west coast and at wealthier universities. We compile such characteristics from the Integrated Postsecondary Education Data System (IPEDS) and the Higher Education Research and Development Survey (HERD) for the 2012–2013 academic year. We construct binary variables indicating five regions of the United States, urbanicity, land grant status, university type (public or private), research capacity based on the 2010 Carnegie Classification (“very high research activity” vs. “high research activity”), an affiliated medical school or hospital, and technical institution (i.e., universities with ‘tech’ in their name). We use continuous measures of total spending on R&D (in raw dollars), total revenues (in raw dollars), and the size of the student body (the number of undergraduate and graduate FTE students). We then examine bivariate patterns (i.e., t tests and correlations) between these university characteristics and the factor score representing university commitment to IDR.
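These bivariate checks are standard; a sketch with random stand-ins for the factor score and the IPEDS/HERD variables (all data below are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
factor_score = rng.normal(size=157)                    # stand-in for the CFA score
is_public = rng.integers(0, 2, size=157).astype(bool)  # a binary characteristic
total_rd = rng.lognormal(mean=18, sigma=1, size=157)   # hypothetical R&D dollars

# t test: does mean commitment differ across a binary characteristic?
t, p = stats.ttest_ind(factor_score[is_public], factor_score[~is_public])

# correlation: commitment vs. a continuous characteristic
r, p_r = stats.pearsonr(factor_score, total_rd)
```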
Results
Descriptively, we find that universities vary widely across our indicators of structural commitment to IDR (see Table 4). The average number of departments is 79, but values range from a low of 8 to a high of 163. When we look at the ratio of research centers to departments, we find that on average, universities have 81 research centers for every 100 departments on campus (mean #RCs/Depts = 0.81); but the median of 64 centers per 100 departments indicates a strong right skew. Indeed, one university (Rockefeller University) has over 600 research centers per 100 departments. When we examine the ratio of interdisciplinary research centers (classified via machine learning) to departments, we see almost the same amount of variation but lower mean (0.52) and median (0.42) values. These two ratios, however, are correlated at 0.96, indicating that universities with a high ratio of centers to departments also tend to have high ratios of interdisciplinary centers to departments.
Contrary to assumptions in the literature, not all research centers are interdisciplinary, and—as Jacobs (2013) and Brint et al. (2009) have made clear—not all departments are monodisciplinary. Results from the semi-supervised machine learning effort, combined with the Hopkins and King (2010) correction, reveal that while at some universities, all centers are interdisciplinary, at others, as few as one-third are interdisciplinary (see Table 4). On average, only about two-thirds of research centers at a university are interdisciplinary (mean = 66%). With regard to departments, only about half are monodisciplinary and half are interdisciplinary, on average (mean = 49%), with a range between 32 and 85%. Descriptive univariate statistics also reveal a couple of outliers and (relatedly) some excessive kurtosis; however, excluding the culprit observations does not change the multivariate results we present below.
The ratio of centers to departments, a logical extension of Jacobs and Frickel’s (2009) work, does not appear to be a critical component of university commitment to IDR. This is evident from the initial CFA models we specified (see Table 5). Recall that universities’ structural commitment to IDR is the latent, exogenous variable of interest, and we are trying to identify the best way to measure this concept. Following standard practice, we scaled the latent variable by setting the factor loading for one indicator (#Depts) to equal 1 in all models. In Model 1, we include two indicators: the ratio of centers to departments (#RCs/Depts) and the sheer number of departments (#Depts). Although relevant model fit statistics are adequate, the coefficient for #RCs/Depts is small, negative, and fails to reach statistical significance. In Model 2, we use the refined ratio of the number of interdisciplinary RCs to the number of departments, and we see slight improvement in model fit (the AIC and BIC decline slightly) but the coefficient still fails to reach statistical significance, and remains negative. The ratio of centers to departments, even when restricted to interdisciplinary centers, is not a good indicator of structural commitment to IDR.
As we expected, the nature (and not merely the relative prevalence) of both types of units is critical to adequately gauging universities’ structural commitment to IDR. This is evident empirically in Table 5, Model 3, which adds the two ratios, #idRCs/RCs and #idDepts/Depts, that we derived from the machine learning effort to identify interdisciplinary units. This model is depicted graphically in Fig. 1. Compared to previous models, this model fits the data better, as assessed by the AIC and BIC, which become smaller (more negative), and by the 1 − RMSEA, TLI, and CFI, which all reach or come close to their maximum value of 1 (perfect fit). The non-significant chi-square test statistic (T) also indicates a close-to-perfectly fitting model. All coefficients reach traditional levels of statistical significance. The ratio of centers to departments remains negative, suggesting that higher levels of structural commitment to IDR are associated with lower ratios of centers to departments, likely because the sheer number of departments is so indicative (as many new departments are interdisciplinary). Our more refined indicators, generated with the help of machine learning to identify interdisciplinary centers and departments, are positive and reach traditional levels of statistical significance. As expected, the fraction of RCs that are interdisciplinary and the fraction of departments that are interdisciplinary serve as important indicators of universities’ structural commitment to IDR. This is especially the case for the latter (#idDepts/Depts): the R2 value for this ‘equation’ (recall that each indicator is an outcome variable in CFA) is 0.45, compared to only 0.03 for #idRCs/RCs.
Results from the multivariate confirmatory factor analysis model reveal that there is wide variation in universities’ structural commitment to IDR. From the best-fitting model (Table 5, Model 3), we compute a continuous factor score, which is a numerical value that indicates each university’s relative standing on the latent variable “Structural Commitment to IDR.” It is important to note that the absolute value of a factor score is, by itself, uninformative; it simply reveals the relative positioning of universities in terms of their structural commitment to IDR. The distribution of factor scores is displayed in Fig. 2.
We are intrigued that structural commitment to IDR does not align perfectly with common perceptions of which universities are more committed to IDR. Table 6 reveals the composite factor score as well as values on all indicators for 10 universities that are often highlighted in the literature. Research by Biancani et al. (2014) led us to believe that Stanford was committing many resources to interdisciplinary endeavors, but its factor score is at the lower end of the distribution, as is MIT’s. Duke University was the “first university to embrace interdisciplinary research and teaching as an explicit strategy of intellectual advance” (Brint et al. 2009: 177), but this strategy has not translated into structural changes; Duke has a good number of departments, but fewer than half of them are interdisciplinary in nature. Arizona State University, whose conversion of departments into schools is upheld as a model for interdisciplinarity, falls almost exactly in the middle of the distribution, suggesting an average structural commitment to IDR. Harvard is near the top in terms of its structural commitment to IDR, and this appears to be driven by its large number of departments, the majority of which are interdisciplinary (75%). The University of California at Riverside, which has pursued cluster hiring extensively, also appears to have a strong structural commitment to IDR. These results provide support for Turner et al.’s (2015) finding that the strategies pursued depend on the particular political-academic context of each university. We note that the distribution of factor scores is largely defined by the fraction of departments that are interdisciplinary (as indicated by the high R2 value for this indicator in Model 3, and by the fact that its values are ranked in almost exactly the same order as the composite factor score), attesting to the foundational importance of Brint et al.’s (2009) work.
What kinds of universities are most committed to IDR?
The universities most structurally committed to IDR tend to be the most prestigious, top-tier universities: those that are members of the AAU, and those that have high revenues as well as high spending on research and development (see Table 7). This is perhaps not surprising, because the structural commitment we examine here requires resources to establish and maintain interdisciplinary departments and centers; in contrast, a more cultural form of commitment to IDR, manifested in written documents like strategic plans (Harris 2010), likely requires fewer material resources. The bivariate relationships presented in Table 7 also reveal that universities with medical schools and affiliated hospitals (characteristics that tend to be correlated positively with revenues, expenditures, and status) are likely to be more structurally committed to IDR. Although Jacobs (2013) finds that universities with medical schools typically have more RCs, and we find this in our data too, these RCs are not particularly interdisciplinary (Footnote 11); it appears to be the sheer number and interdisciplinary nature of the departments that boosts their structural commitment to IDR (recall the relative R2 values from Table 5). A comparison of factor scores also reveals that technical universities (e.g., Georgia Tech, MIT, CalTech, RIT) appear to be less structurally committed to IDR; this may be attributable to the greater amount of industry-sponsored contract research that takes place at such institutions, where the focus is more on spanning domains of research (academic and commercial) than disciplines per se (e.g., Barringer and Slaughter 2016; Mathies and Slaughter 2013; see Footnote 12). We also find that universities in the southern United States tend to be less committed to IDR. Compared to the other four regions, universities in the south have fewer RCs per department overall.
Further investigation reveals that the distribution of RCs at southern universities is also distinct, with these universities having a higher percentage of RCs in computers and mathematics, engineering and technology, business and economics, and government, but fewer RCs in agriculture, biology and environment, astronomy and space, social sciences, and humanities—precisely where interdisciplinary themes like Environmental Sciences, American Indian Studies, and Gender Studies are common.
Discussion
In this paper we used vast amounts of textual data and uniquely combined cutting-edge machine learning techniques with factor analytic methods to develop the first measure of universities’ structural commitment to IDR. Our focus on structural commitments, which pertain to units that house research activity, led us to collect textual data on each university’s constituent departments and research centers (i.e., their names and keyword descriptors). Previous research suggested that the number of units and their relative prevalence could serve as indicators of universities’ structural commitment to IDR (Footnote 13). Moving beyond prevalence to examine the nature of such units, we used semi-supervised machine learning techniques in the Python programming language to develop a classifier that distinguished interdisciplinary units (centers and departments) from other units. We aggregate these binary classifications to the university level by calculating the fraction of centers that are interdisciplinary and the fraction of departments that are interdisciplinary, and these continuous variables serve as indicators in the confirmatory factor analysis models that we specified and compared.
The results indicate that the best way to measure universities’ structural commitment to IDR is to incorporate information about not only the prevalence but also the nature of academic units: whether they are interdisciplinary or not. This is because, as our analyses show, not all centers are interdisciplinary and not all departments are monodisciplinary. Rather, the best fitting model reveals the critical importance of the ratios (#idRCs/RCs and #idDepts/Depts) that we develop and apply broadly with the assistance of machine learning. Universities whose centers and especially departments tend to be interdisciplinary show the greatest commitment to IDR in structural terms. Given the costs associated with developing, maintaining, and modifying organizational structures to serve interdisciplinary research efforts (Harris 2010; Harris and Holley 2008), it is not surprising that we find that it is the more well financed, research-oriented, and prestigious universities that show the greatest structural commitment to IDR.
These findings have implications for scholarship and for research policy. The science of science scholarship will ideally steer away from a focus on the sheer quantity of academic units (e.g., absolute and relative numbers of centers and departments) and toward a greater understanding of their qualitative nature. As the first author argued in a recent policy report to NSF’s SciSIP program (Leahey 2018), the scholarship on research centers tends to focus on a single case or single type of center, rather than examine the types and qualities of those initiatives. While interdisciplinarity is the quality that interests us here, other characteristics likely matter too. For example, a unit’s size, location on campus, and design (Downey et al. 2016; Kabo et al. 2014) likely influence multifaceted collaboration (Jha and Welch 2010), productivity (Sabharwal and Qian 2013), and interdisciplinary graduate work (Millar 2013). A more nuanced understanding of the nature of academic units will permit finer grained analyses and specific insights to inform and tailor research policy, which currently invests in all kinds of centers without attending to the type and degree of integration that is expected to result.
In this paper we restricted our focus to universities’ structural commitment to IDR, but future research should explore whether and how this is related to other forms of commitment. The distinction between our findings about structural commitment and lay perceptions of (perhaps more) cultural forms of commitment paves the way for future studies that could compare the two forms, and attests to the widespread belief that some universities are symbolically (but not substantively or structurally) committed to IDR. We also suspect that commitment to IDR as manifested in the nature of departments and centers is quite different from commitment to IDR as manifested in faculty hires (e.g., cluster hires and interdisciplinary lines) and internal grant mechanisms designed to foster cross-college interaction. We suggest, and will examine in subsequent papers, that different decision-making bodies are likely shaping different manifestations of commitment to IDR (e.g., administrators may have more influence over department and centers, whereas faculty members have more say over hires), and that they are not always committed to the same degree. It may very well be that manifestations of commitment to IDR occur at one level or another, but not both simultaneously. Perhaps commitment to IDR needs to start somewhere, and then takes time to diffuse to other levels and decision-making bodies.
After developing the measure of structural commitment to IDR, we assessed which types of universities are more committed to IDR, but this is just an initial and exploratory step. Given its relevance to science policy and research investments, we are eager to use this measure of universities’ structural commitment to IDR in several follow-up studies that examine both the determinants and consequences of universities’ structural commitment to IDR. Gaining a better understanding of how university level resources and internal dynamics influence structural commitment to IDR would help university administrators and leaders select the best mechanisms for fostering IDR. An examination of whether and how university commitment to IDR, as measured here, affects critical outcomes like research productivity, patenting, funding levels, and prestige would help university administrators, science policy makers, and grant agencies know what can be reasonably expected from IDR.
Notes
Following Klein’s (1990) lead, some scholars have taken pains to distinguish between interdisciplinary, multidisciplinary, and cross-disciplinary efforts (Holley 2009). However, like Brint et al. (2009), Porter et al. (2007), and Wagner et al. (2011), we do not distinguish them empirically and acknowledge that our measure of IDR may very well include multidisciplinary and cross-disciplinary efforts.
Porter and colleagues’ integration measure is equivalent to the Rao-Stirling index (Rafols and Meyer 2010). To gauge a paper’s degree of interdisciplinarity, it focuses on the paper’s works cited and incorporates not only the number of fields cited (variety), but also the uniformity of the distribution of fields (evenness) and their relatedness (dissimilarity). Papers whose bibliographies represent a large number of fields that are evenly distributed and unrelated have high levels of interdisciplinarity.
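The Rao-Stirling index described in this note sums p_i p_j d_ij over pairs of cited fields, where p is each field's share of the bibliography and d is the distance between fields. A small sketch with invented shares and distances (this version counts each unordered pair once; some definitions sum over ordered pairs, doubling the value):

```python
import itertools

def rao_stirling(proportions, distance):
    """Rao-Stirling diversity: sum of p_i * p_j * d_ij over field pairs.

    Combines variety and evenness (via the proportions p) with
    dissimilarity (via the distances d). Inputs here are illustrative.
    """
    fields = list(proportions)
    return sum(
        proportions[i] * proportions[j] * distance[frozenset((i, j))]
        for i, j in itertools.combinations(fields, 2)
    )

p = {"bio": 0.5, "cs": 0.3, "soc": 0.2}        # shares of cited fields
d = {frozenset(("bio", "cs")): 0.6,            # hypothetical field distances
     frozenset(("bio", "soc")): 0.8,
     frozenset(("cs", "soc")): 0.7}
score = rao_stirling(p, d)  # 0.5*0.3*0.6 + 0.5*0.2*0.8 + 0.3*0.2*0.7 = 0.212
```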
Leahey et al. (2017) contend that the pooled approach better captures multidisciplinarity (simply drawing from two or more fields), and the averaging approach better captures interdisciplinarity (actually integrating two or more fields), but conceptualization and measurement on this front deserves more attention. Like others (Jacobs and Frickel 2009), we do not distinguish between the two.
We exclude internal grants partly for pragmatic reasons: our initial investigations revealed that there is no comprehensive and standardized source for internal grants across the 157 universities we study. We exclude cluster hiring initiatives to boost measurement validity: while cluster hiring captures commitment to (and not just engagement with) IDR, some cluster hiring initiatives are intended to foster diversity more than interdisciplinarity (Staff 2015). Given our focus on research, we also exclude university efforts to promote interdisciplinary education and training, which others have studied (Brint et al. 2009; Hackett and Rhoten 2009; Holley 2009, 2015). Moreover, most of the science policy interest in interdisciplinarity revolves around research rather than teaching.
We find no significant differences between public and private universities in their average number of departments (80 vs. 73, p = 0.18) or in the fraction of their departments that are interdisciplinary (0.49 vs. 0.49, p = 0.92). We also compared universities that are in the bottom percentile (10th, 20th, and 30th) on inflation adjusted total revenues per FTE to more well-resourced universities. Regardless of the cut-off used, we found that poorer universities do indeed have fewer departments. Part of this is attributable to the fact that only one of the poorer universities has land grant status, and land grant universities have more departments given their broad mission. However, poorer universities do not have a greater relative number of interdisciplinary departments. So, although poor universities may well be merging departments for efficiency’s sake, the resulting departments are not more likely to be interdisciplinary. This suggests that when they do consolidate, they merge cognate departments together.
Jacobs (2013: 215) notes, for example, that Arizona State University’s reorganization actually resulted in the division of departments like Sociology, whose members were dispersed across the School of Social and Family Dynamics and the School of Evolution and Social Change.
To ascertain this, we selected four universities from our sample and identified their research centers from two sources: the Gale Directory and manual web searches. We chose universities that differed by region, sector, and prestige. We manually coded whether centers from each search technique were (1) active, (2) engaged in research, and (3) interdisciplinary. We then made formal comparisons between the two sources using cross-tabulations and chi-square test statistics. We found that web searches were more subject to systematic differences in reporting of research centers across universities, and resulted in the inclusion of more defunct or non-research-oriented centers, compared to Gale searches. We found no significant differences in the association between search technique and the likelihood that a center is interdisciplinary. Thus, we determined Gale to be the preferable source of information, albeit more conservative than web searches. Details about these validity checks are available upon request.
We had hoped to rely on Brint et al.’s (2009) classification (which counts fields as interdisciplinary if they drew faculty from two or more disciplines and if they were classified as interdisciplinary by more than two-thirds of the universities they studied), but their research team could not locate the list. We considered supplementing the department name with additional text from faculty members’ publications, but this would likely tap engagement rather than commitment to interdisciplinary research. We also considered supplementing the department name with additional text from department websites, but this proved challenging to do for the year of interest (2012–2013) even with the Internet Archive’s WayBack Machine. In the end, these potential supplements proved unnecessary because we were able to build a precise classifier based on department name alone.
The six schools are the Colorado School of Mines, Illinois Institute of Technology, University of North Carolina at Greensboro, Wake Forest University, George Mason University, and University of Texas at Dallas.
For example, for RCs, we compute the proportion of RCs that fall in each classification category (interdisciplinary and not interdisciplinary). Then, we use the precision rate determined in the first stage of machine learning to calculate the likelihood of misclassification for each category. That is, for a binary outcome, we calculate the percent of (manually coded) interdisciplinary RCs that were correctly and incorrectly classified by the machine, and the percent of non-interdisciplinary RCs that were correctly and incorrectly classified. Then, we use these percentages to correct the raw ratios. For example, if 36% of departments were classified as not interdisciplinary, but our precision rates tell us that 2% of departments coded as ‘interdisciplinary’ should be classified as ‘not interdisciplinary,’ we add those 2 percentage points to the 36%, yielding a corrected 38%.
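The percentage-point adjustment described in this note can be sketched as follows; the numbers follow the example above and are otherwise hypothetical.

```python
def adjust_category_shares(raw, error_rates):
    """Shift raw classification shares by measured misclassification rates.

    'error_rates' maps (mislabeled_as, true_category) pairs to percentage
    points that should move between categories, as in the worked example
    above (2 points move from 'idr' to 'not_idr'). Values are illustrative.
    """
    corrected = dict(raw)
    for (src, dst), pts in error_rates.items():
        corrected[src] -= pts
        corrected[dst] += pts
    return corrected

raw = {"idr": 64.0, "not_idr": 36.0}                      # raw shares, in %
corrected = adjust_category_shares(raw, {("idr", "not_idr"): 2.0})
# corrected["not_idr"] is 38.0, matching the example in the note
```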
Universities with medical schools have 70 RCs on average, a significant difference from universities without medical schools, which have 47 RCs on average. There is no significant difference in the fraction of RCs that are interdisciplinary between these two groups.
Technical universities’ lower structural commitment to IDR cannot be attributed to any bias on the part of our classifier to excessively tag humanities and social science fields as interdisciplinary. Indeed, when we examine the percent of research centers tagged as interdisciplinary across Gale’s macro fields, we find that the highest values occur in Engineering and Technology, Physical and Earth Sciences, Astronomy and Space Sciences, and Computers and Mathematics, whereas lower values occur in Government and Public Affairs, Law, and Education.
Recall that we view the presence and nature of these organizational units, regardless of the motivation behind their development, as indicative of structural commitment to IDR.
References
Abbott, A. (1999). Department and discipline: Chicago sociology at one hundred. Chicago: University of Chicago Press.
Abbott, A. (2001). Chaos of disciplines. Chicago: University of Chicago Press.
Adams, J., & Light, R. (2014). Mapping interdisciplinary fields: Efficiencies, gaps and redundancies in HIV/AIDS research. PLoS ONE, 9(12), e115092. https://doi.org/10.1371/journal.pone.0115092.
Barringer, S. N. (2016). The changing finances of public higher education organizations: Diversity, change and discontinuity. In E. P. Berman & C. Paradeise (Eds.), Research in the sociology of organizations: The university under pressure (Vol. 46, pp. 223–263). Bingley, United Kingdom: Emerald.
Barringer, S. N., & Riffe, K. A. (2018). Not just figureheads: Trustees as microfoundations of higher education institutions. Innovative Higher Education, 43(3), 1–16. https://doi.org/10.1007/s10755-018-9422-6.
Barringer, S. N., & Slaughter, S. (2016). University trustees and the entrepreneurial university: Inner circles, interlocks, and exchanges. In S. Slaughter & B. J. Taylor (Eds.), Higher education, stratification, and workforce development: Competitive advantage in Europe, the US, and Canada (pp. 151–171). Dordrecht, Netherlands: Springer.
Bercovitz, J., & Feldman, M. (2011). The mechanisms of collaboration in inventive teams: Composition, social networks, and geography. Research Policy, 40(1), 81–93. https://doi.org/10.1016/j.respol.2010.09.008.
Biancani, S., McFarland, D. A., & Dahlander, L. (2014). The semiformal organization. Organization Science, 25(5), 1306–1324.
Birnbaum, P. H. (1981). Integration and specialization in academic research. Academy of Management Journal, 24(3), 487–503.
Boardman, C., & Corley, E. A. (2008). University research centers and the composition of research collaborations. Research Policy, 37(5), 900–913. https://doi.org/10.1016/j.respol.2008.01.012.
Bollen, K. A. (1989). A new incremental fit index for general structural equation models. Sociological Methods and Research, 17(3), 303–316.
Bozeman, B., & Boardman, C. (2013). Academic faculty in university research centers: Neither capitalism’s slaves nor teaching fugitives. The Journal of Higher Education, 84(1), 88–120. https://doi.org/10.1353/jhe.2013.0003.
Brint, S. G., Turk-Bicakci, L., Proctor, K., & Murphy, S. P. (2009). Expanding the social frame of knowledge: Interdisciplinary, degree-granting fields in American colleges and universities, 1975–2000. The Review of Higher Education, 32(2), 155–183. https://doi.org/10.1353/rhe.0.0042.
Cassi, L., Mescheba, W., & de Turckheim, E. (2014). How to evaluate the degree of interdisciplinarity of an institution? Scientometrics, 101(3), 1871–1895.
Chakraborty, T. (2018). Role of interdisciplinarity in computer sciences: Quantification, impact and life trajectory. Scientometrics, 114(3), 1011–1029. https://doi.org/10.1007/s11192-017-2628-z.
Chen, S., Arsenault, C., Gingras, Y., & Larivière, V. (2015). Exploring the interdisciplinary evolution of a discipline: The case of biochemistry and molecular biology. Scientometrics, 102(2), 1307–1323. https://doi.org/10.1007/s11192-014-1457-6.
Coleman, D. L., Spira, A., & Ravid, K. (2013). Promoting interdisciplinary research in departments of medicine: Results from two models at Boston University School of Medicine. Transactions of the American Clinical and Climatological Association, 124, 275–282.
Downey, G. J., Feinstein, N. W., Kleinman, D. L., Peterson, S., & Fukuda, C. (2016). The frictions of interdisciplinarity: The case of the Wisconsin Institutes for Discovery. In S. Frickel, M. Albert, & B. Prainsack (Eds.), Investigating interdisciplinary collaboration: Theory and practice across disciplines. New Brunswick: Rutgers University Press.
Duke University. (2017). Interdisciplinary studies at Duke University: Contact us. Retrieved Oct 01, 2018. https://sites.duke.edu/interdisciplinary/about/contacts/.
Evans, E. D. (2016). Measuring interdisciplinarity using text. Socius, 2, 1–18. https://doi.org/10.1177/2378023116654147.
Evans, J. A., & Aceves, P. (2016). Machine translation: Mining text for social theory. Annual Review of Sociology, 42, 21–50. https://doi.org/10.1146/annurev-soc-081715-074206.
Flaherty, C. (2016). Cluster-hiring cluster &%*#?. Inside higher ed news. 1 February 2016. https://www.insidehighered.com/news/2016/02/01/uc-riverside-faculty-survey-suggests-outrage-clusterhiring-initiative. Accessed 10 Jan 2018.
Frickel, S. (2004). Chemical consequences: Environmental mutagens, scientist activism, and the rise of genetic toxicology. New Brunswick: Rutgers University Press.
Frickel, S., & Gross, N. (2005). A general theory of scientific/intellectual movements. American Sociological Review, 70(2), 204–232.
Geiger, R. L. (1990). Organized research units-their role in the development of university research. The Journal of Higher Education, 61(1), 1–19.
Geiger, R. L. (2013). Creating the market university: How academic science became an economic engine. American Historical Review, 118(3), 896–897. https://doi.org/10.1093/ahr/118.3.896a.
Geiger, R. L., & Sá, C. (2005). Beyond technology transfer: US state policies to harness university research for economic development. Minerva, 43(1), 1–21.
Gowanlock, M., & Gazan, R. (2013). Assessing researcher interdisciplinarity: A case study of the University of Hawaii NASA Astrobiology Institute. Scientometrics, 94(1), 133–161. https://doi.org/10.1007/s11192-012-0765-y.
Gumport, P. J., & Snydman, S. K. (2002). The formal organization of knowledge: An analysis of academic structure. The Journal of Higher Education, 73(3), 375–408.
Guston, D. H. (2000). Between politics and science: Assuring the integrity and productivity of research. Cambridge: Cambridge University Press.
Hackett, E. J., & Rhoten, D. (2009). The snowbird charrette: Integrative interdisciplinary collaboration in environmental research design. Minerva, 47(4), 407–440.
Harris, M. (2010). Interdisciplinary strategy and collaboration: A case study of American research universities. Journal of Research Administration, XLI, 22–34.
Harris, M. S., & Holley, K. (2008). Constructing the interdisciplinary ivory tower: The planning of interdisciplinary spaces on university campuses. Planning for Higher Education, 36(3), 34–43.
Holley, K. (2009). The challenge of an interdisciplinary curriculum: A cultural analysis of a doctoral-degree program in neuroscience. Higher Education, 58(2), 241–255. https://doi.org/10.1007/s10734-008-9193-6.
Holley, K. (2015). Doctoral education and the development of an interdisciplinary identity. Innovations in Education and Teaching International, 52(6), 642–652. https://doi.org/10.1080/14703297.2013.847796.
Hopkins, D. J., & King, G. (2010). A method of automated nonparametric content analysis for social science. American Journal of Political Science, 54(1), 229–247. https://doi.org/10.1111/j.1540-5907.2009.00428.x.
Horta, H., & Santos, J. M. (2016). An instrument to measure individuals’ research agenda setting: The multi-dimensional research agendas inventory. Scientometrics, 108(3), 1243–1265. https://doi.org/10.1007/s11192-016-2012-4.
Ikenberry, S., & Friedman, R. C. (1972). Beyond academic departments: The story of institutes and centers. San Francisco: Jossey-Bass.
Jacobs, J. A. (2013). In defense of disciplines: Interdisciplinarity and specialization in the research university. Chicago: University of Chicago Press.
Jacobs, J. A., & Frickel, S. (2009). Interdisciplinarity: A critical assessment. Annual Review of Sociology, 35, 43–65. https://doi.org/10.1146/annurev-soc-070308-115954.
Jaschik, S. (2014). $100 million gift for Dartmouth. Inside Higher Ed. Retrieved: Oct 01, 2018. https://www.insidehighered.com/quicktakes/2014/04/10/100-million-gift-dartmouth.
Jensen, P., & Lutkouskaya, K. (2014). The many dimensions of laboratories’ interdisciplinarity. Scientometrics, 98(1), 619–631. https://doi.org/10.1007/s11192-013-1129-y.
Jha, Y., & Welch, E. W. (2010). Relational mechanisms governing multifaceted collaborative behavior of academic scientists in six fields of science and engineering. Research Policy, 39(9), 1174–1184.
Kabo, F. W., Cotton-Nessler, N., Hwang, Y., Levenstein, M. C., & Owen-Smith, J. (2014). Proximity effects on the dynamics and outcomes of scientific collaborations. Research Policy, 43(9), 1469–1485.
Kaplan, S., Milde, J., & Cowan, R. (2017). Symbiont practices in boundary spanning: Bridging the cognitive and political divides in interdisciplinary research. Academy of Management Journal, 60(4), 1387–1414. https://doi.org/10.5465/amj.2015.0809.
Klein, J. T. (1990). Interdisciplinarity: History, theory, and practice. Detroit: Wayne State University Press.
Larivière, V., & Gingras, Y. (2010). On the relationship between interdisciplinarity and scientific impact. Journal of the American Society for Information Science and Technology, 61(1), 126–131.
Leahey, E. (2018). Science policy research report: Infrastructure for interdisciplinarity. National Science Foundation SciSIP Program. Award #1723536.
Leahey, E., Beckman, C. M., & Stanko, T. L. (2017). Prominent but less productive: The impact of interdisciplinarity on scientists’ research. Administrative Science Quarterly, 62(1), 105–139.
Leahey, E., & Blume, A. (2017). Elucidating the process: Why women patent less than men. In A. N. Link (Ed.), Gender and entrepreneurial activity (pp. 151–167). Cheltenham: Edward Elgar Publishing.
Lee, J. J. (2007). The shaping of the departmental culture: Measuring the relative influences of the institution and discipline. Journal of Higher Education Policy and Management, 29(1), 41–55.
Light, R., & Adams, J. (2017). A dynamic, multidimensional approach to knowledge production. In S. Frickel, M. Albert, & B. Prainsack (Eds.), Investigating interdisciplinary collaboration: Theory and practice across disciplines (pp. 127–147). New Brunswick: Rutgers University Press.
Lyall, C., Bruce, A., Marsden, W., & Meagher, L. (2013). The role of funding agencies in creating interdisciplinary knowledge. Science and Public Policy, 40(1), 62–71. https://doi.org/10.1093/scipol/scs121.
Mallon, W. T. (2006). The benefits and challenges of research centers and institutes in academic medicine: Findings from six universities and their medical schools. Academic Medicine, 81(6), 502–512.
Mathies, C., & Slaughter, S. (2013). University trustees as channels between academe and industry: Toward an understanding of the executive science network. Research Policy, 42(6–7), 1286–1300. https://doi.org/10.1016/j.respol.2013.03.003.
Millar, M. M. (2013). Interdisciplinary research and the early career: The effect of interdisciplinary dissertation research on career placement and publication productivity of doctoral graduates in the sciences. Research Policy, 42(5), 1152–1164.
Moody, J. (2004). The structure of a social science collaboration network: Disciplinary cohesion from 1963 to 1999. American Sociological Review, 69(2), 213–238.
Morphew, C. C., & Hartley, M. (2006). Mission statements: A thematic analysis of rhetoric across institutional type. The Journal of Higher Education, 77(3), 456–471. https://doi.org/10.1353/jhe.2006.0025.
Mugabushaka, A.-M., Kyriakou, A., & Papazoglou, T. (2016). Bibliometric indicators of interdisciplinarity: The potential of the Leinster-Cobbold diversity indices to study disciplinary diversity. Scientometrics, 107(2), 593–607. https://doi.org/10.1007/s11192-016-1865-x.
National Academies of Science, National Academy of Engineering and Institute of Medicine. (2005). Facilitating interdisciplinary research. Washington, DC: The National Academies Press.
National Research Council. (2014). In C. Sá (Ed.), Convergence: Facilitating transdisciplinary integration of life sciences, physical sciences, engineering, and beyond. Washington, DC: National Research Council.
Nickelhoff, L., & Nyatepe-Coo, E. (2012). Promoting interdisciplinary research through institutes and centers. Washington, DC: Education Advisory Board.
Owen-Smith, J. (2003). From separate systems to hybrid order: Accumulative advantage across public and private science at research one universities. Research Policy, 32(6), 1081–1104.
Panofsky, A. (2014). Misbehaving science: Controversy and the development of behavioral genetics. Chicago: University of Chicago Press.
Porter, A. L., Cohen, A. S., Roessner, J. D., & Perreault, M. (2007). Measuring researcher interdisciplinarity. Scientometrics, 72(1), 117–147.
Rafols, I., Leydesdorff, L., O’Hare, A., Nightingale, P., & Stirling, A. (2012). How journal rankings can suppress interdisciplinary research: A comparison between innovation studies and business & management. Research Policy, 41(7), 1262–1282.
Rafols, I., & Meyer, M. (2010). Diversity and network coherence as indicators of interdisciplinarity: Case studies in bionanoscience. Scientometrics, 82(2), 263–287.
Rhoten, D. (2003). A multi-method analysis of the social and technical conditions for interdisciplinary collaboration. San Francisco, CA: The Hybrid Vigor Institute.
Rhoten, D. (2005). Interdisciplinary research: Trend or transition. Items Issues, 5, 6–11.
Rhoten, D., & Parker, A. (2004). Risks and rewards of an interdisciplinary research path. Science, 306(5704), 2046.
Sá, C. M. (2008). ‘Interdisciplinary strategies’ in U.S. research universities. Higher Education, 55(5), 537–552. https://doi.org/10.1007/s10734-007-9073-5.
Sabharwal, M., & Qian, H. (2013). Participation in university-based research centers: Is it helping or hurting researchers? Research Policy, 42(6–7), 1301–1311.
Schummer, J. (2004). Multidisciplinarity, interdisciplinarity, and patterns of research collaboration in nanoscience and nanotechnology. Scientometrics, 59(3), 425–465. https://doi.org/10.1023/B:SCIE.0000018542.71314.38.
Slaughter, S., Thomas, S. L., Johnson, D. R., & Barringer, S. N. (2014). Institutional conflict of interest: The role of interlocking directorates in the scientific relationships between universities and the corporate sector. The Journal of Higher Education, 85(1), 1–35. https://doi.org/10.1353/jhe.2014.0000.
Staff, C. (2015). Hiring faculty members in groups can improve diversity and campus climate. The Chronicle of Higher Education. 30 April 2015. https://www.chronicle.com/blogs/ticker/hiring-faculty-members-in-groups-canimprove-diversity-and-campus-climate/98149. Accessed 7 Jan 2019.
Stahler, G. J., & Tash, W. R. (1994). Centers and institutes in the research university: Issues, problems, and prospects. The Journal of Higher Education, 65(5), 540–554.
Stuart, T. E., & Ding, W. W. (2006). When do scientists become entrepreneurs? The social structural antecedents of commercial activity in the academic life sciences. American Journal of Sociology, 112(1), 97–144.
Taşkın, Z., & Aydinoglu, A. U. (2015). Collaborative interdisciplinary astrobiology research: A bibliometric study of the NASA Astrobiology Institute. Scientometrics, 103(3), 1003–1022. https://doi.org/10.1007/s11192-015-1576-8.
Taylor, B. J., Cantwell, B., & Slaughter, S. (2013). Quasi-markets in U.S. higher education: The humanities and institutional revenues. The Journal of Higher Education, 84(5), 675–707. https://doi.org/10.1353/jhe.2013.0030.
Turner, V. K., Benessaiah, K., Warren, S., & Iwaniec, D. (2015). Essential tensions in interdisciplinary scholarship: Navigating challenges in affect, epistemologies, and structure in environment-society research centers. Higher Education, 70(4), 649–665. https://doi.org/10.1007/s10734-015-9859-9.
Wagner, C. S., Roessner, J. D., Bobb, K., Klein, J. T., Boyack, K. W., Keyton, J., et al. (2011). Approaches to understanding and measuring interdisciplinary scientific research (IDR): A review of the literature. Journal of Informetrics, 5(1), 14–26. https://doi.org/10.1016/j.joi.2010.06.004.
Weisbrod, B. A., Ballou, J. P., & Asch, E. D. (2008). Mission and money: Understanding the university. Cambridge: Cambridge University Press.
Cheng, Y., & Liu, N. C. (2006). A first approach to the classification of the top 500 world universities by their disciplinary characteristics using scientometrics. Scientometrics, 68(1), 135–150.
Acknowledgements
This research was supported by NSF SciSIP Collaborative Grants to Erin Leahey and Sondra Barringer (Award #s 1461989 and 1461846). Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We are grateful to Steven Brint, Scott Frickel, and Jerry Jacobs for their foundational work, and to Karina Salazar and Esme Middaugh for impeccable research assistance.
Appendix: Coding guidelines
These abbreviated manual coding guidelines were developed based on NSF’s disciplinary classifications (https://www.nsf.gov/statistics/nsf13327/pdf/tabb1.pdf), CIP codes (https://nces.ed.gov/ipeds/cipcode/browse.aspx?y=55), and the foundational work of other scholars, especially Brint et al. (2009). These coding guidelines were also incorporated as features into the machine classifier to guide its classification of text. The full version is available upon request.
The text is likely interdisciplinary if it references…

- Interdisciplinarity, including terms like: “interdisciplinary,” “multidisciplinary,” “trans-disciplinary,” “integrative,” “synthesis,” “applied,” “cross-disciplinary,” and “integration.”
- Two or more disciplines (i.e., CIP and NSF broad categories), or their stems, like: “Center for Pharmacology and Physiology,” “Geophysics,” “Bioengineering,” “Department of Sociology & Criminology”
- Environmental or earth sciences, like: “Institute of Environmental Policy,” “Atmospheric and Oceanic Sciences”
- Any of the following stem words in combination with “studies”: America-, biblic-, cultural, Islam-, sustain-, community, Slavic, rehab-, peace…
- Professional schools, like: Medicine, Nursing, Social Work, Education, Public Health, Law, Business, Public Policy/Administration
- Inherently interdisciplinary fields, like: space science, demography, gerontology, criminal justice, ethics
- Sexual minorities or women, such as: “women,” “gender,” “feminist,” “sexuality”
- Ethnic/racial minorities, such as: “African American,” “Chicano,” “Hispanic,” “American Indian,” “Asian American”
- Area/region/period/religion studies, like: “Institute of Africana/African Studies,” “Department of Latin American Studies”
- International or global orientation, like: “International Relations” or “Center for Global Studies”
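To illustrate how guidelines of this kind can be incorporated as features for a machine classifier, the sketch below encodes a few of the categories above as binary features over a research unit’s name. It is a minimal, hypothetical example: the keyword sets (`IDR_TERMS`, `DISCIPLINES`) and the threshold rule are illustrative stand-ins, not the feature set or classifier used in the paper.

```python
import re

# Illustrative keyword lists distilled from the coding guidelines above;
# the published classifier's actual features are richer and are not
# reproduced here.
IDR_TERMS = {"interdisciplinary", "multidisciplinary", "trans-disciplinary",
             "integrative", "synthesis", "cross-disciplinary", "integration"}
DISCIPLINES = {"pharmacology", "physiology", "sociology", "criminology",
               "physics", "biology", "engineering", "economics"}

def idr_features(unit_name: str) -> dict:
    """Map a research-unit name to binary/count features of the kind
    the coding guidelines suggest (illustrative only)."""
    tokens = re.findall(r"[a-z-]+", unit_name.lower())
    return {
        "has_idr_term": any(t in IDR_TERMS for t in tokens),
        "n_disciplines": sum(t in DISCIPLINES for t in tokens),
        "has_studies": "studies" in tokens,
    }

def looks_interdisciplinary(unit_name: str) -> bool:
    """Simple rule-based decision over the features; a trained
    classifier would weight such features from labeled data."""
    f = idr_features(unit_name)
    return f["has_idr_term"] or f["n_disciplines"] >= 2 or f["has_studies"]
```

In practice these features would be passed, alongside text-derived features, to a supervised classifier trained on manually coded unit names rather than applied as hard rules.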
Leahey, E., Barringer, S.N. & Ring-Ramirez, M. Universities’ structural commitment to interdisciplinary research. Scientometrics 118, 891–919 (2019). https://doi.org/10.1007/s11192-018-2992-3