Introduction

The aim of this paper is to explore the determinants of the degree centrality of those institutions with the highest impact in the collaboration network for creating and disseminating knowledge in the discipline of Management.

General scientific output doubled in volume between the beginning and end of the 1960s (Price 1963). Three decades later, in the 1990s, and owing to the development of information technology, especially in matters of storage, it was estimated that the amount of information held globally doubled every 20 months (Frawley et al. 1992). A similar pattern has been described for the discipline of Management. For example, 3445 documents were published in 1990 in the ISI Web of Science category “Management” alone. Twenty years later, this figure had almost trebled, with the publication of 9965 documents in 2010.

Scientific output has therefore been used as an effective science indicator of influence that is generally accepted by the scientific community to assess the performance of researchers, faculties and universities conducting research into a discipline (Coe and Weinstock 1984). Furthermore, it has been used to rate institutions according to the number of their publications and the quality of the journals in which they are published (Conroy et al. 1995).

Similarly, previous articles studying the importance institutions have in the generation and dissemination of knowledge on Management have used the following science indicators of influence: their scientific output in dedicated journals (Kirkpatrick and Locke 1992; Morrison and Inkpen 1991; Shane 1997; Trieschmann et al. 2000), the study of citation patterns in the papers they have published (Kirkpatrick and Locke 1992; Podsakoff et al. 2008), and co-authorship networks (Acedo et al. 2006). Based on the quantitative results obtained, the authors involved have compiled rankings of the most prominent institutions. Thus, Kirkpatrick and Locke (1992) and Shane (1997) present rankings of the top 25, 50 and 100 institutions; Morrison and Inkpen (1991) present the top 30, while Stahl et al. (1988) and Trieschmann et al. (2000) list the top 50. More recently, Podsakoff et al. (2008) have provided a ranking of the top 100 institutions in research on Management.

However, it is readily apparent that the order of the institutions listed in the aforementioned studies does not coincide. The three main reasons for these inconsistent results might be the following: (1) a different database is used for the source journals, either in terms of the number of journals included or regarding the titles chosen; (2) the timeframes are also different; and (3) different measurements of influence are used.

In addition to these shortcomings, and despite the merits of scientific output as an indicator of importance, it has two further limitations that are not always suitably addressed. First, a large volume of scientific output does not necessarily mean a greater impact: if the papers are not cited, their degree of influence is reduced. This circumstance has previously been described in Lotka’s Law (Lotka 1926) and in the Matthew effect (Merton 1968, 1988). Moreover, account should be taken of the limitation imposed by the number of self-citations, as high numbers of these will bias a publication’s true impact (van Raan 2008c).

Second, the increase in scientific output has been mirrored by an increase in collaboration between institutions (Georghiou 1998; Wagner and Leydesdorff 2005), as a way of overcoming the limitations arising from the complexity of research issues and the difficulty of accessing financial resources and materials. This is especially significant in frontier research, that is, when new knowledge is generated in a subject that extends beyond its traditional boundaries. In turn, the rise in co-authorship has boosted scientific output (Beaver 2001; Braun and Glänzel 2001). This situation places a restriction on scientific output as an indicator of influence because it is impossible to measure each author’s scholarly contribution in a multi-authored paper. In other words, the contribution a joint paper makes to the scientific community must be shared among the various authors or institutions, without there being a clear and generally accepted criterion on how to measure each contribution.

The past decades have witnessed growing competition among individual scholars, universities, and journals to achieve higher rankings (Adler and Harzing 2009). As these authors state, current systems are dysfunctional and potentially cause more harm than good, which suggests that a temporary moratorium on rankings may be expedient until a more valid and reliable way of assessing scholarly contributions can be introduced. This paper seeks to overcome these limitations in the measurement of the influence institutions have in knowledge creation and dissemination in the field of Management through the use of social network analysis measures. Recent years have witnessed an increase in the use of social network analysis in scientific disciplines (Liu et al. 2005). In addition, there has been a noticeable increase in interest in the structure and sociology of scientific collaboration, which is thriving on the back of today’s prevalence of complex problems and a dynamic growth in knowledge (Racherla and Hu 2010). The suitability of social network measures for this study is based on the results presented by Judge et al. (2012), who reported the robustness of these kinds of measures for assessing research impact.

We contend that the use of the techniques involved in social network analysis will permit the following: (1) measuring the degree centrality of each institution as a measure of its relevance in the network structure; (2) finding the determinants of the degree centrality of the institutions involved in the network; and (3) building a ranking of institutions in the field of Management, using degree centrality as a reference.

The core notion underpinning this paper involves identifying the importance of the institutions involved in knowledge creation and dissemination in the academic community in the field of Management through their degree centrality in the network. We are therefore adopting an original and more comprehensive approach to the analysis of each institution’s importance.

With a view to fulfilling this remit, our first step will be to identify the main arguments that inform the central notion, and which will lead to the formulation of the paper’s main hypothesis. Second, we shall discuss the methodology used, with a view to subsequently disclosing our findings and discussing their implications. Based on these findings, we shall present a ranking of the leading institutions in knowledge creation and dissemination in Management. The paper ends with its conclusions, limitations and the main lines of research it prompts.

Theory and hypotheses

As we have already noted, the use of scientific output as an indicator of an institution’s importance poses three limitations, namely: the participation in the same paper of several authors from different institutions; the need to consider the citations received by each paper published; and the increase in the number of self-citations in co-authored papers, with the ensuing bias this causes when measuring each paper’s impact (Glänzel and Thijs 2004).

In order to tackle the first limitation, some authors have proposed a specific method for weighting the appearance of authors in papers through the adjusted appearances variable (Heck and Cooley 1988; Morrison and Inkpen 1991; Shane 1997). Thus, when a paper is the result of the research conducted by two authors, each one of them receives 0.50 of a credit; in the case of three authors, the allocation is 0.33, and so on. The application of this method does not wholly overcome the limitation related to the effective gauging of the scholarly contribution made by each individual author, but it is a step forward in the weighting of each author’s involvement in works involving several scholars from the same institution. This method can be adapted for allocating a paper’s credit when it involves the collaboration of authors from different institutions, which is precisely the case addressed here.

The second limitation mentioned refers to the impact a paper has had, considering that the importance of an author or institution is not related solely to the number of papers published, as instead it is particularly important to consider the impact those papers have had within the academic community.

The origin of the use of citations for measuring scholarly performance dates back to 1955, when Dr. Eugene Garfield revolutionized scientific research with his concept of the citation index, which in 1961 gave rise to today’s Science Citation Index (Garfield 1972). This development means that the counting of the citations papers receive has become an extremely important science indicator for performance assessment in science. Although the counting of citations has its critics as regards the measurement of individual performance, owing to the highly skewed distribution of citations across papers (van Raan 2014), the analysis of the number of citations received by a group of researchers over a lengthy period of time may be considered a meaningful science indicator of scholarly performance.

The main criticism leveled at the use of the number of citations as a performance indicator is the bias introduced by the increase in self-citations in co-authored papers (Glänzel and Thijs 2004; Hartley 2012). Here, with a view to overcoming this third limitation, Hirsch’s h-index has been used as an impact indicator for each institution (Hirsch 2005). The h-index has the drawback of being size-dependent, but this is not a problem here because the institutions selected for the sample are the ones with the largest scientific output in knowledge generation in the discipline of Management.

In Management, Podsakoff et al. (2008) contend that a “relatively small percentage of the universities publishing research in the Management literature are responsible for the vast majority of the citations”. This circumstance means there is a need to consider impact in order to measure the importance of an author or institution, as it is clear that not all the papers published have the same impact or influence in the academic community.

Indeed, in line with the aspects already reported by other researchers, it may be concluded that a large scientific oeuvre is not necessarily synonymous with major intellectual influence, as an institution may record a high scientific output and yet have a reduced impact on the rest of the scientific community if its papers have not been widely cited. As posited by Gomez-Mejía and Balkin (1992), the number of citations should be seen as an indicator of how useful an author’s research findings are for all the other scholars in the research community to which they belong. The more citations a researcher receives, the more highly valued their contribution to the “market of ideas” will be (Laband 1985).

The importance of the impact publications make also has a bearing on the salary researchers receive. For example, a positive correlation has been found in the field of Management between the number of citations a researcher receives and his/her salary (Gomez-Mejía and Balkin 1992). This result complements the findings reported by Diamond (1985, 1986) and Hamermesh et al. (1982) on the correlation between the number of citations and researchers’ pay.

Taking these aspects into account, impact is introduced as a major indicator for measuring the importance institutions have in the discipline, and it has been used to draw up a list of the world’s leading institutions in research on Management (Kirkpatrick and Locke 1992; Podsakoff et al. 2008). The use of this criterion has enabled Podsakoff et al. (2008, p. 659, see table 4) to present a table of the world’s top 100 institutions in research in the academic field of Management. It can be inferred from this work that the impact of the leading institutions’ publications might encourage those less prominent institutions to set up collaboration networks with their more successful counterparts. This would help more central institutions to climb up the ladder of merit as they absorb the capabilities of collaborating institutions. It is therefore assumed that the greater the impact institutions have, the more important they will be. This would be explained by the attraction such institutions have for less important ones, which would seek collaboration with them in order to improve their research impact.

Podsakoff et al. (2008) also contend that the number of citations is a more effective yardstick than the number of papers published for gauging the intellectual influence of a scientific community. Other authors, such as Mingers et al. (2010), Mingers and Fang (2010), and Gomez-Mejía and Balkin (1992) also share this view. In spite of the importance of the number of citations as a criterion for measuring an institution’s performance, this approach has its critics, who point to the existence of technical and methodological issues in the application of this bibliometric method. One of the technical issues raised is that approximately 30 % of the citations of any one paper are lost in their pairing process, that is, assigning the citation to the paper cited (van Raan 2014). The other issue involves the aforementioned negative effect of self-citations.

The issue is muddied even further when account is taken simultaneously of the three limitations mentioned; in other words, when the aim is to consider both the collaboration involved in the papers and the impact through citations. Thus, as is the case with scientific output, the number of citations co-authored papers receive introduces a bias in the data for the analysis, as the same number of citations of a paper is generally apportioned across all the institutions without taking into account the number of authors from each institution. For example, if a paper is published by three authors from two institutions, one of them is obviously providing two of the authors, so is it fair that the institution providing only one of the authors should receive the same consideration as the one contributing two authors? In order to resolve this problem, some authors have introduced the adjusted impact variable to weight the appearances of authors in papers published jointly, and so overcome the limitations described here (Heck and Cooley 1988).

Furthermore, the growing and sustained trend toward joint research among scholars and across institutions (Ronda-Pupo and Guerras-Martín 2010) renders it expedient to introduce indicators that are linked to the latent structure underlying those collaboration relationships and to the degree centrality of each institution involved in that collaboration. This would effectively provide information on those institutions located at the heart of the collaboration network and allow their influence on the rest of the network to be evaluated.

Within the field of Management, social network analysis has been used and developed by Burt (2001). Co-authorship networks are a class of social networks that have been used to determine the latent structure of scientific collaboration and the status of individual researchers. These co-authorship networks, although similar to citation networks, involve a stronger nexus than those based simply on citations, as they imply an explicit relationship between the authors or institutions that collaborate on the same research (Liu et al. 2005).

A pioneering study analyzing the evolution and patterns of collaboration based on the co-authorship of papers is the one by Acedo et al. (2006), who identify the international collaboration network in Management and report that the number of co-authored papers has been gradually increasing over time, co-authorship being more frequent when the authors have different theoretical approaches. These authors use social network techniques; nevertheless, although they analyze the institutional level, they do not identify the specific areas of the collaboration network’s structure in which the institutions are located.

Also using co-authorship networks, Fatt et al. (2010) identified the social networks within the scope of Finance based on the papers published in the Journal of Finance, while Gazda and Quandt (2010) used network analysis to study inter-institutional collaboration in the Management of technological innovation in Brazil. For their part, Ronda-Pupo and Guerras-Martín (2010) used network analysis to study international collaboration within the scope of Strategic Management.

The techniques for studying social networks allow more than just the analysis of the structures of inter-personal relationships. For example, Tsai and Wu (2010) used social network analysis to unravel the intellectual structures underlying “knowledge combination” based on the patterns of co-citations in the publications of six top-tier journals on Management. In turn, Ronda-Pupo and Guerras-Martín (2012) have used these networks to identify the key components of the concept of “strategy” through co-word analysis.

Within the context of social network analysis, Frenken et al. (2005) found that the collaboration networks underlying the cooperative scientific generation of knowledge act as a vehicle for its dissemination, and both aspects favor the impact publications have. What’s more, Tsai (2001) has shown that when an organization occupies central positions in the network, it generates more innovation and records a better performance, as this position facilitates its access to the knowledge produced by the organizations with which it collaborates. Thus, the degree centrality institutions obtain within the structure of the collaboration network helps them to gain authority and earn prestige in the eyes of the rest of the community. Accordingly, it may be assumed that the greater an institution’s degree centrality within the network of research into Management, the greater its impact will be within the research community. According to the findings reported by prior studies, the following hypotheses are proposed:

Hypothesis 1a

The adjusted appearances of institutions in papers have a positive influence on their degree centrality in the collaboration network.

Hypothesis 1b

The impact of an institution’s publications has a positive influence on its degree centrality in the collaboration network.

Hypothesis 1c

The combination of adjusted appearances and the impact of an institution’s publications has a positive influence on its degree centrality in the collaboration network.

Method

From a methodological perspective, this study introduces certain novelties as regards the works analyzed in the literature review. Thus, the following are used as science indicators of influence to highlight the relevance of the institutions involved in the global network of research in knowledge dissemination on Management: each institution’s importance in the international collaboration network, the institution’s position in the network structure (core, semi-periphery or periphery) according to its degree centrality, and the impact of the papers published by authors at each institution. The main difference from prior studies is that they use scientific output and/or the number of citations to determine the importance of institutions. We shall now discuss the main aspects that inform the research design.

Time frame

The previous studies taken as a reference have used different timeframes. Thus, Kirkpatrick and Locke (1992) analyze 5 years within the period from 1983 to 1987. Morrison and Inkpen (1991) study 10 years between 1980 and 1989. Shane (1997) analyzes 8 years between 1987 and 1994. Trieschmann et al. (2000) cover 13 years between 1986 and 1998. Podsakoff et al. (2008) study 25 years between 1981 and 2004. For their part, Acedo et al. (2006) study a 23-year timeframe running from 1980 to 2002.

These reference studies differ in both the timeframes and the segments analyzed. Furthermore, and as is apparent, no studies have been conducted on this subject in the past 10 years. In this study, we analyze 15 years of scientific output on Management, from 1996 to 2010 inclusive. This time frame allows us to analyze the most recent output in Management research.

Unit of analysis

Bearing in mind that this paper seeks to analyze the structure of the collaboration network across institutions, the chosen unit of analysis is precisely the institution itself, which is normally a university, although in exceptional cases it may be a business school or possibly a consulting firm.

Data retrieval

The information required for selecting the source journals and the institutions, as well as for measuring the model’s main variables has been collated from the ISI Web of Science database compiled by Thomson Reuters. The use of the ISI database as a study source has several explanations, which include the following: (1) it is the world’s leading database for publications and the reporting of citations (Adams and King 2009); (2) it contains the annual output of indicators that are acknowledged and generally accepted by the scientific community worldwide, and which allow measuring the performance of institutions; and (3) it includes the necessary fields for obtaining the information for the creation of the data matrices that will be used in the quantitative analyses.

As noted by Podsakoff et al. (2008), the relational ISI database classifies its publications, according to their nature, into 15 different categories: articles, bibliographies, book reviews, chronologies, corrections, discussions, editorials, items about an individual, letters, meeting abstracts, news items, notes, reprints, reviews, and software reviews. In order to retrieve the data for the quantitative analysis, the only documents we used were articles (including proceedings papers), notes and reviews published in each one of the source journals selected for the study. This procedure differs from the method applied by Acedo et al. (2006), who use all the classifications available. Our selection is informed by our understanding that the core of the generation of knowledge is to be found in the three categories chosen, thereby avoiding the bias of too much information of minor relevance for this purpose being included in all the other categories.

Selection and validation of the source journals

For the selection of the source journals, Shane (1997) states that three criteria need to be met: the journals should be appropriate, significant, and outstanding. In order to select the sources for this study, account is taken of these three aspects through the application of the following two criteria: (1) the importance of the journals, reflected in a high impact factor and its stability within the timeframe analyzed (significant and outstanding); and (2) their past use for the study of the discipline of Management in previous papers (appropriate). The application of these criteria guarantees the reliability of the information used for the study.

In order to fulfill the significant and outstanding criteria, a review was conducted of the Management section of the Journal Citation Reports (JCR). Given that in 1997 that section had 61 indexed journals, criteria had to be used for the inclusion and/or exclusion of journals as potential sources for the study. The criterion for inclusion involved selecting those journals that in 1998, according to the JCR Social Science Edition, had an impact factor greater than 1.0 in the Management category. This condition for inclusion was met by 15 journals (24.59 %). The choice of 1998 as the start year was based on the fact that the calculation of the impact factor takes into account the two preceding years, and our study uses 1996 as its start date.

The criterion for exclusion was that the journals should continuously maintain a position of importance in the JCR over the course of the study period, whereby those that did not do so were discarded. Thus, the condition for inclusion referred to the whole of the study’s timeframe: an impact factor equal to or higher than 1.0 for each one of the years, with a maximum of 2 years of grace; in other words, for at least 11 of the 13 years for which the impact factor was assessed (1998–2010). This criterion ensured that the journals included as sources for the study are significant in the field of Management, and that this significance held steady over the years included in the study. This aspect is corroborated by the fact that all the journals have an average impact factor of more than 1.481, the minimum value, recorded by California Management Review. Thus, the application of the inclusion/exclusion criteria leaves 13 source journals for the analysis (Table 1).
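To make this inclusion/exclusion rule concrete, the following minimal sketch applies it to two invented impact-factor series; the journal names and figures are hypothetical, not the study’s data.

```python
# Hypothetical sketch of the journal filter: an impact factor >= 1.0 in
# every year of 1998-2010, with at most two years of grace (i.e., >= 1.0
# in at least 11 of the 13 years). The series below are invented.
impact_factors = {
    "Journal A": [1.4, 1.2, 0.9, 1.5, 1.1, 1.3, 1.6, 1.2, 1.0, 1.8, 1.5, 1.4, 1.7],
    "Journal B": [0.8, 0.9, 1.1, 0.7, 0.6, 1.0, 0.9, 0.8, 1.2, 0.9, 0.7, 0.8, 0.9],
}

GRACE_YEARS = 2
for journal, series in impact_factors.items():
    years_at_or_above = sum(f >= 1.0 for f in series)
    included = years_at_or_above >= len(series) - GRACE_YEARS
    print(f"{journal}: {'included' if included else 'excluded'}")
```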

Table 1 Journals on Management chosen for the study

As regards the application of the appropriate criterion, 17 prior studies have been identified that analyze the importance of the journals on Management, or whose studies include a ranking of the journals in this field. Table 2 shows that six out of the 13 journals in our study appear in more than 70 % of the 17 prior studies, and six appear in between 20 and 70 %. Furthermore, all the journals selected appear as a source in at least one relevant prior work. The aspects already described here testify to the reliability of the journals chosen as sources.

Table 2 Appearance in prior studies of the source journals in our study

Selection of institutions

Given the very high number of institutions included in the journals selected, the study has focused on the institutions that account for the highest scientific output in the source journals within the selected timeframe. With a view to making an initial selection, the ISI Web of Science has provided the data on the scientific output (number of published papers in which each one of the institutions appears) for the top 500 institutions. As this number is still very high, the decision has been made to restrict the number of institutions analyzed to 100.

The drafting of the definitive list of institutions has required some screening of the ISI WOS data for the following reasons:

A problem of ambiguity has been encountered in the names of the institutions (Huang et al. 2013), which manifests itself in two ways:

  (a)

    Certain institutions have different names on the list. Such is the case of Univ Maastricht and Maastricht Univ, two names for the same institution. This also applies to the Harvard Business School, which sometimes appears as an independent institution and at others as part of Harvard Univ. The same applies to the Manchester Business School, which is also part of Manchester Univ. In order to resolve this problem, once these situations had been detected, the name appearing most often was used, and all the data recorded under the institution’s different names were attributed to it.

  (b)

    Certain universities, especially ones in the US, may share a name while operating as separate entities. For example, the University of California: Berkeley, Los Angeles, Davis, etc. This, too, is the case for the Universities of Texas, Illinois, and Wisconsin, among others. The application of a single criterion to these cases was extremely complicated, so a criterion similar to the one used by Podsakoff et al. (2008) was applied instead. When the names in the ISI database refer exclusively to each campus, these have been considered independent institutions. Such is the case of the University of California, with Berkeley, Los Angeles, etc. being considered different institutions. When only one name appears among the top 500 universities, it is used as a blanket term without distinguishing between campuses (e.g., the Universities of Wisconsin or Colorado). The more complicated situations have been the mixed ones, in which a generic name is sometimes used, while at other times the campuses appear individually with their own names. In the cases of the Universities of Texas and Illinois, the decision has been to include the specific name of each campus, as this is the format adopted by the authors themselves in some of the papers, and even by the ISI database.

Once the main institutions have been identified and screened, a selection has been made of the definitive 100 institutions, which has involved the application of the following criteria:

  (a)

    A selection has been made of the ten leading institutions in terms of their scientific output in each one of the source journals during the chosen timeframe. Given that the top institutions generally appear in leading positions in several journals, this criterion led to the selection of 53 institutions, of which 52 were academic and one was a company (Bain & Co.). In contrast to Podsakoff et al. (2008), who did not include this company, we decided to accept it, as we consider it a unique case given its relative importance in one of the selected journals (HBR), and its contributions help to generate and disseminate knowledge within the discipline of Management.

  (b)

    The above list has been rounded up to 100 in order of overall scientific output by including those institutions with the highest scientific output among the source journals as a whole. Given that the 100th position is occupied by four institutions with the same number of papers published, the decision has been made to include all four of them, which means that 103 institutions have finally been included in the study.

The application of these criteria has meant that the final database includes institutions with more than 45 papers published in the period analyzed, with the exception of four institutions that fulfill criterion a), but do not record the minimum number of publications: Maastricht University, Eindhoven University of Technology, University of Reading, and Bain & Co. Nevertheless, these four institutions, while not among the top 100 by overall scientific output, are among the top 130, being especially prominent in at least one source journal, which means there are no major distortions in the study. A fact that confirms the suitability of the choice is that the institutions in our study are among the institutions chosen in the studies by Kirkpatrick and Locke (1992), Morrison and Inkpen (1991), Podsakoff et al. (2008), Shane (1997), Stahl et al. (1988), and Trieschmann et al. (2000).

Notwithstanding the above, a category called “Other institutions” has been created to include all those institutions not included in the prior list, but which do collaborate on some paper with those selected. This allows observing the possible collaboration of leading institutions, not only with each other, but also with other institutions outside the top 103 positions in the collaboration network.

Variables and their operationalization

Dependent variable: degree centrality

The degree centrality of an institution, denoted by \(C'_D(n_i)\), is the proportion of the other institutions in the network that are linked to it (Wasserman and Faust 2009). The degree centrality of a given institution \(n_i\) ranges from a minimum of 0, if no institution is linked to it, to 1, if institution \(n_i\) is linked to all the other institutions in the collaboration network. We use the formula proposed by Wasserman and Faust (2009, p. 179) to calculate degree centrality:

$$C'_D(n_i) = \frac{d(n_i)}{g - 1},$$

where \(C'_D(n_i)\) stands for the degree centrality of institution \(n_i\), \(d(n_i)\) is the number of institutions linked to institution \(n_i\), and \(g - 1\) is the total number of institutions in the network’s structure other than institution \(n_i\) itself.
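As a minimal sketch of this calculation, the snippet below computes the same normalized degree centrality with the networkx library on a toy collaboration network; the edges are invented for illustration and are not the study’s data.

```python
# Toy illustration of normalized degree centrality, C'_D = d(n_i) / (g - 1);
# networkx's degree_centrality implements exactly this normalization.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Penn State Univ", "Arizona State Univ"),
    ("Penn State Univ", "Univ Maryland"),
    ("Arizona State Univ", "INSEAD"),
])

centrality = nx.degree_centrality(G)
for institution, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{institution}: {score:.2f}")  # e.g., Penn State Univ: 0.67
```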

The following procedure was used to calculate each institution’s degree centrality. First, data matrices were generated for each source journal. The lead author was used as the identifier for each paper. In order to avoid duplications, a code was assigned to each paper. The 103 institutions selected were used as variables, with the inclusion of the variable called “Other institutions”, which we have referred to earlier in this paper.

We used adjusted appearances to code each institution’s participation in each paper according to authorship/co-authorship. The procedure followed is explained under the variable “adjusted appearances”. A limitation encountered when building the matrices was that not all the source journals record the authors’ home institutions in the ISI database. These cases required an individualized search on the journal’s website, and it was sometimes even necessary to visit the website of the institution in question in order to verify the affiliation of the papers’ authors, and thus ensure the reliability of the data gathered for the analysis.

Second, the information retrieved on each individual journal was used to build a two-mode matrix (rows = article, columns = institutions). We then transformed it into a one-mode matrix (rows = institutions, columns = institutions). We used the Jaccard index to normalize the two-mode matrix.
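The sketch below illustrates, under stated assumptions, this two-mode to one-mode transformation with Jaccard normalization; the small 0/1 incidence matrix (rows = articles, columns = institutions) is invented for illustration.

```python
# Two-mode (article x institution) matrix projected to a one-mode
# (institution x institution) matrix, then normalized with the Jaccard
# index: J(i, j) = |papers with both| / |papers with i or j|.
import numpy as np

A = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
])

co = A.T @ A                    # raw co-occurrence counts (one-mode matrix)
papers = np.diag(co)            # number of papers per institution
union = papers[:, None] + papers[None, :] - co
jaccard = np.where(union > 0, co / union, 0.0)
np.fill_diagonal(jaccard, 0.0)  # remove loops, as in the paper
print(np.round(jaccard, 2))
```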

Third, Pajek software was used to graphically depict the collaboration network (Batagelj and Mrvar 1998). We removed any loops before running the calculation of the degree centrality of each institution in the network. As we normalized the two-mode matrix through the Jaccard index, we removed multiple lines from the network before mapping it.

In order to analyze each institution’s position in the structure of the collaboration network, the institutions were first located within the network. Given that the degree centrality values obtained for each institution lie within a range of 0.00 to 0.66, this range is stratified into three thresholds. The first threshold contains the institutions that pertain to the network periphery, with degree centrality values of between 0.00 and 0.22. The institutions pertaining to the network semi-periphery are located in the second threshold, with degree centrality values of between 0.23 and 0.45. Finally, the third threshold includes those institutions that belong to the core of the network, with degree centrality values of between 0.46 and 0.66.
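A minimal sketch of this three-threshold stratification, using the cut-off values just described:

```python
# Assigns an institution to the periphery, semi-periphery or core of the
# network according to its degree centrality, per the thresholds above.
def stratum(centrality: float) -> str:
    if centrality <= 0.22:
        return "periphery"
    if centrality <= 0.45:
        return "semi-periphery"
    return "core"

print(stratum(0.66))  # "core" (the maximum observed value)
print(stratum(0.32))  # "semi-periphery" (the sample mean)
```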

Independent variables

Adjusted appearances of institutions

The adjusted appearances of the institutions refer to each institution’s level of involvement in the papers published on Management in the selected source journals during the chosen timeframe. The data for this variable are taken from the AU field in the ISI records for each paper. Regarding those papers co-authored by two or more authors representing different institutions, the methodology used was similar to that used by Heck and Cooley (1988), Morrison and Inkpen (1991), and Shane (1997) to weight the appearances in terms of authors.

This step therefore involves two actions: first, calculating the adjusted appearances at the level of authors (1/n), where n is the number of authors participating in the paper; second, aggregating at the level of institutions, where the value of adjusted appearances is the total sum of the adjusted appearances of an institution’s authors across the articles in which it is involved. Thus, a paper that is the outcome of research involving two authors from two institutions allocates each one 0.50 of a credit (1/2), with this score being 0.33 in the case of three institutions (1/3), and so on. In an article with three authors, two from institution A and the third from institution B, institution A receives 0.66 of a credit and institution B 0.33. The value of overall adjusted appearances is the sum total of the adjusted appearances across all the papers in which an institution is involved.
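The following sketch illustrates this two-step weighting; the affiliation lists are hypothetical and the helper code is ours, not part of the original study’s tooling.

```python
# Adjusted appearances: each paper contributes 1/n per author (n = number
# of authors), aggregated by the authors' institutions.
from collections import defaultdict

papers = [
    ["Inst A", "Inst A", "Inst B"],  # three authors: A gets 2/3, B gets 1/3
    ["Inst A", "Inst C"],            # two authors: 1/2 each
]

adjusted = defaultdict(float)
for author_affiliations in papers:
    credit = 1.0 / len(author_affiliations)  # per-author share of one paper
    for institution in author_affiliations:
        adjusted[institution] += credit

for institution, score in sorted(adjusted.items()):
    print(f"{institution}: {score:.2f}")  # Inst A: 1.17, Inst B: 0.33, Inst C: 0.50
```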

The impact of institutions

The traditional way of measuring an institution’s impact has involved counting the number of times its papers have been cited (Podsakoff et al. 2008). As mentioned earlier, this way of measuring impact has been seriously questioned by the academic community due to the effect of the number of self-citations (Chang et al. 2013; Glänzel and Thijs 2004; Hartley 2012; Huang and Lin 2012). Here we used each institution’s h-index (Hirsch 2005) as an impact indicator. The information for this variable was compiled by analyzing each institution’s citations in the ISI Web of Science database. Self-citations were removed.
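As a minimal sketch, assuming a simple list of per-paper citation counts for an institution (the figures below are invented), the h-index can be computed as follows:

```python
# h-index (Hirsch 2005): the largest h such that at least h papers have
# been cited at least h times each.
def h_index(citations: list[int]) -> int:
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # 3: three papers cited at least 3 times
```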

Analysis and discussion of results

Descriptive analysis

Table 3 presents the descriptive data for the variables analyzed. The 103 institutions included in the study published 11,220 documents in the 13 journals selected during the chosen timeframe, with 63.9 % of that scientific output pertaining to 38 institutions (36.89 %). What’s more, the 11,220 documents published received 356,950 citations. It is worth noting that 46 institutions (44.6 %) accounted for 74.8 % of all the citations. This result is fully consistent with that reported by Podsakoff et al. (2008), in the sense that a small number of institutions corner the highest percentage of impact, despite the fact that this study involves a selection of the world’s 103 leading institutions in research into Management.

Table 3 Descriptive statistics of the model’s variables

The degree centrality of institutions in the collaboration network

Figure 1 shows that the direction of the collaboration across the world’s leading institutions in knowledge creation and dissemination in research on Management flows from North America (core, semi-periphery) towards Europe, Asia and Oceania (periphery). It should be noted that Africa, South America and Eastern Europe do not have any institutions in the top 100 in research on Management. This finding is consistent with the economic and technological development of the regions that are home to the world’s foremost research centers in the field of Management.

Fig. 1 Map of the collaboration networks across the world’s leading institutions in research into Management

As Fig. 2 shows, the latent structure of the collaboration network involving the world’s leading institutions in research on Management has three thresholds: core, semi-periphery and periphery. Table 3 shows each institution’s position in the collaboration network’s structure. Ninety percent of the institutions in the network’s core come from North America, mainly the USA. In addition, the North American institutions account for 89 % of the semi-periphery. This result confirms the findings reported by Adams (2013) for science in general regarding the emergence of a new stage in scientific research involving elite groups worldwide, and is consistent with the patterns found for the discipline of Management.

Fig. 2 Structure of the collaboration network involving the 103 leading institutions in Management research. Note: the loops and multiple lines have been removed. The values in brackets correspond to each institution’s degree centrality

Only Penn State University (0.66) and Arizona State University (0.61) have degree centrality scores of more than 0.60, which shows that they are the ones that attract collaboration not only from institutions located on the periphery, but also from institutions at the core of the collaboration network. This behavior is related to the absorptive capabilities of these institutions. The following universities have a degree centrality of more than 0.50: Maryland (0.58), Northwestern University (0.58), Minnesota (0.57), North Carolina (0.57), Pennsylvania (0.57), Michigan (0.53), NYU (0.53), Southern California (0.53), Texas Austin (0.52), Harvard (0.50), INSEAD (0.50), Illinois Urbana-Champaign (0.50), and Texas A&M Univ (0.50).

Determinants of the degree centrality of institutions in the collaboration network

The hypotheses formulated here seek precisely to explore the role of degree centrality as a compound indicator of the importance institutions have in the field of Management. Let us now analyze our findings.

Table 4 presents the descriptive statistics of the variables analyzed in the study. The mean degree centrality for the institutions in the collaboration network structure is 0.32, with a standard deviation of 0.14. According to a Kolmogorov–Smirnov test, it follows a normal distribution (D = 0.072, p > .200). By contrast, the adjusted appearances variable departs from normality (D = 0.209, p < .001), as does the impact variable (D = 0.135, p < .001).

Table 4 Means and standard deviation of degree centrality and the predictive variables of adjusted appearances and impact

An initial approach to the relationship between the dependent variable and the independent variables involved conducting a correlation analysis. A positive and significant correlation was found between institutions’ impact and their degree centrality (r = 0.722, p < .001), and a positive and significant association was detected between the adjusted appearances of institutions and their degree centrality (r = 0.591, p < .001). There is also a positive and significant correlation between the independent variables of impact and adjusted appearances (r = 0.715). In order to test for collinearity, a check was made to see whether the tolerance value was lower than 1 − R²; since it is not (0.489 > 0.335), we conclude that there is no collinearity between the independent variables. What’s more, the variance inflation factor is low (2.046), well below 10, the boundary value conventionally used to decide whether the correlation between independent variables poses a problem of collinearity.
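A hedged sketch of this collinearity check, under the assumption of a two-predictor model; the data generated below are illustrative, not the study’s.

```python
# Tolerance and variance inflation factor (VIF) for two predictors:
# regress one predictor on the other, then tolerance = 1 - R^2 and
# VIF = 1 / tolerance.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
impact = rng.normal(size=103)
appearances = 0.7 * impact + rng.normal(scale=0.7, size=103)  # correlated predictors
df = pd.DataFrame({"impact": impact, "appearances": appearances})

r2 = sm.OLS(df["impact"], sm.add_constant(df["appearances"])).fit().rsquared
tolerance = 1 - r2
print(f"tolerance = {tolerance:.3f}, VIF = {1 / tolerance:.3f}")
```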

The validation of the three parts of the research hypothesis involved performing a factorial ANOVA. Table 5 confirms a positive interaction between the effects of the adjusted appearances and the impact of the institutions on their degree centrality (F(1, 99) = 43.303, p < .001, η² = 0.30). The assumptions of independence of observations and normality of the dependent variable were checked and ratified. According to the results of the analysis, hypotheses 1a, 1b and 1c are confirmed. The results show that impact is the variable that best predicts an institution’s degree centrality, explaining 46 % of its variance (η² = 0.46), compared with 11 % for adjusted appearances (η² = 0.11) and 30 % for the joint interaction of the two variables (η² = 0.30).

Table 5 Analysis of variance of degree centrality as a function of impact and adjusted appearances
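The sketch below shows, under stated assumptions, how such a two-way factorial ANOVA with interaction can be run in statsmodels; since the paper does not detail how the continuous predictors were treated as factors, the sketch simply dichotomizes two hypothetical variables, and the data are randomly generated for illustration.

```python
# Hypothetical sketch of a two-way factorial ANOVA with interaction,
# in the spirit of the test reported above; data and factor levels are
# invented, not the study's.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
n = 103  # number of institutions in the study
df = pd.DataFrame({
    "impact": rng.choice(["low", "high"], size=n),       # dichotomized factor
    "appearances": rng.choice(["low", "high"], size=n),  # dichotomized factor
    "centrality": rng.normal(loc=0.32, scale=0.14, size=n),
})

# Main effects of impact and adjusted appearances plus their interaction
model = ols("centrality ~ C(impact) * C(appearances)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```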

These results suggest that degree centrality may serve as a compound science indicator of an institution’s degree of importance within the field of research into Management. Furthermore, it is a more suitable indicator than those that have hitherto been commonly used (mainly scientific output and impact). There are basically four reasons for this: degree centrality (a) maintains a close relationship with each one of the simple indicators (high correlation); (b) it is explained by the simple indicators; (c) it allows the joint effect of both simple indicators to be considered; and (d) it considers not only the effort each institution makes in relation to research, but also the role each one plays in the academic community according to the collaboration network established with other institutions.

Based on this reasoning, we present a ranking of the foremost institutions in research into Management, according to their degree centrality (Table 3).

The leading institutions

Table 6 provides a ranking of the top ten institutions for each one of the variables analyzed. There are 18 institutions that occupy one of the leading positions in at least one of the variables. Nevertheless, only three institutions feature in the top ten places for all three variables, albeit in different positions. It should be noted that the institutions occupying the top places in terms of impact or adjusted appearances are not the most central ones in the network. This finding may be interpreted as evidence that the usual indicators, such as impact and scientific output, even taking the latter to be the adjusted appearances of the institutions in the papers, are in themselves insufficient to measure an institution’s importance in a collaboration network.

Table 6 Ranking of the top ten universities for each variable analyzed

Furthermore, this result also shows that the most central institutions increase their absorptive capacity by raising their scores through the addition of the intellectual resources of the smaller institutions with which they collaborate.

According to the data in Table 6, it may be inferred that the choice of variable for measuring an institution’s significance largely determines the final results obtained. This is to be expected, as each variable measures something different. Therein lies the need to study the possibility of a more complex and complete science metric that covers the different aspects involved in an institution’s importance. This integrating role could be assumed by degree centrality.

Measuring degree centrality is a more integral way of gauging an institution’s impact in the collaboration network because it reduces the size effect of the participating institutions, which is one of the limitations of science indicators such as the h-index (Hirsch 2005) and the g-index (Egghe 2006). Thus, for example, certain universities with a high scientific output may not be particularly central to the network if their researchers publish through collaborations undertaken solely within the institution itself. In other words, they might be well positioned in terms of scientific output or citations, but less influential with regard to all the other institutions involved in the network. Institutions of this kind have the ability to develop on their own, but not to contribute to the development of knowledge creation in other universities in the network by collaborating in publications.

Conclusions

This paper has analyzed the importance of the world’s leading institutions in knowledge creation and dissemination in research on Management. In view of the high number of institutions involved, a selection has been made of the top 103 institutions in order to establish a ranking of merit based on the role they play in the academic community in this field. Based on our findings, we may draw certain interesting conclusions as regards the contribution this work makes.

First, it has been noted that a small number of institutions account for the bulk of the impact, corroborating the findings of other prior studies, along with the presence of Lotka’s law in the distribution of impact across institutions. It is worth stressing that this result is recorded not only for the institutions as a whole, but also for those that are initially the most important ones.

Second, the analysis of the sample used here shows that a semi-periphery/periphery structure prevails in the collaboration network in the process of knowledge creation and dissemination in research into Management, with none of the institutions recording a very high degree centrality (greater than 0.66). This finding confirms the existence of elite groups worldwide that tend to collaborate with other, minor institutions, with less collaboration among the major institutions themselves. This finding has important practical implications for helping institutions to shape their policies for improving their performance in knowledge creation on Management. Less central institutions benefit from the collaboration network by boosting the number of articles they publish in top-tier journals, while elite institutions enhance their absorptive capacity by collaborating with minor institutions. The highest-performing institutions attract lower-performing ones through preferential attachment; that is, new institutions preferentially attach themselves to the ones that are already well connected (Albert and Barabási 2002; Barabási and Albert 1999). The lower-performing institutions obtain a larger size-dependent cumulative advantage in receiving citations by collaborating with top-performing institutions. van Raan (2008a, b) has found similar results for Chemistry research groups in the Netherlands. The degree centrality of institutions belonging to the core of the network structure will continue to grow as they foster their collaboration with minor institutions. The Matthew effect is stronger for the more central institutions than it is for their minor counterparts, but both benefit from collaboration.

Third, the institutions with the highest degree centrality in the network structure are not always the ones with the greatest impact. This evidences the effectiveness of this indicator, since, unlike the h-index and the g-index, it is not size-dependent. The central institutions’ ability to gain significance rests on the fact that they absorb the capabilities of those with lesser influence, which look to the former as a favorable way of including their researchers in the research topics of greatest importance. This arrangement favors the formation of a research elite among the world’s foremost institutions in a given field, in this case Management, thereby confirming the conclusion reached by Adams (2013) regarding the emergence of a fourth era in scientific research polarized by the world’s foremost science elites. Those institutions in less economically developed parts of the world are left behind and do not contribute to the growth of the network’s structure. This situation means that institutions on the periphery tend to be consumers of new knowledge, while their ability to generate new knowledge is limited because they are not party to the field’s main research topics.

Fourth, the indicators of scientific output traditionally used to measure the performance of researchers and institutions are insufficient because academic research is conducted through collaborating groups, which means that the importance of the researchers, groups, institutions and even countries depends on their ability to interact with other institutions, tap their knowledge, and thus improve their performance. Fostering collaboration networks with more central institutions may be a strategy for participating in the solution of the major issues this field faces. This strategy would help to improve the degree centrality of less important institutions.

Degree centrality in the network is therefore a useful indicator for measuring the importance of different institutions, as it is a compound indicator that in a way integrates both scientific output and impact. Furthermore, it adds the joint effect of both simple indicators and considers the role each institution plays within the academic community.

In spite of these contributions, there is still work to be done. It would be pertinent to study the degree centrality and impact of minor institutions in the collaboration network in research into Management in order to compare the results with this study. Likewise, it would be interesting to make a more detailed comparison of the differences between the various rankings that may be drawn up based on simple indicators and degree centrality. It would also be convenient to analyze whether the results are similar in smaller geographic areas or in other academic fields close to or far removed from Management. In addition, this work poses new research questions, such as: Does the impact factor of the journals in which an institution’s papers are published have a bearing on its degree centrality?