
1 Introduction

For reasons of internationalization, competitiveness and others, monitoring the relative positioning of universities in the rankings has become a routine practice. The essence of internationalization is the dissemination and communication of the knowledge created within universities, openness to the world and the enrichment of staff through the encounter with other cultures [1, 2], steering teaching and research output towards an international profile and improving their recognition and visibility [3, 4]. The rankings establish comparisons between universities according to quality or excellence criteria, considering attributes related to internationalization requirements. There is, however, no consensus on what constitutes “quality” or “excellence” in university education and its visibility [4, 5], since it is a multidimensional concept, which complicates the monitoring and control of the activities that may improve positioning. For this reason, several authors have questioned the rankings because they are based on measurements of heterogeneous information [6], while others claim that presence and visibility on the web, and especially that of scientific production, contribute significantly to international positioning [7,8,9].

Currently, the positioning of universities is determined in terms of the quality of education, research and other aspects of academic activity. Rankings have multiplied in recent years and offer a hierarchical ordering of universities based on a consensus assessment methodology. They are used to promote educational policies and encourage quality in higher education, in addition to attracting students and resources. These lists order universities with specific models that consider various bibliometric and webometric indicators. The specialized literature reports various positioning systems for higher education institutions, following heterogeneous evaluation criteria: some focus mostly on research, others on academic quality or on visibility and impact on the web.

Among the best-known rankings are the Academic Ranking of World Universities (ARWU), or Shanghai Ranking, the QS World University Ranking, the SCimago Institutions Rankings (SIR) and the Web Ranking of Universities (Webometrics). Their purpose is to order universities according to indicators intended to reflect institutional capacity, the quality of academic activities, the production and dissemination of research, innovation, and international relations. They are also used for decision-making, from the distribution of research funds to the desired profiles of teachers and researchers. Knowing the characteristics of the rankings offers valuable information for defining strategies for the international positioning of universities. This work describes each of these rankings and compares them with respect to their scope of evaluation and their impact in Latin America.

2 Methodology

Although the QS, SIR-SCimago, Webometrics and Shanghai university rankings differ and have been questioned for their evaluation criteria and heterogeneity, this article presents a descriptive and comparative analysis of their application, evaluation indicators and weights, following these steps:

  1. Data were collected from the respective web pages of the rankings regarding:

    • Institution and country where the ranking is published.

    • Year of beginning of the publication of the ranking.

    • Frequency of publication of the ranking.

    • Year of publication of the last edition.

    • Number of years of published (historical) ranking data.

    • Number of universities positioned worldwide in the last publication.

    • Number of evaluation indicators used.

    • Regarding Latin America: number of Latin American universities positioned, number of Latin American countries with positioned universities in the last edition, and number of universities per Latin American country.

  2. The evaluation criteria and indicators of each ranking were compiled with their respective weights.

  3. The indicators were grouped according to criteria and areas of application to compare the weights assigned to them (a minimal sketch of this grouping follows the list):

    • Academic quality: academic prestige achieved through international awards, and academic reputation.

    • Research: publications produced as research results, dissemination in indexes, and citations.

    • Innovation: technological applications in the form of patents, and publications associated with patents.

    • Community: the university's relationship with the community, measured through its reputation among employers and the web visibility achieved by links to its institutional portal.

    • Capacity of the institution: staff numbers, number of web pages in its portal, and the university's internationalization capacity with respect to its faculty and students.

  4. Identification of the top 10 Latin American universities in each ranking.
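
As an illustration of step 3, the following minimal sketch groups indicator weights by evaluation area for each ranking; the indicator names and weight values are hypothetical placeholders and do not reproduce the figures reported in Tables 1, 2, 3, 4 and 6.

```python
# Minimal sketch of step 3: sum indicator weights per evaluation area.
# All indicator names and weights below are illustrative placeholders,
# not the actual values reported in the rankings' documentation.
from collections import defaultdict

indicators = [
    # (ranking, indicator, area, weight in %)
    ("QS", "Academic reputation", "Academic quality", 40.0),
    ("QS", "Citations per faculty", "Research", 20.0),
    ("QS", "Employer reputation", "Community", 10.0),
    ("SIR", "Scientific output", "Research", 50.0),
    ("SIR", "Patents", "Innovation", 10.0),
    ("Webometrics", "Web presence", "Community", 25.0),
]

def weights_by_area(entries):
    """Aggregate indicator weights per (ranking, area) pair."""
    totals = defaultdict(float)
    for ranking, _indicator, area, weight in entries:
        totals[(ranking, area)] += weight
    return dict(totals)

for (ranking, area), total in sorted(weights_by_area(indicators).items()):
    print(f"{ranking:12s} {area:18s} {total:5.1f}%")
```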

3 Development

3.1 Academic Ranking of World Universities (ARWU) of Shanghai

The Academic Ranking of World Universities (ARWU) was first published in June 2003 by the Center for World-Class Universities (CWCU) of Shanghai Jiao Tong University, China, and is updated annually. ARWU uses six objective indicators to rank the world's universities [9, 10]. Since 2017, universities ranked between 501 and 800 have also been published as ARWU World Top 500 Candidates. The highest-scoring institution is assigned a score of 100, and the rest are calculated as a percentage of this maximum. Table 1 briefly describes its indicators.
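
As a minimal sketch of the normalization just described, the computation can be expressed as follows; the raw scores are invented for illustration only.

```python
# Sketch of the normalization described above: the best-scoring institution
# receives 100 and the others are expressed as a percentage of that maximum.
# The raw scores are invented illustration values, not ARWU data.
def normalize_scores(raw_scores):
    """Scale raw indicator scores so that the maximum becomes 100."""
    top = max(raw_scores.values())
    return {name: 100.0 * value / top for name, value in raw_scores.items()}

raw = {"University A": 812.0, "University B": 406.0, "University C": 203.0}
print(normalize_scores(raw))
# {'University A': 100.0, 'University B': 50.0, 'University C': 25.0}
```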

Table 1. Evaluation indicators applied in the Ranking of Shanghai.

3.2 QS World University Ranking

Published annually since 2004, the QS ranking considers academic, employer, student and international indicators. The 2018 edition contains 959 universities around the world and is based on the opinions of more than 75,000 academics and 40,000 employers, as well as 12.3 million research papers and 75.1 million citations [11]. Among the aspects measured are citations received, the student-to-faculty ratio, the proportion of international students and international faculty, academic reputation, reputation among employers, and staff holding doctorates [12] (Table 2).

Table 2. Evaluation indicators applied in the QS Ranking

3.3 SCimago Institutions Rankings (SIR)

The SCimago Institutions Rankings (SIR) has been produced since 2009 by the Spanish SCimago research group. Its periodicity is annual, and the latest edition, published in 2017, lists 2,966 universities positioned worldwide [12]. It evaluates research only, based on the publications indexed in the Scopus database [11], using the indicators in Table 3.

Table 3. Evaluation indicators applied in the SIR SCimago Ranking

3.4 Web Ranking of Universities-Webometrics

The Webometrics Ranking has been produced since 2004 by the Spanish Higher Council for Scientific Research (CSIC) and is published twice a year, at the end of January and at the end of July. In the January 2018 edition, 12,005 universities were classified worldwide [13]. It ranks universities based on four indicators that assess presence, impact, excellence and openness on the web [14]. Table 4 shows the definition of each of these indicators.

Table 4. Evaluation indicators applied in the Webometrics Ranking

As can be seen, the criteria used to prepare the rankings are far from homogeneous. Table 5 presents the four rankings comparatively. As shown in Table 6, the function with the greatest weight is research, which in the case of the SIR-SCimago Ranking accounts for 50%.

Table 5. World rankings of universities, descriptive table.
Table 6. Indicators and criteria of the university rankings

4 Results and Analysis

Table 5 compares the four rankings in terms of the universities positioned globally and in Latin America, years of publication, frequency, number of evaluation indicators, country of origin and other aspects. The table shows that, to date, Webometrics covers the largest universe of positioned universities, reaching 12,005 in its latest edition, of which approximately 31% belong to Latin American countries. In the remaining rankings the number of positioned universities is lower; in the extreme case of the Shanghai Ranking, scarcely 2% belong to this region of the world. Only the Webometrics Ranking is published twice a year, which allows results to be monitored and measured more frequently. Although SIR-SCimago has the largest number of indicators for positioning universities, all of them are devoted to measuring publications in Scopus-indexed journals, excluding the measurement of academic activities. Table 6 and Fig. 1 show the variety of indicators each ranking uses to evaluate and position universities, classified into the areas of educational quality, research, innovation, presence in the community and capacity of the institution. All four have indicators that evaluate research products, but most exclude innovation as measured by patents obtained, which is considered only by the SIR Ranking.

Fig. 1. Areas of evaluation of the rankings

Table 6 and Fig. 1 also show the number of indicators each ranking uses to weight the classification of universities. These indicators can be subclassified into those dedicated to academic, research and extension activities. All four rankings analyzed include indicators of research activity, such as the number of published articles (counted variously in the journals Nature and Science, in the Science Citation Index-Expanded and the Social Science Citation Index, or in Elsevier's Scopus database) and collaboration among academics.

Academic aspects include the distinctions obtained by academics or alumni, such as Nobel Prizes and Fields Medals, or reputation as measured among employers, among others. However, extension activity is considered only by Webometrics, through its indicators of presence, openness, excellence and visibility on the web. Finally, only the SIR Ranking considers aspects of innovation, including patents, although Webometrics could also include them among its indicators by using the Google Patents tool. It is important to highlight that collaboration between researchers from different institutions is valued positively in the SIR Ranking of SCimago, through its International Collaboration indicator (the institution's output produced in collaboration with foreign institutions). Additionally, only Webometrics takes into account the growing importance of institutional academic repositories.

Table 7 shows the leading Latin American countries in the world rankings, those present in all four rankings studied in this work: Brazil, Mexico, Chile, Argentina and Colombia. Special mention must be made of Puerto Rico, whose universities appear, to a lesser extent, in these rankings. Despite the diversity of metrics used by the four rankings, Brazilian universities always occupy the top positions in Latin America. The average presence of Latin American countries across the Webometrics, SIR-SCimago, QS and Shanghai rankings is: Brazil 40%, Mexico 16%, Chile 10%, Argentina 9%, Colombia 9%, Peru 3%, Ecuador 2%, Puerto Rico 2%, Venezuela 1%, and others 8%.

Table 7. Latin American presence in the world rankings of universities
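
A minimal sketch of how such an average presence can be computed: each country's share of the Latin American universities positioned in a ranking is averaged over the four rankings. The counts used below are illustrative placeholders, not the figures from Table 7.

```python
# Sketch of the average-presence computation: a country's share of the Latin
# American universities positioned in each ranking, averaged over the four
# rankings. The counts are illustrative placeholders, not Table 7 data.
la_counts = {
    "Webometrics": {"Brazil": 1500, "Mexico": 600, "Chile": 350},
    "SIR-SCimago": {"Brazil": 120, "Mexico": 50, "Chile": 30},
    "QS": {"Brazil": 30, "Mexico": 14, "Chile": 10},
    "Shanghai": {"Brazil": 6, "Mexico": 2, "Chile": 1},
}

def average_presence(counts_by_ranking):
    """Average each country's percentage share across all rankings."""
    countries = {c for counts in counts_by_ranking.values() for c in counts}
    shares = {c: [] for c in countries}
    for counts in counts_by_ranking.values():
        total = sum(counts.values())
        for country in countries:
            shares[country].append(100.0 * counts.get(country, 0) / total)
    return {c: sum(s) / len(s) for c, s in shares.items()}

for country, share in sorted(average_presence(la_counts).items(), key=lambda x: -x[1]):
    print(f"{country:10s} {share:5.1f}%")
```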

5 Conclusions

The rankings used worldwide to position universities are heterogeneous and do not evaluate teaching, research and extension activities with the same rigor or weight. However, their sustained use over time has encouraged universities to carry out actions that allow them to climb these scales. The oldest is the Shanghai Ranking, published since 2003, followed by QS and Webometrics, which appeared in 2004. The indicators with which they carry out the evaluation, and ultimately the positioning, are not alike, either in their form of calculation or in their weights.

In all four rankings considered, the largest portion of positioned universities is not from Latin America, ranging from 40% in the QS Ranking down to, at worst, 2% in the Shanghai Ranking. Among the Latin American countries with the highest participation in the four rankings are Brazil, Mexico and Argentina.

Due to the heterogeneity of the metrics, it is not feasible for a university to achieve the same position in every classification. Each ranking assesses aspects that coincide only partially with those of the others, despite the fact that they all aim to evaluate the quality of higher education and to serve as a reference for students when choosing where to begin their studies. Making their similarities and differences known, through this comparison, is the main contribution of this research work.