Introduction

The publication of research results in the social sciences and humanities (SSH) differs from publication in other areas. Research topics are often local or national in scope; the document typology is different, since book chapters and monographs are more frequently used as publication channels and are cited more often than journal articles (Hicks 2004; Nederhof 2006); collaboration habits also differ (work is frequently individual or involves little institutional collaboration); research is often written in the vernacular and is seldom covered in bibliometric databases (Chi 2014), or it may even appear in publication channels that are not covered at all (for example, reports and other publications addressing a regional readership); and SSH researchers write not only for scholarly readers but also for the lay public (Hicks 2004). Additionally, citation behaviour is different in the SSH disciplines, and the age of cited references is remarkably high, so it may not be appropriate to apply a two-year citation window to calculate the impact factor (Glänzel and Schoepflin 1999).

Until a few years ago, it was possible to argue that, unlike publications in other areas of science and technology, many national scientific-scholarly publications in SSH were not indexed in the databases that evaluation committees normally use, and that evaluation agencies therefore lacked specific mechanisms to categorize and rank national scientific-scholarly journals (Moed 2004). However, the mechanisms for disseminating research results in the social sciences and humanities have changed over time. Between 2000 and 2019, publications in the Web of Science Social Sciences Citation Index increased twice as much as those in the Science Citation Index (Clarivate Analytics 2020). In some countries, such as Spain, these changes are evident: between 2005 and 2015, production in the SSCI and A&HCI tripled, growing well above the world average in these databases (Sanz-Casado et al. 2017b), and it has continued to grow in recent years with the indexing of numerous Spanish journals.

Bibliometric indicators of publication output and citation impact have been widely used in the evaluation of scientific-scholarly journals, because such indicators offer high transparency and strong legitimacy by allowing researchers themselves to scrutinize the counts of published papers and their citations and, therefore, the calculation of the indicators (Ahlgren et al. 2012). In addition, there are several sources from which these indicators can be calculated, such as WoS, Scopus and SciELO (Scientific Electronic Library Online), as well as complementary derived sources, such as the Journal Citation Reports and the Scimago Journal & Country Rank.

When examining international experiences in SSH journal classification, one of the first classifications to be mentioned is the European Reference Index for the Humanities (ERIH), an initiative launched by the European Science Foundation (ESF) in 2001 and published in 2007. In France, the Agence d’Evaluation de la Recherche et de l’Enseignement Supérieur (AERES) published a list in 2008 with 6305 journals organized in three classes (A, B and C) based on bibliometric indicators. In Australia, the government launched the Excellence in Research for Australia exercise in 2010, an expert-based classification of journals including more than 20,000 titles. This methodology includes citation analysis from international databases.

Other countries, including Brazil and other South-American countries, Italy (Ferrara and Bonaccorsi 2016), Taiwan, the Netherlands, Norway, Denmark and Sweden, have also set up schemes to classify and rank SSH journals, many of them using bibliometric indicators based on citation analysis (Ahlgren et al. 2012; Ingwersen and Larsen 2014; Hammarfelt and De Rijcke 2015).

In Spain, the RESH is a journal evaluation system with more than 2000 SSH journals classified in four classes (A, B, C and D) in addition to an excellence class (Giménez-Toledo et al. 2007). Another classification system, CIRC (Clasificación Integrada de Revistas Científicas), categorizes more than 20,000 journals in four categories (A+, A, B, C) (Torres-Salinas et al. 2010).

Certainly, all of these systems have made an important contribution to the criteria for evaluating SSH journals. However, they also have some limitations. In some cases, a lack of continuity in information collection prevents us from having up-to-date information; in other cases, the classification allows journals to be assigned to a certain group but not ranked within each group. On the other hand, although the use of bibliometric indicators has been highly criticized, numerous studies show that they correlate with the results obtained through peer review (Waltman et al. 2011; Bornmann and Leydesdorff 2013; Traag and Waltman 2019).

Despite the difficulties and limitations of bibliometric indicators in the evaluation of scientific activity, which have been set out in several forums and in the DORA Declaration and the Leiden Manifesto (San Francisco Declaration on Research Assessment 2012; Hicks et al. 2015), these indicators, when used with academic criteria, continue to be of interest and utility in evaluating the quality of scientific-scholarly journals and in their categorization. Likewise, the use of databases such as Web of Science or Scopus is justified because, as mentioned by Michels and Schmoch (2012), the policies of opening up these sources to an increasing number of regional journals allow international communities to access content with local perspectives or focused on topics of regional interest, especially in the social sciences and humanities.

Considering this context, our proposal aims to show the results of applying a bibliometric methodology to classify journals that have already passed a quality threshold (measured both quantitatively and qualitatively), which is described in the next section.

Evaluation of Spanish social science and humanities journals, the FECYT Quality Seal

There is currently a growing need to evaluate national scholarly journals in order to understand the role they play within a country’s scientific system. To this end, it is essential to have journal evaluation instruments that are precise, adhere to clear and rigorous methodologies, enjoy consensus among the community and follow well-defined criteria that enable journals’ scientific quality to be analysed and known.

Traditionally, the scientific-scholarly journal has been a widely used medium for the dissemination of research in the natural, experimental, biomedical and other sciences. However, nowadays other disciplines, in areas such as the social sciences and the humanities, are making increasing use of scientific-scholarly journals. In Spain this change has been very evident and has resulted in a significant increase in the number of Spanish scientific-scholarly journals, mainly in SSH. It is therefore of great interest to evaluate these sources to ensure the quality of their contents, as well as to prepare a classification that differentiates journals according to how well they fulfil the different measurement criteria used.

Along these lines, since 2006 the Spanish Foundation for Science and Technology (FECYT) has been running the ARCE project (FECYT 2018), which aims to contribute to the professionalization and internationalization of Spanish scientific-scholarly journals.

This initiative includes the “Evaluation of the Editorial and Scientific Quality of Spanish Journals”, whereby journals can earn a quality seal. After more than a decade of providing this service, the FECYT Quality Seal has become a first-rate distinction for the journals that hold it.

The estimated population of national journals in Spain is 1818, and FECYT has evaluated 57%. Of these, 396 Spanish journals have obtained the Quality Seal in one of the six ordinary calls made until 2019. This figure represents 21% of the total number of national journals.

The success rate of FECYT’s journal evaluation processes has progressively increased, from 12% in call I to 44% in call VI. These figures suggest that journals have striven hard to improve their editorial processes and have made the editorial and scientific quality indicators required by FECYT part of their day-to-day work, an achievement that has involved a real change of editorial strategy for many journals and publication services.

Forty percent of the journals with the FECYT Seal (158 in total) belong to areas in the social sciences, 44.9% (178 in total) belong to areas of the humanities, 7.8% (31 in total) are journals of experimental sciences, and 7.3% (29 in total) are journals in the life science areas.

Because of the volume and profile of accredited journals, the research community and the many actors involved in academic publication have suggested that FECYT needs to provide teaching and research merit assessment agencies with a tool that allows them to incorporate publications in Spanish scholarly journals bearing the FECYT Seal as an indication of the quality of researchers’ scientific work. This call has been echoed by many of these agencies, which often demand better systems for accrediting the quality of the work published by social and human science researchers.

Once the accreditation process was consolidated, a new challenge arose: to provide evaluation agencies with a list of the highest quality Spanish journals in order to use publication there as a badge of merit.

For this purpose, FECYT convened a committee of experts to generate a methodology to categorize journals already recognized with the Quality Seal and to offer a list (ordered by merit) in each scientific category, especially in social sciences and humanities.

The authors of this paper have been working along these lines since 2015. The first edition of this work was published in 2017 under the title “Guía metodológica para la evaluación de revistas” (Methodological Guide for Journal Evaluation) (Sanz-Casado et al. 2017a). The methodology was also presented at national and international professional congresses and forums and received important contributions from experts in the field (Sanz-Casado 2017, 2018; Sanz-Casado et al. 2017b; Aleixandre-Benavent et al. 2018, 2019; De Filippo et al. 2019). In 2018 FECYT had a panel of external evaluation experts work together to revise and strengthen the methodology based on the changes that had taken place in agencies’ evaluation criteria. The final result is included in a second edition of the Guide published in 2019. This paper presents this final methodology, which is being used by national agencies for the evaluation of researchers in different disciplines of social sciences and humanities.

Methods

For the evaluation and classification of accredited journals under the FECYT Quality Seal, we propose a methodology based on two dimensions: analysis of journals’ impacts and analysis of journals’ visibility.

To obtain indicators in each of these dimensions, the following sources of information have been used:

  • Science Citation Index (SCI)

  • Social Sciences Citation Index (SSCI)

  • Arts & Humanities Citation Index (AHCI)

  • Emerging Sources Citation Index (ESCI)

  • Journal Citation Reports

  • Scopus

  • SciELO

  • Google Scholar Metrics

  • Information Matrix for the Analysis of Journals (MIAR)

From these sources, we obtain indicators of coverage in each database, citations, H-index and quartile. It should be noted that only the journals that already had the FECYT Quality Seal have been considered (336 Spanish journals, 21% of the total number of national journals in social sciences and humanities). This is because it has been considered fundamental to start with a selection of journals that meet basic quality criteria. This particular work has been carried out with journals of social sciences (158) and humanities (178).

The use of international databases as the main source of information is justified because in Spain a major effort has been made to promote the quality of national journals, which has led to the indexing of numerous journals in these sources. According to data from 2018, 87 Spanish journals were indexed in the SSCI and A&HCI and 583 in the ESCI (of which 500 are from SSH disciplines) (FECYT 2020). Likewise, in the 2019 edition of the SJR, 659 Spanish journals were indexed, of which 261 correspond to humanities disciplines and 288 to the social sciences (SJR 2020).

Proposed model for journal categorization: dimensions and indicators

The methodology presented is mainly based on the consideration of the impact and visibility of the journals. The selection of the dimensions and indicators, as well as the weight assigned to each one, was widely discussed by a committee of experts in scientometrics from various institutions, as well as with representatives of the academic community from various disciplines and with policy makers. Without a doubt, any evaluation process is controversial, and several years of data testing have been necessary to reach a consensus regarding the method used.

The selection of the “citation window”, that is, the number of years considered when quantifying citations, is a fundamental decision in SSH, given the slower pace of the citation process compared to disciplines in the natural sciences, life sciences, etc. In this sense, some studies have concluded that a period of 2 years may be enough to calculate the impact factor in areas such as biomedical studies (Campanario 2011), physics and some life sciences (Adams 2005). However, in SSH and mathematics, where citation dynamics are slower, this period is too short for the majority of publications to be recognized and cited (Vanclay 2012; Campanario 2011; Waltman and Van Eck 2012; Dorta-González and Dorta-González 2013). In 2007 the Journal Citation Reports of the Web of Science introduced a new indicator for the journals it covers, the 5-year journal impact factor, which uses a citation window of 5 years and complements the two-year short-term impact factor (Jacsó 2010). Given the variability of citation windows amongst subject areas, this study has used a citation window of 5 years, since a five-year period is better suited to citations in SSH than the 2 or 3 years that are enough in fields such as biomedical studies.
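For reference, the five-year impact factor of a journal for a given year y can be written as follows, where c(y, y − i) denotes the citations received in year y by the items the journal published in year y − i, and p(y − i) denotes the number of citable items it published in year y − i:

    \mathrm{IF}_{5}(y) = \frac{\sum_{i=1}^{5} c(y,\, y-i)}{\sum_{i=1}^{5} p(y-i)}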

The indicators obtained from the different sources considered in this methodology have been grouped according to their characteristics into two dimensions: impact and visibility.

Both dimensions go into each journal’s final score, which is composed of the percentage reached in each indicator. Several preliminary exercises were conducted to fix the percentage values, which were later validated by experts (Sanz-Casado et al. 2017a).

Dimension 1: Impact

The first dimension includes the indicators directly linked to citations. This dimension represents 80% of a journal’s score, and it contains three groups of indicators: citations, H-indexes and quartiles.

Citations

  • Citations in SCI, SSCI and AHCI. To collect the number of citations that the selected Spanish journals have received from publications indexed in these databases, we used the “cited reference search” tool. The name and variants of the journal were searched for in the “cited work” field. The search was limited to the last 5 years in “cited year(s)”.

  • Citations in Scopus. To obtain the citations of a journal in Scopus, we access the “advanced search” function, and in the search box we enter the name of the journal in parentheses preceded by the tag REFSRCTITLE. The search period is limited to the last five years by using the REFPUBYEAR tag (an example query is given after this list).

  • Citations in SciELO. The SciELO initiative was created in response to the fact that a great many Latin-American scholarly journals are not indexed elsewhere, as well as to the need for these journals to be included in quality bibliographic systems and the need for access to full-text articles. Currently, this database is the only instrument for measuring the impact of Latin-American scholarly journals, because most of the publications concerned are not indexed in other important databases, such as the WoS Core Collection and Scopus (Alfonso et al. 2009). The search methodology is similar to the search for citations in SCI, SSCI and AHCI, but here the first selection is the SciELO Citation Index. The search was also limited to the last five years.

  • Citations in the Emerging Sources Citation Index (ESCI). ESCI is a database composed of journals that are being evaluated for their upcoming incorporation into the main collections of the Web of Science Core Collection. The search method is like that of SCI, SSCI and AHCI, but it limits the search to the ESCI database.
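By way of illustration, a search for the citations received by a hypothetical journal over the last five years would take roughly the following form in the Scopus advanced search box, combining the two tags mentioned above (the journal title here is invented; in practice, title variants must also be searched, as in the WoS cited reference search):

    REFSRCTITLE("Revista Española de Ejemplo") AND REFPUBYEAR > 2014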

The citations obtained from the different sources are added together, and the resulting value accounts for 60% of the journal’s total score.

In this methodology, all citations received by the journals in the four databases have been collected. For the final count, if a journal has been cited in more than one of these databases, the citation is counted more than once. In the same way, the Secondary Composite Index Broadcasting (ICDS) calculated by MIAR increases when a journal is included in several databases.

Self-citations have not been eliminated because the number of journals indexed in the databases analysed is small.

H-index

  • H-index in WoS. The H-index is considered a robust indicator for determining the impact of scientific-scholarly publications, researchers, etc., because its value is not significantly affected either by uncited works or by highly cited ones (Braun et al. 2006). This indicator tends to reflect a sustained scientific effort over a whole trajectory (Braun et al. 2006). Furthermore, one of its strengths is that it combines the effect of quantity (number of published articles) with the effect of quality (number of citations received) (Norris and Oppenheim 2010). The H-index of each journal has been obtained by searching the WoS (SO = “name of the journal”) limited to the last 5 years (a short computation sketch is given after this list).

  • H-index in SJR. The Scimago Journal & Country Rank (SJR) database offers bibliometric information on more than 18,000 academic and professional journals, based on data extracted from Elsevier’s Scopus database. The H-index in the Scimago Journal & Country Rank is found by entering the journal’s name or ISSN in the application’s search engine.

  • h5-Index in Google Scholar Metrics. This indicator is obtained by consulting the Google Scholar Metrics website.

The sum of the H-index values accounts for 10% of each journal’s score.
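To make the indicator concrete, the following minimal sketch (in Python, with invented citation counts) shows how an H-index is computed from a list of per-article citations; it illustrates the definition only, not the retrieval procedure of any of the databases above:

    def h_index(citations):
        """Largest h such that h articles have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Five articles cited 10, 6, 5, 3 and 1 times yield an H-index of 3.
    print(h_index([10, 6, 5, 3, 1]))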

Quartiles

  • Quartile in JCR. The quartile of a journal is considered an indicator of quality and allows us to know which of the publications is best valued within each WoS category. Since a journal may be included in more than one WoS category, we used only each journal’s highest quartile. If the journal is in the first quartile (Q1), where it has the maximum visibility, its score is 100 points. If it is located in the second quartile (Q2), its score is 75. If it is in the third quartile (Q3), its score is 50, and lastly, in the fourth quartile, the journal’s score is 25 points.

  • Quartile in SJR. One of the measures the SJR offers, like the JCR, is the quartile occupied by journals in their respective subject areas (Jacsó 2010; Mañana Rodríguez 2014). The quartile in the Scimago Journal & Country Rank is found by entering the journal’s name or ISSN in the application’s search engine (http://www.scimagojr.com/). As in the previous case, if the journal is located in the first quartile (Q1), its score is 100. If it lies in the second quartile (Q2), its score is 75, and if it is in the third quartile (Q3), its score is 50; lastly, in the fourth quartile, the journal’s score is 25.

The sum of the quartile values accounts for 10% of the journal’s score.

Dimension 2: Visibility

The second dimension is visibility, understood as the journal’s presence in databases of international prestige. The following indicator has been considered for this purpose.

  • Secondary Composite Index Broadcasting (ICDS) of MIAR (Information Matrix for the Analysis of Journals). MIAR is a database created at the University of Barcelona (http://miar.ub.edu/about-miar) to determine the visibility of scientific-scholarly journals based on their presence in national and international scientific databases and in multidisciplinary repertoires. MIAR groups journals into major scientific areas that are in turn subdivided into more specific fields. Journal visibility is quantified through the ICDS, an indicator that shows a journal’s visibility in different scientific databases of international scope or in repertoires for the evaluation of periodicals. More than 100 databases (WoS, Scopus, MEDLINE, ERIH PLUS, etc.) are consulted. A high ICDS means that the journal is present in different sources of information of international relevance. A number of criteria go into calculating the ICDS (http://miar.ub.edu/about-icds).

As mentioned in the database’s methodology, changes have been made since 2016 in how the indicator is calculated. The figure is now rounded to the first decimal place. Two changes have been made in the score given for journals’ presence in bibliographical databases: 3 + 2 points are given when a journal is covered by two or more abstracting and indexing databases, without distinction between specialized and multidisciplinary ones, and an additional +1 point is granted to journals that are indexed simultaneously in Scopus and in the classic WoS indexes (Arts & Humanities Citation Index, Science Citation Index Expanded and Social Sciences Citation Index) (MIAR 2018).

The sum of the figures yielded by the MIAR ICDS accounts for 20% of the journal’s score.

Table 1 shows the indicators and the weighting of each.

Table 1 Weighting of indicators by dimension

Calculation of the indicator for the categorization of Spanish journals

The first step in calculating the final score of each journal, and therefore its position within a sorted list, is to group all journals by subject area. The indicators are then obtained according to the following criteria (a worked sketch in code follows the list):

  (A) Citation indices

    • Retrieval of the number of citations of each journal from each database.

    • Sum of all citations received by each journal.

    • Sorting of the journals in each subject category in descending order according to the total number of citations received.

    • Rescaling of the citation values of each journal according to the maximum weight of the dimension (60%), so that the most-cited journal is assigned 60 points and the values for the rest of the journals are calculated in proportion to that maximum.

  (B) H-index

    • Finding of the H-index values of each journal from the three databases (WoS, SJR and Google Scholar Metrics).

    • The H-index value of each journal in each database is divided by the highest value obtained by any journal in that database, and the result is multiplied by one third of the total weight of this indicator (10%).

    • The final H-index score of each journal is the sum of the H-index values that the journal has obtained in each database.

  (C) Quartiles

    • Finding of the position of the journals in the quartiles of each database considered.

    • Assignment of normalized values to each quartile: Q1 = 100, Q2 = 75, Q3 = 50 and Q4 = 25.

    • The normalized quartile value of each journal in each database is divided by the highest value obtained by any journal in that database, and the result is multiplied by one half of the total weight of this indicator (10%).

    • The final quartile score of each journal is the sum of the values that the journal has obtained in each database.

  (D) Visibility

    • Acquisition of ICDS values from MIAR for each journal (http://miar.ub.edu/).

    • Sorting of journals in descending order according to their ICDS value.

    • Rescaling of each journal’s ICDS values according to the maximum weight of the visibility dimension (20%): the journal with the highest ICDS has 20 points, and the rest of the journals are assigned points with respect to that maximum value.
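As a summary of steps (A) to (D), the sketch below (in Python) reproduces the weighting scheme for two invented journals. The journal names, citation counts, H-indexes, quartiles and ICDS values are illustrative only, not data from the study; the weights (60 points for citations, 10 split in thirds across the three H-index sources, 10 split in halves across the two quartile sources, and 20 for the ICDS) follow the criteria above:

    # Sketch of the scoring scheme in steps (A)-(D); all data are invented.
    Q_POINTS = {1: 100, 2: 75, 3: 50, 4: 25}  # normalized quartile values

    # journal: (total citations, H-indexes per source, quartiles per source, ICDS)
    journals = {
        "Journal A": (450, {"wos": 12, "sjr": 10, "gsm": 15}, {"jcr": 2, "sjr": 1}, 11.0),
        "Journal B": (120, {"wos": 5, "sjr": 7, "gsm": 9}, {"jcr": 4, "sjr": 3}, 9.6),
    }

    def rescale(value, maximum, weight):
        """Proportional rescaling: the highest observed value gets the full weight."""
        return weight * value / maximum if maximum else 0.0

    max_cites = max(c for c, _, _, _ in journals.values())
    max_h = {s: max(h[s] for _, h, _, _ in journals.values()) for s in ("wos", "sjr", "gsm")}
    max_q = {s: max(Q_POINTS[q[s]] for _, _, q, _ in journals.values()) for s in ("jcr", "sjr")}
    max_icds = max(i for _, _, _, i in journals.values())

    for name, (cites, h, q, icds) in journals.items():
        score = rescale(cites, max_cites, 60)                               # (A) citations, 60%
        score += sum(rescale(h[s], max_h[s], 10 / 3) for s in h)            # (B) H-index, 10%
        score += sum(rescale(Q_POINTS[q[s]], max_q[s], 10 / 2) for s in q)  # (C) quartiles, 10%
        score += rescale(icds, max_icds, 20)                                # (D) visibility, 20%
        print(f"{name}: {score:.2f}")  # Journal A reaches 100.00 by construction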

Final Score

Each journal’s final score is the sum of the values obtained in each of the previous phases: citations, H-index, quartiles and ICDS. The maximum attainable score is always 100.
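In formula form, with each component already rescaled to its weight as described above:

    S = S_{\mathrm{cit}} + S_{H} + S_{Q} + S_{\mathrm{ICDS}}, \qquad 0 \le S_{\mathrm{cit}} \le 60, \quad 0 \le S_{H} \le 10, \quad 0 \le S_{Q} \le 10, \quad 0 \le S_{\mathrm{ICDS}} \le 20.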

Quartile distribution

Finally, once the final scores of the journals in each field have been calculated, the journals are organized by quartiles, starting with a homogeneous division of each field into four equal parts, considering the total number of journals and the final score of each. The resulting quartiles are as follows (a brief sketch in code follows the list):

  • Quartile A contains 25% of the journals, those with the highest scores.

  • Quartile B contains 25% of the journals, those with high average scores.

  • Quartile C contains 25% of the journals, those with low average scores.

  • Quartile D contains 25% of the journals, those with the lowest scores.
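A minimal sketch of this division, again with invented scores (the methodology does not specify how ties or fields whose journal counts are not divisible by four are handled, so the floor-based rule below is one reasonable reading):

    # Assign quartile labels A-D from final scores (invented values).
    scores = {"J1": 96.7, "J2": 81.2, "J3": 70.3, "J4": 64.5,
              "J5": 55.0, "J6": 40.1, "J7": 33.8, "J8": 12.0}

    ranked = sorted(scores, key=scores.get, reverse=True)
    quartile = {name: "ABCD"[min(i * 4 // len(ranked), 3)]
                for i, name in enumerate(ranked)}
    print(quartile)  # J1, J2 -> 'A'; J3, J4 -> 'B'; J5, J6 -> 'C'; J7, J8 -> 'D'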

Subject classification

When applying the methodology described above, it is essential to have a subject classification that serves as a reference for calculating each journal’s score.

In the FECYT call, journals are organized according to the four divisions used by the Quality Seal: experimental sciences, life sciences, social sciences and humanities. However, in order to achieve greater precision and comparability, additional consideration has been given to the fields of evaluation established by the Spanish National Commission for the Evaluation of Research Activity (CNEAI) (BOE-A-2014-12482), assigned to each publication according to its subject matter. The scheme comprises the following 11 fields, allowing a higher level of specialization to be achieved:

  • Field 1. Mathematics and physics

  • Field 2. Chemistry

  • Field 3. Cellular and molecular biology

  • Field 4. Biomedical sciences

  • Field 5. Natural sciences

  • Field 6. Engineering and architecture

  • Field 7. Social, political, behavioural and education sciences

  • Field 8. Economic and business sciences

  • Field 9. Law and jurisprudence

  • Field 10. History, geography and arts

  • Field 11. Philosophy, philology and linguistics

Results

The model was tested using 336 Spanish journals bearing the FECYT Quality Seal from the following fields: social, political, behavioural and education sciences; economic and business sciences; law; history, geography and arts; philosophy, philology and linguistics.

Table 2 contains an example with the 28 journals included in the Law field. The table shows the absolute values of each dimension. Revista Española de Derecho Constitucional can be seen to have reached the highest values, since it is a journal included in WoS and Scopus; its large number of citations comes mainly from other journals indexed in both databases.

Table 2 Law journal citation indicators

Table 3 shows the H-index values for the Law journals and their weighting calculations. Revista de Estudios Políticos stands out with high H-index values in WoS and Scopus. Revista Española de Derecho Constitucional moves to position number 6.

Table 3 Law journal H-index indicators

In terms of quartiles, Revista Española de Derecho Constitucional again leads the rankings, because it lies in Q1 in SJR and Q2 in JCR (Table 4).

Table 4 Law journal quartile indicators

In visibility, Revista Española de Derecho Constitucional and Revista de Estudios Políticos earned the maximum of 11 points (Table 5).

Table 5 Law journal visibility indicators

Once the indicators of each dimension have been calculated, we compute the total score and order the journals according to their final values. Lastly, once the final scores of the journals in each field have been calculated, they are organized by quartiles, starting with a homogeneous division of each field into four equal parts considering the total number of journals and the final score earned by each.

Table 6 shows the Law journals. Revista Española de Derecho Constitucional holds first position with 96.67 points.

Table 6 Final Score for Law journals

Discussion

This work integrates indicators of impact and visibility to obtain a model for SSH journal classification and categorization based on journal-level indicators. The sources and indicators used to build this model have the advantage of having been tested, with good results, by both evaluation agencies and researchers in scientific evaluation. Furthermore, the collection of journals used had been accredited beforehand by FECYT (Spanish Foundation for Science and Technology).

Unlike other methods used for the evaluation of SSH journals in Spain, this methodology has the following advantages:

  • analysis of the journals that have already reached a quality level (earned the FECYT Quality Seal)

  • annual updates

  • consideration of several information sources

  • large citation window

  • collection of citations from mainstream journals, even if the journal itself is not indexed in an international database

  • classification of journals in quartiles

  • positioning within each quartile

  • inclusion of a visibility index based on the presence of the journals in several databases

Journal rankings based on bibliometric indicators are sometimes seen as inadequate. The limitations of existing bibliometric databases in the case of SSH have been carefully discussed in the literature (Nederhof and Zwaan 1991; Nederhof and Noyons 1992; Archambault et al. 2006; Nederhof 2006; Hicks and Wang 2011; Hellqvist 2010; Linmans 2010), and the very use of citations as the basis for journal ranking in SSH, whatever the specific metrics and database adopted, has been the object of severe criticism (Moed et al. 2002; Campbell et al. 2006; Jarwal et al. 2009).

One problem in ranking journals by citations is that of judging the statistical significance of the difference between two scores. Nevertheless, Elkins et al. (2010) calculated the pairwise correlations between the ISI journal impact factor and three other journal citation indices and found the correlations of the four indices to range from strong to very strong, providing evidence of convergent validity, that is, closely related average journal citations per article. From a purely statistical perspective, it does not seem to matter which index is used to capture citation impact, despite substantial differences in how the different citation measures are constructed. Another problem is that the numerical difference between the impact factors of two journals can be so small that it is unlikely to be statistically significant, and this fact can affect their order in journal rankings (Moosa 2016).

Vasen and Lujano Vilchis (2017) compared the guidelines implemented in three Latin-American countries for classifying journals in the social sciences. They concluded that the observed tendencies show an evolution of journal evaluation systems towards a model based on citation indexes and a relative reduction in the importance of other databases linked to specific fields of knowledge or particular regions.

Despite these drawbacks, management and decision making in SSH have benefited from the role played by bibliometric analysis studies, and several works have been published employing bibliometric indicators to evaluate journals and to build journal classifications (Giménez-Toledo et al. 2007; Ferrara and Bonaccorsi 2016; Ahlgren et al. 2012; Ingwersen and Larsen 2014; Hammarfelt and De Rijcke 2015).

Logically, this methodological proposal is not exempt from limitations. The application of bibliometric methods to classifying and ranking national SSH journals can pose problems and has sometimes yielded unsatisfactory results, to the point that even bibliometricians warn against applying bibliometric methods to these areas (Nederhof and Zwaan 1991; Glänzel and Schoepflin 1999; Ochsner et al. 2017). First, it is important to keep in mind that the number of citations is only an approximate estimate of a journal’s scientific relevance, although it is currently a commonly accepted indicator, or indirect method, for measuring research quality. Second, the international databases used in this work have a coverage bias that penalizes national and regional journals. However, the practice of integrating several different sources enables this limitation to be partially overcome.

In the development of this methodology, the counting of citations has been one of the most controversial aspects, with several methodological approaches considered as starting points. In the end, the citations in all four databases were counted, on the grounds that the number of databases in which a journal is indexed is an indicator of its greater or lesser visibility (for both cited and citing journals).

Another argument that could justify the use of multiple databases is the different coverage of journals in each discipline. In the case of Law, while the number of journals in WoS during the period analysed was 150, in Scopus this number was considerably higher, up to 724 journals. In our study, there were only two Law journals indexed in WoS and nine in Scopus during the period analysed, and only two journals appeared in both databases. Other studies that have analysed the overlap of citations in these two databases have found low values, allowing the authors to say that “the two databases analysed serve as a complement to each other, since in both a large number of unique citations were detected in a very short and recent period” (Ortega Cuevas et al. 2013).

On the other hand, it has been observed that self-citation has practically no effect on the final results, because the percentage of journals indexed in these international databases is low (only 2 of the 28 Law journals are in the Web of Science). Moreover, indexed journals would be sanctioned by the database itself if they exceeded the permitted self-citation threshold. The scientific literature also shows that, in general, self-citation rates in the social sciences and humanities are low: several studies have obtained figures ranging from 3 to 6% in these areas (Snyder and Bonzi 1998). Other studies report higher rates: self-citation in SSH higher than in Engineering (Hyland 2003), world average rates of 17% in the social sciences and 19% in the humanities (Glänzel and Thijs 2004), and 14.9% in the journals of three social science disciplines (economics, psychology and political science) (Tsay 2009).

In terms of visibility, an important discussion has arisen around the national or international scope of journals. As recent studies mention, the incorporation of journals into international databases does not imply that their audience is more international; on the contrary, the vast majority of citations may come from the same country as the journal’s authors (Moed et al. 2020). In this regard, it is important to mention that our proposal does not treat “internationalisation” as a synonym for “visibility”. The methodology developed values the diversity of sources in which a journal is indexed or from which its citations come, and it includes other indicators, such as MIAR’s ICDS, that are based on indexing in several databases. In this methodology we neither differentiate audiences as national or international nor consider the international audience to be more relevant. On the contrary, we know that in the social sciences and humanities audiences tend to be local, and this is not a limitation but a characteristic of this field of knowledge.

In the case of other indicators, such as the H-index, the use of different sources is also justified, because the correlation between them is usually moderate and there is little duplication of journals. This explains cases such as that of Constitutional History. Electronic Journal of Constitutional History, which is in the first quartile of the SJR but does not appear in the JCR.

On the topic of subject classification, although the level of disaggregation used seems adequate for classifying journals in the social sciences and humanities, it is not always easy to assign a journal to a discipline. For the purposes of this evaluation, we assigned subjects according to the decision of the editors of each journal. However, this classification may not be enough. The subject classifications most frequently used in bibliometric studies to evaluate journals are the categories of the Web of Science and Scopus databases. Subject classification is no minor issue: not only is it used to assign journals to disciplines, but the Spanish Ministry of Education, Culture and Sport also uses it to assign teaching staff to four large areas and 30 fields of knowledge. Furthermore, evaluation agencies carry out their own specific subject aggregations for the evaluation of projects (ANEP) or teachers (ANECA). The proposed journal evaluation methodology is therefore meant to be flexible and easily adaptable to different subject classifications as required; an ad-hoc classification can be run based on the identification of the body or institution that wishes to have journals evaluated. It is worth remembering that this classification can be applied to other disciplines and to different sets of journals, in order to adapt to the needs of each institution. Thus, once the subject classification to be used has been established, all journals are analysed considering the weightings mentioned in the methodology (80% of their score in the “impact” dimension and 20% in the “visibility” dimension).

In conclusion, publication classification systems are of great importance, because they guide researchers to the journals in which it is desirable to publish. The publication patterns in SSH are different from those of other scientific areas, so journals must be evaluated and ranked with a rigorous methodology that considers a high number of criteria from accredited sources. The application of bibliometric methods to the evaluation and ranking of scientific scholarly journals in SSH has been found to be a valid approach. The integration of several measures and indicators of impact and visibility lends accuracy and robustness to the evaluation. In addition, in counting citations, we have considered a citation window of 5 years to be more appropriate for SSH. The proposed classification holds great interest for evaluation agencies endeavouring to ascertain the visibility and impact of Spanish SSH journals, and this classification system could be applied in other countries using similar or alternative sources.

Conclusions

The methodology development process and the results enable us to draw some conclusions:

  • National scientific scholarly journals play an important role in the transmission of knowledge, especially in the social sciences and humanities. However, for national journals to fulfil this role adequately, they must achieve high standards of quality similar to those of the international journals included in prestigious databases.

  • It is essential for there to be national bodies that promote and evaluate the process of accrediting the quality of national scientific scholarly journals. This process will increase journal quality and contribute to improving journal impact and visibility.

  • It is essential to analyse the quality of journals, based on their evaluation and organization, using procedures that involve a rigorous proven methodology.

  • The classification model proposed here can contribute to improving the quality of Spanish scholarly journals by helping them to move towards leadership positions in their subject category.