Introduction

Many factors affect journal choice for manuscript submissions, such as subject coverage, target audience, publication language, journal impact factor, acceptance rate, publication costs, journal reputation, peer review, and journal administration efficiency (Bavdekar and Save 2015; Shokraneh et al. 2012). Beyond the basic requirement that the research topic of a manuscript correspond with the subject coverage of the selected journal, journal impact factor is a major factor influencing journal choice (D’Souza et al. 2018; Wijewickrema and Petras 2017). Journals with high impact factors are highly valued. Journal Citation Reports (JCR) uses the journal impact factor to rank journals in a specific subject category, and JCR journals have been considered international journals of excellence. Although impact factor does not equal quality, and journal rankings based on expert judgment and on impact factors have produced inconsistent results (Maier 2006), researchers in certain countries who publish in JCR-ranked journals with high impact factors gain advantages in obtaining research credits and rewards (Chou et al. 2013; Shao and Shen 2012). Because some institutions overemphasize journal impact factor (Butler et al. 2017), publishing in high-impact JCR-ranked journals has become a goal for researchers striving for excellence in research performance and rewards (Paulus et al. 2015).

JCR journal rankings are determined by both journal impact factor and the subject categories assigned to journals. Because of inherent differences among disciplines, each discipline represented by a subject category has its own journal ranking, and the impact factors of journals from different disciplines cannot be compared without normalization (Dorta-González et al. 2014; Glänzel and Schubert 2003; Grzybowski 2009). Journals under the same subject category are expected to address similar research topics. If they do not, journal rankings in a given subject category negatively influence the research evaluation of individual researchers. When the journal list for a given discipline includes journals weakly associated with that discipline, particularly journals with high impact factors, these low-relevance or even irrelevant journals gain an advantage in ranking. Thus, the more low-relevance journals with high impact factors become associated with a specific discipline, the fewer relevant journals (journals specifically selected by researchers from that discipline for publication) obtain high rankings. Questionable journal rankings disadvantage researchers in promotion, research evaluation, and research rewards. In addition, journals are widely used as proxies for the characteristics of a discipline: studies of a discipline’s characteristics rely on the journals classified in its subject category, and low-relevance journals directly reduce the precision with which those characteristics are reflected. Notably, journal classification matters and has academic utility (Glänzel and Schubert 2003; Pudovkin and Garfield 2002).

JCR assigns each journal to at least one subject category to facilitate information retrieval (Leydesdorff and Rafols 2009). According to experts, at least one-third of the journals assigned by JCR to the subject category of “information science and library science”, here called library and information science (LIS) journals, are non-LIS journals (Abrizah et al. 2015). Notably, the subjectivity of expert judgment has been criticized because experts’ views are limited to their specific professional backgrounds (Wang and Waltman 2016). Thus, this study used bibliometric indicators from the perspective of authorship to examine the disciplinary attributes of LIS journals indexed by JCR. In addition to examining whether JCR includes non-LIS journals in the subject category of “information science and library science”, the focus is on whether authorship-related indicators are useful for identifying LIS journals.

No discipline is completely independent; all borrow knowledge from other disciplines (Hessey and Willett 2013; Sedighi 2013). Citation analysis is the most widely used approach to exploring the connections among disciplines. From the perspective of literature citation, an independent discipline is believed to cite more publications from its own discipline than from any other (Urata 1990). Furthermore, publications in a specific discipline should be cited primarily by publications within that discipline (Wang and Waltman 2016). Citation analysis studies have reported that LIS publications rely primarily on other LIS publications and have the greatest influence on LIS publications (Buttlar 1999; Chang and Huang 2012; Chen et al. 2018; Meyer and Spencer 1996; Odell and Gabbard 2008). Although authorship analysis is another approach for observing interdisciplinarity, only a limited number of studies have used it to demonstrate which disciplines’ authors contributed to publications within a given discipline (Abramo et al. 2012; Chang 2018a, b; Chang and Huang 2012). Therefore, this study expanded the application of authorship analysis, using authorship-related indicators to identify LIS journals. Based on the aforementioned studies, two assumptions were made: (1) authors affiliated with LIS-related institutions dominate LIS journal articles and (2) most articles published in LIS journals are written by at least one LIS author.

In addition to investigating the proportion of LIS authors contributing to LIS journals and the proportion of articles by LIS authors per journal, this study used a bibliometric method to determine whether journals with a weak LIS association exist in the LIS journals covered by JCR, and in particular, to establish whether non-LIS journals tend to have higher impact factors than LIS journals have. Quantitative figures measured by indicators could further assist with stratifying LIS journals into various levels, such as strongly and weakly LIS-oriented journals. Thus, the focus was on whether LIS author indicators could be used to identify typical LIS journals.

Numerous studies using authorship or citation analysis to investigate the interdisciplinary characteristics of LIS have neglected the existence of non-LIS journals among the LIS journals classified by JCR (Abramo et al. 2012; Chen et al. 2018; Walters and Wilder 2016; Zhang et al. 2018). Although the study of Chang (2018a) on LIS interdisciplinarity recognized the negative effect of non-LIS journals on result precision and focused on a subset of strongly LIS-oriented journals from the subject category of “information science and library science” in JCR, that author aimed to identify trends in the proportions of LIS and non-LIS authors. The purpose of this study differs: to demonstrate the negative effects of non-LIS journals on the research evaluation of LIS researchers and the effectiveness of LIS author-related indicators for identifying LIS journals. The research questions addressed in this study are as follows:

RQ1

Are LIS authors the primary contributors to LIS journals?

RQ2

Are journals weakly associated with LIS classified as LIS journals by JCR? If so, do those journals have higher impact factors than other LIS journals do?

RQ3

What should be the authorship threshold for consideration as an LIS journal?

Literature review

Journal lists for a given discipline have been generated through expert evaluation and citation-based analysis. Publications in prestigious journals recognized by experts receive considerable credit. To respond to research evaluation practices, departments and institutes in universities, professional organizations in specific disciplines (COPIOR 2011), and even government agencies related to research and development must determine recommended lists of professional journals (Ministry of Science and Technology 2018). However, inconsistent journal classification results are expected because classification is subjective. The advantage of expert evaluation is that classification is easy; the disadvantage is that experts cannot achieve consensus on the disciplinary attributes of certain journals.

Abrizah et al. (2015) asked authors who published in LIS journals between 2010 and 2013 to assign each LIS journal covered by the 2011 JCR rankings to at least one of three categories. Approximately 47% of the journals were classified as primarily library science (LS) journals, 28% as information science (IS) journals, and 25% as information systems journals. Although 30.1% of the journals were unclassified (a result affected by non-LIS participants unfamiliar with LIS research), a substantial proportion of non-LIS journals were identified. In addition, some journals were assigned to two categories. Thus, selecting experts with appropriate professional backgrounds is essential to expert evaluation. In addition, expert evaluation is an inefficient method for classifying journals.

Citation analysis presumes that authors cite the most relevant documents of satisfactory quality, but this presumption does not reflect the complicated behaviors of citing literature (Bornmann and Daniel 2008). Citation analysis methods, comprising direct citation, bibliographic coupling, and cocitation analyses, have been widely used to show that JCR classification schemes are improper (Leydesdorff 2006; Zhang et al. 2010a, b). Notably, cocitation analysis is frequently applied to reveal the intellectual structure of the literature in a given field.

Gómez-Núñez et al. (2011) examined the journal classification schemes for the collection of journals from Scimago Journal and Country Rank (SJR). They analyzed the disciplinary attributes of the references cited in articles and established an asymmetric journal-category citation matrix for cluster analysis. Later, Gómez-Núñez et al. (2014, 2016) suggested a new clustering algorithm to further improve the journal classification schemes for SJR. Wang and Wolfram (2015) measured the degree of similarity between cited journals based on the disciplinary distribution of journal article citations. They reported that not all LIS journals indexed by the Web of Science (WoS) had similar disciplinary attributes. Thijs et al. (2015) and Zhang et al. (2016) used bibliographic coupling similarity for cluster analysis to improve classification systems. Wang and Waltman (2016) adopted a citation-based approach to identify journals with inappropriate classifications. They found that 11% of the journals in the WoS and 20% of the journals in Scopus from 2010 to 2014 had a citation rate of less than 10% for journals in their own disciplines. A journal that infrequently cites other journals within its own discipline violates the general expectation that journals in the same discipline cite one another. Furthermore, some LIS-oriented journals were not classified as LIS journals: for three non-LIS journals, 60% of their citations were from LIS journals. This result reveals that some journals were improperly categorized. Other related methods combining the concept of citation include the global h-index (Xu et al. 2015) and combinations of citation and word analysis (Janssens et al. 2009).

Although authorship analysis has been applied to analyze the disciplinary distribution of authors in given fields, including LIS (Aharony 2012; Chang 2018a, b; Chang and Huang 2012; Qiu 1992; Walters and Wilder 2016) and non-LIS fields (Ortega and Antell 2006; Schummer 2004), few studies have adopted it to determine the disciplinary attributes of journals. Shaw (2016) divided 88 LIS journals indexed by JCR into four groups (i.e., LS, IS, scientometrics, and management information systems) and compared the differences in characteristics including the disciplinary attributes of cited literature and authors. She aimed to demonstrate a significant difference between management information systems journals and any one of the three other groups of journals; she therefore suggested that management information system journals should not be classified as LIS journals.

Authors’ professional backgrounds are strongly associated with the topics of their publications. The relationship between journals and disciplines can be explored in terms of the authors contributing to journal articles. A natural assumption is that the literature in a given field is written mainly by researchers of that field; although information flows across fields, researchers in most fields frequently cite the literature from their fields (Chen et al. 2018; Rinia et al. 2002). Each discipline has its research focuses and views (Dyment and Potter 2015). Therefore, this study explored whether the LIS author rate is a useful indicator for identifying typical LIS journals; this would expand the application of authorship analysis.

Methodology

This study used authorship analysis to identify the disciplinary attributes of authors according to their affiliated institutions when they submitted manuscripts (Chang 2018a, b; Huang et al. 2014; Leimu and Koricheva 2005). Although disciplinary assignation of author affiliations is not a new technique to classify authors, it has not been widely used because the tasks involved are labor intensive. This study assumed that typical journals in a specific discipline should primarily receive contributions from authors within that discipline. According to this assumption, typical LIS journals can be identified as journals with higher proportions of articles by LIS authors.

The LIS journal candidates were the 86 journals assigned to the subject category of “information science and library science” in the 2015 JCR listings. Among them, two journals had changed titles at that time, leading to previous and current titles being listed together; these two journals are presented under their current titles. In total, nine journals were excluded: six were not written in English, and three were not academic journals and featured no research articles (EContent, Library Journal, and Scientist). Therefore, 75 academic journals published in English were the target journals (Table 1).

Table 1 Journal list

The research articles published in these 75 journals in 2015 were analyzed in this study. Research articles from a single year were analyzed for two reasons: (1) an obvious change in the characteristics of LIS articles was not anticipated within a few years and (2) a substantial number of research articles were published in these 75 journals each year. Bibliographic records were collected at the end of 2016; because an insufficient number of articles had been published in 2016 by that time, articles published in 2015 were selected as the sample.

The basic requirement for the articles analyzed was detailed author affiliation information, namely the departments or units subordinate to institutions. The names of departments or units usually reveal disciplinary characteristics that help determine the disciplinary attributes of authors. The classification scheme was formed during the coding process. Authors without detailed affiliation information were further investigated by searching for relevant information on the Internet, as were authors affiliated with large interdisciplinary institutions whose affiliation information included no specific subordinate units. If no additional information was found, these authors were classified into broad categories such as the social sciences. Single-author articles by an unidentified author and coauthored articles with at least one unidentified author were excluded; in total, 11 articles with insufficient author affiliation information were removed. The 3224 remaining articles, by a cumulative total of 9117 authors, constituted the sample for this study.

LIS authors were defined as those affiliated with an LIS-related institution, on the basis of their author affiliation information. The term “library” is useful for identifying LIS-related institutions but may not appear in the names of LIS departments and institutes; therefore, additional measures for verifying LIS institutions were required. Directories listing LIS departments and institutes were consulted. For example, university institutes accredited by the American Library Association are listed as LIS institutions on the association’s website. Universities classified in the field of “Library and Information Management” by the Quacquarelli Symonds World University Rankings were candidate universities with LIS-related departments and institutes. Institutions providing LIS-related programs were regarded as LIS institutions. Institutions outside the United States and United Kingdom were further investigated through their official websites. After LIS authors were identified, articles by LIS authors could also be easily identified: articles with at least one LIS author were coded as LIS articles.
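The affiliation-coding step described above can be sketched as a simple matching routine. The keyword list and the directory entries below are hypothetical stand-ins for the ALA and QS directories actually consulted, not the study's coding scheme.

```python
# Hypothetical stand-ins for the referenced directories (ALA-accredited
# schools, QS "Library & Information Management" departments, etc.).
LIS_DIRECTORY = {
    "school of information management, wuhan university",
    "graduate school of library and information science, university of illinois",
}

# Hypothetical keywords; names of LIS units often contain these terms.
LIS_KEYWORDS = ("library", "information science", "information studies")

def is_lis_affiliation(affiliation: str) -> bool:
    """Code an author as LIS if the affiliation matches a directory
    entry or contains a keyword (after simple normalization)."""
    norm = affiliation.strip().lower()
    if norm in LIS_DIRECTORY:
        return True
    return any(kw in norm for kw in LIS_KEYWORDS)

def is_lis_article(affiliations: list[str]) -> bool:
    """An article counts as an LIS article if at least one (co)author
    has an LIS affiliation."""
    return any(is_lis_affiliation(a) for a in affiliations)
```

In practice the study resolved ambiguous affiliations manually via the Internet; a purely automatic matcher like this one would only approximate that coding.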

Of the 2427 LIS-related affiliations identified from authors’ affiliation information, the majority, including institutions and their units, appeared fewer than 10 times, usually once or twice. After standardization of author affiliations, Table 2 lists the 13 institutions and subordinate units that appeared 10 times or more. Because as many as eight authors from the School of Information Management at Wuhan University in China contributed to a single article, this institution ranked first with 50 occurrences. However, it accounted for only 2.0% of all LIS affiliations, revealing the diversity of LIS affiliations. Librarians at the University of Illinois at Urbana-Champaign were active; the library affiliated with that university ranked fourth.

Table 2 Main LIS affiliations

Each journal was measured by two indicators related to LIS authors:

LIS author index for journal i = ASi/(ASi + ATi), where ASi is the number of LIS authors who contributed to articles in journal i and ATi is the number of authors from fields other than LIS who contributed to articles in journal i.

LIS article index for journal i = NSi/(NSi + NTi), where NSi is the number of articles in journal i with at least one LIS (co)author and NTi is the number of articles in journal i with no LIS author.

The first indicator, the LIS author index, is the percentage of LIS authors among all authors contributing to a journal; the second indicator, the LIS article index, is the percentage of articles written by at least one LIS author.
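The two indicators can be computed directly from per-article author data. The sketch below assumes each article is represented as a list of booleans, one per (co)author, with True marking an LIS author; this data shape is an illustrative assumption, not the study's actual pipeline.

```python
def lis_author_index(articles):
    """ASi / (ASi + ATi): share of LIS authors among all authors
    contributing to a journal. Each article is a list of booleans,
    one per (co)author (True = LIS author)."""
    total_authors = sum(len(flags) for flags in articles)
    lis_authors = sum(sum(flags) for flags in articles)
    return lis_authors / total_authors if total_authors else 0.0

def lis_article_index(articles):
    """NSi / (NSi + NTi): share of articles with at least one
    LIS (co)author."""
    if not articles:
        return 0.0
    return sum(1 for flags in articles if any(flags)) / len(articles)
```

For a journal with three articles whose author flags are [True, False], [False, False], and [True, True], the author index is 3/6 = 0.5 and the article index is 2/3.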

To compare the 75 journals with respect to the two indicators, journals were divided into four groups based on the subject categories assigned by three databases (JCR, Ulrichweb, and Scopus) and on whether the journals were indexed by Library and Information Science Abstracts (LISA). JCR, Ulrichweb, and Scopus each assign every journal to at least one subject category; LISA, which covers only LIS-oriented journals, was used to determine which journals were LIS oriented. Twenty-six LS-oriented journals were identified as having only one LIS subject assigned by JCR, Scopus, and Ulrich and being covered by LISA (see journals with one asterisk in Table 3). Thirteen IS journals were identified because they were indexed by LISA and because both Ulrich and Scopus classified them as LIS and computer science journals (see journals with two asterisks in Table 3). Six interdisciplinary journals related to LIS were assigned LIS along with at least one non-LIS subject category outside of computer science by the three databases and were covered by LISA (see journals with three asterisks in Table 3). The 30 remaining journals, which were not assigned to LIS by either Ulrich or Scopus and were not covered by LISA, were classified as non-LIS journals.
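The four-group assignment can be expressed as a rule chain. The sketch below is an illustrative formalization of the stated rules under simplified category labels ("LIS", "CS"); it is not code from the study, and the final fallback covers combinations the stated rules do not address.

```python
def classify_journal(jcr, ulrich, scopus, in_lisa):
    """Assign a journal to one of the four groups from its subject
    categories in JCR, Ulrich, and Scopus and its LISA coverage.
    Category labels ("LIS", "CS", ...) are simplified placeholders."""
    cats = set(jcr) | set(ulrich) | set(scopus)
    if in_lisa and cats == {"LIS"}:
        return "LS"            # only one LIS subject across all three databases
    if in_lisa and {"LIS", "CS"} <= set(ulrich) and {"LIS", "CS"} <= set(scopus):
        return "IS"            # LIS plus computer science in both Ulrich and Scopus
    if in_lisa and "LIS" in cats and cats - {"LIS", "CS"}:
        return "Interdisciplinary"  # LIS plus a non-LIS, non-CS subject
    if not in_lisa and "LIS" not in ulrich and "LIS" not in scopus:
        return "Non-LIS"
    return "Unresolved"        # combinations the stated rules do not cover
```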

Table 3 Percentages of articles by LIS authors, and LIS authors per journal

Results

Percentage of articles in which at least one LIS author was involved

Table 3 shows the percentage of articles by at least one LIS author for each journal, in descending order by that proportion. In addition, all 75 journals were divided into four quartiles: quartile 1 featured journals in which 0–25% of the articles were written by LIS authors; for quartile 2, the range was more than 25% up to and including 50%; for quartile 3, more than 50% up to and including 75%; and quartile 4 featured journals in which more than 75% of the articles were written by LIS authors. Group A comprised 16 LS-oriented journals with 75% or more of their articles written by LIS authors. Group B comprised nine journals in which 55.0–73.3% of the articles were written by LIS authors; among these, three were IS oriented and six were LS oriented. Group C comprised 11 journals in which 29.7–47.8% of the articles were written by LIS authors; one non-LIS journal (Research Evaluation) and two interdisciplinary medicine-oriented journals (Health Information and Libraries Journal and Journal of the Medical Library Association) were included. Group D comprised 39 journals in which less than 25% of the articles were by LIS authors. In addition to one LS journal, six IS journals, and four interdisciplinary journals linked to LIS, as many as 28 journals in Group D had weak associations with LIS and qualified as non-LIS journals.
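The quartile boundaries described above can be made explicit with half-open intervals, as in this small sketch:

```python
def quartile(pct):
    """Quartile for the percentage of articles by LIS authors, per the
    boundaries stated in the text:
    Q1 = [0, 25], Q2 = (25, 50], Q3 = (50, 75], Q4 = (75, 100]."""
    if pct <= 25:
        return 1
    if pct <= 50:
        return 2
    if pct <= 75:
        return 3
    return 4
```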

Percentage of LIS authors who contributed to articles for each journal

Table 3 lists the proportion of LIS authors per journal (LIS author index) in the fifth column. This proportion also varied considerably, ranging from 0 to 100%. According to the percentage of LIS authors per journal, 11 LS journals in which 76.5–100.0% of the authors were LIS authors formed one group; two IS journals and 10 LS journals with 51.1–72.8% LIS authors formed another; five IS journals, three LS journals, and one non-LIS journal with 25.5–47.7% LIS authors formed a third; and 43 journals with less than 25% LIS authors formed the last group.

To compare the results measured by the two indicators, the journal rankings generated by each indicator were compared. The two rankings had a high Spearman correlation coefficient of 0.969, which is statistically significant (p < 0.001).
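The reported rank correlation can be reproduced with a plain-Python Spearman implementation: the Pearson correlation of the tie-averaged rank vectors. The helper below is a generic sketch, not the study's code.

```python
def ranks(xs):
    """Average ranks (1-based), with ties receiving the mean of the
    ranks they span, as Spearman's rho requires."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors.
    Returns 0.0 for constant input (zero variance)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy)
```

Feeding the two journal rankings (as rank lists over the same 75 journals) into `spearman` would yield the 0.969 coefficient reported above.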

Non-LIS journals

Table 4 shows the major fields on which the 30 non-LIS journals have the greatest academic influence, examining their disciplinary attributes from another angle. According to the subject categories of the journals citing these 30 non-LIS journals, obtained from the citation analysis reports provided by the WoS database, 18 journals were cited primarily by non-LIS journals, namely those in the fields of communication, computer science information systems (CSIS), education and educational research, geography, health care science services, management, and public environmental occupational health. For citing journals indexed by JCR that were assigned to two or more subject categories, their citations were counted once under each subject category. For example, Social Science Computer Review was cited most by journals in the field of communication, followed by LIS and Social Sciences Interdisciplinary. The numbers of citations made by journals from the main subject categories (in parentheses) indicate that academic influence differs among the main subject categories.
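The tally rule above, in which a citing journal's citations are counted once under each of its subject categories, can be sketched as follows. The input shape, (citation_count, categories) pairs, is a hypothetical simplification of the WoS citation report data.

```python
from collections import Counter

def citations_by_category(citing_journals):
    """Tally citations per subject category, counting each citing
    journal's citations once under every category it is assigned to.
    citing_journals: iterable of (citation_count, [categories]) pairs."""
    tally = Counter()
    for n_citations, categories in citing_journals:
        for cat in categories:
            tally[cat] += n_citations
    return tally
```

Note that this rule counts multi-category journals more than once, which is why (as discussed below) citations attributed to LIS can overstate the influence of genuinely LIS-oriented journals.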

Table 4 Major disciplines citing non-LIS journals

This result provides additional evidence that these 18 non-LIS journals have a weak association with LIS: they could not be considered LIS journals from the perspective of discipline self-citation. A substantial proportion of the cited journals classified as LIS journals by JCR were not LIS oriented, a finding also supported by other studies (Abrizah et al. 2015; Wang and Waltman 2016). Therefore, the number of citations from LIS journals as classified by JCR was larger than the actual number of citations from LIS-oriented journals. This explains why the number of citations of Social Science Computer Review from LIS (1168 citations) was close to the number from communication (1192 citations); in fact, a substantial proportion of those citations were not from LIS journals. This inaccurate journal classification is also reflected in the fact that 12 other non-LIS journals appeared to have their largest academic influence on LIS but could not be defined as LIS journals. For example, most citations of Government Information Quarterly (GIQ) were concentrated in GIQ itself, with Telecommunications Policy (TP) and International Journal of Information Management (IJIM) containing the next highest numbers of GIQ citations. Both GIQ and IJIM were assigned to a single subject category (information science and library science), and TP was assigned three subject categories (communication, information science and library science, and telecommunications). Because the three non-LIS journals identified by this study were classified as LIS journals by JCR, they were reported to have the largest academic influence on LIS itself. After the subject categories allocated to the 12 non-LIS journals are adjusted, LIS is no longer the discipline most affected by them.

Discussion

This study shows that in only 25 of the 75 journals assigned to the LIS subject category are more than half of the articles contributed by LIS authors. Among the four groups of journals (LS, IS, interdisciplinary, and non-LIS), substantial differences in the percentages of articles written by LIS authors were observed. Although only 10.0% of the articles in one LS journal were by LIS authors, LS journals tended to be more strongly LIS oriented than IS and interdisciplinary journals were. All 13 journals in which LIS authors wrote more than 75% of the articles were LS oriented. Among the 50 journals in which less than 50% of the articles were written by LIS authors, 21 were LIS oriented, including LS, IS, and interdisciplinary journals. If 50% or more articles by LIS authors per journal were set as the threshold for an LIS journal, some LIS journals would be excluded. Considering the subjects linked to journals, and in an attempt to identify the greatest possible number of LIS journals, the threshold for the percentage of articles by LIS authors must be lowered to 10%. The original threshold of 50% or more articles by LIS authors could be used to identify strongly LIS-oriented journals. As shown in Table 5, all 30 non-LIS journals have a weak association with LIS, with less than 50% of their articles written by LIS authors; in particular, for 27 of the 30 non-LIS journals, less than 10% of their published articles were by LIS authors. This result is consistent with that of Abrizah et al. (2015): some non-LIS journals were classified as LIS journals by JCR.

Table 5 Distribution of Journals by Subject and Percentage of Articles by LIS Authors

Technically, classification is a subjective action. Therefore, this study referred to the subject assignments of journals in three additional databases to identify possible non-LIS journals. In addition to the low percentage of articles written by LIS authors, as measured by the objective bibliometric indicator, the non-LIS journals proved to have no strong connection with LIS. Among the 75 journals explored in this study, most of the 30 non-LIS journals ranked between 1st and 32nd place by impact factor in the 2017 JCR. As Fig. 1 shows, many non-LIS journals top the list of “information science and library science” journals. For instance, MIS Quarterly, a management-oriented journal, ranked first with an impact factor of 5.430. Six IS journals were ranked in the top 50%; the top IS journal was in 11th place with an impact factor of 3.484. The only LS journal in the top 50% ranked in 26th place with an impact factor of 1.632. The existence of non-LIS journals and their high impact factors were thus confirmed to affect the ranks of typical LIS journals.

Fig. 1
figure 1

Journal ranking by group

Many disciplines in the social sciences have become more interdisciplinary (Levitt et al. 2011; Sivertsen 2016), including LIS (Levitt et al. 2011). The typical interdisciplinarity of LIS originates from the integration of LS and IS (Åström 2010). The differences in disciplinary characteristics between LS and IS arise from the disciplinary backgrounds of researchers in those fields. Therefore, differences in the proportions of articles by LIS authors in LS journals versus IS journals were expected. In addition, most LIS journals indexed by JCR (56%, 42 of 75) were assigned to two or more subject categories outside of LIS. Thirty-three journals (44%) were not assigned to the LIS category by Ulrichweb, and 25 journals (33.3%) were not considered LIS journals by Scopus. Notably, a substantial number of journals were interdisciplinary and not classified as LIS journals by other databases, which explains why more than half of the articles in them were not written by LIS authors and more than half of the authors were not from LIS.

LIS is a typical interdisciplinary field, and its interdisciplinarity is increasing (Chang and Huang 2012; Levitt et al. 2011). This has resulted in more non-LIS researchers contributing to LIS research and collaborating with LIS researchers. Although an increase in non-LIS authors is expected, LIS authors should remain dominant to fulfill the expectation that researchers contribute most within their own disciplines. Related studies have reported that LIS literature is cited most by authors of LIS articles (Buttlar 1999; Chen et al. 2018). Therefore, differences in the disciplinary attributes of authors make author-related indicators useful for differentiating disciplines. In addition, although LIS research is not limited to researchers affiliated with LIS institutions, Prebor (2010) reported that researchers affiliated with LIS departments focused on information users, whereas researchers in non-LIS departments, including management, computer science, education, and communication, focused on information systems, information technology, the information industry, and information management. This result reveals that each discipline has unique research focuses even when different disciplines have similar research interests.

Conclusion

The findings of this study have three implications. First, a proportion of 50% or more articles written by LIS authors is not an appropriate threshold for identifying an LIS journal, because some LIS journals do not meet it. To set a proper threshold, changes in the percentage of articles written by LIS authors must be monitored for LIS journals currently below the 50% level. Second, ascertaining the percentage of LIS authors per journal is more time consuming than identifying articles by LIS authors. Furthermore, no significant difference was observed between the journal rankings generated by the two LIS author-related indicators. Therefore, the indicator measuring the percentage of articles by LIS authors was preferred. Third, the disciplinary characteristics of journals assigned to the subject category of “information science and library science” should be examined.

The limitations of determining each author’s disciplinary attributes from affiliation information should be considered. Some faculty members affiliated with LIS departments and institutes do not have LIS backgrounds; faculty members who specialize in computer science and education constitute a substantial proportion of researchers representing LIS institutions (Lopatovska and Ransom 2016; Weech and Pluzhenskaia 2005). That is, the proportion of LIS authors may be overestimated. In addition, LIS journals published in non-English languages were excluded from this study.

JCR journal rankings affect research assessment and have transcended their original purpose. To properly list the representative journals in a given field, this study suggests that the subject characteristics of journals included in JCR be examined from various perspectives, such as the disciplinary attributes of authors. Whether journal rankings outside of LIS face similar problems is worth investigating, and additional empirical studies in various disciplines could provide more evidence to persuade administrators in charge of researcher assessment to acknowledge the limitations of JCR journal rankings. Even if the journal classification scheme adopted by JCR does not change, there is evidence to recommend changing how the JCR journal rankings are used. Journals not relevant to LIS should be excluded from the LIS journal list to improve the rankings of typical LIS journals and encourage LIS authors to publish in LIS journals.